Sunday, 9 December 2012

Blame it on human error

Blame it on 'Human Error'; after all, everybody knows that to Err is Human.

[Pictures: two infusion pumps, each with a different style of numerical keypad]

Can you see what is happening in the pictures above? Of course you can. They are two infusion pumps with two different types of numerical keypads.

Can you now see how easy it would be for a tired nurse, or an even more tired doctor, really busy at 3 am, to confuse the two keypads and make a mistake?
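
For readers who cannot see the pictures, the two common conventions differ in where the rows of digits sit; which pump uses which layout is assumed here for illustration, the point being that both conventions are in everyday circulation. Telephone-style keypads place 1-2-3 on the top row, while calculator- and computer-keyboard-style keypads place 7-8-9 on top:

    Telephone-style      Calculator-style
       1  2  3              7  8  9
       4  5  6              4  5  6
       7  8  9              1  2  3
          0                    0

The same digit sits in a different physical position on each device; muscle memory from one layout produces wrong keystrokes on the other.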

Would that be counted as human error? Probably yes. But is it human error? Certainly not. This situation is without doubt a systems error at two levels: firstly, the manufacturers not standardising numerical keypads; secondly, the buyer/healthcare facility buying and using pumps with two different keypads on their premises. By doing so we have designed our system to fail; we have designed for the humans in our system to fail. Avoiding that is what human factors is all about.

Let us assume that a clinician made an error in a facility that has both these styles of numerical keypads in use in, say, adjacent beds/wards/floors. The investigation would only show that the clinician made a human error in pressing the wrong numbers on that particular keypad. That would be a fact. Would it be the whole truth? No. A standard investigation would not show that the error was triggered by the system, by having those two types of keypads in adjacent areas. The investigation would probably end by stating that individual clinicians are responsible for their actions. Then the clinician would be sanctioned; sometimes that is insultingly yet euphemistically called providing enhanced support and training for the clinician concerned.

The label 'human error' becomes a convenient parking lot for system errors, which mostly go unrecognised due to poor management and poor investigators who have little clue about human factors. Human error is easy: it is tangible, you have someone clearly responsible, someone who has failed in their responsibility. Everyone understands human error. System error recognition is very complex and often fuzzy, and once a system error is recognised there is no one to 'blame', no one to be held responsible. After all that, resolving system errors takes patience, time, energy and technical skills which many merely pretend to have. It is frightening to imagine how many clinicians might have been afflicted with the 'human error' label when the actual reason was the system.

Now, if you are even remotely responsible for patient safety, you will rush out into your healthcare facility to make sure that you have only one type of numerical keypad for these pumps. That is at the narrow level. At the intermediate level, ask yourself how many other non-standard items at your place of work confuse people and compel them into making an error. Go looking for them and eliminate them.

At the bigger-picture level, we are beginning to believe that 'human error' is often a cop-out clause for managers who do not fully understand systems or processes. To err is indeed human, but to design for failure and then blame it on 'human error' is inhuman.

©M HEMADRI 
Follow me on twitter @HemadriTweets

PS: Note that the make of the pumps in the pictures above is incidental; they are only an illustration to make a wider point. Those pumps are good and have overall served patients well, so please do not get hung up and pious about a particular product or company. I have also found that my Windows calculator and my Samsung phone calculator have different numerical keypads, both of which differ from my computer keyboard's numerical keypad. How confusing is that? Is it any wonder, then, if some poor bloke at the office goofs up?

Friday, 7 December 2012

Examinations for doctors - time to think differently

I wrote the article below in 2006. I was not blogging at that time, so it just lived on my computer. When you read it, please be in a 2006 frame of mind: Article 14, the new rules for the surgical exit exam, the impending new contracts for doctors, especially for SAS doctors, and so on.

Once you have read it, cross reference it to the recent GP exam results.

We need an end to the monopoly of examination providers for post-graduate doctors. We need a plurality of avenues to demonstrate knowledge. Why should every university in the UK not have a knowledge test for specialist doctors?

The link to the intercollegiate website cited in the article no longer works; you may want to search their website for the current link or otherwise check with them.

-------------------------------------------------------

THE EXIT EXAMINATIONS: IS IT TIME FOR DIFFERENT THINKING?

The surgical royal colleges have decided to allow any candidate who is able to muster the references of two consultant surgeons to take the intercollegiate exit examination. The colleges would see this as a response to the rule changes brought about by the PMETB to allow a fair opportunity to anyone who wants to demonstrate their proficiency in surgical knowledge. The General and Specialist Medical Practice Order that created the PMETB was passed in April 2003 and there have been wide consultations before and since. It has taken three years to arrange a new format, which is likely to change again very soon in view of the MMC reforms.

While it is clear that the 'standard of knowledge' should be the same for surgeons entering the specialist register, one has to question whether the actual examination should also be the same. Whether different formats for differing groups/sub-specialties were considered is not known. Whether any surgeons who are not in training were consulted before these changes is not known. Whether any of the 'mediated entry' candidates who have taken these examinations in the past were consulted is not known. A close look seems to reveal the need for some radical, new and different thinking about who should take which examinations and who should offer them.


HISTORY OF WHO PASSED AND WHO FAILED

The point about consulting past candidates is rather important. The evidence for its importance lies in the figures available on the Intercollegiate Speciality Boards website (http://www.intercollegiate.org.uk/html/results.html): between 1998 and 2001 the overall pass rates in the intercollegiate surgical exit examinations were 70% for mediated entry candidates, 76% for type 2 trainees and 96% for type 1 trainees. Let us keep aside the issue of mediated entry candidates for just a moment and look at the glaring difference in pass rate between type 1 and type 2 trainees. Most type 2 trainees worked to similar rotas in similar hospitals with similar consultants and mostly for a similar number of years. Some differences do exist in their pathways, such as type 1 trainees spending more time in teaching hospitals and having some research experience; while many type 2 trainees also have such exposure, not all of them do. Opportunities for courses, learning and so on are all similar. However, when it comes to examinations, type 2 trainees did not do well. This raises many obvious questions, the foremost of which is why trainees with such similar pathways did not fare similarly in the examinations. If type 2 surgical trainees had training equivalent to that of type 1 trainees, as admission to the examination implied until recently, why did they not do well? If we accept that the examination was a true reflection of their training and knowledge, then was the process that selected them wrong? If we accept that their pathways were not as similar as described here, then why were they allowed into an examination framed as an 'end of training' 'exit' examination? Knowing that type 2 candidates fared badly, what changes were made to address that situation? If they were genuinely poor, why were they selected into specialist registrar posts; and if they continued to be poor, why were they not stopped from progressing through the training which enabled them to take the examination?

When so many questions exist over type 2 registrars, there are even more for the mediated entry candidates of the past, and possibly more still for the non-training post holding candidates of the future.


THE DEBATE IS INTERNATIONAL AND ABOUT THE FUTURE

The debate is not simply about the present UK-based doctors in SAS, FTTA, LAT and LAS posts who intend to take these examinations under the new regulations. The future also demands some answers. Some of the colleges have taken it upon themselves to hold these examinations in many parts of the world. The demand for such examinations exists. Would the colleges allow non-training doctors from abroad to sit the intercollegiate exit examinations? This opens an even wider debate on whether surgeons in non-training posts from anywhere in the world would be allowed entry into the specialist register partly on the basis of a test of knowledge that UK Royal Colleges offered. That is not to say that such surgeons should not be allowed, but to wonder whether the GMC, PMETB and royal colleges have the resources to probe the credentials of such candidates so thoroughly that the British public can be assured of quality in real-world practice and not merely success in a paperwork exercise. Perhaps the easy way out is to 'rule' on application that the applicant is in need of further training, which in reality will be difficult and expensive for overseas applicants to challenge.

INTENTION VS REALITY

The law, in the form of the PMETB rules, allows for various types of demonstration of knowledge, specifically to enable a variety of suitable candidates to enter the specialist register. The surgical colleges, instead of taking the cue and innovating, have changed the entry criteria and the format to allow non-training surgeons to sit the same examination. Instead of exploring and enabling the diversity that the law demanded, the situation is now quite simply akin to tying the hands of a challenger and then putting him into the boxing ring. The example of an SAS surgeon doing excellent breast work for years, taking the exit examination as an opportunity and achieving a predictable failure, can be foreseen very clearly. To state that it is the responsibility of the candidate to ready themselves in all aspects before appearing for the examination sounds very reasonable but is in reality very cynical. To then retrain the candidate following a PMETB refusal or an examination failure, only for them, on successful completion of 'training' and/or 'examination', to be employed in the same job with possibly a higher title, seems bad logic and an extreme waste of resources.

There is also a general perception that the current format of the new examination could be interpreted as being of a different standard from the recently expired one. There is a suspicion that the goalposts are being set differently in preparation for the MMC changes.


MONOPOLY

In the UK there is only one form of test of knowledge, and there is only one body that provides it. This situation may be appreciated as offering uniformity. On the other hand, it could also be considered a monopoly of provision, and the general view of monopolistic provision is that it is unhealthy. The intercollegiate format could also be perceived as cartelisation of sorts. The reality that only a very small number of people are involved in taking these examinations may prevent such a thought stream from developing into meaningful progress.

Surely the royal colleges, with their huge experience in designing examinations, could, though it would be a challenge, devise a range of 'fit for purpose' examinations of equivalent standard for entry to the specialist register. The law allows this, though it does not require the colleges to do so. Coming from a different angle, would it not be logical to wonder why a breast specialist has not taken a specific exit examination in breast surgery, and so on? The urologist, after all, takes one in urology.

More and more assessments are being delegated and devolved to local deaneries, which then sub-delegate to individual trusts and consultants in the form of workplace-based assessments. As a logical futuristic extension, some consideration may be given to decentralising the test of knowledge so that it can be provided by a range of alternative providers. This may be not only a great market opportunity but also an opportunity for universities and private educational systems to demonstrate leadership and vision by devising tailored, high-standard tests of knowledge, as they have already done in the CME/CPD arena.


CONCLUSION

No one disputes the need for good knowledge before entering the specialist register; it is without doubt a must. The entire debate is about the demonstration of that knowledge. The intercollegiate surgical exit examination is one way of demonstrating it, but it is probably suited only to the current type 1 trainees. That examination's suitability for others, including type 2 trainees and their derivatives, the future MMC-defined ST post holders, SAS surgeons, MMC-generated non-training post holding surgeons and overseas non-training post holding surgeons, is unclear, though many will take it for lack of alternatives. There may also be reluctance on the part of the 'higher' authorities to accept alternatives.

It is time to realise that 'similar' and 'equivalent' do not have to mean doing the same things or taking the same examinations. It is possibly time to wonder about the paucity of alternative ways to demonstrate knowledge. With the large increase in the number of medical students and the possible expansion of 'consultant' numbers, it is time for the good and great of the medical profession (though the surgical example is illustrated here) to lead in thinking, policy and practice, rather than to react and respond as they have repeatedly done with such glowing examples as Calman, EWTD, PMETB and MMC, among many others, with many issues arising from them still unresolved.

------------------------------------------------------------


©M HEMADRI 
Follow me on twitter @HemadriTweets

Saturday, 1 December 2012

Revalidation - GMC must make it objective as soon as possible


The shortest overview of revalidation


The GMC is commencing the process of revalidation for doctors in December 2012. Revalidation demands that we have evidence of:

1) Continuing professional development
2) Quality improvement activity
3) Significant events
4) Feedback from colleagues
5) Feedback from patients
6) Review of complaints and compliments

These six will populate the annual appraisal, which, apart from its main domains, includes the personal development plan, probity and health.

Based on the above, the responsible officer will make a 'judgement' on whether the doctor can be recommended for revalidation. The GMC will then make a decision on whether the doctor has been successfully revalidated.

There is plenty of guidance on the GMC website: http://www.gmc-uk.org/doctors/revalidation.asp

Concerns about the background for revalidation

While the issue of periodic quality assurance of licensed doctors has been discussed for a long time, the common view is that the current revalidation effort commenced after the Bristol inquiry and gathered momentum after the Shipman inquiry. Bristol was an outlier: there was no trend of many hospitals or many cardiac surgery units having unacceptably bad outcomes. Shipman was an outlier: there was no trend of many doctors behaving, or beginning to behave, in a Shipman-like manner. Outliers need to be analysed properly so that they can be stabilised to a performance level compatible with the other performers in the general system. Quality principles would suggest that outliers should not trigger a process change for the whole system. Process change for a system should be triggered by an unacceptable trend (there are other reasons to change a process as well, but an outlier is generally not one of them). To make a process change on the basis of outliers is thought to result in unnecessary expense and wasted effort.
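
To make the outlier-versus-trend distinction concrete, here is a minimal sketch (in Python) of the control-chart logic that quality principles rely on. The baseline, the 3-sigma limits, the eight-point run rule and all the numbers are textbook conventions and made-up figures used purely for illustration; they do not represent any GMC method or real performance data.

    # Illustrative only: telling a special-cause outlier apart from a trend,
    # in the spirit of a Shewhart control chart. All figures are made up.

    def special_causes(xs, mu, sigma):
        """Points beyond the 3-sigma control limits: outliers (special causes)."""
        return [x for x in xs if abs(x - mu) > 3 * sigma]

    def sustained_shift(xs, mu, run_length=8):
        """True if run_length consecutive points fall on one side of the baseline."""
        run, side = 0, 0
        for x in xs:
            current = 1 if x > mu else (-1 if x < mu else 0)
            run = run + 1 if (current != 0 and current == side) else (1 if current != 0 else 0)
            side = current
            if run >= run_length:
                return True
        return False

    MU, SIGMA = 10.0, 1.0  # hypothetical stable baseline for some outcome measure

    one_bad_apple = [10, 11, 9, 10, 12, 10, 9, 11, 10, 35]           # single extreme point
    system_drift = [10, 11, 9, 12, 12, 13, 12, 13, 12, 13, 12, 13]   # sustained shift

    print(special_causes(one_bad_apple, MU, SIGMA), sustained_shift(one_bad_apple, MU))
    # -> [35] False : an outlier, no trend; investigate the outlier, not the whole system
    print(special_causes(system_drift, MU, SIGMA), sustained_shift(system_drift, MU))
    # -> [] True    : no outlier, but a trend; the signal that justifies a process change

On this logic Bristol and Shipman sit in the first category, yet revalidation is a process change applied to the whole system.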

This does not mean that we cannot learn from outliers; undoubtedly there are extraordinarily important lessons to be learned from Bristol and Shipman.

Linking the background to current revalidation method

Bristol is about performance and Shipman is about behaviour. We can safely assume that this is what the GMC seeks to assure. Quality assurance needs to be demonstrated in an objectively measurable manner.

Revalidation criteria - Not Objective

The six areas of evidence that the GMC asks for seem to be mostly subjective.
Continuing professional development (CPD) is generally accepted as a reflection of time spent on courses, conferences and other learning opportunities. It is certainly not a measure of the knowledge or skills gained or updated, though that might happen. Some professional bodies have not defined the time that needs to be spent on CPD. Hence, while CPD is often measured and entered as a number, it is a measure of time spent rather than a number showing the knowledge or skills gained. It could therefore be argued that CPD is either subjective or not fit for purpose for revalidation, if the intention is to validate or assure knowledge and/or skills.

Quality improvement activity: within this area a range of activity is included. Activity is neither outcome nor achievement; therefore activity is again time spent rather than gains (or losses) measured. An important point about quality improvement activity is that people fail more often than they succeed; that is the nature of quality improvement. Doctors, hospitals and the GMC should be comfortable with that. This could potentially be an objective criterion, but currently it should probably be considered unsure.

Learning from significant events and the review of complaints and compliments are about self-reflection and reflective writing, which is obviously subjective. Feedback from colleagues and feedback from patients, though gathered through validated tools by external or independent service providers, is essentially the conversion of subjectivity into a scale in order to be able to measure it.

Are subjective criteria relevant?

Absolutely yes, but only when looked at alongside objective criteria. Any form of quality assurance process must include subjectivity. The current criteria for revalidation seem mostly subjective; hence the concerns.

Why are objective criteria important?

We are talking about doctors who are already very highly qualified and doing an extremely complex job under phenomenally varying conditions. We are talking about professionals on whom we have already spent somewhere between half a million and a million pounds before they are employed in their role. Revalidation is about making a decision about their careers, which could potentially be halted. To make such major decisions on mostly subjective criteria would not make sense. Further, planet-loads of data have already been gathered and analysed, and hence objective criteria are possibly already available if we wanted to use them.

Next is the issue of who may potentially be adversely affected to a higher degree than most. When subjective criteria are used, there is a risk that the weak, the easy targets and the usual suspects may be affected. This has been seen in a few examination situations where certain sections of candidates pass the objective knowledge and skill components but fail the subjective elements of vivas, communication, simulation and the like. There is a fear that IMG and BME (and SAS) doctors could be affected by the level of subjectivity involved in revalidation. There are good reasons behind these fears, relating to the culture and history of healthcare institutions and to the culture and mind-set of BME/IMG doctors, which are not explored here.

Increasing objectivity

Testing knowledge has traditionally been done by examinations. Americans revalidate their doctors on the basis of an objective examination of knowledge; while reducing bias, this increases the validity of the assurance of knowledge. Skills assessment could quite relevantly be based on performance data. Speaking from a hospital doctor's perspective, this should be quite easy to do with some minimal tweaking of how data is gathered. Operational performance data is either the best indicator of a doctor's skill or as good as any other.

Increasing objectivity still would not resolve the underlying issue of a process change for all doctors being based on outliers and not trends. The GMC also needs to resolve other questions. Is revalidation a quality assurance process or a quality improvement process? The theory and the tools for assurance are different from those for improvement.

Revalidation is important. As it stands, the revalidation process is likely heavily subjective. Given its importance to the healthcare of the nation, it would be advisable to move quickly to mainly objective criteria. We are where we are; let us make it better and fit for purpose.

©M HEMADRI 
Follow me on twitter @HemadriTweets