
Friday 29 April 2011

HSMR

Hospital Standardised Mortality Ratio

M HEMADRI

Preface
This is written on the basis of my understanding of the HSMR after attending a mini-course at the International Forum on Quality and Safety in Healthcare, Amsterdam 2011, taught by Sir Brian Jarman (the original designer of the HSMR), Paul Aylin of the Imperial College Dr Foster Unit and Andre van der Veen (of De Praktijk Index, the Dutch collaborator of Dr Foster). Their methodology and descriptions are publicly available and links are provided at the end.

Introduction
Death is a definite, unarguable outcome, and that includes deaths in hospitals. Though hospitals exist essentially to provide care and save lives, some patients will die in hospital despite the best possible care provided by the hospital and its staff. Using risk assessment models it is possible to calculate the number of patients who could be expected to die in hospital.
The number of patients who actually die in a hospital can obviously be measured accurately. The number of patients who are expected to die in the hospital can be calculated by risk assessment and risk adjustment models. These two values are converted into a ratio, and that ratio is the Hospital Standardised Mortality Ratio.
In this write-up, the basis of the calculation is explained, some common questions about the way it works are addressed, and the implications of the ratio are explored.

Founder/creator of HSMR
Prof Brian Jarman was an exploration geophysicist who worked at Shell and later became a doctor. He is a qualified physician, general practitioner and public health doctor. He developed the HSMR in 1999 at Imperial College. He was a Senior Fellow at the IHI (where he looked into American HSMRs) and a panel member of the Bristol Inquiry. He is a former president of the BMA. He is, of course, the author of innumerable papers and book chapters and a member of various committees and boards.

Calculating the HSMR
HSMR = (observed mortality / expected mortality) × 100
Observed mortality is the actual number of deaths that happen in the hospital. Expected mortality is calculated against a reference population; the standardisation is the risk adjustment applied relative to that reference population.
In England, the HSMR is based on HES (Hospital Episode Statistics) data, with 14 million records and 300 fields of information. The risk adjustments are made for numerous factors including, but not limited to, age, sex, elective status, socio-economic status, diagnostic subgroup, procedure subgroup, some co-morbidities, palliative care, source of admission, ethnicity, month, number of prior emergency admissions and so on.
Clinical risk adjustment takes into account specific biometric data; some such models are EuroSCORE, ASA, APACHE, POSSUM and so on. The HSMR risk adjustment model, by contrast, takes into account sociological and operational data. The HSMR uses the 56 diagnostic groups which contribute 80% of in-hospital deaths in England.
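As a rough sketch of the arithmetic (not Dr Foster's actual implementation, and with entirely made-up numbers), the calculation can be illustrated in Python. The assumption here is the usual indirect-standardisation approach: each admission carries a modelled probability of death derived from the reference population, expected mortality is the sum of those probabilities, and the HSMR is the observed deaths divided by that sum, multiplied by 100.

```python
# Illustrative sketch only, with hypothetical admissions; not the Dr Foster model.
# Each admission is assumed to carry a risk-adjusted probability of death taken
# from models fitted on the national reference population.

def hsmr(admissions):
    """HSMR = 100 * observed deaths / expected deaths."""
    observed = sum(1 for a in admissions if a["died"])
    expected = sum(a["risk_of_death"] for a in admissions)  # sum of predicted probabilities
    return 100 * observed / expected

example = [
    {"died": True,  "risk_of_death": 0.30},
    {"died": False, "risk_of_death": 0.05},
    {"died": False, "risk_of_death": 0.10},
]
print(round(hsmr(example)))  # 1 observed vs 0.45 expected -> about 222
```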

THE DEBATES AND ARGUMENTS

PALLIATIVE CARE CODING IS INACCURATE AND DISTORTS HSMR
Many of the arguments about the HSMR come down to it not including some users' preferred or favourite variables. For instance, some hospitals feel that having a palliative care/hospice ward within their premises could make their mortality rates high; other hospitals feel that there are no adequate hospice facilities in their area and hence more patients come into hospital to die, distorting their mortality rates upwards.
Research shows, firstly, that the coding of palliative care is unreliable (more about this in an example below) and, secondly, that the HSMR adjusted and unadjusted for palliative care shows good correlation (i.e. no real difference).

HSMR IS BASED ON HES DATA AND NOT ON SPECIFIC CLINICAL RISK DATA
Another argument is that HSMR risk adjustments are based on HES data, which does not include specific clinical data on co-morbidity and hence does not account for the clinical complexity of the patients who died. Interestingly, the HSMR adjusted and unadjusted for co-morbidity still shows good correlation (i.e. no difference).
In the instance of the Vascular Society data, the clinical database showed 8,462 cases whereas the HES data showed 32,242 cases.
In the case of the ACPGBI (colorectal) database, it showed 7,635 cases when the HES data showed 16,346 cases. The ACPGBI/NBOCAP audit was voluntary (it has since been thought to be biased due to under-reporting, according to the latest article on bowel cancer outcomes in Gut, 11 April 2011).
It seems that the HES data is more complete.
In the ACPGBI database, 39% of patients had missing data for risk factors. It seems that the HES data is more accurate for its (HSMR) parameters. (In the same article in Gut published on 11 April 2011, where they analyse cancer survival/mortality, they admit that Dukes' classification was missing in 15% of cases; this shows that even within the parameters/data they set themselves, clinical databases seem to have incomplete data, whereas postcode information was incomplete in only 0.25% of cases.)
Research shows that the HES/Dr Foster model is as good as or better than clinical models/databases.

COST
The cost of a clinical database is up to £60 per patient, whereas the HES general database costs about £5 per patient.

THE ADMISSION DIAGNOSIS IS A POOR INDICATOR WHEN CALCULATING HSMR
Another common feeling is that admission-diagnosis-based coding could distort the HSMR. Interestingly, in the UK the HES data apparently has no admission diagnosis, and hence it is not taken into account in calculating the HSMR.

IN SPECIALTIES WITH SMALL VOLUMES OF DEATHS THE HSMR IS NOT VERY USEFUL
Broadly speaking, an increase or decrease in the HSMR in specialties with a small number of deaths may not be a very useful way of understanding the issues; hospitals would be better off looking at the outcomes of specific process measures (and compliance with them) within those deaths to obtain a better understanding of whether appropriate care was offered.
But for specialties with larger volumes, death as an outcome (increased or decreased deaths) is valid.
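A hypothetical arithmetic illustration of why small volumes are hard to interpret: the same one-death difference moves the ratio far more when the expected count is small.

```python
# Hypothetical numbers: one extra or one fewer death swings a small specialty's
# ratio dramatically, but barely moves a large one.

def hsmr(observed, expected):
    return 100 * observed / expected

# Small specialty: around 5 expected deaths
print(hsmr(4, 5), hsmr(5, 5), hsmr(6, 5))             # 80.0 100.0 120.0
# Large specialty: around 500 expected deaths
print(hsmr(499, 500), hsmr(500, 500), hsmr(501, 500)) # 99.8 100.0 100.2
```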

CODING IS POOR
That is certainly possible. However, a change of coding could actually result in an increased HSMR (due to the change in the denominators of the new codes), as the sketch below illustrates.
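A hypothetical numeric sketch of that effect: if cases are recoded into diagnosis groups with a lower baseline risk, the expected deaths (the denominator) shrink while the observed deaths stay the same, so the ratio rises.

```python
# Hypothetical figures only: recoding into lower-risk diagnosis groups lowers
# the expected deaths, and with observed deaths unchanged the HSMR goes up.
observed = 20
print(100 * observed / 25)  # coded against higher-risk groups, expected = 25 -> 80.0
print(100 * observed / 16)  # recoded against lower-risk groups, expected = 16 -> 125.0
```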

HSMR AND NON-NHS BEDS
One of the things we hear is that mortality in private hospitals, and in private beds in NHS hospitals, is not being considered seriously.
Only 2% of bed usage in the UK is non-NHS.
So obviously there is a substantial case for focussing on the NHS.

SOME INSIGHTS FROM SIR BRIAN JARMAN'S TALK
MORTALITY ALERTS & MID-STAFFS
Mid Staffs was sent mortality alerts, just as Dr Foster would send to any other hospital.
Mid Staffs internally looked into 200 deaths and explained them as coding errors (they may well have been) but subsequently took no notice of overall deaths or the HSMR.
At around the same time, Dr Foster looked into the coding and found it was average.
Mid Staffs was doing regular clinical audits.
Mid Staffs' palliative care coding ('not curable' categorisation) went up from 2% to 60%.

CAN HOSPITALS REPORTED AS GOOD BY REGULATORS HAVE PROBLEMS?
Of all the assessment and inspection reports, 96% depend on self-reported quality measures and only 4% on external/independent assessment and inspection.
Two-thirds of self-reported quality measures are incorrect.

WHAT CAN WE DO TO GET A START ON REDUCING MORTALITY
ADVERSE EVENT REPORTING
Hospitals with high adverse event reporting have low mortality. When hospitals start looking at mortality, they begin by encouraging increased adverse event reporting, which then goes up by four times.

CARE BUNDLE APPROACH
We will all recall the hospital where trial patients developed severe organ failure. That was the result of a private company hiring the hospital's facilities for their drug trial. The NHS hospital itself at that time was doing just about okay. One of the senior nurses there took the care bundle approach and moved it towards being the hospital with the lowest mortality in England.

FINALLY AN ASIDE
Looking into mortality can be a threat to longevity.
Sir Brian says that there were assassination threats against the Bristol Inquiry panel, of which he was a member. Apparently there were people very upset that the panel refused to look into morbidity and stuck only to investigating mortality.

PERSONAL VIEWS
All the above is 'as heard' from the mini-course that I attended. My personal observations/views follow below from this point and hence cannot be attributed to the speakers of the course.
The HSMR is a valid way of looking at mortality and is an excellent indicator of the quality of healthcare provided by any healthcare organisation. Ignoring or explaining away the HSMR and its related alerts carries a huge underlying risk which may come back and bite very severely.

PROCESS MEASURES AND OUTCOME MEASURES
Michael Porter says measuring process is servitude and measuring outcome is liberation.
We should have a clear understanding of process measures and outcome measures. The new white paper's core theme is better outcomes.
If we are achieving 4 hours, 31/62, 18 weeks, NPSA alert implementation, CQC points, Monitor requirements and so on, good for us; but these are process measures.
Process measures have meaning only if they lead to improved outcome measures such as reduced mortality and reduced complications.

WHERE TO FOCUS
Hospitals that are at the higher end of the mortality ratio need to realise and accept that they do have the resources to deal with it. Having self-confidence is the first and best place to start.
That has to be followed by a very deep reflection on the activity, its explanations and results in the context of mortality.
Hospitals need to accept that the HSMR is mostly and broadly right and that the alerts are relevant. When internally validating HSMR alerts, it cannot be enough to point to coding issues/data validity; internal validation of HSMR alerts can only be accepted if it includes a plan to reduce the subspecialty mortality (or risk, as the case may be).
What should not be said is 'we are already doing this' or 'we are doing something even better’ when the mortality is not showing a downward trend.
If the mortality is high but the regulator's ratings are good, the questions to ask are about the accuracy/correctness of the internal reporting mechanisms, however uncomfortable those questions are. Similarly, if care bundles are not working, the assumption should be that there is perhaps nothing wrong with the bundles or the patients; perhaps it is the way they are being implemented. If clinical audits are showing good results but the HSMR is increasing or procedure risk alerts are increasing, that should trigger a reflection on whether the hospital is actually looking in the right direction.

A month-on-month continuous reduction in mortality (HSMR) should be the only acceptable proof. It looks like arguing with the data and explaining it away is no longer an option. If activity does not match the outcome data, there may not be much point in attacking the data.

BY THE WAY WHAT ABOUT OTHER PROVIDERS
Dr Foster is not the only provider of analytical and comparative information; there are CHKS and others. It may or may not matter who the provider is; the point is to use the information in a way that makes a meaningful difference to the patients.


© HEMADRI
Follow me on twitter @HemadriTweets


Check out blog posts on 
Why High Mortality hospitals cannot afford to pay staff well (http://successinhealthcare.blogspot.co.uk/2012/06/any-links-between-bank-holiday-pay-and.html)
What your hospital mortality was in 1998 and if it is any different now?  http://successinhealthcare.blogspot.co.uk/2012/01/mortality-1998-now-what-can-we-learn.html

Links:
Mid Staffs public enquiry: http://www.midstaffspublicinquiry.com/

Sunday 10 April 2011

Clinical Leadership 'Development' - have we got it right?

Development has two components; in order, the first is technical skills (hard) and the second is personal development (so-called soft).

Technical skills, in my view, have two steps: core professional skills (how to do the best) and core generic skills (how to do the best for everyone, every time, every day). Many of us are good at our core professional technical skills (e.g. surgery, finance, radiology, facilities, HR, cardiology, etc.), but it is very well known that in healthcare many of us are unaware of core generic technical skills (evidence, shared baselines, operational data analysis and data tracking, data-based decision making).

The NHS is, and has been for a while, focused on 'leadership', 'social movements', 'change' and other similar things.

My problem with this is profound. I believe that core generic technical development should precede personal development. Personal development methodology is very powerful and is designed to promote self-awareness and self-belief. The risk is that when personal development comes before technical development, people become so convinced about themselves and what they are doing that they feel technical development is a non-essential, trivial distraction.

What is also interesting is that technical skills are easier to teach/learn, assess and practise, though most people would think they are difficult, while personal development is far more difficult to achieve and demonstrate, though most people would think that they have 'got it' after a few sessions.

I have huge concerns that at a local level the deaneries and SHAs do not do this, and that at a national level personal development is provided at a fantastic level to NHS people who mostly do not have the technical development.

The fundamental message here is: one must know what to do and how to do it before one begins to believe one can do it.