#FOAMPubMed 1: Lemons and Limes, the first clinical trial and how to make a research question


Before we conduct any research we first need to construct a research question. This can be a difficult step. Our question needs to be precise and easy to understand. To do this we can use the ‘PICO’ criteria:

Population

We need a population of interest. These will be subjects who share particular demographics, and the population needs to be clearly documented.

Intervention

The intervention is something you’re going to do to your population. This could be a treatment, an education programme or an exposure such as asbestos. The effect of the intervention is what you’re interested in.

Control/Comparison

If we’re going to study an intervention we need something to compare it against. We can use people without the exposure (a control group), or compare the treatment to another treatment or to a placebo.

Outcome

The outcome is essentially what we are going to measure in our study. This could be mortality, an observation such as blood pressure or a statistic such as length of stay in hospital. Whatever it is, we need to be very clear that this is our main outcome, otherwise known as our primary outcome. The outcome determines our sample size, so it has to be explicit.
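To see why the primary outcome drives the sample size, here is a rough back-of-the-envelope sketch (my own illustration, not part of the PICO framework itself). For a trial comparing the proportion of patients who recover on two treatments, a standard sample size formula is:

\[
n \text{ per group} \;\approx\; \frac{(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\big[p_1(1-p_1) + p_2(1-p_2)\big]}{(p_1 - p_2)^{2}}
\]

where p1 and p2 are the recovery proportions we expect in each group, α is the significance level and 1−β the power. If, say, we expected recovery to improve from 20% to 40%, then with a two-sided α of 0.05 (z ≈ 1.96) and 80% power (z ≈ 0.84) we would need roughly (1.96 + 0.84)² × (0.16 + 0.24) / 0.2² ≈ 79 patients per group. Choose a different primary outcome and the whole calculation, and with it the size and cost of the trial, changes.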

PICO therefore allows us to form a research question.

To demonstrate this let’s look at the first ever clinical trial and see how we use PICO to write a research question.

It’s the 18th century. An age of empires, war and exploration. Britain, an island nation in competition with its neighbours for hegemony, relies heavily on her navy as the basis of her expansion and conquest. This is the time of Rule Britannia. Yet Britain, like all seagoing nations, is riddled with one scourge amongst her sailors: scurvy.

Scurvy is a disease caused by a lack of Vitamin C. Vitamin C, or ascorbic acid, is essential in the body for a variety of functions, including making collagen, a protein which forms the building blocks of connective tissue, and wound healing. A lack of Vitamin C therefore causes a breakdown of connective tissue as well as impaired healing; this is scurvy, a disease marked by skin changes, bleeding, loss of teeth and lethargy. Hardly the state you want your military to be in when you’re trying to rule the waves.

James Lind was born in Edinburgh in 1716. In 1731, he registered as an apprentice at the College of Surgeons in Edinburgh and in 1739 became a surgeon's mate, seeing service in the Mediterranean, Guinea and the West Indies, as well as the English Channel. In 1747, whilst serving on HMS Salisbury he decided to study scurvy and a potential cure.

James Lind 1716-1794

Lind, in line with medical opinion at the time, believed that scurvy was caused by a lack of acid in the body which made the body rot or putrefy. He therefore sought to treat sailors suffering with scurvy with a variety of acidic substances to see which was the best treatment. He took 12 sailors with scurvy and divided them into six pairs. On top of their normal rations, one pair were given cider, another sea water, another vinegar, another sulphuric acid, another a spicy paste with barley water, and the final pair received two oranges and one lemon (citrus fruits containing citric acid).

Although they ran out of fruit after five days, by that point one of the pair receiving citrus fruits had returned to active duty whilst the other had nearly recovered. Lind published his findings in his 1753 work, A Treatise of the Scurvy. Despite this outcome, neither Lind himself nor the wider medical community recommended that citrus fruits be given to sailors. This was partly due to the impossibility of keeping fresh fruit on a long voyage and the belief that other, easier-to-store acids could cure the disease. Lind recommended a condensed juice called ‘rob’ which was made by boiling fruit juice. Boiling destroys vitamin C and so subsequent research using ‘rob’ showed no benefit. Captain James Cook managed to circumnavigate the globe without any loss of life to scurvy, likely due to his regular replenishment of fresh food along the way as well as the rations of sauerkraut he provided.

It wasn’t until 1794, the year that Lind died, that senior officers on board HMS Suffolk overruled the medical establishment and insisted on lemon juice being provided on their twenty-three-week voyage to India to mix with the sailors’ grog. The lemon juice worked. The organisation responsible for the health of the Navy, the Sick and Hurt Board, recommended that lemon juice be included on all voyages in the future.

Although his initial assumption was wrong, namely that scurvy was due to a lack of acid and that it was the acidic quality of citrus fruits which cured it, James Lind had performed what is now recognised as the world’s first clinical trial. Using PICO we can construct Lind’s research question.

Population

Sailors in the Royal Navy with scurvy

Intervention

Giving sailors citrus fruits on top of their normal rations

Comparison

Seawater, vinegar, spicy paste and barley water, sulphuric acid and cider

Outcome

Patient recovering from scurvy to return to active duty

So James Lind’s research question would be:

Are citrus fruits better than seawater, vinegar, spicy paste and barley water, sulphuric acid and cider at treating sailors in the Royal Navy with scurvy so they can recover and return to active duty?

After HMS Suffolk arrived in India free of scurvy, the naval establishment began to give citrus fruit, in the form of juice, to all sailors. This arguably helped swing naval superiority the way of the British as health amongst sailors improved. It became common for imperial powers to plant citrus trees across their empires so that their ships could stop off and replenish; the British planted a particularly large stock in Hawaii. Whilst lemon juice was originally used, the British soon switched to lime juice. Hence the nickname, ‘limey’.

A factor which had made the cause of scurvy hard to find was the fact that most animals, unlike humans, can make their own Vitamin C and so don’t get scurvy. In 1907 a team studying beriberi, a disease caused by a lack of thiamine (Vitamin B1), in sailors fed guinea pigs the sailors’ diet of grains. Guinea pigs, by chance, also don’t synthesise Vitamin C, and so the team were surprised when, rather than developing beriberi, the animals developed scurvy. In 1912 Vitamin C was identified. In 1928 it was isolated and by 1933 it was being synthesised. It was given the name ascorbic (against scurvy) acid.

James Lind didn’t know it but he had effectively invented the clinical trial. He had a hunch. He tested it against comparisons. He had a clear outcome. As rudimentary as it was, this is still the model we use today. Whenever we come up with a research question we are following the tradition of a ship’s surgeon and his citrus fruit.

Thanks for reading.

- Jamie

Hippocrates to Helsinki: Medical Ethics


On 2nd June 1948 seven men were hanged for crimes committed in World War Two. Although all held some form of military rank, none had actually fired a gun in combat. Four of them were doctors. They, along with sixteen other defendants, had been on trial from 9th December 1946 to 20th August 1947 to answer for the horrors of Nazi human experimentation carried out under the guise of Medicine. A common defence offered for their crimes was that there was no agreed standard saying what they had done was wrong. After almost 140 days of proceedings, including the testimony of 85 witnesses and the submission of almost 1,500 documents, the judges disagreed.

The Nuremberg trials are rightfully held up as a landmark of medical ethics. Yet they are only one point on a timeline that stretches back to the very beginnings of Medicine. This musing is a brief journey through that timeline.

The Hippocratic Oath

Hippocrates

We start with Hippocrates (c. 460 BCE to c. 370 BCE), a Greek physician considered the Father of Medicine. The Hippocratic Oath is attributed to him, although it may have been written after his death. The oldest surviving copy dates to circa 275 CE. The original text provides an ethical code for the physician to live and practise by:

I swear by Apollo Physician, by Asclepius, by Hygieia, by Panacea, and by all the gods and goddesses, making them my witnesses, that I will carry out, according to my ability and judgment, this oath and this indenture. To hold my teacher in this art equal to my own parents; to make him partner in my livelihood; when he is in need of money to share mine with him; to consider his family as my own brothers, and to teach them this art, if they want to learn it, without fee or indenture; to impart precept, oral instruction, and all other instruction to my own sons, the sons of my teacher, and to indentured pupils who have taken the physician’s oath, but to nobody else. I will use treatment to help the sick according to my ability and judgment, but never with a view to injury and wrong-doing. Neither will I administer a poison to anybody when asked to do so, nor will I suggest such a course. Similarly I will not give to a woman a pessary to cause abortion. But I will keep pure and holy both my life and my art. I will not use the knife, not even, verily, on sufferers from stone, but I will give place to such as are craftsmen therein. Into whatsoever houses I enter, I will enter to help the sick, and I will abstain from all intentional wrong-doing and harm, especially from abusing the bodies of man or woman, bond or free. And whatsoever I shall see or hear in the course of my profession, as well as outside my profession in my intercourse with men, if it be what should not be published abroad, I will never divulge, holding such things to be holy secrets. Now if I carry out this oath, and break it not, may I gain for ever reputation among all men for my life and for my art; but if I break it and forswear myself, may the opposite befall me.

The oath has been used in updated iterations as a pledge for doctors to make on graduation.


The Formula Comitis Archiatrorum is the earliest known text on medical ethics from the Christian era. It was written by Magnus Aurelius Cassiodorus (c. 484-90 to c. 577-90 CE), a statesman and writer serving in the administration of Theodoric the Great, king of the Ostrogoths. It laid out a code of conduct for physicians to align their lives and medical practice with.

Ethics of the Physician was penned by Ishāq bin Ali al-Rohawi, a 9th century Arab physician, and is the first medical ethics book in Arab medicine. It contains the first documented description of the peer review process where a physician’s notes were reviewed by their peers.

Primum non nocere and the beginnings of ‘medical ethics’

The phrase ‘primum non nocere’ (first do no harm) is often attributed to Hippocrates and the Hippocratic Oath. The exact author is unknown, however. One study traced medical writings back to the Middle Ages and found that the earliest identifiable mention was in 1860, where it was attributed to the 17th-century physician Thomas Sydenham.

Thomas Percival

In 1794, Thomas Percival (1740-1804) created one of the first modern codes of medical ethics in a pamphlet, which was expanded in 1803 into a book: Medical Ethics; or, a Code of Institutes and Precepts, Adapted to the Professional Conduct of Physicians and Surgeons. This was the first use of the phrase ‘medical ethics’. The book heavily influenced the American Medical Association code, which was adopted in 1847.

An Introduction to the Study of Experimental Medicine was written by the French physiologist Claude Bernard (1813-1878) in 1865. Bernard’s aim in the Introduction was to demonstrate that medicine, in order to progress, must be founded on experimental physiology.

Nuremberg and Helsinki

After the failed defence of the defendants in the Doctors’ Trial, the Nuremberg Code was written in 1947. It is a ten-point code guiding research ethics:

  1. The voluntary consent of the human subject is absolutely essential.

  2. The experiment should be such as to yield fruitful results for the good of society, unprocurable by other methods or means of study, and not random and unnecessary in nature.

  3. The experiment should be so designed and based on the results of animal experimentation and a knowledge of the natural history of the disease or other problem under study that the anticipated results will justify the performance of the experiment.

  4. The experiment should be so conducted as to avoid all unnecessary physical and mental suffering and injury.

  5. No experiment should be conducted where there is an a priori reason to believe that death or disabling injury will occur; except, perhaps, in those experiments where the experimental physicians also serve as subjects.

  6. The degree of risk to be taken should never exceed that determined by the humanitarian importance of the problem to be solved by the experiment.

  7. Proper preparations should be made and adequate facilities provided to protect the experimental subject against even remote possibilities of injury, disability, or death.

  8. The experiment should be conducted only by scientifically qualified persons. The highest degree of skill and care should be required through all stages of the experiment of those who conduct or engage in the experiment.

  9. During the course of the experiment the human subject should be at liberty to bring the experiment to an end if he has reached the physical or mental state where continuation of the experiment seems to him to be impossible.

  10. During the course of the experiment the scientist in charge must be prepared to terminate the experiment at any stage, if he has probable cause to believe, in the exercise of the good faith, superior skill and careful judgment required of him that a continuation of the experiment is likely to result in injury, disability, or death to the experimental subject.

Reading this code, it’s not hard to see the inspiration the trials provided, in particular the emphasis on protecting and empowering the people taking part. The Nuremberg Code remains a keystone of medical research.

Another keystone is the Helsinki Declaration, a set of ethical principles for research on human subjects developed by the World Medical Association in 1964. It is not a law in itself but has been used to form the basis of laws in the signatory countries. The original draft contained five basic principles, a section on clinical research and a section on non-therapeutic research. It has been revised several times since, most recently in 2013, a version containing thirty-seven principles.

1979 saw the publication of the first edition of Principles of Biomedical Ethics by the philosophers Tom Beauchamp and James Childress. It is this book which gave us the four ethical principles often quoted by medical students:

  • Beneficence (promoting well-being)

  • Non-maleficence (not doing harm - drawing back to primum non nocere)

  • Autonomy (the patient’s right to decide for themselves)

  • Justice (fairness to the community at large)

Scandals

Despite Nuremberg and Helsinki, a number of scandals occurred throughout the 20th century, reflecting inequalities and prejudices in society. In 1951, a young Black American woman named Henrietta Lacks had a piece of her cancerous tumour removed without her knowledge. The cells from Henrietta’s cervical tumour, known as HeLa cells, became the first human cell line to survive indefinitely in vitro and have since been used to test new medical treatments, most notably in the development of the polio vaccine.

Henrietta Lacks

However, the case raised serious concerns about the lack of informed consent and the taking of samples from living patients. In 1997, President Bill Clinton issued a formal apology for the Tuskegee Syphilis Study, which took place between 1932 and 1972 in the state of Alabama. This infamous government experiment allowed hundreds of African-American men to go untreated for their syphilis so that doctors could study the effects of the disease. This continued even after penicillin was shown to be an effective cure. It was one of a number of unethical studies performed in the 20th century, including the Guatemalan Syphilis Study and the Skid Row Cancer Study. When we talk about ethics we often think about the Nazis and other deplorable regimes, but I think these studies remind us that even in the democratic West we are more than capable of committing horrible acts against vulnerable people.

Reproductive advances & rights and Alder Hey

The latter part of the 20th century saw a growth in women’s rights, including their reproductive rights, and in the abilities of reproductive medicine. The Abortion Act was passed in the UK in 1967 after widespread evidence that unsafe illegal abortions often resulted in maternal mortality and morbidity. The Act made abortion legal in Great Britain (but not Northern Ireland) up to a gestational limit, now set at 24 weeks. In 1973 the US Supreme Court found in favour of Jane Roe in the case of Roe vs. Wade. The Court ruled 7-2 that access to safe abortion is a fundamental right. The arguments contained within this court case are still prominent topics of debate to this day.

Louise Joy Brown was born in Oldham, UK in 1978. She was the first baby to be born as a result of in-vitro fertilisation (IVF). The Report of the Committee of Inquiry into Human Fertilisation and Embryology, commonly called the Warnock Report, was published in 1984 as a result of a UK governmental inquiry into the social impacts of infertility treatment and embryological research. The Human Fertilisation and Embryology Act 2008 updated and revised the Human Fertilisation and Embryology Act 1990 in the UK and introduced key provisions on IVF and human reproduction, including a ban on sex selection for non-medical purposes and the principle that all human embryos are subject to regulation.

The Human Tissue Act 2004 created the Human Tissue Authority to "regulate the removal, storage, use and disposal of human bodies, organs and tissue", in response to the Alder Hey organs scandal, which involved the removal and retention of children’s organs without parental knowledge or consent.

The right to die

In February 1990, Terri Schiavo collapsed at home after suffering a cardiac arrest. The oxygen supply to her brain was cut off and, as a result, she entered a "persistent vegetative state" from which she would never wake. In the years that followed, Terri's husband fought various legal battles against her parents, and eventually the state of Florida, to enable her feeding tube to be removed. Her case was one of several sparking an enduring national debate over end-of-life care and the right to die. Dignitas was founded in 1998 by Ludwig Minelli, a Swiss lawyer specialising in human rights law. Since its foundation Dignitas has helped over 2,100 people with severe physical illnesses, as well as the terminally ill, to end their own lives. In the UK the Assisted Dying Bill was blocked by the House of Lords in 2007. The bill would have only allowed those of sound mind and with less than six months to live to seek assisted death. Both euthanasia and assisted suicide remain illegal in the UK.

In 2016 Charlie Gard was born in the UK with an exceedingly rare and fatal inherited disease - infantile onset encephalomyopathy mitochondrial DNA depletion syndrome (MDDS). The ensuing legal battle between Charlie's parents and his doctors over withdrawing life support provoked passionate debate throughout the world. It illustrated the power of social media both to facilitate and to hamper debate around autonomy, end-of-life care and parental rights, with ugly scenes in which violence was threatened against the staff looking after Charlie. It also showed how complicated, nuanced cases can be hijacked by various individuals and groups, such as parts of the American right, to spread their own agendas.

How we can show we respect research ethics

For anyone working in clinical research, Good Clinical Practice (GCP) is the international ethical, scientific and practical standard. Everyone involved in research must be trained or appropriately experienced to perform the specific tasks they are being asked to undertake. Compliance with GCP provides public assurance that the rights, safety and well-being of research participants are protected and that research data are reliable. More information can be found on the National Institute for Health Research website, which offers both introductory and refresher courses.

Looking at the timeline of medical ethics, it’s tempting to think that we’ve never been more advanced or ethical and that the whole movement has been an evolution towards enlightenment. To a certain extent that’s true; of course things are better than they were. But we can’t be complacent. The trend is that ethics often lag behind medical advances. As we become better at saving the lives of premature children, as our population ages with more complex diseases and as our resuscitation and intensive care improve, there will undoubtedly continue to be more debates. As the case of Charlie Gard showed, these can be very adversarial. Social media and fake news will no doubt continue to play a huge part in any scientific discussion, be it in medicine or climate change. All the more reason to stick to our principles and always aim to do no harm.

Thanks for reading

- Jamie


Not to be sneezed at: How we found the cause of hay fever


The recent good weather in the UK has seen barbecues dusted off and people taking to the garden. Cue sneezes and runny eyes and noses. Yes, with the nice weather comes hay fever. Hay fever, or allergic rhinitis, affects somewhere between 26% and 30% of people in the UK. Symptoms include sneezing, swelling of the conjunctivae and eyelids, a runny nose (rhinorrhoea) and a blocked nose. Sometimes it can result in hospital admissions and death.

We all know that pollen is the cause of hay fever. Pollen in the air is inhaled and trapped by hairs in the membrane of the nostrils. There the body responds to proteins on the surface of the pollen. These proteins are called allergens. Different types of pollen carry different allergens. A type of white blood cell called a B cell produces an antibody, immunoglobulin E or IgE, specific to a particular allergen. The IgE then binds to cells called mast cells. These are found in some of the most sensitive parts of the body, including the skin, blood vessels and respiratory system. Mast cells contain 500 to 1,500 granules holding a mix of chemicals including histamine. When allergen binds to the IgE on the mast cell surface, the mast cells release their histamine. It is histamine which causes the symptoms of hay fever by binding to histamine receptors throughout the body. Antihistamines work by binding to these receptors instead of histamine and blocking them.

But two centuries ago hay fever was a mystery. It took a couple of doctors with sneezing fits and blocked noses of their own to research the problem and link it to pollen. This musing is their story.

The first description of what we would call hay fever came in 1819 in a study presented to the Medical and Chirurgical Society called ‘Case of a Periodical Affection of the Eyes and Chest’. The case was a patient called ‘JB’, a man “of a spare and rather delicate habit”. The patient was 46 and had suffered from catarrh (blockage of the sinuses and a general feeling of heaviness and tiredness) every June since the age of eight. Numerous treatments including bleeding, cold baths, opium and vomiting were tried to no avail. What makes this study even more interesting is that ‘JB’ was the author, John Bostock, a Liverpool-born doctor who was not afraid to experiment on himself.

John Bostock

Bostock tried to broaden his research by looking for more sufferers; he found 28. In 1828 he published his work and called the condition “catarrhus aestivus” or “summer catarrh”. After Bostock published, an idea spread amongst the public that the smell of hay was to blame. This led to the colloquial term “hay fever”. Bostock didn’t agree and felt that the heat of summer was to blame. He rented a clifftop house near Ramsgate, Kent for three consecutive summers, which seemed to help. In 1827 The Times reported that the Duke of Devonshire was "afflicted with what is vulgarly called the Hay-fever, which annually drives him from London to some sea-port". In 1837, a few days before King William IV died, the same paper reported that the king had "been subject to an attack of hay fever from which he has generally suffered for several weeks".


Charles Harrison Blackley

In 1859 another doctor, Charles Harrison Blackley, sniffed a bouquet of bluegrass and sneezed. He was convinced that pollen was to blame and methodically set out to prove it. He experimented on himself and seven other subjects. He first applied pollen to the nose and noted how it produced the symptoms of hay fever. He then covered microscope slides with glycerine and left them in the sunshine under a little roof for 24 hours before removing them and studying them under a microscope. He was then able to count the number of pollen granules in the air. He noted the prevalence of grass pollen in June, the time when symptoms were at their worst. To prove that wind could carry pollen great distances he then sent similar slides up on kites to altitudes of 500 to 1,500 feet and discovered that the slides there caught an even greater number of granules than those at ground level. In 1873 he published his work, Experimental Researches on the Causes and Nature of Catarrhus Aestivus.

Fast forward to 1906. An Austrian paediatrician, Clemens von Pirquet, notices that patients given a second dose of horse-derived serum, or revaccinated against smallpox, react more quickly and severely than they did the first time. He correctly deduces that the body ‘remembers’ certain substances and produces antibodies against them. He calls this ‘allergy’. In the 1950s mast cells are shown to be the body's main store of histamine. In 1967 IgE is identified. The mechanism of allergic rhinitis and other allergies is finally understood. With this come new lifesaving treatments such as the EpiPen.

For a lot of us hay fever is an annual nuisance. But as we reach for the antihistamines and tissues we should thank a couple of 19th century sufferers who happened to turn their symptoms into a life’s work and, as a result, make hay fever that bit easier for us.


Thanks for reading

- Jamie

Going Mobile: A review of mobile learning in medical education


Next month will mark the 50th anniversary of mankind’s greatest accomplishment: landing human beings on the Moon. Yet today the vast majority of our learners each carry in their bag or pocket a device with millions of times the computing power of the machines we used to meet this achievement. This is why one of my passions as an educator is mobile learning and the opportunities our unprecedented age now offers. I’ve enjoyed learning how to create resources such as a podcast and a smartphone application, and seeing how these have changed the way I teach.

The Higher Education Academy defines mobile learning as “the use of mobile devices to enhance personal learning across multiple contexts.” Mobile learning itself is a subset of TEL or ‘Technology Enhanced Learning.’ There’s repetition of a key word: enhance/enhanced. We’ll come back to that word later.

This musing looks at some of the current evidence of mobile learning use in medical education and tries to pinpoint some themes and things we still need to iron out if we’re going to make the most out of mobile learning.

From Del Boy to Web 2.0

It’s safe to say that mobile phones have come a long way since being used as a cumbersome prop in Only Fools and Horses. They are now a key part of everyday life.

More than 4 billion people, over half the world’s population, now have access to the internet, with two thirds using a mobile phone, more than half of which are smartphones. By 2020, 66% of new global connections between people will occur via a smartphone. We are now in the era of the internet of things: touchscreen phones and tablets as well as smart wearables such as glasses or watches. Humans have been described as “technology equipped mobile creatures that are using applications, devices and networks as a platform for enhancing their learning in both formal and informal settings.” It’s been argued that, as society is now heavily characterised by the widespread use of mobile devices and the connectivity they afford, there is a need to re-conceptualise the idea of learning in the digital age.


A key development in the potential of mobile learning was the arrival of Web 2.0. The first iterations of the internet were themselves as clunky as Del Boy’s mobile: fixed, un-editable and open to a select few. Web 2.0 is known as the ‘participatory web’: blogs, podcasts and wikis. It is now possible for people with no computing background whatsoever to produce and share learning resources with massive success, such as Geeky Medics.

Another aspect interlinked with these social and technological changes has been the shortening of the half-life of knowledge. By 2017 the half-life of medical knowledge was estimated at 18-24 months. It is estimated that by 2021 it will be only 73 days. It’s therefore fairly easy to envisage a world where libraries of books will be out of date. Students will instead be their own librarians, accessing knowledge on the go via their mobile devices.
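To put a knowledge half-life into rough numbers (an illustrative calculation of my own, using the figures quoted above): if knowledge has a half-life T, the fraction still current after a time t is

\[
f(t) = \left(\tfrac{1}{2}\right)^{t/T}
\]

With T = 24 months, only around 18% of today’s knowledge would still be current five years from now; with T = 73 days, barely 3% would survive a single year.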


In education, when we look at professionals collaborating, we think of a community of practice. Thanks to Web 2.0, the collaboration of professionals, patients and students in medicine has been given the epithet ‘Medicine 2.0’. This represents a new community of practice and shows how technology links all of us in healthcare. Health Education England argue that digital skills and knowledge should be “a core component” of healthcare staff education. In order to reflect the new world of Medicine 2.0, medical schools in the US and Hungary have set up courses aimed at familiarising students with social media. A Best Evidence Medical Education review showed that mobile resources help with the transition from student to professional.

Towards collaboration

The general movement in mobile learning is towards collaboration. The unique features of mobile devices, in particular their portability, social connectivity and sense of individuality, make online collaboration more likely than it is with desktop computers, which lack those features. A meta-analysis of 48 peer-reviewed journal articles and doctoral dissertations from 2000 to 2015 concluded that mobile technology has produced meaningful improvements in collaborative learning. The focus is on bringing people together via their mobile devices to share learning and practice.


Perhaps the extreme of mobile learning and collaboration has been the advent of massive open online courses (MOOCs) since 2008. These are courses without any fees or prerequisites beyond technological access. Some MOOCs are delivered to tens of thousands of learners. As a result, along with mobile learning in general, MOOCs have been credited with democratising education. MOOCs have been suggested as best augmenting traditional teaching methods in the ‘flipped classroom’ approach. In the flipped classroom students are introduced to learning material before the classroom session, with that time then being used to deepen understanding. In general the HEA credits mobile and online resources with providing an accessible toolkit for delivering flipped learning.

Medical students and mobile learning


The tradition has been to divide students into digital natives (those who grew up with technology) and digital immigrants (those for whom technology arrived later in life). This distinction assumes that younger people have innate skills with, and a preference for, technology. However, more recent evidence suggests that this distinction doesn’t exist and is unhelpful. Learners whose age places them in the category of digital native still need, and benefit from, teaching aimed at digital literacy. The notion of digital natives belongs in the same file as learning styles; they just don’t exist.


Research into medical students’ use of mobile learning tends to focus on evaluating a specific intervention. Examples include Facebook, a novel wiki platform, a MOOC and a tailor-made smartphone application. While that has a use, I’d argue that most students will appreciate any new learning intervention. As a result we’re still in the early days of understanding how students use mobile resources. That said, the evidence suggests that students quickly find a preferred way of using Web 2.0 resources. Whilst it’s been suggested that male students are less likely to ask questions via a Web 2.0 resource, students overall seem to find them a safe environment and more comfortable than clinical teaching. It’s been suggested that mobile learning resource usage is linked to a student’s intrinsic motivation: the more motivated a student is, the more likely they are to use a mobile resource. Medical students themselves report concerns regarding privacy and professional behaviour when using social media in education.

A 2013 systematic review of social media use in medical education found an association with improved knowledge, attitudes and skills. The most often reported benefits were in learner engagement, feedback, collaboration and professional development. The most commonly cited challenges were technical difficulties, unpredictable learner participation and privacy/security concerns. A systematic review the following year, however, considered only publications that included randomisation, reviews and meta-analyses, and concluded that, despite the wide use of social media, there were no significant improvements in the learning process and that some novel mobile learning resources don’t result in better student outcomes.


A recent review of the literature on mobile learning use in medical education suggests that it remains a supplement only. There is still no consensus on the most efficient use of mobile learning resources in medical education, but the ever-changing nature of the resources means this is probably inevitable. There’s that word enhance again. Is this the limit of mobile learning in medical education? To enhance more traditional teaching, not to replace it?


There’s also the issue of whether students want more mobile resources. The most recent student survey by the Higher Education Policy Institute found that students prefer direct contact time with educators over other learning events. 44% of students rating their course as poor or very poor value for money included a lack of contact hours as part of their complaint; only tuition fees and teaching quality were reported more often as a reason. More students (19%) were dissatisfied with their contact time than were neutral (17%), an increase on the previous year. However, the survey did not explore mobile resources either as an alternative to contact time or in terms of how students viewed their educators creating resources for them. 62% of Medicine and Dentistry students reported that they felt they had value for money for their tuition fees, the highest of any subject.

According to the HEPI, students in the UK are conservative in their preferred learning methods, which means any innovation takes time to become embedded in a curriculum. The HEPI recommends engaging with students and involving them in the development of any resource, building technology into curriculum design, and developing a nationwide evidence and knowledge base on what works.


This is being done. Case studies in the UK show that where mobile learning has succeeded in higher education it has involved some degree of student inclusion alongside educators during design. But there’s no evidence of this being done in UK medical schools. One example was published from Vanderbilt University, Nashville: a committee formed of administrators, educators and selectively recruited students. This committee serves four functions: liaising between students and administration; advising on the development of institutional educational technologies; developing, piloting and assessing new student-led educational technologies; and promoting biomedical and educational informatics within the school community. The authors report benefits from rapid improvements to educational technologies that meet students’ needs and enhance learning opportunities, as well as the fostering of a campus culture of awareness and innovation in informatics and medical education.

An example from a European medical school comes from the Faculty of Medicine of Universität Leipzig, Germany. Rather than a physical committee, their E-learning and New Media Working Group established an online portal for discussion with students about mobile resources, as well as expanding the university’s presence across social media to help disseminate information.

The HEPI have also recommended that the UK higher education sector develop an:

“evidence and knowledge base on what works in technology-enhanced learning to help universities, faculties and course teams make informed decisions. Mechanisms to share, discuss and disseminate these insights to the rest of the sector will also be required.”

Medical educators and mobile learning


Teachers’ attitudes toward, and ability with, mobile resources are a major influence on whether students decide to use them. It’s been suggested that Web 2.0 offers opportunities for educator innovation. However, it has been shown that teachers may be less engaged than their students in utilising Web 2.0 resources, especially in accessing materials outside of the classroom.

I’ve not been able to find any research in the literature looking at the perceptions of UK medical educators toward mobile learning. However, a recent online survey of 284 medical educators in Germany did show some interesting findings. Respondents valued interactive patient cases, podcasts and subject-specific apps as the most constructive teaching tools, while Facebook and Twitter were considered unsuitable as platforms for medical education. No relationship was found between an educator’s demographics and their use of mobile learning resources.

* * *

It’s obvious that mobile learning offers great opportunities for medical students and educators. I hope this review has shown some of the trends in our current understanding of mobile learning in medical education: that the future seems to be collaboration; that digital natives don’t exist and students need tuition in how to use mobile resources; that research is currently limited to studying specific interventions; that students value contact time and need to be included if we are to make the most of resources; and that we need to know more about what teachers think.

This is a time for leadership, for educators to start to fill these gaps in knowledge and expand on these trends. In September 1962 President Kennedy challenged his country to go to the Moon by the end of the decade. To say this was ambitious is an understatement; Americans had got into space barely a year earlier. Yet the country rose to the challenge and on 20th July 1969 man walked on the Moon. I like how he said it. “We CHOOSE to go to the Moon.” Challenges are there to be met. We can meet the challenges of mobile learning in medical education if we choose to. We can choose to use mobile learning and help shape it. Or not. That choice is ours.

Medical school, medieval style


We discuss the first Medical schools (and review wine) in the first episode of the Quacks Who Quaff podcast.

Philosophia et septem artes liberales, the seven liberal arts. From the Hortus deliciarum of Herrad of Landsberg (12th century)

It’s tempting to see medieval doctors as a group of quacks and inadequates stuck between the Dark Ages and the enlightened Renaissance. Certainly, it was a dangerous time to be alive and sick. In the twelfth century the majority of people lived in rural servitude and received no education. Average life expectancy was 30-35 years, with 1 in 5 children dying at birth. Healthcare policy, such as it was, was based on Christian teaching: that it was everyone’s duty to care for the sick and poor. To that end, medieval hospitals more resembled modern-day hospices, providing basic care for the destitute and dying with nowhere else to go. Education and literacy were largely the preserve of the clergy and it was in monasteries where most hospitals could be found. The Saxons built the first hospital in England in 937 CE, and many more followed after the Norman Conquest in 1066, including St Bartholomew's of London, built in 1123 CE.

The sick were cared for by a mix of practitioners including physicians, surgeons, barber-surgeons and apothecaries. Of these only physicians would have received formal training. The vast majority of people providing healthcare were practising a mix of folklore and superstition.

However, it was in the early medieval period that the first medical schools were formed and the first ever medical students went to university. In this musing I’m looking at what medical education was like in the Middle Ages at the most prestigious university of the age as well as the common theories behind disease and cure.

The Schola Medica Salernitana was founded in the 9th century in the southern Italian city of Salerno. In 1050 one of its teachers, Gariopontus, wrote the Passionarius, one of the earliest written records of Western Medicine as we would recognise it. Gariopontus drew on the teachings of Galen (c. 129-199 CE) and latinised Greek terms, in doing so forming the basis of several modern medical terms such as cauterise. Another early writing mentioned a student: “ut ferrum magnes, juvenes sic attrahit Agnes” (“Agnes attracts the boys like iron to a magnet”). This shows that the first medical school in the world had female students.

The medical school published a number of treatises, such as work by a woman called Trotula on childbirth and uterine prolapse and work on the management of cranial and abdominal wounds. For head wounds it was recommended to feel for and then surgically remove pieces of damaged skull. In abdominal trauma students were advised to try to put any protruding intestine back inside the abdomen. If the intestine was cold, it was first to be warmed by wrapping the intestines of a freshly killed animal around it; the wound was then left open until a drain was inserted.

Anatomy remained based on the work of Galen. Doctors were encouraged to dissect pigs as their anatomy was felt to be the most closely related to humans. However, the teachers were more innovative when it came to disseminating knowledge, teaching in verse form, often with a spice of humour. 362 of these verses were printed for the first time in 1480, increasing to 3,520 verses in a later edition. By 1224 the Holy Roman Emperor Frederick II had made it obligatory that anyone hoping to practise Medicine in the Kingdom of Naples should seek approval from the masters of Salerno medical school.

But Salerno medical school did not teach any other subjects and so did not evolve into a studium generale, or university, as these began to spring up. By the fourteenth century the most prestigious medical school in Europe was at the University of Bologna, founded in 1088 and the oldest university in the world. In the United Kingdom medical training began at the University of Oxford in the 12th century but was haphazard and based on apprenticeship. The first formal UK medical school would not be established until 1726, in Edinburgh.

The University of Bologna was run along democratic lines, with students choosing their own professors and electing a rector who had precedence over everyone, including cardinals, at official functions.

The Medicine course lasted four years and consisted of forty-six lectures. Each lecture focused on one particular medical text as written by Hippocrates (c. 460-370 BCE), Galen or Avicenna (c. 980-1037 CE). Students would also read texts by these authors and analyse them using the methods of the French philosopher Peter Abelard to draw conclusions. His work Sic et Non had actually been written as a guide for debating contrasting religious texts, not scientific works. This reflected how religion and philosophy dominated the training of medical students. The university was attached to a cathedral and students were required to be admitted to the clergy prior to starting their studies. As well as studying Medicine, students were also required to study the seven classical liberal arts: Grammar, Rhetoric, Logic, Geometry, Arithmetic, Music and Astronomy.

At the time knowledge of physiology and disease focused on the four humors: phlegm, blood, black bile and yellow bile. An imbalance of one was what caused disease; for example, too much phlegm caused lung disease and the body had to cough it up. This was a theory largely unchanged since its inception in the ancient world. This is why blood letting and purging were often the basis of medieval medicine. The state of imbalance was called dyskrasia while the perfect state of equilibrium was called eukrasia. Disease was also linked to extremes of temperature and personality. For example, patients who were prone to anger or passion were at risk of overheating and becoming unwell. Patients would also be at risk if they went to hot or cold places, and so doctors were taught to advise maintaining a moderate personality and avoiding extreme temperatures.

Diet was taught as important to prevent disease. During blood letting doctors were taught to strengthen the patient’s heart through a diet of rose syrup, bugloss or borage juice, the bone of a stag’s heart, or sugar mixed with precious stones such as emerald. Other foods such as lettuce and wine were taught as measures to help balance the humors.

Pharmacy was similarly guided by ancient principles. The Doctrine of Signatures dated back to the days of Galen and was adopted by Christian philosophers and medics. The idea was that, in causing disease, God would also provide the cure and make that intended cure apparent through design in nature. For example, eyebright flowers were said to resemble the human eye, while skullcap seeds were said to resemble the human skull. This was interpreted as God’s design that eyebright was to be used as a cure for eye disease and skullcap seeds for headaches.

God, the planets and polluted air or miasma were all blamed as the causes of disease. When the Black Death struck Italy in 1347 a contemporary account by the scholar Giovanni Villani blamed “the conjunction of Saturn and Jupiter and Mars in the sign of Aquarius” while the official Gabriel de Mussis noted “the illness was more dangerous during an eclipse, because then its effect was enhanced”. Gentile da Foligno, a physician and professor at the University of Bologna, blamed a tremor felt before the plague struck, which he believed had opened up underground pools of stagnant air and water. Doctors therefore were taught to purify either the body, through poultices of mallow, nettles, mercury and other herbs, or the air, by breathing through a posy of flowers, herbs and spices. De Mussis mentioned that “doctors attending the sick were advised to stand near an open window, keep their nose in something aromatic, or hold a sponge soaked in vinegar in their mouth.” During the Black Death a vengeful God was often blamed; the flagellants were a group of religious zealots who would march and whip themselves as an act of penance to try and appease God.

There’s a sense here of being close but not quite. Of understanding that balance is important for the body, that environmental factors can cause disease and that there was something unseen spreading disease. Close but not yet there. The Middle Ages isn’t known as a time of enlightenment. That would come with the Renaissance. But it was not a barren wasteland. It was a time of small yet important steps.

It was in the Middle Ages that laws against human dissection were relaxed and knowledge of human anatomy began to improve. An eminent surgeon of the time, Guy de Chauliac, lobbied for surgeons to receive university training and so started to create equivalence with physicians. Physicians began to use more observations to help them diagnose disease, in particular of urine, as seen in the Fasciculus Medicinae, published in 1491 and the pinnacle of medical knowledge at the time (this book also contained Wound Man, as discussed in a previous musing). The scholarly approach encouraged at medical school led to methodical documentation from several physicians; it is through these writings that we know so much about the Black Death and other medieval illnesses. An English physician, Gilbertus Anglicus (1180-1250), teaching at the Montpellier school of Medicine, was one of the first to recognise that diseases such as leprosy and smallpox were contagious.

Perhaps most importantly, it was in this period that the first medical schools and universities were established. These particular small steps began the role of the doctor as a scholar and started to set in law the standards required of a physician: a vital foundation without which future advances could never have been possible.

Thanks for reading

- Jamie

Medicine and Game Theory: How to win


You have to learn the rules of the game; then learn to play better than anyone else - Albert Einstein

Game theory is a field of mathematics which emerged in the 20th century looking at how players in a game interact. In game theory any interaction between two or more people can be described as a game. In this musing I’m looking at how game theory can influence healthcare both in the way we view an individual patient as well as future policy.

There are at least two kinds of games. One could be called finite, the other infinite. A finite game is played for the purpose of winning, an infinite game for the purpose of continuing the play.     

James P. Carse, author of Finite and Infinite Games

Game theory is often mentioned in sports and business

In a finite game all the players and all the rules are known. The game also has a known end point. A football match would therefore be an example of a finite game. There are two teams of eleven players with their respective coaches. There are two halves of 45 minutes and clear laws of football officiated by a referee. After 90 minutes the match is either won, lost or drawn and is definitely over.

Infinite games have innumerable players and no end points. Players can stop playing, join, or be absorbed by other teams. The goal is not an endpoint but to keep playing. A football season, or even several football seasons, could be described as an infinite game. Key to infinite games, then, are a vision and principles. A team may lose one match but success is viewed as the team remaining consistent with that vision, such as avoiding relegation every season or promoting young talent. Athletic Club in Spain are perhaps the prime example of this. Their whole raison d'être is that they only use players from the Basque region of Spain. This infinite game of promoting local talent eschews short-term gain. In fact their supporters regularly report they’d rather be relegated than field non-Basque players.

Problems arise when finite and infinite games are confused. When Sir Alex Ferguson retired as Manchester United manager in 2013 after 27 years, the club attempted to play an infinite game. They chose as his replacement David Moyes, a manager with a similar background and ethos to Ferguson, giving him a six-year contract. Less than a season later he was sacked, and since then United have been playing a finite game, choosing more short-term appointments, Louis van Gaal and Jose Mourinho, rather than following a vision.

It’s easy to see lessons for business from game theory. You may get a deal or not. You may have good quarters or bad quarters. But whilst those finite games are going on you have your overall business plan, an infinite game. You’re playing to keep playing by staying in business.

What about healthcare?

So a clinician and patient could be said to be players in a finite game competing against whatever illness the patient has. In this game the clinician and patient have to work together and use their own experiences to first diagnose and then treat the illness. The right diagnosis is made and the patient gets better. The game is won and over. Or the wrong diagnosis is made and the patient doesn’t get better. The game is lost and over. But what about if the right diagnosis is made but for whatever reason the patient doesn’t get better? That finite game is lost. But what about the infinite game?

Let’s say our patient has an infection. That infection has got worse and now the patient has sepsis. In the United Kingdom we have very clear guidelines on how to manage sepsis from the National Institute for Health and Care Excellence. Management is usually summed up as the ‘Sepsis Six’. There are clear principles about how to play this game. So we follow these principles as we treat our patient. We follow the Sepsis Six. But they aren’t guarantees. We use them because they give us the best possible chance to win this particular finite game. Sometimes it will work and the patient will get better and we win. Sometimes it won’t, and the patient may die even if all the ‘rules’ are followed, for reasons beyond any of the players. But whilst each individual patient may be seen as a finite game there is a larger infinite game being played. By making sure we approach each patient with these same principles we not only give them the best chance of winning their finite game but we also keep the infinite game going: ensuring each patient with sepsis is managed in the same optimum way. By playing the infinite game well we have a better chance of winning finite games.

This works at the wider level too. For example, if we look at pneumonia we know that up to 70% of patients develop sepsis. We know that smokers who develop chronic obstructive pulmonary disease (COPD) have up to 50% greater risk of developing pneumonia. We know that the pneumococcal vaccine has reduced pneumonia rates especially amongst patients in more deprived areas. Reducing smoking and ensuring vaccination are infinite game goals and they work. This is beyond the control of one person and needs a coordinated approach across healthcare policy.


Are infinite games the future of healthcare?

In March 2015, just before the UK General Election, the Faculty of Public Health published its manifesto for improving the public’s health, ‘Start Well, Live Better’. The manifesto consisted of 12 priorities:

The Start Well, Live Better 12 priorities. From Lindsey Stewart, Liz Skinner, Mark Weiss, John Middleton, ‘Start Well, Live Better: a manifesto for the public’s health’, Journal of Public Health, Volume 37, Issue 1, March 2015, pages 3-5.

There’s a mixture of finite goals here - establishing a living wage for example - and some infinite goals as well, such as universal healthcare. The problem is that finite game success is much more short-term and easier to measure than with infinite games. We can put a certain policy in place and then measure its impact. However, infinite games aimed at improving a population’s general health take years if not decades to show tangible benefit. Politicians who control healthcare policy and heads of department have a limited time in office and need to show benefits immediately. The political and budgetary cycles are short. It is therefore tempting to choose to play finite games only rather than infinite ones.

The National Health Service Long Term Plan is an attempt to commit to playing an infinite game. The NHS England Chief Executive Simon Stevens laid out five priorities for focusing NHS health spending over the next five years: mental health, cardiovascular disease, cancer, children’s services and reducing inequalities. This comes after a succession of NHS plans since 2000 which all focused on increasing competition and choice. The King’s Fund has been ambivalent about the benefits those plans delivered.

Since its inception the National Health Service has been an infinite game, changing how we view illness and the relationship between the state and patients. Yet if we chase finite games that are incongruous with that infinite game, we put the infinite game at risk. There is a very clear link between the effect of the UK government’s austerity policy on social care and its impact on the NHS.

We all need to identify the infinite game we want to play and make sure it fits our principles and vision. We have to accept that benefits will often be intangible and appreciate the difficulties and scale we’re working with. We then have to be careful with the finite games we choose to play and make sure they don’t cost us the infinite game.

Playing an infinite game means committing to values at both a personal and institutional level. It says a lot about us and where we work. It means those in power putting aside division and ego. Above all it would mean honesty.

Thanks for reading

- Jamie

Spoiler Alert: why we actually love spoilers and what this tells us about communication

spoiler-alert.jpg

Last week the very last episode of Game of Thrones was broadcast. I was surrounded by friends and loved ones all doing everything they could to avoid hearing the ending before they’d seen it; even if this meant fingers in the ears and loud singing. I’ve only ever seen one episode so don’t worry, I won’t spoil the ending for you. But actually that wouldn’t be as bad as you think. Spoiler alert: we actually love spoilers. And knowing this improves the way we communicate.

For all we complain when someone ‘spoils the ending’ of something, the opposite is true. In 2011 a series of experiments explored the effect of spoilers on the enjoyment of a story. Subjects were given twelve stories from a variety of genres. One group were told the plot twist as part of a separate introduction. In the second group the outcome was given away in the opening paragraph, and the third group had no spoilers. The groups receiving the spoilers reported enjoying the story more than the group without spoilers. The group where the spoiler was a separate introduction actually enjoyed the story the most. This is known as the spoiler paradox.

To understand the spoiler paradox is to understand how human beings find meaning. We like giving meaning and intentions to other people and even inanimate objects; psychologists call this ‘theory of mind’. As a result we love stories. A lot. Therefore we find stories a better way of sharing a message. The message “don’t tell lies” is an important one we’ve tried to teach others for generations. But one of the best ways to teach it is to give it a story: ‘The Boy Who Cried Wolf’. Consider Aesop’s fables or the parables of Jesus. Stories have a power.

Therefore, if we know where the story is going it becomes easier for us to follow. We don’t have to waste cognitive energy wondering where the story is taking us. Instead we can focus on the information as it comes. Knowing the final point makes the ‘journey’ easier.

Think how often we’ll watch a favourite movie or read a favourite book even though we know the end. We all know the story of Romeo and Juliet but will still watch it in the theatre. We’ll still go to see a film based on a book we’ve read. Knowing the ending doesn’t detract at all. In fact, I’d argue that focusing on twists and spoilers actually detracts from telling a good story. If you’re relying on spoilers to keep your audience’s attention then your story isn’t going to stand up to much. As a fan of BBC’s Sherlock I think the series went downhill fast in Series 3 when the writers focused on plot twists rather than just telling a decent updated version of the classic stories.

So, how can knowing about the spoiler paradox shape the way we communicate?

In healthcare we’re encouraged to use the ‘SBAR’ model to communicate about a patient. SBAR (Situation, Background, Assessment and Recommendation) was originally used by the military in the early 21st century before becoming widely adopted in healthcare, where it has been shown to improve patient safety. In order to standardise communication about a patient, SBAR proformas are often kept next to phones. There’s clear guidance about the content for each section of SBAR.

Situation:

Why I’m calling

Background:

What led to me seeing this patient

Assessment:

What I’ve found and done

Recommendation:

What I need from you

Handing over a patient on the phone to a senior is regularly included as a core skill to be assessed in examinations.

You’ll notice that right at the very beginning of the proforma in this photo (taken by me in the Resus room at Queens Medical Centre, Nottingham) it says ‘Presenting Complaint’. In other proformas I’ve seen this is also written as ‘Reason for call’. This makes a big impact on how easy the handover is for the other person. For example:

“Hi, is that the surgical registrar on call? My name is Jamie I’m one of the doctors in the Emergency Department. I’ve got a 20 year old man called John Smith down here who’s got lower right abdominal pain. He’s normally well and takes no medications. The pain started yesterday near his belly button and has moved to his right lower abdomen. He’s been vomiting and has a fever. His inflammatory markers are raised. I think he has appendicitis and would like to refer him to you for assessment.”

OR

“Hi, is that the surgical registrar on call? My name is Jamie I’m one of the doctors in the Emergency Department. I’d like to refer a patient for assessment who I think has appendicitis. He’s a 20 year old man called John Smith who’s got lower right abdominal pain. He’s normally well and takes no medications. The pain started yesterday near his belly button and has moved to his right lower abdomen. He’s been vomiting and has a fever. His inflammatory markers are raised. Could I please send him for assessment?”

Both are the same story with the same intended message - I’ve got a patient with appendicitis I’d like to refer. But which one would be easier for a tired, stressed surgeon on call to follow?

We can use this simple hack to make our presentations more effective as well. Rather than our audience sitting there trying to formulate their own ideas and meaning, which risks them either taking the wrong message home or just giving up, we must be explicit from the beginning.

“Hello my name is Jamie. I’m going to talk about diabetic ketoacidosis which affects 4% of our patients with Type 1 Diabetes. In particular I’m going to focus on three key points: what causes DKA, the three features we need to make a diagnosis, and how the treatment for DKA is different from other diabetic emergencies and why that is important.”

Your audience immediately knows what is coming and what to look out for without any ambiguity. Communication is based on stories. Knowing what is coming actually helps us follow that story. The real spoiler is that we love spoilers. Don’t try and pull a rabbit from the hat. Punchlines are for jokes. Be clear with what you want.

Thanks for reading

- Jamie

Game of thrones.gif

"Obviously a major malfunction" - how unrealistic targets, organisational failings and misuse of statistics destroyed Challenger

AP8601281739-1.jpg

There is a saying commonly misattributed to Gene Kranz, the Apollo 13 flight director: failure is not an option. In a way that’s true. Failure isn’t an option; I would say it’s inevitable in any complicated system. Most of us work in one organisation or another. All of us rely on various organisations in our day to day lives. I work in the National Health Service, one of 1.5 million people. A complex system doing complex work.

In a recent musing I looked at how poor communication through PowerPoint had helped destroy the space shuttle Columbia in 2003. That, of course, was the second shuttle disaster. In this musing I’m going to look at the first.

This is the story of how NASA was arrogant; of unrealistic targets, of disconnect between seniors and those on the shop floor and of the misuse of statistics. It’s a story of the science of failure and how failure is inevitable. This is the story of the Challenger disaster.

“An accident rooted in history”

It’s January 28th 1986 at Cape Canaveral, Florida. 73 seconds after launching the space shuttle Challenger explodes. All seven of its crew are lost. Over the tannoy a distraught audience hears the words, “obviously a major malfunction.” After the horror come the questions.

The Rogers Commission is formed to investigate the disaster. Amongst its members are astronaut Sally Ride, Air Force General Donald Kutyna, Neil Armstrong, the first man on the moon, and Professor Richard Feynman; legendary quantum physicist, bongo enthusiast and educator.

The components of the space shuttle system (From https://www.nasa.gov/returntoflight/system/system_STS.html)

The shuttle programme was designed to be as reusable as possible. Not only was the orbiter itself reused (this was Challenger’s tenth mission) but the two solid rocket boosters (SRBs) were also retrieved and re-serviced for each launch. The cause of the Challenger disaster was found to be a flaw in the right SRB. The SRBs were not one long section but rather several which connected with two rubber O-rings (a primary and a secondary) sealing the join. The commission discovered longstanding concerns regarding the O-rings.

In January 1985, following a launch of the shuttle Discovery, soot was found between the O-rings, indicating that the primary ring hadn’t maintained a seal. That launch had been the coldest yet, at about 12 degrees Celsius. At that temperature the rubber contracted and became brittle, making it harder to maintain a seal. On other missions the primary ring was found nearly completely eroded through. The flawed O-ring design had been known about since 1977, leading the commission to describe Challenger as “an accident rooted in history.”

The forecast for the launch of Challenger would break the cold temperature record of Discovery: minus one degree Celsius. On the eve of the launch engineers from Morton Thiokol alerted NASA managers to the danger of O-ring failure. They advised waiting for a warmer launch day. NASA however pushed back and asked for proof of failure rather than proof of safety. An impossibility.

“My God Thiokol, when do you want me to launch? Next April?”

Lawrence Mulloy, SRB Manager at NASA

NASA pressed Morton Thiokol managers to overrule their engineers and approve the launch. On the morning of the 28th the forecast was proved right and the launch site was covered with ice. Reviewing launch footage the Rogers Commission found that in the cold temperature the O-rings on the right SRB had failed to maintain a seal. 0.678 seconds into the launch grey smoke was seen escaping the right SRB. On ignition the SRB casing expanded slightly and the rings should have moved with the casing to maintain the seal. However, at minus one degree Celsius they were too brittle and failed to do so. This should have caused Challenger to explode on the launch pad, but aluminium oxides from the rocket fuel filled the damaged joint and did the job of the O-rings by sealing the site. This temporary seal allowed Challenger to lift off.

This piece of good fortune might have allowed Challenger and its crew to survive. Sadly, 58.788 seconds into the launch Challenger hit a strong wind shear which dislodged the aluminium oxide seal. This allowed hot gas to escape and ignite. The right SRB burned through its joint to the external tank, coming loose and colliding with it. This caused a fireball which ignited the whole stack.

Challenger disintegrated and the crew cabin was sent into free fall before crashing into the sea. When the cabin was retrieved from the sea bed the personal safety equipment of three of the crew had been activated suggesting they survived the explosion but not the crash into the sea. The horrible truth is that it is possible they were conscious for at least a part of the free fall. Two minutes and forty five seconds.

So why the push back from NASA? Why did they proceed when there were concerns about the safety of the O-rings? This is where we have to look at NASA as an organisation that arrogantly assumed it could guarantee safety. That arrogance included setting its own unrealistic targets.

NASA’s unrealistic targets

NASA had been through decades of boom and bust. The sixties had begun with them lagging behind the Soviets in the space race and finished with the stars and stripes planted on the moon. Yet the political enthusiasm triggered by President Kennedy and the Apollo missions had dried up and with it the public’s enthusiasm also waned. The economic troubles of the seventies were now followed by the fiscal conservatism of President Reagan. The money had dried up. NASA managers looked to shape the space programme in a way to fit the new economic order.

First, space shuttles would be reusable. Second, NASA made bold promises to the government. Their space shuttles would be so reliable and easy to use there would be no need to spend money on any military space programme; instead give the money to NASA to launch spy satellites. In between any government mission the shuttles would be a source of income as the private sector paid to use them. In short, the shuttle would be a dependable bus service to space. NASA promised that they could complete sixty missions a year with two shuttles at any one time ready to launch. This promise meant the pressure was immediately on to perform.

Four shuttles were initially built: Atlantis, Challenger, Columbia and Discovery. The first shuttle to launch was Columbia on 12th April 1981, one of two missions that year. In 1985 nine shuttle missions were completed. This was a peak that NASA would never exceed. By 1986 the target of sixty flights a year was becoming a monkey on the back of NASA. STS-51-L’s launch date had been pushed back five times, due to bad weather and to the previous mission itself being delayed seven times. The delays to that previous mission were even more embarrassing as Congressman Bill Nelson was part of its crew. Expectation was mounting, and not just from the government.

Partly in order to inspire public interest in the shuttle programme, the ‘Teacher in Space Project’ had been created in 1984 to carry teachers into space as civilian members of future shuttle crews. From 11,000 completed applications one teacher, Christa McAuliffe from New Hampshire, was chosen to fly on Challenger as the first civilian in space. She would deliver two fifteen-minute lessons from space to be watched by school children in their classrooms. The project worked. There was widespread interest in the mission, with the ‘first teacher in space’ becoming something of a celebrity. It also created more pressure. McAuliffe was due to deliver her lessons on Day 4 of the mission. Launching on 28th January meant Day 4 would be a Friday. Any further delays and Day 4 would fall on the weekend; there wouldn’t be any children in school to watch her lessons. Fatefully, the interest also meant 17% of Americans would watch Challenger’s launch on television.

NASA were never able to get anywhere close to their target of sixty missions a year. They were caught out by the amount of refurbishment needed after each shuttle flight to get the orbiter and solid rocket boosters ready to be used again. They were hamstrung immediately from conception by an unrealistic target they never should have made. Their move to inspire public interest arguably increased demand to perform. But they had more problems including a disconnect between senior staff and those on the ground floor.

Organisational failings

During the Rogers Commission NASA managers quoted the risk of a catastrophic accident (one that would cause loss of craft and life) befalling their shuttles as 1 in 100,000. Feynman found this figure ludicrous. A risk of 1 in 100,000 meant that NASA could expect to launch a shuttle every day for 274 years before having a catastrophic accident. The figure was found to have been calculated as a necessity; it had to be that high. It had been used to reassure both the government and astronauts, and it had helped encourage a civilian to agree to be part of the mission. Once that figure was agreed NASA managers had worked backwards to make sure that the safety figures for all the shuttle components combined to give an overall risk of 1 in 100,000.

NASA engineers knew this to be the case and formed their own opinion of risk. Feynman spoke to them directly. They perceived the risk to be somewhere between 1 in 50 and 1 in 200. Assuming NASA managed to launch sixty missions a year, that meant their engineers expected a catastrophic accident somewhere between once a year and once every three years. As it turned out, the Challenger disaster occurred on the 25th shuttle mission. There was a clear disconnect between the perceptions of managers and those with hands-on experience regarding the shuttle programme’s safety. But there were also fundamental errors when it came to calculating how safe the shuttle programme was.
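
To see why Feynman called the managers’ figure ludicrous, it helps to put the two estimates side by side. Here is a minimal sketch of the arithmetic, assuming only the numbers quoted above (1 in 100,000 from the managers, 1 in 50 to 1 in 200 from the engineers, and the target of sixty launches a year); it is not NASA’s actual method.

```python
# Minimal sketch: turning a per-launch risk into "years until an expected failure".
# All figures come from the text above; nothing here is NASA's own calculation.

LAUNCHES_PER_YEAR = 60  # NASA's stated target

def years_to_expected_failure(risk_per_launch: float, launches_per_year: int) -> float:
    """Expected number of launches before one failure, expressed in years."""
    expected_launches = 1 / risk_per_launch
    return expected_launches / launches_per_year

# Managers' figure: 1 in 100,000 per launch, i.e. a launch a day for ~274 years
print(100_000 / 365)  # ~273.9 years of daily launches before an expected failure

# Engineers' range: 1 in 50 to 1 in 200 per launch, at sixty launches a year
for risk in (1 / 50, 1 / 200):
    years = years_to_expected_failure(risk, LAUNCHES_PER_YEAR)
    print(f"risk {risk:.3f} per launch -> expected failure within ~{years:.1f} years")
# Roughly once a year to once every three years, just as the engineers feared.
```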

Misusing statistics

One of the safety figures NASA included in their 1 in 100,000 calculation involved the O-rings responsible for the disaster. NASA had given the O-rings a safety factor of 3. This was based on test results which showed that the O-rings could maintain a seal despite being burnt a third of the way through. Feynman again tore this argument apart. A safety factor of 3 actually means that something can withstand conditions three times those it is designed for. He used the analogy of a bridge built to hold 1,000 pounds: if it can hold a 3,000 pound load, it has a safety factor of 3. If a 1,000 pound truck drove over the bridge and it cracked a third of the way through, then the bridge would be defective, even if it managed to still hold the truck. The O-rings shouldn’t have burnt through at all. Regardless of them still maintaining a seal, the test results actually showed that they were defective. The safety factor for the O-rings was therefore not 3. It was zero.

NASA misused the definitions and values of statistics to ‘sell’ the space shuttle as safer than it was. There was an assumption of total control. No American astronaut had ever been killed on a mission. Even when a mission went wrong, like Apollo 13, the astronauts were brought home safely. NASA were drunk on their own reputation.
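
Feynman’s point can be reduced to a one-line definition. A minimal sketch using his bridge analogy (the 1,000 lb and 3,000 lb figures come from the text; this is an illustration, not a NASA calculation):

```python
# Minimal sketch of the safety-factor definition Feynman used.

def safety_factor(load_withstood: float, design_load: float) -> float:
    """How many times its design load a component can actually withstand."""
    return load_withstood / design_load

print(safety_factor(3000, 1000))  # 3.0: a bridge built for 1,000 lb that holds 3,000 lb

# The O-rings were damaged (eroded a third of the way through) at their ordinary
# operating conditions. Damage at the design load means the margin above that
# load is unknown and cannot honestly be called a safety factor of 3.
```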

Aftermath

The Rogers Commission Report was published on 9th June 1986. Feynman was concerned that the report was too lenient on NASA and so insisted his own thoughts were published as Appendix F. The investigation into Challenger would be his final adventure; he was terminally ill with cancer during the hearings and died in 1988. Sally Ride would also be part of the team investigating the Columbia disaster, the only person to serve on both investigations. After she died in 2012 Kutyna revealed she had been the person discreetly pointing the commission towards the faulty O-rings. The shuttle programme underwent a major redesign and it would be two years before there was another mission.

Sadly, the investigation following the Columbia disaster found that NASA had failed to learn the lessons of Challenger, with similar organisational dysfunction. The programme was retired in 2011 after 30 years, 133 successful missions and two tragedies. Since then NASA has been using the Russian Soyuz rocket programme to get to space.

The science of failure

Failure isn’t an option. It’s inevitable. By its nature the shuttle programme was always experimental at best. It was wrong to pretend otherwise. Feynman would later compare NASA’s attitude to safety to a child believing that running across the road is safe because they didn’t get run over. In a system of over two million parts to have complete control is a fallacy.

We may not all work in spaceflight but Challenger and then Columbia offer stark lessons in human factors we should all learn from. A system may seem perfect because its imperfection is yet to be found, or has been ignored or misunderstood.

The key lesson is this: We may think our systems are safe, but how will we really know?

"For a successful technology, reality must take precedence over public relations,

for Nature cannot be fooled."

Professor Richard Feynman

The world’s first forensic scientist

dirt-88363_960_720.jpg

Our setting is a rural Chinese village. A man is found stabbed and hacked to death. Local investigators perform a series of experiments with an animal carcass looking at the type of wounds caused by different shaped blades and determine that the man had been killed with a sickle. The magistrate calls all of the owners of a sickle together. The ten or so suspects all deny murder. The sickles are examined, all are clean with nothing to give away being a murder weapon. Most in the village believe the crime won’t be solved. The magistrate then orders all of the suspects to stand in a field and place their sickle on the ground before stepping back. They all stand and wait in the hot afternoon sun. It’s an unusual sight. At first nothing happens. Eventually a metallic green fly lands on one of the sickles. It’s joined by another. And another. And another. The sickle’s owner starts to look very nervous as more and more flies land on his sickle and ignore everyone else’s. The magistrate smiles. He knows that the murderer would clean his weapon. But there would be tiny fragments of blood, bone and flesh invisible to the human eye but not beyond a fly’s sense of smell. The owner of the sickle breaks down and confesses. He’s arrested and taken away.

I think it’s safe to say that we love forensic science dramas. They’re all of a type: low lit metallic labs, ultraviolet lights, an array of brilliant yet troubled scientists and detectives dredging the depths of human depravity. Forensic science is the cornerstone of our criminal justice system, a vital weapon in fighting crime. Yet the tale of the flies and the sickle didn’t take place in 2019. It didn’t even take place this century. It was 1235 CE.

This account, the first recorded example of what we would now call forensic entomology, was recorded in Collected Cases of Injustice Rectified, a Chinese book written in 1247 by Song Ci, the world’s first forensic scientist. This is his story.

Song Ci from a Chinese Stamp (From China Post)

Song Ci was born in 1186 in southeast China, during the period of China’s history called the Song dynasty (960-1279 CE). This period saw a number of political and administrative reforms, including developing the justice system to create the post of sheriff. Sheriffs were employed to investigate crime, determine the cause of death and interrogate and prosecute suspects. With this developed a framework for investigating crime.

The son of a bureaucrat, he was educated into a life of scholarship. First training as a physician, he found his way into the world of justice and was appointed judge of prisons four times during his lifetime.

Bust of Shen Kuo (From Lapham’s Quarterly)

This was a time of polymaths. Song Ci was inspired by the work of Shen Kuo (1031-1095), a man who excelled in many different areas of philosophy, science and mathematics. Shen Kuo argued for autopsy and dissected the bodies of criminals, in the process proving centuries-old theories about human anatomy wrong. In the UK such a practice would not be supported in legislation for another seven centuries.

Song Ci built on Shen Kuo’s work, observing the practice of magistrates and compiling recommendations based on good practice. This would form his book Collected Cases of Injustice Rectified: fifty-three chapters in five volumes. The first volume contained an imperial decree on the inspection of bodies and injuries. The second volume was designed as instruction in post-mortem examination. The remaining volumes helped identify cause of death and the treatment of certain injuries.

Of note, the book outlines the responsibilities of the official as well as what would now be considered routine practices, such as the importance of accurate notes and the need to be present during the post-mortem (including not being put off by bad smells). There are procedures for medical examination and specific advice on questioning suspects and interviewing family members.

Forensically, the richest part of the text is the section titled "Difficult Cases". This explains how an official could piece together evidence when the cause of death appears to be something else, such as strangulation masked as suicidal hanging or intentional drowning made to look accidental. A pharmacopoeia is also provided to help reveal obscure injuries. There is a detailed description of determining time of death by the rate of decomposition and whether the corpse has been moved.

Whilst forensic science has obviously progressed since the work of Song Ci what is striking is how the foundations of good forensic work have not changed. He wrote about determining self-inflicted wounds and suicide based on the direction of wounds or the disposition of the body. He recommended noting tiny details such as looking underneath fingernails or in various orifices for clues of foul-play. Standard procedure today.

Song Ci died in 1249 with little fanfare. However, in modern times there has been an increased appreciation of his work. Just think how few 13th-century scientific publications could have remained as relevant as his after nearly a millennium.

There is an Asian maxim that “China is the ocean that salts all the rivers that flow into it”. All of us try to contribute in some way to the river of life. Any practitioner or appreciator of forensics must recognise the tremendous contribution Song Ci and his contemporaries made to progress the flow of justice.

Thanks for reading

- Jamie

Those who cannot remember the past: how we forgot the first great plague and how we're failing to remember lessons with Ebola

Plague-in-the-Streets.png

“Those who cannot remember the past are condemned to repeat it”

George Santayana

To look at the History of Medicine is to realise how often diseases recur and, sadly, how humans repeat the same mistakes. It is easy to look back with the benefit of hindsight and with modern medical knowledge but we must remember how we remain as fallible as our forebears.

Our first introduction to the History of Medicine is often through learning about the Black Death at school. The story is very familiar; between 1347 and 1351 plague swept the whole of Europe, killing between a third and two-thirds of the continent’s population. At the time it was felt the end of the world was coming as a disease never before seen took up to 200 million lives. However, this was actually the second time plague had hit Europe. Nearly a thousand years earlier the first plague pandemic had devastated parts of Europe and the Mediterranean. Half the population of Europe were affected. This was Justinian’s plague, named for the Eastern Roman Emperor whose reign the disease helped to define. Yet despite the carnage Europe forgot and had no preparation when plague returned.

Between 2014 and 2016 nearly 30,000 people were hit by the Ebola outbreak in West Africa. Our systems were found lacking as the disease struck in a way never before seen. We said we would learn from our mistakes. Never again.

Yet the current Ebola epidemic in the Democratic Republic of the Congo (DRC) is proving that even today it is hard to remember the lessons of the past and how disease will find any hole in our memory. This is the story of Justinian’s plague, the lessons we failed to learn then and now as we struggle with Ebola.

Justinian’s Plague

Justinian I. Contemporary portrait in the Basilica of San Vitale, Ravenna. From Wikipedia.

It’s 542 CE in Constantinople (now Istanbul). A century earlier the Western provinces of the Roman Empire had collapsed. The Eastern empire continues in what will be called the Eastern Roman or Byzantine Empire. Constantinople is the capital city, then as now, a melting pot between Europe and Asia. Since 527 CE this Empire has been ruled by Justinian I, an absolute monarch determined to return to the glory years of conquest.

The Empire has already expanded to cover swathes of Northern Africa. Justinian’s focus is now on reclaiming Italy. The Empire is confident and proud, a jewel in an otherwise divided Europe now in the Dark Ages.

The Eastern Roman Empire at the accession of Justinian I (purple) in 527 CE and the lands conquered by the end of his reign in 565 CE (yellow). From US Military Academy.

Procopius of Caesarea (Creative Commons)


The main contemporary chronicler of the plague of Justinian, Procopius of Caesarea (500-565 CE), identified the plague as arriving in Egypt on the Nile’s north and east shores. From there it spread to Alexandria in the north of Egypt and east to Palestine. The Nile was a major route of trade from the great lakes of Africa to the south. We now know that black rats on board trade ships brought the plague initially from China and India via Africa and the Nile to Justinian’s Empire.


Procopius noted that there had been a particularly long period of cold weather in Southern Italy causing famine and migration throughout the Empire. Perfect conditions to help a disease spread.

In his book Secret History Procopius detailed the symptoms of this new plague: delusions, nightmares, fevers and swellings in the groin, armpits, and behind their ears. For most came an agonising death. Procopius was of no doubt that this was God’s vengeance against Justinian, a man he claimed was supernatural and demonic.

Justinian’s war in Italy helped spread disease, but so did peace in the areas he’d conquered. The established trade routes in Northern Africa and Eastern Europe, with Constantinople at the centre, formed a network of contagion. Plague swept throughout the Mediterranean. Constantinople was in the grip of the plague for four months, during which time Procopius alleged 10,000 people died a day in the city. Modern historians believe this figure to be closer to a still incredible 5,000 a day. Corpses littered the city streets. In scenes that presaged the Black Death, mass plague pits were dug, with bodies thrown in and piled on top of each other. Other victims were disposed of at sea. Justinian was struck down but survived. Others in Constantinople were not so lucky: in just four months up to 40% of its citizens died.

The plague’s legacy

The plague continued to weaken the Empire, making it harder to defend. Like the medieval kings after him Justinian I struggled to maintain the status quo and tried to impose the same levels of taxation and expansion. He died in 565 CE. His obsession with empire building has led to his legacy as the ‘last Roman’. By the end of the sixth century much of the land in Italy Justinian had conquered had been lost but the Empire had pushed east into Persia. Far from Constantinople the plague continued in the countryside. The plague finally vanished in 750 CE by which point up to 50 million people had died, 25% of the population of the Empire.

Procopius’s description of the Justinian plague sounds a lot like bubonic plague. This suspicion was confirmed by recent research.

Yersinia pestis bacteria, Creative Commons

At two separate graves in Bavaria bacterial DNA was extracted from the remains of Justinian plague victims. The DNA matched that of Yersinia pestis, the bacterium which causes bubonic plague. The DNA was analysed and found to be most closely related to Y. pestis strains still endemic to this day in Central Asia. This supports a route of infection via trade from Asia to Europe.

After 750 CE plague vanished from Europe. New conquerors came and went with the end of the Dark Ages and the rise of the Middle Ages. Europeans forgot about plague. In 1347 they would get a very nasty reminder.

It’s very easy now in our halcyon era of medical advances to feel somewhat smug. Yes, an interesting story but it wouldn’t happen now. Medieval scholars didn’t have germ theory. Or a way of easily accessing Procopius’s work. Things are different now.

We’d study Justinian’s plague with its high mortality. We’d identify the cause. We’d work backwards and spot how the trade link with Asia was the route of infection. We’d work to identify potential outbreaks in their early stages in Asia. By the time the second plague epidemic was just starting we’d notice it. There would be warnings to spot disease in travelers and protocols for dealing with mass casualties and disposal of bodies. We’d initiate rapid treatment and vaccination if possible. We’d be OK.

Ebola shows how hard this supposedly simple process remains.

Ebola: A Modern Plague

The Ebola virus (Creative Commons)

Ebola Viral Disease is a type of viral haemorrhagic fever first identified in 1976 during an outbreak in what is now South Sudan and the DRC. Caused by a spaghetti-like virus known as a filovirus, the disease causes severe dehydration through vomiting and diarrhoea before internal and external bleeding can develop. Named for the Ebola River where it was first identified, it spreads by direct human contact and has a mortality rate varying from 25% to 90%. An epidemic has been ongoing in the DRC since August 2018. We are in our fourth decade of knowing about Ebola. And five years ago we were given the biggest warning yet about its danger.

Up until 2014 the largest outbreak of Ebola had affected 315 people. Other outbreaks were much smaller. Ebola seemed to burn brightly but with only a few embers. In 2014 it became a forest fire.

A healthcare worker during the Ebola outbreak of 2014-16 (From CNN.com)

The West Africa Epidemic of 2014-16 hit Guinea, Sierra Leone and Liberia. The disease showed its potential in our age of global travel as the first cases appeared in America and Europe. In all there were 28,160 reported cases. 11,308 people died. Ebola caught the world napping. What had been a rare disease of Africa was potentially a threat to us all. Suspicion of foreign healthcare workers and miscommunication about the causes of Ebola were blamed for helping to spread the disease. Yet there was hope as promising experimental vaccines were put into production.

As the forest fire finally died down there was a chance to reflect. There were many publications, including from Médecins sans frontières, about the lessons learnt from Ebola and how not to repeat the mistakes of the past. These were all along similar themes: the importance of trained frontline staff, rapid identification of the disease, engaging and informing local communities, employing simple yet effective methods to prevent disease spread and the use of the new vaccine to protect contacts and contacts of contacts. There was a lot of criticism about the speed of the World Health Organisation’s response but also a feeling that, with new tools and lessons learnt, things would be different next time.

When Ebola surfaced again last year in the DRC there was initial hope that lessons were learnt. Over 100,000 people have been vaccinated; a new weapon. However, the disease continues a year on with over 1000 cases and over 800 fatalities and fresh concern that this outbreak is far from over.

There remain delays in identifying patients with Ebola; not surprising as the early symptoms mimic more common diseases such as malaria. As a result patients are not isolated quickly enough and may infect others before their test results are back. The talk of engaging communities is also falling flat. In a region torn apart by decades of civil unrest there is widespread mistrust of authorities, with blame falling on the Ebola units themselves for causing death. It is estimated that 30% of patients are staying at home, remaining a potent vector for disease rather than coming forward. There has also been violence against healthcare workers and hospitals as a result of this fear. Reassuringly, where local communities and healthcare workers have come together Ebola has been stopped, but this is not the norm, and behavioural scientists are being used to help connect with locals. Despite the lessons learnt Ebola is continuing to be a difficult adversary.

It is easy in the West to feel we are immune from misinformation and fear. Yet look at the current measles epidemic in New York State. Look at the anti-vaccination movement, labelled a “public health timebomb” by Simon Stevens, the chief executive of NHS England, last week. We are no more immune than anyone else to irrationality. Nor should we be too proud to learn the lessons of the past; the ‘ring’ style of vaccinating contacts against Ebola is the same as that used during the successful campaign to eradicate smallpox over four decades ago.

Medical advances have come on in ways no-one in the Middle Ages could have foreseen. We have never had more ways to share our knowledge of disease or so many ways to prevent suffering. Yet people remain the same. And that’s the tricky part. Let’s not forget that bit.

Thanks for reading

- Jamie

Bullet Holes & Bias: The Story of Abraham Wald

“History is written by the victors”

Sir Winston Churchill

It is some achievement if we can be acknowledged as succeeding in our field of work. If that field of work happens to be helping to win the most bloody conflict in history then our achievement deserves legendary status. What then do you say of a man who not only succeeded in his field and helped the Allies win the Second World War but whose work continues to resonate throughout life today? Abraham Wald was a statistician whose unique insight echoes in areas as diverse as clinical research, finance and the modern celebrity obsession. This is his story and the story of survivorship bias. This is the story of why we must take a step back and think.

Abraham Wald and Bullet Holes in Planes

Wald was born in 1902 in the then Austro-Hungarian Empire. After graduating in Mathematics he lectured in Economics in Vienna. As a Jew, Wald and his family faced persecution following the Anschluss between Nazi Germany and Austria in 1938, and so they emigrated to the USA after he was offered an academic position there. During World War Two Wald was a member of the Statistical Research Group (SRG) at Columbia University as the US tried to approach military problems with research methodology.

One problem the US military faced was how to reduce aircraft losses. They studied the damage to their planes returning from combat. By mapping out the damage they found their planes were receiving most bullet holes to the wings and tail. The engine was spared.

Distribution of bullet holes in aircraft that returned to base after missions. Sketch by Wald. In “Visual Revelations” by Howard Wainer, Lawrence Erlbaum and Associates, 1997.

Abraham Wald

The US military’s conclusion was simple: the wings and tail are obviously vulnerable to receiving bullets. We need to increase armour to these areas. Wald stepped in. His conclusion was surprising: don’t armour the wings and tail. Armour the engine.

Wald’s insight and reasoning were based on understanding what we now call survivorship bias. Bias is any factor in the research process which skews the results. Survivorship bias describes the error of looking only at subjects who’ve reached a certain point without considering the (often invisible) subjects who haven’t. In the case of the US military they were only studying the planes which had returned to base following combat, i.e. the survivors. In other words, what their diagram of bullet holes actually showed was the areas where their planes could sustain damage and still be able to fly and bring their pilots home.

No matter what you’re studying if you’re only looking at the results you want and not the whole then you’re subject to survivorship bias.

Wald surmised that it was actually the engines which were vulnerable: if these were hit the plane and its pilot went down and didn’t return to base to be counted in the research. The military listened and armoured the engine, not the wings and tail.
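
The logic is easy to demonstrate with a toy simulation. This is a minimal sketch, not Wald’s actual method; the “three hits per plane” and the probability that a hit to each area downs the plane are invented purely for illustration:

```python
# Toy simulation of survivorship bias: hits are spread evenly across the plane,
# but engine hits are far more likely to bring it down. Counting only the
# planes that return makes the engine look deceptively untouched.
import random

random.seed(1)

AREAS = ["engine", "wings", "tail"]
P_DOWNED = {"engine": 0.8, "wings": 0.1, "tail": 0.1}  # hypothetical per-hit risk

hits_on_survivors = {area: 0 for area in AREAS}
hits_on_all_planes = {area: 0 for area in AREAS}

for _ in range(10_000):                       # 10,000 sorties
    hits = random.choices(AREAS, k=3)         # each plane takes three random hits
    survived = all(random.random() > P_DOWNED[a] for a in hits)
    for a in hits:
        hits_on_all_planes[a] += 1
        if survived:
            hits_on_survivors[a] += 1

print("Bullet holes counted on returning planes:", hits_on_survivors)
print("Bullet holes across every plane that flew:", hits_on_all_planes)
# The survivors show few engine hits, not because engines were rarely hit,
# but because planes hit in the engine rarely came home to be counted.
```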

The US Air Force suffered over 88,000 casualties during the Second World War. Without Wald’s research this undoubtedly would have been higher. But his insight continues to this day and has become an issue in clinical research, financial markets and the people we choose to look up to.

Survivorship Bias in Clinical Research

In 2010 in Boston, Massachusetts a trial was conducted at Harvard Medical School and Beth Israel Deaconess Medical Center (BIDMC) into improving patient survival following trauma. A major problem following trauma is the development of abnormal blood clotting, or coagulopathy. This hinders patients in stemming any bleeding they have and increases their chances of bleeding to death. Within our blood are naturally occurring proteins called clotting factors which act to encourage blood clotting. The team at Harvard and BIDMC investigated whether giving trauma patients one of these factors would improve survival. The study was aimed at patients who had received 4-8 blood transfusions within 12 hours of their injury. They hoped to recruit 1,502 patients but abandoned the trial after recruiting only 573.

Why? Survivorship bias. The trial only included patients who survived their initial accident and then received care in the Emergency Department before going to Intensive Care with enough time passed to have been given at least 4 bags of blood. Those patients who died prior to hospital or in the Emergency Department were not included. The team concluded that due to rising standards in emergency care it was actually very difficult to find patients suitable for the trial. It was therefore pointless to continue with the research.

This research was not the only piece reporting survivorship bias in trauma research. Does this matter? Yes. Trauma is the biggest cause of death worldwide in the under-45s. About 5.8 million people die worldwide every year due to trauma. That’s more than the annual total of deaths due to malaria, tuberculosis and HIV/AIDS. Combined. Or, to put it another way, one third of the total number of deaths in combat during the whole of the Second World War. Every year. Anything that impedes research into trauma has to be understood. Otherwise it costs lives. But 90% of injury deaths occur in less economically developed countries. Yet we perform research in Major Trauma Units in the West. Survivorship bias again.

As our understanding of survivorship bias grows, so we are realising that no area of Medicine is safe. It clouds outcomes in surgery and anti-microbial research. It touches cancer research. Cancer survival rates are usually expressed as 5-year survival: the percentage of patients alive 5 years after diagnosis. But this doesn’t include the patients who died of something other than cancer and so may be falsely optimistic. However, Medicine is only a part of the human experience that survivorship bias touches.

Survivorship Bias in Financial Markets & our Role Models

Between 1950 and 1980 Mexico industrialised at an amazing rate achieving an average of 6.5% growth annually. The ‘Mexico Miracle’ was held up as an example of how to run an economy as well as encouraging investment into Latin American markets. However, since 1980 the miracle has run out and never returned. Again, looking only at the successes and not the failures can cost investors a lot of money.

Say I’m a fund manager and I approach you asking for investment. I quote an average of 1.8% growth across my funds. Sensibly you do your research and request my full portfolio:

Funds.jpeg

It is common practice in the fund market to quote only active funds. Poorly performing funds, especially those with negative growth, are closed. If we only look at my active funds in this example then yes, my average growth is 1.8%. You might invest in me. If however you look at my whole portfolio then my average performance is actually -0.2% growth. You probably wouldn’t invest then.
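
The arithmetic is worth spelling out. A minimal sketch with hypothetical fund returns (the real figures are in the table above); the numbers are chosen only so that the surviving funds average 1.8% while the full record averages -0.2%, mirroring the example:

```python
# Survivorship bias in fund performance: quoting only the funds still open.

active_funds = {"Fund A": 2.5, "Fund B": 2.1, "Fund C": 1.4, "Fund D": 1.2}     # % growth
closed_funds = {"Fund E": -1.8, "Fund F": -2.0, "Fund G": -2.4, "Fund H": -2.6}

def average(returns):
    returns = list(returns)
    return sum(returns) / len(returns)

quoted = average(active_funds.values())                            # what I advertise
whole_record = average({**active_funds, **closed_funds}.values())  # the full story

print(f"Average growth, active funds only: {quoted:.1f}%")       # 1.8%
print(f"Average growth, every fund I ran: {whole_record:.1f}%")   # -0.2%
```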

Yet survivorship bias has a slightly less tangible effect on modern life too. How often is Mark Zuckerberg held up as an example for anyone working in business? We focus on the one self-made billionaire who dropped out of education before making their fortune and not the thousands who followed the same path but failed. A single actor or sports star is used as a case study on how to succeed and we are encouraged to follow their path, never mind the many who do the same and fail. Think as well about how we look at other aspects of life. How often do we look at one car still in use after 50 years or one building still standing after centuries and say, “we don’t make them like they used to”? We overlook how many cars or buildings of a similar age have now rusted or crumbled away. All of this is the same thought process that went through the minds of the US military as they counted bullet holes in their planes.

To the victor belong the spoils but we must always remember the danger of only looking at the positive outcomes and ignoring those often invisible negatives. We must be aware of the need to see the whole picture and notice when we are not. With our appreciation of survivorship bias must also come an appreciation of Abraham Wald. A man whose simple yet profound insight shows us the value of stepping back and thinking.

Thanks for reading

- Jamie

Death by PowerPoint: the slide that killed seven people

The space shuttle Columbia disintegrating in the atmosphere (Creative Commons)

We’ve all sat in those presentations. A speaker with a stream of slides full of text, monotonously reading them off as we read along. We’re so used to it we expect it. We accept it. We even consider it ‘learning’. As an educator I push against ‘death by PowerPoint’ and I'm fascinated with how we can improve the way we present and teach. The fact is we know that PowerPoint kills. Most often the only victims are our audience’s inspiration and interest. This, however, is the story of a PowerPoint slide that actually helped kill seven people.

January 16th 2003. NASA Mission STS-107 is underway. The Space Shuttle Columbia launches carrying its crew of seven to low orbit. Their objective was to study the effects of microgravity on the human body and on ants and spiders they had with them. Columbia had been the first Space Shuttle, first launched in 1981 and had been on 27 missions prior to this one. Whereas other shuttle crews had focused on work to the Hubble Space Telescope or to the International Space Station this mission was one of pure scientific research.

The launch proceeded as normal. The crew settled into their mission. They would spend 16 days in orbit, completing 80 experiments. One day into their mission it was clear to those back on Earth that something had gone wrong.

As a matter of protocol NASA staff reviewed footage from an external camera mounted to the fuel tank. At eighty-two seconds into the launch a piece of spray-on foam insulation (SOFI) fell from one of the ramps that attached the shuttle to its external fuel tank. As the crew rose at 28,968 kilometres per hour the piece of foam collided with one of the tiles on the outer edge of the shuttle’s left wing.

Frame of NASA launch footage showing the moment the foam struck the shuttle’s left wing (Creative Commons)

It was impossible to tell from Earth how much damage this foam, falling nine times faster than a fired bullet, would have caused when it collided with the wing. Foam falling during launch was nothing new. It had happened on four previous missions and was one of the reasons why the camera was there in the first place. But the tile the foam had struck was on the edge of the wing designed to protect the shuttle from the heat of Earth’s atmosphere during launch and re-entry. In space the shuttle was safe but NASA didn’t know how it would respond to re-entry. There were a number of options. The astronauts could perform a spacewalk and visually inspect the hull. NASA could launch another Space Shuttle to pick the crew up. Or they could risk re-entry.

NASA officials sat down with Boeing Corporation engineers who took them through three reports; a total of 28 slides. The salient point was that whilst there was data showing the tiles on the shuttle wing could tolerate being hit by foam, this was based on test conditions using foam more than 600 times smaller than the piece that had struck Columbia. This is the slide the engineers chose to illustrate this point:

NASA managers listened to the engineers and their PowerPoint. The engineers felt they had communicated the potential risks. NASA felt the engineers didn’t know what would happen but that all the data pointed to there not being enough damage to put the lives of the crew in danger. They rejected the other options and pushed ahead with Columbia re-entering Earth’s atmosphere as normal.

Columbia was scheduled to land at 0916 (EST) on February 1st 2003. Just before 0900, 61,170 metres above Dallas and travelling at 18 times the speed of sound, temperature readings on the shuttle’s left wing became abnormally high and then were lost. Tyre pressures on the left side were soon lost, as was communication with the crew. At 0912, as Columbia should have been approaching the runway, ground control heard reports from residents near Dallas that the shuttle had been seen disintegrating. Columbia was lost and with it her crew of seven. The oldest crew member was 48.

The shuttle programme was put on lockdown, grounded for two years as the investigation began. The cause of the accident became clear: a hole in a tile on the left wing caused by the foam let the wing dangerously overheat until the shuttle disintegrated.

The questions to answer included a very simple one: why, given that the foam strike had occurred at a force massively outside test conditions, had NASA proceeded with re-entry?

Edward Tufte, a professor at Yale University and an expert in communication, reviewed the slideshow the Boeing engineers had given NASA, in particular the slide above. His findings were tragically profound.

Firstly, the slide had a misleadingly reassuring title claiming that test data pointed to the tile being able to withstand the foam strike. This was not the case, but the title, centred in the largest font, makes it seem the salient, summary point of the slide. Boeing’s message was lost almost immediately.

Secondly, the slide contains four different bullet points with no explanation of what they mean, so interpretation is left up to the reader. Is number 1 the main bullet point? Do the bullet points become less important as they go on, or more? It’s not helped that the font sizes change as well. In all, with bullet points and indents, six levels of hierarchy were created. This allowed NASA managers to infer a hierarchy of importance in their heads: the writing lower down and in smaller font was ignored. Yet this was where the contradictory (and most important) information had been placed.

Thirdly, there is a huge amount of text: more than 100 words or figures on one slide. Two terms, ‘SOFI’ and ‘ramp’, both mean the same thing: the foam. Vague language is used throughout: ‘sufficient’ appears once, ‘significant’ or ‘significantly’ five times, with little or no quantifiable data. This left a lot open to the audience’s interpretation. How much is significant? Is that statistical significance, or something else?

Finally the single most important fact, that the foam strike had occurred at forces massively out of test conditions, is hidden at the very bottom. Twelve little words which the audience would have had to wade through more than 100 to get to. If they even managed to keep reading to that point. In the middle it does say that it is possible for the foam to damage the tile. This is in the smallest font, lost.

NASA’s subsequent report criticised technical aspects along with human factors. Their report mentioned an over-reliance on PowerPoint:

“The Board views the endemic use of PowerPoint briefing slides instead of technical papers as an illustration of the problematic methods of technical communication at NASA.”

Edward Tufte’s full report makes for fascinating reading. Since being released in 1987 PowerPoint has grown exponentially to the point where it is now estimated that thirty million PowerPoint presentations are made every day. Yet PowerPoint is blamed by academics for killing critical thought. Amazon’s CEO Jeff Bezos has banned it from meetings. Typing text on a screen and reading it out loud does not count as teaching. An audience reading text off the screen does not count as learning. Imagine if the engineers had put up a slide with just: “foam strike more than 600 times bigger than test data.” Maybe NASA would have listened. Maybe they wouldn’t have attempted re-entry. Next time you’re asked to give a talk remember Columbia. Don’t just jump to your laptop and write out slides of text. Think about your message. Don’t let that message be lost amongst text. Death by PowerPoint is a real thing. Sometimes literally.

Thanks for reading

- Jamie

Columbia’s final crew (from https://www.space.com/19436-columbia-disaster.html)


There is nothing new under the sun: the current New York measles epidemic and lessons from the first 'anti-vaxxers'

An 1807 cartoon showing ‘The Vaccination Monster’

What has been will be again,
    what has been done will be done again;
    there is nothing new under the sun.

Ecclesiastes 1:9

The State of New York is currently in the midst of an epidemic. Measles, once eradicated from the USA, has returned with a vengeance. Thanks to a rise in unvaccinated children fuelled by the ‘anti-vaxxer’ movement, 156 children in Rockland County have been infected by measles; 82.8% of these had never had even one MMR vaccine. With measles now rampant in the boroughs of Brooklyn and Queens the authorities have taken an unusual step. In New York in the USA, the home of liberty and personal choice, no unvaccinated under-18-year-old in Rockland County is now able to set foot in a public space. Parents of unvaccinated children who break this ban will face fines or jail.

In a previous blog I wrote about the fight against smallpox first using variolation (which sometimes caused infection) and then the invention of the world’s first vaccine. This musing is about how vaccination was made compulsory in the United Kingdom, the subsequent fight against it through a public campaign and how that movement raised its head again in the last few decades. This is the story of the first ‘anti-vaxxer’ movement and how the arguments regarding vaccination show there isn’t really anything new under the sun.

Early opposition to vaccination

Following Edward Jenner’s work into using cowpox to offer immunity against smallpox in 1796 the Royal Jennerian Society was established in 1803 to continue his research.

Even in these early days there was opposition to the vaccine. John Birch, the ‘surgeon extraordinary’ to the Prince of Wales, pamphleteered against Jenner’s work with arguments one might expect to see circulating today on social media:

A section of John Birch’s pamphlet from https://vaxopedia.org/2017/10/07/who-was-john-birch/

He of course did not mention how he was making a lot of money through inoculating patients against smallpox (a practice that vaccination would replace) or using novel treatments such as electrocution.

Wood painting caricature from 1808 showing Edward Jenner confronting opponents to his vaccine (note the dead at their feet) (Creative Commons)

Despite Birch’s efforts by 1840 the efficacy of Jenner’s vaccine was widely accepted. Decades before germ theory was established and viruses were identified we finally had a powerful weapon against a deadly disease. Between 1837 and 1840 a smallpox epidemic killed 6,400 people in London alone. Parliament was persuaded to legislate. The 1840 Vaccination Act made the unpredictable variolation illegal and made provision for free, optional smallpox vaccination.

At the time healthcare in the UK was largely unchanged since Tudor times. Parish-based charity had been the core of support for the sick and poor until workhouses were made the centre of welfare provision in 1834. With the workhouse came a stigma that illness and poverty were avoidable and to be punished. Government was dominated by two parties, the Whigs and the Tories, both of whom were non-interventionist, and the universal healthcare provided by the NHS was over a century away. Consider this laissez-faire backdrop with punitive welfare. The fact that free vaccination was provided is remarkable and I think reflects the giddy optimism at a future without ‘the speckled monster’ of smallpox.

The Anti-Vaccination Leagues

The Vaccination Act of 1853 went further. Now vaccination against smallpox was compulsory for all children born after 1st August 1853 within the first three months of their life with fines for parents who failed to comply. By the 1860s two-thirds of babies in the UK had been vaccinated.

There was immediate opposition to the 1853 Act with violent protests across the country. This was the state’s first step into the health of private citizens. The response seems to have been motivated in much the same way as the modern day opposition in the US to vaccination and universal healthcare in general: that health is a matter of private civil liberty and that vaccination caused undue distress and risk. In England and Wales in particular, although the penalties were rarely enforced, their presence alone seems to have been motivation for opposition. The Anti-Vaccination League in London was established in 1853 to allow dissenting voices to coalesce.

The Vaccination Act of 1867 extended the age by which a child had to be vaccinated to 14, with cumulative fines for non-compliance. That same year saw the formation of the Anti-Compulsory Vaccination League. They published the National Anti-Compulsory Vaccination Reporter newsletter, in which they listed their concerns, the first three being:

I. It is the bounden duty of parliament to protect all the rights of man.

II. By the vaccination acts, which trample upon the right of parents to protect their children from disease, parliament has reversed its function.

III. As parliament, instead of guarding the liberty of the subject, has invaded this liberty by rendering good health a crime, punishable by fine or imprisonment, inflicted on dutiful parents, parliament is deserving of public condemnation.

Further newsletters were founded over the following decades: the Anti-Vaccinator (founded 1869), the National Anti-Compulsory Vaccination Reporter (1874), and the Vaccination Inquirer (1879). All of these maintained political pressure against compulsory vaccination. Much like today, the main body of arguments focused on personal choice and on the testimony of parents alleging that their child had been injured or killed by vaccination. In Leicester in 1885 an anti-vaccination demonstration attracted 100,000 people, a staggering number when the city’s total population at the time was around 190,000.

A royal commission was called to advise on further vaccination policy. After seven years of deliberation, listening to evidence from across the spectrum of opinion, it published its findings in 1896. Smallpox vaccination was safe and effective. However, the commission advised against continuing compulsory vaccination. Following the 1898 Vaccination Act, parents who did not want their child to be vaccinated could ‘conscientiously object’ and be exempt. There was no further appetite for Parliament to intervene in the rights of parents. Even the fledgling socialist Labour Party, no enemy of government intervention, made non-compulsory vaccination one of its policies.

Whilst the two World Wars saw a change in public opinion towards a greater role in society for government, culminating in the creation of the National Health Service in 1948, vaccination has remained voluntary in the United Kingdom. The 20th century saw the advent of vaccines against several deadly diseases such as polio, measles, diphtheria and tetanus. In 1966 the World Health Organisation launched an ambitious worldwide vaccination programme, and in 1980 smallpox became the first disease to be eradicated by mankind. There were dreams of polio and measles going the same way. It was not to be.

Anti-vaccination re-emerges

Herd immunity is a key component of any effective vaccination programme. Not everyone can be vaccinated, and those who cannot rely on being surrounded by vaccinated people to prevent transmission. The level of vaccination in a population required for herd immunity varies between diseases; the accepted standard to prevent measles transmission is 90-95%.
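
As an aside, the 90-95% figure follows from a simple rule of thumb: if each case of a disease would, in a fully susceptible population, go on to infect R0 other people, transmission stalls once at least 1 - 1/R0 of the population is immune. A minimal sketch of the arithmetic, assuming the commonly quoted R0 range of roughly 12-18 for measles (the exact values here are illustrative):

```python
# Herd immunity threshold: the proportion of a population that must be immune
# so that, on average, each case infects fewer than one other person.
def herd_immunity_threshold(r0: float) -> float:
    if r0 <= 1:
        return 0.0  # a disease with R0 <= 1 fades out without herd immunity
    return 1 - 1 / r0

# Illustrative R0 values: roughly 12-18 is the range usually quoted for measles.
for r0 in (12, 15, 18):
    print(f"R0 = {r0}: ~{herd_immunity_threshold(r0):.0%} of the population needs to be immune")
```

With R0 in that range the threshold works out at roughly 92-94%, which is where the 90-95% standard comes from.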

On 28th February 1998 an article was published in the Lancet which claimed that the Measles, Mumps and Rubella (MMR) vaccine was linked to the development of developmental and digestive problems in children. Its lead author was Dr Andrew Wakefield, a gastroenterologist.

The infamous Lancet paper linking the MMR vaccine to developmental and digestive disorders

The paper sparked national panic over the safety of vaccination. The Prime Minister, Tony Blair, refused to answer whether his newborn son Leo had been vaccinated.

Except, just like John Birch nearly two centuries before him, Andrew Wakefield had held a lot back from the public and from his fellow authors. He was funded by a legal firm seeking to prosecute the companies who produce vaccines. This firm led him to the parents who formed the basis of his ‘research’. The link rested on the parents of twelve children recalling that their child first showed developmental and digestive symptoms after the MMR vaccine. Their testimony and recall alone were enough for Wakefield to claim a link between vaccination and autism. In research terms his findings were no more robust than the parental testimony used in the Victorian pamphlets. But the damage was done. The paper was retracted in 2010. Andrew Wakefield was struck off, as were some of his co-authors who did not practise due diligence. Sadly, this has only helped Wakefield’s ‘legend’ as he tours America spreading his message, tapping into the general ‘anti-truth’ populist movement. Tragically, if unsurprisingly, measles often follows in his wake.

Earlier this year the largest study to date investigating the link between MMR and autism was published. 657,461 children in Denmark were followed up over several years (compare that to Wakefield’s research, where he interviewed the parents of 12 children). No link between the vaccine and autism was shown. In fact, no large, high-level study has ever backed up Wakefield’s claim.

There are financial and political forces at work here. Anti-vaccination is worth big money. The National Vaccine Information Center in the US had an annual income of $1.2 million in 2017. And the people these groups target are economically and politically powerful. Recent research in America shows that parents who refuse vaccinations for their children are more likely to be white, educated and of higher income. They prize purity and liberty above all, emotional reasoning over logic. They vote. And their world view is prevalent in certain circles.

Tweet by Donald Trump 28th March 2014

In the UK in 2018 the rate of MMR vaccination was 91.8%, worryingly close to the level below which herd immunity fails. There have been debates in the UK about re-introducing compulsory vaccination. In France certain childhood vaccinations are now compulsory. Social media companies are under pressure to shut down the groups anti-vaxxers use to spread their message and recruit. The state is once again prepared to step into personal liberty when it comes to vaccines.

In 1901, 52% of childhood deaths in England and Wales were due to infectious diseases; by 2000 it was 7.4%. In 1901, 40.6% of all deaths were deaths of children; by 2000 it was 0.9%. No-one would want that progress to reverse. But history does have a habit of repeating itself if we let it. The debates continue to be the same: the rights of parents and the individual versus those of the state and public health necessity. This is a debate we have to get right. History tells us what will happen if we don’t. After all, there is nothing new under the sun.

Thanks for reading

- Jamie

Sweating Sickness: England’s Forgotten Plague


The history of medicine is littered with diseases which altered the course of humanity. The Black Death. Smallpox. Influenza. HIV/AIDS. Each one has left its own mark on our collective consciousness. And yet there is an often-overlooked addition to this list: sweating sickness. This disease tore its way through Tudor England, killing within hours, before disappearing as quickly and as mysteriously as it arrived. In its wake it left its mark: a nation changed. The identity of this disease remains a matter of conjecture to this day. This is the story of England’s forgotten plague.

Background to an outbreak

It’s summer 1485. An epic contest for the throne of England is reaching its bloody climax. In a few weeks on August 22nd at the Battle of Bosworth Henry Tudor will wrest the crown from King Richard III and conclude the Wars of the Roses. Away from the fighting people start dying. As contemporary physicians described:

“A newe Kynde of sickness came through the whole region, which was so sore, so peynfull, and sharp, that the lyke was never harde of to any mannes rememberance before that tyme.”

These words take on added impact when you remember the writer would have experienced patients with bubonic plague. What was this disease, “the like was never heard of”? Sudor Anglicus, later known as the English sweating sickness, struck quickly. The French physician Thomas le Forestier described victims feeling apprehensive and generally unwell before violent sweating, shaking and headaches began. Up to half of patients died, usually within 24 hours; those who lived longer than this tended to survive. However, survival did not seem to offer immunity and patients could be struck multiple times. We don’t have an exact figure for its mortality, but it is commonly estimated at 30-50%; 15,000 died in London alone.

Outbreaks continued beyond 1485 and the reign of Henry VII into that of his grandson Edward VI, with four further epidemics in 1508, 1517, 1528 and 1551, each time in summer or autumn. The disease remained limited to England apart from 1528/29, when it also spread to mainland Europe.

John Keys

The principal chronicler of the sweat was the English doctor John Keys (often Latinised to John Caius or Johannes Caius) in his 1552 work ‘A Boke or Counseill Against the Disease Commonly Called the Sweate, or Sweatyng Sicknesse’. This is how we know so much about how the disease presented and progressed.

Keys noted that the patients most at risk of the disease were:

“either men of wealth, ease or welfare, or of the poorer sort, such as were idle persons, good ale drinkers and tavern haunters.”

Both Cardinal Wolsey and Anne Boleyn contracted the disease but survived; Wolsey survived two attacks. Anne’s brother-in-law William Carey wasn’t so lucky and died of the sweat. The disease’s predilection for the young and wealthy led to it being dubbed the ‘Stop Gallant’ by the poorer classes.

Keys was physician to Edward VI, Mary I and Elizabeth I. As he was born in 1510, his work on the first epidemics of sweating sickness was based on prior reports of the illness; it could therefore be said he had performed a kind of literature review. Unlike le Forestier he lacked first-hand experience, and this, together with his focus on noble deaths, has led to criticism. However, Keys was clear that the sweat was different from plague and other conditions, which accords with le Forestier and other physicians of the time.

The impact of the sweat permeated Tudor culture. Even in 1604 William Shakespeare was concerned enough about sweating sickness to write in his play ‘Measure for Measure’:

“Thus, what with the war, what with the sweat, what with the gallows, and what with poverty…”

How the sweat changed history

Henry Tudor was an ambitious man with a fairly loose claim to the throne of England: his mother, Lady Margaret Beaufort, was a great-granddaughter of John of Gaunt, Duke of Lancaster, fourth son of Edward III, and his third wife Katherine Swynford. Katherine was Gaunt’s mistress for around 25 years before they married, and their four children, the eldest of whom was John Beaufort, Henry’s great-grandfather, were all born before the marriage. If this sounds complicated, it is. Henry was not a strong claimant, and his chances had been further weakened by an Act of Parliament in 1407 under Henry IV, John of Gaunt’s first son, which recognised his half-siblings but ruled them and their descendants ineligible for the throne.

Henry Tudor’s ancestry from http://www.livimichael.co.uk/succession-the-players

Henry needed alliances if he was going to get anywhere. He attempted to take the crown in 1483 but the campaign was a disaster. He was running out of time and needed to kill Richard III in battle if he was going to be king. He accepted the help of King Charles VIII of France, who provided Henry with 2,000 mercenaries from France, Germany and Switzerland. This force crossed the English Channel on 7th August 1485. It was in this army that the sweat first appeared. There is debate about whether this was before or after the Battle of Bosworth, but Lord Stanley, a key ally of Richard III and a contributor of 30% of the king’s army, used fear of sweating sickness as a reason not to join the royal forces in battle. It’s therefore possible that sweating sickness was seen before Bosworth and helped shape the course of English history.

Arthur Tudor (1486-1502)

Sweating sickness may have had a further impact on the Tudors and their role in our history. Henry VII’s first son, Arthur, Prince of Wales, died in 1502 aged 15. Sweating sickness has been suggested as the cause of his sudden death. His death saw Henry VII’s second son, also called Henry, become first in line to the throne, which he took in 1509 as King Henry VIII.

What was the sweat?

Unlike other plagues the identity of sweating sickness remains a mystery to this day. The periodicity of the epidemics suggests an environmental or meteorological trigger and possibly an insect or rodent vector.

A similar disease struck northern France in 1718 in an outbreak known as ‘the Picardy sweat’. 196 local epidemics followed until the disease disappeared in 1861, its identity also a mystery. Interestingly, the region of France where the Picardy sweat arose is near where Henry Tudor’s force of French, German and Swiss soldiers amassed prior to the Battle of Bosworth.

Several diseases have been proposed as the true identity of the sweat. Typhus (not as virulent), influenza and ergotism (don’t match the recorded symptoms) have been suggested and dropped. In 1997 it was suggested that a hantavirus could have been responsible. Hantaviruses are spread by inhalation of rodent droppings and cause similar symptoms to sweating sickness before killing with bleeding and complications to the heart and lungs. Although rare they have been identified in wild rodents in Britain. If we remember how the sweat seemed to strike following summer when rodent numbers would be at their highest and add in the poor sanitation of Tudor times then hantavirus is a strong candidate.

We’ll likely never know the true identity of sweating sickness unless it re-emerges. Given the terror it inspired in Tudor England, perhaps we should be glad to keep it a mystery.

Thanks for reading.

- Jamie

The Most Famous Case Report Ever

Case reports are nothing new. We’ve all told colleagues about interesting cases we’ve seen. I’ve presented a couple at RCEM. They tend to focus on the weird and wonderful, cases with surprising twists and turns but with limited actual learning. That’s why case reports sit at the bottom of the table when it comes to levels of evidence. However, one in particular could be said to have marked a turning point in modern medical practice.

The Morbidity and Mortality Weekly Report (MMWR) has been published weekly by the Centers for Disease Control and Prevention (CDC) since 1950. Each week it reports public health information, possible exposures, outbreaks and other health risks for health workers to be aware of. One case report in particular stands out from the entire back catalogue. It was written by doctors from the University of California, Los Angeles and Cedars-Mt Sinai Hospital, Los Angeles. It was published on June 5th 1981:

The MMWR June 5th 1981

Reported by MS Gottlieb, MD, HM Schanker, MD, PT Fan, MD, A Saxon, MD, JD Weisman, DO, Div of Clinical Immunology-Allergy, Dept of Medicine, UCLA School of Medicine; I Pozalski, MD, Cedars-Mt. Sinai Hospital, Los Angeles; Field Services Div, Epidemiology Program Office, CDC.

Pneumocystis pneumonia (PCP) is a rare form of pneumonia caused by the yeast-like fungus Pneumocystis jirovecii. The fungus can live in the lungs of healthy people without causing any problems, so to see it causing disease in five otherwise healthy young men (the oldest was 36) was odd.

Less than a month later the MMWR published further cases of PCP, as well as Kaposi sarcoma, in 26 previously well homosexual men in Los Angeles and New York since 1978. Kaposi sarcoma is a very rare form of cancer previously seen mostly in older men of Jewish or Mediterranean descent. Again, it was virtually unheard of in young men. It was suspected that something was affecting their immune systems, preventing them from fighting off infections and malignancy.

At the time there were many theories as to what was causing the immune systems of these patients to shut down. It was felt that it was linked to the ‘gay lifestyle’ in some way, leading to the stigmatising description in the media of GRID (gay-related immunodeficiency), first used in 1982. By 1983 the disease had also been linked to injecting drug users, haemophiliacs who’d received blood products and Haitians. This led to another stigmatising phrase, ‘the 4H club’ (Homosexuals, Heroin addicts, Haemophiliacs and Haitians).

In 1982, however, the CDC had given it a proper name: ‘Acquired Immune Deficiency Syndrome’ or ‘AIDS’.

The fact it was being transmitted to blood product recipients suggested the cause had to be viral, as only a virus could pass through the filtration process. In 1983 two rival teams, one American and one French, both announced they had found the virus causing AIDS, with ongoing debate as to who got there first. Each team gave it a different name. In 1986 a third name was chosen: ‘Human Immunodeficiency Virus’ or ‘HIV’. By that time the virus had spread not just in America but in Canada, South America, Australia, China, the Middle East and Europe. Since 1981 more than 70 million people worldwide have been infected with HIV and about 35 million have died of AIDS.

The MMWR of 5th June 1981 is now recognised both as marking the beginning of the HIV/AIDS pandemic and as the first published report of HIV/AIDS. Although only a case report, it shows the value of these publications at the front line. Only by recording and publishing the ‘weird and wonderful’ can we share practice, appreciate patterns and spot emergent diseases.

Thanks for reading

- Jamie


Mass Hysteria & Moral Panic: The Dancing Plagues and #Momo


“The only thing we have to fear is fear itself” - President Franklin Roosevelt

Human beings are social animals with conventions and norms. Yet sometimes we act in ways that defy logic or reason. Very often the inspiration for this is fear. In this week’s blog I’ve looked at the ‘Dancing Plagues’ of the Middle Ages and the ‘Momo Challenge’ of last month and how they illustrate the power of fear over people. In a previous blog I wrote about the importance of being accurate with psychiatric diagnosis and so I’m also going to use them as examples of the difference between mass hysteria and moral panic.

In 1374 in Aachen, along the River Rhine in what is now Germany, locals suddenly started dancing. They didn’t stop. They couldn’t stop. Justus Hecker, a 19th-century physician, researched the phenomenon and described it as follows:

“They formed circles hand in hand, and appearing to have lost all control over their senses, continued dancing, regardless of the bystanders, for hours together, in wild delirium, until at length they fell to the ground in a state of exhaustion. They then complained of extreme oppression, and groaned as if in the agonies of death, until they were swathed in cloths bound tightly round their waists, upon which they again recovered, and remained free from complaint until the next attack.”

The dancing spread to Liege, Utrecht, Tongres and other towns in the Netherlands and Belgium, up and down the Rhine river. It was known as St. John’s Dance or St. Vitus’ Dance as these saints were blamed for causing the ‘disease’.

In July 1518, this time in Strasbourg, a woman known as Frau Troffea started dancing and carried on for four to six days. By the end of the week 34 people had joined her, and by the end of the month 400 people were dancing. It didn’t stop. It seemed contagious. Local authorities initially thought that the people just needed to get the dancing out of their systems and so organised musicians to encourage it. It didn’t work. Dozens collapsed and died of exhaustion.

The ‘Dancing Plagues’ continued to occur sporadically throughout medieval Europe. It’s still a mystery why so many people acted in such a bizarre way. Possession was blamed, but exorcisms had no effect. Doctors at the time wondered if the stars or even spider bites were to blame. Various theories have since been put forward, such as encephalitis, which can cause delirium and hallucinations, or poisoning of grain by the ergot fungus. None completely explains all the symptoms. There may have been an element of religious demonstration, but that doesn’t explain how the dancing seemed contagious.

It may be that ‘Dancing Mania’ was due to mass hysteria.

Mass hysteria is a "conversion disorder," in which a person has physiological symptoms affecting the nervous system in the absence of a physical cause of illness, and which may appear in reaction to psychological distress.

It has been suggested that mass hysteria has 5 principles:

  1. "it is an outbreak of abnormal illness behavior that cannot be explained by physical disease"

  2. "it affects people who would not normally behave in this fashion"

  3. "it excludes symptoms deliberately provoked in groups gathered for that purpose," such as when someone intentionally gathers a group of people and convinces them that they are collectively experiencing a psychological or physiological symptom

  4. "it excludes collective manifestations used to obtain a state of satisfaction unavailable singly, such as fads, crazes, and riots"

  5. "the link between the [individuals experiencing collective obsessional behavior] must not be coincidental," meaning, for instance, that they are all part of the same close-knit community

In 1374 Europe was still recovering from the Black Death of 1347-1351. The people of Strasbourg in 1518 had suffered a famine. At a time when disease and famine were seen as the preserve of God’s wrath, perhaps the stress of a potential apocalypse manifested itself in a plague of dancing.

If you think that mass hysteria is confined to the dark pre-Renaissance ages, you’d be very wrong. In 1999, 26 schoolchildren in Belgium fell ill after drinking cans of Coca-Cola. After this was reported in the news, more students in other schools also fell ill. Coca-Cola withdrew nearly 30 million cans from the market as about 100 people eventually complained of feeling unwell after drinking their product. Except when they were examined nothing organically wrong could be found, and the students had drunk cans from different factories.

Professor Simon Wessely, the former president of the Royal College of Psychiatrists, has written a lot about mass hysteria and is clear: mass hysteria does not make a person insane. The patient genuinely believes they are unwell and communicates that illness. This brings us to moral panic and #Momo.


If you’ve been anywhere near social media in the past month, chances are you’ve heard about Momo or ‘The Momo Challenge’. These were widely shared posts warning about a character called Momo who had started appearing in online videos aimed at children, threatening them and encouraging them to perform acts of violence against themselves and others. It led to widespread discussion and warnings from authorities:

Except it wasn’t real. There is no evidence of Momo or any character infiltrating videos of Peppa Pig and inspiring children to hurt themselves. This obsession with an imaginary character has been traced to a solitary post in America warning about Momo. Over the following month there was exponential growth of interest in this imaginary threat. Charities warned that the fear-mongering would actually do more harm than good.

Guardian graphic. Source: Google trends. Numbers represent search interest

Professor Wessely defines moral panic as the “phenomenon of masses of people becoming distressed about a perceived — usually unreal or exaggerated — threat portrayed in catastrophising terms by the media”. Like mass hysteria, there’s a threat. Unlike mass hysteria, people don’t believe themselves to be unwell. The sociologist Stanley Cohen described four stages of moral panic in response to a perceived threat:

  1. The threat is depicted in a simple and recognizable symbol/form by the media

  2. The portrayal of this symbol rouses public concern

  3. There is a response from authorities and policy makers

  4. The moral panic over the issue results in social changes within the community

Look at these four steps and then at the graph above: #Momo illustrates moral panic perfectly. It also brilliantly illustrates how misinformation, and with it panic, can be spread by social media. Moral panic is the result of the fear that the ‘threat’ poses a danger to our way of life. It is therefore a powerful tool for the far right. Watch how Donald Trump and his supporters spread inaccurate information about illegal immigrants to push their border wall agenda.

The world is a scary place and as a species we instinctively fear the unknown, especially when our way of life or our lives themselves are believed to be at risk. The panic on Oxford Street in London in November 2017, when gunshots were reported, is believed to have been due to a ‘perfect storm’ of fear over terrorism and violence. People panicked when a fight broke out; the next thing, hundreds of shoppers were running as their imaginations took over. What’s worse is when that fear is used to ostracise whole groups of people. The Salem witch trials, on which both mass hysteria and moral panic have been blamed, are a classic example of this. For all the drama caused, what the #Momo phenomenon also shows is a potential solution to fear: knowledge and a measured response.

Thanks for reading

- Jamie

Flogging, Bellows, Barrels, Smoke Up the Bum and Abraham Lincoln - Early CPR

Last week saw the induction of our new third-year students as they start the clinical phase of their time at university. I was on CPR duty for much of it. CPR as we know it was developed in 1960. In the centuries before that, many different techniques were used to try to revive a patient without a pulse. It’s fair to say these were, ahem, interesting.

Flogging

In the early Middle Ages a patient would be flogged or flagellated, sometimes with nettles, to try to revive them. The presumption was that the shock would wake the patient; there was no consideration of their circulation.

Bellows

In an early example of artificial ventilation, bellows began to be used in the 1500s to blow air down the patient’s throat and into their lungs. However, simple airway manoeuvres were not used. Bellows remained popular for about 300 years until 1829, when the French physician Leroy d’Etiolles demonstrated through his work on barotrauma that over-distension of the lungs could kill an animal, and the technique fell out of fashion.

Fumigation

Fast forward to the 1700s and fumigation comes into use. The physician would light a pipe of tobacco and use the above contraption to literally blow tobacco smoke up the patient’s rectum. It was believed that the irritant effect of the smoke would revive the patient.

Barrel Method

Resuscitation would then return to the lungs and techniques to force air in and out of the chest. This led to the barrel method, in which the patient was rolled back and forth over a barrel to force inspiration and expiration. You can kind of see what they were trying to do here with a mechanical intervention.

Russian Snow Method

Meanwhile in Russia in the early 19th century the snow method came into fashion: the patient was covered in snow, the idea being to slow their metabolism in the hope that circulation would return at a later date.

The Assassination of Abraham Lincoln

Whilst at Ford’s Theatre on 14th April 1865, US President Abraham Lincoln was shot in the back of the head by John Wilkes Booth. Charles Leale, a young military surgeon, and another doctor, Charles Sabin Taft, attempted to revive Lincoln with a three-stage approach:

Method “A”: “... As the President made no move to revive then, I thought of another way of death, apnea, and I assumed my preferred position for him with artificial respiration to revive …” “… I knelt on the floor over the President, with one knee on each side of the pelvis and in front of him. I leaned forward, opened his mouth and inserted two fingers of my right hand as far as possible ... and then I opened the larynx and I made a free passage for air to enter the lungs …”

Method “B”: “… I put an assistant in each of his arms to manipulate in order to expand the chest and then slowly pushed his arms down the side of the body as I pressed the diaphragm above: These methods caused the vacuum and air is forced out of their lungs … “

Method “C”: “… Also with the thumb and fingers of my right hand pressure intermittent sliding pressure below the ribs stimulated the apex of the heart …”

Lincoln did become more responsive, but it was clear his wounds were fatal and he died the next day. The three stages above are almost recognisable as the ‘ABC’ approach we’re taught today. The doctors took steps to protect Lincoln’s airway, there was some consideration of forcing ventilation, and there was an attempt at compressing the heart. It’s still not CPR as we would know it.

It took the US military adopting mouth-to-mouth resuscitation and CPR, as well as public campaigns helped by the arrival of Resusci Anne (more on her here), for CPR to become a key part of both medicine and health education.

It’s very easy to laugh at these previous techniques and sometimes hard to see the logic behind them. However, we don’t know which staples of medicine we use today will be deemed irrelevant or even wrong. For example, we no longer perform gastric lavage, and even collars and blocks are being debated as sometimes doing more harm than good. Maybe in the future medics will view us as incredulously as we view someone blowing tobacco smoke up the rectum of a moribund patient. Maybe.

Thanks for reading.

- Jamie

What if I Told You About Morpheus and 'The Mandela Effect'?

We’ve probably all seen this meme of Morpheus, played by Laurence Fishburne in the movie The Matrix. In the film Morpheus has a ‘teacher role’ and reveals the true nature of reality to Keanu Reeves’s character, Neo. The meme has been in use since 2012, often as social commentary noting a common observed truth. I’ve also seen it used in PowerPoint presentations. A simple search via Google reveals the sheer number of Morpheus memes out there:

However, the truth is that Morpheus never actually says the line “what if I told you” in the film. Despite this, people who have seen it may say that they remember the line from the film rather than from a meme, often placing it in this scene:

How about Darth Vader when he reveals he’s Luke Skywalker’s father in the film ‘Empire Strikes Back’? Chances are you’re thinking “Luke, I am your father”, when he actually says:

And if I asked you what the Queen in ‘Snow White’ says to her mirror? Did you think “Mirror, Mirror on the wall?” Wrong again.

This phenomenon of commonly reported false memories is known as ‘The Mandela Effect’ after the large number of people who recalled hearing news of the death of Nelson Mandela in the 1980s, despite the fact he went on to become South Africa’s president in 1994 and actually died in 2013. Searching for the Mandela Effect will bring up pseudoscientific websites which explain the phenomenon as multiverses mixing together. Psychologists would describe it as confabulation, and it’s incredibly common.

It’s been shown in studies that subjects will falsely recall a certain word (sleep) if they’ve been hearing related words (tired, bed, pillow etc.). In another study, participants were told of four events that happened to them between the ages of 4 and 6; three of them were real and one (getting lost in a mall) was false. 25% of participants stated that they could remember the false event and even elaborated and embellished, giving new details they ‘remembered’. In another experiment, participants’ recall of a witnessed car crash was affected by the way they were questioned: if they were asked “what speed were the cars going when they smashed?” they tended to recall a much faster speed than if they were asked “what speed were the cars going when they contacted?”

We all rely on our memories to help make us who we are and establish the values we have. It is important to be aware of how our recall and that of our patients may be affected by context or other factors we may not be aware of.

No one is easier to fool than yourself.

Thanks for reading.

- Jamie

Smallpox: the Giants' Shoulders Edward Jenner Stood on to Overcome 'The Speckled Monster'

Convergent evolution is a key principle in biology. Basically, it means that nature will find the same ‘solutions’ for organisms filling similar environmental niches. So in different species with no close relation but which encounter similar environments (say sharks and dolphins) you’ll see similar features (both have fins). The same is true in the history of science. Very few discoveries are actually made in solitude; more often there were several people working on the same problem, but often only one gets the fame. For Charles Darwin see Alfred Russel Wallace. For Sir Isaac Newton see Robert Hooke.

We discuss the unsung heroes in the fight against smallpox and variolation (and review wine) in the third episode of the Quacks Who Quaff podcast.

When it comes to vaccination and the conquest of smallpox, one of the most deadly diseases ever to afflict mankind, one name always comes to mind: Edward Jenner. We all know the story: an English country doctor in the 18th century observed how milkmaids who contracted cowpox from their cattle were immune to the far more serious smallpox. On 14th May 1796 he deliberately infected an eight-year-old boy, James Phipps, with cowpox. The boy developed a mild fever. Later, Jenner exposed Phipps to pus from a smallpox blister. The boy was unaffected by smallpox. A legend was born. Vaccination became the mainstay of the fight against smallpox, and in the 20th century a worldwide campaign resulted in smallpox being the first disease to be eradicated.

Of course, as we all know, things are rarely this straightforward and there are many less famous individuals who all contributed to the successful fight against smallpox.

Dr Jenner performing his first vaccination on James Phipps, a boy of age 8. May 14th, 1796. Painting by Ernest Board (early 20th century).

Smallpox, the ‘speckled monster’, has a mortality rate of 30% and is caused by a highly contagious airborne virus. It long affected mankind; smallpox scars have been seen in Egyptian mummies from the 3rd century BCE. Occurring in outbreaks, it is estimated to have killed up to 500 million people in the 20th century alone. Throughout history it was no respecter of status or geography, killing monarchs and playing a role in the downfall of indigenous peoples around the world. Superstition followed smallpox, with the disease even being worshipped as a god in parts of the world.

Doctors in the West had no answer to it. But in other parts of the world solutions had been found, and with colonisation and exploration the West started to hear about them. More than seventy years before Jenner’s work, two people on either side of the Atlantic were inspired by customs from Africa and Asia.

Left: Shitala, the North Indian goddess of pustules and sores

Right: Sopona, the god of smallpox in the Yoruba region of Nigeria

Lady Mary Wortley Montagu


In 1721 in London, Lady Mary Wortley Montagu, the wife of the Ambassador to Turkey, who had lost her brother to smallpox and survived the disease herself, is keen to protect her daughter. Whilst in Turkey she had learned of the local practice of ‘variolation’, or inoculation against smallpox: pus from the pustules of patients with mild smallpox is collected and scratched into the skin of an uninfected person. This was a version of a process dating back to circa 1000 AD in China, where pustule material was blown up the nose of a patient. Mary was impressed, and her son was inoculated in Turkey in 1718.

Three years later in London her daughter was similarly inoculated in the presence of doctors of the royal court, the first such procedure in Britain. She campaigned for the practice to be spread.

Cotton Mather

Also in 1721, an outbreak of smallpox is ravaging Boston, Massachusetts. An influential minister, Cotton Mather, is told by one of his African slaves, Onesimus, about a procedure similar to inoculation. According to Onesimus, as a younger man a cut was made into his skin and smallpox pustules were rubbed into the wound. He told Mather that this had made him immune to smallpox. Mather hears of the same practice in China and Turkey and is intrigued. He finds a doctor, Zabdiel Boylston, and the two start performing the first inoculations in America.

Lady Wortley Montagu and Cotton Mather never met; they lived on opposite sides of the Atlantic and yet learnt of the same practice. Convergent medical advances in practice.

Inoculation is not without risk, however, and Prince Octavius, son of King George III, dies in 1783 following inoculation. People inoculated with smallpox are also potentially contagious. An alternative is sought.

In 1768 an English physician, John Fewster, identifies that cowpox offers immunity against smallpox and begins offering immunisation. In 1774 Benjamin Jesty, a dairy farmer in Yetminster, Dorset, also aware of the immunity offered by cowpox, has his wife and children infected with the disease. He tests the procedure by then exposing his son to smallpox. His son is fine. The process is called vaccination, after the Latin word for cow. However, neither man publishes his work. In France in 1780 the politician Jacques Antoine Rabaut-Pommier opens a hospital and offers vaccination against smallpox. It is said that he told an English doctor called Dr Pugh about the practice, and that the English physician promised to tell his friend, also a doctor, about vaccination. His friend’s name? Edward Jenner.

It is Jenner who publishes his work and pushes for vaccination as a preventative measure against smallpox. It is his vaccine that is used when the UK Parliament, through a succession of Acts starting in 1840, makes variolation illegal and provides free vaccination against smallpox. In a precursor of the ignorance of the ‘anti-vaxxer’ movement there is some public hysteria, but vaccination proves safe and effective (as it still is).

Cartoon by British satirist James Gillray from 1802 showing vaccination hysteria

Arguably, without Jenner’s work such an effective campaign would have been held back. As the American financier Bernard Baruch put it, “Millions saw the apple fall, but Newton was the one who asked why.” However, Newton himself felt that “if I have seen further it is because I have stood on the shoulders of giants.” History in the West has a habit of focusing on rich white men to the exclusion of others, especially women and slaves. Any appreciation of Jenner’s work must also include those giants on whose shoulders he stood, by acknowledging the contribution of people like Lady Wortley Montagu and Onesimus, and of Fewster and Jesty: the Wallaces to his Darwin.

Thanks for reading

- Jamie

How 'The Unknown Woman of the Seine' became 'Resusci Anne'


She’s been called “the most kissed face in the world”. I think it’s fair to say that most of us will encounter Resusci Anne/Resus Anne/Rescue Anne/CPR Anne at some point in our lives. The mannequin itself dates from the 1960s, when it was first produced by Laerdal Medical. However, as is often the case with medical history, the story of Anne goes back further than that.

Paris in the 1880s showing the beginnings of the Eiffel Tower

It’s Paris in the late 1880s: a busy, bustling city where, much like in London, there was what we would now consider a morbid curiosity with death, and it was not uncommon for people to vanish. The body of a young woman, presumed to be no more than 16 years old, is pulled out of the River Seine. As is customary, her body is displayed on a marble slab in the mortuary window (a popular attraction at the time) in the hope a family will come forward. They don’t.

Her body shows no sign of disease or trauma. Suicide is suspected. But something else intrigues the pathologist: her face, upon which is a calm, knowing half-smile quite unlike anything you’d expect from a person who has drowned. The pathologist was so taken by her that he ordered a plaster cast to be made of her face.


Copies of the plaster cast became widespread as both a decoration and artistic inspiration. Parallels were drawn with Ophelia, the character in William Shakespeare’s ‘Hamlet’ who drowns after falling out of a willow tree. The French philosopher Albert Camus compared her enigmatic smile to the Mona Lisa.

In Richard Le Gallienne’s 1899 novella, ‘The Worshipper of the Image’, the protagonist Antony falls in love with the death mask. The Russian-born poet Vladimir Nabokov wrote a whole 1934 poem titled “L’Inconnue de la Seine”, in which he imagined her final days:

Urging on this life’s denouement,
loving nothing upon this earth,
I keep staring at the white mask
of your lifeless face.

In 1926 the death mask was included in a catalogue of death masks under the title ‘L’Inconnue de la Seine’ (The Unknown Woman of the Seine), and her legend was complete.

Fast forward to 1955, and the Norwegian toy manufacturer Asmund Laerdal saves his son Tore from near drowning in a river. Shortly afterwards he is approached to design a mannequin to help teach cardio-pulmonary resuscitation. He decides that a female mannequin would be less scary for students and wants her to have as natural a face as possible. Remembering a plaster cast of L’Inconnue, he decides to replicate her face. L’Inconnue is reborn as ‘Resusci Anne’. The rest is history.

As Laerdal themselves put it: "Like Leonardo da Vinci's Mona Lisa, and John Everett Millais' Ophelia, the girl from the Seine represents an ideal of beauty and innocence." An anonymous victim of drowning, briefly the focus of gothic romantic obsession, is now responsible for teaching cardio-pulmonary resuscitation around the world. It’s unlikely she could ever have imagined this legacy in her short life.

Thanks for reading

- Jamie