The evidence doesn't lie: The case of the Phantom of Heilbronn and the importance of pre-test probability


“Evidence doesn’t lie” - Gil Grissom, CSI

Ten years ago police were on the hunt for an unusual serial killer. Several factors made this suspect unique. Firstly, she was female, a rarity amongst serial killers. Secondly, there seemed to be no pattern to her crimes. Her DNA was found at crime scenes in France, Germany and Austria dating back to 1993. On a cup at the scene of the murder of a 62-year-old woman. A knife at the house of a murdered 61-year-old man. A syringe containing heroin. Altogether she was linked to forty separate crimes including six murders. Her accomplices included Slovaks, Iraqis, Serbs, Romanians and Albanians. This was an unprecedented case. A modern-day Moriarty. She was called ‘The Phantom of Heilbronn’ or ‘The Woman Without a Face’.

Then in 2009 the police found her. After a case lasting eight years, 16,000 man-hours and a cost of €2 million, the police had their suspect. She was a technician working at the factory which made the cotton swabs the forensics team used to collect samples. As she had gone about her work, moving and speaking, her saliva and skin cells had got onto the swabs and contaminated them. Police confirmed that every sample of the Phantom’s DNA had been collected with swabs from the same factory. The Phantom of Heilbronn did not exist.

If you think about it, it was incredibly unlikely that one woman was involved in so many different crimes across so many countries over so many years. It actually makes much more sense that it was error. And yet the investigators were blinded by the result in black and white on a screen.

This can happen in Medicine. A result from a blood test or imaging comes back positive or negative and we just accept it. We have to use our brains and think about the tests we’re ordering and what the results mean.


If you have a certain disease we want a test that will detect it and come back positive. That is the test’s sensitivity. We don’t want false negatives: people with a disease not testing positive. A sensitivity of 100% means that the test will always come back positive if you have the disease. A sensitivity of 50% means that the test will correctly detect disease in 50% of patients with the disease. The other 50% get a false negative. Sensitivity is very important if you’re testing for a serious disease. For example, if you’re testing for cancer you don’t want many false negatives.


As well as detecting disease you also want the test to accurately rule out a disease if the patient doesn’t have it. This is its specificity. We don’t want false positives: people who don’t have the disease testing positive. A specificity of 100% means that the test will always come back negative if you don’t have the disease. A specificity of 50% means that 50% of people who don’t have a disease will correctly test negative. The other 50% will be given a false positive result. Specificity is very important if there’s a potentially hazardous treatment or further investigation following a positive result. If a positive result means your patient has to undergo a surgical procedure or be exposed to radiation by a CT scan you’re going to want as few false positives as possible.
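The two definitions above boil down to simple arithmetic on the four possible test outcomes. A minimal sketch, with made-up counts purely for illustration:

```python
def sensitivity(true_positives, false_negatives):
    # Proportion of people WITH the disease whom the test correctly flags
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    # Proportion of people WITHOUT the disease whom the test correctly clears
    return true_negatives / (true_negatives + false_positives)

# Hypothetical results for 100 diseased and 100 healthy patients
print(sensitivity(75, 25))   # 0.75: one in four diseased patients gets a false negative
print(specificity(90, 10))   # 0.9: one in ten healthy patients gets a false positive
```

Note the two numbers are calculated on entirely separate groups of patients: sensitivity only ever looks at people with the disease, specificity only at people without it.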

The trouble is that no test is 100% sensitive or 100% specific. This has to be understood. No result can be interpreted properly without understanding the clinical context.


For example, the sensitivity of a chest x-ray for picking up lung cancer is about 75%. That means it gives a true positive for 3 out of 4 patients, with the fourth patient getting a false negative. If your patient is in their twenties, a non-smoker with no family history and no symptoms other than a cough, you’d probably accept that 1/4 chance of a false negative and be happy you’ve ruled out a malignancy unless the situation changes. However, in a patient in their seventies with a smoking history of over 50 years who’s coughing up blood and has unexplained weight loss, suddenly that 75% chance of detecting cancer on a chest x-ray doesn’t sound so comforting. Even if you can’t see a mass on their chest x-ray you’d still refer them for more sensitive imaging. That’s because the second patient has a much higher probability of having lung cancer based on their history. So high, in fact, that choosing a test with such poor sensitivity as a chest x-ray might not be the right decision to make. This is where pre-test probability comes in.

Pre-test probability

This principle of understanding the clinical context is called the pre-test probability. Basically, it is the likelihood that the individual patient in front of you has a particular condition before you’ve even done the test for that condition.

The probability of the condition or target disorder, usually abbreviated P(D+), can be calculated as the proportion of patients with the target disorder out of all the patients with the symptom(s), both those with and without the disorder:

P(D+) = D+ / (D+ + D-)

(where D+ indicates the number of patients with target disorder, D- indicates the number of patients without target disorder, and P(D+) is the probability of the target disorder.)
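The formula above is one line of code. A minimal sketch, with hypothetical numbers chosen to match the headache example below:

```python
def pretest_probability(with_disorder, without_disorder):
    # P(D+) = D+ / (D+ + D-): proportion of symptomatic patients who have the disorder
    return with_disorder / (with_disorder + without_disorder)

# Hypothetical: of 10,000 patients presenting with a symptom, 9 have the target disorder
print(pretest_probability(9, 9991))  # 0.0009, i.e. 0.09%
```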

Pre-test probability depends on the circumstances at that time. For example, the probability that a particular patient attending their GP with a headache has a brain tumour is 0.09%. Absolutely tiny. However, with every re-attendance with the same symptom, the development of new symptoms, or an attendance at an Emergency Department, that pre-test probability goes up.

Pre-test probability helps us interpret results. It also helps us pick the right test to do in the first place.

Pulmonary embolism: a difficult diagnosis

Pulmonary embolism (a blood clot on the lung) affects people of all ages, killing up to 15% of patients hospitalised with a PE. This is reduced by 20% if the condition is identified and treated correctly with anticoagulation. PE doesn’t play fair though and has very non-specific symptoms such as shortness of breath or chest pain. The gold standard for detecting or ruling out a PE is a computerised tomography pulmonary angiogram (CTPA) scan. However, a CTPA scan involves exposing the chest and breasts to a lot of radiation. For instance, a 35-year-old woman who has one CTPA scan has her overall risk of breast cancer increased by 14%. There’s also the logistical impossibility of scanning every patient we have. So we need a way of ensuring we don’t scan needlessly.

We do have a blood test, checking for D-Dimers, which are the products of the body’s attempts to break down a clot. The trouble is other conditions such as infection or cancer can raise our D-Dimer as well. The D-Dimer test has a sensitivity of 95% and a specificity of 60%. That means it will fail to detect PE in 5% of patients who have one, meaning we miss a potentially fatal disease in 1/20 patients with a PE. It also means 40% of patients without a PE will get a false positive, risking exposing them to a scan which increases their risk of cancer. Not to mention starting anticoagulation treatment (and so increasing the risk of bleeding such as a brain haemorrhage) needlessly. So we have to be careful to only do the D-Dimer test in the right patients. This is why we need to work out our patient’s risk.
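Those percentages translate directly into missed diagnoses and needless scans. A quick back-of-the-envelope sketch (the cohort sizes are hypothetical, chosen only to make the arithmetic visible):

```python
SENSITIVITY = 0.95  # proportion of true PEs the D-Dimer detects
SPECIFICITY = 0.60  # proportion of patients without PE it correctly clears

with_pe, without_pe = 100, 100  # hypothetical cohort

false_negatives = round(with_pe * (1 - SENSITIVITY))     # PEs the test misses
false_positives = round(without_pe * (1 - SPECIFICITY))  # needless scans / anticoagulation

print(false_negatives)  # 5: 1 in 20 PEs missed
print(false_positives)  # 40: 40 of every 100 patients without a PE test positive
```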

Luckily there is a risk score for PE called the Wells score. This uses signs, symptoms, the patient’s history and clinical suspicion to stratify the patient as low or high risk for a PE. We then know the chance that the patient will turn out to have a PE based on whether they are low or high risk.

Only 12.1% of low-risk patients will have a PE. At such a low chance of PE we accept the D-Dimer’s 5% probability of a false negative and, keen to avoid the radiation exposure of a scan, do the blood test. If it is negative we accept that and consider PE ruled out unless the facts change. If it is positive we can proceed to imaging.

However, 37.1% of high-risk patients will have a PE. Now it’s a different ballgame. The pre-test probability has changed. A high-risk patient has a more than 1/3 chance of having a PE. Suddenly the 95% sensitivity of a D-Dimer doesn’t seem enough knowing there’s a 1/20 chance of missing a potentially fatal diagnosis. The patient is likely to deem the scan worth the radiation risk knowing they’re high risk. So in these patients we don’t do the D-Dimer. We go straight to imaging. If a D-Dimer has been done for some reason and is negative we ignore it and go to scan. We interpret the evidence based on circumstances and probability.
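The pathway described above can be sketched as a few lines of decision logic. This is a simplification for illustration only, not clinical guidance; the function name and risk labels are my own:

```python
def next_step_for_suspected_pe(wells_risk, d_dimer_positive=None):
    # Simplified sketch of the pathway described above (not clinical guidance)
    if wells_risk == "high":
        return "CTPA"  # pre-test probability too high to trust a negative D-Dimer
    if d_dimer_positive is None:
        return "D-Dimer"  # low risk: do the blood test first
    return "CTPA" if d_dimer_positive else "PE ruled out"

print(next_step_for_suspected_pe("high"))                         # CTPA, even if a D-Dimer was done
print(next_step_for_suspected_pe("low"))                          # D-Dimer
print(next_step_for_suspected_pe("low", d_dimer_positive=False))  # PE ruled out
```

Note that a high-risk patient goes straight to imaging regardless of any D-Dimer result, exactly as the text describes.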

This is the basis of the NICE guidance for suspected pulmonary embolism.

Grissom is wrong; the evidence can lie. Some of the results we get will be phantoms. Not only must we pick the right test we must also think: will I accept the result I might get?

Thanks for reading.

- Jamie


#FOAMPubMed 4: p values

In the previous blog we looked at how Type I Error is the false rejection of a null hypothesis.




A p value is a decimal showing how likely results at least as extreme as ours would be if the null hypothesis were true. The smaller the p value, the harder it is to explain our results by chance alone. It will usually be given in a paper along with the results.

As we want a chance of less than 5% of falsely rejecting our null hypothesis, the p value we want is p<0.05.

Some studies want an even smaller chance of Type I Error and so design their study for p=0.01 (1% chance of falsely rejecting the null hypothesis) for example.
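To make this concrete, here is one way a p value can actually be calculated, using the exact binomial sum for a coin-guessing experiment. The scenario is illustrative; real trials use statistical tests suited to their data:

```python
from math import comb

def one_sided_p_value(successes, trials, p_null=0.5):
    # Probability of at least this many successes if the null hypothesis
    # (pure guessing, success rate p_null) were true
    return sum(comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
               for k in range(successes, trials + 1))

print(one_sided_p_value(60, 100))  # ~0.028: below 0.05, so significant at the usual threshold
```

Guessing 60 of 100 coin tosses correctly would happen less than 3% of the time by luck alone, so at p<0.05 we would reject the null hypothesis; 55 of 100 would not be enough.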

The p value we want will help shape our study, including sample size.

With p<0.05 we will have statistically significant results. More on that in the next blog.

Two snakes or one? How we get the symbol for Medicine wrong


Healthcare is full of antiquity, not surprising for a venture as old as humanity itself. Humans have always got sick and always turned to wise men and women and the divine to help them. With that comes symbols and provenance. Wound Man. The Red Cross. The Rod of Asclepius.

Ah yes, the Rod of Asclepius, the Ancient Greek God of healing. It’s a prominent symbol of Medicine. One staff, with two snakes entwined around it…

Except that symbol is not the Rod of Asclepius at all. That symbol of two snakes wrapped around a pole, known as a caduceus, actually belonged to Hermes, the Ancient Greek messenger God in charge of shepherds, travel and commerce. The Ancient Romans called him Mercury. The fastest of the gods, he had winged shoes and helmet to help him travel. On one adventure he saw two snakes fighting. To stop them he threw a stick at them and at once the serpents wrapped themselves around it and became fixed. Hermes liked the resulting staff so much he took it as his own. Hence the caduceus became a symbol of Hermes; of commerce and travel.


Asclepius (Vejovis to the Romans) on the other hand was the son of Apollo the Sun God. Just like Hermes Asclepius was also linked to snakes. One story has a snake licking his ears clean and in so doing giving him healing knowledge. Another story has a snake giving him a herb with resurrecting powers. For whatever reason, Asclepius would show his gratitude to snakes by carrying a staff with one snake on it. Not two. One.


The Ancient Greeks weren’t the first or last civilisation to link snakes to divinity. People have a habit of venerating and fearing in equal measure. Snakes, with their stillness, mysterious venom and supposed powers of self-renewal through shedding their skin are always going to inspire wonder.

So why the confusion between these two symbols? One possible reason is alchemy: the attempt by early scientists to turn base metals into gold which, while a folly, helped advance scientific knowledge including Medicine. The caduceus was used as a symbol by alchemists as they often used mercury, or quicksilver, in their preparations. Hermes/Mercury was linked to the metal that bore his name and so a connection was made. However, the caduceus was also a symbol of professionalism and craft. Therefore anyone wanting their work to be taken seriously would include the caduceus as a kind of early precursor of professional accreditation. In that vein, when John Caius, the chronicler of sweating sickness, presented both the Cambridge college which bears his name and the Royal College of Physicians with a silver caduceus, it was not as a symbol of Medicine but of professionalism.

In any case, in Great Britain, as late as 1854, the distinction between the rod of Asclepius and the caduceus as symbols of two very different professions was apparently still quite clear. In his article On Tradesmen's Signs of London, A.H. Burkitt notes that among the very old symbols still used in London at that time, which were based on associations between pagan gods and professions, "we find Mercury, or his caduceus, appropriate in trade, as indicating expedition. Esculapius, his Serpent and staff, or his cock, for professors of the healing art".

It seems the mix-up didn’t take place until the 20th century. In 1902 the US Army Medical Corps adopted the caduceus as their symbol. The reason isn’t clear, as the American Medical Association, the Royal Army Medical Corps and the French Military Service would all happily adopt the staff of Asclepius. The decision to choose the caduceus has been credited either to a Captain Frederick P. Reynolds or a Colonel Hoff. The US Public Health Service and Marine Hospital Service would also take Hermes’s symbol as their own.

This confusion seems to be uniquely American and driven by commercialisation. In 1990, a survey in the US found that 62% of the professional associations used the Rod of Aesculapius while 37% used the Caduceus and 76% of commercial organizations used the Caduceus. Perhaps that makes sense as Hermes was the god of trade (or maybe that’s me being cynical). The World Health Organisation would choose the Rod of Asclepius for their emblem where it can still be seen today.


Medicine is full of symbolism. Symbols, like language, change their meaning. There was a time when healthcare was full of quacks and charlatans. The caduceus was a mark of professionalism long before there were accreditations to be had. Using the two snakes is a nod to those efforts to make the trade professional and accountable. But if you want to be accurate, it’s the staff with one snake you’re after.

Thanks for reading.

- Jamie


When mental health robbed England of its king for over a year

Both Prince William and Prince Harry have spoken openly about their own mental health and the impact of losing their mother and growing up in the public eye. Together they have formed a charity to support young people with mental health problems. They aim to remove a stigma which still remains in the 21st century.

This musing goes back to another royal with mental health problems, this time in the 15th century: an illness we still can’t diagnose, which led to his downfall and changed the course of history in England.

It’s 1453 and to say that King Henry VI of England has a lot on his plate would be an understatement. The Battle of Castillon on 17th July effectively ends the Hundred Years War with France and sees Henry lose the last part of an empire which had once stretched from the Channel to the Pyrenees. At home this defeat stoked the embers of rebellion. The Wars of the Roses are imminent. For Henry defeat was a personal blow too. He was the son of Henry V, war hero of Agincourt. He succeeded to the throne in 1422 aged only nine months after his father’s sudden death, and by the time he was deemed old enough to rule in his own right in 1437 the war with France had already turned against England. Henry was unable to live up to his father’s legend and reverse the slide, putting his reign under increasing pressure from the very beginning.

King Henry VI


Henry did have one thing going for him: his wife Margaret of Anjou, whom he married in 1445. By the summer of 1453 she was pregnant. Strong-willed and volatile, she was far more willing than Henry to stand firm and make decisions. Henry deplored violence and would rather spare traitors and cut back his own spending instead of raising taxes. Royal duties were a distraction from his preferred activities: praying and reading religious texts. Admirable, but not ideal when revolution is in the air. As Henry began to earn his reputation as one of England’s weakest ever kings Margaret would come to be the de facto monarch. He would soon need her even more.

Margaret of Anjou


10th August 1453, at the royal lodge in Clarendon near Salisbury. Henry receives news of the defeat at Castillon and the deaths of one of his most faithful and talented commanders, John Talbot, Earl of Shrewsbury, and his son. Suddenly he falls unwell. Without warning he becomes unaware of his surroundings, unresponsive to anyone and anything around him and seemingly unable even to move. With England on the verge of civil war his entourage are understandably keen to keep this under wraps and hope it passes. It doesn’t. Margaret stays in London and the royal court continues as normal. In early October, accepting how ill the king is, his court moves him gradually to Windsor. On 13th October Margaret goes into labour and is delivered of a baby boy, Edward. Henry is informed of the birth of his heir but doesn’t react. In the New Year Margaret brings Prince Edward to Henry. Both she and the Duke of Buckingham beg Henry to bless the young prince. Other than moving his eyes he does nothing. All this time he has to be fed and guided around the palace by his attendants.

On 22nd March 1454 John Kemp, the Archbishop of Canterbury and Lord Chancellor of England, dies. The news is given to Henry by a delegation of bishops and noblemen in the hope he will wake and announce a successor. The group report back to Parliament that the king remained unresponsive. That same month a commission sends a group of doctors to treat Henry. They are provided with a list of possible treatments including enemas, head purging (heat applied to the head), laxatives and ointments. Whatever treatments they choose, nothing works.

As suddenly as Henry fell ill he recovered, after nearly 18 months, on Christmas Day 1454. On 30th December Margaret brought Edward to Henry. The king was delighted and acted as though he was meeting the prince for the first time. Margaret was overjoyed, but with an agenda. During Henry’s illness Richard of York had claimed the title of Lord Protector and on the death of John Kemp placed his brother-in-law Richard Neville as the new Chancellor, a move Margaret opposed. Edmund, Duke of Somerset, a rival of Richard’s and an ally of Margaret’s, was sent to the Tower of London. Richard was a relative of Henry’s and had a claim to the throne. A claim scuppered by the birth of Prince Edward. The life of her son was in jeopardy. With Henry now well again Margaret persuaded him to remove Richard from favour and release Somerset from the Tower. So intensified the resentment. Richard would begin to grow his support. The Wars of the Roses sprang from these personal rivalries. Had Henry not been unwell it’s possible the Wars of the Roses could have been avoided.


So what was Henry’s illness? Much has been made of a supposed family history of mental health problems. His maternal grandfather King Charles VI of France suffered recurrent bouts of violence and disorientation, not recognising his family or remembering he was king. These bouts lasted months at a time. It is possible they were due to mental illness such as bipolar disorder or schizophrenia. However, they seemed to follow a fever and seizures he suffered in 1392. Potentially Charles’s ‘madness’ may have been due to an infection such as encephalitis rather than psychiatric illness.

The length of Henry’s illness and his sudden improvement with no apparent ill effect make schizophrenia or catatonic schizophrenia unlikely. The length of the illness, along with the loss of awareness and memory, also makes a depressive illness unlikely. There’s no record of him being similarly ill at any other time in his life. It is possible he suffered a severe dissociative disorder due to stress. Of course, it is completely plausible that contemporary accounts are inaccurate or incomplete, never mind the fact that it is impossible to make a diagnosis of a patient you haven’t met, let alone one who died six centuries ago.

Henry would cling to the throne until he was deposed in 1461, replaced by Edward IV, son of Richard of York. Henry was imprisoned and Margaret fled to Scotland with their son. But she wasn’t finished. She would reach out to Richard Neville and form an alliance based on an arranged marriage between her son and his daughter. Neville would force out Edward IV and reinstate Henry in 1470. It was to be a short return however. Edward IV raised an army and in the ensuing conflict both Richard Neville and then Henry’s son died in combat in early 1471. Henry once again was imprisoned in the Tower of London. He died mysteriously, possibly murdered on the orders of Edward IV, in 1471. His mental health was blamed, with supporters of Edward IV claiming he died of a broken heart at the loss of his son. Margaret was also imprisoned until she was ransomed by King Louis XI of France in 1475. She lived out her days in France until she died in 1482.

King for as long as he could remember, losing his kingdom and facing potential rebellion and death, it’s no wonder Henry’s mental health suffered. But what I think is remarkable is that at a time of almost no understanding of mental health his court was able to keep him fed and watered and otherwise healthy for 18 months. In the years since their mother died in 1997, Princes William and Harry have shown how far we have come in appreciating mental health. Their ancestor King Henry VI is a powerful example of the impact of mental illness.

Thanks for reading

- Jamie

#FOAMPubMed 3: Type I Error


First things first, no piece of research is perfect.  Every study will have its limitations. 

One way we try to make research better is through understanding error.  

If we find that the new drug works when it doesn’t, that’s called a false positive. We can’t eliminate false positives; some patients will get better even if given placebo. But with too many false positives we will find an effect where one doesn’t actually exist. We will wrongly reject our null hypothesis.

Type I Error comes about when we wrongly reject our null hypothesis. 

This will mean that we will find our new drug is better than the standard treatment (or placebo) when it actually isn't.

Type I Error is also called alpha (α).

A way I like to look at Type I Error is the influence of chance on your study. Some patients will get better just through chance. You need to reduce the impact of chance on your study.

For instance, I may want to investigate how psychic I am. My null hypothesis would be ‘I am not psychic.’

I toss a coin once. I guess tails. I’m right. I therefore reject my null hypothesis and conclude I’m psychic.

You don’t need to be an expert in research to see how open to chance that study is and how one coin toss can’t be enough proof. We’d need at least hundreds of coin tosses to see if I could predict each one.
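The arithmetic behind that intuition is simple: the chance of guessing every fair coin toss correctly halves with each extra toss. A minimal sketch:

```python
def chance_of_perfect_guessing(tosses):
    # Probability of calling every fair coin toss correctly by luck alone
    return 0.5 ** tosses

print(chance_of_perfect_guessing(1))   # 0.5: one toss is 'right' half the time by pure chance
print(chance_of_perfect_guessing(5))   # 0.03125: already below the usual 5% alpha
print(chance_of_perfect_guessing(20))  # ~0.00000095: luck is now a very poor explanation
```

One toss gives a 50% chance of looking ‘psychic’ by luck alone; demand twenty correct calls in a row and chance is effectively ruled out.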

You can see how understanding Type I Error influences how you design your study, including your sample size.

More of that later. The next blog will look at how we actually statistically show that we’ve reduced Type I Error in our study.

#FOAMPubMed 2: The null hypothesis


When we do research in Medicine it’s usually to test whether a new treatment works (by testing it against placebo) or whether it is better than the established treatment we’re already using.

At the beginning of our study we have to come up with a null hypothesis (denoted as H0).

The null hypothesis is a statement that assumes no measurable difference between the things you’re comparing.

The null hypothesis is therefore usually something along the lines of: 

‘Drug A won’t be better than Drug B at treating this condition.’  

We then set out to test this null hypothesis. If we find Drug A is better than Drug B then we reject the null hypothesis and conclude Drug A is the superior treatment. If Drug A is found to be no better (i.e. the same or worse) than Drug B then we fail to reject (in practice, accept) our null hypothesis and conclude that Drug A is not superior.
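The reject-or-accept step can be written as a tiny decision rule. A sketch, assuming a pre-chosen significance threshold (alpha) of 0.05 and hypothetical p values; the function name is my own:

```python
ALPHA = 0.05  # pre-chosen chance of Type I Error we are willing to accept

def decide(p_value, alpha=ALPHA):
    # Reject H0 only when the observed result would be unlikely if H0 were true
    if p_value < alpha:
        return "reject H0: Drug A appears superior"
    return "fail to reject H0: no evidence Drug A is better"

print(decide(0.03))  # reject H0: Drug A appears superior
print(decide(0.20))  # fail to reject H0: no evidence Drug A is better
```

The threshold is fixed before the study starts; the data only ever gets compared against it, never the other way round.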

Error comes when we either wrongly reject or wrongly accept the null hypothesis.

Error means we come to the wrong conclusion. There are two types of error, the next blog will look at the first, Type I Error.

#FOAMPubMed 1: Lemons and Limes, the first clinical trial and how to make a research question


Before we conduct any research we first need to construct a research question. This can be a difficult step. Our question needs to be precise and easy to understand. To do this we can use the ‘PICO’ criteria:


Population: we need a population of interest. These will be subjects who share particular demographics, and the population needs to be clearly documented.


Intervention: the intervention is something you’re going to do to your population. This could be a treatment, education, or an exposure such as asbestos. The effect of the intervention is what you’re interested in.


Comparison: if we’re going to study an intervention we need something to compare it against. We can use people without the exposure (a control group) or compare the treatment to another treatment or a placebo.


Outcome: the outcome is essentially what we are going to measure in our study. This could be mortality, an observation such as blood pressure, or a statistic such as length of stay in hospital. Whatever it is, we need to be very clear that this is our main outcome, otherwise known as our primary outcome. The outcome decides our sample size so it has to be explicit.

PICO therefore allows us to form a research question.

To demonstrate this let’s look at the first ever clinical trial and see how we use PICO to write a research question.

It’s the 18th century. An age of empires, war and exploration. Britain, an island nation in competition with its neighbours for hegemony, relies heavily on her navy as the basis of her expansion and conquest. This is the time of Rule Britannia. Yet Britain, as with all sea going nations, was riddled with one scourge amongst its sailors: scurvy.

Scurvy is a disease caused by a lack of Vitamin C. Vitamin C, or ascorbic acid, is essential in the body to help catalyse a variety of different functions, including making collagen, a protein which forms the building blocks of connective tissue, and wound healing. A lack of Vitamin C therefore causes a breakdown of connective tissue as well as impaired healing; this is scurvy, a disease marked by skin changes, bleeding, loss of teeth and lethargy. Hardly the state you want your military to be in when you’re trying to rule the waves.

James Lind was born in Edinburgh in 1716. In 1731, he registered as an apprentice at the College of Surgeons in Edinburgh and in 1739 became a surgeon's mate, seeing service in the Mediterranean, Guinea and the West Indies, as well as the English Channel. In 1747, whilst serving on HMS Salisbury he decided to study scurvy and a potential cure.

James Lind 1716-1794


Lind, as with medical opinion at the time, believed that scurvy was caused by a lack of acid in the body which made the body rot or putrefy. He therefore sought to treat sailors suffering with scurvy with a variety of acidic substances to see which was the best treatment. He took 12 sailors with scurvy and divided them into six pairs. One pair were given cider on top of their normal rations, another sea water, another vinegar, another sulphuric acid, another a mix of spicy paste and barley water, with the final pair receiving two oranges and one lemon (citrus fruits containing citric acid).

Although they ran out of fruit after five days, by that point one of the pair receiving citrus fruits had returned to active duty whilst the other was nearly recovered. Lind published his findings in his 1753 work, A Treatise of the Scurvy. Despite this outcome neither Lind himself nor the wider medical community recommended citrus fruits be given to sailors. This was partly due to the impossibility of keeping fresh fruit on a long voyage and the belief that other, easier to store acids could cure the disease. Lind recommended a condensed juice called ‘rob’ which was made by boiling fruit juice. Boiling destroys Vitamin C and so subsequent research using ‘rob’ showed no benefit. Captain James Cook managed to circumnavigate the globe without any loss of life to scurvy. This is likely due to his regular replenishment of fresh food along the way as well as the rations of sauerkraut he provided.

It wasn’t until 1794, the year Lind died, that senior officers on board HMS Suffolk overruled the medical establishment and insisted on lemon juice being provided on their twenty-three-week voyage to India to mix with the sailors’ grog. The lemon juice worked. The organisation responsible for the health of the Navy, the Sick and Hurt Board, recommended that lemon juice be included on all future voyages.

Although his initial assumption was wrong (that scurvy was due to a lack of acid and that it was the acidic quality of citrus fruits that was the cure), James Lind had performed what is now recognised as the world’s first clinical trial. Using PICO we can construct Lind’s research question.


Population: sailors in the Royal Navy with scurvy


Intervention: giving sailors citrus fruits on top of their normal rations


Comparison: seawater, vinegar, spicy paste and barley water, sulphuric acid and cider


Outcome: patients recovering from scurvy and returning to active duty

So James Lind’s research question would be:

Are citrus fruits better than seawater, vinegar, spicy paste and barley water, sulphuric acid and cider at treating sailors in the Royal Navy with scurvy so they can recover and return to active duty?

After HMS Suffolk arrived in India without scurvy, the Naval establishment began to give citrus fruits in the form of juice to all sailors. This arguably helped swing naval superiority the way of the British as the health of their sailors improved. It became common for imperial powers to plant citrus fruits across their empires so their ships could stop off and replenish. The British planted a particularly large stock in Hawaii. Whilst lemon juice was originally used, the British soon switched to lime juice. Hence the nickname, ‘limey’.

A factor which had made the cause of scurvy hard to find was that most animals, unlike humans, can make their own Vitamin C and so don’t get scurvy. A team in 1907 was studying beriberi, a disease caused by a lack of Thiamine (Vitamin B1), in sailors by giving guinea pigs their diet of grains. Guinea pigs, by chance, also can’t synthesise Vitamin C, and so the team were surprised when, rather than developing beriberi, the animals developed scurvy. In 1912 Vitamin C was identified. In 1928 it was isolated and by 1933 it was being synthesised. It was given the name ascorbic (against scurvy) acid.

James Lind didn’t know it but he had effectively invented the clinical trial. He had a hunch. He tested it against comparisons. He had a clear outcome. As rudimentary as it was this is still the model we use today. Whenever we come up with a research question we are following the tradition of a ship’s surgeon and his citrus fruit.

Thanks for reading.

- Jamie

Hippocrates to Helsinki: Medical Ethics


On 2nd June 1948 seven men were hanged for crimes committed in World War Two. Although all held some form of military rank, none had actually fired a gun in combat. Four of them were doctors. They, along with sixteen other defendants, had been on trial from 9th December 1946 to 20th August 1947 to answer for the horrors of Nazi human experimentation under the guise of Medicine. A common defence in response to the charges was that there was no agreed standard saying what they had done was wrong. After almost 140 days of proceedings, including the testimony of 85 witnesses and the submission of almost 1,500 documents, the judges disagreed.

The Nuremberg trials are rightfully held up as a landmark of medical ethics. Yet they are only one point on a timeline that stretches back to the very beginnings of Medicine. This musing is a brief journey through that timeline.

The Hippocratic Oath



We start with Hippocrates (c. 460 BCE to c. 370 BCE), a Greek physician considered the Father of Medicine. The Hippocratic Oath is attributed to him, although it may have been written after his death. The oldest surviving copy dates to circa 275 CE. The original text provides an ethical code for the physician to base themselves on:

I swear by Apollo Physician, by Asclepius, by Hygieia, by Panacea, and by all the gods and goddesses, making them my witnesses, that I will carry out, according to my ability and judgment, this oath and this indenture. To hold my teacher in this art equal to my own parents; to make him partner in my livelihood; when he is in need of money to share mine with him; to consider his family as my own brothers, and to teach them this art, if they want to learn it, without fee or indenture; to impart precept, oral instruction, and all other instruction to my own sons, the sons of my teacher, and to indentured pupils who have taken the physician’s oath, but to nobody else. I will use treatment to help the sick according to my ability and judgment, but never with a view to injury and wrong-doing. Neither will I administer a poison to anybody when asked to do so, nor will I suggest such a course. Similarly I will not give to a woman a pessary to cause abortion. But I will keep pure and holy both my life and my art. I will not use the knife, not even, verily, on sufferers from stone, but I will give place to such as are craftsmen therein. Into whatsoever houses I enter, I will enter to help the sick, and I will abstain from all intentional wrong-doing and harm, especially from abusing the bodies of man or woman, bond or free. And whatsoever I shall see or hear in the course of my profession, as well as outside my profession in my intercourse with men, if it be what should not be published abroad, I will never divulge, holding such things to be holy secrets. Now if I carry out this oath, and break it not, may I gain for ever reputation among all men for my life and for my art; but if I break it and forswear myself, may the opposite befall me.

The oath, in updated iterations, has been used as a pledge for doctors to make on graduation.


The Formula Comitis Archiatrorum is the earliest known text on medical ethics from the Christian era. It was written by Magnus Aurelius Cassiodorus (c. 484-90 to c. 577-90 CE), a statesman and writer serving in the administration of Theodoric the Great, king of the Ostrogoths. It laid out a code of conduct for physicians to align their lives and medical practice with.

Ethics of the Physician was penned by Ishāq bin Ali al-Rohawi, a 9th century Arab physician, and is the first medical ethics book in Arab medicine. It contains the first documented description of the peer review process where a physician’s notes were reviewed by their peers.

Primum non nocere and the beginnings of ‘medical ethics’

The phrase ‘primum non nocere’ (first do no harm) is often attributed to Hippocrates and the Hippocratic Oath. The exact author is unknown, however. One study searched medical writings back to the Middle Ages and found the earliest appearance in an 1860 text, which attributed the phrase to the English physician Thomas Sydenham.

Thomas Percival


In 1794, Thomas Percival (1740-1804) created one of the first modern codes of medical ethics in a pamphlet, which was expanded in 1803 into a book: Medical Ethics; or, a Code of Institutes and Precepts, Adapted to the Professional Conduct of Physicians and Surgeons. This was the first use of the phrase ‘medical ethics’. The book heavily influenced the American Medical Association code, which was adopted in 1847.

An Introduction to the Study of Experimental Medicine was written by the French physiologist Claude Bernard (1813-1878) in 1865. Bernard’s aim in the Introduction was to demonstrate that medicine, in order to progress, must be founded on experimental physiology.

Nuremberg and Helsinki

Following the failed defence of the accused in the Doctors’ Trial, the Nuremberg Code was drawn up in 1947. It is a ten-part code guiding research ethics:

  1. The voluntary consent of the human subject is absolutely essential.

  2. The experiment should be such as to yield fruitful results for the good of society, unprocurable by other methods or means of study, and not random and unnecessary in nature.

  3. The experiment should be so designed and based on the results of animal experimentation and a knowledge of the natural history of the disease or other problem under study that the anticipated results will justify the performance of the experiment.

  4. The experiment should be so conducted as to avoid all unnecessary physical and mental suffering and injury.

  5. No experiment should be conducted where there is an a priori reason to believe that death or disabling injury will occur; except, perhaps, in those experiments where the experimental physicians also serve as subjects.

  6. The degree of risk to be taken should never exceed that determined by the humanitarian importance of the problem to be solved by the experiment.

  7. Proper preparations should be made and adequate facilities provided to protect the experimental subject against even remote possibilities of injury, disability, or death.

  8. The experiment should be conducted only by scientifically qualified persons. The highest degree of skill and care should be required through all stages of the experiment of those who conduct or engage in the experiment.

  9. During the course of the experiment the human subject should be at liberty to bring the experiment to an end if he has reached the physical or mental state where continuation of the experiment seems to him to be impossible.

  10. During the course of the experiment the scientist in charge must be prepared to terminate the experiment at any stage, if he has probable cause to believe, in the exercise of the good faith, superior skill and careful judgment required of him that a continuation of the experiment is likely to result in injury, disability, or death to the experimental subject.

Reading this code, it’s not hard to see the inspiration of the trials behind it, in particular the emphasis on empowering patients. The Nuremberg Code remains a keystone of medical research.

Another keystone is The Helsinki Declaration, a set of ethical principles for research on human subjects developed by the World Medical Association in 1964. It is not a law in itself but has been used to form the basis of laws in the signatory countries. The original draft contained five basic principles, a section on clinical research and a section on non-therapeutic research. It has been revised several times since, most recently in 2013, when it was expanded to thirty-seven principles.

1979 saw the publication of the first edition of Principles of Biomedical Ethics by the philosophers Tom Beauchamp and James Childress. It is this book which gave us the four ethical principles often quoted by medical students:

  • Beneficence (promoting well being)

  • Non-maleficence (not doing harm - harking back to primum non nocere)

  • Autonomy (the patient’s right to decide for themselves)

  • Justice (fairness across the community at large)


Despite Nuremberg and Helsinki, a number of scandals occurred throughout the 20th century reflecting inequalities and prejudices in society. In 1951, a young Black American woman named Henrietta Lacks had a piece of her cancerous tumour extracted without her knowledge. The cells from Henrietta’s cervical tumour, known as HeLa cells, were the first human cell line to survive indefinitely in vitro and have since been used in testing new medical treatments, most notably the polio vaccine.

Henrietta Lacks


However, the case raised serious concerns surrounding the lack of informed consent and the taking of samples from living patients. In 1997, President Bill Clinton issued a formal apology for the Tuskegee Syphilis Study, which took place between 1932 and 1972 in the state of Alabama. This infamous government experiment allowed hundreds of African-American men to go untreated for their syphilis so doctors could study the effects of the disease. This continued even after penicillin was found to be an effective cure. It was one of a number of unethical studies performed in the 20th century, including the Guatemalan Syphilis Study and the Skid Row Cancer Study. When we talk about unethical research we often think of the Nazis and other deplorable regimes, but these studies remind us that even in the democratic West we are more than capable of committing horrible acts against vulnerable people.

Reproductive advances & rights and Alder Hey

The latter part of the 20th century saw a growth in women’s rights, including their reproductive rights, and in the abilities of reproductive medicine. The Abortion Act was passed in the UK in 1967 after widespread evidence that unsafe illegal abortion often resulted in maternal mortality and morbidity. The Act made abortion legal in all of Great Britain (but not Northern Ireland), originally up to a gestational limit of 28 weeks, lowered to the current 24 weeks in 1990. In 1973 the US Supreme Court voted in favour of Jane Roe in the case of Roe vs. Wade. The Court ruled 7-2 that access to safe abortion is a fundamental right. The arguments contained within this court case are still prominent topics of debate to this day.

Louise Joy Brown was born in Oldham, UK in 1978, the first baby born as a result of in-vitro fertilisation (IVF). The Report of the Committee of Inquiry into Human Fertilisation and Embryology, commonly called The Warnock Report, was published in 1984 as a result of a UK governmental inquiry into the social impacts of infertility treatment and embryological research. The Human Fertilisation & Embryology Act 2008 updated and revised the Human Fertilisation and Embryology Act 1990, introducing key provisions governing IVF and human reproduction. These include a ban on sex selection for non-medical purposes and the requirement that all human embryos be subject to regulation.

The Human Tissue Act 2004 created the Human Tissue Authority to "regulate the removal, storage, use and disposal of human bodies, organs and tissue." It was passed in response to the Alder Hey organ scandal, which involved the removal and retention of children’s organs without parental knowledge or consent.

The right to die

In February 1990, Terri Schiavo collapsed at home after suffering a cardiac arrest. The oxygen supply to her brain was cut off and, as a result, she entered a "persistent vegetative state" from which she would never wake. For years, Terri’s husband fought legal battles against her parents, and eventually the state of Florida, to enable her feeding tube to be removed. Her case was one of several sparking an enduring national debate over end-of-life care and the right to die. Dignitas was founded in 1998 by Ludwig Minelli, a Swiss lawyer specialising in human rights law. Since its foundation Dignitas has helped over 2,100 people with severe physical illnesses, as well as the terminally ill, to end their own lives. In the UK the Assisted Dying Bill was blocked by the House of Lords in 2007. The bill would have allowed only those of sound mind and with less than six months to live to seek an assisted death. Both euthanasia and assisted suicide remain illegal in the UK.

In 2016 Charlie Gard was born in the UK with an exceedingly rare and fatal inherited disease - infantile-onset encephalomyopathic mitochondrial DNA depletion syndrome (MDDS). The ensuing legal battle between Charlie’s parents and his doctors over withdrawing life support provoked passionate debate throughout the world. It illustrated the power of social media to both facilitate and hamper debate around autonomy, end-of-life care and parental rights, with ugly scenes in which violence was threatened against the staff looking after Charlie. It also showed how complicated, nuanced cases can be hijacked by various individuals and groups, such as the American right, to spread their own agendas.

How we can show we respect research ethics

For anyone working in clinical research, Good Clinical Practice (GCP) is the international ethical, scientific and practical standard. Everyone involved in research must be trained or appropriately experienced to perform the specific tasks they are being asked to undertake. Compliance with GCP provides public assurance that the rights, safety and well-being of research participants are protected and that research data are reliable. More information can be found on the website of the National Institute for Health Research, which offers both introductory and refresher courses.

Looking at the timeline of medical ethics it’s tempting to think that we’ve never been more advanced or ethical and that the whole movement is an evolution towards enlightenment. To a certain extent that’s true; of course things are better than they were. But we can’t be complacent. Ethics often lag behind medical advances. As we become better at saving the lives of premature babies, as our population ages with more complex diseases and as our resuscitation and intensive care improve, there will undoubtedly be more debates. As the case of Charlie Gard showed, these can be very adversarial. Social media and fake news will no doubt continue to play a huge part in any scientific discussion, be it in medicine or climate change. All the more reason to stick to our principles and always aim to do no harm.

Thanks for reading

- Jamie

Not to be sneezed at: How we found the cause of hay fever


The recent good weather in the UK has seen barbecues dusted off and people taking to the garden. Cue sneezing and runny eyes and noses. Yes, with the nice weather comes hay fever. Hay fever, or allergic rhinitis, affects somewhere between 26% and 30% of people in the UK. Symptoms include sneezing, swelling of the conjunctivae and eyelids, a runny nose (rhinorrhoea) and a blocked nose. Sometimes it can even result in hospital admission and, rarely, death.

We all know that pollen is the cause of hay fever. Pollen in the air is inhaled and trapped by hairs in the membrane of the nostrils. There the body responds to proteins on the surface of the pollen. These proteins are called allergens. Different types of pollen carry different allergens. A type of white blood cell called a B cell produces an antibody, immunoglobulin E or IgE, specific to a particular allergen. The IgE then binds to cells called mast cells. These are found in some of the most sensitive parts of the body, including the skin, blood vessels and respiratory system. Mast cells contain 500 to 1,500 granules holding a mix of chemicals including histamine. On re-exposure, the allergen binds to the IgE sitting on the mast cells, and this triggers the mast cells to release their histamine. It is histamine which causes the symptoms of hay fever by binding to histamine receptors throughout the body. Antihistamines work by binding to these receptors instead of histamine and blocking them.

But two centuries ago hay fever was a mystery. It took a couple of doctors, themselves plagued by sneezing and blocked noses, to research the problem and link it to pollen. This musing is their story.

The first description of what we would call hay fever came in 1819 in a study presented to the Medical and Chirurgical Society called ‘Case of a Periodical Affection of the Eyes and Chest’. The case was a patient called ‘JB’, a man “of a spare and rather delicate habit”. The patient was 46 and had suffered from catarrh (blockage of the sinuses and a general feeling of heaviness and tiredness) every June since the age of eight. Numerous treatments including bleeding, cold baths, opium and vomiting were tried to no avail. What makes this study even more interesting is that ‘JB’ was the author, John Bostock, a Liverpool-born doctor who was not afraid to experiment on himself.

John Bostock

Bostock tried to broaden his research by looking for more sufferers; he found 28. In 1828 he published his work and called the condition “catarrhus aestivus” or “summer catarrh”. After Bostock published, an idea spread amongst the public that the smell of hay was to blame, which led to the colloquial term “hay fever”. Bostock didn’t agree, feeling that the heat of summer was to blame, and rented a clifftop house near Ramsgate, Kent, for three consecutive summers, which eased his symptoms. In 1827 The Times reported that the Duke of Devonshire was "afflicted with what is vulgarly called the Hay-fever, which annually drives him from London to some sea-port". In 1837, a few days before King William IV died, the same paper reported that the king had "been subject to an attack of hay fever from which he has generally suffered for several weeks".

Charles Harrison Blackley

In 1859 another doctor, Charles Harrison Blackley, sniffed a bouquet of bluegrass and sneezed. Convinced that pollen was to blame, he methodically set out to prove it, experimenting on himself and seven other subjects. He first applied pollen to the nose and noted how it produced the symptoms of hay fever. He then covered microscope slides with glycerine and left them in the sunshine under a little roof for 24 hours before removing them and studying them under a microscope. He was thus able to count the number of pollen granules in the air. He noted the prevalence of grass pollen in June, the time when symptoms were at their worst. To prove that wind could carry pollen great distances he then sent similar slides up on kites to altitudes of 500 to 1,500 feet, and discovered that the slides at altitude caught an even greater number of granules than those at ground level. In 1873 he published his work, Experimental Researches on the Causes and Nature of Catarrhus aestivus.

Fast forward to 1906. An Austrian paediatrician, Clemens von Pirquet, notices that if patients vaccinated against smallpox with horse serum are given a second dose they react quickly and severely. He correctly deduces that the body ‘remembers’ certain substances and produces antibodies against them. He calls this ‘allergy’. In the 1950s mast cells are discovered. In 1967 IgE is identified. The mechanism of allergic rhinitis and other allergies is finally understood. With this understanding came new lifesaving treatments such as the EpiPen.

For a lot of us hay fever is an annual nuisance. But as we reach for the antihistamines and tissues we should thank a couple of 19th century sufferers who happened to turn their symptoms into a life’s work and, as a result, make hay fever that bit easier for us.

Thanks for reading

- Jamie

Going Mobile: A review of mobile learning in medical education


Next month will mark the 50th anniversary of mankind’s greatest accomplishment: landing human beings on the Moon. Yet today the vast majority of our learners each carry in their bag or pocket a device with millions of times the computing power of the machines used to meet this achievement. This is why one of my passions as an educator is mobile learning and the opportunities our unprecedented age now offers. I’ve enjoyed learning how to create resources such as a podcast and a smartphone application and seeing how these have innovated the way I teach.

The Higher Education Academy defines mobile learning as “the use of mobile devices to enhance personal learning across multiple contexts.” Mobile learning itself is a subset of TEL or ‘Technology Enhanced Learning.’ There’s repetition of a key word: enhance/enhanced. We’ll come back to that word later.

This musing looks at some of the current evidence of mobile learning use in medical education and tries to pinpoint some themes and things we still need to iron out if we’re going to make the most out of mobile learning.

From Del Boy to Web 2.0

It’s safe to say that mobile phones have come a long way since being used as a cumbersome prop in Only Fools and Horses. They are now a key part of everyday life.

More than 4 billion people, over half the world’s population, now have access to the internet, with two thirds using a mobile phone, more than half of which are smartphones. By 2020, 66% of new global connections will occur via a smartphone. We are now in an era of connected devices: touchscreen phones and tablets as well as smart wearables such as glasses or watches. Humans have been described as “technology equipped mobile creatures that are using applications, devices and networks as a platform for enhancing their learning in both formal and informal settings.” It’s been argued that, as society is now heavily characterised by the widespread use of mobile devices and the connectivity they afford, there is a need to re-conceptualise the idea of learning in the digital age.

Mobile Learning Workshop.005.jpeg

A key development in the potential of mobile learning was Web 2.0. The first iterations of the internet were themselves as clunky as Del Boy’s mobile: fixed, un-editable and writable only by a select few. Web 2.0 is known as the ‘participatory web’ of blogs, podcasts and wikis. It is now possible for people with no computing background whatsoever to produce and share learning resources with massive success, such as Geeky Medics.

Another aspect interlinked with these social and technological changes has been the shortening of the half-life of knowledge: the time it takes for half of the knowledge in a field to be superseded. By 2017 the half-life of medical knowledge was estimated at 18-24 months. It is estimated that by 2021 it will be only 73 days. It’s therefore fairly easy to envisage a world where libraries of books will be out of date, and students will instead be their own librarians, accessing knowledge on the go via their mobile devices.
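
To get a feel for what those half-life figures imply, here is a minimal sketch using a simple exponential-decay model. The model (and the function name) is my own illustrative assumption, not something from the studies cited:

```python
def fraction_still_current(years_elapsed, half_life_years):
    """Fraction of a body of knowledge still current after `years_elapsed`,
    assuming exponential decay: each half-life, half of what remains
    is superseded."""
    return 0.5 ** (years_elapsed / half_life_years)

# Half-life of 2 years (the upper end of the 2017 estimate of 18-24 months):
print(fraction_still_current(5, 2.0))       # roughly 0.18 still current
# Half-life of 73 days (73/365 = 0.2 years, the 2021 projection):
print(fraction_still_current(5, 73 / 365))  # effectively zero
```

On this crude model, a textbook written five years ago under a 73-day half-life would be almost entirely out of date, which is the force of the argument for mobile, continuously updated resources.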

Mobile Learning Workshop.006.jpeg

In education when we look at professionals collaborating we think of a community of practice. Thanks to Web 2.0 the collaboration of professionals, patients and students in medicine has been given the epithet of ‘Medicine 2.0’. This represents a new community of practice and how technology links all of us in healthcare. Health Education England argue that digital skills and knowledge should be “a core component” of healthcare staff education. In order to reflect the new world of Medicine 2.0 medical schools in the US and Hungary have set up courses aimed at familiarising students with social media. A Best Evidence Medical Education review showed that mobile resources help with the transition from student to professional.

Towards collaboration

The general movement in mobile learning is towards collaboration. The unique features of mobile devices - in particular their portability, social connectivity and sense of individuality - make online collaboration more likely than it is with desktop computers, which lack those features. A meta-analysis of 48 peer-reviewed journal articles and doctoral dissertations from 2000 to 2015 found that mobile technology has produced meaningful improvements in collaborative learning. The focus is on bringing people together via their mobile devices to share learning and practice.

Mobile Learning Workshop.007.jpeg

Perhaps the extreme of mobile learning and collaboration has been the advent, since 2008, of massive open online courses (MOOCs). These are courses without any fees or prerequisites beyond technological access; some are delivered to tens of thousands of learners. As a result, MOOCs, along with mobile learning in general, have been credited with democratising education. MOOCs have been suggested as best augmenting traditional teaching in the ‘flipped classroom’ approach, in which students are introduced to learning material before the classroom session, with that time then used to deepen understanding. In general the HEA credits mobile and online resources with providing an accessible toolkit for delivering flipped learning.

Medical students and mobile learning

Mobile Learning Workshop.008.jpeg

The tradition has been to divide students into digital natives (those who grew up with technology) and digital immigrants (those for whom technology arrived later in life). This distinction assumes that younger people have innate skills with, and a preference for, technology. However, more recent evidence suggests the distinction doesn’t hold and is unhelpful. Learners whose age places them in the category of digital native still need, and benefit from, teaching aimed at digital literacy. The notion of digital natives belongs in the same file as learning styles: they just don’t exist.

Mobile Learning Workshop.009.jpeg

Research into medical students’ use of mobile learning tends to focus on evaluating a specific intervention: Facebook, a novel wiki platform, a MOOC or a tailor-made smartphone application. While that has a use, I’d argue that most students will appreciate any new learning intervention. As a result we’re still in the early days of understanding how students use mobile resources. That said, the evidence suggests that students quickly find a preferred way of using Web 2.0 resources. Whilst it’s been suggested that male students are less likely to ask questions via a Web 2.0 resource, students overall seem to find such resources a safe environment, and one more comfortable than clinical teaching. It’s also been suggested that use of mobile learning resources is linked to a student’s intrinsic motivation: the more motivated a student is, the more likely they are to use a mobile resource. Medical students themselves report concerns regarding privacy and professional behaviour when using social media in education.

A 2013 systematic review of social media use in medical education found an association with improved knowledge, attitudes and skills. The most often reported benefits were in learner engagement, feedback, collaboration and professional development. The most commonly cited challenges were technical difficulties, unpredictable learner participation and privacy/security concerns. A systematic review the following year, however, which included only publications with randomisation, reviews and meta-analyses, concluded that despite the wide use of social media there were no significant improvements in the learning process, and that some novel mobile learning resources don’t result in better student outcomes.

Mobile Learning Workshop.010.jpeg

A recent review of the literature on mobile learning in medical education suggests that it remains a supplement only. There is still no consensus on the most efficient use of mobile learning resources in medical education, but the ever-changing nature of the resources themselves probably makes this inevitable. There’s that word enhance again. Is this the limit of mobile learning in medical education: to enhance more traditional teaching, not replace it?

Mobile Learning Workshop.011.jpeg

There’s also the issue of whether students want more mobile resources. The most recent student survey by the Higher Education Policy Institute (HEPI) found that students prefer direct contact time with educators over other learning events. 44% of students rating their course as poor or very poor value for money cited a lack of contact hours as part of their complaint; only tuition fees and teaching quality were cited more often. More students (19%) were dissatisfied with their contact time than were neutral (17%), an increase on the previous year. However, the survey did not explore mobile resources either as an alternative to contact time or how students viewed their educators creating resources for them. 62% of Medicine and Dentistry students reported that they felt their tuition fees were value for money, the highest of any subject.

According to the HEPI, students in the UK are conservative in their preferred learning methods, meaning any innovation takes time to become embedded in a curriculum. The HEPI recommends engaging with students and involving them in the development of any resource, building technology into curriculum design, and developing a nationwide evidence and knowledge base on what works.

Mobile Learning Workshop.012.jpeg

This is being done. Case studies in the UK show that the success of mobile learning in higher education has involved some degree of student inclusion alongside educators during design, though there’s no published evidence of this being done in UK medical schools. One example was published from Vanderbilt University, Nashville: a committee formed of administrators, educators and selectively recruited students. This committee serves four functions: liaising between students and administration; advising on the development of institutional educational technologies; developing, piloting and assessing new student-led educational technologies; and promoting biomedical and educational informatics within the school community. The authors report benefits from rapid improvements to educational technologies that meet students’ needs and enhance learning opportunities, as well as the fostering of a campus culture of awareness and innovation in informatics and medical education.

An example from a European medical school comes from the Faculty of Medicine of Universität Leipzig, Germany. Rather than a physical committee, their E-learning and New Media Working Group established an online portal for discussion with students about mobile resources, as well as expanding the university’s presence across social media to help disseminate information.

The HEPI have also recommended that the UK higher education sector develop an:

“evidence and knowledge base on what works in technology-enhanced learning to help universities, faculties and course teams make informed decisions. Mechanisms to share, discuss and disseminate these insights to the rest of the sector will also be required.”

Medical educators and mobile learning

Mobile Learning Workshop.013.jpeg

Teachers’ attitudes toward, and ability with, mobile resources are a major influence on whether students decide to use them. It’s been suggested that Web 2.0 offers opportunities for educator innovation. However, teachers have been shown to be less engaged than their students in utilising Web 2.0 resources, especially in accessing materials outside the classroom.

I’ve not been able to find any research looking at the perceptions of UK medical educators toward mobile learning. However, a recent online survey of 284 medical educators in Germany did show some interesting findings. Respondents valued interactive patient cases, podcasts and subject-specific apps as the most constructive teaching tools, while Facebook and Twitter were considered unsuitable as platforms for medical education. No relationship was found between an educator’s demographics and their use of mobile learning resources.

* * *

It’s obvious that mobile learning offers great opportunities for medical students and educators. I hope this review has shown some of the trends in our current understanding of mobile learning in medical education: the future seems to be collaboration; digital natives don’t exist and students need tuition in how to use mobile resources; research is currently limited to evaluating specific interventions; students value contact time and need to be included if we are to make the most of resources; and we need to know more about what teachers think.

This is a time for leadership, for educators to start to fill these gaps in knowledge and expand on these trends. In September 1962 President Kennedy challenged his country to go to the Moon by the end of the decade. To say this was ambitious is an understatement; Americans had only got into space barely a year earlier. Yet the country rose to the challenge and on 20th July 1969 man walked on the Moon. I like how he said it: “We CHOOSE to go to the Moon.” Challenges are there to be met. We can meet the challenges of mobile learning in medical education if we choose to. We can choose to use mobile learning and help shape it. Or not. That choice is ours.

Medical school medieval style


It’s tempting to see medieval doctors as a group of quacks and inadequates stuck between the Dark Ages and the enlightened Renaissance. Certainly, it was a dangerous time to be alive and sick. In the twelfth century the majority of people lived in rural servitude and received no education. Average life expectancy was 30-35 years, with 1 in 5 children dying at birth. Healthcare policy, such as it was, was based on Christian teachings: that it was everyone’s duty to care for the sick and poor. To that end medieval hospitals more resembled modern day hospices, providing basic care for the destitute and dying with nowhere else to go. Education and literacy were largely the preserve of the clergy and it was in monasteries where most hospitals could be found. The Saxons built the first hospital in England in 937 CE, and many more followed after the Norman Conquest in 1066, including St. Bartholomew’s of London, built in 1123 CE. The sick were cared for by a mix of practitioners including physicians, surgeons, barber-surgeons and apothecaries. Of these only physicians would have received formal training. The vast majority of people providing healthcare were practising a mix of folklore and superstition.

However, it was in the early medieval period that the first medical schools were formed and the first ever medical students went to university. In this musing I’m looking at what medical education was like in the Middle Ages at the most prestigious university of the age as well as the common theories behind disease and cure.

The Schola Medica Salernitana was founded in the 9th century in the Southern Italian city of Salerno. In 1050 one of its teachers, Gariopontus, wrote the Passionarius, one of the earliest written records of Western Medicine as we would recognise it. Gariopontus drew on the teachings of Galen (c. 129-199 CE) and latinised Greek terms. In doing so he formed the basis of several modern medical terms, such as cauterise. Another early writing mentioned a student: “ut ferrum magnes, juvenes sic attrahit Agnes” (“Agnes attracts the boys as a magnet attracts iron”). This shows that the first medical school in the world had female students.

The medical school published a number of treatises, such as work by a woman called Trotula on childbirth and uterine prolapse and work on the management of cranial and abdominal wounds. In head wounds it was recommended to feel for and then surgically remove pieces of damaged skull. In abdominal trauma students were advised to try to put any protruding intestine back inside the abdomen. If the intestine was cold it was to be warmed first by wrapping the intestines of a freshly killed animal over it; the wound was then left open before a drain was inserted.

Anatomy remained based on the work of Galen. Doctors were encouraged to dissect pigs as their anatomy was felt to be the most closely related to humans. However, the teachers were more innovative when it came to disseminating knowledge, doing so in verse form, often with a spice of humour. 362 of these verses were printed for the first time in 1480, a number which would increase to 3520 verses in a later edition. By 1224 the Holy Roman Emperor Frederick II made it obligatory that anyone hoping to practise Medicine in the kingdom of Naples should seek approval from the masters of Salerno medical school.

But Salerno medical school did not teach any other subjects and so did not evolve into a studium generale, or university, as these began to spring up. By the fourteenth century the most prestigious medical school in Europe was at the University of Bologna, founded in 1088, the oldest university in the world. In the United Kingdom medical training began at the University of Oxford in the 12th century but was haphazard and based on apprenticeship. The first formal UK medical school would not be established until 1726 in Edinburgh.

Philosophia et septem artes liberales, the seven liberal arts. From the Hortus deliciarum of Herrad of Landsberg (12th century)

The University of Bologna was run along democratic lines, with students choosing their own professors and electing a rector who had precedence over everyone, including cardinals, at official functions.

The Medicine course lasted 4 years and consisted of forty-six lectures. Each lecture focused on one particular medical text as written by Hippocrates (c. 460-370 BCE), Galen, or Avicenna (c. 980-1037 CE). Students would also read texts by these authors and analyse them using the methods of the French philosopher Peter Abelard to draw conclusions. His work Sic et Non had actually been written as a guide for debating contrasting religious texts, not scientific work. This reflected how religion and philosophy dominated the training of medical students. The university was attached to a cathedral and students were required to be admitted to the clergy prior to starting their studies. In addition to Medicine, students were also required to study the seven classical liberal arts: Grammar, Rhetoric, Logic, Geometry, Arithmetic, Music and Astronomy.

At the time knowledge of physiology and disease focused on the four humors: phlegm, blood, black bile, and yellow bile. Imbalance of one was what caused disease; for example, too much phlegm caused lung disease and the body had to cough it up. This was a theory largely unchanged since its inception by the ancient Egyptians. This is why blood letting and purging were often the basis of medieval medicine. The state of imbalance was called dyskrasia while the perfect state of equilibrium was called eukrasia. Disease was also linked to extremes of temperature and personality. For example, patients who were prone to anger or passion were at risk of overheating and becoming unwell. Patients would also be at risk if they went to hot or cold places, and so doctors were taught to advise maintaining a moderate personality and avoiding extreme temperatures.

Diet was taught as important in preventing disease. During blood letting doctors were taught to strengthen the patient’s heart through a diet of rose syrup, bugloss or borage juice, the bone of a stag’s heart, or sugar mixed with precious stones such as emerald. Other items such as lettuce and wine were taught as measures to help balance the humors.

Pharmacy was similarly guided by ancient principles. The Doctrine of Signatures dated back to the days of Galen and was adopted by Christian philosophers and medics. The idea being that in causing disease God would also provide the cure and make that intended cure apparent through design in nature. For example, eyebright flowers were said to resemble the human eye, while skullcap seeds were said to resemble the human skull. This was interpreted as God’s design that eyebright was to be used as a cure for eye disease and skullcap seeds for headaches.

God, the planets and polluted air or miasma were all blamed as the causes of disease. When the Black Death struck Italy in 1347 a contemporary account by the scholar Giovanni Villani blamed “the conjunction of Saturn and Jupiter and Mars in the sign of Aquarius” while the official Gabriel de Mussis noted “the illness was more dangerous during an eclipse, because then its effect was enhanced”. Gentile da Foligno, a physician and professor at the University of Bologna, blamed a tremor felt before the plague hit for opening up underground pools of stagnant air and water. Doctors therefore were taught to purify either the body, through poultices of mallow, nettles, mercury, and other herbs, or the air, by breathing through a posy of flowers, herbs and spices. De Mussis mentioned that “doctors attending the sick were advised to stand near an open window, keep their nose in something aromatic, or hold a sponge soaked in vinegar in their mouth.” During the Black Death a vengeful God was often blamed. The flagellants were a group of religious zealots who would march and whip themselves as an act of penance to try and appease God.

There’s a sense here of being close but not quite. Of understanding that balance is important for the body, that environmental factors can cause disease and that there was something unseen spreading disease. Close but not yet there. The Middle Ages isn’t known as a time of enlightenment. That would come with the Renaissance. But it was not a barren wasteland. It was a time of small yet important steps.

It was in the Middle Ages that laws against human dissection were relaxed and knowledge of human anatomy began to improve. An eminent surgeon of the time, Guy de Chauliac, would lobby for surgeons to require university training and so started to create equivalence with physicians. Physicians began to use more observations to help them diagnose disease, in particular urine, as seen in the Fasciculus Medicinae, published in 1491 and then the pinnacle of medical knowledge (this book also contained Wound Man as discussed in a previous musing). The scholarly approach encouraged at medical school led to methodical documentation from several physicians; it is through these writings that we know so much about the Black Death and other medieval illnesses. An English physician, Gilbertus Anglicus (1180-1250), teaching at the Montpellier school of Medicine, would be one of the first to recognise that diseases such as leprosy and smallpox were contagious.

Perhaps most importantly, it was in this period that the first medical schools and universities were established. These particular small steps would begin the role of doctor as scholar and start to legislate the standards required of a physician. This was a vital first step without which future advances could never have been possible.

Thanks for reading

- Jamie

Medicine and Game Theory: How to win


You have to learn the rules of the game; then learn to play better than anyone else - Albert Einstein

Game theory is a field of mathematics which emerged in the 20th century looking at how players in a game interact. In game theory any interaction between two or more people can be described as a game. In this musing I’m looking at how game theory can influence healthcare both in the way we view an individual patient as well as future policy.

There are at least two kinds of games. One could be called finite, the other infinite. A finite game is played for the purpose of winning, an infinite game for the purpose of continuing the play.     

James P. Carse Author of Finite and Infinite Games

Game theory is often mentioned in sports and business

In a finite game all the players and all the rules are known. The game also has a known end point. A football match would therefore be an example of a finite game. There are two teams of eleven players with their respective coaches. There are two halves of 45 minutes and clear laws of football officiated by a referee. After 90 minutes the match is either won, lost or drawn and is definitely over.

Infinite games have innumerable players and no end points. Players can stop playing, join, or be absorbed by other teams. The goal is not an endpoint but to keep playing. A football season, or even several football seasons, could be described as an infinite game. Key to infinite games, then, are a vision and principles. A team may lose one match but success is viewed by the team remaining consistent with that vision, such as avoiding relegation every season or promoting young talent. Athletic Club in Spain are perhaps the prime example of this. Their whole raison d'être is that they only use players from the Basque Region of Spain. This infinite game of promoting local talent eschews any short-term game. In fact their supporters regularly report they’d rather get relegated than play non-Basque players.

Problems arise by confusing finite and infinite games. When Sir Alex Ferguson retired as Manchester United manager after 27 years in 2013 the club attempted to play an infinite game. They chose as his replacement David Moyes, a manager with a similar background and ethos to Ferguson, and gave him a long-term contract. Within a year he was fired, and since then United have been playing a finite game, choosing more short-term appointments, Louis van Gaal and Jose Mourinho, rather than following a vision.

It’s easy to see lessons for business from game theory. You may get a deal or not. You may have good quarters or bad quarters. But whilst those finite games are going on you have your overall business plan, an infinite game. You’re playing to keep playing by staying in business.

What about healthcare?

So a clinician and patient could be said to be players in a finite game competing against whatever illness the patient has. In this game the clinician and patient have to work together and use their own experiences to first diagnose and then treat the illness. The right diagnosis is made and the patient gets better. The game is won and over. Or the wrong diagnosis is made and the patient doesn’t get better. The game is lost and over. But what about if the right diagnosis is made but for whatever reason the patient doesn’t get better? That finite game is lost. But what about the infinite game?

Let’s say our patient has an infection. That infection has got worse and now the patient has sepsis. In the United Kingdom we have very clear guidelines on how to manage sepsis from the National Institute for Health and Care Excellence (NICE). Management is usually summed up as the ‘Sepsis Six’. There are clear principles about how to play this game. So we follow these principles as we treat our patient. We follow the Sepsis Six. But they aren’t guarantees. We use them because they give us the best possible chance to win this particular finite game. Sometimes it will work and the patient will get better and we win. Sometimes it won’t and the patient may die, even if all the ‘rules’ are followed, due to reasons beyond the control of any of the players. But whilst each individual patient may be seen as a finite game there is a larger infinite game being played. By making sure we approach each patient with these same principles we not only give them the best chance of winning their finite game but we also keep the infinite game going: ensuring each patient with sepsis is managed in the same optimum way. By playing the infinite game well we have a better chance of winning finite games.

This works at the wider level too. For example, if we look at pneumonia we know that up to 70% of patients develop sepsis. We know that smokers who develop chronic obstructive pulmonary disease (COPD) have up to 50% greater risk of developing pneumonia. We know that the pneumococcal vaccine has reduced pneumonia rates especially amongst patients in more deprived areas. Reducing smoking and ensuring vaccination are infinite game goals and they work. This is beyond the control of one person and needs a coordinated approach across healthcare policy.


Are infinite games the future of healthcare?

In March 2015 just before the UK General Election the Faculty of Public Health published their manifesto called ‘Start Well, Live Better’ for improving general health. The manifesto consisted of 12 points:

The Start Well, Live Better 12 priorities. From Lindsey Stewart, Liz Skinner, Mark Weiss, John Middleton, ‘Start Well, Live Better—a manifesto for the public's health’, Journal of Public Health, Volume 37, Issue 1, March 2015, Pages 3–5.

There’s a mixture of finite goals here - establishing a living wage for example - and some infinite goals as well, such as universal healthcare. The problem is that finite game success is much more short-term and easier to measure than with infinite games. We can put a certain policy in place and then measure its impact. However, infinite games aimed at improving a population’s general health take years if not decades to show tangible benefit. Politicians who control healthcare policy and heads of department have a limited time in office and need to show benefits immediately. The political and budgetary cycles are short. It is therefore tempting to play only finite games rather than infinite ones.

The National Health Service Long Term Plan is an attempt to commit to playing an infinite game. The NHS England Chief Simon Stevens laid out five priorities for focusing NHS health spending over the next 5 years: mental health, cardiovascular disease, cancer, child services and reducing inequalities. This comes after a succession of NHS plans since 2000 which all focused on increasing competition and choice. The King’s Fund have been ambivalent about the benefit those plans made.

Since its inception the National Health Service has been an infinite game, changing how we view illness and the relationship between the state and patients. Yet if we chase finite games that are incongruous with that infinite game we put it at risk. There is a very clear link between the effect of the UK government’s austerity policy on social care and its impact on the NHS.

We all need to identify the infinite game we want to play and make sure it fits our principles and vision. We have to accept that benefits will often be intangible and appreciate the difficulties and scale we’re working with. We then have to be careful with the finite games we choose to play and make sure they don’t cost us the infinite game.

Playing an infinite game means committing to values at both a personal and institutional level. It says a lot about us and where we work. It means those in power putting aside division and ego. Above all it would mean honesty.

Thanks for reading

- Jamie

Spoiler Alert: why we actually love spoilers and what this tells us about communication


Last week the very last episode of Game of Thrones was broadcast. I was surrounded by friends and loved ones all doing everything they could to avoid hearing the ending before they’d seen it, even if this meant fingers in the ears and loud singing. I’ve only ever seen one episode so don’t worry, I won’t spoil the ending for you. But actually that wouldn’t be as bad as you think. Spoiler alert: we actually love spoilers. And knowing this improves the way we communicate.

For all we complain when someone ‘spoils the ending’ of something, the opposite is true. In 2011 a series of experiments explored the effect of spoilers on the enjoyment of a story. Subjects were given twelve stories from a variety of genres. One group was told the plot twist as part of a separate introduction. In the second group the outcome was given away in the opening paragraph, and the third group had no spoilers. The groups receiving the spoilers reported enjoying the story more than the group without spoilers. The group where the spoiler was a separate introduction actually enjoyed the story the most. This is known as the spoiler paradox.

To understand the spoiler paradox is to understand how human beings find meaning. Through what psychologists call ‘theory of mind’ we attribute meaning and intentions to other people and even inanimate objects. As a result we love stories. A lot. We therefore find stories a better way of sharing a message. The message “don’t tell lies” is an important one we’ve tried to teach for generations, but one of the best ways to teach it was to give it a story: ‘The Boy Who Cried Wolf’. Consider Aesop’s fables or the parables of Jesus. Stories have a power.

Therefore, if we know where the story is going it becomes easier for us to follow. We don’t have to waste cognitive energy wondering where the story is taking us. Instead we can focus on the information as it comes. Knowing the final point makes the ‘journey’ easier.

Think how often we’ll watch a favourite movie or read a favourite book even though we know the end. We all know the story of Romeo and Juliet but will still watch it in the theatre. We’ll still go to see a film based on a book we’ve read. Knowing the ending doesn’t detract at all. In fact, I’d argue that focusing on twists and spoilers actually detracts from telling a good story. If you’re relying on spoilers to keep your audience’s attention then your story isn’t going to stand up to much. As a fan of BBC’s Sherlock I think the series went downhill fast in Series 3 when the writers focused on plot twists rather than just telling a decent updated version of the classic stories.

So, how can knowing about the spoiler paradox shape the way we communicate?

In healthcare we’re encouraged to use the ‘SBAR’ model to communicate about a patient. SBAR (Situation, Background, Assessment and Recommendation) was originally used by the military before becoming widely adopted in healthcare, where it has been shown to improve patient safety. In order to standardise communication about a patient, SBAR proformas are often kept by phones. There’s clear guidance about the content for each section of SBAR.


Situation: why I’m calling

Background: what led to me seeing this patient

Assessment: what I’ve found and done

Recommendation: what I need from you

Handing over a patient on the phone to a senior is regularly included as a core skill to be assessed in examinations.

You’ll notice that right at the very beginning of the proforma in this photo (taken by me in the Resus room at Queens Medical Centre, Nottingham) it says ‘Presenting Complaint’. In other proformas I’ve seen this is also written as ‘Reason for call’. This makes a big impact on how easy the handover is for the other person. For example:

“Hi, is that the surgical registrar on call? My name is Jamie, I’m one of the doctors in the Emergency Department. I’ve got a 20 year old man called John Smith down here who’s got lower right abdominal pain. He’s normally well and takes no medications. The pain started yesterday near his belly button and has moved to his right lower abdomen. He’s been vomiting and has a fever. His inflammatory markers are raised. I think he has appendicitis and would like to refer him to you for assessment.”


“Hi, is that the surgical registrar on call? My name is Jamie I’m one of the doctors in the Emergency Department. I’d like to refer a patient for assessment who I think has appendicitis. He’s a 20 year old man called John Smith who’s got lower right abdominal pain. He’s normally well and takes no medications. The pain started yesterday near his belly button and has moved to his right lower abdomen. He’s been vomiting and has a fever. His inflammatory markers are raised. Could I please send him for assessment?”

Both are the same story with the same intended message: I’ve got a patient with appendicitis I’d like to refer. But which one would be easier for a tired, stressed surgeon on call to follow?


We can use this simple hack to make our presenting more effective as well. Rather than our audience sitting there trying to formulate their own ideas and meaning, which risks them either taking the wrong message home or just giving up, we must be explicit from the beginning.

“Hello, my name is Jamie. I’m going to talk about diabetic ketoacidosis, which affects 4% of our patients with Type 1 Diabetes. In particular I’m going to focus on three key points: what causes DKA, the three features we need to make a diagnosis, and how the treatment for DKA is different from other diabetic emergencies and why that difference is important.”

Your audience immediately knows what is coming and what to look out for without any ambiguity. Communication is based on stories. Knowing what is coming actually helps us follow that story. The real spoiler is that we love spoilers. Don’t try and pull a rabbit from the hat. Punchlines are for jokes. Be clear with what you want.

Thanks for reading

- Jamie


"Obviously a major malfunction" - how unrealistic targets, organisational failings and misuse of statistics destroyed Challenger


There is a saying commonly misattributed to Gene Kranz, the Apollo 13 flight director: failure is not an option. In a way that’s true. Failure isn’t an option. I would say it’s inevitable in any complicated system. Most of us work in one organisation or another. All of us rely on various organisations in our day to day lives. I work in the National Health Service, one of 1.5 million people. A complex system doing complex work.

In a recent musing I looked at how poor communication through PowerPoint had helped destroy the space shuttle Columbia in 2003. That, of course, was the second shuttle disaster. In this musing I’m going to look at the first.

This is the story of how NASA was arrogant; of unrealistic targets, of disconnect between seniors and those on the shop floor and of the misuse of statistics. It’s a story of the science of failure and how failure is inevitable. This is the story of the Challenger disaster.

”An accident rooted in history”

It’s January 28th 1986 at Cape Canaveral, Florida. 73 seconds after launching the space shuttle Challenger explodes. All seven of its crew are lost. Over the tannoy a distraught audience hears the words, “obviously a major malfunction.” After the horror come the questions.

The Rogers Commission is formed to investigate the disaster. Amongst its members are astronaut Sally Ride, Air Force General Donald Kutyna, Neil Armstrong, the first man on the moon, and Professor Richard Feynman: legendary quantum physicist, bongo enthusiast and educator.

The components of the space shuttle system

The shuttle programme was designed to be as reusable as possible. Not only was the orbiter itself reused (this was Challenger’s tenth mission) but the two solid rocket boosters (SRBs) were also retrieved and re-serviced for each launch. The cause of the Challenger disaster was found to be a flaw in the right SRB. The SRBs were not one long section but rather several which connected with two rubber O-rings (a primary and a secondary) sealing the join. The commission discovered longstanding concerns regarding the O-rings.

In January 1985, following a launch of the shuttle Discovery, soot was found between the O-rings, indicating that the primary ring hadn’t maintained a seal. At that time the launch had been the coldest yet, at about 12 degrees Celsius. At that temperature the rubber contracted and became brittle, making it harder to maintain a seal. On other missions the primary ring was nearly completely eroded through. The flawed O-ring design had been known about since 1977, leading the commission to describe Challenger as “an accident rooted in history.”

The forecast for the launch of Challenger would break the cold temperature record of Discovery: minus one degree Celsius. On the eve of the launch engineers from Morton Thiokol alerted NASA managers to the danger of O-ring failure. They advised waiting for a warmer launch day. NASA however pushed back and asked for proof of failure rather than proof of safety. An impossibility.

“My God Thiokol, when do you want me to launch? Next April?”

Lawrence Mulloy, SRB Manager at NASA

NASA pressed Morton Thiokol managers to overrule their engineers and approve the launch. On the morning of the 28th the forecast was proved right and the launch site was covered with ice. Reviewing launch footage the Rogers Commission found that in the cold temperature the O-rings on the right SRB had failed to maintain a seal. 0.678 seconds into the launch grey smoke was seen escaping the right SRB. On ignition the SRB casing expanded slightly and the rings should have moved with the casing to maintain the seal. However, at minus one degree Celsius they were too brittle and failed to do so. This should have caused Challenger to explode on the launch pad, but aluminium oxides from the rocket fuel filled the damaged joint and did the job of the O-rings by sealing the site. This temporary seal allowed Challenger to lift off.

This piece of good fortune might have allowed Challenger and its crew to survive. Sadly, 58.788 seconds into the launch Challenger hit a strong wind shear which dislodged the aluminium oxide. This allowed hot gas to escape and ignite. The right SRB burned through its joint to the external tank, coming loose and colliding with it. This caused a fireball which ignited the whole stack.

Challenger disintegrated and the crew cabin was sent into free fall before crashing into the sea. When the cabin was retrieved from the sea bed the personal safety equipment of three of the crew had been activated, suggesting they survived the explosion but not the crash into the sea. The horrible truth is that it is possible they were conscious for at least a part of the free fall. Two minutes and forty-five seconds.

So why the push back from NASA? Why did they proceed when there were concerns about the safety of the O-rings? This is where we have to look at how NASA as an organisation arrogantly assumed it could guarantee safety. This included its own unrealistic targets.

NASA’s unrealistic targets

NASA had been through decades of boom and bust. The sixties had begun with them lagging behind the Soviets in the space race and finished with the stars and stripes planted on the moon. Yet the political enthusiasm triggered by President Kennedy and the Apollo missions had dried up and with it the public’s enthusiasm also waned. The economic troubles of the seventies were now followed by the fiscal conservatism of President Reagan. The money had dried up. NASA managers looked to shape the space programme in a way to fit the new economic order.

First, space shuttles would be reusable. Second, NASA made bold promises to the government. Their space shuttles would be so reliable and easy to use there would be no need to spend money on any military space programme; instead give the money to NASA to launch spy satellites. In between any government mission the shuttles would be a source of income as the private sector paid to use them. In short, the shuttle would be a dependable bus service to space. NASA promised that they could complete sixty missions a year with two shuttles at any one time ready to launch. This promise meant the pressure was immediately on to perform.

Four shuttles were initially built: Atlantis, Challenger, Columbia and Discovery. The first shuttle to launch was Columbia on 12th April 1981, one of two missions that year. In 1985 nine shuttle missions were completed, a peak that NASA would never exceed. By 1986 the target of sixty flights a year was becoming a monkey on NASA’s back. STS-51-L’s launch date had been pushed back five times, due to bad weather and to the previous mission itself being delayed seven times. Delays in that previous mission were even more embarrassing as Congressman Bill Nelson was part of the crew. Expectation was mounting, and not just from the government.

Partly in order to inspire public interest in the shuttle programme, the ‘Teacher in Space Project’ had been created in 1984 to carry teachers into space as civilian members of future shuttle crews. From 11,000 completed applications one teacher, Christa McAuliffe from New Hampshire, was chosen to fly on Challenger as the first civilian in space. She would deliver two fifteen minute lessons from space to be watched by school children in their classrooms. The project worked. There was widespread interest in the mission, with the ‘first teacher in space’ becoming something of a celebrity. It also created more pressure. McAuliffe was due to deliver her lessons on Day 4 of the mission. Launching on 28th January meant Day 4 would be a Friday. Any further delays and Day 4 would fall on the weekend; there wouldn’t be any children in school to watch her lessons. Fatefully, the interest also meant 17% of Americans would watch Challenger’s launch on television.

NASA were never able to get anywhere close to their target of sixty missions a year. They were caught out by the amount of refurbishment needed after each shuttle flight to get the orbiter and solid rocket boosters ready to be used again. They were hamstrung immediately from conception by an unrealistic target they never should have made. Their move to inspire public interest arguably increased demand to perform. But they had more problems including a disconnect between senior staff and those on the ground floor.

Organisational failings

During the Rogers Commission NASA managers quoted the risk of a catastrophic accident (one that would cause loss of craft and life) as 1 in 100,000. Feynman found this figure ludicrous: a risk of 1 in 100,000 meant NASA could expect to launch a shuttle every day for 274 years before suffering a catastrophic accident. The figure turned out to have been produced out of necessity; it had to be that high. It had been used to reassure both the government and the astronauts, and had helped persuade a civilian to agree to be part of the mission. Once that figure was agreed, NASA managers had worked backwards to make sure the safety figures for all the shuttle components combined to give an overall risk of 1 in 100,000. NASA engineers knew this to be the case and formed their own opinion of the risk. Feynman spoke to them directly. They put the risk somewhere between 1 in 50 and 1 in 200. Assuming NASA managed to launch sixty missions a year, their own engineers expected a catastrophic accident somewhere between once a year and once every three years. As it turned out, the Challenger disaster occurred on the 25th shuttle mission. There was a clear disconnect between the perceptions of managers and those with hands-on experience regarding the shuttle programme’s safety. But there were also fundamental errors in how the programme’s safety was calculated.
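Feynman’s objection can be checked with nothing more than simple arithmetic. Here is a minimal sketch, using only the figures quoted above:

```python
# Managers' claimed odds of a catastrophic accident per launch.
managers_risk = 1 / 100_000

# At one launch per day, expected years of launches before one failure.
years_before_failure = (1 / managers_risk) / 365
print(round(years_before_failure))  # → 274

# The engineers' own estimates, at the target rate of sixty missions a year.
for odds in (50, 200):
    print(f"1 in {odds}: a failure roughly every {odds / 60:.1f} years")
```

Sixty missions a year at odds of 1 in 50 gives a failure in under a year; at 1 in 200, one every three years or so. Either way, nowhere near 1 in 100,000.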

Misusing statistics

One of the safety figures feeding into that 1 in 100,000 involved the O rings responsible for the disaster. NASA had given the O rings a safety factor of 3, based on test results which showed that the O rings could maintain a seal despite being burnt a third of the way through. Feynman again tore this argument apart. A safety factor of 3 actually means that something can withstand conditions three times those it’s designed for. He used the analogy of a bridge: a bridge built to hold only 1,000 pounds that can in fact hold a 3,000-pound load has a safety factor of 3. But if a 1,000-pound truck drove over the bridge and it cracked a third of the way through, the bridge would be defective, even if it still held the truck. The O rings shouldn’t have burnt through at all. Regardless of the seal holding, the test results actually showed that the O rings were defective. The safety factor was therefore not 3. It was zero. NASA misused the definitions and values of statistics to ‘sell’ the space shuttle as safer than it was. There was an assumption of total control. No American astronaut had ever been killed on a mission; even when a mission went wrong, as with Apollo 13, the astronauts were brought home safely. NASA were drunk on their own reputation.


The Rogers Commission Report was published on 9th June 1986. Feynman was concerned that the report was too lenient on NASA and so insisted his own thoughts were published as Appendix F. The investigation into Challenger would be his final adventure; he was terminally ill with cancer during the hearings and died in 1988. Sally Ride would later also be part of the team investigating the Columbia disaster, the only person to serve on both inquiries. After she died in 2012, Kutyna revealed that she had been the person discreetly pointing the commission towards the faulty O rings. The shuttle underwent a major redesign and it would be two years before there was another mission.

Sadly, the investigation following the Columbia disaster found that NASA had failed to learn lessons from Challenger with similar organisational dysfunction. The programme was retired in 2011 after 30 years and 133 successful missions and 2 tragedies. Since then NASA has been using the Russian Soyuz rocket programme to get to space.

The science of failure

Failure isn’t an option. It’s inevitable. By its nature the shuttle programme was always experimental; it was wrong to pretend otherwise. Feynman would later compare NASA’s attitude to safety to a child believing that running across the road is safe because they didn’t get run over. In a system of over two million parts, complete control is a fallacy.

We may not all work in spaceflight but Challenger and then Columbia offer stark lessons in human factors we should all learn from. A system may seem perfect because its imperfection is yet to be found, or has been ignored or misunderstood.

The key lesson is this: We may think our systems are safe, but how will we really know?

"For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled."

Professor Richard Feynman

The world’s first forensic scientist


Our setting is a rural Chinese village. A man is found stabbed and hacked to death. Local investigators perform a series of experiments with an animal carcass looking at the type of wounds caused by different shaped blades and determine that the man had been killed with a sickle. The magistrate calls all of the owners of a sickle together. The ten or so suspects all deny murder. The sickles are examined, all are clean with nothing to give away being a murder weapon. Most in the village believe the crime won’t be solved. The magistrate then orders all of the suspects to stand in a field and place their sickle on the ground before stepping back. They all stand and wait in the hot afternoon sun. It’s an unusual sight. At first nothing happens. Eventually a metallic green fly lands on one of the sickles. It’s joined by another. And another. And another. The sickle’s owner starts to look very nervous as more and more flies land on his sickle and ignore everyone else’s. The magistrate smiles. He knows that the murderer would clean his weapon. But there would be tiny fragments of blood, bone and flesh invisible to the human eye but not beyond a fly’s sense of smell. The owner of the sickle breaks down and confesses. He’s arrested and taken away.

I think it’s safe to say that we love forensic science dramas. They’re all of a type: low lit metallic labs, ultraviolet lights, an array of brilliant yet troubled scientists and detectives dredging the depths of human depravity. Forensic science is the cornerstone of our criminal justice system, a vital weapon in fighting crime. Yet the tale of the flies and the sickle didn’t take place in 2019. It didn’t even take place this century. It was 1235 CE.

This account, the first known example of what we would now call forensic entomology, was recorded in Collected Cases of Injustice Rectified, a Chinese book written in 1247 by Song Ci, the world’s first forensic scientist. This is his story.

Song Ci from a Chinese Stamp (From China Post)

Song Ci was born in 1186 in southeast China, during a period of China’s history called the Song dynasty (960-1279 CE). This period saw a number of political and administrative reforms, including the development of the justice system to create the post of sheriff. Sheriffs were employed to investigate crime, determine the cause of death, and interrogate and prosecute suspects. With this came a framework for investigating crime.

The son of a bureaucrat, he was educated into a life of scholarship. First training as a physician, he found his way into the world of justice and was appointed judge of prisons four times during his lifetime.

Bust of Shen Kuo (From Lapham’s Quarterly)

This was a time of polymaths. Song Ci was inspired by the work of Shen Kuo (1031-1095), a man who excelled in many different areas of philosophy, science and mathematics. Shen Kuo argued for autopsy and dissected the bodies of criminals, in the process proving centuries-old theories about human anatomy wrong. In the UK such a practice would not be supported in legislation for another seven centuries.

Song Ci built on Shen Kuo’s work, observing the practice of magistrates and compiling recommendations based on good practice. This would form his book Collected Cases of Injustice Rectified: in all, fifty-three chapters across five volumes. The first volume contained an imperial decree on the inspection of bodies and injuries. The second was designed as instruction in post-mortem examination. The remaining volumes helped identify cause of death and covered the treatment of certain injuries.

Of note, the book outlines the responsibilities of the official as well as what would now be considered routine practice, such as keeping accurate notes and being present during the post-mortem (including not being put off by bad smells). There are procedures for medical examination and specific advice on questioning suspects and interviewing family members.

Forensically, the richest part of the text is the section titled "Difficult Cases". This explains how an official could piece together evidence when the cause of death appears to be something else, such as strangulation masked as suicidal hanging or intentional drowning made to look accidental. A pharmacopoeia is also provided, with preparations to make obscure injuries visible. There is a detailed description of determining time of death from the rate of decomposition, and of telling whether a corpse has been moved.

Whilst forensic science has obviously progressed since the work of Song Ci, what is striking is how the foundations of good forensic work have not changed. He wrote about identifying self-inflicted wounds and suicide from the direction of wounds or the disposition of the body. He recommended noting tiny details, such as looking underneath fingernails or in various orifices for clues of foul play. Standard procedure today.

Song Ci died in 1249 with little fanfare. In modern times, however, there has been a growing appreciation of his work. Just think how few 13th century scientific publications remain as relevant as his after nearly a millennium.

There is an Asian maxim that “China is the ocean that salts all the rivers that flow into it”. All of us try to contribute in some way to the river of life. Any practitioner or appreciator of forensics must recognise the tremendous contribution Song Ci and his contemporaries made to progress the flow of justice.

Thanks for reading

- Jamie

Those who cannot remember the past: how we forgot the first great plague and how we're failing to remember lessons with Ebola


“Those who cannot remember the past are condemned to repeat it”

George Santayana

To look at the History of Medicine is to realise how often diseases recur and, sadly, how humans repeat the same mistakes. It is easy to look back with the benefit of hindsight and with modern medical knowledge but we must remember how we remain as fallible as our forebears.

Our first introduction to the History of Medicine is often through learning about the Black Death at school. The story is very familiar: between 1347 and 1351 plague swept the whole of Europe, killing between a third and two-thirds of the continent’s population. At the time it was felt the end of the world was coming, as a disease never before seen took up to 200 million lives. However, this was actually the second time plague had hit Europe. Nearly a thousand years earlier the first plague pandemic had devastated parts of Europe and the Mediterranean, affecting half the population of Europe. This was Justinian’s plague, named for the Eastern Roman Emperor whose reign the disease helped to define. Yet despite the carnage Europe forgot, and had no preparation when plague returned.

Between 2014 and 2016 nearly 30,000 people were infected during the Ebola outbreak in West Africa. Our systems were found lacking as the disease struck on a scale never before seen. We said we would learn from our mistakes. Never again.

Yet the current Ebola epidemic in the Democratic Republic of the Congo (DRC) is proving that even today it is hard to remember the lessons of the past and how disease will find any hole in our memory. This is the story of Justinian’s plague, the lessons we failed to learn then and now as we struggle with Ebola.

Justinian’s Plague

Justinian I. Contemporary portrait in the Basilica of San Vitale, Ravenna . From Wikipedia.

It’s 542 CE in Constantinople (now Istanbul). A century earlier the Western provinces of the Roman Empire had collapsed; the Eastern empire continues in what will be called the Eastern Roman or Byzantine Empire. Constantinople is the capital city, then as now a melting pot between Europe and Asia. Since 527 CE this Empire has been ruled by Justinian I, an absolute monarch determined to return to the glory years of conquest.

The Empire has already expanded to cover swathes of Northern Africa. Justinian’s focus is now on reclaiming Italy. The Empire is confident and proud, a jewel in an otherwise divided Europe now in the Dark Ages.

The Eastern Roman Empire at the succession of Justinian I (purple) in 527 CE and the lands conquered by the end of his reign in 565 CE (yellow). From US Military Academy.

Procopius of Caesarea (Creative Commons)

The main contemporary chronicler of the plague of Justinian, Procopius of Caesarea (500-565 CE), identified the plague as arriving in Egypt on the Nile’s north and east shores. From there it spread north to Alexandria and east to Palestine. The Nile was a major route of trade from the great lakes of Africa to the south. We now know that black rats on board trade ships brought the plague from China and India via Africa and the Nile to Justinian’s Empire.

Procopius noted that there had been a particularly long period of cold weather in Southern Italy causing famine and migration throughout the Empire. Perfect conditions to help a disease spread.

Procopius detailed the symptoms of this new plague: delusions, nightmares, fevers and swellings in the groin, armpits and behind the ears. For most, an agonising death followed. In his book Secret History Procopius left no doubt that he considered this God’s vengeance against Justinian, a man he claimed was supernatural and demonic.

Justinian’s war in Italy helped spread the disease, but so did peace in the areas he’d conquered. The established trade routes in Northern Africa and Eastern Europe, with Constantinople at the centre, formed a network of contagion. Plague swept throughout the Mediterranean. Constantinople was in the plague’s grip for four months, during which time Procopius alleged 10,000 people died a day in the city; modern historians believe this figure to be closer to a still incredible 5,000 a day. Corpses littered the city streets. In scenes foreshadowing the Black Death, mass plague pits were dug, with bodies thrown in and piled on top of each other. Other victims were disposed of at sea. Justinian himself was struck down but survived. Others in Constantinople were not so lucky: in just four months up to 40% of its citizens died.

The plague’s legacy

The plague continued to weaken the Empire, making it harder to defend. Like the medieval kings after him, Justinian struggled to maintain the status quo, trying to impose the same levels of taxation and the same programme of expansion. He died in 565 CE. His obsession with empire building has led to his legacy as the ‘last Roman’. By the end of the sixth century much of the land Justinian had conquered in Italy had been lost, though the Empire had pushed east into Persia. Far from Constantinople, the plague continued in the countryside. It finally vanished in 750 CE, by which point up to 50 million people had died, 25% of the population of the Empire.

Procopius’s description of the Justinian plague sounds a lot like bubonic plague. This suspicion has been confirmed by recent research.

Yersinia pestis bacteria, Creative Commons

At two separate graves in Bavaria, bacterial DNA was extracted from the remains of Justinian plague victims. The DNA matched that of Yersinia pestis, the bacterium which causes bubonic plague. When analysed it was found to be most closely related to strains of Y. pestis still endemic to this day in Central Asia. This supports the theory that infection arrived via trade routes from Asia to Europe.

After 750 CE plague vanished from Europe. New conquerors came and went with the end of the Dark Ages and the rise of the Middle Ages. Europeans forgot about plague. In 1347 they would get a very nasty reminder.

It’s very easy now, in our halcyon era of medical advances, to feel somewhat smug. Yes, an interesting story, but it wouldn’t happen now. Medieval scholars didn’t have germ theory. Or a way of easily accessing Procopius’s work. Things are different now.

We’d study Justinian’s plague with its high mortality. We’d identify the cause. We’d work backwards and spot how the trade link with Asia was the route of infection. We’d work to identify potential outbreaks in their early stages in Asia. By the time the second plague pandemic was just starting we’d notice it. There would be warnings to spot disease in travellers, and protocols for dealing with mass casualties and the disposal of bodies. We’d initiate rapid treatment and vaccination if possible. We’d be OK.

Ebola shows how hard this supposedly simple process remains.

Ebola: A Modern Plague

The Ebola virus (Creative Commons)

Ebola Virus Disease is a type of viral haemorrhagic fever first identified in 1976 during an outbreak in what is now South Sudan and the DRC. Caused by a spaghetti-like virus known as a filovirus, the disease produces severe dehydration through vomiting and diarrhoea before internal and external bleeding can develop. Named for the Ebola River where it was first identified, it spreads by direct human contact, with a mortality rate varying from 25% to 90%. An epidemic has been ongoing in the DRC since August 2018. We are in our fourth decade of knowing about Ebola. And five years ago we were given the biggest warning yet about its danger.

Up until 2014 the largest outbreak of Ebola had affected 315 people. Other outbreaks were much smaller. Ebola seemed to burn brightly but briefly, leaving only a few embers. In 2014 it became a forest fire.

A healthcare worker during the Ebola outbreak of 2014-16

The West Africa epidemic of 2014-16 hit Guinea, Sierra Leone and Liberia. The disease showed its potential in our age of global travel as the first cases appeared in America and Europe. In all there were 28,160 reported cases; 11,308 people died. Ebola caught the world napping. What had been a rare disease of Africa was potentially a threat to us all. Suspicion of foreign healthcare workers and miscommunication about the causes of Ebola were blamed for helping the disease to spread. Yet there was hope as promising experimental vaccines were put into production.

As the forest fire finally died down there was a chance to reflect. There were many publications, including from Médecins Sans Frontières, about the lessons learnt from Ebola and how not to repeat the mistakes of the past. These were all along similar themes: the importance of trained frontline staff, rapid identification of the disease, engaging and informing local communities, employing simple yet effective methods to prevent disease spread, and the use of the new vaccine to protect contacts and contacts of contacts. There was much criticism of the speed of the World Health Organisation’s response, but also a feeling that with new tools and lessons learnt things would be different next time.

When Ebola surfaced again last year in the DRC there was initial hope that the lessons had been learnt. Over 100,000 people have been vaccinated, a new weapon. However, the disease continues a year on, with over 1,000 cases, over 800 fatalities and fresh concern that this outbreak is far from over.

There remain delays in identifying patients with Ebola; not surprising, as the early symptoms mimic more common diseases such as malaria. As a result patients are not isolated quickly enough and may infect others before their test results are back. The talk of engaging communities is also falling flat. In a region torn apart by decades of civil unrest there is widespread mistrust of authorities, with blame falling on the Ebola units themselves for causing death. It is estimated that 30% of patients are staying at home, remaining a potent vector for disease rather than coming forward. There has also been violence against healthcare workers and hospitals as a result of this fear. Reassuringly, where local communities and healthcare providers have come together Ebola has been stopped, but this is not the norm, and behavioural scientists are being brought in to help connect with locals. Despite the lessons learnt, Ebola is continuing to be a difficult adversary.

It is easy in the West to feel we are immune from misinformation and fear. Yet look at the current measles epidemic in New York State. Look at the anti-vaccination movement, labelled a “public health timebomb” last week by Simon Stevens, the chief executive of NHS England. We are no more immune than anyone else to irrationality. Nor should we be too proud to learn the lessons of the past; the ‘ring’ style of vaccinating contacts against Ebola is the same as that used during the successful campaign to eradicate smallpox over four decades ago.

Medical advances have come on in ways no-one in the Middle Ages could have foreseen. We have never had more ways to share our knowledge of disease or so many ways to prevent suffering. Yet people remain the same. And that’s the tricky part. Let’s not forget that bit.

Thanks for reading

- Jamie

Bullet Holes & Bias: The Story of Abraham Wald

“History is written by the victors”

Sir Winston Churchill

It is some achievement if we can be acknowledged as succeeding in our field of work. If that field of work happens to be helping to win the bloodiest conflict in history then our achievement deserves legendary status. What then do you say of a man who not only succeeded in his field and helped the Allies win the Second World War, but whose work continues to resonate throughout life today? Abraham Wald was a statistician whose unique insight echoes in areas as diverse as clinical research, finance and our modern obsession with celebrity. This is his story and the story of survivorship bias. This is the story of why we must take a step back and think.

Abraham Wald and Bullet Holes in Planes

Wald was born in 1902 in the then Austro-Hungarian Empire. After graduating in Mathematics he lectured in Economics in Vienna. As a Jew, following the Anschluss between Nazi Germany and Austria in 1938 Wald faced persecution, and so he emigrated to the USA after being offered an academic position there. During the Second World War Wald was a member of the Statistical Research Group (SRG) as the US tried to approach military problems with research methodology.

One problem the US military faced was how to reduce aircraft losses. They researched the damage to planes returning from combat. By mapping out the damage they found their planes were receiving most bullet holes to the wings and tail. The engine was spared.


Abraham Wald

The US military’s conclusion was simple: the wings and tail are obviously vulnerable to receiving bullets. We need to increase armour to these areas. Wald stepped in. His conclusion was surprising: don’t armour the wings and tail. Armour the engine.

Wald’s insight and reasoning were based on understanding what we now call survivorship bias. Bias is any factor in the research process which skews the results. Survivorship bias describes the error of looking only at subjects who’ve reached a certain point without considering the (often invisible) subjects who haven’t. The US military were only studying the planes which had returned to base following combat, i.e. the survivors. In other words, their diagram of bullet holes actually showed the areas where a plane could sustain damage and still fly well enough to bring its pilot home.

No matter what you’re studying if you’re only looking at the results you want and not the whole then you’re subject to survivorship bias.


Wald surmised that it was actually the engines which were vulnerable: if these were hit, the plane and its pilot went down and didn’t return to base to be counted in the research. The military listened and armoured the engine, not the wings and tail.
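The pattern that misled the US military is easy to reproduce. Below is a toy simulation: the hit areas, probabilities and counts are all invented for illustration, not historical data. Hits land uniformly across three areas, but an engine hit is far more likely to bring a plane down, so the surviving planes available for study show few engine hits.

```python
import random

random.seed(0)

# Invented, illustrative parameters: each plane takes three hits, spread
# evenly across three areas; only an engine hit is usually fatal.
AREAS = ["wings", "tail", "engine"]
DOWN_PROB = {"wings": 0.05, "tail": 0.05, "engine": 0.6}

returned_hits = {a: 0 for a in AREAS}
for _ in range(10_000):
    hits = [random.choice(AREAS) for _ in range(3)]
    survived = all(random.random() > DOWN_PROB[h] for h in hits)
    if survived:  # only surviving planes are available to be studied
        for h in hits:
            returned_hits[h] += 1

# Hits were dealt out uniformly, yet the returned planes show far fewer
# engine hits -- the survivors' damage map hides the fatal weak spot.
print(returned_hits)
```

Counting only the survivors, the engine looks like the safest place to be hit, when in fact it is the most dangerous.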

The US Air Force suffered over 88,000 casualties during the Second World War. Without Wald’s research this undoubtedly would have been higher. But his insight continues to resonate, as survivorship bias has become an issue in clinical research, financial markets and the people we choose to look up to.

Survivorship Bias in Clinical Research

In 2010 in Boston, Massachusetts, a trial was conducted at Harvard Medical School and Beth Israel Deaconess Medical Center (BIDMC) into improving patient survival following trauma. A major problem following trauma is the patient developing abnormal blood clotting, or coagulopathy. This hinders them from stemming any bleeding and increases their chances of bleeding to death. Within our blood are naturally occurring proteins called factors which act to encourage blood clotting. The team at Harvard and BIDMC investigated whether giving trauma patients one of these factors would improve survival. The study was aimed at patients who had received 4-8 blood transfusions within 12 hours of their injury. They hoped to recruit 1,502 patients but abandoned the trial after recruiting only 573.

Why? Survivorship bias. The trial only included patients who survived their initial accident and then received care in the Emergency Department before going to Intensive Care with enough time passed to have been given at least 4 bags of blood. Those patients who died prior to hospital or in the Emergency Department were not included. The team concluded that due to rising standards in emergency care it was actually very difficult to find patients suitable for the trial. It was therefore pointless to continue with the research.

This was not the only piece of research reporting survivorship bias in trauma research. Does this matter? Yes. Trauma is the biggest cause of death worldwide in the under-45s. About 5.8 million people die worldwide each year due to trauma. That’s more than the annual total of deaths due to malaria, tuberculosis and HIV/AIDS. Combined. Or, to put it another way, one third of the total number of deaths in combat during the whole of the Second World War. Every year. Anything that impedes research into trauma has to be understood; otherwise it costs lives. And while 90% of injury deaths occur in less economically developed countries, we perform our research in Major Trauma Units in the West. Survivorship bias again.

As our understanding of survivorship bias grows, so we are realising that no area of Medicine is safe. It clouds outcomes in surgery and antimicrobial research. It touches cancer research. Cancer survival is usually expressed as 5-year survival: the percentage of patients alive 5 years after diagnosis. But this doesn’t account for the patients who died of something other than their cancer, and so may be falsely optimistic. However, Medicine is only a part of the human experience survivorship bias touches.

Survivorship Bias in Financial Markets & our Role Models

Between 1950 and 1980 Mexico industrialised at an amazing rate, achieving average growth of 6.5% annually. The ‘Mexican Miracle’ was held up as an example of how to run an economy and encouraged investment into Latin American markets. However, since 1980 the miracle has run out and never returned. Again, looking only at the successes and not the failures can cost investors a lot of money.

Say I’m a fund manager and I approach you asking for investment. I quote an average of 1.8% growth across my funds. Sensibly you do your research and request my full portfolio:


It is common practice in the fund market to only quote active funds. Poorly performing funds, especially those with negative growth, are closed. If we only look at my active funds in this example then yes, my average growth is 1.8%. You might invest in me. If however you look at all of my portfolio then actually my average performance is -0.2% growth. You probably wouldn’t invest then.
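The arithmetic behind this example can be sketched in a few lines. The individual fund returns below are invented for illustration; only the two averages match the figures in the text:

```python
# Hypothetical fund returns (%), chosen so the averages match the text.
active_funds = [3.0, 2.5, 1.5, 1.0, 1.0]       # funds still open
closed_funds = [-1.0, -2.0, -2.5, -2.5, -3.0]  # funds quietly shut down

def mean(xs):
    return sum(xs) / len(xs)

quoted_growth = mean(active_funds)               # what the manager quotes
true_growth = mean(active_funds + closed_funds)  # the whole portfolio

print(f"Quoted (survivors only): {quoted_growth:.1f}%")  # → 1.8%
print(f"Actual (all funds):      {true_growth:.1f}%")    # → -0.2%
```

The quoted figure isn’t a lie about the surviving funds; it simply averages over the wrong population.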

Yet survivorship bias has a slightly less tangible effect on modern life too. How often is Mark Zuckerberg held up as an example for anyone working in business? We focus on the one self-made billionaire who dropped out of education before making their fortune, and not the thousands who followed the same path but failed. A single actor or sports star is used as a case study in how to succeed, and we are encouraged to follow their path, never mind the many who do so and fail. Think as well about how we look at other aspects of life. How often do we look at one car still in use after 50 years, or one building still standing after centuries, and say, “we don’t make them like they used to”? We overlook how many cars or buildings of a similar age have long since rusted or crumbled away. All of this is the same thought process that went through the minds of the US military as they counted bullet holes in their planes.

To the victor belong the spoils but we must always remember the danger of only looking at the positive outcomes and ignoring those often invisible negatives. We must be aware of the need to see the whole picture and notice when we are not. With our appreciation of survivorship bias must also come an appreciation of Abraham Wald. A man whose simple yet profound insight shows us the value of stepping back and thinking.

Thanks for reading

- Jamie

Death by PowerPoint: the slide that killed seven people

The space shuttle Columbia disintegrating in the atmosphere (Creative Commons)

We’ve all sat in those presentations. A speaker with a stream of slides full of text, monotonously reading them off as we read along. We’re so used to it we expect it. We accept it. We even consider it ‘learning’. As an educator I push against ‘death by PowerPoint’ and I’m fascinated by how we can improve the way we present and teach. The fact is, we know that PowerPoint kills. Most often the only victims are our audience’s inspiration and interest. This, however, is the story of a PowerPoint slide that actually helped kill seven people.

January 16th 2003. NASA mission STS-107 is underway. The Space Shuttle Columbia launches, carrying its crew of seven to low Earth orbit. Their objective is to study the effects of microgravity on the human body and on the ants and spiders they have with them. Columbia had been the first Space Shuttle to fly, launching in 1981, and had completed 27 missions prior to this one. Whereas other shuttle crews had focused on work on the Hubble Space Telescope or the International Space Station, this mission was one of pure scientific research.

The launch proceeded as normal. The crew settled into their mission. They would spend 16 days in orbit, completing 80 experiments. One day into their mission it was clear to those back on Earth that something had gone wrong.

As a matter of protocol, NASA staff reviewed footage from an external camera mounted on the fuel tank. At eighty-two seconds into the launch a piece of spray-on foam insulation (SOFI) had fallen from one of the ramps that attached the shuttle to its external fuel tank. As the crew rose at 28,968 kilometres per hour, the piece of foam collided with one of the tiles on the outer edge of the shuttle’s left wing.

Frame of NASA launch footage showing the moment the foam struck the shuttle’s left wing (Creative Commons)

It was impossible to tell from Earth how much damage the foam, falling nine times faster than a fired bullet, had caused when it collided with the wing. Foam falling during launch was nothing new; it had happened on four previous missions and was one of the reasons the camera was there in the first place. But the tile the foam had struck was on the edge of the wing, designed to protect the shuttle from the heat of Earth’s atmosphere during launch and re-entry. In space the shuttle was safe, but NASA didn’t know how it would cope with re-entry. There were a number of options. The astronauts could perform a spacewalk and visually inspect the hull. NASA could launch another Space Shuttle to pick the crew up. Or they could risk re-entry.

NASA officials sat down with Boeing Corporation engineers who took them through three reports, a total of 28 slides. The salient point was that, whilst there was data showing the tiles on the shuttle wing could tolerate being hit by foam, this was based on test conditions using pieces of foam more than 600 times smaller than the one that had struck Columbia. This is the slide the engineers chose to illustrate this point:

NASA managers listened to the engineers and their PowerPoint. The engineers felt they had communicated the potential risks. NASA felt the engineers didn’t know what would happen, but that all the data pointed to there not being enough damage to put the lives of the crew in danger. They rejected the other options and pushed ahead with Columbia re-entering Earth’s atmosphere as normal.

Columbia was scheduled to land at 0916 (EST) on February 1st 2003. Just before 0900, 61,170 metres above Dallas and travelling at 18 times the speed of sound, temperature readings on the shuttle’s left wing rose abnormally high and were then lost. Tyre pressures on the left side were lost soon after, as was communication with the crew. At 0912, as Columbia should have been approaching the runway, ground control heard reports from residents near Dallas that the shuttle had been seen disintegrating. Columbia was lost, and with her, her crew of seven. The oldest crew member was 48.

The shuttle programme went into lockdown, grounded for two years as the investigation began. The cause of the accident became clear: a hole in a tile on the left wing, caused by the foam, let the wing overheat dangerously during re-entry until the shuttle disintegrated.

The questions to answer included a very simple one: why, given that the foam strike had occurred at a force massively outside test conditions, had NASA proceeded with re-entry?

Edward Tufte, a Professor at Yale University and an expert in communication, reviewed the slideshow the Boeing engineers had given NASA, in particular the slide above. His findings were tragically profound.

Firstly, the slide had a misleadingly reassuring title claiming that test data pointed to the tile being able to withstand the foam strike. This was not the case, but the title, centred in the largest font, reads as the salient, summary point of the slide. Boeing’s real message was lost almost immediately.

Secondly, the slide contains four different bullet points with no explanation of what they mean, so interpretation is left entirely to the reader. Is number 1 the main point? Do the bullet points decrease in importance, or increase? The changes in font size don’t help. In all, the bullet points and indents created six levels of hierarchy. This let NASA managers infer a hierarchy of importance of their own: the writing lower down, in smaller font, was ignored. Yet that was exactly where the contradictory (and most important) information had been placed.

Thirdly, there is a huge amount of text: more than 100 words or figures on one screen. Two terms, ‘SOFI’ and ‘ramp’, both refer to the same thing: the foam. Vague language abounds. ‘Sufficient’ is used once; ‘significant’ or ‘significantly’, five times, with little or no quantifiable data. All of this was left open to audience interpretation. How much is significant? Is that statistical significance, or something else?

Finally, the single most important fact, that the foam strike had occurred at forces massively outside test conditions, is hidden at the very bottom. Twelve little words which the audience would have had to wade through more than 100 others to reach, if they even managed to keep reading that far. In the middle the slide does concede that it is possible for the foam to damage the tile. This is in the smallest font, lost.

NASA’s subsequent report criticised technical aspects along with human factors, and specifically mentioned an over-reliance on PowerPoint:

“The Board views the endemic use of PowerPoint briefing slides instead of technical papers as an illustration of the problematic methods of technical communication at NASA.”

Edward Tufte’s full report makes for fascinating reading. Since its release in 1987 PowerPoint has grown exponentially, to the point where it is now estimated that thirty million PowerPoint presentations are made every day. Yet PowerPoint is blamed by academics for killing critical thought. Amazon’s CEO Jeff Bezos has banned it from meetings. Typing text on a screen and reading it out loud does not count as teaching. An audience reading text off the screen does not count as learning. Imagine if the engineers had put up a slide with just: “foam strike more than 600 times bigger than test data.” Maybe NASA would have listened. Maybe they wouldn’t have attempted re-entry. Next time you’re asked to give a talk, remember Columbia. Don’t just jump to your laptop and write out slides of text. Think about your message. Don’t let that message be lost amongst text. Death by PowerPoint is a real thing. Sometimes literally.

Thanks for reading

- Jamie

Columbia’s final crew

There is nothing new under the sun: the current New York measles epidemic and lessons from the first 'anti-vaxxers'

An 1807 cartoon showing ‘The Vaccination Monster’

What has been will be again,
    what has been done will be done again;
    there is nothing new under the sun.

Ecclesiastes 1:9

The State of New York is currently in the midst of an epidemic. Measles, once declared eliminated in the USA, has returned with a vengeance. Thanks to a rise in unvaccinated children, fuelled by the ‘anti-vaxxer’ movement, 156 children in Rockland County have been infected with measles; 82.8% of them had never had even one MMR vaccine. With measles now rampant in the boroughs of Brooklyn and Queens the state government has taken an unusual step. In New York, in the USA, the home of liberty and personal choice, no unvaccinated under-18 year old may now set foot in a public space. Parents of unvaccinated children who break this ban face fines or jail.

In a previous blog I wrote about the fight against smallpox, first using variolation (which sometimes caused infection) and then the invention of the world’s first vaccine. This musing is about how vaccination was made compulsory in the United Kingdom, the subsequent fight against it through a public campaign, and how that movement has reared its head again in the last few decades. This is the story of the first ‘anti-vaxxer’ movement and how the arguments over vaccination show there isn’t really anything new under the sun.

Early opposition to vaccination

Following Edward Jenner’s work into using cowpox to offer immunity against smallpox in 1796 the Royal Jennerian Society was established in 1803 to continue his research.

Even in these early days there was opposition to the vaccine. John Birch, the ‘surgeon extraordinary’ to the Prince of Wales pamphleteered against Jenner’s work with arguments one might expect to see circulating today on social media:

A section of John Birch’s pamphlet

He of course did not mention how he was making a lot of money through inoculating patients against smallpox (a practice that vaccination would replace) or using novel treatments such as electrocution.

Wood painting caricature from 1808 showing Edward Jenner confronting opponents to his vaccine (note the dead at their feet) (Creative Commons)

Despite Birch’s efforts, by 1840 the efficacy of Jenner’s vaccine was widely accepted. Decades before germ theory was established and viruses were identified, we finally had a powerful weapon against a deadly disease. Between 1837 and 1840 a smallpox epidemic killed 6,400 people in London alone. Parliament was persuaded to legislate: the 1840 Vaccination Act made the unpredictable practice of variolation illegal and made provision for free, optional smallpox vaccination.

At the time healthcare in the UK was largely unchanged since Tudor times. Parish-based charity had been the core of support for the sick and poor until workhouses were made the centre of welfare provision in 1834. With the workhouse came a stigma that illness and poverty were avoidable and to be punished. Government was dominated by two parties, the Whigs and the Tories, both non-interventionist, and the universal healthcare provided by the NHS was over a century away. Against this laissez-faire backdrop of punitive welfare, the fact that free vaccination was provided at all is remarkable, and I think reflects the giddy optimism at a future without ‘the speckled monster’ of smallpox.

The Anti-Vaccination Leagues

The Vaccination Act of 1853 went further. Now vaccination against smallpox was compulsory for all children born after 1st August 1853 within the first three months of their life with fines for parents who failed to comply. By the 1860s two-thirds of babies in the UK had been vaccinated.

There was immediate opposition to the 1853 Act, with violent protests across the country. This was the state’s first step into the health of private citizens. The response seems to have been motivated in much the same way as modern-day opposition in the US to vaccination, and to universal healthcare in general: that health is a matter of private civil liberty and that vaccination caused undue distress and risk. In England and Wales in particular, although the penalties were rarely enforced, their mere presence seems to have been motivation enough for opposition. The Anti-Vaccination League was established in London in 1853 to allow dissenting voices to coalesce.

The Vaccination Act of 1867 extended the age by which a child had to be vaccinated to 14, with cumulative fines for non-compliance. That same year saw the formation of the Anti-Compulsory Vaccination League, which published the National Anti-Compulsory Vaccination Reporter newsletter listing their concerns, the first three being:

I. It is the bounden duty of parliament to protect all the rights of man.

II. By the vaccination acts, which trample upon the right of parents to protect their children from disease, parliament has reversed its function.

III. As parliament, instead of guarding the liberty of the subject, has invaded this liberty by rendering good health a crime, punishable by fine or imprisonment, inflicted on dutiful parents, parliament is deserving of public condemnation.

Further newsletters were founded over the following decades: the Anti-Vaccinator (founded 1869), the National Anti-Compulsory Vaccination Reporter (1874), and the Vaccination Inquirer (1879). All of these kept up political pressure against compulsory vaccination. Much like today, the main arguments focused on personal choice and on the testimony of parents alleging that their child had been injured or killed by vaccination. In Leicester in 1885 an anti-vaccination demonstration attracted 100,000 people: a staggering number when the city’s total population at the time was around 190,000.

A royal commission was called to advise on further vaccination policy. After seven years of deliberation, listening to evidence from across the spectrum of opinion, it published its findings in 1896: smallpox vaccination was safe and effective. However, it advised against continuing compulsory vaccination. Following the 1898 Vaccination Act, parents who did not want their child to be vaccinated could ‘conscientiously object’ and be exempt. There was no further appetite for Parliament to intervene in the rights of parents. Even the fledgling socialist Labour Party, no enemy of government intervention, made non-compulsory vaccination one of its policies.

Whilst the two World Wars saw a change in public opinion towards a greater role in society for government, culminating in the creation of the National Health Service in 1948, vaccination remains voluntary in the United Kingdom. The first half of the 20th century saw the advent of vaccines against several deadly diseases such as polio, measles, diphtheria and tetanus. An ambitious worldwide vaccination programme, led by the World Health Organisation from 1966, saw smallpox become, in 1980, the first disease to be eradicated by mankind. There were dreams of polio and measles going the same way. It was not to be.

Anti-vaccination re-emerges

Herd immunity is a key component of any effective vaccination programme. Not everyone can be vaccinated, and those who can’t rely on being surrounded by vaccinated people to prevent transmission. The level of vaccination in a population required for herd immunity varies between diseases. The accepted standard to prevent measles transmission is 90-95%.
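That 90-95% figure can be derived from a disease’s basic reproduction number, R0: the average number of people one case infects in a fully susceptible population. As a minimal sketch, assuming the standard epidemiological formula 1 − 1/R0 and the commonly quoted textbook range for measles of R0 ≈ 12-18 (neither figure appears in this article):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of a population that must be immune to block sustained
    transmission, using the standard formula 1 - 1/R0."""
    if r0 <= 1:
        return 0.0  # an infection with R0 <= 1 dies out on its own
    return 1 - 1 / r0

# Measles, with its often-quoted R0 of roughly 12-18:
for r0 in (12, 18):
    print(f"R0 = {r0}: threshold = {herd_immunity_threshold(r0):.1%}")
```

For those assumed values the threshold works out at roughly 91.7% to 94.4%, which is where the 90-95% standard for measles comes from.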

On 28th February 1998 an article was published in the Lancet which claimed that the Measles, Mumps and Rubella (MMR) vaccine was linked to the development of developmental and digestive problems in children. Its lead author was Dr Andrew Wakefield, a gastroenterologist.

The infamous Lancet paper linking the MMR vaccine to developmental and digestive disorders

The paper saw national panic about the safety of vaccination. The Prime Minister Tony Blair refused to answer whether his newborn son Leo had been vaccinated.

Except, just like John Birch nearly two centuries before him, Andrew Wakefield had held a lot back from the public and his fellow authors. He was funded by a legal firm seeking to prosecute the companies who produce vaccines, and it was this firm that led him to the parents who formed the basis of his ‘research’. The link rested on the parents of twelve children recalling that their child first showed developmental and digestive symptoms following the MMR vaccine. Their testimony and recall alone were enough for Wakefield to claim a link between vaccination and autism. In research terms his findings were no more useful than those of the Victorian pamphlets. But the damage was done. The paper was retracted in 2010. Andrew Wakefield was struck off, as were some of his co-authors who had not practised due diligence. Sadly, this has only helped Wakefield’s ‘legend’ as he tours America spreading his message, tapping into the general ‘anti-truth’ populist movement. Tragically, and unsurprisingly, measles often follows in his wake.

Earlier this year the largest study to date investigating the link between MMR and autism was published. 657,461 children in Denmark were followed up over several years (compare that to Wakefield’s research, where he interviewed the parents of 12 children). No link between the vaccine and autism was found. In fact, no large, high-quality study has ever backed up Wakefield’s claim.

There are financial and political forces at work here. Anti-vaccination is worth big money. The National Vaccine Information Center in the US had an annual income of $1.2 million in 2017. And the people they target are economically and politically powerful. Recent research in America shows that parents who refuse vaccinations for their children are more likely to be white, educated and of higher income. They prize purity and liberty above all, and emotional reasoning over logic. They vote. And their world view is prevalent in certain circles.

Tweet by Donald Trump 28th March 2014


In the UK in 2018 the rate of MMR vaccination was 91.8%, worryingly close to the lower limit needed for herd immunity. There have been debates in the UK about re-introducing compulsory vaccination. In France certain childhood vaccinations are now compulsory. Social media companies are under pressure to shut down the groups anti-vaxxers use to spread their message and recruit. The state is once again prepared to step into personal liberty when it comes to vaccines.

In 1901, 52% of childhood deaths in England and Wales were due to infectious diseases; by 2000 it was 7.4%. In 1901, 40.6% of all deaths were of children; by 2000 it was 0.9%. No-one would want that progress to reverse. But history has a habit of repeating itself if we let it. The debates remain the same: the rights of parents and the individual versus those of the state and public health necessity. This is a debate we have to get right. History tells us what will happen if we don’t. After all, there is nothing new under the sun.

Thanks for reading

- Jamie

Sweating Sickness: England’s Forgotten Plague


The history of medicine is littered with diseases which altered the course of humanity. The Black Death. Smallpox. Influenza. HIV/AIDS. Each has left its mark on our collective consciousness. And yet there is an often overlooked addition to this list: sweating sickness. This disease tore its way through Tudor England, killing within hours, before disappearing as quickly and mysteriously as it arrived. In its wake it left its mark: a nation changed. The identity of this disease remains a matter for conjecture to this day. This is the story of England’s forgotten plague.

Background to an outbreak

It’s summer 1485. An epic contest for the throne of England is reaching its bloody climax. In a few weeks, on August 22nd at the Battle of Bosworth, Henry Tudor will wrest the crown from King Richard III and conclude the Wars of the Roses. Away from the fighting, people start dying. As contemporary physicians described:

“A newe Kynde of sickness came through the whole region, which was so sore, so peynfull, and sharp, that the lyke was never harde of to any mannes rememberance before that tyme.”

These words take on added impact when you remember the writer would have experienced patients with bubonic plague. What was this disease “the like was never heard of”? Sudor Anglicus, later known as the English sweating sickness, struck quickly. The French physician Thomas le Forestier described victims feeling apprehensive and generally unwell before violent sweating, shaking and headaches began. Up to half of patients died, usually within 24 hours; those who lived longer than this tended to survive. Survival did not seem to confer immunity, however, and patients could be struck multiple times. 15,000 died in London alone. We don’t have an exact figure for the mortality rate but it is commonly estimated at 30-50%.

Outbreaks continued beyond 1485 and the reign of Henry VII into that of his grandson Edward VI, with four further epidemics: 1508, 1517, 1528 and 1551, each time in summer or autumn. The disease remained limited to England apart from 1528/29, when it also spread to mainland Europe.

John Keys

The principal chronicler of the sweat was the English doctor John Keys (often Latinised to John Caius/Johannes Caius) in his 1552 work ‘A Boke or Counseill Against the Disease Commonly Called the Sweate, or Sweatyng Sicknesse’. This is how we know so much about how the disease presented and progressed.

Keys noted that the patients most at risk of the disease were:

“either men of wealth, ease or welfare, or of the poorer sort, such as were idle persons, good ale drinkers and tavern haunters.”

Both Cardinal Wolsey and Anne Boleyn contracted the disease but survived; Wolsey survived two attacks. Anne’s brother-in-law William Carey wasn’t so lucky and died of the sweat. The disease’s predilection for the young and wealthy led to it being dubbed the ‘Stop Gallant’ by the poorer classes.

Keys was physician to Edward VI, Mary I and Elizabeth I. As he was born in 1510, his work on the first epidemics of sweating sickness was based on earlier reports of the illness; it could therefore be said he had performed a kind of literature review. Unlike le Forestier, his lack of first-hand experience, and his focus mostly on noble deaths, have drawn criticism. However, Keys was clear that the sweat was different from plague and other conditions, which accords with le Forestier and other physicians of the time.

The impact of the sweat permeated Tudor culture. Even in 1604 William Shakespeare was concerned enough about sweating sickness to write in his play ‘Measure for Measure’:

“Thus, what with the war, what with the sweat, what with the gallows, and what with poverty…”

How the sweat changed history

Henry Tudor was an ambitious man with a fairly loose claim to the throne of England: his mother, Lady Margaret Beaufort, was a great-granddaughter of John of Gaunt, Duke of Lancaster, fourth son of Edward III, by his third wife Katherine Swynford. Katherine was Gaunt’s mistress for 25 years before they married, and their four children, the eldest being John Beaufort, Henry’s great-grandfather, were all born before the marriage. If this sounds complicated, it is. Henry was not a strong claimant, and his chances had been further weakened by an Act of Parliament in 1407 under Henry IV, John of Gaunt’s first son, which recognised his half-siblings but ruled them and their descendants ineligible for the throne.

Henry Tudor’s ancestry

Henry needed alliances if he was going to get anywhere. He attempted to take the crown in 1483 but the campaign was a disaster. He was running out of time and needed to kill Richard III in battle if he was going to be king. He accepted the help of King Charles VIII of France, who provided him with 2,000 mercenaries from France, Germany and Switzerland. This force crossed the English Channel on 7th August 1485, and it was in this army that the sweat first appeared. There is debate about whether this was before or after the Battle of Bosworth, but Lord Stanley, a key ally of Richard III who contributed 30% of the king’s army, used fear of sweating sickness as a reason not to join the royal forces in battle. It’s therefore possible that sweating sickness was seen before Bosworth and helped shape the course of English history.

Arthur Tudor (1486-1502)

Sweating sickness may have had a further impact on the Tudors and their role in our history. Henry VII’s first son, Arthur, Prince of Wales, died suddenly in 1502 aged 15, and sweating sickness has been suggested as the cause. His death left Henry VII’s second son, also called Henry, first in line to the throne, which he took in 1509 as King Henry VIII.

What was the sweat?

Unlike other plagues the identity of sweating sickness remains a mystery to this day. The periodicity of the epidemics suggests an environmental or meteorological trigger and possibly an insect or rodent vector.

A similar disease struck Northern France in 1718 in an outbreak known as ‘the Picardy sweat’. 196 local epidemics followed until the disease disappeared in 1861, its identity also a mystery. Interestingly, the region of France where the Picardy sweat arose is near where Henry Tudor’s force of French, German and Swiss soldiers amassed prior to the Battle of Bosworth.

Several diseases have been proposed as the true identity of the sweat. Typhus (not virulent enough), influenza and ergotism (neither matches the recorded symptoms) have all been suggested and dropped. In 1997 it was suggested that a hantavirus could have been responsible. Hantaviruses are spread by inhalation of rodent droppings and cause similar symptoms to sweating sickness before killing through bleeding and complications of the heart and lungs. Although rare, they have been identified in wild rodents in Britain. If we remember how the sweat seemed to strike in late summer, when rodent numbers would be at their highest, and add in the poor sanitation of Tudor times, then hantavirus is a strong candidate.

We’ll likely never know the true identity of sweating sickness unless it re-emerges. Given the terror it inspired in Tudor England, perhaps we should be glad it remains a mystery.

Thanks for reading.

- Jamie