The Mysteries of Research
From scurvy to scarless surgery, this article explains the scientific methods researchers developed to advance our medical knowledge.
The first decade of the 21st century has passed and the world has bid it a less than fond farewell. Time magazine has dubbed 2000-2009 “the worst decade ever”. It is difficult to debate that claim,1 with the constant threat of terrorism, disastrous hurricanes, a devastating tsunami, and a crippling recession dominating the news and public consciousness for the past ten years. However, these years have also introduced us to mind-blowing advances in medicine. The human genetic code was fully characterized. Prevention, public awareness, and new therapies have brought about a 40% decrease in mortality from heart attack and stroke since the late 1990s. Perhaps most impressive of all are the advances in stem cell research. Biologists can now transform ordinary skin cells into any tissue in the body – and perhaps soon create an entire human being.2 How have we gained the knowledge and technology to manipulate nature so dramatically? A sound scientific method is the answer – the principal way to extract measurable truth from the complex order and outright chaos of the world around us. Let’s take a few minutes to explore what this method has brought, and may soon bring, to individuals who live with a gastrointestinal illness.
What do clinical researchers do besides speak in mostly Latin and statistics?
No single person is responsible for the scientific method; it is the result of more than two millennia of collective thought. Aristotle (384-322 BC) was arguably the first to define the processes of induction and deduction. To practice inductive reasoning, you first observe facts, discern a pattern among them, and then formulate a hypothesis to explain the observations. Deductive reasoning is its complement: it progresses from constructing a theory, to deriving a hypothesis from it, to testing both with experiments. Together, induction and deduction are how we create theories.
A hypothesis is more than just a guess about an experiment’s outcome. A good hypothesis should be expressed in simple terms, be consistent with current facts, and describe something that is actually possible. Most importantly, it must be testable – capable of being proven true or false. An Arab scientist, Abū ‘Alī al-Hasan ibn al-Hasan ibn al-Haytham (965-1039), was likely the first to stress the importance of confirming a hypothesis through careful measurement of data gathered in quantitative experiments. Three other prominent architects of the modern hypothesis were Galileo (1564-1642), Sir Isaac Newton (1642-1727), and the Scottish philosopher David Hume (1711-1776).
Two more concepts aid the scientific method in furthering human knowledge. The first is Ockham’s razor – the idea that the simplest explanation or hypothesis tends to be the best one. It’s not a hard and fast rule, but a guide to developing theoretical models that are easier to test. If the simple theories and hypotheses are proven false, you can move on to more complex ones, which are more difficult to test. The second comes from the philosopher Karl Popper, who articulated a fundamental truth about scientific theories: no amount of positive evidence can conclusively prove a theory true, yet a single piece of contrary evidence is sufficient to prove it false. As Albert Einstein put it, “no amount of experimentation can ever prove me right; a single experiment can prove me wrong.”3, 4, 5
We’ve all heard the saying that variety is the spice of life. Living things are the essence of variety – the same disease can act quite differently in different individuals. For reasons scientists still don’t understand, a disease can even act differently from time to time in the same patient. So how do we prove the hypothesis that a therapy leads to clinical improvement? Could patients have simply improved with time? Could the benefits of a treatment be the result of a patient’s hope for a positive outcome – the psychological phenomenon known as the placebo effect? Controlled experiments help solve these problems.
Why do researchers have a tough time with relationships?
Because they’re control freaks… Experimental controls allow testing of a defined variable, called the dependent variable, which in medical trials usually represents clinical improvement. The dependent variable changes in response to the experimenter’s manipulation of only one other variable, the independent variable (often a drug given to a trial participant). The first clinical trial on record took place in the 18th century, when James Lind, a naval doctor, studied scurvy patients and hypothesized that the disease was due to a dietary deficiency. He divided 12 sailors with scurvy into 6 groups of two. Group 5 consumed lemons and oranges, while the other groups were fed what we now know are ineffective treatments.
After only 6 days of a diet including citrus fruits, the two debilitated sailors in group 5 were well enough to resume normal duties, as well as care for their scurvy-ridden shipmates. Lind attempted to control his experiment: all of the sailors had scurvy on entering the trial, and he kept the rest of the diet consistent across the groups. However, every group received a different active dietary treatment. Had they all appeared to work, Lind would not have known whether the improvement was due to time, therapy, or the placebo effect – and he believed at the time that all of the treatments should work. A well-controlled experiment would have included a sham treatment group, receiving something with no intended therapeutic value that was indistinguishable from actual treatment. Lind could then have compared the effectiveness of the sham (control) group to the other groups. Fortunately for Lind, and unfortunately for most of the sailors, the non-citrus treatments were ineffective and served as controls.6
There’s still room for error in this experiment. For example, what if Lind had owned a large citrus plantation? Then he would have a vested interest in seeing the citrus group improve. He could have put the healthiest scurvy patients in that group, or judged them as healthier than they actually were post-treatment. Two strategies address this potential for bias in modern clinical trials. Researchers now randomly allocate patients to treatment or control groups to make both groups very similar from the beginning, before altering the independent variables of sham treatment or real treatment. As well, the best quality trials are double-blinded, meaning that neither the investigator nor the participants know who receives actual therapy until the trial is over and the data collected.
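To make these ideas concrete, here is a minimal sketch in Python (an illustration only, not a real trial protocol) of how random allocation and blinding can work together: participants are shuffled into treatment and placebo groups, and outcomes are recorded against anonymous codes so that neither investigators nor participants know who received what until the codes are broken at the end of the trial.

```python
import random

def randomize(participants, seed=None):
    """Return a hidden allocation: code -> (participant, group).

    Half of the participants are randomly assigned to "treatment"
    and half to "placebo". Investigators see only the codes, so the
    trial stays double-blinded until the allocation is revealed.
    """
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)           # random allocation removes selection bias
    half = len(shuffled) // 2
    allocation = {}
    for i, person in enumerate(shuffled):
        group = "treatment" if i < half else "placebo"
        code = f"P{i:03d}"          # anonymous code used during the trial
        allocation[code] = (person, group)
    return allocation

# 12 hypothetical sailors, in Lind's honour
participants = ["sailor_%d" % n for n in range(1, 13)]
allocation = randomize(participants, seed=42)

# Only after all data are collected is the allocation unblinded:
groups = {g: sorted(code for code, (_, grp) in allocation.items() if grp == g)
          for g in ("treatment", "placebo")}
```

In practice, trial statisticians use more sophisticated schemes (such as block or stratified randomization), but the principle is the same: assignment is left to chance, and the assignment list stays sealed until the data are in.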
Thus, we have a brief outline of some major pillars of the medical scientific method and their champions throughout history. We now know that the method uses inductive and deductive reasoning to formulate simple theories that can be experimentally tested and, most importantly, proven false. You might wonder how this all relates to you as you sit in the gastroenterologist’s office, contemplating participation in a clinical trial or wondering when a better treatment for your illness will be available.
Most clinical trials of new therapies include a sham or placebo group, into which sick patients are randomly placed; neither the investigator nor the participant knows whether active treatment is being given. A hopeful participant may be disappointed to find that the trial didn’t help their disease at all because they were placed in the control group. This isn’t medical sadism, we promise; it’s crucial to error-free hypothesis testing. The history of medicine is full of poorly tested compounds (some harmful) that had no real effectiveness in patients; randomized, double-blind controlled trials would have eliminated them before they were mass-marketed.
One example involves an early liquid formulation of a sulfa antibiotic, elixir sulfanilamide. The antibiotic itself was effective; however, the elixir was 72% diethylene glycol, a close chemical relative of antifreeze. Sadly, 105 of the people who took this medication, a full 30%, died from kidney failure due to diethylene glycol poisoning.7 Hydergine® (ergoloid mesylates) is another example. It was the only FDA-approved drug for Alzheimer’s disease throughout the 1980s and early 1990s, yet the evidence for its effectiveness was shaky at best, and few of the trials evaluating it used well-standardized tests to measure symptom improvement in Alzheimer’s disease. A randomized, placebo-controlled study in 80 patients actually showed that placebo subjects scored better than Hydergine® subjects on both intelligence and behavioural measures. A systematic review of all Hydergine® data in 2006 concluded that the drug’s effectiveness was minimal, and perhaps non-existent.8, 9 To understand the time-consuming development and approval of new therapies, it’s necessary to take another short but interesting trip into the past.
Maybe Dr. Frankenstein wasn’t so bad…
Advances in medicine have proven the power of the scientific method but, as with all powerful tools, it can be abused with disastrous results. Such was the case in Nazi Germany, where horrific experiments with little scientific basis were forced on people – appalling acts such as inflicting mutilating wounds or cold-water submersion until death. Most of these subjects died. The Nuremberg trials highlighted these atrocities and laid the groundwork for the ethical treatment of trial participants. The tragedies did not stop there, though. In June of 1966, Henry K. Beecher outlined 22 ethically questionable trials in the US. They included the injection of live cancer cells into senile patients and of hepatitis virus into mentally challenged individuals. Perhaps most infamous was the Tuskegee syphilis experiment, which continued until the 1970s, wherein African-American patients were refused curative therapy with penicillin – in use against syphilis since the 1940s – so researchers could study the untreated course of the disease.10 To finally eliminate these abuses and protect research participants, legislation and strict guidelines were introduced in most countries. The basic principles are as follows:
- An ethics board must grant approval to all clinical trials. The members must be independent of the researchers.
- Participants must give informed consent, in that they must understand the risks and benefits of the trial.
- The trial must have an acceptable anticipated risk-to-benefit ratio, meaning that theoretically the trial should do more good than harm.
- The trial must have scientific value, in that it must lead to useful knowledge (using the methods outlined above). The technical term for this is ‘upsetting equipoise’.
- The trial should be described clearly in a document called a protocol.
For the sake of protecting subjects in drug trials, manufacturers must conduct three phases of clinical trials before a drug receives approval from a national licensing body. These phases, and the ethical principles they strive to protect, are outlined in the table below.11 Although multiple phases are ethically necessary, they significantly increase the time to approval of a treatment by a licensing body – hence the long road from a potential drug to the prescription pad.
So what has science done for me lately?
Now that we have a fundamental knowledge of the scientific method in medicine, let’s appreciate the improvements that it has brought to the lives of gastroenterology patients. One of the breakthroughs in treating inflammatory bowel disease is the use of anti-TNF antibodies, with the introduction of the medication infliximab. An antibody is a protein (a molecular machine made up of many chemical building blocks) that attaches tightly to infectious agents and helps eliminate them. An anti-TNF antibody is one that has been manipulated to bind tightly to TNF (tumour necrosis factor), a hormone that is linked to many immune functions. Elimination of excess TNF opposes the maladaptive activation of the immune system that causes inflammatory bowel disease.
Most drugs are small molecules, which are relatively simple to manufacture. An antibody, however, is a huge molecule constructed by living cells and modified by other proteins. So how do manufacturers produce this treatment? The diagram below (Figure 1) illustrates the complex process,12 which took many years of cooperation among various scientific disciplines to develop.
Another fascinating therapy resulting from scientific research, one that has been around for decades, is the ileal pouch-anal anastomosis (IPAA) procedure. Patients who have colon cancer or aggressive inflammatory bowel disease sometimes need to have the large bowel removed (colectomy). In the past, a colectomy necessitated an ostomy – a hole in the abdomen that releases feces into an external reservoir. For obvious reasons, a waste collection bag under your shirt is undesirable. To avoid this, surgeons developed a method to fold the last part of the small intestine (the ileum) upon itself and fasten it together to form a larger-diameter structure called a pouch. The pouch mimics the function of the colon by absorbing water from the stool and storing it, to be eliminated in a firmer form. Surgeons join the pouch to the anus, preserving the individual’s ability to control the passage of stool in an anatomically normal way. This is obviously a complex surgical procedure, and different techniques were compared over time, sometimes using randomized controlled trials, to determine the strategy that maximizes patient outcomes.13 Anti-TNF antibodies, and especially IPAA, can dramatically improve a person’s quality of life.
Medicine of the future: you’re going to stick those instruments where?! And do what?!
Let’s conclude our discussion with a few promising advances that may soon be routine:
Robotic researchers: Two robots, dubbed Adam and Eve, have the ability to formulate hypotheses and conduct experiments independent of humans. Artificial intelligence techniques allowed the robots to conduct new research on the genetics of yeast, and they produced findings that had evaded humans. The automation of discovery may someday result in more rapid identification of drug candidates.14
Machine learning to aid medical diagnosis: Artificial intelligence advances can aid diagnosis and predict prognosis with increasing reliability by looking at lab results and symptoms. Imagine the reduction of wait times in hospital emergency rooms with complex computer programs providing instant accurate advice to physicians based on symptoms and simple lab tests. In some hospitals they are already being used to analyze medical imaging. They can ‘look’ at endoscopies to detect colonic polyps (which can be pre-cancerous) that a human might miss. Experiments have also shown that artificial intelligence programs can analyze the genetic make-up of a trauma patient to help predict outcomes as well as or better than current methods.15, 16 These methods probably won’t replace human clinicians and are certainly not yet infallible. It’s likely, though, that they will soon be implemented and will routinely save time, limit mistakes, and help save lives.
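As a toy illustration (entirely made-up data, not a real diagnostic tool), the simplest version of this idea can be sketched in a few lines of Python: a “nearest-neighbour” rule compares a new patient’s lab values to previously diagnosed cases and suggests the diagnosis of the most similar one. Real systems are trained on vast datasets and carefully validated before any clinical use.

```python
import math

# Fabricated example data: (white cell count, C-reactive protein)
# paired with a diagnosis label. Real models use far more features
# and thousands of validated cases.
known_cases = [
    ((6.0, 2.0), "healthy"),
    ((7.1, 4.0), "healthy"),
    ((13.5, 48.0), "flare"),
    ((15.2, 60.0), "flare"),
]

def diagnose(labs):
    """Suggest the label of the most similar known case (1-nearest-neighbour)."""
    nearest = min(known_cases, key=lambda case: math.dist(case[0], labs))
    return nearest[1]

print(diagnose((14.0, 55.0)))  # closest known cases are "flare" patients
```

The appeal of such methods is that, unlike a fixed rule written by a human, they improve automatically as more diagnosed cases are added to the reference data.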
Last but not least, Natural Orifice Transluminal Endoscopic Surgery (NOTES): With this technique, surgical tools are inserted through an existing orifice (such as the mouth, nose, anus, or vagina) which then cut through the wall of the cavity, thus gaining access to the interior of the body. The result – a surgery with no skin scars, shorter hospital stays, lower wound infection rates (there’s no external wound), less intense anaesthesia, and faster recovery. Physicians expect that it will revolutionize surgery. So far it is experimental, but it’s been successful in animals and small numbers of human patients needing kidney transplants or gall-bladder removal.17
I hope that you’ve found this brief overview of medical research and gastroenterology informative and interesting. Reviewing it, I think we have reason to be optimistic. Science has come a long way, both in its knowledge base and in its compassion for those participating in clinical trials. We can look forward to a wealth of new frontiers.
Phase | Number of patients | Ethical principles | Time to complete
I | Dozen(s) | Safety – can healthy people handle the therapy? What are safe doses? | About one year
II | 50-200 | What are effective and safe doses in the targeted disease? What is the risk-to-benefit ratio? Is the trial scientifically sound? | 1-2 years
III | 300-2000 | What is the risk-to-benefit ratio? Is the trial scientifically sound? Does the therapy do more good than harm? | Several years