You're BrowsingLos Angeles - DTLA
Menu
logo
Cart
ember logoA Journal of Cannabis and Culture
EMBER /
Arrow
search
July 08, 2022
Crash Course: How Does the Scientific Process Actually Work?

BY BEN THOMAS | Graphics by Simon Diago, archival images from The National Cancer Institute and Girl with red hat

It’s not always easy to make sense of what’s being reported in news stories on cannabis—or on any scientific topic. If an article says, “Scientists find _____,” does that mean the claim has been scientifically proven? Does it mean all scientists agree, or the majority, or just some—or only the authors of one particular study? And how sure are these researchers, exactly, that what they’re claiming is true?

These can be tricky questions to answer, even for experts within the scientific community—let alone those of us who get much of our information second- or third-hand, from news outlets and social media. Not everybody has the time to read peer-reviewed scientific papers—nor should they need to in order to get a clear sense of the truth. But in the news media’s endless game of “telephone,” it’s all too easy for preliminary data to get spun into overblown headlines in which “Science” allegedly says that “Coffee Can Prevent Cancer!”

Headlines like these don’t do anybody any favors—least of all scientific researchers, who rarely claim any such thing. So let’s peel back the hype cycle, and take a clear look at how the scientific process really operates under the hood.

What exactly is the scientific method?

Most of us have been hearing about the scientific method since middle school—but a lot of people seem to be confused about what this method actually is. Movies would have you believe it’s all about bubbling beakers and Matrix-like computer screens, or perhaps some kind of mystical technique, like Shaolin kung fu.

But in reality, the scientific method is just a sequence of seven simple steps:


  1. Ask a question—for example, “Does CBD help relieve pain?”

  2. Do background research and find out what experts are saying.

  3. Propose a hypothesis, or possible explanation for how something might work—for example, “CBD might help relieve pain by blocking pain receptors in the brain.”

  4. Test your hypothesis with an experiment—for example, give CBD to 200 laboratory mice, and observe if and how CBD alters their brains’ responses to painful stimuli, compared with a separate control group of 200 mice who don’t receive CBD.

  5. Analyze the results of your experiment, and consider whether the data appear to support your hypothesis, or make it unlikely to be correct. (A toy version of this kind of analysis appears just after this list.)

  6. Publish your data in a reputable peer-reviewed scientific journal, for other people to analyze—including a step-by-step walkthrough of your experiment so others can repeat it and check if their own results differ. 

  7. Repeat your experiment, changing only one variable at a time, to check if some other factor(s) might be responsible for the results you’re getting.
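
To make steps 4 and 5 concrete, here is a minimal Python sketch of the kind of analysis such an experiment might use. The pain-response scores are invented for illustration (a real study would use measured data and more rigorous statistics); the permutation test simply asks how often randomly shuffled group labels would produce a difference as large as the one observed.

```python
import random
import statistics

random.seed(42)  # fixed seed so the toy example is repeatable

# Hypothetical pain-response scores (lower = less pain) for 200 mice
# given CBD and 200 control mice. These numbers are invented.
cbd_group = [random.gauss(4.2, 1.0) for _ in range(200)]
control_group = [random.gauss(5.0, 1.0) for _ in range(200)]

observed_diff = statistics.mean(control_group) - statistics.mean(cbd_group)

# Permutation test: if the group labels were meaningless, how often
# would shuffling them produce a difference at least this large?
pooled = cbd_group + control_group
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:200]) - statistics.mean(pooled[200:])
    if diff >= observed_diff:
        extreme += 1

print(f"Observed difference in mean pain response: {observed_diff:.2f}")
print(f"Approximate one-sided p-value: {extreme / trials:.4f}")
```

If random relabelings almost never produce a difference that large, the results are harder to explain away as chance, which is exactly the question step 5 asks.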

This isn’t just a one-time process, of course, but a continual cycle in which data from each experiment serves as background information for further hypotheses and experiments. What’s more, a true scientist doesn’t care whether their hypothesis is disproven. Scientists simply follow the data wherever it leads—and openly suggest follow-up experiments that might further confirm or invalidate their results.

In fact, throughout every stage of this cycle, scientists are encouraged not only to dispute each other’s conclusions, but also to critique the materials and methods used—for example, what if the CBD dose was too low, or too high? What if mice respond to CBD differently than humans do? What if CBD affects mice (and/or humans) differently when THC gets involved? All perfectly valid questions, which can—and should!—fuel the next rounds of hypotheses.

From model to in vitro to in vivo—and beyond

In life sciences like biology and pharmacology (the study of how drugs work), hypotheses and experiments typically proceed through a series of three broad stages:

  1. Scientists first propose a model or mechanism for how something might work, then test that model in a simulation—for example, by writing a computer program that mathematically simulates CBD’s interactions with brain cells, based on data published in other scientific studies, to check whether the proposed effect is even biologically possible. (A toy version of such a model appears just after this list.)

  2. If the simulation looks plausible, scientists then test their hypothesis in vitro—literally “in glass”—by applying it to human or animal cells grown artificially in a lab. 

  3. If in vitro experiments appear to support the hypothesis, scientists may then proceed to test their model in vivo—literally “in life”—using live animal subjects. This may be followed up with more in vivo experiments on live human volunteers.
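
As a toy illustration of stage 1, here is what a bare-bones computational model can look like, using the standard Hill equation for receptor binding. The dissociation constant (Kd) and Hill coefficient below are placeholders chosen for illustration, not measured values for CBD; the point is that a model turns a verbal hypothesis into numbers that can be checked.

```python
# Toy "stage 1" model: fractional receptor occupancy as a function of
# dose, using the standard Hill equation. The Kd and Hill coefficient
# below are placeholders, not measured values for CBD.
def receptor_occupancy(concentration_nM: float,
                       kd_nM: float = 500.0,
                       hill_coefficient: float = 1.0) -> float:
    """Fraction of receptors bound at a given ligand concentration."""
    bound = concentration_nM ** hill_coefficient
    return bound / (kd_nM ** hill_coefficient + bound)

for dose in (10, 100, 500, 1_000, 10_000):
    print(f"{dose:>6} nM -> {receptor_occupancy(dose):5.1%} occupancy")
```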

Following the scientific method, researchers are always free to dispute whether a computer model accounts for all the necessary factors—and to question whether a mathematical simulation’s results can be reproduced in vitro, whether in vitro results can be reproduced using live animals, and whether results from animals will translate to human beings. Sometimes the findings hold up at every stage—while in many other cases, the translation breaks down somewhere along the way, and it’s back to the drawing board.

Yet as crucial as these distinctions are, they can be hard to discern in pop-science articles, which frequently report a proposed mechanism as though it’s been observed in real life (it often hasn’t), and report preliminary in vitro results as if they’ve been replicated in live animals, or in large groups of human beings (again, they often haven’t).

Since scientists tend to hedge quite a bit when they report their results, it’s sometimes tricky to pinpoint where communication breakdowns happen. Many pop-science reporters don’t seem to have read the actual studies they’re hyping—while others misinterpret what they read, or may even deliberately exaggerate the results for the sake of clickbait.

But you can spot the differences for yourself by scanning for keywords like “potential,” “preliminary,” “possible,” “model,” “in vitro,” and “in vivo”—all of which will help clue you in on what a study is really claiming.
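
For the curious, here is a minimal sketch of what that kind of keyword scan looks like in code. The headline is invented, and spotting hedge words is only a first filter, not a substitute for reading the study itself.

```python
import re

# Hedging keywords that signal how preliminary a study's claims are.
HEDGE_WORDS = ["potential", "preliminary", "possible", "model",
               "in vitro", "in vivo"]

def find_hedges(text: str) -> dict[str, int]:
    """Count each hedging keyword (case-insensitive) in a piece of text."""
    lowered = text.lower()
    return {word: len(re.findall(re.escape(word), lowered))
            for word in HEDGE_WORDS}

headline = ("Preliminary in vitro results suggest a possible model "
            "for CBD's potential pain-relieving effects.")
print(find_hedges(headline))
```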


The plural of “anecdote” is not “data”

What if I follow the whole scientific method, from question to hypothesis to experiment to analysis, using myself as the test subject? The results still count as scientific data as long as I write down every step and publish my findings, right?

Nope. What I have in that case is not hard data, but a personal anecdote—and the plural of “anecdote” is not “data.” My trip reports published on Reddit are not data. My budtender’s theories about terpenes are not data. My Amazon product reviews are not data, either—at least, not until someone compares and contrasts them with many other people’s reviews, analyzes that body of information for overall trends, and reports the results of that analysis in a way that other researchers can repeat to check if they get the same output. Then, and only then, do we have hard scientific data¹. Everything up to that point is just, “Cool story, bro.”
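
Here is a small sketch of that difference in practice, using hypothetical star ratings: a single rating is an anecdote, but a documented, repeatable analysis of many ratings starts to look like data.

```python
import statistics

# Hypothetical star ratings gathered from many independent reviews.
# One rating is an anecdote; a documented analysis of many of them,
# published so others can rerun it, can start to become data.
ratings = [5, 4, 4, 3, 5, 2, 4, 4, 5, 3, 4, 1, 5, 4, 3]

n = len(ratings)
mean = statistics.mean(ratings)
stderr = statistics.stdev(ratings) / n ** 0.5

print(f"n = {n}, mean rating = {mean:.2f} +/- {1.96 * stderr:.2f} (rough 95% CI)")
```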

In fact, even after researchers have published a mountain of data, that data may turn out to be meaningless if no other research team can actually reproduce it. A 2016 survey published in Nature found that more than 70 percent of researchers had tried and failed to reproduce another scientist’s experiments. (Before you ask: yes, this disturbing finding has since been replicated in numerous other studies.)

But while today’s “replication crisis” might seem to call the scientific method itself into question, it actually serves as a perfect example of why this method is so vitally important. If scientists weren’t required to publish their data and document their methods, and weren’t encouraged to read and critique their peers’ work, we might never have discovered this crisis in the first place—and we’d be even more ignorant than we are now.

So when you come across a headline claiming “science says” something, keep in mind that science’s job isn’t to prove an idea is “true”—only to collect evidence that a hypothesis might be correct, which can always (now and in the future) be contradicted by data from other studies. 

If you want to help protect yourself and your friends from fake science news, here’s a quick checklist of questions to ask before sharing an article or post:

  1. What hypothesis was being tested?

  2. What’s the evidence in favor of that hypothesis? What about against it?

  3. Are the researchers just proposing a possible model, or have they actually observed this mechanism in vitro and/or in vivo?

  4. What’s the sample size—e.g., how many people, animals, organs, cells and/or batches were tested? (Small sample = big red flag.)

  5. What’s the scope of the dataset—e.g., what time periods or geographical regions does the study cover? Does it ignore any demographics?

  6. Does the study include a clearly defined control group who didn’t receive the treatment being tested? How did their results differ?

  7. Who funded the study? Do its sponsors stand to gain an advantage from spinning the results toward a certain conclusion?

  8. Do the results look like systematically organized data, or just a collection of “cool story, bro” anecdotes?

  9. Have any other researchers been able to replicate the study’s results?

If some of the above questions prove tough to answer, that doesn’t necessarily mean a study’s results are useless, or that the researchers are being dishonest—it may just mean the findings are preliminary suggestions, and that’s how the researchers intend to present them. 

In that case, it’s great to get excited about the potential implications—but until more data emerge to clarify the picture, the smartest play is simply to say, “Wouldn’t it be cool if…?” and leave it at that.


¹ You may come across some articles saying that the plural of anecdote can, in fact, be data—but they’re only saying this to make a microscopically fine-grained point: a sufficiently large number of anecdotes can collectively constitute data if they're organized in a scientifically rigorous way. In other words, this is like saying, "The plural of 'person' is not 'nationality,' but a sufficiently large group of people can sometimes constitute a nationality if they're organized in a systematic way."

Ben Thomas is a journalist and novelist who's lived in 40+ countries. He runs the publishing company House Blackwood, and produces the podcast Horrifying Tales of Wonder! Follow him on Twitter at @writingben.

These statements have not been evaluated by the Food and Drug Administration. Products are not intended to diagnose, treat, cure, or prevent any disease.