When it comes to Crispr, the bacterial wunderenzyme that allows scientists to precisely edit DNA, no news is too small to stir up some drama. On Tuesday morning, doctors from Columbia, Stanford, and the University of Iowa published a one-page letter to the editor of Nature Methods—a specialized but high-profile journal—describing something downright peculiar. About a year ago, they used Crispr to edit out a blindness-causing genetic defect in mice, curing two of the animals in their cohort. Later, they decided to go back and sequence the mice's whole genomes, just to see what else Crispr did while it was in there.

A lot, it turned out. With their method, the researchers observed close to 2,000 unintended mutations throughout each mouse’s genome, a rate more than 10 times higher than anyone had previously reported. If that holds up, Crispr-based therapies are in for some serious trouble. No one wants to go in for a vision-restoring treatment, only to wind up with cancer because of it.

The ensuing headlines were gleefully apocalyptic: “Crispr May Not Be Nearly as Precise as We Thought,” “Crack in Crispr Facade after Unanticipated In Vivo Mutations Arise,” and my personal favorite, “Small Study Finds Fatal Flaw in Gene Editing Tool Crispr.” And then the biotech stocks went into a tailspin. The big three Crispr-based tech companies got hit the hardest. By the close of trading Tuesday, Editas Medicine was down nearly 12 percent, Crispr Therapeutics had fallen more than 5 percent, and Intellia Therapeutics had plunged just over 14 percent.

This was far from just a blip in the nerdy news cycle. A reaction to a single scientific publication on this scale raises important questions about science’s incentive structure, its processes for publicly evaluating evidence, and what happens when those butt up against the prevailing philosophies of other professions—namely, medicine.

A decade ago, most of the conversations about this letter would have happened in laboratory hallways. But this week, geneticists, microbiologists, and molecular bioengineers took to Twitter to digest the paper in public. While some experts decried the paper as unnewsworthy (everyone’s known about Crispr off-target mutations forever!), the majority of threads ticked off the experiment’s flaws: Tiny sample size! Insufficient controls! Weird Crispr delivery! An out-of-date, inefficient version of Crispr! The list goes on. Many doubted whether it had been peer-reviewed. (It had.) The hashtag #fakenews even made a few appearances.

To be sure, the results do not match up well with what’s already in the literature on this subject. And, as the paper itself says, “The unpredictable generation of these variants is of concern.” Which is to say, the authors have no idea why or how these mutations are happening. Derek Lowe, a longtime pharmaceutical industry researcher who writes a blog on the subject for Science, doubted the results enough that he bought some Editas and Crispr Therapeutics stock while the shares were down.

But most scientists, while skeptical of the results, were more disappointed in the way the paper was blown out of proportion. “It’s critically important to look closely at genomes being edited with the Crispr system, ideally with a method sensitive enough to detect even rare off-target events,” says Stephen Floor, a biophysicist who worked in Crispr pioneer Jennifer Doudna’s lab at UC Berkeley before beginning his own gene-editing cancer research at UCSF. Saying Crispr is 100 percent accurate or grossly inaccurate isn’t helpful. What scientists need to understand is which sites are being cut, what rules govern which sites get cut, and how to ensure that cuts happen only at the sites they want. “It will be interesting to watch subsequent validation that gets to the bottom of why this report found such a surprisingly high rate of mutation,” he says.

The key word there, if you didn’t catch it, is “validation.” It’s pretty much the foundational tenet of science. You have an idea, you test it, you test it again, you eliminate confounding factors as best as you can and then you validate your results. All the critiques of the Nature Methods paper assumed the authors were operating with that same premise.

But in this case, the authors weren’t scientists: They were doctors. And in medicine, there’s a different guiding principle that places a premium on sharing significant results at face value.

The history of the case study is long and celebrated in medicine. The first recorded report of what would one day be known as HIV/AIDS was published by the CDC as five strange cases of pneumonia in gay men in Los Angeles. Vinit Mahajan, an ophthalmologist at Stanford and co-author of Tuesday’s Crispr paper, says it was in that spirit that he and his collaborators submitted their results to the journal. “I don’t have any money in Crispr, I only have patients,” he says. “The culture and pressures of science right now push people to not share results that aren’t a splashy cure. But in medicine you can’t do that. If you make an observation that’s important enough to share with your community, you’re obligated to do that right away.”

Since Mahajan’s team is working on turning its previous work into a human treatment, the researchers saw it as irresponsible to take their results, small as they were, and sweep them under the rug. Crispr is most often described as molecular scissors, but doctors like Mahajan tend to think of it more like a drug. And the more successes Crispr has—like curing mouse blindness—the more doctors start asking the next logical questions about things like dosing and formulations and side effects. How long can you have the enzyme floating around your cells before it cuts somewhere it shouldn’t? What’s the right enzyme for the job?

Matthew Taliaferro, who studies gene expression and gene editing at MIT, thinks the paper will get more scientists thinking about those kinds of questions. “Crispr definitely has off-targets. But a lot of people use it assuming no other mutations get introduced during the process,” he says. “So getting people to talk about the need for controls is a good outcome of this whole thing.” And while he was surprised by the lack of some straightforward controls, Taliaferro is aware that his initial reactions were colored by some of the Twitter threads he’d already absorbed before tracking down the paper himself. “I think the data is perfectly fine,” he says. “It’s just the interpretation of it that to me seems odd.” Namely, that every Crispr application is deeply flawed.

Which was never Mahajan’s intention in the first place. “We didn’t write the headlines,” he says. “We don’t think Crispr is bad, we think it’s great.” But he didn’t get the opportunity to tell people that, because for one thing, he’s not on Twitter. When asked how he was responding to the criticisms from the scientific community, he laughed and said, “Can you read some to me? I’ve heard there’s some nasty stuff out there.”

The amplifications (and denigrations) of those interpretations around Science Twitter may not have been as knee-jerk as all the “Crispr Is Terrible and Broken Forever” headlines. But still, they were an overreaction—because after all, this was just a single paper. No one should presume a standalone study can predict the future of an entire technique. At most, it indicates that Crispr is entering its inevitable adolescence, when shiny silver bullet technologies get banged up and battle worn by new data. That doesn’t mean it isn’t the real deal. Just that it should be looked at real hard every step of the way.



