A Family's Race to Cure a Daughter's Genetic Disease

One July afternoon last summer, Matt Wilsey distributed small plastic tubes to 60 people gathered in a Palo Alto, California, hotel. Most of them had traveled thousands of miles to be here; now, each popped the top off a barcoded tube, spat in about half a teaspoon of saliva, and closed the tube. Some massaged their cheeks to produce enough spit to fill the tubes. Others couldn’t spit, so a technician rolled individual cotton swabs along the insides of their cheeks, harvesting their skin cells—and the valuable DNA inside.

One of the donors was Asger Vigeholm, a Danish business developer who had traveled from Copenhagen to be here, in a nondescript lobby at the Palo Alto Hilton. Wilsey is not a doctor, and Vigeholm is not his patient. But they are united in a unique medical pursuit.

Wilsey’s daughter, Grace, was one of the first children ever diagnosed with NGLY1 deficiency. It’s a genetic illness defined by a huge range of physical and mental disabilities: muscle weakness, liver problems, speech deficiencies, seizures. In 2016, Vigeholm’s son, Bertram, became the first child known to die from complications of the disease. Early one morning, as Bertram, age four, slept nestled between his parents, a respiratory infection claimed his life, leaving Vigeholm and his wife, Henriette, to mourn with their first son, Viktor. He, too, has NGLY1 deficiency.

Grace and her mother, Kristen Wilsey.

BLAKE FARRINGTON

The night before the spit party, Vigeholm and Wilsey had gathered with members of 16 other families, eating pizza and drinking beer on the hotel patio as they got to know each other. All of them were related to one of the fewer than 50 children living in the world with NGLY1 deficiency. And all of them had been invited by the Wilseys—Matt and his wife Kristen, who in 2014 launched the Grace Science Foundation to study the disease.

These families had met through an online support group, but this was the first time they had all come together in real life. Over the next few days in California, every family member would contribute his or her DNA and other biological samples to scientists researching the disease. On Friday and Saturday, 15 of these scientists described their contributions to the foundation; some studied the NGLY1 gene in tiny worms or flies, while others were copying NGLY1-deficient patients' cells to examine how they behaved in the lab. Nobody knows what makes a single genetic mutation morph into all the symptoms Grace experiences. But the families and scientists were there to find out—and maybe even find a treatment for the disease.

That search has been elusive. When scientists sequenced the first human genome in 2000, geneticist Francis Collins, a leader of the Human Genome Project that accomplished the feat, declared that it would lead to a “complete transformation in therapeutic medicine” by 2020. But the human genome turned out to be far more complex than scientists had anticipated. Most disorders, it’s now clear, are caused by a complicated mix of genetic faults and environmental factors.

And even when a disease is caused by a defect in just one gene, like NGLY1 deficiency, fixing that defect is anything but simple. Scientists have tried for 30 years to perfect gene therapy, a method for replacing defective copies of genes with corrected ones. The first attempts used modified viruses to insert corrected genes into patients’ genomes. The idea appeared elegant on paper, but the first US gene therapy to treat an inherited disease—for blindness—was approved just last year. Now scientists are testing methods such as Crispr, which offers a far more precise way to edit DNA, to replace flawed genes with error-free ones.

Certainly, the genetics revolution has made single-mutation diseases easier to identify; there are roughly 7,000, with dozens of new ones discovered each year. But if it’s hard to find a treatment for common genetic diseases, it’s all but impossible for the very rare ones. There’s no incentive for established companies to study them; the potential market is so small that a cure will never be profitable.

Which is where the Wilseys—and the rest of the NGLY1 families—come in. Like a growing number of groups affected by rare genetic diseases, they’re leapfrogging pharmaceutical companies’ incentive structures, funding and organizing their own research in search of a cure. And they’re trying many of the same approaches that Silicon Valley entrepreneurs have used for decades.

At 10:30 on a recent Monday morning, Grace is in Spanish class. The delicate 8-year-old with wavy brown hair twisted back into a ponytail sits in her activity chair—a maneuverable kid-sized wheelchair. Her teacher passes out rectangular pieces of paper, instructing the students to make name tags.

Grace grabs her paper and chews it. Her aide gently takes the paper from Grace’s mouth and puts it on Grace’s desk. The aide produces a plastic baggie of giant-sized crayons shaped like cylindrical blocks; they’re easier for Grace to hold than the standard Crayolas that her public school classmates are using.

Grace’s NGLY1 deficiency keeps her from speaking.

BLAKE FARRINGTON

At her school, a therapist helps her communicate.

BLAKE FARRINGTON

The other kids have written their names and are now decorating their name tags.

“Are we allowed to draw zombies for the decorations?” one boy asks, as Grace mouths her crayons through the baggie.

Grace’s aide selects a blue crayon, puts it in Grace’s hand, and closes her hand over Grace’s. She guides Grace’s hand, drawing letters on the paper: “G-R-A-C-E.”

Grace lives with profound mental and physical disabilities. After she was born in 2009, her bewildering list of symptoms—weak muscles, difficulty eating, failure to thrive, liver damage, dry eyes, poor sleep—confounded every doctor she encountered. Grace didn’t toddle until she was three and still needs help using the toilet. She doesn’t speak and, like an infant, still grabs anything within arm’s reach and chews on it.

Her father wants to help her. The grandson of a prominent San Francisco philanthropist and a successful technology executive, Matt Wilsey graduated from Stanford, where he became friends with a fellow undergraduate who would one day be Grace’s godmother: Chelsea Clinton. Wilsey went on to work in the Clinton White House, on George W. Bush’s presidential campaign, and in the Pentagon.

But it was his return to Silicon Valley that really prepared Wilsey for the challenge of his life. He worked in business development for startups, where he built small companies into multimillion-dollar firms. He negotiated a key deal between online retailer Zazzle and Disney, and later cofounded the online payments company Cardspring, where he brokered a pivotal deal with First Data, the largest payment processor in the world. He was chief revenue officer at Cardspring when four-year-old Grace was diagnosed as one of the first patients with NGLY1 deficiency in 2013—and when he learned there was no cure.

At the time, scientists knew that the NGLY1 gene makes a protein called N-glycanase. But they had no idea how mistakes in the NGLY1 gene caused the bewildering array of symptoms seen in Grace and other kids with NGLY1 deficiency.

Wilsey’s experience solving technology problems spurred him to ask scientists, doctors, venture capitalists, and other families what he could do to help Grace. Most advised him to start a foundation—a place to collect money for research that might lead to a cure for NGLY1 deficiency.

As many as 30 percent of families who turn to genetic sequencing receive a diagnosis. But most rare diseases are new to science and medicine, and therefore largely untreatable. More than 250 small foundations are trying to fill this gap by sponsoring rare disease research. They’re funding scientists to make animals with the same genetic defects as their children so they can test potential cures. They’re getting patients’ genomes sequenced and sharing the results with hackers, crowdsourcing analysis of their data from global geeks. They’re making bespoke cancer treatments and starting for-profit businesses to work on finding cures for the diseases that affect them.

“Start a foundation for NGLY1 research, get it up and running, and then move on with your life,” a friend told Wilsey.

Wilsey heeded part of that advice but turned the rest of it on its head.

In 2014, Wilsey left Cardspring just before it was acquired by Twitter and started the Grace Science Foundation to fund research into NGLY1 deficiency. The foundation has committed $7 million to research since then, most of it raised from the Wilseys’ personal network.

Many other families with sick loved ones have started foundations, and some have succeeded. In 1991, for instance, a Texas boy named Ryan Dant was diagnosed with a fatal muscle-wasting disease called mucopolysaccharidosis type 1. His parents raised money to support an academic researcher who was working on a cure for MPS1; a company agreed to develop the drug, which became the first approved treatment for the disease in 2003.

But unlike Dant, Grace had a completely new disease. Nobody was researching it. So Wilsey began cold-calling dozens of scientists, hoping to convince them to take a look at NGLY1 deficiency; if they agreed to meet, Wilsey read up on how their research might help his daughter. Eventually he recruited more than 100 leading scientists, including Nobel Prize-winning biologist Shinya Yamanaka and Carolyn Bertozzi, to figure out what was so important about N-glycanase. He knew that science was unpredictable and so distributed Grace Science’s funding through about 30 grants worth an average of $135,000 apiece.

Two years later, one line of his massively parallel attack paid off.

Matt Wilsey, Grace’s father.

BLAKE FARRINGTON

Bertozzi, a world-leading chemist, studies enzymes that add and remove sugars from other proteins, fine-tuning their activity. N-glycanase does just that, ripping sugars off other proteins. Our cells are not packed with the white, sweet stuff that you add to your coffee. But tiny sugar molecules, similar to table sugar, can attach themselves to proteins inside cells, acting like labels that tell the cell what to do with these proteins.

Scientists thought that N-glycanase’s main role was to help recycle defective proteins, but many other enzymes are also involved in this process. Nobody understood why the loss of N-glycanase had such drastic impacts on NGLY1 kids.

In 2016, Bertozzi had an idea. She thought N-glycanase might be more than just a bit player in the cell's waste management system, so she decided to check whether it interacts with another protein that turns on the proteasome, the recycling machine within each of our cells.

This protein is nicknamed Nerf, after its abbreviation, Nrf1. But fresh-made Nerf comes with a sugar attached to its end, and as long as that sugar sticks, Nerf doesn’t work. Some other protein has to chop the sugar off to turn on Nerf and activate the cellular recycling service.

Think of Nerf’s sugar like the pin in a grenade: You have to remove the pin—or in this case, the sugar—to explode the grenade and break down faulty proteins.

But nobody knew what protein was pulling the pin out of Nerf. Bertozzi wondered if N-glycanase might be doing that job.

To find out, she first tested cells from mice and humans with and without working copies of the NGLY1 gene. The cells without NGLY1 weren’t able to remove Nerf’s sugar, but those with the enzyme did so easily. If Bertozzi added N-glycanase enzymes to cells without NGLY1, the cells began chopping off Nerf’s sugar just as they were supposed to: solid evidence, she thought, that N-glycanase and Nerf work together. N-glycanase pulls the pin (the sugar) out of the grenade (the Nerf protein) to trigger the explosion (boom).

The finding opened new doors for NGLY1 disease research. It gave scientists the first real clue about how NGLY1 deficiency affects patients’ bodies: by profoundly disabling their ability to degrade cellular junk via the proteasome.

As it turns out, the proteasome is also involved in a whole host of other diseases, such as cancer and brain disorders, that are far more common than NGLY1 deficiency. Wilsey immediately grasped the business implications: He had taken a moon shot, but he’d discovered something that could get him to Mars. Pharmaceutical companies had declined to work on NGLY1 deficiency because they couldn’t make money from a drug for such a rare disease. But Bertozzi had now linked NGLY1 deficiency to cancer and maladies such as Parkinson’s disease, through the proteasome—and cancer drugs are among the most profitable medicines.

Suddenly, Wilsey realized that he could invent a new business model for rare diseases. Work on rare diseases, he could argue, could also enable therapies for more common—and therefore profitable—conditions.

In early 2017, Wilsey put together a slide deck—the same kind he’d used to convince investors to fund his tech startups. Only this time, he wanted to start a biotechnology company focused on curing diseases linked to NGLY1. Others had done this before, such as John Crowley, who started a small biotechnology company that developed the first treatment for Pompe disease, which two of his children have. But few have been able to link their rare diseases to broader medical interests in the way that Wilsey hoped to.

He decided to build a company that makes treatments for both rare and common diseases involving NGLY1. Curing NGLY1 disease would be to this company as search is to Google—the big problem it was trying to solve, its reason for existence. Treating cancer would be like Google’s targeted advertising—the revenue stream that would help the company get there.

But his idea had its skeptics, Wilsey’s friends among them.

One, a biotechnology investor named Kush Parmar, told Wilsey about some major obstacles to developing a treatment for NGLY1 deficiency. Wilsey was thinking of using approaches such as gene therapy to deliver corrected NGLY1 genes into kids, or enzyme replacement therapy, to infuse kids with the N-glycanase enzyme they couldn’t make on their own.

But NGLY1 deficiency seems particularly damaging to cells in the brain and central nervous system, Parmar pointed out—places that are notoriously inaccessible to drugs. It’s hard to cure a disease if you can’t deliver the treatment to the right place.

Other friends warned Wilsey that most biotech startups fail. And even if his did succeed as a company, it might not achieve the goals that he wanted it to. Ken Drazan, president of the cancer diagnostics company Grail, is on the board of directors of Wilsey’s foundation. Drazan warned Wilsey that his company might be pulled away from NGLY1 deficiency. “If you take people’s capital, then you have to be open to wherever that product development takes you,” Drazan said.

But Wilsey did have some things going for him. Biotechnology companies have become interested of late in studying rare diseases—ones like the type of blindness for which the gene therapy was approved last year. If these treatments represent true cures, they can command a very high price.

Still, the newly approved gene therapy for blindness may be used in 6,000 people, 100 times more than could be helped by an NGLY1 deficiency cure. Wilsey asked dozens of biotechnology and pharmaceutical companies if they would work on NGLY1 deficiency. Only one, Takeda, Japan’s largest drug company, agreed to conduct substantial early-stage research on the illness. Others turned him down flat.

If no one else was going to develop a drug to treat NGLY1 deficiency, Wilsey decided, he might as well try. “We have one shot at this,” he says. “Especially if your science is good enough, why not go for it?”

“Matt was showing classic entrepreneurial tendencies,” says Dan Levy, the vice president for small business at Facebook, who has known Wilsey since they rushed the same Stanford fraternity in the 1990s. “You have to suspend a little bit of disbelief, because everything is stacked against you.”

At 11 am, Grace sits in a classroom with a speech therapist. Though Grace doesn’t speak, she’s learning to use her “talker,” a tablet-sized device with icons that help her communicate. Grace grabs her talker and presses the icons for “play” and “music,” then presses a button to make her talker read the words out loud.

The "talker" used for Grace’s therapy.

BLAKE FARRINGTON

“OK, play music,” her therapist says, starting up a nearby iPad.

Grace watches an Elmo video on the iPad for a few moments, her forehead crinkled in concentration, her huge brown eyes a carbon copy of her dad’s. Then Grace stops the video and searches for another song.

Suddenly, her therapist slides the iPad out of Grace’s reach.

“You want ‘Slippery Fish,’” her therapist says. “I want you to tell me that.”

Grace turns to her talker: “Play music,” she types again.

The therapist attempts one more time to help Grace say more clearly which particular song she wants. Instead, Grace selects the symbols for two new words.

“Feel mad,” Grace’s talker declares.

Grace working with a therapist in one of their therapy rooms.

BLAKE FARRINGTON

There’s no denying how frustrating it can be for Grace to rely on other people to do everything for her, and how hard her family works to meet her constant needs.

Matt and Kristen can provide the therapy, equipment, medicines, and around-the-clock supervision that Grace needs to have a stable life. But that is not enough—not for Grace, who wants "Slippery Fish," nor for her parents, who want a cure.

So last summer, Wilsey raised money to bring the Vigeholms and the other NGLY1 families to Palo Alto, where they met with Grace’s doctors and the Grace Science Foundation researchers. One Japanese scientist, Takayuki Kamei, was overjoyed to meet two of the NGLY1 deficiency patients: “I say hello to their cells every morning,” he told their parents.

And because all of these families also want a cure, each also donated blood, skin, spit, stool, and urine to the world’s first NGLY1 deficiency biobank. In four days, scientists collected more NGLY1 deficiency data than had been collected in the entire five years since the disease was discovered. These patient samples, now stored at Stanford University and at Rutgers University, have been divvied up into more than 5,000 individual samples that will be distributed to academic and company researchers who wish to work on NGLY1 deficiency.

That same month, Wilsey closed a seed round of $7 million to start Grace Science LLC. His main backer, a veteran private equity investor, prefers not to be named. Like many in Silicon Valley, he's recently become attracted to health care by the promise of a so-called “double bottom line”: the potential both to make money and to do good by saving lives.

Wilsey is chief executive of the company and heavily involved in its scientific strategy. He’s looking for a head scientist with experience in gene therapy and in enzyme replacement therapy, which Mark Dant and John Crowley used to treat their sick children. Gene therapy now seems poised to take off after years of false starts; candidate cures for blood and nervous system disorders are speeding through clinical trials, and companies that use Crispr have raised more than $1 billion.

Wilsey doesn't know which of these strategies, if any, will save Grace. But he hopes his company will find an NGLY1 deficiency cure within five years. The oldest known NGLY1-deficient patient is in her 20s, but since nobody has been looking for these patients until now, it's impossible to know how many others—like Bertram—didn't make it that long.

“We don’t know what Grace’s lifespan is,” Wilsey says. “We’re always waiting for the other shoe to drop.”

But at 3 pm on this one November day, that doesn’t seem to matter.

School’s out, and Grace is seated atop a light chestnut horse named Ned. Five staff members lead Grace through a session of equine therapy. Holding herself upright on Ned’s back helps Grace develop better core strength and coordination.

Grace on her horse.

BLAKE FARRINGTON

Grace and Ned walk under a canopy of oak trees. Her face is serene, her usually restless legs still as Ned paces through late-afternoon sunshine. But for a little grace, there may be a cure for her yet.

Read more: https://www.wired.com/story/a-familys-race-to-cure-a-daughters-genetic-disease/

The Second Coming of Ultrasound

Before Pierre Curie met the chemist Marie Sklodowska; before they married and she took his name; before he abandoned his physics work and moved into her laboratory on Rue Lhomond where they would discover the radioactive elements polonium and radium, Curie discovered something called piezoelectricity. Some materials, he found—like quartz and certain kinds of salts and ceramics—build up an electric charge when you squeeze them. Sure, it’s no nuclear power. But thanks to piezoelectricity, US troops could locate enemy submarines during World War I. Thousands of expectant parents could see their baby’s face for the first time. And one day soon, it may be how doctors cure disease.

Ultrasound, as you may have figured out by now, runs on piezoelectricity. Applying voltage to a piezoelectric crystal makes it vibrate, sending out a sound wave. When the echo that bounces back is converted into electrical signals, you get an image of, say, a fetus, or a submarine. But in the last few years, the lo-fi tech has reinvented itself in some weird new ways.
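To put rough numbers on that pulse-echo idea, here is a minimal sketch (ours, not from the article) of how a scanner turns an echo delay into a depth estimate. It assumes the commonly cited figure of about 1,540 meters per second for the speed of sound in soft tissue; the function name and the 65-microsecond example are illustrative only.

```python
# Illustrative pulse-echo arithmetic; 1,540 m/s is a standard textbook value
# for the speed of sound in soft tissue, everything else is a made-up example.
SPEED_OF_SOUND_TISSUE = 1540.0  # meters per second

def echo_depth_cm(round_trip_seconds: float) -> float:
    """Estimate how deep the reflecting tissue is, given the time between
    sending a pulse and hearing its echo. The pulse travels there and back,
    so the one-way depth is half the round-trip distance."""
    return SPEED_OF_SOUND_TISSUE * round_trip_seconds / 2 * 100  # centimeters

# An echo returning after 65 microseconds implies a reflector about 5 cm deep,
# roughly the depth at which a fetus shows up on a scan.
print(f"{echo_depth_cm(65e-6):.1f} cm")  # -> 5.0 cm
```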

Researchers are fitting people’s heads with ultrasound-emitting helmets to treat tremors and Alzheimer’s. They’re using it to remotely activate cancer-fighting immune cells. Startups are designing swallowable capsules and ultrasonically vibrating enemas to shoot drugs into the bloodstream. One company is even using the shockwaves to heal wounds—stuff Curie never could have even imagined.

So how did this 100-year-old technology learn some new tricks? With the help of modern-day medical imaging, and lots and lots of bubbles.

Bubbles are what brought Tao Sun from Nanjing, China to California as an exchange student in 2011, and eventually to the Focused Ultrasound Lab at Brigham and Women’s Hospital and Harvard Medical School. The 27-year-old electrical engineering grad student studies a particular kind of bubble—the gas-filled microbubbles that technicians use to bump up contrast in grainy ultrasound images. Passing ultrasonic waves compress the bubbles’ gas cores, resulting in a stronger echo that pops out against tissue. “We’re starting to realize they can be much more versatile,” says Sun. “We can chemically design their shells to alter their physical properties, load them with tissue-seeking markers, even attach drugs to them.”

Nearly two decades ago, scientists discovered that those microbubbles could do something else: They could shake loose the blood-brain barrier. This impassable membrane is why neurological conditions like epilepsy, Alzheimer’s, and Parkinson’s are so hard to treat: 98 percent of drugs simply can’t get to the brain. But if you station a battalion of microbubbles at the barrier and hit them with a focused beam of ultrasound, the tiny orbs begin to oscillate. They grow and grow until they reach the critical size of 8 microns, and then, like some Grey Wizard magic, the blood-brain barrier opens—and for a few hours, any drugs that happen to be in the bloodstream can also slip in. Things like chemo drugs, or anti-seizure medications.

This is both super cool and not a little bit scary. Too much pressure and those bubbles can implode violently, irreversibly damaging the barrier.

That’s where Sun comes in. Last year he developed a device that could listen in on the bubbles and tell how stable they were. If he eavesdropped while playing with the ultrasound input, he could find a sweet spot where the barrier opens and the bubbles don’t burst. In November, Sun’s team successfully tested the approach in rats and mice, publishing their results in the Proceedings of the National Academy of Sciences.

“In the longer term we want to make this into something that doesn’t require a super complicated device, something idiot-proof that can be used in any doctor’s office,” says Nathan McDannold, co-author on Sun’s paper and director of the Focused Ultrasound Lab. He discovered ultrasonic blood-brain barrier disruption, along with biomedical physicist Kullervo Hynynen, who is leading the world’s first clinical trial evaluating its usefulness for Alzheimer’s patients at the Sunnybrook Research Institute in Toronto. Current technology requires patients to don special ultrasound helmets and hop in an MRI machine, to ensure the sonic beams go to the right place. For the treatment to gain any widespread traction, it’ll have to become as portable as the ultrasound carts wheeled around hospitals today.

More recently, scientists have realized that the blood-brain barrier isn’t the only tissue that could benefit from ultrasound and microbubbles. The colon, for instance, is pretty terrible at absorbing the most common drugs for treating Crohn’s disease, ulcerative colitis, and other inflammatory bowel diseases. So they’re often delivered via enemas—which, inconveniently, need to be left in for hours.

But if you send ultrasound waves through the colon, you could shorten that process to minutes. In 2015, pioneering MIT engineer Robert Langer and then-PhD student Carl Schoellhammer showed that mice treated with mesalamine and one second of ultrasound every day for two weeks were cured of their colitis symptoms. The method also worked to deliver insulin, a far larger molecule, into pigs.

Since then, the duo has continued to develop the technology within a start-up called Suono Bio, which is supported by MIT’s tech accelerator, The Engine. The company intends to submit its tech for FDA approval in humans sometime later this year.

Ultrasound sends pressure waves through liquid in the body, creating bubble-filled jets that can propel microscopic drug droplets like these into surrounding tissues.
Suono Bio

Instead of injecting manufactured microbubbles, Suono Bio uses ultrasound to make them in the wilds of the gut. They act like jets, propelling whatever is in the liquid into nearby tissues. In addition to its backdoor approach, Suono is also working on an ultrasound-emitting capsule that could work in the stomach for things like insulin, which is too fragile to be orally administered (hence all the needle sticks). But Schoellhammer says they have yet to find a limit on the kinds of molecules they can force into the bloodstream using ultrasound.

“We’ve done small molecules, we’ve done biologics, we’ve tried DNA, naked RNA, we’ve even tried Crispr,” he says. “As superficial as it may sound, it all just works.”

Earlier this year, Schoellhammer and his colleagues used ultrasound to deliver a scrap of RNA that was designed to silence production of a protein called tumor necrosis factor in mice with colitis. (And yes, this involved designing 20mm-long ultrasound wands to fit in their rectums.) Seven days later, levels of the inflammatory protein had decreased sevenfold and symptoms had dissipated.

Now, without human data, it’s a little premature to say that ultrasound is a cure-all for the delivery problems facing gene therapies using Crispr and RNA silencing. But these early animal studies do offer some insights into how the tech might be used to treat genetic conditions in specific tissues.

Even more intriguing though, is the possibility of using ultrasound to remotely control genetically-engineered cells. That’s what new research led by Peter Yingxiao Wang, a bioengineer at UC San Diego, promises to do. The latest craze in oncology is designing the T-cells of your immune system to better target and kill cancer cells. But so far no one has found a way to go after solid tumors without having the T-cells also attack healthy tissue. Being able to turn on T-cells near a tumor but nowhere else would solve that.

Wang’s team took a big step in that direction last week, publishing a paper that showed how you could convert an ultrasonic signal into a genetic one. The secret? More microbubbles.

This time, they coupled the bubbles to proteins on the surface of a specially designed T-cell. Every time an ultrasonic wave passed by, the bubble would expand and shrink, opening and closing the protein, letting calcium ions flow into the cell. The calcium would eventually trigger the T-cell to make a set of genetically encoded receptors, directing it to attack the tumor.

“Now we're working on figuring out the detection piece,” says Wang. “Adding another receptor so that we'll know when they've accumulated at the tumor site, then we'll use ultrasound to turn them on.”

In death, Pierre Curie was quickly eclipsed by Marie; she went on to win another Nobel, this time in chemistry. The discovery for which she had become so famous—radiation—would eventually take her life, though it would save the lives of so many cancer patients in the decades to follow. As ultrasound’s second act unfolds, perhaps her husband’s first great discovery will do the same.

Read more: https://www.wired.com/story/the-second-coming-of-ultrasound/

Why No Gadget Can Prove How Stoned You Are

If you’ve spent time with marijuana—any time at all, really—you know that the high can be rather unpredictable. It depends on the strain, its level of THC and hundreds of other compounds, and the interaction between all these elements. Oh, and how much you ate that day. And how you took the cannabis. And the position of the North Star at the moment of ingestion.

OK, maybe not that last one. But as medical and recreational marijuana use spreads across the United States, how on Earth can law enforcement tell if someone they’ve pulled over is too high to be driving, given all these factors? Marijuana is such a confounding drug that scientists and law enforcement are struggling to create an objective standard for marijuana intoxication. (Also, I’ll say this early and only once: For the love of Pete, do not under any circumstances drive stoned.)

Sure, the cops can take you back to the station and draw a blood sample and determine exactly how much THC is in your system. “It's not a problem of accurately measuring it,” says Marilyn Huestis, coauthor of a new review paper in Trends in Molecular Medicine about cannabis intoxication. “We can accurately measure cannabinoids in blood and urine and sweat and oral fluid. It's interpretation that is the more difficult problem.”

You see, different people handle marijuana differently. It depends on your genetics, for one. And how often you consume cannabis, because if you take it enough, you can develop a tolerance to it. A dose of cannabis that may knock amateurs on their butts could have zero effect on seasoned users—patients who use marijuana consistently to treat pain, for instance.

The issue is that THC—what’s thought to be the primary psychoactive compound in marijuana—interacts with the human body in a fundamentally different way than alcohol. “Alcohol is a water-loving, hydrophilic compound,” says Huestis, who sits on the advisory board for Cannabix, a company developing a THC breathalyzer.1 “Whereas THC is a very fat-loving compound. It's a hydrophobic compound. It goes and stays in the tissues.” The molecule can linger for up to a month, while alcohol clears out right quick.

But while THC may hang around in tissues, it starts diminishing in the blood quickly—really quickly. “It's 74 percent in the first 30 minutes, and 90 percent by 1.4 hours,” says Huestis. “And the reason that's important is because in the US, the average time to get blood drawn [after arrest] is between 1.4 and 4 hours.” By the time you get to the station to get your blood taken, there may not be much THC left to find. (THC tends to linger longer in the brain because it’s fatty in there. That’s why the effects of marijuana can last longer than THC is detectable in breath or blood.)

So law enforcement can measure THC, sure enough, but not always immediately. And they’re fully aware that marijuana intoxication is an entirely different beast than drunk driving. “How a drug affects someone might depend on the person, how they used the drug, the type of drug (e.g., for cannabis, you can have varying levels of THC between different products), and how often they use the drug,” California Highway Patrol spokesperson Mike Martis writes in an email to WIRED.

Accordingly, in California, where recreational marijuana just became legal, the CHP relies on other observable measurements of intoxication. If an officer does field sobriety tests like the classic walk-and-turn maneuver, and suspects someone may be under the influence of drugs, they can request a specialist called a drug recognition evaluator. The DRE administers additional field sobriety tests—analyzing the suspect’s eyes and blood pressure to try to figure out what drug may be in play.

The CHP says it’s also evaluating the use of oral fluid screening gadgets to assist in these drug investigations. (Which devices exactly, the CHP declines to say.) “However, we want to ensure any technology we use is reliable and accurate before using it out in the field and as evidence in a criminal proceeding,” says Martis.

Another option would be to test a suspect’s breath with a breathalyzer for THC, which startups like Hound Labs are chasing. While THC sticks around in tissues, it’s no longer present in your breath after about two or three hours. So if a breathalyzer picks up THC, that would suggest the stuff isn’t lingering from a joint smoked last night, but one smoked before the driver got in a car.

This could be an objective measurement of the presence of THC, but not much more. “We are not measuring impairment, and I want to be really clear about that,” says Mike Lynn, CEO of Hound Labs. “Our breathalyzer is going to provide objective data that potentially confirms what the officer already thinks.” That is, if the driver was doing 25 in a 40 zone and they blow positive for THC, evidence points to them being stoned.

But you might argue that even using THC to confirm inebriation goes too far. The root of the problem isn’t really about measuring THC, it’s about understanding the galaxy of active compounds in cannabis and their effects on the human body. “If you want to gauge intoxication, pull the driver out and have him drive a simulator on an iPad,” says Kevin McKernan, chief scientific officer at Medicinal Genomics, which does genetic testing of cannabis. “That'll tell ya. The chemistry is too fraught with problems in terms of people's individual genetics and their tolerance levels.”

Scientists are just beginning to understand the dozens of other compounds in cannabis. CBD, for instance, may dampen the psychoactive effects of THC. So what happens if you get dragged into court after testing positive for THC, but the marijuana you consumed was also a high-CBD strain?

“It significantly compounds your argument in court with that one,” says Jeff Raber, CEO of the Werc Shop, a cannabis lab. “I saw this much THC, you're intoxicated. Really, well I also had twice as much CBD, doesn't that cancel it out? I don't know, when did you take that CBD? Did you take it afterwards, did you take it before?

“If you go through all this effort and spend all the time and money and drag people through court and spend taxpayer dollars, we shouldn't be in there with tons of question marks,” Raber says.

But maybe one day marijuana roadside testing won’t really matter. “I really think we're probably going to see automated cars before we're going to see this problem solved in a scientific sense,” says Raber. Don’t hold your breath, then, for a magical device that tells you you’re stoned.

1 UPDATE: 1/29/18, 2:15 pm ET: This story has been updated to disclose Huestis' affiliation with Cannabix.

Read more: https://www.wired.com/story/why-no-gadget-can-prove-how-stoned-you-are/

How Dirt Could Save Humanity From an Infectious Apocalypse

Nobody scours Central Park looking for drugs quite the way Sean Brady does. On a sweltering Thursday, he hops out of a yellow cab, crosses Fifth Avenue, and scurries up a dirt path. Around us, the penetrating churn of a helicopter and the honk of car horns filter through the trees. Brady, a fast-talking chemist in his late 40s who sports a graying buzz cut and rimless glasses, has a wry, self-deprecating humor that belies the single-minded determination of his quest. He walks along restlessly. Near the lake, we head up a rock slope and into a secluded area. Brady bends over and picks up a pinch of dusty soil. “Out of that bit of soil,” he says, “you can get enough to do DNA analysis.” He holds it in his fingertips momentarily, and then tosses it. Bits of glassy silica glisten in the sunlight.

Brady is creating drugs from dirt. He’s certain that the world’s topsoils contain incredible, practically inexhaustible reservoirs of undiscovered antibiotics, the chemical weapons bacteria use to fend off other microorganisms. He’s not alone in this thinking, but the problem is that the vast majority of bacteria cannot be grown in the lab—a necessary step in cultivating antibiotics.

Brady has found a way around this roadblock, which opens the door to all those untapped bacteria that live in dirt. By cloning DNA out of a kind of bacteria-laden mud soup, and reinstalling these foreign gene sequences into microorganisms that can be grown in the lab, he’s devised a method for discovering antibiotics that could soon treat infectious diseases and fight drug-resistant superbugs. In early 2016, Brady launched a company called Lodo Therapeutics (lodo means mud in Spanish and Portuguese) to scale up production and ultimately help humanity outrun infectious diseases nipping at our heels. Some colleagues call his approach “a walk in the park.” Indeed, his lab recently dispatched two groups of student volunteers to collect bags full of dirt at 275 locations around New York City.

Sean Brady is on a quest to revitalize antibiotic discovery.

Tim Schutsky for WIRED

We’re retracing their path back toward his lab, our shoes crunching down on potential cures for nearly any ailment imaginable. “It’s pretty amazing, right?” Brady says, drawing his words out. “Right here we can find all … the … drugs … in … the world. Pretty cool, I must say.”

At exactly the same time Brady and I are walking around Central Park, a 70-year-old woman arrives at a hospital in Reno, Nevada, with an infection no doctor can treat. The woman had fallen during a trip to India, and a pocket of fluid developed near her hip. She flew back to the US, and then, two weeks later, she was dead. The Centers for Disease Control and Prevention reports that the organism responsible for her death could evade 26 antibiotic drugs. The culprit, pan-resistant Klebsiella pneumoniae, is not the only superbug overpowering humanity’s defenses; it is part of a family known as carbapenem-resistant Enterobacteriaceae. The carbapenems are drugs of last resort, and the CDC considers organisms that evade these antibiotics to be nightmare bacteria.

One problem with antibiotic resistance is that, for most people, it remains abstract—right now its lethal impact is relatively small. Few of us have lost loved ones—yet. (The headline-grabbing methicillin-resistant Staphylococcus aureus, or MRSA, kills 20,000 people a year in the US, compared to the 600,000 who succumb to cancer.) So it’s difficult to envision a future that resembles the pre-antibiotic past—an era of untreatable staph, strep, tuberculosis, leprosy, pneumonia, cholera, diphtheria, scarlet and puerperal fevers, dysentery, typhoid, meningitis, gas gangrene, and gonorrhea.

But that’s the future we are headed for. The routine use and reckless misuse of antibiotics in humans and animals accelerate resistance: We’re rewinding to a world where death begins in childbirth, where premature babies die, where newborns go blind from gonorrhea. Routine injuries become life-threatening infections. You could lose a limb, or your life, from a careless slip with a paring knife or an accidental fall in India. The risks of organ transplants and medical implants would outweigh any potential benefit. Go in for routine dental surgery and end up in a body bag. Explosive viral epidemics, such as the flu, prove especially lethal when they tag team with bacterial infections like strep. This is not the coming plague. It’s already upon us, and it spells the end of medicine as we know it. And that’s why Brady’s quest to revitalize antibiotic discovery is so crucial.

As a result of his calls for people from all over to send him soil, Brady keeps an entire room filled with Ziplock bags of dirt.

Tim Schutsky for WIRED

Brady sometimes describes his work as a kind of archeological dig: He is examining the remnants of a microbial civilization.

Tim Schutsky for WIRED

Since 1939, when René Dubos, a researcher at Rockefeller University, smeared dirt across a Petri plate and isolated the antibiotic gramicidin, the search for antibiotics has largely been culture dependent: It’s limited to the finite percentage of bacteria and fungi that grow in the laboratory. If the chance of finding a new antibiotic in a random soil screen was once one in 20,000, by some estimates the odds have dwindled to less than one in a billion. All the easy ones have already been found.

Historically, it’s a search riddled with accidental discoveries. The fungal strain that was used to manufacture penicillin turned up on a moldy cantaloupe; quinolones emerged from a bad batch of quinine; microbiologists first isolated bacitracin, a key ingredient in Neosporin ointment, from an infected wound of a girl who had been hit by a truck. Other antibiotics turned up in wild, far-flung corners of the globe: Cephalosporin came from a sewage pipe in Sardinia; erythromycin, the Philippines; vancomycin, Borneo; rifampicin, the French Riviera; rapamycin, Easter Island. By persuading the right microbes to grow under the right condition, we unearthed medicinal chemistry that beat back our own microscopic enemies. But despite technological advances in robotics and chemical synthesis, researchers kept rediscovering many of the same easy-to-isolate antibiotics, earning the old-school method a derisive nickname: “grind and find.”

That’s why Brady and others turned to metagenomics—the study of all the genetic information extracted from a given environment. The technique originated in the late 1980s, when microbiologists began cloning DNA directly out of seawater and soil. Extracted and cut up into chunks, this environmental DNA could be maintained in the lab by inserting the foreign gene fragments into bacteria such as E. coli (thereby creating what’s known as an artificial chromosome). These clones contained libraries, a living repository for all the genomes of all the microbes found in a particular environment.

Using high-throughput DNA sequencing, scientists then searched these libraries and their census turned up such astronomical biodiversity that they began adding new branches to the tree of life. By some estimates, the earth harbors more than a trillion individual microbe species. A single gram of soil alone can contain 3,000 bacterial species, each with an average of four million base-pairs of DNA spooled around a single circular chromosome. The next steps followed a simple logic: Find novel genetic diversity, and you’ll inevitably turn up new chemical diversity.
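To get a feel for those figures, the arithmetic below simply multiplies them out: 3,000 species at roughly four million base pairs apiece comes to about 12 billion base pairs of distinct genomic DNA in a single gram of soil, roughly four times the length of a human genome (taking the usual estimate of about three billion base pairs, which is our added assumption).

```python
# Back-of-the-envelope arithmetic using the figures quoted above.
species_per_gram = 3_000            # bacterial species in one gram of soil
base_pairs_per_species = 4_000_000  # average bacterial chromosome size
human_genome_bp = 3_000_000_000     # approximate human genome length (assumed)

soil_bp = species_per_gram * base_pairs_per_species
print(f"{soil_bp:,} base pairs of distinct DNA per gram")    # 12,000,000,000
print(f"~{soil_bp / human_genome_bp:.0f}x the human genome")  # ~4x
```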

At Lodo, chemists extract and purify organic molecules, looking for new chemical structures and, perhaps, that one perfect molecule which could save millions of lives.

Tim Schutsky for WIRED

In 1998, Brady was part of a team that laid out a straightforward strategy for isolating DNA from the dirt-dwelling bugs, by mixing mud with detergent, inserting gene fragments into E. coli, and, finally, plating clones into Petri dishes to see what molecules they produced. By the time Brady set up his own lab at Rockefeller University, in 2006, he’d created a handful of novel compounds. Some had anticancer properties; others acted as antibiotics. He had studied the DNA plucked out of a tank filled with bromeliads in Costa Rica and produced palmitoylputrescine, an antibiotic that was effective in vitro against a resistant form of B. subtilis bacteria. Brady came to realize that he did not need to trek to some pristine or remote ecosystem to explore the world’s biodiversity. The requisite material for building new drugs could be found much closer to home.

All the while, Brady watched as the pace of antibiotic resistance eclipsed the faltering pace of discovery. Much of that has to do with the pharmaceutical industry’s bottom line. Taking a novel drug through clinical testing and human trials takes, on average, about 10 years and several billion dollars. At best one in five new drugs succeeds, and so the financial rewards are mismatched with the immense value antibiotics provide to society. Some of this comes down to the drug’s nature and activity: The more we use antibiotics, the less effective they become; the more selective pressures we apply, the more likely resistant strains will emerge.

And so antibiotics that still work against the deadliest pathogens, such as the carbapenems, are kept as a last resort for when all else fails. Gravely ill patients taking last-line antibiotics can end up dead or they can end up cured; either way, they’re not repeat customers, which over the long term adds up to a negligible or negative return on investment. Waiting until the market for these life-saving antibiotics reaches critical mass for profitability is a recipe for catastrophe. As Richard Ebright, a researcher at Rutgers, explains, “Unfortunately, at that point, you will have 10 million people dying for the next decade while you’re rebooting the system.” By some estimates, antibiotic drugs make up less than 1.5 percent of compounds in development. According to the Pew Charitable Trust, fewer than half the drugs being developed address the high-priority pathogens, including drug-resistant forms of TB and staph. These are the world’s deadliest diseases, and they are at the top of Brady’s list of targets.

Bacteria proliferate in a liquid broth that often resembles the color of Yoo-hoo and gives off an earthy smell, like a freshly dug hole in the ground.

Tim Schutsky for WIRED

Lodo was founded with the goal of bringing life-saving medications to patients in the next 10 or 20 years.

Tim Schutsky for WIRED

Three years ago, Brady got a cold call from the Bill and Melinda Gates Foundation. On the line was Trevor Mundel, a former pharmaceutical executive who’s now the organization’s president of global health. The foundation wants to find drugs that treat TB, a disease that kills two million people a year, rivaling AIDS as the leading cause of death worldwide. TB used to be treatable with a triple-antibiotic cocktail that included rifampicin. Rif, as it’s known, was discovered almost 50 years ago, and over time the bacterium causing TB has developed a resistance. Intrigued by Brady’s “science fiction approach,” Mundel asked Brady if he could come up with a couple of new molecules that would be effective against TB.

Brady is focused on finding analogs, which are slight tweaks or modifications to the chemical structure of drugs that already exist. (Think of it as a variation on a familiar theme—a riff on rif.) Searching through metagenomic libraries Brady created from soils, he could see the different ways nature evolved to make rif. He looked for a familiar pattern: the gene clusters that created something similar to the original rif molecule, only with a chemical bond in a slightly different place, or an additional atom.

Find these analogs, and we’d once again be able to outwit Mycobacterium tuberculosis and effectively treat TB. Within six months, Brady convincingly demonstrated that he could find rif analogs as well as variants of the antibiotics vancomycin and daptomycin, which have also become increasingly ineffective because of bacterial resistance. The foundation set up a lunch meeting for him with Bill Gates, and the following January, with $17 million in venture capital from the Gates Foundation and Seattle life sciences investment outfit Accelerator, Brady founded his company.

On a bright clear day in September, Brady brings me up to Lodo’s office on the eighth floor of a glass-fronted tower at the Alexandria Center for Life Science. We pass a small room with a freezer and two shaker incubators the size of pizza ovens that warm flasks filled with bacteria, and he leads me into a pristine lab overlooking Bellevue Hospital. Ten people work at Lodo. Eleven if you count the robot. The automated Perkin-Elmer workstation, large enough to crawl inside, speeds up the discovery process by searching metagenomic libraries and plucking out the clones containing a target sequence, almost like a precision mechanical claw. Work that once took technicians and post-docs six months to a year to complete can now be accomplished in a week. That speed is already paying off. A chart on the wall lists at least 30 potential antibiotics Lodo is in the process of generating and characterizing this week alone. Brady recently identified one that cured MRSA in mice.

Brady circles the robot, hands in his pockets. The machine has been acting up. Its arms stand motionless. The process begins with soil, which arrives from donors and volunteers. Brady’s team then reduces dirt to its constituent DNA and clones the gene fragments from unculturable organisms into bacteria, which are stored in rectangular well plates the size of a brick—the so-called libraries. The challenging part is searching for a target, since all the genetic fragments are jumbled up, almost as if someone’s haphazardly tossed thousands of jigsaw pieces into a box. “So we have this very big mixture,” Brady says, “and it starts with 10 million clones and we divide it into a subset of pools.”
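As a rough illustration of that pool-and-subdivide search, here is a short sketch. It is our simplification, not Lodo's actual pipeline: the `contains_target` function stands in for whatever assay (a PCR-style check on pooled DNA, say) flags a pool as carrying the gene cluster of interest, and the pool sizes are arbitrary.

```python
# Toy version of screening a metagenomic library by pooling: assay big pools
# first, discard the ones without the target sequence, and subdivide only
# the positive pools until single clones remain. All names are hypothetical.
def find_target_clones(clones, contains_target, pool_size=10_000):
    """Return the individual clones that carry the target gene cluster."""
    pools = [clones[i:i + pool_size] for i in range(0, len(clones), pool_size)]
    hits = []
    for pool in pools:
        if not contains_target(pool):   # one assay rules out the whole pool
            continue
        if len(pool) == 1:
            hits.extend(pool)           # narrowed all the way down to one clone
        else:
            hits.extend(find_target_clones(pool, contains_target,
                                           max(1, pool_size // 10)))
    return hits
```

The payoff is in the number of assays: finding a handful of positives among 10 million clones this way takes on the order of a few thousand pool tests rather than millions of individual ones.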

A single gram of soil alone can contain 3,000 bacterial species.

Tim Schutsky for WIRED

Lodo’s bioinformatics team uses algorithms to predict which fragments in which libraries are likely to synthesize which molecules, so that, in the end, the robot recovers the ones with the gene clusters needed to create antibiotic molecules. A smile forms at the corners of Brady’s mouth. “There are many other steps downstream for engineering those things,” he says, “but that’s the real novelty of what we do here.”

Brady sometimes describes this search as a kind of archeological dig: He is examining the remnants of a microbial civilization, poring over their genetic instruction manual to figure out how to build a specific aspect of the society. “If you’re doing drug discovery,” he says, “you don’t have to know what’s going on in the rest of society—how they built their huts or their canoes—if we’re going to say that antibiotics are weapons, you just need to figure out that information, which ones encode antibiotics, and then you have to go one step further and build that antibiotic.”

To do so, Lodo’s team of molecular biologists manipulate DNA and grow the clones in heated Erlenmeyer flasks. The bacteria proliferate in a liquid broth that often resembles the color of Yoo-hoo and gives off an earthy smell, like a freshly dug hole in the ground. In an adjacent room, chemists extract and purify the resulting organic molecules, looking for new chemical structures and, perhaps, that one perfect molecule which could save millions of lives.

In recent years, researchers have been trying to reinvigorate antibiotic discovery in several ways. A team from Northeastern University developed a specialized plastic chip that allowed them to culture a broader diversity of bacteria in the field, which led to the discovery of teixobactin from a meadow in Maine. Nearly everyone acknowledges that the promise of metagenomic mining has yet to materialize. As Jill Banfield, a biochemist at UC Berkeley, explains, the applications thus far have been “fairly limited.”

Warp Drive Bio, in Cambridge, Massachusetts, is one of the few companies that employs similar techniques; Brady once sat on its scientific advisory board. Greg Verdine, a company cofounder and chemist at Harvard, is confident that a DNA-directed “genomic search engine” will turn up antibiotics. “If you brought me the flower pot,” he says, “I guarantee that I could find novel antibiotics there.” Verdine has focused more narrowly on existing culturable bacteria. He argues that, by cloning DNA out of uncultured bacteria, Brady may be making an already difficult task “unnecessarily complicated.”

Several of the biotech firms that first attempted to use metagenomics to discover new drugs failed. “The big idea was in the air,” says Jon Clardy, who served as Brady’s doctoral advisor and is now at Harvard. “But I think that Sean was the first person to reduce it into practice in a useful, robust way.” Clardy says one remaining challenge is to systematically predict what genes encode for molecules with a particular function. Put another way, no one knows exactly where to find nature’s instruction manual for disarming deadly infectious organisms. “That is a huge bottleneck,” he says. “Sean has ideas about how to do that, but that’s very different than the problems he solved.”

Brady takes a seat in a conference room overlooking the East River. He admits that he never imagined setting up a company on prime real estate in Manhattan. The Alexandria Center, a “big fancy building,” has a beer bar and a restaurant run by a celebrity chef. Brady sees himself as a do-gooder, an obsessively humble guy whose pipe dream involves setting up drug discovery pipelines in every country. He wonders about a time when resistant strains escape hospitals and start disrupting public transit—a scenario that is already playing out with TB. Lodo was founded on the idea that another future is possible, and that means bringing life-saving medications to patients in the next 10 or 20 years. Brady recently made his feelings clear at a company-wide meeting: “The purpose of being here is not anything besides saving people’s lives.”

An email blast went out from Lodo in September. “We need your dirt,” it said. Brady keeps an entire room filled with the rainbow of bags that resulted—dull gray, reddish, dark brown. A few summers ago, he hired a rock climber to ship him bags of dirt. Hundreds of additional volunteers have since scooped up a gallon Ziplock’s worth of soil. “We’re not panning for gold in the stream in your backyard,” Brady says. “We’re taking out a little bit of soil that otherwise you’re never going to use.” In other words, humanity’s next best hope could come from a pinch of something that turns out to be priceless—and as common as dirt.

Peter Andrey Smith (@petersm_th) is a writer based in New York.

Read more: https://www.wired.com/story/how-dirt-could-save-humanity-from-an-infectious-apocalypse/

Think twice about buying ‘squashed-faced’ breeds, vets urge dog-lovers

British Veterinary Association launches #breedtobreathe campaign to highlight serious health issues breeds such as pugs and French bulldogs are prone to

Vets have urged dog-lovers to think twice about buying squashed-faced dogs such as pugs and French bulldogs, after many would-be owners were found to be unaware of the health problems such breeds often experience.

According to data from the Kennel Club, registrations of squashed-faced, or brachycephalic, breeds have shot up in recent years: while just 692 French bulldogs were registered in 2007, registrations reached 21,470 in 2016.

Certain DNA variations in dogs are linked to a short skull shape. The animals' baby-like faces, with large, round, wide-set eyes and flat noses, are known to be a key factor in why owners choose such breeds: over time those traits have been bred for, and in some cases have been taken to extremes.

This selective breeding, which prioritises appearance over health, has left the breeds prone to skin disorders, eye ulcers and breathing difficulties, among other problems.

Now the British Veterinary Association (BVA) has launched a campaign dubbed #breedtobreathe to draw attention to the issues, revealing that a new survey of 671 vets found 75% of owners were unaware of the health problems of brachycephalic breeds before they chose their squashed-faced dog. Moreover the vets said just 10% of owners could spot health problems related to such breeds, with many thinking that problems including snorting were normal for such dogs.

The survey also revealed that 49% of vets thought advertising and social media were among the reasons behind the surge in ownership of these dogs, while 43% said celebrity ownership was one of the driving factors.

“We find that our veterinary surgeons are finding increasing numbers of flat-faced dogs are coming into their practices with problems which are related to the way these animals are made,” said John Fishwick, president of the BVA. “One of the things that is causing this increase that we have seen over the last few years appears to be celebrity endorsements and their use in advertising.”

Among those criticised by the BVA are pop star Lady Gaga, who is often photographed with her French bulldogs, and YouTube star Zoella, whose pug features in her videos. Big brands are also targeted; the organisation revealed that Heinz, Costa and Halifax have all agreed to avoid using squashed-faced dogs in future advertising.

Q&A

What sort of health problems do brachycephalic dogs have?

Breeds such as pugs, bulldogs, French bulldogs and boxers are prone to a range of health problems, many of which are related to their short skulls and other characteristic features.

Breathing problems

Brachycephalic breeds often have narrow nostrils, deformed windpipes and excess soft tissue inside their nose and throat, all of which can lead to breathing difficulties that can in turn cause heart problems. The dogs are also prone to overheating.

Dental problems

The shortened upper jaws of squashed-faced dogs mean their teeth are crowded, increasing the risk of tooth decay and gum disease.

Skin disorders

The deep folds around the dogs’ faces, such as the characteristic wrinkles of a bulldog, also bring problems, as they are prone to yeast and bacterial infections.

Eye conditions

The head shape and prominent eyes of brachycephalic breeds mean the dogs are at risk of eye conditions including ulcers. Among the causes of eye ulcers is that brachycephalic dogs often cannot blink properly and have problems with tear production, while eyelashes or nasal folds can also rub the surface of their eyes.

Birth problems

Brachycephalic breeds can have difficulties giving birth naturally because of the disproportionate size of the puppies’ heads, meaning that caesarean sections are often necessary. According to recent research, more than 80% of Boston terrier, bulldog and French bulldog puppies in the UK are born in this manner.

The BVA is urging people to send letters to brands asking them not to use such dogs in promotional material. The campaign also aims to raise awareness of potential health problems of squashed-faced breeds, and stresses the need for vets, owners, dog-show judges, breeders, researchers and others to work together to make sure the breeds are healthy.

“They are lovely breeds of dog, they are very friendly and they make good pets,” said Fishwick. “The problem is a lot of them are really struggling, and we really want to make sure people understand this and encourage them to think about either going for another breed or a healthier version of these breeds – ones which have been bred to have a longer snout or possibly even cross breeds.”

The BVA warned that without action, the number of corrective surgeries needed on such animals will soar.

Caroline Kisko, secretary of the Kennel Club, urged owners to do their homework before buying a squashed-faced dog. “As soon as you get a market drive then the puppy farms just say ‘ooh, we’ll breed those now’,” she said.

But Dr Rowena Packer of the Royal Veterinary College (RVC) said the problem is not confined to new owners, with recent research from the RVC finding that more than 90% of pug, French bulldog and English bulldog owners said they would own another such dog in the future. “It is not just going to be a flash in the pan that we see this huge surge and then it goes away,” she said.

It has been suggested that vets may be unwilling to speak out for fear that owners will simply take their pets elsewhere, damaging business.

But Packer disagrees, saying: “I don’t think any vet went into [the job] hoping that their salary would be paid by the suffering of dogs who have been bred to effectively have problems.”

Dr Crina Dragu, a London-based veterinary surgeon, noted that not all squashed-faced dogs have problems. “You see the ones that have happy lives, normal lives, and you see the ones that the minute they are born they spend their entire lives as though [they are being smothered] with a pillow all day, every day,” she said.

Packer said prospective owners should be aware squashed-faced dogs can be an expensive commitment: “I think they need to be aware of both the emotional and financial hardship that they could be putting themselves and their dogs through for potentially five to 10 years.”

Read more: https://www.theguardian.com/lifeandstyle/2018/jan/05/think-twice-about-buying-squashed-faced-breeds-vets-urge-dog-lovers

On its 100th birthday in 1959, Edward Teller warned the oil industry about global warming

Benjamin Franta: Somebody cut the cake – new documents reveal that American oil writ large was warned of global warming at its 100th birthday party.

It was a typical November day in New York City. The year: 1959. Robert Dunlop, 50 years old and photographed later as clean-shaven, hair carefully parted, his earnest face donning horn-rimmed glasses, passed under the Ionian columns of Columbia University’s iconic Low Library. He was a guest of honor for a grand occasion: the centennial of the American oil industry.

Over 300 government officials, economists, historians, scientists, and industry executives were present for the “Energy and Man” symposium, organized by the American Petroleum Institute and the Columbia Graduate School of Business, and Dunlop was to address the entire congregation on the prime mover of the last century – energy – and its major source: oil. As President of the Sun Oil Company, he knew the business well, and as a director of the American Petroleum Institute – the industry’s largest and oldest trade association in the land of Uncle Sam – he was responsible for representing the interests of all those many oilmen gathered around him.

Four others joined Dunlop at the podium that day, one of whom had made the journey from California – and Hungary before that. The nuclear weapons physicist Edward Teller had, by 1959, become ostracized by the scientific community for betraying his colleague J. Robert Oppenheimer, but he retained the embrace of industry and government. Teller’s task that November fourth was to address the crowd on energy patterns of the future, and his words carried an unexpected warning:

Ladies and gentlemen, I am to talk to you about energy in the future. I will start by telling you why I believe that the energy resources of the past must be supplemented. First of all, these energy resources will run short as we use more and more of the fossil fuels. But I would […] like to mention another reason why we probably have to look for additional fuel supplies. And this, strangely, is the question of contaminating the atmosphere. [….] Whenever you burn conventional fuel, you create carbon dioxide. [….] The carbon dioxide is invisible, it is transparent, you can’t smell it, it is not dangerous to health, so why should one worry about it?

Carbon dioxide has a strange property. It transmits visible light but it absorbs the infrared radiation which is emitted from the earth. Its presence in the atmosphere causes a greenhouse effect [….] It has been calculated that a temperature rise corresponding to a 10 per cent increase in carbon dioxide will be sufficient to melt the icecap and submerge New York. All the coastal cities would be covered, and since a considerable percentage of the human race lives in coastal regions, I think that this chemical contamination is more serious than most people tend to believe.

How, precisely, Mr. Dunlop and the rest of the audience reacted is unknown, but it’s hard to imagine this being welcome news. After his talk, Teller was asked to summarize briefly “the danger from increased carbon dioxide content in the atmosphere in this century.” The physicist, as if considering a numerical estimation problem, responded:

At present the carbon dioxide in the atmosphere has risen by 2 per cent over normal. By 1970, it will be perhaps 4 per cent, by 1980, 8 per cent, by 1990, 16 per cent [about 360 parts per million, by Teller’s accounting], if we keep on with our exponential rise in the use of purely conventional fuels. By that time, there will be a serious additional impediment for the radiation leaving the earth. Our planet will get a little warmer. It is hard to say whether it will be 2 degrees Fahrenheit or only one or 5.

But when the temperature does rise by a few degrees over the whole globe, there is a possibility that the icecaps will start melting and the level of the oceans will begin to rise. Well, I don’t know whether they will cover the Empire State Building or not, but anyone can calculate it by looking at the map and noting that the icecaps over Greenland and over Antarctica are perhaps five thousand feet thick.
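Teller’s projection is simple doubling arithmetic. As a minimal sketch of that arithmetic, assuming a pre-industrial “normal” of roughly 310 parts per million – an assumption of mine, not a figure from the talk – the 16 per cent excess he projected for 1990 works out to about the 360 ppm bracketed above:

```python
# A rough check of Teller's doubling-per-decade projection. The ~310 ppm
# "normal" baseline is assumed here for illustration; it is not stated in the talk.
baseline_ppm = 310

excess = 0.02  # 2 per cent over normal in 1959, per Teller
for year in (1959, 1970, 1980, 1990):
    print(f"{year}: ~{excess:.0%} over normal, roughly {baseline_ppm * (1 + excess):.0f} ppm")
    excess *= 2

# The 16 per cent excess projected for 1990 comes to about 360 ppm,
# matching the figure attributed to Teller's accounting.
```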

And so, at its hundredth birthday party, American oil was warned of its civilization-destroying potential.

Talk about a buzzkill.

How did the petroleum industry respond? Eight years later, on a cold, clear day in March, Robert Dunlop walked the halls of the U.S. Congress. The 1967 oil embargo was weeks away, and the Senate was investigating the potential of electric vehicles. Dunlop, testifying now as the Chairman of the Board of the American Petroleum Institute, posed the question, “tomorrow’s car: electric or gasoline powered?” His preferred answer was the latter:

We in the petroleum industry are convinced that by the time a practical electric car can be mass-produced and marketed, it will not enjoy any meaningful advantage from an air pollution standpoint. Emissions from internal-combustion engines will have long since been controlled.

Dunlop went on to describe progress in controlling carbon monoxide, nitrous oxide, and hydrocarbon emissions from automobiles. Absent from his list? The pollutant he had been warned of years before: carbon dioxide.

We might surmise that the odorless gas simply passed under Robert Dunlop’s nose unnoticed. But less than a year later, the American Petroleum Institute quietly received a report on air pollution it had commissioned from the Stanford Research Institute, and its warning on carbon dioxide was direct:

Significant temperature changes are almost certain to occur by the year 2000, and these could bring about climatic changes. […] there seems to be no doubt that the potential damage to our environment could be severe. […] pollutants which we generally ignore because they have little local effect, CO2 and submicron particles, may be the cause of serious world-wide environmental changes.

Thus, by 1968, American oil held in its hands yet another notice of its products’ world-altering side effects, one affirming that global warming was not just cause for research and concern, but a reality needing corrective action: “Past and present studies of CO2 are detailed,” the Stanford Research Institute advised. “What is lacking, however, is […] work toward systems in which CO2 emissions would be brought under control.”

This early history illuminates the American petroleum industry’s long-running awareness of the planetary warming caused by its products. Teller’s warning, revealed in documentation I found while searching archives, is another brick in a growing wall of evidence.

In the closing days of those optimistic 1950s, Robert Dunlop may have been one of the first oilmen to be warned of the tragedy now looming before us. By the time he departed this world in 1995, the American Petroleum Institute he once led was denying the climate science it had been informed of decades before, attacking the Intergovernmental Panel on Climate Change, and fighting climate policies wherever they arose.

This is a history of choices made, paths not taken, and the fall from grace of one of the greatest enterprises – oil, the prime mover – ever to tread the earth. Whether it’s also a history of redemption, however partial, remains to be seen.

American oil’s awareness of global warming and its conspiracy of silence, deceit, and obstruction goes further than any one company. It extends beyond (though includes) ExxonMobil. The industry is implicated to its core by the history of its largest representative, the American Petroleum Institute.

It is now too late to stop a great deal of change to our planets climate and its global payload of disease, destruction, and death. But we can fight to halt climate change as quickly as possible, and we can uncover the history of how we got here. There are lessons to be learned, and there is justice to be served.

Benjamin Franta (@BenFranta) is a PhD student in history of science at Stanford University who studies the history of climate change science and politics. He has a PhD in applied physics from Harvard University and is a former research fellow at the Belfer Center for Science and International Affairs at the Harvard Kennedy School of Government.

Read more: https://www.theguardian.com/environment/climate-consensus-97-per-cent/2018/jan/01/on-its-hundredth-birthday-in-1959-edward-teller-warned-the-oil-industry-about-global-warming

The Firestorm This Time: Why Los Angeles Is Burning

The Thomas Fire spread through the hills above Ventura, in the northern greater Los Angeles megalopolis, with the speed of a hurricane. Driven by 50 mph Santa Ana winds—bone-dry katabatic air moving at freeway speeds out of the Mojave desert—the fire transformed overnight from a 5,000-acre burn in a charming chaparral-lined canyon to an inferno the size of Orlando, Florida, that only stopped spreading because it reached the Pacific. Tens of thousands of people evacuated their homes in Ventura; 150 buildings burned and thousands more along the hillside and into downtown are threatened.

That isn’t the only part of Southern California on fire. The hills above Valencia, where Interstate 5 drops down out of the hills into the city, are burning. Same for a hillside of the San Gabriel Mountains, overlooking the San Fernando Valley. And the same, too, near the Mount Wilson Observatory, and on a hillside overlooking Interstate 405—the flames in view of the Getty Center and destroying homes in the rich-people neighborhoods of Bel-Air and Holmby Hills.

And it’s all horribly normal.

Southern California’s transverse ranges—the mostly east-west mountains that slice up and define the greater Los Angeles region—were fire-prone long before there was a Los Angeles. They’re a broken fragment of tectonic plate, squeezed up out of the ground by the Pacific Plate on one side and the North American on the other, shaped into the San Gabriels, the Santa Monica Mountains, the San Bernardino Mountains. Even the Channel Islands off Ventura’s coast are the tippy-tops of a transverse range.

Santa Anas notwithstanding, the transverse ranges usually keep cool coastal air in and arid desert out. Famously, they’re part of why the great California writer Carey McWilliams called the region “an island on the land.” The hills provided hiding places for cowboy crooks, hiking for the naturalist John Muir, and passes both hidden and mapped for natives and explorers coming from the north and east.

With the growth and spread of Los Angeles, fire became even more part of Southern California life. “It’s almost textbook. It’s the end of the summer drought, there has not been a lot of rain this year, and we’ve got Santa Ana winds blowing,” says Alexandra Syphard, an ecologist at the Conservation Biology Institute. “Every single year, we have ideal conditions for the types of wildfires we’re experiencing. What we don’t have every single year is an ignition during a wind event. And we’ve had several.”

"The problem is not fire. The problem is people in the wrong places."

Alexandra Syphard, Conservation Biology Institute

Before humans, wildfires happened maybe once or twice a century, long enough for fire-adapted plant species like chaparral to build up a bank of seeds that could come back after a burn. Now, with fires more frequent, native plants can’t keep up. Exotic weeds take root. “A lot of Ventura County has burned way too frequently,” says Jon Keeley, a research ecologist with the US Geological Survey at the Sequoia and Kings Canyon Field Station. “We’ve lost a lot of our natural heritage.”

Fires don’t burn like this in Northern California. That’s one of the things that makes the island on the land an island. Most wildfires in the Sierra Nevadas and northern boreal forests are slower, smaller, and more easily put out, relative to the south. (The Napa and Sonoma fires this year were more like southern fires—wind-driven, outside the forests, and near or amid buildings.) Trees buffer the wind and burn less easily than undergrowth. Keeley says northern mountains and forests are “flammability-limited ecosystems,” where fires only get big if the climate allows it—higher temperatures and dryer conditions providing more fuel. Climate change makes fires there more frequent and more severe.

Southern California, on the other hand, is an “ignition-limited ecosystem.” It’s always a tinderbox. The canyons that cut through the transverse ranges align pretty well with the direction of the Santa Ana winds; they turn into funnels. “Whether or not you get a big fire event depends on whether humans ignite a fire,” he says.

And there are just a lot more humans in Southern California these days. In 1969 Ventura County’s population was 369,811. In 2016 it was 849,738—a faster gain than the state as a whole. In 1970 Los Angeles County had 7,032,000 people; in 2015 it was 9,827,000. “If you look historically at Southern California, the frequency of fire has risen along with population growth,” Keeley says. Though even that has a saturation point. The number of fires—though not necessarily their severity—started declining in the 1980s, maybe because of better fire fighting, and maybe because with more people and more buildings and roads and concrete, there’s less to burn.

As Syphard told me back at the beginning of this year’s fire season, “The problem is not fire. The problem is people in the wrong places.”

Like most fresh-faced young actors in Southern California, the idea of dense development is a relatively recent arrival. Most of the buildings on the island on the land are low, metastasizing in a stellate wave across the landscape, over the flats, up the canyons, and along the hillsides. In 1960 Santa Paula, where the Thomas Fire in Ventura started, was a little town where Santa Paula Canyon hit the Santa Clara River. Today it’s part of greater Ventura, stretching up the canyon, reaching past farms along the river toward Saticoy.

So the canyons are perfect places for fires. They’re at the Wildland-Urban Interface, developed but not too developed. Wall-to-wall hardscape leaves nothing to burn; no buildings at all means no people to provide an ignition source. But the hills of Ventura or Bel-Air? Firestarty.

As the transverse ranges defined Southern California before Los Angeles and during its spasmodic growth, today it’s defined by freeways. The mountains shape the roads—I-5 coming over the Grapevine through Tejon Pass in the Tehachapis, the 101 skirting the north side of the Santa Monica Mountains, and the 405 tucking through them via the Sepulveda Pass. The freeways, names spoken as a number with a "the" in front, frame time and space in SoCal. For an Angeleno like me, reports of fires closing the 101, the 210, and the 405 are code for the end of the world. Forget Carey McWilliams; that’s some Nathanael West stuff right there—the burning of Los Angeles from Day of the Locust, the apocalypse that Hollywood always promises.

It won’t be the end end, of course. Southern California zoning and development are flirting, for now at least, with density, accommodating more people, dealing with the state’s broad crisis in housing, and incidentally minimizing the size of the wildland interface. No one can unbuild what makes the place an island on the land, but better building on the island might help stop the next fires before they can start.

Read more: https://www.wired.com/story/losangeles-wildfire-science/

It’s Gonna Get a Lot Easier to Break Science Journal Paywalls

Anurag Acharya’s problem was that the Google search bar is very smart, but also kind of dumb. As a Googler working on search 13 years ago, Acharya wanted to make search results encompass scholarly journal articles. A laudable goal, because unlike the open web, most of the raw output of scientific research was invisible—hidden behind paywalls. People might not even know it existed. “I grew up in India, and most of the time you didn’t even know if something existed. If you knew it existed, you could try to get it,” Acharya says. “‘How do I get access?’ is a second problem. If I don’t know about it, I won’t even try.”

Acharya and a colleague named Alex Verstak decided that their corner of search would break with Google tradition and look behind paywalls—showing citations and abstracts even if it couldn’t cough up an actual PDF. “It was useful even if you did not have university access. That was a deliberate decision we made,” Acharya says.

Then they hit that dumbness problem. The search bar doesn’t know what flavor of information you’re looking for. You type in “cancer;” do you want results that tell you your symptoms aren’t cancer (please), or do you want the Journal of the American Medical Association? The search bar doesn’t know.

Acharya and Verstak didn't try to teach it. Instead, they built a spinoff, a search bar separate from Google-prime that would only look for journal articles, case law, patents—hardcore primary sources. And it worked. “We showed it to Larry [Page] and he said, ‘why is this not already out?’ That’s always a positive sign,” Acharya says.

Today, even though you can’t access Scholar directly from the Google-prime page, it has become the internet’s default scientific search engine—even more than once-monopolistic Web of Science, the National Institutes of Health’s PubMed, and Scopus, owned by the giant scientific publisher Elsevier.

But most science is still paywalled. More than three quarters of published journal articles—114 million on the World Wide Web alone, by one (lowball) estimate—are only available if you are affiliated with an institution that can afford pricey subscriptions or you can swing $40-per-article fees. In the last several years, though, scientists have made strides to loosen the grip of giant science publishers. They skip over the lengthy peer review process mediated by the big journals and just … post. Review comes after. The paywall isn’t crumbling, but it might be eroding. The open science movement, with its free distribution of articles before their official publication, is a big reason.

Another reason, though, is stealthy improvement in scientific search engines like Google Scholar, Microsoft Academic, and Semantic Scholar—web tools increasingly able to see around paywalls or find articles that have jumped over. Scientific publishing ain’t like book publishing or journalism. In fact, it’s a little more like music, pre-iTunes, pre-Spotify. You know, right about when everyone started using Napster.

Before World War II most scientific journals were published by small professional societies. But capitalism’s gonna capitalism. By the early 1970s the five biggest scientific publishers—among them Reed-Elsevier, Wiley-Blackwell, Springer, and Taylor & Francis—published about 20 percent of all journal articles. In 1996, when the transition to digital was underway and the PDF became the format of choice for journals, that number went up to 30 percent. Ten years later it was 50 percent.

Those big-five publishers became the change they wanted to see in the publishing world—by buying it. Owning over 2,500 journals (including the powerhouse Cell) and 35,000 books and references (including Gray’s Anatomy) is big, right? Well, that’s Elsevier, the largest scientific publisher in the world, which also owns ScienceDirect, the online gateway to all those journals. It owns the (pre-Google Scholar) scientific search engine Scopus. It bought Mendeley, a reference manager with social and community functions. It even owns a company that monitors mentions of scientific work on social media. “Everywhere in the research ecosystem, from submission of papers to research evaluations made based on those papers and various acts associated with them online, Elsevier is present,” says Vincent Larivière, an information scientist at the University of Montreal and author of the paper with those stats about publishing I put one paragraph back.

The company says all that is actually in the service of wider dissemination. “We are firmly in the open science space. We have tools, services, and partnerships that help create a more inclusive, more collaborative, more transparent world of research,” says Gemma Hersh,1 Elsevier’s vice president for open science. “Our mission is around improving research performance and working with the research community to do that.” Indeed, in addition to traditional, for-profit journals it also owns SSRN, a preprint server—one of those places that hosts unpaywalled, pre-publication articles—and publishes thousands of articles at various levels of openness.

So Elsevier is science publishing’s version of Too Big to Fail. As such, it has faced various boycotts, slightly piratical workarounds, and general anger. (“The term ‘boycott’ comes up a lot, but I struggle with that. If I can be blunt, I think it’s a word that’s maybe misapplied,” Hersh says. “More researchers submit to us every year, and we publish more articles every year.”)

If you’re not someone with “.edu” in your email, this might make you a little nuts. Not just because you might want to actually see some cool science, but because you already paid for that research. Your taxes (or maybe some zillionaire’s grant money) paid the scientists and funded the studies. The experts who reviewed and critiqued the results and conclusions before publication were volunteers. Then the journal that published it charged a university or a library—again, probably funded at least in part by your taxes—to subscribe. And then you gotta buy the article? Or the researcher had to pony up $2,000 to make it open access?2

Now, publishers like Elsevier will say that the process of editing, peer-reviewing, copy editing, and distribution is a major, necessary value add. And look at the flip side: so-called predatory journals that charge authors to publish nominally open-access articles with no real editing or review (that, yes, show up in search results). Still, the scientific publishing business is a $10 billion-a-year game. In 2010, Elsevier reported profits of $1 billion and a 35 percent margin. So, yeah.

In that early-digital-music metaphor, the publishers are the record labels and the PDFs are MP3s. But you still need a Napster. That’s where open-science-powered search engines come in.

A couple years after Acharya and Verstak built Scholar, a team at Microsoft built their own version, called Academic. It was at the time a much, let’s say, leaner experience, with far fewer papers available. But then in 2015, Microsoft released a 2.0, and it’s a killer.

Microsoft’s communication team declined to make any of the people who run it available, but a paper from the team at Microsoft Research lays the specs out pretty well: It figures out the bibliographic data of papers and combines that with results from Bing. (A real search engine that exists!) And you know what? It’s pretty great. It sees 83 million papers, not so far from estimations of the size of Google’s universe, and does the same kind of natural-language queries. Unlike Scholar, people can hook into Microsoft Academic’s API and see its citation graph, too.

Even as recently as 2015, scientific search engines weren’t much use to anyone outside universities and libraries. You could find a citation to a paper, sure—but good luck actually reading it. Even though more overt efforts to subvert copyright like Sci-Hub are falling to lawsuits from places like Elsevier and the American Chemical Society, the open science movement is gaining momentum. PDFs are falling off virtual trucks all over the internet, posted on university web sites or on places like ResearchGate and Academia.edu, hosts for exactly this kind of thing. Scholar’s and Academic’s first sorties against the paywall have been joined by reinforcements. It’s starting to look like a siege.

For example, the Chan Zuckerberg Initiative, the philanthropic arm of the founder of Facebook, is working on something aimed at increasing access. The founders of Mendeley have a new, venture-backed PDF finder called Kopernio. A browser extension called Unpaywall roots around the web for free PDFs of articles.

A particularly novel web crawler comes from the non-profit Allen Institute for Artificial Intelligence. Semantic Scholar pores over a corpus of 40 million citations in computer science and biomedicine, extracting tables and charts and using machine learning to flag especially meaningful cites as “highly influential citations,” a new metric. Almost a million people use it every month.

“We use AI techniques, particularly natural language processing and machine vision, to process the PDF and extract information that helps readers decide if the paper is of interest,” says Oren Etzioni, CEO of the Allen Institute for AI. “The net effect of all this is that more and more is open, and a number of publishers … have said making content discoverable via these search engines is not a bad thing.”

Even with all these increases in discoverability and access, the technical challenges of scientific search don’t stop with paywalls. When Acharya and Verstak started out, Google relied on PageRank, a way to model how important hyperlinks between two web pages were. That’s not how scientific citations work. “The linkage between articles is in text. There are references, and references are all approximate,” Acharya says. “In scholarship, all your citations are one way. Everybody cites older stuff, and papers never get modified.”

Plus, unlike a URL, the location or citation for a journal article is not the actual journal article. In fact, there might be multiple copies of the article at various locations. From a perspective as much philosophical as bibliographical, a PDF online is really just a picture of knowledge. So the search result showing a citation might also attach to multiple versions of the actual article.

That’s a special problem when researchers can post pre-print versions of their own work but might not have copyright to the publication of record, the peer-reviewed, copy-edited version in the journal. Sometimes the differences are small; sometimes they’re not.

Why don’t the search engines just use metadata to understand what version belongs where? Like when you download music, your app of choice automatically populates with things like an image, the artist’s name, the song titles…the data about the thing.

The answer: metadata LOL. It’s a big problem. “It varies by source,” Etzioni says. “A whole bunch of that information is not available as structured metadata.” Even when there is metadata, it’s in idiosyncratic formats from publisher to publisher and server to server. “In a surprising way, we’re kind of in the dark ages, and the problem just keeps getting worse,” he says. More papers get published; more are digital. Even specialists can’t keep up.
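To see why that hurts, consider the matching problem a scholarly search engine faces when the same article surfaces in several places with slightly different, or missing, metadata. The sketch below is purely illustrative: the records and the crude title-normalization rule are made up, and real engines lean on many more signals (DOIs when present, author lists, citation context). This is not Google’s or Microsoft’s actual pipeline.

```python
import re
from collections import defaultdict

def normalize_title(title: str) -> str:
    """Crude matching key for versions of the same paper: lowercase,
    strip punctuation, collapse whitespace."""
    title = re.sub(r"[^a-z0-9 ]", " ", title.lower())
    return re.sub(r"\s+", " ", title).strip()

# Hypothetical records for one article found in three places, each carrying
# slightly different (or missing) metadata -- the everyday mess described above.
records = [
    {"title": "Deep Learning for Variant Calling", "source": "publisher site", "year": 2018},
    {"title": "Deep learning for variant calling.", "source": "preprint server", "year": None},
    {"title": "DEEP   LEARNING FOR VARIANT CALLING", "source": "author homepage", "year": 2017},
]

versions = defaultdict(list)
for rec in records:
    versions[normalize_title(rec["title"])].append(rec["source"])

for key, sources in versions.items():
    print(f"'{key}' found at: {', '.join(sources)}")
```

Even in this toy case, only the normalized title ties the versions together; nothing in the records says which copy is the peer-reviewed version of record.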

Which is why scientific search and open science are so intertwined and so critical. The reputation of a journal and the number of times a specific paper in that journal gets cited are metrics for determining who gets grants and who gets tenure, and by extension who gets to do bigger and bigger science. “Where the for-profit publishers and academic presses sort of have us by the balls is that we are addicted to prestige,” says Guy Geltner, a historian at the University of Amsterdam, open science advocate, and founder of a new user-owned social site for scientists called Scholarly Hub.

The thing is, as is typical for Google, Scholar is opaque about how it works and what it finds. Acharya wouldn’t give me numbers of users or the number of papers it searches. (“It’s larger than the estimates that are out there,” he says, and “an order of magnitude bigger than when we started.”) No one outside Google fully understands how the search engine applies its criteria for inclusion,3 and indeed Scholar hoovers up way more than just PDFs of published or pre-published articles. You get course syllabi, undergraduate coursework, PowerPoint presentations … actually, for a reporter, it’s kind of fun. But tricky.

That means the citation data is also obscure, which makes it hard to know what Scholar’s findings mean for science as a whole. Scholar may be a low-priority side-project (please don’t kill it like you killed Reader!) but maybe that data is going to be valuable someday. Elsevier obviously thinks it’s useful.

The scientific landscape is shifting. "If you took a group of academics right now and asked them to create a new system of publishing, nobody would suggest what we're currently doing," says David Barner, a psychologist at UC San Diego and open science advocate. But change, Barner says, is hard. The people who'd make those changes are already overworked, already volunteering their time.

Even Elsevier knows that change is coming. “Rather than scrabble around in one of the many programs you’ve mentioned, anyone can come to our Science and Society page, which details a host of programs and organizations we work with to cater through every scenario where somebody wants access,” Hersh says. And that’d be to the final, published, peer-reviewed version—the archived, permanent version of record.

Digital revolutions have a way of #disrupting no matter what. As journal articles get more open and more searchable, value will come from understanding what people search for—as Google long ago understood about the open web. “We’re a high quality publisher, but we’re also an information analytics company, evolving services that the research community can use,” Hersh says.

Because reputation and citation are core currencies to scientists, scientists have to be educated about the possibilities of open publication at the same time as prestigious, reputable venues have to exist. Preprints are great, and the researchers maintain copyright to them, but it’s also possible that the final citation-of-record could be different after it goes through review. There has to be a place where primary scientific work is available to the people who funded it, and a way for them to find it.

Because if there isn’t? “A huge part of research output is suffocating behind paywalls. Sixty-five of the 100 most cited articles in history are behind paywalls. That’s the opposite of what science is supposed to do,” Geltner says. “We’re not factories producing proprietary knowledge. We’re engaged in debates, and we want the public to learn from those debates.”

I'm sensitive to the irony of a WIRED writer talking about the social risks of a paywall, though I'd draw a distinction between paying a journalistic outlet for its journalism and paying a scientific publisher for someone else's science.

An even more critical difference, though, is that a science paywall does more than separate gown from town. When all the solid, good information is behind a paywall, what’s left outside in the wasteland will be crap—propaganda and marketing. Those are always free, because people with political agendas and financial interests underwrite them. Understanding that vaccines are critical to public health and human-driven carbon emissions are un-terraforming the planet cannot be the purview of the one percent. “Access to science is going to be a first-world privilege,” Geltner says. “That’s the opposite of what science is supposed to be about.”

1 UPDATE 12/3/17 11:55 AM: Corrected the spelling of this name.
2 UPDATE 12/4/17 1:25 PM: Removed the word “another”; researchers sometimes pay to make their own articles open-access.
3 UPDATE 12/4/17 1:25 PM: Clarified to show that Google publishes inclusion criteria.

Read more: https://www.wired.com/story/its-gonna-get-a-lot-easier-to-break-science-journal-paywalls/

Google Is Giving Away AI That Can Build Your Genome Sequence

Today, a teaspoon of spit and a hundred bucks is all you need to get a snapshot of your DNA. But getting the full picture—all 3 billion base pairs of your genome—requires a much more laborious process. One that, even with the aid of sophisticated statistics, scientists still struggle over. It’s exactly the kind of problem that makes sense to outsource to artificial intelligence.

On Monday, Google released a tool called DeepVariant that uses deep learning—the machine learning technique that now dominates AI—to assemble full human genomes. Modeled loosely on the networks of neurons in the human brain, these massive mathematical models have learned how to do things like identify faces posted to your Facebook news feed, transcribe your inane requests to Siri, and even fight internet trolls. And now, engineers at Google Brain and Verily (Alphabet’s life sciences spin-off) have taught one to take raw sequencing data and line up the billions of As, Ts, Cs, and Gs that make you you.

And oh yeah, it’s more accurate than all the existing methods out there. Last year, DeepVariant took first prize in an FDA contest promoting improvements in genetic sequencing. The open source version the Google Brain/Verily team introduced to the world Monday reduced the error rates even further—by more than 50 percent. Looks like grandmaster Ke Jie isn’t the only one getting bested by Google’s AI neural networks this year.

DeepVariant arrives at a time when healthcare providers, pharma firms, and medical diagnostic manufacturers are all racing to capture as much genomic information as they can. To meet the need, Google rivals like IBM and Microsoft are all moving into the healthcare AI space, with speculation about whether Apple and Amazon will follow suit. While DeepVariant’s code comes at no cost, that isn’t true of the computing power required to run it. Scientists say that expense is going to prevent it from becoming the standard anytime soon, especially for large-scale projects.

But DeepVariant is just the front end of a much wider deployment; genomics is about to go deep learning. And once you go deep learning, you don’t go back.

It’s been nearly two decades since high-throughput sequencing escaped the labs and went commercial. Today, you can get your whole genome for just $1,000 (quite a steal compared to the $1.5 million it cost to sequence James Watson’s in 2008).

But today’s machines still produce only incomplete, patchy, and glitch-riddled genomes. Errors can get introduced at each step of the process, and that makes it difficult for scientists to distinguish the natural mutations that make you you from random artifacts, especially in repetitive sections of a genome.

See, most modern sequencing technologies work by taking a sample of your DNA, chopping it up into millions of short snippets, and then using fluorescently-tagged nucleotides to produce reads—the list of As, Ts, Cs, and Gs that correspond to each snippet. Then those millions of reads have to be grouped into abutting sequences and aligned with a reference genome.

That’s the part that gives scientists so much trouble. Assembling those fragments into a usable approximation of the actual genome is still one of the biggest rate-limiting steps for genetics. A number of software programs exist to help put the jigsaw pieces together. FreeBayes, VarDict, Samtools, and the most well-used, GATK, depend on sophisticated statistical approaches to spot mutations and filter out errors. Each tool has strengths and weaknesses, and scientists often wind up having to use them in conjunction.
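To make the statistical idea concrete, here is a deliberately toy example of frequency-based variant calling at a single position. It is nothing like the probabilistic models inside GATK or FreeBayes, but it shows why some filter is needed to tell a real mutation from a stray read error; the thresholds and pileups are invented for illustration.

```python
from collections import Counter

def call_site(ref_base: str, pileup: str, min_depth: int = 10, min_frac: float = 0.25) -> str:
    """Toy frequency-based caller for one genomic position: report an alternate
    base only if coverage is adequate and the base shows up in enough reads."""
    depth = len(pileup)
    if depth < min_depth:
        return "no-call (coverage too low)"
    counts = Counter(pileup.upper())
    alt, alt_count = max(
        ((base, n) for base, n in counts.items() if base != ref_base.upper()),
        key=lambda item: item[1],
        default=(None, 0),
    )
    if alt is not None and alt_count / depth >= min_frac:
        return f"variant {ref_base}->{alt} ({alt_count}/{depth} reads)"
    return "matches reference"

# Stacked read bases ("pileups") covering one position, purely illustrative.
print(call_site("A", "AAAAAAAAGGGGG"))   # likely a real A->G variant
print(call_site("A", "AAAAAAAAAAAAT"))   # lone mismatch, probably a sequencing error
```

Real callers weigh base qualities, mapping qualities, and expected ploidy rather than a bare frequency threshold, which is exactly where the sophisticated statistics come in.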

No one knows the limitations of the existing technology better than Mark DePristo and Ryan Poplin. They spent five years creating GATK from whole cloth. This was 2008: no tools, no bioinformatics formats, no standards. “We didn’t even know what we were trying to compute!” says DePristo. But they had a north star: an exciting paper that had just come out, written by a Silicon Valley celebrity named Jeff Dean. As one of Google’s earliest engineers, Dean had helped design and build the fundamental computing systems that underpin the tech titan’s vast online empire. DePristo and Poplin used some of those ideas to build GATK, which became the field’s gold standard.

But by 2013, the work had plateaued. “We tried almost every standard statistical approach under the sun, but we never found an effective way to move the needle,” says DePristo. “It was unclear after five years whether it was even possible to do better.” DePristo left to pursue a Google Ventures-backed start-up called SynapDx that was developing a blood test for autism. When that folded two years later, one of its board members, Andrew Conrad (of Google X, then Google Life Sciences, then Verily) convinced DePristo to join the Google/Alphabet fold. He was reunited with Poplin, who had joined up the month before.

And this time, Dean wasn’t just a citation; he was their boss.

As the head of Google Brain, Dean is the man behind the explosion of neural nets that now prop up all the ways you search and tweet and snap and shop. With his help, DePristo and Poplin wanted to see if they could teach one of these neural nets to piece together a genome more accurately than their baby, GATK.

The network wasted no time in making them feel obsolete. After training it on benchmark datasets of just seven human genomes, DeepVariant was able to accurately identify those single nucleotide swaps 99.9587 percent of the time. “It was shocking to see how fast the deep learning models outperformed our old tools,” says DePristo. Their team submitted the results to the PrecisionFDA Truth Challenge last summer, where it won a top performance award. In December, they shared them in a paper published on bioRxiv.

DeepVariant works by transforming the task of variant calling—figuring out which base pairs actually belong to you and not to an error or other processing artifact—into an image classification problem. It takes layers of data and turns them into channels, like the colors on your television set. In the first working model they used three channels: The first was the actual bases, the second was a quality score defined by the sequencer the reads came off of, the third contained other metadata. By compressing all that data into an image file of sorts, and training the model on tens of millions of these multi-channel “images,” DeepVariant began to be able to figure out the likelihood that any given A or T or C or G either matched the reference genome completely, varied by one copy, or varied by both.
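As a minimal sketch of what that compression might look like, the toy encoder below turns a handful of aligned reads into a small three-channel array. The channel choices and scaling are assumptions made for illustration, loosely mirroring the channels described above; this is not DeepVariant’s published format.

```python
import numpy as np

BASE_VALUES = {"A": 0.25, "C": 0.5, "G": 0.75, "T": 1.0}

def encode_pileup(reads, quals, ref_match):
    """Encode aligned reads over a short window as a 3-channel 'image':
    channel 0 = which base was read, channel 1 = sequencer quality score,
    channel 2 = whether the base agrees with the reference genome."""
    height, width = len(reads), len(reads[0])
    image = np.zeros((height, width, 3), dtype=np.float32)
    for row, (read, qual, match) in enumerate(zip(reads, quals, ref_match)):
        for col, base in enumerate(read):
            image[row, col, 0] = BASE_VALUES.get(base, 0.0)
            image[row, col, 1] = min(qual[col], 60) / 60.0
            image[row, col, 2] = 1.0 if match[col] else 0.0
    return image

# Three toy reads covering the same five-base window; one read disagrees with
# the reference at the fourth position, which shows up as a faint "stripe".
reads = ["ACGTA", "ACGTA", "ACGGA"]
quals = [[40, 38, 35, 30, 25], [42, 40, 37, 33, 28], [39, 36, 20, 15, 22]]
match = [[1, 1, 1, 1, 1], [1, 1, 1, 1, 1], [1, 1, 1, 0, 1]]

print(encode_pileup(reads, quals, match).shape)  # (3, 5, 3)
```

Framed this way, variant calling becomes the kind of image-classification problem that convolutional neural networks already handle well.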

But they didn’t stop there. After the FDA contest they transitioned the model to TensorFlow, Google's artificial intelligence engine, and continued tweaking its parameters by changing the three compressed data channels into seven raw data channels. That allowed them to reduce the error rate by a further 50 percent. In an independent analysis conducted this week by the genomics computing platform DNAnexus, DeepVariant vastly outperformed GATK, FreeBayes, and Samtools, sometimes reducing errors by as much as 10-fold.

“That shows that this technology really has an important future in the processing of bioinformatic data,” says DNAnexus CEO Richard Daly. “But it’s only the opening chapter in a book that has 100 chapters.” Daly says he expects this kind of AI to one day actually find the mutations that cause disease. His company received a beta version of DeepVariant, and is now testing the current model with a limited number of its clients—including pharma firms, big health care providers, and medical diagnostic companies.

To run DeepVariant effectively for these customers, DNAnexus has had to invest in newer generation GPUs to support its platform. The same is true for Canadian competitor, DNAStack, which plans to offer two different versions of DeepVariant—one tuned for low cost and one tuned for speed. Google’s Cloud Platform already supports the tool, and the company is exploring using the TPUs (tensor processing units) that connect things like Google Search, Street View, and Translate to accelerate the genomics calculations as well.

DeepVariant’s code is open-source so anyone can run it, but to do so at scale will likely require paying for a cloud computing platform. And it’s this cost—computational and in terms of actual dollars—that has researchers hedging on DeepVariant’s utility.

“It’s a promising first step, but it isn’t currently scalable to a very large number of samples because it’s just too computationally expensive,” says Daniel MacArthur, a Broad/Harvard human geneticist who has built one of the largest libraries of human DNA to date. For projects like his, which deal in tens of thousands of genomes, DeepVariant is just too costly. And, just like current statistical models, it can only work with the limited reads produced by today’s sequencers.

Still, he thinks deep learning is here to stay. “It’s just a matter of figuring out how to combine better quality data with better algorithms and eventually we’ll converge on something pretty close to perfect,” says MacArthur. But even then, it’ll still just be a list of letters. At least for the foreseeable future, we’ll still need talented humans to tell us what it all means.

Read more: https://www.wired.com/story/google-is-giving-away-ai-that-can-build-your-genome-sequence/

Want to Learn How to Mine in Space? There’s a School for You

Hunter Williams used to be an English teacher. Then, three years into that job, he started reading the book The Moon Is a Harsh Mistress. The 1966 novel by Robert Heinlein takes place in the 2070s, on the moon, which, in this future, hosts a subterranean penal colony. Like all good sci-fi, the plot hinges on a rebellion and a computer that gains self-awareness. But more important to Williams were two basic fictional facts: First, people lived on the moon. Second, they mined the moon. “I thought, ‘This is it. This is what we really could be doing,’” he says.

Today, that vision is closer than ever. And Williams is taking steps to make it reality. This year, he enrolled in a class called Space Resources Fundamentals, the pilot course for the first-ever academic program specializing in space mining. It's a good time for such an education, given that companies like Deep Space Industries and Planetary Resources are planning prospecting missions, NASA's OSIRIS-REx is on its way to get a sample of an asteroid and bring it back to Earth, and there's international and commercial talk of long-term living in space.

Williams had grown up with the space-farers on Star Trek, but he found Heinlein’s vision more credible: a colony that dug into and used the resources of their celestial body. That's the central tenet of the as-yet-unrealized space mining industry: You can't take everything with you, and, even if you can, it's a whole lot cheaper not to—to mine water to make fuel, for instance, rather than launching it on overburdened rockets. “I saw a future that wasn't a hundred or a thousand years away but could be happening now,” says Williams.

So in 2012, he adjusted trajectory and went to school for aerospace engineering. Then he worked at Cape Canaveral in Florida, doing ground support for Lockheed Martin. His building, on that cosmic coast, was right next to one of SpaceX's spots. “Every day when I came to work, I would see testaments to new technology,” he says. “It was inspiring.”

A few years later, he still hadn't let go of the idea that humans could work with what they found in space. Like in his book. So he started talking to Christopher Dreyer, a professor at the Colorado School of Mines’ Center for Space Resources, a research and technology development center that's existed within the school for more than a decade.

It was good timing. Because this summer, Mines announced its intention to found the world’s first graduate program in Space Resources—the science, technology, policy, and politics of prospecting, mining, and using those resources. The multidisciplinary program would offer Post-Baccalaureate certificates and Masters of Science degrees. Although it's still pending approval for a 2018 start date, the school is running its pilot course, taught by Dreyer, this semester.

Williams has committed fully: He left his Canaveral job this summer and moved to Colorado to do research for Dreyer, and hopefully start the grad program in 2018.

Williams wasn't the only one interested in the future of space mining. People from all over, non-traditional students, wanted to take Space Resources Fundamentals. And so Dreyer and Center for Space Resources director Angel Abbud-Madrid decided to run it remotely, ending up with about 15 enrollees who log in every Tuesday and Thursday night for the whole semester. Dreyer has a special setup in his office for his virtual lectures: a laptop stand, a wall of books behind him, a studio-type light that shines evenly.

In the minutes before Thanksgiving-week class started, students' heads popped up on Dreyer's screen as they logged in. Some are full-time students at Mines; some work in industry; some work for the government. There was the employee from the FAA’s Office of Commercial Space Transportation, an office tasked, in part, with making sure the US is obeying international treaties as it explores beyond the planet. Then there’s Justin Cyrus, the CEO of a startup called Lunar Outpost. Cyrus isn’t mining any moons yet, but Lunar Outpost has partnered with Denver’s Department of Environmental Health to deploy real-time air-quality sensors, of the kind it hopes to develop for moony use.

Cyrus was a Mines graduate, with a master’s in electrical and electronics engineering; he sought out Dreyer and Abbud-Madrid when he needed advice for his nascent company. When the professors announced the space resources program, Cyrus decided to get in on this pilot class. He, and the other attendees, seem to see the class not just as an educational opportunity but also as a networking one: Their classmates, they say, are the future leaders of this industry.

Cyrus, the FAA employee, and Williams all smiled from their screens in front of benign backgrounds. About a dozen other students—all men—joined in by the time class started. The day's lesson, about resources on the moon, came courtesy of scientist Paul Spudis, who live-broadcasted from a few states away. Spudis, a guest lecturer, showed charts and maps and data about resources the moon might harbor, and where, and their worth. He's bullish on the prospects of prospecting. Toward the end of his talk, he said, "I think we'll have commercial landings on the moon in the next year or so." Indeed, the company Moon Express is planning to land there in 2018, in a bid to win the Google Lunar X Prize.

Back during Halloween week, the class covered the Outer Space Treaty, a creation of the United Nations that governs outer-space actions and (in some people's interpretations) makes the legality of space mining dubious. The lecture was full of policy detail, but the students drove the ensuing Q&A toward the sociological. Space mining would disproportionately help already-wealthy countries, some thought, despite talk in the broader community about how space mining lowers the barrier to space entry.

In this realism, and this thoughtfulness, Dreyer's class is refreshing. The PR talk of big would-be space mining companies like Planetary Resources and Deep Space Industries can be slick, uncomplicated, and (sometimes) unrealistic. It often skips over the many steps between here and self-sustaining space societies—not to mention the companies' own long-term viability.

But in Space Resource Fundamentals, the students seem grounded. Student Nicholas Proctor, one of the few with a non-engineering background, appreciates the pragmatism. Proctor studied accounting as an undergrad and enrolled at Mines in mineral economics. After he received a NASA grant to study space-based solar power and its applications to the mining industry, Abbud-Madrid sent him an email telling him about the class. The professor thought it would be a good fit—and Proctor obviously agreed.

After Thanksgiving-week class was over, students logged off, waving one-handed goodbyes. Williams had been watching from the lab downstairs, in a high-tech warehouse-garage combo. There, he and other students work among experiments about how dust moves in space, and what asteroids are actually like. Of course, they're also interested in how to get stuff—resources—out of them. An old metal chamber dominates the room, looking like an unpeopled iron lung. "The big Apollo-era chamber is currently for asteroid mining," Williams explained, "breaking apart rocks with sunlight and extracting the water and even precious metals."

While Williams closed up class shop downstairs, Dreyer and Abbud-Madrid hung out in Dreyer's office for a few minutes. Dreyer, leaning back in his well-lit chair, talked bemusedly about some of the communications they receive. “We get interest from people to find out what they can mine and bring back to Earth and become a trillionaire,” he said.

That’s not really what the Space Resources program is about, in part because it’s not clear that’s possible—it’s expensive to bring the precious (to bring anything) back to Earth. The class focus—and, not coincidentally, the near-term harvest—is the H2O, which will stay in space, for space-use. “No matter how complex our society becomes, it always comes back to water,” said Abbud-Madrid. He laughed. “We’re going to the moon,” he continued. “For water.”

Read more: https://www.wired.com/story/want-to-learn-how-to-mine-in-space-theres-a-school-for-you/