It’s the (Democracy-Poisoning) Golden Age of Free Speech

For most of modern history, the easiest way to block the spread of an idea was to keep it from being mechanically disseminated. Shutter the newspaper, pressure the broadcast chief, install an official censor at the publishing house. Or, if push came to shove, hold a loaded gun to the announcer’s head.

This actually happened once in Turkey. It was the spring of 1960, and a group of military officers had just seized control of the government and the national media, imposing an information blackout to suppress the coordination of any threats to their coup. But inconveniently for the conspirators, a highly anticipated soccer game between Turkey and Scotland was scheduled to take place in the capital two weeks after their takeover. Matches like this were broadcast live on national radio, with an announcer calling the game, play by play. People all across Turkey would huddle around their sets, cheering on the national team.

Canceling the match was too risky for the junta; doing so might incite a protest. But what if the announcer said something political on live radio? A single remark could tip the country into chaos. So the officers came up with the obvious solution: They kept several guns trained on the announcer for the entire 2 hours and 45 minutes of the live broadcast.

It was still a risk, but a managed one. After all, there was only one announcer to threaten: a single bottleneck to control of the airwaves.

Variations on this general playbook for censorship—find the right choke point, then squeeze—were once the norm all around the world. That’s because, until recently, broadcasting and publishing were difficult and expensive affairs, their infrastructures riddled with bottlenecks and concentrated in a few hands.

But today that playbook is all but obsolete. Whose throat do you squeeze when anyone can set up a Twitter account in seconds, and when almost any event is recorded by smartphone-wielding members of the public? When protests broke out in Ferguson, Missouri, in August 2014, a single livestreamer named Mustafa Hussein reportedly garnered an audience comparable in size to CNN’s for a short while. If a Bosnian Croat war criminal drinks poison in a courtroom, all of Twitter knows about it in minutes.

In today’s networked environment, when anyone can broadcast live or post their thoughts to a social network, it would seem that censorship ought to be impossible. This should be the golden age of free speech.

And sure, it is a golden age of free speech—if you can believe your lying eyes. Is that footage you’re watching real? Was it really filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? Was it maybe even generated with the help of artificial intelligence? (Yes, there are systems that can create increasingly convincing fake videos.)

Or let’s say you were the one who posted that video. If so, is anyone even watching it? Or has it been lost in a sea of posts from hundreds of millions of content producers? Does it play well with Facebook’s algorithm? Is YouTube recommending it?

Maybe you’re lucky and you’ve hit a jackpot in today’s algorithmic public sphere: an audience that either loves you or hates you. Is your post racking up the likes and shares? Or is it raking in a different kind of “engagement”: Have you received thousands of messages, mentions, notifications, and emails threatening and mocking you? Have you been doxed for your trouble? Have invisible, angry hordes ordered 100 pizzas to your house? Did they call in a SWAT team—men in black arriving, guns drawn, in the middle of dinner?

Standing there, your hands over your head, you may feel like you’ve run afoul of the awesome power of the state for speaking your mind. But really you just pissed off 4chan. Or entertained them. Either way, congratulations: You’ve found an audience.

Here’s how this golden age of speech actually works: In the 21st century, the capacity to spread ideas and reach an audience is no longer limited by access to expensive, centralized broadcasting infrastructure. It’s limited instead by one’s ability to garner and distribute attention. And right now, the flow of the world’s attention is structured, to a vast and overwhelming degree, by just a few digital platforms: Facebook, Google (which owns YouTube), and, to a lesser extent, Twitter.

These companies—which love to hold themselves up as monuments of free expression—have attained a scale unlike anything the world has ever seen; they’ve come to dominate media distribution, and they increasingly stand in for the public sphere itself. But at their core, their business is mundane: They’re ad brokers. To virtually anyone who wants to pay them, they sell the capacity to precisely target our eyeballs. They use massive surveillance of our behavior, online and off, to generate increasingly accurate, automated predictions of what advertisements we are most susceptible to and what content will keep us clicking, tapping, and scrolling down a bottomless feed.

So what does this algorithmic public sphere tend to feed us? In tech parlance, Facebook and YouTube are “optimized for engagement,” which their defenders will tell you means that they’re just giving us what we want. But there’s nothing natural or inevitable about the specific ways that Facebook and YouTube corral our attention. The patterns, by now, are well known. As BuzzFeed famously reported in November 2016, “top fake election news stories generated more total engagement on Facebook than top election stories from 19 major news outlets combined.”

Humans are a social species, equipped with few defenses against the natural world beyond our ability to acquire knowledge and stay in groups that work together. We are particularly susceptible to glimmers of novelty, messages of affirmation and belonging, and messages of outrage toward perceived enemies. These kinds of messages are to human community what salt, sugar, and fat are to the human appetite. And Facebook gorges us on them—in what the company’s first president, Sean Parker, recently called “a social-validation feedback loop.”

There are, moreover, no nutritional labels in this cafeteria. For Facebook, YouTube, and Twitter, all speech—whether it’s a breaking news story, a saccharine animal video, an anti-Semitic meme, or a clever advertisement for razors—is but “content,” each post just another slice of pie on the carousel. A personal post looks almost the same as an ad, which looks very similar to a New York Times article, which has much the same visual feel as a fake newspaper created in an afternoon.

What’s more, all this online speech is no longer public in any traditional sense. Sure, Facebook and Twitter sometimes feel like places where masses of people experience things together simultaneously. But in reality, posts are targeted and delivered privately, screen by screen by screen. Today’s phantom public sphere has been fragmented and submerged into billions of individual capillaries. Yes, mass discourse has become far easier for everyone to participate in—but it has simultaneously become a set of private conversations happening behind your back. Behind everyone’s backs.

Not to put too fine a point on it, but all of this invalidates much of what we think about free speech—conceptually, legally, and ethically.

The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all. They look like viral or coordinated harassment campaigns, which harness the dynamics of viral outrage to impose an unbearable and disproportionate cost on the act of speaking out. They look like epidemics of disinformation, meant to undercut the credibility of valid information sources. They look like bot-fueled campaigns of trolling and distraction, or piecemeal leaks of hacked materials, meant to swamp the attention of traditional media.

These tactics usually don’t break any laws or set off any First Amendment alarm bells. But they all serve the same purpose that the old forms of censorship did: They are the best available tools to stop ideas from spreading and gaining purchase. They can also make the big platforms a terrible place to interact with other people.

Even when the big platforms themselves suspend or boot someone off their networks for violating “community standards”—an act that does look to many people like old-fashioned censorship—it’s not technically an infringement on free speech, even if it is a display of immense platform power. Anyone in the world can still read what the far-right troll Tim “Baked Alaska” Gionet has to say on the internet. What Twitter has denied him, by kicking him off, is attention.

Many more of the most noble old ideas about free speech simply don’t compute in the age of social media. John Stuart Mill’s notion that a “marketplace of ideas” will elevate the truth is flatly belied by the virality of fake news. And the famous American saying that “the best cure for bad speech is more speech”—a paraphrase of Supreme Court justice Louis Brandeis—loses all its meaning when speech is at once mass but also nonpublic. How do you respond to what you cannot see? How can you cure the effects of “bad” speech with more speech when you have no means to target the same audience that received the original message?

This is not a call for nostalgia. In the past, marginalized voices had a hard time reaching a mass audience at all. They often never made it past the gatekeepers who put out the evening news, who worked and lived within a few blocks of one another in Manhattan and Washington, DC. The best that dissidents could do, often, was to engineer self-sacrificing public spectacles that those gatekeepers would find hard to ignore—as US civil rights leaders did when they sent schoolchildren out to march on the streets of Birmingham, Alabama, drawing out the most naked forms of Southern police brutality for the cameras.

But back then, every political actor could at least see more or less what everyone else was seeing. Today, even the most powerful elites often cannot effectively convene the right swath of the public to counter viral messages. During the 2016 presidential election, as Joshua Green and Sasha Issenberg reported for Bloomberg, the Trump campaign used so-called dark posts—nonpublic posts targeted at a specific audience—to discourage African Americans from voting in battleground states. The Clinton campaign could scarcely even monitor these messages, let alone directly counter them. Even if Hillary Clinton herself had taken to the evening news, that would not have been a way to reach the affected audience. Because only the Trump campaign and Facebook knew who the audience was.

It’s important to realize that, in using these dark posts, the Trump campaign wasn’t deviantly weaponizing an innocent tool. It was simply using Facebook exactly as it was designed to be used. The campaign did it cheaply, with Facebook staffers assisting right there in the office, as the tech company does for most large advertisers and political campaigns. Who cares where the speech comes from or what it does, as long as people see the ads? The rest is not Facebook’s department.

Mark Zuckerberg holds up Facebook’s mission to “connect the world” and “bring the world closer together” as proof of his company’s civic virtue. “In 2016, people had billions of interactions and open discussions on Facebook,” he said proudly in an online video, looking back at the US election. “Candidates had direct channels to communicate with tens of millions of citizens.”

This idea that more speech—more participation, more connection—constitutes the highest, most unalloyed good is a common refrain in the tech industry. But a historian would recognize this belief as a fallacy on its face. Connectivity is not a pony. Facebook doesn’t just connect democracy-loving Egyptian dissidents and fans of the videogame Civilization; it brings together white supremacists, who can now assemble far more effectively. It helps connect the efforts of radical Buddhist monks in Myanmar, who now have much more potent tools for spreading incitement to ethnic cleansing—fueling the fastest-growing refugee crisis in the world.

The freedom of speech is an important democratic value, but it’s not the only one. In the liberal tradition, free speech is usually understood as a vehicle—a necessary condition for achieving certain other societal ideals: for creating a knowledgeable public; for engendering healthy, rational, and informed debate; for holding powerful people and institutions accountable; for keeping communities lively and vibrant. What we are seeing now is that when free speech is treated as an end and not a means, it is all too possible to thwart and distort everything it is supposed to deliver.

Creating a knowledgeable public requires at least some workable signals that distinguish truth from falsehood. Fostering a healthy, rational, and informed debate in a mass society requires mechanisms that elevate opposing viewpoints, preferably their best versions. To be clear, no public sphere has ever fully achieved these ideal conditions—but at least they were ideals to fail from. Today’s engagement algorithms, by contrast, espouse no ideals about a healthy public sphere.

Some scientists predict that within the next few years, the number of children struggling with obesity will surpass the number struggling with hunger. Why? When the human condition was marked by hunger and famine, it made perfect sense to crave condensed calories and salt. Now we live in a food glut environment, and we have few genetic, cultural, or psychological defenses against this novel threat to our health. Similarly, we have few defenses against these novel and potent threats to the ideals of democratic speech, even as we drown in more speech than ever.

The stakes here are not low. In the past, it has taken generations for humans to develop political, cultural, and institutional antibodies to the novelty and upheaval of previous information revolutions. If The Birth of a Nation and Triumph of the Will came out now, they’d flop; but both debuted when film was still in its infancy, and their innovative use of the medium helped fuel the mass revival of the Ku Klux Klan and the rise of Nazism.

By this point, we’ve already seen enough to recognize that the core business model underlying the Big Tech platforms—harvesting attention with a massive surveillance infrastructure to allow for targeted, mostly automated advertising at very large scale—is far too compatible with authoritarianism, propaganda, misinformation, and polarization. The institutional antibodies that humanity has developed to protect against censorship and propaganda thus far—laws, journalistic codes of ethics, independent watchdogs, mass education—all evolved for a world in which choking a few gatekeepers and threatening a few individuals was an effective means to block speech. They are no longer sufficient.

But we don’t have to be resigned to the status quo. Facebook is only 13 years old, Twitter 11, and even Google is but 19. At this moment in the evolution of the auto industry, there were still no seat belts, airbags, emission controls, or mandatory crumple zones. The rules and incentive structures underlying how attention and surveillance work on the internet need to change. But in fairness to Facebook and Google and Twitter, while there’s a lot they could do better, the public outcry demanding that they fix all these problems is fundamentally mistaken. There are few solutions to the problems of digital discourse that don’t involve huge trade-offs—and those are not choices for Mark Zuckerberg alone to make. These are deeply political decisions. In the 20th century, the US passed laws that outlawed lead in paint and gasoline, that defined how much privacy a landlord needs to give his tenants, and that determined how much a phone company can surveil its customers. We can decide how we want to handle digital surveillance, attention-channeling, harassment, data collection, and algorithmic decision-making. We just need to start the discussion. Now.


The Free Speech Issue

  • “Nice Website. It Would Be a Shame if Something Happened to It.”: Steven Johnson goes inside Cloudflare's decision to let an extremist stronghold burn.
  • Everything You Say Can and Will Be Used Against You: Doug Bock Clark profiles Antifa’s secret weapon against far-right extremists.
  • Please, Silence Your Speech: Alice Gregory visits a startup that wants to neutralize your smartphone—and un-change the world.
  • The Best Hope for Civil Discourse on the Internet … Is on Reddit: Virginia Heffernan submits to Change My View.
  • 6 Tales of Censorship: What it's like to be suspended by Facebook, blocked by Trump, and more, in the subjects’ own words.

Zeynep Tufekci (@zeynep) is an associate professor at the University of North Carolina and an opinion writer for The New York Times.

Read more: https://www.wired.com/story/free-speech-issue-tech-turmoil-new-censorship/

Facebook for 6-Year-Olds? Welcome to Messenger Kids

Facebook says it built Messenger Kids, a new version of its popular communications app with parental controls, to help safeguard pre-teens who may be using unauthorized and unsupervised social-media accounts. Critics think Facebook is targeting children as young as 6 to hook them on its services.

Facebook’s goal is to “push down the age” at which it’s acceptable for kids to be on social media, says Josh Golin, executive director of the Campaign for a Commercial-Free Childhood. Golin says 11-to-12-year-olds who already have a Facebook account, probably because they lied about their age, might find the animated emojis and GIFs of Messenger Kids “too babyish,” and are unlikely to convert to the new app.

Facebook launched Messenger Kids for 6-to-12-year-olds in the US Monday, saying it took extraordinary care and precautions. The company said its 100-person team building apps for teens and kids consulted with parent groups, advocates, and childhood-development experts during the 18-month development process, and that the app reflects their concerns. Parents download Messenger Kids on their child’s device, then set up the account after verifying their identity by logging into Facebook. Since kids cannot be found in search, parents must initiate and respond to friend requests.

Facebook says Messenger Kids will not display ads, nor collect data on kids for advertising purposes. Kids’ accounts will not automatically be rolled into Facebook accounts once they turn 13.

Nonetheless, advocates focused on marketing to children expressed concerns. The company will collect the content of children’s messages, photos they send, what features they use on the app, and information about the device they use. Facebook says it will use this information to improve the app and will share the information “within the family of companies that are part of Facebook,” and outside companies that provide customer support, analysis, and technical infrastructure.

“It’s all that squishy language that we normally see in privacy policies,” says Golin. “It seems to give Facebook a lot of wiggle room to share this information.” He says Facebook should be clearer about the outsiders with which it may share data.

In response to questions from WIRED, a spokesperson for Facebook said: “It’s important to remember that Messenger Kids does not have ads and we don’t use the data for advertising. This provision about sharing information with vendors from the privacy policy is for things like providing infrastructure to deliver messages.”

Kristen Strader, campaign coordinator for the nonprofit group Public Citizen, says Facebook has proven it cannot be trusted with youth data in the past, pointing to a leaked Facebook report from May that promised advertisers the ability to track teen emotions, such as insecurity, in real-time. "Their response was just that they will not do similar experiments in the future," says Strader. At the time, advocacy groups asked for a copy of the report, but Facebook declined.

On Thursday, Sen. Richard Blumenthal and Sen. Ed Markey sent a long list of questions about the app's privacy controls to Mark Zuckerberg. "We remain concerned about where sensitive information collected through this app could end up and for what purpose it could be used," they wrote in a letter to the Facebook CEO.

Tech companies have made an increasingly aggressive push into targeting younger users, a strategy that began in earnest in 2015 when Google launched YouTube Kids, which includes advertising. Parents create an account for their child through Google’s Family Link, a product to help parents monitor screentime. Family Link is also used by parents who want to start an account for their kid on Google Home, which gets matched to their child’s voice.

“There is no way a company can really close its doors to kids anymore,” says Jeffrey Chester, executive director of the Center for Digital Democracy. “By openly commercializing young children’s digital media use, Google has lowered the bar,” he says, pointing to what toy company Mattel described as “an eight-figure deal” that it signed with YouTube in August.

Chester says services such as YouTube Kids and Messenger Kids are designed to capture the attention, and affinity, of the youngest users. “If they are weaned on Google and Facebook, you have socialized them to use your service when they become an adult,” he says. “On the one hand it’s diabolical and on the other hand it’s how corporations work.”

In past years, tech companies avoided targeting younger users because of the Children’s Online Privacy Protection Act (COPPA), a law that requires parental permission in order to collect data on children under 13. But, “the weakness of COPPA is that you can do a lot of things if you get parental permission,” says Golin. In the past six months, new apps marketed as parent helpers have launched. “What they’re saying is this is a great way for parents to have control; what they are getting is parental permission,” says Golin.

Several children-focused nonprofit groups endorsed Facebook’s approach, including ConnectSafely and Family Online Safety Institute (FOSI). Both groups have received funding from Facebook and each has at least one representative on Facebook’s 13-person advisory board for Messenger Kids. The board also includes two representatives from MediaSmarts, which is sponsored by Facebook.

A Facebook spokesperson says, “We have long-standing relationships with some of these groups and we’ve been transparent about those relationships.” The spokesperson says many backers of Facebook’s approach, including Kristelle Lavallee of the Center on Media and Child Health, and Dr. Kevin Clark of George Mason University’s Center for Digital Media Innovation and Diversity, do not receive support from Facebook.

UPDATE 3:25 PM: This story has been updated with information about the advisory board for Messenger Kids.

UPDATE 4:25 PM 12/7/2017: This story has been updated with information about Sen. Blumenthal's and Sen. Markey's letter to Mark Zuckerberg.

Read more: https://www.wired.com/story/facebook-for-6-year-olds-welcome-to-messenger-kids/

Get Rid of Capitalism? Millennials Are Ready to Talk About It

One of the hottest tickets in New York City this weekend was a discussion on whether to overthrow capitalism.

The first run of tickets to “Capitalism: A Debate” sold out in a day. So the organizers, a pair of magazines with clear ideological affiliations, the socialist Jacobin and the libertarian Reason, found a larger venue: Cooper Union’s 960-capacity Great Hall, the site of an 1860 antislavery speech by Abraham Lincoln. The event sold out once again, this time in eight hours.

The crowd waiting in a long line to get inside on Friday night was mostly young and mostly male. Asher Kaplan and Gabriel Gutierrez, both 24, hoped the event would be a real-life version of the humorous, anarchic political debates on social media. “So much of this stuff is a battle that’s waged online,” said Gutierrez, who identifies, along with Kaplan, as a “leftist,” if not quite a socialist.

These days, among young people, socialism is “both a political identity and a culture,” Kaplan said. And it looks increasingly attractive.

Young Americans have soured on capitalism. In a Harvard University poll conducted last year, 51 percent of 18-to-29-year-olds in the U.S. said they opposed capitalism; only 42 percent expressed support. Among Americans of all ages, by contrast, a Gallup survey last year found that 60 percent held positive views of capitalism.

A poll released last month found American millennials closely split on the question of what type of society they would prefer to live in: 44 percent picked a socialist country, 42 percent a capitalist one. The poll, conducted by YouGov and the Victims of Communism Memorial Foundation, found that 59 percent of Americans across all age groups preferred to live under capitalism.

“I’ve seen the failings of modern-day capitalism,” said Grayson SussmanSquires, an 18-year-old student at Wesleyan University who had turned up for the capitalism debate. To him and many of his peers, he said, the notion of a well-functioning capitalist order is something recounted only by older people. He was 10 when the financial crisis hit, old enough to watch his older siblings struggle to get jobs out of college. In high school, SussmanSquires said, he volunteered for the presidential campaign of Vermont Senator Bernie Sanders, a self-described socialist. “It spoke to me in a way nothing had before,” he said.

Although debate attendees leaned left, several expressed the desire to have their views challenged by the pro-capitalist side. “It’s very easy to exist in a social group where everyone has the same political vibe,” Kaplan said.

“I’m immersed in one side of the debate,” said Thomas Doscher, 26, a labor organizer who is studying for his LSATs. “I want to hear the other side.”

The debate pitted two socialist stalwarts, Jacobin founder Bhaskar Sunkara and New York University professor Vivek Chibber, against the defenders of capitalism, Katherine Mangu-Ward, Reason’s editor in chief, and Nick Gillespie, the editor in chief of Reason.com and Reason TV.

And it was the attempt to rebuff criticism of capitalism that mostly riled up the crowd.

Chibber argued that the problem with capitalism is the power it has over workers. With the weakening of U.S. labor unions, “we have a complete despotism of the employers,” he said, leading to stagnant wages. When Mangu-Ward countered that Americans aren’t coerced on the job, the crowd erupted in laughter. “Every morning you wake up and you have a decision about whether or not you’re going to go to work,” she insisted, and the audience laughed again.

Sunkara summed up his argument for socialism as a society that helped people tackle the necessities of life—food, housing, education, health care, childcare. “Wherever we end up, it won’t be a utopia,” he said. “It will still be a place where you might get your heart broken,” or feel lonely, or get indigestion.

Mangu-Ward replied: “Capitalism kind of [fixes] those things, actually.” There’s the app Tinder to find dates, and Pepto Bismol to cure your upset stomach. “Those are the gifts of capitalism,” she said.

The arguments stayed mostly abstract. Sunkara and Chibber insisted their idea of democratic socialism shouldn’t be confused with the communist dictatorships that killed millions of people in the 20th century. Mangu-Ward and Gillespie likewise insisted on defending a capitalist ideal, not the current, corrupt reality. “Neither Nick nor I are fans of big business,” she said. “We’re not fans of crony capitalism.”

Talking theory left little time to wrestle with concrete problems, such as inequality or climate change. That frustrated Nathaniel Granor, a 31-year-old from Brooklyn who said he was worried about millions of people being put out of work by automation such as driverless vehicles.

“It didn't touch on what I feel is the heart of the matter,” Granor said. Both capitalism and socialism might ideally be ways to improve the world, he concluded, but both can fall short when applied in the real world. 

    Read more: http://www.bloomberg.com/news/articles/2017-11-06/get-rid-of-capitalism-millennials-are-ready-to-talk-about-it

    Are smartphones really making our children sad?

    US psychologist Jean Twenge, who has claimed that social media is having a malign effect on the young, answers critics who accuse her of crying wolf

    Last week, the children’s commissioner, Anne Longfield, launched a campaign to help parents regulate internet and smartphone use at home. She suggested that the overconsumption of social media was a problem akin to that of junk-food diets. “None of us, as parents, would want our children to eat junk food all the time: double cheeseburger, chips, every day, every meal,” she said. “For those same reasons, we shouldn’t want our children to do the same with their online time.”

    A few days later, former GCHQ spy agency chief Robert Hannigan responded to the campaign. “The assumption that time online or in front of a screen is life wasted needs challenging. It is driven by fear,” he said. “The best thing we can do is to focus less on the time they spend on screens at home and more on the nature of the activity.”

    This exchange is just one more example of how children’s screentime has become an emotive, contested issue. Last December, more than 40 educationalists, psychologists and scientists signed a letter in the Guardian calling for action on children’s screen-based lifestyles. A few days later, another 40-odd academics described the fears as a “moral panic” and said that any guidelines needed to build on evidence rather than scaremongering.

    Faced with these conflicting expert views, how should concerned parents proceed? Into this maelstrom comes the American psychologist Jean Twenge, who has written a book entitled iGen: Why Today’s Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy and Completely Unprepared for Adulthood and What That Means for the Rest of Us.

    If the book’s title didn’t make her view clear enough, last weekend an excerpt was published in the American magazine the Atlantic with the emotive headline “Have smartphones destroyed a generation?” It quickly generated differing reactions that were played out on social media; these could be broadly characterised as praise from parents and criticism from scientists. In a phone interview and follow-up emails, Twenge explained her conclusions about the downsides of the connected world for teens, and answered some of her critics.

    The Atlantic excerpt from your book was headlined “Have smartphones destroyed a generation?” Is that an accurate reflection of what you think?
    Well, keep in mind that I didn’t write the headline. It’s obviously much more nuanced than that.

    So why did you write this book?
    I’ve been researching generations for a long time now, since I was an undergraduate, almost 25 years. The databases I draw from are large national surveys of high school and college students, and one of adults. In 2013-14 I started to see some really sudden changes and at first I thought maybe these were just blips, but the trends kept going.

    I’d never seen anything like it in all my years of looking at differences among generations. So I wondered what was going on.

    What were these sudden changes for teens?
    Loneliness and depressive symptoms started to go up, while happiness and life satisfaction started to go down. The other thing that I really noticed was the accelerated decline in seeing friends in person: it falls off a cliff. It’s an absolutely stunning pattern; I’d never seen anything like that. I really started to wonder, what is going on here? What happened around 2011-2012 [the survey data is a year or two behind] that would cause such sudden changes?

    And you concluded these changes were being brought about by increased time spent online?
    The high-school data detailed how much time teens spend online on social media and games and I noticed how that correlated with some of these indicators in terms of happiness, depression and so on.

    I was curious not just what the correlations were between these screen activities, mental health and wellbeing, but what were the links with non-screen activities, like spending time with friends in person, playing sports, going to religious services, doing homework, all these other things that teens do?

    And for happiness in particular, the pattern was so stark. Of the non-screen activities that were measured, they all correlated with greater happiness. All the screen activities correlated with lower happiness.

    You’ve called these post-millennials the iGeneration. What are their characteristics?
    I’m defining iGen as those born between 1995 and 2012; that latter date could change based on future data. I’m reasonably certain about 1995, given the sudden changes in the trends. It also happens that 1995 was the year the internet was commercialised [Amazon launched that year, Yahoo in 1994 and Google in 1996], so if you were born in that year you have not known a time without the internet.

    But the introduction of the smartphone, exemplified by the iPhone, which was launched in 2007, is key?
    There are a lot of differences: some are large, some are subtle, some are sudden and some had been building for a while. But if I had to identify what really characterises them, the first influence is the smartphone.

    iGen is the first generation to spend their entire adolescence with the smartphone. This has led to many ripple effects for their wellbeing, their social interactions and the way they think about the world.

    Psychology professor Jean Twenge. Photograph: Gregory Bull/AP

    Why are you convinced they are unhappy because of social media, rather than it being a case of the unhappy kids being heavier users of social media?
    That is very unlikely to be true because of very good research on that very question. There is one experiment and two longitudinal studies that show the arrow goes from social media to lower wellbeing and not the other way around. For example, an experiment where people gave up Facebook for a week and had better wellbeing than those who had not.

    The other thing to keep in mind is that if you are spending eight hours a day with a screen you have less time to spend interacting with friends and family in person and we know definitively from decades of research that spending time with other people is one of the keys to emotional wellbeing; if you’re doing that less, that’s a very bad sign.

    A professor at Oxford University tweeted that your work is “a non-systematic review of sloppy social science as a tool for lazy intergenerational shaming”. How do you respond?
    It is odd to equate documenting teens’ mental health issues with intergenerational shaming. I’m not shaming anyone and the data I analyse is from teens, not older people criticising them.

    This comment is especially strange because this researcher’s best-known paper, about what he calls the “Goldilocks theory”, shows the same thing I find: lower wellbeing after more hours of screen time. We’re basically replicating each other’s research across two different countries, which is usually considered a good thing. So I am confused.

    Your arguments also seem to have been drawn on by the conservative right as ammunition for claims that technology is leading to the moral degradation of the young. Are you comfortable about that?
    My analyses look at what young people are saying about themselves and how they are feeling, so I don’t think this idea of “older people love to whine about the young” is relevant. I didn’t look at what older people have to say about young people. I looked at what young people are saying about their own experiences and their own lives, compared to young people 10, 20, or 30 years ago.

    Nor is it fair or accurate to characterise this as youth-bashing. Teens are saying they are suffering, and documenting that should help them, not hurt them. I wrote the book because I wanted to give a voice to iGen and their experiences, through the 11 million who filled out national surveys, to the 200-plus who answered open-ended questions for me, to the 23 I talked to for up to two hours. It had absolutely nothing to do with older people and their complaints about youth.

    Many of us have a nagging feeling that social media is bad for our wellbeing, but we all suffer from a fear of missing out.
    Teens feel that very intensely, which is one reason why they are so addicted to their phones. Yet, ironically, the teens who spend more time on social media are actually more likely to report feeling left out.

    But is this confined to iGeners? One could go to a child’s birthday party where the parents are glued to their smartphones and not talking to each other too.
    It is important to consider that while this trend also affects adults, it is particularly worrisome for teens because their brain development is ongoing and adolescence is a crucial time for developing social skills.

    You say teens might know the right emoji but in real life might not know the right facial expression.
    There is very little research on that question. There is one study that looked at the effects of screens on social skills among 11- to 12-year-olds, half of whom used screens at their normal level and half went to a five-day screen-free camp.

    Those who attended the camp improved their social skills; reading emotions on faces was what they measured. That makes sense: that’s the social skill you would expect to suffer if you weren’t getting much in-person social interaction.

    So is it up to regulators or parents to improve the situation? Leaving this problem for parents to fix is a big challenge.
    Yes it is. I have three kids and my oldest is 10, but in her class about half have a phone, so many of them are on social media already. Parents have a tough job, because there are temptations on the screen constantly.

    What advice would you give parents?
    Put off getting your child a phone for as long as possible and, when you do, start with one that doesn’t have internet access so they don’t have the internet in their pocket all the time.

    But when your child says, “But all my friends have got one,” how do you reply?
    Maybe with my parents’ line: “If your friends all jumped in the lake, would you do it too?” Although at that age the answer is usually yes, which I understand. But you can do social media on a desktop computer for a limited time each day. When we looked at the data, we found that an hour a day of electronic device use doesn’t have any negative effects on mental health; two hours a day or more is when you get the problems.

    The majority of teens are on screens a lot more than that. So if they want to use Instagram, Snapchat or Facebook to keep up with their friends’ activities, they can do that from a desktop computer.

    That sounds hard to enforce.
    We need to be more understanding of the effects of smartphones. In many ways, parents are worried about the wrong things: they’re worried about their kids driving and going out. They don’t worry about their kids sitting by themselves in a room with their phone, and they should.

    Lots of social media features such as notifications or Snapchat’s Snapstreak feature are engineered to keep us glued to our phones. Should these types of features be outlawed?
    Oh man. Parents can put an app [such as Kidslox or Screentime] on their kid’s phone to limit the amount of time they spend on it. Do that right away. In terms of the bigger solutions, I think that’s above my pay grade to figure out.

    You’ve been accused by another psychologist of cherry-picking your data. Of ignoring, say, studies that suggest active social media use is associated with positive outcomes such as resilience. Did you collect data to fit a theory?
    It’s impossible to judge that claim; she does not provide citations to these studies. I found a few studies finding no effects or positive effects, but they were all older, before smartphones were on the scene. She says in order to prove smartphones are responsible for these trends we need a large study randomly assigning teens to not use smartphones or use them. If we wait for this kind of study, we will wait for ever; that type of study is just about impossible to conduct.

    She concludes by saying: “My suspicion is that the kids are gonna be OK.” However, it is not OK that 50% more teens suffer from major depression now versus just six years ago and three times as many girls aged 12 to 14 take their own lives. It is not OK that more teens say that they are lonely and feel hopeless. It is not OK that teens aren’t seeing their friends in person as much. If we twiddle our thumbs waiting for the perfect experiment, we are taking a big risk and I for one am not willing to do that.

    Are you expecting anyone from Silicon Valley to say: “How can we help?”
    No, but what I think is interesting is many tech-connected people in Silicon Valley restrict their own children’s screen use, so they know. They’re living off of it but they know its effects. It indicates that pointing out the effects of smartphones doesn’t make you a Luddite.

    iGen: Why Today’s Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy and Completely Unprepared for Adulthood and What That Means for the Rest of Us by Jean Twenge is published by Simon & Schuster US ($27) on 22 August

    Read more: https://www.theguardian.com/technology/2017/aug/13/are-smartphones-really-making-our-children-sad