The world is watching: How Florida shooting made U.S. gun control a global conversation

AR-15 "Sport" rifles on sale at deep discounts in an Arizona store.
Image: John Moore/Getty Images

When you move to America from a country with more effective gun control laws, one of the first things you learn is how hard it is to talk to Americans — even on the sympathetic side of the political divide — about the gun issue. 

It was particularly difficult when I arrived on these shores in 1996, direct from living in Scotland during its (and Britain’s) worst-ever school shooting. In the tiny town of Dunblane, a 43-year-old former shopkeeper and scoutmaster brought four handguns to a school gymnasium full of five-year-olds. He shot and killed 16 of them and their teacher, then turned his handgun on himself.

After Dunblane, the British plunged into a state of collective mourning that was at least as widespread as the better-known grieving process for Princess Diana the following year. (Americans don’t always believe that part, to which I usually say: the kids were five, for crying out loud. Five.)

In a country where nobody would dream of pulling public funding for studies into gun violence, the solution was amazingly rational and bipartisan. After a year, and an official inquiry into Dunblane, the Conservative government passed a sweeping piece of legislation restricting handguns. Then after Labour won the 1997 election, it passed another. Britain hasn’t seen a school shooting since. (Same with Australia, which also passed major gun control legislation in 1996). 

But trying to talk about all that in America over the last two decades, I’ve learned from experience, has been like touching the proverbial third rail: only tourists would be dumb enough to try it. Even gun control advocates now think they’re dealing with an intractable, generational problem. Many tell me that we need to tackle mental health services or gun fetishization in Hollywood movies first. The legislation route couldn’t possibly be that easy, they say.

But what if it is that easy? What if the rest of the world also loves Hollywood action movies and has mental health problems, but manages to have fewer shootings simply because it has fewer guns available? What if the rest of the world has been shouting at America for years that gun control is less intractable than you think — you just have to vote in large numbers for the politicians that favor it, and keep doing so at every election? 

If that’s the case, then perhaps some powerful, leveling international marketplace of ideas could help the U.S. see what everyone else has already seen. Something like social media. 

In one sense, Wednesday’s massacre in Parkland, Florida — a school shooting as shocking and senseless as Dunblane — was evidence that America was further away from a gun control solution than ever. In 1996, buying an AR-15 assault rifle was illegal under federal law. Now, in Florida and many other states, a 19-year-old can walk into any gun store and walk out with this military-grade weapon of mass destruction. 

Yet anecdotally, I have noticed one glimmer of hope. Since the last American gun massacre that got everyone talking, there has been a small shift in the online conversation. It has become a little more global. The students of Parkland have been broadcasting to the world via social media, and the world is taking notice. 

I’m not suggesting some kind of slam-dunk situation where every American on Twitter and Facebook and Snapchat has an epiphany about gun control because they’re more frequently interacting with people from other nations with different laws. 

But I am saying it’s noticeably harder for pro-gun accounts to spread lies about the situation in other countries without people from those countries chiming in. 

Meanwhile, there is a mountain of evidence that Russian bots and troll accounts are attempting to hijack the online conversation using the same playbook from the 2016 elections — manufacture conflict to destabilize American discourse. That means taking the most trollishly pro-NRA position they can think of, in a bid to counteract the large majority of Americans who want sensible gun control. 

So the voices from other countries are chiming in just in time. If anything, we need more of them to balance out cynical foreign influence in a pro-gun direction. 

How effective gun control can happen

Twenty years of trying to have this debate in the U.S. have worn me down. As you might expect, I’ve been on the receiving end of a lot of Second Amendment-splaining from the pro-gun lobby. (Yep, I’m very familiar with the two centuries of debate over the militia clause, thanks.) I’ve been told I didn’t understand the power of the NRA (which, again, I’m quite familiar with: the organization supported sensible gun restrictions until it was radicalized in 1977).

I’ve heard every argument you could imagine: the notion that British police must now be lording it over the poor defenseless population; the blinkered insistence that there must have been a rise in crime with illegal guns and legal knives now all the good people with guns have been taken out of the equation. (Violent crime is still too high in the UK, but it is a fraction of America’s total — and has declined significantly since 1996.) 

I no longer have the dream that a UK-Australia-style handgun ban would work here. There are as many as 300 million firearms in private hands, according to a 2012 Congressional estimate; even though most of them are concentrated in the hands of a small percentage of owners, it’s simply impractical to talk about removing a significant percentage of them from the equation. 

But if anything, I’m more aware of creative legal solutions: laws that require gun insurance the way we require car insurance, or tax ammunition, or hold manufacturers responsible for gun deaths. I’ve seen my adopted state of California implement some of the toughest gun laws in the nation, laws that just went into effect. The fight to prevent future massacres is just getting started.

And any time you want to talk about how it can happen, the rest of a shrinking world is listening — and ready to talk. 

Read more: https://mashable.com/2018/02/17/gun-control-social-media/

Inside the Two Years That Shook Facebook and the World

One day in late February of 2016, Mark Zuckerberg sent a memo to all of Facebook’s employees to address some troubling behavior in the ranks. His message pertained to some walls at the company’s Menlo Park headquarters where staffers are encouraged to scribble notes and signatures. On at least a couple of occasions, someone had crossed out the words “Black Lives Matter” and replaced them with “All Lives Matter.” Zuckerberg wanted whoever was responsible to cut it out.

“ ‘Black Lives Matter’ doesn’t mean other lives don’t,” he wrote. “We’ve never had rules around what people can write on our walls,” the memo went on. But “crossing out something means silencing speech, or that one person’s speech is more important than another’s.” The defacement, he said, was being investigated.

All around the country at about this time, debates about race and politics were becoming increasingly raw. Donald Trump had just won the South Carolina primary, lashed out at the Pope over immigration, and earned the enthusiastic support of David Duke. Hillary Clinton had just defeated Bernie Sanders in Nevada, only to have an activist from Black Lives Matter interrupt a speech of hers to protest racially charged statements she’d made two decades before. And on Facebook, a popular group called Blacktivist was gaining traction by blasting out messages like “American economy and power were built on forced migration and torture.”

So when Zuckerberg’s admonition circulated, a young contract employee named Benjamin Fearnow decided it might be newsworthy. He took a screenshot on his personal laptop and sent the image to a friend named Michael Nuñez, who worked at the tech-news site Gizmodo. Nuñez promptly published a brief story about Zuckerberg’s memo.

A week later, Fearnow came across something else he thought Nuñez might like to publish. In another internal communication, Facebook had invited its employees to submit potential questions to ask Zuckerberg at an all-hands meeting. One of the most up-voted questions that week was “What responsibility does Facebook have to help prevent President Trump in 2017?” Fearnow took another screenshot, this time with his phone.

Fearnow, a recent graduate of the Columbia Journalism School, worked in Facebook’s New York office on something called Trending Topics, a feed of popular news subjects that popped up when people opened Facebook. The feed was generated by an algorithm but moderated by a team of about 25 people with backgrounds in journalism. If the word “Trump” was trending, as it often was, they used their news judgment to identify which bit of news about the candidate was most important. If The Onion or a hoax site published a spoof that went viral, they had to keep that out. If something like a mass shooting happened, and Facebook’s algorithm was slow to pick up on it, they would inject a story about it into the feed.

From the March 2018 issue of WIRED.

Facebook prides itself on being a place where people love to work. But Fearnow and his team weren’t the happiest lot. They were contract employees hired through a company called BCforward, and every day was full of little reminders that they weren’t really part of Facebook. Plus, the young journalists knew their jobs were doomed from the start. Tech companies, for the most part, prefer to have as little as possible done by humans—because, it’s often said, they don’t scale. You can’t hire a billion of them, and they prove meddlesome in ways that algorithms don’t. They need bathroom breaks and health insurance, and the most annoying of them sometimes talk to the press. Eventually, everyone assumed, Facebook’s algorithms would be good enough to run the whole project, and the people on Fearnow’s team—who served partly to train those algorithms—would be expendable.

The day after Fearnow took that second screenshot was a Friday. When he woke up after sleeping in, he noticed that he had about 30 meeting notifications from Facebook on his phone. When he replied to say it was his day off, he recalls, he was nonetheless asked to be available in 10 minutes. Soon he was on a videoconference with three Facebook employees, including Sonya Ahuja, the company’s head of investigations. According to his recounting of the meeting, she asked him if he had been in touch with Nuñez. He denied that he had been. Then she told him that she had their messages on Gchat, which Fearnow had assumed weren’t accessible to Facebook. He was fired. “Please shut your laptop and don’t reopen it,” she instructed him.

That same day, Ahuja had another conversation with a second employee at Trending Topics named Ryan Villarreal. Several years before, he and Fearnow had shared an apartment with Nuñez. Villarreal said he hadn’t taken any screenshots, and he certainly hadn’t leaked them. But he had clicked “like” on the story about Black Lives Matter, and he was friends with Nuñez on Facebook. “Do you think leaks are bad?” Ahuja demanded to know, according to Villarreal. He was fired too. The last he heard from his employer was in a letter from BCforward. The company had given him $15 to cover expenses, and it wanted the money back.

The firing of Fearnow and Villarreal set the Trending Topics team on edge—and Nuñez kept digging for dirt. He soon published a story about the internal poll showing Facebookers’ interest in fending off Trump. Then, in early May, he published an article based on conversations with yet a third former Trending Topics employee, under the blaring headline “Former Facebook Workers: We Routinely Suppressed Conservative News.” The piece suggested that Facebook’s Trending team worked like a Fox News fever dream, with a bunch of biased curators “injecting” liberal stories and “blacklisting” conservative ones. Within a few hours the piece popped onto half a dozen highly trafficked tech and politics websites, including Drudge Report and Breitbart News.

The post went viral, but the ensuing battle over Trending Topics did more than just dominate a few news cycles. In ways that are only fully visible now, it set the stage for the most tumultuous two years of Facebook’s existence—triggering a chain of events that would distract and confuse the company while larger disasters began to engulf it.

This is the story of those two years, as they played out inside and around the company. WIRED spoke with 51 current or former Facebook employees for this article, many of whom did not want their names used, for reasons anyone familiar with the story of Fearnow and Villarreal would surely understand. (One current employee asked that a WIRED reporter turn off his phone so the company would have a harder time tracking whether it had been near the phones of anyone from Facebook.)

The stories varied, but most people told the same basic tale: of a company, and a CEO, whose techno-optimism has been crushed as they’ve learned the myriad ways their platform can be used for ill. Of an election that shocked Facebook, even as its fallout put the company under siege. Of a series of external threats, defensive internal calculations, and false starts that delayed Facebook’s reckoning with its impact on global affairs and its users’ minds. And—in the tale’s final chapters—of the company’s earnest attempt to redeem itself.

In that saga, Fearnow plays one of those obscure but crucial roles that history occasionally hands out. He’s the Franz Ferdinand of Facebook—or maybe he’s more like the archduke’s hapless young assassin. Either way, in the rolling disaster that has enveloped Facebook since early 2016, Fearnow’s leaks probably ought to go down as the screenshots heard round the world.

II

By now, the story of Facebook’s all-consuming growth is practically the creation myth of our information era. What began as a way to connect with your friends at Harvard became a way to connect with people at other elite schools, then at all schools, and then everywhere. After that, your Facebook login became a way to log on to other internet sites. Its Messenger app started competing with email and texting. It became the place where you told people you were safe after an earthquake. In some countries like the Philippines, it effectively is the internet.

The furious energy of this big bang emanated, in large part, from a brilliant and simple insight. Humans are social animals. But the internet is a cesspool. That scares people away from identifying themselves and putting personal details online. Solve that problem—make people feel safe to post—and they will share obsessively. Make the resulting database of privately shared information and personal connections available to advertisers, and that platform will become one of the most important media technologies of the early 21st century.

But as powerful as that original insight was, Facebook’s expansion has also been driven by sheer brawn. Zuckerberg has been a determined, even ruthless, steward of the company’s manifest destiny, with an uncanny knack for placing the right bets. In the company’s early days, “move fast and break things” wasn’t just a piece of advice to his developers; it was a philosophy that served to resolve countless delicate trade-offs—many of them involving user privacy—in ways that best favored the platform’s growth. And when it comes to competitors, Zuckerberg has been relentless in either acquiring or sinking any challengers that seem to have the wind at their backs.

Facebook’s Reckoning

Two years that forced the platform to change

by Blanca Myers

March 2016

Facebook suspends Benjamin Fearnow, a journalist-curator for the platform’s Trending Topics feed, after he leaks to Gizmodo.

May 2016

Gizmodo reports that Trending Topics “routinely suppressed conservative news.” The story sends Facebook scrambling.

July 2016

Rupert Murdoch tells Zuckerberg that Facebook is wreaking havoc on the news industry and threatens to cause trouble.

August 2016

Facebook cuts loose all of its Trending Topics journalists, ceding authority over the feed to engineers in Seattle.

November 2016

Donald Trump wins. Zuckerberg says it’s “pretty crazy” to think fake news on Facebook helped tip the election.

December 2016

Facebook declares war on fake news, hires CNN alum Campbell Brown to shepherd relations with the publishing industry.

September 2017

Facebook announces that a Russian group paid $100,000 for roughly 3,000 ads aimed at US voters.

October 2017

Researcher Jonathan Albright reveals that posts from six Russian propaganda accounts were shared 340 million times.

November 2017

Facebook general counsel Colin Stretch gets pummeled during congressional Intelligence Committee hearings.

January 2018

Facebook begins announcing major changes, aimed to ensure that time on the platform will be “time well spent.”

In fact, it was in besting just such a rival that Facebook came to dominate how we discover and consume news. Back in 2012, the most exciting social network for distributing news online wasn’t Facebook, it was Twitter. The latter’s 140-character posts accelerated the speed at which news could spread, allowing its influence in the news industry to grow much faster than Facebook’s. “Twitter was this massive, massive threat,” says a former Facebook executive heavily involved in the decision-making at the time.

So Zuckerberg pursued a strategy he has often deployed against competitors he cannot buy: He copied, then crushed. He adjusted Facebook’s News Feed to fully incorporate news (despite its name, the feed was originally tilted toward personal news) and adjusted the product so that it showed author bylines and headlines. Then Facebook’s emissaries fanned out to talk with journalists and explain how to best reach readers through the platform. By the end of 2013, Facebook had doubled its share of traffic to news sites and had started to push Twitter into a decline. By the middle of 2015, it had surpassed Google as the leader in referring readers to publisher sites and was now referring 13 times as many readers to news publishers as Twitter. That year, Facebook launched Instant Articles, offering publishers the chance to publish directly on the platform. Posts would load faster and look sharper if they agreed, but the publishers would give up an element of control over the content. The publishing industry, which had been reeling for years, largely assented. Facebook now effectively owned the news. “If you could reproduce Twitter inside of Facebook, why would you go to Twitter?” says the former executive. “What they are doing to Snapchat now, they did to Twitter back then.”

It appears that Facebook did not, however, carefully think through the implications of becoming the dominant force in the news industry. Everyone in management cared about quality and accuracy, and they had set up rules, for example, to eliminate pornography and protect copyright. But Facebook hired few journalists and spent little time discussing the big questions that bedevil the media industry. What is fair? What is a fact? How do you signal the difference between news, analysis, satire, and opinion? Facebook has long seemed to think it has immunity from those debates because it is just a technology company—one that has built a “platform for all ideas.”

This notion that Facebook is an open, neutral platform is almost like a religious tenet inside the company. When new recruits come in, they are treated to an orientation lecture by Chris Cox, the company’s chief product officer, who tells them Facebook is an entirely new communications platform for the 21st century, as the telephone was for the 20th. But if anyone inside Facebook is unconvinced by religion, there is also Section 230 of the 1996 Communications Decency Act to recommend the idea. This is the section of US law that shelters internet intermediaries from liability for the content their users post. If Facebook were to start creating or editing content on its platform, it would risk losing that immunity—and it’s hard to imagine how Facebook could exist if it were liable for the many billion pieces of content a day that users post on its site.

And so, because of the company’s self-image, as well as its fear of regulation, Facebook tried never to favor one kind of news content over another. But neutrality is a choice in itself. For instance, Facebook decided to present every piece of content that appeared on News Feed—whether it was your dog pictures or a news story—in roughly the same way. This meant that all news stories looked roughly the same as each other, too, whether they were investigations in The Washington Post, gossip in the New York Post, or flat-out lies in the Denver Guardian, an entirely bogus newspaper. Facebook argued that this democratized information. You saw what your friends wanted you to see, not what some editor in a Times Square tower chose. But it’s hard to argue that this wasn’t an editorial decision. It may be one of the biggest ever made.

In any case, Facebook’s move into news set off yet another explosion of ways that people could connect. Now Facebook was the place where publications could connect with their readers—and also where Macedonian teenagers could connect with voters in America, and operatives in Saint Petersburg could connect with audiences of their own choosing in a way that no one at the company had ever seen before.

III

In February of 2016, just as the Trending Topics fiasco was building up steam, Roger McNamee became one of the first Facebook insiders to notice strange things happening on the platform. McNamee was an early investor in Facebook who had mentored Zuckerberg through two crucial decisions: to turn down Yahoo’s offer of $1 billion to acquire Facebook in 2006; and to hire a Google executive named Sheryl Sandberg in 2008 to help find a business model. McNamee was no longer in touch with Zuckerberg much, but he was still an investor, and that month he started seeing things related to the Bernie Sanders campaign that worried him. “I’m observing memes ostensibly coming out of a Facebook group associated with the Sanders campaign that couldn’t possibly have been from the Sanders campaign,” he recalls, “and yet they were organized and spreading in such a way that suggested somebody had a budget. And I’m sitting there thinking, ‘That’s really weird. I mean, that’s not good.’ ”

But McNamee didn’t say anything to anyone at Facebook—at least not yet. And the company itself was not picking up on any such worrying signals, save for one blip on its radar: In early 2016, its security team noticed an uptick in Russian actors attempting to steal the credentials of journalists and public figures. Facebook reported this to the FBI. But the company says it never heard back from the government, and that was that.

Instead, Facebook spent the spring of 2016 very busily fending off accusations that it might influence the elections in a completely different way. When Gizmodo published its story about political bias on the Trending Topics team in May, the article went off like a bomb in Menlo Park. It quickly reached millions of readers and, in a delicious irony, appeared in the Trending Topics module itself. But the bad press wasn’t what really rattled Facebook—it was the letter from John Thune, a Republican US senator from South Dakota, that followed the story’s publication. Thune chairs the Senate Commerce Committee, which in turn oversees the Federal Trade Commission, an agency that has been especially active in investigating Facebook. The senator wanted Facebook’s answers to the allegations of bias, and he wanted them promptly.

The Thune letter put Facebook on high alert. The company promptly dispatched senior Washington staffers to meet with Thune’s team. Then it sent him a 12-page single-spaced letter explaining that it had conducted a thorough review of Trending Topics and determined that the allegations in the Gizmodo story were largely false.

Facebook decided, too, that it had to extend an olive branch to the entire American right wing, much of which was raging about the company’s supposed perfidy. And so, just over a week after the story ran, Facebook scrambled to invite a group of 17 prominent Republicans out to Menlo Park. The list included television hosts, radio stars, think tankers, and an adviser to the Trump campaign. The point was partly to get feedback. But more than that, the company wanted to make a show of apologizing for its sins, lifting up the back of its shirt, and asking for the lash.

According to a Facebook employee involved in planning the meeting, part of the goal was to bring in a group of conservatives who were certain to fight with one another. They made sure to have libertarians who wouldn’t want to regulate the platform and partisans who would. Another goal, according to the employee, was to make sure the attendees were “bored to death” by a technical presentation after Zuckerberg and Sandberg had addressed the group.

The power went out, and the room got uncomfortably hot. But otherwise the meeting went according to plan. The guests did indeed fight, and they failed to unify in a way that was either threatening or coherent. Some wanted the company to set hiring quotas for conservative employees; others thought that idea was nuts. As often happens when outsiders meet with Facebook, people used the time to try to figure out how they could get more followers for their own pages.

Afterward, Glenn Beck, one of the invitees, wrote an essay about the meeting, praising Zuckerberg. “I asked him if Facebook, now or in the future, would be an open platform for the sharing of all ideas or a curator of content,” Beck wrote. “Without hesitation, with clarity and boldness, Mark said there is only one Facebook and one path forward: ‘We are an open platform.’”

Inside Facebook itself, the backlash around Trending Topics did inspire some genuine soul-searching. But none of it got very far. A quiet internal project, codenamed Hudson, cropped up around this time to determine, according to someone who worked on it, whether News Feed should be modified to better deal with some of the most complex issues facing the product. Does it favor posts that make people angry? Does it favor simple or even false ideas over complex and true ones? Those are hard questions, and the company didn’t have answers to them yet. Ultimately, in late June, Facebook announced a modest change: The algorithm would be revised to favor posts from friends and family. At the same time, Adam Mosseri, Facebook’s News Feed boss, posted a manifesto titled “Building a Better News Feed for You.” People inside Facebook spoke of it as a document roughly resembling the Magna Carta; the company had never spoken before about how News Feed really worked. To outsiders, though, the document came across as boilerplate. It said roughly what you’d expect: that the company was opposed to clickbait but that it wasn’t in the business of favoring certain kinds of viewpoints.

The most important consequence of the Trending Topics controversy, according to nearly a dozen former and current employees, was that Facebook became wary of doing anything that might look like stifling conservative news. It had burned its fingers once and didn’t want to do it again. And so a summer of deeply partisan rancor and calumny began with Facebook eager to stay out of the fray.

IV

Shortly after Mosseri published his guide to News Feed values, Zuckerberg traveled to Sun Valley, Idaho, for an annual conference hosted by billionaire Herb Allen, where moguls in short sleeves and sunglasses cavort and make plans to buy each other’s companies. But Rupert Murdoch broke the mood in a meeting that took place inside his villa. According to numerous accounts of the conversation, Murdoch and Robert Thomson, the CEO of News Corp, explained to Zuckerberg that they had long been unhappy with Facebook and Google. The two tech giants had taken nearly the entire digital ad market and become an existential threat to serious journalism. According to people familiar with the conversation, the two News Corp leaders accused Facebook of making dramatic changes to its core algorithm without adequately consulting its media partners, wreaking havoc according to Zuckerberg’s whims. If Facebook didn’t start offering a better deal to the publishing industry, Thomson and Murdoch conveyed in stark terms, Zuckerberg could expect News Corp executives to become much more public in their denunciations and much more open in their lobbying. They had helped to make things very hard for Google in Europe. And they could do the same for Facebook in the US.

Facebook thought that News Corp was threatening to push for a government antitrust investigation or maybe an inquiry into whether the company deserved its protection from liability as a neutral platform. Inside Facebook, executives believed Murdoch might use his papers and TV stations to amplify critiques of the company. News Corp says that was not at all the case; the company threatened to deploy executives, but not its journalists.

Zuckerberg had reason to take the meeting especially seriously, according to a former Facebook executive, because he had firsthand knowledge of Murdoch’s skill in the dark arts. Back in 2007, Facebook had come under criticism from 49 state attorneys general for failing to protect young Facebook users from sexual predators and inappropriate content. Concerned parents had written to Connecticut attorney general Richard Blumenthal, who opened an investigation, and to The New York Times, which published a story. But according to a former Facebook executive in a position to know, the company believed that many of the Facebook accounts and the predatory behavior the letters referenced were fakes, traceable to News Corp lawyers or others working for Murdoch, who owned Facebook’s biggest competitor, MySpace. “We traced the creation of the Facebook accounts to IP addresses at the Apple store a block away from the MySpace offices in Santa Monica,” the executive says. “Facebook then traced interactions with those accounts to News Corp lawyers. When it comes to Facebook, Murdoch has been playing every angle he can for a long time.” (Both News Corp and its spinoff 21st Century Fox declined to comment.)

When Zuckerberg returned from Sun Valley, he told his employees that things had to change. They still weren’t in the news business, but they had to make sure there would be a news business. And they had to communicate better. One of those who got a new to-do list was Andrew Anker, a product manager who’d arrived at Facebook in 2015 after a career in journalism (including a long stint at WIRED in the ’90s). One of his jobs was to help the company think through how publishers could make money on the platform. Shortly after Sun Valley, Anker met with Zuckerberg and asked to hire 60 new people to work on partnerships with the news industry. Before the meeting ended, the request was approved.

But having more people out talking to publishers just drove home how hard it would be to resolve the financial problems Murdoch wanted fixed. News outfits were spending millions to produce stories that Facebook was benefiting from, and Facebook, they felt, was giving too little back in return. Instant Articles, in particular, struck them as a Trojan horse. Publishers complained that they could make more money from stories that loaded on their own mobile web pages than on Facebook Instant. (They often did so, it turned out, in ways that short-changed advertisers, by sneaking in ads that readers were unlikely to see. Facebook didn’t let them get away with that.) Another seemingly irreconcilable difference: Outlets like Murdoch’s Wall Street Journal depended on paywalls to make money, but Instant Articles banned paywalls; Zuckerberg disapproved of them. After all, he would often ask, how exactly do walls and toll booths make the world more open and connected?

The conversations often ended at an impasse, but Facebook was at least becoming more attentive. This newfound appreciation for the concerns of journalists did not, however, extend to the journalists on Facebook’s own Trending Topics team. In late August, everyone on the team was told that their jobs were being eliminated. Simultaneously, authority over the algorithm shifted to a team of engineers based in Seattle. Very quickly the module started to surface lies and fiction. A headline days later read, “Fox News Exposes Traitor Megyn Kelly, Kicks Her Out For Backing Hillary.”

V

While Facebook grappled internally with what it was becoming—a company that dominated media but didn’t want to be a media company—Donald Trump’s presidential campaign staff faced no such confusion. To them Facebook’s use was obvious. Twitter was a tool for communicating directly with supporters and yelling at the media. Facebook was the way to run the most effective direct-marketing political operation in history.

In the summer of 2016, at the top of the general election campaign, Trump’s digital operation might have seemed to be at a major disadvantage. After all, Hillary Clinton’s team was flush with elite talent and got advice from Eric Schmidt, known for running Google. Trump’s was run by Brad Parscale, known for setting up the Eric Trump Foundation’s web page. Trump’s social media director was his former caddie. But in 2016, it turned out, you didn’t need digital experience to run a presidential campaign; you just needed a knack for Facebook.

Over the course of the summer, Trump’s team turned the platform into one of its primary vehicles for fund-raising. The campaign uploaded its voter files—the names, addresses, voting history, and any other information it had on potential voters—to Facebook. Then, using a tool called Lookalike Audiences, Facebook identified the broad characteristics of, say, people who had signed up for Trump newsletters or bought Trump hats. That allowed the campaign to send ads to people with similar traits. Trump would post simple messages like “This election is being rigged by the media pushing false and unsubstantiated charges, and outright lies, in order to elect Crooked Hillary!” that got hundreds of thousands of likes, comments, and shares. The money rolled in. Clinton’s wonkier messages, meanwhile, resonated less on the platform. Inside Facebook, almost everyone on the executive team wanted Clinton to win, but they knew that Trump was using the platform better. If he was the candidate for Facebook, she was the candidate for LinkedIn.

Trump’s candidacy also proved to be a wonderful tool for a new class of scammers pumping out massively viral and entirely fake stories. Through trial and error, they learned that memes praising the former host of The Apprentice got many more readers than ones praising the former secretary of state. A website called Ending the Fed proclaimed that the Pope had endorsed Trump and got almost a million comments, shares, and reactions on Facebook, according to an analysis by BuzzFeed. Other stories asserted that the former first lady had quietly been selling weapons to ISIS, and that an FBI agent suspected of leaking Clinton’s emails was found dead. Some of the posts came from hyperpartisan Americans. Some came from overseas content mills that were in it purely for the ad dollars. By the end of the campaign, the top fake stories on the platform were generating more engagement than the top real ones.

Even current Facebookers acknowledge now that they missed what should have been obvious signs of people misusing the platform. And looking back, it’s easy to put together a long list of possible explanations for the myopia in Menlo Park about fake news. Management was gun-shy because of the Trending Topics fiasco; taking action against partisan disinformation—or even identifying it as such—might have been seen as another act of political favoritism. Facebook also sold ads against the stories, and sensational garbage was good at pulling people into the platform. Employees’ bonuses can be based largely on whether Facebook hits certain growth and revenue targets, which gives people an extra incentive not to worry too much about things that are otherwise good for engagement. And then there was the ever-present issue of Section 230 of the 1996 Communications Decency Act. If the company started taking responsibility for fake news, it might have to take responsibility for a lot more. Facebook had plenty of reasons to keep its head in the sand.

Roger McNamee, however, watched carefully as the nonsense spread. First there were the fake stories pushing Bernie Sanders, then he saw ones supporting Brexit, and then helping Trump. By the end of the summer, he had resolved to write an op-ed about the problems on the platform. But he never ran it. “The idea was, look, these are my friends. I really want to help them.” And so on a Sunday evening, nine days before the 2016 election, McNamee emailed a 1,000-word letter to Sandberg and Zuckerberg. “I am really sad about Facebook,” it began. “I got involved with the company more than a decade ago and have taken great pride and joy in the company’s success … until the past few months. Now I am disappointed. I am embarrassed. I am ashamed.”

Eddie Guy

VI

It’s not easy to recognize that the machine you’ve built to bring people together is being used to tear them apart, and Mark Zuckerberg’s initial reaction to Trump’s victory, and Facebook’s possible role in it, was one of peevish dismissal. Executives remember panic the first few days, with the leadership team scurrying back and forth between Zuckerberg’s conference room (called the Aquarium) and Sandberg’s (called Only Good News), trying to figure out what had just happened and whether they would be blamed. Then, at a conference two days after the election, Zuckerberg argued that filter bubbles are worse offline than on Facebook and that social media hardly influences how people vote. “The idea that fake news on Facebook—of which, you know, it’s a very small amount of the content—influenced the election in any way, I think, is a pretty crazy idea,” he said.

Zuckerberg declined to be interviewed for this article, but people who know him well say he likes to form his opinions from data. And in this case he wasn’t without it. Before the interview, his staff had worked up a back-of-the-envelope calculation showing that fake news was a tiny percentage of the total amount of election-related content on the platform. But the analysis was just an aggregate look at the percentage of clearly fake stories that appeared across all of Facebook. It didn’t measure their influence or the way fake news affected specific groups. It was a number, but not a particularly meaningful one.

Zuckerberg’s comments did not go over well, even inside Facebook. They seemed clueless and self-absorbed. “What he said was incredibly damaging,” a former executive told WIRED. “We had to really flip him on that. We realized that if we didn’t, the company was going to start heading down this pariah path that Uber was on.”

A week after his “pretty crazy” comment, Zuckerberg flew to Peru to give a talk to world leaders about the ways that connecting more people to the internet, and to Facebook, could reduce global poverty. Right after he landed in Lima, he posted something of a mea culpa. He explained that Facebook did take misinformation seriously, and he presented a vague seven-point plan to tackle it. When a professor at the New School named David Carroll saw Zuckerberg’s post, he took a screenshot. Alongside it on Carroll’s feed ran a headline from a fake CNN with an image of a distressed Donald Trump and the text “DISQUALIFIED; He’s GONE!”

At the conference in Peru, Zuckerberg met with a man who knows a few things about politics: Barack Obama. Media reports portrayed the encounter as one in which the lame-duck president pulled Zuckerberg aside and gave him a “wake-up call” about fake news. But according to someone who was with them in Lima, it was Zuckerberg who called the meeting, and his agenda was merely to convince Obama that, yes, Facebook was serious about dealing with the problem. He truly wanted to thwart misinformation, he said, but it wasn’t an easy issue to solve.

One employee compared Zuckerberg to Lennie in Of Mice and Men—a man with no understanding of his own strength.

Meanwhile, at Facebook, the gears churned. For the first time, insiders really began to question whether they had too much power. One employee told WIRED that, watching Zuckerberg, he was reminded of Lennie in Of Mice and Men, the farm-worker with no understanding of his own strength.

Very soon after the election, a team of employees started working on something called the News Feed Integrity Task Force, inspired by a sense, one of them told WIRED, that hyperpartisan misinformation was “a disease that’s creeping into the entire platform.” The group, which included Mosseri and Anker, began to meet every day, using whiteboards to outline different ways they could respond to the fake-news crisis. Within a few weeks the company announced it would cut off advertising revenue for ad farms and make it easier for users to flag stories they thought false.

In December the company announced that, for the first time, it would introduce fact-checking onto the platform. Facebook didn’t want to check facts itself; instead it would outsource the problem to professionals. If Facebook received enough signals that a story was false, it would automatically be sent to partners, like Snopes, for review. Then, in early January, Facebook announced that it had hired Campbell Brown, a former anchor at CNN. She immediately became the most prominent journalist hired by the company.
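The escalation flow the company described is simple enough to sketch. In this illustrative Python snippet, the threshold value and all names are assumptions made for the sake of the example, not Facebook’s actual implementation:

```python
# Hypothetical sketch of the fact-checking escalation flow described above.
# FLAG_THRESHOLD and every name here are illustrative assumptions.
FLAG_THRESHOLD = 100  # how many user flags before a story is escalated

def route_story(story_id, user_flags, send_to_fact_checkers):
    """Send a story to third-party fact-checkers once enough users flag it."""
    if user_flags >= FLAG_THRESHOLD:
        send_to_fact_checkers(story_id)  # e.g., hand off to a partner like Snopes
        return "escalated"
    return "pending"

# A story flagged by 150 users gets routed out for review.
print(route_story("story-42", 150, lambda sid: None))  # → escalated
```

The key design point, as described, is that Facebook itself never rules on truth: it only counts signals and outsources the judgment.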

Soon Brown was put in charge of something called the Facebook Journalism Project. “We spun it up over the holidays, essentially,” says one person involved in discussions about the project. The aim was to demonstrate that Facebook was thinking hard about its role in the future of journalism—essentially, it was a more public and organized version of the efforts the company had begun after Murdoch’s tongue-lashing. But sheer anxiety was also part of the motivation. “After the election, because Trump won, the media put a ton of attention on fake news and just started hammering us. People started panicking and getting afraid that regulation was coming. So the team looked at what Google had been doing for years with News Lab”—a group inside Alphabet that builds tools for journalists—“and we decided to figure out how we could put together our own packaged program that shows how seriously we take the future of news.”

Facebook was reluctant, however, to issue any mea culpas or action plans with regard to the problem of filter bubbles or Facebook’s noted propensity to serve as a tool for amplifying outrage. Members of the leadership team regarded these as issues that couldn’t be solved, and maybe even shouldn’t be solved. Was Facebook really more at fault for amplifying outrage during the election than, say, Fox News or MSNBC? Sure, you could put stories into people’s feeds that contradicted their political viewpoints, but people would turn away from them, just as surely as they’d flip the dial back if their TV quietly switched them from Sean Hannity to Joy Reid. The problem, as Anker puts it, “is not Facebook. It’s humans.”

VII

Zuckerberg’s “pretty crazy” statement about fake news caught the ear of a lot of people, but one of the most influential was a security researcher named Renée DiResta. For years, she’d been studying how misinformation spreads on the platform. If you joined an antivaccine group on Facebook, she observed, the platform might suggest that you join flat-earth groups or maybe ones devoted to Pizzagate—putting you on a conveyor belt of conspiracy thinking. Zuckerberg’s statement struck her as wildly out of touch. “How can this platform say this thing?” she remembers thinking.

Roger McNamee, meanwhile, was getting steamed at Facebook’s response to his letter. Zuckerberg and Sandberg had written him back promptly, but they hadn’t said anything substantial. Instead he ended up having a months-long, ultimately futile set of email exchanges with Dan Rose, Facebook’s VP for partnerships. McNamee says Rose’s message was polite but also very firm: The company was doing a lot of good work that McNamee couldn’t see, and in any event Facebook was a platform, not a media company.

“And I’m sitting there going, ‘Guys, seriously, I don’t think that’s how it works,’” McNamee says. “You can assert till you’re blue in the face that you’re a platform, but if your users take a different point of view, it doesn’t matter what you assert.”

As the saying goes, heaven has no rage like love to hatred turned, and McNamee’s concern soon became a cause—and the beginning of an alliance. In April 2017 he connected with a former Google design ethicist named Tristan Harris when they appeared together on Bloomberg TV. Harris had by then gained a national reputation as the conscience of Silicon Valley. He had been profiled on 60 Minutes and in The Atlantic, and he spoke eloquently about the subtle tricks that social media companies use to foster an addiction to their services. “They can amplify the worst aspects of human nature,” Harris told WIRED this past December. After the TV appearance, McNamee says he called Harris up and asked, “Dude, do you need a wingman?”

The next month, DiResta published an article comparing purveyors of disinformation on social media to manipulative high-frequency traders in financial markets. “Social networks enable malicious actors to operate at platform scale, because they were designed for fast information flows and virality,” she wrote. Bots and sock puppets could cheaply “create the illusion of a mass groundswell of grassroots activity,” in much the same way that early, now-illegal trading algorithms could spoof demand for a stock. Harris read the article, was impressed, and emailed her.

The three were soon out talking to anyone who would listen about Facebook’s poisonous effects on American democracy. And before long they found receptive audiences in the media and Congress—groups with their own mounting grievances against the social media giant.

VIII

Even at the best of times, meetings between Facebook and media executives can feel like unhappy family gatherings. The two sides are inextricably bound together, but they don’t like each other all that much. News executives resent that Facebook and Google have captured roughly three-quarters of the digital ad business, leaving the media industry and other platforms, like Twitter, to fight over scraps. Plus they feel like the preferences of Facebook’s algorithm have pushed the industry to publish ever-dumber stories. For years, The New York Times resented that Facebook helped elevate BuzzFeed; now BuzzFeed is angry about being displaced by clickbait.

And then there’s the simple, deep fear and mistrust that Facebook inspires. Every publisher knows that, at best, they are sharecroppers on Facebook’s massive industrial farm. The social network is roughly 200 times more valuable than the Times. And journalists know that the man who owns the farm has the leverage. If Facebook wanted to, it could quietly turn any number of dials that would harm a publisher—by manipulating its traffic, its ad network, or its readers.

Emissaries from Facebook, for their part, find it tiresome to be lectured by people who can’t tell an algorithm from an API. They also know that Facebook didn’t win the digital ad market through luck: It built a better ad product. And in their darkest moments, they wonder: What’s the point? News makes up only about 5 percent of the total content that people see on Facebook globally. The company could let it all go and its shareholders would scarcely notice. And there’s another, deeper problem: Mark Zuckerberg, according to people who know him, prefers to think about the future. He’s less interested in the news industry’s problems right now; he’s interested in the problems five or 20 years from now. The editors of major media companies, on the other hand, are worried about their next quarter—maybe even their next phone call. When they bring lunch back to their desks, they know not to buy green bananas.

This mutual wariness—sharpened almost to enmity in the wake of the election—did not make life easy for Campbell Brown when she started her new job running the nascent Facebook Journalism Project. The first item on her to-do list was to head out on yet another Facebook listening tour with editors and publishers. One editor describes a fairly typical meeting: Brown and Chris Cox, Facebook’s chief product officer, invited a group of media leaders to gather in late January 2017 at Brown’s apartment in Manhattan. Cox, a quiet, suave man, sometimes referred to as “the Ryan Gosling of Facebook Product,” took the brunt of the ensuing abuse. “Basically, a bunch of us just laid into him about how Facebook was destroying journalism, and he graciously absorbed it,” the editor says. “He didn’t much try to defend them. I think the point was really to show up and seem to be listening.” Other meetings were even more tense, with the occasional comment from journalists noting their interest in digital antitrust issues.

As bruising as all this was, Brown’s team became more confident that their efforts were valued within the company when Zuckerberg published a 5,700-word corporate manifesto in February. He had spent the previous three months, according to people who know him, contemplating whether he had created something that did more harm than good. “Are we building the world we all want?” he asked at the beginning of his post, implying that the answer was an obvious no. Amid sweeping remarks about “building a global community,” he emphasized the need to keep people informed and to knock out false news and clickbait. Brown and others at Facebook saw the manifesto as a sign that Zuckerberg understood the company’s profound civic responsibilities. Others saw the document as blandly grandiose, showcasing Zuckerberg’s tendency to suggest that the answer to nearly any problem is for people to use Facebook more.

Shortly after issuing the manifesto, Zuckerberg set off on a carefully scripted listening tour of the country. He began popping into candy shops and dining rooms in red states, camera crew and personal social media team in tow. He wrote an earnest post about what he was learning, and he deflected questions about whether his real goal was to become president. It seemed like a well-meaning effort to win friends for Facebook. But it soon became clear that Facebook’s biggest problems emanated from places farther away than Ohio.

IX

One of the many things Zuckerberg seemed not to grasp when he wrote his manifesto was that his platform had empowered an enemy far more sophisticated than Macedonian teenagers and assorted low-rent purveyors of bull. As 2017 wore on, however, the company began to realize it had been attacked by a foreign influence operation. “I would draw a real distinction between fake news and the Russia stuff,” says an executive who worked on the company’s response to both. “With the latter there was a moment where everyone said ‘Oh, holy shit, this is like a national security situation.’”

That holy shit moment, though, didn’t come until more than six months after the election. Early in the campaign season, Facebook was aware of familiar attacks emanating from known Russian hackers, such as the group APT28, which is believed to be affiliated with Moscow. They were hacking into accounts outside of Facebook, stealing documents, then creating fake Facebook accounts under the banner of DCLeaks, to get people to discuss what they’d stolen. The company saw no signs of a serious, concerted foreign propaganda campaign, but it also didn’t think to look for one.

During the spring of 2017, the company’s security team began preparing a report about how Russian and other foreign intelligence operations had used the platform. One of its authors was Alex Stamos, head of Facebook’s security team. Stamos was something of an icon in the tech world for having reportedly resigned from his previous job at Yahoo after a conflict over whether to grant a US intelligence agency access to Yahoo servers. According to two people with direct knowledge of the document, he was eager to publish a detailed, specific analysis of what the company had found. But members of the policy and communications team pushed back and cut his report way down. Sources close to the security team suggest the company didn’t want to get caught up in the political whirlwind of the moment. (Sources on the politics and communications teams insist they edited the report down just because the darn thing was hard to read.)

On April 27, 2017, the day after the Senate announced it was calling then FBI director James Comey to testify about the Russia investigation, Stamos’ report came out. It was titled “Information Operations and Facebook,” and it gave a careful step-by-step explanation of how a foreign adversary could use Facebook to manipulate people. But there were few specific examples or details, and there was no direct mention of Russia. It felt bland and cautious. As Renée DiResta says, “I remember seeing the report come out and thinking, ‘Oh, goodness, is this the best they could do in six months?’”

One month later, a story in Time suggested to Stamos’ team that they might have missed something in their analysis. The article quoted an unnamed senior intelligence official saying that Russian operatives had bought ads on Facebook to target Americans with propaganda. Around the same time, the security team also picked up hints from congressional investigators that made them think an intelligence agency was indeed looking into Russian Facebook ads. Caught off guard, the team members started to dig into the company’s archival ads data themselves.

Eventually, by sorting transactions according to a series of data points—Were ads purchased in rubles? Were they purchased within browsers whose language was set to Russian?—they were able to find a cluster of accounts, funded by a shadowy Russian group called the Internet Research Agency, that had been designed to manipulate political opinion in America. There was, for example, a page called Heart of Texas, which pushed for the secession of the Lone Star State. And there was Blacktivist, which pushed stories about police brutality against black men and women and had more followers than the verified Black Lives Matter page.
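The data-point sorting the security team used can be imagined as a simple filter over purchase records. The field names and sample values below are illustrative assumptions, not Facebook’s actual ad schema:

```python
# Hypothetical sketch of heuristic filtering over ad purchase records.
# Field names (account_id, currency, browser_language) are assumptions.
ad_transactions = [
    {"account_id": "a1", "currency": "RUB", "browser_language": "ru"},
    {"account_id": "a2", "currency": "USD", "browser_language": "en"},
    {"account_id": "a3", "currency": "RUB", "browser_language": "en"},
]

def looks_suspicious(tx):
    # Flag ads purchased in rubles or from browsers set to Russian.
    return tx["currency"] == "RUB" or tx["browser_language"] == "ru"

suspect_accounts = {tx["account_id"] for tx in ad_transactions
                    if looks_suspicious(tx)}
print(sorted(suspect_accounts))  # → ['a1', 'a3']
```

In practice, each heuristic alone would produce false positives; it is the cluster of accounts matching several such signals at once, as the article describes, that pointed investigators to the Internet Research Agency.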

Numerous security researchers express consternation that it took Facebook so long to realize how the Russian troll farm was exploiting the platform. After all, the group was well known to Facebook. Executives at the company say they’re embarrassed by how long it took them to find the fake accounts, but they point out that they were never given help by US intelligence agencies. A staffer on the Senate Intelligence Committee likewise voiced exasperation with the company. “It seemed obvious that it was a tactic the Russians would exploit,” the staffer says.

When Facebook finally did find the Russian propaganda on its platform, the discovery set off a crisis, a scramble, and a great deal of confusion. First, due to a miscalculation, word initially spread through the company that the Russian group had spent millions of dollars on ads, when the actual total was in the low six figures. Once that error was resolved, a disagreement broke out over how much to reveal, and to whom. The company could release the data about the ads to the public, release everything to Congress, or release nothing. Much of the argument hinged on questions of user privacy. Members of the security team worried that the legal process involved in handing over private user data, even if it belonged to a Russian troll farm, would open the door for governments to seize data from other Facebook users later on. “There was a real debate internally,” says one executive. “Should we just say ‘Fuck it’ and not worry?” But eventually the company decided it would be crazy to throw legal caution to the wind “just because Rachel Maddow wanted us to.”

Ultimately, a blog post appeared under Stamos’ name in early September announcing that, as far as the company could tell, the Russians had paid Facebook $100,000 for roughly 3,000 ads aimed at influencing American politics around the time of the 2016 election. Every sentence in the post seemed to downplay the substance of these new revelations: The number of ads was small, the expense was small. And Facebook wasn’t going to release them. The public wouldn’t know what they looked like or what they were really aimed at doing.

This didn’t sit at all well with DiResta. She had long felt that Facebook was insufficiently forthcoming, and now it seemed to be flat-out stonewalling. “That was when it went from incompetence to malice,” she says. A couple of weeks later, while waiting at a Walgreens to pick up a prescription for one of her kids, she got a call from a researcher at the Tow Center for Digital Journalism named Jonathan Albright. He had been mapping ecosystems of misinformation since the election, and he had some excellent news. “I found this thing,” he said. Albright had started digging into CrowdTangle, one of the analytics platforms that Facebook uses. And he had discovered that the data from six of the accounts Facebook had shut down were still there, frozen in a state of suspended animation. There were the posts pushing for Texas secession and playing on racial antipathy. And then there were political posts, like one that referred to Clinton as “that murderous anti-American traitor Killary.” Right before the election, the Blacktivist account urged its supporters to stay away from Clinton and instead vote for Jill Stein. Albright downloaded the most recent 500 posts from each of the six groups. He reported that, in total, their posts had been shared more than 340 million times.

Eddie Guy

X

To McNamee, the way the Russians used the platform was neither a surprise nor an anomaly. “They find 100 or 1,000 people who are angry and afraid and then use Facebook’s tools to advertise to get people into groups,” he says. “That’s exactly how Facebook was designed to be used.”

McNamee and Harris had first traveled to DC for a day in July to meet with members of Congress. Then, in September, they were joined by DiResta and began spending all their free time counseling senators, representatives, and members of their staffs. The House and Senate Intelligence Committees were about to hold hearings on Russia’s use of social media to interfere in the US election, and McNamee, Harris, and DiResta were helping them prepare. One of the early questions they weighed in on was the matter of who should be summoned to testify. Harris recommended that the CEOs of the big tech companies be called in, to create a dramatic scene in which they all stood in a neat row swearing an oath with their right hands in the air, roughly the way tobacco executives had been forced to do a generation earlier. Ultimately, though, it was determined that the general counsels of the three companies—Facebook, Twitter, and Google—should head into the lion’s den.

And so on November 1, Colin Stretch arrived from Facebook to be pummeled. During the hearings themselves, DiResta was sitting on her bed in San Francisco, watching them with her headphones on, trying not to wake up her small children. She listened to the back-and-forth in Washington while chatting on Slack with other security researchers. She watched as Marco Rubio smartly asked whether Facebook even had a policy forbidding foreign governments from running an influence campaign through the platform. The answer was no. Rhode Island senator Jack Reed then asked whether Facebook felt an obligation to individually notify all the users who had seen Russian ads that they had been deceived. The answer again was no. But maybe the most threatening comment came from Dianne Feinstein, the senior senator from Facebook’s home state. “You’ve created these platforms, and now they’re being misused, and you have to be the ones to do something about it,” she declared. “Or we will.”

After the hearings, yet another dam seemed to break, and former Facebook executives started to go public with their criticisms of the company too. On November 8, billionaire entrepreneur Sean Parker, Facebook’s first president, said he now regretted pushing Facebook so hard on the world. “I don’t know if I really understood the consequences of what I was saying,” he said.

Read more: https://www.wired.com/story/inside-facebook-mark-zuckerberg-2-years-of-hell/

Facebook’s traffic is down as it strives for ‘meaningful connections’

Meaningful.
Image: facebook/mark zuckerberg

One of Facebook’s core statistics doesn’t look so good. Time spent on the network — a number that drives the tech giant’s revenue — is down by an estimated 50 million hours per day.

Facebook now reaches 2.13 billion people per month and has 1.4 billion daily active users. Spread across those daily users, the 50 million lost hours work out to about 0.036 hours, or roughly 2.1 minutes, per user per day.
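For readers who want to check that arithmetic, here is the back-of-the-envelope calculation in a few lines of Python:

```python
# Per-user impact of the reported 50-million-hour daily drop in time spent.
daily_hours_lost = 50_000_000        # Facebook's stated total drop per day
daily_active_users = 1_400_000_000   # 1.4 billion daily active users

hours_per_user = daily_hours_lost / daily_active_users
minutes_per_user = hours_per_user * 60
print(f"{hours_per_user:.3f} hours, i.e. {minutes_per_user:.1f} minutes per user per day")
```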

For CEO and cofounder Mark Zuckerberg, that’s a necessary drop for his company’s future success. Zuckerberg announced the news Wednesday as part of Facebook’s quarterly earnings, reflecting on its 2017 revenue and spending and the future of the company. 

Facebook’s stock was down nearly 5 percent in after-hours trading, but by the end of the hour-long call with investors it had jumped up by 2 percent. Zuckerberg knows it won’t be pretty going forward, either.

“2017 was a strong year for Facebook, but it was also a hard one,” Zuckerberg said in a statement. “In 2018, we’re focused on making sure Facebook isn’t just fun to use, but also good for people’s well-being and for society. We’re doing this by encouraging meaningful connections between people rather than passive consumption of content.”

Time spent on Facebook

The world’s largest social network (a.k.a. advertising giant, democracy wrecker, and virtual reality headset maker) is far from dead. This is Facebook we’re talking about. Site traffic isn’t everything when it comes to financials; revenue from Facebook ads is driven by actual clicks. Facebook still brought in $4.26 billion in profits last quarter.

Under Zuckerberg’s rejection of the old model, that means fewer viral videos.

With Zuckerberg at the helm, Facebook is pushing itself to become a place where people enjoy themselves and genuinely want to keep coming back. Under Zuckerberg’s rejection of the old model, that means fewer viral videos, unless users are having back-and-forth conversations in the comments section.

“Already last quarter, we made changes to show fewer viral videos to make sure people’s time is well spent. In total, we made changes that reduced time spent on Facebook by roughly 50 million hours every day. By focusing on meaningful connections, our community and business will be stronger over the long term,” Zuckerberg’s statement continued. 

Zuckerberg’s hope is that the ads in Facebook will be better, and therefore bring in more revenue, too. “When you care about something, you’re willing to see ads to experience it,” he said. 

Money is still no issue for Facebook. Revenue reached $12.97 billion for the fourth quarter of 2017, up 47 percent year over year. Earnings per share came in below analysts’ estimates, at $1.21 versus the $1.94 projected. However, Facebook made sure to note that the U.S. tax bill affected its overall gains. Had that one-time charge not been taken into account, the result would have been $2.21, beating expectations.
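The relationship between those per-share figures can be checked with a couple of subtractions, using only the numbers reported above:

```python
# Reconciling Facebook's Q4 2017 earnings-per-share figures.
reported_eps = 1.21      # EPS as reported, after the one-time tax charge
adjusted_eps = 2.21      # EPS excluding the one-time tax charge
analyst_estimate = 1.94  # consensus projection

tax_charge_per_share = adjusted_eps - reported_eps  # implied charge per share
beats_estimate_without_charge = adjusted_eps > analyst_estimate
print(f"Implied one-time charge: ${tax_charge_per_share:.2f} per share")
print(f"Beats the estimate once the charge is excluded: {beats_estimate_without_charge}")
```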

For Facebook, revenue is all about smartphones. Mobile advertising revenue now makes up 89 percent of overall ad revenue, up from 84 percent a year prior. 

An ideological shift 

Facebook is now grappling with its new reputation. The company’s 2018 has been rocky following a recent shift in its ideology.

After years of fueling growth among digital-first media companies through Facebook Pages, the company said it would decrease their influence in the News Feed, dropping news content to 4 percent of the feed from 5 percent.

Now, Facebook is prioritizing posts shared by friends and family and content from “trusted” news sources, where “trusted” is defined by the community. 

Facebook continues to be criticized by its own community for negative impacts on mental health and data privacy. Its effort to create an app to help children communicate drew protest from the Campaign for a Commercial-Free Childhood, which argued the app would negatively affect kids’ wellbeing.

“Shift from showing the most meaningful content to people to encouraging the most meaningful interaction,” Zuckerberg said. “It’s not just one News Feed change… It’s going to be a series of product changes.”

Zuckerberg called out Stories — the vertical photo- and video-sharing product that the company copied from competitor Snapchat — as a new product aligned with meaningful interactions on and off the platform. 

“Stories is a better format of sharing multiple quick video clips throughout your day,” he said. WhatsApp and Instagram, he added, are the number 1 and number 2 “most used Stories product in the world.”

These updates are far from Facebook’s only worry going forward. Facebook is dealing with the backlash from inadvertently spreading Russian propaganda during the 2016 presidential election and from the overall presence of fake news on the site. Facebook is also combating hate speech. Last year, U.S. lawmakers grilled Facebook, as well as Twitter and Google, on these practices and demanded that the companies make changes, in particular around transparency on political ads.

Separately, Facebook is addressing data privacy and tools that allow users to further change their ad experience ahead of the European Union’s upcoming privacy changes known as General Data Protection Regulation (GDPR). 

Big bets ahead 

But not everything is negative or decreasing on Facebook’s horizon. A new initiative called Facebook Watch, a hub for high-quality video, is gaining traction among users, media publishers, and Hollywood studios. 

“It’s early. There’s some promising signs,” Zuckerberg said. “It’s really important to internalize that the News Feed ecosystem and the Watch ecosystem are two totally separate things … We’re optimistic that Watch will be a use of video to bring people together.” 

Zuckerberg and his team spoke little of hardware updates, but the company has made big announcements already this year with its products. Facebook’s virtual reality division Oculus is launching a new headset in China thanks to a partnership with Xiaomi.

Read more: https://mashable.com/2018/01/31/facebook-earnings-2017-50-million-hours-per-day-traffic/

The Formula for Phone Addiction Might Double As a Cure

In September 2007, 75 students walked into a classroom at Stanford. Ten weeks later, they had collectively amassed 16 million users, $1 million in advertising revenue, and a formula that would captivate a generation.

The class—colloquially known as "The Facebook Class"—and its instructor, BJ Fogg, became Silicon Valley legends. Graduates went on to work and design products at Uber, Facebook, and Google. Some even started companies with their classmates. But a decade later, some of the class’ teachings are in the crosshairs of our society-wide conversation about phone addiction.

Fogg's research group, the Persuasive Technology Lab, looks at how technology can persuade users to take certain actions. Early experiments centered around questions like, “How can you get people to stop smoking using SMS?” But when Facebook, then a three-year-old startup, opened its platform to third-party developers, Fogg saw a perfect opportunity to test some of his theories in the wild.

After a few lectures on the basics of behavioral psychology, students began building Facebook apps of their own. They used psychological tools like reciprocity and suggestion to engineer apps that could, for example, send your friends a virtual hug or get your friends to join an online game of dodgeball. At the time, Facebook had just begun promoting third-party apps in its news feed. The iPhone launched in the summer of 2007; the App Store would follow a year later. Fogg’s teachings became a playbook on how to make apps stick just as apps were becoming a thing.

“Within the first month, there were already millions of people using these apps,” says Dan Greenberg, a teaching assistant for the class who later went on to found the ad-tech platform Sharethrough with some of his classmates. After some students decided to monetize their apps with banner ads, apps like Greenberg’s began bringing in as much as $100,000 a month in ad sales. Fogg had a secret sauce, and it was the ideal time to serve it.

In Silicon Valley, Fogg's Behavioral Model answers one of product designers’ most enduring questions: How do you keep users coming back?

A decade ago, Fogg’s lab was a toll booth for entrepreneurs and product designers on their way to Facebook and Google. Nir Eyal, the bestselling author of Hooked, sat in lectures next to Ed Baker, who would later become the Head of Growth at both Facebook and Uber. Kevin Systrom and Mike Krieger, the founders of Instagram, worked on projects alongside Tristan Harris, the former Google Design Ethicist who now leads the Time Well Spent movement. Together, in Fogg's lab, they studied and developed the techniques to make our apps and gadgets addictive.

Now, we are navigating the consequences. From Facebook's former president claiming that Silicon Valley’s tools are “ripping apart the social fabric of society” to France formally banning smartphones in public schools, we are starting to reexamine the sometimes toxic relationships we have with our devices. Looking at the source of product designers’ education may help us understand the downstream consequences of their creations, and the way to reverse them.

Engineering Addiction

BJ Fogg is an unlikely leader for a Silicon Valley movement. He’s a trained psychologist and twice the age of the average entrepreneur with whom he works. His students describe him as energetic, quirky, and committed to using tech as a force for good: In the past, he's taught classes on making products to promote peace and using behavior design to connect with nature. But every class begins with his signature framework, Fogg’s Behavior Model. It suggests that we act when three forces—motivation, trigger, and ability—converge.

In Silicon Valley, the model answers one of product designers’ most enduring questions: How do you keep users coming back? Say you're a Facebook user, with the Facebook app on your phone. You're motivated to make sure photos of you posted online aren't ugly, you get triggered by a push notification from Facebook that you’ve been tagged, and your phone gives you the ability to check right away. You open the Facebook app.
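The convergence the article describes can be sketched in a few lines of Python. This is our own toy illustration, not Fogg's published formalism: the multiplicative scoring and the 0.5 threshold are invented for demonstration, and the point is only that removing any one of the three forces stops the behavior.

```python
from dataclasses import dataclass

@dataclass
class Moment:
    motivation: float  # how much the user wants the outcome, 0.0-1.0
    ability: float     # how easy the action is right now, 0.0-1.0
    triggered: bool    # did a prompt (e.g. a push notification) arrive?

def behavior_occurs(moment: Moment, threshold: float = 0.5) -> bool:
    """A behavior fires only when a trigger lands while motivation
    and ability together clear an activation threshold."""
    return moment.triggered and (moment.motivation * moment.ability) >= threshold

# The tagged-photo scenario: high motivation (no ugly photos!),
# phone in hand (high ability), push notification received (trigger).
assert behavior_occurs(Moment(motivation=0.9, ability=0.8, triggered=True))

# Delete the app and ability collapses: the same trigger no longer converts.
assert not behavior_occurs(Moment(motivation=0.9, ability=0.1, triggered=True))

# Without a trigger, even a motivated and able user does nothing.
assert not behavior_occurs(Moment(motivation=0.9, ability=0.9, triggered=False))
```

Notice that the same sketch also captures the "reverse the model" advice later in the piece: turning off notifications zeroes out the trigger, and deleting the app drives ability toward zero.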

Proponents of the model, like Eyal, believe that the framework can be extremely powerful. “If you understand people’s internal triggers, you can try to satiate them," he says. "If you’re feeling lonely, we can help you connect. If you’re feeling bored, we can help entertain."

But critics say that companies like Facebook have taken advantage of these psychological principles to capture human attention. Especially in advertising-supported businesses, where more time spent in app equals more profit, designers can optimize for values that don’t always align with their users’ well-being.

Tristan Harris, one of the most vocal whistleblowers of tech’s manipulative design practices (and a graduate of Fogg's lab), has grappled with this idea. In 2012, while working at Google, he created a 144-slide presentation called “A Call to Minimize Distraction & Respect Users’ Attention.” The deck, which outlined ways in which small design elements like push notifications can become massive distractions at scale, went viral within the company. Over 5,000 Googlers viewed the presentation, which Harris parlayed into a job as Google’s first “design ethicist.”

Harris left Google in 2015 to expand the conversation around persuasive design outside of Mountain View. “Never before has a handful of people working at a handful of tech companies been able to steer the thoughts and feelings of a billion people,” he said in a recent talk at Stanford. “There are more users on Facebook than followers of Christianity. There are more people on YouTube than followers of Islam. I don’t know a more urgent problem than this.”

Harris has channeled his beliefs into his advocacy organization, Time Well Spent, which lobbies the tech industry to align with societal well-being. Three years later, his movement has begun to gain steam. Just look at Facebook, which recently restructured its news feed algorithm to prioritize the content that people find valuable (like posts from friends and family) over the stuff that people mindlessly consume (like viral videos). In a public Facebook post, Mark Zuckerberg wrote that one of Facebook’s main priorities in 2018, “is making sure the time we all spend on Facebook is time well spent.” Even, he said, if it's at the cost of how much time you spend on the platform.

Facebook's reckoning shows that companies can redesign their products to be less addictive—at the very least, they can try. Perhaps in studying the model that designers used to hook us to our phones, we can understand how those principles can be used to unhook us as well.

Finding the Cure

Fogg acknowledges that our society has become addicted to smartphones, but he believes consumers have the power to unhook themselves. “No one is forcing you to bring the phone into the bedroom and make it your alarm clock,” he says. “What people need is the motivation.”

Eyal’s next book, Indistractable, focuses on how to do that, using Fogg's model in reverse. It takes the same three ideas—motivation, trigger, and ability—and reorients them toward ungluing us from our phones. For example, you can remove triggers from certain apps by adjusting your notification settings. (Or better yet, turn off all your push notifications.) You can decrease your ability to access Facebook by simply deleting the app from your phone.

“People have the power to put this stuff away and they always have,” says Eyal. “But when we preach powerlessness, people believe that.”

Others, like Harris and venture capitalist Roger McNamee, disagree. They believe corporations’ interests are so intertwined with advertisers’ demands that, until we change the system, companies will always find new ways to maximize consumers’ time spent with their apps. “If you want to fix this as quickly as possible, the best way would be for founders of these companies to change their business model away from advertising,” says McNamee, who was an early investor in Facebook and mentor to Zuckerberg. “We have to eliminate the economic incentive to create addiction in the first place.”

There is merit to both arguments. The same methods that addict people to Snapchat might keep them learning new languages on Duolingo. The line between persuasion and coercion can be thin, but a blanket dismissal of behavior design misses the point. The larger discussion around our relationship with our gadgets comes back to aligning use with intent—for product designers and users.

Where We Go Next

Harris and McNamee believe manipulative design has to be addressed on a systems level. The two are advocating for government regulation of internet platforms like Facebook, in part as a public health issue. Companies like Apple have also seen pressure from investors to rethink how gadget addiction is affecting kids. But ultimately, business models are hard to change overnight. As long as advertising is the primary monetization strategy for the web, there will always be those who use persuasive design to keep users around longer.

So in the meantime, there are tangible steps we can all take to break the loop of addiction. Changing your notification settings or turning your phone to grayscale might seem like low-hanging fruit, but it's a place to start.

“It’s going to take the companies way longer than it would take you to do something about it,” says Eyal. “If you hold your breath and wait, you’re going to suffocate.”


Read more: https://www.wired.com/story/phone-addiction-formula/

Why Hillary Clinton's former CTO is back in Silicon Valley

Stephanie Hannon, Strava's chief product officer
Image: strava

Stephanie Hannon, 43, didn’t consider herself an athlete until age 39. In 2014, a looming surgery encouraged her to start working out. She began with a hike, and like millions of people worldwide, she turned to her smartphone for some help on where to go and downloaded an app called Strava.

This week, Hannon joined Strava as chief product officer. She’s one of the major hires the company made after growing from a niche community of cyclists in 2009 to tens of millions of athletes worldwide. Now, Hannon wants to expand the tech platform for developers and the company’s relationships with cities. 

Hannon is quite familiar with building tech products and working with communities. She has been working at the highest levels of Silicon Valley since the 1990s. She was one of the product managers in the early days of Gmail and Google Maps and lived internationally to help expand those products. She later joined Facebook, where she focused on keeping its one billion communicating users safe.

But she took a brief break from the Valley after she was called about a position on the Hillary Clinton campaign in 2015. For 20 months, Hannon worked as Hillary For America’s chief technology officer and oversaw a team of 80 technologists dedicated to putting the first woman president in office. That dream wasn’t realized, but Hannon isn’t giving up yet. 

During her first day at Strava, Mashable spoke with Hannon to hear about her career in Silicon Valley, her thoughts on the 2016 election, and what she’s working on next. 

What excites you most about Strava, and had you been familiar with the product and the company before or was it one of those phone calls? 

I knew about Strava since day one because of that entrepreneurship program I told you about, the Mayfield Fellows program [at Stanford University]. The CTO here, Mark Shaw, was in the program as well, and I think I was his mentor. He’s a good friend of mine, so he had worked with the founders of Strava and Kana, and he was the third employee. 

I knew about Strava for a long time, and I went on my own quest to get healthy. In 2014, I had a health crisis. I just want to say I’m totally fine. I had to have a very invasive surgery, and I wasn’t fit or healthy. I knew seven months before surgery that if I got healthy, the outcome would be better and my recovery would be much easier.

For the first time in my life, at 39, I went on a hike, and I used Strava from the very beginning to track my hike. I also radically changed my diet. I gave up meat. I gave up alcohol. I gave up a lot of things and went on this personal health quest. I had a 7-hour surgery, and I basically walked right out the door. The next day I walked 2 miles. For me, that was a really motivating moment, and when I got through the surgery, I was like, I’ve never been a healthy person, so how am I going to keep the motivation going when there’s no surgery looming? 

So, I went from hiking to triathlons.

No big deal, just running a triathlon?

Yeah, I just want to stress that I’m not a great athlete. I think finisher is a great word. When I did a triathlon, my goal was to be a finisher, to make it across the finish line. If you’re a person who doesn’t consider yourself an athlete, carrying my bike into a big pen that says “ATHLETES ONLY,” the first time I walked through, I was like, “Is that really me?” 

Steph’s triathlon gear

Image: STEPHANIE HANNON

That was really exciting and motivating. I did triathlons, Tough Mudders, half-marathons. I went on a personal quest for fitness and when I did that my life radically changed, not just because I was fitter, but for most of my adult life I only slept 3 or 4 hours. 

I started sleeping 7 or 8 hours, and I would tell everyone about it. Like, “Have you guys heard about sleep?” I was more emotionally balanced and resilient. I was happier and had better relationships. My whole experience going through that had a big impact on me. So as I was looking around at companies, meeting great entrepreneurs, and seeing all this cool stuff happening in Silicon Valley, it was the combination of knowing people here at Strava, my own personal journey to get healthy, and really believing in the product. 

At the core of it, I’ve worked on a lot of platforms. Google Maps is a platform; Facebook is a platform. I believe in the power of a platform, and a lot of innovation can happen with partners to Strava or devices connected to Strava.

I think Strava can be sort of at the center of this connected world. The opportunity is much bigger. Strava is serving tens of millions of athletes, but I think there are more than 700 million athletes in the world, and I think they can all benefit from the product we build.  

You joined the tech scene in Silicon Valley in 1995. What’s the biggest difference between now and then? 

An incredible amount has changed in 20 years. I think the speed of development, like what I worked on when I was right out of college, my projects and products probably took a year and a half or two years to build and had a significant hardware component. 

Now, you work in a consumer web services company or a company that develops mobile apps and you can iterate really fast. You can build and launch things in a week.

I think the scale and impact has also dramatically changed because of the proliferation of mobile devices and the comfort level of the whole world with social networks and data and how people manage and use their data, like the concept of what we’re able to do at companies like Strava. I couldn’t even conceive of it two decades ago. 

We could also talk infinitely about diversity in tech. I felt very much the unusual one when I entered the workforce, but now I’m really happy to say the landscape has changed. I’m trying to encourage more diversity, and building diverse teams has become really important to me; that feels more possible now than it did back then. 

That’s an inspiring way to put it. Diversity in tech is not perfect, but it’s good to hear that it has improved.

Exactly. We still have so long to go. I know when you’re building an engineering team to put the first woman president in office, it’s an unusually good motivation to get a diverse engineering team, but I think we all have to keep working on it. 

You’ve worked at Google, twice in your career, and, in between, Facebook. You mentioned they’re similar in that they’re mobile and they can scale fast, but is there anything in particular about the difference between those companies?

They’re both amazing companies. They’re so radically different. Google has been about organizing the world’s information and making it universally accessible and usable, which I’ll be able to repeat until the end of time. It was so drilled into us.

A lot of my time at Google was working on Google Maps. I brought Google Maps to Europe, the Middle East, and Africa, and that was an incredible experience because if you didn’t have a good online mapping tool and then you bring it to a country, suddenly they can manage in different ways, do different things with commerce and traffic, and how they look at solving big problems like terrorism or clean water, all this incredible good comes out of bringing maps to these countries.

Facebook is completely different. What was appealing to me at Facebook was that a billion people at the time were going there every day to communicate, and how do you create a safe space for those people? A lot of what I worked on at Facebook was preventing spam and abuse and building tools for helping people talk to each other when they were unhappy about content or had bad experiences on the platform. 

Both are amazing companies. At the time I worked at Facebook, they didn’t have as many acquisitions, so it felt like we were all unified working on this one product, similar to Strava today, whereas Google was already a big company with massively diverse product lines. But I think across them is a focus on the user, or the person, or in Strava’s case an athlete: how do you build really compelling, innovative experiences that make their lives better or more efficient? 

Steph after participating in a half marathon

Image: stephanie hannon

Seemed like you had a pretty great life in Silicon Valley at some of the most respected companies. Why would you decide to leave these coveted jobs? 

It was a surprise to get the call to interview to be the CTO. At the time I was leading Google’s social impact team, and we worked on problems like disaster response. We built tools for the Ebola crisis with Doctors Without Borders. We did a lot of philanthropic giving tools, and we also did a lot of Google’s elections work. In 2014, my teams covered India and Brazil, and we did a whole bunch of experiments in civic engagement. I was sort of immersed in that space of government and elections, and I had friends like Megan Smith, who was the CTO for Obama. 

When [the Clinton campaign] offered it to me, I was incredibly excited and paranoid because I didn’t really know what I was getting myself into, but I think if someone says, “Do you want to be part of putting the first woman president in the White House?” It was really easy to say that’s something I’ll always feel good about trying to do.

What was the greatest challenge you faced as CTO of Clinton’s campaign? 

I would say the greatest challenge was recruitment. Many engineers don’t know what it really means to work inside of a campaign or what’s possible, and then there’s the speed: incredibly short deadlines and very little time. We had some ideas that were not executable in the time and with the staffing we had. The deadlines we had were so rigid. When we were working on Google Maps or Facebook features, you might aspire to launch something for St. Patrick’s Day, but if you didn’t, it’s not a huge deal. 

For the campaign, for the first time ever, we put a real-time caucus app in the hands of every captain in Iowa. That meant we had a real-time dashboard so we could see the results for all areas as they came in. You need to build the app, have it be reliable, train your staff, and have everything go well on that night, because it’s that night or never. Dealing with those kinds of rigid deadlines with a small amount of resources was my biggest challenge. 

But you were able to overcome that? Did the project go well?

Well, I’d like to believe that. We could have a debate about Iowa, but I’m really proud of the team. I could not be more proud of the people who gave up jobs at big companies and big compensation to come on the quest we went on for the 2016 election. I think we did a lot of things great, and then there were a lot of things we ran out of time to do. A lot of the time, as a CTO, it’s: with these limited resources, what’s most important? 

Steph campaigning for Hillary

Image: wikimedia commons

As CTO of Clinton’s campaign, how do you think technology impacted the outcome of the election?

I think technology played a massive role. A lot of modern campaigning is about how you reach the people you want to reach efficiently. Different people want to get their news on Facebook or social media. Some people prefer a newspaper. Some people prefer TV. Some people only need to hear something once. Some people need to hear something multiple times. Some people are only affected when they hear something at night or on the weekend. 

I think what’s exciting about technology in the modern era is you can reach people in a way that’s very meaningful to them with very personalized messages. I think technology plays a massive role in identifying the most important people to activate and how to activate them and how to measure your success. I hope we can have a positive impact with those technologies in 2018 and 2020 races. 

Why are you choosing to come back to Silicon Valley and San Francisco? Was there any doubt about packing up your bags and coming back here?

No, my home has always been San Francisco, although I like working in different places. Over my 10 years at Google, I worked in Switzerland and in Australia. I think of San Francisco as home, but I love being abroad and in different places.

You can imagine the grief of what happened [with the election]. The outcome was big, not only for me, but for the 80 people that I hired. So a lot of the end of the year and into this year was spent supporting them and helping them find new jobs. Had we won the election, I would have been so happy if a bunch of my team ended up in the US Digital Service or different parts of the government, but in the end, these 80 people, we all wanted to find ways to be productive. So there was a lot of that, and then there was time off. 

Then I joined Greylock in July of this year, and for me, that was a way to be immersed in the entrepreneurship community, think about what I wanted to do next, and also help advise.   

How does your time at Greylock compare to Facebook, Google, and the Clinton campaign? 

A lot of it was how I can use my experience building products to help portfolio companies at Greylock in different ways. For some of them, I’d help them hire their first product manager. For some of it, it might be a company in a new phase of growth and the product team needs to figure out how to interact with them. With some of them it was what does the product development phase look like. How do we iterate and use data? How do we think about metrics? How do we recruit? A lot of people were interested in my experience in scaling an engineering team so fast. A lot of my days were meeting with companies and just sort of helping and advising. Some of my days were just talking to companies and figuring out what to do next. 

Google has a huge market cap. Facebook is worth billions. Strava is significantly smaller. Can Strava even compete with them? 

I believe there’s space for more vertical, intimate, personal social networks. I think there’s a set of people that you interact with for passion or love, and it doesn’t always look like your broad social network. I experienced this in the campaign era: sometimes people got fatigued on Facebook because they would go there and the content was not something they were excited about. If you’re a person who’s an athlete, or you’re trying to get inspired or motivated, or you’re trying to get a new idea, going to Facebook to look for that content isn’t easy. But when I go to Strava and look at my feed, it’s exactly what I’m looking for. It’s really easy to figure out which friends are which types of athletes and are having these different experiences. Or, oh, this person runs where I run, so maybe we can connect. Or that bike is a bike I was thinking about buying, so maybe I should talk to them about it.

From a technological perspective, what’s the most unique or innovative thing that Strava is doing?

Many pieces add up to what’s appealing to the users and athletes today. I think it’s a good activity tracker, and that’s not a small task. It’s doing incredibly well in biking and running and has launched different multi-sport features. The idea is to appeal to all these athletes, to serve athletes of all types. I think that’s really interesting, and then there’s the social network piece: this is a community, and how do you build meaningful and interesting community features to help people support each other? 

There’s this whole platform piece. We want no matter what device you use or whether you do activities indoor or outdoor, we want you to be able to have that all in one place, and that’s a meaningful technology problem. 

A lot of my career I worked with cities, in different capacities. In 2007, I helped launch Google Transit. If you remember back then, we only had driving directions on Google. I worked at Google in the Zurich office, and obviously public transit in Europe looks different than in a city like San Francisco or Mountain View. I helped create that transit feed that’s widely adopted today. Later in my Google career, I worked on projects in urban mobility: how do you take all the anonymized rich data we had to work on things like traffic congestion or infrastructure planning? There’s a whole Strava Metro piece, too. Strava is working with more than 130 cities on how they can use the data to make their city better for pedestrians, runners, and cyclists. 

Image: stephanie hannon/strava screenshot

There are a lot of companies chasing health and fitness information. In your opinion, what gives Strava a competitive edge over them? 

Well, I’m going to remind you that I’ve worked here for 20 hours. I think a lot of what makes Strava unique is how well, and how early, they focused on a type of athlete and a certain experience; they did that so well that they got significant adoption. The learnings from that, and being so ruthlessly focused, are big. When they moved from cycling to running, they were able to take the lessons while acknowledging that the way to motivate people doesn’t necessarily look the same. They were able to build on their product and engineering team to build a new feature set. And then from running to multi-sport, and from being an activity tracker to a social network. A lot of what makes it special is how powerful the product is: building on the core user base, but being able to expand. In London, I think we have more runners than cyclists. And then the power of the platform, the vision to be able to serve all athletes, is also what makes it special. 

Do you think Strava is at a disadvantage because it doesn’t make its own hardware, at least not yet?

I’m optimistic. The fact that we have 300 devices that integrate with the platform and the fact that we have 20,000 third-party apps built on our open API, I think that’s a strong signal that we’re going to be with all types of athletes and all devices. I don’t think building our own hardware is a necessary part of that. 
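For a sense of what those third-party integrations consume, here is a small hypothetical sketch. The endpoint named in the comment comes from Strava's public v3 API, but the sample payload and the helper function are invented for illustration; a real client would also need OAuth credentials and a live HTTP request.

```python
import json

# A made-up sample of the JSON a third-party app might receive from
# Strava's v3 API (GET /api/v3/athlete/activities); real responses
# carry many more fields. Distances are reported in meters.
sample_response = json.dumps([
    {"name": "Morning Ride", "type": "Ride", "distance": 24140.2},
    {"name": "Lunch Run", "type": "Run", "distance": 5210.0},
])

def total_km_by_type(payload: str) -> dict:
    """Sum each activity type's distance, converting meters to kilometers."""
    totals: dict = {}
    for activity in json.loads(payload):
        kind = activity["type"]
        totals[kind] = totals.get(kind, 0.0) + activity["distance"] / 1000.0
    return totals

totals = total_km_by_type(sample_response)
assert round(totals["Ride"], 1) == 24.1
assert round(totals["Run"], 2) == 5.21
```

This kind of aggregation (per-sport mileage, streaks, leaderboards) is the bread and butter of apps built on an open fitness API, which is why Hannon reads the 20,000-app figure as a signal of platform strength.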

What do you think your biggest challenges are in this new role?

I like to say opportunity. As a user, I think there’s so much to build on. There’s incredible success that Strava’s already had. The challenge for any product leader is figuring out what to work on, because there’s no end of ideas. Some of the things I care about are how to be the best at multi-sport and how to continue to invest in those experiences. We want those people to have great experiences on the product, and we want to invest in the platform. 

A personal thing I feel very strongly about is discovery. I was just in Sydney, Australia over the holidays, and I was standing by the top of a bridge, holding my mobile phone. I know Strava has great data about places I can go, but it’s not surfaced in an easy, consumable way, so that’s a massive opportunity.

You’ve held a lot of different roles: engineer, product manager, CTO. What’s been the common thread in your career, and how do you see Strava furthering it?

Where I feel most proud and motivated is technology that makes the world better for people. That looks and feels like different things. In the early days of Gmail, it was: how can we give this version of online email free to every institution in the world? That was a really powerful quest. As we talked about, I was obsessed with transit information. Then at Facebook, I was able to ask: how do you create a safe space for a billion people to communicate? And at the Hillary campaign, I had a quest I felt really passionate about: I thought a lot of people would lead a better life if this person were elected. 

Getting behind that was really easy. In a similar way, when I look at Strava, I see the benefit of living a healthy, active life, and the power of data and community to inspire people to do that. We’re having incredible success at Strava, and I hope that by being here I can help accelerate and amplify it, and reach more athletes globally. 

Read more: http://mashable.com/2018/01/04/stephanie-hannon-strava-cto-hillary-clinton-greylock/

Facebook is overhauling its News Feed so users feel better again

Facebook is re-tweaking its News Feed again. 

This time it wants to bring it back to friends and family instead of viral videos and media posts, Facebook CEO Mark Zuckerberg announced in a post Thursday. 

“I’m changing the goal I give our product teams from focusing on helping you find relevant content to helping you have more meaningful social interactions,” he wrote.

He said the change should make everyone feel better: “The research shows that when we use social media to connect with people we care about, it can be good for our well-being. We can feel more connected and less lonely, and that correlates with long term measures of happiness and health.”

With fewer posts from businesses, brands, and media, expect to see more of what your “friends” are sharing and liking. 

Zuckerberg didn’t mention Facebook’s role in the 2016 election or Russian meddling through the platform as motivation to change what shows up on the social network.

A breakdown of the “closer together” initiative indicates news stories will get de-prioritized, while conversations that Facebook thinks will spark a lot of engagement will get a boost. 

To achieve a happier Facebook user base, it looks like Facebook will focus on comment-heavy posts — and not just quick comments like, “Oh no!” or “Thanks!” but lengthy (meaningful!) comments.

All those “likes” won’t mean as much as full-on engagement, which under the new rules seems to mean back-and-forth conversations. Sounds like posting links back and forth won’t count as much in the meaningfulness meter.

In other words, publishers will almost certainly see traffic drop and video views decrease.
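Facebook hasn’t published its actual ranking formula, but the shift described above can be illustrated with a toy scoring heuristic. Everything here — the weights, the field names, the thresholds — is hypothetical, invented purely to show why comment-heavy friend posts would now beat high-like publisher posts:

```python
# Toy illustration of "meaningful interaction" ranking.
# All weights and field names are hypothetical; Facebook has not
# published its News Feed formula.

def meaningful_score(post):
    score = 0.0
    # Likes count for very little under the new rules.
    score += post.get("likes", 0) * 0.01
    for comment in post.get("comments", []):
        if len(comment["text"].split()) >= 5:
            score += 2.0      # lengthy, "meaningful" comments
        else:
            score += 0.5      # quick reactions like "Thanks!"
        # Back-and-forth conversation gets the biggest boost.
        score += 3.0 * comment.get("replies", 0)
    if post.get("source") in ("business", "brand", "media"):
        score *= 0.5          # de-prioritize publisher content
    return score
```

Under a heuristic like this, a friend’s post with one long comment thread outranks a media post with hundreds of likes but only drive-by reactions — which is exactly why publishers expect traffic to drop.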

Zuckerberg rationalized that the changes will ultimately make for a better Facebook experience, naturally, but might actually cause people to spend less time on the social network.

“I also expect the time you do spend on Facebook will be more valuable,” he wrote.

UPDATE: Jan. 11, 2018, 5:07 p.m. PST This post has been updated with more information about the News Feed changes.

Read more: http://mashable.com/2018/01/11/facebook-news-feed-algorithm-changes-family-friends/

Facebook for 6-Year-Olds? Welcome to Messenger Kids

Facebook says it built Messenger Kids, a new version of its popular communications app with parental controls, to help safeguard pre-teens who may be using unauthorized and unsupervised social-media accounts. Critics think Facebook is targeting children as young as 6 to hook them on its services.

Facebook’s goal is to “push down the age” of when it’s acceptable for kids to be on social media, says Josh Golin, executive director of Campaign for a Commercial Free Childhood. Golin says 11-to-12-year-olds who already have a Facebook account, probably because they lied about their age, might find the animated emojis and GIFs of Messenger Kids “too babyish,” and are unlikely to convert to the new app.

Facebook launched Messenger Kids for 6-to-12-year olds in the US Monday, saying it took extraordinary care and precautions. The company said its 100-person team building apps for teens and kids consulted with parent groups, advocates, and childhood-development experts during the 18-month development process and the app reflects their concerns. Parents download Messenger Kids on their child’s account, after verifying their identity by logging into Facebook. Since kids cannot be found in search, parents must initiate and respond to friend requests.

Facebook says Messenger Kids will not display ads, nor collect data on kids for advertising purposes. Kids’ accounts will not automatically be rolled into Facebook accounts once they turn 13.

Nonetheless, advocates focused on marketing to children expressed concerns. The company will collect the content of children’s messages, photos they send, what features they use on the app, and information about the device they use. Facebook says it will use this information to improve the app and will share the information “within the family of companies that are part of Facebook,” and outside companies that provide customer support, analysis, and technical infrastructure.

“It’s all that squishy language that we normally see in privacy policies,” says Golin. “It seems to give Facebook a lot of wiggle room to share this information.” He says Facebook should be clearer about the outsiders with which it may share data.

In response to questions from WIRED, a spokesperson for Facebook said: “It’s important to remember that Messenger Kids does not have ads and we don’t use the data for advertising. This provision about sharing information with vendors from the privacy policy is for things like providing infrastructure to deliver messages.”

Kristen Strader, campaign coordinator for the nonprofit group Public Citizen, says Facebook has proven it cannot be trusted with youth data in the past, pointing to a leaked Facebook report from May that promised advertisers the ability to track teen emotions, such as insecurity, in real-time. "Their response was just that they will not do similar experiments in the future," says Strader. At the time, advocacy groups asked for a copy of the report, but Facebook declined.

On Thursday, Sen. Richard Blumenthal and Sen. Ed Markey sent a long list of questions about the app's privacy controls to Mark Zuckerberg. "We remain concerned about where sensitive information collected through this app could end up and for what purpose it could be used," they wrote in a letter to the Facebook CEO.

Tech companies have made a much more aggressive push into targeting younger users, a strategy that began in earnest in 2015 when Google launched YouTube Kids, which includes advertising. Parents create an account for their child through Google’s Family Link, a product to help parents monitor screen time. Family Link is also used by parents who want to set up an account for their kid on Google Home, which gets matched to the child’s voice.

“There is no way a company can really close its doors to kids anymore,” says Jeffrey Chester, executive director of the Center for Digital Democracy. “By openly commercializing young children’s digital media use, Google has lowered the bar,” he says, pointing to what toy company Mattel described as “an eight-figure deal” that it signed with YouTube in August.

Chester says services such as YouTube Kids and Messenger Kids are designed to capture the attention, and affinity, of the youngest users. “If they are weaned on Google and Facebook, you have socialized them to use your service when they become an adult,” he says. “On the one hand it’s diabolical and on the other hand it’s how corporations work.”

In past years, tech companies avoided targeting younger users because of the Children’s Online Privacy Protection Act (COPPA), a law that requires parental permission to collect data on children under 13. But “the weakness of COPPA is that you can do a lot of things if you get parental permission,” says Golin. In the past six months, new apps marketed as parent helpers have launched. “What they’re saying is this is a great way for parents to have control; what they are getting is parental permission,” says Golin.

Several children-focused nonprofit groups endorsed Facebook’s approach, including ConnectSafely and Family Online Safety Institute (FOSI). Both groups have received funding from Facebook and each has at least one representative on Facebook’s 13-person advisory board for Messenger Kids. The board also includes two representatives from MediaSmarts, which is sponsored by Facebook.

A Facebook spokesperson says, “We have long-standing relationships with some of these groups and we’ve been transparent about those relationships.” The spokesperson says many backers of Facebook’s approach, including Kristelle Lavallee of the Center on Media and Child Health, and Dr. Kevin Clark of George Mason University’s Center for Digital Media Innovation and Diversity, do not receive support from Facebook.

UPDATE 3:25 PM: This story has been updated with information about the advisory board for Messenger Kids.

UPDATE 4:25 PM 12/7/2017: This story has been updated with information about Sen. Blumenthal's and Sen. Markey's letter to Mark Zuckerberg.

Read more: https://www.wired.com/story/facebook-for-6-year-olds-welcome-to-messenger-kids/

Facebook rolls out AI to detect suicidal posts before they’re reported

This is software to save lives. Facebook’s new “proactive detection” artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.

Facebook previously tested using AI to detect troubling posts and to more prominently surface suicide-reporting options to friends in the U.S. Now Facebook will scour all types of content around the world with this AI, except in the European Union, where General Data Protection Regulation privacy laws on profiling users based on sensitive information complicate the use of this tech.

Facebook will also use AI to prioritize particularly risky or urgent user reports so they’re addressed by moderators more quickly, and tools to instantly surface local-language resources and first-responder contact info. It’s also dedicating more moderators to suicide prevention, training them to deal with the cases 24/7, and now has 80 local partners like Save.org, the National Suicide Prevention Lifeline and Forefront from which to provide resources to at-risk users and their networks.

“This is about shaving off minutes at every single step of the process, especially in Facebook Live,” says VP of product management Guy Rosen. Over the past month of testing, Facebook has initiated more than 100 “wellness checks” with first-responders visiting affected users. “There have been cases where the first-responder has arrived and the person is still broadcasting.”

The idea of Facebook proactively scanning the content of people’s posts could trigger some dystopian fears about how else the technology could be applied. Facebook didn’t have answers about how it would avoid scanning for political dissent or petty crime, with Rosen merely saying “we have an opportunity to help here so we’re going to invest in that.” There are certainly massive beneficial aspects about the technology, but it’s another space where we have little choice but to hope Facebook doesn’t go too far.

[Update: Facebook’s chief security officer Alex Stamos responded to these concerns with a heartening tweet signaling that Facebook does take seriously responsible use of AI.

Facebook CEO Mark Zuckerberg praised the product update in a post today, writing that “In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate.”

Unfortunately, after TechCrunch asked if there was a way for users to opt out of having their posts scanned, a Facebook spokesperson responded that users cannot opt out. They noted that the feature is designed to enhance user safety, and that support resources offered by Facebook can be quickly dismissed if a user doesn’t want to see them.]

Facebook trained the AI by finding patterns in the words and imagery used in posts that have been manually reported for suicide risk in the past. It also looks for comments like “Are you OK?” and “Do you need help?”
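Facebook hasn’t released its model, but the core idea — scanning both a post and the concerned comments it attracts — can be sketched with a simple keyword flagger. The phrase lists below are illustrative stand-ins for learned patterns, not Facebook’s actual training data, and the real system is a trained classifier rather than a regex list:

```python
import re

# Illustrative stand-ins for patterns a trained classifier would learn.
RISK_PATTERNS = [r"\bwant to die\b", r"\bend it all\b", r"\bno reason to live\b"]
CONCERN_PATTERNS = [r"\bare you ok\b", r"\bdo you need help\b", r"\bplease call me\b"]

def flag_for_review(post_text, comments):
    """Return True if the post or its comments match risk patterns,
    so a human moderator can review it proactively."""
    text = post_text.lower()
    if any(re.search(p, text) for p in RISK_PATTERNS):
        return True
    # Concerned replies from friends are a signal too.
    concerned = sum(
        1 for c in comments
        if any(re.search(p, c.lower()) for p in CONCERN_PATTERNS)
    )
    return concerned >= 2
```

The point of flagging rather than acting automatically is the workflow the article describes: the AI only routes likely cases to prevention-trained human moderators faster than user reports would.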

“We’ve talked to mental health experts, and one of the best ways to help prevent suicide is for people in need to hear from friends or family that care about them,” Rosen says. “This puts Facebook in a really unique position. We can help connect people who are in distress to friends and to organizations that can help them.”

How suicide reporting works on Facebook now

Through the combination of AI, human moderators and crowdsourced reports, Facebook could try to prevent tragedies like the father who killed himself on Facebook Live last month. Live broadcasts in particular have the power to wrongly glorify suicide, hence the new precautions, and to reach a large audience: everyone sees the content simultaneously, unlike recorded Facebook videos, which can be flagged and taken down before many people view them.

Now, if someone is expressing thoughts of suicide in any type of Facebook post, Facebook’s AI will both proactively detect it and flag it to prevention-trained human moderators, and make reporting options for viewers more accessible.

When a report comes in, Facebook’s tech can highlight the part of the post or video that matches suicide-risk patterns or that’s receiving concerned comments. That saves moderators from having to skim through a whole video themselves. The AI prioritizes these user reports as more urgent than other types of content-policy violations, like those depicting violence or nudity. Facebook says these accelerated reports get escalated to local authorities twice as fast as unaccelerated reports.

Mark Zuckerberg gets teary-eyed discussing inequality during his Harvard commencement speech in May

Facebook’s tools then bring up local language resources from its partners, including telephone hotlines for suicide prevention and nearby authorities. The moderator can then contact the responders and try to send them to the at-risk user’s location, surface the mental health resources to the at-risk user themselves or send them to friends who can talk to the user. “One of our goals is to ensure that our team can respond worldwide in any language we support,” says Rosen.

Back in February, Facebook CEO Mark Zuckerberg wrote that “There have been terribly tragic events — like suicides, some live streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner . . .  Artificial intelligence can help provide a better approach.”

With more than 2 billion users, it’s good to see Facebook stepping up here. Facebook has not only created a way for users to get in touch with and care for each other; it has also, unfortunately, created an unmediated real-time distribution channel in Facebook Live that can appeal to people who want an audience for violence they inflict on themselves or others.

Creating a ubiquitous global communication utility comes with responsibilities beyond those of most tech companies, which Facebook seems to be coming to terms with.

Read more: https://techcrunch.com/2017/11/27/facebook-ai-suicide-prevention/

Zuckerberg’s CZI donates to struggling towns near Facebook

Facebook’s success has led to gentrification and hardship in some towns close to its Menlo Park headquarters. So while the Chan Zuckerberg Initiative has committed more than $45 billion to solving health and education problems worldwide, today it’s strengthening its hyper-local philanthropy.

The new CZI Community Fund will provide $25,000 to $100,000 grants to nonprofits and nonprofit- or municipality-backed organizations working to improve education, housing, homelessness, immigration, transportation and workforce development in Belle Haven, East Palo Alto, North Fair Oaks and Redwood City, California. For reference, the average rent in East Palo Alto, just two miles from Facebook HQ, went up 24 percent in the past year alone.

“The Bay Area is our home. We love our community and are so proud to be raising our two daughters here,” writes CZI co-founder Priscilla Chan, Mark Zuckerberg’s wife. “But listening to the stories from our local leaders and neighbors, there is still a lot of work to do.”

The CZI has already backed some local projects, including criminal justice reform in California, and put $5 million toward Landed, a Y Combinator startup that helps schoolteachers afford down payments on homes in districts close to Facebook HQ. It also donated $3.1 million to Community Legal Services in East Palo Alto, which helps families affected by the local housing shortage who need legal protection, in some cases from wrongful evictions. Plus, CZI put $500,000 into the Terner Center for Housing Innovation at UC Berkeley to develop long-term answers to the regional housing crisis.

Organizations seeking funding from the CZI Community Fund can apply before December 1. They’ll be evaluated on the basis of alignment with the fund’s mission, impact potential, leadership, collaboration with other organizations, community engagement and fiscal responsibility to ensure funds aren’t wasted on overhead.

Map showing Facebook’s headquarters circled in blue, and the four nearby towns supported by the CZI Community Fund

Back in 2014, TechCrunch advocated for more of this hyper-local philanthropy by tech companies. At the time, Google was helping to pay for free bus passes for kids trying to get to school, after-school programs and work.

While tech giants can have global impact with scalable apps, the high salaries they pay can lead to rising housing and living prices in nearby areas. That’s fine for their employees, but can cause trouble for lower-income residents as well as the contractors these corporations employ to run their cafeterias or sweep their floors.

There are certainly worthy causes everywhere, and some in the developing world, like anti-malaria mosquito nets, can do a lot of good for a low price. But if tech companies want to be seen as good neighbors and offset the damage they do to nearby communities, they need to give back locally, not just globally.

Read more: https://techcrunch.com/2017/10/25/hyper-local-giving/

Facebook drops no-vote stock plan, Zuck will sell shares to fund philanthropy

Mark Zuckerberg has gotten so rich that he can fund his philanthropic foundation and retain voting control without Facebook having to issue a proposed non-voting class of stock that faced shareholder resistance. Today Facebook announced that it’s withdrawn its plan to issue Class C no-vote stock and has resolved the shareholder lawsuit seeking to block the corporate governance overhaul.

Instead, Zuckerberg says that because Facebook has become so valuable, he can sell a smaller allotment of his stake in the company to deliver plenty of capital to his Chan Zuckerberg Initiative foundation that aims to help eradicate disease and deliver personalized education to all children.

“Over the past year and a half, Facebook’s business has performed well and the value of our stock has grown to the point that I can fully fund our philanthropy and retain voting control of Facebook for 20 years or more,” Zuckerberg writes. Facebook’s share price has increased roughly 45 percent, from $117 to $170, since the Class C stock plan was announced, with Facebook now valued at $495 billion.

Mark Zuckerberg, Priscilla Chan and their daughters Max and August

“We are gratified that Facebook and Mr. Zuckerberg have agreed not to proceed with the reclassification we were challenging,” writes Lee Rudy, the partner at Kessler Topaz Meltzer & Check LLP who was representing the plaintiffs in the lawsuit seeking to block the no-vote share creation. Zuckerberg was slated to testify in the suit later this month, but now won’t have to. “This result is a full victory for Facebook’s stockholders, and achieved everything we could have hoped to obtain by winning a permanent injunction at trial.”

“I want to be clear: this doesn’t change Priscilla and my plans to give away 99% of our Facebook shares during our lives. In fact, we now plan to accelerate our work and sell more of those shares sooner,” Zuckerberg wrote. “I anticipate selling 35-75 million Facebook shares in the next 18 months to fund our work in education, science, and advocacy.” That equates to $5.95 billion to $12.75 billion worth of Facebook shares Zuckerberg will liquidate.
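Both dollar figures follow directly from the share price quoted above (roughly $170 at the time), as a quick sanity check shows:

```python
# Sanity check of the figures quoted in the article.
share_price = 170  # approximate Facebook share price at the time

low = 35_000_000 * share_price    # low end: 35M shares
high = 75_000_000 * share_price   # high end: 75M shares
pct_gain = (170 - 117) / 117      # rise since the Class C plan was announced

print(f"${low / 1e9:.2f}B to ${high / 1e9:.2f}B")  # $5.95B to $12.75B
print(f"{pct_gain:.0%} increase")                  # 45% increase
```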

When Zuckerberg announced the plan in April 2016, he wrote that being a founder-led company, where he controls enough votes to always steer Facebook’s direction rather than bowing to public shareholders, lets Facebook “resist the short term pressures that often hurt companies.” By issuing the non-voting shares, “I’ll be able to keep founder control of Facebook so we can continue to build for the long term, and Priscilla and I will be able to give our money to fund important work sooner.”

A spokesperson for the Chan Zuckerberg Initiative told TechCrunch that this outcome is very good for the foundation, because it provides more predictability to its funding. The plan will also allow Zuckerberg to deliver cash to the CZI sooner, which its new CFO Peggy Alford will be able to allocate between its health, education and advocacy projects.

With the new plan to sell shares, it’s unclear what might happen to Zuckerberg’s iron grip on Facebook’s future in “20 years or more.”

Dropping the Class C shares plan may be seen as a blow to Facebook board member Marc Andreessen, who Bloomberg revealed had coached Zuckerberg through pushing the proposed plan through the rest of the board. But given Zuckerberg’s power, Andreessen is unlikely to be ousted unless the Facebook CEO wants him gone.

Zuckerberg strolls through the developer conference of Oculus, the VR company he pushed Facebook to acquire

For the foreseeable future, though, Zuckerberg will have the power to shape Facebook’s decisions. His business instincts have proven wise over the years. Acquisitions he orchestrated that seemed pricey at first — like Instagram and WhatsApp — have been validated as their apps grow to multiples of their pre-buy size. And Zuckerberg’s relentless prioritization of the user experience over that of advertisers and outside developers has kept the Facebook community deeply engaged instead of pushed away with spam.

Zuckerberg’s ability to maintain power could allow him to continue to make bold or counter-intuitive decisions without shareholder interference. But the concentration of power also puts Facebook in a precarious position if Zuckerberg were to be tarnished by scandal or suddenly unable to continue his duties as CEO.

Zuckerberg warned investors when Facebook went public that “Facebook was not originally created to be a company. It was built to accomplish a social mission.” And yet Facebook has flourished into one of the world’s most successful businesses in part because shareholders weren’t allowed to sell its ambitions short.

Read more: https://techcrunch.com/2017/09/22/facebook-sharing/