Under Fire and Losing Trust, Facebook Plays the Victim

On Tuesday morning, Facebook employees were quiet even for Facebook employees, buried in the news on their phones as they shuffled to a meeting in one of the largest cafeterias at the company’s headquarters in Menlo Park, Calif. Mark Zuckerberg, their chief executive officer, had always told them Facebook Inc.’s growth was good for the world. Sheryl Sandberg, their chief operating officer, had preached the importance of openness. Neither appeared in the cafeteria on Tuesday. Instead, the company sent a lawyer.

The context: reports in the New York Times and the Observer the previous weekend that Cambridge Analytica, the political consulting firm that advised President Trump’s election campaign on digital advertising, had effectively stolen personal information from at least 50 million Americans. The data had come from Facebook, which had allowed an outside developer to collect it; that developer then shared it with Cambridge Analytica.

Facebook tried to get ahead of the story, announcing in a blog post that it was suspending the right-leaning consultancy and that it no longer allowed this kind of data sharing. Its users—a cohort that includes 2 billion or so people—weren’t ready to forgive. The phrase #DeleteFacebook flooded social media. (Among the outraged was WhatsApp co-founder Brian Acton, who in 2014 sold Facebook his messaging app for $19 billion.) Regulators in the U.S. and Europe announced they were opening inquiries. The company’s stock fell almost 9 percent over March 19 and 20, erasing about $50 billion of value.

In past moments of crisis, Zuckerberg or Sandberg has typically played damage-controller-in-chief. This time, the employees got all of 30 minutes with Paul Grewal, the deputy general counsel. If the news reports were true—a blame-deflecting phrase that struck some as odd—Grewal told them, Facebook had been lied to. Cambridge Analytica should have deleted the outside developer’s data, but it didn’t. Reporters were calling this a breach, but it wasn’t, because users freely signed away their own data and that of their friends. The rules were clear, and Facebook followed them.

One employee asked the same question twice: Even if Facebook played by its own rules, and the developer followed policies at the time, did the company ever consider the ethics of what it was doing with user data? Grewal didn’t answer directly.

A Facebook spokesman declined to comment for this story, referring instead to a January post in which Zuckerberg stated his aim to get the company on a “better trajectory.”

One possible explanation for Facebook management’s blasé attitude is that the company has weathered complaints about violating user privacy since its earliest days. The first revolt came in 2006, when users protested that the service’s news feed was making public information that users had intended to keep private. The news feed is now the company’s core service. In 2009, Facebook began making users’ posts, which had previously been private, public by default. That change triggered anger, confusion, an investigation by the U.S. Federal Trade Commission, and, ultimately, a consent decree. In 2014, the company disclosed that it had tried to manipulate users’ emotions as part of an internal psychology experiment.

As bad as each of these may have seemed, Facebook users have generally been unfazed. They’ve used the service in ever-greater numbers for greater amounts of time, in effect trading privacy for product. They were willing to give more and more data to Facebook in exchange for the ability to connect with old high school friends, see pictures of their grandkids, and read only the news they agreed with. The concept was dubbed Zuckerberg’s Law in 2008, when the CEO argued at a conference that each year people would share twice as much information about themselves as they had the year before. Notions of privacy were eroding, Zuckerberg said in 2010. “That social norm,” he added, “is just something that has evolved over time.”

For a while, the only thing Facebook needed to do to keep growing was to remove barriers to downloading and using the product. By 2014, it had reached almost half the world’s internet-connected population, and Zuckerberg realized the only way to expand further was to add people to the internet. While Facebook invested in internet subsidy programs in developing countries, it also went on an acquisition binge, buying up popular social software makers such as Instagram and WhatsApp.

These moves led to annual revenue growth of about 50 percent, with most of the increase coming from mobile ads, and converted the company’s Wall Street doubters. Last year, even as Facebook was forced to acknowledge that it had played a role in the Russian disinformation campaign during the 2016 election, investors pushed its stock price up 53 percent.

But the big blue app, as employees call Facebook’s namesake service, hasn’t changed much in years. The company has tweaked its algorithm, at times favoring or punishing clickbait-style news and viral videos, but most people use the service the same way they did two or three years ago. And some people are simply over it. In North America, Facebook’s daily user counts fell for the first time in the fourth quarter, and time spent on the site declined by 50 million hours a day. Facebook claimed that this was by design: Zuckerberg was focusing on helping users achieve “time well-spent,” with the news feed de-emphasizing viral flotsam.

The company positioned its new algorithmic initiative as a response to a study co-authored by one of its employees, which argued that while Facebook could be bad for users’ mental health when used passively, more active use was actually good for them. The study could be viewed as a rare show of corporate transparency or as a novel way to goose engagement.

Some of the moves, however, look even more desperate. Now, when people stop going on Facebook as often as usual, the company sends them frequent emails and text messages to encourage them to re-engage. It’s also getting more aggressive about suggesting what users should post. According to some employees, the focus on time well-spent just means the company will point to metrics such as comments and personal updates as signs of growth, rather than genuinely improving the user experience.

In the long run, Facebook wants to make its product even more immersive and personal than it is now. It wants people to buy video chatting and personal assistant devices for their homes, and plans to announce those products this spring, say people familiar with the matter. It wants users to dive into Facebook-developed virtual worlds. It wants them to use Facebook Messenger to communicate with businesses, and to store their credit-card data on the app so they can use it to make payments to friends.

Employees have begun to worry that the company won’t be able to achieve its biggest goals if users decide that Facebook isn’t trustworthy enough to hold their data. At the meeting on Tuesday, the mood was especially grim. One employee told a Bloomberg Businessweek reporter that the only time he’d felt as uncomfortable at work, or as responsible for the world’s problems, was the day Donald Trump won the presidency.
