How We Got Here: Looking at Facebook's Failure to Protect User Privacy as It Negotiates Billion-Dollar Settlement with the FTC

March 8, 2019

Reports emerged on February 14 that Facebook and the FTC were nearing a settlement over the Cambridge Analytica data breach and the social media company’s privacy policies. It is anticipated that Facebook will ultimately agree to pay a multibillion-dollar fine – “the largest the [FTC] has ever imposed on a technology company.” However, Facebook’s data privacy problems did not begin with Cambridge Analytica and likely will not end there.

The FTC began communicating with Facebook about protecting user privacy as early as December of 2007, when the company launched its “social advertising” plan. The plan allowed users, if they chose, to show their Facebook friends their purchases and product recommendations. At the time, the plan received mixed reviews from privacy experts. Some commended Facebook for allowing users to choose whether or not to share product recommendations with their social network. However, others “expressed concern that Facebook might one day change its policy of not sharing data with marketers.” Specifically, there was concern over “shifting promises” – a worry that Facebook would eventually market users’ brand and product preferences without their permission.

In 2011, these predictions came true. Facebook was charged by the FTC in an eight-count complaint with failing to protect user data and failing to keep promises regarding user privacy. One of the promises Facebook was found to have broken was its promise to not share users’ personal information with advertisers without their permission. One of the conditions of the 2011 settlement was the requirement that Facebook obtain “independent, third-party audits” every two years for a period of 20 years “to ensure that the privacy of consumers’ information [was] protected.”


The most bizarre privacy scandal in Facebook’s 15-year history was its “mood-manipulation experiment” in 2014, in which over half a million users had their news feeds manipulated to show either more positive or more negative posts. This attempt to measure the influence of social media on people’s emotions was widely criticized as unethical.

In early 2018, the FTC began an investigation into “whether Facebook violated terms of [the] 2011 settlement when data of up to 50 million users was transferred to [Cambridge Analytica],” a firm “tied to President Donald Trump’s campaign.” The crisis that has unfolded over the last year “has its roots” in the company’s 2007 decision to launch its social advertising scheme.

The core weakness in Facebook’s privacy protection was the ability of mobile app developers to access user data by creating apps “that plugged into Facebook’s platform.” That platform provided access not only to the data of people who used the app, but also to the private data of each app user’s Facebook friends. Facebook attempted to remedy this privacy issue in 2015 by changing the rules for app developers with access to its platform.

However, the new rules had no retroactive effect, and Facebook had no control over or ability to “keep track of how developers used previously downloaded data.” Also in 2015, Facebook became aware of unauthorized sharing of the data that was later “allegedly used by Cambridge Analytica” during the 2016 election. Facebook ordered the involved parties to delete that data a year prior to the election, but learned in 2018 that the data had been kept. A former platform-operations manager at Facebook, who held the role from 2011 to 2012, professed that the company had a history of ineffective data protection: “The main enforcement mechanism was call them and yell at them.”

Amid the backlash over the Cambridge Analytica scandal, Mark Zuckerberg himself came forward to make three renewed promises regarding steps Facebook would take to ensure protection of user privacy: an investigation into apps that had access to user data prior to 2014, when Facebook became aware of its platform weaknesses; new limitations on the kind of data accessible to apps on the platform; and the creation of a tool that would allow users to view and manage which apps have access to their data. The only problem with this plan is that Facebook has a long history of breaking promises on user privacy. The last few months have seen new allegations leveled against Facebook for allowing companies like Netflix and Spotify to have virtually unfettered access to user data, including private messages. No matter the outcome of the anticipated settlement with the FTC, it appears that the controversies surrounding Facebook’s efforts, or lack thereof, to protect user privacy are far from over.

Samantha Taylor, 18 February 2019