The Next Wave of Social Experimentation

How many times a day would you say that you check the Facebook app on your phone? Twice? Ten times? Not at all? Since its emergence in mainstream media in 2005, Facebook has changed the way people look at social media. But this influence has not come without its share of controversies, the most recent of which is an experiment conducted on users in 2012.
Last week it was reported that Facebook would continue its experiments on users, but will attempt to do so in “a less creepy way.”
This announcement came as a result of uproar over an experiment that Facebook conducted on 690,000 users in early 2012. In this experiment some users “were shown a higher number of positive posts in their News Feeds, while others were shown more negative posts, in an attempt to gauge their emotional responses.” Facebook monitored those emotional responses by looking at what users posted after viewing their News Feeds. The results were what you would expect: users who saw a more positive News Feed were in a better mood, while users who saw a more negative News Feed were in a worse one. Essentially, Facebook found that moods were contagious. Is it just me, or did we already know this before Facebook decided to manipulate News Feeds? Facebook claims that conducting experiments on users will help create a better Facebook. However, this experiment did not necessarily make users’ experience better, and in some cases made it worse.
This is not the first time Facebook has run experiments on users. The company is constantly changing the way ads appear, and the number of ads that appear, in a user’s News Feed. It usually does this experimentation with its in-house data scientists or in conjunction with academic researchers, and when the in-house data scientists run experiments there are virtually no limits on what they can do. For example, another study in 2012 monitored people’s self-censorship on Facebook by looking at entries longer than five characters that did not get posted within 10 minutes of being started. This experiment was even done on people with the highest privacy settings possible. Further, in 2010, Facebook monitored how information spreads through social media. It did this by assigning some links “share” status and others “not-share” status; “not-share” links were censored from users’ view. Facebook wanted to see whether the “not-share” links would still find their way onto Facebook even when they never appeared in people’s News Feeds.
Facebook is not the only internet company that experiments on its users. Companies like Google conduct what are known as A/B tests, in which a site shows a different version of a page to a small group of users. This means that you and the person next to you could be visiting the same website but seeing different screens. The purpose of these experiments is to improve the user experience and to figure out better ways to reach consumers.
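To make the mechanics concrete, here is a minimal sketch of how an A/B assignment is often implemented: hash a stable user identifier into buckets and send each bucket to one variant. The function name, the 50/50 split, and the experiment label below are hypothetical, not drawn from any company’s actual system.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, buckets: int = 100) -> str:
    """Deterministically assign a user to a variant for a given experiment.

    Hashing the user ID together with the experiment name keeps the
    assignment stable across visits while keeping experiments independent.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % buckets
    # Hypothetical 50/50 split: the lower half of buckets sees the control
    # page ("A"), the upper half sees the experimental variant ("B").
    return "A" if bucket < buckets // 2 else "B"

# Example: different users may land in different variants of the same page.
print(assign_variant("user-123", "new_feed_layout"))
print(assign_variant("user-456", "new_feed_layout"))
```

The point of the deterministic hash is that the same user always sees the same version of the page, which is why two people sitting side by side can have consistently different experiences of the “same” site without ever being told an experiment is running.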
The legal aspects of the Facebook experiment, and of internet experiments in general, come up primarily in two areas: privacy and Terms of Use. If I have the highest privacy settings available, should Facebook be able to violate that privacy for its experiments? Not only that, should Facebook be able to override its own privacy settings to figure out what I am looking at? Facebook argues that when users check the Terms of Use box they are giving their consent to being experimented on. While nothing illegal has been done here, there are certainly some ethical considerations at work. I, personally, checked my Terms of Use box eons ago and certainly didn’t read what it said. Just because I checked the box, does that give Facebook the right to experiment on me? Should Facebook be forced to be more transparent about what it does with users’ data and how and when it conducts experiments with them?