Internet users once again turned into lab rats without their knowledge
At the beginning of July, a big story broke about Facebook conducting behavioral experiments on some of its users without their knowledge - specifically, altering the display order of news feed items to see whether looking at lots of good news would make people feel more cheerful, and vice versa. A relatively small subset of 700,000 users was affected, for a fairly brief period, and the manipulation was subtle - Facebook was tinkering with how news items were displayed, not hiding them outright - but a great deal of anger boiled over at the company for turning users into lab rats. Facebook's clunky P.R. handling of the incident didn't help matters any.
Something similar just happened with OKCupid, a dating web site, which has admitted to deliberately altering information presented to its users so it could learn from their responses. Some have argued OKCupid's manipulation was far worse than what Facebook did, especially the third experiment described at TechCrunch:
In one experiment, OKCupid removed all the photos from its website as it was rolling out a blind dating app to see how it impacted use. In the second, OKCupid ran a test to see how much a user’s picture affects viewer’s perception of their personalities. In the third, OKCupid told users that they had a 90 percent compatibility rate with users who they actually shared a 30 percent rate with.
By removing photos from its website, OKCupid learned information it could apply to its blind dating app. In its second test, it found that users saw personality and looks to be the same thing, and now instead of rating people on both personality and looks, users simply give one overall rating. Its third test seems to be the most controversial, but essentially it confirmed that OKCupid’s dating algorithm actually works — users don’t just work out because OKCupid suggests it.
They deliberately sent people on dates with incompatible partners? Wow.
TechCrunch writer Cat Zakrzewski argues this is not as bad as Facebook's emotional manipulation - which, under this line of reasoning, was bad primarily because it was conducted for empirical research, rather than specifically to improve the user experience:
Manipulating that data and finding best practices is just OKCupid doing its job. At the basic level, all social networks are altering what you see in your feed to make the time you spend on them better.
But Facebook’s study went beyond that. Facebook manipulated content in users’ feeds to see if the emotional tone of their News Feeds impacted the tone of their own posts on the social network, deliberately making people sad. After conducting the test on almost 700,000 users, it published those results in an academic journal.
Unlike OKCupid, Facebook didn’t alter the user’s experience simply to improve the algorithm for a business purpose. In this study, the company essentially conducted a psychological experiment that many consider unethical.
People sign up for Facebook to interact with their friends and read the content they share — both good and bad. As many before me have noted, Facebook shouldn’t mess with that for the sake of a study, especially one conducted without its users consent.
Call me old-fashioned, but I can't help thinking that tricking someone into believing their blind date is 90 percent compatible, when the two are really only 30 percent compatible, might end up "deliberately making people sad" too. Of course, it could also have led to some delightful surprises. The information to be gained from this kind of manipulation is always interesting, but it seems very odd to dismiss the complaints of users who have been secretly dragged into a maze to pursue some psychological cheese, without expressly granting their permission. (The logical argument against seeking consent is that it colors the experiment, but that ethical case seems trumped by the fact that pre-Internet experiments managed to get by with consent forms anyway. I'm sure psych researchers of past decades would have appreciated the pure testing environment they could have created by randomly kidnapping test subjects off the street.)
OKCupid has defended itself by noting that its Terms of Service do include consent to participate in "diagnostic research" - whereas Facebook got in hot water by altering its Terms of Service only after the fact - and that the subjects of OKCupid's tinkering were notified once the tests were over. It has also been argued that nearly every web site - and even online-enabled local applications, such as computer games - is modified occasionally in the quest to improve the user experience, that the modifications are sometimes made without warning, and that online providers often quietly monitor reactions to help make wise revision decisions in the future.
But the degree and context of such manipulation are not irrelevant. Users have a certain expectation of trust with providers, which is more important than ever in an online environment, where so much information is rising into the Cloud. It's interesting to debate exactly where that envelope of trust should end, or how it should be enforced. It would be prohibitively difficult to run any online application if every single user had to accept every individual modification, and could reject any of them to keep the old experience intact. Users would also find constant "hey, we changed this, is it cool with you?" notifications annoying.
However, I would suggest the expectation of trust includes the core functions of an online experience. The users of a social media site expect to see messages from their friends, without filtering arranged by the invisible hands of experimental overseers. The users of a dating site expect its core mission of finding compatible partners to be conducted in a straightforward manner. Indeed, honesty is part of the basic mission statement of such a site. Planting a line about "diagnostic research" in the thick legal stew of a Terms of Service agreement - which we all know most users don't read very carefully, and certainly don't remember in detail months later - is not enough to cover a breach of honesty conducted as part of a psychological experiment, even if the ultimate goal is improving the experience for all users.
As I mentioned when writing about the Facebook mess, this is one of those new Information Age debates that won't be easy to conclusively resolve. But I wonder if one useful step would be to approach a subset of users with some sort of modest incentive - say, a free month of service - in exchange for their willing participation in future experiments without further notice. I tend to think a large web service could attract enough volunteers that way, while essentially eliminating the risk of hard feelings later on.