That Facebook study might portend mood manipulation far beyond our News Feeds

The ethical debate over Facebook’s mood-manipulation study that caught the nation’s attention over the weekend is certainly warranted, but the bigger debate goes far beyond Facebook. It also goes far beyond the field of scientific research. It’s about something much deeper than A/B testing page designs and interactions with new features, or even using algorithms such as EdgeRank or PageRank to filter results in the name of better usability or effectiveness. Some of us might even celebrate that type of content manipulation.

Rather, the study highlights the ease with which web companies can now collect and analyze an unprecedented amount of behavioral data (something we already knew they could do) and then, apparently, use it to affect how we behave. Not whether we stay on the site longer or click through to another piece of content, but the literal words we choose to type or thoughts we choose to share. And while it’s easy enough to laugh off this particular study (“Oh, no! I might not have complimented my friend’s baby strongly enough!”), there are plenty of situations where this type of manipulation would be no laughing matter.
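To make the mechanism concrete, what the study did amounts to something like the following sketch: quietly drop a fraction of the posts that fall on one side of emotional neutral before a feed is rendered. This is a purely illustrative toy in Python, not Facebook’s actual code; the Post structure, the sentiment scores and the omission rate here are all assumptions.

    # Illustrative sketch (assumptions throughout): suppress one emotional pole
    # of a feed so the reader's exposure skews the other way. Not Facebook's
    # code; Post, sentiment and omit_prob are invented for this example.
    from dataclasses import dataclass
    import random

    @dataclass
    class Post:
        author: str
        text: str
        sentiment: float  # -1.0 (very negative) to 1.0 (very positive)

    def skewed_feed(posts, suppress="positive", omit_prob=0.3):
        """Return a feed with a fraction of one emotional pole silently dropped."""
        feed = []
        for post in posts:
            on_target_pole = post.sentiment > 0 if suppress == "positive" else post.sentiment < 0
            if on_target_pole and random.random() < omit_prob:
                continue  # the post simply never reaches the reader
            feed.append(post)
        return feed

The unsettling part is how little machinery this takes: a sentiment score and a coin flip, applied before anything reaches your screen.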

On Sunday, for example, MailChimp Chief Data Scientist John Foreman published a blog post describing a world where data mining like that done in the Facebook study is used to help advertisers create generic classes of consumers to whom they can present generic content:

“Rather than tailor marketing content to a user’s unique emotional make-up, Facebook has shown that they can use tangentially related (and free!) user-generated content to push a user toward marketing content generated for a more general emotional state: insecure, hungry, lonely, etc. They can edit together photos and posts in a stream to skew a user’s view of reality and shift them into one of these compromised emotional states.

“In other words, if they can’t use data to generate enough personalized content to target people, maybe they can use data to generate vanilla people within a smaller set of emotional states. Once you have a set of vanilla people, then your American Apparel ads will work on them without customization.

“The promise of data modeling at Facebook is to place us in chains made from the juxtaposition of our own content. We’ll be driven into pens made of a few profitable emotional states where marketing content waits for us like a cattle gun to the skull.”

It’s a cynical prediction, but it’s not entirely inconceivable either at Facebook or, perhaps more likely, at some less-scrupulous social media site or app.

From clicking ads to clicking your seatbelt

What would be even worse is if manipulating a user’s state of mind led to more meaningful results than just some thoughtless ads. Some of the previous fears about big data, bolstered by our sharing of everything via social media, ratings sites and connected devices, have to do with using data mining as a means to a new type of digital discrimination. Think about banks, insurance companies or prospective employers deeming some of us too risky for low-interest loans, reasonable premiums or respectable jobs.

We have written about it before, as have Tim O’Reilly and the White House. But Facebook has now helped demonstrate that maybe this type of discrimination can evolve into manipulation. Maybe the companies that already have undue influence over certain portions of our lives can exercise even more control by just helping decide what we see in order to “improve” our behavior.

[Image: A personality profile from Five Labs, based on Facebook posts. Maybe someone should help me turn up the agreeableness quotient.]

Safer driving, healthier lifestyles, better customer service, better financial choices. There are all sorts of reasons, none of them about selling jeans, why companies might seek a little help ensuring their customers or employees are feeling happier, more body-conscious or more risk-averse. RunKeeper updates down or posts about binge-drinking up? Maybe it’s time to turn the content knob a bit. We’re only doing it to help …

If you’re a company prone to questionable ethics, why not try to convince a company like Facebook to do a little research into whether it can make that happen? Why not convince it to try to make it happen? After all, the potential benefits to the bottom line would probably outweigh the risk of some nominal fine down the road. The public relations storm will blow over in due time.

I’m not predicting this will happen or that the effects will even be noticeable if it does happen. But we’ve now seen that it could happen, that it could work and that we probably won’t even realize it’s going on. For those concerned about the societal consequences of sharing so much data about what we’re doing and how we’re feeling, Facebook’s study provides plenty to think about.

Feature image courtesy of Shutterstock user Sergey Nivens.
