Google deep learning researcher François Chollet took to Twitter recently to describe Facebook's use of "digital information consumption" as a "psychological control vector," adding of the company's A.I. ambitions, "Personally, it really scares me."
In a long thread of tweets, Chollet discussed the problems Facebook poses going forward and the risks users take by putting their personal information on the platform. He also discussed the dangers artificial intelligence could pose when given access to the data of billions of users, and how Facebook could use A.I. to shape and manipulate users.
The problem with Facebook is not *just* the loss of your privacy and the fact that it can be used as a totalitarian panopticon. The more worrying issue, in my opinion, is its use of digital information consumption as a psychological control vector. Time for a thread
— François Chollet (@fchollet) March 21, 2018
The world is being shaped in large part by two long-time trends: first, our lives are increasingly dematerialized, consisting of consuming and generating information online, both at work and at home. Second, AI is getting ever smarter.
— François Chollet (@fchollet) March 21, 2018
These two trends overlap at the level of the algorithms that shape our digital content consumption. Opaque social media algorithms get to decide, to an ever-increasing extent, which articles we read, who we keep in touch with, whose opinions we read, whose feedback we get
— François Chollet (@fchollet) March 21, 2018
Chollet notes that as Facebook becomes many people's main source of news, and as it controls what content and news are seen on its platform, it is beginning to shape users' worldviews and politics.
If Facebook gets to decide, over the span of many years, which news you will see (real or fake), whose political status updates you’ll see, and who will see yours, then Facebook is in effect in control of your political beliefs and your worldview
— François Chollet (@fchollet) March 21, 2018
This is not quite news, as Facebook has been known to run since at least 2013 a series of experiments in which they were able to successfully control the moods and decisions of unwitting users by tuning their newsfeeds' contents, as well as predicting users' future decisions
— François Chollet (@fchollet) March 21, 2018
In short, Facebook can simultaneously measure everything about us, and control the information we consume. When you have access to both perception and action, you’re looking at an AI problem. You can start establishing an optimization loop for human behavior. A RL loop.
— François Chollet (@fchollet) March 21, 2018
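The "optimization loop" Chollet describes — observe a user's reaction (perception), choose what to show next (action), and repeat to maximize engagement — is, in its simplest form, a multi-armed bandit problem. The sketch below is a hypothetical toy illustration of that loop, not Facebook's actual system: the content types, engagement probabilities, and epsilon-greedy strategy are all assumptions chosen to make the mechanism concrete.

```python
import random

random.seed(0)

# Hypothetical engagement probabilities per content type (the simulated
# "environment"). These numbers are illustrative only.
TRUE_ENGAGEMENT = {"outrage": 0.8, "news": 0.5, "friends": 0.6}

estimates = {k: 0.0 for k in TRUE_ENGAGEMENT}  # learned engagement estimates
counts = {k: 0 for k in TRUE_ENGAGEMENT}       # how often each type was shown

def choose(epsilon=0.1):
    """Epsilon-greedy action: usually exploit the best estimate, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(estimates))
    return max(estimates, key=estimates.get)

for step in range(5000):
    content = choose()                                     # action: what to show
    engaged = random.random() < TRUE_ENGAGEMENT[content]   # perception: user reaction
    counts[content] += 1
    # Incremental mean update of the engagement estimate for this content type.
    estimates[content] += (engaged - estimates[content]) / counts[content]

best = max(estimates, key=estimates.get)
print(best)
```

After a few thousand iterations the loop converges on whichever content type yields the most engagement, regardless of any other property of that content — which is the core of the concern Chollet raises.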
Chollet warns that, combined with advanced artificial intelligence, this feedback loop could let Facebook easily manipulate users as it sees fit.
A good chunk of the field of AI research (especially the bits that Facebook has been investing in) is about developing algorithms to solve such optimization problems as efficiently as possible, to close the loop and achieve full control of the phenomenon at hand. In this case, us
— François Chollet (@fchollet) March 21, 2018
This is made all the easier by the fact that the human mind is highly vulnerable to simple patterns of social manipulation. While thinking about these issues, I have compiled a short list of psychological attack patterns that would be devastatingly effective
— François Chollet (@fchollet) March 21, 2018
Chollet is particularly worried about Facebook's recent investments in A.I., warning that the company's initiatives could have a devastating impact in the future — and, he adds, they frighten him personally.
We’re looking at a powerful entity that builds fine-grained psychological profiles of over two billion humans, that runs large-scale behavior manipulation experiments, and that aims at developing the best AI technology the world has ever seen. Personally, it really scares me
— François Chollet (@fchollet) March 21, 2018
He finished his tweetstorm by urging anyone who works in A.I. not to work for Facebook:
If you work in AI, please don't help them. Don't play their game. Don't participate in their research ecosystem. Please show some conscience
— François Chollet (@fchollet) March 21, 2018
However, WikiLeaks founder Julian Assange pointed out that Google is allegedly engaged in many of the same practices as Facebook, and that it may reach the goal Chollet warns about sooner than Facebook does.
Google AI researcher says Facebook AI is a "dire threat" to humanity. He's right. But Google may get there first. https://t.co/ZYAUbPkeRI
— Assange Defence (@AssangeDefence) March 23, 2018
Read Chollet’s full thread of tweets here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or email him at lnolan@breitbart.com