The tech elite are teaming up to protect humanity from evil Matrix-like artificial intelligence. Elon Musk and a team of Silicon Valley elite have reportedly pledged more than a billion dollars to construct artificial intelligence that benefits mankind, rather than enslaves it.
In a fascinating interview with Backchannel’s Steven Levy on Medium, Musk and his partner, Y Combinator CEO Sam Altman, describe their strategy for saving humanity through their new nonprofit, OpenAI (and how innovators can get involved).
Partner and Combine with Artificial Intelligence
Musk’s basic plan is to combine with artificial intelligence and distribute it across every single person, rather than concentrate it in a single supercomputer.
“There’s two schools of thought — do you want many AIs, or a small number of AIs? We think probably many is good. And to the degree that you can tie it to an extension of individual human will, that is also good,” explains Musk.
Instead of using AI as a kind of sophisticated servant, AI becomes an extension of the human mind. “Each person is essentially symbiotic with AI as opposed to the AI being a large central intelligence that’s kind of an other.”
Their nonprofit, OpenAI, is in its infancy, so there aren’t many details, and Musk and Altman say this is a multi-decade program. But the interview does give us a solid idea of their plan to protect humanity from the threats Musk and his supporters fear.
Silicon Valley’s New Charity
It appears that OpenAI has become a charity for Silicon Valley, attracting in-kind donations in particular. OpenAI has already inked a few high-profile partnerships, including one with Amazon, which, according to Altman, is “donating a huge amount of infrastructure to the effort.” Presumably, this means Amazon is donating its spare supercomputing power.
OpenAI has also taken smaller product donations from the likes of workplace communication startup Asana. While much of the world is focused on solving today’s problems, Silicon Valley is very much a future-oriented culture, so it’s no surprise that a charity exists to save us from threats that don’t exist yet.
Readers can learn more about the project on OpenAI’s website.