Over the course of a five-day competition, the “Lengpudashi” artificial intelligence raked in more than a quarter-million dollars in poker chips from a crack team assembled specifically to defeat it.
This is the second major victory for the AI system designed by Carnegie Mellon University computer science professor Tuomas Sandholm and Ph.D. student Noam Brown. In January, an earlier version, “Libratus,” cleaned up $1.5 million in imaginary cash at Rivers Casino in Pittsburgh. This time, however, the AI was playing for keeps.
Lengpudashi beat a team led by World Series of Poker winner Yue Du, composed of computer scientists, engineers, and investors. Despite the array of knowledge and talent at the table, it was a landslide: Lengpudashi handily defeated every human challenger, even against elaborate tactics grounded in extensive knowledge of both machine learning and the game itself.
Poker serves as an excellent challenge for the advancement of artificial intelligence, because it is a game of “imperfect information.” With limited knowledge of what cards may or may not be in play, much of the game is concerned with the potential for players to “bluff,” or take actions that seem to indicate stronger or weaker cards than they actually possess.
It’s a unique obstacle for a computer, and bluffing has traditionally been thought of as a “very human” thing, according to Brown. But, the CMU Ph.D. student says, “it turns out that’s not true. A computer can learn from experience that if it has a weak hand and it bluffs, it can make more money.” The computer wasn’t even taking notes from players. According to Sandholm, “its strategies were computed from just the rules of the game,” not from analysis of historical data.
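To see how bluffing can fall out of nothing but the rules of a game, here is a minimal sketch of counterfactual regret minimization (CFR), a standard equilibrium-finding method for imperfect-information games, applied to Kuhn poker, a classic three-card toy game. This is an illustration only, not the actual Lengpudashi system: through self-play alone, the algorithm learns to sometimes bet with the worst card (a bluff) and to fold that card when facing a bet.

```python
import random

PASS, BET = 0, 1
NUM_ACTIONS = 2

class Node:
    """Information set: accumulates regrets and the average strategy."""
    def __init__(self):
        self.regret_sum = [0.0] * NUM_ACTIONS
        self.strategy_sum = [0.0] * NUM_ACTIONS

    def get_strategy(self, reach_weight):
        # Regret matching: play actions in proportion to positive regret.
        strategy = [max(r, 0.0) for r in self.regret_sum]
        norm = sum(strategy)
        strategy = ([s / norm for s in strategy] if norm > 0
                    else [1.0 / NUM_ACTIONS] * NUM_ACTIONS)
        for a in range(NUM_ACTIONS):
            self.strategy_sum[a] += reach_weight * strategy[a]
        return strategy

    def get_average_strategy(self):
        norm = sum(self.strategy_sum)
        return ([s / norm for s in self.strategy_sum] if norm > 0
                else [1.0 / NUM_ACTIONS] * NUM_ACTIONS)

nodes = {}  # info-set string -> Node

def cfr(cards, history, p0, p1):
    """Returns expected utility for the player to act; 'p'=pass, 'b'=bet."""
    plays = len(history)
    player = plays % 2
    if plays > 1:
        higher = cards[player] > cards[1 - player]
        if history[-1] == 'p':
            if history == 'pp':              # check-check: showdown for antes
                return 1 if higher else -1
            return 1                          # opponent folded to a bet
        if history[-2:] == 'bb':              # bet-call: showdown for 2
            return 2 if higher else -2
    info_set = str(cards[player]) + history
    node = nodes.setdefault(info_set, Node())
    strategy = node.get_strategy(p0 if player == 0 else p1)
    util = [0.0] * NUM_ACTIONS
    node_util = 0.0
    for a in range(NUM_ACTIONS):
        nxt = history + ('p' if a == PASS else 'b')
        if player == 0:
            util[a] = -cfr(cards, nxt, p0 * strategy[a], p1)
        else:
            util[a] = -cfr(cards, nxt, p0, p1 * strategy[a])
        node_util += strategy[a] * util[a]
    for a in range(NUM_ACTIONS):
        # Weight regret by the opponent's probability of reaching this node.
        node.regret_sum[a] += (p1 if player == 0 else p0) * (util[a] - node_util)
    return node_util

def train(iterations):
    """Self-play training; returns average game value for player 1."""
    cards = [1, 2, 3]  # Jack, Queen, King
    total = 0.0
    for _ in range(iterations):
        random.shuffle(cards)
        total += cfr(cards, '', 1.0, 1.0)
    return total / iterations
```

After training, the average strategy at info set `'1p'` (holding the Jack after a check) assigns real probability to betting, i.e., bluffing with the weakest hand, while `'1b'` (Jack facing a bet) learns to fold. No human games were consulted; only the rules and self-play, which is the point Sandholm is making.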
Of course, the game itself is not the goal; Sandholm and other AI researchers are focused on teaching computers to make sound strategic decisions that will translate into everything from business and finance to cybersecurity. Understanding something as nuanced as when to bluff in poker could help AI grasp the power of manipulating the appearance of information, or how to make decisions without a complete or accurate picture of the situation.
It also suggests that, by identifying a better way to subvert humans in service of its ultimate goal, the AI taught itself to lie. In light of this, I think it has become appropriate to coin a word that simultaneously means both “incredibly cool” and “utterly terrifying.” We’re still in need of one.
Follow Nate Church @Get2Church on Twitter for the latest news in gaming and technology, and snarky opinions on both.