AI and Poker

Poker Has Been Conquered by an Artificial Intelligence Strategy

Poker is a game that demands strategy, intuition, and the ability to reason from hidden information. Artificial intelligence has already beaten top humans at games such as chess and Go, where both players can see the full state of the board, but poker has proven harder to crack because of its hidden cards and element of chance. Despite those difficulties, artificial intelligence can now play poker and win.

DeepStack and Libratus, two earlier artificial intelligence systems, laid the groundwork for Pluribus, an AI that defeated professional players at six-player Texas Hold ’em, the most popular variation of poker. The accomplishment matters beyond gaming: it suggests that artificial intelligence can grow to help solve some of the most difficult problems in the world.

A professor at Carnegie Mellon University named Tuomas Sandholm, who was involved in the development of Pluribus, made the following statement in a press release: “The ability to beat five other players in such a complicated game opens up new opportunities to use AI to solve a wide variety of real-world problems.”

A Scalable Method Called DeepStack Is Used to Win at Poker

The DeepStack team at the University of Alberta in Edmonton, Canada, combined deep learning with game-solving algorithms to create an artificial intelligence capable of winning at two-player, “no-limit” Texas Hold ’em. This game is harder for AI to master than chess or Go because of its element of chance, hidden cards, and players’ bluffs. DeepStack’s neural networks were trained on more than 10 million different poker game scenarios, and the AI consults those networks to decide its most effective course of action. Poker professionals from the International Federation of Poker competed against DeepStack in two-player Texas Hold ’em. Over 44,852 games, DeepStack won by a margin more than ten times larger than what professional poker players consider substantial.

Libratus Is the Undisputed Champion of Two-Player Texas Hold ’em

The Libratus artificial intelligence was developed in 2017 at Carnegie Mellon University by Noam Brown and Tuomas Sandholm. It was designed to play two-player poker, and it proved unbeatable. The system required one hundred central processing units (CPUs) to run. Over a 20-day poker battle, Libratus faced off against four of the highest-ranked Texas Hold ’em players, playing a total of 120,000 hands. It won decisively, finishing ahead by $1.8 million in chips.

Pluribus Beats the Pros at Six-Player Texas Hold ’em

Pluribus, an AI that beat some of the finest poker players in the world at six-player Texas Hold ’em, marked a critical milestone on the path to a fully capable poker-playing machine. The project was a collaboration between researchers at Carnegie Mellon University and Facebook AI. It was the first time an AI competed in a game against more than one opponent, and the first time an AI could not rely on game-theoretic strategy alone to win. Now that artificial intelligence can beat multiple players in such a complicated game, researchers see a gateway to some of the world’s most intractable problems, such as automated negotiation, drug development, security and cybersecurity, self-driving cars, and improved fraud detection.

The results obtained by Pluribus were remarkable. It competed in 10,000 hands of poker against five other players, each drawn from a pool of professionals who had won a million dollars or more at the game. On average, Pluribus beat its human opponents, winning $480 for every 100 hands played, a result comparable to what professional poker players strive to achieve.

The research team developed Pluribus by building on what they had learned from Libratus, and the search algorithm was completely revamped as a result. The standard winning formula for AI in two-player strategic games is to search the game’s decision tree all the way to its end before each move. That technique was not feasible in a multi-player game: there was far too much hidden information, and the number of alternatives to process was vastly higher. Instead of analyzing every possible play to the end of the game, Pluribus looks ahead only a few moves when deciding what action to take. The AI trains itself through a process of reinforcement learning: it repeatedly reviews past plays and evaluates how well they fared given the conditions. If it discovers that a different move could have led to a more favorable outcome, it incorporates that discovery into its subsequent gameplay.
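Pluribus’s actual training uses a sophisticated form of counterfactual regret minimization, but the core idea described above, comparing each alternative action’s outcome to the strategy actually played and shifting probability toward actions with positive “regret”, can be illustrated on a toy game. The sketch below (all names are illustrative, not Pluribus’s code) applies regret matching via self-play to rock-paper-scissors; the players’ average strategies converge toward the equilibrium of playing each action one third of the time.

```python
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def strategy_from_regrets(regrets):
    """Regret matching: play each action in proportion to its positive regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def self_play(iterations=10000):
    # Slightly asymmetric starting regrets so the two players do not begin
    # in perfect balance (from which nothing would ever change).
    regrets = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
    strategy_sum = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    for _ in range(iterations):
        strategies = [strategy_from_regrets(r) for r in regrets]
        for p in (0, 1):
            mine, theirs = strategies[p], strategies[1 - p]
            # Expected value of each action against the opponent's current mix.
            ev = [sum(theirs[b] * payoff(a, b) for b in range(ACTIONS))
                  for a in range(ACTIONS)]
            baseline = sum(mine[a] * ev[a] for a in range(ACTIONS))
            for a in range(ACTIONS):
                # Regret: how much better action a would have done than
                # the strategy the player actually used.
                regrets[p][a] += ev[a] - baseline
                strategy_sum[p][a] += mine[a]
    # The time-averaged strategy is what converges toward equilibrium.
    total = sum(strategy_sum[0])
    return [s / total for s in strategy_sum[0]]

avg = self_play()
print([round(p, 3) for p in avg])  # each entry converges toward 1/3
```

Real poker adds hidden cards, sequential betting, and many more actions, which is why Pluribus also needs depth-limited search and neural evaluation, but the learning loop, “replay, measure regret, shift strategy,” is the same in spirit.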

Before facing human opponents, Pluribus sharpened its poker skills by playing billions of hands against itself. It then competed against a single high-stakes poker professional, who reported any errors the bot made back to the team. With this feedback the bot improved rapidly, moving swiftly from an average poker player to one that could compete at the highest level. In the end, it settled on its own style of play, even adopting different tactics depending on the circumstances in order to beat five opponents at once.
