By Will Knight | 06.01.23
Ready player one? This week we heard another dramatic warning about the potential danger that artificial intelligence may someday pose to the human race. But while it's unknown if or when we'll see AI running amok in the real world, we can watch it get up to some interesting shenanigans inside a virtual one.
GPT-4 Minecrafts Like a Boss 🎮 🥸 🧱
The technology that underpins ChatGPT has the potential to do much more than just talk. Linxi "Jim" Fan, an AI researcher at the chipmaker Nvidia, worked with some colleagues to devise a way to set the powerful language model GPT-4—the "brains" behind ChatGPT and a growing number of other apps and services—loose inside the blocky video game Minecraft. The Nvidia team, which included Anima Anandkumar, the company's director of machine learning and a professor at Caltech, created a Minecraft bot called Voyager that uses GPT-4 to solve problems inside the game. The language model generates objectives that help the agent explore the game, and code that improves the bot's skill at the game over time.
Voyager doesn't play the game the way a person does; rather than looking at the screen, it reads the state of the game directly via an API. It might see a fishing rod in its inventory and a river nearby, for instance, and use GPT-4 to suggest the goal of doing some fishing to gain experience. GPT-4 then generates the code needed for the character to achieve that goal.
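To make that loop concrete, here's a rough Python sketch of the observe-then-propose-then-code pattern described above. The names here, `propose_goal`, `write_skill`, and the stubbed `fake_llm`, are invented for illustration; Voyager's actual prompts and game interfaces are more elaborate, so read this as a sketch of the idea rather than the real system.

```python
# A rough sketch of the observe -> propose goal -> write code pattern described
# above. `llm`, `propose_goal`, and `write_skill` are invented names, not
# Voyager's real interfaces.

def propose_goal(llm, observation: dict) -> str:
    """Ask the language model to suggest a next objective given the game state."""
    prompt = (
        "You are playing Minecraft.\n"
        f"Inventory: {observation['inventory']}\n"
        f"Nearby: {observation['nearby']}\n"
        "Suggest one concrete next task."
    )
    return llm(prompt)


def write_skill(llm, goal: str, observation: dict) -> str:
    """Ask the language model to write code that pursues the chosen goal."""
    return llm(f"Write a function that makes the agent: {goal}\nCurrent state: {observation}")


if __name__ == "__main__":
    # Stubbed model so the sketch runs on its own, mirroring the fishing example.
    def fake_llm(prompt: str) -> str:
        if "Suggest" in prompt:
            return "fish in the nearby river to gain experience"
        return "def fish(agent): ...  # code the model would generate"

    obs = {"inventory": ["fishing rod"], "nearby": ["river"]}
    goal = propose_goal(fake_llm, obs)
    print(goal)
    print(write_skill(fake_llm, goal, obs))
```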
The most novel part of the project is the code that GPT-4 generates to add behaviors to Voyager. If the code it first suggests doesn't run perfectly, Voyager tries to refine it using error messages, feedback from the game, and a description of the code generated by GPT-4. Over time, Voyager builds up a library of this code, which lets it make increasingly complex things and explore more of the game. A chart created by the researchers shows how capable it is compared to other Minecraft agents: Voyager obtains more than three times as many items, explores more than twice as far, and builds tools 15 times more quickly than other AI agents. Fan says the approach could be improved in the future by giving the system a way to incorporate visual information from the game.
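Here's a similarly rough sketch of that refine-and-reuse idea: run the generated code, hand any errors or game feedback back to the model, and keep the skills that work in a library. Again, `llm` and `run_in_game` are placeholder callables standing in for the model and the game, not Voyager's real interfaces.

```python
# Sketch of the refine-and-reuse loop: execute the generated code, pass any
# error messages or game feedback back to the model, and store skills that work.
# `llm` and `run_in_game` are placeholder callables, not Voyager's real API.

def refine_skill(llm, run_in_game, goal: str, max_attempts: int = 4):
    """Regenerate code for `goal` until it runs, using execution feedback."""
    code = llm(f"Write code to achieve: {goal}")
    for _ in range(max_attempts):
        success, feedback = run_in_game(code)   # e.g. (False, "NameError: ...")
        if success:
            return code                         # a working skill
        code = llm(
            f"This code failed while trying to: {goal}\n"
            f"Feedback: {feedback}\n"
            f"Code:\n{code}\n"
            "Return a corrected version."
        )
    return None


# Working skills are kept in a library so later, more complex tasks can reuse them.
skill_library: dict[str, str] = {}

def remember_skill(goal: str, code) -> None:
    if code is not None:
        skill_library[goal] = code
```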
While chatbots like ChatGPT have wowed the world with their eloquence and apparent knowledge—even if they often make things up—Voyager shows the huge potential for language models to perform helpful actions on computers. Using language models in this way could automate many routine office tasks, potentially one of the technology's biggest economic impacts. The process that Voyager uses with GPT-4 to figure out how to do things in Minecraft might be adapted for a software assistant that works out how to automate tasks via the operating system on a PC or phone. OpenAI, the startup that created ChatGPT, has added "plugins" to the bot that allow it to interact with online services such as the grocery delivery app Instacart. Microsoft, which owns Minecraft, is also training AI programs to play it, and the company recently announced Windows Copilot, a Windows 11 feature that will use machine learning and APIs to automate certain tasks. It may be a good idea to experiment with this kind of technology inside a game like Minecraft, where flawed code can do relatively little harm.
Video games have long been a test bed for AI algorithms, of course. AlphaGo, the machine learning program that mastered the extremely subtle board game Go back in 2016, came from DeepMind, a lab that cut its teeth training algorithms to play simple Atari video games. AlphaGo used a technique called reinforcement learning, which trains an algorithm to play a game by giving it positive and negative feedback, for example from the score inside a game. It is more difficult for this method to guide an agent in an open-ended game such as Minecraft, where there is no score or set of objectives and where a player's actions may not pay off until much later. Whether or not you believe we should be preparing to contain the existential threat from AI right now, Minecraft seems like an excellent playground for the technology.
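For anyone who hasn't met reinforcement learning before, the toy Python snippet below shows the heart of that feedback loop: nudge the estimated value of each action toward the score it earns, and gradually favor whatever pays off. The two-action "game" is made up purely for illustration, and it also hints at the problem above: strip out the score and this kind of learner has almost nothing to go on.

```python
# A toy version of learning from a score signal. The two-action "game" below is
# invented for illustration; it is nothing like Atari or Minecraft, but it shows
# the core idea: nudge each action's estimated value toward the score it earns.

import random

ACTIONS = ["left", "right"]
values = {action: 0.0 for action in ACTIONS}   # current estimate of each action's payoff
LEARNING_RATE = 0.1

def play(action: str) -> float:
    """Made-up game where 'right' tends to score better than 'left'."""
    return random.gauss(1.0, 0.5) if action == "right" else random.gauss(0.2, 0.5)

for step in range(500):
    # Mostly exploit the best-looking action, occasionally explore at random.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)
    score = play(action)                                        # the feedback signal
    values[action] += LEARNING_RATE * (score - values[action])  # move estimate toward it

print(values)   # 'right' should end up with the higher estimate
```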
Some researchers might be awed—and worried—by the capabilities of their latest creations, while many others still see AI as an inferior and inherently annoying rip-off of reality. (The Atlantic)
Before buying Twitter, Elon Musk complained that the inner workings of the platform should be more transparent. By cutting off researchers' access to an API that was used to spot bot activity, track misinformation, and more, he has done the opposite. (The Verge)
The chipmaker Nvidia has ridden the AI boom expertly and is now worth a trillion dollars. The company's chips turned out to be ideally suited to the machine learning that underpins modern AI, and the company has delivered a series of products and related software to hammer home that advantage. (CNN)
Here's a good story about the tricky balancing act that the Biden administration has to perform when it comes to regulating AI. Some see the appeal of strict controls on new AI technologies, while others worry about handing an advantage to China. (Bloomberg News)
That's all for this week. For now I'll leave you with a tweet that highlights how Google's emissions and electricity use have skyrocketed in recent years. Some AI executives might warn that AI poses more of a risk than climate change, but perhaps they could be doing a bit more on both fronts?