OpenAI, a research firm backed by Elon Musk, recently revealed one of its latest developments in machine learning, demonstrated using virtual sumo wrestlers.
These bots inhabit the virtual world of RoboSumo and are controlled by machine learning. They taught themselves through trial and error using reinforcement learning, a technique inspired by the way animals learn through feedback, which has proved useful for training computers to play games and to control robots. The virtual wrestlers might look slightly ridiculous, but they use a clever approach to learning in a fast-changing environment while dealing with an opponent. The game and its virtual world were created at OpenAI to show how forcing AI systems to compete can spur them to become more intelligent.
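To make the trial-and-error idea concrete, here is a minimal reinforcement-learning sketch in Python, using tabular Q-learning on a toy one-dimensional environment. The environment, hyperparameters, and code are illustrative assumptions of mine and have nothing to do with OpenAI's actual RoboSumo code, which handles continuous control:

```python
import random

# A minimal, illustrative reinforcement-learning loop (tabular Q-learning).
# The 1-D "corridor" environment and all hyperparameters are assumptions
# for demonstration only.

N_STATES = 5          # positions 0..4; reaching position 4 yields reward
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q-table: estimated return for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore randomly sometimes, otherwise exploit what was learned so far
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Feedback update: nudge the estimate toward reward + discounted future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

print("Learned action in each state:",
      [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

After a few hundred episodes of pure trial and error, the agent consistently steps right toward the reward, despite starting with no knowledge of the environment at all.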
However, one disadvantage of reinforcement learning is that it tends not to work well in realistic situations where conditions are more dynamic. OpenAI addressed this problem by creating its own reinforcement learning algorithm, proximal policy optimization (PPO), which is especially well suited to changing environments.
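At the heart of PPO is a "clipped" objective that keeps each policy update close to the previous policy, which is what makes learning stable when the environment keeps shifting. Below is a minimal sketch of that objective; the function name and the use of NumPy are my own choices for illustration, not OpenAI's implementation:

```python
import numpy as np

def ppo_clipped_objective(new_logprob, old_logprob, advantage, epsilon=0.2):
    """Clipped surrogate objective from PPO (to be maximized).

    The ratio compares the updated policy's probability of an action to the
    old policy's; clipping it to [1 - epsilon, 1 + epsilon] stops any single
    update from changing the agent's behavior too drastically.
    """
    ratio = np.exp(new_logprob - old_logprob)
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon)
    # Take the pessimistic (lower) bound, removing the incentive to over-update
    return np.minimum(ratio * advantage, clipped * advantage).mean()

# Illustrative values: a positive advantage means the action beat expectations
print(ppo_clipped_objective(new_logprob=np.array([-0.9]),
                            old_logprob=np.array([-1.2]),
                            advantage=np.array([1.5])))
```

Because the objective flattens out once the policy drifts past the clipping range, the agent takes many small, safe steps rather than one large, destabilizing one.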
The latest work, done in collaboration with researchers from Carnegie Mellon University and UC Berkeley, demonstrates what the researchers call a "meta-learning" framework: the agents can take what they have already learned and apply it to a new situation.
Inside the RoboSumo environment, the agents started out behaving randomly. Through thousands of iterations of trial and error, they gradually developed the ability to move and, eventually, to fight. Through further iterations, the wrestlers learned to avoid each other, and even to question their own actions. This learning happened on the fly, with the agents adapting even as they wrestled each other.
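As a rough illustration of adapting on the fly, the toy Python loop below pits a simple agent against an opponent whose strategy changes mid-match; the agent keeps updating its beliefs during play instead of freezing them after training. This is only a hedged sketch of the idea of continual adaptation, not the meta-learning method used in the paper:

```python
import random

# A toy illustration of adapting "on the fly" against a changing opponent.
# This is NOT the paper's meta-learning algorithm; it only shows an agent
# that keeps learning during play rather than using a fixed policy.

STEP = 0.3   # how quickly the agent revises its beliefs (an assumed value)
belief = 0.5 # running estimate of how often the opponent plays move 1
score = 0

for round_ in range(1, 201):
    # The opponent's strategy drifts mid-match: it favors move 0 early,
    # then switches to favoring move 1, like a wrestler changing tactics.
    p_opponent = 0.2 if round_ <= 100 else 0.8
    opp_move = 1 if random.random() < p_opponent else 0

    # The agent predicts the opponent's move and tries to counter it
    my_guess = 1 if belief > 0.5 else 0
    score += 1 if my_guess == opp_move else -1

    # On-the-fly update: shift the belief toward what the opponent just did
    belief += STEP * (opp_move - belief)

print("Final belief that opponent plays move 1:", round(belief, 2))
print("Score:", score)
```

An agent that stopped learning after the first hundred rounds would keep losing once the opponent switched tactics; updating during play is what lets it recover, which is the intuition behind adapting mid-wrestle.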
Flexible learning is a very important part of human intelligence, and it will be crucial if machines are going to become capable of performing anything other than very narrow tasks in the real world. This kind of learning is very difficult to implement in machines, and the latest work is a small but significant step in that direction.
(Sources: MIT Technology Review, OpenAI Blog, Wired)