OpenAI’s bot beats world’s top Dota 2 players

OpenAI has created a bot that is taking down top Dota 2 players at The International, the Dota event held by Valve.

This year’s big Dota 2 tournament is currently in its second-to-last day, with only four teams remaining in the competition. Between the event’s official games, Valve always includes some fun show matches, and this year it brought in OpenAI’s aforementioned Dota 2 bot to face off against the game’s most famous player, Danil “Dendi” Ishutin, live on the main stage.

OpenAI said the 1v1 bot has learned by playing against itself for a lifetime’s worth of matches. In a promo video shown before the match, we see the bot besting some of the game’s top players, including Evil Geniuses’ cores Arteezy and Sumail.

In the live Shadow Fiend mirror match against Dendi, the bot executed an impossibly perfect creep block, perfectly balanced creep aggro, and even recognized and canceled Dendi’s healing items. The bot quickly bested Dendi in two matches, leading Dendi to say it’s “too strong!” Footage of those matches can be found on Dota 2 Rapier’s YouTube channel.

OpenAI says its next goal is to get the bot ready for the vastly more complex 5v5 matches, noting it might have something ready for next year.

How DeepMind’s AI taught itself to walk

DeepMind’s programmers have given the agent a set of virtual sensors (so it can tell whether it’s upright or not, for example) and then incentivized it to move forward. The computer works the rest out for itself, using trial and error to come up with different ways of moving. True motor intelligence requires learning how to control and coordinate a flexible body to solve tasks in a range of complex environments. Existing attempts to control physically simulated humanoid bodies come from diverse fields, including computer animation and biomechanics. A trend has been to use hand-crafted objectives, sometimes with motion-capture data, to produce specific behaviors. However, this may require considerable engineering effort and can result in restricted behaviors, or behaviors that are difficult to repurpose for new tasks.
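
As a toy illustration of “simple objective plus trial and error” (all names and the physics here are invented for this sketch, not DeepMind’s actual code), a forward-progress reward that pays out only while a virtual sensor reports the body is upright might look like:

```python
def locomotion_reward(forward_velocity, torso_height, min_height=0.8):
    """Reward forward progress only while the body stays upright."""
    upright = torso_height >= min_height   # reading from a "virtual sensor"
    return forward_velocity if upright else 0.0

def evaluate_gait(lean):
    """Toy physics: leaning further forward moves faster but lowers the torso."""
    velocity = lean
    torso_height = 1.0 - 0.4 * lean
    return locomotion_reward(velocity, torso_height)

# Trial and error: try candidate gaits and keep the best-scoring one.
candidates = [s / 10 for s in range(11)]
best_gait = max(candidates, key=evaluate_gait)
best_reward = evaluate_gait(best_gait)
```

The point of the sketch is that nothing in the objective says *how* to move; the "gait" that balances speed against falling over is discovered purely by search.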


 

DeepMind published three papers, summarized below:

Emergence of locomotion behaviors in rich environments:

For some AI problems, such as playing Atari or Go, the goal is easy to define: winning. But describing a process such as a jog, a backflip, or a jump is much harder; the difficulty of accurately describing a complex behavior is a common problem when teaching motor skills to an artificial system. DeepMind explored how sophisticated behaviors can emerge from scratch from the body interacting with the environment, using only simple high-level objectives such as moving forward without falling. DeepMind trained agents with a variety of simulated bodies to make progress across diverse terrains, which require jumping, turning, and crouching. The results show that the agents developed these complex skills without receiving specific instructions, an approach that can be applied to train systems for multiple, distinct simulated bodies.

wall

Simulated planar Walker attempts to climb a wall

The GIFs in DeepMind’s blog post show how this technique can lead to high-quality movements and perseverance.

Learning human behaviors from motion capture by adversarial imitation:

walk.gif

A humanoid Walker produces human-like walking behavior

The emergent behavior described above can be very robust, but because the movements must emerge from scratch, they often do not look human-like.

DeepMind in their second paper demonstrated how to train a policy network that imitates motion capture data of human behaviors to pre-learn certain skills, such as walking, getting up from the ground, running, and turning.

Having produced behavior that looks human-like, they can tune and repurpose those behaviors to solve other tasks, like climbing stairs and navigating walled corridors.

 

Robust imitation of diverse behaviors:

div.gif

The planar Walker on the left demonstrates a particular walking style and the agent in the right panel imitates that style using a single policy network.

The third paper proposes a neural network architecture, building on state-of-the-art generative models, that is capable of learning the relationships between different behaviors and imitating specific actions that it is shown. After training, their system can encode a single observed action and create a new, novel movement based on that demonstration. It can also switch between different kinds of behaviors despite never having seen transitions between them, for example switching between walking styles.
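
The core idea of encoding a demonstration into a compact latent code and decoding that code into a new movement can be sketched very roughly as follows (the linear encoder/decoder and every dimension here are invented for illustration; DeepMind’s system uses a learned deep generative model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a "behavior" is a joint-angle trajectory of T timesteps x J joints.
T, J, LATENT = 20, 4, 2
W_enc = rng.normal(size=(T * J, LATENT))   # hypothetical learned encoder weights
W_dec = np.linalg.pinv(W_enc)              # decoder as pseudo-inverse, sketch only

def encode(trajectory):
    """Demonstration -> latent code z."""
    return trajectory.reshape(-1) @ W_enc

def decode(z):
    """Latent code -> generated movement."""
    return (z @ W_dec).reshape(T, J)

walk = rng.normal(size=(T, J))   # stand-ins for two observed behaviors
run = rng.normal(size=(T, J))

z_walk, z_run = encode(walk), encode(run)
# Switching between behaviors = moving between points in latent space,
# even though no transition between them was ever observed.
blended = decode(0.5 * z_walk + 0.5 * z_run)
```

The interesting property, which the toy version shares with the real model, is that new movements come from interpolating codes rather than replaying stored trajectories.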

 

 

(via The Verge, DeepMind Blog)

 

 

Optical Deep Learning: Learning with light

Artificial Neural Networks mimic the way the brain learns from an accumulation of examples. A research team at the Massachusetts Institute of Technology (MIT) has come up with a novel approach to deep learning that uses a nanophotonic processor, which they claim can vastly improve the performance and energy efficiency for processing artificial neural networks.

MIT-Optical-Neural_0.jpg

Optical computers have been conceived before, but are usually aimed at more general-purpose processing. In this case, the researchers have narrowed the application domain considerably. Not only have they restricted their approach to deep learning, they have further limited this initial work to inferencing of neural networks, rather than the more computationally demanding process of training.

The optical chip contains multiple waveguides that shoot beams of light at each other simultaneously, creating interference patterns that correspond to mathematical results. “The chip, once you tune it, can carry out matrix multiplication with, in principle, zero energy, instantly,” said Marin Soljacic, an electrical engineering professor at MIT, in a statement. Accuracy on the team’s demonstration task of recognizing vowel sounds still leaves something to be desired, but the nanophotonic chip is a work in progress.

The multiple light beams are directed in such a way that their waves interact with each other, producing interference patterns that convey the result of the intended operation. The researchers call the resulting device a programmable nanophotonic processor. Optical chips using this architecture could, in principle, carry out the calculations performed in typical artificial intelligence algorithms much faster, using less than one-thousandth as much energy per operation as conventional electronic chips.
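
The reason interference can implement a neural-network layer is that any weight matrix factors, via the singular value decomposition, into two unitary transforms (realizable as meshes of beam splitters and phase shifters, i.e. interference) around a diagonal scaling (realizable as optical attenuation or amplification). A small numerical check of that factorization:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4))   # a weight matrix for one network layer

# W = U @ diag(s) @ Vh: U and Vh are unitary (interferometer meshes),
# diag(s) is a per-channel optical gain/attenuation.
U, s, Vh = np.linalg.svd(W)

x = rng.normal(size=4)             # input encoded in light amplitudes
y_optical = U @ (s * (Vh @ x))     # what the photonic mesh would compute
y_electronic = W @ x               # conventional matrix multiply
```

Both paths give the same answer; the photonic version just performs each factor passively as light propagates, which is where the speed and energy claims come from.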
The team says it will still take a lot more effort and time to make this system practical; however, once the system is scaled up and fully functioning, it could find many use cases, such as data centers or security systems. The system could also be a boon for self-driving cars or drones, says Nicholas Harris, one of the researchers, or “whenever you need to do a lot of computation but you don’t have a lot of power or time.”

 

Google’s and Nvidia’s AI Chips

Google

Google will soon launch a cloud computing service that provides exclusive access to a new kind of artificial intelligence chip designed by its own engineers. CEO Sundar Pichai revealed the new chip and service this morning in Silicon Valley during his keynote at Google I/O, the company’s annual developer conference.

GoogleChip4.jpg

This new processor is a unique creation designed to both train and execute deep neural networks—machine learning systems behind the rapid evolution of everything from image and speech recognition to automated translation to robotics. Google says it will not sell the chip directly to others. Instead, through its new cloud service, set to arrive sometime before the end of the year, any business or developer can build and operate software via the internet that taps into hundreds and perhaps thousands of these processors, all packed into Google data centers.

According to Jeff Dean, who leads the Google Brain team, Google’s new “TPU device,” which spans four chips, can handle 180 trillion floating-point operations per second, or 180 teraflops, and the company uses a new form of computer networking to connect several of these chips together, creating a “TPU pod” that provides about 11,500 teraflops of computing power. In the past, Dean said, the company’s machine translation model took about a day to train on 32 state-of-the-art CPU boards. Now, it can train in about six hours using only a portion of a pod.
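
The quoted figures are consistent with a pod of 64 such devices (the pod size is inferred here from the arithmetic, not stated above):

```python
teraflops_per_device = 180          # one four-chip "TPU device"
devices_per_pod = 64                # inferred from the ~11,500-teraflop figure
pod_teraflops = teraflops_per_device * devices_per_pod   # 11,520 ~ "about 11,500"

# The training speedup quoted: about a day on CPUs down to about six hours.
speedup = 24 / 6   # 4x, using only a portion of a pod
```
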

Nvidia

Nvidia has released a new state-of-the-art chip that pushes the limits of machine learning: the Tesla P100 GPU. It can perform deep learning neural network tasks 12 times faster than the company’s previous top-end system (the Titan X). The P100 was a huge commitment for Nvidia, costing over $2 billion in research and development, and it sports a whopping 150 billion transistors on a single chip, making the P100 the world’s largest chip, Nvidia claims. In addition to machine learning, the P100 will work for all sorts of high-performance computing tasks — Nvidia just wants you to know it’s really good at machine learning.

dgx.png

To top off the P100’s introduction, Nvidia has packed eight of them into a crazy-powerful $129,000 supercomputer called the DGX-1. This show-horse of a machine comes ready to run, with deep-learning software preinstalled, and is shipping first to AI researchers at MIT, Stanford, UC Berkeley, and others in June. On stage, Nvidia CEO Jen-Hsun Huang called the DGX-1 “one beast of a machine.”

The competition between Google’s and Nvidia’s AI chips points to an emerging need for ever more processing power in deep learning computing. A few years ago, GPUs took off because they cut the training time for a deep learning network from months to days. Deep learning, which has been around since at least the 1950s, suddenly had real potential with GPU power behind it. But as more companies try to integrate deep learning into their products and services, they’re only going to need faster and faster chips.

 

(via Wired, Forbes, Nvidia, The Verge)

 

A peek into NVIDIA’s self-driving AI

In a step toward making AI more accountable, Nvidia has developed a neural network for autonomous driving that highlights what it’s focusing on.

This is something of a follow-up to my previous blog post, Is A.I. Mysterious? Some of the most powerful machine-learning techniques available result in software that is almost completely opaque, even to the engineers who build it.

nvidia_car.jpg

We still don’t know exactly how such AI works; even the people at Nvidia can’t fully explain the car they built. It doesn’t follow a single instruction given by its creators. Instead, it relies entirely on an algorithm that taught itself to drive by watching a human do it.

It simply matches input from several video cameras to the behavior of a human driver and figures out for itself how it should drive. The only catch is that the system is so complex that it’s difficult to untangle how it actually works.

Nvidia is now working to open the black box by developing a way to visually highlight what the system is paying attention to. Explained in a recently published paper, the neural network architecture developed by Nvidia’s researchers is designed so that it can highlight the areas of a video picture that contribute most strongly to the behavior of the car’s deep neural network. Remarkably, the results show that the network is focusing on the edges of roads, lane markings, and parked cars—just the sort of things that a good human driver would want to pay attention to.
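
One generic way to produce this kind of attention map (an occlusion-sensitivity sketch for illustration, not Nvidia’s actual method, which is described in their paper) is to measure how much the network’s output changes when each region of the image is blanked out:

```python
import numpy as np

def occlusion_saliency(model, image, patch=2):
    """Slide a blank patch over the image and record how much the model's
    output shifts; regions whose occlusion changes the output most are
    the ones the model is 'paying attention' to."""
    base = model(image)
    h, w = image.shape
    sal = np.zeros_like(image)
    for i in range(0, h - patch + 1):
        for j in range(0, w - patch + 1):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            sal[i:i + patch, j:j + patch] += abs(base - model(occluded))
    return sal

# Toy "steering model" that only looks at one lane-edge column of the image.
model = lambda img: img[:, 1].sum()
image = np.ones((4, 4))
sal = occlusion_saliency(model, image)
# Saliency concentrates on the column the model actually depends on.
```

Applied to a real driving network, the analogous map is what lights up road edges, lane markings, and parked cars.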

PilotNet.png

Nvidia’s Convolutional Neural Network architecture for Self-Driving AI.

“What’s revolutionary about this is that we never directly told the network to care about these things,” Urs Muller, Nvidia’s chief architect for self-driving cars, wrote in a blog post.

Fascinatingly enough, this parallels human intelligence: we often don’t know how to describe certain actions we perform, we just do them.

This certainly highlights the fact that in the near future we might start seeing AIs that are just like us, or even superintelligent. I highly recommend reading the book Superintelligence: Paths, Dangers, Strategies by Nick Bostrom.

(Sources: Nvidia, MitTechReview, Nvidia Blog)

Neuralink

Elon Musk believes that A.I. overpowering humans could prove catastrophic, and he is attempting to get ahead of that with the launch of his latest venture, brain-computer interface company Neuralink. Detailed information about Neuralink can be found on Wait But Why (highly recommended reading!).

neuralink.jpeg

Musk seems to want to achieve a communications leap equivalent in impact to humans coming up with language. Language proved an incredibly efficient way to convey thoughts socially, but Neuralink aims to increase that efficiency by multiple orders of magnitude. Person-to-person, Musk’s vision would enable direct “uncompressed” communication of concepts between people, instead of having to “compress” your original thought by translating it into language and then having the other party “decompress” the package you send them linguistically, which is always a lossy process.

Another thing in favor of Musk’s proposal is that symbiosis between brains and computers isn’t fiction. Remember that person who types with brain signals? Or the paralyzed people who move robot arms? These systems work better when the computer completes people’s thoughts. The subject only needs to type “bulls …” and the computer does the rest. Similarly, a robotic arm has its own smarts. It knows how to move; you just have to tell it to. So even partial signals coming out of the brain can be transformed into more complete ones. Musk’s idea is that our brains could integrate with an AI in ways that we wouldn’t even notice: imagine a sort of thought-completion tool.
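
The “computer completes people’s thoughts” idea is, at its core, prediction from partial input. A deliberately tiny sketch (the phrase list and all names here are hypothetical):

```python
def make_completer(phrases):
    """Hypothetical 'thought completion': given a partial signal (a prefix),
    fill in the most likely full intent, the way assistive typing or a
    smart robotic arm completes a user's partial command."""
    def complete(prefix):
        matches = [p for p in phrases if p.startswith(prefix)]
        return matches[0] if matches else prefix   # first match, or give up
    return complete

complete = make_completer(["bullseye", "move arm left", "move arm up"])
complete("bulls")      # completes the partial signal to "bullseye"
complete("move arm")   # ambiguous: takes the first listed intent
```

A real brain-computer interface would replace the typed prefix with noisy neural signals and the lookup with a learned predictive model, but the division of labor is the same: partial signal from the brain, completion by the machine.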

So it’s not crazy to believe there could be some very interesting brain-computer interfaces in the future. But that future is not as close at hand as Musk would have you believe. One reason is that opening a person’s skull is not a trivial procedure. Another is that technology for safely recording from more than a hundred neurons at once—neural dust, neural lace, optical arrays that thread through your blood vessels—remains mostly at the blueprint stage.

 

( via Wired, TechCrunch )

Is A.I. Mysterious?

Contemplating the book I recently began, Superintelligence: Paths, Dangers, Strategies by Nick Bostrom: the author describes the future of AI as tremendously frightening and lays out the ways AI could overpower human beings, up to a point where humans will no longer be needed. Such intelligence would be superior to human beings, hence the term superintelligence. Bostrom further mentions in the initial pages of the book that such an invention would be the last invention humans ever make; the AI would then invent new things itself, without any need for a human being.

Does AI possess a darker side? (Enter article by MitTechReview)

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected, like crashing into a tree or sitting at a green light?

As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

Such experiments and research may seem obscure at the moment, and may even be dismissed by some people. But what if an entire fleet of vehicles started acting on what they had learned while ignoring the commands of a human?

Enter OpenAI: discovering and enacting the path to safe artificial general intelligence.

OpenAI is a non-profit artificial intelligence (AI) research company, associated with business magnate Elon Musk, that aims to carefully promote and develop friendly AI in such a way as to benefit humanity as a whole.

Hopefully, OpenAI will help us have friendly versions of AI.

 

deepdream1

The artist Adam Ferriss created this image, and the one below, using Google Deep Dream, a program that adjusts an image to stimulate the pattern recognition capabilities of a deep neural network. The pictures were produced using a mid-level layer of the neural network.

This image sure seems kind of spooky. But who knows, there might be some hidden sarcasm in the AI that we are yet to discover.

Meanwhile, you can give it a try at https://deepdreamgenerator.com/.

 

 

Sources (MitTechReview, OpenAI, Wiki)

</ End Blog>

Artificial Intelligence ready ARM CPUs (DynamIQ)

ARM processors are ubiquitous; many of the tech gadgets we use are powered by them. Now the company is showing off its plans for the future with DynamIQ. Aimed squarely at the artificial intelligence and machine learning systems we’re expecting to see in cars, phones, gaming consoles, and everything else, it’s what the company claims is an evolution of its existing “big.LITTLE” technology.

arm.jpg

Here’s a high-level look at some of the new features, capabilities and benefits DynamIQ will bring to new Cortex-A processors later this year:

  • New dedicated processor instructions for ML and AI: Cortex-A processors designed for DynamIQ technology can be optimized to deliver up to a 50x boost in AI performance over the next 3-5 years relative to Cortex-A73-based systems today, and up to a 10x faster response between the CPU and specialized accelerator hardware on the SoC, which can unleash substantially better combined performance.

arm-dynamiq2.jpg

  • Increased multi-core flexibility: SoC designers can scale up to eight cores in a single cluster, and each core can have different performance and power characteristics. These advanced capabilities enable faster responsiveness to ML and AI applications. A redesigned memory subsystem enables both faster data access and enhanced power management.
  • More performance within restricted thermal budgets: Efficient and much faster switching of software tasks to match the right-sized processor for optimal performance and power is further enhanced through independent frequency control of individual processors.
  • Safer autonomous systems: DynamIQ brings greater levels of responsiveness for ADAS solutions and increased safety capabilities, which will enable partners to build ASIL-D compliant systems for safe operation under failure conditions.

 

 

(source: ARM community, Engadget)

Machine Learning Speeds Up

Cloudera and Intel are jointly speeding up machine learning with the help of Intel’s Math Kernel Library (MKL). Benchmarks demonstrate that the combined offering can advance machine learning performance over large data sets in less time and with less hardware. This helps organizations accelerate their investments in next-generation predictive analytics.

Cloudera is the leader in Apache Spark development, training, and services. Apache Spark is advancing the art of machine learning on distributed systems with familiar tools that deliver at impressive scale. By joining forces, Cloudera and Intel are furthering a joint mission of excellence in big data management in the pursuit of better outcomes by making machine learning smarter and easier to implement.

intcloud.jpg

Predictive Maintenance

By combining Spark, Intel MKL libraries, and Intel’s optimized CPU architecture, machine learning workloads can scale quickly. As machine learning solutions get access to more data, they can provide better accuracy in delivering predictive maintenance, recommendation engines, proactive health care and monitoring, and risk and fraud detection.

“There’s a growing urgency to implement richer machine learning models to explore and solve the most pressing business problems and to impact society in a more meaningful way,” said Amr Awadallah, chief technical officer of Cloudera. “Already among our user base, machine learning is an increasingly common practice. In fact, in a recent adoption survey over 30% of respondents indicated they are leveraging Spark for machine learning.”

 

(via – Technative.io)

The Poker Playing AI

The game of poker involves dealing with imperfect information, which makes it very complex, and more like many real-world situations. At the Rivers Casino in Pittsburgh this week, a computer program called Libratus (a Latin word meaning “balanced”) is out to prove that computers can do this better than any human card player. Libratus was created by Tuomas Sandholm, a professor in the computer science department at Carnegie Mellon University, and his graduate student Noam Brown.

mitpoker_0.jpg

The AI is playing against some of the world’s best poker players. Dong Kim, a high-stakes player who specializes in no-limit Texas Hold ’Em, along with Jason Les and Daniel McAulay, two of the other top players challenging the machine, describe its play in much the same way. “It does a little bit of everything,” Kim says. It doesn’t always play the same type of hand in the same way. It may bluff with a bad hand, or not. It may bet high with a good hand, or not. That means Kim has trouble finding holes in its game. And if he does find a hole, it disappears the next day.

“The bot gets better and better every day. It’s like a tougher version of us,” said Jimmy Chou, one of the four pros battling Libratus. “The first couple of days, we had high hopes. But every time we find a weakness, it learns from us and the weakness disappears the next day.”

Libratus is playing thousands of games of heads-up, or two-player, no-limit Texas hold’em against several expert professional poker players. Now a little more than halfway through the 20-day contest, Libratus is up by almost $800,000 against its human opponents. So a victory, while far from guaranteed, may well be in the cards.

Regardless of the pure ability of the humans and the AI, it seems clear that the pros will be less effective as the tournament goes on. Ten hours of poker a day for 20 days straight against an emotionless computer was exhausting and demoralizing, even for pros like Doug Polk. And while the humans sleep at night, Libratus takes the supercomputer powering its in-game decision making and applies it to refining its overall strategy.

A win for Libratus would be a huge achievement in artificial intelligence. Poker requires reasoning and intelligence that has proven difficult for machines to imitate. It is fundamentally different from checkers, chess, or Go because an opponent’s hand remains hidden from view during play. In games of “imperfect information,” it is enormously complicated to figure out the ideal strategy given every possible approach your opponent may be taking. And no-limit Texas hold’em is especially challenging because an opponent could essentially bet any amount.

“Poker has been one of the hardest games for AI to crack,” says Andrew Ng, chief scientist at Baidu. “There is no single optimal move, but instead an AI player has to randomize its actions so as to make opponents uncertain when it is bluffing.”
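
Randomizing actions to stay unexploitable is exactly what regret-matching algorithms learn to do; regret matching is the building block of the counterfactual regret minimization family used by poker bots (this is a toy sketch of the principle, not Libratus’s actual algorithm). In rock-paper-scissors, self-play regret matching drives the average strategy toward the uniform mixed strategy:

```python
import random

def regret_matching_rps(iters=20000, seed=0):
    """Play each action in proportion to the accumulated regret for not
    having played it; the *average* strategy approaches equilibrium."""
    rng = random.Random(seed)
    n = 3  # rock, paper, scissors
    payoff = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]   # payoff[my][opp]
    regret = [0.0] * n
    strategy_sum = [0.0] * n

    def current_strategy():
        pos = [max(r, 0.0) for r in regret]
        total = sum(pos)
        return [p / total for p in pos] if total > 0 else [1 / n] * n

    for _ in range(iters):
        strat = current_strategy()
        for a in range(n):
            strategy_sum[a] += strat[a]
        my = rng.choices(range(n), weights=strat)[0]
        opp = rng.choices(range(n), weights=strat)[0]   # self-play opponent
        for a in range(n):
            regret[a] += payoff[a][opp] - payoff[my][opp]

    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

avg = regret_matching_rps()
# avg is close to [1/3, 1/3, 1/3]: randomize so the opponent can't exploit you.
```

Poker adds hidden cards and sequential betting on top of this, which is why full-scale CFR needs supercomputer time, but the reason a strong bot bluffs unpredictably is already visible in this three-action game.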

(Sources: MitTechReview, The Verge, Wired)