Neural Networks Are Learning What (Not) To Forget

Deep Learning, which makes use of various types of Artificial Neural Networks, is the paradigm with the potential to grow AI at an exponential rate. There is a lot of hype around neural nets, and researchers are busy trying to realize their full potential.

The title of this blog might seem obscure to some readers, so consider an example: humans evolved from ape-like ancestors, and along the way we forgot information, knowledge, and experiences that were no longer useful or needed. In particular, humans have the extraordinary ability to constantly update their memories with the most important knowledge while overwriting information that is no longer useful. The world is full of trivial experiences and irrelevant knowledge that does little to help us, so we forget those things, or rather ignore those incidents.

The same cannot be said of machines. Any skill they learn is quickly overwritten, regardless of how important it is. There is currently no reliable mechanism they can use to prioritize these skills, deciding what to remember and what to forget. Thanks to Rahaf Aljundi and colleagues at the University of Leuven in Belgium and at Facebook AI Research, that may be changing: they have shown that the approach biological systems use to learn, and to forget, can work with artificial neural networks too.

The key is a process known as Hebbian learning, first proposed in the 1940s by the Canadian psychologist Donald Hebb to explain the way brains learn via synaptic plasticity. Hebb’s theory can be famously summarized as “Cells that fire together wire together.”

What is Hebbian Learning?

Hebbian learning is one of the oldest learning algorithms and is based in large part on the dynamics of biological systems. A synapse between two neurons is strengthened when the neurons on either side of the synapse (input and output) have highly correlated outputs. In essence, when an input neuron fires, if it frequently leads to the firing of the output neuron, the synapse is strengthened. Following the analogy to an artificial system, the connection (tap) weight between two sequential neurons is increased when their activity is highly correlated.
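
As a concrete illustration, here is a minimal NumPy sketch of the basic Hebbian update, Δw = η · y · x: weights grow in proportion to the correlation between pre-synaptic input and post-synaptic output. The learning rate and toy data are made up for the example.

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.01):
    """Basic Hebbian rule: strengthen weights in proportion to the
    correlation between pre-synaptic input x and post-synaptic output y."""
    return w + lr * np.outer(y, x)

# Toy example: one linear "layer" with 3 inputs and 2 outputs.
w = np.zeros((2, 3))
x = np.array([1.0, 0.0, 1.0])       # pre-synaptic activity
y = w @ x + np.array([0.5, 0.1])    # post-synaptic activity (with a baseline drive)
w = hebbian_update(w, x, y)
print(w)                            # weights grow where input and output fire together
```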

[Image: Artificial neural nets being taught what to forget]

In other words, the connections between neurons grow stronger if they fire together, and these connections are therefore more difficult to break. This is how we learn: repeated synchronized firing of neurons makes the connections between them stronger and harder to overwrite. The team applied a similar principle to artificial networks by measuring the outputs of a neural network and monitoring how sensitive they are to changes in the connections within the network. This gave them a sense of which network parameters are most important and should, therefore, be preserved. "When learning a new task, changes to important parameters are penalized," said the team. They say the resulting network has "memory aware synapses."
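
To make the idea more concrete, here is a rough PyTorch sketch of how such an importance measure and penalty could look. It follows the spirit of the Memory Aware Synapses paper (importance as the average gradient magnitude of the squared output norm, plus a quadratic penalty on changes to important parameters), but the model, data, and hyper-parameters are placeholders and several details of the paper are simplified.

```python
import torch
import torch.nn as nn

def mas_importance(model, data_loader):
    """Estimate parameter importance in the spirit of Memory Aware Synapses:
    average gradient magnitude of the squared L2 norm of the model's outputs."""
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x in data_loader:
        model.zero_grad()
        out = model(x)
        out.pow(2).sum().backward()   # sensitivity of the output norm to each parameter
        for n, p in model.named_parameters():
            importance[n] += p.grad.abs()
        n_batches += 1
    return {n: w / max(n_batches, 1) for n, w in importance.items()}

def mas_penalty(model, importance, old_params, lam=1.0):
    """Regularizer added to the new task's loss: changes to important
    parameters (relative to their values after the old task) are penalized."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (importance[n] * (p - old_params[n]).pow(2)).sum()
    return lam * penalty

# Toy usage with a placeholder model and random data standing in for real inputs.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 5))
loader = [torch.randn(4, 10) for _ in range(8)]
omega = mas_importance(model, loader)
theta_star = {n: p.detach().clone() for n, p in model.named_parameters()}
# While training on a new task:
#   loss = task_loss + mas_penalty(model, omega, theta_star, lam=1.0)
```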

Neural networks with memory aware synapses turn out to perform better in the team's tests than other networks. In other words, they preserve more of the original skill than networks without this ability, although the results certainly leave room for improvement.

The paper: Memory Aware Synapses: Learning What (Not) To Forget, arxiv.org/abs/1711.09601.

(via: MitTechReview, Wikipedia, arxiv)


Evolution of Poker with AI

Artificial Intelligence is happening all around us. With many businesses integrating AI into their technologies, there is an explosion of AI everywhere: from self-driving cars to Go-playing AI and, most surprising of all, poker! A few months ago, I shared a blog on The Poker Playing AI (recommended read!). Well yeah, for those folks who didn't know, AI has also set foot in poker, and it's a major achievement!

To summarize AI's victories in poker, the folks at pokersites have created a beautiful, meticulously designed infographic.

[Infographic: The evolution of poker with AI]

 

Source of the Infographic -> https://pokersites.me.uk/

What is Meta-Learning in Machine Learning?

Meta-Learning is a subfield of machine learning where automatic learning algorithms are applied to meta-data. In brief, it means learning to learn. The main goal is to use meta-data to understand how automatic learning can become flexible in solving different kinds of learning problems, and hence to improve the performance of existing learning algorithms. In other words, it is about how effectively we can make our algorithms learn faster and better.

Meta-Learning affects the hypothesis space for the learning algorithm by either:

  • Changing the hypothesis space of the learning algorithms (hyper-parameter tuning, feature selection)
  • Changing the way the hypothesis space is searched by the learning algorithms (learning rules)

Variations of Meta-Learning: 

  • Algorithm Learning (selection) – Select learning algorithms according to the characteristics of the instance.
  • Hyper-parameter Optimization – Select hyper-parameters for learning algorithms. The choice of the hyper-parameters influences how well you learn (a minimal example of this, together with algorithm selection, follows this list).
  • Ensemble Methods – Learn to learn “collectively” – Bagging, Boosting, Stacked Generalization.
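
To ground the first two bullets, here is a small scikit-learn sketch that selects an algorithm and its hyper-parameters for a given dataset by cross-validation. Full meta-learning would use meta-data gathered across many problems; this only illustrates the selection step, and the candidate models and grids are arbitrary choices for the example.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = load_digits(return_X_y=True)

# Candidate (algorithm, hyper-parameter grid) pairs -- arbitrary examples.
candidates = [
    (LogisticRegression(max_iter=2000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(), {"n_estimators": [50, 200], "max_depth": [None, 10]}),
]

best_score, best_model = -1.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=3)   # search this algorithm's hypothesis space
    search.fit(X, y)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(best_model, best_score)
```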

Flexibility is very important because each learning algorithm is based on a set of assumptions about the data, its inductive bias. This means that it will only learn well if the bias matches the data in the learning problem. A learning algorithm may perform very well on one learning problem, but very badly on the next. From a non-expert point of view, this poses strong restrictions on the use of machine learning or data mining techniques, since the relationship between the learning problem (often some kind of database) and the effectiveness of different learning algorithms is not yet understood.

By using different kinds of meta-data, like properties of the learning problem, algorithm properties (like performance measures), or patterns previously derived from the data, it is possible to select, alter or combine different learning algorithms to effectively solve a given learning problem. Critiques of meta-learning approaches bear a strong resemblance to the critique of metaheuristics, which can be said to be a related problem.

Meta-learning may be the most ambitious but also the most rewarding goal of machine learning. There are few limits to what a good meta-learner can learn: where appropriate, it will learn to learn by analogy, by chunking, by planning, by subgoal generation, and so on.

OpenAI’s Virtual Wrestling Bots

OpenAI, a firm backed by Elon Musk, has revealed one of its latest developments in machine learning, demonstrated with virtual sumo wrestlers.

[Image: OpenAI's virtual sumo wrestlers]

These are the bots inside the virtual world of RoboSumo, controlled by machine learning. The bots taught themselves through trial and error using Reinforcement Learning, a technique inspired by the way animals learn through feedback, which has proved useful for training computers to play games and to control robots. The virtual wrestlers might look slightly ridiculous, but they are using a very clever approach to learning in a fast-changing environment while dealing with an opponent. This game and its virtual world were created at OpenAI to show how forcing AI systems to compete can spur them to become more intelligent.

However, one of the disadvantages of reinforcement learning is that it doesn't work well in realistic situations where things are more dynamic. OpenAI addressed this by creating its own reinforcement learning algorithm called proximal policy optimization (PPO), which is especially well suited to changing environments.
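
For a flavor of what PPO optimizes, here is a minimal PyTorch sketch of its clipped surrogate objective. A real agent would also need a value function, advantage estimation, an entropy bonus, and an environment loop; the tensors below are stand-ins for a batch of collected experience.

```python
import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """PPO's clipped surrogate objective: take a conservative policy step by
    clipping the probability ratio between the new and old policies."""
    ratio = torch.exp(log_probs_new - log_probs_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()   # minimize the negative objective

# Toy tensors standing in for a batch of collected experience.
log_probs_old = torch.randn(64)
log_probs_new = log_probs_old + 0.1 * torch.randn(64, requires_grad=True)
advantages = torch.randn(64)
loss = ppo_clip_loss(log_probs_new, log_probs_old, advantages)
loss.backward()
```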

The latest work, done in collaboration with researchers from Carnegie Mellon University and UC Berkeley, demonstrates a way for AI agents to apply what the researchers call a “meta-learning” framework. This means the agents can take what they have already learned and apply it to a new situation.

Inside the RoboSumo environment, the agents started out behaving randomly. Through thousands of iterations of trial and error, they gradually developed the ability to move, and eventually to fight. Through further iterations, the wrestlers developed the ability to avoid each other and even to question their own actions. This learning happened on the fly, with the agents adapting even as they wrestled each other.

Flexible learning is a very important part of human intelligence, and it will be crucial if machines are going to become capable of performing anything other than very narrow tasks in the real world. This kind of learning is very difficult to implement in machines, and the latest work is a small but significant step in that direction.

 

(sources: MitTechReview, OpenAI Blog, Wired)

Nvidia's Chips for Complete Control of Driverless Cars

The race toward autonomous cars is on everywhere. Top car brands are working on providing fully autonomous vehicles to their customers, and a future with self-driving vehicles is inevitable. Adding the cherry on top, Nvidia's recently announced chip is the latest generation of its DrivePX onboard car computers, called Pegasus. The device is 13 times faster than the previous iteration, which has so far been used by the likes of Audi, Tesla, and Volvo to provide semi-autonomous driving capabilities in their vehicles.

[Image: Nvidia Pegasus]

At the heart of this semiconductor is the mind-boggling technology of Deep Learning. "In the old world, the more powerful your engine, the smoother your ride will be," Nvidia CEO Jensen Huang said during the announcement. "In the future, the more computational performance you have, the smoother your ride will be."

Nvidia asserts that the device is only about the size of a license plate, yet it has enough power to process data from up to 16 sensors, detect objects, find the car's place in the world, plan a path, and control the vehicle itself. Oh, and it will also update centrally stored high-definition maps at the same time, all with some resources to spare.

The new system is designed to eventually handle up to 320 trillion operations per second (TOPS), compared with the roughly 24 trillion of today's technology, which is consistent with the 13x speed-up over the previous generation mentioned above. That would give it the power needed to process the masses of data produced by a vehicle's cameras and other sensors and allow it to drive completely autonomously, Nvidia said. The first systems to be tested next year will have less processing power but will be designed to scale up with the addition of extra chips.

 

(sources: MitTechReview, NvidiaBlog)

MIT uses AI to kill video buffering

We all hate it when our video is interrupted by buffering or its resolution has turned into a pixelated mess. A group of MIT researchers believe they’ve figured out a solution to those annoyances plaguing millions of people a day.

MIT discovered a way to improve video streaming by reducing buffering times and pixelation. A new AI developed at the university’s Computer Science and Artificial Intelligence Laboratory uses machine learning to pick different algorithms depending on network conditions. In doing so, the AI, called Pensieve, has been shown to deliver a higher-quality streaming experience with less buffering than existing systems.

Instead of having a video arrive at your computer in one complete piece, sites like YouTube and Netflix break it up into smaller pieces and send them sequentially, relying on adaptive bitrate (ABR) algorithms to determine which resolution each piece will play at. This is an attempt to give users a more consistent viewing experience while also saving bandwidth, but it can create problems. If the connection is too slow, YouTube may temporarily lower the resolution (pixelating the video) to keep it playing. And since the video is sent in pieces, skipping ahead to a part that hasn't been sent yet is impossible.

There are two types of ABR: a rate-based one that measures how fast a network can transmit data, and a buffer-based one tasked with maintaining a sufficient buffer at the head of the video. Current algorithms consider only one of these factors, but MIT's new system, Pensieve, uses machine learning to choose the best strategy based on network conditions.
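
For context, here is a toy Python chooser that combines the two signals the paragraph describes: estimated throughput (rate-based) and playback buffer level (buffer-based). Pensieve replaces hand-tuned rules like this with a policy learned by a neural network; the bitrates and thresholds below are made up for illustration.

```python
# A toy adaptive-bitrate (ABR) chooser. The available renditions and the
# thresholds are illustrative, not taken from Pensieve or any real player.

BITRATES_KBPS = [300, 750, 1200, 2850, 4300]   # available renditions

def pick_bitrate(throughput_kbps: float, buffer_s: float) -> int:
    """Pick the highest bitrate the measured throughput can sustain,
    then back off if the playback buffer is running low."""
    sustainable = [b for b in BITRATES_KBPS if b <= 0.8 * throughput_kbps]
    choice = sustainable[-1] if sustainable else BITRATES_KBPS[0]
    if buffer_s < 5.0:   # buffer nearly empty: be conservative to avoid rebuffering
        choice = BITRATES_KBPS[max(0, BITRATES_KBPS.index(choice) - 1)]
    return choice

print(pick_bitrate(throughput_kbps=3000, buffer_s=12.0))  # healthy buffer
print(pick_bitrate(throughput_kbps=3000, buffer_s=2.0))   # low buffer: step down
```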

In experiments that tested the AI over wifi and LTE, the team found that it could stream video at the same resolution with 10 to 30 percent less rebuffering than other approaches. Additionally, users rated video played with the AI 10 to 25 percent higher in terms of "quality of experience." The researchers, however, only tested Pensieve on a month's worth of downloaded video and believe performance would be even higher with the amount of data that streaming giants like YouTube and Netflix have.

As a next project, the team will be working to test Pensieve on virtual-reality (VR) video. "The bitrates you need for 4K-quality VR can easily top hundreds of megabits per second, which today's networks simply can't support," says MIT professor Mohammad Alizadeh. "We're excited to see what systems like Pensieve can do for things like VR. This is really just the first step in seeing what we can do."

Pensieve was funded, in part, by the National Science Foundation and an innovation fellowship from Qualcomm.

OpenAI's bot beats world's top Dota 2 players

OpenAI has created a bot that is taking down top Dota 2 players at The International, the Dota 2 event held by Valve.

This year's big Dota 2 tournament is currently in its second-to-last day, with only four teams remaining in the competition. Between the event's official games, Valve always includes some fun show matches, and this year it brought in OpenAI's aforementioned Dota 2 bot to face off against the game's most famous player, Danil "Dendi" Ishutin, live on the main stage.

OpenAI said the 1v1 bot learned by playing against itself over lifetimes' worth of matches. In a promo video shown before the match, we see the bot besting some of the game's top players, including Evil Geniuses' cores Arteezy and Sumail.

In the live Shadow Fiend mirror match against Dendi, the bot executed an impossibly perfect creep block, perfectly balanced creep aggro, and even recognized and canceled Dendi’s healing items. The bot quickly bested Dendi in two matches, leading Dendi to say it’s “too strong!” Footage of those matches can be found on Dota 2 Rapier’s YouTube channel.

OpenAI says its next goal is to get the bot ready for the vastly more complex 5v5 matches, noting it might have something ready for next year.

How DeepMind’s AI taught itself to walk

DeepMind's programmers gave the agent a set of virtual sensors (so it can tell whether it is upright or not, for example) and then incentivized it to move forward. The computer works the rest out for itself, using trial and error to come up with different ways of moving.

True motor intelligence requires learning how to control and coordinate a flexible body to solve tasks in a range of complex environments. Existing attempts to control physically simulated humanoid bodies come from diverse fields, including computer animation and biomechanics.  A trend has been to use hand-crafted objectives, sometimes with motion capture data, to produce specific behaviors. However, this may require considerable engineering effort and can result in restricted behaviors or behaviors that may be difficult to repurpose for new tasks.

 

DeepMind published three papers, summarized below:

Emergence of locomotion behaviors in rich environments:- 

For some AI problems, such as playing Atari or Go, the goal is easy to define: it's winning. But describing a process such as a jog, a backflip or a jump is difficult; accurately describing a complex behavior is a common problem when teaching motor skills to an artificial system. DeepMind explored how sophisticated behaviors can emerge from scratch from the body interacting with the environment, using only simple high-level objectives such as moving forward without falling. DeepMind trained agents with a variety of simulated bodies to make progress across diverse terrains, which requires jumping, turning and crouching. The results show that the agents developed these complex skills without receiving specific instructions, an approach that can be applied to train systems for multiple, distinct simulated bodies.
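
To make "simple high-level objective" concrete, here is a toy reward function in the spirit of "move forward without falling": credit for forward progress, a small bonus for staying upright, and a penalty for falling. It is a generic illustration, not DeepMind's actual environment, reward, or code.

```python
def locomotion_reward(x_before: float, x_after: float,
                      torso_height: float, fallen_height: float = 0.8) -> float:
    """Toy reward: forward progress plus a small alive bonus, or a penalty on falling."""
    forward_progress = x_after - x_before   # distance covered this step
    alive_bonus = 0.05                      # small bonus for staying upright
    if torso_height < fallen_height:        # the agent has fallen over
        return -1.0
    return forward_progress + alive_bonus

print(locomotion_reward(x_before=3.0, x_after=3.2, torso_height=1.1))  # upright, moving
print(locomotion_reward(x_before=3.0, x_after=3.1, torso_height=0.5))  # fallen
```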

[Image: Simulated planar Walker attempts to climb a wall]


Learning human behaviors from motion capture by adversarial imitation:-

[Image: A humanoid Walker produces human-like walking behavior]

The emergent behavior described above can be very robust, but because the movements must emerge from scratch, they often do not look human-like.

DeepMind in their second paper demonstrated how to train a policy network that imitates motion capture data of human behaviors to pre-learn certain skills, such as walking, getting up from the ground, running, and turning.

Having produced behavior that looks human-like, they can tune and repurpose those behaviors to solve other tasks, like climbing stairs and navigating walled corridors.

 

Robust imitation of diverse behaviors:-

[Image: The planar Walker on the left demonstrates a particular walking style; the agent on the right imitates that style using a single policy network]

The third paper proposes a neural network architecture, building on state-of-the-art generative models, that is capable of learning the relationships between different behaviors and imitating specific actions that it is shown. After training, the system can encode a single observed action and create a novel movement based on that demonstration. It can also switch between different kinds of behaviors despite never having seen transitions between them, for example switching between walking styles.
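
Below is a highly simplified PyTorch sketch of the underlying idea: encode an observed trajectory into a latent "behavior" vector, then condition a single policy network on that latent so it can reproduce, or switch between, demonstrated styles. The architecture, dimensions, and names here are illustrative placeholders, not DeepMind's actual model.

```python
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, LATENT_DIM = 16, 6, 8   # placeholder sizes

class TrajectoryEncoder(nn.Module):
    """Summarize an observed (state, action) trajectory as one latent vector."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(OBS_DIM + ACT_DIM, 64, batch_first=True)
        self.to_latent = nn.Linear(64, LATENT_DIM)

    def forward(self, traj):                  # traj: (batch, time, obs+act)
        _, h = self.rnn(traj)
        return self.to_latent(h[-1])          # one behavior embedding per trajectory

class LatentConditionedPolicy(nn.Module):
    """A single policy network whose behavior is steered by the latent vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + LATENT_DIM, 64), nn.Tanh(), nn.Linear(64, ACT_DIM))

    def forward(self, obs, z):
        return self.net(torch.cat([obs, z], dim=-1))

encoder, policy = TrajectoryEncoder(), LatentConditionedPolicy()
demo = torch.randn(1, 50, OBS_DIM + ACT_DIM)   # one demonstrated trajectory (random stand-in)
z = encoder(demo)                              # its behavior embedding
action = policy(torch.randn(1, OBS_DIM), z)    # act in that style from a new state
```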

 

 

(via The Verge, DeepMind Blog)

 

 

Optical Deep Learning: Learning with light

Artificial Neural Networks mimic the way the brain learns from an accumulation of examples. A research team at the Massachusetts Institute of Technology (MIT) has come up with a novel approach to deep learning that uses a nanophotonic processor, which they claim can vastly improve the performance and energy efficiency for processing artificial neural networks.

[Image: MIT's nanophotonic processor]

Optical computers have been conceived before, but are usually aimed at more general-purpose processing. In this case, the researchers have narrowed the application domain considerably. Not only have they restricted their approach to deep learning, they have further limited this initial work to inferencing of neural networks, rather than the more computationally demanding process of training.

The optical chip contains multiple waveguides that shoot beams of light at each other simultaneously, creating interference patterns that correspond to mathematical results. "The chip, once you tune it, can carry out matrix multiplication with, in principle, zero energy, instantly," said Marin Soljacic, an electrical engineering professor at MIT, in a statement. In the initial vowel-recognition demo the accuracy left something to be desired, but the nanophotonic chip is still a work in progress.

The new approach uses multiple light beams (as mentioned above) directed in such a way that their waves interact with each other, producing interference patterns that convey the result of the intended operation. The resulting device is something the researchers call a programmable nanophotonic processor. Optical chips using this architecture could, in principle, carry out the calculations performed in typical artificial intelligence algorithms much faster than conventional electronic chips, and using less than one-thousandth as much energy per operation.
The team says it will still take a lot more effort and time to make this system useful; however, once the system is scaled up and fully functioning, it could find many use cases, such as data centers or security systems. The system could also be a boon for self-driving cars or drones, says Harris, one of the researchers, or "whenever you need to do a lot of computation but you don't have a lot of power or time."
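
To see what the chip would actually accelerate, here is a tiny NumPy sketch of neural-network inference, which is dominated by matrix multiplications; on the photonic processor each of those multiplications would be carried out by programmed interference of light rather than digital arithmetic. The layer sizes and random weights are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 128))   # "programmed" weights of layer 1
W2 = rng.normal(size=(10, 64))    # "programmed" weights of layer 2

def infer(x):
    h = np.maximum(0, W1 @ x)     # layer 1: matrix multiply + ReLU nonlinearity
    return W2 @ h                 # layer 2: matrix multiply (optical, in principle)

print(infer(rng.normal(size=128)).shape)   # (10,) class scores for one input
```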

 

Google’s and Nvidia’s AI Chips

Google

Google will soon launch a cloud computing service that provides exclusive access to a new kind of artificial intelligence chip designed by its own engineers. CEO Sundar Pichai revealed the new chip and service this morning in Silicon Valley during his keynote at Google I/O, the company’s annual developer conference.

[Image: Google's new TPU]

This new processor is a unique creation designed to both train and execute deep neural networks—machine learning systems behind the rapid evolution of everything from image and speech recognition to automated translation to robotics. Google says it will not sell the chip directly to others. Instead, through its new cloud service, set to arrive sometime before the end of the year, any business or developer can build and operate software via the internet that taps into hundreds and perhaps thousands of these processors, all packed into Google data centers.

According to Jeff Dean, who leads the Google Brain team, Google's new "TPU device," which spans four chips, can handle 180 trillion floating-point operations per second, or 180 teraflops, and the company uses a new form of computer networking to connect several of these chips together, creating a "TPU pod" that provides about 11,500 teraflops of computing power. In the past, Dean said, the company's machine translation model took about a day to train on 32 state-of-the-art CPU boards. Now, it can train in about six hours using only a portion of a pod.
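
As a quick sanity check on those figures, the quoted numbers imply roughly 64 TPU devices per pod; note that the pod size is inferred from the arithmetic below, not stated in the announcement as summarized here.

```python
# Back-of-the-envelope check on the quoted TPU numbers.
teraflops_per_device = 180
teraflops_per_pod = 11_500

devices_per_pod = teraflops_per_pod / teraflops_per_device
print(round(devices_per_pod))   # ~64 devices per pod, inferred from the quoted figures
```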

Nvidia

Nvidia has released a new state-of-the-art chip that pushes the limits of machine learning, the Tesla P100 GPU. It can perform deep learning neural network tasks 12 times faster than the company's previous top-end system (the Titan X). The P100 was a huge commitment for Nvidia, costing over $2 billion in research and development, and it sports a whopping 150 billion transistors on a single chip, making the P100 the world's largest chip, Nvidia claims. In addition to machine learning, the P100 will work for all sorts of high-performance computing tasks; Nvidia just wants you to know it's really good at machine learning.

[Image: Nvidia DGX-1]

To top off the P100's introduction, Nvidia has packed eight of them into a crazy-powerful $129,000 supercomputer called the DGX-1. This show horse of a machine comes ready to run, with deep-learning software preinstalled, and is shipping first to AI researchers at MIT, Stanford, UC Berkeley, and others in June. On stage, Nvidia CEO Jensen Huang called the DGX-1 "one beast of a machine."

The competition between these upcoming AI chips and Nvidia's all points to an emerging need for more processing power in deep learning computing. A few years ago, GPUs took off because they cut the training time for a deep learning network from months to days. Neural networks, which have been around since at least the 1950s, suddenly had real potential with GPU power behind them. But as more companies try to integrate deep learning into their products and services, they are only going to need faster and faster chips.

 

(via Wired, Forbes, Nvidia, The Verge)