Interpretable Machine Learning Through Teaching – (OpenAI)

Researchers at OpenAI have designed a method that encourages AIs to teach each other with examples that are also intelligible to human beings. Their method automatically selects the most informative examples for teaching a concept (for example, the best images to illustrate the concept of a dog), and this approach proved effective for both humans and AIs.

OpenAI envisions that some of the most impactful applications of AI will come as a result of collaboration between humans and machines, but communication between the two is a barrier. Consider an example: think about trying to guess the shape of a rectangle when you’re only shown a collection of random points inside that rectangle; it’s much faster to figure out the correct dimensions when you’re given points at the corners of the rectangle instead. OpenAI’s machine learning approach works as a cooperative game played between two agents, one the teacher and the other the student. The goal of the student is to guess a particular concept (e.g. “dog”, “zebra”) based on examples of that concept (such as images of dogs), and the goal of the teacher is to learn to select the most illustrative examples for the student.
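
The rectangle intuition can be made concrete with a small simulation (hypothetical code, not from OpenAI): a ‘student’ that guesses the rectangle as the bounding box of the points it has seen recovers the true shape exactly from two opposite corners, but only underestimates it from random interior points.

```python
import random

def infer_rectangle(points):
    """Student's guess: the tightest bounding box around the observed points."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))

def area_error(true_rect, guess):
    """How much area the guess misses (the guess always lies inside the truth)."""
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return area(true_rect) - area(guess)

random.seed(0)
true_rect = (2.0, 1.0, 8.0, 5.0)  # (x_min, y_min, x_max, y_max)

# Uninformative teaching: five points sampled uniformly inside the rectangle.
random_pts = [(random.uniform(2.0, 8.0), random.uniform(1.0, 5.0)) for _ in range(5)]

# Informative teaching: two opposite corners pin the rectangle down exactly.
corner_pts = [(2.0, 1.0), (8.0, 5.0)]

print(area_error(true_rect, infer_rectangle(random_pts)))  # positive: underestimate
print(area_error(true_rect, infer_rectangle(corner_pts)))  # 0.0: exact
```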

The technique works in two stages:

  1. A ‘student’ neural network is given randomly selected input examples of concepts and is trained from those examples using traditional supervised learning methods to guess the correct concept labels.
  2. The ‘teacher’ network, which has an intended concept to teach and access to labels linking concepts to examples, then tests different examples on the student and sees which concept labels the student assigns to them, eventually converging on the smallest set of examples it needs to give for the student to guess the intended concept.

However, if the student and the teacher are trained jointly, they can collude and communicate via arbitrary examples that make no sense to humans, defeating the goal of interpretability.
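
The teacher’s search in step 2 can be sketched with a rule-based toy (the concepts and function names are invented for this illustration, not OpenAI’s neural-network setup): the student guesses the most specific concept consistent with the examples shown, and the teacher looks for the smallest example set that makes that guess come out right.

```python
from itertools import combinations

# A toy hypothesis space over the numbers 1..30 (illustrative concepts only).
concepts = {
    "even":          {n for n in range(1, 31) if n % 2 == 0},
    "multiple_of_4": {n for n in range(1, 31) if n % 4 == 0},
    "square":        {n * n for n in range(1, 6)},
}

def student_guess(examples):
    """Student: the smallest (most specific) concept consistent with every example."""
    consistent = [name for name, members in concepts.items()
                  if all(x in members for x in examples)]
    return min(consistent, key=lambda name: len(concepts[name])) if consistent else None

def teacher_pick(target):
    """Teacher: the smallest set of examples from the target concept that makes
    the student guess that concept."""
    members = sorted(concepts[target])
    for k in range(1, len(members) + 1):
        for subset in combinations(members, k):
            if student_guess(subset) == target:
                return set(subset)
    return set(members)

# 4 alone would be mistaken for a square, so the teacher shows 8 instead.
print(teacher_pick("multiple_of_4"))  # {8}
```

Because the student’s rule is fixed here rather than trained jointly with the teacher, the chosen examples stay human-readable, which is exactly the collusion problem the article describes.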


(via: OpenAI)

ARM’s Machine Learning Processors

ARM’s latest mobile processors are tuned to crunch machine-learning algorithms as efficiently as possible. ARM announced a few days ago that it has created its first dedicated machine-learning chips, which are meant for use in mobile and smart-home devices. The company says it’s sharing the plans with its hardware partners, including smartphone chipmaker Qualcomm, and expects to see devices packing the hardware by early 2019.

Currently, small and portable devices lack the power to run AI algorithms, so they enlist the help of big servers in the cloud. But enabling mobile devices to run their own AI software is attractive. It can speed things up, cutting the lag inherent in sending information back and forth. It will allow hardware to run offline. And it pleases privacy advocates, who are comforted by the idea of data remaining on the device.


Jem Davies, who leads the Machine Learning group at ARM, said, “We analyze compute workloads, work out which bits are taking the time and the power, and look to see if we can improve on our existing processors.” The new chips use less power than the company’s other designs to perform the kinds of linear-algebra calculations that underpin modern artificial intelligence. They’re also better at moving data in and out of memory.

Artificial Synapse could make Brain-On-A-Chip Hardware a Reality

Let’s start by understanding what the title means. This is part of neuromorphic engineering, also known as neuromorphic computing: the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic the neuro-biological architectures present in the nervous system. Microprocessors configured more like brains than traditional chips could soon make computers far more astute about what’s going on around them.


Neuromorphic computer chips are designed to work like the human brain. Instead of being controlled by binary, on-or-off signals like most current chips, neuromorphic chips weight their outputs, mimicking the way different neurons fire at different strengths through their synapses. In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.
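
The weighted, graded firing described here is usually modeled in software as a leaky integrate-and-fire neuron (the parameters below are illustrative, not a description of any particular chip): weighted inputs accumulate on a leaky ‘membrane’, and a spike is emitted only when a threshold is crossed.

```python
def lif_step(v, inputs, weights, leak=0.9, threshold=1.0):
    """One time step of a leaky integrate-and-fire neuron.
    v is the membrane potential carried over from the previous step."""
    v = leak * v + sum(w * x for w, x in zip(weights, inputs))
    if v >= threshold:
        return 0.0, 1   # reset after firing, emit a spike
    return v, 0         # sub-threshold: keep integrating silently

# Two weighted input lines driven with constant activity.
v, weights = 0.0, [0.3, 0.4]
spikes = []
for _ in range(5):
    v, s = lif_step(v, [1.0, 1.0], weights)
    spikes.append(s)
print(spikes)  # [0, 1, 0, 1, 0]
```

Unlike a binary gate, the output rate depends continuously on the input weights, which is the sense in which such chips “weight their outputs”.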

Now engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy. The design, published last month in the journal Nature Materials, is a major step towards building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.

Most neuromorphic chip designs attempt to emulate the synaptic connection between neurons using two conductive layers separated by a “switching medium,” or synapse-like space. When a voltage is applied, ions move through the switching medium to create conductive filaments, much as the “weight” of a biological synapse changes.
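
A minimal software analogy of such a device (the class and the numbers are invented for illustration, not MIT’s device model) treats conductance as the synaptic ‘weight’: programming pulses grow or shrink the filament, and a read voltage produces a proportional current.

```python
class ArtificialSynapse:
    """Toy synapse model: conductance g is the adjustable 'weight',
    nudged up or down by programming pulses and clamped to a valid range."""
    def __init__(self, conductance=0.5, step=0.1, g_min=0.0, g_max=1.0):
        self.g = conductance
        self.step = step
        self.g_min, self.g_max = g_min, g_max

    def pulse(self, polarity):
        """+1 grows the filament (potentiation); -1 shrinks it (depression)."""
        self.g = min(self.g_max, max(self.g_min, self.g + polarity * self.step))

    def current(self, voltage):
        """Ohmic read-out: output current = conductance x applied voltage."""
        return self.g * voltage

syn = ArtificialSynapse()
for _ in range(3):
    syn.pulse(+1)          # three potentiating pulses
print(syn.current(1.0))    # roughly 0.8
```

The hard part in hardware, per the article, is making those conductance changes precise and repeatable rather than dominated by uncontrolled filament growth.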

The research was led by Jeehwan Kim, the Class of 1947 Career Development Assistant Professor in the departments of Mechanical Engineering and Materials Science and Engineering, and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories.

In conclusion, artificial neural networks are already loosely modeled on the brain. The combination of neural nets and neuromorphic chips could let AI systems be packed into smaller devices and run far more efficiently.


(via ScienceDaily, MIT Technology Review, Wikipedia)