Google’s Bristlecone Quantum Computing Chip

A few days ago, Google previewed Bristlecone, a new quantum processor that will serve as a testbed for research on the system error rates and scalability of Google’s qubit technology. In a post on its research blog, Google said it’s “cautiously optimistic that quantum supremacy can be achieved with Bristlecone.”


72 Qubit Quantum Computing Chip

The purpose of this gate-based superconducting system is to provide a testbed for research into the system error rates and scalability of Google’s qubit technology, as well as applications in quantum simulation, optimization, and machine learning. Qubits (the quantum version of traditional bits) are very unstable and can be adversely affected by noise; most of these systems can hold a state for less than 100 microseconds. Google believes that quantum supremacy can be “comfortably demonstrated” with 49 qubits and a two-qubit error rate below 0.5 percent. Google’s previous quantum systems achieved two-qubit error rates of 0.6 percent, which sounds like an extremely small difference in theory but remains significant in the world of quantum computing.

Each Bristlecone chip features 72 qubits, which may help mitigate some of this error, but as Google says, quantum computing isn’t just about qubits. Until now, the most advanced quantum chip, built by IBM, had 50 qubits. “Operating a device such as Bristlecone at low system error requires harmony between a full stack of technology ranging from software and control electronics to the processor itself,” the team writes in a blog post. “Getting this right requires careful systems engineering over several iterations.”

(via Google Research Blog, Engadget, Forbes)

 

 

Deep learning improves your computer with age

Researchers at Google have devised a new technique that could let a laptop or smartphone learn to do things better and faster over time. They published a paper focusing on a common problem: prefetching. Computers process information much faster than they can pull it from memory, so to avoid bottlenecks they try to predict which information is likely to be needed and fetch it in advance. As computers get more powerful, this prediction becomes progressively harder.

In the paper, Google focuses on using deep learning to improve prefetching. “The work that we did is only the tip of the iceberg,” says Heiner Litz of the University of California, Santa Cruz, a visiting researcher on the project. Litz believes it should be possible to apply machine learning to every part of a computer, from the low-level operating system to the software that users interact with.
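The paper (arXiv:1803.02329) frames prefetching as sequence prediction over a program’s memory-access trace; one formulation it explores predicts the next address delta from a window of recent deltas using an LSTM. Below is a minimal, hypothetical PyTorch sketch of that formulation. The class name, vocabulary size, and layer sizes are illustrative assumptions, not the model described in the paper.

```python
import torch
import torch.nn as nn

class DeltaPrefetcher(nn.Module):
    """Predicts the next memory-address delta from a window of recent deltas."""
    def __init__(self, vocab_size=50000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # one id per frequent delta
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)      # score for each candidate next delta

    def forward(self, delta_ids):                          # (batch, seq_len) ids of recent deltas
        x = self.embed(delta_ids)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])                       # logits for the next delta

model = DeltaPrefetcher()
history = torch.randint(0, 50000, (1, 16))                 # last 16 observed deltas (dummy data)
logits = model(history)
prefetch_candidates = logits.topk(10).indices              # issue prefetches for the top-10 predictions
```

Treating deltas as a classification vocabulary (rather than regressing raw 64-bit addresses) is what makes the problem tractable for a sequence model; the actual paper should be consulted for the real architecture and evaluation.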

Such advances would be opportune. Moore’s Law is finally slowing down, and the fundamental design of computer chips hasn’t changed much in recent years. Tim Kraska, an associate professor at MIT who is also exploring how machine learning can make computers work better, says the approach could be useful for high-level algorithms, too. A database might automatically learn how to handle financial data as opposed to social-network data, for instance. Or an application could teach itself to respond to a particular user’s habits more effectively.

Paper reference: https://arxiv.org/pdf/1803.02329.pdf

 

(via: MitTechReview)

Baidu Research’s New AI Algorithm Mimics Voice With Very Few Samples

AI typically needs a plethora of data and a lot of time for something like voice cloning; it has to listen to hours of recordings. A new process, however, could get that down to one minute. Baidu researchers have unveiled an upgraded version of Deep Voice, their text-to-speech synthesis system, which, once trained, can clone any voice after listening to just a few snippets of audio. The capability is enabled by learning shared and discriminative information across speakers. Baidu calls this “voice cloning,” and it is expected to have significant applications for personalization in human-machine interfaces.

[Figure: Baidu’s two approaches to voice cloning]

Here, Baidu focuses on two fundamental approaches (see the figure above; a rough sketch of both follows the list):

  1. Speaker adaptation: fine-tuning a multi-speaker generative model with a few cloning samples, using backpropagation-based optimization. Adaptation can be applied to the whole model or only to the low-dimensional speaker embeddings. The latter needs far fewer parameters to represent each speaker, although it comes at the cost of a longer cloning time and lower audio quality.
  2. Speaker encoding: training a separate model to directly infer a new speaker embedding from cloning audio, which is then used with the multi-speaker generative model. The speaker encoding model has time-and-frequency-domain processing blocks to retrieve speaker identity information from each audio sample, and attention blocks to combine them optimally.
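As a rough, hypothetical PyTorch sketch of how the two strategies differ, consider the snippet below. The class names (MultiSpeakerTTS, SpeakerEncoder) and all dimensions are placeholders for illustration; this is not Baidu’s Deep Voice code.

```python
import torch
import torch.nn as nn

class MultiSpeakerTTS(nn.Module):
    """Stand-in for a multi-speaker generative model conditioned on a speaker embedding."""
    def __init__(self, embed_dim=16, n_mels=80):
        super().__init__()
        self.speaker_embedding = nn.Parameter(torch.randn(embed_dim))  # one speaker's embedding
        self.decoder = nn.Linear(n_mels + embed_dim, n_mels)           # placeholder for the synthesis stack

    def forward(self, text_features):
        cond = self.speaker_embedding.expand(text_features.size(0), -1)
        return self.decoder(torch.cat([text_features, cond], dim=-1))

# 1) Speaker adaptation: freeze the generative model and fine-tune only the
#    low-dimensional speaker embedding on a few cloning samples via backprop.
model = MultiSpeakerTTS()
for p in model.decoder.parameters():
    p.requires_grad = False
adapt_optimizer = torch.optim.Adam([model.speaker_embedding], lr=1e-3)

# 2) Speaker encoding: a separate network infers the embedding directly from
#    cloning audio, so no per-speaker fine-tuning is needed at deployment time.
class SpeakerEncoder(nn.Module):
    def __init__(self, n_mels=80, embed_dim=16):
        super().__init__()
        self.rnn = nn.GRU(n_mels, 64, batch_first=True)
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, cloning_audio_mels):          # (batch, frames, n_mels)
        _, h = self.rnn(cloning_audio_mels)         # summarize the cloning samples
        return self.proj(h[-1])                     # inferred speaker embedding

encoder = SpeakerEncoder()
embedding = encoder(torch.randn(1, 200, 80))        # embedding for a new, unseen speaker
```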

For detailed information and mathematical explanations, refer to the paper by Baidu Research.

However, this technology also has a possible downside: it could pose a serious problem for people and systems that rely on biometric voice security.

( via MitTechReview, Wiki, BaiduResearch)

Interpretable Machine Learning Through Teaching – (OpenAI)

The researchers at OpenAI have designed a method that encourages AIs to teach each other with examples that are also intelligible to human beings. Their method automatically selects the most informative examples for teaching a concept, for example, the best images to describe the concept of dogs, and the approach proved effective for both humans and AIs.

OpenAI envisions that some of the most impactful applications of AI will come from collaboration between humans and machines; however, communication between the two is a barrier. Consider an example: think about trying to guess the shape of a rectangle when you’re only shown a collection of random points inside it. It’s much faster to figure out the correct dimensions when you’re given the points at the rectangle’s corners instead. OpenAI’s machine learning approach works as a cooperative game played between two agents, a teacher and a student. The goal of the student is to guess a particular concept (e.g. “dog”, “zebra”) based on examples of that concept (such as images of dogs), and the goal of the teacher is to learn to select the most illustrative examples for the student.

Their technique has two stages:

  1. A ‘student’ neural network is given randomly selected input examples of concepts and is trained from those examples using traditional supervised learning methods to guess the correct concept labels.
  2. The ‘teacher’ network, which has an intended concept to teach and access to labels linking concepts to examples, is then used to test different examples on the student and see which concept labels the student assigns them, eventually converging on the smallest set of examples it needs to give for the student to guess the intended concept.

However, if they train the student and the teacher jointly, the student and teacher can collude to communicate via arbitrary examples that do not make sense to humans, digressing from the main goal.
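To make the rectangle example above concrete, here is a toy sketch in Python. It is an illustrative simplification, not OpenAI’s setup: the “student” is a hand-coded rule (take the tightest rectangle around the positive points), and the “teacher” simply knows that the two opposite corners are the most informative examples.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rect = np.array([0.2, 0.3, 0.7, 0.8])        # concept: x_min, y_min, x_max, y_max

def label(points, rect):
    """A point is positive if it lies inside the rectangle."""
    return ((points >= rect[:2]) & (points <= rect[2:])).all(axis=1)

def student_guess(examples, labels):
    """Student strategy: tightest rectangle around the positive examples."""
    pos = examples[labels]
    return np.concatenate([pos.min(axis=0), pos.max(axis=0)])

# Random examples -> a loose, inaccurate guess of the concept.
random_pts = rng.random((20, 2))
guess_random = student_guess(random_pts, label(random_pts, true_rect))

# Teacher-chosen examples (the two opposite corners) -> exact recovery.
corners = np.array([true_rect[:2], true_rect[2:]])
guess_taught = student_guess(corners, label(corners, true_rect))

print("random-example guess:", np.round(guess_random, 2))
print("teacher-example guess:", guess_taught)      # equals true_rect
```

In OpenAI’s actual method both the student and the teacher are neural networks, and the teacher learns which examples are most informative rather than having them hard-coded, which is why the training order described above matters.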

 

(via: OpenAI)

ARM’s Machine Learning Processors

ARM’s latest mobile processors are tuned to crunch machine-learning algorithms as efficiently as possible. ARM announced a few days ago that it has created its first dedicated machine-learning chips, which are meant for use in mobile and smart-home devices. The company says it’s sharing the plans with its hardware partners, including smartphone chipmaker Qualcomm, and expects to see devices packing the hardware by early 2019.

Currently, small and portable devices lack the power to run AI algorithms, so they enlist the help of big servers in the cloud. But enabling mobile devices to run their own AI software is attractive. It can speed things up, cutting the lag inherent in sending information back and forth. It will allow hardware to run offline. And it pleases privacy advocates, who are comforted by the idea of data remaining on the device.


Jem Davies, who leads the Machine Learning group at ARM, said, “We analyze compute workloads, work out which bits are taking the time and the power, and look to see if we can improve on our existing processors.” The new chips use less power than the company’s other designs to perform the kinds of linear-algebra calculations that underpin modern artificial intelligence. They’re also better at moving data in and out of memory.

Artificial Synapse could make Brain-On-A-Chip Hardware a Reality

Let’s start by understanding what the title means. This work belongs to neuromorphic engineering, also known as neuromorphic computing: the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic the neuro-biological architectures present in the nervous system. Microprocessors configured more like brains than traditional chips could soon make computers far more astute about what’s going on around them.


Neuromorphic computer chips are designed to work like the human brain. Instead of being controlled by binary, on-or-off signals like most current chips, neuromorphic chips weight their outputs, mimicking the way different neurons fire at different strengths through their synapses. In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy. The design, published last month in the journal Nature Materials, is a major step towards building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.

Most neuromorphic chip designs attempt to emulate the synaptic connection between neurons using two conductive layers separated by a “switching medium,” or synapse-like space. When a voltage is applied, ions should move in the switching medium to create conductive filaments, similarly to how the “weight” of a synapse changes.
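As a purely conceptual illustration of the “weighted synapse” idea (not a model of the MIT silicon-germanium device), the sketch below treats an array of analog synapses as programmable conductances: programming pulses nudge the conductances up or down, and readout is an Ohm’s-law style weighted sum of the input voltages.

```python
import numpy as np

class AnalogSynapseArray:
    """Toy crossbar of analog synapses: conductances act as weights."""
    def __init__(self, n_in, n_out, g_min=0.0, g_max=1.0):
        self.g = np.full((n_in, n_out), 0.5)          # start at mid-range conductance
        self.g_min, self.g_max = g_min, g_max

    def program(self, delta_g):
        """Apply programming pulses: small analog changes to each conductance."""
        self.g = np.clip(self.g + delta_g, self.g_min, self.g_max)

    def forward(self, v_in):
        """Readout: output currents are the weighted sums I = V . G."""
        return v_in @ self.g

array = AnalogSynapseArray(n_in=4, n_out=2)
array.program(np.random.uniform(-0.1, 0.1, size=(4, 2)))   # potentiate / depress synapses
print(array.forward(np.array([1.0, 0.0, 0.5, 0.2])))        # two output currents
```

The engineering difficulty described above is precisely in making those analog conductance changes uniform and repeatable in a physical switching medium.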

The research was led by Jeehwan Kim, the Class of 1947 Career Development Assistant Professor in the departments of Mechanical Engineering and Materials Science and Engineering, and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories.

In conclusion, artificial neural networks are already loosely modeled on the brain. The combination of neural nets and neuromorphic chips could let AI systems be packed into smaller devices and run a lot more efficiently.

 

(via ScienceDaily, MitTechReview, Wiki)

Google’s Cloud AutoML Vision

A new service by Google named Cloud AutoML uses several machine-learning tricks to automatically build and train a deep-learning algorithm that can recognize things in images. The initial release of Cloud AutoML is limited to image recognition. Its simple interface lets you upload images, train and manage models, and finally deploy them on Google Cloud.

The technology is limited for now, but it could be the start of something big. Building and optimizing a deep neural network algorithm normally requires a detailed understanding of the underlying math and code, as well as extensive practice tweaking the parameters of algorithms to get things just right. The difficulty of developing AI systems has created a race to recruit talent, and it means that only big companies with deep pockets can usually afford to build their own bespoke AI algorithms.


In addition, rather than forcing enterprises to train their algorithms using Google’s data, Cloud AutoML ingests enterprise data assets and tunes the model accordingly. The key here is that Google helps enterprises to customize a model without having to do so de novo: There’s already a great deal of training baked in. Though initially focused on image data, Google plans to roll out the service to tackle text, video, and more.

Cloud AutoML Vision is built on Google’s transfer learning and neural architecture search technologies (among others). Disney has already started using the technology to annotate their products to improve the customer’s experience on their shop-Disney site. The Zoological Society of London is also using AutoML Vision to recognize and track wildlife in order to understand their distribution and how humans are impacting the species.
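For intuition, here is a generic transfer-learning sketch of the kind of workflow such a service automates: start from a network pretrained on generic images, swap in a new head for the customer’s own labels, and fine-tune. This shows the general technique only; it is not Google’s AutoML pipeline or API, and the backbone, class count, and hyperparameters are arbitrary choices.

```python
import torch
import torch.nn as nn
from torchvision import models

num_custom_classes = 5                         # e.g. an enterprise's product categories
backbone = models.resnet50(pretrained=True)    # features learned from generic image data

# Freeze the pretrained features and train only a new head on customer data.
for p in backbone.parameters():
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, num_custom_classes)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of the customer's labeled images."""
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The point of the service is that this “great deal of training baked in” plus automated architecture search happens behind the simple upload-and-train interface, so customers never write code like the above themselves.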


AI in Radiology

Diagnosing dangerous lung diseases is the latest addition to AI’s growing list of capabilities.

A few months back, a new arXiv paper by researchers from Stanford explained how CheXNet, the convolutional neural network they developed, achieved the feat. CheXNet was trained on a publicly available data set of more than 100,000 chest x-rays annotated with information on 14 different diseases that turn up in the images. The researchers had four radiologists go through a test set of x-rays and make diagnoses, which were compared with diagnoses performed by CheXNet. Not only did CheXNet beat the radiologists at spotting pneumonia, but once the algorithm was expanded, it proved better at identifying the other 13 diseases as well. Early detection of pneumonia could help prevent some of the 50,000 deaths the disease causes in the U.S. each year. Pneumonia is also the single largest infectious cause of death for children worldwide, killing almost a million children under the age of five in 2015.
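For orientation, a minimal sketch of the general CheXNet-style setup follows: a DenseNet-121 backbone with 14 sigmoid outputs, one per disease label, trained as a multi-label classifier. The loss weighting, preprocessing, and hyperparameters here are illustrative placeholders; the paper should be consulted for the actual training details.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_DISEASES = 14

model = models.densenet121(pretrained=True)           # ImageNet-pretrained backbone
model.classifier = nn.Sequential(
    nn.Linear(model.classifier.in_features, NUM_DISEASES),
    nn.Sigmoid(),                                      # independent probability per disease
)

criterion = nn.BCELoss()                               # multi-label binary cross-entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

xray_batch = torch.randn(8, 3, 224, 224)               # dummy stand-in for chest x-rays
disease_labels = torch.randint(0, 2, (8, NUM_DISEASES)).float()

probs = model(xray_batch)                              # (batch, 14) disease probabilities
loss = criterion(probs, disease_labels)
loss.backward()
optimizer.step()
```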

In a separate study, Stanford researchers trained a convolutional neural network on a data set of 40,895 images from 14,982 studies. The paper documents how the algorithm detected abnormalities (like fractures or bone degeneration) better than radiologists in finger and wrist radiographs. However, radiologists were still better at spotting issues in elbows, forearms, hands, upper arms, and shoulders.

We’ve come a long way in AI, but there are still miles to go. The results here show AI surpassing humans at narrow tasks, but does that mean we don’t need humans? In the coming era of superintelligence, where do we stand?

 

(via: MitTechReview, arXiv)

Chip Flaws: Spectre and Meltdown Vulnerabilities

Processors are of crucial importance in the digital age; their role in modern computation is unparalleled. The device you’re reading this blog on and the smartwatch you check the time on, every device has a processor. These processors run the processes that show your notifications, run applications, play games, and check email. Because they run all the essential processes on your computer, these silicon chips handle extremely sensitive data, including passwords and encryption keys, the fundamental tools for keeping your computer secure.

The Spectre and Meltdown vulnerabilities, revealed a few days ago, could let attackers capture information they shouldn’t be able to access, like your passwords and keys. As a result, an attack on a computer chip can turn into a serious security concern.

 


Meltdown and Spectre

 

So what’s Spectre?

Spectre attacks involve inducing a victim to speculatively perform operations that would not occur during correct program execution and that leak the victim’s confidential information via a side channel to the adversary. To make processes run faster, a chip will essentially guess what information the computer needs to perform its next function; that’s called speculative execution. While the chip is guessing, sensitive information is momentarily easier to access. In brief, Spectre is a vulnerability in implementations of branch prediction that affects modern microprocessors with speculative execution; it forces programs on a user’s operating system to access arbitrary locations in their memory space and leak the contents.

The Spectre paper describes the attack in four essential steps:

  1. First, it shows that branch prediction logic in modern processors can be trained to reliably hit or miss based on the internal workings of a malicious program.
  2. It then goes on to show that the subsequent difference between cache hits and misses can be reliably timed so that what should have been a simple non-functional difference can, in fact, be subverted into a covert channel which extracts information from an unrelated process’s inner workings.
  3. Thirdly, the paper synthesizes the results with return-oriented programming exploits and other principles with a simple example program and a JavaScript snippet run under a sandboxing browser; in both cases, the entire address space of the victim process (i.e. the contents of a running program) is shown to be readable by simply exploiting speculative execution of conditional branches in code generated by a stock compiler or the JavaScript machinery present in an extant browser.
  4. Finally, the paper concludes by generalizing the attack to any non-functional state of the victim process. It briefly discusses even such highly non-obvious non-functional effects as bus arbitration latency.

And What’s Meltdown?

In this form of attack, the chip is fooled into loading secured data during a speculation window in such a way that it can later be viewed by an unauthorized attacker. The attack relies upon a commonly-used, industry-wide practice that separates loading in-memory data from the process of checking permissions. Again, the industry’s conventional wisdom operated under the assumption that the entire speculative execution process was invisible, so separating these pieces wasn’t seen as a risk.

In Meltdown, a carefully crafted branch of code first arranges to execute some attack code speculatively. This code loads some secure data to which the program doesn’t ordinarily have access. Because it’s happening speculatively, the permission check on that access will happen in parallel (and not fail until the end of the speculation window), and as a consequence special internal chip memory known as a cache becomes loaded with the privileged data. Then, a carefully constructed code sequence is used to perform other memory operations based upon the value of the privileged data. While the normally observable results of these operations aren’t visible following the speculation (which ultimately is discarded), a technique known as cache side-channel analysis can be used to determine the value of the secure data.

The basic difference between Spectre and Meltdown is that Spectre can be used to manipulate a process into revealing its own data. On the other hand, Meltdown can be used to read privileged memory in a process’s address space which even the process itself would normally be unable to access (on some unprotected OS’s this includes data belonging to the kernel or other processes).

(via Wiki, cnet, spectreattack, meltdownattack, redhat, wired)

 

 

Capsule Nets

A few months ago, Geoffrey Hinton and his team published two papers introducing a completely new type of neural network based on capsules. In support of these capsule networks, the team also published an algorithm, dynamic routing between capsules, for training such networks.

With Hinton’s capsule network, layers are comprised not of individual Artificial Neural Networks (ANNs), but rather of small groups of ANNs arranged in functional pods, or “capsules.” Each capsule is programmed to detect a particular attribute of the object being classified, thus getting around the need for massive input data sets. This makes capsule networks a departure from the “let them teach themselves” approach of traditional neural nets.

A layer is assigned the task of verifying the presence of some characteristic, and when enough capsules are in agreement on the meaning of their input data, the layer passes on its prediction to the next layer.

 


Capsule Net Architecture

 

A capsule is a nested set of neural layers. In a regular neural network you keep adding more layers; in a CapsNet you add layers inside a single layer, in other words, you nest one neural layer inside another. The state of the neurons inside a capsule captures the properties of one entity inside an image. A capsule outputs a vector to represent the existence of that entity, and the orientation of the vector represents the entity’s properties.

The vector is sent to all possible parent capsules in the next layer. For each possible parent, a capsule computes a prediction vector by multiplying its own output by a weight matrix. Whichever parent has the largest scalar product between the prediction vector and its own output strengthens its bond with the capsule, while the bonds to the remaining parents weaken. This routing-by-agreement method is superior to mechanisms like max-pooling, which simply routes based on the strongest feature detected in the lower layer.

Apart from dynamic routing, CapsNet introduces a squashing non-linearity. Instead of applying a non-linearity to each layer as in a CNN, squashing is applied to a nested set of layers: the squashing function acts on the vector output of each capsule, shrinking its length while preserving its direction.
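A compact sketch of the squashing non-linearity and the routing-by-agreement loop from the dynamic-routing paper is shown below, written with plain tensors for clarity. The capsule counts and dimensions are illustrative; a full CapsNet wraps this inside convolutional and capsule layers.

```python
import torch

def squash(s, dim=-1, eps=1e-8):
    """Shrink vector length into [0, 1) while preserving orientation."""
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    """u_hat: (batch, n_in, n_out, dim_out) prediction vectors from lower capsules."""
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)     # routing logits
    for _ in range(num_iters):
        c = torch.softmax(b, dim=2)                            # coupling coefficients per parent
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)               # weighted sum into each parent
        v = squash(s)                                          # parent capsule outputs
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)           # agreement strengthens the bond
    return v

u_hat = torch.randn(2, 1152, 10, 16)    # e.g. 1152 primary capsules routing to 10 digit capsules
v = dynamic_routing(u_hat)
print(v.shape)                          # torch.Size([2, 10, 16])
```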

So far, capsule nets have proven equally adept as traditional neural nets at understanding handwriting, and they cut the error rate in half for identifying toy cars and trucks. Impressive, but it’s just a start: the current implementation of capsule networks is, according to Hinton, slower than it will ultimately have to be.

 

(via arxiv, medium blogs, i-programmer, bigthink)