AI creates Cartoons from Text

Animation, as we all know, is a great source of entertainment, but creating the scenes and stitching them together is tedious work. It is now possible to stitch cartoons together with the help of AI. Researchers at the Allen Institute for Artificial Intelligence and the University of Illinois have developed an AI called CRAFT (Composition, Retrieval, and Fusion Network) that takes over part of this tedious work by creating scenes from text descriptions. CRAFT was able to recreate scenes from the Flintstones cartoon after being trained on 25,000 three-second clips, each paired with a brief text description of the scene.

The system learns its parameters by matching the training videos to their brief text descriptions. Given a new description, CRAFT can compose a corresponding video clip: it parses the text, retrieves a suitable background, and places the characters and objects into the scene.

(Figure: Flintstones.png)

The results are still rough, and there is plenty of scope for improving the model and training it on other cartoons. The video below briefly describes how the AI works.

For detailed information, refer to the paper.

SqueezeNext – Hardware Aware Neural Network Design

Berkeley researchers have published ‘SqueezeNext’, the successor to SqueezeNet, in their latest attempt to distill the capabilities of very large neural networks into smaller models that can feasibly be deployed on devices with limited memory and compute, such as mobile phones. One of the main barriers to deploying neural networks on embedded systems has been the large memory footprint and power consumption of existing networks. While much of today’s AI research is aimed at getting state-of-the-art results on specific datasets, SqueezeNext is part of a parallel track focused on making systems deployable.

Why SqueezeNext?

The transition to deep neural network based solutions started with AlexNet, which won the ImageNet challenge by a large margin. The ImageNet classification challenge started in 2010, with the first winning method achieving an error rate of 28.2%, followed by 26.2% in 2011. AlexNet then delivered a clear jump in accuracy, with an error rate of 16.4%, roughly a 10% margin over the runner-up. AlexNet consists of five convolutional and three fully connected layers, and the network contains a total of 61 million parameters. Because of this large size, the original model had to be trained on two GPUs with a model-parallel approach, in which the filters were distributed across the GPUs. Moreover, dropout was required to avoid overfitting with such a large model. Models with tens of millions of parameters like this are not suitable for real-time applications on embedded devices!
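As a quick sanity check on that 61-million figure, the reference AlexNet that ships with torchvision can be instantiated and its parameters counted. This is a minimal sketch, assuming PyTorch and torchvision are installed; the exact total differs slightly from the original 2012 model because of minor implementation differences.

```python
# Count the parameters of the torchvision AlexNet reference model.
# Assumes PyTorch and torchvision are installed.
import torchvision.models as models

model = models.alexnet()  # architecture only, no pretrained weights
total_params = sum(p.numel() for p in model.parameters())
print(f"AlexNet parameters: {total_params / 1e6:.1f} M")  # roughly 61 M
```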

SqueezeNet is a successful example of this approach: it matches AlexNet’s accuracy with 50× fewer parameters without compression, and is over 500× smaller when combined with deep compression. SqueezeNext is efficient because of a few design strategies: low-rank (separable) filters; a bottleneck module to constrain the parameter count of the network; a single fully connected layer placed after a bottleneck; weight and output stationarity; and co-design of the network in tandem with a hardware simulator to maximize hardware usage efficiency.
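To make the bottleneck and low-rank ideas concrete, here is a minimal sketch of a SqueezeNext-style block in PyTorch. The channel ratios and exact layer ordering are illustrative assumptions, not the paper’s precise architecture: a 1×1 bottleneck first reduces the channel count, the k×k convolution is factored into a k×1 followed by a 1×k convolution (the low-rank step), and a final 1×1 convolution expands the channels back.

```python
import torch
import torch.nn as nn

class SqueezeNextStyleBlock(nn.Module):
    """Illustrative sketch of a bottleneck + low-rank (separable) block.

    NOT the exact SqueezeNext block from the paper; channel ratios and
    layer ordering are simplified assumptions for illustration.
    """
    def __init__(self, channels, k=3):
        super().__init__()
        mid = channels // 2  # bottleneck: halve the channel count
        self.block = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),                        # 1x1 bottleneck
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=(k, 1), padding=(k // 2, 0)),   # k x 1
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=(1, k), padding=(0, k // 2)),   # 1 x k
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),                        # 1x1 expansion
        )

    def forward(self, x):
        return x + self.block(x)  # residual connection

# A k x k conv on C channels costs ~ k*k*C*C weights; the factored
# (k x 1) + (1 x k) pair on C/2 channels costs ~ 2*k*(C/2)^2, a large saving.
x = torch.randn(1, 64, 32, 32)
print(SqueezeNextStyleBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```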

The resulting SqueezeNext network has 112× fewer model parameters than AlexNet. The researchers also develop a version of the network whose performance approaches that of VGG-19 (which did well in ImageNet 2014). By carefully tuning the model design in parallel with a hardware simulator, they further produce an even more efficient variant that is significantly faster and more energy efficient than the widely used compact network SqueezeNet.


(via: SqueezeNext)


Deep Reinforcement Learning and Q-Learning

Deep learning (neural networks) is used to achieve state-of-the-art results in image recognition, computer vision, machine translation, and more. Reinforcement learning refers to goal-oriented algorithms that learn how to attain a complex objective (goal) or maximize along a particular dimension over many steps; for example, maximizing the points won in a game over many moves. They can start from a blank slate and, under the right conditions, achieve superhuman performance. Like a child incentivized by spankings and candy, these algorithms are penalized when they make the wrong decisions and rewarded when they make the right ones – this is reinforcement.

Like humans, agents learn for themselves which strategies lead to the greatest long-term rewards. This paradigm of learning by trial and error, solely from rewards or punishments, is known as reinforcement learning (RL). Reinforcement learning algorithms that incorporate deep learning can beat world champions at the game of Go as well as human experts at numerous Atari video games. While that may sound trivial, it is a vast improvement over their previous accomplishments, and the state of the art is progressing rapidly.

Reinforcement learning solves the difficult problem of correlating immediate actions with the delayed returns they produce. Like humans, reinforcement learning algorithms sometimes have to wait a while to see the fruit of their decisions. They operate in a delayed return environment, where it can be difficult to understand which action leads to which outcome over many time steps.
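The core interaction can be summarized as a simple loop: the agent picks an action, the environment returns a reward and the next state, and the agent updates its behaviour from that feedback. The sketch below uses Gymnasium’s CartPole environment with a random policy purely to illustrate the loop; the environment name and the random action choice are assumptions for the example, not part of the articles above.

```python
# Minimal agent-environment loop, assuming the Gymnasium package is installed.
# A random policy stands in for the agent; real RL replaces the random choice
# with a learned policy that is updated from the observed rewards.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
total_reward = 0.0

for step in range(200):
    action = env.action_space.sample()                            # agent: pick an action
    obs, reward, terminated, truncated, info = env.step(action)   # environment responds
    total_reward += reward                                        # rewards accumulate over time
    if terminated or truncated:                                    # episode ends
        break

print(f"Episode finished after {step + 1} steps, return = {total_reward}")
env.close()
```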

Q-Learning

Q-learning is a model-free reinforcement learning technique. It can compare the expected utility of the available actions for a given state without requiring a model of the environment, and it can handle problems with stochastic transitions and rewards without requiring adaptations. Markov decision processes (MDPs) provide a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker, and they are useful for studying a wide range of optimization problems solved via dynamic programming and reinforcement learning. It has been proven that for any finite MDP, Q-learning eventually finds an optimal policy, in the sense that the expected value of the total reward over all successive steps, starting from the current state, is the maximum achievable.
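At the heart of the method is the tabular update Q(s, a) ← Q(s, a) + α [r + γ · max_a' Q(s', a') − Q(s, a)], where α is the learning rate and γ the discount factor. The sketch below applies it to a tiny, made-up chain environment; the environment, its rewards, and the hyperparameter values are illustrative assumptions, not something defined in the articles above.

```python
# Tabular Q-learning on a toy 5-state chain: start in state 0, reach state 4 for reward 1.
# The environment, rewards, and hyperparameters are made up for illustration.
import random

N_STATES, ACTIONS = 5, (0, 1)           # action 0 = move left, action 1 = move right
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Deterministic toy dynamics: reward 1 only when reaching the last state."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < epsilon else max(ACTIONS, key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print("Greedy policy (0 = left, 1 = right):",
      [max(ACTIONS, key=lambda x: Q[st][x]) for st in range(N_STATES)])
```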


(via Wikipedia, Google DeepMind Blog, DeepLearning4j, Intel AI)