SLAM (Simultaneous Localization and Mapping)

What’s SLAM?

In an unknown environment, SLAM is the computational problem of constructing or updating a map while simultaneously keeping track of the agent's location within it. The robot's sensors provide the measurements from which the unknown environment is reconstructed. SLAM uses the robot's pose estimate to improve the map's landmark position estimates, and vice versa.
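To make that mutual improvement concrete, here is a deliberately tiny 1-D sketch (hypothetical numbers, not a real SLAM algorithm — a real system would use an EKF or a graph optimizer): mapping a landmark from a known pose, then using that landmark to correct a drifted pose.

```python
# Toy 1-D illustration of the SLAM coupling: the robot's pose estimate
# and a landmark's position estimate improve each other.

pose = 0.0        # believed robot position (starts at origin, exact)

# Robot measures a landmark 5.0 m ahead -> add the landmark to the map.
range_meas = 5.0
landmark = pose + range_meas        # landmark mapped at 5.0

# Robot drives "3 m", but odometry over-reads; the pose estimate drifts.
pose += 3.2                         # drifted pose estimate: 3.2

# Re-observing the known landmark at range 2.0 corrects the pose.
range_meas2 = 2.0
pose = landmark - range_meas2       # corrected pose: 3.0
print(pose, landmark)               # 3.0 5.0
```

In a real filter both estimates carry uncertainty and are updated jointly, but the loop — pose helps the map, map helps the pose — is the same.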

Is SLAM necessary?

Of course! Robot motion models aren't always accurate: wheel odometry error accumulates over time, and IMU (Inertial Measurement Unit) sensors drift, so a robot relying on dead reckoning alone gradually loses track of where it is.
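A quick simulation shows why cumulative odometry error is a problem. The noise level below is an arbitrary illustrative value; the point is only that per-step errors add up and are never corrected without an external reference such as SLAM.

```python
import random

# Sketch: wheel-odometry error accumulates with every step.
# Each step the robot truly moves 1.0 m, but odometry reads 1.0 + noise,
# so the position error can grow without bound unless corrected.
random.seed(0)
true_pos, odom_pos = 0.0, 0.0
errors = []
for step in range(100):
    true_pos += 1.0
    odom_pos += 1.0 + random.gauss(0.0, 0.05)  # 5 cm noise per metre (assumed)
    errors.append(abs(odom_pos - true_pos))

print(errors[9], errors[99])  # error after 10 steps vs. after 100 steps
```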

(Figure: KITTI dataset — input.)

(Figure: KITTI dataset — received.)

Mapping in SLAM:

Topological maps are a method of environment representation that captures the connectivity (topology) of the environment rather than building a geometrically precise map. Topological SLAM approaches have been used to enforce global consistency in metric SLAM algorithms.

Sensing in SLAM:

SLAM uses various sensors, and different types of sensors give rise to different SLAM algorithms, depending on which assumptions are most appropriate for those sensors. At one extreme, laser scans or visual features provide details of a great many points within an area, sometimes rendering SLAM inference unnecessary because shapes in these point clouds can be easily and unambiguously aligned at each step via image registration. At the opposite extreme, tactile sensors are extremely sparse: they contain information only about points very close to the agent, so purely tactile SLAM requires strong prior models to compensate. Most practical SLAM tasks fall somewhere between these visual and tactile extremes.
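The "easily and unambiguously aligned" case can be sketched in closed form: when two 2-D scans of the same points are related by a rotation and translation, the motion is recoverable by least squares (the Kabsch/Procrustes solution). The function name and the synthetic scan below are illustrative, not from any library.

```python
import numpy as np

def align(src, dst):
    """Return R, t minimizing ||R @ src + t - dst|| in the least-squares sense."""
    sc, dc = src.mean(axis=1, keepdims=True), dst.mean(axis=1, keepdims=True)
    H = (src - sc) @ (dst - dc).T          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t

# Synthetic scan: rotate by 30 degrees and shift, then recover the motion.
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.random.default_rng(0).normal(size=(2, 50))
dst = R_true @ src + np.array([[1.0], [2.0]])
R, t = align(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [[1.0], [2.0]]))  # True True
```

With noisy, partially overlapping scans the alignment is no longer unambiguous, which is exactly when full SLAM inference becomes necessary.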

Types of sensors:

  • Active: LEDs, Range Finders, Ultrasonic Sensors, Light Sensors.
  • Passive: Cameras, Infrared Sensors.
  • Intrusive: Markers in Augmented Reality.

Multiple Objects in SLAM:

The related problems of data association and computational complexity are among those yet to be fully resolved, for example the identification of multiple confusable landmarks. A significant recent advance in the feature-based SLAM literature re-examined the probabilistic foundation of Simultaneous Localisation and Mapping (SLAM), posing it in terms of multi-object Bayesian filtering with random finite sets. These methods provide superior performance to leading feature-based SLAM algorithms in challenging measurement scenarios with high false-alarm and missed-detection rates, without the need for explicit data association.
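For intuition, here is the classic data-association step that the random-finite-set formulation avoids: match each measurement to its nearest known landmark and reject matches beyond a gate distance as clutter or a new landmark. The toy 2-D points and the gate value are made up; a real filter would gate on Mahalanobis distance, not Euclidean.

```python
import math

# Nearest-neighbour data association with gating (toy sketch).
landmarks = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
measurements = [(0.2, -0.1), (4.9, 0.3), (9.0, 9.0)]   # last one is clutter
GATE = 1.0

def associate(meas, lms, gate):
    pairs = []
    for m in meas:
        dists = [math.dist(m, lm) for lm in lms]
        j = min(range(len(lms)), key=lambda i: dists[i])
        pairs.append(j if dists[j] <= gate else None)  # None = unmatched
    return pairs

print(associate(measurements, landmarks, GATE))  # [0, 1, None]
```

With confusable (closely spaced) landmarks, a single wrong pairing here can corrupt the whole map — which is why removing the data-association step is attractive.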

Applications of SLAM:

  • Augmented Reality
  • Robotic control
  • Virtual Map Building (Google Earth)
  • Navigation in unknown environments

 

Cognitive Computing

So what’s cognitive computing?

We humans think, so put simply, cognitive computing is the simulation of human thought processes in a computerized model. It involves self-learning systems that make use of natural language processing, data mining, and pattern recognition. The main goal is to use cognitive computing to create automated systems capable of solving problems without human assistance.


Cognitive computing systems can harness learned knowledge and past experience at immense scale to solve "soft" problems.

 

Cognitive computing makes use of machine learning algorithms, continuously acquiring knowledge from data via data-mining techniques. The systems refine the way they look for patterns, as well as the way they process data, so they become capable of anticipating new problems and modeling possible solutions. Being contextual, they understand what the data actually implies, drawing on multiple sources to build a holistic, contextual picture.

Cognitive computing is used in many Artificial Intelligence applications such as Artificial Neural Networks, Robotics, and Virtual Reality.

IBM Watson makes use of cognitive computing:

 


Watson is a question-answering computer system capable of answering questions posed in natural language, developed in IBM’s DeepQA project by a research team led by principal investigator David Ferrucci. (wiki)

 

These cognitive systems, most notably IBM Watson, rely on deep learning algorithms and neural networks to process information by comparing it to a teaching set of data. The more data the system is exposed to, the more it learns and the more accurate it becomes over time. The neural network itself is a complex "tree" of decisions the computer can make to arrive at an answer.
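The core idea of learning from a teaching set can be shown with the smallest possible neural unit. This is only a sketch: a single perceptron trained on labeled examples of the AND function, with made-up data. Systems like Watson use far deeper networks, but the update rule is the same idea — compare the output to the teaching label and nudge the weights.

```python
# A single perceptron learning AND from a labeled teaching set.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1        # weights, bias, learning rate

for epoch in range(20):                # repeated exposure to the data
    for (x1, x2), label in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - out              # compare to the teaching label
        w[0] += lr * err * x1          # nudge the weights toward the label
        w[1] += lr * err * x2
        b += lr * err

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # [0, 0, 0, 1]
```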

 

See also: Machine Learning, Artificial Neural Networks

 

 

 

Machine Learning

So what is machine learning?

Machine learning is a type of Artificial Intelligence (AI) that gives computers the ability to learn without being explicitly programmed. Machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data.

The process of machine learning is similar to data mining: both search through data to look for patterns. But instead of extracting data for human comprehension, as in data mining applications, machine learning uses the discovered patterns to adjust the program's actions accordingly.


Building a business around Machine Learning APIs

Machine learning is categorized as:

  • Supervised Machine Learning 
  • Unsupervised Machine Learning

 

Supervised Machine Learning:

In supervised machine learning, the 'categories' are known. Supervised learning is common in classification problems because the goal is often to get the computer to learn a classification system that we have created. In supervised systems, the data presented to the machine learning algorithm is fully labeled: every example comes with the classification the machine is meant to reproduce. A classifier is learned from this data, and the process of assigning labels to yet-unseen instances is called classification.
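A minimal sketch of supervised classification, using a nearest-centroid classifier on hypothetical labeled 1-D points (the categories and values are invented for illustration): every training example carries the label the machine must reproduce.

```python
# Fully labeled training data: (value, category).
train = [(1.0, "cold"), (2.0, "cold"), (3.0, "cold"),
         (8.0, "hot"), (9.0, "hot"), (10.0, "hot")]

# "Learning" = compute one centroid per known category.
groups = {}
for x, label in train:
    groups.setdefault(label, []).append(x)
centroids = {lab: sum(xs) / len(xs) for lab, xs in groups.items()}

# "Classification" = assign an unseen instance to the nearest centroid.
def classify(x):
    return min(centroids, key=lambda lab: abs(x - centroids[lab]))

print(classify(2.5), classify(8.5))  # cold hot
```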

Unsupervised Machine Learning: 

Unsupervised learning seems much harder: the goal is to have the computer learn how to do something that we don’t tell it how to do! There are actually two approaches to unsupervised learning.

  • The first approach is to teach the agent not by giving explicit categorizations, but by using some sort of reward system to indicate success. Note that this type of training will generally fit into the decision problem framework because the goal is not to produce a classification but to make decisions that maximize rewards. This approach nicely generalizes to the real world, where agents might be rewarded for doing certain actions and punished for doing others.
  • The second type of unsupervised learning is called clustering. In this type of learning, the goal is not to maximize a utility function, but simply to find similarities in the training data. The assumption is often that the clusters discovered will match reasonably well with an intuitive classification. For instance, clustering individuals based on demographics might result in a clustering of the wealthy in one group and the poor in another.
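The clustering idea above can be sketched with k-means on invented 1-D "income" values. No labels are given; the algorithm simply groups similar points, which here happens to separate the low values from the high ones.

```python
# k-means (Lloyd's algorithm) with k=2 on synthetic 1-D data.
data = [1.0, 1.5, 2.0, 2.5, 18.0, 19.0, 20.0, 21.0]
centers = [data[0], data[-1]]            # naive initialization

for _ in range(10):                      # a few Lloyd iterations
    groups = [[], []]
    for x in data:                       # assign each point to nearest center
        nearest = min((0, 1), key=lambda k: abs(x - centers[k]))
        groups[nearest].append(x)
    # recompute centers (both groups stay non-empty for this data)
    centers = [sum(g) / len(g) for g in groups]

print(groups)   # [[1.0, 1.5, 2.0, 2.5], [18.0, 19.0, 20.0, 21.0]]
```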

TensorFlow is Google Brain’s second generation machine learning system, with a reference implementation released as open source software on November 9, 2015.

Google recently open-sourced TensorFlow, an open-source software library for machine learning, and now focuses on what is known as deep learning. Google uses this AI engine to recognize spoken words, translate from one language to another, improve Internet search results, and more. It's the heart of Google's Photos app! We will have a detailed look at data mining and deep learning in the future.

So to conclude this blog post, we’ve put some insights on Machine Learning. If you like this post leave a like and also leave a comment for more information and posts on related topics!

Update 2.0:

Google today announced a new machine learning platform for developers at its NEXT Google Cloud Platform user conference in San Francisco. As Google chairman Eric Schmidt stressed during today’s keynote, Google believes machine learning is “what’s next.” With this new platform, Google will make it easier for developers to use some of the machine learning smarts Google already uses to power features like Smart Reply in Inbox.


Google’s Cloud Machine Learning platform basically consists of two parts: one that allows developers to build machine learning models from their own data, and another that offers developers a pre-trained model. To train these machine learning models (which takes quite a bit of computing power), developers can take their data from tools like Google Cloud Dataflow, Google BigQuery, Google Cloud Dataproc, Google Cloud Storage, and Google Cloud Datalab.

“Machine learning. This is the next transformation,” Schmidt says. “I’m a programmer who sort of got lucky at Google. But the programming paradigm is changing. Instead of programming a computer, you teach a computer to learn something and it does what you want.”

Prerequisites for machine learning:

  1. Linear algebra
  2. Probability theory
  3. Calculus
  4. Calculus of variations
  5. Graph theory
  6. Optimization methods (Lagrange multipliers)
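As a taste of the last item, a standard worked example of Lagrange multipliers: maximize $f(x,y)=xy$ subject to the constraint $x+y=10$.

```latex
\mathcal{L}(x, y, \lambda) = xy - \lambda\,(x + y - 10)
\qquad
\frac{\partial \mathcal{L}}{\partial x} = y - \lambda = 0,\quad
\frac{\partial \mathcal{L}}{\partial y} = x - \lambda = 0
\;\Rightarrow\; x = y = \lambda
```

Substituting into the constraint gives $x = y = 5$, so the maximum is $xy = 25$. This pattern — optimize a function subject to constraints — underlies many machine learning methods, such as support vector machines.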

( via Quora )