Cognitive Computing

So what’s cognitive computing?

We humans think, so put simply, cognitive computing is the simulation of the human thought process in a computerized model. It involves self-learning systems that make use of natural language processing, data mining and pattern recognition. The main goal is to use cognitive computing to create automated systems that are capable of solving problems without human assistance.


Cognitive computing systems can harness learning and past experience at an immense scale to solve soft problems.

 

Cognitive computing makes use of machine learning algorithms, and the systems continuously acquire knowledge from data gathered through data mining techniques. They refine the way they look for patterns, as well as the way they process data, so they become capable of anticipating new problems and modeling possible solutions. Being contextual, they understand what the data actually implies, and they draw from multiple sources to build a holistic, contextual picture of the data.

Cognitive computing is used in many Artificial Intelligence applications such as Artificial Neural Networks, Robotics and Virtual Reality.

IBM Watson makes use of cognitive computing:

 


Watson is a question-answering computer system capable of answering questions posed in natural language, developed in IBM’s DeepQA project by a research team led by principal investigator David Ferrucci. (Wikipedia)

 

These cognitive systems, most notably IBM Watson, rely on deep learning algorithms and neural networks to process information by comparing it to a teaching set of data. The more data the system is exposed to, the more it learns and the more accurate it becomes over time. The neural network itself is a complex “tree” of decisions the computer can make to arrive at an answer.
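
To make that “more data, more accuracy” idea concrete, here is a minimal sketch (not Watson’s actual pipeline, just an illustration that assumes scikit-learn is installed): a small neural-network classifier is trained on growing slices of a labeled teaching set, and its accuracy on unseen data generally improves as it sees more examples.

```python
# Minimal sketch: accuracy tends to improve as the teaching set grows.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)                 # labeled "teaching set"
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (100, 400, len(X_train)):                  # expose the model to more data
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    model.fit(X_train[:n], y_train[:n])             # learn from the first n examples
    print(n, "examples -> accuracy:", round(model.score(X_test, y_test), 3))
```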

 

See also: Machine Learning, Artificial Neural Networks


Optical Computing and Photonic Processors

Optical or photonic computing uses photons produced by lasers or diodes for computation.

In late 2015, researchers at the University of Colorado created the first full-fledged light processor, which transmits data using light instead of electricity. It has 850 I/O elements that give it far more bandwidth than electronic chips: we’re talking 300 Gbps per square millimeter, or 10 to 50 times what you normally see, by replacing the circuitry with optics.


Although it measures only 3 mm x 2 mm and has just two cores, it has dramatic potential compared with our normal processors.


“It’s the first processor that can use light to communicate with the external world,” Vladimir Stojanović, the University of California professor who led the collaboration, said in a press release.

 

Photonic Logic:

Photonic logic is the use of photons (light) in logic gates (NOT, AND, OR, NAND, NOR, XOR, XNOR). Switching is obtained using nonlinear optical effects when two or more signals are combined. Resonators are especially useful in photonic logic, since they allow a build-up of energy from constructive interference, thus enhancing optical nonlinear effects.
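
As a purely numerical toy (an illustrative sketch, not a model of any real device), we can mimic an interference-based XOR gate by treating each input as an optical field amplitude and giving one input a pi phase offset, so that two “on” signals cancel destructively at the detector:

```python
import numpy as np

def photonic_xor(a: int, b: int, threshold: float = 0.5) -> int:
    # Each "on" input contributes a unit-amplitude field; input b carries a
    # pi phase shift, so two "on" inputs interfere destructively and cancel.
    field = a * 1.0 + b * np.exp(1j * np.pi)
    intensity = abs(field) ** 2           # a photodetector measures intensity
    return int(intensity > threshold)     # threshold the detector reading

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} XOR {b} = {photonic_xor(a, b)}")
```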

It’s been said that we may see optical computing soon, because light pulses can be used to send data instead of voltage packets: rather than purely electrical signals, processors would use light pulses generated by lasers.

So what about memory?

A holographic memory can store data in the form of a hologram within a crystal. A laser is split into a reference beam and a signal beam; the signal beam passes through the logic gates and picks up the information. The two beams then meet again, and their interference pattern creates a hologram in the crystal.
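
Here is a tiny numerical sketch of the recording and readout steps (a toy algebraic check, not a simulation of a real crystal): the stored hologram is just the interference intensity |R + S|^2, and when it is re-illuminated by the reference beam R alone, the resulting field contains a term proportional to the original signal beam S. In a real setup the unwanted terms propagate in different directions and are filtered out.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
R = np.exp(1j * 2 * np.pi * rng.random(n))        # unit-amplitude reference beam
S = 0.3 * (rng.random(n) + 1j * rng.random(n))    # signal beam carrying the data

# Recording: the crystal stores only the interference intensity.
H = np.abs(R + S) ** 2

# Readout: illuminate the stored pattern with the reference beam alone.
# Algebraically, R*H = R|R|^2 + R|S|^2 + R^2*conj(S) + |R|^2*S, and the last
# term is the reconstructed signal beam (|R|^2 = 1 here).
readout = R * H
signal_term = readout - R * (np.abs(R) ** 2 + np.abs(S) ** 2) - R ** 2 * np.conj(S)
print(np.allclose(signal_term, np.abs(R) ** 2 * S))   # True: the data is recoverable
```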


Holographic Memory

 

Advantages of Optical Computing:

  • Small size
  • Increased speed
  • Low heating
  • Reconfigurable
  • Scalable to larger or smaller networks
  • More complex functions done faster
  • Applications for Artificial Intelligence
  • Less power consumption (500 microwatts per interconnect length vs. 10 mW for electrical)


Machine Learning

So what is machine learning?

Machine learning is a type of Artificial Intelligence (AI) that gives computers the ability to learn without being explicitly programmed. Machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data.

The process of machine learning is similar to data mining: both search through data to look for patterns. But instead of simply extracting data, as data mining applications do, machine learning uses the detected patterns to adjust the program’s actions accordingly.


Building a business around Machine Learning APIs

Machine learning is categorized as:

  • Supervised Machine Learning 
  • Unsupervised Machine Learning

 

Supervised Machine Learning:

In supervised machine learning, the ‘categories’ are known. Supervised learning is fairly common in classification problems because the goal is often to get the computer to learn a classification system that we have created. In supervised systems, the data presented to the machine learning algorithm is fully labeled: every example comes with the classification the machine is meant to reproduce. A classifier is learned from this data, and the process of assigning labels to previously unseen instances is called classification.
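
Here is a minimal sketch of that workflow, assuming scikit-learn is available (the tiny fruit dataset is invented purely for illustration):

```python
from sklearn.tree import DecisionTreeClassifier

# Fully labeled training data: [weight in grams, surface smoothness 0-10]
X_train = [[150, 9], [170, 8], [140, 9], [130, 3], [120, 2], [110, 4]]
y_train = ["apple", "apple", "apple", "orange", "orange", "orange"]  # known categories

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_train, y_train)          # learn a classifier from the labeled examples

# Classification: assign a label to a previously unseen instance.
print(clf.predict([[145, 7]]))     # e.g. ['apple']
```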

Unsupervised Machine Learning: 

Unsupervised learning seems much harder: the goal is to have the computer learn how to do something that we don’t tell it how to do! There are actually two approaches to unsupervised learning.

  • The first approach is to teach the agent not by giving explicit categorizations, but by using some sort of reward system to indicate success. Note that this type of training will generally fit into the decision problem framework because the goal is not to produce a classification but to make decisions that maximize rewards. This approach nicely generalizes to the real world, where agents might be rewarded for doing certain actions and punished for doing others.
  • The second type of unsupervised learning is called clustering. In this type of learning, the goal is not to maximize a utility function, but simply to find similarities in the training data. The assumption is often that the clusters discovered will match reasonably well with an intuitive classification. For instance, clustering individuals based on demographics might result in a clustering of the wealthy in one group and the poor in another. A minimal sketch of this appears below.
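
Assuming scikit-learn is available, the sketch below groups made-up income and home-value figures into two clusters without any labels:

```python
from sklearn.cluster import KMeans

# Unlabeled data: [annual income in $1000s, home value in $1000s]
X = [[30, 90], [35, 110], [28, 85], [220, 900], [250, 1100], [300, 950]]

# Ask for two clusters; no labels are given, the algorithm simply groups
# similar rows together (here: a lower-income and a higher-income group).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
print(kmeans.fit_predict(X))   # e.g. [0 0 0 1 1 1] (cluster ids, not categories)
```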

TensorFlow is Google Brain’s second generation machine learning system, with a reference implementation released as open source software on November 9, 2015.

Google’s TensorFlow has recently been open-sourced. TensorFlow is an open-source software library for machine learning, and with it Google now focuses on something known as deep learning. Google also uses this AI engine to recognize spoken words, translate from one language to another, improve Internet search results, and more. It’s at the heart of Google’s Photos app! We will have a detailed look at data mining and deep learning in the future.
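
For a taste of what working with TensorFlow looks like, here is a minimal sketch using the tf.keras API (which post-dates the original 2015 release); the toy dataset, layer sizes and training settings are arbitrary choices for illustration:

```python
import numpy as np
import tensorflow as tf

# Toy problem: classify points by whether x + y > 1 (made up for illustration).
X = np.random.rand(1000, 2).astype("float32")
y = (X.sum(axis=1) > 1.0).astype("int32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)               # train on the toy data
print(model.predict(np.array([[0.9, 0.8]])))       # class probabilities
```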

So, to conclude this blog post, we’ve shared some insights on Machine Learning. If you like this post, leave a like, and also leave a comment for more information and posts on related topics!

Update 2.0:

Google today announced a new machine learning platform for developers at its NEXT Google Cloud Platform user conference in San Francisco. As Google chairman Eric Schmidt stressed during today’s keynote, Google believes machine learning is “what’s next.” With this new platform, Google will make it easier for developers to use some of the machine learning smarts Google already uses to power features like Smart Reply in Inbox.


Google’s Cloud Machine Learning platform basically consists of two parts: one that allows developers to build machine learning models from their own data, and another that offers developers a pre-trained model. To train these machine learning models (which takes quite a bit of computing power), developers can take their data from tools like Google Cloud Dataflow, Google BigQuery, Google Cloud Dataproc, Google Cloud Storage, and Google Cloud Datalab.

“Machine learning. This is the next transformation,” Schmidt says. “I’m a programmer who sort of got lucky at Google. But the programming paradigm is changing. Instead of programming a computer, you teach a computer to learn something and it does what you want.”

Prerequisites for machine learning:

  1. Linear algebra
  2. Probability theory
  3. Calculus
  4. Calculus of variations
  5. Graph theory
  6. Optimization methods (Lagrange multipliers)

(via Quora)

Artificial Neural Networks (ANN)

So what’s an Artificial Neural Network?

Artificial Neural Networks (ANNs) are relatively crude electronic models based on the neural structure of the brain. Since the brain basically learns from experience, it is natural proof that some problems beyond the scope of current computers are indeed solvable by small, energy-efficient packages. This brain modeling also promises a less technical way to develop machine solutions, and this new approach to computing provides more graceful degradation during system overload than its more traditional counterparts. These biologically inspired methods of computing are thought to be the next major advancement in the computer industry.


Most neurons in the brain are connected to several thousand others.

So how does it work?

An artificial neural network is simulated with software. In other words, we use a digital computer to run a simulation of a bunch of heavily interconnected little mini-programs which stand in for the neurons of our simulated neural network. Data enters the ANN and has some operation performed on it by the first “neuron,” that operation being determined by how the neuron happens to be programmed to react to data with those specific attributes. The result is then passed on to the next neuron, chosen in a similar way, so that another operation can be performed. There are a finite number of “layers” of these computational neurons, and after moving through them all, an output is produced.
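
Here is a bare-bones sketch of that flow in plain Python with NumPy (the weights are random, purely to show the mechanics of data passing through layers):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Each "neuron" takes a weighted sum of its inputs plus a bias, then
    # applies a simple nonlinearity (ReLU) to decide what it passes on.
    return np.maximum(0.0, inputs @ weights + biases)

x = np.array([0.5, -1.2, 3.0])                          # data entering the network

W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)    # layer 1: 3 inputs -> 4 neurons
W2, b2 = rng.normal(size=(4, 2)), rng.normal(size=2)    # layer 2: 4 neurons -> 2 outputs

hidden = layer(x, W1, b1)        # first layer of "neurons" transforms the input
output = layer(hidden, W2, b2)   # second layer produces the final output
print(output)
```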


Natural Neuron


An Artificial Neuron

 

So where is it used?

  • Google uses an ANN to learn how to better target “watch next” suggestions after YouTube videos.
  • The scientists at the Large Hadron Collider turned to ANNs to sift the results of their collisions and pull the signature of just one particle out of the larger storm.
  • Shipping companies use them to minimize route lengths over a complex scattering of destinations.

Google’s new Smart Reply feature uses artificial neural networks to come up with appropriate responses to email messages.

“That first decision is made by an artificial neural network very much like the ones we use for spam classification and separating promotional emails from personal ones,” says Greg Corrado, senior research scientist on the Google Brain Team. “Our network has been trained to predict whether this is an email someone might write a brief reply to.” For the full article, see Wired (http://www.wired.com/2016/03/google-inbox-auto-answers-emails/?mbid=social_gplus).
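
Google’s real system is far more sophisticated, but the basic idea of predicting “does this message merit a brief reply?” can be sketched with a tiny text classifier (assuming scikit-learn; the example messages and labels are invented):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Can we meet tomorrow at 10?",           # expects a short reply
    "Are you coming to the party tonight?",  # expects a short reply
    "Weekly newsletter: top 10 gadgets",     # no reply needed
    "Your invoice for March is attached",    # no reply needed
]
needs_reply = [1, 1, 0, 0]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(emails, needs_reply)               # learn from labeled examples

print(model.predict(["Can you join the call on Friday?"]))   # e.g. [1]
```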

So, concluding this blog, we’ve shed some light on ANNs and given a brief overview of them. If you like this blog and would like some more related topics, do leave a comment!


The Security of Things (SECoT)

We know about the Internet of Things (IoT). It’s a hot topic in the industry now, but the concept has been around for well over a decade. In the early 2000s, Kevin Ashton laid the groundwork for what would become the Internet of Things (IoT) at MIT’s Auto-ID lab.

 


In a 1999 article for RFID journal, Ashton wrote: “If we had computers that knew everything there was to know about things—using data they gathered without any help from us — we would be able to track and count everything, and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling, and whether they were fresh or past their best. We need to empower computers with their own means of gathering information, so they can see, hear and smell the world for themselves, in all its random glory. RFID and sensor technology enable computers to observe, identify and understand the world—without the limitations of human-entered data.”

 

This has proved to be true now! But what about security? The main problem is that, because IoT has only recently been implemented, security hasn’t been part of the picture. IoT products are often sold with old operating systems or software. That works fine on a personal level, but what about an application on an industrial level? For that, an IoT device that is connected to the Internet should be segmented into its own network and have its network access restricted.

We know about cyber threats, and the next target in line is IoT. What can be done to prevent this? A lot of concepts and ideas are being shared, and a conference is even being held in Cambridge, Massachusetts, United States (the link to the conference: https://securityofthings.com/).


A generic Internet of Things topology: A typical IoT deployment will consist of sensor-equipped edge devices on a wired or wireless network sending data via a gateway to a public or private cloud. Aspects of the topology will vary broadly from application to application; for example, in some cases, the gateway may be on the device. Devices based on such topologies may be built from the ground up to leverage IoT (greenfield) or may be legacy devices that will have IoT capabilities added post-deployment (brownfield). Image via http://www.windriver.com/whitepapers/security-in-the-internet-of-things/wr_security-in-the-internet-of-things.pdf
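
As a small illustration of the edge-device-to-gateway leg of that topology, here is a sketch using MQTT, a messaging protocol commonly used in IoT (the broker address, topic and sensor reading are placeholders, and it assumes the paho-mqtt Python package; a real deployment should add TLS and authentication):

```python
import json
import time

import paho.mqtt.publish as publish

GATEWAY_HOST = "gateway.local"            # placeholder: the gateway runs an MQTT broker
TOPIC = "sensors/device42/temperature"    # placeholder topic name

# Periodically publish a (fake) sensor reading for the gateway to forward to the cloud.
for _ in range(3):
    reading = {"ts": time.time(), "celsius": 21.5}   # placeholder measurement
    publish.single(TOPIC, json.dumps(reading), hostname=GATEWAY_HOST)
    time.sleep(5)
```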

Some ideas on SECoT were given by Wind River (a subsidiary of Intel that provides embedded system software, including run-time software, industry-specific software solutions, simulation technology, development tools and middleware), one of which is:

Building In Security From The Bottom Up:

Knowing that no single control is going to adequately protect a device, how do we apply what we have learned over the past 25 years to implement security in a variety of scenarios? We do so through a multi-layered approach to security that starts at the beginning, when power is applied, establishes a trusted computing baseline, and anchors that trust in something immutable that cannot be tampered with.

  • Secure booting: When power is first introduced to the device, the authenticity and integrity of the software on the device is verified using cryptographically generated digital signatures. In much the same way that a person signs a check or a legal document, a digital signature attached to the software image and verified by the device ensures that only the software that has been authorized to run on that device, and signed by the entity that authorized it, will be loaded. (A minimal sketch of this kind of signature check follows this list.)
  • Access control: Next, different forms of resource and access control are applied. Mandatory or role-based access controls built into the operating system limit the privileges of device components and applications so they access only the resources they need to do their jobs.
  • Device authentication: When the device is plugged into the network, it should authenticate itself prior to receiving or transmitting data.
  • Firewalling and IPS: The device also needs a firewall or deep packet inspection capability to control traffic that is destined to terminate at the device. Why is a host-based firewall or IPS required if network-based appliances are in place? Deeply embedded devices have unique protocols, distinct from enterprise IT protocols. For instance, the smart energy grid has its own set of protocols governing how devices talk to each other.
  • Updates and patches: Once the device is in operation, it will start receiving hot patches and software updates. Operators need to roll out patches, and devices need to authenticate them, in a way that does not consume bandwidth or impair the functional safety of the device.
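
To make the secure-booting idea a little more concrete, here is a minimal sketch of a digital-signature check using the Python cryptography package (key handling and the firmware bytes are placeholders, and a real boot ROM would do this in hardware or low-level firmware, not Python):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In reality the vendor signs the image at build time and the device holds
# only the public key, ideally stored in immutable hardware.
vendor_key = Ed25519PrivateKey.generate()
device_trusted_pubkey = vendor_key.public_key()

firmware_image = b"...device firmware bytes..."      # placeholder image
signature = vendor_key.sign(firmware_image)          # shipped alongside the image

def secure_boot(image: bytes, sig: bytes) -> bool:
    """Only boot software whose signature verifies against the trusted key."""
    try:
        device_trusted_pubkey.verify(sig, image)
        return True        # authentic and unmodified: OK to boot
    except InvalidSignature:
        return False       # tampered or unauthorized image: refuse to boot

print(secure_boot(firmware_image, signature))                 # True
print(secure_boot(firmware_image + b"tampered", signature))   # False
```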

So, concluding this post, we can say that however appealing IoT is and whatever potential it carries, there are some major requirements to fulfill before we actually start to implement it on a major scale.

If you like this blog post and have some suggestions, do leave a comment. Also, ideas for blog posts on related topics are highly appreciated!


The Age Of Artificial Intelligence

It’s the age of the AIs. The term Artificial Intelligence was first coined by John McCarthy in 1956! (Wooh! Long back!) We knew that this was coming, but we never knew a date, just rumours. But now it has started.

Machines will have feelings!

It’s predicted, though, that it will take a few more decades before robots are fully doing all our daily jobs, and we may just have to program them accordingly.

We may also see Cyborgs in a few decades but let’s just skip that part for the future now!

An engineer makes an adjustment to the robot “The Incredible Bionic Man” at the Smithsonian National Air and Space Museum in Washington October 17, 2013. The robot is the world’s first-ever functioning bionic man made of prosthetic parts and artificial organ implants. Image via CBCNEWS.

We all know Google’s computing system proved a major breakthrough for Artificial Intelligence in an Asian board game called Go (the ‘encircling game’), which is about 2,500 years old and far more complex than chess! And the result was Machines 4, Human 1 (anybody interested in learning the game can refer to this link: http://www.kiseido.com/ff.htm). AlphaGo’s success was down to artificial intelligence (AI): the computer program taught itself how to improve its game by playing millions of matches against itself.

Of course, the credit goes to the Google researchers in Great Britain for beating a top human player at Go! (A detailed article on the breakthrough: http://www.wired.com/2016/01/in-a-huge-breakthrough-googles-ai-beats-a-top-player-at-the-game-of-go/).

The WSJ posted an article titled ‘Machines That Will Think and Feel’. But is there something to fear from AIs? AIs are what we make them. Just like nurturing a child, it depends on the way we program them and the algorithms we use, so we must be careful in determining what the task of a particular AI is.

According to a YouGov survey for the British Science Association of more than 2,000 people, public attitudes towards AI vary greatly depending on its application. Fully 70% of respondents are happy for intelligent machines to carry out jobs such as crop monitoring – but this falls to 49% once you start asking about household tasks, and to a miserly 23% when talking about medical operations in hospitals. The very lowest level of trust comes when you ask about sex work, with just 17% trusting robots equipped with AI in this field – although this may be a proxy for not trusting human nature very much in this situation either.

AIs aren’t close to speaking for themselves yet, but the future comes fast, and at some point they will overcome humans, that’s for sure. Will it help us? The answer isn’t certain, and only time can tell. So for now, let’s just enjoy the AIs we are using and the far more complex AIs coming in the future.

If you enjoyed this post, please leave a comment, and also suggestions if you have any! There’s always room for improvement! And if you are interested in any of the related topics, please leave a comment; we’ll try to make a post on them too!