Is A.I. Mysterious?

I recently began reading Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. The author paints the future of AI as tremendously frightening, describing the ways in which AI could overpower human beings until a point where humans are no longer needed. Such an intelligence would be superior to us, hence the term Superintelligence. Bostrom further mentions in the opening pages that this kind of invention would be the last invention humans ever make; the AI would then invent new things itself, without feeling the need for a human being.

Does AI possess a darker side? (Enter article by MitTechReview)

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at Nvidia, didn't look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn't follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it's also a bit unsettling, since it isn't completely clear how the car makes its decisions. Information from the vehicle's sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you'd expect from a human driver. But what if one day it did something unexpected: crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can't ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
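What the Nvidia researchers did is essentially behavioral cloning: train a network to map raw camera frames to the steering commands a human driver produced in the same situations. The sketch below is a minimal, hypothetical version of that idea in PyTorch; the layer sizes are loosely inspired by published end-to-end driving work, and the training data here is random placeholder tensors, not Nvidia's actual pipeline.

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Tiny end-to-end model: camera frame in, steering angle out (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 3 * 20, 100), nn.ReLU(),  # matches a 66x200 input frame
            nn.Linear(100, 1),                        # predicted steering angle
        )

    def forward(self, frame):
        return self.head(self.features(frame))

model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One behavioral-cloning step: regress the model's output onto recorded human steering angles.
frames = torch.randn(8, 3, 66, 200)   # placeholder camera frames
human_angles = torch.randn(8, 1)      # placeholder human steering angles

loss = loss_fn(model(frames), human_angles)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```

The unsettling part described above follows directly from this setup: nothing in the training objective asks the network to explain its steering decisions, only to imitate them.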

Such experiments and research may seem obscure at the moment, or may even be dismissed by some people, but what if an entire fleet of vehicles started acting on what it had learned and ignoring the commands of a human?

Enter OpenAI: discovering and enacting the path to safe artificial general intelligence.

OpenAI is a non-profit artificial intelligence (AI) research company, associated with business magnate Elon Musk, that aims to carefully promote and develop friendly AI in such a way as to benefit humanity as a whole.

Hopefully, OpenAI will help us have friendly versions of AI.

 

deepdream1

The artist Adam Ferriss created this image, and the one below, using Google Deep Dream, a program that adjusts an image to stimulate the pattern recognition capabilities of a deep neural network. The pictures were produced using a mid-level layer of the neural network.

This image sure seems kind of spooky. But who knows, there might be some hidden sarcasm in the AI that we are yet to discover.

Meanwhile, you can give it a try at https://deepdreamgenerator.com/.
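Under the hood, Deep Dream is gradient ascent on the image itself: pick a layer of a pretrained network and repeatedly nudge the pixels so that layer's activations grow stronger, exaggerating whatever patterns the layer already responds to. Here is a rough PyTorch/torchvision sketch of that loop; the input file name and the choice of layer slice are assumptions, not the exact recipe Ferriss or Google used.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Keep only the convolutional layers up to a mid-level block of a pretrained VGG16.
layers = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()

img = T.ToTensor()(Image.open("input.jpg").convert("RGB").resize((400, 400))).unsqueeze(0)
img.requires_grad_(True)

for step in range(50):
    activations = layers(img)
    # Gradient *ascent*: push the pixels so the chosen layer's activations get stronger.
    activations.norm().backward()
    with torch.no_grad():
        img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.clamp_(0, 1)

T.ToPILImage()(img.detach().squeeze(0)).save("dreamed.jpg")
```

Lower layers tend to hallucinate textures and edges, while higher layers produce the eyes and dog faces Deep Dream is famous for, which is why the caption above notes that a mid-level layer was used.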

 

 

Sources (MitTechReview, OpenAI, Wiki)

</ End Blog>

Artificial Intelligence-ready ARM CPUs (DynamIQ)

ARM processors are ubiquitous, and many of the tech gadgets we use are powered by them. Now the company is showing off its plans for the future with DynamIQ. Aimed squarely at the artificial intelligence and machine learning systems we're expecting to see in cars, phones, gaming consoles and everything else, it's what the company claims is an evolution of its existing big.LITTLE technology.

arm.jpg

Here’s a high-level look at some of the new features, capabilities and benefits DynamIQ will bring to new Cortex-A processors later this year:

  • New dedicated processor instructions for ML and AI: Cortex-A processors designed for DynamIQ technology can be optimized to deliver up to a 50x boost in AI performance over the next 3-5 years relative to today's Cortex-A73-based systems, plus up to a 10x faster response between the CPU and specialized accelerator hardware on the SoC, which can unleash substantially better combined performance.

arm-dynamiq2.jpg

  • Increased multi-core flexibility: SoC designers can scale up to eight cores in a single cluster, and each core can have different performance and power characteristics. These advanced capabilities enable faster responsiveness to ML and AI applications. A redesigned memory subsystem enables both faster data access and enhanced power management.
  • More performance within restricted thermal budgets: Efficient and much faster switching of software tasks to match the right-sized processor for optimal performance and power is further enhanced through independent frequency control of individual processors.
  • Safer autonomous systems: DynamIQ brings greater responsiveness for ADAS solutions and increased safety capabilities, which will enable partners to build ASIL-D-compliant systems for safe operation under failure conditions.

 

 

(source: ARM community, Engadget)

MIT lab’s smart boots could keep astronauts on their feet

If you’ve ever worn a spacesuit during a moonwalk or EVA, and I know a lot of you have, you were probably frustrated by how difficult it was to move around — both with the restrictions of the suit itself and the limitations on what you can see and feel. Researchers at MIT’s Man-Vehicle Lab want to make things easier with boots and other wearables that give the user haptic feedback warning them of obstacles they might not see.

It’s a serious consideration when you’re on a mission where every second counts; we can all have a laugh now seeing footage of astronauts biffing it into moon dust, but if that uses up valuable breathable air or takes place during a time-sensitive operation like a rescue, it’s not so funny anymore.

hboots.jpg

 

That’s why Professor Leia Stirling and grad students at MIT are looking into alternative ways to keep astronauts informed of what’s around them so they can not just avoid stumbles, but make informed decisions.

“There’s a lot of different questions that we’re asking within the lab to help with different decisions that need to be made,” Stirling told TechCrunch. “This particular project looked at how can we potentially help someone navigate their environment and avoid obstacles at the same time?”

The solution is boots with built-in range-finding sensors that use haptic feedback (vibrations) to tell the user how close they are to an unseen moon rock or antenna. They start slow and speed up as the obstacle gets closer, a bit like the proximity alert some cars have when you’re backing into a parking spot.
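The control idea is simple to express in code: map the range-finder reading to a vibration pulse rate that rises as the obstacle gets closer. The numbers below are made up for illustration; the MIT prototype's actual tuning isn't described here.

```python
def pulse_interval(distance_m, max_range_m=3.0, fastest_s=0.1, slowest_s=1.0):
    """Return seconds between haptic pulses for a given obstacle distance.

    Obstacles at or beyond max_range_m produce no vibration; closer obstacles
    produce faster pulses, like a car's parking proximity alert.
    (Illustrative mapping only, not the MIT lab's actual parameters.)
    """
    if distance_m >= max_range_m:
        return None  # nothing nearby: stay silent
    fraction = max(distance_m, 0.0) / max_range_m  # 0.0 = touching, 1.0 = at max range
    return fastest_s + fraction * (slowest_s - fastest_s)

for d in (3.5, 2.0, 1.0, 0.3):
    print(f"{d:.1f} m -> {pulse_interval(d)}")
```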

hboots3.jpg

“The Man Vehicle Lab is really interested in human-machine interactions,” Stirling said. “How do we enable the appropriate trust in those systems so that the person can use the information and make the right decision?”

The boots, currently just prototypes, aren’t quite ready to launch into space, but they have been tested by members of the Mars Desert Research Society, which maintains a facility in the Mars-like conditions of the Utah desert.

 

(via TechCrunch)

Apple’s new offices dedicated to AI and ML opening in Seattle

Apple will expand its presence in downtown Seattle, where it has a growing team working on artificial intelligence and machine learning technologies, according to GeekWire.

The report claims Apple will expand into additional floors in Two Union Square, and this will allow its Turi team to move into the building and provide space for future employees.

The new hires are expected to be working on projects primarily around artificial intelligence and machine learning—both of which could propel some of Apple’s existing software offerings (e.g., Siri) to better compete with the likes of Microsoft’s Cortana and Amazon’s very popular Alexa digital assistant.

It also builds upon Apple’s quiet move last summer to acquire Turi, a Seattle-based machine learning startup, in a deal reported to be worth around $200 million. Apple’s machine learning purchase portfolio also includes AI-related company Perceptio (thought to have been acquired for its image-classification expertise) and British firm VocalIQ, which produced technology that could also help Apple’s AI digital assistant.

Closer to home, Apple announced on Wednesday that its new spaceship-like campus—the brainchild of late CEO and co-founder Steve Jobs—will be named “Apple Park” and is scheduled to open this April. However, Apple said it will likely take more than six months for the more than 12,000 employees who will work there to move in.

Apple will still be retaining its current headquarters at One Infinite Loop, but CEO Tim Cook is planning to move his office to the new state-of-the-art facility as well.

(Originally published on Fortune and MacRumours)

Machine Learning Speeds Up

Cloudera and Intel are jointly speeding up machine learning with the help of Intel’s Math Kernel Library (MKL). Benchmarks demonstrate the combined offering can advance machine learning performance over large data sets in less time and with less hardware. This helps organizations accelerate their investments in next-generation predictive analytics.

Cloudera is the leader in Apache Spark development, training, and services. Apache Spark is advancing the art of machine learning on distributed systems with familiar tools that deliver at impressive scale. By joining forces, Cloudera and Intel are furthering a joint mission of excellence in big data management in the pursuit of better outcomes by making machine learning smarter and easier to implement.

intcloud.jpg

Predictive Maintenance

By combining Spark, the Intel MKL libraries, and Intel’s optimized CPU architecture, machine learning workloads can scale quickly. As machine learning solutions get access to more data, they can deliver better accuracy in predictive maintenance, recommendation engines, proactive health care and monitoring, and risk and fraud detection.
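For the developer, the acceleration is meant to be transparent: you write ordinary Spark ML code, and the optimized native math libraries do the heavy lifting underneath Spark's linear-algebra layer. A minimal PySpark sketch of the kind of workload that benefits (toy data, standard MLlib API; cluster and MKL setup not shown):

```python
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("mkl-accelerated-ml").getOrCreate()

# Toy labeled data; in practice this would be a large distributed DataFrame.
train = spark.createDataFrame([
    (1.0, Vectors.dense([0.0, 1.1, 0.1])),
    (0.0, Vectors.dense([2.0, 1.0, -1.0])),
    (0.0, Vectors.dense([2.0, 1.3, 1.0])),
    (1.0, Vectors.dense([0.0, 1.2, -0.5])),
], ["label", "features"])

# MLlib delegates much of its dense linear algebra to a native BLAS when one is
# available, which is where an MKL-backed setup can speed things up.
model = LogisticRegression(maxIter=10, regParam=0.01).fit(train)
print(model.coefficients)

spark.stop()
```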

“There’s a growing urgency to implement richer machine learning models to explore and solve the most pressing business problems and to impact society in a more meaningful way,” said Amr Awadallah, chief technical officer of Cloudera. “Already among our user base, machine learning is an increasingly common practice. In fact, in a recent adoption survey, over 30% of respondents indicated they are leveraging Spark for machine learning.”

 

(via Technative.io)

The Poker Playing AI

The game of poker involves dealing with imperfect information, which makes it very complex and more like many real-world situations. At the Rivers Casino in Pittsburgh this week, a computer program called Libratus (a Latin word meaning “balanced”) is out to prove that computers can handle this better than any human card player. Libratus was created by Tuomas Sandholm, a professor in the computer science department at CMU, and his graduate student Noam Brown.

mitpoker_0.jpg

The AI is playing against some of the world’s best poker players. Dong Kim is a high-stakes poker player who specializes in no-limit Texas Hold ‘Em. Jason Les and Daniel McAulay, two of the other top players challenging the machine, describe its play in much the same way. “It does a little bit of everything,” Kim says. It doesn’t always play the same type of hand in the same way. It may bluff with a bad hand, or not. It may bet high with a good hand, or not. That means Kim has trouble finding holes in its game, and if he does find a hole, it disappears the next day.

“The bot gets better and better every day. It’s like a tougher version of us,” said Jimmy Chou, one of the four pros battling Libratus. “The first couple of days, we had high hopes. But every time we find a weakness, it learns from us and the weakness disappears the next day.”

Libratus is playing thousands of games of heads-up, or two-player, no-limit Texas hold’em against several expert professional poker players. Now a little more than halfway through the 20-day contest, Libratus is up by almost $800,000 against its human opponents. So a victory, while far from guaranteed, may well be in the cards.

Regardless of the pure ability of the humans and the AI, it seems clear that the pros will be less effective as the tournament goes on. Ten hours of poker a day for 20 days straight against an emotionless computer is exhausting and demoralizing, even for pros like Doug Polk. And while the humans sleep at night, Libratus takes the supercomputer powering its in-game decision making and applies it to refining its overall strategy.

A win for Libratus would be a huge achievement in artificial intelligence. Poker requires reasoning and intelligence that has proven difficult for machines to imitate. It is fundamentally different from checkers, chess, or Go because an opponent’s hand remains hidden from view during play. In games of “imperfect information,” it is enormously complicated to figure out the ideal strategy given every possible approach your opponent may be taking. And no-limit Texas hold’em is especially challenging because an opponent could essentially bet any amount.

“Poker has been one of the hardest games for AI to crack,” says Andrew Ng, chief scientist at Baidu. “There is no single optimal move, but instead an AI player has to randomize its actions so as to make opponents uncertain when it is bluffing.”
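Ng's point about randomization has a precise game-theoretic form: in a zero-sum game with hidden information, the best you can do is a mixed strategy, bluffing with some probability rather than by a fixed rule. Libratus's actual algorithms aren't detailed here, but regret matching, the building block behind the counterfactual regret minimization family widely used in poker research, already shows the effect on a toy bluffing game (the payoff numbers below are invented for illustration):

```python
import numpy as np

# Row player's payoffs in a made-up zero-sum "bluffing" game.
# Rows: bluff / play honestly.  Columns: opponent calls / opponent folds.
A = np.array([[-1.0,  2.0],
              [ 1.0, -1.0]])

def positive_normalize(regret):
    """Turn accumulated regrets into a probability distribution over actions."""
    positive = np.maximum(regret, 0.0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(len(regret), 1.0 / len(regret))

def regret_matching(A, iters=50000):
    row_regret, col_regret = np.zeros(A.shape[0]), np.zeros(A.shape[1])
    row_avg, col_avg = np.zeros(A.shape[0]), np.zeros(A.shape[1])
    for _ in range(iters):
        row_strategy = positive_normalize(row_regret)
        col_strategy = positive_normalize(col_regret)
        row_avg += row_strategy
        col_avg += col_strategy
        # Expected payoff of each pure action against the opponent's current mix.
        row_payoffs = A @ col_strategy
        col_payoffs = -(row_strategy @ A)
        row_regret += row_payoffs - row_strategy @ row_payoffs
        col_regret += col_payoffs - col_strategy @ col_payoffs
    return row_avg / iters, col_avg / iters

row_mix, col_mix = regret_matching(A)
print("row player (bluff, honest):", np.round(row_mix, 2))
print("column player (call, fold):", np.round(col_mix, 2))
```

Run long enough, the averaged strategies converge toward the equilibrium mix; in this toy game the row player ends up bluffing roughly 40% of the time, and any fixed, predictable rule would do worse against an adapting opponent.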

(Sources: MitTechReview, The Verge, Wired)

Razer’s Project Valerie

For those who missed Razer’s CES 2017 showcase, Project Valerie is an impressive gaming laptop with three 4K displays, effectively giving it a portable panoramic display mode.

Each display is 17.3 inches in size and features a 4K resolution, offering a huge viewing space unprecedented for a laptop. When not in use, the extra Project Valerie screens are housed within a chassis. When the laptop is opened, the screens automatically slide out from the lid and extend to the sides of the main display to create a 180-degree viewing experience, without requiring any human input.

Razer-Valerie-_-featured-640x360@2x.jpg

The machine is fuelled by a top-of-the-line Nvidia GeForce GTX 1080 graphics card with 8GB of video memory, accompanied by an Intel Core i7 Skylake CPU and 32GB of system RAM as a foundation. It seems to be all about graphics power, and since Razer has a reputation for building high-end gaming laptops, Project Valerie is certainly no exception.

Since these two extra 17″ 4K displays need some extra space, the unit is somewhat thick in comparison to the paper-thin laptops of today and is not exactly lightweight either. But for what it is, it’s pretty impressive.

For more information, watch the video linked below.

Hyundai’s Exo-Skeleton Suits

Hyundai is known for its reasonably priced, well-built cars, but the Korean automaker, which is also working on electric and hybrid cars, is researching exoskeletons that could give ordinary people superhuman abilities of a sort.

Hyundai has made a line of robotic suits to help paraplegic patients walk and to reduce back injuries in manual laborers. Workers piloting the device can lift objects weighing “hundreds of kilograms,” according to the company. Soldiers can also use it to carry up to 50 kilograms (110 pounds) over long distances. The drawback is that these exosuits can be prohibitively expensive, but Hyundai thinks it can lower the cost of suits that not only give us the ability to lift more, but can also help disabled people walk once again.

hyundai-exo08.jpg


 

exo2

The suit is a juiced up version of the H-LEX “wearable walking assistant” that Hyundai introduced last year. Unlike that lightweight version, which is worn like a suit, the fully mechanized exoskeleton “wears” you.

Hyundai says the project is part of its “Next Mobility” system “that will lead to the free movement of people and things.” In other words, the car manufacturer is positioning the suits as transportation, where other companies, like Panasonic and Daewoo, see them strictly as worker aids. Like Hyundai, DARPA is building an exosuit for soldiers under its “Warrior Web” program. As companies like Ekso Bionics have shown, however, such robotic suits may have the highest potential as rehabilitation aids.

 

 

Sources: (Wired, Engadget)

Quantum computer memories of higher dimensions than a qubit

A quantum computer memory of higher dimensions has been created by scientists from the Institute of Physics and Technology of the Russian Academy of Sciences and MIPT by letting two electrons loose in a system of quantum dots. In their study, published in Scientific Reports, the researchers demonstrate for the first time how quantum walks of several electrons can help implement quantum computation.

For more information: Quantum Computing

walking-electrons

Abstraction – Walking Electrons

“By studying the system with two electrons, we solved the problems faced in the general case of two identical interacting particles. This paves the way toward compact high-level quantum structures,” says Leonid Fedichkin, associate professor at MIPT’s Department of Theoretical Physics.

In a matter of hours, a quantum computer would be able to crack the most popular cryptosystems used by web browsers. As far as more benevolent applications are concerned, a quantum computer would be capable of molecular modeling that accounts for all interactions between the particles involved. This, in turn, would enable the development of highly efficient solar cells and new drugs.

As it turns out, the unstable nature of the connection between qubits remains the major obstacle preventing the use of quantum walks of particles for quantum computation. Unlike their classical analogs, quantum structures are extremely sensitive to external noise. To prevent a system of several qubits from losing the information stored in it, liquid nitrogen (or helium) needs to be used for cooling. A research team led by Prof. Fedichkin demonstrated that a qubit could be physically implemented as a particle “taking a quantum walk” between two extremely small semiconductors known as quantum dots, which are connected by a “quantum tunnel.”

Quantum dots act like potential wells for an electron, so the position of the electron can be used to encode the qubit’s two basis states, 0 and 1.

elec.jpg

The blue and purple dots in the diagrams are the states of the two connected qudits (qutrits and ququarts are shown in (a) and (b) respectively). Each cell in the square diagrams on the right side of each figure (a-d) represents the position of one electron (i = 0, 1, 2, … along the horizontal axis) versus the position of the other electron (j = 0, 1, 2, … along the vertical axis). The cells color-code the probability of finding the two electrons in the corresponding dots with numbers i and j when a measurement of the system is made. Warmer colors denote higher probabilities. Credit: MIPT

If an entangled state is created between several qubits, their individual states can no longer be described separately from one another, and any valid description must refer to the state of the whole system. This means that a system of three qubits has a total of eight basis states and is in a superposition of them: A|000⟩+B|001⟩+C|010⟩+D|100⟩+E|011⟩+F|101⟩+G|110⟩+H|111⟩. By influencing the system, one inevitably affects all of the eight coefficients, whereas influencing a system of regular bits only affects their individual states. By implication, n bits can store n variables, while n qubits can store 2^n variables. Qudits offer an even greater advantage, since n four-level qudits (aka ququarts) can encode 4^n, or 2^n × 2^n, variables. To put this into perspective, 10 ququarts store approximately 100,000 times more information than 10 bits. With greater values of n, the zeros in this number start to pile up very quickly.
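To make the counting concrete, here is a quick back-of-the-envelope check of how the number of amplitudes grows:

```python
# n classical bits hold n values; n qubits span 2**n basis states; n ququarts span 4**n.
for n in (1, 3, 10):
    print(f"n={n:2d}: qubit amplitudes = {2**n:>9,}, ququart amplitudes = {4**n:>12,}")

# For n = 10: 4**10 = 1,048,576 amplitudes, roughly 100,000 times the 10 values of 10 bits.
```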

In this study, Alexey Melnikov and Leonid Fedichkin obtain a system of two qudits implemented as two entangled electrons quantum-walking around a so-called cycle graph. The entanglement of the two electrons is caused by the mutual electrostatic repulsion experienced by like charges. More qudits can be created by connecting quantum dots in a pattern of winding paths and letting more electrons wander through them. The quantum-walk approach to quantum computation is convenient because it is based on a natural process.
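The underlying primitive is easy to simulate for a single, non-interacting walker: a continuous-time quantum walk on a cycle graph evolves the state by U(t) = exp(-iAt), where A is the graph's adjacency matrix. The numpy/scipy sketch below tracks one electron hopping around eight dots; the paper's two interacting electrons, and the entanglement between them, require a larger joint state space than this toy example.

```python
import numpy as np
from scipy.linalg import expm

N = 8                                   # quantum dots arranged in a cycle
A = np.zeros((N, N))
for i in range(N):                      # adjacency matrix of the cycle graph
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0

psi0 = np.zeros(N, dtype=complex)
psi0[0] = 1.0                           # electron starts localized on dot 0

for t in (0.5, 1.0, 2.0):
    psi_t = expm(-1j * A * t) @ psi0    # U(t) = exp(-iAt) applied to the initial state
    probs = np.abs(psi_t) ** 2          # probability of finding the electron on each dot
    print(f"t = {t}: {np.round(probs, 3)}")
```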

So far, scientists have been unable to connect a sufficient number of qubits for the development of a quantum computer. The work of the Russian researchers brings computer science one step closer to a future when quantum computations are commonplace.

(Source: Moscow Institute of Physics and Technology, 3Tags.)

Magic Leap – The Future?

Magic Leap is a US startup founded by Rony Abovitz in 2010. It is working on a head-mounted virtual retinal display that superimposes 3D computer-generated imagery over real-world objects by projecting a digital light field into the user’s eye, and it is attempting to construct a light-field chip using silicon photonics.

Before Magic Leap, a head-mounted display using light fields was already demonstrated by Nvidia in 2013, and the MIT Media Lab has also constructed a 3D display using “compressed light fields.” However, Magic Leap asserts that it achieves better resolution with a new proprietary technique that projects an image directly onto the user’s retina. According to a researcher who has studied the company’s patents, Magic Leap is likely to use stacked silicon waveguides.

magic-leap-lens-system.png

Virtual reality overlaid on the real world in this manner is called mixed reality, or MR. (The goggles are semi-transparent, allowing you to see your actual surroundings.) It is more difficult to achieve than the classic fully immersive virtual reality, or VR, where all you see are synthetic images, and in many ways MR is the more powerful of the two technologies.

Magic Leap is not the only company creating mixed-reality technology, but right now the quality of its virtual visions exceeds all others. Because of this lead, money is pouring into this Florida office park. Google was one of the first to invest. Andreessen Horowitz, Kleiner Perkins, and others followed. In the past year, executives from most major media and tech companies have made the pilgrimage to Magic Leap’s office park to experience for themselves its futuristic synthetic reality.

mleap.jpeg

The video below was shot directly through the Magic Leap technology, without compositing any special effects. It gives an idea of how things look through Magic Leap.

On December 9, 2015, Forbes reported on documents filed in the state of Delaware, indicating a Series C funding round of $827m. This funding round could bring the company’s total funding to $1.4 billion, and its post-money valuation to $3.7 billion.

On February 2, 2016, Financial Times reported that Magic Leap further raised another funding round of close to $800m, valuing the startup at $4.5 billion.

On February 11, 2016, Silicon Angle reported that Magic Leap had joined the Entertainment Software Association.

In April 2016, Magic Leap acquired Israeli cyber security company NorthBit.

Magic Leap has raised $1.4 billion from a list of investors including Google and China’s Alibaba Group.

On June 16, 2016, Magic Leap announced a partnership with Disney’s Lucasfilm and its ILMxLAB R&D unit. The two companies will form a joint research lab at Lucasfilm’s San Francisco campus.