Intel Core i9

Intel recently announced a new family of processors for enthusiasts, the Core X-series, and it’s anchored by the company’s first 18-core CPU, the i9-7980XE.

 

[Image: Intel Core i9 X-series]

Priced at $1,999, the 7980XE is clearly not a chip you’ll see in an average desktop. Instead, it’s more of a statement from Intel. It beats out AMD’s 16-core Threadripper CPU, which was slated to be that company’s most powerful consumer processor for 2017. And it gives Intel yet another way to satisfy the demands of power-hungry users who might want to do things like play games in 4K while broadcasting them in HD over Twitch. And, as if its massive core count wasn’t enough, the i9-7980XE is also the first Intel consumer chip that packs in over a teraflop’s worth of computing power.

 

[Image: Intel Core i9]

 

If 18 cores are overkill for you, Intel also has other Core i9 chips in 10-, 12-, 14- and 16-core variants. Perhaps the best news for hardware geeks: the 10-core Core i9-7900X will retail for $999, a significant discount from last year’s version.

All of the i9 chips feature base clock speeds of 3.3GHz, reaching up to 4.3GHz on two cores with Turbo Boost 2.0 and 4.5GHz with Turbo Boost Max 3.0, an upgraded version of Intel’s boosting technology. The company points out that while the additional cores on the Core X models will improve multitasking performance, technologies like Turbo Boost Max 3.0 ensure that each individual core can also achieve improved performance. (Intel claims that the Core X series delivers 10 percent faster multithreaded performance than the previous generation and 15 percent faster single-threaded performance.)

 

 

(via Engadget, The Verge)

 

Google’s and Nvidia’s AI Chips

Google

Google will soon launch a cloud computing service that provides exclusive access to a new kind of artificial intelligence chip designed by its own engineers. CEO Sundar Pichai revealed the new chip and service this morning in Silicon Valley during his keynote at Google I/O, the company’s annual developer conference.

[Image: Google’s new AI chip]

This new processor is a unique creation designed to both train and execute deep neural networks—machine learning systems behind the rapid evolution of everything from image and speech recognition to automated translation to robotics. Google says it will not sell the chip directly to others. Instead, through its new cloud service, set to arrive sometime before the end of the year, any business or developer can build and operate software via the internet that taps into hundreds and perhaps thousands of these processors, all packed into Google data centers.

According to Jeff Dean, the Google senior fellow who leads the Google Brain team, Google’s new “TPU device,” which spans four chips, can handle 180 trillion floating point operations per second, or 180 teraflops, and the company uses a new form of computer networking to connect several of these chips together, creating a “TPU pod” that provides about 11,500 teraflops of computing power. In the past, Dean said, the company’s machine translation model took about a day to train on 32 state-of-the-art GPU boards. Now, it can train in about six hours using only a portion of a pod.
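For a rough sense of scale, here is a back-of-the-envelope check of the figures Dean quoted; the device count per pod is inferred from those numbers rather than stated here.

```python
# Back-of-the-envelope check of the TPU figures quoted above.
tflops_per_device = 180     # one four-chip "TPU device"
tflops_per_pod = 11_500     # one "TPU pod", as quoted

devices_per_pod = tflops_per_pod / tflops_per_device
print(f"~{devices_per_pod:.0f} TPU devices per pod")  # ~64
```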

Nvidia

Nvidia has released a new state-of-the-art chip that pushes the limits of machine learning: the Tesla P100 GPU. It can perform deep learning neural network tasks 12 times faster than the company’s previous top-end system (the Titan X). The P100 was a huge commitment for Nvidia, costing over $2 billion in research and development, and it sports a whopping 150 billion transistors on a single chip, making the P100 the world’s largest chip, Nvidia claims. In addition to machine learning, the P100 will work for all sorts of high-performance computing tasks — Nvidia just wants you to know it’s really good at machine learning.

[Image: Nvidia DGX-1]

To top off the P100’s introduction, Nvidia has packed eight of them into a crazy-powerful $129,000 supercomputer called the DGX-1. This show horse of a machine comes ready to run, with deep-learning software preinstalled. It’s shipping first to AI researchers at MIT, Stanford, UC Berkeley, and other institutions in June. On stage, Nvidia CEO Jen-Hsun Huang called the DGX-1 “one beast of a machine.”

The competition between these upcoming AI chips and Nvidia’s offerings points to an emerging need for ever more processing power in deep learning computing. A few years ago, GPUs took off because they cut the training time for a deep learning network from months to days. Neural networks, which had been around since at least the 1950s, suddenly had real potential with GPU power behind them. But as more companies try to integrate deep learning into their products and services, they’re only going to need faster and faster chips.

 

(via Wired, Forbes, Nvidia, The Verge)

 

Neuralink

Elon Musk believes that the rise of an AI that overpowers humans could prove catastrophic. He is attempting to get ahead of that risk with the launch of his latest venture, brain-computer interface company Neuralink. Wait But Why has published a detailed explainer on Neuralink (highly recommended reading).

[Image: Neuralink]

Musk seems to want to achieve a communications leap equivalent in impact to humanity’s invention of language. Language proved an incredibly efficient way to convey thoughts socially, but Neuralink aims to increase that efficiency by several orders of magnitude. Person to person, Musk’s vision would enable direct, “uncompressed” communication of concepts between people, instead of having to “compress” your original thought by translating it into language and then having the other party “decompress” the package linguistically, a process that is always lossy.

Another thing in favor of Musk’s proposal is that symbiosis between brains and computers isn’t fiction. Remember that person who types with brain signals? Or the paralyzed people who move robot arms? These systems work better when the computer completes people’s thoughts. The subject only needs to type “bulls …” and the computer does the rest. Similarly, a robotic arm has its own smarts. It knows how to move; you just have to tell it to. So even partial signals coming out of the brain can be transformed into more complete ones. Musk’s idea is that our brains could integrate with an AI in ways that we wouldn’t even notice: imagine a sort of thought-completion tool.
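As a loose analogy for that kind of completion, here is a toy Python sketch: a partial, low-bandwidth signal is expanded into a full intent by a predictive model. The vocabulary and matching rule are entirely hypothetical and have nothing to do with any real BCI decoder.

```python
# Toy illustration: a partial signal ("bulls") is expanded into a full intent.
# Real decoders use statistical language models; this one just takes the first
# matching word from a made-up vocabulary.
VOCABULARY = ["bullseye", "bulletin", "ballpark", "robot arm"]

def complete(partial: str, vocabulary=VOCABULARY) -> str:
    matches = [word for word in vocabulary if word.startswith(partial)]
    return matches[0] if matches else partial  # fall back to the raw signal

print(complete("bulls"))  # -> "bullseye"
```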

So it’s not crazy to believe there could be some very interesting brain-computer interfaces in the future. But that future is not as close at hand as Musk would have you believe. One reason is that opening a person’s skull is not a trivial procedure. Another is that technology for safely recording from more than a hundred neurons at once—neural dust, neural lace, optical arrays that thread through your blood vessels—remains mostly at the blueprint stage.

 

(via Wired, TechCrunch)

Artificial Intelligence-ready ARM CPUs (DynamIQ)

ARM processors are ubiquitous, powering many of the tech gadgets we use, and the company is now showing off its plans for the future with DynamIQ. Aimed squarely at the artificial intelligence and machine learning systems we expect to see in cars, phones, gaming consoles and everything else, it’s what the company claims is an evolution of its existing big.LITTLE technology.

[Image: ARM]

Here’s a high-level look at some of the new features, capabilities and benefits DynamIQ will bring to new Cortex-A processors later this year:

  • New dedicated processor instructions for ML and AI: Cortex-A processors designed for DynamIQ technology can be optimized to deliver up to a 50x boost in AI performance over the next 3-5 years relative to today’s Cortex-A73-based systems, and up to 10x faster response between the CPU and specialized accelerator hardware on the SoC, unleashing substantially better combined performance.

[Image: ARM DynamIQ]

  • Increased multi-core flexibility: SoC designers can scale up to eight cores in a single cluster, and each core can have different performance and power characteristics. These advanced capabilities enable faster responsiveness to ML and AI applications (a minimal scheduling sketch follows this list). A redesigned memory subsystem enables both faster data access and enhanced power management.
  • More performance within restricted thermal budgets: Efficient and much faster switching of software tasks to match the right-sized processor for optimal performance and power is further enhanced through independent frequency control of individual processors.
  • Safer autonomous systems: DynamIQ brings greater levels of responsiveness for ADAS solutions and increased safety capabilities, which will enable partners to build ASIL-D compliant systems for safe operation under failure conditions.
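The bullet points above describe hardware capabilities rather than a software API. As a rough illustration of the “right-sized processor” idea on today’s Linux systems, here is a minimal sketch; the core numbering is purely an assumption, since real DynamIQ topologies differ per SoC.

```python
# Minimal sketch (Linux, standard library only): steering a latency-sensitive
# ML inference job onto assumed "big" cores and background work onto assumed
# "LITTLE" cores. Check /sys/devices/system/cpu/ on a real device for the
# actual layout.
import os

BIG_CORES = {4, 5, 6, 7}      # assumption: high-performance cores
LITTLE_CORES = {0, 1, 2, 3}   # assumption: efficiency cores

def run_on(cores, job, *args):
    """Restrict this process to the given cores, then run the job."""
    os.sched_setaffinity(0, cores)   # 0 = the current process
    return job(*args)

# Usage (with your own functions): keep inference snappy, push housekeeping
# to the small cores.
# run_on(BIG_CORES, run_inference, model, frame)
# run_on(LITTLE_CORES, upload_logs)
```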

 

 

(source: ARM community, Engadget)

Machine Learning Speeds Up

Cloudera and Intel are jointly speeding up machine learning with the help of Intel’s Math Kernel Library (MKL). Benchmarks demonstrate that the combined offering can deliver machine learning performance over large data sets in less time and with less hardware, helping organizations accelerate their investments in next-generation predictive analytics.

Cloudera is the leader in Apache Spark development, training, and services. Apache Spark is advancing the art of machine learning on distributed systems with familiar tools that deliver at impressive scale. By joining forces, Cloudera and Intel are furthering a joint mission of excellence in big data management in the pursuit of better outcomes by making machine learning smarter and easier to implement.

[Image: Intel and Cloudera]

Predictive Maintenance

By combining Spark, the Intel MKL libraries, and Intel’s optimized CPU architecture, machine learning workloads can scale quickly. As machine learning solutions get access to more data, they can deliver better accuracy in predictive maintenance, recommendation engines, proactive health care and monitoring, and risk and fraud detection.
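As a concrete (if simplified) picture of what such a workload looks like, here is a minimal PySpark MLlib sketch for a hypothetical predictive-maintenance model. The file path, column names and label are invented for illustration, and MKL acceleration only applies if Spark is configured to use an MKL-backed native BLAS.

```python
# Minimal PySpark sketch: train a failure-prediction model on sensor data.
# Paths and column names ("sensor_1", "failed", ...) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("predictive-maintenance").getOrCreate()

df = spark.read.parquet("hdfs:///data/machine_sensors.parquet")  # placeholder path

# MLlib expects the features packed into a single vector column.
assembler = VectorAssembler(
    inputCols=["sensor_1", "sensor_2", "sensor_3"], outputCol="features")
train = assembler.transform(df)

# Spark distributes the fit across the cluster; an MKL-backed native BLAS
# (when configured) speeds up the underlying linear algebra on each executor.
model = LogisticRegression(featuresCol="features", labelCol="failed").fit(train)
print("Training AUC:", model.summary.areaUnderROC)
```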

“There’s a growing urgency to implement richer machine learning models to explore and solve the most pressing business problems and to impact society in a more meaningful way,” said Amr Awadallah, chief technology officer of Cloudera. “Already among our user base, machine learning is an increasingly common practice. In fact, in a recent adoption survey over 30% of respondents indicated they are leveraging Spark for machine learning.”

 

(via TechNative.io)

Quantum computer memories of higher dimensions than a qubit

A higher-dimensional quantum computer memory has been created by scientists from the Institute of Physics and Technology of the Russian Academy of Sciences and MIPT by letting two electrons loose in a system of quantum dots. In their study, published in Scientific Reports, the researchers demonstrate for the first time how quantum walks of several electrons can be used to implement quantum computation.

For more information: Quantum Computing

[Image: Abstraction – Walking Electrons]

“By studying the system with two electrons, we solved the problems faced in the general case of two identical interacting particles. This paves the way toward compact high-level quantum structures,” says Leonid Fedichkin, associate professor at MIPT’s Department of Theoretical Physics.

In a matter of hours, a quantum computer would be able to break the most popular cryptosystems used by web browsers. As far as more benevolent applications are concerned, a quantum computer would be capable of molecular modeling that accounts for all interactions between the particles involved. This, in turn, would enable the development of highly efficient solar cells and new drugs.

As it turns out, the unstable nature of the connection between qubits remains the major obstacle preventing the use of quantum walks of particles for quantum computation. Unlike their classical analogs, quantum structures are extremely sensitive to external noise. To prevent a system of several qubits from losing the information stored in it, liquid nitrogen (or helium) needs to be used for cooling. A research team led by Prof. Fedichkin demonstrated that a qubit could be physically implemented as a particle “taking a quantum walk” between two extremely small semiconductors known as quantum dots, which are connected by a “quantum tunnel.”

Quantum dots act like potential wells for an electron, so the position of the electron can be used to encode the two basis states of a qubit, 0 or 1.

[Image: two-qudit probability diagrams; see caption below]

The blue and purple dots in the diagrams are the states of the two connected qudits (qutrits and ququarts are shown in (a) and (b) respectively). Each cell in the square diagrams on the right side of each figure (a-d) represents the position of one electron (i = 0, 1, 2, … along the horizontal axis) versus the position of the other electron (j = 0, 1, 2, … along the vertical axis). The cells color-code the probability of finding the two electrons in the corresponding dots with numbers i and j when a measurement of the system is made. Warmer colors denote higher probabilities. Credit: MIPT

If an entangled state is created between several qubits, their individual states can no longer be described separately from one another, and any valid description must refer to the state of the whole system. This means that a system of three qubits has a total of eight basis states and is in a superposition of them: A|000⟩+B|001⟩+C|010⟩+D|100⟩+E|011⟩+F|101⟩+G|110⟩+H|111⟩. By influencing the system, one inevitably affects all of the eight coefficients, whereas influencing a system of regular bits only affects their individual states. By implication, n bits can store n variables, while n qubits can store 2^n variables. Qudits offer an even greater advantage, since n four-level qudits (aka ququarts) can encode 4^n, or 2^(2n), variables. To put this into perspective, 10 ququarts store approximately 100,000 times more information than 10 bits. With greater values of n, the zeros in this number start to pile up very quickly.
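The counting above is easy to verify with a quick Python check of the state-space sizes for n = 10:

```python
# State-space sizes implied by the paragraph above, for n = 10.
n = 10
bit_values = n            # n classical bits hold n binary values
qubit_amps = 2 ** n       # n qubits -> 2^n amplitudes   (1,024)
ququart_amps = 4 ** n     # n ququarts -> 4^n amplitudes (1,048,576)
print(ququart_amps / bit_values)   # ~105,000, i.e. roughly 100,000x more than 10 bits
```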

In this study, Alexey Melnikov and Leonid Fedichkin obtained a system of two qudits, implemented as two entangled electrons quantum-walking around a so-called cycle graph. The entanglement of the two electrons is caused by the mutual electrostatic repulsion experienced by like charges. More qudits can be created by connecting additional quantum dots in a pattern of winding paths and letting more electrons wander through them. The quantum-walk approach to quantum computation is convenient because it is based on a natural process.
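For readers who want to experiment, the basic ingredient, a continuous-time quantum walk on a cycle graph, is easy to simulate numerically. The sketch below follows a single particle rather than the two interacting electrons of the actual study, and the Hamiltonian and time units are arbitrary choices.

```python
# Continuous-time quantum walk of one particle on a cycle of N quantum dots.
# Single-particle toy model, not the two-electron system from the paper.
import numpy as np
from scipy.linalg import expm

N = 8                                    # number of dots on the cycle
H = np.zeros((N, N))
for i in range(N):                       # hopping between neighbouring dots
    H[i, (i + 1) % N] = H[(i + 1) % N, i] = 1.0

psi0 = np.zeros(N, dtype=complex)
psi0[0] = 1.0                            # particle starts localized on dot 0

t = 2.0                                  # evolution time (arbitrary units)
psi_t = expm(-1j * H * t) @ psi0         # |psi(t)> = exp(-iHt) |psi(0)>

prob = np.abs(psi_t) ** 2                # probability of finding it on each dot
print(np.round(prob, 3), "sum =", round(prob.sum(), 6))
```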

So far, scientists have been unable to connect a sufficient number of qubits for the development of a quantum computer. The work of the Russian researchers brings computer science one step closer to a future when quantum computations are commonplace.

(Source: Moscow Institute of Physics and Technology, 3Tags.)

Magic Leap – The Future?

Magic Leap is a US startup, founded by Rony Abovitz in 2010, that is working on a head-mounted virtual retinal display which superimposes 3D computer-generated imagery over real-world objects by projecting a digital light field into the user’s eye. It is attempting to construct a light-field chip using silicon photonics.

Before Magic Leap, a head-mounted display using light fields had already been demonstrated by Nvidia in 2013, and the MIT Media Lab has also constructed a 3D display using “compressed light fields”; however, Magic Leap asserts that it achieves better resolution with a new proprietary technique that projects an image directly onto the user’s retina. According to a researcher who has studied the company’s patents, Magic Leap is likely to use stacked silicon waveguides.

[Image: Magic Leap lens system]

Virtual reality overlaid on the real world in this manner is called mixed reality, or MR. (The goggles are semi-transparent, allowing you to see your actual surroundings.) It is more difficult to achieve than the classic fully immersive virtual reality, or VR, where all you see are synthetic images, and in many ways MR is the more powerful of the two technologies.

Magic Leap is not the only company creating mixed-reality technology, but right now the quality of its virtual visions exceeds all others. Because of this lead, money is pouring into this Florida office park. Google was one of the first to invest. Andreessen Horowitz, Kleiner Perkins, and others followed. In the past year, executives from most major media and tech companies have made the pilgrimage to Magic Leap’s office park to experience for themselves its futuristic synthetic reality.

[Image: Magic Leap]

The video below was shot directly through the Magic Leap technology, without the use of special effects or compositing, and gives an idea of how things look through the Magic Leap.

On December 9, 2015, Forbes reported on documents filed in the state of Delaware, indicating a Series C funding round of $827m. This funding round could bring the company’s total funding to $1.4 billion, and its post-money valuation to $3.7 billion.

On February 2, 2016, the Financial Times reported that Magic Leap had raised a further funding round of close to $800m, valuing the startup at $4.5 billion.

On February 11, 2016, Silicon Angle reported that Magic Leap had joined the Entertainment Software Association.

In April 2016, Magic Leap acquired Israeli cyber security company NorthBit.

Magic Leap has raised $1.4 billion from a list of investors including Google and China’s Alibaba Group.

On June 16, 2016, Magic Leap announced a partnership with Disney’s Lucasfilm and its ILMxLAB R&D unit. The two companies will form a joint research lab at Lucasfilm’s San Francisco campus.