MIT’s new prototype ‘3D’ Chip

Every day a plethora of data is generated, while the computing power needed to turn that data into useful information is stalling. One of the fundamental problems is the processor-memory bottleneck, also known as the performance gap. Various methods, such as caches and assorted software techniques, have been used to mitigate the problem. But there’s another way: build the CPU directly into a 3D memory structure, connect the two directly without any motherboard traces, and compute from within the RAM itself.


A prototype chip built by researchers at Stanford and MIT could solve the problem by sandwiching the memory, processor and even sensors into a single unit. While current chips are made of silicon, this prototype is built from carbon nanotubes and resistive RAM.


The researchers have developed a new 3D chip fabrication method that uses carbon nanotubes and resistive random-access memory (RRAM) cells together to create a combined nanoelectronic processor design that supports a complex 3D architecture – whereas traditional silicon-based chip fabrication works with 2D structures only. The team claims this makes for “the most complex nanoelectronic system ever made with emerging nanotechnologies,” creating a 3D computer architecture. Using carbon nanotubes makes the whole thing possible, since the higher temperatures required to make a silicon CPU would damage the sensitive RRAM cells.

The 3D design is possible because these carbon nanotube circuits and RRAM memory components can be made using temperatures below 200 degrees Celsius, which is far, far less than the 1,000 degree temps needed to fabricate today’s 2D silicon transistors. Lower temperatures mean you can build an upper layer on top of another without damaging the one or ones below.

One expert cited by MIT said that this could be the answer to continuing the exponential scaling of computing power in keeping with Moore’s Law, as traditional chip-making methods start to run up against physical limits. The work is still in its initial phases, and it will likely be many years before these chips see real-world use.

An artificial iris that responds to light like the human eye

An artificial iris manufactured from an intelligent, light-controlled polymer material can react to incoming light in the same way as the human eye. The iris was developed by the Smart Photonic Materials research group at Tampere University of Technology (TUT), and the work was recently published in the journal Advanced Materials.


The human iris does its job of adjusting your pupil size to meter the amount of light hitting the retina behind without you having to actively think about it. And while a camera’s aperture is designed to work the same way as a biological iris, it’s anything but automatic. Even point-and-shoots rely on complicated control mechanisms to keep your shots from becoming overexposed. But a new “artificial iris” developed at the Tampere University of Technology in Finland can autonomously adjust itself based on how bright the scene is.

Scientists from the Smart Photonic Materials research group developed the iris using a light-sensitive liquid crystal elastomer. The team also employed photoalignment techniques, which accurately position the liquid crystal molecules in a predetermined direction within a tolerance of a few picometers. This is similar to the technique originally used in LCD TVs to improve viewing angles and contrast, which has since been adapted for smartphone screens. “The artificial iris looks a little bit like a contact lens,” TUT Associate Professor Arri Priimägi said. “Its center opens and closes according to the amount of light that hits it.”

The team hopes to eventually develop this technology into an implantable biomedical device. Before that can happen, however, the TUT researchers need to improve the iris’ sensitivity so that it can adapt to smaller changes in brightness, and they need to get it to work in an aqueous environment. This new iris is therefore still a long way from being ready.

What is Blockchain?

Blockchains in Bitcoin.

 

A blockchain is a public ledger of all Bitcoin transactions that have ever been executed. It is constantly growing as ‘completed’ blocks are added to it with a new set of recordings. The blocks are added to the blockchain in a linear, chronological order. Each node (computer connected to the Bitcoin network using a client that performs the task of validating and relaying transactions) gets a copy of the blockchain, which gets downloaded automatically upon joining the Bitcoin network. The blockchain has complete information about the addresses and their balances right from the genesis block to the most recently completed block.


Blockchain formation.

The blockchain is seen as the main technological innovation of Bitcoin, since it stands as proof of all the transactions on the network. A block is the ‘current’ part of a blockchain, which records some or all of the recent transactions; once completed, it goes into the blockchain as a permanent part of the database. Each time a block is completed, a new one is generated. There are countless such blocks in the blockchain. So are the blocks randomly placed in the blockchain? No, they are linked to each other (like a chain) in linear, chronological order, with every block containing a hash of the previous block.

To use conventional banking as an analogy, the blockchain is like a full history of banking transactions. Bitcoin transactions are entered chronologically in a blockchain just the way bank transactions are. Blocks, meanwhile, are like individual bank statements.

Based on the Bitcoin protocol, the blockchain database is shared by all nodes participating in the system. A full copy of the blockchain has records of every Bitcoin transaction ever executed. It can thus provide insight into facts like how much value belonged to a particular address at any point in the past (see the toy sketch below).

In the above representation, the main chain (black) consists of the longest series of blocks from the genesis block (green) to the current block. Orphan blocks (purple) exist outside of the main chain.
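As a toy illustration of that earlier point about reconstructing what an address held at any moment (Bitcoin actually tracks unspent transaction outputs rather than account balances, and the field names below are made up for the sketch), a node holding a full copy of the chain could simply replay the recorded transactions up to a given block height:

```python
# Toy sketch only: real Bitcoin uses UTXOs, not running account balances,
# and blocks/transactions are binary structures, not dicts.
def balance_at(chain, address, height):
    balance = 0
    for block in chain[:height + 1]:        # from the genesis block up to `height`
        for tx in block["transactions"]:
            if tx["to"] == address:
                balance += tx["amount"]
            if tx["from"] == address:
                balance -= tx["amount"]
    return balance

# Example with two hand-written blocks:
chain = [
    {"transactions": [{"from": "coinbase", "to": "alice", "amount": 50}]},
    {"transactions": [{"from": "alice", "to": "bob", "amount": 20}]},
]
print(balance_at(chain, "alice", 0))   # 50
print(balance_at(chain, "alice", 1))   # 30
```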

 

 


Working of Bitcoin and Blockchain

 

A blockchain – originally block chain – is a distributed database that is used to maintain a continuously growing list of records, called blocks. Each block contains a timestamp and a link to a previous block. A blockchain is typically managed by a peer-to-peer network collectively adhering to a protocol for validating new blocks. By design, blockchains are inherently resistant to modification of the data: once recorded, the data in any given block cannot be altered retroactively without the alteration of all subsequent blocks and the collusion of the network. Functionally, a blockchain can serve as “an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way. The ledger itself can also be programmed to trigger transactions automatically.”
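To make the “link to a previous block” and the tamper resistance concrete, here is a minimal Python sketch (my own simplification; real Bitcoin blocks also carry Merkle roots, difficulty targets and proof-of-work, none of which are modeled here). Because every block stores the hash of the block before it, editing any historical block changes its hash and invalidates every block that follows:

```python
import hashlib
import json
import time

def block_hash(block):
    # Deterministically hash a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash):
    return {"timestamp": time.time(),
            "transactions": transactions,
            "prev_hash": prev_hash}

# Build a tiny three-block chain, each block pointing at its predecessor's hash.
genesis = make_block(["genesis"], prev_hash="0" * 64)
block1  = make_block(["alice -> bob: 5 BTC"], prev_hash=block_hash(genesis))
block2  = make_block(["bob -> carol: 2 BTC"], prev_hash=block_hash(block1))
chain   = [genesis, block1, block2]

def is_valid(chain):
    # Every block must reference the hash of the block immediately before it.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(is_valid(chain))                                  # True
chain[1]["transactions"] = ["alice -> bob: 500 BTC"]    # tamper with history
print(is_valid(chain))                                  # False: block2's link is now broken
```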

Intel Core i9

Intel recently announced a new family of processors for enthusiasts, the Core X-series, and it’s anchored by the company’s first 18-core CPU, the i9-7980XE.

 


Priced at $1,999, the 7980XE is clearly not a chip you’ll see in an average desktop. Instead, it’s more of a statement from Intel. It beats out AMD’s 16-core Threadripper CPU, which was slated to be that company’s most powerful consumer processor for 2017. And it gives Intel yet another way to satisfy the demands of power-hungry users who might want to do things like play games in 4K while broadcasting them in HD over Twitch. And, as if its massive core count wasn’t enough, the i9-7980XE is also the first Intel consumer chip that packs in over a teraflop’s worth of computing power.
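As a rough sanity check of that teraflop figure (a back-of-the-envelope estimate of my own, not Intel’s methodology; the per-core FMA-unit count and the sustained AVX-512 clock are assumptions), peak double-precision throughput works out to roughly cores × FMA units × SIMD lanes × 2 FLOPs × clock:

```python
# Back-of-the-envelope peak FP64 estimate for an 18-core Skylake-X part.
# Assumptions: two AVX-512 FMA units per core, 8 double-precision lanes per
# unit, 2 FLOPs per fused multiply-add, and a sustained all-core AVX-512
# clock of about 2.0 GHz (the real clock depends on workload and cooling).
cores        = 18
fma_units    = 2
lanes        = 8
flops_per_op = 2
clock_ghz    = 2.0

peak_gflops = cores * fma_units * lanes * flops_per_op * clock_ghz
print(f"~{peak_gflops / 1000:.2f} TFLOPS FP64")   # roughly 1.15 TFLOPS
```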

 


 

If 18 cores are overkill for you, Intel also has other Core i9 Extreme Edition chips in 10-, 12-, 14- and 16-core variants. Perhaps the best news for hardware geeks: the 10-core Core i9-7900X will retail for $999, a significant discount from last year’s version.

All of the i9 chips feature base clock speeds of 3.3GHz, reaching up to 4.3GHz dual-core speeds with Turbo Boost 2.0 and 4.5GHz with Turbo Boost Max 3.0, an upgraded version of Turbo Boost. The company points out that while the additional cores on the Core X models will improve multitasking performance, technologies like Turbo Boost Max 3.0 ensure that each core is also able to achieve improved performance. (Intel claims that the Core X series reaches 10 percent faster multithreaded performance over the previous generation, and 15 percent faster single-threaded performance.)

 

 

(via Engadget, The Verge)

 

A peek into NVIDIA’s self-driving AI

In a step toward making AI more accountable, Nvidia has developed a neural network for autonomous driving that highlights what it’s focusing on.

This is sort of a follow-up to my previous blog post, Is AI Mysterious? Some of the most powerful machine-learning techniques available result in software that is almost completely opaque, even to the engineers who build it.

nvidia_car.jpg

We still don’t know exactly how such AI works. The self-driving system Nvidia built didn’t follow a single instruction given by its creators; instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

It simply matches input from several video cameras to the behavior of a human driver and figures out for itself how it should drive. The only catch is that the system is so complex that it’s difficult to untangle how it actually works.

Nvidia is now working to open the black box by developing a way to visually highlight what the system is paying attention to. Explained in a recently published paper, the neural network architecture developed by Nvidia’s researchers is designed so that it can highlight the areas of a video picture that contribute most strongly to the behavior of the car’s deep neural network. Remarkably, the results show that the network is focusing on the edges of roads, lane markings, and parked cars—just the sort of things that a good human driver would want to pay attention to.


Nvidia’s Convolutional Neural Network architecture for Self-Driving AI.
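For the curious, here is a rough PyTorch sketch of a PilotNet-style network along the lines described in Nvidia’s end-to-end driving papers, plus a simplified version of the feature-map-averaging idea behind their visualization work (the layer sizes, the 66×200 input assumption, and the saliency routine are my approximations, not Nvidia’s actual code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PilotNet(nn.Module):
    """PilotNet-style CNN: 5 conv layers + fully connected head,
    mapping a 66x200 camera frame to a single steering command."""
    def __init__(self):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv2d(3, 24, 5, stride=2),
            nn.Conv2d(24, 36, 5, stride=2),
            nn.Conv2d(36, 48, 5, stride=2),
            nn.Conv2d(48, 64, 3),
            nn.Conv2d(64, 64, 3),
        ])
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),            # predicted steering angle
        )

    def forward(self, x):
        feature_maps = []
        for conv in self.convs:
            x = F.relu(conv(x))
            feature_maps.append(x)       # keep activations for visualization
        return self.head(x), feature_maps

def saliency(feature_maps):
    """Simplified take on the feature-map-averaging (VisualBackProp-like) idea:
    average each conv layer's activations, then walk back toward the input,
    upsampling and multiplying, so regions that excite many layers light up."""
    mask = feature_maps[-1].mean(dim=1, keepdim=True)
    for fmap in reversed(feature_maps[:-1]):
        avg = fmap.mean(dim=1, keepdim=True)
        mask = F.interpolate(mask, size=avg.shape[-2:],
                             mode="bilinear", align_corners=False) * avg
    return mask

# Usage: one dummy 66x200 frame -> steering prediction + coarse attention map.
frame = torch.randn(1, 3, 66, 200)
steering, maps = PilotNet()(frame)
heatmap = saliency(maps)   # upsample to 66x200 and overlay on the frame to inspect it
```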

“What’s revolutionary about this is that we never directly told the network to care about these things,” Urs Muller, Nvidia’s chief architect for self-driving cars, wrote in a blog post.

Fascinatingly enough, this parallels human intelligence: we often can’t describe exactly how we perform certain actions, we just do them.

This certainly highlights the fact that in the near future we might start seeing AI that is just like us, or even superintelligent. I highly recommend reading the book Superintelligence: Paths, Dangers, Strategies by Nick Bostrom.

(Sources: Nvidia, MitTechReview, Nvidia Blog)

Artificial Intelligence-ready ARM CPUs (DynamIQ)

ARM processors are ubiquitous, and many of the tech gadgets we use are powered by them. Now the company is showing off its plans for the future with DynamIQ. Aimed squarely at the artificial intelligence and machine learning systems we’re expecting to see in cars, phones, gaming consoles and everything else, it’s what the company claims is an evolution of the existing big.LITTLE technology.


Here’s a high-level look at some of the new features, capabilities and benefits DynamIQ will bring to new Cortex-A processors later this year:

  • New dedicated processor instructions for ML and AI: Cortex-A processors designed for DynamIQ technology can be optimized to deliver up to a 50x boost in AI performance over the next 3-5 years relative to Cortex-A73-based systems today, and up to 10x faster response between the CPU and specialized accelerator hardware on the SoC, which can unleash substantially better combined performance.


  • Increased multi-core flexibility: SoC designers can scale up to eight cores in a single cluster, and each core can have different performance and power characteristics. These advanced capabilities enable faster responsiveness to ML and AI applications. A redesigned memory subsystem enables both faster data access and enhanced power management.
  • More performance within restricted thermal budgets: Efficient and much faster switching of software tasks to match the right-sized processor for optimal performance and power is further enhanced through independent frequency control of individual processors.
  • Safer autonomous systems: DynamIQ brings greater levels of responsiveness for ADAS solutions and increased safety capabilities, which will enable partners to build ASIL-D compliant systems for safe operation under failure conditions.

 

 

(source: ARM community, Engadget)

Razer’s Project Valerie

For those who missed out on Razer’s CES 2017 showcase, Project Valerie is an impressive gaming laptop with three 4K displays, virtually allowing for a portable panoramic display mode.

Each display is 17.3 inches in size and features a 4K resolution, offering a huge viewing space unprecedented for a laptop. When not in use, the extra Project Valerie screens are housed within a chassis. When the laptop is opened, the screens automatically slide out from the lid and extend to the sides of the main display to create a 180-degree viewing experience, without requiring any human input.


The machine is fuelled by a top-of-the-line Nvidia GeForce GTX 1080 graphics card with 8GB of video memory, accompanied by an Intel Core i7 (Skylake) CPU and 32GB of system RAM as a foundation. It seems to be all about graphics power, and since Razer has a reputation for building high-end gaming laptops, Project Valerie is certainly no exception.

Since these two extra 17″ 4K displays need some extra space, the unit is somewhat thick in comparison to the paper-thin laptops of today and is not exactly lightweight either. But for what it is, it’s pretty impressive.

For more information, watch the video linked below.

Magic Leap – The Future?

Magic Leap is a US startup company that was founded by Rony Abovitz in 2010 and is working on a head-mounted virtual retinal display which superimposes 3D computer-generated imagery over real-world objects by projecting a digital light field into the user’s eye. It is attempting to construct a light-field chip using silicon photonics.

Before Magic Leap, a head-mounted display using light fields was already demonstrated by Nvidia in 2013, and the MIT Media Lab has also constructed a 3D display using “compressed light fields”; however, Magic Leap asserts that it achieves better resolution with a new proprietary technique that projects an image directly onto the user’s retina. According to a researcher who has studied the company’s patents, Magic Leap is likely to use stacked silicon waveguides.


Virtual reality overlaid on the real world in this manner is called mixed reality, or MR. (The goggles are semi-transparent, allowing you to see your actual surroundings.) It is more difficult to achieve than the classic fully immersive virtual reality, or VR, where all you see are synthetic images, and in many ways MR is the more powerful of the two technologies.

Magic Leap is not the only company creating mixed-reality technology, but right now the quality of its virtual visions exceeds all others. Because of this lead, money is pouring into this Florida office park. Google was one of the first to invest. Andreessen Horowitz, Kleiner Perkins, and others followed. In the past year, executives from most major media and tech companies have made the pilgrimage to Magic Leap’s office park to experience for themselves its futuristic synthetic reality.


The video below is shot directly through the Magic Leap technology, without compositing any special effects. It gives an idea of how things look through Magic Leap.

On December 9, 2015, Forbes reported on documents filed in the state of Delaware, indicating a Series C funding round of $827m. This funding round could bring the company’s total funding to $1.4 billion, and its post-money valuation to $3.7 billion.

On February 2, 2016, Financial Times reported that Magic Leap further raised another funding round of close to $800m, valuing the startup at $4.5 billion.

On February 11, 2016, Silicon Angle reported that Magic Leap had joined the Entertainment Software Association.

In April 2016, Magic Leap acquired Israeli cyber security company NorthBit.

Magic Leap has raised $1.4 billion from a list of investors including Google and China’s Alibaba Group.

On June 16, 2016, Magic Leap announced a partnership with Disney’s Lucasfilm and its ILMxLAB R&D unit. The two companies will form a joint research lab at Lucasfilm’s San Francisco campus.

MacBook Pro 2016

Apple announced its MacBook Pro 2016 on Thursday, October 27, 2016, at an event in Cupertino. It’s roughly the 25-year anniversary of Apple laptops, and Apple has come a long way in those years.


MacBook Pro 2016

The new Pro comes with an extended trackpad, a butterfly keyboard that evenly distributes the pressure of key presses and, most important of all, no function keys, but an OLED Touch Bar that does the job instead. These MacBooks also weigh only 3 pounds and are 23% smaller than their predecessor, the 2015 MacBook Pro.

The video below (Source: Apple) gives a clearer idea of the new MacBook and the Touch Bar.

One of the most significant developments in this new Touch Bar era is that Touch ID is now available on MacBook Pros. Sure, there have been fingerprint readers on laptops before, but now your Mac gets fingerprint-based Apple Pay transactions on the web and in apps, plus the benefit of fingerprint-based fast user switching.


Touch ID on the OLED Touch Bar

The new Pro starts at $1,499, but you don’t want to buy that one. That one doesn’t have a Touch Bar. You want the 13-inch model that starts at $1,799, or the $2,399 15-inch model. These are seriously expensive machines, for people willing to pay for seriously powerful machines.

Google Pixel and Pixel XL

Google recently announced its two new smartphones, the Pixel and the Pixel XL, which aren’t made by any third-party phone manufacturer; Google itself has made the new line of phones. The Pixels replace the Nexus line. The Google Pixel comes with Android 7.1 Nougat out of the box.


 

 

Not only is it the first phone with the Google Assistant built in, but it also has the highest-rated smartphone camera ever tested on DxOMark, with a mobile score of 89. It is particularly strong in providing a very high level of detail from its 12.3MP camera, with relatively low levels of noise in every tested lighting condition. It also provides accurate exposures with very good contrast and white balance, as well as fast autofocus.


 

The Pixel’s strong scores under a wide range of conditions make it an excellent choice for almost any kind of photography. As with any small-sensor device, results are excellent in conditions with good and uniform lighting. But in addition, images captured indoors and in low light are very good and provide a level of detail unexpected from a smartphone camera.

It also has a headphone jack!

Pixel Tech Specs:

  • Display:
    -5.0 inches
    -FHD AMOLED at 441ppi
    -2.5D Corning® Gorilla® Glass 4
  • Battery
    2,770 mAh battery
    Standby time (LTE): up to 19 days
    Talk time (3g/WCDMA): up to 26 hours
    Internet use time (Wi-Fi): up to 13 hours
    Internet use time (LTE): up to 13 hours
    Video playback: up to 13 hours
    Audio playback (via headset): up to 110 hours
    Fast charging: up to 7 hours of use from only 15 minutes of charging

Pixel XL Tech Specs:

  • Display:
    -5.5 inches
    -QHD AMOLED at 534ppi
    -2.5D Corning® Gorilla® Glass 4

  • Battery:
    3,450 mAh battery
    Standby time (LTE): up to 23 days
    Talk time (3g/WCDMA): up to 32 hours
    Internet use time (Wi-Fi): up to 14 hours
    Internet use time (LTE): up to 14 hours
    Video playback: up to 14 hours
    Audio playback (via headset): up to 130 hours
    Fast charging: up to 7 hours of use from only 15 minutes of charging

Both devices have:

  • 4GB LPDDR4 RAM
  • 32 or 128GB of storage
  • Qualcomm® Snapdragon™ 821 (MSM8996 pro)
    Quad Core 2x 2.15GHz / 2x 1.6GHz
  • Main Camera –
    12.3MP
    Large 1.55μm pixels
    Phase detection autofocus + laser detection autofocus
    f/2.0 Aperture
  • Front Camera –
    8MP
    1.4µm pixels
    f/2.4 Aperture
    Fixed focus
  • Video –
    1080p @ 30fps, 60fps, 120fps
    720p @ 30fps, 60fps, 240fps
    4K @ 30fps
  • Charging –
    USB Type-C™ 18W adaptor with USB-PD
    18W charging
  • OS –
     Android 7.1 Nougat
    Two years of OS upgrade from launch
    Three years of security updates from launch