An artificial iris that responds to light like the human eye

An artificial iris manufactured from an intelligent, light-controlled polymer material can react to incoming light in the same way as the human eye. The iris was developed by the Smart Photonic Materials research group at Tampere University of Technology (TUT), and the work was recently published in the journal Advanced Materials.

iris.jpg

The human iris does its job of adjusting your pupil size to meter the amount of light hitting the retina behind without you having to actively think about it. And while a camera’s aperture is designed to work the same way as a biological iris, it’s anything but automatic. Even point-and-shoots rely on complicated control mechanisms to keep your shots from becoming overexposed. But a new “artificial iris” developed at the Tampere University of Technology in Finland can autonomously adjust itself based on how bright the scene is.

Scientists from the Smart Photonic Materials research group developed the iris using a light-sensitive liquid crystal elastomer. The team also employed photoalignment techniques, which accurately position the liquid crystal molecules in a predetermined direction within a tolerance of a few picometers. This is similar to techniques originally used in LCD TVs to improve viewing angle and contrast, which have since been adapted for smartphone screens. “The artificial iris looks a little bit like a contact lens,” TUT Associate Professor Arri Priimägi said. “Its center opens and closes according to the amount of light that hits it.”

The team hopes to eventually develop this technology into an implantable biomedical device. Before that can happen, however, the TUT researchers first need to improve the iris’s sensitivity so that it can adapt to smaller changes in brightness, and they need to get it to work in an aqueous environment. This new iris is therefore still a long way from being ready.

What is Blockchain?

Blockchains in Bitcoin.

 

A blockchain is a public ledger of all Bitcoin transactions that have ever been executed. It is constantly growing as ‘completed’ blocks are added to it with a new set of recordings. The blocks are added to the blockchain in a linear, chronological order. Each node (computer connected to the Bitcoin network using a client that performs the task of validating and relaying transactions) gets a copy of the blockchain, which gets downloaded automatically upon joining the Bitcoin network. The blockchain has complete information about the addresses and their balances right from the genesis block to the most recently completed block.

Blockchain

Blockchain formation.

The blockchain is seen as the main technological innovation of Bitcoin since it stands as proof of all the transactions on the network. A block is the ‘current’ part of a blockchain, which records some or all of the recent transactions and, once completed, goes into the blockchain as a permanent database. Each time a block gets completed, a new block is generated. There are countless such blocks in the blockchain. So are the blocks randomly placed in a blockchain? No, they are linked to each other (like a chain) in proper linear, chronological order, with every block containing a hash of the previous block.
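To make the hash-linking idea concrete, here is a minimal Python sketch of blocks chained together by the hash of the previous block. It is an illustration of the linking principle only; real Bitcoin blocks also carry headers, Merkle roots of the transactions, and proof-of-work, all omitted here.

```python
# Minimal sketch of hash-linked blocks (illustrative only, not Bitcoin's block format).
import hashlib
import json
import time

def make_block(transactions, previous_hash):
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,   # the link that forms the "chain"
    }
    # The block's own hash covers everything above, including previous_hash.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

# The genesis block has no predecessor; every later block points at the one before it.
chain = [make_block(["genesis"], previous_hash="0" * 64)]
chain.append(make_block(["alice pays bob 1 BTC"], chain[-1]["hash"]))
chain.append(make_block(["bob pays carol 0.5 BTC"], chain[-1]["hash"]))

for height, block in enumerate(chain):
    print(height, block["hash"][:16], "<- prev", block["previous_hash"][:16])
```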

To use conventional banking as an analogy, the blockchain is like a full history of banking transactions. Bitcoin transactions are entered chronologically in a blockchain just the way bank transactions are. Blocks, meanwhile, are like individual bank statements.

Based on the Bitcoin protocol, the blockchain database is shared by all nodes participating in the system. The full copy of the blockchain has records of every Bitcoin transaction ever executed. It can thus provide insight into facts like how much value belonged to a particular address at any point in the past.

In the above representation, the main chain (black) consists of the longest series of blocks from the genesis block (green) to the current block. Orphan blocks (purple) exist outside of the main chain.

 

 

infographics0517-01

How Bitcoin and the Blockchain Work

 

A blockchain (originally “block chain”) is a distributed database that is used to maintain a continuously growing list of records, called blocks. Each block contains a timestamp and a link to a previous block. A blockchain is typically managed by a peer-to-peer network collectively adhering to a protocol for validating new blocks. By design, blockchains are inherently resistant to modification of the data: once recorded, the data in any given block cannot be altered retroactively without the alteration of all subsequent blocks and the collusion of the network. Functionally, a blockchain can serve as “an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way. The ledger itself can also be programmed to trigger transactions automatically.”
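That tamper-resistance falls straight out of the hash linking. Continuing the hypothetical chain from the earlier sketch, a simple verification pass shows how a retroactive edit to one block is immediately detectable, and why an attacker would have to rewrite every later block (and win over the rest of the network) to hide it:

```python
# Continuing the sketch above: verify the chain by recomputing each block's hash
# and checking that each block still points at its predecessor's actual hash.
import hashlib
import json

def verify(chain):
    for height, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != recomputed:
            return f"block {height}: contents no longer match its hash"
        if height > 0 and block["previous_hash"] != chain[height - 1]["hash"]:
            return f"block {height}: link to block {height - 1} is broken"
    return "chain is consistent"

print(verify(chain))                                 # chain is consistent
chain[1]["transactions"] = ["alice pays eve 1 BTC"]  # retroactive edit
print(verify(chain))                                 # block 1: contents no longer match its hash
```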

Intel Core i9

Intel recently announced a new family of processors for enthusiasts, the Core X-series, and it’s anchored by the company’s first 18-core CPU, the i9-7980XE.

 

Intel+Core+i9+x+series.jpg

Priced at $1,999, the 7980XE is clearly not a chip you’ll see in an average desktop. Instead, it’s more of a statement from Intel. It beats out AMD’s 16-core Threadripper CPU, which was slated to be that company’s most powerful consumer processor for 2017. And it gives Intel yet another way to satisfy the demands of power-hungry users who might want to do things like play games in 4K while broadcasting them in HD over Twitch. And, as if its massive core count wasn’t enough, the i9-7980XE is also the first Intel consumer chip that packs in over a teraflop’s worth of computing power.

 

inteli9.jpg

 

If 18 cores are overkill for you, Intel also has other Core i9 Extreme Edition chips in 10-, 12-, 14- and 16-core variants. Perhaps the best news for hardware geeks: the 10-core Core i9-7900X will retail for $999, a significant discount from last year’s version.

All of the i9 chips feature base clock speeds of 3.3GHz, reaching dual-core speeds of up to 4.3GHz with Turbo Boost 2.0 and 4.5GHz with Turbo Boost Max 3.0, an upgraded version of the technology. The company points out that while the additional cores on the Core X models will improve multitasking performance, the addition of technologies like Turbo Boost Max 3.0 ensures that each core is also able to achieve improved performance. (Intel claims that the Core X series reaches 10 percent faster multithreaded performance over the previous generation and 15 percent faster single-threaded performance.)

 

 

(via Engadget, The Verge)

 

Google’s and Nvidia’s AI Chips

Google

Google will soon launch a cloud computing service that provides exclusive access to a new kind of artificial intelligence chip designed by its own engineers. CEO Sundar Pichai revealed the new chip and service this morning in Silicon Valley during his keynote at Google I/O, the company’s annual developer conference.

GoogleChip4.jpg

This new processor is a unique creation designed to both train and execute deep neural networks—machine learning systems behind the rapid evolution of everything from image and speech recognition to automated translation to robotics. Google says it will not sell the chip directly to others. Instead, through its new cloud service, set to arrive sometime before the end of the year, any business or developer can build and operate software via the internet that taps into hundreds and perhaps thousands of these processors, all packed into Google data centers.

According to Google senior fellow Jeff Dean, Google’s new “TPU device,” which spans four chips, can handle 180 trillion floating-point operations per second, or 180 teraflops, and the company uses a new form of computer networking to connect several of these chips together, creating a “TPU pod” that provides about 11,500 teraflops of computing power. In the past, Dean said, the company’s machine translation model took about a day to train on 32 state-of-the-art CPU boards. Now, it can train in about six hours using only a portion of a pod.
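Taken at face value, those figures also imply the rough shape of a pod. A quick back-of-the-envelope check (these derived numbers are inferences from the quoted figures, not official specifications):

```python
# Back-of-the-envelope check using only the figures quoted above.
teraflops_per_device = 180       # one "TPU device" = four chips
teraflops_per_pod = 11_500       # one "TPU pod"

print(teraflops_per_device / 4)                  # ~45 teraflops per individual chip
print(teraflops_per_pod / teraflops_per_device)  # ~64 devices per pod
```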

Nvidia

Nvidia has released a new state-of-the-art chip that pushes the limits of machine learning, the Tesla P100 GPU. It can perform deep learning neural network tasks 12 times faster than the company’s previous top-end system (the Titan X). The P100 was a huge commitment for Nvidia, costing over $2 billion in research and development, and it sports a whopping 150 billion transistors on a single chip, making the P100 the world’s largest chip, Nvidia claims. In addition to machine learning, the P100 will work for all sorts of high-performance computing tasks — Nvidia just wants you to know it’s really good at machine learning.

dgx.png

To top off the P100’s introduction, Nvidia has packed eight of them into a crazy-powerful $129,000 supercomputer called the DGX-1. This show-horse of a machine comes ready to run, with deep-learning software preinstalled. It’s shipping first to AI researchers at MIT, Stanford, UC Berkeley, and others in June. On stage, Nvidia CEO Jen-Hsun Huang called the DGX-1 “one beast of a machine.”

The competition between these upcoming AI chips and Nvidia all points to an emerging need for simply more processing power in deep learning computing. A few years ago, GPUs took off because they cut the training time for a deep learning network from months to days. Deep learning, which had been around since at least the 1950s, suddenly had real potential with GPU power behind it. But as more companies try to integrate deep learning into their products and services, they’re only going to need faster and faster chips.

 

(via Wired, Forbes, Nvidia, The Verge)

 

Neuralink

The rise of an A.I. that overpowers humans could prove to be a catastrophic situation. Elon Musk is attempting to combat that risk with the launch of his latest venture, brain-computer interface company Neuralink. Detailed information about Neuralink can be found on Wait But Why (highly recommended reading!).

neuralink.jpeg

Musk seems to want to achieve a communications leap equivalent in impact to when humans came up with language. Language proved an incredibly efficient way to convey thoughts socially, but Neuralink aims to increase that efficiency by multiple orders of magnitude. Person-to-person, Musk’s vision would enable direct, “uncompressed” communication of concepts between people, instead of having to “compress” your original thought by translating it into language and then having the other party “decompress” the package you send them linguistically, which is always a lossy process.

Another thing in favor of Musk’s proposal is that symbiosis between brains and computers isn’t fiction. Remember that person who types with brain signals? Or the paralyzed people who move robot arms? These systems work better when the computer completes people’s thoughts. The subject only needs to type “bulls …” and the computer does the rest. Similarly, a robotic arm has its own smarts. It knows how to move; you just have to tell it to. So even partial signals coming out of the brain can be transformed into more complete ones. Musk’s idea is that our brains could integrate with an AI in ways that we wouldn’t even notice: imagine a sort of thought-completion tool.

So it’s not crazy to believe there could be some very interesting brain-computer interfaces in the future. But that future is not as close at hand as Musk would have you believe. One reason is that opening a person’s skull is not a trivial procedure. Another is that technology for safely recording from more than a hundred neurons at once—neural dust, neural lace, optical arrays that thread through your blood vessels—remains mostly at the blueprint stage.

 

(via Wired, TechCrunch)

Is A.I. Mysterious?

Contemplating the book I recently began, Superintelligence: Paths, Dangers, Strategies by Nick Bostrom: the author describes the future of AI as tremendously frightening and lays out the ways in which AI could overpower human beings, up to a point in the future where humans will no longer be needed. Such an intelligence would be superior to human beings, hence the term superintelligence. Bostrom further mentions in the book’s opening pages that such an invention would be the last invention humans would ever need to make; the AI would then invent new things itself, without needing a human being.

Does AI possess a darker side? (Enter an article by MitTechReview.)

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

Now, once again, such experiments and research may seem obscure at the moment, or may even be dismissed by some people, but what if an entire fleet of vehicles started acting on what they had learned and ignoring the commands of a human?

Enter OpenAI: discovering and enacting the path to safe artificial general intelligence.

OpenAI is a non-profit artificial intelligence (AI) research company, associated with business magnate Elon Musk, that aims to carefully promote and develop friendly AI in such a way as to benefit humanity as a whole.

Hopefully, OpenAI will help us have friendly versions of AI.

 

deepdream1

The artist Adam Ferriss created this image, and the one below, using Google Deep Dream, a program that adjusts an image to stimulate the pattern recognition capabilities of a deep neural network. The pictures were produced using a mid-level layer of the neural network.

This image sure seems kind of spooky. But who knows, there might be some hidden sarcasm in the AI that we are yet to discover.

Meanwhile, you can give it a try at https://deepdreamgenerator.com/.
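For the technically curious, the core trick behind Deep Dream is gradient ascent on the input image: nudge the pixels so that a chosen layer of a pretrained network responds even more strongly to whatever patterns it already detects. The sketch below is a minimal illustration of that idea, assuming PyTorch and torchvision are available; the network, layer index, step size, and file names are arbitrary illustrative choices, and this is not Google’s actual Deep Dream code.

```python
# Minimal gradient-ascent sketch of the Deep Dream idea (not Google's code).
# Assumes PyTorch/torchvision and a local image "input.jpg"; layer index and
# step size are arbitrary illustrative choices.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

net = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

activations = {}
net[14].register_forward_hook(                 # a mid-level convolutional layer
    lambda module, inp, out: activations.update(target=out))

img = T.Compose([T.Resize(224), T.CenterCrop(224), T.ToTensor()])(
    Image.open("input.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

for _ in range(20):                            # a few gradient-ascent steps
    net(img)
    activations["target"].norm().backward()    # amplify whatever the layer "sees"
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.clamp_(0, 1)

T.ToPILImage()(img.detach().squeeze(0)).save("dream.jpg")
```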

 

 

Sources (MitTechReview, OpenAI, Wiki)


Artificial Intelligence-ready ARM CPUs (DynamIQ)

ARM processors are ubiquitous, and many of the tech gadgets we use are powered by them. Now the company is showing off its plans for the future with DynamIQ. Aimed squarely at the artificial intelligence and machine learning systems we’re expecting to see in cars, phones, gaming consoles and everything else, it’s what the company claims is an evolution of its existing “big.LITTLE” technology.

arm.jpg

Here’s a high-level look at some of the new features, capabilities and benefits DynamIQ will bring to new Cortex-A processors later this year:

  • New dedicated processor instructions for ML and AI: Cortex-A processors designed for DynamIQ technology can be optimized to deliver up to a 50x boost in AI performance over the next 3-5 years relative to today’s Cortex-A73-based systems, and up to a 10x faster response between the CPU and specialized accelerator hardware on the SoC, which can unleash substantially better combined performance.

arm-dynamiq2.jpg

  • Increased multi-core flexibility: SoC designers can scale up to eight cores in a single cluster, and each core can have different performance and power characteristics. These advanced capabilities enable faster responsiveness to ML and AI applications. A redesigned memory subsystem enables both faster data access and enhanced power management.
  • More performance within restricted thermal budgets: Efficient and much faster switching of software tasks to match the right-sized processor for optimal performance and power is further enhanced through independent frequency control of individual processors.
  • Safer autonomous systems: DynamIQ brings greater levels of responsiveness for ADAS solutions and increased safety capabilities, which will enable partners to build ASIL-D compliant systems for safe operation under failure conditions.

 

 

(source: ARM community, Engadget)

Machine Learning Speeds Up

Cloudera and Intel are jointly speeding up machine learning with the help of Intel’s Math Kernel Library (MKL). Benchmarks demonstrate that the combined offering can advance machine learning performance over large data sets in less time and with less hardware. This helps organizations accelerate their investments in next-generation predictive analytics.

Cloudera is the leader in Apache Spark development, training, and services. Apache Spark is advancing the art of machine learning on distributed systems with familiar tools that deliver at impressive scale. By joining forces, Cloudera and Intel are furthering a joint mission of excellence in big data management in the pursuit of better outcomes by making machine learning smarter and easier to implement.

intcloud.jpg

Predictive Maintenance

By combining Spark, the Intel MKL libraries, and Intel’s optimized CPU architecture, machine learning workloads can scale quickly. As machine learning solutions get access to more data, they can provide better accuracy in delivering predictive maintenance, recommendation engines, proactive health care and monitoring, and risk and fraud detection.
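As a rough illustration of what such a predictive-maintenance workload looks like in Spark, here is a minimal PySpark sketch. The dataset path, column names, and model choice are hypothetical placeholders; where Spark is configured to use an optimized native library such as Intel’s MKL, the acceleration happens underneath this same high-level API without code changes.

```python
# Minimal PySpark sketch of a predictive-maintenance classifier.
# "sensor_readings.csv", the column names ("temperature", "vibration",
# "failure"), and the model choice are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("predictive-maintenance").getOrCreate()

# Load historical sensor readings with a binary "failure" label column.
df = spark.read.csv("sensor_readings.csv", header=True, inferSchema=True)

assembler = VectorAssembler(inputCols=["temperature", "vibration"],
                            outputCol="features")
lr = LogisticRegression(labelCol="failure", featuresCol="features")

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = Pipeline(stages=[assembler, lr]).fit(train)

accuracy = MulticlassClassificationEvaluator(
    labelCol="failure", metricName="accuracy").evaluate(model.transform(test))
print(f"holdout accuracy: {accuracy:.2f}")
```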

“There’s a growing urgency to implement richer machine learning models to explore and solve the most pressing business problems and to impact society in a more meaningful way,” said Amr Awadallah, chief technical officer of Cloudera. “Already among our user base, machine learning is an increasingly common practice. In fact, in a recent adoption survey over 30% of respondents indicated they are leveraging Spark for machine learning.”

 

(via Technative.io)

The Poker Playing AI

The game of poker involves dealing with imperfect information, which makes it very complex and more like many real-world situations. At the Rivers Casino in Pittsburgh this week, a computer program called Libratus (a Latin word meaning “balanced”) is taking on some of the world’s best players, an AI system that may finally prove that computers can do this better than any human card player. Libratus was created by Tuomas Sandholm, a professor in the computer science department at Carnegie Mellon University (CMU), and his graduate student Noam Brown.

mitpoker_0.jpg

The AI is playing against some of the world’s best poker players. Dong Kim is a high-stakes poker player who specializes in no-limit Texas Hold ‘Em. Jason Les and Daniel McAulay, two of the other top poker players challenging the machine, describe its play in much the same way. “It does a little bit of everything,” Kim says. It doesn’t always play the same type of hand in the same way. It may bluff with a bad hand or not. It may bet high with a good hand—or not. That means Kim has trouble finding holes in its game. And if he does find a hole, it disappears the next day.

“The bot gets better and better every day. It’s like a tougher version of us,” said Jimmy Chou, one of the four pros battling Libratus. “The first couple of days, we had high hopes. But every time we find a weakness, it learns from us and the weakness disappears the next day.”

Libratus is playing thousands of games of heads-up, or two-player, no-limit Texas hold’em against several expert professional poker players. Now a little more than halfway through the 20-day contest, Libratus is up by almost $800,000 against its human opponents. So a victory, while far from guaranteed, may well be in the cards.

Regardless of the pure ability of the humans and the AI, it seems clear that the pros will be less effective as the tournament goes on. Ten hours of poker a day for 20 days straight against an emotionless computer is exhausting and demoralizing, even for pros like Doug Polk. And while the humans sleep at night, Libratus takes the supercomputer powering its in-game decision making and applies it to refining its overall strategy.

A win for Libratus would be a huge achievement in artificial intelligence. Poker requires reasoning and intelligence that has proven difficult for machines to imitate. It is fundamentally different from checkers, chess, or Go because an opponent’s hand remains hidden from view during play. In games of “imperfect information,” it is enormously complicated to figure out the ideal strategy given every possible approach your opponent may be taking. And no-limit Texas hold’em is especially challenging because an opponent could essentially bet any amount.

“Poker has been one of the hardest games for AI to crack,” says Andrew Ng, chief scientist at Baidu. “There is no single optimal move, but instead an AI player has to randomize its actions so as to make opponents uncertain when it is bluffing.”
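To make the randomization point concrete, the toy Python sketch below uses regret matching, a simple self-play procedure, on an invented two-action bluffing game. Everything about it (the payoffs, the two actions) is made up for illustration; Libratus itself runs far more sophisticated counterfactual-regret methods over the full game tree of no-limit hold’em. For these toy payoffs the learned averages approach a mixed strategy in which the bettor bluffs about a quarter of the time and the opponent calls about half the time, i.e. neither side settles on a fixed, exploitable rule.

```python
# Toy regret-matching sketch: learning a randomized (mixed) strategy by self-play.
# The 2x2 payoff matrix is invented for illustration; this is NOT Libratus's
# game model or algorithm.
import random

# Row player's payoff: rows = ("bluff", "check"), columns = ("call", "fold").
PAYOFF = [[-2, +1],   # bluffing loses big if called, steals the pot if not
          [ 0, -1]]   # checking is safe against a call but gives up value otherwise

def strategy(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [0.5, 0.5]

row_regret, col_regret = [0.0, 0.0], [0.0, 0.0]
row_avg, col_avg = [0.0, 0.0], [0.0, 0.0]
iterations = 100_000

for _ in range(iterations):
    row_strat, col_strat = strategy(row_regret), strategy(col_regret)
    r = random.choices([0, 1], weights=row_strat)[0]
    c = random.choices([0, 1], weights=col_strat)[0]
    gained = PAYOFF[r][c]
    for a in range(2):
        # Regret = what the action would have earned minus what was earned.
        row_regret[a] += PAYOFF[a][c] - gained
        col_regret[a] += -PAYOFF[r][a] + gained      # column player minimizes PAYOFF
        row_avg[a] += row_strat[a] / iterations      # average strategies drift toward
        col_avg[a] += col_strat[a] / iterations      # the mixed equilibrium

print(f"bettor bluffs with probability {row_avg[0]:.2f}")   # roughly 0.25
print(f"opponent calls with probability {col_avg[0]:.2f}")  # roughly 0.50
```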

(Sources: MitTechReview, The Verge, Wired)
