Carbon M1 3D Printer

Carbon splashed onto the 3D printing scene last year when they announced a new method of dramatically speeding up resin 3D printing by applying an oxygen-permeable layer to the bottom of their resin tank. This removes the need for the lengthy and tedious “peel maneuvers” required by most other resin printers.

carbon-releases-first-commericla-clip-based-3dprinter-m1-1.jpg

The Redwood City, California-based company, known for its innovative Continuous Liquid Interface Production (CLIP) technology, which in 2015 secured a $100 million investment from Google Ventures, has over the last year partnered with leading companies in various industries, including Kodak, Ford, and Johnson & Johnson. Now, with the release of their first commercial 3D printer and a slew of new materials, the company’s astounding growth and success are sure to continue.

In terms of specs, the Carbon M1 3D printer boasts a build envelope of 144 mm x 81 mm x 330 mm and features a build platform made from billet aluminum, a foot-activated build area door, an oxygen-permeable window cassette, and a high-performance LED light engine. Additionally, the new 3D printer is Internet-connected, so the latest features, performance enhancements, and resins are instantly available to M1 users. The Carbon M1 is also capable of collecting more than 1 million process-control data points a day. In practice, this means that Carbon can provide remote assistance and diagnostics to help optimize your prints and improve them over time.

How-CLIP-Works-FINAL-1.jpg

Similar to existing stereolithography (SLA) rapid prototyping processes, the Carbon M1 3D printer uses an ultraviolet (UV) light projector beneath a pool of light-sensitive resin. As the build platform moves upward, the projector shines light onto successive cross sections of the liquid polymer, solidifying it as it goes and forming the object.

The new UV curable materials unveiled by Carbon are the following:

  • Rigid Polyurethanes (RPUs) whose stiffness, strength, and ability to handle stress make them ideal for consumer electronics, automotive parts, and industrial components.
  • Flexible Polyurethane (FPU) whose semi-rigidity and resistance towards impact, abrasion, and fatigue are useful for applications and parts which bear repetitive stresses (like hinges and friction fits).
  • Elastomeric Polyurethane (EPU) with elastic properties under cyclic tensile and compressive loads, and high tear and impact resistance.
  • Cyanate Ester-based resin (CE), a high-performance material with heat deflections up to 219°C (426°F) ideal for under-the-hood applications, electronics, or industrial components.
  • Prototyping Resin (PR), a quick-printing, high-resolution material meant to withstand “moderate functional testing”. It is available in six colors: cyan, magenta, yellow, black, white, and gray.

maxresdefault.jpg

The difference between CLIP and traditional SLA is that instead of a UV light or laser drawing the design on each layer of the liquid polymer pool, CLIP projects an entire cross section of the object across the pool, something akin to a slideshow that hardens the object continuously as the build platform rises. Unlike SLA methods, CLIP carefully balances the UV light with oxygen – the light cures the resin while the oxygen inhibits that reaction. This results in a far more gentle process, capable of producing “isotropic”, or layer-less parts, according to Phelps.
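
To get a feel for why removing the per-layer peel step matters, here is a back-of-the-envelope comparison in Python. Every number in it (layer height, cure time, peel time, continuous draw speed) is a made-up placeholder rather than a Carbon or SLA spec; the point is only that per-layer peel time quickly dominates a layer-by-layer build.

```python
# Toy comparison: layer-by-layer SLA (cure + peel per layer) vs. CLIP's
# continuous pull. All numbers below are hypothetical placeholders.
part_height_mm = 50.0
layer_height_mm = 0.1
cure_time_s = 2.0             # hypothetical per-layer exposure time
peel_time_s = 6.0             # hypothetical per-layer peel maneuver
clip_speed_mm_per_min = 10.0  # hypothetical continuous draw speed

layers = part_height_mm / layer_height_mm
sla_minutes = layers * (cure_time_s + peel_time_s) / 60.0
clip_minutes = part_height_mm / clip_speed_mm_per_min

print("SLA  (cure + peel every layer): %.0f minutes" % sla_minutes)
print("CLIP (continuous pull):         %.0f minutes" % clip_minutes)
```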

carb.png

Google I/O 2016 In Brief

 

io16-social

Android N

Android N is a further refinement of Marshmallow. Starting with performance, Dave Burke said that the first Developer Preview introduced a new JIT compiler to improve software performance, while the second N Developer Preview included Vulkan, a new 3D rendering API. Android N Developer Preview 3 will bring improved graphics and runtime performance.

android-n-2

Quick Reply from the Notification Bar

Burke revealed that Android N now features easier multi-tasking, including quick switching between apps, multi-window support, and better notifications. As predicted, everybody’s favorite Marshmallow feature, Doze Mode, has also been improved: Doze now uses a two-tier system. Android N also adds Unicode 9 support, alongside further refinements to notifications, multi-tasking, and settings.

You can now suggest a name for Android N, too! Link.

Google Assistant

During the keynote speech, Pichai unveiled a new “conversational” assistant. Based on natural-language processing, Google Assistant appears to be an evolution of Google Now, which has been available for some time on Android devices. It acts much like a personal assistant and a conversationalist. “Think of the assistant, we think of it as a conversational assistant, we want users to have an ongoing two-way dialog,” CEO Sundar Pichai said. It is much like its competition, Apple’s Siri, as well as other voice assistants.

screen-shot-2016-05-18-at-1-12-06-pm.png

Google Home

Google Home is a voice-activated home product that allows you and your family to get answers from Google, stream music, and manage everyday tasks. As seen in the keynote, it’s much like the Amazon Echo: a small speaker you plug into the wall, with always-listening, far-field microphones that can hear you from across the room. It’ll answer your questions, play your music, and control some of your home automation gadgets.

 

screen shot 2016-05-18 at 1.21.48 pm.png

Google Home

 

Google Allo and Duo

Engineering director Erik Kay introduced a new “smart messaging app” called Allo. Speaking about the app, which comes with Google built in, he said: “It works over time to make conversations easier and more productive.” CEO Sundar Pichai added that Google is focusing deeply on machine learning, which is what lets the app make conversations easier over time. Allo also has a feature called Whisper/Shout, which lets you shrink or enlarge a message to match the tone of your reply.

Allo-whisper-Shout-resize-work.gif

Google also talked about Duo, a simple one-to-one video calling app that is said to “perform well even on slow networks”. It works on both Android and iOS. One of its standout features is a function Google calls ‘Knock Knock’, which shows you a live video stream of the caller before you even answer the call. Once you answer, the video continues, but you are now part of the conversation. It’s said to be both fast and smooth.

Daydream and VR

google-daydream-vr-796x476

Google is building a feature called VR Mode into the latest version of its operating system. VR Mode includes a series of optimizations that will improve apps’ performance. A Daydream home screen will let people access apps and content while using the headset; an early look shows a forest landscape with the slightly low-poly look that Google has used in Cardboard apps. Inside this environment, Google has created special VR versions of YouTube, Street View, the Google Play Store, Play Movies, and Google Photos. It has also recruited a number of outside media companies to bring apps to Daydream, including streaming platforms like Netflix and gaming companies like Ubisoft and Electronic Arts.

controller_2.0.gif

Google VR

 

Android Wear 2.0

Android Wear 2.0 is the platform’s biggest update yet, bringing a Material Design overhaul and support for standalone apps. Probably the biggest feature is that watches can now connect directly to Wi-Fi networks when they aren’t connected to your phone over Bluetooth. The UI has also been completely redesigned.

Android-Wear-2-0-Dev-Preview-Infografic-640x360.png

google-io-2016-bugdroid.jpg

 

Google I/O Attendee SWAG! (via androidcentral)

What do you get?

 

google-io-swag-2

google-io-swag-1

google-io-swag-3

IoT Business Models

Why are IoT Business Models Important?

As the Internet of Things (IoT) spreads, the implications for business model innovation are huge. Filling out well-known frameworks and streamlining established business models won’t be enough. To take advantage of new, cloud-based opportunities, today’s companies will need to fundamentally rethink their ideas about business models. Industrial players are moving so slowly to evolve their business model designs that, by about 2020, they risk implementing solution concepts developed in the late 1990s.

Apple, Google, Amazon, and others present an interesting case for how B2B companies should be thinking about designing and developing smart-services business models. Players like Apple and Google have developed a business design model that pulls together technologies from multiple domains and packages the solution in a way that wins buyer acceptance. Add to this the momentum and creativity these players are creating within their communities of users and developers; they are driving entirely new forms of collaboration, content, and peer product development.

1-eZHNT57Pfd4EMxjvMpzr3A.png

Albert Shum, Partner Director of UX Design at Microsoft, notes: “Business models are about creating experiences of value. And with the IoT, you can really look at how the customer looks at an experience—from when I’m walking through a store, buying a product, and using it—and ultimately figure out what more can I do with it and what service can renew the experience and give it new life.” To foster a conversation about the potential implications of connected experiences for designers, technologists, and business people, Albert’s team at Microsoft recently released a short film documentary called “Connecting: Makers.”

Business-to-business-model-Internet-of-things-sales-ecosystem-600x344.jpg

Some highlights of the connected business include:

  • Business Model Transformation – selling results, outcomes or performance – not equipment;
  • New Value-Added Services – providing peer benchmarking, targeted personalization services, predictive systems optimization based on analytics and modeling;
  • Product Design and Engineering Insights – collecting machine operating history across an entire generation of machines to determine priorities for future designs;
  • Sales, Fulfillment, and Supply Chain Services – developing a better understanding of installed base characteristics and behaviors for predictive modeling of demand for channel partners and ecosystem participants;
  • Ecosystem Orchestration – developing brokerage services for multiple, parallel vendors for orchestration of services around machines and systems;
  • New User Experience Design – designing more effective machines and/or systems based on a more intimate understanding of machine behaviors and how users interact with the system; and,
  • Installed Base Support Services – helping customers maintain installed systems and equipment on a collective or systemic basis through careful management of configurations, installed products contracts management and life cycle management.

Open AI

OpenAI is a non-profit artificial intelligence (AI) research company, associated with business magnate Elon Musk, that aims to carefully promote and develop open-source friendly AI in such a way as to benefit, rather than harm, humanity as a whole. The organization aims to “freely collaborate” with other institutions and researchers by making its patents and research open to the public. The company is supported by over US$1 billion in commitments; however, only a tiny fraction of the $1 billion pledged is expected to be spent in the first few years. Many of the employees and board members are motivated by concerns about existential risk from artificial general intelligence. (wiki)

OpenAI

So why OpenAI?

Some scientists believe that if advanced AI someday gains the ability to redesign itself at an ever-increasing rate, an unstoppable ‘intelligence explosion’ could lead to human extinction.

Elon Musk poses the question: “what is the best thing we can do to ensure the future is good? We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity.”

OpenAI-Gym-930x486

OpenAI Gym

The OpenAI Gym includes environments to simulate situations for your AI to learn from, as well as a site to compare and reproduce results. The tools are designed for use with Reinforcement Learning (RL), one of the technologies used to develop Google’s AlphaGo AI that defeated Go world champion Lee Se-Dol recently. RL works on the principle that a bot will receive a reward every time it completes an action successfully – similar to how you might train a dog.

The environments available in the OpenAI Gym include classic control problems like driving a car up a hill, text-based challenges like learning to win at roulette, and even Atari games like Asteroids, Air Raid, and Pitfall.

They’re currently available to experiment with in Python; OpenAI says they will soon update them to work with any language, and expand the collection of environments as well.
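
For a concrete sense of the reward loop described above, here is a minimal sketch of the Gym interaction cycle, assuming the gym package is installed and using the classic CartPole-v0 control environment with a purely random agent:

```python
# Minimal OpenAI Gym loop: a random agent collecting rewards in CartPole-v0.
import gym

env = gym.make('CartPole-v0')
for episode in range(5):
    observation = env.reset()                 # start a new episode
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()    # pick a random action
        observation, reward, done, info = env.step(action)
        total_reward += reward                # the "treat" signal
    print("Episode %d finished with reward %.0f" % (episode, total_reward))
```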

Visit OpenAI to get started.

 

SLAM (Simultaneous Localization and Mapping)

What’s SLAM?

SLAM is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of the agent’s location within it. The robot’s sensors are what build up the picture of the unknown environment, and the estimate of the robot’s pose is used to improve the estimates of the map’s landmark positions, and vice versa.
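
As a minimal sketch of that mutual-improvement idea (not any particular production SLAM system), consider a robot driving down a 1-D hallway with noisy odometry and a range measurement to a single landmark whose position is initially unknown. Because the robot pose and the landmark position share one covariance matrix, each measurement tightens both estimates. This assumes NumPy is available, and every number is a toy value:

```python
# 1-D "hallway" SLAM sketch with a linear Kalman filter.
# State: [robot position, landmark position].
import numpy as np

np.random.seed(0)
true_robot, true_landmark = 0.0, 10.0        # ground truth (hidden from filter)
step, sigma_u, sigma_z = 1.0, 0.2, 0.5       # toy motion and sensor noise

x = np.array([0.0, 0.0])                     # [robot, landmark] estimate
P = np.diag([1e-4, 1e6])                     # pose known, landmark unknown
F = np.eye(2)                                # landmark is static
B = np.array([1.0, 0.0])                     # control moves the robot only
Q = np.diag([sigma_u**2, 0.0])               # process noise
H = np.array([[-1.0, 1.0]])                  # z = landmark - robot
R = np.array([[sigma_z**2]])

for _ in range(20):
    # simulate the world
    true_robot += step + np.random.normal(0.0, sigma_u)
    z = true_landmark - true_robot + np.random.normal(0.0, sigma_z)

    # predict from odometry
    x = F @ x + B * step
    P = F @ P @ F.T + Q

    # update from the range measurement (improves pose AND landmark)
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print("robot estimate %.2f (true %.2f)" % (x[0], true_robot))
print("landmark estimate %.2f (true %.2f)" % (x[1], true_landmark))
```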

Is SLAM necessary?

Of course! Robot motion models aren’t always accurate: wheel odometry error accumulates over time, and other sensors such as IMUs (Inertial Measurement Units) drift as well.
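
A quick way to see the cumulative-error problem is to dead-reckon from noisy wheel odometry alone; the noise level below is a made-up figure, but the drift behaviour is the point:

```python
# Dead reckoning from noisy odometry: the position error only grows.
import numpy as np

np.random.seed(1)
sigma_odometry = 0.05                 # hypothetical per-step odometry noise
true_pos, estimated_pos = 0.0, 0.0

for step in range(1, 1001):
    true_step = 1.0                   # the robot really moves 1 unit
    measured_step = true_step + np.random.normal(0.0, sigma_odometry)
    true_pos += true_step
    estimated_pos += measured_step    # no correction, just integration
    if step % 250 == 0:
        drift = abs(estimated_pos - true_pos)
        print("after %4d steps, drift = %.2f units" % (step, drift))
```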

KittiInput

Input

KittiRec

Received

Mapping in SLAM:

Topological maps are a method of environment representation that capture the connectivity of places rather than precise geometry. Topological SLAM approaches have been used to enforce global consistency in metric SLAM algorithms.

Sensing in SLAM:

SLAM uses various sensors, and different types of sensors give rise to different SLAM algorithms, each built on assumptions appropriate to its sensors. At one extreme, laser scans or visual features provide details of a great many points within an area, sometimes rendering SLAM inference unnecessary because shapes in these point clouds can be easily and unambiguously aligned at each step via image registration. At the opposite extreme, tactile sensors are extremely sparse, as they contain only information about points very close to the agent, so they require strong prior models to compensate in purely tactile SLAM. Most practical SLAM tasks fall somewhere between these visual and tactile extremes.

Types of sensors: 

Active, Passive, Intrusive

  • Active: LEDs, Range Finders, Ultrasonic Sensors, Light Sensors.
  • Passive: Cameras, Infrared Sensors.
  • Intrusive: Markers in Augmented Reality.

Multiple Objects in SLAM:

The related problems of data association and computational complexity are among those yet to be fully resolved, for example the identification of multiple confusable landmarks. A significant recent advance in the feature-based SLAM literature involved re-examining the probabilistic foundations of Simultaneous Localisation and Mapping (SLAM) and posing it in terms of multi-object Bayesian filtering with random finite sets. These formulations provide superior performance to leading feature-based SLAM algorithms in challenging measurement scenarios with high false-alarm and high missed-detection rates, without the need for data association.

Applications of SLAM:

  • Augmented Reality
  • Robotic control
  • Virtual Map Building (Google Earth)
  • Navigation in unknown environments