Image: MIT Lincoln Laboratory

To enable safe and affordable autonomous vehicles, the automotive industry needs lidar systems that are around the size of a wallet, cost one hundred dollars, and can see targets at long distances with high resolution. With the support of DARPA, our team at Kyber Photonics, in Lexington, Mass., is advancing the next generation of lidar sensors by developing a new solid-state, lidar-on-a-chip architecture that was recently demonstrated at MIT. The technology has an extremely wide field of view, a simplified control approach compared to state-of-the-art designs, and the potential to scale to millions of units via the wafer-scale fabrication methods of the integrated photonics industry.


Light detection and ranging (lidar) sensors hold great promise for allowing autonomous machines to see and navigate the world with very high precision. But current technology suffers from several drawbacks that need to be addressed before widespread adoption can occur. Lidar sensors provide spatial information by scanning an optical beam, typically in the wavelength range between 850 and 1550 nm, and using the reflected optical signals to build a three-dimensional map of an area of interest. They complement cameras and radar by providing high resolution and unambiguous ranging and velocity information under both daytime and nighttime conditions.
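The core ranging principle described above can be sketched in a few lines. This is an illustrative example, not code from the Kyber Photonics system: a time-of-flight lidar measures the round-trip delay of a reflected pulse and converts it to distance using the speed of light, halving the result because the pulse travels out and back.

```python
# Illustrative sketch of time-of-flight ranging (not from the article's system).
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_round_trip(delay_s: float) -> float:
    """Target range in meters from a round-trip time-of-flight in seconds."""
    # Divide by 2 because the measured delay covers the outbound
    # and return paths of the reflected pulse.
    return C * delay_s / 2.0

# A pulse returning after 667 nanoseconds corresponds to roughly 100 m.
print(range_from_round_trip(667e-9))
```

Scanning the beam across many angles and repeating this measurement is what builds up the three-dimensional point cloud.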

Image: Audio Analytic

Smartphones for several years now have had the ability to listen non-stop for wake words, like "Hey Siri" and "OK Google," without excessive battery usage. These wake-up systems run in special, low-power processors embedded within a phone's larger chip set. They rely on neural networks trained to recognize a broad spectrum of voices, accents, and speech patterns. But they recognize only their wake words; more generalized speech recognition requires the involvement of a phone's more powerful processors.
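The two-tier pattern described above can be sketched as follows. The class and function names here are hypothetical, purely for illustration: a tiny always-on detector watches the audio stream, and only when it spots a wake word does the phone engage the expensive, general-purpose recognizer on its main processors.

```python
# Hypothetical sketch of the tiered wake-word architecture described above.
# Real detectors classify raw audio with a small neural net; here we match
# text chunks so the control flow is easy to follow.

class WakeWordDetector:
    """Stands in for the small model on the low-power coprocessor."""
    WAKE_WORDS = {"hey siri", "ok google"}

    def detect(self, audio_chunk: str) -> bool:
        return audio_chunk.lower() in self.WAKE_WORDS

class FullRecognizer:
    """Stands in for the general recognizer on the main processors."""
    def transcribe(self, audio_chunk: str) -> str:
        return audio_chunk

def pipeline(stream):
    detector, recognizer = WakeWordDetector(), FullRecognizer()
    awake = False
    for chunk in stream:
        if not awake:
            awake = detector.detect(chunk)      # cheap check, runs non-stop
        else:
            yield recognizer.transcribe(chunk)  # expensive, runs on demand
            awake = False

print(list(pipeline(["music", "OK Google", "what's the weather"])))
```

Only the chunk following the wake word reaches the full recognizer; everything else is handled (and discarded) by the low-power tier.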

Today, Qualcomm announced that Snapdragon 888 5G, its latest chipset for mobile devices, will be incorporating an extra piece of software in the bit of semiconductor real estate that houses the wake word recognition engine. Created by the Cambridge, U.K. startup Audio Analytic, the ai3-nano software will use the Snapdragon's low-power AI processor to listen for sounds beyond speech. Depending on the applications made available by smartphone manufacturers, phones will be able to react to such sounds as a doorbell, water boiling, a baby's cry, and fingers tapping on a keyboard—a library of some 50 sounds that is expected to grow to 150 to 200 in the near future.
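How an application might react to these sound events can be sketched with a simple label-to-callback registry. This is a hypothetical illustration of the pattern, not Audio Analytic's or Qualcomm's actual API; the labels and handler names are invented.

```python
# Hypothetical sketch: dispatching app callbacks when an on-device sound
# classifier reports a recognized label. Not a real Audio Analytic API.
from typing import Callable, Dict, Optional

handlers: Dict[str, Callable[[], str]] = {}

def on_sound(label: str):
    """Register a callback to run when `label` is recognized."""
    def register(fn: Callable[[], str]) -> Callable[[], str]:
        handlers[label] = fn
        return fn
    return register

@on_sound("doorbell")
def notify_doorbell() -> str:
    return "Someone is at the door"

@on_sound("baby_cry")
def notify_baby() -> str:
    return "The baby is crying"

def dispatch(label: str) -> Optional[str]:
    """Invoke the handler for a recognized sound, if one is registered."""
    fn = handlers.get(label)
    return fn() if fn else None

print(dispatch("doorbell"))
print(dispatch("keyboard"))  # no handler registered, so nothing happens
```

As the sound library grows from 50 toward 150-200 labels, manufacturers would simply register handlers for whichever events their applications care about.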
