A new ‘eye’ may radically change how robots see

This hexapod robot recognizes its surroundings using a vision system that occupies less storage space than a single photo on your phone. Running the new system uses only 10 percent of the energy required by conventional location systems, researchers report in the June Science Robotics.

Such a low-power ‘eye’ could be extremely useful for robots involved in space and undersea exploration, as well as for drones or microrobots, such as those that examine the digestive tract, says roboticist Yulia Sandamirskaya of Zurich University of Applied Sciences, who was not involved in the study.

The system, known as LENS, consists of a sensor, a chip and a super-tiny AI model to learn and remember location. Key to the system is the chip and sensor combo, called Speck, a commercially available product from the company SynSense. Speck’s visual sensor operates “more like the human eye” and is more efficient than a camera, says study coauthor Adam Hines, a bioroboticist at Queensland University of Technology in Brisbane, Australia.

Cameras pick up a huge amount of detail, which means a large amount of data that all needs processing (left). The LENS system runs more efficiently using a vision sensor that picks up only changes in the environment (right). Adam Hines

Cameras capture everything in their visual field many times per second, even if nothing changes. Mainstream AI models excel at turning this huge pile of data into useful information. But the combo of camera and AI guzzles power. Determining location devours up to a third of a mobile robot’s battery. “It is, frankly, insane that we got used to using cameras for robots,” Sandamirskaya says.
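
For a rough sense of scale, here is a back-of-the-envelope sketch in Python of how much data a conventional camera streams even when nothing in the scene changes. The resolution, frame rate and bit depth are illustrative assumptions, not figures from the study:

    # Hypothetical camera parameters -- illustrative only.
    width, height = 640, 480   # modest VGA sensor
    fps = 30                   # frames captured per second
    bytes_per_pixel = 1        # 8-bit grayscale

    # Every pixel of every frame is transmitted, changed or not.
    bytes_per_second = width * height * fps * bytes_per_pixel
    print(f"{bytes_per_second / 1e6:.1f} MB/s")  # about 9.2 MB/s, mostly redundant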

In contrast, the human eye primarily detects changes as we move through an environment. The brain then updates the image of what we’re seeing based on those changes. Similarly, each pixel of Speck’s eyelike sensor “only wakes up when it detects a change in brightness in the environment,” Hines says, so it tends to capture important structures, like edges. The information from the sensor feeds into a computer processor with digital components that act like spiking neurons in the brain, activating only as information arrives — a type of neuromorphic computing.
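
The event-driven idea can be sketched in a few lines of Python. This toy model is a simplification of real event-camera electronics, not Speck’s actual interface: it compares two frames and emits an event only where brightness has changed past a threshold.

    import numpy as np

    def events_from_frames(prev, curr, threshold=0.15):
        # A pixel "fires" only when its log-brightness changes by more
        # than a threshold; static regions produce no data at all.
        delta = np.log1p(curr.astype(float)) - np.log1p(prev.astype(float))
        ys, xs = np.nonzero(np.abs(delta) > threshold)
        polarity = np.sign(delta[ys, xs]).astype(int)  # +1 brighter, -1 darker
        return list(zip(xs, ys, polarity))             # sparse list of events

    # A mostly static scene with one new bright edge yields only four events:
    prev = np.zeros((4, 4))
    curr = prev.copy()
    curr[:, 2] = 255.0
    print(events_from_frames(prev, curr))

Pointed at an unchanging scene, such a sensor sends essentially nothing, which is where the power savings come from.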

The sensor and chip work together with an AI model to process environmental data. The AI model developed by Hines’ team is fundamentally different from popular ones used for chatbots and the like. It learns to recognize places not from a huge pile of visual data but by analyzing edges and other key visual information coming from the sensor.
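
As an illustration of the general approach — a hypothetical sketch, not the team’s actual spiking-network model — a place can be summarized by pooling sparse events into a tiny histogram and matching new views against stored ones:

    import numpy as np

    def place_descriptor(events, sensor=(128, 128), grid=(8, 8)):
        """Pool sparse (x, y, polarity) events into a small grid histogram.
        A hypothetical stand-in for a learned, compact place signature."""
        hist = np.zeros(grid)
        for x, y, _ in events:
            gy = min(int(y) * grid[0] // sensor[1], grid[0] - 1)
            gx = min(int(x) * grid[1] // sensor[0], grid[1] - 1)
            hist[gy, gx] += 1
        norm = np.linalg.norm(hist)
        return (hist / norm if norm else hist).ravel()

    def recognize(query_events, stored_descriptors):
        """Return the index of the best-matching stored place (cosine similarity)."""
        q = place_descriptor(query_events)
        scores = [float(q @ d) for d in stored_descriptors]
        return int(np.argmax(scores))

Each descriptor in this sketch is just 64 numbers, which hints at how an entire map of places can occupy less storage than a single photo.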

This combo of a neuromorphic sensor, processor and AI model gives LENS its low-power superpower. “Radically new, power-efficient solutions for … place recognition are needed, like LENS,” Sandamirskaya says.
