As humans, we are blessed with extraordinary biological sensors, such as our eyes and ears, coupled with an incredible processor in the form of our brain. The creators of machine-vision systems began by trying to replicate these abilities, using imaging sensors operating in the visible spectrum coupled with artificial intelligence (AI) and machine-learning (ML) technologies to provide object detection and recognition capabilities. The proficiencies of these systems can be further enhanced by employing dual sensors to provide binocular vision and depth perception.
The problem is that, as wonderful as traditional machine-vision systems are, they suffer from the same limitations as the human eye: they are restricted to the visible spectrum and perform poorly in low light and in inclement weather such as rain, snow, and fog. Imagine the possibilities if these machine-vision systems could overcome these limitations. Here, we will explore the challenges associated with conventional imaging systems, as well as a solution for future imaging applications such as people tracking, volumetric measurement, robotics, and more.
A downside of conventional and thermal sensors is that they aren't particularly effective at determining distance or at tracking multiple moving objects as they pass in front of or behind each other. One option to overcome this limitation is to augment conventional and thermal imaging sensors with one or more light detection and ranging (LiDAR) sensors, a technology also known as laser imaging, detection, and ranging.
Conventional imaging systems are classed as passive because they detect whatever electromagnetic energy, such as visible light or infrared, reaches them from the outside world. By comparison, LiDAR is categorized as an active remote sensing system because it generates its own light using a rapidly firing laser. A LiDAR system measures the time it takes for the emitted light to travel to any objects in front of it and return, and these round-trip times are used to calculate the distances traveled.
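To make that distance calculation concrete, here is a minimal sketch (plain Python, no camera required) of the time-of-flight arithmetic: the laser pulse travels out and back, so the range to the target is half the round-trip path length.

```python
# Minimal time-of-flight arithmetic: the LiDAR measures the round-trip time
# of a laser pulse, so the one-way distance is half the total path length.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Convert a measured round-trip time into a one-way distance in meters."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after roughly 33.3 nanoseconds corresponds to a target
# about 5 meters away.
print(distance_from_round_trip(33.3e-9))  # ~4.99 meters
```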
In much the same way that a standard imaging system creates a 2D array of pixels (picture elements), a LiDAR imaging system creates a 3D array of voxels (volume elements). The narrow laser beam employed by the LiDAR can detect and map physical features at very high resolution. In fact, LiDAR dramatically outperforms standard stereo-depth cameras in applications where high-resolution, high-accuracy depth data is required.
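To illustrate the pixel-to-voxel idea, the following sketch turns a 2D depth map into one 3D point per pixel using a standard pinhole camera model; the focal lengths and principal point below are placeholder values for illustration, not actual L515 calibration data.

```python
import numpy as np

def deproject_depth_map(depth_m: np.ndarray, fx: float, fy: float,
                        cx: float, cy: float) -> np.ndarray:
    """Convert an HxW depth map (meters) into an HxWx3 array of 3D points.

    Standard pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy, z = depth.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.dstack((x, y, depth_m))

# A flat wall 2 meters away, using placeholder intrinsics (not L515 calibration).
points = deproject_depth_map(np.full((768, 1024), 2.0),
                             fx=600.0, fy=600.0, cx=512.0, cy=384.0)
print(points.shape)  # (768, 1024, 3)
```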
Depending on the target application, designers can use AI/ML systems in conjunction with various combinations of sensors, including conventional visible-spectrum cameras, thermal (infrared) imaging sensors, and LiDAR sensors.
Let’s take a look at a possible use case. Consider the COVID-19 pandemic. One symptom of someone infected with the coronavirus is an elevated temperature. Designers could augment a conventional machine-vision system with thermal and LiDAR sensors to detect potential carriers in an environment such as the travelers’ lounge in an airport.
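As a purely illustrative sketch of that fusion logic (hypothetical detection records, not any particular sensor API), the system might flag any tracked person whose thermal reading exceeds a chosen threshold, while the LiDAR-derived range keeps overlapping travelers distinct:

```python
from dataclasses import dataclass

FEVER_THRESHOLD_C = 38.0  # illustrative cutoff only, not medical guidance

@dataclass
class TrackedPerson:
    track_id: int          # identity maintained across frames by the vision system
    temperature_c: float   # skin temperature estimated from the thermal sensor
    distance_m: float      # range to the person reported by the LiDAR

def flag_elevated_temperatures(people: list[TrackedPerson]) -> list[int]:
    """Return the track IDs whose thermal reading meets or exceeds the threshold."""
    return [p.track_id for p in people if p.temperature_c >= FEVER_THRESHOLD_C]

# Two travelers at different ranges; the LiDAR depth keeps them distinct even
# if they overlap in the 2D image.
lounge = [TrackedPerson(1, 36.8, 4.2), TrackedPerson(2, 38.4, 6.7)]
print(flag_elevated_temperatures(lounge))  # [2]
```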
Intel’s RealSense technologies offer a wide variety of vision-based solutions designed to give your designs the ability to understand the world in 3D. The latest addition to the family is the Intel® RealSense™ LiDAR Camera L515 (Figure 1), which has the bragging rights of being the world’s smallest—61mm in diameter, 26mm in depth—and most power-efficient high-resolution LiDAR that’s capable of capturing tens of millions of voxels per second.
Figure 1: The Intel® RealSense™ LiDAR Camera L515 has a diameter smaller than a tennis ball. (Source: Intel)
Based on a revolutionary solid-state LiDAR depth technology designed for indoor use, the L515 is perfect for applications that require depth data at high resolution and high accuracy. With a range of 0.25 meters to 9 meters, the L515 provides over 23 million accurate voxels per second, with a depth resolution of 1024 x 768 at 30 frames per second (fps). For applications requiring the combination of traditional machine vision and LiDAR, the L515 also features a full high-definition (FHD) RGB video camera sensor, along with additional sensors such as a MEMS accelerometer and a MEMS gyroscope (Figure 2).
Figure 2: Exploded view of the Intel® RealSense™ LiDAR Camera L515 (Source: Intel)
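As a minimal sketch using pyrealsense2, the SDK's Python wrapper, the depth and RGB streams could be requested at the figures quoted above; the exact stream profiles supported by a given unit should be confirmed against the SDK.

```python
import pyrealsense2 as rs

# Request the depth stream at 1024 x 768 @ 30 fps plus the FHD RGB stream,
# matching the L515 specifications quoted above.
config = rs.config()
config.enable_stream(rs.stream.depth, 1024, 768, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1920, 1080, rs.format.bgr8, 30)

pipeline = rs.pipeline()
profile = pipeline.start(config)
print(profile.get_device().get_info(rs.camera_info.name))
pipeline.stop()
```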
Furthermore, the L515 boasts an internal vision processor that performs tasks such as motion blur artifact reduction, thereby offloading such duties from the host processor. The lightweight L515 consumes less than 3.5 watts of power, making it the most power-efficient high-resolution LiDAR camera on the market. The combination of small size and low power consumption makes the L515 ideal for use in handheld products and small autonomous-robot applications.
If you are interested in taking advantage of the L515 in your own designs, Intel’s open-source RealSense software development kit (SDK) 2.0 is both cross-platform and operating system independent. In addition to Windows, Linux, and Android, you can also install the SDK 2.0 on Jetson TX2, Raspberry Pi 3, and macOS platforms.
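As an example of how little code is involved, here is a minimal capture loop using the SDK's pyrealsense2 Python wrapper (installable via pip as pyrealsense2); it starts the camera's default streams and reads the depth, in meters, at the center of the image.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()  # start the device's default streams
try:
    for _ in range(30):
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        # Read the range, in meters, at the center pixel of the depth image.
        x, y = depth.get_width() // 2, depth.get_height() // 2
        print(f"Distance at image center: {depth.get_distance(x, y):.3f} m")
finally:
    pipeline.stop()
```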
The L515 uses the same SDK as all other current-generation Intel RealSense devices, thereby allowing an easy transition from any of Intel's other 3D cameras. The idea here is to develop once and then deploy on any current or future Intel RealSense depth device. Who among us could argue with a philosophy like that?
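That develop-once philosophy shows up in practice: a sketch like the following enumerates whatever RealSense devices are attached through the same SDK calls, whether the camera is an L515 or one of Intel's other 3D cameras.

```python
import pyrealsense2 as rs

# The same enumeration code path sees any attached RealSense device,
# so code written against one camera also finds the L515.
ctx = rs.context()
for dev in ctx.query_devices():
    name = dev.get_info(rs.camera_info.name)
    serial = dev.get_info(rs.camera_info.serial_number)
    print(f"Found {name} (serial {serial})")
```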
Potential applications for the L515 can get any designer's head buzzing. LiDAR has traditionally been associated with autonomous vehicles and other outdoor applications, but the L515 opens the floodgates to all sorts of possibilities, including people tracking, volumetric measurement, robotics, 3D scanning, and more. By pairing thermal imaging technologies with LiDAR technology, designers can overcome the limitations commonly associated with conventional imaging systems.
How about you? What types of systems can you envisage deploying with the Intel® RealSense™ LiDAR Camera L515?
Clive "Max" Maxfield is a freelance technical consultant and writer. Max received his BSc in Control Engineering in 1980 from Sheffield Hallam University, England and began his career as a designer of central processing units (CPUs) for mainframe computers. Over the years, Max has designed everything from silicon chips to circuit boards and from brainwave amplifiers to Steampunk Prognostication Engines (don't ask). He has also been at the forefront of Electronic Design Automation (EDA) for more than 35 years.
Well-known throughout the embedded, electronics, semiconductor, and EDA industries, Max has presented papers at numerous technical conferences around the world, including North and South America, Europe, India, China, Korea, and Taiwan. He has given keynote presentations at the PCB West conference in the USA and the FPGA Forum in Norway. He's also been invited to give guest lectures at several universities in the USA, Sheffield Hallam University in the UK, and Oslo University in Norway. In 2001, Max "shared the stage" at a conference in Hawaii with former Speaker of the House, "Newt" Gingrich.
Max is the author and/or co-author of a number of books, including Designus Maximus Unleashed (banned in Alabama), Bebop to the Boolean Boogie (An Unconventional Guide to Electronics), EDA: Where Electronics Begins, FPGAs: Instant Access, and How Computers Do Math.