Humans manage to drive acceptably using only two eyes and two ears to sense the world around them. Autonomous vehicles rely on an altogether more complex suite of sensors, typically radar, LiDAR, ultrasonic sensors, and cameras, all working in concert to detect the road conditions ahead.
While humans are quite cunning and difficult to deceive, our robotic driver friends are less hardy. Some researchers worry that LiDAR sensors could be spoofed, hiding obstacles from driverless cars and causing collisions, or worse.
Where Did It Go?
LiDAR is so named because it is the light-based equivalent of radar. Unlike radar, however, the term is still generally treated as an acronym rather than a word in its own right. The technology sends out laser pulses and captures the light reflected back from the environment. Pulses bouncing off more distant objects take longer to return, letting the sensor determine the range of objects around it. LiDAR is often considered the benchmark sensor for autonomous driving, thanks to its greater accuracy and reliability compared to radar for object detection in automotive environments, and it delivers detailed depth data that simply isn't available from an ordinary 2D camera.
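The ranging principle is simple time-of-flight arithmetic. As a quick illustration (not tied to any particular sensor), the range to a reflector follows from the round-trip time of a pulse travelling at the speed of light:

```python
# Sketch of the time-of-flight principle behind LiDAR ranging.
C = 299_792_458.0  # speed of light, m/s

def echo_range(round_trip_s: float) -> float:
    """Distance to a reflector, given the pulse's round-trip time in seconds."""
    # The pulse travels out and back, so halve the total path length.
    return C * round_trip_s / 2.0

# A pulse returning after ~66.7 nanoseconds reflected off something ~10 m away.
print(round(echo_range(66.7e-9), 2))  # → 10.0
```

This is also why spoofing is possible at all: anything that lands a well-timed pulse on the detector is indistinguishable from a genuine echo at the corresponding range.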
A new research paper has demonstrated a counterintuitive method of tricking LiDAR sensors: using a laser to selectively mask objects so that they cannot be “seen” by the sensor. The paper calls this a “physical removal attack,” or PRA.
The theory of the attack is based on how LiDAR sensors work. Typically, these sensors prioritize stronger reflections over weaker ones, so a strong signal sent by an attacker will win out over a weaker genuine reflection from the environment. LiDAR sensors, and the autonomous driving frameworks built on top of them, also typically reject detections below a certain minimum distance from the sensor, usually somewhere in the range of 50 mm to 1000 mm.
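A toy model makes the two behaviors the attack exploits concrete. The strongest-return priority and the minimum-range cutoff below are assumed behaviors based on the paper's description, not any vendor's actual firmware:

```python
# Toy model of per-pulse return handling in a LiDAR pipeline.
MIN_RANGE_M = 0.5  # cutoffs typically fall roughly between 50 mm and 1000 mm

def resolve_return(echoes):
    """Pick the strongest echo for one pulse, then apply the min-range cutoff.

    `echoes` is a list of (range_m, intensity) tuples. Returns a range in
    metres, or None if the winning echo is filtered out as too close.
    """
    if not echoes:
        return None
    range_m, _intensity = max(echoes, key=lambda e: e[1])
    return range_m if range_m >= MIN_RANGE_M else None

# A genuine echo from an obstacle at 12 m loses to a bright spoofed echo
# placed at 0.2 m, which the min-range filter then discards entirely.
print(resolve_return([(12.0, 30), (0.2, 250)]))  # → None
print(resolve_return([(12.0, 30)]))              # → 12.0
```

The key point is that either check alone would be harmless; it is the combination that lets an attacker turn a real detection into nothing.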
The attack works by firing infrared laser pulses that mimic the real echoes the LiDAR expects to receive. The pulses are timed to match the firing sequence of the victim sensor, letting the attacker control the perceived location of the spoofed points. Because the spoofed pulses are stronger than the true echoes from an object in the field of view, the sensor generally discards the genuine returns. On its own, this would create a spoofed object very close to the sensor; however, since many LiDAR sensors reject echoes that return too quickly, the sensor will likely throw the spoofed points out entirely. Even if the sensor itself does not, the filtering software running on its point cloud output may do so instead. The net effect is that the LiDAR reports no valid point cloud data in an area where it should be detecting an obstacle.
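Put together over a whole scan, the result is a silent hole in the point cloud. The sketch below models this end-to-end on a single horizontal scan line; the geometry, ranges, and attacked sector are made up for illustration:

```python
# Illustrative effect of a physical removal attack on one 360-point scan line.
MIN_RANGE_M = 0.5

def scan(attacked_sector=None):
    """Return {azimuth_deg: range_m} for a toy 360-degree scan.

    A 'wall' sits at 20 m everywhere; a 'car' sits at 8 m between 170 and
    190 degrees. Azimuths inside attacked_sector receive a stronger spoofed
    echo at 0.2 m, which the min-range filter then drops entirely.
    """
    cloud = {}
    for az in range(360):
        true_range = 8.0 if 170 <= az <= 190 else 20.0
        if attacked_sector and attacked_sector[0] <= az <= attacked_sector[1]:
            # Spoofed echo outshines the true one, then fails the range check,
            # so this azimuth produces no point at all.
            continue
        cloud[az] = true_range
    return cloud

clean = scan()
attacked = scan(attacked_sector=(165, 195))
print(len(clean), len(attacked))      # → 360 329
print(175 in clean, 175 in attacked)  # → True False
```

Note that the attacked scan doesn't contain a fake obstacle, just an absence of data where the car used to be, which is exactly what downstream perception code tends to treat as empty road.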
The attack requires some know-how, but is surprisingly practical to pull off. It takes only a little research into the specific LiDAR models used on autonomous vehicles to build a suitable spoofing device. The attack even works when the attacker fires false echoes at the LiDAR from an angle, such as from the side of the road.
This has dangerous implications for autonomous driving systems that rely on LiDAR data. The technique could let an adversary hide obstacles from a self-driving car: pedestrians at a crosswalk could be hidden from view, as could cars stopped at a traffic light. If the self-driving car does not “see” an obstacle ahead of it, it may simply drive forward – right into, or through, whatever is there. The technique makes it harder to hide close objects than distant ones, but hiding an object for even a few seconds may leave an autonomous vehicle too little time to stop once it finally detects the hidden obstacle.
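A back-of-the-envelope check shows why a few seconds of blindness matters. The numbers below are assumed for illustration, not taken from the paper:

```python
# Rough stopping arithmetic for a car whose LiDAR is suppressed for 3 seconds.
speed_ms = 100 / 3.6   # 100 km/h expressed in m/s (~27.8 m/s)
hidden_s = 3.0         # how long the obstacle stays hidden
decel = 7.0            # firm braking deceleration, m/s^2

travelled_blind = speed_ms * hidden_s      # distance covered while blind
braking = speed_ms ** 2 / (2 * decel)      # distance needed to stop after that
print(round(travelled_blind, 1), round(braking, 1))  # → 83.3 55.1
```

In other words, under these assumptions the car needs nearly 140 m of clear road between where the attack began and the obstacle, so an object hidden until the last moment can easily be inside the braking distance when it reappears.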
Aside from erasing objects from a LiDAR’s view, other spoofing attacks are also possible. Previous research has involved tricking LiDAR sensors into seeing phantom objects. It’s remarkably easy to do – simply transmit laser pulses at a victim LiDAR that indicate a wall or other obstacle ahead.
The research team notes that there are some defenses against this technique. The attack tends to carve out an angular slice of the point cloud reported by the LiDAR, and detecting that deviation can indicate a suppression attack is taking place. Alternatively, there are methods that compare the shadows cast in the scene against those that should be cast by the objects detected (or not detected) in the LiDAR point cloud.
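The missing-slice signature lends itself to a simple check. The detector below is one possible sketch of that defense idea, not the researchers' actual method:

```python
# Flag suspiciously wide angular gaps in a 360-degree scan.
def find_gaps(azimuths, max_gap_deg=5):
    """Return (start, end) pairs of angular gaps wider than max_gap_deg.

    `azimuths` is a sorted list of degrees at which the sensor received a
    return; the scan is treated as circular, so the wrap-around from the
    last azimuth back to the first is checked too.
    """
    gaps = []
    for a, b in zip(azimuths, azimuths[1:] + [azimuths[0] + 360]):
        if b - a > max_gap_deg:
            gaps.append((a, b % 360))
    return gaps

# A scan missing every return between 165 and 195 degrees yields one wide gap.
seen = [az for az in range(360) if not 165 <= az <= 195]
print(find_gaps(seen))  # → [(164, 196)]
```

A real implementation would have to distinguish attack-induced gaps from benign ones, such as open sky or highly absorbent surfaces, which is presumably why the shadow-consistency check exists as a second line of defense.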
Overall, protection against spoofing attacks could become important as self-driving cars grow more common. At the same time, it is worth considering what is and is not realistic to defend against. Human drivers, for example, are susceptible to crashes when their cars are pelted with eggs or rocks thrown from an overpass. Carmakers have not responded by designing anti-rock lasers and super wipers to clear away egg smears; instead, laws are enforced to discourage such attacks. A similar approach may simply be extended to bad actors lurking by the side of the highway with complicated laser equipment. In all likelihood, a mix of both approaches will be required.