MIT researchers create new self-driving system that can steer in low visibility settings, including fog and snow

  • Researchers from MIT are testing a new self-driving car system for bad weather
  • Instead of lidar and cameras, it uses a sensor system that reads the ground directly below and around the car rather than in front of it
  • The system works in tandem with GPS data, but struggles in rainy conditions

Researchers from MIT have developed a new self-driving car system capable of navigating in low visibility settings, including in fog and snow.

The system relies on Localizing Ground Penetrating Radar (LGPR), which takes readings of the shape and composition of the road directly below and around the car using electromagnetic pulses.

Other self-driving car systems use a combination of lidar, radar, and cameras to develop a real-time topographical model of where the car is in space.

A team of researchers at MIT have created a new self-driving car system capable of navigating in low-visibility settings, including fog and snow

These systems are generally reliable but have been vulnerable to visual tricks like fake road signs and lane markers, and can become significantly less reliable in bad weather.

The LGPR system aims to sidestep these vulnerabilities by focusing on the road itself rather than the open space in front of the car.

To work, the LGPR system needs access to GPS data about the roads it’s travelling on, as well as a reference set of LGPR data to compare against the live sensor readings from the car.

To do this, the MIT team sent out a human-driven car to build a reference set of LGPR data, which catalogues small changes in road height, potholes and other minute irregularities that form a fingerprint-like textural map of the road.
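The comparison itself amounts to matching the live radar 'fingerprint' against the stored one. As a rough, one-dimensional illustration of the idea (a sketch under assumed names and data shapes, not the team's actual code), a live scan can be slid along the reference map and scored at each offset, with the best-scoring offset giving the car's position:

```python
import numpy as np

def localize(live_scan, reference_map):
    """Find where a live LGPR scan best matches a prebuilt reference map.

    live_scan:     1D array of subsurface readings from the moving car
    reference_map: 1D array recorded earlier by the human-driven mapping car

    Illustrative sketch only: the real system correlates 2D radar images
    and runs at driving speed; here we brute-force a 1D correlation.
    """
    n = len(live_scan)
    best_offset, best_score = 0, -np.inf
    for offset in range(len(reference_map) - n + 1):
        window = reference_map[offset:offset + n]
        # High correlation means the road's subsurface 'fingerprint'
        # at this stretch of the map matches what the car sees now.
        score = np.corrcoef(live_scan, window)[0, 1]
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset, best_score
```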

‘If you or I grabbed a shovel and dug it into the ground, all we’re going to see is a bunch of dirt,’ MIT’s Teddy Ort told Engadget.

‘But LGPR can quantify the specific elements there and compare that to the map it’s already created, so that it knows exactly where it is, without needing cameras or lasers.’

The car uses ‘Localizing Ground Penetrating Radar’ or LGPR, which takes a detailed reading of the ground directly below the car, not out in front of it

The live LGPR data is compared against stored data that was previously mapped, helping the car maintain its exact position on the road without the possibility of being tricked by fake road signs or old lane markers

The system combines GPS data with LGPR data about the composition of the exact stretch of road being driven over to make navigational decisions, such as how fast to travel and when to speed up or slow down.
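As a rough sketch of what such a fusion step could look like (the blending heuristic, threshold and function names here are assumptions for illustration, not details from the MIT paper):

```python
def fuse_position(gps_pos, lgpr_pos, lgpr_score, threshold=0.8):
    """Blend a coarse GPS fix with the LGPR map-matched position.

    gps_pos, lgpr_pos: position estimates along the road (e.g. metres)
    lgpr_score:        match confidence from the map comparison (0..1)

    Assumed heuristic: trust a strong radar fingerprint match outright,
    since it is far more precise than GPS; otherwise blend the two.
    """
    if lgpr_score >= threshold:
        return lgpr_pos  # confident fingerprint match wins
    w = max(lgpr_score, 0.0)
    return w * lgpr_pos + (1.0 - w) * gps_pos
```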

The maps used by the LGPR system are also significantly smaller than the full 3D maps used by traditional self-driving systems.

‘Intuitively, these are smaller because the sensor measures only a thin slice directly below the vehicle, while typical 3D maps contain a detailed view of the entire environment including surrounding buildings and vegetation,’ the team say in a paper detailing their research.

‘Thus, LGPR maps can provide precise localization in changing surface conditions without requiring as much storage space.’
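The scaling argument is easy to see with back-of-the-envelope numbers (all of the figures below are assumptions chosen for illustration, not values from the paper): a map that stores only a thin slice beneath the vehicle grows far more slowly per kilometre than a dense 3D point cloud of the whole scene.

```python
# Assumed figures, purely to illustrate the storage comparison.
METRES = 1_000                         # one kilometre of road
lgpr_mb = METRES * 100 * 4 / 1e6       # ~100 samples per slice, 4-byte floats
cloud_mb = METRES * 50_000 * 12 / 1e6  # ~50k lidar points/m, 12 bytes each
print(f"LGPR slice map : {lgpr_mb:.1f} MB per km")
print(f"3D point cloud : {cloud_mb:.1f} MB per km")
```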

The team admit the system is still early in development and likely many years away from being road-ready.

Current testing has been limited to low speeds on private country roads near the university.

Researchers acknowledge the system performs suboptimally in heavy rain, potentially because water soaking into the ground alters the shape and composition of the road in ways too subtle for the naked eye to notice but significant enough to throw off the LGPR sensors.

WHAT ARE THE SIX LEVELS OF SELF-DRIVING AUTOMATION?

Level Zero – The full-time performance by the human driver of all aspects of the dynamic driving task, even when enhanced by warning or intervention systems.

Level One – A small amount of control is accomplished by the system, such as adaptive braking if a car gets too close.

Level Two – The system can control the speed and direction of the car allowing the driver to take their hands off temporarily, but they have to monitor the road at all times and be ready to take over.

Level Three – The driver does not have to monitor the system at all times in certain cases, such as on highways, but must be ready to resume control if the system requests it.

Level Four – The system can cope with all situations automatically within a defined use case, but it may not be able to cope with all weather or road conditions. The system will rely on high-definition mapping.

Level Five – Full automation. The system can cope with all weather, traffic and lighting conditions. It can go anywhere, at any time, in any conditions.

Tesla’s Model 3 Sedan – one of the world’s most advanced road-legal cars with autonomous elements – currently operates at Level Two autonomy. It is equipped for Level Three autonomy, which may be introduced in a future software update

 
