Learning-to-Drive (L2D) Challenge

This workshop will host a challenge track on Learning to Drive with Camera Videos, Navigation Maps and a Vehicle's CAN Bus Signals. Specifically, the driving model learns to predict, given a set of sensor inputs, driving maneuvers consisting of the steering wheel angle and the vehicle speed at a point in the future. Participants are allowed and encouraged to explore the various sensor modalities supplied through our challenge dataset to achieve this goal. Submissions are evaluated on their mean squared error (MSE) and on the novelty of the method. We will also provide the results of several published models for comparison. Awards will be given to the challenge winners.
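
As a rough illustration of the MSE scoring, the sketch below computes a per-target error for predicted steering wheel angle and vehicle speed. It is only an illustrative sketch: the official evaluation server defines the exact prediction horizons, units, and any weighting between the two targets, none of which are specified by this snippet.

```python
import numpy as np

def l2d_mse(pred_angle, true_angle, pred_speed, true_speed):
    """Per-target mean squared error for steering wheel angle and vehicle speed.

    Illustrative only: the challenge's official scoring may weight or combine
    the two targets differently.
    """
    angle_mse = float(np.mean((np.asarray(pred_angle) - np.asarray(true_angle)) ** 2))
    speed_mse = float(np.mean((np.asarray(pred_speed) - np.asarray(true_speed)) ** 2))
    return angle_mse, speed_mse

# Example: two predicted future maneuvers against CAN-bus ground truth.
print(l2d_mse([1.5, -0.2], [1.0, 0.0], [30.0, 32.0], [31.0, 30.0]))
```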

Our challenge dataset will have a wide range of sensors available (one possible per-sample layout is sketched after this list):
  • Multiple camera configurations, ranging from a single front-facing camera to multiple views providing full surround vision.
  • A standard industrial map with over 21 common road attributes, e.g. distance to road furniture, road curvature, intersection geometry, and speed limits.
  • Visual and numeric route planning modules.
  • Odometry from a 3-axis accelerometer and gyroscope (IMU), plus GPS coordinates.
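
To make the modalities above concrete, here is one hypothetical per-sample layout. Every field name, unit, and file path in it is an assumption for illustration only and may differ from the released challenge dataset.

```python
# Hypothetical layout of one training sample; the actual schema is defined
# by the released dataset, not by this sketch.
sample = {
    "cameras": {                       # one or more RGB views
        "front": "front/000123.jpg",
        "left": "left/000123.jpg",
        "right": "right/000123.jpg",
    },
    "map_attributes": {                # subset of the road attributes
        "speed_limit_kmh": 50.0,
        "road_curvature": 0.002,
        "distance_to_intersection_m": 85.0,
    },
    "route_plan": "route/000123.png",  # visual route planner rendering
    "imu": {"accel": [0.1, 0.0, 9.8], "gyro": [0.0, 0.01, 0.0]},
    "gps": {"lat": 47.3769, "lon": 8.5417},
    "targets": {                       # CAN-bus ground truth at a future time step
        "steering_wheel_angle_deg": -3.2,
        "vehicle_speed_kmh": 42.5,
    },
}
```
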
This challenge builds on the work of these two publications:

The challenge is now open. Please register your team here to participate! There will be awards for the challenge winners!

July 16, 2019:
Unfortunately, the CodaLab server has encountered a problem and is temporarily unavailable. We are working on an alternative solution and will bring the challenge back in a few days!