
Learning-to-Drive (L2D) Challenge

This workshop will host a challenge track on Learning to Drive with Camera Videos, Navigation Maps and a Vehicle’s CAN Bus Signals. Specifically, a driving model must learn to predict, given a set of sensor inputs, driving maneuvers consisting of the steering wheel angle and vehicle speed at a point in the future. Participants are allowed and encouraged to explore the various sensor modalities supplied in our challenge dataset to achieve this goal. Submissions are evaluated on their mean squared error (MSE) and on the novelty of the method. We will also provide the results of several published models for comparison. Awards will be given to the challenge winners.
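To make the scoring concrete, below is a minimal sketch of how an MSE score can be computed over predicted steering wheel angles and vehicle speeds. The array names, units, and the reporting of the two MSE values separately are illustrative assumptions, not the official evaluation script.

    import numpy as np

    def mse(pred: np.ndarray, target: np.ndarray) -> float:
        # Mean squared error averaged over all samples.
        return float(np.mean((pred - target) ** 2))

    # Hypothetical predictions and ground truth at the future time point:
    # steering wheel angle (degrees) and vehicle speed (km/h).
    pred_angle = np.array([1.2, -0.5, 3.1])
    true_angle = np.array([1.0, -0.4, 2.8])
    pred_speed = np.array([48.0, 52.5, 47.0])
    true_speed = np.array([50.0, 51.0, 46.5])

    print("steering MSE:", mse(pred_angle, true_angle))
    print("speed MSE:", mse(pred_speed, true_speed))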

Our challenge dataset will have a wide range of sensors available (a toy model sketch follows the list):
  • Multiple camera configurations, ranging from a single front-facing camera to multiple views providing full surround vision.
  • A standard industrial map with over 21 common road attributes, such as distance to road furniture, road curvature, intersection geometry, and speed limits.
  • Visual and numeric route planning modules.
  • Odometry from a 3-axis accelerometer and gyroscope (IMU), plus GPS coordinates.
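As a rough illustration of how these modalities might be combined, here is a minimal PyTorch sketch of a late-fusion model that concatenates a camera feature vector with numeric map attributes and regresses the two target values. The feature dimensions, class name, and architecture are purely illustrative assumptions, not a reference implementation.

    import torch
    import torch.nn as nn

    class ToyDrivingModel(nn.Module):
        # Hypothetical late fusion: concatenate camera features with
        # numeric map/route attributes, then regress the two targets.
        def __init__(self, cam_dim=512, map_dim=21):
            super().__init__()
            self.head = nn.Sequential(
                nn.Linear(cam_dim + map_dim, 128),
                nn.ReLU(),
                nn.Linear(128, 2),  # [steering wheel angle, speed]
            )

        def forward(self, cam_feats, map_feats):
            return self.head(torch.cat([cam_feats, map_feats], dim=-1))

    # Toy batch of 4 samples with assumed feature sizes.
    model = ToyDrivingModel()
    pred = model(torch.randn(4, 512), torch.randn(4, 21))
    target = torch.zeros(4, 2)                    # placeholder labels
    loss = nn.functional.mse_loss(pred, target)   # matches the MSE scoring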
This challenge builds on the work of the following two publications:

The L2D challenge is back! The challenge has moved from CodaLab to AIcrowd after a system crash on CodaLab. Please find all registration details here to participate. Awards will be given to the challenge winners!