NeRF4ADR:
Neural Fields for Autonomous Driving and Robotics

ICCV 2023 Workshop, Paris, France
October 3rd (Tuesday), 2023
Live Zoom Link
📍 Location: Room S03, Paris Convention Centre

Introduction

Neural fields, or coordinate-based neural networks, have emerged as versatile representations for a wide range of signals, including 3D geometry, images, and sound. They have driven rapid progress in computer vision and computer graphics, particularly in novel view synthesis and scene reconstruction. Their adoption in autonomous driving and robotics, however, is still in its early stages, raising an intriguing question: can these representations inform continuous decision-making in autonomous driving and robotics? Recent efforts provide promising evidence. For instance, 3D neural fields enhance multi-view consistency, which in turn significantly improves scene understanding for robotics and autonomous driving.

To accelerate this line of work, the workshop aims to serve as a confluence of researchers from machine learning, computer vision, computer graphics, autonomous driving, and robotics. Its objectives are to exchange ideas, discuss the current progress of neural fields in autonomous driving and robotics, identify areas in these fields that are ripe for the adoption of neural fields, and examine the limitations and potential failure cases of neural fields. The workshop also gives the ICCV community an opportunity to engage with this exciting and growing research area.
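
For readers new to the topic, the core idea fits in a few lines of code: a neural field is a network queried at continuous coordinates and trained by regressing the signal values observed at those coordinates. The sketch below (PyTorch; the class and variable names are our own illustrative choices, not code from any workshop paper) fits an MLP with NeRF-style sinusoidal positional encoding to a 2D RGB signal.

    import torch
    import torch.nn as nn

    class NeuralField(nn.Module):
        """MLP mapping continuous coordinates (here 2D positions) to signal values (here RGB)."""
        def __init__(self, in_dim=2, hidden=256, out_dim=3, n_freqs=10):
            super().__init__()
            self.n_freqs = n_freqs  # number of positional-encoding frequencies
            enc_dim = in_dim * 2 * n_freqs
            self.mlp = nn.Sequential(
                nn.Linear(enc_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, out_dim),
            )

        def encode(self, x):
            # Sinusoidal positional encoding: lifts low-dimensional coordinates
            # to higher frequencies so the MLP can represent fine detail.
            freqs = (2.0 ** torch.arange(self.n_freqs, device=x.device)) * torch.pi
            angles = x[..., None] * freqs                  # (..., in_dim, n_freqs)
            enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
            return enc.flatten(-2)                         # (..., in_dim * 2 * n_freqs)

        def forward(self, coords):
            return self.mlp(self.encode(coords))

    # Fitting the field is plain supervised regression against observed samples:
    field = NeuralField()
    coords = torch.rand(4096, 2)    # random query locations in [0, 1]^2
    targets = torch.rand(4096, 3)   # stand-in for sampled RGB values
    loss = ((field(coords) - targets) ** 2).mean()
    loss.backward()

Neural radiance fields extend the same recipe to 3D by predicting density and color along camera rays and supervising through volume rendering.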

TL;DR

Neural fields have driven significant advances in computer vision and computer graphics but remain largely unexplored in autonomous driving and robotics. This workshop aims to bring together experts from a range of fields to discuss the potential of neural fields in these domains, exchange ideas, identify limitations, and examine failure cases.



Call for Papers

The ICCV 2023 Neural Fields for Autonomous Driving and Robotics Workshop (neural-fields.xyz) will be held on October 3, 2023, at the Paris Convention Centre in Paris, France. The workshop aims to bring together researchers from machine learning, computer vision, computer graphics, autonomous driving, and robotics to exchange ideas and discuss the current progress of neural fields in autonomous driving and robotics.

This workshop is intended to:

  • Explore areas in autonomous driving and robotics that are well suited to neural fields
  • Highlight the limitations of neural fields and discuss instances where neural fields have failed
  • Provide an opportunity for the ICCV community to discuss this exciting and growing area of research

We welcome paper submissions on all topics related to neural fields for autonomous driving and robotics, including but not limited to:

  • Neural fields for autonomous driving
  • Neural fields for scene reconstruction
  • Neural fields for robotic perception
  • Neural fields for decision making
  • Self-supervised learning with neural fields
  • Generalizable neural fields for robotics
  • Neural fields as data representations
  • Neural SLAM

We look forward to your contributions to this growing dialogue on neural fields for autonomous driving and robotics.

Style and Author Instructions

  • Paper Length: Please use the official ICCV 2023 template and limit submissions to 4-8 pages, excluding references.
  • Dual Submissions: The workshop is non-archival. In addition, in light of the new single-track policy of ICCV 2023, we strongly encourage authors of papers accepted to the ICCV 2023 main conference to also present them at our workshop.
  • Presentation Forms: All accepted papers will be presented as posters during the workshop; selected papers will additionally be given oral presentations.

All submissions should be anonymized. Papers longer than 4 pages (excluding references) will be reviewed as long papers, and papers longer than 8 pages (excluding references) will be rejected without review. Supplementary material is optional; supported formats are PDF, MP4, and ZIP. All papers that have not previously been presented at a major conference will be peer-reviewed by three experts in the field in a double-blind manner. If you are submitting a previously accepted conference paper, please also attach a copy of the acceptance notification email to the supplementary material.

All submissions should adhere to the ICCV 2023 author guidelines.

Submission Portal: https://cmt3.research.microsoft.com/NeRF4ADR2023/Submission/Index

Paper Review Timeline:

Paper submission and supplementary material deadline: August 25, 2023 (AoE)
Notification to authors: September 11, 2023 (AoE)
Camera-ready deadline: September 30, 2023 (AoE)



Invited Speakers

Jiajun Wu

Stanford University

Jiajun Wu is an Assistant Professor of Computer Science at Stanford University, affiliated with the Stanford AI Lab (SAIL) and the Stanford Vision and Learning Lab (SVL). His research focuses on machine perception, reasoning, and interaction with the physical world, drawing inspiration from human cognition. His current research topics include Physical Scene Understanding, Dynamics Models, Neuro-Symbolic Visual Reasoning, Generative Visual Models, and Multi-Modal Perception. Before joining Stanford, he was a Visiting Faculty Researcher at Google Research, New York City, and completed his PhD at MIT.

Jon Barron

Google Research

Jon Barron is a senior staff research scientist at Google Research in San Francisco, where he works on computer vision and machine learning. He received a PhD in Computer Science from the University of California, Berkeley in 2013, where he was advised by Jitendra Malik, and an Honours BSc in Computer Science from the University of Toronto in 2007. He received a National Science Foundation Graduate Research Fellowship in 2009, the C.V. Ramamoorthy Distinguished Research Award in 2013, and the PAMI Young Researcher Award in 2020. His work has received awards at ECCV 2016, TPAMI 2016, ECCV 2020, ICCV 2021, CVPR 2022, and Communications of the ACM (2022).

Luca Carlone

Massachusetts Institute of Technology

Luca Carlone is an Associate Professor at MIT AeroAstro. His work includes seminal results on certifiably correct algorithms for localization and mapping, as well as approaches for visual-inertial navigation and distributed mapping. He is a recipient of the Best Student Paper Award at IROS 2021, the Best Paper Award in Robot Vision at ICRA 2020, a 2020 Honorable Mention from IEEE Robotics and Automation Letters, a Track Best Paper Award at the 2021 IEEE Aerospace Conference, the 2017 IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award, the Best Paper Award at WAFR 2016, and the Best Student Paper Award at the 2018 Symposium on VLSI Circuits; he was also a best paper finalist at RSS 2015 and RSS 2021.


Lingjie Liu

University of Pennsylvania

Lingjie Liu is the Aravind K. Joshi Assistant Professor at the University of Pennsylvania, leading the Penn Computer Graphics Lab and affiliated with the General Robotics, Automation, Sensing & Perception Lab. Her research is at the crossroads of Computer Graphics, Computer Vision, and AI, with a particular focus on Neural Scene Representations and 3D Reconstruction. Prior to this, she was a Lise Meitner Postdoctoral Research Fellow at the Max Planck Institute for Informatics, and earned her Ph.D. from the University of Hong Kong in 2019.

Andrew Davison

Imperial College London

Andrew Davison is a Professor of Robot Vision at Imperial College London, where he leads the Dyson Robotics Laboratory. A pioneer of vision-based SLAM (Simultaneous Localisation and Mapping), he has contributed significantly to real-time 3D vision and robotics. He also co-founded SLAMcore, a start-up specializing in applied Spatial AI solutions. His current research focuses on "Spatial AI," improving the performance of real-time 3D vision along dimensions such as dynamics, scale, and efficiency.


Vincent Sitzmann

Massachusetts Institute of Technology

Vincent Sitzmann is an Assistant Professor at MIT EECS, where he leads the Scene Representation Group. His research interest lies in neural scene representations, focusing on how neural networks learn to represent information about our world. Vincent's goal is to allow independent agents to reason about our world based on visual observations, such as inferring a complete model of a scene with information on geometry, material, lighting, etc., from only a few observations, a task that is currently challenging for AI.


Jamie Shotton

Wayve

Jamie Shotton is Chief Scientist at Wayve. He previously served as Partner Director of Science at Microsoft, where his work contributed significantly to the Kinect and HoloLens. He has received numerous accolades, including the Royal Academy of Engineering's MacRobert Award and Silver Medal. He holds a PhD in Computer Vision and Machine Learning from the University of Cambridge. His interests span artificial intelligence, computer vision, machine learning, and autonomous driving, among others.



Schedule

Welcome and Introduction 08:55 AM - 09:00 AM
Oral Presentations 09:00 AM - 10:00 AM
Jiajun Wu 10:00 AM - 10:45 AM
Jon Barron 10:45 AM - 11:30 AM
Poster Session & Lunch 11:40 AM - 01:10 PM
Luca Carlone 01:15 PM - 02:00 PM
Lingjie Liu 02:00 PM - 02:45 PM
Andrew Davison 02:45 PM - 03:30 PM
Vincent Sitzmann 03:30 PM - 04:15 PM
Jamie Shotton 04:15 PM - 05:00 PM

Accepted Papers


MOISST: Multimodal Optimization of Implicit Scene for SpatioTemporal Calibration

Quentin Herau, Nathan Piasco, Moussab Bennehar, Luis G Roldao Jimenez, Dzmitry Tsishkou, Cyrille Migniot, Pascal Vasseur, Cedric Demonceaux

[PDF]  

[Oral] Neural LiDAR Fields for Novel View Synthesis

Shengyu Huang, Zan Gojcic, Zian Wang, Francis Williams, Yoni Kasten, Sanja Fidler, Konrad Schindler, Or Litany

[PDF]  

Multi-Object Navigation with Dynamically Learned Neural Implicit Representations

Pierre Marza, Laetitia Matignon, Olivier Simonin, Christian Wolf

[PDF]   [Supplement]

[Oral] AutoNeRF: Training Implicit Scene Representations with Autonomous Agents

Pierre Marza, Laetitia Matignon, Olivier Simonin, Dhruv Batra, Christian Wolf, Devendra Singh Chaplot

[PDF]   [Supplement]

Improved Positional Encoding for Implicit Neural Representation based Compact Data Representation

Bharath Bhushan Damodaran, Francois Schnitzler, Anne Lambert, Pierre Hellier

[PDF]  

Flexible Techniques for Differentiable Rendering with 3D Gaussians

Leonid Keselman, Martial Hebert

[PDF]  

On the Few-Shot Generalization of Learning on Implicit Neural Representations

Vincent Tao Hu, Wei D Zhang, Teng Long, Yunlu Chen, Yuki M Asano, Pascal Mettes, Stratis Gavves, Basura Fernando, Cees Snoek

[PDF]  

Exploring Reconstructive Latent-Space Neural Radiance Fields

Tristan T Aumentado-Armstrong, Ashkan Mirzaei, Marcus A Brubaker, Jonathan Kelly, Alex Levinshtein, Konstantinos G Derpanis, Igor Gilitschenski

[PDF]   [Supplement]

ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering

Andrea Ramazzina, Mario Bijelic, Stefanie Walz, Dominik Scheuble, Felix Heide, Alessandro Sanvito

[PDF]   [Supplement]

CROSSFIRE: Camera Relocalization On Self-Supervised Features from an Implicit Representation

Arthur Moreau, Nathan Piasco, Moussab Bennehar, Dzmitry Tsishkou, Bogdan Stanciulescu, Arnaud de La Fortelle

[PDF]  

BID-NeRF: RGB-D image pose estimation with inverted Neural Radiance Fields

Ágoston I. Csehi, Csaba Máté Józsa

[PDF]  

[Oral] Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis

Jonathon Luiten, Georgios Kopanas, Bastian Leibe, Deva Ramanan

[PDF]  

Transient Neural Radiance Fields for Lidar View Synthesis and 3D Reconstruction

Anagh Malik, Parsa Mirdehghan, Sotiris Nousias, Kiriakos N Kutulakos, David B Lindell

[PDF]   [Supplement] [Supplementary Videos]

[Oral] High-Degrees-of-Freedom Dynamic Neural Fields for Robot Self-Modeling and Motion Planning

Lennart Schulze, Hod Lipson

[PDF]   [Supplementary Video]

