In recent years, learning algorithms have achieved impressive results in vision-based indoor autonomous navigation. However, most of this work has been trained and tested only in simulation, because the learning algorithms are data hungry and running real robot tests requires specialized hardware and expertise. These issues left the final judge of the performance of the algorithms -- the real world -- out of the evaluation. Recently, simulation environments with high fidelity in visuals and physics have narrowed the gap between simulation and the real world, to the point that models trained in simulation can transfer to reality. But there has not yet been a thorough, extensive evaluation of how successfully visuo-motor navigation policies trained in these simulators transfer to the real world.


In this challenge we provide participants with all the necessary ingredients to develop visuo-motor policies in simulation and test them in the real world:

  1. iGibson, the Interactive Gibson Environment, a novel interactive extension of our photorealistic Gibson simulator that can be used to train agents in navigation tasks. iGibson includes multiple 3D scenes reconstructed from real world apartments, and the state-of-the-art Bullet physics engine.
  2. A real-world environment and support on a set of real navigation robots (LoCoBots) for participants to test their solutions in the real world.
  3. A set of realistic navigation tasks: we challenge participants to navigate to a given point of the environment using RGB+D images when the environment is free of obstacles, contains obstacles that can be interacted with, and/or is populated with other moving agents.

With our workshop and challenge, we hope to "take the temperature" of the current state of visuo-motor navigation solutions in the real world under equal and fair conditions, shed light on what is solved and how, and identify which areas should be further investigated by the community.

Highlights of the Challenge

  • Real World Apartment
  • Interactive Objects
  • Dynamic Agents
  • Open Participation

Scenes in the Challenge

Participants are provided with 73 Gibson scenes reconstructed from real world apartments in which they can develop their solutions in simulation. One of these scenes, the Castro scene (named after the street of the apartment), is a reconstruction of the real world apartment where we will test the best solutions. Only part of the entire apartment is provided as a model; the other part, which we call CastroUnseen, will remain unknown to the participants. We will use both Castro and CastroUnseen to test the participants' solutions, in simulation and in the real world.

  • Scenes to develop in simulation: 72 Gibson Scenes + Castro
  • Scenes to evaluate in simulation: Castro + CastroUnseen
  • Scenes to develop and evaluate in the real world: Castro + CastroUnseen

Challenge Phases

  • Phase 1, Simulation phase: In this phase participants will develop their solutions in our simulator, iGibson. They will get access to all Gibson 3D reconstructed scenes (572 total, including 72 high-quality ones that we recommend for training) and an additional 3D reconstructed scene called Castro, which contains part of the real world apartment we will use in Phase 2. We will hold out the other part of the apartment, which we call CastroUnseen, to perform the evaluation. Participants can submit their solutions at any time through the [EvalAI portal]. At the end of the simulation challenge period (May 15), the best ten solutions will pass to the second phase, the real world phase. As part of our collaboration with Facebook, the top five teams from the Habitat Challenge will also take part in Phase 2 of our challenge and test their solutions in the real world.
  • Phase 2, Real world phase: The qualified teams will receive 30 min/day to evaluate their policies on our real world robotic platform. The runs will be recorded, and the videos will be provided to the teams for debugging. They will also receive a record of the states, measurements, and actions taken by the real world agent, as well as their score. The last two days (31st of May and 1st of June) are the challenge days, and the teams will be ultimately ranked based on their scores there. At the end of these two days we will announce the winner of the first Gibson Sim2Real Challenge!
  • Phase 3, Demo phase: To increase visibility, the best three entries of our challenge will have the opportunity to showcase their solutions live during CVPR 2020! We will connect directly from Seattle on the 15th of June and video stream a run of each solution, highlighting its strengths and characteristics. This will provide an opportunity for the teams to explain their solutions to the CVPR audience.

Challenge Scenarios

The first Gibson Sim2Real Challenge is composed of three navigation scenarios that represent important skills for autonomous visual navigation:

  • PointNav scenario in clean environments: the goal in this scenario is for an agent to successfully navigate to a given point location based on visual information (RGB+D images). In this scenario, the agent is not allowed to collide with the environment. This scenario evaluates the sim2real transfer of the most basic capability of a navigating agent. We will evaluate performance in this scenario using Success weighted by Path Length (SPL) [3].
  • PointNav scenario with interactive objects: in this scenario the agent is allowed (even encouraged) to collide and interact with the environment in order to push obstacles away. But be careful! Some of the obstacles are not movable. This scenario evaluates agents on Interactive Navigation tasks [1], navigation problems that consider interactions with the environment. We will use the Interactive Navigation Score (INS) [1] to evaluate the performance of agents in this scenario.
  • PointNav scenario among dynamic agents: the goal in this scenario is to navigate to the given point location while avoiding collisions with a dynamic agent that follows unknown navigation patterns. Reasoning about, predicting, and avoiding other moving agents is challenging, and we will measure how well existing solutions perform under these conditions. As in the PointNav scenario in clean environments, no collisions are allowed. We will again use SPL to evaluate the performance of the controlled agent.
All submissions to our challenge will be evaluated in all three scenarios. The ranking will depend on the performance across the three scenarios, but we will also provide insights about the performance in each of them.
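For reference, SPL [3] averages, over the evaluation episodes, the success indicator weighted by the ratio of shortest-path length to the path actually traveled. A minimal sketch (the function and variable names are ours, not an official implementation):

```python
def spl(successes, shortest_lengths, path_lengths):
    """Success weighted by Path Length (SPL), following Anderson et al. [3].

    successes:        1 if the episode reached the goal, else 0
    shortest_lengths: geodesic shortest-path distance from start to goal
    path_lengths:     length of the path the agent actually traveled
    """
    total = 0.0
    for s, l, p in zip(successes, shortest_lengths, path_lengths):
        # Weight each success by how close the agent's path was to optimal.
        total += s * l / max(p, l)
    return total / len(successes)
```

An agent that reaches the goal along the shortest possible path scores 1 for that episode; a failed episode scores 0 regardless of path length.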


[1] Interactive Gibson Benchmark: A Benchmark for Interactive Navigation in Cluttered Environments. Fei Xia, William B. Shen, Chengshu Li, Priya Kasimbeg, Micael Tchapmi, Alexander Toshev, Roberto Martín-Martín, and Silvio Savarese. RA-L, to be presented at ICRA 2020.

[2] Gibson env: Real-world perception for embodied agents. Fei Xia, Amir R. Zamir, Zhiyang He, Alexander Sax, Jitendra Malik, and Silvio Savarese. In CVPR, 2018

[3] On evaluation of embodied navigation agents. Peter Anderson, Angel Chang, Devendra Singh Chaplot, Alexey Dosovitskiy, Saurabh Gupta, Vladlen Koltun, Jana Kosecka, Jitendra Malik, Roozbeh Mottaghi, Manolis Savva, Amir R. Zamir. arXiv:1807.06757, 2018.

Timeline (Tentative)

  • Releasing Development Kit: February 27, 2020
  • Beginning of Phase 1, Development and Challenge in Simulation: February 27, 2020
  • End of Phase 1, Challenge in Simulation: May 31, 2020
  • Beginning of Phase 2, Tests in Real World: May 15, 2020
  • Beginning of Challenge in Phase 2, Real World: May 23, 2020
  • End of Challenge in Phase 2, Real World: June 8, 2020
  • Phase 3, Live Demo at CVPR (via streaming): June 15, 2020


Do you want to participate? Download our starter package with iGibson, the Interactive Gibson Environment, and examples of agents, and start developing your own entry right away! You also need to register on the [EvalAI portal].


  • Q: What do I need to participate?
    A: Only a computer. You don't need your own robot. If your computer has a GPU, it will train faster!
  • Q: Will my solution be tested in all three scenarios? What happens if it's tailored to one of them?
    A: Yes, your solution will be tested in all three scenarios: clean environments, environments with interactive objects, and navigation among dynamic agents. We think a full solution for point-to-point navigation should be able to handle these three common conditions. However, we will also highlight solutions that outperform the others in only one or two scenarios.
  • Q: Do I need to know robotics to participate?
    A: Not at all! We will take care of all the real robot hassle so that you can focus on developing an awesome solution.
  • Q: Where can I find the tech specs for the real robot and sensor?
    A: You can find the tech specs for the robot here and for the Intel® RealSense™ D435 sensor here.
  • Q: Are the actions sent to the robot continuous velocities or discrete displacements?
    A: We believe that the best action space for natural and efficient navigation is continuous robot velocities (linear and angular). However, given the popularity of discrete displacements as an action space for navigation, we will also accept and evaluate solutions that use them.
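To illustrate the continuous action space, here is a minimal sketch of a policy interface that maps RGB+D observations to (linear, angular) velocity commands. The class name, method signature, and velocity bounds are illustrative assumptions, not the official challenge API:

```python
import random

class RandomNavPolicy:
    """Hypothetical policy interface (names and bounds are ours, not the
    challenge API). act() consumes an RGB image and a depth map and returns
    a continuous action: linear velocity (m/s) and angular velocity (rad/s).
    """

    def act(self, rgb, depth):
        # A real policy would run a learned model on (rgb, depth);
        # here we just sample a random command within assumed bounds.
        linear = random.uniform(0.0, 0.5)    # forward velocity, m/s
        angular = random.uniform(-1.0, 1.0)  # turn rate, rad/s
        return linear, angular
```

A discrete-displacement solution would instead return one action from a small fixed set (e.g. "move forward 0.25 m", "turn 10 degrees") that a low-level controller then executes.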


The Sim2Real Challenge with Gibson is generously sponsored by Nvidia. Thank you Nvidia!