iGibson is a simulation environment that provides fast visual rendering and physics simulation based on Bullet. iGibson is equipped with fifteen fully interactive, high-quality scenes, hundreds of large 3D scenes reconstructed from real homes and offices, and compatibility with datasets like CubiCasa5K and 3D-Front, providing 12000+ additional interactive scenes. iGibson's features include domain randomization, integration with motion planners, and easy-to-use tools to collect human demonstrations. With these scenes and features, iGibson allows researchers to train and evaluate robotic agents that use visual signals to solve navigation and manipulation tasks such as opening doors, picking up and placing objects, or searching in cabinets.
2021.7 iGibson 2.0 Released: iGibson 2.0 is a new version of our open-source robot simulation environment. It implements kinematic and non-kinematic object states (temperature, wetness level, cleanliness level, etc.) and sampling functionalities to facilitate the development of embodied agents for various household activities.
2021.7 BEHAVIOR Challenge Announced: BEHAVIOR is a challenge in simulation where embodied agents make continuous full-body control decisions based on sensor information. Agents need to navigate and manipulate the simulated environment with the goal of accomplishing 100 household activities. Check out the challenge page for more details.
2021.2 iGibson Challenge 2021 Announced: We are announcing iGibson Challenge 2021, hosted with the Embodied AI workshop at CVPR 2021. This challenge features interactive and social navigation in indoor environments. Check out the challenge page for more details.
2020.12 iGibson and iGibson Dataset v1 Released: This release is a major update of the iGibson simulation environment and a new dataset. We release a large dataset that consists of 15 fully interactive scenes and 500+ object models annotated with material and physical properties, built on top of the PartNet-Mobility dataset. It also introduces new features such as physically based rendering, 1-beam and 16-beam LiDAR, domain randomization, motion planning, tools to collect human demonstrations, and more! For more details, please refer to our arXiv preprint.
2020.06 End of our CVPR20 sim2real challenge with iGibson! Thanks to all the participants, especially team inspir.ai, the winner of this year's edition. Their solution transferred best between iGibson simulation and the real apartment, achieving maximum scores in visual navigation both with and without obstacles.
2020.04 iGibson Dataset v0.05 Released: This release includes the simulation environment, ten houses annotated with interactive objects of five categories, and one house fully annotated to be interactive, with selected textures. We include documentation with code examples, as well as baselines of navigation agents trained with state-of-the-art reinforcement learning algorithms.
2020.02 Beginning of the simulation phase of our CVPR challenge "Sim2Real Challenge with Gibson": Do you want to see your visual navigation algorithm run on a real robot without dealing with the real-world setup? Participate in our CVPR20 challenge! We will test the best entries from a first, simulation-only phase on our own LoCoBots in a real apartment. To participate, follow the instructions on the challenge page.
iGibson 2.0 supports object states, including temperature, wetness level, cleanliness level, and toggled and sliced states, which are necessary to cover a wider range of tasks.
iGibson 2.0 implements a set of predicate logic functions that map the simulator states to logic states like Cooked or Soaked.
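As a sketch of how these two features fit together, the snippet below reads and writes object states through the igibson.object_states module; it assumes a running iGibson 2.0 simulation, and the `apple` object handle is hypothetical.

```python
# A minimal sketch, assuming `apple` is an interactive object in an
# already-running iGibson 2.0 simulation (the object handle is hypothetical).
from igibson import object_states

# Non-kinematic states hold continuous simulator-level values.
apple.states[object_states.Temperature].set_value(100.0)

# Predicate functions map those values to logic states such as Cooked:
# heating the apple past its cook temperature flips the predicate.
if apple.states[object_states.Cooked].get_value():
    print("Cooked(apple) is now True")
```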
Given a logic state, iGibson 2.0 can sample valid physical states that satisfy it, which lets it generate infinite task instances with minimal human effort.
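For example, requesting a logic state through set_value makes the simulator sample a physical state that satisfies it. The sketch below assumes a running simulation; the `plate`, `table`, and `towel` handles are hypothetical.

```python
# A sketch of generative sampling, assuming `plate`, `table`, and `towel`
# are interactive objects in a running simulation (hypothetical handles).
from igibson import object_states

# Sample a collision-free pose for the plate that satisfies OnTop(plate, table).
plate.states[object_states.OnTop].set_value(table, True)

# Sample a wetness level for the towel that satisfies the Soaked predicate.
towel.states[object_states.Soaked].set_value(True)
```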
iGibson 2.0 includes a virtual reality (VR) interface to immerse humans in its scenes to collect demonstrations.
We released the iGibson dataset, which consists of 15 fully interactive scenes and 500+ object models.
iGibson achieves more than 100 fps with full physics in fully interactive scenes, and more than 400 fps for rendering only.
Built-in motion planners (e.g., RRT, BiRRT, LazyPRM) for the arm and base, plus an intuitive Human-iG interface for efficient demonstration collection.
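A sketch of how the built-in planners can be invoked, assuming the MotionPlanningWrapper utility shipped with the repository and an already-constructed iGibsonEnv; the method names follow the repository's examples and should be treated as assumptions.

```python
from igibson.utils.motion_planning_wrapper import MotionPlanningWrapper

# `env` is assumed to be an existing iGibsonEnv with a mobile manipulator.
planner = MotionPlanningWrapper(env)

# Plan a collision-free base path to (x, y, yaw) with BiRRT and replay it.
base_path = planner.plan_base_motion([1.0, 2.0, 0.0])
if base_path is not None:
    planner.dry_run_base_plan(base_path)
```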
Support for training in collaborative or adversarial setups, where an arbitrary number of agents can see and interact with each other.
Reinforcement learning starter code for navigation tasks with visual and LiDAR signals using SAC.
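A minimal sketch of such a starter setup, pairing iGibsonEnv with the SAC implementation from stable-baselines3; the config file name is a placeholder, and the stable-baselines3 dependency is our assumption.

```python
from igibson.envs.igibson_env import iGibsonEnv
from stable_baselines3 import SAC

# Hypothetical navigation config; iGibson ships task/robot configs like this.
env = iGibsonEnv(
    config_file="turtlebot_nav.yaml",
    mode="headless",                 # no on-screen rendering during training
    action_timestep=1 / 10.0,        # agent acts at 10 Hz
    physics_timestep=1 / 240.0,      # Bullet steps at 240 Hz
)

# MultiInputPolicy handles the dict observation space (RGB, depth, LiDAR, ...).
model = SAC("MultiInputPolicy", env, buffer_size=100_000, verbose=1)
model.learn(total_timesteps=10_000)
```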
The documentation for the current and previous versions of iGibson can be found here [1.0] [2.0]. The documentation includes multiple code examples and snippets to help you develop your own solutions or modify the code for your project. If what you are looking for is not in the documentation, check the issues section of our GitHub repository.
Details of iGibson are presented in the following papers. Consider citing them if you use iGibson in your research.
"iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks", by Chengshu Li*, Fei Xia*, Roberto Martín-Martín*, Michael Lingelbach, Sanjana Srivastava, Bokui Shen, Kent Vainio, Cem Gokmen, Gokul Dharan, Tanish Jain, Andrey Kurenkov, Karen Liu, Hyowon Gweon, Jiajun Wu, Li Fei-Fei, Silvio Savarese (* equal contribution)
We provide a walkthrough of iGibson 2.0 and iGibson 1.0, introducing the features and how they facilitate the development of interactive AI.
By Andrey Kurenkov, Roberto Martín-Martín, Jeff Ichnowski, Ken Goldberg, Silvio Savarese [website]