Introduction


Interactive Gibson is a fast simulator and a dataset for indoor navigation and manipulation. It was first released in June 2019. It allows for complex interactions between the agent and the environment, such as picking up and placing objects, or opening doors and cabinets. This environment opens up new avenues for jointly training base and arm policies, allowing researchers to explore the synergy between manipulation and navigation.
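
Getting started typically looks like a standard Gym-style loop. The sketch below is illustrative only: the import path, class name, and config file are assumptions, so check the documentation for the actual entry points.

```python
# Minimal usage sketch (illustrative): the module path, `NavigateEnv`, and the
# config file name are assumptions -- see the documentation for actual names.
from gibson2.envs.locomotor_env import NavigateEnv  # assumed import path

env = NavigateEnv(config_file='configs/turtlebot_p2p_nav.yaml',  # assumed config
                  mode='headless')
obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()          # random policy as a placeholder
    obs, reward, done, info = env.step(action)  # standard Gym-style step
    if done:
        obs = env.reset()
env.close()
```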

If you use Interactive Gibson Simulator or Interactive Gibson assets, please consider citing the following paper:

Fei Xia, Chengshu Li, Kevin Chen, William B. Shen, Roberto Martin-Martin, Noriaki Hirose, Amir R. Zamir, Li Fei-Fei, and Silvio Savarese. "Gibson env V2: Embodied Simulation Environments for Interactive Navigation." 2019. [Simulator] [Documentation] [Tech report (old)] [Dataset] [New tech report coming soon] [Bibtex]

Feature set


Dataset size

The Interactive Gibson dataset contains 572 buildings, 1,400 floors, and 211,000 square meters of indoor space.

Interactive Objects

We augment 106 scenes with 1,984 interactable CAD model alignments spanning five object categories: chairs, desks, doors, sofas, and tables.

URDF Robot support

Supported robots include the MuJoCo humanoid and ant, Freight, Husky, TurtleBot v2, Minitaur, Fetch, JackRabbot v1/v2, and a quadrotor.
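
Since the physics simulation builds on PyBullet, loading a URDF robot looks roughly like this standalone PyBullet sketch; Interactive Gibson wraps this kind of call behind its own robot classes.

```python
# Standalone PyBullet sketch of URDF loading. The Husky model used here ships
# with pybullet_data; Interactive Gibson's robot classes wrap similar calls.
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                                    # headless physics server
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.8)
plane_id = p.loadURDF("plane.urdf")
robot_id = p.loadURDF("husky/husky.urdf", basePosition=[0, 0, 0.1])

num_joints = p.getNumJoints(robot_id)
print(f"Loaded robot with {num_joints} joints")
p.disconnect()
```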

Articulated Objects

We can simulate articulated objects such as doors and cabinets.
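
At the physics level, an articulated object is a URDF with movable joints; the PyBullet sketch below drives one such joint. The `door.urdf` path is a placeholder, not a bundled asset.

```python
# Illustrative PyBullet sketch of driving an articulated joint (e.g. a door
# hinge). `door.urdf` is a placeholder path; substitute an actual asset.
import pybullet as p

p.connect(p.DIRECT)
door_id = p.loadURDF("door.urdf", useFixedBase=True)  # placeholder asset

# Command the hinge (joint index 0 here) toward a ~60-degree open position.
p.setJointMotorControl2(bodyUniqueId=door_id,
                        jointIndex=0,
                        controlMode=p.POSITION_CONTROL,
                        targetPosition=1.05)          # radians
for _ in range(240):                                  # one second at 240 Hz
    p.stepSimulation()

angle = p.getJointState(door_id, 0)[0]                # current hinge angle
p.disconnect()
```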

Procedural randomized environment creation

Textures of CAD objects are randomized with physically-based rendering, resulting in a rich library of randomized environments.
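
Conceptually, randomization amounts to sampling a texture per object at episode reset. The helper below is hypothetical: the `set_texture` method and the directory layout are assumptions, and the actual texture variants are produced offline with physically-based rendering.

```python
# Hypothetical sketch of per-episode texture randomization. `set_texture` and
# the texture directory layout are assumptions for illustration only.
import random
from pathlib import Path

TEXTURE_DIR = Path('assets/randomized_textures')      # assumed layout

def randomize_textures(objects):
    textures = list(TEXTURE_DIR.glob('*.png'))
    for obj in objects:
        obj.set_texture(random.choice(textures))      # assumed per-object API
```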

Competitive speed for complex robot physics simulations

Interactive Gibson simulates full physics yet still runs at 200+ FPS. For rendering only, it can achieve close to 1,000 FPS.
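
Throughput claims like this can be checked with a simple wall-clock loop; `env` here is any Gym-style environment instance, as in the earlier sketch.

```python
# Step the environment in a tight loop and report frames per wall-clock second.
import time

def measure_fps(env, n_steps=1000):
    env.reset()
    start = time.time()
    for _ in range(n_steps):
        obs, reward, done, info = env.step(env.action_space.sample())
        if done:
            env.reset()
    return n_steps / (time.time() - start)
```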

Multi-agent support

Multiple agents can be added to the same environment, and they can see each other. This makes multi-agent learning possible.
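
In outline, this means instantiating several robots in one simulator and rendering one camera view per agent. The sketch below is hypothetical: `Simulator`, `load_agent`, and the per-agent `render` call are assumptions used to illustrate the idea, not the exact API.

```python
# Hypothetical multi-agent sketch; `load_agent` and per-agent `render` are
# assumed names standing in for the simulator's actual interface.
from gibson2.simulator import Simulator              # assumed import path

sim = Simulator(mode='headless')
agents = [sim.load_agent('turtlebot', position=(i, 0, 0.1))  # assumed API
          for i in range(2)]

for _ in range(100):
    sim.step()                                       # physics resolves collisions
    frames = [a.render('rgb') for a in agents]       # each agent sees the others
```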

3D scene graph annotations

Apart from object annotations, we also incorporate scene graph annotations and render them at every frame.

Algorithm support

SAC, DDPG, and PPO baselines are provided with the environment. Pretrained models for common tasks, including point-to-point (P2P) navigation, are available.
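
Evaluating one of these pretrained policies reduces to a standard Gym-style rollout. In the sketch below, `policy` is a hypothetical stand-in for however the repository exposes its checkpoints (any callable mapping an observation to an action).

```python
# Hedged sketch of evaluating a pretrained navigation policy over several
# episodes; `policy` is a hypothetical callable, not the repo's actual loader.
def evaluate(env, policy, episodes=10):
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            action = policy(obs)                       # deterministic action
            obs, reward, done, info = env.step(action)
            total += reward
        returns.append(total)
    return sum(returns) / len(returns)                 # mean episode return
```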

Demos


Multiple agent example

We show that Gibson Env V2 can handle multiple agents: it renders the camera view from each agent and simulates collisions between agents.


Rendering modalities example

We support rendering RGB images, surface normals, depth, and segmentation masks.
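
A consumer of these modalities just indexes the observation by modality name and normalizes float buffers for saving. The keys used below ('rgb', 'normal', 'depth', 'seg') are assumptions based on the modalities listed above; check the config for the exact names.

```python
# Sketch of dumping each rendering modality to disk. Observation keys are
# assumed names; float buffers are min-max normalized before saving.
import numpy as np
from PIL import Image

def save_modalities(obs, prefix='frame'):
    for key in ('rgb', 'normal', 'depth', 'seg'):
        img = np.asarray(obs[key])
        if img.dtype != np.uint8:                      # normalize float buffers
            img = (255 * (img - img.min()) / (np.ptp(img) + 1e-8)).astype(np.uint8)
        Image.fromarray(img.squeeze()).save(f'{prefix}_{key}.png')
```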


Manipulation example

In this example we show a simulated JackRabbot (JR) in Gibson Env V2 pulling open a door in the environment, shown from both a third-person view and the robot's own view.


Texture baking example

Objects are texture-baked with path-traced rendering in Gibson Env V2.


Object interaction example

We can add multiple objects to Gibson Env V2 and simulate the interactions between the agent and the added objects.


Sim2real transfer example

Due to the photorealism of our simulator, policies learned in simulation can be transferred to the real world.

Benchmarks


The table below shows the rendering speed (frames per second) of our environment under different rendering modes at 256x256 resolution with full physics simulation enabled, tested on an NVIDIA GTX 1080 Ti.

Mode                   Gibson V2   Gibson V1
RGBD, pre network f    264.1       58.5
RGBD, post network f   -           30.6
Surface Normal only    271.1       129.7
Semantic only          279.1       144.2
Non-Visual Sensory     1017.4      396.1

The table below shows the rendering-only (no physics) speed of our environment compared with Habitat-sim at 640x480 resolution. All benchmarks were run on a single NVIDIA GeForce GTX 1080 Ti graphics card.

Scene         Gibson V2   Habitat-sim
Hillsdale     620.4       752.9
Albertville   422.0       688.2

Selected Projects

[full list]