OmniGibson ships with many demo scripts that highlight its modularity and diverse feature set, intended as building blocks for your research. Let's try them out!
A quick word about macros
Why macros?
Macros enforce global behavior that is consistent within an individual Python process but can differ between processes. This is useful because globally enabling all of OmniGibson's features can cause unnecessary slowdowns, so configuring the macros for your specific use case can optimize performance.
For example, Omniverse provides a so-called flatcache feature which yields significant performance boosts, but cannot be used when fluids or soft bodies are present. So we ideally should always set gm.ENABLE_FLATCACHE=True unless we have fluids or soft bodies in our environment.
Macros define a globally available set of magic numbers or flags used throughout OmniGibson. These can either be set directly in omnigibson/macros.py, or modified programmatically at runtime, as shown in the snippet below:
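For instance, a script can import and override these values before the environment is created. This is a minimal sketch assembled from the demos further down this page; the exact set of available macro names may vary across OmniGibson versions:

from omnigibson.macros import gm, macros

# Global macros ("gm"): toggle stack-wide features before creating og.Environment(...)
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True

# Module-level macros: override a specific module's hardcoded defaults
macros.object_states.covered.MAX_VISUAL_PARTICLES = 300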
gm refers to the "global" macros -- i.e.: settings that generally impact the entire OmniGibson stack. These are usually the only settings you may need to modify.
macros captures all remaining macros defined throughout OmniGibson's codebase -- these are often hardcoded default settings or magic numbers defined in a specific module. These can also be overridden, but we recommend inspecting the module first to understand how it is used.
Many of our examples set various macros at the beginning of the script, which is a good way to understand use cases for modifying them!
Environments
These examples showcase the full OmniGibson stack in use, and the types of environments immediately supported.
BEHAVIOR Task Demo
This demo is useful for...
Understanding how to instantiate a BEHAVIOR task
Understanding how a pre-defined configuration file is used
This demo instantiates one of our BEHAVIOR tasks (optionally sampling object locations online) in a fully populated scene and loads a Fetch robot. The robot executes random actions and the environment is reset periodically.
import os

import yaml

import omnigibson as og
from omnigibson.macros import gm
from omnigibson.utils.ui_utils import choose_from_options

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Generates a BEHAVIOR Task environment in an online fashion.

    It steps the environment 100 times with random actions sampled from the action space,
    using the Gym interface, resetting it 10 times.
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    # Ask the user whether they want online object sampling or not
    sampling_options = {
        False: "Use a pre-sampled cached BEHAVIOR activity scene",
        True: "Sample the BEHAVIOR activity in an online fashion",
    }
    should_sample = choose_from_options(options=sampling_options, name="online object sampling", random_selection=random_selection)

    # Load the pre-selected configuration and set the online_sampling flag
    config_filename = os.path.join(og.example_config_path, "fetch_behavior.yaml")
    cfg = yaml.load(open(config_filename, "r"), Loader=yaml.FullLoader)
    cfg["task"]["online_object_sampling"] = should_sample

    # Load the environment
    env = og.Environment(configs=cfg)

    # Allow user to move camera more easily
    og.sim.enable_viewer_camera_teleoperation()

    # Run a simple loop and reset periodically
    max_iterations = 10 if not short_exec else 1
    for j in range(max_iterations):
        og.log.info("Resetting environment")
        env.reset()
        for i in range(100):
            action = env.action_space.sample()
            state, reward, terminated, truncated, info = env.step(action)
            if terminated or truncated:
                og.log.info("Episode finished after {} timesteps".format(i + 1))
                break

    # Always close the environment at the end
    og.clear()


if __name__ == "__main__":
    main()
Navigation Task Demo
This demo is useful for...
Understanding how to instantiate a navigation task
Understanding how a pre-defined configuration file is used
This demo instantiates one of our navigation tasks in a fully-populated scene and loads a Turtlebot robot. The robot executes random actions and the environment is reset periodically.
import os

import yaml

import omnigibson as og
from omnigibson.utils.ui_utils import choose_from_options


def main(random_selection=False, headless=False, short_exec=False):
    """
    Prompts the user to select a type of scene and loads a turtlebot into it, generating
    a Point-Goal navigation task within the environment.

    It steps the environment 100 times with random actions sampled from the action space,
    using the Gym interface, resetting it 10 times.
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    # Load the config
    config_filename = os.path.join(og.example_config_path, "turtlebot_nav.yaml")
    config = yaml.load(open(config_filename, "r"), Loader=yaml.FullLoader)

    # check if we want to quick load or full load the scene
    load_options = {
        "Quick": "Only load the building assets (i.e.: the floors, walls, ceilings)",
        "Full": "Load all interactive objects in the scene",
    }
    load_mode = choose_from_options(options=load_options, name="load mode", random_selection=random_selection)
    if load_mode == "Quick":
        config["scene"]["load_object_categories"] = ["floors", "walls", "ceilings"]

    # Load the environment
    env = og.Environment(configs=config)

    # Allow user to move camera more easily
    og.sim.enable_viewer_camera_teleoperation()

    # Run a simple loop and reset periodically
    max_iterations = 10 if not short_exec else 1
    for j in range(max_iterations):
        og.log.info("Resetting environment")
        env.reset()
        for i in range(100):
            action = env.action_space.sample()
            state, reward, terminated, truncated, info = env.step(action)
            if terminated or truncated:
                og.log.info("Episode finished after {} timesteps".format(i + 1))
                break

    # Always close the environment at the end
    og.clear()


if __name__ == "__main__":
    main()
Learning
These examples showcase how OmniGibson can be used to train embodied AI agents.
Reinforcement Learning Demo
This demo is useful for...
Understanding how to hook up OmniGibson to an external algorithm
"""Example training code using stable-baselines3 PPO for one BEHAVIOR activity.Note that due to the sparsity of the reward, this training code will not converge and achieve task success.This only serves as a starting point that users can further build upon."""importargparseimportosimporttimeimportyamlimportomnigibsonasogfromomnigibsonimportexample_config_pathfromomnigibson.macrosimportgmfromomnigibson.utils.python_utilsimportmeets_minimum_versiontry:importgymnasiumasgymimporttensorboard# noqa: F401importtorchasthimporttorch.nnasnnfromstable_baselines3importPPOfromstable_baselines3.common.callbacksimportCallbackList,CheckpointCallback,EvalCallbackfromstable_baselines3.common.evaluationimportevaluate_policyfromstable_baselines3.common.torch_layersimportBaseFeaturesExtractorfromstable_baselines3.common.utilsimportset_random_seedexceptModuleNotFoundError:og.log.error("torch, stable-baselines3, or tensorboard is not installed. ""See which packages are missing, and then run the following for any missing packages:\n""pip install stable-baselines3[extra]\n""pip install tensorboard\n""pip install shimmy>=0.2.1\n""Also, please update gym to >=0.26.1 after installing sb3: pip install gym>=0.26.1")exit(1)assertmeets_minimum_version(gym.__version__,"0.28.1"),"Please install/update gymnasium to version >= 0.28.1"# We don't need object states nor transitions rules, so we disable them now, and also enable flatcache for maximum speedgm.ENABLE_OBJECT_STATES=Falsegm.ENABLE_TRANSITION_RULES=Falsegm.ENABLE_FLATCACHE=TrueclassCustomCombinedExtractor(BaseFeaturesExtractor):def__init__(self,observation_space:gym.spaces.Dict):# We do not know features-dim here before going over all the items,# so put something dummy for now. PyTorch requires callingsuper().__init__(observation_space,features_dim=1)extractors={}self.step_index=0self.img_save_dir="img_save_dir"os.makedirs(self.img_save_dir,exist_ok=True)total_concat_size=0feature_size=128forkey,subspaceinobservation_space.spaces.items():# For now, only keep RGB observationsif"rgb"inkey:og.log.info(f"obs {key} shape: {subspace.shape}")n_input_channels=subspace.shape[0]# channel firstcnn=nn.Sequential(nn.Conv2d(n_input_channels,4,kernel_size=8,stride=4,padding=0),nn.ReLU(),nn.MaxPool2d(2),nn.Conv2d(4,8,kernel_size=4,stride=2,padding=0),nn.ReLU(),nn.MaxPool2d(2),nn.Conv2d(8,4,kernel_size=3,stride=1,padding=0),nn.ReLU(),nn.Flatten(),)test_tensor=th.zeros(subspace.shape)withth.no_grad():n_flatten=cnn(test_tensor[None]).shape[1]fc=nn.Sequential(nn.Linear(n_flatten,feature_size),nn.ReLU())extractors[key]=nn.Sequential(cnn,fc)total_concat_size+=feature_sizeself.extractors=nn.ModuleDict(extractors)# Update the features dim manuallyself._features_dim=total_concat_sizedefforward(self,observations)->th.Tensor:encoded_tensor_list=[]self.step_index+=1# self.extractors contain nn.Modules that do all the processing.forkey,extractorinself.extractors.items():encoded_tensor_list.append(extractor(observations[key]))feature=th.cat(encoded_tensor_list,dim=1)returnfeaturedefmain():# Parse argsparser=argparse.ArgumentParser(description="Train or evaluate a PPO agent in BEHAVIOR")parser.add_argument("--checkpoint",type=str,default=None,help="Absolute path to desired PPO checkpoint to load for evaluation",)parser.add_argument("--eval",action="store_true",help="If set, will evaluate the PPO agent found from --checkpoint",)args=parser.parse_args()tensorboard_log_dir=os.path.join("log_dir",time.strftime("%Y%m%d-%H%M%S"))os.makedirs(tensorboard_log_dir,exist_ok=True)prefix=""seed=0# Load 
configwithopen(f"{example_config_path}/turtlebot_nav.yaml","r")asf:cfg=yaml.load(f,Loader=yaml.FullLoader)# Make sure flattened obs and action space is usedcfg["env"]["flatten_action_space"]=Truecfg["env"]["flatten_obs_space"]=True# Only use RGB obscfg["robots"][0]["obs_modalities"]=["rgb"]# If we're not eval, turn off the start / goal markers so the agent doesn't see themifnotargs.eval:cfg["task"]["visualize_goal"]=Falseenv=og.Environment(configs=cfg)# If we're evaluating, hide the ceilings and enable camera teleoperation so the user can easily# visualize the rollouts dynamicallyifargs.eval:ceiling=env.scene.object_registry("name","ceilings")ceiling.visible=Falseog.sim.enable_viewer_camera_teleoperation()# Set the setset_random_seed(seed)env.reset()policy_kwargs=dict(features_extractor_class=CustomCombinedExtractor,)os.makedirs(tensorboard_log_dir,exist_ok=True)ifargs.eval:assertargs.checkpointisnotNone,"If evaluating a PPO policy, @checkpoint argument must be specified!"model=PPO.load(args.checkpoint)og.log.info("Starting evaluation...")mean_reward,std_reward=evaluate_policy(model,env,n_eval_episodes=50)og.log.info("Finished evaluation!")og.log.info(f"Mean reward: {mean_reward} +/- {std_reward:.2f}")else:model=PPO("MultiInputPolicy",env,verbose=1,tensorboard_log=tensorboard_log_dir,policy_kwargs=policy_kwargs,n_steps=20*10,batch_size=8,device="cuda",)checkpoint_callback=CheckpointCallback(save_freq=1000,save_path=tensorboard_log_dir,name_prefix=prefix)eval_callback=EvalCallback(eval_env=env,eval_freq=1000,n_eval_episodes=20)callback=CallbackList([checkpoint_callback,eval_callback])og.log.debug(model.policy)og.log.info(f"model: {model}")og.log.info("Starting training...")model.learn(total_timesteps=10000000,callback=callback,)og.log.info("Finished training!")if__name__=="__main__":main()
Scenes
These examples showcase how to leverage OmniGibson's large-scale, diverse scenes shipped with the BEHAVIOR dataset.
Scene Selector Demo
This demo is useful for...
Understanding how to load a scene into OmniGibson
Accessing all BEHAVIOR dataset scenes
python -m omnigibson.examples.scenes.scene_selector
This demo lets you choose a scene from the BEHAVIOR dataset, loads it along with a Turtlebot robot, and cycles the resulting environment periodically.
import omnigibson as og
from omnigibson.macros import gm
from omnigibson.utils.asset_utils import get_available_g_scenes, get_available_og_scenes
from omnigibson.utils.ui_utils import choose_from_options

# Configure macros for maximum performance
gm.USE_GPU_DYNAMICS = True
gm.ENABLE_FLATCACHE = True
gm.ENABLE_OBJECT_STATES = False
gm.ENABLE_TRANSITION_RULES = False


def main(random_selection=False, headless=False, short_exec=False):
    """
    Prompts the user to select any available interactive scene and loads a turtlebot into it.
    It steps the environment 100 times with random actions sampled from the action space,
    using the Gym interface, resetting it 10 times.
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    # Choose the scene type to load
    scene_options = {
        "InteractiveTraversableScene": "Procedurally generated scene with fully interactive objects",
        # "StaticTraversableScene": "Monolithic scene mesh with no interactive objects",
    }
    scene_type = choose_from_options(options=scene_options, name="scene type", random_selection=random_selection)

    # Choose the scene model to load
    scenes = get_available_og_scenes() if scene_type == "InteractiveTraversableScene" else get_available_g_scenes()
    scene_model = choose_from_options(options=scenes, name="scene model", random_selection=random_selection)

    cfg = {
        "scene": {
            "type": scene_type,
            "scene_model": scene_model,
        },
    }

    # If the scene type is interactive, also check if we want to quick load or full load the scene
    if scene_type == "InteractiveTraversableScene":
        load_options = {
            "Quick": "Only load the building assets (i.e.: the floors, walls, ceilings)",
            "Full": "Load all interactive objects in the scene",
        }
        load_mode = choose_from_options(options=load_options, name="load mode", random_selection=random_selection)
        if load_mode == "Quick":
            cfg["scene"]["load_object_categories"] = [
                "floors",
                "walls",
                "ceilings",
                "lawn",
                "driveway",
                "roof",
                "rail_fence",
            ]

    # Load the environment
    env = og.Environment(configs=cfg)

    # Allow user to move camera more easily
    if not gm.HEADLESS:
        og.sim.enable_viewer_camera_teleoperation()

    # Run a simple loop and reset periodically
    steps = 0
    while not short_exec or steps < 1000:
        state, reward, terminated, truncated, info = env.step(None)
        if terminated or truncated:
            og.log.info("Episode finished after {} timesteps".format(steps + 1))
            break
        steps += 1

    # Always close the environment at the end
    og.clear()


if __name__ == "__main__":
    main()
Scene Tour Demo
This demo is useful for...
Understanding how to load a scene into OmniGibson
Understanding how to generate a trajectory from a set of waypoints
This demo lets you choose a scene from the BEHAVIOR dataset. It allows you to move the camera using the keyboard, select waypoints, and then programmatically generates a video trajectory from the selected waypoints.
import torch as th

import omnigibson as og
import omnigibson.lazy as lazy
from omnigibson.macros import gm
from omnigibson.utils.asset_utils import get_available_g_scenes, get_available_og_scenes
from omnigibson.utils.ui_utils import KeyboardEventHandler, choose_from_options


def main(random_selection=False, headless=False, short_exec=False):
    """
    Prompts the user to select any available interactive scene and loads it.

    It sets the camera to various poses and records images, and then generates a trajectory from a set of waypoints
    and records the resulting video.
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    # Make sure the example is not being run headless. If so, terminate early
    if gm.HEADLESS:
        print("This demo should only be run not headless! Exiting early.")
        og.shutdown()

    # Choose the scene type to load
    scene_options = {
        "InteractiveTraversableScene": "Procedurally generated scene with fully interactive objects",
        # "StaticTraversableScene": "Monolithic scene mesh with no interactive objects",
    }
    scene_type = choose_from_options(options=scene_options, name="scene type", random_selection=random_selection)

    # Choose the scene model to load
    scenes = get_available_og_scenes() if scene_type == "InteractiveTraversableScene" else get_available_g_scenes()
    scene_model = choose_from_options(options=scenes, name="scene model", random_selection=random_selection)
    print(f"scene model: {scene_model}")

    cfg = {
        "scene": {
            "type": scene_type,
            "scene_model": scene_model,
        },
    }

    # If the scene type is interactive, also check if we want to quick load or full load the scene
    if scene_type == "InteractiveTraversableScene":
        load_options = {
            "Quick": "Only load the building assets (i.e.: the floors, walls, ceilings)",
            "Full": "Load all interactive objects in the scene",
        }
        load_mode = choose_from_options(options=load_options, name="load mode", random_selection=random_selection)
        if load_mode == "Quick":
            cfg["scene"]["load_object_categories"] = ["floors", "walls", "ceilings"]

    # Load the environment
    env = og.Environment(configs=cfg)

    # Allow user to teleoperate the camera
    cam_mover = og.sim.enable_viewer_camera_teleoperation()

    # Create a keyboard event handler for generating waypoints
    waypoints = []

    def add_waypoint():
        nonlocal waypoints
        pos = cam_mover.cam.get_position_orientation()[0]
        print(f"Added waypoint at {pos}")
        waypoints.append(pos)

    def clear_waypoints():
        nonlocal waypoints
        print("Cleared all waypoints!")
        waypoints = []

    KeyboardEventHandler.initialize()
    KeyboardEventHandler.add_keyboard_callback(
        key=lazy.carb.input.KeyboardInput.X,
        callback_fn=add_waypoint,
    )
    KeyboardEventHandler.add_keyboard_callback(
        key=lazy.carb.input.KeyboardInput.C,
        callback_fn=clear_waypoints,
    )
    KeyboardEventHandler.add_keyboard_callback(
        key=lazy.carb.input.KeyboardInput.J,
        callback_fn=lambda: cam_mover.record_trajectory_from_waypoints(
            waypoints=th.tensor(waypoints),
            per_step_distance=0.02,
            fps=30,
            steps_per_frame=1,
            fpath=None,  # This corresponds to the default path inferred from cam_mover.save_dir
        ),
    )
    KeyboardEventHandler.add_keyboard_callback(
        key=lazy.carb.input.KeyboardInput.ESCAPE,
        callback_fn=lambda: og.clear(),
    )

    # Print out additional keyboard commands
    print("\t X: Save the current camera pose as a waypoint")
    print("\t C: Clear all waypoints")
    print("\t J: Record the camera trajectory from the current set of waypoints")
    print("\t ESC: Terminate the demo")

    # Loop indefinitely
    steps = 0
    max_steps = -1 if not short_exec else 100
    while steps != max_steps:
        env.step([])
        steps += 1


if __name__ == "__main__":
    main()
Traversability Map Demo
This demo is useful for...
Understanding how to leverage traversability map information from BEHAVIOR dataset scenes
import os

import cv2
import matplotlib.pyplot as plt
import torch as th

import omnigibson as og
from omnigibson.utils.asset_utils import get_available_og_scenes, get_og_scene_path
from omnigibson.utils.ui_utils import choose_from_options


def main(random_selection=False, headless=False, short_exec=False):
    """
    Traversable map demo
    Loads the floor plan and obstacles for the requested scene, and overlays them in a visual figure such that the
    highlighted area reflects the traversable (free-space) area
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    scenes = get_available_og_scenes()
    scene_model = choose_from_options(options=scenes, name="scene model", random_selection=random_selection)
    print(f"Generating traversability map for scene {scene_model}")

    trav_map_size = 200
    trav_map_erosion = 2
    trav_map = cv2.imread(os.path.join(get_og_scene_path(scene_model), "layout", "floor_trav_0.png"))
    trav_map = cv2.resize(trav_map, (trav_map_size, trav_map_size))
    trav_map = cv2.erode(trav_map, th.ones((trav_map_erosion, trav_map_erosion)).cpu().numpy())

    if not headless:
        plt.figure(figsize=(12, 12))
        plt.imshow(trav_map)
        plt.title(f"Traversable area of {scene_model} scene")
        plt.show()


if __name__ == "__main__":
    main()
Objects
These examples showcase how to leverage objects in OmniGibson.
Load Object Demo
This demo is useful for...
Understanding how to load an object into OmniGibson
Accessing all BEHAVIOR dataset asset categories and models
import torch as th

import omnigibson as og
from omnigibson.utils.asset_utils import get_all_object_categories, get_all_object_category_models
from omnigibson.utils.ui_utils import choose_from_options


def main(random_selection=False, headless=False, short_exec=False):
    """
    This demo shows how to load any scaled objects from the OG object model dataset
    The user selects an object model to load
    The objects can be loaded into an empty scene or an interactive scene (OG)
    The example also shows how to use the Environment API or directly the Simulator API, loading objects and robots
    and executing actions
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
    scene_options = ["Scene", "InteractiveTraversableScene"]
    scene_type = choose_from_options(options=scene_options, name="scene type", random_selection=random_selection)

    # -- Choose the object to load --

    # Select a category to load
    available_obj_categories = get_all_object_categories()
    obj_category = choose_from_options(options=available_obj_categories, name="object category", random_selection=random_selection)

    # Select a model to load
    available_obj_models = get_all_object_category_models(obj_category)
    obj_model = choose_from_options(options=available_obj_models, name="object model", random_selection=random_selection)

    # Create and load this object into the simulator
    obj_cfg = dict(
        type="DatasetObject",
        name="obj",
        category=obj_category,
        model=obj_model,
        position=[0, 0, 50.0],
    )

    cfg = {
        "scene": {
            "type": scene_type,
        },
        "objects": [obj_cfg],
    }
    if scene_type == "InteractiveTraversableScene":
        cfg["scene"]["scene_model"] = "Rs_int"

    # Create the environment
    env = og.Environment(configs=cfg)

    # Place the object so it rests on the floor
    obj = env.scene.object_registry("name", "obj")
    center_offset = obj.get_position_orientation()[0] - obj.aabb_center + th.tensor([0, 0, obj.aabb_extent[2] / 2.0])
    obj.set_position_orientation(position=center_offset)

    # Step through the environment
    max_steps = 100 if short_exec else 10000
    for i in range(max_steps):
        env.step(th.empty(0))

    # Always close the environment at the end
    og.clear()


if __name__ == "__main__":
    main()
Object Visualizer Demo
This demo is useful for...
Viewing objects' textures as rendered in OmniGibson
Viewing articulated objects' range of motion
Understanding how to reference object instances from the environment
Understanding how to set object poses and joint states
This demo lets you choose a specific object from the BEHAVIOR dataset, and rotates the object in-place. If the object is articulated, it additionally moves its joints through its full range of motion.
import argparse
import math

import torch as th

import omnigibson as og
import omnigibson.utils.transform_utils as T
from omnigibson.utils.asset_utils import get_all_object_categories, get_all_object_category_models
from omnigibson.utils.ui_utils import choose_from_options


def main(random_selection=False, headless=False, short_exec=False):
    """
    Visualizes object as specified by its USD path, @usd_path. If None is specified, will instead
    result in an object selection from OmniGibson's object dataset
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    # Assuming that if random_selection=True, headless=True, short_exec=True, we are calling it from tests and we
    # do not want to parse args (it would fail because the calling function is pytest "testfile.py")
    usd_path = None
    if not (random_selection and headless and short_exec):
        parser = argparse.ArgumentParser()
        parser.add_argument(
            "--usd_path",
            default=None,
            help="USD Model to load",
        )
        args = parser.parse_args()
        usd_path = args.usd_path

    # Define objects to load
    light0_cfg = dict(
        type="LightObject",
        light_type="Sphere",
        name="sphere_light0",
        radius=0.01,
        intensity=1e5,
        position=[-2.0, -2.0, 2.0],
    )

    light1_cfg = dict(
        type="LightObject",
        light_type="Sphere",
        name="sphere_light1",
        radius=0.01,
        intensity=1e5,
        position=[-2.0, 2.0, 2.0],
    )

    # Make sure we have a valid usd path
    if usd_path is None:
        # Select a category to load
        available_obj_categories = get_all_object_categories()
        obj_category = choose_from_options(options=available_obj_categories, name="object category", random_selection=random_selection)

        # Select a model to load
        available_obj_models = get_all_object_category_models(obj_category)
        obj_model = choose_from_options(options=available_obj_models, name="object model", random_selection=random_selection)

        kwargs = {
            "type": "DatasetObject",
            "category": obj_category,
            "model": obj_model,
        }
    else:
        kwargs = {
            "type": "USDObject",
            "usd_path": usd_path,
        }

    # Import the desired object
    obj_cfg = dict(
        **kwargs,
        name="obj",
        visual_only=True,
        position=[0, 0, 10.0],
    )

    # Create the scene config to load -- empty scene
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": [light0_cfg, light1_cfg, obj_cfg],
    }

    # Create the environment
    env = og.Environment(configs=cfg)

    # Set camera to appropriate viewing pose
    og.sim.viewer_camera.set_position_orientation(
        position=th.tensor([-0.00913503, -1.95750906, 1.36407314]),
        orientation=th.tensor([0.6350064, 0.0, 0.0, 0.77250687]),
    )

    # Grab the object references
    obj = env.scene.object_registry("name", "obj")

    # Standardize the scale of the object so it fits in a [1,1,1] box -- note that we have to stop the simulator
    # in order to set the scale
    extents = obj.aabb_extent
    og.sim.stop()
    obj.scale = (th.ones(3) / extents).min()
    og.sim.play()
    env.step(th.empty(0))

    # Move the object so that its center is at [0, 0, 1]
    center_offset = obj.get_position_orientation()[0] - obj.aabb_center + th.tensor([0, 0, 1.0])
    obj.set_position_orientation(position=center_offset)

    # Allow the user to easily move the camera around
    og.sim.enable_viewer_camera_teleoperation()

    # Rotate the object in place
    steps_per_rotate = 360
    steps_per_joint = steps_per_rotate / 10
    max_steps = 100 if short_exec else 10000
    for i in range(max_steps):
        z_angle = 2 * math.pi * (i % steps_per_rotate) / steps_per_rotate
        quat = T.euler2quat(th.tensor([0, 0, z_angle]))
        pos = T.quat2mat(quat) @ center_offset
        if obj.n_dof > 0:
            frac = (i % steps_per_joint) / steps_per_joint
            j_frac = -1.0 + 2.0 * frac if (i // steps_per_joint) % 2 == 0 else 1.0 - 2.0 * frac
            obj.set_joint_positions(positions=j_frac * th.ones(obj.n_dof), normalized=True, drive=False)
            obj.keep_still()
        obj.set_position_orientation(position=pos, orientation=quat)
        env.step(th.empty(0))

    # Shut down at the end
    og.clear()


if __name__ == "__main__":
    main()
Highlight Object
This demo is useful for...
Understanding how to highlight individual objects within a cluttered scene
Understanding how to access groups of objects from the environment
import torch as th

import omnigibson as og


def main(random_selection=False, headless=False, short_exec=False):
    """
    Highlights visually all object instances of windows and then removes the highlighting
    It also demonstrates how to apply an action on all instances of objects of a given category
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    # Create the scene config to load
    cfg = {
        "scene": {
            "type": "InteractiveTraversableScene",
            "scene_model": "Rs_int",
        }
    }

    # Create the environment
    env = og.Environment(configs=cfg)

    # Grab all window objects
    windows = env.scene.object_registry("category", "openable_window")

    # Step environment while toggling window highlighting
    i = 0
    highlighted = False
    max_steps = -1 if not short_exec else 1000
    while i != max_steps:
        env.step(th.empty(0))
        if i % 50 == 0:
            highlighted = not highlighted
            og.log.info(f"Toggling window highlight to: {highlighted}")
            for window in windows:
                # Note that this property is R/W!
                window.highlighted = highlighted
        i += 1

    # Always close the environment at the end
    og.clear()


if __name__ == "__main__":
    main()
Draw Object Bounding Box Demo
This demo is useful for...
Understanding how to access observations from a GymObservable object
Understanding how to access objects' bounding box information
Understanding how to dynamically modify vision modalities
This demo loads a door object and a banana object, and partially obscures the banana with the door. It generates both "loose" and "tight" bounding boxes (where the latter respects occlusions) for both objects, and dumps them to an image on disk.
import matplotlib.pyplot as plt
import torch as th

import omnigibson as og


def main(random_selection=False, headless=False, short_exec=False):
    """
    Shows how to obtain the bounding box of an articulated object.
    Draws the bounding box around the loaded object, a cabinet, and writes the visualized image to disk at the
    current directory named 'bbox_2d_[loose / tight]_img.png'.

    NOTE: In the GUI, bounding boxes can be natively viewed by clicking on the sensor ((*)) icon at the top,
    and then selecting the appropriate bounding box modalities, and clicking "Show". See:
    https://docs.omniverse.nvidia.com/prod_extensions/prod_extensions/ext_replicator/visualization.html#the-visualizer
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    # Specify objects to load
    banana_cfg = dict(
        type="DatasetObject",
        name="banana",
        category="banana",
        model="vvyyyv",
        bounding_box=[0.643, 0.224, 0.269],
        position=[-0.906661, -0.545106, 0.136824],
        orientation=[0, 0, 0.76040583, -0.6494482],
    )

    door_cfg = dict(
        type="DatasetObject",
        name="door",
        category="door",
        model="ohagsq",
        bounding_box=[1.528, 0.064, 1.299],
        position=[-2.0, 0, 0.70000001],
        orientation=[0, 0, -0.38268343, 0.92387953],
    )

    # Create the scene config to load -- empty scene with a few objects
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": [banana_cfg, door_cfg],
    }

    # Create the environment
    env = og.Environment(configs=cfg)

    # Set camera to appropriate viewing pose
    cam = og.sim.viewer_camera
    cam.set_position_orientation(
        position=th.tensor([-4.62785, -0.418575, 0.933943]),
        orientation=th.tensor([0.52196595, -0.4231939, -0.46640436, 0.5752612]),
    )

    # Add bounding boxes to camera sensor
    bbox_modalities = ["bbox_3d", "bbox_2d_loose", "bbox_2d_tight"]
    for bbox_modality in bbox_modalities:
        cam.add_modality(bbox_modality)

    # Take a few steps to let objects settle
    for i in range(100):
        env.step(th.empty(0))

    # Grab observations from viewer camera and write them to disk
    obs, _ = cam.get_obs()

    for bbox_modality in bbox_modalities:
        # Print out each of the modalities
        og.log.info(f"Observation modality {bbox_modality}:\n{obs[bbox_modality]}")

        # Also write the 2d loose bounding box to disk
        if "3d" not in bbox_modality:
            from omnigibson.utils.deprecated_utils import colorize_bboxes

            colorized_img = colorize_bboxes(bboxes_2d_data=obs[bbox_modality], bboxes_2d_rgb=obs["rgb"].cpu().numpy(), num_channels=4)
            fpath = f"{bbox_modality}_img.png"
            plt.imsave(fpath, colorized_img)
            og.log.info(f"Saving modality [{bbox_modality}] image to: {fpath}")

    # Always close environment down at end
    og.clear()


if __name__ == "__main__":
    main()
Object States
These examples showcase OmniGibson's powerful object states functionality, which captures both individual and relational kinematic and non-kinematic states.
Slicing Demo
This demo is useful for...
Understanding how slicing works in OmniGibson
Understanding how to access individual objects once the environment is created
import math

import torch as th

import omnigibson as og
import omnigibson.utils.transform_utils as T
from omnigibson.macros import gm

# Make sure object states and transition rules are enabled
gm.ENABLE_OBJECT_STATES = True
gm.ENABLE_TRANSITION_RULES = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of slicing an apple into two apple slices
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    # Create the scene config to load -- empty scene with table, knife, and apple
    table_cfg = dict(
        type="DatasetObject",
        name="table",
        category="breakfast_table",
        model="rjgmmy",
        bounding_box=[1.36, 1.081, 0.84],
        position=[0, 0, 0.58],
    )

    apple_cfg = dict(
        type="DatasetObject",
        name="apple",
        category="apple",
        model="agveuv",
        bounding_box=[0.098, 0.098, 0.115],
        position=[0.085, 0, 0.92],
    )

    knife_cfg = dict(
        type="DatasetObject",
        name="knife",
        category="table_knife",
        model="lrdmpf",
        bounding_box=[0.401, 0.044, 0.009],
        position=[0, 0, 20.0],
    )

    light0_cfg = dict(
        type="LightObject",
        name="light0",
        light_type="Sphere",
        radius=0.01,
        intensity=4000.0,
        position=[1.217, -0.848, 1.388],
    )

    light1_cfg = dict(
        type="LightObject",
        name="light1",
        light_type="Sphere",
        radius=0.01,
        intensity=4000.0,
        position=[-1.217, 0.848, 1.388],
    )

    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": [table_cfg, apple_cfg, knife_cfg, light0_cfg, light1_cfg],
    }

    # Create the environment
    env = og.Environment(configs=cfg)

    # Grab reference to apple and knife
    apple = env.scene.object_registry("name", "apple")
    knife = env.scene.object_registry("name", "knife")

    # Update the simulator's viewer camera's pose so it points towards the table
    og.sim.viewer_camera.set_position_orientation(
        position=th.tensor([0.544888, -0.412084, 1.11569]),
        orientation=th.tensor([0.54757518, 0.27792802, 0.35721896, 0.70378409]),
    )

    # Let apple settle
    for _ in range(50):
        env.step(th.empty(0))

    knife.keep_still()
    knife.set_position_orientation(
        position=apple.get_position_orientation()[0] + th.tensor([-0.15, 0.0, 0.2], dtype=th.float32),
        orientation=T.euler2quat(th.tensor([-math.pi / 2, 0, 0], dtype=th.float32)),
    )

    if not short_exec:
        input("The knife will fall on the apple and slice it. Press [ENTER] to continue.")

    # Step simulation for a bit so that apple is sliced
    for i in range(1000):
        env.step(th.empty(0))

    if not short_exec:
        input("Apple has been sliced! Press [ENTER] to terminate the demo.")

    # Always close environment at the end
    og.clear()


if __name__ == "__main__":
    main()
Dicing Demo
This demo is useful for...
Understanding how to leverage the Dicing state
Understanding how to enable objects to be diceable
import math

import torch as th

import omnigibson as og
import omnigibson.utils.transform_utils as T
from omnigibson.macros import gm

# Make sure object states, GPU dynamics, and transition rules are enabled
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True
gm.ENABLE_TRANSITION_RULES = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of dicing an apple into apple dices
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    # Create the scene config to load -- empty scene with table, knife, and apple
    table_cfg = dict(
        type="DatasetObject",
        name="table",
        category="breakfast_table",
        model="rjgmmy",
        bounding_box=[1.36, 1.08, 0.84],
        position=[0, 0, 0.58],
    )

    apple_cfg = dict(
        type="DatasetObject",
        name="apple",
        category="apple",
        model="agveuv",
        bounding_box=[0.098, 0.098, 0.115],
        position=[0.085, 0, 0.92],
        abilities={"diceable": {}},
    )

    knife_cfg = dict(
        type="DatasetObject",
        name="knife",
        category="table_knife",
        model="lrdmpf",
        bounding_box=[0.401, 0.044, 0.009],
        position=[0, 0, 20.0],
    )

    light0_cfg = dict(
        type="LightObject",
        name="light0",
        light_type="Sphere",
        radius=0.01,
        intensity=1e7,
        position=[1.217, -0.848, 1.388],
    )

    light1_cfg = dict(
        type="LightObject",
        name="light1",
        light_type="Sphere",
        radius=0.01,
        intensity=1e7,
        position=[-1.217, 0.848, 1.388],
    )

    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": [table_cfg, apple_cfg, knife_cfg, light0_cfg, light1_cfg],
    }

    # Create the environment
    env = og.Environment(configs=cfg)

    # Grab reference to apple and knife
    apple = env.scene.object_registry("name", "apple")
    knife = env.scene.object_registry("name", "knife")

    # Update the simulator's viewer camera's pose so it points towards the table
    og.sim.viewer_camera.set_position_orientation(
        position=th.tensor([0.544888, -0.412084, 1.11569]),
        orientation=th.tensor([0.54757518, 0.27792802, 0.35721896, 0.70378409]),
    )

    # Let apple settle
    for _ in range(50):
        env.step(th.empty(0))

    knife.keep_still()
    knife.set_position_orientation(
        position=apple.get_position_orientation()[0] + th.tensor([-0.15, 0.0, 0.2]),
        orientation=T.euler2quat(th.tensor([-math.pi / 2, 0, 0])),
    )

    if not short_exec:
        input("The knife will fall on the apple and dice it. Press [ENTER] to continue.")

    # Step simulation for a bit so that apple is diced
    for _ in range(1000):
        env.step(th.empty(0))

    if not short_exec:
        input("Apple has been diced! Press [ENTER] to terminate the demo.")

    # Always close simulator at the end
    og.clear()


if __name__ == "__main__":
    main()
Folded and Unfolded Demo
This demo is useful for...
Understanding how to load a softbody (cloth) version of a BEHAVIOR dataset object
Understanding how to enable cloth objects to be foldable
Understanding the current heuristics used for gauging a cloth's "foldness"
This demo loads in three different cloth objects, and allows you to manipulate them while printing out their Folded state status in real-time. Try manipulating the object by holding down Shift and then Left-click + Drag!
import torch as th

import omnigibson as og
from omnigibson.macros import gm
from omnigibson.object_states import Folded, Unfolded
from omnigibson.utils.constants import PrimType
from omnigibson.utils.python_utils import multi_dim_linspace

# Make sure object states and GPU dynamics are enabled (GPU dynamics needed for cloth)
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of cloth objects that can potentially be folded.
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    # Create the scene config to load -- empty scene + custom cloth object
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": [
            {
                "type": "DatasetObject",
                "name": "carpet",
                "category": "carpet",
                "model": "ctclvd",
                "bounding_box": [0.897, 0.568, 0.012],
                "prim_type": PrimType.CLOTH,
                "abilities": {"cloth": {}},
                "position": [0, 0, 0.5],
            },
            {
                "type": "DatasetObject",
                "name": "dishtowel",
                "category": "dishtowel",
                "model": "dtfspn",
                "bounding_box": [0.852, 1.1165, 0.174],
                "prim_type": PrimType.CLOTH,
                "abilities": {"cloth": {}},
                "position": [1, 1, 0.5],
            },
            {
                "type": "DatasetObject",
                "name": "shirt",
                "category": "t_shirt",
                "model": "kvidcx",
                "bounding_box": [0.472, 1.243, 1.158],
                "prim_type": PrimType.CLOTH,
                "abilities": {"cloth": {}},
                "position": [-1, 1, 0.5],
                "orientation": [0.7071, 0.0, 0.7071, 0.0],
            },
        ],
    }

    # Create the environment
    env = og.Environment(configs=cfg)

    # Grab object references
    carpet = env.scene.object_registry("name", "carpet")
    dishtowel = env.scene.object_registry("name", "dishtowel")
    shirt = env.scene.object_registry("name", "shirt")
    objs = [carpet, dishtowel, shirt]

    # Set viewer camera
    og.sim.viewer_camera.set_position_orientation(
        position=th.tensor([0.46382895, -2.66703958, 1.22616824]),
        orientation=th.tensor([0.58779174, -0.00231237, -0.00318273, 0.80900271]),
    )

    def print_state():
        folded = carpet.states[Folded].get_value()
        unfolded = carpet.states[Unfolded].get_value()
        info = "carpet: [folded] %d [unfolded] %d" % (folded, unfolded)

        folded = dishtowel.states[Folded].get_value()
        unfolded = dishtowel.states[Unfolded].get_value()
        info += " || dishtowel: [folded] %d [unfolded] %d" % (folded, unfolded)

        folded = shirt.states[Folded].get_value()
        unfolded = shirt.states[Unfolded].get_value()
        info += " || tshirt: [folded] %d [unfolded] %d" % (folded, unfolded)

        print(f"{info}{' ' * (110 - len(info))}", end="\r")

    for _ in range(100):
        og.sim.step()

    print("\nCloth state:\n")

    if not short_exec:
        # Fold all three cloths along the x-axis
        for i in range(3):
            obj = objs[i]
            pos = obj.root_link.compute_particle_positions()
            x_min, x_max = th.min(pos, dim=0).values[0], th.max(pos, dim=0).values[0]
            x_extent = x_max - x_min
            # Get indices for the bottom 10 percent vertices in the x-axis
            indices = th.argsort(pos, dim=0)[:, 0][: (pos.shape[0] // 10)]
            start = th.clone(pos[indices])

            # lift up a bit
            mid = th.clone(start)
            mid[:, 2] += x_extent * 0.2

            # move towards x_max
            end = th.clone(mid)
            end[:, 0] += x_extent * 0.9

            increments = 25
            for ctrl_pts in th.cat([multi_dim_linspace(start, mid, increments), multi_dim_linspace(mid, end, increments)]):
                obj.root_link.set_particle_positions(ctrl_pts, idxs=indices)
                og.sim.step()
                print_state()

        # Fold the t-shirt twice again along the y-axis
        for direction in [-1, 1]:
            obj = shirt
            pos = obj.root_link.compute_particle_positions()
            y_min, y_max = th.min(pos, dim=0).values[1], th.max(pos, dim=0).values[1]
            y_extent = y_max - y_min
            if direction == 1:
                indices = th.argsort(pos, dim=0)[:, 1][: (pos.shape[0] // 20)]
            else:
                indices = th.argsort(pos, dim=0)[:, 1][-(pos.shape[0] // 20):]
            start = th.clone(pos[indices])

            # lift up a bit
            mid = th.clone(start)
            mid[:, 2] += y_extent * 0.2

            # move towards y_max
            end = th.clone(mid)
            end[:, 1] += direction * y_extent * 0.4

            increments = 25
            for ctrl_pts in th.cat([multi_dim_linspace(start, mid, increments), multi_dim_linspace(mid, end, increments)]):
                obj.root_link.set_particle_positions(ctrl_pts, idxs=indices)
                env.step(th.empty(0))
                print_state()

    while True:
        env.step(th.empty(0))
        print_state()

    # Shut down env at the end
    print()
    og.clear()


if __name__ == "__main__":
    main()
Overlaid Demo
This demo is useful for...
Understanding how cloth objects can be overlaid on rigid objects
Understanding current heuristics used for gauging a cloth's "overlaid" status
This demo loads in a carpet on top of a table. The demo allows you to manipulate the carpet while printing out its Overlaid state status in real-time. Try manipulating the object by holding down Shift and then Left-click + Drag!
import torch as th

import omnigibson as og
from omnigibson.macros import gm
from omnigibson.object_states import Overlaid
from omnigibson.utils.constants import PrimType

# Make sure object states and GPU dynamics are enabled (GPU dynamics needed for cloth)
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of cloth objects that can be overlaid on rigid objects.

    Loads a carpet on top of a table. Initially Overlaid will be True because the carpet largely covers the table.
    If you drag the carpet off the table or even just fold it into half, Overlaid will become False.
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    # Create the scene config to load -- empty scene + custom cloth object + custom rigid object
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": [
            {
                "type": "DatasetObject",
                "name": "carpet",
                "category": "carpet",
                "model": "ctclvd",
                "bounding_box": [1.346, 0.852, 0.017],
                "prim_type": PrimType.CLOTH,
                "abilities": {"cloth": {}},
                "position": [0, 0, 1.0],
            },
            {
                "type": "DatasetObject",
                "name": "breakfast_table",
                "category": "breakfast_table",
                "model": "rjgmmy",
                "bounding_box": [1.36, 1.081, 0.84],
                "prim_type": PrimType.RIGID,
                "position": [0, 0, 0.58],
            },
        ],
    }

    # Create the environment
    env = og.Environment(configs=cfg)

    # Grab object references
    carpet = env.scene.object_registry("name", "carpet")
    breakfast_table = env.scene.object_registry("name", "breakfast_table")

    # Set camera pose
    og.sim.viewer_camera.set_position_orientation(
        position=th.tensor([0.88215526, -1.40086216, 2.00311063]),
        orientation=th.tensor([0.42013364, 0.12342107, 0.25339685, 0.86258043]),
    )

    max_steps = 100 if short_exec else -1
    steps = 0

    print("\nTry dragging cloth around with CTRL + Left-Click to see the Overlaid state change:\n")

    while steps != max_steps:
        print(f"Overlaid {carpet.states[Overlaid].get_value(breakfast_table)} ", end="\r")
        env.step(th.empty(0))
        steps += 1

    # Shut down env at the end
    og.clear()


if __name__ == "__main__":
    main()
Heat Source or Sink Demo
This demo is useful for...
Understanding how a heat source (or sink) is visualized in OmniGibson
Understanding how dynamic fire visuals are generated in real-time
import torch as th

import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True


def main(random_selection=False, headless=False, short_exec=False):
    # Create the scene config to load -- empty scene with a stove object added
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": [
            {
                "type": "DatasetObject",
                "name": "stove",
                "category": "stove",
                "model": "qbjiva",
                "bounding_box": [1.611, 0.769, 1.147],
                "abilities": {
                    "heatSource": {"requires_toggled_on": True},
                    "toggleable": {},
                },
                "position": [0, 0, 0.61],
            }
        ],
    }

    # Create the environment
    env = og.Environment(configs=cfg)

    # Get reference to stove object
    stove = env.scene.object_registry("name", "stove")

    # Set camera to appropriate viewing pose
    og.sim.viewer_camera.set_position_orientation(
        position=th.tensor([-0.0792399, -1.30104, 1.51981]),
        orientation=th.tensor([0.54897692, 0.00110359, 0.00168013, 0.83583509]),
    )

    # Make sure necessary object states are included with the stove
    assert object_states.HeatSourceOrSink in stove.states
    assert object_states.ToggledOn in stove.states

    # Take a few steps so that visibility propagates
    for _ in range(5):
        env.step(th.empty(0))

    # Heat source is off.
    print("Heat source is OFF.")
    heat_source_state = stove.states[object_states.HeatSourceOrSink].get_value()
    assert not heat_source_state

    # Toggle on stove, notify user
    if not short_exec:
        input("Heat source will now turn ON: Press ENTER to continue.")
    stove.states[object_states.ToggledOn].set_value(True)
    assert stove.states[object_states.ToggledOn].get_value()

    # Need to take a step to update the state.
    env.step(th.empty(0))

    # Heat source is on
    heat_source_state = stove.states[object_states.HeatSourceOrSink].get_value()
    assert heat_source_state
    for _ in range(500):
        env.step(th.empty(0))

    # Toggle off stove, notify user
    if not short_exec:
        input("Heat source will now turn OFF: Press ENTER to continue.")
    stove.states[object_states.ToggledOn].set_value(False)
    assert not stove.states[object_states.ToggledOn].get_value()
    for _ in range(200):
        env.step(th.empty(0))

    # Move stove, notify user
    if not short_exec:
        input("Heat source is now moving: Press ENTER to continue.")
    stove.set_position_orientation(position=th.tensor([0, 1.0, 0.61]))
    for i in range(100):
        env.step(th.empty(0))

    # Toggle on stove again, notify user
    if not short_exec:
        input("Heat source will now turn ON: Press ENTER to continue.")
    stove.states[object_states.ToggledOn].set_value(True)
    assert stove.states[object_states.ToggledOn].get_value()
    for i in range(500):
        env.step(th.empty(0))

    # Shutdown environment at end
    og.clear()


if __name__ == "__main__":
    main()
Temperature Demo
This demo is useful for...
Understanding how to dynamically sample kinematic states for BEHAVIOR dataset objects
Understanding how temperature changes are propagated to individual objects from individual heat sources or sinks
This demo loads in various heat sources and sinks, and places an apple within close proximity to each of them. As the environment steps, each apple's temperature is printed in real-time, showcasing OmniGibson's rudimentary temperature dynamics.
import torch as th

import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of temperature change
    Loads a stove, a microwave and an oven, all toggled on, and five frozen apples
    The user can move the apples to see them change from frozen, to normal temperature, to cooked and burnt
    This demo also shows how to load objects ToggledOn and how to set the initial temperature of an object
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    # Define specific objects we want to load in with the scene directly
    obj_configs = []

    # Light
    obj_configs.append(
        dict(
            type="LightObject",
            light_type="Sphere",
            name="light",
            radius=0.01,
            intensity=1e8,
            position=[-2.0, -2.0, 1.0],
        )
    )

    # Stove
    obj_configs.append(
        dict(
            type="DatasetObject",
            name="stove",
            category="stove",
            model="yhjzwg",
            bounding_box=[1.185, 0.978, 1.387],
            position=[0, 0, 0.69],
        )
    )

    # Microwave
    obj_configs.append(
        dict(
            type="DatasetObject",
            name="microwave",
            category="microwave",
            model="hjjxmi",
            bounding_box=[0.384, 0.256, 0.196],
            position=[2.5, 0, 0.10],
        )
    )

    # Oven
    obj_configs.append(
        dict(
            type="DatasetObject",
            name="oven",
            category="oven",
            model="wuinhm",
            bounding_box=[1.075, 0.926, 1.552],
            position=[-1.25, 0, 0.88],
        )
    )

    # Tray
    obj_configs.append(
        dict(
            type="DatasetObject",
            name="tray",
            category="tray",
            model="xzcnjq",
            bounding_box=[0.319, 0.478, 0.046],
            position=[-0.25, -0.12, 1.26],
        )
    )

    # Fridge
    obj_configs.append(
        dict(
            type="DatasetObject",
            name="fridge",
            category="fridge",
            model="hivvdf",
            bounding_box=[1.065, 1.149, 1.528],
            abilities={
                "coldSource": {
                    "temperature": -100.0,
                    "requires_inside": True,
                }
            },
            position=[1.25, 0, 0.81],
        )
    )

    # 5 Apples
    for i in range(5):
        obj_configs.append(
            dict(
                type="DatasetObject",
                name=f"apple{i}",
                category="apple",
                model="agveuv",
                bounding_box=[0.065, 0.065, 0.077],
                position=[0, i * 0.1, 5.0],
            )
        )

    # Create the scene config to load -- empty scene with desired objects
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": obj_configs,
    }

    # Create the environment
    env = og.Environment(configs=cfg)

    # Get reference to relevant objects
    stove = env.scene.object_registry("name", "stove")
    microwave = env.scene.object_registry("name", "microwave")
    oven = env.scene.object_registry("name", "oven")
    tray = env.scene.object_registry("name", "tray")
    fridge = env.scene.object_registry("name", "fridge")
    apples = list(env.scene.object_registry("category", "apple"))

    # Set camera to appropriate viewing pose
    og.sim.viewer_camera.set_position_orientation(
        position=th.tensor([0.46938863, -3.97887141, 1.64106008]),
        orientation=th.tensor([0.63311689, 0.00127259, 0.00155577, 0.77405359]),
    )

    # Let objects settle
    for _ in range(25):
        env.step(th.empty(0))

    # Turn on all scene objects
    stove.states[object_states.ToggledOn].set_value(True)
    microwave.states[object_states.ToggledOn].set_value(True)
    oven.states[object_states.ToggledOn].set_value(True)

    # Set initial temperature of the apples to -50 degrees Celsius, and move the apples to different objects
    for apple in apples:
        apple.states[object_states.Temperature].set_value(-50)
    apples[0].states[object_states.Inside].set_value(oven, True)
    apples[1].set_position_orientation(
        position=stove.states[object_states.HeatSourceOrSink].link.get_position_orientation()[0] + th.tensor([0, 0, 0.1])
    )
    apples[2].states[object_states.OnTop].set_value(tray, True)
    apples[3].states[object_states.Inside].set_value(fridge, True)
    apples[4].states[object_states.Inside].set_value(microwave, True)

    steps = 0
    max_steps = -1 if not short_exec else 1000

    # Main recording loop
    locations = [f"{loc:>20}" for loc in ["Inside oven", "On stove", "On tray", "Inside fridge", "Inside microwave"]]
    print()
    print(f"{'Apple location:':<20}", *locations)
    while steps != max_steps:
        env.step(th.empty(0))
        temps = [f"{apple.states[object_states.Temperature].get_value():>20.2f}" for apple in apples]
        print(f"{'Apple temperature:':<20}", *temps, end="\r")
        steps += 1

    # Always close env at the end
    og.clear()


if __name__ == "__main__":
    main()
Heated Demo
This demo is useful for...
Understanding how temperature modifications can cause objects' visual changes
Understanding how dynamic steam visuals are generated in real-time
This demo loads in three bowls, and immediately sets their temperatures past their Heated threshold. Steam is generated in real-time from these objects, and then disappears once the temperature of the objects drops below their Heated threshold.
import torch as th

import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True


def main(random_selection=False, headless=False, short_exec=False):
    # Define object configurations for objects to load -- we want to load a light and three bowls
    obj_configs = []

    obj_configs.append(
        dict(
            type="LightObject",
            light_type="Sphere",
            name="light",
            radius=0.01,
            intensity=1e8,
            position=[-2.0, -2.0, 1.0],
        )
    )

    for i, (scale, x) in enumerate(zip([0.5, 1.0, 2.0], [-0.6, 0, 0.8])):
        obj_configs.append(
            dict(
                type="DatasetObject",
                name=f"bowl{i}",
                category="bowl",
                model="ajzltc",
                bounding_box=th.tensor([0.329, 0.293, 0.168]) * scale,
                abilities={"heatable": {}},
                position=[x, 0, 0.2],
            )
        )

    # Create the scene config to load -- empty scene with light object and bowls
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": obj_configs,
    }

    # Create the environment
    env = og.Environment(configs=cfg)

    # Set camera to appropriate viewing pose
    og.sim.viewer_camera.set_position_orientation(
        position=th.tensor([0.182103, -2.07295, 0.14017]),
        orientation=th.tensor([0.77787037, 0.00267566, 0.00216149, 0.62841535]),
    )

    # Dim the skybox so we can see the bowls' steam effectively
    og.sim.skybox.intensity = 100.0

    # Grab reference to objects of relevance
    objs = list(env.scene.object_registry("category", "bowl"))

    def report_states(objs):
        for obj in objs:
            print("=" * 20)
            print("object:", obj.name)
            print("temperature:", obj.states[object_states.Temperature].get_value())
            print("obj is heated:", obj.states[object_states.Heated].get_value())

    # Report default states
    print("==== Initial state ====")
    report_states(objs)

    if not short_exec:
        # Notify user that we're about to heat the object
        input("Objects will be heated, and steam will slowly rise. Press ENTER to continue.")

    # Heated.
    for obj in objs:
        obj.states[object_states.Temperature].set_value(50)
    env.step(th.empty(0))
    report_states(objs)

    # Take a look at the steam effect.
    # After a while, objects will be below the Steam temperature threshold.
    print("==== Objects are now heated... ====")
    print()
    max_iterations = 2000 if not short_exec else 100
    for _ in range(max_iterations):
        env.step(th.empty(0))
        # Also print temperatures
        temps = [f"{obj.states[object_states.Temperature].get_value():>7.2f}" for obj in objs]
        print("obj temps:", *temps, end="\r")
    print()

    # Objects are not heated anymore.
    print("==== Objects are no longer heated... ====")
    report_states(objs)

    if not short_exec:
        # Close environment at the end
        input("Demo completed. Press ENTER to shutdown environment.")
    og.clear()


if __name__ == "__main__":
    main()
Onfire Demo
This demo is useful for...
Understanding how changing onfire state can cause objects' visual changes
Understanding how onfire can be triggered by nearby onfire objects
This demo loads in a stove (toggled on) and two apples. The first apple will be ignited by the stove first, then the second apple will be ignited by the first apple.
import torch as th

import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of on fire state.
    Loads a stove (toggled on), and two apples.
    The first apple will be ignited by the stove first, then the second apple will be ignited by the first apple.
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    # Define specific objects we want to load in with the scene directly
    obj_configs = []

    # Light
    obj_configs.append(
        dict(
            type="LightObject",
            light_type="Sphere",
            name="light",
            radius=0.01,
            intensity=1e8,
            position=[-2.0, -2.0, 1.0],
        )
    )

    # Stove
    obj_configs.append(
        dict(
            type="DatasetObject",
            name="stove",
            category="stove",
            model="yhjzwg",
            bounding_box=[1.185, 0.978, 1.387],
            position=[0, 0, 0.69],
        )
    )

    # 2 Apples
    for i in range(2):
        obj_configs.append(
            dict(
                type="DatasetObject",
                name=f"apple{i}",
                category="apple",
                model="agveuv",
                bounding_box=[0.065, 0.065, 0.077],
                position=[0, i * 0.07, 2.0],
                abilities={"flammable": {"ignition_temperature": 100, "distance_threshold": 0.5}},
            )
        )

    # Create the scene config to load -- empty scene with desired objects
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": obj_configs,
    }

    # Create the environment
    env = og.Environment(configs=cfg)

    # Get reference to relevant objects
    stove = env.scene.object_registry("name", "stove")
    apples = list(env.scene.object_registry("category", "apple"))

    # Set camera to appropriate viewing pose
    og.sim.viewer_camera.set_position_orientation(
        position=th.tensor([-0.42246569, -0.34745704, 1.56810353]),
        orientation=th.tensor([0.50083786, -0.10407796, -0.17482619, 0.84128772]),
    )

    # Let objects settle
    for _ in range(10):
        env.step(th.empty(0))

    # Turn on the stove
    stove.states[object_states.ToggledOn].set_value(True)

    # The first apple will be affected by the stove
    apples[0].set_position_orientation(
        position=stove.states[object_states.HeatSourceOrSink].link.get_position_orientation()[0] + th.tensor([0.11, 0, 0.1])
    )

    # The second apple will NOT be affected by the stove, but will be affected by the first apple once it's on fire.
    apples[1].set_position_orientation(
        position=stove.states[object_states.HeatSourceOrSink].link.get_position_orientation()[0] + th.tensor([0.32, 0, 0.1])
    )

    steps = 0
    max_steps = -1 if not short_exec else 1000

    # Main recording loop
    while steps != max_steps:
        env.step(th.empty(0))
        temps = [f"{apple.states[object_states.Temperature].get_value():>20.2f}" for apple in apples]
        print(f"{'Apple temperature:':<20}", *temps, end="\r")
        steps += 1

    # Always close env at the end
    og.clear()


if __name__ == "__main__":
    main()
Particle Applier and Remover Demo
This demo is useful for...
Understanding how a ParticleRemover or ParticleApplier object can be generated
Understanding how particles can be dynamically generated on objects
Understanding different methods for applying and removing particles via the ParticleRemover or ParticleApplier object
This demo loads in a washtowel and a table and lets you choose the ability configuration to enable the washtowel with. The washtowel will then proceed to either remove or apply particles dynamically on the table while moving.
import torch as th

import omnigibson as og
from omnigibson.macros import gm, macros
from omnigibson.object_states import Covered, ToggledOn
from omnigibson.utils.ui_utils import choose_from_options

# Set macros for this example
macros.object_states.particle_modifier.VISUAL_PARTICLES_REMOVAL_LIMIT = 1000
macros.object_states.particle_modifier.PHYSICAL_PARTICLES_REMOVAL_LIMIT = 8000
macros.object_states.particle_modifier.MAX_VISUAL_PARTICLES_APPLIED_PER_STEP = 4
macros.object_states.particle_modifier.MAX_PHYSICAL_PARTICLES_APPLIED_PER_STEP = 40
macros.object_states.covered.MAX_VISUAL_PARTICLES = 300

# Make sure object states and GPU dynamics are enabled (GPU dynamics needed for fluids)
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True
gm.ENABLE_HQ_RENDERING = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of ParticleApplier and ParticleRemover object states, which enable objects to either apply arbitrary
    particles and remove arbitrary particles from the simulator, respectively.

    Loads an empty scene with a table, and starts clean to allow particles to be applied or pre-covers the table
    with particles to be removed. The ParticleApplier / ParticleRemover state is applied to an imported cloth object
    and allowed to interact with the table, applying / removing particles from the table.

    NOTE: The key difference between ParticleApplier/Removers and ParticleSource/Sinks is that Applier/Removers
    requires contact (if using ParticleProjectionMethod.ADJACENCY) or overlap
    (if using ParticleProjectionMethod.PROJECTION) in order to spawn / remove particles, and generally only spawn
    particles at the contact points. ParticleSource/Sinks are special cases of ParticleApplier/Removers that
    always use ParticleProjectionMethod.PROJECTION and always spawn / remove particles within their projection volume,
    irregardless of overlap with other objects!
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    # Choose what configuration to load
    modifier_type = choose_from_options(
        options={
            "particleApplier": "Demo object's ability to apply particles in the simulator",
            "particleRemover": "Demo object's ability to remove particles from the simulator",
        },
        name="particle modifier type",
        random_selection=random_selection,
    )

    particle_types = ["salt", "water"]
    particle_type = choose_from_options(
        options={name: f"{name} particles will be applied or removed from the simulator" for name in particle_types},
        name="particle type",
        random_selection=random_selection,
    )

    table_cfg = dict(
        type="DatasetObject",
        name="table",
        category="breakfast_table",
        model="kwmfdg",
        bounding_box=[3.402, 1.745, 1.175],
        position=[0, 0, 0.98],
    )

    tool_cfg = dict(
        type="DatasetObject",
        name="tool",
        visual_only=True,
        position=[0, 0.3, 5.0],
    )

    if modifier_type == "particleRemover":
        if particle_type == "salt":
            # only ask this question if the modifier type is salt particleRemover
            method_type = choose_from_options(
                options={
                    "Adjacency": "Close proximity to the object will be used to determine whether particles can be applied / removed",
                    "Projection": "A Cone or Cylinder shape protruding from the object will be used to determine whether particles can be applied / removed",
                },
                name="modifier method type",
                random_selection=random_selection,
            )
        else:
            # If the particle type is water, the remover is always adjacency type
            method_type = "Adjacency"

        if method_type == "Adjacency":
            # use dishtowel to remove adjacent particles
            tool_cfg["category"] = "dishtowel"
            tool_cfg["model"] = "dtfspn"
            tool_cfg["bounding_box"] = [0.34245, 0.46798, 0.07]
        elif method_type == "Projection":
            # use vacuum to remove projections particles
            tool_cfg["category"] = "vacuum"
            tool_cfg["model"] = "wikhik"
    else:
        # If the modifier type is particleApplier, the applier is always projection type
        method_type = "Projection"
        if particle_type == "salt":
            # use salt shaker to apply salt particles
            tool_cfg["category"] = "salt_shaker"
            tool_cfg["model"] = "iomwtn"
        else:
            # use water atomizer to apply water particles
            tool_cfg["category"] = "water_atomizer"
            tool_cfg["model"] = "lfarai"

    # Create the scene config to load -- empty scene with a light and table
    cfg = {
        "env": {
            "rendering_frequency": 60,  # for HQ rendering
        },
        "scene": {
            "type": "Scene",
        },
        "objects": [table_cfg, tool_cfg],
    }

    # Sanity check inputs: Remover + Adjacency + Fluid will not work because we are using a visual_only
    # object, so contacts will not be triggered with this object

    # Load the environment, then immediately stop the simulator since we need to add in the modifier object
    env = og.Environment(configs=cfg)
    og.sim.stop()

    # Grab references to table and tool
    table = env.scene.object_registry("name", "table")
    tool = env.scene.object_registry("name", "tool")

    # Set the viewer camera appropriately
    og.sim.viewer_camera.set_position_orientation(
        position=th.tensor([-1.61340969, -1.79803028, 2.53167412]),
        orientation=th.tensor([0.46291845, -0.12381886, -0.22679218, 0.84790371]),
    )

    # Play the simulator and take some environment steps to let the objects settle
    og.sim.play()
    for _ in range(25):
        env.step(th.empty(0))

    # If we're removing particles, set the table's covered state to be True
    if modifier_type == "particleRemover":
        table.states[Covered].set_value(env.scene.get_system(particle_type), True)

        # Take a few steps to let particles settle
        for _ in range(25):
            env.step(th.empty(0))

    # If the particle remover/applier is projection type, set the turn on shaker
    if method_type == "Projection":
        tool.states[ToggledOn].set_value(True)

    # Enable camera teleoperation for convenience
    og.sim.enable_viewer_camera_teleoperation()

    tool.keep_still()

    # Set the modifier object to be in position to modify particles
    if modifier_type == "particleRemover" and method_type == "Projection":
        tool.set_position_orientation(
            position=[0, 0.3, 1.45],
            orientation=[0, 0, 0, 1.0],
        )
    elif modifier_type == "particleRemover" and method_type == "Adjacency":
        tool.set_position_orientation(
            position=[0, 0.3, 1.175],
            orientation=[0, 0, 0, 1.0],
        )
    elif modifier_type == "particleApplier" and particle_type == "water":
        tool.set_position_orientation(
            position=[0, 0.3, 1.4],
            orientation=[0.3827, 0, 0, 0.9239],
        )
    else:
        tool.set_position_orientation(
            position=[0, 0.3, 1.5],
            orientation=[0.7071, 0, 0.7071, 0],
        )

    # Move object in square around table
    deltas = [
        [130, th.tensor([-0.01, 0, 0])],
        [60, th.tensor([0, -0.01, 0])],
        [130, th.tensor([0.01, 0, 0])],
        [60, th.tensor([0, 0.01, 0])],
    ]
    for t, delta in deltas:
        for _ in range(t):
            tool.set_position_orientation(position=tool.get_position_orientation()[0] + delta)
            env.step(th.empty(0))

    # Always shut down environment at the end
    og.clear()


if __name__ == "__main__":
    main()
Particle Source and Sink Demo
This demo is useful for...
Understanding how a ParticleSource or ParticleSink object can be generated
Understanding how particles can be dynamically generated and destroyed via such objects
This demo loads in a sink, which is enabled with both the ParticleSource and ParticleSink states. The sink's particle source is located at the faucet spout and spawns a continuous stream of water particles, which is then destroyed ("sunk") by the sink's particle sink located at the drain.
Difference between ParticleApplier/Removers and ParticleSource/Sinks
The key difference between ParticleApplier/Removers and ParticleSource/Sinks is that Applier/Removers require contact (if using ParticleProjectionMethod.ADJACENCY) or overlap (if using ParticleProjectionMethod.PROJECTION) in order to spawn / remove particles, and generally only spawn particles at the contact points. ParticleSource/Sinks are special cases of ParticleApplier/Removers that always use ParticleProjectionMethod.PROJECTION and always spawn / remove particles within their projection volume, regardless of overlap with other objects.
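In configuration terms, the two are just different flavors of the same ability mechanism. The snippet below is a minimal, hypothetical sketch (the object names, categories, and model IDs are placeholders rather than real dataset assets) contrasting a projection-based particleApplier, which only applies particles where its projection volume overlaps another object, with a particleSource, which spawns particles into its projection volume unconditionally; the full sink demo below shows an actual configuration used in OmniGibson.

from omnigibson.utils.constants import ParticleModifyCondition, ParticleModifyMethod

# Hypothetical applier: only applies water where its projection volume overlaps another
# object, and only while the object is toggled on.
applier_cfg = dict(
    type="DatasetObject",
    name="sprayer",            # placeholder name
    category="spray_bottle",   # placeholder category
    model="xxxxxx",            # placeholder model ID
    abilities={
        "particleApplier": {
            "method": ParticleModifyMethod.PROJECTION,
            "conditions": {"water": [(ParticleModifyCondition.TOGGLEDON, True)]},
        },
    },
)

# Hypothetical source: always spawns water inside its projection volume while toggled on,
# regardless of overlap with other objects.
source_cfg = dict(
    type="DatasetObject",
    name="spout",              # placeholder name
    category="sink",           # placeholder category
    model="xxxxxx",            # placeholder model ID
    abilities={
        "particleSource": {
            "conditions": {"water": [(ParticleModifyCondition.TOGGLEDON, True)]},
            "initial_speed": 0.0,
        },
    },
)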
import torch as th

import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm
from omnigibson.utils.constants import ParticleModifyCondition

# Make sure object states are enabled and GPU dynamics are used
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True
gm.ENABLE_HQ_RENDERING = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of ParticleSource and ParticleSink object states, which enable objects to either spawn arbitrary
    particles or remove arbitrary particles from the simulator, respectively.

    Loads an empty scene with a sink, which is enabled with both the ParticleSource and ParticleSink states.
    The sink's particle source is located at the faucet spout and spawns a continuous stream of water particles,
    which is then destroyed ("sunk") by the sink's particle sink located at the drain.

    NOTE: The key difference between ParticleApplier/Removers and ParticleSource/Sinks is that Applier/Removers
    require contact (if using ParticleProjectionMethod.ADJACENCY) or overlap
    (if using ParticleProjectionMethod.PROJECTION) in order to spawn / remove particles, and generally only spawn
    particles at the contact points. ParticleSource/Sinks are special cases of ParticleApplier/Removers that
    always use ParticleProjectionMethod.PROJECTION and always spawn / remove particles within their projection volume,
    regardless of overlap with other objects!
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    # Create the scene config to load -- empty scene
    cfg = {
        "env": {
            "rendering_frequency": 60,  # for HQ rendering
        },
        "scene": {
            "type": "Scene",
        },
    }

    # Define objects to load into the environment
    sink_cfg = dict(
        type="DatasetObject",
        name="sink",
        category="sink",
        model="egwapq",
        bounding_box=[2.427, 0.625, 1.2],
        abilities={
            "toggleable": {},
            "particleSource": {
                "conditions": {
                    "water": [(ParticleModifyCondition.TOGGLEDON, True)],  # Must be toggled on for water source to be active
                },
                "initial_speed": 0.0,  # Water merely falls out of the spout
            },
            "particleSink": {
                "conditions": {
                    "water": [],  # No conditions, always sinking nearby particles
                },
            },
        },
        position=[0.0, 0, 0.42],
    )

    cfg["objects"] = [sink_cfg]

    # Create the environment!
    env = og.Environment(configs=cfg)

    # Set camera to ideal angle for viewing objects
    og.sim.viewer_camera.set_position_orientation(
        position=th.tensor([0.37860532, -0.65396566, 1.4067066]),
        orientation=th.tensor([0.49909498, 0.15201752, 0.24857062, 0.81609284]),
    )

    # Take a few steps to let the objects settle, and then turn on the sink
    for _ in range(10):
        env.step(th.empty(0))  # Empty action since no robots are in the scene

    sink = env.scene.object_registry("name", "sink")
    assert sink.states[object_states.ToggledOn].set_value(True)

    # Take a step, and save the state
    env.step(th.empty(0))
    initial_state = og.sim.dump_state()

    # Main simulation loop.
    max_steps = 1000
    max_iterations = -1 if not short_exec else 1
    iteration = 0

    try:
        while iteration != max_iterations:
            # Keep stepping until we reach max_steps, then reset
            steps = 0
            while steps != max_steps:
                steps += 1
                env.step(th.empty(0))
            og.log.info("Max steps reached; resetting.")

            # Reset to the initial state
            og.sim.load_state(initial_state)

            iteration += 1
    finally:
        # Always shut down environment at the end
        og.clear()


if __name__ == "__main__":
    main()
Kinematics Demo
This demo is useful for...
Understanding how to dynamically sample kinematic states for BEHAVIOR dataset objects
Understanding how to import additional objects after the environment is created
This demo procedurally generates a mini populated scene: a shelf with cracker boxes sampled inside it, and a cabinet with an open microwave on top, with plates and apples sampled both inside and on top of the microwave.
import torch as th

import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo to use the raycasting-based sampler to load objects onTop and/or inside another

    Loads a cabinet, a microwave open on top of it, and two plates with apples on top,
    one inside and one on top of the cabinet. Then loads a shelf and cracker boxes inside of it.
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    # Create the scene config to load -- empty scene
    cfg = {
        "scene": {
            "type": "Scene",
        },
    }

    # Define objects we want to sample at runtime
    microwave_cfg = dict(
        type="DatasetObject",
        name="microwave",
        category="microwave",
        model="hjjxmi",
        bounding_box=[0.768, 0.512, 0.392],
    )

    cabinet_cfg = dict(
        type="DatasetObject",
        name="cabinet",
        category="bottom_cabinet",
        model="bamfsz",
        bounding_box=[1.075, 1.131, 1.355],
    )

    plate_cfgs = [
        dict(
            type="DatasetObject",
            name=f"plate{i}",
            category="plate",
            model="iawoof",
            bounding_box=th.tensor([0.20, 0.20, 0.05]),
        )
        for i in range(2)
    ]

    apple_cfgs = [
        dict(
            type="DatasetObject",
            name=f"apple{i}",
            category="apple",
            model="agveuv",
            bounding_box=[0.065, 0.065, 0.077],
        )
        for i in range(4)
    ]

    shelf_cfg = dict(
        type="DatasetObject",
        name="shelf",
        category="shelf",
        model="pkgbcp",
        bounding_box=th.tensor([1.0, 0.4, 2.0]),
    )

    box_cfgs = [
        dict(
            type="DatasetObject",
            name=f"box{i}",
            category="box_of_crackers",
            model="cmdigf",
            bounding_box=th.tensor([0.2, 0.05, 0.3]),
        )
        for i in range(5)
    ]

    # Compose objects cfg
    objects_cfg = [
        microwave_cfg,
        cabinet_cfg,
        *plate_cfgs,
        *apple_cfgs,
        shelf_cfg,
        *box_cfgs,
    ]

    # Update their spawn positions so they don't collide immediately
    for i, obj_cfg in enumerate(objects_cfg):
        obj_cfg["position"] = [100 + i, 100 + i, 100 + i]

    cfg["objects"] = objects_cfg

    # Create the environment
    env = og.Environment(configs=cfg)
    env.step([])

    # Sample microwave and boxes
    sample_boxes_on_shelf(env)
    sample_microwave_plates_apples(env)

    max_steps = 100 if short_exec else -1
    step = 0
    while step != max_steps:
        env.step(th.empty(0))
        step += 1

    # Always close environment at the end
    og.clear()


def sample_microwave_plates_apples(env):
    microwave = env.scene.object_registry("name", "microwave")
    cabinet = env.scene.object_registry("name", "cabinet")
    plates = list(env.scene.object_registry("category", "plate"))
    apples = list(env.scene.object_registry("category", "apple"))

    # Place the cabinet at a pre-determined location on the floor
    og.log.info("Placing cabinet on the floor...")
    cabinet.set_orientation([0, 0, 0, 1.0])
    env.step(th.empty(0))
    offset = cabinet.get_position_orientation()[0][2] - cabinet.aabb_center[2]
    cabinet.set_position_orientation(position=th.tensor([1.0, 0, cabinet.aabb_extent[2] / 2]) + offset)
    env.step(th.empty(0))

    # Set microwave on top of the cabinet, open it, and step 100 times
    og.log.info("Placing microwave OnTop of the cabinet...")
    assert microwave.states[object_states.OnTop].set_value(cabinet, True)
    assert microwave.states[object_states.Open].set_value(True)
    og.log.info("Microwave placed.")
    for _ in range(50):
        env.step(th.empty(0))

    og.log.info("Placing plates")
    n_apples_per_plate = int(len(apples) / len(plates))
    for i, plate in enumerate(plates):
        # Put the 1st plate in the microwave
        if i == 0:
            og.log.info(f"Placing plate {i} Inside the microwave...")
            assert plate.states[object_states.Inside].set_value(microwave, True)
        else:
            og.log.info(f"Placing plate {i} OnTop the microwave...")
            assert plate.states[object_states.OnTop].set_value(microwave, True)

        og.log.info(f"Plate {i} placed.")
        for _ in range(50):
            env.step(th.empty(0))

        og.log.info(f"Placing {n_apples_per_plate} apples OnTop of the plate...")
        for j in range(n_apples_per_plate):
            apple_idx = i * n_apples_per_plate + j
            apple = apples[apple_idx]
            assert apple.states[object_states.OnTop].set_value(plate, True)
            og.log.info(f"Apple {apple_idx} placed.")
            for _ in range(50):
                env.step(th.empty(0))


def sample_boxes_on_shelf(env):
    shelf = env.scene.object_registry("name", "shelf")
    boxes = list(env.scene.object_registry("category", "box_of_crackers"))

    # Place the shelf at a pre-determined location on the floor
    og.log.info("Placing shelf on the floor...")
    shelf.set_orientation([0, 0, 0, 1.0])
    env.step(th.empty(0))
    offset = shelf.get_position_orientation()[0][2] - shelf.aabb_center[2]
    shelf.set_position_orientation(position=th.tensor([-1.0, 0, shelf.aabb_extent[2] / 2]) + offset)
    env.step(th.empty(0))  # One step is needed for the object to be fully initialized

    og.log.info("Shelf placed.")
    for _ in range(50):
        env.step(th.empty(0))

    og.log.info("Placing boxes...")
    for i, box in enumerate(boxes):
        box.states[object_states.Inside].set_value(shelf, True)
        og.log.info(f"Box {i} placed.")
        for _ in range(50):
            env.step(th.empty(0))


if __name__ == "__main__":
    main()
Attachment Demo
This demo is useful for...
Understanding how to leverage the Attached state
Understanding how to enable objects to be attachable
import torch as th
import yaml

import omnigibson as og
from omnigibson.macros import gm

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of attachment of different parts of a shelf
    """
    cfg = yaml.load(open(f"{og.example_config_path}/default_cfg.yaml", "r"), Loader=yaml.FullLoader)

    # Add objects that we want to create
    obj_cfgs = []
    obj_cfgs.append(
        dict(
            type="LightObject",
            name="light",
            light_type="Sphere",
            radius=0.01,
            intensity=5000,
            position=[0, 0, 1.0],
        )
    )

    base_z = 0.2
    delta_z = 0.01

    idx = 0
    obj_cfgs.append(
        dict(
            type="DatasetObject",
            name="shelf_back_panel",
            category="shelf_back_panel",
            model="gjsnrt",
            position=[0, 0, 0.01],
            abilities={"attachable": {}},
        )
    )
    idx += 1

    obj_cfgs.append(
        dict(
            type="DatasetObject",
            name="shelf_side_left",
            category="shelf_side",
            model="bxfkjj",
            position=[-0.4, 0, base_z + delta_z * idx],
            abilities={"attachable": {}},
        )
    )
    idx += 1

    obj_cfgs.append(
        dict(
            type="DatasetObject",
            name="shelf_side_right",
            category="shelf_side",
            model="yujrmw",
            position=[0.4, 0, base_z + delta_z * idx],
            abilities={"attachable": {}},
        )
    )
    idx += 1

    ys = [-0.93, -0.61, -0.29, 0.03, 0.35, 0.68]
    for i in range(6):
        obj_cfgs.append(
            dict(
                type="DatasetObject",
                name=f"shelf_shelf_{i}",
                category="shelf_shelf",
                model="ymtnqa",
                position=[0, ys[i], base_z + delta_z * idx],
                abilities={"attachable": {}},
            )
        )
        idx += 1

    obj_cfgs.append(
        dict(
            type="DatasetObject",
            name="shelf_top_0",
            category="shelf_top",
            model="pfiole",
            position=[0, 1.0, base_z + delta_z * idx],
            abilities={"attachable": {}},
        )
    )
    idx += 1

    obj_cfgs.append(
        dict(
            type="DatasetObject",
            name="shelf_baseboard",
            category="shelf_baseboard",
            model="hlhneo",
            position=[0, -10.97884506, base_z + delta_z * idx],
            abilities={"attachable": {}},
        )
    )
    idx += 1

    cfg["objects"] = obj_cfgs

    env = og.Environment(configs=cfg)

    # Set viewer camera pose
    og.sim.viewer_camera.set_position_orientation(
        position=th.tensor([-1.689292, -2.11718198, 0.93332228]),
        orientation=th.tensor([0.57687967, -0.22995655, -0.29022759, 0.72807814]),
    )

    for _ in range(10):
        env.step([])

    shelf_baseboard = env.scene.object_registry("name", "shelf_baseboard")
    shelf_baseboard.set_position_orientation(position=[0, -0.979, 0.21], orientation=[0, 0, 0, 1])
    shelf_baseboard.keep_still()
    # Lower the mass of the baseboard - otherwise, the gravity will create enough torque to break the joint
    shelf_baseboard.root_link.mass = 0.1

    input(
        "\n\nShelf parts fall to their correct poses and get automatically attached to the back panel.\n"
        "You can try to drag (Shift + Left-CLICK + Drag) parts of the shelf to break it apart (you may need to zoom out and drag with a larger force).\n"
        "Press [ENTER] to continue.\n"
    )

    for _ in range(5000):
        og.sim.step()

    og.shutdown()


if __name__ == "__main__":
    main()
Object Texture Demo
This demo is useful for...
Understanding how different object states can result in texture changes
Understanding how to enable objects with texture-changing states
Understanding how to dynamically modify object states
import torch as th

import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm, macros
from omnigibson.utils.constants import ParticleModifyMethod

# Make sure object states are enabled, we're using GPU dynamics, and HQ rendering is enabled
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True
gm.ENABLE_HQ_RENDERING = True


def main(random_selection=False, headless=False, short_exec=False):
    # Create the scene config to load -- empty scene plus a cabinet
    cfg = {
        "env": {
            "rendering_frequency": 60,  # for HQ rendering
        },
        "scene": {
            "type": "Scene",
            "floor_plane_visible": True,
        },
        "objects": [
            {
                "type": "DatasetObject",
                "name": "cabinet",
                "category": "bottom_cabinet",
                "model": "zuwvdo",
                "bounding_box": [1.595, 0.537, 1.14],
                "abilities": {
                    "freezable": {},
                    "cookable": {},
                    "burnable": {},
                    "saturable": {},
                    "particleRemover": {
                        "method": ParticleModifyMethod.ADJACENCY,
                        "conditions": {
                            # For a specific particle system, this specifies what conditions are required in order for the
                            # particle applier / remover to apply / remove particles associated with that system
                            # The list should contain functions with signature condition() --> bool,
                            # where True means the condition is satisfied
                            # In this case, we only allow our cabinet to absorb water, with no conditions needed.
                            # This is needed for the Saturated ("saturable") state so that we can modify the texture
                            # according to the water.
                            # NOTE: This will only change color if gm.ENABLE_HQ_RENDERING and gm.USE_GPU_DYNAMICS are
                            # enabled!
                            "water": [],
                        },
                    },
                },
                "position": [0, 0, 0.59],
            },
        ],
    }

    # Create the environment
    env = og.Environment(configs=cfg)

    # Set camera to appropriate viewing pose
    og.sim.viewer_camera.set_position_orientation(
        position=th.tensor([1.7789, -1.68822, 1.13551]),
        orientation=th.tensor([0.57065614, 0.20331904, 0.267029, 0.74947212]),
    )

    # Grab reference to object of interest
    obj = env.scene.object_registry("name", "cabinet")

    # Make sure all the appropriate states are in the object
    assert object_states.Frozen in obj.states
    assert object_states.Cooked in obj.states
    assert object_states.Burnt in obj.states
    assert object_states.Saturated in obj.states

    def report_states():
        # Make sure states are propagated before printing
        for i in range(5):
            env.step(th.empty(0))
        print("=" * 20)
        print("temperature:", obj.states[object_states.Temperature].get_value())
        print("obj is frozen:", obj.states[object_states.Frozen].get_value())
        print("obj is cooked:", obj.states[object_states.Cooked].get_value())
        print("obj is burnt:", obj.states[object_states.Burnt].get_value())
        print("obj is soaked:", obj.states[object_states.Saturated].get_value(env.scene.get_system("water")))

    # Report default states
    print("==== Initial state ====")
    report_states()

    # Notify user that we're about to freeze the object, and then freeze the object
    if not short_exec:
        input("\nObject will be frozen. Press ENTER to continue.")
    obj.states[object_states.Temperature].set_value(-50)
    report_states()

    # Notify user that we're about to cook the object, and then cook the object
    if not short_exec:
        input("\nObject will be cooked. Press ENTER to continue.")
    obj.states[object_states.Temperature].set_value(100)
    report_states()

    # Notify user that we're about to burn the object, and then burn the object
    if not short_exec:
        input("\nObject will be burned. Press ENTER to continue.")
    obj.states[object_states.Temperature].set_value(250)
    report_states()

    # Notify user that we're about to reset the object to its default state, and then reset state
    if not short_exec:
        input("\nObject will be reset to default state. Press ENTER to continue.")
    obj.states[object_states.Temperature].set_value(macros.object_states.temperature.DEFAULT_TEMPERATURE)
    obj.states[object_states.MaxTemperature].set_value(macros.object_states.temperature.DEFAULT_TEMPERATURE)
    report_states()

    # Notify user that we're about to soak the object, and then soak the object
    if not short_exec:
        input("\nObject will be saturated with water. Press ENTER to continue.")
    obj.states[object_states.Saturated].set_value(env.scene.get_system("water"), True)
    report_states()

    # Notify user that we're about to unsoak the object, and then unsoak the object
    if not short_exec:
        input("\nObject will be unsaturated with water. Press ENTER to continue.")
    obj.states[object_states.Saturated].set_value(env.scene.get_system("water"), False)
    report_states()

    # Close environment at the end
    if not short_exec:
        input("Demo completed. Press ENTER to shutdown environment.")
    og.clear()


if __name__ == "__main__":
    main()
Robots
These examples showcase how to interact with and leverage robot objects in OmniGibson.
Robot Visualizer Demo
This demo is useful for...
Understanding how to load a robot into OmniGibson after an environment is created
import torch as th

import omnigibson as og
from omnigibson.robots import REGISTERED_ROBOTS


def main(random_selection=False, headless=False, short_exec=False):
    """
    Robot demo
    Loads all robots in an empty scene, generate random actions
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    # Create empty scene with no robots in it initially
    cfg = {
        "scene": {
            "type": "Scene",
        }
    }

    env = og.Environment(configs=cfg)

    og.sim.stop()

    # Iterate over all robots and demo their motion
    for robot_name, robot_cls in REGISTERED_ROBOTS.items():
        # Create and import robot
        robot = robot_cls(
            name=robot_name,
            obs_modalities=[],  # We're just moving robots around so don't load any observation modalities
        )
        env.scene.add_object(robot)

        # At least one step is always needed while sim is playing for any imported object to be fully initialized
        og.sim.play()
        og.sim.step()

        # Reset robot and make sure it's not moving
        robot.reset()
        robot.keep_still()

        # Log information
        og.log.info(f"Loaded {robot_name}")
        og.log.info(f"Moving {robot_name}")

        if not headless:
            # Set viewer in front facing robot
            og.sim.viewer_camera.set_position_orientation(
                position=th.tensor([2.69918369, -3.63686664, 4.57894564]),
                orientation=th.tensor([0.39592411, 0.1348514, 0.29286304, 0.85982]),
            )
            og.sim.enable_viewer_camera_teleoperation()

        # Hold still briefly so viewer can see robot
        for _ in range(100):
            og.sim.step()

        # Then apply random actions for a bit
        if robot_name not in ["BehaviorRobot"]:
            for _ in range(30):
                action_lo, action_hi = -0.1, 0.1
                action = th.rand(robot.action_dim) * (action_hi - action_lo) + action_lo
                if robot_name == "Tiago":
                    tiago_lo, tiago_hi = -0.1, 0.1
                    action[robot.base_action_idx] = (
                        th.rand(len(robot.base_action_idx)) * (tiago_hi - tiago_lo) + tiago_lo
                    )
                for _ in range(10):
                    env.step(action)

        # Stop the simulator and remove the robot
        og.sim.stop()
        env.scene.remove_object(obj=robot)

    # Always shut down the environment cleanly at the end
    og.clear()


if __name__ == "__main__":
    main()
Robot Control Demo
This demo is useful for...
Understanding how different controllers can be used to control robots
Understanding how to teleoperate a robot through external commands
"""Example script demo'ing robot control.Options for random actions, as well as selection of robot action space"""importtorchasthimportomnigibsonasogimportomnigibson.lazyaslazyfromomnigibson.macrosimportgmfromomnigibson.robotsimportREGISTERED_ROBOTSfromomnigibson.utils.ui_utilsimportKeyboardRobotController,choose_from_optionsCONTROL_MODES=dict(random="Use autonomous random actions (default)",teleop="Use keyboard control",)SCENES=dict(Rs_int="Realistic interactive home environment (default)",empty="Empty environment with no objects",)# Don't use GPU dynamics and use flatcache for performance boostgm.USE_GPU_DYNAMICS=Falsegm.ENABLE_FLATCACHE=Truedefchoose_controllers(robot,random_selection=False):""" For a given robot, iterates over all components of the robot, and returns the requested controller type for each component. :param robot: BaseRobot, robot class from which to infer relevant valid controller options :param random_selection: bool, if the selection is random (for automatic demo execution). Default False :return dict: Mapping from individual robot component (e.g.: base, arm, etc.) to selected controller names """# Create new dict to store responses from usercontroller_choices=dict()# Grab the default controller config so we have the registry of all possible controller optionsdefault_config=robot._default_controller_config# Iterate over all components in robotcontroller_names=robot.controller_orderforcontroller_nameincontroller_names:controller_options=default_config[controller_name]# Select controlleroptions=list(sorted(controller_options.keys()))choice=choose_from_options(options=options,name=f"{controller_name} controller",random_selection=random_selection,)# Add to user responsescontroller_choices[controller_name]=choicereturncontroller_choicesdefmain(random_selection=False,headless=False,short_exec=False,quickstart=False):""" Robot control demo with selection Queries the user to select a robot, the controllers, a scene and a type of input (random actions or teleop) """og.log.info(f"Demo {__file__}\n "+"*"*80+"\n Description:\n"+main.__doc__+"*"*80)# Choose scene to loadscene_model="Rs_int"ifnotquickstart:scene_model=choose_from_options(options=SCENES,name="scene",random_selection=random_selection)# Choose robot to createrobot_name="Fetch"ifnotquickstart:robot_name=choose_from_options(options=list(sorted(REGISTERED_ROBOTS.keys())),name="robot",random_selection=random_selection)scene_cfg=dict()ifscene_model=="empty":scene_cfg["type"]="Scene"else:scene_cfg["type"]="InteractiveTraversableScene"scene_cfg["scene_model"]=scene_model# Add the robot we want to loadrobot0_cfg=dict()robot0_cfg["type"]=robot_namerobot0_cfg["obs_modalities"]=["rgb"]robot0_cfg["action_type"]="continuous"robot0_cfg["action_normalize"]=True# Compile configcfg=dict(scene=scene_cfg,robots=[robot0_cfg])# Create the environmentenv=og.Environment(configs=cfg)# Choose robot controller to userobot=env.robots[0]controller_choices={"base":"DifferentialDriveController","arm_0":"InverseKinematicsController","gripper_0":"MultiFingerGripperController","camera":"JointController",}ifnotquickstart:controller_choices=choose_controllers(robot=robot,random_selection=random_selection)# Choose control modeifrandom_selection:control_mode="random"elifquickstart:control_mode="teleop"else:control_mode=choose_from_options(options=CONTROL_MODES,name="control mode")# Update the control mode of the 
robotcontroller_config={component:{"name":name}forcomponent,nameincontroller_choices.items()}robot.reload_controllers(controller_config=controller_config)# Because the controllers have been updated, we need to update the initial state so the correct controller state# is preservedenv.scene.update_initial_file()# Update the simulator's viewer camera's pose so it points towards the robotog.sim.viewer_camera.set_position_orientation(position=th.tensor([1.46949,-3.97358,2.21529]),orientation=th.tensor([0.56829048,0.09569975,0.13571846,0.80589577]),)# Reset environment and robotenv.reset()robot.reset()# Create teleop controlleraction_generator=KeyboardRobotController(robot=robot)# Register custom binding to reset the environmentaction_generator.register_custom_keymapping(key=lazy.carb.input.KeyboardInput.R,description="Reset the robot",callback_fn=lambda:env.reset(),)# Print out relevant keyboard info if using keyboard teleopifcontrol_mode=="teleop":action_generator.print_keyboard_teleop_info()# Other helpful user infoprint("Running demo.")print("Press ESC to quit")# Loop control until user quitsmax_steps=-1ifnotshort_execelse100step=0whilestep!=max_steps:action=(action_generator.get_random_action()ifcontrol_mode=="random"elseaction_generator.get_teleop_action())env.step(action=action)step+=1# Always shut down the environment cleanly at the endog.clear()if__name__=="__main__":importargparseparser=argparse.ArgumentParser(description="Teleoperate a robot in a BEHAVIOR scene.")parser.add_argument("--quickstart",action="store_true",help="Whether the example should be loaded with default settings for a quick start.",)args=parser.parse_args()main(quickstart=args.quickstart)
Robot Grasping Demo
This demo is useful for...
Understanding the difference between physical and sticky grasping
Understanding how to teleoperate a robot through external commands
This demo lets you choose a grasping mode and then loads a Fetch robot and a cube on a table. You can then teleoperate the robot to grasp the cube, observing the difference in grasping behavior based on the grasping mode chosen. Here, physical means natural friction is required to hold objects, while sticky means that objects are constrained to the robot's gripper once contact is made.
"""Example script demo'ing robot manipulation control with grasping."""importtorchasthimportomnigibsonasogfromomnigibson.macrosimportgmfromomnigibson.sensorsimportVisionSensorfromomnigibson.utils.ui_utilsimportKeyboardRobotController,choose_from_optionsGRASPING_MODES=dict(sticky="Sticky Mitten - Objects are magnetized when they touch the fingers and a CLOSE command is given",assisted="Assisted Grasping - Objects are magnetized when they touch the fingers, are within the hand, and a CLOSE command is given",physical="Physical Grasping - No additional grasping assistance applied",)# Don't use GPU dynamics and Use flatcache for performance boostgm.USE_GPU_DYNAMICS=Falsegm.ENABLE_FLATCACHE=Truedefmain(random_selection=False,headless=False,short_exec=False):""" Robot grasping mode demo with selection Queries the user to select a type of grasping mode """og.log.info(f"Demo {__file__}\n "+"*"*80+"\n Description:\n"+main.__doc__+"*"*80)# Choose type of graspinggrasping_mode=choose_from_options(options=GRASPING_MODES,name="grasping mode",random_selection=random_selection)# Create environment configuration to usescene_cfg=dict(type="Scene")robot0_cfg=dict(type="Fetch",obs_modalities=["rgb"],# we're just doing a grasping demo so we don't need all observation modalitiesaction_type="continuous",action_normalize=True,grasping_mode=grasping_mode,)# Define objects to loadtable_cfg=dict(type="DatasetObject",name="table",category="breakfast_table",model="lcsizg",bounding_box=[0.5,0.5,0.8],fixed_base=True,position=[0.7,-0.1,0.6],orientation=[0,0,0.707,0.707],)chair_cfg=dict(type="DatasetObject",name="chair",category="straight_chair",model="amgwaw",bounding_box=None,fixed_base=False,position=[0.45,0.65,0.425],orientation=[0,0,-0.9990215,-0.0442276],)box_cfg=dict(type="PrimitiveObject",name="box",primitive_type="Cube",rgba=[1.0,0,0,1.0],size=0.05,position=[0.53,-0.1,0.97],)# Compile configcfg=dict(scene=scene_cfg,robots=[robot0_cfg],objects=[table_cfg,chair_cfg,box_cfg])# Create the environmentenv=og.Environment(configs=cfg)# Reset the robotrobot=env.robots[0]robot.set_position_orientation(position=[0,0,0])robot.reset()robot.keep_still()# Make the robot's camera(s) high-resforsensorinrobot.sensors.values():ifisinstance(sensor,VisionSensor):sensor.image_height=720sensor.image_width=720# Update the simulator's viewer camera's pose so it points towards the robotog.sim.viewer_camera.set_position_orientation(position=th.tensor([-2.39951,2.26469,2.66227]),orientation=th.tensor([-0.23898481,0.48475231,0.75464013,-0.37204802]),)# Create teleop controlleraction_generator=KeyboardRobotController(robot=robot)# Print out relevant keyboard info if using keyboard teleopaction_generator.print_keyboard_teleop_info()# Other helpful user infoprint("Running demo with grasping mode {}.".format(grasping_mode))print("Press ESC to quit")# Loop control until user quitsmax_steps=-1ifnotshort_execelse100step=0whilestep!=max_steps:action=action_generator.get_random_action()ifrandom_selectionelseaction_generator.get_teleop_action()for_inrange(10):env.step(action)step+=1# Always shut down the environment cleanly at the endog.clear()if__name__=="__main__":main()
Advanced: IK Demo
This demo is useful for...
Understanding how to construct your own IK functionality using omniverse's native lula library without explicitly utilizing all of OmniGibson's class abstractions
Understanding how to manipulate the simulator at a lower-level than the main Environment entry point
import argparse
import time

import torch as th

import omnigibson as og
import omnigibson.lazy as lazy
from omnigibson.objects import PrimitiveObject
from omnigibson.utils.control_utils import IKSolver


def main(random_selection=False, headless=False, short_exec=False):
    """
    Minimal example of usage of inverse kinematics solver

    This example showcases how to construct your own IK functionality using omniverse's native lula library
    without explicitly utilizing all of OmniGibson's class abstractions, and also showcases how to manipulate
    the simulator at a lower-level than the main Environment entry point.
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    # Assuming that if random_selection=True, headless=True, short_exec=True, we are calling it from tests and we
    # do not want to parse args (it would fail because the calling function is pytest "testfile.py")
    if not (random_selection and headless and short_exec):
        parser = argparse.ArgumentParser()
        parser.add_argument(
            "--programmatic",
            "-p",
            dest="programmatic_pos",
            action="store_true",
            help="if the IK solvers should be used with the GUI or programmatically",
        )
        args = parser.parse_args()
        programmatic_pos = args.programmatic_pos
    else:
        programmatic_pos = True

    # Import scene and robot (Fetch)
    scene_cfg = {"type": "Scene"}
    # Create Fetch robot
    # Note that since we only care about IK functionality, we fix the base (this also makes the robot more stable)
    # (any object can also have its fixed_base attribute set to True!)
    # Note that since we're going to be setting joint position targets, we also need to make sure the robot's arm joints
    # (which includes the trunk) are being controlled using joint positions
    robot_cfg = {
        "type": "Fetch",
        "fixed_base": True,
        "controller_config": {
            "arm_0": {
                "name": "NullJointController",
                "motor_type": "position",
            }
        },
    }
    cfg = dict(scene=scene_cfg, robots=[robot_cfg])
    env = og.Environment(configs=cfg)

    # Update the viewer camera's pose so that it points towards the robot
    og.sim.viewer_camera.set_position_orientation(
        position=th.tensor([4.32248, -5.74338, 6.85436]),
        orientation=th.tensor([0.39592, 0.13485, 0.29286, 0.85982]),
    )

    robot = env.robots[0]

    # Set robot base at the origin
    robot.set_position_orientation(position=th.tensor([0, 0, 0]), orientation=th.tensor([0, 0, 0, 1]))
    # At least one simulation step while the simulator is playing must occur for the robot (or in general, any object)
    # to be fully initialized after it is imported into the simulator
    og.sim.play()
    og.sim.step()
    # Make sure none of the joints are moving
    robot.keep_still()
    # Since this demo aims to showcase how users can directly control the robot with IK,
    # we will need to disable the built-in controllers in OmniGibson
    robot.control_enabled = False

    # Create the IK solver -- note that we are controlling both the trunk and the arm since both are part of the
    # controllable kinematic chain for the end-effector!
    control_idx = th.cat([robot.trunk_control_idx, robot.arm_control_idx[robot.default_arm]])
    ik_solver = IKSolver(
        robot_description_path=robot.robot_arm_descriptor_yamls[robot.default_arm],
        robot_urdf_path=robot.urdf_path,
        reset_joint_pos=robot.get_joint_positions()[control_idx],
        eef_name=robot.eef_link_names[robot.default_arm],
    )

    # Define a helper function for executing specific end-effector commands using the ik solver
    def execute_ik(pos, quat=None, max_iter=100):
        og.log.info("Querying joint configuration to current marker position")
        # Grab the joint positions in order to reach the desired pose target
        joint_pos = ik_solver.solve(
            target_pos=pos,
            target_quat=quat,
            tolerance_pos=0.002,
            tolerance_quat=0.01,
            weight_pos=20.0,
            weight_quat=0.05,
            max_iterations=max_iter,
            initial_joint_pos=robot.get_joint_positions()[control_idx],
        )
        if joint_pos is not None:
            og.log.info("Solution found. Setting new arm configuration.")
            robot.set_joint_positions(joint_pos, indices=control_idx, drive=True)
        else:
            og.log.info("EE position not reachable.")
        og.sim.step()

    if programmatic_pos or headless:
        # Sanity check IK using pre-defined hardcoded positions
        query_positions = [[1, 0, 0.8], [1, 1, 1], [0.5, 0.5, 0], [0.5, 0.5, 0.5]]
        for query_pos in query_positions:
            execute_ik(query_pos)
            time.sleep(2)
    else:
        # Create a visual marker to be moved by the user, representing desired end-effector position
        marker = PrimitiveObject(
            relative_prim_path="/marker",
            name="marker",
            primitive_type="Sphere",
            radius=0.03,
            visual_only=True,
            rgba=[1.0, 0, 0, 1.0],
        )
        env.scene.add_object(marker)

        # Get initial EE position and set marker to that location
        command = robot.get_eef_position()
        marker.set_position_orientation(position=command)
        og.sim.step()

        # Setup callbacks for grabbing keyboard inputs from omni
        exit_now = False

        def keyboard_event_handler(event, *args, **kwargs):
            nonlocal command, exit_now
            # Check if we've received a key press or repeat
            if (
                event.type == lazy.carb.input.KeyboardEventType.KEY_PRESS
                or event.type == lazy.carb.input.KeyboardEventType.KEY_REPEAT
            ):
                if event.input == lazy.carb.input.KeyboardInput.ENTER:
                    # Execute the command
                    execute_ik(pos=command)
                elif event.input == lazy.carb.input.KeyboardInput.ESCAPE:
                    # Quit
                    og.log.info("Quit.")
                    exit_now = True
                else:
                    # We see if we received a valid delta command, and if so, we update our command and visualized
                    # marker position
                    delta_cmd = input_to_xyz_delta_command(inp=event.input)
                    if delta_cmd is not None:
                        command = command + delta_cmd
                        marker.set_position_orientation(position=command)
                        og.sim.step()

            # Callback must return True if valid
            return True

        # Hook up the callback function with omni's user interface
        appwindow = lazy.omni.appwindow.get_default_app_window()
        input_interface = lazy.carb.input.acquire_input_interface()
        keyboard = appwindow.get_keyboard()
        input_interface.subscribe_to_keyboard_events(keyboard, keyboard_event_handler)

        # Print out helpful information to the user
        print_message()

        # Loop until the user requests an exit
        while not exit_now:
            og.sim.step()

    # Always shut the simulation down cleanly at the end
    og.clear()


def input_to_xyz_delta_command(inp, delta=0.01):
    mapping = {
        lazy.carb.input.KeyboardInput.W: th.tensor([delta, 0, 0]),
        lazy.carb.input.KeyboardInput.S: th.tensor([-delta, 0, 0]),
        lazy.carb.input.KeyboardInput.DOWN: th.tensor([0, 0, -delta]),
        lazy.carb.input.KeyboardInput.UP: th.tensor([0, 0, delta]),
        lazy.carb.input.KeyboardInput.A: th.tensor([0, delta, 0]),
        lazy.carb.input.KeyboardInput.D: th.tensor([0, -delta, 0]),
    }

    return mapping.get(inp)


def print_message():
    print("*" * 80)
    print("Move the marker to a desired position to query IK and press ENTER")
    print("W/S: move marker further away or closer to the robot")
    print("A/D: move marker to the left or the right of the robot")
    print("UP/DOWN: move marker up and down")
    print("ESC: quit")


if __name__ == "__main__":
    main()
Simulator
These examples showcase useful functionality from OmniGibson's monolithic Simulator object.
What's the difference between Environment and Simulator?
The Simulator (og.sim):
handles importing scenes and objects into the actual simulation
directly interfaces with the underlying physics engine
The Environment class thinly wraps the Simulator's core functionality, by:
providing convenience functions for automatically importing a predefined scene, object(s), and robot(s) (via the cfg argument), as well as a task
providing an OpenAI Gym interface for stepping through the simulation
While most of the core functionality in Environment (as well as more fine-grained physics control) can be replicated via direct calls to the Simulator (og.sim), doing so requires a deeper understanding of OmniGibson's infrastructure and is not recommended for new users.
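As a rough illustration, the sketch below (the scene config and step counts are arbitrary) contrasts the Environment's Gym-style loop with the lower-level og.sim calls that many of the demos above use directly:

import omnigibson as og

# High-level: the Environment imports the scene/robots from a config and exposes a Gym-style API
env = og.Environment(configs={"scene": {"type": "Scene"}})
env.reset()
for _ in range(10):
    action = env.action_space.sample()
    state, reward, terminated, truncated, info = env.step(action)

# Low-level: og.sim exposes the Simulator directly, as seen throughout the demos above
og.sim.stop()                                # stop physics so objects can be added or removed
og.sim.play()                                # resume physics
for _ in range(10):
    og.sim.step()                            # advance physics + rendering, with no task/reward logic
og.sim.enable_viewer_camera_teleoperation()  # convenience utilities also live on og.sim
og.clear()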
State Saving and Loading Demo
This demo is useful for...
Understanding how to interact with objects using the mouse
Understanding how to save the active simulator state to a file
Understanding how to restore the simulator state from a given file
This demo loads a stripped-down scene with the Turtlebot robot and lets you interact with objects to modify the scene. The resulting simulator state is then saved to a .json file and subsequently restored in the simulation.
import os

import torch as th

import omnigibson as og
import omnigibson.lazy as lazy
from omnigibson.utils.ui_utils import KeyboardEventHandler

TEST_OUT_PATH = ""  # Define output directory here.


def main(random_selection=False, headless=False, short_exec=False):
    """
    Prompts the user to select whether they are saving or loading an environment, and interactively
    shows how an environment can be saved or restored.
    """
    og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)

    cfg = {
        "scene": {
            "type": "InteractiveTraversableScene",
            "scene_model": "Rs_int",
            "load_object_categories": ["floors", "walls", "bed", "bottom_cabinet", "chair"],
        },
        "robots": [
            {
                "type": "Turtlebot",
                "obs_modalities": ["rgb", "depth"],
            },
        ],
    }

    # Create the environment
    env = og.Environment(configs=cfg)

    # Set the camera to a good angle
    def set_camera_pose():
        og.sim.viewer_camera.set_position_orientation(
            position=th.tensor([-0.229375, -3.40576, 7.26143]),
            orientation=th.tensor([0.27619733, -0.00230233, -0.00801152, 0.9610648]),
        )

    set_camera_pose()

    # Give user instructions, and then loop until completed
    completed = short_exec
    if not short_exec and not random_selection:
        # Notify user to manipulate environment until ready, then press Z to exit
        print()
        print("Modify the scene by SHIFT + left clicking objects and dragging them. Once finished, press Z.")

        # Register callback so user knows to press space once they're done manipulating the scene
        def complete_loop():
            nonlocal completed
            completed = True

        KeyboardEventHandler.add_keyboard_callback(lazy.carb.input.KeyboardInput.Z, complete_loop)
    while not completed:
        action_lo, action_hi = -1, 1
        env.step(th.rand(env.robots[0].action_dim) * (action_hi - action_lo) + action_lo)

    print("Completed scene modification, saving scene...")
    save_path = os.path.join(TEST_OUT_PATH, "saved_stage.json")
    og.sim.save(json_paths=[save_path])

    print("Re-loading scene...")
    og.clear()
    og.sim.restore(scene_files=[save_path])

    # env is no longer valid after og.clear()
    del env

    # Take a sim step and play
    og.sim.step()
    og.sim.play()
    set_camera_pose()

    # Loop until user terminates
    completed = short_exec
    if not short_exec and not random_selection:
        # Notify user to manipulate environment until ready, then press Z to exit
        print()
        print("View reloaded scene. Once finished, press Z.")

        # Register callback so user knows to press space once they're done manipulating the scene
        KeyboardEventHandler.add_keyboard_callback(lazy.carb.input.KeyboardInput.Z, complete_loop)
    while not completed:
        og.sim.step()


if __name__ == "__main__":
    main()
Rendering
These examples showcase how to change renderer settings in OmniGibson.