Introducing JackRabbot

The Social Navigation Robot


Stanford Vision and Learning Lab

About

Humans have the innate ability to "read" one another. When people walk in a crowded public space such as a sidewalk, an airport terminal, or a shopping mall, they obey a large number of (unwritten) common-sense rules and comply with social conventions. For instance, as they consider where to move next, they respect personal space and yield right-of-way. The ability to model these “rules” and use them to understand and predict human motion in complex real-world environments is extremely valuable for the next generation of social robots.

Our work at the Stanford Vision and Learning Lab aims to make practical a new generation of autonomous agents that can operate safely alongside humans in dynamic, crowded environments such as terminals, malls, and campuses. This enhanced level of proficiency opens up a broad range of new applications in which robots can replace or augment human effort. One class of tasks now amenable to automation is the delivery of small items, such as purchased goods, mail, food, tools, and documents, via spaces normally reserved for pedestrians.

In this project, we are exploring this opportunity by developing a demonstration platform to make deliveries locally within the Stanford campus. The Stanford “JackRabbot”, which takes its name from the nimble yet shy jackrabbit, is a self-navigating automated electric delivery cart capable of carrying small payloads. In contrast to autonomous cars, which operate on streets and highways, JackRabbot is designed to operate in pedestrian spaces, at a maximum speed of five miles per hour.

News & Press Releases

Team

Silvio Savarese

Assistant Professor
ssilvio@stanford.edu
Web

Roberto Martín-Martín

Postdoctoral Researcher
robertom@stanford.edu

Patrick Goebel

Research Scientist
pgoebel@stanford.edu
Web

Alan Federman

Systems Integration Engineer
alan.federman@gmail.com

Amir Sadeghian

PhD Candidate
amirabs@stanford.edu
Web

JunYoung Gwak

PhD Candidate
jgwak@stanford.edu

Kevin Chen

PhD Candidate
kevin.chen@cs.stanford.edu

Pin Pin Tea-mangkornpan

PhD Candidate
pinnaree@stanford.edu

Xiaoxue Zang

Master's Student
xzang@stanford.edu

Richard Martinez

Master’s Student
rdm@stanford.edu

Ashwini Pokle

Master's Student
ashwinipokle@stanford.edu

Zhangyuan Wang

Master's Student
zywang17@stanford.edu

Max Chang

Undergraduate Student
mchang4@stanford.edu

Vineet Kosaraju

Undergraduate Student
vineetk@stanford.edu
Web

Ella Hofmann-Coyle

Undergraduate Student
ellahofm@stanford.edu

Amy Chou

Undergraduate Student
amyachou@stanford.edu

Mihir Patel

Undergraduate Student
mihirp@stanford.edu

Michael Abbot

Visiting Scholar
Web

Noriaki Hirose

Visiting Scholar
Toyota Central R&D
hirose@stanford.edu

Seyed Hamid Rezatofighi

Endeavour Research Fellow
hamidrt@stanford.edu

Tin Tin Wisniewski

Faculty Administrator and Coordinator
tintinyw@stanford.edu


Alumni

Alexandre Alahi

Assistant Professor (EPFL)
alexandre.alahi@epfl.ch
Web

Marynel Vázquez

Assistant Professor (Yale University)
marynel.vazquez@yale.edu
Web

Vignesh Ramanathan

Research Scientist
Facebook
vigneshram.iitkgp@gmail.com

Lin Sun

Visiting Student
Stanford
PhD Candidate, HKUST
sunlin1@stanford.edu 

Alexander Robicquet

Master's Student
arobicqu@stanford.edu

Zhenkai Wang

Master's Student
zackwang@stanford.edu

Junwei Yang

Master's Student
junweiy@stanford.edu

Agrim Gupta

Master's Student
agrim@stanford.edu

Chris Cruise

Master's Student
ccruise@stanford.edu

Hans Magnus Espelund Ewald

Master's Student
hmewald@stanford.edu

Vincent Chow

Master's Student
chowv@stanford.edu

Isabella Phung

High School Student
isabellaphung@gmail.com

Neha Govil

High School Student
neha.j.govil@gmail.com

Projects

Deep Social Navigation


Autonomous robot navigation in known environments encompasses two main problems: 1) finding a safe path for a robot to reach a desired goal location, and 2) following the path while adapting to environmental conditions. While global planners can efficiently find optimal motion paths, translating these paths into robot commands – which is traditionally
More...
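
Below is a minimal sketch of that two-level structure, assuming a simple occupancy grid and a differential-drive base. The A* planner, the proportional waypoint follower, and all parameter values are illustrative, not JackRabbot's actual stack; note the velocity cap of roughly 2.2 m/s, matching the five-mile-per-hour limit mentioned above.

    # Illustrative sketch: grid A* global planner + proportional local controller.
    import heapq, math

    def astar(grid, start, goal):
        """grid[r][c] == 0 means free; returns a list of cells from start to goal."""
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
        frontier, came_from, cost = [(h(start), start)], {start: None}, {start: 0}
        while frontier:
            _, cur = heapq.heappop(frontier)
            if cur == goal:
                path = []
                while cur: path.append(cur); cur = came_from[cur]
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dr, cur[1] + dc)
                if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                        and grid[nxt[0]][nxt[1]] == 0):
                    c = cost[cur] + 1
                    if c < cost.get(nxt, float("inf")):
                        cost[nxt], came_from[nxt] = c, cur
                        heapq.heappush(frontier, (c + h(nxt), nxt))
        return []   # no path found

    def follow(pose, waypoint, k_lin=0.5, k_ang=1.5, v_max=2.2):
        """Proportional controller: pose = (x, y, heading); returns (v, w).
        v_max ~ 2.2 m/s corresponds to the ~5 mph pedestrian-space speed cap."""
        dx, dy = waypoint[0] - pose[0], waypoint[1] - pose[1]
        err = math.atan2(dy, dx) - pose[2]
        err = math.atan2(math.sin(err), math.cos(err))   # wrap to [-pi, pi]
        v = min(k_lin * math.hypot(dx, dy), v_max)
        return v, k_ang * err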

Crowdsourcing Social Navigation

We want JackRabbot to operate safely and “politely” among humans. Politeness here means respecting social conventions that are difficult to define analytically for every circumstance, such as personal space or the dynamics of group formations. To teach JR how to navigate safely and politely, we are developing deep learning models that control JR’s motion (see Deep
More...
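
As a toy illustration of this kind of learning setup, the sketch below performs behavior cloning on crowdsourced demonstrations: a small network regresses demonstrated velocity commands from a state vector. The state encoding (goal offset plus nearest-pedestrian offsets) and the architecture are our own assumptions, not the lab's actual model.

    # Hedged sketch: behavior cloning of velocity commands from demonstrations.
    import torch
    import torch.nn as nn

    class SocialPolicy(nn.Module):
        def __init__(self, n_neighbors=5):
            super().__init__()
            # Input: (goal dx, dy) + (dx, dy) for each nearby pedestrian (assumed encoding).
            self.net = nn.Sequential(
                nn.Linear(2 + 2 * n_neighbors, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, 2))   # output: (linear, angular) velocity

        def forward(self, x):
            return self.net(x)

    policy = SocialPolicy()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    def train_step(states, demo_velocities):
        """One supervised step: match the velocities humans demonstrated."""
        opt.zero_grad()
        loss = loss_fn(policy(states), demo_velocities)
        loss.backward()
        opt.step()
        return loss.item()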

Traversability Estimation


Our goal is for JackRabbot to be out in the world, navigating human environments safely. To do so, JR needs to perceive which parts of the surrounding environment are safe to traverse and which are not, whether because of possible damage to the robot or danger to people. In the GONet line of work we develop vision-based methods to estimate
More...
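
GONet itself is built on generative models; as a much simpler supervised stand-in, the sketch below maps a forward-facing RGB image to a traversability probability. The architecture and sizes are illustrative only.

    # Simplified stand-in (not GONet): small CNN predicting P(traversable).
    import torch
    import torch.nn as nn

    class TraversabilityNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1))   # (B, 32, 1, 1)
            self.head = nn.Linear(32, 1)

        def forward(self, img):            # img: (B, 3, H, W) in [0, 1]
            p = self.head(self.features(img).flatten(1))
            return torch.sigmoid(p)        # probability the area ahead is safe

    net = TraversabilityNet()
    prob = net(torch.rand(1, 3, 128, 128))  # e.g. stop if prob < 0.5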

People Detection and Tracking

3D detection and tracking of pedestrians and other agents (cars, bikes) is a critical component of autonomous navigation. We are working to fully exploit 3D LIDAR data in conjunction with existing, mature approaches for processing 2D visual data in order to achieve better detection and tracking. The goal is a complete system that can take stereo camera and LIDAR
More...
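
One common fusion step, sketched below under our own assumptions (known camera intrinsics, LIDAR points already transformed into the camera frame), lifts each 2D person detection to 3D by taking the median depth of the LIDAR points that project inside its bounding box. This is an illustration, not the project's full pipeline.

    # Hedged sketch: lift 2D detections to 3D using projected LIDAR depths.
    import numpy as np

    def lift_detections(boxes, lidar_xyz, K):
        """boxes: (N, 4) [x1, y1, x2, y2]; lidar_xyz: (M, 3) in camera frame;
        K: (3, 3) intrinsics. Returns one (x, y, z) per box, or None."""
        pts = lidar_xyz[lidar_xyz[:, 2] > 0]      # keep points in front of camera
        uv = (K @ pts.T).T
        uv = uv[:, :2] / uv[:, 2:3]               # perspective divide -> pixels
        results = []
        for x1, y1, x2, y2 in boxes:
            inside = ((uv[:, 0] >= x1) & (uv[:, 0] <= x2) &
                      (uv[:, 1] >= y1) & (uv[:, 1] <= y2))
            if not inside.any():
                results.append(None)              # no LIDAR support for this box
                continue
            z = np.median(pts[inside, 2])         # robust depth for the detection
            u, v = (x1 + x2) / 2, (y1 + y2) / 2   # back-project the box center
            x = (u - K[0, 2]) * z / K[0, 0]
            y = (v - K[1, 2]) * z / K[1, 1]
            results.append((x, y, z))
        return results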

Gesture-Aided Navigation


For a social robot to navigate successfully in a human environment, it not only needs computer vision to identify objects and motion planning to generate a safe trajectory, but it should also use gestures to convey its intentions to people in the environment. For example, the robot might want people to give it more space while it navigates, or direct them
More...
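
As a toy example of mapping navigation intent to an expressive gesture (the gesture names and thresholds below are purely illustrative, not the project's design):

    # Toy illustration: pick a gesture from the robot's navigation intent.
    def choose_gesture(planned_turn_deg, blocked):
        """planned_turn_deg: signed upcoming heading change (CCW positive);
        blocked: True if pedestrians currently occupy the planned path."""
        if blocked:
            return "wave_arm"        # politely ask for room to pass
        if planned_turn_deg > 15:
            return "point_left"      # signal an upcoming left turn
        if planned_turn_deg < -15:
            return "point_right"     # signal an upcoming right turn
        return "none"                # going straight: no gesture needed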

Interactive Navigation


The focus of the interactive navigation project is to enable JackRabbot to use its robot arm to perform dexterous manipulation tasks that aid navigation, such as moving obstacles out of the way or opening doors. The general strategy to achieve this is to build intelligent autonomous policies from parametrized fundamental action primitives (i.e.,
More...
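
The sketch below illustrates the parametrized-primitive idea under our own assumptions: a policy emits a discrete primitive choice plus continuous parameters for it. The primitive names and parameters are hypothetical, not the project's API.

    # Hedged sketch: parametrized action primitives selected by a policy.
    from dataclasses import dataclass

    @dataclass
    class PrimitiveCall:
        name: str      # which primitive to run
        params: dict   # its continuous parameters

    def push_obstacle(direction_deg, distance_m):
        print(f"pushing obstacle {distance_m} m toward {direction_deg} deg")

    def open_door(handle_height_m, swing="pull"):
        print(f"opening door, handle at {handle_height_m} m, {swing}")

    PRIMITIVES = {"push_obstacle": push_obstacle, "open_door": open_door}

    def execute(call: PrimitiveCall):
        PRIMITIVES[call.name](**call.params)

    # A learned policy would output something like this from perception:
    execute(PrimitiveCall("push_obstacle", {"direction_deg": 90, "distance_m": 0.3}))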

Mobile Search and Pick & Place

To be broadly useful, robots must increase their autonomy, versatility, and robustness as helpers in human environments. One simple setting where robots need to gain autonomy is Find & Retrieve tasks. Who wouldn’t like to have a robot (JackRabbot or the Toyota HSR in the image) that successfully reacts to the command “Bring me x”?
More...
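
A high-level sketch of such a Find & Retrieve loop might look like the following; the robot interface (navigate_to, detect, grasp, hand_over) is entirely hypothetical and stands in for the perception, navigation, and manipulation components described above.

    # Hedged sketch: search candidate locations, grasp, and return the item.
    def bring_me(item, robot, known_locations):
        """known_locations: candidate places where `item` is usually found."""
        for place in known_locations:
            robot.navigate_to(place)
            pose = robot.detect(item)        # returns an object pose or None
            if pose is not None:
                robot.grasp(pose)
                robot.navigate_to("user")
                robot.hand_over()
                return True
        return False                         # exhausted all candidate locations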

Publications

Datasets and Code

JR life


JR2 at Robobusiness 2018!


JRs with Sundar Pichai!


Welcome JR2!


JR in the dressing room!


JR ready for California winter!


JR suit up!


JR in red!

Related videos

JR2 at Samsung CEO Summit!


JR on Quartz!


JR on CBS!


JR's view!


JR on PBS!


JR on Financial Times!

JR on ABC!

JR on BBC News!

Acknowledgements

We acknowledge the support of ONR, MURI, Toyota, and Panasonic.

Contact : amirabs@stanford.edu
Last update : 10/25/2018