Introducing JackRabbot

The Social Navigation Robot

Stanford Vision and Learning Lab


Humans have the innate ability to "read" one another. When people walk in a crowded public space such as a sidewalk, an airport terminal, or a shopping mall, they obey a large number of unwritten common-sense rules and comply with social conventions. For instance, as they consider where to move next, they respect personal space and yield right-of-way. The ability to model these "rules" and use them to understand and predict human motion in complex real-world environments is extremely valuable for the next generation of social robots.

Our work at the CVGL is enabling a new generation of autonomous agents that can operate safely alongside humans in dynamic, crowded environments such as terminals, malls, or campuses. This enhanced level of proficiency opens up a broad range of new applications in which robots can replace or augment human effort. One class of tasks now amenable to automation is the delivery of small items – such as purchased goods, mail, food, tools, and documents – through spaces normally reserved for pedestrians.

In this project, we are exploring this opportunity by developing a demonstration platform that makes deliveries locally within the Stanford campus. The Stanford "JackRabbot", which takes its name from the nimble yet shy jackrabbit, is a self-navigating automated electric delivery cart capable of carrying small payloads. In contrast to autonomous cars, which operate on streets and highways, JackRabbot is designed to operate in pedestrian spaces at a maximum speed of five miles per hour.

News & Press Releases


Silvio Savarese

Assistant Professor

Claudia D'Arpino

Postdoctoral Scholar

Roberto Martín-Martín

Postdoctoral Scholar

Patrick Goebel

Research Scientist

Alan Federman

Systems Integration Engineer

JunYoung Gwak

PhD Candidate

Kevin Chen

PhD Candidate

Jo Chuang

Master's Student

Can Liu

Master's Student

Eric Frankel

Undergraduate Student

Chengshu (Eric) Li

Master's Student

Francis Zhang

Master's Student

Mihir Patel

Undergraduate Student


MARYNEL VÁZQUEZ, Postdoctoral Scholar. Now: Assistant Professor at Yale University.
ALEXANDRE ALAHI, Postdoctoral Scholar. Now: Assistant Professor at EPFL.
SEYED HAMID REZATOFIGHI, Endeavour Research Fellow. Now: Postdoctoral Research Fellow at The University of Adelaide.
NORIAKI HIROSE, Visiting Scholar, Toyota Central R&D.
AMIR SADEGHIAN, PhD Candidate. Now at AiBee.
ASHWINI POKLE, Master's Student. Now PhD student at CMU.
NATHAN TSOI, Research Engineer. Now PhD student at Yale University.
ISABELLA PHUNG, High School Student
TIN TIN WISNIEWSKI, Faculty Administrator and Coordinator
GIULIO AUTELITANO, Visiting Student Researcher.


Deep Social Navigation


Autonomous robot navigation in known environments encompasses two main problems: 1) finding a safe path for the robot to reach a desired goal location, and 2) following that path while adapting to environmental conditions. While global planners can efficiently find optimal motion paths, translating these paths into robot commands – which is traditionally…
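As an illustration of the global-planning half of the problem, here is a minimal A* planner on a 2D occupancy grid. This is a generic textbook sketch, not JackRabbot's actual planner; the 4-connected grid and Manhattan heuristic are our assumptions for the example.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* search on a 4-connected occupancy grid.
    grid[r][c] == 0 means free, 1 means blocked; start/goal are (row, col)."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    tie = itertools.count()  # tiebreaker so the heap never compares nodes
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from, g_score = {}, {start: 0}
    while open_set:
        _, _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue  # already expanded via a shorter path
        came_from[cur] = parent
        if cur == goal:
            path = []
            while cur is not None:  # walk parent links back to the start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_score.get(nxt, float("inf")):
                    g_score[nxt] = ng
                    heapq.heappush(
                        open_set, (ng + h(nxt), next(tie), ng, nxt, cur)
                    )
    return None  # no path exists
```

A real deployment would plan over a costmap rather than a binary grid, and the returned waypoints would still need a local controller to turn them into velocity commands.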

Crowdsourcing Social Navigation


We want JackRabbot to operate safely and "politely" among humans. Politely here means respecting social conventions that are difficult to define analytically for every circumstance, such as personal space or the dynamics of group formations. To teach JR how to navigate safely and politely, we are developing deep learning models that control JR's motion (see Deep Social Navigation)…

Traversability Estimation


Our goal is for JackRabbot to be out in the world, navigating human environments safely. To do so, JR needs to perceive which parts of the surrounding environment are safe to traverse and which are not, whether because of possible damage to the robot or risk to people. In the GONet line of work we develop vision-based methods to estimate…
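To make the input/output contract of a traversability estimator concrete, here is a toy stand-in: a hand-written rule that marks a depth patch in front of the robot traversable when almost all readings exceed a clearance distance. The real GONet is a learned model over fisheye RGB images; the depth patch, thresholds, and function name below are all our own assumptions for this sketch.

```python
def traversable(depth_patch, min_clearance=0.8, free_ratio=0.95):
    """Toy traversability check over a 2D patch of depth readings (meters).

    Returns True when at least `free_ratio` of the valid readings are
    farther away than `min_clearance`. NaN readings are discarded; a patch
    with no valid readings is conservatively labeled non-traversable.
    """
    # d == d is False only for NaN, so this drops invalid readings
    readings = [d for row in depth_patch for d in row if d == d]
    if not readings:
        return False
    clear = sum(1 for d in readings if d > min_clearance)
    return clear / len(readings) >= free_ratio
```

A learned estimator replaces the hand-tuned rule with a classifier trained on traversal outcomes, but it exposes the same interface: sensor patch in, traversable/not-traversable out.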

People Detection and Tracking


3D detection and tracking of pedestrians and other agents (cars, bikes) is a critical component of autonomous navigation. We are working to fully exploit 3D LIDAR data in conjunction with existing, mature approaches for processing 2D visual data in order to achieve better detection and tracking. The goal is a complete system that can take stereo camera and LIDAR…
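The tracking half of such a system must associate each frame's 3D detections with existing tracks. Below is a minimal greedy nearest-neighbor association step; a production tracker would use the Hungarian algorithm and a motion model such as a Kalman filter, so treat the names and the distance gate here as illustrative assumptions.

```python
import math

def associate(tracks, detections, max_dist=1.0):
    """Greedily match existing tracks to new 3D detections by centroid
    distance. `tracks` and `detections` map ids to (x, y, z) centroids.
    Returns (matches: track_id -> detection_id, unmatched detection ids)."""
    # Enumerate all track/detection pairs, closest first
    pairs = sorted(
        (math.dist(tp, dp), tid, did)
        for tid, tp in tracks.items()
        for did, dp in detections.items()
    )
    matches, used_t, used_d = {}, set(), set()
    for dist, tid, did in pairs:
        if dist > max_dist:
            break  # remaining pairs are even farther apart
        if tid not in used_t and did not in used_d:
            matches[tid] = did
            used_t.add(tid)
            used_d.add(did)
    # Unmatched detections would spawn new tracks in a full tracker
    unmatched = [d for d in detections if d not in used_d]
    return matches, unmatched
```

In the full pipeline, 2D detections projected from the camera and 3D LIDAR clusters would be fused before this association step, and unmatched tracks would be aged out after a few missed frames.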

Gesture-Aided Navigation


For a social robot to navigate successfully in a human environment, it not only needs computer vision to identify objects and motion planning to generate safe trajectories, but it should also use gestures to convey its intentions to the people around it. For example, the robot might want people to give it more space while navigating or direct them…

Interactive Navigation


The focus of the interactive navigation project is to enable JackRabbot to use its robot arm to perform dexterous manipulation tasks that aid navigation, such as moving obstacles out of the way or opening doors. The general strategy is to build intelligent autonomous policies from parametrized fundamental action primitives…
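One way to read "parametrized action primitives" is as a small library of named low-level controllers whose continuous parameters a higher-level policy fills in at runtime. The sketch below is hypothetical: the `Primitive` structure, the `push` controller, and its parameters are our own illustrative names, not JackRabbot's actual interface.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Primitive:
    """A primitive = a controller name plus the continuous parameters
    (e.g. approach angle, push distance) a learned policy must choose."""
    name: str
    params: Dict[str, float]

def push_obstacle(angle: float, distance: float) -> str:
    # Placeholder for the real arm controller; returns a log string here
    return f"push at {angle:.1f} rad for {distance:.2f} m"

# Registry mapping primitive names to their controllers
PRIMITIVES: Dict[str, Callable[..., str]] = {
    "push": push_obstacle,
}

def execute(p: Primitive) -> str:
    """Dispatch a parametrized primitive to its low-level controller."""
    return PRIMITIVES[p.name](**p.params)
```

Under this decomposition, the learning problem shrinks from controlling raw joint torques to choosing which primitive to run and with what parameters.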

Mobile Search and Pick & Place


To be broadly useful, robots must become more autonomous, versatile, and robust as helpers in human environments. One simple class of tasks where robots need to gain autonomy is Find & Retrieve. Who wouldn't like to have a robot (JackRabbot, or the Toyota HSR in the image) that successfully responds to the command "Bring me x"?


Datasets and Code

JR life

JR2 at Robobusiness 2018!

JRs with Sundar Pichai!

Welcome JR2!

JR at the dressing room!

JR ready for California winter!

JR suit up!

JR in red!

Related videos

JR2 at Samsung CEO Summit!

JR on Quartz!

JR on CBS!

JR's view!

JR on PBS!

JR on Financial Times!

JR on ABC!

JR on BBC News!


We acknowledge the support of ONR, MURI, Toyota and Panasonic.

Last update: 10/29/2019