Multi-task Vision-Language-Action (VLA) models have recently demonstrated increasing promise as generalist foundation models for robotics, achieving non-trivial performance out of the box on new tasks in new environments. However, for such models to be truly useful, an end user must have an easy means of teaching them to improve. For language and vision models, the emergent ability to perform in-context learning (ICL) has proven to be a versatile and highly useful interface for teaching new tasks with no parameter finetuning. Unfortunately, VLAs pre-trained with imitation learning objectives do not naturally acquire ICL abilities. In this paper, we demonstrate that, with the right finetuning recipe and a small robot demonstration dataset, it is possible to inject in-context adaptability post hoc into such a VLA. After retraining for in-context learning (RICL), our system permits an end user to provide a small number (10-20) of demonstrations for a new task. RICL then fetches the most relevant portions of those demonstrations into the VLA context to exploit ICL, performing the new task with boosted success rates. We apply RICL to inject ICL into the π₀-FAST VLA, and show that it permits large in-context improvements for a variety of new manipulation tasks with only 20 demonstrations per task, without any parameter updates. When parameter updates on the target task demonstrations are possible, RICL finetuning further boosts performance. We release code and model weights for RICL-π₀-FAST alongside the paper to enable, for the first time, a simple in-context learning interface for new manipulation tasks.
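The retrieval step described above — fetching the most relevant portions of the user's demonstrations into the VLA context — can be sketched as a simple nearest-neighbor lookup over embedded demonstration chunks. The sketch below is illustrative only: the function and variable names (`retrieve_context`, `demo_chunks`) are hypothetical, the embeddings are toy 2-D vectors, and RICL's actual retrieval and context-assembly mechanism may differ in its similarity measure and chunking.

```python
import numpy as np

def retrieve_context(query_embedding, demo_embeddings, demo_chunks, k=3):
    """Return the k demonstration chunks most similar to the current observation.

    query_embedding: (d,) embedding of the current observation.
    demo_embeddings: (n, d) embeddings of stored demonstration chunks.
    demo_chunks:     list of n chunks (e.g. observation/action segments).
    """
    # Cosine similarity between the query and every stored chunk.
    q = query_embedding / np.linalg.norm(query_embedding)
    d = demo_embeddings / np.linalg.norm(demo_embeddings, axis=1, keepdims=True)
    sims = d @ q
    # Indices of the k highest similarities, most similar first.
    top = np.argsort(-sims)[:k]
    return [demo_chunks[i] for i in top]

# Toy example: 4 demonstration chunks with 2-D embeddings.
chunks = ["chunk_a", "chunk_b", "chunk_c", "chunk_d"]
embs = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [-1.0, 0.0]])
context = retrieve_context(np.array([1.0, 0.05]), embs, chunks, k=2)
# The retrieved chunks would then be concatenated into the VLA's prompt context.
```

In practice the retrieved chunks are prepended to the model's input so that the VLA can condition on them in-context, with no weight updates.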
π₀-FAST-DROID
RICL-π₀-FAST-DROID
π₀ picks up the duck instead of the poke ball (language grounding issue). RICL-π₀ completes the task.
π₀-FAST-DROID
RICL-π₀-FAST-DROID
π₀ struggles with the grasp and motion for this novel object (adaptation issue) or drops the apple in the sink (language grounding issue). RICL-π₀ completes the task.
π₀-FAST-DROID
RICL-π₀-FAST-DROID
π₀ cannot figure out the grasp or the motion (adaptation issue). RICL-π₀ completes the task.
π₀-FAST-DROID
RICL-π₀-FAST-DROID
π₀ moves the apple (language grounding issue). RICL-π₀ completes the task.
π₀-FAST-DROID
RICL-π₀-FAST-DROID
π₀ moves the duck instead of the squeegee (language grounding issue). RICL-π₀ completes the task.
π₀-FAST-DROID
RICL-π₀-FAST-DROID
π₀ cannot figure out the precise location or the grasp (adaptation issue at the long tail of seen tasks). RICL-π₀ completes the task.
π₀-FAST-DROID
RICL-π₀-FAST-DROID
π₀ cannot figure out the novel motion to avoid the top door which acts as an obstacle (adaptation issue, again, at the long tail of seen tasks). RICL-π₀ completes the task.
π₀-FAST-DROID
RICL-π₀-FAST-DROID
π₀ struggles with the grasp and motion for this novel object (adaptation issue). RICL-π₀ completes the task.
RICL-π₀-FAST-DROID-Finetune
A human perturbs the main object in the scene. Yet, RICL-π₀-FAST-DROID-Finetune completes the task.
RICL-π₀-FAST-DROID-Finetune
A human perturbs the main object in the scene. Yet, RICL-π₀-FAST-DROID-Finetune completes the task.
RICL-π₀-FAST-DROID-Finetune
A human perturbs the main object in the scene. Yet, RICL-π₀-FAST-DROID-Finetune completes the task.
RICL-π₀-FAST-DROID-Finetune
A human perturbs the main object in the scene. Yet, RICL-π₀-FAST-DROID-Finetune completes the task.
RICL-π₀-FAST-DROID-Finetune
A human perturbs the main object in the scene. Yet, RICL-π₀-FAST-DROID-Finetune completes the task.
RICL-π₀-FAST-DROID-Finetune
A human perturbs the main object in the scene. Yet, RICL-π₀-FAST-DROID-Finetune completes the task.
{
TBA
}