The integration of large language models (LLMs) into robotic systems holds significant promise for natural language-based control. However, existing approaches often rely on handcrafted prompt examples and lack mechanisms for verifying correctness before execution. DEMONSTRATE is a framework that removes the need for expert-tuned prompts by replacing in-context examples with demonstrations of low-level tasks. By mapping language descriptions to control objectives using task embeddings and inverse optimal control, the system generalizes zero-shot to new tasks and assesses potential hallucinations before execution. The result is a scalable and more reliable pipeline for deploying LLMs in robotics.
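As a rough illustration of the inverse-optimal-control idea mentioned above, the sketch below picks cost weights under which a demonstration is cheaper than alternative trajectories by the largest margin. The demonstration data, the two features (distance-to-goal and control effort), and the simplex grid search are all illustrative assumptions, not DEMONSTRATE's actual method:

```python
# Hypothetical demonstration: per-step feature vectors
# (distance-to-goal, control effort), assumed already extracted from a demo.
demo_features = [(1.0, 0.2), (0.6, 0.3), (0.2, 0.2), (0.0, 0.0)]

# Alternative (non-demonstrated) trajectories for the same task.
alt_features = [
    [(1.0, 0.9), (0.9, 0.9), (0.1, 0.9), (0.0, 0.0)],  # wasteful effort
    [(1.0, 0.1), (1.0, 0.1), (0.9, 0.1), (0.8, 0.1)],  # never reaches goal
]

def total_cost(features, weights):
    """Linear cost: sum over steps of weights . step_features."""
    return sum(w * f for step in features for w, f in zip(weights, step))

def fit_cost_weights(demo, alternatives, grid=None):
    """Toy inverse-optimal-control fit: choose weights on a coarse simplex
    grid that make the demonstration cheapest relative to the alternatives
    by the largest margin. A stand-in for recovering control objectives
    from demonstrations, as described in the abstract."""
    grid = grid or [(i / 10, 1.0 - i / 10) for i in range(11)]
    def margin(weights):
        return (min(total_cost(a, weights) for a in alternatives)
                - total_cost(demo, weights))
    return max(grid, key=margin)

weights = fit_cost_weights(demo_features, alt_features)
```

With these toy numbers the fit trades off the two features rather than relying on either alone, mirroring how demonstrated behavior pins down a cost function that pure instruction text would leave underdetermined.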
DEMONSTRATE's architecture is a two-stage pipeline built on two modules: an inverse optimal control module that converts demonstrations of low-level tasks into control objectives, and a task-embedding module that maps natural-language descriptions onto those objectives, enabling zero-shot generalization and a preemptive check for hallucinated tasks.
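The language-to-objective mapping and the preemptive hallucination check could look roughly like the following. The task library, the 3-dimensional embeddings, the cosine-similarity threshold, and every name here are hypothetical stand-ins, not the paper's implementation:

```python
import math

# Hypothetical library of low-level tasks: each entry pairs a task embedding
# with cost weights assumed to have been recovered from demonstrations.
TASK_LIBRARY = {
    "reach": {"embedding": [1.0, 0.0, 0.0], "cost_weights": [0.9, 0.1, 0.0]},
    "push":  {"embedding": [0.0, 1.0, 0.0], "cost_weights": [0.5, 0.2, 0.3]},
    "grasp": {"embedding": [0.0, 0.0, 1.0], "cost_weights": [0.7, 0.1, 0.2]},
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def map_instruction(instruction_embedding, threshold=0.6):
    """Match an LLM-produced instruction embedding to the task library.

    Returns (task_name, cost_weights) for the closest known task, or None
    when the best similarity falls below `threshold` -- a stand-in for the
    preemptive hallucination check described above.
    """
    best_task, best_sim = None, -1.0
    for name, entry in TASK_LIBRARY.items():
        sim = cosine_similarity(instruction_embedding, entry["embedding"])
        if sim > best_sim:
            best_task, best_sim = name, sim
    if best_sim < threshold:
        return None  # likely hallucinated / outside the demonstrated tasks
    return best_task, TASK_LIBRARY[best_task]["cost_weights"]

# An embedding near "reach" resolves to that task's cost weights, while an
# embedding far from every known task is flagged before execution.
resolved = map_instruction([0.9, 0.1, 0.05])
flagged = map_instruction([0.5, 0.5, 0.5], threshold=0.95)
```

The key property sketched here is that an out-of-library request fails closed (returns `None`) instead of being passed to the robot, which is the reliability argument the abstract makes.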
DEMONSTRATE achieves success rates comparable to or better than the chosen baseline models in simulation, and its applicability has been demonstrated in real-world experiments.
@article{demonstrate2024,
  title={DEMONSTRATE: Zero-shot Language to Robotic Control via Multi-task Demonstration Learning},
  author={Anonymous},
  journal={Under Review},
  year={2024}
}