PxForceXRTeleop

Meta Quest 3 · Apple Vision Pro (M5)
  • AR/VR teleoperation with force feedback + custom visual and force guidance
  • FACTR leader arm or robot arm as leader, Meta Quest 3 / Apple Vision Pro as XR device
  • Dataset collection in simulation: Isaac Sim for first-phase prototype testing
  • Isaac Lab integration for training with human corrections/interventions
  • ZeroMQ and DDS middleware as interchangeable communication backends

Getting XR teleoperation with force feedback to feel “right” is rarely a single-step process — especially when tasks, tools, and prototype hardware change frequently. At the same time, visual imitation learning and VLA approaches demand high-quality datasets across many tasks, while reinforcement learning often benefits from Human-in-the-Loop corrections that inject human intention as a stabilizing signal during training.

PxForceXRTeleop simplifies this loop with an integrated AR/VR + force-feedback teleoperation stack featuring custom visual and force guidance. On the operator side, it supports a leader-arm setup (using FACTR as a template) and can also pair with robot arms as haptic devices when higher-fidelity force feedback is needed.
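To make the force-guidance idea concrete, here is a minimal sketch of one common strategy: a spring-damper "virtual fixture" that pulls the leader arm toward a reference pose, with the output clamped for safety. The function name, gains, and force limit are illustrative assumptions, not part of the actual PxForceXRTeleop API.

```python
# Hypothetical guidance-force term: a clamped spring-damper virtual
# fixture. All names and gains here are illustrative assumptions.
import numpy as np

def guidance_force(x, x_ref, v, k=50.0, d=5.0, f_max=10.0):
    """PD attraction toward x_ref, saturated at f_max newtons."""
    f = k * (x_ref - x) - d * v       # spring toward reference, damping on velocity
    norm = np.linalg.norm(f)
    if norm > f_max:                  # clamp so guidance never overpowers the operator
        f *= f_max / norm
    return f

# Example: operator hand 5 cm past the reference along x, at rest
f = guidance_force(np.array([0.05, 0.0, 0.0]), np.zeros(3), np.zeros(3))
```

The clamp matters in practice: guidance should nudge, not fight, the operator, so the fixture force stays well below what the haptic device can render.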

For first-phase testing and rapid iteration, PxForceXRTeleop leverages NVIDIA Omniverse Isaac Sim to validate interaction and guidance while collecting structured demonstrations for dataset generation. For HIL-RL, it targets Isaac Lab, enabling training workflows where humans provide interventions and corrections efficiently.
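The Human-in-the-Loop pattern above can be sketched as a simple gating rule: when the operator intervenes, the human action overrides the policy action, and the transition is tagged so it can be weighted as a correction during training. The class and field names below are assumptions for illustration, not Isaac Lab API.

```python
# Illustrative HIL gating: human input, when present, overrides the
# policy action, and provenance is recorded for later training.
# Names are hypothetical, not the project's actual interfaces.
from dataclasses import dataclass, field

@dataclass
class Transition:
    action: list
    human_override: bool  # True if this step came from the operator

@dataclass
class CorrectionBuffer:
    transitions: list = field(default_factory=list)

    def step(self, policy_action, human_action=None):
        """Prefer the human's action when present; record provenance."""
        override = human_action is not None
        action = human_action if override else policy_action
        self.transitions.append(Transition(action, override))
        return action

buf = CorrectionBuffer()
buf.step([0.1, 0.0])               # autonomous step: policy action used
buf.step([0.1, 0.0], [0.0, 0.2])   # intervention: operator action wins
```

Tagging overrides at collection time is what lets a downstream RL or imitation objective treat corrections as a stabilizing signal rather than ordinary rollout data.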

Integration is kept modular and deployment-friendly via ZeroMQ (high-throughput streaming) and DDS middleware (real-time messaging), with configurable runtime behavior for scaling, filtering, safety limits, guidance strategies, and haptic profiles.
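As a minimal sketch of the ZeroMQ streaming path, the snippet below ships a 6-DoF pose between two sockets. The endpoint, message layout, and use of PUSH/PULL over `inproc` are illustrative choices so the example stays self-contained; the real stack would more likely use PUB/SUB over TCP with a compact binary encoding.

```python
# Hypothetical pose-streaming example over ZeroMQ. Endpoint name and
# message fields are illustrative, not the project's wire format.
import zmq

ctx = zmq.Context.instance()
tx = ctx.socket(zmq.PUSH)
rx = ctx.socket(zmq.PULL)
tx.bind("inproc://pose")      # in-process transport keeps the demo local
rx.connect("inproc://pose")

pose = {"t": 0.016, "xyz": [0.1, 0.2, 0.3], "quat": [0.0, 0.0, 0.0, 1.0]}
tx.send_json(pose)            # JSON for readability; CBOR/protobuf in practice
received = rx.recv_json()
```

A DDS backend would replace the socket pair with a typed topic and per-topic QoS (deadline, reliability), which is where the real-time guarantees come from.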