pxRobotLearning
pxRobotLearning is an end-to-end platform for developing, training, validating, and deploying Physical AI systems. It combines simulation-first development, data collection, imitation learning, reinforcement learning, and optimized inference into a single, coherent pipeline that scales from research to real-world deployment.
- Simulation-Centered Learning Pipeline
- Flexible Learning Algorithms
- Sim-to-Real Adaptation & Fine-Tuning
- Production-Ready Deployment & Multimodal Learning
What We Deliver
Isaac Sim–Based Simulation Pipeline
- High-fidelity simulation foundation
  The learning pipeline is built on Isaac Sim, providing physically realistic environments for robotic interaction and data generation
- Scalable training environments
  Supports large-scale parallel simulation for efficient policy training and evaluation (see the rollout sketch below)
- Flexible scene and sensor configuration
  Enables rapid setup of robot models, sensors, and task scenarios
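As a rough, framework-agnostic illustration of what large-scale parallel rollout looks like, the sketch below batches several environment instances behind gymnasium's vectorized interface; the environment id, the random policy, and the step count are placeholders standing in for Isaac Sim scenes and a trained policy, not the platform's actual API.

```python
# Minimal sketch of parallel rollout against a vectorized simulation backend.
# gymnasium's SyncVectorEnv stands in here for Isaac Sim's parallel scenes;
# the env id and the random actions are placeholders.
import gymnasium as gym

NUM_ENVS = 8  # GPU-parallel Isaac Sim setups typically run hundreds of scenes

envs = gym.vector.SyncVectorEnv(
    [lambda: gym.make("CartPole-v1") for _ in range(NUM_ENVS)]
)

obs, info = envs.reset(seed=0)
for step in range(100):
    # A trained policy would map the batched observations to batched actions;
    # random sampling keeps the sketch self-contained.
    actions = envs.action_space.sample()
    obs, rewards, terminated, truncated, info = envs.step(actions)
envs.close()
```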
Reinforcement Learning & Imitation Learning Algorithms
- Multiple RL algorithm options
  Provides a selection of state-of-the-art reinforcement learning algorithms suitable for different robotic tasks
- Imitation learning from demonstrations
  Supports learning from expert data collected via teleoperation or scripted policies
- Unified training framework
  Allows RL and IL methods to be combined or switched seamlessly within the same pipeline (see the sketch below)
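A minimal sketch of how one policy network can serve both imitation and reinforcement learning objectives is shown below; the network sizes, the 0.1 mixing weight, and the synthetic batches are illustrative assumptions rather than the platform's training code.

```python
# Sketch of a single policy head shared between imitation (BC) and RL updates.
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(obs_dim, 256), nn.Tanh(),
                                      nn.Linear(256, act_dim))
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def dist(self, obs):
        mean = self.backbone(obs)
        return torch.distributions.Normal(mean, self.log_std.exp())

policy = GaussianPolicy(obs_dim=32, act_dim=7)
optim = torch.optim.Adam(policy.parameters(), lr=3e-4)

def bc_loss(obs, expert_actions):
    # Imitation: maximize likelihood of teleoperated demonstrations.
    return -policy.dist(obs).log_prob(expert_actions).sum(-1).mean()

def pg_loss(obs, actions, advantages):
    # RL: vanilla policy gradient on the same network (PPO etc. would slot in here).
    return -(policy.dist(obs).log_prob(actions).sum(-1) * advantages).mean()

# One gradient step mixing both signals, e.g. a BC term regularizing RL.
obs, act, adv = torch.randn(64, 32), torch.randn(64, 7), torch.randn(64)
loss = pg_loss(obs, act, adv) + 0.1 * bc_loss(obs, act)
optim.zero_grad(); loss.backward(); optim.step()
```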
Multi-Physics Sim-to-Sim Transfer
- Cross-engine validation
  Supports sim-to-sim transfer across PhysX, Newton, and MuJoCo to improve generalization (see the evaluation sketch below)
- Physics-aware robustness testing
  Exposes policies to varying dynamics, contacts, and constraints
- Reduced simulator bias
  Mitigates overfitting to a single physics engine
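The sketch below illustrates one way cross-engine evaluation can be structured: the same policy is rolled out through a common backend interface and the per-engine returns are compared. The `SimBackend` adapter, its method signatures, and the scoring are assumptions for illustration; real adapters would wrap PhysX, Newton, and MuJoCo.

```python
# Sketch of cross-engine evaluation: one policy, several physics backends
# exposed through a shared adapter interface.
from typing import Callable, Dict
import numpy as np

class SimBackend:
    """Minimal adapter that each engine-specific wrapper would implement."""
    def reset(self) -> np.ndarray:
        raise NotImplementedError
    def step(self, action: np.ndarray) -> tuple:
        raise NotImplementedError  # returns (obs, reward, done)

def evaluate(policy: Callable[[np.ndarray], np.ndarray],
             backends: Dict[str, SimBackend],
             episodes: int = 10) -> Dict[str, float]:
    scores = {}
    for name, env in backends.items():
        returns = []
        for _ in range(episodes):
            obs, done, total = env.reset(), False, 0.0
            while not done:
                obs, reward, done = env.step(policy(obs))
                total += reward
            returns.append(total)
        scores[name] = float(np.mean(returns))
    # Large gaps between engines flag policies that overfit one simulator's dynamics.
    return scores
```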
Sim-to-Real Fine-Tuning
- Progressive domain adaptation
  Fine-tunes policies trained in simulation using real-world data (see the fine-tuning sketch below)
- Bridging the reality gap
  Addresses discrepancies in dynamics, sensing, and actuation
- Safe and efficient deployment
  Enables gradual transfer from simulation to physical robots
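As a simplified picture of sim-to-real fine-tuning, the sketch below adapts a simulation-pretrained policy on a small batch of logged real-robot transitions with a low learning rate and a partially frozen backbone. The checkpoint file, network layout, and regression objective are placeholders, not the platform's actual procedure.

```python
# Sketch of sim-to-real fine-tuning on a small real-robot dataset.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(32, 256), nn.ReLU(),
                       nn.Linear(256, 256), nn.ReLU(),
                       nn.Linear(256, 7))
policy.load_state_dict(torch.load("policy_sim_pretrained.pt"))  # hypothetical checkpoint

for param in policy[0].parameters():   # keep early features learned in simulation
    param.requires_grad = False

optim = torch.optim.Adam((p for p in policy.parameters() if p.requires_grad), lr=1e-5)

real_obs = torch.randn(512, 32)        # placeholder for logged real observations
real_act = torch.randn(512, 7)         # placeholder for executed real actions

for epoch in range(20):
    pred = policy(real_obs)
    loss = nn.functional.mse_loss(pred, real_act)  # behavior-cloning-style regression
    optim.zero_grad(); loss.backward(); optim.step()
```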
High-Performance Deployment with ONNX & TensorRT
- Standardized model export
  Converts trained models to ONNX for framework-independent deployment (see the export sketch below)
- TensorRT-optimized inference
  Achieves low-latency, high-throughput execution on edge and embedded GPUs
- Production-ready runtime stack
  Designed for stable and scalable robotic deployment
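A minimal export path might look like the sketch below: the trained PyTorch policy is exported to ONNX, and a TensorRT engine is then built offline (for example with `trtexec`). The network, input shape, and file names are placeholders.

```python
# Sketch of exporting a trained PyTorch policy to ONNX for TensorRT deployment.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 7))
policy.eval()

dummy_obs = torch.randn(1, 32)
torch.onnx.export(
    policy, dummy_obs, "policy.onnx",
    input_names=["obs"], output_names=["action"],
    dynamic_axes={"obs": {0: "batch"}, "action": {0: "batch"}},
    opset_version=17,
)

# A TensorRT engine can then be built offline, e.g. with the trtexec CLI:
#   trtexec --onnx=policy.onnx --saveEngine=policy.plan --fp16
```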
Vision–Language Models & Vision–Language–Action Learning
- Multimodal perception and reasoning
  Integrates vision and language understanding for complex tasks
- Vision–Language–Action (VLA) policies
  Enables robots to map high-level instructions to low-level actions (see the sketch below)
- Fine-tuning for robotic domains
  Adapts pretrained multimodal models to real robotic environments and tasks
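To make the idea concrete, the sketch below wires a frozen CLIP encoder (assumed here purely for illustration) to a small action head that maps a camera image plus a language instruction to a 7-DoF action; the platform's actual VLA models and fine-tuning setup are not shown.

```python
# Sketch of a minimal vision-language-action head: frozen CLIP embeddings for the
# image and instruction are concatenated and mapped to an action by a small MLP.
import torch
import torch.nn as nn
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
action_head = nn.Sequential(nn.Linear(512 * 2, 256), nn.ReLU(), nn.Linear(256, 7))

def predict_action(image: Image.Image, instruction: str) -> torch.Tensor:
    inputs = processor(text=[instruction], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        img_emb = clip.get_image_features(pixel_values=inputs["pixel_values"])
        txt_emb = clip.get_text_features(input_ids=inputs["input_ids"],
                                         attention_mask=inputs["attention_mask"])
    # 7-DoF action, e.g. end-effector pose delta plus gripper command.
    return action_head(torch.cat([img_emb, txt_emb], dim=-1))
```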
The Robot Learning Platform delivers an end-to-end workflow that connects simulation, learning, adaptation, and deployment into a unified system. By combining high-fidelity simulation, flexible learning algorithms, robust sim-to-real transfer, and production-ready deployment with advanced multimodal learning capabilities, the platform enables scalable and reliable development of intelligent robotic behaviors across a wide range of real-world applications.
pxPerception
pxPerception is a high-precision perception platform for mobile and industrial robots operating in complex, real-world environments. It provides robust spatial understanding through tightly integrated sensing and perception pipelines, enabling reliable localization, mapping, and interaction even under challenging conditions such as low light, clutter, or dynamic scenes.
Built with simulation-first validation and real-to-virtual workflows in mind, pxPerception supports Digital Twin–based development and accelerates the transition from perception research to production-ready systems.
- High-Precision Perception Solution for Mobile & Industrial Robots and Humanoids
- Designed for low-light, warehouse, and industrial environments with robust sensing performance
- Enables high-accuracy spatial perception for navigation, localization, and interaction
- Provides an open development platform for robotic perception integration and customization
- Supports digital twin–based simulation and synthetic data generation
What We Deliver
LiDAR–IMU Integrated Perception & Localization
- Tightly coupled LiDAR–IMU integration
  IMU is synchronized with LiDAR sensing to provide motion and pose information alongside point cloud acquisition
- Motion-aware point cloud generation
  Inertial measurements compensate for motion distortion during scanning, improving data quality under dynamic movement (see the deskewing sketch below)
- Fusion with point cloud–based localization
  Inertial constraints are combined with geometric matching to enhance localization robustness and accuracy
- Stable performance in challenging conditions
  The integrated design improves reliability in fast motion, sparse geometry, and partially degenerate environments
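The sketch below shows the core idea behind IMU-based motion compensation (deskewing) for a single sweep under a constant angular-velocity assumption; real pipelines integrate the full IMU trajectory and also correct translation, and the exact sign convention depends on the frame definitions.

```python
# Sketch of IMU-based deskewing: each point is re-expressed in the sensor frame
# at the end of the sweep using the mean angular velocity from the IMU.
# Shapes, variable names, and the constant-velocity model are illustrative.
import numpy as np
from scipy.spatial.transform import Rotation as R

def deskew(points: np.ndarray, t: np.ndarray,
           omega: np.ndarray, sweep_dt: float) -> np.ndarray:
    """points: (N, 3) in the sensor frame at capture time,
    t: (N,) per-point timestamps in [0, sweep_dt],
    omega: (3,) mean body angular velocity [rad/s] over the sweep."""
    corrected = np.empty_like(points)
    for i, (p, ti) in enumerate(zip(points, t)):
        dt = sweep_dt - ti                      # time between capture and sweep end
        rot = R.from_rotvec(omega * dt)         # sensor rotation over that interval
        corrected[i] = rot.inv().apply(p)       # express point in the end-of-sweep frame
    return corrected
```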
CUDA-Accelerated SLAM
- CUDA-based GPU acceleration
  Core SLAM components are parallelized on the GPU for real-time processing of dense point clouds
- High-performance scan matching and state estimation
  Accelerated registration and estimation enable low-latency operation in large-scale environments
- SDF-based map reconstruction
  Supports TSDF and ESDF representations for dense 3D mapping and scene modeling (see the TSDF sketch below)
- Mapping for planning and collision checking
  Generated maps can be directly used by downstream planning and control modules
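As a CPU-side reference for what the CUDA kernels parallelize per voxel, the sketch below shows the standard weighted-average TSDF update; the grid layout, truncation distance, and ray/projection model are simplified assumptions.

```python
# Sketch of the per-voxel TSDF update: each voxel keeps a truncated signed
# distance and a weight, fused as a running average over depth measurements.
import numpy as np

TRUNC = 0.10  # truncation distance [m]

def integrate(tsdf: np.ndarray, weights: np.ndarray,
              voxel_centers: np.ndarray, sensor_origin: np.ndarray,
              measured_depth: np.ndarray) -> None:
    """tsdf/weights: flat (V,) arrays; voxel_centers: (V, 3);
    measured_depth: (V,) surface depth along each voxel's viewing ray."""
    voxel_depth = np.linalg.norm(voxel_centers - sensor_origin, axis=1)
    sdf = measured_depth - voxel_depth                # + in front of surface, - behind
    mask = sdf > -TRUNC                               # skip voxels far behind the surface
    d = np.clip(sdf[mask], -TRUNC, TRUNC) / TRUNC     # normalized truncated distance
    w_old = weights[mask]
    tsdf[mask] = (tsdf[mask] * w_old + d) / (w_old + 1.0)
    weights[mask] = w_old + 1.0
```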
Point Cloud–Based Detection & Segmentation
- Native 3D perception on point clouds
  Perception operates directly on 3D data, independent of lighting and visual appearance
- From detection to segmentation
  Provides object detection, semantic segmentation, and instance-level understanding
- Geometric and learning-based methods
  Combines classical geometry with data-driven models for robust perception (see the segmentation sketch below)
- Rich semantic understanding of environments
  Enables robots to identify obstacles, structures, and functional regions in complex scenes
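A purely geometric baseline for this kind of segmentation might look like the sketch below: RANSAC removes the dominant ground plane and DBSCAN groups the remaining points into object instances. Open3D is assumed here only for illustration, with placeholder thresholds and input file.

```python
# Sketch of classical geometric segmentation on a raw point cloud:
# RANSAC ground-plane removal followed by DBSCAN instance clustering.
import numpy as np
import open3d as o3d

cloud = o3d.io.read_point_cloud("scan.pcd")  # hypothetical LiDAR scan

# 1) Fit and remove the dominant ground plane.
plane, ground_idx = cloud.segment_plane(distance_threshold=0.05,
                                        ransac_n=3, num_iterations=1000)
obstacles = cloud.select_by_index(ground_idx, invert=True)

# 2) Cluster the remaining points into candidate object instances.
labels = np.array(obstacles.cluster_dbscan(eps=0.3, min_points=20))
num_instances = labels.max() + 1 if labels.size else 0
print(f"found {num_instances} obstacle clusters above the ground plane")
```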
pxTeleopForceXR
- AR/VR teleoperation with force feedback + custom visual and force guidance
- FACTR or robot arm as leader, Meta Quest 3 / Apple Vision Pro as XR device
- Dataset collection in simulation: Isaac Sim for first-phase prototype testing
- Isaac Lab integration for training with human corrections/interventions
- ZeroMQ and DDS middleware as communication alternative
Getting XR teleoperation with force feedback to feel “right” is rarely a single-step process — especially when tasks, tools, and prototype hardware change frequently. At the same time, visual imitation learning and VLA approaches demand high-quality datasets across many tasks, while reinforcement learning often benefits from Human-in-the-Loop corrections that inject human intention as a stabilizing signal during training.
pxTeleopForceXR simplifies this loop with an integrated AR/VR + force-feedback teleoperation stack featuring custom visual and force guidance. On the operator side, it supports a leader arm setup, using FACTR as a template, and can also pair with robot arms as haptic devices when higher-fidelity force feedback is needed.
For first-phase testing and rapid iteration, pxTeleopForceXR leverages NVIDIA Omniverse Isaac Sim to validate interaction and guidance while collecting structured demonstrations for dataset generation. For HIL-RL, it targets Isaac Lab, enabling training workflows where humans provide interventions and corrections efficiently.
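One way a human-in-the-loop rollout can be organized is sketched below: when the operator engages, the teleop command overrides or blends with the policy action, and the transition is flagged so corrected samples can be weighted differently during training. The environment, policy, and teleop interfaces are placeholders, not the Isaac Lab integration itself.

```python
# Sketch of a human-in-the-loop rollout with operator interventions.
def hil_rollout(env, policy, teleop, blend: float = 1.0, max_steps: int = 500):
    obs = env.reset()
    transitions = []
    for _ in range(max_steps):
        action = policy(obs)
        intervened = teleop.is_engaged()          # e.g. XR controller trigger held
        if intervened:
            # Blend or fully replace the policy action with the human command.
            action = blend * teleop.command() + (1.0 - blend) * action
        next_obs, reward, done, info = env.step(action)
        # The intervention flag lets the learner up-weight corrected samples.
        transitions.append((obs, action, reward, next_obs, done, intervened))
        obs = next_obs
        if done:
            break
    return transitions
```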
Integration is kept modular and deployment-friendly via ZeroMQ (high-throughput streaming) and DDS middleware (real-time messaging), with configurable runtime behavior for scaling, filtering, safety limits, guidance strategies, and haptic profiles.
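As a small illustration of the ZeroMQ path (the DDS binding would carry equivalent payloads), the sketch below publishes operator poses on one socket and polls haptic feedback on another; ports, topic names, and the JSON schema are assumptions.

```python
# Sketch of the ZeroMQ transport: poses stream out over PUB, haptic feedback
# comes back over SUB. Hosts, ports, and message fields are illustrative.
import json
import zmq

ctx = zmq.Context()

pose_pub = ctx.socket(zmq.PUB)           # operator hand/controller poses -> robot side
pose_pub.bind("tcp://*:5555")

haptic_sub = ctx.socket(zmq.SUB)         # force/torque feedback <- robot side
haptic_sub.connect("tcp://robot-host:5556")
haptic_sub.setsockopt_string(zmq.SUBSCRIBE, "haptic")

def publish_pose(position, orientation_quat):
    msg = {"pos": list(position), "quat": list(orientation_quat)}
    pose_pub.send_multipart([b"pose", json.dumps(msg).encode()])

def poll_haptics(timeout_ms: int = 1):
    if haptic_sub.poll(timeout_ms):
        topic, payload = haptic_sub.recv_multipart()
        return json.loads(payload)       # e.g. {"force": [...], "torque": [...]}
    return None
```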