Learning-Based Robotics
Learning-Based Robotics is our service for developing, training, and validating robotic systems using data-driven and learning-based methods. In addition to classical imitation learning (IL) and reinforcement learning (RL), we support the adaptation and fine-tuning of foundation models, including Vision-Language-Action (VLA) and other open-source models, to specific environments and applications.
Built on simulation-first development, this service covers the full learning lifecycle—from data generation and model adaptation to benchmarking and deployment on real robots—with a strong focus on robustness, reproducibility, and real-world applicability.
Data Generation & Collection
- High-fidelity dataset generation (pxRobotLearning)
Supports large-scale dataset generation both in simulation and on physical robots, ensuring consistency between synthetic and real-world data distributions.
- Teleoperation and human-in-the-loop data acquisition (pxTeleopForceXR)
Enables data collection through teleoperation interfaces, wearable devices, and interactive control, allowing humans to guide, correct, and intervene during task execution.
- Multi-modal data support
Collects synchronized vision, point cloud, proprioceptive, force, and task-level signals for learning-based robotics.
- Structured and versioned datasets
Provides standardized dataset formats with metadata, enabling reproducible training, benchmarking, and long-term evaluation.
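To make "structured and versioned datasets" concrete, here is a minimal sketch of an episode record with metadata and a content checksum. The field names and schema are illustrative assumptions, not the actual pxRobotLearning format.

```python
import json
import hashlib
from dataclasses import dataclass, asdict, field

# Hypothetical episode record; fields are illustrative, not the real schema.
@dataclass
class EpisodeRecord:
    episode_id: str
    robot: str
    modalities: list          # e.g. ["rgb", "point_cloud", "proprio", "force"]
    source: str               # "simulation" or "teleoperation"
    dataset_version: str
    num_steps: int
    metadata: dict = field(default_factory=dict)

    def to_json(self) -> str:
        # Sorted keys give a canonical serialization for hashing.
        return json.dumps(asdict(self), sort_keys=True)

    def checksum(self) -> str:
        # A stable content hash lets long-term evaluations verify that
        # a versioned dataset has not silently changed.
        return hashlib.sha256(self.to_json().encode()).hexdigest()

record = EpisodeRecord(
    episode_id="ep-0001",
    robot="arm-6dof",
    modalities=["rgb", "proprio", "force"],
    source="teleoperation",
    dataset_version="1.2.0",
    num_steps=480,
)
```

Pairing each record with a version string and checksum is one simple way to keep training, benchmarking, and later re-evaluation reproducible.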
Learning & Model Adaptation
- Integrated IL and RL pipelines
Provides end-to-end imitation learning and reinforcement learning workflows, supporting both demonstration-driven and interaction-driven learning.
- Fine-tuning of foundation models
Adapts pretrained foundation models to robotics-specific tasks, constraints, and sensor modalities.
- Vision–Language–Action model adaptation
Fine-tunes VLA models to customer-specific environments, task semantics, and operational workflows.
- Open-model integration
Supports integration and extension of open-source learning and perception models within a unified training framework.
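The demonstration-driven side of these pipelines can be sketched as behavior cloning: fit a policy to expert state-action pairs by supervised regression. The toy below uses a linear policy and a hypothetical expert gain matrix purely for illustration; production pipelines use neural or foundation-model policies, but the structure (demonstrations in, supervised loss out) is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert: a fixed linear controller generating demonstrations.
true_gain = np.array([[0.5, -1.0], [2.0, 0.3]])
states = rng.normal(size=(256, 2))        # demonstration states
actions = states @ true_gain.T            # expert actions for those states

# Behavior cloning: minimize mean-squared error between policy output
# and expert actions with plain gradient descent.
W = np.zeros((2, 2))                      # policy parameters
for _ in range(500):
    pred = states @ W.T
    grad = 2.0 / len(states) * (pred - actions).T @ states
    W -= 0.1 * grad                       # gradient step on the MSE loss

mse = float(np.mean((states @ W.T - actions) ** 2))
```

Interaction-driven (RL) workflows replace the fixed demonstration set with rollouts collected by the current policy, but reuse the same dataset and logging infrastructure.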
Benchmarking & Validation
- Simulation-based performance benchmarking
Evaluates learning performance under controlled, repeatable simulation scenarios.
- Sim-to-real and sim-to-sim validation
Assesses policy robustness across different simulators and during transfer to real hardware.
- Stress testing and edge-case evaluation
Validates behavior under disturbances, sensing noise, dynamic obstacles, and rare failure conditions.
- Quantitative metrics and logging
Provides systematic evaluation metrics for policy stability, task success rate, and safety constraints.
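A minimal sketch of the kind of quantitative evaluation described above: aggregate per-episode rollout logs into success-rate and safety metrics. The log fields and numbers are made up for illustration and do not reflect the service's actual logging format.

```python
from statistics import mean

# Hypothetical per-episode rollout logs from a benchmarking run.
rollouts = [
    {"success": True,  "steps": 120, "safety_violations": 0},
    {"success": False, "steps": 300, "safety_violations": 2},
    {"success": True,  "steps": 95,  "safety_violations": 0},
    {"success": True,  "steps": 110, "safety_violations": 1},
]

def summarize(logs):
    # Reduce raw logs to the headline metrics: task success rate,
    # average episode length, and safety-violation rate per episode.
    return {
        "task_success_rate": mean(1.0 if r["success"] else 0.0 for r in logs),
        "mean_episode_steps": mean(r["steps"] for r in logs),
        "safety_violations_per_episode": mean(r["safety_violations"] for r in logs),
    }

report = summarize(rollouts)
```

Computing the same summary over sim and real rollouts gives a direct, like-for-like view of sim-to-real transfer quality.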
Deployment & Inference
- Model export and optimization
Supports model conversion, compression, and optimization for deployment on edge and embedded platforms.
- ROS 2–based system integration
Integrates trained models into ROS 2 pipelines for perception, planning, and control.
- On-robot accelerated inference
Enables real-time inference on GPU- or accelerator-equipped robotic hardware with deterministic execution.
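One common compression step before edge deployment is post-training weight quantization. The sketch below shows symmetric int8 quantization of a weight tensor in NumPy; it is a simplified illustration of the idea, assumed here for clarity, while real export toolchains handle calibration and operator-level details end to end.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    # Symmetric per-tensor quantization: map the float range to [-127, 127].
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an approximation of the original weights.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)

q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
max_err = float(np.abs(weights - recovered).max())   # bounded by scale / 2
```

Storing int8 weights cuts memory by 4x versus float32, which is often the difference between fitting a model on an embedded accelerator or not; the rounding error per weight stays within half a quantization step.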
Typical Use Cases
- Learning-based manipulation and navigation
Development of policies for grasping, manipulation, locomotion, and autonomous navigation.
- Human–robot interaction and assistance
Training interactive behaviors that leverage human input, language instructions, and feedback.
- Algorithm benchmarking and evaluation
Comparative evaluation of learning algorithms under standardized conditions.
- Research and industrial R&D projects
Applied research, prototype development, and technology validation for industrial robotics applications (e.g., WPT project).