Groundlight has unveiled its open-source ROS package, designed to accelerate the integration of embodied AI in robotics. This tool allows ROS2 developers to easily implement advanced computer vision capabilities in their projects. By combining machine learning with real-time human supervision, Groundlight’s package enhances the perception and adaptability of robots in real-world environments. The package is freely available as open source.
Traditional computer vision (CV) processes have been a significant bottleneck in building robust robotic systems. The standard approach requires a time-consuming and labor-intensive cycle: collecting comprehensive datasets, meticulously labeling images, training models, evaluating performance, and refining the dataset and model to address edge cases. This lengthy process can take months for each application. Moreover, robots often behave unpredictably in scenarios outside their training set, necessitating a complete redo of the model development process.
Groundlight’s open-source ROS package transforms this paradigm by providing fast, customized edge models that run locally, tailored to each robot’s specific requirements. Backed by automatic cloud training and 24/7 human oversight, a robot that encounters an unfamiliar situation pauses and waits for human input. This enables real-time adaptation to unexpected scenarios, with human-verified responses typically arriving in under a minute. These responses are then integrated back into the model and pushed to the edge, improving safety and reliability while significantly speeding up development.
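The edge-model loop described above can be sketched in a few lines of Python. This is a minimal, self-contained illustration of the pattern, not Groundlight's actual code: the `EdgeModel` class, `human_review` function, and the 0.9 confidence cutoff are all hypothetical stand-ins.

```python
# Sketch of the edge-model / human-escalation cycle (all names hypothetical).

CONFIDENCE_THRESHOLD = 0.9  # assumed escalation cutoff


class EdgeModel:
    """Stand-in for a fast, locally running vision model."""

    def __init__(self):
        self.verified = {}  # image -> human-verified answer

    def predict(self, image):
        if image in self.verified:
            # Once retrained on a verified answer, the edge model
            # handles similar frames locally with high confidence.
            return self.verified[image], 0.98
        return "YES", 0.55  # unfamiliar scene: low confidence

    def integrate(self, image, answer):
        # Stand-in for cloud retraining plus a push back to the edge.
        self.verified[image] = answer


def human_review(image):
    """Stand-in for the 24/7 human review service."""
    return "NO"


def perceive(model, image):
    answer, confidence = model.predict(image)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer                 # fast path: edge model is confident
    verified = human_review(image)    # robot pauses and waits for a human
    model.integrate(image, verified)  # answer folded back into the model
    return verified


model = EdgeModel()
print(perceive(model, "pallet_blocked.png"))  # -> NO (escalated to a human)
print(perceive(model, "pallet_blocked.png"))  # -> NO (now answered on the edge)
```

Note how the second call on the same scene never leaves the robot: the human's answer has been folded into the model, which is the improvement loop the package automates.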
“Our ROS package gives reliable vision to embodied AI systems,” said Leo Dirac, CTO of Groundlight. “Modern LLMs are often too slow and costly for direct robotic control, and they frequently struggle with basic visual tasks. We combine fast edge models with human oversight, allowing robots to efficiently and reliably perceive and understand their surroundings.”
The Groundlight ROS package allows developers to pose binary questions about images in natural language. High-confidence answers are generated by the current ML model, while low-confidence cases are escalated to human reviewers for immediate responses. This human-in-the-loop approach ensures reliability and continuously enhances the underlying ML model without the need for manual retraining.
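From the developer's side, the interaction reduces to posing a natural-language yes/no question about an image and receiving a label with a confidence score. The sketch below mocks that interface so it runs standalone; `GroundlightClient` and its methods are illustrative assumptions, and the real package's API may differ.

```python
# Hypothetical client illustrating the binary-question pattern described
# above; canned answers keep the example self-contained.

class GroundlightClient:
    """Toy client: escalates low-confidence ML answers to a human reviewer."""

    def __init__(self, confidence_threshold=0.9):
        self.confidence_threshold = confidence_threshold

    def _edge_answer(self, query, image):
        # Pretend the current ML model is unsure about this frame.
        return {"label": "YES", "confidence": 0.6}

    def _human_answer(self, query, image):
        # Pretend a human reviewer responds within a minute.
        return {"label": "YES", "confidence": 1.0}

    def ask(self, query, image):
        result = self._edge_answer(query, image)
        if result["confidence"] >= self.confidence_threshold:
            return result  # high confidence: answered locally by the model
        return self._human_answer(query, image)  # escalated to a human


client = GroundlightClient()
result = client.ask("Is the conveyor belt jammed?", "camera_frame.jpg")
print(result["label"], result["confidence"])  # -> YES 1.0
```

Lowering the threshold trades reliability for latency: a client constructed with `confidence_threshold=0.5` would accept the edge model's uncertain answer instead of escalating.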
Dr. Sarah Osentoski, a robotics pioneer, remarked, “Groundlight’s ROS package is a game changer for teams building robotic systems in unstructured environments. It simplifies human fallback and automatically incorporates exception handling into ML models, improving efficiency over time.”
This release represents a major advancement in robotics and computer vision. By merging the speed of machine learning with the reliability of human oversight, Groundlight empowers developers to create intelligent, adaptive robotic systems with ease. Whether in industrial automation, research, or innovative applications, this package paves the way for the next generation of visually aware robots.
Groundlight is a leading innovator in visual AI solutions, committed to making computer vision more accessible and reliable for robotics and automation applications. By combining cutting-edge machine learning with human intelligence, Groundlight enables developers to build smarter, more adaptable systems that excel in real-world environments.