
Coordinated Multi-Robots for Planetary Exploration

12 min read
Robotics · Computer Vision · Python · CUDA · PyTorch · ROS · Jetson Nano


Role: Software Integration Lead
Technology: Python, CUDA, PyTorch, Ubuntu 18.04, Jetson Nano, Intel RealSense camera, ROS

Overview

In a collaborative senior design project between the Electrical and Computer Engineering (ECE) and Mechanical Engineering (ME) departments, we set out to design and build a versatile robot capable of navigating diverse terrains. The goal went beyond traversal: the robot also needed to interact with and analyze its environment, collecting critical data and completing assigned tasks. By combining expertise from both disciplines, we tackled the challenges of robotic mobility, environmental adaptability, and objective completion.

Objective

NASA has invested heavily in its Martian exploration programs, with landmark missions spanning decades. In 1997, the Mars Pathfinder mission introduced Sojourner, demonstrating a cost-effective way to deliver science instruments to Mars. The Spirit and Opportunity rovers, launched in 2003, scoured the Martian surface for signs of ancient water. Curiosity, which landed in 2012, was tasked with assessing Mars' past suitability for life, though not with detecting life directly. That effort is complemented by the Perseverance rover, which carries more advanced instruments toward similar objectives in a new region of Mars.

While each mission achieved significant milestones, escalating costs have been a major constraint. The simplest rover, Sojourner, cost roughly $25 million to build, while the far more sophisticated Curiosity required a staggering $2.5 billion. These substantial expenses have naturally led to cautious deployment in potentially hazardous Martian terrain.

Our proposed solution is an autonomous robot designed for cost-efficiency, enabling more extensive exploration of Mars' challenging landscapes without straining mission budgets.

Challenges Faced

Challenge 1: Logistical Issues with Firmware Development

Problems:

  • The size and complexity of libraries and packages for object detection and depth analysis meant that our Jetson Nano required almost a full day for installation.
  • Lab A's distance from the main campus meant weak Wi-Fi connectivity, our sole method for transferring essential project files.
  • Because the Nano runs Linux, most firmware resources had to be built from source through the terminal, a process rife with pathing issues, missing dependencies, and arcane errors.
  • Installations that failed near the end of their build cycle wasted hours and forced complete reinstalls.

Solution:

  • We offloaded most of the firmware development to our personal devices, using Linux virtual machines running the same OS version as the Jetson Nano (Ubuntu 18.04). This let us harness the superior processing power of our desktops and laptops and work in areas with reliable internet connections.
  • We also restructured our development workflow to maintain 1:1 operational parity with the Nano, so that once programs were transferred, all the Nano had to do was compile and run them (a rough parity check is sketched after this list).
  • Certain functionalities, notably object detection integrated with the motor system, remained untested due to GPU limitations on the Nano. Still, by resolving the bulk of these issues, the team could work more efficiently and with far less stress.
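
To keep the development machines and the Nano in lockstep, we ran simple sanity checks before copying code over. The script below is a minimal, hypothetical sketch of such a check; the expected Python and PyTorch values are placeholders, not the exact versions we pinned.

```python
# Hypothetical parity check run on both the dev VM and the Jetson Nano.
# The expected values below are placeholders, not the versions we pinned.
import platform
import sys

import torch

EXPECTED_PYTHON = (3, 6)       # placeholder: the Nano image's Python version
EXPECTED_TORCH_PREFIX = "1."   # placeholder: the Nano's PyTorch wheel series


def check_environment():
    ok = True
    if sys.version_info[:2] != EXPECTED_PYTHON:
        print(f"Python mismatch: running {platform.python_version()}")
        ok = False
    if not torch.__version__.startswith(EXPECTED_TORCH_PREFIX):
        print(f"PyTorch mismatch: running {torch.__version__}")
        ok = False
    # CUDA is expected on the Nano but may be absent on a dev VM,
    # so treat its absence as a warning rather than a failure.
    if not torch.cuda.is_available():
        print("Warning: CUDA not available on this machine")
    return ok


if __name__ == "__main__":
    sys.exit(0 if check_environment() else 1)
```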

Challenge 2: SICK TIM-881P LiDAR Integration Difficulties

Problems:

  • The SICK TIM-881P LiDAR is built around a Windows-based development and usage environment (its "appspace"), which conflicts with the Linux-based Jetson Nano.
  • The TIM-881P does not support RViz, which we relied on for visualizing Simultaneous Localization and Mapping (SLAM).
  • The appspace must maintain a persistent connection to SICK's servers; with Mars deployment as the end goal, where internet connectivity is non-existent, this renders the system inoperable.

Solution:

  • Given the inherent challenges of integrating the SICK TIM-881P LiDAR, we pivoted to a combination of camera-based object detection and a depth sensor.
  • This alternative provides an efficient stand-in for SLAM, letting us sidestep the LiDAR's compatibility issues while still understanding and navigating Martian terrain (a minimal depth-query sketch follows this list).
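
As a rough illustration of the camera-plus-depth approach, the sketch below uses the pyrealsense2 library to read aligned color and depth frames from the Intel RealSense camera and query the distance at a pixel of interest (for instance, the center of a detected object's bounding box). The stream settings and sample pixel are assumptions for the example, not our exact configuration.

```python
# Minimal sketch (not our production code): read aligned color/depth frames
# from an Intel RealSense camera and query the distance to a pixel of interest.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

# Align depth pixels to the color image so detector coordinates line up.
align = rs.align(rs.stream.color)

try:
    frames = pipeline.wait_for_frames()
    frames = align.process(frames)
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()
    if depth and color:
        # (cx, cy) would normally come from the object detector's output.
        cx, cy = 320, 240
        distance_m = depth.get_distance(cx, cy)
        print(f"Obstacle/target at roughly {distance_m:.2f} m")
finally:
    pipeline.stop()
```

Aligning depth to the color stream is what lets bounding boxes from the detector index directly into the depth image.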

Challenge 3: Interdisciplinary Communication Issues between ME and ECE Teams

Problems:

  • The ME team's design resulted in a robot body too heavy for the motor specified by the ECE team.
  • The camera placement obstructed its line of sight due to miscommunication about its positioning requirements.
  • The board size chosen by the ECE team did not fit into the compartment designed by the ME team.

Solution:

  • Implemented a mandatory weekly meeting between both teams to ensure all design aspects are cohesive and complementary.
  • Stationed at least one ME team member at the ECE lab to facilitate on-the-spot communication and decisions.
  • Introduced a weekly report system where each team updates the other on their progress, decisions, and any potential challenges foreseen.

Key Responsibilities

  • Terrain Traversal Program Design & Implementation: Led the conceptualization and construction of the robot's terrain traversal algorithm, ensuring it navigates Mars-like terrain efficiently.
  • Device Testing: Oversaw rigorous testing of essential input devices, such as the camera and Inertial Measurement Unit (IMU), to ensure consistent and accurate data capture.
  • Object Detection Development: Engineered a robust object detection program, enabling the robot to identify and interact with its objectives on the Martian surface (an illustrative inference sketch follows this list).
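
Our trained model isn't reproduced here, but the following sketch shows the general shape of the detection inference that ran on the Nano, with a pre-trained torchvision detector and a hypothetical image path standing in for the real pipeline.

```python
# Illustrative sketch of CUDA-accelerated detection inference on the Nano.
# A pre-trained torchvision model stands in for the detector we trained.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Lightweight detector suited to an embedded GPU (stand-in for our model).
model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(pretrained=True)
model.eval().to(device)


def detect(image_path, score_threshold=0.5):
    """Return (boxes, labels, scores) above the confidence threshold."""
    image = to_tensor(Image.open(image_path).convert("RGB")).to(device)
    with torch.no_grad():
        output = model([image])[0]
    keep = output["scores"] > score_threshold
    return output["boxes"][keep], output["labels"][keep], output["scores"][keep]


if __name__ == "__main__":
    boxes, labels, scores = detect("sample_frame.jpg")  # hypothetical image
    for box, label, score in zip(boxes, labels, scores):
        print(f"class {label.item()} at {box.tolist()} (score {score:.2f})")
```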

Achievements

We successfully developed the robot prototype for under $8,000. This is a small fraction of the cost of a Mars rover (over $1 billion), which makes it far more suitable for traversing dangerous terrain.

Key Learnings

  1. Contextual Communication: Ensuring team members have a clear understanding of problems by providing complete information and background is critical. Without this, miscommunication can lead to delays, duplicated efforts, or flawed solutions.

  2. Interdisciplinary Collaboration: Embracing expertise from diverse engineering fields is crucial for comprehensive and effective solutions, especially in multidisciplinary projects like robotics. It brings a variety of perspectives, leading to innovative and holistic approaches.

  3. Resilience in Problem Solving: Persistence in the face of challenges, as evidenced by late-night troubleshooting and iterative problem-solving, is essential. Solutions often require multiple attempts and refinements, and maintaining determination is crucial for eventual success.

  4. Adaptive Problem Solving: When faced with hardware limitations or compatibility issues, pivoting to alternative solutions (like using camera + depth sensor instead of LiDAR) can be more effective than forcing incompatible technologies.

  5. Development Environment Optimization: Leveraging virtual machines and better hardware for development while maintaining compatibility with target hardware significantly improved development efficiency.

Technical Implementation

The project utilized:

  • Computer Vision: PyTorch-based object detection models running on the CUDA-enabled Jetson Nano
  • Depth Sensing: Intel RealSense camera for 3D environment mapping
  • Robot Operating System (ROS): For modular software architecture and hardware abstraction (a skeleton node is sketched after this list)
  • Terrain Analysis: Custom algorithms for identifying navigable paths and obstacles
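
To illustrate the modular layout ROS encourages, here is a minimal, hypothetical rospy node that converts a reported obstacle/target distance into drive commands. The topic names and stop threshold are assumptions for the example, not the values used on the actual robot.

```python
#!/usr/bin/env python
# Hypothetical sketch of a small ROS node in the modular layout described above.
# Topic names ("/target_distance", "/cmd_vel") and the threshold are illustrative.
import rospy
from std_msgs.msg import Float32
from geometry_msgs.msg import Twist

STOP_DISTANCE_M = 0.5  # assumed safety threshold for this example


class SimpleNavigator(object):
    def __init__(self):
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/target_distance", Float32, self.on_distance)

    def on_distance(self, msg):
        cmd = Twist()
        # Drive forward until the perception stack reports a nearby obstacle.
        cmd.linear.x = 0.2 if msg.data > STOP_DISTANCE_M else 0.0
        self.cmd_pub.publish(cmd)


if __name__ == "__main__":
    rospy.init_node("simple_navigator")
    SimpleNavigator()
    rospy.spin()
```

Splitting perception and motion control into separate nodes like this lets either side be tested or swapped independently.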

Future Improvements

  • Integration of more advanced SLAM algorithms
  • Enhanced object detection with real-time learning capabilities
  • Improved power management for extended mission duration
  • Multi-robot coordination algorithms for collaborative exploration