Designing a successful strategy for scoring in the autonomous mode of an FTC robotics competition is a multifaceted process that requires a comprehensive approach. First and foremost, a thorough understanding of the game's rules and objectives, and of your robot's capabilities, is crucial. Teams should carefully analyze the game manual, paying attention to scoring opportunities, potential obstacles, and the overall game dynamics. Armed with this knowledge, teams can identify the tasks best suited for autonomous execution, such as navigating the robot to specific locations, manipulating game elements, or interacting with opponents.
Effective sensor integration is a cornerstone of a successful autonomous strategy. Teams should leverage a variety of sensors, such as encoders, odometry wheels, color sensors, cameras, touch sensors, and distance sensors, to provide the robot with accurate data about its surroundings. This information can be used to create precise and reliable movement routines, allowing the robot to navigate the field with confidence and accuracy. Calibrating and testing the sensors, and their integration with the robot, are essential to ensure consistency and accuracy during competition.
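As a concrete example of turning raw sensor data into movement, encoder ticks must be converted into distance. The sketch below shows this conversion; the tick count and wheel diameter are hypothetical values that each team would replace with its own motor and wheel specifications.

```python
import math

# Hypothetical drivetrain constants -- substitute your own motor's
# encoder resolution and your own wheel diameter.
TICKS_PER_REV = 537.7      # encoder ticks per wheel revolution (assumed)
WHEEL_DIAMETER_MM = 96.0   # wheel diameter in millimeters (assumed)

def ticks_to_mm(ticks: int) -> float:
    """Convert encoder ticks into distance traveled, in millimeters."""
    circumference = math.pi * WHEEL_DIAMETER_MM
    return ticks / TICKS_PER_REV * circumference

def mm_to_ticks(mm: float) -> int:
    """Convert a target distance into the encoder ticks to command."""
    circumference = math.pi * WHEEL_DIAMETER_MM
    return round(mm / circumference * TICKS_PER_REV)
```

Round-tripping a target distance through both functions is a quick sanity check during calibration: the recovered distance should differ from the target by less than one wheel circumference per tick of rounding error.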
Programming plays a pivotal role in autonomous success as well. Teams must develop robust, adaptive code with fine-tuned parameters to handle various scenarios and account for unexpected variables. Implementing decision trees or state machines helps the robot make intelligent choices based on sensor inputs, allowing it to respond dynamically to changes in the environment. Regular testing and iterative refinement of the code and its key parameters are crucial for identifying and fixing any issues that arise.
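The state-machine idea above can be sketched as follows. This is a minimal illustration, not our team's actual code: the state names and the sensor flags (`sleeve_seen`, `touch_pressed`, and so on) are hypothetical, and a real routine would read them from hardware rather than a dictionary.

```python
from enum import Enum, auto

class AutoState(Enum):
    READ_SIGNAL = auto()      # recognize the signal before moving
    DELIVER_PRELOAD = auto()  # score the pre-loaded game element
    PICK_UP = auto()          # fetch a second game element
    DELIVER_SECOND = auto()   # score the second element
    PARK = auto()             # drive to the parking zone
    DONE = auto()

def next_state(state: AutoState, sensors: dict) -> AutoState:
    """Advance only when the sensor condition for the current state
    is satisfied; otherwise hold the current state and keep trying."""
    if state is AutoState.READ_SIGNAL and sensors.get("sleeve_seen"):
        return AutoState.DELIVER_PRELOAD
    if state is AutoState.DELIVER_PRELOAD and sensors.get("element_released"):
        return AutoState.PICK_UP
    if state is AutoState.PICK_UP and sensors.get("touch_pressed"):
        return AutoState.DELIVER_SECOND
    if state is AutoState.DELIVER_SECOND and sensors.get("element_released"):
        return AutoState.PARK
    if state is AutoState.PARK and sensors.get("in_zone"):
        return AutoState.DONE
    return state
```

Gating each transition on a sensor reading is what makes the routine adaptive: if a step takes longer than expected, the robot waits for confirmation instead of blindly executing the next timed movement.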
Collaboration within the team is vital for a well-rounded autonomous strategy. Different team members can focus on specific aspects, such as sensor integration, programming, or strategy refinement. Regular communication ensures that everyone is on the same page and contributes their expertise to the overall success of the autonomous mode.
Strategic planning also involves anticipating opponents' actions and potential disruptions. Analyzing the strengths and weaknesses of other robots in the competition allows teams to devise counter-strategies and minimize the impact of opponent interference. I experienced this firsthand during FTC competitions: my robot was seriously hit and damaged by an opponent's robot. A contingency plan for your robot's autonomous mode, and for the tele-operated period that follows, is therefore strongly recommended.
Finally, constant iterations and improvements are key components of a successful autonomous strategy. FTC teams should learn from each competition, gather feedback, and adapt their approach to address weaknesses and enhance strengths. By consistently refining their strategy, teams can maximize their scoring potential in the autonomous mode of the FTC robotics competition, ultimately contributing to their overall success in the game.
Next, I’ll use the autonomous mode of our FTC robot in the 2022-2023 season as an example to illustrate these ideas of system integration and scoring strategy. More details and instructions for the 2022-2023 season’s autonomous tasks can be found on the FIRST website.
As shown in Figure 89, the autonomous program diagram lays out the operation path and scoring points: (1) recognize the team-supplied signal sleeve to determine the randomized parking zone, then start the robot; (2) automatically deliver the pre-loaded cone to the nearest junction’s pole; (3) automatically navigate to the substation and pick up a cone; (4) automatically navigate to the nearest junction, use the camera to fine-tune the robot’s position, and deliver the second cone to the junction pole; (5) automatically park in the randomized zone.
Figure 89. The diagram of autonomous mode of our robot. The red and blue trajectories represent our robot’s action paths for the red and blue alliances, respectively.
The objectives of our autonomous mode are: 1) automatically recognize the team-supplied signal sleeve and determine the randomized parking zone; 2) automatically deliver the pre-loaded cone to the nearest junction’s pole (illustrated in Figure 90), then automatically navigate to the substation and pick up a cone; 3) automatically navigate to the nearest junction, use the camera to fine-tune the robot’s location, and deliver the second cone to the junction pole (illustrated in Figure 91); 4) automatically park in the randomized zone (shown in Figure 92).
Figure 90. Automatically deliver the pre-loaded cone to the nearest junction’s pole.
Achieving these objectives hinged on the integration of dual cameras with computer vision algorithms. For objective 1), a compact camera located the signal sleeve and identified its distinctive patterns. A color-coding scheme of red, green, and blue hues on the sleeve made the patterns easy to recognize, with the OpenCV package handling localization and recognition.
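The color-coded recognition step can be illustrated with a minimal classifier. This sketch assumes the vision pipeline has already cropped the sleeve region and computed its average (R, G, B) color (for example, with OpenCV's `cv2.mean` on the cropped frame); the mapping of colors to zone numbers is hypothetical.

```python
def classify_sleeve(mean_rgb):
    """Map the sleeve region's average color to a parking zone.

    Hypothetical mapping: red -> zone 1, green -> zone 2, blue -> zone 3.
    mean_rgb is the (R, G, B) average over the detected sleeve region.
    """
    r, g, b = mean_rgb
    # Pick the dominant channel; ties fall back to alphabetical order.
    dominant = max((r, "red"), (g, "green"), (b, "blue"))[1]
    return {"red": 1, "green": 2, "blue": 3}[dominant]
```

A dominant-channel test like this is deliberately simple; in practice, teams typically threshold in a hue-based color space so the result is robust to venue lighting.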
To fulfill objective 2), a predefined robot path was implemented, complemented by a touch sensor that halts the robot when it reaches the wall during the second cone pick-up. Objective 3) was realized with a top-mounted camera that identifies the top of the junction pole; this information was used to adjust the robot’s orientation and position, ensuring precise delivery and release of the cone at the designated junction.
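The camera-based fine-tuning in objective 3) amounts to centering the detected pole tip in the camera frame. Below is a hedged sketch of one common way to do this, a proportional correction; the frame width, gain, and deadband values are illustrative placeholders, not our actual tuned numbers.

```python
FRAME_WIDTH = 320    # assumed camera frame width, in pixels
KP = 0.004           # assumed proportional gain, tuned on the field
DEADBAND_PX = 8      # within this many pixels of center, stop correcting

def steering_correction(pole_x: float) -> float:
    """Return a turn power in [-1, 1] from the pole tip's pixel x.

    Positive output turns one way, negative the other; zero means the
    pole is already centered and the cone can be released.
    """
    error = pole_x - FRAME_WIDTH / 2
    if abs(error) <= DEADBAND_PX:
        return 0.0
    # Clamp so a large detection error never saturates the drivetrain.
    return max(-1.0, min(1.0, KP * error))
```

The deadband matters in practice: without it, pixel-level detection noise makes the robot oscillate around the pole instead of settling.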
Objective 4) was accomplished by leveraging the sleeve signal recognized in objective 1), which guided the robot with precision to the designated parking spot on the right.
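Once the signal zone is known, parking reduces to looking up a pre-measured drive target. The sketch below shows this lookup; the field coordinates are invented for illustration and would be measured on a real field.

```python
# Hypothetical (x, y) drive targets, in inches, for the three
# randomized parking zones -- illustrative values only.
PARKING_TARGETS = {1: (-24.0, 36.0), 2: (0.0, 36.0), 3: (24.0, 36.0)}

def parking_target(zone: int):
    """Return the drive target for the recognized signal zone."""
    if zone not in PARKING_TARGETS:
        raise ValueError(f"unknown signal zone: {zone}")
    return PARKING_TARGETS[zone]
```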
Figure 91. Automatically deliver the second cone to the junction.
Figure 92. The robot automatically parked in the randomized signal zone based on the recognized signal sleeve.
Beyond the two cameras used to detect and recognize sleeve signals and cones on the field, the integration of two touch sensors (see Figure 93) was essential. These touch sensors halt the robot when it reaches the wall along the path for the second cone pick-up (see Figure 91). They prevent the robot from colliding forcefully with the wall, and the known wall position gives an accurate distance reference for the return journey to deliver the second cone to its designated location.
Figure 93. Two touch sensors were used to stop the robot when it reached the wall.