The previous posts have gone through how the hexapod was built, how the different processors communicate with each other, how it is able to walk smoothly, and how it can keep track of both obstacles and its own position relative to those obstacles. In this post, I'll talk about allowing the hexapod to walk around autonomously.
The first question we need to answer is: what do we mean by walking around autonomously? Do we tell the hexapod to walk forward and it does so without help? Do we tell the hexapod to walk to 'the kitchen' and it figures out how to do that? Does the hexapod wake up in the middle of the night to search for a midnight snack? Enabling the hexapod to act out the last two scenarios is well past my skill and interest level, so I'll settle for something a little simpler. My goal is to be able to give the hexapod a location in space (as 2D coordinates) and have it walk there without running into anything. To do this, it needs to use all of the algorithms developed in the previous posts to avoid obstacles, keep track of where it is, and step with smooth confidence.
The work from the previous posts allows for a fair amount of abstraction in solving this problem. From the work on inverse kinematics, we can fully control the motion of the hexapod through two parameters: speed and turning. From the SLAM algorithm, we have a discrete grid of points representing a map of the local environment. Autonomous navigation is then just a problem of deciding where on the map the hexapod should go, and then setting the speed and turning values to make it go that way.
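Concretely, the rest of this post builds on an interface like the following. This is just a sketch to fix ideas; the function names are illustrative placeholders, not the hexapod's actual API:

```python
# Illustrative placeholders, not the hexapod's real code: one planning
# cycle decides where on the SLAM grid to go, then expresses the answer
# as the two gait parameters from the inverse kinematics post.

def find_path(slam_grid, pose, target):
    """Pathfinding over the SLAM grid (fleshed out in the next section)."""
    return [pose, target]  # placeholder: a straight line to the target

def follow_path(pose, path):
    """Turn a path into the (speed, turning) pair the gait engine accepts."""
    return 0.3, 0.0        # placeholder: constant speed, no turning

def navigation_step(slam_grid, pose, target):
    """Plan a path on the map, then steer the hexapod along it."""
    path = find_path(slam_grid, pose, target)
    return follow_path(pose, path)
```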
CS101
Unlike the problem of Simultaneous Localization and Mapping from the previous post, pathfinding has plenty of easy-to-digest literature floating around. However, that doesn't excuse me from simply pointing to some code I've written and calling it a day. Further, many naive implementations of common pathfinding solutions lead to code that works in theory, but doesn't run quickly enough for my relatively simple-minded hexapod.
A classic pathfinding algorithm is Dijkstra's algorithm. In this algorithm, we represent every possible location as a 'node' that is connected to other 'nodes' (in this case, neighboring locations). Each connection between nodes is assigned a distance value, so the algorithm can compute the distance between far-away nodes by summing the connection distances along the path between them. Given a start and end node, Dijkstra's algorithm finds the shortest path (series of connections) between the two by traversing the network of connected nodes and assigning a distance-from-start value to each. If a node is reached multiple times through different paths, the shortest distance takes precedence. When the target node is reached, we simply trace the network in reverse to find which nodes gave the shortest path from start to end. In this way, Dijkstra's algorithm can find the shortest path from point A to point B (assuming you describe positions in space as a network of connected nodes).
Pathfinding on a grid of nodes from point A (green) to point B (red).
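For reference, here's a minimal sketch of Dijkstra's algorithm on a grid like the one pictured above, with each cell connected to its 8 neighbors. This is my own illustration, not the hexapod's actual solver:

```python
import heapq

def dijkstra(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start, goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}   # best known distance-from-start per node
    prev = {}             # back-pointers for tracing the path in reverse
    frontier = [(0.0, start)]
    while frontier:
        d, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue  # stale queue entry; a shorter route was found already
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr, dc) == (0, 0) or not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                if grid[nr][nc]:
                    continue  # solid cell: no connection into this node
                nd = d + (dr * dr + dc * dc) ** 0.5  # 1 or sqrt(2) per step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(frontier, (nd, (nr, nc)))
    if goal != start and goal not in prev:
        return None  # no path exists
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

The `prev` dictionary holds the back-pointers that let us trace the network in reverse once the goal is reached.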
In our application, we also want the pathfinding algorithm to avoid sending the hexapod through walls or other obstacles. A simple way to achieve this is to modify the pathfinding algorithm to include a 'cost' of travelling between nodes. This cost is large for connections that land you inside a solid object. We are then not finding the shortest path between points A and B, but instead the path that minimizes distance + cost. Since we are free to pick the cost, we can pick how much the algorithm prefers short distances versus avoiding obstacles.
Pathfinding on a grid with obstacles.
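In code, the only change to the Dijkstra sketch above is the edge weight in the relaxation step: it becomes distance plus the cost of the cell being entered. One illustrative way to build such a cost map, here also giving cells next to obstacles a smaller penalty so paths keep a little clearance (the penalty values are arbitrary, not the ones my solver uses):

```python
def inflate_costs(grid, penalty=50.0, falloff=0.5):
    """Build a cost map from the occupancy grid: solid cells get the full
    penalty, and free cells touching a solid cell get a reduced one."""
    rows, cols = len(grid), len(grid[0])
    cost = [[penalty if grid[r][c] else 0.0 for c in range(cols)]
            for r in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if not grid[r][c]:
                continue  # only spread penalty outward from solid cells
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                        cost[nr][nc] = max(cost[nr][nc], penalty * falloff)
    return cost

# In the relaxation step of the Dijkstra sketch, the edge weight becomes
# distance + cost:
#     nd = d + (dr * dr + dc * dc) ** 0.5 + cost[nr][nc]
```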
One of the drawbacks of this method is the amount of time needed to find the optimal path. On a large grid, the algorithm will spend a significant amount of time exploring the multitude of potential paths that may not provide an optimal result, but must be checked to make sure. A modification to Dijkstra's algorithm improves on this and leads us to the A* algorithm (pronounced A-star). In this version, we explore connections that lead closer (by some measure) to the target before considering any other paths. This is like making a bet that the path between points A and B is the most direct one. If you win, the algorithm finds the path in a very short amount of time. If you lose, the algorithm has to go back and search all of the other, less direct paths. I've decided to go with the A* algorithm, since I know the environment in which the hexapod will navigate is not too maze-like.
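In terms of the earlier sketch, A* only changes how the frontier is ordered: entries are prioritized by distance-so-far plus a heuristic estimate of the remaining distance. A sketch of that change:

```python
def heuristic(node, goal):
    """Straight-line (Euclidean) distance in grid cells. Since it never
    overestimates the true remaining cost on an 8-connected grid with
    sqrt(2) diagonals, A* still returns the optimal path."""
    return ((node[0] - goal[0]) ** 2 + (node[1] - goal[1]) ** 2) ** 0.5

# In the relaxation step of the Dijkstra sketch, the push becomes:
#     heapq.heappush(frontier, (nd + heuristic((nr, nc), goal), (nr, nc)))
# while 'dist' still stores the true cost nd, exactly as before.
```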
Autonavigation
My implementation of the A* algorithm for the hexapod (which I call "Autonav") can be found here. As with the SLAM algorithm, the actual solver runs in a thread separate from the main thread that controls the hexapod servos. Given a copy of the SLAM map, the current location of the hexapod, and a target location, the solver searches through the map to find a sequence of positions that connects the two locations. Since the SLAM map is updated every few seconds and may contain new obstacles, the Autonav solver needs to be able to solve for a new path every few seconds as well. In my implementation (which is admittedly not very optimized), the solver can search through around 2000 grid points in the 1-2 seconds between SLAM updates. For typical grid setups, this unfortunately gives a physical search range of only a few meters, so for the Autonav solver to quickly and reliably find paths through the environment, the target position can only be a few meters away. Far-away targets can be reached by splitting the distance up into a series of evenly-spaced waypoints, as sketched below.
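The waypoint splitting itself is straightforward. A sketch (the 2-meter leg length is an assumption chosen to stay well inside the solver's search range):

```python
import math

def make_waypoints(start, target, max_leg=2.0):
    """Split the straight line from start to target (x, y in meters)
    into evenly-spaced waypoints no more than max_leg apart."""
    dx, dy = target[0] - start[0], target[1] - start[1]
    n = max(1, math.ceil(math.hypot(dx, dy) / max_leg))
    return [(start[0] + dx * i / n, start[1] + dy * i / n)
            for i in range(1, n + 1)]
```

For example, `make_waypoints((0, 0), (0, 5))` splits a 5-meter trip into three legs of about 1.67 meters each.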
To visualize the path along which Autonav sends the hexapod, I've written a short script that parses the logfile written during a run and creates a reconstruction of the scene. The render shows the SLAM map (with distance to objects as a smooth gradient) in red, the hexapod location in blue, and the Autonav path in green:
Scale is 5 cm per pixel.
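The logfile format is specific to my code, but the rendering step itself looks roughly like this (assuming the map, location, and path have already been parsed into arrays, and with matplotlib standing in for whatever plotting library you prefer):

```python
import matplotlib.pyplot as plt

def render_frame(slam_map, pose, path, frame, scale=0.05):
    """slam_map: 2D array of obstacle weights; pose and path points are
    (x, y) in meters; scale is meters per pixel (5 cm here)."""
    plt.figure(figsize=(6, 6))
    plt.imshow(slam_map, cmap="Reds", origin="lower")  # map as a red gradient
    if path:
        xs = [x / scale for x, _ in path]
        ys = [y / scale for _, y in path]
        plt.plot(xs, ys, color="green", linewidth=2)   # Autonav path
    plt.plot(pose[0] / scale, pose[1] / scale, "bo")   # hexapod location
    plt.axis("off")
    plt.savefig("frame_%04d.png" % frame)              # one frame per SLAM update
    plt.close()
```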
For a given SLAM map, the Autonav solver does a pretty good job finding a path around obstacles. To turn this path into commands for the hexapod to follow, I set the hexapod speed to a constant (0.3) and modulate the turning value based on the path. The low resolution of the Autonav path can cause the hexapod to act a bit jumpy, as the direction the path takes only changes in 45 degree increments. To smooth things out, the hexapod picks a 'tracking point' at least 20 cm away from its current position along the Autonav path. It computes the heading to that point and sets the turning parameter proportional to the difference between that heading and its own. In this way, the hexapod continually tracks the Autonav path.
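A sketch of that tracking step is below. The proportional gain and the angle handling are illustrative; the real code's constants may differ:

```python
import math

SPEED = 0.3       # constant forward speed, as described above
LOOKAHEAD = 0.20  # meters: minimum distance to the tracking point
TURN_GAIN = 1.0   # proportional gain on heading error (an assumption)

def track_path(pose, heading, path):
    """pose: (x, y) in meters; heading: radians; path: Autonav points."""
    if not path:
        return 0.0, 0.0  # nothing to track yet
    # Pick the first path point at least LOOKAHEAD away from the hexapod.
    target = path[-1]
    for point in path:
        if math.hypot(point[0] - pose[0], point[1] - pose[1]) >= LOOKAHEAD:
            target = point
            break
    # Heading error to the tracking point, wrapped to [-pi, pi], sets turning.
    desired = math.atan2(target[1] - pose[1], target[0] - pose[0])
    error = math.atan2(math.sin(desired - heading), math.cos(desired - heading))
    return SPEED, TURN_GAIN * error
```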
To test how well the whole system works, I set up different scenarios for the hexapod to navigate through. After each test, I compiled the reconstruction renders into a movie so I could play back what happened. The first test was for the hexapod to move to a target 1.5 meters in front of it in an empty room, then turn around and go back to the starting point.
From the video, we see that everything seems to be working well. The SLAM algorithm updates the map with LIDAR scans made from new perspectives, the Autonav solver continually updates the intended path, and the hexapod smoothly strolls from point A to point B. Through the eyes of a human observer, this kind of test looks pretty neat:
Once again, everything works pretty smoothly. Of course, not every test I did wound up being such a simple success. Here's an example of a test where the hexapod got turned around due to what I call "SLAM-slip", where the SLAM algorithm fails to properly match new LIDAR scans to the map.
Here's a more complicated test where I added a box along its return path to see if it could adapt to the new environment:
With a bit of tuning, the hexapod was able to successfully navigate around both permanent and temporary obstacles and reach the waypoints I gave it. The biggest issue I found was in timing. By the time the SLAM algorithm had updated the map and the Autonav solver had found a new path, the hexapod would have moved enough to invalidate the new solution for where it should go. Sometimes there would appear to be almost a 5 second delay between useful updates to the hexapod's turning. Considering all of the math the hexapod is trying to run on such low-powered hardware, a 5 second delay isn't too surprising. It's unfortunate that it seems just barely underpowered for the task set before it. A simple solution to this problem is to let the hexapod walk slowly, so it gets updates on where it should go before walking too far off course.
UPDATE: I've done more testing, so here's a longer navigation test through my apartment!
Race Prep
At this point, the hexapod is a pretty fleshed-out robot. It can do some basic autonomous navigation through my apartment, given a set of waypoints to follow. While the end result is conceptually fairly simple, the process of getting there has been quite involved. It would be a shame to end the project here, but there isn't a whole lot more I can reasonably do to enhance the hexapod's capabilities at this point.
To give the hexapod a final task, I've entered it into the Sparkfun Autonomous Vehicle Competition (AVC) on 20 June 2015. I attended the previous two AVCs and had a lot of fun watching both the ground and aerial races. The sophistication of the entries in the ground race has been extremely varied, from LEGO cars relying on dead-reckoning to go-karts with RTK GPS. I figure that a LIDAR-equipped hexapod falls comfortably between these two ends.
Unfortunately, there are three main issues with my hexapod that make it woefully underprepared for such a competition. The first is speed. The fastest I've gotten the hexapod to move is around 1.2 feet per second, but the navigation has issues above 0.5 feet per second. Robots in the competition are expected to travel almost 1000 feet in less than 5 minutes, giving an average speed of 3.3 feet per second. So at best, the hexapod will only make it about 20% of the way around before time is called. The second issue is sunlight. The LIDAR unit I'm using is intended for indoor use and can't make reliable measurements in direct sunlight. This means the hexapod might not be able to see any of its surroundings to get a position reference. The third issue is that the course doesn't really have many natural obstacles to use for position reference. In previous years, the outer boundary of the course was a chain-link fence, which is likely all but invisible to LIDAR. Even if the LIDAR could work in sunlight, there might not be any objects within range to sense. Despite these significant issues, I'm still going to race the hexapod. It won't win, and it might not even lose gracefully.
One of the requirements for entries into the AVC ground race is that the robot must not only be autonomous, but must start the race with a single button press. So far I've been ssh-ing into the main processor through WiFi to initiate any actions, so I need one last physical addition to the hexapod.
The LED+button board connects to the Due, but since the Due pins are shared with the main CPU, both can access them. The orange LED (#1) blinks to show that the PROG_HWMGR code is running on the Due, and the button below it does nothing. The rest of the LEDs and buttons are controlled by the main CPU when running in fully-autonomous mode. When the code is ready to enable the servos and spin up the LIDAR unit, the white LED (#2) flashes. Pressing the button below it allows the code to continue. The blue LED (#3) flashes when the LIDAR is spun up and the SLAM algorithm is ready to make an initial map. Pressing the button below it starts the integrating process. When the initial integration is done, the green LED (#4) flashes, indicating the hexapod is ready to race. Pressing the final button starts the race, and pressing it again ends the race early (in case of wayward robot).
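The start-up sequence on the main CPU then reduces to a simple loop: flash an LED, wait for its button, do the corresponding action. A sketch of the flow (the GPIO helpers here are hypothetical stand-ins for however the shared Due pins are actually read):

```python
import time

# Hypothetical stand-ins for the shared-pin GPIO interface.
def set_led(led, on):
    pass  # placeholder

def button_pressed(led):
    return True  # placeholder; returns immediately in this sketch

def wait_for_press(led):
    """Flash an LED until the button below it is pressed."""
    while not button_pressed(led):
        set_led(led, True)
        time.sleep(0.25)
        set_led(led, False)
        time.sleep(0.25)
    set_led(led, False)

wait_for_press(2)  # white: operator OKs enabling servos + spinning up LIDAR
# ...enable servos, start the LIDAR motor...
wait_for_press(3)  # blue: operator starts the initial SLAM integration
# ...build the initial map...
wait_for_press(4)  # green: ready; this press starts the race
# ...run Autonav until the button is pressed again (early abort)...
```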
So with that, the hexapod is built, programmed, and ready to compete. I'll post a write-up of whatever happens during the race sometime next week, along with some pictures of it racing!