Projects

Energy Efficient Planning

We asked how to plan energy-efficient paths through strong and uncertain disturbances such as ocean currents and wind fields. I developed the EESTO algorithm and demonstrated its benefits in both simulation and the field using a Platypus Lutra autonomous boat. The image below shows the paths the Lutra took on the surface of a lake in the presence of a wind field; the vehicle started in the upper middle (blue star) and traveled to the lower middle (red square). In the field trials I tested three planning methods. The first, a baseline called Naive, ignored the wind during planning. The second, Replan, started with no knowledge of the wind field but replanned as the vehicle gathered wind information. The third, Oracle, started with a map of the wind on the lake from a survey earlier that day and then replanned with respect to the new wind information gathered during execution.
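EESTO itself is more involved than a short snippet, but the structure of the comparison is easy to sketch: an energy cost that credits motion the wind provides for free, a planner that minimizes that cost, and three strategies that differ only in which wind model the planner sees and when it replans. The Python below is a minimal illustration under those assumptions; `energy_cost`, `plan`, and `survey_map` are hypothetical names, not the EESTO implementation.

```python
import numpy as np

def energy_cost(path, wind):
    """Per-step effort the thrusters must supply once the wind's
    contribution along each segment is subtracted out."""
    cost = 0.0
    for a, b in zip(path[:-1], path[1:]):
        push = wind(a) if wind is not None else np.zeros(2)
        cost += np.linalg.norm((b - a) - push)
    return cost

def plan(start, goal, wind, n_samples=200, n_steps=20, noise=5.0, rng=None):
    """Toy planner: sample jittered piecewise-linear paths and keep
    the one with the lowest energy cost under the assumed wind model."""
    rng = rng or np.random.default_rng(0)
    best, best_cost = None, np.inf
    for _ in range(n_samples):
        path = np.linspace(start, goal, n_steps)
        path[1:-1] += rng.normal(0.0, noise, (n_steps - 2, 2))
        cost = energy_cost(path, wind)
        if cost < best_cost:
            best, best_cost = path, cost
    return best

# Naive:  plan(start, goal, wind=None) once; never replan.
# Replan: plan(start, goal, wind=None), then refit a wind model from
#         onboard measurements and call plan() again as the boat moves.
# Oracle: plan(start, goal, wind=survey_map), updating survey_map with
#         measurements gathered during execution.
```

With start and goal given as NumPy arrays, `plan(np.array([0., 0.]), np.array([100., 0.]), wind=None)` corresponds to the Naive baseline.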

Limited Control Authority

We were interested in planning for vehicles with low actuation relative to the disturbances in the environment. I developed a Stochastic Gradient Ascent (SGA) algorithm to plan for this class of vehicle. We were also interested in using a team of these vehicles to gather information over large time and distance scales. The image below shows the paths our algorithm planned over a simulated information field of the Gulf of Mexico (yellow indicates higher information). The yellow vehicle used the currents to move from an area of low information into an area of high information, while the red vehicle was swept away from high information by the currents. Our algorithm enabled vehicles to leverage the currents to gather more information.
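The details of our SGA formulation are not reproduced here, but the core idea can be sketched: optimize a horizon of thrust commands by stochastic gradient ascent on the information accumulated along the drifted trajectory, with the command magnitude capped well below the current speed. The interfaces `info(x)` and `current(x)` and all constants below are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def sga_plan(start, info, current, u_max=0.3, steps=50,
             n_rollouts=100, sigma=0.2, iters=30, rng=None):
    """Stochastic gradient ascent over a fixed horizon of thrust
    commands, scored by information gathered along the trajectory."""
    rng = rng or np.random.default_rng(0)

    def score(controls):
        x, total = np.array(start, float), 0.0
        for u in controls:
            u = u / max(1.0, np.linalg.norm(u) / u_max)  # limited authority
            x = x + current(x) + u                       # advection dominates
            total += info(x)
        return total

    U = np.zeros((steps, 2))
    for _ in range(iters):
        # Smoothed gradient estimate from random perturbations of the plan.
        base, grad = score(U), np.zeros_like(U)
        for _ in range(n_rollouts):
            eps = rng.normal(0.0, sigma, U.shape)
            grad += (score(U + eps) - base) * eps
        U += 0.05 * grad / (n_rollouts * sigma**2)
    return U

# Hypothetical toy fields: a Gaussian information bump downstream of
# an eastward current stronger than the vehicle's actuation.
# info = lambda x: np.exp(-np.sum((x - np.array([10.0, 0.0]))**2) / 8.0)
# current = lambda x: np.array([0.5, 0.0])
# U = sga_plan(np.zeros(2), info, current)
```

Because the commanded velocity is clipped to `u_max` while the current is not, good plans in this sketch must ride the flow rather than fight it, which is the behavior illustrated by the yellow vehicle above.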

Realizable Path Planning

We were interested in decreasing the distance between the path the robot plans and the path it executes. As a first step we developed a Reinforcement Learning (RL) framework to find a policy that reduces this distance. I am currently working on folding the learned policy back into the planner so that the robot can plan initial paths that are more realizable. Below you can see the results for the policies found by the framework on an autonomous boat. I ran three different patterns on the surface of the lake and compared our policy-based method (cyan) to a default waypoint controller (blue). Our policy-based method reduced the distance between the planned path (red, dashed) and the executed path, and it removed the undesired looping behavior seen in the third (Information) path.
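Neither the trained policy nor the framework is reproduced here, but the objective is easy to state: under a turn-rate-limited boat model, score a controller by the deviation between its executed track and the planned segment, and let the RL method search the policy space to maximize that score. The sketch below uses a two-gain policy and hand-picked numbers purely for illustration; none of it is the trained controller.

```python
import numpy as np

def signed_cte(pos, a, b):
    """Signed cross-track error: distance from pos to the planned
    segment a -> b, positive when the boat is right of the path."""
    d = (b - a) / np.linalg.norm(b - a)
    r = pos - a
    return r[0] * d[1] - r[1] * d[0]

def step(x, heading, turn_cmd, speed=1.0, dt=0.5, max_turn=0.2):
    """Boat with a limited turn rate: it cannot pivot in place, which
    is what produces overshoot and looping under aggressive commands."""
    heading += np.clip(turn_cmd, -max_turn, max_turn)
    return x + speed * dt * np.array([np.cos(heading), np.sin(heading)]), heading

def rollout(gains, a, b, steps=150):
    """Episode return: negative mean deviation from the plan. An RL
    method would maximize this over the policy (here just two gains)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    x, heading, errs = a.copy(), 0.0, []
    for _ in range(steps):
        desired = np.arctan2(*(b - x)[::-1])
        h_err = np.arctan2(np.sin(desired - heading), np.cos(desired - heading))
        cmd = gains[0] * h_err + gains[1] * signed_cte(x, a, b)
        x, heading = step(x, heading, cmd)
        errs.append(abs(signed_cte(x, a, b)))
    return -np.mean(errs)

# A default waypoint controller corresponds to gains like (2.0, 0.0):
# it chases the waypoint hard and overshoots under the turn-rate limit.
# rollout((2.0, 0.0), (0, 0), (30, 10)) vs rollout((0.8, 0.3), (0, 0), (30, 10))
```

In this framing, the looping behavior seen in the default controller's Information path corresponds to the turn-rate limit interacting with an aggressive waypoint-chasing command, which a policy trained on the deviation objective learns to avoid.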