Abstract
Coupling (semi-)autonomous drones with ground personnel can improve metrics such as mission safety, task effectiveness, and task completion time. For a drone to be an effective companion, however, it must make intelligent decisions in a partially observable and dynamic environment, in light of uncertainty and multiple competing criteria. One simple example is deciding where and how to move. Such continuous or waypoint-based decisions vary greatly from task to task, e.g., building a 3D map of an area, capturing a minimum number of pixels on objects for automatic target detection, or exploring the area around a search team. While it is possible to implement each behavior from scratch, we instead discuss a flexible and extensible framework that allows the specification of dynamic, controlled, and explainable behaviors through multi-criteria decision making (MCDM), i.e., the aggregation of different UFOMap voxel map layers. While we currently employ specific layers such as drone position, time since a voxel was last observed, minimum distance to a voxel, and exploration fringe, additional future layers open the possibility of creating more complex and novel behaviors. Through testing with simulated flights, we demonstrate that this approach is feasible for constructing useful semi-autonomous behaviors in support of human-robot teaming.