SHARED AUTONOMY IN COMPLEX ENVIRONMENTS
September 6, 2018
Robotic hardware is still limited by noisy sensing and imprecise actuation, which makes executing robust robotic behaviors in complex environments a challenge. To build reliable real-world applications, coupling partial autonomy with (human) supervisory control can yield repeatable robotic processes.
Given the complexity of unstructured human environments, designing systems capable of failure-free operation is both difficult and ill-posed: the space of possible failure cases is effectively unbounded, and the algorithms underlying autonomous systems cannot explicitly enumerate and address all of them.
To deploy robots in non-industrial settings, a practical trade-off is to design systems that either fail gracefully (i.e., avoid damaging critical infrastructure) or query an expert when an unmodeled source of uncertainty arises during decision making [Reference 1]. Systems with this ability to defer to an expert can enable large-scale deployment of robots in the wild.
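One simple way to realize this kind of expert-querying behavior is a confidence-gated fallback: the autonomous policy acts on its own when it is confident, and hands control to a human expert otherwise. The sketch below is purely illustrative; the function names, the toy policy, and the confidence threshold are assumptions, not part of the system described in [Reference 1].

```python
# Illustrative sketch of shared autonomy via a confidence-gated fallback.
# All names and the threshold value are assumptions for this example.

def act_with_fallback(observation, policy, expert, confidence_threshold=0.8):
    """Run the autonomous policy, but defer to the (human) expert when the
    policy's own confidence estimate falls below the threshold."""
    action, confidence = policy(observation)
    if confidence < confidence_threshold:
        # Possible unmodeled uncertainty: query the expert instead of acting.
        return expert(observation), "expert"
    return action, "autonomous"

# Toy policy: confident on familiar observations, unsure otherwise.
def toy_policy(obs):
    if obs in {"open_door", "pick_cup"}:
        return f"do_{obs}", 0.95
    return "stop", 0.3  # low confidence on anything unfamiliar

# Toy expert: a stand-in for human teleoperation.
def toy_expert(obs):
    return f"teleop_{obs}"

print(act_with_fallback("pick_cup", toy_policy, toy_expert))
print(act_with_fallback("novel_object", toy_policy, toy_expert))
```

The key design choice is that the system never silently acts under low confidence; an unfamiliar observation routes control to the expert rather than risking an unsafe autonomous action.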