Performance metrics and human-robot interaction for teleoperated systems

This was the title of my PhD thesis, which can be downloaded from here (in PDF). For relevant papers, have a look at the publications page.
This thesis investigates human factors issues in the design and development of effective human-robot interfaces for emerging applications of teleoperated, cooperative mobile robot systems in situations such as urban search and rescue.
Traditional methods of designing human-robot interaction interfaces have failed to produce effective results, as witnessed in the post-September 11 search operations. This thesis adopts a user-centric approach based on the human factors of situation awareness, telepresence and workload to support the human operator. This is widely accepted as the best way of realising increased levels of collaboration between humans and robotic systems, working in partnership to perform a complex task.
The measurement of these human factors has not been explored in any significant way within the robotics community. This thesis addresses the measurement of these subjective and complex issues by looking to the air traffic control domain, where researchers have developed many methods of quantifying the quality of situation awareness, the level of workload and the level of telepresence of people in aircraft and on the ground. Based on these methods, the research proposes five new methods (ASAGAT, QASAGAT, CARS, PASA, SPASA) for measuring situation awareness, three methods (WSPQ, SUSPQ, SPATP) for measuring telepresence and three methods (NASA-TLX, MCHS, FSWAT) for measuring workload. A comprehensive comparison between them has shown that QASAGAT and SPASA are the most reliable and accurate for measuring situation awareness, SPATP for measuring telepresence and FSWAT for measuring workload. For the measurement of performance, a new method has been developed, which is felt to be more objective for the urban search and rescue scenario than the metrics used in the RoboCup Rescue competition.
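Of the workload methods listed, NASA-TLX is the one with a publicly standard scoring procedure: six subscale ratings (0-100) are combined using weights obtained from 15 pairwise comparisons between the subscales. The ratings and weights below are hypothetical, for illustration only, not data from the thesis:

```python
def nasa_tlx_score(ratings, weights):
    """Overall NASA-TLX workload: weighted average of six subscale ratings.

    ratings: dict mapping each subscale to a 0-100 rating.
    weights: dict mapping each subscale to its tally of wins in the
             15 pairwise comparisons (so the weights must sum to 15).
    """
    assert sum(weights.values()) == 15, "pairwise-comparison tallies must sum to 15"
    return sum(ratings[d] * weights[d] for d in ratings) / 15.0

# Hypothetical example values.
ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 30}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}

score = nasa_tlx_score(ratings, weights)  # 56.0 for these values
```

Each subscale can win at most 5 of its pairwise comparisons, so the weights always sum to 15 and the overall score stays on the 0-100 scale of the ratings.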
Simulation studies involved extensive investigation of the software tools and platforms available for realising robot urban search and rescue scenarios. The Player-Gazebo software was selected as the best solution.
A graphical user interface comprising vision, laser data, a map, robot locations, etc. was developed and assessed with the proposed measurement methods using the simulated robot system and search scenarios.
The test subjects comprised both specialist end users and general non-end users. A between-groups analysis showed that the individual characteristics of each group may have some effect on the experimental variables; however, this effect is minimal, and the main influencing factors are the interaction interfaces and the human factors investigated here. Moreover, the results indicated that there is no significant benefit in using professional urban search and rescue end users.
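The summary does not say which statistical test was used for the between-groups comparison; a common choice for two independent groups is Welch's t-test, sketched below with hypothetical scores (not data from the thesis):

```python
from scipy import stats

# Hypothetical per-participant performance scores for the two groups
# (illustrative values only, not data from the thesis).
end_users = [62, 58, 65, 60, 63, 59]
non_end_users = [61, 57, 64, 60, 62, 58]

# Welch's t-test: compares two independent means without
# assuming the groups have equal variances.
t_stat, p_value = stats.ttest_ind(end_users, non_end_users, equal_var=False)

# A p-value above 0.05 means no significant difference between groups,
# consistent with the "no significant benefit" finding above.
significant = p_value < 0.05
```

Welch's variant is usually preferred over Student's t-test when group sizes or variances may differ, which is typical for specialist-versus-general participant pools.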
Correlation analysis of the data has shown that situation awareness and telepresence have a positive effect on performance, while workload negatively affects performance. It was also found that there is a positive correlation between situation awareness and telepresence, while workload has a negative effect on both. These results validate the assumptions made.
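The direction of such relationships can be checked with the Pearson correlation coefficient. A minimal pure-Python sketch, using hypothetical per-trial scores rather than the thesis data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-trial scores: higher situation awareness tends to
# accompany higher performance; higher workload, lower performance.
situation_awareness = [3.1, 4.0, 2.5, 4.6, 3.8]
performance = [55, 68, 49, 75, 66]
workload = [70, 52, 80, 45, 58]

r_sa = pearson_r(situation_awareness, performance)  # positive
r_wl = pearson_r(workload, performance)             # negative
```

A coefficient near +1 or -1 indicates a strong linear relationship; values near 0 indicate none, which is why the thesis follows the correlation analysis with regression modelling.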
A multiple linear regression model has been developed to further understand the individual contribution of each human factor to the overall performance achieved. The limited prediction capabilities of the linear model suggested a non-linear relationship. For this reason, a non-linear model using an artificial neural network trained with the backpropagation algorithm has also been developed. The resulting neural network is able to predict the response variables more precisely and is shown to generalise well to unseen cases.
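The architecture and training details of the thesis's network are not given in this summary. As a minimal illustration of why a backpropagation-trained network can capture what a linear model cannot, the sketch below fits XOR (a classic non-linear mapping) with one hidden layer; all values are toy data, not the thesis dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: no linear model can fit this mapping, but a small MLP can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 sigmoid units.
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
initial_loss = None
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = float(np.mean((out - y) ** 2))
    if initial_loss is None:
        initial_loss = loss
    # Backward pass: error signal for the squared error
    # (constant factors folded into the learning rate).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
```

After training, `out` approximates the XOR targets, whereas the best least-squares linear fit on the same data can do no better than predicting 0.5 everywhere; the same gap is what motivated moving from the linear regression model to the neural network in the thesis.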
A physical mobile teleoperated urban search and rescue robot system has also been developed for realising future real-world trials.
Experiment and theory of the C. elegans locomotion system
C. elegans is one of the simplest creatures of the animal kingdom. With a mapped genome and the only mapped neural circuitry, this organism offers a first tangible opportunity to understand an entire living, behaving and learning system bottom-up and top-down. As such, it offers great promise to systems biologists, neuroscientists and roboticists alike. Despite its relative simplicity, C. elegans possesses many of the functions that are attributed to higher level organisms, including feeding, mating, complex sensory abilities, memory and learning. Can we understand the underlying engineering designs that allow this tiny nematode to survive and flourish? What insight can we gain into universal principles that give rise to adaptive and robust life-forms or to the unique architecture of its nervous system? Meeting this challenge requires a large multi-disciplinary effort, combining insight and expertise from biology, physics, engineering and computer science. Our group focuses on the C. elegans locomotion system and its neural control. This is a combined theoretical and experimental effort, spanning behavioural experiments and imaging studies as well as mathematical and simulation modelling.
This research was conducted while I was a Research Associate in the Biosystems Group within the Institute for Artificial Intelligence and Biological Systems of the School of Computing at the University of Leeds, based in the School of Mechanical Engineering. I worked within the C. elegans research group, which studies, models and simulates C. elegans locomotion and its neural control in virtual and physical real-world systems.