Bioinspired vision processing for autonomous terrestrial locomotion

Lead Research Organisation: University of Bristol
Department Name: Mechanical Engineering

Abstract

Land vehicles have been designed almost exclusively to use wheels, whereas terrestrial animals almost exclusively use legs for locomotion. Wheeled systems can be fast and efficient on hard, flat ground; leg-based systems are more versatile and efficient on natural terrain. As we move towards a future of autonomous systems operating beyond the extent of the road network and on other planets, the development of robust artificial leg-based locomotion is likely to become increasingly important.
At present, several technological limits prevent the emergence of autonomous legged systems with performance comparable to that of animals. Even if a system were to emerge that could walk, run, leap, and turn without falling over, the technology does not exist to guide it safely through complex terrain using vision. Typically, research into using vision for autonomous locomotion is undertaken using available vehicle technology, suggesting that high-performance, vision-guided legged systems might emerge only some time after a basic high-performance legged vehicle platform becomes available. In a novel approach, we will expedite the development of a vision control architecture for locomotion over complex terrain by using human subjects as high-performance vehicle platforms.
The visual scene captured by a head-mounted camera will be processed to identify terrain characteristics known to be important for the control of locomotion. A map of the terrain, synthesised in 3D virtual space and updated in real time, is presented to the human through a virtual reality headset. The overall outcome measure will be the locomotion performance achieved by humans using the system, compared with performance under normal vision and with no visual information available.
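Conceptually, the system reduces to a per-frame loop: capture an image, extract locomotion-relevant terrain features, fuse them into a persistent terrain map, and render that map to the headset. The Python sketch below illustrates only the structure of this loop; every function in it is a hypothetical placeholder, not the project's actual software.

import numpy as np

# Illustrative per-frame loop; every function is a hypothetical stand-in.

def capture_frame(t):
    # Stand-in for the head-mounted camera: returns a synthetic grey image.
    rng = np.random.default_rng(t)
    return rng.random((480, 640))

def extract_terrain_features(frame):
    # Identify terrain characteristics relevant to locomotion. As a trivial
    # proxy, use local image variance in 10x10 pixel blocks as a roughness cue.
    blocks = frame.reshape(48, 10, 64, 10)
    return blocks.var(axis=(1, 3))

def update_terrain_map(terrain_map, features, alpha=0.2):
    # Fuse the new observation into a persistent map (exponential average).
    return (1 - alpha) * terrain_map + alpha * features

def render_to_headset(terrain_map):
    # Stand-in for VR rendering: report a summary statistic instead.
    print(f"mean roughness estimate: {terrain_map.mean():.4f}")

terrain_map = np.zeros((48, 64))
for t in range(5):  # real-time loop over successive camera frames
    features = extract_terrain_features(capture_frame(t))
    terrain_map = update_terrain_map(terrain_map, features)
    render_to_headset(terrain_map)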
There are many benefits to this approach. It will allow us to investigate how humans modulate gait parameters and limb mechanics to compensate for partial or unreliable information about the environment. It will provide insight into the integration of feedforward and feedback control of locomotion. It will also allow us to determine the locomotion performance that is possible from a given amount and quality of visually derived information, given a highly developed locomotor platform, and thus to understand how these two components of a high-performance locomotor system combine to determine overall performance.
The basic principles and technologies established during this project will be applicable to any land vehicle, whether based on wheels or legs. Additionally, the processing of visual information for locomotion control is a special case of the more general task of searching the ground for an object or visual feature. The technology developed in this project may therefore be translated to other applications in which visually guided autonomous function is required.

Planned Impact

(See Academic Beneficiaries for impact in the Academic community)

Systems for autonomous land locomotion have potential applications across most major industry sectors (see case for support). The nature of the work in this project is to develop fundamental enabling technology for autonomous locomotion, which will in turn enable a wide range of autonomous systems. The full extent of the impact will therefore be wide-ranging and long-term. Two types of company may derive short-term benefit from our research: (1) solutions providers such as BAE Systems and SCISYS, who are already engaged in R&D programmes for autonomous systems; and (2) industries with specific, well-defined applications for autonomous systems, including the partner companies Sellafield Limited, Network Rail, and the National Nuclear Laboratory.

Core technology will be developed using advanced signal processing techniques to map features in the environment from fused video and kinematic data. Short-term beneficiaries include industries engaged in visual mapping, autonomous systems, and robotics, and those with 'foraging'-type applications in which a large geographical area is to be searched for certain localised visual characteristics. A simple illustration of such fusion is sketched below.
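As an illustration of how kinematic and video data can be fused for mapping, the Python sketch below places an image feature on the ground plane by intersecting its viewing ray with flat ground, using the camera height and pitch that a kinematic tracker could supply. The flat-ground assumption, the function name, and all numerical values are illustrative, not taken from the project.

import numpy as np

def pixel_to_ground(u, v, height, pitch, f=500.0, cx=320.0, cy=240.0):
    # Map an image feature (u, v) to ground-plane coordinates, assuming a
    # pinhole camera 'height' metres above flat ground, pitched 'pitch'
    # radians downward (both values supplied by kinematic tracking).
    ray_cam = np.array([(u - cx) / f, (v - cy) / f, 1.0])  # camera-frame ray
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1.0, 0.0, 0.0],       # rotate the camera frame into a
                  [0.0,   c,   s],       # world frame with x right, y down,
                  [0.0,  -s,   c]])      # z forward along the horizontal
    ray = R @ ray_cam
    if ray[1] <= 0:                      # ray does not point below the horizon
        return None
    t = height / ray[1]                  # scale until the ray reaches ground
    return ray[2] * t, ray[0] * t        # (forward, rightward) in metres

# Example: camera 1.6 m up, pitched 30 degrees down, principal-point pixel.
print(pixel_to_ground(320, 240, 1.6, np.radians(30)))  # ~ (2.77, 0.0)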

Publications


Anantrasirichai N (2015) Terrain Classification From Body-Mounted Cameras During Human Locomotion. IEEE Transactions on Cybernetics.
Anantrasirichai N. Robust Texture Features for Blurred Images Using Undecimated Dual-Tree Complex Wavelets. Proceedings of the 21st IEEE International Conference on Image Processing.
Anantrasirichai N. Orientation Estimation for Planar Textured Surfaces Based on Complex Wavelets. Proceedings of the 21st IEEE International Conference on Image Processing.
 
Description As of November 2015 we have developed a prototype system that presents a human with a synthesised version of the visual environment in real time, using a single head-mounted camera to collect visual information and a VR headset for display. The video stream from the camera is split into two processing pathways, which construct estimates of the shape of the environment and of the materials from which it is made. We have shown that the performance of vision systems used for controlling terrestrial locomotion can be optimised by satisfying certain relationships between basic system parameters such as camera height above ground, resolution, and angle of view (the underlying geometry is sketched below). We have shown experimentally that guided and stable locomotion is possible using a very small amount of key visual information, and that there is a relationship between the information rate and the speed of progression.
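The specific optimisation relationships referred to above are experimental findings of the project and are not reproduced here, but the geometry that couples camera height, pixel resolution, and viewing angle can be illustrated. The Python sketch below, assuming a pinhole camera over flat ground, computes the ground footprint of a single pixel; all numerical values are hypothetical.

import numpy as np

def ground_footprint(height, depression, ifov):
    # Ground area imaged by one pixel under a flat-ground pinhole model.
    #   height     : camera height above the ground (m)
    #   depression : angle of the line of sight below horizontal (rad)
    #   ifov       : angular size of one pixel (rad), e.g. vertical FOV / rows
    along = height * ifov / np.sin(depression) ** 2  # derivative of h/tan(a)
    cross = height * ifov / np.sin(depression)       # slant range times ifov
    return along, cross

# Example: camera 1.6 m up, 480 rows across a 45-degree vertical field of
# view, looking at a point on the ground 2 m ahead.
ifov = np.radians(45) / 480
print(ground_footprint(1.6, np.arctan(1.6 / 2.0), ifov))  # metres per pixel

A pixel viewing the ground nearer the horizon covers a rapidly growing patch, which is one reason camera height and viewing angle trade off against resolution.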
Through 2016 we have focussed on developing the hardware and software technology that will allow us to design rigorous experiments to understand how humans use vision to control locomotion when the visual information is of poor quality or incomplete.
Exploitation Route The principal areas of application for our findings will be:
1) in medical research, to develop tools to assist the partially sighted;
2) by autonomous systems engineers, to control vehicles;
3) by vision scientists, to research the human visual system.
Sectors Aerospace, Defence and Marine; Construction; Digital/Communication/Information Technologies (including Software); Healthcare; Culture, Heritage, Museums and Collections; Pharmaceuticals and Medical Biotechnology; Transport
 
Title Realtime VR scene synthesis 
Description A synthesised visual environment containing a subset of the visual information available to normal human vision is presented to a human using VR. The participant wears a VR headset and a head-mounted camera. Software translates the camera view into a 3D geometric model of the environment, which is projected onto a viewing plane and displayed in the VR headset (a minimal sketch of this projection step follows this record). The method allows the study of locomotor performance in reduced visual environments.
Type Of Material Improvements to research infrastructure 
Provided To Others? No  
Impact None yet.
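The projection step mentioned in the description, from a 3D geometric model onto a viewing plane, is standard perspective projection. The Python sketch below is a minimal assumed implementation using a pinhole model; the function name and parameters are illustrative and do not correspond to the tool's actual software.

import numpy as np

def project_to_view_plane(points_world, cam_pos, R_wc, f=1.0):
    # Project 3D terrain-model points onto the headset viewing plane.
    #   points_world : (N, 3) vertices of the geometric terrain model
    #   cam_pos      : (3,) camera position in world coordinates
    #   R_wc         : (3, 3) rotation taking world directions to camera frame
    #   f            : focal length of the virtual view (arbitrary units)
    p_cam = (points_world - cam_pos) @ R_wc.T        # into the camera frame
    z = p_cam[:, 2:3]
    # Perspective divide; points with non-positive depth come back as NaN.
    return f * p_cam[:, :2] / np.where(z > 0, z, np.nan)

# Example: three terrain vertices seen from a camera at the origin, looking
# along +z with the ground 1.6 m below the camera.
pts = np.array([[0.0, -1.6, 2.0], [0.5, -1.6, 3.0], [-0.5, -1.6, 4.0]])
print(project_to_view_plane(pts, np.zeros(3), np.eye(3)))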