Embedding intelligent image processing algorithms: The new safety enhancer for helicopter missions

Zoppitelli, P.
Mavromatis, S.
Sequeira, J.
Anoufla, G.
Belanger, N.
Filias, F.X.
Over the last two decades, image processing technologies rapidly emerged from the shadows to become one of the most important fields of interest in computer science. Although image analysis is a hot topic in the automobile industry and for some aircraft applications (drones, airplanes, space probes...), the certification of any vision-based autopilot system for helicopter missions remains an ongoing challenge. Indeed, such a system would be required to perform complex missions with a high success rate while possibly facing adverse weather conditions. However, the rapid increase in processing power, the development of image analysis algorithms, and the miniaturization of high-resolution cameras are enabling new technical solutions for autonomous flight. Faced with this new technological landscape, helicopter manufacturers can no longer ignore that vision-based systems are about to become a key enhancer for versatile rotorcraft missions. Airbus Helicopters is committed to keeping the safety of its aircraft at the highest standards. For this purpose, Airbus Helicopters has initiated the development of advanced systems integrating many disciplines such as sensor acquisition, scene understanding, situation awareness, and artificial intelligence. As a contribution to this company objective, the EAGLE project (Eye for Autonomous Guidance and Landing Extension) was launched two years ago to develop a generic optronic platform facilitating the integration of algorithms for different applications. The system aims to improve safety and reduce the pilots' workload during oil-and-gas and search-and-rescue (SAR) missions. This paper presents the latest results obtained by Airbus Helicopters and the LIS-lab in the development of a landing platform detector within the framework of this project. We will first introduce the general methodology applied for the determination of the platform position.
The approach is hierarchical and based on a collection of hints to determine, refine and validate suitable locations for the presence of a helipad. We will then present the strategy for the selection of regions of interest. The aim is both to determine the right size of the image portion to be analyzed, and to enable real-time adaptation of the selection and sequencing of the regions to be explored. This article will then detail the methods used to determine the areas likely to contain a landing platform. The algorithm mainly relies on flat ellipse detection, as this is the most visible feature of a helipad seen from long distances. An adaptation of the Hough transform proved to be the most reliable method in the specific case of very flat ellipses. A validation step using many other properties and visual clues then verifies the presence of the helicopter landing platform in the search areas delimited by the obtained ellipses. Having presented the algorithm for the detection of a helicopter landing platform, we will discuss some approaches to increase the system's accuracy, robustness and integrity (detection of failure). In particular, safety and certification considerations are used to select the human-machine interfaces and overall design of the system. A results section will show how this system demonstrated its capability to detect landing platforms from a distance of 1500 meters and to track them without interruption until the landing phase. Last but not least, this paper will introduce an open view of the path identified for image processing technologies in the upcoming years. Our vision of this technological field as a mandatory new core competency to be strengthened within Airbus Helicopters, and the way we intend to build up the necessary ecosystem with Airbus' other business units, will be the epilogue of the article.
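To make the ellipse-detection idea concrete: a Hough-style approach reduces the problem to voting in a parameter space. The minimal sketch below (not the paper's actual implementation, whose details are not given in the abstract) assumes an axis-aligned flat ellipse with known semi-axes, so each edge point votes only for candidate centers; the accumulator peak then locates the helipad circle seen at a shallow viewing angle.

```python
import numpy as np

def hough_ellipse_center(edge_points, a, b, shape, n_angles=180):
    """Hough-style vote for the center of an axis-aligned ellipse
    with known semi-axes (a, b). Each edge point (x, y) votes for
    every center (x - a*cos(t), y - b*sin(t)) that would place it
    on such an ellipse; the true center collects votes from all
    edge points and shows up as the accumulator peak."""
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for (x, y) in edge_points:
        cx = np.round(x - a * np.cos(thetas)).astype(int)
        cy = np.round(y - b * np.sin(thetas)).astype(int)
        ok = (cx >= 0) & (cx < shape[1]) & (cy >= 0) & (cy < shape[0])
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # accumulate votes
    peak = np.unravel_index(np.argmax(acc), acc.shape)
    return (peak[1], peak[0]), int(acc[peak])  # center (x, y), vote count

# Synthetic test: a very flat ellipse (a >> b), as a circular helipad
# marking appears when observed from a long distance at low altitude.
a, b, center = 40, 6, (100, 50)
t = np.linspace(0.0, 2.0 * np.pi, 300, endpoint=False)
pts = [(center[0] + a * np.cos(u), center[1] + b * np.sin(u)) for u in t]
found, votes = hough_ellipse_center(pts, a, b, shape=(120, 200))
print(found)
```

In practice the axes and orientation are unknown, so a full adaptation would either vote over a higher-dimensional parameter space or, as is common, estimate them from edge-point geometry before voting; the center-voting step above is the core Hough mechanism in either case.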