IMBALS aims to develop, validate and verify a certifiable Image Processing Platform (IPP) and demonstrate it in a Vision Landing System (VLS) that is capable of autolanding the Large Passenger Aircraft (LPA) based on images supplied by a camera system, without support of ground-based precision instrument landing aids.
About the project:
Project Designation: IMBALS – Image based landing solutions for Disruptive Cockpit concept
Project Code: JTI-CS2-2017-CFP06-LPA-03-09
Field of Science:
/engineering and technology/electrical engineering, electronic engineering, information engineering/electronic engineering/robotics
/engineering and technology/mechanical engineering/vehicle engineering/aerospace engineering/aircraft
H2020-EU.3.4.5.1. – IADP Large Passenger Aircraft
The VLS will additionally enhance the situational awareness of the crew during any autolanding by supporting a Combined Vision System (CVS) based HMI in the Disruptive Cockpit. The VLS is intended to perform functions equivalent to the pilot's human vision during a manual landing or while supervising an autolanding, so the VLS will primarily be operated under visual meteorological conditions. The visibility conditions under which the VLS will be able to perform its intended function will mainly depend on the nature of the visual obstruction (fog, rain, snow, hail, clouds) and the performance of the camera system. The development of the camera system falls outside the scope of the IMBALS project. Nevertheless, IMBALS will identify requirements for the images, which will in turn drive the requirements for the camera system, and will demonstrate the IPP using current state-of-the-art cameras in the visual and infra-red (IR) wavelengths.
The independence from ground-based precision instrument landing aids addresses the need to maximize the use of autolanding, including at poorly equipped airports, in order to reduce cockpit workload and enhance safety without reducing airport throughput. This will minimize environmental impact, operating cost and operational disruptions. Additionally, IMBALS will investigate how the VLS, in combination with CAT I landing aids, adds up to a system with sufficient accuracy, integrity and continuity for approach and landing under lower visibility conditions.
The certifiability of the system and the enhanced situational awareness further contribute to improved operational safety and ensure the ability to certify and deploy the VLS over time.
The project will start from the Concept of Operations (CONOPS) of the Disruptive Cockpit, and of the VLS in particular, and derive from there the requirements for the VLS as a system and for the IPP as equipment within that system. The IPP technology bricks will be prototyped, validated and finally integrated into an IPP prototype. The IPP prototype will be verified against its requirements, integrated into the Disruptive Cockpit simulator and evaluated in operational scenarios; finally, the IPP will be integrated with a camera system on a flight test bed to evaluate its performance in real flight. The project will put a strong emphasis on the safety and certifiability of the system, including addressing the challenges of certifying the image processing algorithms.
The IMBALS project is conducted by a heterogeneous consortium composed of Scioteq, Unmanned, Tekever and the University of Leuven, which is highly recognized in vision-based 3D motion control for robotics, with Airbus as the topic leader.