Saturday, November 9, 2019

Human Factors in Aviation Essay

A large number of flight accidents occur due to the lack of an adequate view of the surrounding environment. Traditional vision systems rely on synthetic vision, that is, a computed depiction of the environment free of mist, fog and other obscurants. Real scenarios require reliable vision that overcomes such natural hindrances. Humans learnt the art of flying only when they abandoned the idea of flapping wings; similarly, the latest enhanced vision systems have sidestepped traditional vision systems to ensure flight safety. In recent years, Controlled Flight Into Terrain (CFIT) has posed a significant risk in both civilian and military aviation. One of aviation's worst accidents occurred at Tenerife, where two Boeing 747s collided on a fog-shrouded runway as one aircraft attempted to take off while the other was still taxiing on the same runway. The risk of such accidents can be greatly reduced with the aid of a suite of radar and collision-avoidance equipment commonly termed Enhanced Vision Systems (EVS).

Rationale

One of the primary causes of many runway accidents is reduced visibility. One solution to this limitation lies in the use of infrared sensing in aviation operations. All objects on earth emit infrared radiation, and their emissions and features can be detected through total darkness as well as intervening mist, rain, haze, smoke and other scenarios in which the objects are invisible to the human eye (Kerr, 2004). The first EVS was targeted for production in 2001 as standard equipment on the Gulfstream GV-SP aircraft. The system was developed in part by Kollsman Inc. under a technology license from Advanced Technologies, Inc.
Utilization of EVS addresses critical areas such as CFIT avoidance, general safety enhancement during approach, landing and take-off, improved detection of trees, power lines and other obstacles, improved visibility in brown-out conditions, improved visibility in haze and rain, identification of rugged and sloping terrain, and detection of runway incursions.

Enhanced Vision Systems

An enhanced vision system is an electronic means of displaying the forward external scene topology through the use of infrared imaging sensors. Designs fall into near-term and long-term categories. Near-term designs present sensor imagery with superimposed flight symbology on a head-up display (HUD) and may include enhancements such as runway outlines and other display augmentations like obstacles, taxiways and flight corridors. Long-term designs involve complete replacement of the out-the-window scene with a combination of electro-optical and sensor information.

Infrared Sensors

EVS uses infrared (IR) sensors that detect and measure the infrared radiation emitted continuously by all objects. An object's radiation level is a function of its temperature, with warmer objects emitting more radiation. The IR sensor measures these emission levels, which are then processed to produce a thermal image of the sensor's forward field of view. EVS IR sensors operate in the infrared spectrum (Kerr, 2004), which is commonly divided into long-wave IR, medium-wave IR and short-wave IR bands. Two variants of this technology are currently in aircraft use. A single-sensor unit operating in the long-wave, maximum-weather-penetration band has significant penetrating capability, while short-wave sensors enhance the acquisition of runway lighting. A dual-sensor variant, composed of short- and long-wave bands for both light and weather penetration, fuses both sensor images into a full-spectrum view. Image sensors operating in the long-wave infrared spectrum are cryo-cooled.
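The link between object temperature and emission band can be made concrete with Wien's displacement law, which locates the peak of blackbody emission. The short Python sketch below (the example temperatures are illustrative, not from any particular EVS specification) shows why ambient-temperature terrain is best imaged in the long-wave band, while hot sources such as runway lights emit strongly at much shorter wavelengths.

```python
# Wien's displacement law: wavelength of peak blackbody emission.
# Illustrates why LWIR (roughly 8-14 um) suits ambient-temperature
# terrain while hotter sources peak at shorter wavelengths.
WIEN_B_UM_K = 2898.0  # Wien's displacement constant, um*K

def peak_wavelength_um(temp_k: float) -> float:
    """Wavelength (um) of peak blackbody emission at temperature temp_k."""
    return WIEN_B_UM_K / temp_k

for label, t in [("terrain at 300 K", 300.0),
                 ("engine exhaust at 600 K", 600.0),
                 ("incandescent lamp at 2800 K", 2800.0)]:
    print(f"{label}: peak ~{peak_wavelength_um(t):.1f} um")
```

Terrain near 300 K peaks at roughly 9.7 um, squarely in the long-wave band, which is why the weather-penetrating sensor operates there.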
Models of EVS

One of the commonly used EVS models is the EVS 2000. The operation of its dual image sensor is given in figure 1. The long-wave infrared sensor provides the best weather penetration, ambient background and terrain features, while the short-wave sensor provides the best detection of lighting, runway outlines and obstacle lights. The signal processor combines the images of both sensors to display a fused image depicting the current environment (Kerr, Luk, Hammerstrom, and Misha, 2003). (Source: Kerr et al, 2003)

Boeing Enhanced Vision System

Boeing's EVS enhances situational awareness by providing electronic, real-time vision to the pilots. It provides information in low-level, night-time and moderate-to-heavy-weather operations during all phases of flight. It comprises a series of imaging sensors, a navigational terrain database with a virtual pathway for approach during landings, an EVS image processor, and a wide-field-of-view, see-through helmet-mounted display integrated with a head tracker. It also includes a synthetic vision (SV) system accompanying the EVS to present a computer-generated image of the out-the-window view in areas not covered by the EVS imaging sensors. The EVS image processor performs three functions. It compares the image scanned by the ground-mapping radar and the MMW sensor with a database to present a computer-generated image of the ground terrain conditions, accompanied by a Global Positioning System (GPS) to provide a location map during all phases of flight. The IR imaging sensors provide a thermal image of the aircraft's forward line of view. Typical HUD symbology, including altitude, air speed, pressure, etc., is added without any obscuration of the underlying scene. The SV imagery provides a three-dimensional view of a clear-window scene with reference to the stored onboard database. Figure 2 gives Boeing's EVS/SV integrated system.
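The fusion step common to the EVS 2000 and the Boeing processor, combining a registered long-wave frame with a short-wave frame into a single picture, can be sketched at the pixel level. The weighting below is hypothetical; the production fusion algorithms are proprietary.

```python
import numpy as np

def fuse_dual_band(lwir: np.ndarray, swir: np.ndarray,
                   w_lwir: float = 0.6) -> np.ndarray:
    """Fuse two registered, normalized (0..1) sensor frames.

    A weighted blend preserves LWIR terrain context; the max term
    keeps bright SWIR point sources (e.g. runway lights) visible.
    The weights are illustrative, not from any fielded EVS.
    """
    blend = w_lwir * lwir + (1.0 - w_lwir) * swir
    return np.maximum(blend, swir)

# Toy 4x4 frames: uniform warm terrain in LWIR, one runway light in SWIR.
lwir = np.full((4, 4), 0.5)
swir = np.zeros((4, 4))
swir[2, 2] = 1.0
fused = fuse_dual_band(lwir, swir)
print(fused[2, 2])  # the light pixel survives fusion at full brightness
```

The max term is one simple way to honor the design goal stated above: the short-wave band's runway lights must not be washed out by the terrain background.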
The projection of SV data should be confirmed by the EVS data so that the images register accurately. The system provides three basic views: the flight view or normal view; the map views at different altitudes or ranges; and the orbiting view, an exocentric/own-ship view from any orbiting location around the vehicle (Jennings, Alter, Barrow, Bernier and Guell, 2003). (Source: Jennings et al, 2003)

EVS Image Processing and Integration

Association Engine Approach

This is a neural-net-inspired, self-organizing associative memory approach that can be implemented on FPGA-based boards of moderate cost. It constitutes a very efficient implementation of best-match association at high real-time video rates and is highly robust in the face of noisy and obscured image inputs. This means of image representation emulates the human visual pathway. A preprocessor performs feature extraction of edges, as well as potentially higher levels of abstraction, in order to generate a large, sparse, random binary vector for each image frame. The features are created by finding edges, i.e., looking for zero crossings after filtering with a Laplacian-of-Gaussian filter. Each edge image is then thresholded by taking the K strongest features, setting those to 1 and all others to 0. For multiple images, the feature vectors are strung together to create a composite vector. The operations are performed over a range of multi-resolution hyper-pixels, including those for 3-D images. An FPGA provides a complete solution by offering the necessary memory bandwidth, significant parallelism and low-precision tolerance. Figure 3 provides an illustration of an association engine operation (Kerr et al, 2003). Fig 3: Association Engine Operation (Source: Kerr et al, 2003)

DSP Approach

One approach to multi-sensor image enhancement and fusion is the Retinex algorithm, developed at the NASA Langley Research Center.
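Stepping back to the association-engine preprocessor, its feature-extraction stage (Laplacian-of-Gaussian filtering followed by keeping the K strongest edge responses as a sparse binary vector) can be sketched with SciPy. True zero-crossing detection is simplified here to ranking the absolute LoG response; the frame size, sigma and K are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def binary_feature_vector(frame: np.ndarray, k: int,
                          sigma: float = 2.0) -> np.ndarray:
    """Sparse binary feature vector for one image frame.

    LoG-filter the frame, rank edge strength (|LoG| response, a
    simplification of the zero-crossing step), then set the K
    strongest features to 1 and all others to 0.
    """
    response = np.abs(gaussian_laplace(frame.astype(float), sigma=sigma))
    strength = response.ravel()
    vec = np.zeros(strength.size, dtype=np.uint8)
    vec[np.argsort(strength)[-k:]] = 1  # K strongest -> 1, rest 0
    return vec

# Toy 32x32 frame containing a bright square target.
frame = np.zeros((32, 32))
frame[8:24, 8:24] = 1.0
v = binary_feature_vector(frame, k=64)
print(v.sum(), v.size)  # 64 bits set out of 1024

# Feature vectors for multiple frames are strung together, as in the text:
composite = np.concatenate([v, binary_feature_vector(frame.T, k=64)])
```

The resulting large, sparse binary vectors are exactly the representation the association memory matches against at video rate.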
Digital signal processors from Texas Instruments have been used to implement a real-time version of Retinex; the C6711, C6713 and DM642 are some of the commercial digital signal processors (DSPs) used for image processing. Image processing, a subset of digital signal processing, enables fusion of images from various sensors to aid efficient navigation. Figure 4: EVS Image Processing (Source: Hines et al, 2005). In the EVS image-processing architecture, long-wave infrared (LWIR) and short-wave infrared (SWIR) processing can be done simultaneously. The multi-spectral data streams are registered to remove field-of-view and spatial-resolution differences between the cameras and to correct inaccuracies. Registration of the LWIR data to the SWIR data is performed by selecting SWIR as the baseline and applying an affine transform to the LWIR imagery. The LaRC-patented Retinex algorithm is used to enhance the information content of the captured imagery, particularly during poor visibility conditions. Retinex can also be used as a fusion engine, since the algorithm performs nearly symmetric processing on multi-spectral data and applies multiple scaling operations to each spectral band. The fused video stream contains more information than the individual spectral bands and provides the pilot a single output that can be interpreted easily. Figure 4 illustrates the various processing stages in fusing a multi-spectral image (Hines et al, 2005).

Design Tradeoffs

An LWIR-based single-image system is no panacea for fog, but it reduces hardware requirements and is a low-cost, lower-resolution solution. An image-fusion system provides active penetration of fog and better resolution, but at a higher cost. Increasing the bandwidth provides better size and angular resolution and satisfactory atmospheric transmission, but again at high cost.
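The angular-resolution side of this tradeoff can be quantified with the Rayleigh criterion for a circular aperture, theta ≈ 1.22 λ/D. The 100 mm aperture below is a hypothetical figure for illustration, but the comparison shows why a long-wave sensor needs a larger optic, or deliberate oversampling, to approach short-wave sharpness.

```python
def rayleigh_limit_mrad(wavelength_um: float, aperture_mm: float) -> float:
    """Diffraction-limited angular resolution, in milliradians,
    for a circular aperture: theta ~ 1.22 * lambda / D."""
    return 1.22 * (wavelength_um * 1e-6) / (aperture_mm * 1e-3) * 1e3

# Same hypothetical 100 mm optic, LWIR (10 um) vs. SWIR (1.6 um):
print(rayleigh_limit_mrad(10.0, 100.0))  # ~0.122 mrad
print(rayleigh_limit_mrad(1.6, 100.0))   # ~0.020 mrad
```

For the same aperture, the long-wave band is diffraction-limited to roughly six times coarser angular resolution than the short-wave band, which motivates the oversampling discussed next.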
Basic diffraction physics limits the true angular resolution, but this limit can be mitigated by providing sufficient oversampling. Sensitivity vs. update rate and physical size vs. resolution have traditionally been issues with passive cameras; fortunately, dual-mode sensors overcome these tradeoffs (Kerr et al, 2003). A successful image capture of a landing scenario is given in figure 5. Figure 5: EVS view vs. pilot's view (Source: Yerex, 2006)

Human Factors

Controlling the aircraft during the entire flight is the sole responsibility of the pilot, who seeks guidance from the co-pilot, the control tower and the onboard EVS to steer the aircraft successfully. The pilot controls the aircraft based on a representation of the world displayed in the cockpit by the onboard systems and may not see the actual out-the-window visual scene. The display can present visual information that would not otherwise be visible to the eye, although some information may be lost due to limitations of resolution, field of view or spectral sensitivity. Therefore, with EVS, the world is viewed not directly but as a representation through sensors and computerized databases. More importantly, the essential data for pilotage should be available on the display. Though an EVS gives a representation of the actual flight environment, its accuracy plays a significant role in flight safety. Thus human factors are vital for flight control.
