A design of a sub-scale Boundary Layer Ingestion (BLI) fan for a transonic test rig is presented. The fan is intended to be used in flow conditions with varying distortion patterns representative of a BLI application on an aircraft. The sub-scale fan design is based on a design study of a full-scale fan for a BLI demonstration project for a Fokker 100 aircraft. CFD results from the full-scale fan design and the ingested distortion pattern from CFD analyses of the whole aircraft are used as inputs for this study. The sub-scale fan is designed to have performance characteristics similar to those of the full-scale fan within the capabilities of the test facility. The available geometric rig envelope in the test facility necessitates a reduction in geometric scale and consideration of the operating conditions. Fan blades and vanes are re-designed for these conditions in order to mitigate the effects of the scaling. The effects of reduced size, increased relative tip clearance and the thicknesses of the blades and vanes are evaluated as part of the step-by-step adaptation of the design to the sub-scale conditions. Finally, the installation effects in the rig are simulated, including the important effects of the bypass flow on the running characteristics and the need to control the effective fan nozzle area in order to cover the available fan operating range. The predicted operating behaviour of the fan as installed in the forthcoming transonic test rig gives a strong indication that the sub-scale fan tests will be successful.
Based on erosion coupon tests, a sand erosion model for 17-4PH steel was developed. The developed erosion model was validated against the results of compressor erosion tests from a generic rig and from other researchers. A high-fidelity computational fluid dynamics (CFD) model of the test rig was built, a user-defined function was developed to implement the erosion model into the ANSYS CFD software, and the turbulent, two-phase flow-field in multiple reference frames was solved. The simulation results are consistent with the test results from the compressor rig and with experimental findings from other researchers. Specifically, the sand erosion blunts the leading edge, sharpens the trailing edge and increases pressure-surface roughness. The comparisons between the experimental observations and numerical results as well as a quantitative comparison with three other sand erosion models indicate that the developed sand erosion model is adequate for erosion prediction of engine components made of 17-4PH steel.
A new aerodynamic open-circuit test rig for studying boundary layer ingestion (BLI) propulsion has been developed by the National Research Council of Canada. The purpose is to demonstrate the advantages of BLI in reducing the power required for a given thrust and to validate the performance of BLI fan concepts. The rig consists of a boundary layer generator to simulate boundary layer development over an aircraft fuselage. The boundary layer generator can be used to create a natural boundary layer due to skin friction, but it also comprises an array of perforated plates through which pressurised air can be blown to manipulate the boundary layer thickness. The boundary layer thickness can thus be controlled upstream of the fan blades, making parametric studies of boundary layer thickness feasible. Calibration tests were conducted to validate the concept.
Predicting stall before it occurs, or detecting it when it does, is crucial for the smooth and lasting operation of fans and compressors. In order to predict stall, it is necessary to distinguish the operational and stall regions based on certain parameters, and to observe how those parameters vary as the fan transitions towards stall. Experiments were performed on a contra-rotating fan setup under clean inflow conditions, and unsteady pressure data were recorded using seven high-response sensors arranged circumferentially on the casing near the first rotor leading edge. Windowed Fourier analysis was performed on the pressure data to identify different regions as the fan transitions from the operational to the stall region. Four statistical parameters were identified to characterise the pressure data and reduce the number of data points. K-means clustering was applied to these four parameters to algorithmically mark the different regions of operation. Results obtained from the two analyses agree with each other, and three distinct regions have been identified. Between the no-activity and stall regions there is a transition region that spans a short duration and is characterised by intermittent variation of the identified parameters and excitation of Fourier frequencies. The results were validated with five datasets obtained from similar experiments at different times; all five experiments showed similar trends. Neural network models were trained on the clustered data to predict the operating region of the machine. These models can be used to develop control systems that prevent the machine from stalling.
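For illustration only, the following minimal sketch shows one way that windowed statistics of a casing pressure signal could be clustered with K-means into distinct operating regions. It assumes NumPy, SciPy and scikit-learn are available; the four features used here (mean, standard deviation, skewness and kurtosis) are plausible placeholders, not necessarily the parameters selected in the study.

# Sketch: cluster windowed statistics of a casing pressure trace into operating regions.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.cluster import KMeans

def window_features(signal, window):
    """Split a pressure trace into fixed-length windows and compute four statistics per window."""
    windows = signal[: len(signal) // window * window].reshape(-1, window)
    return np.column_stack([
        windows.mean(axis=1),       # mean level
        windows.std(axis=1),        # fluctuation intensity
        skew(windows, axis=1),      # asymmetry of the fluctuations
        kurtosis(windows, axis=1),  # peakiness / intermittency
    ])

# Hypothetical trace; in practice this would be one high-response sensor signal.
pressure = np.random.default_rng(0).normal(size=100_000)
features = window_features(pressure, window=1_000)

# Three clusters corresponding to the no-activity, transition and stall regions.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

Each window label can then serve as a training target for a neural-network classifier used online for stall warning.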
Civil aircraft that fly long ranges consume a large fraction of civil aviation fuel, injecting a significant amount of aviation carbon into the atmosphere, so decarbonising solutions must consider this sector. A philosophical-analytical feasibility study of an airliner family intended to assist in the elimination of carbon dioxide emissions from civil aviation is presented. The family comprises four models based on the integration of the body of a large two-deck airliner with the engines, wings and flight surfaces of a long-range twin widebody jet. The objective of the investigation presented here is to evaluate the impact of liquid hydrogen tank technology in terms of gravimetric efficiency. A range of hydrogen storage gravimetric efficiencies was evaluated, from a pessimistic value of 0.30 to a futuristic value of 0.85. This parameter has a profound influence on the overall fuel system weight and an impact on the integrated performance. The resulting impact is relatively small for the short-range aircraft; it increases with range and is important for the longer-range aircraft. For shorter-range aircraft variants, the tanks needed to store the hydrogen are relatively small, so the impact of tank weight is not significant. Longer-range aircraft are weight constrained, and the influence of tank weight is important. In the case of the longest range, the deliverable distance increases from slightly over 4,000 nautical miles with a gravimetric efficiency of 0.3 to nearly 7,000 nautical miles with a gravimetric efficiency of 0.85.
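For orientation, the gravimetric efficiency of a hydrogen storage system is commonly defined as the ratio of the stored hydrogen mass to the combined mass of hydrogen and tank (a standard definition; the exact formulation used by the authors may differ):

\eta_{grav} = \frac{m_{H_2}}{m_{H_2} + m_{tank}}

Under this definition, \eta_{grav} = 0.30 implies a tank structure weighing more than twice the hydrogen it holds, whereas \eta_{grav} = 0.85 implies a tank weighing less than a fifth of the stored hydrogen, which explains the strong sensitivity of the weight-constrained long-range variant.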
Compression systems of modern civil aircraft engines consist of three components: the fan, the low-pressure compressor (LPC) and the high-pressure compressor (HPC). The efficiency of each component has improved over the last decades by means of rising computational power, which has made high-level aerodynamic optimisation possible. Each component has been addressed individually, separated from the effects of upstream and downstream components. However, while much time and effort has been spent improving the performance of the rotating components, the stationary inter-compressor duct (ICD) has received only minor attention. With the rotating compression components now highly optimised and sophisticated, their remaining performance potential is limited. That is why more aggressive, i.e. shorter, ICDs are increasingly the focus of research and of engine manufacturers. The length reduction offers large weight-saving and thus fuel-saving potential, as a shorter ICD means a reduction in aircraft engine length. This paper aims to evaluate the impact of more aggressive duct geometries on LPC and HPC performance. A multi-objective 3D computational fluid dynamics (CFD) aerodynamic optimisation is performed on a preliminary design of a novel two-spool compressor rig, incorporating four operating-line and two near-stall (NST) conditions to ensure operability throughout the whole compressor operating range. While the ICD is free to change in length, shape and cross-sectional area, the blades of the LPC and HPC are adjusted to the changing duct aerodynamics via profile re-staggering to keep the number of free parameters low. With this parametrisation, length reductions of the ICD of up to 40% are feasible while keeping the reduction in isentropic efficiency of the compressor at the aerodynamic design point below 1%pt. Three geometries of the Pareto front are analysed in detail, focusing on ICD secondary flow behaviour and changes in the aerodynamics of the LPC and HPC. In order to assess changes in stall margin, speedlines for the three geometries are analysed.
In real gas turbines, multiple nozzles are used instead of a single nozzle; therefore, interactions between flames are inevitable. In this study, the effects of flame-flame interaction on the emission characteristics and lean blowout limit were analysed in CH4-fuelled single- and dual-nozzle combustors. OH* chemiluminescence imaging showed that a flame-interacting region, where the two flames from the nozzles merged, was present in the dual-nozzle combustor, unlike the single-nozzle combustor. Flow-field measurements using particle image velocimetry confirmed that a region of faster velocity formed where the flames merged, thereby hindering flame stabilisation. In addition, we compared the emission indices of NOx and CO between the two combustors. The emission indices of CO were not significantly different; however, a distinct effect of flame-flame interaction was observed for NOx. To understand the effect of flame-flame interaction on NOx emissions, we measured the temperature distribution using a multi-point thermocouple. Results showed that a wider high-temperature region formed in the dual-nozzle combustor than in the single-nozzle combustor; this was attributable to the high OH* chemiluminescence intensity in the flame-interacting region. Furthermore, it was confirmed that the size of this interacting region deformed the temperature distribution in the combustor, which can induce a difference in the rate of increase of NOx emissions between high and low equivalence-ratio ranges. In conclusion, we confirmed that flame-flame interaction significantly affected the temperature distribution downstream of the flame, and that this change in temperature distribution contributed primarily to the variation in emission gas concentration.
The objective of the present work is to estimate the performance of a turbojet engine during Fluidic Thrust Vectoring (FTV) achieved by injecting a secondary jet at the throat of a convergent nozzle. The nozzle performance maps and effective nozzle throat area obtained from experiments are coupled with the performance of a conventional engine (without FTV) using an iterative algorithm developed as part of this work. The performance is estimated for different secondary-jet flow rates sourced either from a separate compressor or from the engine's own compressor. During FTV, the operating point shifted towards the surge line with increased turbine entry temperature. The obtained vector angles and thrust magnitudes differ from the desired values. At high secondary-jet flow rates, the turbine operation moved off its performance map. These aspects should be accounted for when integrating FTV at the system level, underscoring the importance of FTV studies coupled with engine performance.
The electrification of commuter aircraft is instrumental in the development of novel propulsion systems. This work aims to explore the design space of a parallel hybrid-electric configuration with an entry-into-service date of 2030 and beyond and to determine the impact of other disciplines on conceptual design, such as component positioning, aircraft stability and structural integrity. Three levels of conceptual sizing are applied and linked with a parametric aircraft geometry tool to generate the aircraft's geometry and position the components. Subsequently, the structural optimisation of the wing box is performed, providing the positions of the centres of gravity of the components placed inside the wing that minimise the induced stresses. The stability and trim analyses follow, with the former being highly affected by the positioning of the components. Results are compared with a similar aircraft with entry-into-service technology of 2014, and it is shown that, in terms of block fuel reduction, the total electrification benefit increases with the degree of hybridisation if the aircraft mass is kept constant. On the other hand, if the battery specific energy is kept constant, a similar block fuel reduction is possible with lower degrees of hybridisation. The potential improvement in terms of carbon dioxide emissions and block fuel reduction ranges from 15.73% to 21.44% compared with the conventional aircraft, for battery specific energy levels of 0.92 and 1.14 kWh/kg, respectively. Finally, the component positioning evaluation indicates a maximum weight limitation of 240 kg for the addition of an aft boundary layer ingestion fan to a tube-and-wing aircraft configuration without compromising the aircraft's static stability.
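For reference, the degree of hybridisation of power is commonly defined in hybrid-electric aircraft studies as the share of installed electric-motor power in the total installed propulsive power (one common definition; the authors' exact formulation may differ):

H_P = \frac{P_{EM}}{P_{EM} + P_{GT}}

where P_{EM} is the installed electric-motor power and P_{GT} the installed gas-turbine power, so H_P = 0 corresponds to a conventional aircraft and H_P = 1 to a fully electric one.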
Simulation and assessment of the transient performance of gas turbine engines at the conceptual and preliminary design stage are often conducted ignoring heat soakage and tip clearance variations, owing to the lack of detailed geometrical and structural information. As a result, transient performance stability problems may not be revealed correctly, and the corresponding design iterations become necessary and costly when those problems surface at the detailed design stage. To make engine design more cost- and time-effective, it has become important to perform better transient performance simulations during the conceptual and preliminary design stage, considering all key impact factors such as the fuel control schedule, rotor dynamics and inter-component volume effects as well as heat soakage and tip clearance variation effects. In this research, a novel transient performance simulation approach with generically simplified heat soakage and tip clearance models for the major gas path components of gas turbine engines, including compressors, turbines and combustors, has been developed to support more realistic transient performance simulations at the conceptual and preliminary design stages. These heat soakage and tip clearance models only require thermodynamic design parameters as input, which are normally available at such design stages. The models have been implemented into in-house transient performance simulation software and applied to a model twin-spool turbojet engine to test their effectiveness. Comparisons between transient performance simulated with and without the heat soakage and tip clearance effects demonstrate that the results are promising. Although the introduced heat soakage and tip clearance models may not be as accurate as those using detailed component geometrical information, they are able to capture the major heat soakage and tip clearance effects and make transient performance simulation and analysis more realistic during the conceptual and preliminary engine design stages.
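As a purely illustrative sketch of the kind of simplified modelling this enables (not the authors' model), a first-order heat-soakage lag driven only by a gas temperature and an assumed time constant could look as follows in Python; the time constant and temperatures are placeholders.

# Sketch: lumped first-order heat-soakage lag with assumed placeholder values.
def heat_soakage_step(t_metal, t_gas, tau, dt):
    """Advance the lumped metal temperature by one time step with a first-order lag."""
    return t_metal + (t_gas - t_metal) * dt / tau

t_metal, t_gas, tau, dt = 400.0, 800.0, 20.0, 0.1  # K, K, s, s
for _ in range(int(5.0 / dt)):  # simulate five seconds following a step change in gas temperature
    t_metal = heat_soakage_step(t_metal, t_gas, tau, dt)
print(f"Metal temperature after 5 s: {t_metal:.1f} K")

The heat absorbed by the metal, proportional to the gas-to-metal temperature difference, would then be subtracted from the gas path energy balance during acceleration and returned during deceleration.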
A shock-induced separation loss reduction method is investigated on the NASA Rotor 37 to promote the aerodynamic performance of a transonic compressor rotor. The method uses a local modification of the blade suction surface shape (a smooth ramp structure) with a constant adverse pressure gradient, taking the radial equilibrium effect into account, to split a single shock foot into a configuration of multiple weaker shock waves. Numerical investigations of the baseline blade and of the improved blade with the suction-side modification have been conducted using the Reynolds-averaged Navier–Stokes method to reveal the flow physics of the ramp structure. The results indicate that the passage shock foot of the baseline is replaced by a family of compression waves and a weaker shock foot generating a moderate adverse pressure gradient on the ramp profile, which is beneficial for mitigating the shock foot and shrinking the flow separation region. In addition, the radial secondary flow of low-momentum fluid within the boundary layer is decreased significantly in the region of passage shock-wave/boundary-layer interaction on the blade suction side, which reduces the mass flow and mixing intensity of the tip leakage flow. With the reduction of the flow separation loss induced by the passage shock, the adiabatic efficiency and total pressure ratio of the improved rotor are superior to those of the baseline model. This study implies a potential application of the ramp profile in the design of transonic and supersonic compressors.
The tip leakage flow generates a large amount of aerodynamic loss in a zero inlet swirl turbine rotor (ZISTR), which directly uses the axial exit flow downstream of a combustion chamber without any nozzles. To reduce the tip leakage flow loss and improve the efficiency of the ZISTR, a front suction side winglet is employed on the blade tip, and the effect of winglet width is numerically investigated to explore its design space. It is found that a suction side leading edge horseshoe vortex (SHV) on the blade tip plays a crucial role in mitigating the tip leakage flow loss. This SHV rotates in the opposite direction to the leakage vortex, so it tends to disrupt the formation of the leakage vortex near the front part of the suction side. With a larger winglet width, the SHV stays longer on the blade tip and leaves it further downstream. This increases the duration and contact area of the interaction between the SHV and the leakage vortex, so the leakage vortex is further weakened; thus, the tip leakage flow loss is reduced and the efficiency is improved. However, a larger winglet width also increases the heat load of the blade owing to the larger blade surface area. The ZISTR designed with a winglet width equal to 2.1% of the blade pitch achieves a good trade-off between efficiency and heat load: the efficiency is improved by 0.85% at the expense of a 1.2% increase in heat load. Moreover, for the blade using this winglet, the mechanical stress due to the centrifugal, aerodynamic and thermal loads is acceptable for engine application. This investigation indicates great potential for improving the efficiency of the ZISTR using a blade tip winglet designed on the front suction side.
The commercial Computational Fluid Dynamics (CFD) software STAR-CCM+ was used to simulate the flow and breakup characteristics of a Liquid Jet Injected into a gaseous Crossflow (LJIC) under real engine operating conditions. The calculation domain geometry and flow boundary conditions were obtained from a civil aviation engine performance model similar to the Leap-1B engine, developed using the GasTurb software, together with the preliminary design results of its low-emission combustor. The Volume of Fluid (VOF) model was applied to simulate the near-field breakup of the LJIC. The numerical method was validated and calibrated through comparison with published test data at atmospheric conditions. The results showed that the numerical method can capture most of the jet breakup structure and predict the jet trajectory with an error not exceeding ±5%. The verified numerical method was then applied to simulate the breakup of the LJIC at the real engine operating condition. The breakup mode of the LJIC was shown to be surface shear breakup at the elevated condition. The trajectory of the liquid jet showed good agreement with Ragucci's empirical correlation.
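For context, LJIC trajectory correlations such as Ragucci's are typically expressed in a power-law form of the following kind; the specific coefficients of that correlation are not reproduced here and the symbols below are generic:

y/d_j = a \, q^{b} \, (x/d_j)^{c}, \quad q = \frac{\rho_l u_l^2}{\rho_g u_g^2}

where q is the liquid-to-gas momentum flux ratio, d_j is the jet diameter, x and y are the streamwise and transverse penetration distances, and a, b and c are empirically fitted constants.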
Losses induced by tip clearance limit decisive improvements in the system efficiency and aerodynamic operational stability of aero-engine axial compressors. The tendency towards even lower blade heights to compensate for higher fluid densities aggravates their influence. Generally, it is emphasised that the tip clearance should be minimised but remain large enough to prevent collisions between the blade tip and the casing throughout the entire mission. The present work concentrates on the development of a preliminary aero-engine axial compressor casing design methodology involving meta-modelling techniques. Previous research work at the Institute for Turbomachinery and Flight Propulsion resulted in a Two-Dimensional (2D) axisymmetric finite element model of a generic multi-stage high-pressure axial compressor casing. Subsequent sensitivity studies led to the identification of significant parameters that are important for fine-tuning the tip clearance via specific flange design. This work is devoted to an exploration of the potential of surrogate modelling in preliminary compressor casing design with respect to rapid tip clearance assessments and the corresponding precision in comparison with finite element results. As data-driven mathematical approximation models conceived for the inexpensive reproduction of numerical simulation results, surrogate models show even greater capacity when linked with extensive design space exploration and optimisation algorithms.
Compared with high-fidelity finite element simulations, the reductions obtained in computational time when using surrogate models amount to 99.9%. Validated via statistical methods and dependent on the size of the training database, the precision of surrogate models can reach down to the range of manufacturing tolerances. Subsequent inclusion of such surrogate models in a parametric optimisation process for tip clearance minimisation rapidly returned adaptations of the geometric design variables.
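As a purely illustrative sketch of how such a surrogate could stand in for the finite element model (assuming scikit-learn; the design variables and response values below are hypothetical placeholders, not the casing parametrisation or FE results of this work):

# Sketch: Gaussian-process surrogate replacing expensive finite element tip clearance runs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)
X_train = rng.random((50, 2))                              # normalised design variables (e.g. flange dimensions)
y_train = 0.5 + 0.3 * X_train[:, 0] - 0.1 * X_train[:, 1]  # stand-in FE tip clearance response [mm]

surrogate = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
surrogate.fit(X_train, y_train)

# The trained surrogate evaluates almost instantly instead of requiring a full FE run.
clearance, std = surrogate.predict(np.array([[0.4, 0.7]]), return_std=True)
print(f"Predicted tip clearance: {clearance[0]:.3f} mm +/- {std[0]:.3f} mm")

An optimiser can then query the surrogate thousands of times, while the finite element model is evaluated only to build and occasionally refresh the training database.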
Increasing demand for commercial air travel is projected to have additional environmental impact through increased emissions from fuel burn. This has necessitated the improvement of aircraft propulsion technologies and the proposal of new concepts to mitigate this impact. The hybrid-electric aircraft propulsion system has been identified as a potential means of achieving this improvement. However, there are many challenges to overcome. One such challenge is the combination of electrical power sources and the best strategy to manage the power available in the propulsion system. Earlier methods reviewed did not quantify the mass and efficiency penalties incurred by each method, especially at the system level. This work compares three power management approaches on the basis of feasibility, mass and efficiency. The focus is on voltage synchronisation and adaptation to the load rating. The three methods are regulated rectification, generator field flux variation and buck-boost conversion. The comparison was made using the propulsion system of the propulsive fuselage aircraft concept as the reference electrical configuration. Based on the findings, the generator field flux variation approach appears to be the most promising for a 2.6MW system, offering the best balance of feasibility, mass and efficiency.
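For orientation, the voltage adaptation offered by the buck-boost option can be summarised by the ideal, lossless continuous-conduction relation of a buck-boost converter (the converter topology and losses considered in the study may differ):

|V_{out}| = V_{in} \, \frac{D}{1 - D}

where D is the switching duty cycle, so the output voltage can be stepped below (D < 0.5) or above (D > 0.5) the input, at the cost of the converter's added mass and conversion losses.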
An unusual philosophical approach is proposed here to decarbonise larger civil aircraft that fly long ranges and consume a large fraction of civil aviation fuel. These aircraft inject a significant amount of carbon emissions into the atmosphere, and holistic decarbonising solutions must consider this sector. A philosophical–analytical investigation is reported here on the feasibility of an airliner family that flies long ranges and assists in the elimination of carbon dioxide emissions from civil aviation.
Backed by state-of-the-art correlations and engine performance integration analytical tools, a family of large airliners is proposed based on the development and integration of the body of a very large two-deck four-engine airliner with the engines, wings and flight control surfaces of a very long-range twin widebody jet. The proposal is for a derivative design and not a retrofit. This derivative design may enable a swifter entry to service.
The main contribution of this study is a philosophical one: a carefully evaluated aircraft family that appears to have very good potential for first-generation hydrogen-fuelled airliners using gas turbine engines for propulsion. This family offers three variants: a 380-passenger aircraft with a range of 3,300nm, a 330-passenger aircraft with a range of 4,800nm and a 230-passenger aircraft with a range of 5,500nm. The latter range is crucially important because it permits travel from anywhere on the globe to anywhere else with only one stop. The jet engine of choice is a 450kN high-bypass turbofan.
A practical method to evaluate quantitatively the uniformity of fuel/air mixing is essential for the research and development of advanced low-emission combustion systems. Typically, this uniformity is characterised by measuring an unmixedness parameter or a uniformity index. An alternative approach, based on the fuel/air equivalence ratio distribution, is proposed and demonstrated in a simple methane/air venturi mixer. This approach has two main advantages: it is correlated with the fuel/air mixture combustion temperature, and the maximum temperature variation caused by fuel/air non-uniformity can be estimated. Because of these advantages, it can be used as a criterion to check fuel/air mixing quality, or as a target for fuel/air mixer design with an acceptable maximum temperature variation. For situations in which the non-uniqueness of the fuel/air distribution becomes important for the mixing check or mixer design, a supplementary statistical criterion should also be used.
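For comparison, the unmixedness parameter mentioned above is often defined as the spatial variance of the fuel mixture fraction normalised by its maximum possible value (one common definition; formulations vary between authors):

U_s = \frac{\sigma_f^2}{\bar{f}\,(1 - \bar{f})}

where \bar{f} and \sigma_f^2 are the mean and variance of the local fuel mixture fraction over the sampled cross-section, so U_s = 0 for a perfectly mixed stream and U_s = 1 for completely segregated fuel and air. The equivalence-ratio-based approach proposed here instead links the measured distribution directly to the local combustion temperature.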
The potential to recover waste heat from the exhaust gases of a turboprop engine and produce useful work through an Organic Rankine Cycle (ORC) is investigated. A thermodynamic analysis of the engine's Brayton cycle is carried out to determine the heat source available for exploitation. The aim is to use the aircraft engine fuel as the working fluid of the organic Rankine cycle in order to limit the extra weight of the waste heat recovery system and keep the thrust-to-weight ratio as high as possible. A surrogate fuel with thermophysical properties similar to aviation gas turbine fuel is used for the ORC simulation. The evaporator design, as well as the weight minimisation and safety of the suggested application, are the most crucial aspects determining the feasibility of the proposed concept. The results show that there is potential in the exhaust gases to produce up to 50kW of power, corresponding to a 10.1% improvement in the overall thermal efficiency of the engine.
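As a rough orientation on how the recovered power maps to the quoted efficiency gain (a simplified energy balance, not the authors' full analysis), the overall thermal efficiency with the bottoming cycle can be written as:

\eta_{overall} = \frac{P_{shaft} + P_{ORC}}{\dot{m}_f \, LHV}

so adding P_{ORC} of up to 50kW of recovered power on top of the baseline shaft power raises the overall efficiency in proportion to the recovered-to-fuel power ratio, which the study quantifies as a 10.1% improvement.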