Nomenclature
- $A$ : flow area
- $b$ : bias
- $B$ : bias uncertainty
- $\boldsymbol{B}$ : m×n Broyden matrix
- ${C_p}$ : specific heat at constant pressure
- ${C_v}$ : specific heat at constant volume
- ${C_V}$ : velocity coefficient
- $\boldsymbol{E}$ : m-vector of mass and energy imbalances
- ${F_g}$ : gross thrust
- ${F_n}$ : net thrust
- $h$ : specific enthalpy
- $\boldsymbol{J}$ : m×n Jacobian matrix
- $\dot m$ : mass flow
- ${{MN}}$ : Mach number
- $n$ : sample size
- $N$ : rotational speed
- $NL$ : rotational speed for the LP spool
- $NH$ : rotational speed for the HP spool
- $P$ : pressure
- $PR$ : pressure ratio
- $Rline$ : auxiliary coordinate in compressor maps
- $s$ : specific entropy
- ${s_p}$ : precision index
- $SFC$ : specific fuel consumption
- ${t_{95}}$ : inverse of Student's t distribution (95% confidence)
- $T$ : temperature
- $U$ : uncertainty
- $WAR$ : water-to-air ratio
- $\dot W$ : shaft power
- $\boldsymbol{x}$ : n-vector of independent parameters
- ${\boldsymbol{x}^*}$ : solution n-vector
- $Y$ : generic cycle parameter
- $\bar z$ : sample average
Greek letters
- $\phi $ : random errors
- ${\rm{\Phi }}$ : random errors uncertainty
- $\beta $ : bypass ratio
- $\theta $ : nondimensional temperature
- $\eta $ : efficiency
- ${\rm{\Delta }}$ : difference
- $\tau $ : convergence tolerance
- $\gamma $ : specific heat ratio
- $\varepsilon $ : error(s)
Subscripts
- $A$ : accuracy
- ${\rm{conf}}$ : corresponding to engine configuration/architecture
- ${\rm{corr}}$ : corrected
- ${\rm{fuel}}$ : parameter associated with the fuel entering the combustor
- ${\rm{ICAO}} - {\rm{SA}}$ : ICAO standard atmosphere
- ${\rm{input}}$ : corresponding to model inputs
- ${\rm{map}}$ : corresponding to turbomachinery map
- ${\rm{num}}$ : corresponding to numerical methods
- $p$ : precision
- ${\rm{pri}}$ : engine primary stream
- ${\rm{phy}}$ : corresponding to physics modeling
- ${\rm{phy}}\& {\rm{num}}$ : corresponding to physics and numeric modeling
- ${\rm{thermo}}$ : corresponding to thermodynamic properties
- ${\rm{sec}}$ : engine secondary (or bypass) stream
- ${\rm{std}}$ : standard day condition
- 0 : total (or stagnation) thermodynamic property (e.g. $h_0$, $T_0$, $P_0$)
Abbreviations
- AGCM : Aerothermodynamic Generic Cycle Model
- DP : design point
- GTE : gas turbine engine
- HPC : high-pressure compressor
- HPT : high-pressure turbine
- LARCASE : Laboratory of Applied Research in Active Control, Avionics and AeroServoElasticity
- LPC : low-pressure compressor
- LPT : low-pressure turbine
- MSC : model subject of comparison
- NPSS : numerical propulsion system simulation
- OD : off-design
- RM : reference model
1.0 Introduction
Gas turbine engines (GTEs) have been the prime method to power commercial and military aircraft for the past several decades. Understanding how such engines work and being able to predict their key performance characteristics, e.g. net thrust ( ${F_n}$ ) and specific fuel consumption ( ${\rm{SFC}}$ ), has been an important topic in industry, research laboratories and academia.
The Laboratory of Applied Research in Active Control, Avionics and AeroServoElasticity (LARCASE) has developed several multidisciplinary aircraft and engine models [Reference Botez1]. Novel methodologies have been explored via these models to predict the performance of real engines, such as the Rolls-Royce (RR) AE3007C and the General Electric (GE) CF34-8C5B1. These novel techniques include system identification [Reference Ghazi and Botez2–Reference Botez, Bardela and Bournisien4], empirical equations [Reference Rodriguez and Botez5] and neural networks [Reference Andrianantara, Ghazi and Botez6, Reference Zaag and Botez7]. Additionally, physics-based aerothermodynamic cycle models are explored [Reference Gurrola-Arrieta and Botez8–Reference Gurrola-Arrieta and Botez10], which are considered the most all-encompassing method with which to predict the performance characteristics of GTEs.
Cycle models can be used for engine design point (DP) studies or to understand the engine behaviour under off-design (OD) conditions. The former is typically utilised to find the best engine aerothermodynamic DP, e.g. the one that minimises ${{\rm{SFC}}}$ , achievable within technological limitations. DP simulations also help to define the optimal engine size, for example, the engine frontal and exhaust nozzle areas. Examples of DP models are discussed in Refs. [Reference Gurrola-Arrieta and Botez9, Reference Gurrola-Arrieta and Botez10]. In contrast, OD models predict the performance of already-sized engines (i.e. fixed frontal and exhaust areas) at different power regimes (take-off, climb, cruise, idle, etc.) across the flight envelope; examples of OD models are further discussed later in this section.
Regardless of the model intent (DP or OD), cycle model developers must ensure the model shows a reasonable precision, i.e. an acceptable error vs. a known reference model. Once the precision is acceptable, further steps can be taken, such as improving the model accuracy, i.e. an acceptable error vs. experimental engine data. In this paper, as already suggested, precision and accuracy have different meanings and will be clearly differentiated throughout the discussion.
Determining the cycle model precision is an essential intermediate step that must occur before matching the model to the desired engine data or making any predictions about an engine’s absolute level of performance. If the cycle model precision is not addressed beforehand, the model predictions ( ${\rm{SFC}}$ , ${F_n}$ , etc.) for any purpose (preliminary design, engine matching, etc.) could be significantly in error, and thus, compromised. The precision of a cycle model is affected by both systematic biases and random errors. Most of the models found in the literature pay little to no attention to these effects; moreover, they are not acknowledged as biases and random errors.
The availability of cycle models has significantly increased over the years. The first models found in our literature review date from the late 1960s to the beginning of the 1970s [Reference McKinney11–Reference MacMillan13]. At that time, it was difficult to find other models or an excerpt of their results to serve as a reference for comparison. Thus, assessment via engine data was the best way to build confidence in the model predictions (e.g. Ref. [Reference MacMillan13]). However, in such comparisons, detecting a bias was a real challenge. Model predictions vs. engine data may disagree for many reasons, one of which concerns the assumptions used in the model to represent the engine data. Incorrect assumptions will cause systematic errors that can go unnoticed without the help of a reference model; thus, an erroneous perception of the model’s precision and accuracy will be generated.
Powerful professional platforms now make it possible to build high-fidelity cycle models, such as the Numerical Propulsion System Simulation (NPSS) [Reference Lytle14], GasTurb [Reference Kurzke15], GSP [Reference Visser16] and PROOSIS [Reference Alexiou17]. These platforms can be used to build a model to serve as a reference; however, they have limited availability (e.g. due to licensing costs).
A comprehensive set of cycle models was found in the literature. For this research, they were classified into three groups based on the means utilised to compare/validate their predictions: (1) models that do not use a reference for comparison (e.g. other model or engine data) [Reference DeCastro, Litt and Frederick18–Reference Roberts and Eastbourn21]; (2) models compared only with engine data [Reference Botez, Bardela and Bournisien4, Reference Suraweera22, Reference Lazzaretto, Toffolo and Boni23]; and (3) models compared with other reference models [Reference Gurrola-Arrieta and Botez9, Reference Gurrola-Arrieta and Botez10, Reference Alexiou and Mathioudakis24–Reference Gazzetta Junior, Bringhenti, Barbosa and Tomita30]. For simplicity, those references that used both a reference model and engine data were considered in the last group.
Models in group 1 [Reference DeCastro, Litt and Frederick18–Reference Roberts and Eastbourn21] present detailed information about the thermodynamic modeling and their main assumptions. While they present excerpts of their predictions analysing different scenarios, no conclusion can be drawn about their precision or whether the predicted performance trends are within expectations.
Concerning group 2 [Reference Botez, Bardela and Bournisien4, Reference Suraweera22, Reference Lazzaretto, Toffolo and Boni23], although their intent is to represent the experimental data as much as possible, it is not clear if their accuracy (the error relative to the engines being matched) is affected, as noted previously, by systematic bias or random errors. Indeed, biases and random errors are likely to be present when comparing either two cycle models ‘vis-à-vis’ or a model vs. engine data. The biases or random errors affecting cycle model comparisons, as detailed in this paper, are associated with differences in the engine architecture being modeled, assumptions in the thermodynamic process modeling, the numerical methods, etc.
For the cycle models in group 3 [Reference Gurrola-Arrieta and Botez9, Reference Gurrola-Arrieta and Botez10, Reference Alexiou and Mathioudakis24–Reference Gazzetta Junior, Bringhenti, Barbosa and Tomita30], their precision was compared to a reference model; however, the criterion used for defining the precision acceptability was arguably subjective in all cases. For example, in Ref. [Reference Gurrola-Arrieta and Botez10], a generic turbofan cycle model programmed in Matlab is compared with an equivalent model programmed in NPSS. A validation criterion of ±0.5% (maximum absolute error) was established for the high-level performance ( ${F_n}$ , ${\rm{SFC}}$ , etc.). While a validation criterion is defined, no rationale is presented about how it was established. Similarly, in Ref. [Reference Chapman, Lavelle, Litt and Guo26], the proposed model, the so-called T-MATS, is compared with an equivalent NPSS model. The criterion used for the model acceptability was anything less than 1.0% error. Chapman et al. [Reference Chapman, Lavelle, Litt and Guo26] acknowledged that their validation criterion was arbitrarily chosen. In Ref. [Reference Gaudet27], the proposed dynamic model for a power generation engine is compared with an equivalent GasTurb 10 model. For steady-state simulations, up to 2.0% errors were deemed acceptable for OD compressor characteristics (e.g. $PR$ , $\eta$ , ${\dot m_{corr}}$ ). However, up to 6.0% error was acceptable when comparing parameters linked to the engine’s control system (i.e. fuel flow).
The acceptability criteria established in Refs. [Reference Gurrola-Arrieta and Botez10, Reference Chapman, Lavelle, Litt and Guo26, Reference Gaudet27] seem to have been drawn not prior to, but after, comparing the precision of the proposed models against their respective references. Indeed, imposing the desired acceptability criterion beforehand, without knowledge of the expected errors, could jeopardise the validation exercise, i.e. the criterion could be too restrictive, and thus not representative of the errors’ variation, or it could be too wide, causing large errors to go undetected. Oftentimes, the best way to define the acceptability criterion, as in Refs. [Reference Gurrola-Arrieta and Botez10, Reference Chapman, Lavelle, Litt and Guo26, Reference Gaudet27], is to test the model against its reference, and to then decide whether the errors are acceptable.
A weak aspect of the models in groups (2) and (3) concerns their validation methodologies. In essence, validating a model’s precision is done by comparing the outcome of one model with that of another used as a reference. However, the outcomes are influenced by various input variables and/or other effects that, when not considered, may cause a misleading perception of the model precision, either too pessimistic or too optimistic.
A methodology should be utilised to identify those effects (i.e. bias and/or random errors) that could affect the model precision, and thus, define a way to preclude or to quantify them. Such a methodology is a fundamental part of establishing the precision interval that a given cycle model can achieve.
Based on the literature review, no comprehensive methodology to determine cycle models’ precision uncertainty was found. However, the work done in Refs. [Reference Gurrola-Arrieta and Botez9, Reference Chapman, Lavelle, Litt and Guo26] is recognised, in which explicit actions were taken to avoid biases in their cycle model comparisons. For example, Gurrola-Arrieta and Botez [Reference Gurrola-Arrieta and Botez9] acknowledged the effect caused by using different thermodynamic packages in their model comparisons. They proposed using thermodynamic packages that show consistency in the derivatives used to compute Δh and Δs, which allows the influence of the thermodynamic packages on their model comparisons to be minimised. Chapman et al. [Reference Chapman, Lavelle, Litt and Guo26] recognised the influence of the numerical methods used by the T-MATS and NPSS models. They considered precluding the effect of the numerical method when executing the cycle model calculations. It is worth noting that the aforementioned effect considered in Ref. [Reference Chapman, Lavelle, Litt and Guo26] was not considered in Ref. [Reference Gurrola-Arrieta and Botez9], and vice-versa.
The first objective of this work is to propose a comprehensive methodology to define the uncertainty of the precision of a model subject of comparison (MSC) relative to a reference model (RM), both of which are cycle models.
The second objective is to apply the methodology for deriving the uncertainty figures comparing vis-à-vis the MSC and the RM. The former is a zero-dimensional, steady-state model, the so-called Aerothermodynamic Generic Cycle Model (AGCM), developed at the LARCASE [Reference Gurrola-Arrieta and Botez9], while the latter is an equivalent model programmed in the high-fidelity NPSS platform.
Once the total uncertainty ( $U$ ) is determined, it is possible to objectively define the expected level(s) of error(s) between the MSC and the RM. Moreover, the $U$ computed and presented in this paper for different performance parameters could serve as a reference for future model comparisons whenever the proposed methodology cannot be implemented, and hence the corresponding $U$ s cannot readily be established.
2.0 Model description
Both the MSC and the RM are intended to represent a generic two-spool turbofan engine with separate exhausts (see Fig. 1) and their details are treated in Ref. [Reference Gurrola-Arrieta and Botez9]. In Fig. 1, the gas path thermodynamic properties (i.e. $T$ , $P$ , $\dot m$ , etc.) are identified using numerals across the different stations throughout the engine. For example, the total pressure at the high-pressure compressor (HPC) exit and the maximum cycle total temperature are referred to as ${P_{0,030}}$ and ${T_{0,040}}$ , respectively.
The MSC and the RM are both composed of two sub-models, one for DP and the other for OD performance analyses. These sub-models encompass the same thermodynamic modeling, e.g. mass and energy balances. For the purpose of this work, the MSC-OD and RM-OD were considered and their necessary inputs, e.g. aerothermodynamic DP, turbomachinery maps and scaling factors, were taken from Ref. [Reference Gurrola-Arrieta and Botez9].
The aerothermodynamic DP encompasses assumptions to define the engine power ( $\beta$ , $PR$ s, ${T_{0,040}}$ ), the engine component efficiencies ( $\eta$ s, ${C_V}$ s, etc.), and the off-takes (HPC bleed and shaft(s) power extractions) of a middle-class thrust turbofan engine (about 12,700 lbf/56,492 N of take-off thrust). These assumptions permit the computation of the nozzle exhaust areas (i.e. ${A_{080}}$ and ${A_{180}}$ ) and the turbomachinery component maps scaling factors. A map encompasses a series of correlations that relate the overall thermodynamic characteristics of the turbomachinery (e.g. ${\dot m_{corr}}$ , $PR$ , $\eta$ ) to its rotational speed ( ${N_{corr}}$ ). For this work, the same maps were used by both the MSC and the RM. The maps are modified by their scaling factors to represent the intended middle-class thrust engine.
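As an illustration of the map scaling referred to above, a commonly used convention (the specific convention adopted in the MSC and the RM is not detailed here, so the expressions below are indicative only) scales each map quantity by the ratio of its aerothermodynamic DP value to the value read from the unscaled map at the map design point:

$S{F_{\dot m}} = \frac{{{{\dot m}_{corr,DP}}}}{{{{\dot m}_{corr,map,DP}}}}$ ,  $S{F_{PR}} = \frac{{P{R_{DP}} - 1}}{{P{R_{map,DP}} - 1}}$ ,  $S{F_\eta } = \frac{{{\eta _{DP}}}}{{{\eta _{map,DP}}}}$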
One essential capability of any cycle model is to determine the values of its relevant thermodynamic properties ( $T$ , $P$ , $h$ , $\gamma$ , etc.) across the engine gas path. Cycle models typically use a set of equations or tabulated values to obtain the desired thermodynamic properties for air and combustion products. Herein, these equations or tabulated values are referred to as thermodynamic packages. The RM (i.e. NPSS) has four preloaded thermodynamic packages to choose from: allFuel, GasTbl, CEA and JANAF; their descriptions are presented in Ref. [31]. For this work, only allFuel and GasTbl were considered, as they are typically used in GTE analyses for propulsion applications, and their setup is seamless and straightforward. The MSC has two thermodynamic packages, thermo_package1 and thermo_package2. These packages were put together based on the thermodynamic properties’ tabulated values presented in Refs. [Reference Gordon32, Reference Pelton and Hannah33], respectively. For this research, only thermo_package1 is treated. Moreover, subsets of the NPSS allFuel and GasTbl tables were obtained by reverse engineering. These tables were implemented in the MSC and labeled as AGCM_allFuel and AGCM_GasTbl, respectively, to avoid any confusion with their original sources in the NPSS. The errors in the calculated thermodynamic properties between AGCM_allFuel vs. allFuel and AGCM_GasTbl vs. GasTbl were about $1 \times {10^{ - 10}}$ ; thus, the former tables were deemed equivalent to the latter. Finally, for the scope of this work, only dry air and combustion products were considered ( $WAR$ = 0.0).
3.0 Uncertainty, bias and random errors
Throughout this discussion, the definitions of uncertainty, bias and random errors presented in Ref. [Reference Abernethy and Thompson34] are taken as reference. These definitions were presented for experimental measurements in GTEs; however, they were adapted for this work, as explained next.
The total uncertainty ( $U$ ) is the maximum error expected between the MSC and the RM, and is a measure of the precision of the former, as depicted in Fig. 2. According to Ref. [Reference Abernethy and Thompson34], the $U$ provides an estimate of the largest error that might reasonably be expected. In our case, the $U$ represents an estimate of the interval where the largest error is expected to fall for a given performance parameter (e.g. ${F_n}$ , ${\rm{SFC}}$ , etc.).
The $U$ in Equation (1) is expressed as the sum of the uncertainty due to biases ( $B$ ) and that corresponding to precision random errors ( ${\rm{\Phi }}$ ). The expression in Equation (1) was presented in Ref. [Reference Abernethy and Thompson34] and it is adopted in this work due to its simplicity. The biases ( $b$ ) are systematic or repeated errors that appear when comparing two models, hence they produce an uncertainty ( $B$ ). In the case of thermodynamic cycle models, the $B$ might be a function of power setting, e.g. engine rotational corrected speed ( $N{L_{corr}}$ ), $OPR$ , etc., and it may not be symmetric, i.e. its magnitude and sign may vary with engine power. The first term on the right side of Equation (1) must be interpreted as the net sum of the uncertainty biases, positive ( ${B^ + }$ ) and negative ( ${B^ - }$ ). As stated in Ref. [Reference Abernethy and Thompson34], a bias cannot be determined unless it is compared with the true value to be characterised. In this paper, the true value is defined as the true relative value, obtained from the RM (i.e. for checking precision). The term true value is reserved in this paper for the performance obtained from engine testing (i.e. for checking accuracy), as depicted in Fig. 2.
The ${\rm{\Phi }}$ in Equation (2) is the uncertainty associated with random errors ( $\phi $ ) in any process. The $\phi $ typically follow a bell-shaped probability distribution around the true relative value being characterised (as depicted in Fig. 2). In the absence of any $B$ , a measurement process is only influenced by the ${\rm{\Phi }}$ , which establishes a measure of the predictions’ closeness (i.e. precision). Random errors might appear due to different sources; thus, the total ${\rm{\Phi }}$ in Equation (2) is determined by the root sum square of j = 1, …, m sources. The precision index ( ${s_p}$ ), Equation (3), is the sample standard deviation of the errors ( $\varepsilon$ ), in which $\bar z$ is the average sample error. Given that the datapoints (n) available for the models’ comparison might be limited (n ≤ 30), the ${{\rm{\Phi }}_j}$ need to be corrected for the reduced sample size; thus, the statistic called the inverse Student’s t ( ${t_{95}}$ ) is used. For large samples, ${t_{95}}$ converges asymptotically to the value for a normal distribution, ${t_{95}}\;$ = 2.0. The $B$ and ${\rm{\Phi }}$ can be estimated based on the $\varepsilon$ established between the MSC and the RM, as shown in Equation (4); where $Y$ represents any cycle parameter of interest (e.g. ${F_n},$ ${\rm{SFC}}$ , etc.) computed in both the MSC and the RM.
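Since Equations (1)–(4) are referenced above but not written out, the following forms, consistent with the definitions just given and with the formulation of Ref. [Reference Abernethy and Thompson34], are provided as a reconstruction for the reader’s convenience (the exact forms of the original equations may differ in detail):

$U = B + {\rm{\Phi }}$ (1)

${\rm{\Phi }} = \sqrt {\mathop \sum \nolimits_{j = 1}^m {{\left( {{t_{95}}\,{s_{p,j}}} \right)}^2}} $ (2)

${s_p} = \sqrt {\frac{1}{{n - 1}}\mathop \sum \nolimits_{i = 1}^n {{\left( {{\varepsilon _i} - \bar z} \right)}^2}} $ (3)

$\varepsilon = \frac{{{Y_{MSC}} - {Y_{RM}}}}{{{Y_{RM}}}} \times 100\% $ (4)

with the absolute form of Equation (4), ${Y_{MSC}} - {Y_{RM}}$ , used when errors are reported in physical units.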
The cycle model accuracy is measured relative to its true value, e.g. relative to a specific engine test or a sample of engines (see Fig. 2). Original equipment manufacturers (OEMs) might use a test from a development campaign, i.e. specific test(s), or data from production engine acceptance tests (engine sample), to establish the true value. This process is typically called engine matching and is outside the scope of this work. For more information about engine matching accuracy, the reader is referred to Roth et al. [Reference Roth, Doel, Mavris and Beeson35].
Someone facing an engine matching task should first validate the cycle model precision, provided it has not yet been validated, and then deal with the engine matching itself. Due to the scope of this work, the accuracy uncertainties due to bias ( ${B_A}$ ) and random errors ( ${{\rm{\Phi }}_A}$ ), presented in Fig. 2, are considered to be zero.
Next, the proposed definitions for the $B$ and ${\rm{\Phi }}$ are discussed. These definitions are accompanied by examples observed in the literature from the so-called groups 2 and 3 discussed in Section 1.
Configuration bias uncertainty ( ${B_{conf}}$ ). This uncertainty is associated with differences in the engine configuration/architecture being simulated, i.e. the number of components (compressors, turbines, ducts, etc.), spools (one, two, three), exhaust type (e.g. mixed or separate), off-takes (bleed and power extractions) and cooling type (cooled vs. uncooled turbines, chargeable vs. non-chargeable cooling).
A ${B_{conf}}$ may occur, among other scenarios, when assumptions such as HPC bleed extraction and shaft power extraction used to satisfy parasitic (i.e. engine) and aircraft demands during actual engine operation are ignored. In the case of parasitic needs, the HPC bleed extraction is used to cool down the parts in the hot section, and the shaft power extraction is needed to drive accessory systems such as the engine oil and fuel pumps. Concerning the aircraft, HPC bleed extraction is used to meet the demands of its environmental control system (ECS). Ignoring, deliberately or by omission, the assumptions concerning HPC bleed and shaft power extractions (for engine and aircraft) would impose non-negligible errors (i.e. bias) on the comparisons between the two models.
In various studies reported in the literature, a ${B_{conf}}$ is embedded in the results; however, it is not always explicitly acknowledged. One example in which a ${B_{conf}}$ might occur is when comparing two significantly different configurations. For example, in Ref. [Reference Bardela and Botez36], a turbofan engine model with separate exhausts was designed to match data from a mixed-stream turbofan engine, such as the RR AE3007C. An optimisation technique was used to minimise the error between the model and the experimental engine data. Although the optimisation technique solved the problem, the thermodynamic processes of single vs. separate exhaust turbofan engines differ. For a single exhaust engine, the primary (hot) and secondary (cold) streams are mixed before expanding in a single exhaust nozzle. The thermodynamic mixing process is not accounted for within the boundaries of a separate exhaust turbofan, such as the one depicted in Fig. 1. The result is the induction of a ${B_{conf}}$ ; however, this is not explicitly acknowledged by Bardela and Botez [Reference Bardela and Botez36]. It is believed that a ${B_{conf}}$ influenced the values of the independent parameters used to minimise the errors between the model and the engine data.
Input bias uncertainty ( ${B_{input}}$ ). This uncertainty is caused by differences in input parameters to the cycle model, e.g. in DP analyses: total engine flow ( ${\dot m_0})$ , $\beta$ , $\eta$ s, $PR$ s, ${T_{0,040}}$ . In OD analyses, ${B_{input}}$ could arise due to differences in the nozzle exhaust areas ( ${A_{080}}$ , ${A_{180}}$ ), and in turbomachinery component maps or their scaling factors, as well as due to differences in power setting ( $N{L_{corr}},$ $OPR$ , etc.).
An example of a ${B_{input}}$ effect is found in Ref. [Reference Gaudet27] for the steady-state compressor operating line validation, where a consistent discrepancy was found between the operating lines obtained from the MSC and the RM (GasTurb 10). According to Gaudet [Reference Gaudet27], the discrepancy’s root cause was a difference in the input demanded shaft power, which was variable in the MSC and constant in the RM. Once the input discrepancy was amended (considering constant input power in both models), the operating lines from both models seemed to overlap.
${B_{conf}}$ and ${B_{input}}$ can lead to large, systematic errors of possibly unknown magnitudes that are difficult to reconcile and which must be avoided. As discussed in the different examples in this paper, using a known RM is an excellent way to detect ${b_{conf}}$ and ${b_{input}}$ . The researcher in charge of the model comparison must ensure that all due diligence has been exercised to prevent these types of biases.
Thermodynamic bias uncertainty ( ${B_{thermo}}$ ). This uncertainty is due to differences in the assumptions used to define thermodynamic properties, such as $h$ , ${C_p}$ , $\gamma$ , etc. Differences in assumptions such as air and fuel composition (number of moles and species included in the chemical reaction), gas dissociation, the coefficients of the equations (e.g. high-order polynomials) used to set ${C_p}(T)$ and ${C_v}(T)$ , and ideal vs. non-ideal gas behaviour could all cause a ${B_{thermo}}$ .
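For reference, under an ideal-gas formulation the quantities listed above are typically related as follows (a generic illustration only, not the specific formulation used by any of the packages discussed in this paper):

$h(T) = {h_{ref}} + \int_{{T_{ref}}}^T {C_p}(T')\,dT'$ ,  $s(T,P) = {s_{ref}} + \int_{{T_{ref}}}^T \frac{{{C_p}(T')}}{{T'}}\,dT' - R\,{\rm{ln}}\frac{P}{{{P_{ref}}}}$ ,  $\gamma = \frac{{{C_p}}}{{{C_v}}} = \frac{{{C_p}}}{{{C_p} - R}}$

Differences in the ${C_p}(T)$ coefficients, in the reference states or in the gas-composition assumptions therefore propagate directly into $h$ , $s$ and $\gamma$ , which is how a ${B_{thermo}}$ arises.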
The potential influence of the thermodynamic package (i.e. ${B_{thermo}}$ ) in model comparisons has been acknowledged in the literature [Reference Gurrola-Arrieta and Botez9, Reference Alexiou and Mathioudakis24, Reference Gaudet27, Reference Gazzetta Junior, Bringhenti, Barbosa and Tomita30]; however, the fact that ${B_{thermo}}$ can significantly impact their cycle model comparisons has only been partially addressed. For example, in Ref. [Reference Gaudet27], the ${B_{thermo}}$ influence is acknowledged after significant errors in fuel flow (e.g. 3.19%) and delivered shaft power (e.g. 1.18%) were found between the MSC (dynamic model) and the RM (GasTurb 10). These errors were attributed to gas dissociation in combustion products not accounted for in the former. In Ref. [Reference Gurrola-Arrieta and Botez9], the differences in thermodynamic packages between the MSC (AGCM) and the RM (NPSS) are discussed. Gurrola-Arrieta and Botez [Reference Gurrola-Arrieta and Botez9] observed that while the absolute values of the thermodynamic properties, e.g. specific enthalpy (h), between packages are not the same, their derivatives (e.g. $\frac{{\partial h}}{{\partial T}}$ ) are overall consistent; thus, a small impact due to thermodynamic package differences was expected.
Physics modeling random errors uncertainty ( ${{\rm{\Phi }}_{phy}}$ ). This uncertainty concerns the thermodynamic modeling differences between the MSC and the RM (i.e. mass, energy, entropy and momentum balances) that occur either at the component or engine level. Provided that no misconception of the underlying physics exists, and assuming no other $B$ or ${\rm{\Phi }}$ are present, two different thermodynamic models should give close enough results.
The ${{\rm{\Phi }}_{phy}}$ is a good measure to identify shortcomings in the MSC thermodynamic modeling; however, it is difficult to measure/quantify alone. High-fidelity cycle model calculations use numerical methods to solve the unknowns posed by either DP or OD simulations; for example, in the case of OD simulations, to find the operating point of the turbomachinery engine components (e.g. compressors, turbines, etc.) that makes the mass and energy imbalances within the engine vanish. If the aim is to neatly quantify ${{\rm{\Phi }}_{phy}}$ , the influence of the numerical method must be forestalled, which brings significant complexity. From the literature review, only Ref. [Reference Chapman, Lavelle, Litt and Guo26] proposed a methodology to preclude the influence of the numerical method in the model comparisons (T-MATS vs. NPSS). However, given that Chapman et al. [Reference Chapman, Lavelle, Litt and Guo26] only used one data point in their comparisons, no conclusions can be drawn about the ${{\rm{\Phi }}_{phy}}$ .
Numeric random errors uncertainty ( ${{\rm{\Phi }}_{num}}$ ). This uncertainty is associated with the mathematical formulation to solve the imbalances in the model (e.g. mass, energy, etc.). Both the formulation and the method to solve the problem might differ between the MSC and the RM; however, it is expected that both models solve the mass and energy imbalances.
In the case of the MSC used in the present work, the problem to be solved is formulated in Equation (5), in which the m-vector $\boldsymbol{E}(\boldsymbol{x})$ represents the mass and energy imbalance errors, ${\left\| \cdot \right\|_2}$ represents the Euclidean norm, and $\tau$ represents a numerical tolerance. The n-vector $\boldsymbol{x}$ of independent parameters, Equation (6), represents those parameters that are varied by the numerical method to find the solution to Equation (5). These parameters include the total engine flow ( ${\dot m_0}$ ), engine bypass ratio ( $\beta$ ), fuel flow ( ${\dot m_{fuel}}$ ) and the turbomachinery map parameters, such as $Rline$ s, $N{H_{corr}}$ and $PR$ s.
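Written out, and again as a reconstruction consistent with the description above rather than a verbatim copy of the original equations, Equations (5) and (6) take a form such as:

${\left\| {\boldsymbol{E}\!\left( \boldsymbol{x} \right)} \right\|_2} \le \tau $ (5)

$\boldsymbol{x} = {\left[ {{{\dot m}_0},\;\beta ,\;{{\dot m}_{fuel}},\;Rline{\rm{s}},\;N{H_{corr}},\;PR{\rm{s}}} \right]^T}$ (6)

where the exact composition and ordering of $\boldsymbol{x}$ depend on the engine configuration and the chosen solver setup.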
The numerical method programmed and implemented in the MSC is a quasi-Newton-type method, which is intended to find a solution vector ( $\boldsymbol{x}^*$ ) that satisfies Equation (5). As discussed in Ref. [Reference Gurrola-Arrieta and Botez8], this method presents significant advantages in terms of execution speed compared to other gradient-based methods.
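A minimal sketch of how such a quasi-Newton (Broyden) iteration can be organised is given below. It is illustrative only, assuming a dense Broyden update and a finite-difference initialisation; it does not reproduce the actual MSC or NPSS solvers, and the toy imbalance function is hypothetical.

```python
import numpy as np

def broyden_solve(E, x0, tau=1e-8, max_iter=50):
    """Drive the imbalance vector E(x) below the tolerance tau, i.e. ||E(x)||_2 <= tau.

    E  : callable returning the m-vector of mass and energy imbalances
    x0 : initial guess for the n-vector of independent parameters
    Illustrative sketch only; not the solver used by the MSC or the RM.
    """
    x = np.asarray(x0, dtype=float)
    e = E(x)
    n = x.size
    # Initial Broyden matrix from one-sided finite differences (n extra passes)
    B = np.empty((e.size, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = 1e-6 * max(abs(x[j]), 1.0)
        B[:, j] = (E(x + dx) - e) / dx[j]
    for _ in range(max_iter):
        if np.linalg.norm(e) <= tau:                  # Equation (5) satisfied
            return x                                   # solution vector x*
        s = np.linalg.lstsq(B, -e, rcond=None)[0]      # Newton-like step
        x_new = x + s
        e_new = E(x_new)
        # Broyden rank-1 update: avoids rebuilding the Jacobian at every iteration
        B += np.outer(e_new - e - B @ s, s) / (s @ s)
        x, e = x_new, e_new
    raise RuntimeError("quasi-Newton iteration did not converge")

# Hypothetical usage on a toy 2x2 system (not an engine model)
E_toy = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]**2 + 1.0])
print(broyden_solve(E_toy, [1.0, 1.0]))
```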
Regarding the RM, some solver characteristics can be inferred, though they remain uncertain, from the information extracted from some of the model’s files and the information presented in Ref. [37]. It is worth mentioning that, most of the time, it is unlikely that the details of the numerical methods used in the MSC or the RM are known, as they may be proprietary and thus not readily available. Instead, it is likely that two different models will use different formulations and/or numerical methods to solve the engine imbalances, thereby creating the possibility for ${{\rm{\Phi }}_{num}}$ , or even a ${B_{num}}$ , to exist, as discussed later in this paper.
4.0 Cycle model precision uncertainty methodology
This section defines the methodology to estimate the figures of the biases and random errors uncertainties discussed in Section 3. When developing this methodology, it was assumed that the RM is a well-known model/platform, meaning that the researcher has confidence in its results. Moreover, it is assumed that the RM can be manipulated to some extent, which may not be the case with a limited-privileges RM (e.g. trial licenses with restricted functionalities). For this research, the available NPSS license allowed manipulation of the model to add or remove engine components, customise its solver, and select different thermodynamic packages, which was sufficient for the scope of this work.
The total uncertainty ( $U$ ) in a given performance parameter is proposed to be established from Equation (1) along with the biases and random errors uncertainties discussed in the previous section; thus, the $U$ can be computed by the sum of the individual uncertainties, $B$ and ${\rm{\Phi }}$ , as expressed in Equation (7).
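Under the uncertainty sources defined in Section 3, Equation (7) can be read as the sum below; this is a reconstruction from the text rather than a verbatim copy of the original equation:

$U = {B_{conf}} + {B_{input}} + {B_{thermo}} + {{\rm{\Phi }}_{phy}} + {{\rm{\Phi }}_{num}}$ (7)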
As noted in Section 3, ${b_{conf}}$ and ${b_{input}}$ could induce large systematic errors of unknown magnitude and should be avoided. In this work, both ${B_{conf}}$ and ${B_{input}}$ have been implicitly forestalled from the model comparisons. The MSC and the RM were intended to represent the turbofan engine depicted in Fig. 1 (hence, ${B_{conf}}$ = 0.0). Both models were given the same inputs: aerothermodynamic DP, turbomachinery component maps and their scaling factors; hence, ${B_{input}}$ = 0.0. Moreover, it is assumed that ${B_{conf}}$ and ${B_{input}}$ = 0.0 hold throughout this discussion, given that the engine configuration and the main engine assumptions discussed in Section 2 remain invariant.
To establish the remaining uncertainties, i.e. ${B_{thermo}}$ , ${{\rm{\Phi }}_{phy}}$ , and ${{\rm{\Phi }}_{num}}$ , it was necessary to define a methodology that allows one uncertainty to be computed while suppressing the effects of the remaining ones. For example, to compute ${{\rm{\Phi }}_{phy}}$ , both ${B_{thermo}}$ and ${{\rm{\Phi }}_{num}}$ had to be forestalled (i.e. ${B_{thermo}}$ and ${{\rm{\Phi }}_{num}}$ = 0.0). A similar rationale can be devised when computing the effects of the other two uncertainties.
To calculate ${B_{thermo}}$ , ${{\rm{\Phi }}_{phy}}$ and ${{\rm{\Phi }}_{num}}$ , a sample of data points was proposed (see Table 1). This sample encompasses several flight conditions (altitude, MN and $\Delta {T_{ICAO - SA}}$ ) and power setting scenarios tested in both the MSC and the RM. The power setting parameter was selected as the LP spool corrected speed ( $N{L_{corr}}$ ), defined as the ratio of the rotational speed of the LP spool ( $NL$ ) to the square root of the nondimensional temperature ( $\theta $ ) at the fan inlet (station 120 in Fig. 1). The nondimensional temperature ( ${\theta _{120}}$ ) is the ratio of ${T_{0,120}}$ to ${T_{std}}$ = 518.67 R (288.15 K). The $N{L_{corr}}$ ranged from 52.5% to 100.0% with $\Delta N{L_{corr}}$ = 2.5% for Ground; the lower limit of the $N{L_{corr}}$ range had to be adjusted for the Flight 1 and Flight 2 altitudes to avoid convergence troubles.
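Written out explicitly, the power-setting parameter defined above is:

$N{L_{corr}} = \frac{{NL}}{{\sqrt {{\theta _{120}}} }}$ ,  ${\theta _{120}} = \frac{{{T_{0,120}}}}{{{T_{std}}}}$ ,  ${T_{std}} = 518.67\,{\rm{R}}\;(288.15\,{\rm{K}})$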
4.1 Random errors uncertainty ( ${{\bf{\Phi }}_{\boldsymbol{phy}}}$ and ${{\bf{\Phi }}_{\boldsymbol{num}}}$ )
To compute ${{\rm{\Phi }}_{phy}}$ , the effects of ${{\rm{\Phi }}_{num}}$ and ${B_{thermo}}$ first need to be suppressed. A specific running mode was used in both the MSC and the RM to preclude ${{\rm{\Phi }}_{num}}$ . A running mode is defined, informally herein, as a way of modifying the cycle model execution to meet the desired intent. This mode will be called one-pass, the name given to it in the RM, which was also adopted in this paper.
Executing the thermodynamic calculations on each component depicted in Fig. 1 requires several passes per iteration. These calculations are performed left-to-right in Fig. 1 during each pass, first on the secondary stream and then on the primary. The numerical method might call for several passes, in which the vector $\boldsymbol{x}$ is perturbed by small amounts ( $\Delta \boldsymbol{x}$ ), to construct either the Jacobian ( $\boldsymbol{J}$ ) or Broyden ( $\boldsymbol{B}$ ) matrix. The $\boldsymbol{J}$ (or $\boldsymbol{B}$ ) is then used to compute an updated $\boldsymbol{x}$ . The one-pass mode, on the other hand, allows the model calculations to be executed, for a given $\boldsymbol{x}$ , only once, thus avoiding the call to the numerical method (hence ${{\rm{\Phi }}_{num}}$ = 0.0). In this work, the solution vector ( $\boldsymbol{x}^*$ ) was provided deliberately to both models. The $\boldsymbol{x}^*$ for the flight conditions considered in Table 1 were obtained from an independent run of the RM. Additionally, to preclude ${B_{thermo}}$ (i.e. ${B_{thermo}}$ = 0.0), both the MSC and the RM considered equivalent thermodynamic packages, i.e. AGCM_allFuel and allFuel, respectively (discussed at the end of Section 2).
To isolate the ${{\rm{\Phi }}_{num}}$ effect, it is required to forestall all other terms in Equation (7). Assuming equivalent thermodynamic packages are used during the comparison, then ${B_{thermo}}$ = 0.0. Moreover, to make ${{\rm{\Phi }}_{phy}}$ = 0.0, the same model (either MSC or RM) must be considered during the comparisons. However, to determine the influence of the numerical method (i.e. ${{\rm{\Phi }}_{num}}$ ), the chosen model would need to be run with both its own numerical method and that of the other model.
Isolating ${{\rm{\Phi }}_{num}}$ was deemed cumbersome, impractical and, most likely, impossible to achieve. The programming code of the numerical method used by the RM cannot be exported to be used in the MSC. On the other hand, the RM does not allow the addition of a solver other than its own. To overcome the shortcomings in determining ${{\rm{\Phi }}_{num}}$ , its uncertainty was combined with ${{\rm{\Phi }}_{phy}}$ , and the lumped uncertainty was designated as ${{\rm{\Phi }}_{phy\& num}}$ . The ${{\rm{\Phi }}_{phy\& num}}$ was obtained by comparing the runs from both the MSC and the RM using their corresponding numerical methods and only precluding ${B_{thermo}}$ , i.e. considering AGCM_allFuel and allFuel, respectively.
It is worth noting that when allowing the numerical methods to find $\boldsymbol{x}^*$ , the uncertainty in ${{\rm{\Phi }}_{num}}$ is combined with ${{\rm{\Phi }}_{phy}}$ , and cannot be separated. In other words, ${{\rm{\Phi }}_{phy}}$ and ${{\rm{\Phi }}_{num}}$ are not linearly independent, i.e. ${{\rm{\Phi }}_{phy\& num}} \ne {{\rm{\Phi }}_{phy}} + {{\rm{\Phi }}_{num}}$ . Based on the previous discussion, Equation (7) had to be rewritten to account for ${{\rm{\Phi }}_{phy\& num}}$ , as shown in Equation (8).
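Following the same reconstruction used for Equation (7), Equation (8) can be read as:

$U = {B_{conf}} + {B_{input}} + {B_{thermo}} + {{\rm{\Phi }}_{phy\& num}}$ (8)

with ${B_{conf}}$ = ${B_{input}}$ = 0.0 under the conditions established earlier in this section.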
4.2 Thermodynamic bias uncertainty ( ${\boldsymbol{B}_{\boldsymbol{thermo}}}$ )
To establish ${B_{thermo}}$ alone, the influence of the last term in Equation (8) must be eliminated, i.e. deliberately making ${{\rm{\Phi }}_{phy\& num}}$ = 0.0. The ${B_{thermo}}$ was assessed by using solely the MSC, thus precluding ${{\rm{\Phi }}_{phy\& num}}$ . The physics modeling and mathematical formulation defining the engine mass and energy imbalances and the numerical method used to find $\boldsymbol{x}^*$ remain invariant when using the same model. To compute ${B_{thermo}}$ , the AGCM_allFuel package was taken as reference when comparing the other two thermodynamic packages of interest, namely, AGCM_GasTbl and thermo_package1. A similar approach could be followed to determine the ${B_{thermo}}$ for any other thermodynamic package of interest. For thermo_package1, the thermodynamic properties were normalised based on the corrections provided by Gurrola-Arrieta and Botez [Reference Gurrola-Arrieta and Botez9]. These corrections allowed the ${B_{thermo}}$ between thermo_package1 and AGCM_allFuel to be reduced. Finally, a summary of the overall proposed methodology is presented in Fig. 3.
5.0 Results and discussion
This paper discusses a handful of parameters that are of interest in GTE performance analyses for propulsion. The set of parameters encompasses $SFC$ , the net thrust ( ${F_n}$ ), the primary and secondary nozzles gross thrusts ( ${F_{g,pri}}$ and ${F_{g,sec}}$ , respectively), the temperature at the HPC exit ( ${T_{0,030}}$ ), the inter-turbine temperature ( ${T_{0,046}}$ ), the shaft power in both high-pressure (HP) and low-pressure (LP) spools, ${\dot W_{HP}}$ and ${\dot W_{LP}}$ , respectively; the fuel flow ( ${\dot m_{fuel}}$ ), and the HP spool corrected rotational speed ( $N{H_{corr}}$ ).
In Section 4, the physics and numerical modeling uncertainties were combined into one, ${{\rm{\Phi }}_{phy\& num}}$ ; however, for the purpose of the discussion, the results for ${{\rm{\Phi }}_{phy}}$ are also presented. Although ${{\rm{\Phi }}_{phy}}$ will not be considered to compute the overall $U$ , as per Equation (8), its assessment was deemed beneficial to derive insights concerning ${{\rm{\Phi }}_{phy\& num}}$ .
5.1 Physics modeling random errors uncertainty ( ${\bf{\Phi }}_\boldsymbol{phy}^{}$ )
To determine ${\rm{\Phi }}_{phy}^{}$ , first, the errors ( $\varepsilon $ ) between the MSC and the RM were computed. These errors were obtained from cycle runs depicted in Table 1. The $\varepsilon $ are presented in Figs 4 and 5; the former in percentage, the latter in absolute units.
The $\varepsilon $ presented in Figs 4 and 5 are observed to be randomly distributed around zero and exhibit no observable bias or correlation trend with either engine power ( $OPR$ ) or flight condition (i.e. altitude, MN, $\Delta {T_{ICAO - SA}}$ ). The average error ( $\bar z$ ) on each parameter of interest, denoted by the continuous horizontal line in Figs 4 and 5, was calculated from the complete sample of points considering the three flight conditions in Table 1. Given that $\bar z \approx $ 0.0, the $\varepsilon $ are indeed random errors (i.e. not affected by any bias).
It is worth noting that the data scatter in Fig. 4 for the $\varepsilon _{SFC}^{}$ , $\varepsilon _{{F_n}}^{}$ , $\varepsilon _{{F_{g,pri}}}^{}$ at high power ( $OPR$ ≥ 25.0) is tighter than at low power ( $OPR \approx 10.0$ ). For example, in Fig. 4(c), regardless of the flight condition, the $\varepsilon _{{F_{g,pri}}}^{}$ scatter increases when reducing $OPR$ . The increased scatter is because the ${F_{g,pri}}$ absolute values tend to become small at lower power settings. For example, at 35k/MN = 0.8/ ${\rm{\Delta }}{T_{ICAO - SA}}$ = 0.0 and $OPR$ = 10.6, ${F_{g,pri}}$ = 380.6 lbf (1,693 N); whereas at $OPR$ = 28.1, ${F_{g,pri}}$ = 1,941.2 lbf (8,635 N). In contrast, the absolute $\varepsilon _{{F_{g,pri}}}^{}$ (Fig. 5(c)) at each flight condition remained fairly constant throughout variable power levels.
For simplicity and summarisation, it was decided to express the ${\rm{\Phi }}_{phy}^{}$ as invariant relative to the $OPR$ . The precision indexes ( ${s_p}$ ) for the three flight conditions and an overall index are presented in Table 2, the latter encompassed the scatter from the three flight conditions combined. The ${\rm{\Phi }}_{phy}^{}$ are also presented in Table 2, computed from the overall ${s_p}$ , n, and ${t_{95}}$ , as per Equation (2).
The ${\rm{\Phi }}_{phy}^{}$ is the uncertainty expected for a given performance parameter when only comparing the thermodynamic calculations between the MSC and the RM. About 95% of the $\varepsilon $ are expected to fall within the range imposed by the ${\rm{\Phi }}_{phy}^{}$ , i.e. $ \pm {t_{95}}{s_p}$ . For example, per Table 2, the ${\varepsilon _{{F_n}}}$ are expected to lie within ±0.63% or ±13.6 lbf (±60.5 N). In other words, any ${\varepsilon _{{F_n}}}$ (i.e. single data point) within this range would be considered within the expected uncertainty limit.
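As an illustration of how the figures in Table 2 can be reproduced for a given error sample, a short sketch is given below. It assumes SciPy’s Student’s t quantile function and uses hypothetical net-thrust errors; it follows the ± ${t_{95}}{s_p}$ interpretation described above rather than reproducing Equations (2) and (3) verbatim.

```python
import numpy as np
from scipy import stats

def precision_uncertainty(errors, confidence=0.95):
    """Compute the average error, precision index and precision uncertainty
    for a sample of errors between the MSC and the RM (illustrative sketch)."""
    eps = np.asarray(errors, dtype=float)
    n = eps.size
    z_bar = eps.mean()                                    # average sample error
    s_p = eps.std(ddof=1)                                 # precision index (sample std dev)
    t95 = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1)   # two-sided Student's t value
    return z_bar, s_p, t95, t95 * s_p                     # +/- t95*s_p interval

# Hypothetical net-thrust errors (percent) from a small comparison sample
errors_Fn = [0.21, -0.35, 0.10, -0.08, 0.44, -0.27, 0.15, -0.12]
z_bar, s_p, t95, phi = precision_uncertainty(errors_Fn)
print(f"z_bar = {z_bar:+.3f}%, s_p = {s_p:.3f}%, t95 = {t95:.2f}, Phi = +/-{phi:.2f}%")
```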
One should be careful with those $\varepsilon $ outside the precision uncertainty ( ${\rm{\Phi }}_{phy}^{}$ ). Additional analysis is recommended for such points before drawing conclusions or discarding them. Moreover, it is also helpful to consider the data scatter based on the absolute $\varepsilon $ , as shown in Fig. 5, in which it can be observed that they behave similarly across flight conditions and engine power-settings. Furthermore, both percentage and absolute values could be helpful when establishing a potential outlier, i.e. a data point conspicuously outside of the data variation. For example, in Fig. 4, for both $\varepsilon _{{F_n}}^{}$ and $\varepsilon _{SFC}^{}$ at the Flight 1 altitude, a data point far from the data family variation is observed, e.g. $\varepsilon _{SFC}^{}$ = +1.87% and $\varepsilon _{{F_n}}^{}$ = −1.84%, both at $OPR$ = 9.2. However, in Fig. 5, for the same data point, $\varepsilon _{SFC}^{}$ = +0.027 lbm/h/lbf (+0.0028 kg/h/N) and $\varepsilon _{{F_n}}^{}$ = −8.0 lbf (−35.6 N), and so it is well within the data variation; it was therefore kept in the sample when computing the statistics, i.e. ${t_{95}}$ , ${s_p}$ , and $\bar z$ . Finally, regarding the ${\rm{\Phi }}_{{F_n}}^{}$ and ${\rm{\Phi }}_{SFC}^{}$ presented in percentage in Table 2, both showed the exact same figure (±0.63%). These uncertainties are an exclusive function of $\varepsilon _{{F_n}}^{}$ given that ${\dot m_{fuel}}$ is a constant input provided in $\boldsymbol{x}^*$ .
Some final remarks about the ${{\rm{\Phi }}_{phy}}$ are discussed next. The most fundamental uncertainty against which any two (or more) models should be compared is the ${\rm{\Phi }}_{phy}^{}$ . However, it is recognised that the process for obtaining the ${\rm{\Phi }}_{phy}^{}$ is cumbersome, and for many cycle model comparisons, it might be impractical to obtain, given that precluding ${B_{thermo}}$ and ${{\rm{\Phi }}_{num}}$ may not be possible. As already discussed, only Chapman et al. [Reference Chapman, Lavelle, Litt and Guo26] made an effort to test the errors concerning the thermodynamic calculations, the so-called system level testing without a solver. However, the shortcoming in Ref. [Reference Chapman, Lavelle, Litt and Guo26] is that only one data point was used in their comparison; thus, the uncertainties ( ${\rm{\Phi }}_{phy}^{}$ ) cannot be computed. The uncertainty figures presented in this paper overcome the lack of data in Ref. [Reference Chapman, Lavelle, Litt and Guo26], and thus, can be used as a reference for cycle model comparisons when no better reference is granted. For example, let us take the ${F_{g,pri}}$ error reported in Ref. [Reference Chapman, Lavelle, Litt and Guo26] (at sea-level), $\varepsilon _{{F_{g,pri}}}^{}$ = −0.178%, and let us assume that their ${B_{thermo}}$ is negligible. Indeed, this data point is within the uncertainty reported in Table 2 (±1.02%), although the uncertainty might seem too wide relative to the individual error. However, the uncertainties in this work have been defined based on three flight conditions and the whole range of power-settings presented in Section 4. It is worth noting that at high power ( $OPR$ $\approx$ 28.0), the $\varepsilon _{{F_{g,pri}}}^{}$ are observed to be smaller in Fig. 4(c); at sea-level, the $\varepsilon _{{F_{g,pri}}}^{}$ are about −0.12% and +0.04%, which is in better agreement with the values reported in Ref. [Reference Chapman, Lavelle, Litt and Guo26].
5.2 Physics modeling and numerical lumped uncertainty ( ${\bf{\Phi }}_{\boldsymbol{phy\& num}}^{}$ )
To determine $\varepsilon $ , both the MSC and the RM are allowed to use their numerical methods to find their corresponding $\boldsymbol{x}^*$ , as in a typical GTE model simulation. The $\varepsilon $ are presented in Figs 6 and 7 in percentage and absolute units, respectively. It should be noted that $\varepsilon _{{{\dot m}_{fuel}}}^{}$ and $\varepsilon _{N{H_{corr}}}^{}$ are now included in the error comparisons (Figs 6(d) and (h) and 7(d) and (h)), since the numerical methods on each model need to find their corresponding values of ${\dot m_{fuel}}$ and $N{H_{corr}}$ .
During the ${\rm{\Phi }}_{phy}^{}$ discussion, the errors were centred on zero; now, when adding the effect of the numerical methods, the errors consistently shifted slightly away from zero. In both Figs 6(b), (c) and (d) and 7(b), (c) and (d), it is discernible that $\varepsilon _{{F_n}}^{}$ , $\varepsilon _{{F_{g,pri}}}^{}$ , and $\varepsilon _{{{\dot m}_{fuel}}}^{}$ consistently shifted upwards, i.e. throughout $OPR$ for the three altitude levels. Thus, their average values ( $\bar z$ ) are not centred on zero. These results suggest that a bias ( ${b_{phy\& num}}$ ), albeit small, was inadvertently induced by the numerical method implemented in the MSC. It was decided to account for this bias in the total uncertainty ( $U$ ) determination as an additional term in Equation (8), which was designated as $B_{phy\& num}^{}$ . The magnitudes of $B_{phy\& num}^{}$ were set to the corresponding average values ( $\bar z$ ) and are presented in Table 3. The $B_{phy\& num}^{}$ values are associated with the MSC in this paper and should not be generalised to other model comparisons. Finally, the $B_{phy\& num}^{}$ are constant and non-negative for all parameters; thus, care must be taken when adding these values to the other uncertainty terms. For brevity, the values of ${s_p}$ were omitted, and only the ${\rm{\Phi }}_{phy\& num}^{}$ are presented in Table 3. However, since n and ${t_{95}}$ are the same as in Table 2, the ${s_p}$ can be easily computed.
The ${\rm{\Phi }}_{phy\& num}^{}$ for most of the parameters shown in Table 3 are smaller than their corresponding ${\rm{\Phi }}_{phy}^{}$ shown in Table 2; for example, in the case of ${F_{g,pri}}$ , the ${\rm{\Phi }}_{phy\& num}^{}$ is approximately half of the ${\rm{\Phi }}_{phy}^{}$ (±0.57% vs. ±1.02%). Given that, in general, ${\rm{\Phi }}_{phy\& num}^{} \lt {\rm{\Phi }}_{phy}^{}$ , the earlier statement concerning the non-linear dependency between ${\rm{\Phi }}_{phy}^{}$ and ${\rm{\Phi }}_{num}^{}$ , i.e. ${\rm{\Phi }}_{phy\& num}^{} \ne \;{\rm{\Phi }}_{phy}^{} + \;{\rm{\Phi }}_{num}^{}$ , is supported.
The ${{\rm{\Phi }}_{phy\& num}}$ presented in Table 3 are the best reference hitherto found in the literature with which to compare two or more MSCs against a RM in which each model uses its own numerical method. These values might be used as generic acceptability criteria, provided that no other bias is present. From the models’ comparisons found in the literature (discussed in Section 1), it was found that their reported errors fall within the uncertainty values of the ${{\rm{\Phi }}_{phy\& num}}$ shown in Table 3. For example, assuming the ${B_{thermo}}$ is negligible, the ${\varepsilon _{{F_n}}}$ reported in Ref. [Reference Alexiou and Mathioudakis24] are about +0.15% (at sea-level, static) and −0.40% (at Top-of-Climb). In Ref. [Reference Chapman, Lavelle, Litt and Guo26], the ${\varepsilon _{{F_{g,pri}}}}$ and ${\varepsilon _{{F_{g,sec}}}}$ for the so-called system level testing with solver validation were +0.53% and −0.0370% (at sea-level, static), respectively.
5.3 Thermodynamic bias uncertainty ( $B_\boldsymbol{thermo}^{}$ )
Figure 8 shows the $B_{thermo}^{}$ computed for thermo_package1 and AGCM_GasTbl, taking AGCM_allFuel as reference. These $B_{thermo}^{}$ are presented as functions of $OPR$ , given that their magnitudes and signs were noted to vary with engine power; thus, a fixed figure, as used for the other uncertainties already discussed, was deemed inadequate. The $B_{thermo}^{}$ were computed using a suitable linear fit representing $\varepsilon = f\!\left( {OPR} \right)$ .
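A sketch of this fitting step is shown below; the OPR and error values are hypothetical and serve only to illustrate representing $B_{thermo}^{}$ as a linear function of $OPR$ .

```python
import numpy as np

# Hypothetical errors (percent) between two thermodynamic packages run in the
# same model, tabulated against engine power (OPR)
opr = np.array([10.0, 14.0, 18.0, 22.0, 26.0, 30.0])
eps_Fn = np.array([-0.42, -0.43, -0.45, -0.46, -0.48, -0.50])

# Linear fit representing B_thermo = f(OPR)
coeffs = np.polyfit(opr, eps_Fn, deg=1)                  # [slope, intercept]
print("B_thermo(Fn) at OPR = 12:", round(np.polyval(coeffs, 12.0), 3), "%")
print("B_thermo(Fn) at OPR = 28:", round(np.polyval(coeffs, 28.0), 3), "%")
```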
In Fig. 8(a), it is observed that the $B_{thermo}^{}$ are significantly smaller than those in Fig. 8(b). For example, $B_{{F_n}}^{}$ in Fig. 8(a) is between −0.070% and +0.080%, whereas in Fig. 8(b) it is between −0.42% and −0.50%. The factors that caused the $B_{thermo}^{}$ presented in Fig. 8(a) to be small-to-negligible compared to those in Fig. 8(b) have already been introduced: allFuel is consistent (according to Ref. [37]) with the thermodynamic tables provided by Gordon [Reference Gordon32], and thermo_package1 was built directly from the tables presented in Ref. [Reference Gordon32]. Moreover, the normalisation constants provided in Ref. [Reference Gurrola-Arrieta and Botez9] made it possible to reduce the difference in the thermodynamic properties’ absolute values between thermo_package1 and allFuel.
5.4 Uncertainty analysis study case
Let us consider a scenario in which the $U$ s are already known, for example the $U$ s proposed in this paper, provided that no better reference is available. This scenario would be the case if a new MSC were being compared with the RM, such as an equivalent NPSS model. Moreover, let us assume that the same DP assumptions, turbomachinery component maps and scaling factors are used in the MSC and the RM; thus, ${B_{conf}}$ and ${B_{input}}$ = 0.0. Let us further assume that both the MSC and the RM are executed for the matrix of test points defined as shown in Table 4, considering $N{L_{corr}}$ = 52.0−100.0% with $\Delta N{L_{corr}}$ = 2.0%. It is worth noting that the conditions in Table 4 do not match those used to derive the uncertainties in this paper (i.e. Table 1), nor do the power setting points, except for the nodes at $N{L_{corr}}$ = 60, 70, 80, 90 and 100.0%. Additionally, let us assume the MSC is used by a party that has defined thermo_package1 as their baseline thermodynamic package, and the RM is used by a different party that considers the so-called GasTbl. Finally, each model is expected to use its own numerical method to find a solution that solves the engine’s mass and energy imbalances.
The results are then compared vis-à-vis, by computing the errors ( $\varepsilon$ ) using Equation (4). The errors obtained from these comparisons and their respective uncertainty limits are presented in Fig. 9. The solid line in Fig. 9 represents the sum of the $B_{thermo}^{}$ and $B_{phy\& num}^{}$ . For this exercise, the $B_{thermo}^{}$ was calculated from the methodology proposed in Section 4; however, a close estimation can be computed by subtracting Fig. 8(b) from Fig. 8(a). The dashed lines in Fig. 9 represent the interval defined by ${\rm{\Phi }}_{phy\& num}^{}$ . It is worth noting that the ${\rm{\Phi }}_{phy\& num}^{}$ are centred on the bias line (solid). The errors shown in Fig. 9 follow the bias slope, and overall, they are within the uncertainty defined by the random errors.
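In other words, for a given performance parameter the errors plotted in Fig. 9 are expected to satisfy, approximately, the band below (a restatement of how the solid and dashed lines are constructed, not an additional equation from the original text):

$\left| {\varepsilon \!\left( {OPR} \right) - \left[ {B_{thermo}^{}\!\left( {OPR} \right) + B_{phy\& num}^{}} \right]} \right| \le {\rm{\Phi }}_{phy\& num}^{}$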
In a cycle model comparison, knowing the magnitudes of the expected $B$ and ${\rm{\Phi }}$ in advance will make the decision process (accept or reject) easier. On the contrary, by ignoring them, the errors’ assessment could easily become a dubious exercise, especially for those parameters following a discernable trend, such as $SFC$ , ${\dot m_{fuel}}$ , or ${T_{0,046}}$ in Fig. 9. In other words, the data points would look conspicuously biased without the aid of the $B$ and ${\rm{\Phi }}$ lines constructed from the methodology and figures presented in this work. Indeed, while the data points are biased, which is known beforehand, the figures presented in this paper helped to objectively define the expected uncertainty limit.
6.0 Conclusions and recommendations
In this work, a thorough literature review was performed to establish the lack of objective criteria with which to define precision acceptability when comparing aerothermodynamic cycle models. The precision of a cycle model is affected by biases and random errors. The biases account for differences in model inputs, engine configuration and thermodynamic properties calculations. Random errors account for differences in physics modeling and numerical methods. The proposed methodology was applied to derive the uncertainty figures for several performance parameters of interest from which the following conclusion/recommendations were drawn:
- A reference cycle model is the best means to mitigate and manage the uncertainties. However, the researcher must be aware of the types of uncertainties that may be encountered in practice, and devise ways to preclude/reduce them.
- It is highly recommended to take a sample of data points for comparison, making sure different flight conditions and power levels are surveyed.
- If the errors in the sample data points are within the maximum expected error (i.e. uncertainty), then the errors should be deemed acceptable. For ${F_n}$ , $SFC$ , and ${T_{0,046}}$ (of paramount interest in aero GTEs), the maximum expected errors are: ±0.57%, ±0.47%, and ±3.2 R (±1.8 K). These maximum errors account for the random variation encountered in the thermodynamic modeling and the numerical method used to solve the engine imbalances.
- Given the lack of a comprehensive methodology to define precision uncertainties in cycle model comparisons, it is recommended to use the proposed methodology to define the specific uncertainties for other model comparisons.
- In case the proposed methodology cannot be implemented, it is recommended to use the uncertainty figures provided in this paper as generic acceptability criteria.
Acknowledgements
This research was performed at the Laboratory of Applied Research in Active Controls, Avionics and AeroServoElasticity (LARCASE). This work was funded by the NSERC through the Canada Research Chair in Aircraft Modeling and Simulation Technologies. For more information related to this research, please visit the LARCASE website at https://www.etsmtl.ca/unites-de-recherche/larcase/accueil.