THIS IS A MODIFIED VERSION OF AN ARTICLE BY Jack Kane WHICH APPEARED IN THE NOVEMBER 2008 ISSUE (NUMBER 034) of RACE ENGINE TECHNOLOGY MAGAZINE
This article first presents the basics of turbocharger operation, and then explores some of the current thinking in turbo-supercharger technology as applied to competition engines.
TURBOCHARGER BASICS
Since the power a piston engine can produce is directly dependent upon the mass of air it can ingest, the purpose of forced induction (turbo-supercharging and supercharging) is to increase the inlet manifold pressure and density so as to make the cylinders ingest a greater mass of air during each intake stroke. A supercharger is an air compressor driven directly by the engine crankshaft, and as such, consumes some of the power produced by the combustion of fuel, thereby increasing BSFC and engine wear for a given amount of produced power.
A turbocharger consists of a single-stage radial-flow (“centrifugal”) compressor (air pump, shown on the left side of Figure 1) which, instead of being driven directly by the crankshaft, is driven by a single-stage radial-flow exhaust turbine (shown on the right side of Figure 1). The turbine extracts wasted kinetic and thermal energy from the high-temperature exhaust gas flow and produces the power to drive the compressor, at the cost of a slight increase in pumping losses.
Figure 1: Borg-Warner Turbocharger with Variable Geometry Turbine
Turbochargers are becoming ever more widely used in racing, as motorsport increasingly embraces energy efficiency. It is expected that turbos will soon (2009?) reappear in F1 and the IRL, as well as in other venues. There is a considerable amount of development work currently being done on turbochargers, motivated primarily by the following road-vehicle requirements:
- the ability to operate reliably and continuously with higher exhaust gas temperatures (EGT), and
- the ability to operate with higher compressor inlet temperatures and flowrates.
The demand for operability with higher EGTs comes from the increasing demand for better fuel economy in spark-ignition (SI) engines, which requires that the engines run much closer to stoichiometric mixtures rather than the very rich mixtures used in the past to reduce EGTs.
These days, compression-ignition (CI) road car engines are invariably turbocharged, and this relatively fast-developing technology is operating at ever-rising BMEPs, which means higher combustion temperatures and the resulting increases in NOx emissions. The demand for higher inlet temperatures and flowrates comes from the high percentages (30-40%) of exhaust gas recirculation (EGR) required to control NOx emissions from CI engines.
Although these motivators come from the production-vehicle end of the spectrum, the resulting technology is or soon will be available for application in motorsport. An SI race engine won’t need to operate at stoichiometric mixtures, but the availability of turbines which can live with 1925°F (1050°C) EGTs will provide new opportunities for greater output.
At present, competition CI engines are not required to reduce NOx emissions (but that will surely develop as political correctness further invades motorsports), so the increased compressor efficiencies, flowrates and map widths can be used to provide greater intake density at the mandated manifold absolute pressure (MAP) limits.
The increased compressor efficiencies, flowrates, and map widths developed for CI technology will certainly benefit competition SI engines in the same way.
COMPRESSORS
The performance of a radial-flow compressor is defined by a chart known as the ‘map’, as illustrated in Figure 2. The map defines, based on inlet air conditions, the usable operating characteristics of a compressor in terms of airflow (pounds-mass per minute, lbm/min) and pressure ratio (absolute pressure at the compressor outlet divided by absolute pressure at the compressor inlet). The compressor RPM lines show, for the stated compressor speed (in thousands of RPM), the pressure ratio delivered as a function of airflow. The compressor efficiency lines show the percentage of adiabatic efficiency (AE) the compressor achieves at various combinations of pressure ratio and airflow.
Figure 2: Garrett GT3582R Compressor Map
The odd-shaped line up the left side of the map is the surge line. It defines, for each pressure ratio, the minimum airflow at which the compressor can operate. Airflows to the left of the surge line cause the air to separate from the blades and experience a “stall” phenomenon similar to the stall of an aircraft wing. This is an area of instability in which the airflow moves in a chaotic manner, causing snapping and popping, and potential blade damage. Surge can occur with a downstream-throttled installation when the throttle is suddenly closed if there is no blowoff valve or other device to vent airflow.
The dotted line up the center of the map is the peak efficiency operating line: the maximum available efficiency for each combination of airflow and pressure ratio.
Note from the map that, for a given compressor RPM, the available pressure ratio reaches a point where it begins to drop quickly as airflow increases. This effect is less severe at lower rotor RPM, but with higher rotor RPM (therefore higher available pressure ratio), the pressure ratio drops dramatically with a slight increase in airflow. This phenomenon is known as “choke”. The choke condition is defined differently by different manufacturers: sometimes by a specific percentage reduction in pressure ratio, sometimes by reaching a specific efficiency (Garrett uses 58%).
Compressor efficiency is an important concept to understand with regard to forced induction. At standard atmospheric conditions (29.92 “hg, 59 °F) one pound-mass of air has a known density (0.0765 lbm per cubic foot) and occupies 13.07 cubic feet of volume. Compressing that mass of air into a smaller volume increases the pressure, the temperature and the density. If a compressor were 100% efficient, the temperature of the gas exiting the compressor could be calculated from the isentropic-compression relation T2 = T1 x (P2 / P1)^((γ−1)/γ), with the corresponding density given by the ideal gas law PV = mRT (so-called “adiabatic” compression: no gain or loss of heat energy). However, real-world compressors are less than 100% efficient, so the air exiting the compressor is heated more than it would be with a 100% efficient compressor, and hence the resulting density is less than would be expected.
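As a concrete illustration of those relations, here is a minimal Python sketch (assuming γ = 1.4 for air and standard-day inlet conditions; the 75% figure is simply an example efficiency, not a value from any particular compressor) showing how adiabatic efficiency raises the outlet temperature above the ideal value:

# Minimal sketch: ideal (isentropic) compressor outlet temperature versus the
# outlet temperature of a real compressor with a given adiabatic efficiency.
# Standard-day inlet conditions and gamma = 1.4 for air are assumed.

GAMMA = 1.4          # ratio of specific heats for air
T_STD_R = 518.67     # standard-day inlet temperature, degrees Rankine (59 F)

def ideal_outlet_temp_R(inlet_temp_R, pressure_ratio):
    """Isentropic (100% efficient) outlet temperature, degrees Rankine."""
    return inlet_temp_R * pressure_ratio ** ((GAMMA - 1.0) / GAMMA)

def actual_outlet_temp_R(inlet_temp_R, pressure_ratio, adiabatic_eff):
    """Real outlet temperature: the ideal temperature RISE divided by efficiency."""
    ideal_rise = ideal_outlet_temp_R(inlet_temp_R, pressure_ratio) - inlet_temp_R
    return inlet_temp_R + ideal_rise / adiabatic_eff

if __name__ == "__main__":
    pr = 2.9
    for eff in (1.00, 0.75):
        t_out_R = actual_outlet_temp_R(T_STD_R, pr, eff)
        print(f"PR {pr}, AE {eff:.0%}: outlet temp ~{t_out_R - 459.67:.0f} F")
    # At 100% AE this lands near the ~241 F quoted in the example below;
    # at 75% AE the same pressure ratio heats the charge well past 300 F.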
Here is an example. On a ‘standard day’ (29.92 “hg barometer, 59 °F), a compressor having ambient conditions at the inlet, operating at an adiabatic efficiency of 100% and a pressure ratio of 2.9 would produce a compressor outlet pressure of 86.77 “hg absolute (2.9 x 29.92 = 86.77), or 2.94 bar absolute (coincidentally, the 2008 manifold pressure limit for Le Mans LM-P1 diesels). The compressor outlet temperature would be 241 °F and the density ratio (density relative to the inlet air) would be 2.15, for an air density of 0.164 lbm/ft³. (2.15 x 0.0765 = 0.164).
In the real world, on a warm day with a low barometric pressure (say 85 °F and 29.10 “hg, ambient density = 0.071 lbm/ft³) with a turbo installation having a pressure drop of 0.5 psi in the inlet tract and a compressor operating at 75% adiabatic efficiency, the pressure ratio required to achieve that same outlet pressure (86.77 “hg) would be 3.087. The compressor discharge temperature and density ratio would be 358 °F and 2.057 respectively, for a density of 0.146 lbm/ft³, only 89% of the density achieved with a (dream-world) 100% efficient compressor.
If the engine designer decides that an inlet manifold temperature of 358 °F is not terribly desirable, he might decide to use a charge air cooler (correctly known as an “aftercooler” but more commonly referred to as an “intercooler”) having, for example, an effectiveness of 85% and a pressure drop of 3 “hg. That would result in a manifold pressure, temperature and density of 83.7 “hg, 126 °F, and 0.196 lbm/ft³ respectively, for a system density ratio of 2.77 and an inlet air temperature that would dramatically improve the survival prospects of an SI engine. That is the real world of turbocharging / supercharging.
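For readers who want to follow the arithmetic, the sketch below works through the same chain of calculations (inlet-tract pressure drop, 75% efficient compressor, then an aftercooler rejecting heat to ambient air). It assumes R = 53.35 ft-lbf/(lbm-R) and γ = 1.4 for air; minor differences from the figures quoted above are due to rounding and to the exact gas properties and reference conditions assumed.

# Sketch of the 'real world' example above: warm day, inlet-tract pressure drop,
# 75% efficient compressor, then an aftercooler. Gas properties for air are
# assumed (R = 53.35 ft-lbf/(lbm-R), gamma = 1.4).

R_AIR = 53.35         # ft-lbf / (lbm-R)
GAMMA = 1.4
PSF_PER_INHG = 70.73  # pounds per square foot per inch of mercury
PSF_PER_PSI = 144.0

def density_lbm_ft3(p_inhg, t_F):
    """Air density from the ideal gas law, given pressure in "hg and temperature in F."""
    return (p_inhg * PSF_PER_INHG) / (R_AIR * (t_F + 459.67))

# Ambient: 85 F, 29.10 "hg
amb_p, amb_t = 29.10, 85.0
print(f"ambient density ~{density_lbm_ft3(amb_p, amb_t):.3f} lbm/ft3")

# Compressor inlet: 0.5 psi pressure drop in the inlet tract
inlet_p = amb_p - 0.5 * PSF_PER_PSI / PSF_PER_INHG
target_outlet_p = 86.77            # "hg, same outlet pressure as the ideal case
pr = target_outlet_p / inlet_p
print(f"required pressure ratio ~{pr:.3f}")

# Compressor discharge temperature at 75% adiabatic efficiency
ae = 0.75
t_in_R = amb_t + 459.67
ideal_rise = t_in_R * (pr ** ((GAMMA - 1.0) / GAMMA) - 1.0)
t_out_F = (t_in_R + ideal_rise / ae) - 459.67
print(f"compressor discharge ~{t_out_F:.0f} F, "
      f"density ~{density_lbm_ft3(target_outlet_p, t_out_F):.3f} lbm/ft3")

# Aftercooler: 85% effectiveness (cooling toward ambient air), 3 "hg pressure drop
eff_cooler = 0.85
t_manifold_F = t_out_F - eff_cooler * (t_out_F - amb_t)
p_manifold = target_outlet_p - 3.0
rho_manifold = density_lbm_ft3(p_manifold, t_manifold_F)
print(f"manifold: ~{p_manifold:.1f} \"hg, ~{t_manifold_F:.0f} F, "
      f"~{rho_manifold:.3f} lbm/ft3 "
      f"(system density ratio ~{rho_manifold / density_lbm_ft3(amb_p, amb_t):.2f})")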
(Please note that it is beyond the scope of this article to explain the details of these example calculations. Be assured they are relatively straightforward, and are presented in several excellent textbooks, including EPI Reference Library volumes 5:12, 5:23, 5:24, 5:25 and 5:26.)
The compressor side of a turbocharger faces significant thermal, stress and fatigue challenges.
In most applications, the compressor is ingesting air at slightly more than ambient temperature, but the temperature rise across the compressor can be substantial (as explained in Turbocharger Basics at the beginning of this article).
With ambient inlet air and a 4:1 pressure ratio at 80% adiabatic efficiency (AE), the compressor discharge temperature can exceed 400°F (205°C). However, the low temperature of the inlet air plus the fact that most of the temperature rise occurs in the diffuser, where velocity is exchanged for pressure, typically keeps the operating temperature of the compressor wheel well below the compressor discharge temperature. A compressor wheel, often operating at over 100,000 RPM, is subjected to high centrifugal loads. High pressure ratios apply bending loads to the blades. Cycling between pressure ratios of 1.0 (no boost) to 4.0 (max boost) and back applies significant fatigue loads to the wheel. Surviving these cyclic loads at elevated temperatures can be a problem.
Currently, most production compressor wheels are aluminium investment castings, and a very popular material is the permanent-mold alloy 354-T61. The room-temperature properties of this alloy rival those of some of the best forged piston materials, but the properties of 354-T61 cast aluminium at 400°F (205°C) substantially exceed those of any of the well-known wrought alloys (see Table 3 in Advanced Metals). Wheels cast from the very-high-strength alloy 201-T7, using a permanent mold process, have also been successful in compressor applications, but this alloy is more difficult to pour successfully than 354.
The compressor wheels in most performance applications, including the Audi and Peugeot Le Mans turbodiesels, are five-axis CNC machined from forged 2000-series aluminium billet. According to Turbonetics, that procedure provides wheels with optimal properties and accuracy, frees them from the costs involved in permanent mold tooling, and gives them a large measure of flexibility to experiment and modify existing designs.
However, much of the current direction in compressor improvement is being driven by the pressure ratio and flow requirements of CI engines for road vehicles, operating at high boost levels and high levels of EGR to reduce emissions of NOx. Wide-map compressors with pressure ratios of 4:1 and peak adiabatic efficiency of 80% are on the horizon.
With the 30-40% EGR systems being exploited in CI applications, there are emissions advantages gained by taking the recirculated gas from downstream of the emission-control system, and feeding it into the turbo inlet to be mixed with the fresh charge air. However, that causes the compressor inlet temperature to be well above ambient, and introduces contaminants that include acid components and particulates.
The increase in compressor inlet temperature from EGR, combined with the corrosive and abrasive effects of the exhaust gas, poses an increased challenge to the tensile and fatigue strength of even the best aluminium alloys. That has driven the development of titanium compressor wheels, both CNC-machined from billet and investment-cast. The titanium material provides stiffer blades, higher strength at elevated temperatures, and greater fatigue resistance.
Those features can be especially useful in competition applications which use two-stage turbocharging, achieving pressure ratios in the 9:1 range with no intercooler between stages. (On a 29.92 “hg day at 75°F, a 9:1 pressure ratio at 70% adiabatic efficiency would produce nearly 260 “hg MAP and an inlet air temperature over 725°F.)
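As a rough check of that parenthetical figure, the short sketch below treats the two stages as a single equivalent 9:1 compression at 70% overall adiabatic efficiency, with γ = 1.4 and no allowance for inlet or duct losses:

# Rough check of the two-stage figure quoted above: a 9:1 overall pressure
# ratio at 70% overall adiabatic efficiency on a 29.92 "hg, 75 F day,
# treated here as a single equivalent compression with no duct losses.

GAMMA = 1.4

pr_overall = 9.0
ae = 0.70
p_ambient_inhg = 29.92
t_in_R = 75.0 + 459.67

ideal_rise = t_in_R * (pr_overall ** ((GAMMA - 1.0) / GAMMA) - 1.0)
t_out_F = (t_in_R + ideal_rise / ae) - 459.67
map_inhg = pr_overall * p_ambient_inhg   # before any inlet or duct losses

print(f"discharge temperature ~{t_out_F:.0f} F")  # roughly 740 F (>725 F quoted)
print(f"MAP ~{map_inhg:.0f} \"hg before losses")  # ~269 "hg; inlet losses pull it toward the ~260 quoted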
Improving the aerodynamics of a radial flow compressor involves intense modelling and simulation of the combined effects of the compressor wheel, the diffuser and the housing. One area of compressor development is the ongoing effort to provide ‘wider’ maps (at a given pressure ratio, a larger spread of airflow values between the surge line and the choke line).
The ‘widening’ of a map is illustrated in Figure 5. The blue lines in that map are the same as those in the map discussed above. The red sketched-in line shows an example of how the surge line can be moved to the left. The green sketched-in lines show how the 65, 68 and 70% efficiency lines have been extended into the new operating area. Note that at a pressure ratio of 2.75, the original operating range was from 36 to 60 lbm/minute. With the ‘widened’ map, the range at 2.75 PR now extends from 30 to 60 lbm/minute, a 25% wider range.
Figure 5: Widened Garrett GT3582R Compressor Map
One contemporary method which has been successful in widening the map is the ‘ported shroud’ feature. It moves the surge line to the left by allowing a small amount of airflow to bleed off the low-velocity portion of the wheel and recirculate, to ward off blade stall. This feature is illustrated schematically in Figure 6.
A method which I have used in the past (before ported shrouds) is to install a sonic nozzle and an on-off valve in the compressor outlet plumbing. The sonic nozzle is sized to choke at a small percentage of the usable airflow (approximately 20%), and the on-off valve is controlled to open below an appropriate combination of RPM and MAP.
A contemporary example of this technology is shown in the compressor section of the turbocharger pictured in Figure 1 at the beginning of this article. In that implementation, the nozzle is controlled by an integrated servo-valve on the compressor housing, and the bypassed flow is recirculated to the inlet.
A single basic compressor wheel or turbine wheel can have the periphery of its blades machined (‘trimmed’) to provide a variety of different gas flow capabilities and to mate with differently-configured housings. The term trim expresses the area ratio between the inlet and the outlet of a radial flow wheel. It is calculated by the equation:

Trim = (minor diameter² / major diameter²) x 100

where the minor diameter is the inducer (inlet) of a compressor wheel or the exducer (outlet) of a turbine wheel.
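As an illustration, a trivial Python version of that calculation is shown below; the wheel dimensions are hypothetical, chosen only to demonstrate the arithmetic:

# Illustration of the trim calculation for a radial-flow wheel.
# The wheel dimensions below are hypothetical, for arithmetic only.

def trim(minor_dia_mm: float, major_dia_mm: float) -> float:
    """Trim = (minor diameter^2 / major diameter^2) x 100."""
    return (minor_dia_mm ** 2 / major_dia_mm ** 2) * 100.0

# Example: a compressor wheel with a 61.4 mm inducer and an 82.0 mm exducer
print(f"compressor trim ~{trim(61.4, 82.0):.0f}")   # ~56 trim

# Example: a turbine wheel with a 68.0 mm inducer and a 62.0 mm exducer
print(f"turbine trim    ~{trim(62.0, 68.0):.0f}")   # ~83 trim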
TURBINES
A turbocharger turbine lives in a terribly hostile environment. The turbine is driven by exhaust gasses that can exceed 1875°F (1025°C) and which are very corrosive. Exhaust valves experience those same corrosive, high-temperature gasses, but exhaust valves do not approach the peak temperature of the exhaust gas because they reject a large amount of heat into the coolant through the valve seats and stems. An exhaust valve in a competition engine spends at least half of the time on the valve seat (production engines more like two-thirds of the time). Valves continuously transfer heat through the stem to the guide, and when they are seated, they rapidly transfer heat into the cylinder head through the valve seats. Those cooling paths keep exhaust valve temperatures well below EGTs.
The turbine wheel, however, lives in a continuous, high-velocity jet of those gasses. Although there is expansion across the turbine nozzle, therefore some cooling of the gasses, the temperature at the tips of the turbine rotor can approach exhaust gas temperatures. Further, the rotor system on many turbochargers operates well in excess of 100,000 RPM, and some approach 150,000 RPM. That imposes huge tensile loads from the centrifugal forces, as well as bending and vibratory loads. That environment requires the use of nickel-based superalloys for the turbine wheels. Those alloys can retain high strength values at these high temperatures.
The turbines in most current production turbochargers are suitable for continuous operation at an exhaust gas inlet temperature of 1750°F (950°C). Production turbines are typically investment-cast from Inconel 713C or 713LC (see Table 1). The turbine wheel castings are treated with hot isostatic pressing (HIP) to improve their structure, and then are heat-treated to the required strength level.
Table 1: Chemistries of Certain Superalloys
Honeywell Turbo Technology (Garrett) supplies the turbochargers (TR30R) used on the stunning 5.5 liter Le Mans-winning Audi CI V-12s and those of the pole-winning Peugeot CI V-12s. Those turbos have fixed turbine nozzle geometry with wastegates. The turbine wheels in those turbos must operate continuously with EGTs up to 1925°F (1050°C). Honeywell uses a superalloy material known as Mar-M-247 (developed by Martin Marietta in the seventies for gas turbine engine blades, discs and burner cans). This material is a nickel-based alloy containing significant amounts of chromium, aluminium and molybdenum.
In order to achieve optimal properties in components cast from Mar-M-247, NASA developed the Grainex process. This process uses traditional investment casting techniques, with the additional process of mold agitation during freezing to produce homogeneous grain inoculation, resulting in outstanding uniformity of grain structure and material properties. The part is HIP’d at 2165°F (1185°C) and 170 bar for 4 hours to minimize porosity, then solution treated for two hours at the same temperature, followed by 20 hours of aging at 1600°F (870°C). That produces a room temperature UTS of 150 ksi, which increases with temperature up to 1400°F (760°C).
Variable geometry turbines (VGT) provide a substantial improvement in turbine efficiency and enable greater flexibility of operation. A large turbo with VGT can operate as if it were a smaller turbo at lower engine speeds. In many cases, the VGT can replace a wastegate.
VGT turbochargers have been around for several years, but their applications have been somewhat limited by the EGTs they can survive. At present, VGT implementations are limited to continuous EGTs of 1750°F (950°C), with an occasional spike to 1800°F (980°C) allowable, as in the Porsche 997 twin-turbo system supplied by Borg-Warner. However, current development efforts are focused on producing VGT systems which will operate successfully at the temperatures for which the newer turbines are designed (1925°F, 1050°C) .
VGT is implemented in different ways. One system uses a series of movable vanes around the periphery of the turbine wheel, as shown in Figure 1 at the beginning of this article.
Each vane pivots on an axis parallel with the rotor axis. When the exhaust gas supply is low, the vanes pivot to a position which is a few degrees from perpendicular to the turbine wheel inducer vanes, as shown in Figure 3. That gives the incoming gasses a strong tangential component to drive the turbine more effectively.
Figure 3: Low Exhaust Flow VGT Position
The angle of the vanes can be varied continuously, and at high exhaust flow they are aligned nearly radially with the outer contour of the turbine blades, as shown in Figure 4, giving the incoming gasses a strong radial component to drive the turbine while offering a relatively large flow area to reduce backpressure.
Figure 4: High Exhaust Flow VGT Position
Although many such systems currently use non-cambered vanes (the chord line is straight), future developments will include cambered vanes to increase VGT efficiencies at the top and bottom ends of the operating range. These VGT systems can be electrically operated, providing even greater flexibility to an ECU-managed engine system.
Another VGT system uses vanes which are attached to a ring surrounding the turbine wheel. These vanes have a fixed angular orientation. The ring and vanes move parallel to the rotor axis: the vanes orient the gas flow toward the turbine wheel blades, while the ring opens and closes the net nozzle area, which changes the gas velocity and therefore the turbine performance.
BEARING SYSTEMS
The bearing system which supports the rotor assembly (turbine, shaft and compressor) resides in the turbocharger center housing. That bearing system must reliably position and support the rotor from zero up to speeds that can approach 150,000 RPM. In addition to the rotating loads on the bearings, there can be substantial thrust loads in either direction, depending on operating conditions. The bearing system also has an influence on critical rotor speeds, vibration and shaft instability.
The temperature of the turbo environment also presents a challenge to the bearing system. If the engine is shut down immediately following a run at high power output, the turbine and turbine housing temperatures are toward their upper limits, and suddenly all gas flow through the turbine stops and all oil flow through the center housing stops. All that heat must go somewhere, and an easy path is into the center housing. The resulting temperatures can easily cook the oil to a solid with potentially disastrous results on the next run.
The bearing system has evolved from the early days, when most were hydrodynamic sleeve and face bearings which required uninterrupted oil supply to avoid damage from loss of fluid film and from overheating.
Today’s turbos feature dual ball bearing systems with very high bi-directional thrust capacities and reduced frictional drag, allowing faster spool-up times. To combat flat-spotting of bearings during heat-soak, an upgrade in bearing material from 52100 to M2 tool steel is also available.
The centrifugal force at very high speeds can cause steel balls to lift off the inner race, and to skid on the inner race during acceleration. To combat that issue, some manufacturers have switched over to bearings having ceramic balls, and others are moving in that direction. The ceramic ball bearings are also reported to be more resistant to damage from high temperatures.
Garrett uses an integrated dual ball bearing cartridge (Figure 7) which contains an angular-contact ball bearing at each end, providing a huge bi-directional thrust capacity, and which adds bending stiffness to the shaft system, helping to prevent critical speed issues.
Borg-Warner is developing a two-ball-bearing system which is expected to be fully ceramic.
Turbonetics now provides a ceramic ball bearing at the compressor end of all its turbos.
Some production turbos incorporate liquid-cooling provisions in the center housing to combat lubrication and heating issues, but several turbo suppliers told me that racers don’t like the cooled housings because of the added complexity in the racecar. However, liquid-cooled center-housings are very appropriate for turbocharged, liquid-cooled aircraft engines, and have been used successfully in several instances.
There have been applications where compound turbo systems have been used, in which one turbo feeds pressurized air to the engine, and a second turbine downstream extracts more energy from the exhaust system, but instead of running a compressor, it is geared directly to the engine output shaft. That has worked well in aircraft applications, but it presents several complex problems in an automotive application.
Energy Efficiency and Greening Rant
In a previous version of this page, I presented an insight into some of the absurd “science” that pervades the religion of anthropogenic global warming and the emerging green nude eel movement. That discussion has become so popular that I have moved it to ITS OWN DEDICATED PAGE.