Topping and Bottoming Cycles

Steam Rankine cycles can be combined with topping and/or bottoming cycles to form binary thermodynamic cycles. These topping and bottoming cycles use working fluids other than water. Topping cycles change the basic steam Rankine cycle into a binary cycle that better resembles the Carnot cycle and improves efficiency. For conventional steam cycles, state-of-the-art materials allow peak working fluid temperatures higher than the critical temperature of water. Much of the energy delivered into the cycle goes into superheating the steam, which is not a constant-temperature process. Therefore, a significant portion of the heat supply to the steam cycle occurs substantially below the peak cycle temperature. Adding a cycle that uses a working fluid with a boiling point higher than water allows more of the heat supply to the thermodynamic cycle to be near the peak cycle temperature, thus improving efficiency. Heat rejected from the topping cycle is channeled into the lower-temperature steam cycle.
Thermal energy not converted to work by the binary cycle is rejected to the ambient-temperature reservoir.
Metallic substances are the working fluids for topping cycles. For example, mercury was used as the topping cycle fluid in the 40-MW plant at Schiller, New Hampshire. This plant operated for a period of time but has since been dismantled. Significant research and testing have also been performed over the years toward the eventual goal of using other substances, such as potassium or cesium, as topping cycle fluids.
Steam power plants in a cold, dry environment cannot take full advantage of the low heat rejection temperature available. The very low pressure to which the steam would have to be expanded to take advantage of the low heat sink temperature would increase the size of the low-pressure (LP) turbine to such an extent that it is impractical or at least inefficient. A bottoming cycle that uses a working fluid with a vapor pressure higher than that of water at ambient temperatures (such as ammonia or an organic fluid) would enable smaller LP turbines to function efficiently. Hence, a steam cycle combined with a bottoming cycle may yield better performance and be more cost-effective than a stand-alone Rankine steam cycle.
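The benefit of a binary arrangement can be made concrete with a short calculation, given below as a minimal sketch: if the heat rejected by a topping cycle becomes the heat supplied to the steam cycle, the combined efficiency exceeds that of either cycle alone. The efficiency values used are illustrative assumptions only.

```python
# Minimal sketch of binary (topping + steam) cycle efficiency.
# If a topping cycle with thermal efficiency eta_top rejects its heat into a steam
# cycle with efficiency eta_steam, the combined efficiency is
#   eta_binary = eta_top + (1 - eta_top) * eta_steam
# The efficiency values below are illustrative assumptions, not plant data.

def binary_cycle_efficiency(eta_top, eta_steam):
    """Combined efficiency when the topping cycle's rejected heat feeds the steam cycle."""
    return eta_top + (1.0 - eta_top) * eta_steam

eta_steam = 0.40   # assumed stand-alone steam Rankine cycle efficiency
eta_top = 0.25     # assumed topping cycle efficiency

print(f"Steam cycle alone:  {eta_steam:.0%}")
print(f"With topping cycle: {binary_cycle_efficiency(eta_top, eta_steam):.0%}")  # about 55%
```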

Steam Boilers
A boiler, also referred to as a steam generator, is a major component in the plant cycle. It is a closed vessel that efficiently uses heat produced from the combustion of fuel to convert water to steam. Efficiency is the most important characteristic of a boiler since it has a direct bearing on electricity production.
Boilers are classified as either drum-type or once-through. Major components of boilers include an economizer, superheaters, reheaters, and spray attemperators.
Drum-Type Boilers
Drum-type boilers (Figure 6) depend on constant recirculation of water through some of the components of the steam/water circuit to generate steam and keep the components from overheating. Drum-type boilers circulate water by either natural or controlled circulation.
Natural Circulation.
Natural circulation boilers use the density differential between water in the downcomers and steam in the waterwall tubes for circulation.
Controlled Circulation.
Controlled circulation boilers utilize boiler-water circulating pumps to circulate water through the steam/water circuit.
Once-Through Boilers
Once-through boilers, shown in Figure 7, convert water to steam in one pass through the system.
Major Boiler Components

Economizer.
The economizer is the section of the boiler tubes where feedwater is first introduced into the boiler and where flue gas is used to raise the temperature of the water.
Steam Drum (Drum Units Only).
The steam drum separates steam from the steam/water mixture and keeps the separated steam dry.

Superheaters.
Superheaters are bundles of boiler tubing located in the flow path of the hot gases that are created by the combustion of fuel in the boiler furnace. Heat is transferred from the combustion gases to the steam in the superheater tubes.
Superheaters are classified as primary and secondary. Steam passes first through the primary superheater
(located in a relatively cool section of the boiler) after leaving the steam drum. There the steam receives a fraction of its final superheat and then passes through the secondary superheater for the remainder.







Energy Conversion

This section provides an understanding, at an overview level, of the steam power cycle. References are provided for the next level of study, if required. There are noteworthy omissions in the section: site selection, fuel handling, civil engineering-related activities (such as foundations), controls, and nuclear power.
Thermal power cycles take many forms, but the majority are fossil steam, nuclear, simple cycle gas turbine, and combined cycle. Of those listed, conventional coal-fired steam power is predominant. This is especially true in developing countries that either have indigenous coal or can import coal inexpensively; these countries make up the largest new product market. A typical unit is shown in Figure 1.
The Rankine cycle is overwhelmingly the preferred cycle in the case of steam power and is discussed first.
Topping and bottoming cycles, with one exception, are rare and mentioned only for completeness.
The exception is the combined cycle, where the steam turbine cycle is a bottoming cycle. In the developed countries, there has been a move to the combined cycle because of cheap natural gas or oil. Combined cycles still use a reasonably standard steam power cycle except for the boiler. The complexity of a combined cycle is justified by the high thermal efficiency, which will soon approach 60%.
The core components of a steam power plant are the boiler, turbine, condenser, feedwater pump, and generator. These are covered in successive subsections.
The final subsection is an example of the layout and contents of a modern steam power plant.
As a frame of reference for the reader, the following efficiencies and effectiveness values are typical of modern fossil fuel steam power plants. The specific example chosen had steam conditions of 2400 psia, 1000°F main steam temperature, and 1000°F reheat steam temperature: boiler thermal efficiency, 92%; turbine/generator thermal efficiency, 44%; turbine isentropic efficiency, 89%; generator efficiency, 98.5%; boiler feedwater pump and turbine combined isentropic efficiency, 82%; condenser effectiveness, 85%; overall plant efficiency, 34% (Carnot limit, 64%).
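As a rough check on the figures above, the quoted Carnot limit can be reproduced from the peak steam temperature and an assumed heat-rejection temperature. The sink temperature in the sketch below (roughly that of cooling water) is an assumption for illustration.

```python
# Carnot limit for the 2400 psia / 1000°F example plant quoted above.
# The heat-sink temperature (about 60°F cooling water) is an assumption for illustration.

T_source_R = 1000.0 + 460.0   # peak cycle temperature, degrees Rankine
T_sink_R = 60.0 + 460.0       # assumed heat-rejection temperature, degrees Rankine

carnot_efficiency = 1.0 - T_sink_R / T_source_R
print(f"Carnot efficiency: {carnot_efficiency:.0%}")   # about 64%, vs. 34% overall for the real plant
```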
Nuclear power stations are so singular that they are worthy of a few closing comments. Modern stations are all large, varying from 600 to 1500 MW. Compared with fossil applications, the steam is both low temperature and low pressure (~600°F and ~1000 psia) and hovers around saturation conditions or is only slightly superheated. Therefore, the boiler(s), the superheater equivalent (actually a combined moisture separator and reheater), and the turbines are unique to this cycle. The turbine generator thermal efficiency is around 36%.


Rankine Cycle Analysis
Modern steam power plants are based on the Rankine cycle. The basic, ideal Rankine cycle is shown in Figure 2.
The ideal cycle comprises the following processes:
1-2: Saturated liquid from the condenser at state 1 is pumped isentropically (i.e., S1 = S2) to state 2 and into the boiler.
2-3: The liquid is heated at constant pressure in the boiler to state 3 (saturated steam).
3-4: The steam expands isentropically (i.e., S3 = S4) through the turbine to state 4, where it enters the condenser as a wet vapor.
4-1: Constant-pressure transfer of heat in the condenser returns the steam to state 1 (saturated liquid).

If changes in kinetic and potential energy are neglected, the total heat added to the Rankine cycle can be represented by the shaded area on the T-S diagram in Figure 2, while the work done by this cycle can be represented by the crosshatching within the shaded area. The thermal efficiency of the cycle (η) is defined as the net work (W_NET) divided by the heat input to the cycle (Q_H), i.e., η = W_NET/Q_H.
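In terms of the state-point enthalpies, the same definition can be written η = [(h3 - h4) - (h2 - h1)]/(h3 - h2). The sketch below evaluates it for a superheated 1000 psia / 1000°F cycle condensing at 1 psia (superheat is discussed later in this section); the enthalpies are approximate steam-table values used only for illustration.

```python
# Ideal Rankine cycle efficiency from state-point enthalpies:
#   eta = W_NET / Q_H = [(h3 - h4) - (h2 - h1)] / (h3 - h2)
# Enthalpies below are approximate steam-table values (Btu/lbm) for a
# 1000 psia / 1000°F cycle condensing at 1 psia, used only for illustration.

def rankine_efficiency(h1, h2, h3, h4):
    """Ideal Rankine cycle efficiency from enthalpies (any consistent units)."""
    turbine_work = h3 - h4       # isentropic expansion, state 3 to 4
    pump_work = h2 - h1          # isentropic compression, state 1 to 2
    heat_added = h3 - h2         # constant-pressure heat addition in the boiler
    return (turbine_work - pump_work) / heat_added

h1 = 69.7     # saturated liquid leaving the condenser at 1 psia
h2 = 72.7     # after isentropic pumping to boiler pressure
h3 = 1505.9   # steam at 1000 psia and 1000°F
h4 = 923.0    # wet steam after isentropic expansion to 1 psia

print(f"Ideal cycle thermal efficiency: {rankine_efficiency(h1, h2, h3, h4):.1%}")  # roughly 40%
```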
The Rankine cycle is preferred over the Carnot cycle for the following reasons:
The heat transfer process in the boiler must take place at constant temperature for the Carnot cycle, whereas in the Rankine cycle it takes place at constant pressure, which allows the steam to be superheated. Superheating the steam could be achieved in the Carnot cycle during heat addition, but the pressure would have to drop to maintain constant temperature.
This means the steam would be expanding in the boiler while heat is added, which is not a practical method.
The Carnot cycle requires that the working fluid be compressed at constant entropy to boiler pressure.
This would require taking wet steam from point 1′ in Figure 2 and compressing it to the saturated liquid condition at 2′. A pump required to compress a mixture of liquid and vapor isentropically is difficult to design and operate. In comparison, the Rankine cycle takes the saturated liquid and compresses it to boiler pressure. This is more practical and requires much less work.
The efficiency of the Rankine cycle can be increased by utilizing a number of variations to the basic cycle. One such variation is superheating the steam in the boiler. The additional work done by the cycle is shown in the crosshatched area in Figure 3.
The efficiency of the Rankine cycle can also be increased by increasing the pressure in the boiler.
However, increasing the steam generator pressure at a constant temperature results in excess moisture in the steam exiting the turbine. In order to take advantage of higher steam generator pressures and keep turbine exhaust moisture at safe values, the steam is expanded to some intermediate pressure in the turbine and then reheated in the boiler. Following reheat, the steam is expanded to the cycle exhaust pressure. The reheat cycle is shown in Figure 4.
Another variation of the Rankine cycle is the regenerative cycle, which involves the use of feedwater heaters. The regenerative cycle recovers some of the heat otherwise lost when condensed liquid is pumped directly into the boiler: steam is extracted from various points in the turbine and used to heat the condensed liquid in feedwater heaters. Figure 5 shows the Rankine cycle with regeneration. The actual Rankine cycle is far from ideal, as there are losses associated with the cycle. They include piping losses due to friction and heat transfer, turbine losses associated with steam flow, pump losses due to friction, and condenser losses when condensate is subcooled. The losses in the compression (pump) and expansion (turbine) processes result in an increase in entropy. Also, energy is lost in the heat addition (boiler) and heat rejection (condenser) processes, as they occur over a finite temperature difference.
Most modern power plants employ some variation of the basic Rankine cycle in order to improve thermal efficiency. For larger power plants, economies of scale will dictate the use of one or all of the variations listed above to improve thermal efficiency. Power plants in excess of 200,000 kW will in most cases have about 300°F of superheat in the steam leaving the boiler, reheat, and seven to eight stages of feedwater heating.












Trust, Alienation, and How Far to Go with Automation

Trust
If operators do not trust their sensors and displays, expert advisory system, or automatic control system, they will not use it or will avoid using it if possible. On the other hand, if operators come to place too much trust in such systems they will let down their guard, become complacent, and, when it fails, not be prepared. The question of operator trust in the automation is an important current issue in human-machine interface design. It is desirable that operators trust their systems, but it is also desirable that they maintain alertness, situation awareness, and readiness to take over.

Alienation
There is a set of broader social effects that the new human-machine interaction can have, which can be  discussed under the rubric of alienation.
1. People worry that computers can do some tasks much better than they themselves can, such as memory and calculation. Surely, people should not try to compete in this arena.
2. Supervisory control tends to make people remote from the ultimate operations they are supposed to be overseeing — remote in space, desynchronized in time, and interacting with a computer instead of the end product or service itself.
3. People lose the perceptual-motor skills which in many cases gave them their identity. They become "deskilled", and, if ever called upon to use their previous well-honed skills, they could not.
4. Increasingly, people who use computers in supervisory control or in other ways, whether intentionally or not, are denied access to the knowledge to understand what is going on inside the computer.
5. Partly as a result of factor 4, the computer becomes mysterious, and the untutored user comes to attribute to the computer more capability, wisdom, or blame than is appropriate.
6. Because computer-based systems are growing more complex, and people are being “elevated” to roles of supervising larger and larger aggregates of hardware and software, the stakes naturally become higher. Where a human error before might have gone unnoticed and been easily corrected, now such an error could precipitate a disaster.
7. The last factor in alienation is similar to the first, but all-encompassing, namely, the fear that a “race” of machines is becoming more powerful than the human race.
These seven factors, and the fears they engender, whether justified or not, must be reckoned with.
Computers must be made to be not only “human friendly” but also not alienating with respect to these broader factors. Operators and users must become computer literate at whatever level of sophistication they can deal with.


How Far to Go with Automation
There is no question but that the trend toward supervisory control is changing the role of the human operator, posing fewer requirements on continuous sensory-motor skill and more on planning, monitoring, and supervising the computer. As computers take over more and more of the sensory-motor skill functions, new questions are being raised regarding how the interface should be designed to provide the best cooperation between human and machine. Among these questions are: To what degree should the system be automated? How much “help” from the computer is desirable? What are the points of diminishing returns?
Table 6.1.1 lists ten levels of automation, from 0 to 100% computer control. Obviously, there are few tasks which have achieved 100% computer control, but new technology pushes relentlessly in that direction. It is instructive to consider the various intermediate levels of Table 6.1.1 in terms not only of how capable and reliable the technology is, but also of what is desirable in terms of safety and satisfaction of the human operators and the general public.
Scale of Degrees of Automation
1. The computer offers no assistance; the human must do it all.
2. The computer offers a complete set of action alternatives, and
3. Narrows the selection down to a few, or
4. Suggests one alternative, and
5. Executes that suggestion if the human approves, or
6. Allows the human a restricted time to veto before automatic execution, or
7. Executes automatically, then necessarily informs the human, or
8. Informs the human only if asked, or
9. Informs the human only if it, the computer, decides to.
10. The computer decides everything and acts autonomously, ignoring the human.
The current controversy about how much to automate large commercial transport aircraft is often couched in these terms.



Human Error

Human error has long been of interest, but only in recent decades has there been serious effort to understand human error in terms of categories, causation, and remedy. There are several ways to classify human errors. One is according to whether it is an error of omission (something not done which was supposed to have been done) or commission (something done which was not supposed to have been done). Another is slip (a correct intention for some reason not fulfilled) vs. a mistake (an incorrect intention which was fulfilled). Errors may also be classified according to whether they are in sensing, perceiving, remembering, deciding, or acting. There are some special categories of error worth noting which are associated with following procedures in operation of systems. One, for example, is called a capture error, wherein the operator, being very accustomed to a series of steps, say, A, B, C, and D, intends at another time to perform E, B, C, F. But he is “captured” by the familiar sequence B, C and does E, B, C, D.
As to effective therapies for human error, proper design to make operation easy and natural and unambiguous is surely the most important. If possible, the system design should allow for error correction before the consequences become serious. Active warnings and alarms are necessary when the system can detect incipient failures in time to take such corrective action. Training is probably next most important after design, but any amount of training cannot compensate for an error-prone design. Preventing exposure to error by guards, locks, or an additional “execute” step can help make sure that the most critical actions are not taken without sufficient forethought. Least effective are written warnings such as posted decals or warning statements in instruction manuals, although many tort lawyers would like us to believe the opposite.


Mental Workload

Under such complexity it is imperative to know whether or not the mental workload of the operator is too great for safety. Human-machine systems engineers have sought to develop measures of mental workload, the idea being that as mental load increases, the risk of error increases, but presumably measurable mental load comes before actual lapse into error.
Three approaches have been developed for measuring mental workload:
1. The first and most used is the subjective rating scale, typically a ten-level category scale with descriptors for each category from no load to unbearable load.
2. The second approach is use of physiological indexes which correlate with subjective scales, including heart rate and the variability of heart rate, certain changes in the frequency spectrum of the voice, electrical resistance of the skin, diameter of the pupil of the eye, and certain changes in the evoked brain wave response to sudden sound or light stimuli.
3. The third approach is to use what is called a secondary task, an easily measurable additional task which consumes all of the operator’s attention remaining after the requirements of the primary task are satisfied. This latter technique has been used successfully in the laboratory, but has shortcomings in practice in that operators may refuse to cooperate.
Such techniques are now routinely applied to critical tasks such as aircraft landing, air traffic control, certain planned tasks for astronauts, and emergency procedures in nuclear power plants. The evidence suggests that supervisory control relieves mental load when things are going normally, but when automation fails the human operator is subjected rapidly to increased mental load.


Human Workload and Human Error

As noted above, new technology allows combination, integration, and simplification of displays compared to the intolerable plethora of separate instruments in older aircraft cockpits and plant control rooms. The computer has taken over more and more functions from the human operator. Potentially these changes make the operator’s task easier. However, it also allows for much more information to be presented, more extensive advice to be given, etc.
These advances have elevated the stature of the human operator from providing both physical energy and control, to providing only continuous control, to finally being a supervisor of a robotic vehicle or system. Expert systems can now answer the operator’s questions, much as does a human consultant, or whisper suggestions in his ear even if he doesn’t request them. These changes seem to add many cognitive functions that were not present at an earlier time. They make the operator into a monitor of the automation, who is supposed to step in when required to set things straight. Unfortunately, people are not always reliable monitors and interveners.

Common Criteria for Human Interface Design

Design of operator control stations for teleoperators poses the same types of problems as design of controls and displays for aircraft, highway vehicles, and trains. The displays must show the important variables unambiguously to whatever accuracy is required, but beyond that must show the variables in relation to one another so as to clearly portray the current “situation” (situation awareness is currently a popular test of the human operator in complex systems). Alarms must get the operator’s attention; indicate by text, symbol, or location on a graphic display what is abnormal, where in the system the failure occurred, and what the urgency is; and, if response is urgent, even suggest what action to take. (For example, the ground-proximity warning in an aircraft gives a loud “Whoop, whoop!” followed by a distinct spoken command “Pull up, pull up!”) Controls, whether analogic (joysticks, master arms, or knobs) or symbolic (special-purpose buttons or general-purpose keyboards), must be natural and easy to use, and require little memory of special procedures (computer icons and windows do well here). The placement of controls and instruments and their mode and direction of operation must correspond to the desired direction and magnitude of system response.

High-Speed Train Control

With respect to new electronic technology for information sensing, storage, and processing, railroad technology has lagged behind that of aircraft and highway vehicles, but currently is catching up. The role of the human operator in future rail systems is being debated, since for some limited right-of-way trains (e.g., in airports) one can argue that fully automatic control systems now perform safely and efficiently. The train driver’s principal job is speed control (though there are many other monitoring duties he must perform), and in a train this task is much more difficult than in an automobile because of the huge inertia of the train — it takes 2 to 3 km to stop a high-speed train. Speed limits are fixed at reduced levels for curves, bridges, grade crossings, and densely populated areas, while wayside signals temporarily command lower speeds if there is maintenance being performed on the track, if there are poor environmental conditions such as rock slides or deep snow, or especially if there is another train ahead. The driver must obey all speed limits and get to the next station on time. Learning to maneuver the train with its long time constants can take months, given that for the speed control task the driver’s only input currently is an indication of current speed.
The author’s laboratory has proposed a new computer-based display which helps the driver anticipate the future effects of current throttle and brake actions. This approach, based on a dynamic model of the train, gives an instantaneous prediction of future train position and speed based on current acceleration, so that speed can be plotted on the display assuming the operator holds to current brake-throttle settings.
It also plots trajectories for maximum emergency braking and maximum service braking. In addition, the computer generates a speed trajectory which adheres at all (known) future speed limits, gets to the next station on time, and minimizes fuel/energy.
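The sketch below illustrates, in its simplest form, the kind of prediction such a display relies on: projecting speed ahead of the train's present position under an assumed constant braking rate. The speed and braking rates are illustrative assumptions, not the parameters of the author's actual train model.

```python
# Illustrative prediction of future train speed under constant braking (not the author's
# actual dynamic model). Speed and braking rates below are assumed values.

import math

def predicted_speed(v0_mps, decel_mps2, distance_m):
    """Speed (m/s) after travelling distance_m at constant deceleration; 0 if already stopped."""
    v_squared = v0_mps ** 2 - 2.0 * decel_mps2 * distance_m
    return math.sqrt(v_squared) if v_squared > 0.0 else 0.0

v0 = 270.0 / 3.6           # current speed: 270 km/hr expressed in m/s
braking_rates = {
    "service":   0.5,      # m/s^2, assumed full-service braking
    "emergency": 1.0,      # m/s^2, assumed emergency braking
}

for label, a in braking_rates.items():
    stop_distance_km = v0 ** 2 / (2.0 * a) / 1000.0
    v_2km = 3.6 * predicted_speed(v0, a, 2000.0)
    print(f"{label:9s}: stops in {stop_distance_km:.1f} km; speed 2 km ahead = {v_2km:.0f} km/hr")
```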


Advanced Traffic Management Systems

Automobile congestion in major cities has become unacceptable, and advanced traffic management systems are being built in many of these cities to measure traffic flow at intersections (by some combination of magnetic loop detectors, optical sensors, and other means), and regulate stoplights and message signs. These systems can also issue advisories of accidents ahead by means of variable message signs or radio, and give advice of alternative routings. In emergencies they can dispatch fire, police, ambulances, or tow trucks, and in the case of tunnels can shut down entering traffic completely if necessary. These systems are operated by a combination of computers and humans from centralized control rooms. The operators look at banks of video monitors which let them see the traffic flow at different locations, and computer-graphic displays of maps, alarm windows, and textual messages. The operators get advice from computer-based expert systems, which suggest best responses based on measured inputs, and the operator must decide whether to accept the computer’s advice, whether to seek further information, and how to respond.

Smart Cruise Control

Standard cruise control has a major deficiency in that it knows nothing about vehicles ahead, and one can easily collide with the rear end of another vehicle if not careful. In a smart cruise control system, a microwave or optical radar detects the presence of a vehicle ahead and measures the distance to it. But there is a question of what to do with this information. Just warn the driver with some visual or auditory alarm (auditory is better because the driver does not have to be looking in the right place)? Can a warning come too late to elicit braking, or surprise the driver so that he brakes too suddenly and causes a rear-end accident to his own vehicle? Should the computer automatically apply the brakes by some function of distance to the obstacle ahead, speed, and closing deceleration? If the computer did all the braking, would the driver become complacent and not pay attention, to the point where a serious accident would occur if the radar failed to detect an obstacle, say, a pedestrian or bicycle, or the computer failed to brake?
Should braking be some combination of human and computer braking, and if so by what algorithm?
These are human factor questions which are currently being researched.
It is interesting to note that current developmental systems only decelerate and downshift, mostly because if the vehicle manufacturers sell vehicles which claim to perform braking they would be open to a new and worrisome area of litigation.
The same radar technology that can warn the driver or help control the vehicle can also be applied to cars overtaking from one side or the other. Another set of questions then arises as to how and what to communicate to the driver and whether or not to trigger some automatic control maneuver in certain cases.
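One hypothetical way to frame these questions in software is sketched below: warn at one time-to-collision threshold and begin automatic braking at a tighter one. Every threshold and gain shown is an assumption for illustration only; as noted above, the right division of braking authority between driver and computer remains an open research question.

```python
# Hypothetical warning/braking policy based on time-to-collision (TTC).
# All thresholds and the maximum deceleration are illustrative assumptions.

def cruise_response(gap_m, own_speed_mps, lead_speed_mps,
                    warn_ttc_s=4.0, brake_ttc_s=2.5, max_decel_mps2=3.0):
    """Return (warn, commanded_deceleration_mps2) from the gap and the two vehicle speeds."""
    closing_speed = own_speed_mps - lead_speed_mps
    if closing_speed <= 0.0:
        return False, 0.0                       # not closing on the vehicle ahead
    ttc = gap_m / closing_speed                 # seconds to contact at current speeds
    warn = ttc < warn_ttc_s
    if ttc < brake_ttc_s:
        # Command harder braking as TTC shrinks, up to max_decel_mps2.
        decel = max_decel_mps2 * (brake_ttc_s - ttc) / brake_ttc_s
    else:
        decel = 0.0
    return warn, decel

print(cruise_response(gap_m=35.0, own_speed_mps=30.0, lead_speed_mps=20.0))  # (True, 0.0): warn only
print(cruise_response(gap_m=20.0, own_speed_mps=30.0, lead_speed_mps=20.0))  # (True, 0.6): warn and brake
```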

Intelligent Highway Vehicles: Vehicle Guidance and Navigation Systems

The combination of GPS (global positioning system) satellites, high-density computer storage of map data, electronic compass, synthetic speech, and computer-graphic displays allows cars and trucks to know where they are located on the Earth to within 100 m or less and to guide a driver to a programmed destination by a combination of a map display and speech. Some human factor challenges lie in deciding how to configure the map (how much detail to present, whether to make the map north-up with a moving dot representing one’s own vehicle position or current-heading-up and rapidly changing with every turn). The computer graphics can also be used to show what turns to anticipate and which lane to get in. Synthetic speech can reinforce these turn anticipations, can caution the driver if he is perceived to be headed in the wrong direction or off course, and can even guide him or her how to get back on course. An interesting question is what the computer should say in each situation to get the driver’s attention, to be understood quickly and unambiguously, but without being an annoyance. Another question is whether or not such systems will distract the driver’s attention from the primary tasks, thereby reducing safety. The major vehicle manufacturers have developed such systems; they have been evaluated for reliability and human use, and they are beginning to be marketed in the United States, Europe, and Japan.

Air Traffic Control

As demands for air travel continue to increase, so do demands for air traffic control. Given what are currently regarded as safe separation criteria, air space over major urban areas is already saturated, so that simply adding more airports is not acceptable (in addition to which residents do not want more airports, with their noise and surface traffic). The need is to reduce separations in the air, and to land aircraft closer together or on parallel runways simultaneously. This puts much greater demands on air traffic controllers, particularly at the terminal area radar control centers (TRACONs), where trained operators stare at blips on radar screens and verbally guide pilots entering the terminal airspace from various directions and altitudes into orderly descent and landing patterns with proper separation between aircraft.
Currently, many changes are being introduced into air traffic control which have profound implications for human-machine interaction. Where previously communication between pilots and air traffic controllers was entirely by voice, now digital communication between aircraft and ground (a system called datalink) allows both more and more reliable two-way communication, so that weather and runway and wind information, clearances, etc. can be displayed to pilots visually. But pilots are not so sure they want this additional technology. They fear the demise of the “party line” of voice communications with which they are so familiar and which permits all pilots in an area to listen in on each other’s conversations.
New aircraft-borne radars allow pilots to detect air traffic in their own vicinity. Improved ground based radars detect microbursts or wind shear which can easily put an aircraft out of control. Both types of radars pose challenges as to how best to warn the pilot and provide guidance as to how to respond.
But they also pose a cultural change in air traffic control, since heretofore pilots have been dependent upon air traffic controllers to advise them of weather conditions and other air traffic. Furthermore, because of the new weather and collision-avoidance technology, there are current plans for radically altering the rules whereby high-altitude commercial aircraft must stick to well-defined traffic lanes. Instead, pilots will have great flexibility as to altitude (to find the most favorable winds and therefore save fuel) and be able to take great-circle routes straight to their destinations (also saving fuel). However, air traffic controllers are not sure they want to give up the power they have had, becoming passive observers and monitors, to function only in emergencies.

Supervisory Control

Supervisory control may be defined by the analogy between a supervisor of subordinate staff in an organization of people and the human overseer of a modern computer-mediated semiautomatic control system. The supervisor gives human subordinates general instructions which they in turn may translate into action. The supervisor of a computer-controlled system does the same.
Defined strictly, supervisory control means that one or more human operators are setting initial conditions for, intermittently adjusting, and receiving high-level information from a computer that itself closes a control loop in a well-defined process through artificial sensors and effectors. For some time period the computer controls the process automatically.
By a less strict definition, supervisory control is used when a computer transforms human operator commands to generate detailed control actions, or makes significant transformations of measured data to produce integrated summary displays. In this latter case the computer need not have the capability to commit actions based upon new information from the environment, whereas in the first it necessarily must. The two situations may appear similar to the human supervisor, since the computer mediates both human outputs and human inputs, and the supervisor is thus removed from detailed events at the low level.

FIGURE 6.1.2 Direct manual control-loop analysis.

In a supervisory control system (Figure 6.1.3), the human operator issues commands to a human-interactive computer capable of understanding high-level language and providing integrated summary displays of process state information back to the operator. This computer, typically located in a control room or cockpit or office near to the supervisor, in turn communicates with at least one, and probably many (hence the dotted lines), task-interactive computers, located with the equipment they are controlling. The task-interactive computers thus receive subgoal and conditional branching information from the human-interactive computer. Using such information as reference inputs, the task-interactive computers serve to close low-level control loops between artificial sensors and mechanical actuators; i.e., they accomplish the low-level automatic control.
The low-level task typically operates at some physical distance from the human operator and his human-friendly display-control computer. Therefore, the communication channels between computers may be constrained by multiplexing, time delay, or limited bandwidth. The task-interactive computer, of course, sends analog control signals to and receives analog feedback signals from the controlled process, and the latter does the same with the environment as it operates (vehicles moving relative to air, sea, or earth, robots manipulating objects, process plants modifying products, etc.).
Supervisory command and feedback channels for process state information are shown in Figure 6.1.3 to pass through the left side of the human-interactive computer. On the right side are represented decision-aiding functions, with requests of the computer for advice and displayed output of advice (from a database, expert system, or simulation) to the operator. There are many new developments in computer-based decision aids for planning, editing, monitoring, and failure detection being used as an auxiliary part of operating dynamic systems. Reflection upon the nervous system of higher animals reveals a similar kind of supervisory control wherein commands are sent from the brain to local ganglia, and peripheral motor control loops are then closed locally through receptors in the muscles, tendons, or skin.
The brain, presumably, does higher-level planning based on its own stored data and “mental models,” an internalized expert system available to provide advice and permit trial responses before commitment to actual response.
Theorizing about supervisory control began as aircraft and spacecraft became partially automated. It became evident that the human operator was being replaced by the computer for direct control responsibility, and was moving to a new role of monitor and goal-constraint setter. An added incentive was the U.S. space program, which posed the problem of how a human operator on Earth could control a manipulator arm or vehicle on the moon through a 3-sec communication round-trip time delay. The only solution which avoided instability was to make the operator a supervisory controller communicating intermittently with a computer on the moon, which in turn closed the control loop there. The rapid development of microcomputers has forced a transition from manual control to supervisory control in a variety of industrial and military applications (Sheridan, 1992).
Let us now consider some examples of human-machine interaction, particularly those which illustrate supervisory control in its various forms. First, we consider three forms of vehicle control, namely, control of modern aircraft, “intelligent” highway vehicles, and high-speed trains, all of which have both human operators in the vehicles as well as humans in centralized traffic-control centers. Second, we consider telerobots for space, undersea, and medical applications.

Direct Manual Control

In the 1940s aircraft designers appreciated the need to characterize the transfer function of the human pilot in terms of a differential equation. Indeed, this is necessary for any vehicle or controlled physical process for which the human is the controller; see Figure 6.1.2. In this case both the human operator H and the physical process P lie in the closed loop (where H and P are Laplace transforms of the component transfer functions), and the HP combination determines whether the closed loop is inherently stable (i.e., whether the closed-loop characteristic equation 1 + HP = 0 has only roots with negative real parts).
In addition to the stability criterion are the criteria of rapid response of process state x to a desired or reference state r with minimum overshoot, zero “steady-state error” between r and output x, and reduction to near zero of the effects of any disturbance input d. (The latter effects are determined by the closed-loop transfer function x = [HP/(1 + HP)] r + [1/(1 + HP)] d, where if the magnitude of H is large enough, HP/(1 + HP) approaches unity and 1/(1 + HP) approaches zero. Unhappily, there are ingredients of H which produce delays in combination with magnitude and thereby can cause instability. Therefore, H must be chosen carefully by the human for any given P.)
Research to characterize the pilot in these terms resulted in the discovery that the human adapts to a wide variety of physical processes so as to make HP = K(1/s)(e^-sT). In other words, the human adjusts H to make HP constant. The term K is an overall amplitude or gain, (1/s) is the Laplace transform of an integrator, and (e^-sT) is a delay T long (the latter time delay being an unavoidable property of the nervous system). Parameters K and T vary modestly in a predictable way as a function of the physical process and the input to the control system. This model is now widely accepted and used, not only in engineering aircraft control systems, but also in designing automobiles, ships, nuclear and chemical plants, and a host of other dynamic systems.
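The closed-loop behavior this model implies can be seen in a few lines of simulation. The sketch below discretizes the loop of Figure 6.1.2 with HP = K(1/s)(e^-sT); the values of K and T are illustrative, not measured pilot parameters.

```python
# Discrete-time sketch of the crossover model in closed loop: open-loop HP = K * e^(-sT) / s,
# so the error e = r - x drives an integrator with gain K after a delay T.
# K and T are illustrative assumptions, not measured pilot parameters.

K = 2.0          # open-loop gain, 1/sec (illustrative)
T = 0.2          # effective human time delay, sec (typical order of magnitude)
dt = 0.001       # integration step, sec
t_end = 5.0      # simulated time, sec

n_delay = int(T / dt)            # delay expressed in time steps
error_history = [0.0] * n_delay  # buffer holding the last T seconds of error samples

x = 0.0   # process output (controlled state)
r = 1.0   # reference step input

for _ in range(int(t_end / dt)):
    e = r - x                         # current error
    e_delayed = error_history.pop(0)  # error as seen T seconds ago
    error_history.append(e)
    x += dt * K * e_delayed           # integrate: x_dot(t) = K * e(t - T)

print(f"Output after {t_end} s: x = {x:.3f} (settles near r = 1.0 because K*T is well below pi/2)")
```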

Human-Machine Interaction

Over the years machines of all kinds have been improved and made more reliable. However, machines typically operate as components of larger systems, such as transportation systems, communication systems, manufacturing systems, defense systems, health care systems, and so on. While many aspects of such systems can be and have been automated, the human operator is retained in many cases. This may be because of economics, tradition, cost, or (most likely) capabilities of the human to perceive patterns of information and weigh subtle factors in making control decisions which the machine cannot match.
Although the public as well as those responsible for system operation usually demand that there be a human operator, “human error” is a major reason for system failure. And aside from prevention of error, getting the best performance out of the system means that human and machine must be working together effectively — be properly “impedance matched.” Therefore, the performance capabilities of the human relative to those of the machine must be taken into account in system design.
Efforts to “optimize” the human-machine interaction are meaningless in the mathematical sense of optimization, since most important interactions between human and machine cannot be reduced to a mathematical form, and the objective function (defining what is good) is not easily obtained in any given context. For this reason, engineering the human-machine interaction, much as in management or medicine, remains an art more than a science, based on laboratory experiments and practical experience.
In the broadest sense, engineering the human-machine interface includes all of ergonomics or human factors engineering, and goes well beyond design of displays and control devices. Ergonomics includes not only questions of sensory physiology, whether or not the operator can see the displays or hear the auditory warnings, but also questions of biomechanics: how the body moves, and whether or not the operator can reach and apply proper force to the controls. It further includes the fields of operator selection and training, human performance under stress, human factors in maintenance, and many other aspects of the relation of the human to technology. This section focuses primarily on human-machine interaction in control of systems.
The human-machine interactions in control are considered in terms of Figure 6.1.1. In Figure 6.1.1a the human directly controls the machine; i.e., the control loop to the machine is closed through the physical sensors, displays, human senses (visual, auditory, tactile), brain, human muscles, control devices, and machine actuators. Figure 6.1.1b illustrates what has come to be called a supervisory control system , wherein the human intermittently instructs a computer as to goals, constraints, and procedures, then turns a task over to the computer to perform automatic control for some period of time.

Displays and control devices can be analogic (movement signals direction and extent of control action, isomorphic with the world, such as an automobile steering wheel or computer mouse controls, or a moving needle or pictorial display element). Or they can be symbolic (dedicated buttons or general-purpose keyboard controls, icons, or alarm light displays). In normal human discourse we use both speech (symbolic) and gestures (analogic), and on paper we write alphanumeric text (symbolic) and draw pictures (analogic). The system designer must decide which type of displays or controls best suits a particular application, and/or what mix to use. The designer must be aware of important criteria such as whether or not, for a proposed design, changes in the displays and controls caused by the human operator correspond in a natural and common-sense way to “more” or “less” of some variable as expected by that operator and correspond to cultural norms (such as reading from left to right in Western countries), and whether or not the movement of the display elements corresponds geometrically to movements of the controls.

Guidelines for Improving Thermodynamic Effectiveness

Thermal design frequently aims at the most effective system from the cost viewpoint. Still, in the cost optimization process, particularly of complex energy systems, it is often expedient to begin by identifying a design that is nearly optimal thermodynamically; such a design can then be used as a point of departure for cost optimization. Presented in this section are guidelines for improving the use of fuels (natural gas, oil, and coal) by reducing sources of thermodynamic inefficiency in thermal systems. Further discussion is provided by Bejan et al. (1996).
To improve thermodynamic effectiveness it is necessary to deal directly with inefficiencies related to exergy destruction and exergy loss. The primary contributors to exergy destruction are chemical reaction, heat transfer, mixing, and friction, including unrestrained expansions of gases and liquids. To deal with them effectively, the principal sources of inefficiency not only should be understood qualitatively, but also determined quantitatively, at least approximately. Design changes to improve effectiveness must be done judiciously, however, for the cost associated with different sources of inefficiency can be different.
For example, the unit cost of the electrical or mechanical power required to provide for the exergy destroyed owing to a pressure drop is generally higher than the unit cost of the fuel required for the
exergy destruction caused by combustion or heat transfer.
Since chemical reaction is a significant source of thermodynamic inefficiency, it is generally good practice to minimize the use of combustion. In many applications the use of combustion equipment such as boilers is unavoidable, however. In these cases a significant reduction in the combustion irreversibility by conventional means simply cannot be expected, for the major part of the exergy destruction introduced by combustion is an inevitable consequence of incorporating such equipment. Still, the exergy destruction in practical combustion systems can be reduced by minimizing the use of excess air and by preheating the reactants. In most cases only a small part of the exergy destruction in a combustion chamber can be avoided by these means. Consequently, after considering such options for reducing the exergy destruction related to combustion, efforts to improve thermodynamic performance should focus on components of the overall system that are more amenable to betterment by cost-effective conventional measures. In other words, some exergy destructions and energy losses can be avoided, others cannot. Efforts should be centered on those that can be avoided.
Nonidealities associated with heat transfer also typically contribute heavily to inefficiency. Accordingly, unnecessary or cost-ineffective heat transfer must be avoided. Additional guidelines follow:

• The higher the temperature T at which a heat transfer occurs in cases where T > T0 , where T0 denotes the temperature of the environment (Section 2.5), the more valuable the heat transfer and, consequently, the greater the need to avoid heat transfer to the ambient, to cooling water, or to a refrigerated stream. Heat transfer across
T0 should be avoided.

• The lower the temperature T at which a heat transfer occurs in cases where T < T0 , the more valuable the heat transfer and, consequently, the greater the need to avoid direct heat transfer with the ambient or a heated stream.

• Since exergy destruction associated with heat transfer between streams varies inversely with the temperature level, the lower the temperature level, the greater the need to minimize the stream-to-stream temperature difference (see the sketch following this list).

• Avoid the use of intermediate heat transfer fluids when exchanging energy by heat transfer between two streams.
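The temperature-level guideline above can be quantified with the standard expression for the exergy destroyed when heat flows across a finite temperature difference, E_dest = T0 Q (1/T_cold - 1/T_hot). The short sketch below compares the same duty and the same 20 K temperature difference at two temperature levels; all numbers are illustrative assumptions.

```python
# Exergy destroyed when heat Q flows from a stream at T_hot to a stream at T_cold:
#   E_dest = T0 * Q * (1/T_cold - 1/T_hot)
# T0, the duty, and the temperatures below are illustrative assumptions.

T0 = 298.0  # environment temperature, K

def exergy_destruction(q_kw, t_hot_k, t_cold_k):
    """Exergy destruction rate (kW) for heat duty q_kw transferred from t_hot_k to t_cold_k."""
    return T0 * q_kw * (1.0 / t_cold_k - 1.0 / t_hot_k)

# Same 100 kW duty and same 20 K stream-to-stream difference at two temperature levels:
print(f"{exergy_destruction(100.0, 320.0, 300.0):.1f} kW destroyed near ambient (320 K -> 300 K)")
print(f"{exergy_destruction(100.0, 1020.0, 1000.0):.1f} kW destroyed at high temperature (1020 K -> 1000 K)")
```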
Although irreversibilities related to friction, unrestrained expansion, and mixing are often secondary in importance to those of combustion and heat transfer, they should not be overlooked, and the following guidelines apply:

• Relatively more attention should be paid to the design of the lower temperature stages of turbines and compressors (the last stages of turbines and the first stages of compressors) than to the remaining stages of these devices.

• For turbines, compressors, and motors, consider the most thermodynamically efficient options.

• Minimize the use of throttling; check whether power recovery expanders are a cost-effective alternative for pressure reduction.

• Avoid processes using excessively large thermodynamic driving forces (differences in temperature, pressure, and chemical composition). In particular, minimize the mixing of streams differing significantly in temperature, pressure, or chemical composition.


• The greater the mass rate of flow, the greater the need to use the exergy of the stream effectively.

• The lower the temperature level, the greater the need to minimize friction. Flowsheeting or process simulation software can assist efforts aimed at improving thermodynamic effectiveness by allowing engineers to readily model the behavior of an overall system, or system components, under specified conditions and do the required thermal analysis, sizing, costing, and optimization. Many of the more widely used flowsheeting programs: ASPEN PLUS, PROCESS, and
CHEMCAD are of the sequential-modular type. SPEEDUP is a popular program of the equation-solver type. Since process simulation is a rapidly evolving field, vendors should be contacted for up-to-date information concerning the features of flowsheeting software, including optimization capabilities (if any). As background for further investigation of suitable software, see Biegler (1989) for a survey of the capabilities of 15 software products.

Combustion in Internal Combustion Engines

In combustion reactions, rapid oxidation of combustible elements of the fuel results in energy release as combustion products are formed. The three major combustible chemical elements in most common fuels are carbon, hydrogen, and sulfur. Although sulfur is usually a relatively unimportant contributor to the energy released, it can be a significant cause of pollution and corrosion.
The emphasis in this section is on hydrocarbon fuels, which contain hydrogen, carbon, sulfur, and possibly other chemical substances. Hydrocarbon fuels may be liquids, gases, or solids such as coal.
Liquid hydrocarbon fuels are commonly derived from crude oil through distillation and cracking processes.
Examples are gasoline, diesel fuel, kerosene, and other types of fuel oils. The compositions of liquid fuels are commonly given in terms of mass fractions. For simplicity in combustion calculations, gasoline is often considered to be octane, C8H18, and diesel fuel is considered to be dodecane, C12H26.
Gaseous hydrocarbon fuels are obtained from natural gas wells or are produced in certain chemical processes. Natural gas normally consists of several different hydrocarbons, with the major constituent being methane, CH4. The compositions of gaseous fuels are commonly given in terms of mole fractions.
Both gaseous and liquid hydrocarbon fuels can be synthesized from coal, oil shale, and tar sands. The composition of coal varies considerably with the location from which it is mined. For combustion calculations, the makeup of coal is usually expressed as an ultimate analysis giving the composition on a mass basis in terms of the relative amounts of chemical elements (carbon, sulfur, hydrogen, nitrogen, oxygen) and ash. Coal combustion is considered further in Chapter 8, Energy Conversion.
A fuel is said to have burned completely if all of the carbon present in the fuel is burned to carbon dioxide, all of the hydrogen is burned to water, and all of the sulfur is burned to sulfur dioxide. In practice, these conditions are usually not fulfilled and combustion is incomplete. The presence of carbon monoxide (CO) in the products indicates incomplete combustion. The products of combustion of actual combustion reactions and the relative amounts of the products can be determined with certainty only by experimental means. Among several devices for the experimental determination of the composition of products of combustion are the Orsat analyzer, gas chromatograph, infrared analyzer, and flame ionization detector. Data from these devices can be used to determine the makeup of the gaseous products of combustion. Analyses are frequently reported on a “dry” basis: mole fractions are determined for all gaseous products as if no water vapor were present. Some experimental procedures give an analysis including the water vapor, however.
Since water is formed when hydrocarbon fuels are burned, the mole fraction of water vapor in the gaseous products of combustion can be significant. If the gaseous products of combustion are cooled at constant mixture pressure, the dew point temperature (Section 2.3, Ideal Gas Model) is reached when water vapor begins to condense. Corrosion of duct work, mufflers, and other metal parts can occur when water vapor in the combustion products condenses.
Oxygen is required in every combustion reaction. Pure oxygen is used only in special applications such as cutting and welding. In most combustion applications, air provides the needed oxygen. Idealizations are often used in combustion calculations involving air: (1) all components of air other than oxygen (O2) are lumped with nitrogen (N2). On a molar basis air is then considered to be 21% oxygen and 79% nitrogen. With this idealization the molar ratio of the nitrogen to the oxygen in combustion air is 3.76; (2) the water vapor present in air may be considered in writing the combustion equation or ignored. In the latter case the combustion air is regarded as dry; (3) additional simplicity results by regarding the nitrogen present in the combustion air as inert. However, if high-enough temperatures are attained, nitrogen can form compounds, often termed NOX, such as nitric oxide and nitrogen dioxide.
Even trace amounts of oxides of nitrogen appearing in the exhaust of internal combustion engines can be a source of air pollution.
The minimum amount of air that supplies sufficient oxygen for the complete combustion of all the combustible chemical elements is the theoretical, or stoichiometric, amount of air. In practice, the amount of air actually supplied may be greater than or less than the theoretical amount, depending on the application. The amount of air is commonly expressed as the percent of theoretical air or the percent excess (or percent deficiency) of air. The air-fuel ratio and its reciprocal, the fuel-air ratio, each of which can be expressed on a mass or molar basis, are other ways that fuel-air mixtures are described. Another is the equivalence ratio: the ratio of the actual fuel-air ratio to the fuel-air ratio for complete combustion with the theoretical amount of air. The reactants form a lean mixture when the equivalence ratio is less than unity and a rich mixture when the ratio is greater than unity.
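As a brief illustration of these definitions, the sketch below computes the stoichiometric air-fuel ratio for gasoline modeled as octane (using the 21% O2 / 79% N2 air idealization described above) and an equivalence ratio; the "actual" air-fuel ratio used is an assumed value.

```python
# Stoichiometric air-fuel ratio and equivalence ratio, illustrated for octane (C8H18)
# burned in dry air idealized as 21% O2 / 79% N2 (3.76 mol N2 per mol O2).
# Molar masses are approximate; the "actual" AFR below is an assumed value (lean operation).

M_AIR = 28.97           # kg/kmol, dry air
M_C, M_H = 12.011, 1.008

def stoich_afr_mass(n_carbon, n_hydrogen):
    """Stoichiometric air-fuel ratio (mass basis) for a hydrocarbon CnHm burned in dry air."""
    o2_moles = n_carbon + n_hydrogen / 4.0          # O2 needed per mole of fuel
    air_moles = o2_moles * (1.0 + 3.76)             # 3.76 mol N2 accompany each mol O2
    fuel_mass = n_carbon * M_C + n_hydrogen * M_H   # kg per kmol of fuel
    return air_moles * M_AIR / fuel_mass

afr_stoich = stoich_afr_mass(8, 18)        # octane: about 15.1 kg air per kg fuel
afr_actual = 18.0                          # assumed measured air-fuel ratio
equivalence_ratio = (1.0 / afr_actual) / (1.0 / afr_stoich)  # actual FA over stoichiometric FA

print(f"Stoichiometric AFR for C8H18: {afr_stoich:.1f}")
print(f"Equivalence ratio: {equivalence_ratio:.2f} (less than 1 means a lean mixture)")
```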
