Energy Conservation

This section provides an overview-level understanding of the steam power cycle. References were selected for the next level of study if required. There are noteworthy omissions from the section: site selection, fuel handling, civil engineering-related activities (such as foundations), controls, and nuclear power.
Thermal power cycles take many forms, but the majority are fossil steam, nuclear, simple-cycle gas turbine, and combined cycle. Of those listed, conventional coal-fired steam power is predominant. This is especially true in developing countries that either have indigenous coal or can import it inexpensively. These countries make up the largest market for new units. A typical unit is shown in Figure 1.
The Rankine cycle is overwhelmingly the preferred cycle for steam power and is discussed first.
Topping and bottoming cycles, with one exception, are rare and mentioned only for completeness.
The exception is the combined cycle, where the steam turbine cycle is a bottoming cycle. In the developed countries, there has been a move to the combined cycle because of cheap natural gas or oil. Combined cycles still use a reasonably standard steam power cycle except for the boiler. The complexity of a combined cycle is justified by the high thermal efficiency, which will soon approach 60%.
The core components of a steam power plant are boiler, turbine, condenser and feed water pump, and generator. These are covered in successive subsections.
The final subsection is an example of the layout and contents of a modern steam power plant.
As a frame of reference for the reader, the following efficiencies/effectivenesses are typical of modern fossil fuel steam power plants. The specific example chosen had steam conditions of 2400 psia, 1000°F main steam temperature, and 1000°F reheat steam temperature: boiler thermal, 92%; turbine/generator thermal, 44%; turbine isentropic, 89%; generator, 98.5%; boiler feed water pump and turbine combined isentropic, 82%; condenser, 85%; plant overall, 34% (Carnot, 64%).
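The quoted 64% Carnot figure can be checked from the cycle temperature limits alone. A minimal sketch follows; the 60°F heat-rejection temperature is an assumption for illustration, since the text does not state the sink temperature:

```python
# Carnot efficiency from the cycle temperature limits.
# Temperatures must be absolute (Rankine scale: deg F + 459.67).
T_hot = 1000.0 + 459.67   # main steam temperature, deg R
T_cold = 60.0 + 459.67    # assumed condenser/sink temperature, deg R

eta_carnot = 1.0 - T_cold / T_hot
print(f"Carnot limit: {eta_carnot:.1%}")  # ~64%, matching the figure quoted above
```

The gap between the 64% Carnot limit and the 34% plant overall figure reflects the component losses itemized above.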
Nuclear power stations are so singular that they are worthy of a few closing comments. Modern stations are all large, varying from 600 to 1500 MW. The steam is both low temperature and low pressure (~600°F and ~1000 psia) compared with fossil applications, and hovers around saturation conditions or is slightly superheated. Therefore, the boiler(s), the superheater equivalent (actually a combined moisture separator and reheater), and the turbines are unique to this cycle. The turbine/generator thermal efficiency is around 36%.


Rankine Cycle Analysis
Modern steam power plants are based on the Rankine cycle. The basic, ideal Rankine cycle is shown in Figure 2.
The ideal cycle comprises the following processes:
1. Saturated liquid from the condenser at state 1 is pumped isentropically (i.e., S1 = S2) to state 2 and into the boiler.
2. Liquid is heated at constant pressure in the boiler to state 3 (saturated steam).
3. Steam expands isentropically (i.e., S3 = S4) through the turbine to state 4, where it enters the condenser as a wet vapor.
4. Constant-pressure transfer of heat in the condenser returns the steam to state 1 (saturated liquid).

If changes in kinetic and potential energy are neglected, the total heat added to the Rankine cycle can be represented by the shaded area on the T-S diagram in Figure 2, while the work done by the cycle can be represented by the crosshatching within the shaded area. The thermal efficiency of the cycle (η) is defined as the net work (WNET) divided by the heat input to the cycle (QH).
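In enthalpy terms, η = WNET/QH = [(h3 − h4) − (h2 − h1)]/(h3 − h2), with states numbered as in the list above. A minimal numerical sketch follows; the enthalpy values are illustrative round numbers, not steam-table data:

```python
# Ideal Rankine cycle efficiency from state-point enthalpies (Btu/lbm).
# The values below are illustrative placeholders, not real steam-table data.
h1 = 70.0     # saturated liquid leaving the condenser
h2 = 77.0     # liquid after isentropic pumping to boiler pressure
h3 = 1104.0   # saturated steam leaving the boiler
h4 = 714.0    # wet vapor after isentropic expansion in the turbine

w_turbine = h3 - h4   # turbine work out
w_pump = h2 - h1      # pump work in
q_h = h3 - h2         # heat added in the boiler

eta = (w_turbine - w_pump) / q_h
print(f"cycle thermal efficiency: {eta:.1%}")  # ~37% with these illustrative values
```

Note how small the pump work is relative to the turbine work; this asymmetry is what makes the Rankine compression step practical, as discussed next.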
The Rankine cycle is preferred over the Carnot cycle for the following reasons:
The heat-transfer process in the boiler must take place at constant temperature for the Carnot cycle, whereas in the Rankine cycle the steam is superheated at constant pressure. Superheating the steam could be achieved in the Carnot cycle during heat addition, but the pressure would have to drop to maintain constant temperature. This means the steam would be expanding in the boiler while heat is added, which is not a practical method.
The Carnot cycle requires that the working fluid be compressed at constant entropy to boiler pressure. This would require taking wet steam from point 1′ in Figure 2 and compressing it to the saturated liquid condition at 2. A pump required to compress a mixture of liquid and vapor isentropically is difficult to design and operate. In comparison, the Rankine cycle takes the saturated liquid and compresses it to boiler pressure. This is more practical and requires much less work.
The efficiency of the Rankine cycle can be increased by utilizing a number of variations to the basic cycle. One such variation is superheating the steam in the boiler. The additional work done by the cycle is shown in the crosshatched area in Figure 3.
The efficiency of the Rankine cycle can also be increased by increasing the pressure in the boiler.
However, increasing the steam generator pressure at constant temperature will result in excess moisture content in the steam exiting the turbine. In order to take advantage of higher steam generator pressures and keep turbine exhaust moisture at safe values, the steam is expanded to some intermediate pressure in the turbine and then reheated in the boiler. Following reheat, the steam is expanded to the cycle exhaust pressure. The reheat cycle is shown in Figure 4.
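The moisture benefit of reheat can be estimated from an entropy balance: for an isentropic expansion to condenser pressure, exhaust quality is x = (s − sf)/sfg. A rough sketch follows, using approximate steam-table entropies; the 1 psia condenser pressure, the 600 psia reheat pressure, and the entropy values are assumptions for illustration:

```python
# Exhaust quality after isentropic expansion to an assumed 1 psia condenser.
# Entropy values (Btu/lbm-R) are approximate steam-table figures.
s_f, s_fg = 0.1326, 1.8455   # saturated liquid / evaporation entropy at 1 psia

def exhaust_quality(s_turbine_inlet):
    """Quality x of the wet exhaust for an isentropic expansion to 1 psia."""
    return (s_turbine_inlet - s_f) / s_fg

# Without reheat: expand straight from 2400 psia, 1000 F (s ~ 1.5365)
x_no_reheat = exhaust_quality(1.5365)
# With reheat: final expansion from an assumed 600 psia, 1000 F (s ~ 1.7155)
x_reheat = exhaust_quality(1.7155)

print(f"moisture without reheat: {1 - x_no_reheat:.0%}")  # roughly 24%
print(f"moisture with reheat:    {1 - x_reheat:.0%}")     # roughly 14%
```

Even with these rough numbers, reheat cuts the exhaust moisture substantially, which is exactly the motivation given above for expanding to an intermediate pressure and reheating.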
Another variation of the Rankine cycle is the regenerative cycle, which involves the use of feed water heaters. The regenerative cycle regains some of the irreversible heat lost when condensed liquid is pumped directly into the boiler, by extracting steam from various points in the turbine and heating the condensed liquid with this steam in feed water heaters. Figure 5 shows the Rankine cycle with regeneration. The actual Rankine cycle is far from ideal, as there are losses associated with the cycle. They include the piping losses due to friction and heat transfer, turbine losses associated with steam flow, pump losses due to friction, and condenser losses when condensate is subcooled. The losses in the compression (pump) and expansion (turbine) processes result in an increase in entropy. Also, energy is lost in the heat addition (boiler) and rejection (condenser) processes because they occur over a finite temperature difference.
Most modern power plants employ some variation of the basic Rankine cycle in order to improve thermal efficiency. For larger power plants, economies of scale will dictate the use of one or all of the variations listed above to improve thermal efficiency. Power plants in excess of 200,000 kW will in most cases have some 300°F of superheat in the steam leaving the boiler, reheat, and seven to eight stages of feed water heating.












Trust, Alienation, and How Far to Go with Automation

Trust
If operators do not trust their sensors and displays, expert advisory system, or automatic control system, they will not use them, or will avoid using them if possible. On the other hand, if operators come to place too much trust in such systems, they will let down their guard, become complacent, and not be prepared when the systems fail. The question of operator trust in automation is an important current issue in human-machine interface design. It is desirable that operators trust their systems, but it is also desirable that they maintain alertness, situation awareness, and readiness to take over.

Alienation
There is a set of broader social effects that the new human-machine interaction can have, which can be discussed under the rubric of alienation.
1. People worry that computers can do some tasks much better than they themselves can, such as memory and calculation. Surely, people should not try to compete in this arena.
2. Supervisory control tends to make people remote from the ultimate operations they are supposed to be overseeing — remote in space, desynchronized in time, and interacting with a computer instead of the end product or service itself.
3. People lose the perceptual-motor skills which in many cases gave them their identity. They become "deskilled", and, if ever called upon to use their previous well-honed skills, they could not.
4. Increasingly, people who use computers in supervisory control or in other ways, whether intentionally or not, are denied access to the knowledge to understand what is going on inside the computer.
5. Partly as a result of factor 4, the computer becomes mysterious, and the untutored user comes to attribute to the computer more capability, wisdom, or blame than is appropriate.
6. Because computer-based systems are growing more complex, and people are being “elevated” to roles of supervising larger and larger aggregates of hardware and software, the stakes naturally become higher. Where a human error before might have gone unnoticed and been easily corrected, now such an error could precipitate a disaster.
7. The last factor in alienation is similar to the first, but all-encompassing, namely, the fear that a “race” of machines is becoming more powerful than the human race.
These seven factors, and the fears they engender, whether justified or not, must be reckoned with.
Computers must be made to be not only “human friendly” but also not alienating with respect to these broader factors. Operators and users must become computer literate at whatever level of sophistication they can deal with.


How Far to Go with Automation
There is no question but that the trend toward supervisory control is changing the role of the human operator, posing fewer requirements on continuous sensory-motor skill and more on planning, monitoring, and supervising the computer. As computers take over more and more of the sensory-motor skill functions, new questions are being raised regarding how the interface should be designed to provide the best cooperation between human and machine. Among these questions are: To what degree should the system be automated? How much “help” from the computer is desirable? What are the points of diminishing returns?
The table below lists ten levels of automation, from 0 to 100% computer control. Obviously, there are few tasks which have achieved 100% computer control, but new technology pushes relentlessly in that direction. It is instructive to consider the various intermediate levels of the table in terms not only of how capable and reliable the technology is, but also of what is desirable for the safety and satisfaction of the human operators and the general public.
Scale of Degrees of Automation
1. The computer offers no assistance; the human must do it all.
2. The computer offers a complete set of action alternatives, and
3. Narrows the selection down to a few, or
4. Suggests one alternative, and
5. Executes that suggestion if the human approves, or
6. Allows the human a restricted time to veto before automatic execution, or
7. Executes automatically, then necessarily informs the human, or
8. Informs the human only if asked, or
9. Informs the human only if it, the computer, decides to, or
10. The computer decides everything and acts autonomously, ignoring the human.
The current controversy about how much to automate large commercial transport aircraft is often couched in these terms.



Human Error

Human error has long been of interest, but only in recent decades has there been serious effort to understand human error in terms of categories, causation, and remedy. There are several ways to classify human errors. One is according to whether it is an error of omission (something not done which was supposed to have been done) or commission (something done which was not supposed to have been done). Another distinguishes a slip (a correct intention for some reason not fulfilled) from a mistake (an incorrect intention which was fulfilled). Errors may also be classified according to whether they occur in sensing, perceiving, remembering, deciding, or acting. There are some special categories of error worth noting which are associated with following procedures in the operation of systems. One, for example, is called a capture error, wherein the operator, being very accustomed to a series of steps, say, A, B, C, and D, intends at another time to perform E, B, C, F. But he is "captured" by the familiar sequence B, C and does E, B, C, D.
As to effective therapies for human error, proper design to make operation easy and natural and unambiguous is surely the most important. If possible, the system design should allow for error correction before the consequences become serious. Active warnings and alarms are necessary when the system can detect incipient failures in time to take such corrective action. Training is probably next most important after design, but any amount of training cannot compensate for an error-prone design. Preventing exposure to error by guards, locks, or an additional “execute” step can help make sure that the most critical actions are not taken without sufficient forethought. Least effective are written warnings such as posted decals or warning statements in instruction manuals, although many tort lawyers would like us to believe the opposite.


Mental Workload

Under such complexity it is imperative to know whether or not the mental workload of the operator is too great for safety. Human-machine systems engineers have sought to develop measures of mental workload, the idea being that as mental load increases, the risk of error increases, but presumably measurable mental load comes before actual lapse into error.
Three approaches have been developed for measuring mental workload:
1. The first and most used is the subjective rating scale, typically a ten-level category scale with descriptors for each category from no load to unbearable load.
2. The second approach is use of physiological indexes which correlate with subjective scales, including heart rate and the variability of heart rate, certain changes in the frequency spectrum of the voice, electrical resistance of the skin, diameter of the pupil of the eye, and certain changes in the evoked brain wave response to sudden sound or light stimuli.
3. The third approach is to use what is called a secondary task, an easily measurable additional task which consumes all of the operator’s attention remaining after the requirements of the primary task are satisfied. This latter technique has been used successfully in the laboratory, but has shortcomings in practice in that operators may refuse to cooperate.
Such techniques are now routinely applied to critical tasks such as aircraft landing, air traffic control, certain planned tasks for astronauts, and emergency procedures in nuclear power plants. The evidence suggests that supervisory control relieves mental load when things are going normally, but when automation fails the human operator is subjected rapidly to increased mental load.


Human Workload and Human Error

As noted above, new technology allows combination, integration, and simplification of displays compared with the intolerable plethora of separate instruments in older aircraft cockpits and plant control rooms. The computer has taken over more and more functions from the human operator. Potentially these changes make the operator's task easier. However, they also allow much more information to be presented and more extensive advice to be given.
These advances have elevated the stature of the human operator from providing both physical energy and control, to providing only continuous control, to finally being the supervisor of a robotic vehicle or system. Expert systems can now answer the operator's questions, much as a human consultant does, or whisper suggestions in his ear even if he does not request them. These changes add many cognitive functions that were not present at an earlier time. They make the operator a monitor of the automation, who is supposed to step in when required to set things straight. Unfortunately, people are not always reliable monitors and interveners.

Common Criteria for Human Interface Design

Design of operator control stations for teleoperators poses the same types of problems as design of controls and displays for aircraft, highway vehicles, and trains. The displays must show the important variables unambiguously to whatever accuracy is required, but beyond that must show the variables in relation to one another so as to clearly portray the current "situation" ("situation awareness" is currently a popular test of the human operator in complex systems). Alarms must get the operator's attention; indicate by text, symbol, or location on a graphic display what is abnormal, where in the system the failure occurred, and what the urgency is; and, if response is urgent, even suggest what action to take. (For example, the ground-proximity warning in an aircraft gives a loud "Whoop, whoop!" followed by a distinct spoken command "Pull up, pull up!") Controls, whether analogic (joysticks, master arms, or knobs) or symbolic (special-purpose buttons or general-purpose keyboards), must be natural and easy to use, and require little memory of special procedures (computer icons and windows do well here). The placement of controls and instruments, and their mode and direction of operation, must correspond to the desired direction and magnitude of system response.

High-Speed Train Control

With respect to new electronic technology for information sensing, storage, and processing, railroad technology has lagged behind that of aircraft and highway vehicles, but currently is catching up. The role of the human operator in future rail systems is being debated, since for some limited right-of-way trains (e.g., in airports) one can argue that fully automatic control systems now perform safely and efficiently. The train driver’s principal job is speed control (though there are many other monitoring duties he must perform), and in a train this task is much more difficult than in an automobile because of the huge inertia of the train — it takes 2 to 3 km to stop a high-speed train. Speed limits are fixed at reduced levels for curves, bridges, grade crossings, and densely populated areas, while wayside signals temporarily command lower speeds if there is maintenance being performed on the track, if there are poor environmental conditions such as rock slides or deep snow, or especially if there is another train ahead. The driver must obey all speed limits and get to the next station on time. Learning to maneuver the train with its long time constants can take months, given that for the speed control task the driver’s only input currently is an indication of current speed.
The author’s laboratory has proposed a new computer-based display which helps the driver anticipate the future effects of current throttle and brake actions. This approach, based on a dynamic model of the train, gives an instantaneous prediction of future train position and speed based on current acceleration, so that speed can be plotted on the display assuming the operator holds to current brake-throttle settings.
It also plots trajectories for maximum emergency braking and maximum service braking. In addition, the computer generates a speed trajectory which adheres to all (known) future speed limits, gets to the next station on time, and minimizes fuel/energy.
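The prediction at the heart of such a display is simple kinematics: integrate the train's current speed and an assumed braking deceleration forward in time. A minimal sketch follows; the 300 km/h initial speed and 1.2 m/s² service-braking deceleration are assumed illustrative values, not parameters of the display described above:

```python
# Predict future train position and speed assuming the driver holds the
# current brake setting (constant deceleration). Values are illustrative.
v0 = 300.0 / 3.6   # current speed: 300 km/h expressed in m/s
a = 1.2            # assumed full-service braking deceleration, m/s^2

def predict(t):
    """Position (m) and speed (m/s) t seconds ahead under constant braking."""
    t_stop = v0 / a
    t = min(t, t_stop)                      # train stays at rest once stopped
    return v0 * t - 0.5 * a * t * t, v0 - a * t

stop_distance = v0 * v0 / (2.0 * a)
print(f"predicted stopping distance: {stop_distance / 1000:.1f} km")  # ~2.9 km
```

With these assumed values the stopping distance comes out near 3 km, consistent with the 2 to 3 km figure quoted earlier for a high-speed train; the display would replot such trajectories continuously as throttle and brake settings change.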

