The book discusses human factors integration methodology and reviews the issues that underpin consideration of key topics such as human error, automation and human reliability assessment.
Inspec keywords: human factors
Other keywords: human-computer interface; human factors; control room design; task analysis; risk assessment; safety assessment; cost benefits; usability
Subjects: Ergonomics
This chapter examines the subject matter coverage of human factors and its close cousin, ergonomics. It then introduces and defines the concept of 'user-centred design' and, most importantly, places human factors within a far broader context. This context is systems engineering and the goal of truly integrated systems, in which users, equipment and the operating environment are appropriately matched for optimal, safe and effective use and performance.
The following sections are included: background to HFI prior to the mid-1980s; post mid-1980s; scope of application of HFI; life cycle management and risk; starting an HFI programme; early human factors analyses; the future; conclusions; and references.
When designing for humans there is a need to know about and understand their cognitive skills and limitations. Humans, for example, are excellent at pattern-matching activities (and have been described as 'furious pattern-matchers'), but are poor at monitoring tasks, especially when the target is a low-frequency one. It is quite likely, when monitoring, say, a radar screen for the appearance of an infrequent target, that the human operator will miss it completely when it finally appears. These two activities are part of an array of information processing activities that humans carry out throughout their waking hours. Pattern-matching is a perceptual skill, while monitoring requires focused attention sustained over a period of time. One approach to considering human skills and design issues has been to think of the human as an information processor. This provides the basis for the current chapter, the aim of which is to examine the cognitive activities associated with human information processing in order to locate our skills, capabilities and limitations. Examples will also be provided to show how our knowledge of human information processing relates to the design of objects in everyday life and advanced technologies.
In theory, all the objects in the built world should have been carefully designed for the human user. In practice, this is not the case and we are surrounded by poor design. Certainly, in the past in industry, poor design was not considered a priority. Ergonomics derives from the Greek 'ergon', meaning work, and 'nomos', meaning laws; hence, it is the study of the laws, or science, of work. The terms ergonomics and human factors are often taken as synonymous, although there is some evidence that ergonomics has its origins more in the consideration of the physiological aspects of the human, while human factors focuses more on the psychological element. In summary, human factors (or ergonomics) is all about design. But there is little point in designing something for humans to use without carrying out some form of assessment that this objective has actually been met. Consideration of how to assess and evaluate design provides the focus of the current chapter, namely, an overview of the methods, grouped together under the umbrella of the human factors toolkit, which can be employed for assessing the design of products and systems.
Task analysis is the name given to a range of techniques that can be used to examine the ways that humans undertake particular tasks. Some of these techniques focus directly upon task performance, whilst others consider how specific features of the task, such as the interfaces, operating procedures, team organisation or training, can influence task performance. Thus, task analysis techniques can be used either to ensure that new systems are designed in a way that assists the users to undertake their tasks safely and effectively, or to identify specific task-related features that could be improved in existing systems. The general family of tools known as task analysis can therefore contribute substantial value to system design and assessment.
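One widely used member of this family is hierarchical task analysis (HTA), in which a top-level goal is decomposed into subtasks governed by a plan. As a minimal illustrative sketch (the task names and plan text below are invented, not drawn from the chapter), such a decomposition can be represented as a simple tree:

```python
# Hypothetical sketch of a hierarchical task analysis (HTA) as a tree of
# goals decomposed into subtasks. The example tasks are invented.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    plan: str = ""                      # ordering/selection rule for subtasks
    subtasks: list["Task"] = field(default_factory=list)

def flatten(task: Task, depth: int = 0) -> list[str]:
    """Return an indented outline of the task hierarchy."""
    lines = ["  " * depth + task.name]
    for sub in task.subtasks:
        lines.extend(flatten(sub, depth + 1))
    return lines

# A toy decomposition of an operator task
hta = Task(
    "0. Respond to alarm",
    plan="do 1, then 2; if cleared, stop, else 3",
    subtasks=[
        Task("1. Acknowledge alarm"),
        Task("2. Diagnose cause", subtasks=[
            Task("2.1 Check panel indications"),
            Task("2.2 Consult procedure"),
        ]),
        Task("3. Escalate to supervisor"),
    ],
)

print("\n".join(flatten(hta)))
```

An analyst would typically stop decomposing a subtask once the probability and cost of failure at that level are acceptably low; the tree form makes that stopping rule, and the plan that sequences the subtasks, explicit.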
The word automation, which comes from Greek, is a combination of auto, 'self', and matos, 'willing', and means something that acts by itself or of itself. This is almost identical to the modern usage, where automatic means something that is self-regulating, or something that acts or operates in a manner determined by external influences or conditions but which is essentially independent of external control, such as an automatic light switch.
Engineers make many assumptions about human error and about their ability to design against it. This chapter tries to unpack some of those assumptions. Does 'human error' exist as a uniquely sub-standard category of human performance? Are humans the most unreliable components in an engineered human-machine assembly? Once we embrace the idea that errors are consequences, not causes, can we still distinguish between mechanical failure and human error? In fact, engineers themselves are prone to err too, not only with respect to the assumptions they make about operators, but because the very activity of engineering is about reconciling irreconcilable constraints. The optimal, perfect engineered solution does not exist, because by the time it is reached it has already violated one or more of the original requirements. The chapter also discusses two popular ways of restraining human unreliability: procedures and automation. It tries to shed some light on why these 'solutions' to human error do not always work the way engineers thought they would.
This document has provided an overview of a framework for the assessment of human error in risk assessments. The main emphasis has been on the importance of a systematic approach to the qualitative modelling of human error. This leads to the identification and possible reduction of the human sources of risk. This process is of considerable value in its own right, and does not necessarily have to be accompanied by the quantification of error probabilities. Because of the engineering requirement to provide numerical estimates of human error probabilities in applications such as safety cases, examples of major quantification techniques have been provided, together with case studies illustrating their application. It must be recognised that quantification remains a difficult area, mainly because of the limitations of data. However, the availability of a systematic framework within which to perform the human reliability assessment means that, despite data limitations, a comprehensive treatment of human reliability can still yield considerable benefits in identifying, assessing and ultimately minimising human sources of risk.
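At its simplest, quantification combines per-step human error probabilities (HEPs) into an overall task failure probability. The sketch below assumes independent steps, which real techniques such as THERP refine with dependence and recovery factors; the HEP values used here are invented for illustration only:

```python
# Minimal, hypothetical sketch of combining human error probabilities
# (HEPs) for a sequence of independent task steps. Real human reliability
# assessment techniques also model dependence between steps and the
# chance of error recovery; the values below are invented.

def task_failure_probability(heps):
    """P(at least one error) for independent steps with the given HEPs."""
    p_success = 1.0
    for hep in heps:
        p_success *= (1.0 - hep)     # all steps must succeed
    return 1.0 - p_success

# Illustrative HEPs for three steps of a procedure
steps = [0.003, 0.01, 0.001]
print(round(task_failure_probability(steps), 6))
```

Even this toy calculation shows why qualitative modelling comes first: the result is only as good as the identification of the error opportunities and the (often sparse) data behind each HEP.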
Most of the population will be unaware of the operators seated in front of rows of screens controlling the services we take for granted: metro systems, sewage works, electricity distribution and security, for example. These operators are at the core of an electronic network of which the control room is the hub. This network will be pulsing electronic information to and from their workstations, linking them with an infrastructure that remains unseen from where they work. Control room projects are rarely undertaken in isolation; they tend to arise because new systems are being introduced, working practices changed, organisations merged or equipment replaced. The control room project team must expect to work with others whose primary concern will be with areas such as building design, systems design and human resources.
The purpose of this chapter is to delineate the many facets of developing interfaces in order to provide fruitful design directions for practitioners to take in the area. To this end, the chapter is divided into four distinct sections. The first, Section 10.2, provides a brief history of interface design along with defining its scope and why it is important, especially in light of the continual problems in usefulness and ease of use of many interfaces that are designed today. Based on this understanding, Section 10.3 categorises the area according to the broad treatment of three key topic areas in interface design, covering human factors issues (see Chapter 1 for formal definitions), interface types and design principles that are important considerations in the development of interfaces. Practical applications of the knowledge gained from the discussions thus far will be demonstrated in Section 10.4, through two interface design cases. These cases are analysed with consideration to the issues drawn out from previous sections, thus spotlighting the various pitfalls and benefits to design. Section 10.5 concludes the chapter with a final reflection on the nature and inherent difficulties in producing effective interfaces.
Most computer users will have had experience of poorly designed interactive systems that are difficult to understand or frustrating to use. These may range from a consumer product with many functions and fiddly controls, to a call centre system that is hard for the telephone operator to navigate to obtain the information that a customer requires. System success is therefore heavily dependent upon the user being able to operate the system successfully. If users have difficulties in using the system, they are unlikely to achieve their task goals. Different terms have been used to describe the quality of a system that enables the user to operate it successfully, such as 'user friendly', 'easy to use', 'intuitive', 'natural' and 'transparent'. A more general term that has been adopted is 'usability'.
This chapter will give guidance on how to assist the performance of system verification and validation (V&V) through the application of human factors (HF) to design during the various phases of a system's life cycle. Notably, a system will be considered as consisting of both humans and machines operating within social, cultural and operating environments. Figure 12.1 depicts this problem space and suggests that HF should be central to the consideration of systems. Therefore, the importance of empathy on the part of the HF practitioner with the activities of the system engineer/maintainer and the user/operator should be emphasised. Moreover, it is argued that true expertise in HF is only acquired through a polymathic approach to work and an understanding of how other disciplines, such as systems engineering and integrated logistics support, approach the challenges of a system's life cycle.
Having designed a system, be it a piece of software, or the human-machine interface (HMI) for a complex system, there comes a point where the designers, or indeed the customers, will require an assessment to demonstrate that the product does actually perform safely and effectively. Simulators and other simulation techniques provide powerful tools for such assessments and ergonomists are often involved in the required investigation and evaluation exercises. This chapter will describe, using some examples, the use of simulators in the verification and validation of system performance.
This chapter examines the safety assessment of systems that include people. Specifically, the chapter is concerned with the analysis of safety in complex information systems that characteristically support dynamic processes involving large numbers of hardware, software and human elements interacting in many different ways. The chapter assumes little or no knowledge of either safety assessment or human factors assessment techniques. Information systems involving extensive human interactions are increasingly being integrated into complicated social and organisational environments where their correct design and operation are essential in order to preserve the safety of the general public and the operators. This chapter focuses on the safety assessment of information systems, typically operating in real time, within safety-related application domains where human error is often cited as a major contributing factor in, or even the direct cause of, accidents or incidents.