New Publications are available for Systems analysis and programming
http://dl-live.theiet.org
New Publications are available now online for this publication.
Please follow the links to view the publication.
Towards behavior driven operations (BDOps)
http://dl-live.theiet.org/content/conferences/10.1049/ic.2011.0095
Modern enterprise software systems entail many challenges, such as availability, scalability, complexity and providing business agility. Ensuring that systems are up and running 24 × 7 has become a mandate for operations. Agile development has been adopted to keep pace with the demands of business and IT. Test Driven Development (TDD) and Behavior Driven Development (BDD) are practices which enable agile development. So far the agile approach has been limited to development; for business to be truly agile, we need to carry the agile approach forward into operations. In this paper, we discuss the behavior driven approach for operations, specifically for core sub-systems such as infrastructure provisioning, deployment and monitoring. We share our explorations and experiments with Behavior Driven Monitoring (BDM) and how the same can be adopted for infrastructure provisioning and deployment. We used Cucumber-Nagios to detect the behavior of an enterprise application. We close this paper with a note on the benefits to business and IT, showing its relevance to DevOps, Continuous Delivery and Cloud Computing.
Web 2.0, serious game: Structuring knowledge for participative and educative representations of the city
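As an illustration of the behaviour-driven monitoring idea in the BDOps abstract above: a check expresses an expected behaviour of the application and maps the outcome onto monitoring states. This minimal Python sketch is not Cucumber-Nagios itself (which expresses scenarios in Gherkin); the endpoint, thresholds and stubbed probes here are illustrative assumptions.

```python
# A behaviour-driven monitoring check in the spirit of Cucumber-Nagios:
# a "Given/When/Then" scenario is evaluated and mapped onto the standard
# Nagios exit codes (0 = OK, 1 = WARNING, 2 = CRITICAL).
OK, WARNING, CRITICAL = 0, 1, 2

def check_homepage(probe, warn_ms=500, crit_ms=2000):
    """Scenario: the homepage responds quickly.

    Given the application homepage
    When I request it
    Then the status is 200 and the response time is acceptable.
    """
    status, elapsed_ms = probe("/")          # When: perform the request
    if status != 200 or elapsed_ms >= crit_ms:
        return CRITICAL                      # Then: behaviour violated
    if elapsed_ms >= warn_ms:
        return WARNING                       # Then: degraded behaviour
    return OK

# Usage with stubbed probes (a real deployment would issue an HTTP GET):
fast = lambda path: (200, 120)   # healthy response
slow = lambda path: (200, 800)   # slow but alive
down = lambda path: (503, 30)    # service unavailable
print(check_homepage(fast), check_homepage(slow), check_homepage(down))
```

A monitoring scheduler would run such a check periodically and interpret the returned code exactly as it would any other Nagios-style plugin result.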
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.0291
To complete the "Pandora21" project, which tackles the issue of creating awareness and action concerning sustainability by using "serious games", we propose a methodological and technical solution to make the space of the game massively "participative", at the scale of a city or a territory. Such a space has to offer rich functions allowing players not only to confront multiple other players using the existing objects of this space, but also to co-design the space of the game. Co-building scenes and discussing the rules are possible for a wide group. Local partners of a given city will be able to easily add scenes, elements of scenes and micro-games transposing appropriate sustainability situations for their city by rapid prototyping, without having to go through computer specialists. Hypertopic, a multi-viewpoint model, will be used to ensure a plurality of views on the city. The space of the game will thus be "participative" for both player and designer groups. (6 pages)
Systematic literature review: teaching novices programming using robots
http://dl-live.theiet.org/content/conferences/10.1049/ic.2011.0003
Background: Teaching programming to novices is a difficult task due to the complex nature of the subject, the negative stereotypes associated with programming, and because introductory programming courses often fail to encourage student understanding. Aim: This study investigates the effectiveness of using robots as tools to aid the process of teaching programming and to determine whether such technology can help to overcome the current barriers for learners in this context. Method: The Systematic Literature Review (SLR) methodology has been selected to discover how effective the use of robotics has been in the teaching of introductory programming concepts. Nine electronic databases, the proceedings of six conferences and two journals have been searched for literature relevant to the study. Results: After applying inclusion and exclusion criteria, 34 articles were accepted into the SLR. 74% of the included literature reports robots to be an effective teaching tool and one that can help novice programmers in their studies. Conclusion: Robots can be a powerful and effective tool when used in an introductory programming course, but the potential remains to further investigate methods for their implementation. Thoughts on the use of the SLR methodology from the perspective of a PhD student are also given.
Human factors for railway signalling and control systems
http://dl-live.theiet.org/content/conferences/10.1049/ic.2010.0103
This paper provides a brief overview of human factors (HF), ergonomics and user-centred design. It offers some typical examples of HF activities and a few useful principles in each HF application area. The paper also offers information on the process of Human Factors Integration (HFI), based upon ISO standards 13407 and 18529.
Mixed-level simulation of wireless sensor networks
http://dl-live.theiet.org/content/conferences/10.1049/ic.2010.0136
Networks consisting of many autonomous sensors are, for various reasons, gaining importance in real-life applications. Most wireless sensor lifetimes are still limited by finite power sources, leading to the need for low-power system designs. In this paper, a novel approach to the system simulation of ultra-low-power wireless sensor networks is proposed. To be able to estimate the power consumption of the whole network, the simulation framework must not only be capable of simulating the sensor nodes themselves, but also the overall system consisting of all interacting elements of the network, which can be much more sophisticated. The framework therefore includes a performant instruction-set simulator in order to enable extensive power profiling and tracking. The hardware/software co-simulation speedup gained is primarily achieved by multi-threading. A tire pressure monitoring system from the automotive area is shown as one of multiple example applications.
Modeling of communication infrastructure for design-space exploration
http://dl-live.theiet.org/content/conferences/10.1049/ic.2010.0135
Computer-aided design has traditionally been applied to computers and embedded systems, but not to the communication infrastructure among them. This paper contributes to filling this gap by proposing the use of a mathematical language to model a distributed application in terms of tasks, hosting nodes, and interactions with the environment. Tasks are described in terms of computation and communication requirements, also in relation to state-of-the-art languages for system specification. Entities and relationships are introduced to relate tasks, data flows and environmental data to network nodes, the channels among them and communication protocols. The resulting attributes and constraints can be used during a subsequent design-space exploration to automatically synthesize a suitable communication infrastructure. The approach can be applied to significant applications, e.g., those based on wireless sensor networks and peer-to-peer networks. An example related to building automation is also reported to demonstrate the potential of the framework.
Using SystemC AMS for heterogeneous systems modelling at TIER-1 level
http://dl-live.theiet.org/content/conferences/10.1049/ic.2010.0157
For the design of cyber-physical systems in the automotive domain, we increasingly have to take the complete value chain into account, from the semiconductor supplier (TIER-2) via the component supplier (TIER-1) up to the automobile manufacturer (OEM). The current requirements for safety, comfort and energy efficiency, and, above all, the rising cost pressure, call for system designs in which different domains (e.g., mechanics, analogue and digital electronics) as well as software are tightly coupled. Due to this increasing complexity there is an indispensable need for virtual prototypes. However, the creation of such TIER-1 designs bears some challenges, for example simulation performance, modelling effort, and the assembly of models from different vendors. This paper discusses various ways of creating models for a system simulation at TIER-1 level, such as an anti-lock braking system. It is shown how to combine these methodologies and how to integrate them into a typical TIER-1 design environment.
HetMoC: heterogeneous modelling in SystemC
http://dl-live.theiet.org/content/conferences/10.1049/ic.2010.0139
We propose a novel heterogeneous model-of-computation (HetMoC) framework in SystemC for embedded computing systems. As the main contribution, we formally define the computation and communication in multiple domains (continuous-time, discrete-event, synchronous/reactive, and untimed) as polymorphic processes and signals, and present domain interfaces to integrate different domains together into heterogeneous process networks. In particular, the continuous-time signals are defined over a time continuum, which distinguishes them from existing approaches. For the implementation, a functional modelling style has been adopted to construct HetMoC. A solver with error estimation has been exploited for numerical approximation, and the time-varying functionalities of adaptive systems have been captured in HetMoC as well. In experiments based on an adaptive transceiver system case study, HetMoC shows promising capabilities compared with a reference model in SystemC-AMS.
Feature selection based on bagging ensemble learning algorithm
http://dl-live.theiet.org/content/conferences/10.1049/cp.2009.2058
Generalization ability is a principal issue in the field of machine learning, and feature selection is a method that can improve the generalization ability of a learning algorithm. By measuring the feature count measure (FCM) in a decision table, the features that depend strongly on the classification attribute are selected. On this basis, a feature-count-measure-based bagging ensemble learning algorithm is proposed. Experimental results show that the proposed algorithm is effective in obtaining classification rules.
System requirements management
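A minimal sketch of the bagging-based feature-selection idea in the abstract above. The paper's feature count measure (FCM) is not defined here, so a simple stand-in score is used instead: across bootstrap samples, count how often each feature yields the most accurate one-feature classifier, then keep the most frequently chosen features. Everything in the sketch is an illustrative assumption.

```python
import random

def stump_accuracy(xs, ys, f):
    """Accuracy of the best one-feature threshold rule on feature f."""
    thr = sum(x[f] for x in xs) / len(xs)          # threshold at the mean
    preds = [1 if x[f] > thr else 0 for x in xs]
    acc = sum(p == y for p, y in zip(preds, ys)) / len(ys)
    return max(acc, 1 - acc)                       # allow the inverted rule

def bagged_feature_counts(xs, ys, n_rounds=50, seed=0):
    """Count, over bootstrap resamples, how often each feature wins."""
    rng = random.Random(seed)
    n, n_feats = len(xs), len(xs[0])
    counts = [0] * n_feats
    for _ in range(n_rounds):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap resample
        bx, by = [xs[i] for i in idx], [ys[i] for i in idx]
        best = max(range(n_feats), key=lambda f: stump_accuracy(bx, by, f))
        counts[best] += 1                           # one vote per round
    return counts

# Toy data: feature 0 determines the class, feature 1 is noise.
xs = [(i % 2, v) for i, v in enumerate([0.3, 0.8, 0.1, 0.9, 0.5, 0.2])]
ys = [x[0] for x in xs]
counts = bagged_feature_counts(xs, ys)
print(counts.index(max(counts)))   # feature 0 should dominate the votes
```

Features with high vote counts across the ensemble are retained; the rest are dropped before training the final classifier.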
http://dl-live.theiet.org/content/conferences/10.1049/ic_20080373
This paper considers the importance of managing requirements as one of the keys to ensuring successful systems development. Sources and types of requirements are discussed along with how they are expressed and their place in the system lifecycle. The use of tools to assist with requirements management is considered and some of the pitfalls to avoid are given. The purpose of the paper is to provide an introduction to the subject, with a particular bias towards railway signalling system requirements (although the principles hold good for any type of system).
Adaptive fault detection strategy in grid
http://dl-live.theiet.org/content/conferences/10.1049/cp_20080800
Given the particular characteristics of the grid environment, we researched fault detection strategies for grid systems on the basis of the Globus platform. We then put forward an adaptive fault detection algorithm. The adaptive algorithm can accommodate the diversity and dynamics of grid applications by providing them with different levels of fault detection service based on their requirements.
Research of requirements elicitation based on domain ontology
http://dl-live.theiet.org/content/conferences/10.1049/cp_20080753
Ontology-based requirements elicitation is a method currently receiving considerable study. Its purpose is to guide customers through the process of requirements elicitation by developing a shared domain-knowledge platform between developers and users, and then to produce software requirements documentation. The key question in this method is how to construct a UML class diagram model based on a domain ontology. This paper analyzes the relations between concepts in an ontology and the relations between concepts in UML class diagrams, then studies the transformation process from an application ontology to a UML class diagram, and finally constructs a requirements elicitation process model based on domain ontology according to this transformation process.
Design of a context aware computing engine
http://dl-live.theiet.org/content/conferences/10.1049/cp_20081118
This paper describes a software framework that facilitates the easy development of modern context-aware systems. The framework exposes core context processing functions as a set of generic but customizable platform components. Here we present the overall design of the framework and early results from our prototype implementations. (4 pages)
Design of underlying network infrastructure of smart buildings
http://dl-live.theiet.org/content/conferences/10.1049/cp_20081123
Wireless Building Management Systems (BMS) are an attractive option when it comes to building retrofitting, due to the cost constraints introduced by wired systems. A crucial part of a wireless BMS is the initial planning stage; this process can be impossible for a designer to undertake unaided, highlighting the requirement for a software design tool to aid in this process. (4 pages)
Study on travel active information service system
http://dl-live.theiet.org/content/conferences/10.1049/cp_20080749
This paper presents the design and implementation of a travel active information service system based on IIPP and agent technology. The system collects data from the network according to the value estimated by a model that combines an interest model with a credibility model. It is developed on the C#.net platform, with reference to a flow chart, and uses Ajax technology to improve performance. The system has the advantage of taking both user interest requirements and the veracity of information into account.
Configuring an "animated work environment": a user-centered design approach
http://dl-live.theiet.org/content/conferences/10.1049/cp_20081100
The dramatic shift in the nature, place and organization of working life, as well as the sophistication of information technologies employed in work, have prompted a trans-disciplinary team to develop an intelligent environment supporting increasingly digital lifestyles. This "Animated Work Environment" (AWE) is envisioned less as a design product and more as the locus of interaction between people, software, information, machines, furniture, and other physical surroundings. In realizing this vision, the team (representing architecture, robotics, human factors and sociology) employed a user-centered design approach to designing, prototyping, demonstrating and evaluating AWE. This paper presents, for the first time, findings from surveys and task analyses of workers employing digital technologies, and traces how these findings informed the design of six physical configurations and other aspects of the AWE robot-architecture prototype. Also presented is a reflection on the benefits and challenges of iterative, trans-disciplinary design approaches to complex systems supporting human activity. Following from a collaborative research approach which includes careful analyses of users' wants and needs, AWE promises to better cultivate rich, engaged and connected lifestyles in an increasingly digital world. (8 pages)
Towards automatic management of reconfigurable accelerators
http://dl-live.theiet.org/content/conferences/10.1049/cp_20080649
With growing interest in reconfigurable computing (RC) systems and the constantly increasing complexity of reconfigurable devices, the need for efficient ways of managing resources in these systems becomes apparent. This work presents an analysis of features inherent to RC systems and presents a set of software primitives required for supporting reconfigurable accelerators at the operating system (OS) level. Changes to the Linux OS kernel architecture are proposed and described. Building on the reported research, the paper introduces a new module that monitors run-time metrics of executing hardware and software tasks. It is expected that, as a result, scheduling quality in reconfigurable systems will improve.
Optimal task allocation and payment minimization strategy of the multihomed end system
http://dl-live.theiet.org/content/conferences/10.1049/cp_20080828
MEU (multihomed end user) refers to terminal hosts of enterprises and large data centers that transmit packets to their destination through several parallel upstream ISPs; in this way a host can improve transmission performance. Many current studies focus on designing optimal and effective strategies to improve the overall performance of the transmission network; most of them, however, aim only at finding balanced allocation algorithms to allocate the task properly. In this paper, we propose a novel task allocation strategy for a particular network topology: multiple multihomed end users connecting to several upstream ISPs, forming a many-to-many mapping relation. By formulating the task allocation problems in this scenario, we use a game-theoretic approach to analyze the issue and obtain the Nash equilibrium as the optimal solution. Next we study the cost incurred by MEUs. Using our task allocation strategy, we calculate the total costs of the multihomed end system. We then analyze the optimal number of ISPs and compare our strategy with the equal-division algorithm. The results show that our strategy performs better.
Research of relationship-expression model in data audit
http://dl-live.theiet.org/content/conferences/10.1049/cp_20080906
Data audit is an effective method for revenue assurance (RA). However, current research on data audit is only at an initial stage; the area of revenue assurance needs a theoretical foundation and a commonly agreed guiding method. In this paper, a model for use in data audit, called the relationship-expression model, is put forward. This model, including its data-entity model, is formally defined and described at the business level. A method of data audit based on this model is developed and verified on a real instance from a telecom operator.
SysML for Systems Engineering
http://dl-live.theiet.org/content/books/pc/pbpc007e
<p xmlns="http://pub2web.metastore.ingenta.com/ns/">This book provides a pragmatic introduction to the systems engineering modelling language, the SysML, aimed at systems engineering practitioners at any level of ability, ranging from students to experts. The theoretical aspects and syntax of SysML are covered and each concept is explained through a number of example applications.</p>
HMM-based semantic analysis for the ESST and MEDIA tasks
http://dl-live.theiet.org/content/conferences/10.1049/cp_20070357
A stochastic component for semantic analysis has been applied to an appointment scheduling task in English (ESST) and a hotel room reservation task in French (MEDIA). Realized as an ergodic HMM using Viterbi decoding, the parser outputs the most likely semantic representation given a transcribed utterance as input. The semantic sequences used for training and testing the parser have been derived from the semantic representations of both spoken language dialogue corpora. The HMM parameters have been estimated given the word sequences along with their semantic representation. The performance of the parser has been determined for both tasks.
Research on the hybrid cam-linkage mechanism realizing trajectory
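A minimal sketch of the Viterbi decoding used by the HMM-based semantic parser in the ESST/MEDIA abstract above: states are semantic concepts, observations are words, and decoding returns the most likely concept sequence. All states, words and probabilities below are toy assumptions, not the actual ESST or MEDIA models.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for an observation sequence."""
    # delta[t][s]: probability of the best path ending in state s at time t
    delta = [{s: start_p[s] * emit_p[s].get(obs[0], 1e-9) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        delta.append({})
        back.append({})
        for s in states:
            prev, p = max(
                ((r, delta[t - 1][r] * trans_p[r][s]) for r in states),
                key=lambda rp: rp[1],
            )
            delta[t][s] = p * emit_p[s].get(obs[t], 1e-9)
            back[t][s] = prev
    # Backtrack from the best final state.
    last = max(states, key=lambda s: delta[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Toy hotel-reservation-style example (illustrative parameters only):
states = ["DATE", "ROOM"]
start_p = {"DATE": 0.5, "ROOM": 0.5}
trans_p = {"DATE": {"DATE": 0.7, "ROOM": 0.3},
           "ROOM": {"DATE": 0.3, "ROOM": 0.7}}
emit_p = {"DATE": {"monday": 0.6, "night": 0.4},
          "ROOM": {"single": 0.7, "room": 0.3}}
print(viterbi(["single", "room", "monday"], states, start_p, trans_p, emit_p))
```

In the tasks described above, the transition and emission tables would be estimated from the annotated dialogue corpora rather than written by hand.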
http://dl-live.theiet.org/content/conferences/10.1049/cp_20061099
Based on an analysis of the current situation and application prospects of hybrid electromechanical systems, the concept of a hybrid cam-linkage mechanism is proposed and its significance expounded. The basic form of the hybrid-driven mechanism is given and its kinematic principle analyzed. Taking the realization of an elliptical path with a scheduled timing mark as an example, the design method of a path-generating hybrid cam-linkage is introduced. Given the parametric equation of the path that the mechanism generates, formulae are derived by kinematic analysis; from these formulae the theoretical contour equation of the cam is obtained, and the rotational angle of the servo-motor that actuates the crank is then solved through computer programming. Using a graphic solution, the flexible workspace and path characteristics of the hybrid cam-linkage mechanism are analyzed, the conclusions are verified by computer simulation, and a design principle for the path is then proposed. The kinematic analysis and path characteristics of the hybrid cam-linkage mechanism provide a theoretical basis for further study of its dynamic characteristics and the geometric optimization of the mechanism system.
Physical.virtual ∥ virtual.physical
http://dl-live.theiet.org/content/conferences/10.1049/cp_20060685
This paper presents the process and outcome of a project aiming at connecting physical and virtual space. Computer programming and physical computing platforms were implemented as the means for studying the above dipole and establishing a bidirectional, reciprocal relationship between the two entities. This research is under further development. (10 pages)
What happens when end user requirements are ignored in system design - a case study
http://dl-live.theiet.org/content/conferences/10.1049/ic_20050456
Identifying and understanding end-user requirements is a critical activity throughout system design. To be effective, system design engineers must ask 'Who are we designing for?' but equally important are the questions 'What will they be doing with it?' and 'What level of performance or safety do they need to achieve?'. Only by answering these questions is it possible to design a system that takes account of human capabilities and limitations, ultimately promoting performance and safety. To achieve this, human factors (HF) integration within design can help ensure the suitability of the physical workplace and environment as well as the tasks which staff are required to carry out. Case studies from the oil and gas industry are used to illustrate good practice in HF integration, where involvement of HF from an early lifecycle stage avoided the need for costly rework later. These are contrasted with a case where failure to consider end-user requirements in an offshore control room re-engineering project resulted in remedial work being carried out to address a range of HF problems being experienced by control room personnel. From this it is argued that HF integration is neither costly, time consuming nor a matter of common sense.
The HTA tool [Hierarchical Task Analysis]
http://dl-live.theiet.org/content/conferences/10.1049/ic_20050459
Under a UK Ministry of Defence (MoD) initiative, termed the Human Factors Integration (HFI) Defence Technology Centre (DTC), a start has been made on developing a computer-based toolset encompassing task analysis methods currently in existence. The toolset will eventually include methods to assist in the analysis of both observable and cognitive tasks, as precursors to the development of additional methods to fit the future development of a computer-based tool framework for cognitive work analysis (MacLeod et al., 2005). Cognitive work analysis is argued to consider not only the nature of time-critical task performance but also the influences on work emanating from society, culture, and organisation. The HTA Tool Workshop within the HFI DTC Symposium focuses on an introduction to one of the human factors integration (HFI) tools being developed by the UK MoD HFI DTC, namely the HTA tool. Many of the participants in the HFI DTC have contributed to the specification and review of the tool. A compact disc (CD) containing this tool has been distributed to all symposium attendees.
The systems design challenge of NEC
http://dl-live.theiet.org/content/conferences/10.1049/ic_20050448
This paper looks at some of the issues of network-enabled capability (NEC). As the primary vision of the UK MOD, NEC, and its US counterpart, network-centric warfare (NCW), promise to bring information-age concepts to all dimensions of military operations. In the present, first stage of NEC development, concentration is on providing increased connectivity and establishing the network infrastructure on which NEC depends. This paper examines some of the system-level and human issues which must be addressed in using the increased connectivity and information availability of NEC.
Organizational culture: can system designers ignore it?
http://dl-live.theiet.org/content/conferences/10.1049/ic_20050445
'Organizational culture' can be considered to be the sum of shared attitudes, expectations, and conventions of behaviour within a group of people who regularly work or otherwise spend time together. It has long been established in the social sciences that organizational culture profoundly influences the behaviour of individuals in organizations and that it is remarkably persistent and resistant to change. The existing culture in any organization is bound therefore to have a significant influence on how new technology and procedures are accepted and managed by a workforce. This implies that it is important to consider the organizational culture of the users of new systems. This need is examined in the light of a small number of case studies connected to the defence domain. The conclusion of the study is that it is far more effective to ride the existing organizational culture than to confront it, and some stratagems are proposed to enable systems designers to identify promising ways of doing so.
Designing for joint cognitive systems
http://dl-live.theiet.org/content/conferences/10.1049/ic_20050450
Cognitive systems engineering (CSE) maintains the view that design is 'telling stories about the future'. This serves to emphasise that the aim of design is to enable the system successfully to carry out its function under conditions that are only incompletely known at the time it is specified and built. The system itself is furthermore thought of as a joint cognitive system, i.e., as a whole comprising people and technology acting together. A key concept in CSE is the ability to cope with complexity, i.e., to maintain control so that the joint cognitive system can perform its intended functions in a dynamic and volatile environment. The implications of this are described in the paper.
User experience of intelligent buildings: a user-centred research framework
http://dl-live.theiet.org/content/conferences/10.1049/ic_20050245
In order to truly understand `user' experiences of intelligent buildings, a research and development framework must be developed that takes into account the discontinuities between AmI and previous technologies, as well as producing rich and detailed knowledge of the `user', embedded in the historical, socio-political and cultural context of his/her life. In addition, the mechanics of `creative misuse' of technology must be better understood, in order to extrapolate from past technologies to `intelligent agents' and their use in intelligent buildings. This paper represents an initial attempt to formulate such a framework, with particular reference to understanding the distinguishing features of intelligent buildings, and how they could influence `user' needs, attitudes and behaviour. (9 pages)
Introducing reset patterns: an extension to a rapid dialogue prototyping methodology
http://dl-live.theiet.org/content/conferences/10.1049/ic_20050246
This paper presents the rapid dialogue prototyping methodology (Rajman et al., 2003, 2004), a methodology allowing the easy and automatic derivation of an ad hoc dialogue management system from a specific task description. The goal of the produced manager is to provide the user with a dialogue-based interface to easily perform the target task. In addition, reset patterns, an extension of the prototyping methodology allowing a more flexible interaction with the user, are proposed in order to improve the efficiency of the dialogue. Reset patterns are justified and theoretically validated by the definition of an average gain function to optimize. Two approaches to this optimization are presented, each focusing on a different aspect of the gain function. Finally, experimental results are presented and a conclusion is drawn on the usefulness of the new feature. (7 pages)
The challenge for computational science
http://dl-live.theiet.org/content/conferences/10.1049/ic_20040410
The high-performance computing and computational science communities face three major challenges: the performance challenge, making the next generation of high-performance computers; the programming challenge, writing codes that can run on the next generation of very complicated computers; and the prediction challenge, writing very complex codes that can give accurate answers that can be relied upon for the important decisions that determine the future of society. The first challenge is being met. The second challenge needs work and focus, but is being addressed. The computational science community is, however, falling short of meeting the third challenge. It needs to focus on reaching the same level of credibility and maturity as the accepted methodologies of theory, experiment and engineering design.
Experiences of teaching problem frame based requirements engineering to undergraduates
http://dl-live.theiet.org/content/conferences/10.1049/ic_20040219
The teaching of requirements engineering at Bournemouth University has incorporated a problem frame based approach to analysis since 1997. Various factors have shaped the aims, content and delivery of the programme. Current practice and outcomes are described and may prove of interest to those involved in teaching requirements engineering.
Toward a framework for evaluating extreme programming
http://dl-live.theiet.org/content/conferences/10.1049/ic_20040394
Software organizations are progressively adopting the development practices associated with the extreme programming (XP) methodology. Most reports on the efficacy of these practices are anecdotal. This paper provides a benchmark measurement framework for researchers and practitioners to express concretely the XP practices the organization has selected to adopt and/or modify, and the outcome thereof. The framework enables the necessary meta-analysis for combining families of case studies. The results of running framework-based case studies in various contexts will eventually constitute a body of knowledge of systematic, empirical evaluations of XP and its practices. Additionally, this benchmark provides a baseline framework that can be adapted for industrial case studies of other technologies and processes. To provide a foundation on the use of the framework, we present the initial validation of our XP evaluation framework based upon a year-long study of an IBM team that adopted a subset of XP practices.
IEC 61131-3 - changing the world of industrial automation
http://dl-live.theiet.org/content/conferences/10.1049/ic_20020029
In its nearly ten years of existence, PLCopen has done a tremendous job on the IEC 61131-3 programming standard. It has been accepted on a much broader platform than originally intended: it has certainly conquered the 'classical' PLC area, and in addition it has created the basis for SoftPLCs and entered the world of drives and even distributed control with small intelligent nodes. This paper gives an overview of the benefits of the standard, the structuring tools, and solutions for new areas like motion control and mechatronics. (4 pages)
Integrating human factors and systems engineering
http://dl-live.theiet.org/content/conferences/10.1049/cp_20010469
Traditionally, most systems engineering solutions have concentrated on the complex technical problems. However, there has been a less rigorous approach to the 'softer' disciplines required to address the human aspects and operational environment of the system. As a result, this has led to solutions which fail to acknowledge the central role of users in ensuring that the goals of a system are met, and often leads to security or safety 'incidents'. Praxis Critical Systems' experience has shown that business, safety and security risk can be reduced significantly by considering the human as an essential part of the system as early as possible. The problem is one of integration: "how do we modify our existing processes?" This paper asserts that small improvements can be made for minimal cost. We draw on our wide practical experience from projects in a variety of markets to demonstrate that adopting a pragmatic approach will ensure that benefits can be seen to be achieved.
People in Control: Human factors in control room design
http://dl-live.theiet.org/content/books/ce/pbce060e
The aim of this book is to provide state-of-the-art information on various aspects of human-machine interaction and human-centred issues encountered in the control room setting. Illustrated with useful case studies.
Scenarios in systems engineering
http://dl-live.theiet.org/content/conferences/10.1049/ic_20000498
Scenarios can be understood in a variety of ways, several of them useful in systems engineering. A scenario can be a sequence of activities, or a more or less richly branched structure of such sequences. Branches can represent alternatives or parallels, or various intermediate options. A scenario can be concrete or abstract; and it can describe either the world or the machine. Scenarios can be represented in a rich variety of ways, including text, databases, diagrams, video, animations, cartoons, storyboards, acted scenes, and collaborative workshops. All of these may have applications in systems engineering. Scenarios can be used throughout the system life-cycle to clarify business processes; to set requirements into context for both stakeholders and systems engineers; to validate user requirements before system specification begins; to guide system specification and design; to provide the basis of acceptance test scripts; and to validate suggested tests. (6 pages)
What piece of this work is man?
http://dl-live.theiet.org/content/conferences/10.1049/ic_20000100
When designing modern systems, the vogue is to increase automation and reduce the involvement of human operators (whether for cost, operational or technological elegance reasons). This drive for more automation and fewer people is not always favourably received by the users of the systems, although perceptions can vary considerably, even in apparently similar circumstances. This paper explores some of the factors influencing designers when using humans in their various systems, and the effect that this can have on the remaining people who use those systems. (6 pages)
Scenario-driven systems engineering
http://dl-live.theiet.org/content/conferences/10.1049/ic_20000499
Scenarios can be an effective technique for eliciting complex system requirements. Scenarios offer visions of future system behaviour that can be simple to communicate and explore, and quick to change in response to feedback (Weidenhaupt et al., 1998). So why are scenarios not a systems engineering silver bullet? One reason is that there are too few systematic processes to follow. Systems engineers rarely know how many scenarios to produce, what the content and structure of these scenarios should be, and how they should use the scenarios to support systems engineering. As a result, systems engineers currently use scenarios in an ad hoc, non-optimal way. The paper outlines the potential benefits of integrating two systematic but different scenario-based approaches to systems engineering. (3 pages)
Informing requirements: ethnography and social activities
http://dl-live.theiet.org/content/conferences/10.1049/ic_20000503
The past decade or so has seen in system design what has been termed `the turn to the social' (Grudin, 1989; Bannon and Schmidt, 1989). The movement of the computer from the data processing room, to the desktop and now into the wider world, has been the occasion for system designers to recognise the need to take seriously the fact that most human activities are socially organised, and no more so than in work: a fact which can have more than incidental bearing on the success or failure of a system in use. It was also felt that traditional methods and approaches in HCI were insufficient for gaining an effective understanding of socially organised activities (Hughes et al., 1993). By happenstance the method that began to gain some prominence was the venerable method of anthropology, namely, ethnography. Put simply, ethnography is a method of social investigation which involves a fieldworker studying some situation as a real time, real world organisation of activities. Its rationale is to examine the actual organisation of work rather than the idealisations of job descriptions or plans, or the renderings of such as task analysis. The paper discusses the use of ethnography in informing design. (4 pages)
Function allocation: optimising the automation boundary
http://dl-live.theiet.org/content/conferences/10.1049/ic_20000101
Two topics are currently generating debate in the human factors literature. One is the importance of function allocation, and the second is the issue of improving the integration of human factors inputs to system design. The DERA Centre for Human Sciences has been conducting research into function allocation and methods of optimising the automation boundary in future military systems for a number of years. We suggest that the process of function allocation forms a critical part of the design process that impacts upon the overall system design and the human operators' roles within that system. This paper suggests a number of reasons why the process of function allocation requires greater attention during system design than has so far been the case. It also provides an overview of work being carried out to address the need for human factors methods to support the process of function allocation that are compatible with system design practice. (6 pages)
What is system architecture? Why is it important in developing ITS?
http://dl-live.theiet.org/content/conferences/10.1049/ic_20000602
Since the early 1990s many people have been advocating the use of stated ITS system architectures, and indeed a number have already been created, including the National ITS Architecture for the USA [ITS USA] and the European ITS Framework Architecture [Bossom 2000]. However, there is still a body of people who do not believe that they need their own system architecture, or who believe that system architectures are not required at all. This is due, in part, to a lack of understanding of the issues involved. The paper covers five basic questions: When do we need a system architecture? What are the issues covered by a system architecture? What is a system architecture? What are the benefits of a system architecture? What is the European ITS Framework Architecture? (6 pages)
Using scenarios in defence projects
http://dl-live.theiet.org/content/conferences/10.1049/ic_20000504
This paper describes the use of scenarios and scenario information within the systems engineering approach adopted by BAe Systems for naval combat system projects. The utilisation of scenarios is seen as increasingly important as a result of the increased prominence of requirements within projects geared towards delivering increasingly complex systems. (4 pages)
Taking to scenarios to improve the requirements process: an experience report
http://dl-live.theiet.org/content/conferences/10.1049/ic_20000505
Scenario-based design is a new approach to requirements engineering. Studies show that using scenarios is proving successful. Whole software processes have been proposed that are based around scenarios, for instance, the various use case-based software processes that have recently appeared in the literature. With the demands for time-to-market becoming more and more severe, especially in the financial sector, scenarios can play a significant part in the requirements process. This paper reports on the take-up and usage of scenarios by a company in the financial sector. It is our experience that from an often messy requirements process, the introduction of scenarios, first in an ad hoc, then in a more structured way, has improved requirements gathering and validation for both the company and for their clients. Indeed, the success of this "novel" approach has convinced the company to adopt a scenario-driven requirements process. (10 pages)
Using sharing trees in the automated analysis of real-time systems with data
http://dl-live.theiet.org/content/conferences/10.1049/ic_19990012
Reachability analysis and model checking of timed automata are now well-established techniques in the analysis of real-time control systems. The major limiting factor in their use, from a technical point of view, remains the state explosion problem. Symbolic representation of the state space often allows for the analysis of much larger systems than the point-wise representation which is common in enumerative analysis. In particular, the use of rooted, ordered binary decision diagrams (ROBDDs) has been successful, mainly in the analysis of hardware systems where the need for a compact representation of boolean functions is prevalent. However, in software systems it is often desirable to represent data types which are more complicated than booleans. The use of sharing trees, which eliminates the requirement to find a boolean encoding of all data types, may offer a more attractive alternative to ROBDDs in these circumstances. This paper considers the use of sharing trees in the context of automata derived from a timed algebra of asynchronous broadcasting systems. It suggests that an encoding of timing constraints may be more easily incorporated into a sharing tree representation of the state space than into one based on ROBDDs. (4 pages)
Hard lessons from soft projects
http://dl-live.theiet.org/content/conferences/10.1049/ic_19990408
The paper is about the National Air Traffic Services' (NATS) New En Route Centre project, an Air Traffic Control Centre being built at Swanwick in Hampshire (UK). The air traffic control world has many examples of systems that significantly overshot their budgets and timescales; some were abandoned. The current Swanwick project management team thought this project would be different. It was believed the methods and experience applied would avoid the mistakes others had made. For five years the "evidence" showed that we were making good progress. Then the programme stalled. It has subsequently been the subject of discussion in the press and has recently been audited at the request of the Government. The paper explains the project, how it started, what happened and where it is now. In particular there are some "soft" issues that seem important, and it is the author's hope that the reader will at least find them interesting. (4 pages)
Sufficient and necessary conditions for routine deployment of user-centred design
http://dl-live.theiet.org/content/conferences/10.1049/ic_19990034
The greatest challenge facing the HCI community is not the solution of any particular problem in the domain of human-computer interaction. Rather, it is the introduction of HCI practice into routine design of software products. Approaches to fostering the uptake of HCI solutions and, particularly, user-centred approaches to the design of software products have included education of designers and product managers, integration of user-centred design processes into software development processes, and placement of HCI practitioners into development organisations. In practice, none of these approaches has proven successful. Software systems continue to be designed with little regard to their usage requirements and, consequently, fail to meet their users' expectations. This paper exposes some of the problems associated with HCI-centric approaches to the deployment of user-centred design. It contrasts these with the approaches to deployment applied when the product of user-centred design, that is, ease of use of software products, is adopted as a business objective. These include not only the demand for a user-centred approach to design, but also the creation of organisations specifically structured to facilitate the uptake of user-centred design processes by product owners. (4 pages)
Making user-centred design a priority in large organisations: a case study of a usability audit
http://dl-live.theiet.org/content/conferences/10.1049/ic_19990036
A range of national and international initiatives, such as ISO 9241, have identified interface design as a critical stage in the software development process. Such initiatives will only be successful if the `corporate culture' of commercial organisations grows to recognise the importance of usability. It can, however, be extremely difficult for companies to gain a clear view of attitudes towards interface design within their organisation. This argument has been supported by the results of a twelve-month study in a UK-based, multi-national company. Software users were questioned about their response to the proprietary and bespoke systems that the company provided. Software developers were studied and their attitude to usability assessed during the requirements stage of a major development project. Although the results from these studies did provide valuable insights into attitudes towards usability, our main findings relate to the problems of assessing attitudes towards interface design. Some of these problems relate to the difficulty of mapping technical terms such as `consistency', `learnability' and `error' onto the subjective experiences of the employees within the company. Other problems stem from the tremendous impact that the Hawthorne effect seems to have upon commercial software engineers and project managers. (9 pages)
Industry standard usability test reports
http://dl-live.theiet.org/content/conferences/10.1049/ic_19990037
A Common Industry Format (CIF) for usability test reports is currently being agreed between major American software suppliers and purchasers. The objective is to raise the profile of usability in the procurement process, and to demonstrate the consequent benefits of acquiring products with increased usability. (4 pages)
Technical guidelines on embedded systems
http://dl-live.theiet.org/content/conferences/10.1049/ic_19990750
The article discusses Year 2000 compliance problems and how to solve them, focusing on embedded systems. In the circumstances, a more pragmatic approach will be necessary, with the best option being to use classic business continuity management methods. This focuses on those areas within an operation where the service processes and supporting systems are judged to be mission and/or safety critical, and where the consequences of a failure would be dire. The core of this approach is to identify the end-to-end service processes with a severe impact potential on lines of service continuity, assess the probability of Year 2000 compliance failure, and rank their risk potential for investigation. All the systems supporting these processes then have to be investigated in depth, right down each system build chain, and compliance problems resolved or mitigation solutions applied. Where the data needed to assess potential Year 2000 compliance are insufficient and/or inaccessible, risk evaluation assessments are used to determine a confidence rating for the chain concerned. Each process and its support system chains are reviewed and contingency plans established to provide work-around solutions to cover potential failures. Once in place, these plans need to be tested on the same basis as NHS Emergency/Disaster Plans and honed to minimise disruption in the event that they have to be actioned. As this process proceeds down the listing of problem areas ranked by overall threat potential, it is continuously assessed and fed back to the responsible management. (10 pages)