Automation in cars is evolving so fast that it threatens to outpace the human's ability to keep up. As the driving task changes, it is more crucial than ever to consider the most important component in the automobile: the driver.
The first decade of the 21st century has seen an abundance of novel automated technologies on offer in cars. Adaptive cruise control, active steering and collision detection systems are just a few of the technologies becoming more widely available.
Vehicle automation is not a new phenomenon: automatic transmission has been common since the 1950s, and conventional cruise control was developed around the same time. However, there is a key difference between these traditional systems and today's more complex technologies, and it lies in their interaction with the driver. Where an automatic gearbox takes over low-level vehicle control activities, the 'new breed' impinges on the more psychological, decision-making elements of the driving task.
In our research, we have referred to this distinction as vehicle automation (for the 'below-the-line' vehicle control tasks) and driving automation (for the 'above-the-line' driver decision tasks). This is not a trivial distinction. In psychological terms, vehicle automation covers skill-based tasks, which drivers perform without conscious awareness. Driving automation, on the other hand, can affect rule-based and knowledge-based tasks, meaning the driver will notice them and will have to think about their impact on overall driving performance.
It may be obvious by now that I am not an automotive engineer. My expertise lies in ergonomics (or human factors), so I am concerned with how automation can be designed to fit the psychological capabilities and limitations of the user. This driver-centred design approach views the driver-vehicle system as a team, comprising human and machine elements with the common goal of controlling the vehicle safely and efficiently. From this perspective, team members should be selected (i.e. the human trained and the technology designed) to exploit the strengths and compensate for the weaknesses of the other members of the team.
Thinking about how to design automation to contribute most effectively to team performance allows us to import models of human-human teams as design guidelines, and there are plenty to choose from. For instance, some leading human factors researchers have modelled trust in automation on human trust in other people. Trust in automation is a delicate balance: with too little, the technology is ignored and its benefits are negated; with too much, the driver may become over-dependent on the system. It turns out that trust is largely governed by our perception of the system's competence: if we feel it is more able to carry out the task than we are, we will trust it, and vice versa.
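To make that calibration idea concrete, here is a minimal sketch in Python. It treats trust as a comparison between perceived competences and flags the two mismatched states described above. The function name, the simple proportional rule and the tolerance band are my own assumptions for illustration, not a description of any published model.

```python
# Illustrative sketch only: trust calibration as a comparison of
# perceived competence. Names, the proportional rule and the 0.2
# tolerance band are assumptions for the example.

def calibrate_trust(system_success_rate: float,
                    driver_success_rate: float,
                    reliance_on_system: float) -> str:
    """Classify the driver's reliance on automation.

    system_success_rate / driver_success_rate: perceived competence,
    as a proportion of tasks handled successfully (0..1).
    reliance_on_system: how often the driver actually delegates (0..1).
    """
    # Trust 'should' track relative competence: delegate more when the
    # system is perceived as more able than the driver, and vice versa.
    warranted = system_success_rate / (system_success_rate + driver_success_rate)

    if reliance_on_system > warranted + 0.2:
        return "over-trust: driver may become dependent on the system"
    if reliance_on_system < warranted - 0.2:
        return "under-trust: system is ignored, negating its benefits"
    return "calibrated trust"

print(calibrate_trust(0.95, 0.80, 0.30))  # under-trust
print(calibrate_trust(0.60, 0.90, 0.85))  # over-trust
```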
Some of our biggest lessons regarding the human factors of automation, though, have come from aviation, an industry that has been using similar 'driving automation' systems for some 20 years now. Various issues have come to the fore, the most prominent concerns being mental workload and situation awareness. Mental workload presents a paradox: automation can decrease workload in some situations (by taking over driving activities) while increasing it in others (such as the effort of keeping track of what the automation is doing). Because of the way human attention is structured, overload and underload are equally detrimental to performance. Situation awareness, literally 'knowing what is going on', is key to performance and depends on accurate information being available for the driver to perceive and comprehend the situation and to predict what will happen in the near future.
The aviation industry has split over how to deal with these problems, with the two main aircraft manufacturers adopting opposing philosophies on the authority of automation. One, the 'hard automation' approach, gives the technology ultimate control: if the pilot attempts a control action that the computer determines to be unsafe, it simply will not let it happen. The other, 'soft automation', maintains that the human should have the final say in whether an action goes ahead. A soft automation system may advise the pilot that an action will take the aircraft outside its flight envelope, but if the pilot persists, it will let the action go ahead anyway. Naturally, there are pros and cons to each approach. Technical reliability may be considered better than human reliability, which favours hard automation. On the other hand, there have been instances where pilots have had to stress the airframe in order to recover from a dangerous situation; a hard automation system would not have allowed them to save the day. The problem lies in the context of the actions and, more often than not, in automation systems that cannot be aware of extenuating circumstances.
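The difference between the two philosophies boils down to who holds the final veto. The fragment below is a deliberately simplified sketch of how the same envelope check plays out under each policy; the bank-angle limit and all the names are invented for the example, not drawn from any real flight control system.

```python
# Simplified sketch of the two authority philosophies. The envelope
# check, names and limit are invented for illustration only.

MAX_SAFE_BANK_DEG = 67  # assumed envelope limit for the example

def within_envelope(bank_deg: float) -> bool:
    return abs(bank_deg) <= MAX_SAFE_BANK_DEG

def hard_automation(requested_bank_deg: float) -> float:
    # Hard automation: the computer has the final say. Unsafe inputs
    # are clipped to the envelope, whatever the pilot intends.
    if within_envelope(requested_bank_deg):
        return requested_bank_deg
    return max(-MAX_SAFE_BANK_DEG, min(MAX_SAFE_BANK_DEG, requested_bank_deg))

def soft_automation(requested_bank_deg: float) -> float:
    # Soft automation: the computer advises, the human decides. The
    # action goes ahead even outside the envelope.
    if not within_envelope(requested_bank_deg):
        print("warning: commanded bank exceeds the flight envelope")
    return requested_bank_deg

print(hard_automation(80))  # clipped to 67
print(soft_automation(80))  # warned, but allowed: 80
```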
In our analysis of vehicle automation systems, we have suggested that hard automation may be most appropriate for the 'below-the-line' vehicle actions, which are largely independent of context. For instance, an anti-lock braking (ABS) or electronic stability (ESP) activation is triggered by loss of traction, regardless of how or why that loss has occurred. Moreover, since these activities are unconscious and skill-based, they have little psychological impact on the driver's mental workload or situation awareness.
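That context-independence is what makes hard authority defensible here: a skid looks the same to the vehicle whatever caused it. A toy version of such trigger logic might read as below; the slip calculation is the standard ratio of vehicle to wheel speed, but the threshold value and names are assumptions for the example.

```python
# Toy ABS trigger: engage whenever wheel slip indicates lost traction,
# regardless of why traction was lost. Threshold is assumed.

SLIP_THRESHOLD = 0.2  # assumed: fraction of slip that counts as a skid

def abs_should_engage(vehicle_speed_ms: float, wheel_speed_ms: float) -> bool:
    if vehicle_speed_ms <= 0:
        return False
    slip = (vehicle_speed_ms - wheel_speed_ms) / vehicle_speed_ms
    return slip > SLIP_THRESHOLD  # hard automation: no driver veto

print(abs_should_engage(20.0, 12.0))  # slip 0.4 -> True
print(abs_should_engage(20.0, 19.0))  # slip 0.05 -> False
```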
Soft automation, in contrast, is probably better suited to driving automation systems, which are dependent on context, such as lane departure warnings or intelligent speed adaptation. Consider a situation where a driver has misjudged an overtaking manoeuvre and needs to break the speed limit temporarily to avoid a dangerous conflict with an oncoming car. An intelligent speed adapter with hard authority would not allow them to do so, thus increasing the risk in those circumstances. Similarly, a lane departure warning that intervenes during a legitimate overtake is likely to become distrusted and switched off (note that not all lane changes are signalled, and turn signals typically override a lane departure warning).
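Applied to intelligent speed adaptation, a soft implementation might look something like the sketch below: the system resists exceeding the limit but yields to a sustained, deliberate driver input. The kick-down style override and its threshold are assumptions made for the illustration, not a description of any production system.

```python
# Sketch of a 'soft' intelligent speed adapter: it pushes back against
# speeding but lets a deliberate driver input through. The override
# mechanism and thresholds are invented for illustration.

SPEED_LIMIT_KMH = 100
OVERRIDE_PEDAL_THRESHOLD = 0.9  # near-full throttle signals intent

def permitted_speed(requested_speed_kmh: float, pedal_position: float) -> float:
    """Return the speed the system will actually permit."""
    if requested_speed_kmh <= SPEED_LIMIT_KMH:
        return requested_speed_kmh
    if pedal_position >= OVERRIDE_PEDAL_THRESHOLD:
        # Deliberate override, e.g. completing an overtake: warn,
        # but let the driver have the final say.
        print("warning: exceeding speed limit under driver override")
        return requested_speed_kmh
    # Otherwise hold the vehicle at the limit.
    return SPEED_LIMIT_KMH

print(permitted_speed(115, pedal_position=0.5))   # held at 100
print(permitted_speed(115, pedal_position=0.95))  # override: 115
```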
Coming back to the team analogy, then, it is evident that for a driving automation system to be integrated successfully into the driver-vehicle system, it needs to support the driver rather than try to replace them. Such support depends on three factors: communication, cooperation and coordination.
Communication is a two-way process, and depends on the driver being able to give effective instructions to the system as well as the system providing informative and timely feedback on its actions. Feedback is crucial here: it bears on all the problems of mental workload, situation awareness and trust described earlier, and its importance in an automated system cannot be overstated. Keep in mind, though, that feedback does not have to come from a visual display; auditory and even tactile interfaces are proving their worth, especially in driving, where the visual demands of the primary task are already high. Multimodal interfaces also offer redundant feedback, which can be more beneficial still.
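To give a flavour of what managing multimodal feedback could involve, the sketch below routes a message away from the visual channel when visual demand is already high, and uses all channels redundantly for critical alerts. The channel names, the load estimate and the urgency scheme are illustrative assumptions.

```python
# Illustrative dispatcher: choose feedback channels according to the
# driver's current visual load. Channels, load values and the
# redundancy rule are assumptions for the example.

def choose_channels(urgency: str, visual_load: float) -> list[str]:
    """Pick feedback modalities for an automation message.

    urgency: 'info' or 'critical'; visual_load: 0 (idle) .. 1 (saturated).
    """
    if urgency == "critical":
        # Redundant multimodal feedback for safety-critical messages.
        return ["auditory", "tactile", "visual"]
    if visual_load > 0.7:
        # Primary task is visually demanding: keep eyes on the road.
        return ["auditory"]
    return ["visual"]

print(choose_channels("info", visual_load=0.8))      # ['auditory']
print(choose_channels("critical", visual_load=0.8))  # all three
```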
Cooperation is a classic team activity, and human-automation teams are no exception. The essential point is that both members of the team work towards the same objective, and that common rules have been established for how to deal with a given situation. This can be thought of broadly across the vehicle/driving automation divide, with vehicle automation being cooperation in action ('if I skid, engage ABS') and driving automation being cooperation in planning ('only give me a collision warning if my time-to-contact falls below a set threshold').
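That planning rule can be made concrete: time-to-contact is simply the current gap divided by the closing speed, and the system warns only when it drops below the driver's agreed threshold. In the sketch below, the 4-second threshold and the function names are assumptions for illustration.

```python
# Cooperation in planning: warn only when time-to-contact falls below
# the driver's agreed threshold. The threshold value is assumed.

DRIVER_TTC_THRESHOLD_S = 4.0  # driver-chosen comfort margin, seconds

def collision_warning(gap_m: float, closing_speed_ms: float) -> bool:
    """Return True if a warning should be issued."""
    if closing_speed_ms <= 0:
        return False  # not closing on the vehicle ahead
    time_to_contact_s = gap_m / closing_speed_ms
    return time_to_contact_s < DRIVER_TTC_THRESHOLD_S

print(collision_warning(gap_m=60, closing_speed_ms=10))  # TTC 6 s: False
print(collision_warning(gap_m=30, closing_speed_ms=10))  # TTC 3 s: True
```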
Coordination involves ensuring that tasks are properly distributed amongst members of the team, and that each has a good idea of what the others are taking care of. In human teams, this would be the management or delegation of tasks, but with automation the hierarchy is somewhat flatter. It is important that both human and machine know the extent and limitations of the tasks they are performing. Again, context is key here, particularly the automation's knowledge of the driver's intentions.
Taking these principles as a whole, we would argue that if automation is to be successful, the system should be designed to behave just like a human co-driver. Recall the days when you were learning to drive and the instructor had dual controls on the passenger side. S/he would continually talk to you about your performance, point out hazards on the road you should be aware of, and you knew that in an emergency s/he would take over without hesitation. That is what the ideal automation should do.
Once again, we can look to aviation for our inspiration. The concept of Crew Resource Management (CRM) was developed to improve communication, cooperation and coordination amongst the human flight crew. Its implementation has proved a huge success. The lessons learnt could equally well be applied as design guidelines for automated systems. More research is needed in human factors to determine just how these would work, but we believe it is a promising avenue for putting the human driver at the heart of the design process.
Some industry experts believe that in as little as 30 years, we will see fully automated cars on our roads. If and when that does happen, we will no longer need to worry about driver-centred design, as there will be no drivers. Until that day, we need to make sure that the intelligence in our cars is matched to the so-called 'nut behind the wheel'.
Mark Young is a research lecturer in the School of Engineering and Design and programme director for the new MSc in Human-Centred Design at Brunel University. He holds a BSc in Psychology and a PhD in Cognitive Ergonomics, both from the University of Southampton. He is a registered member of The Ergonomics Society and sits on the vehicle design working group for the Parliamentary Advisory Council for Transport Safety (PACTS).