During on-board systems development programs, it is sometimes necessary to connect real sensor hardware to simulated environments, allowing information exchange to/from virtual worlds. This is classic Hardware-in-the-Loop (HIL) testing. Since virtual environments can be scripted and modified explicitly and efficiently, the advantages are self-evident. But this type of work only confirms sensor function in the context of the scripted experiments.
In many cases, vehicle development engineers may need to ask deeper, system-level questions. For example, a development engineering team might wish to understand how all the available information from a particular sensor suite might be used. In such cases, it becomes a game of “what-ifs,” and suddenly the HIL simulation task may actually become quite daunting.
Case in Point
If a sensor has the potential to feed an Advanced Driver Assistance System (ADAS), there can be a number of open questions spanning the range from sensor fusion and information redundancy to human acceptance and sign-off. At the more complex end of the spectrum, the end related to actual product development, it usually becomes necessary to assess overall system behavior with real people driving and/or occupying the vehicle. There are only two options at this point: Start building and evaluating real (prototype) vehicles, or use an engineering-class Driver-in-the-Loop (DIL) simulator.
In previous articles we've discussed the integration and connectivity aspects of sensor HIL benches and human interaction (DIL) simulation environments. The value of testing sensors (either SIL or HIL) with DIL is directly related to the ability to close a simulation loop with a real person – someone who is placed into intimate contact with what can ultimately be very complex sensor and vehicle interactions. But now, rather than continuing to discuss the hows of accomplishing this technically, let's take a look at some practical whys.
Active and Passive
In some cases, testing a sensor in HIL in conjunction with DIL can better define logic flows involving sensor inputs. In cases where human interaction is present, a fundamental question may be whether or not the human interaction is integral, e.g., is there a feedback loop in play across the man-machine-sensor space? Other human-centric questions abound.
In some instances, sensor information might be used in a passive way, for example in a display function, or in the form of a proximity beep when reversing toward another object. In such cases, it is still the driver’s responsibility to take action, based on information that is ultimately derived from sensors. In these cases, information and actions will form a feedback loop with a responsiveness (gain) biased toward the human's ability to engage.
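To make the passive case concrete, here is a minimal, hypothetical sketch (not drawn from any specific production system) of a reversing-proximity alert: the sensor reading only modulates a beep rate, and it is the human driver who closes the loop by deciding to brake. The distance thresholds and timing values are illustrative assumptions.

```python
from typing import Optional

# Hypothetical sketch of a passive reversing-proximity alert.
# The sensor informs; the driver acts. Thresholds are illustrative.
def beep_interval_s(distance_m: float,
                    warn_m: float = 2.0,
                    critical_m: float = 0.3) -> Optional[float]:
    """Return seconds between beeps, or None when no alert is needed.

    The beep rate rises as the obstacle gets closer, but the system
    never intervenes -- responsiveness is biased toward the human.
    """
    if distance_m >= warn_m:
        return None  # obstacle out of warning range: stay silent
    if distance_m <= critical_m:
        return 0.0   # continuous tone: obstacle is critically close
    # Linearly scale the interval from rapid (0.1 s) near the critical
    # distance up to slow (1.0 s) near the warning distance.
    frac = (distance_m - critical_m) / (warn_m - critical_m)
    return 0.1 + 0.9 * frac
```

In a DIL experiment, a function like this would feed the simulator's audio cue system, letting engineers vary the thresholds and observe how real drivers respond.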
In other cases, multi-sourced sensor information can be used to inform driver assistance systems or Artificial Intelligence (AI) learning algorithms, such as those that are actively controlling a vehicle in a full Level 5 autonomy scenario, or augmenting the human piloting task. These systems must be active and agile – much faster in a feedback loop sense – and they must be robust, tested under a broader variety of conditions, especially failsafe events, such that sensor feed false positives/negatives do not cause errors or significant disruptions. In addition, particularly in the case of AI development work, it's important to avoid 'incorrect training' for deep learning systems because they can and will learn the wrong things! For example, compare AI system learning derived from stationary DIL simulator labs with that derived from fully dynamic DIL simulator labs.
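One common way active systems guard against sensor false positives is to gate an intervention behind persistent, cross-checked detections. The sketch below is a simplified, hypothetical illustration of that pattern (the class name, sensor pairing, and confirmation count are assumptions, not a reference design): an emergency action is only permitted when two independent sensors agree for several consecutive cycles.

```python
# Hypothetical sketch: debouncing an active intervention so that a
# single spurious sensor frame (a false positive) cannot trigger it.
class InterventionGate:
    def __init__(self, required_hits: int = 3):
        self.required_hits = required_hits  # consecutive agreeing cycles needed
        self.hits = 0

    def update(self, radar_detects: bool, camera_detects: bool) -> bool:
        """Return True only when both sensors agree for several cycles.

        Any disagreement resets the counter, so a one-frame glitch in
        either feed never causes a disruptive intervention.
        """
        if radar_detects and camera_detects:
            self.hits += 1
        else:
            self.hits = 0
        return self.hits >= self.required_hits
```

In an HIL-plus-DIL setup, exactly this kind of gating logic is what engineers would want to stress with injected false positives and negatives while a real driver is in the loop, to see whether the tuning feels safe rather than merely measuring that it is.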
But no matter how sensor information and the subsequent logic is implemented, it's impossible to know how real people might respond and behave until human participation is put into play. In complex and mixed-mode situations, sometimes the answer is not what one might expect. How might a person who is in a particular mood, is tired, is distracted by a mobile phone, or is interacting with other information and devices, respond to sensor-derived information and/or system interventions? How might fundamental perceptions of brand quality be conveyed across and around AI interventions and hand-overs? How might an AI driver deliver a pleasant, non-disruptive occupant experience in the face of even the most basic, everyday mobility situations?
Man and Machine
Much of the code controlling on-board systems these days is automatically generated, based on the results of thousands of off-line simulations. By the time the code is deployed it may indeed be robust, solid code, but unfortunately that's no guarantee that the resulting product will be accepted in the marketplace. The challenge is to get people in touch with these systems early and often, as they are being developed, rather than waiting for test drivers and evaluators to get behind the wheel of physical prototype cars and discover a problem – or worse still, waiting for customers in the field to push back against a flawed or unacceptable feature.
By pairing DIL with HIL simulations, vehicle development teams can at least have a fighting chance to make informed decisions at early program stages. The risk of a late, subjective rejection of any deployed system is magnified by the multitude and complexity of the systems going into the latest vehicles. With hundreds of ECUs on board, it's more important than ever to validate their interaction and cohesion – not just on a technical level, but from a human acceptance perspective – early in the development process. Through the use of DIL simulation, real drivers and occupants can actively participate within simulation experiments to evaluate the safety of new systems, to secure an up-front subjective sign-off, or simply to explore new ideas in a low-risk, low-cost way.
HIL and DIL simulation are but two aspects of the rapidly changing automotive landscape. To learn more about some of the new frontiers in vehicle development, download our FREE white paper, “Look Down the Road: Driving Simulator Technology & How Automotive Manufacturers will Benefit”.