
Scenario and Immersion Tools for DIL Simulators

If driving the Trans-Canada Highway in Newfoundland is not quite adventurous enough for you (and if the ripping wind does not force you to park your vehicle on the roadside), it might be worth your time to head for the northernmost extremes that can be reached, up Highway 430 out of Deer Lake.  It will take every bit of 5 hours to reach L'Anse aux Meadows, the spot where (as far as we know) humans first closed the loop on global exploration, where European Viking culture bumped into Native American culture sometime around the year 1000 CE.


There’s a sculpture up there called The Meeting of Two Worlds, and you’ll be encouraged to walk right under it when you hike the trail to the reconstructed “Vinland settlement” earth mounds.  If you’re like me, the sculpture leaves a bigger impression than the main attraction.  It is both flowing and fierce, effectively capturing the sense of colliding worlds.


Colliding Worlds

In the automotive realm there are numerous examples of colliding worlds, e.g., automotive design concessions between luxury and sporting intents, industry and regulatory balances, and even fundamental role reconciliations such as those resulting from the current morphing of automobiles from their role as mobility extenders into “connected things.”  In engineering simulation parlance, we can speak of the correlation between test measurements and predictive modeling, the differences between implicit and explicit multi-body dynamics solver strategies, and the benefits and drawbacks of scenario versus immersion style Driver-in-the-Loop (DIL) image generation approaches.  In all of these areas, there are compromises and trade-offs to consider.


Eye of the Beholder

If we focus on image generation for a moment (no pun intended), there are a few key points worth mentioning.  First and foremost is the fact that the human eye needs different information than an optical sensor.  This might seem somewhat obvious, but it leads to a number of simulation subtleties. 


For example, at this point in history there is a conceptual and practical difference between “scenario” style simulation graphics and “immersion” style simulation graphics.  The former delivers illustrated scenes that are typically library-based, scalable, easily modified by a user, and geo-typical.  The latter delivers photo-realistic rendered scenes that are typically fixed, closed to user modifications, and geo-specific.  Each approach has its merits, depending on use cases, and each requires different implementation strategies for DIL simulations.
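To make the contrast concrete, the two approaches can be caricatured as two very different kinds of scene description.  The sketch below is purely illustrative (the field names and asset paths are invented for this post, not taken from any vendor's actual format):

```python
# Purely illustrative scene descriptions -- hypothetical fields and
# asset names, not any real simulation package's file format.

# Scenario style: geo-typical world assembled from a library of
# reusable assets; scalable and easily modified by the user.
scenario_scene = {
    "style": "scenario",
    "world": "geo-typical",  # a generic two-lane rural road, for instance
    "assets": ["lib/road_2lane", "lib/oak_tree", "lib/sedan"],
    "user_editable": True,
}

# Immersion style: geo-specific, photo-realistic capture of one real
# place; fixed content, closed to user modification.
immersion_scene = {
    "style": "immersion",
    "world": "geo-specific",  # e.g. a scanned, real proving ground
    "assets": ["proving_ground_photogrammetry.bin"],  # hypothetical file
    "user_editable": False,
}
```

The asymmetry in the `user_editable` and `assets` fields is the whole trade-off in miniature: authoring freedom on one side, visual fidelity of a particular real place on the other.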


A common thought experiment is this:  If we can present compelling, realistic images to a human driver via an appropriately configured simulator projection system, what would happen if we co-located an optical sensor (LiDAR, radar, etc.) on the simulator cockpit such that it “sees” exactly what the human driver sees?  The answer is non-intuitive.


Optical sensors are objective, concerned primarily with discrete samples of object boundaries, distance-to-point information, material and lighting properties, and so on, and they are therefore insensitive to rendering “quality.”  So an image presented to an optical sensor need not be a visually realistic image at all; as long as it contains the fundamental information about the objects in the environment, the sensor is happy.  Human eyes, however, are highly subjective, concerned primarily with optical flow, apparent depth of field, and so on, and they are therefore highly sensitive to rendering “quality.”  So an image presented to the human eye does not necessarily need to be rich in terms of explicit object and scene definition, but it needs to be convincingly realistic.  In brief, for any given DIL simulation, what is pleasing to the eyes may not necessarily be the same as what is pleasing to the sensors (and/or physics simulation).
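The point about sensors being insensitive to rendering quality can be seen in a toy model.  Below is a minimal 2D LiDAR-style ray-cast sketch (invented for this post, not any real sensor model): the sensor's "world" is nothing but geometry, a list of circles, and its output is distance-to-hit per ray.  Textures, lighting, and image realism never enter the computation at all.

```python
import math

def lidar_scan(sensor_pos, obstacles, n_rays=8, max_range=100.0):
    """Cast n_rays evenly spaced rays from sensor_pos; return the
    distance to the nearest obstacle along each ray (max_range if none).

    obstacles: list of (cx, cy, r) circles -- pure geometry.  Note
    there is no rendering information here whatsoever: the sensor
    model only needs object boundaries and distances.
    """
    sx, sy = sensor_pos
    readings = []
    for i in range(n_rays):
        theta = 2.0 * math.pi * i / n_rays
        dx, dy = math.cos(theta), math.sin(theta)
        hit = max_range
        for cx, cy, r in obstacles:
            # Ray-circle intersection: solve |p + t*d - c|^2 = r^2
            # for t, keeping the nearest intersection in front of us.
            ox, oy = sx - cx, sy - cy
            b = ox * dx + oy * dy
            c = ox * ox + oy * oy - r * r
            disc = b * b - c
            if disc >= 0.0:
                t = -b - math.sqrt(disc)  # near-side intersection
                if 0.0 <= t < hit:
                    hit = t
        readings.append(hit)
    return readings

# A circle of radius 1 centered 10 m ahead: the ray pointing at it
# reads 9 m; rays pointing elsewhere read max_range.
scan = lidar_scan((0.0, 0.0), [(10.0, 0.0, 1.0)], n_rays=4)
```

A human-facing image generator solves the opposite problem: it can fake the geometry liberally, as long as the optical flow and apparent depth look right to the eye.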


Finding Balance

In order to have the “best of both worlds” in a DIL simulation, it is important to (1) define your use cases, and (2) deploy a DIL simulator architecture that is flexible enough to cover those use cases.  Advanced Driver Assistance System (ADAS) development, for example, may require DIL experiments with an emphasis on scene creation authority (scenario tools are a must here), whereas fundamental chassis development may require expert driver evaluations of particular manoeuvres on a familiar proving ground (immersion tools are a must here).  Both situations can, in fact, be achieved using the same engineering-class Driver-in-the-Loop simulator, if it is properly configured.


To learn more about how DIL simulation is influencing the vehicle development process, download our FREE white paper, “Look Down the Road: Driving Simulator Technology & How Automotive Manufacturers will Benefit”.
