Having just returned from the Autonomous Vehicle Symposium 2015, I am at my desk, gathering a few thoughts and notes -- and travel receipts. The Symposium, a selection of technical presentations from the world’s leading automotive companies and suppliers, was held in Stuttgart, Germany on June 16-18 as a companion event to the long-standing Automotive Testing Expo. Judging by the headcount, delegate credentials, depth of information, and the consensus of those with whom I’ve spoken, the symposium was a resounding success, and I strongly suspect recurrences of this event and the emergence of other events much like it. “Autonomous vehicles,” as it turns out, is a very popular topic these days.

One of the more refreshing aspects of the symposium was that it was staged by and for automotive engineers, so the thrust of the presentations and discussions was the realization of the technology required for self-driving cars. This is in contrast to other viewpoints that might focus on the monetization, justification, social acceptance, or infrastructure requirements related to this technology. Although some interesting philosophical points did arise over the symposium’s three-day run, the information presented was fundamentally in response to “how” questions rather than “what,” “why,” or “when” questions.
One notable recurring technical discussion point was the issue of handover in autonomous vehicles; i.e., how to logically and safely transition control between a human driver and an artificial intelligence (AI) driver. Such situations would be commonplace, of course, in any practical realization of autonomous vehicles. One of the presenters, Dr. Jean-Baptiste Haué of Renault, captured the essence of this issue as follows: there must be a careful balance between monitoring, informing, and guiding any human driver in perilous situations; how else might one safely and efficiently explore this balance, except through simulations?
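To make the handover problem concrete, it can be framed as a small state machine: the AI requests a takeover, monitors whether the human has actually re-engaged, and falls back to a safe stop if the takeover budget expires. The following is a minimal illustrative sketch under my own assumptions -- the class, state names, and timing budget are all hypothetical, not any OEM's actual protocol:

```python
from enum import Enum, auto

class DriveMode(Enum):
    AI_CONTROL = auto()
    HANDOVER_PENDING = auto()   # AI has asked the human to take over
    HUMAN_CONTROL = auto()
    SAFE_STOP = auto()          # minimal-risk fallback if the human never responds

class HandoverController:
    """Toy handover state machine -- illustrative only, not a production protocol."""

    def __init__(self, takeover_budget_s=10.0):
        self.mode = DriveMode.AI_CONTROL
        self.takeover_budget_s = takeover_budget_s  # time the human has to respond
        self.elapsed_s = 0.0

    def request_handover(self):
        # AI decides it can no longer handle the situation and alerts the driver.
        if self.mode is DriveMode.AI_CONTROL:
            self.mode = DriveMode.HANDOVER_PENDING
            self.elapsed_s = 0.0

    def tick(self, dt_s, driver_hands_on, driver_eyes_on_road):
        # Called every control cycle with (hypothetical) driver-monitoring signals.
        if self.mode is DriveMode.HANDOVER_PENDING:
            self.elapsed_s += dt_s
            if driver_hands_on and driver_eyes_on_road:
                self.mode = DriveMode.HUMAN_CONTROL   # takeover confirmed
            elif self.elapsed_s >= self.takeover_budget_s:
                self.mode = DriveMode.SAFE_STOP       # budget exhausted
        return self.mode
```

Even in this toy form, the design questions Dr. Haué raised are visible: what signals count as "engaged," how long the takeover budget should be, and what the fallback behavior is -- exactly the parameters a simulation campaign would sweep.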
I will add that a vital subset of said simulations is that in which real people are put into direct contact with proposed vehicle control and sensor systems. Of course, I am speaking of Driver-in-the-Loop (DIL) simulations -- but not of the common variety that has been the standard to date in the automotive industry. The required class of DIL simulation for this level of vehicle development work is Human-and-Hardware-in-the-Loop capable, with enough fidelity and responsiveness to engage professional test drivers and sensors alike in highly dynamic vehicle control situations. This is a deep topic. And to be fair, the symposium brought out a number of other deep technical topics as well, any of which could, by itself, consume more than this limited space will allow. On the whole, it was fascinating.
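The defining constraint of such a simulator is its deterministic fixed-timestep loop: every cycle, driver inputs are sampled, the vehicle model is stepped, and cueing/sensor outputs are published, fast enough that the human perceives no lag. A minimal sketch of that loop follows; the 1 kHz step, the point-mass "vehicle model," and all function names are my own placeholder assumptions, not a real DIL architecture:

```python
DT = 0.001  # 1 kHz physics step -- placeholder order of magnitude, not a vendor spec

def read_driver_inputs():
    # On a real rig this samples the steering wheel and pedals (the "Human" in
    # the loop); fixed values here so the sketch is self-contained.
    return {"throttle": 0.2, "brake": 0.0}

def step_vehicle(state, inputs, dt):
    # Hypothetical point-mass longitudinal model standing in for a full
    # vehicle dynamics model.
    accel = 5.0 * inputs["throttle"] - 8.0 * inputs["brake"]
    speed = max(0.0, state["speed"] + accel * dt)
    return {"speed": speed}

def run(seconds):
    # Fixed-timestep simulation loop. A real-time rig would also pace each
    # iteration against a wall clock and publish cueing/sensor outputs
    # (the "Hardware" in the loop) every cycle.
    state = {"speed": 0.0}
    for _ in range(round(seconds / DT)):
        inputs = read_driver_inputs()
        state = step_vehicle(state, inputs, DT)
    return state
```

The point of the sketch is the structure, not the model: fidelity lives in `step_vehicle`, and responsiveness lives in how reliably the loop closes within its timestep.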
As I review my notes from the symposium and recall the conversations that I had, I sense a certain understated theme, one that I can share. Now, this is strictly from an engineering perspective, so please forgive my admittedly biased synopsis:
Implicit in the engineering pursuits related to autonomous vehicles are the following assumptions: (i) This technology can make cars safer, and (ii) This technology can make cars more manageable as individual units and as members of the traffic collective.
On the surface, these assumptions may seem rather benign, but it should be noted that they are nudging autonomous vehicle technical pursuits in two fundamental ways:
If the goal is to make cars safer and/or more manageable, all autonomous vehicle assessments will be predicated on the existence of both (i) and (ii) in an acceptably capable form. And then, just as is the case with any other currently deployed automotive system, the most important factor in assessments will become the quality of the interrogation protocols; i.e., the human-centric test and simulation procedures used to verify performance. As such, I think the following is a safe prediction, even if it at first seems counterintuitive: the rise of autonomous vehicles will result in a deeper understanding of how human drivers actually drive and why we like it.