Transportation: A 50-Year Overnight Transformation
Like the apocryphal frog in the gradually heating pot of water, the autonomous vehicle is something that has been sneaking up on us since the early 1970s. Actually, the idea of self-driving vehicles dates back to at least the 1950s, when General Motors envisioned its titanium-bodied, turbine-powered Firebird II concept driving down the road as the occupants relaxed. As with so many ideas of that period, the Firebird II was far ahead of its time.
The role of the engineer in society is to apply science and technology to develop solutions to the problems of the day. Paving the road to autonomy didn’t really get under way until the automotive engineers of the 1970s were able to begin harnessing the technological advancements of the 1960s space race.
The first tentative steps toward truly separating the driver from control of the vehicle came in 1978, when Mercedes-Benz launched a Bosch-designed electronic anti-lock braking system. Traditionally, when braking on a slippery surface, the driver had to sense an impending loss of control and manually modulate the pressure of their foot on the brake pedal. With sensors measuring the rotational speed of each wheel, an early electronic control unit determined whether one or more wheels were decelerating faster than the vehicle itself. If a wheel was determined to be locking prematurely, the ECU would fire solenoids capable of reducing the braking force at individual wheels, something impossible for a human driver to do.
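The per-wheel comparison described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not any real ECU's algorithm; the wheel radius, slip threshold, and function names are all assumptions made for the example.

```python
# Illustrative sketch of ABS wheel-lock detection: compare each wheel's
# speed against the vehicle's own speed and flag wheels that are slipping.
# All constants and names here are assumptions for illustration only.

WHEEL_RADIUS_M = 0.3    # assumed rolling radius of the tire
SLIP_THRESHOLD = 0.2    # assume a wheel is locking above ~20% slip

def wheel_slip(wheel_speed_rad_s: float, vehicle_speed_m_s: float) -> float:
    """Fraction by which a wheel lags the vehicle's actual speed."""
    if vehicle_speed_m_s <= 0.0:
        return 0.0
    wheel_speed_m_s = wheel_speed_rad_s * WHEEL_RADIUS_M
    return max(0.0, (vehicle_speed_m_s - wheel_speed_m_s) / vehicle_speed_m_s)

def solenoid_commands(wheel_speeds_rad_s, vehicle_speed_m_s):
    """True per wheel means: fire the solenoid to reduce brake pressure."""
    return [wheel_slip(w, vehicle_speed_m_s) > SLIP_THRESHOLD
            for w in wheel_speeds_rad_s]

# Front-left wheel decelerating much faster than the car at 10 m/s:
print(solenoid_commands([20.0, 33.0, 33.0, 33.0], 10.0))
# → [True, False, False, False]
```

The key point the sketch captures is that the intervention is per wheel: only the wheel that is actually locking has its brake pressure relieved, which no single brake pedal could ever do.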
Over the next twenty-five years, as Moore's Law drove down the cost and ramped up the performance of microprocessors, they quickly proliferated throughout the vehicle to manage everything from automatically locking doors to the powertrain to active and passive safety systems. That early ABS grew new functionality on a seemingly continual basis. After first reducing brake pressure to eliminate wheel lock, it started to apply the brakes to eliminate wheel spin for traction control. The addition of accelerometers and yaw, pitch, and roll sensors enabled electronic stability control to help mitigate spin-outs.
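The core of the stability-control idea is a comparison between the yaw rate the driver is asking for and the yaw rate the car is actually producing. A minimal sketch follows, using a textbook kinematic bicycle model for the driver's intent; the wheelbase, threshold, and braking responses are illustrative assumptions, not any manufacturer's calibration.

```python
# Sketch of the electronic stability control comparison: derive the
# driver-intended yaw rate from speed and steering angle, compare it to
# the measured yaw rate, and intervene when they disagree too much.
# Constants and responses are assumptions for illustration only.
import math

WHEELBASE_M = 2.7             # assumed wheelbase
YAW_ERROR_THRESHOLD = 0.1     # rad/s of disagreement before intervening

def intended_yaw_rate(speed_m_s: float, steer_angle_rad: float) -> float:
    """Driver-intended yaw rate from a simple kinematic bicycle model."""
    return speed_m_s * math.tan(steer_angle_rad) / WHEELBASE_M

def esc_action(speed_m_s, steer_angle_rad, measured_yaw_rad_s):
    intent = intended_yaw_rate(speed_m_s, steer_angle_rad)
    if abs(measured_yaw_rad_s - intent) < YAW_ERROR_THRESHOLD:
        return "no action"
    # Rotating faster than commanded -> oversteer; slower -> understeer.
    if abs(measured_yaw_rad_s) > abs(intent):
        return "brake outer front wheel"
    return "brake inner rear wheel"

print(esc_action(20.0, 0.05, 0.6))   # car rotating faster than commanded
```

As with ABS, the actuator is the same set of per-wheel brake solenoids; ESC simply reuses them under a different supervisory rule.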
Up to this point, all of the sensors were largely looking inward at what the vehicle was doing in response to driver commands, in an effort to match performance to the control inputs. The next stage was to begin looking outward. The addition of radar sensors to accurately measure the distance to the vehicle ahead enabled adaptive cruise control that could automatically apply the brakes if the leading vehicle slowed down. Additional radar sensors in the rear corners helped to virtually extend the driver's vision into blind spots, while cameras could watch lane markers to determine if the car was drifting and show what was directly behind the vehicle. Ultrasonic sensors provided short-range monitoring of the immediate vicinity to aid in parking and pedestrian protection.
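The distance-keeping behavior of adaptive cruise control can be sketched as a simple gap controller: hold a time-based following gap and brake when the radar reports the lead vehicle closing in. The time-gap target, gains, and comfort limits below are illustrative assumptions, not values from any production system.

```python
# Minimal sketch of adaptive cruise control distance-keeping: command an
# acceleration from the radar's range and closing speed, clamped to
# comfortable limits. All constants are illustrative assumptions.

TIME_GAP_S = 1.8     # assumed desired following gap, in seconds
GAP_GAIN = 0.5       # illustrative proportional gains
SPEED_GAIN = 1.0
MAX_ACCEL = 1.5      # m/s^2, comfort limit
MAX_BRAKE = -3.0     # m/s^2, comfort limit

def acc_accel_command(own_speed_m_s, radar_range_m, radar_closing_m_s):
    """Acceleration command in m/s^2; negative values mean braking.

    radar_closing_m_s is positive when the lead vehicle is slower than us.
    """
    desired_gap_m = own_speed_m_s * TIME_GAP_S
    gap_error_m = radar_range_m - desired_gap_m   # negative when too close
    raw = GAP_GAIN * gap_error_m / TIME_GAP_S - SPEED_GAIN * radar_closing_m_s
    return max(MAX_BRAKE, min(MAX_ACCEL, raw))

# At 30 m/s, 40 m behind a lead car closing at 5 m/s -> brake hard:
print(acc_accel_command(30.0, 40.0, 5.0))   # → -3.0 (clamped to comfort limit)
```

The sketch also shows why the next steps in the article follow naturally: once the radar, the brakes, and the throttle are networked through one controller, adding lane-keeping on top yields the hands-off highway systems described below.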
As engineers continue to strive toward the goal of eliminating traffic accidents and reducing energy consumption, new layers of sensing were added to enable specific functions. What were once standalone active safety systems have now been thoroughly networked together, and each new sensor also helps fill in the gaps in the picture of the surroundings drawn by the existing sensor suite. The result is far more comprehensive situational awareness for both the driver and the electronic control systems. In the coming years, that awareness will grow dramatically with the addition of vehicle-to-vehicle and vehicle-to-infrastructure communications.
The increasing fusion of the sensor and actuator suite is now enabling capabilities those 1950s GM engineers could only dream of. Over the next year, Tesla, GM and Toyota all plan to launch automated highway driving assist systems that meld adaptive cruise control with automatic lane following to provide a hands-off highway “driving” experience. Meanwhile almost every major automaker, supplier and even technology companies like Google are working toward the holy grail of completely self-driving vehicles.
However, like finding the fabled chalice, the goal of eliminating the human driver from the transportation ecosystem is likely to prove far more difficult and take far longer than everyone hopes. After more than four decades of developing automotive electronic control systems, we have a pretty solid understanding of the fundamentals.
Despite that, and the known flaws in human behavior, the human brain still has an ability to deal with nuance that is unmatched in the world of silicon. We may not see so well in the dark, but we can see objects through rain and snow, conditions our sensors are woefully unable to deal with. None of the automated driving systems in development can deal adequately with inclement weather. As a result, fully self-driving cars of the type envisioned by Google are likely to be restricted to specific locations that don't experience much weather variability.
Unless, as Tesla CEO Elon Musk recently speculated, "they may outlaw driven cars because they're too dangerous," automated vehicles will be coexisting with human-operated machines for decades to come. The automated vehicles we do get in the near-to-mid term will almost certainly still revert to human operation in many conditions. That means we have a lot of unanswered legal and ethical questions that will need to be resolved about responsibility when control is handed over or when hardware or software fails.
Transformation is never easy and the coming changes in our transportation system mean we will live in interesting times for years to come.