Why Autonomous Vehicles (AVs) should NOT scare you (hint: humans are horrendous drivers)
After working directly in the autonomous vehicle industry for the past year, I believe I have an insider's look at what is coming to your streets sooner than you think. I also believe one of the biggest hurdles the autonomous vehicle industry faces, and will continue to face, is public misconception and doubt due to a lack of public education. Generally speaking, autonomous vehicles sit on the negative side of today's public opinion. Many people struggle with the idea of trusting a "robot" with human lives, and with the ability of these vehicles to remain safe in all driving conditions.
While gaining public trust will only come with years of research and development, millions of test miles driven, and the public simply seeing autonomous vehicles on the road, the reality is that autonomous vehicles will be far safer than human-driven vehicles. Autonomous vehicles are not prone to human driving error or distraction, they can react significantly faster than any human physically can, and, well, they don't drink alcohol. To put the atrocity that is human driving into numerical perspective: the U.S. averages 6 million car accidents a year! In 2016 alone, there were over 40,000 traffic deaths in the U.S. and roughly 4.6 million people seriously injured on the road. Studies from around the world over the last 30 years have concluded that human error and behavior account for over 90% of vehicle crashes. 90%!! To make it even worse, road crashes cost over $500 billion USD globally each year.
No, autonomous vehicles won't be perfect. But I can tell you that if we deployed our test-level AVs tomorrow, they wouldn't come close to causing the 5.4 million accidents a year that human error currently accounts for (90% of those 6 million).
I am limited (by non-disclosure agreements) in the specific details I can give you about current AVs' technology, specs, and performance. But I can tell you I've been in an AV many times, and I've felt safer than I do with any human behind the wheel. They've been driving around my city for the past two years, and there has yet to be an accident caused by AV error. The accidents will come eventually, as no human-created machine will be perfect. But I firmly believe that when fully deployed, AVs will reduce traffic deaths by tens of thousands and traffic accidents by millions, not to mention saving hundreds of billions of dollars across the globe.
What happens when the car gets hacked and is used to cause mayhem? Can't hack into a carbureted engine.
Cybersecurity is already a billion-dollar industry, and it will only get bigger. The risk will always be there with anything related to computing, but companies will have no choice but to invest heavily to prevent hacking and limit liability. The simplest answer I can give is that AVs will always have an administrator override feature that allows the owner or company personnel to manually shut off and "lock" the car if it were hacked, similar to how your iPhone can be locked when someone tries to break into it.
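To make that override idea concrete, here is a minimal sketch of what an authenticated "lock" command might look like. Everything here is an illustrative assumption (the class name, the command format, the use of a shared HMAC key); it is not how any real AV stack works, and a production system would keep keys in secure hardware. The point is simply that the override channel itself must be authenticated, or it becomes the attack surface.

```python
# Hypothetical sketch of an administrator-override "kill switch" for an AV.
# All names and the key-handling scheme are illustrative assumptions.
import hashlib
import hmac

SECRET_KEY = b"fleet-admin-secret"  # in practice: per-vehicle keys in secure hardware


class VehicleController:
    def __init__(self):
        self.locked = False

    def _valid_signature(self, command: bytes, signature: bytes) -> bool:
        # Verify the command was signed by someone holding the admin key.
        expected = hmac.new(SECRET_KEY, command, hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

    def remote_lock(self, command: bytes, signature: bytes) -> bool:
        """Shut down and lock the vehicle, but only for an authenticated admin."""
        if not self._valid_signature(command, signature):
            return False  # reject unauthenticated lock attempts
        self.locked = True
        return True


# Usage: an admin who holds the key can lock the car; an attacker cannot.
car = VehicleController()
sig = hmac.new(SECRET_KEY, b"LOCK", hashlib.sha256).digest()
car.remote_lock(b"LOCK", sig)     # True: vehicle locked
car.remote_lock(b"LOCK", b"bad")  # False: rejected
```

The design choice worth noting is `hmac.compare_digest`, which compares signatures in constant time so an attacker can't recover the key byte-by-byte from timing differences.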
Seems like this will be a generational thing. If the upcoming generations begin their adulthood with AVs as a part of their lives, it'll just be second nature to them, and they'll be everywhere in the future. Especially once the older generations die off. (Upvoted and followed. Nice article!)
I agree completely. The hard part will be convincing the older generation (lawmakers included) that the technology is ready now whether they trust these "robot things" or not. (But thank you!)
Really great read and a well-written article! There are so many sides to this issue that still need to be fleshed out, but ultimately autonomous cars will save millions of lives each year. As a computer science major, I've heard many of my professors speak about a huge obstacle that has to be overcome before autonomous vehicles are adopted by the masses: the countless moral dilemmas that software engineers face when coding the cars. For example, take a rare case where an autonomous vehicle's brakes go out and the car can either crash into another car, saving you (the driver) but killing a family of five inside the other car, or crash into a wall, killing only you but saving the family. Which would you choose? These situations are all essentially variations of the classic "trolley problem." There are a million different situations and variables that play into these no-win scenarios, and it is ultimately up to the programmers to decide how the car will react.
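The uncomfortable part of "it is up to the programmers" is that any decision logic, however sophisticated, ultimately encodes some ranking of outcomes. The toy sketch below makes that visible: the option names, the harm scores, and the idea of a single scalar "expected harm" are all my own illustrative assumptions, and no real AV is known to use anything this crude. It only shows that choosing a maneuver at all means a policy has been encoded, implicitly or explicitly.

```python
# Toy illustration only: a cost-minimizing "trolley problem" policy.
# The harm scores and scenario are invented for this example.

def choose_maneuver(options):
    """Pick the option with the lowest expected harm score."""
    return min(options, key=lambda opt: opt["expected_harm"])


# The brake-failure scenario from the comment above, as invented numbers:
options = [
    {"name": "swerve_into_wall", "expected_harm": 1.0},  # risks the occupant
    {"name": "continue_ahead",   "expected_harm": 5.0},  # risks the family of 5
]
print(choose_maneuver(options)["name"])  # swerve_into_wall
```

Note that whoever assigns those harm scores has already answered the moral question, which is exactly the point the trolley-problem debate is making.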
These dilemmas also raise the question of who is liable in these situations: the programmer, the car manufacturer, or someone else? One (infeasible) way of solving the problem would be to let the driver select certain settings before the car hits the road, placing the blame on the driver. However, studies have shown that this would turn most cars into aggressive, self-preserving machines (they would mostly choose to sacrifice the other car's occupants), since people think differently when it is their own life on the line and not someone else's.
MIT has been doing extensive research into the subject and wrote a great article summarizing the problem. The article summed up: "Therein lies the paradox. People are in favor of cars that sacrifice the occupant to save other lives—as long as they don't have to drive one themselves... If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation." (https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/)
Overall, companies manufacturing these autonomous cars need to be open and honest about how the cars will react before these situations arise in real life, or the adoption of self-driving cars will be greatly delayed. These issues cause people to lose track of the bigger picture: these cars will actually save millions of lives by making everyday driving much safer. Do you have any insights on the issue from working within one of these companies? If it isn't covered by the NDA, what are your company's current policies regarding these situations?
That situation has been the topic of many talks and debriefs since I have been here, but unfortunately I cannot say anything about our policies that is not public information. I can say we haven't had any incidents yet in which we saw this firsthand, so I am not sure how common this situation will actually be, although I understand it is one of the biggest hurdles AVs will face. I can also say our AVs are generally programmed to be more passive in any situation than a human driver would be. This is not always good, but I feel this is how AVs will be tested until the technology matures.