Self-driving Algorithms Are Baffled by Snowflakes, Shadows, and Graffitied Signs

Self-driving cars have encountered a number of issues as the first pilot programs have launched in cities across North America. Unfortunately, some of those problems have led to fatalities, often the kind that might have been avoided if a human driver had been in control. Sensors have failed and misinterpreted signals, operating systems have made the wrong decisions, pedestrians have been struck, and cars have caught fire.

Since human error causes the majority of accidents, self-driving cars could certainly reduce fatalities on our streets. But as companies rush their products through testing and ignore red flags to beat their competitors, autonomous vehicles remain less safe than their manufacturers would have us believe. Indeed, after careful testing, a spokesperson for the Insurance Institute for Highway Safety said, “none of these vehicles is capable of driving safely on its own. A production autonomous vehicle that can go anywhere, anytime isn’t available at your local car dealer and won’t be for quite some time.”

While self-driving cars may already perform reliably in some controlled environments, such as gated communities, they have demonstrated, time and again, that they are incapable of interpreting a variety of signals and phenomena in less controlled scenarios. Put simply, algorithms can only go so far, and real-world environments can prove too much for current autonomous driving systems.

One of the elements self-driving cars fail to “understand” is slightly altered traffic signs. Researchers tested autonomous cars’ responses to stop signs featuring graffiti and found the vehicles mistakenly interpreted them as speed limit signs over 60 percent of the time. When the alterations were more significant, self-driving cars failed the test every single time.

But even in a graffiti-free town, autonomous cars can easily run into trouble. For example, a simple raindrop or snowflake can puzzle their sensors. Experts believe cold-weather conditions will continue to pose a challenge for these vehicles for many years to come.

The same is true of birds, which can cause vehicles to stop short, and certain sound-dampening materials, which can make it harder for some sensors to function as expected. Even something as insignificant as the shadow of a tree can confuse the algorithms behind self-driving technology. In short, these vehicles will continue to cause trouble, and possibly fatal accidents, if companies keep prioritizing financial prospects over safety.

John Uustal is a Ft. Lauderdale trial lawyer with a national law practice focused on serious injuries resulting from dangerous and poorly designed products. His upcoming book Corporate Serial Killers focuses on companies that choose profits over safety.

To Connect with John:  [hidden email].


