In May of 2016, a Tesla Model S driver named Joshua Brown was killed in an accident while using the automaker’s “Autopilot” feature, a semi-autonomous driver assistance technology designed as a safety feature to keep drivers in their lanes.
Reports about the investigation have raised new concerns about semi-autonomous tech because it appears Mr. Brown spent most of his fateful drive with his hands off the wheel – despite the fact that Autopilot is designed to be used with the driver engaged. In fact, he ignored repeated warnings when sensors determined his hands were absent from the steering wheel.
Fortune’s Autos section is reporting that the National Highway Traffic Safety Administration (NHTSA), the lead agency for regulating self-driving cars, “does not test or preapprove driver assistance systems before automakers install them. Instead, the agency responds to complaints or crashes when it investigates whether a potential defect poses an unreasonable risk to driver safety.”
The onus, therefore, is on automakers to ensure that these types of systems are being used appropriately and, fortunately, many are stepping up to make sure of it. Fortune points to GM’s Super Cruise driver assistance system, which uses facial recognition technology to determine whether a driver is paying attention. If the driver isn’t, and doesn’t respond to repeated warnings, the car is brought to a controlled stop. Likewise, Audi’s system handles steering and braking up to 40 miles per hour, but requires the driver to “check in” with the wheel every 15 seconds.
For its part, Tesla has updated its Autopilot feature since the crash: drivers who repeatedly ignore safety warnings risk having Autopilot disabled until the next time they start the car.
For Thomasnet.com, I’m Anna Wells and this is Your Industrial Daily.