There’s an old adage that “the more you learn, the less you fear,” but the leaders of today’s most innovative tech companies don’t appear to agree.
For years, tech entrepreneur Elon Musk has been all but begging global regulatory agencies to put restrictions on artificial intelligence, calling the technology “more dangerous than nuclear weapons.”
Recently, Microsoft took its own surprising position on a technology the company itself helped expand: facial recognition.
Tech companies have long touted some of the broader benefits of facial recognition technology, including ways to support public safety or even track down missing people. But a darker side has privacy experts concerned over consent and how the technology could be open to hacking or illegal surveillance.
A blog post published by the company’s president, Brad Smith, identifies some of the more critical concerns and cites “the need for public regulation and corporate responsibility.”
In short, Smith argues that facial recognition technology creates a significant risk that the information it gathers could be abused or misinterpreted. He details several major points for review, including how and when the technology might be used to target criminal suspects, how to prevent its use in racial profiling, and whether minimum accuracy standards ought to be established.
To address the breadth of issues at hand, Smith suggests that the government form a bipartisan commission to help establish an ethical foundation on which the technology could be used. For Microsoft’s part, he says the tech sector should not be absolved of responsibility: “we at Microsoft and throughout the tech sector have a responsibility to ensure that this technology is human-centered and developed in a manner consistent with broadly held societal values.”
Image Credit: metamorworks/Shutterstock.com