Facebook is shutting down its facial recognition system over 'societal concerns'
- Facebook said Tuesday that it will shut down its facial recognition system.
- It will delete the face scans of one billion users and won't automatically recognise them in photos.
- The social network was one of several platforms that Clearview AI scraped to build a searchable database used by American police.
Facebook is shutting down its facial recognition system to limit its reliance on the technology, it said Tuesday.
In a press release, VP of artificial intelligence Jerome Pesenti said the social network needs to "weigh the positive use cases for facial recognition against growing societal concerns, especially as regulators have yet to provide clear rules."
The company, now known as Meta, said it will delete more than a billion people's "individual facial recognition templates," and the third of Facebook's daily users who have opted into the technology won't be automatically recognised in photos and videos, including in Facebook's Memories feature. The system is also used in a feature that automatically notifies users when they appear in tagged photos and videos.
The change will also affect Facebook's automatic alt text system, which generates text descriptions of photos and videos for blind and visually impaired users.
The change will occur over the coming weeks, the company said.
It's a significant decision for the tech giant, which launched its facial recognition system in 2010 to automatically identify users in photos and videos.
However, concern has since mounted over how facial recognition technology can be put to uses it was never intended for. The controversial startup Clearview AI is one of the most high-profile examples.
The company scraped social media platforms, including Facebook, to create a searchable facial recognition database that at least 600 law enforcement agencies used, The New York Times reported in early 2020. Clearview AI likely violated Facebook's policies by doing so, and critics said the software opened the door for racial bias.
Meta recently rolled out its Ray-Ban smart glasses, which aren't outfitted with facial recognition tech. However, Facebook Reality Labs VP and company veteran Andrew "Boz" Bosworth said earlier this year that the company would have added the feature had the public wanted it.
He said there were ethical concerns to consider, though, such as the glasses' always-on camera and microphone, which could be misused by "authority structures."
Facebook also created an internal app that let employees identify people by pointing their phone's camera at them, a source told Insider's Rob Price in late 2019. The company confirmed at the time that the app existed but said it worked only on some company employees and their friends who had opted into the platform's facial recognition system.
Kristen Martin, a professor of technology ethics at the University of Notre Dame, said the trove of biometric data represented a vulnerability for Meta that will now be reduced.
"Facebook's decision to shutdown their facial recognition system is a good example of trying to make product decisions that are good for the user and the company," they said in an email. "The move, however, is also a good example of regulatory pressure since their FR system has been the target of advocacy and regulators for over a decade. The volume of images Facebook had to maintain and secure was a constant vulnerability for Facebook - both in terms of cost but also in terms of trust."