Inside Facebook's suicide algorithm: Here's how the company uses artificial intelligence to predict your mental state from your posts
- Facebook is scanning nearly every post on the platform in an attempt to assess suicide risk.
- Privacy experts say Facebook's failure to get affirmative consent from users for the program presents privacy risks that could lead to exposure or worse.
In March 2017, Facebook launched an ambitious project to prevent suicide with artificial intelligence.
The effort, launched after a string of suicides were live-streamed on the platform, sought to proactively address a serious problem by using an algorithm to detect signs of potential self-harm.
But more than a year later, after a wave of privacy scandals called Facebook's data practices into question, the idea of Facebook creating and storing actionable mental health data without user consent has numerous privacy experts worried. The question they raise is whether Facebook can be trusted to make and store inferences about the most intimate details of our minds.
Facebook is creating new health information about users, but it isn't held to the same privacy standard as healthcare providers
The algorithm touches nearly every post on Facebook, rating each piece of content on a scale from zero to one, with one representing the highest likelihood of "imminent harm," according to a Facebook representative.
That data creation process alone raises concern for Natasha Duarte, a policy analyst at the Center for Democracy and Technology.
"I think this should be considered sensitive health information," she said. "Anyone who is collecting this type of information or who is making these types of inferences about people should be considering it as sensitive health information and treating it really sensitively as such."
Facebook hasn't been transparent about the privacy protocols surrounding the data around suicide that it creates. A Facebook representative told Business Insider that suicide risk scores that are too low to merit review or escalation are stored for 30 days before being deleted, but Facebook did not respond when asked how long and in what form data about higher suicide risk scores and subsequent interventions are stored.
Facebook would not elaborate on why data was being kept if no escalation was made.
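Facebook has not published how its system is implemented; purely as an illustration of the triage-and-retention flow the company describes, the logic might be sketched as follows (the threshold value and all field names are assumptions, not Facebook's actual parameters):

```python
from datetime import datetime, timedelta, timezone

REVIEW_THRESHOLD = 0.8                    # hypothetical cutoff; Facebook has not disclosed one
LOW_SCORE_RETENTION = timedelta(days=30)  # per Facebook: low scores are kept 30 days, then deleted

def triage(post_id: str, score: float, scored_at: datetime) -> dict:
    """Route a risk score the way the reported pipeline works.

    Scores range from 0.0 to 1.0, with 1.0 representing the highest
    likelihood of "imminent harm".
    """
    if score >= REVIEW_THRESHOLD:
        # High scores are escalated to human content moderators.
        return {"post_id": post_id, "action": "send_to_review", "score": score}
    # Scores too low to merit review are stored for 30 days before deletion.
    return {
        "post_id": post_id,
        "action": "store_then_delete",
        "delete_after": scored_at + LOW_SCORE_RETENTION,
    }
```

How long higher-scoring posts and the resulting intervention records are retained is exactly the detail Facebook declined to specify.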
Could Facebook's next big data breach include your mental health data?
The risks of storing such sensitive information are high without proper protection and foresight, according to privacy experts.
The clearest risk is the information's susceptibility to a data breach.
"It's not a question of if they get hacked, it's a question of when," said Matthew Erickson of the consumer privacy group the Digital Privacy Alliance.
In September, Facebook revealed that a large-scale data breach had exposed the profiles of around 30 million people. For 400,000 of those, posts and photos were left open. Facebook would not comment on whether or not data from its suicide prevention algorithm had ever been the subject of a data breach.
Following the public airing of data from the hack of married dating site Ashley Madison, the risk of holding such sensitive information is clear, according to Erickson: "Will someone be able to Google your mental health information from Facebook the next time you go for a job interview?"
Dr. Dan Reidenberg, a US suicide prevention expert who helped Facebook launch its suicide prevention program, acknowledged the risks of creating and holding such data, remarking that it is hard to "pick a company that hasn't had a data breach anymore."
But Reidenberg said the danger lies more in stigma against mental health issues. Reidenberg argues that discrimination against mental illness is barred by the Americans with Disabilities Act, making the worst potential outcomes addressable in court.
Who gets to see mental health information at Facebook
Once a post is flagged for potential suicide risk, it's sent to Facebook's team of content moderators. Facebook would not go into specifics on the training content moderators receive around suicide, but insists that they are trained to screen posts accurately for potential suicide risk.
A 2017 Wall Street Journal review described Facebook's thousands of content moderators as mostly contract employees who experienced high turnover and received little training on how to cope with disturbing content. Facebook says that the initial content moderation team receives training on "content that is potentially admissive to Suicide, self-mutilation & eating disorders" and "identification of potential credible/imminent suicide threat" that has been developed by suicide experts.
Facebook said that during this initial stage of review, names are not attached to the posts that are reviewed, but Duarte said that de-identification of social media posts can be difficult to achieve.
"It's really hard to effectively de-identify peoples' posts. There can be a lot of context in a message that people post on social media that reveals who they are, even if their name isn't attached to it," she said.
If a post is flagged by an initial reviewer as containing information about a potential imminent risk, it is escalated to a team with more rapid response experience, according to Facebook, which said the specialised employees have backgrounds ranging from law enforcement to rape and suicide hotlines.
These more experienced employees have more access to information on the person whose post they're reviewing.
"I have encouraged Facebook to actually look at their profiles to look at a lot of different things around it to see if they can put it in context," Reidenberg said, insisting that adding context is one of the only ways to currently determine risk with accuracy at the moment. "The only way to get that is if we actually look at some of their history, and we look at some of their activities."
Why Facebook's suicide algorithm is banned in the EU
Facebook uses the suicide algorithm to scan posts in English, Spanish, Portuguese, and Arabic, but it does not scan posts in the European Union.
The prospect of using the algorithm in the EU was halted because of the area's special privacy protections under the General Data Protection Regulation (GDPR), which requires that users give websites specific consent to collect sensitive information, such as information about someone's mental health.
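The regional split described above amounts to a gate on whether a post is scanned at all. As a minimal sketch (the country list is abbreviated and the explicit-consent parameter is hypothetical; Facebook currently offers no such consent mechanism):

```python
EU_COUNTRIES = {"DE", "FR", "IE", "NL"}      # abbreviated list, illustration only
SCANNED_LANGUAGES = {"en", "es", "pt", "ar"}  # English, Spanish, Portuguese, Arabic

def should_scan(country_code: str, language: str, explicit_consent: bool = False) -> bool:
    """Return True if a post would be scanned for suicide risk.

    EU users are skipped unless they have given GDPR-style explicit
    consent, which Facebook does not currently collect.
    """
    if language not in SCANNED_LANGUAGES:
        return False
    if country_code in EU_COUNTRIES and not explicit_consent:
        return False
    return True
```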
In the US, Facebook views its program as a matter of responsibility.
Reidenberg described the sacrifice of privacy as one that medical professionals routinely face.
"Health professionals make a critical professional decision if they're at risk and then they will initiate active rescue," Reidenberg said. "The technology companies, Facebook included, are no different in that they have to determine whether or not to activate law enforcement to save someone."
But Duarte said a critical difference exists between emergency professionals and tech companies.
Privacy experts agreed that a better version of Facebook's program would require users to affirmatively opt in, or at least provide a way for users to opt out; currently, neither option is available.
Emily Cain, a Facebook policy communications representative, told INSIDER, "By using Facebook, you are opting into having your posts, comments, and videos (including FB Live) scanned for possible suicide risk."
Experts agree that the suicide algorithm has potential for good
Most experts in privacy and public health spoken to for this story agreed that Facebook's algorithm has the potential for good.
According to the World Health Organisation, nearly 800,000 people die by suicide every year, with teens and vulnerable populations such as LGBT and indigenous peoples disproportionately affected.
Facebook said that, by its calculation, the risk of invading privacy is worth it.
"When it comes to suicide prevention efforts, we strive to balance people's privacy and their safety," the company said in a statement. "While our efforts are not perfect, we have decided to err on the side of providing people who need help with resources as soon as possible. And we understand this is a sensitive issue so we have a number of privacy protections in place."
Kyle McGregor, Director of New York University School of Medicine's department of Pediatric Mental Health Ethics in the US, agreed with the calculation, saying "suicidality in teens especially is a fixable problem and we as adults have every responsibility to make sure that kids can get over the hump of this prime developmental period and go on to live happy, healthy lives. If we have the possibility to prevent one or two more suicides accurately and effectively, that's worth it."
If you are having thoughts of suicide, call LifeLine on 011 422 4242 or 0861 322 322. You can also reach the South African Depression and Anxiety Group on 0800 567 567.