Twitter has restored a feature that promoted suicide prevention hotlines and other safety resources to users seeking certain content, after coming under pressure from users and consumer safety groups.
The feature, known as #ThereIsHelp, placed a banner at the top of search results for certain topics, listing contacts for support organizations in many countries on issues including mental health, HIV, vaccines, child sexual exploitation, Covid-19, gender-based violence, natural disasters and freedom of expression.
Reuters reported on Friday that the feature had been removed this week. Citing two people familiar with the matter, the report said the removal was ordered by the social media platform’s owner, Elon Musk.
After the story was published, Twitter’s head of trust and safety, Ella Irwin, confirmed the removal but said it was temporary.
“We have been fixing and revamping our prompts. They were just temporarily removed while we do that,” Irwin said in an email to Reuters.
Musk later denied the feature had been removed and called the Reuters report “fake news”.
The report emerged at the start of the Christmas holidays, a difficult time for many, and caused widespread concern. The unnamed sources cited by Reuters said millions of people had encountered #ThereIsHelp prompts on Twitter.
Eirliani Abdul Rahman, a member of a recently disbanded Twitter content advisory group, told Reuters the disappearance of #ThereIsHelp was “extremely disconcerting and deeply disturbing”, even if the removal was made to allow for improvements.
“This is the worst time of year to remove the suicide prevention feature,” wrote Jane Manchun Wong, software developer and Twitter user. “Instead of leaving a time slot with no suicide prevention feature for an overhaul, they could have kept the old prompt and replaced it with a new one when it’s ready.”
Early on Saturday, Musk responded, tweeting: “1. The message is actually still in place. This is fake news. 2. Twitter does not prevent suicide.”
Online services such as Twitter, Google and Facebook have tried for years to direct users to resources such as government hotlines if they suspect a user is in danger.
Irwin said Twitter plans to adopt an approach used by Google, which, she said, “does very well with these in their search results, and [we] actually reflect some of their approach with the changes we’re making.
“We know these prompts are useful in many cases and we just want to make sure they work properly and stay relevant.”
Musk has said the number of views of harmful content on Twitter has declined since he took over in October. Before that, he said, “hardly anyone” on Twitter was working on child safety.
“I made it a top priority immediately,” he added.
But Musk has scaled back the teams that handle such material, and observers say self-harm content has surged despite a de facto ban.
Twitter introduced warning prompts about five years ago. Some were available in more than 30 countries, according to company tweets. In a blog post, Twitter said it was responsible for ensuring users can “access and receive support on our service when they need it most.”
Alex Goldenberg, senior intelligence analyst at the nonprofit Network Contagion Research Institute, said his group published research in August — before Musk took over Twitter — showing that monthly Twitter mentions of terms associated with self-harm had increased by more than 500% year on year, especially among younger users.
“If this decision is emblematic of a change in policy that they no longer take these issues seriously, that is extremely dangerous,” Goldenberg told Reuters. “This goes against Musk’s previous commitments to put the safety of children first.”