
September 5, 2021

Twitter is stepping up its efforts to help users experiencing harassment protect themselves.

The social media company recently announced a test of a new feature called "Safety Mode," which aims to help users avoid being overwhelmed by harmful tweets and unwanted replies and mentions. The feature will temporarily block accounts from interacting with users they have targeted with harmful language or with repeated, uninvited replies or mentions.

"We want you to enjoy healthy conversations, so this test is one way we're limiting overwhelming and unwelcome interactions that can interrupt those conversations," Twitter (TWTR) said in a statement. "Our goal is to better protect the individual on the receiving end of Tweets by reducing the prevalence and visibility of harmful remarks."
Twitter has for years faced criticism for the frequent spread of abusive and hateful content on its platform — the impacts of which can sometimes extend into the offline world — especially content targeted at women and other marginalized groups.

The last time Twitter announced a major slate of new anti-harassment features was in 2017 when it launched tools such as a "safe search" function and the ability to block potentially abusive and "low-quality" tweets from appearing in conversations.

The company has recently been talking more about how to crack down on abusive behavior. In June, Twitter privacy designer Dominic Camozzi posted a thread detailing some early feature concepts the company was considering to prevent harassment, including the ability for a user to un-tag themselves from tweets and conversations and the ability to stop anyone from mentioning them for several days.
When a user turns Safety Mode on in settings, Twitter's systems will assess incoming tweets' "content and the relationship between the tweet author and replier." If Twitter's automated systems find that an account has engaged in repeated, harmful interactions with the user, that account will be blocked for seven days from following the user, viewing their tweets, or sending them direct messages.

Twitter spokesperson Tatiana Britt said the platform does not proactively send notifications letting people know they've been blocked. However, if the violator navigates to the user's page, they'll see that "Twitter auto blocked them" and that the user is in Safety Mode, she said.
The company says its technology takes existing relationships into account to avoid blocking accounts a user frequently interacts with, and that users can review and change blocking decisions at any time.

For now, Safety Mode is just a limited test, rolling out Wednesday to "a small feedback group" of English-language users on iOS, Android, and Twitter.com, including "people from marginalized communities and female journalists."
Analyst View

Social media platforms are rife with abuse, as faceless individuals hurl inflammatory remarks and intrude on other people's lives and conversations. Hostile replies and unsolicited messages follow people online and offline, terrorizing and stressing them and inflicting emotional damage. Stakeholders must address this disruption before it worsens further.