Twitter has been testing ways to improve its tweet-reply warnings over the last few months, and yesterday it announced an update to the warning prompts it shows on replies it considers potentially offensive.
You had feedback about prompts to revise a reply so we made updates:
▪️ If you see a prompt, it’ll include more info on why you received it
▪️ We've improved how we consider the context of the conversation before showing a prompt
This is now testing on Android, iOS, and web. pic.twitter.com/rxdttI1zK2
— Twitter Support (@TwitterSupport) August 10, 2020
“If someone in the experiment Tweets a reply, our technology scans the text for the language we’ve determined may be harmful and may consider how the accounts have interacted previously.”
After the initial test, Twitter made the warnings stricter and added a more detailed explanation of why they appear. This helps users recognize when they are using offensive language, which should reduce the risk of causing offense.
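The flow Twitter describes above, scanning a reply's text for potentially harmful language and weighing how the accounts have previously interacted, could be sketched roughly as follows. This is a hypothetical illustration only: the term list, the interaction threshold, and the function name are all invented, and Twitter's actual system is far more sophisticated.

```python
# Hypothetical sketch of the prompt decision Twitter describes:
# flag a reply for harmful language, but temper that with the
# conversational context between the two accounts.
FLAGGED_TERMS = {"idiot", "stupid", "trash"}  # placeholder word list

def should_prompt(reply_text: str, prior_interactions: int) -> bool:
    """Return True if a 'revise your reply' prompt should be shown."""
    words = {w.strip(".,!?").lower() for w in reply_text.split()}
    has_flagged_language = bool(words & FLAGGED_TERMS)
    # Accounts that interact often get more benefit of the doubt,
    # reflecting the "context of the conversation" consideration.
    familiar = prior_interactions >= 5
    return has_flagged_language and not familiar
```

Under this sketch, the same wording might trigger a prompt between strangers but not between accounts that regularly banter with each other, which matches Twitter's stated goal of considering conversational context before showing a warning.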
Twitter's Main Motive
Understandably, some people see this as overstepping the mark – that Twitter is trying to control what you say and how you say it, infringing on free speech. However, it really isn't: the prompts, triggered by newly posted replies, simply aim to reduce confusion and offense by asking users to re-evaluate their wording. Instagram uses a similar system. If users are happy with their comment or reply, they can post it as normal.