Twitter has launched a new test on iOS that warns users when their tweet replies contain potentially offensive language.
When things get heated, you may say things you don't mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.
— Twitter Support (@TwitterSupport) May 5, 2020
Twitter says that if your reply includes words that are harmful, negative, or aggressive, a prompt will appear asking whether you want to revise your response before posting, giving you a chance to remove potentially offensive wording.
Here’s What It Will Look Like
This is similar to Instagram’s automated prompts for potentially offensive comments, which launched in July last year. Instagram later expanded the feature to cover offensive captions in December.
The feature has proven effective on Instagram, and it could have a similarly positive effect on tweet interactions. Facebook has also reported that misinterpretation is a key factor behind difficulties and arguments among its users, and it is therefore planning tools similar to Instagram’s comment warnings, which let users rephrase their wording to mitigate potentially harmful content.
Many online arguments could likely be prevented by giving people a moment to review their wording so it isn’t taken the wrong way. That is what Twitter is aiming for by prompting users to rethink their replies, and it complements Twitter’s broader efforts to improve platform engagement and protect users from abuse through tweets.