Facebook has received severe criticism over posts that instigate violence in some countries. For this reason, the social media giant announced on Wednesday that it will begin removing misinformation that leads to real-world violence.
Facebook product manager Tessa Lyons said in a statement: "We have identified that there is a type of misinformation that is shared in certain countries that can incite underlying tensions and lead to physical harm offline. We have a broader responsibility to not just reduce that type of content but remove it."
According to the New York Times, the expanded rules and policies apply to countries such as India, Myanmar, and Sri Lanka, where rapidly spreading rumors have led to the targeting of ethnic minorities. Facebook has been criticized many times for serving as a platform to spread hatred and extremism in society, giving rise to violence. The company has struggled to clear its image: it maintains that it supports free speech while also bearing a responsibility to keep such content from circulating on its site.
In an interview with CNBC, a Facebook spokesperson said in a statement:
"Reducing the distribution of misinformation—rather than removing it outright—strikes the right balance between free expression and a safe and authentic community. There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down. We will begin implementing the policy during the coming months."
In Myanmar, Facebook was blamed by the United Nations and human rights organizations for facilitating extremist and violent content that affected the Rohingya Muslim minority. In Sri Lanka, meanwhile, riots by Buddhists broke out against the Muslim community after false news circulated on social media claiming that Muslims were poisoning food given to Buddhists; the posts were removed after an investigation.
Under the new policy announced today, the company will review posts that could harm or mislead people in a particular scenario. Posts will be reviewed in partnership with intelligence agencies that can best analyze harmful content that provokes physical violence; manipulated images and text are also covered by the policy.
Once a post has been verified by these agencies and found to contain violent content, it will be removed along with other related content.