Alice Budisatrijo, Director of Product Policy at Facebook, said that while it is not appropriate for a company to decide whether information is true or false, it is imperative that people have access to reliable information.
According to the Indian site TOI, she said: "We do not believe that it is appropriate for an American technology company, or for anyone else, to determine what is right and what is wrong. Whether it is private companies, governments, or any single representative deciding what is right and wrong, this creates potential overreach and unhealthy power imbalances."
There can also be different degrees of truth, and people can have different opinions about what is right or wrong, Budisatrijo said. "However, we take our responsibility very seriously … We are serving more than two billion people around the world. So we know how important it is for people to have reliable information, and for us to remove harmful content."
Referring to the Facebook Community Standards as a "living document," Budisatrijo said these guidelines are constantly evolving to keep pace with changing online behaviors: "… as part of our response to the COVID pandemic, we have clarified our policy guidelines for applying our policy to harmful claims related to this global health emergency. This started in January last year, at the beginning of the pandemic, and has continued."
Budisatrijo explained that Facebook removes misinformation about COVID-19 that could contribute to physical harm, including false cures or treatments, as well as wrong information about the availability of essential services, such as hospitals and hospital beds, or the location and severity of an outbreak.
Budisatrijo noted that between March and October of last year, the social media platform, which works with a number of third-party fact-checkers, removed 12 million pieces of COVID-19 misinformation under these policies and applied warning labels to about 167 million pieces of content containing COVID-19 misinformation.
The company said that nearly 95 percent of people who saw warning labels on COVID-19 misinformation did not click through to the underlying content and were therefore not exposed to the misinformation, though the executive acknowledged that there was still room to strengthen these efforts further.
"Even with the combination of artificial intelligence and the human reviewers we have around the world, we can never guarantee 100 percent that content that violates our policies is not on the platform," she said.