YouTube and Facebook on Thursday reportedly committed to taking additional steps to remove violent content from their platforms as part of efforts to tackle online extremism. The Alphabet-owned video platform also said it would take steps to educate younger users on identifying misinformation and manipulation tactics. Microsoft, meanwhile, said it will offer schools and smaller organisations a cheaper version of a tool used to detect and prevent violence, according to a report. Internet firms have faced government scrutiny following the US Capitol attack of January 6, 2021.
At a White House summit on tackling violence fuelled by hate, YouTube stated that it would remove content that promoted or glorified acts of violence from the video streaming platform, and that the removal would take place even if the uploader was not a member of an extremist organisation, according to a report by Reuters.
Meanwhile, the report states that YouTube also announced it will educate younger users of the service about misinformation and ways to detect manipulated content online.
Meta-owned Facebook has partnered with researchers at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism and Counterterrorism, as per the report, while Microsoft will provide schools and smaller organisations with a cheaper version of the company’s AI and machine learning tools used to identify and prevent violence.
Earlier this month, it was reported that Parler, the social media application that has gained popularity with conservatives in the US, returned to the Google Play store more than a year after it was removed. The app had been pulled over its failure to moderate violent content that is believed to have contributed to the attack on the US Capitol by Donald Trump’s supporters on January 6, 2021.