Wojcicki said machine learning is helping YouTube's human reviewers remove almost five times as many videos as previously, and that today 98 per cent of the videos removed for violent extremism are flagged by machine-learning algorithms.
Apart from expanding its moderation workforce, the company plans to take punitive measures against inappropriate comments by introducing new comment-management tools and blocking mechanisms.
Google is responding to the latest scandal it finds itself in (paedophile comments on YouTube videos), revealing a plan to boost its video content moderation team to as many as 10,000 people.
Several advertisers, including Mars Inc., Adidas and Diageo, said they would pull their campaigns off YouTube in the aftermath, fearing the videos would attract paedophiles, according to the Wall Street Journal.
YouTube is also taking steps to reassure advertisers that their ads won't run next to objectionable videos. "Equally, we want to give creators confidence that their revenue won't be hurt by the actions of bad actors", Wojcicki said.
According to Wojcicki, YouTube has spent the past year "testing new systems to combat emerging and evolving threats" and investing in "powerful new machine learning technology", and is now ready to employ this expertise to tackle "problematic content".
"Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content", the CEO wrote in a blog post, adding that moderators have manually reviewed almost two million videos for violent extremist content since June, helping train machine-learning systems to identify similar footage in the future. "We are planning to apply stricter criteria, conduct more manual curation, while also significantly ramping up our team of ad reviewers to ensure ads are only running where they should", Wojcicki concluded.
She said adding more people to identify inappropriate content will provide more data to train and potentially improve its machine-learning software.