Increase use of technology to identify extremist and terrorism-related content: Google says video analysis models have found and assessed more than 50 per cent of the terrorism-related content it has removed over the past six months. Previously, ads from prominent brands had appeared beside extremist content on the platform.
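Such a system typically pairs a model score with confidence thresholds, sending high-confidence matches to automated action and borderline cases to human reviewers. Below is a minimal sketch of that triage logic; the model score, thresholds and field names are illustrative assumptions, not Google's published pipeline.

```python
# Hypothetical triage logic for classifier-based content review.
# Scores and thresholds are illustrative assumptions, not Google's pipeline.

from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.95   # assumed: high confidence -> automated action
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed: borderline -> human reviewer

@dataclass
class Video:
    video_id: str
    extremism_score: float  # output of a (hypothetical) video analysis model

def triage(video: Video) -> str:
    """Route a video based on the model's confidence score."""
    if video.extremism_score >= AUTO_REMOVE_THRESHOLD:
        return "remove"
    if video.extremism_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"
    return "no_action"

print(triage(Video("abc123", 0.97)))  # -> remove
print(triage(Video("def456", 0.70)))  # -> human_review
```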
Google general counsel Kent Walker added: "The uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done".
Walker noted that this can be a hard area to navigate; for example, he said that "a video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user". Such borderline videos will now "appear behind a warning" and will not be "monetized, recommended or eligible for comments or user endorsements".
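In engineering terms, this "limited state" amounts to a set of policy flags applied to a borderline video instead of outright removal. A minimal sketch of how such flags might be modeled, with field names that are my own assumptions rather than YouTube's actual schema:

```python
# Illustrative model of a "limited state" for borderline videos.
# Field names are assumptions for illustration, not YouTube's schema.

from dataclasses import dataclass

@dataclass(frozen=True)
class VideoPolicy:
    show_warning_interstitial: bool
    monetized: bool
    recommendable: bool
    comments_enabled: bool
    likes_enabled: bool  # "user endorsements"

NORMAL = VideoPolicy(True, True, True, True, True)

# Per Walker: behind a warning; not monetized, recommended,
# or eligible for comments or user endorsements.
LIMITED = VideoPolicy(
    show_warning_interstitial=True,
    monetized=False,
    recommendable=False,
    comments_enabled=False,
    likes_enabled=False,
)

def policy_for(borderline: bool) -> VideoPolicy:
    """Choose the policy for a video judged borderline by reviewers."""
    return LIMITED if borderline else NORMAL
```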
Google will also devote more engineering resources and increase its use of technology to help identify extremist videos, in addition to training new content classifiers to identify and remove such content quickly. Not everyone is convinced the measures go far enough: "However, we feel the technology companies can and must go further and faster, especially in identifying and removing hateful content itself".
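Training such a classifier is, at its core, a supervised-learning problem: labeled examples of violating and benign content are used to fit a model that scores new uploads. Here is a toy sketch on text metadata; the data, features and library choice are illustrative assumptions, since Google's actual classifiers operate on video signals and are not public.

```python
# Toy supervised classifier for flagging policy-violating text metadata.
# Training data is fabricated for illustration; Google's models are not public.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny fabricated training set: 1 = violating, 0 = benign.
texts = [
    "join our fight destroy the unbelievers",  # violating (fabricated)
    "glory to the martyrs attack now",         # violating (fabricated)
    "cooking tutorial easy pasta recipe",      # benign
    "news report on yesterday's attack",       # benign (news context)
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new upload's metadata; higher = more likely violating.
print(model.predict_proba(["breaking news coverage of attack"])[0][1])
```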
More Independent Experts in YouTube's Trusted Flagger Program
While machines can help identify and remove extremist content from YouTube, the company realizes human experts still play a key role.
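One plausible way to combine machine flags with trusted human flaggers is to weight reports by the reporter's track record, so that flags from vetted experts jump the review queue. A sketch under that assumption; the weighting scheme is hypothetical, not YouTube's documented behavior.

```python
# Hypothetical review queue that prioritizes reports from trusted flaggers.
# The weighting scheme is an assumption, not YouTube's documented behavior.

import heapq

# Assumed weights: trusted flaggers have a history of accurate reports,
# so their flags are reviewed first.
REPORTER_WEIGHT = {"trusted_flagger": 3.0, "machine": 2.0, "ordinary_user": 1.0}

queue: list[tuple[float, str]] = []  # (negated priority, video_id) min-heap

def report(video_id: str, reporter_kind: str, model_score: float) -> None:
    priority = REPORTER_WEIGHT[reporter_kind] * model_score
    heapq.heappush(queue, (-priority, video_id))  # negate for max-priority pop

def next_for_review() -> str:
    return heapq.heappop(queue)[1]

report("vid1", "ordinary_user", 0.9)
report("vid2", "trusted_flagger", 0.6)
print(next_for_review())  # -> vid2: a trusted flag outranks a user report
```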
Laying out the new measures in an opinion piece for the Financial Times, Walker said the internet giant was working with "government, law enforcement and civil society groups to tackle the problem of violent extremism online".
Walker added that Google also plans to expand its efforts to fight online radicalization, something it already targets through programs such as Creators for Change, which promotes anti-hate voices on YouTube. The goal is to reach potential terror recruits and redirect them to anti-terror messages designed to change their minds.
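Mechanically, the redirect idea is simple: when a search query matches terms associated with extremist recruiting, surface curated counter-narrative content instead of, or alongside, organic results. A minimal sketch, with the watchlist and playlist ID invented for illustration; the matching logic here is an assumption, not the published method.

```python
# Minimal sketch of query redirection to counter-narrative content.
# The keyword watchlist and playlist ID are invented for illustration.

REDIRECT_TERMS = {"join jihad", "martyrdom videos"}       # assumed watchlist
COUNTER_NARRATIVE_PLAYLIST = "PL_counter_speech_demo"     # hypothetical ID

def search(query: str) -> str:
    """Return a counter-narrative playlist for flagged queries."""
    normalized = query.lower().strip()
    if any(term in normalized for term in REDIRECT_TERMS):
        return f"redirect:{COUNTER_NARRATIVE_PLAYLIST}"
    return f"organic_results:{normalized}"

print(search("how to join jihad"))  # -> redirect:PL_counter_speech_demo
```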
Google has also previously committed to working with other tech giants such as Facebook, Microsoft and Twitter to establish a global forum to tackle terrorism online. One anti-hate group, however, said the companies "have done little to counter the use of their platforms to spread hateful, false "information", from conspiracy theories accusing various minority groups of plotting against America to websites promoting Holocaust denial and false "facts" about Islam, LGBT people, women, Mexicans and others".
"It is a sweeping and complex challenge", Walker wrote. "We are committed to playing our part". Last week, Facebook also announced a two-pronged approach to fighting terrorist content, using both artificial intelligence and human experts.
As with Facebook's anti-terror plan, AI alone is not relied on to remove terror-related videos, since footage of a terrorist attack may in fact be informative news reporting; human judgment remains part of the process.