Facebook gives UK police body cameras to train AI systems that hunt down shooter clips

FACEBOOK is handing out body cameras to UK firearms police – to better detect shooter videos uploaded to the app.
The goal is to crack down on terrorists and violent groups who use Facebook to publicise their brutal attacks.
Earlier this year, twisted white supremacist Brenton Tarrant livestreamed a massacre at two mosques in New Zealand.
Some 51 people were slaughtered when the far-right fanatic gunned down worshippers.
The attack was filmed in first-person and uploaded to Facebook. The app's moderators were later criticised for not taking down footage quickly enough.
Now Facebook is hoping to get a clearer idea of what shootings look like, to help train its own detection systems.
Facebook will provide London's Metropolitan Police with body cams for use in firearms training centres.
The footage will give Facebook the "volume of images needed" to train its machine-learning tools.
Over time, the AI tools will learn to identify real-life first-person shooter incidents more quickly and accurately – and then remove them from the platform.
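In practice, this kind of detection is usually built as a supervised classifier: the body-cam footage supplies thousands of labelled examples of what a first-person shooting looks like, and the model learns to tell those frames apart from ordinary video. Facebook hasn't published its actual system, so the short PyTorch sketch below is purely illustrative – the model, the stand-in data and every parameter are assumptions, not the company's real pipeline.

```python
# Illustrative sketch only: a tiny frame-level classifier of the kind that
# could be trained on labelled body-cam footage. Nothing here reflects
# Facebook's actual architecture or data.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Scores a single video frame as shooter-incident (1) or benign (0)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 224 -> 112
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 112 -> 56
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                                # global pool
        )
        self.head = nn.Linear(32, 2)  # two classes: benign vs incident

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.head(x)

# Stand-in batch: in a real system these would be labelled frames from
# firearms-training footage (label 1) and everyday video (label 0).
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model = FrameClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(5):  # a few illustrative training steps
    optimizer.zero_grad()
    logits = model(frames)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```

A production system would work on whole video streams rather than single frames, and would need to flag incidents within seconds of a broadcast starting – which is exactly why the sheer volume of realistic training footage matters.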
"The technology Facebook is seeking to create could help identify firearms attacks in their early stages, and potentially assist police across the world in their response to such incidents," said Facebook's Stephanie McCourt.
"But we can't do it alone.
"This partnership with the Met Police will train our AI systems with the volume of data needed to identify these incidents.
"And we will remain committed to improving our detection abilities and keeping harmful content off Facebook."
Facebook regularly uses machine-learning systems to detect and cull dodgy content.
These AI-led tactics have helped Facebook ban more than 200 white supremacist organisations from the site.
And in the last two years, Facebook has removed more than 26 million pieces of content related to global terror groups, including ISIS and al-Qaeda.
According to Facebook, 99% of this was proactively identified and removed before ever being reported.
Now Facebook hopes to crack down even harder on live-streamed shootings.
"Since the CTIRU launched almost ten years ago, it’s been at the forefront in terms of working with internet service providers and social media companies to tackle terrorism online," said Neil Basu, the Assistant Commissioner for Specialist Operations.
"As a result of the unit’s relationship with Facebook, coupled with the world-renowned expertise of the Met Police Firearms Command, the Met has been invited to take part in this innovative project.
"The technology Facebook is seeking to create could help identify firearms attacks in their early stages and potentially assist police across the world in their response to such incidents.
"Technology that automatically stops live streaming of attacks once identified, would also significantly help prevent the glorification of such acts and the promotion of the toxic ideologies that drive them.
"We welcome such efforts to prevent terrorism and its glorification and are happy to help develop this technology."
After the Christchurch massacre, Facebook promised instant bans for rogue live-streamers under a new "one-strike policy".
Facebook CEO Mark Zuckerberg also admitted that the social network should be regulated after the attack.
Facebook is also under fire after it emerged WhatsApp had been hacked – allowing crooks to install spyware on users' phones.