Home Secretary Amber Rudd could be forgiven for walking with a certain spring in her stride when she arrived in San Francisco earlier this week to meet members of the digital technology community to discuss ways and means of blocking Jihadist content – particularly video – from social media platforms and other internet sites. Tech companies are accustomed to being lectured by governments around the world about their responsibility to take down offensive or criminal material, but in this case, the Home Secretary was offering a tool that she claimed could identify and remove 94% of illegal content posted by Isis and other Jihadist groups.
The tool – to be more precise, an algorithm – was developed by London-based artificial intelligence and analytics startup ASI Data Science, supported by £600,000 in Government funding. According to the Home Office, the software was tested on around one million pieces of content and identified illegal, ISIS-related material with an accuracy rate of 99.995%.
In a statement, Amber Rudd stressed the importance of countering online propaganda by extremist groups – material that is used to radicalise some while potentially being hugely upsetting to the wider public.
“The purpose of these videos is to incite violence in our communities, recruit people to their cause and attempt to spread fear in our society,” she said. “We know that technology like this can disrupt the action of terrorists as well as preventing people from ever being exposed to horrific images.”
The ASI software can be used on any social media platform, but it is not necessarily intended for the giants of the industry such as Facebook, YouTube or Twitter, all of which have been developing their own systems for dealing with dubious content. Instead, the tool will be offered to smaller sites that do not necessarily have the resources to moderate and edit the vast quantities of material posted every day by users.
And according to the Government, it is the smaller platforms that are increasingly being targeted by extremists. Citing its own research, the Home Office said on Monday that ‘Daesh’ (Islamic State) supporters are currently using as many as 400 online platforms to spread their propaganda. What’s more, the number of affected platforms continues to grow. Between July 2017 and the end of the year, ISIS posted on 145 platforms that had not previously been used. As the government sees it, this proliferation of communication channels illustrates the importance of technology that can be easily deployed across multiple platforms to identify and take down illegal content.
But if the offer of easily deployable software sounds benign, there is a sting in the tail. Amber Rudd made it clear that if digital companies don’t commit to keeping their platforms propaganda-free, the next step may well be legislation.
How It Works
The problem facing any organisation seeking to automate the process of moderating online content is that an innocent piece of content – say a video – might share at least some of the characteristics of extremist propaganda.
For instance, as John Gibson, head of data science at ASI, explained in a company video posted to coincide with the Home Office announcement, a news report on Islamic State’s activities in Iraq might well show uniformed fighters brandishing guns and waving flags. It may even include scenes showing victims of ISIS attacks. A propaganda video by ISIS will have many of the same elements. So if a software tool is programmed simply to identify certain signifiers, such as flags and weapons, it may end up taking down content created by the likes of the BBC and CNN.
That’s where artificial intelligence and analytics enter the picture. ASI’s software is designed to carry out an analysis that goes beyond superficial similarities. “The [software] model captures subtle signals that both videos are throwing off,” said Gibson, allowing it to distinguish the Daesh propaganda.
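ASI has not published the details of its model, but the general idea – that many weak signals, combined, can separate propaganda from news footage sharing the same superficial imagery – can be sketched with a toy linear classifier. Every feature name and weight below is invented purely for illustration; this is not ASI’s actual system.

```python
# Toy sketch: why single signifiers (flags, weapons) are not enough.
# A linear model combining many hypothetical weak signals can separate
# propaganda from a news report that shows the same imagery.
# All features and weights here are invented for illustration.

FEATURES = ["flags", "weapons", "branding_watermark", "nasheed_audio",
            "journalist_voiceover", "broadcaster_logo"]

# Hypothetical learned weights: signifiers shared by both classes carry
# little weight; production-context signals dominate the decision.
WEIGHTS = {"flags": 0.1, "weapons": 0.1, "branding_watermark": 2.0,
           "nasheed_audio": 1.5, "journalist_voiceover": -2.0,
           "broadcaster_logo": -2.5}
BIAS = -1.0

def propaganda_score(video):
    """Return a score for a video's feature dict; positive suggests propaganda."""
    return BIAS + sum(WEIGHTS[f] for f in FEATURES if video.get(f))

# Both clips contain flags and weapons; the context features differ.
news_report = {"flags": True, "weapons": True,
               "journalist_voiceover": True, "broadcaster_logo": True}
propaganda = {"flags": True, "weapons": True,
              "branding_watermark": True, "nasheed_audio": True}

print(propaganda_score(news_report))  # negative score: not flagged
print(propaganda_score(propaganda))   # positive score: flagged
```

In practice, systems of this kind learn such weights from labelled examples rather than having them hand-set, but the principle is the same: no single signifier decides the outcome.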
But the tool is controversial, not least because it is designed to capture video and make decisions as the content is being uploaded, rather than once it is already on the platform and ready to view. From a law enforcement perspective, this makes sense: it means the video will not be seen, or will be seen only for a very limited period. The potential downside is that the system represents a form of automated censorship. On larger platforms, even a 99% accuracy rate could mean that some legitimate videos are blocked, or their publication delayed while manual editors review the material.
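A rough back-of-envelope calculation shows why false positives matter at scale. Treating the Home Office’s quoted 99.995% accuracy figure as if it were a false-positive rate, and assuming an illustrative upload volume of one million videos a day (not a figure from the announcement):

```python
# Back-of-envelope illustration with assumed numbers: even a very low
# false-positive rate adds up on a large platform.
false_positive_rate = 1 - 0.99995   # i.e. 0.005%, from the quoted accuracy figure
uploads_per_day = 1_000_000         # assumed volume, for illustration only

wrongly_flagged = uploads_per_day * false_positive_rate
print(round(wrongly_flagged))  # → 50 legitimate videos flagged per day
```

Fifty wrongly blocked videos a day on a single large platform would be enough to require a standing team of human reviewers – which is precisely why the tool is pitched at smaller sites with lower volumes.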
Opportunities for Startups
Illegal propaganda videos represent just one facet of a growing social media problem – namely, when users provide the bulk of the content, how do platform owners stay ahead of the game when it comes to reviewing and removing dubious or illegal posts? In addition to ISIS videos, social media is also a distribution channel for material ranging from the illegal (child pornography, hate speech or images of violence) to the potentially socially damaging (such as the kind of fake or fabricated news that might be used to influence the outcome of elections or simply confuse the public).
All of this is potentially good news for startups working in the arena of data analysis and artificial intelligence. Machine learning technologies developed for commercial organisations can be adapted to build tools to ensure that the internet is a safe space. Indeed, in addition to taking government money, ASI Data Science has worked for clients such as EasyJet (on an algorithm to help the airline predict sandwich sales) and the London Irish rugby club (on a project aimed at identifying high-performing players available at bargain prices).
Some startups are focused entirely on online content. For instance, London’s FactMata is developing a natural language processing system coupled with machine learning to enable internet users to identify fake or fabricated news. Meanwhile, Athens-based startup Fighthoax.com claims that its AI-powered reviewing system boasts an 84% accuracy rate in spotting inaccurate online content, and it has already held talks with the European Commission on how the system could be launched. Both companies are focused on educating users by rating stories rather than taking content down.
Back in the security arena, the Home Office is urging technology companies to see its new tool as a spur to do more to tackle radicalising content.
“Over the last year we have been engaging with internet companies to make sure that their platforms are not being abused by terrorists and their supporters,” said Amber Rudd.
“I have been impressed with their work so far following the launch of the Global Internet Forum to Counter-Terrorism, although there is still more to do, and I hope this new technology the Home Office has helped develop can support others to go further and faster.”