Dr Ping Zheng discusses new machine learning technology developed by the Home Office and an artificial intelligence firm to fight ISIS propaganda online.
Given the rapid recent progress in artificial intelligence (AI), it is not surprising that the Home Office and ASI Data Science have developed a machine learning tool to help fight the growing threat of ISIS terrorism.
Advanced machine learning systems that analyse online content for markers of terrorism have started to exceed the capability of people performing the same task. This is down to sheer processing power: such a system can analyse online text, video and audio simultaneously, comparing it against a continuously updated database of known terrorist content to spot newly created terrorism-related material.
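The Home Office has not published details of how the tool works, but the general approach described above, learning from a labelled dataset and flagging new uploads that score above a threshold, can be sketched in a few lines. The training examples, model choice and threshold below are illustrative assumptions, not a description of the actual system.

```python
# Illustrative sketch only: a simple text classifier that flags likely
# propaganda at upload time. This is NOT the Home Office/ASI system,
# just the general "compare new content against a labelled dataset" idea.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labelled dataset: 1 = known propaganda, 0 = benign content.
train_texts = [
    "known extremist recruitment message",
    "ordinary news report about local events",
    "transcript taken from a known propaganda video",
    "harmless product review",
]
train_labels = [1, 0, 1, 0]

# Turn raw text into TF-IDF features and fit a simple classifier.
vectoriser = TfidfVectorizer()
X_train = vectoriser.fit_transform(train_texts)
model = LogisticRegression()
model.fit(X_train, train_labels)

def flag_upload(text, threshold=0.9):
    """Return True if the upload should be held for human review.

    The 0.9 threshold is an arbitrary illustrative choice; a real
    deployment would tune it to balance detection rate against
    false positives.
    """
    score = model.predict_proba(vectoriser.transform([text]))[0, 1]
    return score >= threshold
```

A production system would of course work on video and audio as well as text, and would be trained on a far larger dataset, but the principle of scoring each upload against learned examples is the same.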
Tests have shown that this new tool can automatically detect 94% of ISIS propaganda with 99.995% accuracy. It can be used on any platform and integrated into the upload process, so that the majority of video propaganda is stopped before it ever reaches the internet.
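The reporting does not spell out exactly how "accuracy" is defined here, but if the 99.995% figure is read as the proportion of legitimate videos the tool leaves untouched (an assumption), the implication for a large platform is easy to work out:

```python
# Back-of-the-envelope illustration, assuming the 99.995% figure means
# that 0.005% of legitimate videos are wrongly flagged (an assumption;
# the published claim does not define "accuracy" precisely).
uploads = 1_000_000          # hypothetical number of legitimate videos uploaded
false_positive_rate = 1 - 0.99995
wrongly_flagged = uploads * false_positive_rate
print(f"Roughly {wrongly_flagged:.0f} of {uploads:,} legitimate videos "
      "would be sent for human review.")   # ~50

propaganda_items = 1_000     # hypothetical number of propaganda videos uploaded
detection_rate = 0.94
print(f"About {propaganda_items * detection_rate:.0f} of "
      f"{propaganda_items:,} propaganda videos would be caught automatically.")
```

On those assumptions, the tool catches the overwhelming majority of propaganda while sending only a tiny fraction of legitimate uploads to human reviewers, which is what makes building it into the upload process plausible at platform scale.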
In the future, as deep learning systems become more sophisticated, they could be used for online policing. Imagine a system capable of analysing real-time data sources, such as CCTV and social media conversations, and correlating that data with people's behaviour to estimate the probability of a threat and alert the authorities.
Machine learning has taken advantage of the growth in data volumes, sensors and processing power. It has reached a level where it can match, and in some cases surpass, human performance in areas such as autonomous vehicles, medical diagnostics, self-learning and gameplay. It is one of the key technologies underpinning the idea of an artificial intelligence explosion, also known as the technological singularity: a vision of how artificial intelligence and humanity may come to coexist.
For those unfamiliar with the term, the technological singularity is a critical paradigm shift, a fundamental change in civilisation that many believe is edging closer to reality: the point at which an artificial intelligence exceeds the capabilities of the human brain and can therefore reshape our technology landscape and our civilisation in unimaginable ways, for better or worse.
We are now closer than ever to the goal of developing human-like AI. Should we worry about an AI explosion while most of us are celebrating technological progress and enjoying the benefits of innovation?
Research into artificial intelligence is a fundamental prerequisite for a singularity event to occur, especially one in which a future artificial intelligence establishes itself. At present, this research is divided into specific areas including machine vision, perception, learning, logical decision-making and problem solving, and motion and manipulation. Slowly but surely, the automated systems we encounter, directly and indirectly, are becoming more powerful and are being given more control over outcomes.
Many scientists predict a dystopian future at the point of AI explosion, a bad singularity outcome in which AI enslaves mankind. The sci-fi film The Matrix perhaps showed us such a dystopian singularity, with AI ruling the world.
The question is how to prevent a bad singularity from happening. Many argue that the impact of AI depends on how we adopt and use it. Indeed, the need to think critically about the values embedded in AI has led to a number of very public initiatives coordinated by academic, government and industry experts. For example, in 2016 the EU drew up new policy guidelines to regulate future AI robots.
Navigating key future technology developments and their influence on a possible singularity is necessary to ensure that, if such an event occurs, it can be managed so that it significantly benefits humanity.
Dr Ping Zheng is a Senior Lecturer in the Christ Church Business School.