As Artificial Intelligence and the Internet of Things develop, are we seeing a converging risk? Philip Ingram brings the views of Eugene Kaspersky and Professor Nick Bostrom together.
A Converging Risk: Artificial Intelligence and the Internet of Things
“I don’t see the Internet of Things as the Internet of Things, I see it as the Internet of Threats,” Eugene Kaspersky, the CEO of Kaspersky Lab told Philip Ingram from Security News Desk and SecurityMiddleEast.com in London last week. He went on to describe how he found it “curious to live in this world as I don’t know what will happen tomorrow” as he referred to how the threat and cyber criminality is likely to grow.
Professor Nick Bostrom, a world-renowned Artificial Intelligence (AI) expert who heads the Program on the Impacts of Future Technology at Oxford University, talked of what will happen tomorrow, saying, “AI could bring in some fundamental change like the agriculture revolution of 10,000 years ago and the more recent industrial revolution.”
Bostrom added that in a survey of the world’s leading AI experts, the conclusion was that “there was a very high percentage chance that in the next 15-25 years there will be AI machines with an equivalent computing capacity and ‘thinking’ capacity as the human brain.”
Of greater interest and concern, he hypothesised about how long it would take for AI to get smarter than humans, saying, “Once we get to human level of intelligence, the jump to super intelligence is likely to be very rapid.”
Bostrom questioned the ability to keep control of an AI super intelligence and stated that there was a “need to ensure it is built safe and has a value alignment as with human values.” Worryingly, however, when asked about securing AI systems, Bostrom stated that “it would be like securing any other computer system,” but went on to suggest that “the largest threat would be from someone exposing AI to a false reality thereby affecting any learning function and potentially disrupting value alignment.”
Kaspersky talked of judgement day for critical infrastructure and outlined the real vulnerabilities of the IT ecosystem. He avoided talking about AI and super intelligence and focused on the reality today: “The Internet of Threats.” Ingram suggests that securing this is, to borrow Bostrom’s words, “like securing any other computer system.”
“I was reading reports of a botnet exploiting 25,000 IP security cameras and the report suggested that the real number of infected cameras could be as many as 1.5 million. This, I think is a little too high and the real figure was probably more like 150,000,” Kaspersky said. “This shows the threat from any device connected to the internet,” he continued.
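Botnets of this kind typically recruit cameras and other connected devices that still run factory credentials. The sketch below is purely illustrative, with all device names, usernames, and passwords invented; Mirai-style malware carried a short list of default username/password pairs and tried them against every device it found online.

```python
# Illustrative sketch: why factory credentials make IoT devices easy botnet
# recruits. Every host, username, and password here is invented; real
# Mirai-style scanners carried a similar short list of factory defaults.

# Typical factory credential pairs of the kind such malware tries first.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "12345"),
    ("admin", "password"),
}

def vulnerable_devices(devices):
    """Return the hosts still using a factory username/password pair."""
    return [
        d["host"]
        for d in devices
        if (d["user"], d["password"]) in DEFAULT_CREDENTIALS
    ]

# Invented inventory of IP cameras; only the third had its password changed.
inventory = [
    {"host": "cam-01", "user": "admin", "password": "admin"},
    {"host": "cam-02", "user": "root", "password": "12345"},
    {"host": "cam-03", "user": "ops", "password": "s8#Qv2pL"},
]

print(vulnerable_devices(inventory))  # prints ['cam-01', 'cam-02']
```

In this toy inventory, the two cameras left on factory settings are exactly the ones a scanner would hijack, which is why changing default passwords removes a device from the easiest attack path.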
Bernard Marr, a best-selling author and keynote speaker, wrote in Forbes.com: “Future wars may not be fought on physical battlefields, but digital ones. One of the greatest fears of our governments is the scenario in which a regime, terrorist group or in fact anyone targets the networks themselves.”
“As more and more systems and infrastructure are built to be “smart” and include Internet of Things connectivity, the more we are putting those systems and infrastructure at risk. Imagine the chaos that would be caused by disrupting or disabling wireless communications or internet connectivity,” he added before suggesting that AI was being harnessed to potentially make attacking these sorts of targets easier.
In a different article published recently on LinkedIn, Marr predicted that the first three sectors to “benefit” from AI are likely to be healthcare, finance and insurance.
He said, “We’re all carrying the equivalent of Star Trek’s tricorder around in our pockets (or an early version, at any rate) and smartphones and other smart devices will continue to advance and integrate with AI and big data to allow individuals to self-diagnose. Sequencing of individual genomes and then comparing them to a vast database will allow doctors — and/or AI bots — to predict the probability that you will contract a particular disease and the best ways to treat those diseases when they appear.”
On finance, he added, “AI algorithms are already making stock market transactions in nanoseconds and making revenue predictions based on hundreds, if not thousands of data points.”
What is clear is that the rapidly growing, and compared with AI relatively simple, development of IP-connected devices is fraught with dangers, as criminal elements manipulate these devices to build huge botnets. Protection Group International (PGI) in a recent social media post referred to this as the Internet of Trouble.
The thought of AI with human-level intelligence on our not-too-distant horizon being subject to similar vulnerabilities, combined with our inability to protect current systems, does nothing to inspire confidence when a world-leading expert such as Professor Bostrom says securing it “would be like securing any other computer system,” something we are currently failing at. We have to hope that the developers of AI pursue its security with the same enthusiasm they bring to developing its “intelligence,” and that the cybercriminals don’t get there first.