Cyber Security and Artificial Intelligence Forecasting: Short-Term Risk

Posted by: Alek Emery

Recent headlines surrounding cybersecurity incidents, like the Equifax breach, illustrate the increasing importance of data security and the potential harms resulting from security vulnerabilities in systems containing consumer information. It should come as no surprise, then, that the proliferation of artificial intelligence will likely play a crucial role in future cybersecurity developments. However, public understanding of how cybersecurity and artificial intelligence (AI) intersect is lacking, particularly regarding the ways AI is already creating, or will soon create, cybersecurity risks. In short, there are at least three compelling reasons to focus on the short-term risks posed by AI when considering what can be done to prevent future harms.

1. Many serious risks posed by AI do not require the development of new AI capabilities

As most readers will be aware, phishing scams are among the most common cybersecurity risks. Within that category are what are known as “spear-phishing” scams: more focused attacks designed to exploit specific targets, such as individual employees or consumers. Many of these attacks incorporate social engineering techniques to fool individuals into giving away sensitive information or installing malicious programs. And, as Doug Fodeman pointed out in a recent panel on social engineering, most attacks arrive via email.

So where does AI fit into all this? One simple way is by making the process of collecting information on individual targets for a spear-phishing scam faster and more effective. An AI could comb through company directories and identify information about individual employees available on the web at sites like Spokeo.com, helping a would-be attacker more quickly identify targets and the details that might be useful in manipulating them. Moreover, this is a capability that current AI systems could readily perform. While a malicious AI with superhuman capabilities could perhaps pose “existential” risks to mankind, as discussed by Roman Yampolskiy, it is important not to overlook a simpler fact: the security risk facing all companies and consumers is that of being tricked into letting an attacker in, and it does not take an incredibly advanced AI to make a would-be attacker that much more efficient and dangerous.

Activities such as this would already be covered by the Computer Fraud and Abuse Act (CFAA) as attempts to gain unauthorized access to a protected computer by fraudulent means, but cyber criminals are difficult to identify and prosecute. As AI enables more frequent and more effective cyber attacks, even with relatively simple AI systems, enforcing the law against cyber criminals will likely become increasingly difficult. Perhaps AI will create new security capabilities that are beneficial or that combat the harms of AI being used maliciously, as noted by Yampolskiy (a toy illustration of that defensive use appears below). Or perhaps the law will need to be changed to enable faster and more effective prosecution of cyber criminals. In any event, overlooking the short-term risks created by AI while worrying only about what advanced AIs that vastly exceed human capability may someday do is a serious problem facing the legal and regulatory institutions we currently have in place.
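To make the defensive point concrete, consider how little machinery such an application requires. The sketch below is my own illustration, not drawn from any product or research mentioned in this post: it trains a toy phishing-email classifier with off-the-shelf Python tools, and the handful of example messages and labels are invented purely for demonstration.

```python
# A minimal sketch (illustrative only): a toy phishing-email classifier
# built from off-the-shelf tools, showing how accessible defensive AI is.
# The training examples below are made up for demonstration purposes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled messages: 1 = phishing, 0 = legitimate
emails = [
    "Your account has been suspended, click here to verify your password",
    "Urgent: wire transfer needed, reply with your banking details",
    "Team lunch is moved to noon on Friday",
    "Attached are the meeting notes from yesterday",
]
labels = [1, 1, 0, 0]

# Turn raw text into word-frequency features, then fit a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Score a new message; in practice this would feed a mail-filtering pipeline.
print(model.predict_proba(["Please confirm your password to avoid suspension"]))
```

A real mail filter would need far more data and careful evaluation, but the point stands: the same accessibility that helps attackers also lowers the bar for defenders.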

2. AI-powered devices are becoming increasingly popular, and thus pose increased risk

Major corporations that market AI products, like IBM and its Watson services, have already identified consumer trust as one of the most important factors in producing commercially successful products. Devices like Amazon’s Alexa are already widely popular. And as people become accustomed to doing more than playing music or controlling the lighting in their homes with such devices, consumers will likely develop increased trust in them. Moreover, there is a connection between the convenience of such a device and how much trust and data a consumer gives it. For example, enabling the device to order products in response to voice requests may require it to access credit card or other personally identifying information. Connecting a device to your bank account could let you order products on demand or pay the bills with no more effort than telling the device which bill to pay. But the drawback to such convenience is the risk posed by any vulnerability.

This problem extends beyond AI-powered devices in the home to things like autonomous vehicles. As we hand over more and more control of our daily routines and activities, especially those that carry significant risks, such as operating a motor vehicle, the security and reliability of AI becomes that much more critical. Legislators have already introduced new provisions to the existing legal framework to try to enable research and development of new security technologies for Internet of Things (IoT) devices, but it is unclear whether these changes will be sufficient to cover the risks posed by AI in the near future.

Already, there are more IoT devices than there are people. They have simply become a part of daily life. And the tools required for producing AI are becoming more and more accessible. For example, the proliferation of machine learning tools like TensorFlow and the increased computing power made available by GPU developments have dramatically lowered the barriers to entry for creating AI systems, as the sketch below suggests. How the accessibility of AI will intersect with the security risks posed by increasing reliance on AI-powered connected devices is uncertain. But there is an important need for people to understand how these devices work and how to manage the risks associated with them.
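As a rough illustration of just how low that barrier now is, the following sketch (my own, using TensorFlow’s Keras API) defines and trains a complete, if useless, neural network in about a dozen lines. The data is random noise, included only to show that the tooling runs end to end on commodity hardware.

```python
# Minimal sketch of how low the barrier to entry has become: a complete,
# trainable neural network in a few lines of TensorFlow/Keras.
# The data is random noise, purely to show the tooling runs end to end.
import numpy as np
import tensorflow as tf

x = np.random.rand(256, 10).astype("float32")                    # toy inputs
y = np.random.randint(0, 2, size=(256, 1)).astype("float32")     # toy labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, verbose=0)   # trains in seconds on a laptop
```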

3. Traditional forms of Executive Agency oversight likely cannot address AI-development risks

As Matthew Scherer discussed in his 2016 article on the challenges of regulating AI within our current regulatory framework, the unpredictable nature of AI development and its potential to affect multiple industries and technologies may require the creation of entirely new regulatory schemes. However, regulating AI by focusing on the parties currently leading the industry, like Google and OpenAI, will be insufficient to address the near-term risks posed by malicious use of AI. For example, during the recent presidential election, simple AI systems called “chatbots” were employed to influence voters through social media. Given how simple and easily accessible these types of AI are, how can a government agency with limited resources hope to police them on the internet? Moreover, can corporations like Twitter and Facebook prevent these types of malicious AI use? The answer is unclear, both technologically and legally.

Chatbots provide a ready example of an AI technology that is already being employed maliciously, and they are relatively simple AI systems. There is no requirement that an AI be superintelligent or sophisticated for it to create security concerns. Having a chatbot interact with people on social media and try to elicit personal information from them is already commonplace. The problem really becomes one of enforcement: how can the internet be policed closely enough to prevent this from becoming more widespread? And how can the law help prevent future harms resulting from data breaches facilitated by information fraudulently obtained by chatbots?

These three reasons for addressing the short-term risks AI poses to cybersecurity suggest a need for new and flexible solutions to such a rapidly developing technology. More than anything else, public awareness of how connected people are to artificial intelligence, and of the risks that come with that connectivity, needs to be increased. AI is in our homes, our cars, and our phones, and it is even changing how subjects like mathematics are taught. The benefits of connectivity and AI are already being realized, but our awareness of the risks this creates, and the legal structures we have in place to manage those risks, need to play catch-up. The sooner the better.
