Artificial Intelligence

Was Terminator Based on a True Story?

Much ink has been spilled over the topic of artificial intelligence. Movies and books often warn of the cold, calculating learning machine that decides it no longer needs its human makers. Just as common are the treatises from AI experts who comfort us by reminding us that AI can only do what it's programmed to do and that it poses no danger to human life.

The truth lies, as it so often does, somewhere between these extremes. Given our current understanding of learning intelligence, the programming that underpins AI isn't going to yield something like Skynet or HAL. The most likely complications will not involve whether AI poses a risk to human life, but whether it poses a risk to security: job security and digital security alike.

Job Security

The article you're reading right now was written by a human. In twenty years, however, it's not unreasonable to imagine an AI smart enough to write something similar to, perhaps even indistinguishable from, an article written by a human. In such a scenario, would companies that need writing done rely on humans, or would they turn to AI? An AI doesn't need to sleep, eat, or be paid, and it could produce orders of magnitude more content than any human creator.

As one might expect, this applies to nearly every field that depends on human intelligence. Accounting, logistics, surveying, even programming could all be handled by sufficiently intelligent AI. While robotic automation threatens jobs involving manual labor, AI threatens to automate even our thinking. Without legislation in place to hold companies accountable, huge swaths of the workforce could find themselves without a niche in the market.

Digital Security

AI poses another security threat: breaching databases and stealing personal information. An AI can only do what humans program it to do, but not all humans are motivated by goodness or charity. Some might employ AI to learn about digital security systems, perhaps those protecting international banking, and turn such a malicious learning intelligence loose on the internet to wreak havoc with personal information.

This has led to many calls for legislation limiting the types of AI that can legally be created. Some activists fear we may have already passed a tipping point: once sufficiently advanced AI becomes available to researchers, the knowledge of how to create it is loose. Pandora's box may already be open.