White Hat vs Black Hat: Cybersecurity in the Age of Artificial Intelligence

Authored by: Kylie Leonard

In the media and from academic researchers, we often hear terms such as expert systems, smart systems, machine learning, deep learning, artificial intelligence, weak AI, strong AI, artificial narrow intelligence, artificial general intelligence, artificial super intelligence, and the singularity. What does all of this mean?

These terms all describe aspects of Artificial Intelligence (AI). The goal of AI is to develop computer algorithms that organize and understand data sets and that improve automatically with experience. The ambition is for these algorithms to perform any intellectual task a human can, but faster. When an algorithm reaches this point, researchers call it Artificial General Intelligence (AGI), or Strong AI. As a society, we have a long way to go to reach AGI. Right now, the state of the art is Artificial Narrow Intelligence (ANI), or Weak AI: an algorithm that performs a single task with human-like capability. Some examples are GPT-3 chatbots, self-driving cars, the Google Photos app's use of facial recognition, and much more.
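
To make the idea of a single-task learner concrete, here is a minimal sketch in Python, assuming scikit-learn's bundled handwritten-digit data set: the model does exactly one thing, classify digits, and "improves automatically" only in the sense that its parameters are fit to training data rather than hand-coded. The data set and model choice are illustrative, not tied to any of the products above.

```python
# Minimal sketch of Artificial Narrow Intelligence: a model trained to do
# exactly one task (classify handwritten digits). Illustrative only.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 8x8 grayscale images of the digits 0-9, with their labels.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# "Learning" here means fitting parameters to data, not hand-writing rules.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

print(f"Digit-classification accuracy: "
      f"{accuracy_score(y_test, model.predict(X_test)):.2f}")
```

However capable this model becomes at digits, it cannot drive a car or hold a conversation; that narrowness is exactly what separates ANI from AGI.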


With the development of all this wonderful technology, the first question that comes to my mind is: what role does cybersecurity play in AI? There are three main applications:

  1. Creating and developing AI as a means to secure a computer system or network. This application uses AI to identify and stop cyberattacks (a minimal anomaly-detection sketch follows this list). It should also be noted that cybercriminals can launch more sophisticated attacks through their own use of AI.
  2. Developing protocols to protect AI from cybersecurity threats. This application looks at protecting the algorithm itself and ensuring its safety against attacks such as tampered training data being fed to the algorithm and data poisoning, in which corrupted data retrains the algorithm (a data-poisoning sketch also follows this list). Another consideration in this application is protecting AI from human threats.
  3. Designing ethics to protect humanity from the dangers of AI. This application is interested in creating guidelines to ensure the safety of humanity. Science fiction writer Isaac Asimov dreamed of a future where every person owned a robot, and he created a list of laws to guide robots and protect humans. His Three Laws are: “First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” This was an early attempt at creating a code of conduct for machines.
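
For the first application, the sketch below is a minimal, purely illustrative example of anomaly detection: an unsupervised model (scikit-learn's IsolationForest) is fit on synthetic "normal" traffic and then flags unusual connections. The feature choices (bytes sent, connection duration) and all numbers are assumptions for illustration, not a real intrusion-detection system.

```python
# Minimal sketch of application 1: flagging anomalous network connections.
# All features and numbers here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" traffic: (bytes sent, connection duration in seconds).
normal = rng.normal(loc=[500, 2.0], scale=[100, 0.5], size=(1000, 2))

# Synthetic attack-like behavior: unusually large, long-lived transfers.
attacks = rng.normal(loc=[5000, 30.0], scale=[500, 5.0], size=(10, 2))

# Fit the detector on normal traffic only; it learns what "usual" looks like.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# predict() returns 1 for inliers and -1 for anomalies; the attack rows
# should mostly come back as -1.
print(detector.predict(np.vstack([normal[:5], attacks])))
```

In practice, such a detector would be one signal among many, surfacing alerts for human analysts rather than blocking traffic on its own.

For the second application, the following sketch illustrates why data poisoning matters, assuming a synthetic binary "benign vs. attack" data set: if an attacker relabels a fraction of the "attack" samples as "benign" in the training data, the retrained model misses more attacks. Real poisoning attacks are subtler than label flipping; this is only a toy demonstration.

```python
# Minimal sketch of the data-poisoning threat from application 2:
# flipping "attack" labels to "benign" before training degrades the model.
# The data set is synthetic and the attack is deliberately simplistic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Class 1 stands in for "attack" traffic, class 0 for "benign".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Train after relabeling flip_fraction of "attack" samples as "benign"."""
    rng = np.random.default_rng(1)
    y_poisoned = y_train.copy()
    attack_idx = np.where(y_poisoned == 1)[0]
    n_flip = int(flip_fraction * len(attack_idx))
    y_poisoned[rng.choice(attack_idx, size=n_flip, replace=False)] = 0
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return accuracy_score(y_test, model.predict(X_test))

for frac in (0.0, 0.2, 0.4):
    print(f"flipped {frac:.0%} of attack labels -> test accuracy "
          f"{accuracy_after_poisoning(frac):.2f}")
```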
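The poisoned model still looks plausible, which is the danger: defenses here focus on validating training data provenance and auditing model behavior, the kind of protocols the second application calls for.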


As AI evolves and becomes more life-like, it raises philosophical questions: what is the essence of a human, what does it mean to be “alive,” and what makes up consciousness? These lead to the big question of what rights an intelligent entity should be entitled to. If AI algorithms ever develop a form of sentience, wrestling with that question will lead to a better understanding of what it means to be human.