Opinion Assessment on Generative AI and Cybersecurity Implication

October 12, 2023by Michael Mercer

The research presented by Carnegie Mellon University’s Software Engineering Institute underlines a watershed moment in our technological evolution, particularly in software engineering and artificial intelligence (AI). As a cybersecurity expert, I find the implications of this research profound and deserving of scrutiny.

To start, generative AI’s sophistication level is promising and alarming. Although the AI’s output might resemble human intelligence, equating this resemblance to actual comprehension or consciousness is a fallacy. No matter how sophisticated, the AI remains a very intricate tool, but a tool nonetheless.

Hence, evaluating the level of trust we should accord to its outputs requires a granular approach.



The introduction of AI elements into mission- and safety-critical cyber-physical systems (CPS) presents a two-fold challenge:

  1. Complexity and Predictability: The evolution of AI in cyber-physical systems exponentially magnifies the intricacy of these systems. While AI might autonomously generate vast bodies of code, the behavior of the resultant system can become increasingly unpredictable. In traditional software engineering, behavior predictability forms a cornerstone based on static code analysis and testing. Generative AI throws a curveball in this domain by producing code that could be beyond immediate human comprehension.
  2. Security Vulnerabilities: The research rightly highlights the inception of new attack vectors with the advent of AI components. For instance, poisoning training data can lead to AIs developing biased or malicious behaviors. Prompt injections present an even more covert and treacherous threat. The flexibility that makes generative AI so potent—its ability to produce varied outputs based on prompts—can be subverted to instigate pernicious behaviors.

The double-edged sword of AI-enhanced productivity is of particular concern. It is undeniable that generative AI tools can accelerate the pace of code generation, but at what cost? If these tools inadvertently reintroduce antiquated defects or lack the nuance to grasp the context, the resultant “efficiency” can spawn an avalanche of technical debt. Moreover, the potential inability of novice developers to discern the nuances and limitations of AI-produced code can pave the way for systemic vulnerabilities.

As we transition into an era where AI-produced code cohabits with human-written code, establishing trust metrics becomes paramount. Whether we should trust AI-generated code more or less than human-crafted code is not just philosophical—it has tangible implications for cybersecurity.

While generative AI offers tantalizing prospects for software development and AI engineering, a tempered and vigilant approach is imperative. Introducing AI elements into software engineering is not just about efficiency or sophistication; it is about maintaining our digital infrastructures’ safety, security, and reliability.

As we march forward, a symbiotic relationship between human oversight and AI generation seems the most prudent path, wherein each complements the other’s strengths and compensates for their vulnerabilities.