Software development is continuously evolving. From handwritten code to code generators, through open source, and now generative AI, software development is getting faster. In fact, a recent McKinsey report found that generative AI can double the speed of certain coding tasks.
Alongside the speed boosts made possible by generative AI, the report also flags the potential risks to intellectual property (IP), regulatory compliance, and security. Ensuring software quality has never been so important.
How is generative AI used in software development?
Generative AI – the use of AI and, more specifically, large language models (LLMs) – is being used to write code, assist developers, and automate certain aspects of software development.
Right now, we’re at the beginning of what generative AI can do across a range of industries, including software development. That means the future role of generative AI is very much undecided and under-regulated. And that’s a potential threat to software quality.
What impact does generative AI have on software quality?
While the rise of generative AI in software engineering creates advantages, it also introduces new risks. On one hand, generative AI can create more code, faster. On the other hand, it doesn’t necessarily produce better-quality code. In fact, in some cases it’s been known to introduce errors. Here’s a potential scenario.
Imagine you have a team of developers, all prompting an AI chatbot – such as ChatGPT – to produce lines of code. Delegating coding tasks to AI runs the risk of losing human knowledge and understanding of the code itself, how to maintain it, and how to improve it if something goes wrong.
On top of that, use of generative AI exposes companies to vulnerabilities such as malicious code entering their software product from the public domain. There’s also a danger that copyrighted code could find its way into AI-generated output, or that the use of generative AI breaches regulations – such as the EU’s General Data Protection Regulation (GDPR). Another security risk arises when developers accidentally expose confidential information in the prompts they write for AI – or through prompt-writing tools.
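One common mitigation is to screen prompts for obvious secrets before they leave the company. Here is a minimal sketch of that idea; the patterns and function name are illustrative, not a production secret scanner, which would use a maintained ruleset and entropy checks:

```python
import re

# Illustrative patterns for common secret formats (assumptions, not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)password\s*[:=]\s*\S+"),             # hard-coded passwords
]

def contains_secret(prompt: str) -> bool:
    """Return True if the prompt matches any known secret pattern."""
    return any(p.search(prompt) for p in SECRET_PATTERNS)

prompt = "Fix this config for me: password = hunter2"
if contains_secret(prompt):
    print("Blocked: prompt appears to contain confidential data")
```

A check like this can sit in a proxy or IDE plugin, so prompts are filtered no matter which AI tool a developer uses.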
In these scenarios, the time saved by using generative AI is likely to be offset by the cost of more thorough testing procedures to ensure the robustness and security of the code before it’s released.
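In practice, that means treating AI-generated code like any untrusted contribution and gating it behind human-written tests. A minimal sketch, assuming a hypothetical AI-suggested helper `slugify` that a developer wants to verify before merging:

```python
import re
import unittest

def slugify(text: str) -> str:
    """Hypothetical AI-generated helper: turn a title into a URL slug."""
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse non-alphanumerics to '-'
    return text.strip("-")

class TestSlugify(unittest.TestCase):
    """Human-written tests that gate the AI-generated code."""

    def test_basic(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_edge_cases(self):
        self.assertEqual(slugify("  --  "), "")          # nothing usable left
        self.assertEqual(slugify("Déjà vu"), "d-j-vu")   # non-ASCII is stripped

if __name__ == "__main__":
    unittest.main()
```

The point is less the specific function than the workflow: the tests encode human understanding of the requirements, which is exactly what is at risk of being lost when code is delegated to AI.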
The future of generative AI and coding quality
Programmers have always innovated to speed up development and automate repetitive development tasks. For example, a code or component generator can be used to make programming quicker and easier.
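As a toy illustration of the idea, a generator stamps out boilerplate from a short specification. This is a minimal sketch with a hypothetical spec format (field name and type pairs) that emits a Python dataclass:

```python
# Toy code generator: turn a field spec into Python dataclass source code.
FIELDS = [("name", "str"), ("price", "float"), ("in_stock", "bool")]

def generate_dataclass(class_name: str, fields: list[tuple[str, str]]) -> str:
    """Emit the source of a dataclass with the given typed fields."""
    lines = [
        "from dataclasses import dataclass",
        "",
        "@dataclass",
        f"class {class_name}:",
    ]
    lines += [f"    {field}: {type_name}" for field, type_name in fields]
    return "\n".join(lines)

print(generate_dataclass("Product", FIELDS))
```

Generative AI extends the same principle – producing code from a higher-level description – but from natural-language prompts rather than a fixed template.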
Once developers realized that systems contained similar components that could be shared, duplicated, and re-used, the open-source community emerged. Rather than building the same functions over and over again, developers now share libraries of pre-built components, making development faster.
Generative AI will drive the next generation of software development tools, but it’s unlikely to replace human developers. As with any application of generative AI to create content, it still needs a layer of human moderation, oversight, and control.
What’s more likely is that we’ll see a continuation of the patchwork approach to software development, with handwritten, open source, and AI-generated code being aggregated into a single product. That’s why quality control, software assessments, and monitoring will become even more crucial for companies looking to lock down software quality and ensure business continuity.