Gen AI in Code: How Short-Term Gains Create Long-Term Maintenance Burdens, If You’re Not Careful

Nov 9, 2023


Generative AI-produced code has entered the chat. At BonCode, we believe its arrival has increased the need for monitoring and consistent software analysis. Here’s why.

Perhaps the marquee attraction of Generative AI code is its ability to increase software developers’ productivity. Increased productivity has a nice ring to it, we know! With the help of AI assistants like GitHub Copilot and ChatGPT, developers can prompt the AI to write the mundane code that would otherwise take up their valuable time.

The issue is, we aren’t hearing much about the long-term effects of this “short-term” productivity in major media at the moment. We think it’s a huge part of the conversation, and we’re curious: is Gen AI code really increasing productivity, or is it creating an attractive illusion of productivity?

At BonCode we are passionate about great code, and a key characteristic of great code is that it’s maintainable. Maintainability, according to the ISO 25010 quality measurement standard, is “the degree of effectiveness and efficiency with which a product or system can be modified to improve it, correct it or adapt it to changes in environment.”

What supports maintainability? Code that is consistent and easy to understand across team members. Perhaps more than any other factor, Gen AI-produced code can deeply disrupt that consistency and understanding.

Forbes recently published an article about the need for caution when using Gen AI in coding, and it references a piece that captures our sentiment nicely:

“On the one hand, AI can curate code, allowing developers to move along with the project even if they don’t understand a particular piece of the logic. On the other hand, who can troubleshoot this code should something go awry or if the unknown code introduces vulnerabilities?”

Now this is where it gets interesting: imagine that 100 coders are prompting ChatGPT or Copilot to help them code during the build phase of a software project. They are making something that will work for now, certainly. But the unique ways in which each developer prompts ChatGPT or Copilot will create considerable variation, likely duplication, and opportunity for misunderstanding throughout the code base. We believe the long-term effects of this are significant.
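To make that concrete, here is a purely illustrative, hypothetical sketch (not taken from any real project or tool) of how two developers prompting an assistant for the same requirement can end up with the same logic duplicated in two different styles:

# Hypothetical example: two developers ask an AI assistant to
# "calculate the order total with a volume discount".
# Both snippets work, but the same business rule is duplicated
# in two styles that future maintainers must reconcile.

# Developer A's prompt produced a compact version.
def order_total(prices: list[float]) -> float:
    subtotal = sum(prices)
    return subtotal * 0.9 if len(prices) >= 10 else subtotal

# Developer B's prompt produced a verbose version of the very
# same rule, with the threshold and discount rate inlined again.
def calculate_total_for_order(order_prices):
    total = 0
    for price in order_prices:
        total += price
    discount = 0
    if len(order_prices) >= 10:
        discount = total * 0.10
    return total - discount

Both versions pass a quick check today; the cost shows up later, when the discount rule changes and someone has to find and update every copy.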

Even without the use of an AI assistant, human developers are apt to introduce variation into the code base for similar reasons. Often, each member of a team brings their own way of doing things into the project. Without a standard way of working established at the start of a project, the risk of producing difficult knots that need to be untangled down the line increases.

Whether it’s Gen AI or human-produced code, faster code does not mean better code. All of that “saved” time upfront will be reclaimed by the maintenance work such complex and varied systems require.

So what should you do if your software engineers turn to Gen AI? Happily sit back and watch them produce a large variety of code snippets that now only the AI understands? Or safeguard the maintainability of your core applications for the mid and long term? We think the second strategy is the better one.

Our tooling helps you assess the quality of code, whether it was produced by humans or by AI. It finds architectural violations, coding errors, duplication, unnecessary complexity, performance issues, security issues, and all the other not-so-good stuff people and AI can produce. What you need is consistent code, whether produced by humans, AI, or both. Our tooling helps you achieve this.

We believe the right balance is to reap the benefits of Generative AI while monitoring your software’s long-term health. If you manage this, you will not only have happy software engineers who enjoy using Gen AI, but also happy software owners and users who benefit from healthy software that stays adaptable to future needs.

As my mom used to say, “don’t choose, do both”: experiment with Gen AI, and monitor quality.

Shoot me an email at g.juliano@boncode.nl if you want to discuss further.

Gabi.
