
Boncode

FAQ

On this page we’ve collected the most interesting questions
that our customers have asked us over time.

Frequently Asked Questions

Vision

One could write an entire book on the impact of AI on development and software architecture. And of course we use AI in our daily practice ourselves.

The core of our vision is that AI is a “skill amplifier”. This means that AI, when used by skilled software engineers, will definitely improve productivity and local code quality. But it also means that when AI is used by a team of engineers with a wide variety of skills, it will lead to a wider variety of coding styles and architectural issues, and therefore to code that is harder to read and analyse. There will probably also simply be more code, and more code is not always a good thing.

In short, we think that while AI will bring many good things, when used unwisely or only at an individual level it will lead to more technical debt. As a result: the more organizations depend on AI in their software engineering departments, the greater the need for governance of code and architectural quality.

Another important aspect is that while AI generates code efficiently and effectively, the correctness of that code is not guaranteed and will need human checking. And this in a field where engineers may lose skills as they switch from coding to prompting. So please hire not only us, but also some extra functional testers. Meanwhile, highly skilled engineers will shift towards higher-level tasks like architectural decision-making, AI model integration, and strategic consulting.

Boncode adds the interpretation of software metrics from an independent perspective. If our consultant discovers the use of open source software quality measurement tooling within your organization, the first positive observation has already been made: you have engineers who are interested in software quality.

Why an overarching assessment tool matters

1. Multiple technologies
Modern projects use many technologies. Without a cross-technology view, functionality can end up in the wrong place, creating long-term risks.

2. Code vs. architecture
Vendor tools focus on individual code quality. But even if every developer writes “good” code, the overall system may still become hard to maintain. What matters is the quality of the software architecture as a whole.

3. Independence
Quality checks are more credible when done by an independent source.

4. Audience
Business leaders and managers need clear, aggregated insights, such as trends across teams and projects, rather than technical details. Vendor tools rarely provide this.

In short: A multi-technology, independent assessment tool ensures both technical teams and business leaders get the insights they need.

Business Model

Services

Yes. It is our specialty to provide fact-based insight into the risks and opportunities of software. We call this a Software Due Diligence, and we have done dozens of them.

This is fully understood, but source code analysis needs source code. We have taken every conceivable measure to guarantee the safety of your code. In highly exceptional situations we use external disk drives at your location. We are ISO 27001 certified, which provides additional confidence.

Yes, we do issue Software Quality Statements. These Statements provide independent, fact-based assurance of the technical quality of your software system(s). The certificates are based on our code analysis results, include an overall maintainability score, and are rooted in ISO 25010. Certificates are an integral part of our services and are offered at no additional cost.

Yes. Security is a multi-headed monster, so we will never be able to provide 100% assurance that a software system is secure. Having said that, we do assess security risks at source code and architectural level, based on the Open Web Application Security Project (OWASP) guidelines or a customer’s own specific policies.

Yes, of course this is their responsibility. But if your external developer delivers suboptimal software, you still have to cope with the effects of that. You can outsource activities, but you can’t outsource responsibility.

Well, working agile means that you apparently acknowledge that your future functional requirements are unpredictable and that you therefore need a software development methodology aimed at adaptability. Shouldn’t your software product then be highly adaptable as well? That’s what good software quality brings you.

Tools built into software development environments (SDEs) report to the individual engineers who use them and are not geared towards managers, project managers, or CIOs. In most cases these tools are aimed not at team level, but at one individual’s personal work. Boncode provides quality measurements aggregated and adjusted to the level of the different stakeholders, from engineering level up to the boardroom, thus providing one integrated version of the truth in your software project. Boncode should be seen as a quality management system at code, architectural, and project level.

Fun fact: in projects where software engineers do use SDE-provided tools, we generally observe that the overall quality of the entire project is higher than in situations where these tools are ignored.


Boncode’s tooling is technology agnostic, meaning we can onboard almost any technology. Having said that, if Boncode can’t measure a technology today, it is probably bleeding-edge or very rarely used, which comes with its own risk profile.

The Maintainability Score rates how easy your system is to maintain on a scale from 0 to 100. Scores are categorized, ranging from Not maintainable up to Gold-plated. Higher scores (70+) indicate a well-maintainable system that is easier to understand, modify, and test. Lower scores (<60) signal that specific areas require attention to improve long-term maintainability and can be challenging to address.
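The band structure described above can be sketched in a few lines. Note that Boncode’s exact category names and cut-off points are not published here; the 60 and 70 thresholds come from the text above, while the 90 threshold and intermediate labels are illustrative assumptions.

```python
def maintainability_category(score: float) -> str:
    """Map a 0-100 Maintainability Score to an indicative category.

    Only the <60 and 70+ thresholds come from the FAQ text; the
    intermediate band and the 90 cut-off are assumptions for illustration.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score < 60:
        return "Needs attention"    # FAQ: <60 signals areas requiring work
    if score < 70:
        return "Acceptable"         # assumed intermediate band
    if score < 90:
        return "Well maintainable"  # FAQ: 70+ indicates a maintainable system
    return "Gold-plated"            # top category named in the FAQ
```

For example, a system scoring 75 would land in the well-maintainable band, while one scoring 55 would be flagged as needing attention.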

The Maintainability Score has four main benefits:

  1. it is entirely fact-based, with no opinions involved
  2. it is understandable for all stakeholders, including people with little or no software engineering background
  3. using the score on a weekly basis provides an excellent dataset to monitor projects over time
  4. it makes it very easy to compare different systems within your portfolio


It is not. The Score is a very trustworthy indicator of the health of your software.

We prefer to talk about Software Quality or Maintainability instead of Technical Debt. Here’s why: technical debt refers to the cost of taking shortcuts in software development to expedite the delivery of a project, resulting in code that may not be as robust or maintainable as it should be. It’s essentially the implicit cost of future rework that will be needed to address the consequences of those shortcuts. That appears to be a handy concept. But there is no accurate or trustworthy way to measure technical debt, and if you can’t measure it, you can’t manage it. So while the term technical debt has some consulting value in educating people on why cutting corners can be harmful, it does not help solve the issue. That’s why we prefer our Maintainability Score.

Yes, some of our customers do that. But there is always a risk that people start “gaming the metrics”, meaning that they comply with the rating just for the sake of complying. In general, that’s not an advisable approach.

Tooling

That really depends on your level of expertise. If you are a board member with limited software engineering knowledge, you can rely on our scores, which range from 0 to 100. No knowledge needed. If you are a software engineer with no experience in source code analysis, our onboarding process will help you out easily. And if you’ve read the book “Clean Code” by Robert C. Martin, it will be a party of recognition and you will likely find your way without any guidance from our side.

Yes. As long as there is something to analyse (usually an XML representation of the diagrams), we can measure it. For some technologies (like OutSystems) we even have a dedicated benchmark: https://boncode.nl/boncode-for-outsystems/

Yes, that is very doable and useful. The process is that at a fixed interval (e.g. per sprint) we measure all code that has been added, removed, or modified (we call this churn). For low-code platforms like OutSystems we use Automated Function Point Analysis to determine how much functionality has been added, removed, or modified; for more traditional technologies we use lines of code (LOC). If you correlate this with the amount of time your team allocated to the work, productivity is objectively determined in terms of Function Points per hour or LOC per hour. We’ve learned that this type of analysis is very useful for internal or external benchmarking.
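The arithmetic behind this is straightforward: churn divided by allocated time. A minimal sketch, where the function name and the example figures are purely illustrative, not Boncode’s actual implementation:

```python
def productivity(churn_units: float, hours_spent: float) -> float:
    """Productivity as churn per hour.

    churn_units: Function Points (low-code) or LOC (traditional tech)
                 added, removed, or modified in the interval.
    hours_spent: team time allocated to that work in the same interval.
    """
    if hours_spent <= 0:
        raise ValueError("hours_spent must be positive")
    return churn_units / hours_spent

# Hypothetical example: a sprint with 42 Function Points of churn
# delivered in 240 allocated team hours:
# productivity(42, 240) -> 0.175 FP/hour
```

Tracking this figure per sprint, per team, gives the time series used for the internal or external benchmarking mentioned above.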
