Analysis Mistakes
When I started my professional career at Fraunhofer in 2009, software quality and static code analysis were among the first things I fell in love with and focused on. Having a bunch of tools at my disposal that showed me where my code (and the code of my colleagues) was flawed seemed like the perfect setting to learn and improve. We used checkstyle, spotbugs (back then called findbugs), and PMD, both locally and on our Jenkins server.
One of the biggest mistakes (if not the biggest) I made back then was to impose the rules on my team without explaining the reasons for each rule, or for using these tools in general. This quickly led to heated discussions, most of them emotional and unproductive. I was able to convince them in the end, but that approach took a huge toll on me and on the team as a whole.
So, here's my first word of advice: Never introduce static analysis as a top-down decree. Instead, discuss the rules and agree (!) on your ruleset as a team. Doing this will be (mostly) stress-free and lead to an improvement of the whole team, not only yourself. Many developers don't know such tools or rules, and simply explaining the "why" behind them can work wonders.
It's a Team Effort
When I joined a new team and employer, I was tasked with pretty much the same scenario: introduce analysis tools and improve the codebase. But this time, I did two things differently.
First, I was not alone. I was working closely with two colleagues on the matter of software quality. I wasn't the strange new guy who proposed changes to the development process and introduced code quality tools; having a dedicated sub-team made all the difference.
Second, when we chose our tool (ReSharper for C# in this case), we analysed all the existing rules, then preselected a subset of rules that we wanted to start with, and discussed them with the team. We agreed on a set of checks that everyone was on board with (fun fact: the team wanted even more rules than we proposed!). Over the course of 1.5 years, we gradually tightened the rules and introduced more and more of them, always trying not to overwhelm our fellow devs. By taking it step by step, we ensured that static analysis became a natural part of our workflow rather than an obstacle.
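To illustrate what such gradual tightening can look like in practice, here is a hypothetical excerpt of a team-wide `.editorconfig` for ReSharper inspection severities. The specific rules shown are illustrative picks, not our actual ruleset; the point is that a rule can start gently and be ratcheted up once the team agrees:

```ini
# Hypothetical team-wide .editorconfig excerpt for ReSharper inspections.
# Severities can be tightened over time: none -> suggestion -> warning -> error.
[*.cs]

# Long since agreed on by the team, enforced as a warning:
resharper_redundant_using_directive_highlighting = warning

# Newly introduced rule, starting gently as a suggestion:
resharper_unused_member_global_highlighting = suggestion
```

Because the file lives in the repository, every severity change goes through review, which keeps the "discuss and agree as a team" principle intact.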
Maybe you can already guess my second word of advice: Start small. Do not begin with too many rules. Most tools enable a massive set of default checks, far too many to be useful right away. Instead, pick a few key rules, disable the rest, and then slowly increase the number or strictness of checks over time. Your team will be thankful, as you're not overloading them with new things on top of their day-to-day work.
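For a tool like PMD, "starting small" can be as simple as a ruleset file that references a handful of individual rules instead of enabling whole categories. A minimal sketch; the two rules are merely examples of uncontroversial picks:

```xml
<?xml version="1.0"?>
<ruleset name="starter-rules"
         xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0
                             https://pmd.sourceforge.io/ruleset_2_0_0.xsd">
  <description>Small starter ruleset the team agreed on; grow it over time.</description>

  <!-- Reference individual rules rather than entire categories. -->
  <rule ref="category/java/errorprone.xml/EmptyCatchBlock"/>
  <rule ref="category/java/bestpractices.xml/UnusedLocalVariable"/>
</ruleset>
```

Adding a rule later is a one-line change, which makes the gradual tightening visible and reviewable.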
Absolute Numbers vs. Trend Analysis
Static code analysis is great. It can tell you about the quality of your code at any given time. If your code is free from findings, you can use your analysis tools to keep it that way. But most of the time, you'll use those tools on a project that's been alive for quite a while, with thousands or even tens of thousands of findings. And that's where the trouble begins.
I've seen teams completely demoralised by the sheer number of findings in their codebase. Not because the code suddenly got worse, but because they could finally see the full extent of its issues, and that can be paralysing. Sometimes, though, it's not even the code that's the problem, but a misconfigured analysis profile burying the team in unnecessary warnings.
Now, having tools that show you findings in your code is great, don't get me wrong. But what exactly do you think will happen when you tell someone outside the team "There are 217,323 findings in our codebase"? They do not have the same context that you have and can't interpret the numbers properly. What they will remember is that scary number: 217,323. And suddenly, your project has a "quality problem", even if it's actually improving.
Shall we turn the tools off, then? No, of course not. We can still use them and focus on something else, which will help everyone. Usually, we use code analysis for two reasons:
1. Find out about the quality of our code and where it violates our rules.
2. Monitor the quality continuously.
And (2) is far more important than (1). We want to see how the quality of our codebase develops over time. We want to see the trend. And that's what we should focus on.
- Is the trend going upwards (i.e. quality improves)? Great! That's something we can actually tell stakeholders: "Our code quality has been improving continuously over the last few sprints."
- Is the trend stagnating? No need to panic; at least it is not getting worse.
- If the trend is going down, we need to take action. Find the root cause, adjust our approach, and track if our changes lead to improvement.
My third word of advice is this: Forget absolute numbers. Focus on trends. There are several tools out there that can help you monitor them.
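Independent of any particular product, one lightweight way to act on trends rather than absolute numbers is a "ratchet" step in the build pipeline: the build only fails when the finding count grows beyond the last recorded baseline, and an improved count becomes the new baseline. A minimal Python sketch; the baseline file name and the idea of passing the current count as a command-line argument are assumptions, not part of any specific tool:

```python
# Hypothetical trend "ratchet": fail the build only when quality regresses.
import json
import sys
from pathlib import Path

# Assumed location of the stored baseline; adjust per project.
BASELINE = Path("findings_baseline.json")

def check_trend(current_findings: int) -> int:
    """Return 0 if the trend is flat or improving, 1 if it regresses."""
    if BASELINE.exists():
        baseline = json.loads(BASELINE.read_text())["count"]
    else:
        # First run: accept the current state as the starting point.
        baseline = current_findings
    if current_findings > baseline:
        print(f"Quality regressed: {current_findings} findings (baseline {baseline})")
        return 1
    # Ratchet down: the new, equal-or-lower count becomes the baseline.
    BASELINE.write_text(json.dumps({"count": current_findings}))
    print(f"Trend OK: {current_findings} findings (baseline was {baseline})")
    return 0

if __name__ == "__main__":
    sys.exit(check_trend(int(sys.argv[1])))
```

The stakeholder-facing message then becomes "the count never goes up", which is exactly the trend story instead of the scary absolute number.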
I want to introduce you to two of the aforementioned tools that I encountered in projects over my career. (Yes, I know there is also SonarQube, but I've never really worked with it.)
NDepend
If you're in a .NET-only environment, you can use NDepend.
NDepend is usually used either as a plugin for Visual Studio or as a standalone application, both of which are Windows-only. However, there's also a headless version available that runs on Windows, Linux, and macOS. You can use it to create web-based reports about the quality of your code. Figure 1 shows the default dashboard, which appears when you open the report. It's very easy to integrate this headless version into your build pipeline and publish the results to GitHub Pages or GitLab Pages, for example. Bonus: if you're using ReSharper, you can even integrate its findings into NDepend ((1) in figure 2). There's also a separate trend view that shows the development of your code coverage, technical debt, violated rules, and some other metrics ((2) in figure 2).
Teamscale
A second tool that offers continuous monitoring, trend analysis, and more is Teamscale. It supports a bunch of different technologies (.NET included) out of the box, and you can also upload the results of analysis tools you're already using.
Figure 3 shows the default dashboard. You can focus on absolute numbers if needed, but the real power lies in its ability to show trends: perfect for presenting results to management without overwhelming them with raw data, or for making the trend visible to the team at all times. Figure 4 shows two exemplary widgets that focus on trends.
In the End, It's Humans After All
Having tools to analyse your code is great; having them integrated into your development process is even better. But unfortunately, that's still not enough.
At the end of the day, code quality is not just about tools; it's about people. You need someone who takes care of the continuous improvement of your software and, in turn, your team: a quality engineer (ideally not just one, but at least two).
A quality engineer is not just the person who installs and configures static analysis tools. They are the ones who drive a culture of quality, helping developers understand and embrace the benefits of code analysis instead of seeing it as a bureaucratic nuisance. They ensure that rules evolve alongside the team's needs, keeping static analysis a living process rather than a rigid, outdated checklist.
No matter how good your tools are, you will always face resistance from developers at some point. Some will see static analysis as a burden, others as a direct challenge to their expertise. This is why a quality engineer must also be a diplomat: not just enforcing rules, but educating, discussing, and adapting. If a rule is constantly ignored, it might not be because developers are lazy; it might be because the rule doesn't make sense in the project's context.
A good quality engineer is part technologist, part advocate, and part diplomat.
My final word of advice: Appoint a quality engineer. And if no one volunteers, step up and take the job yourself. If you care about clean code, you don't have to wait for someone else to drive the change. Start small. Set up discussions. Advocate for better practices. And most importantly, focus on people, not just numbers. Because in the end, code quality is about making life easier for humans, not just for the machines analysing it.