Introduction
Technical debt is a metaphor coined by Ward Cunningham to describe the implied cost of future rework caused by quick-and-dirty choices in code (Technical debt - Wikipedia).
In its classic form, technical debt refers to “not quite right code which we postpone making right”—the shortcuts taken to speed up development today at the expense of effort later (Technical Debt: From Metaphor to Theory and Practice). Early discussions (e.g. by Cunningham in 1992 and later by the Software Engineering Institute) focused on code issues: messy, overly complicated code that accrues “interest” by making future changes harder (See Martin Fowler on Technical Debt).
However, limiting the definition to poor code is overly narrow. Real-world IT systems accumulate debt in many forms, not just in source code.
Architecture can ossify, infrastructure can age, tests can be skipped—all creating a drag on future development just like sloppy code does. In fact, many experts argue that all long-lived software systems accumulate various debts over time and that code-level debt (while easier to spot) is only one part of the picture (Kruchten-2020).
The purpose of this post is to broaden the perspective of technical debt beyond code quality, exploring a variety of “debts” (or problems) an IT organization can incur and how they contribute to typical challenges development organizations face.
Overview of Debt in IT Systems
When we speak of debt in IT, we need to consider more than just messy code. The entire IT stack—from requirements, architecture, and infrastructure through communication and documentation to testing and operations—can accumulate “debt” that burdens future work. Just as financial debt comes in different flavors, IT debt spans multiple domains. For example, outdated server platforms or unpatched libraries are a form of infrastructure debt, and a lack of automation in deployment is operational debt. All these debts incur “interest” by making operation, maintenance, and evolution slower or riskier over time.
A helpful analogy is to compare forms of technical debt to types of financial debt. Some tech debt is like credit card debt—small shortcuts and hacks (say, hard-coded fixes or skipping tests) that yield quick gains but accrue high interest if not paid off.
Other debt is more like a long-term loan or mortgage—a conscious trade-off, such as choosing a simple architecture to meet a deadline, knowing you’ll have to invest later to scale it.
In both cases, the debt metaphor holds: you either pay now (do it right) or pay more later.
The key is that technical debt extends beyond code; it includes at least requirements, architecture, infrastructure, processes, testing, security, and operations, all of which can be compromised in the short term, only to create higher costs or efforts in the long term.
In contrast to real life, you can throw away a complete IT system, and (most or all of) its debt disappears with it. Too bad that does not work with your bank and your freshly bought flat.
Categories of Debt in IT Systems
Let us break down technical and other debt in IT systems into several categories.
Although that diagram contains many dependencies, I decided to show only those that I deem most important. As our problem space is infinite, problems will surely occur that cannot easily be fitted into the categories shown. The diagram is meant to outline a broad spectrum without claiming to be complete.[1]
Requirements Debt
Definition: Requirements debt is a less obvious but critical category of technical debt. It arises from deficiencies in the requirements gathering, requirements communication, or the requirements themselves. Such debt often results in a mismatch between what stakeholders need and what the development team(s) actually built (see Why Product Owners Must Prioritize Managing Technical Debt?).
This can happen when requirements are incomplete, vague, or frequently changing (e.g. pivoting product direction or rampant scope creep). If you rush through requirements elicitation—perhaps to release an MVP quickly—you might implement features that don’t quite meet the real user needs or that aren’t built with the future in mind.
Those “gaps” or shortfalls in requirements become debt: You will have to redo or adjust the product later (sometimes extensively) to align it with actual requirements.
Classical examples of requirements debt are:
- missing quality requirements, such as performance or capacity (e.g., response time < 200ms); see Q42 for multiple examples. A sketch of making such a requirement testable follows this list.
- missing or incomplete descriptions of special cases: the use case explains only the happy path and omits edge or special cases.
- missing stakeholders: Certain people or organizations simply haven’t been asked about their requirements concerning the system under consideration.
- incomplete or missing domain knowledge
- contradictory stakeholder expectations (one person requires high security levels with strict legal clearing processes, others need high delivery speed)
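To make the first item above concrete: one way to keep a quality requirement like “response time < 200ms” from silently becoming debt is to express it as an executable check. The sketch below is a minimal, hypothetical JUnit 5 example; `OrderService` and `findOrders` are illustrative placeholders, and a serious setup would measure under realistic load rather than in a single unit-test call.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.time.Duration;
import java.time.Instant;
import java.util.List;
import org.junit.jupiter.api.Test;

// Hypothetical service standing in for your own code.
class OrderService {
    List<String> findOrders(String customerId) {
        return List.of(); // placeholder for the real lookup
    }
}

// Minimal sketch: the quality requirement "response time < 200 ms" expressed
// as an executable check. A real setup would measure under realistic load and
// data volumes, not a single call on a developer machine.
class ResponseTimeRequirementTest {

    @Test
    void findOrdersRespondsWithin200Milliseconds() {
        OrderService service = new OrderService();

        Instant start = Instant.now();
        service.findOrders("customer-42");
        Duration elapsed = Duration.between(start, Instant.now());

        assertTrue(elapsed.toMillis() < 200,
            "Quality requirement violated: call took " + elapsed.toMillis() + " ms");
    }
}
```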
My personal worst case in requirements debt is the missing “overall vision”, also called north star or desired outcome: When development teams are overwhelmed by only small-scale requirements without an overall goal or vision, they can no longer see the wood for the trees.
Code and Design Debt
Definition: Code and design debt is the classic form of technical debt. It refers to poor code and implementation shortcuts, such as tangled spaghetti code, lack of modularity, duplicate logic, or overly coupled components. Such constructs render codebases hard to understand and maintain.
These issues often arise from rushing features or bypassing established good practices. Over time, the code’s internal “cruft” (to use Martin Fowler’s term) accumulates, increasing the effort needed to add or change functionality (See Martin’s post on Technical Debt).[2]
In essence, the extra time developers spend dealing with bad code is the interest on code debt. For example, if adding a new feature took 2 days in a clean codebase but 4 days in a messy one, those 2 extra days are the interest payment for not refactoring earlier.
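A tiny, hypothetical sketch of how such interest accrues in everyday code debt: the same discount rule has been copy-pasted into two classes (the class and rule are invented for illustration). Changing the rule now requires two edits—forget one and the system becomes inconsistent; consolidating it into one place is the repayment.

```java
// Everyday code debt: the same discount rule lives in two places.
// Changing it (say, from 10% to 15%) now requires two edits.
class InvoicePrinter {
    double priceWithDiscount(double price) {
        return price > 100 ? price * 0.90 : price;   // duplicate #1
    }
}

class CheckoutService {
    double finalPrice(double price) {
        return price > 100 ? price * 0.90 : price;   // duplicate #2
    }
}

// Paying the debt back: one authoritative place for the rule.
class DiscountPolicy {
    static double apply(double price) {
        return price > 100 ? price * 0.90 : price;
    }
}
```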
Ward Cunningham’s original metaphor was rooted in this idea:
“Shipping first-time code is like going into debt… The danger occurs when the debt is not repaid”, leading to compound interest in the form of slow, painful improvements later. (Wikipedia).
I’m reasonably sure that most of you/us have experienced this situation: Many if not most legacy systems contain multiple forms of such code debt.
Consider a decades-old enterprise application with thousands of kludges and “temporary” fixes. Minor changes in one module unexpectedly break features in another due to tight coupling. Developers are scared to touch certain “fragile” parts of the code. Such fragility, and the resulting unwillingness of developers to touch specific parts of the code base, can serve as a trigger for management to act.
A real-world illustration is the ubiquitous “big ball of mud” legacy codebase that many companies struggle with—for instance, early versions of the Netscape browser had to be substantially rewritten because the code had become too convoluted to extend.
Architecture or Structural Debt
Definition: Architecture debt (or structural debt) refers to overly tight or inappropriate coupling (dependencies), lack of cohesion or modularity, or other structural deficiencies.
It’s the “big picture” equivalent of code debt. Structural debt accrues when development teams ignore sound architectural principles for quick wins—for example, a tightly coupled monolithic design that works for a small app but cannot scale, or an over-engineered microservices architecture that becomes a maintenance nightmare.
Unlike code debt, architecture debt is often difficult to find using code-level static analysis tools, yet it can have even costlier consequences (see Philippe Kruchten, Technical Debt: From Metaphor to Theory and Practice).
Philippe Kruchten notes that while code-level debt is easier to find, architectural debt often carries the highest cost of ownership over time (Kruchten-2020) because it permeates the entire system (aka: has global scope) and is costly to “refactor” once an application is in production.
One can detect certain types of such structural deficiencies by analyzing code dependencies (e.g., looking for circular dependencies). But be aware that dependencies come in many (!) different forms:
- Direct compile-time dependencies: one function or service calling another function or service. Such dependencies are most often visible in the source code.
- Runtime dependencies, i.e., injected dependencies. It is often considered good practice to replace compile-time dependencies with runtime dependencies, but the latter can be more difficult to spot (see the sketch after this list).
- Temporal runtime dependencies, like “another program needs to run first” or “another job needs to finish before ours is allowed to start.”
- Dependencies resolved at runtime by intermediaries (like broker, nameserver, registry, config files, or even environment variables).
- Invisible call dependencies like other programs calling your public API.
- Dependencies by shared infrastructure, like shared databases, shared memory/storage, or other types of infrastructure shared between otherwise distinct systems.
- Organizational dependencies, like “a specific person has to sign off the change or merge the PR”.
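As announced above, here is a small sketch of the first two dependency forms. All class names are hypothetical; the point is only that a hard-wired compile-time dependency is plainly visible in the source, while an injected runtime dependency is more flexible but harder to trace.

```java
// 1) Compile-time dependency: ReportService is hard-wired to SmtpMailer.
//    The dependency is plainly visible in the source -- and hard to replace.
class SmtpMailer {
    void send(String to, String text) { /* ... */ }
}

class ReportService {
    private final SmtpMailer mailer = new SmtpMailer(); // fixed at compile time

    void publish(String report) {
        mailer.send("ops@example.com", report);
    }
}

// 2) Runtime (injected) dependency: the concrete mailer is chosen elsewhere,
//    e.g. by a DI container or a config file. More flexible, but you can no
//    longer see from this class alone what it actually talks to at runtime.
interface Mailer {
    void send(String to, String text);
}

class FlexibleReportService {
    private final Mailer mailer;

    FlexibleReportService(Mailer mailer) { // injected at runtime
        this.mailer = mailer;
    }

    void publish(String report) {
        mailer.send("ops@example.com", report);
    }
}
```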
Other forms of structural debt are wrong or inappropriate data models: data models that don’t match current requirements, or tables and columns whose names bear no relation to their content (I remember column names like col-1 up to col-10 in production databases).
Consider the following real-world examples of architectural debt leading to full-scale business disaster:
Friendster, one of the earliest social networks, was founded in 2002 and grew to over 3 million users within months. It was built as a monolithic PHP application with limited scaling mechanisms. The monolithic backend was unable to handle social graph growth; database design didn’t scale with user relationships. Friendster attempted a complete re-architecture, but it took too long. By the time it was ready, it had lost critical mass.
“Friendster collapsed under the weight of its own architecture… They didn’t build for scale, and their code and database couldn’t handle success.” — John Adams, Former Site Reliability Engineer, Facebook
Knowledge Debt
Definition: Knowledge debt refers to knowledge or know-how not sufficiently disseminated, communicated, or kept. Another term is missing or outdated documentation.
Rumor has it that, in reality, quite a few IT projects lack appropriate documentation. In contrast, many other engineering disciplines (for example, mechanical, electrical, or civil engineering) rely heavily on structured and standardized documentation.
In its worst form, knowledge debt means that the only people knowledgeable in certain aspects of the system have vanished and are no longer accessible. Another form is (quite brutally) called the “truck factor of one”: only a single person can perform certain tasks or knows certain parts of the code. If this person leaves (or, as the name suggests, gets run over by a truck), development or operation of the system might come to an immediate halt.
A third form of knowledge debt is too much and outdated documentation. This might happen in large or highly bureaucratic organizations, especially when the goals and benefits of documentation, communication, or knowledge transfer are left unclear or unspecific.
Other examples of knowledge debt are:
- Lack of onboarding guides or tutorials
- No Post-mortem or retrospective records: After incidents or major changes, no records are created and kept about what went wrong, how it was fixed, and lessons learned.
Sidenote: I’m (co-)author and maintainer of arc42, an open-source framework for efficient and pragmatic architecture communication. If your system or team suffers from knowledge debt, consider starting with the architecture canvas, literally a one-page documentation of your system.
Technology Debt
Definition: Technology debt refers to inappropriate technologies or using wrong, outdated, or overly hyped technologies for a specific task. In software, this can easily happen if dependencies on foundational technologies (like frameworks, libraries, or even programming languages) are not regularly updated.
Let’s start with outdated technology: Take Java(TM) 6 (last official update 2013) or PHP 2 (support ended 1997) as examples. During code or architecture reviews, we regularly identify certain dependencies as being outdated. But why is outdated technology a problem at all, if an older version of a programming language or library still runs?
Old or outdated technology may contain security risks, as certain vulnerabilities or attacks might not have been detected when the technology was current. If that alone does not suffice, older technologies often suffer from performance or stability risks or might not be compatible with newer or current operating systems. Finally, older technology often lacks certain features or functions that newer versions bring. Imagine you want to use a certain library or framework that exactly solves your problem but requires Java-20, and your system only runs on Java-6.
All in all, keeping technology up-to-date reduces several risks, but obviously has to be tested thoroughly.
Now let’s tackle the next tech debt: inappropriate technology. Of course, systems can be integrated via email, but that doesn’t feel appropriate in most cases. Other examples:
- using a relational database to store and query graph-like structures like maps or network topologies. Works, but cumbersome and slow.
- using Excel (TM) as a database for multi-user applications.
- using fax to send digital documents to other systems.
- using the print functions for logging and tracing in a client/server environment.
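To make the last item concrete, here is a hypothetical sketch contrasting “print logging” with a real logger (using plain java.util.logging; the class and method names are invented). In a client/server environment, println output ends up wherever stdout happens to go, has no level or timestamp, and cannot be filtered or routed.

```java
import java.util.logging.Logger;

// Sketch: System.out "logging" vs. a real logger.
class PaymentHandler {

    private static final Logger LOG = Logger.getLogger(PaymentHandler.class.getName());

    void handleWithDebt(String orderId) {
        System.out.println("processing " + orderId);      // the tech-debt variant
    }

    void handle(String orderId) {
        LOG.info(() -> "processing order " + orderId);    // leveled, filterable, routable
    }
}
```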
Surely, we could continue that list of better-don’ts for quite a while, but I just needed a few examples.
Infrastructure Debt
Definition: Infrastructure debt refers to problems or risks in the underlying platforms and hardware that your systems run on. Regard this as a special case of the technology debt from the preceding section.
This includes outdated servers or operating systems, legacy hardware, or networks. It’s the debt you incur when you ‘make do’ with old or brittle infrastructure instead of upgrading—often because upgrades are costly or risky. However, the interest on this debt is paid in the form of rising maintenance costs, poor performance, and difficulty implementing new features that rely on modern technology. In other words, if your foundation (infrastructure) is aging instead of being renewed, you are accumulating infrastructure debt. Organizations often accumulate this debt by postponing necessary upgrades or clinging to legacy systems due to short-term convenience or fear of change.
The Software Engineering Institute describes cases where decades-old mainframe applications have “accumulated substantial technical debt over decades” — using outdated components and patterns that drag down release cycles and maintenance (Demystifying mainframe technical debt).
Another aspect is worth noting: Cloud migration without considering the required performance and security consequences (aka lift-and-shift done wrong): The “lift and shift” strategy offers a fast path to the cloud but is fraught with risks if used without careful planning and assessment. It can fail miserably (producing infrastructure debt) when performance, cost, or security considerations are overlooked, or organizations expect cloud benefits without leveraging cloud-native features.
All of this is infrastructure debt. The case studies are everywhere: airlines whose old reservation systems cause check-in meltdowns, hospitals running antiquated systems that can’t easily share data, etc. The key takeaway is that neglecting infrastructure upgrades is like ignoring a rusting foundation—the longer you wait, the harder (and more costly) the eventual fix will be.
Sidenote: The Pragmatics of (Not) Updating Technology
When deciding whether or not to update or modernize underlying technology or infrastructure, take these additional aspects into account:
1. Risk of disruption may outweigh the benefits. “If it isn’t broken, don’t fix it.”—especially when human safety or large financial consequences are involved.
2. People who originally built or maintained the system may have left. Without an in-depth, profound understanding, modernizing could be dangerous or impossible without first rebuilding significant domain expertise. See the section on “Knowledge Debt” above.
Runtime Quality Issues
Definition: Runtime quality issues are unwanted behaviors of the system that occur at runtime or during installation or configuration. Examples are performance and stability issues, lack of usability, overly high resource consumption, etc.
These runtime quality issues should be counted as a specific kind of technical debt. They can result from any combination of coding, dependencies, inappropriate technology or infrastructure, or even unclear or missing requirements.
Consider the following thesis:
One can write clean code that behaves inefficiently or insecurely at runtime.
Therefore, you need to conclude that it does not suffice to write clean and understandable code with both loose coupling and high cohesion. Even such code can show unwanted runtime behavior, contain security flaws, and waste resources like memory or bandwidth.
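A small, hypothetical illustration of this thesis (class and method names are invented): both methods below are readable and “clean,” but the first one copies the whole intermediate string on every iteration, so its runtime cost grows roughly quadratically with the number of values.

```java
import java.util.List;

// Clean, readable code that still wastes resources at runtime.
class CsvExporter {

    String toCsvSlow(List<String> values) {
        String csv = "";
        for (String value : values) {
            csv += value + ";";              // easy to read, expensive to run
        }
        return csv;
    }

    String toCsvFast(List<String> values) {
        StringBuilder csv = new StringBuilder();
        for (String value : values) {
            csv.append(value).append(';');   // same result, linear cost
        }
        return csv.toString();
    }
}
```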
A prerequisite to avoid runtime quality issues seems obvious: You need to know your specific quality requirements… we covered that in the section on requirements debt. Another way to counter some runtime issues is testing, which brings us to the next category of debt:
Testing Debt
Definition: Testing debt is the debt incurred by inadequate testing and QA processes.
This includes insufficient automated test coverage, lack of unit/integration tests, an unreliable or slow test suite, or even missing testing environments. In some fast-paced teams, testing is cut short or not prioritized (“we’ll add tests later”)—that saves time in the short term but accumulates significant risk and cost later. The “interest” on testing debt is paid in the form of bugs in production, slow releases (because you have to test things manually or resolve issues on the fly), and a general fear of making changes.
When a codebase has poor test coverage, every change carries uncertainty: developers aren’t confident the change won’t break something else. Over time, this can paralyze improvement or require considerable efforts for bug fixes. Testing debt also encompasses outdated test cases (that no longer catch regressions) or a lack of continuous integration, meaning bugs slip through cracks. In summary, if your testing discipline is weak, you’re incurring a debt whose cost will manifest as software failures and maintenance headaches.
Many organizations can tell stories about such testing problems: A small change goes untested and causes a major outage. For instance, in 2012, the Royal Bank of Scotland (RBS) had a massive outage preventing millions of customers from accessing accounts, caused by a failed software update to their batch processing system (see 37 Epic Software Failures).
The root issue was attributed to a combination of factors, including inadequate testing of the update procedure on the legacy system—effectively, a test/ops debt that materialized as a multi-day outage.
Testing debt is also common in projects that start with an MVP mindset: Teams might write just a few happy-path tests to get the product out the door, deferring comprehensive testing. This is actually a known practice in startups; for example, some product owners only authorize basic tests for an MVP, intentionally taking on test debt to achieve a quick launch (Why Product Owners Must Prioritize Managing Technical Debt?).
The danger comes later: as the product grows, those missing tests mean new bugs crop up unexpectedly or old bugs resurface.
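The following sketch (entirely hypothetical, JUnit 5) shows what this looks like in miniature: the happy-path test ships with the MVP, while the edge-case test is exactly the kind of work that gets deferred and later resurfaces as a production bug.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical illustration of "happy-path-only" testing debt.
class DiscountCalculator {
    int discountedPrice(int price, int discountPercent) {
        if (price < 0 || discountPercent < 0 || discountPercent > 100) {
            throw new IllegalArgumentException("invalid input");
        }
        return price - (price * discountPercent) / 100;
    }
}

class DiscountCalculatorTest {

    @Test
    void happyPath() {                       // written for the MVP
        assertEquals(90, new DiscountCalculator().discountedPrice(100, 10));
    }

    @Test
    void rejectsNegativePrices() {           // the deferred edge case
        assertThrows(IllegalArgumentException.class,
                () -> new DiscountCalculator().discountedPrice(-1, 10));
    }
}
```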
A real (and tragic) case was the Boeing 737 Max’s software issues (MCAS): while a very complex example, investigations suggested that testing and oversight gaps contributed to the failure to catch critical flaws. (See the FAA report on Boeing MCAS)
On a smaller scale, even routine bugs like a mobile app crash after an update can indicate testing debt (perhaps the team didn’t test on a certain OS version or screen size). The cumulative effect of testing debt is a slower, riskier development cycle. Teams with high testing debt sometimes say things like “we’re afraid to touch that code” or “we can’t guarantee the next release won’t break something,” highlighting how this debt restricts progress.
Paying down testing debt involves investing in test automation, coverage for critical paths, and continuous integration, which many teams do only after suffering a few painful incidents that force them to refocus on quality. In essence, testing debt reminds us that “if you don’t write tests now, you’ll debug longer later.”
Security Debt
Definition: Security debt (a subset of quality debt) refers to shortcuts and deficiencies in security measures that accumulate over time.
This includes known vulnerabilities left unpatched, outdated encryption or protocols still in use, lack of compliance with security standards, weak access controls, and generally any deferred security work. It also means ignoring established organizational security practices and the need for constant vigilance concerning all aspects of security.
Organizations incur security debt when they ignore security practices for the sake of speed or convenience—for example, skipping an upgrade that fixes vulnerabilities because it might break compatibility, or delaying the implementation of audit logs because “we’ll do it later.” In the short term, nothing bad happens and development moves on, but the risk compounds over time. The “interest” on security debt is often paid in the currency of security breaches, data leaks, and emergency patches when vulnerabilities get exploited.
In other words, if you continually postpone security housekeeping, you’re essentially “borrowing against” the system’s safety, and breaches are the catastrophic debt collector.
In data security, it’s a good heuristic to assume that your attackers have more money, time, and knowledge plus better technology. Expect the opponent or attacker to be superior in every aspect. In other words: Expect the worst.
One of the most notorious examples of security debt was the Equifax data breach of 2017. Equifax, a major credit bureau, suffered a breach that exposed personal data of approximately 147 million people.
The cause? Equifax had failed to patch a critical known vulnerability in the Apache Struts web framework they used for months after the fix was available (see Equifax’s patching blunder).
Specifically, the Apache Struts flaw (CVE-2017–5638) was disclosed in March 2017 with a patch ready, but Equifax did not apply the patch; attackers exploited it in May–July 2017, and Equifax only discovered the breach in late July. This example of unpatched software shows security debt in action: The company “saved” effort in the short term by not updating their system, but the interest compounded enormously – resulting in one of the largest data breaches in history, hundreds of millions in cleanup costs, leadership fallout, and irreparable damage to reputation.
Another example: the “WannaCry ransomware” attack in 2017 hit many organizations (from the UK’s NHS to global companies) that had not applied available Windows security updates; those who had procrastinated on patches found their systems locked down by malware. This was a harsh repayment of security debt. Even less dramatic instances count: using weak passwords or hard-coded credentials might not bite immediately, but they sit as a ticking time bomb (debt) until someday an insider or hacker exploits them.
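To ground the last point, here is a minimal, hypothetical sketch of hard-coded credentials as security debt and one common way to pay it back—resolving the secret from the environment at runtime so it stays out of version control and can be rotated without a release (the variable name DB_PASSWORD is illustrative).

```java
// A small but typical piece of security debt: credentials baked into the source.
class DatabaseConfig {

    // Debt: anyone with read access to the repository owns this password.
    static final String HARD_CODED_PASSWORD = "s3cret!";

    // Paying it back: resolve the secret at runtime.
    static String passwordFromEnvironment() {
        String password = System.getenv("DB_PASSWORD");
        if (password == null || password.isBlank()) {
            throw new IllegalStateException("DB_PASSWORD is not set");
        }
        return password;
    }
}
```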
The only way out is to proactively pay down this debt by patching, upgrading, and building security into the development lifecycle, rather than bolting it on at the end.
Operational Debt
Definition: Operational debt refers to inefficiencies and manual work in the deployment, monitoring, and maintenance processes of IT systems.
It’s the debt you accrue when you don’t invest in streamlined operations—for example, relying on manual deployment steps, not having proper incident response playbooks, lacking monitoring/observability, or generally not automating routine tasks.
In the short term, you might get away with ad-hoc, manual ops (“it works, don’t touch it”), but as the system grows, these manual processes become error-prone and slow. The “interest” on operational debt is paid in the form of outages, slow recovery from incidents, configuration errors, and operational overhead. Essentially, if your ops practices lag behind modern DevOps/automation standards, you’re paying a continuous tax in the form of instability and wasted time. This category also covers documentation and knowledge debt in operations—e.g., only one person knows how to deploy the system, which is a risk.
Some examples: In 2017, an Amazon S3 outage was caused by an engineer manually executing a routine script with a typo, accidentally taking more servers offline than intended. The lack of a safety check or automated limit in the process turned a small mistake into a widespread, hours-long outage.
Similarly, in June 2023, Microsoft Azure experienced a 10+ hour outage in one region because of a typo—a mistaken command in a maintenance script led to the accidental deletion of 17 production databases (TechRadar). The incident underscored how a single manual error, in the absence of robust validation or recovery processes, can wreak havoc. We can view that as operational debt: perhaps the system lacked an “undo” or proper review for the deletion command—a feature that might have been on the to-do list but wasn’t there when needed.
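How could such a safety net look? The sketch below is purely illustrative (the class, the limit, and the confirmation flag are my assumptions, not taken from either incident): a hard limit on how much capacity a single—possibly mistyped—command may take offline, plus an explicit confirmation step.

```java
import java.util.List;

// Hypothetical guard against an over-eager destructive command.
class CapacityGuard {

    private static final double MAX_REMOVAL_FRACTION = 0.10; // at most 10% at once

    void removeServers(List<String> toRemove, int totalServers, boolean confirmed) {
        if (!confirmed) {
            throw new IllegalStateException("Destructive change requires explicit confirmation");
        }
        if (toRemove.size() > totalServers * MAX_REMOVAL_FRACTION) {
            throw new IllegalArgumentException(
                "Refusing to remove " + toRemove.size() + " of " + totalServers
                + " servers in one step -- split the change or raise the limit explicitly");
        }
        // ... proceed with the actual removal
    }
}
```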
Another common example is manual deployments. Companies that haven’t automated their release process often accumulate deployment scripts, server config tweaks, etc. that only specific people know. This can lead to outages like “we deployed the wrong version” or “the config was different on one server.”
A classic case was the 2012 “Knight Capital” fiasco (a trading firm): a deployment process mistake left an old feature flag turned on in some servers, which led to an uncontrolled algorithmic trading storm, costing the firm $460 million in 45 minutes. The root cause was traced to poor deployment practices—essentially ops debt in release management.
A lack of observability (logging, monitoring, alerting) is a form of ops debt—you save time by not setting up dashboards now, but later when something goes wrong, you “pay” by being blind to the problem. For instance, imagine an e-commerce site without proper monitoring: if the checkout service slows down, nobody notices until users complain, and it takes hours to pinpoint the bottleneck—a clear interest payment on operational debt.
Industry studies have noted that the number one cause of outages is human error in changes (OpsView.com).
This is why the DevOps movement preaches “Infrastructure as Code” and automation: to reduce manual steps and thereby reduce ops debt. In summary, operational debt accumulates in teams that skimp on automation, documentation, and resilient processes. It might not bite immediately (perhaps things are fine while the system is small or the one guru admin is always around), but as systems scale or staff changes, this debt shows itself in long outages and firefighting.
Process Debt
From my personal experience, a particularly nasty kind of debt concerns processes, especially project management, requirements, or development methodology—for example, bypassing code reviews, skipping agile ceremonies, or not having an effective knowledge-sharing process.
These shortcuts can yield short-term speed but at the cost of team efficiency and product quality later (the interest on process debt can be things like team burnout, miscommunication, or feature rework). Essentially, if your development processes or requirement definitions aren’t solid, you accrue a kind of organizational debt that makes future changes harder.
Such process debt will (!) result in several other types of debt, e.g., communication debt, requirements debt, structural debt, and others.
Summary & Takeaways
Technical debt is far more than a code quality issue; it’s a holistic concept, capturing many types of drag on IT systems. We introduced a broader understanding that spans requirements, code, architecture, technology and infrastructure, processes, testing, security, and operations.
By viewing technical debt through this wider lens, we see that any shortcut or deferred work in an IT system can become a “debt” that must be repaid with interest later.
Awareness is the first step in managing and mitigating these types of debt. Once the debt is visible and acknowledged, teams can make informed decisions about when and how to pay it down.
Keep in mind that not all debt is bad—sometimes taking on technical debt strategically is necessary to meet a business goal (just as taking a loan can be a smart business move). The key is to be intentional about it: incur debt only with a plan to service or retire it. Unintentional or uncontrolled debt—the kind that “just happens” when teams are rushed or neglect maintenance—tends to spiral out of control.
As Ward Cunningham cautioned in his debt metaphor, the danger is letting interest accumulate:
“Every minute spent on not-quite-right code counts as interest on that debt,” and eventually, an organization can be “brought to a standstill under the debt load” if it’s never addressed.
Eventually, organizations that strategically manage their technical and other debt will be more agile, resilient, and successful than those that let debts mount unchecked.
Therefore, may the force of appropriate architectural and technical decisions be with you!
Closing note
Every so often it is impossible to judge a decision as “debt-ful” or not because the consequences cannot currently be known. We (and our IT-system) might live happily with this decision, and profit from it. Then, suddenly, that decision turns out to be the root of a major and costly problem:
For example, the Log4j security disaster, known as “Log4Shell” (CVE-2021–44228), is a prime example of a widespread, critical, and easily exploitable vulnerability that emerged in December 2021. That widely used logging library for Java systems became a security nightmare once the vulnerability had been published.
Image sources
The images were generated by ChatGPT based upon prompts by the author.
Resources and Further Reading
Considering that technical and other debt are influential for most problems we encounter in IT systems, astonishingly little has been researched and published (niche scientific conferences being the exception).
- Philippe Kruchten, Robert Nord, and Ipek Ozkaya: Managing Technical Debt: Reducing Friction in Software Development. Addison-Wesley, 2019. Highly recommended for both architects and IT managers.
- Neil Ernst, Rick Kazman, and Julien Delange: Technical Debt in Practice: How to Find It and Fix It. MIT Press, 2021. Broad overview, with good in-depth coverage of many “debt-avoidance” practices. Recommended for architects and developers.
- Nikita Golovko: Technical Debt. Design, risk and beyond.
Acknowledgements
I would like to express my gratitude to my reviewers (Gerrit Beine, Sven Johann, and Dr. Nikita Golovko) for their assistance in clarifying certain points, removing bugs, and introducing new elements. Special kudos to “m”, world-class language-, punctuation- and grammar spotter. All remaining errors are mine.
[1] One special kind of problem I sometimes encountered during software reviews relates to data, e.g. data structures, database or table schemas, the correctness of the data itself, or similar issues. You may call these types of problems a category of their own, but you could also consider them part of structural, runtime, or technology debt. ↩
[2] “Code that is rewritten to manage technical debt typically involves a productivity loss and higher costs, often estimated at up to 25–30% of total project cost in large systems.” Source: Besker, T., Martini, A., & Bosch, J. (2018). Managing Technical Debt in Software Development: Current State of Practice. In Proceedings of the 2018 International Conference on Software Engineering: Software Engineering in Practice (ICSE–SEIP). ↩