This article is also available in German.
This article is part of a series
- Part 1: Managing Geopolitical Risks with Enterprise Architecture
- Part 2: Digital Sovereignty: Why Architecture Matters and How to Make Your Company Resilient
- Part 3: A Governance Framework for Digital Sovereignty
- Part 4: EU Data Act: The Beginning of the End for Cloud Monoculture?
- Part 5: Data Inventories in the EU Data Act: The Democratization of IoT Devices
- Part 6: The Path to Heterogeneous Cloud Platforms
- Part 7: Achieving Digital Sovereignty with Standard Software (this article)
- Part 8: The Sovereignty Trap: Between Tiananmen and Trump
- Part 9: Think Locally: On-Premise LLMs as Drivers of Competitive Advantage
- Part 10: From Data Graveyards to Knowledge Landscapes
- Part 11: Digital Sovereignty as Self-Understanding
This article was translated from the original German version using AI-assisted translation.
A thought experiment: Let’s imagine two companies—specifically two IT consulting firms—with very different strategic orientations. Company A wants to be completely independent and builds its entire IT landscape in-house: from time tracking to invoicing to knowledge management. No dependencies, everything under its own control. Company B pursues the opposite strategy: it wants to act quickly in the market, invest no time in custom development, and rely exclusively on off-the-shelf standard components. A complete ERP suite from a major vendor is therefore its choice.
Which of these companies is truly sovereign? Looking more closely, you quickly realize: neither. Company A has boxed itself in through its autarky—resources are overloaded, responsiveness is practically non-existent. Company B sits in the “golden cage” of its vendor—every change costs time, money, and freedom. Both have hardly any real room for manoeuvre in the market.
Digital sovereignty therefore does not mean autarky. It means remaining capable of action: being able to consciously decide at any time where standard solutions suffice and where custom development is necessary to strengthen the company’s own business model.
Simply put, this is a trade-off between resource optimization (through the use of standard software) and autarky. The sweet spot of capability lies in the middle—and has to be consciously balanced.
But that is not all: even a planned mix of standard software and in-house development can fail if this “best-of-breed” approach is not guided by clear architectural guardrails (a macro-architecture) into a maintainable and extensible system-of-systems. Without these guidelines, there is a risk of creating an opaque patchwork of inconsistently integrated standard software components that can hardly be maintained or extended—and thus, once again, an inability to act.
We therefore need clear rules for the planning and integration of standard software. We examine both aspects below: first, how companies can use structured methods to decide where custom development or standard software makes sense; then, how a macro-architecture can guide the use of standard software in such a way that dependencies are reduced and the ability to act is preserved.
Strategic Planning: Where Standard Software Is Most Appropriate
To use standard software sensibly, you first need transparency about your own IT landscape. The goal is to understand precisely which functional capabilities a company needs, which of these are strategically crucial, and where efficiency through standardization can take priority. This creates a solid decision-making foundation: where do we develop in-house – and where do we rely on standard software?
The first step is creating a map of functional capabilities, often called a Capability Map. It shows which building blocks – from customer processes to internal administrative functions to industry-specific core processes – the company needs and how they’re connected along the value chain.
To develop this map, combining two proven approaches makes sense: TOGAF as a framework provides the strategic structure, terminology, and phase models, while Domain-driven Design (DDD) with practical tools like Event Storming or the Domain Modeling Starter Process helps work out the functional relationships in detail and together with domain experts. Additionally, Domain Storytelling interviews, analysis of existing systems, and industry-standard reference models offer valuable insights.
The result is a hierarchical Capability Map that organizes the identified capabilities and visualizes them along the value chain. It forms the foundation for well-informed decisions: where is custom development worth pursuing because it strengthens the company’s differentiation, and where do we rely on standard solutions to gain speed and efficiency?
The figure “Capability Map” shows an abbreviated view of how such a map could look for an IT consulting firm (based on ArchiMate). Along the defined Value Stream from “Acquire Project” to “Close Project,” the required capabilities are arranged hierarchically. This makes it easy to see which capabilities are relevant across the entire value creation process.
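To make the structure of such a map more concrete, the sketch below models a small excerpt of it in Java. Only the capability and value stream names mentioned in this article are taken from the text; the grouping under “Project Delivery”, the “Execute Project” stage, and all type names are illustrative assumptions.

```java
import java.util.List;

// Minimal sketch of a hierarchical Capability Map (illustrative names and nesting).
// Each capability is assigned to a stage of the value stream and may contain
// sub-capabilities.
public class CapabilityMapSketch {

    // Stages of the value stream; "Execute Project" is an assumed middle stage.
    enum ValueStreamStage { ACQUIRE_PROJECT, EXECUTE_PROJECT, CLOSE_PROJECT }

    record Capability(String name, ValueStreamStage stage, List<Capability> children) {
        static Capability leaf(String name, ValueStreamStage stage) {
            return new Capability(name, stage, List.of());
        }
    }

    public static void main(String[] args) {
        // Abbreviated map for the IT consulting example.
        Capability map = new Capability("IT Consulting Firm", null, List.of(
                new Capability("Project Delivery", ValueStreamStage.EXECUTE_PROJECT, List.of(
                        Capability.leaf("Project Execution & Steering", ValueStreamStage.EXECUTE_PROJECT),
                        Capability.leaf("Time & Effort Tracking", ValueStreamStage.EXECUTE_PROJECT))),
                Capability.leaf("Invoicing", ValueStreamStage.CLOSE_PROJECT)));

        print(map, 0);
    }

    // Prints the hierarchy with indentation to show nesting along the value stream.
    static void print(Capability capability, int depth) {
        System.out.println("  ".repeat(depth) + capability.name()
                + (capability.stage() != null ? " [" + capability.stage() + "]" : ""));
        capability.children().forEach(child -> print(child, depth + 1));
    }
}
```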
The evaluation of these capabilities is done using proven methods such as Wardley Maps, DDD Core-Domain-Charts, or strategic workshops. It’s important to reflect on the maturity level, strategic importance, and desired market differentiation of each capability.
This allows you to identify three groups:
- Core Capabilities: Areas where we must be better than the market—here, custom development is worth the investment.
- Supporting Capabilities: Supporting functions that are important but not differentiating—standard software is often the best fit here.
- Commodity Capabilities: Interchangeable areas where efficiency is the priority—standard software is the preferred approach here.
In our IT consulting example, “Project Execution & Steering” is considered Core because it directly influences value creation in customer projects. “Time & Effort Tracking,” by contrast, follows largely standardized processes and offers little differentiation potential; it is classified as a Commodity (Generic in DDD terms), making standard software the logical choice here.
This analysis results in a make-or-buy strategy that determines for each capability whether it should be developed internally or handled by standard solutions. A color-coded Capability Map (e.g., blue for custom development, green for standard software) creates transparency and serves as a central management tool for further planning.
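How such a classification can be carried over into a sourcing decision is sketched below. The simplified rule (Core capabilities are built in-house, everything else is covered by standard software) follows the three groups described above; enum, class, and method names are illustrative.

```java
import java.util.Map;

// Sketch: deriving a make-or-buy decision per capability from its strategic
// classification. The rule is a simplification of the three groups described above.
public class MakeOrBuySketch {

    enum Classification { CORE, SUPPORTING, COMMODITY }

    enum Sourcing { CUSTOM_DEVELOPMENT, STANDARD_SOFTWARE }

    // Core capabilities justify custom development; supporting and commodity
    // capabilities default to standard software.
    static Sourcing decide(Classification classification) {
        return classification == Classification.CORE
                ? Sourcing.CUSTOM_DEVELOPMENT
                : Sourcing.STANDARD_SOFTWARE;
    }

    public static void main(String[] args) {
        // Classification of the two capabilities from the IT consulting example.
        Map<String, Classification> capabilities = Map.of(
                "Project Execution & Steering", Classification.CORE,
                "Time & Effort Tracking", Classification.COMMODITY);

        capabilities.forEach((name, classification) ->
                System.out.printf("%-30s -> %s%n", name, decide(classification)));
    }
}
```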
This creates an IT landscape where standard software is purposefully deployed where it creates freedom—and custom development takes place where it delivers the greatest added value. This is the first step toward using standard software not randomly, but deliberately and sovereignly.
Architectural Guardrails for Greater Freedom
The decision of where to deploy standard software is only the first part. Equally important is how this standard software is integrated. Without clear architectural specifications, even the smartest make-or-buy strategy can quickly result in a confusing, difficult-to-change patchwork.
This is why a macro-architecture is needed: overarching guidelines that apply to all systems – regardless of whether they’re custom developments or standard products. This architecture limits itself to a manageable number of central topic areas while ensuring that common standards exist at the crucial interfaces. This creates homogeneity within a heterogeneous system-of-systems: individual systems can be developed or replaced independently without destabilizing the overall landscape.
This is especially essential when deploying standard software. Integration guidelines must ensure that coupling between systems remains as loose as possible so that a vendor switch is possible without major effort when needed. In practice, asynchronous, event-driven architectures have proven to be effective patterns because they decouple systems and enable flexible responses to changes. Accordingly, when selecting standard software, companies should ensure that it offers suitable APIs and ideally already publishes events on its own – for example, via webhooks.
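On the receiving side, such a webhook-based event feed can be picked up with very little code. The following sketch uses the HTTP server built into the JDK; the endpoint path, the port, and the payload handling are assumptions and not tied to any particular product.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal sketch of a webhook endpoint receiving event notifications from a
// standard software product. Path, port, and payload format are assumptions.
public class WebhookReceiverSketch {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // The vendor's product would be configured to POST its events to this URL.
        server.createContext("/webhooks/time-tracking", exchange -> {
            String payload = new String(exchange.getRequestBody().readAllBytes(),
                    StandardCharsets.UTF_8);

            // Hand the raw payload over to an anti-corruption wrapper (sketched
            // further below) instead of forwarding the proprietary structure as-is.
            System.out.println("Received vendor event: " + payload);

            exchange.sendResponseHeaders(204, -1); // acknowledge without a body
            exchange.close();
        });

        server.start();
        System.out.println("Webhook receiver listening on port 8080");
    }
}
```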
A common mistake is directly forwarding proprietary data structures to other systems. This creates dependencies that are difficult to resolve, making later vendor switches extremely expensive and risky. Instead, incoming data should be transformed into company-specific formats via wrappers. While this initially increases integration effort, it reduces long-term complexity and facilitates reusability.
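Such a wrapper might look like the sketch below, which maps a hypothetical proprietary time-tracking record onto a company-defined “Time Tracked” event; all field names are invented for illustration and not taken from any real product.

```java
import java.time.Duration;
import java.time.LocalDate;

// Sketch of a wrapper (anti-corruption layer) that translates a vendor-specific
// time-tracking record into a company-defined event format. All field names are
// illustrative; the real mapping depends on the product actually in use.
public class TimeTrackingWrapperSketch {

    // Hypothetical proprietary structure as delivered by the standard software.
    record VendorTimeEntry(String emp_id, String proj_code, String date, int mins) {}

    // Company-specific event format used on the internal event bus.
    record TimeTrackedEvent(String employeeId, String projectId,
                            LocalDate workDate, Duration effort) {}

    // The only place in the landscape that needs to know the vendor's structure.
    static TimeTrackedEvent toDomainEvent(VendorTimeEntry entry) {
        return new TimeTrackedEvent(
                entry.emp_id(),
                entry.proj_code(),
                LocalDate.parse(entry.date()),
                Duration.ofMinutes(entry.mins()));
    }

    public static void main(String[] args) {
        VendorTimeEntry vendorEntry =
                new VendorTimeEntry("E-4711", "P-2024-017", "2024-05-13", 90);
        System.out.println(toDomainEvent(vendorEntry));
    }
}
```

The point of this design is that only the wrapper depends on the vendor format; every other system sees only the company-defined event, which keeps a later vendor switch a local change.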
Finally, when selecting central integration components – for example, the event broker – it’s important to ensure they’re based on open and standardized formats. Proprietary protocols at this critical point would again create a dependency for the entire system that is difficult to remove. An open, ideally open-source-based approach protects long-term independence.
For our IT consulting example, this results in an implementation that processes the internal events from the purchased standard software for “Time & Effort Tracking” via an adapter and sends them to the central event bus (e.g., implemented with Apache Kafka). The self-developed solution for “Project Execution & Steering” can then listen to “Time Tracked” events and generate forecasts for the respective manager – without direct dependency on the standard software vendor.
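Condensed into code, this adapter and the consuming side might look roughly like the sketch below, based on the Apache Kafka Java client; the topic name, the event payload, and the forecast logic are placeholders.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

// Sketch of the integration described above: an adapter publishes "Time Tracked"
// events from the time-tracking product onto a Kafka topic, and the self-developed
// project-steering solution consumes them. Topic name and payload are placeholders.
public class TimeTrackingKafkaAdapterSketch {

    static final String TOPIC = "time-tracking.time-tracked"; // placeholder topic name

    // Adapter side: publish a company-format event to the central event bus.
    static void publish(String projectId, String eventJson) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keyed by project so that all events of one project stay in order.
            producer.send(new ProducerRecord<>(TOPIC, projectId, eventJson));
        }
    }

    // Consumer side: the project-steering solution listens for "Time Tracked" events.
    static void listen() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "project-steering");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of(TOPIC));
            while (true) {
                consumer.poll(Duration.ofMillis(500)).forEach(rec ->
                        // Placeholder for the actual forecast calculation per project.
                        System.out.printf("Update forecast for %s with %s%n",
                                rec.key(), rec.value()));
            }
        }
    }

    public static void main(String[] args) {
        // Example run: publish one event; listen() would run inside the steering service.
        publish("P-2024-017",
                "{\"employeeId\":\"E-4711\",\"workDate\":\"2024-05-13\",\"effortMinutes\":90}");
    }
}
```

Neither side knows the vendor’s API; both share only the company-defined topic and payload, which is what makes a later vendor switch a change confined to the adapter.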
This keeps integration flexible, and new or changed systems can be connected with relatively little effort.
With these rules, a framework emerges that seamlessly integrates standard software into the IT landscape without limiting the ability to act.