This blog post is part of a series.
- Part 1: What’s Wrong with the Current OWASP Microservice Security Cheat Sheet?
- Part 2: Updating OWASP’s Microservice Security Cheat Sheet: Core Concepts
- Part 3: Updating OWASP’s Microservice Security Cheat Sheet: Authentication Patterns
- Part 4: Updating OWASP’s Microservice Security Cheat Sheet: Identity Propagation Patterns (this post)
In my previous post, I took a closer look at common authentication patterns. But as I mentioned at the end of that post, authentication alone is not enough. Most of our systems don’t consist of a single service — there are many services communicating with each other — which means we also need to pay attention to how the identities of our subjects “flow” through this complex landscape. As I wrote both at the end of the last post and in the introduction to this one:
> Without trustworthy identity propagation, even strong initial authentication can be undermined — weakening trust boundaries and ultimately impairing the system’s ability to make reliable authorization decisions.
And that is exactly the focus of this post — introducing you to common identity propagation strategies, exploring their security implications, and how they influence system design and trust enforcement.
🙏 As before, the content below is intended as a contribution to the official cheat sheet, and your feedback will be invaluable to help shape a practical and effective update — please share your thoughts by commenting on the corresponding LinkedIn posts, sending me a direct message there, or joining the discussion in heimdall’s Discord server in the #off-topic channel.
Identity Propagation Patterns - An Overview
The current cheat sheet does cover identity propagation, but only briefly and without much detail on the bigger architectural picture. It appears in the External Entity Identity Propagation section, which covers three approaches — though one is a bit spread out in the text. While the descriptions are clear, they don’t really explore the trade-offs or design choices behind them.
With this revision, I’m giving identity propagation more attention and treating it, as mentioned in the introduction to this post, as a key topic on its own. Here, I introduce four commonly used patterns — including the three from the cheat sheet — and lay them out along a spectrum that helps explain how they differ architecturally and in terms of trust. I hope this makes it easier to understand how identities move through a system and what impact your choices have on security, complexity, and service coupling.
As mentioned in the previous section, trustworthy identity propagation — the focus of this section — is essential for maintaining strong trust boundaries across a system. Architectures following Zero Trust principles exemplify this need, as they emphasize strict access control and continuous verification. This section introduces commonly used identity propagation patterns — that is, the ways in which identity context flows between services. These patterns influence where and how access control decisions are made, the reliability and trustworthiness of those decisions, and ultimately how effectively least privilege can be enforced. They also differ in how tightly internal services are coupled to the external authentication mechanisms and identity representations used at the boundary.
Some identity propagation patterns aim to decouple internal service logic from specific external authentication protocols and data formats. This approach, often called protocol-agnostic or token-agnostic identity propagation, means internal services consume a normalized, unified identity representation that abstracts away the details of the original authentication protocol and authentication data (including both primary credentials and authentication proofs). This abstraction enables internal services to remain stable, simplified, and focused on authorization logic, even as external authentication methods evolve or change.
At one end of the spectrum, some patterns directly forward externally issued authentication data (such as OAuth2 tokens, session cookies, or certificates) downstream, requiring internal services to understand and process the original authentication protocols. This approach can increase complexity and trust assumptions within internal services. At the other end, a trusted system component at the edge transforms incoming authentication data into cryptographically signed, normalized identity structures. These structures abstract away the original protocol and data format, allowing internal services to remain agnostic to how authentication was performed. By providing tamper-resistant, verifiable representations of identity, they establish strong trust boundaries across service interactions and enable auditable access decisions, making them especially effective for enforcing least privilege in distributed environments.
Between these extremes exist intermediate patterns where internal services rely on simplified identity representations issued or transformed by upstream services but without cryptographic protections, requiring implicit trust between services.
Each pattern involves trade-offs between implementation complexity, security, trust, privacy and operational overhead. Choosing the appropriate identity propagation approach depends on the system’s security posture, scalability requirements, and the desired level of trust between internal components.
Understanding these trade-offs in concrete terms requires examining how identity propagation is commonly implemented in practice. The following sections describe representative patterns along this spectrum, highlighting their characteristics, benefits, and limitations.
External Identity Propagation
This pattern doesn’t have its own name in the current cheat sheet — it’s the approach that is described, somewhat scattered, across the main External Entity Identity Propagation section.
In this pattern, the edge component forwards the externally received authentication data (e.g., an access token, ID token, session cookie, or certificate) directly to internal services without transformation. The internal services are responsible for verifying the received authentication data, extracting the identity context (such as the user ID or other attributes), and making access control decisions based on it. When an internal service needs to communicate with another service, it simply forwards the authentication data further downstream. This verification may require contacting a Verifier, which, depending on the authentication protocol and data used, could be the authorization server that issued the token or, for example, an OCSP responder that checks the revocation status of a certificate.
As said, the actual verification of the authentication data, represented by the dotted lines in steps 3 and 5 of the diagram above, depends on the type of authentication data used. For example, in the case of an opaque token, each service must call the appropriate identity provider or authorization server endpoint to retrieve the associated data. If the token is self-descriptive, such as a JWT, the service needs the corresponding key material to verify its signature, and so on. A sketch of this flow follows below.
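To make this concrete, here is a minimal Go sketch of an internal service in this pattern, assuming the opaque-token case: it verifies the forwarded token via RFC 7662 token introspection and passes the original Authorization header on unchanged to the next service. The introspection endpoint, client credentials, and downstream URL are illustrative placeholders, not prescriptions.

```go
// Sketch: an internal service that verifies a forwarded opaque token itself
// (RFC 7662 introspection) and forwards the external token as-is downstream.
package main

import (
	"encoding/json"
	"net/http"
	"net/url"
	"strings"
)

const introspectionURL = "https://auth.example.internal/oauth2/introspect" // placeholder

// introspect asks the authorization server whether the token is still active.
// In this pattern, every service along the call chain repeats this step.
func introspect(token string) (bool, error) {
	form := url.Values{"token": {token}}
	req, err := http.NewRequest(http.MethodPost, introspectionURL, strings.NewReader(form.Encode()))
	if err != nil {
		return false, err
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	req.SetBasicAuth("some-service", "some-secret") // placeholder client credentials
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var result struct {
		Active bool `json:"active"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return false, err
	}
	return result.Active, nil
}

func handler(w http.ResponseWriter, r *http.Request) {
	authz := r.Header.Get("Authorization")
	active, err := introspect(strings.TrimPrefix(authz, "Bearer "))
	if err != nil || !active {
		http.Error(w, "unauthorized", http.StatusUnauthorized)
		return
	}
	// Downstream call: the external authentication data is forwarded unchanged.
	downstream, err := http.NewRequest(http.MethodGet, "https://orders.example.internal/items", nil)
	if err != nil {
		http.Error(w, "internal error", http.StatusInternalServerError)
		return
	}
	downstream.Header.Set("Authorization", authz)
	if resp, err := http.DefaultClient.Do(downstream); err == nil {
		resp.Body.Close()
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
```

Note how the service stays coupled to the external token format and to the introspection endpoint — exactly the coupling listed under the cons below.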
Pros
- Minimal edge logic required: The edge mainly forwards the authentication data, reducing its complexity. Optionally, it may also verify the validity of the authentication data.
- No additional infrastructure needed: Internal services use the same authentication data as the edge, avoiding the need for internal signing or identity transformation.
Cons
- Tight coupling to external protocols: Each microservice must understand and correctly handle potentially multiple types of external authentication data and formats (e.g., OAuth2, OIDC, cookies). As a result, services must support protocol-specific logic (e.g., JWT parsing, OAuth2 token validation, cookie decoding) and are exposed to external semantics, expiration rules, and revocation mechanisms, increasing implementation complexity and brittleness. Changes to external identity providers or protocols typically break internal service behavior.
- Increased security risk: If external authentication data is leaked, any internal service that is exposed, intentionally or not, can be accessed directly using the leaked token.
- Unsuitable for Zero Trust or multi-tenant environments: Trust assumptions and lack of verifiability conflict with the security guarantees required in these environments.
- Privacy concern: Because externally visible authentication data is reused internally, identifiers intended for internal use (e.g., subject IDs in JWTs) may become externally observable. This can violate privacy requirements by enabling cross-context linkability and may conflict with regulations such as the GDPR or the CCPA.
Simple Service-Level Identity Forwarding
This pattern is explicitly addressed in the current cheat sheet as Sending the external entity identity as clear or self-signed data structures. In this revision, I call it Simple Service-Level Identity Forwarding — because, well, naming is hard 😉.
This pattern builds on the previous one but introduces a lightweight form of internal identity abstraction. While the edge component still forwards the externally received authentication data (e.g., an access token, ID token, session cookie, or certificate) to internal services, each microservice no longer forwards this data unchanged. Instead, when making calls to downstream services, a microservice extracts the relevant identity information (e.g., user ID, roles, scopes) from the incoming request and creates a simplified representation of the identity, such as a plain JSON object, a self-signed JWT, or even a single value embedded in a query or path parameter. As with the previous pattern, verifying the initially received authentication data may require contacting a Verifier, which, depending on the authentication protocol and data used, could be the authorization server that issued the token or, for example, an OCSP responder that checks the revocation status of a certificate.
This internal identity representation is not strongly cryptographically protected and often relies on implicit trust between services. As a result, downstream services must trust the integrity and correctness of the identity information forwarded by their upstream callers.
As with the previous pattern, the actual verification of the received authentication data, represented by the dotted line in step 3 of the diagram above, depends on the type of authentication data used. For example, in the case of an opaque token, the service must call the appropriate identity provider or authorization server endpoint to retrieve the associated data. If the token is self-descriptive, such as a JWT, the service needs the corresponding key material to verify its signature, and so on. The forwarding step is sketched below.
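As an illustration, the following Go sketch shows the forwarding step of this pattern: after the external token has been verified (not shown here), the service passes only a simplified, unsigned identity representation to its callee via a plain header. The header name, the downstream URL, and the identity fields are hypothetical.

```go
// Sketch: forwarding a simplified, unprotected identity representation
// downstream. The callee has no way to verify it and must trust the caller.
package main

import (
	"encoding/json"
	"net/http"
)

// identity is the simplified internal representation extracted from the
// already verified external authentication data.
type identity struct {
	UserID string   `json:"user_id"`
	Roles  []string `json:"roles"`
}

func callDownstream(id identity) error {
	payload, err := json.Marshal(id)
	if err != nil {
		return err
	}
	req, err := http.NewRequest(http.MethodGet, "https://billing.example.internal/invoices", nil)
	if err != nil {
		return err
	}
	// No signature and no integrity protection: implicit trust between services.
	req.Header.Set("X-Identity", string(payload))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}

func main() {
	_ = callDownstream(identity{UserID: "42", Roles: []string{"customer"}})
}
```

Anyone who can reach the downstream service and set this header can impersonate any user, which is why the cons below weigh so heavily.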
Pros
- Simple and lightweight: Requires minimal implementation effort and no complex cryptography or signing infrastructure.
- Protocol abstraction: Internal services operate on simplified identity representations, avoiding the need to parse or validate external authentication protocols.
- Flexible identity forwarding: Enables propagation of identity context without dependency on a central trusted issuer for every internal call.
Cons
- High trust requirement: Downstream services must trust upstream callers to provide unaltered and accurate identity information and related data.
- Vulnerable to spoofing: Lack of cryptographic protection makes identity data susceptible to tampering.
- Unsuitable for Zero Trust or multi-tenant environments: Trust assumptions and lack of verifiability conflict with the security guarantees required in these environments.
- Protocol complexity leakage: If any internal service becomes externally exposed, support for full external authentication mechanisms is required to avoid API abuse.
- Privacy concern: Because externally visible authentication data is reused internally, identifiers intended for internal use (e.g., subject IDs in JWTs) may become externally observable. This can violate privacy requirements by enabling cross-context linkability and may conflict with regulations such as the GDPR or the CCPA.
This pattern tends to invite the risks commonly associated with Insecure Direct Object References (IDOR), resulting in data exposure. I hope I’m not stepping on anyone’s toes — but I just have to say it.
It’s easy to fall into the trap of passing user IDs through URLs, headers, or JSON payloads without verification — trusting upstream services entirely, with no signatures, no integrity checks, and no questions asked by downstream systems. While this might seem like a convenient internal optimization, it creates a fragile foundation.
Often the response is:
> But it’s internal — what should happen? This is a common industry standard. Everyone does it.
Well… things do happen. Remote code execution, lateral movement, privilege escalation, and data leaks are all real possibilities once an attacker gains a foothold. Internal is not a security boundary — and never was. If that assumption breaks down — and it often does — these design choices can amplify the damage.
That said, if you choose it, go in with open eyes — and be honest about the trade-offs. You now know them.
Token Exchange-Based Identity Propagation
This pattern is not covered by the current cheat sheet.
This pattern builds upon the previous one by introducing a trusted intermediary, an authorization server, through use of the OAuth2 Token Exchange or the newer OAuth2 Transaction Tokens (draft) protocol. A microservice that receives a request containing externally issued authentication data (e.g., an access token) exchanges it for a new, signed access token issued by the authorization server. This exchanged token is specifically scoped for a downstream internal service and is then propagated as part of the internal call. As with the previous patterns, the verification of the original token may happen with the help of a Verifier. The issuance of a new token is, however, the responsibility of the Security Token Service (STS), which also assumes the role of the Verifier for tokens it has issued. Both roles might be implemented by the same authorization server, but don’t need to be.
Downstream services trust the token issued by the STS rather than the one used by the external client (“Some Client” in the diagram above). The pattern improves the trust model and strengthens identity guarantees, but is tightly coupled to the OAuth2 protocol family and its associated token types.
The actual verification of all involved tokens, represented by the dotted lines in steps 3 and 6 of the diagram above, depends on the type of token used. For example, in the case of an opaque token, each service must call the appropriate identity provider endpoint to retrieve the associated data. If the token is self-descriptive, such as a JWT, the service needs the corresponding key material to verify its signature. The exchange step itself is sketched below.
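For illustration, here is a minimal Go sketch of the exchange step as defined by OAuth2 Token Exchange (RFC 8693): the service trades the externally issued token at the STS’s token endpoint for a new token scoped to a downstream audience. The endpoint URL, client credentials, and audience value are placeholders.

```go
// Sketch: exchanging an externally issued access token for an internal,
// audience-scoped token via OAuth2 Token Exchange (RFC 8693).
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

func exchangeToken(subjectToken, audience string) (string, error) {
	form := url.Values{
		"grant_type":           {"urn:ietf:params:oauth:grant-type:token-exchange"},
		"subject_token":        {subjectToken},
		"subject_token_type":   {"urn:ietf:params:oauth:token-type:access_token"},
		"requested_token_type": {"urn:ietf:params:oauth:token-type:access_token"},
		"audience":             {audience},
	}
	req, err := http.NewRequest(http.MethodPost,
		"https://sts.example.internal/oauth2/token", strings.NewReader(form.Encode()))
	if err != nil {
		return "", err
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	req.SetBasicAuth("orders-service", "some-secret") // the service authenticates itself to the STS
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("token exchange failed: %s", resp.Status)
	}
	var out struct {
		AccessToken string `json:"access_token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.AccessToken, nil
}

func main() {
	token, err := exchangeToken("externally-issued-token", "billing-service")
	if err != nil {
		fmt.Println("exchange failed:", err)
		return
	}
	fmt.Println("scoped internal token:", token)
}
```

In practice, the exchanged token would typically be cached for its lifetime to mitigate the latency overhead mentioned in the cons below.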
Pros
- Improved trust model: Downstream services do not need to trust upstream service implementations, only the STS.
- Cryptographically verifiable identity: Issued tokens are signed by an STS, offering strong integrity guarantees.
- Scoping and audience control: Exchanged tokens can be restricted in scope and audience, reducing the risk of token misuse.
Cons
- OAuth2-specific: Relies on OAuth2 Token Exchange or OAuth2 Transaction Tokens (draft), limiting its applicability to systems using that protocol family for externally visible authentication data.
- Service-side complexity: Application code must integrate with the STS to handle token exchange logic and manage caching or retries.
- Latency overhead: The token exchange process introduces additional network round-trips per request flow unless aggressively optimized.
- Operational dependency on the STS: Introduces runtime dependency on the STS implementation availability and scalability.
Protocol-Agnostic Identity Propagation
This pattern is roughly covered in the current cheat sheet’s Using a data structure signed by a trusted issuer section. That description appears to be primarily based on the implementation described by Netflix, and while it captures the core idea well, it also blends in Netflix-specific details and some nuances that may cause confusion. In the version below, I’ve aimed to provide a clearer, more concise explanation that doesn’t emphasize any particular implementation — Netflix’s or otherwise — and instead focuses on the architectural pattern itself and its implications.
Small advertisement: If you’re looking for an off-the-shelf option that supports this pattern (and several others described in this entire blog post series), feel free to check out my project heimdall. It goes well beyond this specific approach and is designed to be flexible and extensible for various scenarios.
By the way, Netflix refers to this pattern as “Token Agnostic Identity Propagation” in this blog post, which is a great name. That said, I’ve often found that as soon as the word “token” comes up, people instinctively think of OAuth2 or OIDC, even though tokens can also refer to cookies, certificates, or other artifacts. To avoid that confusion, I chose a more neutral name here. Naming is hard 😁.
The external request is authenticated at the system edge by a trusted component, which then generates a cryptographically signed (and/or encrypted) data structure representing the external entity’s identity and attributes (e.g., user ID, roles, permissions), typically a self-contained, verifiable structure such as a JWT or a proprietary signed format. By doing that, the edge component assumes the role of a Security Token Service (STS). This signed identity structure, hereafter referred to as a token, is propagated downstream to internal microservices. Internal services trust the signature from the edge issuer and use the token to make access control decisions.
As with the previous pattern, the verification of the original authentication data may require contacting a Verifier. The implementation of the Verifier depends on the protocol and data format used — e.g., it could be the authorization server that issued a token, or an OCSP responder used to check the revocation status of a certificate. Unlike in the previous patterns, only the edge component is responsible for that verification. The specific verification process depends on the type and format of the authentication data, denoted by the dotted line in step 2.
Further downstream, the microservices validate the signed token issued by the trusted edge component. Each microservice must have access to the corresponding verification key to validate the authenticity of this token. The corresponding verification steps are denoted by the dotted lines in steps 5 and 7. This is where the trusted component at the edge assumes the role of a Verifier.
It’s worth noting that the edge-component roles shown in the diagram above — Edge Proxy, STS, and Verifier — may all be implemented within a single technical component, or split across multiple cooperating services. For example, a proxy might delegate the authentication and token issuance logic to another service via a mechanism typically referred to as forward auth or external auth. That service could implement the STS and Verifier logic itself or, in turn, delegate token issuance to an existing authorization server using mechanisms such as the OAuth2 Token Exchange, as described in the previous pattern. The token-minting step is sketched below.
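To ground this, the following Go sketch shows the minting and verification of such an internal token, assuming the edge has already verified the external authentication data. It uses the github.com/golang-jwt/jwt/v5 library with an Ed25519 key; the claim names, the two-minute lifetime, and the inline key generation are illustrative simplifications (in a real deployment, keys would be provisioned and rotated via key management, and the public key distributed, for example, via a JWKS endpoint).

```go
// Sketch: the edge component, acting as STS, mints a short-lived internal
// token after external authentication has succeeded (verification omitted).
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"time"

	"github.com/golang-jwt/jwt/v5"
)

func main() {
	// Inline key generation keeps the sketch self-contained; real deployments
	// provision and rotate keys via key management.
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	// Normalized internal identity, independent of the external protocol.
	claims := jwt.MapClaims{
		"iss":   "https://edge.example.internal",
		"sub":   "internal-subject-42", // mapped internal identifier, not the external one
		"aud":   "orders-service",
		"iat":   time.Now().Unix(),
		"exp":   time.Now().Add(2 * time.Minute).Unix(), // short-lived, easing revocation concerns
		"roles": []string{"customer"},
	}
	internalToken, err := jwt.NewWithClaims(jwt.SigningMethodEdDSA, claims).SignedString(priv)
	if err != nil {
		panic(err)
	}

	// Internal services verify the signature using the edge's public key.
	parsed, err := jwt.Parse(internalToken, func(t *jwt.Token) (interface{}, error) {
		return pub, nil
	}, jwt.WithValidMethods([]string{"EdDSA"}))
	if err != nil || !parsed.Valid {
		panic("verification failed")
	}
}
```

Note that the token carries a mapped internal subject identifier rather than the external one, which supports the privacy point discussed later in this post.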
Pros
- Cryptographic trust: Signed tokens provide strong guarantees about the integrity and authenticity of the propagated identity.
- Decoupling from external authentication data and context: Internal services neither handle external protocols nor need to differentiate whether requests originate from first- or third-party actors, simplifying their logic and trust assumptions.
- Rich identity context: Allows inclusion of fine-grained identity and authorization metadata.
- Secure across trust boundaries: Suitable for multi-tenant and Zero Trust environments.
- Separation of external and internal identities: Enables mapping externally known identifiers to distinct internal representations, preventing direct exposure of internal identifiers and thereby enhancing privacy by reducing correlation and tracking risks across domains.
Cons
- Key management complexity: Requires secure handling and rotation of signing keys to maintain trust.
- Token size overhead: Signed tokens issued by the edge component may be large, increasing network overhead.
- Revocation challenges: Once issued, tokens may be valid for many services until expiration, complicating immediate revocation. This can, however, be mitigated by issuing short-lived tokens and tailoring subject structures to individual downstream services.
- Increased complexity at the edge: The edge component must handle external authentication data verification as well as internal token generation and signing, making it a critical security component.
On Privacy by Design
Privacy concerns — particularly around cross-context linkability and the risk of exposing internal identifiers — affect all identity propagation patterns, though their severity depends on how externally received authentication data is handled.
Implementations of patterns like External Identity Propagation and Simple Service-Level Identity Forwarding typically reuse externally visible authentication data directly within the system. This increases the risk that internal identifiers (e.g., sub claims in JWTs) become externally observable, enabling correlation of user activity across contexts. Such reuse undermines core privacy goals like pseudonymisation and data minimisation and conflicts with principles of integrity and confidentiality — all central to privacy-by-design thinking.
In contrast, patterns like Token Exchange-Based Identity Propagation and Protocol-Agnostic Identity Propagation help enforce privacy boundaries by transforming or isolating authentication data before it’s used internally. That doesn’t mean these patterns — or their specific implementations — are immune to privacy risks. They simply make it easier to adopt techniques such as opaque tokens, session-referencing cookies, or identifier mapping, which reduce unnecessary exposure of user-specific identifiers. Even so, mapped identifiers can still reveal the existence of a persistent relationship with the system, which may be problematic in certain contexts. Still, these patterns embody privacy-by-design principles more effectively — and as a positive side effect, tend to align well with legal requirements such as the GDPR (Art. 5(1)(b, c, f), Art. 25, Art. 32, Recitals 26 and 30), CCPA, and similar frameworks.
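To illustrate the identifier-mapping technique mentioned above, here is a small Go sketch that derives a stable, audience-specific pseudonym from an external subject identifier using an HMAC. The key handling and the per-audience scoping are assumptions made for the sake of the example.

```go
// Sketch: deriving audience-specific pseudonyms from an external subject
// identifier with an HMAC, so the external identifier never leaves the edge.
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// pseudonymFor returns a deterministic internal identifier. Each audience
// gets a different pseudonym, limiting cross-context linkability.
func pseudonymFor(externalSub, audience string, key []byte) string {
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(audience + ":" + externalSub))
	return base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
}

func main() {
	key := []byte("replace-with-a-managed-secret") // placeholder
	fmt.Println(pseudonymFor("auth0|12345", "orders-service", key))
	fmt.Println(pseudonymFor("auth0|12345", "billing-service", key)) // differs
}
```

Because each downstream audience sees a different pseudonym, two services cannot trivially correlate the same user across contexts, which directly addresses the linkability concern raised above.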
Frankly speaking, this section was added later in the process. After re-reading the earlier patterns a few times, I realised that the privacy impact differences between them might not be immediately obvious. To make those contrasts clearer — and to explain why certain cons appear in the early patterns but not the later ones — I decided to add a quick recap. Hopefully, it makes the trade-offs easier to spot.
That re-reading also reminded me of a talk by Wojciech Dworakowski at the last OWASP AppSec EU in Barcelona, where he showed how much an attacker can learn simply by inspecting a JWT — from user identifiers and account structure to hints about business logic — and how that knowledge can be used to bypass controls. It’s a good reminder that data minimisation and token opacity aren’t just best practices. They’re central to both privacy and security.