Cooking with Onions: Inward-Pointing Arrows

Some violations of our architectural principles are easy to spot. Others hide in plain sight, and neither static analysis nor a shallow code review will help you spot them.

The onion architecture is an established approach to structuring applications. You might’ve read about it on this blog before, in an in-depth series of posts penned by my workmate Christian. Let’s skip the introductions and begin by recalling the principles of the onion approach. In essence, there are just two ground rules:

  1. we separate responsibilities of our application into concentric layers, and
  2. we allow layers to depend only on what’s inside.

We end up with a structure resembling an onion. Details depend on the particular variety, but we can often identify some common features.

  • An innermost layer consisting of our business objects and logic, devoid of technological concepts and jargon.
  • A middle layer orchestrating use cases between our business objects, still in abstract terms.
  • An outer layer bringing in concrete technology, e.g. HTTP, PostgreSQL, or XML, and injecting it into our inner layers.

Separating responsibilities and enforcing the direction in which dependencies point brings immediate wins. Firstly, our core business logic is decoupled from technological details. It can be unit-tested quickly and in isolation. There’s less to mock. Secondly, architectural constraints encourage grouping side effects into designated silos. This in turn makes the system easier to reason about and more likely to fit into our head. Thirdly, our technological decisions are easier to revise. The business logic and use cases neither know nor care whether they are triggered by an HTTP request or a RabbitMQ message; whether they persist into a database table or a Kafka topic. It’s all abstracted away.

Three concentric layers with an inward pointing arrow.

The dependency rule — that is, that the arrow points inwards only — is something we can enforce in a number of ways. We can rely on a social contract, double-checked during code reviews. We can break the project into smaller artifacts along the boundaries of layers. We can also employ static analysis, which will automatically detect illegal dependencies in our codebase.
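As a sketch of the static-analysis option, a tool like ArchUnit lets us state the dependency rule as an executable check. The package names below (`..domain..`, `..infrastructure..`, `com.example.wineshop`) are illustrative placeholders, not prescribed by this article:

```java
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

class ArchitectureRules {

  // The dependency rule stated as data: classes in the inner (domain)
  // layer must not reach out to the infrastructure layer.
  static ArchRule inwardArrowsOnly() {
    return noClasses()
        .that().resideInAPackage("..domain..")
        .should().dependOnClassesThat().resideInAPackage("..infrastructure..");
  }
}
```

Checking the rule then becomes a one-liner in a test, along the lines of `ArchitectureRules.inwardArrowsOnly().check(new ClassFileImporter().importPackages("com.example.wineshop"))`, which fails the build on any outward-pointing arrow.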

Some cases of breaking the rule are easy to spot and call out. If a class from one of the inner layers imports a package from an outer one, the breakage is evident. Consider a use case in which a service updates and persists a business object, relying on a concrete feature of MySQL, imported from the outer layer. That’s forbidden. Instead, the service could rely on an abstract behaviour, defined as an interface. The interface would be in turn implemented in terms of MySQL and injected into the service from the outside.
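To make the inversion concrete, here is a minimal sketch of that arrangement. The names (`WineRepository`, `WineCatalogue`, `InMemoryWineRepository`) are my own illustrative choices, and the in-memory implementation stands in for a real MySQL adapter:

```java
import java.util.ArrayList;
import java.util.List;

// The domain object, repeated here so the sketch is self-contained.
class Wine {
  public final String origin;
  public final String grape;

  public Wine(String origin, String grape) {
    this.origin = origin;
    this.grape = grape;
  }
}

// Inner layer: an abstract behaviour; the service knows nothing else.
interface WineRepository {
  void save(Wine wine);
}

// Middle layer: the use case, still in abstract terms.
class WineCatalogue {
  private final WineRepository repository;

  WineCatalogue(WineRepository repository) {
    this.repository = repository;
  }

  void stock(Wine wine) {
    // Whether this hits MySQL, Kafka, or memory is none of our business.
    repository.save(wine);
  }
}

// Outer layer: a concrete implementation, injected from the outside.
// (In-memory for brevity; a real adapter would speak to MySQL.)
class InMemoryWineRepository implements WineRepository {
  final List<Wine> stored = new ArrayList<>();

  @Override
  public void save(Wine wine) {
    stored.add(wine);
  }
}
```

The arrow still points inwards: the interface lives with the domain, and only the implementation knows about technology.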

There are situations in which the dependency is far subtler, though. So subtle, that even static analysis — let alone a collegial code review — cannot identify the problem.

Let’s illustrate it with an example. We will use Java, but don’t let the minutiae of the language make you lose track of the big picture. The problem is not innately linked to any programming language.

Let’s assume we’re in the domain of wine retail. We will represent our goods using slender immutable objects of the following class.


class Wine {
  public Wine(String origin, String grape) {
    this.origin = origin;
    this.grape = grape;
  }

  public final String origin;
  public final String grape;
}

In order to transfer them across the wire we will encode them as JSON using Jackson. The serialisation logic does not belong to our domain. Its place is in one of the outer layers; selling wine has nothing to do with the JavaScript Object Notation.


import static org.junit.jupiter.api.Assertions.*;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.Test;

@Test
void encode() throws Exception {
  var wine = new Wine("Italy", "Lambrusco");
  var expected = "{\"origin\":\"Italy\",\"grape\":\"Lambrusco\"}";
  var actual = new ObjectMapper().writeValueAsString(wine);
  assertEquals(expected, actual);
}

The test passes; so far so good. What if we try to decode the JSON blob, though?

@Test
void decode() throws Exception {
  var json = "{\"origin\":\"Georgia\",\"grape\":\"Saperavi\"}";
  var wine = new ObjectMapper().readValue(json, Wine.class);
  assertEquals("Georgia", wine.origin);
  assertEquals("Saperavi", wine.grape);
}

The test fails with the following message.

InvalidDefinitionException: Cannot construct instance
of `Wine` (no Creators, like default constructor, exist):
cannot deserialize from Object value (no delegate- or
property-based Creator)

We face a choice. We can either:

  1. annotate the constructor’s arguments with @JsonProperty, or
  2. introduce a default, nullary constructor.

Neither option is acceptable. They both violate the fundamental architectural premise: arrows cannot point outwards. Let’s take a closer look.

The problem with the first option is easy to spot. In order to annotate the constructor we’d have to import @JsonProperty into our core domain logic. That breaks our rules. As we’ve established, the core logic cannot depend on technological details.1 JSON is not part of the vocabulary of our domain. I dare you to challenge this assumption by asking your wine monger about their favourite wire serialisation protocol.

The second option’s validity is trickier to refute. Arguably, something feels off: a decision made in an outer layer would force us to modify a class inside our core domain. But that’s just gut feeling. Apart from that, how much damage can an extra constructor cause?

The problem is far more serious, though. Notice that the default constructor doesn’t take any arguments. Bluntly put, it has to make something up. This might be doable, as long as our domain defines default values for its objects. Otherwise, we’ll be forced to instantiate domain objects unforeseen by our domain experts: stuff which doesn’t and cannot exist.
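To see the hazard concretely, here is what the rejected nullary constructor would look like; this is a hypothetical sketch of the bad option, not code we intend to keep:

```java
class Wine {
  public final String origin;
  public final String grape;

  public Wine(String origin, String grape) {
    this.origin = origin;
    this.grape = grape;
  }

  // Added only to appease the serialisation framework. It conjures
  // up a "wine" with no origin and no grape: an object our domain
  // experts never foresaw and our type system can no longer rule out.
  public Wine() {
    this.origin = null;
    this.grape = null;
  }
}
```

Every caller of `new Wine()` now receives an object that the domain says cannot exist.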

The damage is even greater in a statically typed programming language (such as, say, Java) whose type system allows us to enforce certain properties at compile time. We just threw those guarantees out of the window.

The attentive reader will surely have a solution in mind by now. A potential way out involves an extra class. An additional data transfer object, annotated with @JsonProperty and declared in an outer layer, does the trick. Its responsibility will also include creating a valid core domain object.2
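Such a data transfer object might look as follows. The name `WineDto` and the `toDomain` method are my own, illustrative choices:

```java
import com.fasterxml.jackson.annotation.JsonProperty;

// Inner layer: the domain object stays exactly as it was.
class Wine {
  public final String origin;
  public final String grape;

  public Wine(String origin, String grape) {
    this.origin = origin;
    this.grape = grape;
  }
}

// Outer layer: knows about JSON and translates into the domain.
class WineDto {
  public final String origin;
  public final String grape;

  public WineDto(
      @JsonProperty("origin") String origin,
      @JsonProperty("grape") String grape) {
    this.origin = origin;
    this.grape = grape;
  }

  // Hands back a valid domain object; any validation the domain
  // requires could live here as well.
  Wine toDomain() {
    return new Wine(origin, grape);
  }
}
```

Deserialisation then targets the DTO, along the lines of `new ObjectMapper().readValue(json, WineDto.class).toDomain()`, and the arrow keeps pointing inwards.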

Taken to the extreme, this approach might lead us to a proliferation of classes whose only responsibility is copying stuff around. The impact on maintenance and runtime costs depends on the context. Our programming language, framework, and usage profile all play a role in the decision we’ll make. Just like in life, no architectural choice comes for free.

Before we declare success and deploy our application to production, let’s pause for a minute and think about the two bad options we had. The first one would be immediately identified by static analysis and thus poses no real threat. The second one is far subtler: not only does it escape automated analysis, it also remains unseen during a superficial code review.

As few as they are, the principles of the onion architecture are easy to violate. When in doubt, let the domain guide your decisions. Go ahead and talk to your wine monger or domain expert. Ask for the default wine. A pale face or negative answer should clarify any misunderstandings. Don’t throw the baby out with the bathwater.

In a future instalment we will explore inward-pointing arrows again, switching our attention to aggregates and immutability. Stay tuned.

  1. We might even enforce it at the dependency management level. If we split our project into several artifacts, the innermost one shouldn't have a JSON library among its dependencies.  ↩

  2. I'm assuming that the innermost layer exposes a mechanism for instantiating valid domain objects.  ↩


