I’ve been an avid follower of Martin Fowler’s work ever since I read “Analysis Patterns” almost 20 years ago. I’ve always appreciated the way he manages to distill complex topics down to their essence, and I think the immense number of people reading martinfowler.com is well deserved. I’m also lucky to have met Martin in person numerous times and have thus been able to discuss with him many of the topics we share an interest in.
Over on the innoQ company blog, I’ve published a post on why I believe a canonical data model is a bad idea. Enjoy.
My colleague Sebastian Janzen, a self-confessed AngularJS fan, posted the following text on his internal blog – and I couldn’t resist asking him whether I could publish it here as a guest post. So here are his (wise) words …
I have now observed two phenomena several times: A decision maker – the bigger the company, the more likely this applies – is tired of looking into the bored, process-scarred faces of his developers. He wants to surprise the team positively and, for the next project, moves AngularJS from the buzzword list to the requirements list, because the developers finally deserve to do something cool. In a meeting about a new website, he proudly unveils the holy SPA grail, AngularJS. All problems are solved, the developers are happy, it goes into the requirements.
And then someone from innoQ is brought in and critically questions the developers’ happiness: “AngularJS? You do realize that it doesn’t really fit your problem here?”
As much as I like Angular, and as appropriate as I consider it for building an SPA in some areas, things can go just as badly wrong when you try to use it to kill problems that could not be more contrary to its strengths. Some examples:
My application must be blazingly fast!
Yes, as an SPA it can be – but not if users constantly switch back and forth between it and other pages. An OpenID provider typically serves just two pages: login and success + redirect. The time spent on a site has to justify an SPA! And visitors don’t care whether a page is technically perfect. That won’t lengthen their stay, it will shorten it.
The cool page transitions only work in Angular!
With Angular, I no longer have to care about the DOM
Oh boy, when I first heard that, my head started spinning. I don’t think I’ve ever learned as much about the interplay between the DOM and JavaScript as I did in my Angular time – simply out of the necessity of finding bugs. Some people kill these bugs with enough timeouts, and many plugin developers do the same, but that is pretty far from great – and from the original performance goal. Fundamental concepts like asynchronicity in the browser have to be understood, not circumvented! In my view, Angular handles this circumstance admirably; it just has to be used.
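The asynchronicity mentioned above can be demonstrated in a few lines – a setTimeout callback never runs before the surrounding synchronous code has finished, which is exactly why sprinkling timeouts over DOM bugs is guesswork, not a fix. A minimal sketch (plain JavaScript, runs in any browser or Node.js):

```javascript
// Record the order in which things actually happen.
const order = [];

order.push('before setTimeout');

// Even with a delay of 0, this callback is queued on the event loop
// and only runs once the current synchronous code has completed.
setTimeout(() => {
  order.push('timeout callback');
  console.log(order.join(' -> '));
}, 0);

order.push('after setTimeout');
// At this point, 'timeout callback' has NOT run yet.
```

Running it prints `before setTimeout -> after setTimeout -> timeout callback` – the callback always comes last, no matter how small the delay.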
We have a lot of Angular experience on the team
Are you sure? Talk to the team beforehand, ask what the dos and don’ts are. If nothing comes up, or the question “What should you not use Angular for?” goes unanswered, then these people haven’t done much with it. Plan for a learning phase of sometimes several weeks, plus a lot of debugging time. Another nice question is “What are promises?”
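In case “What are promises?” draws a blank: a promise is a placeholder for a value that will arrive later. A minimal sketch using plain ES6 promises (the function name and values are made up for illustration; AngularJS 1.x offers the same concept through its $q service):

```javascript
// Simulate an asynchronous lookup, e.g. an HTTP call that takes a moment.
function fetchUserName(id) {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      if (id === 42) {
        resolve('Ada');          // success path: the value arrives later
      } else {
        reject(new Error('unknown user')); // failure path
      }
    }, 10);
  });
}

// Consumers chain callbacks instead of blocking:
fetchUserName(42)
  .then((name) => console.log('got: ' + name))
  .catch((err) => console.error(err.message));
```

Whoever can explain why the `.then` callback cannot run before the surrounding synchronous code completes has most likely done real work with Angular.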
This Angular thing is just frontend; as a senior consultant, I’ll budget two hours for it
That will fail. Angular is a framework with which you can actually set up and plan a frontend architecture worthy of the name. There is more to it than data binding and routes!
Performant SPAs that run in all modern browsers and in IE >= 8 are the work of people who have immersed themselves thoroughly in the subject and put a great deal of effort into it. That has to be reflected in the estimates when you set out, once again, to build “the best site of all time”.
So you started with a monolith, and decided to split things into smaller units. Obviously, the next thing you need to consider is how to integrate them to form a consistent whole. To do this, let’s start with the non-obvious part: The frontend (the UI).
One problem you’ll very often run into with option (1) – integrating in the client, i.e. having a JavaScript application in the browser talk to the backend services directly – is that the services on the server side end up being quite fine-grained (a consequence of their being reusable in many contexts), which leads to a huge number of remote calls between the client and the server. Another downside typically results from the fact that you can never rely on anything computed by the client, so you’ll have to validate it on the server side. This, in turn, can lead to duplication of at least parts of your logic.
The solution to both of these problems typically is to perform integration, or orchestration if you prefer, on the server side – option (2). In other words, a server-side service will invoke other, lower-level services, taking care of combination and error handling, interpreting the client request and returning the aggregated result in the end. This is of course completely orthogonal to the architecture you choose for your client, i.e. you could just as well return HTML from your server and have a traditional, non-JS based client.
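A rough sketch of what such an orchestrating service might look like – all function names and data here are hypothetical stand-ins for remote calls to lower-level services:

```javascript
// Stand-ins for lower-level services; in reality these would be
// HTTP calls to separately deployed backends.
async function fetchCustomer(id) {
  return { id: id, name: 'Alice' };
}

async function fetchOrders(customerId) {
  return [{ id: 1, total: 99 }];
}

// The orchestrating service: it interprets the client request,
// invokes the lower-level services (in parallel here), handles
// combination, and returns one aggregated result.
async function customerOverview(id) {
  const [customer, orders] = await Promise.all([
    fetchCustomer(id),
    fetchOrders(id),
  ]);
  return {
    id: customer.id,
    name: customer.name,
    orderCount: orders.length,
  };
}
```

The client then makes a single call to `customerOverview` instead of many fine-grained ones – which is exactly the benefit, and exactly why this service tends to accumulate knowledge about everything beneath it.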
What’s not to like? What I do not like about this approach is that you create yet another server-side service, which makes me question why you created the lower-level ones in the first place. The orchestrating service also becomes a bottleneck, as any change to one of the lower-level services will require a change to it as well.
But there’s a third option (finally!), one that doesn’t seem to even be considered in many cases, although it is (in my not so humble opinion) the most powerful one. This option (3) relies on an absolutely magical concept called “link” (dumb joke, I know). To explore this, we question one of the initial assumptions that led to having to integrate on the client side or server side in the first place, namely that for a web UI to be integrated, it needs to aggregate UIs from different backend services into a single HTML page.
Instead, we have each service return HTML that can be rendered by the browser – in other words, we assume that each page can be assigned to one of the services. Of course there are lots of relations to other things, but we simply use links to point to them. One nice side effect of this is that it becomes much easier to ensure we have a meaningful URI for each of the pages returned (or resources exposed, pick whatever terminology you prefer).
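As a hypothetical sketch of option (3), a service might render its pages along these lines – the host name, paths, and field names are invented for illustration:

```javascript
// Each service owns its pages completely and renders full HTML.
// Relations to things owned by OTHER services are expressed as
// plain links, not by aggregating their UI into this page.
function renderOrderPage(order) {
  return [
    '<html><body>',
    '<h1>Order ' + order.id + '</h1>',
    // The customer lives in a different self-contained system;
    // we simply link to its meaningful URI instead of embedding it.
    '<a href="https://customers.example.com/customers/' +
      order.customerId + '">Customer details</a>',
    '</body></html>',
  ].join('\n');
}
```

Note that the only thing this service needs to know about the customer system is a URI – arguably the weakest form of coupling short of no connection at all.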
So option (3) leaves us with a number of self-contained web applications that are integrated only by means of being linked to each other. Apart from not connecting them to each other at all, I am not aware of any sort of weaker coupling.
Of course you should be highly skeptical by now: How is that supposed to be “integration”? Surely this guy isn’t serious? Is he seriously suggesting we revert to a model that was hip a decade or two ago? You bet I am, and I’ll explore some of your doubts in a future post.
First of all, you’re likely only reading this because you’re one of the few remaining souls who rely on RSS/Atom to follow a blog, just like I do. That’s great! And I apologize for messing with your feed (it’s quite likely a whole bunch of articles were marked as unread in your feedreader, something I hate when it happens to me). Sorry.
I will try to treat you better in the future, fellow Atom die-hard.
In the same spirit (less JS), I also removed the Disqus comments. I love the idea of Disqus (externalized comments), but I hate the implementation. But if you have feedback on some of the stuff I write, feel free to write me an email! I might not answer immediately (or, on rare occasions, not at all), but I’ll try to incorporate whatever you send me into future posts.
Given that microservices are supposed to be, well, “micro”, there’s a lot of discussion about the right size. A typical answer to this question is: A microservice should do just one thing. I don’t really think that’s an answer, as “one thing” leaves a lot of room for interpretation. But I’ve seen people suggest that each individual microservice should be as small as a single function, and I strongly disagree with this for almost every situation. Consider, for example, a function that computes something based on three input values, and returns a result. Is that a good candidate for a microservice, i.e. should it be a separately deployable application of its own?
I believe it’s easier to approach this from the opposite direction. Let’s take an example: A web-based email system. Let’s not overcomplicate things and assume it’s traditional and offers the minimal features you’d expect, such as logging in and out, maintaining some user settings, creating messages (from scratch or by replying to or forwarding an existing one), deleting messages, viewing your inbox, moving messages into folders (that you can create, view and delete), maintaining an address book, searching for messages … I’m sure you get the picture. At one extreme, we could absolutely build this as a single application and ensure it’s built not as a single package, but using a reasonable internal modularization strategy. We could decide to write its core as a set of collaborating classes, maybe adhering to the DDD approach (which would classify the classes according to the role they play). Then we’d add the dependencies to the outside world, such as the UI, the data storage, external systems (such as external mail handlers, LDAP directories, etc.), possibly using a layered or hexagonal architecture.
The team(s) working on this application would need to synchronize very tightly, as the whole thing is released at once. It will also be scaled in an all-or-nothing fashion, and it will be either down or up and running completely, never partially. That may be perfectly fine! But let’s just assume (again) you deem it’s not, and want to cut it apart into separate parts that have their own life cycle and are isolated from each other.
How would you go about decomposing this into separate applications or services? First of all, the login/logout stuff (the auth system) is a good candidate, as is the user profile. They could go into one service, but if we consider that the auth system has to maintain passwords (or rather, password hashes), it makes sense in my view to treat it differently from the rest. The emails and folders themselves seem quite cohesive to me, though: You could separate them, but I probably wouldn’t. If there are multiple ways to connect to the outside world, say, via the Web interface, POP3, IMAP, and SMTP, I can imagine each of those being their own service. Maybe I’d factor out the storage of messages into its own service, one that doesn’t know the difference between a document and an email. I think the address book, including its data storage, its UI and its API seems like a natural candidate to be separated from the rest.
But all in all, I’d probably end up with a dozen, maybe twenty or thirty services (or self-contained systems, as I prefer to call them). And more importantly, I think that for any given interaction triggered by some outside event – such as a user clicking a button after entering data into a form – I’d end up touching maybe 3-5 of them.
In other words, I think it’s not a goal to make your services as small as possible. Doing so would mean you view the separation into individual, stand-alone services as your only structuring mechanism, while it should be only one of many.
Three posts in quick succession: This one by Gina Trapani, this one by Jason Snell, and this one from Marco Arment, all make the same point: There should be more blogging, and maybe one way to get this into real, live, actual posts is to reduce the amount of rules you as a writer subject yourself to. In this spirit, I’ll try to get this thing restarted.
For a long time, I’ve been convinced that we need more women, or in general, a lot more diversity, in the tech community. While I’m typically not at a loss for words on any topic, I find this one pretty hard. My guess is the major reason is that my own perspective on this is constantly changing. In fact I’m quite convinced that if I spoke to a version of myself that had been transported to the present from, say, 5 years ago, I’d disagree with me a lot. And there are a lot of capable people writing and talking about this topic already, more than enough to ensure my input is not really needed.
On the other hand, though, I know that sometimes it’s easier to listen to someone from your own demographic, and accept that they expose a point of view you disagree with, so maybe I should say something from time to time. And I haven’t put this blog to good use for a while, so why not start with this topic? I’d be extremely interested in getting your feedback, so please do use the comments or let me know what you think via Twitter.
First of all, to set up a bit of a foundation, here are some of the things I consider to be true and personally have no doubts about at all:
- There is no inherent reason at all why men should be better at technical tasks than women (or vice versa).
- The software community (or IT/tech industry, if you prefer) does not have a remotely reasonable share of women.
- The reason for this is a complex mixture of a) things that happen in our education system, very early in people’s lives, that make women pursue different careers, b) things the tech industry does that make it unattractive to women, and c) things the tech industry does that drive women who do enter it into leaving it again
- Whatever the reasons may be, the effect is undoubtedly negative, because a) there is a lot of unused potential, i.e. there could be almost double the number of great programmers if it weren’t mostly men who worked in this industry, and b) a more diverse group is way more fun to work with and (if studies are to be believed) more productive
- A lot of the discussion about women is equally applicable to other groups, such as people with disabilities, minority ethnic groups, LGBT folks, etc.
Do you disagree with any of these?
A while ago, I gave a talk at QCon about breaking up monoliths (there’s a video up on InfoQ), repeated it in a slightly improved version at JavaZone (see slides and video), and the topic continues to come up in almost every consulting engagement and client workshop I’ve been involved in since then. Like many of the topics I talk about, it’s somewhat unfair that I get the positive feedback and people assume I came up with the ideas all on my own: Most stuff like this is the result of collaboration, with my colleagues at innoQ (see for example an article I wrote with Phillip Ghadir for ObjektSpektrum if you read German), as well as customer staff. But wherever it originated, I found that it strikes a nerve with many developers and architects, not only in big companies that conduct million-Euro development projects, but also in smaller e-commerce companies and even startups that have started to become successful.
The main idea is this (no surprise for almost everyone, I guess): Nobody wants monoliths, i.e. big systems composed of hundreds of thousands or millions of lines of code (in a language like Java) or tens of thousands (e.g. in Ruby), yet everyone ends up having them. And once you have one, you’re basically stuck with it: Monoliths are incredibly hard to maintain, extend, and modernize; yet they provide value and can’t simply be replaced (something that many organizations attempt but fail at, because it’s awfully hard to create something new that is not only great in terms of architecture, but can also actually function as a full replacement for all of the old system’s features).
So what’s the proposed remedy? To talk about that, we need to take a step back and find out how we actually end up with systems that are too big in the first place. My theory is that the number one reason is project scope.
When a project is started, there is an assumption that it’s the goal of a project to create a single system. This typically goes unquestioned, even though the people or person coming up with the project boundaries often don’t decide this consciously. This is most obvious if they’re non-technical people who make decisions on a budget basis.
So the very first thing we should be doing as architects (or lead developers if you don’t like the term) is to find out what it actually is we should be building. Is it really a single system? If our task is to replace an existing system with a new one, it’s very tempting to just accept existing boundaries and go with them. If we’re consolidating two systems, it’s equally tempting to view our scope as the union of the predecessor systems’ scope. In the rare cases where our task is to actually modularize something existing, it’s because of business reasons (such as deregulation). Again, while it might seem like a good idea to just accept the boundaries being suggested to us, it’s not at all clear why this should be a good idea. After all, if whoever came up with those boundaries is not an architect or developer, what makes us think they made a good choice?
In my view, the most important thing to do, then, is to find out how many systems we should be building in the first place. It may be a single one, but it may also be two, five or a dozen (though probably not more) – clearly, the decision should be made very consciously, because whatever system boundaries you pick, you will likely be stuck with them for a very long time.
As “system” is a term that can mean almost anything, I need to define what I mean by it in this context. A system is an independent unit that is developed according to its own rules, and only connected to other systems in an unobtrusive fashion. A system, according to this model, has its own database, business logic, and user interface; it’s deployed separately. It’s likely developed by a different team than other systems. It has its own life cycle, in terms of development as well as deployment. It’s operated autonomously. It has its own test suite. In basically every regard, it’s as different from all the other systems as a piece of commercial off-the-shelf software would be. (In fact, one of the systems may end up being a piece of standard software.) Is that the same as the “Micro Services” idea? If you watch James Lewis’s great talk (here’s a recording, also done at JavaZone; in fact his was scheduled directly after mine), you’ll find a lot of similarities, but the major difference is probably the size of each individual unit. To me, though, seeing similar concepts appear in different contexts is a very good sign.
It doesn’t really matter that much whether you get the number and distribution right with the first attempt – in fact, you can reasonably consider that to be highly improbable. But it’s one thing to find out you should have built six or eight systems instead of seven, i.e. get it wrong in one or two places, and a completely different one to notice it should have been seven instead of one.
I’ve rambled on for long enough for a single post, so here’s a preliminary conclusion: How many systems you build should be a very conscious decision. It will affect the life of those tasked with evolving and maintaining it for years to come.
Ah, the joys of “Intellectual Property”. I’ve been a long-time fan of the wonderful demotivational posters offered by despair.com. From time to time, I point people to them, e.g. by tweeting about it. In the past, when there was no Twitter (yes, that time existed), I used this blog to do so – on one occasion not only using a plain <a>, but an <img> element as well, embedding one of their images on this site (this is the post minus the image). After all, why not send a little traffic to these fine folks?
But obviously Despair uses stock photos – a perfect use case if there ever was one –, and the rights to the particularly cheesy one used in this poster apparently belong to Getty Images (see? I did it again. Sue me.). Now I’ve received a letter, first from their internal legal department, and – after explaining the misunderstanding on their part – now from their lawyer. In both cases, they insist that we need to license the image to use it.
As I didn’t copy the image of the poster, but only link to it, this seems entirely absurd to me – particularly if Despair properly licensed the image, which I’m quite sure of. (If at all, Despair might have more reason, but I can’t believe they’d be that unreasonable, purely out of their own interest.) But my guess is the legal trolls at Getty believe it won’t be worth the hassle to me. They’re wrong – I don’t believe they deserve a single cent of my (or the company’s) money. If you have any advice, or want to share some of your own experience with these people, please leave a comment.