A while back, I was an architect in a very, very large Java project — in fact, it used to be one of the largest in Europe, with about 180 developers at its peak. (Needless to say, it ended in disaster.) One of the main characteristics of this project was a severe case of the NIH syndrome — apparently the original architects had a hard time convincing each other not to build the RDBMS and ORB from scratch. Everything else, though, was built as part of the project.
(Did I already mention it ended in disaster? Ah, I see I did. To be fair, though, technology was not the main reason it failed — managing 180 people developing in parallel without a concept of components, services or any other management of dependencies between modules was simply undoable. But I digress.)
The point I was trying to make is a different one, related to one particular feature of the project’s huge set of frameworks. Since the persistence layer was built from scratch, too, we had excellent opportunities to introduce features into the frameworks to simplify the developers’ lives. One such feature was called ‘historization’ - the ability to set an object’s temporal validity. As with many other concepts within that framework, very smart people thought hard about this, and came up with an ingenious solution that enabled the creation of objects in the future, the past, the virtual might-have-been future and many other strange phenomena. For instance, you were able to get answers to questions such as “if I had asked two years ago what this particular object would look like three years from now (I mean, then), what would it have been?” (See what I mean?)
All of this was hidden within the framework; you simply set a few time stamps, and if you didn’t, defaults were applied; you were able to set a view (a point in time), and the object’s state, including relations to other objects, would magically match that view, of course only within that logical transaction (which was built as part of our own transaction framework) …
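To make the idea concrete, here is a minimal sketch of what such a bitemporal lookup might have looked like. The API shape and all names are invented for illustration; the actual framework was far more elaborate:

```java
import java.time.LocalDate;
import java.util.*;

// Hypothetical sketch: every version of an object carries a valid-time
// interval ("when is this true in the business domain?") and a recording
// date ("when did we learn about it?"). A query then fixes both points in time.
class BitemporalStore<T> {
    // Records are implicitly static, so the record declares its own type parameter.
    record Version<V>(V value, LocalDate validFrom, LocalDate validTo, LocalDate recordedAt) {}

    private final List<Version<T>> versions = new ArrayList<>();

    void put(T value, LocalDate validFrom, LocalDate validTo, LocalDate recordedAt) {
        versions.add(new Version<>(value, validFrom, validTo, recordedAt));
    }

    // "As of `asOf`, what did we believe the object looked like at `validAt`?"
    Optional<T> get(LocalDate validAt, LocalDate asOf) {
        return versions.stream()
                .filter(v -> !v.recordedAt().isAfter(asOf))      // already known back then
                .filter(v -> !v.validFrom().isAfter(validAt)
                          && v.validTo().isAfter(validAt))       // valid at that point in time
                .max(Comparator.comparing(v -> v.recordedAt()))  // latest knowledge wins
                .map(v -> v.value());
    }
}
```

The question from the text (“asked two years ago, what would this object look like three years from then?”) then becomes a call like `store.get(thenPlusThreeYears, twoYearsAgo)` — which is exactly the part nobody knew how to formulate.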
It was brilliant. It was hard to implement, but two very smart guys did it. It rocked.
It was also totally unusable.
The problem was that while it may have been able to provide lots of answers, nobody knew how to ask the matching questions.
And that’s what I’m getting at: Sometimes you can invest lots of time in building a great framework with the goal of simplifying life for its users, and end up with something that is much harder to use, and use correctly, than implementing the stuff explicitly and matching your business requirements.
One example of a case like this is the notion of compensating transactions. ACID transactions are a fine and well-understood method of building enterprise applications; in fact, the declarative support for transactions in EJB is one of the main (OK: probably the only) reasons I still like it. This is a case where a framework can really help you: you either get all or none of the outcome, which greatly simplifies error handling and makes your code a lot cleaner. ACID and 2PC transactions are only suitable in tightly-coupled environments, though, which is why nobody in their right mind would suggest their use in a B2B or other loosely-coupled scenario.
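To illustrate why the all-or-nothing property makes error handling so much simpler, here is a toy sketch — no EJB or real transaction manager involved, all names invented. Work is done against a scratch copy, and the real state is replaced only if every step succeeds:

```java
import java.util.*;
import java.util.function.Consumer;

// Toy illustration of all-or-nothing semantics: the caller never observes
// a half-applied update, so there is no partial state to clean up.
class Ledger {
    private final Map<String, Long> balances = new HashMap<>();

    Ledger(Map<String, Long> initial) { balances.putAll(initial); }

    long balance(String account) { return balances.getOrDefault(account, 0L); }

    // Run `work` against a scratch copy; commit only if it completes normally.
    void atomically(Consumer<Map<String, Long>> work) {
        Map<String, Long> scratch = new HashMap<>(balances);
        work.accept(scratch);     // any exception propagates; balances stay untouched
        balances.clear();
        balances.putAll(scratch); // commit
    }
}
```

A transfer that throws halfway through leaves the ledger exactly as it was — which is the guarantee a declarative transaction gives you without any of the ceremony.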
One way to handle this is to use compensating transactions that are triggered by some framework once things go wrong. This is the approach taken e.g. by WS-BusinessActivity. The alternative is to treat the need for compensation as something that can’t be delegated to the framework, but needs to be built just as explicitly as, or even more explicitly than, the main flow. And this is my point — I believe this is a case where hiding complexity leads to something that is harder to build, understand and maintain, even though this may seem counter-intuitive at first.
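The explicit approach can be sketched in a few lines (all names invented): each step in the flow registers the action that undoes it, and on failure the recorded compensations run in reverse order — in plain sight of whoever maintains the flow, rather than hidden inside a framework.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Explicit compensation: the "undo" for each completed step is recorded
// right where the step runs.
class CompensationLog {
    private final Deque<Runnable> compensations = new ArrayDeque<>();

    void run(Runnable step, Runnable compensation) {
        step.run();                       // may throw; then nothing is recorded
        compensations.push(compensation); // recorded only after the step succeeds
    }

    void compensateAll() {
        while (!compensations.isEmpty()) {
            compensations.pop().run();    // undo in reverse order
        }
    }
}
```

For example: reserve a seat, charge the card, then fail while shipping — `compensateAll()` refunds the card first, then releases the seat. The compensation logic is exactly as visible, and as testable, as the business logic it undoes.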
The moral of the story, of course, is that the perceived value of having support for advanced enterprise concepts such as transactions in a technology stack may sometimes be vastly overrated.