Once again, time to draw attention to a comment, this time from Benjamin Carlyle:
In short, I think that SOA is fine and a proven technology when it is possible to upgrade your whole architecture in one go to the new protocols. I think that REST is the only proven technology when only a tiny fraction of the overall architecture can be upgraded in a single sitting. You can’t upgrade the whole Web. REST accommodates this.
Be sure to read the full comment, which is longer than most of my blog entries.
Oh dear. Where to begin?
One might simply point out that, in our enthusiasm and advocacy for an idea, we must be careful to resist the lure of specious arguments and avoid jettisoning our rationality.
Of course Benjamin makes some interesting points, but let’s examine these in more detail.
First, Benjamin says “existing applications can’t interact with a new app built with the new WSDL”. Why not? Programmatically, I can find the new WSDL for a service through a registry; alternatively, it is a common idiom for a web service to return its WSDL when you issue an HTTP GET on the service endpoint. Once I have obtained the WSDL, I can interpret it programmatically and use it to invoke the new service. In the old days of CORBA, we used to call this “dynamic invocation”. Whilst this approach might still not be common, it’s definitely possible and much easier and more practical today than it was 10 years ago.
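To make the point concrete, here is a minimal sketch (Python, standard library only) of interpreting a WSDL programmatically. The service, its operations and the endpoint are entirely hypothetical; a real client would obtain the WSDL with an HTTP GET on the endpoint rather than embed it as a string:

```python
import xml.etree.ElementTree as ET

# Hypothetical WSDL fragment, as might be returned by an HTTP GET
# on a service endpoint such as http://example.com/service?wsdl
WSDL = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
                       name="InvoiceService">
  <portType name="InvoicePort">
    <operation name="CreateInvoice"/>
    <operation name="ListInvoices"/>
  </portType>
</definitions>"""

NS = {"wsdl": "http://schemas.xmlsoap.org/wsdl/"}

def discover_operations(wsdl_text):
    """Interpret the WSDL at runtime: list the operations the service
    offers, with no compile-time knowledge of the service at all."""
    root = ET.fromstring(wsdl_text)
    return [op.get("name")
            for op in root.findall(".//wsdl:portType/wsdl:operation", NS)]

print(discover_operations(WSDL))  # ['CreateInvoice', 'ListInvoices']
```

Given the discovered operations and their message definitions, a dynamic-invocation layer can construct the SOAP calls at runtime, much as CORBA’s DII did.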
Next, Benjamin describes how in REST, most changes are in the modification or addition of document types. True. But I don’t understand his distinction between “a transition from one version of a document type to the next” and “when one content type is superseded by another”, unless he’s just defining two arbitrary levels of change, a minor tweak versus complete replacement. Also, what has this got to do with REST at all? Giving an example from HTML 3/4, he says “The new content type allows new information to be added, but doesn’t take information away”. Which of the REST constraints or principles enshrines this behaviour? Surely it’s just an attribute of the HTML specification. As far as I can see, REST says very little about the nature of document types, other than that they should be composed of hypermedia and belong to “an evolving set of standard data types”.
When talking about the “addition of a completely new content type”, Benjamin says “You don’t do this if it is meaningful for old clients to talk to new servers”. But surely this is one of the essential points about the design of evolvable systems. Is he saying that REST is not appropriate if you want to be able to evolve servers ahead of clients?
The problem here is that REST is fundamentally describing a system of intermediaries, not information processing endpoints. This is what the Web is about. We have discussed the risk of this potential “category error” before in [1] and [2]. I do hope we haven’t regressed.
To perform any useful activity, a REST client needs to understand and act on the information returned. That is, it has to be able to make sense of the document types returned and use this knowledge to make decisions about what to do next. In the REST model, this sense-making and decision-making is performed by a human being, interpreting the information displayed on a screen (or spoken) by a Web browser.
If we want to embed this sense-making and decision-making in an automated computer program, then such a REST client will need to have an intimate knowledge of all document types and the detailed real-world semantics attached to them. Just knowing XHTML and Atom isn’t enough. For example, I can certainly process an XHTML document and extract all of the “&lt;a&gt;” tags that contain an “href” attribute. But which of these hyperlinks means “view supplier invoices” and which means “create new invoice”? If I receive a list of Atom entries, how do I know whether the feed refers to the supplier’s invoices or to their contact history?
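A short sketch makes the gap visible. The XHTML fragment and its URLs below are invented for illustration; extracting the hyperlinks is trivial, but nothing in the markup tells a program what any of them mean:

```python
import xml.etree.ElementTree as ET

# Hypothetical XHTML fragment returned by a REST service.
XHTML = """<div xmlns="http://www.w3.org/1999/xhtml">
  <a href="/invoices">View supplier invoices</a>
  <a href="/invoices/new">Create new invoice</a>
</div>"""

NS = {"x": "http://www.w3.org/1999/xhtml"}

def extract_links(doc):
    # The syntactic part is easy: find every <a> carrying an href.
    root = ET.fromstring(doc)
    return [a.get("href") for a in root.findall(".//x:a[@href]", NS)]

print(extract_links(XHTML))  # ['/invoices', '/invoices/new']
# But which href means "view supplier invoices" and which means
# "create new invoice"? That knowledge is nowhere in the document;
# it has to come from somewhere else.
```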
In the future, the Semantic Web may provide a way that we can encode such knowledge into our programs, but in the meantime, this “protocol”, that is, the “rules governing the syntax, semantics and synchronization of communication” [3], will need to be hard-coded into our REST clients. This is going to be hard and messy, and seems very little different from the web services case.
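What that hard-coding looks like in practice is something like the following sketch. The link relations and action names are invented for illustration; the shape, a client carrying its own private table of meanings, is the point:

```python
# The client's hard-coded "protocol": a table mapping hypothetical
# link relations to local actions. Anything not in this table is
# opaque to the client, however well-formed the document may be.
KNOWN_RELS = {
    "http://example.com/rels/invoices": "list_invoices",
    "http://example.com/rels/create-invoice": "create_invoice",
}

def decide(links):
    """Choose the next step from a list of (rel, href) pairs,
    using only the knowledge baked into KNOWN_RELS."""
    for rel, href in links:
        if rel in KNOWN_RELS:
            return KNOWN_RELS[rel], href
    return None  # the server evolved past this client's knowledge

print(decide([("http://example.com/rels/invoices", "/invoices")]))
# ('list_invoices', '/invoices')
```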
Now, please don’t get me wrong. I’m not anti-REST in any way. But I don’t think it does anyone any favours to overstate the case. The basic REST principles espoused in the Fielding dissertation are a good start, but are really just concerned with basic communication mechanisms, the “plumbing” if you will. Much more work is needed on exactly how and why we need to define document/media types to achieve the properties we want from distributed systems. And we have some serious security and trust issues to deal with as well. Those are the kinds of problems where we need to be devoting our energy and attention.
I would say more on the fallacy of the “evolvability of the web” argument, but I’ve probably taken up too much space already…
Regards,
-Mike Glendinning.
[1] http://www.innoq.com/blog/st/2006/02/22/more_soap_vs_rest_arguments.html
[2] http://www.innoq.com/blog/st/2006/12/04/the_lost_art_of_separating_concerns.html
[3] http://en.wikipedia.org/wiki/Protocol_(computing)