Jonathan writes:

For human interaction, for searching and cataloging based on simulating human navigation patterns, pages full of links are good. Especially when the content of that page isn’t terribly useful, it’s nice to be able to click virtually at random and escape into the soothing world of advertising pitches. But for a particular user, the majority of links in a page never get used. What is a machine going to do with a bunch of random links? In machine-to-machine communication, links are undoubtedly going to be much fewer in quantity, but much higher in quality. And if the service is doing something useful on behalf of a user that doesn’t require the transmission of lots of links, is that therefore a bad service? A service should return precisely the number of links necessary for it to do useful work. No more, and no less.
His post is a response to Nick Gall’s claim:
Nowhere in the vast multitude of WS-* specifications, or the articles or papers describing them, is there any imperative or even any emphasis that a Web service should return an XML document that is populated with references to other Web resources, i.e. URIs. But it is a fundamental principle of the Web that good Web resources don’t “dead end” the Web; instead, they return representations filled with URIs that link to other Web resources.
I totally fail to see how Jonathan brings up any argument against this: as long as Web services add nothing more to the Web than a single endpoint per service, they are not “on the Web”; they are indeed dead ends.
For a good example of links being used in a machine-processable way, see Atom Service documents.
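To make that concrete, here is a minimal sketch, not taken from any of the posts quoted above, of a client that treats an AtomPub service document as machine-processable hypermedia: it parses the document and extracts the URIs of the collections it links to. The sample document and its example.org URIs are invented for illustration; only the service document’s own URI would need to be known in advance.

```python
import xml.etree.ElementTree as ET

APP_NS = "{http://www.w3.org/2007/app}"
ATOM_NS = "{http://www.w3.org/2005/Atom}"

# An invented service document, as a server might return it from its one
# well-known entry-point URI.
SERVICE_DOC = """\
<service xmlns="http://www.w3.org/2007/app"
         xmlns:atom="http://www.w3.org/2005/Atom">
  <workspace>
    <atom:title>Example Blog</atom:title>
    <collection href="http://example.org/blog/entries">
      <atom:title>Entries</atom:title>
    </collection>
    <collection href="http://example.org/blog/media">
      <atom:title>Media</atom:title>
    </collection>
  </workspace>
</service>
"""

def collection_links(service_xml):
    """Yield (title, href) pairs for every collection the document links to."""
    root = ET.fromstring(service_xml)
    for collection in root.iter(APP_NS + "collection"):
        title = collection.findtext(ATOM_NS + "title", default="")
        yield title, collection.get("href")

if __name__ == "__main__":
    # Every URI the client goes on to use is discovered from the
    # representation itself -- the service document doesn't dead-end the Web.
    for title, href in collection_links(SERVICE_DOC):
        print(title, "->", href)
```

The point of the sketch is simply that the links are few, typed, and meant for a machine: the client follows what the representation gives it rather than being configured with every endpoint in advance.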