This is a single archived entry from Stefan Tilkov’s blog.

Scaling Messaging

Stefan Tilkov

Patrick Logan questions my assertion that AMQP is something you’d use within a company, and XMPP across company boundaries (i.e. over the Internet):

What would be the drivers for this dichotomy? Why are two different messaging systems necessary? What would limit AMQP from being used on the internet scale? What would limit XMPP from being used on the intranet scale?

Patrick may well be right — I don’t know enough about either AMQP or XMPP to credibly defend my gut feeling. One of my motivations, though, was that XMPP is based on XML, while AMQP (AFAIK) is binary. This suggests to me that AMQP will probably outperform XMPP for any given scenario — at the cost of interoperability (e.g. with regard to i18n). So AMQP might be a better choice if you control both ends of the wire (in this case, both ends of the message queue), while XMPP might be better to support looser coupling.
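To make the trade-off concrete, here is a rough sketch of the difference in wire overhead. The XML side uses a real (if minimal) XMPP-style stanza; the binary side uses a hypothetical fixed-header frame for illustration only — it is not the actual AMQP wire format, just a stand-in for the kind of compact framing a binary protocol can use:

```python
import struct
import xml.etree.ElementTree as ET

# XMPP-style: the payload travels inside a self-describing XML stanza.
stanza = ET.Element("message", attrib={"to": "orders@example.com", "type": "chat"})
ET.SubElement(stanza, "body").text = "OK"
xml_bytes = ET.tostring(stanza, encoding="utf-8")

# Binary-style (hypothetical layout, NOT real AMQP framing):
# 1-byte frame type, 2-byte channel, 4-byte payload length, then the payload.
payload = b"OK"
binary_frame = struct.pack("!BHI", 1, 0, len(payload)) + payload

print(f"XML stanza: {len(xml_bytes)} bytes, binary frame: {len(binary_frame)} bytes")
```

The binary frame carries the same two-byte payload in a fraction of the bytes, but the XML stanza is readable and extensible by any party without prior agreement on the frame layout — which is the interoperability argument in a nutshell.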

But as I keep saying (after hearing something similar from Mark Baker for years): most of the stuff that works on Internet scale is a better choice for company-internal scenarios, too — so I’m aware I may be slightly contradicting myself here.

On September 19, 2007 7:11 PM, dbt said:

The reason you’re correct is that, in the medium term, high-bandwidth, low-latency messaging that would eschew XML for a binary format for performance reasons is unlikely to work over the commodity Internet.

On September 22, 2007 4:11 AM, Benjamin Carlyle said:

My company effectively uses a binary HTTP internally, but HTTP externally. The same reasoning applies, at least at a cursory level: we had an existing SOA solution with known properties with respect to performance and scalability. Using the HTTP data model allowed us to head down a REST path internally without shifting these properties too far. Now that we are using HTTP more and more to speak to external systems, we end up with two protocols essentially doing the same thing.

However, it is still probably false reasoning. HTTP on the big, high-latency, low-bandwidth external interfaces and our existing solution on the small, low-latency, high-bandwidth internal interfaces? The networks themselves don’t justify the dichotomy. The dichotomy is supported by two factors:

* More messages are likely to be exchanged internally, possibly leading to a performance profile that outweighs even the advantages of the better internal network.
* Business momentum: our solution works, so why change it?

It used to be that TCP/IP was considered an “inter”-networking protocol: The protocol you use between your networks, alongside the protocols that service the networks themselves. Private networking protocols were eventually supplanted, and I suspect our internal solution will one day also be supplanted. It only takes one generation of software to die out and a new one to emerge for the underlying technologies to become ubiquitous.