The computer programming world has a funny way of turning tail and running in the opposite direction after committing heavily to a particular paradigm. There is, after all, a crisis of complexity. As systems become more and more powerful, they become harder and harder to manage. Solving the right problems in the right order (finding the right layering of abstractions) can ease the ride, but ultimately, abstraction inhibits the sideways evolution of software.
I've been thinking a lot lately about distributed systems and, more generally, about programming systems of multiple processes.
In my day job, I work heavily with MQTT. For those who don't know, MQTT is a publish/subscribe messaging protocol. It's rather simple, and designed with machine-to-machine (M2M) applications in mind. You can also do a surprising amount with the carefully selected primitive functionality it provides.
In MQTT, publishers and subscribers both connect to the central broker to exchange messages. The broker is more of an implementation detail than an active participant.
Publishers 'publish' messages to the broker. In the message header is a 'topic' - essentially a string which describes the data. An example would be 'lounge/temperature'.
Subscribers also connect to the broker, and 'subscribe' to topics. Once subscribed, they receive all messages published to those topics.
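To make the routing model concrete, here's a toy, in-memory sketch of topic-based publish/subscribe. This is not real MQTT (there's no broker process, network, or wire protocol) - just an illustration of how messages flow from publishers to subscribers by topic:

```python
# Toy in-memory publish/subscribe - illustrative only, not real MQTT.

class ToyBroker:
    def __init__(self):
        self.subscribers = {}  # topic -> list of callback functions

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver to every subscriber of this exact topic.
        for callback in self.subscribers.get(topic, []):
            callback(topic, message)

broker = ToyBroker()
received = []
broker.subscribe("lounge/temperature", lambda t, m: received.append((t, m)))
broker.publish("lounge/temperature", "21.5")
broker.publish("kitchen/temperature", "19.0")  # no subscriber; dropped
print(received)  # [('lounge/temperature', '21.5')]
```

Note that the publisher and subscriber never know about each other - they only share a topic string, which is the essence of the decoupling MQTT provides.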
Messages have a Quality of Service, or 'QoS' value attached to them, indicating what kind of transport they require to their subscribers. QoS has 3 possible values:
- 0 - At most once (cheap, unreliable)
- 1 - At least once (duplicates possible)
- 2 - Exactly once (expensive, reliable)
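What 'at least once' implies can be sketched in a few lines: the sender keeps the message until it sees an acknowledgement, so a lost ack means a retransmission, and the receiver may see the same message twice. This is purely illustrative of the idea, not MQTT's actual wire protocol:

```python
# Hedged sketch of QoS 1 ('at least once') delivery, not MQTT's wire protocol.

def deliver_at_least_once(send, max_attempts=5):
    """Retry `send` (which returns True once an ack arrives) until it succeeds."""
    for attempt in range(1, max_attempts + 1):
        if send():
            return attempt  # number of transmissions it took
    raise RuntimeError("gave up: message may or may not have been delivered")

# Simulate a link where the first acknowledgement is lost.
acks = iter([False, True])
attempts = deliver_at_least_once(lambda: next(acks))
print(attempts)  # 2 - the receiver may have processed the message both times
```

QoS 0 skips the acknowledgement entirely, and QoS 2 adds a second handshake round-trip to suppress the duplicate - hence 'cheap, unreliable' versus 'expensive, reliable'.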
There's also a 'retain' flag, which can be set on any message. The broker stores the most recent retained message on a topic, and delivers it to any new subscriber immediately after they subscribe - even if the publisher is no longer around.
Finally, there's a Last Will and Testament, or 'LWT', message. A publisher can provide this message to the broker to be published automatically in the event that the publisher's connection is dropped.
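Both features can be sketched in a self-contained toy broker (again, an illustration of the semantics, not a real MQTT implementation - the client IDs and topic names are made up):

```python
# Toy broker sketch showing 'retain' and LWT behaviour conceptually.

class ToyBroker:
    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks
        self.retained = {}     # topic -> most recent retained message
        self.wills = {}        # client id -> (topic, message)

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)
        if topic in self.retained:
            # New subscribers immediately receive the retained message.
            callback(topic, self.retained[topic])

    def publish(self, topic, message, retain=False):
        if retain:
            self.retained[topic] = message
        for cb in self.subscribers.get(topic, []):
            cb(topic, message)

    def connect(self, client_id, will=None):
        if will:
            self.wills[client_id] = will

    def connection_lost(self, client_id):
        # Publish the Last Will and Testament on an ungraceful disconnect.
        if client_id in self.wills:
            self.publish(*self.wills.pop(client_id))

broker = ToyBroker()
broker.publish("lounge/temperature", "21.5", retain=True)

log = []
# The retained message arrives even though it was published before we subscribed.
broker.subscribe("lounge/temperature", lambda t, m: log.append(m))

broker.connect("sensor-1", will=("sensor-1/status", "offline"))
broker.subscribe("sensor-1/status", lambda t, m: log.append(m))
broker.connection_lost("sensor-1")
print(log)  # ['21.5', 'offline']
```

Together, retain and LWT let subscribers learn a publisher's last known state and its disappearance without ever coordinating with it directly.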
Trying it Out
It's dead easy to try out MQTT on a Debian machine.
Install the Mosquitto MQTT broker, along with its utilities:
sudo apt-get install mosquitto mosquitto-clients
The 'mosquitto-clients' package provides the mosquitto_pub and mosquitto_sub command line utilities, which can be used to interact with the broker.
Subscribe to a topic in one terminal:
mosquitto_sub -h localhost -t mydemotopic
Whilst publishing to it in another:
mosquitto_pub -h localhost -t mydemotopic -m 'This is a message!'
Functions and RPC
As I began to use this protocol more and more to tie together M2M applications, I started to see why the programming world is rejecting RPC methods like CORBA in favour of messaging technologies like MQTT.
It all comes down to using the right tool for the job. Let's look at function calls:
var result = calculateResult(someInput);
var moreComplexResult = combine(calculateThing(a), calculateOther(b))
Functions are great for programming. Specifically, they're great for programming threads; a thread being a path of execution that traverses functions. By definition, a thread cannot be in two places at once.
Multithreaded programming changes this very fundamental nature of a function, and can be hard to do correctly as a result.
RPC, or 'Remote Procedure Call', is the name given to techniques which attempt to make function calls network-transparent, such that when invoking a function, the implementation of the function and the work it performs can be carried out either locally, or remotely on another computer.
RPC systems are generally, in my view, hard to work with. This is due largely to what exactly a function is and does. Whilst it is absolutely technically possible to make function calls 'syntactically' network-transparent, there is a world of difference between what can be said about a function call, and what can be said about two threads communicating across a network.
A local function call will never fail in the ways that a remote one often will, and code must be structured to work with this reality. RPC is a facade which we end up breaking apart in order to work correctly and usefully in the real world. What if a function call completes, but you don't receive the result due to a network disruption? That's a situation which doesn't exist with normal functions, and must be handled explicitly, thus breaking the abstraction and moving practical RPC farther away from anything like a typical 'procedure call'.
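Here's a sketch of the explicit handling that a 'remote function call' forces on you. The transport is simulated (the names `remote_add` and `flaky_transport` are invented for illustration); the point is that the caller must distinguish "failed" from "unknown", a case a local call never presents:

```python
# Illustrative only: a 'remote call' whose reply can be lost in transit.

class RemoteCallTimeout(Exception):
    """The call may or may not have executed - we simply don't know."""

def remote_add(a, b, transport):
    """Invoke addition 'remotely' via a transport that can lose the reply."""
    result, reply_lost = transport(a, b)
    if reply_lost:
        raise RemoteCallTimeout("work may have completed; result unknown")
    return result

def flaky_transport(a, b):
    # Simulate: the remote work happens, but the reply is dropped.
    return a + b, True

try:
    total = remote_add(2, 3, flaky_transport)
except RemoteCallTimeout:
    total = None  # the caller must now decide: retry? make the op idempotent?

print(total)  # None - the plain-function-call abstraction has leaked
```

Notice that the addition actually happened on the 'remote' side, yet the caller has nothing - exactly the ambiguity that local function calls never exhibit.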
I would also argue that functions and their single-threaded call-and-return behaviour simply runs against what you actually need to do when building distributed systems. Even in a world of perfect network reliability and one where RPC performance matches that of local function calls, an application running on multiple machines is an application running on multiple threads, and the code you end up writing is NOT function-oriented. Normal functions and single threaded programming are oriented toward usefully manipulating state which you own. This may form an implementation detail of a node in a distributed system, but as a whole, we have a very different beast. Distributed systems are not single-threaded worlds of state - they're often message-oriented, like MQTT.
Sometimes you really do want request-response semantics exactly as a function presents them, and in those cases they can easily be built atop a messaging protocol. Messaging semantics are, however, distributed-systems-friendly. There is no value in implementing them via a painstakingly constructed imitation of a non-distributed system, as is often the case when you start building your distributed application with RPC.
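To show how little it takes, here's a sketch of request-response layered over topic-based messaging, using a reply topic and a correlation ID. The broker is a toy in-memory stand-in for something like MQTT, and the topic names are made up for illustration:

```python
# Request-response built on publish/subscribe - an illustrative sketch.
import itertools

class ToyBroker:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for cb in list(self.subscribers.get(topic, [])):
            cb(topic, message)

broker = ToyBroker()
ids = itertools.count(1)

# The 'server': answers each request on the reply topic named in the request.
def handle_request(topic, message):
    reply_topic, corr_id, payload = message
    broker.publish(reply_topic, (corr_id, payload * 2))

broker.subscribe("calc/request", handle_request)

# The 'client': function-like call semantics layered over publish/subscribe.
def call(payload):
    corr_id = next(ids)
    responses = {}
    broker.subscribe("calc/reply", lambda t, m: responses.setdefault(m[0], m[1]))
    broker.publish("calc/request", ("calc/reply", corr_id, payload))
    return responses[corr_id]

print(call(21))  # 42
```

The correlation ID matches each response to its request, so many calls can be in flight over the same topics - and, unlike RPC, the failure modes of the transport stay visible rather than being papered over.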