Interface ClusterStoreAndForward

public interface ClusterStoreAndForward
MatsSockets forwards requests from WebSocket-connected clients to a Mats Endpoint, and must get the reply back to the client. The WebSocket is "static" in that it is a fixed connection to one specific node (as long as the connection is up), which means that we need to get the message back to the same node that fired off the request. We could have used the same logic as with MatsFuturizer (i.e. the "last jump" uses a node-specific Topic), but that was deemed too feeble: We want reliable messaging all the way to the client. We want to be able to handle the client losing the connection (e.g. sitting on a train going through a tunnel) and thus reconnecting - it might not come back to the same server as last time. We also want to be able to reboot any server (typically to deploy new code) at any time, even though this obviously kills all WebSockets attached to it.

To ensure reliable messaging for MatsSockets, we therefore employ a "store and forward" logic: When the reply comes in - which, as with standard Mats logic, can happen on any node in the cluster of nodes handling this Endpoint Stage - the reply message is temporarily stored in some reliable storage. We then look up which node currently holds the MatsSession, and notify it about new messages for that session. That node receives the notice, finds the now-local MatsSession, and forwards the message. Note that the node receiving the reply and the node holding the WebSocket/MatsSession may be the same node, in which case the result is a local forward.
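The store-and-forward sequence above can be sketched roughly as below. This is a minimal illustration only: the names (`OUTBOX`, `SESSION_HOME`, `replyReceived`, etc.) are hypothetical, and the two shared maps stand in for the reliable backing datastore and the session registry - they are not the actual ClusterStoreAndForward API.

```java
import java.util.*;
import java.util.concurrent.*;

// Hypothetical sketch of the store-and-forward flow; names are illustrative,
// not the actual ClusterStoreAndForward API.
public class StoreAndForwardSketch {
    // Stand-in for the reliable, cluster-shared storage: sessionId -> queued messages.
    static final Map<String, Queue<String>> OUTBOX = new ConcurrentHashMap<>();
    // Stand-in for the session registry: sessionId -> nodename currently holding the MatsSession.
    static final Map<String, String> SESSION_HOME = new ConcurrentHashMap<>();

    static final String THIS_NODE = "node-A";

    // Invoked on whichever node of the cluster the Mats reply happens to land on:
    static void replyReceived(String sessionId, String message) {
        // 1. Temporarily store the message in the reliable storage.
        OUTBOX.computeIfAbsent(sessionId, s -> new ConcurrentLinkedQueue<>()).add(message);
        // 2. Look up which node currently holds the MatsSession ...
        String homeNode = SESSION_HOME.get(sessionId);
        // 3. ... and notify it - possibly ourselves, which becomes a local forward.
        if (THIS_NODE.equals(homeNode)) {
            forwardLocally(sessionId);
        } else if (homeNode != null) {
            notifyRemoteNode(homeNode, sessionId);
        } // if no home node: session is disconnected; messages wait in the outbox.
    }

    static void forwardLocally(String sessionId) { /* drain outbox over the local WebSocket */ }
    static void notifyRemoteNode(String node, String sessionId) { /* ping the session's home node */ }
}
```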

Each node has its own instance of this class, connected to the same backing datastore.

It is assumed that the consumption of messages for a session is done single-threaded, on one node only. That is, only one thread on one node will actually get messages (via getMessagesFromOutbox(String, int)) and, more importantly, register them as completed. Wrt. multiple nodes, this argument still holds, since only one node can hold a MatsSocketSession - the one that has the actual WebSocket connection. I believe it is possible to construct a bad async situation here (connect to one node, authenticate, get a SessionId, immediately disconnect and reconnect, and repeat this until the current ClusterStoreAndForward has the wrong idea of which node holds the Session), but this should at most result in the client screwing things up for itself (not getting messages). A Session is not registered until the client has authenticated, so this can never lead to information leakage to other users. Such a situation will also resolve itself once the client again performs a non-malicious reconnect. It is the server that constructs and holds SessionIds: A client cannot itself force the server side to create a Session or SessionId - it can only reconnect to an existing SessionId that it was given earlier.
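The fetch-then-complete contract that this single-consumer assumption enables can be sketched as follows. This is an illustrative, in-memory stand-in (the names and the embedded outbox are hypothetical, not the real API): the consumer fetches a batch without removing it, forwards it over the WebSocket, and only then registers it as completed. If the node dies between forward and complete, the batch is simply redelivered on the next fetch (at-least-once delivery), which is exactly why completion must be driven by the single consuming thread.

```java
import java.util.*;

// Illustrative sketch (not the real ClusterStoreAndForward API) of the
// single-threaded fetch -> forward -> complete contract for one session.
public class OutboxConsumerSketch {
    // In-memory stand-in for the backing datastore's outbox for one session.
    final Deque<String> outbox = new ArrayDeque<>();
    final List<String> deliveredToClient = new ArrayList<>();

    List<String> getMessagesFromOutbox(String sessionId, int maxMessages) {
        List<String> batch = new ArrayList<>();
        Iterator<String> it = outbox.iterator();
        while (it.hasNext() && batch.size() < maxMessages) {
            batch.add(it.next()); // note: NOT removed - only completion removes them,
        }                         // so a crash before completion leads to redelivery.
        return batch;
    }

    void messagesComplete(String sessionId, List<String> batch) {
        outbox.removeAll(batch);
    }

    // The single consumer thread for this session's home node runs this:
    void consumeOnce(String sessionId) {
        List<String> batch = getMessagesFromOutbox(sessionId, 20);
        deliveredToClient.addAll(batch);     // forward over the WebSocket
        messagesComplete(sessionId, batch);  // only then mark completed
    }
}
```

With two concurrent consumers, one could complete (and thus delete) a batch the other had not yet forwarded, or deliver batches out of order - the single-thread-per-session assumption avoids having to handle that.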