
GigaSpaces XAP’s WAN Gateway

In a distributed environment, if you can’t send data from one peer to another peer, you’re… not actually distributed. It’s one of the first things shown to users of distributed products, with good reason: sharing data is the whole point, right? If you can’t synchronize between peer nodes in a given cluster, you’re not really able to call it a cluster at all.

That works out fine if you’re on a LAN, because you have fine-grained control over network resources and routes. In addition, a LAN is by definition local, so you don’t have to contend with network latency or other synchronization issues.

GigaSpaces XAP has a feature called the WAN Gateway that addresses synchronization between peer clusters – not just peer nodes in a given cluster, but between heterogeneously-located clusters, in any fashion you choose.

How does it work?

Well, let’s look at the traditional “share data” application: you have two peers, “peer.one” and “peer.two,” in a cluster that we’ll call “cluster.one.”[1] If peer.one becomes the origin for a data item “A”, then peer.two is able to access and (presumably) modify that data item; any changes written by peer.two will be visible on peer.one. Naturally, peer.two can originate data as well, and peer.one will have the same visibility and access to that data.

This is standard operating procedure for distributed datagrids and caches; most products in this space use this kind of scenario for demonstrations[2].

However, note that peer.one and peer.two are literally peers. They’re typically on the same class C network (i.e., on the same LAN) and have no public visibility to worry about, no tunnels to go through to talk to each other (or the potential central server)… everything is simple and clean.

We all know that the real world doesn’t actually work this way, because it’d be too clean if it did. In the real world, things get messier. Not only do you have peers one and two in cluster one, but you have peers one and two in cluster two – with cluster two located across the globe.

Synchronization at this point is a lot messier.

That’s what XAP’s WAN gateway is designed to address.

In the gateway scenario, you can have multiple heterogeneous clusters that synchronize according to your configuration – not only are they separated geographically, but they can also have different network topologies. One cluster can have two peers while another has four hundred; you might want to send data from cluster.one to clusters two and three, while cluster two only sends data back to cluster one… the possibilities are very open.

For the sake of simplicity, let’s presume two identical clusters talking to each other, synchronizing data both ways. Our clusters have two peers each, so we have four nodes total in the system.

In cluster.one, peer.one.one talks to peer.one.two, and vice versa.
In cluster.two, peer.two.one talks to peer.two.two, and vice versa.

Within each cluster, the peers’ memory space is shared; from a datagrid client’s perspective, cluster.one looks like a single data grid (which it is, it’s just distributed over two peers.)

Implementing the Gateway

The WAN Gateway is implemented as a special kind of processing unit in XAP. Each data grid defines its relationship with a gateway, which describes outgoing synchronization events – i.e., where data is written to.[3]

The gateway itself defines its relationships – the other gateways it exposes to the data grid, and which data grids to pull synchronization events from.
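As a sketch of what this wiring looks like in practice: the gateway processing unit is configured through XAP’s os-gateway Spring namespace. The gateway names, hosts, and ports below are made up for illustration, and the exact element and attribute names may vary by XAP version – treat this as an approximation, not a copy-paste configuration.

```xml
<!-- Gateway processing unit for cluster one ("NEWYORK"), sketched from
     XAP's os-gateway Spring namespace; names, hosts, and ports are
     illustrative only. -->

<!-- Where each gateway's discovery service can be found. -->
<os-gateway:lookups id="gatewayLookups">
  <os-gateway:lookup gateway-name="NEWYORK" address="ny-host"
                     discovery-port="10001" communication-port="7000"/>
  <os-gateway:lookup gateway-name="LONDON" address="london-host"
                     discovery-port="10002" communication-port="8000"/>
</os-gateway:lookups>

<!-- Delegator: pushes local replication events out to the remote gateway. -->
<os-gateway:delegator id="delegator" local-gateway-name="NEWYORK"
                      gateway-lookups="gatewayLookups">
  <os-gateway:delegations>
    <os-gateway:delegation target="LONDON"/>
  </os-gateway:delegations>
</os-gateway:delegator>

<!-- Sink: pulls replication events from the declared sources and applies
     them to the local space. -->
<os-gateway:sink id="sink" local-gateway-name="NEWYORK"
                 gateway-lookups="gatewayLookups"
                 local-space-url="jini://*/*/myNYSpace">
  <os-gateway:sources>
    <os-gateway:source name="LONDON"/>
  </os-gateway:sources>
</os-gateway:sink>
```

The delegator handles the outgoing half of the relationship and the sink the incoming half; a gateway that only delegates (or only sinks) is how asymmetric topologies are built.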

So our clusters described above get a little (but only a little) more complex:

Cluster one has three deployed units. One is a gateway unit, which describes itself and cluster two (and declares gateway two as a data source). The other two are data grid deployments (as peers), which naturally (and transparently) synchronize with each other; they declare the gateway unit as a synchronization point, and the gateway unit’s destination as a data sink.

Cluster two has the same three deployed units, albeit with different names; its gateway describes itself and cluster one, and declares cluster one as a data source[4]; the local data grid peers describe the gateway unit and declare it as a data sink.
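The data grid side of the pairing is the mirror image, again as an approximate sketch: the space deployment declares which gateway targets its replication events should be delivered to. Cluster one’s space is shown; cluster two’s would be identical with the names swapped. Space and gateway names here are illustrative, and element names may differ slightly across XAP versions.

```xml
<!-- Data grid processing unit for cluster one: the space names the
     gateway targets that should receive its replication events.
     Names are illustrative; check element names against your XAP version. -->
<os-core:space id="space" url="/./myNYSpace"
               gateway-targets="gatewayTargets"/>

<os-gateway:targets id="gatewayTargets" local-gateway-name="NEWYORK">
  <os-gateway:target name="LONDON"/>
</os-gateway:targets>
```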

Now, any write to cluster two’s data grid will be reflected in cluster one’s data grid, no matter where the two clusters are physically located, or on what network they’re based.

We’ve described two-way communication of all data, but you can go further: not only can you declare one-way communications, but you can declare that only some data items get mirrored, as well as the granularity. You get full control, and you can treat the entire system as one homogeneous unit (i.e., all peers act like true peers), or you can configure a command-and-control arrangement (where one cluster “rules” the others, or sees all data whereas the other clusters see only some data).
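For instance, a one-way link is just an asymmetric version of the same declarations: the sending cluster’s gateway delegates outward but declares no sources, so nothing flows back into its local space. A rough sketch (illustrative names, approximate element names):

```xml
<!-- One-way replication: the NEWYORK gateway pushes events to LONDON,
     but deploys no sink (and lists no sources), so no remote events are
     applied locally. "gatewayLookups" refers to a lookups bean defined
     elsewhere in the same configuration. Names are illustrative. -->
<os-gateway:delegator id="delegator" local-gateway-name="NEWYORK"
                      gateway-lookups="gatewayLookups">
  <os-gateway:delegations>
    <os-gateway:delegation target="LONDON"/>
  </os-gateway:delegations>
</os-gateway:delegator>
```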

It also allows you to configure clusters differently. Imagine one office running its data grid on a set of Atom processors, while another office has a large server handling its data grid. You can control every element without treating the entire system as a true set of peers (because you don’t want to load those poor Atom processors with the same kind of work you wouldn’t mind burdening the large server with).

XAP has always exposed the raw ability to do this sort of thing, because we expose our synchronization classes for users. However, the WAN gateway is tested as a core feature of the product, so you don’t have to manage external access; it’s done for you, documented and flexible.

Incidentally, we have a screencast that shows the WAN gateway in action on the gigaspacestv channel on YouTube. It is based on a project hosted at GitHub (the “bestpractices” project, which includes much of the source for the screencasts). The screencast differs from the GitHub project only in that it uses separate hostnames for the clusters instead of hosting both on localhost.



1 Hopefully a furniture store (“Pier One”) and Pink Floyd (“Cluster One”) don’t decide to sue me now.

2 Some add central hubs to the mix, so you start peer.one, peer.two, and a server of some kind, depending on the product’s supported topologies.

3 This is where you get the ability to specify routing. “Datagrid one” can send data to “datagrid two,” which sends data to “datagrid three,” and so forth and so on – gateways don’t have to be strict peers that have to be able to write to and read from each other.

4 If we wanted the synchronization to be one-way, we wouldn’t describe the other gateway as a data source. It’d then be write-only.

 