
XAP Distributed Task Execution vs Map/Reduce

Parallel processing is one of XAP's main strengths, and it is supported in a few different ways.  Map/reduce is a parallel processing algorithm (exemplified by Hadoop) that shares some traits with XAP parallelism, but differs considerably in others.  The purpose of this post is to compare and contrast XAP parallelism with that provided by the map/reduce algorithm, both to reduce confusion and to make clear how each approach fits into the larger clustered ecosystem.  It's important to note that parallel processing in XAP happens in a real-time context, as opposed to batch-oriented Hadoop.

Parallelism in XAP

1. Distributed RPC

XAP supports the deployment of remoted services to its processing grid.  In a partitioned model, each partition hosts a service endpoint and processes client requests.  This colocates code execution with in-memory data, a key XAP concept.  Clients can route requests to specific partitions, or broadcast requests to multiple partitions in parallel.  On the client side, a "reducer" can be defined to process the results of the remote executions.  This reducer is similar to a map/reduce reducer, although only superficially.  The ability to define a "reducer" has led to some confusion about XAP and map/reduce.  More on that later.
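
To make the broadcast-plus-reducer idea concrete, here is a rough sketch of a remoted service whose client-side reducer sums per-partition results.  The CounterService interface is invented for illustration, and the OpenSpaces remoting classes are named from memory, so treat the exact signatures as approximate.

import org.openspaces.core.GigaSpace;
import org.openspaces.remoting.ExecutorRemotingProxyConfigurer;
import org.openspaces.remoting.RemoteResultReducer;
import org.openspaces.remoting.SpaceRemotingInvocation;
import org.openspaces.remoting.SpaceRemotingResult;

// Service interface; an implementation (annotated with @RemotingService)
// is deployed to every partition, colocated with that partition's data.
interface CounterService {
    long countMatches(String pattern);
}

public class CounterClient {

    public long totalMatches(GigaSpace gigaSpace, String pattern) {
        // Client-side reducer: folds the per-partition results into one value.
        RemoteResultReducer<Long, Long> sumReducer = new RemoteResultReducer<Long, Long>() {
            public Long reduce(SpaceRemotingResult<Long>[] results,
                               SpaceRemotingInvocation invocation) throws Exception {
                long total = 0;
                for (SpaceRemotingResult<Long> result : results) {
                    if (result.getException() != null) {
                        throw new IllegalStateException(result.getException());
                    }
                    total += result.getResult();
                }
                return total;
            }
        };

        // Broadcast the call to every partition; the reducer runs back on the client.
        CounterService service = new ExecutorRemotingProxyConfigurer<CounterService>(
                gigaSpace, CounterService.class)
            .broadcast(sumReducer)
            .proxy();

        return service.countMatches(pattern);
    }
}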

2. Distributed Task Execution

XAP supplies a means of executing arbitrary client code across its processing grid without service pre-deployment.  In this case, user code is dynamically loaded into the grid containers and executed immediately.  Like distributed RPC, a reducer can be defined to process results.  The dynamic nature of task execution makes it the closest analog to the way map/reduce works, but it's still quite different.
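
As an illustration, a distributed task implements both a per-partition execute() and a client-side reduce().  The sketch below follows the OpenSpaces executor API; the PartialCountTask class and its counting logic are made-up placeholders.

import java.util.List;

import org.openspaces.core.executor.DistributedTask;

import com.gigaspaces.async.AsyncResult;

// The task class is serialized, dynamically loaded into each grid container,
// and executed there; no prior service deployment is needed.
public class PartialCountTask implements DistributedTask<Integer, Integer> {

    public Integer execute() throws Exception {
        // Runs inside a partition, colocated with that partition's data.
        return countSomethingInThisPartition();
    }

    public Integer reduce(List<AsyncResult<Integer>> results) throws Exception {
        // Runs on the client once every partition has returned.
        int total = 0;
        for (AsyncResult<Integer> result : results) {
            if (result.getException() != null) {
                throw new IllegalStateException(result.getException());
            }
            total += result.getResult();
        }
        return total;
    }

    private int countSomethingInThisPartition() {
        return 0;  // stand-in for real partition-local work
    }
}

// Client side (gigaSpace is an org.openspaces.core.GigaSpace):
//   AsyncFuture<Integer> future = gigaSpace.execute(new PartialCountTask());
//   int total = future.get();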

3. Space-based Parallel Execution

XAP supports a continuous query capability that allows code to perform blocking reads on memory, waiting for objects with desired characteristics to appear.  By deploying worker threads that perform continuous queries against each partition, parallel processing can be coordinated through object exchange.  The client (master) writes a "job" object into each partition, the workers process it and mark it as complete, and the master reads the results back.  This is the rough equivalent of the XAP "Event Driven Remoting" feature.  This pattern doesn't even provide for a reducer, although implementing one would be trivial.
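
Below is a minimal sketch of this object-exchange pattern, assuming made-up Job and Result classes (real XAP POJOs would also carry @SpaceId and, for routing, @SpaceRouting annotations).  In this variant each worker takes a job off its partition and writes back a separate result object rather than marking the job complete.

import java.util.ArrayList;
import java.util.List;

import org.openspaces.core.GigaSpace;

class Job {
    private String id;
    private Integer route;    // routing value: decides which partition the job lands in
    private String payload;
    // getters/setters omitted; null fields in a template act as wildcards
}

class Result {
    private String id;
    private String output;
    // getters/setters omitted
}

class Worker implements Runnable {
    private final GigaSpace gigaSpace;   // embedded in a partition, sees only its own objects

    Worker(GigaSpace gigaSpace) { this.gigaSpace = gigaSpace; }

    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            Job job = gigaSpace.take(new Job(), 5000);   // blocking take, 5s timeout
            if (job == null) continue;                   // timed out, wait again
            gigaSpace.write(process(job));               // hand the answer back via the space
        }
    }

    private Result process(Job job) {
        return new Result();   // real work would go here
    }
}

class Master {
    List<Result> scatterGather(GigaSpace gigaSpace, List<Job> jobs) {
        for (Job job : jobs) {
            gigaSpace.write(job);                        // scatter: one job per partition
        }
        List<Result> results = new ArrayList<Result>();
        for (int i = 0; i < jobs.size(); i++) {
            results.add(gigaSpace.take(new Result(), 30000));   // gather
        }
        return results;   // a trivial "reducer" could fold these into a single answer
    }
}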

 

Scatter/gather
 
All the XAP parallel processing features use the scatter/gather pattern.  This is the basic master-worker paradigm.  You have a collection of worker nodes, the master runs your code on them, and then they return all their results to the master.  There are no limitations regarding what work the workers do, how they do it, what they return (if anything), or what the master does when it receives the results.  This is quite different from map/reduce, as described below.  The scatter/gather concept is in fact a lower-level construct than map/reduce, and is essential to map/reduce's operation.
 
 
Map/reduce
 
This is a very specific algorithm that has some similarities to scatter/gather.  I'll describe the algorithm using Hadoop.  Hadoop typically reads and stores its data in HDFS (the Hadoop Distributed File System), and is tightly coupled to it.  HDFS breaks files into large blocks (typically 64MB or greater), replicates them (typically 3x), and spreads them across the cluster nodes (on locally attached hard drives).  HDFS knows where every block replica is stored in the cluster.
 
-> Note that we've already departed from XAP architecture, which only has a single primary copy of each datum
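
To see what "knows where every block replica is stored" looks like in practice, the HDFS client API exposes block locations directly.  A small sketch (the file path is hypothetical):

import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowBlockLocations {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        Path file = new Path("/data/corpus.txt");   // hypothetical input file
        FileStatus status = fs.getFileStatus(file);

        // One entry per block; each lists the datanodes holding a replica.
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println("offset " + block.getOffset()
                    + ", length " + block.getLength()
                    + ", hosts " + Arrays.toString(block.getHosts()));
        }
    }
}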
 
 
 
 
When a map/reduce job is initiated, the job runner looks at which HDFS file is being used as input and discovers which nodes hold the blocks for that file.  The job code is then run, wherever possible, on the nodes where the data is located.
 
-> Another departure from XAP.  Generally XAP tasks are broadcast to all nodes, or constrained to a single node/partition by a routing id.
 
Each node then runs the map algorithm.  This, unlike with XAP, is a very specific algorithm.  It is based on the idea that you are reading tuples out of an HDFS file.  In the most basic case, the tuple is just the line itself; in other words, a tuple with one field in it.  The "map" process is simply a (usually Java) function that gets called repeatedly with each line from the HDFS file.  This function is also handed a collector object, into which results are "mapped".  The mapper doesn't see anything else: just incoming tuples.  Let's take the word count example: each sentence is passed to the mapper.  The mapper breaks the sentence into words and, for each word, writes a {word, 1} tuple into the collector.  Over and over.  Example:
 
"The sky is blue"  –>  mapper
mapper tokenizes it –>  "The" "sky" "is" "blue"
the mapper writes 4 tuples to the collector
{"The", 1}
{"sky", 1}
{"is",1}
{"blue",1}
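
In code, this is essentially the classic WordCount mapper from the Hadoop tutorial (shown here against the org.apache.hadoop.mapreduce API; treat it as a sketch).

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // Called once per line of the input split.
        StringTokenizer tokenizer = new StringTokenizer(line.toString());
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, ONE);   // emit {word, 1} into the collector
        }
    }
}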
 
When the mappers complete, a sort is conducted and the data is transferred to the reducer.  Recall that all nodes running mappers are doing the same thing in parallel.  This transfer of intermediate data across the network (the "shuffle") is, by the way, a *major* bottleneck for large jobs.  The mapper output and all the remaining file-based operations occur outside of HDFS.
 
Now all the data is on the node running the reducer.  It has a bunch of these sorted files from the mappers on its local file system.  The system then merges these sorted files into one gigantic file and starts feeding the reducer.  The reducer sees something different from the mapper: rather than each tuple in the big sorted file getting passed in, the system collapses the keys and passes in each key plus all of its values in a big list.  For example, imagine that after all the mappers run, the sorted master file has the following tuples for the word "the":
 
{"the", 1}
{"the", 1}
{"the", 1}
 
Each of these could have come from the same or different nodes, of course.  What the reducer gets is:
 
{"the", [1, 1, 1]}
 
This allows the "reduction" to occur in a single call.  A word count reducer would just iterate over the list of 1's, add them up, and create the output tuple {"the", 3}.
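
The corresponding WordCount reducer, again roughly as it appears in the Hadoop tutorial:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        // e.g. word = "the", counts = [1, 1, 1]
        int sum = 0;
        for (IntWritable count : counts) {
            sum += count.get();
        }
        context.write(word, new IntWritable(sum));   // emits {"the", 3}
    }
}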
 
That's it.  Typically, the reducer output is then written back to HDFS, where potentially another Hadoop job will pick it up as input.  It is also important to note that with Hadoop you can have multiple reducers (each creating a different output file), which further distinguishes it from simple scatter/gather.  Note that, at one level, the reducer->mapper relationship can be equated to the master->worker concept of scatter/gather.  This similarity is superficial, however, as the reducer doesn't "launch" mappers: the association of mappers and reducers is done by the framework.
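
A minimal job driver ties the two classes together and shows where multiple reducers come in; the paths and the reducer count below are hypothetical.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountJob.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setNumReduceTasks(4);   // four reducers, four part-r-* output files

        FileInputFormat.addInputPath(job, new Path("/data/corpus.txt"));    // HDFS input
        FileOutputFormat.setOutputPath(job, new Path("/data/wordcount"));   // HDFS output

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}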
 
Conclusion
 
I've attempted to contrast the scatter/gather capabilities native to XAP with the map/reduce algorithm.  XAP's parallel processing capabilities bear some similarity to map/reduce: the master/worker concept, and minimizing network traffic via code/data colocation.  On the other hand, map/reduce is a strictly defined process, whereas XAP doesn't implement an "algorithm" per se, but rather provides a capability.  All this is not to say that map/reduce couldn't be implemented on XAP.  Quite the contrary.  One can imagine the task API being used to launch reducer tasks, which in turn launch mapper tasks from which results are collected.  The data model could be provided by XAP's SpaceDocument for maximum flexibility, along with a streaming API to drive it.  In fact, an effort is currently underway to implement a true map/reduce capability on XAP using these techniques.  In a future post, I'll describe that effort, and how it can deliver a true real-time, native Hadoop map/reduce capability that exploits XAP's in-memory speed, dynamic parallel execution, and scalability, while retaining compatibility with disk-based Hadoop.
 