Spaces, shards, scalability through horizontal partitioning

I just read a GigaSpaces white paper and some application examples.

They have essentially applied the database horizontal-partitioning (or “sharding”) paradigm to the application server.

But they tend to gloss over the inherent problem of designing “completely self-sufficient” units. Their example is conveniently obvious, but in real life partitioning isn’t straightforward!

One Response to Spaces, shards, scalability through horizontal partitioning

  1. Owen Taylor says:

    Hey Bruno,

    I work for GigaSpaces and would like to address the very clear issue you raise: “Sure, you can scale if your application is partitionable, but what if it isn’t?”
    This results in two possible outcomes:
    1) You have a problem that can easily be sliced/partitioned
    2) You do not have a problem that can easily be sliced/partitioned

    As it turns out, many applications can be partitioned quite easily and scale as a result, for instance applications related to capital markets (which are now often implemented on an in-memory, reliable solution such as GigaSpaces). In these applications, slicing the logic and information in parallel is often straightforward: matching Bids and Asks by their stock symbol, for example, makes for an easy initial slicing. Even in this simple example, though, it is easy to see how one very busy stock can skew the load toward a single server and cause a resource imbalance. To counteract such behavior, it is a good idea not to bet the farm on a single criterion. With stock trading, it might be better to use the symbol and the volume together to come up with a more versatile and evenly distributed partitioning scheme.
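    The composite-key idea above can be sketched in a few lines. This is plain Python, not the GigaSpaces API; the hot-symbol set and the volume bucket size are assumptions made purely for illustration:

```python
import hashlib

def route(symbol: str, volume: int, n_partitions: int, hot_symbols: set) -> int:
    """Pick a partition for an order.

    Routing by symbol alone keeps all matching for one instrument on one
    node, but a very busy symbol then skews load onto that node. For
    symbols known to be hot, fold a coarse volume bucket into the routing
    key so their traffic spreads across several partitions.
    """
    if symbol in hot_symbols:
        bucket = volume // 1000  # coarse volume band (bucket size is an assumption)
        key = f"{symbol}:{bucket}"
    else:
        key = symbol
    # Stable hash so the same key always lands on the same partition.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % n_partitions
```

    Note the trade-off: once a hot symbol is split across volume bands, matching logic for that symbol must itself be able to work across partitions, which is exactly why the choice of criteria deserves care.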

    I have already started to address the second case, that of a problem which does not partition easily; let me continue.

    When solving problems that do not slice easily, one must consider which portions of the problem are preventing the partitioning and make some decisions about them. Does the portion that appears unsliceable require reference data that is too large to copy onto every node? If not, you may be able to place a copy of the shared reference data into each of the partitions, so that they can optimistically read from the local copy and thereby eliminate most of the latency.

    If the required information is too large for this copy-everywhere approach, it is probably necessary to consider moving the logic to the information instead of the reverse. Here you may be able to employ the scatter/gather pattern: a copy of the logic is sent to each node holding some of the reference data and executed in parallel on those nodes, and the results are then aggregated into a final answer. The overall effort of solving the problem is effectively partitioned while generating a minimum of network traffic.
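    The scatter/gather pattern described above can be sketched as follows. This is a minimal, hedged illustration, not GigaSpaces code: the "nodes" are stand-in dictionaries, and a thread pool stands in for sending the logic across the network:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical reference data, sharded across three "nodes"
# (plain dicts standing in for remote partitions).
NODE_DATA = [
    {"AAA": 10, "BBB": 5},
    {"CCC": 7},
    {"DDD": 3, "EEE": 1},
]

def scatter_gather(task, aggregate):
    """Scatter: run the same logic against every node's local shard in
    parallel. Gather: fold the partial results into one final answer."""
    with ThreadPoolExecutor(max_workers=len(NODE_DATA)) as pool:
        partials = list(pool.map(task, NODE_DATA))
    return aggregate(partials)

# Example: total a value across all shards; only the small partial sums
# travel back, never the reference data itself.
total = scatter_gather(lambda shard: sum(shard.values()), sum)
```

    The point of the pattern is visible in the last line: each node ships back one small partial result rather than its slice of the reference data, which is what keeps network traffic to a minimum.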
    There is no question that there are consulting dollars waiting to be earned by those of us who are willing to think outside the traditional serial box. Perhaps you can begin to think about additional patterns that could leverage the flexibility inherent in a Space-based programming solution, where Objects are the currency of the system and can therefore act both as information to be distributed and as behavior to be executed.

    The secret to winning at this game is first to have a low-latency, high-throughput infrastructure that uses memory in a robust, fault-tolerant manner (allowing us even to begin to address the kinds of extreme transaction processing we are seeing today), and then to know when to move information from place to place and when to move logic or behavior instead.

    As I just decided I like to say, “The problems do not get easier, only the infrastructure.”


