• GigaSpaces EC2 Cloud Casino application released on Facebook

    The latest cloud-enabled application using the GigaSpaces EC2 cloud tools went live on Facebook yesterday. Built by Advanced Gaming Labs, it is the first fully multi-player, global Casino application that runs exclusively on the Cloud, and it was built on the cloud from the ground up using GigaSpaces.


    If you would like to play the application, you can do so on Facebook here.


    There are lots of firsts for this application, and you’ll be hearing lots more about it in the coming weeks as further news about the application, and about other launches using the same engine, becomes available.

  • Trading in the Clouds Amazon Start up Event Video

    In my prior post I shared the slides from the London Amazon Web Services event. The video of the Orbyte presentation about their GigaSpaces-built trading solution is now available and can be viewed below or from this link. Cédric Roll – ORbyte Solutions – AWS Start-up Event – 28-04-2009 from Skills Matter on Vimeo.

  • Online gaming high scalability Event in London

    I will be speaking at the Online Gaming High Scalability special interest group in London on 9th July. Registration for this event is now open, but be quick because there are already only 44 tickets left. My session is entitled “The space-based gaming advantage” and will explore, technically, how the GigaSpaces Space-Based Architecture brings benefits that give a specific advantage to gaming applications. We will delve into what these advantages are, what they are an alternative to, and how
    Read More

  • Live from AWS startup event in the UK

    I’m currently at the Amazon Web Services Startup event in the UK. It’s been a good day and there have been a lot of good presentations. GigaSpaces had a cloud client, Orbyte, referenced in a prior post, presenting their “Trading in the Clouds” solution. You can see their presentation below.
     

  • Using GigaSpaces to Trade in the EC2 Cloud

    One of the GigaSpaces customers I have been working with recently is Orbyte Solutions. The team behind Orbyte have many years’ experience developing FX trading and spread betting platforms for large banks and financial organisations. The retail CFD, FX and spread betting market is highly competitive, operating on tight spreads and low commissions. Clients are sophisticated and expect product innovation, advanced trading features and, above all, the ability to trade quickly in any conditions.

    The key drivers for these businesses to remain competitive can be summarised as:

    – Consistent, reliable and efficient service delivery under any market conditions
    – Product innovation and time to market
    – Cost effectiveness

    Historically the platforms which support these businesses have been based on a tiered architecture (at best 3-tier, at worst 2-tier). They are often heavily reliant on relational databases, and sometimes they implement some level of data caching and messaging. Orbyte observed that most companies in this sector have developed these caching and messaging components internally. What might have given them a technical edge 10 years ago has now become a burden with a high cost of ownership. Using GigaSpaces, Orbyte have built solutions that allow retail CFD, FX and spread betting businesses to get more “bang for their buck” by moving away from the traditional tier-based model and internally developed “plumbing” components (messaging, caching, and deployment).
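
    To make the space-as-plumbing idea concrete, here is a minimal, illustrative sketch (not Orbyte’s actual code) of how a GigaSpaces space can stand in for both the home-grown cache and the messaging layer: one component writes a trade into an embedded space and another takes it by template, as it would consume a message. The Trade class, its fields and the space name are assumptions made for this example.

        import org.openspaces.core.GigaSpace;
        import org.openspaces.core.GigaSpaceConfigurer;
        import org.openspaces.core.space.UrlSpaceConfigurer;
        import com.gigaspaces.annotation.pojo.SpaceClass;
        import com.gigaspaces.annotation.pojo.SpaceId;

        // Hypothetical trade entry; a real system would carry full trade details.
        @SpaceClass
        public class Trade {
            private String tradeId;
            private Double amount;

            public Trade() {}   // no-arg constructor required for space classes
            public Trade(String tradeId, Double amount) { this.tradeId = tradeId; this.amount = amount; }

            @SpaceId
            public String getTradeId() { return tradeId; }
            public void setTradeId(String tradeId) { this.tradeId = tradeId; }
            public Double getAmount() { return amount; }
            public void setAmount(Double amount) { this.amount = amount; }
        }

        class SpaceAsCacheAndBus {
            public static void main(String[] args) {
                // Start an embedded space ("/./tradeSpace" is an illustrative name).
                GigaSpace gigaSpace = new GigaSpaceConfigurer(
                        new UrlSpaceConfigurer("/./tradeSpace").space()).gigaSpace();

                // "Caching": write the trade into the space instead of a home-grown cache.
                gigaSpace.write(new Trade("T-1001", 250000.0));

                // "Messaging": another component takes it by template, like consuming a message.
                Trade consumed = gigaSpace.take(new Trade());
                System.out.println("Consumed trade " + consumed.getTradeId());
            }
        }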

    As GigaSpaces makes it easy to build systems that work as well on the Cloud as they do internally, Orbyte have been able to put a demo of their system on the Amazon EC2 Cloud.

    In terms of EC2 instances, the demo is currently running on:

    • 1 x m1.small – GigaSpaces Grid Service Manager
    • 1 x c1.xlarge – GigaSpaces Grid Service Containers
    • 1 x m1.small (Windows) – .NET Web Services

    **Until 5 PM GMT on 23rd April the demo system will be live on the EC2 Cloud for you to have a sneak peek**.

    You can use the connection details below:
    username: test01
    password: 1515

    username: test02
    password: 1515

    username: test03
    password: 1515

  • Building a Global Cloud Solution for Mobile Sales Users with GigaSpaces

    One of the interesting things about working with any middleware solution (and of course GigaSpaces is unique in that the middleware is virtualised) is the projects that come up and the ways in which the technology is applied. One project that is now coming to fruition uses GigaSpaces to build a 52-country data hub for a mobile sales solution.


    Phones and handheld devices today are much more powerful than fully fledged computers were 10 years ago, but when interacting with data and services they face challenges similar to those of traditional web applications, namely performance, latency and scale.


    The users of the system in question are sales representatives in different countries who use handheld devices to place orders, schedule deliveries and so on for retail stock. The handheld devices also hold accounts, sales and other pertinent information from a back-end CRM system. The handhelds operate disconnected from any central system but may connect at intervals during the day over GPRS or wireless links. An initial data download at start of day provides the information required for business conducted that day. During the day, uploads of changes (deltas) may occur, and finally at end of day a full reconciliation and synchronization process updates both the Siebel database and the handhelds.
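
    As an illustration of that flow, here is a minimal, hypothetical sketch of the kind of delta record a handheld might queue up between connections and push when a GPRS or wireless link is available. The class and field names are assumptions, not the project’s actual schema.

        import java.time.Instant;

        // Hypothetical delta record queued on the handheld between connections.
        public class RecordDelta {
            public enum Operation { CREATE, UPDATE, DELETE }

            private final String recordId;      // e.g. an account or order id in the CRM
            private final String fieldName;     // field-level granularity for later conflict rules
            private final String newValue;
            private final Operation operation;
            private final Instant changedAt;    // used when reconciling against central changes

            public RecordDelta(String recordId, String fieldName, String newValue,
                               Operation operation, Instant changedAt) {
                this.recordId = recordId;
                this.fieldName = fieldName;
                this.newValue = newValue;
                this.operation = operation;
                this.changedAt = changedAt;
            }

            public String getRecordId() { return recordId; }
            public String getFieldName() { return fieldName; }
            public String getNewValue() { return newValue; }
            public Operation getOperation() { return operation; }
            public Instant getChangedAt() { return changedAt; }
        }

    At end of day, the accumulated deltas from each handheld would be replayed against the central data hub as part of the reconciliation pass.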


    The solution copes with data conflicts, where both the representative’s data and the central CRM data have been modified, by applying specific business rules at record or field level.
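
    Continuing the sketch above, a field-level conflict rule might look like the following: when both copies of the same field have changed, a pluggable rule decides which value wins. This is illustrative only; the protected field names and the last-writer-wins fallback are assumptions, and the real business rules are specific to each record type.

        import java.util.Set;

        // Illustrative field-level conflict rule, reusing the RecordDelta class sketched earlier.
        public class FieldConflictResolver {
            // Assumed example of fields that the central CRM always owns.
            private static final Set<String> CRM_OWNED_FIELDS = Set.of("creditLimit", "accountStatus");

            // Both deltas are assumed to refer to the same record and field.
            public RecordDelta resolve(RecordDelta fromHandheld, RecordDelta fromCrm) {
                if (CRM_OWNED_FIELDS.contains(fromHandheld.getFieldName())) {
                    return fromCrm;   // central system wins for protected fields
                }
                // Otherwise last-writer-wins at field level.
                return fromHandheld.getChangedAt().isAfter(fromCrm.getChangedAt())
                        ? fromHandheld : fromCrm;
            }
        }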


    The solution also provides buffering/queueing mechanisms to manage the flows into and out of the CRM system in order to accommodate the volume/bandwidth restrictions imposed by the package, whilst still allowing thousands of representatives to initiate synchronization operations at the same time.
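
    A very simplified way to picture that buffering is a bounded queue drained at a fixed rate, so thousands of simultaneous synchronisation requests never exceed what the package’s interfaces can absorb. The capacity and rate figures below are placeholders, not the real system’s settings, and the CRM call itself is stubbed out.

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.Executors;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        // Simplified throttle between the data hub and the CRM Web Services interface.
        public class CrmUploadBuffer {
            private final BlockingQueue<RecordDelta> buffer = new LinkedBlockingQueue<>(100_000);
            private final ScheduledExecutorService drainer = Executors.newSingleThreadScheduledExecutor();

            public void start(int deltasPerSecond) {
                long periodMicros = 1_000_000L / deltasPerSecond;
                drainer.scheduleAtFixedRate(() -> {
                    RecordDelta next = buffer.poll();
                    if (next != null) {
                        sendToCrm(next);   // one call per tick keeps the CRM within its limits
                    }
                }, 0, periodMicros, TimeUnit.MICROSECONDS);
            }

            // Representatives' synchronisations enqueue here; they never hit the CRM directly.
            public boolean enqueue(RecordDelta delta) {
                return buffer.offer(delta);
            }

            private void sendToCrm(RecordDelta delta) {
                // Placeholder for the CRM Web Services call.
            }
        }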


    Information extraction from the CRM is implemented using the CRM’s data API and delivery to the CRM is mediated through the CRM Web Services interface.


    A mobile solution from this particular CRM vendor already exists, and the company in question had actually used it, but it did not suit their needs: it was a relatively thick client, it was siloed across multiple CRM implementations, and it was not loosely coupled.

    The new system had to isolate the handheld software from the CRM. Why? Because the solution needs to be abstracted from the actual Siebel schema implementation to allow for future change, including the possible substitution of the CRM package with an alternative CRM solution. The solution also needed to scale to many thousands of sales representatives without overloading the existing Siebel or back-end systems.


    The high level architecture for this system is outlined below:

    GigaSpaces uniquely provided:


    – A modular, task-based approach to services
    – Flexible interfaces for upload and polling
    – Pluggable business logic for synchronisation
    – Standards-based Web Services for handheld connectivity
    – The ability to synchronise at WAN level
    – An in-memory cloud for low latency and high performance


    Given the current penchant for the Cloud, this type of solution could become much more pervasive for organisations in the future.

  • Performance, Performance, Performance

    Speed, as they say, is relative. It is great tearing along the road in a 180 mph Ferrari, but if you don’t care then you will be happy pootling along at 50 mph. In most of the industries and companies I work with, however, speed is of the essence. Fast is good, faster is better, faster than that is even better…

    Some examples… one of the companies I sold to in late 2007 was a post-trade reconciliation company that used GigaSpaces to build their next-generation trade reconciliation system, replacing their existing C++ system. Why? Because ever-increasing trade volumes meant they had to have an infinitely scalable system for their clients, and their existing system could not cope. They put GigaSpaces XAP through rigorous tests to prove that by doubling the number of CPUs each time they got near-linear scalability. Needless to say, GigaSpaces passed that test and the new system is now being pushed out to their clients. Prior to the GigaSpaces-designed system it took 8 minutes to process 1 million trades. It now takes 42 seconds…
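
    To put those before-and-after figures side by side, the sketch below works out the throughput each one implies. It is nothing more than the back-of-envelope arithmetic behind the numbers quoted above.

        // Back-of-envelope throughput comparison for the figures quoted above.
        public class ReconciliationThroughput {
            public static void main(String[] args) {
                double trades = 1_000_000;
                double beforeSeconds = 8 * 60;                // 8 minutes with the old C++ system
                double afterSeconds = 42;                     // with the GigaSpaces-based system

                double beforeRate = trades / beforeSeconds;   // ~2,083 trades/sec
                double afterRate = trades / afterSeconds;     // ~23,810 trades/sec

                System.out.printf("Before: %.0f trades/sec, after: %.0f trades/sec (~%.1fx faster)%n",
                        beforeRate, afterRate, afterRate / beforeRate);
            }
        }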

    In the first quarter of this year we closed business with a bank in which GigaSpaces is being used to interface with a trading system (a multi-asset-class solution for treasury and capital market participants) to speed up the reading of trade data. Simple trade queries that were taking minutes are now delivered in seconds, and complex trade queries that were taking 30 minutes or more are now delivered in around 45 seconds…

    In an earlier post I outlined a poll I had set up on LinkedIn that asked the question “What is your or your organisation’s single biggest concern about deploying applications to the Cloud?”. I promised to post the results, which you can now see below:

    [LinkedIn poll results chart]

    Surprise, surprise, it is performance. It will come as no surprise that the performance gains experienced by GigaSpaces users in private data centres are also experienced in the Cloud. GigaSpaces for the EC2 cloud delivers minimal latency and maximum performance. Why? The GigaSpaces solution manages data in memory and ensures it is co-located in the same cloud node as the relevant business logic, making for lightning-fast performance.
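
    As a rough illustration of that co-location, business logic can be deployed as an event listener that runs inside the same partition that holds the data it processes, so nothing leaves the node. The sketch below uses the OpenSpaces polling-container annotations; the Order class and the “processing” it performs are assumptions made for this example, not part of any customer’s system.

        import org.openspaces.events.EventDriven;
        import org.openspaces.events.EventTemplate;
        import org.openspaces.events.adapter.SpaceDataEvent;
        import org.openspaces.events.polling.Polling;
        import com.gigaspaces.annotation.pojo.SpaceId;

        // Minimal illustrative data class (assumed, not a real schema).
        class Order {
            private String id;
            private Boolean processed;

            @SpaceId
            public String getId() { return id; }
            public void setId(String id) { this.id = id; }
            public Boolean getProcessed() { return processed; }
            public void setProcessed(Boolean processed) { this.processed = processed; }
        }

        // Colocated listener: deployed with the partition that owns the Order objects,
        // so the matching and processing happen next to the data, in memory.
        @EventDriven
        @Polling
        public class NewOrderProcessor {

            // Template describing which objects this listener is interested in.
            @EventTemplate
            public Order unprocessedOrders() {
                Order template = new Order();
                template.setProcessed(Boolean.FALSE);
                return template;
            }

            // Invoked with each matching Order taken from the local partition.
            @SpaceDataEvent
            public Order process(Order order) {
                order.setProcessed(Boolean.TRUE);   // placeholder for the real business logic
                return order;                       // the returned object is written back to the space
            }
        }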

    Performance… if you need it you know where to look…

  • Speaking at the IET event on Cloud Computing

    I will be speaking at the Institution of Engineering and Technology conference on the 28th April on the topic “Overcoming cloud computing performance impacts and mitigating risk”. The session will include:


    – What performance impacts are there in the Cloud?
    – What best practices are there to mitigate negative impact and risk?
    – Which industries care about performance in the Cloud?
    – Is it real? Who is using the Cloud? Case study examples


    This will be held in London on 28th April and my keynote slot is at 12.10 PM. You can see the full programme for Day one here.


    IET Cloud Computing

  • Another successful Enterprise EC2 deployment

    Last week was a pretty busy week for me. As well as closing out business with a major international bank which is going to be using GigaSpaces to speed up its trading systems (and the results of this are in themselves impressive, speeding up trade queries from 30 minutes to between 15 and 30 seconds), we also had a major Telco go live with their first service hosted on Amazon EC2.

    The client in this case chose to use GigaSpaces on EC2 because:

    (i) Security: The Telco’s services were exposed over a secure SOA layer hosted in an internal data centre. This, coupled with a secure EC2 instance in which all public cloud/private cloud connections were made over SSL, gave them the confidence to consider hosting front-end and new application services outside the corporate data centre.

    (ii) Flexibility and Agility: The difference in timescales between building out the service on EC2 and doing it in-house was vast. The difficulty of procuring machines for the reference environment and for test and development would have meant the service could not have been deployed in the timeframe the business required.

    (iii) Cost: Upfront costs were minimal to zero: just the cost of renting EC2 instances for the reference, test and development environments.

    (iv) Little or no impact on the development process, other than speeding it up! The GigaSpaces cloud tooling meant the development team kept their normal agile approach and used the tools to automate deployment to the cloud.

    (v) Testing Scale: The ability to procure instances immediately to test scale meant that the robustness of the service to handle load could be validated early.

    (vi) Choice: As GigaSpaces can be used inside the corporate data centre or on the cloud (unlike some vendors I could mention), the organisation could choose the most effective way to deploy based on cost and speed.

    If you would like to, you can have a look at the high-level architecture of this application here.

    Also, if you did not get a chance to view it, GigaSpaces held a webinar on “Deploying your existing applications to the Cloud”. I’ve embedded the video below so you can watch it.

  • Twitter getting bigger – Scaling becomes more of an issue

    An informative post on YES Cloud outlines the infrastructure implications of tweeting on Twitter. Twitter stats from tweetstats.com show that tweeters are responsible for nearly 2 million tweets per day. With Twitter’s popularity soaring and the service coming out of its current niche, it is frightening to think how many tweets it is going to have to handle.

    To get a better sense of the infrastructure needed, Prashant Gandhi, the blogger from YES Cloud, did some calculations in the original post as to what Twitter may have to handle:

    Assumptions:
    Average Tweet Size: 100 bytes
    # of Tweets: 10 per tweeter per day
    # of Tweeters: 1 billion worldwide

    Infrastructure Requirements:
    Tweet Rate: 10 billion tweets per day
    Tweet Storage: 100 Gigabytes per day (with 10:1 compression)

    Storage needs appear to be quite manageable also – 100GB/day means ~37TB/year

    Each tweet is essentially an HTTP transaction (request and response). The tweet rate of 10B/day translates to ~115K HTTP transactions/sec for tweets uniformly distributed throughout the day. Assuming that the compute infrastructure (aggregate of web, application, database servers) can process 1000 transactions/sec/server, about 115 servers are needed. If a peak to average ratio of 3:1 is assumed, then about 350 servers are needed. 
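
    Those back-of-envelope figures are easy to reproduce. The sketch below simply restates the same arithmetic, using only the assumptions quoted above:

        // Reproduces the back-of-envelope Twitter sizing quoted above.
        public class TwitterSizing {
            public static void main(String[] args) {
                long tweeters = 1_000_000_000L;           // 1 billion tweeters worldwide
                long tweetsPerTweeterPerDay = 10;
                long bytesPerTweet = 100;
                int compressionRatio = 10;                // 10:1 compression
                int txPerServerPerSec = 1_000;            // assumed server capacity
                int peakToAverage = 3;

                long tweetsPerDay = tweeters * tweetsPerTweeterPerDay;            // 10 billion/day
                double storagePerDayGB = (double) tweetsPerDay * bytesPerTweet
                        / compressionRatio / 1_000_000_000;                       // ~100 GB/day
                double txPerSec = tweetsPerDay / 86_400.0;                        // ~115K tx/sec
                double serversAvg = txPerSec / txPerServerPerSec;                 // ~115 servers
                double serversPeak = serversAvg * peakToAverage;                  // ~350 servers

                System.out.printf("%,d tweets/day, %.0f GB/day, %.0f tx/sec, ~%.0f servers (peak ~%.0f)%n",
                        tweetsPerDay, storagePerDayGB, txPerSec, serversAvg, serversPeak);
            }
        }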

    Now, given that Twitter has more than enough problems scaling its current infrastructure, do you really think it would be able to handle these volumes? Or will it, or has it already, re-architected? (I’m waiting for that call, Twitter execs…)

    GigaSpaces is recognized as a technology that can scale and handle huge loads. It has proved this numerous times in platform tests, in banks handling huge amounts of market data, and on such dynamic platforms as the iPhone launch. I am starting to see more and more dynamic media, retail and innovative web 2.0 platform vendors using GigaSpaces to handle peak loads that they otherwise could not. GigaSpaces is the difference between scaling out your platform in real time and servicing your customers, or not. Simple as that. Throw in the cloud, to handle breakout scaling that the existing data centre cannot, and GigaSpaces becomes even more compelling.

    Oh, and if you want to read more thoughts about scaling Twitter with GigaSpaces, have a look at Nati Shalom’s blog on the subject and also Guy Nirpaz’s presentation on Scaling Twitter.