As a trusted advisor to many enterprise architects and e-commerce business executives, I thought I would share my experience with a technology evaluation trending across the industry's biggest brands, using a simple and hopefully fun-to-read analogy: Disney's adaptation of the age-old children's story, The Three Little Pigs.

If you watched the YouTube video from the link above, its songs will probably be stuck in your head ALL DAY, especially "Who's afraid of the Big Bad Wolf, the Big Bad Wolf, the Big Bad Wolf...". Well, in our adaptation of the story we will assume the Big Bad Wolf is actually a Peak Traffic Event. And who's afraid of those? Retail's e-commerce executives. According to the NRF, an estimated 122 million consumers chose to shop online this past Cyber Monday, which explains why 1-in-3 retailers dedicate 31-50% of their total online marketing budgets to holiday efforts. And as we have learned from headlines and blogs over the years, this tremendous influx of online traffic sometimes spells trouble for e-commerce retailers, such as 2016's outage-headliner Macy's.com.

Why Focus so Narrowly on Inventory and How Does this All Connect?

Well, let's begin by quickly identifying the basic characteristics and needs of an enterprise's Real-Time Inventory System. First, it is Mission-Critical, and therefore must be highly available to ensure business continuity during peak traffic events. Second, it must be Transactional: how else will you safely create or update a product across a complex data model that spans purchase orders, store locations, delivery methods, etc.? Third, it must be Accurate, and therefore must adhere to all of the ACID properties, especially Consistency. The ramifications of inaccurate inventory can impact Customer Experience, Customer Loyalty, Shopping Cart Abandonment, Returns, Refunds, etc. CIOs know that, in a properly aligned IT Department, the Business can limit IT operations but IT operations should never limit the Business.
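To make the "Transactional" and "Accurate" requirements concrete, here is a minimal sketch of the atomic check-and-decrement every inventory system needs. It is a toy in-process model, not any particular vendor's API: a lock stands in for a database transaction, and the class and method names are my own illustrative choices.

```python
import threading

class InventoryError(Exception):
    pass

class Inventory:
    """Toy ACID-style inventory: each reservation is an all-or-nothing
    unit of work, guarded by a lock standing in for a DB transaction."""

    def __init__(self, stock):
        self._stock = dict(stock)
        self._lock = threading.Lock()

    def reserve(self, sku, qty):
        # Atomically check availability and decrement; never oversell.
        with self._lock:
            available = self._stock.get(sku, 0)
            if qty > available:
                raise InventoryError(f"only {available} left of {sku!r}")
            self._stock[sku] = available - qty
            return self._stock[sku]

inv = Inventory({"hammer": 5})
inv.reserve("hammer", 3)        # succeeds, 2 remain
try:
    inv.reserve("hammer", 3)    # rejected atomically: only 2 remain
except InventoryError as e:
    print(e)
```

The point is that the check and the decrement happen as one indivisible step; the rest of this post is really about what it costs different platforms to preserve that guarantee under load.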

Three Little Pigs and the Big Bad Wolf

Anyway, back to the Three Little Pigs. As we all know, the story goes as follows: once upon a time there were Three Little Pigs who each built a house out of a different material to keep safe from the Big Bad Wolf. In our adaptation of this story, instead of houses, the Three Little Pigs have each used a different technology platform to build their own Real-Time Inventory System, in order to withstand the huffing and puffing of our Big Bad Wolf: Cyber Monday.

Three pigs

The 1st Little Pig chose a relational database, which can be tuned for performance, has full transactional capability, and is 100% ACID compliant. Great, now the Little Pig can go about its day, sing its song and not worry about the Big Bad Wolf, right? Well, as it turns out, Cyber Monday is no ordinary wolf. Although an RDBMS can handle regular weekly peaks, on Cyber Monday traffic volumes can increase 200%, 500%, 1000%, or more. For traditional disk-based technologies that is a problem, because inventories are inherently write-intensive systems. Thousands of concurrent transactional updates will lock shared resources, which increases memory consumption to track the locks and eventually overwhelms the database. Boom! Outage! Website unavailable! All-hands-on-deck bridge! Retrospective call! You get the idea; we've all been there.
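A back-of-the-envelope calculation shows why those locks become a problem. If every update to a hot SKU must hold the same row lock, concurrent writers serialize behind one another, so worst-case wait time grows linearly with concurrency. The lock-hold time and timeout below are illustrative assumptions of mine, not benchmarks from the article:

```python
# Pessimistic row locks serialize writes to a hot SKU: the Nth
# concurrent update waits for the N-1 transactions ahead of it.
# LOCK_HOLD_MS and TIMEOUT_MS are illustrative assumptions.

LOCK_HOLD_MS = 5      # time each transaction holds the row lock
TIMEOUT_MS = 1000     # typical client/database timeout budget

def worst_case_wait_ms(concurrent_writers):
    # The last writer in line waits for everyone ahead of it.
    return (concurrent_writers - 1) * LOCK_HOLD_MS

for writers in (10, 100, 1000):   # normal day vs. Cyber Monday multiples
    wait = worst_case_wait_ms(writers)
    status = "OK" if wait < TIMEOUT_MS else "TIMEOUTS / OUTAGE"
    print(f"{writers:>5} writers -> worst wait {wait:>5} ms  [{status}]")
```

With 10 writers the wait is invisible; at a 100x Cyber Monday multiple the same lock discipline blows straight through the timeout budget, which is the cascade the 1st Little Pig experiences.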

The 2nd Little Pig chose a NoSQL database; originally known for handling data variety, these are nowadays perceived more as distributed scaling solutions. In our story we will use Apache Cassandra. This Little Pig read about Cassandra online and figured he could avoid the locking of shared resources by linearly scaling the data out across the cluster. Great, now the Little Pig can go about its day, sing its song and not worry about the Big Bad Wolf, right? Well, as it turns out, Cassandra doesn't support multi-row ACID transactions, which we identified as a basic need because of an Inventory's complex data model. In addition, it can only guarantee availability in exchange for consistency. Yikes. You see, according to the CAP Theorem, in the presence of a network partition (a risk inherent to horizontal scaling) one has to choose between consistency and availability. According to its documentation, Cassandra is eventually consistent by design and can only be configured for strong consistency in exchange for a significant degradation of performance. Why does this matter? Well, if one customer sees you have 5 hammers and at the same time another customer sees you have 3 hammers, what happens if they both buy the max? The answer will depend on your business processes and policies (i.e., refund, apology, waiting list, order cancellation), but the common denominator will certainly be a bad customer experience. As you can imagine, in the face of our Big Bad Wolf, the statistical probability of inventory inconsistencies will dramatically increase. So your inventory data will be available, but at what cost to your business?
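The hammer scenario is easy to play out in a few lines. This is a deliberately simplified simulation of two replicas that have not yet converged; it uses no Cassandra API, just plain dictionaries standing in for replica state:

```python
# Toy simulation of the "5 hammers vs. 3 hammers" scenario: two
# not-yet-converged replicas each serve a different customer, and
# both writes are accepted locally. No Cassandra API involved.

replicas = [{"hammer": 5}, {"hammer": 3}]  # replica 0 hasn't seen 2 earlier sales

def read_stock(replica_id, sku):
    # Each customer may be routed to a different replica.
    return replicas[replica_id][sku]

def buy(replica_id, sku, qty):
    # Write accepted locally; convergence happens "eventually".
    replicas[replica_id][sku] -= qty

# Customer A sees 5, customer B sees 3 -- both buy the max they see.
qty_a = read_stock(0, "hammer")
qty_b = read_stock(1, "hammer")
buy(0, "hammer", qty_a)
buy(1, "hammer", qty_b)

# When the replicas reconcile, the business has accepted orders for
# 8 hammers against a true stock of 3: a 5-hammer oversell.
true_stock = 3
sold = qty_a + qty_b
print(f"sold {sold}, had {true_stock}: oversold by {sold - true_stock}")
```

Both customers got an "available" answer, which is exactly what the AP side of the trade-off promises; the refunds and apologies come later.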

The 3rd Little Pig chose an In-Memory Data Grid; originally known as a distributed, highly available caching technology that can handle write-intensive transactional workloads, these nowadays also include in-memory compute capabilities and handle data variety. In our story we will use the GigaSpaces XAP Platform. This Little Pig went on GigaSpaces' wiki and learned that he too can scale his inventory data out across the cluster's partitions and, in addition, can perform 100% ACID-compliant transactional workloads. That covers two basic needs, so what are we missing? Oh right, Consistency. Well, as it turns out, XAP's shared-nothing architecture ensures strong consistency by maintaining a single primary copy of the data, and availability by synchronously updating its backup partitions. I know what you're thinking: what about the CAP Theorem? Check out the blog by GigaSpaces' CTO Nati Shalom, where he explains the concept and compromise of NoCAP. Great, now the Little Pig can go about its day, sing its song and not worry about the Big Bad Wolf, right? YUP: "With a hey-diddle-diddle, I play on my fiddle, and dance all kinds of jigs..."
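To show why the single-primary, synchronous-backup pattern avoids the 2nd Little Pig's oversell, here is a generic sketch of one partition. This is my own illustration of the pattern described above, not the actual GigaSpaces XAP API; the class and method names are invented for the example:

```python
# Sketch of a single-primary partition with a synchronous backup:
# every write lands on one authoritative copy and is replicated to
# the backup before it is acknowledged. Generic illustration only,
# not the GigaSpaces XAP API.

class Partition:
    def __init__(self, stock):
        self.primary = dict(stock)
        self.backup = dict(stock)

    def reserve(self, sku, qty):
        # One primary copy => every customer sees the same number.
        available = self.primary.get(sku, 0)
        if qty > available:
            return False                      # honest "sold out"
        self.primary[sku] = available - qty
        self.backup[sku] = self.primary[sku]  # synchronous replication
        return True

    def failover(self):
        # Backup promotes to primary; no acknowledged write is lost.
        self.primary = self.backup

p = Partition({"hammer": 5})
assert p.reserve("hammer", 5)      # first customer gets all 5
assert not p.reserve("hammer", 3)  # second customer is refused, not oversold
p.failover()
print(p.primary["hammer"])         # 0 -- state survives the failover
```

Because there is exactly one writable copy per partition, the two customers from the hammer example can never see different counts; the second request is simply rejected, and the synchronous backup keeps that answer durable across a node failure.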

Check out our infographic:

Infographic: Which Little Pig Chose Your Real-Time Inventory Technology?

Final Thoughts

Each Little Pig chose a technology that can act as a functional data platform for their Real-Time Inventory System. If it weren't for the huffing and puffing of the Big Bad Wolf, the choice could simply depend on the enterprise's current architecture, existing skills, and technology roadmap. But with the continued growth of multi-channel shopping via mobile, social, and other online sources, it isn't a surprise that most retail business executives are concerned about the latter half of the annual "Cyber Monday Winners and Losers" blogs.

Relational databases aren't going away, but their role has been transitioning away from handling frontline business transactions and toward governance and systems of record for ETL and analytical workloads. The hype around NoSQL databases is also on the decline due to the limitations of columnar data modeling, limited querying capabilities, and lack of transactionality, all of which conflict with an agile and constantly changing business environment. Don't get me wrong: using a scale-out technology such as Cassandra to build a microservice that looks up "Likes" on Facebook is a great choice, because who cares if I see 5 Likes and you see 6 Likes, with the eventuality of us both seeing the right amount? But for inventory, eventuality has consequences, and the only technology that has proven, for more than 10 years, able to withstand the huffing and puffing of Cyber Monday was the 3rd Little Pig's In-Memory Data Grid.

Interested in reading more about retail and making big decisions as data gets bigger? Read our latest blog post.

Which Little Pig Chose Your Real-Time Inventory Technology?
Allen Terleto
Director, Customer Solutions @ GigaSpaces
Allen Terleto is responsible for direct sales, customer solutions and business development in the Eastern U.S. With over 10 years of engineering experience in middleware technologies, holding positions in both architecture and development, his technical background includes distributed, high-volume and scalable enterprise systems, service-oriented architectures and in-memory compute technologies. Allen has earned multiple degrees, including a Bachelor's in Computer from the NYU Polytechnic School of Engineering and an MBA and a Master's in Information Systems Management from Stevens Institute of Technology, and is currently attending the NYU School of Professional Studies to acquire a Graduate Certificate in Enterprise Risk Management.