Ten years ago, Amazon found that every 100 ms of latency cost them 1% in sales. Google found that an extra 0.5 seconds in search page generation time dropped traffic by 20%. A broker could lose $4 million in revenue per millisecond
With an estimated 50,000 attendees gathered in Las Vegas last week, Amazon rolled out dozens of new features, upgrades, and products at AWS re:Invent. Here’s a quick roundup of news from the annual conference that may matter
A couple of days ago, Amazon Web Services (AWS) suffered a significant outage in its US-EAST-1 region. This was the fifth major outage in that region in the past 18 months. The outage affected leading services such as Reddit, …
Patterns, Guidelines, and Best Practices Revisited: In my previous post I analyzed Amazon’s recent AWS outage and the patterns and best practices that enabled some of the businesses hosted in Amazon’s affected Availability Zones to survive it. The patterns
According to the television series “Terminator: The Sarah Connor Chronicles,” the Skynet computer system began its attack against humanity on April 21, 2011. Luckily that hasn’t happened (or has it?), but on that very day another prominent computing system provided us
Dekel Tankel of GigaSpaces recently spoke to a hip cloud crowd about the risks associated with moving an application to a cloud or grid environment.
Without a GigaSpaces Space-based architecture (what I refer to as a TPC Architecture) applica…
Amazon EC2 now makes failures less noticeable thanks to a new ability to move an IP address from box to box upon failure. This is really cool because it means you can host your webserver as a service within …
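The failover pattern described here, re-pointing a public IP from a failed box to a healthy standby, can be sketched with the modern boto3 SDK (the original post predates it). This is a minimal sketch, not a production failover controller; the allocation and instance IDs in the usage example are hypothetical placeholders.

```python
# Sketch: move an Elastic IP to a standby instance, assuming a boto3 EC2 client.
def failover_elastic_ip(ec2_client, allocation_id, standby_instance_id):
    """Re-associate an Elastic IP with a healthy standby instance.

    AssociateAddress moves the address server-side, so traffic to the
    same public IP reaches the standby once the call completes.
    """
    response = ec2_client.associate_address(
        AllocationId=allocation_id,
        InstanceId=standby_instance_id,
        AllowReassociation=True,  # permit detaching it from the failed box
    )
    return response["AssociationId"]
```

In practice something like a health check would trigger this, with the client created via `boto3.client("ec2")`; because the IP move happens inside EC2, clients keep the same address and DNS never has to change.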