As the Internet of Things (IoT) expands toward 21 billion devices by 2020, the volume of location data, along with all other data types, will only increase. The move toward a world of connected cars, smart devices, and intelligent sensors will create high-value markets focused on spatial analysis and geo-analytics. After all, this is the key premise of IoT: linking devices, humans, and networks together to give consumers more knowledge and control.
Let’s remember that geo-analytics is not just the holy grail of our “Internet of everything” future. Over the past couple of years, location data has already delivered significant insight value across many other business verticals. Take, for example, location-based targeting in advertising: Juniper Research predicts that revenue generated by location-based services will reach $43.3 billion by 2019. This growth will drive the introduction of more intelligent devices, vehicles, and sensors that produce data at a far higher rate than we have ever seen, and nearly all of that data will have a spatial element to it.
Why Geospatial Intelligence Matters
Today’s market leaders are using geospatial analytics to draw location-based insights and optimize workflows, reduce costs, manage risk, and optimize the customer experience. Geospatial data is providing organizations with opportunities to utilize fast data processing and analytics to make highly differentiated, insight-driven decisions. Let’s consider some of the key use cases in which the combination of geospatial data and real-time analytics can provide significant value:
Financial services. Fraud detection is a billion-dollar problem in finance, affecting consumers and banks alike. Geospatial data can help financial institutions detect and prevent fraud by correlating spatial, temporal, and transactional data using predictive analytics and anomaly-detection techniques.
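As a concrete illustration, one common anomaly-detection technique that combines spatial and temporal data is the "impossible travel" check: two transactions on the same card whose implied travel speed is physically implausible. The sketch below is a minimal, self-contained Python version; the coordinates, timestamps, and the 900 km/h threshold are illustrative assumptions, not a production rule.

```python
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_impossible_travel(tx1, tx2, max_speed_kmh=900.0):
    """Flag a pair of card transactions whose implied travel speed exceeds
    what even a commercial flight could cover."""
    dist = haversine_km(tx1["lat"], tx1["lon"], tx2["lat"], tx2["lon"])
    hours = (tx2["time"] - tx1["time"]).total_seconds() / 3600.0
    if hours <= 0:
        return dist > 0  # simultaneous transactions in different places
    return dist / hours > max_speed_kmh

# Example: a swipe in New York followed 30 minutes later by one in London.
ny = {"lat": 40.71, "lon": -74.01, "time": datetime(2016, 5, 1, 12, 0)}
ldn = {"lat": 51.51, "lon": -0.13, "time": datetime(2016, 5, 1, 12, 30)}
print(is_impossible_travel(ny, ldn))  # True
```

In a real pipeline this check would run as a continuous query against the stream of incoming transactions, keyed by card number.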
Retail. In the age of omni-channel retailing and hyper-personalization, geospatial is the missing link that provides customers with a seamless, converged experience across offline channels (in-store purchases, smart tags) and online channels (web, mobile). Adding geospatial analytics to personalization across channel interactions helps retailers target promotional activity to where their customers actually are.
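As a toy example of location-targeted promotion, the sketch below matches a shopper's position against a hypothetical store list and returns an offer only when a store is within a configurable radius. The store names, coordinates, radius, and offer text are all invented for illustration; the equirectangular distance approximation is adequate at city scale.

```python
import math

def approx_km(la1, lo1, la2, lo2):
    """Equirectangular distance approximation -- fine at city scale."""
    kx = 111.320 * math.cos(math.radians((la1 + la2) / 2))  # km per degree of longitude
    ky = 110.574                                            # km per degree of latitude
    return math.hypot((la2 - la1) * ky, (lo2 - lo1) * kx)

# Hypothetical store locations -- illustrative coordinates only.
STORES = [
    ("midtown",  40.754, -73.984),
    ("downtown", 40.713, -74.006),
    ("brooklyn", 40.678, -73.944),
]

def location_targeted_promo(lat, lon, radius_km=2.0):
    """Offer a promotion when a shopper is within radius_km of a store."""
    name, slat, slon = min(STORES, key=lambda s: approx_km(lat, lon, s[1], s[2]))
    if approx_km(lat, lon, slat, slon) <= radius_km:
        return f"10% off today at our {name} store"
    return None

print(location_targeted_promo(40.755, -73.985))  # 10% off today at our midtown store
```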
Insurance. High-quality geospatial data is critical to the insurance industry, as risk is often tied to location. Insurers can run risk simulations against vast amounts of data to arrive at the right risk model, which helps them better estimate potential losses and helps customers purchase the right amount of coverage at the right price.
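To make the idea of location-driven risk modeling concrete, the following Monte Carlo sketch estimates annual expected loss per region from hypothetical hazard rates and severity ranges. All numbers are illustrative, not real actuarial data, and a real model would of course involve far richer geospatial features.

```python
import random

# Hypothetical per-region hazard rates (annual probability of a claim) and
# claim severity ranges -- illustrative figures only.
REGION_RISK = {
    "floodplain": {"p_claim": 0.08, "loss_range": (20_000, 120_000)},
    "coastal":    {"p_claim": 0.05, "loss_range": (10_000, 200_000)},
    "inland":     {"p_claim": 0.01, "loss_range": (5_000, 40_000)},
}

def simulate_expected_loss(region, n_trials=100_000, seed=42):
    """Monte Carlo estimate of annual expected loss for a policy in region."""
    rng = random.Random(seed)
    risk = REGION_RISK[region]
    lo, hi = risk["loss_range"]
    total = 0.0
    for _ in range(n_trials):
        if rng.random() < risk["p_claim"]:  # does a claim occur this year?
            total += rng.uniform(lo, hi)    # draw a claim severity
    return total / n_trials

for region in REGION_RISK:
    print(region, round(simulate_expected_loss(region), 2))
```

The per-region expected loss is what feeds into pricing the coverage correctly for each location.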
Perishable Data, Fast In-Memory Analytics
While the above applications of geospatial intelligence span diverse business verticals, they all share one common trait: the value of the data they generate perishes within seconds, if not sooner. For location information to be valuable, it must be processed immediately. A financial services firm cannot effectively implement location-based fraud detection if its insight-to-action time frame exceeds minutes; it will suffer major financial losses if the data is not captured and acted upon immediately.
The emergence of IoT, mobile, and geo-analytics has driven most database vendors toward enabling geospatial analytics across their products. However, it is quite difficult to implement high-performance geospatial analytics on relational databases that were not built for these workloads. We can’t simply introduce another layer of abstraction to circumvent the impedance mismatch between relational databases and high-throughput geospatial data processing applications. What we need is the ability to ingest, consume, and analyze billions of location data points and seamlessly execute continuous real-time queries against them to generate contextual insights at global scale.
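To illustrate what a continuous query over streaming location data might look like, the toy class below buckets incoming events into coarse grid cells and maintains per-cell counts over a sliding time window, so a question like "which area is hottest right now?" can be answered on every ingest. A real system would distribute this across many nodes; the cell size and window length here are arbitrary assumptions.

```python
from collections import deque, defaultdict

class SlidingGeoCounter:
    """Continuously maintained count of location events per coarse grid cell
    over a sliding time window -- a toy stand-in for a continuous query."""
    def __init__(self, window_seconds=60, cell_deg=0.5):
        self.window = window_seconds
        self.cell_deg = cell_deg
        self.events = deque()           # (timestamp, cell) in arrival order
        self.counts = defaultdict(int)  # cell -> events currently in window

    def _cell(self, lat, lon):
        return (int(lat // self.cell_deg), int(lon // self.cell_deg))

    def ingest(self, ts, lat, lon):
        cell = self._cell(lat, lon)
        self.events.append((ts, cell))
        self.counts[cell] += 1
        # Expire events that have fallen out of the window.
        while self.events and self.events[0][0] <= ts - self.window:
            _, old = self.events.popleft()
            self.counts[old] -= 1
            if self.counts[old] == 0:
                del self.counts[old]

    def hottest(self):
        """The (cell, count) pair with the most events in the window."""
        return max(self.counts.items(), key=lambda kv: kv[1]) if self.counts else None

c = SlidingGeoCounter(window_seconds=60, cell_deg=1.0)
c.ingest(0, 40.7, -73.9)
c.ingest(10, 40.8, -73.8)
c.ingest(20, 51.5, -0.1)
print(c.hottest())  # ((40, -74), 2)
```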
In-memory computing is creating an opportunity for organizations to be highly competitive in processing fast data and streaming analytics at scale, while reducing the cost to design, build, deploy, and support complex data pipelines. In particular, in-memory computing can fundamentally change the way geospatial intelligence is processed and consumed by providing continuously updated, low-latency insight into everything happening in the field. In a time of increasing data generation in the physical world, in-memory computing directly addresses the need for low-latency communication between the data center and the edge. Because it consolidates the storage of data in RAM with the processing of business logic in the same runtime space, geospatial analysis with feedback to edge devices, at very high throughput, can be accomplished in milliseconds rather than the minutes or hours it traditionally takes.
Some features that in-memory data grids lend to geospatial analysis are as follows:
Data storage. In-memory data grids use a graph-driven data model, which is better suited to geospatial data than the relational model.
Performance. Memory is about 58,000 times faster than disk, so as geographic data is generated, insights can be extracted at millisecond and microsecond latencies.
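To make the performance point concrete, a simple in-memory spatial hash illustrates why RAM-resident structures can answer neighborhood queries at such low latency: points are bucketed into fixed-size grid cells, so a bounding-box query inspects only a handful of buckets instead of scanning every point. This is a minimal sketch under that assumption, not how any particular data grid actually implements its spatial index.

```python
from collections import defaultdict

class GridIndex:
    """Minimal in-memory spatial hash: points are bucketed into fixed-size
    grid cells so a neighborhood query touches only a few buckets."""
    def __init__(self, cell_deg=0.1):
        self.cell_deg = cell_deg
        self.buckets = defaultdict(list)  # (cx, cy) -> [(id, lat, lon)]

    def _cell(self, lat, lon):
        return (int(lat // self.cell_deg), int(lon // self.cell_deg))

    def insert(self, point_id, lat, lon):
        self.buckets[self._cell(lat, lon)].append((point_id, lat, lon))

    def query_box(self, lat_min, lon_min, lat_max, lon_max):
        """Return ids of points inside the bounding box."""
        c_lo = self._cell(lat_min, lon_min)
        c_hi = self._cell(lat_max, lon_max)
        hits = []
        for cx in range(c_lo[0], c_hi[0] + 1):
            for cy in range(c_lo[1], c_hi[1] + 1):
                for pid, lat, lon in self.buckets.get((cx, cy), []):
                    if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
                        hits.append(pid)
        return hits

idx = GridIndex(cell_deg=0.1)
idx.insert("a", 40.75, -73.99)
idx.insert("b", 40.76, -73.98)
idx.insert("c", 41.50, -73.00)
print(idx.query_box(40.70, -74.05, 40.80, -73.95))  # ['a', 'b']
```

Because the buckets live entirely in memory, each query is a few dictionary lookups rather than a disk scan.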
While having infrastructure that can capably handle all of the data generated in a full-fledged geospatial scenario is crucial, it is equally important to analyze that data at millisecond latencies and turn it into something of value to both the consumer and the organization. Here, in-memory geospatial analysis (geo-fencing, predictive modeling, equipment tracking) can connect insights to workflows and actions that trigger business outcomes, alerts, and machine-to-machine automation. In-memory computing offers a powerful new paradigm for analyzing massive streams of geospatial data at millisecond scale, making it a natural fit for geospatial analytics. This is true just-in-time geospatial intelligence.
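As a sketch of how geo-fencing can drive that kind of machine-to-machine automation, the code below tracks devices against a polygonal fence and emits "enter"/"exit" events on each position update, using a standard ray-casting point-in-polygon test. The fence coordinates and device id in the usage example are hypothetical.

```python
def point_in_polygon(lat, lon, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (lat, lon)
    vertices in order."""
    inside = False
    n = len(polygon)
    for i in range(n):
        la1, lo1 = polygon[i]
        la2, lo2 = polygon[(i + 1) % n]
        if (lo1 > lon) != (lo2 > lon):
            # Latitude at which this edge crosses the point's longitude.
            cross = la1 + (lon - lo1) * (la2 - la1) / (lo2 - lo1)
            if cross > lat:
                inside = not inside
    return inside

class Geofence:
    """Emits 'enter'/'exit' events as a tracked device's position updates."""
    def __init__(self, polygon):
        self.polygon = polygon
        self.inside = {}  # device_id -> last known inside/outside state

    def update(self, device_id, lat, lon):
        now_inside = point_in_polygon(lat, lon, self.polygon)
        was_inside = self.inside.get(device_id, False)
        self.inside[device_id] = now_inside
        if now_inside and not was_inside:
            return "enter"
        if was_inside and not now_inside:
            return "exit"
        return None  # no state change, no event

fence = Geofence([(0, 0), (0, 10), (10, 10), (10, 0)])
print(fence.update("truck1", 5, 5))    # enter
print(fence.update("truck1", 20, 20))  # exit
```

In an in-memory data grid, the `update` path would run where the device state lives, so an alert or downstream action fires within milliseconds of the position report.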