The Next Generation of Environmental Protection Demands a New Database

A new adaptive database strategy is necessary to address the challenges of managing vast and varied climate and weather data.

When it comes to achieving a comprehensive understanding of global climate change and weather patterns, data scientists have a wide variety of enormous data sets at their disposal: everything from atmospheric carbon dioxide concentrations and rising sea levels to increasing temperatures and the frequency of natural disasters.

This embarrassment of data riches is, however, a double-edged sword: it leaves an immense amount of data to store and manage, more than traditional data processing software can handle. Today’s climate and weather data sets embody the “three V’s” of Big Data: volume (the sheer amount of data to be stored); velocity (torrents of real-time data flowing into the organization at unprecedented speeds that must be handled in a timely manner); and variety (data arrives in all kinds of formats).

One key reason most modern Big Data infrastructures are not ideal for storing large amounts of climate and weather data is that these technologies evolved to address the specific needs and challenges of particular research and knowledge domains, ultimately leading to silos. Now imagine going from spreadsheets to a sophisticated 3D model in which each layer holds and presents information on a specific meteorological variable, allowing analysts to correlate all of this information to uncover hidden patterns and conduct what-if analyses. In this scenario, what does the supporting database strategy need to look like?

  • Data is stored in a common, shareable and efficient format that can be analyzed in aggregate in the cloud. Weather data is Big Data: heterogeneous, multidimensional and not ideally represented in a visual tabular form. Tables might look great, but that doesn’t mean the information is being presented in a way that can be easily understood and analyzed. Rather, data visualizations should organize and present data in a coherent manner so that the audience can quickly and intuitively make sense of what’s going on and take action. As data scientists collect increasing volumes of data, these shared visualizations become increasingly vital for identifying and communicating real-time, actionable insights.

A better approach is to store weather and climate data as dense and sparse multidimensional arrays, a tailored structure that handles the complexities of this data very efficiently, with each layer holding information about a specific meteorological variable. Instead of building a new data system every time data needs change, organizations can build and rely on a single database that can store, govern and process all of this data, as well as any data type that may emerge in the future. This is called an adaptive database.
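To make the idea concrete, here is a minimal sketch of such a layout using the TileDB Python API, one example of an engine built around dense and sparse multidimensional arrays. The grid resolution, dimension names and variables are illustrative assumptions, not a prescription.

    import numpy as np
    import tiledb

    URI = "weather_cube"  # illustrative local path; a cloud object-store URI works the same way

    # A dense 3D cube: one year of daily readings on a 1-degree global grid.
    dom = tiledb.Domain(
        tiledb.Dim(name="day", domain=(0, 364), tile=30, dtype=np.int32),
        tiledb.Dim(name="lat", domain=(0, 179), tile=45, dtype=np.int32),
        tiledb.Dim(name="lon", domain=(0, 359), tile=45, dtype=np.int32),
    )

    # Each attribute acts as a layer holding one meteorological variable.
    schema = tiledb.ArraySchema(
        domain=dom,
        sparse=False,
        attrs=[
            tiledb.Attr(name="temperature", dtype=np.float32),
            tiledb.Attr(name="precipitation", dtype=np.float32),
        ],
    )
    tiledb.Array.create(URI, schema)

    # Write synthetic values for every variable in one shot.
    with tiledb.open(URI, mode="w") as arr:
        arr[:] = {
            "temperature": np.random.rand(365, 180, 360).astype(np.float32),
            "precipitation": np.random.rand(365, 180, 360).astype(np.float32),
        }

Sparse arrays follow the same schema model (with sparse=True), which suits irregular observations such as weather-station readings.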

  • Predictive models can be easily scaled against this data, satisfying the hunger of AI models and data scientists for data. An adaptive database offers many benefits, including enhanced analysis, faster retrieval (pulling years of data in seconds) and improved compatibility with mathematical and statistical operations. AI models rely on large amounts of historical training data, which makes an adaptive database ideal; a sketch of this kind of retrieval follows this list.
  • It is equipped with scalable, serverless computational engines to keep data scientists focused on their core work of protecting the environment while driving huge total cost of ownership (TCO) savings. Weather data processing is a continuous challenge, mirroring the chaotic interactions in the atmosphere itself. An adaptive database can and must perform as efficiently as purpose-built databases, quickly and accurately processing the large volumes of weather forecasts that arrive continuously. Multidimensional arrays can be the right choice not just for their universality but also for performance.
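As an illustration of that retrieval pattern, the following sketch slices a block of historical temperatures out of the hypothetical weather_cube array defined earlier and reshapes it for model training. The array name, variable and slice bounds are all assumptions carried over from the earlier example, not a prescribed workflow.

    import numpy as np
    import tiledb

    URI = "weather_cube"  # the illustrative array created in the earlier sketch

    # Read a contiguous block of history for one variable. The engine fetches
    # only the tiles that overlap this slice, not the entire dataset.
    with tiledb.open(URI, mode="r") as arr:
        window = arr[0:365, 60:120, 0:360]   # all days, mid-latitudes, all longitudes
        temps = window["temperature"]        # NumPy array with shape (365, 60, 360)

    # Flatten into (samples, features) so a scikit-learn-style model can consume it.
    X = temps.reshape(temps.shape[0], -1)
    y = X.mean(axis=1)                       # placeholder target, purely for illustration
    print(X.shape, y.shape)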

By combining an adaptive database with advances in serverless computing, data scientists can focus on doing research and meteorologists can focus on understanding current weather patterns and plotting weather forecasts to gain insights, rather than worrying about whether computing power is available to them. Going a step further, compressing array data can help contribute to a reduced carbon footprint. Organizations can also process only the slices of array data they need, reducing the footprint further.
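The compression point can be sketched in the same vein: give each attribute a filter pipeline when defining the schema, so data is stored compressed and only the tiles a query touches are ever decompressed. The filter choice and level below are illustrative assumptions.

    import numpy as np
    import tiledb

    # Compress each variable at write time; readers decompress only the tiles they touch.
    filters = tiledb.FilterList([tiledb.ZstdFilter(level=5)])

    dom = tiledb.Domain(
        tiledb.Dim(name="day", domain=(0, 364), tile=30, dtype=np.int32),
        tiledb.Dim(name="lat", domain=(0, 179), tile=45, dtype=np.int32),
        tiledb.Dim(name="lon", domain=(0, 359), tile=45, dtype=np.int32),
    )
    schema = tiledb.ArraySchema(
        domain=dom,
        sparse=False,
        attrs=[tiledb.Attr(name="temperature", dtype=np.float32, filters=filters)],
    )
    tiledb.Array.create("weather_cube_compressed", schema)

Because tiles are compressed independently, reading a narrow slice of time or space touches only a small fraction of the stored bytes, which is what keeps both cost and compute footprint down.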

Beyond supporting data diversity, serving as an ideal foundation for AI and delivering exceptional performance, an adaptive database is important for other reasons as well.

First, it supports vendor consolidation. When an organization pays for multiple data systems, there is often overlapping functionality that ends up being paid for twice. Maintaining many different data systems also costs teams time, both in learning to operate each one and in wrangling data whenever insights require combining disparate sources.

Second, an adaptive database that houses all data enables holistic governance. Even if organizations are happy with their many vendors, each data system will likely have its own access controls and logging capabilities. If an organization needs to enforce centralized governance over all its data, it must therefore build that layer in-house, costing more money and time.

The good news is that amazing things can be done with all this climate and weather data, like superimposing and synthesizing it to better understand areas and environments where climate change may be occurring, and applying artificial intelligence (AI) to predict outcomes and events.

The challenge lies in harnessing and integrating these diverse climate and weather data types, which often reside separately in bespoke database solutions. Only by doing so can data scientists achieve the most holistic, fully informed view possible of climate change and weather trends and their impact on populations. The next generation of environmental protection is calling for a new database strategy: one that not only supports all data formats, including those that have yet to emerge, but is also an ideal foundation for machine learning and predictive analytics applications, and is highly performant.

About the Author

Norman Barker is the VP of Geospatial at TileDB. Prior to joining TileDB, Norman focused on spatial indexing and image processing, and held engineering positions at Cloudant, IBM and Mapbox.