Simplifying Data Management in the Cloud
For data to be truly meaningful, quality assurance processes that ensure information is accurate, up to date and relevant must be enforced. When it comes to data stored in the cloud, however, this oversight is tenuous at best. According to a recent report by Ventana Research, only 15 percent of organizations have completed a quality initiative for their cloud data, and the figure drops to five percent for master data management.
Considering that data is used for everything from strategic decision-making to the evolution of core products and service offerings, inaccurate information can have a negative ripple effect on an organization’s entire business. As companies increasingly virtualize their applications, leading to growing volumes of data stored in the cloud, this lack of quality management further threatens to undermine the overall utility of the stored information.
While the cloud doesn’t pose an immediate threat to data quality, moving and integrating data between cloud and on-premises systems is likely to give rise to a number of quality control issues, because companies are unable to extend their data quality management processes to cloud applications. And while many cloud application providers offer service level agreements (SLAs) that outline their data management practices, an organization moving to the cloud is essentially trading data oversight for flexibility and elasticity.
Let’s consider, for instance, a hypothetical bank that collects and manages large amounts of customer data, and has made considerable investments in building a reliable master database and ensuring its data quality. What would happen if the bank introduced a cloud-based campaign management and execution platform to automate and enhance its direct marketing? Simply creating a database for such a highly involved function is a serious project in and of itself, but maintaining the quality of the in-house data will now require ongoing and elaborate integration with the cloud to keep the structure and the unique identifiers of the core database intact. As a result of the new implementation, the bank would likely be facing significant data duplication, serious integration overhead and related data quality risks, not to mention considerably more day-to-day work to keep things running smoothly.
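To make the duplication concrete, here is a minimal Python sketch of the naive export approach; the platform stub, record keys and field names are purely illustrative assumptions, not any vendor’s actual API.

    # Minimal sketch of the naive export approach (all names hypothetical).
    # The cloud platform assigns its own IDs, so the bank ends up with a
    # second copy of every record plus a crosswalk between key systems.
    import itertools

    class CampaignPlatformStub:
        """Stand-in for a cloud campaign platform's contact API."""
        def __init__(self):
            self.contacts = {}
            self._ids = itertools.count(1)

        def create_contact(self, record):
            cloud_id = "CRM-{:05d}".format(next(self._ids))  # platform-owned identifier
            self.contacts[cloud_id] = dict(record)           # duplicate copy now lives in the cloud
            return cloud_id

    master_customers = {
        "CUST-001": {"name": "A. Jones", "email": "a.jones@example.com"},
        "CUST-002": {"name": "B. Patel", "email": "b.patel@example.com"},
    }

    platform = CampaignPlatformStub()
    id_crosswalk = {}  # master key -> cloud key; must be kept in sync indefinitely
    for master_id, record in master_customers.items():
        id_crosswalk[master_id] = platform.create_contact(record)

    # Every later change to a customer now has to be reconciled in two places.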
But what if the bank had the option of keeping the data on-premises, where it is governed by internal data quality and management policies, while maintaining access to the business logic in the cloud? If the computing process could be “re-mapped,” the bank could retain control of the data, and the cloud application would simply “borrow” the data as needed (and write the appropriate data back). As a result, the business would be able to reap the benefits of a SaaS model while escaping the data quality management problem. Such a solution would effectively extend quality management practices to cloud data, thus eliminating the conundrum.
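A rough Python sketch of that “borrow and write back” pattern might look like the following; the service class, method names and allowed fields are illustrative assumptions rather than a particular product’s interface.

    # Illustrative sketch of the "borrow and write back" pattern
    # (class and method names are assumptions, not a specific product's API).
    class OnPremiseDataService:
        """Governed gateway in front of the bank's master database."""
        def __init__(self, master_db):
            self.master_db = master_db

        def borrow(self, customer_id):
            # Hand out a copy on demand; the master record never leaves governance.
            return dict(self.master_db[customer_id])

        def write_back(self, customer_id, updates):
            # Apply results through the same quality rules as any internal update.
            allowed = {"last_campaign", "opt_out"}  # fields the cloud app may touch
            clean = {k: v for k, v in updates.items() if k in allowed}
            self.master_db[customer_id].update(clean)

    def run_cloud_campaign(data_service, customer_id):
        """Stand-in for the cloud application's business logic."""
        profile = data_service.borrow(customer_id)  # data is borrowed, not copied wholesale
        # ... cloud-side campaign logic would use the profile here ...
        data_service.write_back(customer_id, {"last_campaign": "spring-refi"})

    master_db = {"CUST-001": {"name": "A. Jones", "opt_out": False}}
    run_cloud_campaign(OnPremiseDataService(master_db), "CUST-001")

In this arrangement the master record is read on demand and only permitted fields can be written back, so the bank’s internal quality policies still apply to every change the cloud application makes.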
One may think of it as a “cloud-to-earth” connector: it combines reliable communication across an unstable network and robust queuing mechanisms on both ends with a separate data access layer that uses a logical representation of the physical data structures to provide comprehensive mapping between the cloud and the on-premises data.
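A simplified Python sketch of such a connector follows, with in-memory queues standing in for the reliable transport and a small dictionary standing in for the logical-to-physical mapping; every structure here is an assumption made for illustration.

    # Simplified sketch of a "cloud-to-earth" connector (all structures hypothetical):
    # queues on both ends absorb network instability, while a mapping layer
    # translates between the cloud's logical view and the physical schema.
    import queue

    cloud_outbox = queue.Queue()   # requests queued on the cloud side
    premise_inbox = queue.Queue()  # requests queued on-premises after transfer

    # Logical field names exposed to the cloud -> physical columns on-premises.
    LOGICAL_TO_PHYSICAL = {
        "customer_name": "CUST_MASTER.FULL_NAME",
        "email": "CUST_MASTER.EMAIL_ADDR",
    }

    def transfer(outbox, inbox):
        """Stand-in for reliable delivery; retries and acknowledgements are omitted."""
        while not outbox.empty():
            inbox.put(outbox.get())

    def to_physical(logical_record):
        """Data access layer: rewrite a logical record against the physical structures."""
        return {LOGICAL_TO_PHYSICAL[field]: value for field, value in logical_record.items()}

    cloud_outbox.put({"customer_name": "A. Jones", "email": "a.jones@example.com"})
    transfer(cloud_outbox, premise_inbox)
    physical_update = to_physical(premise_inbox.get())
    # physical_update now addresses the on-premises columns, e.g. CUST_MASTER.FULL_NAME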