Q&A with Andres Rodriguez of Nasuni - Page 3
Because the architecture of the cloud is fundamentally different from that of a traditional IT infrastructure, data stored in the cloud is inherently better protected than it would be in a traditional IT infrastructure and does not need to be backed up elsewhere – even in light of recent high-profile outages – so long as the right service layers are in place.
For example, through the use of snapshots that capture the entire file system and patent-pending intelligent caching technology, the Nasuni Filer takes advantage of this aspect of the cloud to ensure that deleted file server data can be recovered instantly from any point in time. These snapshots are small, because Nasuni de-duplicates and compresses them, so customers can retain an unlimited number of snapshots without greatly increasing the amount of cloud storage used. If a file is accidentally deleted, it can almost always be recovered nearly instantaneously from a snapshot that captured it. If a customer wants to delete files or data permanently, Nasuni enables the creation of time- or volume-based policies that delete those snapshots from the cloud. So long as a snapshot containing a file still exists, that file can be recovered; but once the last such snapshot is deleted, the file is irrevocably gone.
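The retention model described above can be sketched in a few lines. This is a hypothetical illustration, not Nasuni's implementation: the `SnapshotStore` class, its method names, and the file paths are all invented for the example. It shows the two rules from the interview: a file remains recoverable while any retained snapshot contains it, and a time-based policy that deletes the last such snapshot makes the file irrecoverable.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: each snapshot is an immutable record of the
# file system at a point in time.
class SnapshotStore:
    def __init__(self):
        self.snapshots = []  # list of (timestamp, frozenset of file paths)

    def take_snapshot(self, now, files):
        self.snapshots.append((now, frozenset(files)))

    def recover(self, path):
        # A file is recoverable as long as ANY retained snapshot
        # contains it; scan from newest to oldest.
        for ts, files in reversed(self.snapshots):
            if path in files:
                return ts  # timestamp of a snapshot that can restore it
        return None  # last snapshot holding the file is gone

    def apply_retention(self, now, max_age):
        # Time-based policy: permanently drop snapshots older than max_age.
        self.snapshots = [(ts, f) for ts, f in self.snapshots
                          if now - ts <= max_age]

store = SnapshotStore()
t0 = datetime(2011, 1, 1)
store.take_snapshot(t0, {"/docs/report.doc"})
store.take_snapshot(t0 + timedelta(days=10), set())  # file deleted locally

# Still recoverable from the older snapshot.
assert store.recover("/docs/report.doc") == t0

# A 7-day retention policy evaluated 30 days later drops both snapshots,
# and with them the only copy of the deleted file.
store.apply_retention(t0 + timedelta(days=30), timedelta(days=7))
assert store.recover("/docs/report.doc") is None
```

The key design point the example captures is that deletion is a policy decision about snapshots, not about individual files.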
During the recent Amazon cloud service outage, Nasuni customers were not affected at all, because the Nasuni Filer’s use of Amazon’s cloud is tied solely to the Amazon Simple Storage Service (S3), not to the EBS or EC2 services that went down. S3 was never affected. Since its inception it has been rock solid, with excellent performance, vanishingly rare downtime, and well-understood scaling characteristics. From our historical records, S3 appears to be nothing but a solid, reliable service on which to store your data.
Had S3, or any other cloud service provider (CSP) that Nasuni works with, gone down, the Nasuni Filer’s intelligent cache would have kept customers up and running. The cache keeps active data (your “working set”) local, so even during an outage on the CSP side customers can continue to add new data and edit data within the cache. When S3 or the other CSP comes back online, or the network issue is resolved, the Nasuni Filer intelligently pushes the changed data to the cloud via its normal snapshot mechanism, once again ensuring customer data is fully protected.
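The outage behavior described above is a classic write-back cache. The sketch below is a simplified illustration under stated assumptions, not Nasuni's actual code: the `CachedFiler` class, a dict standing in for object storage, and the file names are all invented. It shows writes landing in the local cache during an outage and the accumulated changes being pushed once the provider is reachable again.

```python
# Hypothetical sketch: a write-back cache that keeps accepting edits while
# the cloud provider is unreachable, then pushes the changes on reconnect.
class CachedFiler:
    def __init__(self, cloud):
        self.cloud = cloud      # dict standing in for cloud object storage
        self.cache = {}         # local working set
        self.dirty = set()      # paths changed since the last push
        self.online = True

    def write(self, path, data):
        # Writes always land in the local cache, whether online or not.
        self.cache[path] = data
        self.dirty.add(path)

    def push_snapshot(self):
        # On reconnect, push only the changed data to the cloud.
        if not self.online:
            return False
        for path in self.dirty:
            self.cloud[path] = self.cache[path]
        self.dirty.clear()
        return True

cloud = {}
filer = CachedFiler(cloud)

filer.online = False                 # simulated provider outage
filer.write("/q2/plan.xls", b"v1")   # users keep working from the cache
assert filer.push_snapshot() is False
assert "/q2/plan.xls" not in cloud   # change is held locally for now

filer.online = True                  # outage ends
assert filer.push_snapshot() is True
assert cloud["/q2/plan.xls"] == b"v1"
```

Tracking a dirty set is what lets the push send only changed data rather than the whole working set, mirroring the snapshot mechanism described in the interview.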
VSM: You’re saying that when an organization uses the cloud for storage, they don’t need backup?
AR: That is basically correct, though cloud storage alone is not sufficient. The Nasuni Filer combines snapshots with the reliability of the cloud to eliminate the need for backup. When today’s leading cloud service providers were first establishing themselves more than a decade ago, they focused on constructing an infrastructure with tremendous scalability, cost, and speed-of-deployment benefits to support the storage and service demands of their large web properties, rather than on underlying security. We’re talking about tens of thousands of servers distributed throughout the world, networked to provide 100 percent availability so that data and services are always available to consumers.
In essence, the entire system is built around the idea that data can be protected by making exact copies of it and housing those copies in multiple datacenters distributed around the globe.