White Paper | August 14, 2012

Don't Get Duped By Dedupe

Source: Unitrends

This white paper from Unitrends covers the major approaches to deduplication and the strengths and weaknesses of each, and introduces a different approach, termed Adaptive Deduplication. Adaptive Deduplication delivers the advantages of deduplication without the capital expense of hardware-based deduplication appliances, yet with better performance and manageability than software solutions.

Compression and deduplication are two data reduction techniques that reduce the amount of data needed to represent a larger data set. Encryption, physical storage density, and storage pricing trends also have a major influence on data reduction.

Compression is the process of encoding data using fewer bits than the original representation. There are two types of compression: lossless and lossy. With lossless compression you can recover every bit of the original data; with lossy compression some of the original data is discarded during compression and cannot be recovered. This paper discusses only the lossless form of data compression.
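The defining property of lossless compression described above can be illustrated with a short sketch. This example uses Python's standard-library `zlib` module (not something referenced in the paper) purely to show that a compress/decompress round trip recovers the original bit for bit:

```python
import zlib

# Lossless round trip: the original bytes are recovered exactly.
original = b"Backups often contain long runs of repeated data. " * 20

compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original  # every bit of the original is recovered
print(len(original), "->", len(compressed))  # compressed form is smaller
```

A lossy codec (such as JPEG for images) would trade exact recovery for a smaller output, which is why it is out of scope here.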

Lossless data compression typically exploits the statistical redundancy of the underlying data to represent the original data more concisely, while retaining the ability to fully and accurately reconstitute that data when it is later uncompressed. Statistical redundancy exists because almost all real-world data isn't random but instead has specific underlying patterns.
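The dependence on statistical redundancy can be seen by compressing two equally sized inputs, one patterned and one random. This is a sketch using Python's `zlib` for illustration, not a tool named by the paper:

```python
import os
import zlib

# Highly redundant data compresses well; random data has no
# statistical redundancy for the compressor to exploit.
patterned = b"ABCD" * 4096       # 16 KiB of a repeating pattern
random_data = os.urandom(16384)  # 16 KiB of pseudo-random bytes

def ratio(data: bytes) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data)) / len(data)

print(f"patterned: {ratio(patterned):.3f}")    # far below 1.0
print(f"random:    {ratio(random_data):.3f}")  # near (or above) 1.0
```

The patterned input shrinks dramatically, while the random input actually grows slightly due to format overhead, which is why compression ratios quoted for backup data depend heavily on the data's redundancy.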
