10 Guidelines For Enterprise Systems Optimization
By Manny Salinas, CMO, Diskeeper Corp
Increasingly sophisticated software and hardware are being used in modern storage environments — from disk-level technologies to massively scalable network storage facilities. There are a number of myths and misconceptions about the continuing need to manage fragmentation in these environments. From the time nonvolatile storage was introduced decades ago, layers of abstraction have been added between users/applications and those devices. The fact that modern data centers go well beyond the single direct-attached drive and employ advanced storage infrastructures with additional layers of abstraction does not eliminate the need to address fragmentation. To help VARs handle what may be lurking in their enterprise customers' storage arrays, here are 10 guidelines for preventing fragmentation, which reduces disk write activity, saves energy, and cuts operational costs. These best practices bring to light the hidden costs, real impact, and potential "gotchas" of keeping complex enterprise systems performing optimally.
1. Faster System Boot-ups
As more applications are added or profiles expand, boot-up times grow longer and longer. While there are some "instant-on" solutions that boot into a thin software layer or cloud-based application, they don't actually accelerate boot-up and can create issues in a corporate environment. Look for solutions that accelerate full computer start-up and boot the PC directly into the operating system. Placing system "boot" files in the optimum logical sequence and location makes the PC boot faster and with less effort/energy from the disk drive.
2. Environmental Footprint
The proactive prevention of fragmentation is among the most efficient methods to lessen power consumption, optimize system boot times and improve disk access speeds. By also removing any existing fragmentation, read activity becomes more efficient, which reduces power costs. As organizations seek ways to cut back on energy use and trim costs, it is important for VARs to consider the significant positive impact that defragmentation can have in saving precious kilowatts.
3. The End May Not Justify the Means
An automated defragmentation approach eliminates time-consuming weekend defragmentation runs and the possibility that a defragmentation task will be overlooked. Shorter backup times with Diskeeper running also mean less overhead in manpower, as well as in the energy cost of running systems during periods when they would normally be offline, i.e. nights and weekends. VARs should look for solutions with zero system overhead for applications that must run frequently or continuously.
4. Virtual Environments and Fragmentation
Defragmentation plays a major role in aiding the transition to virtual systems as file fragmentation causes the operating system to pass extra and unnecessary I/O traffic to the storage sub-system. As virtual machines (VMs) share a common hardware platform, excessive and unnecessary use of hardware/storage resources by one VM slows not only the requesting VM, but all other VMs on that platform.
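The I/O amplification described above can be illustrated with a toy model (the numbers and the one-request-per-fragment assumption are hypothetical simplifications, not Diskeeper's implementation): each fragment of a file costs the storage subsystem at least one extra read request, so one heavily fragmented VM multiplies the traffic hitting the shared platform.

```python
# Toy model of I/O amplification from fragmentation (illustrative only).
# Assumption: each fragment of a file requires at least one separate read
# request; real storage stacks coalesce some requests, but the trend holds.

def total_requests(fragment_counts):
    """Total read requests to read a set of files, one per fragment."""
    return sum(max(1, f) for f in fragment_counts)

# One VM reading 100 files: all contiguous vs. each split into 40 fragments.
contiguous = total_requests([1] * 100)    # 100 requests
fragmented = total_requests([40] * 100)   # 4000 requests
print(f"contiguous: {contiguous}, fragmented: {fragmented}, "
      f"amplification: {fragmented // contiguous}x")
```

Because the VMs share one storage subsystem, that 40x request amplification is paid not only by the fragmented VM but by every other guest queued behind it.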
5. Rapid Delivery of Essential Data
Placing data that is frequently accessed in the optimal performance region on disks minimizes the amount of disk head movement to collect data files scattered across the disk, reducing wear and tear and power costs while speeding access to frequently used "important" data.
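The head-movement saving can be sketched with a simple seek-distance model (block positions and request counts are made up for illustration; this is not a claim about any specific product's placement algorithm): serving the same request stream from files clustered in one region of the disk requires far less total head travel than serving it from files scattered across the full surface.

```python
# Toy seek-distance model (illustrative only): total head travel to serve a
# request stream, with hot files clustered in one disk region vs. scattered.
import random

def total_seek_distance(positions, requests):
    """Sum of head movement for requests served in order, head starts at 0."""
    head = 0
    travel = 0
    for r in requests:
        travel += abs(positions[r] - head)
        head = positions[r]
    return travel

random.seed(42)
files = list(range(20))
requests = [random.choice(files) for _ in range(1000)]

# Hot files placed within the first 5% of a 1,000,000-block disk...
clustered = {f: random.randint(0, 50_000) for f in files}
# ...vs. the same files scattered anywhere on the disk.
scattered = {f: random.randint(0, 1_000_000) for f in files}

clustered_travel = total_seek_distance(clustered, requests)
scattered_travel = total_seek_distance(scattered, requests)
print(f"clustered: {clustered_travel}, scattered: {scattered_travel}")
```

Less head travel per request means faster access to the "important" data and, on mechanical drives, less wear and power draw.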
6. Compact Free Space
Free space fragmentation is often overlooked and will inevitably increase the likelihood and degree of fragmentation of data and system files. The ability to consolidate space into a small handful of very large segments in order to improve future file-write performance is an important consideration. Consolidating free space prevents file-write performance degradation for future write activity on a given volume. VARs should look for ways to achieve this automatically and consistently, without administrative intervention.
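A minimal sketch of why consolidation matters, using a first-fit allocator over hypothetical extent sizes (this is a generic model, not how any particular file system or defragmenter works): a new file written into scattered free-space holes is born fragmented, while the same file written into one consolidated region lands in a single extent.

```python
# Toy free-space model (illustrative): how many fragments a new file needs
# when allocated first-fit, before vs. after free-space consolidation.

def allocate(free_extents, size):
    """First-fit allocation; returns the number of fragments the file needs."""
    fragments = 0
    remaining = size
    for extent in free_extents:
        if remaining <= 0:
            break
        remaining -= min(extent, remaining)
        fragments += 1
    if remaining > 0:
        raise ValueError("not enough free space")
    return fragments

# 100 MB of free space: 25 scattered 4 MB holes vs. one consolidated region.
scattered = [4] * 25
consolidated = [100]
print(allocate(scattered, 40))      # a 40 MB file spans 10 holes
print(allocate(consolidated, 40))   # the same file fits in 1 extent
```

The file's content is identical either way; only the state of the free space at write time decides whether it starts life in one piece or ten.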
7. Daily Check List and Fragmentation
Rather than allowing files to fragment when written, look for solutions that prevent fragmentation from occurring in the first place by intelligently writing contiguous files to the disk, so system resources are not wasted creating fragmented files. Systems that operate reliably and without complaint (i.e. help desk calls) are less likely to be replaced. They'll be used until new production requirements dictate more powerful hardware, which is a real business reason to upgrade rather than a "break-fix" reactive replacement. This enables site owners to postpone refresh costs and the attendant IT overhead that goes with them.
8. Metadata and System Files Fragmentation Challenges
Seek a solution that offers an effective way to handle crucial system files and metadata files. Some defragmenters offer only online defragmentation modes and cannot solve fragmentation of most metadata files or of system files such as the paging file or hibernation file. A fragmented hibernation file, for example, can dramatically increase the time for a hibernating laptop to return to a usable state, and if free space is not effectively consolidated, the paging file is very likely to fragment when it expands.
9. Solid State Drive (SSD) Optimization
Information stating that SSDs should not be defragmented is based on unproven and incorrect theories related to NAND Flash performance characteristics. The issue with NAND Flash storage is not the medium itself, but rather the software/firmware that controls it. Scientific investigations have clearly shown that as free space fragmentation increases, the write-performance of many SSDs decreases. It is important to implement solutions that can automatically detect and maintain SSD write-performance at peak levels.
10. Centralized Performance Management
Thorough reporting and event alerting on applications running in production environments has become increasingly important. IT departments and service providers are often tasked with meeting particular service level agreements (SLAs) for uptime, performance, etc. This requires greater involvement and knowledge regarding all applications, processes and services that contribute to, or detract from, meeting those defined service obligations. Look for a solution that provides reporting, alerts and centralized management. The ability and responsibility of IT departments in enterprise organizations to control a process or program cannot be overstated.
In short, no matter how or where your customers store data, solving file fragmentation is as vital for peak system performance and reliability as it has ever been. Eliminating fragmentation speeds up boot-up, backups, anti-malware scans, and other system utilities, while preventing system conflicts, hard drive crashes, and data corruption. Reducing disk activity improves performance and reliability while cutting energy costs and IT support requirements — all a bonus to VARs and their customers.
Manny Salinas is Chief Marketing Officer at Diskeeper Corporation, and has been instrumental in positioning the organization as an innovator in performance and reliability technologies. He resides in Los Angeles and is the proud father of two daughters and grandfather of three grandsons.