From necessity to convenience, you'll find a lot to like in the new Microsoft Server OS
By Kevin Noreen, Dell PowerEdge Servers Group
The march toward cloud computing continues to accelerate. Those who do not “adapt and adopt” run the risk of being left behind, technologically and competitively. In an era where technology is expected to drive profitability, you need the versatility, availability and scale that cloud computing promises.
IT market researcher Gartner predicts worldwide cloud services revenue will approach $150 billion by 2014, driven in equal measure by demand for software, infrastructure and platforms. Moving to the cloud requires a top-to-bottom strategy involving every department IT touches, so a cloud deployment demands planning on many fronts at once.
Your timing could not be better to make a cloud move. Microsoft has drawn upon its experience running the Azure cloud service on Windows Server 2008 and has made important changes to the structure of Windows Server 2012, from the Hyper-V® hypervisor to Windows PowerShell®, its task-based command-line shell and scripting language.
While reluctance to disrupt a smooth-running system is understandable, IT departments are facing an important cut-off date. Extended Support, the last phase of Microsoft OS support for Windows Server 2003, is scheduled to end in July 2015. This means no more fixes, no more support calls, nothing. If hackers find a new vulnerability, it will not be fixed.
Microsoft estimates that Windows Server 2003 comprises approximately 80 percent of its server OS installed base, so a large number of businesses will have to make an important strategic decision in the next two years.
The good news is that with Hyper-V enabled in Windows Server 2012, Windows Server 2003 can run in a virtual environment. So even if you don’t get your business-critical applications migrated to run on Windows Server 2012, they will still be able to run in a secure, virtual environment that is isolated from the rest of the system and protected with the advanced failover protection featured in Windows Server 2012.
Built To Scale
Windows Server 2003 was born in the era of dozens of servers in one cold room, or maybe a departmental server in a closet. Windows Server 2008 arrived in the era of hundreds of rack-mounted servers in a big, cold room. Now we are in the era of thousands of blades in a data center the size of a supermarket. Managing all of that manually is just not feasible.
The 80 percent of customers running Server 2003 are using an OS written for single-core, 32-bit processors back in the era of application-dedicated servers, and they are likely running at single-digit utilization levels. Because the hardware and software limitations of the time prevented scaling up, many enterprises developed a bad case of server sprawl as they scaled out, struggling to keep up with space, power and cooling demands.
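The consolidation case those utilization figures make can be sketched with some back-of-the-envelope arithmetic. The 8 percent legacy utilization and 70 percent target below are illustrative assumptions, not figures from the article; real sizing must also account for memory, storage and I/O headroom.

```python
# Rough consolidation sketch: how many lightly loaded physical servers
# could fold onto one virtualization host, assuming (hypothetically)
# 8% average utilization per legacy box and a 70% target on the new host.

def consolidation_ratio(legacy_utilization, target_utilization):
    """Return how many legacy servers one host can absorb.
    CPU-utilization arithmetic only; ignores memory and I/O limits."""
    return int(target_utilization // legacy_utilization)

servers = 100                                # legacy fleet size (example)
per_host = consolidation_ratio(0.08, 0.70)   # 8 legacy servers per host
hosts_needed = -(-servers // per_host)       # ceiling division -> 13 hosts
print(per_host, hosts_needed)
```

Even this simplistic CPU-only view shrinks a 100-server fleet to roughly a dozen hosts, which is the space, power and cooling argument in miniature.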
When combined with the right server hardware, Windows Server 2012 delivers the virtualization capabilities to consolidate the workloads of many physical servers onto a single system, along with the scalability required to grow your business and assure the capacity you may need in the future. Overall, the scalability of Server 2012 is tremendous, and it is an ideal environment for the latest version of Hyper-V. The previous version of Hyper-V limited a VM to 32GB of RAM; now a virtual machine can support up to 512GB of RAM and 2TB of disk storage.
Server 2012 clusters can contain up to 64 nodes, well above the 16-node limit in Server 2008. In addition, they support up to 2,048 virtual processors per host and up to 1,024 active virtual machines per host, with up to 4,000 virtual machines per failover cluster. Hyper-V will also support up to 320 logical processors per hypervisor instance, with 64 virtual processors per VM.
Part of the challenge in responding quickly and effectively to end-user problems and business requests is that you often have to use multiple tools from multiple vendors, making each task time-consuming.
By leveraging infrastructure management technology that integrates seamlessly with the systems management framework you already use for your Microsoft software environments, you can reduce complexity, save time, improve efficiency and control costs, all while enabling greater productivity.
A Faster Network Pipe
Windows Server 2012 offers built-in network virtualization, a capability improved over Server 2008 and entirely new for those coming from Server 2003. Previously, you needed virtual LANs (VLANs) to segment network traffic, and these can be complex and difficult to manage. Windows Server 2012 ships with full network virtualization, so two different tenants can even use the same, non-conflicting IP address range.
This allows for multi-tenancy, a crucial step in virtualization, where dozens and even hundreds of virtual machines can all co-exist on the same physical server and share the same I/O path without colliding or interfering with each other, or inadvertently sharing data or choking other servers.
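One way to picture how two tenants can share a wire while reusing the same addresses is a per-tenant lookup that maps each tenant-scoped "customer" address onto a unique address on the physical fabric. The sketch below is a minimal illustration of that idea in Python; the table contents and names are hypothetical, not Hyper-V's actual data structures.

```python
# Minimal sketch of the idea behind network virtualization: each tenant's
# "customer" address space is mapped, per tenant, onto the shared
# physical ("provider") address space, so two tenants can both use
# 10.0.0.0/24 without colliding. Entries are illustrative only.

mapping = {
    # (tenant_id, customer_ip) -> provider_ip on the physical network
    ("tenant-A", "10.0.0.5"): "192.168.1.11",
    ("tenant-B", "10.0.0.5"): "192.168.1.12",  # same customer IP, no clash
}

def route(tenant_id, customer_ip):
    """Resolve a tenant-scoped address to its physical-network address."""
    return mapping[(tenant_id, customer_ip)]

print(route("tenant-A", "10.0.0.5"))  # 192.168.1.11
print(route("tenant-B", "10.0.0.5"))  # 192.168.1.12
```

Because the tenant ID is part of the key, identical customer addresses resolve to different physical endpoints, which is what makes safe multi-tenancy possible.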
Live migration is considerably improved in Server 2012, both from server to server in the data center and between your private cloud and a public cloud provider. You can move virtual machines seamlessly between a private and a public cloud, for example from your data center to Amazon EC2, with no interruption, using VPN tunneling. To accomplish this, all you need is the IP address of the target location.
You will also be able to perform concurrent live migrations using Server 2012, whereas Server 2008 would only allow one migration at a time. All you need is a network connection between hosts to migrate a VM, and a virtual machine can now be moved without downtime.
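The general technique that makes "no downtime" possible is iterative pre-copy: memory is shipped to the destination while the VM keeps running, then only the pages dirtied during each pass are re-shipped, until the remainder is small enough to send during a brief final pause. The toy simulation below illustrates that shrinking-remainder loop; the dirty rate and threshold are illustrative assumptions, not Hyper-V internals.

```python
# Toy simulation of iterative pre-copy, the general approach behind
# live migration: ship all pages while the VM runs, then re-ship only
# the pages dirtied during each pass until what remains is small
# enough to send during a brief final pause. Numbers are illustrative.

def precopy_rounds(total_pages, dirty_rate=0.10, pause_threshold=100):
    """Return pages copied per round; the last round is the paused copy."""
    rounds, remaining = [], total_pages
    while remaining > pause_threshold:
        rounds.append(remaining)
        remaining = int(remaining * dirty_rate)  # pages dirtied this pass
    rounds.append(remaining)  # final copy with the VM briefly paused
    return rounds

print(precopy_rounds(1_000_000))  # [1000000, 100000, 10000, 1000, 100]
```

Each pass transfers an order of magnitude less data in this model, so the final blackout window shrinks to something users never notice.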
Beyond The Cloud
Windows Server 2012 isn't just a cloud play; it has many other business applications. In a time of ever-shrinking IT budgets, Windows Server 2012 with Microsoft System Center 2012, paired with the right hardware, makes it cheaper to run a lights-out server operation with fewer staff and less hardware, all from a central management location.
One way this is done is with the improved PowerShell 3.0, which has received a significant upgrade. Windows Server 2008 R2 shipped with about 230 cmdlets (powerful mini-commands) for managing the server, while Windows Server 2012 introduces over 2,400. Among the new features in PowerShell is the ability to export commands and functions, so you can write a function once and, if needed, replicate it across thousands of servers.
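The "write once, run against many servers" pattern is language-neutral, so here is a hedged sketch of it in Python using a thread-pool fan-out. The `check_host` function and hostnames are placeholders invented for illustration; in a real Windows Server 2012 shop you would express the same idea with PowerShell remoting rather than this code.

```python
# Language-neutral sketch of the one-to-many management pattern the
# article describes: define one function, then fan it out across a
# fleet of servers. check_host stands in for real remote work, and the
# hostnames are hypothetical examples.

from concurrent.futures import ThreadPoolExecutor

def check_host(host):
    """Placeholder for a remote health check against one server."""
    return f"{host}: ok"

hosts = [f"server{n:03d}" for n in range(1, 6)]  # server001..server005

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(check_host, hosts))

print(results)
```

The point is the shape of the workflow: one definition, one dispatch call, and the tooling handles distribution, which is what makes lights-out administration of thousands of machines tractable.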
Among the many storage improvements in Windows Server 2012 is Offloaded Data Transfer (ODX). This feature saves time and resources by eliminating the need to copy or move data up to host system memory and back when transferring it from one set of storage resources to another. The host issues a command with the appropriate parameters, and the data movement is offloaded to the external storage subsystem, allowing the array itself to shoulder the burden of copying the data to another storage device.
The offload functionality spares server hosts from using CPU cycles to perform Storage Migration copy operations. These operations are now handled on the storage system rather than consuming server and infrastructure resources.
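Conceptually, the offload works by exchanging a small token instead of the data itself: the host asks the array for a token representing the source extent, then hands that token to the destination, and the array performs the bulk copy internally. The mock `StorageArray` class below is an illustrative sketch of that token flow, not any vendor's actual API.

```python
# Sketch of the token-based flow behind offloaded data transfer: the
# host exchanges a small token with the array instead of pumping the
# data through its own memory. StorageArray is a mock for illustration.

import secrets

class StorageArray:
    def __init__(self):
        self.luns = {}       # lun_name -> stored bytes
        self._tokens = {}    # token -> source lun_name

    def read_token(self, lun):
        """Host asks the array for a token representing the source data."""
        token = secrets.token_hex(8)
        self._tokens[token] = lun
        return token

    def write_with_token(self, token, dest_lun):
        """Array copies internally; the data never transits the host."""
        self.luns[dest_lun] = self.luns[self._tokens.pop(token)]

array = StorageArray()
array.luns["lun-src"] = b"virtual disk contents"
t = array.read_token("lun-src")       # only a tiny token crosses the wire
array.write_with_token(t, "lun-dst")  # bulk copy stays inside the array
print(array.luns["lun-dst"])
```

Notice that the host only ever handles the 16-character token; the payload moves entirely within the storage subsystem, which is why server CPU cycles and network bandwidth are spared.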
Windows Server 2012 also introduces the Resilient File System (ReFS), which improves on the legacy New Technology File System (NTFS). ReFS uses a new 64-bit storage methodology for metadata and file data, which allows maximum file sizes of up to 16 exabytes and a maximum volume size of 1 yottabyte, a million exabytes, at a time when the industry has yet to reach even exabyte-scale volumes. It also adds new features for protecting metadata, such as allocate-on-write updates and 64-bit checksums for metadata, which are stored independently to protect integrity.
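For context on those limits, the unit arithmetic (in decimal units, where an exabyte is 10^18 bytes and a yottabyte is 10^24) works out as follows:

```python
# Unit arithmetic behind the ReFS limits quoted above, in decimal
# units: an exabyte is 10**18 bytes and a yottabyte is 10**24, so a
# yottabyte is a million exabytes -- far beyond any deployed volume.

EB = 10**18
YB = 10**24

max_file_eb = 16              # quoted ReFS maximum file size, in EB
max_volume_bytes = 1 * YB     # quoted ReFS maximum volume size

print(max_volume_bytes // EB)  # 1000000 exabytes in a yottabyte
print(max_file_eb * EB)        # 16000000000000000000 bytes per file
```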
When paired with industry-leading hardware, the new capabilities of Windows Server 2012 present the opportunity to consolidate and retire older, power-inefficient infrastructure in favor of the latest highly integrated servers, storage and networking, which use a fraction of the power, require much less space and take advantage of today's innovative technologies to carry your business successfully into the future.
About The Author
Kevin Noreen is responsible for developing Dell's Systems Management strategy and for focusing his team on delivering solutions that achieve Dell's objective of an efficient enterprise by integrating management intelligence into its server platforms.
Kevin is a seasoned veteran and pioneer in clustering technologies. Prior to joining Dell in 1998, Kevin spent 15 years at AT&T in various management roles. For over a decade, he was responsible for developing the company’s strategy for high availability and clustering initiatives and technologies.
Kevin holds a BA in Management Information Systems from the University of Iowa.