Virtualization for Security: Including Sandboxing, Disaster Recovery, High Availability, Forensic Analysis, and Honeypotting

by John Hoopes

Overview

One of the biggest buzzwords in the IT industry for the past few years, virtualization has matured into a practical requirement for many best-practice business scenarios, becoming an invaluable tool for security professionals at companies of every size. In addition to saving time and other resources, virtualization affords unprecedented means for intrusion and malware detection, prevention, recovery, and analysis. Taking a practical approach in a growing market underserved by books, this hands-on title is the first to combine in one place the most important and sought-after uses of virtualization for enhanced security, including sandboxing, disaster recovery and high availability, forensic analysis, and honeypotting.

Virtualization is already gaining buzz and traction in actual usage at an impressive rate: Gartner research indicates that it will be the most significant trend in IT infrastructure and operations over the next four years, and a recent report by IT research firm IDC predicts the virtualization services market will grow from $5.5 billion in 2006 to $11.7 billion in 2011. As adoption grows and virtualization becomes increasingly common even for small and midsize businesses, security becomes a much more serious concern, both in terms of how to secure virtualization itself and how virtualization can serve critical security objectives.

Titles exist, and more are on the way, to fill the need for securing virtualization itself, but security professionals do not yet have a book outlining the many security applications of virtualization that will become increasingly important in their job requirements. This book is the first to fill that need, covering tactics such as isolating a virtual environment on the desktop for application testing, creating virtualized storage solutions for immediate disaster recovery and high availability across a network, migrating physical systems to virtual systems for analysis, and creating complete virtual systems to entice hackers and expose potential threats to actual production systems.

About the Technologies

A sandbox is an isolated environment created to run and test applications that might be a security risk. Recovering a compromised system is as easy as restarting the virtual machine to revert to the point before failure. Employing virtualization on actual production systems, rather than just test environments, yields similar benefits for disaster recovery and high availability. While traditional disaster recovery methods require time-consuming reinstallation of the operating system and applications before restoring data, backing up to a virtual machine makes the recovery process much easier, faster, and more efficient. The virtual machine can be restored to the same physical machine, or to an entirely different machine if the original has experienced irreparable hardware failure. Decreased downtime translates into higher availability of the system and increased productivity in the enterprise.
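The snapshot-and-revert workflow described above can be sketched in a few lines. The following is a minimal illustration only, assuming a KVM/QEMU host managed through the libvirt Python bindings; the guest name "sandbox-vm" and the snapshot name are hypothetical:

    import libvirt

    # Connect to the local hypervisor (assumes a libvirt-managed KVM/QEMU host).
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("sandbox-vm")  # hypothetical sandbox guest

    # Record the known-good state before running untrusted software.
    snapshot_xml = """
    <domainsnapshot>
      <name>clean-baseline</name>
      <description>Known-good state before testing</description>
    </domainsnapshot>
    """
    dom.snapshotCreateXML(snapshot_xml, 0)

    # ... run the suspect application inside the guest ...

    # If the guest is compromised, discard every change by reverting.
    snap = dom.snapshotLookupByName("clean-baseline", 0)
    dom.revertToSnapshot(snap, 0)
    conn.close()

The same pattern scales from a desktop sandbox to disaster recovery: the snapshot captures the machine state, and the revert (or a restore onto different hardware) takes the place of a lengthy reinstall.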

Virtualization has been used for years in the field of forensic analysis, but new tools, techniques, and automation capabilities are making it an increasingly important tool. By means of virtualization, an investigator can create an exact working copy of a physical computer on another machine, including hidden or encrypted partitions, without altering any data, allowing complete access for analysis. The investigator can also take a live 'snapshot' to review or freeze the target computer at any point in time, before an attacker has a chance to cover his tracks or inflict further damage.
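One common way to get from a forensic acquisition to a working virtual copy is to convert the raw disk image (for example, one captured with dd) into a disk format the hypervisor can boot, leaving the original image untouched. The sketch below shells out to qemu-img for the conversion; the file paths are placeholders, and the approach is illustrative rather than a prescribed workflow:

    import subprocess

    raw_image = "evidence/suspect-disk.raw"        # placeholder: raw image acquired with dd
    analysis_disk = "analysis/suspect-disk.qcow2"  # placeholder: disk for the analysis VM

    # Convert the read-only acquisition into a qcow2 disk for a virtual machine.
    # All subsequent analysis writes land in the new file, never in the evidence.
    subprocess.run(
        ["qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw_image, analysis_disk],
        check=True,
    )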

A honeypot is a system that looks and acts like a production environment but is actually a monitored trap, deployed in a network with enough interesting data to attract hackers, but created to log their activity and keep them from causing damage to the actual production environment. A honeypot exposes new threats, tools, and techniques used by hackers before they can be turned against the real systems, allowing security managers to patch the production environment based on the information gathered. Before virtualization became mainstream, setting up a machine or a whole network (a honeynet) for research purposes only was prohibitive in both cost and time. Virtualization makes this technique a realistic, viable approach for companies large and small.
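The log-everything, damage-nothing idea behind a honeypot can be illustrated with a toy low-interaction listener: it accepts connections on a port of interest, records the source address and whatever the client sends, and never acts on the input. This is only a sketch with arbitrary port and log-file choices, not a replacement for a full honeypot framework running inside an isolated virtual machine:

    import socket
    from datetime import datetime, timezone

    LISTEN_PORT = 2222          # arbitrary port chosen to attract SSH-style probes
    LOG_FILE = "honeypot.log"   # placeholder log location

    def log(entry: str) -> None:
        # Append a timestamped record of attacker activity.
        with open(LOG_FILE, "a") as fh:
            fh.write(f"{datetime.now(timezone.utc).isoformat()} {entry}\n")

    def main() -> None:
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", LISTEN_PORT))
        srv.listen(5)
        log(f"listening on port {LISTEN_PORT}")
        while True:
            client, addr = srv.accept()
            client.settimeout(5)
            try:
                data = client.recv(4096)
                log(f"connection from {addr[0]}:{addr[1]} sent {data!r}")
            except socket.timeout:
                log(f"connection from {addr[0]}:{addr[1]} sent nothing")
            finally:
                client.close()

    if __name__ == "__main__":
        main()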

• The first book to collect a comprehensive set of all virtualization security tools and strategies in a single volume
• Covers all major virtualization platforms, including market leader VMware, Xen, and Microsoft's Hyper-V virtualization platform, a new part of Windows Server 2008 released in June 2008
• Breadth of coverage appeals to a wide range of security professionals, including administrators, researchers, consultants, and forensic analysts

Product Details

ISBN-13: 9780080879352
Publisher: Elsevier Science
Publication date: 02/24/2009
Series: Including Sandboxing, Disaster Recovery, High Availability, Forensic Analysis, and Honeypotting Series
Sold by: Barnes & Noble
Format: NOOK Book
Pages: 384
File size: 3 MB

Read an Excerpt

Virtualization for Security

Including Sandboxing, Disaster Recovery, High Availability, Forensic Analysis, and Honeypotting

Syngress

Copyright © 2009 Elsevier, Inc.
All rights reserved.

ISBN: 978-0-08-087935-2


Chapter One

An Introduction to Virtualization

Solutions in this chapter:

* What Is Virtualization?

* Why Virtualize?

* How Does Virtualization Work?

* Types of Virtualization

* Common Use Cases for Virtualization

Introduction

Virtualization is one of those buzz words that has been gaining immense popularity with IT professionals and executives alike. Promising to reduce the ever-growing infrastructure inside current data center implementations, virtualization technologies have cropped up from dozens of software and hardware companies. But what exactly is it? Is it right for everyone? And how can it benefit your organization?

Virtualization has actually been around for more than three decades. Once accessible only to the large, rich, and prosperous enterprise, virtualization technologies are now available in every aspect of computing, including hardware, software, and communications, for a nominal cost. In many cases, the technology is freely available (thanks to open-source initiatives) or included for the price of products such as operating system software or storage hardware.

Well suited for most inline business applications, virtualization technologies have gained in popularity and are in widespread use for all but the most demanding workloads. Understanding the technology and the workloads to be run in a virtualized environment is key to every administrator and systems architect who wishes to deliver the benefits of virtualization to their organization or customers.

This chapter will introduce you to the core concepts of server, storage, and network virtualization as a foundation for learning more about Xen. This chapter will also illustrate the potential benefits of virtualization to any organization.

What Is Virtualization?

So what exactly is virtualization? Today, that question has many answers. Different manufacturers and independent software vendors have adopted the term to categorize their products as tools that help companies establish virtualized infrastructures. Those claims are not false, as long as their products accomplish some of the following key points (which are the objectives of any virtualization technology):

* Add a layer of abstraction between the applications and the hardware

* Enable a reduction in costs and complexity

* Provide the isolation of computer resources for improved reliability and security

* Improve service levels and the quality of service

* Better align IT processes with business goals

* Eliminate redundancy in, and maximize the utilization of, IT infrastructures

While the most common form of virtualization is focused on server hardware platforms, these goals and supporting technologies have also found their way into other critical—and expensive—components of modern data centers, including storage and network infrastructures.

But to answer the question "What is virtualization?" we must first discuss the history and origins of virtualization, as clearly as we understand it.

The History of Virtualization

In its conceived form, virtualization was better known in the 1960s as time sharing. Christopher Strachey, the first Professor of Computation at Oxford University and leader of the Programming Research Group, brought this term to life in his paper Time Sharing in Large Fast Computers. Strachey, who was a staunch advocate of maintaining a balance between practical and theoretical work in computing, was referring to what he called multi-programming. This technique would allow one programmer to develop a program on his console while another programmer was debugging his, thus avoiding the usual wait for peripherals. Multi-programming, as well as several other groundbreaking ideas, began to drive innovation, resulting in a series of computers that burst onto the scene. Two are considered part of the evolutionary lineage of virtualization as we currently know it—the Atlas and IBM's M44/44X.

The Atlas Computer

The first of the supercomputers of the early 1960s took advantage of concepts such as time sharing, multi-programming, and shared peripheral control, and was dubbed the Atlas computer. A project run by the Department of Electrical Engineering at Manchester University and funded by Ferranti Limited, the Atlas was the fastest computer of its time. Its speed was partially due to the separation of operating system processes, handled by a component called the supervisor, from the component responsible for executing user programs. The supervisor managed key resources, such as the computer's processing time, and was passed special instructions, or extracodes, to help it provision and manage the computing environment for the user program's instructions. In essence, this was the birth of the hypervisor, or virtual machine monitor.

In addition, Atlas introduced the concept of virtual memory, called one-level store, and paging techniques for the system memory. This core store was also logically separated from the store used by user programs, although the two were integrated. In many ways, this was the first step towards creating a layer of abstraction that all virtualization technologies have in common.

The M44/44X Project

Determined to maintain its title as the supreme innovator of computers, and motivated by the competitive atmosphere that existed, IBM answered back with the M44/44X Project. Based at the IBM Thomas J. Watson Research Center in Yorktown, New York, the project created an architecture similar to that of the Atlas computer. This architecture was the first to use the term virtual machines and became IBM's contribution to the emerging time-sharing system concepts. The main machine was an IBM 7044 (M44) scientific computer that hosted several simulated 7044 virtual machines, or 44Xs, implemented using a combination of hardware and software, virtual memory, and multi-programming.

Unlike later implementations of time-sharing systems, the M44/44X virtual machines did not implement a complete simulation of the underlying hardware. Instead, the project fostered the notion that virtual machines were as efficient as more conventional approaches. To prove that notion, IBM released successors to the M44/44X project that showed the idea was not only true, but could lead to a successful approach to computing.

CP/CMS

A later design, the IBM 7094, was finalized by MIT researchers and IBM engineers and introduced the Compatible Time Sharing System (CTSS). The term "compatible" refers to compatibility with the standard batch processing operating system used on the machine, the Fortran Monitor System (FMS). CTSS not only ran FMS in the main 7094 as the primary facility for the standard batch stream, but also ran an unmodified copy of FMS in each virtual machine in a background facility. The background jobs could access all peripherals, such as tapes, printers, punch card readers, and graphic displays, in the same fashion as the foreground FMS jobs, as long as they did not interfere with the foreground time-sharing processes or any supporting resources.

MIT continued to value the prospects of time sharing, and developed Project MAC as an effort to develop the next generation of advances in time-sharing technology, pressuring hardware manufacturers to deliver improved platforms for their work. IBM's response was a modified and customized version of its System/360 (S/360) that would include virtual memory and time-sharing concepts not previously released by IBM. This proposal to Project MAC was rejected by MIT, a crushing blow to the team at the Cambridge Scientific Center (CSC), whose only purpose was to support the MIT/IBM relationship through technical guidance and lab activities.

The fallout between the two, however, led to one of the most pivotal points in IBM's history. The CSC team, led by Norm Rasmussen and Bob Creasy, a defector from Project MAC, went on to develop CP/CMS. In the late 1960s, the CSC developed the first successful virtual machine operating system based on fully virtualized hardware, the CP-40. The CP-67 was released as a reimplementation of the CP-40, and was later converted and implemented on the S/360-67 and later on the S/370. The success of this platform won back IBM's credibility at MIT as well as several of IBM's largest customers. It also led to the evolution of the platform and the virtual machine operating systems that ran on it, the most popular being VM/370. The VM/370 was capable of running many virtual machines, with larger virtual memory, on virtual copies of the hardware, all managed by a component called the virtual machine monitor (VMM) running on the real hardware. Each virtual machine was able to run a unique installation of IBM's operating system stably and with great performance.

Other Time-Sharing Projects

IBM's CTSS and CP/CMS efforts were not alone, although they were the most influential in the history of virtualization. As time sharing became widely accepted and recognized as an effective way to make early mainframes more affordable, other companies joined the time-sharing fray. Like IBM, those companies needed plenty of capital to fund the research and hardware investment needed to aggressively pursue time-sharing operating systems as the platform for running their programs and computations. Some other projects that jumped onto the bandwagon included

* Livermore Time-Sharing System (LTSS) Developed by the Lawrence Livermore Laboratory in the late 1960s as the operating system for the Control Data CDC 7600 supercomputers. The CDC 7600 running LTSS took over the title of the world's fastest computer from the Atlas computer, which suffered from a form of thrashing due to inefficiencies in its implementation of virtual memory.

* Cray Time-Sharing System (CTSS) (This is a different CTSS; not to be confused with IBM's CTSS.) Developed for the early lines of Cray supercomputers in the early 1970s. The project was engineered by the Los Alamos Scientific Laboratory in conjunction with the Lawrence Livermore Laboratory, and stemmed from the research that Livermore had already done with the successful LTSS operating system. Cray X-MP computers running CTSS were used heavily by the United States Department of Energy for nuclear research.

* New Livermore Time-Sharing System (NLTSS) The last iteration of CTSS, this was developed to incorporate recent advances and concepts in computers, such as new communication protocols like TCP/IP and LINCS. However, it was not widely accepted by users of the Cray systems and was discontinued in the late 1980s.

Virtualization Explosion of the 1990s and Early 2000s

While we have discussed a summarized list of early virtualization efforts, the projects that have launched since those days are too numerous to reference in their entirety. Some have failed while others have gone on to be popular and accepted technologies throughout the technical community. Also, while most of the effort has been focused on server virtualization, we have also seen attempts to virtualize and simplify the data center, whether through true virtualization as defined by the earlier set of goals or through infrastructure sharing and consolidation.

Many companies, such as Sun, Microsoft, and VMware, have released enterprise-class products that have gained wide acceptance, due in part to their existing customer bases. However, Xen threatens to challenge them all with its approach to virtualization. Having been adopted by the Linux community and now integrated as a built-in feature of most popular distributions, Xen will continue to enjoy a strong and steady increase in market share. Why? We'll discuss that later in the chapter. But first, back to the question ... What is virtualization?

The Answer: Virtualization Is ...

So with all that history behind us, and with so many companies claiming to wear the virtualization hat, how do we define it? In an effort to be as all-encompassing as possible, we can define virtualization as:

A framework or methodology of dividing the resources of computer hardware into multiple execution environments, by applying one or more concepts or technologies such as hardware and software partitioning, time-sharing, partial or complete machine simulation, emulation, quality of service, and many others.

Just as it did during the late 1960s and early 1970s with IBM's VM/370, modern virtualization allows multiple operating system instances to run concurrently on a single computer, albeit at much lower cost than the mainframes of those days. Each OS instance shares the resources available on the common physical hardware, as illustrated in Figure 1.1. Software, referred to as a virtual machine monitor (VMM), controls use of and access to the CPU, memory, storage, and network resources underneath.
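As a small illustration of that arrangement, the sketch below asks a libvirt-managed VMM which guest instances are currently sharing the physical machine and what resources each has been allotted; the connection URI assumes a local KVM/QEMU host and is only an example:

    import libvirt

    conn = libvirt.open("qemu:///system")  # assumed local KVM/QEMU host

    # Each domain is one OS instance sharing the underlying CPU, memory,
    # storage, and network resources under the control of the VMM.
    for dom in conn.listAllDomains():
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        print(f"{dom.name():20s} vCPUs={vcpus} memory={mem // 1024} MiB active={bool(dom.isActive())}")

    conn.close()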

(Continues...)



Excerpted from Virtualization for Security, Copyright © 2009 by Elsevier, Inc. Excerpted by permission of Syngress. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
