Redeploying baseline virtual machine to update a child virtual machine by creating and swapping a virtual disk comprising a clone of the baseline virtual machine

US 8,898,668 B1


One or more techniques and/or systems are disclosed for redeploying a baseline VM (BVM) to one or more child VMs (CVMs) by merely cloning virtual drives of the BVM, instead of the entirety of the parent BVM. A temporary directory is created in a datastore that comprises the target CVMs (e.g., CVMs that are to be re-baselined by virtual drive replacement). One or more replacement virtual drives (RVDs) are created in the temporary directory, where the RVDs comprise a clone of a virtual drive of the source BVM. The one or more RVDs are moved from the temporary directory to a directory of the target CVMs, replacing existing virtual drives of the target CVMs so that the target CVMs are re-baselined to the state of the parent BVM.


Claims

1. A method for redeploying a baseline virtual machine to a child virtual machine, comprising:
identifying a child virtual machine as being associated with a baseline virtual machine; and
redeploying the baseline virtual machine to the child virtual machine, comprising:
gathering virtual drive information for the baseline virtual machine, the virtual drive information specifying a location of a datastore for the child virtual machine;
interrogating a baseline virtual machine drive, used by the baseline virtual machine to store data, to identify a partition, in a partition table, as configured to allow a file system of the datastore to align with a controller comprising the baseline virtual machine;
responsive to the file system of the datastore being aligned with the controller comprising the baseline virtual machine, creating a temporary directory on the datastore;
creating a replacement virtual drive within the temporary directory based upon the virtual drive information, the replacement virtual drive comprising a clone of the baseline virtual machine drive; and
replacing an existing child virtual machine drive, used by the child virtual machine to store data, with the replacement virtual drive utilizing a single operation.

(13 dependent claims not shown.)

15. A system for redeploying a baseline virtual machine to a child virtual machine, comprising:
one or more processors; and
memory comprising instructions that, when executed by at least one of the one or more processors, implement one or more of:
a target child virtual machine identification component configured to:
identify a child virtual machine as being associated with a baseline virtual machine; and
a redeployment component configured to:
redeploy the baseline virtual machine to the child virtual machine, comprising:
gathering virtual drive information for the baseline virtual machine, the virtual drive information specifying a location of a datastore for the child virtual machine;
interrogating a baseline virtual machine drive, used by the baseline virtual machine to store data, to identify a partition, in a partition table, as configured to allow a file system of the datastore to align with a controller comprising the baseline virtual machine;
responsive to the file system of the datastore being aligned with the controller comprising the baseline virtual machine, creating a temporary directory on the datastore;
creating a replacement virtual drive within the temporary directory based upon the virtual drive information, the replacement virtual drive comprising a clone of the baseline virtual machine drive; and
replacing an existing child virtual machine drive, used by the child virtual machine to store data, with the replacement virtual drive utilizing a single operation.

(7 dependent claims not shown.)

23. A non-transitory computer readable medium comprising computer executable instructions that when executed by a processor perform a method for redeploying a baseline virtual machine to a child virtual machine, comprising:
identifying a child virtual machine as being associated with a baseline virtual machine; and
redeploying the baseline virtual machine to the child virtual machine, comprising:
gathering virtual drive information for the baseline virtual machine, the virtual drive information specifying a location of a datastore for the child virtual machine;
interrogating a baseline virtual machine drive, used by the baseline virtual machine to store data, to identify a partition, in a partition table, as configured to allow a file system of the datastore to align with a controller comprising the baseline virtual machine;
responsive to the file system of the datastore being aligned with the controller comprising the baseline virtual machine, creating a temporary directory on the datastore;
creating a replacement virtual drive within the temporary directory based upon the virtual drive information, the replacement virtual drive comprising a clone of the baseline virtual machine drive; and
replacing an existing child virtual machine drive, used by the child virtual machine to store data, with the replacement virtual drive utilizing a single operation.

Description

FIELD

The instant disclosure pertains to clustered storage systems, and more particularly to redeploying one or more virtual machines therein.

BACKGROUND

Business entities and consumers are storing an ever increasing amount of digitized data. For example, many commercial entities are in the process of digitizing their business records and/or other data. Similarly, web-based service providers generally engage in transactions that are primarily digital in nature. Thus, techniques and mechanisms that facilitate efficient and cost-effective storage of vast amounts of digital data are being implemented.

When linking remote (or even locally dispersed) locations that require access to stored data, and/or to promote the continued availability of such data in the event of hardware, software, or even site failures (e.g., power outages, sabotage, natural disasters), entities have developed clustered networks that link disparate storage mediums to a plurality of clients, for example. Typically, to access data, one or more clients can connect to respective nodes of a clustered storage environment, where the nodes are linked by a cluster fabric that provides communication between the disparate nodes. Nodes can be dispersed locally, such as in a same geographical location, and/or dispersed over great distances, such as around the country.

A virtual server environment can comprise multiple physical controllers, such as servers, that access a distributed data storage and management system. Respective controllers may comprise a plurality of virtual machines (VMs) that reside and execute on the controller. A VM (a.k.a. virtual server or virtual desktop) may comprise its own operating system and one or more applications that execute on the controller. As such, a VM can function as a self-contained desktop environment, for example, executing on the controller while emulated on a client attached to the controller, and multiple operating systems may execute concurrently on a controller.

VMs on a controller can be configured to share hardware resources of the controller, and if connected to a distributed data storage and management system (cluster), share hardware resources of the cluster. A VM monitor module/engine (hypervisor) may be used to manage the VMs on respective controllers, and also virtualize hardware and/or software resources of the controllers in the cluster for use by the VMs. Clients can be connected to the cluster and used to interface/interact with a particular VM, and emulate a desktop environment, such as a virtual desktop environment, on the client machine. From the viewpoint of a client, the VM may comprise a virtual desktop, or server that appears as an actual desktop machine environment or physical server.

Multiple executing VMs may be logically separated and isolated within a cluster to avoid conflicts or interference between applications of the different VMs. In this way, for example, a security issue or application crash in one VM may not affect the other VMs on the same controller, or in the cluster. Further, a preferred version of a VM may be cloned and deployed throughout a cluster, and transferred between controllers in the virtual server environment.

Often, a preferred version of a VM (baseline VM) is cloned a plurality of times and deployed, such as in a same controller or over a cluster, for access by attached clients. For example, virtual desktop infrastructures (VDIs) utilize cloned VMs to emulate desktop environments on clients, such as in secure working environments, and/or where retaining data for a cloned baseline VM may not be necessary. In this example, important information may be maintained on the controller or cluster, while transient data is destroyed when the baseline VM is redeployed to the clones. Redeploying a baseline VM to the clones (e.g., child VMs, comprising clones of a parent VM, such as the baseline VM) can also be used when software or configuration updates have been performed on the baseline VM, and these updates can be easily rolled out to the child VMs by redeploying the baseline. As an example, a baseline VM may be known as a parent VM and the clones from the baseline VM may be known as children (child) VMs, such that the parent VM can be redeployed to the children VMs, thus re-baselining the children VMs to a desired (baseline) state.

Currently, child VMs can be refreshed back to the baseline VM state, such as after changes have been made to the child VM. The refresh utility uses a copy-on-write delta file that logs any changes made to a particular child. These copy-on-write files can become quite large over time, if not refreshed, as a plurality of changes are made to the child. Further, child VMs can be recomposed, which allows patches and software updates to be pushed out to the child VMs from the baseline VM. A snapshot file is created of the baseline VM and rolled out to the children using a form of replication of the baseline VM.
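The copy-on-write behavior described above can be illustrated with a minimal sketch (the class and its methods are hypothetical illustrations, not the refresh utility's actual interface): reads fall through to the baseline image unless a block has been rewritten, so the delta file grows with every change, and a refresh simply discards the accumulated delta.

```python
class CopyOnWriteDelta:
    """Hypothetical sketch of a copy-on-write delta over a baseline image."""

    def __init__(self, baseline):
        self.baseline = baseline  # immutable baseline blocks
        self.delta = {}           # block index -> changed data (grows over time)

    def write(self, index, data):
        # Changes never touch the baseline; they accumulate in the delta.
        self.delta[index] = data

    def read(self, index):
        # Reads fall through to the baseline unless the block was changed.
        return self.delta.get(index, self.baseline[index])

    def refresh(self):
        # Refreshing the child discards the (possibly large) delta,
        # returning the child to the baseline state.
        self.delta.clear()
```

Because every change to the child lands in `delta`, an unrefreshed child's delta file can grow without bound, which is the storage problem the disclosure points to below.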

Presently, redeploying a baseline VM to the child VMs is inefficient, and is limited to merely virtual desktops and development labs due to performance problems. For example, the present use of copy-on-write files to maintain differences between a master copy and a linked clone (baseline/child relationship) presents storage and access problems. The copy-on-write files can become large quickly, and are cumbersome to manage as they have to be refreshed periodically. Further, the process used to link the master to the clones is very slow and creates storage efficiency problems.

SUMMARY

This disclosure relates to techniques and systems that provide for redeploying a baseline virtual machine (BVM) to one or more child virtual machines (CVMs). The BVM can be a version of a virtual machine that was used to create a plurality of clones, comprising the CVMs, which were deployed on a controller or over a distributed data storage and management system (cluster). As an example, a client machine can be connected to the controller or cluster, and emulate one or more of the deployed CVMs.

The one or more techniques and/or systems described herein allow for rapid and efficient redeployment of the BVM to the CVMs, using fewer (storage) resources and performing the deployment faster than current techniques and systems, by cloning the BVM's virtual drives into a temporary directory and moving the clones into the CVMs' directories. That is, virtual drives for the CVMs are swapped out or replaced with virtual drives of the BVM (e.g., re-baselining the CVM, or deploying a new iteration of the BVM). By merely cloning virtual drives, the CVMs' drives can be replaced (e.g., by original baselines or new versions comprising updated programs) in an efficient manner. For example, current and previous techniques and/or systems do not provide for cloning the virtual drives separately from the VM. These current/previous techniques typically clone an entire VM a desired number of times for redeployment and discard those components that are not needed for the redeployment. It can be appreciated that cloning (and then immediately discarding) unnecessary components of a VM is a less than optimal technique. Further, when redeploying entire VMs, instead of merely the virtual drives, one or more failures may occur during cloning/copying that leave the VMs in an unknown state, such as possibly incomplete and/or corrupt.

In one embodiment of redeploying a BVM to one or more CVMs, one or more CVMs are identified that are associated with a selected BVM (selected for redeployment), where the CVM identification is done using metadata that identifies the BVM from which the CVM was cloned. The selected BVM (source BVM) is redeployed to the one or more CVMs that were identified as being associated with the BVM, for example, and were selected as target CVMs (e.g., those targeted by a user or automated program to receive a replacement virtual drive) to receive the redeployment.
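The metadata-based identification step might look like the following sketch, assuming a hypothetical `cloned_from` metadata key that records which BVM a CVM was cloned from (the key name and the dictionary representation of a VM are illustrative assumptions, not the disclosed data model):

```python
def find_target_cvms(vms, baseline_name):
    """Return the VMs whose metadata identifies baseline_name as the
    BVM they were cloned from. 'cloned_from' is an assumed metadata key."""
    return [
        vm for vm in vms
        if vm.get("metadata", {}).get("cloned_from") == baseline_name
    ]
```

A user or automated program could then select some or all of the returned CVMs as targets to receive the replacement virtual drives.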

In this embodiment, redeploying the BVM to the CVMs comprises gathering virtual drive information for the source BVM and for a target CVM, where the gathered information comprises a datastore location for the target CVM. Further, a temporary directory is created in the datastore that comprises the target CVM, and one or more replacement virtual drives (RVDs) are created in the temporary directory, where the RVD is a clone of a source virtual drive. The one or more RVDs are moved from the temporary directory to a directory of the target CVM, which replaces an existing CVM virtual drive.
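As a rough sketch of this flow (the file layout, file names, and the use of plain files to stand in for virtual drives are illustrative assumptions, not the disclosed implementation): the replacement drive is cloned into a temporary directory on the same datastore as the target CVM, then moved over the existing CVM drive, so the final replacement happens as a single rename on one filesystem.

```python
import os
import shutil
import tempfile

def redeploy_baseline(source_drive, cvm_dir):
    """Clone the baseline drive into a temp directory on the CVM's
    datastore, then swap it over the CVM's existing drive in one move."""
    datastore = os.path.dirname(cvm_dir)
    # Temp directory on the same datastore as the target CVM.
    tmp_dir = tempfile.mkdtemp(dir=datastore)
    replacement = os.path.join(tmp_dir, os.path.basename(source_drive))
    shutil.copy(source_drive, replacement)  # clone of the source virtual drive
    target = os.path.join(cvm_dir, os.path.basename(source_drive))
    # Single-operation swap: os.replace is one rename on the same filesystem,
    # overwriting the existing CVM drive.
    os.replace(replacement, target)
    os.rmdir(tmp_dir)
    return target
```

Keeping the temporary directory on the same datastore matters for the design: a rename within one filesystem is a single fast operation, whereas a copy into the CVM directory could fail partway and leave the drive in an unknown state.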

To the accomplishment of the foregoing and related ends, the following description and annexed drawings set forth certain illustrative aspects and implementations. These are indicative of but a few of the various ways in which one or more aspects may be employed. Other aspects, advantages, and novel features of the disclosure will become apparent from the following detailed description when considered in conjunction with the annexed drawings.

DESCRIPTION OF THE DRAWINGS

FIG. 1 is a component block diagram illustrating an example clustered network in accordance with one or more of the provisions set forth herein.

FIG. 2 is a component block diagram illustrating an example data storage system in accordance with one or more of the provisions set forth herein.

FIG. 3 is a flow diagram illustrating an example method for redeploying a baseline virtual machine to one or more children virtual machines in accordance with one or more of the provisions set forth herein.

FIG. 4 is a flow diagram of one embodiment of an implementation of one or more techniques described herein.

FIG. 5 is a flow diagram illustrating another example embodiment of an implementation of one or more techniques described herein.

FIGS. 6A-B are component diagrams illustrating example embodiments in accordance with one or more of the provisions set forth herein.

FIGS. 7-8 are component diagrams illustrating example embodiments in accordance with one or more of the provisions set forth herein.

FIG. 9 is a component diagram illustrating an example system for redeploying a baseline VM to one or more child VMs.

FIG. 10 is a component diagram illustrating one embodiment implementing one or more of the systems described herein.

FIG. 11 is a flow diagram of an example alternate method for redeploying a baseline virtual machine to one or more children virtual machines.

FIG. 12 is an example of a computer readable medium in accordance with one or more of the provisions set forth herein.

DETAILED DESCRIPTION

Some examples of the claimed subject matter are now described with reference to the drawings, where like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the claimed subject matter. It may be evident, however, that the claimed subject matter may be practiced without these specific details. Nothing in this detailed description is admitted as prior art.

In a multi-node, clustered data storage and management network, data can be stored in a plurality of nodes and distributed clients can be connected to the network via one of the plurality of nodes (e.g., comprising storage controllers). One or more virtual machines (VMs) can be instantiated and cloned on a controller or throughout the cluster. Further, a preferred version of a VM (a baseline VM) can be cloned a plurality of times, and the plurality of clones (child VMs) can be deployed on the controller or throughout the cluster. Periodically, it may be desirable to redeploy the baseline VM to the child VMs, such as for security or to create clean versions of the baseline.

To provide a context for an embodiment of redeploying a baseline virtual machine to one or more children virtual machines, such as from a distributed data storage environment, FIG. 1 illustrates a clustered network environment 100, for example, whereon clients connect to a plurality of distributed nodes, and FIG. 2 illustrates an embodiment of a data storage system that may be implemented to store and manage data in this clustered network environment, including virtual machine information. It will be appreciated that where the same or similar components, elements, features, items, modules, etc. are illustrated in later figures but were previously discussed with regard to prior figures, a similar (e.g., redundant) discussion of the same may be omitted when describing the subsequent figures (e.g., for purposes of simplicity and ease of understanding).

FIG. 1 is a block diagram illustrating an example clustered network environment 100 that may implement some embodiments of the techniques and/or systems described herein. The example environment 100 comprises data storage systems 102 and 104 that are coupled over a cluster fabric 106, such as a computing network embodied as a private InfiniBand or Fibre Channel (FC) network facilitating communication between the storage systems 102 and 104 (and one or more modules, components, etc. therein, such as nodes 116 and 118, for example). It will be appreciated that while two data storage systems 102 and 104 and two nodes 116 and 118 are illustrated in FIG. 1, any suitable number of such components is contemplated. Similarly, unless specifically provided otherwise herein, the same is true for other modules, elements, features, items, etc. referenced herein and/or illustrated in the accompanying drawings. That is, a particular number of components, modules, elements, features, items, etc. disclosed herein is not meant to be interpreted in a limiting manner.

It will be further appreciated that clustered networks are not limited to any particular geographic areas and can be clustered locally and/or remotely. Thus, in one embodiment a clustered network can be distributed over a plurality of storage systems and/or nodes located in a plurality of geographic locations; while in another embodiment a clustered network can include data storage systems (e.g., 102, 104) residing in a same geographic location (e.g., in a single onsite rack of data storage devices).

In the illustrated example, one or more clients 108, 110, which may comprise, for example, personal computers (PCs), computing devices used for storage (e.g., storage servers), and other computers or peripheral devices (e.g., printers), are coupled to the respective data storage systems 102, 104 by storage network connections 112, 114. A network connection 112, 114 may comprise a local area network (LAN) or wide area network (WAN), for example, that utilizes Network Attached Storage (NAS) protocols, such as the Common Internet File System (CIFS) protocol or the Network File System (NFS) protocol, to exchange data packets. Illustratively, the clients 108, 110 may be general-purpose computers running applications, and may interact with the data storage systems 102, 104 using a client/server model for exchange of information. That is, the client may request data from the data storage system, and the data storage system may return results of the request to the client via one or more network connections 112, 114.

The nodes 116, 118 on clustered data storage systems 102, 104 can comprise network or host nodes that are interconnected as a cluster to provide data storage and management services, such as to an enterprise having remote locations, for example. Such a node in a data storage and management network cluster environment 100 can be a device attached to the network as a connection point, redistribution point or communication endpoint, for example. A node may be capable of sending, receiving, and/or forwarding information over a network communications channel, and could comprise any device that meets any or all of these criteria. One example of a node may be a data storage and management server attached to a network, where the server can comprise a general purpose computer or a computing device particularly configured to operate as a server in a data storage and management system.

As illustrated in the exemplary environment 100, nodes 116, 118 can comprise various functional components that coordinate to provide distributed storage architecture for the cluster. For example, the nodes can comprise a network module 120, 122 (e.g., N-Module, or N-Blade) and a data module 124, 126 (e.g., D-Module, or D-Blade). Network modules 120, 122 can be configured to allow the nodes 116, 118 to connect with clients 108, 110 over the network connections 112, 114, for example, allowing the clients 108, 110 to access data stored in the distributed storage system. Further, the network modules 120, 122 can provide connections with one or more other components through the cluster fabric 106. For example, in FIG. 1, a first network module 120 of first node 116 can access a second data storage device 130 by sending a request through a second data module 126 of a second node 118.
