Tuesday, April 2, 2019
The History Of Virtualization Information Technology Essay
Introduction

Virtualization is one of the hottest innovations in the Information Technology field, with proven benefits that propel organizations to strategize for rapid planning and implementation of virtualization. As with any new technology, managers must be careful to analyze how that technology would best fit in their organization. In this document, we will provide an overview of virtualization to help shed light on this quickly evolving technology.

History of Virtualization

Virtualization is new again. Although virtualization seems to be a hot new cutting-edge technology, IBM originally practiced it on its mainframes in the 1960s. The IBM 360/67 running the CP/CMS system used virtualization as an approach to time sharing: each user would run their own 360 machine, and storage was partitioned into virtual disks called P-Disks for each user. Mainframe virtualization remained popular through the 1970s.

During the 1980s and 1990s, virtualization largely disappeared. During the 1980s, there were a couple of products made for Intel PCs: Simultask and Merge/386, both developed by Locus Computing Corporation, would run MS-DOS as a guest operating system. In 1988, Insignia Solutions released SoftPC, which ran DOS on Sun and Macintosh platforms.

The late 1990s ushered in the new wave of virtualization. In 1997, Connectix released Virtual PC for the Macintosh; it later released a version for Windows and was subsequently bought by Microsoft in 2003. In 1999, VMware made its entry into virtualization.

In the last decade, every major player in servers has integrated virtualization into its offerings. In addition to VMware and Microsoft, Sun, Veritas, and HP have all acquired virtualization technology.

How Does Virtualization Work?

In the IT world, servers are needed to do many jobs. Traditionally, each machine does only one job, and sometimes many servers are given the same job. The reason behind this is to keep hardware and software problems on one machine from causing problems for other applications. There are several problems with this approach, however. The first is that it doesn't take advantage of modern server computers' processing power [11]; most servers use only a small percentage of their overall processing capabilities. The second is that the servers begin to take up a lot of physical space as the enterprise network grows larger and more complex. Data centers can become overcrowded with racks of servers consuming a lot of power and generating heat. Server virtualization tries to fix both of these problems in one fell swoop. [16]

Server virtualization uses specially designed software with which an administrator can convert one physical server into multiple virtual machines. Each virtual server acts as a unique physical device capable of running its own operating system. Until recent technological developments, the only way to create a virtual server was to design special software to trick the server's CPU into providing processing power for several virtual machines. Today, however, processor manufacturers such as Intel and AMD offer processors with support for virtual servers already built in. The hardware doesn't create the virtual servers, though; network administrators or engineers still need to create them using the right software. [11]
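As an aside, on a Linux host one informal way to see whether a CPU advertises these hardware virtualization extensions is to look for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. The following Python sketch assumes a Linux system and is illustrative rather than a robust detection method.

# Sketch: check whether a Linux host's CPU advertises hardware
# virtualization extensions. Illustrative only, not a robust probe.

def cpu_virtualization_support(path="/proc/cpuinfo"):
    flags = set()
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    found = []
    if "vmx" in flags:
        found.append("Intel VT-x")
    if "svm" in flags:
        found.append("AMD-V")
    return found

if __name__ == "__main__":
    support = cpu_virtualization_support()
    print("Hardware virtualization:", ", ".join(support) or "not advertised")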
In the world of information technology, server virtualization is still a hot topic. Still considered a new technology, several companies offer different approaches to server virtualization. There are three ways to create virtual servers: full virtualization, para-virtualization, and OS-level virtualization. All three variations share a few common traits: the physical server is always called the host, the virtual servers are called guests, and the virtual servers all behave as if they were physical machines. However, each method uses a different approach to allocating physical server resources to virtual server needs. [11]

Full Virtualization

The full virtualization method uses software called a hypervisor. The hypervisor works directly with the physical server's CPU and disk space and serves as the platform for the virtual servers' operating systems. This keeps each server completely autonomous and unaware of the other servers running on the same physical machine. If necessary, the virtual servers can run different operating system software, such as Linux and/or Windows.

The hypervisor also monitors the physical server's resources, relaying resources from the physical machine to the appropriate virtual server as the virtual servers run their applications. Finally, because hypervisors have their own processing needs, the physical server must reserve some processing power and resources to run the hypervisor application. If not done properly, this can affect overall performance and slow down applications. [11]

Para-Virtualization

Unlike the full virtualization method, the para-virtualization approach lets the guest servers be aware of one another. Because each operating system in the virtual servers is conscious of the demands the other guests are placing on the physical server, the para-virtualization hypervisor doesn't require as much processing power to manage the guest operating systems. In this way the entire system works together as a unified whole. [11]

OS-Level Virtualization

The OS-level virtualization approach doesn't use a hypervisor at all; the virtualization capability is part of the host OS instead. The host OS performs all of the functions of a fully virtualized hypervisor. Because it operates without a hypervisor, the OS-level approach limits all of the virtual servers to one operating system, whereas the other two approaches allow different OSes on the virtual servers. The OS-level approach is known as a homogeneous environment because all of the guest operating systems must be the same. [11]

With three different approaches to virtualization, the question remains as to which method is best. This is where a complete understanding of enterprise and network requirements is imperative. If the enterprise's physical servers all run the same OS, then the OS-level approach might be the best solution; it tends to be faster and more efficient than the others. However, if the physical servers are running several different operating systems, para-virtualization or full virtualization might be better approaches.
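To make the hypervisor's brokering role concrete, the following toy Python model shows a host admitting guests only while unreserved CPU and memory remain, with mixed guest operating systems. Every name and number here is invented for illustration; a real hypervisor works at the instruction and device level, not in application code.

class Host:
    """Toy full-virtualization host: brokers CPU and RAM among guests."""
    HYPERVISOR_CPUS, HYPERVISOR_RAM = 1, 2   # capacity reserved for the hypervisor

    def __init__(self, cpus, ram_gb):
        self.free_cpus = cpus - self.HYPERVISOR_CPUS
        self.free_ram = ram_gb - self.HYPERVISOR_RAM
        self.guests = []

    def create_guest(self, name, cpus, ram_gb, os):
        # Admit a guest only if unreserved capacity remains on the host.
        if cpus > self.free_cpus or ram_gb > self.free_ram:
            raise RuntimeError(f"not enough host resources for {name}")
        self.free_cpus -= cpus
        self.free_ram -= ram_gb
        self.guests.append({"name": name, "cpus": cpus, "ram_gb": ram_gb, "os": os})

host = Host(cpus=16, ram_gb=64)
host.create_guest("web01", cpus=4, ram_gb=8, os="Linux")    # guests may run
host.create_guest("db01", cpus=8, ram_gb=32, os="Windows")  # different OSes
print(f"{host.free_cpus} CPUs and {host.free_ram} GB RAM still unallocated")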
Virtualization Standards

With the ever-increasing adoption of virtualization, very few standards actually exist for this technology. As the migration to virtualization grows, so does the need for open industry standards, which is why the current standards work is viewed by several industry observers as a giant step in the right direction. The Distributed Management Task Force (DMTF) currently promotes standards for virtualization management to help industry suppliers implement compliant, interoperable virtualization management solutions.

The strongest standard created for this technology so far is the Standardization of Management in a Virtualized Environment, produced by a team that built on standards already in place. This standard lowers the IT learning curve and reduces complexity for vendors implementing the support in their management solutions; its ease of use is what makes it successful. The standard identifies supported virtualization management capabilities, including the ability to:

- discover and inventory virtual computer systems
- manage the lifecycle of virtual computer systems
- create/modify/delete virtual resources
- monitor virtual systems for health and performance

Virtualization standards are not suffering from poor development but rather from the common IT challenge of satisfying all users. Until virtualization is standardized, network professionals must strive to meet these challenges within a dynamic data center. For example, before the alliance between Cisco and VMware was established, Cisco's Data Center 3.0 was best described as anemic. 150 million dollars later, Cisco was able to establish a successful integration that allows the VFrame to load VMware ESX Server onto bare-metal hardware (something that previously could only be done with Windows and Linux) and configure the network and storage connections that ESX requires.

In addition, Microsoft has made pledges only in the Web services arena, where it faces tougher open-standards competition. The company's Open Specification Promise allows every individual and organization in the world to make use of the Virtual Hard Disk (VHD) image format forever, Microsoft said in a statement. VHD allows the packaging of an application with that application's Windows operating system; several such combinations, each in its own virtual machine, can run on a single piece of hardware.

The future standard of virtualization is the Open Virtualization Format (OVF). OVF doesn't aim to replace the existing formats but instead ties them together in a standards-based XML package that contains all the necessary installation and configuration parameters. This, in theory, will allow any virtualization platform that implements the standard to run the virtual machines. OVF will provide some safeguards as well: the format permits integrity checking of the VMs to ensure they have not been tampered with after the package was produced.
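To give a feel for what such a package looks like, the sketch below assembles a minimal OVF-style XML envelope using Python's standard library. The element names and namespace follow the DMTF OVF 1.x schema as best I can reconstruct them; a real package would carry many more descriptors, plus the manifest that enables the integrity checking described above.

import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"   # OVF 1.x namespace (assumed)
ET.register_namespace("ovf", OVF_NS)

def q(tag):
    """Qualify a tag or attribute name with the OVF namespace."""
    return f"{{{OVF_NS}}}{tag}"

envelope = ET.Element(q("Envelope"))
refs = ET.SubElement(envelope, q("References"))
ET.SubElement(refs, q("File"), {q("href"): "disk0.vmdk", q("id"): "file1"})
system = ET.SubElement(envelope, q("VirtualSystem"), {q("id"): "demo-vm"})
ET.SubElement(system, q("Info")).text = "A single virtual machine"

print(ET.tostring(envelope, encoding="unicode"))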
Virtualization in the Enterprise: Microsoft's Approach

Virtualization is an approach to deploying computing resources that isolates different layers (hardware, software, data, networks, storage) from each other. Typically today, an operating system is installed directly onto a computer's hardware, applications are installed directly onto the operating system, and the interface is presented through a display connected directly to the local machine. Altering one layer often affects the others, making changes difficult to implement.

By using software to isolate these layers from each other, virtualization makes it easier to implement changes. The result is simplified management, more efficient use of IT resources, and the flexibility to provide the right computing resources, when and where they are needed.

Bob Muglia, Senior Vice President, Server and Tools Business, Microsoft Corporation

Typical discussions of virtualization focus on server hardware virtualization (which will be discussed later in this article). However, there is more to virtualization than just server virtualization. This section presents Microsoft's virtualization strategy; by looking at it, we can see other areas besides server virtualization where virtualization can be used in the enterprise infrastructure.

Server Virtualization: Windows Server 2008 Hyper-V and Microsoft Virtual Server 2005 R2

In server virtualization, one physical server is made to appear as multiple servers. Microsoft has two products for virtual servers. Microsoft Virtual Server 2005 R2 was made to run on Windows Server 2003. The current product is Windows Server 2008 Hyper-V, which will only run on 64-bit versions of Windows Server 2008. Both products are considered hypervisors, a term coined by IBM in 1972. A hypervisor is the platform that enables multiple operating systems to run on a single physical computer. Microsoft Virtual Server is considered a Type 2 hypervisor, which runs within the host computer's operating system. Hyper-V is considered a Type 1 hypervisor, also called a bare-metal hypervisor; Type 1 hypervisors run directly on the physical hardware (bare metal) of the host computer.

A virtual machine, whether we are talking about Microsoft, VMware, Citrix, or Parallels, fundamentally consists of two files: a configuration file and a virtual hard drive file. This is true for desktop virtualization as well. For Hyper-V, there is a .vmc file for the virtual machine configuration and a .vhd file for the virtual hard drive. The virtual hard drive holds the OS and data for the virtual server.

Business continuity can be enhanced by using virtual servers. Microsoft's System Center Virtual Machine Manager allows an administrator to move a virtual machine to another physical host without the end users realizing it. With this feature, maintenance can be carried out without bringing the servers down. Failover clustering between servers can also be enabled, meaning that should a virtual server fail, another virtual server could take over, providing a disaster recovery solution.

Testing and development are enhanced through the use of Hyper-V. Virtual server test systems that duplicate the production systems are used to test code. In UCF's Office of Undergraduate Studies, a virtual Windows 2003 server is used to test new web sites and PHP code. The virtual server and its physical production counterpart have the exact same software installed, allowing programmers and designers to check their web applications before releasing them to the public.

By consolidating multiple servers to run on fewer physical servers, cost savings may be found in lower cooling and electricity needs, lower hardware needs, and less physical space to house the data center. Server consolidation is also a key technology for green computing initiatives. Computer resources are also optimized; for example, CPUs will see less idle time. Server virtualization also maximizes licensing: for example, purchasing one Microsoft Server Enterprise license will allow you to run four virtual servers using the same license.
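Since a guest boils down to a configuration file plus a virtual disk, it can be instructive to peek inside the disk file itself. According to the published VHD specification, the last 512 bytes of a .vhd image form a footer that begins with the ASCII cookie "conectix" and records, among other things, the disk's size and type. The Python sketch below reads those fields; the file path is hypothetical and the parsing is deliberately minimal.

import struct

# Disk type codes from the VHD specification (as I recall them).
DISK_TYPES = {2: "fixed", 3: "dynamic", 4: "differencing"}

def read_vhd_footer(path):
    """Return (current_size_bytes, disk_type) from a .vhd file's footer."""
    with open(path, "rb") as f:
        f.seek(-512, 2)               # the footer is the file's last 512 bytes
        footer = f.read(512)
    if footer[0:8] != b"conectix":    # every VHD footer starts with this cookie
        raise ValueError("not a VHD footer")
    current_size = struct.unpack(">Q", footer[48:56])[0]   # big-endian fields
    disk_type = struct.unpack(">I", footer[60:64])[0]
    return current_size, DISK_TYPES.get(disk_type, "unknown")

# Usage (hypothetical path): size, kind = read_vhd_footer("guest.vhd")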
Desktop Virtualization: Microsoft Virtual Desktop Infrastructure (VDI) and Microsoft Enterprise Desktop Virtualization (MED-V)

Desktop virtualization is very similar to server virtualization. A client operating system, such as Windows 7, is used to run a guest operating system, such as Windows XP. This is normally done to support applications or hardware not supported in the current operating system (this is why Microsoft included Windows XP Mode in versions of Windows 7). Microsoft's Virtual PC is the foundation for this desktop virtualization. Virtual PC allows a desktop computer to run a guest operating system (OS), an autonomous instance of an OS, on top of the host OS. Virtual PC emulates a standard PC hardware environment and is independent of the host's hardware or setup.

Microsoft Enterprise Desktop Virtualization (MED-V) is a managed, guest-hosted desktop virtualization solution. MED-V builds upon Virtual PC and adds features to deploy, manage, and control the virtual images, which can also be remotely updated. The virtual machines run on the client computer. Applications that have been installed on the virtual computer can be listed on the host machine's Start menu or as a desktop shortcut, giving the end user a seamless experience. MED-V can be very useful for supporting legacy applications that may not be able to run on the latest deployed operating system.

The virtual images are portable, and that makes them useful for a couple of scenarios. Employees who use their personal computers for work can now use a corporate-managed virtual desktop. This solves a common problem where the personal computer might be running a home version of the operating system that is not allowed to connect to a corporate network. It also means that the enterprise only makes changes to the virtual computer and makes no changes to the personal computer's OS.

The other scenario where portability plays a factor is that the virtual image can be copied to a removable device, such as a USB flash drive. The virtual image can then be run from the USB drive on any computer that has an installation of Virtual PC. Although this is listed as a benefit by Tulloch, I also see some problems with this scenario. USB flash drives sometimes get lost, and losing a flash drive in this scenario is like losing a whole computer, so caution should be exercised so that sensitive data is not kept on the flash drive. Secondly, based on personal experience, even with a fast USB flash drive, the performance of a virtual computer running from the USB flash drive is poor compared to running the same image from the hard drive.

Virtual Desktop Infrastructure (VDI) is server-based desktop virtualization. In MED-V, the virtual image is on the client machine and runs on the client hardware. In VDI, the virtual images are on a Windows Server 2008 with Hyper-V server and run on the server; the user's data and applications therefore reside on the server. This solution is essentially a combination of Hyper-V and Terminal Services (discussed later in this section).

There are several benefits to this approach. Employees can work from any desktop, whether in the office or at home. The client requirements are also very low: using VDI, the virtual images can be delivered not only to standard desktop PCs but also to thin clients and netbooks. Security is also enhanced because all of the data is housed on servers in the data center. Finally, administration is easier and more efficient due to the centralized storage of the images.
Application Virtualization: Microsoft Application Virtualization (App-V)

Application virtualization allows applications to be streamed and cached to the desktop computer. The applications do not actually install themselves into the desktop operating system; for example, no changes are actually made to the Windows registry. This allows for some unusual virtual tricks, like being able to run two versions of Microsoft Office on one computer, which would normally be impossible.

App-V allows administrators to package applications in a self-contained environment. This package contains a virtual environment and everything the application needs to run. The client computer is able to execute this package using the App-V client software. Because the application is self-contained, it makes no changes to the client, including no changes to the registry. Applications can be deployed or published through the App-V Management Server. App-V packages can also be deployed through Microsoft's System Center Configuration Manager or standalone .msi files located on network shares or removable media.

App-V has several benefits for the enterprise. There is centralized management of the entire application life cycle. Application deployment is faster because less time is spent performing regression testing. Since App-V applications are self-contained, there are no software compatibility issues. You can also provide on-demand application deployment. Troubleshooting is also made easier by using App-V: when an application is installed on a client, it creates a cache on the local hard drive, and if an App-V application fails, it can be reinstalled by deleting the cache file.
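The stream-and-cache behavior described above can be pictured as an ordinary download-with-cache pattern: fetch a self-contained package once, run it from a local cache, and delete the cache to force a clean redeployment. The Python sketch below is a loose analogy, not the actual App-V client logic; the URL and paths are invented.

import pathlib
import urllib.request

CACHE = pathlib.Path("appv_cache")   # stand-in for the local package cache

def deploy(package_name, url):
    """Stream a package into the local cache unless it is already there."""
    CACHE.mkdir(exist_ok=True)
    target = CACHE / package_name
    if not target.exists():                       # cached? then skip the fetch
        urllib.request.urlretrieve(url, target)   # "stream" the package down
    return target

def repair(package_name):
    """Troubleshooting step: drop the cached copy so it redeploys cleanly."""
    (CACHE / package_name).unlink(missing_ok=True)

# Usage (invented URL): pkg = deploy("office.pkg", "https://appv.example.com/office.pkg")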
Presentation Virtualization: Windows Server 2008 Terminal Services

Terminal Services, which has been around for many years, has been folded into Microsoft's virtualization offerings. A terminal server allows multiple users to connect. Each user receives a desktop view from the server in which they run applications on the server. Any programs run within this desktop view actually execute on the terminal server; the client only receives the screen view from the server. The strategy employed here is that since the application only uses resources on the server, money can be spent on strong server hardware and saved on lower-powered clients. Also, since the application is only on the server, it is easier to maintain the software: it only needs to be updated on the server and not on all of the clients. And since the application runs on the server, the data can be stored on the server as well, enhancing security. Another security feature is that every keystroke and mouse stroke is encrypted. The solution is also scalable and can be expanded to use multiple servers in a farm. Terminal Services applications can also be optimized for both high- and low-bandwidth scenarios, which is helpful for remote users accessing corporate applications over less-than-optimal connections.

User-State Virtualization: Roaming User Profiles, Folder Redirection, Offline Files

This is another set of technologies that have been around since Windows 95 but have now been folded into the virtualization strategy. A user profile consists of registry entries and folders which define the user's environment. The desktop background is a common setting that you will find as part of the user profile; other items include application settings, Internet Explorer favorites, and the documents, music, and picture folders.

Roaming user profiles are profiles saved to a server that follow a user to any computer that the user logs in to. For example, a user with a roaming profile logs on to a computer on the factory floor and changes the desktop image to a picture of fluffy kittens. When he logs on to his office computer, the fluffy kittens are on his office computer's desktop as well.

One of the limitations of roaming profiles is that the profile must be synchronized from the server to the workstation each time the user logs on; when the user logs off, the profile is copied back up to the server. If folders such as the documents folder are included, the downloading and uploading can take some time. An improved solution is to use redirected folders. Folders, such as documents and pictures, can be redirected to a server location. This is transparent to the user, who will still access his documents folder as if it were part of his local profile. This also helps with data backup, since it is easier to back up a single server than document folders located on multiple client computers.

Another limitation of roaming user profiles occurs when the server, or network access to the server, is down. Offline Files attempts to address that limitation by providing access to network files even if the server location is inaccessible. When used with Roaming User Profiles and Folder Redirection, files saved in redirected folders are automatically made available for offline use. Files marked for offline use are stored on the local client in a client-side cache, and files are synchronized between the client-side cache and the server. If the connection to the server is lost, the Offline Files feature takes over; the user may not even realize that there have been any problems with the server.

Together, Roaming User Profiles, Folder Redirection, and Offline Files are also an excellent disaster recovery tool. When a desktop computer fails, the biggest loss is the user's data. With these three technologies in place, all the user would need to do is log into another standard corporate-issued computer and resume working. There is no downtime spent trying to recover or restore the user's data, since it was all safely stored on a server.
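Conceptually, Offline Files pairs each server copy with a client-side cache and falls back to the cache when the server is unreachable. The Python sketch below mimics that fallback with plain directories; the paths are invented, and real synchronization (conflict detection, timestamps, partial sync) is far more involved.

import shutil
from pathlib import Path

# Invented locations: a redirected folder on a file server and its
# client-side cache. Real Offline Files lives inside Windows, not here.
SERVER = Path("//fileserver/profiles/alice/documents")
CACHE = Path("offline_cache/alice/documents")

def open_document(name):
    """Prefer the server copy; fall back to the client-side cache."""
    server_file, cached_file = SERVER / name, CACHE / name
    CACHE.mkdir(parents=True, exist_ok=True)
    try:
        shutil.copy2(server_file, cached_file)   # sync server -> cache
    except OSError:
        pass                                     # server unreachable: use cache
    if not cached_file.exists():
        raise FileNotFoundError(f"{name} is neither on the server nor cached")
    return cached_file.read_text()

# Usage (hypothetical): text = open_document("report.txt")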
Review of Virtualization in the Enterprise

Virtualization can enhance the way an enterprise runs its data center. Server virtualization can optimize hardware utilization. Desktop virtualization can provide a standard client for your end users. Application virtualization allows central administration of applications and fewer chances of application incompatibilities. Presentation virtualization allows central management of applications while letting low-end clients, such as thin clients and netbooks, run software beyond their hardware limitations. User-state virtualization gives the user a computing environment that follows them no matter what corporate computer they use.

Benefits and Advantages of Virtualization

Virtualization has evolved into a very important platform for IT, used by countless companies both large and small. This is due to virtualization's ability to efficiently simplify IT operations and allow IT organizations to respond faster to changing business demands. Although virtualization started out as a technology used mostly in testing and development environments, in recent years it has moved into the mainstream on production servers. While there are many advantages of this technology, the following are the top five.

Virtualization Is Cost Efficient

Virtualization allows a company or organization to save money on hardware, space, and energy. By using existing servers and/or disks to add more performance without adding additional capacity, virtualization directly translates into savings on hardware requirements. When it is possible to deploy three or more servers on one physical machine, it is no longer necessary to buy three or more separate machines, which may in fact have been used only occasionally. In addition to one-time expenses, virtualization can help save money in the long run as well, because it can drastically reduce energy consumption: fewer physical machines means less energy is needed to power (and cool) them.

Virtualization Is Green

Green IT is not just a fashion trend. Eco-friendly technologies are in high demand, and virtualization solutions are certainly among them. As already mentioned, server virtualization and storage virtualization lead to decreased energy consumption, which automatically includes them in the list of green technologies.

Virtualization Eases Administration and Migration

When there are fewer physical machines, their administration also becomes easier. The administration of virtualized and non-virtualized servers and disks is practically the same. However, there are cases when virtualization poses some administration challenges and might require some training in how to handle the virtualization application.

Virtualization Makes an Enterprise More Efficient

Increased efficiency is one more advantage of virtualization. Virtualization helps to utilize the existing infrastructure in a better way. Typically an enterprise uses a small portion of its computing power; it is not uncommon to see server load in the single digits. Keeping underutilized machines is expensive and inefficient, and virtualization helps to deal with this problem as well. When several servers are deployed onto one physical machine, capacity utilization can increase to 90 percent or more.

Improved System Reliability and Security

Virtualization of systems helps prevent system crashes due to memory corruption caused by software like device drivers. Intel VT-d (Virtualization Technology for Directed I/O) provides methods to better control system devices by defining an architecture for DMA and interrupt remapping, ensuring improved isolation of I/O resources for greater reliability, security, and availability.

Dynamic Load Balancing and Disaster Recovery

As server workloads vary, virtualization provides the ability for virtual machines that are over-utilizing the resources of a server to be moved to underutilized servers. This dynamic load balancing creates efficient utilization of server resources.
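A bare-bones version of that rebalancing decision can be sketched as follows: compare each host's utilization against a threshold and propose moving a virtual machine from any over-utilized host to the least busy one. The threshold and figures below are invented, and production schedulers weigh far more factors than this.

# Toy rebalancer: utilization figures and the 80% trigger are invented.
HIGH_WATER = 0.80

hosts = {
    "hostA": {"util": 0.93, "vms": ["web01", "web02", "db01"]},
    "hostB": {"util": 0.35, "vms": ["mail01"]},
    "hostC": {"util": 0.22, "vms": []},
}

def rebalance(hosts):
    """Propose moving one VM off each over-utilized host to the idlest host."""
    moves = []
    for name, h in hosts.items():
        if h["util"] > HIGH_WATER and h["vms"]:
            target = min(hosts, key=lambda n: hosts[n]["util"])
            moves.append((h["vms"][0], name, target))
    return moves

for vm, src, dst in rebalance(hosts):
    print(f"migrate {vm}: {src} -> {dst}")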
In addition, disaster recovery is a critical concern for IT, as system crashes can create huge economic losses. Virtualization technology enables a virtual image on a machine to be instantly re-imaged on another server if a machine failure occurs.

Limitations and/or Disadvantages of Virtualization

While one could conclude that virtualization is the perfect technology for any enterprise, it does have several limitations or disadvantages. It is very important for a network administrator to research server virtualization and his or her own network's architecture and needs before attempting to engineer a solution. Understanding the network's architecture and needs allows for a realistic approach to virtualization and for better judgment of whether it is a fitting solution in a given scenario. Some of the most notable limitations and disadvantages are having a single point of failure, hardware and performance demands, and migration.

Single Point of Failure

One of the biggest disadvantages of virtualization is that it creates a single point of failure. When the physical machine where all the virtualized solutions run fails, or the virtualization solution itself fails, everything crashes. Imagine, for example, that you are running several important servers on one physical host and its RAID controller fails, wiping out everything. What do you do? How can you prevent that?

The disaster caused by physical failure can, however, be avoided with one of several reliable virtualized environment options. The first of these options is clustering. Clustering allows several physical machines to jointly host one or more virtual servers. Clusters generally provide two distinct roles: continuous data access, even if a system or network device fails, and load balancing of a high volume of clients across several physical hosts. [14] In clustering, clients don't connect to a physical computer but instead connect to a logical virtual server running on top of one or more physical computers. Another solution is to back up the virtual machines with a continuous data protection solution, which makes it possible to restore all virtual machines quickly to another host if the physical server ever goes down. If the virtual infrastructure is well planned, physical failures won't be a frequent problem. However, this solution does require an investment in redundant hardware, which more or less eliminates some of the advantages of virtualization. [12]

Hardware and Performance Demands

While server virtualization may save money because less hardware is required, allowing a decrease in the physical number of machines in an enterprise, it does not mean that newer and faster computers are unnecessary. These solutions require powerful machines. If the physical server doesn't have enough RAM or CPU power, performance will suffer. Virtualization essentially divides the server's processing power among the virtual servers, and when the server's processing power cannot meet the application demands, everything slows down. [11] Things that shouldn't take very long could slow down to take hours, or the server may even crash. Network administrators should take a close look at CPU usage before dividing a physical server into multiple virtual machines. [11]

Migration

In current virtualization methodology, it is only possible to migrate a virtual server from one physical machine to another if both physical machines use the same manufacturer's processors. For example, if a network uses one server that runs an Intel processor and another that uses an AMD processor, it is not possible to transfer a virtual server from one physical machine to the other. [11]
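A pre-flight check for this constraint could be as simple as comparing the CPU vendor identifiers of the source and destination hosts before attempting a migration. In the sketch below, the vendor strings (GenuineIntel and AuthenticAMD are the real CPUID vendor IDs) are supplied by hand; in practice they would be read from each host.

# GenuineIntel and AuthenticAMD are the actual CPUID vendor ID strings;
# here they are hard-coded rather than read from the hosts.
def can_migrate(source_vendor, dest_vendor):
    """A live migration is only attempted between same-vendor CPUs."""
    return source_vendor == dest_vendor

pairs = [
    ("GenuineIntel", "GenuineIntel"),   # same vendor: migration possible
    ("GenuineIntel", "AuthenticAMD"),   # mixed vendors: blocked
]
for src, dst in pairs:
    print(src, "->", dst, ":", "ok" if can_migrate(src, dst) else "blocked")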
One might ask why this is important to note as a limitation. If a physical server needs to be fixed, upgraded, or just maintained, transferring its virtual servers to other machines can decrease the amount of down time required during the maintenance. If porting the virtual servers to another physical machine weren't an option, then all of the applications on those virtual machines would be unavailable during the maintenance downtime. [11]

Virtualization Market Size and Growth

Market research reports indicate that the total desktop and server virtualization market value grew by 43%, from $1.9 billion in 2008 to $2.7 billion in 2009. Researchers estimate that by 2013, approximately