Network Booting Cuts Administration Costs
and Improves Security - Workstations all in Sync
Technical Whitepaper
Double-Take Software
Published: December 2009
The greater the number of users being administered in a network, the more important it
becomes to standardise. Replacing locally installed operating systems on individual PCs with
centralised management of operating systems enables enhanced operating security along with
lower support costs. If this can be done whilst utilising all the existing computing power of the
network PCs, then you will achieve the best of both worlds.
The IT administrator’s job can be pretty varied, and you all know, only too well, that there’s no limit to
users’ imaginations when it comes to “personalising” their workstations. In spite of any number of
draconian rules concerning what can and cannot be installed on their PCs, many have unauthorised
programmes installed.
Despite firewalls, anti-virus and anti-malware software, trojans and viruses are still caught through
webmail, social media sites or those ubiquitous USB memory sticks that are so dreaded by support
managers. On an infected machine the printer that won’t print can appear to be an insurmountable
problem, even in the age of professional IT management. Searching for the culprit is a very
time-consuming exercise and is far from being a system administrator’s favourite job.
Equally unrewarding is the monotony of installing new servers or workstations. What’s more, endless
patches and updates have to be installed on all the computers. Of course, software deployment
programmes usually take care of most of this work automatically, but not always. There are always a
few recalcitrant PCs that reject the remote update. Administrators have to locate and rectify the fault
quickly or the software on different workstations will be at different revisions, and that means even
more fun for the IT support team.
Disk images: identical copies by the thousand
Disk images are the way to keep all the workstation elements on the LAN in sync. A digital image of a
tested master configuration is sent through the network to all workstation disks. This guarantees that
the same software installation really is used everywhere. The drawback is the sheer quantity of data.
Even though usually just a few tweaks are made, the contents of a hard disk have to be transferred to
each workstation every time. In networks with thousands of clients, the volume of data to be
migrated can easily reach the terabyte level, too much even for modern high-performance LANs.
Another negative aspect is that installing local workstation OSs goes completely against the
universal goal of eliminating data duplication. Thousands of identical data sets waste storage space
when just one would be enough.
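To put the data volume in perspective, here is a rough back-of-the-envelope calculation. The 20 GB image
size and the count of 1,000 clients are illustrative assumptions, not figures from a specific deployment:

    # Rough estimate of the data volume involved in re-imaging every client.
    # Image size and client count are illustrative assumptions only.
    image_size_gb = 20          # size of one full master disk image
    clients = 1000              # number of workstations on the LAN

    total_tb = image_size_gb * clients / 1000
    print(f"Total data to transfer: {total_tb:.1f} TB")            # 20.0 TB

    # Time to push that volume over a fully utilised Gigabit link (~125 MB/s):
    hours = (image_size_gb * clients * 1024) / 125 / 3600
    print(f"At 1 Gbit/s end to end: roughly {hours:.0f} hours")    # ~46 hours

Even under these modest assumptions the figures quickly become impractical for routine redeployment.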
Remote hosted desktops: thin clients and mainframe nostalgia
When implementing remote hosted desktops it’s very tempting to go ahead and concentrate
everything on the server. The operating system and applications all run on the central computer, and
the users’ workstations serve as little more than dumb terminals. This solution does
eliminate the risk of user image divergence.
But it comes with a huge downside: all the data processing now takes place on the server, and the existing
workstations’ computing power is almost unused. We see frenzied activity on the server and on the
network, while the CPUs in the users’ workstations are barely running at all and the workstations’ graphics
processors have hardly anything to do.
During peak times this system shows its weakness: in spite of beefed-up servers, we often see long
response times which reduce productivity dramatically. For older employees nearing retirement,
this brings back painful memories of long delays when they were working on tiny monochrome terminals
in the mainframe era of ancient computer history.
Scalability is also a problem: adding a hundred new workstations involves paying for the computing
power twice, once for the users’ workstations whose CPUs will be nearly idle, and once for an expensive
server and storage upgrade.
Cloud Computing: security problems can be programmed in
The much-hyped cloud computing model, SaaS (Software as a Service), works in a very similar way.
There is a local operating system, but the browser serves as a window to the application, and what the
browser runs on, Windows or Linux, matters less. In this setup, the users’
workstation CPU can do some of the work, for instance with Flash graphics, but the server still carries most
of the strain. Once again, the same scalability drawbacks discussed above will ensue.
Another big problem is data security: the user PCs are not standardised, which makes them untrustworthy
from a security point of view. Even if access is only granted through a tightly controlled authentication
process, there’s always a danger that a virus or a trojan is working away in the background, unknown to
the user, supplying screenshots to unauthorised recipients.
Network Booting: using all your workstations’ computing power
Network Booting is the half-way house between thin clients and independent, locally booted PC
workstations. Instead of using its own hard disk, a computer with the network booting agent installed
boots from a SAN or iSCSI device.
Using boot-capable images makes things much simpler: as soon as the workstation is connected to a new
“network boot”-ready server, it takes just a few mouse clicks to choose which image to boot from.
Making slight modifications to the configuration isn’t a problem, and the altered image can either be
restored to the way it was or saved as it is.
There are great advantages for users and IT managers alike: the users boot from a pre-prepared shared or
personalised image on a SAN, which means they don’t need a disk in their workstation. This isn’t just a good
de-duplication move; it is also a data security move, because there is no longer any unprotected or
unauthorised data stored on workstation disks, nor can any unauthorised programmes run from them.
The reduced hardware configuration is only a fraction of the savings potential. The configuration cost of
adding new computers is minimal because ready-to-use boot images already exist. Crashed or virus-infected
PCs are fixed in seconds by simply switching them off and back on again. The computer reboots from
the protected image and is as good as new straight away. Updates and patches have to be installed and
tested just once, on the boot image. When they next start up, all the computers are up to date.
When running applications, users draw on their own workstation’s CPU resources, which means sufficient
computing power is always available. Equally, applications can make full use of the features of modern
processors and graphics cards. Even if lots of new clients are added, there is no build-up of delays at peak
load times, because most of the data processing is being done in the workstation CPU and not at the server
level. We can keep adding users without beefing up the server straight away.
Network traffic: avoiding peak load
It is obvious that booting a workstation from a boot image on a SAN increases network traffic. But the
traffic is spread over a much longer time frame than is the case when an entire disk image is transferred.
That’s because users only need a relatively small quantity of data from the boot image each time they
start up and execute a programme. Only as much data as necessary is transferred, which avoids peak loads,
so the existing network is perfectly adequate. XP clients get by with less than 75 megabytes (MB) for booting.
A test carried out at Gakushuin University in Tokyo found that 200 XP clients starting simultaneously
required an average 40 seconds for the booting process.
In other words, booting from a SAN, even on a heavily used network, is still quicker than booting from a
local disk. In most cases, as long as 200 people aren’t starting their workstations simultaneously, the boot
from SAN is almost immediate: no more waiting around for the PC to boot up, it’s up and running in
almost the blink of an eye.
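As a rough sanity check on those figures, the sketch below works through the arithmetic. The 100 Mbit/s
link speed is an assumption taken from the configuration section later in this paper; the numbers are
illustrative only:

    # Back-of-the-envelope check on the boot traffic figures quoted above.
    boot_data_mb = 75            # data read from the boot image per XP client (figure quoted above)
    link_mbit_s = 100            # assumed speed of the client's network link

    # Raw transfer time for a single client on an otherwise idle link:
    seconds_per_client = boot_data_mb * 8 / link_mbit_s
    print(f"One client: about {seconds_per_client:.0f} s of raw transfer time")   # ~6 s

    # Aggregate volume if 200 clients start at exactly the same time:
    total_gb = boot_data_mb * 200 / 1000
    print(f"200 clients starting at once: about {total_gb:.0f} GB in total")      # 15 GB

A few seconds of transfer per client, and an aggregate load that a micro-segmented LAN can spread across
its switch segments, is a very different proposition from shipping complete disk images.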
Boot management: shared or personalised
There will always be some users that require a special, flexible configuration that cannot be booted from
a shared image. A dedicated image on the server can be provided for them. That means a diskless
workstation is possible in almost all cases, resulting in better standardisation.
This is important because a certain degree of flexibility is necessary even when booting from a shared
image. Many users need special connections or authorisations, and of course email signatures have to be
personalised. That’s why individual user data on the server must be saved separately
from the standardised image.
Which user boots from which image can be defined either by the MAC address or by the user account.
If a user changes workplace or needs to replace a workstation that has broken down, he or she will find
the same familiar configuration ready and waiting on any new or acquired computer that has been
made capable of booting from the SAN. This simplicity means that in remote or branch offices a stock of
workstations, of any manufacture, can be used to replace any user’s PC. This can considerably reduce
the number of on-site IT support personnel needed, as anyone can then swap a
workstation for a perfectly configured working machine without supervision.
Failover: risks also centralised
An important point about a shared booting infrastructure is that the outage risk is spread differently.
Obviously, it’s not possible on diskless workstation networks for a PC’s non-existent hard disk to crash. As
we discussed above, a user’s workstation that fails can easily be replaced: all the user has to do is move to
the next available computer and boot from his or her personal image. However, if the central image or the
allocating server goes down, hundreds or even thousands of clients are disabled. That’s why it’s vital that
the Network Booting software provides the capability to go to an alternative boot source.
To sum up, a programme for booting from a network must meet these requirements:
Booting from an individual or common image in the SAN
Use of local workstation CPU resources during operation
Saving of individual user data outside the standardised shared image
Image allocation to hardware or user accounts
Alternative boot source in case of failure (see the sketch below)
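As an illustration of that last requirement only, here is a minimal sketch of how a boot agent might probe
an ordered list of boot targets and fall back to a replica when the primary iSCSI target does not answer.
The host names and timeout are hypothetical; the actual fallback mechanism in a network booting product
is not described here in this level of detail:

    import socket

    # Hypothetical ordered list of boot targets: primary iSCSI target first, replica second.
    # Port 3260 is the standard iSCSI port.
    BOOT_TARGETS = [("san-primary.example.local", 3260),
                    ("san-replica.example.local", 3260)]

    def pick_boot_target(targets, timeout=3.0):
        """Return the first boot target that accepts a TCP connection."""
        for host, port in targets:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return host, port      # target reachable: boot from it
            except OSError:
                continue                   # unreachable: fall back to the next target
        raise RuntimeError("No boot target reachable")

    host, port = pick_boot_target(BOOT_TARGETS)
    print(f"Booting from iSCSI target {host}:{port}")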
Flex-VDI (Virtual Desktop Infrastructure) from Double-Take Software shows how booting from a network
works in practice:
Initially the programme generates master boot images and stores them on iSCSI SANs. Booting both
from Windows images (Server, XP, Vista, W7) and from Linux images is supported. Distinct images can
be stored for the various types of work done by different users, taking into account that a
workplace in, say, marketing is very different from one in customer management. Personalised images
are also supported.
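To make the idea of group-specific images concrete, the fragment below sketches one way such an image
catalogue could be modelled. The image names, iSCSI target identifiers and group assignments are purely
illustrative assumptions and do not reflect Flex-VDI’s internal format:

    # Purely illustrative catalogue of group-specific boot images; names, iSCSI
    # target identifiers and group assignments are assumptions for this example.
    BOOT_IMAGES = {
        "win7-marketing":  {"os": "Windows 7", "target": "iqn.2009-12.local.san:marketing"},
        "win7-custmgmt":   {"os": "Windows 7", "target": "iqn.2009-12.local.san:custmgmt"},
        "linux-developer": {"os": "Linux",     "target": "iqn.2009-12.local.san:developer"},
    }

    GROUP_TO_IMAGE = {
        "marketing":           "win7-marketing",
        "customer management": "win7-custmgmt",
        "development":         "linux-developer",
    }

    def image_for_group(group):
        """Look up the shared boot image assigned to a user group."""
        return BOOT_IMAGES[GROUP_TO_IMAGE[group]]

    print(image_for_group("marketing"))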
Configuration: existing hardware usually enough
All that’s necessary is for the user workstation to be equipped with a PXE 2.x-compatible 100 Mbit/s
network card. Any Windows server can be used as a Flex server; Double-Take Flex Storage Server allows
the server to emulate an iSCSI SAN. Only the allocation and image information are stored there,
so no particularly high-performance or high-capacity machine is required.
The bandwidth requirements in the network are also modest, because all processing takes place locally
on the user workstation and selective data transfer avoids load peaks. This means that a 100 Mbit/s
infrastructure is adequate for smaller networks.
Depending on the number of users, large LANs should be micro-segmented with intelligent switches
connected by a Gigabit backbone. This is already standard in modern LANs.
Naturally, as we discussed above, secure operation also requires an efficient high-availability failover
system, using, for example, Double-Take Availability with its real-time byte-level asynchronous replication
and full server failover. Extra data security and continuous user connection are achieved by adding an
alternative boot source, so that after failover the users’ workstations can boot from the latest replicated
images on the “new” server/iSCSI SAN; no data will have been lost and downtime will have been
minimal.
Patches and updates: deployment on booting
Once the workstations are booting from the SAN, updates of all types are no longer a problem. Any
changes to be made are first tested for conflicts on a test configuration. If the test doesn’t identify any
problems, only a single file, the master boot image, is replaced, and all the workstations are
updated the next time they are booted.
It’s clear that in the case of identified security breaches this is a huge advantage, because the delay between
the provision of a patch and its deployment throughout the network becomes minimal.
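As an illustration only, the sketch below shows one way a tested image could be promoted to become the
new master. The file paths and the idea of keeping the previous master as a rollback copy are assumptions
made for the example, not a description of Flex’s own update mechanism:

    import shutil
    from pathlib import Path

    # Hypothetical paths to the image store; the real layout will differ.
    TESTED_IMAGE = Path(r"D:\images\staging\win7-master-new.img")
    MASTER_IMAGE = Path(r"D:\images\live\win7-master.img")
    ROLLBACK     = Path(r"D:\images\live\win7-master.previous.img")

    def promote_tested_image():
        """Replace the live master boot image with the tested one,
        keeping the previous master as a rollback copy."""
        if MASTER_IMAGE.exists():
            shutil.copy2(MASTER_IMAGE, ROLLBACK)   # keep a rollback copy
        shutil.copy2(TESTED_IMAGE, MASTER_IMAGE)   # picked up by all clients at their next boot

    promote_tested_image()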
Users don’t need to worry about their personal configurations, because third-party software such as RES
PowerFuse stores user profiles separately from the master image, so no personal data or
configurations are overwritten during an OS or application update.
This again underlines the flexibility of the system. After a simple alteration of a setting on the Flex
server, the workstation simply boots from another image, and a marketing workplace transforms into a
customer management workplace in seconds.
The option of linking the user account to the boot image is also very useful: if an employee moves to a
different office, they don’t have to take their computer with them.
It also makes sense to integrate portable computers into this system, because notebooks are
especially at risk of virus infection. They can be configured with Flex so that the notebook automatically
boots from the master image as soon as it is connected to the company network; as a result,
mobile devices are also secure. Flexibility and reliability make this an attractive solution for small
networks as well.
In the educational world of schools, colleges and universities, even the most “creative” student cannot
permanently damage the computer in the classroom if it boots from a secure image the next time it’s
switched on. In these tough, multi-faceted environments it’s a very good idea to be able to use several boot
images per computer, so that the same hardware can be booted from different specialised Windows
images or Linux images for different user types.
Startup: Flex-VDI takes care of allocation and costs from the start
After starting up, the workstation first sends a bootstrap request to the Flex server. The network boot
software (console) installed on the server recognises, from the hardware-specific MAC address, whether the
enquiring computer is registered for booting from the network. If the workstation is recognised, it is
directed to its dedicated boot image in the SAN, and the user workstation then boots from this image.
During operation, it communicates only with the iSCSI storage in the SAN.
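The decision logic described above can be pictured roughly as follows. The registration table, MAC
addresses and return values are illustrative assumptions, not Flex-VDI’s actual interface:

    # Illustrative sketch of the boot-time allocation decision described above.
    # The registration table and return values are assumptions for the example.
    REGISTERED_CLIENTS = {
        "00:1a:4b:16:01:59": "iqn.2009-12.local.san:marketing",
        "00:1a:4b:16:02:7c": "iqn.2009-12.local.san:custmgmt",
    }

    def handle_bootstrap(mac_address):
        """Decide how an enquiring workstation should boot."""
        target = REGISTERED_CLIENTS.get(mac_address.lower())
        if target is None:
            return "not registered for network boot"        # e.g. fall back to a local boot
        return f"boot from iSCSI target {target}"           # redirect to its image in the SAN

    print(handle_bootstrap("00:1A:4B:16:01:59"))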
As we indicated above, if this modern storage infrastructure is not in place, Flex can also emulate a SAN
on a normal server. This means that it’s not necessary to go out and buy expensive new equipment for
booting from a stored image; good news indeed in these difficult times.
In fact, taking this economy into account along with all the other reasons given in this whitepaper, Flex-VDI
is a very economical solution compared with its competitors. In almost all cases existing hardware can be
used, at both server and workstation levels.
Flex-VDI is simplicity itself: using a virtual OS and booting directly from SANs, it doesn’t require a
hypervisor and a dedicated server as VDI (Virtual Desktop Infrastructure) solutions do.
Flex-VDI also removes the requirement for beefed-up servers to handle the additional workload transferred
from the workstations, as in the case of remotely hosted thin clients.
How it works
SUMMARY - Network Booting with Double-Take Flex
First, the administrator generates group-specific boot images which he then stores on the SAN (Storage
Area Network). Every client or account is linked to a boot image on the Flex server and redirected to
that image at startup. After that, all communication is through the SAN and all processing is handled
locally.
Written by: Sven Wolf, Technical Consultant, Double-Take Software
Manage your subscription to eNews. Visit: www.doubletake.com
Get the standard today: www.doubletake.com or call +44 (0) 333 1234 200
© Double-Take Software, Inc. All rights reserved. Double-Take, GeoCluster, Double-Take for Virtual Systems, TimeData, netBoot/i, winBoot/i, Double-Take Cargo, sanFly, NSI, Balance, Double-Take ShadowCaster, and
associated logos are registered trademarks or trademarks of Double-Take Software, Inc. and/or its subsidiaries in the United States and/or other countries. Microsoft, Windows, and the Windows logo are trademarks or
registered trademarks of Microsoft Corporation in the United States and/or other countries. All other trademarks are the property of their respective companies.
Version 1.1 - 160210