Windows Deployment Server (WDS)

Introduction

Windows Deployment Server – with all the talk of Brexit, its unfortunate conclusion, and the fear of new beginnings, I thought I would take a break from reading the news and listening to the political ramblings of the masses, have a throwback Thursday, and rehash some of my old projects. The idea is to re-write them so that they no longer look like the originals, and to anonymise the project paperwork so that the original client cannot be identified.

However, I will note that I was working for a leading university in Scotland in 2009 when I wrote about WDS, and my first experience of WDS was with Server 2003 R2, so most of what I wrote was with that in mind. Back in those days the university wanted to manage the way in which it rolled out its server and desktop infrastructure, without imaging a technician's laptop and then using that to effectively copy one computer to the thousands of others it had. Likewise, server builds would take two to three days to hand-build, and rushed ones would take two days of 16-hour shifts. This meant that the server team would often pull all-nighters (myself included) to get the job done. At the time I was an employee, which meant that due to the way higher education pay scales work we did it without overtime; so underpaid, tired, and overworked server admins were building servers, and mistakes happened. Not as frequently as you might think, but they did happen. Couple this with a failing WSUS infrastructure and the migration of three Active Directories, and you can see it was a little bit of a mess. The good news is that whilst buy-in to this project was initially low, due to the amount of work required to get it off the ground, the desktop team were the first to adopt it, followed by the server team.

I'll let you know how it transformed the IT team in the conclusion, but for now I want to show you how to get it all set up in your company. Bear in mind that even if you are in the cloud, all of this can still be used to spin up a new server, as a cloud infrastructure is basically someone else's computer.

I have also added a REVIEW: remark (in red) to each of the sections, to highlight and pick apart my original design and review of the service, as well as to point out how seven years of technology innovation have shaped it.

Purpose

The purpose of this post is to give you a technical overview of WDS and how you can set it up in your organization, as well as give you an insight into the savings you will make and how drastically you can reduce the amount of time it takes to deploy either server or desktop infrastructure. Best of all, WDS is free with the server licence, so small to medium companies won't need to splash out on SCCM, which is a good thing. However, it does take some setting up, as well as testing, before you can go mainstream with it in your organization.

WDS and RIS History

In Windows 2000 Advanced Server we were first introduced to Remote Installation Services, or RIS as it is more commonly known.

This gave system administrators the technology to access scripted builds of Microsoft Windows 2000 Professional and Microsoft Windows 2000 Server. These could be deployed either over the network or from CD, saving time and reducing the number of failed or divergent installs on the network, as all the installs were now standardized by the scripts.

Windows 2000 Advanced Server could only deploy Windows 2000 based PCs until SP4, when RIS was upgraded to include Server 2003 and had partial success with XP.

With Server 2003, RIS got a new lease of life and could successfully deploy all Windows NT based server and client computers. Unlike 2000, the RIS service was available in all versions of Microsoft Server 2003, not just the expensive Enterprise edition (which took over from Advanced Edition). This was mainly a response to the increasing demand for solutions such as Symantec Ghost and PowerQuest Drive Image, to name two of the big ones at the time; the little-known Altiris was only a startup on Microsoft's radar.

During the latter part of 2006 and early 2007 (depending on whether you were an MSDN Gold subscriber), Windows Deployment Server was introduced via SP2 for Windows Server 2003.

This was, for all intents and purposes, a tech demo and a first look at what was coming in the upcoming Windows Server 2008, so that system administrators could get used to the 'new' way of doing things, as well as convert their existing RIS deployments into the new WIM image file format that Windows Vista and Server 2008 use, both in the WDS service and for deployment from their installation media.

 

REVIEW: – Wow, OK, so that was what I wrote 7 years ago. Today WDS is the de facto standard for how Windows is delivered: when you insert your DVD or USB stick into your computer to re-install Windows, the first image that loads is the boot.wim and the second is the install.wim. Microsoft has actually got something right in this respect; it's a really elegant solution to deploying a desktop or server OS, as it's lightweight and streamlined. It is also pretty easy to mess with to make it do what you want when you script the installs, which makes the job of a sysadmin a hell of a lot easier.
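If you fancy poking at this yourself, here is a quick sketch, assuming your install media is mounted as D:; dism will happily list what lives inside each WIM:

    rem List the images inside the WIMs straight off the install media
    rem (D: is assumed to be the mounted DVD or USB stick)
    dism /Get-WimInfo /WimFile:D:\sources\boot.wim
    dism /Get-WimInfo /WimFile:D:\sources\install.wim

On the 2008-era kit the same job was done with imagex /info from the Windows AIK, but the idea is identical.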

Advantages

There are many advantages and disadvantages to Windows Deployment Server compared with RIS (and other technologies).

One of the main advantages of Windows Deployment Server is the fact that Microsoft has now moved to actually using images to deploy its operating systems, and this is incorporated into Windows Deployment Server. This makes the whole process very quick indeed, although the image technology is described as a 'hybrid' imaging solution, as it does not follow imaging in the traditional sense: you can still separately script applications into the image.

Another advantage of WDS is still based around the use of the .WIM but is focused on the network, and the lowered bandwidth usage compared with the file-based RIS image. As part of the WDS service Microsoft has provided a true multicasting environment, with clients able to join a stream part-way through, further lowering bandwidth costs as well as administration costs, because you no longer have to synchronise your multicasts as you once did.
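To give you a flavour, here is a hedged sketch of spinning up a multicast stream on a 2008-era WDS box with wdsutil; the image and group names are made-up examples:

    rem Create an auto-cast transmission -- clients can join mid-stream
    WDSUTIL /New-MulticastTransmission /Image:"Gold Desktop" /ImageType:Install /ImageGroup:"Desktops" /FriendlyName:"Desktop rollout" /TransmissionType:AutoCast

AutoCast is the 'join whenever' flavour; ScheduledCast waits for a client count or a start time, which is the nearest thing to the old synchronised sessions.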

The main advantage of choosing WDS over RIS is that unattended installations are now in .XML, which is far more flexible than the old .TXT or .SIF formats. It also gives a 'one script for all' format, so you are not continually referencing other scripts as you were with .SIF and .TXT files. When using Windows 2008 this has the added advantage of being able to add the client PC unattended scripts into domain policy, to check whether machines are compliant with the spec and, if not, re-image them automatically, thus minimizing downtime; the Windows 2008 scripting interfaces lean heavily on .XML for pretty much everything.
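As a rough sketch of how the .XML scripts get wired in (the paths and image names here are illustrative, not gospel):

    rem Screen-level unattend for the WDS client, set per architecture
    WDSUTIL /Set-Server /WdsUnattend /Policy:Enabled /File:"WDSClientUnattend\Unattend.xml" /Architecture:x64
    rem Image-level unattend attached to one install image
    WDSUTIL /Set-Image /Image:"Gold Desktop" /ImageType:Install /ImageGroup:"Desktops" /UnattendFile:"D:\Scripts\desktop-unattend.xml"

The screen-level file drives the WDS client screens (language, credentials, disk), while the image-level file drives the OS setup itself; keeping the two separate is what makes the 'one script for all' approach manageable.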

The most obvious advantage of using Windows Deployment Server over other deployment methods is, and always will be, the cost: WDS is free with every version of Windows Server from 2003 to 2016, so it will save both money and the hassle of licensing the product itself (although the clients and servers it deploys will still need to be licensed). This frees up budget for other IT deployments and benefits.

Disadvantages

As with all new technologies there are certain disadvantages that must be weighed against the advantages of having this technology in place.

The main disadvantage of Windows Deployment Server is the steep learning curve: the amount of intensive learning required to actually get the server up and running, create the manual builds, and plan the deployments is very high, as will be documented later in this document.

The old RIS imaging system is still compatible, although you have to put the WDS server into a limited legacy mode, or convert the images to the new .WIM format and re-write the unattended scripts.
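For what it's worth, wdsutil can do the conversion for you; a hedged example, where the riprep.sif path and image name are illustrative:

    rem Convert an old RIPrep image to the new .WIM format
    WDSUTIL /Convert-RiPrepImage /FilePath:"D:\RemoteInstall\Setup\English\Images\WinXP\i386\Templates\riprep.sif" /DestinationImage /FilePath:"D:\Images\winxp.wim" /Name:"Converted XP Desktop"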

PXE boot images and the use of 'older' network cards can also be a problem, although this document should address that disadvantage later. The new Windows Deployment Server relies on PXE boot, though this is a disadvantage shared by most of the deployment methods on the market today.

There are other disadvantages to using Windows Deployment Server, although I wish to skip those for now, as the main intention of this document is to minimize the issues with Windows Deployment Server.

REVIEW: – OK, so the disadvantages have lessened in the seven years since I wrote this section, as booting a PC from the network is old hat, and anyone that knows how to set up boot devices in BIOS or EFI will know how to do this. Back when the document was written we still had the odd BNC card and 10Mb cards in some machines, with optional boot ROM chips you had to buy to be able to boot from them. The biggest real change to the disadvantages is that we now have fast central storage as well as fast local storage, and the network between the two has grown organically, causing bottlenecks and other issues; but that is really the subject of another blog post (it's a big hairy mess in most organizations).

Planning and Design

Ideally this should have taken place before any testing or deployment, but as this system was a test of the technology, it happened second. At the end of the document I will propose a system and fully detail the test environment that was created. To successfully test and deploy the Windows Deployment Server you will have to plan out the following:

  • The infrastructure of the system
  • The security of the system
  • The overall design of the system
  • The network and how it will interact with this box
  • What will be deployed from this box
  • A standard build template
  • Training and documentation
  • Future upgrades and required maintenance

We will start with the infrastructure of the system and hopefully cover the rest before the end of this chapter.

REVIEW: – The last paragraph of this section really cracks me up: it was a chapter of its own, totalling nine pieces of A4 paper. These days my project documents are not based on chapters; they are nice and streamlined, broken down into project stages. I condensed the original document from 94 pages to a 2-page summary just so that the project team and management would read the full document. How times have changed.

Infrastructure Planning

As with all new builds, we need to look at the actual infrastructure that would be required to provide the service, as the demands on the server and the network could potentially be the project's downfall.

Windows Deployment Server has some basic requirements and a few specialist requirements, but most of what is needed should already be in place, as we are deploying to a mature infrastructure.

For the infrastructure of Windows Deployment Server you will require the following:

  • A Windows domain controller working on your network
  • A DNS server; this should ideally be a Windows DNS server, but any available DNS server can be made to work
  • A DHCP server; again, ideally a Windows-based one, but it can be made to operate with a non-Microsoft one
  • A Windows 2008 x64 Standard server; it can be a Windows 2003 x64 server, but then you won't get to use the new multicasting service in Windows Deployment Server
  • A 15GB (for 2003) or 20GB (for 2008) root partition; this is basically for the OS
  • A 5GB partition for the page file drive, whether you are deploying on a 2003 or a 2008 server
  • At least 20GB of storage space for the D:\ (data) drive, to hold all of the images that you require to be deployed (a diskpart sketch of this layout follows the list)
  • The server can be deployed as either a virtual or a physical box
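As promised, a diskpart sketch of that layout for a 2008 build; treat it as an illustration only (sizes are in MB, and on a real build Windows setup would normally create the system partition for you):

    rem wds-layout.txt -- run with: diskpart /s wds-layout.txt
    select disk 0
    create partition primary size=20480
    format fs=ntfs label="System" quick
    active
    create partition primary size=5120
    format fs=ntfs label="Pagefile" quick
    assign letter=P
    create partition primary
    format fs=ntfs label="Data" quick
    assign letter=D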

As you will see, you will require a Server 2008 build to be able to multicast, as this feature was added as part of the Longhorn code and never back-ported to the Whistler code. It makes things a little trickier if you require the use of multicast.

You will also see that I have planned for "at least 20GB" of storage for the images; this is dependent on what is finalized in the plan for the deployment of images.

REVIEW: – I was optimistic at the proposition of 20GB of storage space; although I had envisioned a gold-image-plus-one scenario, with how things actually happened I think it quickly got pushed to 60GB. My own WDS server here is currently mounting 300GB of images, and my WSUS server is holding 750GB of Windows updates (what can I say, I'm a collector).

Security of the System

This is always the crux of any platform build, especially when deploying new features to an already mature infrastructure. We should be using best practice for deployments, so security is paramount; for a secure but user-friendly environment you will need to take the following into consideration:

  • You will need to ascertain who will be deploying from this server; I have taken the following to be the standard deployment users: Desktops, Windows Server, and Linux/Unix (Sun Solaris x86 and x64)
  • The level of access each group will require
  • The desktop deployment users only require access to the desktop production group
  • The desktop capture users (who do not need to deploy but can capture) require read/write access to the desktop group
  • The Windows Server deployment users require read-only access to the images
  • The Windows Server capture users require read/write access to the image location
  • The Linux/Solaris deployment users require read-only access to the image group
  • The Linux/Solaris capture users require read/write access

As you can see, these are recommendations that will require changes to the Group Policy and Active Directory structure to make this more manageable, which is what we are ultimately trying to achieve with the Windows Deployment Server and the rest of the infrastructure. To properly achieve an efficient structure for Windows Deployment Server you will have to create six new security groups within Active Directory; they are:

  • Windows Desktop Deploy
  • Windows Desktop Capture
  • Windows Server Deploy
  • Windows Server Capture
  • Linux/Solaris Deploy
  • Linux/Solaris Capture

To simplify this further we could shorten the names; for instance, Windows Desktop Deploy would become "WDDeploy" and the capture group would be "WDCapture". These would be added to Active Directory, and the appropriate security granted either from within Windows Deployment Server or via Group Policy.
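A sketch of knocking those groups out with dsadd from a batch file; the OU, the domain, and the four non-desktop short names are made-up examples following the same pattern:

    rem Create the six deployment/capture groups as global security groups
    for %%G in (WDDeploy WDCapture WSDeploy WSCapture LXDeploy LXCapture) do (
      dsadd group "CN=%%G,OU=Deployment,DC=corp,DC=example" -secgrp yes -scope g -samid %%G
    )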

This ensures that only the correct images can be seen by the users that are required to perform builds, and that all other users do not have access to the Windows Deployment Server.

 

REVIEW: – I like a group design in Active Directory, with users added to groups as well as organizational units; this is because Active Directory can organically grow into a monster, and user sprawl can affect how efficiently your sysadmins can protect and remove its users. I had even envisioned Linux and Solaris deployments from this box as well (that 20GB drive didn't stand a chance), which is still feasible even with Server 2016 but is way out of the scope of this blog post. My advice: keep it simple, and deploy Linux and Solaris from their own native deployment methods.

Overall System Design

In this section we will discuss the infrastructure design rather than the software design. Within the existing network there are already two deployment methods in use, these being HP SIM (utilizing Altiris) and Symantec Norton Ghost. The purpose of the Windows Deployment Server is to replace these two systems and amalgamate them into one, as well as provide deployment services for Linux and x86-based Solaris builds.

With this in mind, we are required to build in certain safeguards to ensure a smooth transition and resiliency of service. To enable us to do the above we will need to look at the following:

  • How the server(s) will be setup on the network

For this we would need to look at load balancing and clustering, as well as looking into answer servers (for passing requests over onto other VLANs)

  • You would also need to look at the number of servers required and their locations, e.g. one on the VMware infrastructure, two in the production environment and one in the test environment
  • How the servers will be set up: will each of the servers hold a copy of the images, or will they point to a common share? E.g. the VMware deployment server and the test deployment server point to the share on the physical cluster of deployment servers (less secure but space-efficient)
  • The type of hardware to be used and matching specifications for the servers
  • The software included in the build of the Windows Deployment Server
  • The antivirus software used
  • The central monitoring and logging repository for both the health and the logs of installed machines, for historical reasons as well as for fine-tuning the system in the future
  • The backup software and the routine it will run, e.g. six days incremental and every seventh day full, for the disaster recovery scenario
  • Where the boxes will be physically located; preferably split across two physical locations, with the physical servers and the VMs on a high-availability farm, for resiliency

Taking into account that this server will not form part of the five main roles of the infrastructure, it does give the advantage that if something were to go wrong with one of the five core boxes, a 'flat image' could be installed and restored from backup before being brought back into the domain's main roles. Also, planning from a capacity point of view: if the desktop and server teams are to be deploying from the same infrastructure, then you need to look at the server load and the network load of the deployment infrastructure, so that it becomes available without causing problems either on the deployment platform or on the network in which it sits, as this could cause issues with other resources that are required and of higher priority.

 

REVIEW: – I had thought about clustering and even trialled it, but in the early days it wasn't something you could do, and it took a monumental amount of effort to get the WDS service to fail over onto another node in the cluster. In today's Server 2012 you can indeed cluster and use common shared storage.

Network planning

This section has been touched on above, although I believe it requires its own section due to some of the considerations that need to be addressed. The networking side of any build is particularly important, but when a project involves the movement of large amounts of data across the network and the use of a multicast stream (if utilized), it requires its own 'special' section. Things to consider when this server goes live are as follows:

  • Which VLANs you wish to have this server operating on: would it be just the one, or multiple? The best-practice solution would be to give it its own dedicated VLAN, to reduce the traffic and congestion on the other VLANs and to segregate the Windows Deployment traffic from actual data
  • The speed of the network, and contention on the network during extensive builds
  • Whether we will be making use of multicast packets in this project, and if so, what additional issues can be caused by this, taking into consideration other multicast operations/projects on the network
  • The type of DHCP server that will be used: will it be a Windows-based DHCP server, or one already in place?
  • The scope for the DHCP server: are there provisions in place for a dedicated scope?
  • The overall range of dedicated IP addresses to be made available, if dedicated IPs are used
  • The amount of time this project will take to set up the networking, and whether documentation will be provided as an amendment to this project to record the resulting network changes, for historical and problem-logging purposes

As you can see, from the server side of things there are not many issues with the network, although the considerations above have to be factored in to ensure a smooth transition from the old way of doing things to the new.
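To make that concrete, here is a hedged sketch of the usual DHCP/PXE plumbing; the server names, scope and domain are examples, and many shops prefer IP helpers on the routers to options 66/67 for crossing VLANs:

    rem WDS and Microsoft DHCP on the same box: stop WDS grabbing port 67
    rem and tag DHCP option 60 so clients know PXE lives here too
    WDSUTIL /Set-Server /UseDHCPPorts:No /DHCPOption60:Yes

    rem DHCP on a separate box: point the scope at the WDS server and boot file
    netsh dhcp server \\dhcp01 scope 10.0.50.0 set optionvalue 066 STRING "wds01.corp.example"
    netsh dhcp server \\dhcp01 scope 10.0.50.0 set optionvalue 067 STRING "boot\x64\wdsnbp.com"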

 

REVIEW: – Networks have gotten better, that's for sure, as network administrators have taken stock and cleaned them up; but the fact remains that I like isolation of build environments. Nothing is worse than deploying a server or desktop to a machine that didn't ask for it, or picking the wrong one and destroying a production box. That being said, I have changed my mind on the use of Windows DHCP; Linux and others do a far superior job, but if you're in a pinch and don't have the right skills to deploy Linux DHCP and DNS, then Windows does do a fine job, and makes it easier to deploy your PXE build server.

What will be deployed from the box

OK, so we have the infrastructure in place and the network set up to point to the server when it's built; now we need to plan how and what we are going to deploy from the Windows Deployment Server. To do this we have to take the following into consideration before deployment. This section is further broken down into its respective sub-sections.

Setup of the Windows Deployment Server

The setup of the server focuses on how the Windows deployment will function and which mode is best to use for the types of installation we will be performing. There are three modes, which are explained below.

  • Legacy mode: this mode utilises the Remote Installation Services installations and is generally used when there is already an installation server in place from either Windows 2000 or Windows 2003 RIS. Generally, if this is the case, then Windows Deployment Server would install in this mode. You can also select this mode if you do not yet have plans for the deployment of .WIM images in the near future.
  • Mixed mode: this mode utilises both the Remote Installation Services and the Windows Deployment Server features, so you get the best of both worlds. This mode is generally good if you have old RIS images that you do not wish to convert and are planning a migration to the new system and Vista/2008 server images, as it gives you both the old way of deploying your images and the new way. You will still not get the benefit of multicasting on the RIS side, but you will get it as part of the Windows Deployment Server.
  • Native mode: as the name suggests, this mode installs Windows Deployment Server in its native mode. This means you will have to use the Business Desktop Deployment toolkit to convert your old Remote Installation images to the new .WIM format, if you have any. Bear in mind that if you do this you are less flexible when deploying Whistler-based operating systems; this is because Microsoft has now put the .WIM format onto the installation media of Longhorn-based operating systems.

My recommendation is to install the server in mixed mode. This means that we get the best of both worlds and the ability to install the LinuxPXE boot, so that we can install Linux and Windows from the same box, saving further resources and expenditure. We will also have the ability to install .WIM files for the future rollout of Windows Vista and Server 2008.
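For the curious, a sketch of standing the service up from the command line once the role is installed; paths and image names are examples:

    rem Initialise WDS with its RemoteInstall folder on the data drive
    WDSUTIL /Initialize-Server /RemInst:"D:\RemoteInstall"
    rem Only answer machines pre-staged in Active Directory
    WDSUTIL /Set-Server /AnswerClients:Known
    rem Load a boot image and a first install image from the Vista/2008 media
    WDSUTIL /Add-Image /ImageFile:"D:\Media\sources\boot.wim" /ImageType:Boot
    WDSUTIL /Add-Image /ImageFile:"D:\Media\sources\install.wim" /ImageType:Install /ImageGroup:"Servers"

Answering known clients only is the cautious default while testing; switch to all clients once you trust the VLAN the service sits on.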

 

REVIEW: – Legacy mode is still a viable option if you want to install Windows XP or Linux builds, but I wouldn't recommend it to anyone now; it's old and antiquated, and if you are still running legacy OSs then please contact me and we can see about removing the legacy requirements that your company has and migrating your data to a new system. If you still want to deploy Linux using a Windows product, then by all means use LinuxPXE and kick off your builds from it. If you already have a virtualised environment (either VMware or Hyper-V), then please consider using a Linux server VM to handle the Linux side of things; it's far easier, and you can pass off those requests from the DHCP and WDS server with a little more work, for a far more elegant solution.

Standardizing the builds

Typically, if you are installing a deployment method of any type, it means that you wish to standardize the builds across the infrastructure. You may also wish to reduce the amount of time a build takes, as well as keep the installation media safe. If you are going to build a standard image, then there are considerations to be made and a plan to be agreed with all parties as to how the images will be set up. To do this you will need to take the following into consideration.

  • The operating systems you will be installing
  • The physical location of the media (for building the images)
  • Patch management: you will need the latest patches from the Windows catalogue, or a WSUS server installed and working on your network
  • The standard applications to be installed: you will need to agree the standard set of applications that will be installed on every computer, such as the antivirus and monitoring tools in use on your network
  • You will need the software disks, and an idea of how the applications are installed if you are to write scripts to install them automatically
  • Patches for the software: how these will be installed needs to be planned
  • Once you have the applications list and the OS types, you then need to agree standardised partitions, such as the active partition sizes, the page file drives and a standard data drive. These can obviously be added to, with extra drives and the like, as a project dictates, but a standard base system is to be provided and a standard partition lettering system employed
  • Scripts or no scripts: this is one of the easiest decisions to make when deploying an installation server. You will require scripts; the questions are what will be deployed, and how much interaction will be required from the installation engineer at build time
  • You will also need a standard build template document that details the base system and any extras required by the project being deployed, with agreed service level agreements. This is for tracking and capacity planning more than the deployment of the servers, but it helps to have such documentation in case something goes wrong with a server, either during the build or later in service
  • You will also require flat images on the server: general images with no scripts (or very few), for when the agreed base system changes in the future, when a specialist server/desktop is required, or when you need to update the base install package for your infrastructure (a capture sketch follows this list)
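As promised above, a hedged sketch of capturing a gold build with ImageX from the Windows AIK and importing it into WDS; run the capture from WinPE after sysprep, and note that the drive letters and names are examples:

    rem Capture the sysprepped gold build into a .WIM
    imagex /capture C: D:\Captures\gold-desktop.wim "Gold Desktop" /compress fast
    rem Import the capture into WDS as an install image
    WDSUTIL /Add-Image /ImageFile:"D:\Captures\gold-desktop.wim" /ImageType:Install /ImageGroup:"Desktops"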

As you can see, the most flexible mode is mixed mode, as it gives you the delivery systems and tweaks available as part of the Remote Installation Services, the maturity of that system, and the new version of the system for future upgrades and the ability to move with the times as the infrastructure grows. As part of the design for this network and the infrastructure that we want in place, the best installation is mixed mode, as this also allows us to use LinuxPXE to deploy the Linux and Solaris kickstart/bootstrap scripts from the Windows Deployment Server, giving us a truly one-for-all system.

REVIEW: – OK, so 7 years ago I had just been made redundant from HP, and the one-box-does-everything mentality was still strong with me; it still is for most things, and HP are good at what they do. Even when I was just a little whippersnapper I was big on making Windows Server the focal point of the infrastructure. In today's world I still think highly of Microsoft, but I know that better options are out in the wild; personally, today my edge servers are Linux or Unix based (as the mail relays in front of the Exchange servers were back then, but that's a whole other story). I really did push, and waste a lot of time, to make WDS work with HP SIM back in the day, and to be fair I did get a long way; in today's market Server 2012 can still be configured in this way, and it's a lot simpler than it was.

Monitoring and Backup

The monitoring and backup of the servers is very important across all of the infrastructure servers, as is looking after the health of the drives and logs. To do this we would need to consider the following:

  • What the monitoring software will be
  • At what levels the monitoring software should send an alert
  • What is required from the logs, e.g. the number of servers/desktops deployed and by whom, and the number of attempted unauthorised logons to the service
  • The safe amount of free disk space, and at what point alerts should be sent
  • The backup routine and the number of backups that are kept
  • A disaster recovery plan for the service: does it require one, or is this service not going to be seen as a primary system in the event of failure?

REVIEW: – This really comes down to the processes and procedures that should already be in place; when installing something as monumental as this, those processes and procedures need to be looked at again and amendments made, especially with the network changes that are going to happen.

Other considerations

These are the other considerations that you should include in all service design documents. They should include, but are not limited to:

  • Training material (e.g. this document)
  • The physical location of the servers
  • Physical access restrictions policy to the servers
  • Access policy to the data either stored electronically or printed documents
  • Data retention policy, to conform to ISO 9001 requirements
  • Revision updates and document control procedures.

REVIEW: – The above should already have separate procedures: if your server rack is in a storeroom, who has access to the keys to the rack? How is the power isolated? And so on. Training on all new systems, and systems documentation, is a real must, and it was overlooked by sysadmins of my era (I'm 34, so just the tail end of the old sysadmin generation, before customers became important and sysadmins were gods).

 

Conclusion

OK, I'm going to leave this here, as it's a natural stopping point in what is going to be a monumental blog series on this subject. Plus, my head hurts from going over the old documentation and scrubbing it of identifiable information, as well as some really weird formatting that I liked 7 years ago; the first 16 pages of a 94-page Word document have taken me over 3 hours to correct and scrub, as well as update with new information and technological advancements.

I know I promised to let you in on how this project played out, and here it is. Due to the size of this project, and it never having been done before within the institution, the project was set up clandestinely; the only people that knew this project was underway were myself and the infrastructure manager (who was, and still is, one of the best people I have ever worked with and a dear friend). As you can imagine, I was doing BAU and other project work whilst still testing and setting this up in a test environment. After three months of work I tried to get it through change management, but it was refused for production because of the scale of the project, as well as the desktop team not wanting to change how they did things. For three months this document sat on the shelf; then one day an applications guy read the document, who was really good friends with a desktop guy, and the project kicked off big style. I handed the project over to those teams and worked in an advisory capacity. Long story short, they went from being able to deploy 5 laptops and 8 desktops a day to 30 laptops and 90 desktops in a single day, with my custom .XML scripts and a WDS server. The server team took a further 6 months to take this on board, and as the servers were being virtualised, templates in VMware were used instead; I became the SME for VMware, capacity planning and Exchange for the rest of my time in that job.

Next time we will look at how we go about deploying WDS, how to get a simple boot from PXE, and the deployment of an image (hopefully without boring you to tears). Thankfully, about 20 pages at the end of the original document are about integration, writing simple .XML scripts and setting a standard build for the institution to work from, so that should shorten the length of the blogs :). I know that WDS has matured now, but seven years ago nobody was writing about it and nobody really knew what it was, nor did they care; Symantec and Altiris had the market. Those were the days when I was still a pioneer at deploying things.

 

Personal Update

I know you are all wondering why I have not been about to keep up to date on posts, and I will let you into a secret. I am currently learning Docker and Puppet to further my real passion for virtualisation, not just in the cloud (as it really is just somebody else's computer) but locally, and building the automation scripts that go along with it. I am really excited by containerising the infrastructure, and so is Microsoft with Server 2016 (something I am also learning, to be ready for release). With all this education going on I haven't had any time to myself to blog. Hopefully, now I am over the learning curve, I should be back and good to go, as well as sharing my new-found knowledge. Windows 10 Hyper-V is also looking very good at the moment, and the new version of Hyper-V for Windows 10 also has containers.
Yes, Sarah is still around, and I think she has come to terms with living with a geek 🙂 I think (hope).