
Category Archives: Microsoft Deployment Toolkit

Looking back at 2015…

So, the year 2015 is almost at its end. While I write this, I am already in the second week of my two-week time off. And boy, I really needed this break.

2015 was an extremely busy year for me, and I can split it neatly into two halves.

In the first half, I was still busy participating in a project where I designed and deployed System Center 2012 R2 Configuration Manager. I also built a stand-alone Image Building environment running MDT 2013. Unfortunately, the project took far longer than expected because the customer was unable to take ownership and start administering the environment themselves. I eventually decided to walk away after the contractual end date of my involvement, even though the project wasn't finished yet. The longer it took, the more frustrating the project became for me, so walking away turned out to be the right decision.

That brings me to the second half, in which my job shifted dramatically: I did only one Configuration Manager design and deployment in the second half of 2015. Instead, I started extending my Enterprise Client Management skillset with Microsoft Intune and Microsoft's public cloud platform: Azure.

I also started to deliver more workshops, master classes and training sessions. This is something I really like to do, and I want to thank those who made it possible for me. It also allowed me to renew my Microsoft Certified Trainer certification.

Fortunately, the frustrations of the first half provided some learning moments, which required me to become a more complete consultant. So my coworker arranged a two-day training session for me called "Professional Recommending" (a rough translation of the Dutch "Professioneel Adviseren"), provided by Yearth. This is by far the most important training I have received in my career, and it started to pay off quickly in the form of more positive feedback from customers.

I was also happy to do the presentation workshop with Monique Kerssens and Jinxiu Hu from Niqué Consultancy BV at ExpertsLive 2015, and to receive the feedback that my presentation skills have developed greatly. To quote them: "you're standing like a house".

The icing on the cake came at the end of this year when I was asked to review the DataON CiB-9224 platform. You can read the review in my previous post.

So, I experienced some highs and lows this year. Fortunately, the highs came in the second half.

I look forward to 2016, but that’s for another post…


Possible workaround for capturing a Windows 10 reference image with MDT 2013 Update 1

As most of us should know by now, Microsoft released Microsoft Deployment Toolkit 2013 Update 1; see the announcement at http://blogs.technet.com/b/msdeployment/archive/2015/08/17/mdt-2013-update-1-now-available.aspx

The main improvements are support for Windows 10 and integration with System Center 2012 Configuration Manager SP2/R2 SP1. Unfortunately, this release has quite a lot of issues that make it either very difficult or impossible to properly capture a reference image. A list of known issues is available at http://blogs.technet.com/b/msdeployment/archive/2015/08/25/mdt-2013-update-1-release-notes-and-known-issues.aspx

The issue that bothers me the most is the following, and I quote:

Do not upgrade from Preview to RTM

MDT 2013 Update 1 Preview should be uninstalled before installing the final MDT 2013 Update 1. Do not attempt to upgrade a preview installation or deployment share. Although the product documentation is not updated for MDT 2013 Update 1, the information on upgrading an installation still holds true.

Being a consultant requires me to be an early adopter, testing new technology so I am ready when it is released, and that in turn requires me to work with Preview versions of various software. Also, as an IT pro with an isolated environment available purely for Image Building purposes, I need to upgrade my deployment share frequently. While I can automate building new deployment shares, that costs time I would rather spend researching and testing these new technologies, so I don't have much choice other than upgrading my deployment share. I must admit that releasing this technology with so many known issues seems quite sloppy to me. I can only assume that various scenarios were not tested thoroughly due to time constraints, and that this version was released under some pressure.

Trying to build and capture a Windows 10 reference image fails: the capturing itself fails with an error message that a certain script cannot be loaded. The MDT 2013 U1 environment I currently have is for Image Building purposes only, so I don't have that many customizations configured.

So, knowing that the capturing itself fails, I can do the capturing part myself. Since image building is not something I expect you to do every day, the administrative effort increases just a little bit, but it's quite easy to do.

First, we start a deployment using the Windows Deployment Wizard. After selecting my Build and Capture Windows 10 Task Sequence I get the option to select how I want to capture an image.

[Screenshot: choosing the capture option in the Windows Deployment Wizard]

I choose not to capture an image by selecting the option Do not capture an image of this computer. This makes the deployment run normally and finish without doing anything afterwards. I do use the option FinishAction=REBOOT in my CustomSettings.ini to make sure the machine restarts after completion.

The next step is logging on with the local Administrator account to SYSPREP the machine by running the sysprep.exe /oobe /generalize /shutdown command.

[Screenshot: SYSPREP in progress]

Here we see SYSPREP in progress. After a short while the machine is turned off.

Now the machine is started again using the LiteTouch boot media (in my case via WDS), and I wait until the Deployment Wizard is started once more. I do this because my deployment share is then available and accessible through the automatically mapped Z: drive. Pressing F8 opens the command prompt.

All I need to do is start capturing an image using DISM, which may look like the screenshot below (hmmm, makes me wonder why I chose that filename).
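For reference, the command itself is along these lines (a minimal sketch: the .wim file name is just an example, and Z: is assumed to be the automatically mapped deployment share):

dism.exe /Capture-Image /ImageFile:Z:\Captures\REFW10X64.wim /CaptureDir:C:\ /Name:"Windows 10 Reference" /Compress:max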

[Screenshot: starting the capture with DISM]

Now the capture can start.

[Screenshot: capture progress]

After a while the capture completes and a captured Windows 10 image is available in the Captures folder of the deployment share in use. This image can be used for deployment by MDT 2013 U1, System Center 2012 Configuration Manager SP2/R2 or whatever tool is used for deploying .wim files.

Basically, the workaround consists of replacing the image capturing part with manual labour. I'm sure other workarounds are available, but this one works for me. The capture should take less than 72 hours, since that is the maximum time a WinPE session is allowed to run; once the 72 hours are up, WinPE automatically restarts the computer. That should be more than enough time to have the image file created.

Feel free to use this workaround. As usual, testing is required before using it in a production environment.

Let's hope an updated release will have all these issues solved, the sooner the better…


Workflow analysis to automate building up an MDT 2013 U1 environment…

Recently I was building a new MDT 2013 environment to facilitate Image Building next to a new System Center 2012 R2 Configuration Manager site. This all went fine, but I noticed that I have now built several of these environments the same way, manually, which started to annoy me a little bit. So I decided to spend some time to see if these preparations could be automated as much as possible.

To start automating something, it is essential that we know what to automate and in which sequence the defined steps need to be set. In other words, we need to analyze and determine the workflow before something can be automated.

MDT 2013 is the recommended Image Building environment for most scenarios. I'd even recommend creating an isolated environment purely for that purpose. This environment can be as small as three machines, as explained at https://technet.microsoft.com/en-us/library/dn744290.aspx. I use this article as a reference as well.

After some analysis I discovered and defined the following workflow to build the MDT 2013 U1 environment, assuming the VM is created and joined to the domain:

  1. Install WDS and WSUS (using a Windows Internal Database);
  2. Configure WSUS;
  3. Install Windows ADK 10 from locally available setup;
  4. Install MDT 2013 U1 from locally available setup;
  5. Create Deployment Share;
  6. Import one or more Operating Systems and create Task Sequences for them;
  7. Import applications using a .csv file;
  8. Edit Bootstrap.ini & CustomSettings.ini to meet requirements;
  9. Update Deployment Share.

After this workflow, the Task Sequence can be edited. I considered trying to automate that as well, but that is for a future post. After that, you're basically good to go. Having WSUS synchronize and download updates is something that should be done separately; a way to automate this is described at http://blogs.technet.com/b/heyscriptingguy/archive/2013/04/15/installing-wsus-on-windows-server-2012.aspx.

MDT 2013 can be managed with PowerShell, albeit a bit crudely in my opinion. A set of cmdlets is available to get most things done; the ones I need for my workflow are there, so I can write a complete script that represents each step of the defined workflow. However, some planning is required. The following items need to be in place:

  • A 'source' location for the setup files of Windows ADK 10 and MDT 2013;
  • The sources of the applications which are going to be part of the deployment, including a .csv file which includes the required information;
  • The location for the Deployment Share.

These requirements can be put into the script and may be modified to be used again in a different environment.

So let’s get started and check out each step.

 

First step: Install WDS and WSUS (using a Windows Internal Database). This one is easy and can be done with just one line:

Add-WindowsFeature WDS,NET-Framework-Core,UpdateServices-Services,UpdateServices-WidDB,UpdateServices-UI -IncludeAllSubFeature

 

Second step: Configure WSUS

Fairly straightforward as well; I use the Start-Process cmdlet to run the WsusUtil command:

$FilePath="C:\Program Files\Update Services\Tools\WsusUtil.exe"
$ArgumentList="postinstall CONTENT_DIR=F:\WSUS"
Start-Process -FilePath $FilePath -ArgumentList $ArgumentList

 

Automating the WSUS synchronization can be added in this step as well; a sketch follows below.
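A minimal sketch of what that could look like, using the UpdateServices module that ships with the WSUS role (this assumes the post-install step above has already completed):

# Trigger an initial synchronization against Microsoft Update
$WsusServer = Get-WsusServer
$Subscription = $WsusServer.GetSubscription()
$Subscription.StartSynchronization()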

 

Third step: Install Windows ADK 10

I need adksetup.exe to start the installation. It doesn't matter whether all the bits were downloaded previously or need to be downloaded while setup is running. Still pretty straightforward:

 

$FilePath="F:\Download\Win10ADK\adksetup.exe"
$ArgumentList1="/q /norestart "
$ArgumentList2="/installpath F:\ADK10 "
$ArgumentList3="/features OptionId.DeploymentTools OptionId.WindowsPreinstallationEnvironment OptionId.ImagingAndConfigurationDesigner OptionId.UserStateMigrationTool"
Start-Process -FilePath $FilePath -ArgumentList ($ArgumentList1 + $ArgumentList2 + $ArgumentList3)

Notice the spaces at the end of the variables $ArgumentList1 and $ArgumentList2; they keep the arguments separated once the strings are concatenated.

 

Fourth step: Install MDT 2013 U1

The installation of MDT 2013 U1 consists of a single .msi file. Since I use the defaults it becomes pretty straightforward as well:

$FilePath="msiexec.exe"
$ArgumentList="/i F:\Download\MDT2013U1\MicrosoftDeploymentToolkit2013_x64.msi /qb"
Start-Process -FilePath $FilePath -ArgumentList $ArgumentList

 

NOTE: I use /qb intentionally to see something happening.

 

Before proceeding, the PowerShell module for MDT needs to be loaded:

$ModulePath="C:\Program Files\Microsoft Deployment Toolkit\Bin\MicrosoftDeploymentToolkit.psd1"
Import-Module $ModulePath

 

Fifth step: Create a Deployment Share and share it

This requires a little bit more work. The folder itself must be created before running the cmdlet that configures the Deployment Share:

$Name="DS001"
$PSProvider="MDTProvider"
$Root="F:\DeploymentShare"
$Description="MDT Deployment Share"
$ShareName="DeploymentShare$"
$NetworkPath="\\MDT01\DeploymentShare$"
New-Item $Root -Type Directory
New-PSDrive -Name $Name -PSProvider $PSProvider -Root $Root -Description $Description -NetworkPath $NetworkPath -Verbose | Add-MDTPersistentDrive -Verbose
$Type = 0
$objWMI = [wmiclass] 'Win32_Share'
# Win32_Share.Create expects (Path, Name, Type); Type 0 is a disk share
$objWMI.Create($Root, $ShareName, $Type)
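On Windows Server 2012 R2, the WMI call can also be replaced with the SmbShare module, which reads a bit friendlier (a sketch; adjust the access rights to your own requirements):

New-SmbShare -Name $ShareName -Path $Root -FullAccess "Administrators"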

 

Sixth step: Import an Operating System and create a Task Sequence for it

 

A little bit more work. I limited the script to one Operating System; this part can be repeated when more Operating Systems need to be imported. I chose not to use a .csv file for that.

 

$Path="DS001:\Operating Systems"
$SourcePath="F:\Download\OS\win8.1_ent_x64"
$DestinationFolder="Windows 8.1 Enterprise x64"
Import-MDTOperatingSystem -Path $Path -SourcePath $SourcePath -DestinationFolder $DestinationFolder

$Path="DS001:\Task Sequences"
$Name="Build Windows 8.1 Enterprise x64"
$Template="Client.xml"
$ID="1"
$OperatingSystemPath="DS001:\Operating Systems\Windows 8.1 Enterprise in Windows 8.1 Enterprise x64 install.wim"
$FullName="Windows User"
$Homepage="about:blank"
Import-MDTTaskSequence -Path $Path -Name $Name -Template $Template -ID $ID -OperatingSystemPath $OperatingSystemPath -FullName $FullName -OrgName $FullName -HomePage $Homepage

 

Seventh step: Import applications using a .csv as source

Import-Csv F:\Download\Apps\MDTApps.csv | % {
    $Path="DS001:\Applications"
    # Note the $() subexpression: without it, $_.ApplicationName does not expand inside the string
    Import-MDTApplication -Path $Path -Enable "True" -Name $_.ApplicationName -ShortName $_.ApplicationName -CommandLine $_.CommandLine -WorkingDirectory ".\Applications\$($_.ApplicationName)" -ApplicationSourcePath $_.ApplicationSourcePath -DestinationFolder $_.ApplicationName
}

Here’s a sample .csv file used for the purpose of this script:

ApplicationName,CommandLine,ApplicationSourcePath
Visual C++ 2005 redist,VS80sp1-KB926601-X86-ENU.exe /Q,F:\Download\Apps\VC2005
Visual C++ 2008 redist,VS90sp1-KB945140-ENU.exe /Q,F:\Download\Apps\VC2008
Visual C++ 2010 redist,VS10sp1-KB983509.exe /Q,F:\Download\Apps\VC2010
Visual C++ 2012 redist x64,vcredist_x64.exe /Q,F:\Download\Apps\VC2012X64
Visual C++ 2012 redist x86,vcredist_x86.exe /Q,F:\Download\Apps\VC2012X86
Visual C++ 2013 redist x64,vcredist_x64.exe /Q,F:\Download\Apps\VC2013X64
Visual C++ 2013 redist x86,vcredist_x86.exe /Q,F:\Download\Apps\VC2013X86

 

Hmmm, looks like I still need to add the Visual C++ 2015 redistributable and Office 2013. The benefit of this approach is that the .csv can simply be modified to match the list of applications that are going to be part of the reference image.

 

Eighth step: Edit Bootstrap.ini and CustomSettings.ini

This step requires a bit more work. The locations of both files depend on step 5. To keep things easy, I overwrite the existing content completely. The result may look like this:

$File="F:\DeploymentShare\Control\Bootstrap.ini"
$Content=@'
[Settings]
Priority=Default

[Default]
DeployRoot=\\MDT01\DeploymentShare$
UserDomain=DOMAIN9
UserID=MDTAdmin
UserPassword=P@ssw0rd
SkipBDDWelcome=YES
'@
$Content | Out-File $File

 

$File="F:\DeploymentShare\Control\CustomSettings.ini"
$Content=@'
[Settings]
Priority=Default

[Default]
_SMSTSORGNAME=DOMAIN9 Lab Deployment
OSInstall=Y
AdminPassword=P@ssw0rd
TimeZoneName=Pacific Standard Time
JoinWorkgroup=WORKGROUP
FinishAction=SHUTDOWN
DoNotCreateExtraPartition=YES
WSUSServer=http://mdt01.domain9.local:8530
ApplyGPOPack=NO
SkipAdminPassword=YES
SkipProductKey=YES
SkipComputerName=YES
SkipDomainMembership=YES
SkipUserData=YES
SkipLocaleSelection=YES
SkipTaskSequence=NO
SkipTimeZone=YES
SkipApplications=YES
SkipBitLocker=YES
SkipSummary=YES
SkipRoles=YES
SkipCapture=NO
SkipFinalSummary=YES
'@
$Content | Out-File $File

 

Ninth step: Update the Deployment Share

The final step creates the boot images and processes everything to make sure it's all there. This step may take a while.

$Path="DS001:"
Update-MDTDeploymentShare -Path $Path

 

That's it. After running the script you can see it's all there. The only things missing are adding the applications to the Task Sequence (and the WSUS sync and update download). Alternatively, you can set SkipApplications to NO and select the applications during the Lite Touch wizard.

To prevent this post from becoming too verbose: all that needs to be done is to copy/paste each part into a single .ps1 file and you're good to go. Feel free to use this, but do so in a completely isolated environment to prevent interfering with a production environment.

 

Thoughts and ideas to manage Google Chrome for Work, and some Extensions too…

Recently I was asked to investigate how to manage Google Chrome for Work in a corporate environment. The customer involved is using Google Cloud Services intensively. This means that in order to receive an optimal user experience, the Google Chrome browser is needed. I decided to investigate and come up with some thoughts and ideas on how to do it.

Managing applications consists of two activities:

  • Deployment
  • Configuration

In an ideal scenario, those activities are completely separated: the application can then be installed with its default settings while the configuration is taken care of elsewhere. Fortunately, Google provides a few methods of configuring Google Chrome. The most well-known are:

  • Editing the master_preferences file
  • Using Group Policy

For the sake of this blog, the corporate environment consists of Windows 8.1 devices which are domain joined. This means we can consider using Group Policy, which also meets the initial goal of keeping deployment and configuration separate. I used the following reference on managing Google Chrome with Group Policy: https://support.google.com/chrome/a/answer/187202

As we can see, the policies are machine driven. This presents another challenge: how should Google Chrome for Work be installed? Google provides the following deployment guide to get started: https://docs.google.com/document/d/1iu6I0MhyrvyS5h5re5ai8RSVO2sYx2gWI4Zk4Tp6fgc/preview?pli=1&sle=true

In this guide, I found the following passage critical for my investigation:

Chrome installations from an MSI package are installed at the system level and are available to all users. As a result, any user-level installation of Chrome, (i.e. a user’s own Chrome installation), will be overridden.

It looks like Google has revised the installation approach quite dramatically, since it used to be possible to install Chrome in the user context. That allowed end users to install Google Chrome without administrator privileges, which can be considered a security officer's true nightmare, not to mention what it does to the configuration management and release management processes in a corporate environment.

In order to investigate how to do it, I’ve set up a lab environment with the following machines:

  • 1 Windows Server 2012 R2 domain controller
  • 1 System Center 2012 R2 Configuration Manager site server
  • 1 Windows 8.1 x64 client

Knowing that Google Chrome will always be installed in the system context means we can make it part of an Operating System Deployment Task Sequence. But first an application needs to be created.

To create the application, just import googlechromestandaloneenterprise.msi and create a standard Deployment Type, nothing fancy.
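Under the hood this is plain MSI handling; a silent install of the package boils down to something like this (a sketch; the log file location is just an example):

msiexec /i googlechromestandaloneenterprise.msi /qn /l*v C:\Windows\Temp\chrome_install.log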


Really nothing fancy…

Adding this to a Task Sequence is nothing fancy either…

There you go. I created an MDT 2013 Task Sequence with an unattended Windows 8.1 x64 deployment, since building a Windows 8.1 x64 image first is not relevant here.

Configuring Google Chrome is a bit more challenging. It really depends on requirements and processes to determine how Google Chrome for Work should be configured. This is what I did first:

  • Create a separate GPO for Google Chrome and link it to the OU where my client machine resides
  • Import the chrome.adm template

I’ve decided to configure just a few settings the GPO provides:

  • Setting Google Chrome as the default browser
  • Configure the start page

Configuring the start page requires two GPO settings: one that sets the action on startup to open a list of URLs, and one that defines the list of start pages itself.

 
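These GPO settings simply write values under HKLM\Software\Policies\Google\Chrome, so for a quick lab test they can be prototyped in the registry directly (a sketch only; in production, keep this in the GPO):

$PolicyKey="HKLM:\Software\Policies\Google\Chrome"
New-Item -Path $PolicyKey -Force
# RestoreOnStartup = 4 means "Open a list of URLs"
New-ItemProperty -Path $PolicyKey -Name "RestoreOnStartup" -Value 4 -PropertyType DWord -Force
New-Item -Path "$PolicyKey\RestoreOnStartupURLs" -Force
New-ItemProperty -Path "$PolicyKey\RestoreOnStartupURLs" -Name "1" -Value "https://www.google.com" -PropertyType String -Force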

What about Extensions?

Admittedly, I find it great to manage Extensions by GPO. It does require some detective work before they can be automatically added to Google Chrome, but fortunately nothing has to be downloaded in advance once you know the Extension ID. In my example I added a list of so-called force-installed Extensions.

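For reference, each force-installed extension is a string of the form "<extension ID>;<update URL>" in the ExtensionInstallForcelist policy, which ends up in the registry like this (a sketch; the extension ID below is a made-up placeholder, taken in practice from the extension's Chrome Web Store URL):

$ForceList="HKLM:\Software\Policies\Google\Chrome\ExtensionInstallForcelist"
New-Item -Path $ForceList -Force
New-ItemProperty -Path $ForceList -Name "1" -Value "abcdefghijklmnopabcdefghijklmnop;https://clients2.google.com/service/update2/crx" -PropertyType String -Force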

Once the configuration is in place and the machine is deployed, the result will look like this:

Starting Google Chrome with my Start Pages

And the Extensions specified are also in place.

 

Finally, Google frequently updates Chrome. You can manage the update behavior by configuring Google Update through Group Policy; the following website can be used as a reference: https://support.google.com/chrome/a/answer/3204698

This requires importing the GoogleUpdate.adm template to configure the update behavior.

In this investigation, I configured the following update settings:

  • Automatically check for updates every 60 minutes
  • Enable auto updating of Google Chrome

 

Here are the GPO settings needed to configure it.
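Like the Chrome policies, these settings land in the registry, under HKLM\Software\Policies\Google\Update. A lab sketch of the two settings above (the value names come from the GoogleUpdate.adm template; verify them against your template version):

$UpdateKey="HKLM:\Software\Policies\Google\Update"
New-Item -Path $UpdateKey -Force
# Check for updates every 60 minutes
New-ItemProperty -Path $UpdateKey -Name "AutoUpdateCheckPeriodMinutes" -Value 60 -PropertyType DWord -Force
# 1 = always allow updates
New-ItemProperty -Path $UpdateKey -Name "UpdateDefault" -Value 1 -PropertyType DWord -Force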

 

Updating Google Chrome for Work works great, if you're an administrator. Unfortunately, when a user accesses the 'About' tab in Chrome, it triggers an update check. For a standard user this immediately triggers User Account Control, asking for credentials for elevation on Windows Vista or newer. Because Google Chrome is installed in the system context (in the %PROGRAMFILES% directory), administrator rights are required for the changes this user-initiated check wants to make to the computer. This behavior is by design.

Especially on Windows 8.1, it is absolutely not recommended to turn off UAC, since that breaks a lot of functionality in Windows itself. Giving users administrator privileges is not recommended either. To maintain a strict configuration management and release management process, it's recommended to disable updates, preventing users who do have administrator privileges from installing updates by themselves.

This means a different method is needed to deploy an updated version of Google Chrome for Work. To me, the Application model of Configuration Manager is the most suitable option, since it allows applications to be superseded. It also means the Task Sequence may need to be edited frequently. Alternatively, a third-party update tool such as Secunia can be used if the organization has a license for it.

I can conclude that Google has done the right thing by installing Google Chrome for Work in the system context and having it managed through Group Policy.


Minimum required permissions to join a computer to the domain during OSD

Recently I was required to deliver the minimum permissions needed to allow a computer to join the domain during Operating System Deployment. The challenge in this scenario was that each computer object is pre-staged in Active Directory prior to deployment. Here are a few, but not all, reasons why each computer object is pre-staged (let's assume this is achieved by a PowerShell script, sketched below the list):

  • A list of all computers exists; however, the list contains only names (no MAC addresses or GUIDs)
  • Operating System Deployment needs to be simplified by not pointing to any Organizational Unit in the answer file
  • For each user a computer will be configured as a Primary Device, most likely used for file server related configurations
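A minimal sketch of such pre-staging with the ActiveDirectory module (the input file and OU path are placeholders):

Import-Module ActiveDirectory
# Pre-create one computer account per name, in the OU the deployment expects
Get-Content F:\Download\computers.txt | ForEach-Object {
    New-ADComputer -Name $_ -Path "OU=Workstations,DC=domain9,DC=local" -Enabled $true
}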

I guess I was lucky that on most projects in the past, the administrators told me which account to use, since they had already configured the required delegation for domain join. This time, however, the computer objects were pre-staged prior to deployment and I had to deliver the required permissions myself. Initially, I received the following error message:

"The join operation was not successful. This could be because an existing computer account having the name xxxx was previously created using a different set of credentials. Use a different computer name, or contact your administrator to remove any stale conflicting account. The error was: Access is denied."

After some digging I found the following blog post that helped me get on track again:

http://morgansimonsen.wordpress.com/2013/12/17/delegating-computer-object-management-tasks/

After reading this post, I knew I needed the following scenario:

2. Allowing a security principal to join and re-join a computer to a domain

The second scenario, allowing a principal to also re-join a computer to a domain, requires some additional permissions. This is useful if you want to have a service account that can manage all computer accounts, including existing ones. System Center Configuration Manager, for example, requires these permissions.

According to the article the following permissions are required:

  • Reset Password
  • Read and write Account Restrictions
  • Validated write to DNS host name
  • Validated write to service principal name
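Delegating these on an OU can be scripted with dsacls (a sketch; the OU path and service account are placeholders, and the resulting ACEs should be verified in a test environment):

dsacls "OU=Workstations,DC=domain9,DC=local" /I:S /G "DOMAIN9\MDTJoin:CA;Reset Password;computer"
dsacls "OU=Workstations,DC=domain9,DC=local" /I:S /G "DOMAIN9\MDTJoin:RPWP;Account Restrictions;computer"
dsacls "OU=Workstations,DC=domain9,DC=local" /I:S /G "DOMAIN9\MDTJoin:WS;Validated write to DNS host name;computer"
dsacls "OU=Workstations,DC=domain9,DC=local" /I:S /G "DOMAIN9\MDTJoin:WS;Validated write to service principal name;computer"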

In order to reproduce the behavior, I set up a small lab environment with a domain controller, a System Center 2012 R2 Configuration Manager primary site server and two client machines. I did the following before deploying the machines:

  • Pre-stage the two computer accounts
  • Delegated the permissions to the account used to join computers to the domain

The first client joined correctly; after that I did the same with the second client, and that one joined correctly as well. This means I was able to reproduce the expected behavior. I consider that a result.

I'd like to thank Morgan for providing a detailed blog post which helped me achieve this goal. It all comes back to having the right permissions in place.

Feel free to try it out yourself, don’t forget to use a test environment before putting this in production…

UPDATE: the required permissions are also documented in the following KB article: http://support.microsoft.com/kb/932455. This article applies to Windows Server 2012 (R2) as well.


A small reminder (and note to self) when working with DELL Driver CABs for enterprise deployment

As many people already know, DELL provides a very comprehensive set of driver packs that can be used with MDT and especially ConfigMgr. Personally, I find DELL's approach works so well I could almost jump on a couch and scream 'I LOVE IT' a few times. You can compare it with a well-known actor who is famous for doing this in a talk show and who is also known as a Scientology zealot 😉

But I said almost…

Not only are separate packs available for single models, family packs are available as well, which makes managing drivers a lot easier.

You can find the packs at the following locations:

http://en.community.dell.com/techcenter/enterprise-client/w/wiki/2065.dell-driver-cab-files-for-enterprise-client-os-deployment.aspx

and

http://en.community.dell.com/techcenter/enterprise-client/w/wiki/7437.dell-family-driver-cab-files.aspx

 

At one of my projects, I initially decided to create a single Driver Package for each family pack (or separate ones when required), which means each package contains both the x86 and x64 drivers. Unfortunately, some models loaded the wrong driver: in my case, an x64 driver was loaded during a Windows 8.1 x86 deployment. This was not the behavior I was looking for.

After figuring out the issue, I decided to cut the packs in half and put the x86 and x64 drivers in separate Driver Packages. It's a little more administrative effort, and that particular effort denies me the time to jump the couch.

Let's hope DELL's competitors deliver a similar method, which I haven't seen so far…


Thoughts on ‘the art of removing’ for Image Building

At one of my projects I was asked to facilitate the Image Building process to allow the customer to create their own reference images. The tooling is already configured and the customer is able to create their reference images; MDT 2013 is used to build these, the x86 and x64 editions of Windows 8.1 Enterprise.

It is generally recommended, not only by me but by Microsoft as well (and a bunch of well-known MVPs), to use MDT 2013 for Image Building. It is also well known that two strategies exist:

  • Thick image strategy: building an image and include the application baseline (including updates)
  • Thin image strategy: building an image without any baseline applications (including updates)

Most of the time, organizations want to have some baseline applications such as Microsoft Office. Just for giggles, I call that a 'beer-belly' image 🙂

So far so good. However, at this particular project the customer asked me to remove some apps for users. In this case, they're using the Google Cloud tools and wanted all Bing-based apps removed from the image.

After some digging, I found a script created by The Deployment Guys which allowed me to achieve this goal. I used this script as a reference to have the defined set of built-in apps removed. The script is available at the following blog post:

http://blogs.technet.com/b/deploymentguys/archive/2013/10/21/removing-windows-8-1-built-in-applications.aspx

The script works like a charm; I can even use it in Configuration Manager as part of my deployment.
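At its core, the technique relies on the Appx cmdlets. A minimal sketch of removing the Bing apps by hand (this is not the full Deployment Guys script, just the essence of it):

# Remove the provisioned packages so new user profiles don't get the apps
Get-AppxProvisionedPackage -Online | Where-Object { $_.DisplayName -like "*Bing*" } | Remove-AppxProvisionedPackage -Online
# Remove the apps for the current user as well
Get-AppxPackage -Name "*Bing*" | Remove-AppxPackage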

The actual challenge with removing the built-in apps is deciding when to remove them. To me, only two moments are available:

  • During Image Building, prior to image capture
  • During Operating System Deployment, after the image has been applied

After some consideration, I recommended removing the built-in apps during Operating System Deployment. In this scenario the amount of customization in the image remains low. It also gives me flexibility when the decision to remove these apps is revoked or modified: all I need to do then is modify the script, instead of rebuilding the reference images or creating applications in Configuration Manager to have them reinstalled.

In a Task Sequence this is simply an extra script step after the image has been applied.

You can compare it to a sculptor hacking away at the material to get the desired shape; the sculptor will have a hard time if he removes too much material…

As usual, if you try this out yourself, then please use a test environment before deploying this in a production environment.
