
Case study: Availability Sets vs. Availability Zones

One of the biggest challenges customers face is making sure a highly available solution survives a catastrophic failure at the fabric layer of Microsoft Azure: think of servers, storage, network devices, power and cooling. Not having to care about the fabric layer is one of the main reasons why organizations consider running their workloads in Azure in the first place.

However, Azure locations are not magic castles that are invulnerable to catastrophic failures or other natural disasters. Depending on the magnitude of a possible disaster, organizations can think about scenarios to safeguard (more or less) the availability of their workloads. After all, Microsoft and its customers share the responsibility of keeping everything running.

Maintaining high availability within a single region gives you two options:

  • Availability Sets: spread workloads over multiple hosts and racks, but still within the same data center;
  • Availability Zones: spread workloads over multiple locations (separate data centers within the region), so you no longer care on which host the workload runs.

The following picture displays the difference in the failures covered and the SLA percentage. Obviously, Availability Zones offer higher protection against failures. Region pairs are beyond the scope of this post…

The beauty of both scenarios is that the VNet required to connect an Azure VM is not bound to a single data center (a.k.a. an Availability Zone); it is stretched over the whole region.

So I thought, let’s try this out with a typical workload that requires a high level of availability and can sustain failures pretty well. My choice was a SQL Server failover cluster (no Always On Availability Group) with additional resiliency provided by Storage Spaces Direct. Using all these techniques to maintain uptime, how cool is that?

I used the following guides to deploy a two-node Windows Server 2016 cluster:

Actually, I built two SQL S2D clusters. Both clusters were identical (two DS11 VMs, each with two P30 disks), except that one was configured with an Availability Set and the other with an Availability Zone.
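For reference, the core of such a build boils down to something like the sketch below (assuming the failover clustering and Storage Spaces Direct cmdlets on Windows Server 2016; node names, the cluster IP address and the volume size are placeholders, and quorum/witness configuration is omitted):

    # Validate and create the two-node cluster (run from one of the Azure VMs)
    Test-Cluster -Node 'sql-node1','sql-node2' -Include 'Storage Spaces Direct','Inventory','Network','System Configuration'
    New-Cluster -Name 'sqlclu01' -Node 'sql-node1','sql-node2' -NoStorage -StaticAddress 10.0.0.10

    # Enable Storage Spaces Direct on the P30 data disks and create a mirrored CSV volume
    Enable-ClusterStorageSpacesDirect -Confirm:$false
    New-Volume -StoragePoolFriendlyName 'S2D*' -FriendlyName 'SQLData' -FileSystem CSVFS_ReFS -Size 1TB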

What makes the difference is the requirement for the Azure Load Balancer. You need an Azure Load Balancer in front of the cluster so that traffic for the cluster IP address reaches the active node, which the load balancer determines with a health probe. Looking at the Azure Load Balancer overview, available at https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview you can see that you need the Standard SKU when using Availability Zones; when using an Availability Set, the Basic SKU is sufficient. That is actually the only difference when deploying a SQL cluster using S2D. Since the load balancer is an internal one anyway, I’d recommend using the Standard SKU in both cases. From a pricing perspective, I don’t believe it makes much of a difference, and if the penalties for downtime are severe, I wouldn’t nitpick about this anyway.
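As an illustration, creating that internal load balancer with the Standard SKU from PowerShell looks roughly like the following (a sketch assuming the Az module; the resource names, subnet, probe port and listener port are placeholders for whatever your cluster role uses):

    # Internal frontend IP for the clustered SQL instance, in the stretched VNet
    $vnet     = Get-AzVirtualNetwork -Name 'vnet-prod' -ResourceGroupName 'rg-sql'
    $subnet   = Get-AzVirtualNetworkSubnetConfig -Name 'subnet-sql' -VirtualNetwork $vnet
    $frontend = New-AzLoadBalancerFrontendIpConfig -Name 'fe-sqlclu' -SubnetId $subnet.Id -PrivateIpAddress '10.0.0.20'
    $pool     = New-AzLoadBalancerBackendAddressPoolConfig -Name 'be-sqlclu'

    # Health probe on the port the cluster role answers on, so traffic follows the active node
    $probe = New-AzLoadBalancerProbeConfig -Name 'probe-sqlclu' -Protocol Tcp -Port 59999 -IntervalInSeconds 5 -ProbeCount 2
    $rule  = New-AzLoadBalancerRuleConfig -Name 'rule-sql' -FrontendIpConfiguration $frontend -BackendAddressPool $pool `
             -Probe $probe -Protocol Tcp -FrontendPort 1433 -BackendPort 1433 -EnableFloatingIP

    # Standard SKU is required when the backend VMs are spread over Availability Zones
    New-AzLoadBalancer -ResourceGroupName 'rg-sql' -Name 'lb-sqlclu' -Location 'westeurope' -Sku Standard `
        -FrontendIpConfiguration $frontend -BackendAddressPool $pool -Probe $probe -LoadBalancingRule $rule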

 


Posted by on 20/09/2018 in Uncategorized

 

Manageability and responsibility for Cloud Services

Many organizations face challenges when moving their IT services to the public cloud. For the sake of this post I focus solely on Microsoft Azure, although I am aware that other cloud providers have similar approaches and models…

As we’re all aware, three categories of cloud services exist:

  • Infrastructure as a Service (IaaS);
  • Platform as a Service (PaaS);
  • Software as a Service (SaaS).

Each category has its own level of management: some elements are managed by the cloud provider, the rest is managed by yourself. The amount of management differs per category, as displayed in the picture below.

As you can see, SaaS services are completely managed by the cloud provider, which is great. If a Line of Business (LoB) application can be replaced by a SaaS alternative, it really makes sense to do so. Looking at IaaS and PaaS, the amount of management done by the cloud provider is higher with PaaS than with IaaS. This leads to the following recommendations:

  • Replace/migrate existing applications to SaaS services. This relieves the IT department of the daily tasks of managing them;
  • Consider using PaaS services as much as possible. This also lowers the administrative effort of managing cloud services for the IT department. Additionally, certain PaaS services (e.g. Azure Web Apps) allow developers to develop and deploy immediately to the service, so they do not depend on an IT pro to facilitate it.

However, less management doesn’t mean less responsibility: using cloud services does not make the organization any less responsible. Microsoft has released documentation on the shared responsibility between the customer and itself, available at http://aka.ms/sharedresponsibility From that guide I took the following screenshot showing a diagram of the responsibilities.

 

As you can see, the customer still has some responsibility when using SaaS services. However, these models allow a customer to define a strategy when moving to the cloud…

 

 

Posted by on 05/09/2018 in Azure, Public Cloud

 

Ensure IT Governance using Azure Policy…

Many organizations face challenges using Microsoft Azure in a controlled way. The large (and still increasing) number of services and the sheer scale of Microsoft Azure can make it pretty overwhelming to maintain control and enforce compliance with IT governance, also known as company policy. How great would it be if organizations could enforce their IT governance in Microsoft Azure?

Well, meet Azure Policy.

Azure Policy allows IT organizations to enforce compliance on the Azure resources being used. Once a policy is applied, it reports compliance on existing Azure resources and is enforced on newly created ones. A full overview of Azure Policy is available at https://docs.microsoft.com/en-us/azure/azure-policy/azure-policy-introduction.

For my own subscription, which I use for testing purposes only, I enforced a single policy that defines which Azure location I am allowed to use. In my case that location is West Europe, which is more or less around the corner for me. Creating Azure resources in a different location after applying the policy results in an error message.

The screenshot below displays my configuration for this Policy.
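For comparison, the same assignment can be scripted, roughly like this (a sketch assuming the Az module and the built-in ‘Allowed locations’ definition; the subscription ID is a placeholder):

    # Look up the built-in 'Allowed locations' policy definition
    $definition = Get-AzPolicyDefinition | Where-Object { $_.Properties.DisplayName -eq 'Allowed locations' }

    # Assign it at subscription scope and only allow West Europe
    New-AzPolicyAssignment -Name 'allowed-locations' `
        -Scope '/subscriptions/00000000-0000-0000-0000-000000000000' `
        -PolicyDefinition $definition `
        -PolicyParameterObject @{ listOfAllowedLocations = @('westeurope') }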

The overview provides many more examples of typical policies that can be applied. The ones that come to mind first would most likely be:

  • Allowed locations;
  • Allowed Virtual Machine SKUs;
  • Tagging;
  • White-listing Azure resources.

Before applying policies like these, I’d strongly recommend investigating your IT governance, if available. Once that is in place, you should be able to ‘translate’ it into Azure Policy.

 

Posted by on 21/08/2018 in Azure, Public Cloud

 

ConfigMgr: a second attempt to REALLY liberate yourself from driver management…

In a previous post, I made an attempt to use Microsoft Update for downloading and installing all drivers during an Operating System Deployment task sequence with System Center Configuration Manager or the Microsoft Deployment Toolkit. This approach works pretty well as long as hardware vendors use components whose drivers are published on Microsoft Update. It requires some testing, and if something is missing, alternative methods are available.

But how about maintaining drivers during normal operation? Since drivers are not managed in this scenario, there still needs to be a process for receiving updated drivers. As we all know, System Center Configuration Manager doesn’t support deploying drivers through Software Updates because the update classification ‘Drivers’ is not available (it is in WSUS, though), so that’s not an option.

Fortunately, since Windows 10 1607 a feature called Dual Scan is available, which can be used in conjunction with Software Updates in System Center Configuration Manager. This allows organizations to use both sources for managing updates, so Microsoft Update can be used to update drivers.

The easiest way to do this is to deploy Windows Update for Business policies from System Center Configuration Manager (assuming Intune is not used). All that needs to be done is to follow the instructions at https://docs.microsoft.com/en-us/sccm/sum/deploy-use/integrate-windows-update-for-business-windows-10#configure-windows-update-for-business-deferral-policies

Within such a policy, you can include drivers by checking the option ‘Include drivers with Windows Update’. Roughly said, you can then kiss driver management in System Center Configuration Manager goodbye.
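On a client you can verify what the resulting policy looks like by checking the Windows Update policy keys, roughly like this (a sketch; it assumes the ExcludeWUDriversInQualityUpdate value that the ‘include drivers’ setting maps to, where 0 or absent means drivers are included):

    # Inspect the Windows Update for Business policy keys on a client
    $key   = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
    $value = (Get-ItemProperty -Path $key -ErrorAction SilentlyContinue).ExcludeWUDriversInQualityUpdate

    if ($value -eq 1) {
        'Drivers are excluded from Windows Update scans'
    } else {
        'Drivers are included (value is 0 or not set)'
    }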

Despite the availability of good tools provided by vendors such as HP and Dell, managing drivers in System Center Configuration Manager is still a dreadful task. So this approach may reduce administrative effort dramatically…

 

 

 

 

 

 

ConfigMgr: first impressions deploying a Distribution Point on a server core installation…

Recently I’ve been investigating deploying server core installations of Windows Server 2012 R2, 2016 and newer. Deploying a server core installation has become more viable for the following reasons:

  • Smaller footprint;
  • More secure, and with tools like RSAT, remote PowerShell and Windows Admin Center a GUI may no longer be required, provided the workload can run on a Server Core installation (see the short example after this list);
  • Easy to manage with the remote tools mentioned before, and it requires less updating.
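A quick sketch of what that remote management looks like in practice (the server name is a placeholder):

    # Manage the Server Core machine from an admin workstation over PowerShell remoting
    Enter-PSSession -ComputerName 'dp01.contoso.com'

    # Inside the session: for example, list the roles and features that are installed
    Get-WindowsFeature | Where-Object { $_.Installed }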

Well, Configuration Manager is one of those tools that remains strongly dependent on a GUI, except for the Distribution Point role; see https://docs.microsoft.com/en-us/sccm/core/plan-design/configs/supported-operating-systems-for-site-system-servers for more information.

Unfortunately, you will lose the ability to deploy with PXE and multicast, since Windows Deployment Services is not available on Server Core (see https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831764(v=ws.11), which applies to Windows Server 2016 and newer as well), so you need to use media. I’d recommend using bootable media only, since it doesn’t change that often. In the past this would have been terrible, but image building and deployment have lost much of their importance with Windows 10, and that is something I’ve noticed as well. Nowadays I hardly ever recommend building reference images and instead consider plain unattended setups that include drivers, updates, applications and other content. The actual deployment may take a bit longer, but it provides absolute flexibility.

The only scenarios where PXE and multicast remain more viable are mass deployments at places such as schools and universities, but this is just my opinion…

A Configuration Manager site deployment mostly consists of at least three servers:

  • Site Server & Site Database Server (yes, a locally installed SQL instance);
  • Management Point, SUP, Application Catalogs and others except Distribution Point;
  • Distribution Point.

A Distribution Point is something that I normally don’t protect by some sort of backup mechanism. If a DP is broken, just reinstall and redistribute all content.

OK, so now to my first impressions, here they are:

  • A clean Server Core installation misses some basic prerequisites, e.g. Remote Differential Compression;
  • After adding the server as a Distribution Point, these prerequisites are not automatically installed;
  • Data Deduplication works like a charm;
  • Distribution of content fails due to the missing prerequisites.

So eventually this means it’s recommended to install the prerequisites yourself before adding the server as a Distribution Point, as shown in the sketch below. Fortunately, this is not difficult and will prevent a lot of frustration.
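A minimal sketch of installing those prerequisites up front on the Server Core machine (Remote Differential Compression, plus Data Deduplication for the content drive if you want it; the drive letter is a placeholder):

    # Install the Distribution Point prerequisites before adding the site system role
    Install-WindowsFeature -Name RDC

    # Optional: Data Deduplication for the drive that will hold the content library
    Install-WindowsFeature -Name FS-Data-Deduplication
    Enable-DedupVolume -Volume 'D:' -UsageType Default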

After that, it just works the same way as a GUI based server but without the overhead you don’t really need anyway. Except when you need PXE or multicast…

 

 

ConfigMgr: An attempt to liberate yourself from managing drivers

This attempt may not be suitable for the faint-hearted and it may be interpreted as if I’m dropping a bomb, but here it goes.

In all those years working with Configuration Manager, managing drivers for devices has remained a daunting task. It is time consuming and requires a lot of administrative effort and storage. It is also difficult to explain to customers how to deal with it properly; just not my kind of fun…

With the release of Windows 10 and Microsoft’s approach with the semi-annual update channels it may make sense to reevaluate the daunting task of driver management.

Wouldn’t it be great if it could be thrown out of the window (no pun intended) so you don’t have to bother with it anymore?

Well, the answer is yes if you meet the following requirements:

Microsoft has also redesigned update deployment for Windows 10. The number of updates has been significantly reduced by merging all updates into a single monthly bundle, which also increases the build revision of Windows 10. From Windows 10 1607 onwards, a feature called ‘Dual Scan’ has been introduced as well, and you may even wonder whether you can throw update management in Configuration Manager out of the window too. I understand this may be hard to let go, but it releases you from a lot of administrative effort, provided the required processes and company policies are in place to allow this to be automated…

To summarize it all: wouldn’t it be great to have a fully patched machine, including all drivers, right after deployment?

After investigating, I found an old but still valid approach by Chris Nackers which is available at http://blogs.catapultsystems.com/cnackers/archive/2011/04/28/using-ztiwindowsupdate-wsf-to-install-updates-in-a-system-center-configuration-manager-task-sequence/

I followed the steps, except that I deliberately did not set the variable ZTIWindowsUpdate.wsf uses to point to a WSUS server, so the script goes to Microsoft Update and retrieves all required updates from there. Additionally, I checked the ‘Continue on error’ checkbox to make sure the task sequence continues in case an update installation fails. During testing I noticed that an old printer driver failed to update while the rest installed properly; enabling ‘Continue on error’ is easier than collecting all exit codes.

In my scenario, it looks like this.

Alternatively, you can place the step after installing all applications so they may be updated as well.

Of course this requires some testing; if some devices are not installed because a driver is not available on Microsoft Update, you can add those drivers yourself.

Since Microsoft likes GitHub so much, you can even download ZTIWindowsUpdate.wsf (and ZTIUtility.wsf) and edit them to your liking (e.g. reducing the number of retries); you can find them at https://github.com/monosoul/MS-Deployment-toolkit-scripts/tree/master/Scripts

 

The result: the deployment may take some time, but you end up with a fully updated machine and don’t need to bother with managing drivers afterwards.

Also, allowing Dual Scan keeps drivers up to date after deployment as well…

 

Installing Windows 10 over the Internet, how cool is that?

I’ve been planning to do this for a while, but time constraints and, to a lesser extent, a lack of motivation prevented me from doing so.

To be honest, I’ve never understood why Microsoft doesn’t offer installing Windows 10 (or many previous versions of Windows) over the Internet. I don’t consider it revolutionary; after all, installing an operating system over the Internet has been available for Linux for quite some time.

Nowadays, more and more organizations are cleaning up their on-premises infrastructures and moving them to the cloud. While this is great, it may create some challenges for deploying clients when no local infrastructure is available anymore to facilitate this. Many organizations would also like to use their own reference images.

A certain technology caught my attention that would make this possible for now: Azure File shares.

Azure File shares allow organizations to deploy Windows 10 using a network installation. The only difference is that the network share lives at an Azure location of your choice.

To keep this simple, I created an Azure File share using the following instructions:

To make this work, communication over port 445 needs to be allowed. Some ISPs block this port, which would completely defeat this approach.
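For reference, creating the share itself can be scripted roughly as follows (a sketch assuming the Az PowerShell module; the resource group, storage account and share names are placeholders):

    # Storage account and file share that will hold the Windows 10 installation files
    $rg = 'rg-deploy'
    New-AzStorageAccount -ResourceGroupName $rg -Name 'w10deploystore' -Location 'westeurope' -SkuName Standard_LRS

    $key = (Get-AzStorageAccountKey -ResourceGroupName $rg -Name 'w10deploystore')[0].Value
    $ctx = New-AzStorageContext -StorageAccountName 'w10deploystore' -StorageAccountKey $key
    New-AzStorageShare -Name 'w10media' -Context $ctx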

Once the Azure File share is created and access is available, all that needs to be done is copying either the Windows 10 installation files or reference images you created yourself to the share. I chose to place the Windows 10 installation files on the share.

The next step is creating a WinPE boot CD using the instructions available at https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/winpe-create-a-boot-cd-dvd-iso-or-vhd

After creating the ISO file, I simply created a bootable USB drive and copied the files onto it. I also added a simple .cmd file that mounts the network share (to avoid typing errors), using the instructions at https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-windows
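The mount in that .cmd file boils down to a single command; shown here in PowerShell form, although the same net use line works from a plain .cmd file (the storage account name, share name and key are placeholders):

    # Map the Azure File share from WinPE; the user is AZURE\<storage account>, the password is the storage key
    $storageKey = '<storage account key>'
    net use Z: '\\w10deploystore.file.core.windows.net\w10media' /user:AZURE\w10deploystore $storageKey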

To start the installation, I took the following steps:

  1. Boot from the USB drive
  2. Verify the network drivers are loaded and an IP-address has been assigned to the NIC
  3. Mount the Azure File Share
  4. Browse to the installation files
  5. Start setup.exe

After providing the required information for Windows setup, the installation was running. I tested this scenario at home, where I am quite fortunate to have a decent Internet connection (300 Mbit/s up/down FTTH). I wouldn’t really recommend this if network bandwidth is limited, unless you have a lot of time on your hands. Nevertheless, using your own reference images and deploying machines while employees are sleeping may also work, but that’s up to you.
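To put that in perspective, here is a rough back-of-the-envelope calculation for pulling the installation files over the share (assuming roughly 5 GB of data; the speeds and resulting times are only estimates and ignore protocol overhead):

    # Rough transfer time for ~5 GB of installation data at different line speeds
    $dataGB = 5
    foreach ($mbit in 300, 50, 10) {
        $seconds = ($dataGB * 8 * 1024) / $mbit   # GB -> megabits, divided by Mbit/s gives seconds
        '{0,4} Mbit/s : roughly {1:N0} minutes' -f $mbit, ($seconds / 60)
    }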

For the sake of this blog, I couldn’t be bothered to automate my deployment.

What would be really interesting is to see whether I can place an MDT deployment share on an Azure File share to deploy Windows over the Internet. I would also be very interested to see Microsoft allow Windows 10 deployments over the http(s) protocol, without bothering about shares at all. Ultimately, I’d like to run Windows setup over http(s) directly from Microsoft and have Microsoft maintain the setup with updates.

Seriously, how cool would that be?

 

Posted by on 18/02/2018 in Uncategorized

 
 