
Looking back at 2018

The time to look back at 2018 has come a bit sooner than expected due to an enforced shutdown at my current employer. That suits me fine, as I don’t have to worry about it right before the year ends…

2018 is a year that introduced a lot of changes for me. Until October 2018 I worked at SCCT BV, before making the switch to DXC Technology. Certain events I encountered this year justified this switch for me. The main reason is that I wanted to be a specialist again, with a focus on Microsoft Azure (and I get Azure Stack for free). This means I’ve decided to let go of my previous experience in Enterprise Client Management (ECM): I will no longer work with System Center Configuration Manager, the rest of System Center, Hyper-V or Microsoft Intune. So don’t expect any blog posts on those…

I was becoming too much of a generalist while being presented as a specialist in all of these technologies at the same time. Basically, if you claim to be a specialist at everything, you are a specialist at nothing.

An interesting aspect of making this switch is learning how an employer reacts to your resignation, especially when you’ve been working for them for quite some time (4.5 years). Apparently, not all employers handle it well, and I find that SCCT BV didn’t react well to my resignation. I find that a shame, unnecessary and quite immature. An employer’s behavior can have a serious impact on their reputation; after all, it takes years to build a reputation and just five minutes to lose it completely. It also taught me to find out how an organization is structured before joining an employer in the future. But I hope I won’t have to do that again…

Fortunately, I expect to find a lot of good opportunities in my new role at DXC Technology. The biggest improvement I’ve found so far is the work/life balance. It allows me to take much better care of my health, and I already see results (I lost quite some weight, and I need to lose some more). On top of that, I can work anywhere I want: DXC Technology facilitates working from home properly, which helps my performance a lot. And I get to travel sometimes, which is nice too.

So hopefully I’ll have some stuff to blog about in 2019. It will most likely be Azure or Azure Stack related.

I wish everyone a prosperous 2019!!!

 

 


Posted by on 21/12/2018 in Opinion, Rant

 

Case study: Availability Sets vs. Availability Zones

One of the biggest challenges customers face is making sure a highly available solution survives a catastrophic failure at the fabric layer of Microsoft Azure: think of servers, storage, network devices, power and cooling. Not having to care about the fabric layer is one of the main reasons organizations consider running their workloads in Azure in the first place.

However, Azure locations are not magic castles that are invulnerable to catastrophic failures or other natural disasters. The potential magnitude of a disaster should make organizations think about scenarios to safeguard (more or less) the availability of their workloads. After all, Microsoft and their customers have a shared responsibility for keeping the lot running.

Maintaining high availability within a single region provides two options:

  • Availability Sets: allow workloads to be spread over multiple hosts and racks, but they remain within the same data center;
  • Availability Zones: allow workloads to be spread over multiple data centers within a region, so you automatically don’t care which host the workload runs on.

The following picture displays the difference between the failures covered and the SLA percentage. Obviously, Availability Zones offer higher protection against failures. Region pairs are beyond the scope of this post…

The beauty of both scenarios is that the VNet required to connect an Azure VM is not bound to a single data center (a.k.a. an Availability Zone); it is stretched over the whole region.
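To make the difference concrete, here is a minimal sketch using the Az PowerShell module; the resource names and the image are placeholders of my own, not taken from an actual deployment:

  # Option 1: Availability Set -- spread the VM over hosts/racks within one data center.
  New-AzAvailabilitySet -ResourceGroupName 'rg-demo' -Name 'avset-demo' `
    -Location 'westeurope' -Sku Aligned `
    -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 5
  New-AzVM -ResourceGroupName 'rg-demo' -Name 'vm-avset' -Location 'westeurope' `
    -AvailabilitySetName 'avset-demo' -Image 'Win2016Datacenter' `
    -Credential (Get-Credential)

  # Option 2: Availability Zone -- pin the VM to zone 1 of the region,
  # physically separated from zones 2 and 3.
  New-AzVM -ResourceGroupName 'rg-demo' -Name 'vm-zone1' -Location 'westeurope' `
    -Zone 1 -Image 'Win2016Datacenter' -Credential (Get-Credential)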

So I thought, let’s try this out with a typical workload that requires a high level of availability and can sustain failures pretty well. My choice was to host a SQL Server failover cluster (not an Always On Availability Group) with additional resiliency using Storage Spaces Direct. Using all these techniques to maintain uptime, how cool is that?

I used the following guides to deploy a two-node Windows Server 2016 cluster:

Actually, I built two SQL Server S2D clusters. Both clusters were completely the same (two DS11 VMs, each with two P30 disks), except one was configured with an Availability Set and the other with an Availability Zone.

What makes the difference is the requirement for the Azure Load Balancer. You need an Azure Load Balancer in front of the cluster so clients reach the active node; its health probe determines which node currently holds the cluster IP address. Looking at the Azure Load Balancer overview, available at https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview, you can see that you need the Standard SKU when using Availability Zones. When using an Availability Set, the Basic SKU is sufficient. That’s actually the only difference when deploying a SQL Server cluster using S2D. However, since the Load Balancer is an internal one anyway, I’d recommend using the Standard SKU in both cases. From a pricing perspective, I don’t believe it makes much of a difference, and if the penalties for downtime are severe, I wouldn’t nitpick about this anyway.
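For reference, a minimal sketch of such an internal Standard SKU load balancer with the Az PowerShell module; all names, addresses and ports are hypothetical placeholders:

  # The subnet the cluster nodes live in.
  $subnet = (Get-AzVirtualNetwork -ResourceGroupName 'rg-sqlcluster' -Name 'vnet-sql').Subnets[0]

  # Frontend IP: the floating cluster IP address clients connect to.
  $fe = New-AzLoadBalancerFrontendIpConfig -Name 'fe-sqlcluster' `
    -SubnetId $subnet.Id -PrivateIpAddress '10.0.0.10'

  # Backend pool: both cluster nodes get added to this pool.
  $be = New-AzLoadBalancerBackendAddressPoolConfig -Name 'be-sqlcluster'

  # Health probe: only the node that owns the cluster IP answers on this port.
  $probe = New-AzLoadBalancerProbeConfig -Name 'probe-sqlcluster' -Protocol Tcp `
    -Port 59999 -IntervalInSeconds 5 -ProbeCount 2

  # Floating IP (direct server return) is required for a failover cluster IP.
  $rule = New-AzLoadBalancerRuleConfig -Name 'rule-sql' -Protocol Tcp `
    -FrontendPort 1433 -BackendPort 1433 -EnableFloatingIP `
    -FrontendIpConfiguration $fe -BackendAddressPool $be -Probe $probe

  New-AzLoadBalancer -ResourceGroupName 'rg-sqlcluster' -Name 'lb-sqlcluster' `
    -Location 'westeurope' -Sku Standard -FrontendIpConfiguration $fe `
    -BackendAddressPool $be -Probe $probe -LoadBalancingRule $rule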

 

 

Posted by on 20/09/2018 in Uncategorized

 

Manageability and responsibility for Cloud Services

Many organizations are facing challenges when moving their IT services to the public cloud. For the sake of this post, I focus solely on Microsoft Azure, although I am aware that other cloud providers have similar approaches and models…

As we’re all aware, three categories of cloud services exist:

  • Infrastructure as a Service (IaaS);
  • Platform as a Service (PaaS);
  • Software as a Service (SaaS).

Each category has its own level of management: some elements are managed by the cloud provider, and the rest is managed by yourself. The amount of management differs per category, as displayed in the picture below.

As you can see, SaaS services are completely managed by the cloud provider, which is great. A consequence of this approach is that if a Line of Business (LoB) application can be replaced by a SaaS alternative, it really makes sense to do so. Looking at IaaS and PaaS, you can see that the amount of management done by the cloud provider is higher with PaaS than with IaaS. This means the following recommendations can be made:

  • Replace/migrate existing applications to SaaS services. This relieves the IT department of the daily tasks of managing them;
  • Consider using PaaS services as much as possible. This also lowers the administrative effort of managing cloud services by the IT department. Additionally, certain PaaS services allow developers to develop and deploy immediately to the service (e.g. an Azure Web App), making them independent of an IT pro to facilitate the service, as the sketch below illustrates.
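To illustrate how little infrastructure such a PaaS deployment involves, a minimal sketch with the Az PowerShell module (all names are hypothetical; a web app name must be globally unique):

  # A resource group, an App Service plan and the web app itself are all that is
  # needed; the underlying VMs, OS and runtime are managed by Azure.
  New-AzResourceGroup -Name 'rg-webapp' -Location 'westeurope'
  New-AzAppServicePlan -ResourceGroupName 'rg-webapp' -Name 'plan-demo' `
    -Location 'westeurope' -Tier Basic
  New-AzWebApp -ResourceGroupName 'rg-webapp' -Name 'webapp-demo-12345' `
    -Location 'westeurope' -AppServicePlan 'plan-demo'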

However, less management doesn’t mean less responsibility. Even though cloud services reduce the management burden, the organization doesn’t stop being responsible. Microsoft has released documentation on the shared responsibility between the customer and itself; this guide is available at http://aka.ms/sharedresponsibility. From the guide I took the following screenshot, showing a diagram of the responsibilities.

 

As you can see, the customer still has some responsibility, even when using SaaS services. Together, these models allow a customer to define a strategy for moving to the cloud…

 

 

Posted by on 05/09/2018 in Azure, Public Cloud

 

Ensure IT Governance using Azure Policy…

Many organizations face challenges using Microsoft Azure in a controlled way. The high (and still increasing) number of services and the scale of Microsoft Azure can make it pretty overwhelming to maintain control and enforce compliance with IT governance, also known as company policy. How great would it be if organizations could enforce their IT governance in Microsoft Azure?

Well, meet Azure Policy.

Azure Policy allows IT organizations to enforce compliance on the Azure resources they use. Once a policy is applied, it reports compliance on existing Azure resources, and it is enforced on newly created ones. A full overview of Azure Policy is available at https://docs.microsoft.com/en-us/azure/azure-policy/azure-policy-introduction.

For my own subscription, which I use for testing purposes only, I enforced a single policy that defines which Azure location I am allowed to use. In my case, the location is West Europe, which is more or less around the corner for me. Adding Azure resources in a different location after applying the policy results in an error message.

The screenshot below displays my configuration for this Policy.
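The same assignment can also be made programmatically; a minimal sketch with the Az PowerShell module (the assignment name is hypothetical):

  # Look up the built-in 'Allowed locations' policy definition...
  $definition = Get-AzPolicyDefinition |
    Where-Object { $_.Properties.DisplayName -eq 'Allowed locations' }

  # ...and assign it at subscription scope, allowing West Europe only.
  New-AzPolicyAssignment -Name 'allowed-locations' `
    -Scope "/subscriptions/$((Get-AzContext).Subscription.Id)" `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{ listOfAllowedLocations = @('westeurope') }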

The overview provides many more examples of typical policies that can be applied. The ones that come to my mind would most likely be:

  • Allowed locations;
  • Allowed Virtual Machine SKUs;
  • Tagging;
  • White-listing Azure resources.

Before applying a policy, I’d strongly recommend investigating your IT governance, if available. Once that is in place, you should be able to ‘translate’ it into Azure Policy.

 

Posted by on 21/08/2018 in Azure, Public Cloud

 

ConfigMgr: a second attempt to REALLY liberate yourself from driver management…

In a previous post, I made an attempt to use Microsoft Update for downloading and installing all drivers during an operating system deployment task sequence with System Center Configuration Manager or the Microsoft Deployment Toolkit. This approach works pretty well as long as hardware vendors use components whose drivers are published on Microsoft Update. It requires some testing, and if something’s missing, alternative methods are available.

However great this works during deployment, how about maintaining drivers during normal operation? After all, since drivers are not managed in this scenario, new drivers need to keep coming in whenever they are updated. As we all know, System Center Configuration Manager doesn’t support deploying drivers using Software Updates, since the update classification ‘Drivers’ is not available (it is in WSUS, though), so that’s not an option.

Fortunately, since Windows 10 1607 a feature called Dual Scan is available, which can be used in conjunction with Software Updates in System Center Configuration Manager. This allows organizations to use both sources for managing updates, so Microsoft Update can be used to update drivers.

The easiest way to do this is to deploy Windows Update for Business policies with System Center Configuration Manager (assuming Intune is not used). All that needs to be done is follow the instructions at https://docs.microsoft.com/en-us/sccm/sum/deploy-use/integrate-windows-update-for-business-windows-10#configure-windows-update-for-business-deferral-policies

Within a policy, you can include drivers by checking the option ‘Include drivers with Windows Update’. Roughly said, you can kiss driver management in System Center Configuration Manager goodbye.
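As far as I know, this option corresponds to the ExcludeWUDriversInQualityUpdate value under the Windows Update policy key, so a quick client-side check could look like the sketch below (my own verification approach, not an official one):

  # Inspect the Windows Update for Business policy on a client.
  $key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
  $value = (Get-ItemProperty -Path $key -ErrorAction SilentlyContinue).ExcludeWUDriversInQualityUpdate
  if ($value -eq 1) {
    Write-Output 'Drivers are excluded from Windows Update.'
  } else {
    # Value 0 or absent: drivers are included.
    Write-Output 'Drivers are included with Windows Update.'
  }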

Despite the availability of good tools from vendors such as HP and Dell, managing drivers in System Center Configuration Manager remains a dreadful task, so this approach may reduce administrative effort dramatically…

ConfigMgr: first impressions deploying a Distribution Point on a server core installation…

Recently I’ve been investigating deploying server core installations of Windows Server 2012 R2, 2016 and newer. Deploying a server core installation has become more viable for the following reasons:

  • Smaller footprint;
  • More secure; with tools like RSAT, Remote PowerShell and Windows Admin Center, a GUI may no longer be required if the workload can run on a server core installation;
  • Easier to manage with the remote tools mentioned before, and it requires less updating.

Well, Configuration Manager is one of those tools that remains strongly dependent on a GUI, except for the Distribution Point role; see https://docs.microsoft.com/en-us/sccm/core/plan-design/configs/supported-operating-systems-for-site-system-servers for more information.

Unfortunately, you will lose the ability to deploy using PXE and Multicast, since Windows Deployment Services is not available on server core (see https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831764(v=ws.11); this applies to Windows Server 2016 and newer as well), so you need to use media. I’d recommend using bootable media only, since it won’t change that often. This would have been terrible in the past, but image building and deployment have lost their importance with Windows 10, which is something I noticed as well. Nowadays, I hardly ever recommend building reference images; I’d rather consider just unattended setups that include some extras (drivers, updates, apps and other things). The actual deployment may take a bit longer, but it provides absolute flexibility.

The only scenarios where PXE and Multicast remain more viable are mass deployments at places such as schools and universities, but this is just my opinion…

A Configuration Manager site mostly consists of at least three servers:

  • Site Server & Site Database Server (yes, a locally installed SQL Server instance);
  • Management Point, Software Update Point, Application Catalog and others, except the Distribution Point;
  • Distribution Point.

A Distribution Point is something I normally don’t protect with some sort of backup mechanism. If a DP breaks, just reinstall it and redistribute all content.

OK, now on to my first impressions; here they are:

  • A clean server core installation misses some basic prerequisites, e.g. Remote Differential Compression;
  • After adding the server as a Distribution Point, these basic prerequisites are not automatically installed;
  • Data Deduplication works like a charm;
  • Distribution of content fails due to the missing prerequisites.

So eventually, this means it’s recommended to install the prerequisites yourself before adding the server as a Distribution Point, as sketched below. Fortunately, this is not difficult and will prevent a lot of frustration.
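A minimal sketch of what that could look like (feature names as listed by Get-WindowsFeature; the content volume is a placeholder):

  # Pre-install the prerequisites before adding the Distribution Point role;
  # RDC is Remote Differential Compression.
  Install-WindowsFeature -Name RDC, FS-Data-Deduplication

  # Enable Data Deduplication on the content volume.
  Enable-DedupVolume -Volume 'D:' -UsageType Default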

After that, it just works the same way as a GUI-based server, but without the overhead you don’t really need anyway. Except when you need PXE or Multicast…

ConfigMgr: An attempt to liberate yourself from managing drivers

This attempt may not be suitable for the faint-hearted, and it may be interpreted as if I’m dropping a bomb, but here it goes.

In all those years of working with Configuration Manager, managing drivers for devices has remained a daunting task. It is time consuming and requires a lot of administrative effort and storage. It is also difficult to explain to customers how to deal with it properly; just not my kind of fun…

With the release of Windows 10 and Microsoft’s approach of semi-annual update channels, it may make sense to reevaluate the daunting task of driver management.

Wouldn’t it be great if it could be thrown out of the window (no pun intended) so you don’t have to bother about it anymore?

Well, the answer is yes if you meet the following requirements:

Microsoft has also redesigned update deployment for Windows 10. The number of updates has been significantly reduced by merging all updates into a single monthly bundle, which also increases the build version of Windows 10. From Windows 10 1607 onwards, a feature called ‘Dual Scan’ has been introduced as well, and you may even wonder whether you can throw update management in Configuration Manager out of the window too. I understand this may be hard to let go, but it releases you from all that administrative effort as well, provided the required processes and company policies are in place to allow all of this to be automated…

To summarize it all: wouldn’t it be great to have a fully patched machine, including all drivers, right after deployment?

After investigating, I found an old but still valid approach by Chris Nackers, which is available at http://blogs.catapultsystems.com/cnackers/archive/2011/04/28/using-ztiwindowsupdate-wsf-to-install-updates-in-a-system-center-configuration-manager-task-sequence/

I followed the steps, except for setting the variable required by ZTIWindowsUpdate.wsf: by not setting it, the script goes to Microsoft Update and retrieves all required updates from there. Additionally, I checked the ‘Continue on error’ checkbox to make sure the task sequence continues in case update installation fails. During testing I noticed that an old printer driver failed to update while the rest installed properly; enabling ‘Continue on error’ is easier than collecting all possible exit codes.
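For reference, the step itself is a plain ‘Run Command Line’ task that calls the script. Assuming the MDT toolkit files are available in the task sequence and a Gather step has run (so %SCRIPTROOT% resolves), the command could look like this:

  cscript.exe "%SCRIPTROOT%\ZTIWindowsUpdate.wsf"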

In my scenario, it looks like this.

Alternatively, you can place the step after installing all applications so they may be updated as well.

Of course, this requires some testing; if some devices are not installed because a driver is not available on Microsoft Update, you can add those drivers yourself.

Since Microsoft likes GitHub so much, you can even download ZTIWindowsUpdate.wsf (and ZTIUtility.vbs) and edit it to your liking (e.g. reducing the number of retries); you can find it at https://github.com/monosoul/MS-Deployment-toolkit-scripts/tree/master/Scripts

 

The result is that deployment may take some time, but you get a fully updated machine and don’t need to bother with managing drivers afterwards.

Also, enabling Dual Scan will keep drivers updated afterwards, taking care of that part of maintaining the device as well…

 
 