
Category Archives: Uncategorized

Thoughts on the standard Image SKU vs. the 'Smalldisk' Image SKU

For a long time, organizations running Windows VM instances in Microsoft Azure had no options regarding the size of the OS disk: the default is 127 GB and this hasn't changed. Quite a while ago, Microsoft introduced Windows Server images with a smaller OS disk of only 32 GB, as announced at https://azure.microsoft.com/nl-nl/blog/new-smaller-windows-server-iaas-image/

Yes, I admit this may be old news, but I hadn't given much thought to how to approach it when these images became available, until recently…

More and more, I'm involved in providing ARM templates for my customers, with my main focus on Azure IaaS deployments.

Together with Managed Disks, it has become pretty easy to determine sizing for Azure VM instances, and having both Image SKUs available provides options.

However, while creating these ARM templates I noticed that I prefer the 'Smalldisk' Image SKU over the standard one, and the explanation is actually pretty simple.

For this post, I will use the following ARM template as a reference: https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-simple-windows

Looking at the “Properties” section of the Virtual Machine resource, you can see the relevant part of the OS Disk configuration:

"osDisk": {
    "createOption": "FromImage"
},

In this configuration, the default size will be used which should be great in most scenarios. If a different size is required, then the notation may look like this:

"osDisk": {
    "createOption": "FromImage",
    "diskSizeGB": "[variables('OSDiskSizeinGB')]"
},

You can specify the size either as a variable or as a parameter. In this example I use a variable; its value must be one supported for managed disks. In my case I used the following value:

"OSDiskSizeinGB": "64"

OK, so nothing new here so far. However, to maintain maximum flexibility, you can stick to the 'Smalldisk' Image SKU, which has the smallest possible size of 32 GB. From there, the only way is up.

To optimize Azure consumption by paying only for what you use and what you REALLY need, it makes sense for organizations to create some governance and policies around sizing their Azure VM instances: not only for compute, but for storage as well. Managed Disks provide some guidance for that.

So for me, I'd focus on using the 'Smalldisk' Image SKU only and enlarge the disk when needed. It's pretty easy to do by adding one line to your ARM template for that VM, and an additional one for your variables…

 

Here’s my set of variables I use to select the ‘Smalldisk’ Image SKU:

"ImagePublisher": "MicrosoftWindowsServer",
"ImageOffer": "WindowsServer",
"ImageSKU": "2019-Datacenter-smalldisk",
"ImageVersion": "latest",

And here’s the relevant part of the Image reference:

"imageReference": {
    "publisher": "[variables('ImagePublisher')]",
    "offer": "[variables('ImageOffer')]",
    "sku": "[variables('ImageSKU')]",
    "version": "[variables('ImageVersion')]"
},
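
Putting the pieces together, the relevant part of the storageProfile in the VM resource could look like the sketch below. This simply combines the fragments shown above; adjust the variable names to your own template:

"storageProfile": {
    "imageReference": {
        "publisher": "[variables('ImagePublisher')]",
        "offer": "[variables('ImageOffer')]",
        "sku": "[variables('ImageSKU')]",
        "version": "[variables('ImageVersion')]"
    },
    "osDisk": {
        "createOption": "FromImage",
        "diskSizeGB": "[variables('OSDiskSizeinGB')]"
    }
},

Switching between the standard and the 'Smalldisk' image, or resizing the OS disk, is then just a matter of changing the variable values.
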
Hope this helps!

 

 

 
 

Case study: Availability Sets vs. Availability Zones

One of the biggest challenges customers face is making sure a highly available solution survives a catastrophic failure at the fabric layer of Microsoft Azure: think of things like servers, storage, network devices, power and cooling. Not having to care about the fabric layer is one of the main reasons organizations consider running their workloads in Azure in the first place.

However, Azure locations are not magic castles that are invulnerable to catastrophic failures or other natural disasters. Depending on the magnitude of a disaster, organizations should think about possible scenarios to safeguard (more or less) the availability of their workloads. After all, Microsoft and their customers share the responsibility of keeping everything running.

Maintaining high availability within a single region provides two options:

  • Availability Sets: allow workloads to be spread over multiple hosts and racks, but they remain within the same data center;
  • Availability Zones: allow workloads to be spread over multiple physical locations (data centers) within the region, so you automatically don't care which host the workload runs on.

The following picture displays the difference in terms of possible failures and SLA percentage. Obviously, Availability Zones offer higher protection against failures. Region pairs are beyond the scope of this post…

The beauty of both scenarios is that the VNet required to connect an Azure VM is not bound to a single data center (a.k.a. an Availability Zone); it is stretched across the whole region.

So I thought: let's try this out with a typical workload that requires a high level of availability and can sustain failures pretty well. My choice was to host a SQL failover cluster (not an Always On Availability Group) with additional resiliency using Storage Spaces Direct. Using all these techniques to maintain uptime, how cool is that?

I used the following guides to deploy a two node Windows Server 2016 cluster:

Actually, I built two SQL S2D clusters. Both clusters were identical (two DS11 VMs, each with two P30 disks), except that one was configured with an Availability Set and the other with Availability Zones.

What makes the difference is the requirement for the Azure Load Balancer. You need an Azure Load Balancer to direct traffic to the active node; a health probe determines which node currently holds the cluster IP address. Looking at the Azure Load Balancer overview, available at https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview, you can see that you need the Standard SKU when using Availability Zones; when using an Availability Set, the Basic SKU is sufficient. That's actually the only difference when deploying a SQL cluster using S2D. Since the load balancer is an internal one, I'd recommend using the Standard SKU in both cases. From a pricing perspective, I don't believe it makes much of a difference, and if the penalties for downtime are severe, I wouldn't nitpick about this anyway.
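
For illustration, here is a minimal sketch of how this could look in an ARM template; the variable names and the zone number are my own assumptions, not taken from any particular guide. For the Availability Zone variant, the zones property sits at the top level of the virtual machine resource:

"zones": [ "1" ],

For the Availability Set variant, the VM instead references the set from within its properties:

"availabilitySet": {
    "id": "[resourceId('Microsoft.Compute/availabilitySets', variables('AvailabilitySetName'))]"
},

And the internal load balancer for the zonal cluster is declared with the Standard SKU:

"sku": {
    "name": "Standard"
},
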

 

 

 

Installing Windows 10 over the Internet, how cool is that?

I've been planning to do this for a while, but time constraints (and, to a lesser extent, a lack of motivation) prevented me from doing so.

To be honest, I never understood why Microsoft doesn't offer installing Windows 10 (or many previous versions of Windows) over the Internet. To me, it isn't revolutionary at all; after all, installing an operating system over the Internet has been available for Linux for quite some time.

Nowadays, more and more organizations are cleaning up their on-premises infrastructures and moving them to the cloud. While this is great, it may pose some challenges for deploying clients when no local infrastructure is available anymore to facilitate this. Many organizations would also like to keep using their own reference images.

One technology caught my attention that makes this possible today: Azure File shares.

Azure File shares allow organizations to deploy Windows 10 using a network installation. The only difference is that the network share resides in an Azure location of your choice.

To keep this simple, I created an Azure File share using the following instructions:

To make this work, outbound communication over port 445 needs to be allowed. Some ISPs block this port, which would completely defeat this approach.

Once the Azure File share is created and access is available, all that needs to be done is to copy either the Windows 10 installation files or your own reference images to it. I chose to place the Windows 10 installation files on the share.

The next step is creating a WinPE boot CD using the instructions available at https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/winpe-create-a-boot-cd-dvd-iso-or-vhd

After creating the ISO file, I simply created a bootable USB drive and copied the files onto it. I also added a simple .cmd file that mounts the network share (to avoid typing errors), based on the instructions at https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-windows

To start the installation, I took the following steps:

  1. Boot from the USB drive
  2. Verify the network drivers are loaded and an IP-address has been assigned to the NIC
  3. Mount the Azure File Share
  4. Browse to the installation files
  5. Start setup.exe

After providing the required information for Windows setup, the installation was running. I tested this scenario at home, and I am quite fortunate to have a decent Internet connection (300 Mbit/s up/down FTTH). I wouldn't really recommend this if network bandwidth is limited, unless you have a lot of time on your hands. Nevertheless, using your own reference images and deploying machines while employees are sleeping may also work, but that's up to you.

For the sake of this blog, I couldn’t be bothered to automate my deployment.

What would be really interesting is to see whether I can place an MDT deployment share on an Azure File share to deploy Windows over the Internet. I would also be very interested if Microsoft allowed Windows 10 deployments over http(s), without bothering about shares at all. Ultimately, I'd like to run Windows setup over http(s) directly from Microsoft, with Microsoft keeping the setup files up to date.

Seriously, how cool would that be?

 

 

Upgrading to Configuration Manager CB, going all the way…

Well, it’s been a while since I wrote something about Configuration Manager. I worked a lot with this technology but I was never able to really move away from it. I guess it has something to do with experience. If you’re experienced with something and you’ve proven to be good at it, then people will request it…

The good side of this experience is that customers I worked with in the past ask me again to assist them with this technology…

Based on what I've seen so far, Windows 10 adoption is progressing steadily. With the release of the Fall Creators Update (1709), it is possible to join both Active Directory and Azure Active Directory. This allows coexistence of two management platforms for devices:

  • Configuration Manager
  • Intune

While it is possible to create a hybrid environment by using Intune as a stepping stone for mobile devices while managing them from Configuration Manager, I wouldn't recommend doing so, since I consider it no longer necessary and effectively obsolete. I was never a big fan of the Intune integration within Configuration Manager, but that is something for a different post.

For managing Windows 10 devices, the Configuration Manager Current Branch releases are strongly recommended because of their native support for Windows 10. Microsoft supports a number of in-place upgrade paths, as documented at https://docs.microsoft.com/en-us/sccm/core/servers/deploy/install/upgrade-to-configuration-manager

So recently I was asked to do an in-place upgrade of an existing System Center 2012 Configuration Manager SP1 site (a stand-alone primary site) running on a server with the following components:

  • Operating System: Windows Server 2012
  • SQL Version: 2012 Standard Edition SP1
  • ADK for Windows 8
  • Integrated MDT 2012 SP1

All components needed to be upgraded to the latest version; at that time, that meant the following:

  • Operating System: Windows Server 2016
  • SQL Version: 2016 Standard Edition
  • ADK for Windows 10 1709
  • Integrated MDT version 8443

Doing an in-place upgrade was technically and politically the best way to go.

So I got started by making a full backup of the site database and moving it to a different location (a file share). The next step was stopping all Configuration Manager services. I was then able to work through the following sequence, with a few challenges:

  • In-place upgrade to Windows Server 2016: I was forced to uninstall Endpoint Protection before upgrading
  • In-place upgrade to SQL 2016 Standard Edition: Needed to install SQL 2012 SP2 prior to upgrading to SQL 2016
  • ADK for Windows 8 had to be uninstalled prior to installing ADK for Windows 10 1709
  • In-place upgrade to Configuration Manager 1702 itself: after the upgrade, the IIS services were disabled, so they had to be enabled and started again. Some components initially failed to update, but they succeeded once the IIS services were running again
  • For MDT, I removed the ConfigMgr integration before uninstalling the old version and installing the latest one, after which I configured the ConfigMgr integration again for the new version

After upgrading, a small to-do list remained:

  • The WSUS post-installation task had to be run once more; apparently, the WSUS configuration was gone after upgrading
  • New MDT Boot Images had to be created
  • MDT Packages (Toolkit, Settings and USMT) needed to be created with the new version
  • Existing Task Sequences needed to be modified

To summarize, it all went pretty smoothly and the new Configuration Manager features can now be used.

After that, the site was upgraded to Configuration Manager 1706 using the Console…

 

Case study: Running Windows Server 2016 on a DataON CiB…

Recently, I was asked to investigate whether Windows Server 2016 would be a suitable OS for a DataON CiB (Cluster-in-a-Box) platform. Some new features of Windows Server 2016 are very exciting; the one that excites me most is Storage Spaces Direct (S2D). I set a goal by asking myself the following question:

Can I deploy a hyper-converged cluster using Hyper-V and Storage Spaces Direct with a CiB-9224 running Windows Server 2016?

The case study involves a CiB-9224V12 platform and I had the liberty to start from scratch on one of these babies.

[Image: CiB-9224 V12 front side view]

To figure out if this is possible, I took the following steps:

  1. I deployed Windows Server 2016 Datacenter on each node;
  2. I verified whether any device drivers were missing. A lot of Intel chipset-related devices had no driver (this may differ per model), so I installed the Intel chipset software. The Avago SAS adapter didn't need a new driver. NOTE: Microsoft Update can also be used to download and install the missing drivers
  3. I installed the required Roles & Features on both nodes: Hyper-V, Data Deduplication, Failover Clustering and Multi-path I/O
  4. I enabled Multi-Path I/O for SAS. This is a requirement for the SAS adapter to make sure the available disks are presented properly
  5. I created a failover cluster, using a file share witness located on a different server
  6. I attempted to enable Storage Spaces Direct, but got stuck at the ‘Waiting for SBL disks are surfaced, 27%’ step. Nothing happened after that.

 

I started troubleshooting to determine why this step couldn't be completed. I checked the S2D requirements again and found the following page:

https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-hardware-requirements

In the Drives section, I noticed an unsupported scenario for S2D that matches the configuration of the CiB-9224: MPIO, or physically connecting drives via multiple paths. After reading the requirements I stopped troubleshooting; an unsupported scenario means S2D is simply not possible here.

 

As a result, I created a storage pool without using S2D and presented the virtual disk to the cluster as a Cluster Shared Volume. I was not able to choose ReFS as the file system (it is not available when creating a volume this way), so I had to stick with NTFS, with Data Deduplication enabled.

So basically I used the ‘Windows Server 2012 R2’ solution to deploy the CSV using Storage Spaces.

With the CiB-9224 I’m not able to achieve my goal of deploying a hyper-converged cluster based on Microsoft’s definition of hyper-converged.

One question still remains: Would I recommend using Windows Server 2016 at a CiB-9224?

The answer is yes, because some new features of Windows Server 2016, for example Shielded VMs, are fully supported on this hardware.

 

DataON does have a hyper-converged S2D platform available; more information can be found here: http://dataonstorage.com/storage-spaces-direct/s2d-3110-1u-10-bay-all-flash-nvme-storage-spaces-direct-cluster-appliance.html

[Image: S2D-3110 front side view]

 

 

 

Live Maps Unity 7.5 with Operations Manager 2012: making dashboard views easier…

Recently, Savision announced Live Maps Unity 7.5. Shortly after the announcement, I finally had some time to take a look at it. One of my customers asked me to help them build a pristine OpsMgr 2012 R2 environment, and they stated they had already purchased Savision Live Maps as well. In this blog post, I share my impressions of Live Maps Unity 7.5 from a technical perspective and beyond.

A commonly asked question regarding 3rd party dashboard tools is: Why do I need something like that?

To give a clear answer, certain aspects of the IT environment need to be considered:

  • OpsMgr itself is a very IT-focused monitoring solution, which is quite far removed from the ‘real world’. Although OpsMgr delivers a very high level of detail about the IT environment, it can be quite challenging to present information that non-IT people understand. The business requires information about the availability of IT services: it would rather know whether email still works than which mailbox store is broken.
  • While OpsMgr has some native capabilities to build dashboards, I consider them quite inferior (even with the Visio add-in). It takes a lot of administrative effort to build and maintain them, and it just doesn't work the right way. On this challenge alone, I have had to advise previous customers against using OpsMgr for dashboards.

With these considerations in mind, the answer to the question regarding dashboards is a convincing yes.

Savision Live Maps delivers dashboards that the real world can understand and does all the work of creating them for you. This significantly lowers the administrative effort, allowing administrators to focus their daily tasks on managing their environment instead of managing the tools that manage their environment.

So I decided to have a go and asked for a trial license. I set up an environment in an Azure resource group, created a storage account and a virtual network, and created the following two machines (both running Windows Server 2012 R2):

  • 1 Domain Controller;
  • 1 Operations Manager 2012 R2 Management Server running a local SQL instance.

I imported the Active Directory and SQL Server Management Packs; importing these requires Windows Core Monitoring, so that one is included as well.

The next step was installing Live Maps Unity 7.5. I used the documentation from the Savision Training Center, available at https://www.savision.com/live-maps-training-center. The documentation is very foolproof and makes installing Live Maps Unity ridiculously easy.

The next step is creating the dashboards you need. After some playing around I was able to produce the following view:

[Screenshot: service view]

NOTE: I created an additional distributed application named mwesterink.lan, which contains just the two servers. I intentionally left some alerts open to show the color differences.

 

After playing around a little bit I conclude that Savision Live Maps Unity makes dashboarding significantly easier, especially when Management Packs deliver their own distributed applications.

Something as trivial as Service Level Monitoring is enabled by just a simple check box.

Even for IT pros, the more business-oriented view should be sufficient before drilling down to figure out whether any new issues are occurring.

I would even consider not using any notifications anymore at all.

 

However, a major decision factor is whether the license costs meet any Return on Investment (ROI) targets. In general, decision makers are only interested in meeting the ROI for projects; any ROI not met is considered a failure. Knowing how much time it takes to create your dashboards should allow the financial people to calculate how much administering these dashboards costs. I am almost certain the administrative effort will be reduced dramatically by having Live Maps Unity do all the work for you instead of building everything yourself. I didn't need any support from Savision to build something like this, so more experienced OpsMgr admins should certainly be able to use it. Savision does have engineers available when needed.

My final verdict: I’d definitely recommend using Live Maps Unity to present the IT infrastructure availability in OpsMgr.

 

 

A small test to verify that IIS is not affected by ‘Heartbleed’…

Last week, the Internet was alerted to the Heartbleed vulnerability in OpenSSL (CVE-2014-0160). It was quite a relief to see Microsoft provide a statement that IIS does not use OpenSSL but SChannel instead. Microsoft's statement is available here:

http://blogs.technet.com/b/erezs_iis_blog/archive/2014/04/09/information-about-heartbleed-and-iis.aspx

Fortunately, I'm currently helping a customer set up a ConfigMgr site server for internet-facing clients. The site server is already up and running and accepts connections from clients on the Internet. This allowed us to do a quick test to verify that IIS is not affected by the vulnerability.

 

We found the following website that allows us to do the quick test:

https://filippo.io/Heartbleed/

Enter the Internet FQDN of the internet-facing site server, select the option to ignore certificates and run the test.

The test gives us this result:

 

According to the FAQ, the ‘broken pipe’ message indicates that an unaffected web server such as IIS is being used, which we know is the case. The site obviously doesn't know for certain that we're running IIS, but that works for me…

 

 
 