Enrolling lots of Windows 10 devices in Microsoft Intune: why bother?

Recently I’ve been involved in a few Microsoft Intune deployments.

These are standalone environments, so no hybrid scenario with System Center Configuration Manager. As we all know, Microsoft Intune can be purchased separately, but that’s something I wouldn’t recommend. The pricing models of Enterprise Mobility + Security (EM+S) or Microsoft 365 Enterprise (a.k.a. Secure Productive Enterprise) give you a lot more benefits, making them a true bang for your buck. Organizations that fail to see this basically defeat themselves, because their competition does embrace this strategy. These subscriptions replace a lot of on-premises management tools, which liberates administrators from their daily task of extinguishing fires…

Microsoft Intune is included in EM+S E3 and Microsoft 365 Enterprise E3 (and in both E5 subscriptions). Both subscriptions also include Azure Active Directory Premium P1, which is a requirement for the goal this post is talking about: making Windows 10 device enrollment really simple.

Following the guidelines allows organizations to deliver automatic enrollment for Windows 10 devices when Azure Active Directory Premium is enabled for a user who is assigned an EM+S or Microsoft 365 Enterprise license. All features are enabled by default, so it’s there as long as we don’t fiddle around with them…

So what does this actually mean?

Well, it means that each user who receives a Windows 10 device (preferably Enterprise) will do the device enrollment for you during the OOBE phase of Windows 10. It doesn’t matter if your organization has 5, 50, 500, 5000 or more devices. How cool is that?

As long as all required licenses are in place, admins don’t need to bother with this at all…



My first Azure Stack TP2 POC deployment ends in disaster…

Today I had the opportunity to attempt my first Azure Stack TP2 POC deployment. Having a DataON CiB-9224 available allowed me to have a go at deploying an Azure Stack TP2 POC environment. I was able to do this after finishing some testing with Windows Server 2016 on the platform. The results of those tests are available at

Before I started testing I reviewed the hardware requirements which are available at

Unfortunately, a small part made me wonder whether I would actually succeed in deploying Azure Stack. Here’s a quote of the worrying part:

Data disk drive configuration: All data drives must be of the same type (all SAS or all SATA) and capacity. If SAS disk drives are used, the disk drives must be attached via a single path (no MPIO, multi-path support is provided).

Damn, again a challenge with MPIO. Such a shame since I meet all other hardware requirements.

So I decided to have a go anyway and figure out why MPIO is not supported, by deploying Azure Stack TP2 regardless. I followed the instructions at and waited to see what would happen…

I used a single node of the CiB-9224 with only four 400 GB SSD disks. I turned the other node off and disabled all unused NICs.

After a while I decided to check its progress and noticed that nothing was happening at a specific step (an hour had passed between the latest log entry and the time I went to check). Here’s a screenshot of where the deployment was ‘stuck’:


It seems the script was trying to enable Storage Spaces Direct (S2D). Knowing that S2D is not supported with MPIO, I terminated the deployment and wiped all data, because I knew I was going to be unsuccessful. At least I know why.
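In hindsight, a quick check with standard Storage and MPIO cmdlets would have revealed this blocking condition before the hours-long deployment. A sanity-check sketch (nothing here is specific to Azure Stack):

```powershell
# S2D requires locally attached, single-path disks. Any disk claimed by
# MPIO (or attached via multiple paths) will block the deployment.
Get-PhysicalDisk | Select-Object FriendlyName, BusType, CanPool

# List the hardware IDs currently claimed by the MPIO module
Get-MSDSMSupportedHW

# Check whether the Multipath I/O feature is installed on this node at all
Get-WindowsFeature -Name Multipath-IO
```

If `Get-MSDSMSupportedHW` returns entries covering the SAS disks, the deployment will hit the same wall.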

I didn’t meet all hardware requirements after all. Fortunately, it gave me some insight into how to deploy Azure Stack, so when I do have hardware that meets the requirements, at least I know what to do…

Looking at the requirements again, it’s obvious that the recommended way to go is with single-channel JBOD.




Case study: Running Windows Server 2016 on a DataON CiB…

Recently I was asked to investigate if Windows Server 2016 would be a suitable OS on a DataON CiB platform. Some new features of Windows Server 2016 are very exciting. The one that excites me the most is Storage Spaces Direct. I set a goal by asking myself the following question:

Can I deploy a hyper-converged cluster using Hyper-V and Storage Spaces Direct with a CiB-9224 running Windows Server 2016?

The case study involves a CiB-9224V12 platform and I had the liberty to start from scratch on one of these babies.


To figure out if this is possible, I took the following steps:

  1. I deployed Windows Server 2016 Datacenter on each node;
  2. I verified that no device drivers were missing. A lot of Intel chipset-related devices had no driver (this may differ per model), so I installed the Intel Chipset software. The Avago SAS adapter didn’t need a new driver. NOTE: Microsoft Update can also be used to download and install missing drivers;
  3. I installed the required Roles & Features on both nodes: Hyper-V, Data Deduplication, Failover Clustering and Multipath I/O;
  4. I enabled Multipath I/O for SAS, which is a requirement for the SAS adapter to present the available disks properly;
  5. I created a failover cluster, using a File Share Witness hosted on a different server;
  6. I attempted to enable Storage Spaces Direct but got stuck at the ‘Waiting for SBL disks are surfaced, 27%’ step. Nothing happened after that.
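The steps above (from the feature installation onward) roughly translate to the following PowerShell; the cluster, node and witness names are placeholders for this environment:

```powershell
# Step 3: install the required roles & features on both nodes
Install-WindowsFeature -Name Hyper-V, FS-Data-Deduplication, Failover-Clustering, Multipath-IO -IncludeManagementTools -Restart

# Step 4: let MPIO automatically claim SAS-attached disks
Enable-MSDSMAutomaticClaim -BusType SAS

# Step 5: create the cluster with a file share witness on another server
New-Cluster -Name 'CiB-Cluster' -Node 'Node1','Node2' -NoStorage
Set-ClusterQuorum -Cluster 'CiB-Cluster' -FileShareWitness '\\OtherServer\Witness'

# Step 6: this is the step that stalled at 27%
Enable-ClusterS2D
```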


I started troubleshooting to determine why this step couldn’t finish. I checked the requirements for S2D again and found the following website:

In the Drives section I noticed that an unsupported scenario for S2D exists that matches the configuration of the CiB-9224: MPIO, or physically connecting drives via multiple paths. After reading the requirements I stopped troubleshooting; an unsupported scenario means S2D is simply not possible.


In the end I created a Storage Pool without using S2D and presented the Virtual Disk to the cluster as a Cluster Shared Volume. I was not able to choose ReFS as a file system (it’s not available when creating a volume this way), so I had to stick with NTFS with Data Deduplication enabled.

So basically I used the ‘Windows Server 2012 R2’ approach of deploying the CSV using classic Storage Spaces.
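That fallback boils down to the classic Storage Spaces cmdlets. A sketch with placeholder pool and volume names, assuming a mirrored virtual disk:

```powershell
# Pool all eligible disks into a classic (non-S2D) Storage Pool
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'Pool1' -StorageSubSystemFriendlyName '*Storage Spaces*' -PhysicalDisks $disks

# Mirrored virtual disk on the pool
New-VirtualDisk -StoragePoolFriendlyName 'Pool1' -FriendlyName 'VDisk1' -ResiliencySettingName Mirror -UseMaximumSize

# Initialize, partition and format as NTFS (ReFS was not offered here)
Get-VirtualDisk -FriendlyName 'VDisk1' | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS

# Add the disk to the cluster and convert it to a Cluster Shared Volume
Get-ClusterAvailableDisk | Add-ClusterDisk | Add-ClusterSharedVolume

# Enable Data Deduplication on the CSV path
Enable-DedupVolume -Volume 'C:\ClusterStorage\Volume1' -UsageType HyperV
```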

With the CiB-9224 I’m not able to achieve my goal of deploying a hyper-converged cluster based on Microsoft’s definition of hyper-converged.

One question still remains: would I recommend using Windows Server 2016 on a CiB-9224?

The answer is yes, because some new features of Windows Server 2016, for example Shielded VMs, are fully supported on this hardware.


DataON does have a hyper-converged S2D platform available, more information can be gathered here:




Posted by on 19/01/2017 in Uncategorized


Looking back at 2016…

2016 is at its end, so it’s time to look back at it before 2017 kicks in.

Due to personal reasons, I’ve been blogging far less than before, but I expect to pick it up once more and blog more actively.

This year I experienced quite a dramatic change in the way I do my job. System Center-related projects have become somewhat nonexistent and my focus has changed completely to Microsoft Azure. I must admit that I like it quite a lot. Gone are the days of spending a couple of months designing and deploying a product from the System Center suite, aside from a single Operations Manager deployment. The most visited posts here are related to Configuration Manager, but I don’t work with it anymore, so don’t expect any new posts about Configuration Manager.

Technical deployments have also become a thing of the past. I’ve been working with customers adopting Microsoft Azure, and I’ve become more and more an advisor, helping them adopt Microsoft Azure as smoothly as possible. However, adopting Microsoft Azure has become more of a financial discussion than a technical one. Customers (at least in NL) are more interested in managing costs and are looking for ways to keep costs as low as possible.

The second half of 2016 introduced a change at the company I work at: SCCT BV. When Sander Berkouwer, an MVP and a true authority on Identity Management, joined SCCT BV, it changed the dynamics of my work as well. I received a lot more Identity Management-related questions from customers, as if SCCT BV had become the go-to company for Identity Management. This was pretty funny, since I don’t have that in-depth knowledge of Identity Management. Fortunately, I have co-workers who do, so I ended up delegating quite some work to them.

I was also able to speak at a WMUG_NL event about managing Azure costs. I hope I will be able to speak at more events in 2017.

Oh, and I passed two Amazon Web Services (AWS) exams as well. I guess I’m one of the few in NL who has passed both the Azure and AWS architecture exams. Hopefully I’ll get to work a bit more with AWS too, since they have a great set of services and provide new challenges…

While on a personal level 2016 was very challenging, it was quite a year professionally…

See you in 2017!!!


Posted by on 31/12/2016 in Opinion


Manage your Azure Bill part 2: the operating phase…

Customers who already use Microsoft Azure through one or more subscriptions may face some challenges getting insight into their Azure spending. Quite often customers ask me how to get that insight: they are looking for ways to see, in a presentable way, where the money goes. Fortunately, it’s pretty easy to answer this question, but the answer depends on the contract they have. Contracts can be sorted into two categories:

  1. Enterprise Agreement (EA) contracts
  2. All other ones (Pay-as-you-go, CSP etc.)

Customers with EA contracts can use Power BI (Pro) to generate their reporting quite easily. Power BI Pro is included for all users with an Office 365 E5 license. The Azure Enterprise content pack is available from the Power BI portal (the picture is in Dutch, but you can do the math).


Customers with all other contract types can build their own environment using the Azure Usage and Billing Portal. Instructions on how to build it can be found at There are some catches, but it’s pretty easy to build; I got it running in my MSDN subscription easily. Once the environment is up and running and the billing data is in the database, it can be queried and processed in any way the customer chooses.

Alternatively, 3rd party vendors offer services to present the Azure spending but that’s for another day…


Posted by on 31/12/2016 in Azure, Cloud, Public Cloud, Revenue


Manage your Azure Bill part 1: the planning phase…

2016 was the year that cloud adoption finally got going.

More and more organizations are reconsidering their IT strategy by embracing Microsoft Azure to run their workloads on. The most common reason to move workloads to Microsoft Azure is that organizations no longer need to make the hardware investments themselves, making that Microsoft’s problem instead.

The biggest challenge customers face is estimating how much it will cost to use Azure resources. I ranted about this before at The Azure Pricing Calculator can help quite a bit, but it just doesn’t cut it: it provides an estimate per Azure service, but it doesn’t provide the bigger picture.

Fortunately, Microsoft has released the Azure TCO Calculator, which allows organizations to make a much more comprehensive calculation of their Azure spending. It also compares the costs to running the workloads on-premises, although it is quite biased, tending to conclude that running the workloads on Azure is cheaper. As my co-worker Sander Berkouwer states about many things: trust, but verify! I couldn’t agree more, since organizations need to analyze and understand their own workloads.

The Azure TCO Calculator is available at

This should really get organizations going with embracing Microsoft Azure in 2017!!!


Posted by on 31/12/2016 in Azure, Cloud, Revenue


Building a Storage Spaces Direct (S2D) cluster and succeeding…

As described in my previous post, I failed miserably building an S2D cluster. Fortunately, it was just a small matter of reading this whitepaper properly, which states that only local storage can be used. We all know iSCSI storage is not locally attached, so it makes perfect sense that it doesn’t work. But at least I tested and verified it…

OK, so knowing that S2D works with DAS storage only, it’s time to test and verify whether it’s difficult to build an S2D cluster.

To build the cluster, I’m going to follow this guide. I use two FS1 Azure VMs and attach one P10 disk to each node.

So I follow the steps to build the cluster.

The first step is to enable S2D, which works fine.


NOTE: as in my previous post, the CacheMode parameter is not there. Since it is still mentioned in the guide, this may be a bit confusing.

The next step is creating a Storage Pool for S2D.


Hmm, that’s odd. Apparently two disks are insufficient. So let’s add two more, one per node, resulting in four disks.


OK, so now I can continue and build an S2D cluster disk of 250 GB.


The final step is creating a share according to the guide.


Hmmm, this fails too…

Well, I was able to create the share using the Failover Cluster Manager console, by configuring the role as a Scale-Out File Server (SOFS) and providing a ‘Quick’ file share.
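Condensed, the path that worked looks roughly like this; all friendly names are placeholders, and the volume size matches the 250 GB disk mentioned above:

```powershell
# Enable S2D on the cluster (the CacheMode parameter is gone)
Enable-ClusterS2D

# The pool wanted at least four disks, i.e. two P10 data disks per node
New-StoragePool -FriendlyName 'S2DPool' -StorageSubSystemFriendlyName '*Cluster*' -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# 250 GB mirrored cluster volume on the pool
New-Volume -StoragePoolFriendlyName 'S2DPool' -FriendlyName 'S2DDisk' -FileSystem CSVFS_ReFS -Size 250GB

# The share creation from the guide failed; adding a Scale-Out File Server
# role and creating a 'Quick' share via Failover Cluster Manager worked
Add-ClusterScaleOutFileServerRole -Name 'S2D-SOFS'
```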

So yeah, it’s relatively easy to build an S2D cluster, but some steps in the guide need to be reviewed again. It contains mistakes…
