Category Archives: System Center Virtual Machine Manager

Looking back at 2015…

So, the year 2015 is almost at its end. While I write this, I am already in the second week of my two-week time off. And boy, I really needed this break.

2015 was an extremely busy year for me, and I can actually cut the year into two very different halves.

In the first half, I was still busy on a project where I designed and deployed System Center 2012 R2 Configuration Manager. I also built a stand-alone image building environment running MDT 2013. Unfortunately, the project took way longer than expected due to the customer being unable to take ownership and start administering it themselves. Eventually I decided to walk away after the contractual end date of my involvement, despite the fact that the project wasn't finished yet. The longer it took, the more frustrating the project became for me, so the decision to walk away was eventually the right one.

This takes me to the second half, in which I saw a dramatic shift in my job: I did only one Configuration Manager design and deployment in the second half of 2015. Instead, I started to extend my Enterprise Client Management skillset with Microsoft Intune and Microsoft's public cloud platform: Azure.

I also started to deliver more workshops, master classes and training sessions. This is something I really like to do, and I want to thank those who made it possible for me. It also allowed me to renew my Microsoft Certified Trainer certification.

Fortunately, the frustrations of the first half provided some learning moments, which required me to become a more complete consultant. So my coworker arranged a two-day training session for me called "Professional Advising" (a rough translation of the Dutch Professioneel Adviseren), provided by Yearth. This is by far the most important training I have received in my career, and it started to pay off quickly: I began receiving more positive feedback from customers. This training made me a more complete consultant.

I was also happy to do the presentation workshop with Monique Kerssens and Jinxiu Hu from Niqué Consultancy BV at ExpertsLive 2015, and to receive the feedback that my presentation skills have developed greatly. To quote them: "you're standing like a house" (a Dutch expression meaning rock-solid).

The icing on the cake came at the end of this year when I was asked to review the DataON CiB-9224 platform. You can read the review in my previous post.

So, I experienced some highs and lows this year. Fortunately, the highs came in the second half.

I look forward to 2016, but that’s for another post…



Reviewing the DataON Cluster-in-a-box 9224 (CiB-9224 V12) platform

Recently the company I work for became a partner in deploying DataON platform solutions together with the Dutch distributor of DataON. The distributor has the knowledge and experience to distribute hardware, but was looking for a partner to deploy it to meet the needs of customers. I had the honor of reviewing one of DataON's solutions provided by the distributor: the CiB-9224 V12.

DNS-9220 Front View

Before I got started I checked the relevant information on DataON's website.

Here are a few features that I consider relevant:

  • You have a two-node cluster in a single 2U enclosure;
  • A two-tier storage deployment is available; the disks are presented as JBOD (no hardware RAID) to both nodes;
  • The solution can be ‘stacked’ with either another CiB and/or DNS JBOD solution;
  • The components used result in a very simple and easy-to-use setup; no extensive hardware knowledge is required;
  • DataON delivers OOBE guides to get you started.

Overall DataON delivers a no-nonsense solution. Since I am an advocate of a no-nonsense approach it is something I really like.

After checking it all, I concluded that this platform can be used in two ways:

  • Scale Out File Server (SOFS) cluster providing one or more SMB 3.0 shares;
  • A two-node Hyper-V cluster.

Specific scenarios are available at DataON’s website mentioned earlier.

For my review I decided to build a two-node Hyper-V cluster. After preparing a small infrastructure (DC, DNS, DHCP and networking) I was able to get going. I decided to follow the OOBE guide as much as possible. In less than an hour, I had a fully operational two-node Hyper-V cluster. I noticed a few things during deployment:

  • Some steps in the guide are not completely in line with deploying the solution. I was able to create a Storage Space with Data Deduplication enabled, while the guide doesn't mention Data Deduplication at all. However, I greatly welcome having Data Deduplication enabled, since it generates significant savings when Virtual Machines are stored on the deduplicated volume;
  • The Storage Space is very fast, deploying Virtual Machines doesn’t take much time at all;
  • I like the built-in Mellanox ConnectX®-3 Pro EN single-port 10GbE NIC used for the cluster heartbeat and Live Migration. After configuring the cluster to use this NIC only for Live Migration, I was very happy with its Live Migration performance. It worked like a charm;
  • I managed the cluster using System Center 2016 Virtual Machine Manager Technical Preview and System Center 2016 Operations Manager Technical Preview. After deploying the required agents I was able to manage the cluster completely by Virtual Machine Manager. Dynamic Optimization and PRO Tips became available. After setting Dynamic Optimization to very aggressive settings I could see Virtual Machines dancing around on both nodes without negatively affecting the Virtual Machines themselves.
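As a sketch of how such a dedicated Live Migration network can be configured with the Hyper-V PowerShell module (the subnet is an assumption; run on each node):

```powershell
# Sketch: restrict Live Migration to the 10GbE NIC's subnet (subnet is an assumption)
Enable-VMMigration
# Don't allow migration over arbitrary cluster networks
Set-VMHost -UseAnyNetworkForMigration $false
# Only allow Live Migration traffic over the Mellanox 10GbE network
Add-VMMigrationNetwork "192.168.100.0/24"
# Compression is the 2012 R2 default; SMB is an alternative on fast networks
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression
```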

The next step was stress testing the platform. I decided to deploy 150 Virtual Machines using a template, and I found a nice PowerShell script that would do the work for me. During this deployment I noticed that the limited network resources (I had a 1 Gbit/s switch available, no fiber) significantly slowed down the deployment, and I was also overcommitting the cluster (memory resources prevented me from running all these Virtual Machines at once). I had no intention of running all these machines after deploying them, but it gave me some good insights into the platform's capabilities. To me, the test scenario used was not optimal, and I expect better performance when 10 Gbit/s SFP+ connections are used. Nevertheless, the platform successfully deployed the 150 Virtual Machines.
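A bulk deployment like this can be sketched with the VMM cmdlets along the following lines. This is not the actual script I used; the template and VM names are assumptions:

```powershell
# Sketch: deploy 150 VMs from a VMM template; template and VM names are assumptions
$template = Get-SCVMTemplate -Name "Win2012R2-Template"

1..150 | ForEach-Object {
    $name = "TestVM{0:D3}" -f $_
    # Build a VM configuration from the template, then create the VM asynchronously
    # so deployments run in parallel rather than one at a time
    $config = New-SCVMConfiguration -VMTemplate $template -Name $name
    New-SCVirtualMachine -Name $name -VMConfiguration $config -RunAsynchronously | Out-Null
}
```

With `-RunAsynchronously`, VMM queues the jobs and its intelligent placement spreads the VMs across both nodes.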

After deploying the Virtual Machines I was able to monitor Data Deduplication (using the default settings). The deduplication savings made me discover that basically all Virtual Machines were stored on the fast tier alone. This impressed me the most. It would make this solution extremely powerful for a VDI deployment, especially when stacked with one or more of these babies.

After finishing my testing I can definitely recommend this platform. After finding out its price, I strongly believe the DataON solution offers serious bang for your buck. It makes the Return on Investment (ROI) very short, and the platform is easy to manage. And all that in just a 2U enclosure…

All the requirements for the future are also there once Windows Server 2016 is released. I discussed my findings with DataON as well, and additional test scenarios are there to investigate.

Hopefully I can test it with Nano Server and/or Hyper-V containers, but this is something for 2016…




OpsMgr 2012 R2: first impressions…

Recently I was able to install OpsMgr 2012 R2 in a lab environment. I was particularly interested in whether deploying and initial management were significantly different compared to its predecessors. To be short: they aren't…

I've decided to limit my testing to importing the Active Directory MP and a few network devices using the Xian SNMP Simulator from Jalasoft. Kevin Holman's blog post on how to use this simulator still works for OpsMgr 2012 R2.

A good place to get some more first impressions is the "what's new in OpsMgr 2012 R2" overview on TechNet.

Two new features in particular caught my attention:

  • A new Agent which completely replaces the old one;
  • Fabric Monitoring.

The integration between OpsMgr and VMM becomes much closer with Fabric Monitoring. I have to admit this is something I really need to investigate when I have time and resources.

Personally, I am convinced that VMM is the starting place when building pristine Private Cloud environments based on Hyper-V. There's room for debate whether Bare-Metal Deployment is something you need. MDT 2013 can do some very nice things in that area as well; it requires some manual labor compared to Bare-Metal Deployment, but it allows you to deploy much more than just a bunch of Hyper-V hosts.

I have to admit that this is something that really caught my attention…

I will post about this in a future blog once I have figured it all out.

Feel free to share your findings if you already have…


Thoughts on building a resilient Private Cloud…

With this blog I decided to write down my ideas and thoughts about building a resilient Private Cloud using Hyper-V 3.0 which is a role in Windows Server 2012…

This is an elaboration on a session I attended at MMS 2013, ‘WS-B302 Availability Strategies for a Resilient Private Cloud’ presented by Elden Christensen. Elden is a Program Manager for the Hyper-V team at Microsoft.

My first thought is about building the Hyper-V cluster. In general, you want to separate storage and processing power to simplify your environment which allows you to keep administration costs low. It also adds a level of abstraction which provides you more control over the Hyper-V building blocks…

Microsoft's direction is storing your VMs on an SMB 3.0 share. This lets you provision storage at the application layer (SMB over standard Ethernet) instead of the block layer, so you can use standard Ethernet components, which are cheap compared to expensive hardware such as Fibre Channel cards and switches. Additionally, Microsoft recommends delivering the SMB 3.0 share using a Scale-Out File Server (SOFS): a file server cluster with shared storage that hosts the SMB 3.0 share. Using this method, all you need to do is add the SMB 3.0 share as a cluster resource.
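A minimal sketch of setting up such a share on an existing file server cluster; the role name, path, and accounts are assumptions:

```powershell
# Sketch: create a SOFS role and a continuously available SMB 3.0 share
# (role name, path and accounts are assumptions)
Add-ClusterScaleOutFileServerRole -Name "SOFS01"
New-Item -ItemType Directory -Path "C:\ClusterStorage\Volume1\Shares\VMStore"
# The Hyper-V host computer accounts need full access to store VMs on the share
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\Shares\VMStore" `
    -FullAccess "CONTOSO\HV01$", "CONTOSO\HV02$", "CONTOSO\HyperVAdmins" `
    -ContinuouslyAvailable $true
# Mirror the share permissions onto the NTFS ACL
Set-SmbPathAcl -ShareName "VMStore"
```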

With this, your Hyper-V hosts barely need any local storage. They should be equipped with as many processors and as much memory as possible. You can have a maximum of 64 hosts in your cluster. This allows me to point customers towards the right servers. Current 1U server models allow you to install a decent number of processors and lots of memory; I've seen models that can hold up to 768 GB of memory. Imagine this: a large number of 'pizza box' servers providing massive scale-out of your Hyper-V hosts. These servers are inexpensive and keep administration and maintenance costs low. If you use a large number of them, you can tolerate the failure of a small number of hosts. I would keep a few spare boxes available just in case…

The next thought is management. In my philosophy, System Center Virtual Machine Manager (VMM) 2012 SP1 provides the foundation to build a Private Cloud from scratch. VMM has bare-metal deployment capabilities, allows you to optimize server workloads, provides power management by turning off hosts during low workloads, and can deploy updates to all hosts as well. Many more features are available. I suggest you visit TechNet to check out what VMM can do for you.

The final thought is monitoring. You need monitoring to make sure you have a good overview of uptime and server workloads. The recommended monitoring solution is System Center Operations Manager (OpsMgr) 2012 SP1. You can create a direct connection from VMM to OpsMgr, which allows OpsMgr to completely monitor your Private Cloud infrastructure. After establishing this connection, OpsMgr will automatically install the required Management Packs, gather all the information available in VMM, and start monitoring the environment as well. OpsMgr can help you get more information from the environment, which allows you to generate reports and keep management happy…
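The VMM-to-OpsMgr connection can also be sketched from the VMM PowerShell console (the server name is an assumption):

```powershell
# Sketch: connect VMM to OpsMgr so PRO and maintenance mode integration light up
# (server name is an assumption)
New-SCOpsMgrConnection -OpsMgrServer "om01.contoso.com" `
    -UseVMMServerServiceAccount `
    -EnablePRO $true -EnableMaintenanceModeIntegration $true
# Verify the connection afterwards
Get-SCOpsMgrConnection
```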

From my point of view, Microsoft has a very strong set of tools available to deliver Private Clouds. I wonder if the competition can stand up to this…

These are just my thoughts, maybe they can help you determine your Private Cloud strategy…


SCVMM 2012: vSphere 5 support?

Hi everyone,

In my current project, my objective was to use SCVMM 2012 in conjunction with SCOM 2012, primarily to provide monitoring of the hypervisors and secondarily to simplify administration.

Unfortunately, SCVMM 2012 doesn't support VMware vSphere 5 at all.

Microsoft has stated the supported VMware versions on TechNet.

Despite that, I tried to connect to a vCenter server in order to import the VMware infrastructure; however, this resulted in a complete crash of SCVMM 2012, and all functionality went down the drain…

So for now I have to monitor the VMware environment using the nWorks Management Pack from Veeam.

Why oh why, Microsoft, is VMware vSphere 5 support not included in SCVMM 2012?

I guess we need to wait until SP1.

So if you want to manage vSphere 5 with SCVMM 2012: just don't.

It won’t work unfortunately…






