
Looking forward to 2016…

So, after leaving 2015 behind us and getting started in 2016, it’s time to have a look at what 2016 is going to bring us.

2015 was the year that cloud adoption really got going, and I expect more and more organizations to follow or to start adopting more of the features cloud technology offers. It is also encouraging that organizations are starting to understand how convenient it is when the ‘gate’ for end users shifts from Active Directory to Azure Active Directory.

Three big releases will most likely take place this year:

  • Azure Stack;
  • Windows Server 2016;
  • System Center 2016.

I strongly believe the release of Windows Server 2016 will dramatically change the way we work, and I really believe the following two features will enable that change:

  • Nano Server;
  • Containers.

Since the release of Windows Server 2016 Technical Preview 3, and even more so with Technical Preview 4, we’re able to research and experiment with these two features. Fortunately, I don’t expect Windows Server 2016 RTM to be released in the first half of 2016, which allows me to play around with it and understand how it works so that I am prepared when it becomes available.

So, Windows Server 2016 is just the tip of the iceberg. With everything else coming as well, I expect 2016 to be a very busy year. But I expect to have a lot of fun with it as well…

So let’s see what’s going to happen this year; I look forward to it.


Looking back at 2015…

So, the year 2015 is almost at its end. As I write this, I am already in the second week of my two-week time off. And boy, I really needed this two-week break.

2015 was an extremely busy year for me, and it really splits into two halves.

In the first half, I was still busy participating in a project where I designed and deployed System Center 2012 R2 Configuration Manager. I also built a stand-alone image building environment running MDT 2013. Unfortunately, the project took far longer than expected because the customer was unable to take ownership and start administering the environment themselves. Eventually I decided to walk away after the contractual end date of my involvement, despite the fact that the project wasn’t finished yet. The longer it took, the more frustrating the project became for me, so the decision to walk away was eventually the right one.

This brings me to the second half, in which I saw a dramatic shift in my job: I did only one Configuration Manager design and deployment in the second half of 2015. Instead, I started to extend my Enterprise Client Management skillset with Microsoft Intune and Microsoft’s public cloud platform: Azure.

I also started to deliver more workshops, master classes and training sessions. This is something I really like to do, and I want to thank those who made it possible for me. It also allowed me to renew my Microsoft Certified Trainer certification.

Fortunately, the frustrations of the first half provided some learning moments that pushed me to become a more complete consultant. My coworker arranged a two-day training session for me called “Professional Recommending” (this may be a poor translation of Professioneel Adviseren in Dutch), provided by Yearth. This is by far the most important training I have received in my career, and it started to pay off quickly: I began receiving more positive feedback from customers.

I was also happy to do the presentation workshop with Monique Kerssens and Jinxiu Hu from Niqué Consultancy BV at ExpertsLive 2015, and to receive the feedback that my presentation skills have developed greatly. To quote them: “you’re standing like a house” (a Dutch expression meaning rock-solid).

The icing on the cake came at the end of this year when I was asked to review the DataON CiB-9224 platform. You can read the review in my previous post.

So, I experienced some highs and lows this year. Fortunately, the highs came in the second half.

I look forward to 2016, but that’s for another post…


Reviewing the DataON Cluster-in-a-box 9224 (CiB-9224 V12) platform

Recently, the company I work for partnered with the Dutch distributor of DataON to deploy DataON platform solutions. The distributor has the knowledge and experience to distribute hardware, but was looking for a partner to deploy the solutions to meet customers’ needs. I had the honor of reviewing one of DataON’s solutions provided by the distributor: the CiB-9224 V12.

[Image: DNS-9220 Front View]

Before I got started, I checked the relevant information on DataON’s website, which is available at http://dataonstorage.com/cluster-in-a-box/cib-9224-v12-2u-24-bay-12g-sas-cluster-in-a-box.html

Here are a few features that I consider relevant:

  • You have a two-node cluster in a single 2U enclosure;
  • A two-tier storage deployment is available; the storage is presented to both nodes as JBOD only (no hardware RAID);
  • The solution can be ‘stacked’ with another CiB and/or a DNS JBOD solution;
  • The components used result in a very simple and easy-to-use setup; no extensive hardware knowledge is required;
  • DataON delivers OOBE guides to get you started.

Overall, DataON delivers a no-nonsense solution. Since I am an advocate of a no-nonsense approach, it is something I really like.

After checking it all, I concluded that this platform can be used in two ways:

  • Scale Out File Server (SOFS) cluster providing one or more SMB 3.0 shares;
  • A two-node Hyper-V cluster.

Specific scenarios are available at DataON’s website mentioned earlier.
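
For the second scenario, once both nodes are up, the cluster build itself comes down to a handful of PowerShell commands. A generic sketch (the cluster and node names are hypothetical, and the storage layout will differ per deployment):

    # Validate and create the two-node cluster (run from one of the nodes)
    Test-Cluster -Node "CIB-NODE1","CIB-NODE2"
    New-Cluster -Name "CIB-CLU01" -Node "CIB-NODE1","CIB-NODE2" -StaticAddress 192.168.1.50

    # Add the shared disks and convert one to a Cluster Shared Volume
    Get-ClusterAvailableDisk | Add-ClusterDisk
    Add-ClusterSharedVolume -Name "Cluster Disk 1"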

For my review I decided to build a two-node Hyper-V cluster. After preparing a small infrastructure (DC, DNS, DHCP and networking), I was able to get going. I decided to follow the OOBE guide as much as possible, and in less than an hour I had a fully operational two-node Hyper-V cluster. I noticed a few things during deployment:

  • Some steps in the guide are not completely in line with deploying the solution. For example, I was able to create a Storage Space with Data Deduplication enabled, while the guide doesn’t mention Data Deduplication at all. That said, I greatly welcome having Data Deduplication enabled, since it generates significant savings when Virtual Machines are stored on the deduplicated volume (see the sketch after this list);
  • The Storage Space is very fast; deploying Virtual Machines doesn’t take much time at all;
  • I like the built-in Mellanox ConnectX®-3 Pro EN single 10GbE port used for Cluster Heartbeat and Live Migration. After configuring the cluster to use this NIC for Live Migration only, I was very happy with its Live Migration performance. It worked like a charm;
  • I managed the cluster using System Center 2016 Virtual Machine Manager Technical Preview and System Center 2016 Operations Manager Technical Preview. After deploying the required agents, I was able to manage the cluster completely from Virtual Machine Manager, and Dynamic Optimization and PRO Tips became available. After setting Dynamic Optimization to very aggressive settings, I could see Virtual Machines dancing around on both nodes without negatively affecting the Virtual Machines themselves.
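
As referenced in the list above, a minimal sketch of enabling Data Deduplication, assuming Windows Server 2012 R2 and the volume letter from my lab:

    # Install the Data Deduplication feature on both nodes
    Install-WindowsFeature FS-Data-Deduplication

    # Enable dedup on the volume holding the Virtual Machines; the HyperV
    # usage type tunes deduplication for open VHD/VHDX files (VDI scenarios)
    Enable-DedupVolume -Volume "E:" -UsageType HyperV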

The next step was trying to stress test the platform. I decided to deploy 150 Virtual Machines using a template. I found a nice PowerShell script that would do the work for me at http://blogs.technet.com/b/virtual-mite/archive/2014/03/04/deploying-multiple-vm-39-s-from-template-in-vmm.aspx. During this deployment I noticed that the limited network resources (I had a 1 Gbit/s switch available, no fiber) significantly slowed down the deployment, and I was also overcommitting the cluster (memory constraints prevented me from running all these Virtual Machines at once). I had no intention of running all these machines after deploying them, but the exercise gave me good insight into the platform’s capabilities. The test scenario was not optimal, and I expect better performance when 10 Gbit/s SFP+ connections are used. Nevertheless, the platform successfully deployed the 150 Virtual Machines.
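
The linked script does the real work; stripped to its essence, a bulk deployment from a VMM template looks roughly like this (the template, VM and host names are hypothetical, not taken from that script):

    # Requires the VMM console, which provides the PowerShell module
    Import-Module virtualmachinemanager

    $template = Get-SCVMTemplate -Name "W2012R2-Gold"     # hypothetical template
    $vmHost   = Get-SCVMHost -ComputerName "CIB-NODE1"    # hypothetical node

    1..150 | ForEach-Object {
        $vmName = "LOADVM{0:D3}" -f $_
        $config = New-SCVMConfiguration -VMTemplate $template -Name $vmName
        Set-SCVMConfiguration -VMConfiguration $config -VMHost $vmHost
        Update-SCVMConfiguration -VMConfiguration $config
        # Queue the job asynchronously so the loop doesn't wait for each VM
        New-SCVirtualMachine -Name $vmName -VMConfiguration $config -RunAsynchronously
    }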

After deploying the Virtual Machines I was able to monitor Data Deduplication (I used the default settings). The deduplication savings were such that basically all Virtual Machines were effectively stored on the fast tier alone. This impressed me the most, and it would make this solution extremely powerful for a VDI deployment, especially when stacked with one or more of these babies.
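
Monitoring those savings is simple; a sketch of the kind of check I did (again assuming the volume letter from my lab):

    # Show saved space and the savings rate for the deduplicated volume
    Get-DedupVolume -Volume "E:" | Format-List Volume, SavedSpace, SavingsRate

    # Optionally trigger an optimization job manually instead of waiting for the schedule
    Start-DedupJob -Type Optimization -Volume "E:"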

After finishing my testing I can definitely recommend this platform. After finding out its price, I strongly believe the DataON solution is serious bang for your buck: it makes the Return on Investment (ROI) very short and easy to manage. And all that in just a 2U enclosure…

The requirements for the future are also in place for when Windows Server 2016 is released. I discussed my findings with DataON as well, and there are additional test scenarios to investigate.

Hopefully I can test it with Nano Server and/or Hyper-V Containers, but that is something for 2016…


Possible workaround for capturing a Windows 10 reference image with MDT 2013 Update 1

As most of us should know by now, Microsoft released Microsoft Deployment Toolkit 2013 Update 1; see the announcement at http://blogs.technet.com/b/msdeployment/archive/2015/08/17/mdt-2013-update-1-now-available.aspx

The main improvements are support for Windows 10 and integration with System Center 2012 Configuration Manager SP2/R2 SP1. Unfortunately, this release has quite a lot of issues that make it either very difficult or impossible to properly capture a reference image. A list of known issues is available at http://blogs.technet.com/b/msdeployment/archive/2015/08/25/mdt-2013-update-1-release-notes-and-known-issues.aspx

The issue that bothers me the most is the following, and I quote:

Do not upgrade from Preview to RTM

MDT 2013 Update 1 Preview should be uninstalled before installing the final MDT 2013 Update 1. Do not attempt to upgrade a preview installation or deployment share. Although the product documentation is not updated for MDT 2013 Update 1, the information on upgrading an installation still holds true.

Being a consultant requires me to be an early adopter, testing new technology so that I am ready when it is released, and that means working with preview versions of various software. Also, as an IT pro with an isolated environment available purely for image building purposes, I need to upgrade my deployment share frequently. While I could automate building new deployment shares, that takes time I would rather spend researching and testing new technology, so I have little choice but to upgrade my existing deployment share. I must admit that releasing this technology with so many known issues seems quite sloppy to me. I can only assume that various scenarios were not tested thoroughly due to time constraints, and that this version was released under some pressure.

Trying to build and capture a Windows 10 reference image fails: the capture step itself fails with an error message that a certain script cannot be loaded. The MDT 2013 U1 environment I currently have is for image building purposes only, so I don’t have many customizations configured.

So, knowing that only the capturing itself fails, I can do the capturing part myself. Since image building is not something you do every day, the administrative effort increases just a little bit, and it’s quite easy to do.

First, we start a deployment using the Windows Deployment Wizard. After selecting my Build and Capture Windows 10 task sequence, I get the option to select how I want to capture an image.

[Screenshot: capture options in the Windows Deployment Wizard]

I choose not to capture an image by selecting the option Do not capture an image of this computer. This makes the deployment run normally and finish without doing anything afterwards. I do use the option FinishAction=REBOOT in my customsettings.ini to make sure the machine restarts after completion.
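
For reference, the relevant part of my customsettings.ini looks something like this (a minimal sketch; only the FinishAction line matters here):

    [Settings]
    Priority=Default

    [Default]
    ; Restart the machine automatically when the task sequence completes
    FinishAction=REBOOT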

The next step is logging on with the local Administrator account and running SYSPREP with the sysprep.exe /oobe /generalize /shutdown command.

[Screenshot: SYSPREP in progress]

Here we see SYSPREP in progress. After a short while, the machine is turned off.

Now the machine is started again using the LiteTouch boot media (in my case via WDS); wait until the deployment wizard is started once more. The reason I do this is that my deployment share is then available and accessible through the automatically mapped Z: drive. Pressing F8 opens the command prompt.

All I need to do is start capturing an image using DISM, which may look like the screenshot below (hmmm, makes me wonder why I chose that filename).

[Screenshot: starting the DISM capture]
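
In case the screenshot is hard to read, the command takes roughly this shape (the drive letter, path and image name are from my lab, so adjust them to your environment):

    dism /Capture-Image /ImageFile:Z:\Captures\W10REF.wim /CaptureDir:C:\ /Name:"Windows 10 Reference" /Compress:max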

Now the capture can start.

[Screenshot: DISM capture progress]

After a while the capture completes, and a captured Windows 10 image is available in the Captures folder of the deployment share in use. This image can be used for deployment by MDT 2013 U1, System Center 2012 Configuration Manager SP2/R2 SP1, or whatever tool you use for deploying .wim files.

Basically, the workaround replaces the automated image capturing with manual labour. I’m sure other workarounds are available, but this one works for me. Note that the capture must finish within 72 hours, since that is the maximum time a WinPE session is allowed to run; once the 72 hours are up, WinPE automatically restarts the computer. That should be more than enough time to have the image file created.

Feel free to use this workaround. As usual, testing is required before using it in a production environment.

Let’s hope an updated release solves all these issues; the sooner, the better…


ConfigMgr: funny behavior with building Collection queries using Chassis Types…

I’m currently involved in a project that requires me to build a ConfigMgr 2012 R2 infrastructure from scratch (these are the best ones, to be honest).

In this project, the customer requires different collections for desktops and laptops, for various reasons. All client devices have Windows 8.1 Enterprise installed.

As many ConfigMgr admins, specialists and consultants know, there are a lot of different ways to populate collections with the right objects. In this scenario, I decided to use the Chassis Types.

I used this website as a reference:

http://blogs.technet.com/b/breben/archive/2009/07/21/chassis-type-values-in-sccm.aspx

I encountered some funny behavior with the collection for desktops (with the very obvious name All Desktops).

The collection had the following query rule:
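
A chassis-type query rule of this kind typically looks like the following; this is a reconstruction rather than my exact statement, with chassis type values 3-7 and 15 covering common desktop form factors:

    select SMS_R_System.ResourceId, SMS_R_System.Name
    from SMS_R_System
    inner join SMS_G_System_SYSTEM_ENCLOSURE
        on SMS_G_System_SYSTEM_ENCLOSURE.ResourceID = SMS_R_System.ResourceId
    where SMS_G_System_SYSTEM_ENCLOSURE.ChassisTypes in ("3","4","5","6","7","15")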

Very straightforward…

After a while I decided to verify that the objects returned were the ones I expected. Unfortunately, this wasn’t the case: the collection also contained computer objects that were virtual machines running Windows Server 2012. I concluded that a Hyper-V virtual machine also reports a desktop Chassis Type (I prefer not to use Chassis Types 1 and 2). The only reason I can come up with is that this chassis type is used for VDI purposes.

NOTE: I used the script available at http://technet.microsoft.com/en-us/library/ee156537.aspx on a virtual machine running on an ESXi platform to investigate what Chassis Type the script returns. On that platform, I received Chassis Type 1 (= Other).
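
A quick alternative to that script is querying WMI directly, for example from PowerShell:

    # Returns the chassis type value(s), e.g. 3 = Desktop, 9 = Laptop, 1 = Other
    Get-WmiObject -Class Win32_SystemEnclosure | Select-Object -ExpandProperty ChassisTypes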


Fortunately, a small modification of the query statement provided me with desktops only. Since I know that all desktops are equipped with Windows 8.1, I added a condition that checks whether a workstation OS is installed:
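
Again as a reconstruction, the addition amounts to an extra condition on the operating system attribute:

    select SMS_R_System.ResourceId, SMS_R_System.Name
    from SMS_R_System
    inner join SMS_G_System_SYSTEM_ENCLOSURE
        on SMS_G_System_SYSTEM_ENCLOSURE.ResourceID = SMS_R_System.ResourceId
    where SMS_G_System_SYSTEM_ENCLOSURE.ChassisTypes in ("3","4","5","6","7","15")
        and SMS_R_System.OperatingSystemNameandVersion like "%Workstation%"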

After the next update cycle, the virtual machines running Windows Server 2012 were no longer members of this collection.


Hope this helps…


Thoughts on building a resilient Private Cloud…

With this blog post I decided to write down my ideas and thoughts about building a resilient Private Cloud using Hyper-V 3.0, a role in Windows Server 2012…

This is an elaboration on a session I attended at MMS 2013, ‘WS-B302: Availability Strategies for a Resilient Private Cloud’, presented by Elden Christensen, a Program Manager for the Hyper-V team at Microsoft.

My first thought is about building the Hyper-V cluster. In general, you want to separate storage and processing power to simplify your environment, which keeps administration costs low. It also adds a level of abstraction that gives you more control over the Hyper-V building blocks…

Microsoft’s direction is to store your VMs on an SMB 3.0 share. This allows you to provision storage at the file level (the application layer of the TCP/IP stack) instead of at the block level, using standard Ethernet components, which are cheap compared to expensive hardware components such as Fibre Channel cards and switches. Additionally, Microsoft recommends delivering the SMB 3.0 share through a Scale-Out File Server (SOFS): a file server cluster with shared storage that hosts the SMB 3.0 share. Using this method, all you need to do is add the SMB 3.0 share as a cluster resource.
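
A minimal sketch of what that looks like on an existing file server cluster (the cluster, share and account names are hypothetical):

    # Create the Scale-Out File Server role on the file server cluster
    Add-ClusterScaleOutFileServerRole -Name "SOFS01" -Cluster "FSCLUSTER"

    # Create a continuously available SMB 3.0 share for the Hyper-V hosts;
    # the Hyper-V computer accounts also need matching NTFS permissions
    New-Item -Path "C:\ClusterStorage\Volume1\VMs" -ItemType Directory
    New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\VMs" `
        -FullAccess "DOMAIN\HV-HOST1$","DOMAIN\HV-HOST2$" -ContinuouslyAvailable $true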

With this, your Hyper-V hosts barely need any local storage. They should be equipped with the maximum number of processors and as much memory as possible, and you can have up to 64 hosts in a cluster. This allows me to point customers to the right servers: current 1U server models allow you to install a decent number of processors and lots of memory, and I’ve seen models that can hold up to 768 GB of memory. Imagine this: a large number of ‘pizza box’ servers providing massive scale-out of your Hyper-V hosts. These servers are inexpensive and have low administration and maintenance costs. If you use a large number of them, you can handle the failure of a small number of hosts; I would keep a few spare boxes available just in case…

The next thought is management. In my philosophy, System Center Virtual Machine Manager (VMM) 2012 SP1 provides the foundation to build a Private Cloud from scratch. VMM has bare-metal deployment capabilities, allows you to optimize server workloads, provides power management by turning off some hosts during low workloads, and can deploy updates to all hosts as well. Many more features are available; I suggest you visit TechNet to check out what VMM can do for you.

The final thought is monitoring. You need monitoring to make sure you have a good overview of uptime and server workloads. The recommended monitoring solution is System Center Operations Manager (OpsMgr) 2012 SP1. You can create a direct connection from VMM to OpsMgr, which allows OpsMgr to monitor your Private Cloud infrastructure completely. After establishing this connection, OpsMgr automatically installs the required Management Packs, gathers all the information available in VMM, and starts monitoring the environment as well. OpsMgr can help you get more information from the environment, which allows you to generate reports and keep management happy…

From my point of view, Microsoft has a very strong set of tools available to deliver Private Clouds. I wonder if the competition can stand up to this…

These are just my thoughts, maybe they can help you determine your Private Cloud strategy…
