
Monthly Archives: January 2017

My first Azure Stack TP2 POC deployment ending in disaster…

Today I had the opportunity to attempt my first Azure Stack TP2 POC deployment. Having this DataON CiB-9224 available allowed me to have a go at deploying an Azure Stack TP2 POC environment. I got around to this after finishing some testing of Windows Server 2016 on the platform. The results of those tests are available at https://mwesterink.wordpress.com/2017/01/19/case-study-running-windows-server-2016-on-a-dataon-cib/

Before I started testing, I reviewed the hardware requirements, which are available at https://docs.microsoft.com/nl-nl/azure/azure-stack/azure-stack-deploy

Unfortunately, one small part made me wonder whether I would actually succeed in deploying Azure Stack. Here’s a quote of the worrying part:

Data disk drive configuration: All data drives must be of the same type (all SAS or all SATA) and capacity. If SAS disk drives are used, the disk drives must be attached via a single path (no MPIO, multi-path support is provided).

Damn, again a challenge with MPIO. Such a shame since I meet all other hardware requirements.

So I decided to have a go anyway and find out for myself why MPIO is not supported for an Azure Stack TP2 deployment. I followed the instructions at https://docs.microsoft.com/nl-nl/azure/azure-stack/azure-stack-run-powershell-script to see what would happen…

I used a single node of the CiB-9224 with only four 400 GB SSD disks. I turned the other node off and disabled all unused NICs.
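For anyone who wants to follow along, here is roughly what that preparation and the deployment kickoff look like in PowerShell. The NIC name and password are placeholders, and the script path and parameters are written from memory of the linked TP2 instructions, so verify them against the doc before running anything:

    # Disable every NIC except the one used for deployment ('Deployment' is an example name).
    Get-NetAdapter |
        Where-Object { $_.Name -ne 'Deployment' } |
        Disable-NetAdapter -Confirm:$false

    # Kick off the TP2 POC deployment after booting into the CloudBuilder VHDX.
    # Path, script name and parameters as I recall them from the linked instructions;
    # the password below is obviously a placeholder.
    Set-Location 'C:\CloudDeployment\Setup'
    $adminPassword = ConvertTo-SecureString 'P@ssw0rd-Placeholder!' -AsPlainText -Force
    .\InstallAzureStackPOC.ps1 -AdminPassword $adminPassword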

After a while I decided to check its progress and noticed that nothing was happening at a specific step (there was an hour between the latest log entry and the time I went to check). Here’s a screenshot of where the deployment was ‘stuck’:

[Screenshot: deployment stuck while enabling Storage Spaces Direct]

It seems the script was trying to enable Storage Spaces Direct (S2D). Knowing that S2D is not supported with MPIO, I terminated the deployment and wiped all data because I knew I was going to be unsuccessful. At least I know why.

I didn’t meet all hardware requirements after all. Fortunately, it gave me some insight into how to deploy Azure Stack, so when I do have hardware that meets the requirements, at least I know what to do…

Looking at the requirements again, it’s obvious that the recommended way to go is with single-channel JBOD.

Case study: Running Windows Server 2016 on a DataON CiB…

Recently I was asked to investigate whether Windows Server 2016 would be a suitable OS on a DataON CiB platform. Some new features of Windows Server 2016 are very exciting. The one that excites me the most is Storage Spaces Direct. I set a goal by asking myself the following question:

Can I deploy a hyper-converged cluster using Hyper-V and Storage Spaces Direct with a CiB-9224 running Windows Server 2016?

The case study involves a CiB-9224V12 platform and I had the liberty to start from scratch on one of these babies.

[Photo: DataON CiB-9224 V12, front side view]

To figure out if this is possible, I took the following steps:

  1. I deployed Windows Server 2016 Datacenter on each node;
  2. I verified that no device drivers were missing. A lot of Intel chipset related devices had no driver (this may differ on other models), so I installed the Intel Chipset software. The Avago SAS adapter didn’t need a new driver. NOTE: Microsoft Update can be used as well to download and install the missing drivers;
  3. I installed the required Roles & Features on both nodes: Hyper-V, Data Deduplication, Failover Clustering and Multi-path I/O (a rough PowerShell sketch of steps 3 through 6 follows this list);
  4. I enabled Multi-Path I/O for SAS. This is required for the SAS adapter to make sure the available disks are presented properly;
  5. I created a failover cluster and used a File Share Witness hosted on a different server;
  6. I attempted to enable Storage Spaces Direct, but I got stuck at the ‘Waiting for SBL disks are surfaced, 27%’ step. Nothing happened after that.
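Roughly, steps 3 through 6 translate to the following PowerShell. The cluster name, node names and witness share are placeholders, and the feature installation has to be run on both nodes:

    # Step 3: roles & features (run on each node).
    Install-WindowsFeature -Name Hyper-V, FS-Data-Deduplication, Failover-Clustering, Multipath-IO -IncludeManagementTools

    # Step 4: let MPIO automatically claim SAS-attached disks.
    Enable-MSDSMAutomaticClaim -BusType SAS

    # Step 5: build the cluster and point the quorum at a file share witness on another server.
    New-Cluster -Name 'CIB-CLU01' -Node 'CIB-N1','CIB-N2' -NoStorage
    Set-ClusterQuorum -FileShareWitness '\\WITNESS01\CIB-CLU01-FSW'

    # Step 6: this is the step that stalled at 'Waiting for SBL disks are surfaced, 27%'.
    Enable-ClusterStorageSpacesDirect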

 

I started troubleshooting to determine why this step couldn’t finish. I checked the S2D requirements again and found the following page:

https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-hardware-requirements

In the Drives section I noticed an unsupported scenario for S2D that matches the configuration of the CiB-9224: MPIO, or physically connecting drives via multiple paths. After reading the requirements I stopped troubleshooting. Having an unsupported scenario means S2D is simply not possible.
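For completeness: you can confirm the multiple paths from within Windows. Get-PhysicalDisk shows the bus type, and the built-in mpclaim.exe lists the disks that MPIO has claimed; any disk listed there is reached via more than one path, which is exactly the scenario the S2D requirements rule out. Output will of course vary per system:

    # Bus type and poolability of the physical disks (the CiB data drives report as SAS).
    Get-PhysicalDisk | Select-Object FriendlyName, BusType, CanPool, Size

    # Disks currently claimed by MPIO.
    mpclaim.exe -s -d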

 

The result was that I created a Storage Pool without using S2D and presented the Virtual Disk to the cluster as a Cluster Shared Volume. I was not able to choose ReFS as the file system (it was not available when creating the Volume), so I had to stick with NTFS with Data Deduplication enabled.

So basically I used the ‘Windows Server 2012 R2’ solution to deploy the CSV using Storage Spaces.
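In PowerShell, that classic Storage Spaces approach looks roughly like this. Pool, virtual disk and volume names are placeholders, and I did part of it through Failover Cluster Manager, so treat this as a sketch rather than the exact commands I ran:

    # Create a classic (non-S2D) storage pool from all poolable disks.
    $disks = Get-PhysicalDisk -CanPool $true
    $subSystem = Get-StorageSubSystem -FriendlyName '*Cluster*'
    New-StoragePool -FriendlyName 'CiB-Pool' -StorageSubSystemFriendlyName $subSystem.FriendlyName -PhysicalDisks $disks

    # Carve out a mirrored virtual disk and format it NTFS (ReFS was not offered here).
    New-VirtualDisk -StoragePoolFriendlyName 'CiB-Pool' -FriendlyName 'CSV01' -ResiliencySettingName Mirror -UseMaximumSize
    Get-VirtualDisk -FriendlyName 'CSV01' | Get-Disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel 'CSV01'

    # Enable Data Deduplication on the new volume (drive letter is an example).
    Enable-DedupVolume -Volume 'E:' -UsageType HyperV

    # Hand the disk over to the cluster and convert it to a Cluster Shared Volume.
    Get-ClusterAvailableDisk | Add-ClusterDisk | Add-ClusterSharedVolume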

With the CiB-9224 I was not able to achieve my goal of deploying a hyper-converged cluster, based on Microsoft’s definition of hyper-converged.

One question still remains: Would I recommend using Windows Server 2016 at a CiB-9224?

The answer is Yes because some new features of Windows Server 2016, for example Shielded VMs, are fully supported on this hardware.

 

DataON does have a hyper-converged S2D platform available; more information can be found here: http://dataonstorage.com/storage-spaces-direct/s2d-3110-1u-10-bay-all-flash-nvme-storage-spaces-direct-cluster-appliance.html

[Photo: DataON S2D-3110 1U 10-bay all-flash NVMe cluster appliance, front side view]

 

 

Posted on 19/01/2017 in Uncategorized

 
 