Building a Storage Spaces Direct (S2D) cluster and failing miserably…

Windows Server 2016 has been available for a while now. One of its well-hyped features is Storage Spaces Direct (S2D). It allows organizations to build fast, resilient, hyperconverged clusters that go hand in hand with Hyper-V. Based on the documentation available at https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-overview it even allows running Hyper-V and a Scale-Out File Server (SOFS) on the same hardware, without requiring expensive storage components such as SANs and the additional infrastructure needed to connect to them. This is a major improvement over Windows Server 2012 R2, which doesn't support this.

The following passage in the overview caught my attention:

Storage Hardware. From 2 to 16 servers with local-attached SATA, SAS, or NVMe drives. Each server must have at least 2 solid-state drives, and at least 4 additional drives. The SATA and SAS devices should be behind a host-bus adapter (HBA) and SAS expander. We strongly recommend the meticulously engineered and extensively validated platforms from our partners (coming soon).

Okay, that makes sense, since S2D eliminates the need for remotely attached storage. It seems to work with DAS (direct-attached storage) only.

But what if I still have a bunch of iSCSI targets available and would like to use them for an S2D cluster? Maybe the volumes provided by a StorSimple device would work; after all, it's iSCSI too, right?

So I decided to try to build an S2D cluster (my god, this abbreviation is really close to something I don't want to get) and see if it works. I used the following guide as a reference: https://technet.microsoft.com/en-us/windows-server-docs/compute/remote-desktop-services/rds-storage-spaces-direct-deployment

Since I don't have any hardware available, I decided to build the cluster in my Azure environment.

So here’s what I did:

  • I built an Azure VM configured as an iSCSI target server that provides 4 disks, each 1000 GB in size
  • I built two Azure VMs that will be configured as cluster nodes and have these 4 disks available (a rough sketch of this setup follows below)
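For reference, here's a minimal PowerShell sketch of that setup. All names in it (the target name, virtual disk paths, portal address and initiator IQNs) are made up for illustration, and it assumes the iSCSI Target Server role on the target VM and the built-in iSCSI initiator on the nodes:

    # --- On the iSCSI target VM (all names are hypothetical) ---
    Install-WindowsFeature FS-iSCSITarget-Server

    # One target that both cluster nodes are allowed to connect to
    New-IscsiServerTarget -TargetName "s2d-test" `
        -InitiatorIds "IQN:iqn.1991-05.com.microsoft:node1.contoso.local",
                      "IQN:iqn.1991-05.com.microsoft:node2.contoso.local"

    # Four 1000 GB virtual disks, mapped to that target
    1..4 | ForEach-Object {
        New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\disk$_.vhdx" -SizeBytes 1000GB
        Add-IscsiVirtualDiskTargetMapping -TargetName "s2d-test" -Path "E:\iSCSIVirtualDisks\disk$_.vhdx"
    }

    # --- On each cluster node ---
    Start-Service MSiSCSI
    New-IscsiTargetPortal -TargetPortalAddress "target-vm.contoso.local"
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true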

The first thing I did was verify that the 4 disks can be pooled, using Get-PhysicalDisk | ? CanPool -eq $true. They can.
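On one of the nodes, that check looks something like this (the BusType column turns out to be the interesting part later on):

    # List the disks that are eligible for pooling; the four iSCSI disks show up,
    # but with BusType "iSCSI" rather than SAS, SATA or NVMe
    Get-PhysicalDisk | Where-Object CanPool -eq $true |
        Select-Object FriendlyName, BusType, Size, CanPool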

Then I got to the point where I needed to enable S2D using the PowerShell command mentioned in the guide: Enable-ClusterS2D -CacheMode Disabled -AutoConfig:0 -SkipEligibilityChecks

The -CacheMode parameter is no longer part of the cmdlet, so I took that part out and tried again:
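In other words, the command I actually ran was:

    # The guide's command minus the -CacheMode parameter, which was dropped
    # from the shipping version of the cmdlet
    Enable-ClusterS2D -AutoConfig:0 -SkipEligibilityChecks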

[Screenshot: s2d-with-iscsi — Enable-ClusterS2D fails with an error]

Bummer…

This error confirms that locally attached storage is required for S2D, so this is a dead end: iSCSI disks are not supported.

I was still able to build the cluster, assign the storage pool to the cluster itself instead of to a node, and create a CSV, but no S2D…
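For the curious, that classic (non-S2D) fallback looks roughly like this. The pool and volume names are hypothetical, and this sketch assumes New-Volume can carve out the Cluster Shared Volume in one go against the clustered storage subsystem:

    # On one of the cluster nodes, after the failover cluster has been formed
    $disks = Get-PhysicalDisk -CanPool $true

    # Create the pool against the cluster's storage subsystem, so it is owned
    # by the cluster rather than by a single node
    New-StoragePool -FriendlyName "iSCSIPool" `
        -StorageSubSystemFriendlyName "Clustered Windows Storage*" `
        -PhysicalDisks $disks

    # Carve a volume out of the pool directly as a CSV (CSVFS on top of ReFS)
    New-Volume -StoragePoolFriendlyName "iSCSIPool" -FriendlyName "CSV01" `
        -FileSystem CSVFS_ReFS -Size 1TB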


Posted on 26/10/2016 in Windows Server

 
