Windows Server 2016 has been available for a while now. One well-hyped feature is Storage Spaces Direct (S2D). It allows organizations to create fast, resilient, hyperconverged clusters that go hand in hand with Hyper-V. Based on the documentation available at https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-overview it even allows running Hyper-V and SOFS on the same hardware without requiring expensive storage components such as SANs and the additional components needed to connect to them. This is a major improvement compared to Windows Server 2012 R2, which doesn’t support this.
The following passage in the overview caught my attention:
Storage Hardware. From 2 to 16 servers with local-attached SATA, SAS, or NVMe drives. Each server must have at least 2 solid-state drives, and at least 4 additional drives. The SATA and SAS devices should be behind a host-bus adapter (HBA) and SAS expander. We strongly recommend the meticulously engineered and extensively validated platforms from our partners (coming soon).
Okay, that makes sense, since S2D eliminates the need for remotely available storage. It seems it works with direct-attached storage (DAS) only.
But what if I still have a bunch of iSCSI targets available and would like to use them for an S2D cluster? Maybe the volumes provided by a StorSimple device might work; after all, it’s iSCSI too, right?
So I decided to try to build an S2D (my god, this abbreviation is really close to something I don’t want to get) cluster and see if it works. I used the following guide as a reference: https://technet.microsoft.com/en-us/windows-server-docs/compute/remote-desktop-services/rds-storage-spaces-direct-deployment
Since I don’t have any hardware available I decided to build the cluster in my Azure environment.
So here’s what I did:
- I built an Azure VM configured as an iSCSI target server that provides 4 disks, each 1,000 GB in size
- I built two Azure VMs which are configured as cluster nodes and have these 4 disks available
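For reference, the target side can be set up with the iSCSI Target Server cmdlets. A minimal sketch, assuming the iSCSI Target Server role is installed — the target name, VHDX paths, node IQNs, and portal address below are placeholders I made up, not values from my actual lab:

```powershell
# On the iSCSI target VM: create a target and 4 x 1000 GB virtual disks
New-IscsiServerTarget -TargetName "S2DTarget" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:node1","IQN:iqn.1991-05.com.microsoft:node2"
1..4 | ForEach-Object {
    New-IscsiVirtualDisk -Path "E:\iSCSI\Disk$_.vhdx" -SizeBytes 1000GB
    Add-IscsiVirtualDiskTargetMapping -TargetName "S2DTarget" -Path "E:\iSCSI\Disk$_.vhdx"
}

# On each cluster node: connect to the target so the 4 disks show up locally
New-IscsiTargetPortal -TargetPortalAddress "10.0.0.4"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```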
The first thing I did was verify whether the 4 disks can be pooled, using the pipeline Get-PhysicalDisk | ? CanPool -eq $true. They can.
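Before enabling S2D, the cluster itself has to exist. Roughly, following the guide — the cluster name, node names, and static address here are example values, not my actual configuration:

```powershell
# Validate the nodes, including the Storage Spaces Direct test category
Test-Cluster -Node "Node1","Node2" `
    -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

# Create the cluster without claiming any storage yet
New-Cluster -Name "S2DCluster" -Node "Node1","Node2" -NoStorage -StaticAddress "10.0.0.10"
```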
Then I got to the point where I needed to enable S2D using the PowerShell command mentioned in the guide: Enable-ClusterS2D -CacheMode Disabled -AutoConfig:0 -SkipEligibilityChecks
The -CacheMode parameter is no longer part of the cmdlet, so I took that part out and tried again:
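That is, the attempt becomes the original command minus the removed parameter:

```powershell
# Second attempt, without the retired -CacheMode parameter — this is the
# command that produces the error below on iSCSI-backed disks
Enable-ClusterS2D -AutoConfig:0 -SkipEligibilityChecks
```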
The resulting error confirms that locally attached storage is required to use S2D, so this is a dead end: iSCSI disks are not supported.
I was still able to build the cluster, assign the Storage Pool to the cluster itself instead of to a node, and create a CSV, but no S2D…
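That fallback — a classic clustered Storage Pool with a CSV on top of the iSCSI disks — can be sketched like this; the pool and volume names are placeholders, and the exact subsystem friendly name may differ in your environment:

```powershell
# Pool the iSCSI disks into a clustered storage pool
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "iSCSIPool" `
    -StorageSubSystemFriendlyName "Clustered Windows Storage*" `
    -PhysicalDisks $disks

# Create a volume on the pool; CSVFS_ReFS makes it a Cluster Shared Volume
New-Volume -StoragePoolFriendlyName "iSCSIPool" -FriendlyName "Data" `
    -FileSystem CSVFS_ReFS -Size 1TB
```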