
Building a Storage Spaces Direct (S2D) cluster and succeeding…

As described in my previous post, I failed miserably building an S2D cluster. Fortunately, it was just a small matter of reading this whitepaper properly, which states that only local storage can be used. We all know iSCSI storage is not locally attached, so it makes perfect sense that it doesn’t work. But at least I tested it and verified it…

OK, so knowing that S2D works with DAS storage only, it is time to test and verify how difficult it is to build an S2D cluster.

To build the cluster, I’m going to use this guide. I use two FS1 Azure VMs and attach one P10 disk to each node.

So I follow the steps to build the cluster.

The first step is to enable S2D, which works fine.

[Screenshot: s2d-with-das]

NOTE: as in my previous post, the CacheMode parameter no longer exists. Since it is still mentioned in the guide, this may be a bit confusing.
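
For reference, the command this step boils down to is roughly the following, run on one of the cluster nodes once the cluster itself exists (a sketch based on the guide, minus the removed parameter):

# Enable Storage Spaces Direct on the freshly built cluster.
# Note: the -CacheMode parameter shown in the guide no longer exists in the released cmdlet.
Enable-ClusterS2D -AutoConfig:0 -SkipEligibilityChecks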

The next step is creating a Storage Pool for S2D.

[Screenshot: s2d-storage-pool-2-disk-fail]

Hmm, that’s odd. Apparently two disks are insufficient. So let’s add two more, one at each node, resulting in four disks in total.

[Screenshot: s2d-storage-4-disk-success]
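
The pool creation step looks roughly like this; a sketch based on the guide, with the subsystem wildcard and pool name as placeholders rather than the exact values I used:

# Sketch: create the S2D pool from all poolable disks; the friendly names are placeholders.
New-StoragePool -StorageSubSystemFriendlyName '*Cluster*' -FriendlyName 'S2D' `
    -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Mirror `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)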

OK, so I can continue and build an S2D cluster disk of 250 GB.

[Screenshot: s2d-virtualdisk]
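
A sketch of what this step looks like; the 250 GB size is from my test, the friendly names are placeholders:

# Sketch: create a 250 GB CSV volume on the S2D pool; the names are placeholders.
New-Volume -StoragePoolFriendlyName 'S2D' -FriendlyName 'S2D-Disk' `
    -FileSystem CSVFS_REFS -Size 250GB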

The final step is creating a share according to the guide.

[Screenshot: smb-share-fail]

Hmmm, this fails too…

Well, I was able to create the share using the Failover Clustering console by configuring it as a SOFS and providing a ‘Quick’ file share.
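
The PowerShell equivalent of what I ended up doing is roughly the following; the path, share name and account are placeholders:

# Sketch: create the folder and a continuously available share on the CSV; all names are placeholders.
New-Item -ItemType Directory -Path 'C:\ClusterStorage\Volume1\Share01'
New-SmbShare -Name 'Share01' -Path 'C:\ClusterStorage\Volume1\Share01' `
    -FullAccess 'DOMAIN\Hyper-V Admins' -ContinuouslyAvailable $true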

So yeah, it’s relatively easy to build an S2D cluster, but some steps in the overview need to be reviewed again. It contains mistakes…

 

Building a Storage Spaces Direct (S2D) cluster and failing miserably…

Windows Server 2016 has been available for a little while now. A well-hyped feature is Storage Spaces Direct (S2D). It allows organizations to create fast-performing, resilient and hyperconverged clusters that go hand in hand with Hyper-V. Based on the documentation available at https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-overview it even allows running Hyper-V and SOFS on the same hardware without requiring expensive storage components such as SANs and the additional components required to connect to them. This is a major improvement compared to Windows Server 2012 R2, which doesn’t support this.

The following passage in the overview caught my attention:

Storage Hardware. From 2 to 16 servers with local-attached SATA, SAS, or NVMe drives. Each server must have at least 2 solid-state drives, and at least 4 additional drives. The SATA and SAS devices should be behind a host-bus adapter (HBA) and SAS expander. We strongly recommend the meticulously engineered and extensively validated platforms from our partners (coming soon).

Okay, that makes sense, since S2D eliminates the need for remotely available storage. Seems like it works with DAS only.

But what if I still have a bunch of iSCSI targets available and would like to use them for an S2D cluster? Maybe the volumes provided by a StorSimple device might work; after all, that’s iSCSI too, right?

So I decided to try to build an S2D cluster (my god, this abbreviation is really close to something I don’t want to get) and see if it works. For this I used the following guide as a reference: https://technet.microsoft.com/en-us/windows-server-docs/compute/remote-desktop-services/rds-storage-spaces-direct-deployment

Since I don’t have any hardware available I decided to build the cluster in my Azure environment.

So here’s what I did:

  • I built an Azure VM configured as an iSCSI target server that provides 4 disks, each 1000 GB in size
  • I built two Azure VMs that are configured as cluster nodes and have these 4 disks available

The first thing I did was verify whether the 4 disks can be pooled, using Get-PhysicalDisk | ? CanPool -eq $true. They can.

Then I got to the point where I needed to enable S2D using the PowerShell cmdlet mentioned in the guide: Enable-ClusterS2D -CacheMode Disabled -AutoConfig:0 -SkipEligibilityChecks

The -CacheMode parameter is no longer part of the cmdlet so I took that part out and tried again:
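
That is, the command being run is now simply:

# The cmdlet from the guide, minus the -CacheMode parameter that no longer exists.
Enable-ClusterS2D -AutoConfig:0 -SkipEligibilityChecks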

[Screenshot: s2d-with-iscsi]

Bummer…

This error confirms that locally attached storage is required to use S2D, so this is a dead end. iSCSI disks are not supported.

I was still able to build the cluster, assign the Storage Pool to the cluster itself instead of a node and create a CSV, but no S2D…

 

Posted on 26/10/2016 in Windows Server

 

Microsoft Azure: ONE feature I REALLY need…

It’s been a while since I posted my previous blog post. The main reason I didn’t post anything for a while is that I was very busy personally and professionally. I’ve been helping out customers adopting the Public Cloud more frequently and I must admit it’s a lot of fun.

During these conversations I focus a lot on architecting Azure solutions (especially IaaS). Many times I get a question that goes something like this:

“Yeah, it’s all very nice, this Microsoft Azure and all that other fancy stuff, but how much does it cost?”

Quickly followed by:

“Why is Microsoft Azure so expensive?”

Providing an answer to the first question is quite challenging because Microsoft only provides the Azure Pricing Calculator (available at https://azure.microsoft.com/en-us/pricing/calculator/). It allows me to provide an estimate of how much it will cost an organisation to use Azure services. But it is still an estimate, and that’s problematic because I cannot really use it for any TCO calculation. TCO is something a CFO looks at, and he or she wants the TCO to be as low as possible. All I could find was an old post available at https://azure.microsoft.com/en-us/blog/windows-azure-platform-tco-and-roi-calculator-now-available-online-and-offline/ but the tools are not there anymore.

I need a total overview in order to provide an honest and accurate calculation, since most organisations want to mirror their on-premises costs against Microsoft Azure’s. Here’s a, most certainly incomplete, list of costs IT organizations incur:

  • Hardware purchasing
  • Licensing
  • Labour
  • Housing
  • Energy
  • 3rd party Support Plans

The funny part is that many organizations have no idea which costs they have, and if they do, they often fail to take them into the equation. This behaviour automatically causes the second question to be asked. I’d like to see Microsoft deliver a tool that allows me to fill in these variables. Microsoft’s biggest competitor, AWS, has such a tool.

Sounds like quite a rant to Microsoft, right?

Well, what really works in their defence is that the Azure Pricing Calculator does help organizations provide an estimate. However, some common sense is required when using Azure services. Things that need to be taken into consideration are:

  • Uptime: if a service is not needed at given times, then turn it off and pay only for what is actively used
  • Automation: when those times are predictable, i.e. office hours, schedule the switching on and off using Automation (see the sketch after this list)
  • Workload: if your workload demand fluctuates strongly, you don’t want to buy the hardware required to facilitate a few peaks
  • Evolution: do you really need to build a VM with IIS when the web application can run on an Azure Web App service? It makes sense to evolve on-premises or IaaS services to PaaS services and no longer be bothered with managing the fabric layer or even an Operating System and/or application
  • Evolution part 2: consider replacing (legacy) applications with SaaS services so you don’t have to manage them either
  • Initial investments: no initial investments are required when using Azure Cloud Services. You don’t need to have a budget ready to buy hardware. Think about the shorter ‘time to market’
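
To illustrate the Automation point above: a minimal sketch of a scheduled shutdown, using the AzureRM cmdlets and placeholder resource group and VM names; in practice this would run as an Azure Automation runbook on a schedule:

# Sketch: stop a set of VMs outside office hours; resource group and VM names are placeholders.
$ResourceGroup = 'RG-Workload'
$VMs = 'VM-Web01','VM-Web02'

foreach ($VM in $VMs) {
    # -Force skips the confirmation prompt; a deallocated VM no longer generates compute charges.
    Stop-AzureRmVM -ResourceGroupName $ResourceGroup -Name $VM -Force
}

# A second runbook, scheduled at the start of office hours, would call Start-AzureRmVM instead.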

If you look at it like this, then adopting Cloud Services may not be so expensive at all.

Additionally, looking at costs alone creates tunnel vision. Many times a small increase in costs greatly increases the benefits of adopting Azure Cloud Services, and I’d certainly recommend it in most cases. The only case where I wouldn’t recommend it is a workload with almost no fluctuations.

Nevertheless, it would be nice if Microsoft provided such a tool, or if someone could tell me where to find it if it already exists 🙂

 

 

 

 

 

Posted on 26/10/2016 in Azure, Cloud, Public Cloud, Rant

 

Backing up Azure Storage Accounts…

New year, new challenges. And I was confronted with quite a nice one.

One of my customers uses Azure Storage quite intensively. While Azure Storage Accounts provide some protection by means of replication, there’s no real protection against corruption or deletion inside the Storage Account itself: data that has been deleted is deleted on the replicas as well. Unfortunately, there’s no mechanism to replicate data between Storage Accounts comparable to DFS Replication. Governance constraints may also prevent the use of Geo-redundant storage, and Geo-redundant storage cannot guarantee that the secondary location still has the data as it was before it became corrupted or deleted.

So a mechanism must be developed to protect the data from a potential disaster. Only Blob, File and Table Storage are valid candidates to be protected. Microsoft has released a tool that allows content to be copied to a different Storage Account (including from and to local disk): AzCopy

The relevant information regarding AzCopy is available at https://azure.microsoft.com/en-us/documentation/articles/storage-use-azcopy/

AzCopy is a command-line tool that allows content to be copied, but it is very static by nature. This may be great for a few single Blob containers or Table Uris, but a more versatile approach is required when hundreds of Blob containers and Table Uris need to be copied, especially when they change frequently.
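
For a single container, a one-off AzCopy invocation (run from the AzCopy installation folder) looks roughly like this; the account names and keys are placeholders, and only switches that also appear in the script below are used:

AzCopy /Source:https://source1.blob.core.windows.net/mycontainer /Dest:https://destination1.blob.core.windows.net/backup/mycontainer /SourceKey:<sourcekey1> /DestKey:<destinationkey1> /S /Y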

Fortunately, I was able to use PowerShell to ‘feed’ AzCopy with the required parameters to get the job done. The result is a script. The script focuses on Blob and Table Storage only, but it can easily be extended for File Storage. For the sake of this blog post, Access Keys (the secondary ones) are used; using SAS tokens instead requires only minor modifications.

Before proceeding, the workflow needs to be defined:

  • Provide the source and destination Storage Accounts; the destination Storage Account will store everything as Blobs, meaning that Tables will be exported to a .MANIFEST and .JSON file (to allow the table to be recreated);
  • Retrieve the list of Blob containers and ‘convert’ them to a URL;
  • Copy each Blob container to the destination Storage Account; all containers will be stored as virtual directories in a single container to maintain structure;
  • Retrieve the list of Table Uris;
  • Export each Uri to a .MANIFEST and .JSON file and store them as blobs in the destination Storage Account;
  • Log all copy actions.

The benefit of AzCopy is that it tells the Azure platform to perform the copy; no extensive processing is done by AzCopy itself. This allows the script to run on an Azure VM. A Standard_A1 VM is sufficient; additional disks are highly recommended when large Tables are used.

Unfortunately, parallel processing with AzCopy.exe is not possible; everything must be done sequentially. Alternatively, multiple VMs can be spun up, each backing up its own set of Storage Accounts. This is highly recommended because backing up Storage Accounts may be very time consuming, especially with large sets of small files or very large tables. Additionally, some small text files are used to store the items retrieved; they also allow the use of ForEach loops.

To repeat the actions for multiple Storage Accounts, a .csv file is used as input. The .csv file may look like this:

SourceStorageName,SourceStorageKey,DestinationStorageName,DestinationStorageKey
source1,sourcekey1,destination1,destinationkey1
source2,sourcekey2,destination2,destinationkey2

The actual script uses a lot of variables. AzCopy is called using the Start-Process cmdlet, while the parameters for AzCopy.exe are assembled into a long string that is passed to the -ArgumentList parameter.

So here’s the script I used to achieve the goal of backing up Blob containers and Table Uris:

#
# Name: Copy_Storage_Account_AzCopy.ps1
#
# Author: Marc Westerink
#
# Version: 1.0
#
# Purpose: This script copies Blob and Table Storage from a source Storage Account to a Destination Storage Account.
# File F:\Input\Storage_Accounts.csv contains all Source and Destination Storage.
# All Blob Containers and Tables will be retrieved and processed sequentially.
# All content will be copied as blobs to a container named after the Source Storage Account. A virtual directory will be created for each blob container.
#
# Requirements:
# - Storage Accounts with Secondary Access Keys
# - AzCopy needs to be installed
# - Azure PowerShell needs to be installed
# - An additional disk to store Temporary Files (i.e. F:\ drive)
# - A Temp Folder (i.e. F:\Temp) with two Text Files 'temp.txt' and 'output.txt'. The Temp Folder will be used by AzCopy.
# - A Folder to store all log files (i.e. F:\Logs)
#

# First, let’s create the required global variables

# Get the date the script is being run
$Date = Get-Date -format "dd-MM-yyyy"

# AzCopy Path
$FilePath='C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe'

#Temp Files, let’s make sure they’re cleared before starting

$File1='F:\Temp\temp.txt'
Clear-Content -Path $File1

$File2='F:\Temp\output.txt'
Clear-Content -Path $File2

#Recursive Parameter: DO NOT use for Table Storage
$Recursive='/S'

#Suppress prompt popups
$Prompt='/Y'

#Temporary Directory for AzCopy
$TempDir='F:\Temp'

#SplitSize parameter for Tables, this will split a large table into separate .JSON files of 1024 MB
$SplitSize='/SplitSize:1024'

#DestType parameter, required for copying tables as blobs to the Destination
$DestType='/DestType:Blob'

#Temporary Directory for AzCopy Journal files
$Journal='/Z:F:\Temp'

#Blob path
$Blob='.blob.core.windows.net/'

#https header
$HTTPS='https://'

#Let’s import the CSV and process all Storage Accounts
Import-Csv F:\Input\Storage_Accounts.csv | % {

#Creating the Full Path of the Source Storage Account Blob
$SourceStoragePath=$HTTPS+$_.SourceStorageName+$Blob

#Creating the Full Path of the Destination Storage Container, if it doesn’t exist it will be created
$DestStorageContainer=$HTTPS+$_.DestinationStorageName+$Blob+$_.SourceStorageName+$Date

#Gather the Source Access Key
$SourceStorageKey=$_.SourceStorageKey

#Gather the Destination Access Key
$DestinationStorageKey=$_.DestinationStorageKey

#Defining the log file for verbose logging with the Source Storage Account Name and the date
$Verbose='/V:F:\Logs\'+$_.SourceStorageName+$Date+'.log'

#Create the Azure Storage Context to gather all Blobs and Tables
$Context = New-AzureStorageContext -StorageAccountName $_.SourceStorageName -StorageAccountKey $_.SourceStorageKey

#Copy blob containers first

#Get all containers
Get-AzureStorageContainer -context $context | Select Name | % {

Add-Content -Path $File1 -Value $_.Name
}

#Convert all Container Names to full paths and write them to the Output File
Get-Content $File1 | % {

Add-Content -Path $File2 -Value $SourceStoragePath$_
}

#Process all Containers using the Output File as input
Get-Content $File2 | % {

#Gather virtual directory name using the container name
$VirtualDirectory = $_ -replace $SourceStoragePath,''

$ArgumentList='/Source:'+$_+' '+'/Dest:'+$DestStorageContainer+'/'+$VirtualDirectory+' '+'/SourceKey:'+$SourceStorageKey+' '+'/DestKey:'+$DestinationStorageKey+' '+$Recursive+' '+$Verbose+' '+$Prompt+' '+$Journal
Start-Process -FilePath $FilePath -ArgumentList $ArgumentList -Wait
}

#Before proceeding, let’s clean up all files used
Clear-Content -Path $File1
Clear-Content -Path $File2
#Get All Tables
Get-AzureStorageTable -context $context | Select Uri | % {

Add-Content -Path $File2 -Value $_.Uri
}

#Process all Tables using the Output File as input
Get-Content $File2 | % {

$ArgumentList='/Source:'+$_+' '+'/Dest:'+$DestStorageContainer+' '+'/SourceKey:'+$SourceStorageKey+' '+'/DestKey:'+$DestinationStorageKey+' '+$SplitSize+' '+$Verbose+' '+$Prompt+' '+$DestType+' '+$Journal
Start-Process -FilePath $FilePath -ArgumentList $ArgumentList -Wait

}

#Cleanup Output File
Clear-Content -Path $File2
}

To run this script on a schedule, a simple Scheduled Task can be created. The schedule itself depends on the environment’s needs and on the time it takes to copy everything. It’s not uncommon for a large storage account with countless small blobs and huge tables to take a week or even longer to copy…
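
Creating such a task with PowerShell could look roughly like this; the script path, schedule and account are assumptions:

# Sketch: register a weekly task that runs the backup script; path, schedule and account are placeholders.
$Action = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-ExecutionPolicy Bypass -File F:\Scripts\Copy_Storage_Account_AzCopy.ps1'
$Trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Saturday -At 1am
Register-ScheduledTask -TaskName 'Backup Storage Accounts' -Action $Action -Trigger $Trigger `
    -User 'DOMAIN\svc-backup' -Password '<password>' -RunLevel Highest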

 

 

 

Posted on 02/02/2016 in Azure, PowerShell, Public Cloud

 

Looking forward to 2016…

So, after leaving 2015 behind us and getting started in 2016, it’s time to have a look at what 2016 is going to bring us.

2015 was the year that really got the adoption of cloud technology going, and I expect more and more organizations to follow or to start adopting more of the features cloud technology offers us. A very nice development is that organizations are starting to understand how convenient it is when the ‘gate’ for end users shifts from Active Directory to Azure Active Directory.

Three big releases will most likely take place this year:

  • AzureStack;
  • Windows Server 2016;
  • System Center 2016.

I strongly believe the release of Windows Server 2016 will dramatically change the way we’re used to working, and I really believe the following two features will enable it:

  • Nano Server;
  • Containers.

Since the release of Windows Server 2016 Technical Preview 3, and even more so with Technical Preview 4, we’re able to research and experiment with these two features. Fortunately, I don’t expect Windows Server 2016 RTM to be released in the first half of 2016. This allows me to play around with it and understand how it works, so that I am prepared when it becomes available.

So, Windows Server 2016 is quite a big tip of the iceberg. With everything else coming as well, I expect 2016 to be a very busy year. But I expect to have a lot of fun with it too…

So let’s see what’s going to happen this year, I look forward to it.

 

Looking back at 2015…

So, the year 2015 is almost at its end. While I write this, I am already in the second week of my two-week time off. And boy, I really needed this two-week break.

2015 was an extremely busy year for me, and I can actually cut the year in half.

In the first half, I was still busy participating in a project where I designed and deployed System Center 2012 R2 Configuration Manager. I also built a stand-alone image building environment running MDT 2013. Unfortunately, the project took way longer than expected due to the customer being unable to take ownership and start administering it themselves. Eventually I decided to walk away after the contractual end date of my involvement, despite the fact that the project wasn’t finished yet. The longer it took, the more frustrating the project became for me, so the decision to walk away was eventually the right one.

This takes me to the second half, in which I saw a dramatic shift in my job: I did only one Configuration Manager design and deployment in the second half of 2015. I started to extend my Enterprise Client Management skillset a bit more with Microsoft Intune and Microsoft’s Public Cloud platform: Azure.

I also started to deliver more workshops, master classes and training sessions. This is something I really like to do and I want to thank those who made it possible for me. It allowed me to renew my Microsoft Certified Trainer certification.

Fortunately, the frustrations of the first half provided some learning moments that pushed me to become a more complete consultant. So my coworker arranged a two-day training session for me called “Professional Recommending” (this may be a poor translation of the Dutch “Professioneel Adviseren”) provided by Yearth. This is by far the most important training I have received in my career, and it started to pay off pretty quickly in the form of more positive feedback from customers. This training made me a more complete consultant.

I was also happy to do the presentation workshop with Monique Kerssens and Jinxiu Hu from Niqué Consultancy BV at ExpertsLive 2015, and pleased to receive the feedback that my presentation skills have developed greatly. To quote them: “you’re standing like a house”.

The icing on the cake came at the end of this year when I was asked to review the DataON CiB-9224 platform. You can read the review in my previous post.

So, I experienced some highs and lows this year. Fortunately, the highs came at the second half.

I look forward to 2016, but that’s for another post…

 

 

Reviewing the DataON Cluster-in-a-box 9224 (CiB-9224 V12) platform

Recently the company I work for became a partner in deploying DataON platform solutions, together with the Dutch distributor of DataON. The distributor has the knowledge and experience to distribute hardware, but was looking for a partner to deploy it in a way that meets customers’ needs. I had the honor of reviewing one of DataON’s solutions provided by the distributor: the CiB-9224 V12.

[Image: DNS-9220 Front View]

Before I got started, I checked the relevant information on DataON’s website, available at http://dataonstorage.com/cluster-in-a-box/cib-9224-v12-2u-24-bay-12g-sas-cluster-in-a-box.html

Here are a few features that I consider relevant:

  • You have a two-node cluster in a single 2U enclosure;
  • A two-tier storage deployment is available; only JBOD (no hardware RAID) is available to both nodes;
  • The solution can be ‘stacked’ with either another CiB and/or DNS JBOD solution;
  • The components used result in a very simple and easy-to-use setup; no extensive hardware knowledge is required;
  • DataON delivers OOBE guides to get you started.

Overall, DataON delivers a no-nonsense solution. Since I am an advocate of a no-nonsense approach, this is something I really like.

After checking it all I conclude that this platform can be used in two ways:

  • Scale Out File Server (SOFS) cluster providing one or more SMB 3.0 shares;
  • A two-node Hyper-V cluster.

Specific scenarios are available at DataON’s website mentioned earlier.

For my review I decided to build a two-node Hyper-V cluster. After preparing a small infrastructure (DC, DNS, DHCP and networking) I was able to get going. I decided to follow the OOBE guide as much as possible. In less than an hour, I had a fully operational two-node Hyper-V cluster. I noticed a few things during deployment:

  • Some steps in the guide are not completely in line with deploying the solution. I was able to create a Storage Space with Data Deduplication enabled, while the guide doesn’t mention Data Deduplication. However, I greatly welcome having Data Deduplication enabled, since it generates significant savings when Virtual Machines are stored on the deduplicated volume (see the sketch after this list);
  • The Storage Space is very fast, deploying Virtual Machines doesn’t take much time at all;
  • I like the built-in Mellanox ConnectX®-3 Pro EN Single 10GbE port used for Cluster Heartbeat and Live Migration. After configuring the cluster to use this NIC only for Live Migration I was very happy with its Live Migration performance. It worked like a charm;
  • I managed the cluster using System Center 2016 Virtual Machine Manager Technical Preview and System Center 2016 Operations Manager Technical Preview. After deploying the required agents I was able to manage the cluster completely by Virtual Machine Manager. Dynamic Optimization and PRO Tips became available. After setting Dynamic Optimization to very aggressive settings I could see Virtual Machines dancing around on both nodes without negatively affecting the Virtual Machines themselves.
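
For reference, enabling Data Deduplication on a volume that stores virtual machines with PowerShell looks roughly like this; the volume path is a placeholder and the Hyper-V usage type is an assumption based on this scenario:

# Sketch: enable deduplication on the CSV that holds the VMs; the path is a placeholder.
Enable-DedupVolume -Volume 'C:\ClusterStorage\Volume1' -UsageType HyperV

# Check the savings once the optimization jobs have run.
Get-DedupStatus -Volume 'C:\ClusterStorage\Volume1'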

The next step was trying to stress test the platform. I decided to deploy 150 Virtual Machines using a template. I found a nice PowerShell script that would do the work for me at http://blogs.technet.com/b/virtual-mite/archive/2014/03/04/deploying-multiple-vm-39-s-from-template-in-vmm.aspx. During this deployment I noticed that the limited network resources (I had a 1 Gbit/sec switch available, no fibre) significantly slowed down the deployment, and I was also overcommitting the cluster (memory resources prevented me from running all these Virtual Machines at once). I had no intention of running all these machines after deploying them, but it gave me some good insights into the platform’s capabilities. To me, the test scenario was not optimal and I expect better performance when 10 Gbit/sec SFP connections are used. Nevertheless, the platform successfully deployed the 150 Virtual Machines.
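
The linked script works against VMM templates; as a simplified illustration of the same idea, a plain Hyper-V sketch that clones a sysprepped template VHDX in a loop could look like this (all paths, names and sizes are assumptions, not the script I actually used):

# Sketch only: bulk-create VMs from a template VHDX with plain Hyper-V cmdlets.
# All paths, names and sizes are placeholders; the actual test used the VMM script linked above.
$TemplateVhdx = 'C:\ClusterStorage\Volume1\Templates\Template.vhdx'
$VmRoot = 'C:\ClusterStorage\Volume1\VMs'

1..150 | ForEach-Object {
    $Name = 'TestVM{0:D3}' -f $_
    $Vhdx = Join-Path $VmRoot "$Name.vhdx"

    # Copy the template disk and create a VM attached to it.
    Copy-Item -Path $TemplateVhdx -Destination $Vhdx
    New-VM -Name $Name -MemoryStartupBytes 1GB -Generation 2 `
        -VHDPath $Vhdx -Path $VmRoot -SwitchName 'VMSwitch'
}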

After deploying the Virtual Machines I was able to monitor Data Deduplication (I used the default settings). The deduplication savings made me discover that basically all Virtual Machines were stored on the fast tier alone, which impressed me the most. This would make the solution extremely powerful for a VDI deployment, especially when stacked with one or more of these babies.

After finishing my testing I can definitely recommend this platform. After finding out its price, I strongly believe the DataON solution offers serious bang for your buck. It makes the Return on Investment (ROI) period very short and easy to manage. And all that in just a 2U enclosure…

All the requirements for the future are also in place for when Windows Server 2016 is released. I discussed my findings with DataON as well, and there are additional test scenarios to investigate.

Hopefully I can test it with Nano Server and/or Hyper-V containers, but that is something for 2016…

 

 

 
 