
Category Archives: Windows Server

Building a Storage Spaces Direct (S2D) cluster and succeeding…

As in my previous post, I failed miserably building an S2D cluster. Fortunately, it was just a small matter of reading this whitepaper properly, which states that only local storage can be used. We all know iSCSI storage is not locally attached, so it makes perfect sense that it doesn’t work. But at least I tested it and verified…

OK, so knowing that S2D works with DAS storage only, it is time to test and verify how difficult it is to build an S2D cluster.

I’m going to build the cluster using this guide. I use two FS1 Azure VMs and attach one P10 disk to each node.

So I follow the steps to build the cluster.

The first step is to enable S2D, which works fine.

[Screenshot: s2d-with-das]

NOTE: as in my previous post, the CacheMode parameter no longer exists. Since it is still mentioned in the guide, this may be a bit confusing to read.
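
For reference, this boils down to the guide’s cmdlet with the CacheMode parameter removed (a minimal sketch):

#Enable Storage Spaces Direct on the cluster; the guide's -CacheMode parameter no longer exists
Enable-ClusterS2D -AutoConfig:0 -SkipEligibilityChecks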

The next step is creating a Storage Pool for S2D.

[Screenshot: s2d-storage-pool-2-disk-fail]

Hmm, that’s odd. Apparently two disks are insufficient. So, let’s add two more, one at each node, resulting in four disks.

[Screenshot: s2d-storage-4-disk-success]

OK, so I can continue building an S2D cluster disk of 250 GB.

[Screenshot: s2d-virtualdisk]
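
The pool and virtual disk steps roughly correspond to the following cmdlets (a sketch only; the friendly names are illustrative and not necessarily the ones the guide uses):

#Create the storage pool from all poolable disks in the cluster
New-StoragePool -StorageSubSystemFriendlyName "*Cluster*" -FriendlyName 'S2D Pool' -ProvisioningTypeDefault Fixed -PhysicalDisks (Get-PhysicalDisk | ? CanPool -eq $true)

#Create a 250 GB CSV volume on the pool
New-Volume -StoragePoolFriendlyName 'S2D Pool' -FriendlyName 'S2D Disk' -FileSystem CSVFS_ReFS -Size 250GB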

The final step is creating a share according to the guide.

[Screenshot: smb-share-fail]

Hmmm, this fails too…

Well, I was able to create the share using the Failover Clustering console by configuring the role as a SOFS and providing a ‘Quick’ file share.
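
The same result can most likely be scripted as well; a rough sketch (the role name, folder, share name and permissions below are made up for illustration):

#Add the Scale-Out File Server role to the cluster
Add-ClusterScaleOutFileServerRole -Name SOFS01

#Create a folder on the CSV and share it as a continuously available ('Quick') share
New-Item -ItemType Directory -Path C:\ClusterStorage\Volume1\Shares\VMShare
New-SmbShare -Name VMShare -Path C:\ClusterStorage\Volume1\Shares\VMShare -FullAccess 'CONTOSO\Domain Admins' -ContinuouslyAvailable $true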

So yeah, it’s relatively easy to build an S2D cluster, but some steps in the overview need to be reviewed again, because it contains mistakes…


Building a Storage Spaces Direct (S2D) cluster and failing miserably…

Windows Server 2016 has been available for a little while now. A well-hyped feature is Storage Spaces Direct (S2D). It allows organizations to create fast-performing, resilient and hyperconverged clusters which go hand in hand with Hyper-V. Based on the documentation available at https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-overview it even allows you to run Hyper-V and SOFS on the same hardware without requiring expensive storage components such as SANs and the additional components required to connect to them. This is a major improvement compared to Windows Server 2012 R2, which doesn’t support this.

The following passage in the overview caught my attention:

Storage Hardware. From 2 to 16 servers with local-attached SATA, SAS, or NVMe drives. Each server must have at least 2 solid-state drives, and at least 4 additional drives. The SATA and SAS devices should be behind a host-bus adapter (HBA) and SAS expander. We strongly recommend the meticulously engineered and extensively validated platforms from our partners (coming soon).

Okay, that makes sense, since S2D eliminates the need for remotely available storage. Seems like it works with DAS only.

But what if I still have a bunch of iSCSI targets available and would like to use them for an S2D cluster? Maybe the volumes provided by a StorSimple device might work; after all, it’s iSCSI too, right?

So I’ve decided to try to build an S2D (my god, this abbreviation is really close to something I don’t want to get) cluster and see if it works. I used the following guide as a reference: https://technet.microsoft.com/en-us/windows-server-docs/compute/remote-desktop-services/rds-storage-spaces-direct-deployment

Since I don’t have any hardware available I decided to build the cluster in my Azure environment.

So here’s what I did:

  • I built an Azure VM configured as an iSCSI target server that provides 4 disks, each 1000 GB in size (see the sketch after this list)
  • I built two Azure VMs which will be configured as cluster nodes and have these 4 disks available
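
For those who want to reproduce the iSCSI part, the target and initiator configuration looks roughly like this (a sketch; the disk paths, target name, initiator IQNs and DNS names are illustrative):

#--- On the iSCSI target server: create four 1000 GB virtual disks and expose them to both nodes
1..4 | ForEach-Object { New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\Disk$_.vhdx" -SizeBytes 1000GB }
New-IscsiServerTarget -TargetName S2DTarget -InitiatorIds 'IQN:iqn.1991-05.com.microsoft:node1.contoso.local','IQN:iqn.1991-05.com.microsoft:node2.contoso.local'
1..4 | ForEach-Object { Add-IscsiVirtualDiskTargetMapping -TargetName S2DTarget -Path "C:\iSCSIVirtualDisks\Disk$_.vhdx" }

#--- On each cluster node: start the iSCSI initiator service and connect to the target
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress iscsi-target.contoso.local
Get-IscsiTarget | Connect-IscsiTarget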

The first thing I did was verify whether the 4 disks can be pooled, using Get-PhysicalDisk | ? CanPool -eq $true. They can.

I then got to the point where I needed to enable S2D using the PowerShell cmdlet mentioned in the guide: Enable-ClusterS2D -CacheMode Disabled -AutoConfig:0 -SkipEligibilityChecks

The -CacheMode parameter is no longer part of the cmdlet, so I took that part out and tried again.
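
The command that remains is simply this (nothing else changed):

#Enable S2D without the removed -CacheMode parameter
Enable-ClusterS2D -AutoConfig:0 -SkipEligibilityChecks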

[Screenshot: s2d-with-iscsi]

Bummer…

This error confirms that locally attached storage is required to use S2D, so this is a dead end. iSCSI disks are not supported.

I was still able to build the cluster, assign the Storage Pool to the cluster itself instead of a node and create a CSV, but no S2D…

 

Posted on 26/10/2016 in Windows Server

 

Looking forward to 2016…

So, after leaving 2015 behind us and getting started in 2016, it’s time to have a look at what 2016 is going to bring us.

2015 was the year that really got the adoption of cloud technology going, and I expect more and more organizations to follow or to start adopting more of the features cloud technology offers us. A very nice development is that organizations are starting to understand better how convenient it is when the ‘gate’ for end users shifts from Active Directory to Azure Active Directory.

Three big releases will most likely take place this year:

  • AzureStack;
  • Windows Server 2016;
  • System Center 2016.

I strongly believe the release of Windows Server 2016 will dramatically change the way we’re used to working, and I really believe the following two features will enable that:

  • Nano Server;
  • Containers.

Since the release of Windows Server 2016 Technical Preview 3, and even more so with Windows Server 2016 Technical Preview 4, we’re able to research and experiment with these two features. Fortunately, I don’t expect Windows Server 2016 RTM to be released in the first half of 2016. This allows me to play around with it and understand how it works so that I am prepared when it becomes available.

So, Windows Server 2016 is quite a big tip of the iceberg. With everything else coming as well, I expect 2016 to be a very busy year. But I expect to have a lot of fun with it too…

So let’s see what’s going to happen this year; I look forward to it.

 

Looking back at 2015…

So, the year 2015 is almost at its end. While I write this, I am already in the second week of my two-week time off. And boy, I really needed this two-week break.

2015 was an extremely busy year for me, and I can actually cut the year in half.

In the first half, I was still busy participating in a project where I designed and deployed System Center 2012 R2 Configuration Manager. I also built a stand-alone image building environment running MDT 2013. Unfortunately, the project took way longer than expected due to the customer being unable to take ownership and start administering it themselves. Eventually I decided to walk away after the contractual end date of my involvement, despite the fact that the project wasn’t finished yet. The longer it took, the more frustrating the project became for me, so the decision to walk away was eventually the right one.

This brings me to the second half, in which I saw a dramatic shift in my job: I did only one Configuration Manager design and deployment in the second half of 2015. I started to extend my skillset on Enterprise Client Management a bit more with Microsoft Intune and Microsoft’s public cloud platform: Azure.

I also started to deliver more workshops, master classes and training sessions. This is something I really like to do and I want to thank those who made it possible for me. It also allowed me to renew my Microsoft Certified Trainer certification.

Fortunately, the frustrations of the first half provided some learning moments, which required me to become a more complete consultant. So my coworker arranged a two-day training session for me called “Professional Recommending” (this may be a poor translation of Professioneel Adviseren in Dutch), provided by Yearth. This is by far the most important training I have received in my career and it started to pay off pretty quickly in the form of more positive feedback from customers. It made me a more complete consultant.

I was also happy to do the presentation workshop with Monique Kerssens and Jinxiu Hu from Niqué Consultancy BV at ExpertsLive 2015, and happy to receive the feedback that my presentation skills have developed greatly. To quote them: “you’re standing like a house”.

The icing on the cake came at the end of this year when I was asked to review the DataON CiB-9224 platform. You can read the review in my previous post.

So, I experienced some highs and lows this year. Fortunately, the highs came in the second half.

I look forward to 2016, but that’s for another post…

 

 

Reviewing the DataON Cluster-in-a-box 9224 (CiB-9224 V12) platform

Recently the company I work for became a partner in deploying DataON platform solutions, together with the Dutch distributor of DataON. The distributor has the knowledge and experience to distribute the hardware, but was looking for a partner to deploy it to meet the needs of customers. I had the honor of reviewing one of DataON’s solutions provided by the distributor: the CiB-9224 V12.

[Image: DNS-9220 front view]

Before I got started I checked the relevant information on DataON’s website which is available at http://dataonstorage.com/cluster-in-a-box/cib-9224-v12-2u-24-bay-12g-sas-cluster-in-a-box.html

Here are a few features that I consider relevant:

  • You have a two-node cluster in a single 2U enclosure;
  • A two-tier storage deployment is available; only JBOD (no hardware RAID) is available to both nodes;
  • The solution can be ‘stacked’ with either another CiB and/or DNS JBOD solution;
  • The components used result in a very simple and easy to use setup, no extensive hardware knowledge is required;
  • DataON delivers OOBE guides to get you started.

Overall DataON delivers a no-nonsense solution. Since I am an advocate of a no-nonsense approach it is something I really like.

After checking it all I conclude that this platform can be used in two ways:

  • Scale Out File Server (SOFS) cluster providing one or more SMB 3.0 shares;
  • A two-node Hyper-V cluster.

Specific scenarios are available at DataON’s website mentioned earlier.

For my review I decided to build a two-node Hyper-V cluster. After preparing a small infrastructure (DC, DNS, DHCP and networking) I was able to get going. I decided to follow the OOBE guide as much as possible. In less than an hour, I had a fully operational two-node Hyper-V cluster. I noticed a few things during deployment:

  • Some steps in the guide are not completely in line with deploying the solution. I was able to create a Storage Space with Data Deduplication enabled, while the guide doesn’t mention Data Deduplication. However, I greatly welcome having Data Deduplication enabled, since it generates significant savings when Virtual Machines are stored on the deduplicated volume;
  • The Storage Space is very fast, deploying Virtual Machines doesn’t take much time at all;
  • I like the built-in Mellanox ConnectX®-3 Pro EN Single 10GbE port used for Cluster Heartbeat and Live Migration. After configuring the cluster to use this NIC only for Live Migration I was very happy with its Live Migration performance. It worked like a charm;
  • I managed the cluster using System Center 2016 Virtual Machine Manager Technical Preview and System Center 2016 Operations Manager Technical Preview. After deploying the required agents I was able to manage the cluster completely by Virtual Machine Manager. Dynamic Optimization and PRO Tips became available. After setting Dynamic Optimization to very aggressive settings I could see Virtual Machines dancing around on both nodes without negatively affecting the Virtual Machines themselves.

The next step was trying to stress test the platform. I decided to deploy 150 Virtual Machines using a template. I found a nice PowerShell script that would do the work for me at http://blogs.technet.com/b/virtual-mite/archive/2014/03/04/deploying-multiple-vm-39-s-from-template-in-vmm.aspx. During this deployment I noticed that the limited network resources (I had a 1 Gbit/s switch available, no fiber) significantly slowed down the deployment, and I was also overcommitting the cluster (memory resources prevented me from running all these Virtual Machines). I had no intention of running all these machines after deploying them, but it gave me some good insights into the platform’s capabilities. To me, the test scenario used is not optimal and I expect better performance when 10 Gbit/s SFP+ connections are used. Nevertheless, the platform successfully deployed the 150 Virtual Machines.
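
The linked script targets Virtual Machine Manager; a simplified stand-in using plain Hyper-V and Failover Clustering cmdlets would look roughly like this (template path, VM names, memory size and switch name are illustrative):

#Clone a sysprepped template VHDX and create the Virtual Machines on the cluster volume
$templateVhd = 'C:\ClusterStorage\Volume1\Templates\Template.vhdx'
1..150 | ForEach-Object {
    $name = 'TestVM{0:D3}' -f $_
    $vhd  = "C:\ClusterStorage\Volume1\VMs\$name.vhdx"
    Copy-Item -Path $templateVhd -Destination $vhd
    New-VM -Name $name -MemoryStartupBytes 1GB -Generation 2 -VHDPath $vhd -SwitchName 'vSwitch'
    #Make the VM highly available on the cluster (not started, as in the test above)
    Add-ClusterVirtualMachineRole -VMName $name | Out-Null
}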

After deploying the Virtual Machines I was able to monitor Data Deduplication (I used the default settings). The deduplication savings made me discover that basically all Virtual Machines were stored on the fast tier alone. This impressed me the most. It would make this solution extremely powerful for a VDI deployment, especially when stacked with one or more of these babies.
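
Checking those savings is a matter of a couple of cmdlets (a sketch; run on the node owning the deduplicated volume):

#Show Data Deduplication status and savings for the deduplicated volumes
Get-DedupStatus
Get-DedupVolume | Select-Object Volume, SavedSpace, SavingsRate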

After finishing my testing I can definitely recommend this platform. After finding out the price for this platform I strongly believe that the DataON solution is serious bang for your buck. It makes the Return on Investment (ROI) very short and easy to manage. And all that in just a 2U enclosure…

All the requirements for the future are also there once Windows Server 2016 is released. I discussed my findings with DataON as well, and there are additional test scenarios to investigate.

Hopefully I can test it with Nano Server and/or Hyper-V containers, but this is something for 2016…

 

 

 

Building my first (but completely useless) Nano Server cluster based on Windows Server 2016 TP4…

Well, after building my first Nano Server, which I blogged about in this post, I got some inspiration to play around with it a bit more.

For this post, my goal was to make a Scale Out File Server cluster with two Nano Server nodes.

So my thought was to provision an iSCSI target first. This time I went in a completely different direction by deploying an Ubuntu 15.10 server and configuring it as an iSCSI target. I used the guidelines at https://www.howtoforge.com/using-iscsi-on-ubuntu-10.04-initiator-and-target and https://linhost.info/2012/05/configure-ubuntu-to-serve-as-an-iscsi-target/ to deliver a 150 GB iSCSI target volume.

After creating two Nano Server .vhd files, I noticed that the network cards had no DNS servers specified. The machines were also not registered in DNS, so I wasn’t able to access them remotely using Server Manager. After establishing a remote PowerShell session, I used netsh to add a DNS server to the network cards with the following command: netsh interface ip set dnsservers name="Ethernet" static 172.16.0.1 primary
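
From the management machine this boils down to something like the following sketch (the node IP address and credentials match the lab values from my previous post):

#Set the DNS server on the Nano Server remotely via PowerShell remoting
$cred = Get-Credential 'NA01\Administrator'
Invoke-Command -ComputerName 172.16.0.2 -Credential $cred -ScriptBlock {
    netsh interface ip set dnsservers name="Ethernet" static 172.16.0.1 primary
}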

Just to be sure, I restarted the machines so that DNS registration takes place. Restarting is quick because the OS is very small and only a limited number of services are started. The next step was adding the machines to the TrustedHosts list for WinRM.
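
On the management machine that is a one-liner (the node names below are illustrative):

#Trust the Nano Server nodes for WinRM connections from this machine
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'NA01,NA02' -Force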

After that I was able to add the machines to Server Manager on my DC. This verifies that I can access the machines remotely in addition to PowerShell remoting.

 

So let’s try to build a cluster. I used Failover Cluster Manager to do this. So let’s get started.

Let’s do a cluster validation first.

[Screenshot: cluster_validation-1]

I added the servers and used the default settings for validating the cluster.

[Screenshot: cluster_validation-2]

Cluster validation is running, time for something to drink or a small toilet break 😉

[Screenshot: cluster_validation-3]

It’s good to see that the cluster validation test passed. The warning on networking is purely due to the fact that only one network card is available, which is not a recommended practice. But hey, we’re in a lab…

So let’s build that cluster.

[Screenshot: cluster_validation-4]

Let’s give the cluster a name and an IP address and proceed…

[Screenshot: cluster_validation-5]

So all is set to create that cluster. I unchecked the Add all eligible storage to the cluster checkbox since I haven’t connected anything yet. Time to build that cluster.

[Screenshot: cluster_validation-6]

The cluster is ready.

After building the cluster I configured a network share as a file share witness.
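
For reference, the same steps can be done in PowerShell, roughly like this (the cluster name, IP address and witness share are illustrative):

#Validate and create the two-node Nano Server cluster, then set a file share witness
Test-Cluster -Node NA01, NA02
New-Cluster -Name NANOCL01 -Node NA01, NA02 -StaticAddress 172.16.0.10 -NoStorage
Set-ClusterQuorum -Cluster NANOCL01 -FileShareWitness '\\DC01\Witness'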

 

So the next step is adding storage using iSCSI. And here’s the part where my cluster becomes useless. The current Nano Server packages do not include anything iSCSI Initiator related, so I have no iSCSI Initiator service running, nor can I create an iSCSI-based disk, since the PowerShell cmdlets for the iSCSI Initiator are simply not there. So that’s a dead end here.

 

Nevertheless, it was quite a satisfying exercise to build this cluster using Nano Server, and it should provide some inspiration to build a cluster for a different purpose, e.g. Hyper-V. But that’s for a different post…

 

 

 

Posted on 25/11/2015 in Windows Server

 

Building my first Nano Server using Windows Server 2016 TP4…

Recently Microsoft released the bits for Windows Server 2016 Technical Preview 4. One of the features that caught my attention is Nano Server, a ‘headless’ server that has to be managed remotely. I saw some interesting demos last week at ExpertsLive and I had some time to check it out myself. So I downloaded Technical Preview 4 and got started.

Nevertheless, getting started without a plan doesn’t make sense, so first I need a plan:

  • I use my laptop to build a lab environment;
  • The lab environment consists of 1 domain controller running Windows Server 2016 TP4 with a GUI (I need to manage those servers somewhere);
  • The domain controller has some administration tools;
  • I plan to build two Nano Server machines which should be configured as a Scale Out File Server (SOFS).

I used the ‘Getting Started with Nano Server’ guide available at https://technet.microsoft.com/en-us/library/mt126167.aspx

Building your Nano Server images is something that must be done up front. I quickly noticed that it’s easier to throw away your .vhd file and rebuild it than to troubleshoot and fiddle with it to get something working. This is in line with some tweets I read from a well-known Technical Fellow at Microsoft, Jeffrey Snover. Having experienced this first hand, it absolutely makes sense to me now.

After studying the guide I noticed that the following details are required before building the .vhd file:

  • Name;
  • Language;
  • IP address information;
  • Packages.

I decided to gather all required details (except the Packages) in variables, creating a small script which I can reuse by changing only the variables. Here’s how it might look:

Set-ExecutionPolicy Bypass -Force

#Import-Module
Import-Module C:\NanoServer\NanoServerImageGenerator.psm1 -Verbose

#Defining Server Specific Parameters
$MediaPath='D:\'
$TargetPath='C:\Users\Public\Documents\Hyper-V\Virtual hard disks\NA01.vhd'
$ComputerName='NA01'
$Language='en-us'

$Ipv4Address='172.16.0.2'
$Ipv4SubnetMask='255.255.0.0'
$Ipv4Gateway='172.16.0.1'

#Create the Image
New-NanoServerImage -MediaPath $MediaPath -BasePath .\Base -TargetPath $TargetPath -ComputerName $ComputerName -InterfaceNameOrIndex Ethernet -Ipv4Address $Ipv4Address -Ipv4SubnetMask $Ipv4SubnetMask -Ipv4Gateway $Ipv4Gateway -Language $Language -Clustering -GuestDrivers -Storage -EnableRemoteManagementPort

Building the .vhd didn’t take long at all, and the file itself is roughly 560 MB in size, so it is easy to rebuild in case a mistake is made. The cmdlet prompts for the password of the local Administrator account.

After creating the .vhd I created a new virtual machine and selected the .vhd file built before. After firing it up I had a working Nano Server.

[Screenshot: NA01-1]

So let’s log on to see what the UI looks like.

[Screenshot: NA01-2]

Yes, pretty basic if you ask me. But hey, it is a ‘headless’ server so we’re not supposed to log on locally.

After that, I followed the instructions to join the server to the domain to the letter and that worked flawlessly as well…
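
For reference, the offline domain join from the guide boils down to djoin (a sketch; the domain name and file paths are illustrative):

#On the domain controller: provision the computer account and save the blob
djoin.exe /provision /domain contoso.local /machine NA01 /savefile C:\odjblob

#Copy the blob to the Nano Server, then run this in a remote session on NA01
djoin.exe /requestodj /loadfile C:\odjblob /windowspath C:\Windows /localos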

So now I can build my SOFS cluster, but that’s for another post.

 

This is something definitely worth playing around with, especially now that Nano Server based on TP4 is also available in the Microsoft Azure virtual machine gallery. But that’s also for another post…

 

 

Posted on 24/11/2015 in Windows Server

 
 