
Case study: Running Windows Server 2016 on a DataON CiB…

Recently I was asked to investigate whether Windows Server 2016 would be a suitable OS for a DataON CiB (Cluster-in-a-Box) platform. Some of the new features in Windows Server 2016 are very exciting, and the one that excites me the most is Storage Spaces Direct (S2D). I set a goal by asking myself the following question:

Can I deploy a hyper-converged cluster using Hyper-V and Storage Spaces Direct with a CiB-9224 running Windows Server 2016?

The case study involves a CiB-9224 V12 platform, and I had the liberty of starting from scratch on one of these babies.

[Image: DataON CiB-9224 V12]

To figure out if this is possible, I took the following steps (a PowerShell sketch follows the list):

  1. I deployed Windows Server 2016 Datacenter on each node;
  2. I verified that no device drivers were missing. A lot of Intel chipset-related devices had no driver (this may differ per model), so I installed the Intel Chipset software. The Avago SAS adapter didn't need a new driver. NOTE: Microsoft Update can also be used to download and install the missing drivers;
  3. I installed the required Roles & Features on both nodes: Hyper-V, Data Deduplication, Failover Clustering and Multipath I/O;
  4. I enabled Multipath I/O for SAS, which the SAS adapter requires to present the available disks properly;
  5. I created a failover cluster and configured a File Share Witness hosted on another server;
  6. I attempted to enable Storage Spaces Direct, but got stuck at the 'Waiting for SBL disks are surfaced, 27%' step. Nothing happened after that.
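
Steps 3 to 6 can be scripted as well. Here is a minimal sketch of what they look like in PowerShell (the cluster, node and witness names are hypothetical):

  # Steps 3 and 4: install the roles & features and claim the SAS disks for MPIO (run on each node)
  Install-WindowsFeature -Name Hyper-V, FS-Data-Deduplication, Failover-Clustering, Multipath-IO -IncludeManagementTools
  Enable-MSDSMAutomaticClaim -BusType SAS

  # Step 5: create the failover cluster with a File Share Witness (run once)
  New-Cluster -Name CIB-CL01 -Node NODE1, NODE2 -NoStorage
  Set-ClusterQuorum -Cluster CIB-CL01 -FileShareWitness \\WITNESS01\Quorum

  # Step 6: this is the step that got stuck at 27%
  Enable-ClusterS2D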

 

I started troubleshooting to determine why this step couldn't finish. I checked the requirements for S2D again and found the following page:

https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-hardware-requirements

In the Drives section I noticed an unsupported scenario for S2D that matches the configuration of the CiB-9224: MPIO, or physically connecting drives via multiple paths. After reading the requirements I stopped troubleshooting: an unsupported scenario means S2D is simply not possible.

 

The result was that I created a Storage Pool without using S2D and presented the Virtual Disk to the cluster as a Cluster Shared Volume. I was not able to choose ReFS as the file system (it was not available when creating a volume), so I had to stick with NTFS with Data Deduplication enabled.

So basically I used the 'Windows Server 2012 R2' solution: deploying the CSV using classic Storage Spaces.
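
In PowerShell terms, this fallback looks roughly like this (a sketch; the pool and volume names, the drive letter and the 1 TB size are mine):

  # Create a classic (non-S2D) storage pool from the poolable SAS disks
  New-StoragePool -StorageSubSystemFriendlyName *Cluster* -FriendlyName CiBPool `
      -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

  # ReFS wasn't offered here, so the volume gets NTFS plus Data Deduplication
  New-Volume -StoragePoolFriendlyName CiBPool -FriendlyName CSV01 `
      -ResiliencySettingName Mirror -FileSystem NTFS -Size 1TB -DriveLetter E
  Enable-DedupVolume -Volume E: -UsageType HyperV

  # Hand the virtual disk over to the cluster as a Cluster Shared Volume
  Get-ClusterResource | ? Name -like '*CSV01*' | Add-ClusterSharedVolume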

With the CiB-9224 I was not able to achieve my goal of deploying a hyper-converged cluster, at least based on Microsoft's definition of hyper-converged.

One question still remains: would I recommend using Windows Server 2016 on a CiB-9224?

The answer is yes, because some new features of Windows Server 2016, for example Shielded VMs, are fully supported on this hardware.

 

DataON does have a hyper-converged S2D platform available; more information can be found here: http://dataonstorage.com/storage-spaces-direct/s2d-3110-1u-10-bay-all-flash-nvme-storage-spaces-direct-cluster-appliance.html

[Image: DataON S2D-3110, front side view]

 


Posted on 19/01/2017 in Uncategorized

 

Looking back at 2016…

2016 is at its end, so it's time to look back at it before 2017 kicks in.

Due to personal reasons I've been blogging far less than before, but I expect to pick it up again and blog more actively.

This year I experienced quite a dramatic change in the way I do my job. System Center-related projects have become somewhat nonexistent and my focus has shifted completely to Microsoft Azure. I must admit that I like it quite a lot. Gone are the days of doing a project of a couple of months designing and deploying a product from the System Center suite, apart from a single Operations Manager deployment. The most-visited posts here are related to Configuration Manager, but I don't work with it anymore, so don't expect any new posts about Configuration Manager.

Technical deployments have also become a thing of the past. I've been working with customers adopting Microsoft Azure, and I've become more and more of an advisor, helping them adopt Microsoft Azure as smoothly as possible. However, adopting Microsoft Azure has become more of a financial discussion than a technical one. Customers (at least in NL) are more interested in managing costs and are looking for ways to keep costs as low as possible.

The second half of 2016 introduced a change at the company I work at: SCCT BV. When Sander Berkouwer (https://dirteam.com/), an MVP and a true authority on Identity Management, joined SCCT BV, it changed the dynamics of my work as well. I received a lot more Identity Management-related questions from customers, as if SCCT BV had become the go-to company for Identity Management. This was pretty funny, since I don't have that in-depth knowledge of Identity Management. Fortunately I have co-workers who do, so I was actually delegating quite some work to them.

I was also able to speak at a WMUG_NL event about managing Azure costs. I hope I'll be able to speak at more events in 2017.

Oh, and I passed two Amazon Web Services (AWS) exams as well. I guess I'm one of the few in NL who has passed both the Azure and the AWS architecture exams. Hopefully I'll get to work a bit more with AWS too, since it also has a great set of services and provides new challenges…

While 2016 was very challenging on a personal level, it was quite a year professionally…

See you in 2017!!!

 

Posted on 31/12/2016 in Opinion

 

Manage your Azure Bill part 2: the operating phase…

Customers who already use Microsoft Azure through one or more subscriptions may face some challenges getting insight into their Azure spending. Quite often customers ask me how to get insight into their spending; they are looking for ways to see, in a presentable way, where the money goes. Fortunately, it's pretty easy to answer this question, but the answer depends on the contract they have. Contracts can be sorted into two categories:

  1. Enterprise Agreement (EA) contracts
  2. All other ones (Pay-as-you-go, CSP etc.)

Customers with EA contracts can use Power BI (Pro) to generate their reporting quite easily. Power BI Pro is available to all users with an Office 365 E5 license. The Microsoft Azure Enterprise content pack is available from the Power BI portal (the screenshot is in Dutch, but you can do the math).

[Image: the Microsoft Azure Enterprise content pack in the Power BI portal (in Dutch)]

All other contract types can build their own environment using the Azure Usage and Billing Portal. Instructions on how to build it can be found at https://azure.microsoft.com/en-us/blog/announcing-the-release-of-the-azure-usage-and-billing-portal/. There are some catches, but it's pretty easy to build; I got it running in my MSDN subscription without much trouble. Once the environment is up and running and the billing data is in the database, it can be queried and processed in any way the customer chooses.
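
If you prefer to skip deploying a whole portal, the same usage data can also be pulled directly with Azure PowerShell. A minimal sketch, assuming the AzureRM modules are installed (the dates are just examples):

  # Pull a month of usage aggregates and summarize the quantity per meter category
  Login-AzureRmAccount
  $usage = Get-UsageAggregates -ReportedStartTime '2016-12-01' -ReportedEndTime '2016-12-31' `
      -AggregationGranularity Daily -ShowDetails $true   # large subscriptions need paging via -ContinuationToken
  $usage.UsageAggregations |
      Group-Object { $_.Properties.MeterCategory } |
      Select-Object Name, @{ n = 'Quantity'; e = { ($_.Group.Properties.Quantity | Measure-Object -Sum).Sum } }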

Alternatively, 3rd-party vendors offer services that present your Azure spending, but that's a topic for another day…

 

Posted on 31/12/2016 in Azure, Cloud, Public Cloud, Revenue

 

Manage your Azure Bill part 1: the planning phase…

2016 was the year that cloud adoption finally got going.

More and more organizations are reconsidering their IT strategy by embracing Microsoft Azure to run their workloads. The most common reason to move workloads to Microsoft Azure is that they no longer need to make the hardware investments themselves; they make that Microsoft's problem.

The biggest challenge customers are facing is estimating how much it will cost to use Azure resources. I ranted about this before at https://mwesterink.wordpress.com/2016/10/26/microsoft-azure-one-feature-i-really-need/. The Azure Pricing Calculator can help quite a bit, but it just doesn't cut it: it provides an estimate per Azure service, but it doesn't provide the bigger picture.

Fortunately, Microsoft has released the Azure TCO Calculator, which allows organizations to make a much more comprehensive calculation of their Azure spending. It also compares the costs to running the same workloads on-premises, although it is quite biased, tending to conclude that running the workloads on Azure is cheaper. As my co-worker Sander Berkouwer (https://www.dirteam.com) states about many things: trust, but verify! I can't agree more on this one, since organizations need to analyze and understand their own workloads.

The Azure TCO Calculator is available at https://www.tco.microsoft.com/

This should really get organizations going on embracing Microsoft Azure in 2017!!!

 

Posted on 31/12/2016 in Azure, Cloud, Revenue

 

Building a Storage Spaces Direct (S2D) cluster and succeeding…

As described in my previous post, I failed miserably building an S2D cluster. Fortunately, it was just a small matter of reading this whitepaper properly, which states that only local storage can be used. We all know iSCSI storage is not locally attached, so it makes perfect sense that it doesn't work. But at least I tested it and verified…

OK, so knowing that S2D works with DAS storage only, it's time to test and verify whether it's difficult to build an S2D cluster.

To build the cluster, I'm going to follow this guide. I use 2 FS1 Azure VMs and attach one P10 disk to each node.

So I follow the steps to build the cluster.

The first step is to enable S2D, which works fine.

[Image: Enable-ClusterS2D succeeding on the DAS-backed nodes]

NOTE: as in my previous post, the CacheMode parameter is not there anymore. Since it still appears in the guide, this may be a bit confusing to read.

The next step is creating a Storage Pool for S2D.
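
The pool creation step from the guide boils down to something like this (a sketch; the FriendlyName is mine):

  # Pool all eligible disks on the clustered storage subsystem
  New-StoragePool -StorageSubSystemFriendlyName *Cluster* -FriendlyName S2DPool `
      -ProvisioningTypeDefault Fixed `
      -PhysicalDisks (Get-PhysicalDisk | ? CanPool -eq $true)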

[Image: New-StoragePool failing with only 2 disks]

Hmm, that's odd. Apparently 2 disks is insufficient. So let's add two more, one on each node, resulting in four disks.

[Image: the Storage Pool being created successfully with 4 disks]

OK, so I can continue by building an S2D cluster disk of 250 GB.
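
For the record, that disk comes down to a single New-Volume call (a sketch; the friendly name is mine):

  # Create a 250 GB CSV on top of the S2D pool
  New-Volume -StoragePoolFriendlyName S2DPool -FriendlyName VDisk01 `
      -FileSystem CSVFS_REFS -Size 250GB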

[Image: the 250 GB virtual disk being created]

The final step is creating a share according to the guide.

[Image: creating the SMB share failing]

Hmmm, this fails too…

Well, I was able to create the share using the Failover Cluster Manager console by configuring the role as a Scale-Out File Server (SOFS) and providing a 'Quick' file share.
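
For completeness, the PowerShell equivalent of what I did in the console would look something like this (a sketch; the role and share names, the path and the permissions are hypothetical):

  # Add the Scale-Out File Server role and create a quick SMB share on the CSV
  Add-ClusterScaleOutFileServerRole -Name SOFS01
  New-Item -Path C:\ClusterStorage\Volume1\Share01 -ItemType Directory
  New-SmbShare -Name Share01 -Path C:\ClusterStorage\Volume1\Share01 -FullAccess 'CONTOSO\Hyper-V Admins'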

So yeah, it's relatively easy to build an S2D cluster, but some steps in the guide need to be reviewed again. It contains mistakes…

 

Building a Storage Spaces Direct (S2D) cluster and failing miserably…

Windows Server 2016 has been available for a little while now. A well-hyped feature is Storage Spaces Direct (S2D). It allows organizations to create fast-performing, resilient and hyper-converged clusters that go hand in hand with Hyper-V. Based on the documentation available at https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-overview it even allows running Hyper-V and SOFS on the same hardware, without requiring expensive storage components such as SANs and the additional components needed to connect to them. This is a major improvement compared to Windows Server 2012 R2, which doesn't support this.

The following passage in the overview caught my attention:

Storage Hardware. From 2 to 16 servers with local-attached SATA, SAS, or NVMe drives. Each server must have at least 2 solid-state drives, and at least 4 additional drives. The SATA and SAS devices should be behind a host-bus adapter (HBA) and SAS expander. We strongly recommend the meticulously engineered and extensively validated platforms from our partners (coming soon).

Okay, that makes sense, since S2D eliminates the need for remotely available storage. Seems like it works with DAS only.

But what if I still have a bunch of iSCSI targets available and would like to use them for an S2D cluster? Maybe the volumes provided by a StorSimple device might work; after all, that's iSCSI too, right?

So I decided to try to build an S2D cluster (my god, this abbreviation is really close to something I don't want to get) and see if it works. I used the following guide as a reference: https://technet.microsoft.com/en-us/windows-server-docs/compute/remote-desktop-services/rds-storage-spaces-direct-deployment

Since I don’t have any hardware available I decided to build the cluster in my Azure environment.

So here's what I did (a sketch of the target setup follows the list):

  • I built an Azure VM configured as an iSCSI target server that provides 4 disks, each 1000 GB in size
  • I built two Azure VMs which will be configured as cluster nodes and have these 4 disks available
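
Setting up the target side is straightforward with the iSCSI Target cmdlets. A minimal sketch of the 4-disk layout (the names, paths and IQNs are mine):

  # On the target VM: install the role and expose four 1000 GB virtual disks to both nodes
  Install-WindowsFeature -Name FS-iSCSITarget-Server
  New-IscsiServerTarget -TargetName S2DTarget `
      -InitiatorIds 'IQN:iqn.1991-05.com.microsoft:node1.contoso.local', 'IQN:iqn.1991-05.com.microsoft:node2.contoso.local'
  1..4 | ForEach-Object {
      New-IscsiVirtualDisk -Path "F:\iSCSI\Disk$_.vhdx" -SizeBytes 1000GB
      Add-IscsiVirtualDiskTargetMapping -TargetName S2DTarget -Path "F:\iSCSI\Disk$_.vhdx"
  }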

The first thing I did was verify that the 4 disks can be pooled, using Get-PhysicalDisk | ? CanPool -eq $true. They can.
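
On the nodes, that check looks like this; note the BusType column, which will turn out to be the problem:

  # Both nodes report the four iSCSI disks as poolable
  Get-PhysicalDisk | Where-Object CanPool -eq $true |
      Select-Object FriendlyName, BusType, Size, CanPool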

I got to the point where I needed to enable S2D using the PowerShell cmdlet mentioned in the guide: Enable-ClusterS2D -CacheMode Disabled -AutoConfig:0 -SkipEligibilityChecks

The -CacheMode parameter is no longer part of the cmdlet so I took that part out and tried again:
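
  # The guide's command, minus the retired -CacheMode parameter
  Enable-ClusterS2D -AutoConfig:0 -SkipEligibilityChecks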

[Image: Enable-ClusterS2D failing on the iSCSI disks]

Bummer…

This error confirms that locally attached storage is required for S2D, so this is a dead end. iSCSI disks are not supported.

I was still able to build the cluster, assign the Storage Pool to the cluster itself instead of to a node, and create a CSV, but no S2D…

 

Posted on 26/10/2016 in Windows Server

 

Microsoft Azure: ONE feature I REALLY need…

It's been a while since I posted my previous blog post. The main reason I didn't post anything for a while is that I was very busy, both personally and professionally. I've been helping customers adopt the Public Cloud more frequently, and I must admit it's a lot of fun.

During these conversations I focus a lot on architecting Azure solutions (especially IaaS). Many times I get a feedback question that goes something like this:

"Yeah, it's very nice, this Microsoft Azure and all that other fancy stuff, but how much does it cost?"

Quickly followed by:

“Why is Microsoft Azure so expensive?”

Providing an answer to the first question is quite challenging, because Microsoft only provides the Azure Pricing Calculator (available at https://azure.microsoft.com/en-us/pricing/calculator/). It allows me to provide an estimate of how much it will cost an organisation to use Azure services. But it's still an estimate, and that's problematic because I cannot really use it for any TCO calculation. TCO is something a CFO looks at, and he or she wants the TCO to be as low as possible. All I could find was an old post available at https://azure.microsoft.com/en-us/blog/windows-azure-platform-tco-and-roi-calculator-now-available-online-and-offline/, but the tools are no longer there.

I need a total overview in order to provide an honest and accurate calculation, since most organisations want to mirror their on-premises costs to Microsoft Azure's. Here's a, most certainly incomplete, list of costs IT organizations incur:

  • Hardware purchasing
  • Licensing
  • Labour
  • Housing
  • Energy
  • 3rd party Support Plans

The fun part is that many organizations have no idea which costs they have, and if they do, they rarely take them into the equation. This behaviour automatically causes the second question to be asked. I'd like to see Microsoft deliver a tool that allows me to fill in these variables. Microsoft's biggest competitor, AWS, has such a tool.

Sounds like quite a rant to Microsoft, right?

Well, what really works in their defence is that the Azure Pricing Calculator does help organizations come up with an estimate. Still, some common sense is required when using Azure services. Things that need to be taken into consideration are:

  • Uptime: if a service is not needed at given times, turn it off and only pay for what is actively used
  • Automation: when those times are predictable, e.g. office hours, schedule the start and stop activities using Automation (see the runbook sketch after this list)
  • Workload: if your workload demand fluctuates strongly, you don't want to buy the hardware required to facilitate a few peaks
  • Evolution: do you really need to build a VM with IIS when the web application can run on an Azure Web App? It makes sense to evolve on-premises or IaaS services into PaaS services and no longer be bothered with managing the fabric layer, or even an operating system and/or application
  • Evolution part 2: consider replacing (legacy) applications with SaaS services so you don't have to manage those either
  • Initial investments: no initial investments are required when using Azure cloud services. You don't need to have a budget ready to buy hardware. Think about the shorter 'time to market'
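
To illustrate the Automation point: a minimal runbook sketch that deallocates all VMs in a resource group at the end of the office day (the resource group name is hypothetical; the AzureRM modules and a Run As account are assumed). A Start-AzureRmVM twin scheduled in the morning completes the picture:

  # Authenticate with the Automation Run As account
  $conn = Get-AutomationConnection -Name 'AzureRunAsConnection'
  Add-AzureRmAccount -ServicePrincipal -TenantId $conn.TenantId `
      -ApplicationId $conn.ApplicationId -CertificateThumbprint $conn.CertificateThumbprint

  # Stop (deallocate) every VM in the resource group so compute billing stops
  Get-AzureRmVM -ResourceGroupName 'rg-workloads' | ForEach-Object {
      Stop-AzureRmVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -Force
  }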

If you look at it like this, adopting cloud services may not be so expensive at all.

Additionally, looking at costs alone can create tunnel vision. Many times a small increase in costs greatly increases the benefits of adopting Azure cloud services, and I'd certainly recommend it in most cases. The only case where I wouldn't recommend it is a workload with almost no fluctuations.

Nevertheless, it would be nice if Microsoft would provide such a tool, or if someone could tell me where to find it if it already exists 🙂


Posted on 26/10/2016 in Azure, Cloud, Public Cloud, Rant
