
Category Archives: Private Cloud

My first Azure Stack TP2 POC deployment ending in disaster…

Today I had the opportunity to attempt my first Azure Stack TP2 POC deployment. Having the DataON CiB-9224 available allowed me to have a go at deploying an Azure Stack TP2 POC environment. I was able to do this after finishing some testing of Windows Server 2016 on the platform. The results of those tests are available at https://mwesterink.wordpress.com/2017/01/19/case-study-running-windows-server-2016-on-a-dataon-cib/

Before I started testing I reviewed the hardware requirements which are available at https://docs.microsoft.com/nl-nl/azure/azure-stack/azure-stack-deploy

Unfortunately, a small part made me wonder if I would actually succeed in deploying Azure Stack. Here’s a quote of the worrying part:

Data disk drive configuration: All data drives must be of the same type (all SAS or all SATA) and capacity. If SAS disk drives are used, the disk drives must be attached via a single path (no MPIO, multi-path support is provided).

Damn, again a challenge with MPIO. Such a shame since I meet all other hardware requirements.

So I decided to have a go anyway and figure out why MPIO is not supported when deploying Azure Stack TP2. I followed the instructions at https://docs.microsoft.com/nl-nl/azure/azure-stack/azure-stack-run-powershell-script and waited to see what happens…
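For reference, the TP2 POC deployment comes down to booting the host from the CloudBuilder image and running the deployment script. The snippet below is only a rough sketch from memory and not an authoritative command line; the exact script location and parameters (for example the Azure AD versus AD FS options) are in the documentation linked above.

    # Rough sketch of kicking off the TP2 POC deployment (script path and
    # parameters are assumptions; check the linked documentation).
    $adminPass = Read-Host -AsSecureString -Prompt "POC administrator password"
    Set-Location C:\CloudDeployment\Configuration
    .\InstallAzureStackPOC.ps1 -AdminPassword $adminPass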

I used a single node of the CiB-9224 with only four 400 GB SSD disks. I turned the other node off and disabled all unused NICs.

After a while I decided to check its progress and noticed that nothing was happening at a specific step (there was an hour between the latest log entry and the time I went to check). Here's a screenshot of where the deployment was 'stuck':

stuck_at_s2d

It seems the script is trying to enable Storage Spaces Direct (S2D). Knowing that S2D is not supported with MPIO, I terminated the deployment and wiped all data because I knew it was going to be unsuccessful. At least I know why.
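For anyone attempting the same, a quick up-front check of whether the data disks look S2D-eligible (poolable, single path) could look like the sketch below; the MPIO commands assume the Multipath-IO feature is installed.

    # List the data disks with bus type, media type and poolability.
    Get-PhysicalDisk | Select-Object FriendlyName, BusType, MediaType, Size, CanPool

    # Inspect MPIO (only available when the Multipath-IO feature is installed).
    Get-MPIOAvailableHW        # hardware IDs known to MPIO
    mpclaim.exe -s -d          # disks currently claimed by MPIO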

I didn't meet all hardware requirements after all. Fortunately it gave me some insight into how to deploy Azure Stack, so when I do have hardware that meets the requirements, at least I know what to do…

Looking at the requirements again, it’s obvious that the recommended way to go is with single channel JBOD.

 

 

 

Building a Storage Spaces Direct (S2D) cluster and succeed…

As described in my previous post, I failed miserably building an S2D cluster. Fortunately, it was just a small matter of reading this whitepaper properly, which states that only local storage can be used. We all know iSCSI storage is not locally attached, so it makes perfect sense that it doesn't work. But at least I tested it and verified…

OK, so knowing that S2D works with DAS storage only, it is time to test and verify whether it's difficult to build an S2D cluster.

I'm going to build the cluster using this guide. I use two FS1 Azure VMs and attach one P10 disk to each node.
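Before enabling S2D, the guide has you validate and create the cluster itself. As a minimal sketch (the node and cluster names below are made up):

    # Validate the nodes and create the cluster without adding storage yet.
    Test-Cluster -Node S2D-N1, S2D-N2 -Include "Storage Spaces Direct", Inventory, Network, "System Configuration"
    New-Cluster -Name S2DCLU -Node S2D-N1, S2D-N2 -NoStorage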

So I follow the steps to build the cluster.

The first step is to enable S2D, which works fine.

s2d-with-das

NOTE: as in my previous post, the CacheMode parameter is no longer there. Since it is still mentioned in the guide, this may be a bit confusing to read.
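In other words, enabling S2D is now simply the cmdlet without the CacheMode parameter:

    # Enable Storage Spaces Direct on the cluster (run on one of the nodes).
    Enable-ClusterStorageSpacesDirect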

The next step is creating a Storage Pool for S2D.

s2d-storage-pool-2-disk-fail

Hmm, that's odd. Apparently two disks are insufficient. So let's add two more, one on each node, resulting in four disks.

s2d-storage-4-disk-success
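For reference, the pool creation step looks roughly like this; the pool name is my own and the guide's exact parameters may differ:

    # Create the S2D storage pool from all poolable disks in the cluster.
    New-StoragePool -StorageSubSystemFriendlyName "*Cluster*" -FriendlyName "S2DPool" `
        -PhysicalDisks (Get-PhysicalDisk -CanPool $true)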

OK, so I can continue by building an S2D cluster disk of 250 GB.

s2d-virtualdisk
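Creating the 250 GB cluster disk can be done in one go with New-Volume; the names and file system below are my assumptions:

    # Create a 250 GB CSV volume on the S2D pool.
    New-Volume -StoragePoolFriendlyName "S2DPool" -FriendlyName "S2DDisk01" `
        -FileSystem CSVFS_ReFS -Size 250GB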

The final step is creating a share according to the guide.

smb-share-fail

Hmmm, this fails too…

Well, I was able to create the share using the Failover Clustering console by configuring the role as a SOFS and providing a 'Quick' file share.
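The PowerShell equivalent of what I ended up doing in the console looks roughly like this (the role, folder and group names are made up):

    # Add the Scale-Out File Server role and create a share on the CSV.
    Add-ClusterScaleOutFileServerRole -Name SOFS01
    New-Item -Path C:\ClusterStorage\Volume1\VMs -ItemType Directory
    New-SmbShare -Name VMs -Path C:\ClusterStorage\Volume1\VMs -FullAccess "CONTOSO\Hyper-V-Admins"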

So yeah, it's relatively easy to build an S2D cluster, but some steps in the guide need to be reviewed again. It contains mistakes…

 

Looking back at 2015…

So, the year 2015 is almost at its end. While I write this, I am already in the second week of my two-week time off. And boy, I really needed this two-week break.

2015 was an extremely busy year for me, and I can actually cut the year in half.

In the first half, I was still busy participating in a project where I designed and deployed System Center 2012 R2 Configuration Manager. I also built a stand-alone image building environment running MDT 2013. Unfortunately, the project took way longer than expected due to the customer being unable to take ownership and start administering it themselves. Eventually I decided to walk away after the contractual end date of my involvement, despite the fact that the project wasn't finished yet. The longer it took, the more frustrating the project became for me, so the decision to walk away was eventually the right one.

This takes me to the second half, in which I saw a dramatic shift in my job: I did only one Configuration Manager design and deployment in the second half of 2015. I started to extend my skill set in Enterprise Client Management with Microsoft Intune and Microsoft's public cloud platform: Azure.

I also started to deliver more workshops, master classes and training sessions. This is something I really like to do and I want to thank those who made it possible for me. It allowed me to renew my Microsoft Certified Trainer certification.

Fortunately, the frustrations of the first half provided some learning moments, which pushed me to become a more complete consultant. So my coworker arranged a two-day training session for me called "Professional Recommending" (which may be a poor translation of "Professioneel Adviseren" in Dutch), provided by Yearth. This is by far the most important training I have received in my career, and it started to pay off pretty quickly through more positive feedback from customers. This training made me a more complete consultant.

I was also happy to do the presentation workshop with Monique Kerssens and Jinxiu Hu from Niqué Consultancy BV at ExpertsLive 2015, and to receive the feedback that my presentation skills have developed greatly. To quote them: "you're standing like a house" (a Dutch idiom for being rock solid).

The icing on the cake came at the end of this year when I was asked to review the DataON CiB-9224 platform. You can read the review in my previous post.

So, I experienced some highs and lows this year. Fortunately, the highs came at the second half.

I look forward to 2016, but that’s for another post…

 

 

Reviewing the DataON Cluster-in-a-box 9224 (CiB-9224 V12) platform

Recently the company I work for became a partner in deploying DataON platform solutions, together with the Dutch distributor of DataON. The distributor has the knowledge and experience to distribute hardware, but was looking for a partner to deploy the solutions in a way that meets customers' needs. I had the honor of reviewing one of DataON's solutions provided by the distributor: the CiB-9224 V12.

DNS-9220 Front View

Before I got started I checked the relevant information on DataON’s website which is available at http://dataonstorage.com/cluster-in-a-box/cib-9224-v12-2u-24-bay-12g-sas-cluster-in-a-box.html

Here are a few features that I consider relevant:

  • You have a two-node cluster in a single 2U enclosure;
  • A two-tier storage deployment is available; only JBOD is available (no hardware RAID) to both nodes;
  • The solution can be ‘stacked’ with either another CiB and/or DNS JBOD solution;
  • The components used result in a very simple and easy to use setup, no extensive hardware knowledge is required;
  • DataON delivers OOBE guides to get you started.

Overall DataON delivers a no-nonsense solution. Since I am an advocate of a no-nonsense approach it is something I really like.

After checking it all I conclude that this platform can be used in two ways:

  • Scale Out File Server (SOFS) cluster providing one or more SMB 3.0 shares;
  • A two-node Hyper-V cluster.

Specific scenarios are available at DataON’s website mentioned earlier.

For my review I decided to build a two-node Hyper-V cluster. After preparing a small infrastructure (DC, DNS, DHCP and networking) I was able to get going. I decided to follow the OOBE guide as much as possible. In less than an hour, I had a fully operational two-node Hyper-V cluster. I noticed a few things during deployment:

  • Some steps in the guide are not completely in line with deploying the solution. I was able to create a Storage Space with Data Deduplication enabled, while the guide doesn't mention Data Deduplication. However, I greatly welcome having Data Deduplication enabled since it generates significant savings when Virtual Machines are stored on the deduplicated volume;
  • The Storage Space is very fast, deploying Virtual Machines doesn’t take much time at all;
  • I like the built-in Mellanox ConnectX®-3 Pro EN Single 10GbE port used for Cluster Heartbeat and Live Migration. After configuring the cluster to use this NIC only for Live Migration I was very happy with its Live Migration performance. It worked like a charm;
  • I managed the cluster using System Center 2016 Virtual Machine Manager Technical Preview and System Center 2016 Operations Manager Technical Preview. After deploying the required agents I was able to manage the cluster completely by Virtual Machine Manager. Dynamic Optimization and PRO Tips became available. After setting Dynamic Optimization to very aggressive settings I could see Virtual Machines dancing around on both nodes without negatively affecting the Virtual Machines themselves.

The next step was trying to stress test the platform. I decided to deploy 150 Virtual Machines using a template. I found a nice PowerShell script that would do the work for me at http://blogs.technet.com/b/virtual-mite/archive/2014/03/04/deploying-multiple-vm-39-s-from-template-in-vmm.aspx. During this deployment I noticed that the limited network resources (I had a 1 Gbit/sec switch available, no fiber) significantly slowed down the deployment, and I was also overcommitting the cluster (memory resources prevented me from running all these Virtual Machines at once). I had no intention of running all these machines after deploying them, but it gave me some good insights into the platform's capabilities. To me, the test scenario used is not optimal and I expect better performance when 10 Gbit/sec SFP connections are used. Nevertheless, the platform successfully deployed the 150 Virtual Machines.
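The linked script does the heavy lifting; the minimal sketch below only shows the general VMM pattern it builds on (the template, host group and naming are assumptions, and placement tuning and error handling are left out):

    # Sketch: deploy VMs from a VMM template in a loop.
    $template = Get-SCVMTemplate -Name "W2012R2-Template"
    $vmHosts  = Get-SCVMHost -VMHostGroup (Get-SCVMHostGroup -Name "CiB-9224")
    1..150 | ForEach-Object {
        $name   = "TestVM{0:D3}" -f $_
        $config = New-SCVMConfiguration -VMTemplate $template -Name $name
        Set-SCVMConfiguration -VMConfiguration $config -VMHost ($vmHosts | Get-Random)
        New-SCVirtualMachine -Name $name -VMConfiguration $config
    }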

After deploying the Virtual Machines I was able to monitor Data Deduplication (I used the default settings). The deduplication savings made me discover that basically all Virtual Machines were stored on the fast tier alone. This impressed me the most. It would make this solution extremely powerful for a VDI deployment, especially when stacked with one or more of these babies.
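Monitoring the savings is straightforward with the deduplication cmdlets, for example:

    # Check deduplication results on the deduplicated volume(s).
    Get-DedupStatus | Format-Table Volume, OptimizedFilesCount, SavedSpace, FreeSpace
    Get-DedupVolume | Format-Table Volume, Enabled, SavingsRate, SavedSpace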

After finishing my testing I can definitely recommend this platform. After finding out its price I strongly believe that the DataON solution is serious bang for your buck. It makes the Return on Investment (ROI) very short and easy to manage. And all that in just a 2U enclosure…

All the requirements for the future are also there for when Windows Server 2016 is released. I discussed my findings with DataON as well, and there are additional test scenarios to investigate.

Hopefully I can test it with Nano Server and/or Hyper-V containers, but that is something for 2016…

 

 

 

A personal long-term vision about the IT landscape in the future…

We are a few months into 2014 now. In those few months quite a lot of things happened: a new job with new challenges, Windows XP end of life, a new Microsoft CEO and new insights into things that are currently happening in the IT world. To me, 2014 is the year that will be the beginning of drastic changes in the IT landscape to come, and this train is most likely not going to stop for a while. I wrote this blog to put down the things that are on my mind; it has become a bit overwhelming, and writing it down allows me to empty my head a bit 😉

Before 2014, many organizations were somewhat reluctant to embrace the possibilities that Cloud technologies introduce to their IT environment. These technologies are not available to IT organizations that have legal and/or political constraints preventing them from introducing Cloud technologies. Here in NL, this is especially true for government organizations, which are bound by laws stating that government data can never leave the country. They may, however, consider building Private Clouds to keep everything within the country. This blog also takes Public Cloud technologies into consideration. I have to admit, though, that I'm somewhat biased since I work with Microsoft technology only, so I actually say Azure instead of Public Cloud.

The need to have an on-premises IT infrastructure will slowly become smaller. I don’t expect that organizations will decommission their servers immediately and stick with Cloud technology only but I expect infrastructures to become smaller.

Here are some good examples of changes in the IT infrastructure:

  • Microsoft Exchange environments being replaced by Office 365
  • No fixed workplace anymore but the ability to work anywhere with any supported device
  • Servers with low resource utilization being moved to Azure
  • On-premises backups being replaced by backup to the Cloud; the need to mess around with tapes will go away too

Eventually, I expect on-premises environments to disappear as well and all management that is required for that will go with it. Maybe large datacenters and data rooms require some sort of management to keep everything going but smaller organizations who don’t have a large datacenter should not consider building one right now since a lot of Cloud technologies are available for them as well. They should skip the expensive investments to buy new hardware and transfer everything to the cloud when possible.

Here are some thoughts about the consequences for this long term change:

  • The traditional system administrator will slowly disappear
  • Traditional office buildings will become much smaller because people don't really need to be there to do their job, maybe for meetings and socializing purposes only (in NL, this might be the final nail in the office building market's coffin)
  • Because all company data is in the cloud, it becomes much harder to compromise this data through unauthorized access when a device is stolen or lost
  • Investments in hardware will be significantly different
  • End users will choose their preferred device, they will receive a spending budget instead of a company policy defined device

So what does this mean for me?

Well, here’s just some examples:

  • I don't need to bother anymore with making sure company data is safe, since nothing is stored on my local disk. I feel pretty liberated not having to worry anymore that my disk might break down and cause data loss
  • It’s all about letting go, so I should not hold on anymore trying to keep things on-premise
  • I should forget about certain concepts which become somewhat obsolete
  • I really need to leave my comfort zone

I have to admit though, it will provide me with some fantastic challenges in helping customers introduce Cloud technologies so they can be more focused on their core business and generate more revenue…

It’ll be a lot of fun.

 
 

OpsMgr 2012 R2: first impressions…

Recently I was able to make an attempt to install OpsMgr 2012 R2 in a lab environment. I was particularly interested in whether deployment and initial management were significantly different compared to its predecessors. To be short: they aren't…

I’ve decided to limit my testing by importing the Active Directory MP and a few network devices using the Xian SNMP Simulator from Jalasoft. Kevin Holman’s blog post on how to use this Simulator still works in OpsMgr 2012 R2, the post is available at the following website:

http://blogs.technet.com/b/kevinholman/archive/2012/02/17/test-demo-opsmgr-2012-network-monitoring-with-jalasoft-s-network-device-simulator.aspx
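Importing the management pack files can also be done from the Operations Manager Shell; a small sketch, assuming the files were downloaded to a local folder:

    # Import all sealed management pack files from a local folder.
    Get-ChildItem "C:\MPs" -Filter *.mp |
        ForEach-Object { Import-SCOMManagementPack -Fullname $_.FullName }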

A good place to get some more first impressions is checking what’s new in OpsMgr 2012 R2. You can find this overview at the following website:

http://technet.microsoft.com/en-US/library/dn249700.aspx

Two new features in particular caught my attention:

  • A new Agent which completely replaces the old one;
  • Fabric Monitoring.

The integration between OpsMgr and VMM becomes much closer with Fabric Monitoring. I have to admit this is something I really need to investigate when I have time and resources.

Personally, I am convinced that VMM is the starting place when building pristine Private Cloud environments based on Hyper-V. There's room for debate about whether Bare-Metal Deployment is something you need. MDT 2013 can do some very nice things on that front as well; it does require some manual labor when not using Bare-Metal Deployment, but it allows you to deploy much more than just a bunch of Hyper-V hosts.

I have to admit that this is something that really caught my attention…

I will post about this in a future blog once I have figured it all out.

Feel free to share your findings if you have already had a look…

 

Using Microsoft Deployment Toolkit to start building a pristine infrastructure

Building Private Clouds is becoming more and more prominent in today’s IT world. But what if you have to start from scratch?

Imagine yourself in the situation where you need to build a Private Cloud infrastructure, but you have to start from scratch. OK, so you have purchased all the hardware needed to build the infrastructure, but nothing else. I guess you want something that will get you started quickly.

Microsoft Deployment Toolkit allows you to start building quickly. You can easily build a machine (just a workgroup machine), install roles such as WDS, WSUS, DHCP (if not available) and DNS (nice to have), and install and configure MDT. This allows you to quickly create some sort of automation for deploying the base foundation of your Private Cloud.
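Standing up those supporting roles on a single lab server is quick; a sketch for Windows Server 2012 (MDT itself and the Windows ADK are separate downloads with their own installers):

    # Install the supporting roles; management tools included for convenience.
    Install-WindowsFeature WDS, UpdateServices, DHCP, DNS -IncludeManagementTools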

Even though I integrate MDT in almost every Configuration Manager 2012 SP1 deployment, I had never used it in a stand-alone setup. Since I had some time, I decided to have a look. What surprised me a lot is how many people have written blogs about MDT. I remember attending a session at MMS 2013 where MDT was explained extensively; the session was hosted by Mikael Nyström and Johan Arwidmark and I considered it very valuable (and it was a lot of fun too)…

So, with all the stuff available on the Internet regarding MDT, I got started…

I used a machine that was gathering dust but is still able to run Windows Server 2012 rather well. It has enough storage to store a bunch of Windows images and all the related stuff I might want to deploy.

Once the basic stuff is there, I can remove that machine from the infrastructure. It makes me think that a VM on my laptop could do the same thing; food for thought…

I used MDT 2012 Update 1 as a reference because it is the current version; I went to http://www.microsoft.com/mdt to download the installer. I am aware that the release of MDT 2013 is pending (at the time of writing), but I expect I can reuse everything already used in MDT 2012 Update 1.

MDT comes with quite a bunch of scripts that should get you started pretty quickly. You can also add your own scripts, and you can even install some applications if you like. It might save you from using standard Windows media and doing everything manually…

Oh, and don't forget: you can even build a .vhd that you can use to get started right away with deploying VMs once your core infrastructure is up and running.

 

 

 
 