
Migration to Azure assessment: a politically incorrect view…

Migrations to Microsoft Azure are becoming more and more common as organizations start to understand the added value that Microsoft Azure (and Public Cloud in general) can offer. However, starting to understand and really understanding are two different things, which creates some challenges when starting the journey. Fortunately, guys like myself are here to help organizations do this journey the ‘right’ way.

Typical reasons to migrate to Microsoft Azure include the following (but are not limited to):

  • Modernization of IT;
  • Hardware used has reached end-of-life;
  • Data-room or data-center closure;
  • Tired of doing/managing this stuff yourself;
  • Evolving the IT organization from an administrative department to a directive one. This is more or less a goal and not an incentive.

Before starting the assessment, many organizations fall into a few potential pitfalls that would partly or completely defeat the potential that Microsoft Azure can offer:

  • Ignoring your current processes and failing to evolve them for the new world (“We would like to keep working the way we’re used to”);
  • Applying on-premises architecture behavior. For example, using network appliances, concepts like VLANs and their tagging, proxies, or patching with WSUS. For God’s sake, Azure has some really neat PaaS alternatives that can do it all for you, including some nice reporting features;
  • Looking for ‘best practices’. In my opinion, there’s no such thing as a ‘best practice’. However, we do have ‘recommended practices’ that can be applied to a particular scenario;
  • Doing everything overnight or taking a ‘big bang’ approach. Trust me, it won’t work and you’ll end up in a world of hurt. Start small and take small steps;
  • Thinking Microsoft will do everything for you. No, they won’t, according to the Shared Responsibility model for Cloud Services;
  • Looking at infrastructure only. No, you may need to look at applications as well. I get chills down my spine when I’m asked to migrate an Exchange environment to Azure (seriously?);
  • Thinking of Azure as just another data center. Yes, we will do ‘lift and shift’, pick up our junk, move it to Azure and expect the same results;
  • Looking at cost alone and ignoring the benefits. If you fall into this pitfall then Azure will definitely be more expensive, especially when you don’t reconsider your approach. But that’s not Microsoft’s fault.

Based on my experience at my current and previous jobs, most organizations are in a bit of a hurry, so the option most often chosen to migrate is ‘lift and shift’. Doing a SWOT analysis on this approach gives me the following.

Strengths (may not be limited to):

  • Granular per-system movement of functionality from current environment to Microsoft Azure VMs;
  • Microsoft-only approach, no dependencies;
  • Secure communications during replication;
  • Test scenarios prior to migrating;
  • Fast and easy to plan migration;
  • Very minimal downtime;
  • Easy rollback;
  • Easy decommissioning of existing resources.

Weaknesses (may not be limited to):

  • Skills needed for Microsoft Azure Site Recovery Services (ASR);
  • Machine being migrated may require downtime to verify actual state;
  • On-premises infrastructure requirements for converting VMware virtual machines;
  • Azure virtual machine limitations may apply (e.g. maximum disk size);
  • Administrative effort may be required to verify sizing;
  • Possible data loss during rollback when a large time frame is in place.

Opportunities (may not be limited to):

  • Identify deprecated machines, do not migrate them;
  • Potential short path from Azure IaaS to Azure PaaS;
  • Administrative effort can be re-evaluated after migration.

Threats (may not be limited to):

  • Systems running Windows Server 2008 R2 will reach End of Support on January 14, 2020 (extended support available after migration);
  • Current processes and resource allocation;
  • No urgency to move from costly IaaS deployment to optimized PaaS deployment within Azure;
  • Additional services are still required after migration to manage servers.

So, once all that is out of the way I can start with the assessment.

For the sake of this post, I want to migrate my Image Building environment based on the setup described at https://docs.microsoft.com/en-us/windows/deployment/deploy-windows-mdt/create-a-windows-10-reference-image. Here’s an overview of my setup:

  • A single Hyper-V host;
  • A Domain Controller VM (Gen2) with 1 CPU, 4 GB RAM and a 127 GB OS disk named DC01;
  • A MDT VM (Gen2) with 1 CPU, 4 GB RAM, a 127 GB OS disk and a 512 GB data disk named IB01.

The environment is isolated, so there is no incoming or outgoing network traffic.

Telemetry data is not available.

 

Let’s do the assessment the WRONG way:

OK, so we need two servers. Let’s jump to the Azure Pricing Calculator available at https://azure.microsoft.com/en-us/pricing/calculator/ and get some numbers.

For DC01:

  • VM size: F2S at € 126,82
  • OS Disk: P10 at € 18,28

For IB01:

  • VM size: F2S at € 126,82
  • OS Disk: P10 at € 18,28
  • Data disk: P20 at € 67,92

If we add up these numbers, the estimated cost for this environment would be € 358,12 per month, or € 4297,44 per year. What does this number tell me? Absolutely nothing, except maybe that it’s a bit on the expensive side. But this is what happens when no optimizations are considered.
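As a side note, the sizes that go into such a manual estimate can at least be cross-checked against what is actually available in the target region. A minimal sketch using the Azure CLI, assuming West Europe as the target region:

# Check whether the F2s size is offered in the target region
az vm list-skus --location westeurope --size Standard_F2s --output table

# List all VM sizes with their cores and memory to compare against the current VMs
az vm list-sizes --location westeurope --output table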

The result is poor adoption of Microsoft Azure, which makes customers unhappy and puts the blame on Microsoft and their partners.

 

Now let’s do the assessment the CORRECT way:

Instead of doing the assessment ourselves, we can use tools. Microsoft allows organizations to run their assessment with Azure Migrate, which will do a big part of the assessment for us. And it’s free 🙂

What Azure Migrate can do for you is described at https://docs.microsoft.com/en-us/azure/migrate/migrate-services-overview

In order to run the assessment, certain steps need to be taken. They are described in the overview, so I don’t need to write them all down here. So what does it look like after running it for a while?

Let’s go to the Azure Migrate workspace first.

Let’s open the Assessments pane.

We see a single assessment, let’s open it.

So, we have an overview here. It displays the Server Readiness and the Monthly Cost Estimate, that’s pretty cool. It looks like my assessment generated some warnings so let’s take a look at that first.

Well, that’s nice. The assessment recommends a cheaper VM size and cheaper disk type than estimated in the ‘wrong’ scenario. Let’s have a look at IB01 and get some details.

Here’s the upper half. Despite the warning the VM is eligible for ‘lift and shift’.

And here’s the lower half. Here we can see which disks are identified and which disk types are recommended. Apparently, I don’t need Premium SSD storage, so I don’t need to bother paying for it.

Talking about cost, here’s the cost estimate for both machines.

Well, here you go. Nicely specified. Based on this assessment, the total cost would be € 157,23 per month, or € 1886,76 per year. Wow, less than half of the ‘wrong’ assessment. I may have room for even more savings when considering the B-series VMs as well.
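If the B-series is worth exploring, the available burstable sizes and their specs can be listed quickly from the Azure CLI; a minimal sketch, again assuming West Europe as the target region:

# Show only the B-series (burstable) sizes with their cores and memory
az vm list-sizes --location westeurope --query "[?contains(name, 'Standard_B')]" --output table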

Setting up Azure Migrate takes one or two days depending on availability. However, it is worth the effort and allows a much better discussion with organizations when they want to consider migrating to the cloud.

Keep in mind that ‘lift and shift’ may NOT be the best approach (it isn’t most of the time), so you may need to consider other options as well. However, this is a good place to start. It also helps keep me out of monasteries, because doing these ‘wrong’ estimates is a crap job, especially when large environments need to be assessed. It is a very time and energy consuming exercise and requires me to retreat to a monastery to do the job with as little distraction as possible. Since most of my customers are enterprise organizations, I can imagine you have a pretty good idea how crap this job can be.

Final thought: As with most things in life, I have no solution to combat bad behavior…


Posted by on 05/09/2019 in Azure, Cloud, Public Cloud, Rant

 

Let’s get lazy: Deploy a Hub and Spoke network topology using Azure CLI

Recently, I’ve been playing around a bit finding a way to deploy a simple Azure Hub and Spoke topology. It is based on an Azure reference architecture which is available here.

For a typical ‘hybrid’ scenario, this reference architecture works really well in most cases. No need to reinvent the wheel here. Additionally, you can also use the reference architecture with shared services, which is available here. This works great for hosting AD DS, but also for services like Azure AD Connect and/or AD FS. After all, having a federated identity setup already hosted in Azure makes a lot of sense, since the path to synchronization and federation is short 🙂

Let’s have a look at the reference architecture:

To keep it simple, I’ll focus on the network setup alone. This means the VMs and NSGs will not be part of the setup in this post.

In order to deploy the environment in one go, we need to deploy the resources in the following sequence:

  1. A resource group (if not existing)
  2. The VNets
  3. The subnets within the VNets
  4. The public IP for the VNet gateway
  5. The VNet gateway itself (this one takes quite some time)
  6. VNet peerings between the Hub and Spokes (we don’t create peerings between the Spokes)

After collecting all Azure CLI commands, the following commands will do the job:

az group create --name HUBSPOKECLI --location westeurope

az network vnet create --resource-group HUBSPOKECLI --name CLIHUB1 --address-prefixes 10.11.0.0/16 --dns-servers 10.11.1.4 10.11.1.5
az network vnet create --resource-group HUBSPOKECLI --name CLISPOKE1 --address-prefixes 10.12.0.0/16 --dns-servers 10.11.1.4 10.11.1.5
az network vnet create --resource-group HUBSPOKECLI --name CLISPOKE2 --address-prefixes 10.13.0.0/16 --dns-servers 10.11.1.4 10.11.1.5

az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLIHUB1 --name Management --address-prefix 10.11.1.0/24
az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLIHUB1 --name GatewaySubnet --address-prefix 10.11.254.0/27

az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLISPOKE1 --name Management --address-prefix 10.12.1.0/24
az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLISPOKE1 --name Workload --address-prefix 10.12.2.0/24

az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLISPOKE2 --name Management --address-prefix 10.13.1.0/24
az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLISPOKE2 --name Workload --address-prefix 10.13.2.0/24

az network public-ip create --resource-group HUBSPOKECLI --name HUBSPOKECLI --allocation-method Dynamic --dns-name hubspokecli
az network vnet-gateway create --resource-group HUBSPOKECLI --name HUBSPOKECLI --vnet CLIHUB1 --public-ip-address HUBSPOKECLI --gateway-type Vpn --vpn-type RouteBased --client-protocol SSTP --sku Standard

az network vnet peering create --resource-group HUBSPOKECLI --name HUB1toSPOKE1 --vnet-name CLIHUB1 --remote-vnet CLISPOKE1 --allow-forwarded-traffic --allow-vnet-access --allow-gateway-transit
az network vnet peering create --resource-group HUBSPOKECLI --name SPOKE1toHUB1 --vnet-name CLISPOKE1 --remote-vnet CLIHUB1 --allow-forwarded-traffic --allow-vnet-access --use-remote-gateways
az network vnet peering create --resource-group HUBSPOKECLI --name HUB1toSPOKE2 --vnet-name CLIHUB1 --remote-vnet CLISPOKE2 --allow-forwarded-traffic --allow-vnet-access --allow-gateway-transit
az network vnet peering create --resource-group HUBSPOKECLI --name SPOKE2toHUB1 --vnet-name CLISPOKE2 --remote-vnet CLIHUB1 --allow-forwarded-traffic --allow-vnet-access --use-remote-gateways

For these sample CLI commands, I made the following assumptions:

  • I will place DNS servers (preferably Active Directory integrated ones) in the Management subnet within the HUB VNet (CLIHUB1)
  • Names for resources can be changed accordingly
  • The peerings are created after the VNet gateway because I want to use the Gateway subnet to allow traffic to other VNets by enabling the ‘Allow Gateway Transit’/’Use Remote Gateways’ options between the peers

Creating the VNet gateway takes a lot of time; it may take between 30 and 45 minutes to have it deployed. All other resources are deployed within a few seconds.
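Once the gateway deployment finishes, a quick check can confirm that the gateway and the peerings came up as expected; a minimal sketch using the same resource names as above:

# Check the provisioning state of the VNet gateway
az network vnet-gateway show --resource-group HUBSPOKECLI --name HUBSPOKECLI --query provisioningState

# List the peerings on the hub and verify their peering state shows 'Connected'
az network vnet peering list --resource-group HUBSPOKECLI --vnet-name CLIHUB1 --output table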

The fun part of using the Azure CLI is that scripts can be created to run a set of CLI commands. These can be either bash or .cmd scripts; unfortunately, PowerShell is not supported. I use Visual Studio Code with the CLI extension to deploy everything in one go myself.
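For example, the commands above can be dropped into a small bash script; a minimal sketch (the RG and LOCATION variables are my own convention, not part of the listing above):

#!/bin/bash
# Hypothetical wrapper around the commands listed above
RG=HUBSPOKECLI
LOCATION=westeurope

az group create --name "$RG" --location "$LOCATION"
az network vnet create --resource-group "$RG" --name CLIHUB1 --address-prefixes 10.11.0.0/16 --dns-servers 10.11.1.4 10.11.1.5
# ...the remaining vnet, subnet, public-ip, vnet-gateway and peering commands follow the same pattern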

There you have it. This is all you need to deploy your Hub and Spoke network topology using Azure CLI commands.

Hope this helps!


Posted by on 24/05/2019 in Azure, Cloud, Public Cloud

 

Playing around with Azure Data Box Gateway: first impressions

Helping customers move their on-premises environment to Microsoft Azure brings quite some challenges regarding the large amounts of data that have been created over the years. It has become quite common for customers to ask themselves whether they still want to make heavy investments in hardware capacity (servers, network, storage), meaning they have to put a big bag of money on the table every few years (5 years is a common hardware life cycle). This strategy defeats any on-demand capacity requirements and makes it hard to adapt when requirements change. Especially in typical office environments, some data is accessed less over time but still needs to be stored on local storage, which needs to be managed and maintained.

A couple of years ago, a nice appliance was introduced to remedy this: StorSimple

While they’re great devices, they still have their troubles. Not to mention, these devices are reaching end-of-life before too long, as displayed here.

Over time, Microsoft has introduced Azure Data Box to allow large amounts of data to be transferred to Azure. This post describes my first impressions for a service I’ve been dying to see for quite some time: Azure Data Box Gateway. An overview of this service is available here.

I was particularly curious about Microsoft’s claim of how seamless it is to transfer data to Azure using this solution and how easily available it is. For the sake of testing this appliance, I decided to deploy an Azure VM that supports nested virtualization. I don’t have any hardware at home or a hardware lab at my employer, but my employer is kind enough to provide me an Azure subscription with a little more spending room than a typical MSDN subscription 🙂 I would not recommend using this setup for production scenarios; the Azure Data Box Gateway is designed to be deployed to an on-premises environment.

My test setup is pretty simple:

  • A Windows Server 2019 Smalldisk Azure VM, I used VM size Standard E8s v3 (8 vcpus, 64 GB memory)
  • I added a 4 TB Premium SSD Data disk to host two VMs
  • I used Thomas Maurer’s post to set up nested virtualization and configure the Hyper-V networking. The rest of his post is not relevant for this test
  • I deployed a Windows Server 2019 VM. Since I wasn’t really sure what to expect, I conveniently called it DC01. I guess the name pretty much gives away its role

After that I followed the tutorial to deploy a single Azure Data Box Gateway VM. Honestly, the tutorial is painfully simple, it’s almost an embarrassment. I guess someone with no IT skills should be able to reproduce the steps as well. After running steps 1 to 4, I got a configuration which looks like the following:

I added a single user for this purpose:

Here’s my configuration of a single share, I created a separate storage account as well:

And I configured Bandwidth Management. I decided to be a bit bold with the settings; for this test I am in Azure anyway, so who cares. In the real world, I would probably use different settings:

Finally, I connected the share to my VM with a simple net use command and started populating the share with some data. I started by putting some Linux ISO images there:

And you see them in the storage account as well 🙂 :

So yes, so far it’s pretty seamless. Seeing the files like that in the storage account triggered me to do another test: would the data be available in the share when it is uploaded to the storage account outside the Azure Data Box Gateway?

So I installed Azure Storage Explorer and uploaded data from there to the storage account. After uploading some junk, I could see it not only in the storage account but also in the share on the VM. I was impressed when I witnessed those results, which fully validate Microsoft’s statement that it’s seamless. I consider it an even more seamless solution than StorSimple at this time.
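The same check can also be done from the command line. A minimal sketch, assuming the share maps to a blob container and using hypothetical storage account and container names:

# List the blobs the gateway has uploaded to the (hypothetical) storage account and container
az storage blob list --account-name databoxgwstorage --container-name ibshare --auth-mode login --output table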

Finally, looking at the Azure Data Box Gateway limits mentioned here, here are some thoughts on how much this solution can scale:

  • No. of files per device: 100 million (roughly 25 million files for every 2 TB of disk space, with a maximum of 100 million);
  • No. of shares per device: 24;
  • No. of shares per Azure storage container: 1;
  • Maximum file size written to a share: 500 GB for a 2 TB virtual device. The maximum file size increases with the data disk size in the same ratio, until it reaches a maximum of 5 TB.

Knowing that a single storage account can host a maximum of 500 TB (and some regions allow 2 PB), this scaling can go through the roof, making this service exceptionally suitable for large, gradual migrations to Azure storage…


Thoughts on standard Image SKU vs.’Smalldisk’ Image SKU

For a long time, organizations using Windows VM instances in Microsoft Azure didn’t have options regarding the OS disk for the instance. The default size is 127 GB and this hasn’t changed. Quite a while ago, Microsoft announced Windows VM images with a smaller OS disk of only 32 GB, as announced at https://azure.microsoft.com/nl-nl/blog/new-smaller-windows-server-iaas-image/

Yes, I admit this may be old news, but I hadn’t given much thought to how to approach it when these Windows VM images became available, until recently…

More and more, I’m involved in providing ARM templates for my customers, and my main focus is on Azure IaaS deployments.

Together with Managed Disks, it has become pretty easy to determine sizing for Azure VM instances, and having both Image SKUs available provides options.

However, while I was creating these ARM templates, I noticed that I prefer to use the ‘Smalldisk’ Image SKUs over the standard ones, and the explanation for it is actually pretty simple.

For this post, I will use the following ARM template as a reference: https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-simple-windows

Looking at the “Properties” section of the Virtual Machine resource, you can see the relevant part of the OS Disk configuration:

"osDisk": {
    "createOption": "FromImage"
},

In this configuration, the default size will be used which should be great in most scenarios. If a different size is required, then the notation may look like this:

"osDisk": {
    "createOption": "FromImage",
    "diskSizeGB": "[variables('OSDiskSizeinGB')]"
},

You can specify the value either as a variable or a parameter to determine the size. In this example I use a variable and it must have a supported value for managed disks. In my case I used the following value:

"OSDiskSizeinGB": "64"

OK, so nothing new here so far. However, to maintain maximum flexibility, you only need to use the ‘Smalldisk’ Image SKU, which has the smallest possible size of 32 GB. From there, the only way is up.

To optimize Azure consumption by only paying for what you use and what you REALLY need, it makes sense for organizations to create some governance and policies to determine sizing for their Azure VM instances; not only for compute, but for storage as well. Managed Disks provide some guidance for that.

So for me, I’d focus on using the ‘Smalldisk’ Image SKU only and enlarge the OS disk when needed. It’s pretty easy to do by just adding one line in your ARM template for that VM, and an additional one for your variable…
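To check which ‘Smalldisk’ SKUs are actually published for a given region, the image SKUs can be listed with the Azure CLI; a minimal sketch (West Europe is just an example region):

# List all WindowsServer image SKUs, including the *-smalldisk variants
az vm image list-skus --location westeurope --publisher MicrosoftWindowsServer --offer WindowsServer --output table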

 

Here’s my set of variables I use to select the ‘Smalldisk’ Image SKU:

"ImagePublisher": "MicrosoftWindowsServer",
"ImageOffer": "WindowsServer",
"ImageSKU": "2019-Datacenter-smalldisk",
"ImageVersion": "latest",

And here’s the relevant part of the Image reference:

"imageReference": {
    "publisher": "[variables('ImagePublisher')]",
    "offer": "[variables('ImageOffer')]",
    "sku": "[variables('ImageSKU')]",
    "version": "[variables('ImageVersion')]"
},
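For comparison, the same combination (the smalldisk image plus an enlarged OS disk) can also be expressed with a single Azure CLI command instead of an ARM template; a minimal sketch with hypothetical resource group, VM and admin names:

# Create a VM from the 2019-Datacenter-smalldisk image and grow the OS disk to 64 GB
az vm create --resource-group MYRG --name VM01 --image MicrosoftWindowsServer:WindowsServer:2019-Datacenter-smalldisk:latest --admin-username azureadmin --os-disk-size-gb 64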
Hope this helps!


Looking back at 2018

The time to look back at 2018 has come a bit sooner than expected due to an enforced shutdown at my current employer. This is great so I don’t have to bother about it shortly before the year ends…

2018 is a year that introduced a lot of changes for me. Until October 2018 I worked at SCCT BV before making a switch to DXC Technology. Certain events I encountered this year justified this switch for me. The main reason is that I wanted to be a specialist again with a focus on Microsoft Azure (and I get Azure Stack for free). This means that I’ve decided to let go of my previous experience in Enterprise Client Management (ECM), and I will no longer work with System Center Configuration Manager, the rest of System Center, Hyper-V or Microsoft Intune anymore. So don’t expect any blog posts on those…

I was becoming too much of a generalist, while being expected to be a specialist at all these technologies at the same time. Basically, if you claim to be a specialist at everything, you become a specialist at nothing.

An interesting aspect I learned by making this switch is how an employer reacts to your resignation, especially if you’ve been working for them for quite some time (4.5 years). Apparently, not all employers handle it well, and I find that SCCT BV didn’t react to my resignation well. I find that quite a shame, unnecessary and quite immature. An employer’s behavior may have a serious impact on their reputation. After all, it takes years to build a reputation but just 5 minutes to lose it completely. It also gave me some insight into making sure how an organizational structure is set up prior to joining an employer in the future. But I hope that I don’t have to do that anymore…

Fortunately, I expect to find a lot of good opportunities within my new role at DXC Technology. The best thing I’ve found so far is that my work/life balance has become much better. It allows me to maintain my health much better than previously, and I already see results (I lost quite some weight and I need to lose some more). Another great thing is that I can work anywhere I want; DXC Technology facilitates working from home in a proper manner, which helps a lot to improve my performance. And I need to travel sometimes, which is nice too.

So hopefully I’ll have some stuff to blog about in 2019. It will most likely be Azure or Azure Stack related.

I wish everyone a prosperous 2019!!!


Posted by on 21/12/2018 in Opinion, Rant

 

Case study: Availability Sets vs. Availability Zones

One of the biggest challenges customers face is making sure a highly available solution survives a catastrophic failure at the fabric layer of Microsoft Azure: think of things like servers, storage, network devices, power and cooling. Not having to care about the fabric layer is one of the main reasons why organizations consider running their workloads in Azure in the first place.

However, Azure locations are not located in some magic castles that would make them invulnerable to catastrophic failures or other natural disasters. Of course, the magnitude of a possible disaster forces organizations to think about scenarios to safeguard (more or less) the availability of their workloads. After all, Microsoft and their customers have a shared responsibility for keeping the lot running.

Maintaining high availability within a single region provides two options (a short CLI sketch follows the list below):

  • Availability Sets: allow workloads to be spread over multiple hosts and racks, but they still remain in the same data center;
  • Availability Zones: allow workloads to be spread over multiple locations (data centers), so you automatically don’t care on which host the workload will run.
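From the deployment side, the difference boils down to a single parameter on the VM; a minimal sketch with hypothetical resource group, availability set and VM names:

# Availability Set: spread VMs over fault/update domains within one data center (assumes MYAVSET already exists)
az vm create --resource-group MYRG --name NODE1 --image Win2016Datacenter --availability-set MYAVSET

# Availability Zone: pin the VM to a specific zone within the region
az vm create --resource-group MYRG --name NODE2 --image Win2016Datacenter --zone 1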

The following picture displays the difference between possible failures and SLA percentage. Obviously, Availability Zones offer higher protection against failures. Region pairs are beyond the scope of this post…

The beauty of both scenarios is that the VNet required to connect an Azure VM is not bound to a single data center (a.k.a. an Availability Zone); it is stretched over the whole region.

So I thought, let’s try this out with a typical workload that requires a high level of availability and can sustain failure pretty well. My choice was to host a SQL Server failover cluster (no Always On Availability Group) with additional resiliency using Storage Spaces Direct. Using all these techniques to maintain uptime, how cool is that?

I used the following guides to deploy a two-node Windows Server 2016 cluster:

Actually, I built two SQL S2D clusters. Both clusters were exactly the same (two DS11 VMs, each with 2 P30 disks), except one was configured with an Availability Set and the other with an Availability Zone.

What makes the difference is the requirement for the Azure Load Balancer. You need an Azure Load Balancer for the cluster heartbeat to determine which node is active. Looking at the Azure Load Balancer overview, available at https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview, you can see that you need a Standard SKU when using Availability Zones. When using an Availability Set, a Basic SKU is sufficient. But that’s actually it when deploying a SQL cluster using S2D. However, since the Load Balancer is an internal one anyway, I’d recommend using the Standard SKU in both cases. From a pricing perspective, I don’t believe it would make much of a difference. If the penalties for downtime are much more severe, then I wouldn’t nitpick about this anyway.
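Creating such an internal Standard SKU load balancer for the cluster IP is a one-liner with the Azure CLI; a minimal sketch where the resource group, VNet, subnet and IP address are hypothetical:

# Internal Standard SKU load balancer with a static frontend IP for the cluster
az network lb create --resource-group MYRG --name SQLCLUSTER-ILB --sku Standard --vnet-name VNET1 --subnet Data --private-ip-address 10.0.1.10 --frontend-ip-name ClusterFrontEnd --backend-pool-name ClusterBackEnd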


Posted by on 20/09/2018 in Uncategorized

 

Manageability and responsibility for Cloud Services

Many organizations are facing challenges when moving their IT services to the Public Cloud. For the sake of this post I focus solely on Microsoft Azure, although I am aware that other Cloud Providers have a similar approach and models for it…

As we’re all aware, three categories of Cloud Services exist:

  • Infrastructure as a Service (IaaS);
  • Platform as a Service (PaaS);
  • Software as a Service (SaaS).

Each category has its own level of management: some elements are managed by the Cloud provider, the rest is managed by yourself. The amount of management differs per category, as displayed in the picture below.

As you can see, SaaS services are completely managed by the Cloud provider, which is great. A good approach is that, if a Line of Business (LoB) application can be replaced by a SaaS alternative, then it really makes sense to do so. Looking at IaaS and PaaS, you can see the amount of management done by the Cloud provider is higher with PaaS than with IaaS. This means the following recommendations can be made:

  • Replace/migrate existing applications to SaaS services. This will relieve the IT department of the daily tasks of managing them;
  • Consider using PaaS services as much as possible. This will also lower the administrative effort of managing cloud services by the IT department. Additionally, certain PaaS services allow developers to develop and deploy immediately to the PaaS service (e.g. Azure Web Apps), making them not dependent on an IT pro to facilitate the service.

However, less management doesn’t mean less responsibility. Despite having less to manage when using Cloud services, the organization is not suddenly free of responsibility. Microsoft released documentation regarding the shared responsibility between the customer and themselves. This guide is available at http://aka.ms/sharedresponsibility. From the guide, I took the following screenshot showing a diagram of the responsibilities.

 

As you can see, the customer still has some responsibility when using SaaS services. However, these models allow a customer to define a strategy when moving to the cloud…


Posted by on 05/09/2018 in Azure, Public Cloud
