
Author Archives: mwesterink

Let’s get lazy: Deploy a Hub and Spoke network topology using Azure CLI

Recently, I’ve been playing around a bit, looking for a way to deploy a simple Azure Hub and Spoke topology. It is based on an Azure reference architecture which is available here.

This reference architecture works really well for most typical ‘hybrid’ scenarios, so there’s no need to reinvent the wheel. Additionally, you can also use the variant of the reference architecture with shared services, which is available here. This works great for hosting AD DS, but also for services like Azure AD Connect and/or ADFS. After all, having a federated identity setup already hosted in Azure makes a lot of sense, since the path to synchronization and federation is short 🙂

Let’s have a look at the reference architecture:

To keep it simple, I’ll focus on the network setup alone. This means the VMs and NSGs will not be part of the setup in this post.

In order to deploy the environment in one go, we need to deploy the resources in the following sequence:

  1. A resource group (if not existing)
  2. The VNets
  3. The subnets within the VNets
  4. The public IP for the VNet gateway
  5. The VNet gateway itself (this one takes quite some time)
  6. VNet peerings between the Hub and Spokes (we don’t create peerings between the Spokes)

After collecting all the Azure CLI commands, the following commands will do the job:

az group create --name HUBSPOKECLI --location westeurope

az network vnet create --resource-group HUBSPOKECLI --name CLIHUB1 --address-prefixes 10.11.0.0/16 --dns-servers 10.11.1.4 10.11.1.5
az network vnet create --resource-group HUBSPOKECLI --name CLISPOKE1 --address-prefixes 10.12.0.0/16 --dns-servers 10.11.1.4 10.11.1.5
az network vnet create --resource-group HUBSPOKECLI --name CLISPOKE2 --address-prefixes 10.13.0.0/16 --dns-servers 10.11.1.4 10.11.1.5

az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLIHUB1 --name Management --address-prefix 10.11.1.0/24
az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLIHUB1 --name GatewaySubnet --address-prefix 10.11.254.0/27

az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLISPOKE1 --name Management --address-prefix 10.12.1.0/24
az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLISPOKE1 --name Workload --address-prefix 10.12.2.0/24

az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLISPOKE2 --name Management --address-prefix 10.13.1.0/24
az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLISPOKE2 --name Workload --address-prefix 10.13.2.0/24

az network public-ip create --resource-group HUBSPOKECLI --name HUBSPOKECLI --allocation-method dynamic --dns-name hubspokecli
az network vnet-gateway create --resource-group HUBSPOKECLI --name HUBSPOKECLI --vnet CLIHUB1 --public-ip-address HUBSPOKECLI --gateway-type vpn --vpn-type RouteBased --client-protocol SSTP --sku Standard

az network vnet peering create --resource-group HUBSPOKECLI --name HUB1toSPOKE1 --vnet-name CLIHUB1 --remote-vnet CLISPOKE1 --allow-forwarded-traffic --allow-vnet-access --allow-gateway-transit
az network vnet peering create --resource-group HUBSPOKECLI --name SPOKE1toHUB1 --vnet-name CLISPOKE1 --remote-vnet CLIHUB1 --allow-forwarded-traffic --allow-vnet-access --use-remote-gateways
az network vnet peering create --resource-group HUBSPOKECLI --name HUB1toSPOKE2 --vnet-name CLIHUB1 --remote-vnet CLISPOKE2 --allow-forwarded-traffic --allow-vnet-access --allow-gateway-transit
az network vnet peering create --resource-group HUBSPOKECLI --name SPOKE2toHUB1 --vnet-name CLISPOKE2 --remote-vnet CLIHUB1 --allow-forwarded-traffic --allow-vnet-access --use-remote-gateways

For these sample CLI commands, I made the following assumptions:

  • I will place DNS servers (preferably Active Directory integrated ones) in the Management subnet within the HUB VNet (CLIHUB1)
  • Names for resources can be changed accordingly
  • The peerings are created after the VNet gateway because I want to use the gateway subnet to allow traffic to other VNets, by enabling the ‘Allow Gateway Transit’/’Use Remote Gateways’ options between the peers

Creating the VNet gateway takes a lot of time: it may take between 30 and 45 minutes to have the VNet gateway deployed. All the other resources are deployed within a few seconds.
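Since the gateway is the long pole here, it can be handy to poll its provisioning state from a second shell instead of waiting blindly. A minimal sketch, using the resource names from the commands above:

az network vnet-gateway show --resource-group HUBSPOKECLI --name HUBSPOKECLI --query provisioningState --output tsv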

The fun part of using the Azure CLI is that scripts can be created to run a set of CLI commands; these can be either bash or .cmd scripts. Unfortunately, PowerShell is not supported. I use Visual Studio Code with its Azure CLI extension to deploy everything in one go myself.
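As an illustration, a bash wrapper could look something like the sketch below. The resource group variable and the peering loop are my own additions (so the generated peering names differ slightly from the one-liners above), and the elided middle section is simply the VNet, subnet, public IP and gateway commands from earlier:

#!/bin/bash
# Sketch: deploy the Hub and Spoke topology in one go.
# Assumes the Azure CLI is installed and 'az login' has been run.
RG=HUBSPOKECLI

az group create --name "$RG" --location westeurope

# ...VNet, subnet, public IP and VNet gateway commands from above go here...

# Peer the hub with each spoke in a loop instead of repeating the commands:
for SPOKE in CLISPOKE1 CLISPOKE2; do
  az network vnet peering create --resource-group "$RG" --name "HUB1to${SPOKE}" \
    --vnet-name CLIHUB1 --remote-vnet "$SPOKE" \
    --allow-forwarded-traffic --allow-vnet-access --allow-gateway-transit
  az network vnet peering create --resource-group "$RG" --name "${SPOKE}toHUB1" \
    --vnet-name "$SPOKE" --remote-vnet CLIHUB1 \
    --allow-forwarded-traffic --allow-vnet-access --use-remote-gateways
done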

There you have it. This is all you need to deploy your Hub and Spoke network topology using Azure CLI commands.

Hope this helps!

Posted by mwesterink on 24/05/2019 in Azure, Cloud, Public Cloud

 

Playing around with Azure Data Box Gateway: first impressions

Helping customers move their on-premises environment to Microsoft Azure brings quite some challenges regarding the large amounts of data they have created over the years. It has become quite common for customers to ask themselves whether they still want to make heavy investments in hardware capacity (servers, network, storage), which means putting a big bag of money on the table every few years (five years is a common hardware life cycle). This strategy defeats any on-demand capacity requirements and makes it hard to adapt when requirements change. Especially in typical office environments, some data is accessed less over time but still needs to be stored on local storage, which in turn needs to be managed and maintained.

A couple of years ago, a nice appliance was introduced to remedy this: StorSimple.

While they’re great devices, they still have their troubles. Not to mention, these devices are reaching end-of-life before too long, as displayed here.

Over time, Microsoft has introduced Azure Data Box to allow large amounts of data to be transferred to Azure. This post describes my first impressions of a service I’ve been dying to see for quite some time: Azure Data Box Gateway. An overview of this service is available here.

I was particularly curious about Microsoft’s claim of how seamless it is to transfer data to Azure using this solution, and how easily that data becomes available. For the sake of testing this appliance, I decided to deploy an Azure VM that supports nested virtualization. I don’t have any hardware at home or a hardware lab at my employer, but my employer is kind enough to provide me an Azure subscription with a bit more spending room than a typical MSDN subscription 🙂 I would not recommend using this setup for production scenarios: the Azure Data Box Gateway is designed to be deployed to an on-premises environment.

My test setup is pretty simple:

  • A Windows Server 2019 Smalldisk Azure VM; I used VM size Standard E8s v3 (8 vCPUs, 64 GB memory)
  • I added a 4 TB Premium SSD data disk to host two VMs
  • I used Thomas Maurer’s post to set up nested virtualization and configure the Hyper-V networking; the rest of his post is not relevant for this test
  • I deployed a Windows Server 2019 VM. Since I wasn’t really sure what to expect, I conveniently called it DC01. I guess the name pretty much gives away its role

After that, I followed the tutorial to deploy a single Azure Data Box Gateway VM. Honestly, the tutorial is painfully simple; it’s almost embarrassing. I guess someone with no IT skills would be able to reproduce the steps as well. After running steps 1 to 4, I got a configuration which looks like the following:

I added a single user for this purpose:

Here’s my configuration of a single share; I created a separate storage account for it as well:

And I configured bandwidth management. I decided to be a bit bold with the settings; for this test, I am in Azure anyway, so who cares. In the real world, I would probably use different settings:

Finally, I connected the share to my VM with a simple net use command and started populating the share with some data, beginning with some Linux ISO images:
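For reference, the mount itself is nothing more than the following; the device IP, share name and credentials are placeholders for my setup:

net use Z: \\<device-ip>\<share-name> /user:<share-user> <password>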

And you can see them in the storage account as well 🙂:

So, yes, so far it’s pretty seamless. Seeing the files like that in the storage account triggered me to do another test: would the data be available in the share when it is uploaded to the storage account outside the Azure Data Box Gateway?

So I installed Azure Storage Explorer and uploaded data from it to the storage account. Uploading some junk, I could see it not only in the storage account but also in the share on the VM. I was impressed when I witnessed those results, which fully validate Microsoft’s statement that it’s seamless. I consider it an even more seamless solution than StorSimple at this time.
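The same round trip can be scripted with the Azure CLI instead of Storage Explorer. A rough sketch, where the storage account, container and key are placeholders for my setup:

az storage blob upload --account-name <storageaccount> --account-key <key> --container-name <container> --name junk.bin --file ./junk.bin
az storage blob list --account-name <storageaccount> --account-key <key> --container-name <container> --output table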

Finally, looking at the Azure Data Box Gateway limits mentioned here, here are some thoughts on how far this solution can scale:

  • No. of files per device: 100 million (the limit is ~25 million files for every 2 TB of disk space, with a maximum of 100 million)
  • No. of shares per device: 24
  • No. of shares per Azure storage container: 1
  • Maximum file size written to a share: 500 GB for a 2-TB virtual device; the maximum file size increases with the data disk size in the same ratio until it reaches a maximum of 5 TB

Knowing that a single storage account can host a maximum of 500 TB (and some regions allow 2 PB), this scaling can go through the roof, making this service exceptionally suitable for large, gradual migrations to Azure storage…

Thoughts on the standard Image SKU vs. the ‘Smalldisk’ Image SKU

For a long time, organizations using Windows VM instances in Microsoft Azure didn’t have options regarding the OS disk for the instance. The default size is 127 GB and this hasn’t changed. Quite a while ago, Microsoft introduced Windows VM images with a smaller OS disk of only 32 GB, as announced in https://azure.microsoft.com/nl-nl/blog/new-smaller-windows-server-iaas-image/

Yes, I admit this may be old news, but I hadn’t given much thought to how to approach it when these Windows VM images became available, until recently…

More and more, I’m involved in providing ARM templates for my customers, and my main focus is on Azure IaaS deployments.

Together with Managed Disks, it has become pretty easy to determine sizing for Azure VM instances, and having both Image SKUs available provides options.

However, while I was creating these ARM templates, I noticed that I prefer the ‘Smalldisk’ Image SKUs over the standard ones, and the explanation for it is actually pretty simple.

For this post, I will use the following ARM template as a reference: https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-simple-windows

Looking at the “Properties” section of the Virtual Machine resource, you can see the relevant part of the OS Disk configuration:

"osDisk": {
    "createOption": "FromImage"
},

In this configuration, the default size will be used, which should be fine in most scenarios. If a different size is required, the notation may look like this:

"osDisk": {
    "createOption": "FromImage",
    "diskSizeGB": "[variables('OSDiskSizeinGB')]"
},

You can specify the value either as a variable or as a parameter to determine the size. In this example, I use a variable, and it must have a value supported by managed disks. In my case, I used the following value:

"OSDiskSizeinGB": "64"

OK, so nothing new here so far. However, to maintain maximum flexibility, you need to use the ‘Smalldisk’ Image SKU, which has the smallest possible size of 32 GB. From there, the only way is up.

To optimize Azure consumption by only paying for what you use and what you REALLY need, it may make sense for organizations to create some governance and policies to determine sizing for their Azure VM instances: not only for compute, but for storage as well. Managed Disks provide some guidance for that.

So for me, I’d focus on using the ‘Smalldisk’ Image SKU only and enlarge the disk when needed. It’s pretty easy to do by just adding one line in your ARM template for that VM, and an additional one for your variable…
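The same idea is easy to try outside ARM templates as well. A hedged Azure CLI sketch, with the resource group, VM name and credentials as placeholders:

az vm create --resource-group <rg> --name <vmname> \
  --image MicrosoftWindowsServer:WindowsServer:2019-Datacenter-smalldisk:latest \
  --os-disk-size-gb 64 \
  --admin-username <user> --admin-password <password>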

 

Here’s my set of variables I use to select the ‘Smalldisk’ Image SKU:

"ImagePublisher": "MicrosoftWindowsServer",
"ImageOffer": "WindowsServer",
"ImageSKU": "2019-Datacenter-smalldisk",
"ImageVersion": "latest",

And here’s the relevant part of the Image reference:

"imageReference": {
    "publisher": "[variables('ImagePublisher')]",
    "offer": "[variables('ImageOffer')]",
    "sku": "[variables('ImageSKU')]",
    "version": "[variables('ImageVersion')]"
},
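To put it all together, deploying a template with these variables is a single CLI call. A minimal sketch; the file name is a placeholder, and the parameter names are the ones used by the quickstart template linked above:

az group deployment create --resource-group <rg> --template-file azuredeploy.json \
  --parameters adminUsername=<user> adminPassword=<password> dnsLabelPrefix=<uniquelabel>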
Hope this helps!

Looking back at 2018

The time to look back at 2018 has come a bit sooner than expected due to an enforced shutdown at my current employer. That’s great, as I don’t have to bother with it shortly before the year ends…

2018 is a year that brought a lot of changes for me. Until October 2018, I worked at SCCT BV before making the switch to DXC Technology. Certain events I encountered this year justified this switch for me. The main reason is that I wanted to be a specialist again, with a focus on Microsoft Azure (and I get Azure Stack for free). This means I’ve decided to let go of my previous experience in Enterprise Client Management (ECM), and I will no longer work with System Center Configuration Manager, the rest of System Center, Hyper-V, or Microsoft Intune. So don’t expect any blog posts on those…

I was becoming too much of a generalist while being expected to be a specialist in all of these technologies at the same time. Basically, if you claim to be a specialist at everything, you become a specialist at nothing.

An interesting aspect I learned by making this switch is how an employer reacts to your resignation, especially if you’ve been working for them for quite some time (4.5 years). Apparently, not all employers handle it well, and I find that SCCT BV didn’t handle my resignation well. I find that quite a shame, unnecessary and rather immature. An employer’s behavior may have a serious impact on its reputation; after all, it takes years to build a reputation but just 5 minutes to lose it completely. It also taught me to look closely at how an organizational structure is set up prior to joining an employer in the future. But I hope that I won’t have to do that anymore…

Fortunately, I expect to find a lot of good opportunities in my new role at DXC Technology. The best thing I’ve found so far is that my work/life balance has become much better. It allows me to maintain my health much better than before, and I already see results (I lost quite some weight and need to lose some more). Another great thing is that I can work anywhere I want: DXC Technology facilitates working from home in a proper manner, and that helps a lot to improve my performance. And I get to travel sometimes, which is nice too.

So hopefully I will have some stuff to blog about in 2019. It will most likely be Azure or Azure Stack related.

I wish everyone a prosperous 2019!!!

Posted by mwesterink on 21/12/2018 in Opinion, Rant

 

Case study: Availability Sets vs. Availability Zones

One of the biggest challenges customers face is making sure a Highly Available solution survives a catastrophic failure at the fabric layer of Microsoft Azure: think of things like servers, storage, network devices, power and cooling. Not having to care about the fabric layer is one of the main reasons why organizations consider running their workloads in Azure in the first place.

However, Azure locations are not housed in some magic castles that would make them invulnerable to catastrophic failures or other natural disasters. Of course, the magnitude of a possible disaster lets organizations think about scenarios to safeguard (more or less) the availability of their workloads. After all, Microsoft and their customers have a shared responsibility for keeping the lot running.

Maintaining high availability within a single region provides two options:

  • Availability Sets: allow workloads to be spread over multiple hosts and racks, but they still remain within the same data center;
  • Availability Zones: allow workloads to be spread over multiple locations, so you automatically don’t care on which host the workload will run.

The following picture displays the relation between possible failures and the SLA percentage. Obviously, Availability Zones offer higher protection against failures. Region pairs are beyond the scope of this post…

The beauty of both scenarios is that the VNet required to connect an Azure VM is not bound to a single data center (a.k.a. an Availability Zone); it is stretched over the whole region.

So I thought, let’s try this out with a typical workload that requires a high level of availability and can sustain failures pretty well. My choice was to host a SQL fail-over cluster (no Always On Availability Group) with additional resiliency using Storage Spaces Direct. Using all these techniques to maintain uptime, how cool is that?

I used the following guides to deploy a two-node Windows Server 2016 cluster:

Actually, I built two SQL S2D clusters. Both clusters were completely the same (two DS11 VMs, each with 2 P30 disks), except one was configured with an Availability Set and the other with an Availability Zone.
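The difference also shows when creating the cluster nodes. With the Azure CLI, the only thing that changes between the two variants is a single parameter; names are placeholders and this is a sketch rather than my exact commands:

# Availability Set variant:
az vm create --resource-group <rg> --name <node1> --image Win2016Datacenter --availability-set <avset>

# Availability Zone variant, pinning each node to a different zone:
az vm create --resource-group <rg> --name <node1> --image Win2016Datacenter --zone 1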

What makes the difference is the requirement for the Azure Load Balancer. You need an Azure Load Balancer (with its health probe) to determine which node is active. Looking at the Azure Load Balancer overview, available at https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview, you can see that you need a Standard SKU when using Availability Zones; when using an Availability Set, a Basic SKU is sufficient. But that’s actually it when deploying a SQL cluster using S2D. However, since the Load Balancer is an internal one anyway, I’d recommend using the Standard SKU regardless. From a pricing perspective, I don’t believe it would make much of a difference, and if the penalties for downtime are severe, I wouldn’t nitpick about this anyway.
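For reference, creating an internal Standard SKU load balancer with the Azure CLI looks roughly like this; the names and frontend IP address are placeholders for my setup:

az network lb create --resource-group <rg> --name <lbname> --sku Standard \
  --vnet-name <vnet> --subnet <subnet> --private-ip-address 10.0.1.10 \
  --frontend-ip-name ClusterFrontEnd --backend-pool-name ClusterNodes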

 

 

Posted by mwesterink on 20/09/2018 in Uncategorized

 

Manageability and responsibility for Cloud Services

Many organizations are facing challenges when moving their IT services to the Public Cloud. For the sake of this post, I focus solely on Microsoft Azure, although I am aware that other Cloud providers have similar approaches and models…

As we’re all aware, three categories of Cloud services exist:

  • Infrastructure as a Service (IaaS);
  • Platform as a Service (PaaS);
  • Software as a Service (SaaS).

Each category has its own level of management: some elements are managed by the Cloud provider, and the rest is managed by yourself. The amount of management differs per category, as displayed in the picture below.

As you can see, SaaS services are completely managed by the Cloud provider, which is great. A good rule of thumb is that if a Line of Business (LoB) application can be replaced by a SaaS alternative, then it really makes sense to do so. Looking at IaaS and PaaS, you can see that the amount of management done by the Cloud provider is higher with PaaS than with IaaS. This means the following recommendations can be made:

  • Replace/migrate existing applications to SaaS services. This will relieve the IT department of the daily tasks of managing them;
  • Consider using PaaS services as much as possible. This will also lower the administrative effort of managing cloud services for the IT department. Additionally, certain PaaS services allow developers to develop and deploy immediately to the PaaS service (e.g. an Azure Web App), making them not dependent on an IT pro to facilitate the service.

However, less management doesn’t mean less responsibility. Even though using Cloud services means managing less yourself, it doesn’t mean the organization is no longer responsible. Microsoft has released documentation regarding the shared responsibility between the customer and themselves; this guide is available at http://aka.ms/sharedresponsibility. From the guide, I took the following screenshot showing a diagram of the responsibilities.

 

As you can see, the customer still has some responsibility when using SaaS services. Together, these models allow a customer to define a strategy when moving to the cloud…

 

 

Posted by mwesterink on 05/09/2018 in Azure, Public Cloud

 

Ensure IT Governance using Azure Policy…

Many organizations face challenges using Microsoft Azure in a controlled way. The high (and still increasing) number of services and the scale of Microsoft Azure may make it pretty overwhelming to maintain control and enforce compliance with IT governance, also known as company policy. How great would it be if organizations could enforce their IT governance in Microsoft Azure?

Well, meet Azure Policy.

Azure Policy allows IT organizations to enforce compliance on the Azure resources they use. Once a policy is applied, it reports compliance on existing Azure resources, and it is enforced on newly created ones. A full overview of Azure Policy is available at https://docs.microsoft.com/en-us/azure/azure-policy/azure-policy-introduction.

For my own subscription, which I use for testing purposes only, I enforced a single policy that defines which Azure location I am allowed to use. In my case, the location is West Europe, which is more or less around the corner for me. Adding Azure resources in a different location after applying the policy results in an error message.
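For those who prefer scripting over the portal, the same assignment can be done with the Azure CLI. A hedged sketch: I look up the built-in definition by its display name rather than hard-coding its GUID, and ‘listOfAllowedLocations’ is, to my knowledge, the parameter name used by that built-in definition:

DEF=$(az policy definition list --query "[?displayName=='Allowed locations'].name" --output tsv)
az policy assignment create --name allowed-locations --policy "$DEF" \
  --params '{"listOfAllowedLocations": {"value": ["westeurope"]}}'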

The screenshot below displays my configuration for this Policy.

The overview provides many more examples of typical policies that can be applied. The ones that come to my mind would most likely be:

  • Allowed locations;
  • Allowed Virtual Machine SKUs;
  • Tagging;
  • White-listing Azure resources.

Before applying any policies, I’d strongly recommend investigating your IT governance, if available. Once that is in place, you should be able to ‘translate’ it into Azure Policy.

 

Posted by mwesterink on 21/08/2018 in Azure, Public Cloud

 
 