
Category Archives: Cloud

Let’s get lazy: Deploy a Hub and Spoke network topology using Azure CLI

Recently, I’ve been playing around with a way to deploy a simple Azure Hub and Spoke topology. It is based on an Azure reference architecture which is available here.

For a typical ‘hybrid’ scenario, this reference architecture works really well; no need to reinvent the wheel. Additionally, you can use the reference architecture with shared services, which is available here. This works great for hosting AD DS, but also for services like Azure AD Connect and/or AD FS. After all, having a federated identity setup already hosted in Azure makes a lot of sense, since the path to synchronization and federation is short 🙂

Let’s have a look at the reference architecture:

To keep it simple, I’ll focus on the network setup alone. This means the VMs and NSGs will not be part of the setup in this post.

In order to deploy the environment in one go, we need to deploy the resources in the following sequence:

  1. A resource group (if not existing)
  2. The VNets
  3. The subnets within the VNets
  4. The public IP for the VNet gateway
  5. The VNet gateway itself (this one takes quite some time)
  6. VNet peerings between the Hub and Spokes (we don’t create peerings between the Spokes)

After collecting all the required Azure CLI commands, the following will do the job:

az group create --name HUBSPOKECLI --location westeurope

az network vnet create --resource-group HUBSPOKECLI --name CLIHUB1 --address-prefixes 10.11.0.0/16 --dns-servers 10.11.1.4 10.11.1.5
az network vnet create --resource-group HUBSPOKECLI --name CLISPOKE1 --address-prefixes 10.12.0.0/16 --dns-servers 10.11.1.4 10.11.1.5
az network vnet create --resource-group HUBSPOKECLI --name CLISPOKE2 --address-prefixes 10.13.0.0/16 --dns-servers 10.11.1.4 10.11.1.5

az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLIHUB1 --name Management --address-prefix 10.11.1.0/24
az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLIHUB1 --name GatewaySubnet --address-prefix 10.11.254.0/27

az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLISPOKE1 --name Management --address-prefix 10.12.1.0/24
az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLISPOKE1 --name Workload --address-prefix 10.12.2.0/24

az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLISPOKE2 --name Management --address-prefix 10.13.1.0/24
az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLISPOKE2 --name Workload --address-prefix 10.13.2.0/24

az network public-ip create --resource-group HUBSPOKECLI --name HUBSPOKECLI --allocation-method dynamic --dns-name hubspokecli
az network vnet-gateway create --resource-group HUBSPOKECLI --name HUBSPOKECLI --vnet CLIHUB1 --public-ip-address HUBSPOKECLI --gateway-type vpn --vpn-type RouteBased --client-protocol SSTP --sku Standard

az network vnet peering create --resource-group HUBSPOKECLI --name HUB1toSPOKE1 --vnet-name CLIHUB1 --remote-vnet CLISPOKE1 --allow-forwarded-traffic --allow-vnet-access --allow-gateway-transit
az network vnet peering create --resource-group HUBSPOKECLI --name SPOKE1toHUB1 --vnet-name CLISPOKE1 --remote-vnet CLIHUB1 --allow-forwarded-traffic --allow-vnet-access --use-remote-gateways
az network vnet peering create --resource-group HUBSPOKECLI --name HUB1toSPOKE2 --vnet-name CLIHUB1 --remote-vnet CLISPOKE2 --allow-forwarded-traffic --allow-vnet-access --allow-gateway-transit
az network vnet peering create --resource-group HUBSPOKECLI --name SPOKE2toHUB1 --vnet-name CLISPOKE2 --remote-vnet CLIHUB1 --allow-forwarded-traffic --allow-vnet-access --use-remote-gateways

For these sample CLI commands, I made the following assumptions:

  • I will place DNS servers (preferably Active Directory integrated ones) in the Management subnet within the HUB VNet (CLIHUB1)
  • Names for resources can be changed accordingly
  • The peerings are created after the VNet gateway because I want to use the Gateway subnet to allow traffic to other VNets by enabling the ‘Allow Gateway Transit’/’Use Remote Gateways’ options between the peers

Creating the VNet gateway takes a lot of time; it may take between 30 and 45 minutes to have the VNet gateway deployed. All other resources are deployed within a few seconds.
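While waiting, you can poll the state of the gateway deployment with a simple check like this one:

az network vnet-gateway show --resource-group HUBSPOKECLI --name HUBSPOKECLI --query provisioningState --output tsv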

The fun part of using the Azure CLI is that scripts can be created to run a set of CLI commands. These can be either bash or .cmd scripts; unfortunately, PowerShell is not supported. I use Visual Studio Code with the Azure CLI extension to deploy everything in one go myself (see the sketch below).
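As an illustration, a minimal bash wrapper could look like the following sketch. It only strings together the commands shown above and assumes the Azure CLI is installed and you are already logged in with az login:

#!/usr/bin/env bash
# Sketch: deploy the Hub and Spoke network in one go.
# Assumes: Azure CLI installed, 'az login' already done, names/prefixes as used in this post.
set -euo pipefail

RG=HUBSPOKECLI
LOCATION=westeurope

az group create --name "$RG" --location "$LOCATION"

# VNets and subnets (repeat the vnet/subnet commands for CLISPOKE1 and CLISPOKE2 as shown above)
az network vnet create --resource-group "$RG" --name CLIHUB1 \
  --address-prefixes 10.11.0.0/16 --dns-servers 10.11.1.4 10.11.1.5
az network vnet subnet create --resource-group "$RG" --vnet-name CLIHUB1 \
  --name GatewaySubnet --address-prefix 10.11.254.0/27

# Public IP and VNet gateway (the long-running step)
az network public-ip create --resource-group "$RG" --name HUBSPOKECLI \
  --allocation-method dynamic --dns-name hubspokecli
az network vnet-gateway create --resource-group "$RG" --name HUBSPOKECLI \
  --vnet CLIHUB1 --public-ip-address HUBSPOKECLI \
  --gateway-type vpn --vpn-type RouteBased --client-protocol SSTP --sku Standard

# Peerings last, so 'Allow Gateway Transit' / 'Use Remote Gateways' can be enabled
az network vnet peering create --resource-group "$RG" --name HUB1toSPOKE1 \
  --vnet-name CLIHUB1 --remote-vnet CLISPOKE1 \
  --allow-forwarded-traffic --allow-vnet-access --allow-gateway-transit
az network vnet peering create --resource-group "$RG" --name SPOKE1toHUB1 \
  --vnet-name CLISPOKE1 --remote-vnet CLIHUB1 \
  --allow-forwarded-traffic --allow-vnet-access --use-remote-gateways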

There you have it. This is all you need to deploy your Hub and Spoke network topology using Azure CLI commands.

Hope this helps!

 

 

 

 

 

 

 

Posted on 24/05/2019 in Azure, Cloud, Public Cloud

 

Playing around with Azure Data Box Gateway: first impressions

Helping customers move their on-premises environment to Microsoft Azure brings quite some challenges regarding the large amount of data they have created over the years. It has become quite common for customers to ask themselves whether they still want to make heavy investments in hardware capacity (servers, network, storage), meaning they have to put a big bag of money on the table every few years (5 years is a common hardware life cycle). This strategy defeats any on-demand capacity requirements and makes it hard to adapt when requirements change. Especially in typical office environments, some data is accessed less over time but still needs to be stored on local storage, which needs to be managed and maintained.

A couple of years ago, a nice appliance was introduced to remedy this: StorSimple

While they’re great devices, they still have their troubles. Not to mention, these devices are reaching end-of-life before too long, as displayed here.

Over time, Microsoft has introduced Azure Data Box to allow large amounts of data to be transferred to Azure. This post describes my first impressions of a service I’ve been dying to see for quite some time: Azure Data Box Gateway. An overview of this service is available here.

I was particularly curious about Microsoft's claim of how seamless it is to transfer data to Azure using this solution, and how easily it is made available. For the sake of testing this appliance, I decided to deploy an Azure VM that supports nested virtualization. I don't have any hardware at home or a hardware lab at my employer, but my employer is kind enough to provide me with an Azure subscription with a little more spending room than a typical MSDN subscription 🙂 I would not recommend using this setup for production scenarios; the Azure Data Box Gateway is designed to be deployed to an on-premises environment.

My test setup is pretty simple:

  • A Windows Server 2019 ‘Smalldisk’ Azure VM; I used VM size Standard E8s v3 (8 vCPUs, 64 GB memory)
  • I added a 4 TB Premium SSD data disk to host two VMs
  • I used Thomas Maurer’s post to set up nested virtualization and configure the Hyper-V networking. The rest of his post is not relevant for this test
  • I deployed a Windows Server 2019 VM. Since I wasn’t really sure what to expect, I conveniently called it DC01. I guess the name pretty much gives away its role

After that I followed the tutorial to deploy a single Azure Data Box Gateway VM. Honestly, the tutorial is painfully simple; it’s almost an embarrassment. I guess someone with no IT skills would be able to reproduce the steps as well. After running steps 1 to 4, I ended up with a configuration that looks like the following:

I added a single user for this purpose:

Here’s my configuration of a single share; I created a separate storage account for it as well:

And I configured Bandwidth Management. I decided to be a bit bold with the settings; for this test I am in Azure anyway, so who cares. In the real world, I would probably use different settings:

Finally, I connected the share to my VM with a simple net use command and started populating the share with some data, beginning with some Linux ISO images.
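The mapping itself is nothing more than a standard SMB drive mapping, along these lines (the device address, share name and share user are placeholders; you are prompted for the share user’s password):

net use Z: \\<gateway-device-ip>\<share-name> /user:<share-user>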

And you see them in the storage account as well 🙂 :

So yes, so far it’s pretty seamless. Seeing the files like that in the storage account triggered me to do another test: would the data be available in the share when data is uploaded to the storage account outside the Azure Data Box Gateway?

So I installed Azure Storage Explorer and uploaded data from it to the storage account. After uploading some junk files, I could see them not only in the storage account but also in the share on the VM. I was impressed with those results, which fully validate Microsoft’s statement that it’s seamless. I consider it an even more seamless solution than StorSimple at this time.

Finally, looking at the Azure Data Box Gateway limits mentioned here, here are some thoughts on how much this solution can scale:

Description                                  Value
No. of files per device                      100 million (the limit is ~25 million files for every 2 TB of disk space, up to the maximum of 100 million)
No. of shares per device                     24
No. of shares per Azure storage container    1
Maximum file size written to a share         For a 2-TB virtual device, the maximum file size is 500 GB; the maximum file size increases with the data disk size in the same ratio, up to a maximum of 5 TB

Knowing that a single storage account can host a maximum of 500 TB (and some regions allow 2 PB), this scaling can go through the roof, making this service exceptionally suitable for large, gradual migrations to Azure storage…

 

 

 

 
 

Thoughts on the standard Image SKU vs. the ‘Smalldisk’ Image SKU

For a long time, organizations using Windows VM instances in Microsoft Azure didn’t have options regarding the OS disk for the instance. The default value is 127 GB and this hasn’t changed. Quite a while ago, Microsoft introduced Windows VM images with a smaller OS disk of only 32 GB, as announced in https://azure.microsoft.com/nl-nl/blog/new-smaller-windows-server-iaas-image/

Yes, I admit this may be old news, but I hadn’t given much thought to how to approach it when these Windows VM images became available, until recently…

More and more, I’m involved in providing ARM templates for my customers, and my main focus is on Azure IaaS deployments.

Together with Managed Disks, it has become pretty easy to determine sizing for Azure VM instances, and having both Image SKUs available provides options.

However, while creating these ARM templates, I noticed that I prefer the ‘Smalldisk’ Image SKU over the standard one, and the explanation for it is actually pretty simple.

For this post, I will use the following ARM template as a reference: https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-simple-windows

Looking at the “Properties” section of the Virtual Machine resource, you can see the relevant part of the OS Disk configuration:

"osDisk": {
  "createOption": "FromImage"
},

In this configuration, the default size will be used which should be great in most scenarios. If a different size is required, then the notation may look like this:

"osDisk": {
  "createOption": "FromImage",
  "diskSizeGB": "[variables('OSDiskSizeinGB')]"
},

You can specify the value either as a variable or a parameter to determine the size. In this example I use a variable and it must have a supported value for managed disks. In my case I used the following value:

"OSDiskSizeinGB": "64"

OK, so nothing new here so far. However, to maintain maximum flexibility, you only need to use the ‘Smalldisk’ Image SKU, which has the smallest possible size of 32 GB. From there, the only way is up.

To optimize Azure consumption by only paying for what you use and what you REALLY need, it makes sense for organizations to create some governance and policies to determine sizing for their Azure VM instances; not only for compute, but for storage as well. Managed Disks provide some guidance for that.

So for me, I’d focus on using the ‘Smalldisk’ Image SKU only and enlarge the disk when needed. It’s pretty easy to do by just adding one line in your ARM template for that VM, and an additional one for your variable…

 

Here’s my set of variables I use to select the ‘Smalldisk’ Image SKU:

"ImagePublisher": "MicrosoftWindowsServer",
"ImageOffer": "WindowsServer",
"ImageSKU": "2019-Datacenter-smalldisk",
"ImageVersion": "latest",

And here’s the relevant part of the Image reference:

"imageReference": {
  "publisher": "[variables('ImagePublisher')]",
  "offer": "[variables('ImageOffer')]",
  "sku": "[variables('ImageSKU')]",
  "version": "[variables('ImageVersion')]"
},
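To test the result, the quickstart template can be deployed with the Azure CLI as well. A minimal sketch, assuming a resource group of your choosing and a parameters file next to the template (the names are just examples; on older CLI versions the second command is az group deployment create):

az group create --name SMALLDISK-DEMO --location westeurope
az deployment group create --resource-group SMALLDISK-DEMO --template-file azuredeploy.json --parameters @azuredeploy.parameters.json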
Hope this helps!

 

 

 
 

Manageability and responsibility for Cloud Services

Many organizations are facing challenges when moving their IT services to the Public Cloud. For the sake of this post I focus solely on Microsoft Azure, although I am aware that other Cloud Providers have a similar approach and models for it…

As we’re all aware three categories of Cloud Services exist:

  • Infrastructure as a Service (IaaS);
  • Platform as a Service (PaaS);
  • Software as a Service (SaaS).

Each category has its own level of management: some elements are managed by the Cloud provider, the rest is managed by yourself. The amount of management differs per category, as displayed in the picture below.

As you can see, SaaS services are completely managed by the Cloud provider. This means that if a Line of Business (LoB) application can be replaced by a SaaS alternative, it really makes sense to do so. Looking at IaaS and PaaS, you can see that the amount of management done by the Cloud provider is higher with PaaS than with IaaS. This means the following recommendations can be made:

  • Replace/migrate existing applications to SaaS services. This relieves the IT department of the daily tasks of managing them;
  • Consider using PaaS services as much as possible. This will also lower the administrative effort of managing cloud services for the IT department. Additionally, certain PaaS services (e.g. Azure Web Apps) allow developers to develop and deploy immediately to the service, so they don’t depend on an IT pro to facilitate it.

However, less management doesn’t mean less responsibility. Even though Cloud services require less management, that doesn’t mean the organization is no longer responsible. Microsoft released documentation regarding the shared responsibility between the customer and themselves; this guide is available at http://aka.ms/sharedresponsibility. From the guide I took the following screenshot, showing a diagram of the responsibilities.

 

As you can see, the customer still has some responsibility when using SaaS services. However, these models allow a customer to define a strategy when moving to the cloud…

 

 

Posted on 05/09/2018 in Azure, Public Cloud

 

Ensure IT Governance using Azure Policy…

Many organizations face challenges using Microsoft Azure in a controlled way. The high (and still increasing) number of services and the scale of Microsoft Azure can make it pretty overwhelming to maintain control and enforce compliance with IT governance, also known as company policy. How great would it be if organizations could enforce their IT governance in Microsoft Azure?

Well, meet Azure Policy.

Azure Policy allows IT organizations to enforce compliance on Azure resources used. Once a Policy is applied it can report compliance on existing Azure resources and it will be enforced on newly created ones. A full overview of Azure Policy is available at https://docs.microsoft.com/en-us/azure/azure-policy/azure-policy-introduction.

For my own subscription, which I use for testing purposes only, I enforced a single policy that defines which Azure location I am allowed to use. In my case, the location is West Europe, which is more or less around the corner for me. Adding Azure resources in a different location after applying the policy results in an error message.

The screenshot below displays my configuration for this Policy.
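For reference, roughly the same assignment can be made with the Azure CLI. This is only a sketch; it assumes the built-in definition can be found by its ‘Allowed locations’ display name and that its parameter is named listOfAllowedLocations:

definition=$(az policy definition list --query "[?displayName=='Allowed locations'].name" --output tsv)
az policy assignment create --name allowed-locations --policy "$definition" --params '{ "listOfAllowedLocations": { "value": [ "westeurope" ] } }'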

The overview provides many more examples of typical policies that can be applied. The ones that come to my mind would most likely be:

  • Allowed locations;
  • Allowed Virtual Machine SKUs;
  • Tagging;
  • White-listing Azure resources.

Before applying policies, I’d strongly recommend investigating your IT governance, if available. Once that is in place, you should be able to ‘translate’ it into Azure Policy.

 

Posted on 21/08/2018 in Azure, Public Cloud

 

Enrolling lots of Windows 10 devices to Microsoft Intune, why bother?

Recently I’ve been involved in a few Microsoft Intune deployments.

These are standalone environments, so no hybrid scenario with System Center Configuration Manager. As we all know, Microsoft Intune can be purchased separately, but that’s something I wouldn’t recommend. The pricing models of Enterprise Mobility + Security (EM+S) or Microsoft 365 Enterprise (a.k.a. Secure Productive Enterprise) give you a lot more benefits, making them a true bang for your buck. Organizations that fail to see that will basically defeat themselves, because their competition does embrace this strategy. These subscriptions can replace a lot of on-premises management tools, which frees administrators from their daily tasks of extinguishing fires…

Microsoft Intune is available in EM+S E3 and Microsoft 365 Enterprise E3 (and in both E5 subscriptions). Both subscriptions also include Azure Active Directory Premium P1, which is a requirement for the goal this post is about: making Windows 10 device enrollment really simple.

Following the guidelines at https://docs.microsoft.com/en-us/intune/windows-enroll allows organizations to deliver automatic enrollment for Windows 10 devices when Azure Active Directory Premium is enabled for a user who is assigned an EM+S or Microsoft 365 Enterprise license. All features are enabled by default, so we know it’s there as long as we don’t fiddle around with them…

So what does this actually mean?

Well, it means that each user who receives a Windows 10 device, preferably Enterprise, will do the device enrollment for you during the OOBE phase of Windows 10. It doesn’t matter if your organization has 5, 50, 500, 5000 or more devices. How cool is that?

As long as all required licenses are in place, admins don’t need to bother about this at all…

 

 

My first Azure Stack TP2 POC deployment ending in disaster…

Today I had the opportunity to attempt my first Azure Stack TP2 POC deployment. Having a DataON CiB-9224 available allowed me to have a go at deploying an Azure Stack TP2 POC environment, after finishing some tests of Windows Server 2016 on the platform. The results of those tests are available at https://mwesterink.wordpress.com/2017/01/19/case-study-running-windows-server-2016-on-a-dataon-cib/

Before I started testing I reviewed the hardware requirements which are available at https://docs.microsoft.com/nl-nl/azure/azure-stack/azure-stack-deploy

Unfortunately, a small part made me wonder if I would actually succeed in deploying Azure Stack. Here’s a quote of the worrying part:

Data disk drive configuration: All data drives must be of the same type (all SAS or all SATA) and capacity. If SAS disk drives are used, the disk drives must be attached via a single path (no MPIO, multi-path support is provided).

Damn, again a challenge with MPIO. Such a shame since I meet all other hardware requirements.

So I decided to have a go anyway and find out why MPIO is not supported by deploying Azure Stack TP2 regardless. I followed the instructions at https://docs.microsoft.com/nl-nl/azure/azure-stack/azure-stack-run-powershell-script to see what happens…

I used a single node of the CiB-9224 with only four 400 GB SSD disks. I turned the other node off and disabled all unused NICs.

After a while I decided to check its progress, and I noticed that nothing was happening at a specific step (there was an hour between the latest log entry and the time I went to check). Here’s a screenshot of where the deployment was ‘stuck’:

stuck_at_s2d

It seems the script was trying to enable Storage Spaces Direct (S2D). Knowing that S2D is not supported with MPIO, I terminated the deployment and wiped all data, because I knew I was going to be unsuccessful. At least I know why.

I didn’t meet all the hardware requirements after all. Fortunately, it gave me some insight into how to deploy Azure Stack, so when I do have hardware that meets the requirements, at least I know what to do…

Looking at the requirements again, it’s obvious that the recommended way to go is with single channel JBOD.

 

 

 
 