
Fun with Azure Files together with Microsoft Deployment Toolkit

Recently I was involved in doing a Proof of Concept (PoC) for Windows Virtual Desktop (WVD) for one of my customers. My goal was to use as many Azure Platform as a Service (PaaS) components as possible resulting in a simple environment using the following services:

  • Azure AD Domain Services (AAD DS)
  • Azure Files
  • Bastion to access a Jumpbox
  • WVD Host Pools

I used a simple Azure reference architecture to deploy the virtual network infrastructure, available at https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/shared-services, but without the 'hybrid' components like VPN/ExpressRoute, and replaced the AD DS VMs with AAD DS. This is a requirement for using Azure Files to store the FSLogix profile containers. See https://docs.microsoft.com/en-us/azure/virtual-desktop/create-profile-container-adds for more information. Based on the PoC, I must admit it works remarkably well.

This has become possible since Azure Files supports identity-based authentication over Server Message Block (SMB) through Azure Active Directory Domain Services (Azure AD DS). See https://docs.microsoft.com/en-us/azure/storage/files/storage-files-identity-auth-active-directory-domain-service-enable for more information.

I am talking about this setting:

To me, this may become a huge game changer for file services, especially when authentication through AD DS becomes generally available. It could allow a lot of file servers to be replaced by Azure Files. But it also caught my attention for a different scenario.

A while ago I wrote this post about installing Windows 10 over the Internet. I still believe that installing Windows over the Internet should be possible, especially when plenty of bandwidth is available. So I wanted to determine whether it's possible to use Microsoft Deployment Toolkit (MDT) to deploy an operating system over the Internet with Azure Files.

NOTE: This scenario works only when your ISP allows SMB traffic (TCP port 445). Some ISPs don't.

To prepare the environment I did the following:

  • Setup Azure AD DS in a small vNET
  • Install Azure Files
  • Deploy a small VM to install MDT and manage the Deployment Share

The first thing that needed to be done was to create a share. I used a quota of 1 TB, which is more than enough, and I didn't use a Premium share.
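For reference, the share can also be created with the Azure CLI instead of the portal. This is a minimal sketch with example names; the storage account name, key and share name are placeholders, and the quota is specified in GiB:

az storage share create --account-name mystorageaccount --account-key <storage-account-key> --name mdtshare --quota 1024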

I created two identities in Azure AD, used not only for joining the domain but also for accessing the Azure File Share, and granted them the required permissions. To keep the scenario simple, these accounts are also part of the AAD DS Administrators group.

I use one of these accounts to log on to the VM used to create and manage the Deployment Share. The VM is joined to the AAD DS domain and has MDT installed.

Eventually you can create your deployment share using the UNC path of the Azure Files Share and do your typical MDT stuff like adding apps or your Windows 10 installation media. It may look like this:

In the Azure Portal, you see the same directory structure as well:

The trick is to provide access from any location outside AAD DS so we can reach the Deployment Share from anywhere. To do that, we specify the user name and password in the Bootstrap.ini file. The credentials are the same as the ones in the connection script available from the Azure Portal (essentially the same net use command MDT uses as well):
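As a rough sketch, the relevant Bootstrap.ini entries could look like this, assuming the share is accessed with the storage account name and key (all values below are placeholders):

[Settings]
Priority=Default

[Default]
DeployRoot=\\mystorageaccount.file.core.windows.net\mdtshare
UserID=mystorageaccount
UserPassword=<storage-account-key>
UserDomain=Azure
SkipBDDWelcome=YES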

Once everything is created, you can grab the bootable .iso from the share itself; you can even download it directly from the Azure Portal:

Eventually, all you need to do is boot from the .iso and you can start your deployment.

Here’s a screenshot of a machine running Hyper-V from a different location, choosing a normal deployment:

NOTE: You can choose to capture an image if you want to…

For the rest, I didn't do anything specific from an MDT perspective: just a simple Windows 10 deployment with Office 365. What you put into MDT is up to you. The end result is that you can deploy a machine from any location over the Internet.

Happy deployments and hope this helps!

 

Why ‘COVID-19 Outbreak Teams’ may need IT people too…

This post may be a bit funny including the title, but they’re just my thoughts. That’s the fun part of having an opinion, I am not introducing facts here. It’s up to you, the reader, what you want to do with it. My view may be a bit simplistic but experience tells me that the best solutions are most often the simplest ones.

To keep the text readable, I will use the abbreviation of the Dutch 'COVID-19 outbreak team', officially the Outbreak Management Team (OMT). I'm a Dutch citizen, so similar teams may have different names in your country. Each country has its own measures, so not everything may apply to yours…

As we all know, at the time of writing this post the world is struck by a virus that is taking lives, and many people, organizations, governments and their politicians don't really know how to deal with it. Some exceptions apply, though. Fortunately, I haven't been struck by this virus yet, and the same goes for the people close to me. It also allows me to observe the whole situation more rationally. As an outsider, I am very interested in how these teams handle the situation. Unfortunately, there's a lack of transparency in their motives and approaches, since most of these teams have closed meetings and their meeting notes are neither shared nor published. Many other aspects of handling the situation remain obscure, making it difficult to really understand what they're doing. This invites suspicion towards these teams and is something I would have done differently.

Dealing with threats in IT, whether very small, small, big or even massive (comparable to a global outbreak), is 'business as usual' for us IT people. IT infrastructures constantly 'suffer' from attacks (like malware) or loss of service availability (either by malfunction or by manual actions). This is what we do, and we don't know any better.

The first thing IT organizations do is make sure a 'first line of defense' is available. The first line of defense allows quick detection and, where possible, mitigation. Skilled first-line teams may have room to do a quick analysis and deliver quick workarounds. If at all possible, a complete shutdown of the service or infrastructure will NEVER be done unless absolutely needed, so that more information about the issue can become available; I haven't seen such a shutdown happen in my career. Complete outrage would be the result if an organization shut down a service or infrastructure entirely because of an issue. I remember that a couple of years ago there was severe outrage when WhatsApp introduced the 'blue ticks'; imagine what would have happened if the company behind WhatsApp had turned off the service completely to disable that feature again. The outrage would have been even bigger, but I digress…

Once the first line of defense has found a mitigation, a.k.a. a workaround, that workaround will be used until a permanent solution has been found. Let's translate that to the COVID-19 situation. Here in The Netherlands a general practitioner found a combination of medicines that could be used to mitigate the threat so that loss of life could be reduced. He didn't just invent it by himself; he used the greatest source available to collect experiences from peers around the world: the Internet. Unfortunately, he was ordered to stop the treatment, even though it was successful, because government protocol forbade him from prescribing it. Baffling if you ask me, but not completely surprising either.

The OMT and politicians, mostly paralyzed by fear, decided to disable the first line of defense immediately, skip the mitigation process completely, focus on finding a vaccine, and order lockdowns until a vaccine has been found, while we don't even know if one can be found in the first place. Compare it to an IT Service Management (ITSM) process where you skip Incident Management entirely and go straight to Problem Management with an extremely high-priority Problem, NO workarounds, and a prayer that Change Management will come up with something.

Having an approach like this is potentially very dangerous. Lockdown measures have been taken, including a set of instructions. The irony is that the numbers of deaths and hospitalizations started to drop at pretty much the same time. So what is the danger of this?

Well, insufficient information gathering by the first line of defense, combined with seeing certain trends in fatalities, introduces the assumption that the measures taken are working. The biggest threat here is that no 'root cause analysis' happens anymore because the measures appear to be working, while nobody knows if that is actually true. The result is that the wrong solution is used to solve the problem, especially when poor statistics are combined with 'tunnel vision'. "Yes, since we introduced social distancing using the '1.5 meter society' the numbers are going down" is the common argument. But what if that is not the case? Tunnel vision prevents people from extending their research, which I consider far more dangerous…

As for the Dutch OMT, it has become painfully clear that while a lot of new research is available worldwide that provides new insights to investigate, they still stick to their mantras, which have become either superseded or outdated because of this tunnel vision. To me, that is a missed opportunity.

If the first line of defense can still collect information about these threats, then other IT services can be used to store and analyze the data collected. This is where IT can help as well. You can compare it with gathering metrics data for your monitoring solution. This may work far better than mathematical modelling to understand what is happening, especially when the model itself is flawed and the same input delivers different results. I guess Sir Isaac Newton would probably never have been able to find out how gravity works if apples either dropped, floated or even levitated when the tree released them. Fortunately for us, he used an 'evidence-based' approach: observing that all apples fell down allowed him to describe gravity…

So, IT people will not be able to combat the COVID-19 virus itself. They lack the knowledge and skills to do so; that is a job for virologists and epidemiologists. But I am certain IT people can help them with guidance on using a proper approach. This can only work when all parties (especially governments and politicians) are open, honest and fully transparent in dealing with this pandemic. And it can be done with anonymized data by taking out patients' personal data. I believe that the number of deaths could have been lower if a proper process had been in place…

And for governments and their politicians who introduced lockdowns I have a single question: is it worth destroying entire economies at any price to save a relatively small number of lives? After all, it is you who put countries in lockdown, not the virus…

 

 

 

 

 

Experimenting with Pi-hole: Can it be used in a corporate environment?

In my previous post I discovered the Pi-hole project to free up my Internet connection by preventing a lot of ads, which annoy me anyway, from being downloaded. Running Pi-hole on a Raspberry Pi at home is fine. However, it got me thinking whether I could use this in a corporate environment to filter ads for everyone. Corporate environments use directory services to manage user accounts and assign permissions. To keep it simple, I focused on Active Directory.

So, I decided to set up a lab environment. To make sure capacity is not a limitation for my setup, I created a small environment in Azure: a simple VNet with some VMs, nothing fancy. I also noticed that making screenshots is much easier when using Azure Bastion to connect to the machines. See https://docs.microsoft.com/en-us/azure/bastion/bastion-overview for more information regarding Azure Bastion.

The answer to the question is: yes and no!

Why no?

Looking at the documentation and the DNS settings tab in Pi-hole's admin portal, I can't create a primary DNS zone with my own domain. Since Active Directory requires a DNS zone (either AD integrated or a primary DNS zone), there's no way for me to do any DNS registration whatsoever.

The screenshot below displays my DNS configuration. There’s no way to create a Primary DNS Zone. I selected Cloudflare as my Upstream DNS servers so Cloudflare is used as a DNS forwarder.

 

So why yes then?

OK, so I can't use Pi-hole to host DNS zones, but I can still use it as a forwarder for my Active Directory domain and its own DNS servers, which are Active Directory integrated. It looks like this:

The result is a 'chain' of name resolution for clients within the Active Directory domain: first the AD-integrated DNS, then everything else is forwarded to Pi-hole, and Pi-hole uses an upstream forwarder of your choice (like Cloudflare in my example).
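As a minimal sketch of that forwarding step, the AD-integrated DNS servers can be pointed at Pi-hole with something like the following, run on the Windows DNS server itself (the IP address 10.0.0.4 is just an example address for the Pi-hole server):

dnscmd /ResetForwarders 10.0.0.4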

As the Pi-hole documentation states, Pi-hole is not limited to the Raspberry Pi but can run on several Linux distributions. In my lab environment I used Ubuntu Server, and it is almost painfully easy to deploy: a Pi-hole server can be deployed in less than an hour. It is very lightweight and has very small capacity requirements…

And never forget rule #1 when troubleshooting: It’s always DNS 😉

 

 

 

Home project: Preparing an ‘ad free’ browsing experience for the upcoming decade

My current employer has a very nice policy stating that I can work where and (mostly) when I want. My workplace device is equipped with a Windows 10 setup that is joined to Azure AD, so I don't depend on an on-premises infrastructure to access company resources. Sometimes I need to travel and sometimes I visit the office when I see fit. If traffic is a nightmare and there's no immediate reason to travel, then I work from home. If I have a cold too small to call in sick but big enough to infect coworkers, then I work from home. If I need to take a delivery, e.g. kitchen appliances with a broad estimated delivery window (9:00-12:00), then I work from home. I guess you get the idea…

Since I'm working solely with Microsoft Azure to help customers make a difference, the only thing I need is an Internet connection and access to a customer's subscription.

So yes, it’s a wonderful thing to have this freedom.

This also means I'm using my Internet connection at home quite intensively. One of the most frustrating things about browsing the Internet nowadays is that many websites are heavily populated with ads. The same goes for watching videos on Youtube that are interrupted by ads during playback. The final nail in the coffin is ads embedded in apps on my laptop, my Smart TV and my phone when connected to the home network.

I tried to combat these ads using an ‘ad blocker’ in my browser but this has a few setbacks:

  • It works only in that browser. Using more browsers requires enabling the ‘ad blocker’ multiple times
  • Websites detect these ad blockers with different results. Some websites just whine, others start to bullshit about it

By accident I encountered two Youtube videos that displayed a method to block ads on my home network, which are available here and here. I know there may be more but two will do. I ignored their own sponsored junk (I don’t believe VPN is the holy grail) and focused on the solution displayed by them. Nevertheless I thank the content creators for their tutorials. After seeing these videos, I said to myself: “I think I can do that as well”.

This post is about my first impressions using Pi-hole.

My brother-in-law was kind enough to let me borrow a Raspberry Pi B including a 4 GB SD card (yes, the device is that old, but it works like a charm).

My goal was to install the ‘Lite’ build of Raspbian and do everything headless. In order to achieve that I had a few challenges:

  • I tried different tools to flash the SD card. Eventually, I ended up using Etcher since this gave me the best result
  • SSH is not enabled by default, apparently for quite a while now, but this stuff is new to me. I followed the instructions displayed here on how to enable it
  • Had to run some updates before getting started
  • I chose to configure a static IP-address during the setup of Pi-hole
  • I chose to use the web interface as well, gives me great stats and insights

Following the documentation on the Pi-hole website allowed me to get all this stuff done in roughly one hour.
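Roughly, the headless preparation and installation boil down to something like the sketch below. The boot partition mount point is just an example and depends on your system; the installer command is the one from the Pi-hole documentation:

# Enable SSH headlessly: place an empty file named "ssh" on the SD card's boot partition
touch /media/boot/ssh

# On the Pi itself: update packages, then run the Pi-hole installer
sudo apt-get update && sudo apt-get -y upgrade
curl -sSL https://install.pi-hole.net | bash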

Here’s a picture of my now home based DNS server.

After changing my router settings and clearing all browser history and DNS caches, I started playing around a bit to see if Pi-hole works as expected.

Here are some of my findings:

  • Chrome has a nasty feature called "Async DNS resolver" that lets some ads get through if a website uses it (e.g. Youtube in the Chrome browser). I found a workaround here
  • When Async DNS resolver is disabled, most ads are no longer displayed
  • Youtube app on my Smart TV doesn’t display ads
  • It seems that Microsoft Edge Beta is not using a similar feature like “Async DNS resolver”

To conclude: This is a great way of keeping ads out as much as possible. My connection is freed from all this ad nonsense.

As for Google: I know this is your business model but this nonsense should not be in there or people may need to reconsider using Chrome, so stop it…

 

 

Migration to Azure assessment: a politically incorrect view…

Migrations to Microsoft Azure are becoming more and more common as organizations start to understand the added value that Microsoft Azure (and public cloud in general) can offer. However, starting to understand and really understanding are two different things, which creates some challenges when starting the journey. Fortunately, people like me are here to help organizations do this journey the 'right' way.

Typical reasons to migrate to Microsoft Azure may be the following (but may not be limited to):

  • Modernization of IT;
  • Hardware used has reached end-of-life;
  • Data-room or data-center closure;
  • Tired of doing/managing this stuff yourself;
  • Evolving the IT organization from an administrative department to a directive one. This is more or less a goal and not an incentive.

Before starting the assessment, many organizations fall into a few potential pitfalls that would partly or completely defeat the potential that Microsoft Azure can offer:

  • Ignoring your current processes and failing to evolve them for the new world ("We would like to keep working the way we're used to");
  • Applying on-premises architecture behavior: for example, using network appliances, concepts like VLANs and their tagging, proxies, or patching with WSUS. For God's sake, Azure has some really neat PaaS alternatives that can do it all for you, including some nice reporting features;
  • Looking for a 'best practice'. In my opinion, there's no such thing as a best practice; there are, however, 'recommended practices' that can be applied to a particular scenario;
  • Doing everything overnight or as a 'big bang'. Trust me, it won't work and you'll end up in a world of hurt. Start small and take small steps;
  • Thinking Microsoft will do everything for you. No, they won't, according to the shared responsibility model for cloud services;
  • Looking at infrastructure only. You may need to look at applications as well. I get chills down my spine when I'm asked to migrate an Exchange environment to Azure (seriously?);
  • Thinking of Azure as just another data center: "Yes, we will do 'lift and shift', pick up our junk, move it to Azure and expect the same results";
  • Looking at cost alone and ignoring the benefits. If you fall into this pitfall, then Azure will definitely be more expensive, especially when you don't reconsider. But that's not Microsoft's fault.

Based on experience from my current and previous jobs, most organizations are in a bit of a hurry, so the most frequently chosen migration option is 'lift and shift'. That allows me to do a SWOT analysis, which gives me the following.

Strengths (may not be limited to):

  • Granular per-system movement of functionality from current environment to Microsoft Azure VMs;
  • Microsoft-only approach, no dependencies;
  • Secure communications during replication;
  • Test scenarios prior to migrating;
  • Fast and easy to plan migration;
  • Very minimal downtime;
  • Easy rollback;
  • Easy decommissioning of existing resources.

Weaknesses (may not be limited to):

  • Skills needed for Microsoft Azure Site Recovery Services (ASR);
  • Machine being migrated may require downtime to verify actual state;
  • On-premises infrastructure requirements for converting VMware virtual machines;
  • Azure virtual machine limitations may apply (i.e. disk size);
  • Administrative effort may be required to verify sizing;
  • Possible data loss during rollback when a large time frame is in place.

Opportunities (may not be limited to):

  • Identify deprecated machines, do not migrate them;
  • Potential short path from Azure IaaS to Azure PaaS;
  • Administrative effort can be re-evaluated after migration.

Threats (may not be limited to):

  • Systems running Windows Server 2008 R2 will reach End of Support on January 14, 2020 (extended support available after migration);
  • Current processes and resource allocation;
  • No urgency to move from costly IaaS deployment to optimized PaaS deployment within Azure;
  • Additional services are still required after migration to manage servers.

So, once all that is out of the way I can start with the assessment.

For the sake of this post, I want to migrate my Image Building environment based on the setup described at https://docs.microsoft.com/en-us/windows/deployment/deploy-windows-mdt/create-a-windows-10-reference-image. Here’s an overview of my setup:

  • A single Hyper-V host;
  • A Domain Controller VM (Gen2) with 1 CPU, 4 GB RAM and a 127 GB OS disk named DC01;
  • A MDT VM (Gen2) with 1 CPU, 4 GB RAM, a 127 GB OS disk and a 512 GB data disk named IB01.

The environment is isolated so no incoming or outgoing network traffic.

Telemetry data is not available.

 

Let’s do the assessment the WRONG way:

OK, so we need two servers. Let's jump to the Azure Pricing Calculator, available at https://azure.microsoft.com/en-us/pricing/calculator/, and get some numbers.

For DC01:

  • VM size: F2S at € 126,82
  • OS Disk: P10 at € 18,28

For IB01:

  • VM size: F2S at € 126,82
  • OS Disk: P10 at € 18,28
  • Data disk: P20 at € 67,92

Adding up the numbers (2 × € 126,82 + 2 × € 18,28 + € 67,92), the estimated cost for this environment would be € 358,12 per month, or € 4297,44 per year. What does this number tell me? Absolutely nothing, except that it is maybe a bit on the expensive side. But this is what happens when no optimizations are considered.

The result is poor adoption of Microsoft Azure, which makes customers unhappy and puts the blame on Microsoft and their partners.

 

Now let’s do the assessment the CORRECT way:

Instead of doing the assessment ourselves, we can use tools. Microsoft allows organizations to run their assessment with Azure Migrate, which does a big part of the work for us. And it's free 🙂

What Azure Migrate can do for you is described at https://docs.microsoft.com/en-us/azure/migrate/migrate-services-overview

In order to run the assessment, certain steps need to be taken; they are described in the overview, so I don't need to write them all down. So what does it look like after running it for a while?

Let's go to the Azure Migrate workspace first.

Let’s open the Assessments pane.

We see a single assessment, let’s open it.

So, we have an overview here. It displays the Server Readiness and the Monthly Cost Estimate, that’s pretty cool. It looks like my assessment generated some warnings so let’s take a look at that first.

Well, that’s nice. The assessment recommends a cheaper VM size and cheaper disk type than estimated in the ‘wrong’ scenario. Let’s have a look at IB01 and get some details.

Here’s the upper half. Despite the warning the VM is eligible for ‘lift and shift’.

And here's the lower half. Here we can see which disks are identified and recommended. Apparently, I don't need Premium SSD storage, so I don't need to bother paying for it.

Talking about cost, here’s the cost estimate for both machines.

Well, here you go, nicely specified. Based on this assessment, the total cost would be € 157,23 per month, or € 1886,76 per year. Wow, less than half of the 'wrong' assessment. I may have room for even more savings when considering the B-series VMs as well.

Setting up Azure Migrate takes one or two days depending on availability. However, it is worth the effort and allows a much better discussion with organizations when they want to consider migrating to the cloud.

Keep in mind that 'lift and shift' may NOT be the best approach (most of the time it isn't), so you may need to consider other options as well. However, this is a good place to start. It also helps keep me out of monasteries, because doing these 'wrong' estimates by hand is a crap job, especially when large environments need to be assessed. It is a very time- and energy-consuming exercise that would require me to retreat to a monastery to do the work with as little distraction as possible. Since most of my customers are enterprise organizations, I can imagine you have a pretty good idea how much of a crap job that would be.

Final thought: As with most things in life, I have no solution to combat bad behavior…

 

 

 

 

 

 

Let’s get lazy: Deploy a Hub and Spoke network topology using Azure CLI

Recently, I’ve been playing around a bit finding a way to deploy a simple Azure Hub and Spoke topology. It is based on an Azure reference architecture which is available here.

To architect a typical 'hybrid' scenario, this reference architecture works really well for most scenarios; no need to reinvent the wheel. Additionally, you can also use the reference architecture with shared services, which is available here. This works great for hosting AD DS, but also for services like Azure AD Connect and/or AD FS. After all, having a federated identity setup already hosted in Azure makes a lot of sense, since the path to synchronization and federation is short 🙂

Let’s have a look at the reference architecture:

To keep it simple, I'll focus on the network setup alone. This means the VMs and NSGs will not be part of the setup in this post.

In order to deploy the environment in one go, we need to deploy the resources in the following sequence:

  1. A resource group (if not existing)
  2. The VNets
  3. The subnets within the VNets
  4. The public IP for the VNet gateway
  5. The VNet gateway itself (this one takes quite some time)
  6. VNet peerings between the Hub and Spokes (we don’t create peerings between the Spokes)

After collecting all Azure CLI commands, the following commands will do the job:

az group create --name HUBSPOKECLI --location westeurope

az network vnet create --resource-group HUBSPOKECLI --name CLIHUB1 --address-prefixes 10.11.0.0/16 --dns-servers 10.11.1.4 10.11.1.5
az network vnet create --resource-group HUBSPOKECLI --name CLISPOKE1 --address-prefixes 10.12.0.0/16 --dns-servers 10.11.1.4 10.11.1.5
az network vnet create --resource-group HUBSPOKECLI --name CLISPOKE2 --address-prefixes 10.13.0.0/16 --dns-servers 10.11.1.4 10.11.1.5

az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLIHUB1 --name Management --address-prefix 10.11.1.0/24
az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLIHUB1 --name GatewaySubnet --address-prefix 10.11.254.0/27

az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLISPOKE1 --name Management --address-prefix 10.12.1.0/24
az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLISPOKE1 --name Workload --address-prefix 10.12.2.0/24

az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLISPOKE2 --name Management --address-prefix 10.13.1.0/24
az network vnet subnet create --resource-group HUBSPOKECLI --vnet-name CLISPOKE2 --name Workload --address-prefix 10.13.2.0/24

az network public-ip create --resource-group HUBSPOKECLI --name HUBSPOKECLI --allocation-method Dynamic --dns-name hubspokecli
az network vnet-gateway create --resource-group HUBSPOKECLI --name HUBSPOKECLI --vnet CLIHUB1 --public-ip-address HUBSPOKECLI --gateway-type Vpn --vpn-type RouteBased --client-protocol SSTP --sku Standard

az network vnet peering create --resource-group HUBSPOKECLI --name HUB1toSPOKE1 --vnet-name CLIHUB1 --remote-vnet CLISPOKE1 --allow-forwarded-traffic --allow-vnet-access --allow-gateway-transit
az network vnet peering create --resource-group HUBSPOKECLI --name SPOKE1toHUB1 --vnet-name CLISPOKE1 --remote-vnet CLIHUB1 --allow-forwarded-traffic --allow-vnet-access --use-remote-gateways
az network vnet peering create --resource-group HUBSPOKECLI --name HUB1toSPOKE2 --vnet-name CLIHUB1 --remote-vnet CLISPOKE2 --allow-forwarded-traffic --allow-vnet-access --allow-gateway-transit
az network vnet peering create --resource-group HUBSPOKECLI --name SPOKE2toHUB1 --vnet-name CLISPOKE2 --remote-vnet CLIHUB1 --allow-forwarded-traffic --allow-vnet-access --use-remote-gateways

For these sample CLI commands, I made the following assumptions:

  • I will place DNS servers (preferably Active Directory integrated ones) in the Management subnet within the hub VNet (CLIHUB1)
  • Names for resources can be changed accordingly
  • The peerings are created after the VNet gateway because I want to use the Gateway subnet to allow traffic to other VNets by enabling the ‘Allow Gateway Transit’/’Use Remote Gateways’ options between the peers

Creating the VNet gateway takes a lot of time: it may take between 30 and 45 minutes to have the VNet gateway deployed. All other resources are deployed within a few seconds.

The fun part of using the Azure CLI is that scripts can be created to run a set of CLI commands. These can be either bash or .cmd scripts; unfortunately, PowerShell is not supported. I use Visual Studio Code with the CLI extension to deploy everything in one go myself.
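As a small sketch of what such a bash script could look like, the resource group name and location can be pulled into variables so the script is easy to reuse (the names below are just examples):

#!/bin/bash
# Example variables; adjust to your own naming convention
RG=HUBSPOKECLI
LOCATION=westeurope

# Create the resource group and the hub VNet using the variables
az group create --name "$RG" --location "$LOCATION"
az network vnet create --resource-group "$RG" --name CLIHUB1 --address-prefixes 10.11.0.0/16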

There you have it. This is all you need to deploy your Hub and Spoke network topology using Azure CLI commands.

Hope this helps!

 

 

 

 

 

 

 

 

Playing around with Azure Data Box Gateway: first impressions

Helping customers move their on-premises environment to Microsoft Azure brings quite some challenges regarding the large amount of data they have created over the years. It has become quite common for customers to ask themselves whether they still want to make heavy investments in hardware capacity (servers, network, storage), meaning they have to put a big bag of money on the table every few years (five years is a common hardware life cycle). This strategy defeats any on-demand capacity requirements and adaptations when requirements change. Especially in typical office environments, some data is accessed less over time but still needs to be stored on local storage, which needs to be managed and maintained.

A couple of years ago, a nice appliance was introduced to remedy this: StorSimple

While they're great devices, they still have their troubles. Not to mention, these devices are reaching end-of-life before too long, as displayed here.

Over time, Microsoft has introduced Azure Data Box to allow large amounts of data to be transferred to Azure. This post describes my first impressions of a service I've been dying to see for quite some time: Azure Data Box Gateway. An overview of this service is available here.

I was particularly curious about Microsoft's claim of how seamless it is to transfer data to Azure using this solution, and how easily that data becomes available. To test this appliance, I decided to deploy an Azure VM that supports nested virtualization. I don't have any hardware at home or a hardware lab at my employer, but my employer is kind enough to provide me an Azure subscription with a bit more spending room than a typical MSDN subscription 🙂 I would not recommend using this setup for production scenarios; the Azure Data Box Gateway is designed to be deployed in an on-premises environment.

My test setup is pretty simple:

  • A Windows Server 2019 Smalldisk Azure VM, I used VM size Standard E8s v3 (8 vcpus, 64 GB memory)
  • I added a 4 TB Premium SSD Data disk to host two VMs
  • I used Thomas Maurer’s post to setup nested virtualization and configure the Hyper-V networking. The rest of his post is not relevant for this test
  • I deployed a Windows Server 2019 VM. Since I wasn’t really sure what to expect, I conveniently called it DC01. I guess the name pretty much gives away its role

After that I followed the tutorial to deploy a single Azure Data Box Gateway VM. Honestly, the tutorial is painfully simple, it’s almost an embarrassment. I guess someone with no IT skills should be able to reproduce the steps as well. After running steps 1 to 4, I got a configuration which looks like the following:

I added a single user for this purpose:

Here’s my configuration of a single share, I created a separate storage account as well:

And I configured Bandwidth Management. I decided to be a bit bold with the settings; for this test I am in Azure anyway, so who cares. In the real world, I would probably use different settings:

Finally, I connected the share to my VM with a simple net use command and started populating the share with some data, starting with some Linux ISO images:
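For reference, the net use command looks roughly like this; the device IP address, share name and share user are placeholder values from my lab, and the password is prompted for:

net use Z: \\10.0.0.10\dbgshare /user:dbguser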

And you see them in the storage account as well 🙂 :

So, yes so far it’s pretty seamless. Seeing the files like that in the storage account as well, it triggered me to do another test: Would the data be available in the share when data is uploaded to the storage account outside the Azure Data Box Gateway?

So I installed Azure Storage Explorer and uploaded data from Azure Storage Explorer to the storage account. After uploading some junk, I could see it not only in the storage account but also in the share on the VM. I was impressed when I witnessed those results, which fully validates Microsoft's statement that it's seamless. I consider it an even more seamless solution than StorSimple at this time.

Finally, looking at the Azure Data Box Gateway limits mentioned here, here are some thoughts on how much this solution can scale:

  • No. of files per device: 100 million (roughly 25 million files for every 2 TB of disk space, up to the 100 million maximum)
  • No. of shares per device: 24
  • No. of shares per Azure storage container: 1
  • Maximum file size written to a share: 500 GB for a 2-TB virtual device; the maximum file size increases with the data disk size in the same ratio, up to a maximum of 5 TB

Knowing that a single storage account can host a maximum of 500 TB (and some regions allow 2 PB), this scaling can go through the roof, making this service exceptionally suitable for large, gradual migrations to Azure storage…

 

 

 

 
 

Thoughts on standard Image SKU vs. 'Smalldisk' Image SKU

For a long time, organizations using Windows VM instances in Microsoft Azure didn't have options regarding the OS disk for the instance. The default value is 127 GB and this hasn't changed. Quite a while ago, Microsoft introduced Windows VM images with a smaller OS disk of only 32 GB, as announced in https://azure.microsoft.com/nl-nl/blog/new-smaller-windows-server-iaas-image/

Yes, I admit this may be old news, but I hadn't given much thought to how to approach it when these Windows VM images became available, until recently…

More and more I'm involved in providing ARM templates for my customers, and my main focus is on Azure IaaS deployments.

Together with managed disks, it has become pretty easy to determine sizing for Azure VM instances, and having both Image SKUs available provides options.

However, while creating these ARM templates I noticed that I prefer the 'Smalldisk' Image SKUs over the standard ones, and the explanation is actually pretty simple.

For this post, I will use the following ARM template as a reference: https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-simple-windows

Looking at the "Properties" section of the Virtual Machine resource, you can see the relevant part of the OS disk configuration:

"osDisk": {
  "createOption": "FromImage"
},

In this configuration, the default size will be used, which should be great in most scenarios. If a different size is required, then the notation may look like this:

"osDisk": {
  "createOption": "FromImage",
  "diskSizeGB": "[variables('OSDiskSizeinGB')]"
},

You can specify the value either as a variable or a parameter to determine the size. In this example I use a variable, and it must have a value supported by managed disks. In my case I used the following value:

"OSDiskSizeinGB": "64"

OK, so nothing new here so far. However, to maintain maximum flexibility, you only need the 'Smalldisk' Image SKU, which has the smallest possible size of 32 GB. From there, the only way is up.

To optimize Azure consumption by paying only for what you use and what you REALLY need, it makes sense for organizations to create some governance and policies to determine sizing for their Azure VM instances, not only for compute but for storage as well. Managed disks provide some guidance for that.

So for me, I focus on using the 'Smalldisk' Image SKU only and enlarge the disk when needed. It's pretty easy to do by just adding one line in your ARM template for that VM, and an additional one for your variable…
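If a deployed VM's OS disk later turns out to be too small after all, enlarging it afterwards is simple as well. A minimal sketch with the Azure CLI, assuming example resource names and a deallocated VM:

az disk update --resource-group MYRG --name myvm_OsDisk --size-gb 64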

 

Here's my set of variables I use to select the 'Smalldisk' Image SKU:

"ImagePublisher": "MicrosoftWindowsServer",
"ImageOffer": "WindowsServer",
"ImageSKU": "2019-Datacenter-smalldisk",
"ImageVersion": "latest",

And here's the relevant part of the image reference:

"imageReference": {
  "publisher": "[variables('ImagePublisher')]",
  "offer": "[variables('ImageOffer')]",
  "sku": "[variables('ImageSKU')]",
  "version": "[variables('ImageVersion')]"
},

Hope this helps!

 

 

 
 

Looking back at 2018

The time to look back at 2018 has come a bit sooner than expected due to an enforced shutdown at my current employer. This is great so I don’t have to bother about it shortly before the year ends…

2018 was a year that introduced a lot of changes for me. Until October 2018 I worked at SCCT BV before making the switch to DXC Technology. Certain events I encountered this year justified this switch for me. The main reason is that I wanted to be a specialist again with a focus on Microsoft Azure (and I get Azure Stack for free). This means I've decided to let go of my previous experience in Enterprise Client Management (ECM), and I will no longer work with System Center Configuration Manager, the rest of System Center, Hyper-V or Microsoft Intune. So don't expect any blog posts on those…

I was becoming too much of a generalist while being expected to be a specialist in all these technologies at the same time. Basically, if you claim to be a specialist at everything, you become a specialist at nothing.

An interesting aspect I learned by making this switch is how an employer reacts to your resignation, especially if you've been working for them for quite some time (4.5 years). Apparently, not all employers handle it well, and I find that SCCT BV didn't react to my resignation well. I find that quite a shame, unnecessary and quite immature. An employer's behavior may have a serious impact on their reputation; after all, it takes years to build a reputation but just 5 minutes to lose it completely. It also taught me to look at how an organizational structure is set up before joining an employer in the future. But I hope I won't have to do that anymore…

Fortunately, I expect to find a lot of good opportunities in my new role at DXC Technology. The best thing I've found so far is that my work/life balance has become much better. It allows me to maintain my health much better than before, and I already see results (I lost quite some weight and need to lose some more). Another great thing is that I can work anywhere I want: DXC Technology facilitates working from home in a proper manner, which helps a lot to improve my performance. And I need to travel sometimes, which is nice too.

So hopefully I'll have some stuff to blog about in 2019. It will most likely be Azure or Azure Stack related.

I wish everyone a prosperous 2019!!!

 

 

 

 

Case study: Availability Sets vs. Availability Zones

One of the biggest challenges customers face is making sure a highly available solution survives a catastrophic failure in the fabric layer of Microsoft Azure: think of servers, storage, network devices, power and cooling. Not having to care about the fabric layer is one of the main reasons organizations consider running their workloads in Azure in the first place.

However, Azure locations are not housed in magic castles that make them invulnerable to catastrophic failures or other natural disasters. Of course, the magnitude of a potential disaster makes organizations think about possible scenarios to safeguard (more or less) the availability of their workloads. After all, Microsoft and their customers have a shared responsibility for keeping the lot running.

Maintaining high availability at a single region provides two options:

  • Availability Sets: allow workloads to be spread over multiple hosts and racks, but still within the same data center;
  • Availability Zones: allow workloads to be spread over multiple locations (data centers), so you automatically don't care on which host the workload runs.

The following picture displays the difference between possible failures and SLA percentage. Obviously, Availability Zones offer higher protection against failures. Region pairs is beyond the scope of this post…

The beauty of both scenarios is that the VNet required to connect an Azure VM is not bound to a single data center, a.k.a. an Availability Zone. It is stretched over the whole region.

So I thought, let’s try this out with a typical workload that requires a high level of availability and can sustain failure pretty well. My choice was to host an SQL fail-over cluster (no Always On Availability Group) with additional resiliency using Storage Spaces Direct. Using all these techniques to maintain uptime, how cool is that?

I used the following guides to deploy a two node Windows Server 2016 cluster:

Actually, I built two SQL S2D clusters. Both clusters were exactly the same (two DS11 VMs, each with 2 P30 disks), except one was configured with an Availability Set and the other with an Availability Zone.

What makes the difference is the requirement for the Azure Load Balancer. You need an Azure Load Balancer with a health probe so that traffic is directed to the active node. Looking at the Azure Load Balancer overview, available at https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview, you can see that you need a Standard SKU when using Availability Zones. When using an Availability Set, a Basic SKU is sufficient. That's actually the only difference when deploying an SQL cluster using S2D. However, since the load balancer is an internal one anyway, I'd recommend using the Standard SKU in both cases; from a pricing perspective, I don't believe it makes much of a difference. If the penalties for downtime are severe, I wouldn't nitpick about this anyway.
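For reference, the internal Standard SKU load balancer itself is a one-liner with the Azure CLI. A minimal sketch, assuming example names for the resource group, VNet and subnet, and an example private IP address for the cluster:

az network lb create --resource-group SQLCLUSTER-RG --name sqlclu-ilb --sku Standard --vnet-name CLUSTERVNET --subnet Data --private-ip-address 10.0.1.10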

 

 

 
 