
Looking back at 2023 part 3: Embrace stoicism in adopting the public cloud

I’ve been planning to write this post for a while, as I consider it one of the greatest insights not only for trying to live a better life but also for growing as a professional in my career. It took me a while to apply the philosophy of Stoicism to helping organizations on their journey of adopting the public cloud, in my case Microsoft Azure.

Stoicism as a philosophy has many aspects that may guide you in living a virtuous life in accordance with nature. I believe this accordance should not get lost as technology advances over time, as that may disconnect people from it. One aspect of Stoicism is to live a good life through one’s actions, not one’s words.

The main theme of this post is understanding control. Living a Stoic life means concerning yourself only with things and actions within your control, and not with those outside your control. A prime example of something you have no control over is the weather, so complaining about it is a bit pointless.

So, how can that sense of control be channeled correctly when working with the public cloud? It’s pretty simple: a good place to start is the Shared Responsibility Model. See https://learn.microsoft.com/en-us/azure/security/fundamentals/shared-responsibility for Microsoft’s model; I will refer to Microsoft Azure for the sake of this post.

Other public cloud providers like AWS and GCP use similar models, as they are based on the cloud computing definitions published by the National Institute of Standards and Technology (NIST): https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-145.pdf

I use it all the time to determine the level of control you want to keep and how much control will be delegated to the public cloud provider.

As the picture shows, running an on-premises infrastructure means you have full control over that infrastructure. This makes sense: it’s your hardware, your infrastructure, your applications, your data. So what happens when using IaaS, PaaS and/or SaaS services? Based on the service type, responsibility is delegated to Microsoft, as Microsoft will manage some layers of the technology stack.

Knowing that Microsoft takes over responsibility for some layers of the technology stack means delegating control as well. Once you decide to delegate control to Microsoft, you no longer need to bother yourself with those layers of the stack. I’ve had plenty of discussions in the past where some of my peers wanted to understand how Microsoft manages those layers. This is something I don’t understand, as I have no control over it. It may be interesting, but I don’t see the added value in really wanting to know it. Microsoft may provide some glimpses into specific services, but they don’t publish much about how they manage them. A good example is the network layer: I have no control over the network cabling in a Microsoft datacenter. And why should I care? It is one of many of Microsoft’s datacenters, not mine.

This brings me to a challenge regarding cloud adoption: how much control is someone willing or able to delegate? If the answer is none at all, then the public cloud may be a no-go. Is that a bad thing? Not at all…

To summarize: use the Shared Responsibility Model to your advantage to understand how much control you want to delegate, and stop bothering yourself with those layers of the technology stack. Fortunately, there’s no right or wrong approach here, so in no way do I intend to be judgmental.

I started this post with a picture of a marble bust of Marcus Aurelius Antoninus, the last of the ‘Five Good Emperors’. Marcus Aurelius’ writings, known as ‘Meditations’, have survived the ages and are considered a great source for understanding Stoicism.

This is my last post for 2023. I look forward to what 2024 will bring. Hope this helps!!!

 
Posted on 29/12/2023 in Azure, Cloud, Opinion, Public Cloud, Rant

Looking back at 2023 part 2: Embracing Docker Containers

Funny things can happen in a year, especially things you don’t expect. One thing I definitely didn’t expect was embracing Docker Containers, as I was completely unfamiliar with them and couldn’t even name a use case for deploying them. But here at home I finally have, and I see potential use cases for organizations as well. I consider Docker Containers extremely useful for running lightweight applications with very little resources.

One machine with very little resources that I use is an HP t520 Flexible Thin Client, which I equipped with 8 GB of RAM and a 256 GB M.2 SATA SSD. The model is discontinued, but that’s fine for me as I don’t intend to use these devices for their original purpose. See https://support.hp.com/us-en/product/product-specs/hp-t520-flexible-thin-client/6875920 for more information.

I bought a bunch of these a while ago as a cheap alternative to SBCs like the Raspberry Pi, which can still be a challenge to buy. As many of these thin clients are being decommissioned, it should be really easy to find them on the used market. However, I wouldn’t spend too much money on them; consider yourself fleeced if someone asks an unreasonable price for such a device. You may consider 1L PCs as an alternative if your scenario requires more resources (such as CPU, RAM and storage).

During the year, one aspect I looked into was choosing the OS to ‘host’ the Docker Containers. My main requirement was to set it up in such a way that it just works and doesn’t generate too much ‘noise’ from frequent updates. I chose to install Debian 12 (Bookworm), as Debian uses ‘stable’ releases only. While some components of Debian 12 may be older than in other Linux distros, it does what it needs to do and I don’t need bleeding-edge versions of those components. I tried Ubuntu, both Desktop and Server, in the past; it just doesn’t work for me as it creates too much noise. I don’t consider myself a Linux expert, as I shamelessly look everything up and follow guides, not to mention some YouTube ‘binge watching’ sessions to learn something new. I like the ‘quietness’ and the ‘fire and (more or less) forget’ vibe Debian 12 gives me. Installing Debian 12 on an HP t520 Flexible Thin Client was a flawless experience: no issues with hardware support or drivers. Great…

At the time of writing, Debian 12 (Bookworm) is the latest release. I used the steps described at https://linuxiac.com/how-to-install-docker-on-debian-12-bookworm/ to install Docker. At first, I used ‘Docker management’ frontend tools like Portainer to install and manage Docker Containers. Eventually, as I prefer to work with Docker Compose and its .yaml files to build and deploy my Docker Containers, I learned to appreciate using Visual Studio Code to manage them. I found a very useful YouTube video that I used to reproduce the steps, so go watch it if you’d like to know more: https://www.youtube.com/watch?v=huiQd2QojXY
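For reference, the installation boils down to adding Docker’s own apt repository and installing the engine plus the Compose plugin. This is a rough sketch based on Docker’s documentation for Debian 12; the linked guide may differ in the details:

# Add Docker's apt repository for Debian 12 (Bookworm) and install the engine and Compose plugin
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian bookworm stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Optional: allow the current user to run Docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER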

I use Visual Studio Code myself for writing scripts or templates either in ARM or Terraform (note to self: really need look into Azure Bicep). What I like in Visual Studio Code, is that it works great on various platforms. I use it on Windows, Linux and Mac and it works fine for this idiot of an author.

So, at the time of writing I am running the following Containers at home:

– Pi-hole (pihole/pihole:latest); a DNS sinkhole for blocking ads and other potentially nasty stuff (a Compose sketch for this one follows after the list)

– NUT upsd (upshift/nut-upsd); for collecting telemetry data from my UPS, directly connected by USB, which I use to protect my network equipment against voltage fluctuations (pretty common when using solar panels) and potential outages

– Netbootxyz (ghcr.io/netbootxyz/netbootxyz); for installing Linux-based VMs on my home lab, which saves me the hassle of managing ISOs

– Samba Server (gists/samba-server); for having a small SMB share when PXE has issues, but also as a share for installing Windows using the Microsoft Deployment Toolkit

– Corosync QDevice (debian-qdevice:latest); this is the QDevice container I mentioned in my previous post (insert link). I used the instructions at https://raymii.org/s/tutorials/Proxmox_VE_7_Corosync_QDevice_in_Docker.html but made small modifications to make it work with Proxmox VE 8 and Debian 12 (Bookworm)

– Uptime Kuma (louislam/uptime-kuma); a small, lightweight monitoring service that I need to use more intensively

– Home Assistant (ghcr.io/home-assistant/home-assistant:stable); yes, I started getting familiar with smart home automation and I expect to expand on this in the future. The main feature I use it for is collecting telemetry data from my DSMR Smart Meter via a USB cable directly connected to my Docker host.
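As promised, here is a minimal Docker Compose sketch for the Pi-hole container. Treat it as an illustration only: the directory, timezone, password and published ports are placeholders, and the exact environment variables may differ per Pi-hole release (check the image documentation).

# Minimal sketch: write a Compose file for Pi-hole and start it (values are placeholders)
mkdir -p ~/pihole && cd ~/pihole
cat > compose.yaml << 'EOF'
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"            # web admin UI
    environment:
      TZ: "Europe/Amsterdam"
      WEBPASSWORD: "changeme"    # admin password; variable name may differ per Pi-hole version
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped
EOF
docker compose up -d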

Regarding monitoring my DSMR Smart Meter, I used different approaches in the past. I discarded them once I discovered I can integrate the meter directly into Home Assistant. Those tools may still be useful if Home Assistant cannot be directly connected to a DSMR Smart Meter.

More containers may be installed later, as I noticed the thin client is far from fully utilized.

Here’s a screenshot of the Sensor data of my thin client running Docker Containers:

As you can see, still lots of room to run more stuff.

This was a lot of fun. Heck, now that I have a running home lab, I may even go crazy and deploy my own Kubernetes cluster, but that’s for next year…

 
Posted on 29/12/2023 in Uncategorized

Looking back at 2023 part 1: Embracing a home lab

2023 has been an interesting year of gaining new insights beyond what I do professionally on a full-time basis. Yes, working with organizations on their cloud journey, in my case Microsoft Azure, is a lot of fun, but there’s more than just the public cloud. Some scenarios are simply not viable to run on public clouds. While I tried to limit the amount of infrastructure at home, the nerd/geek inside me woke up again, wanting to run something at home. So the interest in having a home lab returned.

Some prerequisites were already in place, as my home network uses some decent equipment (in my humble opinion) from Ubiquiti. At the time of writing, I use the following devices, which have been in place since I moved into this home in late 2020:

– UniFi Dream Machine Pro

– Unifi US-16-150W switch

– 3 Unifi UAP-AC-IW access points

– 1 Unifi UAP-IW-HD access point

So, having such network equipment just to run a NAS, an HTPC and a work-related laptop may be overkill, but it covers my needs quite well. And why not have a home lab and start exploring multiple scenarios before deciding what to build myself?

After reading a lot of websites and watching extensive YouTube sessions (man, there’s soooo much to find on the topic), I decided to go for something compact yet flexible.

The approach that really caught my attention was ‘Project TinyMiniMicro’ from Servethehome: https://www.servethehome.com/introducing-project-tinyminimicro-home-lab-revolution/

As I am not particularly interested in having a huge rack populated with 19-inch rack-mount servers (the UniFi equipment fits in my utility closet), this approach suited me really well. Not much has changed since that article regarding the availability of used/refurbished 1L PCs: plenty of them are available for a good price, and they support lots of use cases besides home labs.

I purchased three HP EliteDesk 705 G4 boxes, two of which are used for my home lab. The third is used as an emulation station for retro gaming running Batocera, but that’s outside the scope of this post.

I purchased some additional memory and SSDs as the initial configuration was quite minimal, but the seller equipped both boxes with a 256 GB NVMe SSD which I use as the boot drive. The data disks I chose may not be the fastest, but they were cheap and fit my needs.

As these boxes don’t have any KVM or lights-out management solution, I need to take them out and put them back in whenever I want to do hardware maintenance or reinstall the OS.

These are desktops so what do you expect?

After testing and installing the boxes, I placed them in my utility closet. A perfect fit…

During the year, I tried out various software solutions to run my home lab environment; with software solutions, I mean hypervisors. For lack of interest, I ruled out Microsoft Hyper-V and VMware vSphere before investigating other solutions. I went the KVM route, using two methods of running the hypervisor:

1. Rocky Linux in conjunction with Cockpit

2. Proxmox Virtual Environment

Initially, I ran both machines as stand-alone. Eventually, I moved to Proxmox Virtual Environment running both machines in a cluster.

As I don’t have any requirements for failover, I chose to have the Virtual Machines and LXC Containers replicated and to manually migrate them to the other node whenever maintenance requires a reboot, e.g. after installing a new kernel.

This raises one issue: two cluster nodes are insufficient for failover and cluster voting, as at least three votes are required, and I don’t have a third Proxmox machine available. Fortunately, some more YouTube sessions provided a solution: a Corosync QDevice for voting purposes. How I deployed that is covered in part 2 of looking back at 2023; a rough sketch of the commands involved is shown below.
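For reference, this is roughly how an external QDevice is added to a two-node Proxmox VE cluster once a corosync-qnetd instance is reachable somewhere on the network. It is a sketch based on the Proxmox documentation; the IP address is a placeholder for wherever the qnetd service runs.

# On both Proxmox VE nodes: install the QDevice client package
apt install corosync-qdevice

# On one node: register the external QDevice (replace the address with the host or container running corosync-qnetd)
pvecm qdevice setup 192.168.1.50

# Verify that the cluster now expects three votes
pvecm status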

Stay tuned!!!

 
Posted on 29/12/2023 in Uncategorized

My thoughts on passing the AZ-700 exam

Recently I took the ‘AZ-700: Designing and Implementing Microsoft Azure Networking Solutions’ exam, which I passed on the first attempt.

Passing this exam earned me the ‘Microsoft Certified: Azure Network Engineer Associate’ certification, as it is the only exam required for it.

In this post I share some thoughts that may help you prepare for the exam. I have been working with Microsoft Azure since 2014, and full-time since 2018, focusing mostly on IaaS-related scenarios.

Despite that background and a solid understanding of Azure networking, I found this exam pretty challenging. Of course, I will not share any actual questions and answers; that would make no sense anyway.

You may receive different questions than the ones I got. Nevertheless, here are a few suggestions that may help you in your preparation:

– Have a good understanding of network basics, e.g. address spaces (CIDR), subnetting and routing; expect to be tested on that knowledge

– Use the exam resources displayed at https://learn.microsoft.com/en-us/credentials/certifications/exams/az-700/, they are there for a reason

– Read related docs at https://learn.microsoft.com, and use them in conjunction with trying out things yourself in your dev/test/lab environment (practice makes perfect)

– Use the ‘Shared Responsibility Model’ (https://learn.microsoft.com/en-us/azure/security/fundamentals/shared-responsibility) to your advantage, as it may help you identify the right service for a particular scenario in your exam questions

– Understand which service is needed to meet requirements

Here’s an example of how to use the ‘Shared Responsibility Model’ to your advantage:

As per Microsoft’s recommendations, and not limited to Azure networking, it is strongly recommended to delegate as many tasks and as much management to Microsoft Azure as possible. Routing traffic is a good example of something to delegate to Microsoft Azure. A well-architected network topology should not require a lot of custom (or user-defined) routing. When you are faced with a lot of custom routing, your design may be seriously flawed. This is especially true if you believe you need to override all the routing created by Microsoft Azure instead of letting Microsoft Azure take care of it for you. That thought process will not help you pass the exam either.

Here’s an example to understand which service is needed to meet requirements:

A web application must be available to your users and you want to provide secure access to the service running it, e.g. an App Service Environment or even a VM. As the application uses HTTPS only, two services may be eligible (exposing the service directly with a public IP is a big no-no):

– Application Gateway (with or without a Web Application Firewall)

– Front Door

The question should state whether access needs to be regional or global: Application Gateway is a regional service, while Front Door is a global one. The docs should help you understand which service is required.

These are my thoughts. Hope it helps and good luck!!!

 


Rationalization attempt for using Azure Private DNS Zones with Active Directory Domain Services

In my previous post I made an attempt to use Azure Private DNS Zones together with Active Directory Domain Services (ADDS).

After trying it out I was quite satisfied with the result, but the execution looked a bit messy to me. This provides an opportunity for some rationalization, and potentially some standardization and optimization, in provisioning this solution. As I ‘cheated’ a little bit, I wanted to make sure those shortcuts are no longer used, so things are done ‘the right way’.

I cleaned up all relevant resources except the VNet and the Azure DNS Private Resolver, so I could start with a clean sheet. I removed the IP address of the Inbound Endpoint and reverted to the default DNS configuration, as shown below:

The next step was creating two Private DNS Zones matching the ones found in an ADDS-integrated DNS zone. As in my previous post, I used domain1.local as an example (giving the zones domain1.local and _msdcs.domain1.local), and I linked them to the VNet with auto-registration disabled:
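A minimal sketch of that step with the Azure CLI; the resource group name matches the variables used further below, while the VNet name is a placeholder:

# Create both Private DNS Zones
az network private-dns zone create -g domain1.local -n domain1.local
az network private-dns zone create -g domain1.local -n _msdcs.domain1.local

# Link both zones to the VNet with auto-registration disabled (MyVNet is a placeholder)
az network private-dns link vnet create -g domain1.local -z domain1.local -n adds-link --virtual-network MyVNet --registration-enabled false
az network private-dns link vnet create -g domain1.local -z _msdcs.domain1.local -n adds-msdcs-link --virtual-network MyVNet --registration-enabled false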

The next thing was deploying a new VM, installing ADDS on it and promoting it to a domain controller. I looked again at the DNS configuration, as the promotion installed a DNS server on the VM as well. This is normal behavior, since no DNS delegation is available in this scenario, and it allows me to collect the records needed.

This time I decided to collect the records a bit more elegantly and manageably, using the two cmdlets displayed below:

These cmdlets result in two .csv files I can easily open and copy into Excel for easier data processing.

OK, now for the fun part…

In my previous post I stated that you can export this as an ARM template so it can be reused. However, since most of these values are fixed, using a template, or a ‘declarative’ approach, may not be the easiest way to do it. So I checked whether I could use an ‘imperative’ approach with the Azure CLI. An overview of Azure CLI commands for managing Azure Private DNS Zones is available at https://learn.microsoft.com/en-us/cli/azure/network/private-dns?view=azure-cli-latest

After analyzing the records, all I need are the CLI commands to create A, SRV and CNAME records. For the sake of convenience, the list below displays each reference:

NOTE: The CNAME record may be a bit tricky, as the link would suggest the record set needs to be created before setting it. Fortunately, the documentation states the record will be created if it doesn’t exist.

These commands work well in a Bash session. Bash supports variables, which makes handling the fixed values pretty easy and keeps the set of commands clean. After going through the collected records, I came up with the following set of commands to create everything required:

#

# domain1.local records

#

MyResourceGroup=domain1.local
Zone=domain1.local
Hostname=dc01.domain1.local
IPAddress=172.16.0.4
ComputerName=dc01

#

# A records

#
az network private-dns record-set a add-record -g $MyResourceGroup -z $Zone -n $ComputerName -a $IPAddress
az network private-dns record-set a add-record -g $MyResourceGroup -z $Zone -n DomainDnsZones -a $IPAddress
az network private-dns record-set a add-record -g $MyResourceGroup -z $Zone -n ForestDnsZones -a $IPAddress

#

# SRV records

#

# gc

az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _gc._tcp -t $Hostname -r 3268 -p 0 -w 100
az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _gc._tcp.Default-First-Site-Name._sites -t $Hostname -r 3268 -p 0 -w 100

# kerberos

az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _kerberos._tcp -t $Hostname -r 88 -p 0 -w 100
az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _kerberos._tcp.Default-First-Site-Name._sites -t $Hostname -r 88 -p 0 -w 100
az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _kerberos._udp -t $Hostname -r 88 -p 0 -w 100

# kpasswd

az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _kpasswd._tcp -t $Hostname -r 464 -p 0 -w 100
az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _kpasswd._udp -t $Hostname -r 464 -p 0 -w 100

# ldap

az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _ldap._tcp -t $Hostname -r 389 -p 0 -w 100
az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _ldap._tcp.Default-First-Site-Name._sites -t $Hostname -r 389 -p 0 -w 100
az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _ldap._tcp.Default-First-Site-Name._sites.DomainDnsZones -t $Hostname -r 389 -p 0 -w 100
az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _ldap._tcp.Default-First-Site-Name._sites.ForestDnsZones -t $Hostname -r 389 -p 0 -w 100
az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _ldap._tcp.DomainDnsZones -t $Hostname -r 389 -p 0 -w 100
az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _ldap._tcp.ForestDnsZones -t $Hostname -r 389 -p 0 -w 100

#

# _msdcs.domain1.local records

#

MyResourceGroup=domain1.local
Zone=_msdcs.domain1.local
Hostname=dc01.domain1.local
IPAddress=172.16.0.4
SiteGUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
DomainGUID=_ldap._tcp.XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX.domains

#

# CNAME records

#

az network private-dns record-set cname set-record -g $MyResourceGroup -z $Zone -n $SiteGUID -c $Hostname

#

# A records

#

az network private-dns record-set a add-record -g $MyResourceGroup -z $Zone -n gc -a $IPAddress

#

# srv records

#

# gc

az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _ldap._tcp.Default-First-Site-Name._sites.gc -t $Hostname -r 3268 -p 0 -w 100
az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _ldap._tcp.gc -t $Hostname -r 3268 -p 0 -w 100

# kerberos

az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _kerberos._tcp.dc -t $Hostname -r 88 -p 0 -w 100
az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _kerberos._tcp.Default-First-Site-Name._sites.dc -t $Hostname -r 88 -p 0 -w 100

# ldap

az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n $DomainGUID -t $Hostname -r 389 -p 0 -w 100
az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _ldap._tcp.dc -t $Hostname -r 389 -p 0 -w 100
az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _ldap._tcp.Default-First-Site-Name._sites.dc -t $Hostname -r 389 -p 0 -w 100
az network private-dns record-set srv add-record -g $MyResourceGroup -z $Zone -n _ldap._tcp.pdc -t $Hostname -r 389 -p 0 -w 100

Running these commands results in those records being created pretty quickly, as can be seen below:

and

I chose to sort the commands by record type and port number to keep things readable.

Afterwards, I created a second VM to see if I could join that machine to the domain. Those steps are the same as shown in the previous post, and the join succeeded.

So, there it is: a more rationalized, standardized and optimized approach to using Azure Private DNS Zones together with ADDS. Keep in mind that when more machines are promoted to domain controllers, additional records need to be created as well. However, that can be done by reusing the set of commands already collected and modifying the variables set in each part as needed.

Hope this helps!

 
Posted on 19/03/2023 in Azure, Cloud, DNS, Public Cloud

 

Running Active Directory Domain Services using an Azure Private DNS Zone and an Azure DNS Private Resolver, does it work?

Azure Private DNS Zones have been around for a little while since becoming generally available. I’ve designed and deployed them a few times now, mostly based on the requirement to access various Azure services (mostly PaaS) using Private Endpoints. The various Private DNS Zones you can create for Private Endpoints are described at https://learn.microsoft.com/en-us/azure/private-link/private-endpoint-dns

This is all great; however, many customers I support in my role as an Azure Architect face challenges with legacy (monolithic) applications that can’t be modernized using Azure services, and many of these customers would like to close their on-premises environments (or a hosted private cloud) and leave those datacenter locations. Some of them are more or less forced to by their datacenter provider. Unfortunately, this still results in a lot of ‘lift and shift’ migrations.

Managing IaaS is something I wouldn’t recommend so quickly anymore. If an opportunity arises to replace services running on VMs (or even physical machines) with a native service, I will try to replace that particular application/role with a native Azure service. Azure Private DNS Zones and the Azure DNS Private Resolver are good candidates for replacing DNS servers. More information on these services is available at https://learn.microsoft.com/en-us/azure/dns/private-dns-overview and https://learn.microsoft.com/en-us/azure/dns/dns-private-resolver-overview

Although at the time of writing these services may be considered expensive, they can add great value, especially when you have to manage lots of DNS zones, as all of those zones can use the same DNS Private Resolver. This is rather subjective, though, as it depends on your use cases and the Azure governance supporting them.

Setting up these Private DNS Zones gives me a strong BIND9 vibe. I remember having many questions about this in the 70-291 ‘Implementing, Managing, and Maintaining a Microsoft Windows Server 2003 Network Infrastructure’ exam (yes, the 2003 version of ‘the Beast’, those were the days), so it would be interesting to see if this still works and is supported.

I wouldn’t be surprised if Microsoft actually uses BIND9 under the hood for this service. It got me thinking: can I deploy and configure an Azure Private DNS Zone in such a way that Active Directory Domain Services (ADDS) can use it? Would I be able to join an ADDS domain using such a configuration?

I am well aware this may not be the approach recommended by Microsoft, as Microsoft recommends AD-integrated DNS zones (basically running DNS on the domain controllers), but that doesn’t hold me back from finding out.

Using this approach presents a few challenges:

  • An ADDS domain generates a few GUIDs that represent the domain
  • Which records do I need to add?
  • You cannot create your own SOA records, and a domain controller cannot register the records needed for the domain to be resolvable by itself

I created a VNet (with a few subnets), an Azure Private DNS Zone (auto-registration disabled) and an Azure DNS Private Resolver as per the documentation. I won’t display those steps here, as it is a matter of following the tutorials, and they may change over time. I configured the VNet to use the inbound endpoint IP address of the Azure DNS Private Resolver as its DNS server. Maybe I’m cheating a little here, oh well…
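For reference, pointing the VNet at the resolver’s inbound endpoint is a single Azure CLI call. This is a sketch: the resource group, VNet name and inbound endpoint IP address are placeholders from my lab setup.

# Use the DNS Private Resolver inbound endpoint as the VNet's DNS server (names and IP are placeholders)
az network vnet update -g MyResourceGroup -n MyVNet --dns-servers 172.16.0.68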

I provisioned an Azure VM Instance running Windows Server 2019 (anything newer than 2016 would do) and I promoted it to a domain controller (DC). For this post, I use the domain name domain1.local. Maybe not the best name, but who cares?

I used the PowerShell cmdlet named Get-DnsServerResourceRecord to collect all records in the locally installed DNS Server on the DC, see screenshot below:

I need all relevant A, SRV and CNAME records. Records at the zone apex (‘@’) and the NS and SOA records do not need to be collected, as they cannot be added to the Azure Private DNS zone anyway. An additional export to .csv files can be used to capture all required records.

It’s a bit of a tedious job, but these records need to be added. Fortunately, I can generate a .json template to have them available for future use; all I need to change is the server name and the two GUIDs. Eventually, it may look like this:

The next step is to determine whether I can join a Windows machine to the domain. I provisioned another Azure VM Instance in the same VNet to see if it works:

OK, domain name populated, now let’s see if we get a prompt:

OK, this looks promising. Let’s use an account that can join a computer to the domain.

Success!

Let’s restart the machine and see what we get.

Looks good to me. Now let’s check if we have a computer account in Active Directory Users and Computers.

Maybe a bit oldschool, but there we go.

So yes, we can use a combination of Azure Private DNS Zones, the Azure DNS Private Resolver and Active Directory Domain Services. Whether this approach is suitable depends on your use case and governance. I may need to reach out to my Microsoft contact to determine whether Microsoft supports this, as I couldn’t find any relevant documentation. But that may involve some laziness and/or time constraints on my side as well…

 
Posted on 09/03/2023 in Uncategorized

 

Running Pi-hole in Azure…

Let’s start with a simple statement: I like Pi-hole!

I use it on my home network to enhance my browsing experience. It initially ran on a Raspberry Pi 4 with 2 GB of RAM; now it runs on two HP t520 Thin Client devices. These boxes are light, low-power and low-profile, but they work flawlessly running Ubuntu Server and Pi-hole. After configuring them as recursive DNS servers using unbound, I am no longer using any forwarders either. How to configure Pi-hole as a recursive DNS server can be found here.

In my job as an Azure Architect I do a lot of research, development and testing of scenarios identified by customers. This involves deploying a lot of hub-and-spoke network topologies, for which it would be convenient to have my own DNS servers as well. Additionally, it would be nice to have a Pi-hole environment available for my mobile phone or laptop when accessing Wi-Fi networks outside my home, without depending on the DNS infrastructure offered there. This is especially true for public Wi-Fi networks, which are hornets’ nests of malicious activity; making things harder for attackers helps, and I believe having my own DNS servers helps a lot.

Fortunately, it is pretty easy to achieve this in the public cloud. I use Azure a lot and this post applies to Azure only, but a similar scenario can be deployed using AWS or GCP.

Having a small isolated network containing at least two DNS servers would be sufficient. Here’s the list of Azure services needed to deliver such an environment:

  • A single Virtual Network (VNet) which uses the internal IP addresses of the DNS servers for name resolution
  • A subnet to host the DNS servers
  • Two Virtual Machines running a Linux distribution supported by Pi-hole; I use Ubuntu Server LTS from the Azure Marketplace with the Standard B1s VM size, though more machines and/or different sizes may be considered
  • One additional Jumpbox VM for additional management if needed (optional)
  • A Bastion Host including a Bastion subnet
  • A Public IP address
  • A public facing Azure Load Balancer that forwards TCP and UDP port 53
  • A Network Security Group to filter traffic

During the basic install of Pi-hole, existing upstream DNS servers (Google DNS, Cloudflare, etc.) are needed, so these are required during the initial deployment. Once Pi-hole is running on each machine, these upstream servers can be removed.

Most of this is pretty straightforward, like deploying the Virtual Machines. Bastion can be used to establish SSH sessions to install Pi-hole; it prevents the need for any direct exposure of the Virtual Machines to the public Internet.
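For completeness, installing Pi-hole inside such an SSH session is a one-liner taken from the Pi-hole documentation (as always, inspect piped scripts before running them):

# Run the official Pi-hole installer on each VM
curl -sSL https://install.pi-hole.net | bash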

To configure the two load-balancing rules needed (one for TCP 53 and one for UDP 53), the recommended settings can be used. The relevant settings are:

  • Session persistence: None
  • Floating IP: Disabled
  • SNAT: Use the recommended setting with Outbound rules

As the settings overview shows, the setup is completely stateless and any DNS server can handle a request, so session persistence is not needed.

The Outbound Rules configuration uses a single rule with the default settings.
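As an illustration, the two load-balancing rules could be created with the Azure CLI roughly like this. The resource group, load balancer and rule names are placeholders, and you still need to reference your frontend IP configuration and backend pool; the exact flag names for those vary between Azure CLI versions, so check ‘az network lb rule create --help’.

# Sketch: load-balancing rules for DNS over TCP and UDP on port 53 (names are placeholders)
az network lb rule create -g MyResourceGroup --lb-name pihole-lb -n dns-tcp --protocol Tcp --frontend-port 53 --backend-port 53
az network lb rule create -g MyResourceGroup --lb-name pihole-lb -n dns-udp --protocol Udp --frontend-port 53 --backend-port 53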

Once finished, you have your own public recursive DNS solution with Pi-hole running in Azure. All you need is the public IP address of the Azure Load Balancer. And the costs are not too bad either.

Hope this helps!

 
Posted on 09/03/2021 in Uncategorized

 

Looking back at 2020…

Well, well, where to start?

I guess I don’t need to address what a crazy year this has been, due to the whole crisis policy makers have thrown at us. Matters such as social experiments, truth seeking and data analysis have been ‘redefined’ dramatically. Time will tell if the data analysis and decision making prove to be correct…

This has greatly impacted the way we live and how we do our jobs, especially how IT services are deployed and facilitated to end users. It’s quite impressive that working from anywhere, basically outside an office building, has been introduced more or less forcefully with very little resistance this time. I guess aggressive fearmongering has done its job pretty well. For me personally, not much has changed. I rarely visited the office, so the few visits have been reduced to zero. Working from anywhere has been limited to working from home only. I hope that lockdowns and travel restrictions are part of the past soon so I can really work from anywhere.

The days when IT organizations had countless discussions because employers insisted on having employees present at the office are not so far behind us, but they already seem long ago and a thing of the past. One of the most common statements I received was: “I need my staff to be here at the office, so that I can see what they’re doing”. I always found such a statement a bit weird. Productivity is based on output, not on presence, in my humble opinion. If I have nothing to do and basically browse the Internet, why do I need to do that at the office when I can do that at home as well?

It was quite impressive to observe that many organizations were not ready for this new reality, which resulted in a VERY busy first half of the year: adopting cloud solutions such as Microsoft Azure, but especially workplace-management solutions such as Windows Virtual Desktop (WVD) and Microsoft Teams. Although I don’t have the numbers, I guess it’s safe to say that Microsoft was one of the biggest ‘winners’ of the COVID-19 situation. Knowing that I faced capacity issues in certain Azure regions for a short time, I can imagine Microsoft themselves couldn’t anticipate this either, but they caught up rather well…

My career development was a bit of a struggle due to this new reality, but also due to many changes in my personal life. Fortunately for me, I didn’t get infected or have any symptoms. Since I joined DXC Technology in 2018, I can’t even remember having reported sick.

2020 was a year of new beginnings, with many changes on a personal level. I expect to pick it all up again in 2021.

What will 2021 bring us? I absolutely don’t know. I guess it depends on how much societies remain paralyzed by this social experiment (with terms like ‘The New Normal’ or ‘The Great Reset’) and what direction policy makers want to take. I strongly believe this will play a great role in how people are going to live, work and use the IT services that are part of their lives.

Will they succeed? Time will tell…

 
Posted on 29/12/2020 in Opinion

 

Fun with Azure Files together with Microsoft Deployment Toolkit

Recently I was involved in a Proof of Concept (PoC) for Windows Virtual Desktop (WVD) for one of my customers. My goal was to use as many Azure Platform as a Service (PaaS) components as possible, resulting in a simple environment using the following services:

  • Azure AD Domain Services (AAD DS)
  • Azure Files
  • Bastion to access a Jumpbox
  • WVD Host Pools

I used a simple Azure reference architecture to deploy the Virtual Network infrastructure, which is available at https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/shared-services, but without the ‘hybrid’ components like VPN/ExpressRoute, and I replaced the AD DS VMs with AAD DS. This is a requirement for using Azure Files to store the FSLogix profile containers. See https://docs.microsoft.com/en-us/azure/virtual-desktop/create-profile-container-adds for more information. Based on the PoC, I must admit it works remarkably well.

This has become possible since Azure Files supports identity-based authentication over Server Message Block (SMB) through Azure Active Directory Domain Services (Azure AD DS). See https://docs.microsoft.com/en-us/azure/storage/files/storage-files-identity-auth-active-directory-domain-service-enable for more information.

I am talking about this setting:

To me, this may become a huge game changer for file services, especially once authentication through AD DS becomes generally available. It will allow a lot of file servers to be replaced by Azure Files. But it also caught my attention for a different scenario.

A while ago I wrote this post about installing Windows 10 over the Internet. I still believe that installing Windows over the Internet should be possible, especially when plenty of bandwidth is available. My idea was to determine whether it’s possible to use the Microsoft Deployment Toolkit (MDT) to deploy an operating system over the Internet with Azure Files.

NOTE: This scenario only works when your ISP allows SMB traffic (TCP port 445). Some ISPs don’t.

To prepare the environment I did the following:

  • Set up Azure AD DS in a small VNet
  • Install Azure Files
  • Deploy a small VM to install MDT and manage the Deployment Share

The first thing that needed to be done was to create a share. I used a quota of 1 TB, which is more than enough, and I didn’t use a Premium share.
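For illustration, creating such a share with the Azure CLI could look like this (a sketch; the storage account and share names are placeholders):

# Create a 1 TB (1024 GiB) standard file share in an existing storage account
az storage share-rm create --resource-group MyResourceGroup --storage-account mdtstorage --name deploymentshare --quota 1024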

I created two identities in Azure AD, used not only for joining the domain but also for accessing the Azure Files share, and I granted them the required permissions. The accounts are also part of the AAD DS Administrators group to keep the scenario simple.

I use one of these accounts to log on to the VM used to create and manage the Deployment Share. The VM is joined to the AAD DS domain and has MDT installed.

Eventually you can create your deployment share using the UNC path of the Azure Files Share and do your typical MDT stuff like adding apps or your Windows 10 installation media. It may look like this:

In the Azure Portal, you see the same directory structure as well:

The trick is to provide access from any location outside AAD DS, so we can reach the Deployment Share from anywhere. We need to specify the user name and password in the Bootstrap.ini file. The credentials are the same as the ones in the connection script the Azure Portal provides (the same thing as the typical ‘net use’ command MDT uses as well):

Once everything is created, you can grab the bootable .iso from the share itself; you can even download it directly from the Azure Portal:

Eventually, all you need to do is boot from the .iso and you can start your deployment.

Here’s a screenshot of a machine running Hyper-V from a different location, choosing a normal deployment:

NOTE: You can choose to capture an image if you want to…

For the rest I didn’t bother doing anything specific from an MDT perspective, just a simple Windows 10 deployment with Office 365. What you’d like to put into MDT is up to you. The end result is that you can deploy a machine from any location over the Internet.

Happy deployments and hope this helps!

 

Why ‘COVID-19 Outbreak Teams’ may need IT people too…

This post may be a bit funny including the title, but they’re just my thoughts. That’s the fun part of having an opinion, I am not introducing facts here. It’s up to you, the reader, what you want to do with it. My view may be a bit simplistic but experience tells me that the best solutions are most often the simplest ones.

To keep the text a bit more readable, I will use the abbreviation of the Dutch ‘COVID-19 outbreak team’, which is called the Outbreak Management Team (OMT). I’m a Dutch citizen, so similar teams may have different names in your country. Each country has its own measures, so they may not always apply to yours…

As we all know, at the time of writing this post the world is struck by a virus that is taking lives, and many people, organizations, governments and their politicians don’t really know how to deal with this. Some exceptions apply, though. Fortunately, I haven’t been struck by this virus yet, and the same goes for the people closest to me. It also allows me to observe the whole situation more rationally. Although an outsider, I am very interested in how these teams handle the situation. Unfortunately, there’s a lack of transparency in their motives and approaches, since most of these teams have closed meetings and their meeting notes are not shared or published. Many other aspects of handling the situation remain obscure, making it difficult to really understand what they’re doing. This invites suspicion towards these teams and is something I would have done differently.

Dealing with threats in IT, either very small, small, big or even massive comparable to a global outbreak is ‘business as usual’ for us IT people. IT infrastructures are constantly ‘suffering’ from either attacks (like malware) or service availability being lost (either by malfunction or manual actions). This is what we do and we don’t know any better.

The first thing IT organizations do is make sure a ‘first line of defense’ is available. The first line of defense allows quick detection and potential mitigation if available. Skilled first-line teams may have room to do quick analysis and deliver quick workarounds if available. A shutdown of the complete service or infrastructure will NEVER be done unless absolutely needed to gather more information about the issue, and I haven’t seen that happen in my career. Complete outrage will be the result if an organization shuts down a service or infrastructure entirely because of an issue. I remember that a couple of years ago there was severe outrage when WhatsApp introduced the ‘blue ticks’; imagine what would have happened if the company behind WhatsApp at that time had turned off the service completely to disable that feature again. The outrage would have been bigger, but I digress…

Once the first line of defense has found a mitigation, a.k.a. a workaround, that workaround will be used until a permanent solution has been found. Let’s translate that to the COVID-19 situation. Here in the Netherlands a general practitioner found a combination of medicines that can be used to mitigate the threat so that loss of life can be reduced. He didn’t just invent it by himself; he used the greatest source of information available to collect experiences from peers around the world: the Internet. Unfortunately, he was prompted to stop treatment, although it was successful, since a government protocol forbade him from prescribing it. Baffling if you ask me, but not completely surprising either.

The OMT and politicians, mostly paralyzed by fear, decided to disable the first line of defense immediately, skip the mitigation process completely, focus on finding a vaccine, and order lockdowns until one has been found, even though we don’t know if one can be found in the first place. Compare it to an IT Service Management (ITSM) process where you skip Incident Management completely and go straight to Problem Management with an extremely high-priority Problem, NO workarounds, and a prayer that something comes out of the Change Management process.

Having an approach like this is potentially very dangerous. Lockdown measures were taken, including a set of instructions. The irony is that the numbers of deaths and hospitalizations started to drop at pretty much the same time. So what is the danger of this?

Well, insufficient information gathering by the first line of defense, combined with seeing certain trends in fatalities, creates the assumption that the measures taken are working. The biggest threat here is that no more ‘root cause analysis’ happens, because the measures appear to be working. But they don’t know if this is actually true. The result is that the wrong solution is used to solve the problem, especially when poor statistics are combined with ‘tunnel vision’. “Yes, since we introduced social distancing using the ‘1.5-meter society’ the numbers are going down” is the common argument. But what if that is not the case? Tunnel vision prevents people from extending their research, which I consider far more dangerous…

As for the Dutch OMT, it has become painfully clear that while a lot of new research providing new insights to investigate is available worldwide, they still stick to their mantras, which have become superseded or outdated because of this tunnel vision. To me, that is a missed opportunity.

If the first line of defense can still collect information about these threats, then other IT services can be used to store and analyze the data collected. This is where IT can help as well; you can compare it with gathering metrics for your monitoring solution. This may work far better than mathematical modelling for understanding what is happening, especially when the model itself is flawed and the same input delivers different results. I guess Sir Isaac Newton would probably never have been able to find out how gravity works if apples either dropped, floated or even levitated when the tree released them. Fortunately for us, he used an ‘evidence-based’ approach when he found out that all apples fell down, allowing him to prove gravity…

So, IT people will not be able to combat the COVID-19 virus itself. They lack the knowledge and skill to do so, and that is a job for virologists and epidemiologists. But I am certain IT people can help provide guidance on using a proper approach. This can only work when all parties (especially governments and politicians) are open, honest and fully transparent in dealing with this pandemic. And it can be done while keeping all gathered data anonymous by taking out patients’ personal data. I believe the number of deaths could have been lower if a proper process had been in place…

And for the governments and politicians who introduced lockdowns I have a single question: is it worth destroying entire economies at any price to save a relatively small number of lives? After all, it is you who put countries in lockdown, not the virus…

 

 

 

 
Posted on 17/05/2020 in Uncategorized

 
 