
Tag Archives: Cloud

Looking back at 2023 part 3: Embrace stoicism in adopting the public cloud

I’ve been planning to write this post for a while, as I consider it one of the greatest insights not only for trying to live a better life but also for growing as a professional in my career. It took me a while to adopt the philosophy of stoicism in conjunction with helping organizations on their journey of adopting the public cloud; in my case, Microsoft Azure.

Stoicism as a philosophy has many aspects that may guide you in living a life of virtue in accordance with nature. I believe this accordance should not be lost to technological advances over time, as they may disconnect people from it. One aspect of the philosophy of stoicism is to live a good life by one’s actions, not one’s words.

The central theme of this post is understanding control. Living a stoic life means you only concern yourself with things and actions within your control, and you don’t bother with those outside your control. A prime example of something you have no control over is the weather, so complaining about it may be a bit pointless.

So, how can your control be channeled correctly when working with the public cloud? It’s pretty simple. A good place to start is the Shared Responsibility Model. See https://learn.microsoft.com/en-us/azure/security/fundamentals/shared-responsibility for Microsoft’s model; I will refer to Microsoft Azure for the sake of this post.

Other public cloud providers like AWS and GCP use similar models, as they are based on the cloud computing definition published by the National Institute of Standards and Technology (NIST): https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-145.pdf

I use it all the time to determine how much control an organization wants to keep and how much will be delegated to the public cloud provider.

As the picture shows, running an on-premises infrastructure means you have full control over that infrastructure. That makes sense: it’s your hardware, your infrastructure, your applications, your data. So what happens when using IaaS, PaaS and/or SaaS services? Depending on the service type, responsibility is delegated to Microsoft, as Microsoft will manage some layers of the technology stack.
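
In short, the model boils down to the following (a simplified rendition; see the link above for the full picture):

– On-premises: you are responsible for everything, from the physical datacenter up to the applications and data.

– IaaS: Microsoft manages the physical datacenter, physical network and hosts; you manage the operating system, network controls, applications, identities and data.

– PaaS: Microsoft additionally manages the operating system; responsibility for network controls, applications and identity infrastructure becomes shared.

– SaaS: Microsoft also manages the application itself; you always remain responsible for your data, devices, accounts and identities.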

Knowing that Microsoft takes over responsibility for some layers of the technology stack means delegating control as well. Deciding to delegate control to Microsoft means you no longer need to bother yourself with those layers of the technology stack. I’ve had plenty of discussions in the past where some of my peers wanted to understand how Microsoft manages those layers. That is something I don’t understand, as I have no control over it. It may be interesting, but I don’t see the added value in really wanting to know. While Microsoft may provide some glimpses into specific services, they don’t publish anything on how they manage those services. A good example is the network layer: I have no control over the network cabling in a Microsoft datacenter. And why should I care? It is one of many of Microsoft’s datacenters, not mine.

This brings me to a challenge regarding cloud adoption: how much control is someone willing or able to delegate? If the answer is none at all, then the public cloud may be a no-go. Is that a bad thing? Not at all…

To summarize: use the Shared Responsibility Model to your advantage to understand how much control you want to delegate, and don’t bother yourself with those layers of the technology stack anymore. Fortunately, there is no right or wrong approach here, so I am in no way trying to be judgmental.

I started this post with a picture of a marble bust of Marcus Aurelius Antoninus, the last of the ‘Five Good Emperors’. Marcus Aurelius’ writings, known as ‘Meditations’, have survived time and are considered a great source for understanding stoicism.

This is my last post for 2023. I look forward to what 2024 will bring. Hope this helps!!!

 

Posted on 29/12/2023 in Azure, Cloud, Opinion, Public Cloud, Rant

 


Looking back at 2023 part 1: Embracing a home lab

2023 has been an interesting year, bringing new insights next to what I do professionally on a full-time basis. Yes, working with organizations on their cloud journey, in my case Microsoft Azure, is a lot of fun, but there is more than just the public cloud. There are scenarios that are not really viable to run on public clouds. While I tried to limit the amount of infrastructure at home, the nerd/geek inside me woke up again, wanting to run something at home. So the interest in having a home lab returned.

Some prerequisites were already in place, as I have a home network using some decent equipment (in my humble opinion) from Ubiquiti. At the time of writing, I use the following devices, which have been in place since I moved into this home in late 2020:

– UniFi Dream Machine Pro

– UniFi US-16-150W switch

– 3 UniFi UAP-AC-IW access points

– 1 UniFi UAP-IW-HD access point

Having such network equipment just to run a NAS, an HTPC and a work-related laptop may be overkill, and it covers my needs quite well. So, why not have a home lab and start exploring multiple scenarios before deciding what to build myself?

After reading a lot of websites and watching extensive YouTube sessions (man, there’s soooo much to find on the topic), I decided to go for something compact yet flexible.

The approach that really caught my attention was ‘Project TinyMiniMicro’ from Servethehome: https://www.servethehome.com/introducing-project-tinyminimicro-home-lab-revolution/

As I am not particularly interested in having a huge rack populated with 19-inch rack-mount servers (the UniFi equipment fits in my utility closet), this approach suited me really well. Not much has changed since that article regarding the availability of used/refurbished 1L PCs: plenty of them are available for a good price, and they offer lots of use cases beyond home labs.

I purchased 3 HP EliteDesk 705 G4 boxes, 2 of which will be used for my home lab. The 3rd is used as an emulation station for retro gaming running Batocera, but that’s outside the scope of this post.

I purchased some additional memory and SSD disks, as the initial configuration was quite minimal, but the seller had equipped both boxes with a 256 GB NVMe SSD which I use as a boot drive. The data disks I chose may not be the fastest ones, but they were cheap and fit my needs.

As these boxes don’t have any KVM or lights-out management solution, I need to take them out and put them back in if I need to do some hardware maintenance or reinstall the OS.

These are desktops so what do you expect?

After testing and installing the boxes, I placed them in my utility closet. A perfect fit…

During the year, I’ve been trying out various software solutions to run on my home lab environment. By software solutions, I mean hypervisors. For lack of any interest, I ruled out Microsoft Hyper-V and VMware vSphere before investigating other solutions. I went the KVM route, using two methods of running the hypervisor:

1. Rocky Linux in conjunction with Cockpit

2. Proxmox Virtual Environment

Initially, I ran both machines as stand-alone hosts. Eventually, I moved to Proxmox Virtual Environment, running both machines in a cluster.
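
For reference, forming such a two-node cluster boils down to a couple of commands on the Proxmox hosts (a rough sketch; the cluster name ‘homelab’ and the node address are placeholders):

pvecm create homelab          # on the first node: create the cluster
pvecm add <ip-of-first-node>  # on the second node: join the existing cluster
pvecm status                  # on either node: verify membership and quorum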

As I don’t have any requirements for failover, I chose to have the virtual machines and LXC containers replicated, and to do a manual migration to the other node if maintenance requires a reboot, e.g. after installing a new kernel.
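
Both of these can be done from the web UI, but as an illustration, the equivalent command line looks roughly like this (a sketch; the VM/container IDs and the node name ‘pve2’ are made up):

pvesr create-local-job 100-0 pve2 --schedule "*/15"   # replicate VM 100 to the other node every 15 minutes
qm migrate 100 pve2 --online                          # manually move VM 100 before planned maintenance
pct migrate 101 pve2 --restart                        # LXC containers are migrated in restart mode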

This raises one issue: two cluster nodes are insufficient for failover and cluster voting, as a reliable quorum requires at least three votes, and I don’t have a 3rd Proxmox machine available. Fortunately, some more YouTube sessions provided a solution: using a Corosync QDevice for voting purposes. How I managed to deploy that is part of part 2 of looking back at 2023.

Stay tuned!!!

 

Posted on 29/12/2023 in Uncategorized

 


My thoughts on passing the AZ-700 exam

Recently I took the ‘AZ-700: Designing and Implementing Microsoft Azure Networking Solutions’ exam, which I passed on the first attempt.

After passing it, I received the ‘Microsoft Certified: Azure Network Engineer Associate’ certification, as this is the only exam needed to earn it.

In this post I share my thoughts, which may help you in your preparation to pass the exam.

Although I have been working with Microsoft Azure since 2014, and full-time since 2018 with a focus mostly on IaaS-related scenarios, I found this exam pretty challenging despite my solid understanding of Azure networking. Of course I will not share any actual questions and answers, as that makes no sense anyway.

You may receive different questions than the ones I got. Nevertheless, here are a few suggestions that may help you in your preparation:

– Have a good understanding of networking basics, e.g. address spaces (CIDR), subnetting and routing; expect to be tested on that knowledge (see the short example after this list)

– Use the exam resources displayed at https://learn.microsoft.com/en-us/credentials/certifications/exams/az-700/, they are there for a reason

– Read related docs at https://learn.microsoft.com, and use them in conjunction with trying out things yourself in your dev/test/lab environment (practice makes perfect)

– Use the ‘Shared Responsibility Model’ (https://learn.microsoft.com/en-us/azure/security/fundamentals/shared-responsibility) to your advantage, as it may help you identify the right service for a particular scenario in your exam questions

– Understand which service is needed to meet requirements
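
To give an idea of the networking basics mentioned above, here is a small worked example (the address range is just an illustration): a subnet of 10.0.0.0/24 contains 256 addresses, but Azure reserves five of them in every subnet (the network address, the default gateway, two addresses for Azure DNS, and the broadcast address), leaving 251 usable addresses. Being able to do this kind of arithmetic quickly pays off during the exam.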

Here’s an example of how to use the ‘Shared Responsibility Model’ to your advantage:

As per Microsoft’s recommendations, and not limited to Azure networking, it is strongly recommended to delegate as many tasks and as much management to Microsoft Azure as possible. Routing traffic is a good example of something to delegate to Microsoft Azure. A well-architected network topology should not require a lot of custom (or user-defined) routing. When you are faced with a lot of custom routing, your design may be seriously flawed. This is especially true if you believe you need to override all the routes created by Microsoft Azure instead of letting Microsoft Azure take care of that for you. That thought process will not help you in passing the exam either.
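
To illustrate the difference: in many designs the only user-defined routing you need is a single route table with a default route pointing at a firewall or network virtual appliance, while everything else is left to the system routes Azure creates for you. A rough sketch using the Azure CLI (the resource group, the resource names and the firewall IP 10.0.1.4 are made up):

az network route-table create --resource-group rg-network --name rt-spoke
az network route-table route create --resource-group rg-network --route-table-name rt-spoke --name default-via-fw --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4
az network vnet subnet update --resource-group rg-network --vnet-name vnet-spoke --name snet-workload --route-table rt-spoke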

Here’s an example of how to determine which service is needed to meet the requirements:

A web application needs to be available to your users, and you want to provide secure access to the service running the web application, e.g. an App Service Environment or even a VM. As the application uses HTTPS only, two services may be eligible for this (exposing the service directly with a public IP is a big no-no):

– Application Gateway (with or without a Web Application Firewall)

– Front Door

The question should state whether access is required regionally or globally; Application Gateway is a regional service, while Front Door is a global one. The docs should help you understand which service is required.

These are my thoughts. Hope it helps and good luck!!!

 
