
Backing up Azure Storage Accounts…

New year, new challenges. And I was confronted with quite a nice one.

One of my customers uses Azure Storage quite intensively. While Azure Storage Accounts provide some protection by means of replication, there’s no real protection against corruption or deletion inside the Storage Account itself: data that has been deleted is deleted on the replicas as well. Unfortunately, there’s no mechanism to replicate data between Storage Accounts comparable to DFS Replication. Governance constraints may also prevent the use of geo-redundant storage, and even geo-redundant storage cannot guarantee that the secondary location still holds the data as it was before it became corrupted or deleted.

So a mechanism had to be developed to protect the data from a potential disaster. Only Blob, File and Table Storage are valid candidates for protection. Microsoft has released a tool that allows content to be copied to a different Storage Account (including from and to local disk): AzCopy.

The relevant information regarding AzCopy is available at https://azure.microsoft.com/en-us/documentation/articles/storage-use-azcopy/

AzCopy is a command-line tool that allows content to be copied, but it is very static by nature. This may be fine for a few single Blob containers or Table URIs, but a more versatile approach is required when hundreds of Blob containers and Table URIs need to be copied, especially when they change frequently.
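For example, copying a single container with AzCopy alone looks roughly like this (account names, container names and keys are placeholders):

AzCopy.exe /Source:https://source1.blob.core.windows.net/container1 /Dest:https://destination1.blob.core.windows.net/backup/container1 /SourceKey:sourcekey1 /DestKey:destinationkey1 /S /Y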

Fortunately, I was able to use PowerShell to ‘feed’ AzCopy the required parameters to get the job done. The result is a script. The script focuses on Blob and Table Storage only; it can easily be extended for File Storage. For the sake of this blog post, Access Keys (the secondary ones) are used, but using SAS tokens instead requires only minor modifications.

Before proceeding, the workflow needs to be defined:

  • Provide the source and destination Storage Accounts; the destination Storage Account will store everything as blobs, meaning that Tables will be exported to .MANIFEST and .JSON files (to allow the table to be recreated);
  • Retrieve the list of Blob containers and ‘convert’ them to URLs;
  • Copy each Blob container to the destination Storage Account; all containers will be stored as virtual directories in a single container to maintain structure;
  • Retrieve the list of Table URIs;
  • Export each URI to a .MANIFEST and .JSON file and store them as blobs in the destination Storage Account;
  • Log all copy actions.

The benefit of AzCopy is that it instructs the Azure platform to perform the copy, so no extensive processing is done by the machine running AzCopy itself. This allows the script to run on an Azure VM. A Standard_A1 VM is sufficient; additional disks are highly recommended when large Tables are used.

Unfortunately, parallel processing with AzCopy.exe is not possible, so everything must be done sequentially. Alternatively, multiple VMs can be spun up, each backing up its own set of Storage Accounts. This is highly recommended because backing up Storage Accounts can be very time consuming, especially with large sets of small files or very large tables. Additionally, some small text files are used to store the items retrieved; they also allow the use of ForEach loops.

To repeat the actions for multiple Storage Accounts, a .csv file is used as input. The .csv file may look like this:

SourceStorageName,SourceStorageKey,DestinationStorageName,DestinationStorageKey
source1,sourcekey1,destination1,destinationkey1
source2,sourcekey2,destination2,destinationkey2

The actual script uses a lot of variables. AzCopy is called using the Start-Process cmdlet, while the parameters for AzCopy.exe are assembled into one long string that is passed to the -ArgumentList parameter.

So here’s the script I used to achieve the goal of backing up Blob containers and Table URIs:

#
# Name: Copy_Storage_Account_AzCopy.ps1
#
# Author: Marc Westerink
#
# Version: 1.0
#
# Purpose: This script copies Blob and Table Storage from a source Storage Account to a Destination Storage Account.
# File F:\Input\Storage_Accounts.csv contains all Source and Destination Storage.
# All Blob Containers and Tables will be retrieved and processed sequentially.
# All content will be copied as blobs to a container named after the Source Storage Account. A virtual directory will be created for each blob container.
#
# Requirements:
# - Storage Accounts with Secondary Access Keys
# - AzCopy needs to be installed
# - Azure PowerShell needs to be installed
# - An additional disk to store Temporary Files (i.e. F:\ drive)
# - A Temp Folder (i.e. F:\Temp) with two Text Files 'temp.txt' and 'output.txt'. The Temp Folder will be used by AzCopy.
# - A Folder to store all log files (i.e. F:\Logs)
#

# First, let's create the required global variables

# Get the date the script is being run
$Date = Get-Date -Format 'dd-MM-yyyy'

# AzCopy Path
$FilePath='C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe'

#Temp Files, let's make sure they're cleared before starting

$File1='F:\Temp\temp.txt'
Clear-Content -Path $File1

$File2='F:\Temp\output.txt'
Clear-Content -Path $File2

#Recursive Parameter: DO NOT use for Table Storage
$Recursive='/S'

#Suppress prompt popups
$Prompt='/Y'

#Temporary Directory for AzCopy
$TempDir='F:\Temp'

#SplitSize parameter for Tables, this will split a large table into separate .JSON files of 1024 MB
$SplitSize='/SplitSize:1024'

#DestType parameter, required for copying tables as blobs to the Destination
$DestType='/DestType:Blob'

#Temporary Directory for AzCopy Journal files
$Journal='/Z:F:\Temp'

#Blob endpoint suffix
$Blob='.blob.core.windows.net/'

#https prefix
$HTTPS='https://'

#Let's import the CSV and process all Storage Accounts
Import-Csv F:\Input\Storage_Accounts.csv | % {

    #Creating the Full Path of the Source Storage Account Blob endpoint
    $SourceStoragePath=$HTTPS+$_.SourceStorageName+$Blob

    #Creating the Full Path of the Destination Storage Container, if it doesn't exist it will be created
    $DestStorageContainer=$HTTPS+$_.DestinationStorageName+$Blob+$_.SourceStorageName+$Date

    #Gather the Source Access Key
    $SourceStorageKey=$_.SourceStorageKey

    #Gather the Destination Access Key
    $DestinationStorageKey=$_.DestinationStorageKey

    #Defining the log file for verbose logging with the Source Storage Account Name and the date
    $Verbose='/V:F:\Logs\'+$_.SourceStorageName+$Date+'.log'

    #Create the Azure Storage Context to gather all Blobs and Tables
    $Context = New-AzureStorageContext -StorageAccountName $_.SourceStorageName -StorageAccountKey $_.SourceStorageKey

    #Copy blob containers first

    #Get all containers and write their names to the temp file
    Get-AzureStorageContainer -Context $Context | Select Name | % {
        Add-Content -Path $File1 -Value $_.Name
    }

    #Convert all Container Names to full paths and write them to the Output File
    Get-Content $File1 | % {
        Add-Content -Path $File2 -Value $SourceStoragePath$_
    }

    #Process all Containers using the Output File as input
    Get-Content $File2 | % {

        #Gather the virtual directory name by stripping the Source Storage path from the container URL
        $VirtualDirectory= $_ -replace $SourceStoragePath,''

        $ArgumentList='/Source:'+$_+' '+'/Dest:'+$DestStorageContainer+'/'+$VirtualDirectory+' '+'/SourceKey:'+$SourceStorageKey+' '+'/DestKey:'+$DestinationStorageKey+' '+$Recursive+' '+$Verbose+' '+$Prompt+' '+$Journal
        Start-Process -FilePath $FilePath -ArgumentList $ArgumentList -Wait
    }

    #Before proceeding, let's clean up all files used
    Clear-Content -Path $File1
    Clear-Content -Path $File2

    #Get All Tables and write their URIs to the Output File
    Get-AzureStorageTable -Context $Context | Select Uri | % {
        Add-Content -Path $File2 -Value $_.Uri
    }

    #Process all Tables using the Output File as input
    Get-Content $File2 | % {

        $ArgumentList='/Source:'+$_+' '+'/Dest:'+$DestStorageContainer+' '+'/SourceKey:'+$SourceStorageKey+' '+'/DestKey:'+$DestinationStorageKey+' '+$SplitSize+' '+$Verbose+' '+$Prompt+' '+$DestType+' '+$Journal
        Start-Process -FilePath $FilePath -ArgumentList $ArgumentList -Wait
    }

    #Cleanup Output File
    Clear-Content -Path $File2
}

To run this script on a schedule, a simple Scheduled Task can be created. The schedule itself depends on the environment’s needs and the time it takes to copy everything. It’s not uncommon for a large Storage Account with countless small blobs and huge tables to take a week or even longer to copy…
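For reference, registering such a task can also be scripted; a minimal sketch, in which the script path, schedule and account are assumptions:

# Run the backup script every Saturday night under the SYSTEM account
$Action = New-ScheduledTaskAction -Execute 'PowerShell.exe' -Argument '-ExecutionPolicy Bypass -File F:\Scripts\Copy_Storage_Account_AzCopy.ps1'
$Trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Saturday -At 01:00
Register-ScheduledTask -TaskName 'Copy Storage Accounts' -Action $Action -Trigger $Trigger -User 'SYSTEM' -RunLevel Highest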

 

 

 

Posted on 02/02/2016 in Azure, PowerShell, Public Cloud

 

Looking forward to 2016…

So, after leaving 2015 behind us and getting started in 2016, it’s time to have a look at what 2016 is going to bring us.

2015 was the year that really got the adoption of cloud technology going, and I expect more and more organizations to follow or to start adopting more of the features cloud technology offers us. A very nice development is that organizations are starting to understand how convenient it is when the ‘gate’ for end users shifts from Active Directory to Azure Active Directory.

Three big releases will most likely take place this year:

  • Azure Stack;
  • Windows Server 2016;
  • System Center 2016.

I strongly believe the release of Windows Server 2016 will dramatically change the way we’re used to working, and I really believe the following two features will enable it:

  • Nano Server;
  • Containers.

Since the release of Windows Server 2016 Technical Preview 3, and even more with Windows Server 2016 Technical Preview 4, we’re able to research and experiment with these two features. Fortunately, I don’t expect Windows Server 2016 RTM to be released in the first half of 2016. This allows me to play around with it and understand how it works, so that I am prepared when it becomes available.

So, Windows Server 2016 is quite a big tip of the iceberg. With everything else that is coming as well, I expect 2016 to be a very busy year. But I expect to have a lot of fun with it as well…

So let’s see what’s going to happen this year, I look forward to it.

 

Looking back at 2015…

So, the year 2015 is almost at its end. While I write this, I am already in the second week of my two-week time off. And boy, I really needed this two-week break.

2015 was an extremely busy year for me, and I can actually cut the year in half.

In the first half, I was still busy participating in a project where I designed and deployed System Center 2012 R2 Configuration Manager. I also built a stand-alone image building environment running MDT 2013. Unfortunately, the project took way longer than expected due to the customer being unable to take ownership and start administering it themselves. Eventually I decided to walk away after the contractual end date of my involvement, despite the fact that the project wasn’t finished yet. The longer it took, the more frustrating the project became for me, so the decision to walk away was ultimately the right one.

That brings me to the second half, in which I saw a dramatic shift in my job: I did only one Configuration Manager design and deployment in the second half of 2015. I started to extend my skill set in Enterprise Client Management a bit more with Microsoft Intune and Microsoft’s public cloud platform: Azure.

I also started to deliver more workshops, master classes and training sessions. This is something I really like to do, and I want to thank those who made it possible for me. It also allowed me to renew my Microsoft Certified Trainer certification.

Fortunately, the frustrations of the first half provided some learning moments, which required me to become a more complete consultant. So my coworker arranged a two-day training session for me called “Professional Recommending” (which may be a poor translation of the Dutch Professioneel Adviseren), provided by Yearth. This is by far the most important training I have received in my career, and it started to pay off pretty quickly in the form of more positive feedback from customers. This training made me a more complete consultant.

I was also happy to do the presentation workshop with Monique Kerssens and Jinxiu Hu from Niqué Consultancy BV at ExpertsLive 2015, and pleased to receive the feedback that my presentation skills have developed greatly. To quote them: “you’re standing like a house” (a Dutch expression for being rock solid).

The icing on the cake came at the end of this year when I was asked to review the DataON CiB-9224 platform. You can read the review in my previous post.

So, I experienced some highs and lows this year. Fortunately, the highs came in the second half.

I look forward to 2016, but that’s for another post…

 

 

Reviewing the DataON Cluster-in-a-box 9224 (CiB-9224 V12) platform

Recently the company I work for became a partner in deploying DataON platform solutions together with the Dutch distributor of DataON. The distributor has the knowledge and experience in distributing hardware, but was looking for a partner to deploy the solutions to meet customers’ needs. I had the honor of reviewing one of DataON’s solutions, provided by the distributor: the CiB-9224 V12.

DNS-9220 Front View

Before I got started I checked the relevant information on DataON’s website which is available at http://dataonstorage.com/cluster-in-a-box/cib-9224-v12-2u-24-bay-12g-sas-cluster-in-a-box.html

Here are a few features that I consider relevant:

  • You have a two-node cluster in a single 2U enclosure;
  • A two-tier storage deployment is available; only JBOD (no hardware RAID) is available to both nodes;
  • The solution can be ‘stacked’ with either another CiB and/or DNS JBOD solution;
  • The components used result in a very simple and easy to use setup, no extensive hardware knowledge is required;
  • DataON delivers OOBE guides to get you started.

Overall DataON delivers a no-nonsense solution. Since I am an advocate of a no-nonsense approach it is something I really like.

After checking it all I conclude that this platform can be used in two ways:

  • Scale Out File Server (SOFS) cluster providing one or more SMB 3.0 shares;
  • A two-node Hyper-V cluster.

Specific scenarios are available at DataON’s website mentioned earlier.

For my review I decided to build a two-node Hyper-V cluster. After preparing a small infrastructure (DC, DNS, DHCP and networking) I was able to get going. I decided to follow the OOBE guide as much as possible. In less than an hour, I had a fully operational two-node Hyper-V cluster. I noticed a few things during deployment:

  • Some steps in the guide are not completely in line with deploying the solution. I was able to create a Storage Space with Data Deduplication enabled, while the guide doesn’t mention Data Deduplication. However, I greatly welcome having Data Deduplication enabled, since it generates significant savings when Virtual Machines are stored on the deduplicated volume (see the sketch after this list);
  • The Storage Space is very fast, deploying Virtual Machines doesn’t take much time at all;
  • I like the built-in Mellanox ConnectX®-3 Pro EN Single 10GbE port used for Cluster Heartbeat and Live Migration. After configuring the cluster to use this NIC only for Live Migration I was very happy with its Live Migration performance. It worked like a charm;
  • I managed the cluster using System Center 2016 Virtual Machine Manager Technical Preview and System Center 2016 Operations Manager Technical Preview. After deploying the required agents I was able to manage the cluster completely by Virtual Machine Manager. Dynamic Optimization and PRO Tips became available. After setting Dynamic Optimization to very aggressive settings I could see Virtual Machines dancing around on both nodes without negatively affecting the Virtual Machines themselves.
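Since Data Deduplication turned out to be so valuable here, a minimal sketch of enabling and checking it with the built-in cmdlets; the drive letter is an assumption:

# Enable deduplication for Hyper-V workloads on the volume, kick off an optimization job and check the savings
Enable-DedupVolume -Volume D: -UsageType HyperV
Start-DedupJob -Volume D: -Type Optimization
Get-DedupStatus -Volume D: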

The next step was trying to stress test the platform. I decided to deploy 150 Virtual Machines using a template. I found a nice PowerShell script that would do the work for me at http://blogs.technet.com/b/virtual-mite/archive/2014/03/04/deploying-multiple-vm-39-s-from-template-in-vmm.aspx. During this deployment I noticed that the limited network resources (I had a 1 Gbit/sec switch available, no fiber) significantly slowed down the deployment, and I was also overcommitting the cluster (memory resources prevented me from running all these Virtual Machines). I had no intention of running all these machines after deploying them, but it gave me some good insights into the platform’s capabilities. To me, the test scenario used is not optimal, and I expect better performance when 10 Gbit/sec SFP connections are used. Nevertheless, the platform successfully deployed the 150 Virtual Machines.

After deploying the Virtual Machines I was able to monitor Data Deduplication (I used the default settings). The deduplication savings meant that basically all Virtual Machines were stored on the fast tier alone. This impressed me the most. It would make this solution extremely powerful for a VDI deployment, especially when stacked with one or more of these babies.

After finishing my testing I can definitely recommend this platform. After finding out the price for this platform, I strongly believe that the DataON solution is serious bang for your buck. It makes the Return on Investment (ROI) very short and easy to manage. And all that in just a 2U enclosure…

All the requirements for the future are also there for when Windows Server 2016 is released. I discussed my findings with DataON as well, and there are additional test scenarios to investigate.

Hopefully I can test it with Nano Server and/or Hyper-V Containers, but that is something for 2016…

 

 

 

Building my first (but completely useless) Nano Server cluster based on Windows Server 2016 TP4…

Well, after building my first Nano Server, which I blogged about in my previous post, I got some inspiration to play around with it a bit more.

For this post, my goal was to make a Scale Out File Server cluster with two Nano Server nodes.

So my first thought was to provision an iSCSI target. I went in a completely different direction by deploying an Ubuntu 15.10 server and configuring it as the iSCSI target. I used the guidelines at https://www.howtoforge.com/using-iscsi-on-ubuntu-10.04-initiator-and-target and https://linhost.info/2012/05/configure-ubuntu-to-serve-as-an-iscsi-target/ to deliver a 150 GB iSCSI target volume.

After creating two Nano Server .vhd files, I noticed that the network cards had no DNS servers specified; the machines were also not registered in DNS, so I wasn’t able to access them remotely using Server Manager. After establishing a remote PowerShell session I used netsh to add a DNS server to the network cards with the following command: netsh interface ip set dnsservers name="Ethernet" static 172.16.0.1 primary

Just to be sure, I restarted the machines to make sure the DNS registration took place. Restarting happens quickly because the OS is very small and only a limited number of services are started. The next step was adding the machines to the TrustedHosts list for WinRM.
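Adding the nodes to the TrustedHosts list and connecting from the management machine comes down to something like this (a minimal sketch; the node names are assumptions based on my earlier post):

# Trust the Nano Server nodes for WinRM and open a remote PowerShell session
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'NA01,NA02' -Force
Enter-PSSession -ComputerName NA01 -Credential NA01\Administrator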

After that I was successfully able to add the machines to Server Manager on my DC. This verified that I can access the machines remotely, next to PowerShell remoting.

 

So let’s try to build a cluster. I used Failover Cluster Manager to do so. Let’s get started.

Let’s do a cluster validation first.

cluster_validation-1

I added the servers and used the default settings for validating the cluster.

cluster_validation-2

Cluster validation is running, time for something to drink or a small toilet break😉

cluster_validation-3

It’s good to see that the cluster validation test passed. The warning on networking is purely due to the fact that only one network card is available, which is not a recommended practice. But hey, we’re in a lab…

So let’s build that cluster.

cluster_validation-4

Let’s give the cluster a name and an IP address and proceed…

cluster_validation-5

So all is set to create that cluster. I unchecked the Add all eligible storage to the cluster checkbox since I hadn’t connected anything yet. Time to build that cluster.

cluster_validation-6

The cluster is ready.

After building the cluster I configured a network share as a file share witness.
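For reference, the validation, cluster creation and witness configuration can also be done with the Failover Clustering cmdlets; a minimal sketch, in which the node names, cluster name, IP address and share path are assumptions:

# Validate the nodes, create the cluster without adding storage, and configure a file share witness
Test-Cluster -Node NA01, NA02
New-Cluster -Name NANOCL01 -Node NA01, NA02 -StaticAddress 172.16.0.10 -NoStorage
Set-ClusterQuorum -Cluster NANOCL01 -FileShareWitness '\\DC01\Witness'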

 

So the next step is adding storage by using iSCSI. And here’s the part where my cluster becomes useless. The current Nano Server packages do not include anything iSCSI Initiator related, so there is no iSCSI Initiator service running, nor can I create an iSCSI-based disk, since the PowerShell cmdlets for the iSCSI Initiator are simply not there. So that’s a dead end here.
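For comparison, on a full Windows Server installation connecting to the Ubuntu target would look roughly like the sketch below (the portal address is an assumption); on the TP4 Nano Server image the service and these cmdlets are missing:

# What connecting the iSCSI target would normally look like on a full installation
Start-Service MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress 172.16.0.5
$Target = Get-IscsiTarget
Connect-IscsiTarget -NodeAddress $Target.NodeAddress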

 

Nevertheless, it was quite a satisfying exercise to build this cluster using Nano Server, and it should provide some inspiration to build a cluster for a different purpose, e.g. Hyper-V. But that’s for a different post…

 

 

 

Posted on 25/11/2015 in Windows Server

 

Building my first Nano Server using Windows Server 2016 TP4…

Recently Microsoft released the bits for Windows Server 2016 Technical Preview 4. One of the features that caught my attention is Nano Server, a ‘headless’ server that needs to be managed remotely. I saw some interesting demos last week at ExpertsLive and I had some time to check it out myself. So I downloaded Technical Preview 4 and got started.

Nevertheless, getting started without a plan doesn’t make sense, so first I need to have a plan:

  • I use my laptop to build a lab environment;
  • The lab environment consists of 1 domain controller running Windows Server 2016 TP4 with a GUI (I need to manage those servers somewhere);
  • The domain controller has some administration tools;
  • I plan to build two Nano Server machines which should be configured as a Scale Out File Server (SOFS).

I used the ‘Getting Started with Nano Server’ guide available at https://technet.microsoft.com/en-us/library/mt126167.aspx

Building your Nano Server images is something that must be done beforehand. I quickly noticed that it’s easier to throw away your .vhd file and rebuild it than to troubleshoot and fiddle with it to get something working. This is in line with some tweets I read from a well-known Technical Fellow at Microsoft, Jeffrey Snover. Having experienced this first hand, it absolutely makes sense to me now.

After studying the guide I noticed that the following details are required before building the .vhd file:

  • Name;
  • Language;
  • IP address information;
  • Packages.

I decided to gather all required details (except the Packages) in variables to create a small script, which allows me to reuse it by changing the variables only. Here’s how it may look:

Set-ExecutionPolicy Bypass -Force

#Import the Nano Server Image Generator module
Import-Module C:\NanoServer\NanoServerImageGenerator.psm1 -Verbose

#Defining Server Specific Parameters
$MediaPath='D:\'
$TargetPath='C:\Users\Public\Documents\Hyper-V\Virtual hard disks\NA01.vhd'
$ComputerName='NA01'
$Language='en-us'

$Ipv4Address='172.16.0.2'
$Ipv4SubnetMask='255.255.0.0'
$Ipv4Gateway='172.16.0.1'

#Create the Image
New-NanoServerImage -MediaPath $MediaPath -BasePath .\Base -TargetPath $TargetPath -ComputerName $ComputerName -InterfaceNameOrIndex Ethernet -Ipv4Address $Ipv4Address -Ipv4SubnetMask $Ipv4SubnetMask -Ipv4Gateway $Ipv4Gateway -Language $Language -Clustering -GuestDrivers -Storage -EnableRemoteManagementPort

Building the .vhd didn’t take long at all; the file itself is roughly 560 MB in size, which makes it easy to rebuild in case a mistake is made. The cmdlet prompts for the password of the local Administrator account.

After creating the .vhd I created a new virtual machine and selected the .vhd file built before. After firing it up I had a working Nano Server, as shown in the screenshot below.
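Creating and starting the virtual machine can also be scripted with the Hyper-V cmdlets; a minimal sketch, assuming a virtual switch named ‘Internal’:

# Create a Generation 1 VM that boots from the Nano Server .vhd and start it
New-VM -Name NA01 -MemoryStartupBytes 1GB -Generation 1 -VHDPath 'C:\Users\Public\Documents\Hyper-V\Virtual hard disks\NA01.vhd' -SwitchName 'Internal'
Start-VM -Name NA01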

NA01-1

So let’s log on to see what the UI looks like.

NA01-2

Yes, pretty basic if you ask me. But hey, it is a ‘headless’ server so we’re not supposed to log on locally.

After that, I followed the instructions for joining the server to the domain to the letter, and that worked flawlessly as well…
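For TP4 this is an offline domain join with djoin.exe; roughly, it comes down to the following sketch, in which the domain name, paths and blob file name are assumptions:

# On the domain controller: provision the computer account and save the blob
djoin.exe /provision /domain mwesterink.lan /machine NA01 /savefile C:\Temp\odjblob

# On the Nano Server, after copying the blob over: request the offline domain join and reboot
djoin.exe /requestodj /loadfile C:\Temp\odjblob /windowspath C:\Windows /localos
shutdown /r /t 5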

So now I can build my SOFS cluster, but that’s for another post.

 

This is definitely something worth playing around with, especially now that Nano Server based on TP4 is also available in the Microsoft Azure virtual machine gallery. But that’s also for another post…

 

 

Posted on 24/11/2015 in Windows Server

 

Live Maps Unity 7.5 with Operations Manager 2012: making dashboard views easier…

Recently Savision announced Live Maps Unity 7.5. Shortly after the announcement I finally had some time to have a look at it. One of my customers asked me to help them build a pristine OpsMgr 2012 R2 environment, and they stated they had already purchased Savision Live Maps as well. In this blog post I share my impressions of Live Maps Unity 7.5 from a technical perspective and beyond.

A commonly asked question regarding 3rd party dashboard tools is: Why do I need something like that?

To give a clear answer, certain aspects of the IT environment need to be considered:

  • OpsMgr itself is a very IT-focused monitoring solution, which puts it at quite some distance from the ‘real world’. Although OpsMgr delivers a very high level of detail about the IT environment, it can become quite challenging to provide information that non-IT people understand. The business requires information about the availability of IT services; they would rather know whether they can still use email than which mailbox store is broken.
  • While OpsMgr has some native capabilities to build dashboards, I consider them quite inferior (even using the Visio Add-in). It takes a lot of administrative effort to build and maintain them, and it just doesn’t work the right way. Because of this challenge alone I have had to give previous customers a negative recommendation on using OpsMgr.

With these considerations in mind, the answer to the question regarding dashboards is a convincing yes.

Savision Live Maps delivers dashboards that the real world can understand and does all the work of creating them for you. This significantly lowers the administrative effort, allowing administrators to focus their daily tasks on managing their environment, not on managing the tools that manage their environment.

So I decided to have a go and asked for a trial license. I set up an environment in an Azure Resource Group, created a storage account and a virtual network, and created the following two machines (both running Windows Server 2012 R2):

  • 1 Domain Controller;
  • 1 Operations Manager 2012 R2 Management Server running a local SQL instance.

I imported the Active Directory and SQL Server Management Packs; importing these requires Windows Core Monitoring, so that one is included as well.

The next step was installing Live Maps Unity 7.5. I used the documentation available at the Savision Training Center, which is available at https://www.savision.com/live-maps-training-center. The documentation is very monkey-proof and makes installing Live Maps Unity ridiculously easy.

The next step is creating the dashboards you need. After some playing around I was able to produce the following view:

service view

NOTE: I created an additional distributed application named mwesterink.lan which contains both servers only. I intentionally left some alerts in place to display the color differences.

 

After playing around a little bit I conclude that Savision Live Maps Unity makes dashboarding significantly easier, especially when Management Packs deliver their own distributed applications.

Something as trivial as Service Level Monitoring is enabled by just a simple check box.

Even for IT pros, the more business-oriented view should be sufficient before drilling down to figure out whether any new issues are occurring.

I would even consider not using any notifications anymore at all.

 

However, a major deciding factor is whether the license costs meet any Return on Investment (ROI) targets. In general, decision makers are only interested in meeting the ROI for projects; any ROI not met is considered a failure. Knowing how much time it takes to have your dashboards created should allow the financial people to calculate how much administering these dashboards costs. I am almost certain that the administrative effort will be reduced dramatically by having Live Maps Unity do all the work for you instead of building it all yourself. I didn’t need any support from Savision to build something like this, so more experienced OpsMgr admins should certainly be able to use it. Savision has engineers available when needed.

My final verdict: I’d definitely recommend using Live Maps Unity to present the IT infrastructure availability in OpsMgr.

 

 
 