Thursday, April 30, 2015

VM Checkpoints in Windows Azure Pack

Fresh from the factory, Update Rollup 6 has been released by Microsoft.

This isn’t a blog post that will point out all the bug fixes and the amazing work all of the teams have been doing, but rather one that points you towards a highly requested feature that has finally made its way to the tenant portal in Windows Azure Pack.

With Update Rollup 6, we now support creation and restore of Hyper-V checkpoints on virtual machines provided by the VM Cloud Resource Provider.

Tenants that have deployed virtual machines may now create checkpoints and restore them on their own, without any interaction from the cloud provider.

Let us have a closer look at how this actually works, how to configure it and what additional steps you might want to take as part of this implementation.

Enabling create, view and restore of virtual machine checkpoints at the Hosting Plan level

Once UR6 is installed for WAP and the underlying resource provider, you will notice some changes in the admin portal.

First, navigate to one of your Hosting Plans that contains the VM Cloud Resource Provider.
When you scroll down, you can see that we have some settings related to checkpoints.



Create, view and restore virtual machine checkpoints – will let tenants that have subscriptions based on this hosting plan perform these actions on their virtual machines.

View and restore virtual machine checkpoints – lets tenants view and restore virtual machine checkpoints, but not create them. Checkpoint creation can, for example, be performed by the cloud provider on an agreed schedule.

When you enable either of these options, an update job takes place at the plan level and communicates the changes back to VMM, ensuring that tenants have permission to take these actions in the tenant portal once the job has completed.



If we switch over to the tenant portal, we can see that when we drill into one of the existing VMs (click on the VM → Dashboard), we have some new actions available.



If you want to manage checkpoints for your VM Roles, you can of course do that too, but you then have to drill into each specific instance, as a VM Role can potentially have multiple instances when it supports scale-out.



To create a new checkpoint, simply click on Checkpoint, type a name for the checkpoint and, optionally, a description.



If we switch back to the fabric and VMM, we can see that a VMM job has completed, showing details about the checkpoint process for this specific tenant, including the name and description we typed.



If we perform the same operation again, creating an additional checkpoint on the same virtual machine, we get a message telling us that the existing checkpoint will be deleted.



This is because the current checkpoint integration in WAP keeps only one checkpoint, avoiding the scenario where you could potentially end up with a long chain of differencing disks.

When we create the second checkpoint, we can switch back to VMM to see what’s actually happening:

First, a new checkpoint is created.
Second, the previous checkpoint is deleted.
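
Behind the scenes, this maps to the standard VMM cmdlets. Below is a minimal sketch of the same two-step flow run directly against VMM (the VMM server name, VM name and checkpoint names are placeholders):

# Run on a machine with the VMM console installed.
Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName "vmm01.contoso.com"   # placeholder VMM server

$vm = Get-SCVirtualMachine -Name "TenantVM01"       # placeholder VM name

# Step 1: create the new checkpoint.
New-SCVMCheckpoint -VM $vm -Name "Checkpoint2" -Description "Before update"

# Step 2: delete the previous checkpoint(s), keeping only the latest.
Get-SCVMCheckpoint -VM $vm |
    Where-Object { $_.Name -ne "Checkpoint2" } |
    Remove-SCVMCheckpoint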



When we explore the checkpoint settings on the VM itself afterwards, we see that only the latest checkpoint is listed.



The restore process can also be performed from the same view in the tenant portal.
Once you click the restore button, the tenant portal shows the metadata of the available checkpoint, such as its name, description and creation time. Once you click the confirm button, the restore process starts in VMM.
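
For reference, the equivalent restore operation in VMM is a single cmdlet. A minimal sketch, with a placeholder VM name:

# Fetch the (single) checkpoint for the VM and restore it.
$vm = Get-SCVirtualMachine -Name "TenantVM01"
$checkpoint = Get-SCVMCheckpoint -VM $vm | Select-Object -First 1
Restore-SCVMCheckpoint -VMCheckpoint $checkpoint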





Now what?

If you are familiar with how checkpoints in Hyper-V work, then you know that each static disk will be either .vhd or .vhdx, depending on the format you are using (.vhdx was introduced with Windows Server 2012 and should be the preferred format, but Azure is still using .vhd).
Once you create a checkpoint, a new disk (.avhd or .avhdx) is created – a differencing disk that receives all new write operations, while read operations are served by both the parent disk (.vhd/.vhdx) and the newly created differencing disk.
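
If you want to see this chain for yourself, the Hyper-V module on the host can show the parent relationship between the disks. A small sketch, assuming a checkpointed VM named "TenantVM01" (a placeholder):

# On the Hyper-V host: inspect the disk chain of a checkpointed VM.
Get-VM -Name "TenantVM01" |
    Get-VMHardDiskDrive |
    ForEach-Object {
        # A differencing disk (.avhd/.avhdx) reports its parent in ParentPath.
        Get-VHD -Path $_.Path | Select-Object Path, VhdType, ParentPath
    }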



To summarize, this might not be an ideal situation when it comes to performance, life-cycle management and storage optimization.

Since there is no action in the tenant portal to perform a delete operation, this can be scary in some scenarios.
The fact that the VM will always run on a checkpoint once one has been created means you will always be able to restore to your latest checkpoint from the portal.

In order to solve this challenge, we can leverage the integration of Service Management Automation in Azure Pack.
One of the best things about Azure Pack and the VM Cloud resource provider is that we can extend it and create value-added solutions and services by linking certain actions happening in the tenant portal to automated tasks that are executed by an SMA runbook in the backend.

The following screenshot shows that there’s an event related to creation of VMM Checkpoints performed by the tenant, which can easily be linked to a runbook.



Here’s an example of a runbook that will check for checkpoints created on VMs belonging to a specific VMM Cloud that is used in a Hosting Plan in WAP. If any checkpoints exist, they will be deleted and the VMs will have their disks merged back to a static disk (.vhd/.vhdx).
<#
.SYNOPSIS
Workflow to check for - and delete - old VM checkpoints.
#>

workflow delete-scvmcheckpoint
{
    # Connection asset to access the VMM server.
    $VmmConnection = Get-AutomationConnection -Name 'SCVMM'
    $VmmServerName = $VmmConnection.ComputerName

    # Credential asset used to connect to the VMM server.
    # The asset name 'SCVMM Credential' is an assumption - use the name of
    # the credential asset defined in your own SMA environment.
    $VmmCredential = Get-AutomationPSCredential -Name 'SCVMM Credential'

    inlinescript
    {
        # Import the VMM module.
        Import-Module virtualmachinemanager

        # Connect to the VMM server.
        Get-SCVMMServer -ComputerName $Using:VmmServerName

        # Find all VMs in the relevant cloud that currently have checkpoints.
        $vms = Get-SCVirtualMachine | Where-Object { $_.Cloud -like "*Copenhagen IaaS*" -and $_.VMCheckpoints }

        foreach ($vm in $vms)
        {
            # Delete the checkpoints; VMM merges the differencing disks back into the parent disk.
            Get-SCVMCheckpoint -VM $vm | Remove-SCVMCheckpoint -RunAsynchronously
        }
    } -PSComputerName $VmmServerName -PSCredential $VmmCredential
}

This runbook can then be attached to a schedule that executes it on a daily basis, as an example, ensuring that no VM in the cloud runs on a checkpoint for long.
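
As a sketch of how such a schedule could be wired up with the SMA cmdlets (the endpoint URL, the schedule name and the use of -ScheduleName on Start-SmaRunbook are assumptions; verify the parameters against your SMA version):

# Placeholder SMA web service endpoint.
$endpoint = "https://sma01.contoso.com"

# Create (or update) a daily schedule.
Set-SmaSchedule -Name "DailyCheckpointCleanup" `
    -ScheduleType DailySchedule `
    -StartTime (Get-Date).AddHours(1) `
    -ExpiryTime (Get-Date).AddYears(1) `
    -WebServiceEndpoint $endpoint

# Tie the runbook to the schedule instead of starting it immediately.
Start-SmaRunbook -Name "delete-scvmcheckpoint" `
    -ScheduleName "DailyCheckpointCleanup" `
    -WebServiceEndpoint $endpoint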

Thanks for reading!




Wednesday, April 15, 2015

Why I am investing in DSC

In order to get a good grasp on something new, like a technology, it is always important to find a use case.

Once you have a use case, I can assure you that the learning process is much more interesting, fun – and perhaps easier too.

That is what I did when I went deep into Desired State Configuration. I found a use case.
My use case was to leverage DSC as part of VM Roles in Azure Pack in a way that would be valid for the future too.

Here are some of the reasons for my decision.


PowerShell has been around for some time now, and one of the biggest benefits of learning and using the shell is the amount of work you are able to do, combining modules, components, technologies and much more through the same API. Considering that everything Microsoft builds and does – regardless of cloud – will be accessible and manageable through PowerShell in addition to other options, this is a real no-brainer.

With Windows Management Framework 4.0, we also got PowerShell Desired State Configuration added to the table.
PowerShell Desired State Configuration is Microsoft’s way of implementing an idempotent configuration that ensures the “desired state” will be reached by applying the entire configuration, regardless of the current state.

- But what does this really mean? Aren’t we able to do everything using native PowerShell scripts already?

That is correct. There are no “limits” to using PowerShell natively today.
However, with native PowerShell scripts you are responsible for building all the error handling and logic into your scripts yourself. And as you probably know, that can be both time consuming and challenging.

Desired State Configuration handles this automatically for you, letting you make and deploy incremental changes to your configuration over time without risking putting the system in a bad state.
What if you have configuration drift? Depending on how the Local Configuration Manager – the engine responsible for applying the configuration and following the instructions – is configured, the system can heal itself by enforcing the desired state.
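
The LCM behavior is itself set through a meta-configuration. A minimal sketch, where "ApplyAndAutoCorrect" is the mode that gives the self-healing behavior described above (the node name is a placeholder):

configuration LcmSettings
{
    node kndsc006
    {
        # Meta-configuration for the Local Configuration Manager.
        LocalConfigurationManager
        {
            ConfigurationMode = "ApplyAndAutoCorrect"   # re-apply the configuration on drift
            RefreshMode = "Push"
        }
    }
}

# Compile the meta-MOF and apply it to the node.
LcmSettings
Set-DscLocalConfigurationManager -Path .\LcmSettings -Verbose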

Think of Powershell Desired State Configuration as a contract between you and your nodes (manageable objects).

In order to create and deliver this “contract”, Desired State Configuration is based on CIM – and uses WinRM for communication. CIM uses a language called Managed Object Format – often referred to as “MOF”. PowerShell Desired State Configuration is a way to create and distribute MOF files that can be applied to systems supporting this standard.

The way it’s applied to the node(s) is either through “Push” or “Pull”.

(The difference between Push and Pull is out of scope right now and deserves a dedicated blog post later on. I promise).

To put it short, the Pull mechanism requires some infrastructure in order to work, where the node(s) talk to the Pull server – either through SMB, HTTP or HTTPS.

The Push method is pretty straightforward and something you can start using right out of the box. DSC requires that WinRM listeners are configured so that CIM can push the configuration to the remote systems.
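
A quick way to verify that a node is ready for Push (the node name is a placeholder):

# Check that the WinRM listener on the target node responds.
Test-WSMan -ComputerName kndsc006

# If it doesn't, enable remoting on the node itself (run locally, elevated).
Enable-PSRemoting -Force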

Here’s an example of what a PowerShell DSC configuration looks like:


configuration DNS
{
    node kndsc006
    {
        # Ensure the DNS Server role is installed on the node.
        WindowsFeature DNS
        {
            Name = "DNS"
            Ensure = "Present"
            IncludeAllSubFeature = $true
        }
    }
}

# Compile the configuration into a MOF file under .\DNS.
DNS

# Push the configuration to the node and apply it.
Start-DscConfiguration -Wait -Force -Verbose -Path .\DNS

As you can see, the format here is quite easy to read.
We can easily see that we will install (Ensure = "Present") DNS (Name = "DNS") on the target node (kndsc006). 

Actually, it is so easy to read that PowerShell newbies like me are able to manage :)

Hopefully this gave you some more context about the “why”, but we are not done yet.

In Azure today, we are able to leverage DSC as part of the VM extension, meaning we can create, upload and apply our DSC configurations to Azure IaaS virtual machines. The method of applying the configuration for these VMs is “Push”.
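
As a sketch using the Azure PowerShell cmdlets of the time (the cloud service and VM names are placeholders, and parameter details may vary with your module version):

# Zip and upload the configuration script to Azure storage.
Publish-AzureVMDscConfiguration -ConfigurationPath .\DNS.ps1

# Apply the published configuration to an IaaS VM through the DSC VM extension.
$vm = Get-AzureVM -ServiceName "myCloudService" -Name "kndsc006"
$vm = Set-AzureVMDscExtension -VM $vm -ConfigurationArchive "DNS.ps1.zip" -ConfigurationName "DNS"
$vm | Update-AzureVM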

As you probably know, we don’t have the exact same capabilities on-prem for leveraging DSC as part of Azure Pack. However, we are able to simulate the same experience to some extent, by using the combination of DSC, SMA and VM Roles (http://kristiannese.blogspot.no/2015/03/application-modeling-with-vm-roles-dsc.html).

Moving forward, we know that the consistency across clouds will be close to 1:1 with the introduction of Azure Resource Manager, which will introduce us to a completely new way to interact with our cloud services – regardless of location. It is also worth noting that the Azure Resource Manager itself will be idempotent.

What about your existing DSC scripts?
Exactly, that is the main point here. These configurations will be valid using Azure Resource Manager too :)

So in essence, you invest in DSC now and use it both for Azure Pack (VM Roles + SMA) and Azure (VM extension), and later on you can reuse that investment in the era of Azure Resource Manager.


Hopefully this gave you some inspiration to start learning Desired State Configuration, available in Windows Management Framework 4.0 – and also in 5.0 (which is in preview).
Please note that everything you do in Azure when using the DSC VM extension is based on the 5.0 version.