Microsoft Azure – a cloud-native success story

One of our clients helps several retailers, both local and global, acquire, engage, and retain their customers.

They achieve this by providing strategy, tools, and tactics, delivered to the customer digitally: customers can use web portals and a variety of services, and gain valuable insights into their data.

To bring their services to the next level, and to address the requirements that their customers have now and in the future, they decided to take advantage of using the cloud. By leveraging cloud-native services, they are able to provide their customers with a set of secure services and give real-time insights into data. To support their customers in the most effective way, they decided to host their services on Microsoft Azure. 

The challenge

The challenge this client is facing is similar to the challenges many organizations face right now. There is an urgent need for digital transformation to keep addressing customer needs, stay competitive, and innovate using state-of-the-art technologies. But most of the services offered to customers still run on an on-premises infrastructure that is not ready to support this.

This was also the case for our client. They were providing services that still ran in an on-premises environment, which could not offer innovative technologies or scale to address future needs.

This client reached out to us to help them implement cloud-native services to renew their IT landscape, offer their customers a set of services that are specifically designed for performance, security, and redundancy, and provide real-time insights into data coming from various sources. This data is partially stored in Azure, but also in on-premises databases.

Our approach

Together with the client, we decided to take advantage of the cloud-native services that Azure has to offer, both from a microservices perspective and from a data analytics and insights perspective. The project was divided into two smaller projects, starting with building a full cloud-native microservices environment using only serverless technology. This will be followed by a new project for storing customer data using Azure Data Lake, implementing real-time insights using Azure Event Hubs, and using various services to provide interactive, immersive dashboards and reports, such as Azure Data Share and other tooling. We decided that our cloud-native development offering was most applicable to this project.

With our offering, we are providing our clients with:

  • Domain-driven design (DDD): When implementing a microservices architecture, DDD is a design approach you can benefit from. Where to draw the boundaries is the key task when designing and defining a microservice. DDD patterns help you understand the complexity in the domain.
  • Cloud-native design patterns: To build highly reliable, scalable, secure applications and services, every developer needs to make use of common cloud-native design patterns. We focus fully on implementing Microsoft best practices and patterns.
  • Dev/test optimization: We bring our own development and test environments to the project. For this, we use container technologies, which have all the commonly used tooling and software pre-deployed. Next, we use automated performance and acceptance tests, fully integrated in Azure DevOps.
  • Everything-as-code: We offer out-of-the-box landing zones, which include security and compliance policies and monitoring rules. These monitoring rules are based on our experiences and best practices that we have developed over the years managing cloud environments for our global customers. We are implementing zero-touch deployments using Azure DevOps and CI/CD pipelines for automatically building and releasing applications and services.
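One of the most common of these cloud-native design patterns is the Retry pattern for handling transient faults. A minimal sketch in Python; the flaky service and the delay values are purely illustrative, not part of the client's actual implementation:

```python
import time

def with_retry(operation, max_attempts=4, base_delay=0.01):
    """Retry a transient operation with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # give up after the last attempt
            # back off exponentially: 0.01s, 0.02s, 0.04s, ...
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulate a dependency that fails twice before succeeding.
calls = {"count": 0}

def flaky_service():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retry(flaky_service)
print(result)          # ok
print(calls["count"])  # 3
```

In a real service, you would only retry errors that are actually transient, and cap the total retry time so callers are not kept waiting indefinitely.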

How we implemented it

The first step was to deploy the landing zone, which included an API Management gateway, a VNet, Log Analytics, Application Insights, security policies, and default monitoring and logging rules in the Azure subscription. We deployed it automatically using CI/CD pipelines so that it can easily be deployed across different environments. Next, we started building the first APIs using serverless services, such as Azure Functions, Azure Storage, Azure Service Bus, Azure Key Vault, and more, and implemented cloud-native design patterns to build them. To get access to the data that still resides in the on-premises SAP environment, an ExpressRoute connection was set up. For authentication, we used Azure Active Directory, OAuth 2.0, OpenID Connect, and the out-of-the-box libraries that Microsoft provides, such as MSAL.
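Under the hood, what a library such as MSAL handles for you in the client credentials flow is a single form-encoded token request against the Azure AD token endpoint. A sketch of that request body; the tenant ID, client ID, and secret are placeholders, and the request is only constructed here, never sent:

```python
from urllib.parse import urlencode, parse_qs

tenant_id = "00000000-0000-0000-0000-000000000000"  # placeholder tenant
token_endpoint = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"

# Form-encoded body for the OAuth 2.0 client credentials grant.
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": "my-app-registration-id",   # placeholder app registration
    "client_secret": "my-client-secret",     # placeholder; keep real secrets in Key Vault
    "scope": "https://graph.microsoft.com/.default",
})

parsed = parse_qs(body)
print(parsed["grant_type"][0])  # client_credentials
```

Using MSAL instead of hand-rolling this request also gives you token caching and refresh for free, which is why we stick to the out-of-the-box libraries.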

By using landing zones, cloud-native patterns, and Microsoft best practices, and by securing the solution with Azure Policy and Azure Active Directory, we now have a solid foundation for rapidly building and deploying additional services.

Next steps

At this stage, we have successfully implemented a set of secure microservices for the client, which are automatically deployed across environments, securely connecting to an on-premises SAP environment, and exposed via a single gateway. Next, we will be implementing the second project, where we will form an additional DevOps team that will implement the solution for storing customer data, and provide real-time insights.


This blog provides an overview of a cloud-native project that we are currently implementing for one of our customers. At Capgemini, we have a lot of experience, use cases, and best practices in implementing cloud-native practices and designing and building cloud-native applications and systems for our enterprise customers. If you want more information about our experiences with this, you can contact me on LinkedIn or Twitter.

You can also read my other articles here.

How to learn Azure

Cloud skills are becoming more popular every day! A lot of organizations are embracing the cloud for their applications, infrastructure, machine learning, and IoT solutions. And this will grow significantly in the coming years!

This also means that (Microsoft) IT professionals need to update their skills. In this article, I will give an overview of how you can get started updating your skills and be ready for all the Azure work that is coming in the near future!

Create a free Azure account

The first step is to create a free Azure account. With this account, you can test the different services and deploy your code to Azure during the various trainings. You get 12 months of free access to a set of popular Azure services, up to a limited amount of usage per month.

You can create a free Azure account here: Create your Azure free account today | Microsoft Azure. There is also a module on Microsoft Learn that gives you more information about creating a free Azure account, and how billing and support works in Azure. This module is called: Create an Azure account.

Learn Azure on Microsoft Learn

Microsoft Learn offers free, interactive, hands-on training to help you develop Azure technical skills. You can find a variety of learning paths on Microsoft Learn, such as Azure, Microsoft 365, .NET development, Power Platform, and more. You can also watch Learn TV, or explore the different Azure certifications from there.

Microsoft Learn TV

To start learning Azure, the following learning paths are very interesting:

Azure Fundamentals Learning Paths:


These learning paths will give you a comprehensive introduction to Azure. By completing them, you will also be ready to take the AZ-900: Microsoft Azure Fundamentals certification exam.

Learn Azure using Microsoft Docs

Another great source for learning Azure is Microsoft Docs. I made extensive use of it while writing my Azure books. You can find almost anything you want to know about Azure there.

Azure documentation on Microsoft Docs

Learn Azure using books

There is also a variety of books available to learn Azure. You can have a look at Amazon for the different books that are available.

To highlight a couple of books:


Implementing Microsoft Azure Architect Technologies: AZ-303

The AZ-303: Implementing Microsoft Azure Architect Technologies book has now been published. I wrote this book together with Brett Hargreaves, who updated the original AZ-300 book that was published a year ago.

What you will learn

  • Manage Azure subscriptions and resources
  • Ensure governance and compliance with policies, roles, and blueprints
  • Build, migrate, and protect servers in Azure
  • Configure, monitor, and troubleshoot virtual networks
  • Manage Azure AD and implement multi-factor authentication
  • Configure hybrid integration with Azure AD Connect
  • Find out how you can monitor costs, performance, and security
  • Develop solutions that use Cosmos DB and Azure SQL Database

Table of Contents

  1. Implementing Cloud Infrastructure Monitoring
  2. Creating and Configuring Storage Accounts
  3. Implementing and Managing Virtual Machines
  4. Implementing and Managing Virtual Networking
  5. Creating Connectivity between Virtual Networks
  6. Managing Azure Active Directory (Azure AD)
  7. Implementing Multi-Factor Authentication (MFA)
  8. Implementing and Managing Hybrid Identities
  9. Managing Workloads in Azure
  10. Implementing Load Balancing and Networking Security
  11. Implementing Azure Governance Solutions
  12. Creating Web Apps Using PaaS and Serverless
  13. Designing and Developing Apps for Containers
  14. Implementing Authentication
  15. Developing Solutions that Use Cosmos DB Storage

You can order the book on Amazon using this link: 

Bi-weekly Azure Summary – Part 72

This bi-weekly update is a summary of trending Azure topics on social media, as well as other interesting content available on the internet.

Below, you can see an overview of interesting blogs, articles, videos, and more that have been posted on social media and other channels:

Development / IT Pro

What can a Cloud Center of Excellence do for your organization?

Most companies go through a set of phases in their Microsoft cloud journey. They start by experimenting with the cloud for rapid application development. A single subscription is manually created in the Azure portal, and a set of services is quickly deployed from the portal to serve business and developer needs. It is not uncommon in this phase for the business or the developers to use their own credit card to create this single subscription. The main goal in this phase is to serve business needs quickly, creating small proofs of concept, or avoiding the lengthy and time-consuming deployment strategies of bigger organizations.

In the next phase, the IT department starts taking the first steps into the cloud and creating additional subscriptions mostly targeted to the different departments in the organization. They will introduce centralized deployments and start thinking about security and compliance in the cloud.

In the third phase, the organization embraces the cloud on a larger scale. Senior management has decided to transform IT and shift to a cloud-first approach. Applications and data centers need to be migrated, hybrid environments need to be created, and all new applications need to be cloud native. This is the moment that most organizations realize they need a proper governance model and strategy.

As cloud environments are managed on a large scale, there is a need for a solid architecture around structuring subscriptions, networking, databases, applications, security and compliance regulations, and so on. Successfully managing a cloud platform on a large scale requires ownership in the organization. It also requires a centralized entity to maintain best practices, onboard the cloud customers, and make sure that all services are secure and compliant by default.

When they start implementing these technical aspects on a large scale and embedding them into the organization, people start to realize that this also involves a significant organizational and cultural change. This is where a Cloud Center of Excellence comes in.

What does a Cloud Center of Excellence do?

A Cloud Center of Excellence (CCoE) brings a diverse and knowledgeable group of experts from across the organization together to develop cloud best practices for the rest of the organization to follow. The CCoE has a support function to increase productivity throughout the organization while maintaining a consistent and secure cloud platform. It is based on Microsoft agile practices and a delivery model that provides a programmatic approach to implement, manage, and operate the Microsoft Azure platform for onboarding projects and Azure workloads effectively.

A CCoE model requires collaboration between:

  • Cloud adoption
  • Cloud strategy
  • Cloud governance
  • Cloud platform
  • Cloud automation

When these aspects are addressed, the participants can accelerate innovation and migration while reducing the overall costs of change and increasing business agility. When implemented successfully, a CCoE will also create a significant cultural shift in IT. Without the CCoE model, IT tends to focus on providing control and central responsibility. A successful CCoE model focuses instead on freedom and delegated responsibility. This works best in a technology strategy with a self-service model that allows business units to make their own decisions. The CCoE provides a set of guidelines and established, repeatable controls used by the business.

Key responsibilities of a Cloud Center of Excellence

The primary goal of the CCoE team is to accelerate cloud adoption through cloud native and hybrid solutions. The CCoE has the following objectives:

  • Build a modern IT organization by capturing and implementing business requirements using agile approaches
  • Build reusable deployment packages that fully align with security, compliance, and service management policies
  • Maintain a functional Azure platform in alignment with operational procedures
  • Review and approve the use of cloud-native tools
  • Over time, standardize and automate commonly needed platform components and solutions

The Cloud Center of Excellence team

The CCoE team ideally consists of 3–5 people with a variety of IT backgrounds. This will bring a broad perspective and balanced set of knowledge. It should ideally include people who already have cloud experience and day-to-day roles, such as:

  • IT/Operations/IT financial manager
  • Solution/Infrastructure Architect
  • Application developer
  • Network engineer
  • Database administrator
  • Systems administrator

Excellent way to start your cloud journey

This blog will help organizations that are going through the different phases of their cloud journey and starting to transform their IT department to be ready for innovation, speed, and control. The Cloud Center of Excellence is an ideal model to accelerate your cloud adoption program.

Microsoft Azure Well-Architected Framework

Microsoft recently introduced the Microsoft Azure Well-Architected Framework, which provides customers with a set of Azure best practices to help them build and deliver well-architected solutions on top of the Azure platform.

The framework consists of five pillars of architecture excellence that can be used as guidance to improve the quality of the workloads that run on Azure. These five pillars are: cost optimization, operational excellence, performance efficiency, reliability, and security. They are explained in more detail in the following sections.

Cost optimization

One thing to focus on when architecting cloud solutions is generating incremental value early in the process. To accelerate time to market while avoiding capital-intensive solutions, the principles of Build-Measure-Learn can be applied. This is one of the central principles of Lean Startup, which helps to create customer partnerships by building with customer empathy, measuring impact on customers, and learning with customers.

By using this pay-as-you-go strategy in your architecture, you invest in scaling out after customer success instead of making a large up-front investment in the first version. Keep a balance in your architecture between the cost of first-mover advantage and a "fast follow" strategy. You can use the cost calculators to estimate both the initial and the operational costs. Finally, establish policies, budgets, and controls that set cost limits for your solution.
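The "estimate first, then set limits" advice can be made concrete with a toy cost model. The hourly rates below are invented for illustration and are not real Azure prices; the pricing calculator is the place to get actual numbers:

```python
HOURS_PER_MONTH = 730  # average hours in a month

# Hypothetical pay-as-you-go rates per instance hour (not real Azure prices).
rates = {"app_service": 0.10, "sql_database": 0.25}

def monthly_estimate(resources):
    """Estimate monthly cost from a {service: instance_count} mapping."""
    return sum(rates[svc] * count * HOURS_PER_MONTH
               for svc, count in resources.items())

budget = 500.0  # the cost limit set by policy
estimate = monthly_estimate({"app_service": 2, "sql_database": 1})
print(round(estimate, 2))  # 328.5
print(estimate <= budget)  # True: within the cost limit
```

The same check, wired into a budget alert, is what keeps a pay-as-you-go architecture from quietly outgrowing its cost limits.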

For detailed guidance on cost optimization, you can refer to the following articles:

Operational excellence

Operational excellence involves the operations processes that keep applications running in production. To make deployments reliable and predictable, they should be fully automated to reduce human error. This should be a fast and repeatable process, so it doesn't slow down the release of new features or bug fixes. You also need to be able to quickly roll back or roll forward when a release has problems or bugs.

To accomplish this, monitoring and diagnostics are crucial. You don't have full control over the infrastructure and operating system when using Azure solutions. Monitoring and diagnostics give you insights into the systems and the solutions that run on top of them. Use a common and consistent logging schema that lets you correlate events across different systems, Azure resources, and custom applications.

A successful monitoring and diagnostics process has several distinct phases:

  1. Instrumentation: Log and generate the raw data, from all the different resources and services that you are using, such as application logs, web server logs, VM logs, diagnostics built in the Azure platform, and other sources.
  2. Collection and storage: Collect all the raw data and consolidate it into one place.
  3. Analysis and diagnosis: Analyze the data that is collected to see the overall health of the platform, services, and your applications and to troubleshoot issues.
  4. Visualization and alerts: Visualize the data that is analyzed to spot trends or set up alerting to alert the operation teams.
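The consistent logging schema mentioned above can be as simple as every service emitting JSON records with the same fields. A minimal sketch; the field names are illustrative, not a Microsoft-defined schema:

```python
import json
from datetime import datetime, timezone

def log_event(service, level, message, correlation_id):
    """Emit one log record in a shared JSON schema."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "service": service,
        "level": level,
        "message": message,
        "correlationId": correlation_id,  # ties events across services together
    }
    return json.dumps(record)

line = log_event("orders-api", "ERROR", "payment timed out", "req-42")
event = json.loads(line)
print(event["correlationId"])  # req-42
```

Because every service tags its records with the same correlation ID field, the "analysis and diagnosis" phase can stitch together one request's journey across systems.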

To get more information and further guidance about operational excellence, you can refer to the following articles:

Performance efficiency

With performance efficiency, you make sure that your workload can scale to meet the demands placed on it by users in an effective manner. You can achieve this by using PaaS offerings that scale automatically or by implementing scaling effectively in your own solutions and applications.

Applications can scale in two different ways: horizontally (scaling out), where new instances of a resource are added, such as extra VMs or database instances; and vertically (scaling up), where you increase the capacity of a single resource, for example by using a larger VM size.

Horizontal scale needs to be architected into the system. You can scale out by placing VMs behind a load balancer, but the applications that run on these VMs also need to be able to scale. This can be accomplished by designing stateless applications or by storing state and data externally. Simply adding more instances does not guarantee that your application will scale; scaling can also introduce new bottlenecks.
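The "store state externally" idea can be sketched in a few lines: any instance can serve any request because session state lives in a shared store. Here a plain dict stands in for an external cache such as Azure Cache for Redis, and the instance and session names are made up:

```python
# A plain dict stands in for an external state store (e.g. a Redis cache).
state_store = {}

def handle_request(instance_id, session_id):
    """A stateless handler: all session data lives in the external store."""
    session = state_store.setdefault(session_id, {"views": 0})
    session["views"] += 1
    return f"instance {instance_id} served view {session['views']}"

# The load balancer may route the same session to different instances.
print(handle_request("vm-1", "session-a"))  # instance vm-1 served view 1
print(handle_request("vm-2", "session-a"))  # instance vm-2 served view 2
```

Because no instance holds state of its own, instances can be added or removed freely, which is exactly what horizontal scaling requires.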

Therefore, you should always conduct performance and load testing to find these potential bottlenecks. You can use the following articles for this:


Reliability

Reliable workloads are both resilient and available. Resilient applications are able to return to a fully functioning state after a failure occurs. Available applications can be accessed by users whenever they need them.

In cloud computing, a different mindset is needed than in traditional application development. Cloud applications are built as distributed systems, which means they are often more complex. The costs of cloud environments are kept low through the use of commodity hardware, so occasional hardware failures must be expected. At the same time, users expect systems to be available 24/7 without ever going offline.

This means that cloud applications must be architected differently. They need to be designed to expect occasional failures and to recover from them quickly. When designing your applications to be resilient, you first must understand the availability requirements: how much downtime is acceptable for this application, how much that downtime will cost your business, and how much should be invested in making the application highly available.
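Availability requirements translate directly into a downtime budget, and dependencies multiply together. A quick calculation, using example availability targets:

```python
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def downtime_budget(availability):
    """Allowed downtime per month (in minutes) for a given availability target."""
    return (1 - availability) * MINUTES_PER_MONTH

# A 99.9% target allows about 43 minutes of downtime per month.
print(round(downtime_budget(0.999), 1))  # 43.2

# Two services in series: overall availability is the product of the two.
composite = 0.9995 * 0.999
print(round(composite * 100, 2))  # 99.85
```

Note that the composite availability is lower than either service alone, which is why every dependency in the critical path must be counted when you decide how much to invest in high availability.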

In the following articles, you will find more information about how you can design and build reliable workloads and applications in Azure:


Security

Security should be embedded throughout the entire lifecycle of an application, from the design phase all the way up to the deployment and operations phases. Protection against a variety of threats, such as DDoS attacks, is already provided by the Azure platform, but you are still responsible for building security into your application and into your DevOps processes.

Security areas that need to be considered for application development are:

  • Identity management: To authenticate and authorize users, Azure Active Directory should be considered. Azure AD is a fully managed identity and access management service that integrates with Azure services, Office 365, Dynamics CRM Online, on-premises Active Directory in a hybrid deployment, and many third-party SaaS applications. For consumer-facing applications, Azure AD offers Azure AD Business to Consumer (Azure AD B2C), which lets users authenticate with their existing social accounts, such as Facebook, Google, LinkedIn, and more, as well as create new accounts that are managed by Azure AD.
  • Application security: Best practices for applications, such as SSL everywhere, protecting against CSRF and XSS attacks, preventing SQL injection attacks, and so on, still apply to the cloud. You should also store your application keys and secrets in Azure Key Vault.
  • Protecting the infrastructure: Control access to all the Azure resources that you deploy. Every resource has a trust relationship with the Azure AD tenant. To grant the users in your organization the correct permissions to the Azure resources that are deployed, you can use role-based access control (RBAC). These permissions can be added to different scopes, to subscriptions, resource groups, or single resources.
  • Data encryption: When you set up high availability in Azure, make sure that you store the data in the correct geopolitical zone. Azure geo-replication uses the concept of paired regions, which stores the replicated data in the same geopolitical region. To store cryptographic keys and secrets, you can use Azure Key Vault. You can also use Key Vault to store keys that are protected by hardware security modules.
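The RBAC scope inheritance mentioned above can be sketched as a path-prefix check: a role assigned at a resource group applies to every resource beneath it. This is a simplified model of the real evaluation, and the subscription, group, and principal names are made up:

```python
def has_access(assignments, principal, resource_id):
    """RBAC sketch: an assignment grants access to its scope and everything below it."""
    return any(principal == p and resource_id.startswith(scope)
               for p, scope in assignments)

rg_scope = "/subscriptions/sub-1/resourceGroups/rg-app"
assignments = [("alice", rg_scope)]  # e.g. Contributor on the resource group

vm_id = rg_scope + "/providers/Microsoft.Compute/virtualMachines/vm-1"
print(has_access(assignments, "alice", vm_id))  # True: inherited from the group
print(has_access(assignments, "bob", vm_id))    # False: no assignment at any scope
```

Assigning at the narrowest scope that still works (a single resource group rather than the whole subscription) keeps the blast radius of a compromised account small.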

For more information about this, you can refer to the following articles:

Wrap up

The Azure Well-Architected Framework provides comprehensive architecture principles and guidelines to build cost effective, secure, reliable, and manageable solutions in Azure. If you want to get started with the Azure Well-Architected Framework:

Thinking cloud native

These days, applications have become very complex and users are demanding more and more of these applications.

They expect innovative features, rapid responsiveness, and zero downtime. Problems that arise when building software, such as performance issues, recurring errors, and the inability to move fast, are no longer acceptable to users. If your application does not meet users' requirements, they simply move on to a competitor. This means that applications need to address the need for speed and agility. And the solution to this is a cloud-native architecture and technologies.

Cloud native is all about changing the way you think about building and designing critical business systems. Cloud-native systems are specifically designed to embrace rapid change, large scale, and resilience. They run in modern, dynamic environments, such as public, private, and hybrid clouds. Cloud-native applications are mostly built using one or more of these technologies: containers, service meshes, microservices, and declarative APIs, running on immutable infrastructure.

Some companies that have implemented cloud native and achieved speed, agility, and scalability are Netflix, Uber, and WeChat. They have thousands of independent microservices running in production, and they deploy hundreds or even thousands of times a day. This architectural style enables them to respond quickly to market demand and conditions. Using a cloud-native approach, they can instantaneously update small areas of a live, complex application and individually scale those areas as needed.

The speed and agility of cloud native come from several factors. Cloud infrastructure is key, but five additional pillars also provide the foundation for building cloud-native applications:

Modern Design

A widely accepted methodology for constructing cloud-based applications is the twelve-factor app. It describes a set of principles and practices that developers follow to build applications that are optimized for modern cloud environments. There is a big focus on portability across environments and declarative automation.

These principles and practices are considered a solid foundation for building cloud-native apps. Systems built upon these principles can deploy and scale rapidly and add features to react quickly to market changes.
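Factor III of the twelve-factor app, "store config in the environment," looks like this in practice. The variable name and connection string are illustrative; in production the platform (for example, an App Service configuration) injects the value:

```python
import os

def get_required_config(name):
    """Read a config value from the environment and fail fast if it is missing."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"missing required config: {name}")
    return value

# The same build artifact runs everywhere; only the environment differs.
os.environ["ORDERS_DB_URL"] = "postgres://db.example.internal/orders"  # normally set by the platform
print(get_required_config("ORDERS_DB_URL"))
```

Failing fast at startup when a value is missing is deliberate: a misconfigured instance should refuse to serve traffic rather than fail later mid-request.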


Microservices

Cloud-native systems and applications embrace microservices, which is a popular style for constructing modern applications. The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, such as REST, gRPC, HTTP(S), or WebSockets.

Microservices can scale independently. Instead of scaling the entire application as a single unit, you scale out only those services that require more processing power or network bandwidth. Each microservice also has an autonomous lifecycle and can evolve independently and deploy frequently. You don’t have to wait for a quarterly release to deploy a new feature or update, but you can update small areas of a complex application with less risk of disrupting the entire system.


Containers

Containers are a great enabler of cloud-native systems and applications. Microservice containerization is also the first step in the Cloud Native Trail Map, released by the Cloud Native Computing Foundation. This map offers guidance for enterprises that are beginning their cloud-native journey. The technique is very straightforward: you package the code, its dependencies, and the runtime into a binary called a container image. Those images are then stored in a container registry, which acts as a repository or library for the images. Registries can be private or public and can be hosted in your own datacenter or as a public cloud service. When needed, you transform the image into a running container instance. These instances can run in the cloud or in your private data center on servers that have a container runtime engine installed.

Containers provide portability and guarantee consistency across environments. By packaging everything into a single container image, you isolate the microservice and its dependencies from the underlying infrastructure. This also eliminates the expense of pre-configuring each environment with frameworks, software libraries, and runtime engines. And by sharing the underlying operating system and host resources, containers have a much smaller footprint than a full virtual machine. This increases the number of microservices that a given host can run at one time.

Backing services

Cloud-native applications and services depend upon several different backing services, such as data stores, monitoring, caching and logging services, message brokers, and identity services. These backing services support the stateless principle coming from the twelve-factor app. You can consume those services from a cloud provider. You could also host your own backing services, but then you would be responsible for licensing, provisioning, and managing those resources.

Cloud-native services typically use backing services from cloud providers. This saves time and reduces the costs and operational risk of hosting your own services. Backing services are treated as attached resources and are dynamically bound to a microservice. The information required to access these services, such as URLs and credentials, is then stored in an external configuration store.


Automation

The previous pillars focus specifically on achieving speed and agility. But that is not the complete story: the cloud environments also need to be provisioned so that you can deploy and run cloud-native applications and systems. How do you rapidly deploy your apps and features? A widely accepted practice for this is Infrastructure as Code (IaC).

Using IaC, you can automate platform provisioning and application deployment. DevOps teams that implement IaC can deliver stable environments rapidly and at scale. By adding testing and versioning to the DevOps practices, your infrastructure and deployments are automated, consistent, and repeatable.

You can use tools such as Azure Resource Manager, Terraform, and Azure CLI to create scripts to deploy the cloud infrastructure. This script is versioned and checked into source control as an artifact of the whole project. The script is then automatically invoked in the continuous integration and continuous delivery (CI/CD) pipelines to provision a consistent and repeatable infrastructure across system environments, such as QA, staging, and production. A service that can handle this process from the beginning to the end is Azure Pipelines, which is part of Azure DevOps.
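The declarative, versioned nature of IaC can be illustrated by emitting an ARM-template-like document from code. This is a simplified shape for illustration, not a complete deployable template, and the storage account name is hypothetical:

```python
import json

# A simplified, ARM-template-like document (not a complete deployable template).
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2021-04-01",
            "name": "stcontosodev",   # hypothetical account name
            "location": "westeurope",
            "sku": {"name": "Standard_LRS"},
            "kind": "StorageV2",
        }
    ],
}

# The rendered JSON is what gets versioned in source control and fed to the pipeline.
rendered = json.dumps(template, indent=2)
parsed = json.loads(rendered)
print(parsed["resources"][0]["type"])  # Microsoft.Storage/storageAccounts
```

Because the template describes the desired end state rather than a sequence of commands, re-running the same deployment against QA, staging, and production yields a consistent, repeatable result.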


This blog introduced the five different pillars that provide the foundation for building cloud native applications. At Capgemini, we have a lot of experience, use cases, and best practices in implementing cloud-native practices and designing and building cloud-native applications and systems for our enterprise customers. If you want more information about our experiences with this, you can contact me on LinkedIn or Twitter.

Bi-weekly Azure Summary – Part 69

This bi-weekly update is a summary of trending Azure topics on social media, as well as other interesting content available on the internet.

Below, you can see an overview of interesting blogs, articles, videos, and more that have been posted on social media and other channels:

Development / IT Pro

Azure Arc for Servers: Applying policies

In the previous post, Getting started with Azure Arc for Servers, we introduced Azure Arc and Azure Arc for Servers and connected an on-premises machine to Azure Arc. In this post, we are going to apply a policy to that on-premises machine from the Azure portal using Azure Arc. First, let's start with a little background information about Azure Policy.


Azure Policy

By using the Azure Policy service, you can create policies that enforce different rules and effects over your resources. By applying policies to your resources, they stay compliant with your corporate standards and service level agreements. Azure Policy assesses your resources for non-compliance. You can use built-in policies that Azure already provides, or you can create your own. This assessment is done using the following features:

  • Policy definition: First, you create a policy definition. This consists of the conditions under which it's enforced and the effect that takes place. Azure Policy has a variety of built-in policies that you can use, such as an Allowed Locations policy, an Allowed Virtual Machine SKUs policy, and more. For an overview of all the built-in policies, you can refer to the following GitHub repo: You can also create your own policy definitions. You can do this using JSON, in the Azure portal, or by using PowerShell or the REST API.
  • Policy parameters: You can use parameters in your policy definition to reduce the number of policy definitions you must create. Parameters make a policy more generic, because their values are filled in during assignment, which lets you reuse the same policy in different scenarios. For example, you can create a location parameter whose value is supplied during assignment.
  • Policy assignment: When the policy definition is in place, either by selecting a built-in policy or by creating a custom one, it needs to be assigned to a specific scope, which can be a management group, a subscription, or a resource group. All the resources in that scope automatically inherit the policy assignment.
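To make the interplay between definition, parameters, and assignment concrete, here is a minimal Python sketch that evaluates a simplified deny rule against a resource. It mimics the shape of Azure Policy's if/then rules, but the evaluator, the sample resources, and the field names are hypothetical simplifications, not the actual Azure Policy engine.

```python
# Minimal sketch of how a parameterized "deny" policy is evaluated.
# This mirrors the shape of an Azure Policy rule; it is NOT the real engine.

def evaluate(policy_rule: dict, parameters: dict, resource: dict) -> str:
    """Return the effect ('deny') if the rule matches, else 'compliant'."""
    def matches(condition: dict) -> bool:
        if "allOf" in condition:
            return all(matches(c) for c in condition["allOf"])
        if "not" in condition:
            return not matches(condition["not"])
        field_value = resource.get(condition["field"])
        allowed = condition["in"]
        # '[parameters(...)]' references are resolved with the values
        # that were supplied during assignment.
        if isinstance(allowed, str) and allowed.startswith("[parameters("):
            name = allowed[len("[parameters('"):-len("')]")]
            allowed = parameters[name]
        return field_value in allowed

    if matches(policy_rule["if"]):
        return policy_rule["then"]["effect"]
    return "compliant"

rule = {
    "if": {
        "allOf": [
            {"field": "type", "in": ["Microsoft.HybridCompute/machines"]},
            {"not": {"field": "imagePublisher",
                     "in": "[parameters('listOfAllowedimagePublishers')]"}},
        ]
    },
    "then": {"effect": "deny"},
}

# Parameter values are supplied during assignment, so the same definition
# can be reused with different allowed publishers.
params = {"listOfAllowedimagePublishers": ["MicrosoftWindowsServer"]}

windows_box = {"type": "Microsoft.HybridCompute/machines",
               "imagePublisher": "MicrosoftWindowsServer"}
linux_box = {"type": "Microsoft.HybridCompute/machines",
             "imagePublisher": "Canonical"}

print(evaluate(rule, params, windows_box))  # compliant
print(evaluate(rule, params, linux_box))    # deny
```

The key takeaway is the separation of concerns: the definition describes the condition and effect once, and the parameter values bound at assignment time decide what is actually allowed in a given scope.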

After this very brief introduction to Azure Policy, let's assign a policy to our on-premises machine from Azure Arc.


Creating a custom policy in Azure

The first step is to create a custom policy. The on-premises Windows Server VM is running Windows Server 2016. Let's create a policy that only allows Windows Server machines in the resource group that is used for our on-premises machines connected with Azure Arc. To do this, take the following steps:

  • Navigate to the Azure portal:
  • In the top search box, type Policy and select it.
  • In the left menu, click Definitions, and then in the top menu click + Policy Definition:
  • First, we need to specify a location to store the policy definition. Here, you select the subscription where the definition needs to be added.
  • Then give the Policy a name, such as Only Windows Server Allowed.
  • Create a new Category, named Azure Arc machines.
  • Then we need to add the JSON for the policy. Copy the below code into the Policy Rule field:
     "policyRule": {
             "if": {
                "allOf": [
                      "field": "type",
                      "in": [
                      "not": {
                         "field": "Microsoft.Compute/imagePublisher",
                         "in": "[parameters('listOfAllowedimagePublishers')]"
             "then": {
                "effect": "deny"
      "listOfAllowedimagePublishers": {
      "type": "Array",
      "metadata": {
              "description": "The list of publishers to audit against. Example: 'MicrosoftWindowsServer'",
              "displayName": "Allowed image publishers"
  • The created policy will now look like the following image:
  • Click Save.


Now that we have created a policy definition, we can assign it in Azure Arc.


Assigning policies in Azure Arc for Servers

To apply a policy to our on-premises machine in Azure Arc, you have to take the following steps:

  • Navigate to the Azure portal and type Azure Arc in the search box. Or you can launch
  • Click on the on-premises machine that we added in the previous blog post: Getting started with Azure Arc for Servers:
  • In the overview blade of the VM, click Policies in the left menu. Then, in the top menu, click Assign policy:
  • In the assign policy blade, keep the default selected scope. This is the resource group where the on-premises machine is connected.
  • Click on Policy definition and select the Only Windows Server Allowed policy from the list.
  • The assignment will now look like the following image:
  • Click on the Parameters tab, and fill in MicrosoftWindowsServer:
  • Click Review + create, and then click Create.
  • The policy will now be added to the list and it will take some time before the assessment starts. You can click on the policy name to go to the assignment details:
  • When the assessment is finished, you will see that the machine is compliant, because it has Windows Server installed on it. This will look like the following image:


We have now successfully applied a policy to an on-premises machine.



In this post, we created a custom policy and assigned it to an on-premises machine that is connected to Azure Arc. We connected this machine in the previous post of this series: Getting started with Azure Arc for Servers.

Assigning policies to machines in Azure Arc works perfectly and offers the exact same experience as assigning them to Azure VMs. That said, I get the feeling that assessing machines connected to Azure Arc takes a little more time than assessing VMs that are actually hosted in Azure, which is quite logical in my opinion.



Azure Arc for Servers: Getting started

Most organizations embrace a hybrid and multi-cloud approach for their businesses. This gives them the full benefits of their on-premises investments, the ability to innovate using cloud technologies, and the ability to avoid vendor lock-in.

For the last two years, Microsoft has been investing enormously in enabling seamless hybrid capabilities. They released Azure Stack, which enables a consistent cloud model that is deployed on-premises. They enabled security threat protection for any infrastructure, fully powered from the cloud, and they enabled the ability to run Microsoft Azure Cognitive Services AI models anywhere. Microsoft recently released Azure Arc, which unlocks new hybrid scenarios for organizations by bringing Azure services and management features to any infrastructure.

Azure Arc extends the Azure Resource Manager capabilities to Linux and Windows servers, and Kubernetes clusters on any infrastructure across on-premises, multi-cloud, and the edge. You can use Azure Arc to run Azure data services anywhere, which includes always up-to-date data capabilities, deployment in seconds, and dynamic scalability on any infrastructure. Azure Arc for Servers is currently in preview, and that is what we are going to cover in this post.


Azure Arc for Servers

With Azure Arc for Servers, you can manage machines that are hosted outside of Azure. When these machines are connected to Azure using Azure Arc for Servers, they become Connected Machines and are treated as native resources in Azure. Each Connected Machine gets a Resource ID during registration and is managed as part of a resource group inside an Azure subscription. This enables the machine to benefit from Azure features and capabilities, such as Azure Policy and tagging.

For each machine that you want to connect to Azure, an agent package needs to be installed. Based on how recently the agent has checked in, the machine will have a status of Connected or Disconnected; this check-in is called a heartbeat. If a machine has not checked in within the past 5 minutes, it will show as Disconnected until connectivity is restored. The Azure Resource Manager service limits also apply to Azure Arc for Servers, which means that there is a limit of 800 servers per resource group.
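The heartbeat rule above can be sketched in a few lines of Python; the five-minute threshold comes from the paragraph above, while the function and constant names are made up for illustration and are not part of the actual agent protocol:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative sketch (not the actual agent implementation) of the heartbeat
# rule: a machine that has not checked in within the past 5 minutes shows as
# Disconnected until connectivity is restored.
HEARTBEAT_TIMEOUT = timedelta(minutes=5)

def machine_status(last_heartbeat: datetime,
                   now: Optional[datetime] = None) -> str:
    now = now or datetime.now(timezone.utc)
    return ("Connected"
            if now - last_heartbeat <= HEARTBEAT_TIMEOUT
            else "Disconnected")

now = datetime.now(timezone.utc)
print(machine_status(now - timedelta(minutes=2), now))   # Connected
print(machine_status(now - timedelta(minutes=10), now))  # Disconnected
```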

Supported Operating Systems

At the time of writing this post, the public preview supports the following operating systems:

  • Windows Server 2012 R2 and newer
  • Ubuntu 16.04 and 18.04


During installation and runtime, the agent requires connectivity to Azure Arc service endpoints. If outbound connectivity is blocked by the firewall, make sure that the following URLs are not blocked:

The agent requires access to the service endpoints of the following Azure services:

  • Azure Resource Manager
  • Azure Active Directory
  • Application Insights
  • Guest Configuration
  • Hybrid Identity Service
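As a quick pre-flight check, you could verify outbound HTTPS connectivity from the machine before installing the agent. The sketch below is a generic reachability test, not an official tool; the two hostnames are examples (Azure Resource Manager and Azure Active Directory endpoints), and you should substitute the full endpoint list from the Azure Arc documentation.

```python
import socket

# Example endpoints only; replace with the endpoint URLs required by the
# agent, as listed in the Azure Arc documentation.
ENDPOINTS = ["management.azure.com", "login.windows.net"]

def can_reach(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in ENDPOINTS:
    print(host, "reachable" if can_reach(host) else "BLOCKED")
```

If any endpoint shows as blocked, check the outbound rules on your firewall or proxy before proceeding with the installation.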


In the next part of this post, we are going to connect an on-premises machine in Azure using Azure Arc. For this demonstration, I have an on-premises Hyper-V environment with one Windows Server 2016 machine.

Register the required Resource Providers in Azure

First, we need to register the required resource providers in Azure. To do this, take the following steps:

  • Open a browser and navigate to the Azure portal at:
  • Login with your administrator credentials.
  • Open Cloud Shell in the top right menu, and add the following lines of code to register the Microsoft.HybridCompute and the Microsoft.GuestConfiguration resource providers:
    Register-AzResourceProvider -ProviderNamespace Microsoft.HybridCompute
    Register-AzResourceProvider -ProviderNamespace Microsoft.GuestConfiguration
  • This will result in the following output:
  • Note that the resource providers are only registered in specific locations.


In the next part, we are going to connect the server to Azure Arc.


Connect the machine to Azure Arc for Servers

There are two different ways to connect on-premises machines to Azure Arc. You can download a script and run it manually on the server; this is the best approach when you are adding single servers to Azure Arc. Alternatively, you can follow the PowerShell Quickstart, which adds multiple machines using a service principal. You can find the quickstart here:

We are adding one machine in this demo, so we are going to follow the Portal Quickstart.

To connect the machine to Azure, we need to generate the agent install script in the Azure portal. This script downloads the Azure Connected Machine Agent (AzCMAgent) installation package, installs it on the on-premises machine, and registers the machine in Azure Arc.

Generate the agent install script using the Azure portal

To generate the agent install script, take the following steps:

  • Navigate to the Azure portal  and type Azure Arc in the search box. Or you can launch
  • Click on +Add.
  • Select Add machines using interactive script:

  • In the Basics blade, keep the default settings and click Review + generate. If you want, you can create a new resource group for your machines that are connected to Azure Arc:

  • The last page has a script generated which you can copy (or download). This script needs to be executed on the on-premises machine:

Connect the on-premises machine to Azure Arc

To connect the on-premises machine to Azure Arc, we first need to install the agent on it. To do this, take the following steps:

  • Open Windows PowerShell ISE as an administrator.
  • Paste the script that was generated in the previous step into the PowerShell window and execute it.
  • The machine will be onboarded to Azure, which can take a few minutes to complete.
  • You will receive a registration code during script execution. Navigate to
  • Paste in the code from PowerShell and click Next:
  • You will receive a confirmation that the device is registered in Azure Arc:
  • You can now close the browser window.
  • If you now go back to the Azure portal and refresh the page, you will see that the server is added to Azure with the Connected status:


Managing the machine in Azure Arc

  • To manage the machine from Azure, click on the machine in the overview blade, like in the previous image.
  • In the overview blade, you can add tags to the machine. You can also Manage access, and apply policies to the machine from here:



In this post, we’ve covered how to connect an on-premises machine to Azure Arc for Servers. I found it extremely easy: the script generated in the Azure portal takes care of downloading and installing the agent on the on-premises machine and connecting it to Azure. Once connected, the machine can be managed as if it were a native Azure VM.

In the next post, we are going to assign an Azure Policy to our connected machine in Azure Arc: Getting started with Azure Arc for Servers: Applying policies.

