Azure DevOps Explained

My new book, Azure DevOps Explained, has just been released!

I’ve written this book together with Amit Malik and Stefano Demiliani. It is aimed at developers, solutions architects, and DevOps engineers interested in getting started with cloud DevOps practices on Azure.

What you will learn

  • Get to grips with Azure DevOps
  • Find out about project management with Azure Boards
  • Understand source code management with Azure Repos
  • Build and release pipelines
  • Run quality tests in build pipelines
  • Use artifacts and integrate Azure DevOps in the GitHub flow
  • Discover real-world CI/CD scenarios with Azure DevOps

Table of Contents

  1. Azure DevOps Overview
  2. Managing Projects with Azure DevOps Boards
  3. Source Control Management with Azure DevOps
  4. Understanding Azure DevOps Pipelines
  5. Running Quality Tests in a Build Pipeline
  6. Hosting Your Own Azure Pipeline Agent
  7. Using Artifacts with Azure DevOps
  8. Deploying Applications with Azure DevOps
  9. Integrating Azure DevOps with GitHub
  10. Using Test Plans with Azure DevOps
  11. Real-World CI/CD Scenarios with Azure DevOps

You can order the book on Amazon using this link:

Bi-weekly Azure Summary – Part 72

This bi-weekly update is a summary of trending Azure topics on social media, as well as other interesting content available on the internet.

Below, you can see an overview of interesting blogs, articles, videos and more that are posted on social media and other channels:


Development / IT Pro


What can a Cloud Center of Excellence do for your organization?

Most companies go through a set of phases in their Microsoft cloud journey. They start by experimenting with the cloud for rapid application development. A single subscription is manually created in the Azure portal, and a set of services is quickly deployed from the portal to serve business and developer needs. In this phase, it is not even uncommon for the business or the developers to use their own credit card to create this single subscription. The main goal in this phase is to serve business needs quickly, creating small proofs of concept and avoiding the lengthy, time-consuming deployment strategies of bigger organizations.

In the next phase, the IT department takes its first steps into the cloud and creates additional subscriptions, mostly targeted at the different departments in the organization. It introduces centralized deployments and starts thinking about security and compliance in the cloud.

In the third phase, the organization embraces the cloud on a larger scale. Senior management has decided to transform IT and shift to a cloud-first approach. Applications and data centers need to be migrated, hybrid environments need to be created, and all new applications need to be cloud native. This is the moment that most organizations realize they need a proper governance model and strategy.

As cloud environments are managed on a large scale, there is a need for a solid architecture around structuring subscriptions, networking, databases, applications, security and compliance regulations, and so on. Successfully managing a cloud platform on a large scale requires ownership in the organization. It also requires a centralized entity to maintain best practices, onboard the cloud customers, and make sure that all services are secure and compliant by default.

When they start implementing these technical aspects on a large scale and embedding them into the organization, people start to realize that this also involves a significant organizational and cultural change. This is where a Cloud Center of Excellence comes in.

What does a Cloud Center of Excellence do?

A Cloud Center of Excellence (CCoE) brings a diverse, knowledgeable group of experts from across the organization together to develop cloud best practices for the rest of the organization to follow. The CCoE has a support function: it increases productivity throughout the organization while maintaining a consistent and secure cloud platform. It is based on Microsoft agile practices and a delivery model that provides a programmatic approach to implementing, managing, and operating the Microsoft Azure platform, so that projects and Azure workloads can be onboarded effectively.

A CCoE model requires collaboration between:

  • Cloud adoption
  • Cloud strategy
  • Cloud governance
  • Cloud platform
  • Cloud automation

When these aspects are addressed, the participants can accelerate innovation and migration while reducing the overall cost of change and increasing business agility. When implemented successfully, a CCoE also creates a significant cultural shift in IT. Without the CCoE model, IT tends to focus on control and central responsibility; a successful CCoE model focuses on freedom and delegated responsibility instead. This works best with a technology strategy built around a self-service model that allows business units to make their own decisions, while the CCoE provides a set of guidelines and established, repeatable controls for the business to use.

Key responsibilities of a Cloud Center of Excellence

The primary goal of the CCoE team is to accelerate cloud adoption through cloud native and hybrid solutions. The CCoE has the following objectives:

  • Build a modern IT organization by capturing and implementing business requirements using agile approaches
  • Build reusable deployment packages that fully align with security, compliance, and service management policies
  • Maintain a functional Azure platform in alignment with operational procedures
  • Review and approve the use of cloud-native tools
  • Over time, standardize and automate commonly needed platform components and solutions

The Cloud Center of Excellence team

The CCoE team ideally consists of 3–5 people with a variety of IT backgrounds, which brings a broad perspective and a balanced set of knowledge. It should ideally include people who already have cloud experience in their day-to-day roles, such as:

  • IT/Operations/IT financial manager
  • Solution/Infrastructure Architect
  • Application developer
  • Network engineer
  • Database administrator
  • Systems administrator

Excellent way to start your cloud journey

This blog will help organizations that are going through the different phases of their cloud journey and starting to transform their IT department to be ready for innovation, speed, and control. The Cloud Center of Excellence is an ideal model to accelerate your cloud adoption program.

Bi-weekly Azure Summary – Part 71

This bi-weekly update is a summary of trending Azure topics on social media, as well as other interesting content available on the internet.

Below, you can see an overview of interesting blogs, articles, videos and more that are posted on social media and other channels:


Development / IT Pro



Microsoft Azure Well-Architected Framework

Microsoft recently introduced the Microsoft Azure Well-Architected Framework, which provides customers with a set of Azure best practices to help them build and deliver well-architected solutions on top of the Azure platform.

The framework consists of five pillars of architectural excellence that can be used as guidance to improve the quality of the workloads that run on Azure. These five pillars are cost optimization, operational excellence, performance efficiency, reliability, and security. They are explained in more detail in the following sections.

Cost optimization

One thing to focus on when architecting cloud solutions is generating incremental value early in the process. To accelerate time to market while avoiding capital-intensive solutions, the Build-Measure-Learn principle can be applied. This is one of the central principles of Lean Startup, which helps to create customer partnerships by building with customer empathy, measuring impact on customers, and learning with customers.

By using this pay-as-you-go strategy in your architecture, you invest in scaling out after customer success instead of delivering a large up-front investment in the first version. Keep a balance in your architecture between the cost of a first-mover advantage and a “fast follow” approach. For this, you can use the cost calculators to estimate both the initial and the operational costs. Finally, establish policies, budgets, and controls that set cost limits for your solution.
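As a rough sketch of the trade-off described above, the snippet below compares a hypothetical up-front investment with cumulative pay-as-you-go spend as usage grows. All prices and growth figures are made-up assumptions for illustration, not Azure rates:

```python
# Illustrative comparison of an up-front (capital) investment versus a
# pay-as-you-go model. All numbers are made-up assumptions, not Azure rates.

UPFRONT_COST = 120_000.0          # hypothetical one-off capital investment
PAYG_COST_PER_USER_MONTH = 2.5    # hypothetical consumption price per user

def cumulative_payg_cost(monthly_users):
    """Total pay-as-you-go spend for a list of monthly active-user counts."""
    return sum(users * PAYG_COST_PER_USER_MONTH for users in monthly_users)

# A growth scenario: usage doubles every quarter during the first year.
usage = [1_000 * 2 ** (month // 3) for month in range(12)]

payg = cumulative_payg_cost(usage)
print(f"Pay-as-you-go, year 1: {payg:,.0f}")
print(f"Up-front investment:   {UPFRONT_COST:,.0f}")
print("Pay-as-you-go is cheaper in year 1:", payg < UPFRONT_COST)
```

The point is not the specific numbers but the shape of the decision: with pay-as-you-go, spend follows actual usage, so you only pay for scale after the customer demand that justifies it has materialized.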

For detailed guidance on cost optimization, you can refer to the following articles:

Operational excellence

Operational excellence covers the operations processes that keep applications running in production. To make deployments reliable and predictable, they should be fully automated to reduce human error. Deployment should be a fast and repeatable process, so it doesn’t slow down the release of new features or bug fixes. You also need to be able to quickly roll back or roll forward when a release has problems or bugs.

To accomplish this, monitoring and diagnostics are crucial. When using Azure services, you don’t have full control over the infrastructure and operating system. Monitoring and diagnostics give you insight into the systems and the solutions that run on top of them. Use a common and consistent logging schema that lets you correlate events across different systems, Azure resources, and custom applications.

A successful monitoring and diagnostics process has several distinct phases:

  1. Instrumentation: Log and generate the raw data from all the different resources and services that you are using, such as application logs, web server logs, VM logs, diagnostics built into the Azure platform, and other sources.
  2. Collection and storage: Collect all the raw data and consolidate it into one place.
  3. Analysis and diagnosis: Analyze the data that is collected to see the overall health of the platform, services, and your applications and to troubleshoot issues.
  4. Visualization and alerts: Visualize the analyzed data to spot trends, and set up alerts to notify the operations teams.
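The “common and consistent logging schema” mentioned above can be sketched as follows. The field names are illustrative assumptions, not an Azure-defined schema; the key idea is that every service emits the same JSON shape and carries a shared correlation ID:

```python
# Minimal sketch of a consistent, structured logging schema with a
# correlation ID, so events from different services can be joined later.
# Field names are illustrative assumptions, not an Azure-defined schema.
import json
import logging
import time
import uuid

def make_logger(service_name):
    logger = logging.getLogger(service_name)
    logger.setLevel(logging.INFO)
    if not logger.handlers:
        logger.addHandler(logging.StreamHandler())
    return logger

def log_event(logger, correlation_id, message, **fields):
    """Emit one JSON log line using a schema every service agrees on."""
    record = {
        "timestamp": time.time(),
        "service": logger.name,
        "correlation_id": correlation_id,  # ties events across services
        "message": message,
        **fields,
    }
    logger.info(json.dumps(record))
    return record

# One request flowing through two services shares a single correlation ID:
cid = str(uuid.uuid4())
web = make_logger("web-frontend")
api = make_logger("order-api")
log_event(web, cid, "request received", path="/orders")
log_event(api, cid, "order created", order_id=42)
```

Because both lines share `correlation_id`, a log analytics query can reconstruct the full request path across services, which is what makes the analysis and diagnosis phase tractable.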

For more information and further guidance on operational excellence, you can refer to the following articles:

Performance efficiency

With performance efficiency, you make sure that your workload can scale to meet the demands placed on it by users in an effective manner. You can achieve this by using PaaS offerings that scale automatically, or by implementing scaling effectively in your own solutions and applications.

Applications can scale in two different ways: horizontally (scaling out), where new instances of a resource are added, such as extra VMs or database instances, and vertically (scaling up), where you increase the capacity of a single resource, for example by using a larger VM size.

Horizontal scaling needs to be architected into the system. You can scale out by placing VMs behind a load balancer, but the applications that run on these VMs also need to be able to scale. This can be accomplished by designing stateless applications or by storing state and data externally. Simply adding more instances will not guarantee that your application scales; scaling can also introduce new bottlenecks.
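To make the statelessness point concrete, here is a toy sketch: a round-robin load balancer over workers that keep no session state of their own, so the pool can grow at any time. The in-memory dict stands in for a real external store such as a cache or database:

```python
# Sketch of why statelessness enables horizontal scaling: any worker can
# serve any request because session state lives in an external store.
# The dict below stands in for a real external cache or database.

external_state = {}  # stand-in for an external session/state store

class Worker:
    def __init__(self, name):
        self.name = name

    def handle(self, session_id):
        # Read and update state in the shared store, never in worker memory.
        count = external_state.get(session_id, 0) + 1
        external_state[session_id] = count
        return f"{self.name} served request #{count} for {session_id}"

class LoadBalancer:
    """Round-robin over a worker pool that can grow (scale out) at any time."""
    def __init__(self, workers):
        self.workers = list(workers)
        self._next = 0

    def scale_out(self, worker):
        self.workers.append(worker)

    def route(self, session_id):
        worker = self.workers[self._next % len(self.workers)]
        self._next += 1
        return worker.handle(session_id)

lb = LoadBalancer([Worker("vm-1"), Worker("vm-2")])
print(lb.route("alice"))
print(lb.route("alice"))      # a different VM serves the same session
lb.scale_out(Worker("vm-3"))  # adding capacity needs no session migration
print(lb.route("alice"))
```

If the workers held session state in memory instead, adding `vm-3` (or losing `vm-1`) would break in-flight sessions, which is exactly the bottleneck the paragraph above warns about.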

Therefore, you should always conduct performance and load testing to find these potential bottlenecks. You can use the following articles for this:


Reliability

Reliable workloads are both resilient and available. Resilient applications return to a fully functioning state after a failure occurs. Available applications can be accessed by users when they need them.

In cloud computing, a different mindset is needed than in traditional application development. Cloud applications are built as distributed systems, which means they are often more complex. The cost of cloud environments is kept low through the use of commodity hardware, so occasional hardware failures must be expected. At the same time, users expect systems to be available 24/7 without ever going offline.

This means that cloud applications must be architected differently. They need to be designed to expect occasional failures and to recover from them quickly. When designing your applications to be resilient, you first must understand the availability requirements: how much downtime is acceptable for this application, what downtime costs your business, and how much you should invest in making the application highly available.
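One of the most common patterns for recovering quickly from the occasional failures mentioned above is retry with exponential backoff. The sketch below is a minimal illustration; `flaky_call` is a stub simulating a dependency that fails twice and then recovers:

```python
# Minimal resilience sketch: retry a transient failure with exponential
# backoff so the application returns to a functioning state on its own.
# flaky_call is a stub that fails twice, then recovers.
import time

def retry(operation, attempts=5, base_delay=0.01):
    """Call operation(), retrying on exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure
            time.sleep(base_delay * 2 ** attempt)  # 1x, 2x, 4x, ...

failures = {"left": 2}

def flaky_call():
    if failures["left"] > 0:
        failures["left"] -= 1
        raise ConnectionError("transient outage")
    return "ok"

print(retry(flaky_call))  # succeeds on the third attempt
```

Retries only make sense for transient faults; a real implementation would also distinguish permanent errors and add jitter so many clients don’t retry in lockstep.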

In the following articles you will get more information about how you can design and build reliable workloads and applications in Azure:


Security

Security should be embedded throughout the entire lifecycle of an application, from the design phase all the way to deployment and operations. Protection against a variety of threats, such as DDoS attacks, is already provided by the Azure platform, but you are still responsible for building security into your application and into your DevOps processes.

Security areas that need to be considered for application development are:

  • Identity management: To authenticate and authorize users, Azure Active Directory should be considered. Azure AD is a fully managed identity and access management service that integrates with Azure services, Office 365, Dynamics CRM Online, on-premises Active Directory in a hybrid deployment, and many third-party SaaS applications. For consumer-facing applications, Azure AD B2C (Business to Consumer) lets users authenticate with their existing social accounts, such as Facebook, Google, and LinkedIn, or create new accounts that are managed by Azure AD.
  • Application security: Best practices for applications, such as SSL everywhere, protecting against CSRF and XSS attacks, preventing SQL injection attacks, and so on, still apply to the cloud. You should also store your application keys and secrets in Azure Key Vault.
  • Protecting the infrastructure: Control access to all the Azure resources that you deploy. Every resource has a trust relationship with the Azure AD tenant. To grant the users in your organization the correct permissions to the Azure resources that are deployed, you can use role-based access control (RBAC). These permissions can be added to different scopes, to subscriptions, resource groups, or single resources.
  • Data encryption: When you set up high availability in Azure, make sure that you store the data in the correct geopolitical zone. Azure geo-replication uses the concept of paired regions, which stores the replicated data in the same geopolitical region. To store cryptographic keys and secrets, you can use Azure Key Vault. You can also use Key Vault to store keys that are protected by hardware security modules (HSMs).
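The RBAC scope model mentioned above can be sketched in a few lines. This is a simplified illustration of scope inheritance only, not Azure’s actual role evaluation; the users, roles, and resource IDs are made up:

```python
# Simplified sketch of RBAC scope inheritance: a role assignment at a
# subscription or resource-group scope applies to every resource beneath it.
# Users, roles, and resource IDs below are illustrative, not real Azure data.

assignments = {
    ("alice", "Contributor"): "/subscriptions/sub-1/resourceGroups/web-rg",
    ("bob", "Reader"): "/subscriptions/sub-1",
}

def has_role(user, role, resource_scope):
    """True if the user holds the role at this scope or any parent scope."""
    granted = assignments.get((user, role))
    return granted is not None and (
        resource_scope == granted or resource_scope.startswith(granted + "/")
    )

vm = ("/subscriptions/sub-1/resourceGroups/web-rg"
      "/providers/Microsoft.Compute/virtualMachines/vm-1")
print(has_role("alice", "Contributor", vm))  # inherited from the resource group
print(has_role("bob", "Reader", vm))         # inherited from the subscription
print(has_role("bob", "Contributor", vm))    # never granted
```

The design point is that granting at the broadest scope that is still appropriate keeps the number of assignments small, while narrow grants (single resources) limit blast radius.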

For more information about this, you can refer to the following articles:

Wrap up

The Azure Well-Architected Framework provides comprehensive architecture principles and guidelines to build cost effective, secure, reliable, and manageable solutions in Azure. If you want to get started with the Azure Well-Architected Framework:

Bi-weekly Azure Summary – Part 70

This bi-weekly update is a summary of trending Azure topics on social media, as well as other interesting content available on the internet.

Below, you can see an overview of interesting blogs, articles, videos and more that are posted on social media and other channels:


Development / IT Pro


Thinking cloud native

These days, applications have become very complex and users are demanding more and more of these applications.

They expect innovative features, rapid responsiveness, and zero downtime. Problems that arise when building software, such as performance issues, recurring errors, and the inability to move fast, are no longer acceptable to users. If your application does not meet users’ requirements, they simply move on to a competitor. This means that applications need to address the need for speed and agility, and the solution is a cloud-native architecture and technologies.

Cloud native is all about changing the way you think about building and designing critical business systems. Cloud-native systems are specifically designed for resilience, large scale, and rapid change, and they run in modern, dynamic environments such as public, private, and hybrid clouds. Cloud-native applications are mostly built using one or more of these technologies: containers, service meshes, microservices, and declarative APIs, running on immutable infrastructure.

Some companies that have implemented cloud native and achieved speed, agility, and scalability are Netflix, Uber, and WeChat. They have thousands of independent microservices running in production and deploy hundreds to thousands of times a day. This architectural style enables them to respond quickly to market demand and conditions. By using a cloud-native approach, they can instantaneously update small areas of a live, complex application and individually scale those areas as needed.

The speed and agility of cloud native come from several factors. Cloud infrastructure is key, but there are five additional pillars that also provide the foundation for building cloud-native applications:

Modern Design

A widely accepted methodology for constructing cloud-based applications is the twelve-factor app. It describes a set of principles and practices that developers follow to build applications that are optimized for modern cloud environments. There is a big focus on portability across environments and declarative automation.

These principles and practices are considered a solid foundation for building cloud-native apps. Systems built upon these principles can deploy and scale rapidly and add features quickly to react to market changes.
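One of the twelve factors, storing configuration in the environment, can be sketched briefly. The variable names and defaults below are illustrative assumptions; the point is that the same build runs in every environment because settings come from outside the code:

```python
# Sketch of the twelve-factor "config" principle: settings come from
# environment variables, so the same build runs unchanged in any
# environment. Variable names and defaults are illustrative assumptions.
import os

def load_config(env=None):
    env = os.environ if env is None else env
    flags = env.get("FEATURE_FLAGS", "")
    return {
        "database_url": env.get("DATABASE_URL", "sqlite:///dev.db"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "feature_flags": flags.split(",") if flags else [],
    }

# Local development uses the defaults; production injects real values:
prod = load_config({
    "DATABASE_URL": "postgres://prod-db/app",
    "LOG_LEVEL": "WARNING",
})
print(prod["database_url"])
```

Because nothing environment-specific is baked into the artifact, promoting the exact same build from staging to production is safe, which is what makes deployments repeatable.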


Microservices

Cloud-native systems and applications embrace microservices, a popular style for constructing modern applications. The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating through lightweight mechanisms such as REST, gRPC, HTTP(S), or WebSockets.

Microservices can scale independently. Instead of scaling the entire application as a single unit, you scale out only those services that require more processing power or network bandwidth. Each microservice also has an autonomous lifecycle, so it can evolve independently and deploy frequently. You don’t have to wait for a quarterly release to deploy a new feature or update; you can update small areas of a complex application with less risk of disrupting the entire system.
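The independent-scaling idea can be sketched as a per-service autoscaling decision: each service is scaled on its own metric rather than scaling the application as one unit. The service names, metrics, and thresholds below are illustrative assumptions:

```python
# Sketch of independent scaling decisions: each microservice is scaled on
# its own load metric instead of scaling the whole application as a unit.
# Service names, CPU figures, and thresholds are illustrative assumptions.

services = {
    "checkout": {"instances": 2, "cpu": 0.85},
    "catalog":  {"instances": 4, "cpu": 0.30},
    "reviews":  {"instances": 2, "cpu": 0.55},
}

def autoscale(services, high=0.75, low=0.25):
    """Scale out hot services, scale in idle ones, leave the rest alone."""
    decisions = {}
    for name, state in services.items():
        if state["cpu"] > high:
            decisions[name] = state["instances"] + 1    # scale out
        elif state["cpu"] < low and state["instances"] > 1:
            decisions[name] = state["instances"] - 1    # scale in
        else:
            decisions[name] = state["instances"]        # unchanged
    return decisions

print(autoscale(services))  # only the hot 'checkout' service grows
```

In a monolith, the 85% load on checkout would force scaling the whole application, paying for extra catalog and reviews capacity nobody needs; here only the hot service gets another instance.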


Containers

Containers are a great enabler of cloud-native systems and applications. Microservice containerization is also the first step in the Cloud Native Trail Map, released by the Cloud Native Computing Foundation, which offers guidance for enterprises that are beginning their cloud-native journey. The technique is straightforward: you package the code, its dependencies, and the runtime into a binary called a container image. Images are then stored in a container registry, which acts as a repository or library for the images. Registries can be private or public and can be hosted in your own datacenter or by a public cloud service. When needed, you transform the image into a running container instance. These instances can run in the cloud or in your private data center, on servers that have a container runtime engine installed.

Containers provide portability and guarantee consistency across environments. By packaging everything into a single container image, you isolate the microservice and its dependencies from the underlying infrastructure. This also eliminates the expense of pre-configuring each environment with frameworks, software libraries, and runtime engines. And by sharing the underlying operating system and host resources, containers have a much smaller footprint than a full virtual machine. This increases the number of microservices that a given host can run at one time.

Backing services

Cloud-native applications and services depend on several different backing services, such as data stores, caches, message brokers, and monitoring, logging, and identity services. These backing services support the stateless principle from the twelve-factor app. You can consume these services from a cloud provider, or you can host your own backing services, but then you are responsible for licensing, provisioning, and managing those resources.

Cloud-native services typically use backing services from cloud providers. This saves time and reduces the cost and operational risk of hosting your own services. Backing services are treated as attached resources and are dynamically bound to a microservice. The information required to access these services, such as URLs and credentials, is then stored in an external configuration store.
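Treating a backing service as an attached resource can be sketched as follows: the microservice binds to whatever store the external configuration names, so swapping a local cache for a managed cloud cache requires no code change. Class names, the config dict, and the URL are illustrative stand-ins:

```python
# Sketch of a backing service as an attached resource: the service binds to
# whatever cache the external configuration names, so swapping a local
# cache for a cloud one needs no code change. All names are illustrative.

class InMemoryCache:
    """Local development cache; no external dependency."""
    def __init__(self):
        self._data = {}
    def get(self, key): return self._data.get(key)
    def set(self, key, value): self._data[key] = value

class RemoteCacheStub:
    """Placeholder for a managed cloud cache reached via URL + credential."""
    def __init__(self, url, credential):
        self.url, self.credential = url, credential
        self._data = {}  # stub storage; a real client would talk to `url`
    def get(self, key): return self._data.get(key)
    def set(self, key, value): self._data[key] = value

def bind_cache(config):
    # The external configuration store decides which resource is attached.
    if config.get("CACHE_URL"):
        return RemoteCacheStub(config["CACHE_URL"], config.get("CACHE_KEY"))
    return InMemoryCache()

cache = bind_cache({"CACHE_URL": "rediss://example-cache:6380",
                    "CACHE_KEY": "example-key"})
cache.set("greeting", "hello")
print(type(cache).__name__, cache.get("greeting"))
```

Because both implementations expose the same `get`/`set` interface, the rest of the microservice never knows which resource is attached; only the configuration changes per environment.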


Automation

The previous pillars focus specifically on achieving speed and agility, but that is not the complete story. The cloud environments themselves also need to be provisioned so that cloud-native applications and systems can be deployed and run. How do you rapidly deploy your apps and features? A widely accepted practice for this is Infrastructure as Code (IaC).

Using IaC, you can automate platform provisioning and application deployment. DevOps teams that implement IaC can deliver stable environments rapidly and at scale. By adding testing and versioning to the DevOps practices, your infrastructure and deployments are automated, consistent, and repeatable.

You can use tools such as Azure Resource Manager templates, Terraform, and the Azure CLI to create scripts that deploy the cloud infrastructure. The script is versioned and checked into source control as an artifact of the project. It is then automatically invoked in continuous integration and continuous delivery (CI/CD) pipelines to provision a consistent, repeatable infrastructure across environments, such as QA, staging, and production. A service that can handle this process end to end is Azure Pipelines, which is part of Azure DevOps.
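The core IaC property, that running the same declarative description repeatedly yields the same infrastructure, can be illustrated with a toy reconcile loop. This is a conceptual sketch only; the resource names are made up, and a real pipeline would invoke ARM, Terraform, or the Azure CLI instead:

```python
# Toy illustration of the IaC idea: a versioned, declarative description of
# the environment is reconciled against actual state, so running the same
# script repeatedly yields the same result (idempotence). Resource names
# are made up; a real pipeline would call ARM/Terraform/Azure CLI.

desired = {
    "rg-app":   {"type": "resourceGroup", "location": "westeurope"},
    "plan-app": {"type": "appServicePlan", "sku": "P1v2"},
    "web-app":  {"type": "webApp", "runtime": "python|3.11"},
}

def reconcile(desired, actual):
    """Diff desired state against actual: (to_create, to_update, unchanged)."""
    to_create = {n: s for n, s in desired.items() if n not in actual}
    to_update = {n: s for n, s in desired.items()
                 if n in actual and actual[n] != s}
    unchanged = [n for n in desired if n in actual and actual[n] == desired[n]]
    return to_create, to_update, unchanged

# First run on an empty environment creates everything; a second run
# against the resulting state changes nothing:
create, update, same = reconcile(desired, {})
create2, update2, same2 = reconcile(desired, desired)
print(len(create), len(update2), len(same2))
```

The same diff-then-apply loop run per environment (QA, staging, production) is what makes deployments consistent across them: the only input that varies is the desired-state description checked into source control.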


Wrap up

This blog introduced the five pillars that provide the foundation for building cloud-native applications. At Capgemini, we have a lot of experience, use cases, and best practices in implementing cloud-native practices and designing and building cloud-native applications and systems for our enterprise customers. If you want more information about our experiences with this, you can contact me on LinkedIn or Twitter.

Bi-weekly Azure Summary – Part 69

This bi-weekly update is a summary of trending Azure topics on social media, as well as other interesting content available on the internet.

Below, you can see an overview of interesting blogs, articles, videos and more that are posted on social media and other channels:




Development / IT Pro






Bi-weekly Azure Summary – Part 68

This bi-weekly update is a summary of trending Azure topics on social media, as well as other interesting content available on the internet.

Below, you can see an overview of interesting blogs, articles, videos and more that are posted on social media and other channels:




Development / IT Pro





