<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[clouddevopsinsights]]></title><description><![CDATA[clouddevopsinsights]]></description><link>https://clouddevopsinsights.com</link><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 13:48:02 GMT</lastBuildDate><atom:link href="https://clouddevopsinsights.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><atom:link rel="first" href="https://clouddevopsinsights.com/rss.xml"/><item><title><![CDATA[Managing Azure DNS at Scale: From DNS Forwarders to Private DNS Resolver]]></title><description><![CDATA[<h3>Introduction</h3>
<p>DNS is a foundational service in any cloud architecture, but it becomes significantly more complex in hybrid environments. As organizations expand into Azure with hub-spoke networks, private endpoints, and on-premises integration, DNS design must evolve to handle scale, reliability, and maintainability.</p>
<p>This article walks through that evolution:</p>
<ul>
<li><p>Traditional DNS forwarders and their limitations</p>
</li>
<li><p>Azure DNS Private Resolver</p>
</li>
<li><p>Inbound and outbound endpoints</p>
</li>
<li><p>DNS forwarding rulesets</p>
</li>
<li><p>Distributed vs centralized DNS architectures</p>
</li>
<li><p>Real-world hybrid resolution scenarios</p>
</li>
</ul>
<h3>The Traditional Approach: DNS Forwarders</h3>
<p>In early Azure architectures, hybrid DNS resolution was typically achieved using <strong>custom DNS forwarder virtual machines</strong>.</p>
<h3>Typical setup</h3>
<ul>
<li><p>On-premises DNS servers forward queries to DNS VMs in Azure</p>
</li>
<li><p>Azure VNets are configured to use these custom DNS servers</p>
</li>
<li><p>Conditional forwarding is manually configured</p>
</li>
</ul>
<h3>Challenges at Scale</h3>
<p>This approach introduces several operational and architectural issues:</p>
<p><strong>Operational overhead</strong></p>
<ul>
<li><p>DNS VMs require patching, monitoring, and backup</p>
</li>
<li><p>High availability must be designed and maintained</p>
</li>
</ul>
<p><strong>Scalability limitations</strong></p>
<ul>
<li><p>DNS traffic grows with workloads</p>
</li>
<li><p>Scaling requires manual intervention</p>
</li>
</ul>
<p><strong>Reliability concerns</strong></p>
<ul>
<li><p>DNS becomes dependent on VM availability</p>
</li>
<li><p>Misconfiguration can disrupt name resolution across environments</p>
</li>
</ul>
<p><strong>Complex hybrid resolution</strong></p>
<ul>
<li>Managing Azure Private DNS zones alongside on-premises zones becomes difficult</li>
</ul>
<p>These challenges led to the need for a managed, cloud-native solution.</p>
<img src="https://cdn.hashnode.com/uploads/covers/671acebcc2180cf709b607c2/808662b1-c017-4877-b122-2d9412d0b7e7.svg" alt="" style="display:block;margin:0 auto" />

<h2>Azure DNS Private Resolver</h2>
<p>Azure DNS Private Resolver is a fully managed service that provides <strong>native DNS resolution between Azure and on-premises environments without requiring DNS VMs</strong>.</p>
<p>It enables:</p>
<ul>
<li><p>On-premises systems to resolve Azure private DNS zones</p>
</li>
<li><p>Azure resources to resolve on-premises domains</p>
</li>
<li><p>Centralized DNS forwarding logic</p>
</li>
</ul>
<p>This service integrates directly with Azure networking and removes the need for infrastructure management.</p>
<h2>Core Components</h2>
<h3>Inbound Endpoint</h3>
<p>An <strong>inbound endpoint</strong> allows DNS queries from external networks (such as on-premises) to enter Azure.</p>
<p><strong>Key characteristics:</strong></p>
<ul>
<li><p>Receives DNS queries from on-premises DNS servers</p>
</li>
<li><p>Resolves Azure Private DNS zones and private endpoints</p>
</li>
<li><p>Deployed in a dedicated subnet</p>
</li>
</ul>
<p><strong>Purpose:</strong><br />Enable on-premises systems to resolve Azure-hosted private resources.</p>
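<p>To make this concrete, here is a minimal Terraform sketch of a resolver with an inbound endpoint, assuming a hub VNet and resource group already exist. Resource names and address ranges are illustrative, and the attribute names follow the azurerm provider's private DNS resolver resources; check the provider documentation for your version. Note that the endpoint subnet must be delegated to <code>Microsoft.Network/dnsResolvers</code>.</p>
<pre><code class="lang-hcl"># Dedicated subnet for the inbound endpoint, delegated to the resolver service
resource "azurerm_subnet" "dns_inbound" {
  name                 = "snet-dns-inbound"
  resource_group_name  = azurerm_resource_group.hub.name
  virtual_network_name = azurerm_virtual_network.hub.name
  address_prefixes     = ["10.0.0.64/28"]

  delegation {
    name = "dns-resolver"
    service_delegation {
      name = "Microsoft.Network/dnsResolvers"
    }
  }
}

# The managed resolver itself, anchored to the hub VNet
resource "azurerm_private_dns_resolver" "hub" {
  name                = "dnspr-hub"
  resource_group_name = azurerm_resource_group.hub.name
  location            = azurerm_resource_group.hub.location
  virtual_network_id  = azurerm_virtual_network.hub.id
}

# Inbound endpoint: the IP that on-premises DNS servers forward to
resource "azurerm_private_dns_resolver_inbound_endpoint" "onprem" {
  name                    = "in-onprem"
  private_dns_resolver_id = azurerm_private_dns_resolver.hub.id
  location                = azurerm_private_dns_resolver.hub.location

  ip_configurations {
    private_ip_allocation_method = "Dynamic"
    subnet_id                    = azurerm_subnet.dns_inbound.id
  }
}
</code></pre>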
<h3>Outbound Endpoint</h3>
<p>An <strong>outbound endpoint</strong> allows Azure resources to resolve external domains, such as those hosted on-premises.</p>
<p><strong>Key characteristics:</strong></p>
<ul>
<li><p>Sends DNS queries out of Azure</p>
</li>
<li><p>Works with DNS forwarding rulesets</p>
</li>
<li><p>Enables conditional forwarding</p>
</li>
</ul>
<p><strong>Purpose:</strong><br />Enable Azure workloads to resolve on-premises DNS zones.</p>
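<p>Continuing the sketch above, an outbound endpoint lives in its own delegated subnet (again, names and ranges are illustrative):</p>
<pre><code class="lang-hcl"># Dedicated, delegated subnet for the outbound endpoint
resource "azurerm_subnet" "dns_outbound" {
  name                 = "snet-dns-outbound"
  resource_group_name  = azurerm_resource_group.hub.name
  virtual_network_name = azurerm_virtual_network.hub.name
  address_prefixes     = ["10.0.0.80/28"]

  delegation {
    name = "dns-resolver"
    service_delegation {
      name = "Microsoft.Network/dnsResolvers"
    }
  }
}

# Outbound endpoint: the egress point for queries leaving Azure
resource "azurerm_private_dns_resolver_outbound_endpoint" "onprem" {
  name                    = "out-onprem"
  private_dns_resolver_id = azurerm_private_dns_resolver.hub.id
  location                = azurerm_private_dns_resolver.hub.location
  subnet_id               = azurerm_subnet.dns_outbound.id
}
</code></pre>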
<h2>DNS Forwarding Rulesets</h2>
<p>A <strong>DNS forwarding ruleset</strong> defines how DNS queries should be routed.</p>
<p>Each rule includes:</p>
<ul>
<li><p>Domain name (e.g., <code>corp.local</code>)</p>
</li>
<li><p>Target DNS server IP address</p>
</li>
</ul>
<p>Rulesets can be:</p>
<ul>
<li><p>Linked to multiple VNets</p>
</li>
<li><p>Centrally managed</p>
</li>
<li><p>Used to control DNS behavior at scale</p>
</li>
</ul>
<p>This eliminates the need to configure DNS settings individually on each VNet.</p>
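<p>Building on the endpoints sketched earlier, a ruleset, a rule for <code>corp.local</code>, and a spoke VNet link might look like the following (IP addresses and names are illustrative; the provider expects the rule's domain name with a trailing dot):</p>
<pre><code class="lang-hcl">resource "azurerm_private_dns_resolver_dns_forwarding_ruleset" "hub" {
  name                                       = "rset-hub"
  resource_group_name                        = azurerm_resource_group.hub.name
  location                                   = azurerm_resource_group.hub.location
  private_dns_resolver_outbound_endpoint_ids = [azurerm_private_dns_resolver_outbound_endpoint.onprem.id]
}

# Forward corp.local queries to the on-premises DNS server
resource "azurerm_private_dns_resolver_forwarding_rule" "corp" {
  name                      = "corp-local"
  dns_forwarding_ruleset_id = azurerm_private_dns_resolver_dns_forwarding_ruleset.hub.id
  domain_name               = "corp.local." # trailing dot required
  enabled                   = true

  target_dns_servers {
    ip_address = "192.168.0.10"
    port       = 53
  }
}

# Linking the ruleset is what makes it apply to a spoke VNet
resource "azurerm_private_dns_resolver_virtual_network_link" "spoke" {
  name                      = "link-spoke"
  dns_forwarding_ruleset_id = azurerm_private_dns_resolver_dns_forwarding_ruleset.hub.id
  virtual_network_id        = azurerm_virtual_network.spoke.id
}
</code></pre>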
<h2>DNS Architecture Patterns</h2>
<h3>Distributed DNS Architecture</h3>
<p>In this model:</p>
<ul>
<li><p>Azure VNets use the default Azure DNS service</p>
</li>
<li><p>A Private Resolver is deployed in a hub VNet</p>
</li>
<li><p>A forwarding ruleset is linked to spoke VNets</p>
</li>
</ul>
<p><strong>Flow:</strong></p>
<ol>
<li><p>A VM in a spoke VNet sends a DNS query</p>
</li>
<li><p>Azure DNS attempts to resolve it</p>
</li>
<li><p>If no match is found, the ruleset is evaluated</p>
</li>
<li><p>Matching queries are forwarded via the outbound endpoint</p>
</li>
</ol>
<p><strong>Key advantage:</strong><br />Minimal configuration in spoke VNets and high scalability.</p>
<img src="https://cdn.hashnode.com/uploads/covers/671acebcc2180cf709b607c2/2269e108-3a1d-482b-b343-0d4f28c0701a.png" alt="" style="display:block;margin:0 auto" />

<h3>Centralized DNS Architecture</h3>
<p>In this model:</p>
<ul>
<li><p>VNets are configured to use a custom DNS server (inbound endpoint IP)</p>
</li>
<li><p>All DNS queries are sent to the hub</p>
</li>
</ul>
<p><strong>Flow:</strong></p>
<ol>
<li><p>A VM sends a DNS query</p>
</li>
<li><p>Query is directed to the inbound endpoint</p>
</li>
<li><p>Resolver processes the request using:</p>
<ul>
<li><p>Private DNS zones</p>
</li>
<li><p>Forwarding rules</p>
</li>
<li><p>External resolution</p>
</li>
</ul>
</li>
</ol>
<p><strong>Key advantage:</strong><br />Centralized control and consistent DNS behavior.</p>
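<p>In Terraform terms, centralizing means pointing each VNet's DNS servers at the inbound endpoint's private IP instead of linking rulesets to every spoke. A minimal sketch (the address values are illustrative):</p>
<pre><code class="lang-hcl">resource "azurerm_virtual_network" "spoke" {
  name                = "vnet-spoke"
  resource_group_name = azurerm_resource_group.spoke.name
  location            = azurerm_resource_group.spoke.location
  address_space       = ["10.10.0.0/16"]

  # Custom DNS: all queries from this VNet go to the inbound endpoint
  dns_servers = ["10.0.0.68"] # inbound endpoint private IP
}
</code></pre>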
<img src="https://cdn.hashnode.com/uploads/covers/671acebcc2180cf709b607c2/0eadd091-f538-497a-8e9f-5bbbbaf831fe.png" alt="" style="display:block;margin:0 auto" />

<h2>Building a Scalable and Secure DNS Architecture with vWAN Hub</h2>
<p>For enterprise-scale environments, especially those using <strong>Virtual WAN (vWAN)</strong>, DNS design should align with centralized connectivity and security principles.</p>
<img src="https://cdn.hashnode.com/uploads/covers/671acebcc2180cf709b607c2/5fe50632-7c72-47c2-9d88-7ca0249079fe.png" alt="" style="display:block;margin:0 auto" />

<h3>Recommended Design</h3>
<ul>
<li><p>Deploy a <strong>dedicated VNet for DNS</strong></p>
</li>
<li><p>Host:</p>
<ul>
<li><p>Private DNS Resolver</p>
</li>
<li><p>Inbound endpoint subnet</p>
</li>
<li><p>Outbound endpoint subnet</p>
</li>
</ul>
</li>
<li><p>Connect this DNS VNet to the <strong>vWAN hub</strong></p>
</li>
<li><p>Ensure connectivity to:</p>
<ul>
<li><p>On-premises networks</p>
</li>
<li><p>Spoke VNets</p>
</li>
<li><p>Shared services</p>
</li>
</ul>
</li>
</ul>
<h3>How it works</h3>
<ul>
<li><p>On-premises DNS forwards queries to inbound endpoint via vWAN hub</p>
</li>
<li><p>Azure VNets use rulesets linked to the resolver</p>
</li>
<li><p>Outbound endpoint routes DNS queries back to on-premises when required</p>
</li>
</ul>
<h3>Key Benefits</h3>
<p><strong>Scalability</strong></p>
<ul>
<li><p>DNS is decoupled into a dedicated VNet</p>
</li>
<li><p>Can support multiple regions and VNets</p>
</li>
</ul>
<p><strong>Security</strong></p>
<ul>
<li><p>Traffic flows through secured vWAN hub</p>
</li>
<li><p>Enables inspection and policy enforcement</p>
</li>
</ul>
<p><strong>Centralized control</strong></p>
<ul>
<li>Single DNS control plane for entire environment</li>
</ul>
<p><strong>Separation of concerns</strong></p>
<ul>
<li>DNS is isolated from application VNets</li>
</ul>
<p>This model is particularly effective in large enterprises adopting <strong>hub-and-spoke with vWAN as the global transit hub</strong>.</p>
<h2>Real-World Scenarios</h2>
<p><strong>Scenario 1: On-Premises VM Accessing Azure Storage via Private Endpoint</strong></p>
<ol>
<li><p>On-prem VM queries:<br /><code>mystorageaccount.privatelink.blob.core.windows.net</code></p>
</li>
<li><p>On-prem DNS forwards to inbound endpoint</p>
</li>
<li><p>Private Resolver resolves using Azure Private DNS zone</p>
</li>
<li><p>Returns private IP</p>
</li>
<li><p>Traffic flows over private connectivity</p>
</li>
</ol>
<p><strong>Scenario 2: Azure VM Resolving On-Premises Application</strong></p>
<ol>
<li><p>Azure VM queries: <code>app.corp.local</code></p>
</li>
<li><p>Azure DNS checks local zones (no match)</p>
</li>
<li><p>Ruleset forwards request via outbound endpoint</p>
</li>
<li><p>On-prem DNS resolves</p>
</li>
<li><p>Response returned to Azure VM</p>
</li>
</ol>
<h3>References</h3>
<ul>
<li><p>Azure DNS Private Resolver Architecture<br /><a href="https://learn.microsoft.com/en-us/azure/dns/private-resolver-architecture">https://learn.microsoft.com/en-us/azure/dns/private-resolver-architecture</a></p>
</li>
<li><p>Azure DNS Private Resolver Overview and Design<br /><a href="https://learn.microsoft.com/en-us/azure/architecture/networking/architecture/azure-dns-private-resolver">https://learn.microsoft.com/en-us/azure/architecture/networking/architecture/azure-dns-private-resolver</a></p>
</li>
<li><p>Azure DNS Management at Scale<br /><a href="https://www.youtube.com/watch?v=nVONXtEmZa8">https://www.youtube.com/watch?v=nVONXtEmZa8</a></p>
</li>
</ul>
]]></description><link>https://clouddevopsinsights.com/managing-azure-dns-at-scale-from-dns-forwarders-to-private-dns-resolver</link><guid isPermaLink="true">https://clouddevopsinsights.com/managing-azure-dns-at-scale-from-dns-forwarders-to-private-dns-resolver</guid><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[Terraform Nested Loops and flatten(): A Beginner's Guide with Azure Virtual Networks]]></title><description><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>If you're working with Terraform and Azure, you've probably encountered situations where you need to create multiple resources based on hierarchical data structures. For example, creating multiple Virtual Networks (VNets) and then multiple Subnets within each VNet.</p>
<p>In this blog, I'll explain how to use nested for loops combined with flatten() to elegantly solve this problem. We'll use Azure Virtual Networks as our reference.</p>
<h3 id="heading-the-problem-creating-vnets-and-subnets">The Problem: Creating VNets and Subnets</h3>
<p>Imagine you need to create:</p>
<ul>
<li><p>3 Azure Virtual Networks (dev, prod, staging)</p>
</li>
<li><p>Multiple subnets within each VNet</p>
</li>
<li><p>Each subnet with its own CIDR block</p>
</li>
</ul>
<p>Doing this manually would require hardcoding each resource. But with Terraform's for loops and locals, we can automate this beautifully.</p>
<h3 id="heading-solution-overview">Solution Overview</h3>
<p>Our approach has 3 main steps:</p>
<ol>
<li><p>Define the data structure - Organize VNets and subnets as variables</p>
</li>
<li><p>Create VNets - Use for_each to loop through VNets</p>
</li>
<li><p>Create Subnets - Use nested loops with flatten() to create subnets</p>
</li>
</ol>
<p><strong>Step 1: Define the Data Structure</strong></p>
<p>First, let's define our Azure networks as a variable:</p>
<pre><code class="lang-markdown">variable "azure<span class="hljs-emphasis">_networks" {
  type = map(object({
    resource_</span>group = string
<span class="hljs-code">    location       = string
    address_space  = list(string)
    subnets        = map(object({ 
      address_prefix = string 
    }))
  }))
  default = {
    "vnet-dev" = {
      resource_group = "rg-dev"
      location       = "East US"
      address_space  = ["10.1.0.0/16"]
      subnets = {
        "subnet-vm" = {
          address_prefix = "10.1.1.0/24"
        }
        "subnet-db" = {
          address_prefix = "10.1.2.0/24"
        }
      }
    }
    "vnet-prod" = {
      resource_group = "rg-prod"
      location       = "West US"
      address_space  = ["10.2.0.0/16"]
      subnets = {
        "subnet-web" = {
          address_prefix = "10.2.1.0/24"
        }
        "subnet-api" = {
          address_prefix = "10.2.2.0/24"
        }
        "subnet-db" = {
          address_prefix = "10.2.3.0/24"
        }
      }
    }
    "vnet-staging" = {
      resource_group = "rg-staging"
      location       = "Central US"
      address_space  = ["10.3.0.0/16"]
      subnets = {
        "subnet-test" = {
          address_prefix = "10.3.1.0/24"
        }
      }
    }
  }
}</span>
</code></pre>
<p>What we have here:</p>
<ul>
<li><p>A map of 3 Virtual Networks</p>
</li>
<li><p>Each VNet has subnets stored as a nested map</p>
</li>
<li><p>Each subnet has an address prefix (CIDR block)</p>
</li>
</ul>
<p><strong>Step 2: Create Azure Virtual Networks</strong></p>
<p>Using <code>for_each</code>, we iterate through each VNet:</p>
<pre><code class="lang-markdown">resource "azurerm<span class="hljs-emphasis">_virtual_</span>network" "example" {
  for<span class="hljs-emphasis">_each = var.azure_</span>networks

  name                = each.key
  address<span class="hljs-emphasis">_space       = each.value.address_</span>space
  location            = each.value.location
  resource<span class="hljs-emphasis">_group_</span>name = each.value.resource<span class="hljs-emphasis">_group
}</span>
</code></pre>
<p>What happens:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Iteration</td><td>each.key</td><td>each.value.location</td><td>Result</td></tr>
</thead>
<tbody>
<tr>
<td>1</td><td>"vnet-dev"</td><td>"East US"</td><td>Creates VNet vnet-dev in East US</td></tr>
<tr>
<td>2</td><td>"vnet-prod"</td><td>"West US"</td><td>Creates VNet vnet-prod in West US</td></tr>
<tr>
<td>3</td><td>"vnet-staging"</td><td>"Central US"</td><td>Creates VNet vnet-staging in Central US</td></tr>
</tbody>
</table>
</div><p><strong>Terraform References:</strong></p>
<ol>
<li><p><code>azurerm_virtual_network.example["vnet-dev"].id</code></p>
</li>
<li><p><code>azurerm_virtual_network.example["vnet-prod"].id</code></p>
</li>
<li><p><code>azurerm_virtual_network.example["vnet-staging"].id</code></p>
</li>
</ol>
<p><strong>Step 3: The Magic Part - Nested Loops with flatten()</strong></p>
<p>Now comes the complex part: creating subnets across all VNets. We need to:</p>
<ol>
<li><p>Loop through each VNet</p>
</li>
<li><p>Loop through each subnet within that VNet</p>
</li>
<li><p>Combine the data</p>
</li>
<li><p>Flatten the result into a single list</p>
</li>
</ol>
<pre><code class="lang-markdown">locals {
  # Create a flat list of all subnets with their parent VNet info
  azure<span class="hljs-emphasis">_subnets = flatten([
    for vnet_</span>name, vnet<span class="hljs-emphasis">_config in var.azure_</span>networks : [
<span class="hljs-code">      for subnet_name, subnet_config in vnet_config.subnets : {
        vnet_name      = vnet_name
        subnet_name    = subnet_name
        address_prefix = subnet_config.address_prefix
        vnet_id        = azurerm_virtual_network.example[vnet_name].id
        resource_group = vnet_config.resource_group
      }
    ]
  ])
}</span>
</code></pre>
<p>Let me break this down line by line:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Line</strong></td><td><strong>Explanation</strong></td></tr>
</thead>
<tbody>
<tr>
<td>for vnet_name, vnet_config in var.azure_networks : [</td><td>Outer loop: Iterate through each VNet (dev, prod, staging)</td></tr>
<tr>
<td>for subnet_name, subnet_config in vnet_config.subnets : {</td><td>Inner loop: Iterate through subnets in the current VNet</td></tr>
<tr>
<td>vnet_name = vnet_name</td><td>Store the VNet name</td></tr>
<tr>
<td>subnet_name = subnet_name</td><td>Store the subnet name</td></tr>
<tr>
<td>address_prefix = subnet_config.address_prefix</td><td>Get the subnet's CIDR block</td></tr>
<tr>
<td>vnet_id = azurerm_virtual_network.example[vnet_name].id</td><td>Reference the VNet's ID</td></tr>
<tr>
<td>flatten([...])</td><td>Convert the nested lists into one flat list</td></tr>
</tbody>
</table>
</div><p><strong>Iteration Trace with Real Data</strong></p>
<p>Let me show you exactly what happens in each iteration:</p>
<p><strong>Iteration 1: vnet_name = "vnet-dev"</strong></p>
<p>Inner loop processes subnets in vnet-dev:</p>
<p><strong>Sub-iteration 1a:</strong> subnet_name = "subnet-vm"</p>
<pre><code class="lang-markdown">{
  vnet<span class="hljs-emphasis">_name      = "vnet-dev"
  subnet_</span>name    = "subnet-vm"
  address<span class="hljs-emphasis">_prefix = "10.1.1.0/24"
  vnet_</span>id        = azurerm<span class="hljs-emphasis">_virtual_</span>network.example["vnet-dev"].id
  resource<span class="hljs-emphasis">_group = "rg-dev"
Sub-iteration 1b: subnet_</span>name = "subnet-db"
</code></pre>
<pre><code class="lang-markdown">{
  vnet<span class="hljs-emphasis">_name      = "vnet-dev"
  subnet_</span>name    = "subnet-db"
  address<span class="hljs-emphasis">_prefix = "10.1.2.0/24"
  vnet_</span>id        = azurerm<span class="hljs-emphasis">_virtual_</span>network.example["vnet-dev"].id
  resource<span class="hljs-emphasis">_group = "rg-dev"
}</span>
</code></pre>
<p><strong>Result from Iteration 1:</strong> A list with 2 objects</p>
<p><strong>Iteration 2: vnet_name = "vnet-prod"</strong></p>
<p>Inner loop processes subnets in vnet-prod:</p>
<p><strong>Sub-iteration 2a:</strong> subnet_name = "subnet-web"</p>
<pre><code class="lang-markdown">{
  vnet<span class="hljs-emphasis">_name      = "vnet-prod"
  subnet_</span>name    = "subnet-web"
  address<span class="hljs-emphasis">_prefix = "10.2.1.0/24"
  vnet_</span>id        = azurerm<span class="hljs-emphasis">_virtual_</span>network.example["vnet-prod"].id
  resource<span class="hljs-emphasis">_group = "rg-prod"
}</span>
</code></pre>
<p><strong>Sub-iteration 2b:</strong> subnet_name = "subnet-api"</p>
<pre><code class="lang-markdown">{
  vnet<span class="hljs-emphasis">_name      = "vnet-prod"
  subnet_</span>name    = "subnet-api"
  address<span class="hljs-emphasis">_prefix = "10.2.2.0/24"
  vnet_</span>id        = azurerm<span class="hljs-emphasis">_virtual_</span>network.example["vnet-prod"].id
  resource<span class="hljs-emphasis">_group = "rg-prod"
}</span>
</code></pre>
<p><strong>Sub-iteration 2c:</strong> subnet_name = "subnet-db"</p>
<pre><code class="lang-markdown">{
  vnet<span class="hljs-emphasis">_name      = "vnet-prod"
  subnet_</span>name    = "subnet-db"
  address<span class="hljs-emphasis">_prefix = "10.2.3.0/24"
  vnet_</span>id        = azurerm<span class="hljs-emphasis">_virtual_</span>network.example["vnet-prod"].id
  resource<span class="hljs-emphasis">_group = "rg-prod"
}</span>
</code></pre>
<p><strong>Result from Iteration 2:</strong> A list with 3 objects</p>
<p><strong>Iteration 3: vnet_name = "vnet-staging"</strong></p>
<p>Inner loop processes subnets in vnet-staging:</p>
<p><strong>Sub-iteration 3a:</strong> subnet_name = "subnet-test"</p>
<pre><code class="lang-markdown">{
  vnet<span class="hljs-emphasis">_name      = "vnet-staging"
  subnet_</span>name    = "subnet-test"
  address<span class="hljs-emphasis">_prefix = "10.3.1.0/24"
  vnet_</span>id        = azurerm<span class="hljs-emphasis">_virtual_</span>network.example["vnet-staging"].id
  resource<span class="hljs-emphasis">_group = "rg-staging"
}</span>
</code></pre>
<p><strong>Result from Iteration 3:</strong> A list with 1 object</p>
<p><strong>Before flatten() - Nested Lists</strong></p>
<p>After the outer loop completes, we have a list of lists:</p>
<pre><code class="lang-markdown">[
  [
<span class="hljs-code">    { vnet_name = "vnet-dev", subnet_name = "subnet-vm", ... },
    { vnet_name = "vnet-dev", subnet_name = "subnet-db", ... }
  ],
  [
    { vnet_name = "vnet-prod", subnet_name = "subnet-web", ... },
    { vnet_name = "vnet-prod", subnet_name = "subnet-api", ... },
    { vnet_name = "vnet-prod", subnet_name = "subnet-db", ... }
  ],
  [
    { vnet_name = "vnet-staging", subnet_name = "subnet-test", ... }
  ]
]</span>
</code></pre>
<p><strong>Problem:</strong> This is hard to work with because it's nested!</p>
<p><strong>After flatten() - Single Flat List</strong></p>
<p>The <code>flatten()</code> function removes one level of nesting:</p>
<pre><code class="lang-markdown">[
  { vnet<span class="hljs-emphasis">_name = "vnet-dev", subnet_</span>name = "subnet-vm", address<span class="hljs-emphasis">_prefix = "10.1.1.0/24", vnet_</span>id = "...", resource<span class="hljs-emphasis">_group = "rg-dev" },
  { vnet_</span>name = "vnet-dev", subnet<span class="hljs-emphasis">_name = "subnet-db", address_</span>prefix = "10.1.2.0/24", vnet<span class="hljs-emphasis">_id = "...", resource_</span>group = "rg-dev" },
  { vnet<span class="hljs-emphasis">_name = "vnet-prod", subnet_</span>name = "subnet-web", address<span class="hljs-emphasis">_prefix = "10.2.1.0/24", vnet_</span>id = "...", resource<span class="hljs-emphasis">_group = "rg-prod" },
  { vnet_</span>name = "vnet-prod", subnet<span class="hljs-emphasis">_name = "subnet-api", address_</span>prefix = "10.2.2.0/24", vnet<span class="hljs-emphasis">_id = "...", resource_</span>group = "rg-prod" },
  { vnet<span class="hljs-emphasis">_name = "vnet-prod", subnet_</span>name = "subnet-db", address<span class="hljs-emphasis">_prefix = "10.2.3.0/24", vnet_</span>id = "...", resource<span class="hljs-emphasis">_group = "rg-prod" },
  { vnet_</span>name = "vnet-staging", subnet<span class="hljs-emphasis">_name = "subnet-test", address_</span>prefix = "10.3.1.0/24", vnet<span class="hljs-emphasis">_id = "...", resource_</span>group = "rg-staging" }
]
</code></pre>
<p><strong>Step 4: Convert List to Map and Create Subnets</strong></p>
<p>for_each requires a map, not a list. So we convert the flat list to a map with unique keys:</p>
<pre><code class="lang-markdown">resource "azurerm<span class="hljs-emphasis">_subnet" "example" {
  for_</span>each = {
<span class="hljs-code">    for subnet in local.azure_subnets : "${subnet.vnet_name}.${subnet.subnet_name}" =&gt; subnet
  }
</span>
  name                 = each.value.subnet<span class="hljs-emphasis">_name
  resource_</span>group<span class="hljs-emphasis">_name  = each.value.resource_</span>group
  virtual<span class="hljs-emphasis">_network_</span>name = each.value.vnet<span class="hljs-emphasis">_name
  address_</span>prefixes     = [each.value.address<span class="hljs-emphasis">_prefix]

  depends_</span>on = [azurerm<span class="hljs-emphasis">_virtual_</span>network.example]
}
</code></pre>
<p><strong>What Gets Created:</strong> After running <code>terraform apply</code>, you'll have 3 VNets and 6 subnets.</p>
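<p>As a quick sanity check, you can surface the generated keys and subnet IDs with an output block (a minimal sketch; the output name is illustrative):</p>
<pre><code class="lang-hcl">output "subnet_ids" {
  description = "Map of vnet.subnet keys to subnet IDs"
  value       = { for k, s in azurerm_subnet.example : k =&gt; s.id }
}
</code></pre>
<p>Running <code>terraform output subnet_ids</code> should list all six keys, for example <code>vnet-dev.subnet-vm</code>.</p>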
<p><strong>Key Concepts to Remember</strong></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Concept</td><td>Explanation</td></tr>
</thead>
<tbody>
<tr>
<td>Outer Loop</td><td>Iterates through each VNet</td></tr>
<tr>
<td>Inner Loop</td><td>Iterates through each subnet within that VNet</td></tr>
<tr>
<td>flatten()</td><td>Converts nested lists into a single flat list</td></tr>
<tr>
<td>Map Conversion</td><td>Transforms the flat list into a map for for_each</td></tr>
<tr>
<td>Unique Keys</td><td>"vnet-dev.subnet-vm" ensures each subnet is unique</td></tr>
</tbody>
</table>
</div><p><strong>Why This Approach?</strong></p>
<ol>
<li><p>DRY (Don't Repeat Yourself) - No hardcoded resources</p>
</li>
<li><p>Scalable - Add new VNets/subnets easily by updating the variable</p>
</li>
<li><p>Maintainable - All data in one place</p>
</li>
<li><p>Flexible - Change naming, locations, CIDR blocks easily</p>
</li>
<li><p>Reusable - Use this pattern for other hierarchical resources</p>
</li>
</ol>
<h3 id="heading-conclusion">Conclusion</h3>
<p>Nested loops with flatten() are powerful Terraform patterns for managing hierarchical resources. By understanding how to:</p>
<ul>
<li><p>Loop through parent resources (VNets)</p>
</li>
<li><p>Loop through child resources (Subnets)</p>
</li>
<li><p>Flatten nested lists</p>
</li>
<li><p>Convert lists to maps</p>
</li>
</ul>
<p>you can automate complex infrastructure deployments with minimal code and maximum flexibility.</p>
]]></description><link>https://clouddevopsinsights.com/terraform-nested-loops-and-flatten-a-beginners-guide-with-azure-virtual-networks</link><guid isPermaLink="true">https://clouddevopsinsights.com/terraform-nested-loops-and-flatten-a-beginners-guide-with-azure-virtual-networks</guid><category><![CDATA[Terraform]]></category><category><![CDATA[terraform-cloud]]></category><category><![CDATA[#Terraform #InfrastructureAsCode #DevOps #CloudAutomation #AWS #Azure #GCP #TerraformFunctions #IaC #TerraformScripting #Coding #CloudComputing #Operations #TerraformBestPractices]]></category><category><![CDATA[IaC Automation]]></category><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[Supercharge GitHub Copilot for Terraform with Custom Repository Instructions]]></title><description><![CDATA[<p>GitHub Copilot becomes significantly more powerful when it understands your project's specific conventions, patterns, and architectural decisions. Custom repository instructions provide Copilot with the context it needs to generate code that aligns with your team's standards and best practices, rather than generic suggestions that might not fit your workflow.</p>
<h2 id="heading-what-are-custom-repository-instructions">What Are Custom Repository Instructions?</h2>
<p>Custom repository instructions are markdown files (<code>.github/copilot-instructions.md</code>) that you place in your repository to guide Copilot's code generation. Think of them as a persistent conversation with Copilot about how your project works, what patterns you prefer, and what conventions your team follows.</p>
<p>For Terraform projects, these instructions are particularly valuable because infrastructure-as-code has numerous architectural decisions that vary between organizations: naming conventions, module structures, state management approaches, and resource organization patterns. Without guidance, Copilot might suggest patterns that conflict with your established standards.</p>
<h2 id="heading-why-custom-instructions-matter-for-terraform">Why Custom Instructions Matter for Terraform</h2>
<p>Terraform projects benefit from custom instructions because they help Copilot:</p>
<ul>
<li><p>Generate reusable module code with consistent patterns for conditional resource creation</p>
</li>
<li><p>Follow your team's naming conventions for resources, variables, and outputs</p>
</li>
<li><p>Understand your state management strategy, especially when using remote backends</p>
</li>
<li><p>Apply your preferred dependency management approaches</p>
</li>
<li><p>Use data sources appropriately versus hard-coded values</p>
</li>
<li><p>Implement your organization's tagging and labeling standards</p>
</li>
<li><p>Follow security and compliance requirements specific to your infrastructure</p>
</li>
</ul>
<h2 id="heading-essential-elements-for-terraform-instructions">Essential Elements for Terraform Instructions</h2>
<h3 id="heading-project-context-and-architecture">Project Context and Architecture</h3>
<p>Start by explaining your project's purpose and how Terraform is organized. This helps Copilot understand the big picture.</p>
<pre><code class="lang-markdown"><span class="hljs-section">## Project Overview</span>
This repository manages AWS infrastructure for production and staging environments.
We use a workspace-based approach with remote state in S3.
</code></pre>
<h3 id="heading-module-reusability-with-ternary-operators">Module Reusability with Ternary Operators</h3>
<p>Instruct Copilot to create flexible modules using conditional logic for optional resources.</p>
<pre><code class="lang-markdown"><span class="hljs-section">## Module Patterns</span>

<span class="hljs-section">### Conditional Resource Creation</span>
Use ternary operators and <span class="hljs-code">`count`</span> or <span class="hljs-code">`for_each`</span> to make resources optional:
<span class="hljs-bullet">-</span> Prefer <span class="hljs-code">`count = var.enable_feature ? 1 : 0`</span> for single optional resources
<span class="hljs-bullet">-</span> Use <span class="hljs-code">`for_each`</span> for multiple conditional resources based on maps or sets
<span class="hljs-bullet">-</span> Always provide sensible defaults in variable definitions
</code></pre>
<h3 id="heading-locals-for-computed-values">Locals for Computed Values</h3>
<p>Guide Copilot on when and how to use locals blocks.</p>
<pre><code class="lang-markdown"><span class="hljs-section">## Using Locals</span>

Use <span class="hljs-code">`locals`</span> blocks for:
<span class="hljs-bullet">-</span> Computed values used multiple times
<span class="hljs-bullet">-</span> Complex expressions that would clutter resource definitions
<span class="hljs-bullet">-</span> Combining variables into standardized formats (e.g., naming conventions)
<span class="hljs-bullet">-</span> Environment-specific configurations

Example pattern:
<span class="hljs-code">```hcl
locals {
  common_tags = merge(
    var.tags,
    {
      Environment = var.environment
      ManagedBy   = "Terraform"
      Repository  = "github.com/org/repo"
    }
  )

  resource_name = "${var.project}-${var.environment}-${var.component}"
}
```</span>
</code></pre>
<h3 id="heading-state-management-with-removed-blocks">State Management with Removed Blocks</h3>
<p>For teams managing state with <code>removed</code> blocks rather than CLI commands, this is crucial context.</p>
<pre><code class="lang-markdown"><span class="hljs-section">## State Management</span>

<span class="hljs-section">### Remote State</span>
<span class="hljs-bullet">-</span> Backend configuration is in <span class="hljs-code">`backend.tf`</span>
<span class="hljs-bullet">-</span> Use S3 backend with DynamoDB locking
<span class="hljs-bullet">-</span> Never commit <span class="hljs-code">`.tfstate`</span> files

<span class="hljs-section">### Removing Resources</span>
When removing resources from management, use <span class="hljs-code">`removed`</span> blocks instead of <span class="hljs-code">`terraform state rm`</span>:
<span class="hljs-code">```hcl
removed {
  from = aws_instance.legacy_server

  lifecycle {
    destroy = false
  }
}
```</span>

This ensures state changes are tracked in version control and applied consistently across the team.
</code></pre>
<h3 id="heading-dependency-management">Dependency Management</h3>
<p>Explain how to handle implicit and explicit dependencies.</p>
<pre><code class="lang-markdown"><span class="hljs-section">## Managing Dependencies</span>

<span class="hljs-section">### Implicit Dependencies</span>
Prefer implicit dependencies through resource references:
<span class="hljs-code">```hcl
subnet_id = aws_subnet.private.id
```</span>

<span class="hljs-section">### Explicit Dependencies</span>
Use <span class="hljs-code">`depends_on`</span> only when Terraform cannot infer the dependency:
<span class="hljs-bullet">-</span> Cross-module dependencies that aren't captured by outputs
<span class="hljs-bullet">-</span> Timing issues where resources must be created in sequence
<span class="hljs-bullet">-</span> When destroying resources in specific order matters

Always add a comment explaining why explicit dependency is needed.
</code></pre>
<h3 id="heading-data-sources-vs-hard-coded-values">Data Sources vs. Hard-Coded Values</h3>
<p>Provide guidance on using data sources for dynamic lookups.</p>
<pre><code class="lang-markdown"><span class="hljs-section">## Data Sources</span>

Prefer data sources over hard-coded values for:
<span class="hljs-bullet">-</span> AMI IDs (use latest with filters)
<span class="hljs-bullet">-</span> Availability zones
<span class="hljs-bullet">-</span> VPC and subnet IDs when working across modules
<span class="hljs-bullet">-</span> IAM policies and service principals
<span class="hljs-bullet">-</span> Route53 zone IDs

Example:
<span class="hljs-code">```hcl
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}
```</span>

Avoid data sources for values that should be explicitly versioned in code.
</code></pre>
<h3 id="heading-variable-and-output-conventions">Variable and Output Conventions</h3>
<p>Define naming and documentation standards.</p>
<pre><code class="lang-markdown"><span class="hljs-section">## Variables and Outputs</span>

<span class="hljs-section">### Variable Definitions</span>
<span class="hljs-bullet">-</span> Use snake<span class="hljs-emphasis">_case for all variable names
- Always include description and type
- Provide defaults for optional variables
- Use validation blocks for constrained values
```hcl
variable "environment" {
  description = "Environment name (dev, staging, prod)"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_</span>message = "Environment must be dev, staging, or prod."
  }
}
<span class="hljs-code">```

### Outputs
- Output resource IDs and ARNs that other modules might need
- Include descriptions explaining the output's purpose
- Use sensitive = true for secrets</span>
</code></pre>
<h3 id="heading-tagging-strategy">Tagging Strategy</h3>
<p>Specify your organization's tagging requirements.</p>
<pre><code class="lang-markdown"><span class="hljs-section">## Resource Tagging</span>

All resources that support tags must include:
<span class="hljs-code">```hcl
tags = merge(
  local.common_tags,
  {
    Name = local.resource_name
  }
)
```</span>

Required tags in common<span class="hljs-emphasis">_tags:
- Environment
- ManagedBy
- Repository
- CostCenter
- Owner</span>
</code></pre>
<h3 id="heading-security-and-compliance">Security and Compliance</h3>
<p>Include security-specific guidance.</p>
<pre><code class="lang-markdown"><span class="hljs-section">## Security Requirements</span>

<span class="hljs-bullet">-</span> Never hard-code credentials or secrets
<span class="hljs-bullet">-</span> Use AWS Secrets Manager or SSM Parameter Store for sensitive values
<span class="hljs-bullet">-</span> Enable encryption at rest for all storage resources
<span class="hljs-bullet">-</span> Use private subnets for compute resources when possible
<span class="hljs-bullet">-</span> Enable logging and monitoring for all resources
<span class="hljs-bullet">-</span> Follow principle of least privilege for IAM roles and policies
</code></pre>
<h3 id="heading-module-organization">Module Organization</h3>
<p>Explain how modules should be structured.</p>
<pre><code class="lang-markdown"><span class="hljs-section">## Module Structure</span>

Standard module layout: modules/ <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">module-name</span>&gt;</span></span>/ main.tf # Primary resource definitions variables.tf # Input variables outputs.tf # Output values versions.tf # Provider and Terraform version constraints locals.tf # Local values (if needed) data.tf # Data sources (if needed) README.md # Module documentation

<span class="hljs-section">#module guildelines</span>
Each module should be self-contained and reusable.
</code></pre>
<h3 id="heading-testing-and-validation">Testing and Validation</h3>
<p>Provide instructions for testing patterns.</p>
<pre><code class="lang-markdown"><span class="hljs-section">## Testing</span>

<span class="hljs-bullet">-</span> Use <span class="hljs-code">`terraform fmt`</span> to format all .tf files
<span class="hljs-bullet">-</span> Run <span class="hljs-code">`terraform validate`</span> before committing
<span class="hljs-bullet">-</span> Use <span class="hljs-code">`tflint`</span> with the AWS ruleset
<span class="hljs-bullet">-</span> Include examples/ directory with working examples of module usage
<span class="hljs-bullet">-</span> Use <span class="hljs-code">`terraform-docs`</span> to generate module documentation
</code></pre>
<h2 id="heading-complete-sample-instructions-file">Complete Sample Instructions File</h2>
<p>Here's a comprehensive example bringing all these elements together:</p>
<pre><code class="lang-markdown"><span class="hljs-section"># Terraform Custom Instructions for GitHub Copilot</span>

<span class="hljs-section">## Project Overview</span>
This repository manages multi-environment AWS infrastructure using Terraform modules.
We deploy to dev, staging, and production environments with separate AWS accounts.

<span class="hljs-section">## Code Style and Conventions</span>

<span class="hljs-section">### Naming</span>
<span class="hljs-bullet">-</span> Use snake<span class="hljs-emphasis">_case for resources, variables, outputs, and locals
- Resource names: `<span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">resource_type</span>&gt;</span></span>_</span><span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">descriptive_name</span>&gt;</span></span>`
<span class="hljs-bullet">-</span> Variable names should be descriptive and unabbreviated when possible
<span class="hljs-bullet">-</span> Module directory names: kebab-case

<span class="hljs-section">### File Organization</span>
<span class="hljs-bullet">-</span> <span class="hljs-code">`main.tf`</span>: Primary resource definitions
<span class="hljs-bullet">-</span> <span class="hljs-code">`variables.tf`</span>: Input variables only
<span class="hljs-bullet">-</span> <span class="hljs-code">`outputs.tf`</span>: Output values only
<span class="hljs-bullet">-</span> <span class="hljs-code">`locals.tf`</span>: Local value computations
<span class="hljs-bullet">-</span> <span class="hljs-code">`data.tf`</span>: Data source lookups
<span class="hljs-bullet">-</span> <span class="hljs-code">`versions.tf`</span>: Terraform and provider version constraints
<span class="hljs-bullet">-</span> <span class="hljs-code">`backend.tf`</span>: Backend configuration

<span class="hljs-section">## Module Design Patterns</span>

<span class="hljs-section">### Conditional Resources</span>
Use ternary operators with count for optional resources:
<span class="hljs-code">```hcl
resource "aws_cloudwatch_log_group" "this" {
  count = var.enable_logging ? 1 : 0

  name              = "/aws/lambda/${var.function_name}"
  retention_in_days = var.log_retention_days

  tags = local.common_tags
}
```</span>

For multiple conditional resources, prefer for<span class="hljs-emphasis">_each:
```hcl
resource "aws_</span>subnet" "private" {
  for<span class="hljs-emphasis">_each = var.create_</span>private<span class="hljs-emphasis">_subnets ? var.private_</span>subnet<span class="hljs-emphasis">_cidrs : {}

  vpc_</span>id            = aws<span class="hljs-emphasis">_vpc.main.id
  cidr_</span>block        = each.value
  availability<span class="hljs-emphasis">_zone = each.key

  tags = merge(
    local.common_</span>tags,
<span class="hljs-code">    {
      Name = "${local.resource_prefix}-private-${each.key}"
      Tier = "private"
    }
  )
}
```
</span>
<span class="hljs-section">### Locals Usage</span>
Use locals for:
<span class="hljs-bullet">-</span> Repeated computed values
<span class="hljs-bullet">-</span> Name prefixes following our convention
<span class="hljs-bullet">-</span> Merging tags
<span class="hljs-bullet">-</span> Complex conditional logic
<span class="hljs-code">```hcl
locals {
  resource_prefix = "${var.project_name}-${var.environment}"

  common_tags = merge(
    var.additional_tags,
    {
      Environment  = var.environment
      ManagedBy    = "Terraform"
      Repository   = "github.com/myorg/infrastructure"
      CostCenter   = var.cost_center
      Owner        = var.owner_email
    }
  )

  # Complex logic in locals keeps resources clean
  enable_enhanced_monitoring = var.environment == "prod" ? true : var.enable_monitoring

  backup_retention = {
    dev     = 7
    staging = 14
    prod    = 30
  }
  retention_days = local.backup_retention[var.environment]
}
```</span>

<span class="hljs-section">## State Management</span>

<span class="hljs-section">### Backend Configuration</span>
<span class="hljs-bullet">-</span> Use S3 backend with DynamoDB state locking
<span class="hljs-bullet">-</span> Backend config in <span class="hljs-code">`backend.tf`</span>
<span class="hljs-bullet">-</span> State file path: <span class="hljs-code">`&lt;environment&gt;/&lt;component&gt;/terraform.tfstate`</span>

<span class="hljs-section">### Removing Resources from State</span>
Use <span class="hljs-code">`removed`</span> blocks instead of CLI commands for team consistency:
<span class="hljs-code">```hcl
removed {
  from = aws_instance.deprecated_server

  lifecycle {
    destroy = false  # Keep the resource, just remove from state
  }
}

# Or to track actual resource deletion
removed {
  from = module.legacy_database

  lifecycle {
    destroy = true
  }
}
```</span>

This ensures state changes are version-controlled and reviewable.

<span class="hljs-section">## Dependency Management</span>

<span class="hljs-section">### Prefer Implicit Dependencies</span>
<span class="hljs-code">```hcl
# Good - implicit dependency
resource "aws_eip" "nat" {
  vpc = true
  tags = local.common_tags
}

resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.nat.id  # Implicit dependency
  subnet_id     = aws_subnet.public.id
}
```</span>

<span class="hljs-section">### Explicit Dependencies (Use Sparingly)</span>
Only use <span class="hljs-code">`depends_on`</span> when necessary:
<span class="hljs-code">```hcl
resource "aws_iam_role_policy_attachment" "lambda" {
  role       = aws_iam_role.lambda.name
  policy_arn = aws_iam_policy.lambda.arn

  # Explicit dependency needed for eventual consistency
  depends_on = [aws_iam_role.lambda]
}
```</span>

Always include a comment explaining why explicit dependency is required.

<span class="hljs-section">## Data Sources</span>

Use data sources for dynamic lookups, not for values that should be versioned:
<span class="hljs-code">```hcl
# Good - dynamic lookup of latest AMI
data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# Good - lookup existing VPC
data "aws_vpc" "main" {
  tags = {
    Name = "${var.project_name}-vpc"
  }
}

# Good - get available AZs
data "aws_availability_zones" "available" {
  state = "available"
}

# Bad - hard-code when data source is appropriate
resource "aws_instance" "web" {
  ami = "ami-0c55b159cbfafe1f0"  # Don't do this
  # ...
}
```</span>

<span class="hljs-section">## Variables</span>

<span class="hljs-section">### Variable Definitions</span>
Always include description, type, and defaults when appropriate:
<span class="hljs-code">```hcl
variable "environment" {
  description = "Environment name: dev, staging, or prod"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be dev, staging, or prod."
  }
}

variable "instance_type" {
  description = "EC2 instance type for web servers"
  type        = string
  default     = "t3.micro"
}

variable "enable_monitoring" {
  description = "Enable detailed CloudWatch monitoring"
  type        = bool
  default     = false
}

variable "subnet_cidrs" {
  description = "Map of availability zone to subnet CIDR blocks"
  type        = map(string)
  default     = {}
}

variable "allowed_cidr_blocks" {
  description = "List of CIDR blocks allowed to access resources"
  type        = list(string)
  default     = []

  validation {
    condition = alltrue([
      for cidr in var.allowed_cidr_blocks : can(cidrhost(cidr, 0))
    ])
    error_message = "All elements must be valid CIDR blocks."
  }
}
```</span>

<span class="hljs-section">### Variable Precedence</span>
Variables should be provided in this order:
<span class="hljs-bullet">1.</span> Environment-specific <span class="hljs-code">`.tfvars`</span> files
<span class="hljs-bullet">2.</span> Common <span class="hljs-code">`terraform.tfvars`</span>
<span class="hljs-bullet">3.</span> Defaults in <span class="hljs-code">`variables.tf`</span>

<span class="hljs-section">## Outputs</span>

Include clear descriptions and mark sensitive values:
<span class="hljs-code">```hcl
output "vpc_id" {
  description = "ID of the created VPC"
  value       = aws_vpc.main.id
}

output "private_subnet_ids" {
  description = "List of private subnet IDs for compute resources"
  value       = [for subnet in aws_subnet.private : subnet.id]
}

output "database_endpoint" {
  description = "Connection endpoint for the RDS instance"
  value       = aws_db_instance.main.endpoint
  sensitive   = true
}
```</span>

<span class="hljs-section">## Tagging Strategy</span>

All taggable resources must include common<span class="hljs-emphasis">_tags:
```hcl
resource "aws_</span>instance" "web" {
  # ... other configuration ...

  tags = merge(
<span class="hljs-code">    local.common_tags,
    {
      Name      = "${local.resource_prefix}-web-${count.index + 1}"
      Component = "web-server"
      Backup    = "daily"
    }
  )
}
```
</span>
Required tags (enforced via SCPs):
<span class="hljs-bullet">-</span> Environment
<span class="hljs-bullet">-</span> ManagedBy
<span class="hljs-bullet">-</span> Repository
<span class="hljs-bullet">-</span> CostCenter
<span class="hljs-bullet">-</span> Owner

<span class="hljs-section">## Security Best Practices</span>

<span class="hljs-section">### Secrets Management</span>
Never hard-code secrets. Use AWS Secrets Manager or SSM Parameter Store:
<span class="hljs-code">```hcl
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "${var.project_name}/${var.environment}/db-password"
}

resource "aws_db_instance" "main" {
  # ... other configuration ...
  password = data.aws_secretsmanager_secret_version.db_password.secret_string
}
```</span>

<span class="hljs-section">### Encryption</span>
Enable encryption for all storage:
<span class="hljs-code">```hcl
resource "aws_s3_bucket" "data" {
  bucket = "${local.resource_prefix}-data"

  tags = local.common_tags
}

resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
  bucket = aws_s3_bucket.data.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.s3.arn
    }
  }
}
```</span>

<span class="hljs-section">### Network Security</span>
<span class="hljs-bullet">-</span> Place compute resources in private subnets
<span class="hljs-bullet">-</span> Use security groups with minimal required access
<span class="hljs-bullet">-</span> Enable VPC flow logs
<span class="hljs-bullet">-</span> Use AWS PrivateLink for AWS service access when possible

<span class="hljs-section">## Module Structure</span>

Standard module organization:
</code></pre>
<h2 id="heading-sample-githubcopilot-instructionsmdhttpcopilot-instructionsmd-for-vs-code">Sample .github/<a target="_blank" href="http://copilot-instructions.md">copilot-instructions.md</a> for VS Code</h2>
<p>For VS Code Copilot customization, create a similar file that focuses on editor-specific workflows:</p>
<pre><code class="lang-markdown"><span class="hljs-section"># Terraform Development with GitHub Copilot</span>

<span class="hljs-section">## Project Context</span>
AWS infrastructure managed with Terraform in a multi-environment setup.
Follow the patterns and conventions defined in our terraform modules.

<span class="hljs-section">## When Writing Terraform Code</span>

<span class="hljs-section">### Always Include</span>
<span class="hljs-bullet">-</span> Type definitions for all variables
<span class="hljs-bullet">-</span> Descriptions for variables and outputs
<span class="hljs-bullet">-</span> Validation blocks for constrained variables
<span class="hljs-bullet">-</span> Common tags merged with resource-specific tags
<span class="hljs-bullet">-</span> Comments explaining complex conditionals

<span class="hljs-section">### Naming Patterns</span>
Generate resource names using this pattern:
<span class="hljs-code">```hcl
locals {
  name_prefix = "${var.project_name}-${var.environment}"
}

resource "aws_xxx" "example" {
  tags = merge(local.common_tags, {
    Name = "${local.name_prefix}-descriptive-name"
  })
}
```</span>

<span class="hljs-section">### Conditional Resources</span>
When I ask for optional resources, use count with ternary:
<span class="hljs-code">```hcl
count = var.enable_feature ? 1 : 0
```</span>

Reference with: <span class="hljs-code">`aws_resource.example[0].id`</span>

For multiple resources based on a map, use for<span class="hljs-emphasis">_each.

### Data Sources
Suggest data sources for:
- Latest AMIs (with filters)
- Availability zones
- Existing VPCs/subnets
- IAM policy documents

### Security Defaults
When creating resources:
- Enable encryption by default
- Use private subnets for compute
- Apply least-privilege IAM policies
- Enable logging and monitoring

### Suggest Modules
If I'm writing repetitive resource configurations, suggest creating a reusable module.

## Code Completion Preferences

When I start typing:
- `variable` - include description, type, and validation if constrained
- `output` - include description
- `resource` - include common_</span>tags
<span class="hljs-bullet">-</span> <span class="hljs-code">`data`</span> - include relevant filters
<span class="hljs-bullet">-</span> <span class="hljs-code">`locals`</span> - use for name prefixes and tag merging

<span class="hljs-section">## Testing</span>
Remind me to run:
<span class="hljs-bullet">-</span> <span class="hljs-code">`terraform fmt`</span> before committing
<span class="hljs-bullet">-</span> <span class="hljs-code">`terraform validate`</span> to check syntax
<span class="hljs-bullet">-</span> <span class="hljs-code">`terraform plan`</span> to preview changes
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Custom repository instructions transform GitHub Copilot from a generic code completion tool into a knowledgeable team member that understands your specific Terraform patterns and conventions. By investing time in creating comprehensive instructions, you'll receive more relevant suggestions, reduce code review cycles, and maintain consistency across your infrastructure codebase.</p>
<p>Start with the essential elements outlined above, then refine your instructions based on the patterns and challenges unique to your organization. As your team's conventions evolve, keep your instructions updated to ensure Copilot continues to provide valuable, context-aware assistance.</p>
<h3 id="heading-reference">Reference</h3>
<p>If you are interested in learning more about how you can leverage Copilot, I recommend checking out the repos below:</p>
<p><a target="_blank" href="https://github.com/github/awesome-copilot">awesome-copilot</a></p>
<p><a target="_blank" href="https://github.com/github-samples/copilot-in-a-box">copilot-in-a-box</a></p>
]]></description><link>https://clouddevopsinsights.com/supercharge-github-copilot-for-terraform-with-custom-repository-instructions</link><guid isPermaLink="true">https://clouddevopsinsights.com/supercharge-github-copilot-for-terraform-with-custom-repository-instructions</guid><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[Azure Verified Modules and the Landing Zone Accelerator: Building Trustworthy Cloud Foundations]]></title><description><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Moving to the cloud isn't just about deploying virtual machines and services; it's about creating a <strong>repeatable, secure, and governed foundation</strong> that scales as your organization grows. Microsoft's <strong>Azure Landing Zone Accelerator</strong> provides a structured way to establish this foundation, and <strong>Azure Verified Modules (AVMs)</strong> are the building blocks that make it reliable.</p>
<p>In this article, we'll explore what the Landing Zone Accelerator is, how AVMs work, and why they're a game changer for enterprises adopting Azure.</p>
<hr />
<h2 id="heading-the-challenge-of-cloud-foundations">The Challenge of Cloud Foundations</h2>
<p>Enterprises often face a common set of challenges when deploying workloads in Azure:</p>
<ul>
<li><p>Inconsistent configurations across environments</p>
</li>
<li><p>Security and compliance gaps due to misapplied policies</p>
</li>
<li><p>Lack of repeatability when scaling workloads</p>
</li>
<li><p>Manual effort to reapply governance and monitoring settings</p>
</li>
</ul>
<p>Without a baseline, cloud sprawl can lead to <strong>uncontrolled costs, security risks, and operational headaches</strong>. That's where the <strong>Landing Zone Accelerator</strong> and <strong>Verified Modules</strong> come in.</p>
<hr />
<h2 id="heading-what-is-the-azure-landing-zone-accelerator">What is the Azure Landing Zone Accelerator?</h2>
<p>The <strong>Azure Landing Zone Accelerator</strong> is Microsoft's implementation of <strong>Cloud Adoption Framework (CAF) landing zones</strong>. It provides:</p>
<ul>
<li><p><strong>Architecture guidance</strong>: how to design identity, networking, security, and monitoring foundations</p>
</li>
<li><p><strong>Pre-built modules</strong>: reusable templates for deploying best-practice Azure resources</p>
</li>
<li><p><strong>Governance and compliance</strong>: policy-driven controls to enforce security standards</p>
</li>
<li><p><strong>Flexibility</strong>: deploy piecemeal (per module) or as a full baseline</p>
</li>
</ul>
<p>Think of it as a <strong>starter kit</strong> for enterprise-scale Azure environments: pre-configured, opinionated, and ready to extend.</p>
<hr />
<h2 id="heading-introducing-azure-verified-modules-avms">Introducing Azure Verified Modules (AVMs)</h2>
<p>So what makes a module verified?</p>
<p>An <strong>Azure Verified Module</strong> is a reusable infrastructure-as-code module that has been:</p>
<ul>
<li><p><strong>Reviewed by Microsoft engineers</strong></p>
</li>
<li><p><strong>Aligned with Azure best practices</strong></p>
</li>
<li><p><strong>Validated with testing for functionality and security</strong></p>
</li>
<li><p><strong>Versioned and maintained</strong> for lifecycle management</p>
</li>
</ul>
<p>Unlike custom or community modules, AVMs give you <strong>confidence</strong>: you're using a module that's been vetted to meet enterprise and security standards.</p>
<p>Examples of AVMs include:</p>
<ul>
<li><p>Identity &amp; Role assignments</p>
</li>
<li><p>Virtual networks and subnets</p>
</li>
<li><p>Monitoring and diagnostic settings</p>
</li>
<li><p>Security policies (e.g., Defender for Cloud)</p>
</li>
</ul>
<hr />
<h2 id="heading-how-avms-fit-into-the-landing-zone-accelerator">How AVMs Fit Into the Landing Zone Accelerator</h2>
<p>The Landing Zone Accelerator is essentially a <strong>composition of AVMs</strong>. For example:</p>
<ol>
<li><p><strong>Identity &amp; Access</strong></p>
<ul>
<li><p>Azure AD integration</p>
</li>
<li><p>Role-based access control (RBAC)</p>
</li>
</ul>
</li>
<li><p><strong>Networking</strong></p>
<ul>
<li>Virtual networks, subnets, private DNS, and firewalls</li>
</ul>
</li>
<li><p><strong>Management &amp; Monitoring</strong></p>
<ul>
<li><p>Azure Monitor setup</p>
</li>
<li><p>Log Analytics workspaces</p>
</li>
<li><p>Policy assignments</p>
</li>
</ul>
</li>
<li><p><strong>Security &amp; Compliance</strong></p>
<ul>
<li><p>Microsoft Defender for Cloud policies</p>
</li>
<li><p>Blueprints for regulatory compliance</p>
</li>
</ul>
</li>
</ol>
<p>Each of these pieces can be deployed via a <strong>verified module</strong>, ensuring consistent quality and security.</p>
<p>The Landing Zone Accelerator is essentially a <strong>composition of AVMs</strong>. Each AVM handles a specific domain (identity, networking, monitoring, security), and together they provide a full <strong>enterprise-ready foundation</strong>.</p>
<p>One of the most important first steps is <strong>subscription vending</strong>.</p>
<h2 id="heading-example-subscription-vending-with-avms">Example: Subscription Vending with AVMs</h2>
<p><strong>Subscription vending</strong> is the process of creating and configuring new Azure subscriptions in a <strong>standardized, governed way</strong>. Instead of manually creating subscriptions and applying inconsistent configurations, you can use AVMs to automate and enforce standards.</p>
<p>A subscription vending AVM might include:</p>
<ul>
<li><p>Creation of the subscription</p>
</li>
<li><p>Assignment of <strong>management groups</strong></p>
</li>
<li><p>Application of <strong>Azure Policy</strong> for compliance</p>
</li>
<li><p>Enabling monitoring and diagnostic settings</p>
</li>
<li><p>Setting up baseline RBAC roles</p>
</li>
</ul>
<p>This ensures every subscription is born <strong>secure, compliant, and consistent</strong>: a key requirement for large organizations managing hundreds of workloads.</p>
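<p>As a rough illustration, here is a minimal Azure PowerShell sketch of the post-creation steps such a vending module automates. All names and IDs below are placeholders, the chosen built-in policy is just an example, and a real vending module would wrap these steps in proper error handling:</p>
<pre><code class="lang-powershell"># Hypothetical sketch of subscription vending steps (requires Az.Resources)
$subId = '00000000-0000-0000-0000-000000000000'   # placeholder subscription ID

# 1. Place the new subscription under a management group
New-AzManagementGroupSubscription -GroupName 'corp-landing-zones' -SubscriptionId $subId

# 2. Assign the built-in 'Allowed locations' policy at subscription scope
$definition = Get-AzPolicyDefinition -Id '/providers/Microsoft.Authorization/policyDefinitions/e56962a6-4747-49cd-b67b-bf8b01975c4c'
New-AzPolicyAssignment -Name 'allowed-locations' -Scope "/subscriptions/$subId" `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{ listOfAllowedLocations = @('australiaeast') }

# 3. Grant a baseline RBAC role to the platform team (placeholder object ID)
New-AzRoleAssignment -ObjectId '11111111-1111-1111-1111-111111111111' `
    -RoleDefinitionName 'Reader' -Scope "/subscriptions/$subId"
</code></pre>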
<h2 id="heading-bootstrapping-your-landing-zone">Bootstrapping Your Landing Zone</h2>
<p>Once subscriptions are managed, the next step is <strong>bootstrapping</strong>. This is about preparing your environment so that other teams can deploy confidently.</p>
<p>Bootstrapping often includes:</p>
<ul>
<li><p>Assigning core <strong>management groups</strong> and policies</p>
</li>
<li><p>Deploying <strong>identity AVMs</strong> (like role assignments, UAMI/SMI configurations)</p>
</li>
<li><p>Setting up <strong>automation hooks</strong> (CI/CD pipelines for IaC deployments)</p>
</li>
<li><p>Ensuring <strong>billing and tagging standards</strong> are enforced</p>
</li>
</ul>
<p>Bootstrapping is like laying the <strong>concrete foundation</strong> of a house: you don't see it once it's built, but everything depends on it.</p>
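<p>For instance, a minimal sketch of carving out a management-group hierarchy during bootstrapping might look like this in Azure PowerShell (group names are placeholders):</p>
<pre><code class="lang-powershell"># Hypothetical management-group hierarchy (requires Az.Resources)
New-AzManagementGroup -GroupName 'alz' -DisplayName 'Landing Zones'
New-AzManagementGroup -GroupName 'alz-corp' -DisplayName 'Corp' `
    -ParentId '/providers/Microsoft.Management/managementGroups/alz'
</code></pre>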
<h2 id="heading-platform-landing-zone">Platform Landing Zone</h2>
<p>The <strong>platform landing zone</strong> is where your <strong>enterprise shared services</strong> live. It's built using AVMs to establish services that <strong>all applications and business units will consume</strong>, such as:</p>
<ul>
<li><p><strong>Networking AVMs</strong>: Hub-and-spoke VNet, private DNS, firewalls, ExpressRoute/Virtual WAN</p>
</li>
<li><p><strong>Security AVMs</strong>: Microsoft Defender for Cloud, Sentinel integrations, Key Vault</p>
</li>
<li><p><strong>Management AVMs</strong>: Log Analytics, Azure Monitor, policy baselines, update management</p>
</li>
</ul>
<p>The platform landing zone provides the <strong>secure, monitored backbone</strong> of your cloud estate. Without it, application teams would have to reinvent networking, monitoring, and security every time.</p>
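<p>To give a flavour of the networking layer, here is a minimal hub-network sketch in Azure PowerShell. The names, region, and address space are placeholders, and in a real platform deployment this would come from the networking AVMs rather than hand-written commands:</p>
<pre><code class="lang-powershell"># Hypothetical hub VNet for the connectivity subscription (requires Az.Network)
New-AzResourceGroup -Name 'rg-connectivity' -Location 'australiaeast'
New-AzVirtualNetwork -Name 'vnet-hub' -ResourceGroupName 'rg-connectivity' `
    -Location 'australiaeast' -AddressPrefix '10.0.0.0/16'
</code></pre>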
<h2 id="heading-application-landing-zone">Application Landing Zone</h2>
<p>Finally, we reach the <strong>application landing zone</strong>, where business workloads actually run.</p>
<p>AVMs in this layer handle:</p>
<ul>
<li><p>App-specific networking (VNets/subnets inside the spoke)</p>
</li>
<li><p>Identity (role assignments, managed identities)</p>
</li>
<li><p>Observability (diagnostic settings tied to central monitoring)</p>
</li>
<li><p>Security policies tailored to workloads (e.g., PCI-DSS apps, healthcare apps)</p>
</li>
</ul>
<p>Because of the baseline AVMs applied during subscription vending and the shared platform landing zone, the <strong>application landing zone can focus purely on the workload</strong> without worrying about governance gaps.</p>
<h2 id="heading-policy-context-amp-policy-versioning">Policy Context &amp; Policy Versioning</h2>
<p>When using Azure Landing Zones and Azure Verified Modules (AVMs), <strong>policies</strong> are central to ensuring consistent, secure, and compliant infrastructure. They act as guardrails: restricting what can be deployed, enforcing settings, and ensuring resources meet standards automatically. The Landing Zone Accelerator has a formal approach to policies, especially around <strong>policy versioning</strong>, which helps maintain stability and flexibility over time.</p>
<h3 id="heading-what-is-policy-context">What is Policy Context</h3>
<ul>
<li><p>A policy in Azure is a definition that enforces rules over resources: e.g., allowed VM sizes, permitted locations, enabling diagnostic logs, enforcing tag usage.</p>
</li>
<li><p>In Landing Zones, policies are typically applied via <strong>initiative definitions</strong> (policy sets) that bundle multiple individual policies together. These ensure a baseline of compliance across identity, security, networking, and monitoring.</p>
</li>
<li><p>The AVMs and the Landing Zone Accelerator include many of these policy initiatives out of the box, so when you deploy modules, relevant policies are baked in.</p>
</li>
</ul>
<h3 id="heading-why-policy-versioning-matters">Why Policy Versioning Matters</h3>
<p>Policy versioning in the Azure Landing Zones Accelerator guides how policies evolve while keeping backward compatibility and avoiding breaking changes. Key points:</p>
<ul>
<li><p><strong>Immutable releases</strong>: Once a policy (or initiative) version is released and consumed, that version should not change in a breaking way. If new policy requirements or enhancements are needed, a <em>new version</em> is published.</p>
</li>
<li><p><strong>Semver or version numbering</strong>: Policies and initiatives are versioned explicitly. This means you (or your deployment pipelines) can pin to policy version <code>1.0.0</code>, <code>1.1.0</code>, etc., ensuring that when Azure or Microsoft releases updates for policy sets, your environment doesn't change unexpectedly.</p>
</li>
<li><p><strong>Upgrade paths</strong>: When newer policy versions are released, you can plan ahead: test them in non-production, review the impact, and gradually promote them to production.</p>
</li>
</ul>
<h3 id="heading-how-policy-versioning-works-in-the-accelerator">How Policy Versioning Works in the Accelerator</h3>
<ul>
<li><p>The Landing Zone Accelerator stores policy definitions and initiatives in its GitHub repository. Each policy package or module has a version number. When you reference a policy module in your Bicep or AVM template, you specify the version you want.</p>
</li>
<li><p>Example snippet (in Bicep / AVM context):</p>
<pre><code class="lang-bash">  module baselinePolicy <span class="hljs-string">'br/public:avm/policy/initiative/baseline:1.2.0'</span> = {
    name: <span class="hljs-string">'baseline-policy'</span>
    params: {
      allowedLocations: [ <span class="hljs-string">'eastus'</span>, <span class="hljs-string">'australiaeast'</span> ]
      tagDefaults: { environment: <span class="hljs-string">'prod'</span> }
    }
  }
</code></pre>
<p>  In this example, <code>baseline:1.2.0</code> pins the policy initiative to version 1.2.0, so later updates to 1.3.0 or 2.0.0 are opt-in rather than automatic.</p>
</li>
</ul>
<h3 id="heading-best-practices-using-policy-versioning-in-your-adoption">Best Practices: Using Policy Versioning in Your Adoption</h3>
<ul>
<li><p><strong>Pin your policy versions</strong> in your IaC templates so you know exactly what controls are being applied.</p>
</li>
<li><p><strong>Monitor new releases</strong> of policy versions from the Accelerator repo. Review change logs.</p>
</li>
<li><p><strong>Test policy changes</strong> in dev/test subscriptions before rolling them out.</p>
</li>
<li><p><strong>Coordinate with your governance team</strong>: policy changes can impact deployments, cost, and compliance audits.</p>
</li>
</ul>
<h2 id="heading-why-this-matters">Why This Matters</h2>
<p>This layered approach, <strong>subscription vending → bootstrapping → platform landing zone → application landing zone</strong>, is what makes the Landing Zone Accelerator with AVMs so powerful.</p>
<p>Each step builds on the previous one:</p>
<ol>
<li><p>Subscriptions are created consistently.</p>
</li>
<li><p>Bootstrapping enforces governance.</p>
</li>
<li><p>Platform services are shared and secure.</p>
</li>
<li><p>Applications deploy quickly and safely.</p>
</li>
</ol>
<p>This ensures your Azure environment is <strong>scalable, secure, and operationally efficient from day one</strong>.</p>
<h2 id="heading-references">References</h2>
<ul>
<li><p><strong>Azure Landing Zones</strong>: <a target="_blank" href="https://azure.github.io/Azure-Landing-Zones/">https://azure.github.io/Azure-Landing-Zones/</a></p>
</li>
<li><p><strong>Landing Zone Accelerator (User Guide)</strong>: <a target="_blank" href="https://azure.github.io/Azure-Landing-Zones/accelerator/">https://azure.github.io/Azure-Landing-Zones/accelerator/</a></p>
</li>
</ul>
]]></description><link>https://clouddevopsinsights.com/azure-verified-modules-and-the-landing-zone-accelerator-building-trustworthy-cloud-foundations</link><guid isPermaLink="true">https://clouddevopsinsights.com/azure-verified-modules-and-the-landing-zone-accelerator-building-trustworthy-cloud-foundations</guid><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[Hosting  Azure MCP Server in VS Code: My Experience with the MCP Server]]></title><description><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>Azure's Model Context Protocol (MCP) provides a standardized way to serve domain-specific context to large language models. In this post, I'll walk you through how I set up my own MCP Server in VS Code and what tools are available in the preview version of the server.</p>
<h3 id="heading-azure-mcp-server">Azure MCP Server</h3>
<p>The Azure MCP Server enables AI agents and other types of clients to interact with Azure resources through natural language commands. It implements the Model Context Protocol (MCP) to provide these key features:</p>
<ul>
<li><p><strong>MCP support</strong>: Because the Azure MCP Server implements the Model Context Protocol, it works with MCP clients such as GitHub Copilot agent mode, the OpenAI Agents SDK, and Semantic Kernel.</p>
</li>
<li><p><strong>Entra ID support</strong>: The Azure MCP Server uses Entra ID through the Azure Identity library to follow Azure authentication best practices.</p>
</li>
<li><p><strong>Service and tool support</strong>: The Azure MCP Server supports Azure services and tools such as the Azure CLI and Azure Developer CLI (azd).</p>
</li>
</ul>
<h2 id="heading-introduction-to-the-model-context-protocol-mcp"><strong>Introduction to the Model Context Protocol (MCP)</strong></h2>
<p>The Model Context Protocol (MCP) is an open protocol designed to manage how language models interact with external tools, memory, and context in a safe, structured, and stateful way. MCP defines a client-server architecture with several components:</p>
<ul>
<li><p><strong>Hosts</strong>: Apps that use MCP clients to connect to and consume data from MCP servers.</p>
</li>
<li><p><strong>Clients</strong>: Components of MCP hosts that manage connections and retrieve data from MCP servers.</p>
</li>
<li><p><strong>Servers</strong>: Programs that provide features like data resources, tools for performing actions, and prompts to guide interactions.</p>
</li>
</ul>
<p>For example, VS Code is considered a host, and GitHub Copilot agent mode in VS Code acts as an MCP client that connects to MCP servers. You might also build a custom intelligent app that hosts its own MCP client that connects to MCP servers.</p>
<p>The Azure MCP Server implements a set of tools per the Model Context Protocol. AI agents and other types of clients use these tools to interact with Azure resources.</p>
<h3 id="heading-spin-up-your-own-azure-mcp-server-in-vs-code"><strong>Spin Up Your Own Azure MCP Server in VS Code</strong></h3>
<p>I followed the Microsoft documentation; the prerequisites I installed are listed below. I am using Windows Subsystem for Linux (WSL), so this tutorial provides the steps for Ubuntu.</p>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<ul>
<li><p>Azure Account</p>
</li>
<li><p>Python 3.9 or higher</p>
</li>
<li><p>Node JS installed locally</p>
</li>
</ul>
<h3 id="heading-installing-nodejs-and-dependencies-for-vs-code-in-wsl">Installing Nodejs and dependencies for VS Code in WSL</h3>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>


<span class="hljs-comment"># --- 1. Setting up NVM (Node Version Manager) and Node.js ---</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"## 1. NVM and Node.js Installation"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"NVM allows you to manage multiple Node.js versions on your system."</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"# Core Dependencies for NVM (often pre-installed in modern Ubuntu WSL)"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"sudo apt update"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"sudo apt install -y curl"</span> <span class="hljs-comment"># curl is used to download the nvm install script</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"# NVM Installation (downloads the nvm script and sources it in your .bashrc/.zshrc)"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/master/install.sh | bash"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">""</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"# After installation, you must source your shell config or open a new terminal"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"source ~/.bashrc # or ~/.zshrc if you use zsh"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">""</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"# Node.js Installation (via NVM)"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"nvm install --lts # Installs the latest Long Term Support version of Node.js"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"nvm use --lts     # Sets the LTS version as the default for the current shell session"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">""</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"# Core components installed with Node.js:"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  - node: The JavaScript runtime itself."</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  - npm: Node Package Manager, used for installing and managing Node.js packages."</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  - npx: Node Package Execute, used for executing Node.js package binaries (often temporary)."</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">""</span>

<span class="hljs-comment"># --- 2. Configuring VS Code for WSL ---</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"## 2. VS Code WSL Integration"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"The 'Remote - WSL' extension bridges VS Code on Windows to your WSL environment."</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"# VS Code Extension Installation (done from VS Code Extensions view on Windows side)"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  - Remote - WSL Extension (Microsoft)"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">""</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"# WSL-side Configuration for VS Code Server"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  - ~/.vscode-server/ (directory created by VS Code, contains VS Code Server binaries and extensions)"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  - ~/.vscode-server/server-env-setup (custom file for setting environment variables for VS Code Server)"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"    - Inside this file, you manually added:"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"      export NVM_DIR=\"<span class="hljs-variable">$HOME</span>/.nvm\""</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"      [ -s \"\$NVM_DIR/nvm.sh\" ] &amp;&amp; \\. \"\$NVM_DIR/nvm.sh\""</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"      [ -s \"\$NVM_DIR/bash_completion\" ] &amp;&amp; \\. \"\$NVM_DIR/bash_completion\""</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"      nvm use --silent &lt;your_node_version&gt;"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">""</span>

<span class="hljs-comment"># --- 3. Installing Azure CLI ---</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"## 3. Azure CLI Installation"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Azure CLI allows interaction with Azure resources from the command line."</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"# System Dependencies for Azure CLI (specific to apt-based distributions like Ubuntu)"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"sudo apt update"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"sudo apt install -y ca-certificates curl apt-transport-https lsb-release gnupg"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  - ca-certificates: Provides root certificates for secure communication (HTTPS)."</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  - curl: Tool for transferring data with URLs (used to download the Microsoft GPG key)."</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  - apt-transport-https: Enables apt to fetch packages over HTTPS."</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  - lsb-release: Provides information about the Linux distribution (used to get codename)."</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  - gnupg: GNU Privacy Guard, used for managing cryptographic keys (for verifying package authenticity)."</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">""</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"# Adding Microsoft GPG key and Azure CLI repository"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"curl -sL https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor | sudo tee /etc/apt/keyrings/microsoft.gpg &gt; /dev/null"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  - This imports Microsoft's public key to authenticate Azure CLI packages."</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"AZ_REPO=\$(lsb_release -cs)"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"echo \"deb [arch=amd64 signed-by=/etc/apt/keyrings/microsoft.gpg] https://packages.microsoft.com/repos/azure-cli/ \$AZ_REPO main\" | sudo tee /etc/apt/sources.list.d/azure-cli.list"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  - This adds the official Azure CLI repository to your system's package sources."</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">""</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"# Azure CLI Package Installation"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"sudo apt update"</span> <span class="hljs-comment"># Updates package lists to include the new Azure CLI repository</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"sudo apt install azure-cli"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  - This command installs the main Azure CLI package and its core Python dependencies (Azure CLI is primarily Python-based)."</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">""</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"# Core Dependencies of Azure CLI (handled by apt):"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  - Python (version 3.8 or higher, usually installed as a dependency by apt if not present)."</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  - Various Python libraries and modules that the Azure CLI uses (e.g., requests, msrest, knack, azure-common, etc. - these are pulled in automatically)."</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">""</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"--- End of Outline ---"</span>
</code></pre>
<h3 id="heading-global-install">Global Install</h3>
<p>I installed the Azure MCP Server globally on my device. A per-directory (workspace) install is also supported.</p>
<ol>
<li><p>To install the Azure MCP Server for Visual Studio Code in your user settings, select the following link:</p>
<p> <a target="_blank" href="https://insiders.vscode.dev/redirect/mcp/install?name=Azure%20MCP%20Server&amp;config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22%40azure%2Fmcp%40latest%22%2C%22server%22%2C%22start%22%5D%7D">VS Code: Install Azure MCP Server</a></p>
</li>
<li><p>A list of installation options opens inside Visual Studio Code. Select <strong>Install Server</strong> to add the server configuration to your user settings.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750552086046/8b5e0c35-adfc-40ba-bf45-f7d53a1fefbf.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Open GitHub Copilot and select Agent Mode.</p>
</li>
</ol>
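<p>For reference, the install link in step 1 adds a server entry to your user settings equivalent to the following (decoded from the config embedded in the link; the surrounding settings schema may vary between VS Code versions):</p>
<pre><code class="lang-json">{
  "command": "npx",
  "args": ["-y", "@azure/mcp@latest", "server", "start"]
}
</code></pre>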
<p>Refresh the tools list to see Azure MCP Server as an available option:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750552132730/455a7a09-172a-4c1d-b76b-d431cdb134ab.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-use-prompts-to-test-the-azure-mcp-server"><strong>Use prompts to test the Azure MCP Server</strong></h2>
<p>Open GitHub Copilot and select Agent Mode.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750552240876/93a8d5c3-ae13-4f61-9b87-80b98f797f9a.jpeg" alt class="image--center mx-auto" /></p>
<p>I authenticate by signing in to my Azure account:</p>
<pre><code class="lang-bash">az login --use-device-code
</code></pre>
<p>After authenticating, I ask Copilot to list my resource groups.</p>
<p>Copilot requests permission to run the necessary Azure MCP Server operation for your prompt. Select <strong>Continue</strong> or use the arrow to select a more specific behavior:</p>
<ul>
<li><p><strong>Current session</strong> always runs the operation in the current GitHub Copilot Agent Mode session.</p>
</li>
<li><p><strong>Current workspace</strong> always runs the command for the current Visual Studio Code workspace.</p>
</li>
<li><p><strong>Always allow</strong> sets the operation to always run for any GitHub Copilot Agent Mode session or any Visual Studio Code workspace.</p>
</li>
</ul>
<p>Copilot runs the MCP server, uses the resource group tool to provide context to the LLM, executes the query, and returns the output:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750554333022/10494398-49b7-41f2-a91c-3125fffda189.jpeg" alt class="image--center mx-auto" /></p>
<h3 id="heading-future-integration-with-operation">Future Integration with Operation</h3>
<p>The Azure Model Context Protocol (MCP) Server exposes many tools you can use from an existing client to interact with Azure services through natural language prompts. For example, you can use the Azure MCP Server to interact with Azure resources conversationally from GitHub Copilot agent mode in Visual Studio Code or other AI agents with commands like these:</p>
<ul>
<li><p>"Show me all my resource groups"</p>
</li>
<li><p>"List blobs in my storage container named 'documents'"</p>
</li>
<li><p>"What's the value of the 'ConnectionString' key in my app configuration?"</p>
</li>
<li><p>"Query my log analytics workspace for errors in the last hour"</p>
</li>
<li><p>"Show me all my Cosmos DB databases"</p>
</li>
</ul>
<p>The Azure MCP Server is in preview, and most of the tools it currently exposes are read-only "get" tools that list resources in Azure.</p>
<h3 id="heading-agent-integration-with-operation-and-incident-response-in-azure">Agent Integration with Operation and Incident Response in Azure</h3>
<p>I believe the next stage in the evolution of the Azure MCP Server lies in <strong>Operations and Incident Response</strong>.</p>
<p>Consider a scenario where a virtual machine triggers an alert due to a failed service. By integrating this alert into an AI-powered workflow using the Azure MCP Server and a connected agent, the system could automatically investigate the issue and suggest remediation steps to an Operations Engineer.</p>
<p>The engineer would then have the flexibility to review, accept, or even revert the recommended actions, enabling a more efficient and intelligent approach to managing operational incidents.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750555067674/fab7830d-4bec-40cd-b78e-f67bb96d4cda.png" alt class="image--center mx-auto" /></p>
]]></description><link>https://clouddevopsinsights.com/hosting-azure-mcp-server-in-vs-code-my-experience-with-the-mcp-server</link><guid isPermaLink="true">https://clouddevopsinsights.com/hosting-azure-mcp-server-in-vs-code-my-experience-with-the-mcp-server</guid><category><![CDATA[mcp server]]></category><category><![CDATA[Azure]]></category><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[Azure Private DNS Resolver Explained: Secure Name Resolution for Hybrid Networks]]></title><description><![CDATA[<p>In hybrid cloud environments, name resolution can become a challenge, especially when you need to resolve Azure service private endpoints (like <code>*.</code><a target="_blank" href="http://azurewebsites.net"><code>azurewebsites.net</code></a>) from on-premises networks. This is where <strong>Azure Private DNS Resolver</strong> comes in.</p>
<p>In this article, we'll walk through:</p>
<ul>
<li><p>What Azure Private DNS Resolver is</p>
</li>
<li><p>How to create an <strong>inbound endpoint</strong></p>
</li>
<li><p>How to configure <strong>conditional forwarding</strong> from your on-prem DNS to Azure</p>
</li>
<li><p>The <strong>benefits</strong> of using private name resolution</p>
</li>
</ul>
<hr />
<h2 id="heading-what-is-azure-private-dns-resolver">What is Azure Private DNS Resolver?</h2>
<p><strong>Azure Private DNS Resolver</strong> is a fully managed DNS service that enables DNS resolution between Azure virtual networks and your on-premises environment without deploying and managing DNS servers.</p>
<p>It supports:</p>
<ul>
<li><p><strong>Inbound endpoints</strong>: Accept DNS queries from on-premises or other networks.</p>
</li>
<li><p><strong>Outbound endpoints</strong> and <strong>forwarding rulesets</strong>: Resolve custom DNS names from Azure to on-prem or external DNS servers.</p>
</li>
</ul>
<hr />
<h2 id="heading-scenario-overview">Scenario Overview</h2>
<p>We want to resolve the domain <code>*.</code><a target="_blank" href="http://azurewebsites.net"><code>azurewebsites.net</code></a> from our on-premises network <strong>to the private IP</strong> of the web app's private endpoint in Azure.</p>
<p>To do this:</p>
<ol>
<li><p>Deploy Azure Private DNS Resolver with an <strong>inbound endpoint</strong>.</p>
</li>
<li><p>Set up a conditional forwarder in your on-prem DNS server pointing <a target="_blank" href="http://azurewebsites.net"><code>azurewebsites.net</code></a> to the <strong>inbound endpoint's private IP</strong>.</p>
</li>
<li><p>Azure resolves the name using the Private DNS zone linked to the web app's private endpoint.</p>
</li>
</ol>
<hr />
<h2 id="heading-step-by-step-creating-an-inbound-endpoint">Step-by-Step: Creating an Inbound Endpoint</h2>
<h3 id="heading-step-1-deploy-azure-dns-resolver">Step 1: Deploy Azure DNS Resolver</h3>
<pre><code class="lang-bash">az network dns-resolver create \
  --name myDnsResolver \
  --resource-group myResourceGroup \
  --location eastus \
  --virtual-network myVnet
</code></pre>
<h3 id="heading-step-2-create-an-inbound-endpoint">Step 2: Create an Inbound Endpoint</h3>
<pre><code class="lang-plaintext">az network dns-resolver inbound-endpoint create \
  --name inboundEndpoint1 \
  --dns-resolver-name myDnsResolver \
  --resource-group myResourceGroup \
  --location eastus \
  --ip-configurations '[{"subnet": { "id": "/subscriptions/&lt;sub-id&gt;/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/inboundSubnet" }}]'
</code></pre>
<blockquote>
<p>Use a <strong>dedicated subnet</strong> for the DNS Resolver; it cannot be shared with other resources.</p>
</blockquote>
<hr />
<h2 id="heading-step-3-configure-conditional-forwarding-in-on-prem-dns">Step 3: Configure Conditional Forwarding in On-Prem DNS</h2>
<p>On your on-prem DNS server (e.g., Windows Server DNS):</p>
<ol>
<li><p>Open <strong>DNS Manager</strong>.</p>
</li>
<li><p>Right-click <strong>Conditional Forwarders</strong> &gt; <strong>New Conditional Forwarder</strong>.</p>
</li>
<li><p>Enter:</p>
<ul>
<li><p><strong>Domain name</strong>: <a target="_blank" href="http://azurewebsites.net"><code>azurewebsites.net</code></a></p>
</li>
<li><p><strong>IP address</strong>: Private IP of the <strong>inbound endpoint</strong></p>
</li>
<li><p>Optionally, enable "Store this conditional forwarder in Active Directory"</p>
</li>
</ul>
</li>
</ol>
<p>This routes only <a target="_blank" href="http://azurewebsites.net"><code>azurewebsites.net</code></a> queries to Azure, avoiding unnecessary traffic.</p>
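<p>To confirm the path end to end, you can test resolution from an on-premises Windows machine, first directly against the inbound endpoint and then through your normal DNS server (the IP address below is a placeholder for your inbound endpoint's private IP):</p>
<pre><code class="lang-powershell"># Query the inbound endpoint directly (placeholder IP)
Resolve-DnsName -Name 'myapp.azurewebsites.net' -Server 10.0.1.4

# Query via the configured on-prem DNS server; with the conditional
# forwarder in place, this should also return the private endpoint IP
Resolve-DnsName -Name 'myapp.azurewebsites.net'
</code></pre>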
<hr />
<h2 id="heading-benefits-of-private-name-resolution-with-azure-dns-resolver">Benefits of Private Name Resolution with Azure DNS Resolver</h2>
<p><strong>Improved Security</strong><br />Resolves names to private IPs securely, without exposing DNS records to public resolvers.</p>
<p><strong>Seamless Hybrid Integration</strong><br />Enables on-premises apps to resolve private Azure services like Web Apps, Key Vault, and Storage.</p>
<p><strong>No DNS VM Management</strong><br />Azure handles high availability, patching, and scaling of the DNS infrastructure.</p>
<p><strong>Fine-Grained Control</strong><br />Use conditional forwarding to send only specific zones to Azure.</p>
<hr />
<h2 id="heading-final-thoughts">Final Thoughts</h2>
<p>Azure Private DNS Resolver simplifies DNS management across hybrid environments. By setting up an inbound endpoint and configuring conditional forwarding, you can securely and efficiently resolve private Azure service endpoints from your on-premises network.</p>
<p>This setup is especially valuable for enterprise environments adopting <strong>Private Endpoints</strong>, <strong>Zero Trust Networking</strong>, and <strong>Hybrid Cloud Architectures</strong>.</p>
<hr />
<p>🔧 Got questions or want help automating this setup with Terraform or Bicep? Let me know in the comments or connect with me!</p>
]]></description><link>https://clouddevopsinsights.com/azure-private-dns-resolver-explained-secure-name-resolution-for-hybrid-networks</link><guid isPermaLink="true">https://clouddevopsinsights.com/azure-private-dns-resolver-explained-secure-name-resolution-for-hybrid-networks</guid><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[Cloud Disaster Recovery Strategies: Ensuring Business Continuity and Resilience]]></title><description><![CDATA[<p>In today's digital landscape, businesses heavily rely on cloud infrastructure to run critical applications and store valuable data. However, unforeseen events such as cyberattacks, hardware failures, and natural disasters can disrupt operations. This is where <strong>Disaster Recovery (DR) strategies</strong> play a crucial role in ensuring business continuity.</p>
<h3 id="heading-what-is-a-disaster-recovery-strategy-and-why-is-it-important">What is a Disaster Recovery Strategy, and Why is it Important?</h3>
<p>A <strong>Disaster Recovery (DR) strategy</strong> is a set of policies, tools, and procedures designed to restore IT services after a disruption. It ensures minimal downtime and data loss, helping businesses recover quickly from disasters. Without a DR plan, organizations risk severe financial and reputational damage due to prolonged service outages.</p>
<h3 id="heading-real-world-example-british-airways-it-failure">Real-World Example: British Airways IT Failure</h3>
<p>In 2017, British Airways suffered a massive IT failure that led to the cancellation of over 400 flights, stranding thousands of passengers. The disruption reportedly cost the airline around <strong>£80 million ($102 million)</strong> in compensation, lost revenue, and reputational damage. Investigations suggested that the lack of a <strong>robust DR strategy</strong> contributed to the prolonged downtime, highlighting the critical need for businesses to have effective recovery mechanisms in place.</p>
<h2 id="heading-understanding-rto-and-rpo">Understanding RTO and RPO</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740867929611/9b4c5b8a-c50b-4264-bf42-7f0ac835912d.jpeg" alt class="image--center mx-auto" /></p>
<p>When planning a DR strategy, two key metrics must be considered:</p>
<ul>
<li><p><strong>Recovery Time Objective (RTO):</strong> The maximum acceptable downtime before services must be restored. A lower RTO means faster recovery but often requires higher investment in infrastructure.</p>
</li>
<li><p><strong>Recovery Point Objective (RPO):</strong> The maximum acceptable data loss measured in time. A lower RPO ensures minimal data loss but requires frequent backups and replication.</p>
</li>
</ul>
<h2 id="heading-four-key-cloud-disaster-recovery-strategies">Four Key Cloud Disaster Recovery Strategies</h2>
<p>Organizations can implement different DR strategies based on their <strong>RTO and RPO requirements</strong>, balancing cost and recovery speed.</p>
<h3 id="heading-1-backup-and-restore">1. <strong>Backup and Restore</strong></h3>
<ul>
<li><p><strong>Description:</strong> Periodic backups of data and applications stored in cloud storage, restored when needed.</p>
</li>
<li><p><strong>Pros:</strong> Cost-effective and simple.</p>
</li>
<li><p><strong>Cons:</strong> High RTO and RPO, slower recovery time.</p>
</li>
<li><p><strong>Best for:</strong> Small businesses or non-critical applications.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740869106089/927066ec-0906-4d63-814f-312632257eb7.webp" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h3 id="heading-2-pilot-light">2. <strong>Pilot Light</strong></h3>
<ul>
<li><p><strong>Description:</strong> A minimal version of the production environment is kept running with essential services. In case of a disaster, additional resources are quickly scaled up.</p>
</li>
<li><p><strong>Pros:</strong> Faster recovery compared to backup and restore, lower ongoing costs.</p>
</li>
<li><p><strong>Cons:</strong> Requires manual intervention for scaling up.</p>
</li>
<li><p><strong>Best for:</strong> Businesses needing moderate recovery speeds at a lower cost.</p>
</li>
</ul>
<h3 id="heading-3-warm-standby">3. <strong>Warm Standby</strong></h3>
<ul>
<li><p><strong>Description:</strong> A scaled-down but fully functional version of the production environment is always running. In an outage, resources are scaled up to full capacity.</p>
</li>
<li><p><strong>Pros:</strong> Faster recovery time with lower infrastructure costs compared to active-active.</p>
</li>
<li><p><strong>Cons:</strong> Higher costs than backup strategies, requires automation for scaling.</p>
</li>
<li><p><strong>Best for:</strong> Businesses requiring quick recovery but looking to save on costs.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740869472653/89585e9b-d683-4f8d-8f4b-0719138dfa8d.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-4-active-passive-hot-standby">4. <strong>Active-Passive (Hot Standby)</strong></h3>
<ul>
<li><p><strong>Description:</strong> A fully operational duplicate environment is maintained, ready to take over instantly in case of failure.</p>
</li>
<li><p><strong>Pros:</strong> Near-instant recovery with minimal downtime.</p>
</li>
<li><p><strong>Cons:</strong> Expensive due to duplicated infrastructure.</p>
</li>
<li><p><strong>Best for:</strong> Mission-critical applications requiring high availability.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748652803782/a544053b-8756-471e-8cf6-d46e0d0d0c31.webp" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Choosing the right <strong>cloud disaster recovery strategy</strong> depends on business needs, budget, and acceptable downtime. <strong>A well-defined DR plan ensures resilience, minimizes losses, and maintains customer trust</strong>. As demonstrated by British Airways, the absence of a robust DR strategy can result in severe consequences. Organizations should assess their <strong>RTO and RPO</strong> needs and implement the most suitable DR approach to safeguard their cloud infrastructure against disruptions.</p>
<p>Do you have a DR strategy in place? If not, now is the time to start planning! 🚀</p>
]]></description><link>https://clouddevopsinsights.com/cloud-disaster-recovery-strategies-ensuring-business-continuity-and-resilience</link><guid isPermaLink="true">https://clouddevopsinsights.com/cloud-disaster-recovery-strategies-ensuring-business-continuity-and-resilience</guid><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Disaster recovery]]></category><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[Prompt Engineering: The Critical Skill for AI-Powered DevOps]]></title><description><![CDATA[<h3 id="heading-introduction">Introduction:</h3>
<p>DevOps is fundamentally about breaking down silos, automating processes, and accelerating the delivery of reliable software. We strive for efficiency, speed, and robustness. In recent years, Artificial Intelligence (AI), particularly Large Language Models (LLMs) and tools like GitHub Copilot, have emerged as powerful allies promising to supercharge these efforts. They can generate code snippets, write configuration files, draft documentation, and even suggest troubleshooting steps.</p>
<p>However, simply having access to these AI tools isn't a magic bullet for productivity. The real key to unlocking their potential lies in <strong>prompt engineering</strong>: the art and science of crafting effective inputs (prompts) to guide the AI towards generating the desired, accurate, and useful output. For DevOps engineers, mastering prompt engineering is rapidly becoming a critical skill.</p>
<p><strong>Why Prompt Engineering Matters in the DevOps Workflow</strong></p>
<p>DevOps tasks are diverse and often complex, spanning coding, infrastructure management, networking, security, and operations. AI can assist across this spectrum, but its effectiveness is directly proportional to the quality of the prompt it receives.</p>
<ul>
<li><p><strong>Faster Scripting and Automation:</strong> Need a script to automate backups, manage user permissions, or deploy an application? A well-crafted prompt can yield a near-complete script in seconds, saving hours of manual coding. A vague prompt might produce something unusable.</p>
</li>
<li><p><strong>Infrastructure as Code (IaC) Generation:</strong> Tools like Terraform, Pulumi, or CloudFormation require precise syntax. Prompting an AI with clear requirements (e.g., "Generate Terraform code for an AWS EC2 instance, t3.micro, in us-east-1, with security group X and specific tags") is far more effective than a generic request.</p>
</li>
<li><p><strong>Configuration Management:</strong> Generating configuration files for tools like Ansible, Chef, Puppet, Kubernetes, or Docker requires specifics. Good prompts include desired state, parameters, and constraints.</p>
</li>
<li><p><strong>Troubleshooting and Debugging:</strong> Asking an AI to "fix this error" is less helpful than providing the error message, relevant logs, the code snippet causing the issue, and the context of the system.</p>
</li>
<li><p><strong>Documentation:</strong> Generating READMEs, runbooks, or architecture diagrams requires clear instructions on the scope, audience, and key components to include.</p>
</li>
</ul>
<p><strong>The GitHub Copilot Experiment: A Case Study in Prompting</strong></p>
<p>My own experience highlights the dramatic difference prompt quality can make. I needed a PowerShell script to create a basic Azure Web App and its supporting App Service Plan, a common task for deploying web applications.</p>
<p><strong>Attempt 1: The Vague Request</strong></p>
<p>My initial prompt to GitHub Copilot was straightforward, reflecting how one might initially approach the tool:</p>
<blockquote>
<p><em>"Generate PowerShell using Az module to create an Azure Web App"</em></p>
</blockquote>
<p>The code Copilot generated <em>did</em> use Azure PowerShell <code>Az</code> module cmdlets and would likely create <em>an</em> App Service Plan and Web App. However, it was far from production-ready or even development-ready without significant changes:</p>
<ul>
<li><p>It made assumptions about resource naming, likely using generic placeholders or requiring manual input during execution.</p>
</li>
<li><p>It defaulted the <code>Location</code> (Region), potentially placing resources far from users or other dependent services.</p>
</li>
<li><p>It defaulted the App Service Plan <code>Sku</code> (pricing tier), potentially choosing a more expensive or less performant tier than required.</p>
</li>
<li><p>It didn't specify a runtime stack (like .NET, Node, Python), crucial for the application to function.</p>
</li>
<li><p>It lacked parameterization for easy reuse and integration into larger automation scripts.</p>
</li>
<li><p>Error handling (e.g., checking if resources already exist) was absent.</p>
</li>
</ul>
<p>Modifying this code to meet specific requirements (correct names, location, SKU, runtime, parameters) took considerable time. I needed to know the correct PowerShell cmdlets (<code>New-AzResourceGroup</code>, <code>New-AzAppServicePlan</code>, <code>New-AzWebApp</code>) and their parameters anyway, largely defeating the purpose of using the AI for speed.</p>
<p><strong>Attempt 2: The Guided Approach with Detailed Steps</strong></p>
<p>Learning from the first attempt, I provided Copilot with context and a clear sequence of steps (all DevOps engineers should learn how to write pseudocode):</p>
<blockquote>
<p>*"Generate PowerShell using Az module based on this logic:</p>
<ol>
<li><p>Define variables: ResourceGroupName='MyWebAppRG', Location='AustraliaEast', PlanName='MyWebAppPlan', WebAppName='MyUniqueWebAppXYZ'.</p>
</li>
<li><p>Check if Resource Group '$ResourceGroupName' exists in '$Location'. If not, create it using New-AzResourceGroup.</p>
</li>
<li><p>Create an App Service Plan named '$PlanName' in '$ResourceGroupName' and '$Location' using the 'S1' Standard SKU (New-AzAppServicePlan).</p>
</li>
<li><p>Create a Web App named '$WebAppName' within the resource group, using the created App Service Plan. Specify the runtime as '.NET|6.0' (New-AzWebApp).</p>
</li>
<li><p>Add an Application Setting to the Web App: 'Environment' = 'Development'.</p>
</li>
<li><p>Output the default hostname of the created Web App."</p>
</li>
</ol>
</blockquote>
<p>The result was drastically different. The PowerShell code generated by Copilot using this prompt was <strong>approximately 95% accurate</strong> and immediately usable with minor verification:</p>
<ul>
<li><p>It followed the logical steps outlined.</p>
</li>
<li><p>It used the specified variables for names, location, and SKU.</p>
</li>
<li><p>It included a basic check for the resource group's existence.</p>
</li>
<li><p>It correctly used <code>New-AzResourceGroup</code>, <code>New-AzAppServicePlan</code>, and <code>New-AzWebApp</code> with the right parameters, including the SKU and runtime stack.</p>
</li>
<li><p>It added the specified application setting.</p>
</li>
<li><p>It included a command to output the hostname.</p>
</li>
</ul>
<p>The debugging and refinement time was minimal. The AI, guided by a structured, detailed prompt that specified <em>what</em> and <em>how</em>, acted as a highly effective accelerator.</p>
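<p>For illustration, the generated script had roughly the following shape. This is a sketch reconstructed from memory, not Copilot's verbatim output, and depending on the Az module version, setting the runtime stack may need an extra site-config step:</p>
<pre><code class="lang-powershell"># Reconstructed sketch of the Copilot-generated script (requires Az.Resources and Az.Websites)
$ResourceGroupName = 'MyWebAppRG'
$Location          = 'AustraliaEast'
$PlanName          = 'MyWebAppPlan'
$WebAppName        = 'MyUniqueWebAppXYZ'

# Create the resource group only if it does not already exist
if (-not (Get-AzResourceGroup -Name $ResourceGroupName -ErrorAction SilentlyContinue)) {
    New-AzResourceGroup -Name $ResourceGroupName -Location $Location
}

# App Service Plan on the S1 Standard tier
New-AzAppServicePlan -Name $PlanName -ResourceGroupName $ResourceGroupName `
    -Location $Location -Tier 'Standard' -WorkerSize 'Small'

# Web App on that plan (runtime stack configuration omitted here)
$webApp = New-AzWebApp -Name $WebAppName -ResourceGroupName $ResourceGroupName `
    -Location $Location -AppServicePlan $PlanName

# Application setting: Environment = Development
Set-AzWebApp -Name $WebAppName -ResourceGroupName $ResourceGroupName `
    -AppSettings @{ Environment = 'Development' }

# Output the default hostname
$webApp.DefaultHostName
</code></pre>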
<p><strong>The Foundation: Why Domain Knowledge Remains Crucial</strong></p>
<p>This experiment underscores a vital point: <strong>using AI effectively in DevOps depends heavily on strong foundational knowledge.</strong> You can't prompt effectively if you don't understand the underlying concepts.</p>
<ol>
<li><p><strong>Programming Language Proficiency (e.g., PowerShell, Python, Bash):</strong> To write effective pseudocode or detailed prompts, you need to understand control flow, variables, functions, error handling, and the specific commands or libraries relevant to the task. You also need this knowledge to <em>evaluate</em> and <em>debug</em> the AI's output.</p>
</li>
<li><p><strong>Networking Concepts:</strong> When asking for scripts or configurations involving firewalls, load balancers, DNS, or VPCs, understanding subnets, routing, ports, and protocols is essential for crafting a precise prompt and validating the result.</p>
</li>
<li><p><strong>Operating System Internals:</strong> Tasks involving performance tuning, service management, user permissions, or file systems require an understanding of how the OS works. This knowledge informs the prompts for configuration management or troubleshooting scripts.</p>
</li>
<li><p><strong>Cloud/Infrastructure Knowledge:</strong> Understanding the specific services, APIs, and best practices of your cloud provider (AWS, Azure, GCP) or virtualization platform is critical for generating accurate IaC or automation scripts.</p>
</li>
</ol>
<p><strong>Conclusion: AI as a Co-Pilot, Not Autopilot</strong></p>
<p>AI tools like GitHub Copilot are transformative for DevOps engineers, offering significant potential to boost productivity and automate repetitive tasks. However, they are most powerful when wielded by engineers who understand <em>what</em> they are asking for and <em>how</em> to ask for it effectively.</p>
<p>Prompt engineering isn't just about fancy wording; it's about leveraging your existing technical expertise to provide the AI with the context, constraints, and structure it needs to generate high-quality output. By combining solid foundational knowledge in programming, networking, OS, and cloud systems with skillful prompt engineering, DevOps professionals can truly harness the power of AI, turning it from a novelty into an indispensable part of their toolkit for building and operating systems faster and more reliably than ever before. The future of efficient DevOps involves not just using AI, but mastering the conversation with it.</p>
]]></description><link>https://clouddevopsinsights.com/prompt-engineering-the-critical-skill-for-ai-powered-devops</link><guid isPermaLink="true">https://clouddevopsinsights.com/prompt-engineering-the-critical-skill-for-ai-powered-devops</guid><category><![CDATA[AI]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[Importance of VNet Flow Logs in Azure for Troubleshooting Network Issues]]></title><description><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>When managing cloud infrastructure in Azure, network connectivity issues can significantly impact application availability and performance. Azure provides powerful tools for diagnosing and troubleshooting network problems, including <strong>Virtual Network (VNet) Flow Logs</strong> and <strong>IP Flow Verify</strong> in Network Watcher. These tools help network engineers and cloud administrators gain visibility into network traffic and quickly pinpoint connectivity issues.</p>
<h2 id="heading-what-are-vnet-flow-logs">What Are VNet Flow Logs?</h2>
<p>VNet Flow Logs capture information about inbound and outbound traffic within a Virtual Network (VNet). These logs provide insights into:</p>
<ul>
<li><p><strong>Source and destination IP addresses</strong></p>
</li>
<li><p><strong>Ports and protocols used</strong></p>
</li>
<li><p><strong>Traffic direction (inbound or outbound)</strong></p>
</li>
<li><p><strong>NSG rule that allowed or denied the traffic</strong></p>
</li>
<li><p><strong>Flow start and end times</strong></p>
</li>
</ul>
<p>VNet Flow Logs are stored in <strong>Azure Storage Accounts</strong> and can be analyzed using <strong>Azure Monitor, Log Analytics, or third-party tools like Splunk</strong>. They help in diagnosing network latency, packet drops, and misconfigurations in NSGs.</p>
<h3 id="heading-enabling-vnet-flow-logs">Enabling VNet Flow Logs</h3>
<p>To enable VNet Flow Logs, follow these steps:</p>
<ol>
<li><p>Open the <strong>Azure Portal</strong> and navigate to <strong>Virtual Networks</strong>.</p>
</li>
<li><p>Select the <strong>VNet</strong> where you want to enable flow logs.</p>
</li>
<li><p>In the left-hand menu (blade), scroll down to find <strong>VNet Flow Logs</strong> and click on it.</p>
</li>
<li><p>Click <strong>Enable Flow Logs</strong>.</p>
</li>
<li><p>Choose a <strong>Storage Account</strong> to store the logs.</p>
</li>
<li><p>Select <strong>Enable Traffic Analytics</strong>.</p>
</li>
<li><p>Choose the <strong>Log Analytics Workspace</strong> where you want to send logs.</p>
</li>
<li><p>Select the specific logs you want to send to the workspace.</p>
</li>
<li><p>Click <strong>Save</strong> to apply the settings.</p>
</li>
</ol>
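<p>If you prefer to script the same configuration, the Az.Network module exposes flow-log cmdlets. Here is a minimal sketch, assuming an existing Network Watcher in the target region; the resource IDs and names are placeholders, and the traffic-analytics parameters are omitted since they vary by module version:</p>
<pre><code class="lang-powershell"># Placeholder resource IDs
$vnetId    = '/subscriptions/&lt;sub-id&gt;/resourceGroups/myRG/providers/Microsoft.Network/virtualNetworks/myVnet'
$storageId = '/subscriptions/&lt;sub-id&gt;/resourceGroups/myRG/providers/Microsoft.Storage/storageAccounts/mylogs'

New-AzNetworkWatcherFlowLog `
    -NetworkWatcherName 'NetworkWatcher_australiaeast' `
    -ResourceGroupName 'NetworkWatcherRG' `
    -Name 'myVnetFlowLog' `
    -TargetResourceId $vnetId `
    -StorageId $storageId `
    -Enabled $true
</code></pre>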
<h3 id="heading-using-vnet-flow-logs-for-troubleshooting">Using VNet Flow Logs for Troubleshooting</h3>
<h3 id="heading-1-diagnosing-dropped-traffic">1. Diagnosing Dropped Traffic</h3>
<p>By analyzing VNet Flow Logs, you can determine if network traffic is being dropped due to NSG rules. Example:</p>
<ul>
<li><p>If an application is not accessible, check the logs to see if traffic from the client IP is being denied by an NSG rule.</p>
</li>
<li><p>You can identify if traffic is being routed correctly or if a misconfiguration is blocking access.</p>
</li>
</ul>
<h3 id="heading-2-identifying-unauthorized-access-attempts">2. Identifying Unauthorized Access Attempts</h3>
<p>Flow Logs help in identifying suspicious activities, such as repeated failed connection attempts from unknown IP addresses, which could indicate brute-force attacks or unauthorized access attempts.</p>
<h3 id="heading-3-monitoring-traffic-patterns">3. Monitoring Traffic Patterns</h3>
<p>By aggregating VNet Flow Logs over time, you can analyze traffic trends, detect anomalies, and optimize NSG rules to allow only necessary traffic while blocking potential threats.</p>
<h3 id="heading-querying-flow-logs-in-log-analytics">Querying Flow Logs in Log Analytics</h3>
<p>To query VNet Flow Logs in <strong>Log Analytics</strong>, follow these steps:</p>
<ol>
<li><p>Navigate to <strong>Azure Monitor</strong> &gt; <strong>Logs</strong>.</p>
</li>
<li><p>Select your <strong>Log Analytics Workspace</strong>.</p>
</li>
<li><p>Use the following Kusto Query Language (KQL) query to analyze traffic entering and leaving the network:</p>
</li>
</ol>
<pre><code class="lang-yaml"><span class="hljs-string">NTANetAnalytics</span>
<span class="hljs-string">|</span> <span class="hljs-string">where</span> <span class="hljs-string">FlowType_s</span> <span class="hljs-string">==</span> <span class="hljs-string">"VNetFlow"</span>
<span class="hljs-string">|</span> <span class="hljs-string">project</span> <span class="hljs-string">TimeGenerated,</span> <span class="hljs-string">SourceIP_s,</span> <span class="hljs-string">DestinationIP_s,</span> <span class="hljs-string">DestinationPort_d,</span> <span class="hljs-string">Protocol_s,</span> <span class="hljs-string">Action_s</span>
<span class="hljs-string">|</span> <span class="hljs-string">sort</span> <span class="hljs-string">by</span> <span class="hljs-string">TimeGenerated</span> <span class="hljs-string">desc</span>
</code></pre>
<ul>
<li>To filter <strong>allowed traffic</strong>, modify the query:</li>
</ul>
<pre><code class="lang-yaml"><span class="hljs-string">|</span> <span class="hljs-string">where</span> <span class="hljs-string">Action_s</span> <span class="hljs-string">==</span> <span class="hljs-string">"Allow"</span>
</code></pre>
<ul>
<li>To filter <strong>denied traffic</strong>, modify the query:</li>
</ul>
<pre><code class="lang-yaml"><span class="hljs-string">|</span> <span class="hljs-string">where</span> <span class="hljs-string">Action_s</span> <span class="hljs-string">==</span> <span class="hljs-string">"Deny"</span>
</code></pre>
<h3 id="heading-querying-traffic-analytics-in-log-analytics-workbook">Querying Traffic Analytics in Log Analytics Workbook</h3>
<p>To get a more detailed view of network traffic at the VNet and NSG levels, you can query the <strong>NTANetAnalytics</strong> table in Log Analytics:</p>
<h4 id="heading-traffic-at-vnet-level">Traffic at VNet Level:</h4>
<pre><code class="lang-yaml"><span class="hljs-string">NTANetAnalytics</span>
<span class="hljs-string">|</span> <span class="hljs-string">where</span> <span class="hljs-string">FlowType_s</span> <span class="hljs-string">==</span> <span class="hljs-string">"VNetTraffic"</span>
<span class="hljs-string">|</span> <span class="hljs-string">summarize</span> <span class="hljs-string">TotalTraffic</span> <span class="hljs-string">=</span> <span class="hljs-string">sum(TotalBytes_d)</span> <span class="hljs-string">by</span> <span class="hljs-string">VnetName_s,</span> <span class="hljs-string">TimeGenerated</span>
<span class="hljs-string">|</span> <span class="hljs-string">order</span> <span class="hljs-string">by</span> <span class="hljs-string">TotalTraffic</span> <span class="hljs-string">desc</span>
</code></pre>
<h4 id="heading-traffic-at-nsg-level">Traffic at NSG Level:</h4>
<pre><code class="lang-yaml"><span class="hljs-string">NTANetAnalytics</span>
<span class="hljs-string">|</span> <span class="hljs-string">where</span> <span class="hljs-string">FlowType_s</span> <span class="hljs-string">==</span> <span class="hljs-string">"NSGTraffic"</span>
<span class="hljs-string">|</span> <span class="hljs-string">summarize</span> <span class="hljs-string">AllowedTraffic</span> <span class="hljs-string">=</span> <span class="hljs-string">sum(case(Action_s</span> <span class="hljs-string">==</span> <span class="hljs-string">"Allow"</span><span class="hljs-string">,</span> <span class="hljs-string">TotalBytes_d,</span> <span class="hljs-number">0</span><span class="hljs-string">)),</span> 
          <span class="hljs-string">DeniedTraffic</span> <span class="hljs-string">=</span> <span class="hljs-string">sum(case(Action_s</span> <span class="hljs-string">==</span> <span class="hljs-string">"Deny"</span><span class="hljs-string">,</span> <span class="hljs-string">TotalBytes_d,</span> <span class="hljs-number">0</span><span class="hljs-string">))</span> 
  <span class="hljs-string">by</span> <span class="hljs-string">NSGName_s,</span> <span class="hljs-string">TimeGenerated</span>
<span class="hljs-string">|</span> <span class="hljs-string">order</span> <span class="hljs-string">by</span> <span class="hljs-string">DeniedTraffic</span> <span class="hljs-string">desc</span>
</code></pre>
<p>These queries help in understanding traffic volume and security rule enforcement at both the VNet and NSG levels.</p>
<h2 id="heading-ip-flow-verify-in-network-watcher">IP Flow Verify in Network Watcher</h2>
<p>In addition to VNet Flow Logs, <strong>IP Flow Verify</strong> in Azure <strong>Network Watcher</strong> allows you to test whether a specific IP flow is allowed or denied based on the configured NSG rules.</p>
<h3 id="heading-how-to-use-ip-flow-verify">How to Use IP Flow Verify</h3>
<ol>
<li><p>Open <strong>Azure Network Watcher</strong> in the <strong>Azure Portal</strong>.</p>
</li>
<li><p>Select <strong>IP Flow Verify</strong>.</p>
</li>
<li><p>Choose the <strong>Virtual Machine</strong> to test.</p>
</li>
<li><p>Enter the <strong>Source and Destination IP, Port, and Protocol</strong>.</p>
</li>
<li><p>Click <strong>Check</strong> to see if the traffic is <strong>allowed or denied</strong>.</p>
</li>
</ol>
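<p>The same check can be scripted with the Az.Network module; a minimal sketch with placeholder names and IP addresses:</p>
<pre><code class="lang-powershell"># Check whether an outbound HTTPS flow from a VM would be allowed
$nw = Get-AzNetworkWatcher -Location 'australiaeast'
$vm = Get-AzVM -ResourceGroupName 'myRG' -Name 'myVM'

Test-AzNetworkWatcherIPFlow `
    -NetworkWatcher $nw `
    -TargetVirtualMachineId $vm.Id `
    -Direction Outbound `
    -Protocol TCP `
    -LocalIPAddress 10.0.0.4 `
    -LocalPort 60000 `
    -RemoteIPAddress 13.107.21.200 `
    -RemotePort 443
# LocalIPAddress must be one of the VM's private IPs; the result
# reports Access (Allow/Deny) and the matching NSG rule name
</code></pre>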
<h3 id="heading-use-cases-of-ip-flow-verify">Use Cases of IP Flow Verify</h3>
<ul>
<li><p>Quickly verifying whether an NSG rule is blocking or allowing traffic without waiting for logs to update.</p>
</li>
<li><p>Debugging connectivity issues when deploying new applications or modifying NSG rules.</p>
</li>
<li><p>Ensuring compliance with security policies by testing network access.</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>VNet Flow Logs and IP Flow Verify are essential tools for network troubleshooting in Azure. <strong>VNet Flow Logs</strong> provide historical data and deep traffic analysis, while <strong>IP Flow Verify</strong> offers real-time validation of NSG rules. By leveraging these tools, cloud administrators can efficiently diagnose and resolve network issues, improve security, and optimize network performance in Azure.</p>
<p>Would you like to explore how to automate network monitoring using Azure PowerShell or Azure Monitor? Let me know in the comments!</p>
]]></description><link>https://clouddevopsinsights.com/importance-of-vnet-flow-logs-in-azure-for-troubleshooting-network-issues</link><guid isPermaLink="true">https://clouddevopsinsights.com/importance-of-vnet-flow-logs-in-azure-for-troubleshooting-network-issues</guid><category><![CDATA[vnet]]></category><category><![CDATA[Azure]]></category><category><![CDATA[cloud native]]></category><category><![CDATA[Cloud Computing]]></category><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[Automating Azure VM Management with Azure Automation Account]]></title><description><![CDATA[<h2 id="heading-introduction"><strong>Introduction</strong></h2>
<p>Managing Azure Virtual Machines (VMs) manually can be time-consuming, especially for tasks like shutting down idle resources to save costs. <strong>Azure Automation Account</strong> allows you to automate these operations using <strong>Runbooks</strong>.</p>
<p>In this guide, we will:</p>
<ul>
<li><p>Create an <strong>Azure Automation Account</strong></p>
</li>
<li><p>Set up a <strong>User Assigned Managed Identity (UMI)</strong></p>
</li>
<li><p>Assign the <strong>VM Contributor</strong> role to the identity</p>
</li>
<li><p>Write and execute a <strong>Runbook</strong> to stop VMs</p>
</li>
<li><p><strong>Schedule</strong> the Runbook for automatic execution</p>
</li>
</ul>
<p>We will use <strong>PowerShell</strong> in the Runbook; the setup steps are shown in the Azure Portal, with equivalent <strong>Azure PowerShell</strong> commands where useful.</p>
<h2 id="heading-1-create-an-azure-automation-account"><strong>1 Create an Azure Automation Account</strong></h2>
<p>An <strong>Automation Account</strong> is required to manage and run scripts in Azure.</p>
<h3 id="heading-steps"><strong>Steps:</strong></h3>
<ol>
<li><p>Go to the <strong>Azure Portal</strong> → Search for <strong>Automation Accounts</strong>.</p>
</li>
<li><p>Click <strong>Create</strong> and provide the following details:</p>
<ul>
<li><p><strong>Subscription</strong>: Select your subscription</p>
</li>
<li><p><strong>Resource Group</strong>: Create or select an existing one</p>
</li>
<li><p><strong>Automation Account Name</strong>: e.g., <code>MyAutomationAccount</code></p>
</li>
<li><p><strong>Region</strong>: Choose the preferred region</p>
</li>
<li><p><strong>Managed Identity</strong>: Select <strong>None</strong> (we'll create a separate UMI later)</p>
</li>
</ul>
</li>
<li><p>Click <strong>Review + Create</strong> → <strong>Create</strong>.</p>
</li>
<li><p>Please find the screenshot of the Azure Automation Account I created.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739064131983/489b13ec-9241-49eb-9862-6c66f3daef2f.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-2-create-a-user-assigned-managed-identity-umi"><strong>2 Create a User-Assigned Managed Identity (UMI)</strong></h2>
<p>A <strong>User Assigned Managed Identity (UMI)</strong> allows our Automation Account to authenticate and execute actions without storing credentials.</p>
<h3 id="heading-steps-1"><strong>Steps:</strong></h3>
<ol>
<li><p>Go to the <strong>Azure Portal</strong> → Search for <strong>Managed Identities</strong>.</p>
</li>
<li><p>Click <strong>Create</strong> and enter:</p>
<ul>
<li><p><strong>Subscription &amp; Resource Group</strong>: Select the same as the Automation Account</p>
</li>
<li><p><strong>Name</strong>: e.g., <code>MyAutomationIdentity</code></p>
</li>
<li><p><strong>Region</strong>: Choose the same region as the Automation Account</p>
</li>
</ul>
</li>
<li><p>Click <strong>Review + Create</strong> → <strong>Create</strong>.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739064372594/fcde0083-257c-4f15-b172-939eb9fb7429.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-3-assign-vm-contributor-role-to-the-managed-identity"><strong>3 Assign "VM Contributor" Role to the Managed Identity</strong></h2>
<p>The <strong>VM Contributor</strong> role allows our Runbook to manage VMs (start, stop, deallocate, etc.).</p>
<h3 id="heading-steps-2"><strong>Steps:</strong></h3>
<ol>
<li><p>Go to the <strong>Azure Portal</strong> → Open the <strong>Managed Identity</strong> created earlier.</p>
</li>
<li><p>Navigate to <strong>Access control (IAM)</strong> → Click <strong>Add role assignment</strong>.</p>
</li>
<li><p>Select:</p>
<ul>
<li><p><strong>Role</strong>: <strong>Virtual Machine Contributor</strong></p>
</li>
<li><p><strong>Scope</strong>: <strong>Subscription</strong> (or specific resource group if needed)</p>
</li>
<li><p><strong>Assign access to</strong>: <strong>Managed Identity</strong></p>
</li>
<li><p><strong>Identity</strong>: Select the identity created earlier (e.g., <code>MyAutomationIdentity</code>)</p>
</li>
</ul>
</li>
<li><p>Click <strong>Save</strong>.</p>
</li>
<li><p>You can see in the image below that the User-Assigned Managed Identity now has the VM Contributor role assigned.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739064484794/2b1239b5-3fe3-4b7d-8275-284a03a2c744.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-4-create-and-publish-a-runbook-to-stop-vms"><strong>4 Create and Publish a Runbook to Stop VMs</strong></h2>
<p>Now, we'll create a <strong>PowerShell Runbook</strong> that will connect to Azure using the Managed Identity and stop the VMs tagged <code>autostop = true</code>.</p>
<h3 id="heading-steps-3"><strong>Steps:</strong></h3>
<ol>
<li><p>Open the <strong>Azure Portal</strong> → Go to your <strong>Automation Account</strong>.</p>
</li>
<li><p>Click <strong>Runbooks</strong> → <strong>Create a runbook</strong>.</p>
</li>
<li><p>Provide:</p>
<ul>
<li><p><strong>Name</strong>: <code>Stop-All-VMs</code></p>
</li>
<li><p><strong>Runbook Type</strong>: <strong>PowerShell</strong></p>
</li>
<li><p><strong>Runtime Version</strong>: Latest</p>
</li>
</ul>
</li>
<li><p>Click <strong>Create</strong>.</p>
</li>
<li><p><strong>Edit the Runbook</strong> → Paste the following PowerShell script:</p>
</li>
</ol>
<p>Find the script below.</p>
<pre><code class="lang-powershell"><span class="hljs-comment">### this is final runbook script which will stop VMs. Only those VMs with autostop = true</span>

 <span class="hljs-comment"># Input parameters</span>
 <span class="hljs-keyword">param</span>(


     [<span class="hljs-type">Parameter</span>(<span class="hljs-type">Mandatory</span> = <span class="hljs-variable">$true</span>)]
     [<span class="hljs-built_in">string</span>]<span class="hljs-variable">$SubscriptionId</span> = <span class="hljs-string">"Enter your sunscriptionID"</span>,

     [<span class="hljs-type">Parameter</span>(<span class="hljs-type">Mandatory</span> = <span class="hljs-variable">$true</span>)]
     [<span class="hljs-built_in">string</span>]<span class="hljs-variable">$UserAssignedIdentityClientId</span> = <span class="hljs-string">"Enter your user managed Identity"</span>

 )


 <span class="hljs-keyword">try</span> {
     <span class="hljs-comment"># Ensures you do not inherit an AzContext in your runbook</span>
     <span class="hljs-built_in">Disable-AzContextAutosave</span> <span class="hljs-literal">-Scope</span> <span class="hljs-keyword">Process</span>

     <span class="hljs-comment"># Connect using user-assigned managed identity</span>
     <span class="hljs-built_in">Write-Output</span> <span class="hljs-string">"Connecting to Azure using User-Assigned Managed Identity..."</span>
     <span class="hljs-built_in">Connect-AzAccount</span> <span class="hljs-literal">-Identity</span> <span class="hljs-literal">-AccountId</span> <span class="hljs-variable">$UserAssignedIdentityClientId</span>

     <span class="hljs-comment"># Set context to your subscription</span>
     <span class="hljs-built_in">Write-Output</span> <span class="hljs-string">"Setting context to subscription: <span class="hljs-variable">$SubscriptionId</span>"</span>
     <span class="hljs-built_in">Set-AzContext</span> <span class="hljs-literal">-SubscriptionId</span> <span class="hljs-variable">$SubscriptionId</span>

     <span class="hljs-comment"># Get all VMs in the subscription</span>
     <span class="hljs-built_in">Write-Output</span> <span class="hljs-string">"Getting all VMs in the subscription..."</span>
     <span class="hljs-variable">$vms</span> = <span class="hljs-built_in">Get-AzVM</span>

     <span class="hljs-comment"># Check if any VMs were found</span>
     <span class="hljs-keyword">if</span> (<span class="hljs-variable">$null</span> <span class="hljs-operator">-eq</span> <span class="hljs-variable">$vms</span> <span class="hljs-operator">-or</span> <span class="hljs-variable">$vms</span>.Count <span class="hljs-operator">-eq</span> <span class="hljs-number">0</span>) {
         <span class="hljs-built_in">Write-Output</span> <span class="hljs-string">"No VMs found."</span>
         <span class="hljs-keyword">return</span>
     }

     <span class="hljs-built_in">Write-Output</span> <span class="hljs-string">"Found <span class="hljs-variable">$</span>(<span class="hljs-variable">$vms</span>.Count) VMs"</span>

     <span class="hljs-comment"># Stop VMs with autostop = true tag</span>
     <span class="hljs-keyword">foreach</span> (<span class="hljs-variable">$vm</span> <span class="hljs-keyword">in</span> <span class="hljs-variable">$vms</span>) {
         <span class="hljs-keyword">try</span> {
             <span class="hljs-comment"># Check for autostop tag</span>
             <span class="hljs-variable">$autostop</span> = <span class="hljs-variable">$vm</span>.Tags<span class="hljs-function">[<span class="hljs-string">"autostop"</span>]</span>

             <span class="hljs-keyword">if</span> (<span class="hljs-variable">$autostop</span> <span class="hljs-operator">-eq</span> <span class="hljs-string">"true"</span>) {  <span class="hljs-comment"># Case-insensitive comparison</span>
                 <span class="hljs-built_in">Write-Output</span> <span class="hljs-string">"Stopping VM: <span class="hljs-variable">$</span>(<span class="hljs-variable">$vm</span>.Name) in resource group: <span class="hljs-variable">$</span>(<span class="hljs-variable">$vm</span>.ResourceGroupName) (autostop tag present)"</span>
                 <span class="hljs-variable">$stopResult</span> = <span class="hljs-built_in">Stop-AzVM</span> <span class="hljs-literal">-ResourceGroupName</span> <span class="hljs-variable">$vm</span>.ResourceGroupName <span class="hljs-literal">-Name</span> <span class="hljs-variable">$vm</span>.Name <span class="hljs-literal">-Force</span>

                 <span class="hljs-keyword">if</span> (<span class="hljs-variable">$stopResult</span>.Status <span class="hljs-operator">-eq</span> <span class="hljs-string">"Succeeded"</span>) {
                     <span class="hljs-built_in">Write-Output</span> <span class="hljs-string">"Successfully stopped VM: <span class="hljs-variable">$</span>(<span class="hljs-variable">$vm</span>.Name)"</span>
                 } <span class="hljs-keyword">else</span> {
                     <span class="hljs-built_in">Write-Error</span> <span class="hljs-string">"Failed to stop VM: <span class="hljs-variable">$</span>(<span class="hljs-variable">$vm</span>.Name). Status: <span class="hljs-variable">$</span>(<span class="hljs-variable">$stopResult</span>.Status)"</span>
                 }
             } <span class="hljs-keyword">else</span> {
                 <span class="hljs-built_in">Write-Output</span> <span class="hljs-string">"Skipping VM: <span class="hljs-variable">$</span>(<span class="hljs-variable">$vm</span>.Name) in resource group: <span class="hljs-variable">$</span>(<span class="hljs-variable">$vm</span>.ResourceGroupName) (autostop tag NOT present or not 'true')"</span>
             }
         } <span class="hljs-keyword">catch</span> {
             <span class="hljs-built_in">Write-Error</span> <span class="hljs-string">"Error processing VM <span class="hljs-variable">$</span>(<span class="hljs-variable">$vm</span>.Name): <span class="hljs-variable">$_</span>"</span>
             <span class="hljs-keyword">continue</span> <span class="hljs-comment"># Continue to the next VM even if one fails</span>
         }
     }

     <span class="hljs-built_in">Write-Output</span> <span class="hljs-string">"VM stop operations completed"</span>

 } <span class="hljs-keyword">catch</span> {
     <span class="hljs-built_in">Write-Error</span> <span class="hljs-string">"Error in runbook execution: <span class="hljs-variable">$_</span>"</span>
     <span class="hljs-keyword">throw</span> <span class="hljs-variable">$_</span>
 } <span class="hljs-keyword">finally</span> {
     <span class="hljs-comment"># Clean up authentication context</span>
     <span class="hljs-built_in">Write-Output</span> <span class="hljs-string">"Cleaning up Azure context..."</span>
     <span class="hljs-built_in">Clear-AzContext</span> <span class="hljs-literal">-Force</span>
 }
</code></pre>
<p>🔥 <strong>Alternatively, you can get the full script from my GitHub repo:</strong><br /> <a target="_blank" href="https://github.com/AbiVavilala/Azure-AutomationAccount-runbook-workshop/blob/main/final-runbookstop.ps1"><strong>GitHub Repo: final-runbookstop.ps1</strong></a></p>
<ol start="6">
<li>Click <strong>Save</strong> → <strong>Publish</strong>.</li>
</ol>
<h2 id="heading-5-create-a-schedule-for-the-runbook"><strong>5 Create a Schedule for the Runbook</strong></h2>
<p>To automate the Runbook execution, we'll create a <strong>schedule</strong>.</p>
<h3 id="heading-steps-4"><strong>Steps:</strong></h3>
<ol>
<li><p>Open the <strong>Azure Portal</strong> → Go to your <strong>Automation Account</strong>.</p>
</li>
<li><p>Select <strong>Runbooks</strong> → Click on <code>Stop-All-VMs</code>.</p>
</li>
<li><p>Click <strong>Schedules</strong> → <strong>Add a Schedule</strong>.</p>
</li>
<li><p>Select <strong>Create a new schedule</strong>, and provide:</p>
<ul>
<li><p><strong>Name</strong>: <code>Daily-VM-Shutdown</code></p>
</li>
<li><p><strong>Recurrence</strong>: <strong>Daily</strong></p>
</li>
<li><p><strong>Time</strong>: Choose a suitable time (e.g., 10 PM UTC)</p>
</li>
</ul>
</li>
<li><p>Click <strong>Create</strong>.</p>
</li>
<li><p>Under <strong>Parameters and run settings</strong>, ensure the default values are correct.</p>
</li>
<li><p>Click <strong>OK</strong> to save.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739064978493/1da6e66f-4319-41f8-bd52-4b1a894aef45.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-6-testing-and-verification"><strong>6 Testing and Verification</strong></h2>
<p>To ensure everything is working:<br /> <strong>Schedule triggers the Runbook</strong> → confirm a job runs at the specified time.<br /> <strong>Check VM Status</strong> → Go to the <strong>Azure Portal</strong> → Open <strong>Virtual Machines</strong> and verify the VMs are stopping.<br /> <strong>Check Logs</strong> → View the <strong>Job Output</strong> in the Runbook to see if any errors occurred.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739065058018/fd96f087-3c87-473d-8056-c510a6b8fe78.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-powershell">

Mode             : <span class="hljs-keyword">Process</span>

ContextDirectory : None

ContextFile      : None

CacheDirectory   : None

CacheFile        : None

KeyStoreFile     : None

Settings         : {[<span class="hljs-type">InstallationId</span>, <span class="hljs-number">102</span><span class="hljs-type">b9476</span>-<span class="hljs-type">d6c9</span>-<span class="hljs-number">4337</span>-<span class="hljs-number">870</span><span class="hljs-type">a</span>-<span class="hljs-number">4149300</span><span class="hljs-type">e7a15</span>]}



Connecting to Azure <span class="hljs-keyword">using</span> User-Assigned Managed Identity...





Environments                                                                                           Context

------------                                                                                           -------

{[<span class="hljs-type">AzureChinaCloud</span>, <span class="hljs-type">AzureChinaCloud</span>], [<span class="hljs-type">AzureUSGovernment</span>, <span class="hljs-type">AzureUSGovernment</span>], [<span class="hljs-type">AzureCloud</span>, <span class="hljs-type">AzureCloud</span>]} Microsoft.Azure.



Setting context to subscription: <span class="hljs-number">6</span>baeb535<span class="hljs-literal">-5ac9</span><span class="hljs-literal">-402f</span><span class="hljs-literal">-83c4</span><span class="hljs-literal">-4aed96077df6</span>






Getting all VMs <span class="hljs-keyword">in</span> the subscription...



Found <span class="hljs-number">2</span> VMs



Skipping VM: backend1<span class="hljs-literal">-vm</span> <span class="hljs-keyword">in</span> resource <span class="hljs-built_in">group</span>: RUNBOOKRG (autostop tag NOT present or not <span class="hljs-string">'true'</span>)



Stopping VM: backend2<span class="hljs-literal">-vm</span> <span class="hljs-keyword">in</span> resource <span class="hljs-built_in">group</span>: RUNBOOKRG (autostop tag present)



Successfully stopped VM: backend2<span class="hljs-literal">-vm</span>

VM stop operations completed

Cleaning up Azure context...
</code></pre>
<p>Above is the output of the Job in Azure.</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>🎯 We successfully automated <strong>Azure VM shutdown</strong> using <strong>Azure Automation Account</strong> and <strong>Runbooks</strong>.<br />🎯 We used a <strong>User Assigned Managed Identity</strong> to authenticate securely.<br />🎯 We scheduled the Runbook for <strong>automatic execution</strong>.</p>
<p>🎯 We only shut down the VMs tagged <code>autostop = true</code></p>
]]></description><link>https://clouddevopsinsights.com/automating-azure-vm-management-with-azure-automation-account</link><guid isPermaLink="true">https://clouddevopsinsights.com/automating-azure-vm-management-with-azure-automation-account</guid><category><![CDATA[Azure]]></category><category><![CDATA[azure-devops]]></category><category><![CDATA[automation]]></category><category><![CDATA[VMManagement ]]></category><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[The Importance of Pseudocode for DevOps Engineers in Automation]]></title><description><![CDATA[<h3 id="heading-introduction">Introduction:</h3>
<p>In the fast-paced world of DevOps, where automation and efficiency are key, pseudocode serves as a vital bridge between ideas and implementation. While DevOps engineers often deal with complex scripts and tools, pseudocode provides a simplified, language-agnostic way to design and communicate automation workflows. In this article, we explore why pseudocode is essential for DevOps engineers and how it can streamline automation processes.</p>
<hr />
<h2 id="heading-what-is-pseudocode"><strong>What is Pseudocode?</strong></h2>
<p>Pseudocode is a high-level representation of an algorithm or workflow that uses plain language and programming-like structure. It's not bound by the syntax of any specific programming language, making it easy to understand for both technical and non-technical stakeholders.</p>
<p><strong>Example:</strong></p>
<pre><code class="lang-powershell"><span class="hljs-built_in">START</span>
  Define variable <span class="hljs-string">"serverList"</span>
  <span class="hljs-keyword">For</span> each <span class="hljs-string">"server"</span> <span class="hljs-keyword">in</span> <span class="hljs-string">"serverList"</span>:
    Check <span class="hljs-keyword">if</span> <span class="hljs-string">"server"</span> is running
    <span class="hljs-keyword">If</span> <span class="hljs-string">"server"</span> is not running:
      Restart <span class="hljs-string">"server"</span>
    <span class="hljs-keyword">End</span> <span class="hljs-keyword">If</span>
  <span class="hljs-keyword">End</span> <span class="hljs-keyword">For</span>
<span class="hljs-keyword">END</span>
</code></pre>
<p>This simple pseudocode outlines a server monitoring and restart process without delving into specific syntax.</p>
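<p>To see how pseudocode maps onto a concrete script, here is one possible PowerShell rendering of the same logic. It is a sketch only: it assumes the servers answer ping and can be restarted remotely (e.g., via WinRM), and the server names are examples.</p>
<pre><code class="lang-powershell"># One possible translation of the pseudocode above
$serverList = @("web01", "web02", "db01")

foreach ($server in $serverList) {
    # "Check if server is running" - approximated here with a single ping
    $isRunning = Test-Connection -ComputerName $server -Count 1 -Quiet

    if (-not $isRunning) {
        # "Restart server" - requires remoting permissions on the target
        Restart-Computer -ComputerName $server -Force
    }
}
</code></pre>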
<hr />
<h2 id="heading-why-pseudocode-matters-for-devops-engineers"><strong>Why Pseudocode Matters for DevOps Engineers</strong></h2>
<h3 id="heading-1-clarity-in-automation-design">1. <strong>Clarity in Automation Design</strong></h3>
<p>DevOps engineers often work on intricate automation pipelines, including CI/CD workflows, infrastructure provisioning, and monitoring. Pseudocode helps:</p>
<ul>
<li><p>Break down complex tasks into manageable steps.</p>
</li>
<li><p>Clearly define the logic before diving into code.</p>
</li>
<li><p>Ensure that all team members understand the workflow, regardless of their familiarity with the programming language.</p>
</li>
</ul>
<h3 id="heading-2-facilitating-collaboration">2. <strong>Facilitating Collaboration</strong></h3>
<p>DevOps projects typically involve cross-functional teams, including developers, operations, and sometimes even business stakeholders. Pseudocode:</p>
<ul>
<li><p>Acts as a universal language to communicate ideas.</p>
</li>
<li><p>Enables non-technical team members to provide input on workflows.</p>
</li>
<li><p>Reduces misunderstandings during the planning phase.</p>
</li>
</ul>
<h3 id="heading-3-error-reduction">3. <strong>Error Reduction</strong></h3>
<p>By focusing on logic rather than syntax, pseudocode allows engineers to:</p>
<ul>
<li><p>Identify potential issues early in the design phase.</p>
</li>
<li><p>Avoid common pitfalls in automation scripts.</p>
</li>
<li><p>Ensure the workflow aligns with the intended goals.</p>
</li>
</ul>
<h3 id="heading-4-reusable-templates">4. <strong>Reusable Templates</strong></h3>
<p>Pseudocode can serve as a reusable blueprint for similar automation tasks. For instance, a pseudocode template for deploying infrastructure can be adapted for different environments or tools.</p>
<hr />
<h2 id="heading-applications-of-pseudocode-in-devops-automation"><strong>Applications of Pseudocode in DevOps Automation</strong></h2>
<h3 id="heading-1-infrastructure-as-code-iac"><strong>1. Infrastructure as Code (IaC)</strong></h3>
<p>Before writing Terraform or Ansible scripts, pseudocode can outline the desired state of the infrastructure:</p>
<pre><code class="lang-powershell"><span class="hljs-built_in">START</span>
  Define <span class="hljs-string">"resource group"</span>
  Create <span class="hljs-string">"virtual network"</span>
  <span class="hljs-keyword">For</span> each <span class="hljs-string">"subnet"</span> <span class="hljs-keyword">in</span> <span class="hljs-string">"subnet list"</span>:
    Create <span class="hljs-string">"subnet"</span>
  <span class="hljs-keyword">End</span> <span class="hljs-keyword">For</span>
<span class="hljs-keyword">END</span>
</code></pre>
<h3 id="heading-2-cicd-pipelines"><strong>2. CI/CD Pipelines</strong></h3>
<p>For complex pipelines, pseudocode helps visualize stages and dependencies:</p>
<pre><code class="lang-powershell"><span class="hljs-built_in">START</span>
  Trigger pipeline on <span class="hljs-string">"code commit"</span>
  Run <span class="hljs-string">"unit tests"</span>
  <span class="hljs-keyword">If</span> <span class="hljs-string">"tests pass"</span>:
    Build application
    Deploy to <span class="hljs-string">"staging"</span>
    Run <span class="hljs-string">"integration tests"</span>
    <span class="hljs-keyword">If</span> <span class="hljs-string">"integration tests pass"</span>:
      Deploy to <span class="hljs-string">"production"</span>
    <span class="hljs-keyword">End</span> <span class="hljs-keyword">If</span>
  <span class="hljs-keyword">End</span> <span class="hljs-keyword">If</span>
<span class="hljs-keyword">END</span>
</code></pre>
<h3 id="heading-3-incident-response-automation"><strong>3. Incident Response Automation</strong></h3>
<p>Pseudocode can outline automated responses to system alerts:</p>
<pre><code class="lang-powershell"><span class="hljs-built_in">START</span>
  Monitor <span class="hljs-string">"system metrics"</span>
  <span class="hljs-keyword">If</span> <span class="hljs-string">"CPU usage &gt; 80%"</span>:
    Notify <span class="hljs-string">"on-call engineer"</span>
    Scale up <span class="hljs-string">"instances"</span>
  <span class="hljs-keyword">End</span> <span class="hljs-keyword">If</span>
<span class="hljs-keyword">END</span>
</code></pre>
<hr />
<h2 id="heading-best-practices-for-writing-pseudocode"><strong>Best Practices for Writing Pseudocode</strong></h2>
<ol>
<li><p><strong>Keep It Simple:</strong> Avoid unnecessary details; focus on the logic.</p>
</li>
<li><p><strong>Use Consistent Structure:</strong> Follow a logical flow with clear start and end points.</p>
</li>
<li><p><strong>Be Language-Agnostic:</strong> Avoid syntax or keywords specific to any programming language.</p>
</li>
<li><p><strong>Focus on Readability:</strong> Use meaningful variable names and clear indentation.</p>
</li>
<li><p><strong>Iterate and Refine:</strong> Review and update pseudocode as requirements evolve.</p>
</li>
</ol>
<hr />
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>For DevOps engineers, pseudocode is more than just a planning tool: it's a way to ensure clarity, collaboration, and efficiency in automation projects. By using pseudocode to design workflows, teams can reduce errors, streamline development, and create reusable templates for future tasks. Whether you're automating infrastructure, building CI/CD pipelines, or responding to incidents, pseudocode is an indispensable part of the DevOps toolkit.</p>
<p>Start incorporating pseudocode into your workflow today and experience the difference it makes in simplifying complex automation tasks!</p>
]]></description><link>https://clouddevopsinsights.com/the-importance-of-pseudocode-for-devops-engineers-in-automation</link><guid isPermaLink="true">https://clouddevopsinsights.com/the-importance-of-pseudocode-for-devops-engineers-in-automation</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[Understanding Docker Persistent Volumes and Their Independent Lifecycle]]></title><description><![CDATA[<h3 id="heading-understanding-persistent-volumes-in-docker-and-why-their-lifecycle-is-independent-of-containers"><strong>Understanding Persistent Volumes in Docker and Why Their Lifecycle is Independent of Containers</strong></h3>
<p>In the world of containerization, Docker has become one of the most popular tools for deploying applications. However, one of the key challenges when working with containers is managing data. By default, Docker containers are ephemeral: any data written to a container's writable layer is lost when the container is removed. This is where <strong>Docker volumes</strong> come into play, providing a way to persist data across container restarts and removals. In this blog, we'll dive into persistent volumes in Docker and explain why their lifecycle is independent of containers.</p>
<h3 id="heading-what-are-docker-volumes">What Are Docker Volumes?</h3>
<p>A <strong>Docker volume</strong> is a storage mechanism that allows data to persist beyond the lifecycle of a container. Volumes are stored outside of the container's filesystem, meaning they are not removed when a container is stopped or deleted. This makes them an essential tool for managing persistent data such as databases, logs, and application state in containerized environments.</p>
<p>Docker provides two types of storage options for containers:</p>
<ol>
<li><p><strong>Volumes</strong>: Managed by Docker and stored in a specific directory on the host filesystem.</p>
</li>
<li><p><strong>Bind mounts</strong>: Map a host directory to a container directory, allowing the container to access files on the host machine.</p>
</li>
</ol>
<p>While bind mounts are useful for development purposes, volumes are the preferred method for storing persistent data in production environments because Docker manages them and they are isolated from the host filesystem.</p>
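<p>For comparison, the two options look like this on the command line (container names and paths are illustrative):</p>
<pre><code class="lang-powershell"># Named volume: Docker manages where the data lives on the host
docker run -d --name web1 -v my_data:/usr/share/nginx/html nginx

# Bind mount: an explicit host directory is mapped into the container
docker run -d --name web2 -v ${PWD}/site:/usr/share/nginx/html nginx
</code></pre>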
<h3 id="heading-why-is-the-volume-lifecycle-independent-of-containers">Why Is the Volume Lifecycle Independent of Containers?</h3>
<p>One of the most important features of Docker volumes is that their lifecycle is independent of the containers that use them. This means that even if you stop or remove a container, the volume will continue to exist and can be reused by other containers.</p>
<p>Here's why this is crucial:</p>
<ol>
<li><p><strong>Data Persistence</strong>: Since volumes are not tied to the lifecycle of a container, they provide a reliable way to persist data. For example, if you have a database running inside a container and the container crashes or is deleted, the data stored in the volume will remain intact. You can then spin up a new container and mount the same volume to continue where you left off.</p>
</li>
<li><p><strong>Reusability</strong>: Volumes can be reused by multiple containers. This is especially useful in microservices architectures, where different services might need to access the same data. For example, you could have a web application and a database container that both use the same volume to share data.</p>
</li>
<li><p><strong>Backup and Restore</strong>: Since volumes are independent of containers, you can easily back up and restore data stored in volumes (see the example after this list). This can be done without worrying about the state of the container itself, providing a more reliable backup solution.</p>
</li>
<li><p><strong>Container Portability</strong>: Volumes allow you to decouple the data from the container, making it easier to move containers between different environments. For example, you can move a container from your local development machine to a production environment, and the data in the volume will remain accessible.</p>
</li>
</ol>
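<p>As referenced in the backup point above, a common pattern is to use a short-lived helper container to archive a volume's contents to the host, and the same trick in reverse to restore it. A sketch, assuming a volume named <code>my_data</code>:</p>
<pre><code class="lang-powershell"># Back up the my_data volume into a tar archive in the current directory
docker run --rm -v my_data:/data -v ${PWD}:/backup alpine tar czf /backup/my_data_backup.tar.gz -C /data .

# Restore the archive into the (new or existing) volume
docker run --rm -v my_data:/data -v ${PWD}:/backup alpine tar xzf /backup/my_data_backup.tar.gz -C /data
</code></pre>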
<h3 id="heading-creating-and-using-a-docker-volume-with-mynginx-custom-image">Creating and Using a Docker Volume with my_nginx custom image</h3>
<h3 id="heading-creating-and-using-a-docker-volume-with-mynginx">Creating and Using a Docker Volume with my_nginx</h3>
<p>To demonstrate how Docker volumes work, let's create a volume and use it with a custom-image container to serve a website.</p>
<h4 id="heading-1-commands-to-create-docker-image-with-custom-nginx-image">1. Commands to create docker image with custom nginx image:</h4>
<pre><code class="lang-bash"><span class="hljs-comment">## create a docker volume</span>
docker volume create my_data

<span class="hljs-comment">## mount the volume to docker container with nginx image</span>
 docker run -d --name my_nginx -v my_data:/data -p 8080:80 custom_nginx
</code></pre>
<h3 id="heading-observing-the-volume">Observing the Volume</h3>
<p>In the images below, you can see:</p>
<ol>
<li><p>The <code>my_data</code> folder on the Docker host, where the volume data is stored.</p>
</li>
<li><p>The <code>/data</code> folder inside the container, which is linked to the <code>my_data</code> volume.</p>
</li>
</ol>
<p>Although the folder appears within the container's filesystem, the data itself resides on the Docker host, making the volume's lifecycle independent of the container.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737096425221/e4d38e2a-f5d7-491e-922a-8ea3974ac6c3.png" alt class="image--center mx-auto" /></p>
<p>The above shows the <code>my_data</code> folder on the Docker host.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737096485459/4f72f092-fb54-4944-a36e-b8f0e4fd15af.png" alt class="image--center mx-auto" /></p>
<p>The above shows the <code>/data</code> folder inside the container. Although the folder appears inside the container's filesystem, the files live outside the container, so the folder's lifecycle is independent of the container.</p>
<h3 id="heading-testing-the-website">Testing the Website</h3>
<p>Now, let's verify that the container is serving the website using the <code>index.html</code> file stored in the volume.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737096645497/4bb9fa30-cb94-4afd-8718-8fa4be8fae6d.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-step-3-delete-and-recreate-the-container">Step 3: Delete and Recreate the Container</h4>
<p>To test the persistence of the volume, we'll delete the container and create a new one using the same volume.</p>
<pre><code class="lang-bash"><span class="hljs-comment">## delete the container</span>
docker stop my_nginx
docker rm my_nginx

<span class="hljs-comment">## create a new container on port 8081</span>
docker run -d --name new_nginx -v my_data:/data -p 8081:80 custom_nginx
</code></pre>
<h3 id="heading-observing-the-result">Observing the Result</h3>
<p>After deleting the original container and creating a new one, you'll notice the same website is being served. This is because the website data resides in the <code>my_data</code> volume on the Docker host, which is independent of any specific container.</p>
<p>This demonstrates how Docker volumes ensure data persistence, allowing you to manage containerized applications without worrying about data loss when containers are stopped or recreated.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737097120871/92a3f125-1b33-4d6f-858e-e922b2915f98.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>Docker volumes are an essential tool for managing persistent data in containerized applications. The key advantage of volumes is that their lifecycle is independent of the container, ensuring that data is not lost when containers are stopped or removed. This makes them ideal for use cases where data persistence is crucial, such as databases and application state. By leveraging Docker volumes, you can ensure your containers are both portable and resilient, providing a more reliable way to manage data in a containerized environment.</p>
<p>By understanding and using volumes effectively, you can take full advantage of Docker's capabilities and build robust, scalable applications that can run seamlessly across different environments.</p>
]]></description><link>https://clouddevopsinsights.com/understanding-docker-persistent-volumes-and-their-independent-lifecycle</link><guid isPermaLink="true">https://clouddevopsinsights.com/understanding-docker-persistent-volumes-and-their-independent-lifecycle</guid><category><![CDATA[Docker]]></category><category><![CDATA[dockervolume]]></category><category><![CDATA[dockercommands]]></category><category><![CDATA[containerization]]></category><category><![CDATA[Microservices]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[Passing Storage Account Secret Key Dynamically in Azure DevOps Pipelines]]></title><description><![CDATA[<p>Managing secrets securely is a critical aspect of cloud and DevOps practices. When working with Azure Storage Accounts, you might need to dynamically pass the secret key to scripts or tasks. In this article, I will explain how to achieve this using Azure CLI, PowerShell, and the Replace Tokens task in Azure DevOps.</p>
<p>In this article, I will deploy Terraform resources using Azure DevOps. You can find the code in the repo below.</p>
<p><a target="_blank" href="https://github.com/AbiVavilala/TerraformADO">Code repo for this project</a></p>
<h3 id="heading-define-variables"><strong>Define Variables</strong></h3>
<p>To begin, we need to create a variable called <code>storagekey</code>. The value of this variable will be empty initially and will be dynamically fetched by the pipeline using a PowerShell script. Make sure to mark the variable as "Settable at release time."</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736648886481/599f6176-da8a-4700-abed-a1f3eb04aab2.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-add-azure-powershell-script-inlinescript-task">Add Azure PowerShell script: InlineScript task</h3>
<p>In this task, include the PowerShell script that fetches the storage account key and passes it to your pipeline.</p>
<pre><code class="lang-powershell"><span class="hljs-variable">$key</span> = (<span class="hljs-built_in">Get-AzStorageAccountKey</span> <span class="hljs-literal">-ResourceGroupName</span> <span class="hljs-variable">$</span>(terraformrg) <span class="hljs-literal">-AccountName</span> <span class="hljs-variable">$</span>(terraformstorageaccount))[<span class="hljs-number">0</span>].Value

<span class="hljs-built_in">Write-Host</span> <span class="hljs-string">"##vso[task.setvariable variable=storagekey]<span class="hljs-variable">$key</span>"</span>
</code></pre>
<p>This script retrieves the storage account key dynamically and stores it in the pipeline variable <code>storagekey</code>.</p>
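<p>If you also want Azure DevOps to mask the key in job logs, the same logging command accepts an <code>issecret</code> flag; a variant of the line above:</p>
<pre><code class="lang-powershell"># Mark the pipeline variable as secret so its value is masked in the logs
Write-Host "##vso[task.setvariable variable=storagekey;issecret=true]$key"
</code></pre>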
<h3 id="heading-replace-tokens-task"><strong>Replace Tokens Task</strong></h3>
<p>The Replace Tokens task allows you to dynamically replace placeholders in your code with pipeline variables. Follow these steps:</p>
<ol>
<li><p><strong>Specify Source Code and Target Files:</strong></p>
<ul>
<li>In the task configuration, define the source code directory and the target files where the placeholders need to be replaced.</li>
</ul>
</li>
<li><p><strong>Set Prefix and Suffix:</strong></p>
<ul>
<li>Use the same prefix and suffix as defined in your code. For example, if your placeholders look like <code>__storagekey__</code>, set both the prefix and the suffix to <code>__</code>.</li>
</ul>
</li>
</ol>
<p>When the pipeline runs, the Replace Tokens task will substitute the placeholders in the target files with the values of the corresponding pipeline variables.</p>
<h2 id="heading-why-this-approach-works"><strong>Why This Approach Works</strong></h2>
<p>By dynamically fetching the storage account secret key and passing it securely through the pipeline:</p>
<ul>
<li><p><strong>Security:</strong> Secrets are not hardcoded, reducing the risk of exposure.</p>
</li>
<li><p><strong>Automation:</strong> The pipeline can handle key rotations without manual intervention.</p>
</li>
<li><p><strong>Flexibility:</strong> The Replace Tokens task ensures that the updated key is seamlessly integrated into your code.</p>
</li>
</ul>
<hr />
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>This method allows you to securely and dynamically manage storage account secret keys in Azure DevOps pipelines. When keys are recreated, the pipeline remains unaffected, ensuring smooth and secure operations. By following these steps, you can enhance the security and automation of your DevOps workflows.</p>
]]></description><link>https://clouddevopsinsights.com/passing-storage-account-secret-key-dynamically-in-azure-devops-pipelines</link><guid isPermaLink="true">https://clouddevopsinsights.com/passing-storage-account-secret-key-dynamically-in-azure-devops-pipelines</guid><category><![CDATA[Azure]]></category><category><![CDATA[azure-devops]]></category><category><![CDATA[Devops]]></category><category><![CDATA[storageaccount]]></category><category><![CDATA[Pipeline]]></category><category><![CDATA[CI/CD pipelines]]></category><category><![CDATA[secrets management]]></category><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[A Comprehensive Guide to Azure Data Transfer Costs: Demystifying the Hidden Charges]]></title><description><![CDATA[<p>Managing data transfer costs in Azure can be a challenge, especially for organizations with large-scale, distributed workloads. Azure data transfer charges, often referred to as <strong>data egress costs</strong>, can quickly add up if not properly understood and optimized. This article breaks down the nuances of Azure data transfer costs, explores common scenarios, and provides actionable tips for cost optimization.</p>
<h3 id="heading-1-what-are-azure-data-transfer-costs"><strong>1. What Are Azure Data Transfer Costs?</strong></h3>
<p>Azure data transfer costs are associated with moving data between services, regions, and networks. These costs are typically categorized as:</p>
<ul>
<li><p><strong>Ingress (Data In):</strong> Data entering Azure, usually free.</p>
</li>
<li><p><strong>Egress (Data Out):</strong> Data leaving Azure, such as to the internet or other Azure regions, incurs charges.</p>
</li>
</ul>
<hr />
<h3 id="heading-2-types-of-azure-data-transfer-scenarios"><strong>2. Types of Azure Data Transfer Scenarios</strong></h3>
<p>Azure data transfer costs vary based on the source, destination, and type of transfer. Here's a breakdown:</p>
<h4 id="heading-a-data-transfer-within-a-virtual-network-vnet"><strong>a. Data Transfer Within a Virtual Network (VNET)</strong></h4>
<ul>
<li><p>Data transfers within the same VNET are <strong>free</strong> as long as they occur within the same subnet or between subnets inside the VNET.</p>
</li>
<li><p>Example: Communication between two VMs in the same VNET.</p>
</li>
</ul>
<h4 id="heading-b-vnet-peering"><strong>b. VNET Peering</strong></h4>
<ol>
<li><p><strong>Regional VNET Peering</strong> (Same Region):</p>
<ul>
<li>Transfers between two VNETs in the same region incur a charge of <strong>$0.01 per GB</strong> for both ingress and egress data.</li>
</ul>
</li>
<li><p><strong>Global VNET Peering</strong> (Different Regions):</p>
<ul>
<li><p>Data transfer costs depend on the zones where the VNETs are located:</p>
<ul>
<li><p><strong>Zone 1:</strong> $0.035 per GB (inbound and outbound).</p>
</li>
<li><p><strong>Zone 2:</strong> $0.09 per GB.</p>
</li>
<li><p><strong>Zone 3:</strong> $0.16 per GB.</p>
</li>
<li><p><strong>US Gov:</strong> $0.044 per GB.</p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
<h4 id="heading-c-data-transfer-between-availability-zones"><strong>c. Data Transfer Between Availability Zones</strong></h4>
<ul>
<li><p><strong>Within the Same Zone:</strong> Free.</p>
</li>
<li><p><strong>Between Different Zones:</strong> Charged at <strong>$0.01 per GB</strong> for both ingress and egress, even if the resources are in the same VNET.</p>
</li>
</ul>
<h4 id="heading-d-data-transfer-across-azure-regions"><strong>d. Data Transfer Across Azure Regions</strong></h4>
<ul>
<li><p>Transfers between regions incur charges based on the originating and destination zones:</p>
<ul>
<li>Example: Intra-continental transfers in North America cost <strong>$0.02 per GB</strong>.</li>
</ul>
</li>
</ul>
<h4 id="heading-e-expressroute"><strong>e. ExpressRoute</strong></h4>
<ul>
<li><p>Provides a private connection between Azure and on-premises infrastructure.</p>
</li>
<li><p>Offers cost-effective options for high-volume data transfers compared to VPN egress charges.</p>
</li>
</ul>
<hr />
<h3 id="heading-3-key-scenarios-impacting-data-transfer-costs"><strong>3. Key Scenarios Impacting Data Transfer Costs</strong></h3>
<h4 id="heading-a-content-delivery"><strong>a. Content Delivery</strong></h4>
<ul>
<li>Serving large files (e.g., videos, images) to users incurs significant egress costs.</li>
</ul>
<h4 id="heading-b-cross-region-replication"><strong>b. Cross-Region Replication</strong></h4>
<ul>
<li>Geo-redundant backups or disaster recovery setups involve inter-region data transfers, increasing costs.</li>
</ul>
<h4 id="heading-c-multi-region-applications"><strong>c. Multi-Region Applications</strong></h4>
<ul>
<li>Applications with components deployed across regions result in frequent inter-region data movement.</li>
</ul>
<h4 id="heading-d-hybrid-architectures"><strong>d. Hybrid Architectures</strong></h4>
<ul>
<li>Data flowing between Azure and on-premises systems can lead to substantial egress charges.</li>
</ul>
<hr />
<h3 id="heading-4-optimization-strategies-for-data-egress-costs"><strong>4. Optimization Strategies for Data Egress Costs</strong></h3>
<p>To control and reduce data transfer costs, consider the following best practices:</p>
<h4 id="heading-a-deploy-resources-strategically"><strong>a. Deploy Resources Strategically</strong></h4>
<ul>
<li>Place resources in regions with minimal or no data transfer costs unless compliance or performance mandates otherwise.</li>
</ul>
<h4 id="heading-b-limit-cross-zone-and-cross-region-transfers"><strong>b. Limit Cross-Zone and Cross-Region Transfers</strong></h4>
<ul>
<li>Keep data movement within the same zone or region to avoid unnecessary charges.</li>
</ul>
<h4 id="heading-c-optimize-application-architecture"><strong>c. Optimize Application Architecture</strong></h4>
<ul>
<li>Design applications to minimize data movement between regions and zones.</li>
</ul>
<h4 id="heading-d-leverage-expressroute"><strong>d. Leverage ExpressRoute</strong></h4>
<ul>
<li>For high-volume transfers, use ExpressRoute for cost-effective, private connections.</li>
</ul>
<h4 id="heading-e-use-compression-and-deduplication"><strong>e. Use Compression and Deduplication</strong></h4>
<ul>
<li>Compress data and use incremental synchronization to reduce the amount of data being transferred.</li>
</ul>
<h4 id="heading-f-archive-or-delete-unnecessary-data"><strong>f. Archive or Delete Unnecessary Data</strong></h4>
<ul>
<li>Regularly clean up unused or redundant data to avoid unnecessary storage and transfer costs.</li>
</ul>
<h4 id="heading-g-use-azure-cdn"><strong>g. Use Azure CDN</strong></h4>
<ul>
<li>Cache frequently accessed data closer to users to reduce egress charges.</li>
</ul>
<hr />
<h3 id="heading-5-monitoring-and-managing-azure-data-transfer-costs"><strong>5. Monitoring and Managing Azure Data Transfer Costs</strong></h3>
<p>Azure provides tools to help monitor and manage data transfer expenses:</p>
<h4 id="heading-a-azure-cost-management-and-billing"><strong>a. Azure Cost Management and Billing</strong></h4>
<ul>
<li>Offers detailed insights into spending, including data transfer costs.</li>
</ul>
<h4 id="heading-b-azure-monitor"><strong>b. Azure Monitor</strong></h4>
<ul>
<li>Tracks network usage and identifies cost-heavy data flows.</li>
</ul>
<h4 id="heading-c-azure-pricing-calculator"><strong>c. Azure Pricing Calculator</strong></h4>
<ul>
<li>Helps estimate costs for different data transfer scenarios.</li>
</ul>
<hr />
<h3 id="heading-6-example-multi-region-web-application"><strong>6. Example: Multi-Region Web Application</strong></h3>
<p><strong>Scenario:</strong><br />When running a web app like <a target="_blank" href="https://www.clouddevopsinsights.com/"><strong>https://www.clouddevopsinsights.com/</strong></a> on Azure with a virtual machine (VM) and an Azure Application Gateway as a Layer 7 load balancer, the <strong>data transfer cost</strong> for your organization depends on several factors, including the origin and destination of the data, the traffic path, and Azure's pricing structure. Here's a detailed explanation:</p>
<hr />
<h3 id="heading-1-data-transfer-scenarios-in-your-setup"><strong>1. Data Transfer Scenarios in Your Setup</strong></h3>
<h4 id="heading-scenario-1-internet-to-application-gatewayhttpswwwclouddevopsinsightscom"><strong>Scenario 1: Internet to</strong> <a target="_blank" href="https://www.clouddevopsinsights.com/"><strong>Application Gateway</strong></a></h4>
<ul>
<li><p><strong>Description:</strong> When a customer accesses your web app from their smartphone via the internet, the request first reaches the Azure Application Gateway.</p>
</li>
<li><p><strong>Cost:</strong></p>
<ul>
<li><p><strong>Inbound Data (Ingress):</strong> Free. Azure does not charge for inbound data traffic from the internet to Azure services.</p>
</li>
<li><p><strong>Outbound Data (Egress):</strong> Charged. Azure charges for outbound data from Azure services to the internet. The rate depends on the amount of data transferred and the region.</p>
</li>
</ul>
</li>
</ul>
<h4 id="heading-scenario-2-application-gateway-to-vm"><strong>Scenario 2: Application Gateway to VM</strong></h4>
<ul>
<li><p><strong>Description:</strong> The Application Gateway forwards the request to your backend VM hosting the web app.</p>
</li>
<li><p><strong>Cost:</strong> Free. Data transfer within the same Azure region between the Application Gateway and VM is free.</p>
</li>
</ul>
<h4 id="heading-scenario-3-vm-response-to-application-gateway"><strong>Scenario 3: VM Response to Application Gateway</strong></h4>
<ul>
<li><p><strong>Description:</strong> The VM processes the request and sends the response back to the Application Gateway.</p>
</li>
<li><p><strong>Cost:</strong> Free. Data transfer within the same region remains free.</p>
</li>
</ul>
<h4 id="heading-scenario-4-application-gateway-to-internet"><strong>Scenario 4: Application Gateway to Internet</strong></h4>
<ul>
<li><p><strong>Description:</strong> The Application Gateway sends the response to the customer's smartphone.</p>
</li>
<li><p><strong>Cost:</strong></p>
<ul>
<li><strong>Outbound Data (Egress):</strong> Charged. This is considered internet egress, and Azure applies charges based on the volume of data sent to the customer and the region of your Azure resources.</li>
</ul>
</li>
</ul>
<hr />
<h3 id="heading-2-cost-breakdown"><strong>2. Cost Breakdown</strong></h3>
<h4 id="heading-outbound-data-transfer-egress-costs"><strong>Outbound Data Transfer (Egress) Costs</strong></h4>
<p>Azure charges for outbound data transfer based on:</p>
<ul>
<li><p><strong>Data Volume:</strong> The amount of data sent from your app to the customer.</p>
</li>
<li><p><strong>Region:</strong> The region where your Azure resources (Application Gateway and VM) are hosted.</p>
</li>
<li><p><strong>Pricing Tiers:</strong></p>
<ul>
<li><p>For example, in most regions:</p>
<ul>
<li><p>The first 5 GB per month is free.</p>
</li>
<li><p>5 GB10 TB per month is charged at <strong>$0.087 per GB</strong>.</p>
</li>
<li><p>10 TB50 TB per month is charged at <strong>$0.083 per GB</strong>.</p>
</li>
<li><p>Larger volumes have reduced rates.</p>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<h4 id="heading-example-calculation"><strong>Example Calculation:</strong></h4>
<p>Assume:</p>
<ul>
<li><p>Your app serves 1,000 customers per day.</p>
</li>
<li><p>Each customer downloads 10 MB of data.</p>
</li>
<li><p>Monthly outbound data: 1,000 × 10 MB × 30 days = 300,000 MB = 300 GB.</p>
</li>
</ul>
<p>Cost:</p>
<ul>
<li><p>First 5 GB: Free.</p>
</li>
<li><p>Remaining 295 GB: 295 GB × $0.087/GB ≈ $25.67.</p>
</li>
</ul>
<p>Total monthly cost: <strong>$25.67</strong> for outbound data transfer.</p>
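<p>The same arithmetic as a quick PowerShell sketch (the rates are the example figures above, not a live price sheet):</p>
<pre><code class="lang-powershell"># 1,000 customers x 10 MB x 30 days, first 5 GB free, $0.087/GB thereafter
$gbPerMonth = (1000 * 10 * 30) / 1000        # 300 GB (decimal MB to GB)
$billableGb = [math]::Max(0, $gbPerMonth - 5)
$cost = [math]::Round($billableGb * 0.087, 2)
"Estimated monthly egress cost: `$$cost"     # -> $25.67
</code></pre>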
<hr />
<h3 id="heading-7-conclusion"><strong>7. Conclusion</strong></h3>
<p>Azure data transfer costs, while often overlooked, can significantly impact your cloud budget. By understanding the pricing structure and optimizing your architecture, you can reduce unnecessary expenses while maintaining performance and reliability.</p>
<p>Would you like assistance in designing a cost-efficient Azure architecture? Let us know in the comments!</p>
]]></description><link>https://clouddevopsinsights.com/a-comprehensive-guide-to-azure-data-transfer-costs-demystifying-the-hidden-charges</link><guid isPermaLink="true">https://clouddevopsinsights.com/a-comprehensive-guide-to-azure-data-transfer-costs-demystifying-the-hidden-charges</guid><category><![CDATA[Azure]]></category><category><![CDATA[azure solutions architect]]></category><category><![CDATA[Public Cloud]]></category><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[Understanding the Differences Between Terraform Count and For_each]]></title><description><![CDATA[<p>Terraform, as a powerful Infrastructure as Code (IaC) tool, provides two key constructs<code>count</code> and <code>for_each</code>to manage resource creation dynamically. While both enable you to create multiple resources efficiently, they differ in their flexibility and use cases. In this article, well explore the differences between <code>count</code> and <code>for_each</code>, and when to use each.</p>
<hr />
<h3 id="heading-what-is-terraform-count"><strong>What is Terraform Count?</strong></h3>
<p>The <code>count</code> meta-argument is a simple way to create multiple instances of a resource. By specifying a numeric value for <code>count</code>, Terraform will create that many instances of the resource.</p>
<h4 id="heading-example"><strong>Example:</strong></h4>
<pre><code class="lang-basic">resource <span class="hljs-string">"azurerm_virtual_machine"</span> <span class="hljs-string">"example"</span> {
  count = <span class="hljs-number">3</span>

  <span class="hljs-keyword">name</span>                  = <span class="hljs-string">"vm-${count.index}"</span>
  location              = azurerm_resource_group.example.location
  resource_group_name   = azurerm_resource_group.example.<span class="hljs-keyword">name</span>
  network_interface_ids = [azurerm_network_interface.example[count.index].id]
  vm_size               = <span class="hljs-string">"Standard_DS1_v2"</span>
}
</code></pre>
<p>In this example:</p>
<ul>
<li><p>Terraform creates three virtual machines.</p>
</li>
<li><p>The <code>count.index</code> value (0, 1, 2) is used to generate unique names for each instance.</p>
</li>
</ul>
<h4 id="heading-key-features-of-count"><strong>Key Features of Count:</strong></h4>
<ul>
<li><p><strong>Index-based:</strong> Resources are identified by their index.</p>
</li>
<li><p><strong>Simple:</strong> Best suited for scenarios where all instances share similar configurations.</p>
</li>
<li><p><strong>Limitations:</strong> Less flexible when dealing with heterogeneous resource configurations.</p>
</li>
</ul>
<hr />
<h3 id="heading-what-is-terraform-foreach"><strong>What is Terraform For_each?</strong></h3>
<p>The <code>for_each</code> meta-argument is more flexible and allows you to create resources based on a set, map, or list. Each instance is uniquely identified by a key rather than an index.</p>
<h4 id="heading-example-1"><strong>Example:</strong></h4>
<pre><code class="lang-basic"> terraform {
  required_providers {
    azurerm = {
      source  = <span class="hljs-string">"hashicorp/azurerm"</span>
      version = <span class="hljs-string">"=3.47.0"</span>
    }
  }
}

#https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_secret
provider <span class="hljs-string">"azurerm"</span> {
  features {} 
  client_id       = <span class="hljs-string">"00000000-0000-0000-0000-000000000000"</span>
  client_secret   = <span class="hljs-string">"20000000-0000-0000-0000-000000000000"</span>
  tenant_id       = <span class="hljs-string">"10000000-0000-0000-0000-000000000000"</span>
  subscription_id = <span class="hljs-string">"20000000-0000-0000-0000-000000000000"</span>
}
#variables are declared here
variable <span class="hljs-string">"resourcedetails"</span> {
  type = map(object({
    <span class="hljs-keyword">name</span>     = string
    location = string
    size     = string
    rg_name  = string
    vnet_name = string
    subnet_name = string
  }))
  default = {
    westus = {
      rg_name  = <span class="hljs-string">"westus-rg"</span>  
      <span class="hljs-keyword">name</span>     = <span class="hljs-string">"west-vm"</span>
      location = <span class="hljs-string">"westus2"</span>
      size     = <span class="hljs-string">"Standard_B2s"</span>
      vnet_name = <span class="hljs-string">"west-vnet"</span>
      subnet_name = <span class="hljs-string">"west-subnet"</span>
    }
    eastus = {
      rg_name  = <span class="hljs-string">"eastus-rg"</span>  
      <span class="hljs-keyword">name</span>     = <span class="hljs-string">"east-vm"</span>
      location = <span class="hljs-string">"eastus"</span>
      size     = <span class="hljs-string">"Standard_B1s"</span>
      vnet_name = <span class="hljs-string">"east-vnet"</span>
      subnet_name = <span class="hljs-string">"east-subnet"</span>
    }
  }
}


resource <span class="hljs-string">"azurerm_resource_group"</span> <span class="hljs-string">"myrg"</span> {
  for_each = var.resourcedetails

  <span class="hljs-keyword">name</span>     = each.value.rg_name
  location = each.value.location
}

resource <span class="hljs-string">"azurerm_virtual_network"</span> <span class="hljs-string">"myvnet"</span> {
  for_each = var.resourcedetails
  <span class="hljs-keyword">name</span>                = each.value.vnet_name
  address_space       = [<span class="hljs-string">"10.0.0.0/16"</span>]
  location            = azurerm_resource_group.myrg[each.<span class="hljs-keyword">key</span>].location
  resource_group_name = azurerm_resource_group.myrg[each.<span class="hljs-keyword">key</span>].<span class="hljs-keyword">name</span>
}

resource <span class="hljs-string">"azurerm_subnet"</span> <span class="hljs-string">"mysubnet"</span> {
  for_each = var.resourcedetails

  <span class="hljs-keyword">name</span>                 = each.value.subnet_name
  address_prefixes     = [<span class="hljs-string">"10.0.0.0/24"</span>]
  virtual_network_name = azurerm_virtual_network.myvnet[each.<span class="hljs-keyword">key</span>].<span class="hljs-keyword">name</span>
  resource_group_name  = azurerm_resource_group.myrg[each.<span class="hljs-keyword">key</span>].<span class="hljs-keyword">name</span>
}

resource <span class="hljs-string">"azurerm_network_interface"</span> <span class="hljs-string">"mynic"</span> {
  for_each = var.resourcedetails

  <span class="hljs-keyword">name</span>                = <span class="hljs-string">"my-nic"</span>  
  location            = azurerm_resource_group.myrg[each.<span class="hljs-keyword">key</span>].location
  resource_group_name = azurerm_resource_group.myrg[each.<span class="hljs-keyword">key</span>].<span class="hljs-keyword">name</span>
  ip_configuration {
    <span class="hljs-keyword">name</span>                          = <span class="hljs-string">"my-ip-config"</span>
    subnet_id                     = azurerm_subnet.mysubnet[each.<span class="hljs-keyword">key</span>].id
    private_ip_address_allocation = <span class="hljs-string">"Dynamic"</span>
  }
}


resource <span class="hljs-string">"azurerm_virtual_machine"</span> <span class="hljs-string">"vm"</span> {
  for_each = var.resourcedetails

  <span class="hljs-keyword">name</span>                  = each.value.<span class="hljs-keyword">name</span>
  location            = azurerm_resource_group.myrg[each.<span class="hljs-keyword">key</span>].location
  resource_group_name = azurerm_resource_group.myrg[each.<span class="hljs-keyword">key</span>].<span class="hljs-keyword">name</span>
  network_interface_ids = [azurerm_network_interface.mynic[each.<span class="hljs-keyword">key</span>].id]
  vm_size               = each.value.size

  storage_image_reference {
    publisher = <span class="hljs-string">"Canonical"</span>
    offer     = <span class="hljs-string">"UbuntuServer"</span>
    sku       = <span class="hljs-string">"16.04-LTS"</span>
    version   = <span class="hljs-string">"latest"</span>
  }

  storage_os_disk {
    <span class="hljs-keyword">name</span>              = <span class="hljs-string">"${each.value.name}-osdisk"</span>
    caching           = <span class="hljs-string">"ReadWrite"</span>
    create_option     = <span class="hljs-string">"FromImage"</span>
    managed_disk_type = <span class="hljs-string">"Standard_LRS"</span>
  }

  os_profile {
    computer_name  = each.value.<span class="hljs-keyword">name</span>
    admin_username = <span class="hljs-string">"adminuser"</span>
    admin_password = <span class="hljs-string">"Password1234!"</span>
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }


}
</code></pre>
<p>In this example:</p>
<ul>
<li>Two virtual machines are created, one per map key, each in its own resource group and region with a unique name and size</li>
</ul>
<h4 id="heading-key-features-of-foreach"><strong>Key Features of For_each:</strong></h4>
<ul>
<li><p><strong>Key-based:</strong> Resources are identified by unique keys (see the sketch after this list).</p>
</li>
<li><p><strong>Flexible:</strong> Ideal for scenarios where instances require different configurations.</p>
</li>
<li><p><strong>Dynamic:</strong> Can adapt to changes in the input set or map.</p>
</li>
</ul>
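<p>Because instances are key-addressed, you can inspect or target a single instance without disturbing the rest. A small sketch, assuming the two-region configuration above has been applied:</p>
<pre><code class="lang-basic"># for_each instances are addressed by key, not index
terraform state list
# azurerm_virtual_machine.vm["westus"]
# azurerm_virtual_machine.vm["eastus"]

# Plan changes for the westus VM only; the eastus instance is left untouched
terraform plan -target='azurerm_virtual_machine.vm["westus"]'
</code></pre>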
<hr />
<h3 id="heading-key-differences-between-count-and-foreach"><strong>Key Differences Between Count and For_each</strong></h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>Count</td><td>For_each</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Resource Identification</strong></td><td>Index-based (<code>count.index</code>)</td><td>Key-based (<code>each.key</code>)</td></tr>
<tr>
<td><strong>Input Type</strong></td><td>Numeric value</td><td>Map or set of strings</td></tr>
<tr>
<td><strong>Use Case</strong></td><td>Homogeneous resource configurations</td><td>Heterogeneous resource configurations</td></tr>
<tr>
<td><strong>Flexibility</strong></td><td>Limited</td><td>High</td></tr>
<tr>
<td><strong>Dynamic Updates</strong></td><td>Challenging with changing counts</td><td>Adapts easily to input changes</td></tr>
</tbody>
</table>
</div><hr />
<h3 id="heading-when-to-use-count"><strong>When to Use Count</strong></h3>
<ul>
<li><p>All resources have similar configurations.</p>
</li>
<li><p>You know the exact number of resources required upfront.</p>
</li>
<li><p>Simpler scenarios where flexibility is not a priority.</p>
</li>
</ul>
<h4 id="heading-example-use-case"><strong>Example Use Case:</strong></h4>
<p>Creating multiple identical storage accounts.</p>
<pre><code class="lang-basic">resource <span class="hljs-string">"azurerm_storage_account"</span> <span class="hljs-string">"example"</span> {
  count = <span class="hljs-number">5</span>

  <span class="hljs-keyword">name</span>                     = <span class="hljs-string">"storage${count.index}"</span>
  resource_group_name      = azurerm_resource_group.example.<span class="hljs-keyword">name</span>
  location                 = azurerm_resource_group.example.location
  account_tier             = <span class="hljs-string">"Standard"</span>
  account_replication_type = <span class="hljs-string">"LRS"</span>
}
</code></pre>
<hr />
<h3 id="heading-when-to-use-foreach"><strong>When to Use For_each</strong></h3>
<ul>
<li><p>Resources require unique configurations.</p>
</li>
<li><p>You need to manage resources based on dynamic or changing input data.</p>
</li>
<li><p>Resources need to be uniquely identified by a key.</p>
</li>
</ul>
<h4 id="heading-example-use-case-1"><strong>Example Use Case:</strong></h4>
<p>Creating VMs with different configurations for a development and production environment.</p>
<pre><code class="lang-basic">resource <span class="hljs-string">"azurerm_virtual_machine"</span> <span class="hljs-string">"example"</span> {
  for_each = {
    dev  = <span class="hljs-string">"Standard_DS1_v2"</span>
    prod = <span class="hljs-string">"Standard_DS2_v2"</span>
  }

  <span class="hljs-keyword">name</span>                  = each.<span class="hljs-keyword">key</span>
  location              = azurerm_resource_group.example.location
  resource_group_name   = azurerm_resource_group.example.<span class="hljs-keyword">name</span>
  vm_size               = each.value
}
</code></pre>
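<p>Adding a new key (for example, <code>staging</code>) to this map creates one additional VM without recreating <code>dev</code> or <code>prod</code>. A single keyed instance can also be removed on its own; a hedged example:</p>
<pre><code class="lang-basic"># Destroys only the "dev" instance; "prod" keeps its state and address
terraform destroy -target='azurerm_virtual_machine.example["dev"]'
</code></pre>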
<hr />
<h3 id="heading-conclusion"><strong>Conclusion</strong></h3>
<p>Both <code>count</code> and <code>for_each</code> are powerful tools in Terraform for managing multiple resources, but they serve different purposes. Use <code>count</code> for simple, uniform resource creation and <code>for_each</code> for more complex, dynamic scenarios. Understanding their differences and use cases will help you write more efficient and maintainable Terraform configurations.</p>
<p>By leveraging the right construct for the right situation, you can optimize your Infrastructure as Code workflows and better manage your cloud resources.</p>
]]></description><link>https://clouddevopsinsights.com/understanding-the-differences-between-terraform-count-and-foreach</link><guid isPermaLink="true">https://clouddevopsinsights.com/understanding-the-differences-between-terraform-count-and-foreach</guid><category><![CDATA[Terraform]]></category><category><![CDATA[#Iac #terraform #devops #aws]]></category><category><![CDATA[Public Cloud]]></category><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[Managing Terraform Drift in Azure: A Step-by-Step Guide to Sync Resources]]></title><description><![CDATA[<p><strong>Syncing Terraform with Azure: Handling Manual Changes to Resources</strong></p>
<p>Terraform is a powerful Infrastructure-as-Code (IaC) tool that allows you to manage your cloud infrastructure declaratively. However, scenarios can arise where resources created with Terraform are manually modified in the Azure Portal or via other means. This can lead to a mismatch, or "drift," between Terraform's state and the actual infrastructure.</p>
<p>In this blog, we'll explore how to handle such situations effectively. Let's consider a scenario where a Virtual Machine (VM) and a Network Security Group (NSG) were created using Terraform but were later manually modified in Azure. For example, additional rules were added to the NSG.</p>
<h3 id="heading-infrastructure-deployed">Infrastructure Deployed:</h3>
<p>I deployed a Linux VM and associated an NSG with the VM's NIC.</p>
<pre><code class="lang-basic">resource <span class="hljs-string">"azurerm_virtual_network"</span> <span class="hljs-string">"vnet-clouddevinsights"</span> {
  <span class="hljs-keyword">name</span>                = var.virtual_network_name
  address_space       = var.address_space
  location            = var.resource_group_location
  resource_group_name = var.resource_group_name
}

resource <span class="hljs-string">"azurerm_subnet"</span> <span class="hljs-string">"vnet-clouddevinsights-subnet"</span> {
  <span class="hljs-keyword">name</span>                 = var.subnet_name
  resource_group_name  = azurerm_resource_group.clouddevinsights.<span class="hljs-keyword">name</span>
  virtual_network_name = azurerm_virtual_network.vnet-clouddevinsights.<span class="hljs-keyword">name</span>
  address_prefixes     = var.subnet_address_prefix
}

resource <span class="hljs-string">"azurerm_network_security_group"</span> <span class="hljs-string">"nsg-clouddevinsights-nsg"</span> {
  <span class="hljs-keyword">name</span>                = var.network_security_group_name
  location            = var.resource_group_location
  resource_group_name = azurerm_resource_group.clouddevinsights.<span class="hljs-keyword">name</span>

  security_rule {
    <span class="hljs-keyword">name</span>                       = <span class="hljs-string">"Allow-SSH"</span>
    priority                   = <span class="hljs-number">1001</span>
    direction                  = <span class="hljs-string">"Inbound"</span>
    access                     = <span class="hljs-string">"Allow"</span>
    protocol                   = <span class="hljs-string">"Tcp"</span>
    source_port_range          = <span class="hljs-string">"*"</span>
    destination_port_range     = <span class="hljs-string">"22"</span>
    source_address_prefix      = <span class="hljs-string">"*"</span>
    destination_address_prefix = <span class="hljs-string">"*"</span>
  }

  security_rule {
    <span class="hljs-keyword">name</span>                       = <span class="hljs-string">"Allow-HTTP"</span>
    priority                   = <span class="hljs-number">1002</span>
    direction                  = <span class="hljs-string">"Inbound"</span>
    access                     = <span class="hljs-string">"Allow"</span>
    protocol                   = <span class="hljs-string">"Tcp"</span>
    source_port_range          = <span class="hljs-string">"*"</span>
    destination_port_range     = <span class="hljs-string">"80"</span>
    source_address_prefix      = <span class="hljs-string">"*"</span>
    destination_address_prefix = <span class="hljs-string">"*"</span>
  }

  security_rule {
    <span class="hljs-keyword">name</span>                       = <span class="hljs-string">"Allow-HTTPS"</span>
    priority                   = <span class="hljs-number">1003</span>
    direction                  = <span class="hljs-string">"Inbound"</span>
    access                     = <span class="hljs-string">"Allow"</span>
    protocol                   = <span class="hljs-string">"Tcp"</span>
    source_port_range          = <span class="hljs-string">"*"</span>
    destination_port_range     = <span class="hljs-string">"443"</span>
    source_address_prefix      = <span class="hljs-string">"*"</span>
    destination_address_prefix = <span class="hljs-string">"*"</span>
  }
}

resource <span class="hljs-string">"azurerm_network_interface"</span> <span class="hljs-string">"vm-nic"</span> {
  <span class="hljs-keyword">name</span>                = var.vm-nic-<span class="hljs-keyword">name</span>
  location            = azurerm_resource_group.clouddevinsights.location
  resource_group_name = azurerm_resource_group.clouddevinsights.<span class="hljs-keyword">name</span>

  ip_configuration {
    <span class="hljs-keyword">name</span>                          = <span class="hljs-string">"internal"</span>
    subnet_id                     = azurerm_subnet.vnet-clouddevinsights-subnet.id
    private_ip_address_allocation = <span class="hljs-string">"Dynamic"</span>

  }

}

resource <span class="hljs-string">"azurerm_network_interface_security_group_association"</span> <span class="hljs-string">"nsg-association"</span> {
  network_interface_id      = azurerm_network_interface.vm-nic.id
  network_security_group_id = azurerm_network_security_group.nsg-clouddevinsights-nsg.id
}

resource <span class="hljs-string">"azurerm_linux_virtual_machine"</span> <span class="hljs-string">"linux-vm"</span> {
  <span class="hljs-keyword">name</span>                = var.vm-<span class="hljs-keyword">name</span>
  resource_group_name = azurerm_resource_group.clouddevinsights.<span class="hljs-keyword">name</span>
  location            = var.resource_group_location
  size                = <span class="hljs-string">"Standard_B1s"</span>
  admin_username      = <span class="hljs-string">"adminuser"</span>
  admin_password = <span class="hljs-string">"Password1234!"</span>
  disable_password_authentication = false

  network_interface_ids = [
    azurerm_network_interface.vm-nic.id,
  ]

  os_disk {
    caching              = <span class="hljs-string">"ReadWrite"</span>
    storage_account_type = <span class="hljs-string">"Standard_LRS"</span>
  }

  source_image_reference {
    publisher = <span class="hljs-string">"Canonical"</span>
    offer     = <span class="hljs-string">"UbuntuServer"</span>
    sku       = <span class="hljs-string">"18.04-LTS"</span>
    version   = <span class="hljs-string">"latest"</span>
  }
}
</code></pre>
<h3 id="heading-make-changes-in-azure">Make Changes in Azure:</h3>
<p>I will add an NSG rule manually using the Azure portal, allowing all inbound traffic from the internet.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735691668168/faa3795b-6449-4cd6-ae1b-08ae4360dbb5.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-steps-to-sync-terraform-with-azure"><strong>Steps to Sync Terraform with Azure</strong></h3>
<h4 id="heading-step-1-update-terraforms-state-file"><strong>Step 1: Update Terraforms State File</strong></h4>
<p>Terraform's state file does not automatically reflect changes made directly in Azure. To synchronize the state file with the actual infrastructure, use the following command:</p>
<pre><code class="lang-json">terraform refresh
</code></pre>
<p>This command fetches the latest state of the resources from Azure and updates the local state file. For example, if you added new rules to the NSG or changed the VM size, these changes will now be reflected in Terraform's state.</p>
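<p>Note that on newer Terraform versions (v0.15.4 and later), <code>terraform refresh</code> is deprecated in favor of the <code>-refresh-only</code> mode, which lets you review state changes before they are written:</p>
<pre><code class="lang-basic"># Preview how the recorded state differs from the real infrastructure
terraform plan -refresh-only

# Accept the detected changes into the state file without modifying any resources
terraform apply -refresh-only
</code></pre>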
<hr />
<h4 id="heading-step-2-detect-drift-using-terraform-plan"><strong>Step 2: Detect Drift Using</strong> <code>terraform plan</code></h4>
<p>After refreshing the state, run the <code>terraform plan</code> command to identify any differences between the actual resources in Azure and the desired configuration defined in your <code>.tf</code> files:</p>
<pre><code class="lang-json">terraform plan
</code></pre>
<p>Terraform will analyze the current state and the configuration files to detect any drift. It will display a plan of the actions needed to bring the infrastructure back in line with the desired state. For example, it might show that the VM size or NSG rules differ from the configuration. The image below shows the detected drift.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735691824604/d6c89525-9b80-47ff-abfc-746973cf9e08.png" alt class="image--center mx-auto" /></p>
<hr />
<h4 id="heading-step-3-apply-changes-to-sync-resources"><strong>Step 3: Apply Changes to Sync Resources</strong></h4>
<p>If drift is detected, you can apply the necessary changes to align the resources with your Terraform configuration. Use the following command:</p>
<pre><code class="lang-json">terraform apply
</code></pre>
<p>Terraform will prompt you to confirm the changes. Once confirmed, it will update the Azure resources to match the desired configuration. For example, it might:</p>
<ul>
<li>Remove any manually added rules in the NSG that are not in the Terraform configuration.</li>
</ul>
<hr />
<h3 id="heading-special-case-both-terraform-and-azure-modify-resources"><strong>Special Case: Both Terraform and Azure Modify Resources</strong></h3>
<p>If both Terraform and Azure have modified the same resources, it's crucial to ensure that Terraform's state file is up to date before applying changes. Here's why:</p>
<ol>
<li><p><strong>Terraform Updates the State File:</strong> When Terraform applies changes, it updates the state file to reflect the new state of the resources.</p>
</li>
<li><p><strong>Manual Changes in Azure:</strong> If resources are manually modified after Terraform's state file has been updated, Terraform may overwrite those changes during the next apply operation.</p>
</li>
</ol>
<p>To avoid unintentional overwrites:</p>
<ul>
<li><p>Always run <code>terraform refresh</code> and <code>terraform plan</code> before applying changes (a scriptable drift check is sketched below).</p>
</li>
<li><p>Communicate with your team to establish clear guidelines for managing resources.</p>
</li>
</ul>
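<p>For teams that want to catch drift automatically, <code>terraform plan</code> exposes a machine-readable exit code. A minimal sketch of a scheduled drift check, assuming it runs from the configuration directory:</p>
<pre><code class="lang-basic"># -detailed-exitcode makes plan return 0 (no changes), 1 (error), or 2 (changes/drift)
terraform plan -detailed-exitcode -input=false
if [ $? -eq 2 ]; then
  echo "Drift detected: Azure no longer matches the Terraform configuration"
fi
</code></pre>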
<hr />
<h3 id="heading-key-takeaways"><strong>Key Takeaways</strong></h3>
<ul>
<li><p><strong>Avoid Manual Changes:</strong> The best practice is to avoid manual changes to resources managed by Terraform. This ensures consistency and reduces the risk of drift.</p>
</li>
<li><p><strong>Refresh State Regularly:</strong> Use <code>terraform refresh</code> to keep the state file in sync with the actual infrastructure.</p>
</li>
<li><p><strong>Detect and Resolve Drift:</strong> Use <code>terraform plan</code> to detect drift and <code>terraform apply</code> to resolve it.</p>
</li>
<li><p><strong>Establish Governance:</strong> Set clear policies to manage resources and avoid conflicts between manual changes and Terraform.</p>
</li>
</ul>
<p>By following these steps, you can ensure that your infrastructure remains consistent, predictable, and aligned with your Terraform configurations. Handling drift effectively is a critical skill for any cloud professional, and it ensures that your IaC workflows remain robust and reliable.</p>
]]></description><link>https://clouddevopsinsights.com/managing-terraform-drift-in-azure-a-step-by-step-guide-to-sync-resources</link><guid isPermaLink="true">https://clouddevopsinsights.com/managing-terraform-drift-in-azure-a-step-by-step-guide-to-sync-resources</guid><category><![CDATA[Terraform]]></category><category><![CDATA[Azure]]></category><category><![CDATA[IaC (Infrastructure as Code)]]></category><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[Azure Governance Simplified: Essential Policies for Every Organization]]></title><description><![CDATA[<h1 id="heading-introduction">Introduction:</h1>
<p>In the ever-evolving world of cloud computing, governance is the cornerstone of a secure, compliant, and efficient Azure environment. As organizations migrate to the cloud, the challenge lies in maintaining control without stifling innovation. Azure Governance provides a framework to address this challenge, ensuring that resources are managed effectively, costs are optimized, and security is upheld.</p>
<p>This blog dives into the <strong>essential Azure policies</strong> every organization should implement to establish a robust governance model. Whether you're a small business or a global enterprise, these recommendations will help you create a scalable and compliant cloud environment. From cost management to security and resource consistency, we'll cover practical examples and best practices to guide your governance journey.</p>
<p>Let's simplify Azure Governance and empower your organization to harness the full potential of the cloud responsibly.</p>
<h3 id="heading-allowed-locations">Allowed Locations:</h3>
<p>This policy restricts the locations your organization can specify when deploying resources. It helps enforce geo-compliance requirements, ensuring resources are deployed only in approved regions.</p>
<p><strong>Exclusions:</strong></p>
<ul>
<li><p>Resource groups.</p>
</li>
<li><p>Azure Active Directory B2C directories.</p>
</li>
<li><p>Resources using the <code>global</code> region.</p>
</li>
</ul>
<p><strong>Example Use Case:</strong> Restrict deployments to regions that comply with data residency and regulatory requirements, such as GDPR or HIPAA.</p>
<p><a target="_blank" href="https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/AllowedLocations_Deny.json">Click here to see JSON policy</a></p>
<h3 id="heading-allowed-virtual-machine-size-skus"><strong>Allowed Virtual Machine Size SKUs:</strong></h3>
<p>This policy enables you to specify a set of virtual machine size SKUs that your organization can deploy. By restricting VM sizes, you can:</p>
<ul>
<li><p>Enforce organizational standards.</p>
</li>
<li><p>Optimize costs by preventing the use of oversized VMs.</p>
</li>
<li><p>Simplify management by limiting the variety of deployed VMs.</p>
</li>
</ul>
<p><strong>Example Use Case:</strong> Restrict VM SKUs to cost-efficient or approved configurations that meet workload requirements.</p>
<p><a target="_blank" href="https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/VMSkusAllowed_Deny.json">Click here to see JSON policy</a></p>
<h3 id="heading-allowed-resource-types"><strong>Allowed Resource Types</strong></h3>
<p>This policy specifies which resource types your organization can deploy. It helps reduce complexity and minimizes the attack surface by limiting resource types to those essential for business operations.</p>
<p><strong>Note:</strong> Only resource types supporting <code>tags</code> and <code>location</code> are affected. To restrict all resources, duplicate the policy and change its mode to <code>All</code>.</p>
<p><strong>Example Use Case:</strong> Allow only approved resource types, such as storage accounts and virtual machines, to simplify governance and improve security.</p>
<p><a target="_blank" href="https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/AllowedResourceTypes_Deny.json">Click here to see JSON policy</a></p>
<h3 id="heading-audit-usage-of-custom-rbac-roles"><strong>Audit Usage of Custom RBAC Roles</strong></h3>
<p>Custom roles can be error-prone and introduce security risks. This policy audits the usage of built-in roles such as <code>Owner</code>, <code>Contributor</code>, and <code>Reader</code>, treating custom roles as exceptions requiring thorough review.</p>
<p><strong>Example Use Case:</strong> Ensure adherence to least privilege principles by minimizing the use of custom roles.</p>
<p><a target="_blank" href="https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json">Click here to see JSON policy</a></p>
<h3 id="heading-custom-subscription-owner-roles-should-not-exist"><strong>Custom Subscription Owner Roles Should Not Exist</strong></h3>
<p>Prevent the creation of custom subscription owner roles to maintain consistency and security in role assignments.</p>
<p><strong>Example Use Case:</strong> Avoid role proliferation and potential misconfigurations.</p>
<p><a target="_blank" href="https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/CustomSubscription_OwnerRole_Audit.json">Click here to see JSON policy</a></p>
<hr />
<h3 id="heading-not-allowed-resource-types"><strong>Not Allowed Resource Types</strong></h3>
<p>This policy restricts specific resource types from being deployed. It helps reduce the environment's complexity and attack surface while managing costs.</p>
<p><strong>Example Use Case:</strong> Disallow deployment of expensive or unsupported resource types.</p>
<p><a target="_blank" href="https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/InvalidResourceTypes_Deny.json">Click here to see JSON policy</a></p>
<h3 id="heading-a-maximum-of-3-owners-for-subscriptions"><strong>A Maximum of 3 Owners for Subscriptions</strong></h3>
<p>Limiting the number of subscription owners reduces the potential impact of a compromised owner account.</p>
<p><strong>Example Use Case:</strong> Ensure there are no more than three designated subscription owners to enhance security.</p>
<p><a target="_blank" href="https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json">Click here to see JSON policy</a></p>
<h3 id="heading-mfa-for-subscription-owners"><strong>MFA for Subscription Owners</strong></h3>
<p>Multi-Factor Authentication (MFA) should be enabled for all accounts with owner permissions. This prevents unauthorized access to critical resources.</p>
<p><strong>Example Use Case:</strong> Enforce MFA for subscription owners to mitigate risks of account breaches.</p>
<p><a target="_blank" href="https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json">Click here to see JSON policy</a></p>
<h3 id="heading-contact-email-address-for-security-issues"><strong>Contact Email Address for Security Issues</strong></h3>
<p>Set a security contact email address to ensure the right individuals are notified of potential security breaches.</p>
<p><strong>Example Use Case:</strong> Receive alerts from Azure Security Center for prompt action.</p>
<p><a target="_blank" href="https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json">Click here to see JSON policy</a></p>
<h3 id="heading-require-a-tag-on-resource-groups"><strong>Require a Tag on Resource Groups</strong></h3>
<p>Enforce the existence of a specific tag on resource groups to enhance resource organization and tracking.</p>
<p><strong>Example Use Case:</strong> Require a <code>CostCenter</code> tag for resource groups to track expenses.</p>
<p><a target="_blank" href="https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Tags/ResourceGroupRequireTag_Deny.json">Click here to see JSON policy</a></p>
<h3 id="heading-inherit-a-tag-from-the-resource-group"><strong>Inherit a Tag from the Resource Group</strong></h3>
<p>Automatically add a missing tag to resources from their parent resource group. This ensures consistent tagging across the environment.</p>
<p><strong>Example Use Case:</strong> Ensure all resources inherit the <code>Environment</code> tag (e.g., <code>Production</code>, <code>Development</code>).</p>
<p><a target="_blank" href="https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Tags/InheritTag_Add_Modify.json">Click here to see JSON policy</a></p>
<h3 id="heading-custom-policy-for-resource-lock">Custom Policy for Resource lock:</h3>
<p>Azure doesn't provide a built-in policy for resource locks; however, we can create a custom policy to <strong>enforce a CanNotDelete resource lock using Azure Policy</strong>:</p>
<pre><code class="lang-json">{
displayName: <span class="hljs-string">"Resource Lock should be enabled"</span>,
description: <span class="hljs-string">"With this policy: any resource that has the tag key LockLevel with the value CanNotDelete means authorized users can read and modify the resource, but they can t delete it."</span>,
metadata: {
category: <span class="hljs-string">"Backup"</span>,
version: <span class="hljs-string">"1.0.0"</span>
},
mode: <span class="hljs-string">"Indexed"</span>,
parameters: {
tagName: {
type: <span class="hljs-string">"string"</span>,
metadata: {
displayName: <span class="hljs-string">"Exclusion Tag Name"</span>,
description: <span class="hljs-string">"Name of the tag to use for excluding resources from this policy. This should be used along with the Exclusion Tag Value parameter."</span>
},
defaultValue: <span class="hljs-string">"_MVP_Resource_Lock_should_be_enabled"</span>
},
tagValue: {
type: <span class="hljs-string">"string"</span>,
metadata: {
displayName: <span class="hljs-string">"Exclusion Tag Value"</span>,
description: <span class="hljs-string">"Value of the tag to use for excluding resources from this policy. This should be used along with the Exclusion Tag Name parameter."</span>
},
defaultValue: <span class="hljs-string">"exclude"</span>
},
effect: {
type: <span class="hljs-string">"String"</span>,
metadata: {
displayName: <span class="hljs-string">"Effect"</span>,
description: <span class="hljs-string">"DeployIfNotExists, AuditIfNotExists or Disabled the execution of the Policy"</span>
},
allowedValues: [
<span class="hljs-string">"DeployIfNotExists"</span>,
<span class="hljs-string">"AuditIfNotExists"</span>,
<span class="hljs-string">"Disabled"</span>
],
defaultValue: <span class="hljs-string">"DeployIfNotExists"</span>
}
},
policyRule: {
if: {
allOf: [
{
field: <span class="hljs-string">"tags.LockLevel"</span>,
equals: <span class="hljs-string">"CanNotDelete"</span>
},
{
value: 🔍<span class="hljs-string">"[
    length(
        split(
            field('type'),
           '/'
        )
    )
]"</span>,
equals: <span class="hljs-number">2</span>
},
{
not: {
field: 🔍<span class="hljs-string">"[
    concat(
       'tags[
           ',
            parameters('tagName'),
           '
        ]'
    )
]"</span>,
equals: <span class="hljs-string">"[parameters('tagValue')]"</span>
}
}
]
},
then: {
effect: <span class="hljs-string">"[parameters('effect')]"</span>,
details: {
roleDefinitionIds: [
<span class="hljs-string">"/providers/microsoft.authorization/roleDefinitions/8e3af657-a8ff-443c-a75c-2fe8c4bcb635"</span>
],
type: <span class="hljs-string">"Microsoft.Authorization/locks"</span>,
name: <span class="hljs-string">"ResourceLockedByPolicy"</span>,
existenceCondition: {
allOf: [
{
field: <span class="hljs-string">"Microsoft.Authorization/locks/level"</span>,
In: [
<span class="hljs-string">"CanNotDelete"</span>
]
},
{
field: <span class="hljs-string">"Microsoft.Authorization/locks/notes"</span>,
equals: <span class="hljs-string">"Locked by Azure Policy"</span>
}
]
},
deployment: {
properties: {
mode: <span class="hljs-string">"incremental"</span>,
template: {
$schema: <span class="hljs-string">"https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#"</span>,
contentVersion: <span class="hljs-string">"1.0.0.0"</span>,
parameters: {
resourceName: {
type: <span class="hljs-string">"string"</span>
},
resourceType: {
type: <span class="hljs-string">"string"</span>
}
},
variables: {},
resources: [
{
type: <span class="hljs-string">"Microsoft.Authorization/locks"</span>,
apiVersion: <span class="hljs-string">"2016-09-01"</span>,
name: <span class="hljs-string">"ResourceLockedByPolicy"</span>,
scope: 🔍<span class="hljs-string">"[
    concat(
        parameters('resourceType'),
       '/',
         parameters('resourceName')
    )
]"</span>,
properties: {
level: <span class="hljs-string">"CanNotDelete"</span>,
notes: <span class="hljs-string">"Locked by Azure Policy"</span>
}
}
],
outputs: {}
},
parameters: {
resourceName: {
value: <span class="hljs-string">"[field('name')]"</span>
},
resourceType: {
value: <span class="hljs-string">"[field('type')]"</span>
}
}
}
}
}
}
}
}
</code></pre>
<h3 id="heading-conclusion">Conclusion:</h3>
<p>Azure Governance with Azure Policy empowers organizations to enforce compliance, security, and operational efficiency. By implementing these recommended policies, you can establish a robust governance framework that aligns with your organization's goals and regulatory requirements.</p>
<p>Start defining and enforcing these policies today to ensure your Azure environment remains secure, compliant, and cost-effective.</p>
]]></description><link>https://clouddevopsinsights.com/azure-governance-simplified-essential-policies-for-every-organization</link><guid isPermaLink="true">https://clouddevopsinsights.com/azure-governance-simplified-essential-policies-for-every-organization</guid><category><![CDATA[Azure]]></category><category><![CDATA[Azure Governance]]></category><category><![CDATA[Cloud Governance]]></category><category><![CDATA[cloud best practices]]></category><category><![CDATA[cloud security]]></category><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[Understanding Data Collection Rules in Azure: A Guide for Cloud Professionals]]></title><description><![CDATA[<h3 id="heading-introduction">Introduction:</h3>
<p>Data is the lifeblood of modern applications, and Azure provides robust tools to ensure that data is collected, processed, and stored efficiently. One of these tools is Data Collection Rules (DCRs), a feature that enables fine-grained control over data ingestion into Azure Monitor. This blog post will explore the fundamentals of DCRs, their benefits, and how to configure them for effective monitoring.</p>
<h3 id="heading-what-are-data-collection-rules">What Are Data Collection Rules?</h3>
<p>Data Collection Rules are configurations that define how telemetry and log data are collected and routed to destinations like Log Analytics workspaces, Azure Storage, or Event Hubs. Introduced to provide greater flexibility and scalability, DCRs are part of Azure Monitor's modern data collection architecture.</p>
<p>With DCRs, you can:</p>
<ul>
<li><p><strong>Filter Data:</strong> Collect only the data you need, reducing noise and costs.</p>
</li>
<li><p><strong>Transform Data:</strong> Apply transformations to data before ingestion, such as masking sensitive information or enriching fields.</p>
</li>
<li><p><strong>Route Data:</strong> Send data to multiple destinations simultaneously.</p>
</li>
</ul>
<h3 id="heading-key-benefits-of-data-collection-rules">Key Benefits of Data Collection Rules</h3>
<ol>
<li><p><strong>Granular Control:</strong> Define specific data collection settings for different resource types, such as virtual machines, containers, or PaaS services.</p>
</li>
<li><p><strong>Cost Optimization:</strong> Reduce ingestion costs by filtering unnecessary data.</p>
</li>
<li><p><strong>Flexibility:</strong> Route data to multiple destinations without duplicating collection efforts.</p>
</li>
<li><p><strong>Compliance:</strong> Mask or filter sensitive data to comply with regulatory requirements.</p>
</li>
</ol>
<h3 id="heading-components-of-a-data-collection-rule">Components of a Data Collection Rule</h3>
<p>A DCR consists of the following elements:</p>
<ol>
<li><p><strong>Data Sources:</strong> Specify the resources or telemetry types (e.g., Windows Event Logs, Syslog, or custom logs).</p>
</li>
<li><p><strong>Transforms:</strong> Apply transformations to modify or filter data before ingestion.</p>
</li>
<li><p><strong>Destinations:</strong> Define where the collected data will be sent, such as Log Analytics workspaces or Azure Storage.</p>
</li>
</ol>
<h3 id="heading-configuring-a-data-collection-rule">Configuring a Data Collection Rule</h3>
<h4 id="heading-step-1-define-the-data-sources">Step 1: Define the Data Sources</h4>
<p>Start by identifying the data sources you want to monitor. For example, you might collect Windows Event Logs from virtual machines or Syslog data from Linux servers.</p>
<h4 id="heading-step-2-apply-transformations">Step 2: Apply Transformations</h4>
<p>Use KQL (Kusto Query Language) to define transformations. For instance, a <code>transformKql</code> expression such as <code>source | where SeverityLevel != "debug"</code> drops debug-level records, and operators like <code>project-away</code> can remove sensitive columns before ingestion.</p>
<h4 id="heading-step-3-set-up-destinations">Step 3: Set Up Destinations</h4>
<p>Choose one or more destinations for the data. Common destinations include:</p>
<ul>
<li><p><strong>Log Analytics Workspace:</strong> For querying and analyzing data.</p>
</li>
<li><p><strong>Azure Storage:</strong> For archival purposes.</p>
</li>
<li><p><strong>Event Hubs:</strong> For integration with third-party systems.</p>
</li>
</ul>
<h4 id="heading-step-4-create-and-deploy-the-dcr">Step 4: Create and Deploy the DCR</h4>
<p>You can create DCRs using the Azure portal, Azure CLI, or ARM templates. Here's an example using the Azure CLI:</p>
<pre><code class="lang-powershell">az monitor <span class="hljs-keyword">data</span><span class="hljs-literal">-collection</span> rule create -<span class="hljs-literal">-location</span> <span class="hljs-string">'eastus'</span> -<span class="hljs-literal">-resource</span><span class="hljs-literal">-group</span> <span class="hljs-string">'my-resource-group'</span> -<span class="hljs-literal">-name</span> <span class="hljs-string">'my-dcr'</span> -<span class="hljs-literal">-rule</span><span class="hljs-operator">-file</span> <span class="hljs-string">'C:\MyNewDCR.json'</span> -<span class="hljs-literal">-description</span> <span class="hljs-string">'This is my new DCR'</span>
</code></pre>
<h3 id="heading-use-local-file-as-source-of-dcr"><strong>Use local file as source of DCR</strong></h3>
<p>DCRs for Syslog events use the <code>syslog</code> data source with the incoming <code>Microsoft-Syslog</code> stream. The schema of this stream is known, so it doesn't need to be defined in the <code>dataSources</code> section. The events to collect are specified in the <code>facilityNames</code> and <code>logLevels</code> properties. See <a target="_blank" href="https://learn.microsoft.com/en-us/azure/azure-monitor/agents/data-collection-syslog">Collect Syslog events with Azure Monitor Agent</a> for further details. To get started, you can use the guidance in that article to create a DCR using the Azure portal and then inspect the JSON using the guidance at <a target="_blank" href="https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/data-collection-rule-create-edit#dcr-definition">DCR definition</a>.</p>
<p>You can add a transformation to the <code>dataFlows</code> property for additional functionality and further filtering, but you should use <code>facilityNames</code> and <code>logLevels</code> for filtering as much as possible, both for efficiency and to avoid unnecessary ingestion charges.</p>
<p>The following sample DCR performs the following actions:</p>
<ul>
<li><p>Collects all events from <code>cron</code> facility.</p>
</li>
<li><p>Collects <code>Warning</code> and higher events from <code>syslog</code> and <code>daemon</code> facilities.</p>
<ul>
<li><p>Sends data to Syslog table in the workspace.</p>
</li>
<li><p>Uses the trivial transformation <code>source</code>, which passes the incoming data through unchanged.</p>
</li>
</ul>
</li>
</ul>
<pre><code class="lang-json">{
    <span class="hljs-attr">"location"</span>: <span class="hljs-string">"eastus"</span>,
    <span class="hljs-attr">"properties"</span>: {
      <span class="hljs-attr">"dataSources"</span>: {
        <span class="hljs-attr">"syslog"</span>: [
          {
            <span class="hljs-attr">"name"</span>: <span class="hljs-string">"cronSyslog"</span>,
            <span class="hljs-attr">"streams"</span>: [
              <span class="hljs-string">"Microsoft-Syslog"</span>
            ],
            <span class="hljs-attr">"facilityNames"</span>: [
              <span class="hljs-string">"cron"</span>
            ],
            <span class="hljs-attr">"logLevels"</span>: [
              <span class="hljs-string">"Debug"</span>,
              <span class="hljs-string">"Info"</span>,
              <span class="hljs-string">"Notice"</span>,
              <span class="hljs-string">"Warning"</span>,
              <span class="hljs-string">"Error"</span>,
              <span class="hljs-string">"Critical"</span>,
              <span class="hljs-string">"Alert"</span>,
              <span class="hljs-string">"Emergency"</span>
            ]
          },
          {
            <span class="hljs-attr">"name"</span>: <span class="hljs-string">"syslogBase"</span>,
            <span class="hljs-attr">"streams"</span>: [
              <span class="hljs-string">"Microsoft-Syslog"</span>
            ],
            <span class="hljs-attr">"facilityNames"</span>: [
              <span class="hljs-string">"daemon"</span>,              
              <span class="hljs-string">"syslog"</span>
            ],
            <span class="hljs-attr">"logLevels"</span>: [
              <span class="hljs-string">"Warning"</span>,
              <span class="hljs-string">"Error"</span>,
              <span class="hljs-string">"Critical"</span>,
              <span class="hljs-string">"Alert"</span>,
              <span class="hljs-string">"Emergency"</span>           
            ]
          }
        ]
      },
      <span class="hljs-attr">"destinations"</span>: {
        <span class="hljs-attr">"logAnalytics"</span>: [
          {
            <span class="hljs-attr">"workspaceResourceId"</span>: <span class="hljs-string">"/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace"</span>,
            <span class="hljs-attr">"name"</span>: <span class="hljs-string">"centralWorkspace"</span>
          }
        ]
      },
      <span class="hljs-attr">"dataFlows"</span>: [
        {
          <span class="hljs-attr">"streams"</span>: [
            <span class="hljs-string">"Microsoft-Syslog"</span>
          ],
          <span class="hljs-attr">"destinations"</span>: [
            <span class="hljs-string">"centralWorkspace"</span>
          ],
            <span class="hljs-attr">"transformKql"</span>: <span class="hljs-string">"source"</span>,
            <span class="hljs-attr">"outputStream"</span>: <span class="hljs-string">"Microsoft-Syslog"</span>
        }
      ]
    }
  }
</code></pre>
<h3 id="heading-manage-data-collection-rule-associations-in-azure-monitor"><strong>Manage data collection rule associations in Azure Monitor</strong></h3>
<p>To view your DCRs in the Azure portal, select <strong>Data Collection Rules</strong> under <strong>Settings</strong> on the <strong>Monitor</strong> menu. Select a DCR to view its details.</p>
<p>Click the <strong>Resources</strong> tab to view the resources associated with the selected DCR. Click <strong>Add</strong> to add an association to a new resource. You can view and add resources using this feature whether or not you created the DCR in the Azure portal.</p>
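<p>The same association can also be scripted. A minimal Azure CLI sketch, where the resource IDs are placeholders to substitute with your own:</p>
<pre><code class="lang-powershell">az monitor data-collection rule association create --name 'my-vm-dcr-association' --rule-id '/subscriptions/&lt;subscription-id&gt;/resourceGroups/my-resource-group/providers/Microsoft.Insights/dataCollectionRules/my-dcr' --resource '/subscriptions/&lt;subscription-id&gt;/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm'
</code></pre>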
<h3 id="heading-azure-policy"><strong>Azure Policy</strong></h3>
<p>Using Azure Policy, you can associate a DCR with multiple resources at scale. When you create an assignment between a resource group and a built-in policy or initiative, associations are created between the DCR and each resource of the assigned type in the resource group, including any new resources as they're created. Azure Monitor provides a simplified user experience to create an assignment for a policy or initiative for a particular DCR, which is an alternate method to creating the assignment using Azure Policy directly.</p>
<p>From the DCR in the Azure portal, select <strong>Policies (Preview)</strong>. This will open a page that lists any assignments with the current DCR and the compliance state of included resources. Tiles across the top provide compliance metrics for all resources and assignments.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735256683200/018704b8-8e63-4c4b-8de5-0c6401c4b069.png" alt class="image--center mx-auto" /></p>
<p>To create a new assignment, click either <strong>Assign Policy</strong> or <strong>Assign Initiative</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735256723802/288b7a14-e2c9-4bb5-9536-4da4eee08783.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-best-practices-for-using-data-collection-rules">Best Practices for Using Data Collection Rules</h3>
<ol>
<li><p><strong>Start Small:</strong> Begin with a limited set of data sources and destinations to understand the impact.</p>
</li>
<li><p><strong>Monitor Costs:</strong> Use Azure Cost Management to track the costs associated with data ingestion.</p>
</li>
<li><p><strong>Test Transformations:</strong> Validate KQL queries to ensure they filter or transform data as expected.</p>
</li>
<li><p><strong>Use Tags:</strong> Apply tags to DCRs for better management and organization.</p>
</li>
</ol>
<h3 id="heading-common-use-cases">Common Use Cases</h3>
<ul>
<li><p><strong>Application Monitoring:</strong> Collect application logs and route them to Log Analytics for troubleshooting.</p>
</li>
<li><p><strong>Security Auditing:</strong> Filter and store security-related logs in Azure Storage for long-term retention.</p>
</li>
<li><p><strong>Compliance Reporting:</strong> Mask sensitive information in logs to meet regulatory requirements.</p>
</li>
</ul>
<h3 id="heading-conclusion">Conclusion</h3>
<p>Data Collection Rules provide a powerful and flexible way to manage data ingestion in Azure Monitor. By leveraging DCRs, you can optimize costs, improve compliance, and ensure that only the most relevant data is collected. Whether you're monitoring applications, auditing security logs, or building compliance workflows, DCRs are a must-have tool in your Azure toolkit.</p>
<p>Start exploring Data Collection Rules today and unlock the full potential of Azure Monitor!  </p>
<h3 id="heading-reference">Reference:</h3>
<p><a target="_blank" href="https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/data-collection-rule-overview">Microsoft Article to refer</a></p>
]]></description><link>https://clouddevopsinsights.com/understanding-data-collection-rules-in-azure-a-guide-for-cloud-professionals</link><guid isPermaLink="true">https://clouddevopsinsights.com/understanding-data-collection-rules-in-azure-a-guide-for-cloud-professionals</guid><category><![CDATA[Azure]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[monitoring]]></category><category><![CDATA[SRE devops]]></category><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[Deploy ARM template using Azure DevOps]]></title><description><![CDATA[<h3 id="heading-introduction"><strong>Introduction</strong></h3>
<p>In this article, I will deploy an ARM template using Azure DevOps, using continuous integration of the ARM template with Azure Pipelines.</p>
<p>The source code can be found at <a target="_blank" href="https://github.com/AbiVavilala/Deploy-ARM-with-Azure-DevOps.git"><strong>Source code and project info</strong></a>.</p>
<h3 id="heading-prepare-your-project"><strong>Prepare your project</strong></h3>
<p>In this project, we need an ARM template and an Azure DevOps organization ready for creating the pipeline. The following steps show how to make sure you're ready:</p>
<p>You have an Azure DevOps organization. If you don't have one, create one for free. If your team already has an Azure DevOps organization, make sure you're an administrator of the Azure DevOps project that you want to use.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732512336059/367292d2-7213-4600-9955-9e1bd65cad2b.png" alt class="image--center mx-auto" /></p>
<p>Import the GitHub repo into Azure Repos.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732512372752/db561bee-3a72-4a5a-b72e-3985f01b3892.png" alt class="image--center mx-auto" /></p>
<p>You have an ARM template that defines the infrastructure for your project. I have the ARM template in the repo; it will create a storage account, virtual network, public IP, NSG, subnet, NIC, and VM.</p>
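<p>Before wiring the template into a pipeline, it can help to validate it locally. A quick sketch with the Azure CLI, assuming the template file in the repo is named <code>azuredeploy.json</code>:</p>
<pre><code class="lang-powershell">az deployment group validate --resource-group 'my-resource-group' --template-file 'azuredeploy.json'
</code></pre>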
<h2 id="heading-create-pipeline">Create Pipeline</h2>
<p>I will add a new pipeline.</p>
<p><a target="_blank" href="https://github.com/AbiVavilala/Deploy-ARM-with-Azure-DevOps/blob/main/images/createpipeline.png"><img src="https://github.com/AbiVavilala/Deploy-ARM-with-Azure-DevOps/raw/main/images/createpipeline.png" alt /></a></p>
<p>Select Azure Repos as the source repo.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732509506040/d8c5633c-3ee4-4505-9697-7b105e0d5ce6.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Select the repo we just imported into Azure Repos Git, and then select the starter pipeline below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732509583074/3b6022a8-cd4e-4ed8-bfad-c5fe364db8c8.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Add two tasks to the pipeline: the first copies files and the second publishes the build artifact. The full pipeline YAML is below.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Starter pipeline</span>
<span class="hljs-comment"># Start with a minimal pipeline that you can customize to build and deploy your code.</span>
<span class="hljs-comment"># Add steps that build, run tests, deploy, and more:</span>
<span class="hljs-comment"># https://aka.ms/yaml</span>

<span class="hljs-attr">trigger:</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">main</span>

<span class="hljs-attr">pool:</span>
  <span class="hljs-attr">vmImage:</span> <span class="hljs-string">ubuntu-latest</span>

<span class="hljs-attr">steps:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">script:</span> <span class="hljs-string">echo</span> <span class="hljs-string">Hello,</span> <span class="hljs-string">world!</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Run a one-line script'</span>

<span class="hljs-bullet">-</span> <span class="hljs-attr">script:</span> <span class="hljs-string">|
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml
</span>  <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Run a multi-line script'</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">CopyFiles@2</span>
  <span class="hljs-attr">inputs:</span>
    <span class="hljs-attr">SourceFolder:</span> <span class="hljs-string">'$(agent.builddirectory)'</span>
    <span class="hljs-attr">Contents:</span> <span class="hljs-string">'**'</span>
    <span class="hljs-attr">TargetFolder:</span> <span class="hljs-string">'$(build.artifactstagingdirectory)'</span>

<span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">PublishBuildArtifacts@1</span>
  <span class="hljs-attr">inputs:</span>
    <span class="hljs-attr">PathtoPublish:</span> <span class="hljs-string">'$(Build.ArtifactStagingDirectory)'</span>
    <span class="hljs-attr">ArtifactName:</span> <span class="hljs-string">'storagedrop'</span>
    <span class="hljs-attr">publishLocation:</span> <span class="hljs-string">'Container'</span>
</code></pre>
<p>Now let's review the pipeline before we run it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732509688880/999a46ae-3fc7-4d58-93e1-01db7a6749d5.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Now let's save and run the pipeline.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732509722663/eea03620-a911-4b85-a003-c26b47938d43.png?auto=compress,format&amp;format=webp" alt /></p>
<p>The pipeline has run successfully.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732509746544/24af48cf-2d13-40e6-a06b-94071e4ae6df.png?auto=compress,format&amp;format=webp" alt /></p>
<p>This pipeline has produced an artifact. We will use this artifact to create the resources defined in our template; at this stage, the ARM template creates a storage account.</p>
<p>Let's create a release pipeline and use the artifact produced by the build.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732509787456/d4d8f02c-cf46-4905-a535-90e514806da2.png?auto=compress,format&amp;format=webp" alt /></p>
<p>For the release pipeline, let's select an empty job.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732509821628/24521663-11f1-410c-ac59-652408668dbd.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Now let's add the ARM template deployment task.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732509856015/6f052ce1-3cac-4b64-8a82-3e6c22ecf718.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Let's fill in the details so that the resources get created.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732509882416/c8661706-1260-4696-b461-250df4d84798.png?auto=compress,format&amp;format=webp" alt /></p>
<p>We need to integrate our build into the release pipeline; the generated artifact will be added.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732509908650/35f55f69-11f0-4f60-ae60-4f8375c97127.png?auto=compress,format&amp;format=webp" alt /></p>
<p>I added the generated artifact to the release pipeline.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732509961244/c9325989-2cea-4312-b549-2b7a2e53ed9e.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Now let's configure the task with the ARM template from the artifact.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732509987389/07628034-22ac-4825-b6ba-ce0f97e9d729.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Let's create the release.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732510034678/0de4325a-76b9-4002-9826-4cb8a76fa40d.png?auto=compress,format&amp;format=webp" alt /></p>
<p>You can see that the resource group and storage account are created; see the logs in the image below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732510065163/3a61341b-d97a-4923-abc6-f64a1c56b021.png?auto=compress,format&amp;format=webp" alt /></p>
<p>I verified in Azure that the new resource group and storage account were created.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732510104040/db3f7ecc-6620-4729-a24c-204ad3b5b611.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-i-will-add-continous-deployment-and-release-to-the-pipeline">I will add continous deployment and release to the pipeline</h3>
<p>Now we will add a trigger to the release pipeline. This ensures that any change to the repo creates a new build, which in turn triggers the release pipeline and deploys the updated resources.</p>
<p>I will enable continuous deployment on the pipeline: when I make a change to the repo, the pipeline runs and the resources are created. Edit the release pipeline.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732510155884/26fe055e-9e01-4045-93e1-b4289571214e.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Click on the trigger to enable continuous deployment.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732510186847/6e8304c0-8fe7-4d8d-891b-b433b44e4521.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Enable "Create a new release when a new build is available" and enable the pull request trigger.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732510218748/0a016e8f-b7a4-44c0-94ce-b8cd977fc8fc.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Now save the settings. We have enabled continuous deployment.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732510253757/6c61268b-8c18-4bb4-a913-c2544a0b9c9c.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Now we will edit the repo and add more resources to the template: a storage account, public IP, virtual network, subnet, NSG, NIC, disk, and virtual machine.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732510283246/7fad44bb-4d7b-43d8-a749-77e12a1a6455.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Commit the changes and you will see a new build created, which in turn triggers the release pipeline.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732510311998/5cd04fb1-8a3b-4ca9-9738-7161de74063d.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Because we enabled continuous integration, a new release is created automatically.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732510368680/3a18e71a-9112-42dc-873a-f54af298d272.png?auto=compress,format&amp;format=webp" alt /></p>
<p>I checked in Azure, and all the resources were created.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732510400675/3da84a6d-0332-4659-8b98-f6fd2ff3a057.png?auto=compress,format&amp;format=webp" alt /></p>
<p>This is how you can deploy an ARM template with Azure DevOps. Enabling continuous deployment ensures that whenever you add resources to the template, they get created automatically.</p>
]]></description><link>https://clouddevopsinsights.com/deploy-arm-template-using-azure-devops</link><guid isPermaLink="true">https://clouddevopsinsights.com/deploy-arm-template-using-azure-devops</guid><category><![CDATA[Azure]]></category><category><![CDATA[Devops]]></category><category><![CDATA[azure-devops]]></category><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item><item><title><![CDATA[Build and Deploy CI/CD pipeline for .net Application using Azure DevOps]]></title><description><![CDATA[<p>In this project, we will build and deploy a CI/CD pipeline for a .NET application using Azure DevOps.</p>
<h3>Create a Project in Azure DevOps</h3>
<p>First, create a project in Azure DevOps.</p>
<p><img src="https://github.com/AbiVavilala/CI-CD-Pipeline-for-.Net-application/blob/main/images/createproject.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729899056260/ae3f8ceb-6a41-4ab6-bb77-eed6a68fd4df.png" alt class="image--center mx-auto" /></p>
<h3>Push the Source Code into an Azure Repo</h3>
<p>I pushed the .NET source code into an Azure DevOps repo. The code comes from Microsoft's open-source Parts Unlimited project.</p>
<p><a target="_blank" href="https://github.com/microsoft/PartsUnlimitedMRP">GitHUb link for Parts Unlimited Project</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729900042187/c9739224-f5d7-4f3a-9337-c6a2c312d278.png" alt class="image--center mx-auto" /></p>
<h3>Build a Pipeline for the Application</h3>
<p>Create a <mark>new pipeline</mark> under Pipelines.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729900207263/108727f5-640d-4c3d-a762-51dec62816d7.png" alt class="image--center mx-auto" /></p>
<p>Select Azure Repos as the source repository.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729900331162/6707c01a-8878-47a7-960d-5ae894070baa.png" alt class="image--center mx-auto" /></p>
<p>Select the starter pipeline on the "Configure your pipeline" step.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729900376867/b33edda0-93ec-4ffc-a413-e4aff9d11cb6.png" alt class="image--center mx-auto" /></p>
<p>Now let's add tasks to the pipeline. These tasks will download the required libraries for the application, build the project, and create an artifact. The first task is NuGet, which restores the libraries for the .NET application; a hedged YAML sketch follows the screenshot.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729900433565/d6400769-9c35-433f-abeb-5fab52dd6297.png" alt class="image--center mx-auto" /></p>
<p>The second task is a Visual Studio build, since we need Visual Studio to build the project. The MSBuild arguments needed for this project are below, and a YAML sketch of the build task follows the screenshot.</p>
<pre><code class="lang-yaml"><span class="hljs-string">/p:DeployOnBuild=true</span> <span class="hljs-string">/p:WebPublishMethod=Package</span> <span class="hljs-string">/p:PackageAsSingleFile=true</span> <span class="hljs-string">/p:SkipInvalidConfigurations=true</span> 
<span class="hljs-string">/p:PackageLocation="$(build.stagingDirectory)"</span> <span class="hljs-string">/p:IncludeServerNameInBuildInfo=True</span> <span class="hljs-string">/p:GenerateBuildInfoConfigFile=true</span> 
<span class="hljs-string">/p:BuildSymbolStorePath="$(SymbolPath)"</span> <span class="hljs-string">/p:ReferencePath="C:\Program</span> <span class="hljs-string">Files</span> <span class="hljs-string">(x86)\Microsoft</span> <span class="hljs-string">Visual</span> <span class="hljs-string">Studio\2017\Enterprise\Common7\IDE\Extensions\Microsft\Pex"</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729900646522/2b4357fa-84e8-47fe-bf13-2610bf27f831.png" alt class="image--center mx-auto" /></p>
<p>Then copy the files from the agent's sources folder to the staging directory and publish the artifact; a YAML sketch follows the screenshot.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729900693968/4c947dc8-9e8e-402f-be5a-a21e699ea41c.png" alt class="image--center mx-auto" /></p>
<p>Now click "Save and run". This runs the build pipeline and creates an artifact.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729900912336/562c3298-1745-47d0-b0a5-2ac680dcc318.png" alt class="image--center mx-auto" /></p>
<p>Create a release pipeline: click on Releases and select "New release pipeline".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729900958545/e2394843-97b9-461d-92aa-73c2a4834205.png" alt class="image--center mx-auto" /></p>
<p>Now, just like the build pipeline, we need to add tasks to the release pipeline.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729900994597/e19c15e1-1a57-496c-bc6a-6924ff919177.png" alt class="image--center mx-auto" /></p>
<p>Add an ARM template deployment task, since we need to create the environment for our app.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729901048433/fe613604-f63c-43cd-baa2-77f3653e21ff.png" alt class="image--center mx-auto" /></p>
<p>Add variables to the pipeline; these values are passed to the tasks during the release. A hedged YAML sketch follows the screenshot.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729901093521/8b12bb33-b8fc-4e2b-8973-6d5555f0588b.png" alt class="image--center mx-auto" /></p>
<p>Add an App Service deployment task; we will deploy our web app into an App Service plan. A hedged YAML sketch follows the screenshot.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729901176011/41f9e2fd-da6d-4d79-9056-09d7788ca03f.png" alt class="image--center mx-auto" /></p>
<p>The release pipeline ran successfully.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729901257178/beea363b-876a-4cf7-aab7-d11281180c5a.png" alt class="image--center mx-auto" /></p>
<p>The app has been deployed successfully.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729901442058/05edd176-7c48-4bf8-859a-a3149fb5ee9c.png" alt class="image--center mx-auto" /></p>
<p>Let's test our pipeline. I will change the display name from</p>
<pre><code class="lang-xml">&lt;h1&gt;Priya Abilash&lt;/h1&gt; &lt;h2&gt;avani subsidiary&lt;/h2&gt;
</code></pre>
<p>to</p>
<pre><code class="lang-xml">&lt;h1&gt;Sydney Auto Parts&lt;/h1&gt; &lt;h2&gt;avani subsidiary&lt;/h2&gt;
</code></pre>
<p>Because the build pipeline runs on this change, a new artifact is created, and we can manually trigger the release pipeline.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729901540040/57c121d3-9550-4772-b038-6f7c7b3b8335.png" alt class="image--center mx-auto" /></p>
<p><img src="https://github.com/AbiVavilala/CI-CD-Pipeline-for-.Net-application/blob/main/images/test1.png" alt class="image--center mx-auto" /></p>
<p>You can see the build pipeline running once we save the changes to the layout.cshtml file in our repo.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729901593992/958c0fe6-54df-4be1-ab08-245459c4b5dd.png" alt class="image--center mx-auto" /></p>
<p>Let's create a new release for the new build: click "Edit pipeline" and add the new build artifact.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729901646319/ef7403ad-d10a-421f-9d63-4ef03fae1b7a.png" alt class="image--center mx-auto" /></p>
<p>The pipeline is running.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729901691847/65d04bc3-60fe-46c2-a677-a2224031d268.png" alt class="image--center mx-auto" /></p>
<p>The pipeline ran successfully.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729901726800/79fcebae-2dc9-4aea-94c5-54ec213b2d7a.png" alt class="image--center mx-auto" /></p>
<p>I will refresh the browser to see the changes made to the repo.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729901754964/777e4686-142f-4481-8b0e-47aa55108d22.png" alt class="image--center mx-auto" /></p>
]]></description><link>https://clouddevopsinsights.com/build-and-deploy-cicd-pipeline-for-net-application-using-azure-devops</link><guid isPermaLink="true">https://clouddevopsinsights.com/build-and-deploy-cicd-pipeline-for-net-application-using-azure-devops</guid><dc:creator><![CDATA[Abilash Vavilala]]></dc:creator></item></channel></rss>