
Table of Contents

  • Introduction
  • Create the storage account
  • Validation
  • Full set of commands
  • Next

Set up a Remote State

Configure a storage account for use as a remote state in Terraform.

Introduction

When you run Terraform in a pipeline, you will need to configure a backend for the remote state. In this lab you’ll create a storage account with suitable settings and then give yourself access to write blobs to it.

You will configure the storage account in either

  • your single subscription (suitable for testing)
  • a separate subscription, e.g. the Management subscription in a platform landing zone (see aka.ms/alz)

The storage configuration includes

  • Enforced role based access control (RBAC)

    RBAC ensures that only authorized users and applications can access the storage account. This is more secure than using account keys or SAS tokens, in line with the security recommendations for storage accounts. Note that only privileged roles can create RBAC role assignments, whereas anyone with the Contributor role can use the storage account keys.

  • Blob versioning

    Blob versioning helps protect your data by enabling you to recover previous versions of objects in case of accidental deletion or modification. In a Terraform context it also provides a secondary audit trail of changes.

  • Soft delete

    Soft delete provides an additional layer of protection by retaining deleted blobs for a specified period, allowing recovery if needed.

The normal convention for the container is to name it tfstate. We will create two, test and prod, to be used in the user context and via the CI/CD pipelines respectively.
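As an illustration of how these containers are later referenced, a minimal azurerm backend block for the test container might look like the sketch below. The resource group and storage account names are placeholders here (your account name will end in the generated hash suffix); use_azuread_auth is set because shared key access will be disabled.

```shell
# Sketch: write a backend block for the test container into backend.tf.
# Resource group and storage account names are placeholders.
cat > backend.tf <<'EOF'
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-terraform"
    storage_account_name = "terraformfabric00000000"
    container_name       = "test"
    key                  = "terraform.tfstate"
    use_azuread_auth     = true
  }
}
EOF
```

The prod container would be referenced the same way from the pipeline, with only container_name changing.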

Create the storage account

  1. Define a few variables

    • Set the resource group name and region

      rg="rg-terraform"
      loc="uksouth"
      

      Feel free to set these to your own preferred values.

    • Set subscription ID

      Set to the current subscription if you are using only a single subscription

      subscriptionId=$(az account show --query id -otsv)
      

      or set to an alternate subscription in line with the recommendations

      subscriptionId=<managementSubscriptionId>
      
  2. Create the resource group

    az group create --name $rg --location $loc
    
  3. Define the storage account name

    The name needs to be globally unique, so the command below generates a predictable suffix from a hash of the resource group ID.

    storage_account_name="terraformfabric$(az group show --name $rg --query id -otsv | sha1sum | cut -c1-8)"
    
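The suffix is deterministic: hashing the same resource group ID always yields the same eight characters, so re-running the script targets the same storage account. A quick local illustration, using a hypothetical resource group ID rather than a real one:

```shell
# Hypothetical resource group ID, used purely to illustrate the naming scheme.
rg_id="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform"

suffix1=$(echo "$rg_id" | sha1sum | cut -c1-8)
suffix2=$(echo "$rg_id" | sha1sum | cut -c1-8)

# Same input gives the same suffix, always eight characters. This keeps
# "terraformfabric" (15 chars) plus the suffix at 23 of the 24-character
# limit for storage account names.
[ "$suffix1" = "$suffix2" ] && [ "${#suffix1}" -eq 8 ] && echo "deterministic"
```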
  4. Create the storage account

    az storage account create --name $storage_account_name --resource-group $rg --location $loc \
      --min-tls-version TLS1_2 --sku Standard_LRS --https-only true --default-action "Allow" --public-network-access "Enabled"  \
      --allow-shared-key-access false --allow-blob-public-access false
    storage_account_id=$(az storage account show --name $storage_account_name --resource-group $rg --query id -otsv)
    
  5. Enable versioning and soft delete

    az storage account blob-service-properties update --account-name $storage_account_name --enable-versioning --enable-delete-retention --delete-retention-days 7
    
  6. Create the containers

    The Terraform state files will be created in these containers.

    az storage container create --name prod --account-name $storage_account_name --auth-mode login
    az storage container create --name test --account-name $storage_account_name --auth-mode login
    
  7. Add Storage Blob Data Contributor role assignment

    Adding this role allows you to read, write and delete blob files in the new storage account. It is more common to assign this at the storage account level, but we’ll scope it to the container level so that the user and the service principal are not able to overwrite each other’s state file.

    We’ll only create the role assignment for the user context against the test container right now. The service principal will come later.

    az role assignment create --assignee $(az ad signed-in-user show --query id -otsv) --scope "$storage_account_id/blobServices/default/containers/test" --role "Storage Blob Data Contributor"
    

Validation

If you check the Overview page for your storage account, you’ll see that most of the recommendations have been met.

Screenshot of the Azure portal showing the storage account configured for remote state

  • Blob anonymous access disabled
  • Blob soft delete set to 7 days
  • Blob versioning enabled
  • HTTPS required
  • Storage account key access disabled
  • TLS v1.2

Full set of commands

The commands above have been consolidated into the single code block below for ease of use.

rg="rg-terraform"
loc="uksouth"

subscriptionId=$(az account show --query id -otsv)
az group create --name $rg --location $loc
storage_account_name="terraformfabric$(az group show --name $rg --query id -otsv | sha1sum | cut -c1-8)"
az storage account create --name $storage_account_name --resource-group $rg --location $loc \
  --min-tls-version TLS1_2 --sku Standard_LRS --https-only true --default-action "Allow" --public-network-access "Enabled"  \
  --allow-shared-key-access false --allow-blob-public-access false
storage_account_id=$(az storage account show --name $storage_account_name --resource-group $rg --query id -otsv)
az storage account blob-service-properties update --account-name $storage_account_name --enable-versioning --enable-delete-retention --delete-retention-days 7
az storage container create --name prod --account-name $storage_account_name --auth-mode login
az storage container create --name test --account-name $storage_account_name --auth-mode login
az role assignment create --assignee $(az ad signed-in-user show --query id -otsv) \
  --scope "$storage_account_id/blobServices/default/containers/test" --role "Storage Blob Data Contributor"

Next

The storage account is ready for use as a remote state backend. Let’s get an example repo and then set up Terraform in the user context. We’ll build out a config before automating more fully with a service principal and GitHub Actions workflow.
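Since the test and prod containers back different contexts, one common pattern (an assumption here, not something this lab prescribes) is to keep container_name out of the backend block and supply it as a partial backend configuration at init time:

```shell
# Sketch: choose the state container at init time rather than hard-coding it.
# "test" for the user context, "prod" for the CI/CD pipeline.
container="test"

cat > backend.hcl <<EOF
container_name = "${container}"
EOF

# With the remaining backend settings in the terraform block, you would
# then initialise with:
#   terraform init -backend-config=backend.hcl
grep "container_name" backend.hcl
# prints: container_name = "test"
```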
