Run some ad hoc Ansible commands against static inventories of virtual machines.


In this lab we will install Ansible locally and then work through some of the things it can do when used ad hoc. We will be using the two VMs we created in the previous lab.


Ansible could fill a very large set of labs by itself. It is popular for configuration management as it is open source, extensible, multi-platform and agentless. We could use Ansible to provision the Azure infrastructure, but we will leave that job to Terraform.

In these labs we will use Ansible purely for configuring the images and helping to manage the VMs once they have been deployed from the image.

Ansible uses OpenSSH to connect to Linux servers and WinRM to connect to Windows servers.

You should be aware that there are many other options in the configuration management space, such as Chef, Puppet, Salt, Octopus, etc.

Watch the Quick Start Video to get an overview of the Ansible functionality and ecosystem.


In this section you’ll create a local Ansible area to work in, including a default cfg file and a static hosts file containing the public IP addresses for the two VMs we created in the previous lab.

  1. Create an ansible folder for our dev and test work

    umask 077
    mkdir -m 700 ~/ansible && cd ~/ansible

    We will be intentionally strict with permissions.

    Note that we will be using /etc/ansible later as our ‘production’ area.
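As a quick check of what that umask gives you, this sketch creates a scratch file and directory and prints their permission bits (standard coreutils only; the scratch directory name is generated):

```shell
# umask 077 masks out all group/other permission bits, so new files
# default to 600 (rw-------) and new directories to 700 (rwx------).
umask 077
tmpdir=$(mktemp -d)
touch "$tmpdir/inventory"
mkdir "$tmpdir/roles"
stat -c '%a' "$tmpdir/inventory"   # 600
stat -c '%a' "$tmpdir/roles"       # 700
rm -rf "$tmpdir"
```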

  2. Install Ansible

    sudo apt-get update && sudo apt-get install -y libssl-dev libffi-dev python-dev python-pip python-setuptools
    sudo -H pip install 'ansible[azure]'

    As per the Ansible install guide for Azure. (Plus python-setuptools.)

  3. Create an http://ansible service principal and Ansible environment variables

    Note that the Ansible service principal will be similar to the Hashicorp one, but uses a slightly different set of environment variables.

    | Ansible environment variable | az ad sp create-for-rbac value | Hashicorp environment variable |
    |------------------------------|--------------------------------|--------------------------------|
    | AZURE_TENANT                 | tenant                         | ARM_TENANT_ID                  |
    | AZURE_SUBSCRIPTION_ID        | Use your existing $subId value | ARM_SUBSCRIPTION_ID            |
    | AZURE_CLIENT_ID              | appId                          | ARM_CLIENT_ID                  |
    | AZURE_SECRET                 | password                       | ARM_CLIENT_SECRET              |

    We’ll do this via a few commands for speed.

    name="http://ansible"
    subId=$(az account show --output tsv --query id)
    tenantId=$(az account show --output tsv --query tenantId)
    secret=$(az ad sp create-for-rbac --name $name --role="Contributor" --scopes="/subscriptions/$subId" --output tsv --query password)

    Again, use name="http://ansible-${subId}-sp" and rerun the az ad sp create-for-rbac command if it is taken.

    clientId=$(az ad sp show --id $name --output tsv --query appId)
    cat << EOF >> ~/.bashrc
    # Environment variables for Ansible ($name)
    export AZURE_TENANT=$tenantId
    export AZURE_SUBSCRIPTION_ID=$subId
    export AZURE_CLIENT_ID=$clientId
    export AZURE_SECRET=$secret
    EOF
    source ~/.bashrc
    env | grep AZURE

    There are many other options for Ansible to authenticate to Azure, as per the documentation.

    Note that these will be listed if you run az configure.
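The unquoted EOF in the heredoc above matters: the shell expands $tenantId and friends as the block is written, so ~/.bashrc ends up containing the literal values. A quick sketch against a scratch file (the tenant value here is a made-up placeholder):

```shell
# Unquoted EOF expands variables when the heredoc is written out;
# a quoted 'EOF' would write the literal string $tenantId instead.
tenantId="11111111-2222-3333-4444-555555555555"   # hypothetical value
rc=$(mktemp)
cat << EOF >> "$rc"
export AZURE_TENANT=$tenantId
EOF
cat "$rc"   # export AZURE_TENANT=11111111-2222-3333-4444-555555555555
rm -f "$rc"
```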

  4. Create an ansible.cfg file

    The file should contain the following:

    [defaults]
    inventory = ~/ansible/hosts
    roles_path = ~/ansible/roles
    nocows = 1

    For more information, see the Ansible configuration file documentation.

  5. Create a static hosts inventory

    The file should be in the following format, but with the public IP addresses of the two VMs that you created in lab 1:

    [citadel]
    <vm1 public ip>
    <vm2 public ip>

    If you wanted to do that programmatically:

    echo "[citadel]" > hosts
    az vm list-ip-addresses --resource-group ansible_vms --output tsv --query "[].virtualMachine.network.publicIpAddresses[0].ipAddress" >> hosts

    Read our JMESPath guide if you want to know how to construct your own queries.

  6. Check your config

    ansible --version

    Example output:

    ansible 2.8.3
    config file = /home/richeney/ansible/ansible.cfg
    configured module search path = [u'/home/richeney/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
    ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
    executable location = /usr/local/bin/ansible
    python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]

Hosts files

The hosts file is an inventory file. These may have multiple server groupings, and hosts can belong to more than one group. We have defined just one group, “citadel”, and will use that. All servers belong to the default “all” group.
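As an illustration, a hypothetical inventory with overlapping groups might look like this (the group names and addresses are invented for the example):

```ini
# Hosts may appear in more than one group; every host is also
# a member of the implicit "all" group.
[web]
10.0.1.4
10.0.1.5

[db]
10.0.2.4

[staging]
10.0.1.5
10.0.2.4
```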

As per the video, inventories can be defined in different ways:

  • Static lists of servers
    • IP addresses
    • fully qualified domain names (FQDNs)
    • IP address ranges
    • other custom entries
  • Dynamic lists of servers

We will look at dynamic inventories in a later lab.

Getting started with Ansible

The intro_adhoc page is very good and worth reading.

Below are some good commands to begin with.

  1. List all of the servers in the inventory

    ansible all --list-hosts

    Example output:

      hosts (2):

    This command used the all group. You could have instead specified citadel, or any other group that you have defined in the inventory file.

  2. Check the citadel group

    Check that the servers in the citadel group are running and can be managed by Ansible

    ansible citadel -m ping

    Example output:

    <vm1 ip> | SUCCESS => {
        "ansible_facts": {
            "discovered_interpreter_python": "/usr/bin/python"
        },
        "changed": false,
        "ping": "pong"
    }
    <vm2 ip> | SUCCESS => {
        "ansible_facts": {
            "discovered_interpreter_python": "/usr/bin/python"
        },
        "changed": false,
        "ping": "pong"
    }
Using the command and shell modules

That last command used the ping module. Modules are core to how Ansible works.

If you do not specify a module then Ansible defaults to the command module. Here is an example in short form.

  1. Run a simple command

    ansible citadel -a whoami

    Example output:

    <vm1 ip> | CHANGED | rc=0 >>
    richeney
    <vm2 ip> | CHANGED | rc=0 >>
    richeney
  2. Run the same command as root

    The -a switch is for arguments. Here is a long form version of the previous command, adding --become to sudo to root.

    ansible citadel --args "/usr/bin/whoami" --become

    Example output:

    <vm1 ip> | CHANGED | rc=0 >>
    root
    <vm2 ip> | CHANGED | rc=0 >>
    root

    Fully pathing commands is a sensible security stance when running sudo commands.

  3. Use the shell module

    The command module can only run simple commands. For anything more complex, you can specify the shell module.

    ansible citadel -m shell -a "cd ~; /bin/pwd" -b

    Example output:

    <vm1 ip> | CHANGED | rc=0 >>
    /root
    <vm2 ip> | CHANGED | rc=0 >>
    /root

Raising concurrency

Ansible will fork multiple processes when talking to multiple hosts. The default number of concurrent forked processes is rather low at 5.

Use the -f switch to specify a larger number for bigger groups. For example:

ansible dev -a "/sbin/reboot" -f 20

This is an example - there is no dev group defined in the hosts inventory and so this command will fail if you run it.
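As a loose local analogy for forks (illustrative only, using xargs rather than Ansible), running 10 tasks with at most 5 in flight is similar to managing 10 hosts with the default forks=5:

```shell
# 10 "hosts", at most 5 tasks running concurrently - analogous to forks=5.
seq 10 | xargs -P 5 -I{} sh -c 'echo "host-{} done"' | wc -l
# prints 10: every host is processed, five at a time
```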


Browse the list of inbuilt modules in the Ansible module index.

The blog page is a useful reference for some common ad hoc module calls.

We’ll use the apt module for an ad hoc package installation on Ubuntu.

Note that we installed aptitude into the Ubuntu image for lab1, but the apt module will default to using the lower level apt-get package manager if aptitude is not present.

  1. Install the cowsay package

    ansible citadel -m apt -a "name=cowsay state=present" -b

    Check the apt module page and you will see that the arguments here are a space delimited list of parameter=value pairs matching the module's parameters.

  2. Create a simple script

    Another common ad hoc module is copy. Let’s create a simple script and push that onto both of the VMs.

    Create a script file containing the following:

    /bin/hostname | /usr/games/cowsay | /usr/games/lolcat
  3. Copy the script to /usr/local/bin as root

    ansible citadel -m copy -a 'src=./ dest=/usr/local/bin/ owner=root mode=0755' --become
  4. Run the script on both VMs

    ansible citadel -a '/usr/local/bin/'

    If you run the script locally then you will notice that lolsay produces rainbow coloured output. With Ansible the colours are all stripped out.
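As an aside, the script can be written defensively so that it still runs on machines where cowsay or lolcat are missing; a sketch, assuming bash and the Ubuntu /usr/games package paths:

```shell
#!/bin/bash
# Print the hostname via cowsay and lolcat when available,
# falling back to plain output if either is absent.
out=$(hostname)
if [ -x /usr/games/cowsay ]; then
    out=$(printf '%s' "$out" | /usr/games/cowsay)
fi
if [ -x /usr/games/lolcat ]; then
    printf '%s\n' "$out" | /usr/games/lolcat
else
    printf '%s\n' "$out"
fi
```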

Browse the modules and see what other options there are for uploading and then executing a script.

Using modules in this way is a great way to apply simple ad hoc changes consistently across groups of servers.

We have run through a few examples, but you will commonly see ad hoc commands used to

  • add users to the /etc/passwd files,
  • start, stop or restart services
  • initiate reboots

And Ansible is very useful to do that consistently across groups of virtual machines.
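Hedged examples of those patterns, using the user, service and reboot modules (the terry user and nginx service names are purely illustrative, and the commands assume the citadel group from this lab's inventory):

```shell
# Ensure a user exists on every host in the group
ansible citadel -m user -a "name=terry state=present" -b

# Restart a service across the group
ansible citadel -m service -a "name=nginx state=restarted" -b

# Reboot every host in the group
ansible citadel -m reboot -b
```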

Getting information via Setup and Debug

You can get an enormous number of ansible facts from your hosts using the setup module.

  1. Select a host

    Pick one of the hosts from the list:

    ansible all --list-hosts

    I will use the public IP address of one of the hosts in the following examples.

  2. List out the facts

    List out all of the facts for a host:

    ansible <host ip> -m setup | more
  3. Filter on a string

    ansible <host ip> -m setup -a "filter=ansible_distribution*"
  4. Get a subset of the output

    ansible <host ip> -m setup -a 'gather_subset=!all,!min,network'

    You can also use the debug tool. This is useful for creating messages, and for showing the list of hostvars available.

  5. Display the available hostvars

    ansible <host ip> -m debug -a 'var=hostvars'

    You should see the hostvars information from the host’s perspective, but it will include information about both hosts in the inventory, including group information.

  6. List out the groups

    ansible localhost -m debug -a 'var=groups.keys()'
  7. List all groups and their host members

    See all of the groups and the hosts within them using:

    ansible localhost -m debug -a 'var=groups'
  8. List each host’s group memberships

    To see which groups each host belongs to:

    ansible citadel -m debug -a 'var=group_names'

Coming up next

The hostvars available per host include only a very basic set of information, so they aren’t particularly useful just yet. We will be extending this with lots of Azure specific information in the next lab.

We will also move from static to dynamic inventories.


Help us improve

Azure Citadel is a community site built on GitHub, so please contribute and send a pull request.
