Command-line OpenStack

Tuesday, December 9th, 2025

Abstract: OpenStack provides a powerful cloud platform for managing compute, storage, and networking resources. While its web dashboard offers a user-friendly interface and can be convenient at times, it often involves too many clicks to complete routine tasks such as creating and configuring a virtual machine. The command-line interface (CLI) offers a faster, more streamlined way to create and manage resources. In this webinar, we will discuss python-openstackclient, walk through its setup and authentication, and demonstrate how to efficiently perform common OpenStack operations from the terminal. Whether you are managing a few instances or automating workflows, this session will help you leverage the CLI for speed, simplicity, and productivity.

Intro

The Alliance Federation provides free cloud resources to researchers on several national systems – Arbutus, Cedar Cloud, Nibi Cloud, and Béluga Cloud – with the up-to-date list always available at https://docs.alliancecan.ca/wiki/Cloud#Cloud_systems. To get started, you will need a project in one of these clouds – follow the instructions on the same page. Projects are free but must be requested, and they draw on your existing allocation (within your RAS or RAC quotas).

Note
  • RAS = Rapid Access Service (default allocation)
  • RAC = annual Resource Allocation Competition
  • Arbutus v2.0 is being slowly rolled out, currently with a small subset of users for testing
  • Cedar Cloud will eventually be replaced by Fir Cloud (next 6 months)

Before you begin creating your instances, you might want to check out our Cloud Technical Glossary for terms like “virtual machine”, “instance”, “image”, “flavour”, “volume”, etc.

Install OpenStack CLI tools

On your own computer:

uv venv ~/env-openstack --python 3.12   # create a new virtual environment
source ~/env-openstack/bin/activate
uv pip install python-openstackclient
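openstack --version                     # sanity check: should print the installed client version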
...
deactivate
Important: Where to run OpenStack CLI commands

All openstack commands we use today will be run directly on our own local computer.

Launch an instance in 5 easy steps

Step 1 - authenticate

Our goal is to replace OpenStack web UI operations (lots of mouse clicks) with bash commands. We still need to log in to the web UI to authenticate our CLI client:

  1. Log into Cedar Cloud or old Arbutus
  2. In the left-side drop-down menu, select your project
  3. In the right-side drop-down menu, click “Download OpenStack RC File”
    • a bash script that sets the environment variables needed to access your OpenStack project from the command-line tools
    • RC = Run Commands
  4. Source your OpenStack RC File
cd ~/training/openstack
source ~/env-openstack/bin/activate
mv ~/Downloads/CCInternal-*-openrc.sh .
source CCInternal-*-openrc.sh   # at prompt enter your CCDB password
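
To confirm that authentication worked, you can ask for a token – any read-only command would do as a check, but this one is quick:

openstack token issue   # success means your credentials and RC file are good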

Step 2 - decide on VM flavour and image

In OpenStack, VM flavours define the hardware configurations for virtual machines, specifying vCPUs, RAM, and disk size. There are two types of flavours in our cloud:

  1. Compute (‘c’) flavours
    • intended to run for a limited time, usually with a very high sustained CPU/memory usage
    • e.g. for code development (need to compile often), or building a container image
  2. Persistent (‘p’) flavours
    • intended to run indefinitely, with low or bursty CPU/memory requirements
    • e.g. for hosting a web server, a data portal, or a web application

Let’s check the resources available on this system:

openstack flavor list    # check VM flavours 🡒 use c2-15gb-31
openstack image list     # check OS images 🡒 use Ubuntu-24.04.2-Noble-x64-2025-03
openstack flavor show c2-15gb-31 --column disk --column ram   # get more info on this flavour
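
If the flavour list is long, you can filter it by the name prefix – a small sketch using the client's machine-readable value output:

openstack flavor list --format value --column Name | grep '^c'   # compute flavours only
openstack flavor list --format value --column Name | grep '^p'   # persistent flavours only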

Finally, in our cloud setup, each project should have its own dedicated network. This is not something you normally select, but you need to know its name to spin up the instance, so let’s find it out:

openstack network list   # list visible (to us) networks 🡒 use CCInternal-SFU-training-network
Values in this demo

Cloud     Flavour      Image                              Network
Cedar     c2-15gb-31   Ubuntu-24.04.2-Noble-x64-2025-03   CCInternal-SFU-training-network
Arbutus   p1-1.5gb     Ubuntu-24.04-Noble-x64-2024-06     CCInternal-WG-Supp-network

Step 3 - create an SSH key pair, upload the public key

Next, we should create an ssh key pair and upload the public key (do it only once!):

cd ~/.ssh

ssh-keygen -t ed25519 -f cedarKey       # enter a non-empty passphrase (strongly recommended)
ssh-add --apple-use-keychain cedarKey   # store the passphrase in the keychain (macOS)

openstack keypair list                  # list currently uploaded keys
openstack keypair delete <key>          # if needed
openstack keypair create --public-key cedarKey.pub cedarKey   # upload the key
openstack keypair list                  # see the newly uploaded key
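
To double-check the upload, you can display the key's details:

openstack keypair show cedarKey         # shows the key's fingerprint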

Step 4 - launch the VM instance

Now it is time to launch our virtual machine:

openstack server create --flavor c2-15gb-31 --image Ubuntu-24.04.2-Noble-x64-2025-03 \
          --nic net-id=CCInternal-SFU-training-network --key-name cedarKey \
          --security-group default alexDemoBox   # launch the VM

openstack server list                            # should see our VM in the list
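
The instance takes a short while to build. If you are scripting this, you can poll its status until it becomes ACTIVE – a minimal sketch, reusing the server name from above:

while [ "$(openstack server show alexDemoBox --format value --column status)" != "ACTIVE" ]; do
    sleep 5   # poll every few seconds while the instance is building
done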

Step 5 - associate a floating IP

Next, we need to get a floating IP on an external-facing network and attach it to our VM:

openstack floating ip create Public-Network          # see "floating_ip_address"
openstack server add floating ip alexDemoBox <floating_ip_address>   # attach it to my instance
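
In a script, you can capture the new address into a shell variable instead of copying it by hand – a sketch assuming the same Public-Network as above:

floatingIP=$(openstack floating ip create Public-Network --format value --column floating_ip_address)
openstack server add floating ip alexDemoBox $floatingIP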

Now our machine is ready for use!

Optionally, you can display a few items:

openstack server show alexDemoBox                      # long table of all attributes
openstack server show alexDemoBox --column image       # check the VM's operating system
openstack server show alexDemoBox --column addresses   # check the VM's floating IP

Connect to the machine

To connect to the VM, we could run the command:

ssh ubuntu@<floating_ip_address> -i ~/.ssh/cedarKey

Personally, I prefer to store the configuration in my ~/.ssh/config file:

cat >> ~/.ssh/config <<'EOF'
Host ubuntu
    HostName <floating_ip_address>
    User ubuntu
    IdentityFile ~/.ssh/cedarKey
EOF

and then simply connect with:

ssh ubuntu

Install software

It is very easy to install packages from Ubuntu’s standard repositories:

sudo apt update       # update the list of packages
sudo apt install -y wget bat btop
Note: Security update

At this point you probably should upgrade everything on your machine, including installing the latest security patches:

sudo apt upgrade   # upgrade installed software to their latest versions
sudo reboot

but I won’t do it now, as it might take quite a few minutes.

You can also install Snap packages – self-contained apps that bundle all their dependencies and run in a separate sandbox per app (better isolation/security), e.g.

sudo snap install emacs --classic
/snap/bin/emacs -nw   # launch it in the terminal to check that it works

If you want to use this machine to build Apptainer containers as root, you’ll need to install Apptainer:

sudo add-apt-repository -y ppa:apptainer/ppa
sudo apt install -y apptainer
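
As a quick test of the installation, you could pull and run a small public container – a sketch using Apptainer's usual test image:

apptainer pull docker://ghcr.io/apptainer/lolcow   # downloads lolcow_latest.sif
apptainer run lolcow_latest.sif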

Install my configs (optional)

I will install my own .bashrc and .emacs files:

ssh -A ubuntu          # enable agent forwarding
tmux                   # start tmux
git clone git@github.com:razoumov/synchpc.git syncHPC   # clone from a private repo
/bin/rm -f ~/.bashrc && ln -s ~/syncHPC/bashrc ~/.bashrc && source ~/.bashrc
/bin/rm -f ~/.emacs && ln -s ~/syncHPC/emacs ~/.emacs
emacs -nw              # wait for emacs to install its packages from my config

Create and attach a volume (optional)

At this point our VM mounts several disk partitions including:

  • a root/system partition (likely 19GB) that hosts /home/ubuntu
  • a 31GB ephemeral disk from the ‘c2-15gb-31’ flavour

If you need more space, you can create and attach additional volumes.

openstack volume create --size 100 demoVolume        # create a 100GB volume
openstack volume show demoVolume --column status     # wait for it to become available
openstack server add volume alexDemoBox demoVolume   # attach the volume to the VM

In the last command’s output, note the volume’s device, e.g. /dev/vdc, and use it below:

ssh ubuntu
lsblk                       # list block devices; find /dev/vdc
sudo mkfs.ext4 /dev/vdc     # format the volume
sudo mkdir /data
sudo mount /dev/vdc /data   # to make it persistent across reboots, add an entry to /etc/fstab
df                          # should show my mounted volume
sudo mkdir /data/tmp
sudo chown ubuntu:ubuntu -R /data/tmp
ln -s /data/tmp ~/tmp
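
As noted in the comment above, to make the mount persistent across reboots you can add an /etc/fstab entry; using the filesystem's UUID is more robust than the device name, which may change. A sketch, run inside the VM:

sudo blkid /dev/vdc                    # note the filesystem's UUID
echo 'UUID=<uuid> /data ext4 defaults 0 2' | sudo tee -a /etc/fstab   # substitute your UUID
sudo mount -a                          # verify that the fstab entry works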

Allow inbound traffic on a given port

The default security group allows only incoming ssh traffic on port 22:

openstack security group list           # show all current networking security groups
openstack security group show default   # show details of the default group: likely just incoming ssh on port 22

Let’s say we want to run a server that listens for incoming traffic on port 8080. Here are the three steps to do this:

  1. It is best to create a new security group (let’s call it rule01):
openstack security group create rule01 --description "Allow incoming trame traffic"
  2. In this new group, create a security rule to allow incoming TCP traffic on port 8080:
openstack security group rule create --protocol tcp --dst-port 8080 --ingress rule01
  3. Attach this new security group to our instance:
openstack server add security group alexDemoBox rule01
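
To verify, you can list the security groups attached to the instance and the rules inside the new group:

openstack server show alexDemoBox --column security_groups   # should list default and rule01
openstack security group rule list rule01                    # should include the port-8080 rule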

Set up Git

If you are planning to use this machine for development, you will want to set up Git:

git config --global user.name "..."
git config --global user.email "..."
git config --global core.editor "emacs -nw"
git config --global core.autocrlf input   # line endings on macOS or Linux
git config --global core.ignorecase true
git config --global diff.colorMoved zebra                        # change from red+green colour coding to purple/turquoise
git config --global alias.last 'diff HEAD~1 HEAD'                # file contents of the last commit
git config --global alias.all 'log -p'                           # file contents of all commits
git config --global alias.list 'ls-tree --full-tree -r HEAD'     # list files inside the repository
git config --global alias.one "log --graph --date-order --date=short --pretty=format:'%C(cyan)%h %C(yellow)%ar %C(auto)%s%+b %C(green)%ae'"
git config --global alias.files 'show --name-only'               # show only file names in a commit
git config --global alias.search 'grep --break --heading -n -i'
git config --global alias.st 'status'
git config --global alias.br 'branch'
git config --get alias.files                                     # verify that one of the aliases was set
git config --global init.defaultBranch main
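
With these aliases in place, everyday commands become shorter, e.g.:

git st            # same as 'git status'
git one           # compact one-line graph of the commit history
git search TODO   # case-insensitive search through the tracked files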

Remote graphical applications

To work with graphical applications on the VM, we need a way to forward the VM’s display output to our local machine. Here are some popular options:

  1. SSH with X11 Forwarding – slow!
  2. use a Remote Desktop application (RDP or VNC)
  3. browser-based access, e.g. using noVNC or Apache Guacamole
  4. use a hybrid tool like https://github.com/Xpra-org/xpra: rather than exporting an entire desktop session, Xpra can forward individual applications, and they appear on your local desktop as if they were local windows; faster than X11 Forwarding; must be installed on both client and server

Here, we can try standard X11 forwarding – just keep in mind that it’ll be slow, and we might have to install xterm for the demo.
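
A minimal sketch of that demo, assuming an X server runs on your local machine (e.g. XQuartz on macOS). In the VM:

sudo apt install -y xterm x11-apps   # xterm plus a few simple test clients

Then reconnect from the laptop with X11 forwarding enabled:

ssh -X ubuntu
xterm &   # the window should appear on your local desktop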

Remote visualization demo with trame

On my laptop:

cd ~/talks/2025/11-trame
tar cvfz demoFiles.tgz trame-tutorial/{04_application/coneMeshContour.py,data/disk_out_ref.vtu}
scp demoFiles.tgz ubuntu:

In the VM:

sudo apt install -y libxrender1          # X11 Rendering Extension
sudo apt install -y libosmesa6           # no GPU 🡒 OSMesa rendering

curl -LsSf https://astral.sh/uv/install.sh | sh   # into ~/.local/bin
uv venv ~/env-trame --python 3.12        # create a new virtual environment
source ~/env-trame/bin/activate
uv pip install numpy
uv pip install trame                     # trame core
uv pip install trame-vuetify trame-vtk   # widgets
uv pip install trame-components          # needed for trame-tutorial/04_application
uv pip install vtk                       # the VTK library

tar xvfz demoFiles.tgz   # unpack the demo files
python trame-tutorial/04_application/coneMeshContour.py --host 0.0.0.0 --port 8080 --server

Now I should point my local browser to <floating_ip_address>:8080.

Create image/volume snapshots

You can create a snapshot (backup image) of your virtual machine, so that you can launch exactly the same virtual machine later on, or even migrate it to another cloud. The snapshot will include:

  • the operating system
  • all additional installed software and configuration files
  • all data from the root/system disk

but not data from the ephemeral disk or volume attachments.

Snapshot a VM

While our VM is running, you can store it as an image:

openstack server image create --name alexDemoImage alexDemoBox
openstack image list

The new image should appear in the list, likely as “queued” first – wait for it to become “active”. Now you can use this image to launch new VM instances!

openstack image show alexDemoImage --human-readable   # show its details
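
For example, launching a clone looks just like the original server create, with the snapshot as the image (alexDemoClone is a hypothetical name):

openstack server create --flavor c2-15gb-31 --image alexDemoImage \
          --nic net-id=CCInternal-SFU-training-network --key-name cedarKey \
          --security-group default alexDemoClone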
Note
  • For critical workloads, it is better to stop the VM first (see below).
  • Snapshots save only the root/system disk, not the ephemeral disk or volume attachments.
    • the ephemeral disk is temporary: no way to snapshot it in our setup
    • volume attachments can be saved separately (see below)

Stop/resume the VM

For critical workloads, it is better to stop the VM first:

openstack server stop alexDemoBox
openstack server show alexDemoBox --column OS-EXT-STS:vm_state   # make sure it is "stopped"
>>> create an image snapshot as described above
openstack server start alexDemoBox
openstack server show alexDemoBox --column OS-EXT-STS:vm_state   # should be active

Snapshot an attached volume

You will need to detach the volume to create its snapshot:

openstack volume list          # will likely show Status="in-use"
>>> stop the application using the volume
openstack server remove volume alexDemoBox demoVolume   # detach the volume from the VM
openstack volume list          # wait until Status="available"
openstack volume snapshot create --volume demoVolume dataSnapshot
openstack server add volume alexDemoBox demoVolume      # reattach the volume to the VM
openstack volume snapshot list
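
If you later need the data back, you can create a fresh volume from the snapshot – a sketch with a hypothetical volume name:

openstack volume create --snapshot dataSnapshot --size 100 restoredVolume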

Delete instance images and volume snapshots

openstack image delete alexDemoImage
openstack volume snapshot delete dataSnapshot

Destroy the VM and its associated resources

On the laptop:

openstack server remove volume alexDemoBox demoVolume   # detach the volume from the VM
openstack volume delete demoVolume                      # delete the volume
openstack volume list                                   # verify

openstack floating ip delete <floating_ip_address>   # release the floating IP back to the pool

openstack server delete alexDemoBox                  # delete the instance
>>> wait a minute before running the next command
openstack server list                                # verify

openstack security group delete rule01               # delete security group

openstack image delete alexDemoImage            # mentioned earlier
openstack volume snapshot delete dataSnapshot   # mentioned earlier