The Provisioning Server

The automated provisioning process is a two-phase system. First, a virtual machine is created to act as the central provisioning server. Second, this server is used to deploy Ubuntu onto bare-metal hardware automatically.

Phase 1: Creating the Provisioning Server VM

The process begins by creating a dedicated virtual machine that will orchestrate the entire bare-metal deployment. This is handled by the vmware-automation project, which contains a set of Python scripts designed to run on a machine with VMware Workstation installed.

The workflow is as follows:

  1. A blank "template" VM is created in VMware Workstation with a predefined hardware configuration (4 GB RAM, 2 CPUs, and a 25 GB disk).
  2. The create-vm.py script is executed, which clones this template to create a new VM, named provisioning-server by default.
  3. The script then generates a custom cloud-init ISO using create-cidata-iso.py. This ISO contains the autoinstall configuration needed for an unattended Ubuntu 24.04 installation (a minimal example of such a configuration follows this list).
  4. The script attaches the Ubuntu 24.04 installation ISO and the newly created cloud-init ISO to the VM and powers it on.
  5. Ubuntu's autoinstall process reads the configuration from the cloud-init ISO and installs the operating system without any user interaction.
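
A minimal autoinstall user-data file of the kind packed into the cloud-init ISO is sketched below. This is illustrative only; the hostname, username, password hash, and storage layout actually produced by create-cidata-iso.py are assumptions here.

    #cloud-config
    autoinstall:
      version: 1
      identity:
        hostname: provisioning-server
        username: ubuntu
        # must be a crypted hash (e.g. from mkpasswd), not a plaintext password
        password: "$6$replace-with-real-hash"
      ssh:
        install-server: true
      storage:
        layout:
          name: lvm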

At the end of this phase, a fully installed Ubuntu 24.04 VM is running and ready to be configured by Ansible.

Phase 2: Configuring the Server with Ansible

With the base VM running, the ansible-provisioning-server project takes over. This Ansible playbook, run directly on the provisioning server VM, installs and configures a suite of services that work in concert to guide a new bare-metal machine through its own automated installation of Ubuntu.
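
At a high level, the playbook can be thought of as applying a handful of roles to the provisioning server. The sketch below is only a rough outline; the actual role names and inventory group used by ansible-provisioning-server are assumptions.

    - hosts: provisioning_server
      become: true
      roles:
        - dnsmasq    # DHCP, TFTP and DNS for the provisioning network
        - nginx      # iPXE script, unpacked ISO and autoinstall configs over HTTP
        - iptables   # NAT gateway for the internal network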

Core Provisioning Services

  • dnsmasq (DHCP/TFTP/DNS): This is the first point of contact for a new node. When a bare-metal machine is powered on and set to PXE boot, it sends out a DHCP request. Dnsmasq is configured to listen for these requests, assign a specific IP address based on the node's MAC address (defined in nodes.json), and provide the iPXE bootloader via its built-in TFTP server. It also acts as a local DNS resolver for the provisioning network (a configuration sketch follows this list).
  • Nginx & PHP (Web Server): The web server is the main engine of the provisioning process. It hosts the iPXE boot script, the unpacked Ubuntu ISO files, and the cloud-init autoinstall configurations (an example boot script is sketched after this list). It also serves a simple PHP-based web interface to monitor the status of each node.
  • iptables (NAT Gateway): To allow newly provisioned nodes to reach the internet for package downloads, the provisioning server is also configured to act as a NAT gateway, masquerading traffic from the internal network (the corresponding rules are sketched after this list).
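
As a rough illustration of what the dnsmasq role produces, a generated configuration might contain entries along the following lines. The interface name, IP ranges, MAC address, and file names are assumptions, not the project's actual values.

    interface=eth1
    dhcp-range=10.0.0.100,10.0.0.200,12h
    # static lease derived from nodes.json: MAC -> hostname -> IP
    dhcp-host=ac:1f:6b:aa:bb:cc,node1,10.0.0.11
    enable-tftp
    tftp-root=/srv/tftp
    # plain PXE firmware gets the iPXE binary over TFTP; once iPXE is
    # running, it fetches its boot script from the web server instead
    dhcp-match=set:ipxe,175
    dhcp-boot=tag:!ipxe,undionly.kpxe
    dhcp-boot=tag:ipxe,http://10.0.0.1/boot.ipxe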
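
The iPXE boot script served by Nginx then loads the Ubuntu installer kernel and points it at the autoinstall data. The server address, paths, and kernel arguments below are illustrative assumptions.

    #!ipxe
    # fetch the live-server kernel and initrd over HTTP, then boot with autoinstall
    kernel http://10.0.0.1/casper/vmlinuz ip=dhcp url=http://10.0.0.1/ubuntu-24.04-live-server-amd64.iso autoinstall ds=nocloud-net;s=http://10.0.0.1/autoinstall/${mac:hexhyp}/
    initrd http://10.0.0.1/casper/initrd
    boot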
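
The NAT behaviour, in turn, can be expressed with the ansible.builtin.iptables module, roughly as shown below; the interface names are assumptions.

    - name: Masquerade traffic leaving via the uplink interface
      ansible.builtin.iptables:
        table: nat
        chain: POSTROUTING
        out_interface: eth0        # uplink with internet access (assumed name)
        jump: MASQUERADE

    - name: Forward traffic from the provisioning network to the uplink
      ansible.builtin.iptables:
        chain: FORWARD
        in_interface: eth1         # provisioning network interface (assumed name)
        out_interface: eth0
        jump: ACCEPT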

Server Management & Control

A key part of the automation is the ability to control the bare-metal servers themselves. This is accomplished through Redfish, a standard API for server management.

  • Supermicro Update Manager (SUM): For advanced BIOS configuration, such as setting the boot order, the playbook uses Supermicro's official sum utility. The set_boot_order.yml playbook sets the boot order to PXE first, which is essential for provisioning to begin (a task sketch follows this list).
  • Redfish Python Script (redfish.py): A custom Python script is included to send Redfish commands to the servers. The playbooks use it to perform actions like setting a one-time boot override into BIOS setup, allowing configuration changes to be made before the OS installation (a minimal sketch of such a call follows this list).
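
For example, set_boot_order.yml might wrap the sum utility in a task along these lines. The BMC address variables, credentials, and BIOS settings file are assumptions, and the exact command name and options depend on the SUM version in use.

    - name: Push a BIOS configuration that puts PXE first in the boot order
      ansible.builtin.command: >
        ./sum -i {{ bmc_ip }} -u {{ bmc_user }} -p {{ bmc_password }}
        -c ChangeBiosCfg --file files/pxe_first_boot_order.cfg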
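
The sketch below shows the kind of request such a script sends; it is not the project's redfish.py, and the BMC address, credentials, and system path are assumptions. It uses the standard Redfish Boot override properties to boot into BIOS setup exactly once.

    #!/usr/bin/env python3
    # Illustrative Redfish one-time boot override (not the project's redfish.py).
    import requests

    BMC = "https://10.0.0.121"                 # assumed BMC address
    AUTH = ("ADMIN", "changeme")               # assumed credentials
    SYSTEM = f"{BMC}/redfish/v1/Systems/1"     # assumed system resource path

    payload = {
        "Boot": {
            "BootSourceOverrideEnabled": "Once",      # apply to the next boot only
            "BootSourceOverrideTarget": "BiosSetup",  # drop into BIOS setup
        }
    }

    resp = requests.patch(SYSTEM, json=payload, auth=AUTH, verify=False, timeout=30)
    resp.raise_for_status()
    print(f"Boot override accepted: HTTP {resp.status_code}")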