This guide provides a workflow to build custom SEAPATH ISOs and deploy them on Physical Servers (Bare Metal) or Virtual Machines (Libvirt).
Note: All scripts (`.sh`, `.py`) and configuration files (`.xml`, `.ks`) mentioned in this guide are located in the root of this repository.
- Python 3: Required to run the build script.
- Base ISO: Download the standard Binary DVD ISO for your preferred OS and place it in the root of this repository.
  - RHEL 9: Download the standard RHEL 9 Binary DVD ISO (e.g., `rhel-9.x-x86_64-dvd.iso`) from the Red Hat Customer Portal.
  - CentOS Stream 9: Download the latest DVD ISO from the CentOS Mirrors.
- Red Hat Subscription (RHEL only): To successfully install RHEL packages, you must have an active Red Hat subscription. You will need your Organization ID and an Activation Key.
- SSH Public Keys: Have your public SSH keys ready in your `~/.ssh/` directory; they are automatically injected into the Kickstart file for passwordless access.
- Optional (Passwords): The default password for the `root`, `virtu`, and `ansible` users is `toto`. To change it, edit the `.ks` file and replace the hashed passwords.
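If you change the default password, a replacement hash can be generated with `openssl passwd -6` (a sketch; this assumes OpenSSL is installed on your build host, and the exact directive names depend on your `.ks` file):

```shell
# Generate a SHA-512 crypt hash suitable for Kickstart's --iscrypted options,
# e.g.  rootpw --iscrypted <hash>
openssl passwd -6 'MyNewSecret'
# Output starts with $6$ followed by the salt and hash
```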
In this phase, you will create three unique ISOs (one for each node). During this process, your host's SSH public key is automatically injected into the images for secure, passwordless access. We provide a Python orchestrator that automates the container build and ISO generation.
If you are building RHEL 9 ISOs, export your Red Hat Organization ID and Activation Key as environment variables. The script will automatically read them.
```
export ORG_ID="your_org_id_here"
export ACTIVATION_KEY="your_activation_key_here"
```

Execute the Python builder. Follow the interactive prompts to define your OS, target environment, disk, and network interfaces. It will dynamically detect your ISO, build the required container, and generate the final ISOs (`seapath-node1.iso`, `seapath-node2.iso`, `seapath-node3.iso`).
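For RHEL builds, a small pre-flight check (a sketch, not part of the build script) can warn you before the interactive prompts start if either variable is missing:

```shell
# Warn if the Red Hat subscription variables are not set (RHEL builds only)
for v in ORG_ID ACTIVATION_KEY; do
    if [ -z "$(printenv "$v")" ]; then
        echo "warning: $v is not set" >&2
    fi
done
```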
```
./build_seapath.py
```

Example session:

```
==================================================
SEAPATH ISO Builder
==================================================
Which Operating System do you want to build?
[1] Red Hat Enterprise Linux 9 (RHEL)
[2] CentOS Stream 9
Select an option: 1
What is the target deployment environment?
[1] Virtual Machines (Libvirt Cluster)
[2] Bare Metal (Physical Servers)
Select an option: 1
Enter the target installation disk [sda]:
Enter the primary network interface [enp1s0]:
... (ISO Generation Phase) ...
[+] PHASE 3: Virtual Machine Infrastructure
==================================================
Do you want to configure the Libvirt Network (Host Bridges)? [y/N]: y
[*] Running network preparation...
Do you want to deploy the VMs (Libvirt Domains)? [y/N]: y
Deploy as a 3-Node Cluster? [y/N]: y
[*] Running VM deployment...
```
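After the builder finishes, a quick sanity check confirms the three images were actually produced (a sketch, assuming they are written to the repository root):

```shell
# Verify that each per-node ISO exists and is non-empty
for n in 1 2 3; do
    iso="seapath-node${n}.iso"
    if [ -s "$iso" ]; then
        echo "OK: $iso"
    else
        echo "MISSING: $iso" >&2
    fi
done
```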
- Physical Hardware: Flash each ISO to a USB drive and boot the corresponding server.
- [VM Specific]: If you used the orchestrator to deploy the VMs, they are registered in Libvirt. Start them using `virsh`:

```
sudo virsh -c qemu:///system start seapath-node-1
sudo virsh -c qemu:///system start seapath-node-2
sudo virsh -c qemu:///system start seapath-node-3
```
- Select "Install Red Hat Enterprise Linux 9.x" or "Install CentOS Stream 9" in the boot menu.
- The installation is 100% automated via Kickstart. The system will reboot once finished.
Access is secured via SSH. Passwords are disabled for remote login.
- Add your key to your local session:

```
ssh-add ~/.ssh/your_private_key_42
```

- [VM Specific] Log in to a node:

```
ssh root@192.168.124.2  # Node 1
```

If you reinstall a node, your host will detect a fingerprint mismatch. Clear the old record with:

```
ssh-keygen -R 192.168.124.2
```
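Optionally, add host aliases to your `~/.ssh/config` so you can type `ssh seapath-node1` instead of the IP (a sketch; nodes 2 and 3 follow the `.3`/`.4` address pattern used in this lab, and the key path must match your own):

```
Host seapath-node1
    HostName 192.168.124.2
    User root
    IdentityFile ~/.ssh/your_private_key_42
# Repeat for seapath-node2 (192.168.124.3) and seapath-node3 (192.168.124.4)
```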
Once nodes are online, run the SEAPATH playbooks to configure the system.
```
git clone https://github.com/seapath/ansible.git
```

Use the container image generated by the Python script in Step 1. Depending on your choice, the image tag will be `rhel4seapath` or `centos4seapath`.
```
# Replace <os_type> with 'rhel' or 'centos'
podman run --rm -it \
  --net=host \
  --security-opt label=disable \
  --mount type=bind,source=$(pwd)/ansible,target=/root/ansible/ \
  --mount type=bind,source=/home/$(whoami)/.ssh/,target=/root/.ssh/,readonly \
  localhost/<os_type>4seapath bash
```

Then, inside the container:

```
cd /root/ansible/

# Configure Git to trust the mounted directory
git config --global --add safe.directory /root/ansible
./prepare.sh

# Start the SSH agent and add your private key for passwordless authentication
eval $(ssh-agent -s)
ssh-add /root/.ssh/your_private_key_42

# Disable host key checking to prevent interactive prompts during automation
export ANSIBLE_HOST_KEY_CHECKING=False
```

Before running the playbook, you must customize the inventory file to map the Ansible variables to your virtual infrastructure.
Edit the file inventories/examples/seapath-standalone.yaml to match the following configuration (example for Node 1):
```diff
--- a/inventories/examples/seapath-standalone.yaml
+++ b/inventories/examples/seapath-standalone.yaml
 node1:
   # Admin network settings
-  ansible_host: 192.168.200.125 # administration IP. TODO
-  network_interface: eno1 # Administration interface name. TODO
-  gateway_addr: 192.168.200.1 # Administration Gateway. TODO
-  dns_servers: 192.168.200.1 # DNS servers. Remove if not used. TODO
+  ansible_host: 192.168.124.2 # administration IP.
+  network_interface: enp1s0 # Administration interface name.
+  gateway_addr: 192.168.124.1 # Administration Gateway.
+  dns_servers: 192.168.124.1 # DNS servers.
   subnet: 24 # Subnet mask in CIDR notation.
   # Time synchronisation
-  ptp_interface: eno12419 # PTP interface receiving PTP frames. TODO
+  ptp_interface: enp1s0 # PTP interface receiving PTP frames.
   ntp_servers:
     - "185.254.101.25" # public NTP server example
   ansible_connection: ssh
   ansible_python_interpreter: /usr/bin/python3
   ansible_remote_tmp: /tmp/.ansible/tmp
-  ansible_user: ansible
+  ansible_user: virtu
+  ansible_ssh_private_key_file: /root/.ssh/your_private_key_42
```
Note: In this virtual lab setup, `enp1s0` is the default management interface. If you are deploying on different hardware, verify the interface name using `ip addr`.
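To list only the interface names rather than the full `ip addr` output, the one-line form of `ip link` can be filtered (a sketch using standard `iproute2` and `awk`):

```shell
# Print one interface name per line, e.g. lo, enp1s0, enp2s0, ...
ip -o link show | awk -F': ' '{print $2}'
```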
Now that everything is prepared, run the playbook.
```
ansible-playbook -i inventories/examples/seapath-standalone.yaml playbooks/seapath_setup_main.yaml
```

This mode enables High Availability and Distributed Storage with Ceph. It uses the Ring Topology simulated by the Linux bridges created in Step 2.
Before running the cluster setup, you must configure the inventory to match our virtual lab's network mapping.
Edit the file inventories/examples/seapath-cluster.yaml with the following key values:
| Section | Variable | Value for Virtual Lab |
|---|---|---|
| All Hosts | `ansible_host` | `.2` (node1), `.3` (node2), `.4` (node3) |
| Network | `gateway_addr` | `192.168.124.1` |
| Interfaces | `network_interface` | `enp1s0` |
| Ring Links | `team0_0` / `team0_1` | `enp2s0` and `enp3s0` (Data Ring) |
| Storage | `ceph_osd_disk` | `/dev/disk/by-path/your_disk` |
| SSH | `ansible_user` | `virtu` |
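Put together, the node 1 entry of `seapath-cluster.yaml` would look roughly like this (a sketch based on the table above; the exact key names and nesting must follow the example inventory shipped with the playbooks):

```yaml
node1:
  ansible_host: 192.168.124.2   # .3 for node2, .4 for node3
  network_interface: enp1s0
  gateway_addr: 192.168.124.1
  team0_0: enp2s0               # data ring link 1
  team0_1: enp3s0               # data ring link 2
  ceph_osd_disk: "/dev/disk/by-path/your_disk"
  ansible_user: virtu
```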
The Ceph OSD requires a dedicated disk. In this lab, we created a secondary 50GB disk. You need to find its unique path to ensure Ansible targets the correct device.
- Log into Node 1 via SSH.
- Run the following command:

```
ls -l /dev/disk/by-path/ | grep -v "part"
```

- Look for the disk that points to `sdb` (our secondary disk). Example for this VM setup:

```
ceph_osd_disk: "/dev/disk/by-path/pci-0000:00:1f.2-ata-3"
```
Note: This ID will vary depending on your virtual controller or physical hardware. Always verify it before running the playbook.
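The lookup above can also be scripted. The helper below (`bypath_for` is a name introduced here, not part of SEAPATH) prints the persistent path that resolves to a given device, skipping partition entries:

```shell
# Print the /dev/disk/by-path entry that resolves to a given block device.
# Usage: bypath_for /dev/sdb [search-dir]
bypath_for() {
    dev=$(readlink -f "$1")
    dir=${2:-/dev/disk/by-path}
    for p in "$dir"/*; do
        case "$p" in *-part*) continue ;; esac   # skip partition entries
        if [ "$(readlink -f "$p")" = "$dev" ]; then
            echo "$p"
        fi
    done
}

bypath_for /dev/sdb   # on the lab VM, prints e.g. /dev/disk/by-path/pci-0000:00:1f.2-ata-3
```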
Inside the automation container, run the main playbook pointing to the cluster inventory:
```
ansible-playbook -i inventories/examples/seapath-cluster.yaml playbooks/seapath_setup_main.yaml
```

To tear down the virtual lab afterwards, run the cleanup script:

```
./cleanup_vm_host.sh
```