Features

A core, some addons, all modular

BlueBanquise is a generic stack, designed to handle simple IT rooms of workstations, HPC clusters, enterprise networks, etc.
We try to keep it as simple as possible, with advanced features optional and hidden.

The stack provides many features. This page presents the main ones.

Modularity and flexibility

BlueBanquise can be seen as a set of Ansible roles, which ensures full modularity.
Even elements of the core can be replaced by others.

The Ansible inventory structure used can describe a wide range of cluster and network architectures, and allows adding more variables as desired.
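
For illustration, such an inventory is a plain folder tree (a minimal sketch; the file and folder names here are purely illustrative, only the standard Ansible inventory layout is assumed):

  inventory/
    cluster/                 # hosts and groups definitions, in YAML
      managements.yml
      computes.yml
    group_vars/
      all/                   # variables shared by all hosts (networks, repositories, ...)
      computes/              # variables dedicated to the computes group

Adding a new variable is simply adding a line in one of these files.
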
[Figure: BlueBanquise, an Ansible stack]


Modern

The whole stack is based on Python 3.6+ and Ansible, for a better long-term investment.
The PXE stack is compatible with both Legacy (BIOS) and EFI boot.

A focus is placed on keeping scripting to a minimum.

Technical details

Supported OS and platforms

In its current state, the stack can deploy on mixed Linux distributions and platforms (within the same cluster), including:
  • CentOS/RHEL 7.x and 8.x, with all features available.
  • openSUSE Leap 15.1, with limited features (client only).
Depending on demand, Ubuntu, Debian, or openSUSE could be fully implemented.
The tested platform was x86_64 for CentOS/RHEL/openSUSE Leap, but the stack should also be able to deploy on arm64.

This ability to handle multiple Linux distributions and architectures at the same time makes it easy to accommodate exotic hardware or customer needs.

Data

All stack data are stored in plain text files, in YAML format, using the Ansible inventory folder structure.

No binaries + no databases = fully editable and tunable with minimal knowledge.
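
As an example, a compute node can be described entirely in such a file, editable with any text editor (a hypothetical sketch; only the standard Ansible inventory YAML format is assumed here, and variable names like network_interfaces are illustrative):

  # computes.yml - hypothetical host definition
  computes:
    hosts:
      c001:
        network_interfaces:    # illustrative variable name
          - interface: eth0
            ip4: 10.10.3.1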

Roles

The stack provides many Ansible roles, seen as independent modules. Nearly all roles are autonomous from one another, so each can easily be swapped for another.
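
Since these are standard Ansible roles, applying them is just a matter of writing a playbook. A minimal sketch (the hostname and the role selection are illustrative, and the roles path or naming may differ on your setup):

  # playbook.yml - apply some core roles to a management node
  - name: Configure management node
    hosts: management1
    become: true
    roles:
      - hosts_file
      - dns_server
      - dhcp_server
      - pxe_stack

Swapping a role for a custom one is then a one-line change in this list.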

The following roles are available:

Core

  • hosts_file: generate the /etc/hosts file
  • nic: configure network interfaces for static configuration
  • firewall: configure firewall
  • dhcp_server: generate DHCP server configuration
  • dns_server/dns_client: generate DNS server or client configuration
  • log_server/log_client: generate rsyslog server or client configuration
  • pxe_stack: deploy and generate the whole PXE stack
    • TFTP based on atftp (with verbosity on by default)
    • HTTP based on Apache, serving all repositories and images
    • Custom iPXE ROMs with a menu, to handle all exotic hardware
      • EFI / Legacy
      • PXE / iPXE native ROM
      • CD or USB boot to PXE when no native PXE is available (or the BIOS/EFI is badly designed)
  • repositories_server/repositories_client: generate repositories server or client configuration
  • nfs_server/nfs_client: generate nfs server or client configuration
  • set_hostname: set hostname of the target
  • ssh_master: generate management SSH configuration
  • ssh_slave: add SSH public keys on targets
  • time: generate time server or client configuration
  • ansible: install the Ansible package
  • display_tuning: add some tuning for bash and screen
  • conman: generate conman configuration, for IPMI console logging

Advanced Core

  • advanced_dhcp_server: generate advanced DHCP server configuration, for more complex clusters

Addons

  • clone: allows cloning systems with Clonezilla
  • diskless: allows booting remote hosts in diskless mode (NFS or livenet)
  • prometheus_server/prometheus_client: generate a Prometheus configuration for monitoring
  • slurm: generate Slurm job scheduler server and client configuration
  • openldap_server/openldap_client: generate a very simple (and insecure) LDAP configuration
  • ofed/ofed_sm: install OFED for InfiniBand support
  • clustershell: generate ClusterShell groups
  • report: generate a small report on the stack inventory, for debugging purposes
  • users_basic: add users on hosts, using a very basic useradd

Specific advanced features

The stack also has a few native mechanisms under the hood. We needed these features ourselves, so we added them.

Multi Icebergs

This feature is sometimes called multi-islands in HPC or Cloud contexts.

BlueBanquise can split a cluster into independent and autonomous groups of equipment. These subclusters are called icebergs; each iceberg is managed by its own group of management nodes and is isolated from the others (though it can be reached through the interconnect, if one exists and access is requested). A rough inventory sketch is given after the list below.

This feature is mainly used:
  • When there is a need to separate parts of the cluster (our case: providing dedicated small sub-clusters to some customers).
  • When there is a need to run an old cluster alongside a new one, keeping both in production.
  • When there is a need to scale to a very large number of hosts.
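
As announced above, icebergs can be pictured as plain Ansible groups, each with its own management node (a hypothetical sketch; the actual group naming used by the stack may differ):

  # icebergs.yml - hypothetical split of a cluster into two icebergs
  iceberg1:
    hosts:
      management1:    # manages hosts of iceberg1 only
      c001:
  iceberg2:
    hosts:
      management2:    # manages hosts of iceberg2 only
      c101: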