---
title: How to deploy and verify an OpenNebula Hosted Cloud on Bare Metal servers
excerpt: Deploy a certified OpenNebula Hosted Cloud using OVHcloud bare metal servers and Ansible playbooks.
updated: 2025-11-18
---

## Introduction

OpenNebula is a powerful, open-source cloud management platform (CMP) designed to manage and provision virtualized infrastructure.

It acts as an orchestrator, turning physical infrastructure into a managed IaaS (Infrastructure as a Service) cloud that is accessible via a unified control interface. It supports major hypervisors and allows for hybrid deployments by integrating with public cloud providers, such as OVHcloud.

OpenNebula supports the deployment of its solution on OVHcloud infrastructure, resulting in a cloud environment that is validated as part of the **OpenNebula Cloud-Ready Certification Program**.

To streamline this process, OpenNebula provides [a set of **Ansible playbooks** called Hosted Cloud OVHcloud](https://github.com/OpenNebula/hosted-cloud-ovhcloud) for automated deployment and verification, which this guide relies on.
| 16 | + |
| 17 | +## Objective |
| 18 | + |
| 19 | +This guide details the complete path to creating an OpenNebula Hosted Cloud on OVHcloud, including the custom architecture and hardware specifications. |
| 20 | + |
| 21 | +Following this guide, you will be able to: |
| 22 | + |
| 23 | +- Request necessary resources using your OVHcloud account. |
| 24 | +- Prepare the Ansible deployment project with all required configuration. |
| 25 | +- Perform OpenNebula deployment over these resources. |
| 26 | +- Check the operation with an automated verification procedure. |
| 27 | + |
## Requirements

- **Two** [dedicated servers](/links/bare-metal/bare-metal) from the Scale or High Grade ranges;
- An active [vRack](/links/network/vrack) service;
- A public block of Additional IP addresses, sized according to your needs;
- Access to the [OVHcloud Control Panel](/links/manager).
## Instructions

### Setting up your infrastructure <a name="Infrastructure_Provisioning"></a>

First, install Ubuntu 24.04 LTS on both of your dedicated servers by following the instructions in [this guide](/pages/bare_metal_cloud/dedicated_servers/getting-started-with-dedicated-server).

Next, add both servers to your vRack service by following step 2 of [this vRack configuration guide](/pages/bare_metal_cloud/dedicated_servers/vrack_configuring_on_dedicated_server).

Finally, from the OVHcloud Control Panel, open the `Network`{.action} section, then select `Public IP Addresses`{.action} under **Public Network**. Once you have reached the IP management interface, click on the `Order IPs`{.action} button near the top of the page. Choose the IP version, then select the vRack your servers are attached to, and the region where those servers are hosted.

### Collecting the infrastructure configuration

To proceed with the OpenNebula deployment, extract the required parameters that the deployment automation relies upon. **Update the inventory values** for the [Hosted Cloud OVHcloud repository](https://github.com/OpenNebula/hosted-cloud-ovhcloud) using all collected settings to match the provisioned infrastructure. For further details on the automated deployment procedure, refer to the [Initial setup](#Initial_setup) section.

Each server is equipped with two network adapters dedicated to public connectivity and two adapters for private connectivity. The two interfaces within each segment will be **bonded** using the [OVHcloud Link Aggregation](https://www.ovhcloud.com/en/bare-metal/ovhcloud-link-aggregation/) service.

The **Public network bond** is exclusively for OpenNebula service management, including cluster deployment, administration via the Sunstone Web UI or the OpenNebula CLI, and connectivity between the Front-end and Virtualization hosts.

The **Private network bond** provides networking for virtual servers, leveraging the OVHcloud vRack. This bond supports private networking, segmented using 802.1Q VLANs, and enables public IP addressing for virtual servers. To assign public addresses to virtual servers, a dedicated IP range must be purchased and routed via the vRack. This setup ensures cluster management traffic is isolated from virtual machine networking.

#### Bare Metal network settings <a name="Bare_metal_network_settings"></a>

From your OVHcloud Control Panel, navigate to `Bare Metal Cloud`{.action}, then select `Dedicated servers`{.action}. Open the management pages of both dedicated servers and collect the highlighted parameters:

| Description | Variable Names | Comment |
|------------------------------------------|----------------------------------------------------------|--------------------------------------------------------|
| Frontend/KVM Host IP | `ansible_host` | public_aggregation IP address |
| Frontend/KVM Host public NICs MAC addresses | `public_nics.macaddress` | public_aggregation MAC addresses |
| Frontend/KVM Host private NICs MAC addresses | `private_nics.macaddress` | private_aggregation MAC addresses |

To collect the network adapter names, connect to your dedicated server and execute the `ip address` command:

| Description | Variable Names | Comment |
|------------------------------------------|---------------------------------------------------------------|--------------------------------------------------------|
| Frontend/KVM Host public NICs name | `public_nics.name` | public_aggregation network adapter names |
| Frontend/KVM Host private NICs name | `private_nics.name` | private_aggregation network adapter names |
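The adapter names and MAC addresses can also be pulled in one pass from the `ip` output. The sketch below is a minimal example, assuming a Linux host where `ip -o link` is available; the interface name shown in the comment is hypothetical, and the helper name is ours, not part of the playbooks.

```shell
#!/usr/bin/env bash
# Print one "name mac" pair per physical NIC. Run this on each dedicated
# server, then match the MACs against the values shown in the OVHcloud
# Control Panel to tell public NICs from private ones.
list_nics() {
    # $1: text output of `ip -o link` (passed in so the function is testable)
    printf '%s\n' "$1" | awk '{
        name = $2; sub(/:$/, "", name)               # drop the trailing ":"
        for (i = 1; i <= NF; i++)
            if ($i == "link/ether") print name, $(i + 1)
    }'
}

# e.g. prints lines such as "eno1 aa:bb:cc:dd:ee:01" (name is hypothetical)
list_nics "$(ip -o link show 2>/dev/null || true)"
```

Loopback and other virtual interfaces are skipped automatically, since only `link/ether` entries carry a hardware MAC address.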

#### vRack network settings

**Public IP addresses**

The public IP block ordered in the previous steps allows you to attach direct public connectivity to virtual servers. For a public IP range deployed on the vRack, the first, penultimate, and last addresses in any given IP block are always reserved for the network address, network gateway, and network broadcast respectively. This means that the first usable address is the second address in the block, as shown below:

```
46.105.135.96 Reserved: Network address
46.105.135.97 First usable IP
46.105.135.98
46.105.135.99
46.105.135.100
46.105.135.101
46.105.135.102
46.105.135.103
46.105.135.104
46.105.135.105
46.105.135.106
46.105.135.107
46.105.135.108
46.105.135.109 Last usable IP
46.105.135.110 Reserved: Network gateway
46.105.135.111 Reserved: Network broadcast
```

Declare a bridge network for public IP addresses using all usable addresses of your IP range:

| Description | Variable Names | Comment |
|------------------------------------------|---------------------------------------------------------------|--------------------------------------------------------|
| VMs Public IP Range, first IP | `vn.vm_public.template.AR.IP` | Second IP address in the range: 46.105.135.97 in the example |
| VMs Public IP Range, number of usable addresses | `vn.vm_public.template.AR.SIZE` | Range size minus the three reserved addresses: 13 in the example |
| VMs Public DNS | `vn.vm_public.template.DNS` | 213.186.33.99: OVHcloud DNS |
| VMs Public NETWORK MASK | `vn.vm_public.template.NETWORK_MASK` | IP range netmask: 255.255.255.240 for a /28 network, for example |
| VMs Public GATEWAY | `vn.vm_public.template.GATEWAY` | Penultimate IP address in the range (network gateway): 46.105.135.110 in the example |
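The reserved-address arithmetic can be double-checked with a few lines of shell before filling in the inventory. This is a minimal sketch assuming an IPv4 block that fits within the last octet (such as the /28 in the example); the helper name is ours, not part of the playbooks.

```shell
#!/usr/bin/env bash
# Derive AR.IP, AR.SIZE and GATEWAY for a vRack public block, given the
# network address and prefix length. For prefixes of /24 or longer, only
# the last octet changes, which keeps the arithmetic simple.
public_range() {
    # $1: network address (e.g. 46.105.135.96), $2: prefix length (e.g. 28)
    local base="${1%.*}" first_octet="${1##*.}" prefix="$2"
    local size=$(( 1 << (32 - prefix) ))                  # total addresses in the block
    echo "AR.IP=${base}.$(( first_octet + 1 ))"           # second address: first usable
    echo "AR.SIZE=$(( size - 3 ))"                        # minus network, gateway, broadcast
    echo "GATEWAY=${base}.$(( first_octet + size - 2 ))"  # penultimate address
}

public_range 46.105.135.96 28
# AR.IP=46.105.135.97
# AR.SIZE=13
# GATEWAY=46.105.135.110
```

The output matches the example block above: 16 addresses in a /28, of which 13 are usable once the three reserved addresses are removed.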

**Private IP addresses**

On the private network bond, deploy one 802.1Q virtual network per private network. For each virtual network, create a section in the Ansible inventory file, and declare the VLAN ID, IP range and netmask.

| Description | Variable Names | Comment |
|------------------------------------------|---------------------------------------------------------------|--------------------------------------------------------|
| VMs Private VLAN_ID | `vn.vm_vlan*.template.VLAN_ID` | VLAN ID, between 1 and 4096 |
| VMs Private IP Range, first IP | `vn.vm_vlan*.template.AR.IP` | First IP address, for example 10.1.10.100 |
| VMs Private IP Range, number of usable addresses | `vn.vm_vlan*.template.AR.SIZE` | Number of usable addresses, for example 50 for the IP range 10.1.10.100-10.1.10.149 |
| VMs Private NETWORK MASK | `vn.vm_vlan*.template.NETWORK_MASK` | IP range netmask: 255.255.255.0 for a /24 network, for example |
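As a quick sanity check on the private range values, the `AR.SIZE` count can be computed from the first and last IP of the range. A minimal sketch, again assuming the range stays within the last octet of a /24; the function name is ours.

```shell
#!/usr/bin/env bash
# Count the addresses in a contiguous private range such as
# 10.1.10.100-10.1.10.149 (both IPs must share the same first three octets).
range_size() {
    # $1: first IP of the range, $2: last IP of the range
    echo $(( ${2##*.} - ${1##*.} + 1 ))
}

range_size 10.1.10.100 10.1.10.149    # prints 50, matching the example above
```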

### Initial setup <a name="Initial_setup"></a>

The deployment uses the **OpenNebula Hosted Cloud OVHcloud repository**.

The high-level deployment steps are:

1. **Clone** the deployment repository;
2. **Install** the dependencies listed in the [Requirements section](https://github.com/OpenNebula/hosted-cloud-ovhcloud?tab=readme-ov-file#requirements);
3. **Update** the inventory parameters in the repository with the configuration gathered above;
4. **Launch the deployment commands**:
    * `make pre-tasks-ovhcloud`: Patch the Ubuntu kernel and perform the networking setup.
    * `make deployment`: Deploy OpenNebula.
    * `make validation`: Validate the automated deployment.
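The three Make targets above can be chained so that a failure stops the run before the next stage starts. The target names come from the repository's Makefile; the wrapper itself is only a sketch of ours, not part of the playbooks.

```shell
#!/usr/bin/env bash
# Run the deployment stages in order, aborting on the first failure.
set -euo pipefail

run_stages() {
    # $1: command used to run each target ("make" in real use, a stub in tests)
    local runner="$1" target
    for target in pre-tasks-ovhcloud deployment validation; do
        "$runner" "$target"
    done
}

# In the cloned hosted-cloud-ovhcloud directory, you would call:
# run_stages make
```

Because of `set -e`, a non-zero exit from any stage (for example a failed validation) aborts the script immediately rather than continuing with a half-deployed cloud.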

### Adding dedicated servers to an active infrastructure

To extend the cloud with new servers:

1. **Provision** the new host as detailed in the [Setting up your infrastructure](#Infrastructure_Provisioning) section;
2. **Collect** the necessary configuration parameters, especially the bare-metal network settings;
3. **Re-execute** the deployment and verification commands from the [Initial setup](#Initial_setup) section.

## User guide

The following section explains how to access a Hosted OpenNebula Cloud deployment via the web UI, and how to instantiate and access a virtual machine.

This guide provides the basic steps. If you need a more detailed guide, please refer to the [OpenNebula public documentation](https://docs.opennebula.io/7.0/product/virtual_machines_operation/virtual_machines/).

### Create a template from the OpenNebula Public Marketplace

From the OpenNebula Public Marketplace, search for and import the virtual server templates needed for your infrastructure. The example below imports the Debian 13 "Trixie" template.

### Start a new virtual server

Based on the imported template, create a new virtual server instance.

You can adjust the disk storage capacity and the CPU and RAM allocation during this setup phase.

Finally, attach network adapters. In this example, we provide a public IP to this server, based on the public IP range we created during the OpenNebula automated deployment. Without any override, an available IP address from the public range will be picked for this interface.

### Check connectivity

You can use the virtual server console to check the deployment and the server connectivity. Set up the root password using virtual machine context variables.
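One way to prepare such context variables is to keep them in a small template fragment applied to the VM template before instantiation. The sketch below writes one, using the standard OpenNebula contextualization variables `PASSWORD` and `SSH_PUBLIC_KEY`; the password value is a placeholder, and the file name is ours.

```shell
#!/usr/bin/env bash
# Write a template context fragment that sets the root password at boot.
# PASSWORD and SSH_PUBLIC_KEY are standard OpenNebula contextualization
# variables; $USER[SSH_PUBLIC_KEY] expands to the key stored on the
# OpenNebula user. The password below is a placeholder, not a recommendation.
cat > context-fragment.tpl <<'EOF'
CONTEXT = [
  NETWORK = "YES",
  PASSWORD = "ChangeMe-NotForProduction",
  SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]" ]
EOF
```

The contextualization packages shipped in the Marketplace images read these variables at boot and apply them to the guest.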

Run your virtual server console to validate the deployment.

### Update virtual server settings

Use the virtual server settings to update the server configuration. This example shows the hot-plugging of a private network adapter to the virtual server.

### Destroy the virtual server

Finally, as a cleanup step, terminate the virtual server by clicking the red “Trash can” icon.

## Go further

- If you need more information about Ansible, you may consult the [Ansible documentation](https://docs.ansible.com/).
- The OpenNebula deployment repository used and referenced in this guide is [Hosted Cloud OVHcloud](https://github.com/OpenNebula/hosted-cloud-ovhcloud).

Join our [community of users](/links/community).