2. OPNFV Fuel Installation Instruction¶
2.1. Abstract¶
This document describes how to install the Hunter release of OPNFV when using Fuel as a deployment tool, covering its usage, limitations, dependencies and required system resources.
This is a unified document for both x86_64 and aarch64 architectures. All information is common for both architectures except when explicitly stated.
2.2. Introduction¶
This document provides guidelines on how to install and configure the Hunter release of OPNFV when using Fuel as a deployment tool, including required software and hardware configurations.
Although the available installation options provide a high degree of freedom in how the system is set up (architecture, services, features, etc.), such permutations may not result in an OPNFV compliant reference architecture. This document provides a step-by-step guide that results in an OPNFV Hunter compliant deployment.
The audience of this document is assumed to have good knowledge of networking and Unix/Linux administration.
Before starting the installation of the Hunter release of OPNFV, using Fuel as a deployment tool, some planning must be done.
2.3. Preparations¶
Prior to installation, a number of deployment-specific parameters must be collected:
- Provider sub-net and gateway information
- Provider VLAN information
- Provider DNS addresses
- Provider NTP addresses
- How many nodes and what roles you want to deploy (Controllers, Computes)
This information will be needed for the configuration procedures provided in this document.
2.4. Hardware Requirements¶
Minimum hardware requirements depend on the deployment type.
Warning
If baremetal nodes are present in the cluster, the architecture of the nodes running the control plane (kvm01, kvm02, kvm03 for HA scenarios, respectively ctl01, gtw01, odl01 for noHA scenarios) and the jumpserver architecture must be the same (either x86_64 or aarch64).
Tip
The compute nodes may have different architectures, but extra configuration might be required for scheduling VMs on the appropriate host. This use-case is not tested in OPNFV CI, so it is considered experimental.
2.4.1. Hardware Requirements for virtual Deploys¶
The following minimum hardware requirements must be met for the virtual installation of Hunter using Fuel:
HW Aspect | Requirement |
---|---|
1 Jumpserver | A physical node (also called Foundation Node) that will host a Salt Master container and each of the VM nodes in the virtual deploy |
CPU | Minimum 1 socket with Virtualization support |
RAM | Minimum 32GB/server (depending on VNF workload) |
Disk | Minimum 100GB (SSD or 15krpm SCSI highly recommended) |
2.4.2. Hardware Requirements for baremetal Deploys¶
The following minimum hardware requirements must be met for the baremetal installation of Hunter using Fuel:
HW Aspect | Requirement |
---|---|
1 Jumpserver | A physical node (also called Foundation Node) that hosts the Salt Master and MaaS containers |
# of nodes | Minimum 5 |
CPU | Minimum 1 socket with Virtualization support |
RAM | Minimum 16GB/server (depending on VNF workload) |
Disk | Minimum 256GB 10kRPM spinning disks |
Networks | Minimum 4; these can be allocated to a single NIC or spread out over multiple NICs |
Power mgmt | All targets need to have power management tools that allow rebooting the hardware (e.g. IPMI); see the example below |
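As an illustration only (the exact tooling and credentials depend on the hardware vendor and BMC setup), an IPMI-capable target can typically be checked and power-cycled remotely with ipmitool; the BMC address, user and password below are placeholders:
jenkins@jumpserver:~$ ipmitool -I lanplus -H <bmc_address> -U <bmc_user> -P <bmc_password> power status
jenkins@jumpserver:~$ ipmitool -I lanplus -H <bmc_address> -U <bmc_user> -P <bmc_password> power cycle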
2.4.3. Hardware Requirements for hybrid (baremetal + virtual) Deploys¶
The following minimum hardware requirements must be met for the hybrid installation of Hunter using Fuel:
HW Aspect | Requirement |
---|---|
1 Jumpserver | A physical node (also called Foundation Node) that
hosts the Salt Master and MaaS containers, and
each of the virtual nodes defined in PDF |
# of nodes | Note Depends on If the control plane is virtualized, minimum baremetal requirements are:
If the computes are virtualized, minimum baremetal requirements are:
Warning
Note
|
CPU | Minimum 1 socket with Virtualization support |
RAM | Minimum 16GB/server (Depending on VNF work load) |
Disk | Minimum 256GB 10kRPM spinning disks |
Networks | Same as for baremetal deployments |
Power mgmt | Same as for baremetal deployments |
2.4.4. Help with Hardware Requirements¶
Calculate hardware requirements:
When choosing the hardware on which you will deploy your OpenStack environment, you should think about:
- CPU – Consider the number of virtual machines that you plan to deploy in your cloud environment and the CPUs per virtual machine.
- Memory – Depends on the amount of RAM assigned per virtual machine and the controller node.
- Storage – Depends on the local drive space per virtual machine, remote volumes that can be attached to a virtual machine, and object storage.
- Networking – Depends on the chosen network topology, the network bandwidth per virtual machine, and network storage.
2.5. Top of the Rack (TOR) Configuration Requirements¶
The switching infrastructure provides connectivity for the OPNFV infrastructure operations, tenant networks (East/West) and provider connectivity (North/South); it also provides needed connectivity for the Storage Area Network (SAN).
To avoid traffic congestion, it is strongly suggested that three physically separated networks are used: one physical network for administration and control, one physical network for tenant private and public networks, and one physical network for SAN.
The switching connectivity can (but does not need to) be fully redundant, in such case it comprises a redundant 10GE switch pair for each of the three physically separated networks.
Warning
The physical TOR switches are not automatically configured from the OPNFV Fuel reference platform. All the networks involved in the OPNFV infrastructure, as well as the provider networks and the private tenant VLANs, need to be manually configured.
Manual configuration of the Hunter hardware platform should be carried out according to the OPNFV Pharos Specification.
2.6. OPNFV Software Prerequisites¶
Note
All prerequisites described in this chapter apply to the jumpserver node.
2.6.1. OS Distribution Support¶
The Jumpserver node should be pre-provisioned with an operating system, according to the OPNFV Pharos specification.
OPNFV Fuel has been validated by CI using the following distributions installed on the Jumpserver:
- CentOS 7 (recommended by Pharos specification);
- Ubuntu Xenial 16.04;
aarch64 notes
For an aarch64 Jumpserver, the libvirt minimum required version is 3.x; 3.5 or newer is highly recommended.
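As a quick sanity check (not part of the official procedure), the libvirt version already installed on the jumpserver can be queried with virsh:
jenkins@jumpserver:~$ virsh --version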
Tip
CentOS 7 (aarch64) distro-provided packages are already new enough.
Warning
Ubuntu 16.04 (arm64) distro packages are too old and 3rd party repositories should be used.
For convenience, Armband provides a DEB repository holding all the required packages.
To add and enable the Armband repository on an Ubuntu 16.04 system, create a new sources list file /etc/apt/sources.list.d/armband.list with the following contents:
jenkins@jumpserver:~$ cat /etc/apt/sources.list.d/armband.list
deb http://linux.enea.com/mcp-repos/rocky/xenial rocky-armband main
jenkins@jumpserver:~$ sudo apt-key adv --keyserver keys.gnupg.net \
--recv 798AB1D1
jenkins@jumpserver:~$ sudo apt-get update
2.6.2. OS Distribution Packages¶
By default, the deploy.sh script will automatically install the required distribution package dependencies on the Jumpserver, so the end user does not have to manually install them before starting the deployment. This includes Python, QEMU, libvirt etc.
See also
To disable automatic package installation (and/or upgrade) during deployment, check out the -P deploy argument.
Warning
The install script expects libvirt to be already running on the Jumpserver. In case libvirt packages are missing, the script will install them; but depending on the OS distribution, the user might have to start the libvirt daemon service manually, then run the deploy script again. Therefore, it is recommended to install libvirt explicitly on the Jumpserver before the deployment.
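If libvirt has just been installed and its daemon is not yet running, it can usually be started via systemd. Note that the service name differs between distributions (e.g. libvirtd on CentOS 7, libvirt-bin on Ubuntu 16.04), so the lines below are only an illustrative sketch:
jenkins@jumpserver:~$ sudo systemctl start libvirtd && sudo systemctl enable libvirtd        # CentOS 7
jenkins@jumpserver:~$ sudo systemctl start libvirt-bin && sudo systemctl enable libvirt-bin  # Ubuntu 16.04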
While not mandatory, upgrading the kernel on the Jumpserver is also highly recommended.
jenkins@jumpserver:~$ sudo apt-get install \
linux-image-generic-hwe-16.04-edge libvirt-bin
jenkins@jumpserver:~$ sudo reboot
2.6.3. User Requirements¶
The user running the deploy script on the Jumpserver should belong to the sudo and libvirt groups, and have passwordless sudo access.
Note
Throughout this documentation, we will use the jenkins username for this role.
The following example adds the user jenkins to the required groups and configures passwordless sudo:
jenkins@jumpserver:~$ sudo usermod -aG sudo jenkins
jenkins@jumpserver:~$ sudo usermod -aG libvirt jenkins
jenkins@jumpserver:~$ sudo reboot
jenkins@jumpserver:~$ groups
jenkins sudo libvirt
jenkins@jumpserver:~$ sudo visudo
...
%jenkins ALL=(ALL) NOPASSWD:ALL
2.6.4. Local Artifact Storage¶
The folder containing the temporary deploy artifacts (/home/jenkins/tmpdir in the examples below) needs to have mode 777 in order for libvirt to be able to use it.
jenkins@jumpserver:~$ mkdir -p -m 777 /home/jenkins/tmpdir
2.6.5. Network Configuration¶
Relevant Linux bridges should also be pre-configured for certain networks, depending on the type of the deployment.
Bridge necessity based on deploy type:
Network | Linux Bridge | virtual | baremetal | hybrid |
---|---|---|---|---|
PXE/admin | admin_br | absent | present | present |
management | mgmt_br | optional | optional, recommended, required for functest, yardstick | optional, recommended, required for functest, yardstick |
internal | int_br | optional | optional | present |
public | public_br | optional | optional, recommended, useful for debugging | optional, recommended, useful for debugging |
Tip
IP addresses should be assigned to the created bridge interfaces (not to one of its ports).
Warning
The PXE/admin bridge (admin_br) must have an IP address.
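As a minimal, non-persistent sketch (the 192.168.11.0/24 subnet is only an example; production jumpservers normally define the bridges persistently in the distribution's network configuration), the PXE/admin bridge could be created and addressed like this:
jenkins@jumpserver:~$ sudo ip link add name admin_br type bridge
jenkins@jumpserver:~$ sudo ip addr add 192.168.11.1/24 dev admin_br
jenkins@jumpserver:~$ sudo ip link set admin_br up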
2.6.6. Changes deploy.sh Will Perform to Jumpserver OS¶
Warning
The install script will alter the Jumpserver sysconf and disable net.bridge.bridge-nf-call.
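For reference, the current values of the affected sysctls can be inspected before and after the deployment; this is only a hedged illustration assuming the usual bridge-nf-call keys (they require the br_netfilter module to be loaded):
jenkins@jumpserver:~$ sudo sysctl net.bridge.bridge-nf-call-iptables \
                                   net.bridge.bridge-nf-call-ip6tables \
                                   net.bridge.bridge-nf-call-arptables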
Warning
On Jumpservers running Ubuntu with AppArmor enabled, when deploying on baremetal nodes (i.e. when MaaS is used), the install script will disable certain conflicting AppArmor profiles that interfere with MaaS services inside the container, e.g. ntpd, named, dhcpd, tcpdump.
Warning
The install script will automatically install and/or upgrade the required distribution package dependencies on the Jumpserver, unless explicitly asked not to (via the -P deploy argument).
2.7. OPNFV Software Configuration (XDF)¶
New in version 5.0.0.
Changed in version 7.0.0.
Unlike the old approach based on OpenStack Fuel, OPNFV Fuel no longer has a graphical user interface for configuring the environment, but instead switched to OPNFV-specific descriptor files that we will generically call XDF:
- PDF (POD Descriptor File) provides an abstraction of the target POD with all its hardware characteristics and required parameters;
- IDF (Installer Descriptor File) extends the PDF with POD related parameters required by the OPNFV Fuel installer;
- SDF (Scenario Descriptor File, not yet adopted) will later replace embedded scenario definitions, describing the roles and layout of the cluster environment for a given reference architecture;
Tip
For virtual deployments, if the public network will be accessed from outside the jumpserver node, a custom PDF/IDF pair is required for customizing idf.net_config.public and idf.fuel.jumphost.bridges.public.
Note
For OPNFV CI PODs, as well as simple (no public bridge) virtual deployments, PDF/IDF files are already available in the pharos git repo. They can be used as a reference for user-supplied inputs or to kick off a deployment right away.
PDF/IDF availability based on deploy type:
LAB/POD | virtual | baremetal | hybrid |
---|---|---|---|
OPNFV CI POD | available in pharos git repo (e.g. ericsson-virtual1) | available in pharos git repo (e.g. lf-pod2, arm-pod5) | N/A, as currently there are 0 hybrid PODs in OPNFV CI |
local or new POD | user-supplied | user-supplied | user-supplied |
Tip
Both PDF and IDF structures are modelled as yaml schemas in the pharos git repo, also included as a git submodule in OPNFV Fuel.
See also
mcp/scripts/pharos/config/pdf/pod1.schema.yaml
mcp/scripts/pharos/config/pdf/idf-pod1.schema.yaml
Schema files are also used during the initial deployment phase to validate the user-supplied input PDF/IDF files.
2.7.1. PDF¶
The Pod Descriptor File is a hardware description of the POD infrastructure. The information is modeled under a yaml structure.
The hardware description covers the jumphost node and a set of nodes for the cluster target boards. For each node the following characteristics are defined:
- Node parameters including CPU features and total memory;
- A list of available disks;
- Remote management parameters;
- Network interfaces list including name, MAC address, link speed, advanced features;
See also
A reference file with the expected yaml structure is available at:
mcp/scripts/pharos/config/pdf/pod1.yaml
For more information on PDF, see the OPNFV PDF Wiki Page.
Warning
The fixed IPs defined in PDF are ignored by the OPNFV Fuel installer script, which will instead assign addresses based on the network ranges defined in IDF.
For more details on the way IP addresses are assigned, see OPNFV Fuel User Guide.
2.7.2. PDF/IDF Role (hostname) Mapping¶
Upcoming SDF support will introduce a series of possible node roles. Until that happens, the role mapping logic is hardcoded, based on node index in PDF/IDF (which should also be in sync, i.e. the parameters of the n-th cluster node defined in PDF should be the n-th node in IDF structures too).
Node index | HA scenario | noHA scenario |
---|---|---|
1st | kvm01 | ctl01 |
2nd | kvm02 | gtw01 |
3rd | kvm03 | odl01/unused |
4th, 5th, ... | cmp001, cmp002, ... | cmp001, cmp002, ... |
Tip
To switch node role(s), simply reorder the node definitions in PDF/IDF (make sure to keep them in sync).
2.7.3. IDF¶
The Installer Descriptor File extends the PDF with POD related parameters required by the installer. This information may differ for each installer type and is not considered part of the POD infrastructure.
2.7.3.1. idf.* Overview¶
The IDF file must be named after the PDF it attaches to, with the prefix idf-.
See also
A reference file with the expected yaml structure is available at:
mcp/scripts/pharos/config/pdf/idf-pod1.yaml
The file follows a yaml structure and at least two sections (idf.net_config and idf.fuel) are expected.
The idf.fuel section defines several sub-sections required by the OPNFV Fuel installer:
- jumphost: List of bridge names for each network on the Jumpserver;
- network: List of device name and bus address info of all the target nodes. The order must be aligned with the order defined in the PDF file. The OPNFV Fuel installer relies on the IDF model to set up all node NICs by defining the expected device name and bus address;
- maas: Defines the target nodes commission timeout and deploy timeout;
- reclass: Defines compute parameter tuning, including huge pages, CPU pinning and other DPDK settings;
---
idf:
version: 0.1 # fixed, the only supported version (mandatory)
net_config: # POD network configuration overview (mandatory)
oob: ... # mandatory
admin: ... # mandatory
mgmt: ... # mandatory
storage: ... # mandatory
private: ... # mandatory
public: ... # mandatory
fuel: # OPNFV Fuel specific section (mandatory)
jumphost: # OPNFV Fuel jumpserver bridge configuration (mandatory)
bridges: # Bridge name mapping (mandatory)
admin: 'admin_br' # <PXE/admin bridge name> or ~
mgmt: 'mgmt_br' # <mgmt bridge name> or ~
private: ~ # <private bridge name> or ~
public: 'public_br' # <public bridge name> or ~
trunks: ... # Trunked networks (optional)
maas: # MaaS timeouts (optional)
timeout_comissioning: 10 # commissioning timeout in minutes
timeout_deploying: 15 # deploy timeout in minutes
network: # Cluster nodes network (mandatory)
interface_mtu: 1500 # Cluster-level MTU (optional)
ntp_strata_host1: 1.pool.ntp.org # NTP1 (optional)
ntp_strata_host2: 0.pool.ntp.org # NTP2 (optional)
node: ... # List of per-node cfg (mandatory)
reclass: # Additional params (mandatory)
node: ... # List of per-node cfg (mandatory)
2.7.3.2. idf.net_config¶
idf.net_config was introduced as a mechanism to map all the usual cluster networks (internal and provider networks, e.g. mgmt) to their VLAN tags, CIDR and a physical interface index (used to match networks to interface names, like eth0, on the cluster nodes).
Warning
The mapping between one network segment (e.g. mgmt) and its CIDR/VLAN is not configurable on a per-node basis, but instead applies to all the nodes in the cluster.
For each network, the following parameters are currently supported:
idf.net_config.* key | Details |
---|---|
interface | The index of the interface to use for this net. For each cluster node (if the network is present), OPNFV Fuel will determine the underlying physical interface by picking the element at this index from the interface list defined for that node in idf.fuel.network.node. Note: The interface index should be the same on all cluster nodes; this can be achieved by ordering the interfaces accordingly in idf.fuel.network.node. |
vlan | VLAN tag (integer) or the string native. Required for each network. |
ip-range | When specified, all cluster IPs dynamically allocated by OPNFV Fuel for that network will be assigned inside this range. Note: For now, only the range start address is used. |
network | Network segment address. Required for each network, except oob. |
mask | Network segment mask. Required for each network, except oob. |
gateway | Gateway IP address. Required for public, N/A for others. |
dns | List of DNS IP addresses. Required for public, N/A for others. |
Sample public network configuration block:
idf:
net_config:
public:
interface: 1
vlan: native
network: 10.0.16.0
ip-range: 10.0.16.100-10.0.16.253
mask: 24
gateway: 10.0.16.254
dns:
- 8.8.8.8
- 8.8.4.4
hybrid POD notes
Interface indexes must be the same for all nodes, which is problematic when mixing virtual nodes (where all interfaces were untagged so far) with baremetal nodes (where interfaces usually carry tagged VLANs).
Tip
To achieve this, a special jumpserver network layout is used: mgmt, storage, private and public are trunked together in a single trunk bridge:
- without decapsulating them (if they are also tagged on baremetal); a trunk.<vlan_tag> interface should be created on the jumpserver for each tagged VLAN so the kernel won't drop the packets;
- by decapsulating them first (if they are also untagged on baremetal nodes);
The trunk bridge is then used for all bridges OPNFV Fuel is aware of in idf.fuel.jumphost.bridges, e.g. for a trunk where only the mgmt network is not decapsulated:
idf:
fuel:
jumphost:
bridges:
admin: 'admin_br'
mgmt: 'trunk'
private: 'trunk'
public: 'trunk'
trunks:
# mgmt network is not decapsulated for jumpserver infra nodes,
# to align with the VLAN configuration of baremetal nodes.
mgmt: True
Warning
The Linux kernel limits the name of network interfaces to 16 characters. Extra care is required when choosing bridge names, so appending the VLAN tag won't lead to an interface name length exceeding that limit.
2.7.3.3. idf.fuel.network¶
idf.fuel.network allows mapping the cluster networks (e.g. mgmt) to their physical interface name (e.g. eth0) and bus address on the cluster nodes.
idf.fuel.network.node should be a list with the same number (and order) of elements as the cluster nodes defined in PDF, e.g. the second cluster node in PDF will use the interface name and bus address defined in the second list element.
Below is a sample configuration block for a single node with two interfaces:
idf:
fuel:
network:
node:
# Ordered-list, index should be in sync with node index in PDF
- interfaces:
# Ordered-list, index should be in sync with interface index
# in PDF
- 'ens3'
- 'ens4'
busaddr:
# Bus-info reported by `ethtool -i ethX`
- '0000:00:03.0'
- '0000:00:04.0'
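The bus addresses expected above can be read directly on each cluster node with ethtool; a minimal example (the node and interface names are just an illustration, matching the sample block above):
jenkins@cmp001:~$ ethtool -i ens3 | grep bus-info
bus-info: 0000:00:03.0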
2.7.3.4. idf.fuel.reclass¶
idf.fuel.reclass provides a way of overriding default values in the reclass cluster model. This currently covers strictly compute parameter tuning, including huge pages, CPU pinning and other DPDK settings.
idf.fuel.reclass.node should be a list with the same number (and order) of elements as the cluster nodes defined in PDF, e.g. the second cluster node in PDF will use the parameters defined in the second list element.
The following parameters are currently supported:
idf.fuel.reclass.node.* key | Details |
---|---|
nova_cpu_pinning | List of CPU cores nova will be pinned to. Note: Currently disabled. |
compute_hugepages_size | Size of each persistent huge page. Usual values are 2M and 1G. |
compute_hugepages_count | Total number of persistent huge pages. |
compute_hugepages_mount | Mount point to use for huge pages. |
compute_kernel_isolcpu | List of CPU cores that are isolated from the Linux scheduler. |
compute_dpdk_driver | Kernel module to provide userspace I/O support. |
compute_ovs_pmd_cpu_mask | Hexadecimal mask of CPUs to run DPDK Poll-mode drivers. |
compute_ovs_dpdk_socket_mem | Amount of huge page memory in MB to be used by the OVS-DPDK daemon per NUMA node; the set size equals the NUMA node count, elements are comma-separated. |
compute_ovs_dpdk_lcore_mask | Hexadecimal mask of the DPDK lcore parameter used to run DPDK processes. |
compute_ovs_memory_channels | Number of memory channels to be used. |
dpdk0_driver | NIC driver to use for the physical network interface. |
dpdk0_n_rxq | Number of RX queues. |
Sample compute_params configuration block (for a single node):
idf:
fuel:
reclass:
node:
- compute_params:
common: &compute_params_common
compute_hugepages_size: 2M
compute_hugepages_count: 2048
compute_hugepages_mount: /mnt/hugepages_2M
dpdk:
<<: *compute_params_common
compute_dpdk_driver: uio
compute_ovs_pmd_cpu_mask: "0x6"
compute_ovs_dpdk_socket_mem: "1024"
compute_ovs_dpdk_lcore_mask: "0x8"
compute_ovs_memory_channels: "2"
dpdk0_driver: igb_uio
dpdk0_n_rxq: 2
2.7.4. SDF¶
Scenario Descriptor Files are not yet implemented in the OPNFV Fuel Hunter release. Instead, embedded OPNFV Fuel scenario files are locally available in mcp/config/scenario.
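To see which scenario definitions ship with the release, the directory can simply be listed from the OPNFV Fuel repository root (output omitted here):
jenkins@jumpserver:~/fuel$ ls mcp/config/scenario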
2.8. OPNFV Software Installation and Deployment¶
This section describes the process of installing all the components needed to deploy the full OPNFV reference platform stack across a server cluster.
2.8.1. Deployment Types¶
Warning
OPNFV releases prior to Hunter used to rely on the virtual keyword being part of the POD name (e.g. ericsson-virtual2) to configure the deployment type as virtual. Otherwise baremetal was implied.
Gambia and newer releases are more flexible towards supporting a mix of baremetal and virtual nodes, so the type of deployment is now automatically determined based on the cluster node types in PDF:
baremetal nodes in PDF | virtual nodes in PDF | Deployment type |
---|---|---|
yes | no | baremetal |
yes | yes | hybrid |
no | yes | virtual |
Based on that, the deployment script will later enable/disable certain extra nodes (e.g. mas01) and/or STATE files (e.g. maas).
2.8.2. HA vs noHA¶
High availability of OpenStack services is determined based on the scenario name, e.g. os-nosdn-nofeature-noha vs os-nosdn-nofeature-ha.
Tip
HA scenarios imply a virtualized control plane (VCP) for the OpenStack services running on the 3 kvm nodes.
See also
An experimental feature argument (-N) is supported by the deploy script for disabling VCP, although it might not be supported by all scenarios and is not being continuously validated by OPNFV CI/CD.
Warning
virtual HA deployments are not officially supported, due to poor performance and various limitations of nested virtualization on both x86_64 and aarch64 architectures.
Tip
virtual HA deployments without VCP are supported, but highly experimental.
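For illustration only (scenario support for -N varies and such deployments are not validated by OPNFV CI), a virtual HA deploy with the VCP disabled could reuse the ericsson-virtual1 POD definition and simply add -N to the usual deploy command:
# experimental: x86_64 virtual HA cluster without VCP
jenkins@jumpserver:~/fuel$ ci/deploy.sh -l ericsson \
                           -p virtual1 \
                           -s os-nosdn-nofeature-ha \
                           -N \
                           -D \
                           -S /home/jenkins/tmpdir |& tee deploy.log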
Feature | HA scenario | noHA scenario |
---|---|---|
VCP (Virtualized Control Plane) | yes, disabled with -N | no |
OpenStack APIs SSL | yes | no |
Storage | GlusterFS | NFS |
2.8.3. Steps to Start the Automatic Deploy¶
These steps are common for virtual, baremetal or hybrid deploys, x86_64, aarch64 or mixed (x86_64 and aarch64):
- Clone the OPNFV Fuel code from gerrit
- Checkout the Hunter release tag
- Start the deploy script
Note
The deployment uses the OPNFV Pharos project as input (PDF and IDF files) for hardware and network configuration of all current OPNFV PODs.
When deploying a new POD, one may pass the -b flag to the deploy script to override the path for the labconfig directory structure containing the PDF and IDF (<URI to configuration repo ...> is the absolute path to a local or remote directory structure, populated similar to the pharos git repo, i.e. PDF/IDF reside in a subdirectory called labs/<lab_name>).
jenkins@jumpserver:~$ git clone https://git.opnfv.org/fuel
jenkins@jumpserver:~$ cd fuel
jenkins@jumpserver:~/fuel$ git checkout opnfv-8.1.0
jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> \
-p <pod_name> \
-b <URI to configuration repo containing the PDF/IDF files> \
-s <scenario> \
-D \
-S <Storage directory for deploy artifacts> |& tee deploy.log
Tip
Besides the basic options, there are other recommended deploy arguments:
- use the -D option to enable debug info
- use the -S option to point to a tmp dir where the disk images are saved; the deploy artifacts will be re-used on subsequent (re)deployments
- use |& tee to save the deploy log to a file
2.8.4. Typical Cluster Examples¶
Common cluster layouts usually fall into one of the cases described below, categorized by deployment type (baremetal, virtual or hybrid) and high availability (HA or noHA).
A simplified overview of the steps deploy.sh will automatically perform is:
- create a Salt Master Docker container on the jumpserver, which will drive the rest of the installation;
- baremetal or hybrid only: create a MaaS container node, which will be leveraged using Salt to handle OS provisioning on the baremetal nodes;
- leverage Salt to install & configure OpenStack;
Note
A Docker network mcpcontrol is always created for the initial connection of the infrastructure containers (cfg01, mas01) on the Jumphost.
Warning
A single cluster deployment per jumpserver node is currently supported, regardless of its type (virtual, baremetal or hybrid).
Once the deployment is complete, the following should be accessible:
Resource | HA scenario | noHA scenario |
---|---|---|
Horizon (OpenStack Dashboard) | https://<prx public VIP> | http://<ctl VIP>:8078 |
SaltStack Deployment Documentation | http://<prx public VIP>:8090 | N/A |
See also
For more details on locating and importing the generated SSL certificate, see OPNFV Fuel User Guide.
2.8.4.1. virtual noHA POD¶
In the following figure there are two generic examples of virtual deploys, each on a separate Jumphost node, both behind the same TOR switch:
- Jumphost 1 has only virsh bridges (created by the deploy script);
- Jumphost 2 has a mix of Linux (manually created) and libvirt managed bridges (created by the deploy script);
cfg01 | Salt Master Docker container |
ctl01 | Controller VM |
gtw01 | Gateway VM with neutron services (DHCP agent, L3 agent, metadata agent etc.) |
odl01 | VM on which ODL runs (for scenarios deployed with ODL) |
cmp001, cmp002 | Compute VMs |
Tip
If external access to the public network is not required, there is little to no motivation to create a custom PDF/IDF set for a virtual deployment.
Instead, the existing virtual POD definitions in the pharos git repo can be used as-is:
- ericsson-virtual1 for x86_64;
- arm-virtual2 for aarch64;
# example deploy cmd for an x86_64 virtual cluster
jenkins@jumpserver:~/fuel$ ci/deploy.sh -l ericsson \
-p virtual1 \
-s os-nosdn-nofeature-noha \
-D \
-S /home/jenkins/tmpdir |& tee deploy.log
2.8.4.2. baremetal noHA POD¶
Warning
These scenarios are not tested in OPNFV CI, so they are considered experimental.
cfg01 | Salt Master Docker container |
mas01 | MaaS Node Docker container |
ctl01 | Baremetal controller node |
gtw01 | Baremetal Gateway with neutron services (DHCP agent, L3 agent, metadata, etc.) |
odl01 | Baremetal node on which ODL runs (for scenarios deployed with ODL, otherwise unused) |
cmp001, cmp002 | Baremetal Computes |
Tenant VM | VM running in the cloud |
2.8.4.3. baremetal HA POD¶
cfg01 | Salt Master Docker container |
mas01 | MaaS Node Docker container |
kvm01, kvm02, kvm03 | Baremetals which hold the VMs with controller functions |
prx01, prx02 | Proxy VMs for Nginx |
msg01, msg02, msg03 | RabbitMQ Service VMs |
dbs01, dbs02, dbs03 | MySQL service VMs |
mdb01, mdb02, mdb03 | Telemetry VMs |
odl01 | VM on which OpenDaylight runs (for scenarios deployed with ODL) |
cmp001, cmp002 | Baremetal Computes |
Tenant VM | VM running in the cloud |
# x86_x64 baremetal deploy on pod2 from Linux Foundation lab (lf-pod2)
jenkins@jumpserver:~/fuel$ ci/deploy.sh -l lf \
-p pod2 \
-s os-nosdn-nofeature-ha \
-D \
-S /home/jenkins/tmpdir |& tee deploy.log
# aarch64 baremetal deploy on pod5 from Enea ARM lab (arm-pod5)
jenkins@jumpserver:~/fuel$ ci/deploy.sh -l arm \
-p pod5 \
-s os-nosdn-nofeature-ha \
-D \
-S /home/jenkins/tmpdir |& tee deploy.log
2.8.4.4. hybrid noHA POD¶
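Since there are currently no hybrid PODs in OPNFV CI, a user-supplied PDF/IDF pair is required (see the -b deploy argument described earlier). The command below is only a sketch: the lab name (mylab), POD name (pod1) and configuration repo URI are hypothetical placeholders:
# hybrid noHA deploy with user-supplied PDF/IDF (hypothetical lab/POD names)
jenkins@jumpserver:~/fuel$ ci/deploy.sh -l mylab \
                           -p pod1 \
                           -b file:///home/jenkins/labconfig \
                           -s os-nosdn-nofeature-noha \
                           -D \
                           -S /home/jenkins/tmpdir |& tee deploy.log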
2.8.5. Automatic Deploy Breakdown¶
When an automatic deploy is started, the following operations are performed sequentially by the deploy script:
Deploy stage | Details |
---|---|
Argument Parsing | Environment variables and command line arguments passed to deploy.sh are interpreted |
Distribution Package Installation | Install and/or configure mandatory requirements on the jumpserver node (see OS Distribution Packages above) |
Patch Apply | Local patches are applied, allowing OPNFV Fuel to alter upstream repository contents before consuming them |
SSH RSA Keypair Generation | If not already present, an RSA keypair is generated on the jumpserver node; its public key is later used for key-based SSH logins on the cluster nodes |
j2 Expansion | j2 (Jinja2) templates are expanded based on the PDF/IDF input |
Jumpserver Requirements Check | Basic validation that common jumpserver requirements are satisfied, e.g. PXE/admin is a Linux bridge if baremetal nodes are defined in the PDF |
Infrastructure Setup | The jumpserver infrastructure required by the deployment (e.g. the Salt Master container) is set up; all steps in this stage apply only to the jumpserver node |
STATE Files | Based on deployment type, scenario and other parameters, a set of STATE files is selected and applied in order (the table below lists all current STATE files) |
Log Collection | Contents of /var/log are recursively gathered from all the nodes, then archived together for later inspection |
2.8.5.1. STATE Files Overview¶
STATE file | Targets involved and main intended action |
---|---|
virtual_init | |
maas | Note: Skipped if no baremetal nodes are defined in the PDF |
baremetal_init | kvm, cmp: OS install, config |
dpdk | cmp: configure OVS-DPDK |
networks | ctl: create OpenStack networks |
neutron_gateway | gtw01: configure Neutron gateway |
opendaylight | odl01: install & configure ODL |
openstack_noha | cluster nodes: install OpenStack without HA |
openstack_ha | cluster nodes: install OpenStack with HA |
virtual_control_plane | Note: Skipped if VCP is disabled (e.g. via the -N deploy argument) |
tacker | ctl: install & configure Tacker |
2.9. Release Notes¶
Please refer to the OPNFV Fuel Release Notes article.
2.10. References¶
For more information on the OPNFV Hunter 8.1 release, please see: