The following article documents the steps required to deploy Red Hat OpenStack (RHOS) on top of VMware ESXi. There are some prerequisites that must be in place before you start; without them the deployment will fail. I started the deployment process and failed many times until I got it right, but once you understand the logic behind these requirements it is really easy to deploy.

Note: this is not a complete deployment guide, but it will get you through the deployment and cover the parameters that are not documented in Red Hat's official documentation.

Let me start with the basics. To understand Red Hat OpenStack you should check http://openstack.org/ as well as the official Red Hat documentation. It is very well written, but it gives little information when it comes to deploying in a virtual environment: https://access.redhat.com/documentation/en/red-hat-openstack-platform/

The following diagram shows the networks required by RHOS for a successful deployment. While you can make changes and tweaks to this, it is close to the optimum layout for the deployment; I will try to cover other scenarios in later posts.

We need two types of VLANs. The first is a tagged VLAN, which will be used to pass all traffic (internal API communication, Storage network, Storage Management, Tenant network, External/Provider network). The other is a native VLAN, which will mainly be used for the deployment traffic itself, including PXE booting, PXE TFTP, etc.

To achieve that you need to create a network environment where those VLANs are defined on the layer 2 switch. You will also need routing for the External network.

 

To build your own environment you will need some decent hardware, which you can reuse anyway for other tests and deployments. For my lab I used the following:

  • 1 workstation, an HP Z840 with 2 Xeon CPUs (24 cores), used as the ESXi virtualization layer
  • 96 GB RAM
  • 1 Gb standard Ethernet card
  • D-Link unmanaged switch (used to speed up copying data from my laptop to the ESXi host)
  • The workstation holds 3 disks in my case: 1 SSD, which I use for the nodes, and 2 Seagate Barracuda HDDs (2 TB for virtual machines and 1 TB for OS images, OVAs and OVFs)
  • Some cables

The following diagram shows the end result of the lab that you need to create:

The first router on top is the home router that is connected to the internet. You can use it to reach your setup from the internet, or even provide your home network as an external network to the OpenStack cloud. I placed a switch behind this router to make copying images and data faster from the laptop I am using for the deployment, and I also plugged the Z840 workstation mentioned above into that switch through its NIC. You can use an equivalent machine, either custom built or a ready-made vendor box; the more memory and CPU the better (the deployment may fail if you have too few resources, but one advantage of using VMware ESXi is that you can create a resource pool and share the resources, and you can also overcommit, since we are not going to run an actual workload in this scenario).

Deploying the ESXi:

To deploy ESXi you will need an external USB stick of 16 GB (more than enough). To keep it simple, just watch the following YouTube video on how to install ESXi on an external stick. I did it that way so I can tear down the lab and recreate it again while keeping the virtual machines on the hard disks unharmed, and to save space for images and virtual machines.

 

Note: you can choose any ESXi version and it should work fine, but I used this version because it is the latest one supported by Red Hat.

Configuring the Virtual Network:

Log in to ESXi using the vSphere Client.

Now configure 3 vSwitches like the image below, with 3 port groups.

The settings for the port groups are very important; the deployment will not work without the settings below (a scripted equivalent follows this list):

  1. The Deployment Network: should not be tagged on the port group, and must allow promiscuous mode (promiscuous mode here is required for PXE booting and PXE TFTP).
  2. The ALL VLAN Network: should be tagged as a trunk port group (VLAN: 4095) and must allow promiscuous mode (promiscuous mode is required to pass the multiple virtualization layers across networks; if it is not set here, the deployment will fail when the deploy script validates the network: the ping check fails and the script exits).
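
If you prefer the ESXi shell over the vSphere Client, the same port-group settings can be scripted. This is only a minimal sketch; the vSwitch and port-group names (vSwitch1, Deployment, ALL VLAN) are my own assumptions, so adjust them to match your own layout:

# create the two port groups on an existing standard vSwitch (vSwitch1 is an assumed name)
esxcli network vswitch standard portgroup add --portgroup-name="Deployment" --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name="ALL VLAN" --vswitch-name=vSwitch1

# untagged (native VLAN) for the deployment network, trunk (4095) for the ALL VLAN network
esxcli network vswitch standard portgroup set --portgroup-name="Deployment" --vlan-id=0
esxcli network vswitch standard portgroup set --portgroup-name="ALL VLAN" --vlan-id=4095

# allow promiscuous mode at the vSwitch level (port groups inherit it by default)
esxcli network vswitch standard policy security set --vswitch-name=vSwitch1 --allow-promiscuous=true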

 

Configuring the Virtual Router

In this lab I didn't want to expose my home LAN (192.168.1.0/24) as an external network; I wanted to create another VLAN-tagged network to use as the external/provider network for the nodes. That is why I needed another router, and it makes perfect sense to use a virtual router. There are many open-source virtual routers available for free; I used VyOS as it is easy to use and configure. Check out the VyOS website https://vyos.io/ and its documentation and YouTube channel to learn more: https://www.youtube.com/channel/UCEjJx6j87szaiqtKDrMVb2Q

Below are the commands I used to configure the vRouter:

vyos@vyos:~$ show configuration commands
set interfaces ethernet eth0 address '192.168.1.254/24'
set interfaces ethernet eth0 description 'LAN & Internet Network'
set interfaces ethernet eth0 hw-id '00:0c:29:4e:3d:9d'
set interfaces ethernet eth1 address '192.0.2.254/24'
set interfaces ethernet eth1 description 'Deployment Network'
set interfaces ethernet eth1 hw-id '00:0c:29:4e:3d:a7'
set interfaces ethernet eth2 address '10.30.30.1/24'
set interfaces ethernet eth2 description 'All Vlan Network'
set interfaces ethernet eth2 hw-id '00:0c:29:4e:3d:b1'
set interfaces ethernet eth2 vif 1 address '172.17.17.1/24'
set interfaces ethernet eth2 vif 100 address '10.99.100.1/24'
set interfaces ethernet eth2 vif 101 address '10.99.101.1/24'
set interfaces ethernet eth2 vif 102 address '10.99.102.1/24'
set interfaces ethernet eth2 vif 103 address '10.99.103.1/24'
set interfaces loopback 'lo'
set nat source rule 300 outbound-interface 'eth0'
set nat source rule 300 source address '172.17.17.0/24'
set nat source rule 300 translation address 'masquerade'
set service ssh port '22'
set system config-management commit-revisions '20'
set system console device ttyS0 speed '9600'
set system gateway-address '192.168.1.1'
set system login user vyos authentication encrypted-password '$1$0dBjmb8/$CG3kyUT/sVeKdkSBF91Dm/'
set system login user vyos authentication plaintext-password ''
set system login user vyos level 'admin'
set system name-server '8.8.8.8'
set system name-server '8.8.4.4'
set system ntp server '0.pool.ntp.org'
set system ntp server '1.pool.ntp.org'
set system ntp server '2.pool.ntp.org'
set system package repository community components 'main'
set system package repository community distribution 'helium'
set system package repository community url 'http://packages.vyos.net/vyos'
set system syslog global facility all level 'notice'
set system syslog global facility protocols level 'debug'

You can then verify the interfaces from configuration mode:

vyos@vyos:~$ configure
[edit]
vyos@vyos# show interfaces
ethernet eth0 {
    address 192.168.1.254/24
    description "LAN & Internet Network"
    hw-id 00:0c:29:4e:3d:9d
}
ethernet eth1 {
    address 192.0.2.254/24
    description "Deployment Network"
    hw-id 00:0c:29:4e:3d:a7
}
ethernet eth2 {
    address 10.30.30.1/24
    description "All Vlan Network"
    hw-id 00:0c:29:4e:3d:b1
    vif 1 {
        address 172.17.17.1/24
    }
    vif 100 {
        address 10.99.100.1/24
    }
    vif 101 {
        address 10.99.101.1/24
    }
    vif 102 {
        address 10.99.102.1/24
    }
    vif 103 {
        address 10.99.103.1/24
    }
}
loopback lo {
}

 

And show nat to display the NAT configuration. VLAN vif 1 (172.17.17.0/24) is the only network that is NATed out to the internet and to the local LAN of my home lab:

vyos@vyos# show nat
source {
    rule 300 {
        outbound-interface eth0
        source {
            address 172.17.17.0/24
        }
        translation {
            address masquerade
        }
    }
}
[edit]
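
If you adapt this configuration interactively rather than loading it, remember to apply and persist it using the standard VyOS workflow:

vyos@vyos# commit
vyos@vyos# save
vyos@vyos# exit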

Now that we have set up ESXi and the router, it is time to start the deployment. You can do some testing first by creating a Linux machine and connecting it to the ALL VLAN port group. Don't give this machine an IP from 10.30.30.0/24; just create a VLAN interface in the 172.17.17.0/24 network and ping the internet or google.com. You should get a response and have internet access.
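
A minimal sketch of that test, assuming the VM's NIC on the ALL VLAN port group shows up as ens192 and 172.17.17.10 is a free address (both are assumptions, adjust to your VM):

# create a VLAN 1 sub-interface and address it from the external/provider network
ip link add link ens192 name ens192.1 type vlan id 1
ip addr add 172.17.17.10/24 dev ens192.1
ip link set dev ens192.1 up
ip route add default via 172.17.17.1

# test reachability through the VyOS NAT rule
ping -c 3 8.8.8.8
ping -c 3 google.com   # needs a name server in /etc/resolv.conf, e.g. 8.8.8.8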

Create a local Repo for RHOS

Now you have 2 options when deploying RHOS. In a real environment you will definitely have a subscription from Red Hat, which basically provides the repos required to deploy RHOS. In this lab I didn't have a subscription, so what you can do is download the required repos with reposync and create a local repo on one of the Linux machines connected to the Deployment or ALL VLAN network. Once you create the local repo, you can configure the director machine to fetch packages from that source instead of subscribing to Red Hat. To run reposync you still need a Red Hat subscription with full access, so I used a local repo machine created by my friend and connected it to my environment.
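
For reference, this is roughly how such a local repo is built on a machine that does have a valid subscription. A sketch only, assuming the repo is served by Apache from /var/www/html/repo; repeat the reposync/createrepo pair for each channel listed in the my.repo file further down:

# on a subscribed RHEL 7 box
subscription-manager repos --enable=rhel-7-server-openstack-10-rpms
yum install -y yum-utils createrepo httpd

# mirror the channel and build the repo metadata
reposync --gpgcheck -l --repoid=rhel-7-server-openstack-10-rpms --download_path=/var/www/html/repo/
createrepo /var/www/html/repo/rhel-7-server-openstack-10-rpms/

systemctl enable httpd
systemctl start httpd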

Starting Director Installation

Create a Red Hat Enterprise Linux 7.3 machine using the installation media available online. You can create a new account with Red Hat if you don't have one, which gives you a 60-day evaluation to try the repos, sync them and do everything you need for this lab. You can also use the same account to reposync the Red Hat OpenStack, Red Hat Enterprise Linux and Ceph repos to the local repo machine.

When creating this virtual machine, create 3 interfaces:

  1. ens192: connected to the Deployment Network, with the IP 192.0.2.1/24 and no gateway.
  2. ens224: don't assign any IP.
  3. ens224.VLAN1 (type: VLAN): connected to the ALL VLAN Network, with the IP 172.17.17.2/24 and gateway 172.17.17.1.

Note: the same network settings can be used for the local repo machine as well; a sample ifcfg file for the VLAN interface is shown below.
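
A minimal sketch of the VLAN sub-interface configuration on RHEL 7, assuming the parent NIC is ens224, the provider VLAN ID is 1 (matching vif 1 on the VyOS router) and using the director's addresses from the list above; the file name and DEVICE follow the standard /etc/sysconfig/network-scripts naming:

# /etc/sysconfig/network-scripts/ifcfg-ens224.1
DEVICE=ens224.1
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.17.17.2
PREFIX=24
GATEWAY=172.17.17.1
DNS1=8.8.8.8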

To point the machine at the local repo, create a file named my.repo in /etc/yum.repos.d/ and paste the following, which includes the repos required for this deployment:

[MesLocal-server-openstack]
name=vlab Repo-RHOPS10
baseurl=http://rhnlocal.messeiry.local/repo/rhel-7-server-openstack-10-rpms
enabled=1
gpgcheck=0

[MesLocal-server-extras]
name=vlab Repo-extras
baseurl=http://rhnlocal.messeiry.local/repo/rhel-7-server-extras-rpms
enabled=1
gpgcheck=0

[MesLocal-server-rh-common]
name=vlab Repo-common
baseurl=http://rhnlocal.messeiry.local/repo/rhel-7-server-rh-common-rpms
enabled=1
gpgcheck=0

[MesLocal-server]
name=vlab Repo-server
baseurl=http://rhnlocal.messeiry.local/repo//rhel-7-server-rpms
enabled=1
gpgcheck=0

[MesLocal-server-satellite-tools]
name=vlab Repo-sattool
baseurl=http://rhnlocal.messeiry.local/repo/rhel-7-server-satellite-tools-6.2-rpms
enabled=1
gpgcheck=0

[MesLocal-ha-for-rhel-7-server]
name=vlab Repo-HA
baseurl=http://rhnlocal.messeiry.local/repo/rhel-ha-for-rhel-7-server-rpms
enabled=1
gpgcheck=0

[MesLocal-rhel-7-server-rhceph-2-mon-rpms]
name=vlab Repo-CEPH-MON
baseurl=http://rhnlocal.messeiry.local/repo/rhel-7-server-rhceph-2-mon-rpms
enabled=1
gpgcheck=0

[MesLocal-rhel-7-server-rhceph-2-tools-rpms]
name=vlab Repo-CEPH-tools
baseurl=http://rhnlocal.messeiry.local/repo/rhel-7-server-rhceph-2-tools-rpms
enabled=1
gpgcheck=0

[MesLocal-cf-me-5.7-for-rhel-7-rpms]
name=vlab Repo-CF
baseurl=http://rhnlocal.messeiry.local/repo/cf-me-5.7-for-rhel-7-rpms
enabled=0
gpgcheck=0

[MesLocal-rhel-7-server-nfv-rpms]
name=vlab Repo-NFV
baseurl=http://rhnlocal.messeiry.local/repo/rhel-7-server-nfv-rpms
enabled=1
gpgcheck=0

[MesLocal-Juniper-Contrail3.2.2]
name=vlab Repo-Juniper-SDN
baseurl=http://rhnlocal.messeiry.local/repo/contrail-install-packages-3.2.2.0-33-redhat73newton
enabled=1
gpgcheck=0

 

Try installing a package, for example vim, to make sure the repos are working:

$ yum install vim

If you get a GPG check error, run the following command and then try installing vim again:

$ rpm --import /etc/pki/rpm-gpg/RPM*

Also install crudini, which we will use later to edit configuration files:

$ yum install crudini

now you are all set to Introspect & deploy RHOS 🙂

 

Creating Nodes as Virtual Machines:

  • All nodes will have 3 interfaces: 1 connected to the Deployment Network and 2 connected to the ALL VLAN Network.
  • All nodes will have 100 GB of storage (preferably SSD); you can use thin provisioning.
  • The Ceph nodes can have additional disks, but when you do so you will need to include them in storage-environment.yaml if required (a sample snippet follows this list).
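
A minimal sketch of that addition, assuming the extra disks show up as /dev/sdb and /dev/sdc on the Ceph nodes (the device names and the empty journal settings are assumptions; check the Red Hat Ceph-for-the-overcloud guide for the full syntax):

parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/sdb': {}
      '/dev/sdc': {}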

 

Introspecting RHOS Nodes (3 Controllers, 3 Compute, 3 Ceph Storage)

RHOS can use many power drivers for starting and shutting down nodes; in our case we will need to use fake_pxe for both introspection and deployment.

This driver provides a method to use bare metal devices without power management. This means the director does not control the registered bare metal devices, and as such it requires manual control of power at certain points in the introspection and deployment processes.

This option is available for testing and evaluation purposes only. It is not recommended for Red Hat Enterprise Linux OpenStack Platform enterprise environments.

  • This driver does not use any authentication details because it does not control power management.
  • Edit the /etc/ironic/ironic.conf file and add fake_pxe to the enabled_drivers option to enable this driver, and don't forget to restart the Ironic services (sudo systemctl restart openstack-ironic-conductor openstack-ironic-api); a scripted version of this edit follows the list.
  • When performing introspection on the nodes, manually power them on after running the openstack baremetal introspection bulk start command.
  • When performing the overcloud deployment, check the node status with the ironic node-list command. Wait until the node status changes from deploying to deploy wait-callback and then manually power on the nodes.
  • After the overcloud provisioning process completes, reboot the nodes. To check the completion of provisioning, check the node status with the ironic node-list command, wait until the node status changes to active, then manually reboot all overcloud nodes.
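
Since crudini was installed earlier, the ironic.conf edit above can be scripted. A minimal sketch, assuming you want to keep the stock pxe_ipmitool driver enabled alongside fake_pxe:

# run on the director (undercloud) as the stack user
sudo crudini --set /etc/ironic/ironic.conf DEFAULT enabled_drivers pxe_ipmitool,fake_pxe
sudo systemctl restart openstack-ironic-conductor openstack-ironic-api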

There is no need to bother configuring the instackenv file with all the resource counts such as CPU, memory and disks. Just keep it simple, as in the sample below; when introspection starts it will fill in all that information in the node definitions.

{
  "nodes": [
    {
      "mac": ["00:50:56:39:D1:5D"],
      "capabilities": "profile:control,boot_option:local",
      "pm_type": "fake_pxe"
    },
    {
      "mac": ["00:50:56:2F:AB:31"],
      "capabilities": "profile:control,boot_option:local",
      "pm_type": "fake_pxe"
    },
    {
      "mac": ["00:50:56:22:FD:98"],
      "capabilities": "profile:control,boot_option:local",
      "pm_type": "fake_pxe"
    },
    {
      "mac": ["00:50:56:2A:60:F7"],
      "capabilities": "profile:compute,boot_option:local",
      "pm_type": "fake_pxe"
    },
    {
      "mac": ["00:50:56:30:5B:29"],
      "capabilities": "profile:compute,boot_option:local",
      "pm_type": "fake_pxe"
    },
    {
      "mac": ["00:50:56:22:94:D3"],
      "capabilities": "profile:compute,boot_option:local",
      "pm_type": "fake_pxe"
    },
    {
      "mac": ["00:50:56:20:5B:EC"],
      "capabilities": "profile:ceph-storage,boot_option:local",
      "pm_type": "fake_pxe"
    },
    {
      "mac": ["00:50:56:25:9B:6C"],
      "capabilities": "profile:ceph-storage,boot_option:local",
      "pm_type": "fake_pxe"
    },
    {
      "mac": ["00:50:56:21:25:37"],
      "capabilities": "profile:ceph-storage,boot_option:local",
      "pm_type": "fake_pxe"
    }
  ]
}
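
For completeness, the commands to register and introspect the nodes with this file follow the standard RHOSP 10 flow; the file name instackenv.json is an assumption, adjust it to whatever you saved it as, and remember to power on the VMs manually once introspection starts:

[stack@undercloud ~]$ source ~/stackrc
[stack@undercloud ~]$ openstack baremetal import --json ~/instackenv.json
[stack@undercloud ~]$ openstack baremetal configure boot
[stack@undercloud ~]$ openstack baremetal introspection bulk start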

 

Deploying the RHOS Nodes  (3 Controllers, 3 Compute, 3 Ceph Storage)

Follow the Red Hat guide through all the steps in order.

I configured my network-environment.yaml this way:

 

As I am using 3 interfaces, 2 of them are connected to the ALL VLAN Network and I want them configured as a bond.

I used the settings below for my controller.yaml:

heat_template_version: 2015-04-30

description: >
  Software Config to drive os-net-config with 2 bonded nics on a bridge
  with VLANs attached for the controller role.

parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ExternalIpSubnet:
    default: ''
    description: IP address/subnet on the external network
    type: string
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal API network
    type: string
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage mgmt network
    type: string
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  ManagementIpSubnet: # Only populated when including environments/network-management.yaml
    default: ''
    description: IP address/subnet on the management network
    type: string
  BondInterfaceOvsOptions:
    default: 'bond_mode=active-backup'
    description: The ovs_options string for the bond interface. Set things like
                 lacp=active and/or bond_mode=balance-slb using this option.
    type: string
    constraints:
      - allowed_pattern: "^((?!balance.tcp).)*$"
        description: |
          The balance-tcp bond mode is known to cause packet loss and
          should not be used in BondInterfaceOvsOptions.
  ExternalNetworkVlanID:
    default: 10
    description: Vlan ID for the external network traffic.
    type: number
  InternalApiNetworkVlanID:
    default: 20
    description: Vlan ID for the internal_api network traffic.
    type: number
  StorageNetworkVlanID:
    default: 30
    description: Vlan ID for the storage network traffic.
    type: number
  StorageMgmtNetworkVlanID:
    default: 40
    description: Vlan ID for the storage mgmt network traffic.
    type: number
  TenantNetworkVlanID:
    default: 50
    description: Vlan ID for the tenant network traffic.
    type: number
  ManagementNetworkVlanID:
    default: 60
    description: Vlan ID for the management network traffic.
    type: number
  ControlPlaneDefaultRoute: # Override this via parameter_defaults
    description: The default route of the control plane network.
    type: string
  ExternalInterfaceDefaultRoute:
    default: '10.0.0.1'
    description: default route for the external network
    type: string
  ManagementInterfaceDefaultRoute: # Commented out by default in this template
    default: unset
    description: The default route of the management network.
    type: string
  ControlPlaneSubnetCidr: # Override this via parameter_defaults
    default: '24'
    description: The subnet CIDR of the control plane network.
    type: string
  DnsServers: # Override this via parameter_defaults
    default: []
    description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
    type: comma_delimited_list
  EC2MetadataIp: # Override this via parameter_defaults
    description: The IP address of the EC2 metadata server.
    type: string

resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            -
              type: interface
              name: nic1
              use_dhcp: false
              addresses:
                -
                  ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                -
                  ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
            -
              type: ovs_bridge
              name: {get_input: bridge_name}
              dns_servers: {get_param: DnsServers}
              members:
                -
                  type: ovs_bond
                  name: bond1
                  ovs_options: {get_param: BondInterfaceOvsOptions}
                  members:
                    -
                      type: interface
                      name: nic2
                      primary: true
                    -
                      type: interface
                      name: nic3
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: ExternalNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: ExternalIpSubnet}
                  routes:
                    -
                      default: true
                      next_hop: {get_param: ExternalInterfaceDefaultRoute}
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: InternalApiNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: InternalApiIpSubnet}
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: StorageNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: StorageIpSubnet}
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: StorageMgmtNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: StorageMgmtIpSubnet}
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: TenantNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: TenantIpSubnet}
                # Uncomment when including environments/network-management.yaml
                # If setting default route on the Management interface, comment
                # out the default route on the External interface. This will
                # make the External API unreachable from remote subnets.
                #-
                #  type: vlan
                #  device: bond1
                #  vlan_id: {get_param: ManagementNetworkVlanID}
                #  addresses:
                #    -
                #      ip_netmask: {get_param: ManagementIpSubnet}
                #  routes:
                #    -
                #      default: true
                #      next_hop: {get_param: ManagementInterfaceDefaultRoute}

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}

 

 

And for the compute.yaml:

heat_template_version: 2015-04-30

description: >
  Software Config to drive os-net-config with 2 bonded nics on a bridge
  with VLANs attached for the compute role.

parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ExternalIpSubnet:
    default: ''
    description: IP address/subnet on the external network
    type: string
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal API network
    type: string
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage mgmt network
    type: string
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  ManagementIpSubnet: # Only populated when including environments/network-management.yaml
    default: ''
    description: IP address/subnet on the management network
    type: string
  BondInterfaceOvsOptions:
    default: ''
    description: The ovs_options string for the bond interface. Set things like
                 lacp=active and/or bond_mode=balance-slb using this option.
    type: string
    constraints:
      - allowed_pattern: "^((?!balance.tcp).)*$"
        description: |
          The balance-tcp bond mode is known to cause packet loss and
          should not be used in BondInterfaceOvsOptions.
  ExternalNetworkVlanID:
    default: 0
    description: Vlan ID for the external network traffic.
    type: number
  InternalApiNetworkVlanID:
    default: 20
    description: Vlan ID for the internal_api network traffic.
    type: number
  StorageNetworkVlanID:
    default: 30
    description: Vlan ID for the storage network traffic.
    type: number
  StorageMgmtNetworkVlanID:
    default: 40
    description: Vlan ID for the storage mgmt network traffic.
    type: number
  TenantNetworkVlanID:
    default: 50
    description: Vlan ID for the tenant network traffic.
    type: number
  ManagementNetworkVlanID:
    default: 60
    description: Vlan ID for the management network traffic.
    type: number
  ControlPlaneSubnetCidr: # Override this via parameter_defaults
    default: '24'
    description: The subnet CIDR of the control plane network.
    type: string
  ControlPlaneDefaultRoute: # Override this via parameter_defaults
    description: The default route of the control plane network.
    type: string
  ExternalInterfaceDefaultRoute: # Not used by default in this template
    default: '10.0.0.1'
    description: The default route of the external network.
    type: string
  ManagementInterfaceDefaultRoute: # Commented out by default in this template
    default: unset
    description: The default route of the management network.
    type: string
  DnsServers: # Override this via parameter_defaults
    default: []
    description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
    type: comma_delimited_list
  EC2MetadataIp: # Override this via parameter_defaults
    description: The IP address of the EC2 metadata server.
    type: string

resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            -
              type: interface
              name: nic1
              use_dhcp: false
              dns_servers: {get_param: DnsServers}
              addresses:
                -
                  ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                -
                  ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
                -
                  default: true
                  next_hop: {get_param: ControlPlaneDefaultRoute}
            -
              type: ovs_bridge
              name: {get_input: bridge_name}
              members:
                -
                  type: ovs_bond
                  name: bond1
                  ovs_options: {get_param: BondInterfaceOvsOptions}
                  members:
                    -
                      type: interface
                      name: nic2
                      primary: true
                    -
                      type: interface
                      name: nic3
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: InternalApiNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: InternalApiIpSubnet}
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: StorageNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: StorageIpSubnet}
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: TenantNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: TenantIpSubnet}
                # Uncomment when including environments/network-management.yaml
                # If setting default route on the Management interface, comment
                # out the default route on the Control Plane.
                #-
                #  type: vlan
                #  device: bond1
                #  vlan_id: {get_param: ManagementNetworkVlanID}
                #  addresses:
                #    -
                #      ip_netmask: {get_param: ManagementIpSubnet}
                #  routes:
                #    -
                #      default: true
                #      next_hop: {get_param: ManagementInterfaceDefaultRoute}

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}

 

 

And for the ceph-storage.yaml:

heat_template_version: 2015-04-30

description: >
  Software Config to drive os-net-config with 2 bonded nics on a bridge
  with VLANs attached for the ceph storage role.

parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ExternalIpSubnet:
    default: ''
    description: IP address/subnet on the external network
    type: string
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal API network
    type: string
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage mgmt network
    type: string
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  ManagementIpSubnet: # Only populated when including environments/network-management.yaml
    default: ''
    description: IP address/subnet on the management network
    type: string
  BondInterfaceOvsOptions:
    default: ''
    description: The ovs_options string for the bond interface. Set things like
                 lacp=active and/or bond_mode=balance-slb using this option.
    type: string
    constraints:
      - allowed_pattern: "^((?!balance.tcp).)*$"
        description: |
          The balance-tcp bond mode is known to cause packet loss and
          should not be used in BondInterfaceOvsOptions.
  ExternalNetworkVlanID:
    default: 0
    description: Vlan ID for the external network traffic.
    type: number
  InternalApiNetworkVlanID:
    default: 20
    description: Vlan ID for the internal_api network traffic.
    type: number
  StorageNetworkVlanID:
    default: 30
    description: Vlan ID for the storage network traffic.
    type: number
  StorageMgmtNetworkVlanID:
    default: 40
    description: Vlan ID for the storage mgmt network traffic.
    type: number
  TenantNetworkVlanID:
    default: 50
    description: Vlan ID for the tenant network traffic.
    type: number
  ManagementNetworkVlanID:
    default: 60
    description: Vlan ID for the management network traffic.
    type: number
  ControlPlaneSubnetCidr: # Override this via parameter_defaults
    default: '24'
    description: The subnet CIDR of the control plane network.
    type: string
  ControlPlaneDefaultRoute: # Override this via parameter_defaults
    description: The default route of the control plane network.
    type: string
  ExternalInterfaceDefaultRoute: # Not used by default in this template
    default: '10.0.0.1'
    description: The default route of the external network.
    type: string
  ManagementInterfaceDefaultRoute: # Commented out by default in this template
    default: unset
    description: The default route of the management network.
    type: string
  DnsServers: # Override this via parameter_defaults
    default: []
    description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
    type: comma_delimited_list
  EC2MetadataIp: # Override this via parameter_defaults
    description: The IP address of the EC2 metadata server.
    type: string

resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            -
              type: interface
              name: nic1
              use_dhcp: false
              dns_servers: {get_param: DnsServers}
              addresses:
                -
                  ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                -
                  ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
                -
                  default: true
                  next_hop: {get_param: ControlPlaneDefaultRoute}
            -
              type: ovs_bridge
              name: br-bond
              members:
                -
                  type: ovs_bond
                  name: bond1
                  ovs_options: {get_param: BondInterfaceOvsOptions}
                  members:
                    -
                      type: interface
                      name: nic2
                      primary: true
                    -
                      type: interface
                      name: nic3
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: StorageNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: StorageIpSubnet}
                -
                  type: vlan
                  device: bond1
                  vlan_id: {get_param: StorageMgmtNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: StorageMgmtIpSubnet}
                # Uncomment when including environments/network-management.yaml
                # If setting default route on the Management interface, comment
                # out the default route on the Control Plane.
                #-
                #  type: vlan
                #  device: bond1
                #  vlan_id: {get_param: ManagementNetworkVlanID}
                #  addresses:
                #    -
                #      ip_netmask: {get_param: ManagementIpSubnet}
                #  routes:
                #    -
                #      default: true
                #      next_hop: {get_param: ManagementInterfaceDefaultRoute}

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}

 

now you are ready to deploy 🙂

use the following command to deploy:

#!/bin/bash
openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates/ \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e ~/templates/network-environment.yaml \
-e ~/templates/storage-environment.yaml \
--control-flavor control \
--compute-flavor compute \
--ceph-storage-flavor ceph-storage \
--control-scale 3 \
--compute-scale 3 \
--ceph-storage-scale 3 \
--ntp-server 192.0.2.200 | tee openstack-deployment.log

Note: 192.0.2.200 is my local repo machine, which I also configured as an NTP server. The deployment will fail for 3 controllers in HA mode if there is no NTP server.
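
A minimal sketch of that NTP setup on the repo machine, assuming RHEL 7 with the ntp package available from the local repos (the restrict line simply allows the deployment network to query it):

yum install -y ntp
echo "restrict 192.0.2.0 mask 255.255.255.0 nomodify notrap" >> /etc/ntp.conf
systemctl enable ntpd
systemctl start ntpd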

When deploying RHOS you will need to start the nodes manually when you see the following:

[stack@undercloud ~]$ watch -n 1 openstack baremetal node list

+--------------------------------------+------+--------------------------------------+-------------+----------------------+-------------+
| UUID                                 | Name | Instance UUID                        | Power State | Provisioning State   | Maintenance |
+--------------------------------------+------+--------------------------------------+-------------+----------------------+-------------+
| ce9937be-d2f9-4d1b-b3ae-016c128ed6af | None | e2119c85-fa1e-411d-8de3-7c307286639d | power on    | deploy wait-callback | False       |
| 1b224050-45c1-46ee-9cf7-5df88479dfc9 | None | 5ba04bac-8a71-484b-9f15-d8d56f0df876 | power on    | deploy wait-callback | False       |
| fa38b21a-8521-4ebf-8af8-abe7351a2bb1 | None | 02840b27-08e1-4e7e-b028-97b312dd1bd3 | power on    | deploy wait-callback | False       |

When you power on the VMs, the state will change to deploying. After a while the nodes will shut down; at that point you need to power them back on quickly so the deployment continues and the state eventually changes to active.

Wait for the deployment script to complete; it should finish successfully.

To log in to the overcloud, you need to be on a machine that is connected to VLAN 1 (the External Network). You can use the local repo machine for that, but make sure a GUI is installed.

The password for the dashboard is in the overcloudenv.conf file on the director machine.

 
