Install and configure network node
The network node primarily handles internal and external routing and DHCP services for virtual networks.
To configure prerequisites
Before you install and configure OpenStack Networking, you must
configure certain kernel networking parameters.

- Edit the /etc/sysctl.conf file to contain the following parameters:

  net.ipv4.ip_forward=1
  net.ipv4.conf.all.rp_filter=0
  net.ipv4.conf.default.rp_filter=0

- Implement the changes:

  # sysctl -p
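Before applying the changes, it can help to confirm the fragment actually contains all three parameters. The following sketch checks a file for them; the `check_params` helper and the temp-file path are illustrative, not part of OpenStack:

```shell
# Sanity-check that a sysctl fragment sets the three parameters this
# guide requires before running 'sysctl -p'.
check_params() {
    f=$1
    grep -qx 'net.ipv4.ip_forward=1' "$f" &&
    grep -qx 'net.ipv4.conf.all.rp_filter=0' "$f" &&
    grep -qx 'net.ipv4.conf.default.rp_filter=0' "$f" &&
    echo "all required parameters present"
}

# Demonstrate against a throwaway copy of the fragment:
tmp=$(mktemp)
printf '%s\n' 'net.ipv4.ip_forward=1' \
              'net.ipv4.conf.all.rp_filter=0' \
              'net.ipv4.conf.default.rp_filter=0' > "$tmp"
check_params "$tmp"
rm -f "$tmp"
```

On a real network node you would point `check_params` at /etc/sysctl.conf instead of the temporary file.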
To install the Networking components
# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
To configure the Networking common components
The Networking common component configuration includes the
authentication mechanism, message queue, and plug-in.

Note: Default configuration files vary by distribution. You might
need to add these sections and options rather than modifying
existing sections and options. Also, an ellipsis (...) in the
configuration snippets indicates potential default configuration
options that you should retain.
- Edit the /etc/neutron/neutron.conf file and complete the following
  actions:

  - In the [database] section, comment out any connection options
    because network nodes do not directly access the database.

  - In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure
    RabbitMQ message queue access:

    [DEFAULT]
    ...
    rpc_backend = rabbit

    [oslo_messaging_rabbit]
    ...
    rabbit_host = controller
    rabbit_userid = openstack
    rabbit_password = RABBIT_PASS

    Replace RABBIT_PASS with the password you chose for the
    openstack account in RabbitMQ.

  - In the [DEFAULT] and [keystone_authtoken] sections, configure
    Identity service access:

    [DEFAULT]
    ...
    auth_strategy = keystone

    [keystone_authtoken]
    ...
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    auth_plugin = password
    project_domain_id = default
    user_domain_id = default
    project_name = service
    username = neutron
    password = NEUTRON_PASS

    Replace NEUTRON_PASS with the password you chose for the
    neutron user in the Identity service.

    Note: Comment out or remove any other options in the
    [keystone_authtoken] section.

  - In the [DEFAULT] section, enable the Modular Layer 2 (ML2)
    plug-in, router service, and overlapping IP addresses:

    [DEFAULT]
    ...
    core_plugin = ml2
    service_plugins = router
    allow_overlapping_ips = True

  - (Optional) To assist with troubleshooting, enable verbose
    logging in the [DEFAULT] section:

    [DEFAULT]
    ...
    verbose = True
To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to
build the virtual networking framework for instances.

- Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete
  the following actions:

  - In the [ml2] section, enable the flat, VLAN, generic routing
    encapsulation (GRE), and virtual extensible LAN (VXLAN) network
    type drivers, GRE tenant networks, and the OVS mechanism driver:

    [ml2]
    ...
    type_drivers = flat,vlan,gre,vxlan
    tenant_network_types = gre
    mechanism_drivers = openvswitch

  - In the [ml2_type_flat] section, configure the external flat
    provider network:

    [ml2_type_flat]
    ...
    flat_networks = external

  - In the [ml2_type_gre] section, configure the tunnel identifier
    (id) range:

    [ml2_type_gre]
    ...
    tunnel_id_ranges = 1:1000

  - In the [securitygroup] section, enable security groups, enable
    ipset, and configure the OVS iptables firewall driver:

    [securitygroup]
    ...
    enable_security_group = True
    enable_ipset = True
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

  - In the [ovs] section, enable tunnels, configure the local tunnel
    endpoint, and map the external flat provider network to the
    br-ex external network bridge:

    [ovs]
    ...
    local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
    bridge_mappings = external:br-ex

    Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP
    address of the instance tunnels network interface on your
    network node.

  - In the [agent] section, enable GRE tunnels:

    [agent]
    ...
    tunnel_types = gre
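As a side note, the tunnel_id_ranges value bounds how many GRE tenant networks the deployment can allocate: each network consumes one tunnel id from the range. A quick illustrative calculation (the parsing helper here is just a sketch, not an OpenStack tool):

```shell
# Count the tunnel ids available in a tunnel_id_ranges entry of the
# form LOW:HIGH, e.g. the 1:1000 used above.
range='1:1000'
lo=${range%:*}
hi=${range#*:}
ids=$((hi - lo + 1))
echo "$ids tunnel ids available"
```

So 1:1000 supports up to 1000 GRE tenant networks; widen the range if you expect more.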
To configure the Layer-3 (L3) agent
The Layer-3 (L3) agent provides routing services for virtual
networks.

- Edit the /etc/neutron/l3_agent.ini file and complete the following
  actions:

  - In the [DEFAULT] section, configure the interface driver,
    external network bridge, and enable deletion of defunct router
    namespaces:

    [DEFAULT]
    ...
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
    external_network_bridge =
    router_delete_namespaces = True

    Note: The external_network_bridge option intentionally lacks a
    value to enable multiple external networks on a single agent.

  - (Optional) To assist with troubleshooting, enable verbose
    logging in the [DEFAULT] section:

    [DEFAULT]
    ...
    verbose = True
To configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.

- Edit the /etc/neutron/dhcp_agent.ini file and complete the
  following actions:

  - In the [DEFAULT] section, configure the interface and DHCP
    drivers and enable deletion of defunct DHCP namespaces:

    [DEFAULT]
    ...
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
    dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    dhcp_delete_namespaces = True

  - (Optional) To assist with troubleshooting, enable verbose
    logging in the [DEFAULT] section:

    [DEFAULT]
    ...
    verbose = True

- (Optional) Tunneling protocols such as GRE include additional
  packet headers that increase overhead and decrease space available
  for the payload or user data. Without knowledge of the virtual
  network infrastructure, instances attempt to send packets using the
  default Ethernet maximum transmission unit (MTU) of 1500 bytes.
  Internet protocol (IP) networks contain the path MTU discovery
  (PMTUD) mechanism to detect end-to-end MTU and adjust packet size
  accordingly. However, some operating systems and networks block or
  otherwise lack support for PMTUD, causing performance degradation
  or connectivity failure.

  Ideally, you can prevent these problems by enabling jumbo frames on
  the physical network that contains your tenant virtual networks.
  Jumbo frames support MTUs up to approximately 9000 bytes, which
  negates the impact of GRE overhead on virtual networks. However,
  many network devices lack support for jumbo frames and OpenStack
  administrators often lack control over network infrastructure.
  Given the latter complications, you can also prevent MTU problems
  by reducing the instance MTU to account for GRE overhead.
  Determining the proper MTU value often takes experimentation, but
  1454 bytes works in most environments. You can configure the DHCP
  server that assigns IP addresses to your instances to also adjust
  the MTU.

  Note: Some cloud images ignore the DHCP MTU option, in which case
  you should configure it using metadata, a script, or another
  suitable method.

  - Edit the /etc/neutron/dhcp_agent.ini file and complete the
    following action:

    - In the [DEFAULT] section, enable the dnsmasq configuration
      file:

      [DEFAULT]
      ...
      dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

  - Create and edit the /etc/neutron/dnsmasq-neutron.conf file and
    complete the following action:

    - Enable the DHCP MTU option (26) and configure it to 1454
      bytes:

      dhcp-option-force=26,1454

  - Kill any existing dnsmasq processes:

    # pkill dnsmasq
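The arithmetic behind the 1454-byte figure can be sketched as follows. GRE in IPv4 adds an outer IP header (20 bytes) plus a GRE header (4 to 8 bytes depending on optional fields), so 1454 leaves comfortable margin below the usual 1500-byte physical MTU. To probe a path with ping, subtract the IP (20) and ICMP (8) headers from the instance MTU to get the payload size for the -s flag:

```shell
# Derive the ping payload size that exactly fills a 1454-byte MTU.
instance_mtu=1454
ping_payload=$((instance_mtu - 20 - 8))   # minus IPv4 and ICMP headers
echo "$ping_payload"
```

From an instance, a command along the lines of `ping -M do -s 1426 <external host>` (don't-fragment mode) should then succeed if the reduced MTU fits end to end; larger payloads should fail with a fragmentation error.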
To configure the metadata agent
The metadata agent provides configuration information such as
credentials to instances.

- Edit the /etc/neutron/metadata_agent.ini file and complete the
  following actions:

  - In the [DEFAULT] section, configure access parameters:

    [DEFAULT]
    ...
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    auth_region = RegionOne
    auth_plugin = password
    project_domain_id = default
    user_domain_id = default
    project_name = service
    username = neutron
    password = NEUTRON_PASS

    Replace NEUTRON_PASS with the password you chose for the
    neutron user in the Identity service.

  - In the [DEFAULT] section, configure the metadata host:

    [DEFAULT]
    ...
    nova_metadata_ip = controller

  - In the [DEFAULT] section, configure the metadata proxy shared
    secret:

    [DEFAULT]
    ...
    metadata_proxy_shared_secret = METADATA_SECRET

    Replace METADATA_SECRET with a suitable secret for the metadata
    proxy.

  - (Optional) To assist with troubleshooting, enable verbose
    logging in the [DEFAULT] section:

    [DEFAULT]
    ...
    verbose = True

- On the controller node, edit the /etc/nova/nova.conf file and
  complete the following action:

  - In the [neutron] section, enable the metadata proxy and
    configure the secret:

    [neutron]
    ...
    service_metadata_proxy = True
    metadata_proxy_shared_secret = METADATA_SECRET

    Replace METADATA_SECRET with the secret you chose for the
    metadata proxy.
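The guide does not mandate any particular way to produce METADATA_SECRET; any hard-to-guess string shared by the metadata agent and nova.conf works. One possible sketch, assuming /dev/urandom is available:

```shell
# Generate a 32-character hexadecimal secret from kernel randomness.
gen_secret() {
    tr -dc 'a-f0-9' < /dev/urandom | head -c 32
    echo
}

secret=$(gen_secret)
echo "$secret"
```

Use the same generated value for metadata_proxy_shared_secret in both files; the agents cannot proxy metadata requests if the secrets differ.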
- On the controller node, restart the Compute API service:
# systemctl restart openstack-nova-api.service
To configure the Open vSwitch (OVS) service
The OVS service provides the underlying virtual networking framework
for instances. The integration bridge br-int handles internal
instance network traffic within OVS. The external bridge br-ex
handles external instance network traffic within OVS. The external
bridge requires a port on the physical external network interface to
provide instances with external network access. In essence, this
port connects the virtual and physical external networks in your
environment.

- Start the OVS service and configure it to start when the system
  boots:

  # systemctl enable openvswitch.service
  # systemctl start openvswitch.service

- Add the external bridge:

  # ovs-vsctl add-br br-ex

- Add a port to the external bridge that connects to the physical
  external network interface:

  # ovs-vsctl add-port br-ex INTERFACE_NAME

  Replace INTERFACE_NAME with the actual interface name. For
  example, eth2 or ens256.

  Note: Depending on your network interface driver, you might need
  to disable generic receive offload (GRO) to achieve suitable
  throughput between your instances and the external network.

  To temporarily disable GRO on the external network interface while
  testing your environment:

  # ethtool -K INTERFACE_NAME gro off
To finalize the installation
- The Networking service initialization scripts expect a symbolic
  link /etc/neutron/plugin.ini pointing to the ML2 plug-in
  configuration file, /etc/neutron/plugins/ml2/ml2_conf.ini. If this
  symbolic link does not exist, create it using the following
  command:

  # ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

  Due to a packaging bug, the Open vSwitch agent initialization
  script explicitly looks for the Open vSwitch plug-in configuration
  file rather than a symbolic link /etc/neutron/plugin.ini pointing
  to the ML2 plug-in configuration file. Run the following commands
  to resolve this issue:

  # cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \
    /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
  # sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \
    /usr/lib/systemd/system/neutron-openvswitch-agent.service

- Start the Networking services and configure them to start when the
  system boots:

  # systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \
    neutron-dhcp-agent.service neutron-metadata-agent.service \
    neutron-ovs-cleanup.service
  # systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service \
    neutron-dhcp-agent.service neutron-metadata-agent.service

  Note: Do not explicitly start the neutron-ovs-cleanup service.
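The effect of the sed command in the packaging-bug workaround can be previewed on a sample ExecStart line; the line below is only illustrative of what the packaged unit file contains:

```shell
# Show what the substitution does to a unit-file ExecStart line.
line='ExecStart=/usr/bin/neutron-openvswitch-agent --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini'
fixed=$(echo "$line" | sed 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g')
echo "$fixed"
```

The agent then reads /etc/neutron/plugin.ini, which the symbolic link created earlier resolves to the ML2 configuration file.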
Verify operation
Note: Perform these commands on the controller node.
- Source the admin credentials to gain access to admin-only CLI
  commands:
$ source admin-openrc.sh
- List agents to verify successful launch of the neutron agents:
$ neutron agent-list
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| id                                   | agent_type         | host    | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| 30275801-e17a-41e4-8f53-9db63544f689 | Metadata agent     | network | :-)   | True           | neutron-metadata-agent    |
| 4bd8c50e-7bad-4f3b-955d-67658a491a15 | Open vSwitch agent | network | :-)   | True           | neutron-openvswitch-agent |
| 756e5bba-b70f-4715-b80e-e37f59803d20 | L3 agent           | network | :-)   | True           | neutron-l3-agent          |
| 9c45473c-6d6d-4f94-8df1-ebd0b6838d5f | DHCP agent         | network | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
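A quick way to sanity-check the listing is to count the alive markers; all four agents should report `:-)`. The sketch below runs against a trimmed sample of the table so the count is reproducible; on a live controller you would pipe the command output instead:

```shell
# Count agents reporting alive in (sample) agent-list output.
sample='| Metadata agent     | network | :-) |
| Open vSwitch agent | network | :-) |
| L3 agent           | network | :-) |
| DHCP agent         | network | :-) |'
alive=$(printf '%s\n' "$sample" | grep -c ':-)')
echo "$alive agents alive"
```

On a live system: `neutron agent-list | grep -c ':-)'` should print 4 for this topology; a `xxx` marker instead of `:-)` indicates a dead agent worth investigating.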