Thursday, October 15, 2015

OpenStack : Install Kilo on Fedora 22 : Networking Service : Install and configure compute node

http://docs.openstack.org/kilo/install-guide/install/yum/content/neutron-compute-node.html

In addition to the steps in the guide, open the VNC console port in the firewall on the compute node:
# firewall-cmd --get-default-zone
FedoraServer
# firewall-cmd --permanent --zone=FedoraServer --add-port=5900/tcp   (for the VNC console)
# firewall-cmd --reload
# firewall-cmd --list-ports
5900/tcp
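
Nova assigns each instance its own VNC console port, counting up from 5900, so if you plan to run more than one instance on this node you may prefer to open a range instead. This is an extra step not in the guide; adjust the range to your needs:
# firewall-cmd --permanent --zone=FedoraServer --add-port=5900-5999/tcp
# firewall-cmd --reload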

Install and configure compute node

The compute node handles connectivity and security groups for instances.

To configure prerequisites
Before you install and configure OpenStack Networking, you must configure certain kernel networking parameters.
  1. Edit the /etc/sysctl.conf file to contain the following parameters:
    net.ipv4.conf.all.rp_filter=0
    net.ipv4.conf.default.rp_filter=0
    net.bridge.bridge-nf-call-iptables=1
    net.bridge.bridge-nf-call-ip6tables=1
  2. Implement the changes:
    # sysctl -p
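    To confirm the new values are active, you can query them back (a quick check, not part of the official guide). On newer kernels the two net.bridge.* keys only exist once the br_netfilter module is loaded, so load it first if sysctl reports them as unknown:
    # modprobe br_netfilter    (only if the net.bridge.* keys are reported as unknown)
    # sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter
    net.ipv4.conf.all.rp_filter = 0
    net.ipv4.conf.default.rp_filter = 0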

To configure the Networking common components
The Networking common component configuration includes the authentication mechanism, message queue, and plug-in.
Note:
Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets indicates potential default configuration options that you should retain.
  • Edit the /etc/neutron/neutron.conf file and complete the following actions:
    1. In the [database] section, comment out any connection options because compute nodes do not directly access the database.
    2. In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:
      [DEFAULT]
      ...
      rpc_backend = rabbit
      [oslo_messaging_rabbit]
      ...
      rabbit_host = controller
      rabbit_userid = openstack
      rabbit_password = RABBIT_PASS
      Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
    3. In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
      [DEFAULT]
      ...
      auth_strategy = keystone
      [keystone_authtoken]
      ...
      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      auth_plugin = password
      project_domain_id = default
      user_domain_id = default
      project_name = service
      username = neutron
      password = NEUTRON_PASS
      Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
      Note:
      Comment out or remove any other options in the [keystone_authtoken] section.
    4. In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, router service, and overlapping IP addresses:
      [DEFAULT]
      ...
      core_plugin = ml2
      service_plugins = router
      allow_overlapping_ips = True
    5. (Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:
      [DEFAULT]
      ...
      verbose = True
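
If you prefer to script the neutron.conf changes above instead of editing the file by hand, the openstack-config utility can apply the same settings. This is only a sketch of the steps already listed and assumes the openstack-utils package is installed (it is not part of this guide):
# openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
# openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
# openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
# openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
# openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_plugin password
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_id default
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_id default
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
# openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
# openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
# openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True
# openstack-config --set /etc/neutron/neutron.conf DEFAULT verbose True
Remember to substitute RABBIT_PASS and NEUTRON_PASS, and to comment out any connection option left in the [database] section.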

To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.
  • Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:
    1. In the [ml2] section, enable the flat, VLAN, generic routing encapsulation (GRE), and virtual extensible LAN (VXLAN) network type drivers, GRE tenant networks, and the OVS mechanism driver:
      [ml2]
      ...
      type_drivers = flat,vlan,gre,vxlan
      tenant_network_types = gre
      mechanism_drivers = openvswitch
    2. In the [ml2_type_gre] section, configure the tunnel identifier (id) range:
      [ml2_type_gre]
      ...
      tunnel_id_ranges = 1:1000
    3. In the [securitygroup] section, enable security groups, enable ipset, and configure the OVS iptables firewall driver:
      [securitygroup]
      ...
      enable_security_group = True
      enable_ipset = True
      firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    4. In the [ovs] section, enable tunnels and configure the local tunnel endpoint:
      [ovs]
      ...
      local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
      Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your compute node.
    5. In the [agent] section, enable GRE tunnels:
      [agent]
      ...
      tunnel_types = gre
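
The same ml2_conf.ini edits can also be applied non-interactively with openstack-config (again only a sketch, assuming openstack-utils is installed; substitute your own tunnel interface IP):
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,gre,vxlan
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini agent tunnel_types gre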

To configure the Open vSwitch (OVS) service
The OVS service provides the underlying virtual networking framework for instances.
  • Start the OVS service and configure it to start when the system boots:
    # systemctl enable openvswitch.service
    # systemctl start openvswitch.service
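    To confirm OVS came up, you can check the service and list the switch database. Expect the service to report active and ovs-vsctl to print just the database UUID and ovs_version at this point; the br-int and br-tun bridges only appear later, once the Neutron OVS agent starts:
    # systemctl status openvswitch.service
    # ovs-vsctl show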

To configure Compute to use Networking
By default, distribution packages configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.
  • Edit the /etc/nova/nova.conf file and complete the following actions:
    1. In the [DEFAULT] section, configure the APIs and drivers:
      [DEFAULT]
      ...
      network_api_class = nova.network.neutronv2.api.API
      security_group_api = neutron
      linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
      firewall_driver = nova.virt.firewall.NoopFirewallDriver
      Note:
      By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
    2. In the [neutron] section, configure access parameters:
      [neutron]
      ...
      url = http://controller:9696
      auth_strategy = keystone
      admin_auth_url = http://controller:35357/v2.0
      admin_tenant_name = service
      admin_username = neutron
      admin_password = NEUTRON_PASS
      Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
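
The nova.conf changes above can likewise be scripted with openstack-config (a sketch only, assuming openstack-utils is installed; substitute NEUTRON_PASS as above):
# openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
# openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
# openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
# openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
# openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
# openstack-config --set /etc/nova/nova.conf neutron auth_strategy keystone
# openstack-config --set /etc/nova/nova.conf neutron admin_auth_url http://controller:35357/v2.0
# openstack-config --set /etc/nova/nova.conf neutron admin_tenant_name service
# openstack-config --set /etc/nova/nova.conf neutron admin_username neutron
# openstack-config --set /etc/nova/nova.conf neutron admin_password NEUTRON_PASS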

To finalize the installation
  1. The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file, /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following command:
    # ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in configuration file rather than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the following commands to resolve this issue:
    # cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \
      /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
    # sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \
      /usr/lib/systemd/system/neutron-openvswitch-agent.service
  2. Restart the Compute service:
    # systemctl restart openstack-nova-compute.service
  3. Start the Open vSwitch (OVS) agent and configure it to start when the system boots:
    # systemctl enable neutron-openvswitch-agent.service
    # systemctl start neutron-openvswitch-agent.service
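
Because the sed command in step 1 edits a systemd unit file, systemd may still be holding the old copy if it had already loaded that unit. A quick sanity check of the symlink and the substitution, plus a daemon-reload and restart if needed, looks like this (an extra check, not part of the official guide):
# systemctl daemon-reload
# ls -l /etc/neutron/plugin.ini
# grep plugin.ini /usr/lib/systemd/system/neutron-openvswitch-agent.service
# systemctl restart neutron-openvswitch-agent.service
# systemctl status neutron-openvswitch-agent.service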

Verify operation
Note:
Perform these commands on the controller node.
  1. Source the admin credentials to gain access to admin-only CLI commands:
    $ source admin-openrc.sh
  2. List agents to verify successful launch of the neutron agents:
    $ neutron agent-list
    +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
    | id                                   | agent_type         | host     | alive | admin_state_up | binary                    |
    +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
    | 30275801-e17a-41e4-8f53-9db63544f689 | Metadata agent     | network  | :-)   | True           | neutron-metadata-agent    |
    | 4bd8c50e-7bad-4f3b-955d-67658a491a15 | Open vSwitch agent | network  | :-)   | True           | neutron-openvswitch-agent |
    | 756e5bba-b70f-4715-b80e-e37f59803d20 | L3 agent           | network  | :-)   | True           | neutron-l3-agent          |
    | 9c45473c-6d6d-4f94-8df1-ebd0b6838d5f | DHCP agent         | network  | :-)   | True           | neutron-dhcp-agent        |
    | a5a49051-05eb-4b4f-bfc7-d36235fe9131 | Open vSwitch agent | compute1 | :-)   | True           | neutron-openvswitch-agent |
    +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
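
If the compute1 Open vSwitch agent is missing from this list, look at the agent log on the compute node (on RDO packages it is typically /var/log/neutron/openvswitch-agent.log) and make sure firewalld on both nodes is not dropping the GRE tunnel traffic; one way to allow GRE is a firewalld direct rule, shown here only as a sketch:
# tail -n 50 /var/log/neutron/openvswitch-agent.log
# firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p gre -j ACCEPT
# firewall-cmd --reload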
