Friday, October 23, 2015

OpenStack : Install Kilo on Fedora 22 : Install Dashboard

http://docs.openstack.org/kilo/install-guide/install/yum/content/install_dashboard.html

To install the dashboard components
  • Install the packages:
    # yum install openstack-dashboard httpd mod_wsgi memcached python-memcached
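    If you want a quick hedged sanity check that everything landed (same package names as above):
    # rpm -q openstack-dashboard httpd mod_wsgi memcached python-memcached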

To configure the dashboard
  • Edit the /etc/openstack-dashboard/local_settings file and complete the following actions:
    1. Configure the dashboard to use OpenStack services on the controller node:
      OPENSTACK_HOST = "controller"
    2. Allow all hosts to access the dashboard:
      ALLOWED_HOSTS = '*'
    3. Configure the memcached session storage service:
      CACHES = {
          'default': {
              'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
              'LOCATION': '127.0.0.1:11211',
          }
      }
      Note: Comment out any other session storage configuration.
    4. Configure user as the default role for users that you create via the dashboard:
      OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
    5. Optionally, configure the time zone:
      TIME_ZONE = "TIME_ZONE"
      Replace TIME_ZONE with an appropriate time zone identifier. For more information, see the list of time zones.
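      After completing the edits above, one hedged way to confirm they took effect is to grep the settings file you just edited:
      # grep -E 'OPENSTACK_HOST|ALLOWED_HOSTS|OPENSTACK_KEYSTONE_DEFAULT_ROLE|TIME_ZONE|CACHES' /etc/openstack-dashboard/local_settings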

To finalize installation
  1. On RHEL and CentOS, configure SELinux to permit the web server to connect to OpenStack services:
    # setsebool -P httpd_can_network_connect on
  2. Due to a packaging bug, the dashboard CSS fails to load properly. Run the following command to resolve this issue:
    # chown -R apache:apache /usr/share/openstack-dashboard/static
    For more information, see the bug report.
  3. Start the web server and session storage service and configure them to start when the system boots:
    # systemctl enable httpd.service memcached.service
    # systemctl start httpd.service memcached.service
     
     # service httpd restart
     
    If there are any problems when restarting, check:
    # systemctl status httpd.service -l
     
    Access the dashboard using a web browser:
          
          
          http://controller/dashboard 
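    A quick hedged smoke test from the shell (assuming the host you run it from resolves "controller"); a 200 response or a redirect to the login page means Apache is serving the dashboard:
          $ curl -I http://controller/dashboard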

Wednesday, October 21, 2015

Linux : NTP servers are rejected

FAQ: http://www.ntp.org/ntpfaq/NTP-s-algo.htm

# ntpq
> as
ind assID status  conf reach auth condition  last_event cnt
===========================================================
  1   943  9614   yes   yes  none  sys.peer   reachable  1
  2   944  9014   yes   yes  none  reject   reachable  1


> rv 944
assID=944 status=9014 reach, conf, 1 event, event_reach,
srcadr=ntpserver_IP, srcport=123, dstadr=clientIP, dstport=123,
leap=00, stratum=1, precision=-20, rootdelay=0.000,
rootdispersion=0.320, refid=GPS, reach=377, unreach=0, hmode=3, pmode=4,
hpoll=6, ppoll=6, flash=400 peer_dist, keyid=0, ttl=0,
offset=-23289498.129, delay=18.167, dispersion=3.040, jitter=78437.126,
reftime=d9d20ed8.a799ceee Wed, Oct 21 2015 21:17:12.654,
org=d9d20edf.9060a3fb Wed, Oct 21 2015 21:17:19.563,
rec=d9d269f2.580148e4 Thu, Oct 22 2015 3:45:54.343,
xmt=d9d269f2.4ebe53b8 Thu, Oct 22 2015 3:45:54.307,
filtdelay= 36.14 18.17 36.36 31.15 36.72 40.30 18.80 18.83,
filtoffset= -233147 -232894 -232688 -232419 -232155 -231980 -231812 -231704,
filtdisp= 0.00 0.98 1.95 2.90 3.87 4.85 5.79 6.74

flash=400 means: peer distance exceeded
/*
 * Peer errors
 */
#define TEST10      0x0200  /* peer bad synch or stratum */
#define TEST11      0x0400  /* peer distance exceeded */
#define TEST12      0x0800  /* peer synchronization loop */
#define TEST13      0x1000  /* peer unreachable */


Usually this means network blocking: a firewall, proxy, or similar may be causing this error.

We can try this:
Add the line
tos maxdist [NUM]
to /etc/ntp.conf, before any lines starting with 'restrict' or 'server'.

The default value is 1.5; we can try a higher number such as 3.
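
A minimal sketch of the placement in /etc/ntp.conf (the pool hostnames below are only placeholders for whatever servers you already use):

# raise the maximum acceptable root distance (default 1.5)
tos maxdist 3

server 0.rhel.pool.ntp.org iburst
server 1.rhel.pool.ntp.org iburst

Then restart ntpd so the change takes effect.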


# /etc/init.d/ntpd stop
Shutting down ntpd: [ OK ]
 # ps -ef |grep ntp |grep -v grep
 # rm -rf /var/lib/ntp/drift

 # /usr/sbin/ntpdate -u 0.rhel.pool.ntp.org
22 Nov 17:49:01 ntpdate[26709]: adjust time server 212.26.18.41 offset -0.000689 sec

 # /usr/sbin/ntpdate -u 1.rhel.pool.ntp.org
22 Nov 17:49:09 ntpdate[26719]: adjust time server 212.26.18.41 offset 0.000809 sec

 # /usr/sbin/ntpdate -u 2.rhel.pool.ntp.org
22 Nov 17:49:17 ntpdate[26724]: adjust time server 212.26.18.43 offset -0.000317 sec

 # /etc/init.d/ntpd start
Starting ntpd: [ OK ]
 # ps -ef |grep ntp |grep -v grep
ntp 26797 1 0 17:50 ? 00:00:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g

( wait for 10 - 15 min )

 # ntpstat
synchronised to NTP server (212.26.18.41) at stratum 2
time correct to within 968 ms
polling server every 64 s

 # ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
*0.rhel.pool.ntp .GPS. 1 u 31 64 277 17.296 -6.289 18.942
+2.rhel.pool.ntp 69.25.96.12 2 u 35 64 377 17.985 -7.786 13.836

 # tail -f /var/log/messages

ntpd[8240]: synchronized to 212.26.18.41, stratum 2



Or we can restart this way

# service ntpd stop
# ps -ef |grep ntp |grep -v grep
# rm -rf /var/lib/ntp/drift
# ntpd -qg
# service ntpd start

About the -q and -g options:
       -q      Exit the ntpd just after the first time the clock is set. This
               behavior mimics that of the ntpdate program, which is to be
               retired. The -g and -x options can be used with this option.
               Note: The kernel time discipline is disabled with this option.

       -g      Normally, ntpd exits with a message to the system log if the
               offset exceeds the panic threshold, which is 1000 s by default.
               This option allows the time to be set to any value without
               restriction; however, this can happen only once. If the
               threshold is exceeded after that, ntpd will exit with a message
               to the system log. This option can be used with the -q and -x
               options. See the tinker command for other options.

Make sure /etc/ntp/step-tickers is correctly populated
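
A hedged example of what /etc/ntp/step-tickers might contain; one server per line, and these pool names are just placeholders for whatever servers you actually use:

0.rhel.pool.ntp.org
1.rhel.pool.ntp.org
2.rhel.pool.ntp.org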


# nc -zvnu <IP Address of the NTP server> 123

# tcpdump -s0 port 123 -vvv -i <NIC>


Allow Only Specific Clients

To only allow machines on your own network to synchronize with your NTP server, add the following restrict line to your /etc/ntp.conf file:

restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
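
For context, a hedged sketch of the access-control block that line usually sits in (the localhost entries are the stock defaults on RHEL-style systems):

restrict default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap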


# cat /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -I eth0"

# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=no

# Additional options for ntpdate
NTPDATE_OPTIONS=""

https://docs.oracle.com/cd/E50245_01/E50251/html/vmadm-tshoot-vm-clock.html

Setting the Guest's Clock

PVM guests may perform their own system clock management, for example, using the NTPD (Network Time Protocol daemon), or the hypervisor may perform system clock management for all guests.
You can set paravirtualized guests to manage their own system clocks by setting the xen.independent_wallclock parameter to 1 in the /etc/sysctl.conf file. For example:
"xen.independent_wallclock = 1"
If you want to set the hypervisor to manage paravirtualized guest system clocks, set xen.independent_wallclock to 0. Any attempts to set or modify the time in a guest will fail.
You can temporarily override the setting in the /proc file. For example:
"echo 1 > /proc/sys/xen/independent_wallclock"


http://serverfault.com/questions/245401/xen-hvm-guest-has-severe-clock-drift
Add this line to the beginning of ntpd.conf:
tinker panic 0

To override the clock source configuration, you should add clocksource= to the kernel stanza. For example:

         kernel /vmlinuz-2.6.18-406.el5 ro root=/dev/vg0/rootvol elevator=deadline clocksource=acpi_pm
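
Before and after changing the kernel stanza, you can check which clocksource is actually in use through sysfs (the output below is only an example):

# cat /sys/devices/system/clocksource/clocksource0/available_clocksource
tsc hpet acpi_pm
# cat /sys/devices/system/clocksource/clocksource0/current_clocksource
acpi_pm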
 

For more details, browse the documentation file /usr/share/doc/kernel-doc-2.6.32/Documentation/kernel-parameters.txt:


        clocksource=    [GENERIC_TIME] Override the default clocksource
                        Format:                         Override the default clocksource and use the clocksource
                        with the name specified.
                        Some clocksource names to choose from, depending on
                        the platform:
                        [all] jiffies (this is the base, fallback clocksource)
                        [ACPI] acpi_pm
                        [ARM] imx_timer1,OSTS,netx_timer,mpu_timer2,
                                pxa_timer,timer3,32k_counter,timer0_1
                        [AVR32] avr32
                        [X86-32] pit,hpet,tsc,vmi-timer;
                                scx200_hrt on Geode; cyclone on IBM x440
                        [MIPS] MIPS
                        [PARISC] cr16
                        [S390] tod
                        [SH] SuperH
                        [SPARC64] tick
                        [X86-64] hpet,tsc

        hpet=           [X86-32,HPET] option to control HPET usage
                        Format: { enable (default) | disable | force |
                                verbose }
                        disable: disable HPET and use PIT instead
                        force: allow force enabled of undocumented chips (ICH4,
                                VIA, nVidia)
                        verbose: show contents of HPET registers during setup

        notsc           [BUGS=X86-32] Disable Time Stamp Counter

An overview on hardware clock and system timer circuits:

When it comes to talking about a system's clock, the hardware sits at the very bottom. Every typical system has several devices, usually implemented by clock chips, that provide timing features and can serve as clocks. Which hardware is available depends on the particular architecture. The clock circuits are used both to keep track of the current time of day and to make precise time measurements. The timer circuits are programmed by the kernel, so they issue interrupts at a fixed, predefined frequency. For instance, IA-32 and AMD64 systems have at least one programmable interrupt timer (PIT) as a classical timer circuit, usually implemented by an 8254 CMOS chip. Let's briefly describe the clock and timer circuits that are usually found in nearly any modern system of those architectures:
Real Time Clock (RTC)
The RTC is independent of the system's CPU and any other chips. As it is energized by a small battery, it continues to tick even when the system is switched off. The RTC is capable of issuing interrupts at frequencies ranging between 2 Hz and 8,192 Hz. Linux uses the RTC only to derive the time and date at boot time.
Programmable Interrupt Timer (PIT)
The PIT is a time-measuring device that can be compared to the alarm clock of a microwave oven: it makes the user aware that the cooking time interval has elapsed. Instead of ringing a bell, the PIT issues a special interrupt called timer interrupt, which notifies the kernel that one more time interval has elapsed. As the time goes by, the PIT goes on issuing interrupts forever at some fixed (architecture-specific) frequency established by the kernel.
Time Stamp Counter (TSC)
All 80x86 microprocessors include a CLK input pin, which receives the clock signal of an external oscillator. Starting with the Pentium, 80x86 microprocessors sport a counter that is increased at each clock signal, and is accessible through the TSC register which can be read by means of the rdtsc assembly instruction. When using this register the kernel has to take into consideration the frequency of the clock signal: if, for instance, the clock ticks at 1 GHz, the TSC is increased once every nanosecond. Linux may take advantage of this register to get much more accurate time measurements.
CPU Local Timer
The Local APIC (Advanced Programmable Interrupt Controller) present in recent 80x86 microprocessors provides yet another time-measuring device: similar to the PIT, it can issue one-shot or periodic interrupts. There are, however, a few differences:
  • The APIC's timer counter is 32 bits long, while the PIT's timer counter is 16 bits long;
  • The local APIC timer sends interrupts only to its processor, while the PIT raises a global interrupt, which may be handled by any CPU in the system;
  • The APIC's timer is based on the bus clock signal, and it can be programmed in such a way as to decrease the timer counter every 1, 2, 4, 8, 16, 32, 64, or 128 bus clock signals. Conversely, the PIT, which uses its own clock signal, can be programmed in a more flexible way.
High Precision Event Timer (HPET)
The HPET is a timer chip that is eventually expected to completely replace the PIT. It provides a number of hardware timers that can be exploited by the kernel. Basically, the chip includes up to eight 32-bit or 64-bit independent counters. Each counter is driven by its own clock signal, whose frequency must be at least 10 MHz; therefore the counter is increased at least once every 100 nanoseconds. Each counter is associated with at most 32 timers, each of which is composed of a comparator and a match register. The HPET registers allow the kernel to read and write the values of the counters and of the match registers, to program one-shot interrupts, and to enable or disable periodic interrupts on the timers that support them.
ACPI Power Management Timer (ACPI PMT)
The ACPI PMT is another clock device included in almost all ACPI-based motherboards. Its clock signal has a fixed frequency of roughly 3.58 MHz. The device is a simple counter increased at each clock tick. However, the ACPI PMT is preferable to the TSC if the operating system or the BIOS may dynamically lower the CPU's frequency or voltage. When this happens, the TSC's frequency changes, causing time warps and other side effects, while the frequency of the ACPI PMT does not.
 

Thursday, October 15, 2015

OpenStack : Install Kilo on Fedora 22 : Networking Service : Create initial networks : Tenant network

http://docs.openstack.org/kilo/install-guide/install/yum/content/neutron_initial-tenant-network.html

 Tenant network

The tenant network provides internal network access for instances. The architecture isolates this type of network from other tenants. The demo tenant owns this network because it only provides network access for instances within it.
Note: Perform these commands on the controller node.
 
To create the tenant network
  1. Source the demo credentials to gain access to user-only CLI commands:
    $ source demo-openrc.sh
  2. Create the network:
    $ neutron net-create demo-net
    Created a new network:
    +---------------------------+--------------------------------------+
    | Field                     | Value                                |
    +---------------------------+--------------------------------------+
    | admin_state_up            | True                                 |
    | id                        | bbe2a1dd-2557-41ff-b9ec-228d2e271add |
    | mtu                       | 0                                    |
    | name                      | demo-net                             |
    | provider:network_type     | gre                                  |
    | provider:physical_network |                                      |
    | provider:segmentation_id  | 62                                   |
    | router:external           | False                                |
    | shared                    | False                                |
    | status                    | ACTIVE                               |
    | subnets                   |                                      |
    | tenant_id                 | 9c1cc7fa7fc24c17812ec662555ba519     |
    +---------------------------+--------------------------------------+ 
Like the external network, your tenant network also requires a subnet attached to it. You can specify any valid subnet because the architecture isolates tenant networks. By default, this subnet uses DHCP so your instances can obtain IP addresses.
 
To create a subnet on the tenant network
  • Create the subnet:
    $ neutron subnet-create demo-net TENANT_NETWORK_CIDR \
      --name demo-subnet --gateway TENANT_NETWORK_GATEWAY
    Replace TENANT_NETWORK_CIDR with the subnet you want to associate with the tenant network and TENANT_NETWORK_GATEWAY with the gateway you want to associate with it, typically the ".1" IP address.
    Example using 192.168.1.0/24:
    $ neutron subnet-create demo-net 192.168.1.0/24 \
      --name demo-subnet --gateway 192.168.1.1
    Created a new subnet:
      
    +-------------------+--------------------------------------------------+
    | Field             | Value                                            |
    +-------------------+--------------------------------------------------+
    | allocation_pools  | {"start": "192.168.1.2", "end": "192.168.1.254"} |
    | cidr              | 192.168.1.0/24                                   |
    | dns_nameservers   |                                                  |
    | enable_dhcp       | True                                             |
    | gateway_ip        | 192.168.1.1                                      |
    | host_routes       |                                                  |
    | id                | d70a9abc-b83f-4078-ae5f-bfe5d01dc30b             |
    | ip_version        | 4                                                |
    | ipv6_address_mode |                                                  |
    | ipv6_ra_mode      |                                                  |
    | name              | demo-subnet                                      |
    | network_id        | bbe2a1dd-2557-41ff-b9ec-228d2e271add             |
    | subnetpool_id     |                                                  |
    | tenant_id         | 9c1cc7fa7fc24c17812ec662555ba519                 |
    +-------------------+--------------------------------------------------+
     
A virtual router passes network traffic between two or more virtual networks. Each router requires one or more interfaces and/or gateways that provide access to specific networks. In this case, you create a router and attach your tenant and external networks to it.
 
To create a router on the tenant network and attach the external and tenant networks to it
  1. Create the router:
    $ neutron router-create demo-router
    Created a new router:
    
    +-----------------------+--------------------------------------+
    | Field                 | Value                                |
    +-----------------------+--------------------------------------+
    | admin_state_up        | True                                 |
    | distributed           | False                                |
    | external_gateway_info |                                      |
    | ha                    | False                                |
    | id                    | 1aaa1237-ce55-47e1-8a70-70a700c9c2eb |
    | name                  | demo-router                          |
    | routes                |                                      |
    | status                | ACTIVE                               |
    | tenant_id             | 9c1cc7fa7fc24c17812ec662555ba519     |
    +-----------------------+--------------------------------------+
     
  2. Attach the router to the demo tenant subnet:
    $ neutron router-interface-add demo-router demo-subnet
    Added interface b1a894fd-aee8-475c-9262-4342afdc1b58 to router demo-router.
  3. Attach the router to the external network by setting it as the gateway:
    $ neutron router-gateway-set demo-router ext-net
    Set gateway for router demo-router
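
As a quick hedged sanity check after these steps, list the router's ports and try pinging its external gateway address; the ping target assumes the external subnet example shown in the External network section below (203.0.113.0/24 with the floating range starting at .101, whose lowest address the router gateway normally takes):

    $ neutron router-port-list demo-router
    $ ping -c 4 203.0.113.101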

OpenStack : Install Kilo on Fedora 22 : Networking Service : Create initial networks : External network

http://docs.openstack.org/kilo/install-guide/install/yum/content/neutron_initial-external-network.html

External network

The external network typically provides Internet access for your instances. By default, this network only allows Internet access from instances using Network Address Translation (NAT). You can enable Internet access to individual instances using a floating IP address and suitable security group rules. The admin tenant owns this network because it provides external network access for multiple tenants.
Note: Perform these commands on the controller node.

To create the external network
  1. Source the admin credentials to gain access to admin-only CLI commands:
    $ source admin-openrc.sh
  2. Create the network:
    $ neutron net-create ext-net --router:external \
      --provider:physical_network external --provider:network_type flat
    Created a new network:
    +---------------------------+--------------------------------------+
    | Field                     | Value                                |
    +---------------------------+--------------------------------------+
    | admin_state_up            | True                                 |
    | id                        | 08e02f01-fb15-46e0-8be5-0d5f0ccf7509 |
    | mtu                       | 0                                    |
    | name                      | ext-net                              |
    | provider:network_type     | flat                                 |
    | provider:physical_network | external                             |
    | provider:segmentation_id  |                                      |
    | router:external           | True                                 |
    | shared                    | False                                |
    | status                    | ACTIVE                               |
    | subnets                   |                                      |
    | tenant_id                 | 9c1cc7fa7fc24c17812ec662555ba519     |
    +---------------------------+--------------------------------------+ 
Like a physical network, a virtual network requires a subnet assigned to it. The external network shares the same subnet and gateway associated with the physical network connected to the external interface on the network node. You should specify an exclusive slice of this subnet for router and floating IP addresses to prevent interference with other devices on the external network.

To create a subnet on the external network
  • Create the subnet:
    $ neutron subnet-create ext-net EXTERNAL_NETWORK_CIDR --name ext-subnet \
      --allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END \
      --disable-dhcp --gateway EXTERNAL_NETWORK_GATEWAY
    Replace FLOATING_IP_START and FLOATING_IP_END with the first and last IP addresses of the range that you want to allocate for floating IP addresses. Replace EXTERNAL_NETWORK_CIDR with the subnet associated with the physical network. Replace EXTERNAL_NETWORK_GATEWAY with the gateway associated with the physical network, typically the ".1" IP address. You should disable DHCP on this subnet because instances do not connect directly to the external network and floating IP addresses require manual assignment.
    For example, using 203.0.113.0/24 with floating IP address range 203.0.113.101 to 203.0.113.200:
    
    
     
    # neutron subnet-create ext-net 203.0.113.0/24 --name ext-subnet \
     --allocation-pool start=203.0.113.101,end=203.0.113.200 \
     --disable-dhcp --gateway 203.0.113.1
    Created a new subnet:
    +-------------------+----------------------------------------------------+
    | Field             | Value                                              |
    +-------------------+----------------------------------------------------+
    | allocation_pools  | {"start": "203.0.113.101", "end": "203.0.113.200"} |
    | cidr              | 203.0.113.0/24                                     |
    | dns_nameservers   |                                                    |
    | enable_dhcp       | False                                              |
    | gateway_ip        | 203.0.113.1                                        |
    | host_routes       |                                                    |
    | id                | 8225834a-15cf-442e-84f2-711ef762e39b               |
    | ip_version        | 4                                                  |
    | ipv6_address_mode |                                                    |
    | ipv6_ra_mode      |                                                    |
    | name              | ext-subnet                                         |
    | network_id        | 08e02f01-fb15-46e0-8be5-0d5f0ccf7509               |
    | subnetpool_id     |                                                    |
    | tenant_id         | 9c1cc7fa7fc24c17812ec662555ba519                   |
    +-------------------+----------------------------------------------------+
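    A hedged way to double-check the result before moving on:
    $ neutron net-list
    $ neutron subnet-show ext-subnet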
     

OpenStack : Install Kilo on Fedora 22 : Networking Service : Install and configure compute node

http://docs.openstack.org/kilo/install-guide/install/yum/content/neutron-compute-node.html

You also need to open firewall ports:
# firewall-cmd --get-default-zone
FedoraServer
# firewall-cmd --permanent --zone=FedoraServer --add-port=5900/tcp   (for the VNC console)
# firewall-cmd --reload
# firewall-cmd --list-ports
5900/tcp

Install and configure compute node

The compute node handles connectivity and security groups for instances.

To configure prerequisites
Before you install and configure OpenStack Networking, you must configure certain kernel networking parameters.
  1. Edit the /etc/sysctl.conf file to contain the following parameters:
    net.ipv4.conf.all.rp_filter=0
    net.ipv4.conf.default.rp_filter=0
    net.bridge.bridge-nf-call-iptables=1
    net.bridge.bridge-nf-call-ip6tables=1
  2. Implement the changes:
    # sysctl -p
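    To confirm the kernel picked up the new values, a quick hedged check (the bridge parameters only show up once the bridge module is loaded):
    # sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter
    # sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables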

To configure the Networking common components
The Networking common component configuration includes the authentication mechanism, message queue, and plug-in.
Note: Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets indicates potential default configuration options that you should retain.
  • Edit the /etc/neutron/neutron.conf file and complete the following actions:
    1. In the [database] section, comment out any connection options because compute nodes do not directly access the database.
    2. In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:
      [DEFAULT]
      ...
      rpc_backend = rabbit
      [oslo_messaging_rabbit]
      ...
      rabbit_host = controller
      rabbit_userid = openstack
      rabbit_password = RABBIT_PASS
      Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
    3. In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
      [DEFAULT]
      ...
      auth_strategy = keystone
      [keystone_authtoken]
      ...
      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      auth_plugin = password
      project_domain_id = default
      user_domain_id = default
      project_name = service
      username = neutron
      password = NEUTRON_PASS
      Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
      Note: Comment out or remove any other options in the [keystone_authtoken] section.
    4. In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, router service, and overlapping IP addresses:
      [DEFAULT]
      ...
      core_plugin = ml2
      service_plugins = router
      allow_overlapping_ips = True
    5. (Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:
      [DEFAULT]
      ...
      verbose = True

To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.
  • Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:
    1. In the [ml2] section, enable the flat, VLAN, generic routing encapsulation (GRE), and virtual extensible LAN (VXLAN) network type drivers, GRE tenant networks, and the OVS mechanism driver:
      [ml2]
      ...
      type_drivers = flat,vlan,gre,vxlan
      tenant_network_types = gre
      mechanism_drivers = openvswitch
    2. In the [ml2_type_gre] section, configure the tunnel identifier (id) range:
      [ml2_type_gre]
      ...
      tunnel_id_ranges = 1:1000
    3. In the [securitygroup] section, enable security groups, enable ipset, and configure the OVS iptables firewall driver:
      [securitygroup]
      ...
      enable_security_group = True
      enable_ipset = True
      firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    4. In the [ovs] section, enable tunnels and configure the local tunnel endpoint:
      [ovs]
      ...
      local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
      Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your compute node.
    5. In the [agent] section, enable GRE tunnels:
      [agent]
      ...
      tunnel_types = gre

To configure the Open vSwitch (OVS) service
The OVS service provides the underlying virtual networking framework for instances.
  • Start the OVS service and configure it to start when the system boots:
    # systemctl enable openvswitch.service
    # systemctl start openvswitch.service
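    A quick hedged check that OVS is actually up before wiring the agent to it:
    # systemctl status openvswitch.service
    # ovs-vsctl show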

To configure Compute to use Networking
By default, distribution packages configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.
  • Edit the /etc/nova/nova.conf file and complete the following actions:
    1. In the [DEFAULT] section, configure the APIs and drivers:
      [DEFAULT]
      ...
      network_api_class = nova.network.neutronv2.api.API
      security_group_api = neutron
      linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
      firewall_driver = nova.virt.firewall.NoopFirewallDriver
      Note: By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
    2. In the [neutron] section, configure access parameters:
      [neutron]
      ...
      url = http://controller:9696
      auth_strategy = keystone
      admin_auth_url = http://controller:35357/v2.0
      admin_tenant_name = service
      admin_username = neutron
      admin_password = NEUTRON_PASS
      Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

To finalize the installation
  1. The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file, /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following command:
    # ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in configuration file rather than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the following commands to resolve this issue:
    # cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \
      /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
    # sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \
      /usr/lib/systemd/system/neutron-openvswitch-agent.service
  2. Restart the Compute service:
    # systemctl restart openstack-nova-compute.service
  3. Start the Open vSwitch (OVS) agent and configure it to start when the system boots:
    # systemctl enable neutron-openvswitch-agent.service
    # systemctl start neutron-openvswitch-agent.service

Verify operation
Note: Perform these commands on the controller node.
  1. Source the admin credentials to gain access to admin-only CLI commands:
    $ source admin-openrc.sh
  2. List agents to verify successful launch of the neutron agents:
    $ neutron agent-list
    +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
    | id                                   | agent_type         | host     | alive | admin_state_up | binary                    |
    +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
    | 30275801-e17a-41e4-8f53-9db63544f689 | Metadata agent     | network  | :-)   | True           | neutron-metadata-agent    |
    | 4bd8c50e-7bad-4f3b-955d-67658a491a15 | Open vSwitch agent | network  | :-)   | True           | neutron-openvswitch-agent |
    | 756e5bba-b70f-4715-b80e-e37f59803d20 | L3 agent           | network  | :-)   | True           | neutron-l3-agent          |
    | 9c45473c-6d6d-4f94-8df1-ebd0b6838d5f | DHCP agent         | network  | :-)   | True           | neutron-dhcp-agent        |
    | a5a49051-05eb-4b4f-bfc7-d36235fe9131 | Open vSwitch agent | compute1 | :-)   | True           | neutron-openvswitch-agent |
    +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+

OpenStack : Install Kilo on Fedora 22 : Networking Service : Install and configure network node

http://docs.openstack.org/kilo/install-guide/install/yum/content/neutron-network-node.html

Install and configure network node

The network node primarily handles internal and external routing and DHCP services for virtual networks.

To configure prerequisites
Before you install and configure OpenStack Networking, you must configure certain kernel networking parameters.
  1. Edit the /etc/sysctl.conf file to contain the following parameters:
    net.ipv4.ip_forward=1
    net.ipv4.conf.all.rp_filter=0
    net.ipv4.conf.default.rp_filter=0
  2. Implement the changes:
    # sysctl -p
To install the Networking components
  • # yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
      

To configure the Networking common components
The Networking common component configuration includes the authentication mechanism, message queue, and plug-in.
Note: Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets indicates potential default configuration options that you should retain.
  • Edit the /etc/neutron/neutron.conf file and complete the following actions:
    1. In the [database] section, comment out any connection options because network nodes do not directly access the database.
    2. In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:
      [DEFAULT]
      ...
      rpc_backend = rabbit
      [oslo_messaging_rabbit]
      ...
      rabbit_host = controller
      rabbit_userid = openstack
      rabbit_password = RABBIT_PASS
      Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
    3. In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
      [DEFAULT]
      ...
      auth_strategy = keystone
      [keystone_authtoken]
      ...
      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      auth_plugin = password
      project_domain_id = default
      user_domain_id = default
      project_name = service
      username = neutron
      password = NEUTRON_PASS
      Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
      Note: Comment out or remove any other options in the [keystone_authtoken] section.
    4. In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, router service, and overlapping IP addresses:
      [DEFAULT]
      ...
      core_plugin = ml2
      service_plugins = router
      allow_overlapping_ips = True
    5. (Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:
      [DEFAULT]
      ...
      verbose = True

To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.
  • Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:
    1. In the [ml2] section, enable the flat, VLAN, generic routing encapsulation (GRE), and virtual extensible LAN (VXLAN) network type drivers, GRE tenant networks, and the OVS mechanism driver:
      [ml2]
      ...
      type_drivers = flat,vlan,gre,vxlan
      tenant_network_types = gre
      mechanism_drivers = openvswitch
    2. In the [ml2_type_flat] section, configure the external flat provider network:
      [ml2_type_flat]
      ...
      flat_networks = external
    3. In the [ml2_type_gre] section, configure the tunnel identifier (id) range:
      [ml2_type_gre]
      ...
      tunnel_id_ranges = 1:1000
    4. In the [securitygroup] section, enable security groups, enable ipset, and configure the OVS iptables firewall driver:
      [securitygroup]
      ...
      enable_security_group = True
      enable_ipset = True
      firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    5. In the [ovs] section, enable tunnels, configure the local tunnel endpoint, and map the external flat provider network to the br-ex external network bridge:
      [ovs]
      ...
      local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
      bridge_mappings = external:br-ex
      Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your network node.
    6. In the [agent] section, enable GRE tunnels:
      [agent]
      ...
      tunnel_types = gre

To configure the Layer-3 (L3) agent
The Layer-3 (L3) agent provides routing services for virtual networks.
  • Edit the /etc/neutron/l3_agent.ini file and complete the following actions:
    1. In the [DEFAULT] section, configure the interface driver, external network bridge, and enable deletion of defunct router namespaces:
      [DEFAULT]
      ...
      interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
      external_network_bridge =
      router_delete_namespaces = True
      Note: The external_network_bridge option intentionally lacks a value to enable multiple external networks on a single agent.
    2. (Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:
      [DEFAULT]
      ...
      verbose = True

To configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
  1. Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:
    1. In the [DEFAULT] section, configure the interface and DHCP drivers and enable deletion of defunct DHCP namespaces:
      [DEFAULT]
      ...
      interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
      dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
      dhcp_delete_namespaces = True
    2. (Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:
      [DEFAULT]
      ...
      verbose = True
  2. (Optional)
    Tunneling protocols such as GRE include additional packet headers that increase overhead and decrease space available for the payload or user data. Without knowledge of the virtual network infrastructure, instances attempt to send packets using the default Ethernet maximum transmission unit (MTU) of 1500 bytes. Internet protocol (IP) networks contain the path MTU discovery (PMTUD) mechanism to detect end-to-end MTU and adjust packet size accordingly. However, some operating systems and networks block or otherwise lack support for PMTUD causing performance degradation or connectivity failure.
    Ideally, you can prevent these problems by enabling jumbo frames on the physical network that contains your tenant virtual networks. Jumbo frames support MTUs up to approximately 9000 bytes which negates the impact of GRE overhead on virtual networks. However, many network devices lack support for jumbo frames and OpenStack administrators often lack control over network infrastructure. Given the latter complications, you can also prevent MTU problems by reducing the instance MTU to account for GRE overhead. Determining the proper MTU value often takes experimentation, but 1454 bytes works in most environments. You can configure the DHCP server that assigns IP addresses to your instances to also adjust the MTU.
    Note: Some cloud images ignore the DHCP MTU option, in which case you should configure it using metadata, a script, or another suitable method.
    1. Edit the /etc/neutron/dhcp_agent.ini file and complete the following action:
      1. In the [DEFAULT] section, enable the dnsmasq configuration file:
        [DEFAULT]
        ...
        dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
    2. Create and edit the /etc/neutron/dnsmasq-neutron.conf file and complete the following action:
      1. Enable the DHCP MTU option (26) and configure it to 1454 bytes:
        dhcp-option-force=26,1454
    3. Kill any existing dnsmasq processes:
      # pkill dnsmasq
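    A couple of hedged checks that the MTU setting is really being handed out: once the DHCP agent has respawned dnsmasq, its command line should reference the extra config file, and an instance that renews its lease (and honors DHCP option 26) should come up with the reduced MTU:
      # ps -ef | grep [d]nsmasq          (look for --conf-file=.../dnsmasq-neutron.conf)
      $ ip link show eth0                (inside an instance: expect "mtu 1454")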

To configure the metadata agent
The metadata agent provides configuration information such as credentials to instances.
  1. Edit the /etc/neutron/metadata_agent.ini file and complete the following actions:
    1. In the [DEFAULT] section, configure access parameters:
      [DEFAULT]
      ...
      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      auth_region = RegionOne
      auth_plugin = password
      project_domain_id = default
      user_domain_id = default
      project_name = service
      username = neutron
      password = NEUTRON_PASS
      Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
    2. In the [DEFAULT] section, configure the metadata host:
      [DEFAULT]
      ...
      nova_metadata_ip = controller
    3. In the [DEFAULT] section, configure the metadata proxy shared secret:
      [DEFAULT]
      ...
      metadata_proxy_shared_secret = METADATA_SECRET
      Replace METADATA_SECRET with a suitable secret for the metadata proxy.
    4. (Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:
      [DEFAULT]
      ...
      verbose = True
  2. On the controller node, edit the /etc/nova/nova.conf file and complete the following action:
    1. In the [neutron] section, enable the metadata proxy and configure the secret:
      [neutron]
      ...
      service_metadata_proxy = True
      metadata_proxy_shared_secret = METADATA_SECRET
      Replace METADATA_SECRET with the secret you chose for the metadata proxy.
  3. On the controller node, restart the Compute API service:
    # systemctl restart openstack-nova-api.service
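    If you still need a value for METADATA_SECRET, one hedged way to generate a random one (any sufficiently random string works):
    $ openssl rand -hex 10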

To configure the Open vSwitch (OVS) service
The OVS service provides the underlying virtual networking framework for instances. The integration bridge br-int handles internal instance network traffic within OVS. The external bridge br-ex handles external instance network traffic within OVS. The external bridge requires a port on the physical external network interface to provide instances with external network access. In essence, this port connects the virtual and physical external networks in your environment.
  1. Start the OVS service and configure it to start when the system boots:
    # systemctl enable openvswitch.service
    # systemctl start openvswitch.service
  2. Add the external bridge:
    # ovs-vsctl add-br br-ex
  3. Add a port to the external bridge that connects to the physical external network interface:
    Replace INTERFACE_NAME with the actual interface name. For example, eth2 or ens256.
    # ovs-vsctl add-port br-ex INTERFACE_NAME
    Note: Depending on your network interface driver, you may need to disable generic receive offload (GRO) to achieve suitable throughput between your instances and the external network.
    To temporarily disable GRO on the external network interface while testing your environment:
    # ethtool -K INTERFACE_NAME gro off
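    To see whether GRO is currently enabled on that interface, a quick hedged check:
    # ethtool -k INTERFACE_NAME | grep generic-receive-offload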

To finalize the installation
  1. The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file, /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following command:
    # ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in configuration file rather than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the following commands to resolve this issue:
    # cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \
      /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
    # sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \
      /usr/lib/systemd/system/neutron-openvswitch-agent.service
  2. Start the Networking services and configure them to start when the system boots:
    # systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \
      neutron-dhcp-agent.service neutron-metadata-agent.service \
      neutron-ovs-cleanup.service
    # systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service \
      neutron-dhcp-agent.service neutron-metadata-agent.service
    Note: Do not explicitly start the neutron-ovs-cleanup service.

Verify operation
Note: Perform these commands on the controller node.
  1. Source the admin credentials to gain access to admin-only CLI commands:
    $ source admin-openrc.sh
  2. List agents to verify successful launch of the neutron agents:
    $ neutron agent-list
    +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
    | id                                   | agent_type         | host    | alive | admin_state_up | binary                    |
    +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
    | 30275801-e17a-41e4-8f53-9db63544f689 | Metadata agent     | network | :-)   | True           | neutron-metadata-agent    |
    | 4bd8c50e-7bad-4f3b-955d-67658a491a15 | Open vSwitch agent | network | :-)   | True           | neutron-openvswitch-agent |
    | 756e5bba-b70f-4715-b80e-e37f59803d20 | L3 agent           | network | :-)   | True           | neutron-l3-agent          |
    | 9c45473c-6d6d-4f94-8df1-ebd0b6838d5f | DHCP agent         | network | :-)   | True           | neutron-dhcp-agent        |
    +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+