Wednesday, May 28, 2014

Tuesday, May 27, 2014

git note

To get the source code of OpenStack

git clone git://git.openstack.org/openstack/swift

https://wiki.openstack.org/wiki/Gerrit_Workflow

  1. git clone $URL (get the code; the default branch is "master")
  2. git checkout -b $my_branch_name (create a topic branch)
  3. Edit code on the topic branch and run the unit tests.
  4. git commit -a (commit the change locally)
  5. git checkout master && git remote update && git pull (update master)
  6. git checkout $my_branch_name
  7. git rebase master (steps 5-7 rebase the topic branch onto the updated master)
  8. Repeat steps 3 & 4 if required.
  9. git review
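The branch-and-rebase cycle above (steps 2-7) can be exercised end to end without touching Gerrit. The following sketch replays it in a throwaway local repository; the repo and branch names are illustrative, and the "upstream change" commit stands in for step 5's remote update.

```shell
# Runnable stand-in for the openstack/swift clone: no network needed.
set -e
work=$(mktemp -d)
cd "$work"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo
trunk=$(git symbolic-ref --short HEAD)   # "master" or "main", depending on git version
echo base > file.txt && git add file.txt && git commit -qm "initial"
git checkout -qb my-fix                  # step 2: create a topic branch
echo fix >> file.txt && git commit -qam "my fix"   # steps 3-4: edit and commit
git checkout -q "$trunk"
echo upstream > other.txt && git add other.txt
git commit -qm "upstream change"         # step 5 stand-in: trunk moved on
git checkout -q my-fix
git rebase -q "$trunk"                   # step 7: replay "my fix" on top of trunk
git log --oneline                        # topic branch now carries all three commits
```

After the rebase, `git review` (in a real Gerrit-backed clone) would push the rebased commit for review.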

Friday, May 23, 2014

Running an instance in OpenStack

I had to delete the "private" subnet of the "demo" tenant that openstack-packstack created to make the following steps work.

# login as openstack ID

# Source credentials file
source keystonerc_admin

# Create private network
neutron net-create private

# Associate subnet
neutron subnet-create --name private_subnet private 10.0.0.0/24

# create router
neutron router-create myrouter

# uplink router to the public internet
neutron router-gateway-set myrouter public


# uplink subnet to router
neutron router-interface-add myrouter private_subnet


# Create a security group for the jump host
neutron security-group-create jumphost

# Add a rule to allow ICMP in
neutron security-group-rule-create --protocol icmp jumphost

# Add a rule to allow SSH in
neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 jumphost

# find out net id
neutron net-list

# Launch jump host:
nova boot --image cirros --flavor 1 jumphost --security_groups jumphost  --nic net-id=<net id of "private">

# Determine port-id attached to jump host
neutron port-list -- --device_id=<instance_id>

# create floatingip
neutron floatingip-create public --port-id <port-id>

# test ping/ssh
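The final test step might look like the following. The floating IP is purely an example here; use the address printed by the floatingip-create command above. The cirros image's default login user is "cirros".

```shell
# Example only: substitute the floating IP reported by
# "neutron floatingip-create public --port-id <port-id>" above.
FLOAT_IP=172.24.4.228

ping -c 3 "$FLOAT_IP"      # ICMP is allowed by the jumphost security group
ssh cirros@"$FLOAT_IP"     # TCP/22 is allowed by the jumphost security group
```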




Tuesday, May 20, 2014

Install Openstack via RDO packstack

Please also refer here

These instructions are to install the current ("Icehouse") release.

Please name the host with a fully qualified domain name rather than a short-form name to avoid DNS issues with Packstack. 

Step 0: Prerequisites

Software: Red Hat Enterprise Linux (RHEL) 6.5 is the minimum recommended version, or the equivalent version of one of the RHEL-based Linux distributions such as CentOS, Scientific Linux, etc., or Fedora 20 or later.

Step 1: Software repositories


Update your current packages: 


sudo yum update -y
 
Set up the RDO repositories: 


sudo yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
 
 
Step 2: Install Packstack Installer

sudo yum install -y openstack-packstack

Step 3: Run Packstack to install OpenStack

Packstack takes the work out of manually setting up OpenStack. For a single-node OpenStack deployment, run the following command:

packstack --allinone

The installer will prompt for the root password of each host node being installed on the network, so that it can configure each node remotely using Puppet.


The following is the output from my test server:

# sudo yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
 * base: mirrors-pa.sioru.com
 * epel: mirror.umd.edu
 * extras: mirror.us.leaseweb.net
 * updates: mirrors-pa.sioru.com
Setting up Install Process
rdo-release.rpm                                          |  12 kB     00:00    
Examining /var/tmp/yum-root-IaRbXv/rdo-release.rpm: rdo-release-icehouse-3.noarch
Marking /var/tmp/yum-root-IaRbXv/rdo-release.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package rdo-release.noarch 0:icehouse-3 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package            Arch          Version             Repository           Size
================================================================================
Installing:
 rdo-release        noarch        icehouse-3          /rdo-release        8.7 k

Transaction Summary
================================================================================
Install       1 Package(s)

Total size: 8.7 k
Installed size: 8.7 k
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : rdo-release-icehouse-3.noarch                                1/1
  Verifying  : rdo-release-icehouse-3.noarch                                1/1

Installed:
  rdo-release.noarch 0:icehouse-3                                              

Complete!


#  sudo yum install -y openstack-packstack
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
 * base: mirrors.seas.harvard.edu
 * epel: mirror.umd.edu
 * extras: mirror.millry.co
 * updates: mirrors-pa.sioru.com
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package openstack-packstack.noarch 0:2014.1.1-0.12.dev1068.el6 will be installed
--> Processing Dependency: openstack-packstack-puppet = 2014.1.1-0.12.dev1068.el6 for package: openstack-packstack-2014.1.1-0.12.dev1068.el6.noarch
--> Processing Dependency: openstack-puppet-modules for package: openstack-packstack-2014.1.1-0.12.dev1068.el6.noarch
--> Running transaction check
---> Package openstack-packstack-puppet.noarch 0:2014.1.1-0.12.dev1068.el6 will be installed
---> Package openstack-puppet-modules.noarch 0:2014.1-11.1.el6 will be installed
--> Processing Dependency: rubygem-json for package: openstack-puppet-modules-2014.1-11.1.el6.noarch
--> Running transaction check
---> Package rubygem-json.x86_64 0:1.5.5-1.el6 will be installed
--> Processing Dependency: ruby(abi) = 1.8 for package: rubygem-json-1.5.5-1.el6.x86_64
--> Processing Dependency: rubygems for package: rubygem-json-1.5.5-1.el6.x86_64
--> Processing Dependency: /usr/bin/ruby for package: rubygem-json-1.5.5-1.el6.x86_64
--> Processing Dependency: libruby.so.1.8()(64bit) for package: rubygem-json-1.5.5-1.el6.x86_64
--> Running transaction check
---> Package ruby.x86_64 0:1.8.7.352-13.el6 will be installed
---> Package ruby-libs.x86_64 0:1.8.7.352-13.el6 will be installed
--> Processing Dependency: libreadline.so.5()(64bit) for package: ruby-libs-1.8.7.352-13.el6.x86_64
---> Package rubygems.noarch 0:1.3.7-5.el6 will be installed
--> Processing Dependency: ruby-rdoc for package: rubygems-1.3.7-5.el6.noarch
--> Running transaction check
---> Package compat-readline5.x86_64 0:5.2-17.1.el6 will be installed
---> Package ruby-rdoc.x86_64 0:1.8.7.352-13.el6 will be installed
--> Processing Dependency: ruby-irb = 1.8.7.352-13.el6 for package: ruby-rdoc-1.8.7.352-13.el6.x86_64
--> Running transaction check
---> Package ruby-irb.x86_64 0:1.8.7.352-13.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package              Arch   Version                   Repository          Size
================================================================================
Installing:
 openstack-packstack  noarch 2014.1.1-0.12.dev1068.el6 openstack-icehouse 197 k
Installing for dependencies:
 compat-readline5     x86_64 5.2-17.1.el6              base               130 k
 openstack-packstack-puppet
                      noarch 2014.1.1-0.12.dev1068.el6 openstack-icehouse  33 k
 openstack-puppet-modules
                      noarch 2014.1-11.1.el6           openstack-icehouse 1.5 M
 ruby                 x86_64 1.8.7.352-13.el6          updates            534 k
 ruby-irb             x86_64 1.8.7.352-13.el6          updates            314 k
 ruby-libs            x86_64 1.8.7.352-13.el6          updates            1.6 M
 ruby-rdoc            x86_64 1.8.7.352-13.el6          updates            377 k
 rubygem-json         x86_64 1.5.5-1.el6               puppetlabs-deps    763 k
 rubygems             noarch 1.3.7-5.el6               base               207 k

Transaction Summary
================================================================================
Install      10 Package(s)

Total download size: 5.7 M
Installed size: 16 M
Downloading Packages:
(1/10): compat-readline5-5.2-17.1.el6.x86_64.rpm         | 130 kB     00:00    
(2/10): openstack-packstack-2014.1.1-0.12.dev1068.el6.no | 197 kB     00:00    
(3/10): openstack-packstack-puppet-2014.1.1-0.12.dev1068 |  33 kB     00:00    
(4/10): openstack-puppet-modules-2014.1-11.1.el6.noarch. | 1.5 MB     00:01    
(5/10): ruby-1.8.7.352-13.el6.x86_64.rpm                 | 534 kB     00:00    
(6/10): ruby-irb-1.8.7.352-13.el6.x86_64.rpm             | 314 kB     00:00    
(7/10): ruby-libs-1.8.7.352-13.el6.x86_64.rpm            | 1.6 MB     00:01    
(8/10): ruby-rdoc-1.8.7.352-13.el6.x86_64.rpm            | 377 kB     00:00    
(9/10): rubygem-json-1.5.5-1.el6.x86_64.rpm              | 763 kB     00:00    
(10/10): rubygems-1.3.7-5.el6.noarch.rpm                 | 207 kB     00:00    
--------------------------------------------------------------------------------
Total                                           737 kB/s | 5.7 MB     00:07    
warning: rpmts_HdrFromFdno: Header V4 RSA/SHA1 Signature, key ID 0e4fbd28: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Icehouse
Importing GPG key 0x0E4FBD28:
 Userid : rdo-icehouse-sign <rdo-info@redhat.com>
 Package: rdo-release-icehouse-3.noarch (@/rdo-release)
 From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Icehouse
warning: rpmts_HdrFromFdno: Header V4 RSA/SHA1 Signature, key ID 4bd6ec30: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs
Importing GPG key 0x4BD6EC30:
 Userid : Puppet Labs Release Key (Puppet Labs Release Key) <info@puppetlabs.com>
 Package: rdo-release-icehouse-3.noarch (@/rdo-release)
 From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : openstack-packstack-puppet-2014.1.1-0.12.dev1068.el6.noa    1/10
  Installing : compat-readline5-5.2-17.1.el6.x86_64                        2/10
  Installing : ruby-libs-1.8.7.352-13.el6.x86_64                           3/10
  Installing : ruby-1.8.7.352-13.el6.x86_64                                4/10
  Installing : ruby-irb-1.8.7.352-13.el6.x86_64                            5/10
  Installing : ruby-rdoc-1.8.7.352-13.el6.x86_64                           6/10
  Installing : rubygems-1.3.7-5.el6.noarch                                 7/10
  Installing : rubygem-json-1.5.5-1.el6.x86_64                             8/10
  Installing : openstack-puppet-modules-2014.1-11.1.el6.noarch             9/10
  Installing : openstack-packstack-2014.1.1-0.12.dev1068.el6.noarch       10/10
  Verifying  : ruby-irb-1.8.7.352-13.el6.x86_64                            1/10
  Verifying  : ruby-libs-1.8.7.352-13.el6.x86_64                           2/10
  Verifying  : compat-readline5-5.2-17.1.el6.x86_64                        3/10
  Verifying  : openstack-puppet-modules-2014.1-11.1.el6.noarch             4/10
  Verifying  : openstack-packstack-puppet-2014.1.1-0.12.dev1068.el6.noa    5/10
  Verifying  : ruby-rdoc-1.8.7.352-13.el6.x86_64                           6/10
  Verifying  : openstack-packstack-2014.1.1-0.12.dev1068.el6.noarch        7/10
  Verifying  : rubygems-1.3.7-5.el6.noarch                                 8/10
  Verifying  : ruby-1.8.7.352-13.el6.x86_64                                9/10
  Verifying  : rubygem-json-1.5.5-1.el6.x86_64                            10/10

Installed:
  openstack-packstack.noarch 0:2014.1.1-0.12.dev1068.el6                       

Dependency Installed:
  compat-readline5.x86_64 0:5.2-17.1.el6                                       
  openstack-packstack-puppet.noarch 0:2014.1.1-0.12.dev1068.el6                
  openstack-puppet-modules.noarch 0:2014.1-11.1.el6                            
  ruby.x86_64 0:1.8.7.352-13.el6                                               
  ruby-irb.x86_64 0:1.8.7.352-13.el6                                           
  ruby-libs.x86_64 0:1.8.7.352-13.el6                                          
  ruby-rdoc.x86_64 0:1.8.7.352-13.el6                                          
  rubygem-json.x86_64 0:1.5.5-1.el6                                            
  rubygems.noarch 0:1.3.7-5.el6                                                

Complete!


# sudo useradd -U -G wheel -s /bin/bash -m openstack



Now log in as the openstack user:

[openstack@novo ~]$ packstack --allinone
Welcome to Installer setup utility
Packstack changed given value  to required value /home/openstack/.ssh/id_rsa.pub

Installing:
Clean Up                                             [ DONE ]
root@192.168.1.188's password:
Setting up ssh keys                               [ ERROR ]

ERROR : Failed to run remote script, stdout:
stderr: Warning: Permanently added '192.168.1.188' (RSA) to the list of known hosts.
Connection closed by UNKNOWN

The above error is caused by the hostname not being in FQDN format.

# vi /etc/hosts

Add the following line:

192.168.1.188    novo.test.com novo

# hostname novo
# hostname
novo
# hostname -f (verify that the FQDN is shown)
novo.test.com

After fixing the problem, re-run packstack with the --answer-file option followed by the generated answer file, packstack-answers-20140520-175735.txt:


[openstack@novo ~]$ packstack --answer-file packstack-answers-20140520-175735.txt
Welcome to Installer setup utility

Installing:
Clean Up                                             [ DONE ]
Setting up ssh keys                                  [ DONE ]
Discovering hosts' details                           [ DONE ]
Adding pre install manifest entries                  [ DONE ]
Adding MySQL manifest entries                        [ DONE ]
Adding AMQP manifest entries                         [ DONE ]
Adding Keystone manifest entries                     [ DONE ]
Adding Glance Keystone manifest entries              [ DONE ]
Adding Glance manifest entries                       [ DONE ]
Installing dependencies for Cinder                   [ DONE ]
Adding Cinder Keystone manifest entries              [ DONE ]
Adding Cinder manifest entries                       [ DONE ]
Checking if the Cinder server has a cinder-volumes vg[ DONE ]
Adding Nova API manifest entries                     [ DONE ]
Adding Nova Keystone manifest entries                [ DONE ]
Adding Nova Cert manifest entries                    [ DONE ]
Adding Nova Conductor manifest entries               [ DONE ]
Creating ssh keys for Nova migration                 [ DONE ]
Gathering ssh host keys for Nova migration           [ DONE ]
Adding Nova Compute manifest entries                 [ DONE ]
Adding Nova Scheduler manifest entries               [ DONE ]
Adding Nova VNC Proxy manifest entries               [ DONE ]
Adding Nova Common manifest entries                  [ DONE ]
Adding Openstack Network-related Nova manifest entries[ DONE ]
Adding Neutron API manifest entries                  [ DONE ]
Adding Neutron Keystone manifest entries             [ DONE ]
Adding Neutron L3 manifest entries                   [ DONE ]
Adding Neutron L2 Agent manifest entries             [ DONE ]
Adding Neutron DHCP Agent manifest entries           [ DONE ]
Adding Neutron LBaaS Agent manifest entries          [ DONE ]
Adding Neutron Metadata Agent manifest entries       [ DONE ]
Adding OpenStack Client manifest entries             [ DONE ]
Adding Horizon manifest entries                      [ DONE ]
Adding Swift Keystone manifest entries               [ DONE ]
Adding Swift builder manifest entries                [ DONE ]
Adding Swift proxy manifest entries                  [ DONE ]
Adding Swift storage manifest entries                [ DONE ]
Adding Swift common manifest entries                 [ DONE ]
Adding Provisioning manifest entries                 [ DONE ]
Adding MongoDB manifest entries                      [ DONE ]
Adding Ceilometer manifest entries                   [ DONE ]
Adding Ceilometer Keystone manifest entries          [ DONE ]
Adding Nagios server manifest entries                [ DONE ]
Adding Nagios host manifest entries                  [ DONE ]
Adding post install manifest entries                 [ DONE ]
Preparing servers                                    [ DONE ]
Installing Dependencies                              [ DONE ]
Copying Puppet modules and manifests                 [ DONE ]
Applying 192.168.1.188_prescript.pp
192.168.1.188_prescript.pp:                          [ DONE ]        
Applying 192.168.1.188_mysql.pp
Applying 192.168.1.188_amqp.pp
192.168.1.188_mysql.pp:                              [ DONE ]    
192.168.1.188_amqp.pp:                               [ DONE ]    
Applying 192.168.1.188_keystone.pp
Applying 192.168.1.188_glance.pp
Applying 192.168.1.188_cinder.pp
192.168.1.188_keystone.pp:                           [ DONE ]       
192.168.1.188_glance.pp:                             [ DONE ]       
192.168.1.188_cinder.pp:                             [ DONE ]       
Applying 192.168.1.188_api_nova.pp
192.168.1.188_api_nova.pp:                           [ DONE ]       
Applying 192.168.1.188_nova.pp
192.168.1.188_nova.pp:                               [ DONE ]   
Applying 192.168.1.188_neutron.pp
192.168.1.188_neutron.pp:                            [ DONE ]      
Applying 192.168.1.188_osclient.pp
Applying 192.168.1.188_horizon.pp
192.168.1.188_osclient.pp:                           [ DONE ]       
192.168.1.188_horizon.pp:                            [ DONE ]       
Applying 192.168.1.188_ring_swift.pp
192.168.1.188_ring_swift.pp:                         [ DONE ]         
Applying 192.168.1.188_swift.pp
Applying 192.168.1.188_provision.pp
192.168.1.188_swift.pp:                              [ DONE ]        
192.168.1.188_provision.pp:                          [ DONE ]        
Applying 192.168.1.188_mongodb.pp
192.168.1.188_mongodb.pp:                            [ DONE ]      
Applying 192.168.1.188_ceilometer.pp
Applying 192.168.1.188_nagios.pp
Applying 192.168.1.188_nagios_nrpe.pp
192.168.1.188_ceilometer.pp:                         [ DONE ]          
192.168.1.188_nagios.pp:                             [ DONE ]          
192.168.1.188_nagios_nrpe.pp:                        [ DONE ]          
Applying 192.168.1.188_postscript.pp
192.168.1.188_postscript.pp:                         [ DONE ]         
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]

 **** Installation completed successfully ******


Additional information:
 * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
 * Did not create a cinder volume group, one already existed
 * File /root/keystonerc_admin has been created on OpenStack client host 192.168.1.188. To use the command line tools you need to source the file.
 * Copy of keystonerc_admin file has been created for non-root user in /home/openstack.
 * To access the OpenStack Dashboard browse to http://192.168.1.188/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
 * To use Nagios, browse to http://192.168.1.188/nagios username : nagiosadmin, password : 30856dbf515e42ee
 * The installation log file is available at: /var/tmp/packstack/20140520-190707-JWMGRW/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20140520-190707-JWMGRW/manifests


[openstack@novo ~]$ cat keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=6af8187b6ef64c40
export OS_AUTH_URL=http://192.168.1.188:5000/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '





[openstack@novo ~]$ pstree
init─┬─NetworkManager─┬─dhclient
     │                └─{NetworkManager}
     ├─abrtd
     ├─acpid
     ├─atd
     ├─auditd───{auditd}
     ├─automount───4*[{automount}]
     ├─bluetoothd
     ├─bonobo-activati───{bonobo-activat}
     ├─2*[ceilometer-agen]
     ├─ceilometer-agen───ceilometer-agen
     ├─2*[ceilometer-alar]
     ├─ceilometer-api
     ├─ceilometer-coll───ceilometer-coll
     ├─certmonger
     ├─cinder-api───cinder-api
     ├─cinder-backup
     ├─cinder-schedule
     ├─cinder-volume───cinder-volume
     ├─clock-applet
     ├─console-kit-dae───63*[{console-kit-da}]
     ├─crond
     ├─cupsd
     ├─2*[dbus-daemon───{dbus-daemon}]
     ├─2*[dbus-launch]
     ├─devkit-power-da
     ├─e-calendar-fact───{e-calendar-fac}
     ├─epmd
     ├─firefox───43*[{firefox}]
     ├─gconf-im-settin
     ├─gconfd-2
     ├─gdm-binary─┬─gdm-simple-slav─┬─Xorg
     │            │                 ├─gdm-session-wor─┬─gnome-session─┬─abrt-ap+
     │            │                 │                 │               ├─bluetoo+
     │            │                 │                 │               ├─evoluti+
     │            │                 │                 │               ├─gdu-not+
     │            │                 │                 │               ├─gnome-p+
     │            │                 │                 │               ├─gnome-p+
     │            │                 │                 │               ├─gnome-v+
     │            │                 │                 │               ├─gpk-upd+
     │            │                 │                 │               ├─metacit+
     │            │                 │                 │               ├─nautilus
     │            │                 │                 │               ├─nm-appl+
     │            │                 │                 │               ├─polkit-+
     │            │                 │                 │               ├─python
     │            │                 │                 │               ├─restore+
     │            │                 │                 │               └─{gnome-+
     │            │                 │                 └─{gdm-session-wo}
     │            │                 └─{gdm-simple-sla}
     │            └─{gdm-binary}
     ├─gdm-user-switch
     ├─gedit
     ├─glance-api───8*[glance-api]
     ├─glance-registry───glance-registry
     ├─gnome-keyring-d───2*[{gnome-keyring-}]
     ├─gnome-screensav
     ├─gnome-settings-───{gnome-settings}
     ├─gnome-terminal─┬─bash
     │                ├─4*[bash───su───bash]
     │                ├─bash───su───bash───pstree
     │                ├─bash───su───bash───mysqld_safe───mysqld───49*[{mysqld}]
     │                ├─gnome-pty-helpe
     │                └─{gnome-terminal}
     ├─gnote
     ├─gvfs-afc-volume───{gvfs-afc-volum}
     ├─gvfs-gdu-volume
     ├─gvfs-gphoto2-vo
     ├─gvfsd
     ├─gvfsd-burn
     ├─gvfsd-metadata
     ├─gvfsd-trash
     ├─hald─┬─hald-runner─┬─hald-addon-acpi
     │      │             ├─2*[hald-addon-gene]
     │      │             ├─hald-addon-inpu
     │      │             └─hald-addon-rfki
     │      └─{hald}
     ├─httpd─┬─httpd───17*[{httpd}]
     │       └─8*[httpd]
     ├─im-settings-dae
     ├─irqbalance
     ├─keystone-all
     ├─ksmtuned───sleep
     ├─libvirtd───10*[{libvirtd}]
     ├─master─┬─pickup
     │        └─qmgr
     ├─memcached───9*[{memcached}]
     ├─5*[mingetty]
     ├─modem-manager
     ├─mongod───20*[{mongod}]
     ├─nagios───{nagios}
     ├─neutron-dhcp-ag
     ├─neutron-l3-agen
     ├─neutron-metadat
     ├─neutron-ns-meta
     ├─neutron-openvsw───sudo───neutron-rootwra───ovsdb-client
     ├─neutron-server
     ├─notification-ar
     ├─nova-api───24*[nova-api]
     ├─nova-cert
     ├─nova-compute───21*[{nova-compute}]
     ├─nova-conductor───8*[nova-conductor]
     ├─nova-consoleaut
     ├─nova-novncproxy
     ├─nova-scheduler
     ├─nrpe
     ├─ovs-vswitchd───ovs-vswitchd───ovs-vswitchd
     ├─ovsdb-server───ovsdb-server
     ├─packagekitd
     ├─polkitd
     ├─pulseaudio─┬─gconf-helper
     │            └─3*[{pulseaudio}]
     ├─rabbitmq-server───bash───rabbitmq-server───su───beam.smp─┬─inet_gethost─+
     │                                                          └─42*[{beam.smp+
     ├─rpc.statd
     ├─rpcbind
     ├─rsyslogd───3*[{rsyslogd}]
     ├─rtkit-daemon───2*[{rtkit-daemon}]
     ├─seahorse-daemon
     ├─sshd
     ├─swift-account-a
     ├─2*[swift-account-r]
     ├─swift-account-s───swift-account-s
     ├─3*[swift-container]
     ├─swift-container───swift-container
     ├─swift-object-au
     ├─swift-object-re
     ├─swift-object-se───swift-object-se
     ├─swift-object-up
     ├─swift-proxy-ser───8*[swift-proxy-ser]
     ├─tgtd───tgtd
     ├─trashapplet
     ├─tuned
     ├─udevd───2*[udevd]
     ├─udisks-daemon─┬─udisks-daemon
     │               └─{udisks-daemon}
     ├─wnck-applet
     ├─wpa_supplicant
     └─xinetd



Monday, May 12, 2014

Installing virtualization packages on Red Hat Enterprise Linux



To use virtualization on Red Hat Enterprise Linux you require at least the qemu-kvm and qemu-img packages. These packages provide the user-level KVM emulator and disk image manager on the host Red Hat Enterprise Linux system.
To install the qemu-kvm and qemu-img packages, run the following command: 
 
# yum install qemu-kvm qemu-img
 
Several additional virtualization management packages are also available.

Recommended virtualization packages:

python-virtinst
    Provides the virt-install command for creating virtual machines.
libvirt
    Provides the server and host-side libraries for interacting with hypervisors and host systems, including the libvirtd daemon that handles the library calls, manages virtual machines and controls the hypervisor.
libvirt-python
    Contains a module that permits applications written in the Python programming language to use the interface supplied by the libvirt API.
virt-manager
    Also known as Virtual Machine Manager, provides a graphical tool for administering virtual machines. It uses the libvirt-client library as the management API.
libvirt-client
    Provides the client-side APIs and libraries for accessing libvirt servers, including the virsh command-line tool to manage and control virtual machines and hypervisors from the command line or a special virtualization shell.
Install all of these recommended virtualization packages with the following command: 
 
# yum install virt-manager libvirt libvirt-python python-virtinst libvirt-client
 
Installing Virtualization package groups
The virtualization packages can also be installed from package groups. The following table describes the virtualization package groups and what they provide.

Table 5.1. Virtualization Package Groups

Virtualization
    Description: Provides an environment for hosting virtual machines
    Mandatory packages: qemu-kvm
    Optional packages: qemu-guest-agent, qemu-kvm-tools

Virtualization Client
    Description: Clients for installing and managing virtualization instances
    Mandatory packages: python-virtinst, virt-manager, virt-viewer
    Optional packages: virt-top

Virtualization Platform
    Description: Provides an interface for accessing and controlling virtual machines and containers
    Mandatory packages: libvirt, libvirt-client, virt-who, virt-what
    Optional packages: fence-virtd-libvirt, fence-virtd-multicast, fence-virtd-serial, libvirt-cim, libvirt-java, libvirt-qmf, libvirt-snmp, perl-Sys-Virt

Virtualization Tools
    Description: Tools for offline virtual image management
    Mandatory packages: libguestfs
    Optional packages: libguestfs-java, libguestfs-tools, virt-v2v

To install a package group, run the yum groupinstall <groupname> command. For instance, to install the Virtualization Tools package group, run the yum groupinstall "Virtualization Tools" command. 
 
 

KVM and virtualization in Red Hat Enterprise Linux

What is KVM?
KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux on AMD64 and Intel 64 hardware that is built into the standard Red Hat Enterprise Linux 6 kernel. It can run multiple, unmodified Windows and Linux guest operating systems. The KVM hypervisor in Red Hat Enterprise Linux is managed with the libvirt API and tools built for libvirt (such as virt-manager and virsh). Virtual machines are executed and run as multi-threaded Linux processes controlled by these tools.
Overcommitting
The KVM hypervisor supports overcommitting of system resources. Overcommitting means allocating more virtualized CPUs or memory than the available resources on the system. Memory overcommitting allows hosts to utilize memory and virtual memory to increase guest densities.
Thin provisioning
Thin provisioning allows the allocation of flexible storage and optimizes the available space for every guest virtual machine. It gives the appearance that there is more physical storage on the guest than is actually available. This is not the same as overcommitting, as it pertains only to storage, not to CPU or memory allocations. However, as with overcommitting, care must be taken not to exhaust the actual physical storage.
KSM
Kernel SamePage Merging (KSM), used by the KVM hypervisor, allows KVM guests to share identical memory pages. These shared pages are usually common libraries or other identical, high-use data. KSM allows for greater guest density of identical or similar guest operating systems by avoiding memory duplication.
QEMU Guest Agent
The QEMU Guest Agent runs on the guest operating system and allows the host machine to issue commands to the guest operating system.
Disk I/O throttling
When several virtual machines are running simultaneously, they can interfere with system performance by using excessive disk I/O. Disk I/O throttling in KVM provides the ability to set a limit on disk I/O requests sent from virtual machines to the host machine. This can prevent a virtual machine from over utilizing shared resources, and impacting the performance of other virtual machines.
The libvirt package provides:
  • A common, generic, and stable layer to securely manage virtual machines on a host.
  • A common interface for managing local systems and networked hosts.
  • All of the APIs required to provision, create, modify, monitor, control, migrate, and stop virtual machines, but only if the hypervisor supports these operations. Although multiple hosts may be accessed with libvirt simultaneously, the APIs are limited to single node operations.
The libvirt package is designed as a building block for higher level management tools and applications, for example, virt-manager and the virsh command-line management tools. 
virsh
The virsh command-line tool is built on the libvirt management API and operates as an alternative to the graphical virt-manager application. The virsh command can be used in read-only mode by unprivileged users or, with root access, full administration functionality. The virsh command is ideal for scripting virtualization administration.
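As an example of that scripting use, a simple pass over all defined guests might look like the following sketch. It assumes libvirtd is running and the invoking user has access to the qemu:///system URI; only standard virsh subcommands are used.

```shell
# List every defined domain, running or not.
virsh --connect qemu:///system list --all

# Print basic information for each running domain.
for dom in $(virsh --connect qemu:///system list --name); do
    virsh --connect qemu:///system dominfo "$dom"
done
```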
virt-manager
virt-manager is a graphical desktop tool for managing virtual machines. It allows access to graphical guest consoles and can be used to perform virtualization administration, virtual machine creation, migration, and configuration tasks. The ability to view virtual machines, host statistics, device information and performance graphs is also provided. The local hypervisor and remote hypervisors can be managed through a single interface.













Sunday, May 11, 2014

How to add a user to wheel group and enable sudo for the wheel group

To enable sudo for the wheel group :

Make sure the following line is present in /etc/sudoers (edit it with visudo):

%wheel  ALL=(ALL)       ALL

Then add the user to the wheel group (use -a so the user's existing supplementary groups are preserved; plain -G would replace them):

usermod -aG wheel <username>

Saturday, May 10, 2014

How to install desktop capturing software on CentOS

I use recordmydesktop for capturing desktop video

# yum install recordmydesktop (this is the backend software)

Use one of the following as the frontend:

# yum install gtk-recordmydesktop

or

# yum install qt-recordmydesktop

recordmydesktop will then appear under "Applications" -> "Sound & Video".

Saturday, May 3, 2014

Build linux kernel


Do not use root to perform these tasks

Download 3.14.2 linux kernel source code

[@novo ~]$ wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.14.2.tar.xz
--2014-05-03 11:03:19--  https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.14.2.tar.xz
Resolving www.kernel.org... 199.204.44.194, 198.145.20.140, 149.20.4.69, ...
Connecting to www.kernel.org|199.204.44.194|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 78399780 (75M) [application/x-xz]
Saving to: “linux-3.14.2.tar.xz”

100%[======================================>] 78,399,780  2.08M/s   in 38s    

2014-05-03 11:03:58 (1.95 MB/s) - “linux-3.14.2.tar.xz” saved [78399780/78399780]


[@novo ~]$ mkdir ~/linux
[@novo ~]$ mv ~/linux-3.14.2.tar.xz linux
[@novo ~]$ cd linux
[@novo linux]$ ls
linux-3.14.2.tar.xz


An .xz archive must be decompressed with tar's -J option:

[@novo linux]$ tar -xvJf linux-3.14.2.tar.xz
:
:
:
linux-3.14.2/virt/kvm/iodev.h
linux-3.14.2/virt/kvm/iommu.c
linux-3.14.2/virt/kvm/irq_comm.c
linux-3.14.2/virt/kvm/irqchip.c
linux-3.14.2/virt/kvm/kvm_main.c
linux-3.14.2/virt/kvm/vfio.c
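As an aside, tar's xz support can be verified on a throwaway archive first (the -J flag works for both creating and extracting; assumes the xz tools are installed):

```shell
tmpdir=$(mktemp -d)                # scratch directory, cleaned up below
cd "$tmpdir"
mkdir demo
echo hello > demo/file.txt
tar -cJf demo.tar.xz demo          # -J selects xz compression on create
rm -rf demo
tar -xJf demo.tar.xz               # -J again for decompression on extract
content=$(cat demo/file.txt)       # → hello
echo "$content"
```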


[@novo linux]$ cd linux-3.14.2
[@novo linux-3.14.2]$ which make
/usr/bin/make

make menuconfig requires ncurses-devel


Here we need root

[root@novo ~]# yum install ncurses-devel

Also we need gcc

[root@novo ~]# yum install gcc

Now back to regular user ID

[@novo linux-3.14.2]$ make menuconfig
  HOSTCC  scripts/kconfig/conf.o
  HOSTCC  scripts/kconfig/lxdialog/checklist.o
  HOSTCC  scripts/kconfig/lxdialog/inputbox.o
  HOSTCC  scripts/kconfig/lxdialog/menubox.o
  HOSTCC  scripts/kconfig/lxdialog/textbox.o
  HOSTCC  scripts/kconfig/lxdialog/util.o
  HOSTCC  scripts/kconfig/lxdialog/yesno.o
  HOSTCC  scripts/kconfig/mconf.o
  HOSTCC  scripts/kconfig/zconf.tab.o
  HOSTLD  scripts/kconfig/mconf
scripts/kconfig/mconf Kconfig
#
# using defaults found in /boot/config-2.6.32-431.el6.x86_64
#
/boot/config-2.6.32-431.el6.x86_64:497:warning: symbol value 'm' invalid for X86_INTEL_PSTATE
/boot/config-2.6.32-431.el6.x86_64:565:warning: symbol value 'm' invalid for PCCARD_NONSTATIC
/boot/config-2.6.32-431.el6.x86_64:2730:warning: symbol value 'm' invalid for MFD_WM8400
/boot/config-2.6.32-431.el6.x86_64:2731:warning: symbol value 'm' invalid for MFD_WM831X
/boot/config-2.6.32-431.el6.x86_64:2732:warning: symbol value 'm' invalid for MFD_WM8350
/boot/config-2.6.32-431.el6.x86_64:2745:warning: symbol value 'm' invalid for MFD_WM8350_I2C
/boot/config-2.6.32-431.el6.x86_64:2747:warning: symbol value 'm' invalid for AB3100_CORE
configuration written to .config

*** End of the configuration.
*** Execute 'make' to start the build or try 'make help'.




[@novo linux-3.14.2]$ make
:

:
  LD      drivers/net/ethernet/nvidia/built-in.o
  CC [M]  drivers/net/ethernet/nvidia/forcedeth.o
  LD      drivers/net/ethernet/oki-semi/built-in.o
  LD      drivers/net/ethernet/oki-semi/pch_gbe/built-in.o
  CC [M]  drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_phy.o
  CC [M]  drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_ethtool.o
  CC [M]  drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_param.o
  CC [M]  drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_api.o
  CC [M]  drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.o
  LD [M]  drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe.o
  LD      drivers/net/ethernet/packetengines/built-in.o
  LD      drivers/net/ethernet/qlogic/built-in.o
  CC [M]  drivers/net/ethernet/qlogic/qla3xxx.o
  LD      drivers/net/ethernet/qlogic/netxen/built-in.o
  CC [M]  drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.o
  CC [M]  drivers/net/ethernet/qlogic/netxen/netxen_nic_main.o
  CC [M]  drivers/net/ethernet/qlogic/netxen/netxen_nic_init.o
  CC [M]  drivers/net/ethernet/qlogic/netxen/netxen_nic_ethtool.o
  CC [M]  drivers/net/ethernet/qlogic/netxen/netxen_nic_ctx.o
  LD [M]  drivers/net/ethernet/qlogic/netxen/netxen_nic.o
  LD      drivers/net/ethernet/qlogic/qlcnic/built-in.o


:
:
  IHEX    firmware/mts_cdma.fw
  IHEX    firmware/mts_gsm.fw
  IHEX    firmware/mts_edge.fw
  H16TOFW firmware/edgeport/boot.fw
  H16TOFW firmware/edgeport/boot2.fw
  H16TOFW firmware/edgeport/down.fw
  H16TOFW firmware/edgeport/down2.fw
  IHEX    firmware/edgeport/down3.bin
  IHEX2FW firmware/whiteheat_loader.fw
  IHEX2FW firmware/whiteheat.fw
  IHEX2FW firmware/keyspan_pda/keyspan_pda.fw
  IHEX2FW firmware/keyspan_pda/xircom_pgs.fw
 

[@novo linux-3.14.2]$ ls -a vm*
vmlinux  vmlinux.o
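To actually boot the kernel just built, the usual next steps are modules_install and install, run as root from the same source directory (a sketch; on CentOS the install target also generates the initramfs and adds a GRUB entry via installkernel):

```shell
make modules_install   # copies the built modules to /lib/modules/3.14.2
make install           # installs the kernel image under /boot and
                       # updates the boot loader configuration
```

Reboot and pick the 3.14.2 entry from the boot menu to verify.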

Friday, May 2, 2014

How to fix "invalid EFI file path" from dual boot EFI Windows 8 and CentOS


Problem: when trying to dual boot into Windows 8, an "invalid EFI file path" error is displayed.

Reason: the CentOS GRUB entry did not point to the Windows 8 EFI loader correctly.

File that needs to be modified: /boot/efi/EFI/redhat/grub.conf

Before :

title Other
    rootnoverify (hd0,4)
    chainloader +1


After :

title Windows 8
    root (hd0,1)
    chainloader /EFI/Microsoft/Boot/bootmgfw.efi

   

Although (hd0,4) is where Windows 8 is installed (/dev/sdb5), the root entry must point to the partition that contains EFI/Microsoft/Boot/bootmgfw.efi; that file lives on (hd0,1), which is /dev/sdb2.

After grub.conf is modified, Windows 8 boots correctly.

[root@novo etc]# df -k /boot/efi
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/sdb2         262144 29488    232656  12% /boot/efi


[root@novo efi]# ls -ld /boot/efi/EFI/Microsoft/Boot/bootmgfw.efi
-rwx------. 1 root root 1616728 Feb 22 10:44 /boot/efi/EFI/Microsoft/Boot/bootmgfw.efi

How to find the UUID of a disk partition

[root@novo sbin]# blkid -c /dev/null -o list
device     fs_type label    mount point    UUID
-------------------------------------------------------------------------------
/dev/sdb1  ntfs    WINRE_DRV (not mounted) 1E54CD8054CD5AE3
/dev/sdb2  vfat    SYSTEM_DRV (not mounted) 66D8-0A80
/dev/sdb3  vfat    LRS_ESP  (not mounted)  ACD3-BE9C
/dev/sdb5  ntfs    Windows8_OS (not mounted) 0C02D73402D7220E
/dev/sdb6  ntfs    LENOVO   (not mounted)  1050AD5950AD4676
/dev/sdb7  ntfs    PBR_DRV  (not mounted)  34C0DB35C0DAFBD4
/dev/sdb8  ext4             /boot          8b48405a-332a-4664-af7b-1540cadc6a73
/dev/sdb9  LVM2_member      (in use)       zEQuty-0Kqp-AvkX-RlOz-cCM5-scrO-q2ubNG
/dev/mapper/vg_novo-lv_root
           ext4             /              9c276216-ab2a-4d97-818b-515a8a6eaae5
/dev/mapper/vg_novo-lv_swap
           swap             <swap>         02d515d4-e0da-4ea2-a5ed-82c2810bcb0c
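Because the UUID is stable across reboots and device reordering, it is the preferred key for /etc/fstab entries. A sketch using the /boot partition from the listing above:

```
UUID=8b48405a-332a-4664-af7b-1540cadc6a73  /boot  ext4  defaults  1 2
```

Device names like /dev/sdb8 can change when disks are added or removed; the UUID does not.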