# wipefs /dev/sdm (this will show the existing filesystem signature)
offset               type
----------------------------------------------------------------
0x438                ext4   [filesystem]
                     UUID:  7a744324-a26b-47fe-9d91-24fee3cc09b4
# wipefs -a /dev/sdm (this will erase the signature)
2 bytes [53 ef] erased at offset 0x438 (ext4)
# wipefs /dev/sdm
#
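If there is any chance the data on the device is still needed, wipefs can also keep a copy of each signature it erases, or wipe a single signature at a known offset instead of everything. A small sketch using the --backup and --offset options of recent util-linux releases (the 0x438 offset is the ext4 signature shown above):
# wipefs --all --backup /dev/sdm
# wipefs --offset 0x438 /dev/sdm
With --backup, the erased signature is saved to a file in root's home directory (named after the device and offset, e.g. wipefs-sdm-0x00000438.bak) and can later be written back to the same offset with dd if you change your mind.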
Thursday, May 28, 2015
Friday, May 8, 2015
OpenStack : Install Juno
http://docs.openstack.org/juno/install-guide/install/yum/openstack-install-guide-yum-juno.pdf
Basic Environment
Database
Messaging Server
Add Identity Service
Basic Environment
Password name | Description
--------------|------------
Database password (no variable used) | Root password for the database
RABBIT_PASS | Password of user guest of RabbitMQ
KEYSTONE_DBPASS | Database password of Identity service
DEMO_PASS | Password of user demo
ADMIN_PASS | Password of user admin
GLANCE_DBPASS | Database password for Image Service
GLANCE_PASS | Password of Image Service user glance
NOVA_DBPASS | Database password for Compute service
NOVA_PASS | Password of Compute service user nova
DASH_DBPASS | Database password for the dashboard
CINDER_DBPASS | Database password for the Block Storage service
CINDER_PASS | Password of Block Storage service user cinder
NEUTRON_DBPASS | Database password for the Networking service
NEUTRON_PASS | Password of Networking service user neutron
HEAT_DBPASS | Database password for the Orchestration service
HEAT_PASS | Password of Orchestration service user heat
CEILOMETER_DBPASS | Database password for the Telemetry service
CEILOMETER_PASS | Password of Telemetry service user ceilometer
TROVE_DBPASS | Database password of Database service
TROVE_PASS | Password of Database Service user trove
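One convenient way to fill in this table is to generate every value up front with openssl, the same tool the guide uses later for the admin token. This is only a sketch; the placeholder names are the ones used throughout these notes, and the output should be kept somewhere safe:
# for p in RABBIT_PASS KEYSTONE_DBPASS DEMO_PASS ADMIN_PASS GLANCE_DBPASS GLANCE_PASS NOVA_DBPASS NOVA_PASS DASH_DBPASS CINDER_DBPASS CINDER_PASS NEUTRON_DBPASS NEUTRON_PASS HEAT_DBPASS HEAT_PASS CEILOMETER_DBPASS CEILOMETER_PASS TROVE_DBPASS TROVE_PASS; do echo "$p=$(openssl rand -hex 10)"; done
Each line prints NAME=<20 hex characters>; substitute the values wherever the corresponding placeholder appears below.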
Database
To install and configure the database server
- Install the packages:
Note: The Python MySQL library is compatible with MariaDB.
# yum install mariadb mariadb-server MySQL-python
- Edit the /etc/my.cnf file and complete the following actions:
  - In the [mysqld] section, set the bind-address key to the management IP address of the controller node to enable access by other nodes via the management network:
    [mysqld]
    ...
    bind-address = 10.0.0.11
  - In the [mysqld] section, set the following keys to enable useful options and the UTF-8 character set:
    [mysqld]
    ...
    default-storage-engine = innodb
    innodb_file_per_table
    collation-server = utf8_general_ci
    init-connect = 'SET NAMES utf8'
    character-set-server = utf8
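Put together, the edited part of /etc/my.cnf ends up looking roughly like this; the bind address is the controller's management IP used throughout these notes:
[mysqld]
# listen on the management network so other nodes can reach the database
bind-address = 10.0.0.11
# InnoDB with one file per table, UTF-8 everywhere
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8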
To finalize installation
- Start the database service and configure it to start when the
system boots:
# systemctl enable mariadb.service
# systemctl start mariadb.service
Note:
# systemctl enable mariadb.service
ln -s '/usr/lib/systemd/system/mariadb.service' '/etc/systemd/system/multi-user.target.wants/mariadb.service'
- Secure the database
service including choosing a suitable password for the root
account:
# mysql_secure_installation
Note:
# mysql_secure_installation
/bin/mysql_secure_installation: line 379: find_mysql_client: command not found

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current password for the root user. If you've just installed MariaDB, and you haven't set the root password yet, the password will be blank, so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB root user without the proper authorisation.

Set root password? [Y/n] Y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone to log into MariaDB without having to have a user account created for them. This is intended only for testing, and to make the installation go a bit smoother. You should remove them before moving into a production environment.

Remove anonymous users? [Y/n] Y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'. This ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] Y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can access. This is also intended only for testing, and should be removed before moving into a production environment.

Remove test database and access to it? [Y/n] Y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far will take effect immediately.

Reload privilege tables now? [Y/n] Y
 ... Success!

Cleaning up...

All done! If you've completed all of the above steps, your MariaDB installation should now be secure.

Thanks for using MariaDB!
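Before moving on, it is worth a quick sanity check that mysqld is really listening on the management address configured above and that the new root password works. This is only a sketch, assuming the 10.0.0.11 management IP from these notes:
# ss -tnl | grep 3306
# mysql -u root -p -e "SELECT VERSION();"
The first command should show a listener on 10.0.0.11:3306; the second prompts for the root password that was just set by mysql_secure_installation.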
Messaging Server
To install the RabbitMQ message broker service
# yum install rabbitmq-server
To configure the message broker service
- Start the message broker service and configure it to start when the
system boots:
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
Note:
# systemctl enable rabbitmq-server.service
ln -s '/usr/lib/systemd/system/rabbitmq-server.service' '/etc/systemd/system/multi-user.target.wants/rabbitmq-server.service'
- The message broker creates a default account that uses guest for the username and password. To simplify installation of your test environment, we recommend that you use this account, but change the password for it.
  Run the following command:
  # rabbitmqctl change_password guest RABBIT_PASS
  Changing password for user "guest" ...
  ...done.
  Replace RABBIT_PASS with a suitable password.
  You must configure the rabbit_password key in the configuration file for each OpenStack service that uses the message broker.
  Note: For production environments, you should create a unique account with a suitable password (see the sketch at the end of this section). For more information on securing the message broker, see the documentation.
  If you decide to create a unique account with a suitable password for your test environment, you must configure the rabbit_userid and rabbit_password keys in the configuration file of each OpenStack service that uses the message broker.
- For RabbitMQ version 3.3.0 or newer, you must enable remote access for the guest account.
- Check the RabbitMQ version:
    # rabbitmqctl status | grep rabbit
    Status of node 'rabbit@controller' ...
    {running_applications,[{rabbit,"RabbitMQ","3.4.2"},
  - If necessary, edit the /etc/rabbitmq/rabbitmq.config file and configure loopback_users to reference an empty list:
    [{rabbit, [{loopback_users, []}]}].
    Note: Contents of the original file might vary depending on the source of the RabbitMQ package. In some cases, you might need to create this file.
  - Restart the message broker service:
    # systemctl restart rabbitmq-server.service
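As noted above, production setups should not keep relying on the guest account. A minimal sketch of creating a dedicated account instead, assuming a user named openstack and the RABBIT_PASS placeholder from the password table; the matching rabbit_userid and rabbit_password keys then go into each service's configuration file:
# rabbitmqctl add_user openstack RABBIT_PASS
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"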
Add Identity Service
Install and configure
This section describes how to install and configure the OpenStack Identity service on the controller node.
To configure prerequisites
Before you configure the OpenStack Identity service, you must create a database and an administration token.
- To create the database, complete these steps:
  - Use the database access client to connect to the database server as the root user:
    $ mysql -u root -p
  - Create the keystone database:
    CREATE DATABASE keystone;
  - Grant proper access to the keystone database:
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
      IDENTIFIED BY 'KEYSTONE_DBPASS';
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
      IDENTIFIED BY 'KEYSTONE_DBPASS';
    Replace KEYSTONE_DBPASS with a suitable password.
  - Exit the database access client.
- Generate a random value to use as the administration token during
initial configuration:
# openssl rand -hex 10
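It can help to capture the generated value in a shell variable so it can be pasted into /etc/keystone/keystone.conf below without retyping. This is only a convenience sketch; the ADMIN_TOKEN variable name is not part of the guide's commands:
# ADMIN_TOKEN=$(openssl rand -hex 10)
# echo $ADMIN_TOKEN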
To install and configure the components
- Run the following command to install the packages:
# yum install openstack-keystone python-keystoneclient
- Edit the /etc/keystone/keystone.conf file and complete the following actions:
  - In the [DEFAULT] section, define the value of the initial administration token:
    [DEFAULT]
    ...
    admin_token = ADMIN_TOKEN
    Replace ADMIN_TOKEN with the random value that you generated in a previous step.
  - In the [database] section, configure database access:
    [database]
    ...
    connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
    Replace KEYSTONE_DBPASS with the password you chose for the database.
  - In the [token] section, configure the UUID token provider and SQL driver:
    [token]
    ...
    provider = keystone.token.providers.uuid.Provider
    driver = keystone.token.persistence.backends.sql.Token
  - In the [revoke] section, configure the SQL revocation driver:
    [revoke]
    ...
    driver = keystone.contrib.revoke.backends.sql.Revoke
  - (Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:
    [DEFAULT]
    ...
    verbose = True
- Create generic certificates and keys and restrict access to the
associated files:
# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
# chown -R keystone:keystone /var/log/keystone
# chown -R keystone:keystone /etc/keystone/ssl
# chmod -R o-rwx /etc/keystone/ssl
- Populate the Identity service database:
# su -s /bin/sh -c "keystone-manage db_sync" keystone
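A quick way to confirm that db_sync worked is to look for the tables it created, assuming the keystone database user and the KEYSTONE_DBPASS value set up earlier:
# mysql -u keystone -pKEYSTONE_DBPASS keystone -e "SHOW TABLES;"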
To finalize installation
- Start the Identity service and configure it to start when the
system boots:
# systemctl enable openstack-keystone.service
# systemctl start openstack-keystone.service
- By default, the Identity service stores expired tokens in the
database indefinitely. The accumulation of expired tokens considerably
increases the database size and might degrade service performance,
particularly in environments with limited resources.
We recommend that you use cron to configure a periodic task that purges expired tokens hourly:
# (crontab -l -u keystone 2>&1 | grep -q token_flush) || \
  echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' \
  >> /var/spool/cron/keystone
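Before relying on the cron job, the flush can be run once by hand in the same way the guide runs db_sync, which also confirms the log path is writable, and the installed crontab entry can be listed; just a sanity-check sketch:
# su -s /bin/sh -c "keystone-manage token_flush" keystone
# crontab -l -u keystone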
Wednesday, May 6, 2015
Google Launches Bigtable
http://googlecloudplatform.blogspot.com/2015/05/introducing-Google-Cloud-Bigtable.html
http://www.forbes.com/sites/paulmiller/2015/05/06/google-launches-bigtable-a-big-managed-database-in-the-cloud/
As businesses become increasingly data-centric, and with the coming age of the Internet of Things
(IoT), enterprises and data-driven organizations must become adept at
efficiently deriving insights from their data. In this environment, any
time spent building and managing infrastructure rather than working on
applications is a lost opportunity. That’s why today we are excited to
introduce Google Cloud Bigtable
- a fully managed, high-performance, extremely scalable NoSQL database
service accessible through the industry-standard, open-source Apache HBase API. Under the hood, this new service is powered by Bigtable, the same database that drives nearly all of Google’s largest applications.
Google Cloud Bigtable
excels at large ingestion, analytics, and data-heavy serving workloads.
It's ideal for enterprises and data-driven organizations that need to
handle huge volumes of data, including businesses in the financial
services, AdTech, energy, biomedical, and telecommunications industries.
Cloud Bigtable delivers the following key benefits to organizations building large systems:
- Unmatched Performance: Single-digit millisecond latency and over 2X the performance per dollar of unmanaged NoSQL alternatives.
- Open Source Interface: Because Cloud Bigtable is accessed through the HBase API, it is natively integrated with much of the existing big data and Hadoop ecosystem and supports Google’s big data products. Additionally, data can be imported from or exported to existing HBase clusters through simple bulk ingestion tools using industry-standard formats.
- Low Cost: By providing a fully managed service and exceptional efficiency, Cloud Bigtable’s total cost of ownership is less than half the cost of its direct competition.
- Security: Cloud Bigtable is built with a replicated storage strategy, and all data is encrypted both in-flight and at rest.
- Simplicity: Creating or reconfiguring a Cloud Bigtable cluster is done through a simple user interface and can be completed in less than 10 seconds. As data is put into Cloud Bigtable the backing storage scales automatically, so there’s no need to do complicated estimates of capacity requirements.
- Maturity: Over the past 10+ years, Bigtable has driven Google’s most critical applications. In addition, the HBase API is an industry-standard interface for combined operational and analytical workloads.
To
help get you started quickly, we have assembled a service partner
ecosystem to enable a diverse and expanding set of Cloud Bigtable use
cases for our customers. Starting today, these service partners are
available to help you take a new approach to data storage in your own
environment.
- SunGard, a leading financial software and services company, can help you build a scalable, easy-to-manage financial data platform on Cloud Bigtable. In fact, it has already built a financial audit trail system on Cloud Bigtable which is capable of ingesting a remarkable 2.5 million trade messages per second.
- Telit Wireless Solutions, a global leader in Internet of Things (IoT) enablement, has integrated its IoT EAP (Application Enablement Platform) "m2mAIR" with Cloud Bigtable to enable much higher performance in data ingestion.
As of today, Google Cloud Bigtable is available as a beta release in multiple locations worldwide. We are already helping customers like Qubit
migrate a multi-petabyte HBase deployment to Cloud Bigtable. We look
forward to seeing what sorts of amazing, innovative applications you can
create with this powerful piece of Google technology. If you have any
technical questions please post them to Stack Overflow with the tag 'google-cloud-bigtable', and if you have any feedback or feature requests please send it to the feedback list.
-Posted by Cory O’Connor, Product Manager