
1. Node Setup
-----------------
OpenStack requires a minimum of 5 nodes, namely
i)   controller   10.0.0.11
ii)  network      10.0.0.21
iii) compute1     10.0.0.31
iv)  block1       10.0.0.41
v)   object1      10.0.0.51

Configure the /etc/hosts file on every node so that the host names above resolve to their management IPs.
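A minimal /etc/hosts sketch matching the addressing above (append the same entries on every node; leave any existing loopback lines as they are):
10.0.0.11       controller
10.0.0.21       network
10.0.0.31       compute1
10.0.0.41       block1
10.0.0.51       object1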


Configure the respective network interfaces. Every node has one management NIC; the other nodes carry the additional interfaces listed below.
Configuration of Controller Node
-->
Management Interface
IPADDR=10.0.0.11
PREFIX=24
GATEWAY=10.0.0.1
Configuration of Network Node
-->
Management Interface
IPADDR=10.0.0.21
PREFIX=24
GATEWAY=10.0.0.1
-->
Tunnel Network
IPADDR=10.0.1.21
PREFIX=24
GATEWAY=10.0.1.1
-->
External Network
BOOTPROTO=DHCP
Configuration of Compute Node
-->
Management Interface
IPADDR=10.0.0.31
PREFIX=24
GATEWAY=10.0.0.1
-->
Tunnel Network
IPADDR=10.0.1.31
PREFIX=24
GATEWAY=10.0.1.1
-->
Storage Network
IPADDR=10.0.2.31
PREFIX=24
GATEWAY=10.0.2.1
Configuration of Storage Node
-->
Management Interface
IPADDR=10.0.0.41
PREFIX=24
GATEWAY=10.0.0.1
-->
Storage Network
IPADDR=10.0.2.41
PREFIX=24
GATEWAY=10.0.2.1
Configuration of Object Node
-->
Management Interface
IPADDR=10.0.0.51
PREFIX=24
GATEWAY=10.0.0.1
-->
Storage Network
IPADDR=10.0.2.51
PREFIX=24
GATEWAY=10.0.2.1
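As a sketch, the controller's management interface could be defined like this (the device name eth0 is an assumption; adjust it and the file name to match your hardware):
> vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.0.0.11
PREFIX=24
GATEWAY=10.0.0.1
> systemctl restart network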

*       Create an updated repository on 10.0.0.1 and share it over an HTTP server. Upgrade each node to the latest packages, because OpenStack networking requires a recent kernel.
[latest]
name=Repository with latest RPMs
baseurl=http://10.0.0.1/latest
enabled=1
gpgcheck=0
priority=98
[centos7]
name=Repository with Centos7
baseurl=http://10.0.0.1/shared
enabled=1
gpgcheck=0
However, if you want to set up the online repositories for OpenStack, run the following commands on all nodes.
# yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
# yum install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm

Note:- The python-keystone-auth-token package is missing from the repository; however, it is not required.
*       Disabling SELinux
> sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux /etc/selinux/config
> setenforce 0
*       Install iptables-services on CentOS 7 on the controller node and disable firewalld.
> systemctl mask firewalld
> systemctl stop firewalld
> yum -y install iptables-services
> systemctl disable iptables
> iptables -F
> service iptables save
*       Remove the chrony service; it blocks ntpd from auto-starting after reboot.
> yum -y remove chrony

2. Package Installation in Controller Node
------------------------------------------
> yum install -y ntp yum-plugin-priorities openstack-selinux mariadb mariadb-server MySQL-python rabbitmq-server openstack-keystone python-keystoneclient openstack-glance python-glanceclient openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient openstack-neutron openstack-neutron-ml2 python-neutronclient openstack-dashboard httpd mod_wsgi memcached python-memcached openstack-cinder python-cinderclient python-oslo-db openstack-swift-proxy python-swiftclient python-keystone-auth-token python-keystonemiddleware openstack-heat-api openstack-heat-api-cfn openstack-heat-engine python-heatclient mongodb-server mongodb openstack-ceilometer-api openstack-ceilometer-collector openstack-ceilometer-notification openstack-ceilometer-central openstack-ceilometer-alarm python-ceilometerclient openstack-trove python-troveclient openstack-sahara python-saharaclient
Note:- The python-keystone-auth-token package is missing from the repository.
3. Package Installation in Network Node
---------------------------------------
> yum install -y ntp yum-plugin-priorities openstack-selinux openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
4. Package Installation in Compute Node
---------------------------------------
> yum install -y ntp yum-plugin-priorities openstack-selinux openstack-nova-compute sysfsutils openstack-neutron-ml2 openstack-neutron-openvswitch openstack-nova-network openstack-nova-api openstack-ceilometer-compute python-ceilometerclient python-pecan
5. Package Installation in Storage Node
---------------------------------------
> yum install -y ntp yum-plugin-priorities openstack-selinux lvm2 openstack-cinder targetcli python-oslo-db MySQL-python xfsprogs rsync openstack-swift-account openstack-swift-container openstack-swift-object
6. Package Installation in Object Node
--------------------------------------
> yum install -y xfsprogs rsync
7. Configuring NTP
------------------
*       In Controller Node
> vim /etc/ntp.conf
server 127.127.1.1                      # LCL, local clock
fudge 127.127.1.1 stratum 12            # increase stratum
restrict -4 default kod notrap nomodify
restrict -6 default kod notrap nomodify
*       In Other Nodes
> vim /etc/ntp.conf
server controller iburst
> systemctl enable ntpd.service
> systemctl start ntpd.service


8. Configuring mariadb Database
-------------------------------
> vi /etc/my.cnf
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
> systemctl enable mariadb.service
> systemctl start mariadb.service
> mysql_secure_installation
Note:- Set the root password and choose the default options.

Creating MySQL Database
=======================
*       Create databases for keystone, glance, nova, neutron, cinder, heat, and trove.
> mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'root123';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'root123';
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'root123';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'root123';
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'root123';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'root123';
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'root123';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'root123';
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'root123';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'root123';
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'root123';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'root123';
CREATE DATABASE trove;
GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'root123';
GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'root123';
9. Installing Messaging Server (RabbitMQ)
-----------------------------------------
> systemctl enable rabbitmq-server.service
> systemctl start rabbitmq-server.service
*       The message broker creates a default account that uses guest for the username and password. To simplify installation of your test environment, we recommend that you use this account, but change its password.
> rabbitmqctl change_password guest root123
Note:- For production environments, you should create a unique account with a suitable password.
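A minimal sketch of such an account (the user name openstack is an assumption; whatever you choose must then be used as the RabbitMQ user, with its password, in the service configuration files):
> rabbitmqctl add_user openstack root123
> rabbitmqctl set_permissions openstack ".*" ".*" ".*"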
10. Configuring Keystone
------------------------
> openssl rand -hex 10
46ebf57e669fa91d3db0
> vim /etc/keystone/keystone.conf
[DEFAULT]
admin_token = 46ebf57e669fa91d3db0
verbose = True
[database]
connection = mysql://keystone:root123@controller/keystone
[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token
[revoke]
driver = keystone.contrib.revoke.backends.sql.Revoke

*       Create generic certificates and keys and restrict access to the associated files.
> keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
> chown -R keystone:keystone /var/log/keystone
> chown -R keystone:keystone /etc/keystone/ssl
> chmod -R o-rwx /etc/keystone/ssl
> su -s /bin/sh -c "keystone-manage db_sync" keystone
> systemctl enable openstack-keystone.service
> systemctl start openstack-keystone.service

*       By default, the Identity service stores expired tokens in the database indefinitely. The accumulation of expired tokens considerably increases the database size and might degrade service performance, particularly in environments with limited resources.
> (crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone
11. Creating Tenant, User and Role
----------------------------------
> export OS_SERVICE_TOKEN=46ebf57e669fa91d3db0
> export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
*       Create an administrative tenant, user, and role for administrative operations in your environment:
> keystone tenant-create --name admin --description "Admin Tenant"
> keystone user-create --name admin --pass root123 --email root@controller
> keystone role-create --name admin
> keystone user-role-add --user admin --tenant admin --role admin
> keystone tenant-create --name demo --description "Demo Tenant"
> keystone user-create --name demo --tenant demo --pass root123 --email demo@controller
> keystone tenant-create --name service --description "Service Tenant"
12. Create the service entity and API endpoint
----------------------------------------------
> keystone service-create --name keystone --type identity --description "OpenStack Identity"
> keystone endpoint-create --service-id $(keystone service-list | awk '/ identity / {print $2}') --publicurl http://controller:5000/v2.0 --internalurl http://controller:5000/v2.0 --adminurl http://controller:35357/v2.0 --region regionOne
> cat >> /root/admin-openrc.sh
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=root123
export OS_AUTH_URL=http://controller:35357/v2.0
> cat >> /root/demo-openrc.sh
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=root123
export OS_AUTH_URL=http://controller:5000/v2.0
> source /root/admin-openrc.sh

13. Add the Image Service
-------------------------
> keystone user-create --name glance --pass root123
> keystone user-role-add --user glance --tenant service --role admin
> keystone service-create --name glance --type image --description "OpenStack Image Service"
> keystone endpoint-create --service-id $(keystone service-list | awk '/ image / {print $2}') --publicurl http://controller:9292 --internalurl http://controller:9292 --adminurl http://controller:9292 --region regionOne
> vim /etc/glance/glance-api.conf
[DEFAULT]
notification_driver = noop
verbose = True
[database]
connection = mysql://glance:root123@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = root123
[paste_deploy]
flavor = keystone
[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
> vim /etc/glance/glance-registry.conf
[DEFAULT]
notification_driver = noop
verbose = True
[database]
connection = mysql://glance:root123@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = root123
[paste_deploy]
flavor = keystone
> su -s /bin/sh -c "glance-manage db_sync" glance
> systemctl enable openstack-glance-api.service openstack-glance-registry.service
> systemctl start openstack-glance-api.service openstack-glance-registry.service
14. Add the Compute service
---------------------------
> keystone user-create --name nova --pass root123
> keystone user-role-add --user nova --tenant service --role admin
> keystone service-create --name nova --type compute --description "OpenStack Compute"
> keystone endpoint-create --service-id $(keystone service-list | awk '/ compute / {print $2}') --publicurl http://controller:8774/v2/%\(tenant_id\)s --internalurl http://controller:8774/v2/%\(tenant_id\)s --adminurl http://controller:8774/v2/%\(tenant_id\)s --region regionOne
> vim /etc/nova/nova.conf
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
auth_strategy = keystone
my_ip = 10.0.0.11
vncserver_listen = 10.0.0.11
vncserver_proxyclient_address = 10.0.0.11
verbose = True
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = root123
[database]
connection = mysql://nova:root123@controller/nova
[glance]
host = controller
> su -s /bin/sh -c "nova-manage db sync" nova
> systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
> systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

* COMPUTE NODE CONFIGURATION
============================
> vim /etc/nova/nova.conf
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
auth_strategy = keystone
my_ip = 10.0.0.31
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.0.0.31
novncproxy_base_url = http://controller:6080/vnc_auto.html
verbose = True
[glance]
host = controller
# [libvirt]
# virt_type = qemu
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = root123
Note:-
> egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your compute node supports hardware acceleration, which typically requires no additional configuration. If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
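If the check returns zero, a minimal sketch of the change, assuming you simply uncomment the [libvirt] lines shown in the nova.conf excerpt above:
> vim /etc/nova/nova.conf
[libvirt]
virt_type = qemu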
> systemctl enable libvirtd.service openstack-nova-compute.service
> systemctl start libvirtd.service openstack-nova-compute.service

15. Add a networking component
------------------------------
> source /root/admin-openrc.sh
> keystone user-create --name neutron --pass root123
> keystone user-role-add --user neutron --tenant service --role admin
> keystone service-create --name neutron --type network --description "OpenStack Networking"
> keystone endpoint-create --service-id $(keystone service-list | awk '/ network / {print $2}') --publicurl http://controller:9696 --adminurl http://controller:9696 --internalurl http://controller:9696 --region regionOne
> keystone tenant-get service
> vim /etc/neutron/neutron.conf
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_admin_auth_url = http://controller:35357/v2.0
nova_region_name = regionOne
nova_admin_username = nova
nova_admin_tenant_id = 36a7f48849d244b0bb7166194a728135
nova_admin_password = root123
verbose = True
[database]
connection = mysql://neutron:root123@controller/neutron
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = root123
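Note:- The nova_admin_tenant_id value above must be the id of the service tenant, which is what keystone tenant-get service prints. As a sketch, assuming the admin credentials are sourced, it can be captured with:
> keystone tenant-get service | awk '/ id / {print $4}'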
> source /root/admin-openrc.sh
> keystone tenant-get service
> vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
> vim /etc/nova/nova.conf
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[neutron]
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = root123

> ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
> su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron
Note:- Check iptables before restarting the services; a minimal check is sketched below.
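As a sketch of such a check (the expectation that nothing blocks the AMQP, MySQL, and neutron ports 5672, 3306, and 9696 is an assumption based on the services configured so far):
> iptables -L -n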
> systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service
> systemctl enable neutron-server.service
> systemctl start neutron-server.service

Install and configure network node
==================================
> vim /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
> sysctl -p
> vim /etc/neutron/neutron.conf
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
verbose = True
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = root123

> vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = external
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 10.0.1.21
enable_tunneling = True
bridge_mappings = external:br-ex
[agent]
tunnel_types = gre
> vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
external_network_bridge = br-ex
router_delete_namespaces = True
verbose = True
> vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
dhcp_delete_namespaces = True
verbose = True
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
> vim /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1454
> pkill dnsmasq
> vim /etc/neutron/metadata_agent.ini
[DEFAULT]
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = root123
nova_metadata_ip = controller
metadata_proxy_shared_secret = root123
verbose = True
Note:- In the [database] section, comment out any connection options because network nodes do not directly access the database. Comment out any auth_host, auth_port, and auth_protocol options because the identity_uri option replaces them.
On Controller Node
==================
> vim /etc/nova/nova.conf
[neutron]
service_metadata_proxy = True
metadata_proxy_shared_secret = root123
> systemctl restart openstack-nova-api.service

To configure the Open vSwitch (OVS) service (On Network Node)
==============================================================
> systemctl enable openvswitch.service
> systemctl start openvswitch.service
> ovs-vsctl add-br br-ex
> ovs-vsctl add-port br-ex INTERFACE_NAME
Note:- Replace INTERFACE_NAME with the external network interface.
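As a sketch, assuming the external NIC of the network node shows up as eth2 (a hypothetical name; confirm with ip link):
> ovs-vsctl add-port br-ex eth2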
> ethtool -K INTERFACE_NAME gro off
> ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
> cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
> sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
> systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
> systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
(On Controller Node)
> source /root/admin-openrc.sh
> neutron agent-list

Install and configure compute node
==================================
> vim /etc/sysctl.conf
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
> sysctl -p
> vim /etc/neutron/neutron.conf
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
verbose = True
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = root123

Note:- In the [database] section, comment out any connection options because compute nodes do not directly access the database. Comment out any auth_host, auth_port, and auth_protocol options because the identity_uri option replaces them.
> vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 10.0.1.31
enable_tunneling = True
[agent]
tunnel_types = gre
> systemctl enable openvswitch.service
> systemctl start openvswitch.service

> vim /etc/nova/nova.conf
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[neutron]
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = root123
> ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
> cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
> sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
> systemctl restart openstack-nova-compute.service
> systemctl enable neutron-openvswitch-agent.service
> systemctl start neutron-openvswitch-agent.service
Create initial networks
=======================
> source /root/admin-openrc.sh
> neutron net-create ext-net --router:external True --provider:physical_network external --provider:network_type flat
> neutron subnet-create ext-net --name ext-subnet --allocation-pool start=203.0.113.101,end=203.0.113.200 --disable-dhcp --gateway 203.0.113.1 203.0.113.0/24
Tenant network
==============
> source /root/admin-openrc.sh
> neutron net-create demo-net
> neutron subnet-create demo-net --name demo-subnet --gateway 192.168.1.1 192.168.1.0/24
> neutron router-create demo-router
> neutron router-interface-add demo-router demo-subnet
> neutron router-gateway-set demo-router ext-net
Verify connectivity
===================
> ping -c 4 203.0.113.101

16. Add the dashboard
---------------------
> yum install -y openstack-dashboard httpd mod_wsgi memcached python-memcached
> vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
> setsebool -P httpd_can_network_connect on
> chown -R apache:apache /usr/share/openstack-dashboard/static
> systemctl enable httpd.service memcached.service
> systemctl start httpd.service memcached.service

17. Add the Block Storage service
---------------------------------
> source /root/admin-openrc.sh
> keystone user-create --name cinder --pass root123
> keystone user-role-add --user cinder --tenant service --role admin
> keystone service-create --name cinder --type volume --description "OpenStack Block Storage"
> keystone service-create --name cinderv2 --type volumev2 --description "OpenStack Block Storage"
> keystone endpoint-create --service-id $(keystone service-list | awk '/ volume / {print $2}') --publicurl http://controller:8776/v1/%\(tenant_id\)s --internalurl http://controller:8776/v1/%\(tenant_id\)s --adminurl http://controller:8776/v1/%\(tenant_id\)s --region regionOne
> keystone endpoint-create --service-id $(keystone service-list | awk '/ volumev2 / {print $2}') --publicurl http://controller:8776/v2/%\(tenant_id\)s --internalurl http://controller:8776/v2/%\(tenant_id\)s --adminurl http://controller:8776/v2/%\(tenant_id\)s --region regionOne
> vim /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
auth_strategy = keystone
my_ip = 10.0.0.11
verbose = True
[database]
connection = mysql://cinder:root123@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = root123
> su -s /bin/sh -c "cinder-manage db sync" cinder
> systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
> systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
Install and configure a storage node
====================================
> systemctl enable lvm2-lvmetad.service
> systemctl start lvm2-lvmetad.service
> pvcreate /dev/sdb1
> vgcreate cinder-volumes /dev/sdb1
> vim /etc/lvm/lvm.conf
devices {
        filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
}
Note:-
1. Each item in the filter array begins with a for accept or r for reject and includes a regular expression for the device name. The array must end with r/.*/ to reject any remaining devices.
2. If the compute node's operating system is installed on LVM, also change /etc/lvm/lvm.conf on that node to filter = [ "a/sda/", "r/.*/" ].
> vim /etc/cinder/cinder.conf
[database]
connection = mysql://cinder:root123@controller/cinder
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
auth_strategy = keystone
my_ip = 10.0.0.41
glance_host = controller
iscsi_helper = lioadm
verbose = True
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = root123
> systemctl enable openstack-cinder-volume.service target.service
> systemctl start openstack-cinder-volume.service target.service
18. Add Object Storage
----------------------
On Controller Node
==================
> keystone user-create --name swift --pass root123
> keystone user-role-add --user swift --tenant service --role admin
> keystone service-create --name swift --type object-store --description "OpenStack Object Storage"
> keystone endpoint-create --service-id $(keystone service-list | awk '/ object-store / {print $2}') --publicurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' --internalurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' --adminurl http://controller:8080 --region regionOne
> curl -o /etc/swift/proxy-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/proxy-server.conf-sample
> vim /etc/swift/proxy-server.conf
[DEFAULT]
bind_port = 8080
user = swift
swift_dir = /etc/swift
[pipeline:main]
pipeline = authtoken cache healthcheck keystoneauth proxy-logging proxy-server
[app:proxy-server]
allow_account_management = true
account_autocreate = true
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,_member_
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = swift
admin_password = root123
delay_auth_decision = true
[filter:cache]
memcache_servers = 127.0.0.1:11211
On Storage Node
===============
*       Configure nodes object1 & object2, update the /etc/hosts file, and configure the NTP client on both nodes.
*       Packages required are ntp openstack-selinux xfsprogs rsync (see the sketch below).
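A minimal sketch of installing them (assuming the repositories from step 1 are reachable from the object nodes):
> yum install -y ntp openstack-selinux xfsprogs rsync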
> for i in b c; do parted /dev/sd$i "mklabel gpt"; parted /dev/sd$i "mkpart xfs 1 100%"; done
> mkdir -p /srv/node/sd{b,c}1
> cat >> /etc/fstab <<EOF
/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc1 /srv/node/sdc1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
EOF
> mkfs.xfs /dev/sdb1
> mkfs.xfs /dev/sdc1
> mount -a
> vim /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 10.0.0.51
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock

*       For object storage node 1 the address is 10.0.0.51; change it to 10.0.0.52 for object storage node 2.
> systemctl enable rsyncd.service
> systemctl start rsyncd.service
*       The following packages must be installed: openstack-swift-account openstack-swift-container openstack-swift-object (see the sketch below).
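A minimal sketch of installing them on each object node:
> yum install -y openstack-swift-account openstack-swift-container openstack-swift-object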


> curl -o /etc/swift/account-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/account-server.conf-sample
> curl -o /etc/swift/container-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/container-server.conf-sample
> curl -o /etc/swift/object-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/object-server.conf-sample
> vim /etc/swift/account-server.conf
[DEFAULT]
bind_ip = 10.0.0.51
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = healthcheck recon account-server
[filter:recon]
recon_cache_path = /var/cache/swift
> vim /etc/swift/container-server.conf
[DEFAULT]
bind_ip = 10.0.0.51
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = healthcheck recon container-server
[filter:recon]
recon_cache_path = /var/cache/swift
> vim /etc/swift/object-server.conf
[DEFAULT]
bind_ip = 10.0.0.51
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = healthcheck recon object-server
[filter:recon]
recon_cache_path = /var/cache/swift
> chown -R swift:swift /srv/node
> mkdir -p /var/cache/swift
> chown -R swift:swift /var/cache/swift

Create initial rings
********************
Account ring
************
** Perform these steps on the "controller node"
> cd /etc/swift; swift-ring-builder account.builder create 10 3 1
> swift-ring-builder account.builder add r1z1-10.0.0.51:6002/sdb1 100
> swift-ring-builder account.builder add r1z1-10.0.0.51:6002/sdc1 100
> swift-ring-builder account.builder
> swift-ring-builder account.builder rebalance
Container ring
**************
** Perform these steps on the "controller node"
> cd /etc/swift; swift-ring-builder container.builder create 10 3 1
> swift-ring-builder container.builder add r1z1-10.0.0.51:6001/sdb1 100
> swift-ring-builder container.builder add r1z1-10.0.0.51:6001/sdc1 100
> swift-ring-builder container.builder
> swift-ring-builder container.builder rebalance
Object ring
***********
** Perform these steps on the "controller node"
> cd /etc/swift; swift-ring-builder object.builder create 10 3 1
> swift-ring-builder object.builder add r1z1-10.0.0.51:6000/sdb1 100
> swift-ring-builder object.builder add r1z1-10.0.0.51:6000/sdc1 100
> swift-ring-builder object.builder
> swift-ring-builder object.builder rebalance
Distribute ring configuration files
***********************************
**      Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on each storage node and any additional nodes running the proxy service.
> cd /etc/swift; scp *.gz object1:/etc/swift/

Finalize Installation
*********************
> ssh object1 "wget -O /etc/swift/swift.conf 10.0.0.1/config/object1/swift.conf; chown -R swift:swift /etc/swift"
{On Controller Node}
> systemctl enable openstack-swift-proxy.service memcached.service; systemctl start openstack-swift-proxy.service memcached.service
{On Object Node}
> systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
> systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
> systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
> systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
> systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
> systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service

19. Add the Orchestration module
--------------------------------
Install and configure Orchestration
===================================
> source /root/admin-openrc.sh
> keystone user-create --name heat --pass root123
> keystone user-role-add --user heat --tenant service --role admin
> keystone role-create --name heat_stack_owner
> keystone user-role-add --user demo --tenant demo --role heat_stack_owner
> keystone role-create --name heat_stack_user
> keystone service-create --name heat --type orchestration --description "Orchestration"
> keystone service-create --name heat-cfn --type cloudformation --description "Orchestration"
> keystone endpoint-create --service-id $(keystone service-list | awk '/ orchestration / {print $2}') --publicurl http://controller:8004/v1/%\(tenant_id\)s --internalurl http://controller:8004/v1/%\(tenant_id\)s --adminurl http://controller:8004/v1/%\(tenant_id\)s --region regionOne
> keystone endpoint-create --service-id $(keystone service-list | awk '/ cloudformation / {print $2}') --publicurl http://controller:8000/v1 --internalurl http://controller:8000/v1 --adminurl http://controller:8000/v1 --region regionOne
> vim /etc/heat/heat.conf
[database]
connection = mysql://heat:root123@controller/heat
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
verbose = True
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = heat
admin_password = root123
[ec2authtoken]
auth_uri = http://controller:5000/v2.0
> su -s /bin/sh -c "heat-manage db_sync" heat
> systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
> systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service

20. Add the Telemetry module
----------------------------
> vim /etc/mongodb.conf
bind_ip = 10.0.0.11
smallfiles = true
> systemctl enable mongod.service
> systemctl start mongod.service
> mongo --host controller --eval 'db = db.getSiblingDB("ceilometer"); db.addUser({user: "ceilometer", pwd: "root123", roles: [ "readWrite", "dbAdmin" ]})'
> source /root/admin-openrc.sh
> keystone user-create --name ceilometer --pass root123
> keystone user-role-add --user ceilometer --tenant service --role admin
> keystone service-create --name ceilometer --type metering --description "Telemetry"
> keystone endpoint-create --service-id $(keystone service-list | awk '/ metering / {print $2}') --publicurl http://controller:8777 --internalurl http://controller:8777 --adminurl http://controller:8777 --region regionOne
*       Packages required: "openstack-ceilometer-api openstack-ceilometer-collector openstack-ceilometer-notification openstack-ceilometer-central openstack-ceilometer-alarm python-ceilometerclient"
> openssl rand -hex 10
> vim /etc/ceilometer/ceilometer.conf
[database]
connection = mongodb://ceilometer:root123@controller:27017/ceilometer
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
auth_strategy = keystone
verbose = True
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = ceilometer
admin_password = root123
[service_credentials]
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = root123
[publisher]
metering_secret = root123
> systemctl enable openstack-ceilometer-api.service openstack-ceilometer-notification.service openstack-ceilometer-central.service openstack-ceilometer-collector.service openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service
> systemctl start openstack-ceilometer-api.service openstack-ceilometer-notification.service openstack-ceilometer-central.service openstack-ceilometer-collector.service openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service

Configure the Compute service
=============================
> vim /etc/ceilometer/ceilometer.conf
[publisher]
metering_secret = root123
[DEFAULT]
rabbit_host = controller
rabbit_password = root123
verbose = True
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = ceilometer
admin_password = root123
[service_credentials]
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = root123
os_endpoint_type = internalURL
os_region_name = regionOne
> vim /etc/nova/nova.conf
[DEFAULT]
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = messagingv2
> systemctl enable openstack-ceilometer-compute.service
> systemctl start openstack-ceilometer-compute.service
> systemctl restart openstack-nova-compute.service

Configure the Image Service
===========================
(Perform on Controller Node)
> vim /etc/glance/glance-api.conf
[DEFAULT]
notification_driver = messagingv2
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
> vim /etc/glance/glance-registry.conf
[DEFAULT]
notification_driver = messagingv2
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
> systemctl restart openstack-glance-api.service openstack-glance-registry.service

Configure the Block Storage service
===================================
(Perform these steps on the controller node and the storage nodes)
> vim /etc/cinder/cinder.conf
[DEFAULT]
control_exchange = cinder
notification_driver = messagingv2
(Perform on Controller Node)
> systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
(Perform on Storage Node)
> systemctl restart openstack-cinder-volume.service
*       Use the cinder-volume-usage-audit command to retrieve metrics on demand; a sketch is shown below. For more information, see the Block Storage audit script setup to get notifications.
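As a sketch, it can be triggered manually on the storage node (options for limiting the audit window exist but are omitted here):
> cinder-volume-usage-audit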
Configure the Object Storage service
====================================
> source /root/admin-openrc.sh
> keystone role-create --name ResellerAdmin
> keystone user-role-add --tenant service --user ceilometer --role `keystone role-list | awk '/ ResellerAdmin / { print $2}'`
*       To configure notifications, perform these steps on the controller and any other nodes that run the Object Storage proxy service.
> vim /etc/swift/proxy-server.conf
[filter:keystoneauth]
operator_roles = admin,_member_,ResellerAdmin
[pipeline:main]
pipeline = authtoken cache healthcheck keystoneauth proxy-logging ceilometer proxy-server
[filter:ceilometer]
use = egg:ceilometer#swift
log_level = WARN
> usermod -a -G ceilometer swift
> systemctl restart openstack-swift-proxy.service

21. Launch an instance
----------------------
> source /root/demo-openrc.sh
> ssh-keygen
> nova keypair-add --pub-key ~/.ssh/id_rsa.pub demo-key
> nova keypair-list

To launch an instance
1. A flavor specifies a virtual resource allocation profile which includes processor, memory, and storage.
> nova flavor-list
2. List available images:
> nova image-list
3. List available networks:
> neutron net-list
4. List available security groups:
> nova secgroup-list
5. Launch the instance:
> nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic net-id=DEMO_NET_ID --security-group default --key-name demo-key demo-instance1
** Replace DEMO_NET_ID with a suitable net id; a sketch for capturing it is shown below.
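A minimal sketch, assuming the demo-net network created earlier and that the demo credentials are sourced:
> DEMO_NET_ID=$(neutron net-list | awk '/ demo-net / {print $2}')
> nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic net-id=$DEMO_NET_ID --security-group default --key-name demo-key demo-instance1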
6. Check the status of your instance:
> nova list

*       To access your instance using a virtual console
> nova get-vnc-console demo-instance1 novnc
User Name: cirros
Password: cubswin:)

Verify the demo-net tenant network gateway:
> ping -c 4 192.168.1.1
Verify the ext-net external network:
> ping -c 4 openstack.org

** To access your instance remotely
*       Add rules to the default security group:
> nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
> nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
*       Create a floating IP address on the ext-net external network:
> neutron floatingip-create ext-net
> nova floating-ip-associate demo-instance1 203.0.113.102
> nova list
*       Verify network connectivity using ping from the controller node or any host on the external network:
> ping -c 4 203.0.113.102
*       Access your instance using SSH from the controller node or any host on the external network:
> ssh cirros@203.0.113.102
Note:- If your host does not contain the public/private key pair created in an earlier step, SSH prompts for the default password associated with the cirros user.
** To attach a Block Storage volume to your instance
> source /root/demo-openrc.sh
> nova volume-list
> nova volume-attach demo-instance1 `nova volume-list | awk '/ available / {print $2}' | head -1`
> nova volume-list
*       Access your instance using SSH from the controller node or any host on the external network and use the fdisk command to verify the presence of the volume as the /dev/vdb block storage device:
> ssh cirros@203.0.113.102

Verification Of Operations
==========================
1.      NTP Setup
> ntpq -c peers
> ntpq -c assoc
2.      OpenStack Identity
> unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
> keystone --os-tenant-name admin --os-username admin --os-password root123 --os-auth-url http://controller:35357/v2.0 token-get
> keystone --os-tenant-name admin --os-username admin --os-password root123 --os-auth-url http://controller:35357/v2.0 tenant-list
> keystone --os-tenant-name admin --os-username admin --os-password root123 --os-auth-url http://controller:35357/v2.0 user-list
> keystone --os-tenant-name admin --os-username admin --os-password root123 --os-auth-url http://controller:35357/v2.0 role-list
> keystone --os-tenant-name demo --os-username demo --os-password root123 --os-auth-url http://controller:35357/v2.0 token-get
> keystone --os-tenant-name demo --os-username demo --os-password root123 --os-auth-url http://controller:35357/v2.0 user-list

3.      Image Service
> mkdir /tmp/images
> wget -P /tmp/images http://cdn.download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
> source admin-openrc.sh
> glance image-create --name "cirros-0.3.3-x86_64" --file /tmp/images/cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --is-public True --progress
> glance image-list
> rm -r /tmp/images
4.      OpenStack Compute
> source /root/admin-openrc.sh
> nova service-list
> nova image-list
5.      OpenStack Networking (neutron)
> source /root/admin-openrc.sh
> neutron agent-list
> nova net-list
6.      Dashboard
> firefox http://controller/dashboard
*       authenticate using admin or demo

7.      Block Storage service
> source /root/admin-openrc.sh
> cinder service-list
> source demo-openrc.sh
> cinder create --display-name demo-volume1 1
> cinder list
8.      Object Storage
*       Account ring
> swift-ring-builder account.builder
*       Container ring
> swift-ring-builder container.builder
*       Object ring
> swift-ring-builder object.builder
> source demo-openrc.sh
> swift stat
> swift upload demo-container1 FILE
        { Replace FILE with the name of a local file to upload to the demo-container1 container. }
> swift list
> swift download demo-container1 FILE
        { Replace FILE with the name of the file uploaded to the demo-container1 container. }
9.      Orchestration module
> source demo-openrc.sh
> vim test-stack.yml
heat_template_version: 2014-10-16
description: A simple server.
parameters:
  ImageID:
    type: string
    description: Image used to boot a server
  NetID:
    type: string
    description: Network ID for the server
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: ImageID }
      flavor: m1.tiny
      networks:
        - network: { get_param: NetID }
outputs:
  private_ip:
    description: IP address of the server in the private network
    value: { get_attr: [ server, first_address ] }
> NET_ID=$(nova net-list | awk '/ demo-net / { print $2 }')
> heat stack-create -f test-stack.yml -P "ImageID=cirros-0.3.3-x86_64;NetID=$NET_ID" testStack
> heat stack-list
10.     Telemetry installation
> source admin-openrc.sh
> ceilometer meter-list
> glance image-download "cirros-0.3.3-x86_64" > cirros.img
> ceilometer meter-list
> ceilometer statistics -m image.download -p 60

11.     Database service installation
> source ~/demo-openrc.sh
> trove list
*       Assuming you have created an image for the type of database you want, and have updated the datastore to use that image, you can now create a Trove instance (database). To do this, use the trove create command.
> trove create name 2 --size=2 --databases DBNAME --users USER:PASSWORD --datastore_version mysql-5.5 --datastore mysql

12.     Data processing service
> source demo-openrc.sh
> sahara cluster-list
