Node Setup
----------
OpenStack requires a minimum of five nodes, namely:
        i)   controller   10.0.0.11
        ii)  network      10.0.0.21
        iii) compute1     10.0.0.31
        iv)  block1       10.0.0.41
        v)   object1      10.0.0.51
Create an updated repository on 10.0.0.1 and share it via an HTTP server. Upgrade each node to the latest kernel, because OpenStack networking requires a recent kernel.
[latest]
name=Repository with latest RPMs
baseurl=http://10.0.0.1/latest
enabled=1
gpgcheck=0
priority=98
[centos7]
name=Repository with Centos7
baseurl=http://10.0.0.1/shared
enabled=1
gpgcheck=0
However, if you want to set up an online repository for OpenStack, run the following commands on all nodes:
# yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
# yum install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm
Disabling SELinux
>       sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
        (/etc/sysconfig/selinux is a symlink to /etc/selinux/config)
>       setenforce 0
*
Install iptables-services on the CentOS 7 controller node and disable firewalld.
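The commands themselves were lost in this copy; a minimal sketch of the usual CentOS 7 sequence (assuming the stock iptables-services and firewalld unit names) is:

```shell
# Swap firewalld for the classic iptables services on the controller node.
yum install -y iptables-services
systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl enable iptables.service ip6tables.service
systemctl start iptables.service ip6tables.service
```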
*       Remove the chrony service; it blocks ntpd from starting automatically after a reboot.
>       yum remove -y chrony
        # increase stratum in /etc/ntp.conf if this node serves time to the other nodes
On Other Nodes
>       vim /etc/ntp.conf
                server controller iburst
>       systemctl enable ntpd.service
>       systemctl start ntpd.service
Creating databases for keystone, glance, nova, neutron, cinder, heat, trove
>       mysql -u root -p
>
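The individual SQL statements were not preserved here; the usual pattern for each service database (shown for keystone, assuming the root123 password used throughout this guide) is:

```shell
# Create the keystone database and grant the service user access.
# Repeat for glance, nova, neutron, cinder, heat and trove,
# substituting the service name in each statement.
mysql -u root -p <<'EOF'
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'root123';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'root123';
EOF
```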
vim /etc/keystone/keystone.conf
[DEFAULT]
admin_token = 46ebf57e669fa91d3db0
verbose = True
[database]
connection = mysql://keystone:root123@controller/keystone
[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token
[revoke]
driver = keystone.contrib.revoke.backends.sql.Revoke
*       Create generic certificates and keys and restrict access to the associated files
>       keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
>       chown -R keystone:keystone /var/log/keystone
>       chown -R keystone:keystone /etc/keystone/ssl
>       chmod -R o-rwx /etc/keystone/ssl
>       su -s /bin/sh -c "keystone-manage db_sync" keystone
>       systemctl enable openstack-keystone.service
>       systemctl start openstack-keystone.service
*
By default, the Identity service stores expired tokens in the database indefinitely. The accumulation of expired tokens considerably increases the database size and might degrade service performance, particularly in environments with limited resources.
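The corresponding command did not survive in this copy; the usual remedy (a sketch, assuming the keystone system user and log directory exist) is an hourly cron job that flushes expired tokens:

```shell
# Purge expired keystone tokens every hour so the token table stays small.
(crontab -l -u keystone 2>&1 | grep -q token_flush) || \
  echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' \
  >> /var/spool/cron/keystone
```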
>
export OS_SERVICE_TOKEN=46ebf57e669fa91d3db0
export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
*
Create an administrative tenant, user, and role for administrative operations in your environment:
>
keystone tenant-create --name admin --description "Admin Tenant"
>
keystone user-create --name admin --pass root123 --email root@controller
>
keystone role-create --name admin
>
keystone user-role-add --user admin --tenant admin --role admin
>
keystone tenant-create --name demo --description "Demo Tenant"
>
keystone user-create --name demo --tenant demo --pass root123 --email demo@controller
>
keystone tenant-create --name service --description "Service Tenant"
12. Create the service entity and API endpoint
----------------------------------------------
>
keystone service-create --name keystone --type identity --description "OpenStack Identity"
>
keystone endpoint-create --service-id $(keystone service-list | awk '/ identity / {print $2}') --publicurl http://controller:5000/v2.0 --internalurl http://controller:5000/v2.0 --adminurl http://controller:35357/v2.0 --region regionOne
>
cat > /root/admin-openrc.sh <<EOF
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=root123
export OS_AUTH_URL=http://controller:35357/v2.0
EOF
>
cat > /root/demo-openrc.sh <<EOF
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=root123
export OS_AUTH_URL=http://controller:5000/v2.0
EOF
>
source /root/admin-openrc.sh
>
keystone service-create --name glance --type image --description "OpenStack Image Service"
>
keystone endpoint-create --service-id $(keystone service-list | awk '/ image / {print $2}') --publicurl http://controller:9292 --internalurl http://controller:9292 --adminurl http://controller:9292 --region regionOne
>
vim /etc/glance/glance-api.conf
[DEFAULT]
notification_driver = noop
verbose = True
[database]
connection = mysql://glance:root123@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = root123
[paste_deploy]
flavor = keystone
[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
>
vim /etc/glance/glance-registry.conf
[DEFAULT]
notification_driver = noop
verbose = True
[database]
connection = mysql://glance:root123@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = root123
[paste_deploy]
flavor = keystone
>       su -s /bin/sh -c "glance-manage db_sync" glance
>       systemctl enable openstack-glance-api.service openstack-glance-registry.service
>       systemctl start openstack-glance-api.service openstack-glance-registry.service
14. Add the Compute service
---------------------------
>
keystone user-create --name nova --pass root123
>
keystone user-role-add --user nova --tenant service --role admin
>
keystone service-create --name nova --type compute --description "OpenStack Compute"
>
keystone endpoint-create --service-id $(keystone service-list | awk '/ compute / {print $2}') --publicurl http://controller:8774/v2/%\(tenant_id\)s --internalurl http://controller:8774/v2/%\(tenant_id\)s --adminurl http://controller:8774/v2/%\(tenant_id\)s --region regionOne
>
vim /etc/nova/nova.conf
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
auth_strategy = keystone
my_ip = 10.0.0.11
vncserver_listen = 10.0.0.11
vncserver_proxyclient_address = 10.0.0.11
verbose = True
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = root123
[database]
connection = mysql://nova:root123@controller/nova
[glance]
host = controller
>
su -s /bin/sh -c "nova-manage db sync" nova
>
systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
>
systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
(On Compute Node)
>       vim /etc/nova/nova.conf
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = root123
Note:------>
        egrep -c '(vmx|svm)' /proc/cpuinfo
        If this command returns a value of one or greater, your compute node supports hardware acceleration, which typically requires no additional configuration. If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
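A minimal sketch of that QEMU fallback, assuming the [libvirt] section of /etc/nova/nova.conf and that the crudini package is installed:

```shell
# Force QEMU (software virtualization) when the CPU lacks VT-x/AMD-V.
# Equivalent to setting:  [libvirt] virt_type = qemu  in /etc/nova/nova.conf
yum install -y crudini
crudini --set /etc/nova/nova.conf libvirt virt_type qemu
systemctl restart openstack-nova-compute.service
```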
> systemctl enable libvirtd.service openstack-nova-compute.service
> systemctl start libvirtd.service openstack-nova-compute.service
source /root/admin-openrc.sh
keystone user-create --name neutron --pass root123
keystone user-role-add --user neutron --tenant service --role admin
>
keystone service-create --name neutron --type network --description "OpenStack Networking"
>
keystone endpoint-create --service-id $(keystone service-list | awk '/ network / {print $2}') --publicurl http://controller:9696 --adminurl http://controller:9696 --internalurl http://controller:9696 --region regionOne
>
keystone tenant-get service
>
vim /etc/neutron/neutron.conf
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_admin_auth_url = http://controller:35357/v2.0
nova_region_name = regionOne
nova_admin_username = nova
nova_admin_tenant_id = 36a7f48849d244b0bb7166194a728135
nova_admin_password = root123
verbose = True
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[database]
connection = mysql://neutron:root123@controller/neutron
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = root123
[neutron]
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = root123
>
>
>
source /root/admin-openrc.sh
keystone tenant-get service
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
>
vim /etc/nova/nova.conf
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[neutron]
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = root123
>
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
>
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron
Note: Check the iptables rules before restarting the services.
>
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service
>
systemctl enable neutron-server.service
>
systemctl start neutron-server.service
vim /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
>
>
sysctl -p
vim /etc/neutron/neutron.conf
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
verbose = True
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = root123
>
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = external
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 10.0.1.21
enable_tunneling = True
bridge_mappings = external:br-ex
[agent]
tunnel_types = gre
>
vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
external_network_bridge = br-ex
router_delete_namespaces = True
verbose = True
>
vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
dhcp_delete_namespaces = True
verbose = True
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
>
vim /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1454
>
pkill dnsmasq
>
vim /etc/neutron/metadata_agent.ini
[DEFAULT]
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = root123
nova_metadata_ip = controller
metadata_proxy_shared_secret = root123
verbose = True
Note:--------
vim /etc/nova/nova.conf
[neutron]
service_metadata_proxy = True
metadata_proxy_shared_secret = root123
>
systemctl enable openvswitch.service
systemctl start openvswitch.service
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex INTERFACE_NAME
>
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
>
cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
>
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
>
systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
>
systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
(On controller Node)
>
source /root/admin-openrc.sh
>
neutron agent-list
vim /etc/sysctl.conf
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
>
>
sysctl -p
vim /etc/neutron/neutron.conf
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
verbose = True
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = root123
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 10.0.1.31
enable_tunneling = True
[agent]
tunnel_types = gre
>
>
>
vim /etc/nova/nova.conf
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[neutron]
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = root123
>       ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
>
cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
>
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
>
systemctl restart openstack-nova-compute.service
>
systemctl enable neutron-openvswitch-agent.service
>
systemctl start neutron-openvswitch-agent.service
Create initial networks
=======================
>
source /root/admin-openrc.sh
>
neutron net-create ext-net --router:external True --provider:physical_network external --provider:network_type flat
>
neutron subnet-create ext-net --name ext-subnet --allocation-pool start=203.0.113.101,end=203.0.113.200 --disable-dhcp --gateway 203.0.113.1 203.0.113.0/24
Tenant network
==============
>
source /root/admin-openrc.sh
>
neutron net-create demo-net
>
neutron subnet-create demo-net --name demo-subnet --gateway 192.168.1.1 192.168.1.0/24
>
neutron router-create demo-router
>
neutron router-interface-add demo-router demo-subnet
>
neutron router-gateway-set demo-router ext-net
Verify connectivity
===================
>
ping -c 4 203.0.113.101
Add the Dashboard
-----------------
>       vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}
>       setsebool -P httpd_can_network_connect on
>       chown -R apache:apache /usr/share/openstack-dashboard/static
>       systemctl enable httpd.service memcached.service
>       systemctl start httpd.service memcached.service
source /root/admin-openrc.sh
keystone user-create --name cinder --pass root123
keystone user-role-add --user cinder --tenant service --role admin
>
keystone service-create --name cinder --type volume --description "OpenStack Block Storage"
>
keystone service-create --name cinderv2 --type volumev2 --description "OpenStack Block Storage"
>
keystone endpoint-create --service-id $(keystone service-list | awk '/ volume / {print $2}') --publicurl http://controller:8776/v1/%\(tenant_id\)s --internalurl http://controller:8776/v1/%\(tenant_id\)s --adminurl http://controller:8776/v1/%\(tenant_id\)s --region regionOne
>
keystone endpoint-create --service-id $(keystone service-list | awk '/ volumev2 / {print $2}') --publicurl http://controller:8776/v2/%\(tenant_id\)s --internalurl http://controller:8776/v2/%\(tenant_id\)s --adminurl http://controller:8776/v2/%\(tenant_id\)s --region regionOne
>
vim /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
auth_strategy = keystone
my_ip = 10.0.0.11
verbose = True
[database]
connection = mysql://cinder:root123@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = root123
>       su -s /bin/sh -c "cinder-manage db sync" cinder
>       systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
>       systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
Install and configure a storage node
=========================================
>
>
>
>
>
Note:======
        1. Each item in the filter array begins with a for accept or r for reject and includes a regular expression for the device name. The array must end with r/.*/ to reject any remaining devices.
        2. If the compute node is installed on LVM, then change /etc/lvm/lvm.conf to filter = [ "a/sda/", "r/.*/" ].
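The storage-node commands themselves were lost above; a typical sequence on block1 (a sketch, assuming /dev/sdb is the data disk and the cinder-volumes volume group name from the stock guide) is:

```shell
# Prepare the LVM backing store for cinder on the block storage node.
yum install -y lvm2
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
# Then, in /etc/lvm/lvm.conf, restrict device scanning per the note above:
#   filter = [ "a/sdb/", "r/.*/" ]
```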
>
vim /etc/cinder/cinder.conf
[database]
        connection = mysql://cinder:root123@controller/cinder
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
auth_strategy = keystone
my_ip = 10.0.0.41
glance_host = controller
iscsi_helper = lioadm
verbose = True
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = root123
>       systemctl enable openstack-cinder-volume.service target.service
>       systemctl start openstack-cinder-volume.service target.service
18. Add Object Storage
----------------------
On Controller Node
==================
>       keystone user-create --name swift --pass root123
>       keystone user-role-add --user swift --tenant service --role admin
>
keystone service-create --name swift --type object-store --description "OpenStack Object Storage"
>
keystone endpoint-create --service-id $(keystone service-list | awk '/ object-store / {print $2}') --publicurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' --internalurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' --adminurl http://controller:8080 --region regionOne
>
curl -o /etc/swift/proxy-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/proxy-server.conf-sample
>
vim /etc/swift/proxy-server.conf
[DEFAULT]
bind_port = 8080
user = swift
swift_dir = /etc/swift
[pipeline:main]
pipeline = authtoken cache healthcheck keystoneauth proxy-logging proxy-server
[app:proxy-server]
allow_account_management = true
account_autocreate = true
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,_member_
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = swift
admin_password = root123
delay_auth_decision = true
[filter:cache]
memcache_servers = 127.0.0.1:11211
On Storage Node
================
* Configure nodes object1 & object2, update the /etc/hosts file, and configure the NTP client on both nodes.
* Packages required: ntp openstack-selinux xfsprogs rsync
>       for i in b c; do parted /dev/sd$i "mklabel gpt"; parted /dev/sd$i "mkpart xfs 1 100%"; done
>
mkdir -p /srv/node/sd{b,c}1
>
mkfs.xfs /dev/sdb1
mkfs.xfs /dev/sdc1
cat >> /etc/fstab <<EOF
/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc1 /srv/node/sdc1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
EOF
mount -a
vim /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 10.0.0.51
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
>       vim /etc/swift/account-server.conf
[DEFAULT]
        bind_ip = 10.0.0.51
        bind_port = 6002
        user = swift
        swift_dir = /etc/swift
        devices = /srv/node
[pipeline:main]
        pipeline = healthcheck recon account-server
[filter:recon]
        recon_cache_path = /var/cache/swift
>
vim /etc/swift/container-server.conf
[DEFAULT]
bind_ip = 10.0.0.51
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = healthcheck recon container-server
[filter:recon]
recon_cache_path = /var/cache/swift
>
vim /etc/swift/object-server.conf
[DEFAULT]
bind_ip = 10.0.0.51
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = healthcheck recon object-server
[filter:recon]
recon_cache_path = /var/cache/swift
Account ring
*************
** Perform these steps on the "controller node"
>       cd /etc/swift
>       swift-ring-builder account.builder create 10 3 1
>       swift-ring-builder account.builder add r1z1-10.0.0.51:6002/sdb1 100
>       swift-ring-builder account.builder add r1z1-10.0.0.51:6002/sdc1 100
>       swift-ring-builder account.builder rebalance
Container ring
***************
** Perform these steps on the "controller node"
>       swift-ring-builder container.builder create 10 3 1
>       swift-ring-builder container.builder add r1z1-10.0.0.51:6001/sdb1 100
>       swift-ring-builder container.builder add r1z1-10.0.0.51:6001/sdc1 100
>       swift-ring-builder container.builder rebalance
Object ring
*************
** Perform these steps on the "controller node"
>       swift-ring-builder object.builder create 10 3 1
>       swift-ring-builder object.builder add r1z1-10.0.0.51:6000/sdb1 100
>       swift-ring-builder object.builder add r1z1-10.0.0.51:6000/sdc1 100
>       swift-ring-builder object.builder rebalance
Finalize Installation
**********************
>       ssh object1 "wget -O /etc/swift/swift.conf 10.0.0.1/config/object1/swift.conf; chown -R swift:swift /etc/swift"
>
{On Controller Node}
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
>
{On Object node}
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
source /root/admin-openrc.sh
keystone user-create --name heat --pass root123
keystone user-role-add --user heat --tenant service --role admin
keystone role-create --name heat_stack_owner
keystone user-role-add --user demo --tenant demo --role heat_stack_owner
> keystone role-create --name heat_stack_user
> keystone service-create --name heat --type orchestration --description "Orchestration"
> keystone service-create --name heat-cfn --type cloudformation --description "Orchestration"
> keystone endpoint-create --service-id $(keystone service-list | awk '/ orchestration / {print $2}') --publicurl http://controller:8004/v1/%\(tenant_id\)s --internalurl http://controller:8004/v1/%\(tenant_id\)s --adminurl http://controller:8004/v1/%\(tenant_id\)s --region regionOne
> keystone endpoint-create --service-id $(keystone service-list | awk '/ cloudformation / {print $2}') --publicurl http://controller:8000/v1 --internalurl http://controller:8000/v1 --adminurl http://controller:8000/v1 --region regionOne
>
vim /etc/heat/heat.conf
[database]
connection = mysql://heat:root123@controller/heat
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
verbose = True
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = heat
admin_password = root123
[ec2authtoken]
auth_uri = http://controller:5000/v2.0
>       su -s /bin/sh -c "heat-manage db_sync" heat
>       systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
>       systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
vim /etc/mongodb.conf
bind_ip = 10.0.0.11
smallfiles = true
>       systemctl enable mongod.service
>       systemctl start mongod.service
>       mongo --host controller --eval 'db = db.getSiblingDB("ceilometer");
        db.addUser({user: "ceilometer",
        pwd: "root123",
        roles: [ "readWrite", "dbAdmin" ]})'
>
>
>
source /root/admin-openrc.sh
keystone user-create --name ceilometer --pass root123
keystone user-role-add --user ceilometer --tenant service --role admin
> keystone service-create --name ceilometer --type metering --description "Telemetry"
> keystone endpoint-create --service-id $(keystone service-list | awk '/ metering / {print $2}') --publicurl http://controller:8777 --internalurl http://controller:8777 --adminurl http://controller:8777 --region regionOne
* Packages required: "openstack-ceilometer-api openstack-ceilometer-collector openstack-ceilometer-notification openstack-ceilometer-central openstack-ceilometer-alarm python-ceilometerclient"
>       vim /etc/ceilometer/ceilometer.conf
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
auth_strategy = keystone
verbose = True
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = ceilometer
admin_password = root123
[service_credentials]
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = root123
[publisher]
metering_secret = root123
>
systemctl enable openstack-ceilometer-api.service openstack-ceilometer-notification.service openstack-ceilometer-central.service openstack-ceilometer-collector.service openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service
>
systemctl start openstack-ceilometer-api.service openstack-ceilometer-notification.service openstack-ceilometer-central.service openstack-ceilometer-collector.service openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service
vim /etc/ceilometer/ceilometer.conf
[publisher]
metering_secret = root123
[DEFAULT]
rabbit_host = controller
rabbit_password = root123
verbose = True
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = ceilometer
admin_password = root123
[service_credentials]
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = root123
os_endpoint_type = internalURL
os_region_name = regionOne
>
vim /etc/nova/nova.conf
[DEFAULT]
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = messagingv2
>
>
>
vim /etc/glance/glance-api.conf
[DEFAULT]
notification_driver = messagingv2
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
>
vim /etc/glance/glance-registry.conf
[DEFAULT]
        notification_driver = messagingv2
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = root123
vim /etc/cinder/cinder.conf
[DEFAULT]
control_exchange = cinder
notification_driver = messagingv2
>
vim /etc/swift/proxy-server.conf
[filter:keystoneauth]
operator_roles = admin,_member_,ResellerAdmin
[pipeline:main]
pipeline = authtoken cache healthcheck keystoneauth proxy-logging ceilometer proxy-server
**
>
>
[filter:ceilometer]
use = egg:ceilometer#swift
log_level = WARN
source /root/demo-openrc.sh
ssh-keygen
nova keypair-add --pub-key ~/.ssh/id_rsa.pub demo-key
nova keypair-list
To launch an instance
1. A flavor specifies a virtual resource allocation profile which includes processor, memory, and storage.
>
nova flavor-list
nova image-list
neutron net-list
nova secgroup-list
>
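The boot command itself was not captured here; a sketch matching this guide's names (demo-net, demo-key, the cirros image; the net-id is whatever `neutron net-list` reports for demo-net) is:

```shell
# Launch a test instance on the demo tenant network.
DEMO_NET_ID=$(neutron net-list | awk '/ demo-net / {print $2}')
nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 \
  --nic net-id=$DEMO_NET_ID --security-group default \
  --key-name demo-key demo-instance1
```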
*
nova list
ping -c 4 192.168.1.1
ping -c 4 openstack.org
ping -c 4 203.0.113.102
* Access your instance using SSH from the controller node or any host on the external network:
>
ssh cirros@203.0.113.102
Note:- If your host does not contain the public/private key pair created in an earlier step, SSH prompts for the default password associated with the cirros user.
** To attach a Block Storage volume to your instance
>       source /root/demo-openrc.sh
>       nova volume-list
>       nova volume-attach demo-instance1 `nova volume-list | awk '/ available / {print $2}' | head -1`
>       nova volume-list
* Access your instance using SSH from the controller node or any host on the external network, and use the fdisk command to verify the presence of the volume as the /dev/vdb block storage device:
>
ssh cirros@203.0.113.102
Verification of Operations
==========================
1. NTP Setup
>       ntpq -c peers
>       ntpq -c assoc
2. OpenStack Identity
3. Image Service
>       mkdir /tmp/images
>       source admin-openrc.sh
>       glance image-list
>       rm -r /tmp/images
4. OpenStack Compute
>
source /root/admin-openrc.sh
>
nova service-list
>
nova image-list
5. OpenStack Networking
>       source /root/admin-openrc.sh
>
neutron agent-list
>
nova net-list
6. Dashboard
>       firefox http://controller/dashboard
*       Authenticate using the admin or demo credentials.
7. Block Storage
>       source /root/admin-openrc.sh
>
cinder service-list
>
source demo-openrc.sh
>
>
cinder list
8. Object Storage
Account ring
>
swift-ring-builder account.builder
Container ring
>
swift-ring-builder container.builder
Object ring
>
swift-ring-builder object.builder
>
source demo-openrc.sh
>
swift stat
>
swift list
>
9. Orchestration module
>
source demo-openrc.sh
>
vim test-stack.yml
heat_template_version: 2014-10-16
description: A simple server.
parameters:
  ImageID:
    type: string
    description: Image used to boot a server
  NetID:
    type: string
    description: Network ID for the server
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: ImageID }
      flavor: m1.tiny
      networks:
        - network: { get_param: NetID }
outputs:
  private_ip:
    description: IP address of the server in the private network
    value: { get_attr: [ server, first_address ] }
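The launch step is missing above; the template is typically instantiated roughly like this (a sketch using this guide's names, with NET_ID taken from `nova net-list`):

```shell
# Create a stack from the template and check its status.
NET_ID=$(nova net-list | awk '/ demo-net / { print $2 }')
heat stack-create -f test-stack.yml \
  -P "ImageID=cirros-0.3.3-x86_64;NetID=$NET_ID" testStack
heat stack-list
```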
10. Telemetry installation
> source admin-openrc.sh
> ceilometer meter-list
> glance image-download "cirros-0.3.3-x86_64" > cirros.img
> ceilometer meter-list
> ceilometer statistics -m image.download -p 60
11. Database service (Trove)
*       Assuming you have created an image for the type of database you want, and have updated the datastore to use that image, you can now create a Trove instance (database). To do this, use the trove create command.
>       trove create name 2 --size=2 --databases DBNAME --users USER:PASSWORD --datastore_version mysql-5.5 --datastore mysql
12.