
Openstack EX210

****************

The exam objectives for Red Hat OpenStack Platform 10 are as follows:

• Manage an OpenStack undercloud and overcloud

o Use the command line interface to manage an OpenStack environment

o Use Red Hat OpenStack director to deploy additional nodes in an existing overcloud

o Create a message queue and publish messages to it

o Add additional compute nodes

o Configure and manage networking

o Monitor OpenStack service metrics and events

• Manage projects and resources

o Create projects, users and roles

o Create and manage security groups

o Configure flavors

o Set quotas

• Manage instances

o Manage and customize images

o Launch instances

o Configure external access for instances

o Configure images at deployment

o Perform live migration of instances

• Manage and utilize storage

o Configure and manage both block and object storage

o Attach a block storage volume to an instance

o Store objects in object storage containers


• Use OpenStack orchestration

o Use orchestration templates to configure OpenStack services and resources

o Use orchestration templates to deploy a multi-instance application

Study points for the exam

To become a Red Hat Certified System Administrator in Red Hat OpenStack, you will validate your ability
to perform these tasks:

• Understand and work with director-based deployments

o Use identity environment files to connect to the undercloud

o Use identity environment files to connect to the overcloud

o Use template files, environment files, and other resources to obtain information about
an OpenStack environment

o Work with containerized services

• Configure OpenStack domains

o Create projects

o Create groups

o Create users

o Manage quotas

• Create resources

o Create virtual machine flavors

o Add existing images to an overcloud

o Create security groups

o Create key pairs

• Configure networking

o Create and assign networks to projects

o Configure network routers


o Configure software-defined networks

o Work with Open Virtual Networks

• Configure floating IP addresses

o Configure an instance to use a floating IP address

o Configure a service to be accessible via a floating IP address

• Manage block storage

o Create a block storage volume

o Attach block storage volumes to an instance

o Snapshot a storage volume

• Work with Red Hat® Ceph Storage

o Configure Ceph storage

o Monitor Ceph Storage

o Diagnose and troubleshoot Ceph Storage issues

• Work with object storage

o Create a Swift container

o Utilize a Swift container

• Work with shared storage

o Create shared file systems

o Configure instances to use shared file systems

• Manage instances

o Launch instances

o Associate instances with specified projects and networks

o Use key pairs to connect to instances

o Configure an instance during deployment


• Create a Heat stack

o Create a Heat template

o Diagnose and correct a broken Heat template

o Launch a Heat stack

• Work with images

o Modify an existing image

o Create and associate flavors to customized images

o Launch an instance from a customized image

o Launch an instance on a second compute node

• Work with OpenStack services

o Manage Identity Service tokens

o Enable tracing in RabbitMQ

o Display statistics using Ceilometer

Reference :

https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/

https://www.tuxfixer.com/create-tenant-in-openstack-newton-using-command-line-interface/#more-3226

https://www.golinuxhub.com/2018/08/openstack-tripleo-architecture-step-guide-install-undercloud-overcloud-heat-template.html#Introspection
1) Create a tenant (project) and 4 users: 2 for Project A and 2 for Project B (one admin and one regular member each)

Each user has a username and an email address ---> CL110 (Chapter 2, Organizing People and Resources)

*********************************************************************

https://www.tuxfixer.com/create-tenant-in-openstack-newton-using-command-line-interface/#more-3226

[root@allinone ~]# source keystonerc_admin

[root@allinone ~(keystone_admin)]#

Create project tenant

[root@allinone ~(keystone_admin)]# openstack project create --description "project owned by sigma" sigma

Create user for new project

[root@allinone ~(keystone_admin)]# openstack user create --project sigma --email admin@sigma.com --password sigma123 sigma

1.1. Create a Project


Use this procedure to create projects, add members to the project, and set resource limits for the
project.
1. As an admin user in the dashboard, select Identity > Projects.
2. Click Create Project.
3. On the Project Information tab, enter a name and description for the project (the Enabled
check box is selected by default).
4. On the Project Members tab, add members to the project from the All Users list.
5. On the Quotas tab, specify resource limits for the project.
6. Click Create Project
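
The same flow maps directly to exam task 1 above (two projects, two users each, one admin and one regular member per project). A rough CLI sketch, assuming hypothetical project and user names (projecta, usera1, and so on), placeholder passwords, and that the admin and _member_ roles exist under those names in this release:

[root@allinone ~(keystone_admin)]# openstack project create --description "Project A" projecta
[root@allinone ~(keystone_admin)]# openstack project create --description "Project B" projectb
[root@allinone ~(keystone_admin)]# openstack user create --project projecta --email usera1@example.com --password redhat usera1
[root@allinone ~(keystone_admin)]# openstack user create --project projecta --email usera2@example.com --password redhat usera2
[root@allinone ~(keystone_admin)]# openstack user create --project projectb --email userb1@example.com --password redhat userb1
[root@allinone ~(keystone_admin)]# openstack user create --project projectb --email userb2@example.com --password redhat userb2
[root@allinone ~(keystone_admin)]# openstack role add --project projecta --user usera1 admin
[root@allinone ~(keystone_admin)]# openstack role add --project projecta --user usera2 _member_
[root@allinone ~(keystone_admin)]# openstack role add --project projectb --user userb1 admin
[root@allinone ~(keystone_admin)]# openstack role add --project projectb --user userb2 _member_
[root@allinone ~(keystone_admin)]# openstack role assignment list --project projecta
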
2.1) - Download an image with wget so it appears in Glance

****************************************************

Source the admin credentials to gain access to admin-only CLI commands:

# source ~/admin-openrc

Download the source image:

# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

Create new image from uploaded file:

Upload the image to the Image service using the QCOW2 disk format, bare container format, and public
visibility so all projects can access it:

[root@allinone ~(keystone_admin)]# openstack image create "cirros" --file cirros-0.3.4-x86_64-


disk.img --disk-format qcow2 --container-format bare --public
[root@allinone ~(keystone_admin)]# openstack image create --container-format bare
--file cirros-0.3.4-x86_64-disk.img --public cirros_0.3.4

Confirm upload of the image and validate attributes:

$ openstack image list


2.2) - Create a Security Group for a Project

************************************

Create dedicated security group

Security Groups control network access to / from instances inside the tenant.

Create a new security group named sigma_sec for project sigma:

[root@allinone ~(keystone_admin)]# openstack security group create --project sigma sigma_sec

+-----------------+----------------------------------+

| Field | Value |

+-----------------+----------------------------------+

| created_at | 2017-01-06T18:31:27Z |

| description | sigma_sec |

... output omitted ...

Add rule for sigma_sec group to permit incoming ICMP ECHO (ping):

[root@allinone ~(keystone_admin)]# openstack security group rule create --protocol icmp --ingress
--project sigma sigma_sec

+-------------------+--------------------------------------+

| Field | Value |

+-------------------+--------------------------------------+

| created_at | 2017-01-06T20:02:17Z |

... output omitted ...

Add rule for sigma_sec group to permit incoming SSH access:

[root@allinone ~(keystone_admin)]# openstack security group rule create --protocol tcp --dst-port 22
--ingress --project sigma sigma_sec
+-------------------+--------------------------------------+

| Field | Value |

+-------------------+--------------------------------------+

| created_at | 2017-01-06T20:04:12Z |

... output omitted ...

# openstack security group list

2.3) - Create a Key Pair in a Project

****************************

Creating a key pair

1. In the dashboard, select Project > Compute > Access & Security.

2. On the Key Pairs tab, click Create Key Pair.

3. Specify a name in the Key Pair Name field, and click Create Key Pair.

Create a key pair in Horizon


The first step is to create the actual key pair, if you don’t already have one:

1. Click “Compute” under the “Project” option in the Horizon left-hand menu.
2. Select “Access & Security”.
3. Click the “Key Pairs” tab.
4. Click “+Create Key Pair”.
5. Name your new key pair and click “Create Key Pair”.

6. The new key pair will automatically download to your local machine; make sure you don’t lose
it, or you won’t be able to access the new instance.
7. Click Access & Security again to see your new key pair.

You can also create a key pair manually and import it, or import an existing public key, by clicking
the “Import Key Pair” button and adding it to the form.
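
The same can be done from the CLI; a small sketch, assuming an existing public key at ~/.ssh/id_rsa.pub (hypothetical path) and placeholder key pair names:

$ openstack keypair create my_keypair > my_keypair.pem                        # generate a new key pair, saving the private key
$ chmod 600 my_keypair.pem
$ openstack keypair create --public-key ~/.ssh/id_rsa.pub imported_keypair    # or import an existing public key
$ openstack keypair list
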
Add a key pair to an instance
To add a key pair to an instance, you need to specify it when you’re launching the instance.

1. Under Instances click “Launch Instance”.


2. Click the “Access & Security” tab.
3. Choose the appropriate key pair from the pulldown (or click the “+” sign to import one).
4. After completing the rest of the required information on the other tabs, click “Launch”.

Securing and using your new key pair


To use your new key pair, you need to make it available to your ssh client. On Linux, follow these
instructions:

1. Copy the downloaded key pair into your ~/.ssh/ directory


2. Change permissions to 600:
3. # cd ~/.ssh
# chmod 600 KEY_NAME.pem

4. Now you can use the key pair to connect to the instances created using this key pair:

# ssh -i ~/.ssh/KEY_NAME.pem USER@SERVER_IP

Create Key Pair from CLI :

[student@workstation ~(developer1-finance)]$ openstack keypair create key_sigma > sigma-keypair.pem

[student@workstation ~(developer1-finance)]$ chmod 600 sigma-keypair.pem


2.4) - Launch an OpenStack Instance

******************************

[stack@undercloud-director ~]$ openstack flavor list

[stack@undercloud-director ~]$ openstack image list

[stack@undercloud-director ~]$ openstack security group list

[stack@undercloud-director ~]$ openstack keypair list

• flavor — 300
• image — afa49adf-2831-4a00-9c57-afe1624d5557
• keypair — myKey
• security group — 29acef25-b59f-43a0-babf-6a5bb5cc7bed
• servername — You can name it anything you like, but in this example myNewServer will be used.
[user@localhost]$ openstack server create --flavor 300 --image afa49adf-2831-4a00-9c57-afe1624d5557 \
--key-name myKey --security-group 29acef25-b59f-43a0-babf-6a5bb5cc7bed myNewServer

[stack@undercloud-director ~]$ openstack server show myNewServer

[stack@undercloud-director ~]$ openstack server list


3.1) - Create and Configure a Private Network (local IP network) in a Project

************************************************************

[root@allinone ~(keystone_admin)]# openstack network create --project sigma --internal sigma_net


Create private / tenant network subnet named sigma_subnet for project sigma with specified CIDR,
Gateway and DHCP:

[root@allinone ~(keystone_admin)]# openstack subnet create --project sigma --subnet-range 192.168.20.0/24 \
--gateway 192.168.20.1 --network sigma_net --dhcp sigma_subnet
3.2 ) - Create Public Network

*********************

[root@allinone ~(keystone_admin)]# openstack network create --external --share pub_net

Create public / provider network subnet named pub_subnet with specified CIDR, Gateway and IP pool
range:

Note: we specified an allocation pool of OpenStack IP addresses (192.168.2.70 – 192.168.2.80)
for the public network, because we cannot use the whole IP range.
[root@allinone ~(keystone_admin)]# openstack subnet create --subnet-range
192.168.2.0/24 --no-dhcp --gateway 192.168.2.1 --network pub_net --allocation-pool
start=192.168.2.70,end=192.168.2.80 pub_subnet
3.3) - Create a Router in a Project

************************

Now we need to create router named sigma_router to connect tenant network with public network:

[root@allinone ~(keystone_admin)]# openstack router create --project sigma sigma_router

Now set gateway for sigma_router to our public / provider network pub_net (connect sigma_router to
pub_net).

[root@allinone ~(keystone_admin)]# openstack router set sigma_router --external-gateway pub_net

Note: if you have problems with the above command (openstack router set: error: unrecognized arguments:
--external-gateway), use the neutron command instead:

[root@allinone ~(keystone_admin)]# neutron router-gateway-set sigma_router pub_net

Set gateway for router sigma_router

Next, link sigma_router to sigma_subnet (connect sigma_subnet to sigma_router):

[root@allinone ~(keystone_admin)]# openstack router add subnet sigma_router sigma_subnet
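
To confirm the topology, the router and its interfaces can be inspected (a quick verification sketch, not required by the task itself):

[root@allinone ~(keystone_admin)]# openstack router show sigma_router
[root@allinone ~(keystone_admin)]# neutron router-port-list sigma_router
[root@allinone ~(keystone_admin)]# openstack network list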


4.1) - Create an Instance in a specific project using a (Glance) image, as directed by the exam task
***************************************************************************

1.1. Load the developer1 user credentials.

[student@workstation ~]$ source ~/developer1-finance-rc

1.2. Verify that the rhel7 image is available.

[student@workstation ~(developer1-finance)]$ openstack image list

+---------------+-------+--------+

| ID | Name | Status |

+---------------+-------+--------+

| 926c(...)4600 | rhel7 | active |

+---------------+-------+--------+

1.3. Verify that the m1.web flavor is available.

[student@workstation ~(developer1-finance)]$ openstack flavor list

+---------------+--------+------+------+-----------+-------+-----------+

| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |

+---------------+--------+------+------+-----------+-------+-----------+

| dd1b(...)6900 | m1.web | 2048 | 10 | 0 |1 | True |

+---------------+--------+------+------+-----------+-------+-----------+

1.4. Verify that the finance-network1 network is available.

[student@workstation ~(developer1-finance)]$ openstack network list

+---------------+---------------------+---------------+

| ID | Name | Subnets |

+---------------+---------------------+---------------+
| b0b7(...)0db4 | finance-network1 | a29f(...)855e |

... output omitted ...

1.5. Verify that the finance-web security group is available.

[student@workstation ~(developer1-finance)]$ openstack security group list

+---------------+-------------+------------------------+---------------+

| ID | Name | Description | Project |

+---------------+-------------+------------------------+---------------+

| bdfd(...)b154 | finance-web | finance-web | d9cc(...)ae0f |

... output omitted ...

1.6. Verify that the developer1-keypair1 key pair, and its associated file located at

/home/student/developer1-keypair1.pem are available.

[student@workstation ~(developer1-finance)]$ openstack keypair list

+---------------------+-----------------+

| Name | Fingerprint |

+---------------------+-----------------+

| developer1-keypair1 | cc:59(...)0f:f9 |

+---------------------+-----------------+

[student@workstation ~(developer1-finance)]$ file ~/developer1-keypair1.pem

/home/student/developer1-keypair1.pem: PEM RSA private key

[student@workstation ~(developer1-finance)]$ chmod 600 developer1-keypair1.pem


1.7. Launch an instance named finance-web1 using the rhel7 image, the m1.web flavor, the
finance-network1 network, the finance-web security group, and the developer1-keypair1 key pair.
The instance becomes active once the deployment completes.

[student@workstation ~(developer1-finance)]$ openstack server create --image rhel7 --flavor m1.web \
--security-group finance-web --key-name developer1-keypair1 \
--nic net-id=finance-network1 finance-web1

...output omitted...

1.8. Verify the status of the finance-web1 instance. The instance status will be Active.

[student@workstation ~(developer1-finance)]$ openstack server show finance-web1 -c name -c status

+--------+--------------+

| Field | Value |

+--------+--------------+

| name | finance-web1 |

| status | Active |

+--------+--------------+

4.2) - Create the Network and Security Group for the instance created in a project

**************************************************************
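
These steps reuse the commands from sections 2.2 and 3.1 above, executed with the target project's credentials. To verify which network and security group an existing instance actually uses, a minimal check sketch (the instance name finance-web1 follows the example in section 4.1):

[student@workstation ~(developer1-finance)]$ openstack server show finance-web1 -c addresses -c security_groups
[student@workstation ~(developer1-finance)]$ openstack network list
[student@workstation ~(developer1-finance)]$ openstack security group list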

4.3) - Create a Key Pair for an instance in a specific project

************************************************
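
A short sketch, assuming the project user's credentials have already been sourced (the key pair belongs to the user that creates it and is referenced with --key-name at launch):

[student@workstation ~(developer1-finance)]$ openstack keypair create key_sigma > sigma-keypair.pem
[student@workstation ~(developer1-finance)]$ chmod 600 sigma-keypair.pem
[student@workstation ~(developer1-finance)]$ openstack keypair list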

4.4) - The instance must be reachable over SSH; this requires the PEM file, the key pair, and a floating IP

(and do not forget to chmod 600 the PEM file)

***********************************************************************
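
Putting these requirements together, a minimal checklist sketch (KEY_NAME, INSTANCE_NAME, FLOATING_IP and USER are placeholders; the default login user depends on the image, for example cirros or cloud-user; the detailed walkthrough follows in the steps below):

$ chmod 600 KEY_NAME.pem
$ openstack floating ip create pub_net
$ openstack server add floating ip INSTANCE_NAME FLOATING_IP
$ ssh -i KEY_NAME.pem USER@FLOATING_IP
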
Steps:

1. Source keystone file with admin credentials

[root@allinone ~]# source keystonerc_admin

[root@allinone ~(keystone_admin)]#

2. Create project tenant

[root@allinone ~(keystone_admin)]# openstack project create --description "project owned by sigma" sigma

+-------------+---------------------------------------+

| Field | Value |

+-------------+---------------------------------------+

| description | project owned by sigma |

... output omitted ...

3. Create user for new project

Now create user named sigma with password sigma123 and assign it to project sigma:

[root@allinone ~(keystone_admin)]# openstack user create --project sigma --email admin@sigma.com --password sigma123 sigma

+------------+----------------------------------+

| Field | Value |

+------------+----------------------------------+

| email | admin@sigma.com |

| enabled | True |

... output omitted ...


Copy keystonerc_admin file to keystonerc_sigma file:

[root@allinone ~(keystone_admin)]# cp /root/keystonerc_admin /root/keystonerc_sigma

Next, modify the keystonerc_sigma file contents to look like below:

unset OS_SERVICE_TOKEN

export OS_USERNAME=sigma

export OS_PASSWORD=sigma123

export OS_AUTH_URL=http://192.168.2.26:5000/v2.0

export PS1='[\u@\h \W(keystone_sigma)]\$ '

export OS_TENANT_NAME=sigma

export OS_REGION_NAME=RegionOne

4. Create public network

Create the public / provider network pub_net (hence the external flag), shared and available to all
tenants including admin:

[root@allinone ~(keystone_admin)]# openstack network create --external --share pub_net

+---------------------------+--------------------------------------+

| Field | Value |

+---------------------------+--------------------------------------+

| admin_state_up | UP |

... output omitted ...


Create public / provider network subnet named pub_subnet with specified CIDR, Gateway and IP pool
range:

[root@allinone ~(keystone_admin)]# openstack subnet create --subnet-range 192.168.2.0/24 \
--no-dhcp --gateway 192.168.2.1 --network pub_net --allocation-pool \
start=192.168.2.70,end=192.168.2.80 pub_subnet

+-------------------+------------------------------------------------------+

| Field | Value |

+-------------------+------------------------------------------------------+

| allocation_pools | 192.168.2.70-192.168.2.80 |

... output omitted ...

Note: we specified here the allocation pool of OpenStack IP addresses (192.168.2.70 – 192.168.2.80)
for public network

5. Create private network

Create private / tenant network named sigma_net for project sigma :

[root@allinone ~(keystone_admin)]# openstack network create --project sigma --internal sigma_net

+---------------------------+--------------------------------------+

| Field | Value |

+---------------------------+--------------------------------------+

| admin_state_up | UP |

... output omitted ...

Create the private / tenant subnet named sigma_subnet for project sigma with the specified CIDR,
gateway and DHCP:

[root@allinone ~(keystone_admin)]# openstack subnet create --project sigma --subnet-range 192.168.20.0/24 \
--gateway 192.168.20.1 --network sigma_net --dhcp sigma_subnet
+-------------------+---------------------------------------------------------+

| Field | Value |

+-------------------+---------------------------------------------------------+

| allocation_pools | 192.168.20.2-192.168.20.254 |

... output omitted ...

6. Create router to connect networks

Now we need to create a router named sigma_router to connect the tenant network with the public network:

[root@allinone ~(keystone_admin)]# openstack router create --project sigma sigma_router

+-------------------------+--------------------------------------+

| Field | Value |

+-------------------------+--------------------------------------+

| admin_state_up | UP |

... output omitted ...

Now set gateway for sigma_router to our public / provider network pub_net (connect sigma_router to
pub_net).

[root@allinone ~(keystone_admin)]# openstack router set sigma_router --external-gateway pub_net

Note: if you have problems with the above command (openstack router set: error: unrecognized arguments:
--external-gateway), use the neutron command instead:

[root@allinone ~(keystone_admin)]# neutron router-gateway-set sigma_router pub_net

Set gateway for router sigma_router

Next, link sigma_router to sigma_subnet (connect sigma_subnet to sigma_router):

[root@allinone ~(keystone_admin)]# openstack router add subnet sigma_router sigma_subnet


7. Create custom flavor

OpenStack by default comes with a couple of predefined flavors for use with newly created instances:

[root@allinone ~(keystone_admin)]# openstack flavor list

+----+-----------+-------+------+-----------+-------+-----------+

| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |

+----+-----------+-------+------+-----------+-------+-----------+

| 1 | m1.tiny | 512 | 1 | 0| 1 | True |

... output omitted ...

In many cases these flavors are sufficient, but we will create our own ultra-small flavor named m2.tiny
(1 vCPU, 128 MB RAM, 1 GB disk) for use with Cirros images:

[root@allinone ~(keystone_admin)]# openstack flavor create --public --vcpus 1 --ram 128 --disk 1 --id 6 m2.tiny

+----------------------------+---------+

| Field | Value |

+----------------------------+---------+

| OS-FLV-DISABLED:disabled | False |

... output omitted ...

Verify flavor list:

[root@allinone ~(keystone_admin)]# openstack flavor list

+----+-----------+-------+------+-----------+-------+-----------+

| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |

+----+-----------+-------+------+-----------+-------+-----------+

| 1 | m1.tiny | 512 | 1 | 0| 1 | True |

| 6 | m2.tiny | 128 | 1 | 0| 1 | True |

... output omitted ...

+----+-----------+-------+------+-----------+-------+-----------+
8. Create OpenStack image

Download Cirros image to our Controller node:

[root@allinone ~(keystone_admin)]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

Create new image from uploaded file:

[root@allinone ~(keystone_admin)]# openstack image create --container-format bare --file \
cirros-0.3.4-x86_64-disk.img --public cirros_0.3.4

+------------------+------------------------------------------------------+

| Field | Value |

+------------------+------------------------------------------------------+

| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |

| container_format | bare |

... output omitted ...

9. Create dedicated security group

Security Groups control network access to / from instances inside the tenant.

Create a new security group named sigma_sec for project sigma:

[root@allinone ~(keystone_admin)]# openstack security group create --project sigma sigma_sec

+-----------------+----------------------------------+

| Field | Value |

+-----------------+----------------------------------+

| created_at | 2017-01-06T18:31:27Z |

| description | sigma_sec |

... output omitted ...


Add rule for sigma_sec group to permit incoming ICMP ECHO (ping):

[root@allinone ~(keystone_admin)]# openstack security group rule create --protocol icmp --ingress
--project sigma sigma_sec

+-------------------+--------------------------------------+

| Field | Value |

+-------------------+--------------------------------------+

| created_at | 2017-01-06T20:02:17Z |

... output omitted ...

Add rule for sigma_sec group to permit incoming SSH access:

[root@allinone ~(keystone_admin)]# openstack security group rule create --protocol tcp --dst-port 22
--ingress --project sigma sigma_sec

+-------------------+--------------------------------------+

| Field | Value |

+-------------------+--------------------------------------+

| created_at | 2017-01-06T20:04:12Z |

... output omitted ...

10. Assign floating IPs

We need to assign floating IPs for new Instances to be accessible from public / external network.

Unlike the previous commands, which we were able to execute as the admin user, to assign floating IPs to
the sigma project we need to source the keystonerc_sigma file:

[root@allinone ~(keystone_admin)]# source keystonerc_sigma

[root@allinone ~(keystone_sigma)]#

Create / assign two floating IPs for the sigma project:

[root@allinone ~(keystone_sigma)]# openstack floating ip create pub_net

+---------------------+--------------------------------------+

| Field | Value |

+---------------------+--------------------------------------+

| created_at | 2017-01-06T19:46:29Z |

... output omitted ...

[root@allinone ~(keystone_sigma)]# openstack floating ip create pub_net

+---------------------+--------------------------------------+

| Field | Value |

+---------------------+--------------------------------------+

| created_at | 2017-01-06T19:47:05Z |

... output omitted ...

11. Launch instances

We now have everything needed to launch two instances (cirros_inst_1, cirros_inst_2) based on the Cirros
image and the m2.tiny flavor inside the sigma project:

[root@allinone ~(keystone_sigma)]# openstack server create --flavor m2.tiny --image cirros_0.3.4 \
--nic net-id=sigma_net --security-group sigma_sec cirros_inst_1

+--------------------------------------+-----------------------------------------------------+

| Field | Value |

+--------------------------------------+-----------------------------------------------------+

| OS-DCF:diskConfig | MANUAL |

... output omitted ...


[root@allinone ~(keystone_sigma)]# openstack server create --flavor m2.tiny --image cirros_0.3.4 \
--nic net-id=sigma_net --security-group sigma_sec cirros_inst_2

+--------------------------------------+-----------------------------------------------------+

| Field | Value |

+--------------------------------------+-----------------------------------------------------+

| OS-DCF:diskConfig | MANUAL |

... output omitted ...

Assign floating IPs to the instances:

[root@allinone ~(keystone_sigma)]# openstack server add floating ip cirros_inst_1 192.168.2.71

[root@allinone ~(keystone_sigma)]# openstack server add floating ip cirros_inst_2 192.168.2.78

12. Test instances connectivity

Now it’s time to test our instances. We need to connect to both instances from public / external
network (i.e. from some machine in external network) to test their connectivity via floating IPs.

Ping floating IPs of both instances:

[gjuszczak@fixxxer ~]$ ping 192.168.2.71

PING 192.168.2.71 (192.168.2.71) 56(84) bytes of data.

64 bytes from 192.168.2.71: icmp_seq=1 ttl=63 time=13.2 ms

...

[gjuszczak@fixxxer ~]$ ping 192.168.2.78

PING 192.168.2.78 (192.168.2.78) 56(84) bytes of data.

64 bytes from 192.168.2.78: icmp_seq=1 ttl=63 time=174 ms

...

Connect to the floating IP of cirros_inst_1 instance (192.168.2.71) from computer in public network:

[gjuszczak@fixxxer ~]$ ssh cirros@192.168.2.71

cirros@192.168.2.71's password:
$ hostname

cirros-inst-1

[gjuszczak@fixxxer ~]$ ssh cirros@192.168.2.78

cirros@192.168.2.78's password:

$ hostname

cirros-inst-2

5) Create and Configure a Block Storage Volume - it must belong to a specific project

************************************************************

$ openstack image list

$ openstack availability zone list

$ openstack volume create --image 8bf4dc2a-bf78-4dd1-aefa-f3347cf638c8 --size 8 \
--availability-zone nova my-new-volume
$ openstack volume list

6) Attach Volume Block Storage to an Instance

****************************************

1. In the dashboard, select Project > Compute > Volumes.

2. Select the volume’s Edit Attachments action. If the volume is not attached to an instance, the Attach
To Instance drop-down list is visible.

3. From the Attach To Instance list, select the instance to which you wish to attach the volume.

4. Click Attach Volume.

Via Command Line

In this example, the volume 573e024d-5235-49ce-8332-be1576d323f8 is attached to the server with ID
84c6e57d-a6b1-44b6-81eb-fcb36afd31b5:


$ openstack server add volume 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 \
573e024d-5235-49ce-8332-be1576d323f8 --device /dev/vdb

$ openstack volume show 573e024d-5235-49ce-8332-be1576d323f8

7) Manage Block Storage Volume Snapshots

=====================================

To modify the name or description of an existing snapshot:

$ openstack volume snapshot set --name NEW_NAME my-snapshot-id

To create a volume snapshot:

1. In the dashboard, select Project > Compute > Volumes.

2. Select the target volume’s Create Snapshot action.

3. Provide a Snapshot Name for the snapshot and click Create a Volume Snapshot. The Volume
Snapshots tab displays all snapshots.
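
The same snapshot can be created from the CLI; a brief sketch (the volume name my-new-volume matches the example in section 5; on older Newton-era clients the equivalent command may be openstack snapshot create instead of openstack volume snapshot create):

$ openstack volume snapshot create --volume my-new-volume my-snapshot
$ openstack volume snapshot list
$ openstack volume snapshot show my-snapshot
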
8) Troubleshoot an existing stack that has already been created

******************************************************

Chapter 7 CL210 page 305, Troubleshooting OpenStack Issues

- Troubleshooting Compute Nodes --> Chapter 7 CL210 page 306
- Troubleshooting Authentication and Messaging --> Chapter 7 CL210 page 316
- Troubleshooting OpenStack Networking, Image, and Volume Services --> Chapter 7 CL210 page 324
- Lab: Troubleshooting OpenStack --> Chapter 7 CL210 page 341
Outcomes
You should be able to:
• Troubleshoot authentication issues within OpenStack
• Search log files to help describe the nature of the problem
• Troubleshoot messaging issues within OpenStack
• Troubleshoot networking issues within OpenStack
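
A few generic commands that help with the items above, including broken Heat stacks (a hedged starting point rather than a full procedure; service names, credentials files, and log paths vary between releases and nodes):

$ source ~/overcloudrc                             # or the relevant credentials file
$ openstack token issue                            # quick check that Keystone authentication works
$ grep -i error /var/log/nova/nova-compute.log     # search service logs on the affected node
$ systemctl status openstack-nova-compute          # check the status of an individual service
$ rabbitmqctl cluster_status                       # check messaging (run on the controller)
$ openstack stack list                             # identify the failed stack
$ openstack stack resource list STACK_NAME         # identify the failed resource
$ openstack stack event list STACK_NAME            # read the error events for that stack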

9. Customize Image (Guestfish, Disk Image Builder, Virt-Customize)

Make sure the required service and RPM (for example, sftpd) are installed and running --> Chapter 3 CL210 page 95

*****************************************************************

Guided Exercise: Customizing an Image


--------------------------------------------------------
You should be able to:
• Customize an image using guestfish.
• Customize an image using virt-customize.
• Upload an image into Glance.
• Spawn an instance using a customized image.

This ensures that the required packages are installed on workstation, and provisions the
environment with a public network, a private network, a private key, and security rules to
access the instance.

Steps
1. From workstation, retrieve the osp-small.qcow2 image from
http://materials.example.com/osp-small.qcow2 and save it as /home/student/
finance-rhel-db.qcow2.

[student@workstation ~]$ wget http://materials.example.com/osp-small.qcow2 \
-O ~/finance-rhel-db.qcow2
2. Using the guestfish command, open the image for editing and include network access.

[student@workstation ~]$ guestfish -i --network -a ~/finance-rhel-db.qcow2


Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.
Type: 'help' for help on commands
'man' to read the manual
'quit' to quit the shell
Operating system: Red Hat Enterprise Linux Server 7.3 (Maipo)
/dev/sda1 mounted on /
><fs>

3. Install the mariadb and mariadb-server packages.


><fs> command "yum -y install mariadb mariadb-server"
...output omitted...
Installed:
mariadb.x86_64 1:5.5.52-1.el7 mariadb-server.x86_64 1:5.5.52-1.el7
Dependency Installed:
libaio.x86_64 0:0.3.109-13.el7
perl-Compress-Raw-Bzip2.x86_64 0:2.061-3.el7
perl-Compress-Raw-Zlib.x86_64 1:2.061-4.el7
perl-DBD-MySQL.x86_64 0:4.023-5.el7
perl-DBI.x86_64 0:1.627-4.el7
perl-Data-Dumper.x86_64 0:2.145-3.el7
perl-IO-Compress.noarch 0:2.061-2.el7
perl-Net-Daemon.noarch 0:0.48-5.el7
perl-PlRPC.noarch 0:0.2020-14.el7
Complete!

4. Enable the mariadb service.


><fs> command "systemctl enable mariadb"

5. Because there was no output, ensure the mariadb service was enabled.
><fs> command "systemctl is-enabled mariadb"
enabled

6. Ensure the SELinux contexts for all affected files are correct.
Important
Files modified from inside the guestfish tool are written without a valid SELinux context. Failing
to relabel critical modified files, such as /etc/passwd, results in an unusable image, because
SELinux correctly denies access to files with an improper context during the boot process.
Although a relabel can be triggered with touch /.autorelabel from within guestfish, that marker
persists in the image, causing a relabel on every boot of every instance deployed from it. Instead,
the following step performs the relabel just once, right now.

><fs> selinux-relabel /etc/selinux/targeted/contexts/files/file_contexts /

7. Exit from the guestfish shell.


><fs> exit
[student@workstation ~]$

8. As the developer1 OpenStack user, upload the finance-rhel-db.qcow2 image to the image service
as finance-rhel-db, with a minimum disk requirement of 10 GiB, and a minimum RAM requirement of 2 GiB.

8.1. Source the developer1-finance-rc credential file.

[student@workstation ~]$ source developer1-finance-rc


[student@workstation ~(developer1-finance)]$

8.2. As the developer1 OpenStack user, upload the finance-rhel-db.qcow2 image to the image service
as finance-rhel-db.

[student@workstation ~(developer1-finance)]$ openstack image create \
--disk-format qcow2 --min-disk 10 --min-ram 2048 --file finance-rhel-db.qcow2 \
finance-rhel-db
...output omitted...

9. Launch an instance in the environment using the following attributes:


Instance Attributes
Attribute Value
flavor m1.database
key pair developer1-keypair1
network finance-network1
image finance-rhel-db
security group finance-db
name finance-db1

[student@workstation ~(developer1-finance)]$ openstack server create \
--flavor m1.database --key-name developer1-keypair1 \
--nic net-id=finance-network1 --security-group finance-db --image finance-rhel-db \
--wait finance-db1
...output omitted...
10. List the available floating IP addresses, and then allocate one to finance-db1.
10.1. List the floating IPs; unallocated IPs have None listed as their Port value.

[student@workstation ~(developer1-finance)]$ openstack floating ip list -c "Floating IP Address" -c Port
+---------------------+------+
| Floating IP Address | Port |
+---------------------+------+
| 172.25.250.P        | None |
| 172.25.250.R        | None |
+---------------------+------+

10.2. Attach an unallocated floating IP to the finance-db1 instance.

[student@workstation ~(developer1-finance)]$ openstack server add floating ip finance-db1 172.25.250.P

11. Use ssh to connect to the finance-db1 instance. Ensure the mariadb-server package is
installed, and that the mariadb service is enabled and running.

11.1. Log in to the finance-db1 instance using ~/developer1-keypair1.pem with ssh.

[student@workstation ~(developer1-finance)]$ ssh -i ~/developer1-keypair1.pem \
cloud-user@172.25.250.P
Warning: Permanently added '172.25.250.P' (ECDSA) to the list of known hosts.
[cloud-user@finance-db1 ~]$

11.2. Confirm that the mariadb-server package is installed.

[cloud-user@finance-db1 ~]$ rpm -q mariadb-server


mariadb-server-5.5.52-1.el7.x86_64

11.3. Confirm that the mariadb service is enabled and running, and then log out.

[cloud-user@finance-db1 ~]$ systemctl status mariadb


...output omitted...
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor
preset: disabled)
Active: active (running) since Mon 2017-05-29 20:49:37 EDT; 9min ago
Process: 1033 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID
(code=exited, status=0/SUCCESS)
Process: 815 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited,
status=0/SUCCESS)
Main PID: 1031 (mysqld_safe)
...output omitted...
[cloud-user@finance-db1 ~]$ exit
logout
Connection to 172.25.250.P closed.
[student@workstation ~(developer1-finance)]$

12. From workstation, retrieve the osp-small.qcow2 image from
http://materials.example.com/osp-small.qcow2 and save it as /home/student/finance-rhel-mail.qcow2.

[student@workstation ~(developer1-finance)]$ wget \
http://materials.example.com/osp-small.qcow2 -O ~/finance-rhel-mail.qcow2

13. Use the virt-customize command to customize the ~/finance-rhel-mail.qcow2 image. Enable the
postfix service, configure postfix to listen on all interfaces, and relay all mail to
workstation.lab.example.com. Install the mailx package to enable sending a test email. Ensure the
SELinux contexts are restored.

[student@workstation ~(developer1-finance)]$ virt-customize \
-a ~/finance-rhel-mail.qcow2 --run-command 'systemctl enable postfix' \
--run-command 'postconf -e "relayhost = [workstation.lab.example.com]"' \
--run-command 'postconf -e "inet_interfaces = all"' \
--run-command 'yum -y install mailx' --selinux-relabel

[ 0.0] Examining the guest ...


[ 84.7] Setting a random seed
[ 84.7] Running: systemctl enable postfix
[ 86.5] Running: postconf -e "relayhost = [workstation.lab.example.com]"
[ 88.4] Running: postconf -e "inet_interfaces = all"
[ 89.8] Running: yum -y install mailx
[ 174.0] SELinux relabelling
[ 532.7] Finishing off

14. As the developer1 OpenStack user, upload the finance-rhel-mail.qcow2 image to the image service
as finance-rhel-mail, with a minimum disk requirement of 10 GiB, and a minimum RAM requirement of 2 GiB.

14.1. Use the openstack command to upload the finance-rhel-mail.qcow2 image to the image service.

[student@workstation ~(developer1-finance)]$ openstack image create \
--disk-format qcow2 --min-disk 10 --min-ram 2048 --file ~/finance-rhel-mail.qcow2 \
finance-rhel-mail
...output omitted...
15. Launch an instance in the environment using the following attributes:
Instance Attributes
Attribute Value
flavor m1.web
key pair developer1-keypair1
network finance-network1
image finance-rhel-mail
security group finance-mail
name finance-mail1

[student@workstation ~(developer1-finance)]$ openstack server create \
--flavor m1.web --key-name developer1-keypair1 --nic net-id=finance-network1 \
--security-group finance-mail --image finance-rhel-mail --wait finance-mail1
...output omitted...

16. List the available floating IP addresses, and allocate one to finance-mail1.

16.1. List the available floating IPs.

[student@workstation ~(developer1-finance)]$ openstack floating ip list -c "Floating IP Address" -c Port
+---------------------+--------------------------------------+
| Floating IP Address | Port                                 |
+---------------------+--------------------------------------+
| 172.25.250.P        | 1ce9ffa5-b52b-4581-a696-52f464912500 |
| 172.25.250.R        | None                                 |
+---------------------+--------------------------------------+

16.2.Attach an available floating IP to the finance-mail1 instance.


[student@workstation ~(developer1-finance)]$ openstack server add floating \
ip finance-mail1 172.25.250.R

17. Use ssh to connect to the finance-mail1 instance. Ensure the postfix service is
running, that postfix is listening on all interfaces, and that the relay_host directive is
correct.
17.1. Log in to the finance-mail1 instance using ~/developer1-keypair1.pem with
ssh.

[student@workstation ~(developer1-finance)]$ ssh -i ~/developer1-keypair1.pem \
cloud-user@172.25.250.R
Warning: Permanently added '172.25.250.R' (ECDSA) to the list of known hosts.
[cloud-user@finance-mail1 ~]$

17.2. Ensure the postfix service is running.


[cloud-user@finance-mail1 ~]$ systemctl status postfix
...output omitted...
Loaded: loaded (/usr/lib/systemd/system/postfix.service; enabled; vendor
preset: disabled)
Active: active (running) since Mon 2017-05-29 00:59:32 EDT; 4s ago
Process: 1064 ExecStart=/usr/sbin/postfix start (code=exited, status=0/
SUCCESS)
Process: 1061 ExecStartPre=/usr/libexec/postfix/chroot-update (code=exited,
status=0/SUCCESS)
Process: 1058 ExecStartPre=/usr/libexec/postfix/aliasesdb (code=exited,
status=0/SUCCESS)
Main PID: 1136 (master)
...output omitted...
17.3. Ensure postfix is listening on all interfaces.

[cloud-user@finance-mail1 ~]$ sudo ss -tnlp | grep master


LISTEN 0 100 *:25 *:* users:(("master",pid=1136,fd=13))
LISTEN 0 100 :::25 :::* users:(("master",pid=1136,fd=14))

17.4. Ensure the relayhost directive is configured correctly.


[cloud-user@finance-mail1 ~]$ postconf relayhost
relayhost = [workstation.lab.example.com]

17.5. Send a test email to student@workstation.lab.example.com.


[cloud-user@finance-mail1 ~]$ mail -s "Test" student@workstation.lab.example.com
Hello World!
.
EOT

17.6. Return to workstation. Use the mail command to confirm that the test email arrived.
[cloud-user@finance-mail1 ~]$ exit
[student@workstation ~]$ mail
Heirloom Mail version 12.5 7/5/10. Type ? for help.
"/var/spool/mail/student": 1 message 1 new
>N 1 Cloud User Mon May 29 01:18 22/979 "Test"
&q
Building and Customizing Images
------------------------------------------------

Lab: Building and Customizing Images


In this lab, you will build a disk image using diskimage-builder, and then modify it using
guestfish.

You will be able to:


• Build an image using diskimage-builder.
• Customize the image using the guestfish command.
• Upload the image to the OpenStack image service.
• Spawn an instance using the customized image.

This ensures that the required packages are installed on workstation, and provisions the
environment with a public network, a private network, a key pair, and security rules to access
the instance.

Steps

1. From workstation, retrieve the osp-small.qcow2 image from
http://materials.example.com/osp-small.qcow2 and save it in the /home/student/ directory.

2. Create a copy of the diskimage-builder elements directory to work with in the /home/
student/ directory.

3. Create a post-install.d directory under the working copy of the rhel7 element.

4. Add a script under the rhel7/post-install.d directory to enable the httpd service.

5. Export the following environment variables, which diskimage-builder requires.


Environment Variables
Variable Content
NODE_DIST rhel7
DIB_LOCAL_IMAGE /home/student/osp-small.qcow2
DIB_YUM_REPO_CONF "/etc/yum.repos.d/openstack.repo"

ELEMENTS_PATH /home/student/elements

6. Build a RHEL 7 image named production-rhel-web.qcow2 using the diskimage-builder elements
configured previously. Include the httpd package in the image.
7. Add a custom web index page to the production-rhel-web.qcow2 image using
guestfish. Include the text production-rhel-web in the index.html file. Ensure the
SELinux context of /var/www/html/index.html is correct.

8. As the operator1 user, create a new OpenStack image named production-rhel-web using the
production-rhel-web.qcow2 image, with a minimum disk requirement of 10 GiB, and a minimum RAM
requirement of 2 GiB.

9. As the operator1 user, launch an instance using the following attributes:


Instance Attributes
Attribute Value
flavor m1.web
key pair operator1-keypair1
network production-network1
image production-rhel-web
security group production-web
name production-web1

10. List the available floating IP addresses, and then allocate one to production-web1.

11. Log in to the production-web1 instance using operator1-keypair1.pem with ssh. Ensure the
httpd package is installed, and that the httpd service is enabled and running.

12. From workstation, confirm that the custom web page, displayed from production-web1,
contains the text production-rhel-web.
Evaluation

Solution
In this lab, you will build a disk image using diskimage-builder, and then modify it using
guestfish.

Outcomes
You will be able to:
• Build an image using diskimage-builder.
• Customize the image using the guestfish command.
• Upload the image to the OpenStack image service.
• Spawn an instance using the customized image.

Before you begin


Log in to workstation as student using student as the password.
On workstation, run the lab customization-review setup command. This ensures that
the required packages are installed on workstation, and provisions the environment with a
public network, a private network, a key pair, and security rules to access the instance.

Steps
1. From workstation, retrieve the osp-small.qcow2 image from http://
materials.example.com/osp-small.qcow2 and save it in the /home/student/
directory.

[student@workstation ~]$ wget http://materials.example.com/osp-small.qcow2 \
-O /home/student/osp-small.qcow2

2. Create a copy of the diskimage-builder elements directory to work with in the /home/
student/ directory.

[student@workstation ~]$ cp -a /usr/share/diskimage-builder/elements /home/student/

3. Create a post-install.d directory under the working copy of the rhel7 element.

[student@workstation ~]$ mkdir -p /home/student/elements/rhel7/post-install.d

4. Add a script under the rhel7/post-install.d directory to enable the httpd service.

4.1. Add a script to enable the httpd service.


Solution

[student@workstation ~]$ cd /home/student/elements/rhel7/post-install.d/


[student@workstation post-install.d]$ cat <<EOF > 01-enable-services
#!/bin/bash
systemctl enable httpd
EOF

4.2. Set the executable permission on the script.

[student@workstation post-install.d]$ chmod +x 01-enable-services

4.3. Change back to your home directory.

[student@workstation post-install.d]$ cd
[student@workstation ~]$

5. Export the following environment variables, which diskimage-builder requires.


Environment Variables

Variable Content
NODE_DIST rhel7
DIB_LOCAL_IMAGE /home/student/osp-small.qcow2
DIB_YUM_REPO_CONF "/etc/yum.repos.d/openstack.repo"
ELEMENTS_PATH /home/student/elements

[student@workstation ~]$ export NODE_DIST=rhel7


[student@workstation ~]$ export DIB_LOCAL_IMAGE=/home/student/osp-small.qcow2
[student@workstation ~]$ export DIB_YUM_REPO_CONF=/etc/yum.repos.d/openstack.repo
[student@workstation ~]$ export ELEMENTS_PATH=/home/student/elements

6. Build a RHEL 7 image named production-rhel-web.qcow2 using the diskimage-builder elements
configured previously. Include the httpd package in the image.

[student@workstation ~]$ disk-image-create vm rhel7 -t qcow2 -p httpd -o production-rhel-web.qcow2

7. Add a custom web index page to the production-rhel-web.qcow2 image using guestfish. Include
the text production-rhel-web in the index.html file. Ensure the SELinux context of
/var/www/html/index.html is correct.

7.1. Open a guestfish shell for the production-rhel-web.qcow2 image.

[student@workstation ~]$ guestfish -i -a production-rhel-web.qcow2


...output omitted...
><fs>

7.2. Create a new /var/www/html/index.html file.

><fs> touch /var/www/html/index.html

7.3. Edit the /var/www/html/index.html file and include the required key words.

><fs> edit /var/www/html/index.html


This instance uses the production-rhel-web image.

7.4. To ensure the new index page works with SELinux in enforcing mode, restore the /var/www/
directory context (including the index.html file).

><fs> selinux-relabel /etc/selinux/targeted/contexts/files/file_contexts /var/www/

7.5. Exit the guestfish shell.


><fs> exit
[student@workstation ~]$

8. As the operator1 user, create a new OpenStack image named production-rhel-web using the
production-rhel-web.qcow2 image, with a minimum disk requirement of 10 GiB, and a minimum RAM
requirement of 2 GiB.

8.1. Source the operator1-production-rc credentials file.

[student@workstation ~]$ source operator1-production-rc


[student@workstation ~(operator1-production)]$

8.2. Upload the production-rhel-web.qcow2 image to the OpenStack Image service.

[student@workstation ~(operator1-production)]$ openstack image create \
--disk-format qcow2 --min-disk 10 --min-ram 2048 --file production-rhel-web.qcow2 \
production-rhel-web
...output omitted...

9. As the operator1 user, launch an instance using the following attributes:


Instance Attributes
Attribute Value
flavor m1.web
key pair operator1-keypair1
network production-network1
image production-rhel-web
security group production-web

name production-web1

[student@workstation ~(operator1-production)]$ openstack server create \
--flavor m1.web --key-name operator1-keypair1 \
--nic net-id=production-network1 --image production-rhel-web \
--security-group production-web --wait production-web1
...output omitted...

10. List the available floating IP addresses, and then allocate one to production-web1.

10.1. List the floating IPs. Available IP addresses have the Port attribute set to None.
[student@workstation ~(operator1-production)]$ openstack floating ip list -c "Floating IP Address" -c Port
+---------------------+------+
| Floating IP Address | Port |
+---------------------+------+
| 172.25.250.P        | None |
+---------------------+------+

10.2.Attach an available floating IP to the production-web1 instance.


[student@workstation ~(operator1-production)]$ openstack server add \
floating ip production-web1 172.25.250.P

11. Log in to the production-web1 instance using operator1-keypair1.pem with ssh. Ensure the
httpd package is installed, and that the httpd service is enabled and running.

11.1. Use SSH to log in to the production-web1 instance using operator1-keypair1.pem.

[student@workstation ~(operator1-production)]$ ssh -i operator1-keypair1.pem \
cloud-user@172.25.250.P
Warning: Permanently added '172.25.250.P' (ECDSA) to the list of known hosts.
[cloud-user@production-web1 ~]$

11.2. Confirm that the httpd package is installed.


[cloud-user@production-web1 ~]$ rpm -q httpd
httpd-2.4.6-45.el7.x86_64

11.3. Confirm that the httpd service is running.

[cloud-user@production-web1 ~]$ systemctl status httpd


...output omitted...
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor
preset: disabled)
Active: active (running) since Wed 2017-05-24 23:55:42 EDT; 8min ago
...output omitted...
11.4. Exit the instance to return to workstation.

[cloud-user@production-web1 ~]$ exit

12. From workstation, confirm that the custom web page, displayed from production-web1,
contains the text production-rhel-web.

[student@workstation ~(operator1-production)]$ curl http://172.25.250.P/index.html

This instance uses the production-rhel-web image.


Evaluation

10) Managing Message Brokering (RabbitMQ) - Chapter 2 CL210 page 66

*******************************************************

https://access.redhat.com/articles/1167113

1. Manually Create RabbitMQ Configuration Files

If these files do not already exist, manually create the two required RabbitMQ configuration files.
These files, along with their required default contents, are as follows:

/etc/rabbitmq/rabbitmq.config

% This file managed by Puppet
% Template Path: rabbitmq/templates/rabbitmq.config
[
  {rabbit, [
    {default_user, <<"guest">>},
    {default_pass, <<"guest">>}
  ]},
  {kernel, [
  ]}
].
% EOF

/etc/rabbitmq/rabbitmq-env.conf

RABBITMQ_NODE_PORT=5672

2. Launch the RabbitMQ Message Broker

# service rabbitmq-server start


# chkconfig rabbitmq-server on

To change the default guest password of RabbitMQ:

# rabbitmqctl change_password guest NEW_RABBITMQ_PASS

Configuring the RabbitMQ message broker for OpenStack use

1. Create a RabbitMQ user account for the Block Storage, Compute, OpenStack Networking,
Orchestration, Image, and Telemetry services:

# rabbitmqctl add_user cinder CINDER_PASS


# rabbitmqctl add_user nova NOVA_PASS
# rabbitmqctl add_user neutron NEUTRON_PASS
# rabbitmqctl add_user heat HEAT_PASS
# rabbitmqctl add_user glance GLANCE_PASS
# rabbitmqctl add_user ceilometer CEILOMETER_PASS

Replace CINDER_PASS, NOVA_PASS, NEUTRON_PASS, HEAT_PASS, GLANCE_PASS, and CEILOMETER_PASS
with secure passwords for each service.

2. Next, grant each of these RabbitMQ users read/write permissions to all resources:

# rabbitmqctl set_permissions cinder ".*" ".*" ".*"


# rabbitmqctl set_permissions nova ".*" ".*" ".*"
# rabbitmqctl set_permissions neutron ".*" ".*" ".*"
# rabbitmqctl set_permissions heat ".*" ".*" ".*"
# rabbitmqctl set_permissions glance ".*" ".*" ".*"
# rabbitmqctl set_permissions ceilometer ".*" ".*" ".*"

3. The OpenStack services require a restart to apply the new permissions. (In the referenced
article this is performed later, in Section 12, "Finalize Migration to RabbitMQ".) Once the
OpenStack services have been restarted, you can verify that the permissions were correctly
applied using the list_permissions subcommand on the Messaging server:

# rabbitmqctl list_permissions
Listing permissions in vhost "/" ...
ceilometer .* .* .*
cinder .* .* .*
glance .* .* .*
guest .* .* .*
heat .* .* .*
neutron .* .* .*
nova .* .* .*
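
The exam study points also list enabling tracing in RabbitMQ. A brief sketch using the built-in firehose tracer, run on the node hosting RabbitMQ (the rabbitmq_tracing plugin is optional and only needed for collecting traces through the management interface):

# rabbitmqctl trace_on                       # enable the firehose tracer on the default vhost
# rabbitmqctl trace_off                      # disable tracing when finished
# rabbitmq-plugins enable rabbitmq_tracing   # optional: trace collection via the management UI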

11) Create a Swift Container in a Project --> Managing Object Storage, Chapter 4 CL210 page 135

***************************************************************************
• Upload an object to the OpenStack object storage service.
• Download an object from the OpenStack object storage service to an instance.

Guided Exercise: Managing Object Storage

Outcomes
You should be able to:
• Upload an object to the OpenStack object storage service.
• Download an object from the OpenStack object storage service to an instance.

Steps

1. Create a 10MB file named dataset.dat. As the developer1 user, create a container called
container1 in the OpenStack object storage service. Upload the dataset.dat file to this
container.

1.1. Create a 10MB file named dataset.dat.

[student@workstation ~]$ dd if=/dev/zero of=~/dataset.dat bs=10M count=1

1.2. Load the credentials for the developer1 user. This user has been configured by the lab
script with the role swiftoperator.

[student@workstation ~]$ source developer1-finance-rc

1.3. Create a new container named container1.


[student@workstation ~(developer1-finance)]$ openstack container create container1
+--------------------+------------+---------------+
| account | container | x-trans-id |
+--------------------+------------+---------------+
| AUTH_c968(...)020a | container1 | tx3b(...)e8f3 |
+--------------------+------------+---------------+

1.4. Upload the dataset.dat file to the container1 container.

[student@workstation ~(developer1-finance)]$ openstack object create container1 dataset.dat

+-------------+--------------+---------------------------------------------------------+
| object | container | etag |
+-------------+------------+-----------------------------------------------------------+
| dataset.dat | container1 | f1c9645dbc14efddc7d8a322685f26eb |
+-------------+------------+-----------------------------------------------------------+

2. Download the dataset.dat object to the finance-web1 instance created by the lab script.

2.1. Verify that the finance-web1 instance's status is ACTIVE. Verify the floating IP address associated
with the instance.

[student@workstation ~(developer1-finance)]$ openstack server show finance-web1

2.2. Copy the credentials file for the developer1 user to the finance-web1 instance. Use the cloud-user
user and the /home/student/developer1-keypair1.pem key file.

[student@workstation ~(developer1-finance)]$ scp -i developer1-keypair1.pem \
developer1-finance-rc cloud-user@172.25.250.P:~

2.3. Log in to the finance-web1 instance using cloud-user as the user and the
/home/student/developer1-keypair1.pem key file.

[student@workstation ~(developer1-finance)]$ ssh -i ~/developer1-keypair1.pem \
cloud-user@172.25.250.P
2.4. Load the credentials for the developer1 user.

[cloud-user@finance-web1 ~]$ source developer1-finance-rc

2.5. Download the dataset.dat object from the object storage service.

[cloud-user@finance-web1 ~(developer1-finance)]$ openstack object save container1 dataset.dat

2.6. Verify that the dataset.dat object has been downloaded. When done, log out from the instance.

[cloud-user@finance-web1 ~(developer1-finance)]$ ls -lh dataset.dat


-rw-rw-r--. 1 cloud-user cloud-user 10M May 26 06:58 dataset.dat
[cloud-user@finance-web1 ~(developer1-finance)]$ exit

12) Scaling the Compute Nodes --> Deploy a Second Compute Node, Chapter 6, pages 264 and 294

************************************************************
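
This scaling procedure is worked through in section 13 below. After the overcloud has been scaled, the new compute node can be verified from the undercloud (a quick sketch; overcloudrc is the usual overcloud credentials file):

[stack@director ~]$ source ~/overcloudrc
[stack@director ~]$ openstack hypervisor list
[stack@director ~]$ openstack compute service list -c Binary -c Host -c State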

13) Migrating Instances with Shared Storage - migrate an instance from compute0 to compute1

Chapter 6 --> pages 284 and 294

********************************************************

https://www.golinuxhub.com/2018/08/openstack-tripleo-architecture-step-guide-install-undercloud-overcloud-heat-template.html#Introspection

Solution
In this lab, you will add compute nodes, manage shared storage, and perform instance live
migration.

Outcomes
You should be able to:
• Add a compute node.
• Configure shared storage.
• Live migrate an instance using shared storage.

Steps

1. Use SSH to connect to director as the user stack and source the stackrc credentials file.

[student@workstation ~]$ ssh stack@director


[stack@director ~]$
2. Prepare compute1 for introspection. Use the details available in
http://materials.example.com/instackenv-onenode.json file.

2.1. Download the instackenv-onenode.json file from http://materials.example.com to /home/stack for
introspection of compute1.

[stack@director ~]$ wget http://materials.example.com/instackenv-onenode.json

2.2. Verify that the instackenv-onenode.json file is for compute1.

[stack@director ~]$ cat ~/instackenv-onenode.json


{
"nodes": [
{
"pm_user": "admin",
"arch": "x86_64",
"name": "compute1",
"pm_addr": "172.25.249.112",
"pm_password": "password",
"pm_type": "pxe_ipmitool",
"mac": [
"52:54:00:00:f9:0c"
],
"cpu": "2",
"memory": "6144",
"disk": "40"
}
]
}

2.3. Import instackenv-onenode.json into the baremetal service using openstack baremetal import,
and ensure that the node has been properly imported.

[stack@director ~]$ openstack baremetal import --json /home/stack/instackenv-onenode.json


Started Mistral Workflow. Execution ID: 8976a32a-6125-4c65-95f1-2b97928f6777
Successfully registered node UUID b32d3987-9128-44b7-82a5-5798f4c2a96c
Started Mistral Workflow. Execution ID: 63780fb7-bff7-43e6-bb2a-5c0149bc9acc
Successfully set all nodes to available

[stack@director ~]$ openstack baremetal node list -c Name -c 'Power State' -c 'Provisioning State' -c
Maintenance
+-------------+--------------------+-------------+-------------+
| Name | Provisioning State | Power State | Maintenance |
+-------------+--------------------+-------------+-------------+
| controller0 | active | power on | False |
| compute0 | active | power on | False |
| ceph0 | active | power on | False |
| compute1 | available | power off | False |
+-------------+--------------------+-------------+-------------+

2.4. Prior to starting introspection, set the provisioning state for compute1 to manageable.

[stack@director ~]$ openstack baremetal node manage compute1

3. Initiate introspection of compute1. Introspection may take a few minutes.


[stack@director ~]$ openstack overcloud node introspect --all-manageable --provide
Started Mistral Workflow. Execution ID: d9191784-e730-4179-9cc4-a73bc31b5aec
Waiting for introspection to finish...
...output omitted...
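Once introspection completes, you can confirm that compute1 was returned to the available provisioning state (the --provide option does this automatically):

[stack@director ~]$ openstack baremetal node list -c Name -c 'Provisioning State' -c 'Power State'
...output omitted...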

4. Update the node profile for compute1 to use the compute profile.
[stack@director ~]$ openstack baremetal node set compute1 --property
"capabilities=profile:compute,boot_option:local"

5. Edit /home/stack/templates/cl210-environment/00-node-info.yaml to scale to two compute nodes by
updating the ComputeCount line as follows.
ComputeCount: 2
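For context, 00-node-info.yaml is a Heat environment file whose parameter_defaults section sets the node counts. A minimal sketch of what the relevant section might look like is shown below; the entries other than ComputeCount are assumptions about the course template and may differ in your copy of the file.

parameter_defaults:
  # Assumed values from the course environment; only ComputeCount is changed here.
  ControllerCount: 1
  CephStorageCount: 1
  ComputeCount: 2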

6. Deploy the overcloud, to scale compute by adding one node.

[stack@director ~]$ openstack overcloud deploy --templates ~/templates \


--environment-directory ~/templates/cl210-environment
Removing the current plan files
Uploading new plan files
Started Mistral Workflow. Execution ID: 6de24270-c3ed-4c52-8aac-820f3e1795fe
Plan updated
Deploying templates in the directory /tmp/tripleoclient-WnZ2aA/tripleo-heattemplates
Started Mistral Workflow. Execution ID: 50f42c4c-d310-409d-8d58-e11f993699cb
...output omitted...
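While the deployment runs, it can be helpful to follow progress from a second terminal on director. One simple approach is to watch the Bare Metal node states (as in step 6.1 below) and, optionally, check the status of the overcloud Heat stack:

[stack@director ~]$ watch openstack baremetal node list -c Name -c 'Provisioning State' -c 'Power State'
[stack@director ~]$ openstack stack list
...output omitted...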

If that command does not respond when deploying the new compute node, use the following command instead to add the new compute node (adjust --compute-scale to the total number of compute nodes required, which is 2 in this lab):

[stack@undercloud-director ~]$ openstack overcloud deploy --templates ~/templates/ \
--control-scale 1 --compute-scale 1 --ceph-storage-scale 1 --control-flavor control \
--compute-flavor compute --ceph-storage-flavor ceph-storage \
--neutron-tunnel-types vxlan --neutron-network-type vxlan -e \
~/templates/environments/storage-environment.yaml

********output trimmed*********

The overcloud is being deployed with virtual machines in a nested virtual environment
rather than on physical hardware. Race conditions have been observed, which can cause
elements of the deployment to hang inconsistently. During the deploying stage, the image
is uploaded to Glance and transferred to the bare metal Ironic nodes. After it completes, the
node will reboot and the file system is resized by cloud-init. It then moves into the active
provisioning state. The cloud-init issue can cause the overcloud deployment to hang due
to the node's network being unreachable. An automated solution for this issue was not
available when the course was released. However, the following procedure allows you to
manually correct the deployment.

6.1. Open a new terminal on workstation and use SSH to log in to director as the user
stack with redhat as the password. Watch the Bare Metal nodes transition from
available, to deploying, to wait call-back, to deploying, and finally to active by using
the openstack baremetal node list command.

6.2. After compute1 has become active, navigate to the Online Lab. Select OPEN
CONSOLE for compute1, and log in as the user root with the password redhat.

6.3. Delete the empty file /etc/sysconfig/network-scripts/ifcfg-eth0 and restart
the node using the reboot command. The deployment proceeds after compute1 boots
and may take 80 minutes or more to complete.

[root@overcloud-compute-1 ~]# rm -f /etc/sysconfig/network-scripts/ifcfg-eth0


[root@overcloud-compute-1 ~]# reboot

7. Prepare compute1 for the next part of the lab.

8. Configure controller0 for shared storage.

8.1. Log into controller0 as heat-admin and switch to the root user.

[student@workstation ~]$ ssh heat-admin@controller0


[heat-admin@overcloud-controller-0 ~]$ sudo -i
[root@overcloud-controller-0 ~]#

8.2. Install the nfs-utils package.

[root@overcloud-controller-0 ~]# yum -y install nfs-utils

8.3. Configure iptables for NFSv4 shared storage.

[root@overcloud-controller-0 ~]# iptables -v -I INPUT -p tcp --dport 2049 -j ACCEPT


ACCEPT tcp opt -- in * out * 0.0.0.0/0 -> 0.0.0.0/0 tcp dpt:2049

[root@overcloud-controller-0 ~]# service iptables save

8.4. Edit /etc/exports (for example with vi) to export /var/lib/nova/instances via NFS to compute0 and
compute1. Add the following lines to the bottom of the file. Note that there must be no space between the
client address and the option list.
/var/lib/nova/instances 172.25.250.2(rw,sync,fsid=0,no_root_squash)
/var/lib/nova/instances 172.25.250.12(rw,sync,fsid=0,no_root_squash)

8.5. Enable and start the NFS service.

[root@overcloud-controller-0 ~]# systemctl enable nfs --now

8.6. Confirm the directory is exported.

[root@overcloud-controller-0 ~]# exportfs


/var/lib/nova/instances 172.25.250.2
/var/lib/nova/instances 172.25.250.12
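Optionally, run exportfs -v to display the export options as well; this helps confirm that the options were attached to each client address (again, no space before the parentheses in /etc/exports):

[root@overcloud-controller-0 ~]# exportfs -v
...output omitted...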

8.7. Update the vncserver_listen variable in /etc/nova/nova.conf.


[root@overcloud-controller-0 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
vncserver_listen 0.0.0.0

8.8. Restart OpenStack Compute services, then log out of controller0.

[root@overcloud-controller-0 ~]# openstack-service restart nova


[root@overcloud-controller-0 ~]# exit
[heat-admin@overcloud-controller-0 ~]$ exit
[student@workstation ~]$

9. Configure shared storage for compute0.

9.1. Log into compute0 as heat-admin and switch to the root user.

[student@workstation ~]$ ssh heat-admin@compute0


[heat-admin@overcloud-compute-0 ~]$ sudo -i
[root@overcloud-compute-0 ~]#

9.2. Edit /etc/fstab (for example with vi) to mount the directory /var/lib/nova/instances exported from
controller0. Add the following line to the bottom of the file. Confirm that the entry is on a single line in
the file; any wrapping shown here is only due to insufficient page width.

172.25.250.1:/ /var/lib/nova/instances nfs4 context="system_u:object_r:nova_var_lib_t:s0" 0 0

9.3. Mount the export from controller0 on /var/lib/nova/instances.

[root@overcloud-compute-0 ~]# mount -v /var/lib/nova/instances
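To confirm that the NFS export is actually mounted, a quick check is:

[root@overcloud-compute-0 ~]# df -h /var/lib/nova/instances
...output omitted...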

9.4. Configure iptables to allow shared storage live migration.

[root@overcloud-compute-0 ~]# iptables -v -I INPUT -p tcp --dport 16509 -j ACCEPT


ACCEPT tcp opt -- in * out * 0.0.0.0/0 -> 0.0.0.0/0 tcp dpt:16509
[root@overcloud-compute-0 ~]# iptables -v -I INPUT -p tcp --dport 49152:49261 -j ACCEPT
ACCEPT tcp opt -- in * out * 0.0.0.0/0 -> 0.0.0.0/0 tcp dpts:49152:49261

[root@overcloud-compute-0 ~]# service iptables save

9.5. Configure user, group, and vnc_listen in /etc/libvirt/qemu.conf. Add the following lines to the bottom of
the file.

user="root"
group="root"
vnc_listen="0.0.0.0"

9.6. Configure /etc/nova/nova.conf virtual disk storage and other properties for live migration. Use the
NFS-mounted /var/lib/nova/instances directory to store instance virtual disks.
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf \
libvirt images_type default
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf \
DEFAULT instances_path /var/lib/nova/instances
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf \
DEFAULT novncproxy_base_url http://172.25.250.1:6080/vnc_auto.html
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf \
DEFAULT vncserver_listen 0.0.0.0
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf \
DEFAULT live_migration_flag \
VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
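If you want to double-check the values just written, openstack-config can also read them back; this is a spot check, not a required lab step:

[root@overcloud-compute-0 ~]# openstack-config --get /etc/nova/nova.conf DEFAULT instances_path
[root@overcloud-compute-0 ~]# openstack-config --get /etc/nova/nova.conf DEFAULT live_migration_flag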

9.7. Restart OpenStack services and log out of compute0.

[root@overcloud-compute-0 ~]# openstack-service restart


[root@overcloud-compute-0 ~]# exit
[heat-admin@overcloud-compute-0 ~]$ exit
[student@workstation ~]$

10. Configure shared storage for compute1.

10.1. Log into compute1 as heat-admin and switch to the root user.

[student@workstation ~]$ ssh heat-admin@compute1


[heat-admin@overcloud-compute-1 ~]$ sudo -i
[root@overcloud-compute-1 ~]#

10.2. Edit /etc/fstab (for example with vi) to mount the directory /var/lib/nova/instances exported from
controller0. Add the following line to the bottom of the file. Confirm that the entry is on a single line in
the file; any wrapping shown here is only due to insufficient page width.
172.25.250.1:/ /var/lib/nova/instances nfs4 context="system_u:object_r:nova_var_lib_t:s0" 0 0

10.3. Mount the export from controller0 on /var/lib/nova/instances.

[root@overcloud-compute-1 ~]# mount -v /var/lib/nova/instances

10.4. Configure iptables to allow shared storage live migration.

[root@overcloud-compute-1 ~]# iptables -v -I INPUT -p tcp --dport 16509 -j ACCEPT


ACCEPT tcp opt -- in * out * 0.0.0.0/0 -> 0.0.0.0/0 tcp dpt:16509

[root@overcloud-compute-1 ~]# iptables -v -I INPUT -p tcp --dport 49152:49261 -j ACCEPT


ACCEPT tcp opt -- in * out * 0.0.0.0/0 -> 0.0.0.0/0 tcp dpts:49152:49261

[root@overcloud-compute-1 ~]# service iptables save

10.5. Configure user, group, and vnc_listen in /etc/libvirt/qemu.conf. Add the following lines to the bottom
of the file.

user="root"
group="root"
vnc_listen="0.0.0.0"

10.6. Configure /etc/nova/nova.conf virtual disk storage and other properties for live migration. Use the
NFS-mounted /var/lib/nova/instances directory to store instance virtual disks.

[root@overcloud-compute-1 ~]# openstack-config --set /etc/nova/nova.conf \
libvirt images_type default
[root@overcloud-compute-1 ~]# openstack-config --set /etc/nova/nova.conf \
DEFAULT instances_path /var/lib/nova/instances
[root@overcloud-compute-1 ~]# openstack-config --set /etc/nova/nova.conf \
DEFAULT novncproxy_base_url http://172.25.250.1:6080/vnc_auto.html
[root@overcloud-compute-1 ~]# openstack-config --set /etc/nova/nova.conf \
DEFAULT vncserver_listen 0.0.0.0
[root@overcloud-compute-1 ~]# openstack-config --set /etc/nova/nova.conf \
DEFAULT live_migration_flag \
VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE

10.7. Restart OpenStack services and log out of compute1.

[root@overcloud-compute-1 ~]# openstack-service restart


[root@overcloud-compute-1 ~]# exit
[heat-admin@overcloud-compute-1 ~]$ exit
[student@workstation ~]$

11. Launch an instance named production1 as the user operator1 using the following
attributes:

Instance Attributes
Attribute        Value
flavor           m1.web
key pair         operator1-keypair1
network          production-network1
image            rhel7
security group   production
name             production1

[student@workstation ~]$ source ~/operator1-production-rc


[student@workstation ~(operator1-production)]$ openstack server create \
--flavor m1.web --key-name operator1-keypair1 \
--nic net-id=production-network1 --security-group production --image rhel7 \
--wait production1
...output omitted...
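Optionally, confirm that the instance reached the ACTIVE state and received a fixed IP address on production-network1:

[student@workstation ~(operator1-production)]$ openstack server show production1 -c status -c addresses
...output omitted...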

12. List the available floating IP addresses, then allocate one to the production1 instance.
12.1. List the floating IPs. An available one has the Port attribute set to None.
[student@workstation ~(operator1-production)]$ openstack floating ip list -c "Floating IP Address" -c
Port
+---------------------+------+
| Floating IP Address | Port |
+---------------------+------+
| 172.25.250.P | None |
+---------------------+------+

12.2. Attach an available floating IP to the instance production1.


[student@workstation ~(operator1-production)]$ openstack server add floating ip production1
172.25.250.P
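A quick way to confirm the association is to list the server's networks; the floating IP should now appear alongside the fixed address:

[student@workstation ~(operator1-production)]$ openstack server list -c Name -c Networks
...output omitted...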
13. Ensure that the production1 instance is accessible by logging in to the instance as the
user cloud-user, then log out of the instance.

[student@workstation ~(operator1-production)]$ ssh -i ~/operator1-keypair1.pem \


cloud-user@172.25.250.P
Warning: Permanently added '172.25.250.P' (ECDSA) to the list of known hosts.
[cloud-user@production1 ~]$ exit
[student@workstation ~(operator1-production)]$

14. Migrate the instance production1 using shared storage.


14.1. To perform live migration, the user operator1 must have the admin role assigned for the project
production. Assign the admin role to operator1 for the project production. Source the
/home/student/admin-rc file to export the admin user credentials.

[student@workstation ~(operator1-production)]$ source ~/admin-rc


[student@workstation ~(admin-admin)]$ openstack role add --user operator1 --project production
admin
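Optionally, verify the assignment; the --names option, if supported by your client version, displays names instead of IDs:

[student@workstation ~(admin-admin)]$ openstack role assignment list --user operator1 \
--project production --names
...output omitted...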

14.2. Determine whether the instance is currently running on compute0 or compute1. In the example
below, the instance is running on compute0, but your instance may be running on compute1.
Source the /home/student/operator1-production-rc file to export the operator1 user credentials.

[student@workstation ~(admin-admin)]$ source ~/operator1-production-rc


[student@workstation ~(operator1-production)]$ openstack server show production1 -f json | grep
compute
"OS-EXT-SRV-ATTR:host": "overcloud-compute-0.localdomain",
"OS-EXT-SRV-ATTR:hypervisor_hostname": "overcloud-compute-0.localdomain",

14.3. Prior to migration, ensure compute1 has sufficient resources to host the instance. The
example below uses compute1, however you may need to use compute0. The compute
node should contain 2 VCPUs, a 56 GB disk, and 2048 MBs of available RAM.

[student@workstation ~(operator1-production)]$ openstack host show \


overcloud-compute-1.localdomain -f json
[
{
"Project": "(total)",
"Disk GB": 56,
"Host": "overcloud-compute-1.localdomain",
"CPU": 2,
"Memory MB": 6143
},
{
"Project": "(used_now)",
"Disk GB": 0,
"Host": "overcloud-compute-1.localdomain",
"CPU": 0,
"Memory MB": 2048
},
{
"Project": "(used_max)",
"Disk GB": 0,
"Host": "overcloud-compute-1.localdomain",
"CPU": 0,
"Memory MB": 0
}

14.4. Migrate the instance production1 using shared storage. In the example below, the
instance is migrated from compute0 to compute1, but you may need to migrate the instance from
compute1 to compute0.

[student@workstation ~(operator1-production)]$ openstack server migrate \


--shared-migration --live overcloud-compute-1.localdomain production1
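Live migration takes a short while. One way to follow it is to watch the server status, which shows MIGRATING while the move is in progress and returns to ACTIVE when it completes:

[student@workstation ~(operator1-production)]$ watch openstack server show production1 -c status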

15. Verify that the migration of production1 using shared storage was successful.
15.1. Verify that the migration of production1 using shared storage was successful. The example below
displays compute1, but your output may display compute0.

[student@workstation ~(operator1-production)]$ openstack server show \


production1 -f json | grep compute
"OS-EXT-SRV-ATTR:host": "overcloud-compute-1.localdomain",
"OS-EXT-SRV-ATTR:hypervisor_hostname": "overcloud-compute-1.localdomain",

14 ) Analyzing Cloud Metrics for Autoscaling

Display Metrics (collect information from all OpenStack resources) --> Chapter 8, pages 373 and 382

*****************************************************************************

Solution
In this lab, you will analyze the Telemetry metric data and create an Aodh alarm. You will also set
the alarm to trigger when the maximum CPU utilization of an instance exceeds a threshold value.

Outcomes

You should be able to:

• Search and list the metrics available with the Telemetry service for a particular user.

• View the usage data collected for a metric.

• Check which archive policy is in use for a particular metric.

• Add new measures to a metric.

• Create an alarm based on aggregated usage data of a metric, and trigger it.

• View and analyze an alarm history.

Steps

1. List all of the instance type telemetry resources accessible by the user operator1.

1.1. From workstation, source the /home/student/operator1-production-rc file

[student@workstation ~]$ source ~/operator1-production-rc

[student@workstation ~(operator1-production)]$ openstack user show operator1

+------------+----------------------------------+

| Field | Value |

+------------+----------------------------------+

| enabled | True |

| id | 4301d0dfcbfb4c50a085d4e8ce7330f6 |

| name | operator1 |

| project_id | a8129485db844db898b8c8f45ddeb258 |

+------------+----------------------------------+
1.2. Use the retrieved user ID to search the resources accessible by the operator1 user.

Filter the output based on the instance resource type.

[student@workstation ~(operator1-production)]$ openstack metric resource search


user_id=4301d0dfcbfb4c50a085d4e8ce7330f6 -c id -c type -c user_id --type instance -f json

"user_id": "4301d0dfcbfb4c50a085d4e8ce7330f6",

"type": "instance",

"id": "969b5215-61d0-47c4-aa3d-b9fc89fcd46c"

1.3. Observe that the ID of the resource in the previous output matches the instance ID of the production-
rhel7 instance.

[student@workstation ~(operator1-production)]$ openstack server show production-rhel7 -c id -c


name -c status

+--------+--------------------------------------+

| Field | Value |

+--------+--------------------------------------+

| id | 969b5215-61d0-47c4-aa3d-b9fc89fcd46c |

| name | production-rhel7 |

| status | ACTIVE |

+--------+--------------------------------------+

2. List all metrics associated with the production-rhel7 instance.

2.1. Use the production-rhel7 instance resource ID to list the available metrics. Verify that the cpu_util
metric is listed.
[student@workstation ~(operator1-production)]$ openstack metric resource show 969b5215-61d0-
47c4-aa3d-b9fc89fcd46c --type instance

+--------------+---------------------------------------------------------------+

|Field | Value |

+--------------+---------------------------------------------------------------+

|id | 969b5215-61d0-47c4-aa3d-b9fc89fcd46c |

|image_ref | 280887fa-8ca4-43ab-b9b0-eea9bfc6174c |

|metrics | cpu.delta: a22f5165-0803-4578-9337-68c79e005c0f |

| | cpu: e410ce36-0dac-4503-8a94-323cf78e7b96 |

| | cpu_util: 6804b83c-aec0-46de-bed5-9cdfd72e9145 |

| ... output omitted... |

+--------------+---------------------------------------------------------------+

3. List the available archive policies. Verify that the cpu_util metric of the production-rhel7 instance uses
the archive policy named low.

3.1. List the available archive policies and their supported aggregation methods.

[student@workstation ~(operator1-production)]$ openstack metric archive-policy list -c name -c


aggregation_methods

+--------+------------------------------------------------+

| name | aggregation_methods |

+--------+------------------------------------------------+

| high | std, count, 95pct, min, max, sum, median, mean |

| low | std, count, 95pct, min, max, sum, median, mean |

| medium | std, count, 95pct, min, max, sum, median, mean |

+--------+------------------------------------------------+
3.2. View the definition of the low archive policy.

[student@workstation ~(operator1-production)]$ openstack metric archive-policy show low -c name


-c definition

+------------+---------------------------------------------------------------+

| Field | Value |

+------------+---------------------------------------------------------------+

| definition | - points: 12, granularity: 0:05:00, timespan: 1:00:00 |

| | - points: 24, granularity: 1:00:00, timespan: 1 day, 0:00:00 |

| | - points: 30, granularity: 1 day, 0:00:00, timespan: 30 days |

| name | low |

+------------+---------------------------------------------------------------+

3.3. Use the resource ID of the production-rhel7 instance to check which archive policy is in use for the
cpu_util metric.

[student@workstation ~(operator1-production)]$ openstack metric metric show --resource-id


969b5215-61d0-47c4-aa3d-b9fc89fcd46c -c archive_policy/name cpu_util

+---------------------+-------+

| Field | Value |

+---------------------+-------+

| archive_policy/name | low |

+---------------------+-------+

3.4. View the measures collected for the cpu_util metric associated with the production-rhel7 instance to
ensure that it uses granularities according to the definition of the low archive policy.

[student@workstation ~(operator1-production)]$ openstack metric measures show --resource-id


969b5215-61d0-47c4-aa3d-b9fc89fcd46c cpu_util

+---------------------------+-------------+----------------+
| timestamp | granularity | value |

+---------------------------+-------------+----------------+

| 2017-05-28T00:00:00+00:00 | 86400.0 | 0.838532808055 |

| 2017-05-28T15:00:00+00:00 | 3600.0 | 0.838532808055 |

| 2017-05-28T18:45:00+00:00 | 300.0 | 0.838532808055 |

+---------------------------+-------------+----------------+

4. Add new measures to the cpu_util metric. Observe that the newly added measures are available using
the min and max aggregation methods. Use the values from the following table. The measures must be
added using the architect1 user's credentials, because manipulating data points requires an account with
the admin role. The architect1 user's credentials are stored in the /home/student/architect1-production-rc
file.

Measures Parameters
Timestamp: the current time as an ISO 8601 formatted timestamp
Measure values: 30, 42

The measure values 30 and 42 are manual data values added to the cpu_util metric.

4.1. Source the architect1 user's credentials file. Add 30 and 42 as new measure values.

[student@workstation ~(operator1-production)]$ source ~/architect1-production-rc

[student@workstation ~(architect1-production)]$ openstack metric measures add --resource-id


969b5215-61d0-47c4-aa3d-b9fc89fcd46c --measure $(date -u --iso=seconds)@30 cpu_util

[student@workstation ~(architect1-production)]$ openstack metric measures add --resource-id


969b5215-61d0-47c4-aa3d-b9fc89fcd46c --measure $(date -u --iso=seconds)@42 cpu_util

4.2. Verify that the new measures have been successfully added for the cpu_util metric. Force the
aggregation of all known measures. The default aggregation method is mean, so you will see a value of 36
(the mean of 30 and 42). The number of records and their values returned in the output may vary.

[student@workstation ~(architect1-production)]$ openstack metric measures show --resource-id


969b5215-61d0-47c4-aa3d-b9fc89fcd46c cpu_util --refresh

+---------------------------+-------------+----------------+

| timestamp | granularity | value |

+---------------------------+-------------+----------------+

| 2017-05-28T00:00:00+00:00 | 86400.0 | 15.419266404 |

| 2017-05-28T15:00:00+00:00 | 3600.0 | 15.419266404 |

| 2017-05-28T19:55:00+00:00 | 300.0 | 0.838532808055 |

| 2017-05-28T20:30:00+00:00 | 300.0 | 36.0 |

+---------------------------+-------------+----------------+

4.3. Display the maximum and minimum values for the cpu_util metric measure.

[student@workstation ~(architect1-production)]$ openstack metric measures show --resource-id


969b5215-61d0-47c4-aa3d-b9fc89fcd46c cpu_util --refresh --aggregation max

+---------------------------+-------------+----------------+

| timestamp | granularity | value |

+---------------------------+-------------+----------------+

| 2017-05-28T00:00:00+00:00 | 86400.0 | 42.0 |

| 2017-05-28T15:00:00+00:00 | 3600.0 | 42.0 |

| 2017-05-28T19:55:00+00:00 | 300.0 | 0.838532808055 |

| 2017-05-28T20:30:00+00:00 | 300.0 | 42.0 |

+---------------------------+-------------+----------------+
[student@workstation ~(architect1-production)]$ openstack metric measures show --resource-id
969b5215-61d0-47c4-aa3d-b9fc89fcd46c cpu_util --refresh --aggregation min

+---------------------------+-------------+----------------+

| timestamp | granularity | value |

+---------------------------+-------------+----------------+

| 2017-05-28T00:00:00+00:00 | 86400.0 | 0.838532808055 |

| 2017-05-28T15:00:00+00:00 | 3600.0 | 0.838532808055 |

| 2017-05-28T20:30:00+00:00 | 300.0 | 30.0 |

+---------------------------+-------------+----------------+

5. Create a threshold alarm named cputhreshold-alarm based on aggregation by resources. Set the alarm
to trigger when the maximum CPU utilization for the production-rhel7 instance exceeds 50% for two
consecutive 5-minute periods.

5.1. Create the alarm.

[student@workstation ~(architect1-production)]$ openstack alarm create \
--type gnocchi_aggregation_by_resources_threshold --name cputhreshold-alarm \
--description 'Alarm to monitor CPU utilization' --enabled True --alarm-action 'log://' \
--comparison-operator ge --evaluation-periods 2 --threshold 50.0 --granularity 300 \
--aggregation-method max --metric cpu_util \
--query '{"=": {"id": "969b5215-61d0-47c4-aa3d-b9fc89fcd46c"}}' --resource-type instance

+--------------------+-------------------------------------------------------+

| Field | Value |

+--------------------+-------------------------------------------------------+

| aggregation_method | max |

| alarm_actions | [u'log://'] |

| alarm_id | f93a2bdc-1ac6-4640-bea8-88195c74fb45 |

| comparison_operator| ge |

| description | Alarm to monitor CPU utilization |


| enabled | True |

| evaluation_periods | 2 |

| granularity | 300 |

| metric | cpu_util |

| name | cputhreshold-alarm |

| ok_actions | [] |

| project_id | ba5b8069596541f2966738ee0fee37de |

+--------------------+-------------------------------------------------------+

5.2. View the newly created alarm. Verify that the state of the alarm is either ok or insufficient data.
According to the alarm definition, data is insufficient until two evaluation periods have been recorded.
Continue with the next step if the state is ok or insufficient data.

[student@workstation ~(architect1-production)]$ openstack alarm list -c name -c state -c enabled

+--------------------+-------+---------+

| name | state | enabled |

+--------------------+-------+---------+

| cputhreshold-alarm | ok | True |

+--------------------+-------+---------+

6. Simulate a high CPU utilization scenario by manually adding new measures to the cpu_util metric of the
instance. The alarm is expected to take between 5 and 10 minutes to trigger.

6.1. Open two terminal windows, either stacked vertically or side-by-side. The second terminal will be used
in subsequent steps to add data points until the alarm triggers. In the first window, use the watch command
to repeatedly display the alarm state.

[student@workstation ~(architect1-production)]$ watch openstack alarm list -c alarm_id -c name -c state

Every 2.0s: openstack alarm list -c alarm_id -c name -c state

+--------------------------------------+--------------------+-------+

| alarm_id | name | state |

+--------------------------------------+--------------------+-------+

| 82f0b4b6-5955-4acd-9d2e-2ae4811b8479 | cputhreshold-alarm | ok |

+--------------------------------------+--------------------+-------+

6.2. In the second terminal window, add new measures to the cpu_util metric of the production-rhel7
instance about once per minute. A value of 80 simulates high CPU utilization, since the alarm is set to
trigger at 50%.

[student@workstation ~(architect1-production)]$ openstack metric measures add --resource-id \
969b5215-61d0-47c4-aa3d-b9fc89fcd46c --measure $(date -u --iso=seconds)@80 cpu_util

Repeat this command about once per minute, either manually or with the loop sketched below. Be patient,
as the trigger must detect a maximum value greater than 50 in 2 consecutive 5-minute evaluation periods.
This is expected to take between 6 and 10 minutes.

[student@workstation ~(architect1-production)]$ openstack metric measures add --resource-id \
969b5215-61d0-47c4-aa3d-b9fc89fcd46c --measure $(date -u --iso=seconds)@80 cpu_util
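If you prefer not to retype the command every minute, a minimal shell loop such as the following sketch adds one measure per minute (the resource ID is the one identified earlier in this lab); stop it with Ctrl+C once the alarm fires:

[student@workstation ~(architect1-production)]$ while true; do openstack metric measures add \
--resource-id 969b5215-61d0-47c4-aa3d-b9fc89fcd46c \
--measure $(date -u --iso=seconds)@80 cpu_util; sleep 60; done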

6.3. The alarm-evaluator service detects the newly added measures. Within the expected 6 to 10 minutes,
the alarm state shown in the first terminal transitions from ok to alarm:

+--------------------------------------+--------------------+-------+

| alarm_id | name | state |


+--------------------------------------+--------------------+-------+

| 82f0b4b6-5955-4acd-9d2e-2ae4811b8479 | cputhreshold-alarm | alarm |

+--------------------------------------+--------------------+-------+

6.4. After stopping the watch and closing the second terminal, view the alarm history to analyze when the
alarm transitioned from the ok state to the alarm state. The output may look similar to the lines displayed
below.

[student@workstation ~(architect1-production)]$ openstack alarm-history show \
82f0b4b6-5955-4acd-9d2e-2ae4811b8479 -c timestamp -c type -c detail -f json

"timestamp": "2017-06-08T14:05:53.477088",

"type": "state transition",

"detail": "{\"transition_reason\": \"Transition to alarm due to 2 samples

outside threshold, most recent: 70.0\", \"state\": \"alarm\"}"

},

"timestamp": "2017-06-08T13:18:53.356979",

"type": "state transition",

"detail": "{\"transition_reason\": \"Transition to ok due to 2 samples

inside threshold, most recent: 0.579456043152\", \"state\": \"ok\"}"

},

"timestamp": "2017-06-08T13:15:53.338924",

"type": "state transition",


"detail": "{\"transition_reason\": \"2 datapoints are unknown\", \"state\":

\"insufficient data\"}"

},
