Increasing the disk capacity of a running VM in OpenStack

Linux/OpenStack | 2023. 10. 19. 11:55

There are two approaches: changing the flavor, or growing the disk file directly. This post covers the simpler one, resizing the disk file in place.

 

First, check which compute node hosts the VM you want to resize (named master here).

(On the controller node)

# openstack server show master |grep hostname
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute1                                                 |

 

(On that compute node)

Find the PID of the running VM's qemu process, then terminate it.

# ps -ef|grep disk
root        1110       1  0 02:37 ?        00:00:00 /usr/libexec/udisks2/udisksd
libvirt+    2788       1 99 02:38 ?        00:00:08 /usr/bin/qemu-system-x86_64 -name guest=instance-00000006,debug-threads=on -S -object {"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-instance-00000006/master-key.aes"} -machine pc-i440fx-6.2,usb=off,dump-guest-core=off,memory-backend=pc.ram -accel kvm -cpu Skylake-Client-IBRS,ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,clflushopt=on,umip=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,xsaves=on,pdpe1gb=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rsba=on,skip-l1dfl-vmentry=on,pschange-mc-no=on,hle=off,rtm=off -m 8192 -object {"qom-type":"memory-backend-ram","id":"pc.ram","size":8589934592} -overcommit mem-lock=off -smp 4,sockets=4,dies=1,cores=1,threads=1 -uuid f6aa6879-0ea4-4be8-a610-d1abcd60c9ab -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=25.2.0,serial=f6aa6879-0ea4-4be8-a610-d1abcd60c9ab,uuid=f6aa6879-0ea4-4be8-a610-d1abcd60c9ab,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=31,server=on,wait=off -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -blockdev {"driver":"file","filename":"/var/lib/nova/instances/_base/bf15376deac35c8c707e130fb5d70882999b77d2","node-name":"libvirt-2-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-2-format","read-only":true,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"} -blockdev {"driver":"file","filename":"/var/lib/nova/instances/f6aa6879-0ea4-4be8-a610-d1abcd60c9ab/disk","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"} -blockdev 
{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":"libvirt-1-storage","backing":"libvirt-2-format"} -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=libvirt-1-format,id=virtio-disk0,bootindex=1,write-cache=on -netdev tap,fd=34,id=hostnet0,vhost=on,vhostfd=36 -device virtio-net-pci,host_mtu=1450,netdev=hostnet0,id=net0,mac=fa:16:3e:cb:7e:47,bus=pci.0,addr=0x3 -add-fd set=3,fd=33 -chardev pty,id=charserial0,logfile=/dev/fdset/3,logappend=on -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0,bus=usb.0,port=1 -device usb-kbd,id=input1,bus=usb.0,port=2 -audiodev {"id":"audio1","driver":"none"} -vnc 127.0.0.1:0,audiodev=audio1 -device virtio-vga,id=video0,max_outputs=1,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -object {"qom-type":"rng-random","id":"objrng0","filename":"/dev/urandom"} -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -device vmcoreinfo -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
root        2836    2649  0 02:38 pts/1    00:00:00 grep --color=auto disk

 

# kill -9 2788

 

Go into the instance directory and grow the disk file by another 20 GB.

# cd /var/lib/nova/instances/f6aa6879-0ea4-4be8-a610-d1abcd60c9ab

# qemu-img resize disk +20G

Image resized.

 

(On the controller node)

Start the stopped VM again.

# openstack server start master
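The whole flow above (find the node, stop, resize, start) can be collected into a small sketch. It deliberately uses `openstack server stop` rather than killing the qemu process, since stopping through the API keeps Nova's power-state record in sync; this is an alternative to the approach above, not a replacement, and the ssh step assumes you can reach the compute node by its hypervisor hostname:

```shell
#!/bin/sh
# Sketch only: automate the resize flow from this post.
# Assumptions: VM name "master", growth "+20G", standard Nova instance paths.
set -eu

VM=master
GROW=+20G

# Pure helper: convert a size like "+20G" to bytes, for sanity checks.
grow_to_bytes() {
    n=${1#+}; n=${n%G}
    echo $((n * 1024 * 1024 * 1024))
}

if command -v openstack >/dev/null 2>&1; then
    # Stopping through the API (instead of kill -9) keeps Nova's
    # power state consistent with the hypervisor.
    openstack server stop "$VM"
    UUID=$(openstack server show "$VM" -f value -c id)
    NODE=$(openstack server show "$VM" -f value -c OS-EXT-SRV-ATTR:hypervisor_hostname)
    ssh "$NODE" "cd /var/lib/nova/instances/$UUID && qemu-img resize disk $GROW"
    openstack server start "$VM"
fi
```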

 

(Inside the VM)

A capacity check confirms the disk grew from the initial 20 GB by another 20 GB, to 40 GB.

root@master:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           794M  1.7M  793M   1% /run
/dev/vda1        39G   19G   21G  48% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/vda15      105M  6.1M   99M   6% /boot/efi
tmpfs           794M  4.0K  794M   1% /run/user/1000
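The root filesystem already reflects the new size here because cloud images typically run cloud-init's growpart module at boot. If a guest does not grow automatically, the manual steps can be sketched as follows (assumptions: /dev/vda1 is the root partition; the run is guarded behind RUN_RESIZE=1 so it never fires by accident):

```shell
#!/bin/sh
# Sketch: manually grow the root partition/filesystem inside the guest when
# cloud-init's automatic growpart did not run.
set -eu

# Pure helper: pick the resize tool for a filesystem type.
# (Note: xfs_growfs takes the mount point, e.g. "/", not the device.)
resize_cmd() {
    case "$1" in
        ext2|ext3|ext4) echo "resize2fs" ;;
        xfs)            echo "xfs_growfs" ;;
        *)              echo "unknown"; return 1 ;;
    esac
}

if [ "${RUN_RESIZE:-0}" = "1" ] && [ -b /dev/vda1 ]; then
    growpart /dev/vda 1      # extend partition 1 to fill the disk
    resize2fs /dev/vda1      # ext4 root; for xfs use: xfs_growfs /
fi
```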

 


[Openstack] Forcing a reboot of a VM stuck in the powering-off state

Linux/OpenStack | 2023. 7. 27. 10:47

A VM stuck in the powering-off state can be neither stopped nor rebooted.

- Example) VM name : master

 

# openstack server show master |grep task_state
| OS-EXT-STS:task_state               | powering-off                                             |

 

Trying to stop the VM in this state fails.

# openstack server stop master
Cannot 'stop' instance 17c0f56b-00c2-485d-9280-7d2cb09a6139 while it is in task_state powering-off (HTTP 409) (Request-ID: req-8380c8ea-fa5b-45bc-b16d-c5651a0b2281)

 

In this case, reset the state with the nova command below; then, depending on the situation, the stop or start command will bring the server back up.

# nova reset-state --active master
Reset state for server master succeeded; new state is active

# openstack server stop master

# openstack server start master
# openstack server show master |grep state
| OS-EXT-STS:power_state              | Running                                                  |
| OS-EXT-STS:task_state               | None                                                     |
| OS-EXT-STS:vm_state                 | active                                                   |
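The decision above can be wrapped in a small guard so the state is only reset when it is genuinely stuck. A hedged sketch; the list of "stuck" task states and the VM name are assumptions to adjust for your environment:

```shell
#!/bin/sh
# Sketch: reset a VM's state only when its task_state looks stuck, then bounce it.
set -eu

# Pure helper: is this a transitional task_state that can get stuck?
# (Assumption: these are the common ones; extend as needed.)
is_stuck() {
    case "$1" in
        powering-off|powering-on|rebooting|deleting) return 0 ;;
        *) return 1 ;;
    esac
}

VM=master
if command -v openstack >/dev/null 2>&1; then
    STATE=$(openstack server show "$VM" -f value -c OS-EXT-STS:task_state)
    if is_stuck "$STATE"; then
        nova reset-state --active "$VM"
        openstack server stop "$VM" || true
        openstack server start "$VM"
    fi
fi
```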


OpenStack VM creation error: {'code': 400, ..., 'message': "Host is not mapped to any cell"}

Linux/OpenStack | 2023. 7. 18. 11:47

This error occurred when creating a VM (name: worker2) after adding a new compute node.

The status was checked as follows.

 

# openstack server show worker2 |grep fault
| fault                               | {'code': 400, 'created': '2023-07-18T02:32:17Z', 'message': "Host 'compute2' is not mapped to any cell"} |

 

Even if the added compute node shows up normally in openstack compute service list, run the command below again.

# nova-manage cell_v2 discover_hosts --verbose
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell0': 12471ee6-907e-47d5-b002-b008d99b847b
Checking host mapping for compute host 'compute2': 8f3070ff-4173-4c06-a004-19eeabeacd97
Creating host mapping for compute host 'compute2': 8f3070ff-4173-4c06-a004-19eeabeacd97
Found 1 unmapped computes in cell: 12471ee6-907e-47d5-b002-b008d99b847b
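To avoid running this by hand every time a compute node is added, nova's scheduler can discover hosts periodically. A sketch of the relevant nova.conf fragment on the controller (interval in seconds); verify the option against your Nova release:

```ini
[scheduler]
# Periodically run host discovery so new compute nodes get mapped
# to a cell automatically (0, the default, disables it).
discover_hosts_in_cells_interval = 300
```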

 

Then create the VM again; it should now succeed.

 


2. Creating an OpenStack Image, Flavor, Network, and VM in sequence

Linux/OpenStack | 2023. 7. 10. 08:35

This assumes the OpenStack infrastructure setup is already complete.

Reference : https://sysdocu.tistory.com/1833

 

 

1. Creating an image

 

An OS installation image must first be registered in OpenStack.

 

Download the CentOS 7 cloud image.

# wget https://mirrors.cloud.tencent.com/centos-cloud/centos/7/images/CentOS-7-x86_64-GenericCloud-2009.qcow2

 

Register the image file.

# openstack image create "CentOS7" --file CentOS-7-x86_64-GenericCloud-2009.qcow2 --disk-format qcow2 --container-format bare --public

+------------------+---------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                       |
+------------------+---------------------------------------------------------------------------------------------------------------------------------------------+
| container_format | bare                                                                                                                                        |
| created_at       | 2023-07-10T01:23:08Z                                                                                                                        |
| disk_format      | qcow2                                                                                                                                       |
| file             | /v2/images/acbe118e-6881-4ecf-8447-868864150c81/file                                                                                        |
| id               | acbe118e-6881-4ecf-8447-868864150c81                                                                                                        |
| min_disk         | 0                                                                                                                                           |
| min_ram          | 0                                                                                                                                           |
| name             | CentOS7                                                                                                                                     |
| owner            | 677861619c5445368a353ebeb0bcba2b                                                                                                            |
| properties       | os_hidden='False', owner_specified.openstack.md5='', owner_specified.openstack.object='images/CentOS7', owner_specified.openstack.sha256='' |
| protected        | False                                                                                                                                       |
| schema           | /v2/schemas/image                                                                                                                           |
| status           | queued                                                                                                                                      |
| tags             |                                                                                                                                             |
| updated_at       | 2023-07-10T01:23:08Z                                                                                                                        |
| visibility       | public                                                                                                                                      |
+------------------+---------------------------------------------------------------------------------------------------------------------------------------------+

 

Check the created image.

# openstack image list

+--------------------------------------+---------+--------+
| ID                                   | Name    | Status |
+--------------------------------------+---------+--------+
| acbe118e-6881-4ecf-8447-868864150c81 | CentOS7 | active |
+--------------------------------------+---------+--------+

 

* Ubuntu cloud images can be downloaded from the official site below.

https://cloud-images.ubuntu.com/
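For example, an Ubuntu 20.04 (focal) image could be fetched and registered the same way as the CentOS image above. A sketch; the URL helper assumes the site's current naming scheme, which is worth verifying before use:

```shell
#!/bin/sh
# Sketch: fetch and register an Ubuntu cloud image, mirroring the CentOS steps.
set -eu

# Pure helper: cloud-images.ubuntu.com URL for a release codename.
# (Assumption: the current naming scheme, e.g. focal-server-cloudimg-amd64.img.)
ubuntu_img_url() {
    echo "https://cloud-images.ubuntu.com/$1/current/$1-server-cloudimg-amd64.img"
}

if command -v openstack >/dev/null 2>&1; then
    wget "$(ubuntu_img_url focal)"
    openstack image create "Ubuntu20.04" \
        --file focal-server-cloudimg-amd64.img \
        --disk-format qcow2 --container-format bare --public
fi
```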

 

 

2. Creating a flavor

 

A flavor defines a VM's resources (CPU, memory, disk, etc.) and configuration.

 

Create a flavor.

Syntax) openstack flavor create --ram <RAM> --disk <DISK> --vcpus <VCPUS> --public <FLAVOR_NAME>

# openstack flavor create --ram 2048 --disk 20 --vcpus 2 --public myflavor

+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 20                                   |
| id                         | 7e94605d-ace3-4980-94ad-fa49b36c4735 |
| name                       | myflavor                             |
| os-flavor-access:is_public | True                                 |
| properties                 |                                      |
| ram                        | 2048                                 |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 2                                    |
+----------------------------+--------------------------------------+

 

Check the created flavor.

# openstack flavor list
+--------------------------------------+----------+------+------+-----------+-------+-----------+
| ID                                   | Name     |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+----------+------+------+-----------+-------+-----------+
| 7e94605d-ace3-4980-94ad-fa49b36c4735 | myflavor | 2048 |   20 |         0 |     2 | True      |
+--------------------------------------+----------+------+------+-----------+-------+-----------+

 

 

3. Creating networks

 

First, create the base provider network.
# openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2023-07-17T06:08:56Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | b536c267-4a33-4068-a0da-4748a1cbfc97 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | False                                |
| is_vlan_transparent       | None                                 |
| mtu                       | 1550                                 |
| name                      | provider                             |
| port_security_enabled     | True                                 |
| project_id                | 677861619c5445368a353ebeb0bcba2b     |
| provider:network_type     | flat                                 |
| provider:physical_network | provider                             |
| provider:segmentation_id  | None                                 |
| qos_policy_id             | None                                 |
| revision_number           | 1                                    |
| router:external           | External                             |
| segments                  | None                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| updated_at                | 2023-07-17T06:08:56Z                 |
+---------------------------+--------------------------------------+

* Option notes
--share : allows all projects to use the virtual network
--external : makes the virtual network reachable from outside (use --internal for an internal-only network)

Check that the following options are set in these files, and add them if they are missing.
# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_type_flat]
flat_networks = provider

# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eno1

Here, eno1 is the name of the host's network interface.
If the configuration changed, restart neutron. (If linuxbridge_agent.ini was modified, the Linux bridge agent, typically the neutron-linuxbridge-agent service, likely needs a restart as well.)
# systemctl restart neutron-server

Create a subnet on the external network.
Enter the public IP range to assign to VMs and the gateway information.
# openstack subnet create --network provider --allocation-pool start=115.68.142.66,end=115.68.142.94 --dns-nameserver 8.8.8.8 --gateway 115.68.142.65 --subnet-range 115.68.142.64/27 provider
+----------------------+--------------------------------------+
| Field                | Value                                |
+----------------------+--------------------------------------+
| allocation_pools     | 115.68.142.66-115.68.142.94          |
| cidr                 | 115.68.142.64/27                     |
| created_at           | 2023-07-17T06:12:12Z                 |
| description          |                                      |
| dns_nameservers      | 8.8.8.8                              |
| dns_publish_fixed_ip | None                                 |
| enable_dhcp          | True                                 |
| gateway_ip           | 115.68.142.65                        |
| host_routes          |                                      |
| id                   | d2346f72-dd3b-4ef2-8065-0fd34d50177f |
| ip_version           | 4                                    |
| ipv6_address_mode    | None                                 |
| ipv6_ra_mode         | None                                 |
| name                 | provider                             |
| network_id           | b536c267-4a33-4068-a0da-4748a1cbfc97 |
| prefix_length        | None                                 |
| project_id           | 677861619c5445368a353ebeb0bcba2b     |
| revision_number      | 0                                    |
| segment_id           | None                                 |
| service_types        |                                      |
| subnetpool_id        | None                                 |
| tags                 |                                      |
| updated_at           | 2023-07-17T06:12:12Z                 |
+----------------------+--------------------------------------+


Load the user's environment so the internal network is created under that user's project.
If you are creating the VM as admin for testing, skip the command immediately below.
# source sysdocu-openrc

Create the network that VMs will use for internal traffic among themselves.
# openstack network create selfservice
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2023-07-14T01:35:56Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | 30fbd00c-5968-40bf-a6e6-6e1b3307a232 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | False                                |
| is_vlan_transparent       | None                                 |
| mtu                       | 1500                                 |
| name                      | selfservice                          |
| port_security_enabled     | True                                 |
| project_id                | 677861619c5445368a353ebeb0bcba2b     |
| provider:network_type     | vxlan                                |
| provider:physical_network | None                                 |
| provider:segmentation_id  | 477                                  |
| qos_policy_id             | None                                 |
| revision_number           | 1                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| updated_at                | 2023-07-14T01:35:56Z                 |
+---------------------------+--------------------------------------+

# openstack subnet create --network selfservice --dns-nameserver 8.8.8.8 --gateway 172.16.1.1 --subnet-range 172.16.1.0/24 selfservice
+----------------------+--------------------------------------+
| Field                | Value                                |
+----------------------+--------------------------------------+
| allocation_pools     | 172.16.1.2-172.16.1.254              |
| cidr                 | 172.16.1.0/24                        |
| created_at           | 2023-07-14T01:36:07Z                 |
| description          |                                      |
| dns_nameservers      | 8.8.8.8                              |
| dns_publish_fixed_ip | None                                 |
| enable_dhcp          | True                                 |
| gateway_ip           | 172.16.1.1                           |
| host_routes          |                                      |
| id                   | d577dadf-9d16-49ef-b495-69412745bc7b |
| ip_version           | 4                                    |
| ipv6_address_mode    | None                                 |
| ipv6_ra_mode         | None                                 |
| name                 | selfservice                          |
| network_id           | 30fbd00c-5968-40bf-a6e6-6e1b3307a232 |
| prefix_length        | None                                 |
| project_id           | 677861619c5445368a353ebeb0bcba2b     |
| revision_number      | 0                                    |
| segment_id           | None                                 |
| service_types        |                                      |
| subnetpool_id        | None                                 |
| tags                 |                                      |
| updated_at           | 2023-07-14T01:36:07Z                 |
+----------------------+--------------------------------------+

Create a router to connect the external and internal networks.
# openstack router create router
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| admin_state_up          | UP                                   |
| availability_zone_hints |                                      |
| availability_zones      |                                      |
| created_at              | 2023-07-17T06:13:12Z                 |
| description             |                                      |
| distributed             | False                                |
| external_gateway_info   | null                                 |
| flavor_id               | None                                 |
| ha                      | False                                |
| id                      | 12bb1577-1184-4aaf-a285-175579a0f13f |
| name                    | router                               |
| project_id              | 677861619c5445368a353ebeb0bcba2b     |
| revision_number         | 1                                    |
| routes                  |                                      |
| status                  | ACTIVE                               |
| tags                    |                                      |
| updated_at              | 2023-07-17T06:13:12Z                 |
+-------------------------+--------------------------------------+

# openstack router add subnet router selfservice
# openstack router set router --external-gateway provider

Verify the connection.
# source admin-openrc
# openstack router list
+--------------------------------------+--------+--------+-------+----------------------------------+-------------+-------+
| ID                                   | Name   | Status | State | Project                          | Distributed | HA    |
+--------------------------------------+--------+--------+-------+----------------------------------+-------------+-------+
| 12bb1577-1184-4aaf-a285-175579a0f13f | router | ACTIVE | UP    | 677861619c5445368a353ebeb0bcba2b | False       | False |
+--------------------------------------+--------+--------+-------+----------------------------------+-------------+-------+
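Beyond the router list, one way to confirm the wiring is to check that the L3 agent created the router's network namespace on the network node. A sketch; the qrouter-<id> naming is the convention used by the Neutron L3 agent:

```shell
#!/bin/sh
# Sketch: verify the L3 agent created a namespace for the router.
set -eu

# Pure helper: the L3 agent names router namespaces "qrouter-<router-id>".
router_ns() { echo "qrouter-$1"; }

if command -v openstack >/dev/null 2>&1; then
    RID=$(openstack router show router -f value -c id)
    if ip netns list | grep -q "$(router_ns "$RID")"; then
        echo "router namespace present"
    else
        echo "router namespace missing"
    fi
fi
```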

 

 

4. Creating a VM

 

To customize the VM's initial configuration, install the related package and create a user-data file.

Two login methods are shown, the regular ubuntu account and the root administrator account; use whichever fits your situation.

# apt-get -y install cloud-init

# vi temp.sh

(Allow login with the root account)

#cloud-config
users:
  - name: root
chpasswd:
  list: |
    root:12345678@#$%
  expire: False
runcmd:
  - 'sed -i "s/^#PermitRootLogin .*/PermitRootLogin yes/" /etc/ssh/sshd_config'
  - 'sed -i "s/^PasswordAuthentication no/PasswordAuthentication yes/" /etc/ssh/sshd_config'
  - 'systemctl restart sshd'

 

(Allow login with the default ubuntu account)

#cloud-config
users:
  - name: ubuntu
chpasswd:
  list: |
    ubuntu:12345678@#$%
  expire: False
runcmd:
  - 'sed -i "s/^PasswordAuthentication no/PasswordAuthentication yes/" /etc/ssh/sshd_config'
  - 'systemctl restart sshd'
  - 'sudo usermod -aG sudo ubuntu'
  - 'sed -i "s/\/ubuntu:\/bin\/sh/\/ubuntu:\/bin\/bash/" /etc/passwd'

 

(Set the root password, and create an ubuntu account with sudo privileges)

#cloud-config
users:
  - name: ubuntu
chpasswd:
  list: |
    ubuntu:12345678@#$%
  expire: False
runcmd:
  - 'sed -i "s/^PasswordAuthentication no/PasswordAuthentication yes/" /etc/ssh/sshd_config'
  - 'systemctl restart sshd'
  - 'useradd -m -d /home/ubuntu -s /bin/bash -G sudo ubuntu'
  - 'echo "ubuntu:12345678@#$%" | chpasswd'
  - 'echo "ubuntu  ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'

 

Create the VM using the image and flavor.

When creating a VM, include the flavor, image, key, and (internal) network information created above.

Syntax) openstack server create --flavor <FLAVOR_NAME> --image <IMAGE_NAME> --nic net-id=<NETWORK_NAME> --user-data <INITIALIZE_FILE> <INSTANCE_NAME>

# openstack server create --flavor myflavor --image "CentOS7" --nic net-id=selfservice --user-data /root/temp.sh myinstance
+-------------------------------------+-------------------------------------------------+
| Field                               | Value                                           |
+-------------------------------------+-------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                          |
| OS-EXT-AZ:availability_zone         |                                                 |
| OS-EXT-SRV-ATTR:host                | None                                            |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                            |
| OS-EXT-SRV-ATTR:instance_name       |                                                 |
| OS-EXT-STS:power_state              | NOSTATE                                         |
| OS-EXT-STS:task_state               | scheduling                                      |
| OS-EXT-STS:vm_state                 | building                                        |
| OS-SRV-USG:launched_at              | None                                            |
| OS-SRV-USG:terminated_at            | None                                            |
| accessIPv4                          |                                                 |
| accessIPv6                          |                                                 |
| addresses                           |                                                 |
| adminPass                           | QSjCjKK3oiJi                                    |
| config_drive                        |                                                 |
| created                             | 2023-07-14T05:16:28Z                            |
| flavor                              | myflavor (7e94605d-ace3-4980-94ad-fa49b36c4735) |
| hostId                              |                                                 |
| id                                  | a23ff754-668f-4f9e-b517-376ae41ddc42            |
| image                               | CentOS7 (acbe118e-6881-4ecf-8447-868864150c81)  |
| key_name                            | None                                           |
| name                                | myinstance                                      |
| progress                            | 0                                               |
| project_id                          | 677861619c5445368a353ebeb0bcba2b                |
| properties                          |                                                 |
| security_groups                     | name='default'                                  |
| status                              | BUILD                                           |
| updated                             | 2023-07-14T05:16:28Z                            |
| user_id                             | 7ffedad885e1490e9f5598081077f5a8                |
| volumes_attached                    |                                                 |
+-------------------------------------+-------------------------------------------------+

 

root@controller:~# openstack server list
+--------------------------------------+------------+--------+-----------------------------------------+---------+----------+
| ID                                   | Name       | Status | Networks                                | Image   | Flavor   |
+--------------------------------------+------------+--------+-----------------------------------------+---------+----------+
| a23ff754-668f-4f9e-b517-376ae41ddc42 | myinstance | ACTIVE | selfservice=172.16.1.173                | CentOS7 | myflavor |
+--------------------------------------+------------+--------+-----------------------------------------+---------+----------+
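openstack server create returns while the instance is still building, so a short wait may be needed before the next steps. A sketch of a polling loop (the CLI also accepts a --wait flag on server create, which blocks until the instance is ACTIVE or errors):

```shell
#!/bin/sh
# Sketch: poll until the new VM leaves the BUILD state.
set -eu

# Pure helper: is the reported status ACTIVE?
is_active() { [ "$1" = "ACTIVE" ]; }

VM=myinstance
if command -v openstack >/dev/null 2>&1; then
    until is_active "$(openstack server show "$VM" -f value -c status)"; do
        sleep 5
    done
fi
```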

 

Now assign a public IP to it.

The command below automatically allocates one IP from the range assigned to the provider network.

# openstack floating ip create provider
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2023-07-17T06:16:01Z                 |
| description         |                                      |
| dns_domain          | None                                 |
| dns_name            | None                                 |
| fixed_ip_address    | None                                 |
| floating_ip_address | 115.68.142.86                        |
| floating_network_id | b536c267-4a33-4068-a0da-4748a1cbfc97 |
| id                  | dcc8088b-c577-41dd-ae40-d0bdd97865ed |
| name                | 115.68.142.86                        |
| port_details        | None                                 |
| port_id             | None                                 |
| project_id          | 677861619c5445368a353ebeb0bcba2b     |
| qos_policy_id       | None                                 |
| revision_number     | 0                                    |
| router_id           | None                                 |
| status              | DOWN                                 |
| subnet_id           | None                                 |
| tags                | []                                   |
| updated_at          | 2023-07-17T06:16:01Z                 |
+---------------------+--------------------------------------+

 

Check the allocated public IP.

# openstack floating ip list
+--------------------------------------+---------------------+------------------+------+--------------------------------------+----------------------------------+
| ID                                   | Floating IP Address | Fixed IP Address | Port | Floating Network                     | Project                          |
+--------------------------------------+---------------------+------------------+------+--------------------------------------+----------------------------------+
| dcc8088b-c577-41dd-ae40-d0bdd97865ed | 115.68.142.86       | None             | None | b536c267-4a33-4068-a0da-4748a1cbfc97 | 677861619c5445368a353ebeb0bcba2b |
+--------------------------------------+---------------------+------------------+------+--------------------------------------+----------------------------------+

 

Associate the allocated IP with the server.

The server can be specified by either its name or its ID.

# openstack server add floating ip myinstance 115.68.142.86

 

Check again that the IP has been added.
# openstack server list
+--------------------------------------+------------+--------+-----------------------------------------+---------+----------+
| ID                                   | Name       | Status | Networks                                | Image   | Flavor   |
+--------------------------------------+------------+--------+-----------------------------------------+---------+----------+
| a23ff754-668f-4f9e-b517-376ae41ddc42 | myinstance | ACTIVE | selfservice=172.16.1.173, 115.68.142.86 | CentOS7 | myflavor |
+--------------------------------------+------------+--------+-----------------------------------------+---------+----------+

 

For reference, SSH access is also possible, but if the network is not working you can reach the instance through NoVNC as follows.

# openstack console url show myinstance

+-------+-------------------------------------------------------------------------------------------+
| Field | Value                                                                                     |
+-------+-------------------------------------------------------------------------------------------+
| type  | novnc                                                                                     |
| url   | http://controller:6080/vnc_auto.html?path=%3Ftoken%3Ddd017af1-27f8-4f49-a611-fe36d5d34c01 |
+-------+-------------------------------------------------------------------------------------------+

 

Replace 'controller' in the URL with a reachable domain name or IP address, then open it in a web browser to get an interactive console screen.
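The substitution can be done in one line; a minimal sketch, assuming 115.68.142.99 is a reachable address for the controller (a hypothetical value here):

```shell
# Rewrite the hostname part of the console URL to a reachable IP.
# 115.68.142.99 is an assumed, reachable controller address.
url='http://controller:6080/vnc_auto.html?path=%3Ftoken%3Ddd017af1-27f8-4f49-a611-fe36d5d34c01'
echo "$url" | sed 's/controller/115.68.142.99/'
```

The URL itself can also be fetched non-interactively with `openstack console url show myinstance -f value -c url`.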

 


1. OpenStack Environment Setup (Victoria release)

Linux/OpenStack|2023. 7. 5. 13:15

If you need additional information during the installation, refer to the official documentation below.

* OpenStack Documentation: https://docs.openstack.org/ko_KR/

* This manual was rewritten with reference to the following post: https://yumserv.tistory.com/294

* Server layout: one Controller, Neutron, Compute, and Storage node each, all running Ubuntu 20.04.

- Controller : 115.68.142.99

- Neutron : 115.68.142.100

- Compute : 115.68.142.101

- Storage : 115.68.142.102 // additional disk attached (/dev/sdb)

 

 

1. Basic Environment Setup

 

1) Hostname configuration

(On all nodes)

# vi /etc/hosts

127.0.0.1 localhost localhost.localdomain
115.68.142.99 controller
115.68.142.100 neutron
115.68.142.101 compute
115.68.142.102 storage

 

(On each node)

# hostnamectl set-hostname controller    // on the Controller server

# hostnamectl set-hostname neutron      // on the Neutron server

# hostnamectl set-hostname compute     // on the Compute server

# hostnamectl set-hostname storage     // on the Storage server
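Before moving on, it is worth sanity-checking the name-to-IP mapping. The sketch below runs the check against an inline copy of the host table above; once /etc/hosts is in place, any node can be checked the same way with `getent hosts <name>`:

```shell
# Print the IP each node name maps to, using an inline copy of the host table.
hosts='115.68.142.99 controller
115.68.142.100 neutron
115.68.142.101 compute
115.68.142.102 storage'
for h in controller neutron compute storage; do
    printf '%s -> %s\n' "$h" "$(printf '%s\n' "$hosts" | awk -v n="$h" '$2 == n {print $1}')"
done
```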

 

2) Time server configuration

(On all servers)

# vi /etc/systemd/timesyncd.conf

[Time]
NTP=time.bora.net

 

# systemctl restart systemd-timesyncd

 

3) Add the OpenStack repository

(On all servers)

# apt -y install software-properties-common
# add-apt-repository cloud-archive:victoria

# apt -y update
# apt -y upgrade
# apt -y install python3-openstackclient

 

4) Install the SQL server (MariaDB)

(On the Controller server)

# apt-get -y install mariadb-server

# mysql_secure_installation

Enter current password for root (enter for none): (just press Enter)

Set root password? [Y/n] y

New password: (enter the root password to use)
Re-enter new password: (enter it again)

Remove anonymous users? [Y/n] y

Disallow root login remotely? [Y/n] y

Remove test database and access to it? [Y/n] y

Reload privilege tables now? [Y/n] y

 

Create a MariaDB configuration file.

# vi /etc/mysql/mariadb.conf.d/99-openstack.cnf

[mysqld]
bind-address = 115.68.142.99    // address to listen on (this Controller server's IP)
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Restart the daemon to apply the settings.

# systemctl restart mariadb

 

5) Install the message queue

A message queue is required for communication between the servers.

 

(On the Controller server)

Install the package.

# apt-get -y install rabbitmq-server

 

Add a RabbitMQ user account and grant it full permissions (the three ".*" regex arguments below cover configure, write, and read, in that order).
# rabbitmqctl add_user openstack 12345678
Adding user "openstack" ...

 

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...

 

6) Install memcached

(On the Controller server)

# apt-get -y install memcached

# sed -i 's/127.0.0.1/0.0.0.0/' /etc/memcached.conf

# systemctl restart memcached

 

If you are using a firewall, allow the memcached daemon port (TCP 11211).

 

 

2. Install Keystone

 

(On the Controller server)

1) Create the Keystone database and account

# mysql -p

MariaDB [(none)]> create database keystone;
MariaDB [(none)]> grant all privileges on keystone.* to keystone@'localhost' identified by '12345678';
MariaDB [(none)]> grant all privileges on keystone.* to keystone@'%' identified by '12345678';
MariaDB [(none)]> flush privileges;

 

2) Install and configure the Keystone packages

# apt-get -y install keystone python3-openstackclient apache2 libapache2-mod-wsgi-py3 python3-oauth2client 
# vi /etc/keystone/keystone.conf

[DEFAULT]
log_dir = /var/log/keystone

[database]
connection = mysql+pymysql://keystone:12345678@controller/keystone

[token]
provider = fernet

 

Create the tables required in the Keystone database with the management command keystone-manage.

# su -s /bin/bash keystone -c "keystone-manage db_sync"

 

Initialize the Fernet key repositories with keystone-manage.

# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

 

Use keystone-manage bootstrap to create the admin user, project, and role, and to grant the role to the newly created user on the project.

# keystone-manage bootstrap --bootstrap-password 12345678 \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id AZ1

 

Here, AZ1 is the region ID, entered to distinguish availability zones.

If you are using a firewall, allow the Keystone daemon ports (TCP 5000, TCP 35357).

 

3) Configure the Apache web server

# echo "ServerName controller" >> /etc/apache2/apache2.conf

# systemctl restart apache2

 

4) Set the administrator environment variables

# vi admin-openrc

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=12345678
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_AUTH_TYPE=password

 

# source admin-openrc
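Sourcing the file simply exports the OS_* variables into the current shell, and the openstack client reads its credentials from them. A throwaway demonstration (using a copy under /tmp so the real file stays untouched):

```shell
# Write a minimal copy of the openrc file and source it.
cat > /tmp/admin-openrc.demo <<'EOF'
export OS_USERNAME=admin
export OS_AUTH_URL=http://controller:5000/v3
EOF
. /tmp/admin-openrc.demo
# The variables are now visible to any command run from this shell.
echo "$OS_USERNAME @ $OS_AUTH_URL"
```

`env | grep OS_` shows everything currently exported.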

 

OpenStack administration commands require the admin environment variables to be set.

They are usually applied on login to the controller server; if an administration command fails to run later, check first whether these environment variables were set.

To confirm the environment variables were applied correctly, try issuing a token.

# openstack token issue

+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2023-07-06T00:29:30+0000                                                                                                                                                                |
| id         | gAAAAABkpfzaysyUUIKuSEBP7g4KGHip6yuFrGK2LYtXiYTq5NT1j9Mha_vxESc7_a3Xc69Rx3ID56kn1oyxs0ZJ0iK46qEGqUoxL7S4ZbfK2uSC24iSrwIJ1W9D0bQ5pv6m3YhBht6gK04n1pXaoD9ahM6cS3wKl8osJhUshRyd-GrJ6Cg8pM8 |
| project_id | 677861619c5445368a353ebeb0bcba2b                                                                                                                                                        |
| user_id    | 7ffedad885e1490e9f5598081077f5a8                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

 

5) Create projects, users, and roles

 

# openstack project create --domain default --description "Service Project" service

+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 38f4f6c42e614625a309679c45db8a08 |
| is_domain   | False                            |
| name        | service                          |
| options     | {}                               |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+

 

Verify the created project.

# openstack project list

+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 38f4f6c42e614625a309679c45db8a08 | service |
| 677861619c5445368a353ebeb0bcba2b | admin   |
+----------------------------------+---------+

 

 

3. Install Glance

 

(On the Controller server)

1) Create the database

Be careful: special characters in the Glance DB password are reportedly not always handled correctly.

(Reference: https://bugs.launchpad.net/glance/+bug/1695299)

# mysql -p
MariaDB [(none)]> create database glance;
MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'localhost' identified by '12345678';
MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'%' identified by '12345678';
MariaDB [(none)]> flush privileges;

 

2) Create the Glance user, service, and endpoints

# openstack user create --domain default --password 12345678 glance

+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 41d916460e594a6996a779821f7aaaa9 |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

 

# openstack role add --project service --user glance admin

 

Create the glance service.

# openstack service create --name glance --description 'OpenStack Image' image

+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | 8b63b37cb389408cbc0c9596c54351f3 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+

 

Create the endpoints the image service will use, one for each of the three interfaces: public, internal, and admin.

# openstack endpoint create --region AZ1 image public http://controller:9292

+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 7d4985792d104e7689028b448cdcd9e7 |
| interface    | public                           |
| region       | AZ1                              |
| region_id    | AZ1                              |
| service_id   | 8b63b37cb389408cbc0c9596c54351f3 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

 

# openstack endpoint create --region AZ1 image internal http://controller:9292

+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | edba45a5976140d1826b5fbeb61cb8ba |
| interface    | internal                         |
| region       | AZ1                              |
| region_id    | AZ1                              |
| service_id   | 8b63b37cb389408cbc0c9596c54351f3 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

 

# openstack endpoint create --region AZ1 image admin http://controller:9292

+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 276c3f3e3dd641b3bac59e78c5cbf86f |
| interface    | admin                            |
| region       | AZ1                              |
| region_id    | AZ1                              |
| service_id   | 8b63b37cb389408cbc0c9596c54351f3 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

 

Verify that the Glance service has been created.

# openstack service list
+----------------------------------+----------+----------+
| ID                               | Name     | Type     |
+----------------------------------+----------+----------+
| 8b63b37cb389408cbc0c9596c54351f3 | glance   | image    |
| ffbd73da330946528c15f9db34380078 | keystone | identity |
+----------------------------------+----------+----------+

 

Verify that the endpoints have been created.

# openstack endpoint list
+----------------------------------+--------+--------------+--------------+---------+-----------+----------------------------+
| ID                               | Region | Service Name | Service Type | Enabled | Interface | URL                        |
+----------------------------------+--------+--------------+--------------+---------+-----------+----------------------------+
| 276c3f3e3dd641b3bac59e78c5cbf86f | AZ1    | glance       | image        | True    | admin     | http://controller:9292     |
| 529ebbd795334185b45b6bb43ffabfac | AZ1    | keystone     | identity     | True    | internal  | http://controller:5000/v3/ |
| 776e98c8e99e4f65acab76d0d06a58a5 | AZ1    | keystone     | identity     | True    | admin     | http://controller:5000/v3/ |
| 7d4985792d104e7689028b448cdcd9e7 | AZ1    | glance       | image        | True    | public    | http://controller:9292     |
| e204698c2a504534b962a36fbacc73eb | AZ1    | keystone     | identity     | True    | public    | http://controller:5000/v3/ |
| edba45a5976140d1826b5fbeb61cb8ba | AZ1    | glance       | image        | True    | internal  | http://controller:9292     |
+----------------------------------+--------+--------------+--------------+---------+-----------+----------------------------+
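If an endpoint is ever created with the wrong interface, it can be removed with `openstack endpoint delete <ID>` and recreated. Accidental duplicates are easy to spot by counting (service, interface) pairs; a sketch with hypothetical sample pairs (in practice, feed it `openstack endpoint list -f value -c "Service Name" -c Interface`):

```shell
# Print any (service, interface) pair that occurs more than once.
# The pairs below are hypothetical sample data, not the real list above.
printf '%s\n' \
    'glance public' 'glance internal' 'glance internal' \
    'keystone public' 'keystone internal' 'keystone admin' |
    sort | uniq -d
```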

 

3) Install the Glance package

# apt-get -y install glance

 

4) Edit the Glance configuration file

The glance-api.conf file is used by the Glance service to accept API requests from clients.

 

# vi /etc/glance/glance-api.conf

[DEFAULT]
show_image_direct_url = True
[database]
connection = mysql+pymysql://glance:12345678@controller/glance
backend = sqlalchemy

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 12345678

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

[image_format]
disk_formats = ami,ari,aki,vhd,vhdx,vmdk,raw,qcow2,vdi,iso,ploop

 

Change the permissions of the configuration file.

# chmod 640 /etc/glance/glance-api.conf
# chown root:glance /etc/glance/glance-api.conf

 

Create the tables required in the glance database with the management command glance-manage.

# su -s /bin/bash glance -c "glance-manage db_sync"

 

Restart the daemon to apply the configuration.

# systemctl restart glance-api

 

If you are using a firewall, allow the glance-api and glance-registry daemon ports (TCP 9191, TCP 9292).

 

5) Verification

Download the CirrOS image and create an image in the admin environment.

# wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

# openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public

+------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                      |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| container_format | bare                                                                                                                                       |
| created_at       | 2023-07-06T00:22:42Z                                                                                                                       |
| disk_format      | qcow2                                                                                                                                      |
| file             | /v2/images/5cee962f-6018-4cf3-9027-3b03257d7d0f/file                                                                                       |
| id               | 5cee962f-6018-4cf3-9027-3b03257d7d0f                                                                                                       |
| min_disk         | 0                                                                                                                                          |
| min_ram          | 0                                                                                                                                          |
| name             | cirros                                                                                                                                     |
| owner            | 677861619c5445368a353ebeb0bcba2b                                                                                                           |
| properties       | os_hidden='False', owner_specified.openstack.md5='', owner_specified.openstack.object='images/cirros', owner_specified.openstack.sha256='' |
| protected        | False                                                                                                                                      |
| schema           | /v2/schemas/image                                                                                                                          |
| status           | queued                                                                                                                                     |
| tags             |                                                                                                                                            |
| updated_at       | 2023-07-06T00:22:42Z                                                                                                                       |
| visibility       | public                                                                                                                                     |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------+

 

Verify that the image has been created.

# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 5cee962f-6018-4cf3-9027-3b03257d7d0f | cirros | active |
+--------------------------------------+--------+--------+

 

 

4. Install Nova

 

(On the Controller server)

1) Create the databases and accounts

The nova_cell0 database is used by the nova-api, nova-conductor, and nova-compute services and stores information on instances that failed scheduling.
The placement database stores information about the resources needed to create an instance, the remaining resources, and overall usage.

 

# mysql -p

MariaDB [(none)]> create database nova;
MariaDB [(none)]> grant all privileges on nova.* to nova@'localhost' identified by '12345678';
MariaDB [(none)]> grant all privileges on nova.* to nova@'%' identified by '12345678';
MariaDB [(none)]> create database nova_api;
MariaDB [(none)]> grant all privileges on nova_api.* to nova@'localhost' identified by '12345678';
MariaDB [(none)]> grant all privileges on nova_api.* to nova@'%' identified by '12345678';
MariaDB [(none)]> create database nova_cell0;
MariaDB [(none)]> grant all privileges on nova_cell0.* to nova@'localhost' identified by '12345678';
MariaDB [(none)]> grant all privileges on nova_cell0.* to nova@'%' identified by '12345678';
MariaDB [(none)]> create database placement;
MariaDB [(none)]> grant all privileges on placement.* to placement@'localhost' identified by '12345678';
MariaDB [(none)]> grant all privileges on placement.* to placement@'%' identified by '12345678';
MariaDB [(none)]> flush privileges;
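The grants all follow one pattern (per database, for localhost and for remote hosts). A sketch that generates the nova-related statements, assuming the same 12345678 password as above; the placement database uses its own placement@ user, so it is written out separately:

```shell
# Emit the GRANT statements for the Nova-related databases.
pw='12345678'
for db in nova nova_api nova_cell0; do
    for host in localhost '%'; do
        printf "grant all privileges on %s.* to nova@'%s' identified by '%s';\n" \
            "$db" "$host" "$pw"
    done
done
```

The output can be pasted at the MariaDB prompt or piped into `mysql -p`.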

 

2) Create the Nova users, services, and endpoints

Create the nova user and add a role.

# openstack user create --domain default --project service --password 12345678 nova

+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| default_project_id  | 38f4f6c42e614625a309679c45db8a08 |
| domain_id           | default                          |
| enabled             | True                             |
| id                  | f50cd8f1bb594da68697b7bd5893fdf7 |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

 

# openstack role add --project service --user nova admin

 

Create the placement user and add a role.

# openstack user create --domain default --project service --password 12345678 placement

+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| default_project_id  | 38f4f6c42e614625a309679c45db8a08 |
| domain_id           | default                          |
| enabled             | True                             |
| id                  | c51301d9aabc47af86673f151cd5532c |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

 

# openstack role add --project service --user placement admin

 

Create the nova and placement services.

# openstack service create --name nova --description "OpenStack Compute service" compute

+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute service        |
| enabled     | True                             |
| id          | 27e1df36ca7d49938bfd9d60550462b5 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+

 

# openstack service create --name placement --description "Openstack Compute Placement service" placement

+-------------+-------------------------------------+
| Field       | Value                               |
+-------------+-------------------------------------+
| description | Openstack Compute Placement service |
| enabled     | True                                |
| id          | e5c63604451544fc9313069c6f8056ad    |
| name        | placement                           |
| type        | placement                           |
+-------------+-------------------------------------+

 

Create the nova and placement endpoints.

# openstack endpoint create --region AZ1 compute public http://controller:8774/v2.1/%\(tenant_id\)s

+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | f6c634f7903d40cdbb93d72acc4175d0          |
| interface    | public                                    |
| region       | AZ1                                       |
| region_id    | AZ1                                       |
| service_id   | 27e1df36ca7d49938bfd9d60550462b5          |
| service_name | nova                                      |
| service_type | compute                                   |
| url          | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
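The backslashes in the command only keep the shell from interpreting the parentheses; '%' itself needs no escaping. What Keystone receives and stores is the plain %(tenant_id)s template shown in the url field above, which can be confirmed by echoing the same argument:

```shell
# The shell strips the backslashes before the argument reaches the client.
echo http://controller:8774/v2.1/%\(tenant_id\)s
# prints http://controller:8774/v2.1/%(tenant_id)s
```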

 

# openstack endpoint create --region AZ1 compute internal http://controller:8774/v2.1/%\(tenant_id\)s

+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | 1196854aa45f437dacddcb7757f8229c          |
| interface    | internal                                  |
| region       | AZ1                                       |
| region_id    | AZ1                                       |
| service_id   | 27e1df36ca7d49938bfd9d60550462b5          |
| service_name | nova                                      |
| service_type | compute                                   |
| url          | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+

 

# openstack endpoint create --region AZ1 compute admin http://controller:8774/v2.1/%\(tenant_id\)s

+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | 4ad6b5fb561f44b18b485c90c31dd7d6          |
| interface    | admin                                     |
| region       | AZ1                                       |
| region_id    | AZ1                                       |
| service_id   | 27e1df36ca7d49938bfd9d60550462b5          |
| service_name | nova                                      |
| service_type | compute                                   |
| url          | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+

 

# openstack endpoint create --region AZ1 placement public http://controller:8778

+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 43655df1d7514d8d8e33d1de68e6ffcf |
| interface    | public                           |
| region       | AZ1                              |
| region_id    | AZ1                              |
| service_id   | e5c63604451544fc9313069c6f8056ad |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+

 

# openstack endpoint create --region AZ1 placement internal http://controller:8778

+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 20364072c19e42be973e175227057bb3 |
| interface    | internal                         |
| region       | AZ1                              |
| region_id    | AZ1                              |
| service_id   | e5c63604451544fc9313069c6f8056ad |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+

 

# openstack endpoint create --region AZ1 placement admin http://controller:8778

+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | f5b2f83c4dc74adbbcae3242485c5ba5 |
| interface    | admin                            |
| region       | AZ1                              |
| region_id    | AZ1                              |
| service_id   | e5c63604451544fc9313069c6f8056ad |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+

 

3) Install and configure the Nova packages

# apt-get -y install nova-api nova-conductor nova-scheduler nova-novncproxy placement-api python3-novaclient

# vi /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:12345678@controller
my_ip = 115.68.142.99
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:12345678@controller/nova_api

[database]
connection = mysql+pymysql://nova:12345678@controller/nova
[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 12345678

[vnc]
enabled = true
vncserver_listen = 115.68.142.99
vncserver_proxyclient_address = 115.68.142.99

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = AZ1
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 12345678

[wsgi]
api_paste_config = /etc/nova/api-paste.ini

[quota]
instances = -1
cores = -1
ram = -1
floating_ips = -1
fixed_ips = -1

 

* A value of -1 in the quota section means the maximum (unlimited) amount of that resource may be used.

 

Change the permissions of the configuration file.

# chmod 640 /etc/nova/nova.conf

# chgrp nova /etc/nova/nova.conf

 

# vi /etc/placement/placement.conf

[DEFAULT]
debug = false

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = placement
password = 12345678

[placement_database]
connection = mysql+pymysql://placement:12345678@controller/placement

 

Change the permissions of the configuration file.

# chmod 640 /etc/placement/placement.conf

# chgrp placement /etc/placement/placement.conf

 

4) Populate the databases

# su -s /bin/bash placement -c "placement-manage db sync"
# su -s /bin/bash nova -c "nova-manage api_db sync"
# su -s /bin/bash nova -c "nova-manage cell_v2 map_cell0"
# su -s /bin/bash nova -c "nova-manage db sync"
# su -s /bin/bash nova -c "nova-manage cell_v2 create_cell --name cell1"

 

5) Restart the services

# systemctl restart apache2

# systemctl restart nova-api
# systemctl restart nova-conductor
# systemctl restart nova-scheduler
# systemctl restart nova-novncproxy

# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
|  8 | nova-conductor | controller | internal | enabled | up    | 2023-07-06T01:09:31.000000 |
| 14 | nova-scheduler | controller | internal | enabled | up    | 2023-07-06T01:09:30.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+

 

If you are using a firewall, allow the Nova-related daemon ports (TCP 6080, TCP 8774, TCP 8775, TCP 8778).

 

6) Install the Nova package (Compute node)

So far, installation and configuration have been done on the Controller; for it to communicate with the Compute server and operate properly, the packages must also be installed and configured on the Compute server.

(On the Compute server)

# apt-get -y install nova-compute nova-compute-kvm

# vi /etc/nova/nova.conf

[DEFAULT]
lock_path = /var/lock/nova
state_path = /var/lib/nova
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:12345678@controller
my_ip = 115.68.142.101
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 12345678

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 115.68.142.101
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = AZ1
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 12345678

 

Verify that the virtualization modules are loaded. If nothing is listed, enable hardware virtualization (VT-x/AMD-V) in the BIOS.

# lsmod | grep kvm
kvm_intel             282624  0
kvm                   663552  1 kvm_intel

 

# systemctl restart nova-compute libvirtd

 

Check the compute node from the Controller server.

(On the Controller server)

# su -s /bin/bash nova -c "nova-manage cell_v2 discover_hosts"

# openstack compute service list

+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
|  8 | nova-conductor | controller | internal | enabled | up    | 2023-07-06T01:36:37.000000 |
| 14 | nova-scheduler | controller | internal | enabled | up    | 2023-07-06T01:36:39.000000 |
| 22 | nova-compute   | compute    | nova     | enabled | up    | 2023-07-06T01:36:37.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+
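A small monitoring sketch of the check above: flag any service that is not "up". The `sample` variable here is hypothetical stand-in text for the live output of `openstack compute service list -f value -c Binary -c Host -c State`.

```shell
# Sample output standing in for the live openstack CLI call.
sample='nova-conductor controller up
nova-scheduler controller up
nova-compute compute down'
# Print every service whose State column is not "up".
printf '%s\n' "$sample" | awk '$3 != "up" {print $1, "on", $2, "is", $3}'
```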

 

Whenever a new compute node is added, be sure to run nova-manage cell_v2 discover_hosts on the controller server.

Alternatively, you can set an appropriate discovery interval in /etc/nova/nova.conf:

[scheduler]
discover_hosts_in_cells_interval = 300

 

 

5. Installing Horizon

 

(On the Controller server)

1) Install the Horizon package

# apt-get -y install openstack-dashboard

# vi /etc/openstack-dashboard/local_settings.py

...

Line 99 (change the IP):
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '115.68.142.99:11211',
    },
}

Line 112, add (configure the session storage backend):
SESSION_ENGINE = "django.contrib.sessions.backends.signed_cookies"

Line 126 (change to the Controller IP and enable Identity API version 3):
OPENSTACK_HOST = "115.68.142.99"
OPENSTACK_KEYSTONE_URL = "http://controller:5000/v3"

Line 131 (set the time zone):
TIME_ZONE = "Asia/Seoul"

At the bottom, add (use "Default" as the default domain for users created through the dashboard):
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'

 

# systemctl restart apache2

 

2) Access the dashboard

In a web browser, append /horizon to the Controller server's IP to open the dashboard.

http://115.68.142.99/horizon

 

 

6. Installing Neutron

 

(On the Controller server)

1) Create the Neutron database and user

# mysql -p

MariaDB [(none)]> create database neutron;
MariaDB [(none)]> grant all privileges on neutron.* to neutron@'localhost' identified by '12345678';
MariaDB [(none)]> grant all privileges on neutron.* to neutron@'%' identified by '12345678';
MariaDB [(none)]> flush privileges;

 

2) Create the Neutron user, service, and endpoints

# openstack user create --domain default --project service --password 12345678 neutron

+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| default_project_id  | 38f4f6c42e614625a309679c45db8a08 |
| domain_id           | default                          |
| enabled             | True                             |
| id                  | d62dec54e0384ef0b2b61c09a1fe2161 |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

 

# openstack role add --project service --user neutron admin

# openstack service create --name neutron --description "Openstack Networking service" network

+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Openstack Networking service     |
| enabled     | True                             |
| id          | 6e80e82fa3294924a38cbb032ac8a04b |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+

 

# openstack endpoint create --region AZ1 network public http://controller:9696

+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 414f06489a1345f88aaf7af3754c4fc5 |
| interface    | public                           |
| region       | AZ1                              |
| region_id    | AZ1                              |
| service_id   | 6e80e82fa3294924a38cbb032ac8a04b |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+

 

# openstack endpoint create --region AZ1 network internal http://controller:9696

+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 9a44b4b468db496793b786a1a73ad8b6 |
| interface    | internal                         |
| region       | AZ1                              |
| region_id    | AZ1                              |
| service_id   | 6e80e82fa3294924a38cbb032ac8a04b |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+

 

# openstack endpoint create --region AZ1 network admin http://controller:9696

+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 86b8980507f84e71b41dcaf57c9bcb96 |
| interface    | admin                            |
| region       | AZ1                              |
| region_id    | AZ1                              |
| service_id   | 6e80e82fa3294924a38cbb032ac8a04b |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+

 

3) Install and configure the Neutron packages

# apt-get -y install neutron-server

# vi /etc/neutron/neutron.conf

[DEFAULT]
core_plugin = ml2

service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:12345678@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
network_auto_schedule = True
router_auto_schedule = True
allow_automatic_dhcp_failover = True
allow_automatic_l3agent_failover = True
agent_down_time = 60
allow_automatic_lbaas_agent_failover = true
#global_physnet_mtu = 1550
use_syslog = True
syslog_log_facility = LOG_LOCAL1
#dhcp_agents_per_network = 3

[oslo_messaging_rabbit]
#pool_max_size = 50
#pool_max_overflow = 50
#pool_timeout = 30

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[database]
connection = mysql+pymysql://neutron:12345678@controller/neutron
max_pool_size = 50
retry_interval = 10
max_overflow = 50

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 12345678

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = AZ1
project_name = service
username = nova
password = 12345678

 

Edit the ml2_conf.ini file.

The ML2 plug-in provides the Layer 2 virtual network infrastructure for instances; in this setup it is backed by Open vSwitch.

# vi /etc/neutron/plugins/ml2/ml2_conf.ini

[DEFAULT]
[ml2]
type_drivers = flat,vlan,vxlan,gre
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
#path_mtu = 1550

[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = True

 

Add the Neutron settings to nova.conf.

# vi /etc/nova/nova.conf

[DEFAULT]
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = AZ1
project_name = service
username = neutron
password = 12345678
service_metadata_proxy = True
metadata_proxy_shared_secret = 12345678

 

# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

 

4) Create the database tables and restart Neutron

# su -s /bin/bash neutron -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head"

# systemctl restart neutron-server

 

If you are running a firewall, allow the Neutron daemon port (TCP 9696).

 

5) Install the Neutron packages

So far, installation and configuration have been done on the Controller. For it to communicate with the Neutron (network) server and operate normally, the packages must also be installed and configured on the Neutron server.

 

(On the Neutron server)

Add the following IPv4 forwarding settings to the file below and apply them.

# vi /etc/sysctl.conf

...
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.ip_forward = 1

 

# sysctl -p
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.ip_forward = 1

 

Install the packages and edit the configuration files.

# apt-get -y install neutron-openvswitch-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent neutron-plugin-ml2

# vi /etc/neutron/neutron.conf

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:12345678@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
use_syslog = True
syslog_log_facility = LOG_LOCAL1

[oslo_messaging_rabbit]
pool_max_size = 50
pool_max_overflow = 50
pool_timeout = 30

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 12345678

 

Edit the openvswitch_agent.ini file.

This file configures the Open vSwitch agent.

# vi /etc/neutron/plugins/ml2/openvswitch_agent.ini

[DEFAULT]

[agent]
tunnel_types = vxlan
l2_population = True
ovsdb_monitor_respawn_interval = 30

[ovs]
bridge_mappings = provider:br0
local_ip = 115.68.142.100

[securitygroup]
firewall_driver = openvswitch
enable_security_group = false
enable_ipset = true

 

Edit the metadata_agent.ini file.

This file is used by the metadata agent, which provides configuration information, such as credentials, to instances.

# vi /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 12345678

 

Edit the dhcp_agent.ini file.

This file is used by the DHCP agent, which provides DHCP service to virtual networks.

# vi /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
enable_metadata_network = True
force_metadata = True

[ovs]
ovsdb_timeout = 600

 

Edit the l3_agent.ini file.

This file is used by the L3 agent, which provides routing and NAT for self-service virtual networks.

# vi /etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = openvswitch
external_network_bridge =
verbose = True
[ovs]

 

Restart the Neutron daemons configured so far to apply the settings.

# systemctl restart neutron-dhcp-agent
# systemctl restart neutron-l3-agent
# systemctl restart neutron-metadata-agent
# systemctl restart neutron-openvswitch-agent
# systemctl restart openvswitch-switch

 

6) Configure the OVS interface and rc.local

Use the ifupdown package where possible instead of the preinstalled netplan.

# apt-get -y install ifupdown

# vi /etc/network/interfaces

...

auto eno1
iface eno1 inet manual

auto br0
iface br0 inet static
address 115.68.142.100
netmask 255.255.255.224
gateway 115.68.142.97

 

Make the following run automatically when the server boots.

Create the file below with this content.

# vi /etc/rc.local

#!/bin/bash

ovs-vsctl del-br br0
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eno1

systemctl restart neutron-openvswitch-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent
systemctl restart networking

exit 0

 

Give the file execute permission.

# chmod 755 /etc/rc.local
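As a side note, the bridge setup in /etc/rc.local above can be made idempotent: ovs-vsctl's --may-exist flag turns add-br/add-port into no-ops when the bridge or port already exists, so the del-br step is unnecessary. A sketch, echoed here for review; remove the `echo` to apply.

```shell
# Idempotent variant of the rc.local bridge setup (echoed for review).
echo ovs-vsctl --may-exist add-br br0
echo ovs-vsctl --may-exist add-port br0 eno1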

 

Add the following to the service unit file so it is enabled at boot.

# vi /lib/systemd/system/rc-local.service

...

[Install]
WantedBy=multi-user.target

 

# systemctl enable --now rc-local

 

Disable netplan at boot, and rename all netplan configuration files in the /etc/netplan directory so their settings cannot be read.

# systemctl disable systemd-networkd

# mv /etc/netplan/50-cloud-init.yaml /etc/netplan/50-cloud-init.yaml.bak

# mv /etc/netplan/00-installer-config.yaml /etc/netplan/00-installer-config.yaml.bak


Enable ifupdown at boot.
# systemctl enable networking

 

Prevent resolv.conf from being reset at boot.

# apt-get -y install resolvconf

# echo "nameserver 8.8.8.8" >> /etc/resolvconf/resolv.conf.d/head

# resolvconf -u


Reboot to apply the network configuration so far and verify that everything works.
# reboot

 

7) Install and configure the Neutron packages

So far, installation and configuration have been done on the Controller and Neutron servers. For everything to communicate and operate normally, the packages must also be installed and configured on the Compute server.

 

(On the Compute server)

# apt-get -y install neutron-server openvswitch-switch neutron-openvswitch-agent neutron-l3-agent

# vi /etc/neutron/neutron.conf

[DEFAULT]
core_plugin = ml2
transport_url = rabbit://openstack:12345678@controller
auth_strategy = keystone
syslog_log_facility = LOG_LOCAL1
use_syslog = True
rpc_response_timeout=1200

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 12345678

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

 

# vi /etc/neutron/plugins/ml2/openvswitch_agent.ini

[DEFAULT]
[agent]
tunnel_types = vxlan
l2_population = True
ovsdb_monitor_respawn_interval = 30

[ovs]
local_ip = 115.68.142.101
bridge_mappings = provider:br0

[securitygroup]
firewall_driver = openvswitch
enable_security_group = false
enable_ipset = true

 

# vi /etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = openvswitch
external_network_bridge =
verbose = True

 

# vi /etc/nova/nova.conf

...

[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = AZ1
project_name = service
username = neutron
password = 12345678

 

Restart the daemons configured so far to apply the settings.

# systemctl restart neutron-server

# systemctl restart nova-compute

 

8) Configure the OVS interface and rc.local

Use the ifupdown package where possible instead of the preinstalled netplan.

# apt-get -y install ifupdown

# vi /etc/network/interfaces

...

auto eno1
iface eno1 inet manual

auto br0
iface br0 inet static
address 115.68.142.101
netmask 255.255.255.224
gateway 115.68.142.97

 

Make the following run automatically when the server boots.

Create the file below with this content.

# vi /etc/rc.local

#!/bin/bash

ovs-vsctl del-br br0
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eno1

systemctl restart openvswitch-switch neutron-openvswitch-agent
systemctl restart networking
sleep 10
systemctl restart neutron-openvswitch-agent nova-compute

exit 0

 

Give the file execute permission.

# chmod 755 /etc/rc.local

 

Add the following to the service unit file so it is enabled at boot.

# vi /lib/systemd/system/rc-local.service

...

[Install]
WantedBy=multi-user.target

 

# systemctl enable --now rc-local

 

Disable netplan at boot, and rename all netplan configuration files in the /etc/netplan directory so their settings cannot be read.

# systemctl disable systemd-networkd

# mv /etc/netplan/50-cloud-init.yaml /etc/netplan/50-cloud-init.yaml.bak

# mv /etc/netplan/00-installer-config.yaml /etc/netplan/00-installer-config.yaml.bak


Enable ifupdown at boot.
# systemctl enable networking

 

Prevent resolv.conf from being reset at boot.

# apt-get -y install resolvconf

# echo "nameserver 8.8.8.8" >> /etc/resolvconf/resolv.conf.d/head

# resolvconf -u


Reboot to apply the network configuration so far and verify that everything works.
# reboot

 

(On the Controller server)

Verify from the Controller server that Neutron is configured correctly.

# openstack network agent list

+--------------------------------------+--------------------+---------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host    | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+---------+-------------------+-------+-------+---------------------------+
| 015f70db-ed34-48f0-89aa-c46628f53c23 | Metadata agent     | compute | None              | :-)   | UP    | neutron-metadata-agent    |
| 08c999e6-d746-4267-8113-3db128625588 | L3 agent           | compute | nova              | :-)   | UP    | neutron-l3-agent          |
| 9b1fdb08-e982-4811-a956-541d1022e751 | Open vSwitch agent | neutron | None              | :-)   | UP    | neutron-openvswitch-agent |
| b390e1e8-0c69-4bc5-8bd5-37dda5eed35f | Metadata agent     | neutron | None              | :-)   | UP    | neutron-metadata-agent    |
| f51b1e0e-d00e-427c-9c67-56ad9f7292cd | DHCP agent         | neutron | nova              | :-)   | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+---------+-------------------+-------+-------+---------------------------+

 

Since management is easier when the Controller server and the VMs can communicate over IP, add the private range on the Controller server in advance.

(Later, 192.168.0.x addresses will be added to VMs: https://sysdocu.tistory.com/1836)

 

(On the Controller server)

Use the ifupdown package where possible instead of the preinstalled netplan.

# apt-get -y install ifupdown

# vi /etc/network/interfaces

...

auto eno1
iface eno1 inet static
    address 115.68.142.99
    netmask 255.255.255.224
    gateway 115.68.142.97

iface eno1 inet static
    address 192.168.0.10
    netmask 255.255.255.0

 

Disable netplan at boot, and rename all netplan configuration files in the /etc/netplan directory so their settings cannot be read.

# systemctl disable systemd-networkd

# mv /etc/netplan/50-cloud-init.yaml /etc/netplan/50-cloud-init.yaml.bak

# mv /etc/netplan/00-installer-config.yaml /etc/netplan/00-installer-config.yaml.bak


Enable ifupdown at boot.
# systemctl enable networking

 

Prevent resolv.conf from being reset at boot.

# apt-get -y install resolvconf

# echo "nameserver 8.8.8.8" >> /etc/resolvconf/resolv.conf.d/head

# resolvconf -u


Reboot to apply the network configuration so far and verify that everything works.
# reboot

 

 

7. Installing Cinder

 

(On the Controller server)

1) Create the Cinder database and account

# mysql -p

MariaDB [(none)]> create database cinder;

MariaDB [(none)]> grant all privileges on cinder.* to cinder@'localhost' identified by '12345678';
MariaDB [(none)]> grant all privileges on cinder.* to cinder@'%' identified by '12345678';
MariaDB [(none)]> flush privileges;

 

2) Register the Cinder user and services
# openstack user create --domain default --password 12345678 cinder

+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | c1833292afec47658596f7a3771337b8 |
| name                | cinder                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+


# openstack role add --project service --user cinder admin
# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | eb4c53c170a643a08068485439f32a83 |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+


# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 3263f76988c04a84814c62cce4b4ce6f |
| name        | cinderv3                         |
| type        | volumev3                         |
+-------------+----------------------------------+

 

# openstack endpoint create --region AZ1 volumev2 public http://controller:8776/v2/%\(project_id\)s

+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 0c3040f43f404434b04b96d9986d1e3e         |
| interface    | public                                   |
| region       | AZ1                                      |
| region_id    | AZ1                                      |
| service_id   | eb4c53c170a643a08068485439f32a83         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+

 

# openstack endpoint create --region AZ1 volumev2 internal http://controller:8776/v2/%\(project_id\)s

+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | f2fce6f293da42fa87f31c93ad85fd9c         |
| interface    | internal                                 |
| region       | AZ1                                      |
| region_id    | AZ1                                      |
| service_id   | eb4c53c170a643a08068485439f32a83         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+


# openstack endpoint create --region AZ1 volumev2 admin http://controller:8776/v2/%\(project_id\)s

+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | b9edd3f7f1664a0aab6bdfb357a4a2a2         |
| interface    | admin                                    |
| region       | AZ1                                      |
| region_id    | AZ1                                      |
| service_id   | eb4c53c170a643a08068485439f32a83         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+


# openstack endpoint create --region AZ1 volumev3 public http://controller:8776/v3/%\(project_id\)s

+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 595deda044bc40378148f61a9849ee7e         |
| interface    | public                                   |
| region       | AZ1                                      |
| region_id    | AZ1                                      |
| service_id   | 3263f76988c04a84814c62cce4b4ce6f         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+

 

# openstack endpoint create --region AZ1 volumev3 internal http://controller:8776/v3/%\(project_id\)s

+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | f38d56eb94d94b05a16707b04e0ee446         |
| interface    | internal                                 |
| region       | AZ1                                      |
| region_id    | AZ1                                      |
| service_id   | 3263f76988c04a84814c62cce4b4ce6f         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+

 

# openstack endpoint create --region AZ1 volumev3 admin http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 28dd03c124d04dc19490cd5499f36048         |
| interface    | admin                                    |
| region       | AZ1                                      |
| region_id    | AZ1                                      |
| service_id   | 3263f76988c04a84814c62cce4b4ce6f         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+


3) Install and configure the packages
# apt-get -y install cinder-api cinder-scheduler
# vi /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:12345678@controller
auth_strategy = keystone
my_ip = 115.68.142.99

[database]
connection = mysql+pymysql://cinder:12345678@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 12345678

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp


Create the database tables.
# su -s /bin/sh -c "cinder-manage db sync" cinder

 

Add a Cinder section to the Nova configuration file.

# vi /etc/nova/nova.conf

...

[cinder]
os_region_name = AZ1


Restart the daemons to apply the settings.
# systemctl restart nova-api
# systemctl restart cinder-scheduler
# systemctl restart apache2

4) Configure the Storage server

So far, installation and configuration have been done on the Controller. For it to communicate with the Storage server and operate normally, the packages must also be installed and configured on the Storage server.

(Storage 서버에서)
# apt-get -y install lvm2 thin-provisioning-tools

A second hard disk is needed to store volumes. After installing it, run the following while the disk still has no partitions.

# pvcreate /dev/sdb

  Physical volume "/dev/sdb" successfully created.

 

# vgcreate cinder-volumes /dev/sdb

  Volume group "cinder-volumes" successfully created

 

# pvdisplay

  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               cinder-volumes
  PV Size               <232.89 GiB / not usable <3.18 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              59618
  Free PE               59618
  Allocated PE          0
  PV UUID               BnFfxK-A80q-HCPm-upNP-FCit-4wQt-iJxhOK

 

# vgdisplay

  --- Volume group ---
  VG Name               cinder-volumes
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               232.88 GiB
  PE Size               4.00 MiB
  Total PE              59618
  Alloc PE / Size       0 / 0   
  Free  PE / Size       59618 / 232.88 GiB
  VG UUID               UIE0CZ-iI3n-bSTC-zMGj-VZFX-EwNs-hbgNyi

# vi /etc/lvm/lvm.conf

...

devices {
        ...
        filter = [ "a/sdb/", "r/.*/" ]
        ...
}

...

 

Above, sdb is the disk device name.
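If the storage node's operating system itself lives on an LVM device (for example on /dev/sda), that disk must also be accepted by the filter, or the node may fail to boot. A hypothetical example; adjust the device names to your system:

```
devices {
        filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
}
```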

 

# apt-get -y install cinder-volume

# vi /etc/cinder/cinder.conf

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
enabled_backends = lvm
transport_url = rabbit://openstack:12345678@controller
my_ip = 115.68.142.99
glance_api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[database]
connection = mysql+pymysql://cinder:12345678@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 12345678

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = tgtadm

 

# systemctl restart cinder-volume

# systemctl restart tgt

 

Check the volume service from the Controller server.

(On the Controller server)

# openstack volume service list
+------------------+-------------+------+---------+-------+----------------------------+
| Binary           | Host        | Zone | Status  | State | Updated At                 |
+------------------+-------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller  | nova | enabled | up    | 2023-07-06T22:53:51.000000 |
| cinder-volume    | storage@lvm | nova | enabled | up    | 2023-07-06T22:53:53.000000 |
+------------------+-------------+------+---------+-------+----------------------------+

 


Forcing a state change on a Cinder block storage volume

Linux/OpenStack | 2018. 6. 18. 15:17

VM detached state

cinder reset-state --state available [volume-id]


VM attached state

cinder reset-state --state in-use [volume-id]



Updating and adding provider network ranges

Linux/OpenStack | 2018. 6. 12. 07:00

[Update]

# neutron subnet-update provider --allocation-pool start=10.20.202.0,end=10.20.207.254



[Add]

# neutron subnet-create --name provider2 --allocation-pool start=10.20.208.3,end=10.20.208.254  --dns-nameserver 164.124.101.2 --gateway 10.20.208.1 provider 10.20.208.0/24




All compute nodes showing as down

Linux/OpenStack | 2018. 6. 9. 03:57

Running netstat -nltp on the controller server should show a daemon such as beam.smp; if it is missing, the command below fixes it.

The compute nodes' down state then changes to up, one by one.


# service rabbitmq-server restart



Renewing the SSL files for the noVNC console page

Linux/OpenStack | 2018. 6. 5. 09:06

1. Simply place the certificates in the /etc/nova/ssl directory on the controller server (no daemon restart is needed).


2. If the file names change, update the options below in the [DEFAULT] section of /etc/nova/nova.conf.

ssl_only = True

cert = /etc/nova/ssl/cert1.pem

key = /etc/nova/ssl/privkey1.pem




Reallocating a VM partition's size

Linux/OpenStack | 2018. 6. 4. 13:59

[root@sysdocu ~]# fdisk -l


Disk /dev/vda: 134.2 GB, 134217728000 bytes

139 heads, 8 sectors/track, 235741 cylinders

Units = cylinders of 1112 * 512 = 569344 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x000594e0


   Device Boot      Start         End      Blocks   Id  System

/dev/vda1               2       47149    26213376   83  Linux

[root@sysdocu ~]# df -Th

Filesystem     Type   Size  Used Avail Use% Mounted on

/dev/vda1      ext4    25G  2.7G   21G  12% /

tmpfs          tmpfs  3.9G   88K  3.9G   1% /dev/shm

[root@sysdocu ~]# fdisk -u /dev/vda


WARNING: DOS-compatible mode is deprecated. It's strongly recommended to

         switch off the mode (command 'c').


Command (m for help): p


Disk /dev/vda: 134.2 GB, 134217728000 bytes

139 heads, 8 sectors/track, 235741 cylinders, total 262144000 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x000594e0


   Device Boot      Start         End      Blocks   Id  System

/dev/vda1            2048    52428799    26213376   83  Linux


Command (m for help): d

Selected partition 1


Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4): 1

First sector (8-262143999, default 8): 2048    // ★ Important: the first sector must be 2048, matching the start of the existing partition.

Last sector, +sectors or +size{K,M,G} (2048-262143999, default 262143999): 

Using default value 262143999


Command (m for help): p


Disk /dev/vda: 134.2 GB, 134217728000 bytes

139 heads, 8 sectors/track, 235741 cylinders, total 262144000 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x000594e0


   Device Boot      Start         End      Blocks   Id  System

/dev/vda1            2048   262143999   131070976   83  Linux


Command (m for help): w

The partition table has been altered!


Calling ioctl() to re-read partition table.


WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table. The new table will be used at

the next reboot or after you run partprobe(8) or kpartx(8)

Syncing disks.

[root@sysdocu ~]# reboot
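One step the session above leaves implicit: after the reboot the partition is larger, but the ext4 filesystem inside it still reports the old 25G until it is grown with resize2fs. The same mechanics can be demonstrated on a file-backed image without a VM or root (the /tmp/demo.img path and sizes are only for illustration):

```shell
# On the VM itself, after the reboot, the actual command is simply:
#   resize2fs /dev/vda1
# Demonstration on a 16 MiB file-backed ext4 image grown to 32 MiB:
dd if=/dev/zero of=/tmp/demo.img bs=1M count=16 2>/dev/null
mkfs.ext4 -q -F /tmp/demo.img            # create a 16 MiB filesystem
dd if=/dev/zero of=/tmp/demo.img bs=1M count=0 seek=32 2>/dev/null  # enlarge the "disk"
resize2fs -f /tmp/demo.img               # grow the filesystem to fill the new space
```

Afterwards df -Th (on the real VM) reports the full partition size.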


Checklist for ceilometer (MongoDB) related errors

Linux/OpenStack | 2018. 4. 23. 09:26

With 76 as the master and 77 as the slave:


1. On server 16, check with netstat -nltp that the mongodb daemon is listening.


2. Check that the /var/www/html/monitor.html file exists.

It can be generated with the command below:

# crm_mon --daemonize --as-html /var/www/html/monitor.html

(This is also registered in /etc/rc.local.)


3. In crm status, or on a monitoring page such as http://15.31.245.1/monitor.html, check for daemons in an error state and clean them up.


4. On the controller server:

# service ceilometer-api restart


5. On the MongoDB server:

# service ceilometer-agent-central restart

# service ceilometer-agent-notification restart

# service ceilometer-collector restart


It takes a while for ceilometer sample-list to catch up and show the most recent data.
