OpenNebula Cloud
To overcome the hardware limitations of external services we run our own computing cloud with the OpenNebula platform and 12 Dell PowerEdge FC630 nodes. Each node has an Intel Xeon E5-2630 CPU with 20 cores (40 with hyperthreading) and 768 GB of memory. The cloud nodes are furthermore connected to a Dell S4048-ON Open Networking switch that is managed by an OpenDaylight controller. This cluster facilitates SDN and cloud experiments and also provides compute resources for demanding simulations and emulations.
Configure new Cluster
Manual configuration (not necessary - done by playbook)
The network interfaces, datastores, primary images, and VM templates are already created during the installation of the OpenNebula core with the Ansible playbook. These are the steps in case the setup needs to be done manually:
Create Virtual Network Interfaces
Network → Virtual Networks - Plus → Create
- Chair - Internal Network Interface (Struk DHCP)
- General - Name: “intern”, Description: “Internal Chair Interface”, Cluster: 0
- Conf - Bridge: “chair”, Network Mode: “Bridged”, Physical device: “↔”
- Addresses - AR → Ethernet, First MAC address: “↔”, Size: 256
- Context - Gateway: 131.159.25.254, Netmask: 255.255.254.0, Network address: 131.159.24.0
- MWN - Student Network Interface (Struk DHCP)
- General - Name: “mwn”, Description: “Student/MWN Interface”, Cluster: 0
- Conf - Bridge: “chair”, Network Mode: “Bridged”, Physical device: “↔”
- Addresses - AR → Ethernet, First MAC address: “↔”, Size: 256
- Context - Gateway: 172.24.25.254, Netmask: 255.255.254.0, Network address: 172.24.24.0
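The same networks can also be created from the ONE CLI instead of the Sunstone dialog. The following is only a sketch of an onevnet template for the intern network, using the values from the dialog above (the MAC address and physical device are left out here, and exact attribute names may differ between OpenNebula versions):
# intern.tmpl - create with: onevnet create intern.tmpl
NAME            = "intern"
DESCRIPTION     = "Internal Chair Interface"
VN_MAD          = "bridge"
BRIDGE          = "chair"
AR              = [ TYPE = "ETHER", SIZE = "256" ]
GATEWAY         = "131.159.25.254"
NETWORK_MASK    = "255.255.254.0"
NETWORK_ADDRESS = "131.159.24.0"
The mwn network would look the same with its own gateway and network address.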
Create New OS and Template
- (ISO) Storage → Images → New Image
- Name: <os>_<version>_iso (e.g. ubuntu_16.04.5_iso)
- Type: “Readonly CD-ROM”, Datastore: “ceph_img”
- Advanced Options: BUS: “SCSI”, Image mapping driver: “raw”
- Upload ISO Image, after it is ready add the label “ISO”
- (Disk) Storage → Images → New Image
- Name: <os>_<version>_raw (e.g. ubuntu_16.04.5_raw)
- Type: “Generic storage datablock”, Datastore: “ceph_img”
- Check “This image is persistent” - Empty disk image (5120MB → 5GB)
- Advanced Options: BUS: “Virtio”, Image mapping driver: “raw”
- This is the empty disk for the initial OS installation
- (Initial Installation Template) Templates → VMs → Plus → Create
- Try to copy an existing template and adjust “Storage” + “OS Booting” order
- Otherwise create a new template:
- Name: “<os> <version> RAW (e.g. 'Ubuntu 16.04.5 RAW')”, Hypervisor: “KVM”
- General - Memory: 4GB, CPU: 2, VCPU: 2
- Storage - Disk0 = <raw_disk>_raw, Disk1 = <iso_image>_iso ..
- OS & CPU - Boot → <iso_image>_iso first, CPU Architecture: “x86_64”
- Input/Output: VNC, Keymap: “de” (if needed)
- Leave the rest empty
- Instantiate a new VM from the raw Template
- Templates → Create → On Hold (the VM is not started immediately, which leaves time to create the struk entry)
- Attach an intern network interface (package download, etc.)
- Create a new struk entry for a chair/intern IP address
- Instances → VMs → Select VM → Deploy → Select Host
- Use VNC and go through the installer. Afterwards you should have an installed OS on the second disk
- Shut down the VM and remove the second disk (with the installed OS) if possible; otherwise you can delete the VM, since the template needs to be updated anyway. After deletion/removal you have to make the disk non-persistent (Images → Select disk → Persistent: No).
- Change Template → Remove the iso disk, so only the installed disk remains
- Create a new VM from the saved template/VM
- Boot the VM from the installed OS disk (<raw-disk>)
- Make the initial OS configuration: check the interfaces and start DHCP on the network interface
ip a
dhclient -v -i ens3
- Add/Change i11 user:
- mkdir .ssh, vim .ssh/authorized_keys → keepassx ssh pub key (ott-bottom level)
- chmod -R og-rwx .ssh
- sudo passwd i11 → keepassx password (ott- bottom level)
- Adjust the sudoers file; the i11 user must be a member of adm, and adm must be allowed passwordless sudo
%adm ALL=NOPASSWD: ALL
%il11admin ALL=NOPASSWD: ALL
- Install context package
dpkg -i one-context_*deb || apt-get install -fy
- Install latest kernel version (look up command for ubuntu version)
sudo apt-get install --install-recommends linux-generic-hwe-18.04
- Change one-contextualization for automatic interface dhcp
- Adjusted network contextualization script is in the ansible-scripts onevm role
scp ..ansible-scripts/roles/onevm/files/loc-10-network vm:/etc/one-context.d/
- After everything is done, detach the network interface
- Install the packages needed for Ansible
sudo apt install python python3 python-apt python3-apt
- Shutdown VM → Storage → Ubuntu Disk → Save as → <os>_<version> (e.g. ubuntu_18.04.3)
- Add label “Template” after image is ready, Set Image Owner as “oneadmin”, Add “Use” Permission for group
- Adjust raw template to become final VM template:
- Storage - Select saved contextualized disk
- OS & CPU - Boot order: select disk0
- Context - Unselect “SSH contextualization”, Select “Network contextualization”
- Context - add the callback to AWX in the “Start Script”; look up the correct URL and config key in AWX!:
#!/bin/bash
curl -H "remote-host: $NAME" -H "remote-ip: $(hostname -I)" --noproxy "*" -k -XPOST --data "host_config_key=<AWX-PLAYBOOK-CONFIG-KEY>" <AWX-PLAYBOOK-URL>
- Scheduling - Placement: Select Cluster = (0) default
- Delete the template “<os> <version> RAW”, change user and group of the final template to oneadmin:oneadmin
- The rest will be taken care of by the AWX (Ansible) callback (proxy, NTP, LDAP, fail2ban, services, etc.)
Configure OS
After the one_core playbook run, all the initial templates and images already exist. These steps are necessary to get a final OS image to actually deploy VMs from. Follow these steps exactly:
- Instantiate a new VM from the “ubuntu-install” template, leave the default settings
- Wait until the VM boots and connect to it via VNC
- Go through the installer dialog and leave the defaults.
- Language + Keyboard Layout: English
- Install without network, Proxy address empty, apt mirror default (ubuntu)
- Install on the whole disk, automatic partitioning (small GRUB partition, rest /)
- Name: i11, User: i11, Server-Name: one, Password: * (something)
- Install OpenSSH
- Wait until the installation is done and select reboot
- The VM will reboot into the ISO again; go to the VM controls and click on “Power Off (hard)”
- Click on the VM, open the Storage tab, and remove the Ubuntu ISO image with a click on the X
- Start the VM again; after it is running, attach a network interface (chair) and manually create a new struk entry for it
- Use VNC to connect to the VM and finish the OS setup
ip a
sudo dhclient -v -i ens5
sudo vim /etc/sudoers
---
%adm ALL=NOPASSWD: ALL
%il11admin ALL=NOPASSWD: ALL
---
# download the context package matching the opennebula version
wget https://github.com/OpenNebula/addon-context-linux/releases/download/v5.10.0/one-context_5.10.0-1.deb
sudo su
dpkg -i one-context_*deb || apt-get install -fy
# adjust one contextualization for automatic dhcp
# the file is in ansible-scripts:roles/onevm/files/loc-10-network
# local: scp roles/onevm/files/loc-10-network vm:~
mv loc-10-network /etc/one-context.d/
exit
mkdir .ssh
vim .ssh/authorized_keys
# paste the pub key of admin_i11 into the authorized_keys file
chmod -R og-rwx .ssh
# set the i11 admin password -> keepassx
sudo passwd i11
sudo vim /etc/apt/sources.list
# replace the whole file with the following content
---
deb http://ubuntu.mirror.lrz.de/ubuntu/ bionic main restricted universe multiverse
deb http://ubuntu.mirror.lrz.de/ubuntu/ bionic-updates main restricted universe multiverse
deb http://ubuntu.mirror.lrz.de/ubuntu/ bionic-security main restricted universe multiverse
---
sudo apt update
sudo apt upgrade
sudo apt install --install-recommends linux-generic-hwe-18.04
sudo reboot
# after the reboot dhcp is not working yet - log in via VNC and run dhclient manually
sudo apt update
sudo apt upgrade
sudo apt install python python3 python-apt python3-apt
rm .bash_history
sudo shutdown -h now
- Click on the VM → Storage tab → Small disk symbol (SaveAs) “ubuntu_18.04.3”
- Delete the template “ubuntu-install”
- Update the remaining two templates and select the saved disk “ubuntu_18.04.3” under the Storage tab
- Done - New VMs can be instantiated either with the “ubuntu..-ssh” or the “ubuntu..-shared” template
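A VM can also be instantiated from a final template on the CLI instead of Sunstone; a sketch with a placeholder template name (see the “ONE CLI” section below for authentication):
onetemplate instantiate "<template-name>" --name test-vm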
Add new host to cluster
- Deploy nodes with MAAS (dell1, dell2, intern → bridge + intern.83 VLAN)
- cssh into new nodes, install packages and set up chair bridge
sudo apt install python python-apt htop
sudo vim /etc/network/interfaces
---------------------------------
# Add under the last auto entry
auto chair
iface chair inet dhcp
    bridge_ports intern.83
    bridge_fd 15
---------------------------------
sudo ifup chair
- Log in to a ceph admin node and install ceph on nodes
ceph-deploy install emu04 emu05
ceph-deploy config push emu04 emu05
- Execute ansible playbook opennebula node
vim ansible/playbooks/opennebula/hosts
ansible-playbook -i hosts one_hosts.yml
- Add cluster nodes on sunstone web UI
Fix Routing on Node Servers
- Add two new routing tables
echo 1 intern >> /etc/iproute2/rt_tables && echo 2 mwn >> /etc/iproute2/rt_tables
- Configure new routing tables
ip route add 10.200.64.0/18 dev intern table intern
ip route add 172.24.24.0/23 dev intern table mwn
- Add default gateways
ip route add default via 10.200.127.254 dev intern table intern
ip route add default via 172.24.25.254 dev intern table mwn
- Add routing rules for the networks that determine when the new routing tables are used (a verification sketch follows below)
ip rule add to 10.200.64.0/18 table intern
ip rule add from 10.200.64.0/18 table intern
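The policy routing can be checked afterwards; a quick sanity check using the table names above:
ip rule show                  # the to/from rules for 10.200.64.0/18 should point to table intern
ip route show table intern    # should contain the subnet route and the default gateway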
Fix Routing on VMs
- Add a second routing table for the interface; this is needed to add a second gateway
echo 1 isp2 >> /etc/iproute2/rt_tables
- Configure the new routing table
ip route add 131.159.24.0/23 dev ens6 table isp2
ip route add default via 131.159.25.254 dev ens6 table isp2
- Add routing rules to choose when to search in second routing table
ip rule add from 131.159.24.0/23 table isp2
ip rule add to 131.159.24.0/23 table isp2
- Add rules to network/interfaces to make them persistent after reboot
iface ens6 inet dhcp
    post-up ip route add default via 131.159.25.254 dev ens6 table isp2
    post-up ip route add 131.159.24.0/23 dev ens6 table isp2
    post-up ip rule add from 131.159.24.0/23 table isp2
    post-up ip rule add to 131.159.24.0/23 table isp2
Fix Routing on ONE Hosts to Access the MWN
- Add a second routing table for the interface; this is needed to add a second gateway
echo 5 mwn >> /etc/iproute2/rt_tables
- Configure /etc/network/interfaces to use vlan and deploy new routes
auto intern.240
iface intern.240 inet manual
    vlan_id 240
    vlan-raw-device intern

auto mwn
iface mwn inet manual
    bridge_ports intern.240
    bridge_fd 15
    post-up ip route add 172.24.24.0/23 dev mwn table mwn
    post-up ip route add default via 172.24.25.254 dev mwn table mwn
    post-up ip rule add to 172.24.24.0/23 table mwn
    post-up ip rule add from 172.24.24.0/23 table mwn
- Start the mwn interface; this automatically brings up the underlying VLAN interface intern.240 and sets up all necessary routes
sudo ifup mwn   # waiting for mwn to get ready
Add new ONE Node
- Deploy node with Maas
- Log in and install python packages for ansible
sudo apt install python python-apt
- Make sure the node is reachable and accessible (passwordless SSH) from the administrator machine, then execute the Ansible opennebula → one_host playbook
cd ~/Documents/ansible/playbooks/opennebula
ansible-playbook -i hosts one_hosts.yml
- Setup Ceph User - install ceph packages from maas server
ssh maas
cd sto_cluster
ceph-deploy install <server>
ceph-deploy config push <server>
cd ../ceph_user_libvirt/
scp * <server>:~
- Setup Ceph User on the new server
# from maas log in to the server
ssh <server>
sudo mv client.libvirt.key secret.xml /var/lib/one/
sudo chown oneadmin:oneadmin /var/lib/one/client.libvirt.key /var/lib/one/secret.xml
sudo su oneadmin && cd
virsh -c qemu:///system secret-define secret.xml
cat secret.xml
UUID=   # UUID from secret.xml
virsh -c qemu:///system secret-set-value --secret $UUID --base64 $(cat client.libvirt.key)
rm client.libvirt.key
# test access - should output no error
rbd -p one ls --id libvirt
- Set up network, add chair network
sudo vim /etc/network/interfaces
---------------------------------
# Add under the last auto entry
auto chair
iface chair inet dhcp
    bridge_ports intern.83
    bridge_fd 15
---------------------------------
sudo ifup chair
- Adjust routing table for multiple networks on same interface
sudo su
echo 1 mwn >> /etc/iproute2/rt_tables && echo 2 intern >> /etc/iproute2/rt_tables
# temporarily activate the network
# intern
ip route add 10.200.64.0/18 dev intern table intern
ip route add default via 10.200.127.254 dev intern table intern
ip rule add to 10.200.64.0/18 table intern
ip rule add from 10.200.64.0/18 table intern
# mwn
ip route add 172.24.24.0/23 dev intern table mwn
ip route add default via 172.24.25.254 dev intern table mwn
ip rule add to 172.24.24.0/23 table mwn
ip rule add from 172.24.24.0/23 table mwn
sudo vim /etc/network/interfaces
---------------------------------
# add under "iface intern inet dhcp"
post-up ip route add 10.200.64.0/18 dev intern table intern
post-up ip route add default via 10.200.127.254 dev intern table intern
post-up ip rule add to 10.200.64.0/18 table intern
post-up ip rule add from 10.200.64.0/18 table intern
post-up ip route add 172.24.24.0/23 dev intern table mwn
post-up ip route add default via 172.24.25.254 dev intern table mwn
post-up ip rule add to 172.24.24.0/23 table mwn
post-up ip rule add from 172.24.24.0/23 table mwn
---------------------------------
sudo shutdown -r now
- Add node on the Sunstone Webui: System → Hosts → Add
- Enjoy
Installation
- Deploy Emulations Server Nodes with MAAS
- Configure the interfaces with bridges in MAAS for later use by VMs. All bridges need to have the same name!
- Install post-maas packages
sudo apt install htop python python-apt
- Use the Ansible playbook opennebula to install OpenNebula
- Core/Webui: emu10
- Nodes: emu10, emu09, emu08
Sunstone / Webui
- Create three new clusters: emu + dev + sdn
- Add nodes to webui after ansible installation: emu (emu09,emu08), dev (emu10)
- Add new networks: Network→Virtual Networks→[Add]
- dell2 - bridged (dell2) - 10.2.10.1/16 (size 254) - select cluster default, sdn
- dell1 - bridged (dell1) - 10.1.10.1/16 (size 254) - select cluster default, sdn
- chair - bridged (chair) - ethernet (size 254) - select cluster default
- intern - bridged (intern) - ethernet (size 254) - select cluster default, dev
CEPH Datastores backend
- In order to create and run new virtual machines, OpenNebula needs two datastores
- Image Datastore: stores templates and VM images
- System Datastore: holds the disks of running virtual machines, copied or cloned from the Image Datastore
- To use Ceph as a backend, the nodes must be Ceph clients of a running Ceph cluster
- Images and system files are stored in the same Ceph pool (context and auxiliary disk files are stored locally)
Ceph Cluster Setup
- Create pool
ceph osd pool create one 256 256 replicated
ceph osd pool set one size 2
- Add new line in ceph.conf admin node mon01:~/sto_cluster/ceph.conf:
rbd_default_format = 2
- Install Ceph on one nodes
ceph-deploy install emu09 emu08 emu10
- Push new config to all nodes
ceph-deploy config push mon01-cm sto01 sto02
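To confirm that the pool and replication settings are in place, something like the following can be run on the admin node (assuming the pool name one from above):
ceph osd pool ls detail | grep one
ceph osd pool get one size    # should report size: 2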
New Ceph User
- create new user for libvirt
ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=one'
ceph auth get-key client.libvirt | tee client.libvirt.key
ceph auth get client.libvirt -o ceph.client.libvirt.keyring
- copy files to ceph clients/one nodes
scp ceph.client.libvirt.keyring emu09:~
ssh emu09 sudo mv ceph.client.libvirt.keyring /etc/ceph
scp client.libvirt.key emu09:~
# same with nodes emu10 and emu08
- create libvirt secret
UUID=`uuidgen`; echo $UUID
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$UUID</uuid>
  <usage type='ceph'>
    <name>client.libvirt secret</name>
  </usage>
</secret>
EOF
scp secret.xml emu09:~
- Log in on nodes and create libvirt secret (emu10, emu09, emu08)
sudo mv client.libvirt.key secret.xml /var/lib/one/
sudo chown oneadmin:oneadmin /var/lib/one/client.libvirt.key /var/lib/one/secret.xml
sudo su oneadmin
cd
virsh -c qemu:///system secret-define secret.xml
UUID=   # uuid from secret.xml
virsh -c qemu:///system secret-set-value --secret $UUID --base64 $(cat client.libvirt.key)
rm client.libvirt.key
- Confirm that setup worked
rbd -p one ls --id libvirt   # output should be empty - no error
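Optionally, the libvirt secret itself can be listed on each node; a quick check, assuming the qemu:///system URI used above:
virsh -c qemu:///system secret-list    # the UUID from secret.xml should appear with usage type ceph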
Create Datastores
- Create the datastore definition files as the oneadmin user on the OpenNebula gateway
ssh emu10
sudo su oneadmin
cd
vim ceph_image.txt
# content
-------------------------------
NAME = "ceph-ds"
DS_MAD = ceph
TM_MAD = ceph
DISK_TYPE = RBD
POOL_NAME = one
BRIDGE_LIST = emu10
CEPH_HOST = "mon01-cm:6789"
CEPH_USER = "libvirt"
CEPH_SECRET = "XXXXX"
RBD_FORMAT = 2
-------------------------------
vim ceph_system.txt
-------------------------------
NAME = "ceph_system"
TM_MAD = ceph
TYPE = SYSTEM_DS
DISK_TYPE = RBD
POOL_NAME = one
BRIDGE_LIST = emu10
CEPH_HOST = "mon01-cm:6789"
CEPH_USER = "libvirt"
CEPH_SECRET = "XXXX"
RBD_FORMAT = 2
-------------------------------
onedatastore create ceph_image.txt
onedatastore create ceph_system.txt
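Afterwards the new datastores should show up; a quick check with the names defined above:
onedatastore list    # ceph-ds and ceph_system should be listed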
New Template
We will create two templates: one default template with the system files on the local hard disk, and one HA template with system and image files in the Ceph cluster and live-migration capabilities.
HA Template
- Create a new datablock for the OS and add the server ISO image to the datastore
Storage -> Images -> Add
# Server ISO
Name: ubuntu_server_16.04.02
Type: Readonly-CDROM
Datastore: 102:ceph_img
# Image does not need to be made persistent, no changes are made on the disk
Image location: Upload -> Server.iso
# OS Datablock
Name: default_vm_disk
Type: Generic datastore block
Datastore: 102:ceph_img
This image persistent: yes
Image location: Empty disk image -> 5000 MB
- New template for default VM
Templates -> VMs -> Add
Name: default_vm
Hypervisor: KVM
Memory: 512
CPU: 1
Logo: Ubuntu
- Storage
Disk0: default_vm_disk
Disk1: ubuntu_server_16.04.02
- Network
Nic0: dell1
Nic1: dell2
Nic2: Chair
- OS Booting
CPU Architecture: x86_64
Boot order: Check disk and ubuntu_server
- Input/Output
VNC-Keymap: de
Inputs: Type - Tablet, Bus - USB
- Context
- Check Add SSH contextualization + Add Network contextualization
- Add the following start up script:
#!/bin/bash
usermod -aG adm $UNAME && chsh -s /bin/bash $UNAME
- Add two custom vars:
SET_HOSTNAME = $NAME
USERNAME = $UNAME
- Scheduling
Select clusters -> Choose emu cluster --> Create
- Instantiate the vm template
- Install operating system as usual
- Configure network manually in running VM
- Use contextualization to prepare the VM for OpenNebula
# download the contextualization package to the VM
sudo apt install -y cloud-utils
wget https://github.com/OpenNebula/addon-context-linux/releases/download/v5.0.3/one-context_5.0.3.deb
sudo dpkg -i one-context*deb
- Power Off virtual machine
- Save the machine as a template: Instances → Select VM → Save icon → Choose a name (it cannot be changed later); do not check “make images persistent” unless you only intend to instantiate the template once.
User Management
- Create a new group: System → Groups → New
Name: cm
Advanced Layout - User View: check
Default Users View: User
Permission: VMs and check allow users to view group resources
--> Create
- Create new VDC (Virtual Data Center) to assign resources to group: System → VDCs → Add icon
Name: emu
Groups: cm
Resources: Hosts (emu03-emu10), Vnets (intern, chair), Datastores (ceph_system, ceph_img)
--> Create
- Change the User View in /etc/one/sunstone-views/user.yaml
sudo vim /etc/one/sunstone-views/user.yaml
File content:
small_logo: images/opennebula-5.0.png provision_logo: images/opennebula-5.0.png enabled_tabs: - dashboard-tab - instances-top-tab - vms-tab #- oneflow-services-tab #- vrouters-tab - templates-top-tab - templates-tab #- oneflow-templates-tab #- vrouter-templates-tab - storage-top-tab - datastores-tab - images-tab - files-tab #- marketplaces-tab #- marketplaceapps-tab - network-top-tab - vnets-tab - vnets-topology-tab #- secgroups-tab #- infrastructure-top-tab #- clusters-tab #- hosts-tab #- zones-tab #- system-top-tab #- users-tab #- groups-tab #- vdcs-tab #- acls-tab - settings-tab #- support-tab #- upgrade-top-tab autorefresh: true features: # True to show showback monthly reports, and VM cost showback: true # Allows to change the security groups for each network interface # on the VM creation dialog secgroups: false # True to hide the CPU setting in the VM creation dialog. The CPU setting # will be set to the same value as VCPU, that will still be visible for the # end users instantiate_hide_cpu: false tabs: dashboard-tab: # The following widgets can be used inside any of the '_per_row' settings # bellow. As the name suggest, the widgets will be scaled to fit one, # two, or three per row. The footer uses the widgets at full size, i.e. # one per row. # # - storage # - users # - network # - hosts # - vms # - groupquotas # - quotas panel_tabs: actions: Dashboard.refresh: false Sunstone.toggle_top: false widgets_one_per_row: - vms widgets_three_per_row: widgets_two_per_row: - network - storage widgets_one_footer: system-top-tab: panel_tabs: actions: users-tab: panel_tabs: user_info_tab: true user_quotas_tab: true user_groups_tab: true user_accounting_tab: true user_showback_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Name - 3 # Group #- 4 # Auth driver #- 5 # Password - 6 # VMs - 7 # Memory - 8 # CPU #- 9 # Group ID #- 10 # Hidden User Data #- 11 # Labels #- 12 # Search data actions: User.refresh: true User.create_dialog: false User.update_password: true User.login_token: true User.quotas_dialog: false User.groups_dialog: false User.chgrp: false User.change_authentication: false User.delete: false User.edit_labels: true User.menu_labels: true groups-tab: panel_tabs: group_info_tab: true group_users_tab: true group_quotas_tab: true group_accounting_tab: true group_showback_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Name - 3 # Users - 4 # VMs - 5 # Memory - 6 # CPU #- 7 # Labels actions: Group.refresh: true Group.create_dialog: false Group.update_dialog: false Group.quotas_dialog: false Group.delete: false Group.edit_admins: false Group.edit_labels: true vdcs-tab: panel_tabs: vdc_info_tab: true vdc_groups_tab: true vdc_resources_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Name - 3 # Groups - 4 # Clusters - 5 # Hosts - 6 # VNets - 7 # Datastores #- 8 # Labels actions: Vdc.refresh: true Vdc.create_dialog: true Vdc.update_dialog: true Vdc.rename: true Vdc.delete: true Vdc.edit_labels: true Vdc.menu_labels: true acls-tab: panel_tabs: table_columns: - 0 # Checkbox - 1 # ID - 2 # Applies to - 3 # Affected resources - 4 # Resource ID / Owned by - 5 # Allowed operations - 6 # Zone #- 7 # ACL String actions: Acl.refresh: true Acl.create_dialog: true Acl.delete: true templates-top-tab: panel_tabs: actions: templates-tab: panel_tabs: template_info_tab: true template_template_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name - 5 # Registration time #- 6 # Labels #- 7 # Search data actions: Template.refresh: true Template.create_dialog: 
false Template.import_dialog: false Template.update_dialog: true Template.instantiate_vms: true Template.rename: false Template.chown: false Template.chgrp: false Template.chmod: true Template.clone_dialog: true Template.delete_dialog: false Template.share: false Template.unshare: false Template.edit_labels: true Template.menu_labels: true template_creation_tabs: general: true storage: true network: true os_booting: true features: true input_output: true context: true scheduling: false hybrid: true other: true oneflow-templates-tab: panel_tabs: service_template_info_tab: true service_template_roles_tab: true service_template_template_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name #- 5 # Labels #- 6 # Search data actions: ServiceTemplate.refresh: true ServiceTemplate.create_dialog: true ServiceTemplate.update_dialog: true ServiceTemplate.instantiate: true ServiceTemplate.chown: false ServiceTemplate.chgrp: false ServiceTemplate.chmod: true ServiceTemplate.rename: true ServiceTemplate.clone_dialog: true ServiceTemplate.delete: true ServiceTemplate.edit_labels: true ServiceTemplate.menu_labels: true vrouter-templates-tab: panel_tabs: vrouter_template_info_tab: true vrouter_template_template_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name - 5 # Registration time #- 6 # Labels #- 7 # Search data actions: VirtualRouterTemplate.refresh: true VirtualRouterTemplate.create_dialog: true VirtualRouterTemplate.update_dialog: true VirtualRouterTemplate.instantiate_dialog: true VirtualRouterTemplate.rename: true VirtualRouterTemplate.chown: false VirtualRouterTemplate.chgrp: false VirtualRouterTemplate.chmod: true VirtualRouterTemplate.clone_dialog: true VirtualRouterTemplate.delete_dialog: true VirtualRouterTemplate.share: true VirtualRouterTemplate.unshare: true VirtualRouterTemplate.edit_labels: true VirtualRouterTemplate.menu_labels: true template_creation_tabs: general: true storage: true network: true os_booting: true features: true input_output: true context: true scheduling: true hybrid: true other: true instances-top-tab: panel_tabs: actions: vms-tab: panel_tabs: vm_info_tab: true vm_capacity_tab: true vm_storage_tab: true vm_network_tab: true vm_snapshot_tab: true vm_placement_tab: false vm_actions_tab: true vm_conf_tab: true vm_template_tab: false vm_log_tab: false table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name - 5 # Status #- 6 # Used CPU #- 7 # Used Memory - 8 # Host - 9 # IPs #- 10 # Start Time - 11 # VNC #- 12 # Hidden Template #- 13 # Labels #- 14 # Search data actions: VM.refresh: true VM.create_dialog: true VM.rename: true VM.chown: false VM.chgrp: false VM.chmod: true VM.deploy: false VM.migrate: false VM.migrate_live: false VM.hold: true VM.release: true VM.suspend: true VM.resume: true VM.stop: true VM.recover: false VM.reboot: true VM.reboot_hard: true VM.poweroff: true VM.poweroff_hard: true VM.undeploy: true VM.undeploy_hard: true VM.terminate: true VM.terminate_hard: true VM.resize: true VM.attachdisk: true VM.detachdisk: true VM.disk_saveas: true VM.attachnic: true VM.detachnic: true VM.snapshot_create: true VM.snapshot_revert: true VM.snapshot_delete: true VM.disk_snapshot_create: true VM.disk_snapshot_revert: true VM.disk_snapshot_delete: true VM.resched: false VM.unresched: false VM.save_as_template: true VM.updateconf: true VM.edit_labels: true VM.menu_labels: true oneflow-services-tab: panel_tabs: service_info_tab: true service_roles_tab: true service_log_tab: true 
panel_tabs_actions: service_roles_tab: Role.scale: true Role.hold: true Role.release: true Role.suspend: true Role.resume: true Role.stop: true Role.reboot: true Role.reboot_hard: true Role.poweroff: true Role.poweroff_hard: true Role.undeploy: true Role.undeploy_hard: true Role.terminate: true Role.terminate_hard: true RoleVM.hold: true RoleVM.release: true RoleVM.suspend: true RoleVM.resume: true RoleVM.stop: true RoleVM.reboot: true RoleVM.reboot_hard: true RoleVM.poweroff: true RoleVM.poweroff_hard: true RoleVM.undeploy: true RoleVM.undeploy_hard: true RoleVM.terminate: true RoleVM.terminate_hard: true RoleVM.resched: false RoleVM.unresched: false RoleVM.recover: false table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name - 5 # State #- 6 # Labels #- 7 # Search data actions: Service.refresh: true Service.create_dialog: true Service.chown: false Service.chgrp: false Service.chmod: true Service.rename: true Service.shutdown: true Service.recover: true Service.delete: true Service.edit_labels: true Service.menu_labels: true vrouters-tab: panel_tabs: virtual_router_info_tab: true virtual_router_vms_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name #- 5 # Labels #- 6 # Search data actions: VirtualRouter.refresh: true VirtualRouter.create_dialog: true VirtualRouter.rename: true VirtualRouter.chown: true VirtualRouter.chgrp: true VirtualRouter.chmod: true VirtualRouter.delete: true VirtualRouter.attachnic: true VirtualRouter.detachnic: true VirtualRouter.edit_labels: true VirtualRouter.menu_labels: true infrastructure-top-tab: panel_tabs: actions: clusters-tab: panel_tabs: cluster_info_tab: true cluster_host_tab: true cluster_vnet_tab: true cluster_datastore_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Name - 3 # Hosts - 4 # VNets - 5 # Datastores #- 6 # Labels actions: Cluster.refresh: true Cluster.create_dialog: true Cluster.update_dialog: true Cluster.delete: true Cluster.rename: true Cluster.edit_labels: true Cluster.menu_labels: true hosts-tab: panel_tabs: host_info_tab: true host_monitoring_tab: true host_vms_tab: true host_wilds_tab: true host_zombies_tab: true host_esx_tab: true host_pci_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Name - 3 # Cluster - 4 # RVMs #- 5 # Real CPU - 6 # Allocated CPU #- 7 # Real MEM - 8 # Allocated MEM - 9 # Status #- 10 # IM MAD #- 11 # VM MAD #- 12 # Last monitored on #- 13 # Labels #- 14 # Search data actions: Host.refresh: true Host.create_dialog: true Host.addtocluster: true Host.rename: true Host.enable: true Host.disable: true Host.offline: true Host.delete: true Host.edit_labels: true Host.menu_labels: true zones-tab: panel_tabs: zone_info_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Name - 3 # Endpoint #- 4 # Labels actions: Zone.refresh: true Zone.create_dialog: true Zone.rename: true Zone.delete: true Zone.edit_labels: true Zone.menu_labels: true storage-top-tab: panel_tabs: actions: datastores-tab: panel_tabs: datastore_info_tab: false datastore_image_tab: true datastore_clusters_tab: false table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name - 5 # Capacity - 6 # Cluster #- 7 # Basepath #- 8 # TM #- 9 # DS #- 10 # Type #- 11 # Status #- 12 # Labels #- 13 # Search data actions: Datastore.refresh: true Datastore.create_dialog: false Datastore.import_dialog: false Datastore.addtocluster: false Datastore.rename: true Datastore.chown: false Datastore.chgrp: false Datastore.chmod: true Datastore.delete: false Datastore.enable: false 
Datastore.disable: false Datastore.edit_labels: true Datastore.menu_labels: true images-tab: panel_tabs: image_info_tab: true image_vms_tab: true image_snapshots_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name - 5 # Datastore #- 6 # Size - 7 # Type #- 8 # Registration time #- 9 # Persistent - 10 # Status - 11 # #VMs #- 12 # Target #- 13 # Labels #- 14 # Search data actions: Image.refresh: true Image.create_dialog: true Image.import_dialog: false Image.upload_marketplace_dialog: true Image.rename: true Image.chown: false Image.chgrp: false Image.chmod: true Image.enable: true Image.disable: true Image.persistent: true Image.nonpersistent: true Image.clone_dialog: true Image.delete: true Image.snapshot_flatten: true Image.snapshot_revert: true Image.snapshot_delete: true Image.edit_labels: true Image.menu_labels: true files-tab: panel_tabs: file_info_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name - 5 # Datastore #- 6 # Size - 7 # Type #- 8 # Registration time #- 9 # Persistent - 10 # Status #- 11 # #VMs #- 12 # Target #- 13 # Labels #- 14 # Search data actions: File.refresh: true File.create_dialog: true File.rename: true File.chown: false File.chgrp: false File.chmod: true File.enable: true File.disable: true File.delete: true File.edit_labels: true File.menu_labels: true marketplaces-tab: panel_tabs: marketplace_info_tab: true marketplace_apps_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name - 5 # Capacity - 6 # Apps - 7 # Driver - 8 # Zone #- 9 # Labels #- 10 # Search data actions: MarketPlace.refresh: true MarketPlace.create_dialog: true MarketPlace.update_dialog: true MarketPlace.rename: true MarketPlace.chown: true MarketPlace.chgrp: true MarketPlace.chmod: true MarketPlace.delete: true MarketPlace.edit_labels: true MarketPlace.menu_labels: true marketplaceapps-tab: panel_tabs: marketplaceapp_info_tab: true marketplaceapp_templates_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name #- 5 # Version - 6 # Size - 7 # State #- 8 # Type - 9 # Registration - 10 # Marketplace - 11 # Zone #- 12 # Labels #- 13 # Search data actions: MarketPlaceApp.refresh: true MarketPlaceApp.create_dialog: true MarketPlaceApp.download_opennebula_dialog: true MarketPlaceApp.download_local: true MarketPlaceApp.rename: true MarketPlaceApp.chown: true MarketPlaceApp.chgrp: true MarketPlaceApp.chmod: true MarketPlaceApp.enable: true MarketPlaceApp.disable: true MarketPlaceApp.delete: true MarketPlaceApp.edit_labels: true MarketPlaceApp.menu_labels: true network-top-tab: panel_tabs: actions: vnets-tab: panel_tabs: vnet_info_tab: true vnet_ar_list_tab: true vnet_leases_tab: true vnet_sg_list_tab: true vnet_vr_list_tab: true vnet_clusters_tab: false table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name - 5 # Reservation - 6 # Cluster #- 7 # Bridge - 8 # Leases #- 9 # VLAN ID #- 10 # Labels #- 11 # Search data actions: Network.refresh: true Network.create_dialog: false Network.import_dialog: false Network.update_dialog: true Network.reserve_dialog: true Network.addtocluster: false Network.rename: true Network.chown: false Network.chgrp: false Network.chmod: true Network.delete: true Network.hold_lease: true Network.release_lease: true Network.add_ar: false Network.remove_ar: true Network.update_ar: true Network.edit_labels: true Network.menu_labels: true vnets-topology-tab: panel_tabs: actions: NetworkTopology.refresh: true NetworkTopology.fit: true 
NetworkTopology.collapseVMs: true NetworkTopology.openVMs: true secgroups-tab: panel_tabs: security_group_info_tab: true security_group_vms_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name #- 5 # Labels #- 6 # Search data actions: SecurityGroup.refresh: true SecurityGroup.create_dialog: true SecurityGroup.update_dialog: true SecurityGroup.rename: true SecurityGroup.chown: true SecurityGroup.chgrp: true SecurityGroup.chmod: true SecurityGroup.clone_dialog: true SecurityGroup.commit_dialog: true SecurityGroup.delete: true SecurityGroup.edit_labels: true SecurityGroup.menu_labels: true support-tab: panel_tabs: support_info_tab: true table_columns: #- 0 # Checkbox - 1 # ID - 2 # Subject - 3 # Created at - 4 # Status actions: Support.refresh: true Support.create_dialog: true settings-tab: panel_tabs: settings_info_tab: true settings_config_tab: false settings_quotas_tab: true settings_group_quotas_tab: true settings_accounting_tab: true settings_showback_tab: true actions: # Buttons for settings_info_tab User.update_password: true User.login_token: true # Buttons for settings_config_tab Settings.change_language: true Settings.change_password: true Settings.change_view: true Settings.ssh_key: true Settings.login_token: true # Edit button in settings_quotas_tab User.quotas_dialog: false upgrade-top-tab: panel_tabs: actions:
- New views can be made and selected separately for each group
- Sunstone Server needs to be restarted to apply view changes
sudo service opennebula-sunstone restart
LDAP
- LDAP Authentication
/etc/one/auth/ldap_auth.conf
server 1:
    # Ldap authentication method
    :auth_method: :simple

    # Ldap server
    :host: ldap.informatik.tu-muenchen.de
    :port: 389

    # base hierarchy where to search for users and groups
    :base: 'ou=Personen,ou=IN,o=TUM,c=DE'

    # group the users need to belong to. If not set any user will do
    #:group: 'cn=il11,ou=Gruppen,ou=IN,o=TUM,c=DE'

    # field that holds the user name, if not set 'cn' will be used
    :user_field: 'uid'

    # field name for group membership, by default it is 'member'
    :group_field: 'memberUid'

    # user field that is in the group group_field, if not set 'dn' will be used
    :user_group_field: 'cn'

    # Generate mapping file from group template info
    :mapping_generate: true

    # Seconds a mapping file remains untouched until the next regeneration
    :mapping_timeout: 300

    # Name of the mapping file in the OpenNebula var directory
    :mapping_filename: server1.yaml

    # Key from the OpenNebula template to map to an AD group
    :mapping_key: GROUP_DN

    # Default group ID used for users in an AD group not mapped
    :mapping_default: 1
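For OpenNebula to use the LDAP driver at all, it presumably also has to be enabled in /etc/one/oned.conf; this is only a sketch, and the exact AUTHN list depends on the installed version:
AUTH_MAD = [
    EXECUTABLE = "one_auth_mad",
    AUTHN = "ssh,x509,ldap,server_cipher,server_x509"
]
# optionally make ldap the default driver for new users
DEFAULT_AUTH = "ldap"
Restart the opennebula service after changing oned.conf.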
SSL Certificates
- Install nginx and configure an SSL proxy, then adjust the OpenNebula VNC settings
/etc/nginx/sites-available/one
#### OpenNebula Sunstone upstream
upstream sunstone {
    server 127.0.0.1:9869;
}

upstream appserver {
    server 127.0.0.1:29877; # appserver_ip:ws_port
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

#### cloudserver.org HTTP virtual host
server {
    listen 80;
    server_name one.cm.in.tum.de;

    ### Permanent redirect to HTTPS (optional)
    return 301 https://one.cm.in.tum.de:443;
}

#### cloudserver.org HTTPS virtual host
server {
    listen 443;
    server_name one.cm.in.tum.de;

    ### SSL Parameters
    ssl on;
    ssl_certificate /etc/ssl/certs/emu10.fullchain.cert.pem;
    ssl_certificate_key /etc/ssl/private/emu10.private.key;

    ### Proxy requests to upstream
    location / {
        proxy_pass http://sunstone;
    }
}

server {
    listen 29876;
    server_name one.cm.in.tum.de;

    ### SSL Parameters
    ssl on;
    ssl_certificate /etc/ssl/certs/emu10.fullchain.cert.pem;
    ssl_certificate_key /etc/ssl/private/emu10.private.key;

    ### Proxy requests to upstream
    location / {
        proxy_pass http://appserver;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
/etc/one/sunstone-server.conf
# ... only the VNC part ...
:vnc_proxy_port: 29876
:vnc_proxy_support_wss: yes
:vnc_proxy_cert: /etc/ssl/certs/emu10.fullchain.cert.pem
:vnc_proxy_key: /etc/ssl/private/emu10.private.key
:vnc_proxy_ipv6: false
:vnc_request_password: false
- The SSL proxy for the VNC setup does not work well out of the box; a somewhat complicated workaround is needed
- Set the VNC proxy port in the sunstone-server.conf to 29877 and restart the Sunstone noVNC service
sudo service opennebula-novnc restart
- Now restart the nginx server
sudo service nginx restart
- Now set the port in the server conf back to the default 29876 and restart the Sunstone web server
sudo service opennebula-sunstone restart
- The actual VNC server now listens on port 29877, but the Sunstone web UI connects to 29876, the port where our nginx websocket proxy is listening; with this in place VNC works
- VNC sessions are limited to 60 seconds by the nginx websocket proxy; after that an automatic disconnect signal is sent
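If longer sessions are needed, the 60-second limit can presumably be raised with nginx's proxy timeout directives in the websocket server block; an untested sketch:
location / {
    proxy_pass http://appserver;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    # raise the idle timeout from the 60s default, e.g. to one hour
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;
}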
ONE CLI
Log in to emu10 and use the OpenNebula CLI commands to perform certain tasks. The documentation of the available commands is here:
https://docs.opennebula.org/5.6/operation/references/cli.html
To use the commands you need to perform the following steps:
- Create the auth directory
mkdir ~/.one
touch ~/.one/one_auth
- Create a token with a one command
oneuser token-create oneadmin --time 3600 > ~/.one/one_auth
- Now you can use other commands with authentication
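With the token in place the usual commands work, for example:
onevm list      # list all virtual machines
onehost list    # list the cluster nodes and their state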
Import other images (KVM/Virtualbox)
You can also import other images and boot them directly. OpenNebula uses KVM as a hypervisor, therefore all KVM-compatible images can be used. If you have a VirtualBox image, you can convert it to a raw image with this command:
VBoxManage clonehd --format RAW debian.vdi debian.img
To import it into OpenNebula, copy the image to the Sunstone host (emu10) into the directory /var/tmp/. The directory is important because images can only be imported from trusted/safe directories. Now use the ONE CLI to import the image. First authenticate as described above in “ONE CLI”, then use:
oneimage create -d ceph_img --name gbs_image --path /var/tmp/gbs.img --prefix hd --type OS --driver raw --description "Virtualbox GBS Image"
to import it.
Make sure that the access rights are correct (go+r) when copying the image to /var/tmp/, otherwise the import will fail.
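Other disk formats can be converted to raw the same way with qemu-img before importing; a sketch for a hypothetical qcow2 image:
qemu-img convert -f qcow2 -O raw debian.qcow2 debian.img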