OpenStack Cloud Computing Cookbook

http://www.openstackcookbook.com/

Installing RabbitMQ for OpenStack Cloud Computing Cookbook

The examples in the OpenStack Cloud Computing Cookbook assume you have a suitable messaging service backend configured to run the OpenStack services. This didn’t fit into any single chapter or service, as nearly all of them rely on something like RabbitMQ. If you don’t have it installed, follow these steps, which you should be able to copy and paste to run in your environment. Warning: the steps below do not follow security best practices, as we allow the guest user to connect remotely from any of our OpenStack services.

Getting ready

We will be performing an installation and configuration of RabbitMQ on the Controller node shown in the diagram. Other messaging systems are available for use with OpenStack, such as Qpid and ZeroMQ, but we concentrate on the most widely used: RabbitMQ. In the examples throughout the book, the IP address of the Controller hosting RabbitMQ, which the services in the book will use, is 172.16.0.200.

OpenStack Cloud Computing Cookbook Lab Environment

How to do it…

To install RabbitMQ, carry out the following steps:

Tip: A script is provided here for you to run the commands below.

  1. We install the required packages with the following command:
    sudo apt-get install rabbitmq-server
  2. We then create a very simple config file that allows guest users to connect remotely, with the following:
    cat <<EOF | sudo tee /etc/rabbitmq/rabbitmq.config
    [{rabbit, [{loopback_users, []}]}].
    EOF
  3. And then we set RabbitMQ to listen on port 5672:
    cat <<EOF | sudo tee /etc/rabbitmq/rabbitmq-env.conf
    RABBITMQ_NODE_PORT=5672
    EOF
  4. We pick up the changes made by restarting RabbitMQ with the following command:
    sudo service rabbitmq-server restart
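
Optionally, verify that RabbitMQ restarted cleanly and is listening on port 5672; for example:

sudo rabbitmqctl status | grep -A 3 listeners
sudo netstat -ntlp | grep 5672

The status output should include an amqp listener on port 5672.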

How it works…

What we have done here is install and configure RabbitMQ on our Controller node, which is hosted at address 172.16.0.200. When we configure the OpenStack services that require a RabbitMQ connection, they will use the following format:

rabbit_host = 172.16.0.200
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
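
In the Juno release, these options typically live in the [DEFAULT] section of each service’s configuration file; for example, a minimal sketch of the relevant part of /etc/nova/nova.conf would be:

[DEFAULT]
rabbit_host = 172.16.0.200
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /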

OpenStack clients installation on Ubuntu for the OpenStack Cloud Computing Cookbook

Throughout the OpenStack Cloud Computing Cookbook we expect the reader to have access to the client tools required to operate an OpenStack environment. If these are not installed, they can be installed by following this simple guide.

This guide will cover installation of:

  • Nova Client
  • Keystone Client
  • Neutron Client
  • Glance Client
  • Cinder Client
  • Swift Client
  • Heat Client

Getting ready

To use the tools and this guide, you are expected to have access to an Ubuntu (preferably 14.04 LTS) server or PC that has access to the network where you are installing OpenStack.

How to do it…

To install the clients, simply execute the following commands:

sudo apt-get update
sudo apt-get install python-novaclient python-keystoneclient python-neutronclient \
    python-glanceclient python-cinderclient python-swiftclient python-heatclient
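
A quick way to confirm the clients installed correctly is to ask each one for its version; for example:

nova --version
keystone --version
neutron --version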

Once these are installed, we can configure our CLI shell environment with the appropriate environment variables to allow us to communicate with the OpenStack endpoints.

A typical set of environment variables is as follows, and is used extensively throughout the book when operating OpenStack as a user of the services:

export OS_TENANT_NAME=cookbook
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=https://192.168.100.200:5000/v2.0/
export OS_NO_CACHE=1
export OS_KEY=/vagrant/cakey.pem
export OS_CACERT=/vagrant/ca.pem

Typically these export lines are written to a file, for example $HOME/openrc, that allows a user to simply execute the following command to source them in for use with OpenStack:

source openrc

(or, equivalently, using the POSIX form: . openrc)
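
With the environment sourced, a simple check that the variables are correct is to request a token:

keystone token-get

If the credentials and endpoint are valid, this returns a token ID along with its expiry.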

Configuring Keystone for the first time

To initially configure Keystone, we utilize the SERVICE_TOKEN and SERVICE_ENDPOINT environment variables. The value of SERVICE_TOKEN must match the admin_token setting in /etc/keystone/keystone.conf, and should only be used for bootstrapping Keystone. Set the environment up as follows:

export ENDPOINT=192.168.100.200
export SERVICE_TOKEN=ADMIN
export SERVICE_ENDPOINT=https://${ENDPOINT}:35357/v2.0
export OS_KEY=/vagrant/cakey.pem
export OS_CACERT=/vagrant/ca.pem

This bypasses the usual authentication process to allow services and users to be configured in Keystone before the users and passwords exist.
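
As an illustration, with SERVICE_TOKEN and SERVICE_ENDPOINT exported, the keystone client bypasses normal authentication, so the very first tenant and user can be created; a sketch using the names from this guide:

keystone tenant-create --name cookbook --description "Default Cookbook Tenant"
keystone user-create --name admin --pass openstack --email root@localhost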

How it works…

The OpenStack command line tools utilize environment variables to know how to interact with OpenStack. The environment variables are easy to understand in terms of their function. A user is able to control multiple environments by simply changing the relevant environment variables.

To initially install the users and services, a SERVICE_TOKEN must be used, as at this first stage there are no users in the Keystone database to assign administrative privileges to. Once the initial users and services have been set up, the SERVICE_TOKEN should not be used unless maintenance or troubleshooting calls for it.

Installing MariaDB for OpenStack Cloud Computing Cookbook

The examples in the OpenStack Cloud Computing Cookbook assume you have a suitable database backend configured to run the OpenStack services. This didn’t fit into any single chapter or service, as they all rely on something like MariaDB or MySQL. If you don’t have it installed, follow these steps, which you should be able to copy and paste to run in your environment.

Getting ready

We will be performing an installation and configuration of MariaDB on the Controller node shown in the diagram. MariaDB and MySQL are interchangeable in terms of providing the MySQL database connections required by OpenStack. More information can be found at the MariaDB website. In the examples throughout the book, the IP address of the Controller hosting the database, which the services in the book will use, is 172.16.0.200.

OpenStack Cloud Computing Cookbook Lab Environment

How to do it…

To install MariaDB, carry out the following steps as root:

Tip: A script is provided here for you to run the commands below.

  1. We first set some variables that will be used in the subsequent steps. Edit them to suit your own environment:
    export MYSQL_HOST=172.16.0.200
    export MYSQL_ROOT_PASS=openstack
    export MYSQL_DB_PASS=openstack
  2. We then set some defaults in debconf to avoid any interactive prompts:
    echo "mysql-server-5.5 mysql-server/root_password password $MYSQL_ROOT_PASS" | sudo debconf-set-selections
    echo "mysql-server-5.5 mysql-server/root_password_again password $MYSQL_ROOT_PASS" | sudo debconf-set-selections
    echo "mysql-server-5.5 mysql-server/root_password seen true" | sudo debconf-set-selections
    echo "mysql-server-5.5 mysql-server/root_password_again seen true" | sudo debconf-set-selections
  3. We then install the required packages with the following command:
    sudo apt-get -y install mariadb-server python-mysqldb
  4. We now tell MariaDB to listen on all interfaces, as well as set a maximum connection limit (edit these to suit the security requirements of your environment):
    sudo sed -i "s/^bind\-address.*/bind-address = 0.0.0.0/g" /etc/mysql/my.cnf
    sudo sed -i "s/^#max_connections.*/max_connections = 512/g" /etc/mysql/my.cnf
  5. To speed up MariaDB, as well as help with permissions, write the following to /etc/mysql/conf.d/skip-name-resolve.cnf:
    echo "[mysqld]
    skip-name-resolve" > /etc/mysql/conf.d/skip-name-resolve.cnf
  6. We configure UTF-8 with the following:
    echo "[mysqld]
    collation-server = utf8_general_ci
    init-connect='SET NAMES utf8'
    character-set-server = utf8" > /etc/mysql/conf.d/01-utf8.cnf
  7. We pick up the changes made by restarting MariaDB with the following command:
    sudo service mysql restart
  8. We now ensure the root user has the correct permissions to allow us to create further databases and users:
    mysql -u root -p${MYSQL_ROOT_PASS} -h localhost -e "GRANT ALL ON *.* to root@\"localhost\" IDENTIFIED BY \"${MYSQL_ROOT_PASS}\" WITH GRANT OPTION;"
    mysql -u root -p${MYSQL_ROOT_PASS} -h localhost -e "GRANT ALL ON *.* to root@\"${MYSQL_HOST}\" IDENTIFIED BY \"${MYSQL_ROOT_PASS}\" WITH GRANT OPTION;"
    mysql -u root -p${MYSQL_ROOT_PASS} -h localhost -e "GRANT ALL ON *.* to root@\"%\" IDENTIFIED BY \"${MYSQL_ROOT_PASS}\" WITH GRANT OPTION;"
  9. We run the following command to pick up the permission changes:
    mysqladmin -uroot -p${MYSQL_ROOT_PASS} flush-privileges
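
To verify that remote root access works as intended, connect back over the host address rather than the local socket; for example:

mysql -u root -p${MYSQL_ROOT_PASS} -h ${MYSQL_HOST} -e "SELECT VERSION();"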

How it works…

What we have done here is install and configure MariaDB on our Controller node, which is hosted at address 172.16.0.200. When we configure the OpenStack services that require a database connection, they will use the address format mysql://user:password@172.16.0.200/service.
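
For example, creating the database and credentials that a service such as Nova would use (the names here are illustrative, reusing the MYSQL_DB_PASS variable set earlier) looks like this:

mysql -u root -p${MYSQL_ROOT_PASS} -e "CREATE DATABASE nova;"
mysql -u root -p${MYSQL_ROOT_PASS} -e "GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY '${MYSQL_DB_PASS}';"

The service would then be configured with a connection string of the form mysql://nova:openstack@172.16.0.200/nova.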

See Also

The 3rd Edition of the OpenStack Cloud Computing Cookbook covers installation of highly available MariaDB with Galera.

Configuring Ubuntu Cloud Archive for OpenStack

Ubuntu 14.04 LTS, the release used throughout this book, provides two repositories for installing OpenStack. The standard repository ships with the Icehouse release of OpenStack, whereas a further supported repository, called the Ubuntu Cloud Archive, provides access to the latest release (at the time of writing), Juno. We will be performing an installation and configuration of the OpenStack Identity service (as well as the rest of the OpenStack services) with packages from the Ubuntu Cloud Archive, to provide us with the Juno release of the software.

Getting ready

Ensure you have a suitable server available for installation of the OpenStack Identity service components. If you are using the accompanying Vagrant environment, as described in the Preface, this will be the controller node that we will be using.

Ensure you are logged onto the controller node and that it has Internet access to allow us to install the required packages in our environment for running Keystone. If you created this node with Vagrant, you can execute the following command:

vagrant ssh controller

How to do it…

Carry out the following steps to configure Ubuntu 14.04 LTS to use the Ubuntu Cloud Archive:

  1. To access the Ubuntu Cloud Archive repository, we first install the Ubuntu Cloud Archive keyring and the software-properties tooling that provides the add-apt-repository command, as follows:
    sudo apt-get update
    sudo apt-get install -y software-properties-common ubuntu-cloud-keyring
  2. Next we enable the Ubuntu Cloud Archive for OpenStack Juno. We do this as follows:
    sudo add-apt-repository -y cloud-archive:juno 
    sudo apt-get update
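
To confirm that the Cloud Archive now takes precedence, inspect the candidate version of one of the OpenStack packages; for example:

apt-cache policy keystone

The candidate version should be listed as coming from the trusty-updates/juno pocket of ubuntu-cloud.archive.canonical.com.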

How it works…

What we’re doing here is adding an extra repository to our system that provides a tested set of OpenStack packages fully supported on the Ubuntu 14.04 LTS release. The packages from this repository are then the ones that will be used when we install OpenStack on our system.

There’s more…

More information about the Ubuntu Cloud Archive can be found by visiting the following address: https://wiki.ubuntu.com/ServerTeam/CloudArchive. This explains the release process and the ability to use the latest releases of OpenStack, where new versions are released every 6 months, on a long-term support (LTS) release of Ubuntu that is released every 2 years.

Using an alternative release

Optionally deviating from stable releases is appropriate when you are helping to develop or debug OpenStack, or require functionality that is not available in the current release.

To use a particular release from the Cloud Archive, for example the next OpenStack release, Kilo, we issue the following command:

sudo add-apt-repository cloud-archive:kilo

OpenStack Cloud Computing Cookbook 3rd Edition Progress, February 2015

We’re making very good progress with the 3rd Edition. With 9 chapters submitted, and a gaggle of Tech Reviewers picked and reviewing as I type this, we’re heading towards publication (hopefully) around May 2015. At this stage things get very interesting: we’re taking machetes to some ideas, and fine-tuning others with a craft knife. As we get nearer to publication, we’ll post more details and snippets of the upcoming best seller!

In the meantime, you can grab a copy of the environment used to support the book which gives a complete multi-node test environment running OpenStack Juno here.

Creating a VMware Workstation Sandbox Environment for the OpenStack Cloud Computing Cookbook

Creating a sandbox environment using VMware Workstation and Vagrant allows us to discover and experiment with the OpenStack services. VMware Workstation gives us the ability to spin up virtual machines and networks without affecting the rest of our working environment. VMware Workstation is available for Windows and Linux. Vagrant allows us to automate this task, meaning we can spend less time creating our test environments and more time using OpenStack. This test environment can then be used for the rest of the OpenStack Cloud Computing Cookbook.

It is assumed that the computer you will be using to run your test environment has enough processing power, hardware virtualization support (for example, Intel VT-x or AMD-V), and at least 8 GB RAM. Remember, we’re creating a virtual machine that itself will be used to spin up virtual machines, so the more RAM you have, the better.

Getting ready

To begin with, we must have purchased VMware Workstation from http://www.vmware.com/ and followed the installation procedure once it has been downloaded.

We also need to download and install Vagrant, which is covered in the steps that follow.

The steps throughout the book assume that the underlying operating system OpenStack will be installed on is the Ubuntu 14.04 LTS release.

We don’t need to download an Ubuntu 14.04 ISO, as our Vagrant environment does this for us.

Once set up, we need to create the Networks that we will use for our OpenStack environment. From the Virtual Network Editor, create the following networks:

Here we are creating 3 “host-only” networks that will map to the eth1, eth2 and eth3 interfaces in our Linux guests (as vmnet8 will consume eth0).

Once this has been done, we need to ensure we are able to switch our guests’ interfaces to promiscuous mode, and we can only do that if we are able to write to the corresponding devices on our host. On the host running VMware Workstation, run the following (if running under Linux):

sudo groupadd vmware
sudo usermod -a -G vmware {your_user_running_workstation}
sudo chgrp vmware /dev/vmnet*
sudo chmod g+rw /dev/vmnet*

Log out, then back in again.
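
After logging back in, you can confirm the change took effect; your user should appear in the vmware group and the vmnet devices should be group-writable:

id
ls -l /dev/vmnet*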

How to do it…

To create our sandbox environment within VMware Workstation we will use Vagrant to define a number of virtual machines that allows us to run all of the OpenStack services used in the OpenStack Cloud Computing Cookbook.

controller = Controller services (APIs + Shared Services)
network = OpenStack Network node
compute = OpenStack Compute (Nova) for running KVM instances
swift = OpenStack Object Storage (All-In-One) installation
cinder = OpenStack Block Storage node

These virtual machines will be configured with an appropriate amount of RAM, CPU and disk, and have a total of four network interfaces. Vagrant automatically sets up an interface on each virtual machine that will NAT (Network Address Translate) traffic out, allowing the virtual machine to connect to the network outside of VMware Workstation to download packages. This NAT interface is not mentioned in our Vagrantfile, but will be visible on our virtual machine as eth0. A Vagrantfile, which is found in the working directory of our virtual machine sandbox environment, is a simple file that describes our virtual machines and how VMware Workstation will create them. We configure a first interface for use in our OpenStack environment, which will be the host network interface of our OpenStack virtual machines (the interface a client will use to connect to Horizon or the APIs); a second interface for the private network that OpenStack Compute uses for internal communication between different OpenStack Compute hosts; and a third that will be used as an external provider network when we look at Neutron networking. When these virtual machines become available after starting them up, you will see the four interfaces explained below:

eth0 = VMware NAT
eth1 = Host Network
eth2 = Private (or Tenant) Network (host-host communication for Neutron created networks)
eth3 = Neutron External Network (when creating an externally routed Neutron network)

Carry out the following steps to create a virtual machine with Vagrant that will be used to run the OpenStack services:

      1. Install/purchase VMware Workstation from http://www.vmware.com/. The book was written using VMware Workstation version 10.
      2. Install Vagrant from http://www.vagrantup.com/. The book was written using Vagrant version 1.6.5.
      3. Once installed, we can define our virtual machine and networking in a file called Vagrantfile. To do this, create a working directory (for example, ~/cookbook) and edit a file in it called Vagrantfile, as shown in the following command snippet:
        mkdir ~/cookbook
        cd ~/cookbook
        vim Vagrantfile
      4. We can now proceed to configure Vagrant by editing the ~/cookbook/Vagrantfile file with the following code:
        # -*- mode: ruby -*-
        # vi: set ft=ruby :
        # We set the last octet in IPV4 address here
        nodes = {
         'controller' => [1, 200],
         'network' => [1, 202],
         'compute' => [1, 201],
         'swift' => [1, 210],
         'cinder' => [1, 211],
        }
        
        Vagrant.configure("2") do |config| 
          # Virtualbox
          config.vm.box = "trusty64"
          config.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box"
          config.vm.synced_folder ".", "/vagrant", type: "nfs"
        
          # VMware Fusion / Workstation
         config.vm.provider "vmware_fusion" or config.vm.provider "vmware_workstation" do |vmware, override|
            override.vm.box = "bunchc/trusty-x64"
            override.vm.synced_folder ".", "/vagrant", type: "nfs"
        
            # Fusion Performance Hacks
            vmware.vmx["logging"] = "FALSE"
            vmware.vmx["MemTrimRate"] = "0"
            vmware.vmx["MemAllowAutoScaleDown"] = "FALSE"
            vmware.vmx["mainMem.backing"] = "swap"
            vmware.vmx["sched.mem.pshare.enable"] = "FALSE"
            vmware.vmx["snapshot.disabled"] = "TRUE"
            vmware.vmx["isolation.tools.unity.disable"] = "TRUE"
            vmware.vmx["unity.allowCompostingInGuest"] = "FALSE"
            vmware.vmx["unity.enableLaunchMenu"] = "FALSE"
            vmware.vmx["unity.showBadges"] = "FALSE"
            vmware.vmx["unity.showBorders"] = "FALSE"
            vmware.vmx["unity.wasCapable"] = "FALSE"
            vmware.vmx["vhv.enable"] = "TRUE"
            end
          end
          
          # Default is 2200..something, but port 2200 is used by forescout NAC agent.
          config.vm.usable_port_range= 2800..2900
        
          nodes.each do |prefix, (count, ip_start)|
            count.times do |i|
              hostname = "%s" % [prefix, (i+1)]
        
              config.vm.define "#{hostname}" do |box|
                box.vm.hostname = "#{hostname}.book"
                box.vm.network :private_network, ip: "172.16.0.#{ip_start+i}", :netmask => "255.255.0.0"
                box.vm.network :private_network, ip: "10.10.0.#{ip_start+i}", :netmask => "255.255.255.0"
                box.vm.network :private_network, ip: "192.168.100.#{ip_start+i}", :netmask => "255.255.255.0"
        
                # If using Fusion
                box.vm.provider :vmware_fusion do |v|
                  v.vmx["memsize"] = 1024
                  if prefix == "compute" or prefix == "controller" or prefix == "swift"
                    v.vmx["memsize"] = 2048
                    v.vmx["numvcpus"] = "2"
                    end # if
                  end # box.vm fusion
                # If using Workstation
                box.vm.provider :vmware_workstation do |v|
                  v.vmx["memsize"] = 1024
                  if prefix == "compute" or prefix == "controller" or prefix == "swift"
                    v.vmx["memsize"] = 3172
                    v.vmx["numvcpus"] = "2"
                  end
               end 
        
               # Otherwise using VirtualBox 
               box.vm.provider :virtualbox do |vbox| 
                 # Defaults 
                 vbox.customize ["modifyvm", :id, "--memory", 1024] 
                 vbox.customize ["modifyvm", :id, "--cpus", 1] 
                 vbox.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"] 
                 vbox.customize ["modifyvm", :id, "--nicpromisc4", "allow-all"] 
                 if prefix == "compute" or prefix == "controller" or prefix == "swift" 
                   vbox.customize ["modifyvm", :id, "--memory", 2048] 
                   vbox.customize ["modifyvm", :id, "--cpus", 2] 
                 end # if 
               end # box.vm virtualbox 
             end # config.vm.define 
          end # count.times 
          end # nodes.each
        end # Vagrant.configure("2")
      5. We are now ready to power on our virtual machines. We do this by simply running the following command:
        vagrant up

Congratulations! We have successfully created VMware Workstation virtual machines running Ubuntu 14.04 that are able to run the OpenStack services.
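
At this point you can also check from within a guest that the interfaces came up as described; for example:

vagrant ssh controller -c "ip addr show eth1"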

How it works…

What we have done is define a number of virtual machines in VMware Workstation (or VirtualBox/VMware Fusion) by describing them to Vagrant. Vagrant then configures these virtual machines based on the settings given in the Vagrantfile, in the directory from which we want to store and run our virtual machines. This file uses Ruby syntax, but the lines are relatively self-explanatory. We have specified some of the following:

      • The hostnames are controller, network, compute, swift and cinder, and each has a corresponding fourth octet assigned to it that is appended to the networks given further into the file.
      • The VMs are based on Ubuntu Trusty Tahr, an alias for Ubuntu 14.04 LTS 64-bit
      • We configure some optimizations and specific configurations for VMware and VirtualBox
      • The file is written as a series of nested loops, iterating over the nodes hash set at the top of the file.
      • In each iteration, the corresponding configuration of the virtual machine is made, and the configured virtual machine is brought up.

We then launch these virtual machines using Vagrant with the help of the following simple command:

vagrant up

This will launch all VMs listed in the Vagrantfile.

To see the status of the virtual machines we use the following command:

vagrant status

To log into any of the machines we use the following command:

vagrant ssh controller

Replace “controller” with the name of the virtual machine you want to use.

Creating a Sandbox Environment for the OpenStack Cloud Computing Cookbook

Creating a sandbox environment using VirtualBox (or VMware Fusion) and Vagrant allows us to discover and experiment with the OpenStack services. VirtualBox gives us the ability to spin up virtual machines and networks without affecting the rest of our working environment, and is freely available at http://www.virtualbox.org for Windows, Mac OS X, and Linux. Vagrant allows us to automate this task, meaning we can spend less time creating our test environments and more time using OpenStack. This test environment can then be used for the rest of the OpenStack Cloud Computing Cookbook.

It is assumed that the computer you will be using to run your test environment has enough processing power, hardware virtualization support (for example, Intel VT-x or AMD-V), and at least 8 GB RAM. Remember, we’re creating a virtual machine that itself will be used to spin up virtual machines, so the more RAM you have, the better.

Getting ready

To begin with, we must download VirtualBox from http://www.virtualbox.org/ and then follow the installation procedure once this has been downloaded.

We also need to download and install Vagrant, which is covered in the steps that follow.

The steps throughout the book assume that the underlying operating system OpenStack will be installed on is the Ubuntu 14.04 LTS release.

We don’t need to download an Ubuntu 14.04 ISO, as our Vagrant environment does this for us.

How to do it…

To create our sandbox environment within VirtualBox we will use Vagrant to define a number of virtual machines that allows us to run all of the OpenStack services used in the OpenStack Cloud Computing Cookbook.

controller = Controller services (APIs + Shared Services)
network = OpenStack Network node
compute = OpenStack Compute (Nova) for running KVM instances
swift = OpenStack Object Storage (All-In-One) installation
cinder = OpenStack Block Storage node

 

These virtual machines will be configured with an appropriate amount of RAM, CPU and disk, and have a total of four network interfaces. Vagrant automatically sets up an interface on each virtual machine that will NAT (Network Address Translate) traffic out, allowing the virtual machine to connect to the network outside of VirtualBox to download packages. This NAT interface is not mentioned in our Vagrantfile, but will be visible on our virtual machine as eth0. A Vagrantfile, which is found in the working directory of our virtual machine sandbox environment, is a simple file that describes our virtual machines and how VirtualBox will create them. We configure a first interface for use in our OpenStack environment, which will be the host network interface of our OpenStack virtual machines (the interface a client will use to connect to Horizon or the APIs); a second interface for the private network that OpenStack Compute uses for internal communication between different OpenStack Compute hosts; and a third that will be used as an external provider network when we look at Neutron networking. When these virtual machines become available after starting them up, you will see the four interfaces explained below:

eth0 = VirtualBox NAT
eth1 = Host Network
eth2 = Private (or Tenant) Network (host-host communication for Neutron created networks)
eth3 = Neutron External Network (when creating an externally routed Neutron network)

Carry out the following steps to create a virtual machine with Vagrant that will be used to run the OpenStack services:

      1. Install VirtualBox from http://www.virtualbox.org/. The book was written using VirtualBox version 4.3.18.
      2. Install Vagrant from http://www.vagrantup.com/. The book was written using Vagrant version 1.6.5.
      3. Once installed, we can define our virtual machine and networking in a file called Vagrantfile. To do this, create a working directory (for example, ~/cookbook) and edit a file in it called Vagrantfile, as shown in the following command snippet:
        mkdir ~/cookbook
        cd ~/cookbook
        vim Vagrantfile
      4. We can now proceed to configure Vagrant by editing the ~/cookbook/Vagrantfile file with the following code:
        # -*- mode: ruby -*-
        # vi: set ft=ruby :
        # We set the last octet in IPV4 address here
        nodes = {
         'controller' => [1, 200],
         'network' => [1, 202],
         'compute' => [1, 201],
         'swift' => [1, 210],
         'cinder' => [1, 211],
        }
        
        Vagrant.configure("2") do |config| 
          # Virtualbox
          config.vm.box = "trusty64"
          config.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box"
          config.vm.synced_folder ".", "/vagrant", type: "nfs"
        
          # VMware Fusion / Workstation
          config.vm.provider "vmware_fusion" do |vmware, override|
            override.vm.box = "trusty64_fusion"
            override.vm.box_url = "https://oss-binaries.phusionpassenger.com/vagrant/boxes/latest/ubuntu-14.04-amd64-vmwarefusion.box"
            override.vm.synced_folder ".", "/vagrant", type: "nfs"
        
            # Fusion Performance Hacks
            vmware.vmx["logging"] = "FALSE"
            vmware.vmx["MemTrimRate"] = "0"
            vmware.vmx["MemAllowAutoScaleDown"] = "FALSE"
            vmware.vmx["mainMem.backing"] = "swap"
            vmware.vmx["sched.mem.pshare.enable"] = "FALSE"
            vmware.vmx["snapshot.disabled"] = "TRUE"
            vmware.vmx["isolation.tools.unity.disable"] = "TRUE"
            vmware.vmx["unity.allowCompostingInGuest"] = "FALSE"
            vmware.vmx["unity.enableLaunchMenu"] = "FALSE"
            vmware.vmx["unity.showBadges"] = "FALSE"
            vmware.vmx["unity.showBorders"] = "FALSE"
            vmware.vmx["unity.wasCapable"] = "FALSE"
          end
          
          # Default is 2200..something, but port 2200 is used by forescout NAC agent.
          config.vm.usable_port_range= 2800..2900
        
          nodes.each do |prefix, (count, ip_start)|
            count.times do |i|
              hostname = "%s" % [prefix, (i+1)]
        
              config.vm.define "#{hostname}" do |box|
                box.vm.hostname = "#{hostname}.book"
                box.vm.network :private_network, ip: "172.16.0.#{ip_start+i}", :netmask => "255.255.0.0"
                box.vm.network :private_network, ip: "172.10.0.#{ip_start+i}", :netmask => "255.255.0.0" 
                box.vm.network :private_network, ip: "192.168.100.#{ip_start+i}", :netmask => "255.255.255.0"
        
                # If using Fusion
                box.vm.provider :vmware_fusion do |v|
                  v.vmx["memsize"] = 1024
                  if prefix == "compute" or prefix == "controller" or prefix == "swift"
                    v.vmx["memsize"] = 2048
                  end # if
                end # box.vm fusion
        
                # Otherwise using VirtualBox
                box.vm.provider :virtualbox do |vbox|
                  # Defaults
                  vbox.customize ["modifyvm", :id, "--memory", 1024]
                  vbox.customize ["modifyvm", :id, "--cpus", 1]
                  vbox.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
                  vbox.customize ["modifyvm", :id, "--nicpromisc4", "allow-all"]
                  if prefix == "compute" or prefix == "controller" or prefix == "swift"
                    vbox.customize ["modifyvm", :id, "--memory", 2048]
                    vbox.customize ["modifyvm", :id, "--cpus", 2]
                  end # if
                end # box.vm virtualbox
              end # config.vm.define 
            end # count.times
          end # nodes.each
        end # Vagrant.configure("2")
      5. We are now ready to power on our virtual machines. We do this by simply running the following command:
        vagrant up

Congratulations! We have successfully created VirtualBox virtual machines running Ubuntu 14.04 that are able to run the OpenStack services.

How it works…

What we have done is define a number of virtual machines in VirtualBox (or VMware Fusion) by describing them to Vagrant. Vagrant then configures these virtual machines based on the settings given in the Vagrantfile, in the directory from which we want to store and run our virtual machines. This file uses Ruby syntax, but the lines are relatively self-explanatory. We have specified some of the following:

      • The hostnames are controller, network, compute, swift and cinder, and each has a corresponding fourth octet assigned to it that is appended to the networks given further into the file.
      • The VMs are based on Ubuntu Trusty Tahr, an alias for Ubuntu 14.04 LTS 64-bit
      • We configure some optimizations and specific configurations for VMware and VirtualBox
      • The file is written as a series of nested loops, iterating over the nodes hash set at the top of the file.
      • In each iteration, the corresponding configuration of the virtual machine is made, and the configured virtual machine is brought up.

We then launch these virtual machines using Vagrant with the help of the following simple command:

vagrant up

This will launch all VMs listed in the Vagrantfile.

To see the status of the virtual machines we use the following command:

vagrant status

To log into any of the machines we use the following command:

vagrant ssh controller

Replace “controller” with the name of the virtual machine you want to use.

Installing Rackspace Private Cloud using Chef Cookbooks

What It Does

In this recipe we show you how to install Rackspace Private Cloud on 3 servers: 2 Controllers in HA and a Compute host.

Getting Ready

You will need:

  • a Chef server installed and configured
  • 3 Servers (virtual or physical) running Ubuntu 12.04

Ensure you are on a client or server that has the Chef command-line client, knife, installed and configured to use your Chef Server.
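
Before running the script, you can confirm that knife can reach your Chef Server with, for example:

knife client list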

How to do it…

#!/usr/bin/env bash
set -e 
set -v
set -u
# This is a crude script which will deploy an OpenStack HA environment.
# YOU have to populate the IP addresses for Controller 1 and 2, as well as
# the IP addresses for your compute nodes. Additionally, you will need to
# populate the VIP_PREFIX with the first three octets of your VIP addresses.
# You should run this script on the node that will become controller 1.

# Rabbit Password
RMQ_PW="Passw0rd"

# Rabbit IP address; this should be the host IP
# on your management network
RMQ_IP="10.51.50.1"

# Set the cookbook version that we will upload to chef
COOKBOOK_VERSION="v4.2.1"

# SET THE NODE IP ADDRESSES
CONTROLLER1="10.51.50.1"
CONTROLLER2="10.51.50.2"

# ADD ALL OF THE COMPUTE NODE IP ADDRESSES, SPACE SEPARATED.
COMPUTE_NODES="10.51.50.3 10.51.50.4"

# This is the VIP prefix, i.e. the beginning of the IP addresses for all your VIPs.
# Note: this makes a lot of assumptions about your VIPs.
# The environment uses .154, .155, and .156 for the HA VIPs.
VIP_PREFIX="10.51.50"

# Make the system key used for bootstrapping self and others.
if [ ! -f "/root/.ssh/id_rsa" ];then
    ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ''
    pushd /root/.ssh/
    cat id_rsa.pub | tee -a authorized_keys
    popd
fi

for node in ${CONTROLLER1} ${CONTROLLER2} ${COMPUTE_NODES};do
    ssh-copy-id ${node}
done

apt-get update
apt-get install -y python-dev python-pip git erlang erlang-nox erlang-dev curl lvm2
pip install git+https://github.com/cloudnull/mungerator
RABBIT_URL="http://www.rabbitmq.com"

function rabbit_setup() {
    if [ ! "$(rabbitmqctl list_vhosts | grep -w '/chef')" ];then
      rabbitmqctl add_vhost /chef
    fi

    if [ "$(rabbitmqctl list_users | grep -w 'chef')" ];then
      rabbitmqctl delete_user chef
    fi

    rabbitmqctl add_user chef "${RMQ_PW}"
    rabbitmqctl set_permissions -p /chef chef '.*' '.*' '.*'
}

function install_apt_packages() { 
    RABBITMQ_KEY="${RABBIT_URL}/rabbitmq-signing-key-public.asc"
    wget -O /tmp/rabbitmq.asc ${RABBITMQ_KEY};   
    apt-key add /tmp/rabbitmq.asc 
    RABBITMQ="${RABBIT_URL}/releases/rabbitmq-server/v3.1.5/rabbitmq-server_3.1.5-1_all.deb"
    wget -O /tmp/rabbitmq.deb ${RABBITMQ}
    dpkg -i /tmp/rabbitmq.deb
    rabbit_setup

    CHEF="https://www.opscode.com/chef/download-server?p=ubuntu&pv=12.04&m=x86_64"
    CHEF_SERVER_PACKAGE_URL=${CHEF}
    wget -O /tmp/chef_server.deb ${CHEF_SERVER_PACKAGE_URL}
    dpkg -i /tmp/chef_server.deb
}

function CREATE_SWAP() {

  cat > /tmp/swap.sh <<EOF
#!/usr/bin/env bash
if [ ! "\$(swapon -s | grep -v Filename)" ];then
  SWAPFILE="/SwapFile"
  if [ -f "\${SWAPFILE}" ];then
    swapoff -a
    rm \${SWAPFILE}
  fi
  dd if=/dev/zero of=\${SWAPFILE} bs=1M count=1024
  chmod 600 \${SWAPFILE}
  mkswap \${SWAPFILE}
  swapon \${SWAPFILE}
fi
EOF

  cat > /tmp/swappiness.sh <<EOF
#!/usr/bin/env bash
SWAPPINESS=\$(sysctl -a | grep vm.swappiness | awk -F' = ' '{print \$2}')

if [ "\${SWAPPINESS}" != 60 ];then
  sysctl vm.swappiness=60
fi
EOF

  if [ ! "$(swapon -s | grep -v Filename)" ];then
    chmod +x /tmp/swap.sh
    chmod +x /tmp/swappiness.sh
    /tmp/swap.sh && /tmp/swappiness.sh
  fi
}

CREATE_SWAP
install_apt_packages

mkdir -p /etc/chef-server
cat > /etc/chef-server/chef-server.rb <<EOF
erchef["s3_url_ttl"] = 3600
nginx["ssl_port"] = 4000
nginx["non_ssl_port"] = 4080
nginx["enable_non_ssl"] = true
rabbitmq["enable"] = false
rabbitmq["password"] = "${RMQ_PW}"
rabbitmq["vip"] = "${RMQ_IP}"
rabbitmq['node_ip_address'] = "${RMQ_IP}"
chef_server_webui["web_ui_admin_default_password"] = "THISisAdefaultPASSWORD"
bookshelf["url"] = "https://#{node['ipaddress']}:4000"
EOF

chef-server-ctl reconfigure

sysctl net.ipv4.conf.default.rp_filter=0 | tee -a /etc/sysctl.conf
sysctl net.ipv4.conf.all.rp_filter=0 | tee -a /etc/sysctl.conf
sysctl net.ipv4.ip_forward=1 | tee -a /etc/sysctl.conf

bash <(wget -O - http://opscode.com/chef/install.sh)

SYS_IP=$(ohai ipaddress | awk '/^ / {gsub(/ *\"/, ""); print; exit}')
export CHEF_SERVER_URL=https://${SYS_IP}:4000

# Configure Knife
mkdir -p /root/.chef
cat > /root/.chef/knife.rb <<EOF
log_level                :info
log_location             STDOUT
node_name                'admin'
client_key               '/etc/chef-server/admin.pem'
validation_client_name   'chef-validator'
validation_key           '/etc/chef-server/chef-validator.pem'
chef_server_url          "https://${SYS_IP}:4000"
cache_options( :path => '/root/.chef/checksums' )
cookbook_path            [ '/opt/chef-cookbooks/cookbooks' ]
EOF

if [ ! -d "/opt/" ];then
    mkdir -p /opt/
fi

if [ -d "/opt/chef-cookbooks" ];then
    rm -rf /opt/chef-cookbooks
fi

git clone https://github.com/rcbops/chef-cookbooks.git /opt/chef-cookbooks
pushd /opt/chef-cookbooks
git checkout ${COOKBOOK_VERSION}
git submodule init
git submodule sync
git submodule update

# Get add-on Cookbooks
knife cookbook site download -f /tmp/cron.tar.gz cron 1.2.6 
tar xf /tmp/cron.tar.gz -C /opt/chef-cookbooks/cookbooks

knife cookbook site download -f /tmp/chef-client.tar.gz chef-client 3.0.6
tar xf /tmp/chef-client.tar.gz -C /opt/chef-cookbooks/cookbooks

# Upload all of the RCBOPS Cookbooks
knife cookbook upload -o /opt/chef-cookbooks/cookbooks -a
popd

# Save the erlang cookie
if [ ! -f "/var/lib/rabbitmq/.erlang.cookie" ];then
    ERLANG_COOKIE="ANYSTRINGWILLDOJUSTFINE"
else
    ERLANG_COOKIE="$(cat /var/lib/rabbitmq/.erlang.cookie)"
fi

# DROP THE BASE ENVIRONMENT FILE
cat > /opt/base.env.json <<EOF
{
  "name": "RCBOPS_Openstack_Environment",
  "description": "Environment for Openstack Private Cloud",
  "cookbook_versions": {
  },
  "json_class": "Chef::Environment",
  "chef_type": "environment",
  "default_attributes": {
  },
  "override_attributes": {
    "monitoring": {
      "procmon_provider": "monit",
      "metric_provider": "collectd"
    },
    "enable_monit": true,
    "osops_networks": {
      "management": "${VIP_PREFIX}.0/24",
      "swift": "${VIP_PREFIX}.0/24",
      "public": "${VIP_PREFIX}.0/24",
      "nova": "${VIP_PREFIX}.0/24"
    },
    "rabbitmq": {
      "cluster": true,
      "erlang_cookie": "${ERLANG_COOKIE}"
    },
    "nova": {
      "config": {
        "use_single_default_gateway": false,
        "ram_allocation_ratio": 1.0,
        "disk_allocation_ratio": 1.0,
        "cpu_allocation_ratio": 2.0,
        "resume_guests_state_on_host_boot": false
      },
      "network": {
        "provider": "neutron"
      },
      "scheduler": {
        "default_filters": [
          "AvailabilityZoneFilter",
          "ComputeFilter",
          "RetryFilter"
        ]
      },
      "libvirt": {
        "vncserver_listen": "0.0.0.0",
        "virt_type": "qemu"
      }
    },
    "keystone": {
      "pki": {
        "enabled": false
      },
      "admin_user": "admin",
      "tenants": [
        "service",
        "admin",
        "demo",
        "demo2"
      ],
      "users": {
        "admin": {
          "password": "secrete",
          "roles": {
            "admin": [
              "admin"
            ]
          }
        },
        "demo": {
          "password": "secrete",
          "default_tenant": "demo",
          "roles": {
            "Member": [
              "demo2",
              "demo"
            ]
          }
        },
        "demo2": {
          "password": "secrete",
          "default_tenant": "demo2",
          "roles": {
            "Member": [
              "demo2",
              "demo"
            ]
          }
        }
      }
    },
    "neutron": {
      "ovs": {
        "network_type": "gre",
        "provider_networks": [
          {
            "bridge": "br-eth2",
            "vlans": "1024:1024",
            "label": "ph-eth2"
          }
        ]
      }
    },
    "mysql": {
      "tunable": {
        "log_queries_not_using_index": false
      },
      "allow_remote_root": true,
      "root_network_acl": "127.0.0.1"
    },
    "vips": {
      "horizon-dash": "${VIP_PREFIX}.156",
      "keystone-service-api": "${VIP_PREFIX}.156",
      "nova-xvpvnc-proxy": "${VIP_PREFIX}.156",
      "nova-api": "${VIP_PREFIX}.156",
      "cinder-api": "${VIP_PREFIX}.156",
      "nova-ec2-public": "${VIP_PREFIX}.156",
      "config": {
        "${VIP_PREFIX}.156": {
          "vrid": 12,
          "network": "public"
        },
        "${VIP_PREFIX}.154": {
          "vrid": 10,
          "network": "public"
        },
        "${VIP_PREFIX}.155": {
          "vrid": 11,
          "network": "public"
        }
      },
      "rabbitmq-queue": "${VIP_PREFIX}.155",
      "nova-novnc-proxy": "${VIP_PREFIX}.156",
      "mysql-db": "${VIP_PREFIX}.154",
      "glance-api": "${VIP_PREFIX}.156",
      "keystone-internal-api": "${VIP_PREFIX}.156",
      "horizon-dash_ssl": "${VIP_PREFIX}.156",
      "glance-registry": "${VIP_PREFIX}.156",
      "neutron-api": "${VIP_PREFIX}.156",
      "ceilometer-api": "${VIP_PREFIX}.156",
      "ceilometer-central-agent": "${VIP_PREFIX}.156",
      "heat-api": "${VIP_PREFIX}.156",
      "heat-api-cfn": "${VIP_PREFIX}.156",
      "heat-api-cloudwatch": "${VIP_PREFIX}.156",
      "keystone-admin-api": "${VIP_PREFIX}.156"
    },
    "glance": {
      "images": [

      ],
      "image": {
      },
      "image_upload": false
    },
    "osops": {
      "do_package_upgrades": false,
      "apply_patches": false
    },
    "developer_mode": false
  }
}
EOF

# Upload all of the RCBOPS Roles
knife role from file /opt/chef-cookbooks/roles/*.rb
knife environment from file /opt/base.env.json

# Build all the things
knife bootstrap -E RCBOPS_Openstack_Environment -r role[ha-controller1],role[single-network-node] ${CONTROLLER1}
knife bootstrap -E RCBOPS_Openstack_Environment -r role[ha-controller2],role[single-network-node] ${CONTROLLER2}

for node in ${COMPUTE_NODES};do
  knife bootstrap -E RCBOPS_Openstack_Environment -r role[single-compute] ${node}
done