Install and configure a compute node

This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors to deploy instances or VMs. This configuration uses the Xen hypervisor on compute nodes that support hardware acceleration for virtual machines. On legacy hardware, this configuration uses the generic QEMU hypervisor. You can follow these instructions with minor modifications to horizontally scale your environment with additional compute nodes.

Note

This section assumes that you are following the instructions in this guide step-by-step to configure the first compute node. If you want to configure additional compute nodes, prepare them in a similar fashion to the first compute node in the example architectures section. Each additional compute node requires a unique IP address.

Install and configure components

Note

Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets indicates potential default configuration options that you should retain.

  1. Install the packages:

    # pkg install py27-nova
    
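    To confirm that the package installed correctly, you can query the package database:

    # pkg info py27-nova
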
  2. Edit the /usr/local/etc/nova/nova.conf file and complete the following actions:

    • In the [DEFAULT] section, enable only the compute and metadata APIs:

      [DEFAULT]
      ...
      enabled_apis = osapi_compute,metadata
      
    • In the [DEFAULT] section, configure RabbitMQ message queue access:

      [DEFAULT]
      ...
      transport_url = rabbit://openstack:RABBIT_PASS@controller
      

      Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
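
      For example, assuming a placeholder password of RABBIT_SECRET (illustration only; use the password you actually chose), you could substitute it in place with BSD sed:

      # sed -i '' 's/RABBIT_PASS/RABBIT_SECRET/' /usr/local/etc/nova/nova.conf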

    • In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

      [DEFAULT]
      ...
      auth_strategy = keystone
      
      [keystone_authtoken]
      ...
      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      project_name = service
      username = nova
      password = NOVA_PASS
      

      Replace NOVA_PASS with the password you chose for the nova user in the Identity service.

      Note

      Comment out or remove any other options in the [keystone_authtoken] section.

    • In the [DEFAULT] section, configure the my_ip option:

      [DEFAULT]
      ...
      my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
      

      Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node, typically 10.0.0.31 for the first node in the example architecture.
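
      If you are unsure which address is assigned to the management interface, you can list its IPv4 configuration (em0 is the interface name used in this guide's examples; substitute your own):

      $ ifconfig em0 inet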

    • In the [DEFAULT] section, configure networking to use the legacy nova-network service rather than the Networking service (neutron):

      [DEFAULT]
      ...
      use_neutron = False
      firewall_driver = nova.virt.firewall.NoopFirewallDriver
      network_driver = nova.network.freebsd_net
      libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
      freebsdnet_interface_driver = nova.network.freebsd_net.FreeBSDBridgeInterfaceDriver
      l3_lib = nova.network.l3.FreeBSDNetL3
      network_api_class = nova.network.api.API
      security_group_api = nova
      network_manager = nova.network.manager.FlatDHCPManager
      network_size = 254
      allow_same_net_traffic = False
      multi_host = True
      send_arp_for_ha = False
      share_dhcp_address = True
      
      # physical public interface (used for external networking)
      public_interface = em0
      
      # network bridge for flat network (choose any name you want)
      flat_network_bridge = br100
      
      # interface for communicating between instances
      # for single-node deployment use tap otherwise point it to
      # physical interface
      flat_interface = tap0
      use_ipv6 = False
      
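      The interface names above (em0 and tap0) are examples only. To list the interfaces actually present on your node before setting public_interface and flat_interface, run:

      $ ifconfig -l
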
    • In the [libvirt] section, enable options specific to FreeBSD:

      [libvirt]
      ...
      use_virtio_for_bridges = True
      
    • In the [serial_console] section, disable the serial console feature:

      [serial_console]
      ...
      enabled = False
      
    • In the [vnc] section, enable and configure remote console access:

      [vnc]
      ...
      enabled = True
      vncserver_listen = 0.0.0.0
      vncserver_proxyclient_address = $my_ip
      novncproxy_base_url = http://controller:6080/vnc_auto.html
      

      The server component listens on all IP addresses and the proxy component only listens on the management interface IP address of the compute node. The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node.

      Note

      If the web browser to access remote consoles resides on a host that cannot resolve the controller hostname, you must replace controller with the management interface IP address of the controller node.
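
      For example, assuming the controller node uses the management address 10.0.0.11 from the example architecture, the option becomes:

      novncproxy_base_url = http://10.0.0.11:6080/vnc_auto.html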

    • In the [glance] section, configure the location of the Image service API:

      [glance]
      ...
      api_servers = http://controller:9292
      
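      As a quick connectivity check, you can verify from the compute node that the Image service endpoint is reachable over TCP, using nc(1) from the FreeBSD base system:

      $ nc -vz controller 9292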

Finalize installation

  1. Determine whether your compute node supports hardware acceleration for virtual machines:

    $ dmesg | grep -c VMX
    

    If this command returns a value of one or greater, your compute node supports hardware acceleration. In that case, edit the nova-compute configuration file /usr/local/etc/nova/nova-compute.conf as follows:

    [DEFAULT]
    compute_driver = libvirt.LibvirtDriver
    force_raw_images = True
    use_cow_images = False
    
    [libvirt]
    virt_type = qemu
    force_xen_phy = True
    online_cpu_tracking = False
    

    If this command returns a value of zero, your compute node does not support hardware acceleration, and you must configure libvirt to use unaccelerated QEMU emulation.

    • Edit the [libvirt] section in the /usr/local/etc/nova/nova-compute.conf file as follows:

      [libvirt]
      ...
      virt_type = qemu
      online_cpu_tracking = False
      
    • Edit the [libvirt] section in the /usr/local/etc/nova/nova.conf file as follows:

      [libvirt]
      ...
      cpu_mode = none
      
    • Edit the [serial_console] section in the /usr/local/etc/nova/nova.conf file as follows:

      [serial_console]
      ...
      enabled = True
      
  2. Start the Compute service, including its dependencies, and configure them to start automatically when the system boots:

    # sysrc libvirtd_enable="YES"
    # sysrc virtlogd_enable="YES"
    # sysrc nova_compute_enable="YES"
    # sysrc nova_network_enable="YES"
    # service libvirtd start
    # service virtlogd start
    # service nova-compute start
    # service nova-network start
    
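    Assuming the rc scripts follow the usual rc.subr conventions, you can verify that the services are running:

    # service nova-compute status
    # service nova-network status
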

Note

If the nova-compute service fails to start, check /var/log/nova/nova-compute.log. The error message AMQP server on controller:5672 is unreachable likely indicates that the firewall on the controller node is preventing access to port 5672.
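
One quick way to test this from the compute node is to check TCP reachability of the AMQP port with nc(1):

    $ nc -vz controller 5672

If the connection fails, adjust the firewall rules on the controller node and try again.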

Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.