Commit 8316a236 authored by remy.d1's avatar remy.d1

little update

parent 5122caae
cloud @ 88c79f27
Subproject commit 88c79f27ba47f260d1568f71a139402116a20a7b
Enabling different virtualizations

1. **VirtualBox** - create a Rocks cluster using VirtualBox
2. **ec2** - run a Rocks cluster on EC2 in a VPC. A proof of concept was working in
   the PRAGMA 23 demo. Currently being rewritten.
3. **vine** - how to set up a Vine Server
.. highlight:: rest
Rocks Cluster in VirtualBox
===========================

.. contents::
   :depth: 3
This page explains how to install a Rocks cluster in VirtualBox.
:Rocks: 6.1.1
:VirtualBox: 4.3.10
:Host OS: MacOS X 10.9.3
+ Download and install ``VirtualBox`` and ``VirtualBox Oracle VM VirtualBox Extension Pack``
from `VirtualBox <>`_ web site
+ Download VBox Guest Additions ISO (ex. VBoxGuestAdditions_4.3.10.iso) from
`download 4.3.10 <>`_
+ Download Rocks boot ISO (kernel/boot roll) from `Rocks <>`_ web site
+ Download ``vbox_cluster`` and ``vb-in.template``
from `this repo <>`_
Install Cluster
---------------
Create an input XML configuration file that will be used by the ``vbox_cluster``
command to create all virtual machines.
Use the downloaded `vb-in.template` file to create an input `cluster.xml` with your
desired settings. The template file provides for building a frontend and 2 compute nodes.
Most settings have reasonable default values.
See details in the section `Configuration File`_ below.
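Before running ``vbox_cluster``, the edited ``cluster.xml`` can be given a quick sanity check for the tags the script expects. This helper is not part of the repo; it is a minimal sketch whose tag layout is taken from the configuration template described below:

```python
import xml.etree.ElementTree as ET

def check_cluster_xml(path):
    """Report tags that vbox_cluster reads but cluster.xml lacks."""
    root = ET.parse(path).getroot()  # the <vbc> element
    missing = []
    # top-level sections the script parses
    for tag in ("vm", "frontend", "compute"):
        if root.find(tag) is None:
            missing.append(tag)
    # per-section settings used to build the vboxmanage commands
    for section in ("frontend", "compute"):
        for tag in ("memory", "boot", "private", "hd", "storage"):
            if root.find("%s/%s" % (section, tag)) is None:
                missing.append("%s/%s" % (section, tag))
    return missing
```

An empty return value means all expected tags are present; anything listed should be added before running the script.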
Install Frontend
~~~~~~~~~~~~~~~~
#. Run script to create VM settings in VirtualBox::
$ ./vbox_cluster --type=frontend cluster.xml
#. Start the VM either from the VBox Manager GUI console or using a command::

       $ vboxmanage startvm <VMName>

   where ``<VMName>`` is the name of the VM specified in the configuration file.
#. When you see the Rocks install screen, proceed with a normal Rocks frontend install.
   For the public IP use your next available VBox IP. With the default VBox install
   these are the network settings to use (assuming the frontend is the first VM, it uses the first
   available IP)::
IP =
gateway =
DNS server =
FQDN = fe.public (or any other name)
Install Compute Nodes
~~~~~~~~~~~~~~~~~~~~~
Use the same ``cluster.xml`` file that was created for installing frontend, it has a separate section
for compute nodes configuration.
#. Run script to create compute node VMs settings in VirtualBox::
$ ./vbox_cluster --type=compute cluster.xml
#. On the frontend VM run: ::
# insert-ethers
   Start the first compute node VM either from the VBox Manager GUI or via the command line::
$ vboxmanage startvm <VMName>
When the compute node is "discovered" by ``insert-ethers``, start the next compute node VM.
Quit insert-ethers once all compute nodes that need to be installed are "discovered".
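The start-one, wait-for-discovery, start-next loop above can be sketched as a small wrapper. The VM names are placeholders, and ``runner``/``pause`` are injectable so the sketch can be exercised without VirtualBox installed; by default they run ``vboxmanage startvm`` and wait for an Enter keypress:

```python
import subprocess

def start_compute_nodes(vm_names, runner=subprocess.check_call, pause=input):
    """Start compute VMs one at a time so insert-ethers can discover each.

    vm_names: compute VM names as defined in cluster.xml (placeholders here).
    runner executes the vboxmanage command; pause blocks until the operator
    confirms that insert-ethers has discovered the node.
    """
    started = []
    for name in vm_names:
        runner(["vboxmanage", "startvm", name])
        started.append(name)
        pause("Once insert-ethers discovers %s, press Enter to continue..." % name)
    return started
```

This mirrors the manual procedure: each node must be "discovered" before the next one powers on, otherwise insert-ethers may assign names out of order.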
Install Guest Additions
~~~~~~~~~~~~~~~~~~~~~~~

Guest Additions allow mounting directories from the host computer on the guest VM and transferring files
between the two. If you don't need mounting from the host, skip this section.
#. Mount Guest Additions ISO to your VM using one of two methods:
#. Via VirtualBox Manager GUI console
+ In the VirtualBox Manager console, start the VM on which you want to install the extensions.
  After it boots, choose this VM from the VMs list and
  click on the ``Storage`` tab.
+ From the new ``VMname storage`` window choose a controller
  that was configured to support a CD/DVD drive and click on the CD/DVD image
  under it. This enables the CD/DVD icon under ``Attributes``.
+ Click on the CD/DVD image to open a menu and choose ``Choose a virtual CD/DVD disk file...``.
  In the opened file browser window, locate the Guest Additions ISO
  VBoxGuestAdditions_4.3.10.iso in your directory structure. Click ``Open``,
  then in the ``VMname storage`` window confirm by clicking ``Ok``.
#. Via the command line. You need to provide the VM name, controller specification,
   and ISO location, for example::

       $ vboxmanage storageattach VMname --storagectl IDE --port 0 --device 0 \
             --type dvddrive --medium /path/to/vbox/ISO/VBoxGuestAdditions_4.3.10.iso
#. Install Guest Additions on the guest VM ``VMname``
+ Login on ``VMname`` VM as root
+ Check that ISO is mounted ::
# mount
/dev/sda1 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
/dev/sr0 on /media/VBOXADDITIONS_4.3.10_93012 type iso9660 (ro,nosuid,nodev,uhelper=udisks,uid=0,gid=0,iocharset=utf8,mode=0400,dmode=0500)
data1 on /media/sf_data1 type vboxsf (gid=399,rw)
# ls /media/VBOXADDITIONS_4.3.10_93012/
32Bit cert VBoxSolarisAdditions.pkg
64Bit OS2 VBoxWindowsAdditions-amd64.exe
AUTORUN.INF VBoxWindowsAdditions.exe VBoxWindowsAdditions-x86.exe
+ Install Guest Additions ::
# /media/VBOXADDITIONS_4.3.10_93012/
Verifying archive integrity... All good.
Uncompressing VirtualBox 4.3.10 Guest Additions for Linux............
VirtualBox Guest Additions installer
Copying additional installer modules ...
Installing additional modules ...
Removing existing VirtualBox non-DKMS kernel modules [ OK ]
Building the VirtualBox Guest Additions kernel modules
Building the main Guest Additions module [ OK ]
Building the shared folder support module [ OK ]
Building the OpenGL support module [ OK ]
Doing non-kernel setup of the Guest Additions [ OK ]
Starting the VirtualBox Guest Additions [ OK ]
Installing the Window System drivers
Installing X.Org Server 1.13 modules [ OK ]
Setting up the Window System to use the Guest Additions [ OK ]
You may need to restart the hal service and the Window System (or just restart
the guest system) to enable the Guest Additions.
Installing graphics libraries and desktop services components [ OK ]
+ Verify that mount works ::
# ls /media
sf_data1 VBOXADDITIONS_4.3.10_93012
The directory ``sf_data1`` is now mounted under /media; it corresponds to the directory that was
specified in the ``Shared Folders`` settings with the name ``data1``.
+ Copy the installer script to a local directory (for installing Guest Additions on the compute nodes) ::
# mkdir /share/apps/root
# cp /media/VBOXADDITIONS_4.3.10_93012/ /share/apps/root
+ Unmount the CD: either click ``Eject`` in the ``VBOXADDITIONS_4.3.10`` window (on the VM desktop) or run ::

      # umount /media/VBOXADDITIONS_4.3.10_93012/
+ To install Guest Additions on the compute nodes, run on the frontend ::

      # rocks run host compute /share/apps/root/

  Note: the frontend and compute nodes must have the same shared folders enabled.
#. In VirtualBox Manager remove the disk from the virtual drive in ``VMname Storage`` using
   the ``Attributes`` menu.
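The shared-folder naming convention used above (a folder registered as ``data1`` appears on the guest as ``/media/sf_data1``, because the Guest Additions automount adds an ``sf_`` prefix) can be captured in a one-line helper; this is an illustrative sketch, not part of the repo:

```python
def guest_mount_point(shared_name, media_root="/media"):
    """Guest Additions automount path for a VirtualBox shared folder.

    A folder registered in 'Shared Folders' under shared_name shows up
    on the guest as <media_root>/sf_<shared_name>.
    """
    return "%s/sf_%s" % (media_root, shared_name)
```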
.. _configfile:
Configuration file
------------------
This file is a set of parameters used to describe the frontend and compute node
VM images of the cluster. The file is parsed by the ``vbox_cluster`` script and the values
are used to create all the vboxmanage commands needed to define and register VMs
with VirtualBox. Most values are working defaults that don't need changes::
<vbc version="0.1">
<vm name="x" private="y">
describes generic info for the cluster.
"name" refers to the VM name; "private" is the name of the internal network.
Both are relevant on the VBox side, not inside the cluster.
<iso os="Linux_64" path="/path/to/boot-6.1.1.iso"/>
type of VM's os and Rocks boot ISO path
<shared name="data1" path="/some/path1/data1"/>
host directory from path will be automounted on guest VM as /media/sf_data1
<shared name="data2" path="/some/path2/data2"/>
host directory from path will be automounted on guest VM as /media/sf_data2
<enable cpuhotplug="on" />
enables changing the number of CPUs on a powered-off or running VM
<frontend cpus="2">
number of cpus
<memory base="4000" vram="32" />
allocate base and video memory to VM
<boot order="dvd disk none none" />
boot order
<private nic="intnet" nictype="82540EM" nicname="default"/>
NIC default settings for private network
<public nic="nat" nictype="82540EM" />
NIC default settings for public network
<hd size="50000" variant="Standard"/>
disk image size and type
<syssetting mouse="usbtablet" audio="none"/>
mouse and audio
<storage name="SATA" type="sata" controller="IntelAhci" attr="hdd" port="0" device="0"/>
information for VM disk image
<storage name="IDE" type="ide" controller="PIIX4" attr="dvddrive" port="0" device="0"/>
information for VM CD/DVD drive
<compute cpus="1" count="2">
number of cpus per compute node and number of compute nodes to create
<memory base="1000" vram="32" />
allocate base and video memory to VM
<boot order="net disk none none" />
boot order
<private nic="intnet" nictype="82540EM" nicname="default"/>
NIC settings for private network
<hd size="50000" variant="Standard"/>
disk image size
<syssetting audio="none"/>
<storage name="SATA" type="sata" controller="IntelAhci" attr="hdd" port="0" device="0"/>
information for VM disk image
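To make the mapping concrete, the settings above translate into a ``vboxmanage modifyvm`` invocation roughly as follows. This sketch mirrors the argument assembly done by the ``vbox_cluster`` script (the VM name and the dictionary keys follow the script's internal naming; the helper itself is illustrative):

```python
def modifyvm_command(name, vm):
    """Assemble a 'vboxmanage modifyvm' call from parsed config values.

    vm maps configuration attributes to values; nic2 is added only when
    a <public> tag was present, i.e. for the frontend.
    """
    cmd = "vboxmanage modifyvm %s" % name
    cmd += " --memory %s --vram %s" % (vm["mem_base"], vm["mem_vram"])
    cmd += " --nic1 %s --nictype1 %s --intnet1 %s" % (
        vm["private_nic"], vm["private_nictype"], vm["vm_network"])
    if "public_nic" in vm:  # frontend only
        cmd += " --nic2 %s --nictype2 %s" % (vm["public_nic"], vm["public_nictype"])
    return cmd
```

For a compute node the dictionary simply lacks the ``public_*`` keys, so no second NIC is configured.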
Starting VBox after TimeMachine restore
---------------------------------------

If your VirtualBox was restored from a TimeMachine backup along with other applications,
the needed daemons and devices (/dev/vboxdrv, /dev/vboxdrvu, /dev/vboxnetctl) may no
longer be present on the Mac host. The following steps fix this issue. These steps may also be needed
if /dev/vbox* get lost.
#. Recreate launchctl startup ::
sudo su
cd /Library/LaunchDaemons/
ln -s ../Application\ Support/VirtualBox/LaunchDaemons/org.virtualbox.startup.plist .
launchctl load /Library/LaunchDaemons/org.virtualbox.startup.plist
#. Recreate host only networks
+ Start VirtualBox
+ From ``Preferences...`` open ``Network`` tab
+ Choose ``Host-only Networks`` tab and click on add icon (plus sign) to add the network
+ Confirm with ``Ok`` button
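The same host-only network can be recreated from the command line with ``vboxmanage hostonlyif``. A sketch follows; the runner is injectable so the block can be tested without VirtualBox, and it assumes the freshly created interface is ``vboxnet0`` (VirtualBox allocates the next free ``vboxnetN``) with the usual 192.168.56.1 default address:

```python
import subprocess

def recreate_hostonly(ip="192.168.56.1", runner=subprocess.check_call):
    """CLI equivalent of the GUI steps above.

    'hostonlyif create' allocates the next free vboxnetN interface;
    'hostonlyif ipconfig' then assigns the host-side address.
    """
    runner(["vboxmanage", "hostonlyif", "create"])
    runner(["vboxmanage", "hostonlyif", "ipconfig", "vboxnet0", "--ip", ip])
```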
Setting a NAT network
---------------------

::

    # add nat network
    vboxmanage natnetwork add --netname ${NATNAME} --network ${NETWORK} --enable --dhcp off
    vboxmanage natnetwork start --netname ${NATNAME}
    vboxmanage list natnets

    # remove nat network
    vboxmanage list natnets
    vboxmanage natnetwork stop --netname ${NATNAME}
    vboxmanage natnetwork remove --netname ${NATNAME}
<vbc version="0.1">
<vm name="x" private="local-x">
<iso os="Linux_64" path="/path/to/boot-6.1.1.iso"/>
<shared name="data1" path="/path/to/shared/data1"/>
<shared name="data2" path="/path/to/other"/>
<enable cpuhotplug="on" />
</vm>
<frontend cpus="2">
<memory base="4000" vram="32" />
<boot order="dvd disk none none" />
<private nic="intnet" nictype="82540EM" nicname="default"/>
<public nic="nat" nictype="82540EM" />
<hd size="50000" variant="Standard"/>
<syssetting mouse="usbtablet" audio="none"/>
<storage name="SATA" type="sata" controller="IntelAhci" attr="hdd" port="0" device="0"/>
<storage name="IDE" type="ide" controller="PIIX4" attr="dvddrive" port="0" device="0"/>
</frontend>
<compute cpus="1" count="2">
<memory base="1000" vram="32" />
<boot order="net disk none none" />
<private nic="intnet" nictype="82540EM" nicname="default"/>
<hd size="50000" variant="Standard"/>
<syssetting audio="none"/>
<storage name="SATA" type="sata" controller="IntelAhci" attr="hdd" port="0" device="0"/>
</compute>
</vbc>
#!/usr/bin/env python
import os
import sys
import string
import logging
from optparse import OptionParser, OptionGroup
import subprocess
import xml.etree.ElementTree
from pprint import pprint
class VBwrapper:
"""Base class for VirtualBox VMs creation"""
def __init__(self, argv):
self.args = argv[1:]
self.logfile = os.path.basename(argv[0]) + ".log"
self.version = "Version 0.1"
def parseArgs(self):
"""parse command line arguments """
self.errors = { "errType":"Need to provide type. Valid types: frontend or compute",
"errNoConfig":"Need to provide xml configuration file.",
"errConfig":"Configuration file %s is not found"}
usage = "Usage: %prog [-h] [-d] --type=[frontend|compute] configFile"
self.parser = OptionParser(usage, version=self.version)
self.parser.add_option("-d", "--debug",
dest="debug", action="store_true",
default=False, help="Prints values parsed from input file and exits")
self.parser.add_option("--type", dest="type", default=False, help="VM type is frontend or compute")
(options, args) = self.parser.parse_args(self.args)
        if options.type not in ["frontend", "compute"]:
            self.parser.error(self.errors["errType"])
        self.type = options.type
        if not args:
            self.parser.error(self.errors["errNoConfig"])
        self.config = args[0]
self.debug = options.debug
    def checkCommand(self):
        """verify VBox command exists"""
        cmd = "which vboxmanage"
        try:
            out = subprocess.check_output(cmd, shell=True)
        except subprocess.CalledProcessError:
            err = "Command %s not found.\nCheck VirtualBox installation and make sure directory with %s is in your $PATH"
            sys.exit(err % ("vboxmanage", "vboxmanage"))
        self.cmd = out[:-1]
    def initLogging(self):
        """ create logger. logging levels: debug,info,warn,error,critical """
        self.logger = logging.getLogger("%prog")
        self.logger.setLevel(logging.INFO)
        # file handler
        fh = logging.FileHandler(self.logfile)
        # console handler
        ch = logging.StreamHandler()
        # formatter
        formatter = logging.Formatter('%(asctime)s %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M')
        fh.setFormatter(formatter)
        ch.setFormatter(formatter)
        # add handlers to logger
        self.logger.addHandler(fh)
        self.logger.addHandler(ch)
    def parseConfig(self):
        """parse input xml file """
        if not os.path.isfile(self.config):
            self.parser.error(self.errors["errConfig"] % self.config)
        xmlroot = xml.etree.ElementTree.parse(self.config).getroot()
        version = xmlroot.attrib["version"]
        # get generic info
        self.parseVmTag(xmlroot)
        if self.type == "frontend":
            # get frontend info
            self.parseFrontendTag(xmlroot)
        else:
            # get compute info
            self.parseComputeTag(xmlroot)
        if self.debug:
            pprint(self.VM)
            sys.exit(0)
def parseVmTag(self, xmlroot):
"""parse <vm> tag info"""
self.VM = {}
# VM and network names
xmlnode = xmlroot.findall("./vm")[0]
self.VM["vm_name"]= xmlnode.attrib["name"]
self.VM["vm_network"] = xmlnode.attrib["private"]
# iso info
xmlnode = xmlroot.findall("./vm/iso")[0]
self.VM["iso_os"] = xmlnode.attrib["os"]
self.VM["iso_path"] = xmlnode.attrib["path"]
# shared folders info
xmlnodes = xmlroot.findall("./vm/shared")
        self.VM["shared"] = []
        for node in xmlnodes:
            shared = {}
            shared["shared_name"] = node.attrib["name"]
            shared["shared_path"] = node.attrib["path"]
            self.VM["shared"].append(shared)
# hotplug cpu info
xmlnode = xmlroot.findall("./vm/enable")[0]
if "cpuhotplug" in xmlnode.attrib:
self.VM["cpuhotplug"] = xmlnode.attrib["cpuhotplug"]
def parseFrontendTag(self, xmlroot):
"""parse <frontend> tag info"""
Node = "frontend"
self.parseCpu(xmlroot, Node)
self.parseMemory(xmlroot, Node)
self.parseBoot(xmlroot, Node)
self.parsePrivate(xmlroot, Node)
self.parsePublic(xmlroot, Node)
self.parseHD(xmlroot, Node)
self.parseSyssetting(xmlroot, Node)
self.parseStorage(xmlroot, Node)
def parseComputeTag(self, xmlroot):
"""parse <compute> tag info"""
Node = "compute"
self.parseCpu(xmlroot, Node)
self.parseComputeCount(xmlroot, Node)
self.parseMemory(xmlroot, Node)
self.parseBoot(xmlroot, Node)
self.parsePrivate(xmlroot, Node)
self.parseHD(xmlroot, Node)
self.parseSyssetting(xmlroot, Node)
self.parseStorage(xmlroot, Node)
def parseCpu(self, xmlroot, Node):
"""find number of cpus requested """
xmlnode = xmlroot.findall("./%s" % Node)[0]
self.VM["cpus"] = xmlnode.attrib["cpus"]
def parseComputeCount(self, xmlroot, Node):
"""find number of compute nodes"""
xmlnode = xmlroot.findall("./%s" % Node)[0]
if "count" in xmlnode.attrib:
self.VM["compute_count"] = xmlnode.attrib["count"]
def parseMemory(self, xmlroot, Node):
"""find memory requests"""
xmlnode = xmlroot.findall("./%s/memory" % Node)[0]
self.VM["mem_base"] = xmlnode.attrib["base"]
self.VM["mem_vram"] = xmlnode.attrib["vram"]
    def parseBoot(self, xmlroot, Node):
        """find boot order"""
        xmlnode = xmlroot.findall("./%s/boot" % Node)[0]
        order = xmlnode.attrib["order"]
        items = order.split()
        boot_order = ""
        for n in range(len(items)):
            boot_order += "--boot%d %s " % (n+1, items[n])
        self.VM["boot_order"] = boot_order
def parsePrivate(self, xmlroot, Node):
"""find private nic info"""
xmlnode = xmlroot.findall("./%s/private" % Node)[0]
self.VM["private_nic"] = xmlnode.attrib["nic"]
self.VM["private_nictype"] = xmlnode.attrib["nictype"]
self.VM["private_nicname"] = xmlnode.attrib["nicname"]
def parsePublic(self, xmlroot, Node):
"""find public nic info"""
xmlnode = xmlroot.findall("./%s/public" % Node)[0]
self.VM["public_nic"] = xmlnode.attrib["nic"]
self.VM["public_nictype"] = xmlnode.attrib["nictype"]
def parseHD(self, xmlroot, Node):
"""find disk image info"""
xmlnode = xmlroot.findall("./%s/hd" % Node)[0]
self.VM["hd_size"] = xmlnode.attrib["size"]
self.VM["hd_variant"] = xmlnode.attrib["variant"]
def parseSyssetting(self, xmlroot, Node):
"""find mouse and audio info"""
xmlnode = xmlroot.findall("./%s/syssetting" % Node)[0]
if "mouse" in xmlnode.attrib:
self.VM["mouse"] = xmlnode.attrib["mouse"]
if "audio" in xmlnode.attrib:
self.VM["audio"] = xmlnode.attrib["audio"]
def parseStorage(self, xmlroot, Node):
"""find attached controller and disk info"""
xmlnodes = xmlroot.findall("./%s/storage" % Node)
        self.VM["storage"] = []
        for node in xmlnodes:
            ctl = {}
            ctl["name"] = node.attrib["name"]
            ctl["type"] = node.attrib["type"]
            ctl["controller"] = node.attrib["controller"]
            ctl["attr"] = node.attrib["attr"]
            ctl["port"] = node.attrib["port"]
            ctl["device"] = node.attrib["device"]
            self.VM["storage"].append(ctl)
def getCmdOutput(self, cmd):
"""get output of command line"""
info, err = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()
if err:
self.logger.error("Failed to execute %s() with error: %s" % (cmd, err))
return info
    def commandCreateVm(self, name):
        """create and run createvm command"""
        cmd = "%s list vms" % self.cmd
        info = self.getCmdOutput(cmd)
        lines = info.splitlines()
        for line in lines:
            if line.find(name) > 0:
                self.logger.error("Failed to create VM: %s already exists" % name)
                return
        template = "%s createvm --name %s --ostype %s --register" \
                   % (self.cmd, name, self.VM["iso_os"])
        self.getCmdOutput(template)
    def commandModifyVm(self, name):
        """create and run modifyvm command"""
        # memory and private nic settings
        template = "%s modifyvm %s" % (self.cmd, name) \
            + " --memory %s --vram %s" % (self.VM["mem_base"], self.VM["mem_vram"]) \
            + " --nic1 %s --nictype1 %s --intnet1 %s" % (self.VM["private_nic"], self.VM["private_nictype"], self.VM["vm_network"])
        # add public nic for frontend
        if "public_nic" in self.VM:
            template += " --nic2 %s --nictype2 %s" % (self.VM["public_nic"], self.VM["public_nictype"])
        # add audio
        if "audio" in self.VM:
            template += " --audio %s" % self.VM["audio"]
        # add mouse
        if "mouse" in self.VM:
            template += " --mouse %s" % self.VM["mouse"]
        # add boot order
        template += " %s" % self.VM["boot_order"]
        self.getCmdOutput(template)
def commandModifyHotPlug(self, name):
"""create and run modifyvm command for hot plug cpu"""
# hotplug setting
if self.VM["cpuhotplug"] == "on":