Monday, March 29, 2010

Xen - Migration

Hello everybody...

Here I am again posting about our cloud updates...

Today we decided to install a new physical machine as a node in our cloud: Homer's wife, Marge. The idea is to eventually make Marge our storage and cluster controller, leaving Homer as only our cloud and Walrus controller.

After installing Marge (CentOS 5.4), I realized that our LAN had no Internet access. That was because, when we installed the 2nd NIC on Homer, additional configuration was necessary (IP masquerading). Since it is not a big deal, we made the required changes and now everything is running well. You can check here a good guide on how to forward Internet access to your LAN if you have a Linux server with 2 NICs acting as your LAN's router. We needed the Internet connection in order to run the CentOS updates on Marge.
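For reference, the masquerading setup boils down to a few iptables rules on Homer. This is a minimal sketch assuming eth0 faces the outside world and eth1 faces the private LAN (the same layout described in the previous post):

```shell
# On the router box (Homer): enable IPv4 forwarding for this session
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade everything leaving through the external interface (eth0)
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Let LAN traffic out, and let replies back in
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT

# Persist across reboots (CentOS 5): save the rules, and set
# net.ipv4.ip_forward = 1 in /etc/sysctl.conf
service iptables save
```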

Well, let's talk now about interesting things... we decided to test the live migration feature of Xen using our two currently installed physical nodes: Marge and Maggie. For now we are not going to worry about Eucalyptus... The idea is just to test Xen's live migration between our physical nodes.

So, we used a previously created CentOS image to run on Xen. If you want to know how to create a VM to run on Xen, check this.

At this point, the VM was working fine, but completely isolated on one physical node: Maggie. So we decided to create an NFS directory on Homer and store the image (the VM file) there. That way, we could access the image from any physical node "without" the need to copy it. To enable NFS we just followed the instructions found on this page. It worked fine from the very first try.

Now, after rebooting the physical machines, all of them mount at startup a directory shared by Homer. This directory contains the VM image.
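The NFS setup itself is short. As a sketch (the export path and subnet below are placeholders, not our actual values): on Homer, export the directory holding the image, and on each node add an fstab entry so it mounts at boot:

```shell
# On Homer: export the image directory to the private subnet.
# Add to /etc/exports:
#   /var/lib/xen/images  192.168.1.0/24(rw,sync,no_root_squash)
exportfs -ra
service nfs restart
chkconfig nfs on

# On each node (Maggie, Marge): mount it at startup.
# Add to /etc/fstab:
#   homer:/var/lib/xen/images  /var/lib/xen/images  nfs  defaults  0 0
mkdir -p /var/lib/xen/images
mount /var/lib/xen/images
```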

Almost done! Now, we had to configure Xen in order to migrate the VMs...
To do that, you have to edit the file /etc/xen/xend-config.sxp and make the following changes:

(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-address '')
(xend-relocation-hosts-allow '')

Of course, we restarted the Xen service in all physical nodes:

/etc/init.d/xend restart

... and then started the VM in Maggie with the following command:

xm create our-vm

Finally, we migrated the VM from Maggie to Marge:

xm migrate -l our-vm target-host
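To check that the migration actually happened, you can list the running domains on each side; the VM should vanish from Maggie's list and show up on Marge's, without the guest ever going down:

```shell
# On Maggie (source): our-vm should no longer be listed
xm list

# On Marge (destination): our-vm should now appear, still running
xm list
```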

That's all for today. Until next post.

Lucio

Tuesday, March 23, 2010

2nd NIC + DHCP

Here we are again, posting about our cloud adventures...

As the previous post mentioned, we decided to install another NIC in homer (our server)... and have all our nodes on a private LAN. We got an old NIC (from 1998) which is at least 100 Mbps... well, it really doesn't matter right now, since our network switch is also pretty old (but it is 10/100 Mbps as well).

The hardest part of making the installation work was opening the computer! Yeah... unfortunately it had some kind of security lock on it... but after contacting our systems lab technician (or whatever his title is), it was finally opened (he had the power! I mean, the tool).

After inserting the NIC, our DHCP server was configured to hand out IPs only to our private LAN. In this case, our new NIC, named eth1 by default, provides IPs to our cloud, while the other, eth0, is connected to the external world. Assuming that nobody (except us) needs access to the nodes, there is absolutely no problem with this topology.
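As a sketch of the DHCP side (the subnet and address ranges here are placeholders for our real values), on CentOS 5 you can pin dhcpd to the private NIC and serve only that subnet:

```shell
# /etc/sysconfig/dhcpd -- make dhcpd listen only on the private NIC:
#   DHCPDARGS=eth1

# /etc/dhcpd.conf -- serve leases for the private subnet only:
#   ddns-update-style none;
#   subnet 192.168.1.0 netmask 255.255.255.0 {
#       range 192.168.1.100 192.168.1.200;
#       option routers 192.168.1.1;              # homer's eth1 address
#       option domain-name-servers 192.168.1.1;
#   }

service dhcpd restart
```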

The last modification was in eucalyptus.conf on our server homer, in order to tell it which interface is public (external) and which one is private (internal). In our case, it is something like this:

VNET_PUBINTERFACE="eth0"
VNET_PRIVINTERFACE="eth1"

After restarting all the services on both the server and the node, everything was working "sweet".

Ah, of course... I still have to close the server... I'll do it soon... (or not)

Until next post,
Lucio

Monday, March 22, 2010

An Image of Perfection

The time had finally come to actually create a virtual machine instance in the cloud. First we required an image to instantiate, and for this we turned to a set of pre-packaged example images provided by Eucalyptus. The images there provide only 64-bit operating systems, but without too much trouble we found a 32-bit CentOS image. This was then bundled using the Eucalyptus tools and registered with Walrus (the cloud storage).

EDITED by Lucio: Make sure to register all the files (image + kernel + ramdisk). That was our first issue: we were not using the ramdisk, and the machine would not start because of that... after lots of coffee and mate we figured out why.
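For the record, registering all three pieces with the euca2ools looks roughly like this (the bucket and file names below are placeholders); each euca-register prints the eki/eri/emi identifier that you later pass to euca-run-instances:

```shell
# Kernel -> eki-...
euca-bundle-image -i vmlinuz-2.6.18-xen --kernel true
euca-upload-bundle -b centos-kernel -m /tmp/vmlinuz-2.6.18-xen.manifest.xml
euca-register centos-kernel/vmlinuz-2.6.18-xen.manifest.xml

# Ramdisk -> eri-...  (the piece we were missing!)
euca-bundle-image -i initrd-2.6.18-xen.img --ramdisk true
euca-upload-bundle -b centos-ramdisk -m /tmp/initrd-2.6.18-xen.img.manifest.xml
euca-register centos-ramdisk/initrd-2.6.18-xen.img.manifest.xml

# Root filesystem image -> emi-...
euca-bundle-image -i centos.img
euca-upload-bundle -b centos-image -m /tmp/centos.img.manifest.xml
euca-register centos-image/centos.img.manifest.xml
```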

The next step was to create an instance, using the euca-run-instances command. It goes without saying that this failed. Our instances remained forever trapped in the pending state, never to be started. The node controller log file was of no real use, providing an error message that led us to no solutions despite extensive googling. Our first instinct was to attempt to start the image manually on our node. We created the necessary Xen configuration file and successfully started the virtual machine, proving that the image itself was not the issue. After much poking and prodding, we determined that we were only specifying the kernel and file system images, but also had a ramdisk image available that we were not making use of. We added this to Walrus and included it in our call to euca-run-instances, and met with success. Well, sort of.

euca-run-instances -k mykey --kernel eki-907A1380 --ramdisk eri-AE9113E8 emi-3F0A1253

Our virtual machine had no IP address, and thus no way for us to connect to it. While it was comforting to know that it was there, happily doing nothing, we needed to be able to actually talk to it. We experimented with the different network configuration modes of Eucalyptus, that is, SYSTEM, STATIC, and MANAGED, with little success. We then disconnected our switch from the network and installed a DHCP server on homer. The Eucalyptus network configuration mode was set to SYSTEM, which means that Eucalyptus simply assigns a random MAC address to each VM and lets DHCP handle the rest. With this setup, we were successfully able to start a VM with an IP address and connect to it via SSH.

The problem will be converting this into something usable. At the moment, our cloud is in complete isolation from the rest of the world. We have two options: 1) we reconnect our cloud to the network and secure ourselves a range of IP addresses that we have control over and can assign via our DHCP server, or 2) we keep the cloud on its own network and add a second NIC card to our cloud controller (homer) for external access. All access to the cloud would then be routed through this machine. While option #1 seems like the ideal solution, option #2 is far more feasible at the moment, and hence this will be our route for the time being.

Until next time,
Mike

Configuring Eucalyptus


The first question after the installation was: now, what?

Well, Eucalyptus provides a default web interface for its users. In order to configure it properly, some steps on the cloud controller were required, in what they call the "First steps" setup. You can check those steps here.

This web interface is quite simple. To access it, point your browser to https://ip_front_end:8443. The first time, the username and password were both admin (does that remind you of anything?). Here is a screenshot of our login screen and here another screenshot after logging in.

At this time we were excited to see something happening... I mean... something interesting, of course...

Our next step will be related to the Hypervisor (Xen) configuration. It seems that the time to have some fun (maybe I should say try to have...) is coming. Stay tuned.

Saturday, March 20, 2010

Installing CentOS 5.4 and Eucalyptus 1.6.2

We first visited the CentOS web site to get the installation media. From there we navigated to a nearby mirror and downloaded the 6 CDs required to install CentOS 5.4. A hat tip to the CS Club of Waterloo.

We installed CentOS on two machines: homer and maggie. Both installations included the package groups Server and Virtualization (the latter provides the Xen hypervisor).

We then installed Eucalyptus in both machines following this tutorial. Up to this point, no major difficulties. homer is running the CLC, the CC, the SC and Walrus. maggie is running a NC.

Next step: check that the controllers can talk to each other and instantiate a few VMs. =)

Friday, March 19, 2010

Second attempt

We decided to go with Debian, which was supposed to support Xen. However, users reported stability problems. In addition, it seems Debian is dropping Xen support until the release of Debian 6.0.

We then found a list of suggested distributions to use with Xen (check it out here) and considered using CentOS.

This time around we decided to check whether Eucalyptus would work on CentOS before downloading the distribution and installing it. Lady Luck smiled on us this time, so we started downloading CentOS!

Private Cloud, get prepared 'cause here we go!

First attempt

Ubuntu 9.10 Server Edition provides support for Eucalyptus, so we decided to go with it. We installed all the nodes and the configuration of everything was automatic and looked like magic (translation: we had no idea what was happening).

Later on, we tried to start a VM, but the process failed giving the following error:


[EUCAERROR ] libvirt: internal error no supported architecture for os type 'hvm' (code=1)
[EUCAFATAL ] hypervisor failed to start domain


We did some research with the help of Saint Google and found out that Ubuntu works with KVM as its default hypervisor. This wouldn't be a problem... except for the little fact that KVM requires hardware support for virtualization, and our host nodes' CPUs were born long before hardware virtualization support existed.
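A quick way to verify this on Linux is to look for the virtualization flags in /proc/cpuinfo; on our nodes this prints nothing:

```shell
# vmx = Intel VT-x, svm = AMD-V; no output means the CPU has no
# hardware virtualization support, so KVM cannot run these guests
egrep '(vmx|svm)' /proc/cpuinfo
```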

The first solution that came to our minds was to swap KVM for the Xen hypervisor. But we encountered another problem: Ubuntu stopped supporting Xen a while ago, and reports from Xen users suggest that it can only run in an unstable state.

First road block.

Hardware specifications

Our cloud is going to be a small one. Really. Small. We have only five computers, so we are going to use four for the nodes and one for the cluster head and cloud head. The latter machine will probably host Walrus as well, once I understand what Walrus is supposed to do. I think Mike already knows, but he is not telling me.

The machine for the cluster head has the following specifications:

- Intel(R) Pentium(R) D CPU 3.0GHz
- 1.5 GB RAM
- 120 GB hard disk
- 10/100Base-T Ethernet
- DVD-ROM drive

The machines for the nodes have the following specifications:

- Intel(R) Pentium(R) 4 CPU 2.80GHz
- 1 GB RAM
- 80 GB hard disk
- 10/100/1000Base-T Ethernet
- CD-ROM drive !!!!

Can you believe it? CD-ROM drives! These are probably the last machines around to be built with a CD-ROM drive. =P

In addition, the network connection is limited to 10/100, so the NICs in the host nodes will be a little wasted.

Our plans

The whole idea is to use Eucalyptus as the software to power our private cloud. Eucalyptus is composed of Node Controllers (NC), Cluster Controllers (CC) and a Cloud Controller (CLC). There is also a Storage Controller called Walrus, but I'm not sure yet how that one is supposed to work.

Node Controllers are to be installed in host nodes running a hypervisor. The nodes will host virtual machines.

A Cluster Controller is installed in the head of a cluster to work as an intermediary between the Cloud Controller and the Node Controllers and probably to do some additional management of the host nodes.

The Cloud Controller provides the front-end that registered users can use to create virtual machines.

Walrus? Somebody please enlighten me. =)

EDIT:
It seems there is an additional controller associated with Walrus: the Storage Controller (SC).

EDITED by Lucio:
Here are links to the tools that we are planning to use to build our cloud:

Thursday, March 18, 2010

Everybody has a cloud! We want OUR cloud!

So the title says it all. We are up for building our own private cloud. It's not that we want to sell hosting services... mmm... we could... nah, let's keep it academic for now. The idea is to experience the challenges of building and managing a private cloud (and what a challenge it will be, given the hardware we have!). Hopefully, we'll also learn what features are missing, what a cloud is useful for, and how we can contribute to the evolution of cloud computing.

The sole purpose of this blog is to document our adventure. We'll post some info on the infrastructure we have available (tiny one as of now), what software stacks we try and the like.

Stay tuned.

EDIT:
Just to make it clear what we are talking about, here's a definition of private cloud by IBM:
Private Clouds - activities and functions are provided "as a service," over a company's intranet. It is built by an organization for its own users, and everything is delivered within the organization's firewall (instead of the Internet). The private cloud owner does not share resources with any other companies, so multitenancy is not an issue. Also called an "internal cloud."