Linux Container (LXC) with LXD Hypervisor, Part 1: Installation and Creation

For the past few weeks, I've been looking into creating LXC containers for both the Fedora and Ubuntu distros. One of the creation methods is downloading a pre-built image.
$ lxc-create -t download -n test-container -- -d ubuntu -r trusty -a amd64

However, creating unprivileged containers is rather cumbersome, and the list of language bindings for the APIs is limited. What if we had a daemon, or a container hypervisor, that monitors and manages all the containers? And, in addition to that, also handles all the security privileges and provides a RESTful web API for remote management? Well, that's the purpose of the creation of LXD, the LXC container hypervisor. Think of it as a glorified LXC 'download' creation method with additional features.
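Since LXD exposes its API over a local unix socket, you can poke at it directly. A minimal sketch using curl (assuming curl 7.40+ with unix-socket support, and the default socket path):

```shell
# Query the LXD API root over its default unix socket.
# Requires a running LXD daemon; the socket path is the Ubuntu default.
$ curl --unix-socket /var/lib/lxd/unix.socket http://unix.socket/1.0
```

The same API is what the 'lxc' client tool talks to under the hood.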

The LXD project is under the management of Canonical Ltd, the company behind Ubuntu. Hence, it's recommended to use Ubuntu if you don't want to install through source code compilation.

Installation and setup of LXD as shown below was done in Ubuntu 15.04.

Firstly, install the LXD package.
$ sudo apt-get install lxd
Warning: The home dir /var/lib/lxd/ you specified already exists.
Adding system user 'lxd' (UID 125) ...
Adding new user 'lxd' (UID 125) with group 'nogroup' ...
The home directory '/var/lib/lxd/' already exists. Not copying from '/etc/skel'.
adduser: Warning: The home directory '/var/lib/lxd/' does not belong to the user you are currently creating.
Adding group 'lxd' (GID 137) ...

From the message above, note that your current login user does not belong to the group 'lxd' (GID 137) yet. To update your login user's groups in the current session, run the command below so that you don't need to log out and log back in.
$ newgrp lxd

Check out current login user groups. You should see that the current login user belongs to the group 'lxd' (GID 137).
$ id $USER | tr ',' '\n'
uid=1000(ang) gid=1000(ang) groups=1000(ang)

$ groups
ang adm cdrom sudo dip plugdev lpadmin sambashare lxd

Next, we need to set the remote server which contains the pre-built container images.
$ lxc remote add images
Generating a client certificate. This may take a minute...

List all the available pre-built container images from the server we've just added. Pay attention to the colon (:) at the end of the command, as it is needed; otherwise, the command will list the locally downloaded images. The list is quite long, so I've reformatted the layout and only show the top two.
$ lxc image list images:
|   ALIAS               | FINGERPRINT |PUBLIC |  DESCRIPTION   | ARCH |        UPLOAD DATE          |
|centos/6/amd64 (1 more)|460c2c6c4045 |yes    |Centos 6 (amd64)|x86_64|Jul 25, 2015 at 11:17am (MYT)|
|centos/6/i386 (1 more) |60f280890fcc |yes    |Centos 6 (i386) |i686  |Jul 25, 2015 at 11:20am (MYT)|
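To avoid wading through the full list, you can also pass a filter argument after the remote name. Treat this as a sketch; the exact filter behaviour (substring match on the alias) may vary between LXD versions:

```shell
# Show only images whose alias matches 'centos' from the 'images' remote.
$ lxc image list images: centos
```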

Let's create our first container using CentOS 6 pre-built image.
$ lxc launch images:centos/6/amd64 test-centos-6
Creating container...error: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket: no such file or directory

Reading through this troubleshooting ticket, it seems that the LXD daemon was not started. Let's start it. Note that I'm still using the old 'service' command to start the daemon instead of the 'systemctl' command. As they say, old habits die hard. It will take a while for me to fully transition from SysVinit to Systemd. ;-)
$ sudo service lxd restart
$ sudo service lxd status
● lxd.service - Container hypervisor based on LXC
   Loaded: loaded (/lib/systemd/system/lxd.service; indirect; vendor preset: enabled)
   Active: active (running) since Ahd 2015-07-26 00:28:51 MYT; 10s ago
 Main PID: 13260 (lxd)
   Memory: 276.0K
   CGroup: /system.slice/lxd.service
           ‣ 13260 /usr/bin/lxd --group lxd --tcp [::]:8443

Jul 26 00:28:51 proliant systemd[1]: Started Container hypervisor based on LXC.
Jul 26 00:28:51 proliant systemd[1]: Starting Container hypervisor based on LXC...
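For those already comfortable on the Systemd side, the equivalent systemctl commands are:

```shell
# Systemd equivalents of the 'service' commands above.
$ sudo systemctl restart lxd
$ sudo systemctl status lxd
```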

Finally, create and launch our container using the CentOS 6 pre-built image. Compared to the 'lxc-create' command, at least the parameters are simpler. This will take a while, as the program needs to download the pre-built CentOS 6 image, which averages around 50-plus MB in size; more on this later.
$ lxc launch images:centos/6/amd64 test-centos-6
Creating container...done
Starting container...done

Checking the status of our newly created container.
$ lxc list
|     NAME      |  STATE  |   IPV4    | IPV6 | EPHEMERAL | SNAPSHOTS |
| test-centos-6 | RUNNING | |      | NO        | 0         |

Another status of our container.
$ lxc info test-centos-6
Name: test-centos-6
Init: 14572
  eth0: IPV4
  lo:   IPV4
  lo:   IPV6    ::1

Checking the downloaded pre-built image. Subsequent container creations will use the same cached image.
$ lxc image list
| ALIAS | FINGERPRINT  | PUBLIC | DESCRIPTION      | ARCH   |        UPLOAD DATE            |
|       | 460c2c6c4045 | yes    | Centos 6 (amd64) | x86_64 | Jul 26, 2015 at 12:51am (MYT) |

You can use the fingerprint to create and start another container from the same image.
$ lxc launch 460c2c6c4045 test-centos-6-2                                                                    
Creating container...done
Starting container...done

As I mentioned, the downloaded pre-built CentOS 6 image is roughly 50-plus MB. The file is located within the '/var/lib/lxd/images' folder. The fingerprint shown is just the first 12 characters of the file's hash name.
$ sudo ls -lh /var/lib/lxd/images
total 50M
-rw-r--r-- 1 root root 50M Jul  26 00:51 460c2c6c4045a7756faaa95e1d3e057b689512663b2eace6da9450c3288cc9a1

Now, let's enter the container. Please note that the pre-built image contains only the minimum necessary packages, so quite a few things are missing. For example, wget, the downloader, is not installed by default.
$ lxc exec test-centos-6 /bin/bash
[root@test-centos-6 ~]#
[root@test-centos-6 ~]# cat /etc/redhat-release 
CentOS release 6.6 (Final)

[root@test-centos-6 ~]# wget
bash: wget: command not found
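Since the image ships with yum, a missing package like wget can be installed from inside the container, or non-interactively from the host like so:

```shell
# Install wget inside the running container, executed from the host.
$ lxc exec test-centos-6 -- yum install -y wget
```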

To exit from the container, simply type the 'exit' command.
[root@test-centos-6 ~]# exit

To stop the container, just run this command.
$ lxc stop test-centos-6
$ lxc list
|      NAME       |  STATE  |   IPV4    | IPV6 | EPHEMERAL | SNAPSHOTS |
| test-centos-6   | STOPPED |           |      | NO        | 0         |

For the next part of the series, we're going to explore transferring files between the host and the container. Till the next time.

Vagrant 1.7.3 and VirtualBox 5.0 Installation in Ubuntu 15.04 - Part 2

Continuing from the first part of the installation.

Meanwhile, the available VirtualBox version from the default Ubuntu repository is 4.3.26, as shown below.
$ apt-cache show virtualbox | grep ^Version
Version: 4.3.26-dfsg-2ubuntu2
Version: 4.3.26-dfsg-2ubuntu1

While we can use a similar installation method as with Vagrant, if there is a repository available, always favour this installation method, as you don't need to manually verify each downloaded package. Upgrades are also seamless and hassle-free.
$ echo "deb vivid contrib" | sudo tee -a /etc/apt/sources.list.d/virtualbox.list
deb vivid contrib

$ cat /etc/apt/sources.list.d/virtualbox.list 
deb vivid contrib

Next, add the public key so that the apt program can verify the packages from the repository we've just added.
$ wget -q -O- | sudo apt-key add -

Update the repository packages and check the available version.
$ sudo apt-get update
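The version check mentioned above can be done with apt-cache. Here I'm assuming the package name 'virtualbox-5.0', matching the install step that follows:

```shell
# Show the installed and candidate versions of the VirtualBox 5.0 package.
$ apt-cache policy virtualbox-5.0
```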

As discussed, before installation, always go through the change log. Then we proceed with the installation. You must specify the exact version you want to install; in this case, it's version 5.0.
$ sudo apt-get install virtualbox-5.0

Once done, we'll proceed with the Extension Pack installation. Let's download it and install it using the VBoxManage console tool.
$ aria2c -x 4

$ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.0.0-101573.vbox-extpack 
Successfully installed "Oracle VM VirtualBox Extension Pack".

Confirm our installed VirtualBox version.
$ vboxmanage --version

Lastly, if there is any Linux kernel upgrade, you may need to rebuild the vboxdrv kernel module by running this command.
$ sudo /etc/init.d/vboxdrv setup