Linux Container (LXC) with LXD Hypervisor, Part 1: Installation and Creation

For the past few weeks, I've been looking into creating LXC containers for both the Fedora and Ubuntu distros. One of the creation methods is to download a pre-built image.
$ lxc-create -t download -n test-container -- -d ubuntu -r trusty -a amd64

However, creating unprivileged containers is rather cumbersome, and the list of language bindings for the APIs is limited. What if we had a daemon, or a container hypervisor, that monitors and manages all the containers? And in addition to that, what if the daemon also handled all the security privileges and provided a RESTful web API for remote management? Well, that is the purpose behind the creation of LXD, the LXC container hypervisor. Think of it as a glorified LXC 'download' creation method with additional features.

The LXD project is under the management of Canonical Ltd, the company behind Ubuntu. Hence, it's recommended to use Ubuntu if you don't want to install it through source code compilation.

The installation and setup of LXD shown below was done on Ubuntu 15.04.

Firstly, install the LXD package.
$ sudo apt-get install lxd
......
Warning: The home dir /var/lib/lxd/ you specified already exists.
Adding system user 'lxd' (UID 125) ...
Adding new user 'lxd' (UID 125) with group 'nogroup' ...
The home directory '/var/lib/lxd/' already exists. Not copying from '/etc/skel'.
adduser: Warning: The home directory '/var/lib/lxd/' does not belong to the user you are currently creating.
Adding group 'lxd' (GID 137) ...
Done.
......

From the message above, note that your current login user does not belong to the group 'lxd' (GID 137) yet. To update your current login user's groups for the current session, run the command below so that you don't need to log out and log in again.
$ newgrp lxd
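
If your login user was not added to the 'lxd' group at all (the LXD package normally does this for the admin user during installation), the generic fix is to add it as a supplementary group yourself and then run 'newgrp lxd' again:
$ sudo usermod -aG lxd $USER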

Check out current login user groups. You should see that the current login user belongs to the group 'lxd' (GID 137).
$ id $USER | tr ',' '\n'
uid=1000(ang) gid=1000(ang) groups=1000(ang)
4(adm)
24(cdrom)
27(sudo)
30(dip)
46(plugdev)
115(lpadmin)
131(sambashare)
137(lxd)

$ groups
ang adm cdrom sudo dip plugdev lpadmin sambashare lxd

Next, we need to add the remote server which hosts the pre-built container images.
$ lxc remote add images images.linuxcontainers.org
Generating a client certificate. This may take a minute...
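
To confirm the remote was registered, list all the configured remotes (the exact output columns may differ between LXD versions):
$ lxc remote list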

List all the available pre-built container images from the server we've just added. Pay attention to the colon (:) at the end of the command, as it is needed; otherwise, the command will list locally downloaded images instead. The list is quite long, so I've reformatted the layout and show only the top two entries.
$ lxc image list images:
+-----------------------+-------------+-------+----------------+------+-----------------------------+
|   ALIAS               | FINGERPRINT |PUBLIC |  DESCRIPTION   | ARCH |        UPLOAD DATE          |
+-----------------------+-------------+-------+----------------+------+-----------------------------+
|centos/6/amd64 (1 more)|460c2c6c4045 |yes    |Centos 6 (amd64)|x86_64|Jul 25, 2015 at 11:17am (MYT)|
|centos/6/i386 (1 more) |60f280890fcc |yes    |Centos 6 (i386) |i686  |Jul 25, 2015 at 11:20am (MYT)|
......

Let's create our first container using the CentOS 6 pre-built image.
$ lxc launch images:centos/6/amd64 test-centos-6
Creating container...error: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket: no such file or directory

Reading through this troubleshooting ticket, it seems that the LXD daemon was not started. Let's start it. Note that I'm still using the old 'service' command to start the daemon instead of the 'systemctl' command. As they say, old habits die hard. It will take a while for me to fully transition from SysVinit to systemd. ;-)
$ sudo service lxd restart
$ sudo service lxd status
● lxd.service - Container hypervisor based on LXC
   Loaded: loaded (/lib/systemd/system/lxd.service; indirect; vendor preset: enabled)
   Active: active (running) since Ahd 2015-07-26 00:28:51 MYT; 10s ago
 Main PID: 13260 (lxd)
   Memory: 276.0K
   CGroup: /system.slice/lxd.service
           ‣ 13260 /usr/bin/lxd --group lxd --tcp [::]:8443

Jul 26 00:28:51 proliant systemd[1]: Started Container hypervisor based on LXC.
Jul 26 00:28:51 proliant systemd[1]: Starting Container hypervisor based on LXC...
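
For those who have already made the jump to systemd, the equivalent 'systemctl' commands should work just as well:
$ sudo systemctl restart lxd
$ sudo systemctl status lxd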

Finally, create and launch our container using the CentOS 6 pre-built image. Compared to the 'lxc-create' command, at least the parameters are simpler. This will take a while, as the program needs to download the pre-built CentOS 6 image, which averages around 50-plus MB in size; more on this later.
$ lxc launch images:centos/6/amd64 test-centos-6
Creating container...done
Starting container...done

Checking the status of our newly created container.
$ lxc list
+---------------+---------+-----------+------+-----------+-----------+
|     NAME      |  STATE  |   IPV4    | IPV6 | EPHEMERAL | SNAPSHOTS |
+---------------+---------+-----------+------+-----------+-----------+
| test-centos-6 | RUNNING | 10.0.3.46 |      | NO        | 0         |
+---------------+---------+-----------+------+-----------+-----------+

Another view of the container's status, this time with 'lxc info'.
$ lxc info test-centos-6
Name: test-centos-6
Status: RUNNING
Init: 14572
Ips:
  eth0: IPV4    10.0.3.46
  lo:   IPV4    127.0.0.1
  lo:   IPV6    ::1

Checking the downloaded pre-built image. Subsequent container creations will use the same cached image.
$ lxc image list
+-------+--------------+--------+------------------+--------+-------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |   DESCRIPTION    |  ARCH  |          UPLOAD DATE          |
+-------+--------------+--------+------------------+--------+-------------------------------+
|       | 460c2c6c4045 | yes    | Centos 6 (amd64) | x86_64 | Jul 26, 2015 at 12:51am (MYT) |
+-------+--------------+--------+------------------+--------+-------------------------------+

You can also use the fingerprint to create and start another container from the same image.
$ lxc launch 460c2c6c4045 test-centos-6-2                                                                    
Creating container...done
Starting container...done
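
Since this second container was only created to demonstrate the fingerprint method, we can stop and remove it once we're done. Note that 'lxc delete' removes the container permanently:
$ lxc stop test-centos-6-2
$ lxc delete test-centos-6-2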

As I mentioned, the downloaded pre-built CentOS 6 image is roughly 50-plus MB. The file is located within the '/var/lib/lxd/images' folder. The fingerprint shown earlier is just the first 12 characters of the hashed file name.
$ sudo ls -lh /var/lib/lxd/images
total 50M
-rw-r--r-- 1 root root 50M Jul  26 00:51 460c2c6c4045a7756faaa95e1d3e057b689512663b2eace6da9450c3288cc9a1

Now, let's enter the container. Please note that the pre-built image contains only the minimum necessary packages, so quite a few things are missing. For example, wget, the downloader, is not installed by default.
$ lxc exec test-centos-6 /bin/bash
[root@test-centos-6 ~]#
[root@test-centos-6 ~]# cat /etc/redhat-release 
CentOS release 6.6 (Final)

[root@test-centos-6 ~]# wget
bash: wget: command not found
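
Since this is a CentOS image, the missing tools can be pulled in with yum from inside the container, provided it has network access (it obtained an IPv4 address earlier). For example:
[root@test-centos-6 ~]# yum install -y wget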

To exit from the container, simply type the 'exit' command.
[root@test-centos-6 ~]# exit
exit
$ 

To stop the container, just run this command.
$ lxc stop test-centos-6
$ lxc list
+-----------------+---------+-----------+------+-----------+-----------+
|      NAME       |  STATE  |   IPV4    | IPV6 | EPHEMERAL | SNAPSHOTS |
+-----------------+---------+-----------+------+-----------+-----------+
| test-centos-6   | STOPPED |           |      | NO        | 0         |
+-----------------+---------+-----------+------+-----------+-----------+
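
To start the stopped container again, or to remove it completely once it's no longer needed:
$ lxc start test-centos-6
$ lxc delete test-centos-6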

For the next part of the series, we're going to look into importing container images into LXD. Till the next time.

Vagrant 1.7.3 and VirtualBox 5.0 Installation in Ubuntu 15.04 - Part 2

Continuing from the first part of the installation.

Meanwhile, the available VirtualBox version from the default Ubuntu repository is 4.3.26, as shown below.
$ apt-cache show virtualbox | grep ^Version
Version: 4.3.26-dfsg-2ubuntu2
Version: 4.3.26-dfsg-2ubuntu1

While we could use a similar installation method as with Vagrant, if there is a repository available, always favour the repository method, as you don't need to manually verify each downloaded package. Upgrades are also seamless and hassle-free.
$ echo "deb http://download.virtualbox.org/virtualbox/debian vivid contrib" | sudo tee -a /etc/apt/sources.list.d/virtualbox.list
deb http://download.virtualbox.org/virtualbox/debian vivid contrib

$ cat /etc/apt/sources.list.d/virtualbox.list 
deb http://download.virtualbox.org/virtualbox/debian vivid contrib

Next, add the public key so that the apt program can verify the packages from the repository we've just added.
$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
OK

Update the repository packages and check the available version.
$ sudo apt-get update
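
To double-check that apt now sees the new repository, query the available versions of the 'virtualbox-5.0' package (the exact version string will depend on when you run this):
$ apt-cache show virtualbox-5.0 | grep ^Version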

As discussed, before installation, always go through the changelog. Then we can proceed with the installation. You must specify the exact version you want to install; in this case, it is version 5.0.
$ sudo apt-get install virtualbox-5.0

Once done, we'll proceed with the Extension Pack installation. Let's download it and install it using the VBoxManage console tool.
$ aria2c -x 4 http://download.virtualbox.org/virtualbox/5.0.0/Oracle_VM_VirtualBox_Extension_Pack-5.0.0-101573.vbox-extpack

$ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.0.0-101573.vbox-extpack 
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Successfully installed "Oracle VM VirtualBox Extension Pack".
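
We can verify that the Extension Pack is registered by listing the installed extension packs:
$ VBoxManage list extpacks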

Confirm our installed VirtualBox version.
$ vboxmanage --version
5.0.0r101573

Lastly, if there is a Linux kernel upgrade, you may need to rebuild the vboxdrv kernel module by running this command.
$ sudo /etc/init.d/vboxdrv setup
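
You can confirm afterwards that the kernel module is loaded:
$ lsmod | grep vboxdrv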

Vagrant 1.7.3 and VirtualBox 5.0 Installation in Ubuntu 15.04 - Part 1

VirtualBox 5.0, the x86 virtualization software, was recently released. This is a good time for me to revisit it together with Vagrant, a tool to provision and distribute virtual machines on top of VirtualBox. Why bother with Vagrant if you can just use VirtualBox as is? Well, if you want (1) to quickly provision an existing downloaded image; (2) to learn different provisioners like Ansible, Chef, Puppet, and others; or (3) to have an easier way to manage your VirtualBox machines from the console, then there is no better tool than Vagrant.

One of the issues I had when evaluating Linux Containers (LXC) is that, at the moment of writing, there is no easy way to create a CentOS 7 container through its daemon, LXD. Also, the containers created cannot be distributed to other operating systems, as LXC is a chroot-based container environment and not a virtual machine. In other words, LXC only works on GNU/Linux.

Now, let's check the available version of Vagrant in the default Ubuntu repository.
$ apt-cache show vagrant | grep ^Version
Version: 1.6.5+dfsg1-2

Another way to check the latest Vagrant version, if you've already installed Vagrant, is through the 'vagrant version' command. However, the result returned is not entirely correct; more on that later.
$ vagrant version
Installed Version: 1.7.3
Latest Version: 1.7.3

You're running an up-to-date version of Vagrant!

Our next step is to download the latest versions of both pieces of software and install them in Ubuntu 15.04, which means we need to download the DEB packages. Let's start with Vagrant. We also need to download the corresponding checksum file. I'm using Aria2 instead of Wget to speed up the download, as Aria2 supports multiple simultaneous connections.
$ aria2c -x 4 https://dl.bintray.com/mitchellh/vagrant/vagrant_1.7.4_x86_64.deb
$ wget --content-disposition https://dl.bintray.com/mitchellh/vagrant/1.7.4_SHA256SUMS?direct

Before we install or upgrade Vagrant, verify our downloaded DEB package against the checksum file. Remember to read the changelog as well, just in case there are any important items relevant to our upgrade or installation.
$ sha256sum -c 1.7.4_SHA256SUMS 2>&1 | grep OK                                                                                          
vagrant_1.7.4_x86_64.deb: OK

Upgrade our Vagrant installation.
$ sudo dpkg -i vagrant_1.7.4_x86_64.deb 
......
Preparing to unpack vagrant_1.7.4_x86_64.deb ...
Unpacking vagrant (1:1.7.4) over (1:1.7.3) ...
Setting up vagrant (1:1.7.4) ...

Finally, verify our installation. Notice the inaccurate reporting of the latest version against the installed version. Hence, to get the truly up-to-date version number, it's best to check Vagrant's download page.
$ vagrant version
Installed Version: 1.7.4
Latest Version: 1.7.3
 
You're running an up-to-date version of Vagrant!
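
If you only want the locally installed version, without the online check that produced the stale 'Latest Version' above, 'vagrant --version' prints just that (sample output shown):
$ vagrant --version
Vagrant 1.7.4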

To be continued.

Gtk-Message: Failed to load module "overlay-scrollbar"

Following up on my previous post on replacing the existing Unity desktop with Gnome 3.16. One of the issues I've kept encountering since then is the warning message 'Gtk-Message: Failed to load module "overlay-scrollbar"', especially when reading PDF documents through Evince. Overlay scrollbar is one of the features added to the Unity desktop to gain more space by hiding the scrollbar by default and only showing it when you mouse over the scrolling hotspot. Now, how should we fix this?

Reading through the AskUbuntu answer on this matter, it seems the pre-installed overlay-scrollbar package was not removed. Let's try to remove it.
$ sudo apt-get remove overlay-scrollbar
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Package 'overlay-scrollbar' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Interesting. The package is not installed. Re-reading the answer, it seems that this was due to residual config files that still existed after I removed the Unity desktop. Let's purge the package. Next, just log out and log in again, and the problem should be solved.
$ sudo apt-get purge overlay-scrollbar

On a related note, you can purge the residual config files of all removed DEB packages at once. This was something totally new to me, even after using Ubuntu and Debian for so many years.
$ dpkg -l | grep '^rc' | awk '{print $2}' | sudo xargs dpkg --purge
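
Before running the purge, it's a good habit to preview which packages would be affected by dropping the final xargs step:
$ dpkg -l | grep '^rc' | awk '{print $2}'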

Let's break down the commands. First, get a sample line of output from the 'dpkg -l | grep '^rc'' part, as shown below.
$ dpkg -l | grep ^rc | tail -n 1
rc  zeitgeist-datahub  0.9.14-2.2ubuntu3 amd64 event logging framework - passive logging daemon

What is 'rc'? Let's get the first few lines from the 'dpkg -l' command. Note that I've truncated the extra whitespace. If you look at the vertical lines (|) pointing down, there are three fields that indicate the status of a DEB package: the desired status, the current status, and the error indicator. Referring back to our package's 'rc' status, 'r' means removed and 'c' means that its config files still exist in the system.
$ dpkg -l | head -n 4
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name    Version    Architecture Description
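
You can also inspect a single package's full status text with 'dpkg -s'; for a package in the 'rc' state it should report something along the lines of 'deinstall ok config-files':
$ dpkg -s zeitgeist-datahub | grep ^Status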

Going back to our sample package, zeitgeist-datahub, let's find out which residual config files still exist for this DEB package.
$ dpkg -L zeitgeist-datahub
/etc
/etc/xdg
/etc/xdg/autostart
/etc/xdg/autostart/zeitgeist-datahub.desktop

Remove or purge the residual config files. Both commands below are equivalent.
$ sudo apt-get purge zeitgeist-datahub
$ sudo dpkg --purge zeitgeist-datahub

Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be REMOVED:
  zeitgeist-datahub*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
(Reading database ... 256755 files and directories currently installed.)
Removing zeitgeist-datahub (0.9.14-2.2ubuntu3) ...
Purging configuration files for zeitgeist-datahub (0.9.14-2.2ubuntu3) ...

Checking the package status again: nothing is shown. Hence, everything was successfully purged from the system.
$ dpkg -l | grep zeitgeist-datahub
$