PlantUML is always my go-to diagramming tool whenever I need to understand and document an existing legacy system. There are two things I love about this tool. First, you can generate any UML diagram from a mere textual description, meaning the source is a plain text file, a universal format that is easy to version control. Second, you don't need to worry about layout; you can focus on the modelling and let the program decide the rest.

Below are some notes I jotted down while setting it up on a new machine. I believe it was around June 2015, but I've updated them for Fedora 24 (Rawhide).

$ sudo dnf install plantuml graphviz java-1.8.0-openjdk

If you want the latest pre-compiled PlantUML version, you have to run the JAR manually.
$ java -jar plantuml.jar -version

For installations through the distro's repository, there is a shell script, '/usr/bin/plantuml', that sets up all the necessary Java environment details for you.
$ file `which plantuml`
/usr/bin/plantuml: Bourne-Again shell script, ASCII text executable

Since PlantUML uses Graphviz as its renderer, we need to verify that PlantUML can detect it.
$ java -jar plantuml.jar -testdot
The environment variable GRAPHVIZ_DOT has been set to /usr/bin/dot
Dot executable is /usr/bin/dot
Dot version: dot - graphviz version 2.38.0 (20140413.2041)
Installation seems OK. File generation OK

Another way is to check our PlantUML version, which runs the same Graphviz detection.
$ java -jar plantuml.jar -version
PlantUML version 8027 (Sat Jun 20 18:13:59 MYT 2015)
(GPL source distribution)
OpenJDK Runtime Environment
OpenJDK 64-Bit Server VM

The environment variable GRAPHVIZ_DOT has been set to /usr/bin/dot
Dot executable is /usr/bin/dot
Dot version: dot - graphviz version 2.38.0 (20140413.2041)
Installation seems OK. File generation OK

Now generate our sample diagram with verbose output enabled (useful for diagnosing issues).
$ java -jar plantuml.jar -verbose sample.txt 
(0.000 - 117 Mo) 114 Mo - PlantUML Version 8033
(0.074 - 117 Mo) 113 Mo - GraphicsEnvironment.isHeadless() true
(0.074 - 117 Mo) 113 Mo - Forcing -Djava.awt.headless=true
(0.074 - 117 Mo) 113 Mo - java.awt.headless set as true
(0.085 - 117 Mo) 113 Mo - Setting current dir: .
(0.085 - 117 Mo) 113 Mo - Setting current dir: /home/hojimi
(0.087 - 117 Mo) 112 Mo - Using default charset
(0.093 - 117 Mo) 112 Mo - Setting current dir: /home/hojimi
(0.099 - 117 Mo) 112 Mo - Setting current dir: /home/hojimi
(0.100 - 117 Mo) 112 Mo - Reading file: sample.txt
(0.100 - 117 Mo) 112 Mo - name from block=null
(0.728 - 117 Mo) 93 Mo - Creating file: /home/hojimi/sample.png
(0.776 - 117 Mo) 90 Mo - Creating image 121x126
(0.812 - 117 Mo) 90 Mo - Ok for com.sun.imageio.plugins.png.PNGMetadata
(0.860 - 117 Mo) 89 Mo - File size : 2131
(0.861 - 117 Mo) 89 Mo - Number of image(s): 1

The textual description in 'sample.txt' that produces the UML diagram:
$ cat sample.txt 
Alice -> Bob
Bob -> Alice
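For slightly larger diagrams the same text format scales up. Below is a sketch that wraps the messages in the '@startuml'/'@enduml' markers (optional for a simple file like the one above) and adds message labels; the file name 'sample2.txt' and its contents are made up for illustration.

```shell
# A slightly richer sequence diagram; sample2.txt is a made-up name
cat > sample2.txt <<'EOF'
@startuml
Alice -> Bob: Authentication Request
Bob --> Alice: Authentication Response
@enduml
EOF

# Render to sample2.png only when the distro's plantuml wrapper exists
command -v plantuml >/dev/null && plantuml sample2.txt || true
```

The dashed arrow (-->) renders as a dotted return line in the generated diagram.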

Experiences On Using Static Code Analysis Tools for Python

Static code analysis, as the name implies, is the analysis of the non-running source code of a program. This can be done manually through code reviews, where an experienced developer inspects and walks through the code to find potential programming mistakes. However, such a manual process is time consuming and can be complemented by automated static code analysis tools.

In the Python programming language, Pyflakes, PyChecker, and Pylint are the common static code analysis tools. This post discusses my experiences applying these tools to Subdown, an open-source image scraper console tool written in Python.

To evaluate and compare these three static code analysis tools for Python, I've picked an open-source project called Subdown. This program is an image downloader console tool for Reddit, an online news sharing community. The site is organized into multiple subreddits, smaller communities grouped by topic or interest. The program consists of a single-file Python script which crawls a targeted subreddit for external image URLs and downloads those images asynchronously. First, download the sample script.
$ wget

Pyflakes is a very basic tool. It only parses the Python source files to check for errors; it does not check for coding style violations. The warning shown below indicates that the Subdown program imports an unused module named 'mimetypes'. Loading unnecessary resources slows down program startup and uses additional memory.
$ pyflakes
'mimetypes' imported but unused

Similar to Pyflakes, PyChecker also parses and checks source files for errors, hence it shares similar warnings with Pyflakes. Furthermore, PyChecker also imports and executes Python modules for additional validation. The result illustrated below shows a warning indicating that the same Python module, gevent, was imported into the application in two separate ways where it should be done once.
$ pychecker
Processing module subdown (
 ImportError: No module named _winreg
 :28: self is not first method argument
 Imported module (mimetypes) not used
 Using import and from ... import for (gevent)

However, this is a false positive. As shown in the code below, the Subdown program imports the whole gevent module. On the second line, it imports again from gevent, but only the monkey module, so it can "monkey patch" existing behaviour to work around a limitation of the standard socket module. Monkey patching is a feature of dynamically typed programming languages where we can extend and modify the existing behaviour of methods, attributes, or functions at run-time. This technique is typically used to work around the constraint of not being able to modify existing libraries.
16 import gevent
17 from gevent import monkey; monkey.patch_socket()

Pylint, the next static code analysis tool in our evaluation, is the most comprehensive, with lots of additional features. See Appendix A for the detailed output when run against the Subdown program. Instead of just checking for Python code errors like the previous two tools, it also checks for coding style violations and code smells. Code style is validated against Python's PEP 8 style guide. Meanwhile, a code smell is a piece of inefficient code which, while it may run correctly, still has room for improvement through refactoring. All findings are categorized into five message types as shown below:
  • (C) convention, for programming standard violation
  • (R) refactor, for bad code smell
  • (W) warning, for python specific problems
  • (E) error, for probable bugs in the code
  • (F) fatal, if an error occurred which prevented pylint from doing further processing

Comparing the sample result below with the previous two tools, we get a similar warning about the unused import. However, there is a new warning the others did not find: 'Unreachable code'.
W:198,12: Unreachable code (unreachable)
C:200, 0: Missing function docstring (missing-docstring)
W: 11, 0: Unused import mimetypes (unused-import)
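Since every finding starts with one of those five letters, a saved Pylint report is easy to slice with standard text tools. A minimal sketch using the three sample lines above (the 'pylint.out' file name is made up for illustration):

```shell
# Save a few report lines (taken from the run above) and slice by type
cat > pylint.out <<'EOF'
W:198,12: Unreachable code (unreachable)
C:200, 0: Missing function docstring (missing-docstring)
W: 11, 0: Unused import mimetypes (unused-import)
EOF

grep '^W' pylint.out      # warnings only
grep -c '^C' pylint.out   # count convention violations
```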

Extracting the portion of code shown below which corresponds to the 'Unreachable code' warning, we see that line 198 will never be executed due to the raise statement on line 197. Upon re-raising the exception on line 197, the program propagates it and exits. This is a good example of where a static code analysis tool can help uncover incorrect assumptions made by the developer.
194         try:
195             get_subreddit(subreddit, max_count, timeout, page_timeout)
196         except Exception as e:
197             raise
198             puts(

While writing this post, I found another tool called Pylama, a helper tool that wraps several code linters like Pyflakes, Pylint, and others. However, there is an issue integrating it with Pylint. You may give it a try, but YMMV.

Swift in Fedora 24 (Rawhide)

Swift, the language developed by Apple which is set to replace Objective-C, was recently open sourced. However, the existing binaries are only available for Ubuntu and Mac OS X. Hence, for a Fedora user like myself, the only option is to install it by compiling from source.

First, install all the necessary packages.
$ sudo dnf install git cmake ninja-build clang uuid-devel libuuid-devel libicu-devel libbsd-devel libbsd-devel libedit-devel libxml2-devel libsqlite3-devel swig python-devel ncurses-devel pkgconfig

Next, create our working folder.
$ mkdir swift-lang

Clone the minimum repositories to build Swift.
$ git clone swift
$ git clone clang
$ git clone cmark
$ git clone llvm

If you have a slow Internet connection and experience disconnections during cloning, it's best to clone partially; otherwise, you'll have to restart from the beginning each time.
$ git clone --depth 1 llvm
$ cd llvm
$ git fetch --unshallow

If you have a good Internet connection, you can proceed with cloning the remaining repositories.
$ git clone lldb
$ git clone llbuild
$ git clone swiftpm
$ git clone
$ git clone

As Swift is configured to build on Ubuntu or Debian, you may encounter several issues during compilation. These are my workarounds.

/usr/bin/which: no ninja in ...
In Fedora, the Ninja build binary is named 'ninja-build', but Swift's build script expects 'ninja'. We create a symlink to bypass that.
$ sudo ln -s /usr/bin/ninja-build /usr/bin/ninja

Missing ioctl.h
During compilation, the ioctl.h header file was not found, as the build script assumes it is located in '/usr/include/x86_64-linux-gnu', as shown below.
header "/usr/include/x86_64-linux-gnu/sys/ioctl.h"

A temporary workaround is to symlink the folder that contains these files.
$ sudo mkdir -p /usr/include/x86_64-linux-gnu/
$ sudo ln -s /usr/include/sys/ /usr/include/x86_64-linux-gnu/sys

pod2man conversion failure
'pod2man' doesn't seem to convert the POD file to a man page, as illustrated by the error message below.
FAILED: cd /home/hojimi/Projects/swift-lang/build/Ninja-ReleaseAssert/swift-linux-x86_64/docs/tools && /usr/bin/pod2man --section 1 --center Swift\ Documentation --release --name swift --stderr /home/hojimi/Projects/swift-lang/swift/docs/tools/swift.pod > /home/hojimi/Projects/swift-lang/build/Ninja-ReleaseAssert/swift-linux-x86_64/docs/tools/swift.1
Can't open swift: No such file or directory at /usr/bin/pod2man line 68.

Going by this error message, the 'swift.pod' file has been corrupted and emptied. You'll need to restore it from the repository.
$ git checkout -- docs/tools/swift.pod

We need to disable the '--name swift' parameter. This is done by commenting out the 'MAN_FILE' variable.
$ sed -i 's/MAN_FILE/#MAN_FILE/g' swift/docs/tools/CMakeLists.txt
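If you're wary of editing the build files blindly, the effect of that substitution can be previewed on a throwaway mock first; the file name and contents below are invented for illustration.

```shell
# Preview the substitution on a mock before touching the real
# swift/docs/tools/CMakeLists.txt
printf 'set(MAN_FILE swift.1)\n' > CMakeLists.mock
sed -i 's/MAN_FILE/#MAN_FILE/g' CMakeLists.mock
cat CMakeLists.mock   # set(#MAN_FILE swift.1)
```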

Once all the workarounds have been applied, we can proceed with the compilation. You do not really need to set the '-j 4' parameter for parallel compilation; by default, Ninja will compile using all the available CPU cores. Also, we just want the release (-R) build without any debugging information attached.
$ ./swift/utils/build-script -R -j 4

Add our compiled binary path to the system path.
$ cd build/Ninja-ReleaseAssert/swift-linux-x86_64/bin/
$ export PATH=$PATH:`pwd`
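If you'd rather not depend on being in the right directory, the same thing can be expressed with an absolute path. A sketch that assumes the 'swift-lang' working folder lives in your home directory and the default build layout; adjust the path for your machine.

```shell
# Assumes swift-lang sits in $HOME and the default -R build layout
SWIFT_BIN="$HOME/swift-lang/build/Ninja-ReleaseAssert/swift-linux-x86_64/bin"
export PATH="$PATH:$SWIFT_BIN"
```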

Lastly, check our compiled binary.
$ swift --version
Swift version 2.2-dev (LLVM 7bae82deaa, Clang 587b76f2f6, Swift 1171ed7081)
Target: x86_64-unknown-linux-gnu

Be warned: compilation takes quite a while, possibly several hours, depending on your machine's specification and the type of build. I noticed my laptop was burning hot, as all four CPU cores were running at 100% most of the time. During compilation, it's recommended to place your laptop near a fan or somewhere with good ventilation. Notice below that the temperature exceeded the high threshold of 86.0°C.
$ sensors
Adapter: Virtual device
temp1:        +95.0°C  (crit = +98.0°C)

Adapter: ISA adapter
fan1:        4510 RPM

Adapter: ISA adapter
Physical id 0:  +97.0°C  (high = +86.0°C, crit = +100.0°C)
Core 0:         +94.0°C  (high = +86.0°C, crit = +100.0°C)
Core 1:         +97.0°C  (high = +86.0°C, crit = +100.0°C)

Under normal usage, the average temperature is roughly 50°C.
$ sensors
Adapter: Virtual device
temp1:        +46.0°C  (crit = +98.0°C)

Adapter: ISA adapter
fan1:        3525 RPM

Adapter: ISA adapter
Physical id 0:  +49.0°C  (high = +86.0°C, crit = +100.0°C)
Core 0:         +49.0°C  (high = +86.0°C, crit = +100.0°C)
Core 1:         +45.0°C  (high = +86.0°C, crit = +100.0°C)

From Fedora 23 To Fedora 24 (Rawhide)

So there I was, looking at my screen, realizing that Fedora 23 is too stable, or rather, too boring. Hence, I've decided to upgrade to Rawhide, the upcoming Fedora 24, which is expected to be released by 17th May 2016. Let's see how this compares to my upgrade from Fedora 21 to Fedora 22 (Rawhide); I hope there will be no major issues.

Configure your DNF for Rawhide.
$ sudo dnf upgrade dnf
$ sudo dnf install dnf-plugins-core fedora-repos-rawhide
$ sudo dnf config-manager --set-disabled fedora updates updates-testing
$ sudo dnf config-manager --set-enabled rawhide
$ sudo dnf clean -q dbcache plugins metadata

Upgrade your distro.
$ sudo dnf --releasever=rawhide --setopt=deltarpm=false distro-sync --nogpgcheck --allowerasing

It's always 'exciting' to use a rolling release where you can test out the latest, greatest features. Lots of features are planned for Fedora 24, but I'm most eager to test Wayland, the new display protocol which is going to replace X. It seems some users already have a good and stable enough experience with it in Fedora Rawhide. Can't wait to try it out on my T4210.

The upgrade was painfully slow. First, I had to downgrade certain packages, like VLC from the RPM Fusion repository, back to their Fedora 22 versions (hence the --allowerasing option in the command above). Then I had to download a total of 1860 packages, which alone took around three-plus hours.

However, the upgrade failed due to a conflict with Python 3.5. I just realized that I had upgraded my Python to 3.5 using Copr. And to make matters worse, by default DNF does not cache downloaded packages! No choice but to redo everything. In the end, I wasted another three hours.

First things first, let's enable caching for DNF. Next, temporarily remove those packages (wine-* and texlive-*) to reduce the number of packages to download, and remove the Python 3.5 installed earlier from Cool Other Package Repo (COPR). Then repeat the command above to upgrade your distro and reboot.
$ echo 'keepcache=true' | sudo tee -a /etc/dnf/dnf.conf
$ sudo dnf remove wine* texlive-*
$ sudo dnf remove python35-python3*

Once you've successfully upgraded, your system should have GNOME 3.19.2, Wayland 1.9.0, and Linux kernel 4.4.0. Some interesting observations while testing Fedora 24:

Updates during booting
This happened twice, and I needed to reboot to complete the upgrade. It seemed that systemd was instructed to handle the upgrade, which was totally new to me. I was under the impression that all the packages would be overwritten during the upgrade itself. See the screenshot below.

Wayland is the default display server
Previously, you had to manually switch to Wayland at the GNOME login screen (click your username, then select it from the gear icon). Now it's the reverse: if you want to use X (which you should, as not all apps have been ported to Wayland yet), you have to select it manually by picking 'GNOME for X' from the menu.

Apps that fail to work
Shutter, the screenshot capture tool, does not work. I suspect this is due to lack of support and Wayland's security model, where reading the contents of other windows is not allowed. GNOME Terminal, the default terminal emulator, when given a custom window size, shrinks every time it regains focus. The Dash to Dock GNOME extension does not work either and has been disabled. It's best to check all the Bugzilla bug reports on Wayland at GNOME or Red Hat. Wayland is getting there, but you can always fall back to X11.

Natural scrolling in Touchpad
I'm not sure why this was made the default, but it's fricking annoying. Basically, under natural scrolling, the screen moves in the reverse direction of your fingers, similar to using a mobile phone or tablet. Differentiating between natural and non-natural scrolling is easy: for the former, think of moving the content; for the latter, think of moving the scrollbar.

Experience on Setting Up Alpine Linux

Starting out as a little-known GNU/Linux distro, Alpine Linux has gained a lot of traction due to its featureful yet tiny size and the emergence of Linux container implementations like Docker and LXC. Although I came across it numerous times while testing Docker and LXC, I didn't pay much attention to it until recently, while troubleshooting LXD. To summarize, I really like the minimalist approach of Alpine Linux; for server or hardware appliance usage, nothing beats the simple, direct approach.

My setup is based on an LXC container in Fedora 23. Unfortunately, you still can't create unprivileged containers in Fedora, so I have no choice but to do everything as the root user. Not the best outcome, but I can live with that. Setup and creation are pretty straightforward thanks to this guide. The steps are as follows.

Install the necessary packages and make sure the lxcbr0 bridge interface is up.
$ sudo dnf install lxc lxc-libs lxc-extra lxc-templates
$ sudo systemctl restart lxc-net
$ sudo systemctl status lxc-net
$ ifconfig lxcbr0

Create our container. By default, LXC will download the apk package manager binary and all the default packages needed to create the container. Start the 'test-alpine' container once it has been set up successfully.
$ sudo lxc-create -n test-alpine -t alpine
$ sudo lxc-start -n test-alpine

Access the container through the console and press 'Enter'. Log in as the 'root' user; there is no password, just press enter again. Note that to exit from the console, press 'Ctrl+a' followed by 'q'.
$ sudo lxc-console -n test-alpine

Next, bring up the eth0 interface so we can obtain an IP address and make connections to the Internet. Check your eth0 network interface once done. Instead of SysV init or systemd, Alpine Linux uses OpenRC as its default init system. I had a hard time adjusting to the change from SysV to systemd, and I'm glad Alpine Linux did not jump on the systemd bandwagon.
test-alpine:~# rc-service networking start
 * Starting networking ... *   lo ...ip: RTNETLINK answers: File exists
 [ !! ]
 *   eth0 ... [ ok ]

test-alpine:~# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:16:3E:6B:F7:8B  
          inet addr:  Bcast:  Mask:
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1562 (1.5 KiB)  TX bytes:1554 (1.5 KiB)

Next, configure our system. Similar to Debian's dpkg-reconfigure, Alpine has a set of setup commands to configure the system. I prefer the consistent and sensible naming used here; it's something other GNU/Linux distros should follow. I'm looking at you, CentOS/Red Hat/Fedora.
test-alpine:~# setup-
setup-acf        setup-bootable         setup-hostname      setup-mta     setup-timezone
setup-alpine     setup-disk             setup-interfaces    setup-ntp     setup-xen-dom0
setup-apkcache   setup-dns              setup-keymap        setup-proxy   setup-xorg-base
setup-apkrepos   setup-gparted-desktop  setup-lbu           setup-sshd

Next, set up the package repositories and let the system pick the fastest mirror. I like that we can pick the fastest mirror from the console, something we can't do in Debian/Ubuntu.
# setup-apkrepos


r) Add random from the above list
f) Detect and add fastest mirror from above list
e) Edit /etc/apk/repositores with text editor

Enter mirror number (1-18) or URL to add (or r/f/e/done) [f]: 
Finding fastest mirror... 
ERROR: No such file or directory
ERROR: network error (check Internet connection and firewall)
Added mirror
Updating repository indexes... done.

Update our system. Even though there are more than five thousand packages available, that is still not comparable to Debian's massive package list. But this is understandable given the small number of contributors and their limited free time.
test-alpine:~# apk update
v3.2.3-104-g838b3e3 []
v3.2.3-104-g838b3e3 []
OK: 5289 distinct packages available

Let's continue by installing a software package. We'll use the Git version control system as our example. Installation is straightforward, with enough detail in the output.
test-alpine:~# apk add git
(1/13) Installing run-parts (4.4-r0)
(2/13) Installing openssl (1.0.2d-r0)
(3/13) Installing lua5.2-libs (5.2.4-r0)
(4/13) Installing lua5.2 (5.2.4-r0)
(5/13) Installing ncurses-terminfo-base (5.9-r3)
(6/13) Installing ncurses-widec-libs (5.9-r3)
(7/13) Installing lua5.2-posix (33.3.1-r2)
(8/13) Installing ca-certificates (20141019-r2)
(9/13) Installing libssh2 (1.5.0-r0)
(10/13) Installing curl (7.42.1-r0)
(11/13) Installing expat (2.1.0-r1)
(12/13) Installing pcre (8.37-r1)
(13/13) Installing git (2.4.1-r0)
Executing busybox-1.23.2-r0.trigger
Executing ca-certificates-20141019-r2.trigger
OK: 23 MiB in 28 packages
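apk can also confirm the installation afterwards. A minimal sketch, guarded so it degrades to a notice when run outside an Alpine environment:

```shell
# Inside the container: confirm git is installed and list what it
# depends on; prints a notice when apk is not available
if command -v apk >/dev/null; then
    apk info -e git     # prints 'git' when the package is installed
    apk info -R git     # list git's runtime dependencies
else
    echo "apk not available (not an Alpine environment)"
fi
```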

So far, I love the simplicity provided by Alpine Linux. There will be more posts on this tiny distro in the coming months. Stay tuned.

Troubleshooting Dynamic Host Configuration Protocol (DHCP) Connection in LXD, Part 1: The Dnsmasq Server

While testing LXD, the GNU/Linux container hypervisor, one of the issues I encountered was that certain containers failed to obtain an IP address after booting up. Hence, for the past few days, while scratching my head investigating the issue, I gained some understanding of how DHCP works and learned a few tricks on how to troubleshoot a DHCP connection.

DHCP is a client/server protocol where the client obtains an IP address from the server. Thus, to troubleshoot any connection issue, we should look in two places: the server side and the client side.

Is Dnsmasq up and running?
First, the server end. As I mentioned in my previous post, in LXD the lxcbr0 bridge interface is basically a virtual switch which, through Dnsmasq, provides network infrastructure services like Domain Name System (DNS) and DHCP. If DHCP is not working, the first thing to check is whether Dnsmasq has been started correctly. Pay attention to all lines that contain the word 'dnsmasq' and check for any errors.
$ sudo systemctl status lxc-net -l
● lxc-net.service - LXC network bridge setup
   Loaded: loaded (/usr/lib/systemd/system/lxc-net.service; enabled; vendor preset: disabled)
   Active: active (exited) since Wed 2015-11-18 21:04:24 MYT; 1s ago
  Process: 21863 ExecStop=/usr/libexec/lxc/lxc-net stop (code=exited, status=0/SUCCESS)
  Process: 21891 ExecStart=/usr/libexec/lxc/lxc-net start (code=exited, status=0/SUCCESS)
 Main PID: 21891 (code=exited, status=0/SUCCESS)
   Memory: 408.0K
      CPU: 39ms
   CGroup: /system.slice/lxc-net.service
           └─21935 dnsmasq -u nobody --strict-order --bind-interfaces --pid-file=/run/lxc/ --listen-address --dhcp-range, --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative

Nov 18 21:04:24 localhost.localdomain dnsmasq[21935]: started, version 2.75 cachesize 150
Nov 18 21:04:24 localhost.localdomain dnsmasq[21935]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify
Nov 18 21:04:24 localhost.localdomain dnsmasq-dhcp[21935]: DHCP, IP range --, lease time 1h
Nov 18 21:04:24 localhost.localdomain dnsmasq-dhcp[21935]: DHCP, sockets bound exclusively to interface lxcbr0
Nov 18 21:04:24 localhost.localdomain dnsmasq[21935]: reading /etc/resolv.conf
Nov 18 21:04:24 localhost.localdomain dnsmasq[21935]: using nameserver
Nov 18 21:04:24 localhost.localdomain dnsmasq[21935]: read /etc/hosts - 2 addresses
Nov 18 21:04:24 localhost.localdomain systemd[1]: Started LXC network bridge setup.

As LXD is still under active development with many pending issues, you may want to walk through the '/usr/libexec/lxc/lxc-net' script to investigate further, although in my experience a simple service restart ('systemctl restart lxc-net') should be sufficient.

Failed to create listening socket?
A few days back, one of the issues I experienced was the Dnsmasq server failing to start due to a failure to create its listening socket.
Nov 14 20:43:18 localhost.localdomain systemd[1]: Starting LXC network bridge setup...
Nov 14 20:43:18 localhost.localdomain lxc-net[24314]: dnsmasq: failed to create listening socket for Cannot assign requested address
Nov 14 20:43:18 localhost.localdomain dnsmasq[24347]: failed to create listening socket for Cannot assign requested address
Nov 14 20:43:18 localhost.localdomain dnsmasq[24347]: FAILED to start up
Nov 14 20:43:18 localhost.localdomain lxc-net[24314]: Failed to setup lxc-net.
Nov 14 20:43:18 localhost.localdomain systemd[1]: Started LXC network bridge setup.

Alternatively, you can check through the systemd journal.
$ journalctl -u lxc-net.service 
$ journalctl -u lxc-net.service | grep -i 'failed to'

The question we should ask when looking into this error is: which other process is trying to bind to port 53, the default DNS port? There are several ways to check this.

Are there any other running Dnsmasq instances? Note that the output was formatted to improve readability. Besides the one started by the lxc-net service, the other instances were created by libvirt and vagrant-libvirt.
$ ps -o pid,cmd -C dnsmasq
 2851 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/vagrant-libvirt.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

 2852 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/vagrant-libvirt.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

 2933 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

 2934 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

21935 dnsmasq -u nobody --strict-order --bind-interfaces --pid-file=/run/lxc/ --listen-address --dhcp-range, --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative

Is any other process currently listening on port 53 on the same IP address?
$ sudo netstat -anp | grep :53 | grep LISTEN
tcp        0      0   *               LISTEN      21935/dnsmasq       
tcp        0      0*               LISTEN      2933/dnsmasq        
tcp        0      0*               LISTEN      2851/dnsmasq        
tcp6       0      0 fe80::fc7b:93ff:fe7a:53 :::*                    LISTEN      21935/dnsmasq   
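netstat comes from the deprecated net-tools package; on newer Fedora releases the same check can be done with ss from iproute2. A rough equivalent (shown without sudo, so process names are omitted):

```shell
# Keep the header row plus any TCP listener bound to port 53;
# add sudo and -p to also see the owning process
ss -lnt | awk 'NR == 1 || $4 ~ /:53$/'
```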

In my case (I didn't manage to capture the output), another orphaned Dnsmasq instance was preventing the 'lxc-net' service from launching a new Dnsmasq instance on the lxcbr0 interface. If I remember correctly, this was due to instances left over from my debugging of the '/usr/libexec/lxc/lxc-net' script.

Error calling 'lxd forkstart......

In full detail, the exact error message is:
error: Error calling 'lxd forkstart test-centos-6 /var/lib/lxd/containers /var/log/lxd/test-centos-6/lxc.conf': err='exit status 1'

Again, while rebooting my laptop after two days, I encountered the above error message while trying to start my container through LXD. Reading through the LXD issue reports, the following are the typical steps to troubleshoot this issue. Note that I installed LXD through source code compilation, as there is no RPM package available for Fedora 23.

First things first: as LXD was built from source, it was started manually by running the command below. The benefit of starting the LXD daemon this way is that it lets you monitor all the debugging messages, as shown.
$ su -c 'lxd --group wheel --debug --verbose'

INFO[11-14|14:10:24] LXD is starting                          path=/var/lib/lxd
WARN[11-14|14:10:24] Per-container AppArmor profiles disabled because of lack of kernel support 
INFO[11-14|14:10:24] Default uid/gid map: 
INFO[11-14|14:10:24]  - u 0 100000 65536 
INFO[11-14|14:10:24]  - g 0 100000 65536 
INFO[11-14|14:10:24] Init                                     driver=storage/dir
INFO[11-14|14:10:24] Looking for existing certificates        cert=/var/lib/lxd/server.crt key=/var/lib/lxd/server.key
DBUG[11-14|14:10:24] Container load                           container=test-busybox
DBUG[11-14|14:10:24] Container load                           container=test-ubuntu-cloud
DBUG[11-14|14:10:24] Container load                           container=test-centos-7
INFO[11-14|14:10:24] LXD isn't socket activated 
INFO[11-14|14:10:24] REST API daemon: 
INFO[11-14|14:10:24]  - binding socket                        socket=/var/lib/lxd/unix.socket

The first troubleshooting step is to ensure that the default bridge interface, lxcbr0, used by LXD is up and running.
$ ifconfig lxcbr0
lxcbr0: error fetching interface information: Device not found

Next, start the 'lxc-net' service that creates this bridge interface, then check that the bridge is up.
$ sudo systemctl start lxc-net

$ ifconfig lxcbr0
lxcbr0: flags=4163  mtu 1500
        inet  netmask  broadcast
        inet6 fe80::fcd3:baff:fefd:5bd7  prefixlen 64  scopeid 0x20
        ether fe:7a:fa:dd:06:cd  txqueuelen 0  (Ethernet)
        RX packets 5241  bytes 301898 (294.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7610  bytes 11032257 (10.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Next, check the status of the 'lxc-net' service. Why do we need to do so? Remember that the 'lxc-net' service creates a virtual switch, for which three things are set up. First, the bridge itself, which links to an existing network interface connected to the outside world. Next, a DNS server which resolves domain names. And lastly, a DHCP server which assigns new IP addresses to the containers. The DNS and DHCP services are provided by the Dnsmasq daemon.
$ sudo systemctl status lxc-net -l

● lxc-net.service - LXC network bridge setup
   Loaded: loaded (/usr/lib/systemd/system/lxc-net.service; enabled; vendor preset: disabled)
   Active: active (exited) since Sat 2015-11-14 16:13:24 MYT; 13s ago
  Process: 9807 ExecStop=/usr/libexec/lxc/lxc-net stop (code=exited, status=0/SUCCESS)
  Process: 9815 ExecStart=/usr/libexec/lxc/lxc-net start (code=exited, status=0/SUCCESS)
 Main PID: 9815 (code=exited, status=0/SUCCESS)
   Memory: 404.0K
      CPU: 46ms
   CGroup: /system.slice/lxc-net.service
           └─9856 dnsmasq -u nobody --strict-order --bind-interfaces --pid-file=/run/lxc/ --listen-address --dhcp-range, --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative

Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: started, version 2.75 cachesize 150
Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify
Nov 14 16:13:24 localhost.localdomain dnsmasq-dhcp[9856]: DHCP, IP range --, lease time 1h
Nov 14 16:13:24 localhost.localdomain dnsmasq-dhcp[9856]: DHCP, sockets bound exclusively to interface lxcbr0
Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: reading /etc/resolv.conf
Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: using nameserver
Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: read /etc/hosts - 2 addresses
Nov 14 16:13:24 localhost.localdomain systemd[1]: Started LXC network bridge setup.
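Those pieces can also be spot-checked directly. A minimal sketch that assumes the default bridge name 'lxcbr0' and degrades gracefully when a piece is missing:

```shell
# Bridge present?
ip link show lxcbr0 >/dev/null 2>&1 \
    && echo "bridge lxcbr0: present" \
    || echo "bridge lxcbr0: missing"

# Dnsmasq bound to the bridge?
pgrep -f 'dnsmasq.*lxcbr0' >/dev/null \
    && echo "dnsmasq on lxcbr0: running" \
    || echo "dnsmasq on lxcbr0: not running"
```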

Expect more posts to come on using LXD in Fedora 23.

Fedora 23 Cloud Image Through Vagrant With VirtualBox and Libvirt Backend Provider

While testing LXD, I have to constantly switch between Ubuntu 15.10 and Fedora 23 to troubleshoot certain issues. However, my local Fedora 23 installation has been "contaminated" by the numerous tweaks I've made to get LXD working. Hence, to make sure those changes can be reproduced in a fresh Fedora environment, I've found that using Vagrant with the Fedora 23 Cloud image fulfills that requirement.

Setting up in Ubuntu 15.10 was pretty straightforward. First, we need to install Vagrant and VirtualBox, then check that we have the latest version and look out for any issues.
$ sudo apt-get install vagrant virtualbox
$ vagrant version
Installed Version: 1.7.4
Latest Version: 1.7.4
You're running an up-to-date version of Vagrant!

$ VBoxManage --version

Next, install the libvirt provider and the necessary libraries. Skip this step if you plan to use the default VirtualBox provider.
$ sudo apt-get install libvirt libvirt-dev
$ vagrant plugin install vagrant-libvirt

Installing the 'vagrant-libvirt' plugin. This can take a few minutes...
Installed the plugin 'vagrant-libvirt (0.0.32)'!

Next, download the base Cloud images for Vagrant. There are two versions: a VirtualBox image and a libvirt/KVM image. Let's download both so we can try either provider.
$ aria2c -x 4

$ aria2c -x 4

Once we've downloaded the image, import it to Vagrant.
$ vagrant box add fedora/23

==> box: Box file was not detected as metadata. Adding it directly...
==> box: Adding box 'fedora/23' (v0) for provider: 
    box: Unpacking necessary files from: file:///home/ang/Projects/vagrant/
==> box: Successfully added box 'fedora/23' (v0) for 'virtualbox'!

Do the same for the libvirt image. We can add both images under the same name, in this case 'fedora/23'.
$ vagrant box add fedora/23

==> box: Box file was not detected as metadata. Adding it directly...
==> box: Adding box 'fedora/23' (v0) for provider: 
    box: Unpacking necessary files from: file:///home/ang/vagrant/
==> box: Successfully added box 'fedora/23' (v0) for 'libvirt'!

See the available images. Note that the Fedora 23 box shares the same name but under different providers.
$ vagrant box list
base      (virtualbox, 0)
fedora/23 (libvirt, 0)
fedora/23 (virtualbox, 0)

Let's create a Fedora 23 Vagrant instance.
$ mkdir f23_cloud_virtualbox
$ cd f23_cloud_virtualbox
$ vagrant init fedora/23
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`` for more information on using Vagrant.
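The generated Vagrantfile is mostly comments; a minimal one only needs the box name. Below is a sketch of such a trimmed file; the provider block and memory value are illustrative, not from the original post.

```shell
# Write a minimal Vagrantfile pinning the Fedora 23 box.
# A sketch; assumes the box was added earlier as 'fedora/23'.
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config| = "fedora/23"
  # Illustrative provider tweak; remove if the defaults are fine.
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024
EOF

grep 'fedora/23' Vagrantfile
```

With this in place, 'vagrant up' with either --provider value picks the matching image under the shared box name.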

Start and boot up your new Fedora 23 Cloud instance. If you don't specify the provider, it will default to VirtualBox as the backend; hence the '--provider' parameter is optional here.
$ vagrant up --provider=virtualbox
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'fedora/23'...

Let's try the libvirt provider and create the necessary folders. At the moment, Vagrant only allows one provider per active machine.
$ mkdir f23_cloud_libvirt
$ cd f23_cloud_libvirt
$ vagrant init fedora/23

Once done, let's boot this machine up. However, it seems we have a problem starting the machine due to the 'default' pool.
$ vagrant up --provider=libvirt
Bringing machine 'default' up with 'libvirt' provider...
There was error while creating libvirt storage pool: Call to virStoragePoolDefineXML failed: operation failed: pool 'default' already exists with uuid 9aab798b-f428-47dd-a6fb-181db2b20432

Google returned some answers suggesting we check the status of the pool. Let's try it out.
$ virsh pool-list --all
 Name                 State      Autostart 
 default              inactive   no 

Let's start the 'default' pool and also toggle it to auto start.
$ virsh pool-start default
Pool default started

$ virsh pool-autostart default
Pool default marked as autostarted

Check the status of the 'default' pool again.
$ virsh pool-list --all
 Name                 State      Autostart 
 default              active     yes     

Retry booting the machine with the libvirt backend provider.
$ vagrant up --provider=libvirt
Bringing machine 'default' up with 'libvirt' provider...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...

Lastly, SSH to our machine.
$ vagrant ssh
[vagrant@localhost ~]$ cat /etc/fedora-release 
Fedora release 23 (Twenty Three)

Cannot Input Chinese in Google Chrome or Firefox with IBUS 1.5.10

After I set up the Chinese input method in Ubuntu GNOME 15.10, one issue I noticed was that I couldn't input any Chinese characters even though I had already switched to it. Google did return an answer on how to troubleshoot and solve this.

First, check whether Firefox loads the input method module library. It seems to load, but I still can't input any Chinese characters.
$ strace firefox 2>&1 | grep immodules
open("/usr/lib/x86_64-linux-gnu/gtk-2.0/2.10.0/immodules.cache", O_RDONLY) = 25
stat("/usr/lib/x86_64-linux-gnu/gtk-2.0/2.10.0/immodules/", {st_mode=S_IFREG|0644, st_size=31816, ...}) = 0
open("/usr/lib/x86_64-linux-gnu/gtk-2.0/2.10.0/immodules/", O_RDONLY|O_CLOEXEC) = 25

Then I remembered that I had made some changes to X in .xprofile. I had set Flexible Input Method Framework (fcitx) as my default input method framework instead of Intelligent Input Bus (IBus). This was set up a while back, as I needed a default input method framework when using the i3 window manager.
$ cat .xprofile | grep export
    export GTK_IM_MODULE=fcitx
    export QT4_IM_MODULE=fcitx
    export QT_IM_MODULE=fcitx
    export XMODIFIERS="@im=fcitx"

Changing the .xprofile file to replace every 'fcitx' with 'ibus', then logging out and logging back in, solved the issue.
$ cat .xprofile | grep export
    export GTK_IM_MODULE=ibus
    export QT4_IM_MODULE=ibus
    export QT_IM_MODULE=ibus
    export XMODIFIERS="@im=ibus"
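The replacement itself is a one-liner with sed. Here's a self-contained sketch that operates on a scratch copy; point XPROFILE at the real ~/.xprofile to apply it for real.

```shell
# Perform the fcitx -> ibus switch with sed, on a scratch copy of .xprofile.
# A sketch; the file content mirrors the settings shown above.
XPROFILE=$(mktemp)
cat > "$XPROFILE" <<'EOF'
export GTK_IM_MODULE=fcitx
export QT4_IM_MODULE=fcitx
export QT_IM_MODULE=fcitx
export XMODIFIERS="@im=fcitx"
EOF

# -i.bak keeps a backup so the substitution can be reverted.
sed -i.bak 's/fcitx/ibus/g' "$XPROFILE"
grep export "$XPROFILE"
```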

Chinese Input Method (IBus) in Ubuntu Gnome 15.10

In Ubuntu 15.04, I wrote a guide on how to set up an input method for Chinese. However, that guide was written for the Unity desktop. In Ubuntu GNOME 15.10, the steps are similar except that Text Entry is not used; instead, we'll add the input method in the 'Region & Language' settings.

First, follow the previous guide and install both the Simplified and Traditional Chinese language files.

Then press the Super/Windows key and type 'region', then click the 'Region & Language' menu item. The 'Region & Language' window will pop up. Go to the 'Input Sources' section and click the '+' button.

Next, the system will prompt you to select a language. Select 'Chinese (China)' and you will be prompted again to select your input method. Select 'Chinese (SunPinyin)'. Once done, click the 'Add' button.

After you have added the 'Chinese (SunPinyin)' input source, the top panel will show the language selector. Click on the menu item and select 'zh'. If the menu item does not show up in the top panel, log out and re-login again.

To switch between input methods, use the 'Super/Windows + Space' keys. Open Gedit and start typing in Chinese as shown below.

Upgrade to Ubuntu 15.10

Ubuntu 15.10 (Wily Werewolf) was released a few days back. I was busy with work and didn't realize it until today. On the desktop end, I don't see any significant changes apart from regular upgrades of existing software. The good news is that GNOME 3.16 is officially in the default repository instead of a separate Personal Package Archive (PPA). Meanwhile on the server side, there are far more interesting changes, notably in cloud computing. If you've been following Mark Shuttleworth's Google+ page, he has been posting more about server stuff these days. Following Red Hat's example, one can only make good profit out of the server part of the business.

Enough talk of the lackluster release. To upgrade from 15.04, just type the command below. Depending on the installed packages and your network speed, it can take a while; it took me around 3 hours to download all the required packages.
$ sudo do-release-upgrade -d

Once all the packages have been downloaded, answer a few questions and wait about an hour for all the packages to upgrade. Restart the machine once done. The upgrade was painless without any hiccup compared to past experiences. Is it because I'm using GNOME instead of the Unity desktop? I doubt it, but based on my painful experience of getting GNOME and Unity to work side by side, it's best not to venture there.
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 15.10
Release:        15.10
Codename:       wily

However, the annoying Internal System Error popup still persists, which is due to gjs-console. I'm not sure of the root cause, but I decided to disable it permanently.
$ sudo sed -i 's/enabled=1/enabled=0/' /etc/default/apport 
$ sudo systemctl stop apport

In the past, I've set up Dnsmasq for local development sites. It works by redirecting a certain TLD, for example .dev, to the localhost IP address. That approach works if you're running the Dnsmasq service on its own. For those running Dnsmasq through NetworkManager, you'll need to set it up slightly differently. Steps as follows.

Add the .dev generic Top Level Domain (gTLD) we want to redirect to localhost. Note that a plain 'sudo echo ... > file' does not work, as the redirection runs under your own user; pipe through 'sudo tee' instead.
$ echo "address=/dev/" | sudo tee /etc/NetworkManager/dnsmasq.d/

Restart the NetworkManager service.
$ sudo systemctl restart network-manager

Try to ping any domain with the TLD .dev.
$ ping -c 4
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.077 ms
64 bytes from icmp_seq=2 ttl=64 time=0.106 ms
64 bytes from icmp_seq=3 ttl=64 time=0.091 ms
64 bytes from icmp_seq=4 ttl=64 time=0.096 ms

--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.077/0.092/0.106/0.014 ms

But wait, didn't we just configure every domain ending in .dev to resolve to the localhost IP address? How come it resolved to a different address instead? It seems this particular IP address indicates a potential name collision issue. To quote the ICANN site,
" is a special IPv4 address that will appear in system logs alerting system administrators that there is potential name collision issue, enabling a quick diagnosis and remediation. The "53" is used as a mnemonic to indicate a DNS-related problem owing to the use of network port 53 for the DNS service."
We can find more details by checking it using dig command.
$ dig TXT +short
"Your DNS configuration needs immediate attention see"

In other words, this IP address is a notification for system administrators to take note that the .dev generic Top Level Domain (gTLD) will be available in the global DNS.

Linux Containers (LXC) with LXD Hypervisor, Part 3 : Transferring Files Between Host and Container

Other articles in the series:
In part 3, we're going to explore how to copy files between the host and the container. Copying a file from the host to the container uses the 'lxc file push <filename> <container-name>/<path>' command. You must append a forward slash (/) and a directory path to the container name for it to work, as shown below.
$ echo "a" > foobar
$ md5sum foobar 
60b725f10c9c85c70d97880dfe8191b3  foobar
$ lxc file push foobar test-centos-6
error: Invalid target test-centos-6
$ lxc file push foobar test-centos-6/tmp
error: exit status 255: mntns dir: /proc/16875/ns/mnt
open container: Is a directory

$ lxc file push foobar test-centos-6/tmp

Similarly, to copy a file from the container, use the 'lxc file pull <container-name>/<path> .' command. Remember the dot (.), which indicates the destination, the current folder.
$ lxc file pull test-centos-6/tmp/foobar .
$ md5sum foobar
60b725f10c9c85c70d97880dfe8191b3  foobar
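The push/pull round trip can be verified in a script by comparing checksums. The sketch below replaces the 'lxc file push' and 'lxc file pull' pair with a plain cp, so it runs on any machine without a container:

```shell
# Verify a file survives a copy round trip by comparing md5 checksums.
# A sketch: 'cp' stands in for the lxc file push/pull pair.
src=$(mktemp) && dst=$(mktemp)
echo "a" > "$src"
cp "$src" "$dst"          # stand-in for: lxc file push / lxc file pull

before=$(md5sum "$src" | cut -d' ' -f1)
after=$(md5sum "$dst" | cut -d' ' -f1)

if [ "$before" = "$after" ]; then
  echo "checksums match: $after"   # prints: checksums match: 60b725f10c9c85c70d97880dfe8191b3
fi
```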

As an LXC container is essentially a glorified chroot environment, you can also create or copy files and folders directly in the chroot directory.
$ cd /var/lib/lxd/containers/test-centos-6/rootfs/tmp
$ touch create_file_directly_in_chroot_folder

Repeat similar steps, but inside the container.
$ lxc exec test-centos-6 /bin/bash
$ cd /tmp
$ touch create_file_directly_in_container

Checking these files from the host. Note the file permissions.
$ ll /var/lib/lxd/containers/test-centos-6/rootfs/tmp/
total 0
-rw-rw-r-- 1 ang    ang    0 Sep  29 02:00 create_file_directly_in_chroot_folder
-rw-r--r-- 1 100000 100000 0 Sep  29 02:00 create_file_directly_in_container

Similarly, but inside the LXC container.
[root@test-centos-6 tmp]# ll
total 0
-rw-rw-r-- 1 65534 65534 0 Sep 28 14:00 create_file_directly_in_chroot_folder
-rw-r--r-- 1 root  root  0 Sep 28 14:00 create_file_directly_in_container

While this is doable, we shouldn't create files or folders directly in the container's chroot folder from the host. Use the 'lxc file push' and 'lxc file pull' commands so that file ownership and permissions are mapped correctly.

Installation and Usage of R, The Statistical Computing Language and Environment

One of the interesting things that piqued my interest over the past two weeks was picking up R, "a language and environment for statistical computing and graphics". For me, it's just another great graphing tool, in addition to Gnuplot (more on this in a future post), to plot graphs from the console. To be frank, I really like R: easy to pick up (feels like PHP), with plenty of available resources in terms of books and search results.

Why R? A few months back, I bought a pedometer and started capturing my daily step counts in CSV format. Instead of putting the data in Google Sheets, I opted to put it in GitHub. First, it helps me build a habit of making daily commits to GitHub. Second, I can explore different kinds of graphical plotting tools. Lastly, and most importantly, it gives me a general overview and awareness of my sedentary lifestyle over time, which helps me make the necessary adjustments for my health. You can't make any changes if you're not constantly aware of the issue and facing it directly.

Installation in Ubuntu, in my case 15.04, is pretty straightforward.
$ sudo apt-get install r-base

Next, let's start R (yes, capital R) and plot a simple graph.
$ R

R version 3.1.2 (2014-10-31) -- "Pumpkin Helmet"
Copyright (C) 2014 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

  Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> x <- c(1,2,3)
> y <- c(10,20,30)
> plot(x,y)

The above code will produce the following graph.

Instead of generating the graph through the R interpreter, we can also generate it in batch mode using the 'Rscript' command. Create a new file called 'plot.R' with the following code. After you run it with 'Rscript plot.R', a PDF file named Rplots.pdf will be generated with the graph.
$ cat plot.R
x <- c(1,2,3)
y <- c(10,20,30)
plot(x,y)
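The whole batch run can be scripted in one go. The sketch below writes plot.R and renders it; the Rscript call is guarded so the script still works on a machine without R installed:

```shell
# Generate plot.R and render it in one go.
# A sketch; plot(x,y) is the call that actually draws the graph.
cat > plot.R <<'EOF'
x <- c(1,2,3)
y <- c(10,20,30)
plot(x,y)
EOF

if command -v Rscript >/dev/null 2>&1; then
  Rscript plot.R        # produces Rplots.pdf in the current directory
else
  echo "Rscript not found; plot.R written but not rendered"
fi
```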

If we need to install any packages, use the command below. Note that this is run from the console.
$ sudo Rscript -e "install.packages('ggplot2', repos='')"

Similarly, if you want to install it through the R interpreter.
> install.packages('ggplot2', repos='')

Or without specifying the repository. You will be prompted for one.
> install.packages('ggplot2')

Having done these steps, you should be well equipped to explore the more powerful features of R. Have fun!

Linux Containers (LXC) with LXD Hypervisor, Part 2 : Importing Container Images Into LXD

Other articles in the series:
In Part 2, we're going to discuss different ways of importing LXC container images into LXD. By default, when you create an LXC container using the 'lxc launch' command, the tool will download and cache the container image from the remote server. For example, to create a new CentOS 7 LXC container:
$ lxc remote add images
$ lxc launch images:centos/7/amd64 centos

While waiting for the CentOS 7 image to be downloaded, you can check the LXD log file.
$ sudo tail -n2 /var/log/lxd/lxd.log
t=2015-08-30T00:13:22+0800 lvl=info msg="Image not in the db downloading it" image=69351a66510eecabf11ef7dfa94af40e20cf15c346ae08b3b0edd726ef3be10c server=
t=2015-08-30T00:13:22+0800 lvl=info msg="Downloading the image" image=69351a66510eecabf11ef7dfa94af40e20cf15c346ae08b3b0edd726ef3be10c

Unfortunately, if you have a slow network like me (see screenshot below), it's best to use a network monitoring tool to check whether you're still downloading the image. In my case, I'm using bmon. Note my pathetic network speed. An average LXC container image is around 50 MB; at an average download rate of 20 kB/s, it should take around 40-plus minutes to finish the download. Without a download progress indicator, we have to go to all this trouble just to check whether the import is still running.

Alternatively, there is another way to import container images: the 'lxd-images' tool, a Python script which supports two additional image sources on top of the default one mentioned just now. These two sources are local BusyBox images and Ubuntu Cloud images from the official release streams. Additionally, since version 0.14, download progress tracking has been added to the tool, which solves the hassle we encountered.

Let's run the 'lxd-images' command and see its help message.
$ lxd-images
error: the following arguments are required: action
usage: lxd-images [-h] {import} ...

LXD: image store helper

positional arguments:
    import    Import images

optional arguments:
  -h, --help  show this help message and exit

 To import the latest Ubuntu Cloud image with an alias:
    /usr/bin/lxd-images import ubuntu --alias ubuntu

 To import the latest Ubuntu 14.04 LTS 64bit image with some aliases:
    /usr/bin/lxd-images import lxc ubuntu trusty amd64 --alias ubuntu --alias ubuntu/trusty

 To import a basic busybox image:
    /usr/bin/lxd-images import busybox --alias busybox

UPDATE: Since LXD version 0.17, 'lxd-images import lxc' command has been deprecated in favour of using the 'lxc launch' command.

Let's try to download and cache a CentOS 6 LXC container image into LXD, and compare it against importing a container image with the 'lxc launch' command. Notice the differences. First, verbosity is higher; at least we know what is going on behind the scenes, like which files are being downloaded. Second, we can track the progress of the download. Third, we can attach additional metadata, like aliases, to the downloaded container image.
$ lxd-images import lxc centos 6 amd64 --alias centos/6                                                                                      
Downloading the GPG key for
Downloading the image list for
Validating the GPG signature of /tmp/tmprremowyo/index.json.asc
Downloading the image:
Progress: 1 %

However, from my reading of the Python code of the 'lxd-images' tool, the container image is downloaded without using multiple simultaneous connections. Hence, it will take a while (if you have a slow connection like me) to download any container image. To work around this, you can download the container image manually using a third-party download tool like Aria2, which supports multiple simultaneous connections, and then import it.

In a previous LXD version, if I remember correctly before version 0.15, the CentOS 7 image was missing from the default image source listing but still existed on the web site.
$ lxd-images import lxc centos 7 amd64 --alias centos/7
Downloading the GPG key for
Downloading the image list for
Validating the GPG signature of /tmp/tmpgg6sob2e/index.json.asc
Requested image doesn't exist.

Download and import the container image directly.
$ aria2c -x 4

Import the downloaded container image in unified tarball format.
$ lxc image import lxd.tar.xz --alias centos/7
Image imported with fingerprint: 1d292b81f019bcc647a1ccdd0bb6fde99c7e16515bbbf397e4663503f01d7d1c

In short, just use the 'lxd-images' tool to import container images from the default source.

For the next part of the series, we're going to look into sharing files between the LXC container and the host. Till the next time.

Thinkpad T4210?

What do you call a ThinkPad laptop which has a T420 motherboard but a T410 casing? Is it a T420 or a T410? Or should we call it a hybrid of both, a T4210? More on this later.

I spend most of my computing time switching between Google Chrome and the Bash shell, especially the latter. Therefore, a laptop with the best keyboard is essential to keep my hands and fingers happy, especially if you're suffering from Repetitive Strain Injury (RSI). This has become manageable after all these years, as I've learned how to reduce the muscle spasms by massaging the correct trigger points.

If you ask me what my dream laptop is, it would be the legendary ThinkPad, especially the T-series or the lightweight X-series. Why so? If you spend a lot of time in the console, which requires a lot of typing, a laptop with the best keyboard is a must if you want to avoid injuring yourself. If you can, go for the classic keyboard (7 rows) instead of the newly introduced Precision keyboard (6 rows). After using both keyboards for a long period, I firmly believe that Lenovo made a big mistake by moving to the Precision keyboard.

I used to own an E-series ThinkPad, which is, unfortunately, a cheaper, misleading, fake version of a ThinkPad without the durability and maintainability. Forget also the R-series, another economy version riding on the ThinkPad fame.

After reading the Used ThinkPad Buyers Guide, I bought a used ThinkPad T420 to start my collection. Everything seemed good and it looked like grade-A quality. Hardware details from inxi are shown below:
$ inxi -b
System:    Host: motoko Kernel: 3.19.0-25-generic x86_64 (64 bit) Desktop: Gnome 3.16.3
           Distro: Ubuntu 15.04 vivid
Machine:   System: LENOVO product: 4180CTO v: ThinkPad T420
           Mobo: LENOVO model: 4180CTO Bios: LENOVO v: 83ET63WW (1.33 ) date: 07/29/2011
CPU:       Dual core Intel Core i5-2540M (-HT-MCP-) speed/max: 842/3300 MHz
Graphics:  Card: Intel 2nd Generation Core Processor Family Integrated Graphics Controller
           Display Server: X.Org 1.17.1 drivers: intel (unloaded: fbdev,vesa) Resolution: 1600x900@60.0hz
           GLX Renderer: Mesa DRI Intel Sandybridge Mobile GLX Version: 3.0 Mesa 10.5.2
Network:   Card-1: Intel 82579LM Gigabit Network Connection driver: e1000e
           Card-2: Intel Centrino Advanced-N 6205 [Taylor Peak] driver: iwlwifi
Drives:    HDD Total Size: 500.1GB (14.8% used)
Info:      Processes: 253 Uptime: 1:16 Memory: 2738.2/7760.8MB Client: Shell (bash) inxi: 2.2.16 

Although the stated maximum memory of the machine is 8GB, the result returned by the dmidecode command shows otherwise: the maximum supported memory is 16GB.
$ sudo dmidecode -t 16
# dmidecode 2.12
SMBIOS 2.6 present.

Handle 0x0005, DMI type 16, 15 bytes
Physical Memory Array
        Location: System Board Or Motherboard
        Use: System Memory
        Error Correction Type: None
        Maximum Capacity: 16 GB
        Error Information Handle: Not Provided
        Number Of Devices: 2
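If you only want the capacity figure, awk can pull it out of the dmidecode output. The sketch below runs against a captured sample (so it needs no root); on a real machine, replace the heredoc with 'sudo dmidecode -t 16':

```shell
# Extract the maximum memory capacity from dmidecode-style output with awk.
# A sketch against a captured sample; for real use:
#   sudo dmidecode -t 16 | awk -F': ' '/Maximum Capacity/ {print $2}'
max_mem=$(awk -F': ' '/Maximum Capacity/ {print $2}' <<'EOF'
Physical Memory Array
        Location: System Board Or Motherboard
        Use: System Memory
        Maximum Capacity: 16 GB
        Number Of Devices: 2
EOF
)
echo "$max_mem"   # prints: 16 GB
```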

Unfortunately, after googling the full hardware details of the machine, I noticed that the integrated webcam and fingerprint reader are missing from this laptop. It seems the seller rebuilt the laptop using a T410 case and a T420 motherboard. I may be wrong on this, but that's the best conclusion I can reach so far. Honestly, I have no one to blame but myself; nevertheless, I can survive without these two features. Note to self: don't buy items like this while travelling.

On a related note, David Hill of Lenovo was conducting surveys exploring the idea of reintroducing a "Retro ThinkPad". The survey results were interesting.

EPEL Yum Repository in CentOS

As Ansible, an automation tool, is not included in the default Yum repository in CentOS 6, you have to set up the Extra Packages for Enterprise Linux (EPEL) Yum repository. I found an article on setting up the EPEL repository and realized that I've been doing it the wrong way all this while. Instead of downloading the EPEL rpm package separately, we can install it directly, as it's already in the CentOS Extras repository, just by running the command below.
$ sudo yum install epel-release

Extra information on the epel-release package.
$ yum info epel-release
Installed Packages
Name        : epel-release
Arch        : noarch
Version     : 6
Release     : 8
Size        : 22 k
Repo        : installed
From repo   : extras
Summary     : Extra Packages for Enterprise Linux repository configuration
URL         :
License     : GPLv2
Description : This package contains the Extra Packages for Enterprise Linux (EPEL) repository
: GPG key as well as configuration for yum and up2date.

Listing the files within the epel-release package.
$ rpm -ql epel-release

Getting the list of RPM packages from EPEL repository.
$ yum list | grep epel

Since this is a third-party Yum repository, some system administrators may prefer to disable it by default and only install certain packages from it when necessary.
$ cat /etc/yum.repos.d/epel* | grep enabled
$ sed -i 's/enabled=1/enabled=0/g' /etc/yum.repos.d/epel*
$ cat /etc/yum.repos.d/epel* | grep enabled
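To see the toggle end to end without touching the real system, here's a self-contained sketch on scratch copies of hypothetical repo files:

```shell
# Disable every 'enabled=1' flag across EPEL-style repo files.
# A sketch on throwaway files; for real use, run the sed (with sudo)
# against /etc/yum.repos.d/epel* instead.
repodir=$(mktemp -d)
printf '[epel]\nenabled=1\n' > "$repodir/epel.repo"
printf '[epel-testing]\nenabled=1\n' > "$repodir/epel-testing.repo"

sed -i 's/enabled=1/enabled=0/g' "$repodir"/*.repo
grep -h enabled "$repodir"/*.repo   # prints 'enabled=0' once per file
```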

If you've installed any packages from this Yum repository, you should clear all the cached downloaded RPM packages.
$ yum clean all

Unfortunately, to run yum commands against this now-disabled repository, we have to explicitly enable it for every command. Some examples:
$ yum search --enablerepo=epel ansible
$ yum list --enablerepo=epel | grep epel | grep ansible
$ yum install --enablerepo=epel ansible

It's indeed a bit of a hassle to type all these commands. You can create an alias to reduce the needless typing.
$ alias yumepel="yum --enablerepo='epel'"
$ yumepel search ansible

Linux Container (LXC) with LXD Hypervisor, Part 1: Installation and Creation

For the past few weeks, I've been looking into creating LXC containers for both the Fedora and Ubuntu distros. One of the creation methods is downloading a pre-built image.
$ lxc-create -t download -n test-container -- -d ubuntu -r trusty -a amd64

However, creating unprivileged containers is rather cumbersome, and the list of language bindings for the APIs is limited. What if we create a daemon, a container hypervisor, that monitors and manages all the containers? In addition, the daemon handles all the security privileges and provides a RESTful web API for remote management. Well, that's the purpose of LXD, the LXC container hypervisor. Think of it as a glorified LXC 'download' creation method with additional features.

Since the LXD project is under the management of Canonical Ltd, the company behind Ubuntu, it's recommended to use Ubuntu if you don't want to install through source code compilation.

Installation and setup of LXD as shown below was done in Ubuntu 15.04.

Firstly, install the LXD package.
$ sudo apt-get install lxd
Warning: The home dir /var/lib/lxd/ you specified already exists.
Adding system user 'lxd' (UID 125) ...
Adding new user 'lxd' (UID 125) with group 'nogroup' ...
The home directory '/var/lib/lxd/' already exists. Not copying from '/etc/skel'.
adduser: Warning: The home directory '/var/lib/lxd/' does not belong to the user you are currently creating.
Adding group 'lxd' (GID 137) ...

From the message above, note that the group 'lxd' (GID 137) does not belong to your current login user yet. To update your current login session's groups, run the command below so that you don't need to log out and log back in.
$ newgrp lxd

Check your current login user's groups. You should see that the user now belongs to the group 'lxd' (GID 137).
$ id $USER | tr ',' '\n'
uid=1000(ang) gid=1000(ang) groups=1000(ang)

$ groups
ang adm cdrom sudo dip plugdev lpadmin sambashare lxd
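A quick scripted membership check saves guessing whether the new group is active in the current session. A sketch ('lxd' is the group the package created above):

```shell
# Check whether the current session belongs to a given group.
# A sketch; 'lxd' is the group added by the LXD package.
in_group() {
  id -nG | tr ' ' '\n' | grep -qx "$1"
}

if in_group lxd; then
  echo "already in the lxd group"
else
  echo "not in the lxd group yet; run: newgrp lxd (or re-login)"
fi
```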

Next, we need to add the remote server which contains the pre-built container images.
$ lxc remote add images
Generating a client certificate. This may take a minute...

List all the available pre-built container images from the server we've just added. Pay attention to the colon (:) at the end of the command, as it's needed; otherwise, the command will list locally downloaded images. The list is quite long, so I've reformatted the layout and only show the top two.
$ lxc image list images:
|   ALIAS               | FINGERPRINT |PUBLIC |  DESCRIPTION   | ARCH |        UPLOAD DATE          |
|centos/6/amd64 (1 more)|460c2c6c4045 |yes    |Centos 6 (amd64)|x86_64|Jul 25, 2015 at 11:17am (MYT)|
|centos/6/i386 (1 more) |60f280890fcc |yes    |Centos 6 (i386) |i686  |Jul 25, 2015 at 11:20am (MYT)|

Let's create our first container using CentOS 6 pre-built image.
$ lxc launch images:centos/6/amd64 test-centos-6
Creating container...error: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket: no such file or directory

Reading through this troubleshooting ticket, it seems the LXD daemon was not started. Let's start it. Note that I'm still using the old 'service' command to manage the daemon instead of the 'systemctl' command. As they say, old habits die hard. It will take a while for me to fully transition from SysVinit to systemd. ;-)
$ sudo service lxd restart
$ sudo service lxd status
● lxd.service - Container hypervisor based on LXC
   Loaded: loaded (/lib/systemd/system/lxd.service; indirect; vendor preset: enabled)
   Active: active (running) since Ahd 2015-07-26 00:28:51 MYT; 10s ago
 Main PID: 13260 (lxd)
   Memory: 276.0K
   CGroup: /system.slice/lxd.service
           ‣ 13260 /usr/bin/lxd --group lxd --tcp [::]:8443

Jul 26 00:28:51 proliant systemd[1]: Started Container hypervisor based on LXC.
Jul 26 00:28:51 proliant systemd[1]: Starting Container hypervisor based on LXC...

Finally, create and launch our container using the CentOS 6 pre-built image. Compared to the 'lxc-create' command, the parameters are simpler. This will take a while, as the program needs to download the pre-built CentOS 6 image, which averages around 50 MB; more on this later.
$ lxc launch images:centos/6/amd64 test-centos-6
Creating container...done
Starting container...done

Checking the status of our newly created container.
$ lxc list
|     NAME      |  STATE  |   IPV4    | IPV6 | EPHEMERAL | SNAPSHOTS |
| test-centos-6 | RUNNING | |      | NO        | 0         |

Another view of our container's status.
$ lxc info test-centos-6
Name: test-centos-6
Init: 14572
  eth0: IPV4
  lo:   IPV4
  lo:   IPV6    ::1

Check the downloaded pre-built image. Subsequent container creations will use the same cached image.
$ lxc image list
|       | 460c2c6c4045 | yes    | Centos 6 (amd64) | x86_64 | Jul 26, 2015 at 12:51am (MYT) |

You can also use the fingerprint to create and start the same container.
$ lxc launch 460c2c6c4045 test-centos-6-2                                                                    
Creating container...done
Starting container...done

As I mentioned, the downloaded pre-built CentOS 6 image is roughly 50 MB. The file is located in the '/var/lib/lxd/images' folder. The fingerprint is simply the first 12 characters of the file name's hash string.
$ sudo ls -lh /var/lib/lxd/images
total 50M
-rw-r--r-- 1 root root 50M Jul  26 00:51 460c2c6c4045a7756faaa95e1d3e057b689512663b2eace6da9450c3288cc9a1
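The 12-character fingerprint is simply a prefix of that 64-character file name, the image's full hash, which a quick shell check confirms:

```shell
# Show that the short fingerprint is the leading 12 characters of the
# image file name (its full hash). Hash taken from the listing above.
full="460c2c6c4045a7756faaa95e1d3e057b689512663b2eace6da9450c3288cc9a1"
short=$(printf '%s' "$full" | cut -c1-12)
echo "$short"   # prints: 460c2c6c4045
```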

Now, let's enter the container. Please note that the pre-built image contains only the minimum necessary packages, so quite a few things are missing. For example, wget, the downloader, is not installed by default.
$ lxc exec test-centos-6 /bin/bash
[root@test-centos-6 ~]#
[root@test-centos-6 ~]# cat /etc/redhat-release 
CentOS release 6.6 (Final)

[root@test-centos-6 ~]# wget
bash: wget: command not found

To exit from the container, simply type the 'exit' command.
[root@test-centos-6 ~]# exit

To stop the container, just run this command.
$ lxc stop test-centos-6
$ lxc list
|      NAME       |  STATE  |   IPV4    | IPV6 | EPHEMERAL | SNAPSHOTS |
| test-centos-6   | STOPPED |           |      | NO        | 0         |

For the next part of the series, we're going to look into importing container images into LXD. Till the next time.

Vagrant 1.7.3 and VirtualBox 5.0 Installation in Ubuntu 15.04 - Part 2

Continue from the first part of the installation.

Meanwhile, the available VirtualBox version from the default Ubuntu repository is 4.3.26 as shown below.
$ apt-cache show virtualbox | grep ^Version
Version: 4.3.26-dfsg-2ubuntu2
Version: 4.3.26-dfsg-2ubuntu1

While we could use a similar installation method as with Vagrant, if there is a repository available, always favour the repository method, as you don't need to manually verify each downloaded package. Upgrades are also seamless and hassle-free.
$ echo "deb vivid contrib" | sudo tee -a /etc/apt/sources.list.d/virtualbox.list
deb vivid contrib

$ cat /etc/apt/sources.list.d/virtualbox.list 
deb vivid contrib

Next, add the public key so that the apt program can verify the packages from the repository we've just added.
$ wget -q -O- | sudo apt-key add -

Update the repository packages and check the available version.
$ sudo apt-get update

As discussed, always go through the change log before installation. Then we proceed with the installation. You must specify the exact version you want to install; in this case, version 5.0.
$ sudo apt-get install virtualbox-5.0

Once done, we'll proceed with Extension Pack installation. Let's download it and install it using the VBoxManage console tool.
$ aria2c -x 4

$ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.0.0-101573.vbox-extpack 
Successfully installed "Oracle VM VirtualBox Extension Pack".

Confirm our installed VirtualBox version.
$ vboxmanage --version

Lastly, if there is any Linux kernel upgrade, you may need to rebuild the vboxdrv kernel module by running this command.
$ sudo /etc/init.d/vboxdrv setup

Vagrant 1.7.3 and VirtualBox 5.0 Installation in Ubuntu 15.04 - Part 1

VirtualBox 5.0, the x86 virtualization software, was recently released. This will be a good time for me to revisit it again with Vagrant, a tool to provision and distribute virtual machines on top of VirtualBox. Why bother with Vagrant if you can just use VirtualBox as is? Well, if you want (1) to quickly provision an existing downloaded image; (2) to learn different provisioners like Ansible, Chef, Puppet, and others; or (3) to have an easier way to manage your VirtualBox from the console, then there is no better tool than Vagrant.

One of the issues I had when evaluating Linux Containers (LXC) is that, at this moment of writing, there is no easy way to create a CentOS 7 container through its daemon, LXD. Also, the container created cannot be distributed to other operating systems, as LXC is a chroot-based container and not a virtual machine. In other words, LXC only works on GNU/Linux.

Now, let's check the available version of Vagrant in the Ubuntu default repository.
$ apt-cache show vagrant | grep ^Version
Version: 1.6.5+dfsg1-2

Another way to check the latest Vagrant version, if you've already installed Vagrant, is through the 'vagrant version' command. However, the result returned is not entirely correct; more on that later.
$ vagrant version
Installed Version: 1.7.3
Latest Version: 1.7.3

You're running an up-to-date version of Vagrant!

Our next step is to download the latest version of both software packages and install them in Ubuntu 15.04, which means we need to download the DEB packages. Let's start with Vagrant. We also need to download the corresponding checksum file. I'm using Aria2 instead of Wget to speed up downloading, as Aria2 supports multiple simultaneous connections.
$ aria2c -x 4
$ wget --content-disposition

Before we install or upgrade Vagrant, verify our downloaded DEB package against the checksum file. Remember to read the changelog as well, just in case there are any important items relevant to our upgrade or installation.
$ sha256sum -c 1.7.4_SHA256SUMS 2>&1 | grep OK                                                                                          
vagrant_1.7.4_x86_64.deb: OK
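If you've not used 'sha256sum -c' before, here is a minimal, self-contained sketch of how the verification works, using a throwaway demo file in place of the actual DEB package and SHA256SUMS file:

```shell
# Create a demo file, record its checksum, then verify it -- the same
# mechanism 'sha256sum -c 1.7.4_SHA256SUMS' uses on the downloaded DEB.
echo "demo content" > demo.txt
sha256sum demo.txt > demo.sha256
sha256sum -c demo.sha256
# -> demo.txt: OK
```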

Upgrade our Vagrant installation.
$ sudo dpkg -i vagrant_1.7.4_x86_64.deb 
Preparing to unpack vagrant_1.7.4_x86_64.deb ...
Unpacking vagrant (1:1.7.4) over (1:1.7.3) ...
Setting up vagrant (1:1.7.4) ...

Finally, verify our installation. Note the inaccurate reporting of the latest version against the installed version. Hence, to get the up-to-date version, it's best to check Vagrant's download page.
$ vagrant version
Installed Version: 1.7.4
Latest Version: 1.7.3
You're running an up-to-date version of Vagrant!

To be continued.

Gtk-Message: Failed to load module "overlay-scrollbar"

Following up on my previous post on replacing the existing Unity desktop with Gnome 3.16. One of the issues I kept encountering since then was this warning message of 'Gtk-Message: Failed to load module "overlay-scrollbar"', especially when I was reading PDF documents through Evince. Overlay scrollbar is one of the features added to the Unity desktop to gain more space by hiding the scrollbar by default and only showing it when you mouse over the scrolling hotspot. Now, how should we fix this?

Reading through AskUbuntu's answer on this matter, it seems the pre-installed overlay-scrollbar was not removed? Let's try to remove it.
$ sudo apt-get remove overlay-scrollbar
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Package 'overlay-scrollbar' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Interesting. The package is not installed. Re-reading the answer, it seemed that this was due to residual config files that still existed after I removed the Unity desktop. Let's purge the package. Next, just log out and log in again, and the problem should be solved.
$ sudo apt-get purge overlay-scrollbar

On a related note, you can purge the residual config files of all removed DEB packages. This is something totally new to me after using Ubuntu or Debian for so many years.
$ dpkg -l | grep '^rc' | awk '{print $2}' | sudo xargs dpkg --purge
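Since the pipeline above is destructive, it's worth previewing what it would purge first. The sketch below feeds a sample 'dpkg -l' line through the same grep and awk filter; on a real system you would pipe 'dpkg -l' in instead of the printf:

```shell
# Simulate one line of 'dpkg -l' output and extract the package name,
# using the same filter as the purge pipeline (minus the xargs part).
printf 'rc  zeitgeist-datahub  0.9.14-2.2ubuntu3 amd64 event logging daemon\n' \
    | grep '^rc' | awk '{print $2}'
# -> zeitgeist-datahub
```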

Let's break down the commands. First, get one sample result of the "dpkg -l | grep '^rc'" commands as shown below.
$ dpkg -l | grep ^rc | tail -n 1
rc  zeitgeist-datahub  0.9.14-2.2ubuntu3 amd64 event logging framework - passive logging daemon

What is 'rc'? Let's look at the first few lines from the 'dpkg -l' command. Note that I've truncated the extra whitespaces. If you follow the vertical lines (|) pointing down, there are three fields that indicate the status of a DEB package: the desired status, the current status, and the error indicator. Referring back to our package's 'rc' status, 'r' means removed and 'c' means that config files still exist in the system.
$ dpkg -l | head -n 4
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name    Version    Architecture Description
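To make the field positions concrete, here is a small sketch that splits the status column of our sample 'rc' line into its desired-state and current-state characters:

```shell
# First column of 'dpkg -l' encodes the status: 1st char = desired state,
# 2nd char = current state. For 'rc': r = removed, c = config files remain.
line='rc  zeitgeist-datahub  0.9.14-2.2ubuntu3 amd64 event logging daemon'
status=$(printf '%s\n' "$line" | awk '{print $1}')
printf 'desired=%s current=%s\n' \
    "$(printf '%s' "$status" | cut -c1)" \
    "$(printf '%s' "$status" | cut -c2)"
# -> desired=r current=c
```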

Going back to our sample package, zeitgeist-datahub. Let's find out which residual config files exist for this DEB package.
$ dpkg -L zeitgeist-datahub

Remove or purge the residual config files. Both commands are equivalent.
$ sudo apt-get purge zeitgeist-datahub
$ sudo dpkg --purge zeitgeist-datahub

Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be REMOVED:
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
(Reading database ... 256755 files and directories currently installed.)
Removing zeitgeist-datahub (0.9.14-2.2ubuntu3) ...
Purging configuration files for zeitgeist-datahub (0.9.14-2.2ubuntu3) ...

Checking back on the package status. Nothing is shown; hence, everything was successfully purged from your system.
$ dpkg -l | grep zeitgeist-datahub