Does Your Drupal Code Adhere to Coding Standards and Best Practices?

Following coding standards and adhering to best practices in any development environment is crucial to keep the code base consistent, so that every developer is aware of the default ways of development. For Drupal, there is the Coder module, which helps make sure your Drupal code follows the standard conventions and practices.

Before we install the Coder module, let's obtain some Drupal code to validate. We'll be using sample code from Drupal Examples.
$ git clone git:// drupal-examples

Install the Coder module using Composer. Note that I've tried installing it using Drush, but I still prefer the former approach, which is more suitable for general PHP library management.
$ composer global require drupal/coder

Changed current directory to /home/ang/.composer
Using version ^8.2 for drupal/coder
./composer.json has been created
Loading composer repositories with package information
Updating dependencies (including require-dev)
  - Installing squizlabs/php_codesniffer (2.3.2)
    Downloading: 100%         

  - Installing drupal/coder (8.2.2)
    Cloning c08506a332235d5485c79231639e9577b8c4d332

Writing lock file
Generating autoload files

Check the installed PHP CodeSniffer (phpcs) and PHP Code Beautifier and Fixer (phpcbf) programs.
$ ll ~/.composer/vendor/bin/
total 0
lrwxrwxrwx. 1 ang ang 43 May 27 18:13 phpcbf -> ../squizlabs/php_codesniffer/scripts/phpcbf*
lrwxrwxrwx. 1 ang ang 42 May 27 18:13 phpcs -> ../squizlabs/php_codesniffer/scripts/phpcs*

However, neither program is within Bash's program search path. Hence, we have to add them manually.
$ export PATH=$PATH:$HOME/.composer/vendor/bin

To make it permanent, just append the above line to your ~/.bashrc or ~/.profile file.
$ echo 'export PATH=$PATH:$HOME/.composer/vendor/bin' >> ~/.bashrc
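If the file gets sourced more than once, the directory will be appended repeatedly; a small guard avoids that. A sketch for ~/.bashrc:

```shell
# Append Composer's global bin directory to PATH only when it's missing,
# so re-sourcing ~/.bashrc does not grow PATH indefinitely.
case ":$PATH:" in
  *":$HOME/.composer/vendor/bin:"*) ;;                  # already there, do nothing
  *) export PATH="$PATH:$HOME/.composer/vendor/bin" ;;  # append once
esac
```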

To verify that our path has been set up correctly.
$ which phpcs

Now, let's test whether a sample module adheres to the Drupal coding standard. As shown below, the phpcs program can't find the Drupal coding standard rules.
$ phpcs --standard=Drupal drupal-examples/action_example/action_example.module
ERROR: the "Drupal" coding standard is not installed. The installed coding standards are PSR1, PHPCS, Squiz, PSR2, Zend, PEAR and MySource

Set the installed path to the Drupal coding standard rules and verify again.
$ phpcs --config-set installed_paths ~/.composer/vendor/drupal/coder/coder_sniffer
Config value "installed_paths" added successfully

$ phpcs -i
The installed coding standards are PSR1, PHPCS, Squiz, PSR2, Zend, PEAR, MySource, Drupal and DrupalPractice

Let's try again. Surprisingly, the official Drupal Examples code contains coding standards violations!
$ phpcs --standard=Drupal action_example/action_example.module 

 228 | ERROR   | [ ] Parameter comment must end with a full stop
 276 | ERROR   | [ ] Parameter comment must end with a full stop
 283 | ERROR   | [ ] Type hint "array" missing for $context
 285 | ERROR   | [x] Line indented incorrectly; expected 3 spaces, found 2
 286 | ERROR   | [x] Line indented incorrectly; expected 3 spaces, found 2
 287 | ERROR   | [x] Line indented incorrectly; expected 3 spaces, found 2
 288 | ERROR   | [x] Line indented incorrectly; expected 3 spaces, found 2
 289 | ERROR   | [x] Line indented incorrectly; expected 3 spaces, found 2
 290 | ERROR   | [x] Line indented incorrectly; expected 3 spaces, found 2
 291 | ERROR   | [x] Line indented incorrectly; expected 3 spaces, found 2
 292 | ERROR   | [x] Line indented incorrectly; expected 3 spaces, found 2
 293 | ERROR   | [x] Line indented incorrectly; expected 3 spaces, found 2
 343 | ERROR   | [ ] Type hint "array" missing for $context
 345 | WARNING | [ ] The use of function dsm() is discouraged
 346 | WARNING | [ ] The use of function dsm() is discouraged

Time: 429ms; Memory: 7.5Mb

grantpt failed: Read-only file system

Probably one of those weird bugs I've encountered. This has happened quite a few times over the past weeks. When I tried to launch a container I'd just created, LXC showed me the error message below on failing to allocate a pty.
$ sudo lxc-start -n foobar -F
lxc-start: console.c: lxc_console_create: 580 Read-only file system - failed to allocate a pty
lxc-start: start.c: lxc_init: 442 failed to create console
lxc-start: start.c: __lxc_start: 1124 failed to initialize the container
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

PTY? It is an abbreviation for pseudoterminal, which, according to Wikipedia,
"is a pair of pseudo-devices, one of which, the slave, emulates a real text terminal device, the other of which, the master, provides the means by which a terminal emulator process controls the slave."
To debug this, I tried to launch a new Tmux session, which failed as well. Suspecting that my Tmux session was somehow corrupted, I tried to open GNOME Terminal and obtained the same error message, "grantpt failed: Read-only file system".

Google's search results did suggest a temporary quick fix, which seemed to solve the issue. But still, the question remains: what caused /dev/pts to have the wrong permissions?
$ sudo mount -o remount,gid=5,mode=620 /dev/pts
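To see whether those options actually took effect, the live mount table can be inspected before and after the remount (assuming a Linux /proc):

```shell
# A correctly mounted devpts carries gid=5 (the tty group) and mode=620,
# so newly allocated slave ptys get crw--w---- permissions.
grep devpts /proc/mounts
```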

Linux Containers (LXC) in Fedora 22 Rawhide - Part 3

Continuing from Part 1 and Part 2, we'll discuss another issue caused by the default LXC installation in Fedora 22: no default bridge network interface is created, although one is set in the config file for each container.

Let's create a dummy container to view the default bridge network interface.
$ sudo lxc-create -t download -n foo -- -d centos -r 6 -a amd64
$ sudo cat /var/lib/lxc/foo/config | grep network.link
lxc.network.link = lxcbr0

However, as I mentioned earlier, the bridge interface lxcbr0 is not created by default. Note that bridge interface virbr0 was created due to libvirt installation.
$ ip link show | grep br0
6: virbr0:  mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
7: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN mode DEFAULT group default qlen 500

Alternatively, you can use the brctl command to show the available bridge interfaces. If you can't find the command, just install the bridge-utils package.
$ sudo dnf install bridge-utils
$ brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.525400c28250       yes             virbr0-nic

Instead of changing the default entry in the container's config file every time we create a container, we can resolve this issue in two ways: first, by overwriting the default network interface name; second, by creating the lxcbr0 bridge interface manually.

For the first method, just overwrite the default network interface name.
$ sudo sed -i s/lxcbr0/virbr0/g /etc/lxc/default.conf 
$ cat /etc/lxc/default.conf | grep network.link
lxc.network.link = virbr0

The issue with this approach is that you'll share the same bridge network interface with libvirt, which primarily manages KVM (Kernel-based Virtual Machine). Thus, if you need additional customization, for example a different IP range, it is best to create a separate bridge network interface, which leads us to the second method.

First, let's duplicate the XML file that defines the default bridge network.
$ sudo cp /etc/libvirt/qemu/networks/default.xml /etc/libvirt/qemu/networks/lxcbr0.xml

Next, we need to generate a random UUID (universally unique identifier) and MAC (media access control) address for our new bridge network interface named lxcbr0.

Generating UUID.
$ uuidgen

Generating MAC address.
$ MACADDR="52:54:$(dd if=/dev/urandom count=1 2>/dev/null | md5sum | sed 's/^\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4/')"; echo $MACADDR
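As a quick sanity check, the generated string can be matched against the expected MAC address format (a sketch; the grep pattern here is my addition, not part of the original steps):

```shell
# Validate: six colon-separated lowercase hex octets. The 52:54 prefix
# matches the locally administered range conventionally used for
# libvirt/QEMU guest MACs.
MACADDR="52:54:$(dd if=/dev/urandom count=1 2>/dev/null | md5sum | sed 's/^\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4/')"
echo "$MACADDR" | grep -Eq '^([0-9a-f]{2}:){5}[0-9a-f]{2}$' && echo "looks valid: $MACADDR"
```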

Update the lxcbr0.xml file we've just duplicated and add both the UUID and MAC address to it.
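One way to splice the new values in is with sed. Below is a sketch that demonstrates the substitutions on a stripped-down copy in /tmp; for the real file, point the sed command at /etc/libvirt/qemu/networks/lxcbr0.xml (with sudo). The element layout is assumed to match libvirt's stock default.xml.

```shell
# Stand-in for the copied default.xml; the real file has more elements.
cat > /tmp/lxcbr0.xml <<'EOF'
<network>
  <name>default</name>
  <uuid>00000000-0000-0000-0000-000000000000</uuid>
  <mac address='00:00:00:00:00:00'/>
</network>
EOF

# Fresh UUID (fall back to the kernel's generator if uuidgen is absent).
UUID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
MACADDR='52:54:f0:ec:cb:a3'   # from the previous step

# Replace the network name, UUID, and MAC inherited from default.xml.
sed -i \
  -e "s|<name>[^<]*</name>|<name>lxcbr0</name>|" \
  -e "s|<uuid>[^<]*</uuid>|<uuid>$UUID</uuid>|" \
  -e "s|<mac address='[^']*'/>|<mac address='$MACADDR'/>|" \
  /tmp/lxcbr0.xml
cat /tmp/lxcbr0.xml
```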

The final XML file is shown below:
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit lxcbr0
or other application using the libvirt API.
-->
<network>
  <name>lxcbr0</name>
  <uuid></uuid>
  <forward mode='nat'/>
  <bridge name='lxcbr0' stp='on' delay='0'/>
  <mac address='52:54:f0:ec:cb:a3'/>
  <ip address='' netmask=''>
    <dhcp>
      <range start='' end=''/>
    </dhcp>
  </ip>
</network>

Define, autostart, and start the lxcbr0 bridge interface.
$ sudo virsh net-define /etc/libvirt/qemu/networks/lxcbr0.xml
$ sudo virsh net-autostart lxcbr0
$ sudo virsh net-start lxcbr0

Now both bridge interfaces are created and enabled. You can create any container using the default lxcbr0 bridge network interface.
$ brctl show
bridge name     bridge id               STP enabled     interfaces
lxcbr0          8000.00602f7e384b       yes             lxcbr0-nic
virbr0          8000.525400c28250       yes             veth1HV308

There are many other ways to create and set up a bridge network interface, but using the virsh command is probably the easiest and fastest. All the necessary steps to configure DHCP through Dnsmasq have been automated, as can be observed from the Dnsmasq instance spawned after we've started the lxcbr0 bridge network interface.
$ ps aux | grep [l]xcbr0
nobody    9443  0.0  0.0  20500  2424 ?        S    01:08   0:00 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/lxcbr0.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
root      9444  0.0  0.0  20472   208 ?        S    01:08   0:00  \_ /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/lxcbr0.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

Details of the lxcbr0.conf file.
$ sudo cat /var/lib/libvirt/dnsmasq/lxcbr0.conf 
##WARNING:  THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
##OVERWRITTEN AND LOST.  Changes to this configuration should be made using:
##    virsh net-edit lxcbr0
## or other application using the libvirt API.
## dnsmasq conf file created by libvirt

Linux Containers (LXC) in Fedora 22 Rawhide - Part 2

In Part 1, we learned how to set up LXC in Fedora 22 and, at the same time, encountered quite a few issues along with possible workarounds to get it working. In this post, we'll keep looking into these workarounds to find better or alternative solutions.

One of the issues is the deprecation of YUM in favour of the DNF command to manage packages. The change is not backward compatible and breakage is certain. Instead of creating a container and downloading all the basic packages, we can build a container using the download template.

Let's try the download template method. Once you've run the command below, a list of distro images will be shown. Note that not all distros can be created through this method; for example, Arch Linux is missing from the image list below. For those, you still have to fall back to the distro template files for container creation.

Next, you will be prompted to key in your distribution, release, and architecture. Once you've keyed in your selection, the command will continue to download the image. This may take a while, depending on your Internet speed.
$ sudo lxc-create -t download -n download-test
Setting up the GPG keyring
Downloading the image index

centos  6       amd64   default 20150507_02:16
centos  6       i386    default 20150507_02:16
centos  7       amd64   default 20150507_02:16
debian  jessie  amd64   default 20150506_22:42
debian  jessie  armel   default 20150506_22:42
debian  jessie  armhf   default 20150503_22:42
debian  jessie  i386    default 20150506_22:42
debian  sid     amd64   default 20150506_22:42
debian  sid     armel   default 20150506_22:42
debian  sid     armhf   default 20150506_22:42
debian  sid     i386    default 20150506_22:42
debian  wheezy  amd64   default 20150506_22:42
debian  wheezy  armel   default 20150505_22:42
debian  wheezy  armhf   default 20150506_22:42
debian  wheezy  i386    default 20150506_22:42
fedora  19      amd64   default 20150507_01:27
fedora  19      armhf   default 20150507_01:27
fedora  19      i386    default 20150507_01:27
fedora  20      amd64   default 20150507_01:27
fedora  20      armhf   default 20150507_01:27
fedora  20      i386    default 20150507_01:27
gentoo  current amd64   default 20150507_14:12
gentoo  current armhf   default 20150507_14:12
gentoo  current i386    default 20150507_14:12
opensuse        12.3    amd64   default 20150507_00:53
opensuse        12.3    i386    default 20150507_00:53
oracle  6.5     amd64   default 20150507_11:40
oracle  6.5     i386    default 20150507_11:40
plamo   5.x     amd64   default 20150506_21:36
plamo   5.x     i386    default 20150506_21:36
ubuntu  precise amd64   default 20150507_03:49
ubuntu  precise armel   default 20150507_03:49
ubuntu  precise armhf   default 20150507_03:49
ubuntu  precise i386    default 20150507_03:49
ubuntu  trusty  amd64   default 20150507_03:49
ubuntu  trusty  armhf   default 20150507_03:49
ubuntu  trusty  i386    default 20150506_03:49
ubuntu  trusty  ppc64el default 20150507_03:49
ubuntu  utopic  amd64   default 20150507_03:49
ubuntu  utopic  armhf   default 20150507_03:49
ubuntu  utopic  i386    default 20150507_03:49
ubuntu  utopic  ppc64el default 20150507_03:49
ubuntu  vivid   amd64   default 20150507_03:49
ubuntu  vivid   armhf   default 20150507_03:49
ubuntu  vivid   i386    default 20150506_03:49
ubuntu  vivid   ppc64el default 20150507_03:49

Distribution: centos
Release: 6
Architecture: amd64

Downloading the image index
Downloading the rootfs 
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

You just created a CentOS container (release=6, arch=amd64, variant=default)

To enable sshd, run: yum install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

Once the container has been created, we start and attach to it.
$ sudo lxc-start -n download-test
$ sudo lxc-attach -n download-test

# uname -a
Linux download-test 4.0.1-300.fc22.x86_64 #1 SMP Wed Apr 29 15:48:25 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/centos-release 
CentOS release 6.6 (Final)

Instead of being prompted for the distribution, release, and architecture, you can simply create a container with a single command. Note the extra double dashes (--) before the requirement arguments; all parameters after the -- are passed to the template rather than to the lxc-create command. Container creation should be very fast the second time, as the program caches the downloaded images.
$ sudo lxc-create -t download -n download-test -- -d centos -r 6 -a amd64

To see the options available for a particular template, use the command below. You can substitute 'download' with any template name found in /usr/share/lxc/templates/.
$ lxc-create -t download -h

Swap Space Usage in GNU/Linux

When your system has used up all its physical memory, it will move some of the inactive pages to the swap space, which is either a partition or a file in your storage. This works as a backup mechanism to keep your system running continuously, even though performance will take a hit. You will notice that the system feels sluggish when there is a lot of swapping going on in the background.

I found this Bash script, swaptop, which lists the swap usage of each process. Running it through my current system showed a very interesting result.
$ ./swaptop | head -n 10
  188828  python 5669
  112844  chrome 3080
   67032  chrome 3617
   56312  gnome-shell 1884
   52752  chrome 2994
   44880  chrome 3031
   40784  gnome-software 2710
   37124  evolution-calen 2720
   34292  chrome 3124
   32804  packagekitd 1960

What is that Python process with the highest usage? Why has RabbitVCS' checkerservice.pyc used up so much virtual memory? The result showed that, while inactive, it consumed up to 334MB of RAM. No wonder my system felt sluggish; I had just installed it yesterday to test out.
$ ps -a -o pid,size,args | grep 5669
 5669 334868 /usr/bin/python /usr/lib/python2.7/site-packages/rabbitvcs/services/checkerservice.pyc
31581   352 grep --color=auto 5669

Remove all RabbitVCS-related packages and kill the running checkerservice.pyc process.
$ sudo dnf remove rabbitvcs*
$ sudo kill -9 5669

Move on to the second process, evolution-calen, which I assumed is the calendaring service for Evolution. Likewise, let's find out the exact command parameters.
$ ps -a -o pid,size,args | grep 2720
 2720 403132 /usr/libexec/evolution-calendar-factory
 2770 517072 /usr/libexec/evolution-calendar-factory-subprocess --bus-name org.gnome.evolution.dataserver.Subprocess.Backend.Calendarx2720x2 --own-path /org/gnome/evolution/dataserver/Subprocess/Backend/Calendar/2720/2
 2780 443204 /usr/libexec/evolution-calendar-factory-subprocess --bus-name org.gnome.evolution.dataserver.Subprocess.Backend.Calendarx2720x3 --own-path /org/gnome/evolution/dataserver/Subprocess/Backend/Calendar/2720/3
32125   352 grep --color=auto 2720

Let's find out the RPM package name that provides this file.
$ dnf provides /usr/libexec/evolution-calendar-factory
Last metadata expiration check performed 2 days, 4:35:01 ago on Tue May  5 02:27:11 2015.
evolution-data-server-3.16.1-1.fc22.x86_64 : Backend data server for Evolution
Repo        : @System

Unfortunately, I can't remove this package, as quite a few essential packages depend on it, for example, GNOME Shell. Uninstalling it would leave us with a partially broken GNOME desktop.
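swaptop itself is a third-party script; for the curious, a similar listing can be produced straight from /proc. A Linux-only sketch (the VmSwap field needs a reasonably recent kernel):

```shell
# For every process, print its swap usage in KiB, command name, and PID,
# largest consumers first, mirroring the swaptop columns above.
for status in /proc/[0-9]*/status; do
  awk '/^Name:/   {name=$2}
       /^Pid:/    {pid=$2}
       /^VmSwap:/ {printf "%8d  %s %s\n", $2, name, pid}' "$status" 2>/dev/null
done | sort -rn | head -n 10
```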

Linux Containers (LXC) in Fedora 22 Rawhide - Part 1

While Docker, an application container, is widely popular right now, I've decided to try LXC, a machine container that behaves like a virtual machine (think VirtualBox or VMware) but with near bare-metal performance. As I was running Fedora Rawhide (F22), let's try to install and set up LXC in this distro.

Installation is pretty much straightforward.
$ sudo dnf install lxc lxc-templates lxc-extra

Checking our installed version against the latest available version shows that we are on par with the current release.
$ lxc-ls --version

The first thing to do is to check our LXC configuration. As shown below, the Cgroup memory controller is not enabled by default, as it incurs additional memory overhead. It can be enabled by adding the boot parameter cgroup_enable=memory to the Grub boot loader. For now, we will keep that in mind and stick to the default.
$ lxc-checkconfig

Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-4.0.1-300.fc22.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: missing
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

Before we can create our container, let's find out the available templates or GNU/Linux distros we can create.
$ ll /usr/share/lxc/templates/
total 348K
-rwxr-xr-x. 1 root root  11K Apr 24 03:22 lxc-alpine*
-rwxr-xr-x. 1 root root  14K Apr 24 03:22 lxc-altlinux*
-rwxr-xr-x. 1 root root  11K Apr 24 03:22 lxc-archlinux*
-rwxr-xr-x. 1 root root 9.5K Apr 24 03:22 lxc-busybox*
-rwxr-xr-x. 1 root root  29K Apr 24 03:22 lxc-centos*
-rwxr-xr-x. 1 root root  11K Apr 24 03:22 lxc-cirros*
-rwxr-xr-x. 1 root root  17K Apr 24 03:22 lxc-debian*
-rwxr-xr-x. 1 root root  18K Apr 24 03:22 lxc-download*
-rwxr-xr-x. 1 root root  48K Apr 24 03:22 lxc-fedora*
-rwxr-xr-x. 1 root root  28K Apr 24 03:22 lxc-gentoo*
-rwxr-xr-x. 1 root root  14K Apr 24 03:22 lxc-openmandriva*
-rwxr-xr-x. 1 root root  15K Apr 24 03:22 lxc-opensuse*
-rwxr-xr-x. 1 root root  40K Apr 24 03:22 lxc-oracle*
-rwxr-xr-x. 1 root root  11K Apr 24 03:22 lxc-plamo*
-rwxr-xr-x. 1 root root 6.7K Apr 24 03:22 lxc-sshd*
-rwxr-xr-x. 1 root root  25K Apr 24 03:22 lxc-ubuntu*
-rwxr-xr-x. 1 root root  13K Apr 24 03:22 lxc-ubuntu-cloud*

Let's proceed by creating our first container, a CentOS 6 distro. Unfortunately, as seen below, the creation failed due to the deprecation of the yum command, which was redirected to the dnf command.
$ sudo lxc-create -t centos -n centos-test

Host CPE ID from /etc/os-release: cpe:/o:fedoraproject:fedora:22
This is not a CentOS or Redhat host and release is missing, defaulting to 6 use -R|--release to specify release
Checking cache download in /var/cache/lxc/centos/x86_64/6/rootfs ... 
Downloading centos minimal ...
Yum command has been deprecated, redirecting to '/usr/bin/dnf -h'.
See 'man dnf' and 'man yum2dnf' for more information.
To transfer transaction metadata from yum to DNF, run:
'dnf install python-dnf-plugins-extras-migrate && dnf-2 migrate'

Yum command has been deprecated, redirecting to '/usr/bin/dnf --installroot /var/cache/lxc/centos/x86_64/6/partial -y --nogpgcheck install yum initscripts passwd rsyslog vim-minimal openssh-server openssh-clients dhclient chkconfig rootfiles policycoreutils'.
See 'man dnf' and 'man yum2dnf' for more information.
To transfer transaction metadata from yum to DNF, run:
'dnf install python-dnf-plugins-extras-migrate && dnf-2 migrate'

Config error: releasever not given and can not be detected from the installroot.
Failed to download the rootfs, aborting.
Failed to download 'centos base'
failed to install centos
lxc-create: lxccontainer.c: create_run_template: 1202 container creation template for centos-test failed
lxc-create: lxc_create.c: main: 274 Error creating container centos-test

The above error is a good example of why the transition from the YUM to the DNF command was unnecessary and caused breakage. It turned out that /usr/bin/yum is a shell script that displays a notification message. To resolve this, we need to point /usr/bin/yum to the actual yum program. There is a way to bypass this step, which we'll discuss in Part 2.
$ sudo mv /usr/bin/yum /usr/bin/yum2dnf
$ sudo ln -s /usr/bin/yum-deprecated /usr/bin/yum
$ ll /usr/bin/yum
lrwxrwxrwx. 1 root root 23 May  5 23:40 /usr/bin/yum -> /usr/bin/yum-deprecated*

Let's try again. Although there is a notification, the creation of the container runs smoothly. Since we're creating this for the first time, it will take a while to download all the packages.
$ sudo lxc-create -t centos -n centos-test
Download complete.
Copy /var/cache/lxc/centos/x86_64/6/rootfs to /var/lib/lxc/centos-test/rootfs ... 
Copying rootfs to /var/lib/lxc/centos-test/rootfs ...
Storing root password in '/var/lib/lxc/centos-test/tmp_root_pass'
Expiring password for user root.
passwd: Success

Container rootfs and config have been created.
Edit the config file to check/enable networking setup.

The temporary root password is stored in:


The root password is set up as expired and will require it to be changed
at first login, which you should do as soon as possible.  If you lose the
root password or wish to change it without starting the container, you
can change it from the host by running the following command (which will
also reset the expired flag):

        chroot /var/lib/lxc/centos-test/rootfs passwd

Checking our newly created container.
$ sudo lxc-ls

Checking the container status.
$ sudo lxc-info -n centos-test
Name:           centos-test
State:          STOPPED

Start our newly created container. Yet again, another error.
$ sudo lxc-start -n centos-test
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 346 To get more details, run the container in foreground mode.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

Let's try again, but with foreground mode (-F).
$ sudo lxc-start -F -n centos-test
lxc-start: conf.c: instantiate_veth: 2672 failed to attach 'vethM9Q6RT' to the bridge 'lxcbr0': Operation not permitted
lxc-start: conf.c: lxc_create_network: 2955 failed to create netdev
lxc-start: start.c: lxc_spawn: 914 failed to create the network
lxc-start: start.c: __lxc_start: 1164 failed to spawn 'centos-test'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

I was quite surprised that Fedora did not create the lxcbr0 bridge interface automatically. Instead, we will use the existing virbr0 provided by libvirtd.
$ sudo yum install libvirt-daemon
$ sudo systemctl start libvirtd

Check the bridge network interface.
$ brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.525400c28250       yes             virbr0-nic

Edit our container config file and change the network link from lxcbr0 to virbr0.
$ sudo vim /var/lib/lxc/centos-test/config
lxc.network.link = virbr0

Try to start the container again, this time, another '819 Permission denied' error.
$ sudo lxc-start -F -n centos-test
lxc-start: conf.c: lxc_mount_auto_mounts: 819 Permission denied - error mounting /usr/lib64/lxc/rootfs/proc/sys/net on /usr/lib64/lxc/rootfs/proc/net flags 4096
lxc-start: conf.c: lxc_setup: 3833 failed to setup the automatic mounts for 'centos-test'
lxc-start: start.c: do_start: 699 failed to setup the container
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 2
lxc-start: start.c: __lxc_start: 1164 failed to spawn 'centos-test'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

After struggling and googling for an answer for the past few hours, it finally dawned on me that the '819 Permission denied' error is related to SELinux policy. I did a quick check by disabling SELinux and rebooting the machine, and was then able to start the container.

Also, just to confirm that the SELinux denial was indeed triggered by lxc-start.
$ sudo grep lxc-start /var/log/audit/audit.log | tail -n 1
type=AVC msg=audit(1430849851.869:714): avc:  denied  { mounton } for  pid=3780 comm="lxc-start" path="/usr/lib64/lxc/rootfs/proc/1/net" dev="proc" ino=49148 scontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tcontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tclass=dir permissive=0

Start the SELinux Alert Browser and run the below commands to add the security policy.
$ sealert

$ sudo grep lxc-start /var/log/audit/audit.log | audit2allow -M mypol
******************** IMPORTANT ***********************
To make this policy package active, execute:

semodule -i mypol.pp

$ sudo semodule -i mypol.pp

Start our container again and check its status.
$ sudo lxc-start -n centos-test 
$ sudo lxc-info -n centos-test
Name:           centos-test
State:          RUNNING
PID:            6742
CPU use:        0.44 seconds
BlkIO use:      18.55 MiB
Memory use:     12.14 MiB
KMem use:       0 bytes
Link:           veth4SHUE1
 TX bytes:      578 bytes
 RX bytes:      734 bytes
 Total bytes:   1.28 KiB

Attach to our container. There is no login needed.
$ sudo lxc-attach -n centos-test
# uname -a
Linux centos-test 4.0.1-300.fc22.x86_64 #1 SMP Wed Apr 29 15:48:25 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/centos-release
CentOS release 6.6 (Final)

Reinstall Grub Through Chroot on a LUKS Partition in Fedora

As I mentioned before, I'm currently triple-booting my laptop with three different operating systems: Windows 7, Fedora 22, and Ubuntu 15.04. One of the issues faced when dual-booting Fedora and Ubuntu is that each distro will update and overwrite the existing Grub boot loader whenever there is a new kernel upgrade. One major problem is that Ubuntu's Grub update does not recognize the LUKS partition and always corrupts the Grub boot loader.

To temporarily resolve this, we have to boot the machine using the Fedora Live CD, mount the encrypted partition, chroot into it, update Grub from Fedora itself, unmount the partition, and lastly reboot the machine. Details as follows.

First, once you've booted the Fedora Live CD, check the device name of your LUKS partition.
$ lsblk | grep -B 2 luks
├─sda5                                          8:5    0     1G  0 part  /boot
├─sda6                                          8:6    0   204G  0 part  
│ └─luks-e927b9ed-a83a-453f-8ef7-4983a3d68589 253:0    0   204G  0 crypt /

As we have obtained the device name, which is /dev/sda6, we shall proceed to decrypt the partition. Before that, let's verify again that the partition is a LUKS partition. If Bash returns an exit code of zero, then we can safely confirm that /dev/sda6 is indeed a LUKS partition.
$ sudo cryptsetup isLuks /dev/sda6 && echo $?

We can also verify it by checking the LUKS header of that partition.
$ sudo cryptsetup luksDump /dev/sda6 | head -n 8
LUKS header information for /dev/sda6

Version:        1
Cipher name:    aes
Cipher mode:    xts-plain64
Hash spec:      sha1
Payload offset: 4096
MK bits:        512

Next, we're going to decrypt the LUKS partition; type in your passphrase when prompted.
$ sudo cryptsetup luksOpen /dev/sda6 fedora-root

After we've decrypted the partition, we'll need to mount all the necessary filesystems before we can chroot into it.
$ sudo udisks --mount /dev/mapper/fedora-root
$ sudo mount -t proc proc /media/fedora-root/proc
$ sudo mount -t sysfs sys /media/fedora-root/sys
$ sudo mount -o bind /dev /media/fedora-root/dev

Note that we're using udisks to automount our fedora-root in the /media folder. Equivalent steps are:
$ sudo mkdir /media/fedora-root
$ sudo mount /dev/mapper/fedora-root /media/fedora-root

Since my /boot is located on a separate partition, we'll need to mount it inside the root partition's tree as well, so Grub can be updated from within the chroot.
$ sudo mount /dev/sda5 /media/fedora-root/boot

Next, chroot into the root partition and update Grub.
$ sudo chroot /media/fedora-root
$ grub2-install /dev/sda
$ grub2-mkconfig -o /boot/grub2/grub.cfg

Lastly, exit from the chroot, unmount our LUKS partition, and reboot the machine. The correct Grub boot loader with the correct boot parameters will now be installed and loaded properly.
$ exit
$ sudo umount /media/fedora-root
$ sudo cryptsetup luksClose fedora-root
$ sudo reboot

Switching Between Different Commits in Git

PyVim, an implementation of the Vim editor in Python, caught my attention while browsing HackerNews recently. After trying it out by installing through Python's pip installer, I've decided to install the latest version from its GitHub repository instead.

Before that, let's set up a Python virtual environment.
$ cd /tmp
$ mkdir pyvim
mkdir: created directory ‘pyvim’

$ cd pyvim/
$ virtualenv -p /usr/bin/python2.7 venv
Running virtualenv with interpreter /usr/bin/python2.7
New python executable in venv/bin/python2.7
Also creating executable in venv/bin/python
Installing setuptools, pip...done.

$ source venv/bin/activate
(venv)$ which python

Clone the PyVim Git repository into the folder with the virtual environment you've created in the previous step.
(venv)$ git clone
Cloning into 'pyvim'...
remote: Counting objects: 196, done.
remote: Compressing objects: 100% (51/51), done.
remote: Total 196 (delta 29), reused 0 (delta 0), pack-reused 143
Receiving objects: 100% (196/196), 597.72 KiB | 165.00 KiB/s, done.
Resolving deltas: 100% (93/93), done.
Checking connectivity... done.

You'll obtain the directory structure below.
(venv)$ tree -L 2
├── pyvim
│   ├── docs
│   ├── examples
│   ├── LICENSE
│   ├── pyvim
│   ├── README.rst
│   ├──
│   └── tests
└── venv
    ├── bin
    ├── include
    ├── lib
    └── lib64 -> lib

10 directories, 4 files

Next, install the latest PyVim and all the necessary Python packages within the virtual environment.
(venv)$ cd pyvim
$ python install
Finished processing dependencies for pyvim==0.0.2

Run the PyVim program; the output below shows that the latest committed version is broken.
(venv)$ pyvim --help
Traceback (most recent call last):
  File "/tmp/pyvim/venv/bin/pyvim", line 9, in 
    load_entry_point('pyvim==0.0.2', 'console_scripts', 'pyvim')()
  File "/tmp/pyvim/venv/lib/python2.7/site-packages/pkg_resources/", line 519, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/tmp/pyvim/venv/lib/python2.7/site-packages/pkg_resources/", line 2630, in load_entry_point
    return ep.load()
  File "/tmp/pyvim/venv/lib/python2.7/site-packages/pkg_resources/", line 2310, in load
    return self.resolve()
  File "/tmp/pyvim/venv/lib/python2.7/site-packages/pkg_resources/", line 2316, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/tmp/pyvim/venv/lib/python2.7/site-packages/pyvim-0.0.2-py2.7.egg/pyvim/entry_points/", line 17, in 
  File "/tmp/pyvim/venv/lib/python2.7/site-packages/pyvim-0.0.2-py2.7.egg/pyvim/", line 27, in 
  File "/tmp/pyvim/venv/lib/python2.7/site-packages/pyvim-0.0.2-py2.7.egg/pyvim/", line 17, in 
ImportError: No module named reactive

Let's look for any tagged stable working version, but it seems the author has not created any tags.
(venv)$ git tag

Since there are no tags, we'll need to find the hash of the last stable commit, which in our case is version 0.0.2. The results are shown using the git log command with summarized output.
(venv)$ git log --oneline --decorate | cut -c -80
1ec47f1 (HEAD, origin/master, origin/HEAD, master) Command functions rename
d842f06 add docopt to install_requires in
1fdd937 Override ControlT from prompt-toolkit: don't swap characters before curs
c944a28 Implemented the :cq command.
eaa4b1e Fix: use accepts_force also for bw/bd
4920b74 Added :bd as keybinding to buffer close
f179bd6 Implemented scroll offset.
6c160ce Show 'No \! allowed' when used for commands not supporting it.
b1d9813 Fix python 3/2 compatibility for urllib.
892188c Fixed typo in README.txt
2409ad7 Some rephrasing in the README.
40cfe66 Reload option for :edit and :open
9bb4975 Added ':open' as an alias for ':edit'.
e33db19 Added ':h' alias for ':help'
a010ea8 Implemented ControlD and ControlU key bindings, for scrolling half a pag
39c72b1 Implemented ControlE and ControlY key bindings
ad880c1 Auto closes new/empty buffers when they are hidden. This solves the :q i
082ce60 Added accept_force parameter to commands decorator. 'bp'/'bn' now also a
dfed3a3 Fix a bug where a user could leave a buffer with unsaved changes by issu
1ff1bac fix ctrl-f shortcut
5369f5d Abstraction of I/O backends. Now it is possible to open .gz files and ht
75d3a3b Mention alternatives in README.rst
10dcb2d Added ControlW n/v key bindings for splitting windows.
d13f5e6 Added PageUp/PageDown key bindings.
4009f8b New screenshot for cjk characters.
34b6175 Fixed NameErrors in .pyvimrc example.
cc0b333 Adding shorthands for split and vsplit
78c2225 Pypi release 0.0.2 -- (0.0.1 release failed)
5083d0b Pypy release 0.0.1
df71609 Usable pyvim version. - Layouts: horizontal/vertical splits + tabs. - Ma
fb129f5 Initial ptvim version.
a60a0b6 Initial commit

From the result above, commit hash 78c2225 is the first public release version. Let's switch our HEAD to that commit.
(venv)$ git checkout 78c2225
Note: checking out '78c2225'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at 78c2225... Pypi release 0.0.2 -- (0.0.1 release failed)

Re-install and re-run the program again. It seems this commit works without any issues.
(venv)$ python install
(venv)$ pyvim --help
pyvim: Pure Python Vim clone.
    pyvim [-p] [-o] [-O] [-u ] [...]

    -p           : Open files in tab pages.
    -o           : Split horizontally.
    -O           : Split vertically.
    -u  : Use this .pyvimrc file instead.

To reset HEAD back to origin/master:
(venv) $ git reset --hard origin/master
HEAD is now at 1ec47f1 Command functions rename

Confirm we're at the latest HEAD using the git log command.
(venv) $ git log --oneline --decorate -1
1ec47f1 (HEAD, origin/master, origin/HEAD, master) Command functions rename

Instead of searching through the log every time, we can tag a particular commit.
(venv)$ git tag -a v0.0.2 -m "Release 0.0.2" 78c2225

Let's check again through the git log and git tag commands.
(venv) $ git log --oneline --decorate | grep 'HEAD\|tag'
1ec47f1 (HEAD, origin/master, origin/HEAD, master) Command functions rename
78c2225 (tag: v0.0.2) Pypi release 0.0.2 -- (0.0.1 release failed)

$ git tag

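Note that tags are local until pushed. Below is a minimal sketch of sharing a single tag, using a throwaway bare repository as a stand-in remote (the repo, commit message, and tag name here are hypothetical):

```shell
cd "$(mktemp -d)"
git init -q --bare origin.git                      # stand-in "remote"
git clone -q origin.git work && cd work
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Pypi release 0.0.2"
git tag -a v0.0.2 -m "Release 0.0.2"
git push -q origin v0.0.2                          # push just this tag
git ls-remote --tags origin                        # lists refs/tags/v0.0.2
```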
Instead of switching to a particular commit hash, we can check out the tag name directly.
$ git checkout v0.0.2
Previous HEAD position was 1ec47f1... Command functions rename
HEAD is now at 78c2225... Pypi release 0.0.2 -- (0.0.1 release failed)

$ git status
HEAD detached at v0.0.2
nothing to commit, working directory clean
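If you intend to commit fixes while on the tagged version, you can avoid the detached HEAD state entirely by creating a branch rooted at the tag, as the checkout message suggested. A throwaway-repo sketch (repo, tag, and branch names are hypothetical):

```shell
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Pypi release 0.0.2"
git tag -a v0.0.2 -m "Release 0.0.2"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Command functions rename"
git checkout -q -b fix-0.0.2 v0.0.2   # branch rooted at the tag
git rev-parse --abbrev-ref HEAD       # prints the branch name, not HEAD
```

Any commits made now land on fix-0.0.2 and are retained after switching away.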

Testing Debian GNU/Hurd 0.6

It has been a long time since I last heard about GNU Hurd, but the recent 0.6 release piqued my interest; unlike last time, my laptop is now powerful enough to test run it in a virtualized environment.

Installation was done on Fedora 22, where I spend most of my computing time. On a side note, Fedora, as a desktop operating system, is way more integrated and stable compared to Ubuntu. To be more precise, the user experience in Gnome 3 is just way better than the Unity desktop, although the former came a long way, being constantly ridiculed before reaching that usable point.

Let's continue with the installation. We will run Debian GNU/Hurd as a QEMU guest OS image. Before that, we need to install all the necessary packages.
$ sudo dnf install aria2 qemu-system-x86

Download the image using the Aria2 download client.
$ aria2c -x 4
[#aa5de7 10MiB/380MiB(2%) CN:4 DL:380KiB ETA:16m34s]

If you prefer Aria2 over the Wget download client, you can create an alias that points to Aria2 instead, as shown.
alias wget='aria2c -x 4'
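To make the alias persist across shell sessions, append it to ~/.bashrc (append with >>, not >, so you don't clobber the file). A sketch against a stand-in rc file, so nothing real is modified:

```shell
# In practice the target would be "$HOME/.bashrc"; a temporary
# stand-in file is used here for illustration.
rc=$(mktemp)
echo "alias wget='aria2c -x 4'" >> "$rc"
grep -F aria2c "$rc"   # confirms the alias line is in place
```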

Extract the image file, which is roughly 3 GB in size.
$ tar -xz < debian-hurd.img.tar.gz
$ qemu-img info debian-hurd-*.img
image: debian-hurd-20150424.img
file format: raw
virtual size: 2.9G (3146776576 bytes)
disk size: 1.2G

Following the documentation, boot the operating system with QEMU using the image file.
$ qemu-system-i386 -m 512 -net nic,model=rtl8139 -net user -drive cache=writeback,index=0,media=disk,file=$(echo debian-hurd-*.img)
WARNING: Image format was not specified for 'debian-hurd-20150424.img' and probing guessed raw. Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted. Specify the 'raw' format explicitly to remove the restrictions.

As the console message states, to remove the restriction on write operations on block 0, we have to explicitly specify the disk format by adding the format=raw option to -drive.
$ qemu-system-i386 -m 512 -net nic,model=rtl8139 -net user -drive format=raw,cache=writeback,index=0,media=disk,file=$(echo debian-hurd-*.img)
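Retyping that long command quickly gets tedious. One option (a sketch, reusing the image filename from this walkthrough) is to wrap it in a small launcher script that also pins format=raw:

```shell
cd "$(mktemp -d)"          # write the script somewhere disposable
cat > run-hurd.sh <<'EOF'
#!/bin/sh
# Pin format=raw explicitly so QEMU does not probe (and warn about) it.
exec qemu-system-i386 -m 512 \
  -net nic,model=rtl8139 -net user \
  -drive format=raw,cache=writeback,index=0,media=disk,file=debian-hurd-20150424.img
EOF
chmod +x run-hurd.sh
```

Place the script next to the image and run ./run-hurd.sh to boot.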

Once you see the login prompt screen, log in as the root user and press Enter. No password is needed.

To enter the GUI, start the window manager, which is IceWM. Note that to release the mouse grab within QEMU, press Ctrl+Alt+G.
$ startx

The screenshot above reminds me of the early days of GNU/Linux, when there were no desktop environments, just a bunch of window managers. I always wonder: when can we really use GNU/Hurd as an alternative or replacement for GNU/Linux distros? Next century, perhaps? Yes, the development is that dog slow, as most if not all kernel developers are working on GNU/Linux.