Setting Apache Document Root With setgid

When you don't understand or remember the fundamentals of GNU/Linux file system permissions, you'll tend to do things in an unproductive way, for example, repeatedly and explicitly updating the /var/www folder permissions to Apache's group (www-data or apache).

The proper, alternative, and convenient way of setting the web root (/var/www) permissions is as follows.

Set the folder group ownership to Apache's group and enable the setgid bit.
$ sudo chgrp apache /var/www -R
$ sudo chmod 775 /var/www -R
$ sudo chmod g+s /var/www

Allow $USER full control of the web root.
$ sudo usermod -aG apache $USER
$ sudo chown $USER /var/www/
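
Note that the new group membership only takes effect on a new login session. One way to refresh it without manually logging out:
$ exec su -l $USER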

Now, the long grandma story. By default, the files in Apache's web root in CentOS or Fedora are readable by everyone but writable only by the root user.
$ ls -l /var/www/
total 0
drwxr-xr-x 1 root root 0 Jul 23 06:31 cgi-bin/
drwxr-xr-x 1 root root 6 Oct 4 14:30 html/

Change the folder group ownership to the apache group so we can install and run any web application using that user. Otherwise, most web applications will complain about write permissions to the folder, especially for file uploading.
$ sudo chgrp apache /var/www -R
$ ls -l /var/www/
total 0
drwxr-xr-x 1 root apache 0 Jul 23 06:31 cgi-bin/
drwxr-xr-x 1 root apache 6 Oct 4 14:30 html/

Even though we've set the group ownership to apache, any new file or folder will still default to the root user and group, as we're using the sudo command.
$ sudo mkdir /var/www/html/foo.d
$ sudo touch /var/www/html/foo.f
$ ls -l /var/www/html/
total 0
drwxr-xr-x 1 root root 0 Oct 4 15:03 foo.d/
-rw-r--r-- 1 root root 0 Oct 4 15:03 foo.f

Hence, in order to inherit the group id (apache) of the parent folder /var/www, we have to use setgid [4].
$ sudo chmod g+s /var/www/html/

Another way of setting the folder permissions is the numerical method, where the leading 2 is the setgid bit:
$ sudo chmod 2775 /var/www/html -R

Notice the 's' flag on the group permissions.
$ ls -ld /var/www/html
drwxr-sr-x 1 root apache 20 Oct 4 15:04 /var/www/html/

Create another folder and file in the /var/www/html folder again. Notice how the new entries inherit the group id of /var/www/html.
$ sudo mkdir /var/www/html/bar.d
$ sudo touch /var/www/html/bar.f
$ ls -ltU /var/www/html
total 0
drwxr-xr-x 1 root root 0 Oct 4 15:03 foo.d/
-rw-r--r-- 1 root root 0 Oct 4 15:03 foo.f
drwxr-sr-x 1 root apache 0 Oct 4 15:05 bar.d/
-rw-r--r-- 1 root apache 0 Oct 4 15:05 bar.f

Use the namei command to show the permissions of each component in the file path.
$ namei -l /var/www/html/foo.d/
f: /var/www/html/foo.d/
drwxr-xr-x root root /
drwxr-xr-x root root var
drwxr-xr-x root apache www
drwxr-sr-x root apache html
drwxr-xr-x root root foo.d

$ namei -l /var/www/html/bar.d
f: /var/www/html/bar.d
drwxr-xr-x root root /
drwxr-xr-x root root var
drwxr-xr-x root apache www
drwxr-sr-x root apache html
drwxr-sr-x root apache bar.d

Namei: File Permissions Listing Tool

While setting up the Apache web server, occasionally we'll encounter file permission issues in the document root folder, especially when symlinks are involved. This console app, namei, which is part of util-linux, provides a quick view of the file permissions of each component of the resolved full path. Quite a useful tool, especially for those who have just ventured into GNU/Linux and haven't fully grasped the file permissions in the system.

For example, listing the file permissions of the folders and file of an SSH private key.
$ namei -l ~/.ssh/id_rsa 
f: /home/ang/.ssh/id_rsa
drwxr-xr-x root root /
drwxr-xr-x root root home
drwx------ ang ang ang
drwx------ ang ang .ssh
-rw------- ang ang id_rsa

First, let's create a sample symlink.
$ sudo ln -s /tmp /var/www/html/tmp

Tracing to the endpoint of the symlink.
$ namei /var/www/html/tmp
f: /var/www/html/tmp
d /
d var
d www
d html
l tmp -> /tmp
d /
d tmp

Similarly, but showing the permissions of each component of the resolved full path.
$ namei -l /var/www/html/tmp
f: /var/www/html/tmp
drwxr-xr-x root root /
drwxr-xr-x root root var
drwxr-xr-x root root www
drwxr-xr-x root root html
lrwxrwxrwx root root tmp -> /tmp
drwxr-xr-x root root /
drwxrwxrwt root root tmp

Resetting GNU/Linux File or Folder Permissions

While setting the group permissions and ownership of /var/www, sometimes we may accidentally update the wrong folder, such as the parent folder /var instead of /var/www.

To restore the default file or folder permissions on an RPM-based system, there is a built-in option to revert the changes quickly, unlike on a DEB-based system. Yup, this is probably one of the missing features if we compare both packaging systems.

First, let's find the RPM package name that contains the /var/www/html folder, using the rpm command.
$ time rpm -qf /var/www/html
httpd-2.4.10-1.fc20.x86_64
real    0m0.025s
user    0m0.018s
sys     0m0.006s

The yum command, in contrast, gave us four packages and took more than a bloody minute.
$ time yum whatprovides '/var/www/html'
real    1m23.865s
user    0m19.660s
sys     0m0.901s

Now that is something we can improve by using the cached results through the -C option. Let's try again. But then again, the results are still not entirely accurate.
$ time yum -C whatprovides '/var/www/html'
......
real    0m0.350s
user    0m0.257s
sys     0m0.050s

$ ls -ld /var/www/html/
drwxr-sr-x 1 root apache 40 Oct  4 15:05 /var/www/html/
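
Before resetting anything, rpm can also verify the installed files against the package defaults; in its output, the M, U, and G flags mark a deviating mode, user, and group respectively. The line below is illustrative, not captured from this session.
$ rpm -V httpd
.M...UG..    /var/www/html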

Unfortunately, yum does not include support for reverting the ownership and permissions of installed files or folders, so we fall back to rpm. Reset the ownership.
$ sudo rpm --setugids httpd
$ ls -ld /var/www/html
drwxr-sr-x 1 root root 40 Oct  4 15:05 /var/www/html/

However, resetting the permissions does not seem to remove the setgid flag. Weird. Unfortunately, I can't google any good explanation of this problem.
$ sudo rpm --setperms httpd
$ ls -ld /var/www/html
drwxr-sr-x 1 root root 40 Oct  4 15:05 /var/www/html/

Polipo - Tiny Caching Proxy

While provisioning a new virtual machine (VM), you will need to repeatedly destroy and rebuild the VM. One of the bottlenecks is that you've to re-download all the GNU/Linux distro packages. While you can use a packaging tool like APT or YUM to cache your packages, that cache still can't be shared by different VMs. To solve this, you can set up a caching proxy on your host machine to be shared among all the guest VMs.

Instead of the default caching proxy, Squid, I've opted for Polipo, a smaller and simpler caching proxy. Setting it up was quite straightforward, with minor additional changes.

Install the packages.
$ sudo yum install polipo

Enable the service so it starts after a reboot.
$ sudo systemctl enable polipo.service

Check the status of the service. One of the benefits of systemctl is that it shows a lot of crucial details of the daemon or service, which helps a lot when we're troubleshooting the server.
$ sudo systemctl status polipo.service 

Set up the proxy connection details as environment variables so that console apps, for example wget or curl, can use them.
$ export {http,https,ftp,rsync}_proxy="http://localhost:8123"
$ export no_proxy=localhost,127.0.0.1
$ env | grep proxy
http_proxy=http://localhost:8123
ftp_proxy=http://localhost:8123
rsync_proxy=http://localhost:8123
https_proxy=http://localhost:8123

To test our proxy server, use either curl or wget. For curl, the -sv option shows the server headers verbosely.
$ curl -sv www.google.com 2>&1 | grep 8123
* About to connect() to proxy localhost port 8123 (#0)
* Connected to localhost (127.0.0.1) port 8123 (#0)

Using wget with -S --spider so that wget doesn't download anything.
$ wget -S --spider www.google.com 2>&1 | grep 8123
Connecting to localhost (localhost)|::1|:8123... failed: Connection refused.
Connecting to localhost (localhost)|127.0.0.1|:8123... connected.
Connecting to localhost (localhost)|127.0.0.1|:8123... connected.

To test the caching, download a large file twice. The first download took almost 6 minutes, and the subsequent download of the same file took less than 1 second.
$ time wget http://libguestfs.org/download/builder/cirros-0.3.1.xz
......
2014-09-28 04:17:40 (55.7 KB/s) - ‘cirros-0.3.1.xz’ saved [11419004/11419004]
real    5m52.107s
user    0m0.096s
sys     0m0.461s

$ time wget http://libguestfs.org/download/builder/cirros-0.3.1.xz
......
2014-09-28 04:17:51 (478 MB/s) - ‘cirros-0.3.1.xz.1’ saved [11419004/11419004]
real    0m0.028s
user    0m0.002s
sys     0m0.023s
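
Since the whole point is sharing the cache, each guest VM then needs to point its package manager at the host's proxy. A minimal sketch for a yum-based guest, assuming the host is reachable at 192.168.122.1, the usual default libvirt gateway (adjust to your network):
$ sudo sh -c 'echo "proxy=http://192.168.122.1:8123" >> /etc/yum.conf'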

Buggy Nouveau Driver in Fedora 20

Regardless of the GNU/Linux distro, I still face the same issue with Nouveau, the open-source driver for Nvidia graphics cards.

While using the 3.16 kernel, I can't seem to boot into the graphical login; it's all a blank page. It was perfectly fine using the 3.11 kernel.
$ uname -sr
Linux 3.16.2-201.fc20.x86_64

In the end, I've to switch to the console login with Ctrl-Alt-F2 and check the systemd journal log. Sample error messages relating to the Nouveau driver are shown below.
$ journalctl -r | grep nouveau
Sep 27 11:23:24 butterfly kernel: nouveau E[   PFIFO][0000:01:00.0] CACHE_ERROR - ch 0 [DRM] subc 2 mthd 0x0130 data 0x0000000
......
Sep 27 10:16:43 butterfly kernel: nouveau E[     DRM] GPU lockup - switching to software fbcon

Some quick searching revealed that to stop X from freezing during startup, you've to disable Nouveau acceleration, which is the common, typical, and conventional solution. There are two ways.

First, adding 'nouveau.nofbaccel=1' to the kernel parameters. This can be done during Grub2 bootup by pressing 'e' and appending the option after 'rhgb quiet' on the kernel command line. It looks something like below.
...rhgb quiet nouveau.nofbaccel=1

Later, just press the F10 key to continue booting the system. However, you'll have to do this every time you boot up your machine.

Second, to make this a permanent solution, you'll need to modify the Grub2 configuration. Again, there are two ways to do so.

Edit /etc/default/grub and append 'nouveau.nofbaccel=1' to the GRUB_CMDLINE_LINUX line.
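The edited line should look something like this, keeping whatever parameters were already there:
GRUB_CMDLINE_LINUX="rhgb quiet nouveau.nofbaccel=1"

Then update the Grub2 configuration.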
$ sudo grub2-mkconfig --output=/boot/grub2/grub.cfg

Another approach is to set the option through a module configuration file that is read when the kernel module loads. I prefer this option, as it is easier to change than the Grub2 way.
$ sudo sh -c 'echo "options nouveau nofbaccel=1" > /etc/modprobe.d/nouveau.conf'

You may ask, why not use the proprietary Nvidia driver? Well, unfortunately, I can't get it to work correctly, especially with the latest 3.16 kernel. And I don't want to waste my time troubleshooting the same issue again and again.

Being stuck with the Nouveau driver forced me to switch my desktop environment from Gnome 3 to Xfce4. I've learned that without a well-supported graphics card driver, the Gnome 3 experience leaves a lot to be desired. Mind you, this workstation is running on 20GB of RAM and yet, it does not help.

While not as fancy as Gnome 3, Xfce4 seems acceptable for my daily usage compared to other desktop environments.

Yum in Fedora 20

What is the first thing you do upon first login to any GNU/Linux distro? You update the whole system. In our case for F20, that means using Yum.

Nothing fancy here, similar to Debian-based distros.
$ sudo yum update
......
Total download size: 559 M
Is this ok [y/d/N] : y
Downloading packages:
updates/20/x86_64/prestodelta
Delta RPMs reduced 361 M of updates to 96 M (73% saved)
.......

What interested me is the use of Delta RPMs (DRPMs), or Presto, to speed up download time and save bandwidth. As the name implies, a DRPM contains the binary difference between the old and new RPM packages.

Similarly, Debian-based distros have something comparable, though not integrated with apt by default, called debdelta, which I haven't tried out yet. The only con with such a tool is that you need quite a lot of processing power to compute the differences.

After a few days of using F20, I noticed that Yum was dog slow. Every time I installed or updated the system, it would requery the package metadata. Then I realized that caching of downloaded RPM packages was disabled by default.
$ cat /etc/yum.conf | grep cache
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0

Just change it to keepcache=1 and you'll notice less lag when carrying out any Yum actions.
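
One way to flip it from the command line, assuming the stock /etc/yum.conf location:
$ sudo sed -i 's/^keepcache=0/keepcache=1/' /etc/yum.conf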

To speed up downloading, you can also use the fastestmirror plugin. Make sure you change enabled=0 to enabled=1 to turn the plugin on.
$ sudo yum install yum-plugin-fastestmirror
$ cat /etc/yum/pluginconf.d/fastestmirror.conf | grep enabled
enabled=0

Yum groupinstall. While I really like using aliases to install related groups of RPM packages, unfortunately, it was quite buggy. One particular group I really love is "Minimal Install", as illustrated below.
$ sudo yum groupinstall "Minimal Install"

This is quite helpful when you want to create a base cloud image or JeOS and change the role of the installation accordingly, like as a web or database server.

Unfortunately, I can't find a way to install the LAMP stack quickly and conveniently like in Debian-based distros, as shown.
$ sudo apt-get install lamp-server^

The closest equivalent using Yum groupinstall is as shown here.
$ sudo yum groupinstall "Web Server"
$ sudo yum groupinstall "MySQL Database client"
$ sudo yum groupinstall "MySQL Database server"
$ sudo yum groupinstall "PHP Support"

Comparing Yum and Apt, I still prefer Apt, which for me is far more stable and faster.

Fedora 20 Installation

After hopping through different GNU/Linux distros, I still can't get a smooth and painless installation with Fedora 20. There are always some tweaks and googling here and there to make things work. When you think about it, you can't blame it, as it was not designed to be used as a desktop operating system.

Some thoughts on the Fedora 20 installation and setup.

1. Resizing a LUKS-encrypted LVM partition. I tried but failed to do it correctly in my default Debian installation. I missed a certain step and corrupted the whole partition. Luckily, I had backed up all the important stuff elsewhere. But somehow I was surprised that GParted 0.19.1 can ONLY detect the luks-crypt partition. No worries, I'll learn more about this later once I've set up virtualization on the workstation.

2. Installation using netinstall. It is supposed to speed up the installation process, but I was stuck waiting for the installation source to be set up, with no indication whatsoever. I was under the assumption that the installer, Anaconda, had somehow crashed or frozen, but it turned out my slow Internet connection was the cause. In the end, I just switched to the full DVD installation instead.

3. Partition scheme. Followed the default standard scheme, but with an encrypted partition using the Butterface file system (Btrfs). Heard a lot about this filesystem but never really tried it; will explore more after this.

4. No initial login screen; to be exact, GDM (Gnome Display Manager) was missing, leaving just a blank wallpaper. This happened on the first boot after finishing the installation. Switched to another console with Ctrl+Alt+F2 and checked the /var/log/boot.log file. Nothing particularly unique suggesting any issue. Suspected it must be related to X or the buggy Nouveau graphics card driver. Rebooted the machine again and it seems to work.

The init Wars

"uselessd (the useless daemon, or the daemon that uses less... depending on your viewpoint) is a project which aims to reduce systemd to a base initd, process supervisor and transactional dependency system, while minimizing intrusiveness and isolationism. Basically, it’s systemd with the superfluous stuff cut out, a (relatively) coherent idea of what it wants to be, support for non-glibc platforms and an approach that aims to minimize complicated design."
-- uselessd.darknedgy.net
Via Phoronix. The never-ending drama of the init wars or, if you look on the positive side, the freedom of forking a software project to prove a point. Never underestimate the power of a single determined programmer. I just wish more resources were poured towards getting stable and optimized open-source graphics card drivers like Nouveau or Radeon.

Switching to Fedora

It has been a while since I've touched any RPM-based GNU/Linux distro, so I decided to try out Fedora as I'm going to learn more about virtualization.

Trying wodim, which stands for "Write Optical DIsk Media", to burn our ISO file to DVD.
$ sudo apt-get install wodim
$ wodim --version
Cdrecord-yelling-line-to-tell-frontends-to-use-it-like-version 2.01.01a03-dvd 
Wodim 1.1.11
Copyright (C) 2006 Cdrkit suite contributors
Based on works from Joerg Schilling, Copyright (C) 1995-2006, J. Schilling

Interesting history behind Cdrkit, which was a fork of the last GPL-licensed release of cdrtools.
$ wodim --devices
wodim: Overview of accessible drives (1 found) :
----------------------------------------------------------------------------------
 0  dev='/dev/sg1'      rwrw-- : 'hp' 'DVDROM DH40N'
----------------------------------------------------------------------------------

$ sudo wodim -v dev=/dev/sg1 speed=4 -eject Fedora-20-x86_64-DVD.iso

Shockwave Flash has crashed

Kept getting this error message while trying to play any Flash video these past few days in Google Chrome Version 37.0.2062.120 on Debian Wheezy. According to this bug report, it was caused by the 'erroneous GLIBC_2.14 requirement'.

Of all the recommended temporary solutions, the best is still to revert to the previous working version, that is, downgrading. Luckily, apt kept a cached copy of the deb package.
$ sudo dpkg -i /var/cache/apt/archives/google-chrome-stable_37.0.2062.94-1_amd64.deb
$ killall chrome
$ google-chrome

Oz - Virtual Machine Builder

Stumbled upon this program, oz, while trying different kinds of virtual machine image builders. Unfortunately, there is no Deb package for Debian 7, hence I've to build one myself.

Following the instructions here, creating the deb package is quite straightforward.

Install the prerequisite packages.
$ sudo apt-get install debhelper python-all build-essential git-core gdebi

Clone the Git repo.
$ mkdir /tmp/oz
$ cd /tmp/oz
$ git clone https://github.com/clalancette/oz oz-git

Build the deb package.
$ cd oz-git
$ dpkg-buildpackage -us -uc
$ cd ../

Install the software with all the necessary dependencies using Gdebi installer.
$ sudo gdebi oz_*_all.deb

However, to get this tool to work, you'll have to install and set up KVM virtualization.
$ sudo apt-get install qemu-kvm libvirt-bin
$ sudo adduser kianmeng kvm
$ sudo adduser kianmeng libvirt

Refresh and update your user groups without manually logging out from the system.
$ exec su -l $USER

As a non-root user, there should be no permission denied error when running the command below.
$ sudo virsh list --all

Network stuff.
$ sudo virsh net-list --all

Name                 State      Autostart
-------------------------------------------------------
default              inactive   yes      

$ sudo virsh net-start default

error: Failed to start network default
error: internal error Child process (/usr/sbin/dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override) unexpected exit status 2: 
dnsmasq: failed to create listening socket for 192.168.122.1: Address already in use

Apparently, running your own dnsmasq conflicts with libvirt. To solve it, make sure dnsmasq binds to a certain interface only. Edit /etc/dnsmasq.conf and uncomment these lines.
interface=wlan1
bind-interfaces

$ sudo service dnsmasq restart
$ sudo virsh net-start default
$ sudo virsh net-autostart default
$ sudo /sbin/brctl show
bridge name     bridge id                            STP enabled     interfaces
virbr0                 8000.525400d0634b       yes                     virbr0-nic

On a related note, based on my few days of experience, if you want to try out anything related to cloud or virtual machines, Fedora seems to be a more suitable and better supported GNU/Linux distro. I'm thinking of whether to move away from Debian to Fedora as my base distro.

PHP Version Manager (PHPENV)

With the recent release of PHP 5.6, I stumbled upon these tools: phpenv (inspired by rbenv) and php-build, which let you build different PHP versions without messing up your existing installation. For the installation steps, we're following Kobito's setup guide.

1. To set up both phpenv and php-build, just type these commands. Be extra careful when running downloaded shell scripts directly from the net.

$ curl https://raw.github.com/CHH/phpenv/master/bin/phpenv-install.sh | bash
$ git clone git://github.com/CHH/php-build.git ~/.phpenv/plugins/php-build
$ echo 'export PATH="$HOME/.phpenv/bin:$PATH"' >> ~/.bashrc
$ echo 'eval "$(phpenv init -)"' >> ~/.bashrc
$ exec $SHELL -l

2. Check our installation. The second command should give you a long list of PHP versions.
$ phpenv --version
rbenv 0.4.0-98-g13a474c

$ phpenv install -l

3. As I was running Debian, install all the necessary packages so we can compile the source code.
$ sudo apt-get install make ccache re2c libcurl libcurl-dev bison libcurl4-gnutls-dev libjpeg62-dev libmcrypt-dev libtidy libtidy-dev libxslt1-dev apache2-prefork-dev

4. Compilation. Took me around 15 minutes.
$ CFLAGS="-g" phpenv install 5.6.0

$ which php
/home/kianmeng/.phpenv/shims/php

5. Switching between the installed version (system) and your compiled version.
$ phpenv versions
* system (set by /home/kianmeng/.phpenv/version)
5.6.0

$ phpenv global
system

$ php --version | grep ^PHP
PHP 5.4.4-14+deb7u14 (cli) (built: Aug 21 2014 08:36:44)

$ phpenv rehash
$ phpenv global 5.6.0

$ phpenv global
5.6.0

$ php --version | grep ^PHP
PHP 5.6.0 (cli) (built: Sep  1 2014 03:26:46)

Unfortunately, to install phpdbg, you'll need to create another shell script as a php-build plugin in order to build it. Someday perhaps.

Find and Delete All Duplicate Files

I was asked this question today but couldn't think of a quick answer. The typical manual solution is to compare the file sizes and file contents through hashing or checksums.

It seems there are quite a number of duplicate file finder tools, but we will try a console tool called fdupes. Typical usage of this program follows.

1. Install the program.
$ sudo apt-get install fdupes

2. Create sample testing files.
$ cd /tmp
$ wget http://upload.wikimedia.org/wikipedia/en/a/a9/Example.jpg -O a.jpg
$ cp a.jpg b.jpeg
$ touch c.jpg d.jpeg

3. Show all duplicate files.
$ fdupes -r  .
./a.jpg                                 
./b.jpeg

./c.jpg
./d.jpeg

4. Show all duplicate files but omit the first file of each set.
$ fdupes -r -f .
./b.jpeg                                

./d.jpeg

5. Similar to step 4 but delete the duplicate files and keep one copy.
$ fdupes -r -f . | grep -v '^$' | xargs rm -v
removed `./b.jpeg'                      
removed `./d.jpeg'

On a similar note, there is an interesting read on the optimized approach taken by Dupseek, an app that finds duplicate files. The main strategy is to group files by size and compare them set by set, ignoring any set with just one file.

Unfortunately, I had a hard time understanding the Perl code. The closest and simplest implementation I could find is the tiny find-duplicates Python script.
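
For a rough feel of that strategy, here is a minimal Python sketch of the group-by-size-then-hash idea; it is not Dupseek's actual algorithm, just the core trick:
import hashlib
import os
from collections import defaultdict

# Group files by size first; only files sharing a size can be duplicates.
by_size = defaultdict(list)
for root, _, files in os.walk("."):
    for name in files:
        path = os.path.join(root, name)
        by_size[os.path.getsize(path)].append(path)

# Within each set of two or more same-sized files, compare by content hash.
for paths in by_size.values():
    if len(paths) < 2:
        continue  # a set with just one file is ignored
    by_hash = defaultdict(list)
    for path in paths:
        with open(path, "rb") as f:
            by_hash[hashlib.md5(f.read()).hexdigest()].append(path)
    for dupes in by_hash.values():
        if len(dupes) > 1:
            print(dupes)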

Which Is More Readable or Preferable?

Going through my daily subscribed feeds, I noticed both monitors were displaying two ebooks published to the web differently. On the left is The Feynman Lectures, and on the right, Learn Web Development: The Ruby on Rails Tutorial. Both are rendered using the Georgia font but at different sizes and layouts.

If you ask me, I prefer the left screenshot. Straightforward, clean, good contrast, and it makes the best use of the layout. A good example of a good hypertext document on the Web.

PHP 5.6 New Features

It has been a while since I last coded anything significant in PHP, but the 5.6 release does have quite a few significant features. Is this the final stable version before we all move to PHP 7?

The only feature I really like is the support for variable-length argument lists through variadic functions and argument unpacking. An example is shown below.
function sum(...$numbers) {
    return array_sum($numbers);
}

$nums = [1, 2, 3, 4];
echo sum(1, 2, 3, 4), "\n"; // 10
echo sum(), "\n";           // 0
echo sum(...$nums), "\n";   // 10
echo sum($nums), "\n";      // 0

On to additional changes to namespaces. While the use operator now supports importing functions, constants, or classes, it is still very much limited, or rather half-baked, when compared to Java or Python. For example, we still can't use wildcards for mass imports or a shorter syntax for importing selected names, as shown.
// not working.
use MyProject\Feature\*;
from FooLibrary use Foo, Bar, Baz;

Also, don't get me started on the whole backslash (\) as the namespace separator. I cringe every time I think of or look at it. Sigh.

phpdbg, which is something new to me, was included and implemented as a Server Application Programming Interface (SAPI) module. Unfortunately I can't get it to work; I'm going to try it out in another post.

Microservices?

"...an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare mininum of centralized management of these services, which may be written in different programming languages and use different data storage technologies."
-- James Lewis & Martin Fowler, emphasis added
The term has lingered in my mind for the past two weeks, but I didn't pay much attention to it until today. Yes, again another gimmicky development term, which seems to be a rebranding of the Unix philosophy and a simplified version of Service-Oriented Architecture (SOA). Sigh, the side effect of the trending butt, ahem, cloud technology these days.

How to implement this architectural style? Decompose and move each component of your monolithic system into its own service. Each service can be developed using any platform, programming language, or data storage, but communicates through JSON over HTTP or a lightweight messaging bus. In short, it's a change of the communication style between components from function calls to messages.

Nothing new here, old wine in a new bottle.

Android Java

"Android - really the biggest reason today why anyone besides the enterprise guys cares about Java anymore - is well down this dark dark road as well. It’s increasingly common to read a page of Android API documentation and have no idea what the fuck it’s talking about initially. You get there eventually of course, you just have to take a detour through 17 other classes. What, you can’t handle that? You obviously lack the perseverance and vision to conceptualise this grand cathedral that has been built to populate a list. Loser."
-- Neil Sainsbury, additional emphasis added.
You have to agree that Java is still relevant today due to the popularity of Android. Otherwise it would have headed for the same fate as Cobol or 4GL in the enterprise world. But of course, Enterprise Java and Android Java are two different beasts.

Static Variable in Python

The solution, as shown below, is actually quite straightforward. By using a function attribute, you can emulate a static variable in a function without using a global variable.
def myfunc():
    if not hasattr(myfunc, "counter"):
        myfunc.counter = 0  # it doesn't exist yet, so initialize it
    myfunc.counter += 1
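
Calling it a few times shows the counter surviving between calls:
for _ in range(3):
    myfunc()
print(myfunc.counter)  # 3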

Coming from C-based programming languages, PHP in my case, it's going to take a while for me to adapt to Python. Yes, I still write Python like PHP; hence, more practice is needed.

Python and Makefiles

"Besides building programs, Make can be used to manage any project where some files must be updated automatically from others whenever the others change."
-- Wikipedia on Make
One of the issues I noticed while doing a Django project was the bunch of shell scripts in the project folder. These tiny shell scripts were mostly related to creating the virtualenv, resetting the database, and so on. Wouldn't it be nice if we could combine and group all these scripts into one file?

That is possible and easy with Make and makefiles, provided you're willing to pick up the rules. The only minor annoyance is that those running Windows need to install Make for Windows.

Some things I learned while working on my Makefile; a fuller sketch follows the list.

1. A rule tells the make program how to build your program. The syntax, as shown below, is straightforward and consists of a target, its dependencies, and commands, where the commands must be indented with a tab.
target: dependencies
	commands

2. Prepend the build target name with '_' if you don't want the build target to show up in your shell's autocompletion.

3. Build script too noisy or verbose? For example, the printing of the working directory. To silence it, run make -s or prepend '@' to the commands.
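
A minimal sketch tying these together; the target names and commands are hypothetical stand-ins for those per-task shell scripts:
venv:
	virtualenv venv

resetdb: venv
	@echo "Resetting database..."
	@./venv/bin/python manage.py flush --noinput

_cleanup:
	@rm -rf venv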

Finding and Deleting Files, xargs rm vs find -delete

An interesting comparison of finding and deleting files using both the xargs and find commands.

Create 100k files of 10 bytes each.
$ mkdir /tmp/test
$ cd /tmp/test
$ dd if=/dev/zero of=masterfile bs=1 count=1000000
$ split -b 10 -a 10 masterfile

Using xargs.
$ time find -name 'xaa*' -print0 | xargs -0 rm
real    0m7.667s
user    0m1.112s
sys     0m6.491s

Using find with the -delete option.
$ time find -name 'xaa*' -delete
real    0m7.252s
user    0m0.954s
sys     0m6.023s

A time difference of 0.415s, which is just insignificant. However, the -delete method is way easier to remember.

Noto Sans CJK

"Noto Sans CJK is a sans serif typeface designed as an intermediate style between the modern and traditional. It is intended to be a multi-purpose digital font for user interface designs, digital content, reading on laptops, mobile devices, and electronic books."
--  http://www.google.com/get/noto/cjk.html
I've been reading a lot of Chinese text these days, but the text displayed by the default CJK fonts is awful. When Google released the Noto fonts, I was hoping they could improve the readability; sadly, the result remains the same.

Installation is quite straight forward.
1. Download the Simplified and Traditional Chinese fonts from Google's Noto font site.

2. Unzip the files and copy them to the default OpenType folder.
$ sudo mv *.otf /usr/share/fonts/opentype/

3. Update the font cache.
$ sudo fc-cache -f

As you can see from the captured screenshots for both Chrome and Iceweasel/Firefox, the text is fuzzy, due to anti-aliasing, and hard to read at the default font size. Readability only improves once you zoom to 150%. Iceweasel/Firefox fares worse than Chrome, as its font rasterization is kind of messed up, with a mix of aliased and anti-aliased text.

Feeling disengaged? Burned-out or bored-out?

Three things to do.

First, take a sabbatical leave. That's the common sentiment in the forum. Away from the Internet, away from any electronic gadgets, and back to nature. Letting go of the fear of missing out.

Second, look into Maslow's hierarchy of needs. Ask yourself honestly, seriously, don't bullshit yourself, right now, about your current needs. Is it physiological, safety, love/belonging, esteem, or self-actualization?

Lastly, pick and plan your next step. Don't repeat yourself, do something different. Try a different domain. Follow up with your childhood dreams or items in your bucket list.

On Django Grappelli

When adding any new framework or library to your development, you'll eventually encounter the 80/20 rule of software development, in which you finish 80% of the work but get stuck almost indefinitely on the remaining 20%.

Several things I learned the hard way about Django Grappelli.

The sequence of loading INSTALLED_APPS is very important. The grappelli module must come before the django.contrib.admin module. Fail to do so and the changes to the admin layout will not take effect.
INSTALLED_APPS = (
    'grappelli',
    'django.contrib.admin',
)

If you want to customize the layout and use the default CSS styling, read the documentation on templates. Unfortunately, it's not googlable and must be accessed locally through your Django instance at http://localhost:8000/grappelli/grp-doc/. Oh boy, so much time wasted googling for a tutorial or documentation on customization.

Where is the bloody documentation on nav-global block?

Don't Work For The Wrong Reason

"As Bradbury suggests, we create work for ourselves when it isn't necessary, and we focus on the wrong reasons for working. Stop wasting time creating work for yourself because you want to feel productive, get away from being busy just to busy, and find something you you really enjoy working on. You need money to carry on in this world, no doubt, but don't let it be your only driving factor. You'll end up bored and hating every minute of your work."
-- Ray Bradbury, emphasis added
You don't feel busy or bored if you're doing something that excites you and you're earning enough money to survive. However, the older you get, the more bitter, grumpy, and cynical you become, and hardly anything excites you these days.

For the past two weeks, I've been listening to quite a few people pitching their ideas and visions. Some are like old wine in new bottles, others are okay-ish, and only a few are well thought out and reasonable. Expect more pitching this coming month.

AMD Radeon R7 260X

The R7 260X is probably the best budget (less than USD 100) graphics card you can buy right now that has good support for both FOSS and proprietary drivers on GNU/Linux. I'm still contemplating getting this card.

Two reasons: the hefty price, with the average local price around MYR470+, and the uncertain support in Debian Wheezy with a backports kernel (>= 3.13). If possible, I would like to stick with Debian Wheezy and the FOSS driver. It would be even better if I could get WebGL to work in Chrome.

Finally, A Quieter Workstation

After so many weeks, I've managed to remove the "vacuum cleaner" noise from the HP Proliant ML110 server and turn it into a usable workstation. I should have done this earlier, but it took me a while to take the initiative to figure out how to solve it.

The changes were so simple that I laughed at my own over-analyzed solution and the unnecessary replacement parts I had bought. Since the noise was caused by the high-speed fans running at 4000 RPM, the best way was just to replace them with fans of lower RPM.

Two major issues I faced. First, it is very hard to get a cheap 4-pin PWM 92mm fan in MY unless you buy it together with a heatsink or source it from Taobao. Second, you can reuse the existing 6-pin JWT A2504 connector instead of buying a new one. Even though I couldn't get the exact model, a similar 6-pin 2510 connector works just fine.

At the end of the day, I've learned so much about casing and heatsink fans and their related power connectors. Such knowledge should be quite handy in case I need to build another workstation in the future.

The results before and after the replacement. First, install the monitoring tools.
$ sudo apt-get install freeipmi-tools lm-sensors

Default server fans
$ sudo /usr/sbin/ipmi-sensors | grep Fan
1344  | REAR FAN         | Fan                      | 4000.00   | RPM   | 'OK'
1408  | CPU FAN          | Fan                      | 4000.00   | RPM   | 'OK'

After replacement with lower RPM fans.
$ sudo /usr/sbin/ipmi-sensors | grep Fan
1344  | REAR FAN         | Fan                      | 1803.07    | RPM   | 'OK'
1408  | CPU FAN          | Fan                      | 2241.20    | RPM   | 'OK'

I forgot to capture the temperature readings before the replacement, but I recall they were about the same, roughly around 35°C to 39°C.
$ sudo sensors-detect
$ watch -n 1 -d sensors
coretemp-isa-0000
Adapter: ISA adapter
Core 0:       +37.0°C  (high = +83.0°C, crit = +99.0°C)
Core 1:       +36.0°C  (high = +83.0°C, crit = +99.0°C)
Core 2:       +38.0°C  (high = +83.0°C, crit = +99.0°C)
Core 3:       +35.0°C  (high = +83.0°C, crit = +99.0°C)

Neglected Raspberry Pi

"A bunch of nerds could order one, then wait six months for it to arrive. They could install a version of Linux on it, play around with it for about 20 minutes, and then talk about how maybe they'll use it for XMBC. Then they could just let it gather dust on some shelf until it gets thrown away in a few years."
-- kamapuaa, emphasis added
My sentiment exactly, especially regarding the Raspberry Pi or similar devices. Due to the low spec, slow updates from the distro, and lack of optimized GPU support, you can't do much with these tiny devices unless you want to learn more about electronics. Must figure out what to do with the abandoned Pi at home; maybe I can turn it into a wireless print server instead.

Debian Wheezy Backports Kernel

So I assumed upgrading your Debian Wheezy to a later kernel version through backports is relatively easy; you just have to:
$ echo "deb http://ftp.debian.org/debian/ wheezy-backports main non-free contrib" | sudo tee -a /etc/apt/sources.list.d/wheezy-backports.list

$ echo "deb-src http://ftp.debian.org/debian/ wheezy-backports main non-free contrib" | sudo tee -a /etc/apt/sources.list.d/wheezy-backports.list

$ sudo apt-get update
$ sudo apt-get -t wheezy-backports install linux-image-amd64
$ sudo apt-get -t wheezy-backports install linux-headers-amd64 
$ uname -a
Linux butterfly 3.14-0.bpo.1-amd64 #1 SMP Debian 3.14.7-1~bpo70+1 (2014-06-21) x86_64 GNU/Linux

Easy, right? Straightforward, right? Wait until you reboot your machine...
$ sudo reboot

X can't start due to the Nvidia drivers being incompatible with the kernel. Virtualbox won't load either. Remove and update Virtualbox's Dynamic Kernel Module Support (DKMS) module, since it does not come together with the GNU/Linux source.
$ sudo apt-get remove virtualbox-dkms
$ sudo apt-get -t wheezy-backports install virtualbox-dkms
$ sudo reboot

Everything seems okay. No conflicts or failures during booting.

Next, the Nvidia's DKMS.
$ sudo apt-get remove nvidia-kernel-dkms
$ sudo apt-get -t wheezy-backports install nvidia-kernel-dkms
$ sudo reboot

Still can't get X to show up. Check the kernel log.
$ dmesg | grep nvidia

[   65.677942] nvidia: Unknown symbol acpi_os_wait_events_complete (err 0)

acpi_os_wait_events_complete (err 0)? It seems Nvidia is a bit behind in following the kernel API changes with their driver. Patching it manually did not solve the issue.

Unfortunately, I have to purge all the Nvidia drivers and switch to the Nouveau driver.
$ dpkg --get-selections | grep nvidia | xargs sudo aptitude purge --assume-yes
$ sudo rm /etc/X11/xorg.conf
$ sudo rm /etc/X11/xorg.conf.d/20-nvidia.conf
$ sudo apt-get install -t wheezy-backports xserver-xorg-video-nouveau
$ sudo reboot

Update the xrandr settings to reflect the driver change so that the dual monitors work again. The Nouveau driver is noticeably dog slow; not a pleasant experience, especially when watching YouTube, as it can hang occasionally for some unknown reason.

Lesson learned: upgrading to a backports kernel in Debian is not that straightforward. Should I switch to an AMD/ATI card instead? I do not foresee myself getting any Nvidia hardware in the coming years.

Note to self - 2014-06-19

I've a feeling that Vagrant and Google Chrome hate each other. I can understand Vagrant or VirtualBox being resource intensive, but Chrome with just one tab with Gmail opened? Seriously?

The curse of PHP. Once you've been stereotyped as a programmer of a certain programming language, it's hard to switch or move to another language.

Talking about being absent-minded. I nearly lost my whole bag and didn't realize it until I had traveled for hours.

Development and Production Environment

Always match your development environment with the production environment; this is especially true for Python development. While Docker just reached 1.0, the preferable choice is still Vagrant. Will look into Docker once time permits. Of course, having a quad-core machine with plenty of RAM helps a lot as well.

Which begs the question: if I'm going to buy a new machine that supports virtualization, which Xeon model for socket 1150 should I get so the total cost of the system stays within the budget of MYR1.5k? Unless necessary, I don't believe in paying more than MYR2k for any electronic device these days.

Upgrading a system is always a tedious process. You'll appreciate the effort put into unit testing; it gives you some assurance that everything works as intended. Testing is one area that I should focus on in the coming years.

mkdtemp: private socket dir: Permission denied

Woke up this morning and the GDM login kept kicking me out. Ctrl+Alt+F1 back to the console shell and checked all the log messages. Based on a similar incident I faced a few months back, I checked the .xsession-errors file. The result is as shown.

Go to my home directory.
$ cd

Read the X session log file for some clues.
$ cat .xsession-errors
/etc/gdm3/Xsession: Beginning session setup...
localuser:kianmeng being added to access control list
openConnection: connect: No such file or directory
cannot connect to brltty at :0
Script for scim started at run_im.
Script for auto started at run_im.
Script for default started at run_im.
mkdtemp: private socket dir: Permission denied

Googled around; it seems this is quite a common issue. To confirm my finding:
$ ls -ld /tmp
drwxr-xr-x 17 root root 20480 Jun 15 12:42 /tmp/

Applied the fix, which restores the sticky bit and world-writable mode (1777) on /tmp.
$ sudo chmod a+wt /tmp
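
Afterwards, /tmp shows the sticky bit and world-writable mode, matching the namei output for /tmp seen earlier:
$ ls -ld /tmp
drwxrwxrwt 17 root root 20480 Jun 15 12:42 /tmp/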

What was the actual root cause? Last I remember, I was restarting the machine repeatedly without logging in, for some BIOS settings tweaking. Most likely that's the reason.

Setting Up Git Repo Locally and Push to Remote

To be more specific, how do you set up a local Git repository with branches and tags and later push it to a remote origin URL? It turns out to be quite simple. The steps are as follows.

1. Setting up the remote central bare project repository.
$ mkdir project
$ cd project
$ git init --bare

For any central shared Git repository, use the --bare parameter. What's the difference between a normal and a bare repository? A normal repository contains a working directory plus the actual repository, the hidden .git folder, whereas a bare repository contains only the contents of the .git folder.

If you're coming from a Subversion background, you'll notice that the repository data and your checked-out repository have different tree layouts.

2. Setting up your local project repository.
$ mkdir project
$ cd project
$ git init

3. Add in all your code, branches, or tags if necessary.

4. Set the remote origin URL.
$ git remote add origin git@github.com:foobar/project.git
$ git remote -v
origin  git@github.com:foobar/project.git(fetch)
origin  git@github.com:foobar/project.git(push)

5. Push all your branches and tags to the remote origin URL.
$ git push origin --tags
$ git push origin --all

Setup Backports Repository for Debian Wheezy

The setup procedure is quite easy, but unfortunately not many packages are available in the backports repository, especially the latest kernel.
$ sudo sh -c 'echo "deb http://cdn.debian.net/debian wheezy-backports main" > /etc/apt/sources.list.d/wheezy-backport.list'
$ sudo apt-get update
$ sudo apt-get -t wheezy-backports install --reinstall tmux

To find all installed backports packages. [2]
$ sudo dpkg -l | awk '/^ii/ && $3 ~ /bpo[4567]0/ {print $2}'

Explanation of the awk command: find all lines that start with the characters 'ii' and whose third column contains bpo40, bpo50, bpo60, or bpo70; if found, print the second column.

Git Learning Progress

It's like one of those rare days where you find enjoyment and gain a sense of achievement through learning something new. As I slowly get acquainted with Git, the more I use it, the more I understand how it is supposed to work. More on that in another post. Right now, on a scale of 1 to 10, I rate my Git skill at around 3.

Moving Away From PHP

This is probably the main reason why Perl was dethroned from its position as one of the leading programming languages. Its roles have been taken over by simpler programming languages on the web (PHP) and in system administration (Python/Ruby). I was wondering, what will replace PHP, Python, or Ruby? Is PHP going to be replaced by Hack (by Facebook) or node.js, and Python/Ruby by Golang?

Reflecting back on my programming career, I made a mistake not moving away from PHP. I should have invested more time in other programming languages like Java or Python, especially Java. Of course, mistakes were made and there is nothing much we can do besides accepting them and moving ahead.

It's always good to try and learn different new technologies. Most importantly, you must enjoy what you're doing and stay healthy to sustain it.

Note to self - 2014-06-08

One of the benefits of writing a journal is that you have the opportunity to reflect back on and monitor the progress of your life, especially how you got to where you are right now.

Due to the nature of my workplace, I'm slowly getting into the habit of a weekly review and keeping track of my time usage, something that may be frowned upon by others, but I believe it is the right approach if you want to be very aware of your time usage.

For the past two days, I've switched my working style. No more coding on a lappy while lying down. Just sitting at my table with dual monitors, coding and writing. The noisy workstation still bothers me, but I'm slowly getting used to it. Productivity seems to increase and I seem to accomplish something. Whether this is the right thing? Not sure.

While I'm a firm believer in computer user freedom and have even set up a monthly donation to them, what are my Free Software activities? Sadly, none. However, what if I allocate two to three hours per week for such activities? Let's see how this goes.

More on Bash scripting. I learned and relearned quite a few things today while writing a small script for setting up my development machine.

The Weirdness of Ruby Version in Debian Wheezy

Due to a certain application's dependency on Ruby, I need to install Ruby on the machine. As you can see below, the versioning and packaging in Debian puzzled me.

Install the latest Ruby version and the ruby-switch program, which can set the default Ruby version to any installed version.
$ sudo apt-get install ruby1.9.3 ruby-switch

Check the version we just installed.
$ ruby --version
ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux]

Show the list of available versions. Note that the listing shows ruby1.9.1. I thought we'd just installed version 1.9.3?
$ sudo ruby-switch --list
ruby1.8
ruby1.9.1

Set version 1.9.3 as our default Ruby version.
$ sudo ruby-switch --set ruby1.9.1

Check the binary of ruby. Apparently it is a soft link.
$ file `which ruby`
/usr/bin/ruby: symbolic link to `/etc/alternatives/ruby'

Again, another soft link, but pointing to ruby1.9.1.
$ file /etc/alternatives/ruby
/etc/alternatives/ruby: symbolic link to `/usr/bin/ruby1.9.1'

Check whether /usr/bin/ruby1.9.1 is a binary or a soft link. As the result shows, it's a binary program.
$ file /usr/bin/ruby1.9.1
/usr/bin/ruby1.9.1: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.26, BuildID[sha1]=0xf5d1e1e2959315b9d4907b6a40fb2c44f1c27c87, stripped

Check the ruby1.9.1 version again. What?! Why is it showing version 1.9.3?
$ /usr/bin/ruby1.9.1 --version
ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux]

Check which package contains this /usr/bin/ruby1.9.1 binary. Weird, it belongs to the ruby1.9.1 package, which is reasonable, but I didn't install that.
$ apt-file search /usr/bin/ruby1.9.1
ruby1.9.1: /usr/bin/ruby1.9.1

Show the reverse dependencies. Why does ruby1.9.3 depend on ruby1.9.1?
$ apt-cache rdepends ruby1.9.1 | grep ruby
ruby1.9.1
ruby1.9.1:i386
libstfl-ruby1.9.1
ruby1.9.3
ruby1.9.1-full
ruby1.9.1-examples
ruby1.9.1-dev
ruby

Okay. Let's check Ruby Gems with similar steps.
$ file `which gem`
/usr/bin/gem: symbolic link to `/etc/alternatives/gem'

$ file /etc/alternatives/gem
/etc/alternatives/gem: symbolic link to `/usr/bin/gem1.9.1'

Interesting. gem1.9.1 is a ruby script.
$ file /usr/bin/gem1.9.1
/usr/bin/gem1.9.1: Ruby script, ASCII text executable

However, check the version of /usr/bin/gem1.9.1.
$ /usr/bin/gem1.9.1 --version
1.8.23

What the heck?!

More on Bash Scripting

1. For anything that doesn't need a lot of escaping, sudo sh -c is good enough. Piping through sudo tee also works when writing to a root-owned file.
$ sudo ls -l /root | sudo tee /root/ls.txt > /dev/null
$ sudo sh -c 'ls -l > /root/ls.txt'

2. I've not even begun to explore the power of sed, the stream editor. An example is inserting a line at the beginning of a file, as sketched below. More reading on the one-line scripts for sed in the coming months.
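
For example, a one-liner of that sort against a hypothetical file:
$ sed -i '1iThis line goes first' /tmp/example.txt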

3. "Syntax error: "(" unexpected". That's the error you got when you tried to run a Bash script through Dash without the shebang line, which is the default system shell for Ubuntu or Debian.

4. It will take a while to appreciate the concise syntax of Bash. Let's look at these two examples [9] where we restart a daemon upon a missing PID file.

Ex1: Typical version
if [ ! -f /tmp/daemon.pid ]; then
    python daemon.py restart
fi

Ex2: Concise version
[ -f /tmp/daemon.pid ] || python daemon.py restart

The [] construct checks for the existence of the file and returns *0 on success* and any number other than 0 on failure. Yup, zero as the success status, in contrast to other popular programming languages. The reason is that you can return many different codes as error statuses, but only one code is needed to represent success.

Meanwhile, the || construct will execute the command on the right only if the command on the left fails.
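
A quick way to see those statuses, since $? holds the exit status of the last command:
$ [ -f /etc/passwd ]; echo $?
0
$ [ -f /no/such/file ]; echo $?
1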

Monitorix

"Monitorix is a free, open source, lightweight system monitoring tool designed to monitor as many services and system resources as possible. It has been created to be used under production Linux/UNIX servers, but due to its simplicity and small size can be used on embedded devices as well."
--http://www.monitorix.org, emphasis added.
Installation guide for Debian.
$ sudo su -
# curl http://apt.izzysoft.de/izzysoft.asc | apt-key add -
# apt-get update
# apt-get install monitorix
# service monitorix restart
# exit
$ xdg-open http://localhost:8080/monitorix/

One should not reject the opportunity for some visual pleasure from one's machine.

Note to self - 2014-06-07

Most chassis and CPU fans use either a three-pin or four-pin connector. Not on an HP Proliant server; theirs is different and proprietary. Their fan connector has six bloody pins. To make matters worse, these fans sound like a bloody vacuum cleaner. Imagine sitting next to that machine at 79 dBA! No, you can't unplug the fan; the Power-On Self-Test (POST) will fail!

Next time, be extra careful when you're looking for a battle station. Remember to check the motherboard carefully, especially for noise and parts. Most server boards contain proprietary pins, and it's neither easy nor cheap to source these replacement parts.

If I still can't find another way to resolve this noise issue, I'll need to start another fund to get a Xeon-based quad-core workstation. Why Xeon and not i7/i5? Well, it's not a gaming rig, and furthermore, it's nice to have make -j N.
                                     
Always find a way to speed up your daily routine. Pick anything around you that can be automated. As someone who spends roughly half his time in the console and half in a browser, it never occurred to me that you can google from the command line.

Lots of Bash coding today. Not sure why, but I've developed a certain liking for Bash scripts. Today I learned about shell builtins, passing all arguments to a function using "$@", and backticks versus $(cmd) for command substitution. The latter is preferred because you can nest commands.

Launch Default Web Browser From Console

Interesting; I never realized there were so many ways. I should update my way of writing step-by-step guides.

1. sensible-browser
$ sensible-browser http://google.com
$ man sensible-browser | grep DESCRIPTION -A 3
DESCRIPTION
sensible-editor, sensible-pager and sensible-browser make sensible decisions on which editor, pager, and web browser to call, respectively. Programs in Debian can use these scripts as their default editor, pager, or web browser or emulate their behavior.

2. xdg-open
$ xdg-open http://google.com
$ man xdg-open | grep DESCRIPTION -A 3
DESCRIPTION
xdg-open opens a file or URL in the user's preferred application. If a URL is provided the URL will be opened in the user's preferred web browser. If a file is provided the file will be opened in the preferred application for files of that type. xdg-open supports file, ftp, http and https URLs.

3. x-www-browser
$ x-www-browser http://google.com
$ man x-www-browser
No manual entry for x-www-browser
See 'man 7 undocumented' for help when manual pages are not available.

$ ls -l `which x-www-browser`
lrwxrwxrwx 1 root root 31 Feb 22 21:08 /usr/bin/x-www-browser -> /etc/alternatives/x-www-browser

$ ls -l /etc/alternatives/x-www-browser
lrwxrwxrwx 1 root root 29 Feb 23 04:10 /etc/alternatives/x-www-browser -> /usr/bin/google-chrome-stable

No wonder; it's the Debian alternatives system.
$ sudo update-alternatives --list x-www-browser
/usr/bin/google-chrome-stable
/usr/bin/iceweasel
/usr/bin/midori

4. gnome-open
$ gnome-open http://google.com
$ man gnome-open | grep DESCRIPTION -A 2
DESCRIPTION
This program opens files using file handlers configured in GNOME.

Note to self - 2014-06-05

1. Taobao, slowly but surely, will replace the local ecommerce players as the go-to destination for online purchasing. Nothing interesting among them; they're just resellers of items procured from merchants on Taobao. The real money or profit still flows through logistics, or to be specific, freight forwarding.

2. Watching items being purchased online in real time was quite an addictive activity. You would be surprised by the types of items purchased online, and some of these items are dirt cheap! I'm so going to create a dashboard showing these purchases on my underutilized monitor.

3. What do you call a PHP programmer who is in the midst of switching to another programming language like Python? A better PHP programmer. Ouch.

4. PHP vs. Python. Python is better when you want to learn more about programming in general or become a better developer, while PHP is great when you want to create something fast for the web. Contrary to popular belief, Python is not a beginner-friendly language. By the way, PHP is a web template language. Which one makes you happy? Neither. But it is good to finally escape from PHP after all these 10 years. For me, PHP is "cari makan" and Python is to stimulate my bored mind.

5. export PYTHONDONTWRITEBYTECODE=1
Yup. No more pyc files scattered around. Only from Python 2.6 onwards though.

6. Read the bloody change logs. Seriously, when you're upgrading any software or libraries, please remember to read the bloody change logs.

7. Awareness, acceptance, and courage are the keys to becoming a better estimator. First, you must be aware of your current situation. Second, you must accept the current limitations, either in yourself or in your surroundings. Third, you must have the courage to make hard decisions and mistakes to move ahead.

Note to self - 2014-06-06

Never get yourself burned by Python library incompatibility issues again. It's like being lost in a maze and returning to the same old spot over and over.

Learning two programming languages at one time? Overwhelming, but a fun and stimulating experience. I should be more aggressive and start churning out stuff.

Average emails written per day this week? Two. More opportunities to practice my writing and communication skills.

When you realize that you forgot to eat your lunch, it implies one of two things: either you're too occupied with your current task, or you've fscked up your digestive system. For me, it's both.

Are you a pragmatic or an idealist programmer? Neither. Stop asking and just code. Create. Build. Start.

Unexpected Inconsistency: Run fsck Manually

Surprised to see this when I booted up my workstation upon arriving home. I booted up my lappy and googled for an answer, then keyed in the password and ran the fsck command manually.
$ fsck /dev/sda

Rebooted the machine. Same message again. Then I suddenly noticed that the system date was showing the year 2011, and realized that I had taken out the motherboard's battery but forgot to set the system date and time back afterwards.

Restarted into the BIOS settings, updated the system date and time, and everything worked again.

Next step, sync your clock accurately.
$ sudo apt-get install ntpdate
$ sudo ntpdate pool.ntp.org
$ date

Multiple Monitors Support in GNU/Linux

For the past two months, I've been busy configuring multiple monitor support for different machines. Instead of tweaking X's configuration file, I've found an easier way to configure it using aRandR, a frontend tool to XRandR.

Install the package
$ sudo apt-get install arandr

Run it and configure your monitors layout. You can easily drag-and-drop and position your monitors.
$ arandr

To make the XRandR changes permanent and have them applied every time the X session starts, save the settings generated by aRandR in your home folder as $HOME/.xprofile.
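
For reference, the file aRandR saves is just a small shell script that calls xrandr. A sketch of what it might look like, where the output names (LVDS1, HDMI1) and resolutions are examples and will differ per machine:
#!/bin/sh
# laptop panel on the left, external monitor on the right
xrandr --output LVDS1 --mode 1366x768 --pos 0x0 \
       --output HDMI1 --mode 1920x1080 --pos 1366x0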

Happiness or Achievement?

"Another major change is simply life outlook. While I was never the totally reckless type, and never all that obsessed with money, today the money just isn't particularly important. I want enough to ensure security, and it'd be nice to have enough to just work on my own projects, but I don't particularly care if I get rich. That changes my assessment of any startups drastically - I'm no longer prepared to jump at an opportunity to get rich if it's not something I'm sufficiently excited by. I don't feel I'm in a hurry to prove anything. I have what I need, and then some. I'm far more secure in myself in every way than I was at 19. I'm not going to pretend like I wouldn't love to get that multi-million exit, but it's not something that matters to me now (I'm sure it'd matter to me if it happened, though)."
-- vidarh, emphasis added
Which reminds me of a discussion I had with someone regarding the difference in life goals between women and men. The former desire happiness and the latter seek achievement. And both genders still wish for more money, given the opportunity. Maybe I'm just generalizing.

Dnsmasq For Local Development Site

Found via HN. I used to edit the /etc/hosts file and add local testing sites manually, and it never occurred to me to use Dnsmasq to redirect all URLs that end with .dev to 127.0.0.1.

Setup steps for Ubuntu as follows:
$ sudo apt-get install dnsmasq
$ echo 'address=/dev/127.0.0.1' | sudo tee /etc/dnsmasq.d/localhost.dev
$ sudo sed -i '1inameserver 127.0.0.1' /etc/resolv.conf
$ sudo service dnsmasq restart
$ dig foobar.dev @127.0.0.1 | grep IN
$ ping foobar.dev
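
One caveat: on systems where /etc/resolv.conf is regenerated by the DHCP client or NetworkManager, the nameserver line we prepended may be overwritten on reboot. A quick check that it is still the first entry, which should print the line we added:
$ head -1 /etc/resolv.conf
nameserver 127.0.0.1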

Simple yet smart way to use Dnsmasq.

Upgrading to Ubuntu Trusty Tahr 14.04

As usual, the recent release of Ubuntu 14.04 made for a painstaking upgrade process for me again. My experience was similar to HN reader etfb's, where the upgrade crashed halfway, causing conflicting and broken packages. I had to reboot into recovery mode and continue the installation forcefully with apt-get install -f. Luckily the data in my home directory was not corrupted and everything was still intact.

But that was just the beginning of my dreadful recovery process, since I have quite a lot of third-party PPAs, especially the GNOME 3 PPA. Both the GNOME and Unity desktops refused to start up and let me log in. After some re-installation and config file removal, I managed to get my GNOME 3 session back. I don't even bother with Unity anymore, since GNOME 3 plus the shell extensions covers most of my regular needs.

Kind of annoyed that I wasted the whole Saturday just to upgrade the distro. Some may say this is the price you'll have to pay for the fear of missing out (FOMO) on the latest and greatest. Still, it is advisable to get the latest version if you're using it on a laptop or any recent hardware; at least you get more stable and faster drivers.

I like what wpietri said about the different types of GNU/Linux users. A satisficer will accept the default settings, whereas a maximizer will strive for the perfect setup. Those who use Ubuntu or Fedora are satisficers, and maximizers will prefer ArchLinux or Gentoo. When you're young, extremely bored, and have plenty of time to kill, you will end up in the latter group. As you age, time becomes limited and precious, and you'll just want to get things to work and move on with something else in your life.

Looking at the bright side, painful as it may be, I've learned from sandGordon how to set up a leaner Ubuntu installation. Furthermore, this release seems responsive on any laptop with an Intel i3 processor, compared to my lappy's unknown quad-core processor.

Almost Daily Git Rebasing Workflow

It used to be cumbersome and frustrating when I first learned how to do a rebase, but these days, I'm slowly getting used to it. Yes, occasionally you still make mistakes, but branching is cheap and you can always recover from those mistakes. Let's look at my almost daily rebasing workflow. The typical steps are as follows:

Get the latest version from the remote master branch.
$ git fetch
$ git rebase origin/master
Current branch master is up to date.

Create a new topic or feature branch from the master branch. Make sure you're on the master branch.
$ git checkout master
Already on 'master'

$ git checkout -b feature-x
Switched to a new branch 'feature-x'

Let’s create some dummy commits.
$ touch foo1; git add foo1; git commit -m "foo1"
$ touch foo2; git add foo2; git commit -m "foo2"

Inspired by David Baumgold's great rebasing guide, find the last commit where you first branched off the master branch to create the feature-x branch.
$ git merge-base feature-x master
8454f7f3b1b9e224134d4336683597fb1ad290fa

Next, optionally, if you like to have small and frequent commits, you should squash, reword, or fixup your local changes through an interactive rebase before rebasing against the remote or origin master branch.
$ git rebase -i 8454f7f3b1b9e224134d4336683597fb1ad290fa

Or, using a different syntax, if you want to go back a certain number of commits before the current HEAD.
$ git rebase -i HEAD~2

Rebase both dummy commits interactively.
reword b2dabc0 foo1
fixup d4add26 foo2

[detached HEAD 6af5a09] Add feature-x
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 foo1
[detached HEAD 7994cf7] Add feature-x
 2 files changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 foo1
 create mode 100644 foo2
Successfully rebased and updated refs/heads/feature-x.

If you realize that you've made a mistake after a successful rebase, you can always undo it.
$ git reset --hard ORIG_HEAD
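
And if ORIG_HEAD has since been overwritten by a later operation, the reflog still records every position HEAD has been at, so you can reset to one of those entries instead (the index 2 here is just an example):
$ git reflog
$ git reset --hard HEAD@{2}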

Rebase against the master branch. In other words, changes in your feature-x branch will be reapplied on top of the latest changes in the master branch. Often you will need to fix or skip conflicts (something I need to practice more, as I always mess up the merging).
$ git rebase origin/master

Optional steps, only if you encounter conflicts.
$ git rebase --skip
$ git mergetool
$ git rebase --continue
$ git rebase --abort

If you've already published your changes, in this case, the feature-x branch has been pushed to the remote server before, you'll need to force-push your changes. Although some say a forced update is bad, it is a compulsory step after rebasing a topic/feature branch that has already been published.
$ git push -f origin feature-x
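
As a safer alternative, Git 1.8.5 and later support a force push that refuses to overwrite the remote branch if someone else has pushed to it in the meantime:
$ git push --force-with-lease origin feature-x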

Autoload Module for Python Shell and IPython

In PHP, print_r is a very useful function to display variables in a human-readable format. Similarly, both pprint and awesome_print provide equivalent functions in Python, although the former has more features.

Since most of my Python time is spent in either the Python shell or IPython, it would be nice if we could autoload these two modules upon starting the shell.

Python shell

First, we need to set the environment variable PYTHONSTARTUP so the interpreter can autoload the file. You should put this into your .bashrc file and reload it.
export PYTHONSTARTUP=$HOME/.pythonstartup

The content of the .pythonstartup file, which is just a normal Python script, is shown below. Besides the imports, we also enable tab completion, as the default shell has limited features.
# two useful modules for pretty-printing variables
from awesome_print import ap
from pprint import pprint

import rlcompleter
import readline

# enable tab completion
readline.parse_and_bind("tab: complete")

To test the autoloading, just start the Python shell and type the sample code, which will list all the attributes of the ap function.
$ python
Python 2.7.5+ (default, Feb 27 2014, 19:37:08)
[GCC 4.8.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> ap(dir(ap))
[
  [ 0] __call__,
  [ 1] __class__,
  [ 2] __closure__,
  [ 3] __code__,
  [ 4] __defaults__,
  [ 5] __delattr__,
  [ 6] __dict__,
  [ 7] __doc__,
  [ 8] __format__,
  [ 9] __get__,
  [10] __getattribute__,
  [11] __globals__,
  [12] __hash__,
  [13] __init__,
  [14] __module__,
  [15] __name__,
  [16] __new__,
  [17] __reduce__,
  [18] __reduce_ex__,
  [19] __repr__,
  [20] __setattr__,
  [21] __sizeof__,
  [22] __str__,
  [23] __subclasshook__,
  [24] func_closure,
  [25] func_code,
  [26] func_defaults,
  [27] func_dict,
  [28] func_doc,
  [29] func_globals,
  [30] func_name
]

IPython

Again, for IPython, the setup is similar. First, you must export the IPYTHONDIR environment variable in your bash file. In my Ubuntu 14.10, the default data path was set to $HOME/.config, which contains other, unrelated configuration files that I don't want to add to my Git repository.
export IPYTHONDIR=$HOME/.ipython

Next, we create the default profile data.
$ ipython profile create
[ProfileCreate] Generating default config file: u'/home/kianmeng/.ipython/profile_default/ipython_config.py'
[ProfileCreate] Generating default config file: u'/home/kianmeng/.ipython/profile_default/ipython_qtconsole_config.py'
[ProfileCreate] Generating default config file: u'/home/kianmeng/.ipython/profile_default/ipython_notebook_config.py'

Following the directory structure shown below, create the startup script file 10-default.py.
$ tree .ipython
.ipython
├── profile_default
│   └── startup
│       └── 10-default.py

Add these to the 10-default.py file.
$ cat .ipython/profile_default/startup/10-default.py
from awesome_print import ap
from pprint import pprint

Start an IPython session and test for our autoloaded modules.
$ ipython
Python 2.7.5+ (default, Feb 27 2014, 19:37:08)
Type "copyright", "credits" or "license" for more information.

IPython 0.13.2 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: ap??
Type:       function
String Form:
File:       /usr/local/lib/python2.7/dist-packages/awesome_print/awesome_print.py
Definition: ap(*args)
Source:
def ap(*args):
    for arg in args:
        print format(arg)

In [2]:

PHP Successor?

"A lot of those "modern PHP" libraries are basically a masochistic exercise in copying designs that make sense in Java into PHP where they make no sense.

PHPT style tests wouldn't make sense in Java. And Java style Unit test frameworks make no sense in PHP.

However all those blindly supporting this, can please continue to waste their time, for the benefit of making sure they're buzzword compliant & in sync with groupthink."
-- mantrax [1], emphasis added

Sigh. My feelings exactly.

PHP the language has started to show its age and limitations. From a template language, to a procedural web scripting language, and now a mutated Java-wannabe sibling. Ironically, even Java web development itself is moving towards scripting-based languages like Groovy.

I am hoping Facebook's HHVM will slowly replace PHP (the Zend Engine) and, if possible, steer the direction of the language itself, again, by replacing it with Hack. They (Facebook) have all the resources to make this work.

Understanding Git Rebase

For a Git beginner like me, Git rebase seemed cryptic and hard to understand. The one-line help description of the command states that this tool will "Forward-port local commits to the updated upstream head". Forward-port? Local commits? Updated upstream head? Sounds confusing? Yup, me too. Even after I read the definitions and explanations of these terms.

After several days of googling and constant reading through the online tutorials and manuals, I finally managed to grasp a basic understanding of how and why Git rebase works, mostly from the excellent CERN guide to Git and Charles Duan's Guide to Git.

To summarize my understanding of Git rebase:
  1. Rebasing is about managing the commit history / log
  2. An alternative way of doing conventional merging, but more refined
  3. Two scenarios where you will need rebasing:
    • To squash or combine our local commits before merging with remote branches
    • To keep your local branch up-to-date with remote branches without merging
We will increase our understanding by going through a step-by-step guide of doing a rebase for both of the above-mentioned scenarios. Before that, let's set up our Git as follows. You can skip the user name and email if you've already done so.
$ git config --global user.name "John Doe"
$ git config --global user.email johndoe@example.com

$ git config --global color.ui auto
$ git config --global color.branch auto
$ git config --global color.diff auto
$ git config --global color.status auto
$ git config --global alias.ll 'log --oneline --decorate --graph --all'

Let's create a local Git repository before we proceed with rebasing.
$ mkdir -p /tmp/foobar
$ cd /tmp/foobar
$ git init
Initialized empty Git repository in /tmp/foobar/.git/

Create a few changesets, each a set of modified files, in the master branch. We're using the naming convention of [branch name]c[sequence] for each file name that represents a changeset.
$ touch mc1; git add mc1; git commit -m "mc1"
$ touch mc2; git add mc2; git commit -m "mc2"
$ touch mc3; git add mc3; git commit -m "mc3"

Visualize our changes so far using the alias we've created.
$ git ll
* 7665913 (HEAD, master) mc3
* 9ef4878 mc2
* 2f8d692 mc1

Scenario 1 : Squashing Local Commits

Imagine that you want to add a new feature. Surely you're going to create a new branch, let's call it new-feature, and work on it locally (on your development machine). Let's try that.
$ git checkout -b new-feature
Switched to a new branch 'new-feature'

Check our log and the available branches. If you notice, the current HEAD, the new-feature branch, and the master branch all point to the same hash.
$ git ll
* 7665913 (HEAD, new-feature, master) mc3
* 9ef4878 mc2
* 2f8d692 mc1

$ git branch -a
master
* new-feature

A feature is like a task that we can further break down into sub-tasks. Also, it is good practice to commit early and commit often, as you can break a problem down into a set of smaller problems and tackle them one by one.

Let's try to simulate that in the new-feature branch. Each nfX is a sub-task needed to implement the new feature.
$ touch nf1; git add nf1; git commit -m "nf1"
$ touch nf2; git add nf2; git commit -m "nf2"
$ touch nf3; git add nf3; git commit -m "nf3"
$ touch nf4; git add nf4; git commit -m "nf4"
$ touch nf5; git add nf5; git commit -m "nf5"

Check the history log again. The new-feature branch is ahead of the master branch by 5 commits.
$ git ll
* 466b238 (HEAD, new-feature) nf5
* 61f6e91 nf4
* 7f80d86 nf3
* bb93e3a nf2
* 65d8d8a nf1
* 7665913 (master) mc3
* 9ef4878 mc2
* 2f8d692 mc1

Instead of merging all those sub-task commits (useful to you but not to others) into the main branch, a better approach is to squash or consolidate them all into one single commit through git rebase.
# last 5 commits
$ git rebase -i HEAD~5

# if the master branch or other branches are behind your new-feature branch
$ git rebase -i master

The previous command will start the interactive mode for us to squash all our related commits and group them into one.
pick d69307e nf1
pick 4e9cd86 nf2
pick 6449f6a nf3
pick 6acfd6d nf4
pick f29e1db nf5

# Rebase 1245945..f29e1db onto 1245945
#
# Commands:
#  p, pick = use commit
#  r, reword = use commit, but edit the commit message
#  e, edit = use commit, but stop for amending
#  s, squash = use commit, but meld into previous commit
#  f, fixup = like "squash", but discard this commit's log message
#  x, exec = run command (the rest of the line) using shell
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out

Rearrange and amend the necessary actions for these related commits.
pick f29e1db nf5
squash 6acfd6d nf4
squash 6449f6a nf3
squash 4e9cd86 nf2
squash d69307e nf1

The next step is to summarize and rewrite all the commit messages as shown below.
# This is a combination of 5 commits.
# The first commit's message is:
nf5

# This is the 2nd commit message:

nf4

# This is the 3rd commit message:

nf3

# This is the 4th commit message:

nf2

# This is the 5th commit message:

nf1

# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# HEAD detached from 1245945
# You are currently editing a commit while rebasing branch 'new-feature' on '1245945'.
#
# Changes to be committed:
#   (use "git reset HEAD^1 ..." to unstage)
#
#       new file:   nf1
#       new file:   nf2
#       new file:   nf3
#       new file:   nf4
#       new file:   nf5
#

Here, we just summarize it as
implement new-feature 

# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# HEAD detached from 1245945
# You are currently editing a commit while rebasing branch 'new-feature' on '1245945'.
#
# Changes to be committed:
#   (use "git reset HEAD^1 ..." to unstage)
#
#       new file:   nf1
#       new file:   nf2
#       new file:   nf3
#       new file:   nf4
#       new file:   nf5
#

Once successful, Git will show the result of the rebase.
[detached HEAD 82c66c9] implement new-feature
5 files changed, 0 insertions(+), 0 deletions(-)
create mode 100644 nf1
create mode 100644 nf2
create mode 100644 nf3
create mode 100644 nf4
create mode 100644 nf5
Successfully rebased and updated refs/heads/new-feature.

Check our history log again. Notice that all those commits from nf1 till nf5 have been squashed or combined into the new commit 82c66c9, and the new-feature branch is ahead of the master branch by 1 commit. Basically, we're using rebase to maintain a linear history.
$ git ll
* 82c66c9 (HEAD, new-feature) implement new-feature
* 1245945 (master) mc3
* 2e803fb mc2
* 885e8be mc1

Last step, merge our new feature into the master branch.
$ git checkout master

$ git merge new-feature
Updating 1245945..82c66c9
Fast-forward
nf1 | 0
nf2 | 0
nf3 | 0
nf4 | 0
nf5 | 0
5 files changed, 0 insertions(+), 0 deletions(-)
create mode 100644 nf1
create mode 100644 nf2
create mode 100644 nf3
create mode 100644 nf4
create mode 100644 nf5

Checking our history log again.
$ git ll
* 82c66c9 (HEAD, new-feature, master) implement new-feature
* 1245945 mc3
* 2e803fb mc2
* 885e8be mc1

Scenario 2 : Keeping Your Local Branch Up-to-date

If you noticed in Scenario 1, the master branch stayed stagnant without any additional commits. What if, while we're developing on the branch, other commits are merged into the master branch, such as other features or a hotfix?

Let's try again, but this time, we're going to create a hotfix branch and add a sample commit to fix an issue. Our commit in the hotfix branch is currently the HEAD and is ahead of the master branch by 1 commit.
$ git checkout -b hotfix
Switched to a new branch 'hotfix'

$ touch hf1; git add hf1; git commit -m "hf1"

$ git ll
* f229ff9 (HEAD, hotfix) hf1
* 82c66c9 (new-feature, master) implement new-feature
* 1245945 mc3
* 2e803fb mc2
* 885e8be mc1

During that period, there are some changes committed to the master branch. Let's add a few commits to it as well. Checking our commit log again, you'll notice a divergence between the hotfix and master branches. In other words, we have a forked commit history.
$ git checkout master
$ touch mc4; git add mc4; git commit -m "mc4"
$ touch mc5; git add mc5; git commit -m "mc5"

$ git ll
* bbb1a2b (HEAD, master) mc5
* 4472d3e mc4
| * f229ff9 (hotfix) hf1
|/  
* 82c66c9 (new-feature) implement new-feature
* 1245945 mc3
* 2e803fb mc2
* 885e8be mc1

Before we proceed with any merging or rebasing, please make a copy of the current foobar folder. We're going to show the difference between using rebase and not using rebase.
$ cp -rv /tmp/foobar /tmp/foobar.orig

First, we try the merge without using rebase. After merging, we're going to add one additional commit to make our commit log more meaningful.
$ git checkout master
Switched to branch 'master'

$ git merge hotfix
Merge made by the 'recursive' strategy.
hf1 | 0
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 hf1

$ touch mc6; git add mc6; git commit -m "mc6"

Pay attention to the commit log, which we're going to compare against the rebase method. Notice the additional merge commit 15ea73b as well as the forked history.
$ git ll
* 40fdd57 (HEAD, master) mc6
*   15ea73b Merge branch 'hotfix'
|\  
| * f229ff9 (hotfix) hf1
* | bbb1a2b mc5
* | 4472d3e mc4
|/  
* 82c66c9 (new-feature) implement new-feature
* 1245945 mc3
* 2e803fb mc2
* 885e8be mc1

Next, restore the last snapshot of the repo, taken before merging the hotfix branch.
$ rm -rf /tmp/foobar
$ cp -rv /tmp/foobar.orig /tmp/foobar
$ cd /tmp/foobar

Continue with the merge, this time using rebase.
$ git checkout hotfix
Switched to branch 'hotfix'

$ git rebase master
First, rewinding head to replay your work on top of it...
Applying: hf1

$ git ll
* cfd2dae (HEAD, hotfix) hf1
* bbb1a2b (master) mc5
* 4472d3e mc4
* 82c66c9 (new-feature) implement new-feature
* 1245945 mc3
* 2e803fb mc2
* 885e8be mc1

$ git checkout master
Switched to branch 'master'

$ git merge hotfix
Updating bbb1a2b..cfd2dae
Fast-forward
hf1 | 0
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 hf1

$ touch mc6; git add mc6; git commit -m "mc6"

Compared to the non-rebase merge, we obtain a linear history graph without the additional merge commit, and no forked history log either.
$ git ll
* e0615b2 (HEAD, master) mc6
* f8df51d (hotfix) hf1
* bbb1a2b mc5
* 4472d3e mc4
* 82c66c9 (new-feature) implement new-feature
* 1245945 mc3
* 2e803fb mc2
* 885e8be mc1

Comparing both history logs, with and without rebase, I think I finally grok the need for Git rebase compared to typical merging.