
This Week I Learned - 2017 Week 21

Previous week post or the whole series.

If you cannot keep your habit in a consistent manner, you will need to readjust the minimum goal of the habit until there are no more excuses for you not to do it. It is as simple as that.

The second week of eating dinner before 7pm has indeed brought significant changes. Together with consistent meditation and healthier food choices, I was surprised to find that I've lost some weight. However, all this lost weight may just be water weight.

#1 Well said. Well said.
"Don’t confuse privacy with secrecy. I know what you do in the bathroom, but you still close the door. That’s because you want privacy, not secrecy."
#2 Interesting that it's not just me who has been doing TILs or keeping a developer journal. While some store their TILs in GitHub repositories, mine is just a weekly collection of blog posts. Either way, keeping a journal is always a good habit for anyone practicing their craft.

#3 There are quite a few complementary Docker utilities that help to improve your Docker usage experience.

#4 Tracing in GNU/Linux. Always an interesting topic to explore, especially coming from Brendan Gregg.

#5 Managing Git merge conflicts? git-mediate seems like a good tool to ease the pain of resolving them. I now finally grok how a three-way merge works.
  • HEAD - Your changes.
  • BASE - Code before your changes and other branches.
  • OTHERS - Code with other changes that are going to be merged into your branch.
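The three parts above can be seen side by side by setting merge.conflictStyle to diff3, which adds the BASE section (marked with |||||||) to the conflict markers. A throwaway-repo sketch, with all branch and file names made up:

```shell
#!/bin/sh
# Throwaway repo to show HEAD / BASE / OTHERS in one conflict.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q .
git config user.email you@example.com
git config user.name you
git config merge.conflictStyle diff3     # include the BASE section
main=$(git symbolic-ref --short HEAD)    # master or main, depending on Git
echo base > file; git add file; git commit -qm base
git checkout -qb other
echo theirs > file; git commit -qam theirs
git checkout -q "$main"
echo ours > file; git commit -qam ours
git merge other || true                  # conflict expected
cat file                                 # HEAD, ||||||| BASE, =======, OTHERS
```

The file now contains all three versions, so a tool like git-mediate (or your own eyes) can tell which side changed what relative to BASE.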
#6 Merge with squash. Good to know if you want to do lots of branching.
  • Put the to-be-squashed commits on a working branch (if they aren't already) -- use gitk for this
  • Check out the target branch (e.g. 'master')
  • git merge --squash (working branch name)
  • git commit
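The four steps above, replayed in a scratch repo (branch and file names here are just examples):

```shell
#!/bin/sh
# Squash-merge demo: two commits on a working branch land as ONE commit
# on the target branch.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q .
git config user.email you@example.com
git config user.name you
main=$(git symbolic-ref --short HEAD)   # master or main, depending on Git
git commit -q --allow-empty -m init
git checkout -qb working
echo one > a; git add a; git commit -qm one
echo two > b; git add b; git commit -qm two
git checkout -q "$main"
git merge --squash working              # stages the combined change, no commit yet
git commit -qm 'squash: one and two'
git log --oneline                       # init plus a single squashed commit
```

Note that --squash only stages the result; the final `git commit` is still up to you, which is exactly step four above.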

This Week I Learned - 2016 Week 14

Last week post or the whole series.

#1 Replace Git Bash with MinTTY. Even though you can run Bash on Ubuntu on Windows right now, the most acceptable way (without using the dreadful Windows command line) before this was through Cygwin and MinTTY. Don't like MinTTY? Well, you have Babun and MSYS2, both based on Cygwin. But still, nothing beats a Vagrant-emulated environment.

#2 12 years, 12 lessons working at ThoughtWorks. (HN thread, Reddit thread) Some beg to differ. His retrospective team approach, especially the four key questions, should be applied by any software team. Note that ThoughtWorks is both a software house and a consulting firm.

#3 BPF Compiler Collection. Efficient Linux kernel tracing and performance analysis. You can read the docs and try it out. Only for Linux kernel 4.1 and above though. It complements Brendan Gregg's Linux performance material, but with a different approach.

#4 Bret Victor's bookshelf. Some people are just prolific book readers. I always love his idea of reactive documents, an implementation of his concept of Explorable Explanations.

#5 Startups in Montréal. E14N is the only one I'm aware of. Anyway, the discussion at HN about the place is far more interesting. Language racism there is alive and well, culturally and systematically forced upon you.

#6 Effective code review, or fault finding and blaming? Why do you need code review in the first place if trivial matters such as coding conventions still cannot be properly enforced? Note that tools exist to fix most of these issues, and rectifying them is a no-brainer (it's just a command away). The root cause is a lack of a healthy culture that values quality, in favour of faster delivery. Or maybe the software industry itself does not promote integrity (Lobsters thread)? Or maybe we applied the wrong approach?

#7 perlootut - Object-Oriented Programming in Perl Tutorial. Holy macaroni! I never realized that Perl's built-in object-oriented features are so limited. In other words, an object in Perl is a glorified hash. Yes, you have to write your own classes from scratch!

#8 How to start gnome-terminal in fullscreen. Nobody bothered to enable this feature as a sensible default, and you have to resort to multiple workarounds to get it working. While I can understand reducing the UI clutter (or dumbing down) in GNOME, does nobody actually use gnome-terminal in fullscreen mode? It seems that GKH also has issues with gnome-terminal itself.

This Week I Learned - 2016 Week 08

Last week post.

#1 NameError: name 'basestring' is not defined. Surprisingly, there are still conflicts with Ansible when it is installed through pip under both Python 2 and Python 3.

#2 GNU/Linux Performance. Poster of tools you can use to investigate performance issues with your system.

#3 Container as Python module. (HN discussion) Interesting concept indeed. I've been looking at Docker for the past three weeks and this is probably the most interesting use of containers. It's useful when you want to build an actual test environment for your Python apps or scripts. Instead of mock objects, you can test against the actual system, for example, a real database.

#4 Xamarin sold to Microsoft. (HN discussion) What took them so long? I read somewhere (can't remember where) that it was sold for 400 million. Interesting to see how this unfolds in the future.

#5 Non-Zero Day. (HN discussion) An effective way to build a new habit through the chain method, or streaks. No, Jerry Seinfeld did not create the Seinfeld productivity program. For me, it's almost-daily Git commits. You have to get started on something; take the baby step.

Experience on Setting Up Alpine Linux

Starting out as one of the little-known GNU/Linux distros, Alpine Linux has gained a lot of traction due to its featureful yet tiny size and the emergence of Linux container implementations like Docker and LXC. Although I came across it numerous times while testing out Docker and LXC, I didn't pay much attention until recently while troubleshooting LXD. To summarize: I really like the minimalist approach of Alpine Linux, as for server or hardware appliance usage, nothing beats the simple, direct approach.

My setup is based on an LXC container in Fedora 23. Unfortunately, you still can't create unprivileged containers in Fedora. Hence, I have no choice but to do everything as the root user. Not the best outcome, but I can live with that. Setup and creation are pretty much straightforward thanks to this guide. The steps are as follows:

Install the necessary packages and make sure the lxcbr0 bridge interface is up.
$ sudo dnf install lxc lxc-libs lxc-extra lxc-templates
$ sudo systemctl restart lxc-net
$ sudo systemctl status lxc-net
$ ifconfig lxcbr0

Create our container. By default, LXC will download the apk package manager binary and all necessary default packages to create the container. Start the 'test-alpine' container once it has been set up successfully.
$ sudo lxc-create -n test-alpine -t alpine
$ sudo lxc-start -n test-alpine

Access the container through the console and press 'Enter'. Log in as the 'root' user, which has no password; just press enter. Note that to exit from the console, press 'Ctrl+q'.
$ sudo lxc-console -n test-alpine

Next, bring up the eth0 interface so we can obtain an IP address and connect to the Internet. Check your eth0 network interface once done. Instead of SysV init or systemd, Alpine Linux uses OpenRC as its default init system. I had a hard time adjusting from SysV to systemd and am glad Alpine Linux did not jump on the systemd bandwagon.
test-alpine:~# rc-service networking start
 * Starting networking ... *   lo ...ip: RTNETLINK answers: File exists
 [ !! ]
 *   eth0 ... [ ok ]

test-alpine:~# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:16:3E:6B:F7:8B  
          inet addr:  Bcast:  Mask:
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1562 (1.5 KiB)  TX bytes:1554 (1.5 KiB)

Next, configure our system. Similar to Debian's dpkg-reconfigure, Alpine has a list of setup commands to configure your system. However, I prefer the consistent and sensible naming used here. This is something that other GNU/Linux distros should follow. I'm looking at you, CentOS/Red Hat/Fedora.
test-alpine:~# setup-
setup-acf        setup-bootable         setup-hostname      setup-mta     setup-timezone
setup-alpine     setup-disk             setup-interfaces    setup-ntp     setup-xen-dom0
setup-apkcache   setup-dns              setup-keymap        setup-proxy   setup-xorg-base
setup-apkrepos   setup-gparted-desktop  setup-lbu           setup-sshd

Next, set up the package repository and let the system pick the fastest mirror. I like that we can pick the fastest mirror from the console, which is something impossible to do in Debian/Ubuntu.
# setup-apkrepos


r) Add random from the above list
f) Detect and add fastest mirror from above list
e) Edit /etc/apk/repositores with text editor

Enter mirror number (1-18) or URL to add (or r/f/e/done) [f]: 
Finding fastest mirror... 
ERROR: No such file or directory
ERROR: network error (check Internet connection and firewall)
Added mirror
Updating repository indexes... done.

Update our system. Even though there are more than five thousand packages, it is still not comparable to Debian's massive list of available packages. But this is understandable given the small number of contributors and their limited free time.
test-alpine:~# apk update
v3.2.3-104-g838b3e3 []
v3.2.3-104-g838b3e3 []
OK: 5289 distinct packages available

Let's continue by installing a software package. We'll use the Git version control system as our example. Installation is straightforward, with enough details shown.
test-alpine:~# apk add git
(1/13) Installing run-parts (4.4-r0)
(2/13) Installing openssl (1.0.2d-r0)
(3/13) Installing lua5.2-libs (5.2.4-r0)
(4/13) Installing lua5.2 (5.2.4-r0)
(5/13) Installing ncurses-terminfo-base (5.9-r3)
(6/13) Installing ncurses-widec-libs (5.9-r3)
(7/13) Installing lua5.2-posix (33.3.1-r2)
(8/13) Installing ca-certificates (20141019-r2)
(9/13) Installing libssh2 (1.5.0-r0)
(10/13) Installing curl (7.42.1-r0)
(11/13) Installing expat (2.1.0-r1)
(12/13) Installing pcre (8.37-r1)
(13/13) Installing git (2.4.1-r0)
Executing busybox-1.23.2-r0.trigger
Executing ca-certificates-20141019-r2.trigger
OK: 23 MiB in 28 packages

So far, I love the simplicity provided by Alpine Linux. There will be more posts on this tiny distro in the coming months. Stay tuned.

Swap Space Usage in GNU/Linux

When your system has used up all its physical memory, it will move some of the inactive pages to the swap space, which is either a partition or a file in your storage. This works as a backup mechanism to keep your system running continuously, even though performance will take a hit. You will notice that the system feels sluggish when there is a lot of swapping going on in the background.

I found this Bash script, swaptop, which lists the swap usage of each process. Running it on my current system showed a very interesting result.
$ ./swaptop | head -n 10
  188828  python 5669
  112844  chrome 3080
   67032  chrome 3617
   56312  gnome-shell 1884
   52752  chrome 2994
   44880  chrome 3031
   40784  gnome-software 2710
   37124  evolution-calen 2720
   34292  chrome 3124
   32804  packagekitd 1960
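swaptop itself is a third-party script, but the same numbers can be pulled straight out of /proc. A rough sketch of the idea, reading the VmSwap field of each process (it prints nothing on a box with no swap in use):

```shell
#!/bin/sh
# Per-process swap usage from the VmSwap field of /proc/<pid>/status
# (values in kB), largest first -- roughly what swaptop reports.
swap_usage() {
  for d in /proc/[0-9]*; do
    kb=$(awk '/^VmSwap:/ {print $2}' "$d/status" 2>/dev/null)
    if [ -n "$kb" ] && [ "$kb" -gt 0 ]; then
      printf '%8s  %s %s\n' "$kb" "$(cat "$d/comm" 2>/dev/null)" "${d#/proc/}"
    fi
  done | sort -rn | head
}
swap_usage
```

The 2>/dev/null guards are there because processes can exit between the directory listing and the read.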

What is that Python process with the highest usage? Why has RabbitVCS' checkerservice.pyc used up so much virtual memory? The result shows that, while inactive, it consumes up to 334MB of RAM. No wonder my system felt sluggish; I had just installed it the day before to test it out.
$ ps -a -o pid,size,args | grep 5669
 5669 334868 /usr/bin/python /usr/lib/python2.7/site-packages/rabbitvcs/services/checkerservice.pyc
31581   352 grep --color=auto 5669

Remove all RabbitVCS-related packages and kill the running checkerservice.pyc process.
$ sudo dnf remove rabbitvcs*
$ sudo kill -9 5669

Moving on to the second process, evolution-calen, which I assume is the calendaring service for Evolution. Let's find out the exact command parameters.
$ ps -a -o pid,size,args | grep 2720
 2720 403132 /usr/libexec/evolution-calendar-factory
 2770 517072 /usr/libexec/evolution-calendar-factory-subprocess --bus-name org.gnome.evolution.dataserver.Subprocess.Backend.Calendarx2720x2 --own-path /org/gnome/evolution/dataserver/Subprocess/Backend/Calendar/2720/2
 2780 443204 /usr/libexec/evolution-calendar-factory-subprocess --bus-name org.gnome.evolution.dataserver.Subprocess.Backend.Calendarx2720x3 --own-path /org/gnome/evolution/dataserver/Subprocess/Backend/Calendar/2720/3
32125   352 grep --color=auto 2720

Let's find out the RPM package name that provides this file.
$ dnf provides /usr/libexec/evolution-calendar-factory
Last metadata expiration check performed 2 days, 4:35:01 ago on Tue May  5 02:27:11 2015.
evolution-data-server-3.16.1-1.fc22.x86_64 : Backend data server for Evolution
Repo        : @System

Unfortunately I can't remove this package, as quite a few essential packages depend on it, for example GNOME Shell. Uninstalling it would leave us with a partially broken GNOME desktop.

Diff and Merge N Number of Files

I was stuck with this problem yesterday where I needed to compare more than three files, as shown below. That hit the limit of Meld, my favourite visual tool for diffing and merging code. I was intrigued and decided to check the limits of the various diff tools in the GNU/Linux world.

Before that, let's create a number of files to be compared.
$ for i in {1..6}; do seq 0 $i | xargs printf "%06d\n" > $i.txt; done

So right now we have six plain text files; the content of one of them is shown below.
$ ls *.txt
1.txt  2.txt  3.txt  4.txt  5.txt  6.txt
$ cat 6.txt
000000
000001
000002
000003
000004
000005
000006

First, the default diff console tool. Nope, a maximum of two files only.
$ diff {1..6}.txt
diff: extra operand '3.txt'
diff: Try 'diff --help' for more information.

$ diff {1..2}.txt
> 000002

Next, let's try vimdiff. As you can see from the result and screenshot below, although six files are shown, only four can be diffed.
$ vimdiff {1..6}.txt
"6.txt" 7L, 49C
E96: Can not diff more than 4 buffers
E96: Can not diff more than 4 buffers
Press ENTER or type command to continue

Can my favourite Meld work with more than three files? Nope.
$ meld {1..6}.txt
  meld                                    Start with an empty window
  meld <file|folder>                      Start a version control comparison
  meld <file> <file> [<file>]             Start a 2- or 3-way file comparison
  meld <folder> <folder> [<folder>]       Start a 2- or 3-way folder comparison

Error: too many arguments (wanted 0-3, got 6)

How about xxdiff? Nope again; the maximum is three files, but the error message is way more informative.
$ xxdiff {1..6}.txt                                                                                                                
xxdiff (cmdline.cpp:762): 
You can specify at most 3 filenames.
Extra arguments: " 4.txt 5.txt 6.txt"
Use 'xxdiff --help' for more information.

Diffuse? A new tool that I've only recently become aware of. It seems we have a winner here!

Kdiff3? It totally ignores the remaining three files, with no message whatsoever.
Kompare? The program won't even start properly.

Resetting File or Folder Permissions Using Yum

While setting group file or folder permissions and ownership on /var/www, we may sometimes accidentally update the wrong folder, such as the parent folder /var instead of /var/www.

In order to restore the default file or folder permissions in an RPM-based system, there is a built-in option to revert the changes quickly, compared to DEB-based systems. Yup, this is probably one of the missing features if we compare both packaging systems.

First, let's find the RPM package name that contains the /var/www/html folder.

Using the rpm command.
$ time rpm -qf /var/www/html
real    0m0.025s
user    0m0.018s
sys     0m0.006s

Using the yum command, which gave us four packages and took one-plus BLOODY minutes.
$ time yum whatprovides '/var/www/html'
real    1m23.865s
user    0m19.660s
sys     0m0.901s

Now that is something we can improve by using the cached result via the -C option. Let's try again. But then again, the results are still not entirely accurate.
$ time yum -C whatprovides '/var/www/html'
real    0m0.350s
user    0m0.257s
sys     0m0.050s

$ ls -ld /var/www/html/
drwxr-sr-x 1 root apache 40 Oct  4 15:05 /var/www/html/

Unfortunately, yum itself does not support reverting the ownership and permissions of installed files or folders, so we fall back to rpm.

Reset the ownership:
$ sudo rpm --setugids httpd
$ ls -ld /var/www/html
drwxr-sr-x 1 root root 40 Oct  4 15:05 /var/www/html/

However, resetting the permissions with --setperms does not seem to remove the setgid flag. Weird. Unfortunately, I can't google any good explanation for this problem.
$ sudo rpm --setperms httpd
$ ls -ld /var/www/html
drwxr-sr-x 1 root root 40 Oct  4 15:05 /var/www/html/

Setting Apache Document Root With setgid

When you don't understand or remember the fundamentals of GNU/Linux file system permissions, you'll tend to do things in an unproductive way. For example, repeatedly and explicitly updating the /var/www folder's file permissions for Apache's group (www-data or apache).

The proper, alternative, and convenient way of setting the web root (/var/www) permissions is as follows:

Set the folder permissions to Apache's user group.
$ sudo chgrp apache /var/www -R
$ sudo chmod 775 /var/www -R
$ sudo chmod g+s /var/www

Allow $USER full control of the web root.
$ sudo usermod -aG apache $USER
$ sudo chown $USER /var/www/

Now, the long grandma story. By default, the file permissions of Apache's web root directory in CentOS or Fedora are readable by all but writable only by the root user.
$ ls -l /var/www/
total 0
drwxr-xr-x 1 root root 0 Jul 23 06:31 cgi-bin/
drwxr-xr-x 1 root root 6 Oct 4 14:30 html/

Change the folder group ownership to the apache group so we can install and run any web application using that group. Otherwise, most web applications will complain about write permissions to the folder, especially for file uploads.
$ sudo chgrp apache /var/www -R
$ ls -l /var/www/
total 0
drwxr-xr-x 1 root apache 0 Jul 23 06:31 cgi-bin/
drwxr-xr-x 1 root apache 6 Oct 4 14:30 html/

Even though we've set the group ownership to apache, any new file or folder created will still default to the root group, as we're using the sudo command.
$ sudo mkdir /var/www/html/foo.d
$ sudo touch /var/www/html/foo.f
$ ls -l /var/www/html/
total 0
drwxr-xr-x 1 root root 0 Oct 4 15:03 foo.d/
-rw-r--r-- 1 root root 0 Oct 4 15:03 foo.f

Hence, in order to retain or inherit the group id (apache) of the parent folder /var/www, we have to use setgid [4].
$ sudo chmod g+s /var/www/html/

Another way of setting the folder permissions, using the numerical method:
$ sudo chmod 2775 /var/www/html -R
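The numeric mode is easy to check on a scratch directory you own, without touching /var/www or needing root:

```shell
#!/bin/sh
# Mode 2775: the leading 2 is the setgid bit, shown as 's' in the
# group triad of the symbolic form.
d=$(mktemp -d)
chmod 2775 "$d"
stat -c '%a %A' "$d"     # e.g. "2775 drwxrwsr-x"
mkdir "$d/sub"           # on Linux, new entries inherit the directory's
stat -c '%A' "$d/sub"    # group (and new subdirs typically the setgid bit)
```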

Notice the 's' flag on the group permissions.
$ ls -ld /var/www/html
drwxr-sr-x 1 root apache 20 Oct 4 15:04 /var/www/html/

Create another folder and file in /var/www/html again. Notice that the new entries inherit the group id of the parent folder.
$ sudo mkdir /var/www/html/bar.d
$ sudo touch /var/www/html/bar.f
$ ls -ltU /var/www/html
total 0
drwxr-xr-x 1 root root 0 Oct 4 15:03 foo.d/
-rw-r--r-- 1 root root 0 Oct 4 15:03 foo.f
drwxr-sr-x 1 root apache 0 Oct 4 15:05 bar.d/
-rw-r--r-- 1 root apache 0 Oct 4 15:05 bar.f

Using the namei command to show the permissions for each component in the file path.
$ namei -l /var/www/html/foo.d/
f: /var/www/html/foo.d/
drwxr-xr-x root root /
drwxr-xr-x root root var
drwxr-xr-x root apache www
drwxr-sr-x root apache html
drwxr-xr-x root root foo.d

$ namei -l /var/www/html/bar.d
f: /var/www/html/bar.d
drwxr-xr-x root root /
drwxr-xr-x root root var
drwxr-xr-x root apache www
drwxr-sr-x root apache html
drwxr-sr-x root apache bar.d

Namei: File Permissions Listing Tool

While setting up the Apache web server, occasionally we'll encounter file permission issues in the document root folder, especially when symlinks are involved. This console app, namei, which is part of util-linux, provides a quick view of the file permissions of each component of the resolved full path. Quite a useful tool, especially for those who have just ventured into GNU/Linux and haven't fully grasped the file permissions in the system.

For example, listing the file permissions of each folder and file along the path to an SSH private key.
$ namei -l ~/.ssh/id_rsa 
f: /home/ang/.ssh/id_rsa
drwxr-xr-x root root /
drwxr-xr-x root root home
drwx------ ang ang ang
drwx------ ang ang .ssh
-rw------- ang ang id_rsa

First, let's create a sample symlink.
$ sudo ln -s /tmp /var/www/html/tmp

Tracing to the endpoint of the symlink.
$ namei /var/www/html/tmp
f: /var/www/html/tmp
d /
d var
d www
d html
l tmp -> /tmp
d /
d tmp

Similarly, but showing the permissions of each component of the resolved full path.
$ namei -l /var/www/html/tmp
f: /var/www/html/tmp
drwxr-xr-x root root /
drwxr-xr-x root root var
drwxr-xr-x root root www
drwxr-xr-x root root html
lrwxrwxrwx root root tmp -> /tmp
drwxr-xr-x root root /
drwxrwxrwt root root tmp

Find and Delete All Duplicate Files

I was asked this question today but couldn't think of a quick answer. The typical manual solution is to compare the file size and the file content through hashing or a checksum.

It seems there are quite a number of duplicate file finder tools, but we will try a console tool called fdupes. Typical usage of this program follows.

1. Install the program.
$ sudo apt-get install fdupes

2. Create sample testing files.
$ cd /tmp
$ wget -O a.jpg
$ cp a.jpg b.jpeg
$ touch c.jpg d.jpeg

3. Show all duplicate files.
$ fdupes -r  .


4. Show all duplicate files but omit the first file of each set.
$ fdupes -r -f .


5. Similar to step 4, but delete the duplicates and keep one copy.
$ fdupes -r -f . | grep -v '^$' | xargs rm -v
removed `./b.jpeg'                      
removed `./d.jpeg'

On a similar note, there is an interesting read on the optimized approach taken by Dupseek, an app that finds duplicate files. The main strategy is to group files by size, start comparing them set by set, and ignore any set with just one file.
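That size-first strategy is small enough to sketch in shell, assuming GNU find and md5sum and newline-free file names (the find_dupes helper name is mine):

```shell
#!/bin/sh
# Size-first duplicate finder: checksum only files whose size occurs
# more than once, then report groups sharing the same MD5.
find_dupes() {
  sizes=$(mktemp)
  find "$1" -type f -printf '%s\t%p\n' | sort -n > "$sizes"
  # Second pass: keep only paths whose size count is > 1, then hash them.
  awk -F'\t' 'NR==FNR {c[$1]++; next} c[$1] > 1 {print $2}' "$sizes" "$sizes" \
    | xargs -d '\n' -r md5sum | sort | uniq -w32 --all-repeated=separate
  rm -f "$sizes"
}
```

Running `find_dupes ~/Pictures` would then print groups of identical files separated by blank lines; unique-sized files never get hashed at all, which is the whole point of the optimization.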

Unfortunately, I had a hard time understanding the Perl code. The closest and simplest implementation I could find is the tiny find-duplicates Python script.

Finding and Deleting Files, xargs rm vs find -delete

An interesting comparison of finding and deleting files using both the xargs and find commands.

Create 100k files of 10 bytes each.
$ mkdir /tmp/test
$ cd /tmp/test
$ dd if=/dev/zero of=masterfile bs=1 count=1000000
$ split -b 10 -a 10 masterfile

Using xargs.
$ time find -name 'xaa*' -print0 | xargs -0 rm
real    0m7.667s
user    0m1.112s
sys     0m6.491s

Using find with -delete option.
$ time find -name 'xaa*' -delete
real    0m7.252s
user    0m0.954s
sys     0m6.023s

A time difference of 0.415s, which is insignificant. However, the -delete method is way easier to remember.

Launch Default Web Browser From Console

Interesting; I never realized there are so many ways. I should update the way I write step-by-step guides.

1. sensible-browser
$ sensible-browser
$ man sensible-browser | grep DESCRIPTION -A 3
sensible-editor, sensible-pager and sensible-browser make sensible decisions on which editor, pager, and web browser to call, respectively. Programs in Debian can use these scripts as their default editor, pager, or web browser or emulate their behavior.

2. xdg-open
$ xdg-open
$ man xdg-open | grep DESCRIPTION -A 3
xdg-open opens a file or URL in the user's preferred application. If a URL is provided the URL will be opened in the user's preferred web browser. If a file is provided the file will be opened in the preferred application for files of that type. xdg-open supports file, ftp, http and https URLs.

3. x-www-browser
$ x-www-browser
$ man x-www-browser
No manual entry for x-www-browser
See 'man 7 undocumented' for help when manual pages are not available.

$ ls -l `which x-www-browser`
lrwxrwxrwx 1 root root 31 Feb 22 21:08 /usr/bin/x-www-browser -> /etc/alternatives/x-www-browser

$ ls -l /etc/alternatives/x-www-browser
lrwxrwxrwx 1 root root 29 Feb 23 04:10 /etc/alternatives/x-www-browser -> /usr/bin/google-chrome-stable

No wonder: it is the Debian alternatives system.
$ sudo update-alternatives --list x-www-browser

4. gnome-open
$ gnome-open
$ man gnome-open | grep DESCRIPTION -A 2
This program opens files using file handlers configured in GNOME.

Unexpected Inconsistency: Run fsck Manually

I was surprised to see this when I booted up my workstation upon arriving home. I booted up my lappy and googled for an answer, then keyed in the password and ran the fsck command manually.
$ fsck /dev/sda

Rebooted the machine. Same message again. Then I suddenly noticed that the system date was showing the year 2011, and realized that I had taken out the motherboard's battery but forgot to reconfigure the system date and time.

Restarted again into the BIOS settings, updated the system date and time, and everything worked again.

Next step, update your clock accurately.
$ sudo apt-get install ntpdate
$ sudo ntpdate pool.ntp.org
$ date

Using GNU Stow to Manage Your Dotfiles

As you know, as an avid console user for many years, you are your dotfiles. My current setup for managing these dotfiles is a combination of Homesick and Git, or specifically GitHub.

However, as Homesick itself is a Ruby gem, the dependency on Ruby is unnecessarily heavy and wasteful. After reading Brandon Invergo's experience using GNU Stow, a symlink farm manager, to manage dotfiles, I was tempted to give it a try. In short, the program itself is more lightweight, portable, and simpler.

First, download and install the program.
$ sudo apt-get install stow

Next, we're going to set up our dotfiles per application. Just create the parent folder (dotfiles), a sample package folder (git), and the dotfile we want (.gitconfig).
$ mkdir -p ~/dotfiles/git
$ touch ~/dotfiles/git/.gitconfig

$ tree -a ~/dotfiles/
└── git
└── .gitconfig

Go to our parent folder (dotfiles) and create the symlinks with the stow command (shown here with -vv for verbose output). That's it!
$ cd ~/dotfiles
$ stow -vv git
stow dir is /home/kianmeng/dotfiles
stow dir path relative to target /home/kianmeng is dotfiles
Planning stow of package git...
LINK: .gitconfig => dotfiles/git/.gitconfig
Planning stow of package git... done
Processing tasks...
Processing tasks... done

$ ls -l ~/.gitconfig
lrwxrwxrwx 1 kianmeng kianmeng 23 Mac   8 15:47 /home/kianmeng/.gitconfig -> dotfiles/git/.gitconfig
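Under the hood there is no magic: for a layout like this, Stow just plants one relative symlink per file, pointing from the target directory back into the package directory. The equivalent by hand, sketched on scratch paths rather than your real $HOME:

```shell
#!/bin/sh
# What `stow git` boils down to for a one-file package: a relative
# symlink in the target dir pointing back into the package dir.
set -e
home=$(mktemp -d)                  # stand-in for $HOME
mkdir -p "$home/dotfiles/git"
touch "$home/dotfiles/git/.gitconfig"
cd "$home"
ln -s dotfiles/git/.gitconfig .gitconfig
ls -l .gitconfig
```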

If the dotfile you're trying to symlink already exists, Stow will complain. Let's illustrate this.
$ touch ~/.dummy
$ touch ~/dotfiles/git/.dummy
$ cd ~/dotfiles
$ stow -vv git
stow dir is /home/kianmeng/dotfiles
stow dir path relative to target /home/kianmeng is dotfiles
Planning stow of package git...
CONFLICT when stowing git: existing target is neither a link nor a directory: .dummy
--- Skipping .gitconfig as it already points to dotfiles/git/.gitconfig
Planning stow of package git... done
WARNING! stowing git would cause conflicts:
* existing target is neither a link nor a directory: .dummy
All operations aborted.

To clean up, remove the conflicting file and unstow the package:
$ rm -rf ~/dotfiles/git/.dummy
$ stow -vvD git
stow dir is /home/kianmeng/dotfiles
stow dir path relative to target /home/kianmeng is dotfiles
Planning unstow of package git...
UNLINK: .gitconfig
Planning unstow of package git... done
Processing tasks...
Processing tasks... done

$ ls -l ~/.gitconfig
ls: cannot access /home/kianmeng/.gitconfig: No such file or directory

If your dotfiles directory is not located under your home directory, for example in /tmp/dotfiles instead of /home/kianmeng/dotfiles, you'll need to specify the target path. Otherwise, the symlinks will end up in the stow directory's parent, in this case /tmp.
$ mv dotfiles /tmp
$ cd /tmp/dotfiles
$ stow -vv -t ~ git
stow dir is /tmp/dotfiles
stow dir path relative to target /home/kianmeng is ../../tmp/dotfiles
Planning stow of package git...
LINK: .gitconfig => ../../tmp/dotfiles/git/.gitconfig
Planning stow of package git... done
Processing tasks...
Processing tasks... done

While GNU Stow is an alternative way of managing dotfiles, unfortunately it is still not a good replacement for Homesick, as it lacks one essential feature: it can't and won't overwrite existing files! In the end, I ended up with another alternative tool, dfm, a dot file manager written in Perl.

Find Package Dependency During Source Code Compilation

I found this sandboxing tool, mbox (really crappy name), via HN. A sandbox is like a container which isolates a guest program and assigns it limited resources. In a way, it is another form of virtualization.

Since there is no Debian package, I had to clone the source and try to compile it. But still, the tool can't run in Ubuntu due to a memory issue which is way beyond me to fix.
$ ./mbox ls
Stop executing pid=22065: It's not allowed to call mmap on 0x400000
Sandbox Root:
> /tmp/sandbox-22061

Since the Debian package is missing, we have to resort to compiling the code according to the README.

Clone the source from GitHub, configure, and compile it.
$ git clone
$ cd src
$ cp {.,}configsbox.h
$ ./configure
$ make

Unfortunately, you may encounter errors during compilation as certain required libraries are missing. That's why having a .deb package is useful: we can check the dependency list of a package. For example, to find the dependencies of the ack program, which I've just installed.

Install the ack program.
$ sudo apt-get install ack

Go to the cached location where all the Debian packages were stored.
$ cd /var/cache/apt/archives

Check the dependencies of the program and install the required packages.
$ sudo dpkg -I ack_1.39-12_amd64.deb | grep -i depends
Depends: libc6 (>= 2.4)

Or you can install the package directly and let apt-get resolve the unmet dependencies.
$ sudo dpkg -i ack_1.39-12_amd64.deb
$ sudo apt-get install -f

Now, how can we find the dependency list when compiling from source? The only way I know is to rerun the configure script, capture the output, filter the lines containing the 'no' keyword, and lastly find the packages providing those missing header files. An example is shown below.

Capture the output of the configuration script.
$ ./configure | tee configure.log

Find those header files; a sample line is given.
$ cat configure.log | grep no
checking libaio.h usability... no
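Grepping for 'no' alone is noisy; the missing header names can be extracted directly. A sketch against a fabricated three-line log (the log contents here are made up for illustration):

```shell
#!/bin/sh
# Pull just the missing header names out of a configure log.
log=$(mktemp)
cat > "$log" <<'EOF'
checking libaio.h usability... no
checking zlib.h usability... yes
checking openssl/ssl.h usability... no
EOF
# Keep only the "... no" usability checks, then strip down to the header name.
grep ' usability\.\.\. no$' "$log" \
  | sed -E 's/^checking ([^ ]+) usability.*/\1/'
```

For the fake log above this prints libaio.h and openssl/ssl.h, each ready to be fed to apt-file search.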

Find the package that contains the file libaio.h. But first we must install the apt-file program.
$ sudo apt-get install apt-file
$ sudo apt-file update
$ apt-file search libaio.h
libaio-dev: /usr/include/libaio.h

Install the found package.
$ sudo apt-get install libaio-dev

Repeat for the other missing header files as well. Note that certain header files are supposed to be not found, as the script checks for platform-specific header files.

CentOS is officially part of Red Hat

Surprisingly good news for CentOS, as Red Hat has officially acquired the core team of the distro. What took them so long? They (Red Hat) should have done this sooner. Now we can expect faster updates and security patches for CentOS. I suspect they want to sustain and increase the popularity of the most popular Red Hat variant.

What is setsid ?

From my last post, I encountered this foreign new console command, setsid. What the heck is setsid? The manual page says this program lets you "run a program in a new session". Why do we need it? Because you may want a program (e.g. a daemon) started from a terminal emulator (e.g. xterm) to stay running even after you close the terminal emulator.

Let's illustrate this using two simple examples.

1. Start xterm and later gedit editor.
$ xterm
$ gedit

2. Open up another terminal session and inspect the process tree. Notice that xterm is the parent process of gedit. If we kill the parent process (xterm), all its child processes will be terminated as well.
$ pstree | grep xterm

3. Close the xterm program. You will notice that the gedit editor is shut down as well.

Let's repeat step 1 - 3 but using setsid instead.

1. Again, start xterm, but this time launch the gedit editor using setsid. You will notice that after the second command, you get your prompt back and can continue with other commands.
$ xterm
$ setsid gedit

2. Let's find the process tree again. Notice that gedit is no longer attached to xterm as a child process; it runs as a separate process in its own new session.
$ pstree | grep xterm

$ pstree | grep gedit

3. Close the xterm program. You'll notice gedit is still running.

Note that this is not the same as backgrounding a process with an ampersand (&); the commands below do not create a new session, only a background child process in the same session.
$ xterm
$ gedit &
$ pstree | grep xterm
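The difference shows up in the session IDs. Here is a small self-contained check using sleep in place of the GUI programs. One caveat: from an interactive shell with job control, setsid may fork before exec, so $! can be unreliable there; in a script it behaves as below.

```shell
# A plain background child stays in the shell's session...
sleep 30 &
plain_pid=$!

# ...while a setsid-launched one gets a brand-new session.
setsid sleep 30 &
setsid_pid=$!
sleep 1   # give setsid a moment to create the session

shell_sid=$(ps -o sid= -p "$$")
plain_sid=$(ps -o sid= -p "$plain_pid")
setsid_sid=$(ps -o sid= -p "$setsid_pid")
echo "shell=$shell_sid plain=$plain_sid setsid=$setsid_sid"

kill "$plain_pid" "$setsid_pid" 2>/dev/null
```

The plain child reports the same session ID as the shell; the setsid one reports a different, freshly created session.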

Today I realized that after using GNU/Linux for so long, there is still a lot to be learned and explored. And yet so little time.

CentOS and Red Hat

Red Hat is probably the best and worst possible thing that can happen to CentOS, a community-based GNU/Linux distribution derived from it. The good part is that you get stability and good driver support, since Red Hat is popular in the commercial world. The bad part is that stability comes at a price: getting the latest and greatest software updates (e.g. PHP or Subversion) is quite limited unless it is a security fix. So you are left with two choices: either get the updates from third-party repositories or build from source code. Unfortunately, both ways have their own issues.

First, it is always tricky to mix third-party repositories with the base repository. Common library packages may be updated by a third-party repository, causing unnecessary breakage of existing software. Although the Yum priorities plugin and package exclusions can solve that, it is still a hassle.
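For reference, a sketch of what the priorities setup looks like once yum-plugin-priorities is installed. Each repo file under /etc/yum.repos.d/ can carry a priority line, and a lower number wins; the repo names and priority values here are illustrative.

```ini
# /etc/yum.repos.d/CentOS-Base.repo (excerpt, illustrative)
[base]
name=CentOS-$releasever - Base
priority=1

# /etc/yum.repos.d/thirdparty.repo (excerpt, illustrative)
[thirdparty]
name=Some third-party repository
priority=10
```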

Second, installation by source code compilation. You get all the customization options, but also many of the same issues as the former approach. You do gain the flexibility to isolate your installation into a specific directory (e.g. /opt) with GNU Stow, a symlink farm manager. But you lose the ability to verify the integrity of your software binaries (to check for tampering or a planted Trojan) in case there is a security breach. To solve that, some rebuild the software into RPM packages from an existing RPM spec file, which is what all the third-party repositories are doing right now. In the end, you are back to the problems of the first method.
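What Stow buys you can be simulated with plain symlinks. The sketch below mimics, by hand, the one link Stow would create for a package installed under a stow directory; the paths and the foo package are made up for illustration, and stow itself automates this for every file in the package.

```shell
# Pretend /tmp/optdemo is /opt: packages live under stow/, and their
# files are symlinked into the shared bin/ directory.
rm -rf /tmp/optdemo   # start clean
mkdir -p /tmp/optdemo/stow/foo-1.0/bin /tmp/optdemo/bin
printf '#!/bin/sh\necho hello from foo\n' > /tmp/optdemo/stow/foo-1.0/bin/foo
chmod +x /tmp/optdemo/stow/foo-1.0/bin/foo

# 'cd /tmp/optdemo/stow && stow foo-1.0' would create this symlink:
ln -s /tmp/optdemo/stow/foo-1.0/bin/foo /tmp/optdemo/bin/foo

/tmp/optdemo/bin/foo   # prints "hello from foo" via the symlink
```

Uninstalling is then just removing the package's symlinks (what `stow -D` does), leaving the shared tree clean.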

How then? Just migrate to a more bleeding-edge distro like Ubuntu or Fedora.

Stuck with Conservative GNU/Linux Distros

I'm stuck with a conservative GNU/Linux distro (CentOS or Debian stable) that doesn't let me update to more current LAMP-stack packages. No, I don't want to compile from source code. What should I do?
For Debian stable, use Dotdeb. For CentOS, use the IUS Community repository. IUS stands for Inline with Upstream Stable. This third-party RPM repository is "sponsored by internal work at Rackspace (but officially unsupported)".

What if I'm stuck with a hosting panel like cPanel, Plesk, or DirectAdmin?
Pray hard. Pray very hard. You're at the mercy of your hosting provider.

Finding GNU/Linux Distro And Architecture

I've been dealing with quite a few new servers recently, and we can't find the original hardware specification as quoted in the invoice. While we can google for the spec based on the hardware vendor and model, certain parts of the machine may have been upgraded or replaced. To be sure, always double-check the hardware spec. To do this, we need to answer two questions.

1) What is the GNU/Linux distro and architecture of the existing server?
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 11.04
Release: 11.04
Codename: natty

$ uname -a
Linux foobar 2.6.38-11-generic #48-Ubuntu SMP Fri Jul 29 19:02:55 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
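lsb_release is not always installed. On most modern distros, /etc/os-release carries the same information (the NAME and VERSION_ID variables below come from the os-release format), and uname -m alone answers the architecture question:

```shell
# Distro name and version without lsb_release.
. /etc/os-release && echo "$NAME $VERSION_ID"

# Architecture only, e.g. x86_64.
uname -m
```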

2) What is the hardware specification of the server?
console output
$ sudo lshw

GUI output
$ sudo apt-get install lshw-gtk
$ sudo lshw -X

$ sudo lshw-gtk

Or (if you are running Ubuntu or booting the machine from a Live CD):
System -> Preferences -> Hardware Lister
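Besides lshw, a few standard console commands are enough to cross-check the basics; dmidecode (commented out, needs root) gives the exact per-DIMM memory layout:

```shell
lscpu                        # CPU model, cores, architecture
grep MemTotal /proc/meminfo  # installed RAM in kB
cpus=$(nproc)                # logical CPU count
echo "logical CPUs: $cpus"
# sudo dmidecode -t memory   # per-DIMM details (size, speed, slot)
# lsblk                      # disks and partitions
```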