On array_slice and array_shift Functions

public function query($query) {
    $stmt = $this->pdo->prepare($query);
    $stmt->execute(array_slice(func_get_args(), 1));
    return $stmt;
}
Interesting piece of PHP code shown above. Didn't realize you could use the array_slice function to extract a portion of the array returned by func_get_args. My typical way is just using the array_shift function to remove the first element.

Instead of
array_slice(func_get_args(), 1);

you can also obtain the same result (a bit longer) by
$args = func_get_args();
array_shift($args);

However, you just can't simply write it the way shown below (the interpreter will throw a fatal error), which I am going to elaborate on.

Run your PHP in interactive mode and type the sample code, as shown.
$ php -a
php > print_r(array_shift(array('a', 'b', 'c')));
PHP Fatal error:  Only variables can be passed by reference in php shell code on line 1

Let's assign a temporary variable for the array. However, we only obtain the first element, as array_shift is a modifier function which modifies its argument and expects it to be a reference.
php > print_r(array_shift($temp = array('a', 'b', 'c')));
aphp >

The only way is to first assign your array to a variable, then modify it.
php > $temp = array('a', 'b', 'c');
php > array_shift($temp);
php > print_r($temp);
Array
(
    [0] => b
    [1] => c
)

Conclusion: if you want a portion of an array, just use the array_slice function.
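Both approaches can be put side by side. A minimal sketch (the function and argument names are mine, not from the snippet above):

```php
<?php
// Compare the two ways of dropping the first argument: array_slice on
// func_get_args() versus copying the arguments and array_shift-ing the copy.
function demo() {
    $sliced = array_slice(func_get_args(), 1);  // non-destructive, one line

    $shifted = func_get_args();                 // a fresh copy each call
    array_shift($shifted);                      // modifies the copy in place

    return $sliced === $shifted;
}

var_dump(demo('SELECT * FROM t WHERE id = ?', 42));  // bool(true)
```

Both paths reindex the remaining elements from zero, which is why the strict comparison holds.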

Visualizing Wifi Signal Strength

Due to additional headcount around the house, there are currently more devices connecting to the wireless access point, thus affecting the Wifi connectivity, especially mine.

AskUbuntu's answer recommended this console app called wavemon to check your Wifi adapter's signal strength. Basically, this app "is an ncurses-based monitoring application for wireless network devices." Installation is plain straightforward in Ubuntu 13.04. Just type these two lines.
$ sudo apt-get install wavemon
$ sudo wavemon

Screenshot below shows the movement of the signal level of my lappy's Wifi connection, which was quite poor. Went to Low Yat and bought two antennas to boost the signal from 5dBi to 8dBi. Still, nothing much changed. In the end, switched to plain old cable and ethernet, specifically ethernet over power lines.

What next? There are three other possibilities to restore my Wifi speed. First, replace and upgrade my wireless router with some other brand like the Asus RT-AC66U, which supports a thousand assorted features and, most importantly, Quality of Service (QoS). You cannot prevent people from watching or downloading movies 24x7, but at least you can control their usage, to be fair to everyone else. Second, buy a high-gain wireless USB adapter to replace my lappy's internal Wifi adapter. Third, add a few wireless repeaters at several strategic points around the house.

Let's see how this goes.

System Freeze When Using Google Chrome in Ubuntu 13.04

Seems I am not the only one having this issue, especially if you've opened quite a few Ajax-intensive apps or sites at the same time. Can't seem to track down the root cause. Is it my Chrome extensions, plugins, or just a video driver issue? Nevertheless, found two ways to restart the desktop.

1. Restart X using Ctrl-Alt-Backspace, but first you must enable this key combination. Why was this disabled in the first place? See screenshot below.

System Settings -> Keyboard -> Layout Settings -> Options -> Key sequence to kill the X Server

2. Switch to a virtual terminal with Ctrl-Alt-F1 and log in again to restart Unity.
$ unity --replace

Original and Non-Original Battery

Is it just me or some weird placebo crap? After switching to an original Lenovo battery, applications in Ubuntu seemed to run faster, especially the dreadfully slow Chrome browser.

Checking the battery status from the terminal using the UPower tool.
$ upower -i /org/freedesktop/UPower/devices/battery_BAT0 | grep -E "state|to\ full|percentage"

    state:               charging
    time to full:        4.2 hours
    percentage:          28.1695%

As shown above, the new battery needs 4.2 hours for a full charge. Rather long compared to the non-original battery, which only needed 1.5 hours. Why the difference? No idea whatsoever.

To extend your battery life, it's best not to hold a full charge for a long period of time, and to keep the battery in the 40% to 80% charged range. Unfortunately, my beloved E420 (which, sadly, is not a real Thinkpad) does not have tp_smapi support (shown below), which can regulate battery charging within the threshold range.
$ sudo apt-get install tp-smapi-dkms
$ sudo modprobe tp_smapi
ERROR: could not insert 'tp_smapi': No such device or address

How then? The best way to extend the battery life is to remove it when not in use and keep your laptop temperature cool. Alternatively, write a script to monitor your battery charge within this range and notify you when to start and stop charging.
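A rough sketch of such a monitor (the device path, the thresholds, and falling back to echo instead of notify-send are my assumptions; the original post has no script):

```shell
#!/bin/sh
# Poll the battery percentage via upower and suggest when to start or
# stop charging so the battery stays inside the 40%-80% band.
BAT=/org/freedesktop/UPower/devices/battery_BAT0

# Extract the integer percentage from a "percentage: NN%" line.
parse_pct() { awk '/percentage/ { gsub(/%/, ""); print int($2) }'; }

pct=$(upower -i "$BAT" 2>/dev/null | parse_pct)
pct=${pct:-50}   # fall back when upower is unavailable (e.g. no battery)

if [ "$pct" -ge 80 ]; then
    echo "Battery at ${pct}%: stop charging"    # swap in notify-send on a desktop
elif [ "$pct" -le 40 ]; then
    echo "Battery at ${pct}%: start charging"
fi
```

Run it from cron every few minutes, or wrap it in a while/sleep loop.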

Business Card Raytracer and POV-Ray Installation in Ubuntu 13.04

Via HN. A step-by-step deciphering of the source code of a raytracer that fits on a business card. A raytracer is a program that generates an image through the ray tracing technique. Hard to imagine me ever getting close to what he, Andrew Kensler, managed to achieve.

The post somehow reignited my love for computer-generated imagery. It also reminds me of the classic raytracer, POV-Ray, which I used to play with, waiting hours to generate one tiny image. Unfortunately, due to licensing issues, the program was dropped from Ubuntu since 12.04. No worries, we can still install it through source code compilation. Installation steps for Ubuntu 13.04 as follows:

1. Download and extract the latest greatest version. I am using the 3.7 RC7 beta.
$ wget http://www.povray.org/beta/source/povray-3.7.0.RC7.tar.bz2
$ tar jxvf povray-3.7.0.RC7.tar.bz2

2. Configure the software before compilation. You should encounter quite a few warnings about missing required libraries. Note that the COMPILED_BY flag is compulsory.
$ cd povray-3.7.0.RC7
$ ./configure COMPILED_BY="foobar baz "

3. Install the prerequisite packages. Note that I chose to install all the Boost-related packages, but some are optional.
$ sudo apt-get install libboost1.53* zlib1g-dev libpng12-dev libjpeg-dev libtiff5-dev libopenexr-dev libsdl1.2-dev

4. Run the configure script again. You will still encounter the dreadful 'cannot link with the boost thread library' error (see below). Took me a while of googling to find the exact solution.
$ ./configure COMPILED_BY="foobar baz "
checking whether the boost thread library is usable... no
configure: error: in `/home/kmang/project/povray-3.7.0.RC7':
configure: error: cannot link with the boost thread library
See `config.log' for more details

5. Based on the proposed solution, append the LIBS="-lboost_system" option to the configure script to make compilation work. Compile and create the .deb package using the checkinstall program (easy for uninstallation later). Note that compilation is quite long, roughly 49 minutes.
$ ./configure COMPILED_BY="foobar baz " LIBS="-lboost_system"
$ time make
real    49m43.344s
user    47m11.508s
sys     2m15.908s

$ sudo make install && sudo checkinstall

6. To test the application, let's render the Utah teapot, the famous standard reference object in the computer graphics community. See below for the sample rendered output.
$ cp -rv /usr/local/share/povray-3.7/scenes/advanced/teapot/ /tmp
$ cd /tmp/teapot
$ povray -w320 -h240 teapot.pov
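For an even quicker smoke test than the teapot, a tiny scene of my own (not from the POV-Ray samples) also does the job:

```pov
// minimal.pov -- a red sphere on a checkered plane, rendered with:
//   povray -w320 -h240 minimal.pov
#include "colors.inc"
camera { location <0, 2, -5> look_at <0, 1, 0> }
light_source { <10, 10, -10> color White }
sphere { <0, 1, 0>, 1 pigment { color Red } }
plane { y, 0 pigment { checker color White, color Gray50 } }
```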

Create a .deb Package from Source Code

Due to the Subversion PPA being stuck at version 1.7.9, in order to use Subversion client version 1.8.3, I need to upgrade by source code compilation. As I mentioned before, I really don't like this installation method. Fortunately, I've found this tool called checkinstall, which lets you create a software package (deb or rpm compatible) so you can remove it later.

We will illustrate by compiling and installing the latest version of SQLite, a lightweight database management system.

1. First, download the latest SQLite source code (we're using the autoconf version) and extract the tarball.
$ wget https://www.sqlite.org/2013/sqlite-autoconf-3080002.tar.gz
$ tar zxvf sqlite-autoconf-3080002.tar.gz

2. After that, we need to install all the necessary software and libraries in order to compile it. apt-get has a wonderful option called build-dep which will install all these dependencies so you can build the software.
$ sudo apt-get build-dep sqlite3

3. Go into the source code folder and generate the Makefile.
$ cd sqlite-autoconf-3080002
$ ./configure

4. Before we compile the source code, let's install the checkinstall package.
$ sudo apt-get install checkinstall

5. Compile the source code and create a .deb package. Make sure you pass the -j option to make to speed up compilation. The rule of thumb is twice the number of your CPU cores. In my case, my lappy has 4 CPU cores (per the nproc command). Hence, we should use the -j8 option.
$ make -j8 && sudo checkinstall
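The rule of thumb above can be computed instead of hard-coded (a sketch; nproc ships with coreutils):

```shell
#!/bin/sh
# Use twice the CPU core count as the make job count.
jobs=$(( $(nproc) * 2 ))
echo "building with -j${jobs}"
# make -j"${jobs}" && sudo checkinstall
```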

6. Please take note that checkinstall will create the package and install it immediately. But you can remove it later.
$ sudo dpkg -r sqlite-autoconf

7. Check the contents of the .deb package file. It will show you the list of files to be installed.
$ dpkg --contents sqlite-autoconf_3080002-1_amd64.deb

150 Minutes Per Week

"The study found that 150 minutes of vigorous physical activity per week added two to three years to the lives of the men during the 13-year study."
-- The University of Western Australia, emphasis added
Found via Reddit. 150 minutes per week is roughly 21.43 minutes per day. But first, what is vigorous physical activity? According to the World Health Organization (WHO), any high-intensity activity that increases the heart rate is considered vigorous. Examples are running, brisk walking, fast cycling, aerobics, and others.

It seems daily walking of 10,000 steps (which I still fail to do) is merely enough for fitness or weight loss but not enough to extend your life expectancy. Time to include the Scientific 7-Minute Workout in my daily routine, as I don't suffer from any foot pains anymore.

GNU/Linux Performance Analysis Tools

Where one slide (#16) says a thousand words. An overview of all the tools available in GNU/Linux for performance analysis, by Brendan Gregg of Joyent, a cloud infrastructure company.

Out of all the tools mentioned in the diagram, I'd never tried or heard of these four. Installation was done in Ubuntu 13.04.

1. perf, GNU/Linux profiling with performance counters.
$ sudo apt-get install linux-tools-3.8.0-30
$ sudo perf top

2. blktrace, block layer I/O tracing in the kernel.
$ sudo apt-get install blktrace
$ sudo blktrace -d /dev/sda -o - | blkparse -i -

3. slabtop, display kernel slab cache information.
$ sudo apt-get install procps
$ sudo slabtop -o -s c | less

4. nicstat, network monitoring tool.
$ wget http://jaist.dl.sourceforge.net/project/nicstat/nicstat-1.92.tar.gz
$ tar zxvf nicstat-1.92.tar.gz
$ cd nicstat-src-1.92
$ mv Makefile.Linux Makefile
$ sudo apt-get install gcc

Edit the Makefile and change the CFLAGS line below from -m32 to -m64.
CFLAGS =        $(COPT) -m64

$ make
$ ./nicstat.sh 1

An Integer is Not an Integer

Another day, another unexpected behaviour encountered in PHP wonderland. Funny that a number returned from a function, which looks like an integer, is actually not an integer. Let's illustrate this with a simple example.

1. Start the interactive mode.
$ php -a

2. Let's find the square root of number 9, using sqrt function.
php > echo sqrt(9), "\n";

3. Obviously 3 is an integer, right? Just to confirm, let's use the is_int function to check it. Nope, it's not.
php > echo is_int(sqrt(9)), "\n";

4. Checking the sqrt function documentation again, it seems that sqrt returns the float data type even when the decimal point is not shown. Let's try the is_float or gettype function.
php > echo is_float(sqrt(9)), "\n";
php > echo gettype(sqrt(9)), "\n";

Double? Isn't it supposed to return float? According to the documentation, "double" was used instead of "float" for historical reasons. Why? No effing idea whatsoever.

5. Still, how are we going to check for an integer, in this case, to find out which number is a perfect square (an integer that is the square of another integer)? Type casting. See the example below.
php > $root = sqrt(9); echo $root == (int)$root, "\n";
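The cast trick can be wrapped into a tiny helper (my own wrapper, not from the post):

```php
<?php
// A perfect square has a square root equal to its integer truncation.
function is_perfect_square($n) {
    $root = sqrt($n);
    return $root == (int)$root;  // loose == compares the float against the int
}

var_dump(is_perfect_square(9));   // bool(true)
var_dump(is_perfect_square(10));  // bool(false)
```

Note the loose comparison; for very large numbers, floating-point precision of sqrt becomes the limiting factor.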

Another day, another gotcha in PHP world.

Simulate fail2ban Using Iptables (for SSH only)

Fellow HN reader spindritf shared a tip to simulate something similar to fail2ban, a tool to ban IP addresses with malicious intent. Useful when you don't want to install fail2ban. My main issue with fail2ban is that I sometimes accidentally ban myself after several login failures.

-A INPUT -p tcp -m tcp --dport 22 -m recent --update --seconds 180 --hitcount 4 --rttl --name SSH --rsource -j LOG --log-prefix "ssh brute force: "

-A INPUT -p tcp -m tcp --dport 22 -m recent --update --seconds 180 --hitcount 4 --rttl --name SSH --rsource -j DROP

-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource -j ACCEPT

Till today, I still can't read and use iptables properly, even with the explainshell tool, and I still try my best to avoid it. Maybe one day I will really force myself to learn it.

Oh crap! I shouldn't have done that?!

Don't multitask. Seriously, don't multitask while handling important stuff. As usual, I was busy coding and monitoring the Apache web server log at the same time. Due to my wonky laptop battery, I decided to switch to the backup battery and reboot the machine. Exited from my remote connection and typed this command at my console to reboot my lappy.
$ sudo reboot

Weird. Nothing happened. No programs were closed on my desktop. Nothing was shut down. Checked my terminal again. To my horror, I realized that I hadn't logged out properly from the live production server and had rebooted the live production server instead!

Sh*t! Holy double sh*t!

Rushed out to tell my boss about it. He laughed about it in a cool-as-a-cucumber way. Not surprised. He and the long-serving employees have experienced far worse scenarios before (they have enough war stories for generations). Not for me; this was so effing embarrassingly stupid. Server rebooted, double-checked everything again. Thank goodness, everything seemed okay and back to normal.

Post-mortem analysis: can we prevent accidental reboots of an operationally critical server? Yes, you can. Just use molly-guard.

In non-IT terms, a molly-guard is a cover (normally red) over a button, there to prevent accidental triggering of an unwanted event, like firing nuclear missiles. As software, molly-guard is a shell script that checks for an existing SSH session whenever any of the shutdown, reboot, halt, or poweroff commands is invoked. The script prompts you to key in the hostname to confirm before proceeding with the intended critical action.

How do you set this up and get it to work? In Ubuntu/Debian-based distros, it's effing easy. Just apt-get it.

1. Install the package
$ sudo apt-get install molly-guard

2. Run a simulation. Note that the default configuration only works over an SSH session.
$ ssh localhost
$ sudo reboot
W: molly-guard: SSH session detected!
Please type in hostname of the machine to reboot:
Good thing I asked; I won't reboot servername ...
W: aborting reboot due to 30-query-hostname exiting with code 1.

3. What if you also want this program to work on non-SSH sessions? Just edit the config file (/etc/molly-guard/rc) and set ALWAYS_QUERY_HOSTNAME to true.
# when set, causes the 30-query-hostname script to always ask for the
# hostname, even if no SSH session was detected.
ALWAYS_QUERY_HOSTNAME=true

How about CentOS/Redhat-based distros? It's not in the official repositories (now you know why I dislike rpm-based distros: limited software selection). You can download packages from these sites. Installation and setup should be the same.

Fun and exciting times these days. I blame it all on the bloody wonky battery.

What is setsid ?

From my last post, I encountered this foreign new console command, setsid. What the heck is setsid? The manual page says this program lets you "run a program in a new session". Why do we need it? Because you may want a program (e.g. a daemon) started from a terminal emulator (e.g. xterm) to stay running even after you close the terminal emulator.

Let's illustrate this using two simple examples.

1. Start xterm and later gedit editor.
$ xterm
$ gedit

2. Open up another terminal session and view the process tree. Notice that xterm is the parent process of gedit. If we kill the parent process (xterm), all its child processes will be terminated as well.
$ pstree | grep xterm

3. Close the xterm program. You will notice the gedit editor shuts down as well.

Let's repeat step 1 - 3 but using setsid instead.

1. Again, start xterm, and later the gedit editor using setsid. You will notice that after the second command, you can proceed with other commands as well.
$ xterm
$ setsid gedit

2. Let's find the process tree again. Notice gedit is not attached to the xterm parent process; instead it runs as a new process under a new session.
$ pstree | grep xterm

$ pstree | grep gedit

3. Close the xterm program. You'll notice gedit is still running.

Note that this is not the same as forking a process using the ampersand (&). Running the commands below does not create a new session, but a subshell child process.
$ xterm
$ gedit &
$ pstree | grep xterm
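The difference is visible in the session IDs. A sketch using sleep in place of xterm/gedit (so it runs in any terminal; the pgrep lookup is my own workaround, since setsid may re-fork):

```shell
#!/bin/sh
# A backgrounded child shares the shell's session; setsid starts a new one.
sid_of() { ps -o sid= -p "$1" | tr -d ' '; }

sleep 30 &
bg=$!

setsid sleep 31 &
sleep 1                                     # give it a moment to start
detached=$(pgrep -f 'sleep 31' | head -n 1) # locate the detached sleep

shell_sid=$(sid_of $$)
bg_sid=$(sid_of "$bg")
detached_sid=$(sid_of "$detached")

echo "shell:      $shell_sid"
echo "background: $bg_sid"        # same session as the shell
echo "via setsid: $detached_sid"  # a new session of its own

kill "$bg" "$detached" 2>/dev/null
```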

Today I realized that after using GNU/Linux for so long, there is still a lot to be learned and explored. But yet so little time.

Restoration of Missing Unity Launcher and Panels

Supposed to rush home for some errands after work, but accidentally removed a certain common library package on my Ubuntu desktop. In the end, the Unity launcher, panels, and status indicator were gone. Frustrated (this is the nth time), gave up, and went straight home. After replenishing my glucose, googled around and tried again.

Steps to recover it in Ubuntu 13.04.

1. Re-install the Ubuntu desktop.
$ sudo apt-get update
$ sudo apt-get install --reinstall ubuntu-desktop
$ sudo apt-get install unity

2. Log out from the command line, since the top panel was gone and nowhere could I find the logout menu item.
$ gnome-session-quit

3. Log in again. Still, no Unity launcher or top panel to be seen. Reset the Compiz settings and voila!
$ sudo dconf reset -f /org/compiz/
$ setsid unity

Endless PHP Internal Drama

Via HN. Sigh. Endless drama in the PHP internals. I applaud their efforts (both Anthony and Nikita) to bring more useful and typical programming language features like generators, function autoloading, and password hashing to PHP. However, the language itself is slowly morphing from a simple procedural web template language into a poor imitation of Java with none of its benefits. The hacked solution of the namespace separator. More inconsistency and complexity. Yes, some may argue it's a bikeshed issue and you can choose not to use it. But seriously, backslash (\)?

The language itself feels like a ship sailing aimlessly in the sea, going wherever the sea wind directs it. Rasmus, together with Zend, should step in and be the Benevolent Dictator for Life (BDFL) instead of letting the community votes decide. Sometimes, unfortunately, decisions are made by those who make the most noise.

Imperative and Declarative Programming

Imperative. Think the PHP programming language. You give commands and tell the computer how to do things, step by step. It's like a driving instructor teaching a person how to drive.

Declarative. Think SQL. You tell the database what kind of result you want and don't care how the system does it. It's like asking a taxi driver to drive you to your destination.
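To make the contrast concrete, a small sketch of my own (the sample data and the users table are invented): the same question answered imperatively in PHP and declaratively in SQL.

```php
<?php
// Question: which users are older than 30?

// Imperative (PHP): spell out every step of the scan yourself.
$users = array(
    array('name' => 'Ann', 'age' => 35),
    array('name' => 'Bob', 'age' => 28),
);
$result = array();
foreach ($users as $user) {
    if ($user['age'] > 30) {
        $result[] = $user['name'];
    }
}
print_r($result);  // prints the array containing only "Ann"

// Declarative (SQL): state the result you want; the planner decides how.
//   SELECT name FROM users WHERE age > 30;
```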

Variadic Function

A few years ago, I was introduced to the PHP function func_get_args() by my senior while reading his code (can't remember what or why he used it for). This function allows you to create a function that takes multiple arguments without an argument list declared in the function signature. The code shown below is an example of a helper function to find the maximum value from a set of numbers.
// Named my_max since redeclaring the built-in max() is a fatal error.
function my_max() {
    $max = -PHP_INT_MAX;
    foreach (func_get_args() as $arg) {
        if ($arg > $max) {
            $max = $arg;
        }
    }
    return $max;
}

echo my_max(1, 5, 7, 3);  // 7

Why did I bring this up? I didn't realize that the term "variadic function" is used to describe a function that supports multiple arguments until I read the Request For Comments (RFC) proposing a new syntax for variadic functions in PHP. Historically, the variadic function is an old concept, long implemented in the C programming language, especially for the printf() and scanf() functions. Not surprisingly, these two functions also exist in PHP, since it's a C-based programming language.

Back to the RFC. Reading through the discussion on Reddit and the RFC itself, I think it's a good idea, since we can now enforce type hints and consistency on interfaces. However, it is a feature that is nice to have but only useful for framework or library writers, not mere library consumers like us.

Revisit Subversion Branching and Merging Again

Was reading this presentation on version control in Subversion. Relearning the whole branching and merging flow again. Due to some constraints (access rights, data sensitivity, and my pure laziness as well), our flow is a bit effed up. No one but myself to blame, since I am responsible for syncing all the work by different developers.

According to the slide, the flow is as follows:

1. Create a branch from trunk and commit.
$ svn cp ^/trunk ^/branches/mybranch -m "Create mybranch"
$ svn co ^/branches/mybranch

2. Inside your own branch, mybranch in this case, you need to keep it in sync with the trunk.
$ svn merge ^/trunk
$ svn ci

3. Later, once the features in mybranch are completed, you will need to reintegrate them into the trunk. Inside a clean working copy of trunk, run these commands.
$ svn merge --reintegrate ^/branches/mybranch
$ svn ci

Once reintegrated, the branch is basically end-of-life. No more modifications to it. This is one step I do differently: basically, we have a stable branch which actually acts like trunk. Why? I spend most of my time in the stable branch and hardly touch trunk. That's why.

However, the --reintegrate option is deprecated in version 1.8. Subversion will automatically decide whether it is a reintegration merge or not.

4. Once integration is done, you'll need to tag it. A URL-to-URL copy commits directly, so it only needs a log message.
$ svn cp ^/trunk@12345 ^/tags/mytag -m "Tag mytag"

5. Rinse and repeat.

There is still one question lingering in my head right now: how do I prevent certain files from being modified during merging? More on this in the next post, perhaps.

Back to the slides. One particular slide echoes my sentiment about the practice of using Subversion or any version control: you need to commit small, commit early, and commit often for each small task. It keeps the momentum going and motivates the team to move forward. This was further enhanced by our practice of the Kanban scheduling methodology and the Pomodoro time management technique. Our development process is getting better, but there is still room for improvement.

One thing for sure, I need to move Subversion to a new server. The installation (by yours truly) is seriously way effed up. Should I just migrate to Mercurial or Git? Or maybe I should just use hgsvn or git-svn instead?


Exotic Programming Languages

Via HN. If my memory serves me correctly, a few years back, someone (a sales dude) used to come to the #myoss meetup and talk about MUMPS (Massachusetts General Hospital Utility Multi-Programming System), a programming language with embedded data storage. A very old (since 1966) programming language within a niche domain, mostly healthcare. I suspect some of the healthcare systems in .my are still using it.

I am wondering, what other exotic programming languages are still being used here? In the banking industry, I think Cobol and RPG are still quite common. What else?

Weekend Projects and the Lack of Time

"I really wanted to learn AngularJS since the day it was introduced, but I never had the time. I always envied other developers using it and I just kept bookmarking new AngularJS resources and articles. But then I thought maybe I should define a weekend project and do it with AngularJS, so I can learn it."
-- Sallar Kaboli, emphasis added.
HN reader zerr also wrote something similar about the lack of time to keep learning and to pursue your own weekend projects. It's going to be harder if you are getting older, have a family with kids, or face all sorts of stupid, annoying, out-of-your-control distractions over the weekend.

The only time I can do something useful at home is after midnight, when everything has quieted down. That's when you find the solitude to read, to learn, and to think, which helps spark your creativity and self-development.

Unfortunately, being a night owl is slowly taking a toll on my body, especially when you don't have sufficient sleep the next day. Lack of quality rest is going to wreck your body somehow, slowly but surely. The symptoms are all there.

I didn't realize how stupid it is not to have a good rest until yesterday morning. Tried to upgrade the Subversion server at 4am and sent someone to KLIA at 5am. At the same time, tried my very best not to accidentally hit any of those "Mat Rempit" doing superman stunts along the Maju Expressway. As a biker, if you can, avoid the Maju Expressway. I saw three damaged bikes along the journey. Same goes for the MRR2. These two highways are way too dangerous for bikers.


Dizziness and Fatigue After Meals

"The inflammation of the lining of the stomach is called gastritis. Dizziness and fatigue after eating meals are common symptoms of gastritis. This condition may be caused by irregularity in eating meals, consuming improperly cooked food, excessive eating of oily foods, overeating, alcohol intake, and drinking strong coffee or tea."
-- MD-Health, emphasis added
Be wary of any health-related information found online. The site itself has no disclaimer, no sources, and furthermore is not HONcode certified. The best course of action is to check with a doctor.

There are two types of dizziness. One is light-headedness, where you feel like fainting. The second is vertigo (which I always thought was not a word but the name of an imprint of DC Comics, silly me), where you feel a spinning sensation. The sensation comes in three forms: objective (the world is moving), subjective (the person themselves is moving), and pseudovertigo (rotation inside your head).

CentOS and Red Hat

Red Hat is probably the best and worst possible thing that can happen to CentOS, a community-based GNU/Linux distribution derived from it. The good part is that you get stability and good driver support, since Red Hat is popular in the commercial world. The worst part is that stability comes at a price: getting the latest greatest software updates (e.g. PHP or Subversion) is quite limited unless it's a security fix. So you're left with two choices: either you get the updates from third-party repositories or you build from source code. Unfortunately, both ways have their own issues.

First, it's always tricky to mix third-party repositories with the base repository. Certain common library packages may be updated by third-party repositories, causing unnecessary breakage of existing software. Although the Yum priorities plugin and package exclusion can solve that, it is still a hassle.

Second, installation by source code compilation. You get all the customization options, but with all the issues of the former way as well. However, you have the flexibility to isolate your installation into a specific directory (/opt) with GNU Stow, a symlink farm manager. But you lose the ability to verify the integrity of your software binaries (to check for tampering or a planted Trojan) in case there is a security breach. To solve that, some may rebuild the software into RPM packages from an existing RPM spec file, which is what all the third-party repositories are doing right now. In the end, you're back to the problem of the first method.

How then? Just migrate and move to a more bleeding-edge distro like Ubuntu or Fedora.