Using LXD's Instance Types to Emulate Public Cloud (Amazon, Google, or Azure) Specifications

One of the challenges when developing against public cloud providers like Amazon, Google, or Azure is how to emulate their cloud instances, to the closest specification, locally. LXD, the system container manager, has a feature where you can specify an instance type during container creation. The public cloud specifications are based on the mapping done by the instance type project. This is not a full emulation of the actual public cloud environment, just the essential resource allocations like CPU, memory, and disk size. Nevertheless, it's a good and quick way to bootstrap a container that roughly matches the resources allocated by these public cloud offerings.

Before that, check the available CPU cores and memory your machine has. On my laptop, there are 4 CPU cores and 7 GB of memory.
$ nproc && free -g
4
              total        used        free      shared  buff/cache   available
Mem:              7           6           0           0           1           0
Swap:             0           0           0

How do instance types work in LXD? Let's create a container based on AWS t2.micro, which has a specification of 1 CPU and 1 GB RAM. The three commands below are equivalent, just expressed with different syntax.
$ lxc launch ubuntu:18.04 c1 -t aws:t2.micro # <cloud>:<instance type>
$ lxc launch ubuntu:18.04 c2 -t t2.micro # <instance type>
$ lxc launch ubuntu:18.04 c3 -t c1-m1 # c<CPU>-m<RAM in GB>

Check the resource limits of the created container.
$ lxc config show c1 | grep -E "cpu|memory"
  limits.cpu: "1"
  limits.memory: 1024MB

Again, an alternative way to get the config details of the container.
$ lxc config get c1 limits.cpu
1
$ lxc config get c1 limits.memory
1024MB
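The `c<CPU>-m<RAM>` shorthand maps directly onto the `limits.cpu` and `limits.memory` keys shown above. A minimal shell sketch of that mapping (illustrative only, not LXD's own code; the `parse_instance_type` function is a made-up helper):

```shell
# Hypothetical parser for LXD's c<CPU>-m<RAM> shorthand, mimicking
# what `lxc launch ... -t c1-m1` sets on the container.
parse_instance_type() {
    local cpu="${1%%-*}"    # e.g. "c1"
    local mem="${1##*-}"    # e.g. "m1"
    echo "limits.cpu=${cpu#c} limits.memory=$(( ${mem#m} * 1024 ))MB"
}

parse_instance_type "c1-m1"   # limits.cpu=1 limits.memory=1024MB
```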

Betta Spawn Log : BSL20181202 : HMPK Metallic Blue (S) x HMPK Metallic Blue (S)

Yet another sibling pair, but this time a metallic blue pair. This follows the previous breeding pairs BSL20181105 (Super Red) and BSL20181005 (Super Yellow). Again, the plan was to breed a sibling pair to produce as many fry as possible so we can continue breeding over multiple generations. If everything goes well and we're lucky, this will take several generations of breeding.

Male: HMPK Metallic Blue (S)
Age: 4+ months
Temperament: Normal.
Size: Small (2.5cm body only)
Grade: C

Not the best one we could obtain, but nevertheless, the male Betta was healthy and quite active. We fed it well and kept it in Indian Almond leaf water to make sure it was well quarantined. This fish was fresh from the farm, bred in a natural environment, and fed with live food. Surely it's better than those bred indoors like ours.


Female: HMPK Metallic Blue (S)
Age: 4+ months
Temperament: Highly active.
Size: Small (2.5cm body only)
Grade: B

When we first bought this female Betta, it was very small, but we had no choice; there was no other female Betta available in this colour. Nevertheless, we decided to proceed as we were confident we could condition this female until it was ready to breed. It took us three months of keeping this female well fed and always in Indian Almond leaf water, just in case, to prevent sickness. Luckily, this female turned out to be healthy and very active.


Log Notes
2018-09-23
Bought this pair from the Betta farm. The female was so small that we were worried it may not be a female at all. Nevertheless, we trusted the wisdom of the said Betta breeder.

2018-12-01 (1st week)
The pair mated. The female was immediately separated from the male and put into a plastic container. Since we didn't have enough overnight water, the female stayed a while in the breeding box.

2018-12-02
The female was removed so it won't interfere with the male. The male moved the nest several times, most probably because of interruptions from us.

2018-12-04
Black dots, or hatched fry, were seen in the bubble nest. We estimated the spawn size at roughly 100-plus.


2018-12-09 (2nd week)
Plan in advance when you want to breed the fish. The first two weeks are crucial as the fry need to be fed constantly; otherwise, the whole spawn will starve to death. Hence, if you need to travel, don't breed. It's also good to hatch two batches of BBS (baby brine shrimp) in case one batch does not hatch properly, especially if you're using expired BBS. We fed the fry twice per day and the growth and colour were noticeable.

2018-12-16 (3rd week)


Retrospection
1/ Always prepare backup food for the first two weeks in case the BBS does not hatch.

Pi-hole with Docker - Installation and Setup

In my previous post, I covered Pi-hole installation and setup with LXD. For this post, we will try another installation approach using Docker. Procedure-wise, it's quite straightforward, just three steps.
$ docker pull pihole/pihole
$ wget https://raw.githubusercontent.com/pi-hole/docker-pi-hole/master/docker_run.sh
$ bash docker_run.sh

One of the issues encountered was that the mapped ports may already be used by other services. To resolve the port conflicts, especially in Ubuntu, we have to identify the processes bound to those ports and stop them.
$ sudo netstat -nltup | grep -E ":53|:67|:80|:443"
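The `-E ":53|:67|:80|:443"` pattern matches a listener on any of the four ports Pi-hole wants. Here is the same filter run against a canned, fabricated sample of netstat output, so you can see which kind of line it catches:

```shell
# Two fake listener lines; only the one bound to port 53 should match.
sample='tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 612/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 901/sshd'

echo "$sample" | grep -E ":53|:67|:80|:443"
```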

Port conflicts with Dnsmasq in Ubuntu can be resolved by disabling its service. However, this is not advisable if you're running services that depend on Dnsmasq, like LXD or a VPN.

If you'd like an alternative way to properly manage (for example, restart) the Pi-hole container, you can write a wrapper shell script and manage it through Docker Compose.

The wrapper shell script is based on `docker_run.sh`.
#!/usr/bin/env bash

# Resolve the host's primary IPv4/IPv6 addresses from the default route.
IP_LOOKUP="$(ip route get 8.8.8.8 | awk '{for(i=1;i<=NF;i++) if ($i=="src") print $(i+1)}')"
IPV6_LOOKUP="$(ip -6 route get 2001:4860:4860::8888 | awk '{for(i=1;i<=NF;i++) if ($i=="src") print $(i+1)}')"

# Allow overrides from the environment; export so docker-compose can
# substitute these into docker-compose.yml.
export IP="${IP:-$IP_LOOKUP}"
export IPV6="${IPV6:-$IPV6_LOOKUP}"

export TIMEZONE="$(cat /etc/timezone)"

export DOCKER_CONFIGS="$HOME/.pihole"

exec docker-compose "$@"
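The awk one-liner in the script walks the fields of `ip route get` output and prints the field right after the `src` keyword, which is the host's source address. Here it is against a canned sample line (the addresses and interface name are made up for illustration):

```shell
# Typical shape of `ip route get 8.8.8.8` output on a LAN host
# (fabricated sample, not a live lookup).
route='8.8.8.8 via 192.168.0.1 dev enp1s0 src 192.168.0.108 uid 1000'

echo "$route" | awk '{for(i=1;i<=NF;i++) if ($i=="src") print $(i+1)}'
# 192.168.0.108
```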

And our `docker-compose.yml` file, modified from this sample.
version: '2'
services:
  pihole:
    container_name: pihole
    restart: unless-stopped
    image: pihole/pihole
    environment:
    - ServerIP=$IP
    - ServerIPv6=$IPV6
    - TZ=$TIMEZONE
    - WEBPASSWORD=*foobar*
    - DNS1=127.0.0.1
    - DNS2=1.1.1.1
    volumes:
    - $DOCKER_CONFIGS:/etc/pihole/
    - $DOCKER_CONFIGS/dnsmasq.d/:/etc/dnsmasq.d/
    ports:
    - "80:80"
    - "443:443"
    - "53:53/tcp"
    - "53:53/udp"
    - "67:67/udp"

Betta Spawn Log : BSL20181105 : HMPK Super Red (M) x HMPK Super Red (M)

Another sibling pair we bought, similar to breeding project BSL20181005. We tend to get sibling pairs so we can obtain the same colour as the parent Bettas instead of mixing pairs of different colours. Our previous breeding projects produced offspring with less than desirable colours (Rojak-like, as shown below).


For breeding Super Red Betta, this is our third breeding project. Breeding project BSL20180518 did produce good offspring, although the number obtained was way too small, roughly around eight, and sadly no female Bettas at all. Another breeding project, BSL20180316, although the female was not a Super Red, did yield one (yes, just one) surviving Super Red. Worse still, it's a very small male Betta.

What do we hope for from this breeding project? More female Bettas so we can continue our breeding project for the coming generations.

Male: HMPK Super Red (M)
Age: 4+ months
Temperament: Sluggish and slow.
Size: Medium (3.5cm body only)
Grade: C

When we first bought this male, its movement was quite sluggish and slow. Furthermore, the anal fin was long and the caudal fin was not symmetrical. Nevertheless, the body size was large and good.


Female: HMPK Super Red (M)
Age: 4+ months
Temperament: Curious and active.
Size: Medium (3.0cm body only)
Grade: B

Nothing special about this female Betta. Just that the body was quite thin and long.



Log Notes
2018-09-23
Bought this pair from the Betta farm.

2018-11-01
Started conditioning. We don't use the usual glass aquarium as the breeding tank but a styrofoam box instead. The reason is that we want to try a different approach using a different container. Furthermore, a styrofoam box ensures both fishes will not be disturbed and keeps the temperature stable.


2018-11-05
Mating happened and eggs were observed in the bubble nest.

2018-11-07
Some fry were seen swimming freely. We estimated that this spawn was around 60-plus or more. Based on past experience, not all the fry will survive the breeding period. As usual, the female was removed immediately.



2018-11-11 (1st week)
We've decided not to use the leaving-father-with-fry method and removed the male Betta immediately.

2018-11-18 (2nd week)
BBS feeding as usual. Growth rates were inconsistent among the fishes.

2018-11-25 (3rd week)
BBS feeding as usual. Growth rates were inconsistent among the fishes.

2018-12-02 (4th week)
Total fry count was around 30-plus, half of what was observed initially in the first week.

2018-12-09 (5th week)
To boost the growth of these fishes, we decided to feed BBS twice per day.

Retrospection
1/ Pick a good quality male Betta if possible, as it will save us time compared with fixing defects over multiple breeding projects.

Pi-hole with LXD - Installation and Setup

Pi-hole is a wrapper for your DNS server that blocks advertisements and trackers. We're using it on our home network to block all those unnecessary bandwidth-wasting contents. Setting it up for any of your devices is quite straightforward; just make sure your router points to it as the DNS server.

While a Docker image exists, we installed it within an LXD container, since we already have an LXD host on our small homelab server, Kabini (more on this in coming posts).

First we setup the container based on Ubuntu 18.04.
$ lxc launch ubuntu:18.04 pihole
$ lxc list -c=ns4Pt
+--------+---------+----------------------+----------+------------+
|  NAME  |  STATE  |         IPV4         | PROFILES |    TYPE    |
+--------+---------+----------------------+----------+------------+
| pihole | RUNNING | 10.53.105.102 (eth0) | default  | PERSISTENT |
+--------+---------+----------------------+----------+------------+

Looking at the table above, notice the container was created based on the default profile, and the IP we obtained is within the 10.x.x.x range. What we need to do is create a new profile that makes the container accessible to other machines on the LAN. Hence, we need to switch from the bridge to macvlan.

The `eth0` device links to your host's network adapter, which can have a different name, for example, `enp1s0` (LAN). Note that you can't do this over a Wifi interface, as a Wifi access point by default only accepts a single MAC address per client.
$ lxc profile copy default macvlan
$ lxc profile device set macvlan eth0 parent enp1s0
$ lxc profile device set macvlan eth0 nictype macvlan

Stop the `pihole` container so we can switch the profile to `macvlan`.
$ lxc stop pihole
$ lxc profile apply pihole macvlan
Profiles macvlan applied to pihole
$ lxc start pihole
$ lxc list -c=ns4Pt
+--------+---------+----------------------+----------+------------+
|  NAME  |  STATE  |         IPV4         | PROFILES |    TYPE    |
+--------+---------+----------------------+----------+------------+
| pihole | RUNNING | 192.168.0.108 (eth0) | macvlan  | PERSISTENT |
+--------+---------+----------------------+----------+------------+

Next, enter the container and install Pi-hole.
$ lxc exec pihole bash
root@pihole:~# curl -sSL https://install.pi-hole.net | bash

LXC/LXD 3 - Installation, Setup, and Discussion

It has been a while (about three years) since I last looked into LXC/LXD (around version 2.0.0). As we're celebrating the end of 2018 and embracing the new year 2019, it's good to revisit LXC/LXD (the latest version is 3.7.0) to see what changes have been made to the project.

Installation-wise, `snap` has replaced `apt-get` as the preferred installation method so we can always get the latest and greatest updates. One of the issues I faced last time was that support for non-Debian distros like CentOS/Fedora was non-existent. To make it work, you had to compile the source code on your own. Even so, certain features were not implemented. Hence, `snap` is a long-awaited way to get LXC/LXD working on most GNU/Linux distros out there.

Install the packages as usual.
$ sudo apt install lxd zfsutils-linux

The `lxd` pre-installation script will ask which version you want to install. If you choose `latest`, it will install the latest version using `snap`. Otherwise, for the stable production 3.0 release, it will install the version that comes with the package.


You can verify the installation method and version of the LXD binary.
$ which lxd; lxd --version
/snap/bin/lxd
3.7

The next step is to configure LXD's settings, especially storage. In our case, we're using ZFS, which has better storage efficiency. The only default value changed was the new storage pool name.
$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]: lxd
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=45GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

If you want to manage containers as a normal user, add yourself to the `lxd` group and refresh the changes.
$ sudo adduser $USER lxd
$ newgrp lxd
$ id $USER | tr ',' '\n'
uid=1000(ang) gid=1000(ang) groups=1000(ang)
4(adm)
7(lp)
24(cdrom)
27(sudo)
30(dip)
46(plugdev)
116(lpadmin)
126(sambashare)
127(docker)
134(libvirt)
997(lxd)

Next, we're going to create our first container and show its status. Downloading the whole container image is going to take a while.
$ lxc launch ubuntu:18.04 c1   
Creating c1
Starting c1

$ lxc list -c=ns4Pt
+------+---------+----------------------+----------+------------+
| NAME |  STATE  |         IPV4         | PROFILES |    TYPE    |
+------+---------+----------------------+----------+------------+
| c1   | RUNNING | 10.53.105.243 (eth0) | default  | PERSISTENT |
+------+---------+----------------------+----------+------------+

This Week I Learned 2018 - Week 49

Last week's post or some old posts.

How to identify and utilize the hidden pocket time available? Surprisingly, there are 13 time slots available. Generally, how do we shed unnecessary time off our daily schedule, for example, choosing what to wear or 40 minutes per day spent buying stuff? Planning, automation, and limiting the choices you have to make. Plan your week ahead, prep your meals up front, or wear the same type of clothing every day. All these prevent decision fatigue by freeing you from unnecessary decisions in your daily life.

What are you thankful for? I share the same sentiment with this person. Of course, personally, be content.

Does quitting social media like Instagram or Facebook make you happier? (via HN) Indeed, as the author experienced, it will make you lighter and thus happier. I believe it's the same experience you feel when going vegetarian for a period; your stomach feels lighter. As usual, moderation is the key, but take note: these apps were explicitly designed to "consume" you. Start slowly. Instead of drastic changes, disconnect yourself during the weekend, then weekdays, and finally remove yourself from it totally.

How does one live with less? Fit everything you own into one carry-on bag. As usual, there is always a subreddit, r/onebag, for it. If you travel a lot for long periods, the author's list of items is a good way to start reducing the "stuff" you own to the bare essential minimum.

How did smooth jazz take over the '90s? When you mix the technicality of Jazz with the melody of Pop music, you have Smooth Jazz.

Is Microsoft Edge (EdgeHTML render engine) or Internet Explorer (Trident render engine) going to be replaced by Microsoft's own version of Chromium? Yes, and finally, bloody yes. (via HN) The demise of the Edge/IE browsers lets me check off an item from my to-do list after so many, many years of painful experiences: numerous hours wasted trying to get web sites and web apps to work correctly with Edge/IE through numerous hacks and workarounds (remember the stupid box model and their refusal to fix it?). Maybe now we can have a consistent and standardized web browser render engine with minimal differences. Yes, they may pull another "embrace, extend, and extinguish" strategy again, but at least right now we have a FOSS web browser engine and Firefox.

Why should you switch to the Firefox web browser? (via HN) If you value and care about privacy. First, Mozilla values your privacy. All the browser data (bookmarks, browsing history, etc.) synced through Mozilla Sync cannot be accessed by any party except you. Second, the Firefox Multi-Account Containers extension, where cookies are not shared and are kept within the container tab itself. This means each tab is a new browser session isolated from other tabs, so you can use multiple identities and accounts simultaneously. For Google Chrome, there is an extension, SessionBox, that does the same, but do you trust a third-party vendor over Mozilla? Third, tracking protection is already built into the browser itself.

Golang Development Environment with GVM in Ubuntu 18.10

It has been a while since I last looked at development using Golang. Since I was reading some Golang code during this period, I might as well look at setting up the Golang development environment in Ubuntu 18.10.

There are several ways to set up your Golang development environment. Two good choices are using the default package installation or using Go Version Manager (GVM). For the default package management, there are several options to choose from, either DEB or Snap, as shown below.
$ go

Command 'go' not found, but can be installed with:

sudo snap install go         # version 1.10.3, or
sudo apt  install golang-go
sudo apt  install gccgo-go 
......

However, if you want several Go versions to co-exist on the same machine, or want the latest and greatest version, Go Version Manager (GVM) is the preferred choice. While my preference is to use the existing package manager (simpler and easier), it's good to look into other approaches. Hence, the focus of this post will be on GVM.

Some prerequisites. Install these packages, and remove any existing Go installation.
$ sudo apt install curl git mercurial make binutils bison gcc build-essential
$ sudo apt remove golang-go
$ sudo snap remove go

Next, download and run the GVM installer. Yes, we all know downloading and running a Bash script from the Interweb is rather stupid and insecure. But what the heck.
$ zsh < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)
Cloning from https://github.com/moovweb/gvm.git to /home/ang/.gvm
Created profile for existing install of Go at "/snap/go/3039"
Installed GVM v1.0.22

Please restart your terminal session or to get started right away run
 `source /home/ang/.gvm/scripts/gvm`

Reload your shell settings.
$ source ~/.zshrc

Find the five most recent stable releases.
$ gvm listall | grep -v -E '(release|beta|rc)' | sort -rn -t. -k2,2 | head -n 5
   go1.11.2
   go1.11.1
   go1.11
   go1.10.5
   go1.10.4
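The `sort -rn -t. -k2,2` trick above sorts numerically on the minor version field only; GNU sort's `-V` (version sort) handles the whole version string and avoids edge cases like `go1.9.x` sorting above `go1.10.x`. A quick check on sample data (the version list here is canned, not live `gvm listall` output):

```shell
# Version sort orders go1.9.7 below go1.10.5 correctly, then we reverse.
printf 'go1.9.7\ngo1.11.2\ngo1.10.5\ngo1.11\n' | sort -rV | head -n 3
```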

Install the binary.
$ gvm install go1.11.2 --binary

Set and use it as the default version.
$ gvm use go1.11.2 --default
Now using version go1.11.2

Now check your installation.
$ which go
/home/ang/.gvm/gos/go1.11.2/bin/go

Next, list the available package sets.
$ gvm pkgset list

gvm go package sets (go1.11.2)

=> global

See the environment settings of the `global` profile.
$ gvm pkgenv global

If you don't like GVM and want to nuke the whole installation.
$ gvm implode

Thinkpad X230 - Tweaking Intel Centrino Advanced-N 6205 [Taylor Peak] Slow Wireless Performance

After using this ISP for so many years, the modem that came with the existing package finally failed. The technician told me that the model was so old there was no replacement stock. Instead, we had to substitute it with another modem cum router. Nevertheless, a simple hardware swap and configuration setup and everything was back to normal. Naturally, the next step was to test the broadband speed from my laptop.

Install the Ookla's speed test CLI tool.
$ sudo apt-get install speedtest-cli

Benchmark the connection speed. This is not entirely accurate, as it depends on the Wifi signal and access protocols, but it gives us a baseline. The result shown below was nothing impressive and seemed wrong; it should be higher.
$ speedtest | grep -E "Download|Upload"
Download: 21.12 Mbit/s
Upload: 18.83 Mbit/s

Checking through the available network adapter in this lappy.
$ lspci | grep Network
00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (Lewisville) (rev 04)
03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205 [Taylor Peak] (rev 34)

Based on the hardware information obtained in the previous step, we want to find out which Wifi protocols this wireless adapter supports.
$ lspci -vv -s 03:00.0 | grep 802
 Subsystem: Intel Corporation Centrino Advanced-N 6205 (802.11a/b/g/n)

Following the instructions from this page, update the iwlwifi (Intel Wireless LAN) driver options to enable antenna aggregation on the Wifi adapter.
$ echo options iwlwifi 11n_disable=8 bt_coex_active=N | sudo tee -a /etc/modprobe.d/iwlwifi.conf
$ sudo modprobe -r iwlwifi
$ sudo modprobe iwlwifi

However, this did not work for me, even after trying different approaches. Perhaps upgrade the internal Wifi adapter to one that supports 802.11ac? But that is damn tricky, as we would need to flash the BIOS to remove the Wifi adapter whitelist.

MSP430 - Online Resources

Some relevant online resources for those who want to start exploring microcontrollers, especially the TI MSP430G2 LaunchPad development board. This page will be updated from time to time.

How/Where to start?
Manufacturer
Technical Documents
  • SLAC485. Sample MSP430G2553 code.
  • SLAU144J. MSP430x2xx Family User's Guide
Resellers
Tutorials / Webinars / Workshops
Community Sites
University courses based on MSP430
  • ECE2049: Embedded Computing in Engineering Design.
  • EE3376: Microprocessors.
Books
Using other programming languages
  • Assembly programming with MSP430. Contains a list of online resources related to assembly programming with MSP430.
  • Mecrisp. Another implementation of Forth for MSP430.
  • noForth. Interactive Forth programming language for MSP430. Do this if you want to learn Forth and hardware control through the Egel project.

Setting Git with P4merge in Babun

In a previous post, we discussed setting up Babun on Windows; the next step is to set up a good merge tool to work with Git. A good and free merge tool is essential when resolving conflicts during rebasing or merging. There are several good tools, but the one we're comfortable with is P4Merge.

If the `p4merge.exe` binary is not found within the Babun shell, you have to update the `$PATH` environment variable to append the exact location of the binary. This can be done through Windows' environment settings as well. Since we want our Git configuration file `.gitconfig` to be portable with minimum tweaking, we can opt to set the direct path instead.

This is where `cygpath` comes in, which converts paths between Unix and Windows.

If we want to find the Windows path to our default Babun installation directory.
$ cygpath -w /
C:\Users\foobar\.babun\cygwin

How about the Unix path for the `p4merge.exe` binary?
$ cygpath -u "C:\Program Files\Perforce\p4merge.exe"
/cygdrive/c/Program Files/Perforce/p4merge.exe

To make sure Git can find the `p4merge.exe` binary from Babun.
$ cygpath -asm "/cygdrive/c/Program Files/Perforce/p4merge.exe"
C:/PROGRA~1/Perforce/p4merge.exe

Next, set `p4merge` as our default merge tool with the direct path to its binary.
$ git config --global merge.tool p4merge
$ git config --global mergetool.p4merge.path C:/PROGRA~1/Perforce/p4merge.exe

If you don't like the tab ordering of the merging windows, customize it to your liking.
$ git config --global mergetool.p4merge.cmd \
    'C:/PROGRA~1/Perforce/p4merge.exe "$BASE" "$LOCAL" "$REMOTE" "$MERGED"'

Additionally, `cygpath` also has built-in options to access default system paths in Windows.
$ cygpath
......
System information:

  -A, --allusers        use `All Users' instead of current user for -D, -O, -P
  -D, --desktop         output `Desktop' directory and exit
  -H, --homeroot        output `Profiles' directory (home root) and exit
  -O, --mydocs          output `My Documents' directory and exit
  -P, --smprograms      output Start Menu `Programs' directory and exit
  -S, --sysdir          output system directory and exit
  -W, --windir          output `Windows' directory and exit
  -F, --folder ID       output special folder with numeric ID and exit

You can open these paths directly in Windows Explorer using `cygstart`.
$ cygstart `cygpath -D`

`cygstart` is a tool used to open almost anything, for example, a web URL, though you must prepend it with `http`.
$ cygstart http://google.com

Zsh with Zgen

In my previous post, I talked about setting up Babun in a Windows environment. One of the main reasons to use the Babun shell is the sensible default settings for the Zsh shell through the oh-my-zsh framework and its abundant plugins.

Initially I tried configuring Zsh using Antigen, based on the post by mgdm. While tmux could start automatically every time a new session started, Zsh couldn't find the antigen-* functions. Hence, I decided to switch the Zsh framework to Zgen instead.

First things first, you will need to switch your current shell from Bash to Zsh. Log out and log in again (I can't find any other way) to reflect the change.
$ chsh -s `which zsh`

Since my dotfiles repository already exists, I only need to add Zgen as a Git submodule.
$ mkdir $HOME/.zsh.d
$ cd $HOME/.zsh.d
$ git submodule add https://github.com/tarjoilija/zgen

The next step is to set up the Zsh config file, `$HOME/.zshrc`, with Zgen.
source $HOME/.zsh.d/zgen/zgen.zsh

ZSH_TMUX_AUTOSTART=true

# If the init script doesn't exist
if ! zgen saved; then
    zgen oh-my-zsh
    zgen oh-my-zsh plugins/tmux

    # Generate the init script from plugins above.
    zgen save
fi

Reload the Zsh settings.
$ source $HOME/.zshrc

Development with Docker Toolbox and Babun

For those stuck with or preferring a Windows environment, Docker Toolbox is the painless and simplest way to have a consistent development environment using Docker. While we can duplicate the same environment through actual virtualization or Windows Subsystem for Linux (WSL), the time taken to configure and set up the environment isn't worth the effort. Furthermore, Docker still doesn't quite work on WSL yet.

While Git Bash for Windows works well for its basic features, a Bash prompt with some *nix utilities, several console apps like rsync and wget are still missing and will not be bundled unless you install them separately. Furthermore, you have to waste time customizing the console prompt to your liking (optional).

This is where Babun, a Windows shell built on Cygwin, comes in. The sensible default settings provided by oh-my-zsh, with a wide range of extensible plugins, are good enough without much tweaking.

Babun Docker
Since we're using Docker Toolbox, we also need to make sure it works with Babun through Babun Docker.

Go to the Docker QuickStart Terminal and stop the Docker Machine.
$ docker-machine stop default

Open Babun shell and install Babun Docker.
$ curl -s https://raw.githubusercontent.com/tiangolo/babun-docker/master/setup.sh | source /dev/stdin
$ babun-docker-update
$ docker-machine start default

Always load the Docker Machine settings in every newly opened terminal session.
$ printf '# Docker Machine stuff\neval $(docker-machine env default --shell zsh)\n' >> ~/.zshrc

Zsh and tmux
Next, and quite crucial for me, is to autoload tmux every time Babun or Zsh starts.
$ pact install tmux

In your `.zshrc`, enable the tmux plugin and set it to autostart.
# Enable this before the plugin
ZSH_TMUX_AUTOSTART=true

plugins=(git tmux)

The next step is to download my own tmux configuration file and reload the Zsh shell to reflect the changes. You can also close and re-open the Babun shell. There was an issue where the Babun shell could not start properly due to an unknown error; updating your Babun installation (which is actually Cygwin) should fix it.
$ wget https://raw.githubusercontent.com/kianmeng/dotfiles/master/.tmux.conf
$ source $HOME/.zshrc

Slowness
Depending on the number of plugins enabled and the theme selected, the Babun shell can be quite slow. Keep your enabled plugins to a minimum, pick a less resource-intensive theme, use a Zsh framework to manage these plugins, or optimize Cygwin itself. One way to check whether your shell prompt is slow is to run this command.
$ babun check

This Week I Learned 2018 - Week 48

Last week's post or read the old stuff instead.

What is this console app that always gives me a conflicting experience every time I use it? ImageMagick. Besides spawning the GraphicsMagick fork and using a complex XML format for its configuration settings (surprising for a 28-year-old program at the time of writing), it always failed when processing a large number of files exceeding its default resource limits. Tweaking the settings or disabling the limits did not resolve the crash. Switching to GraphicsMagick yielded the same result. The workaround was to convert each image file to PDF in parallel and merge all the PDF files into a single large PDF.
find . -name '*.jpg' | parallel --progress convert {} {.}.pdf
pdfunite *.pdf scanned_doc.pdf
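GNU parallel's `{.}` placeholder is the input path with its extension stripped, which is what turns `photo.jpg` into `photo.pdf` above. The same renaming can be done with plain shell parameter expansion (the echo stands in for the actual `convert` call, so no ImageMagick is needed to try this):

```shell
# ${f%.*} strips the last extension, mirroring parallel's {.} placeholder.
for f in scan01.jpg scan02.jpg; do
    echo "convert $f ${f%.*}.pdf"
done
```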

Do Webpack, Docker Machine (through VirtualBox), and a Unix-based host OS work well together? No, not really when it comes to watching file changes and hot reloading. First, inotify, the Linux kernel subsystem which notifies applications of file changes, is not and will not be supported in VirtualBox shared folders. How about switching to VMWare? Nope, not supported either. Switching Webpack's watch method to polling has its own issues as well: CPU usage and hot reloading depend on the polling frequency. The key is to find an acceptable polling interval. Of course, there are workarounds, but not to my liking. If you read carefully, the optimal solution using inotify does not work because of the limitations of shared folders through Network File System (NFS) and how file change events are not communicated between the Docker host and container.

What should we consider when designing a RESTful API for third-party usage? There are four rules: (1) use API keys for every request endpoint; (2) regulate usage through rate limiting, using HTTP 429 and the API key instead of the IP address; (3) revoke API keys on usage violations, but provide an API for clients to check their rate limits; and (4) use other means beyond the API key to validate authentication and authorization.

What are the differences between COALESCE and IFNULL in MySQL? There are several, but COALESCE is the preferred choice because (1) it's standard SQL and should work across multiple DBMSes (but do we switch databases that often?), (2) COALESCE supports multiple arguments and returns the first non-NULL value, while IFNULL only supports two arguments, and (3) IFNULL is slightly faster than COALESCE. Interestingly, an undefined value (1/0) is treated as NULL, i.e. a missing value.
mysql> SELECT IFNULL(1/0,'yes');
yes

mysql> SELECT COALESCE(1/0,'yes');
yes

What is the equivalent of `which` in Windows? `where`, as shown below.
C:\>where notepad
C:\Windows\System32\notepad.exe
C:\Windows\notepad.exe