
Pi-hole with LXD - Installation and Setup

Pi-hole is a DNS server wrapper that blocks advertisements and trackers. We're using it on our home network to block all that unnecessary, bandwidth-wasting content. Setting it up for any of your devices is quite straightforward; just make sure your router points to it as the DNS server.

While a Docker image exists, we installed it within an LXD container, since we already have an LXD host in our small homelab server, Kabini (more on this in coming posts).

First, we set up the container based on Ubuntu 18.04.
$ lxc launch ubuntu:18.04 pihole
$ lxc list -c=ns4Pt
|  NAME  |  STATE  |         IPV4         | PROFILES |    TYPE    |
| pihole | RUNNING | (eth0) | default  | PERSISTENT |

Looking at the table above, notice that the container was created based on the default profile, so the IP we obtained is within the 10.x.x.x range. What we need to do is create a new profile which will make the container accessible to other machines on the LAN. Hence, we need to switch from bridge to macvlan.

The `eth0` network adapter links to your host's network adapter, which can have a different name, for example `enp1s0` (LAN). However, you can't bridge a WiFi interface to an ethernet interface, as WiFi by default only accepts a single MAC address from a client.
$ lxc profile copy default macvlan
$ lxc profile device set macvlan eth0 parent enp1s0
$ lxc profile device set macvlan eth0 nictype macvlan

Stop the `pihole` container so we can switch the profile to `macvlan`.
$ lxc stop pihole
$ lxc profile apply pihole macvlan
Profiles macvlan applied to pihole
$ lxc start pihole
$ lxc list -c=ns4Pt
|  NAME  |  STATE  |         IPV4         | PROFILES |    TYPE    |
| pihole | RUNNING | (eth0) | macvlan  | PERSISTENT |

Next, enter the container and install Pi-hole.
$ lxc exec pihole bash
[email protected]:~# curl -sSL | bash

LXC/LXD 3 - Installation, Setup, and Discussion

It has been a while (about three years) since I last looked into LXC/LXD (around version 2.0.0). As we're celebrating the end of 2018 and embracing the new year 2019, it's a good time to revisit LXC/LXD (the latest version is 3.7.0) to see what changes have been made to the project.

Installation-wise, `snap` has replaced `apt-get` as the preferred installation method, so we can always get the latest and greatest updates. One of the issues I faced last time was that support for non-Debian distros like CentOS/Fedora and the like was non-existent. To make it work, you had to compile the source code on your own. Even so, certain features were not implemented or possible. Hence, `snap` is a long-awaited way to get LXC/LXD to work on most GNU/Linux distros out there.

Install the packages as usual.
$ sudo apt install lxd zfsutils-linux

The `lxd` pre-installation script will ask you which version you want to install. If you choose `latest`, it will install the latest version using `snap`. Otherwise, for the stable production 3.0 release, it will install the version that came with the package.

You can verify the installation method and version of the LXD binary.
$ which lxd; lxd --version

The next step is to configure LXD's settings, especially storage. In our case, we're using ZFS, which has better storage efficiency. The only default value changed was the new storage pool name.
$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]: lxd
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=45GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

If you want to manage containers as a normal user, add yourself to the `lxd` group and refresh the changes.
$ sudo adduser $USER lxd
$ newgrp lxd
$ id $USER | tr ',' '\n'
uid=1000(ang) gid=1000(ang) groups=1000(ang)

Next, we're going to create our first container and show its status. Downloading the whole container image is going to take a while.
$ lxc launch ubuntu:18.04 c1   
Creating c1
Starting c1

$ lxc list -c=ns4Pt
| NAME |  STATE  |         IPV4         | PROFILES |    TYPE    |
| c1   | RUNNING | (eth0) | default  | PERSISTENT |

Golang Development Environment with GVM in Ubuntu 18.10

It has been a while since I last looked at development using Golang. Since I was reading some Golang code during this period, I might as well look at setting up the Golang development environment in Ubuntu 18.10.

There are several ways to set up your Golang development environment. Two good choices are using the default package installation or using Go Version Manager (GVM). For default package management, there are several options to choose from, either DEB or Snap, as shown below.
$ go

Command 'go' not found, but can be installed with:

sudo snap install go         # version 1.10.3, or
sudo apt  install golang-go
sudo apt  install gccgo-go 

However, if you want several different Go versions to co-exist on the same machine, or want the latest and greatest version, Go Version Manager (GVM) is the preferred choice. While my own preference is to use the existing package manager (simpler and easier), it's good to look into other approaches. Hence, the focus of this post will be on GVM.

Some prerequisites. Please install the packages below, and remove any existing Go installation.
$ sudo apt install curl git mercurial make binutils bison gcc build-essential
$ sudo apt remove golang-go
$ sudo snap remove go

Next, download and run the GVM installer. Yes, we all know downloading and running a Bash script from the Interweb is rather stupid and insecure. But what the heck.
$ zsh < <(curl -s -S -L
Cloning from to /home/ang/.gvm
Created profile for existing install of Go at "/snap/go/3039"
Installed GVM v1.0.22

Please restart your terminal session or to get started right away run
 `source /home/ang/.gvm/scripts/gvm`

Reload your shell settings.
$ source ~/.zshrc

Find the most recent 5 stable releases.
$ gvm listall | grep -v -E '(release|beta|rc)' | sort -rn -t. -k2,2 | head -n 5
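Those `sort` flags can be sanity-checked with a hypothetical version list (not gvm's actual output): `-t.` splits fields on the dot, and `-k2,2 -rn` sorts the minor version number numerically, descending.

```shell
# hypothetical version list piped through the same sort flags as above
printf 'go1.9.7\ngo1.11.2\ngo1.10.5\n' | sort -rn -t. -k2,2
# prints go1.11.2, go1.10.5, go1.9.7 (largest minor version first)
```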

Install the binary.
$ gvm install go1.11.2 --binary

Set and use the default binary.
$ gvm use go1.11.2 --default
Now using version go1.11.2

Now check your installation.
$ which go

Now, check your Golang related environment paths.
$ gvm pkgset list

gvm go package sets (go1.11.2)

=> global

See the environment settings of the `global` profile.
$ gvm pkgenv global

If you don't like GVM and want to nuke the whole installation:
$ gvm implode

Upgrading to Ubuntu 18.10 (Cosmic Cuttlefish)

Ubuntu 18.10 was released last week and I've managed to upgrade most of my machines to this newer version. Compared to the upgrade experience of 18.04, no show stopper was encountered. Reading through the release notes, it's not a major release, just upgrades to existing packages. However, some interesting items caught my attention.

1/ GNOME Disks now supports VeraCrypt. The name sounded familiar until I googled it. It seems VeraCrypt is a fork of the discontinued TrueCrypt, a popular disk encryption software from many moons ago. A few old acquaintances loved using this software to stash their favourite collections.

2/ Sound over-amplification. This is a welcome feature, especially if you have lots of music files with an incorrect volume range. The previous workaround was to play them through VLC, which supports this.

This Week I Learned 2018 - Week 40

Last week's post or something from the past.

What is the most expensive Koi fish sold to date? USD 1.7 million! Yes, that was the bid for a champion Koi fish from Sakai Fish Farm. From the video's comments, the lady bidder is Chung Ying Ying, the Koi queen from Taiwan. I'm not really a fan of Koi fish, as I enjoy viewing fishes from the side rather than from the top.

How can you write longer articles without adding more words? Switch to the Times Newer Roman font, which makes each character 5 to 10 percent wider.

Why I'm grateful to live in MY instead of ID. (via Reddit) It's quite crazy how many instances of earthquakes and typhoons there were in our neighboring countries. In return, we got their haze and labour.

Why should I croak instead of die? Use croak for the caller's mistake, and die for the code's own mistake. According to the Carp module documentation,
The Carp routines are useful in your own modules because they act like die() or warn(), but with a message which is more likely to be useful to a user of your module. 
What if a subroutine returns a list but you want to assign it as an array reference? Use an anonymous array.

How to resolve "Cannot determine local time zone" in Ubuntu under the WSL environment? You will need to reconfigure your timezone again as a workaround for a WSL constraint.

This Week I Learned 2018 - Week 37

Something from the archive or last week's post.

If you want to make a YouTube tutorial video on photography, how should you do it? So far, nothing can top this video. Well choreographed, an interesting topic, and a relevant demonstration of applying the Morandi (a famous 20th-century Italian still life painter) colour style in your photography. What impressed me was that the tutorial did not focus on post-processing, but instead stressed the importance of scene selection and the model's clothing choices. Sometimes, you simply can't post-process (photoshop) everything.

How easy is it to set up a development workstation in Ubuntu these days? Seconds, if you exclude the time needed to download all the packages. I've been looking into TypeScript, React, and VS Code these days and it's the right time to set up a new development environment through Ubuntu's Snap.
$ sudo snap install node --channel=10/stable --classic
$ sudo snap install vscode --classic
$ sudo snap install --edge typescript --classic

What is the best approach to reading a book? Reading with a pencil (via HN), also known as marginalia. The idea is simple: you're basically collaborating with the book's author by scribbling down your questions, thoughts, and ideas in the free margin space (limited in some books). In other words, purposeful annotation while reading, or active reading (suitable for research papers, but not for book genres you read for leisure). Also, this reading method is not applicable to ebook readers (yes, the reMarkable exists, but the steep price does not justify it), which still do not provide a good paper-like experience for doodling.

Is jQuery dead? Not yet, but soon, probably within a few years. Reading through the blog post by GitHub Engineering on removing jQuery from the GitHub frontend, little did I realize that frontend JavaScript has matured enough to deprecate jQuery. What does this indicate? The web has moved beyond the dreadful old incompatible Internet Explorer versions, which were the main reason for the jQuery project's existence. What's next? TypeScript becomes ES Next (maybe?) and the standardization and popularity of custom web components. One thing is for sure: old things will be rediscovered, reimplemented, and rehyped again and again, as usual. Same old, same old. (ง'̀-'́)ง

How to teach yourself hard things? (via HN) Alternatively, Richard Hamming tackled this in his The Art of Doing Science and Engineering: Learning to Learn course, and Edward Kmett in his Stop Treading Water: Learning to Learn lecture. Furthermore, comments on HN provide a few good gems in the areas of exercising, programming, and physics. However, this is only applicable to those who are disciplined, have intrinsic motivation and good quality sleep, and don't burn out (you will eventually). In short, learning will come naturally if you're interested in tackling the problem itself. Time is limited; pick your battles wisely.

Why is Microsoft Word a better writing tool than LaTeX? Reading through the post by Thorsten Ball on the tools he used to write his book (via HN) reaffirms the mistakes I've made when typesetting documents, books, and theses using LaTeX, ConTeXt, and pandoc. Fancy tools may distract you from doing what matters most, the writing itself. If the writing is difficult, we can be sidetracked by fiddling with these tools under the pretense of productive procrastination. That's why a slow and noisy typewriter (surprisingly still expensive these days) was such an efficient tool for writing. You can't do anything else but type or write. Which is why so many distraction-free editors exist in these Interweb days.

Why do I still love PostgreSQL after all these years? 100-plus custom data types (even tables and views can be types as well) supported in the database itself (via HN). Programming languages can change numerous times for a long-maintained system; I'm not so sure about the database system. Some developers prefer strongly typed programming languages, but they seldom look into database systems with custom data type support.

How do we test a web service API through the console or command line? (via HN) There are so many choices, like Strest, Newman (the console version of Postman), shakedown (a Bash script), Karate DSL, UnRAVL, Artillery, and Tavern (Python-based). Coming from a console background, I prefer shakedown and Tavern due to their simplicity.

To rent or buy a house? HN user isostatic gave a practical answer to this question. Buy if you're investing, having kids, or don't want to be forced to move. Rent if you don't want to maintain the house.

What is a symptom of a midlife crisis, in a good way? Extreme athleticism. (via HN) One key point I agree with the writer on is that we're preparing for the coming old age, as highlighted in this quote. Interesting days ahead.
...... extreme fitness is less about being young again and more about building yourself up for the years ahead. In other words, getting better at getting older.

This Week I Learned 2018 - Week 25

Week 24 post and something from archive.

The last sword saint of China, Yu Chenghui (于承惠). In Chinese cinema, no other actor can be found who could portray such roles, from a domineering villain to a grandmaster in mountain seclusion. Unfortunately, in his later years he did not take part in any wuxia films.

How to Survive Your 40s? (via KH). As someone who is about to take a leap into this new decade, I can probably relate (the screenshot below tells quite a lot as well) to the author's experience. For a few years now, younger people have started to call me "uncle" (my choice of clothing contributed to that as well). It's a sudden but natural shift that comes with age. The article reminded me of a Korean movie (can't remember the name) I watched a few weeks back. Basically, the protagonist (someone in his 50s) said you need to see this milestone as your second 20s, the second chance to reflect on or follow up with what you did in your 20s (differently this time). The to-do list from so many years ago is still so long, and it will keep me occupied for some time.

What's the difference between Perl and Python? If you need a comparison of the two programming languages, the book "Scripting with Objects: A Comparative Presentation of Object-Oriented Scripting with Perl and Python", while quite dated (it was written in 2008), provides some insights into the differences between them. In the end, the rising popularity of Python and the emergence of Perl 6 showed that being opinionated, or having a standard way of doing things, won.

Why do you need to set a default value in a `sub` in Moo or Moose? Because the subroutine wrapper returns a unique reference every time you create a new object.

How do you boot from a USB thumb drive from GRUB itself? Yes, this is possible (do read the whole discussion). You must go to the GRUB console by pressing 'c'. Remember that you can use tab completion to find out which removable media and partition to use. It's quite annoying that sometimes the BIOS cannot detect the removable media (thumb drive) and can't boot from the device itself.
grub> ls
grub> set root=(hd1,msdos2)
grub> chainloader +1
grub> boot

On a related note, migrating from Fedora 27 to Fedora 28 was such a painful experience. The keyboard and mouse were very laggy and at times did not work at all. Fedora 28 was such a letdown. In the end, I had to wipe out the whole installation and replace it with Ubuntu 18.04, and everything works as intended. Seriously, Fedora, what is going on with the 28 release?

Why do they say Perl is a more advanced scripting language for system administrators? See App::GitHubPullRequest, a Perl console tool that glues together three different console tools: git, stty, and curl.

How to train your kids to do house chores voluntarily? (via HN) Empowerment since toddlerhood.

Dreadful tasks? Just try, give it a while.

Which Perl modules to use when making HTTP requests? There are so many.

How do you do a dispatch table in Perl? I found an old discussion (2010) on HN. The book Higher-Order Perl has a whole chapter (PDF) on this topic.

Upgrading to Ubuntu 18.04 Bionic Beaver

Yes, the regular update of the Ubuntu distro on my lappy. While reading back my old posts, I realized that I've written a personal note for almost every Ubuntu release, like 17.10, 17.04, 15.10, and 13.04. Not sure why, but I didn't jot down any upgrade notes for 16.10, 16.04, 14.10, 14.04, 13.10, and earlier.

The upgrade was done as usual but with a few hiccups. A full upgrade was possible with a few manual interventions in the package management.

First, not enough free disk space in '/boot' folder.

The upgrade has aborted. The upgrade needs a total of 127 M free
space on disk '/boot'. Please free at least an additional 82.7 M of
disk space on '/boot'. You can remove old kernels using 'sudo apt
autoremove' and you could also set COMPRESS=xz in
/etc/initramfs-tools/initramfs.conf to reduce the size of your

I resolved this by removing all the previous Linux kernels, and was surprised to learn that my machine had so many different versions lying around. No wonder there was limited space available.
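The other fix suggested by the error message is to shrink future initramfs images; a sketch of the relevant line in `/etc/initramfs-tools/initramfs.conf`:

```shell
# /etc/initramfs-tools/initramfs.conf
# xz compresses the initramfs harder than the default gzip, saving space in /boot
COMPRESS=xz
```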

Second, the upgrade halted due to a package dependency issue. Not sure why. Googling for answers and trying the usual solutions did not help at all; I kept getting the same old error message.

E: Sub-process /usr/bin/dpkg returned an error code (1)

dpkg: error processing archive 
   /var/cache/apt/archives/libmariadb3_3.0.3-1build1_amd64.deb (--unpack):
 trying to overwrite '/usr/lib/x86_64-linux-gnu/mariadb/plugin/', 
   which is also in package libmariadb2:amd64 2.3.3-1
 Errors were encountered while processing:
 E: Sub-process /usr/bin/dpkg returned an error code (1)

At the end of the day, the Synaptic tool saved the day by resolving all the conflicts.

Is it just me, or does `apt` not have the right default options to resolve conflicts compared to Synaptic?

Now for the changes. Reading through the release notes, I learned a few things and realized that I had quite lost touch with the server side of the Ubuntu distro.

1/ Netplan, the network configuration abstraction renderer. Basically, it's a tool to manage networking through YAML files. Surprisingly, the console tool was written in C instead of the usual Python. Not sure why, but surely there must be a good reason.
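Netplan's YAML looks roughly like this minimal sketch (the file name and interface name are assumptions; changes are applied with `sudo netplan apply`):

```yaml
# /etc/netplan/01-netcfg.yaml (hypothetical path) -- DHCP on a wired interface
network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0:        # assumed interface name
      dhcp4: true
```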

2/ New features are only available for new installations, not upgrades. For example, a swap file instead of a swap partition, Python 3 over Python 2, and full disk encryption (LUKS) instead of folder encryption.

3/ Subiquity, the new server installer, is now available for server users. Definitely a DIY solution to differentiate themselves from the default Debian installer.

4/ LXD 3.0, a better alternative to Vagrant or Docker. I've lost track of this project. Maybe it's the right time to look into it and get my homelab machine running it again.

5/ chrony replaced ntpd (there is a comparison as well). One good thing is that chrony is licensed under GPLv2.

6/ On the desktop front, from the GNOME 3.28 release notes, Boxes is getting some much-needed love. The previous version was so buggy that it made you wonder why it was ever released in the first place.

Upgrading to Ubuntu 17.10 Artful Aardvark Beta

Since Ubuntu 17.10 has reached its final beta (Canonical is going to release it by Oct 19), I might as well upgrade my lappy from 17.04. The upgrade process still took quite a while, but so far no show stopper issues.

Upgrading to the beta release is quite straightforward; just type this command.
$ sudo do-release-upgrade -d

Release details with Linux kernel 4.13.
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu Artful Aardvark (development branch)
Release:        17.10
Codename:       artful

$ uname -a
Linux thinkpad 4.13.0-12-generic #13-Ubuntu SMP 
Sat Sep 23 03:40:16 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

Some issues encountered or observations after the upgrade.

(1) Invalid MIT-MAGIC-COOKIE-1 key Cannot open display ":0"
A leftover issue from switching from X to Wayland? Resolved this by removing the `.Xauthority` file.
$ rm -rf ~/.Xauthority

(2) GNOME Shell was noticeably slow.
I'm currently running GNOME 3.26, but it is considerably slower compared to the previous version. Maybe this is due to switching to Wayland?
$ gnome-shell --version
GNOME Shell 3.26.0

(3) New Settings application.
I never paid much attention, but the new Settings layout has switched to a sidebar style, which is easier to read and navigate. Seems to be influenced by mobile devices?

(4) Shutter, the screenshot application, is broken.
You can't capture screenshots using Shutter anymore, as Wayland does not allow an application to capture the content of another application (for security reasons). Unfortunately, you have to use GNOME Screenshot, which has limited functionality (you can't bloody capture a screenshot of the tool itself).

Upgrade to Ubuntu 17.04 Zesty Zapus

I just read that Ubuntu 17.04 was supposed to be released today and was eager to try it out. While nothing special or significant was added in this release, it's nevertheless good to have the latest and greatest when possible. Furthermore, I accidentally `rm -rf`'d my home directory a few days back.

Running through the typical upgrade steps.
$ sudo apt-get update
$ sudo apt-get dist-upgrade
$ sudo do-release-upgrade -d

However, the upgrade seemed to fail due to broken packages; see below. Most likely due to the many legacy PPAs added for certain special packages.
Calculating the changes

Could not calculate the upgrade

An unresolvable problem occurred while calculating the upgrade.

This can be caused by:
* Upgrading to a pre-release version of Ubuntu
* Running the current pre-release version of Ubuntu
* Unofficial software packages not provided by Ubuntu

Found the answer to troubleshoot this issue. You must keep the upgrade running at the same time; otherwise, the `apt.log` file will not be found. I was surprised that I had so many broken and conflicting packages. Following the procedure, I had to manually remove all broken packages. Certain packages I had to reinstall because the upgrade couldn't find the actual packages. The results below are truncated to save space.
$ grep Broken /var/log/dist-upgrade/apt.log

Broken libdouble-conversion1:amd64 Breaks on libdouble-conversion1v5:amd64 ......
Broken imagemagick-6-common:amd64 Breaks on libmagickcore-6.q16-2:amd64 ......
Broken libgjs0f:amd64 Conflicts on libgjs0e:amd64 < 1.46.0-1 @ii mK >

The upgrade was slow, as I had just installed LaTeX a few days back. Downloading all those `texlive-` packages is going to take some time.

This Week I Learned - 2017 Week 14

Last week's post, or you can go through the whole series.

The proposal has been presented and submitted. Standard feedback received. Nevertheless, better than nothing, regardless of the quality of the reactions.

#1 GTCafe Studio. Stumbled upon this site while searching for different covers of Guthrie Govan's Emotive Ballad. It's rare these days to find any blog with good original content. Reading through his journal on learning guitar made me reflect on my decision to donate all my guitars away a few years back. Maybe it's time to start all over again? Or maybe not? Learning to play a musical instrument is one way to escape mind-numbing daily routines. However, there is a time and place for everything in life. In hindsight, sometimes you just have to move on.

#2 "CentOS is not stable, it is stale". So true that it hurts. For me, as a whole, Fedora provides a better desktop experience than Ubuntu. Yet, I still revert to Ubuntu for my daily usage. Why? APT is definitely better than YUM, and there is plenty of software selection. Furthermore, LXD works only in Ubuntu and not Fedora. And yes, Canonical finally realized that and declared that Ubuntu's Unity will be replaced by GNOME in 18.04 LTS. Maybe the Ask HN post gathering community feedback on Ubuntu 17.10 finally sealed Unity's fate?

I always wonder what would have happened if Red Hat had decided to build a distro based on Debian or the DPKG package manager instead of creating their own RPM packaging manager. A unified GNU/Linux desktop would have come sooner, rather than unnecessary fragmentation and duplicated effort. For example, the competition between the next-generation display servers, Mir and Wayland. Yes, I know having options and competition is good for progress. But the priority and effort should be on fixing the damn display driver performance and stability issues. Fragmentation leads to duplication of work.

#3 Five great Perl programming techniques to make your life fun again. An old article, 11 years old, but everything described there is as relevant today, especially iteration using `map` and `grep`, and the Dispatch Table, as illustrated in the example below. As Perl does not have a `switch` statement, using a Dispatch Table is a good Perl design pattern. Mark Jason Dominus, in his book Higher-Order Perl, also devoted a whole chapter (PDF) to this matter.
my $dispatch_for = {
   a => \&display_a,
   b => \&display_b,
   q => sub { ReadMode('normal'); exit(0) },
   DEFAULT => sub { print "That key does nothing\n"; },
};

my $func = $dispatch_for->{$char} || $dispatch_for->{DEFAULT};
$func->();

#4 Perl 5 Internals (PDF). Interesting reading on the intricacies of Perl itself. It was brought to my attention that Perl is a bytecode compiler, not an interpreter or a compiler.

#5 The 'g' key shortcuts in Vim. You learn something new every day; there are so many key bindings. Surprisingly, I only knew and regularly used two. I really need to refresh and relearn this.

Samsung M2070W WiFi Printing with TP-Link Archer C7

It took me a while to finally set this up and, to my amazement, it is actually quite damn simple. While there is another way to do it, through Samsung's Printer Settings Utility, unfortunately the `` does not exist anymore in Ubuntu 16.10. Therefore, to get this to work, we have to resort to the WPS method.

There are three devices you will need to set up properly: your laptop, the router, and lastly the printer itself.

1. Install the Samsung printer driver on your Ubuntu system. You should be able to print through a USB cable. See the Samsung Unified Linux Driver Repository instructions.

2. Next, get the network configuration details of the printer, specifically the MAC address. From the printer buttons: Menu -> 4. Network -> Network Conf. -> Print? -> Yes. Jot down the MAC address.

3. Go to the router: DHCP -> Address Reservation -> Add New. Assign the MAC address from Step 2 to a fixed IP address so we always connect to the printer using a consistent IP address. Reboot the router. If you have enabled MAC filtering, do remember to whitelist the MAC address of the printer.

4. Again, in the router, we need to enable WPS. Go to Wireless 2.4GHz -> WPS -> Enable WPS. You may need to reboot the router again if WPS was not enabled.

5. Continue by clicking the Add Device button. There are two options to add a new device. Pick the second option and click Connect.

6. Go to your printer and press the WPS button for more than 2 seconds. Wait until the printer has connected to the router. Once the printer is connected, you can disable WPS in the router. Your printer will now be part of your home network and assigned an IP address.

7. In your Ubuntu installation, open the browser connection to the CUPS management site. Go to `CUPS for Administrators` -> `Adding Printers and Classes` -> `Manage Printers` -> select the printer you installed in Step 1 -> `Administration` -> `Manage Printer`. You will be prompted for login credentials. Use the same credentials you use to log in to the Ubuntu desktop environment.

We're going to use the Internet Printing Protocol (IPP). While there are many printing protocols, for convenience's sake we will pick IPP. You can obtain the full IPP address from Step 2. It looks something like `ipp://`. The IP address is the one set in Step 3.

8. Unplug your USB cable and print a sample test page from your laptop. If everything has been set up properly, you should be able to print wirelessly.

9. Additionally, to enable Samsung Cloud Print, you can manage the printer remotely through the SyncThru™ Web Service. Open up your browser and connect using these details. You will see this page if everything has been set up correctly.

Login : admin
Password: sec00000

MSP430 - Serial Communication Monitoring

Following up on the last post: I was supposed to write about the usage of MSPDebug, but instead we will discuss capturing the output from the LaunchPad.

The default microcontroller, MSP430G2553, comes pre-installed with a demo Temperature Measurement program. Once you've powered up the LaunchPad via the USB port, the red and green LEDs will toggle alternately. Pressing the P1.3 button (left of the blinking LEDs) will start the measurement. To reset, press the reset button (right of the blinking LEDs).

Since the only visual indicator of the program (did I mention I should have bought the other LaunchPad, with an embedded LCD?) is through the LEDs, how do we obtain the temperature measurement? Basically using mspdebug, stty, and cat (similar post but with more descriptions):
$ mspdebug rf2500 exit
$ stty 2400 -F /dev/ttyACM0
$ cat /dev/ttyACM0

There are other ways to read or sniff raw data from serial ports but this is out of the scope of the discussion here.

Press the P1.3 button to start the measurement. You will obtain a series of characters as shown below. Each character is the numerical representation of an ASCII character: P indicates 80 °F / 26.67 °C and Q indicates 81 °F / 27.22 °C.
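The character-to-number mapping is easy to confirm from the shell, since `od` can print each byte's decimal value ('P' is ASCII 80, 'Q' is 81):

```shell
# print the decimal ASCII value of the character 'P'
printf 'P' | od -An -tu1
# prints 80 (with leading padding)
```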

From the output above and the `dmesg` output when we plugged in the USB cable to power up the LaunchPad, the device is assigned the device name `ttyACM0` instead of `ttyUSB0`. TTY stands for TeleTYpewriter.
[21518.005311] cdc_acm 2-1.2:1.0: No union descriptor, testing for castrated device
[21518.005357] cdc_acm 2-1.2:1.0: ttyACM0: USB ACM device

Why so? It turns out that USB communication devices like embedded microcontrollers or mobile phones reuse an old control model called the Abstract Control Model (ACM) for exchanging raw data. In other words, these devices are identified as modems, and communication is done by reusing an old modem communication standard.

To reconfirm that, show all information on `/dev/ttyACM0` using the `udevadm` command. For privacy reasons, the serial number has been replaced with `XXX`.
$ udevadm info /dev/ttyACM0
P: /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.0/tty/ttyACM0
N: ttyACM0
S: serial/by-id/usb-Texas_Instruments_Texas_Instruments_MSP-FET430UIF_XXX-if00
S: serial/by-path/pci-0000:00:1d.0-usb-0:1.2:1.0
E: DEVLINKS=/dev/serial/by-path/pci-0000:00:1d.0-usb-0:1.2:1.0
E: DEVNAME=/dev/ttyACM0
E: DEVPATH=/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.0/tty/ttyACM0
E: ID_BUS=usb
E: ID_MODEL=Texas_Instruments_MSP-FET430UIF
E: ID_MODEL_ENC=Texas\x20Instruments\x20MSP-FET430UIF
E: ID_MODEL_FROM_DATABASE=eZ430 Development Tool
E: ID_PATH=pci-0000:00:1d.0-usb-0:1.2:1.0
E: ID_PATH_TAG=pci-0000_00_1d_0-usb-0_1_2_1_0
E: ID_PCI_CLASS_FROM_DATABASE=Serial bus controller
E: ID_SERIAL=Texas_Instruments_Texas_Instruments_MSP-FET430UIF_XXX
E: ID_TYPE=generic
E: ID_USB_DRIVER=cdc_acm
E: ID_USB_INTERFACES=:020201:030000:
E: ID_VENDOR=Texas_Instruments
E: ID_VENDOR_ENC=Texas\x20Instruments
E: ID_VENDOR_FROM_DATABASE=Texas Instruments, Inc.
E: MAJOR=166
E: TAGS=:systemd:
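Note the `ID_USB_INTERFACES=:020201:030000:` line above: each colon-separated field encodes one interface as six hex digits of class, subclass, and protocol. `020201` is the CDC (Communications) class with the ACM subclass, and `030000` is the HID class. A small sketch of how such a string could be unpacked (the parsing helper is my own, not part of udev):

```python
def parse_usb_interfaces(value):
    # Each field is 6 hex digits: class, subclass, protocol.
    return [(f[0:2], f[2:4], f[4:6]) for f in value.strip(":").split(":")]

print(parse_usb_interfaces(":020201:030000:"))
# [('02', '02', '01'), ('03', '00', '00')]
```

Class `02` is exactly why the `cdc_acm` driver claims the device and exposes it as `ttyACM0`.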

MSP430 - Setting Up LaunchPad Development Board with Ubuntu 16.10

Following up on the previous post on the MSP430 LaunchPad development kit, we'll proceed with setting up a console-based development environment for the MSP430 in Ubuntu 16.10. Once this has been done, we will look into how to interact with the MSP430 through MSPDebug via the USB interface.

First, find all relevant packages for MSP430.
$ apt-cache search msp430
binutils-msp430 - Binary utilities supporting TI's MSP430 targets
gcc-msp430 - GNU C compiler (cross compiler for MSP430)
gdb-msp430 - The GNU debugger for MSP430
msp430-libc - Standard C library for TI MSP430 development
msp430mcu - Spec files, headers and linker scripts for TI's MSP430 targets
mspdebug - debugging tool for MSP430 microcontrollers

However, notice that the GNU toolchain for MSP430 in the repository is still using the old GCC version 4.6.3. A newer 5.x toolchain (maintained separately by a different vendor) exists, but you'll need to install it manually.
$ dpkg -L gcc-msp430 | grep bin | grep gcc-

Install all these packages.
$ apt-cache search msp430 | awk '{print $1}' | xargs sudo apt-get install -y

Open your console and run the `dmesg` command in follow mode.
$ dmesg -w

Plug the LaunchPad into your USB port to power it up. By default, the MSP430G2553 has a demo temperature measurement app installed. Once connected through USB, there will still be a sequence of LEDs blinking between red and green. To stop that, press the P1.3 button to start the Temperature Measurement mode (I've no idea what this is; I just followed the Quick Start Guide).

Output of `dmesg` as follows. Note that I've removed the serial number for privacy reasons.
[ 2407.555756] usb 2-1.2: new full-speed USB device number 6 using ehci-pci
[ 2407.687908] usb 2-1.2: New USB device found, idVendor=0451, idProduct=f432
[ 2407.687913] usb 2-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 2407.687917] usb 2-1.2: Product: Texas Instruments MSP-FET430UIF
[ 2407.687920] usb 2-1.2: Manufacturer: Texas Instruments
[ 2407.687922] usb 2-1.2: SerialNumber: 
[ 2407.695830] cdc_acm 2-1.2:1.0: No union descriptor, testing for castrated device
[ 2407.695858] cdc_acm 2-1.2:1.0: ttyACM0: USB ACM device
[ 2417.806759] hid-generic 0003:0451:F432.0004: usb_submit_urb(ctrl) failed: -1
[ 2417.806843] hid-generic 0003:0451:F432.0004: timeout initializing reports
[ 2417.807991] hid-generic 0003:0451:F432.0004: hiddev0,hidraw0: USB HID v1.01
Device [Texas Instruments Texas Instruments MSP-FET430UIF] on

The device, MSP-FET430UIF, is an external USB hardware debugging device, but I'm not sure why the embedded USB interface uses the same identifier. Maybe it's using the same chipset? Also, you can probably ignore the error messages in the log, as we can connect to the board without issue.

You can also check the device ID using the `lsusb` command.
$ lsusb | grep Tool
Bus 002 Device 009: ID 0451:f432 Texas Instruments, Inc. eZ430 Development Tool

# More verbose details
$ lsusb -v -d 0451:f432

To connect to the board through the USB interface, we use the MSPDebug program. Run the command below and observe that the blinking LEDs have stopped. Why do we use the `rf2500` driver? Turns out that MSPDebug was originally written for the eZ430-RF2500. The driver name was confirmed by listing the USB devices through MSPDebug. Again, the serial number was removed due to privacy concerns. The bus:device number (002:011) may be different every time you plug in the USB cable.
$ mspdebug --usb-list | grep 2500
    002:011 0451:f432 eZ430-RF2500 [serial: ]

There are several ways to connect to the MSP430. The first way should be sufficient.
$ mspdebug rf2500

# Using bus:device id
$ mspdebug -U 002:011 rf2500

If there are no other errors, you should see the results below.
MSPDebug version 0.22 - debugging tool for MSP430 MCUs
Copyright (C) 2009-2013 Daniel Beer 
This is free software; see the source for copying conditions.  There is NO

Trying to open interface 1 on 007
rf2500: warning: can't detach kernel driver: No data available
Initializing FET...
FET protocol version is 30394216
Set Vcc: 3000 mV
Configured for Spy-Bi-Wire
fet: FET returned error code 4 (Could not find device or device not supported)
fet: command C_IDENT1 failed
Using Olimex identification procedure
Device ID: 0x2553
  Code start address: 0xc000
  Code size         : 16384 byte = 16 kb
  RAM  start address: 0x200
  RAM  end   address: 0x3ff
  RAM  size         : 512 byte = 0 kb
Device: MSP430G2xx3
Number of breakpoints: 2
fet: FET returned NAK
warning: device does not support power profiling
Chip ID data: 25 53

Available commands:
    =           erase       isearch     power       save_raw    simio       
    alias       exit        load        prog        set         step        
    break       fill        load_raw    read        setbreak    sym         
    cgraph      gdb         md          regs        setwatch    verify      
    delbreak    help        mw          reset       setwatch_r  verify_raw  
    dis         hexout      opt         run         setwatch_w  

Available options:
    color                       gdb_loop                    
    enable_bsl_access           gdbc_xfer_size              
    enable_locked_flash_access  iradix                      
    fet_block_size              quiet                       

Type "help <topic>" for more information.
Use the "opt" command ("help opt") to set options.
Press Ctrl+D to quit.

When you first execute the MSPDebug program, the temperature measurement app on the MSP430 halts and waits for your next command. To resume its execution, just type the `run` command and the LEDs will start blinking (provided you haven't activated Temperature Measurement mode).
(mspdebug) run
Running. Press Ctrl+C to interrupt...

More to come as we explore different uses of MSPDebug.

This Week I Learned - 2017 Week 00

Happy new year!

2017, the year of the fire rooster. It will be interesting to see what this year will unfold. The plan remains the same every year. The usual: stay alive and healthy, more reading, learning, writing, coding, and producing, as well as building new habits. In other words: do, try, make more mistakes. As they say, "One who makes no mistakes makes nothing at all". Be constantly aware of your thoughts and actions. Live in the moment. There is a Zen saying, "When hungry, eat. When tired, sleep". Nevertheless, do plan ahead and learn from your past. In short, continue what you planned last year and adjust accordingly.

Learning reflection for 2016. I wrote 58 posts last year. Still a firm believer in quantity over quality. Writing is like exercising: you need to practice persistently to get better. However, blind deliberate practice without any targets may be wasteful and lead nowhere. Still something to ponder. What did I learn last year? Mostly Perl and Git, as well as other stuff in between. While there is always room for improvement, all this exposure to new old stuff (Perl is damn old anyway) did satisfy my intellectual curiosity. Exposure to C++ over the last two months was interesting. It really piqued my interest in statically typed programming languages.

As usual, here we go, something new I've learned this week.

#1 According to ISO 8601, week 01 is the week containing the year's first Thursday: if 1st Jan falls on Monday through Thursday, that week is week 01; if it falls on Friday through Sunday, it still belongs to the last week of the previous year. Either way, there is no week 00. Nevertheless, I still prefer to call it week 00, as 1st Jan signifies a fresh start.
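Python's standard library implements this ISO 8601 rule, so it's easy to check which ISO week a given 1st of January lands in. A quick sketch:

```python
import datetime

# 1 Jan 2016 was a Friday: ISO-wise it belongs to week 53 of 2015.
year, week, weekday = datetime.date(2016, 1, 1).isocalendar()
print(year, week, weekday)  # 2015 53 5

# 1 Jan 2018 was a Monday: it starts week 01 of 2018.
year, week, weekday = datetime.date(2018, 1, 1).isocalendar()
print(year, week, weekday)  # 2018 1 1
```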

#2 Information overload? Thinking of applying digital minimalism? (via HN). FOMO is probably the main cause of our digital clutter. Unless these digital tools bring value to your offline life, ditch them. Likewise, I still have a long way to go with my minimalist lifestyle; not everything fits into one bag yet. There are literally thousands of things I wish to ditch. Still, one thing at a time.

#3 The D programming language. If you've been doing development for quite some time and follow programming language trends, you've probably heard about it. I stumbled upon it again while doing some C++ coding. Developed as an alternative or replacement to C++, it has still failed, after all these years, to gain any traction (based on my reading and impression of ThoughtWorks' Technology Radar, HN, and the Dlang subreddit). It has been one year since Andrei Alexandrescu quit his secure job at Facebook (taking a big pay cut but remaining financially okay) to push D forward on a full-time basis; did it really help?

I played with it and read through the documentation. On the surface, it looks quite nice: Python-like syntax with C/C++ speed, but don't Golang and Nim exist for the same reason? Nevertheless, the documentation was fun to read. Love the Contract Programming, especially the Invariants. The wiki post on Component Programming Using D (a la functional programming) was one of the most interesting reads on a programming language this new year.

While we at it, some adjustments are needed to get Dlang to work in Ubuntu 16.10.

First, setup the APT repository for D.
$ sudo wget \
-O /etc/apt/sources.list.d/d-apt.list

$ sudo apt-get update && sudo apt-get -y --allow-unauthenticated install \
--reinstall d-apt-keyring && sudo apt-get update

$ sudo apt install dmd

Next, generate the sample hello world project.
$ dub init hello
$ cd hello/source

However, you will encounter the error below during compilation.
$ dmd app.d
/usr/bin/ld: app.o: relocation R_X86_64_32 against symbol `__dmd_personality_v0' 
can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: /usr/lib/x86_64-linux-gnu/libphobos2.a(exception_249_55a.o): 
relocation R_X86_64_32 against symbol `__dmd_personality_v0' can not 
be used when making a shared object; recompile with -fPIC

Add this additional option to get it to compile.
$ dmd app.d -fPIC
Edit source/app.d to start your project.

Compile and run on-the-fly.
$ rdmd -fPIC app.d
Edit source/app.d to start your project.

This Week I Learned - 2016 Week 49

Last week post or the whole series.

When a screenshot says a lot. The 14-plus hours of uptime is something to be concerned about. It's best to be away from your machine from time to time. As you age, there is no need to tweak your environment; just use the default settings for almost everything.

Certain unfortunate requirements led me to buy a USB-based high-powered 300Mbps Wifi adapter, the TP-Link TL-WN8200ND. Unfortunately, while the driver seems to load properly, I still can't connect through Wifi in Ubuntu 16.10.

Understanding htop. A comprehensive guide to htop and its equivalent console commands. It will make you realize how much information htop aggregates and collects. The same author also wrote another useful guide on HTTP headers.

While we're on HTTP. Encountered this error with Nginx a few weeks back, where the error log showed "upstream sent too big header while reading response header from upstream". In other words, your proxy server, Nginx, does not like the data sent over from the application server (the upstream). Several things may cause this, like large cookie sizes, cookies with very old timestamps, or a mismatch between response size and content length. There are several ways to resolve it: fix the issue at the upstream, disable proxy buffering, or increase the proxy buffer sizes. An example is shown below (do not copy these values; adjust accordingly). Don't understand these settings? You can read a detailed explanation and an excellent guide on these directives.
http {
    proxy_buffers            8 4k;
    proxy_buffer_size        8k;
    proxy_busy_buffers_size  16k;
}

On Perl. Nothing much picked up over the last two weeks, mostly just test cases and more test cases. Interesting behaviour when returning values from a subroutine. Being Perl, implicit is better than explicit, as compared to Python. For example, there is this rule of not returning `undef`; just use the bareword `return`.
use Data::Dumper;

sub a { return undef; }
sub b { return; }

my @aa = a();
my $a = a();

my @bb = b();
my $b = b();

print Dumper(\@aa); # [undef], not false (a one-element list)
print Dumper($a);   # undef, false

print Dumper(\@bb); # [], false value because empty array
print Dumper($b);   # undef, false

How to implement Test-Driven Development (TDD) in Perl? So many good links given in the answers to the question. Unfortunately, most of the links are quite dated and some may not be that relevant anymore. But since this is Perl, most stuff should have long been standardized and stable.

This Week I Learned - 2016 Week 41

Last week post or the whole series.

This is probably the unexpected, one-liner way to purge old Linux kernels. You will need to install byobu (a text-based window manager and multiplexer), as the Bash script is part of that package. Why do I need to purge the old kernels? Well, I can't upgrade to Ubuntu 16.10 because the `/boot` partition doesn't have enough free space.
$ sudo apt install byobu
$ sudo purge-old-kernels

Epoch, the start of time, is commonly used in computing as a point of reference for date arithmetic. For Unix, the epoch starts from Jan 1, 1970. I thought that was the standard epoch used by every operating system. I didn't realize that for Windows, as well as other platforms, the epoch is different: it's set to Jan 1, 1601 (represented in the FILETIME structure), a few hundred years earlier than the Unix epoch. Why? 1601 is the first year of the 400-year cycle of the Gregorian calendar.

Conversion between the two epoch times is straightforward using a simple formula: the difference between the two values is 11,644,473,600 seconds. (Note that a Windows tick is a 100-nanosecond interval, so there are 10,000,000 ticks per second.) If you have a Windows epoch timestamp (18 digits), use this site to convert it to a normal date.
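The formula can be sketched in a few lines of Python (the function names are mine; the constants are the standard 11,644,473,600-second offset and 10,000,000 ticks per second):

```python
EPOCH_DIFF_SECONDS = 11644473600  # seconds between 1601-01-01 and 1970-01-01
TICKS_PER_SECOND = 10000000       # a Windows tick is 100 nanoseconds

def filetime_to_unix(filetime):
    # FILETIME counts 100-ns ticks since 1601-01-01 00:00:00 UTC.
    return filetime // TICKS_PER_SECOND - EPOCH_DIFF_SECONDS

def unix_to_filetime(unix_seconds):
    return (unix_seconds + EPOCH_DIFF_SECONDS) * TICKS_PER_SECOND

# The Windows FILETIME of the Unix epoch itself is the 18-digit value:
print(unix_to_filetime(0))                    # 116444736000000000
print(filetime_to_unix(116444736000000000))   # 0
```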

Using Git on Windows? Do use Perforce's P4Merge as the git merge tool for three-way merging tasks. Learned this while watching how another developer works. You can pick up a lot by watching how others work. Do keep that in mind.

Almost at the end of the year; maybe this is the right time to pick up Golang? Don't like buying Go books? Well, someone recommended I pick up "The Little Go Book".

Looking for a beautiful real-time log analyzer? Try GoAccess, which depends on gwsocket, an RFC 6455-compliant WebSocket server. I should install this in my homelab later.

Testing your web application locally but want to simulate different IP addresses? Try IP spoofing to simulate HTTP requests.

For unit testing in C++, use Google Test. Going to try this in the coming days if I can get my C++ development environment working.

This Week I Learned - 2016 Week 36

Last week post or you might want to check out the whole series.

Some findings around the Internet.

XKCD-style graphs using Matplotlib? In Ubuntu, you'll need to install these fonts to get the closest possible rendering.
$ sudo apt-get install ttf-mscorefonts-installer fonts-humor-sans
$ rm -rf ~/.cache/matplotlib/fontList.cache

Using Matplotlib without an X server? Switch to the Agg backend before importing pyplot. Useful when you're rendering images inside a Docker container.
import matplotlib as mpl
mpl.use('Agg')  # must be set before importing pyplot
import matplotlib.pyplot as plt

Sigh. An unresolved ImageMagick bug in most recent releases, including the LTS, where text conversion still causes a core dump. Switching to GraphicsMagick, a fork of ImageMagick, did not resolve the issue, as the command line options of the two have diverged. My research made me aware that both tools are being used to batch process images at a very large scale.

Sometimes, the default Vim configuration and features are good enough without installing buggy extensions. We rely too much on external plugins without utilizing the full features of Vim itself.

An old-time PHP developer switching to Perl? You should read this Reddit post. The advice given was spot on and correlates with my own personal experience. Nothing against PHP, but in our journey to become better developers, we need to expose ourselves to other programming languages and environments. Otherwise you'll end up like those developers who claim ten-plus years of experience but have actually done the same development work for one year, repeated ten times. I will write another blog post on this in the near future.

"To finish projects on time, start every single step as late as possible" via HN. Full text of all the Twitter posts. A catchy and provocative statement coming from Tiago Forte, a productivity consultant. Despite the clickbait title, HN user bmh100 interpreted his message correctly. The keyword here is "critical path"; in other words, this is Critical Chain Project Management. Sometimes I wonder whether procrastination is due to a lack of awareness of a task. Or to rephrase it: is procrastination a mindfulness problem? Without awareness, there is no estimation and prioritization, hence the task will be postponed repeatedly or not completed within the time frame.

This Week I Learned - 2016 Week 22

Write-up for last week or you might want to read the whole series.

The HP ProLiant server keeps restarting for no particular reason and I can't seem to pinpoint the exact cause; it's either the PSU or the motherboard. I have a hunch it's the motherboard. As this is a server, the motherboard is very particular and monitors different kinds of thresholds. For example, if the heat sink fan and case fan are not running, the machine won't boot. Maybe it's time for me to switch to a different, desktop-based motherboard.

As I learned in the past with this machine, finding replacement parts is a bit tricky. Looking for a replacement motherboard seems hard these days, especially since I want to reuse the Intel Xeon X3430 CPU (Lynnfield). The X34xx processors only support the LGA 1156 socket, which has been phased out and is no longer available in the market. I have two choices: first, buy a used LGA 1156 motherboard locally or source one from TaoBao; second, install Windows Server on it and see whether the issue persists.

Inspiring online. So much creativity these days using the web to express yourself.

I have created 50 games in 2014. (HN discussion) Well, I've made none in my entire development life until now, and that will likely continue. Anyone can develop a game, but the subtle details are what separate boring, ordinary games from something more exciting and enticing. The presentation by Jan Willem Nijman, Martin Jonasson and Petri Purho demonstrates this superbly.

Not a gamer, but I realized I had never actually installed Steam before. Tried to install it; as usual, there is always some hiccup and workaround here and there. The command below should be good enough to get through the workaround. It has been a while since I last played any games; still nothing fancy here, nothing much to explore. Nevertheless, Steam gives GNU/Linux a platform for gaming, which is good for creating awareness of its existence.
$ find $HOME/.steam/root/ubuntu12_32/steam-runtime/*/usr/lib/ -name "" \
-exec mv "{}" "{}.bak" \; -print

Post-installation Notes on Ubuntu 16.04 LTS Xenial Xerus

Ubuntu 16.04 LTS Xenial Xerus was released a few weeks back and it's time to either upgrade or do a fresh installation. I've done both, on different machines: my laptop and my desktop. Since I recently switched to an SSD and a new graphics card, these are my notes on the post-installation. Even after so many years of using Ubuntu, you still need to manually tweak it to get the basic essential features working correctly.

Installation on the SSD is freaking fast; the whole installation finished in less than five minutes. I didn't time the installation process, but it was blazing fast compared to all my previous installations. If you're still using an HDD, switch to an SSD, now! It's like upgrading your connection speed from dial-up to fiber optic.

Update and Upgrade Packages
The new apt command is very welcome, and finally we have a progress bar during package installation. Before that, switch to your fastest mirror. No offence to those who help to mirror the MY repository, but the MY mirrors' speeds are rather inconsistent compared to the SG mirrors.
$ sudo sed -i 's/my/sg/g' /etc/apt/sources.list
$ sudo apt update
$ sudo apt full-upgrade

Replace Unity with Gnome
Yes, Unity's launcher can finally be moved to the bottom of the screen, but it's too little, too late for anyone to care. GNOME provides a better integrated desktop user experience. Pick GDM3 as your login manager, log off from the current desktop session, switch to the GNOME desktop, and re-login.
$ sudo apt install ubuntu-gnome-desktop

Dual-Screen Undetected Screen Resolution
To this day, Ubuntu still cannot get the screen resolution right for my second monitor. Again, we have to tweak it through xrandr.
$ xrandr -q
$ cvt 1280 1024

Add the resulting modeline to the shell script `.xprofile` to resize and re-position the dual-screen monitors as follows. As I'm a left-handed mouse user, my screen setup spans from left to right.
xrandr --newmode "1280x1024_60.00"  109.00  1280 1368 1496 1712  1024 1027 1034 1063 \
    -hsync +vsync

xrandr --addmode DVI-0 1280x1024_60.00

xrandr --output VGA-0 --primary --mode 1280x1024 --pos 1280x0 --rotate normal \
    --output DVI-0 --mode 1280x1024_60.00 --pos 0x0 --rotate normal \
    --output HDMI-0 --off

Conventionally, to execute commands before the start of the X user session under GDM, you put them in `.xprofile`, as GDM loads this setting from `/etc/gdm3/Xsession`.

Google Chrome can't play YouTube's videos.
Beginning with 16.04, there is no more proprietary ATI graphics driver (fglrx), and this may cause issues for anything that uses hardware acceleration through the graphics card, like games or video playback. The error message obtained when starting Google Chrome from the console is shown below.
Not implemented reached in virtual 
void cc::VideoLayerImpl::AppendQuads(cc::RenderPass *, cc::AppendQuadsData *)

The workaround is to disable hardware acceleration under 'Advanced settings'. Most likely this will be fixed in upcoming patches.