Showing posts with label docker.

Rust Installation in Ubuntu 18.10

When was the last time I looked at Rust? Oh right, it was almost 5 years ago (how time flies). Amazon Firecracker piqued my interest in Rust again and I'm curious to check it out once more. There are several installation methods available. These days, it's easier to use a custom tool like Rustup, or Docker, to manage and switch between different versions compared to the default distro packages.

Using Rustup
This is the default installation method. As before, we're bootstrapping the environment with an LXC/LXD container. This is the fastest way to get Rust running in your environment compared to the other methods (more on this later).
$ lxc launch ubuntu:18.10 rust-rustup
$ lxc exec rust-rustup bash
root@rust-rustup:~# curl https://sh.rustup.rs -sSf | sh

This will download and install the official compiler for the Rust programming 
language, and its package manager, Cargo.

It will add the cargo, rustc, rustup and other commands to Cargo's bin 
directory, located at:

  /root/.cargo/bin

This path will then be added to your PATH environment variable by modifying the
profile file located at:

  /root/.profile

You can uninstall at any time with rustup self uninstall and these changes will
be reverted.

Current installation options:

   default host triple: x86_64-unknown-linux-gnu
     default toolchain: stable
  modify PATH variable: yes

1) Proceed with installation (default)
2) Customize installation
3) Cancel installation
> 1
......
info: installing component 'rustc'
info: installing component 'rust-std'
info: installing component 'cargo'
info: installing component 'rust-docs'
info: default toolchain set to 'stable'

  stable installed - rustc 1.31.1 (b6c32da9b 2018-12-18)

Rust is installed now. Great!

To get started you need Cargo's bin directory ($HOME/.cargo/bin) in your PATH 
environment variable. Next time you log in this will be done automatically.

To configure your current shell run source $HOME/.cargo/env

Checking Rust-based tools version.
root@rust-rustup:~# rustc --version; cargo --version; rustup --version
rustc 1.31.1 (b6c32da9b 2018-12-18)
cargo 1.31.0 (339d9f9c8 2018-11-16)
rustup 1.16.0 (beab5ac2b 2018-12-06)

Using Default Distro Package Manager
Again, bootstrap the environment using LXC/LXD.
$ lxc launch ubuntu:18.10 rust-pkg
$ lxc exec rust-pkg bash          
root@rust-pkg:~# apt update; apt upgrade
root@rust-pkg:~# apt install rustc

Checking Rust-based tools version.
root@rust-pkg:~# rustc --version; cargo --version; rustup --version
rustc 1.30.0
cargo 1.30.0
rustup: command not found

Using Docker Image
Using the Docker official image (which is based on Debian). The image size seemed way too big, roughly 1.6 GB.
$ docker pull rust
$ docker image list | grep rust
rust                latest              d6daf33d7ea6        3 days ago          1.63GB

Luckily, a slimmer image is available, roughly half the size; you just have to pull the right tag. The reduction in size is due to clean-up steps and a slimmer base image.
$ docker pull rust:slim
$ docker image list | grep rust
rust                slim                a374accc3257        3 days ago          967MB
rust                latest              d6daf33d7ea6        3 days ago          1.63GB

Checking the container and Rust-based tools version.
$ docker run --rm -it rust bash
root@container:/# rustc --version; cargo --version; rustup --version
rustc 1.31.1 (b6c32da9b 2018-12-18)
cargo 1.31.0 (339d9f9c8 2018-11-16)
rustup 1.16.0 (beab5ac2b 2018-12-06)

This Week I Learned 2018 - Week 51

Last week's post, or something else from past years.

Are we at the end of hardware virtualization performance? Yes, according to the trend of Amazon EC2 virtualization types. However, in the end, we somehow just go back to bare metal. The rapid improvement in virtualization has made setting up a homelab and data hoarding possible, cheap, and fast.

Meanwhile, what the heck is Firecracker (official announcement from Amazon)? A new virtualization tool based on the Kernel-based Virtual Machine (KVM). Interestingly, its Git repo indicates the project is written in Rust, since it originated from the Chrome OS Virtual Machine Monitor (crosvm), which is also written in Rust. Why? Serverless platforms, and for Amazon, the removal of VM overhead in services like Fargate, which leads to further cost reduction. Similarly, Nitro, Amazon's latest hypervisor, also leverages KVM, but only its core modules, to achieve near bare-metal performance.

How do you automatically clean up orphaned Docker containers, instances, volumes, networks, or images? If you use Docker for your daily development, your environment accumulates these leftover artifacts unless you're diligent enough to do the clean-up yourself. My "research" (ahem, googling) found two tools, docker-gc and docker-clean. The former is written in Golang, which makes it more portable compared to the latter, which is in Bash. But why is such a feature not built into Docker itself? (Newer Docker releases do ship `docker system prune` for exactly this.)

What the heck is MVC-L? A concept popularized by OpenCart. Nothing fancy, just an additional Language (L) layer added to the pattern. Combined with another existing extension of MVC, HMVC, we get HMVCL. Are software patterns still a thing these days?

Is being an independent ISP still a thing in 2018? Yes, it still is, especially in rural areas. The whole infrastructure is based on Ubiquiti and MikroTik hardware.

How to update parent state from a child component in React? Pass a callback defined in the parent component as a prop to the child component. Treat each component as a class and props as parameters passed to an instance of that class. The basic concept is quite straightforward; what was I thinking?

In the parent component, define the handler and pass it down as a prop.
handler = () => {
    // update the parent's own state here
    this.setState({ clicked: true });
}

render() {
    return <Child action={this.handler} />
}

In the child component, invoke the callback received through props.
render() {
    return <Button onClick={this.props.action} />
}

Pi-hole with Docker - Installation and Setup

In my previous post, I've covered Pi-hole installation and setup with LXD. For this post, we will try another installation approach, using Docker. Procedure-wise, it's quite straightforward: just three steps.
$ docker pull pihole/pihole
$ wget https://raw.githubusercontent.com/pi-hole/docker-pi-hole/master/docker_run.sh
$ bash docker_run.sh

One of the issues encountered was that the mapped ports may already be used by other services. To resolve the port conflicts, especially on Ubuntu, we have to identify the processes bound to those ports and stop them.
$ sudo netstat -nltup | grep -E ":53|:67|:80|:443"
$ sudo ss -nltup | grep -E ":53|:67|:80|:443"    # newer alternative to netstat

Port conflicts with Dnsmasq in Ubuntu can be resolved by disabling its service. However, this is not advisable if you're running services that depend on Dnsmasq, like LXD or a VPN.

If you'd like an alternative way to properly manage (for example, restart) the Pi-hole container, you can write a wrapper shell script and manage it through Docker Compose.

The wrapper shell script, based on `docker_run.sh`.
#!/usr/bin/env bash

IP_LOOKUP="$(ip route get 8.8.8.8 | awk '{for(i=1;i<=NF;i++) if ($i=="src") print $(i+1)}')"
IPV6_LOOKUP="$(ip -6 route get 2001:4860:4860::8888 | awk '{for(i=1;i<=NF;i++) if ($i=="src") print $(i+1)}')"

IP="${IP:-$IP_LOOKUP}"
IPV6="${IPV6:-$IPV6_LOOKUP}"

TIMEZONE="$(cat /etc/timezone)"

DOCKER_CONFIGS="$HOME/.pihole"

# export so docker-compose can interpolate these in docker-compose.yml
export IP IPV6 TIMEZONE DOCKER_CONFIGS

exec docker-compose "$@"
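The awk one-liner in the script plucks the field right after the `src` keyword from the `ip route get` output. You can see it in isolation on a canned sample line (the addresses here are made up):

```shell
# print the value following "src" in a routing lookup line
echo "8.8.8.8 via 192.168.1.1 dev eth0 src 192.168.1.10 uid 1000" \
  | awk '{for(i=1;i<=NF;i++) if ($i=="src") print $(i+1)}'
# prints 192.168.1.10
```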

And our `docker-compose.yml` file, modified based on this sample.
version: '2'
services:
  pihole:
    container_name: pihole
    restart: unless-stopped
    image: pihole/pihole
    environment:
    - ServerIP=$IP
    - ServerIPv6=$IPV6
    - TZ=$TIMEZONE
    - WEBPASSWORD=foobar
    - DNS1=127.0.0.1
    - DNS2=1.1.1.1
    volumes:
    - $DOCKER_CONFIGS:/etc/pihole/
    - $DOCKER_CONFIGS/dnsmasq.d/:/etc/dnsmasq.d/
    ports:
    - "80:80"
    - "443:443"
    - "53:53/tcp"
    - "53:53/udp"
    - "67:67/udp"

Development with Docker Toolbox and Babun

For those stuck with, or preferring, a Windows environment, Docker Toolbox is the painless and simplest way to have a consistent development environment with Docker. While we can duplicate the same environment through actual virtualization or the Windows Subsystem for Linux (WSL), the time taken to configure and set up the environment is not worth the effort. Furthermore, Docker does not quite work with WSL yet.

While Git Bash for Windows works well for its basic features, a Bash prompt with some *nix utilities, several console apps like rsync and wget are still sorely missed; they are not bundled and must be installed separately. Furthermore, you have to spend time customizing the console prompt to your liking (optional).

This is where Babun, a Windows shell built on Cygwin, comes in. The sensible default settings provided by oh-my-zsh, with its wide range of plugins, are good enough without much tweaking.

Babun Docker
Since we're using Docker Toolbox, we need to make sure Babun works with it, through Babun Docker.

Go to the Docker QuickStart Terminal and stop the Docker Machine.
$ docker-machine stop default

Open Babun shell and install Babun Docker.
$ curl -s https://raw.githubusercontent.com/tiangolo/babun-docker/master/setup.sh | source /dev/stdin
$ babun-docker-update
$ docker-machine start default

Always load the Docker Machine settings in newly opened terminal sessions.
$ echo "# Docker Machine stuff\n eval \$(docker-machine env default --shell zsh)" >> ~/.zshrc

Zsh and tmux
Next, and quite crucial for me, was to autoload tmux every time Babun or Zsh starts.
$ pact install tmux

In your `.zshrc`, enable the tmux plugin and set it to autostart.
# Enable this before the plugins line
ZSH_TMUX_AUTOSTART=true

plugins=(git tmux)

The next step is to download my own tmux configuration file and reload the Zsh shell to reflect the changes. You can also close and re-open the Babun shell. There was an issue where the Babun shell could not start properly due to an unknown error; updating your Babun installation (which is actually Cygwin) should fix it.
$ wget https://raw.githubusercontent.com/kianmeng/dotfiles/master/.tmux.conf
$ source $HOME/.zshrc

Slowness
Depending on the number of plugins enabled and the theme selected, the Babun shell can be quite slow. Keep your enabled plugins to a minimum, pick a less resource-intensive theme, use a Zsh framework to manage these plugins, or optimize Cygwin itself. To check whether your shell is slow, run this command.
$ babun check

This Week I Learned 2018 - Week 47

Last week's post, or something else.

How to avoid becoming a greasy, sleazy middle-aged man? Although the author's observations come from Chinese men, any middle-aged person can learn from the article. I disagree with rule number eight, "never stop shopping". The older you get, the fewer material needs you should have, perhaps down to none.

How to flirt with classical poetry? Woman: "My parents are not home." Man: "Then I'll visit your home later."

What's the recommended anime to watch in 2018? Megalo Box. If you're a fan of Cowboy Bebop (I'm not; it's way overrated), you will like this television series, as both share a few similarities: a 90's hand-drawn style (dirty and raw, unlike Makoto Shinkai's style), great and unique character designs (looking at you, Fairy Tail and Hunter x Hunter), and a predictable story line (rags to riches). Meanwhile, if you're a fan of the space opera genre, the remake of Legend of the Galactic Heroes is worth watching as well, if you can ignore the aesthetic of the 3D effects, which are unappealing and lifeless (looking at you, Berserk 2016).

How do you access a Docker container as the root user? Surprisingly straightforward: a UID of zero (0) is equivalent to the `root` user.
$ docker exec -u 0 -it mycontainer bash

What happens when you're using LaTeX to typeset your thesis? The graph shown below (via Reddit) is worth a thousand words. I can relate to the author's experience: instead of working on the writing, you're struggling with the typesetting.



This Week I Learned 2018 - Week 20

The week 19 post, or something from the past.

Interesting developments on the local scene. Everyone is overwhelmed by the endless good news, some of which seems too good to be true. Still too soon to tell, but nobody thinks it will get worse than the current mess. On a side note, at least now we can read Medium articles from our mobile devices.

The completion of BSL20180124, our second successful spawn. As usual, a write-up on the whole process and a retrospection on our breeding process. Both of us are getting more experienced, selective, and bolder when breeding Bettas. So many things were learned during these few months, and they can definitely help us improve our other spawning projects. We can now confidently buy better-grade (ahem, more expensive) Betta fishes and breed them. Right now, though, the main focus is to change our breeding method from leaving the fry with the father to removing the fry once free-swimming. The former method produces a limited number of fry; the latter yields large spawns (up to 500 fry). We shall see the results in the coming months.

The difference between `application/xml` and `text/xml`. Encountered this when making RESTful requests and the existing CPAN module did not recognize `text/xml`.

A good sample Dockerfile to set up your Perl application in a Docker instance.
FROM perl:5.26

RUN cpanm Carton && mkdir -p /usr/src/app
WORKDIR /usr/src/app

ONBUILD COPY cpanfile* /usr/src/app/
ONBUILD RUN carton install

ONBUILD COPY . /usr/src/app

Detecting whether an item exists in a Perl array. Why does such simple stuff need to be so complicated in Perl?
# $value could contain regex metacharacters; \Q...\E matches it literally
if ( grep { /^\Q$value\E$/ } @array ) {
    print "found it";
}
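As an aside, the shell analogue of a literal whole-line membership test sidesteps the regex-escaping problem entirely with `grep -x` (whole line) and `-F` (fixed string):

```shell
# -q: quiet, -x: match the whole line, -F: treat the pattern literally
printf '%s\n' apple banana cherry | grep -qxF 'banana' && echo "found it"
# prints found it
```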

A Practical Guide to (Correctly) Troubleshooting with Traceroute (PDF). We have been using traceroute wrongly all this while.

Cpanm with local library installation

One of the issues I keep facing when developing Perl modules locally: installation and dependency management.

$ cpanm -nq --installdeps --with-develop --with-recommends .
!
! Can't write to /usr/local/share/perl/5.26.0 and /usr/local/bin: Installing modules
! to /home/foobar/perl5
! To turn off this warning, you have to do one of the following:
!  - run me as a root or with --sudo option (to install to 
!    /usr/local/share/perl/5.26.0 and /usr/local/bin)
!  - Configure local::lib in your existing shell to set PERL_MM_OPT etc.
!  - Install local::lib by running the following commands
!
!    cpanm --local-lib=~/perl5 local::lib && eval $(perl -I ~/perl5/lib/perl5/ -Mlocal::lib)
!

Either we do this manually every time.
$ cpanm --local-lib=~/perl5 local::lib 
$ eval $(perl -I ~/perl5/lib/perl5/ -Mlocal::lib)
local::lib is up to date. (2.000024)

Or set up your local environment once through local::lib. First, add its bin directory to your PATH.
$ export PATH="$PATH:$HOME/perl5/bin"

Additional directories for Perl library.
$ export PERL5LIB=~/perl5/lib/perl5

Options for Module::Build.
$ export PERL_MB_OPT="--install_base '$HOME/perl5'"

Options for ExtUtils::MakeMaker.
$ export PERL_MM_OPT="INSTALL_BASE=$HOME/perl5"

Or bootstrap cpanm and local::lib together in one go.
$ curl -L http://cpanmin.us | perl - -l ~/perl5 App::cpanminus local::lib

Add this to your `.profile` as well.
$ eval $(perl -I$HOME/perl5/lib/perl5 -Mlocal::lib)
$ source ~/.profile # reload

Or just use Perl's Docker container image.

Or just use Carton and set everything to local folder.

This Week I Learned - 2017 Week 35

Last week's post, or revisit some old archived posts.


Long holidays and I finally have extra time to clear off some of those pesky and pending to-do list. Learned quite a lot this week, especially from different electronic devices and computer hardware.


Software development at 450 words per minute (via Reddit / HN). Be grateful; that's probably the takeaway from the article. I wonder how it would affect your hearing if you keep listening through headphones non-stop for more than 8 hours per day.


A good introductory post on mechanical key switches, specifically the Cherry MX family. For a non-gamer mostly using the keyboard for typing, Cherry MX Brown and Cherry MX Blue would be the preferred switches for a mechanical keyboard. The Brown switch was originally developed for Kinesis, yes, the company that created the ergonomic contoured keyboard. Meanwhile, the Blue switch has a tactile feel and clicking sound similar to the IBM Model M, but with less activation force. Is a mechanical keyboard worth it? Only if you play lots of games, build a battlestation, are a mechanical keyboard enthusiast, or have extra money to burn.


Buying an air purifier? The Fview YouTube channel is probably the best I've watched so far: honest opinions with lots of satirical remarks in between, just like taking advice from a trustworthy friend. So which air purifier to buy? From the results and price point, just get the Xiaomi Air Purifier, even though you have to tolerate the high fan noise. I was surprised that a few European brands are so expensive yet their filtering output is mediocre; most likely you're paying a premium for the build quality and long-term reliability. One thing I've learned about electronic devices made in China, or electronic devices in general these days: they are not built for reliability, but as throwaway devices that serve a purpose for a short period.




Yeah, the bokeh, colour, and contrast are phenomenal and will surely make your mouth water. Just make sure you watch the YouTube video at the highest resolution. The most important criterion is that the colour (in JPEG) shows the actual colour and contrast representative of what the reviewer saw with his own eyes. Be warned, the Sony A9 and Voigtlander 50 Heliar V4 together will cost you around MYR 21k. Definitely not worth it unless you have extra cash to burn. Even so, still not worth it.




More lessons regarding ConTeXt. Want to use Times New Roman? Make sure you've installed the TeX Gyre package, which includes the Termes font, aka Times New Roman.


Installation of more PWM casing fans. The motherboard seemed quite sensitive, and numerous times I couldn't get to the POST screen. Reading through the POST troubleshooting steps, I managed to boot up the machine again. Loose power wires, a badly seated memory module, or bent CPU pins were the likely contributing causes.

Fan speeds seem to be in an acceptable range. There is an increase in audible volume, but I like the white noise.
$ sensors | grep fan
fan1:         1704 RPM  (min = 1577 RPM, div = 8)
fan2:         1875 RPM  (min =  784 RPM, div = 8)
fan3:         1577 RPM  (min =  685 RPM, div = 8)
fan4:            0 RPM  (min = 3515 RPM, div = 128)  ALARM
fan5:            0 RPM  (min =  703 RPM, div = 128)  ALARM


Hardware UART in the MSP430. I had no idea this was possible, mainly because I have no idea what UART is or how it works anyway. I also found out that there is UniFlash, the universal flash programmer for all Texas Instruments devices. It seems to support the MSP430 and GNU/Linux, but I haven't tried it out yet.


I was looking for a DAC, and my research indicated that a Raspberry Pi with a HiFiBerry would be a good choice. Maybe that could put my shelved Pi to good use?


Running Docker on Fedora host but have permission error with mounted volume?
$ docker run -it -v /home/ang/project:/export tts:latest bash

root@container:/export# ls -l
ls: cannot open directory '.': Permission denied

To resolve this properly, since this is an SELinux permission issue (a reason why you should always test your stuff on Fedora/Red Hat/CentOS distros), append an extra `z` or `Z` character to the mounted volume option (`-v`) as shown below.

$ docker run -it -v /home/ang/project:/export:z tts:latest bash

Meanwhile, setting up Docker in Fedora to support a non-root user (yes, there are many security concerns).
$ sudo groupadd docker && sudo gpasswd -a ${USER} docker && sudo systemctl restart docker
$ newgrp docker


Readjustment of my nighttime computing usage: turned on GNOME's Night Light. This is to reduce the effect of blue light suppressing the body's melatonin production.


This Week I Learned - 2017 Week 33

Last week's post, or the old ramblings.


The Vox POP is probably the most entertaining and educational YouTube channel right now. I wish they produced more, and more frequently.




Mommy, Why is There a Server in the House? Suitable for those who are active in /r/homelab.


Refurbished my battle station and upgraded my Fedora 25 to 26. Nothing special about this release; I was expecting something significant, or maybe I missed something?
$ sudo dnf upgrade --refresh
$ sudo dnf install dnf-plugin-system-upgrade
$ sudo dnf system-upgrade download --releasever=26
$ sudo dnf system-upgrade reboot


Gigabyte MA10-ST0, powered by the Intel Atom C3958 SoC. A 16-core Atom server board which can be a good motherboard for virtualization or NAS; think ESXi, FreeNAS, or Proxmox. I was thinking of buying and setting one up for data hoarding.


Do programmers need offices? Definitely. I still fail to understand why corporations are crazy about open office floor plans. It's very hard to concentrate without distraction; people walking by and talking non-stop. Collaboration doesn't mean physical communication; it can be done through any messaging app. Private offices do work. The funny thing is, I miss cubicles. At least you can really concentrate and work in the zone.


Unknown electronic parts? Get it from Octopart. Buying chips and checking availability? Find Chips.


My speakers broke down and I need a new pair of cheap bookshelf speakers. Initially I was searching for a good pair like the Pioneer SP-BS22-LR, but unfortunately this model has either been phased out or you have to purchase a whole Hi-Fi set. Meanwhile, no local distributor is importing the Micca PB42x. I read good reviews of the Swan HiVi D1010-IV and Swan HiVi D1080-IV, so I might as well allocate budget to purchase one of these instead. Luckily we can still find them from a local supplier and the price is still acceptable, around MYR 450-plus. All this discussion about cheap and good quality speakers is useless if you can't hear the difference in audio quality; otherwise, you're just wasting money without any actual benefit.

When you have a pair of speakers, to get the best out of your 2.1 setup, the next purchases will be a digital-to-analogue converter (DAC), which converts bits (zeros and ones) into an analogue signal, and an amplifier. If you have a DAC, you can skip buying a sound card. Popular DACs are the Behringer UCA202 or UCA222; for the amplifier, the Lepai LP-2020A+ or SMSL SA50. The diagram below illustrates this setup.



Slow MySQL performance in a Docker instance? Use tmpfs, putting the whole database into RAM (for example, mounting `--tmpfs /var/lib/mysql`). The approach for a MySQL Docker instance seems easy enough to set up, and there have been many documented results.


Why are we trying so hard to optimize the MySQL Docker instance? One of the main issues is that a big database restoration may kill the MySQL daemon due to the large volume of records. Which begs the question: how many rows? This is governed by the `max_allowed_packet` server parameter; alternatively, tune the server with these parameters.
- bulk_insert_buffer_size = 256M
- innodb_log_buffer_size = 64M
- innodb_log_file_size = 1G
- max_allowed_packet = 1G
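Collected into a configuration file fragment (a sketch; the file path is hypothetical and the values above are starting points, not universal tuning advice):

```ini
# /etc/mysql/conf.d/bulk-restore.cnf (hypothetical path)
[mysqld]
bulk_insert_buffer_size = 256M
innodb_log_buffer_size  = 64M
innodb_log_file_size    = 1G
max_allowed_packet      = 1G
```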

While we're on MySQL, it seems we can delete records from two tables in one DELETE statement. The key thing is that the columns from both tables need to be specified.
DELETE a.*, b.*
FROM table1 a
LEFT JOIN table2 b
ON a.id = b.id
WHERE a.id = 1;


It has been a while since I talked about Perl. Getting a unique array list. Sigh, every single damn time I encounter this, I wonder why it is not built into the core language itself.
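In Perl the usual answer is `uniq` from List::Util (or a `%seen` hash). For comparison, the same first-seen deduplication as a shell one-liner:

```shell
# keep the first occurrence of each line, preserving input order
printf '%s\n' a b a c b | awk '!seen[$0]++'
# prints a, b, c on separate lines
```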


GNOME is 20 years old. It has been so long since I first tried it; I'm getting old. After Ubuntu switched to GNOME as its default desktop environment, I felt that GNOME had finally won the GNU/Linux desktop war against KDE after so many fricking years.


No space left in your Docker Machine on Windows? Maybe you can recreate the `default` Docker Machine with these settings, assuming you're using VirtualBox.
$ docker-machine create \
    --driver virtualbox \
    --virtualbox-memory 8096 \
    --virtualbox-disk-size 40000 \
    default

This Week I Learned - 2017 Week 31

Another week, another post. Looking for last week's post or something from the archive?


The annual nature appreciation week. I didn't realize such a place exists in this country; I really need to brush up on my local geographical knowledge. However, nature should be explored in its natural form, as in camping, not from some resort built next to it. And the food, that leaves a lot to be desired. A pity though.


Fighting with the idiotic MySQL 2013 lost connection error for the past week with the Docker instance. Tried different approaches to tweak the server so the database restoration process won't keep throwing that error. In the end, proceeded with the Plan B approach: changing the mysqldump script to skip extended inserts. The side effect is that the dump file will be bigger and database reloading will take longer. But something working, even though slow, is way better than something broken. In the coming weeks I will look into different approaches, for example, putting the whole test database in memory (the tmpfs approach, which can speed up testing), mysqlpump (a parallel version of mysqldump), or using ProxySQL.


Scientific communication as sequential art. I hope one day the academic world will publish literature in a more readable and accessible way. Actually, it's not that hard: some common sense with a certain good "taste".


The unit circle. After living for so long, I finally grok trigonometry and understand what the heck sine, cosine, and tangent are.


If you've forked a repository and want to sync from the remote upstream, it's two steps. First, set up the upstream remote URL. Second, sync the fork from the upstream.
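The two steps can be rehearsed locally with a stand-in upstream repository (all repository names and paths below are made up for this sketch):

```shell
set -e
cd "$(mktemp -d)"
# a stand-in for the original repository we forked from
git init -q upstream
(cd upstream && git config user.email u@example.com && git config user.name u \
  && git commit -q --allow-empty -m base)
git clone -q upstream fork
cd fork
git config user.email f@example.com && git config user.name f
# step 1: set up the remote upstream URL
git remote add upstream ../upstream
# meanwhile, upstream gains a new commit after we forked
(cd ../upstream && git commit -q --allow-empty -m new-work)
# step 2: sync the fork from the upstream
git fetch -q upstream
git merge -q "upstream/$(git symbolic-ref --short HEAD)"
git log --oneline | wc -l
```

With a real fork, `../upstream` would be the original repository's URL on your Git host.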

Docker and Docker Compose Installation in Ubuntu

It's kind of annoying that when you upgrade the Dockerfile version, your Docker Engine and Docker Compose may not support it, and you end up having to upgrade both. Reading through the documentation was rather frustrating, as you have to jump from section to section. Or maybe I missed something?

First, read and check which Docker server supports which Dockerfile version.

Next, upgrade your Docker Engine. This will work for either new or existing installation.
$ sudo apt-get remove docker docker-engine docker.io
$ sudo apt-get update

$ sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual \
apt-transport-https ca-certificates curl software-properties-common

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

$ sudo apt-get update
$ sudo apt-get install docker-ce

Next, check your Docker client and server version.
$ docker version
Client:
 Version:      17.06.0-ce
 API version:  1.30
 Go version:   go1.8.3
 Git commit:   02c1d87
 Built:        Fri Jun 23 21:18:10 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.06.0-ce
 API version:  1.30 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   02c1d87
 Built:        Fri Jun 23 21:17:03 2017
 OS/Arch:      linux/amd64
 Experimental: false

For Docker Compose, just install or upgrade it through the Python package manager, pip.
$ sudo -H pip install docker-compose --upgrade

Check the installed version just to verify it.
$ docker-compose version
docker-compose version 1.14.0, build c7bdf9e
docker-py version: 2.3.0
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.0.1t  3 May 2016

This Week I Learned - 2017 Week 21

Previous week's post, or the whole series.

If you cannot keep your habit in a consistent manner, you need to readjust the habit's minimum goal until there are no more excuses not to do it. It's as simple as that.

The second week of eating dinner before 7pm has indeed brought significant changes. Together with consistent meditation and healthier food choices, I was surprised to find I've lost some weight. However, the loss may just be water weight.

#1 Well said. Well said.
"Don’t confuse privacy with secrecy. I know what you do in the bathroom, but you still close the door. That’s because you want privacy, not secrecy."
#2 Interesting that it's not just me doing TILs or keeping a developer journal. While some store their TILs in GitHub repositories, mine is just a weekly collective blog post. Either way, keeping a journal is always a good habit for anyone practicing their craft.

#3 There are quite a few complimentary Docker utilities that help to improve your Docker usage experiences.

#4 Tracing in GNU/Linux. Always an interesting topic to explore, especially coming from Brendan Gregg.

#5 Managing Git merge conflicts? git-mediate seems like a good tool to ease the pain of resolving them. I now finally grok how three-way merge works.
  • HEAD - Your changes.
  • BASE - Code before your changes and other branches.
  • OTHERS - Code with other changes that are going to be merged into your branch.
#6 Merge with squash. Good to know if you want to do lots of branching.
  • Put the to-be-squashed commits on a working branch (if they aren't already) -- use gitk for this
  • Check out the target branch (e.g. 'master')
  • git merge --squash (working branch name)
  • git commit
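The four steps above can be walked through in a throwaway repository (the branch name and commit messages here are made up):

```shell
set -e
cd "$(mktemp -d)" && git init -q
git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m init
base=$(git symbolic-ref --short HEAD)
# the to-be-squashed commits live on a working branch
git checkout -q -b working
echo one > f && git add f && git commit -qm one
echo two >> f && git add f && git commit -qm two
# check out the target branch, squash-merge, then commit once
git checkout -q "$base"
git merge --squash working >/dev/null
git commit -qm "squashed: one + two"
git log --oneline | wc -l
# the target branch now has just two commits: init + the squashed one
```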

This Week I Learned - 2017 Week 13

Read last week's post or follow the whole series.

Night owl turned early bird; the switch messed up my sleep cycle badly. Nevertheless, the draft report has been completed and is waiting for submission.

#1 Feeling nostalgic after watching the unboxing video of the IBM PC AT and Model M. The sounds of the diskette drive reading and loading data reminded me of the good old days of early computing. I'm not sure what happened to our XT 8088, but I surely wish I could see it again and try to boot it up, as I did once during my college days. Back then, the XT 286 was my dream machine, and how I wish we had had the financial means to upgrade to it.


#2 Lots of Docker debugging over the past week, and I learned quite a lot. It's the right time to set up my virtualization machine (more on this in a coming week) and start looking into Docker.

When using Docker Compose, use `docker-compose ps` instead of `docker ps`. While both commands list the available containers, the former lists only containers declared in the `docker-compose.yml` file, a subset of all available containers. It's best to go through all the Docker Compose CLI command-line parameters.

Next, you may want to read (or search) the content of a file inside a Docker container. See the example below; of course, you can also run the `grep` command directly in the container.
$ docker exec -i mycontainer cat /etc/hosts | grep localhost
$ docker exec -i mycontainer grep localhost /etc/hosts

#3 Setting up multiple SSL certificates using one IP address in Nginx. Also, how to verify and read SSL certificate info from the console. Next is to configure Google Chrome to accept invalid certificates for localhost: copy and paste `chrome://flags/#allow-insecure-localhost` into the address bar to access the setting.


This Week I Learned - 2016 Week 38

Last week's post, or the whole series. Interesting stuff learned this week.

Encountered this error message when checking a USB thumb drive with the `fdisk` command. The particular thumb drive had an ISO file burned onto it with the `dd` command.
$ sudo fdisk -l
......
GPT PMBR size mismatch (1432123 != 15765503) will be corrected by w(rite).
Disk /dev/sdc: 7.5 GiB, 8071938048 bytes, 15765504 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 8C18967D-CB41-4EF1-8958-4E495054958D

Device     Start     End Sectors   Size Type
/dev/sdc1     64   17611   17548   8.6M Microsoft basic data
/dev/sdc2  17612   23371    5760   2.8M EFI System
/dev/sdc3  23372 1432075 1408704 687.9M Microsoft basic data

Following the instructions given, running the device through `gparted` seems to resolve the issue.



Perl's hash initialization, referencing, and dereferencing. Seriously, I need to get this right and read more of Perl's FAQs.
use strict;
use warnings;
use feature 'say';

# Normal way, without referencing.
my %foobar = (a => '1', b => '2');
say $foobar{a};

# Using a hash reference. More readable.
my $foobar = {a => '1', b => '2'};
say $foobar->{a};

# Alternatively, take a reference to the existing hash.
my $foobar_ref = \%foobar;
say $foobar_ref->{a};

Finding properties of the event target in JavaScript.
$('#foo').bind('click', function () {
    // inside here, `this` refers to the #foo element that was clicked
});

How do you add a trailing slash if none is found? Regex, regex.
$string =~ s!/*$!/!; # Add a trailing slash
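The same normalization can be sketched in shell with `sed` (the variable name is just for illustration), mirroring the Perl substitution above:

```shell
# Ensure a path ends with exactly one trailing slash,
# mirroring the Perl substitution s!/*$!/!
path="/var/www/html"
path=$(printf '%s\n' "$path" | sed 's#/*$#/#')
echo "$path"   # /var/www/html/
```

Note that `/*$` also collapses multiple trailing slashes down to a single one.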

Protocol-relative URL. While we're on the HTTP protocol, it was pointed out to me that the anchor (fragment) should be the last part of the URL.

CSS image sprite technique using an HTML unordered list. One of the issues encountered: if you have a single-line text link, how do you align the text vertically in the middle? Make sure the `line-height` is equal to the `height` of the `li` element.

Git merge conflict? Just abort the whole process with `git merge --abort`.
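A minimal, self-contained sketch in a throwaway repository (the branch and file names are made up): create a conflict, then abort the merge to get back to a clean state.

```shell
# Set up a scratch repository with two branches that edit the same line.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"   # throwaway identity for the demo
git config user.name "Demo"
echo one > file.txt
git add file.txt
git commit -qm "base"

git checkout -qb topic                     # branch off and change the file
echo two > file.txt
git commit -qam "topic change"

git checkout -q -                          # back to the original branch
echo three > file.txt
git commit -qam "conflicting change"

git merge topic || echo "merge conflicted, as expected"

# Abort the whole process and return to the pre-merge state.
git merge --abort
git status --short                         # prints nothing: the tree is clean
```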

Similarly, to discard all changes on a diverged local branch, there are two ways. The first method is to my liking.
# Method 1
$ git branch -D phobos
$ git checkout --track -b phobos origin/phobos

# Method 2
$ git checkout phobos
$ git reset --hard origin/phobos

Debugging a Dockerfile. Something I learned this week, written up in a separate and longer post.

Starting a new software project but not sure which technology stack to use? Read these slides as a guide.


Debugging Dockerfile

While building a Docker image through a Dockerfile, I encountered an error in one of the build steps, shown below. It seemed that one of the Perl modules failed to install for some unknown reason.
$ docker build -t ang:dist-zilla .

Sending build context to Docker daemon 3.072 kB
Step 1 : FROM perl:latest
 ---> a9d757d1a33b
Step 2 : RUN cpanm install Term::ReadKey
 ---> Running in 78760f841b26
 ---> Working on install
......
Building and testing TermReadKey-2.33 ... ! Installing Term::ReadKey failed. 
See /root/.cpanm/work/1473691870.7/build.log for details. Retry with --force to force install it.
......

Since there was an error, that particular layer of the image was not created. Hence, there was no way for me to debug and trace the error. Fortunately, you can force the build to continue by marking the step as successful. The change is minor: just append an OR condition (also known as short-circuit evaluation) as shown.
RUN cpanm install Term::ReadKey || true
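The `||` trick relies on the shell's short-circuit evaluation, which can be checked in any POSIX shell:

```shell
# A failing command alone produces a non-zero exit status...
false; echo "exit status: $?"    # exit status: 1

# ...but OR-ing it with `true` forces the overall status to zero,
# which is what lets the Docker build step carry on.
false || true; echo "exit status: $?"    # exit status: 0
```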

Rebuild the image. Regardless of the error, we have successfully created a layer.
$ docker build -t ang:dist-zilla .
......
Step 2 : RUN cpanm install Term::ReadKey || true
 ---> Running in f3fa83db0ae9
......
Building and testing TermReadKey-2.33 ... ! 
Installing Term::ReadKey failed. See /root/.cpanm/work/1473691397.7/build.log for details. 
Retry with --force to force install it.
FAIL
1 distribution installed
 ---> 62e2b1bd2c61
Removing intermediate container f3fa83db0ae9
Successfully built 62e2b1bd2c61

Log in to that particular snapshot or layer to troubleshoot the issue.
$ docker run --rm -it 62e2b1bd2c61 bash -il
root@62e2b1bd2c61:~#

Checking the build log.
root@62e2b1bd2c61:~# tail .cpanm/work/1473691397.7/build.log -n 25
  
chmod 755 blib/arch/auto/Term/ReadKey/ReadKey.so
"/usr/local/bin/perl" -MExtUtils::Command::MM -e 'cp_nonempty' -- 
ReadKey.bs blib/arch/auto/Term/ReadKey/ReadKey.bs 644
Running Mkbootstrap for Term::ReadKey ()
chmod 644 "ReadKey.bs"
PERL_DL_NONLAZY=1 "/usr/local/bin/perl" 
"-MExtUtils::Command::MM" "-MTest::Harness" "-e" 
"undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/01_basic.t ............... ok
# Looks like you planned 7 tests but ran 1.
t/02_terminal_functions.t .. 
Dubious, test returned 255 (wstat 65280, 0xff00)
Failed 6/7 subtests 
        (less 1 skipped subtest: 0 okay)

Test Summary Report
-------------------
t/02_terminal_functions.t (Wstat: 65280 Tests: 1 Failed: 0)
  Non-zero exit status: 255
  Parse errors: Bad plan.  You planned 7 tests but ran 1.
Files=2, Tests=2,  0 wallclock secs ( 0.02 usr  0.00 sys +  0.04 cusr  0.00 csys =  0.06 CPU)
Result: FAIL
Failed 1/2 test programs. 0/2 subtests failed.
Makefile:1029: recipe for target 'test_dynamic' failed
make: *** [test_dynamic] Error 255
-> FAIL Installing Term::ReadKey failed. 
See /root/.cpanm/work/1473691397.7/build.log for details. Retry with --force to force install it.
1 distribution installed

Googling indicates that the Term::ReadKey module has an issue with one of its unit tests when no interactive shell is available. Building a Docker image does not need an interactive shell.

The workaround is to downgrade and install the previous working version.
RUN cpanm install Term::ReadKey@2.32

This Week I Learned - 2016 Week 08

Last week's post.

#1 NameError: name 'basestring' is not defined. Surprisingly, there are still conflicts with Ansible when it's installed using pip under Python 2 versus Python 3.

#2 GNU/Linux Performance. A poster of tools you can use to investigate performance issues with your system.

#3 Container as Python module. (HN discussion) Interesting concept indeed. I've been looking at Docker for the past three weeks and this is probably the most interesting use of containers. It's useful when you want to build an actual test environment for your Python apps or scripts. Instead of a mock object, you can test against the actual system, for example, an actual database.

#4 Xamarin sold to Microsoft. (HN discussion) What took them so long? I read (can't remember where) that it was sold for 400 million. Interesting to see how this unfolds in the coming future.

#5 Non Zero Day. (HN discussion) An effective way to build a new habit through the chain method, or streak. No, Jerry Seinfeld did not create the Seinfeld productivity program. For me, it's almost daily Git commits. You have to get started on something, the baby steps.

This Week I Learned - 2016 Week 07

Last week's post. Slow week; caught up with lots of pending stuff.

#1 Why Docker Is Not Yet Succeeding Widely in Production. Old but still relevant HN discussion regarding Docker. I've been experimenting with Docker for the past three weeks and it seems more stable and feature-rich (v1.10.1) compared to the earlier version I tried two years ago. Expect more write-ups on Docker and Vagrant in the coming weeks.

#2 Do you really need 10,000 steps a day? (HN discussion) For a healthy and active person, the daily 10k steps may not be necessary.

#3 Vagga. The closest equivalent tool I can think of right now is Otto. A good alternative to Vagrant and Docker for bootstrapping your development environments.

This Week I Learned - 2016 Week 05

Last week's post.

#1 Reply 1988. The highest rated drama on Korean cable television at the time of writing. A heart-warming Korean family drama about a group of neighbouring families and friends, set in 1988. Lots of nostalgic looks back at the eighties. Interestingly, the genre of the drama is known as coming-of-age, where we follow the growth of the characters from youth to adulthood. One of the drama's original sound tracks (OST), "A Little Girl" (a remake), caught my attention. It has been a while since I've been mesmerized by any OST. Frankly speaking, it's the best kdrama I've watched so far. And lastly, why 1988? It's the year Korea hosted the Summer Olympics, which led to significant political and social changes.

#2 TypeMatrix. (via Arcachne Webzine) Another ergonomic keyboard, but instead of splitting the layout in half it opts for more sensible key placement, for example, a large Enter key in the center.

#3 Visualizing Concurrency in Go. (HN discussion) Visually intriguing, but a 2D representation, for example a UML sequence diagram, is still better than 3D. It would be even better if we could have step-by-step tracing of the code and the visualization simultaneously.

#4 Overpass Web Font, a free/libre font by Red Hat. Primarily used for the company's own branding.

#5 Docker Official Images Are Moving to Alpine Linux. The sentiment in the HN discussion does not agree with this approach. Furthermore, docker-slim was created to solve the fat-container issue while keeping your favourite GNU/Linux distro.

Development and Production Environment

Always match your development environment with the production environment; this is especially true for Python development. While Docker just reached 1.0, the preferable choice is still Vagrant. I will look into Docker once time permits. Of course, having a quad-core machine with plenty of RAM helps a lot as well.

Which begs the question: if I'm going to buy a new machine that supports virtualization, which Xeon model for socket 1150 should I get so the total cost of the system stays within the budget of MYR1.5k? Unless necessary, I don't believe in paying more than MYR2k for any electronic device these days.

Upgrading a system is always a tedious process. You'll appreciate the effort spent on unit testing; it gives you some assurance that everything works as it should. Testing is one area I should focus on in the coming years.