Showing posts with label fedora. Show all posts

This Week I Learned 2018 - Week 25

Week 24's post, and something from the archive.

The last sword saint of China, Yu Chenghui (于承惠). In Chinese cinema, no other actor can be found who could interpret and perform this kind of role, from domineering villains to a reclusive grandmaster training in the mountains. Sadly, in his later years he never took part in another wuxia film.

How to Survive Your 40s? (via KH). As someone about to take the leap into this new decade, I can probably relate to the author's experience (the screenshot below says quite a lot as well). For a few years now, younger people have started to call me "uncle" (my choice of clothing contributed to that as well). It's a sudden but natural shift that comes with age. The article reminded me of a Korean movie (can't remember the name) I watched a few weeks back. Basically, the protagonist (someone in his 50s) said you need to see this milestone as your second 20s: a second chance to reflect on, and follow up with (differently this time), what you did in your 20s. The to-do list from so many years ago is still so long, and it will keep me occupied for quite some time.

What's the difference between Perl and Python? If you need a comparison of the two programming languages, this book, "Scripting with Objects: A Comparative Presentation of Object-Oriented Scripting with Perl and Python", while quite dated (it was written in 2008), provides some insights into the differences between them. In the end, the rising popularity of Python and the emergence of Perl 6 showed that the opinionated approach, where there should be one standard way of doing things, won.

Why do you need to set a default value via a `sub` in Moo or Moose? Because wrapping the default in a subroutine returns a unique reference every time you create a new object; otherwise, all objects would share the same reference.
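A minimal plain-Perl sketch of why the coderef matters (no Moo required; the variable names are made up for illustration):

```perl
use strict;
use warnings;

# Moo/Moose invoke the default coderef once per object, so every
# object gets a fresh arrayref instead of all sharing one.
my $default = sub { [] };

my $obj1_list = $default->();    # arrayref for a first object
my $obj2_list = $default->();    # arrayref for a second object

push @$obj1_list, 'x';
printf "obj1 has %d item(s), obj2 has %d\n",
    scalar @$obj1_list, scalar @$obj2_list;    # obj1 has 1 item(s), obj2 has 0
```

Had the default been a bare `[]` shared between objects, the push would have shown up in both.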

How do you boot from a USB thumb drive from Grub itself? Yes, this is possible (do read the whole discussion). Go to the Grub console by pressing 'c'. Remember that you can use tab completion to find out which removable media and partition to use. It's quite annoying that sometimes the BIOS cannot detect the removable media (thumb drive) and can't boot from the device itself.
grub> ls
grub> set root=(hd1,msdos2)
grub> chainloader +1
grub> boot

On a related note, migration from Fedora 27 to Fedora 28 was such a painful experience. The keyboard and mouse were laggy and barely worked. I'm not sure what happened, but Fedora 28 was such a letdown. In the end, I had to wipe the whole installation and replace it with Ubuntu 18.04, and everything works as intended. Seriously, Fedora, what is going on with the 28 release?

Why do they say Perl is a more advanced scripting language for system administrators? See App::GitHubPullRequest, a Perl console tool that glues together three different console tools: git, stty, and curl.

How to train your kids to do house chores voluntarily? (via HN). Empowerment since toddlerhood.

Dreadful tasks? Just try, give it a while.

Which Perl modules should you use when making HTTP requests? There are so many to choose from.
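As one option, HTTP::Tiny has shipped with core Perl since 5.14, so a simple GET needs no CPAN install at all (the URL below is just a placeholder):

```perl
use strict;
use warnings;
use HTTP::Tiny;    # in core since Perl 5.14

my $http     = HTTP::Tiny->new(timeout => 5);
my $response = $http->get('http://example.com/');    # placeholder URL

if ($response->{success}) {
    print $response->{content};
}
else {
    warn "Request failed: $response->{status} $response->{reason}\n";
}
```

For anything heavier (cookies, proxies, parallel requests), LWP::UserAgent or Mojo::UserAgent are the usual next steps up.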

How do you do a dispatch table in Perl? Found an old discussion (2010) on HN. The book Higher-Order Perl has a whole chapter (PDF) on this topic.
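A minimal sketch of the idiom, with made-up action names: a hash of coderefs stands in for the `switch` statement Perl 5 lacks.

```perl
use strict;
use warnings;

# Map action names to coderefs; unknown keys fall back to a default.
my %dispatch = (
    start => sub { "starting $_[0]" },
    stop  => sub { "stopping $_[0]" },
);

sub run {
    my ($action, @args) = @_;
    my $handler = $dispatch{$action} || sub { "unknown action" };
    return $handler->(@args);
}

print run('start', 'worker'), "\n";    # starting worker
print run('pause'), "\n";              # unknown action
```

Compared to an if/elsif chain, adding a new action is just one more hash entry, and the table itself can be built or extended at runtime.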

This Week I Learned - 2017 Week 35

Last week's post, or revisit some old archived posts.

Long holidays, and I finally have extra time to clear off some of those pesky pending to-do items. Learned quite a lot this week, especially about different electronic devices and computer hardware.

Software development at 450 words per minute (via Reddit / HN). Be grateful; that's probably the takeaway from the article itself. I was wondering how it would affect your hearing if you keep listening through headphones non-stop for more than 8 hours per day.

Good introductory post on mechanical key switches, specifically the Cherry MX family. For a non-gamer who mostly uses the keyboard for typing, Cherry MX Brown and Cherry MX Blue would be the preferred switches for a mechanical keyboard. The Brown switch was originally developed for Kinesis; yes, the company that created the ergonomic contoured keyboard. Meanwhile, the Blue switch has a tactile feel and clicking sound similar to the IBM Model M, but with less activation force. Is a mechanical keyboard worth it? Only if you play lots of games, build a battlestation, are a mechanical keyboard enthusiast, or have extra money to burn.

Buying an air purifier? The Fview YouTube channel is probably the best I've watched so far. Honest opinions with lots of satirical remarks in between; just like taking advice from a trustworthy friend. So which air purifier to buy? Based on results and price point, just get the Xiaomi Air Purifier, even though you have to tolerate the high fan noise. I was surprised that a few European brands are so expensive, yet their filtering output was mediocre. Most likely you're paying a premium for the quality materials and long-term reliability. One thing I've learned about electronic devices made in China, or electronic devices in general these days: they are not built for reliability, but as throwaway devices that serve a purpose for a short period.

Yeah, the bokeh, colour, and contrast are phenomenal and will surely leave you drooling. Just make sure you watch the YouTube video at the highest resolution. The most important criterion is that the colour (in JPEG format) shows the actual colour and contrast representative of what the reviewer saw with their own eyes. Be warned, the Sony A9 and Voigtlander 50 Heliar V4 together will cost you around MYR 21k. Definitely not worth it unless you have extra cash to burn. Even so, still not worth it.

More lessons regarding ConTeXt. Want to use Times New Roman? Make sure you've installed the TeX Gyre package, which includes Termes, aka Times New Roman.

Installed more PWM casing fans. The motherboard seemed quite sensitive, and there were numerous times I couldn't get to the POST screen. Reading through the POST troubleshooting steps, I managed to boot up the machine again. Loose power wires, a loose memory module, or bent CPU pins were the likely contributing causes.

Fan speeds seem to be within an acceptable range. There is an increase in audible volume, but I like the white noise.
$ sensors | grep fan
fan1:         1704 RPM  (min = 1577 RPM, div = 8)
fan2:         1875 RPM  (min =  784 RPM, div = 8)
fan3:         1577 RPM  (min =  685 RPM, div = 8)
fan4:            0 RPM  (min = 3515 RPM, div = 128)  ALARM
fan5:            0 RPM  (min =  703 RPM, div = 128)  ALARM

Hardware UART in MSP430. I had no idea this was possible, mainly because I have no idea what UART is or how it works anyway. I also found out that there is UniFlash, the Universal Flash Programmer for all Texas Instruments devices. It seems to support the MSP430 and GNU/Linux, but I haven't tried it out yet.

I was looking for a DAC, and my research indicated that a Raspberry Pi with HiFiBerry would be a good choice. Maybe that could put my shelved Pi to good use?

Running Docker on a Fedora host but getting a permission error with a mounted volume?
$ docker run -it -v /home/ang/project:/export tts:latest bash

root@<container-id>:/export# ls -l
ls: cannot open directory '.': Permission denied

To resolve this properly, since this is an SELinux permission issue (the reason why you should always test your stuff on Fedora/Red Hat/CentOS distros), you can append an extra `z` or `Z` character to the mounted volume option (`-v`) as shown below.

-v /home/ang/project:/export:z

Meanwhile, setting up Docker in Fedora to support a non-root user (yes, there are many security concerns).
$ sudo groupadd docker && sudo gpasswd -a ${USER} docker && sudo systemctl restart docker
$ newgrp docker

Readjustment of my night computing usage: turned on GNOME's Night Light. This is to reduce the effect of blue light on the body's melatonin production.

This Week I Learned - 2017 Week 33

Last week's post, or the old ramblings.

Vox Pop is probably the most entertaining and educational YouTube channel right now. I wish they produced more, and more frequently.

Mommy, Why is There a Server in the House? Suitable for those who are active in /r/homelab.

Refurbished my battle station and upgraded from Fedora 25 to 26. Nothing special about this release; I was expecting something significant, or maybe I missed something?
$ sudo dnf upgrade --refresh
$ sudo dnf install dnf-plugin-system-upgrade
$ sudo dnf system-upgrade download --releasever=26
$ sudo dnf system-upgrade reboot

Gigabyte MA10-ST0, powered by the Intel Atom C3958 SoC. A 16-core Atom server board which can be a good motherboard for virtualization or NAS. Think ESXi, FreeNAS, or Proxmox. I was thinking of buying and setting one up for data hoarding.

Do programmers need offices? Definitely. I still fail to understand why corporations are still crazy about open office floor plans. It's very hard to concentrate with the constant distraction of people walking by and talking non-stop. Collaboration doesn't mean physical communication; it can be done through any messaging app. Private offices do work. The funny thing is, I miss cubicles. At least you can really concentrate and work in the zone.

Unknown electronic parts? Get them from Octopart. Buying chips and checking availability? FindChips.

My speakers broke down, and I need a new pair of cheap bookshelf speakers. I was initially searching for a good pair like the Pioneer SP-BS22-LR, but unfortunately, this model has either been phased out or you have to purchase a whole Hi-Fi set. Meanwhile, no local distributor imports the Micca PB42x. I read good reviews of the Swan HiVi D1010-IV and Swan HiVi D1080-IV, so I might as well allocate budget to purchase one of those instead. Luckily, we can still find them from local suppliers, and the price is still acceptable, within MYR 450-plus. All this discussion about cheap and good quality speakers is useless if you can't hear the difference in audio quality; otherwise, you're just wasting money without any actual benefit.

Once you have a pair of speakers, to get the best out of your 2.1 setup, the next purchases will be a digital-to-analogue converter (DAC), which converts binary bits (zeros and ones) into an analogue signal, and an amplifier. If you have a DAC, you can skip buying a sound card. Popular DACs are the Behringer UCA202 or UCA222; for the amp, the Lepai LP-2020A+ or SMSL SA50.

Slow MySQL performance in a Docker instance? Use tmpfs, where you put the whole database into RAM. The approach for a MySQL Docker instance seems easy enough to set up, and there are many documented results.

Why are we trying so hard to optimize the MySQL Docker instance? One of the main issues is that a big database restoration may kill the MySQL daemon due to the large volume of records. Which begs the question: how many rows? This is determined by the server parameter `max_allowed_packet`; adjust the server according to these parameters.
- bulk_insert_buffer_size = 256M
- innodb_log_buffer_size = 64M
- innodb_log_file_size = 1G
- max_allowed_packet = 1G

While we're on MySQL, it seems we can delete records from two tables in one DELETE statement. The key thing here is that both tables need to be specified in the DELETE clause and joined on their shared columns (the `id` columns below are just an illustration).
DELETE a.*, b.*
FROM table1 a
LEFT JOIN table2 b
ON a.id = b.id

It has been a while since I talked about Perl. Getting a unique array list. Sigh, every single damn time I encounter this, I wonder why it is not built into the core language itself.
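For the record, the usual workaround is the hash-based `grep` idiom below (List::Util has also provided `uniq` since version 1.45):

```perl
use strict;
use warnings;

# De-duplicate a list while preserving the order of first appearance:
# %seen counts occurrences, and grep keeps only first-time entries.
my @items = qw(a b a c b);
my %seen;
my @unique = grep { !$seen{$_}++ } @items;

print "@unique\n";    # a b c
```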

GNOME is 20 years old. It has been so long since I first tried it; I'm getting old. After Ubuntu switched to GNOME as the default desktop environment, I felt that GNOME has finally won the GNU/Linux desktop war against KDE after so many fricking years.

No space left in your Docker Machine on Windows? Maybe you can recreate the `default` Docker Machine with the settings below, assuming you're using VirtualBox.
$ docker-machine create \
    --driver virtualbox \
    --virtualbox-memory 8096 \
    --virtualbox-disk-size 40000 \
    default

This Week I Learned - 2017 Week 15

Last week's post, or you can browse through the whole series.

While debugging a Makefile, I accidentally `rm -rf`-ed my home folder. Lesson learned: always back up and sync your changes regularly. Nevertheless, it's always a good fresh start when your home folder contains not a single file or folder. It's good to have a weekly clean-up of your machine: review, keep, or remove. Otherwise, there will be a lot of leftover files pending.

It has been a while since I worked on a weekend. The serenity of the environment improves your productivity ten-fold. There are no sounds other than the air-con, the traffic, and your typing. You're basically in the zone, focused solely on the task at hand. No more stupid shenanigans. In hindsight, you have to find or create your own optimal environment and zone. It all starts with a system that leads to a habit; good habits.

#1 How to read more books? Lots of good tips on increasing the volume of books you can read. It's already early April and I have only managed to finish 2 books; not really on track to finish 12 books this year. Thinking back, reading style, book choices, timing, and context are what cause the slowness. One of the best strategies is to switch to a different book when you're stuck or bored; some books need more mental energy to get through. While reading 2 pages per day can develop a good habit, it's not fast enough to catch up with my piling reading list.

#2 Engineer's Disease. The unconscious thought pattern that can lead to an arrogant and condescending personality. Maybe because such behaviour "stems from the OCD and emotional detachment our peoples tend to have, mixed in with a good dose of raging insecurity"? A good forum discussion to ponder, especially for those working in software development.

#3 Do teenagers and adults have different learning capabilities? Time; available, perceived time. Also discipline, attention, and focus. The discussion at HN gave a lot of strategies to attack the problem: simple daily practice and learning together, combined with different learning strategies. What to learn then? Fundamentals. There is also an interesting discussion on software development being a dead-end job after 35-40.

#4 On understanding the fundamentals of Vim. Before you install any Vim plugin, it's best to learn whether the feature already exists by default.

#5 System Design Primer, if you want to learn how to design large-scale systems. However, premature optimization is still evil. Knowing something can be done right doesn't mean it should be done now; there are always contexts and constraints. Solutions looking for problems always end up wasting everyone's resources. This HN user's experience with scaling a system accurately illustrates such a scenario.

#6 Looking busy at work? Most people don't realize that pretending to work and looking busy is actually far harder than doing the actual work. Faking it will deplete you psychologically, as your thoughts, actions, and words are not in sync. However, there are always exceptions: certain groups of people thrive on such behaviour without caring about any form of repercussion, while some are just stuck with mind-numbing, boring jobs. There is a saying by Napoleon Hill: "If you are not learning while you're earning, you are cheating yourself out of the better portion of your compensation." Unless you're stuck with certain constraints, move on. You're not a tree!

#7 LXD is finally available for Fedora; not as a native RPM package, but through Snap. I'm going to reformat another workstation and install Fedora on it. One less reason to stick with Ubuntu. That leaves only the DEB packages; I believe there is no way Fedora/CentOS/Red Hat can dethrone the number of available packages provided by Debian. I'm not looking for a rolling release like Arch, but for the availability of different software. Maybe Snap, the universal GNU/Linux package, can change that?

This Week I Learned - 2017 Week 14

Last week's post, or you can go through the whole series.

The proposal has been presented and submitted. Standard feedback received. Nevertheless, better than nothing, regardless of the quality of the reactions.

#1 GTCafe Studio. Stumbled upon this site while searching for different covers of Guthrie Govan's Emotive Ballad. It's rare these days to find any blog with good original content. Reading through his journal on learning guitar made me reflect on my decision to donate all my guitars away a few years back. Maybe it's time to start all over again? Or maybe not? Learning to play a musical instrument is one way to escape mind-numbing daily routines. However, there is a time and place for everything in life. In hindsight, sometimes you just have to move on.

#2 "CentOS is not stable, it is stale". So true that it hurts. For me, as a whole, Fedora provides a better desktop experience than Ubuntu. Yet I still revert to Ubuntu for my daily usage. Why? APT is definitely better than YUM, and there is plenty of software selection. Furthermore, LXD works only in Ubuntu, not Fedora. And yes, Canonical has finally realized that and declared that Ubuntu Unity will be replaced by GNOME in 18.04 LTS. Maybe the Ask HN post asking the community for feedback on Ubuntu 17.10 finally sealed the fate of Unity?

I always wonder what would have happened if Red Hat had decided to build a distro based on Debian or the DPKG package manager instead of creating their own RPM packaging manager. A unified GNU/Linux desktop would have come sooner, without the unnecessary fragmentation and duplicated effort; take, for example, the competition between the next-generation display servers, Mir and Wayland. Yes, I know having options and competition is good for progress, but the priority and effort should be on fixing the damn display drivers' performance and stability issues. Fragmentation leads to duplication of work.

#3 Five great Perl programming techniques to make your life fun again. An old article, 11 years old, but everything described there is as relevant today, especially iteration using `map` and `grep`, and the dispatch table illustrated in the example below. As Perl does not have a `switch` statement, using a dispatch table is a good Perl design pattern. Mark Jason Dominus, in his book Higher-Order Perl, also devoted a whole chapter (PDF) to this matter.
my $dispatch_for = {
   a => \&display_a,
   b => \&display_b,
   q => sub { ReadMode('normal'); exit(0) },
   DEFAULT => sub { print "That key does nothing\n"; },
};

my $func = $dispatch_for->{$char} || $dispatch_for->{DEFAULT};

#4 Perl 5 Internals (PDF). Interesting reading on the intricate parts of Perl itself. It was brought to my attention that Perl is a bytecode compiler, not an interpreter or a compiler.

#5 The 'g' key shortcuts in Vim. You learn something new every day; there are so many key bindings. Surprisingly, I only knew and regularly used two. I really need to refresh and relearn these.

Swift in Fedora 24 (Rawhide)

Swift, the language developed by Apple and set to replace Objective-C, was recently open sourced. However, the existing binaries are only available for Ubuntu and Mac OS. Hence, for a Fedora user like myself, the only option is to install it through source code compilation.

First, install all the necessary packages.
$ sudo dnf install git cmake ninja-build clang uuid-devel libuuid-devel libicu-devel libbsd-devel libbsd-devel libedit-devel libxml2-devel libsqlite3-devel swig python-devel ncurses-devel pkgconfig

Next, create our working folder.
$ mkdir swift-lang

Clone the minimum repositories to build Swift.
$ git clone swift
$ git clone clang
$ git clone cmark
$ git clone llvm

If you have a slow Internet connection and experience disconnections during cloning, it's best to clone partially; otherwise, you'd have to restart from the beginning again.
$ git clone --depth 1 llvm
$ cd llvm
$ git fetch --unshallow

If you have a great Internet connection, you can proceed with the remaining repositories.
$ git clone lldb
$ git clone llbuild
$ git clone swiftpm
$ git clone
$ git clone

As Swift was configured to work on Ubuntu or Debian, you may encounter several issues during compilation. These are my workarounds.

/usr/bin/which: no ninja in ...
In Fedora, the Ninja Build binary is named 'ninja-build', but the Swift build script expects it to be 'ninja'. We create a symlink to bypass that.
$ sudo ln -s /usr/bin/ninja-build /usr/bin/ninja

Missing ioctl.h
During compilation, the ioctl.h header file was not found, as the build script assumed it's located in '/usr/include/x86_64-linux-gnu', as shown below.
header "/usr/include/x86_64-linux-gnu/sys/ioctl.h"

Temporary workaround is to symlink the folder that contains these files.
$ sudo mkdir -p /usr/include/x86_64-linux-gnu/
$ sudo ln -s /usr/include/sys/ /usr/include/x86_64-linux-gnu/sys

pod2man conversion failure
'pod2man' doesn't seem to convert the POD file to a MAN page, as illustrated in the error message below.
FAILED: cd /home/hojimi/Projects/swift-lang/build/Ninja-ReleaseAssert/swift-linux-x86_64/docs/tools && /usr/bin/pod2man --section 1 --center Swift\ Documentation --release --name swift --stderr /home/hojimi/Projects/swift-lang/swift/docs/tools/swift.pod > /home/hojimi/Projects/swift-lang/build/Ninja-ReleaseAssert/swift-linux-x86_64/docs/tools/swift.1
Can't open swift: No such file or directory at /usr/bin/pod2man line 68.

Based on this error message, the 'swift.pod' file has been corrupted and emptied. You'll need to restore it from the repository.
$ git checkout -- docs/tools/swift.pod

We need to disable the '--name swift' parameter. This is done by commenting out the 'MAN_FILE' variable.
$ sed -i 's/MAN_FILE/#MAN_FILE/g' swift/docs/tools/CMakeLists.txt

Once all the workarounds have been applied, we'll proceed with our compilation. You do not really need to set the '-j 4' parameter for parallel compilation; by default, Ninja Build will compile using all available CPU cores. Also, we just want the release (-R) build without any debugging information attached.
$ ./swift/utils/build-script -R -j 4

Add our compiled binary path to the system path.
$ cd /build/Ninja-ReleaseAssert/swift-linux-x86_64/bin/
export PATH=$PATH:`pwd`

Lastly, check our compiled binary.
$ swift --version
Swift version 2.2-dev (LLVM 7bae82deaa, Clang 587b76f2f6, Swift 1171ed7081)
Target: x86_64-unknown-linux-gnu

Be warned, compilation takes quite a while, possibly several hours, depending on your machine's specification and the type of build. I noticed my lappy was burning hot, as all four CPU cores were running at 100% most of the time. It's recommended that during compilation you place your lappy near a fan or anywhere with good ventilation. Note below that the temperature exceeded the high threshold of 86.0°C.
$ sensors
Adapter: Virtual device
temp1:        +95.0°C  (crit = +98.0°C)

Adapter: ISA adapter
fan1:        4510 RPM

Adapter: ISA adapter
Physical id 0:  +97.0°C  (high = +86.0°C, crit = +100.0°C)
Core 0:         +94.0°C  (high = +86.0°C, crit = +100.0°C)
Core 1:         +97.0°C  (high = +86.0°C, crit = +100.0°C)

Under normal usage, the average temperature is roughly 50°C.
$ sensors
Adapter: Virtual device
temp1:        +46.0°C  (crit = +98.0°C)

Adapter: ISA adapter
fan1:        3525 RPM

Adapter: ISA adapter
Physical id 0:  +49.0°C  (high = +86.0°C, crit = +100.0°C)
Core 0:         +49.0°C  (high = +86.0°C, crit = +100.0°C)
Core 1:         +45.0°C  (high = +86.0°C, crit = +100.0°C)

From Fedora 23 To Fedora 24 (Rawhide)

So there I was, looking at my screen, realizing Fedora 23 is too stable, or rather, too boring. Hence, I've decided to upgrade to Rawhide, the upcoming Fedora 24, which is expected to be released by 17th May 2016. Let's see how this compares to my upgrade from Fedora 21 to Fedora 22 (Rawhide); I hope there will be no major issues.

Configure your DNF for Rawhide.
$ sudo dnf upgrade dnf
$ sudo dnf install dnf-plugins-core fedora-repos-rawhide
$ sudo dnf config-manager --set-disabled fedora updates updates-testing
$ sudo dnf config-manager --set-enabled rawhide
$ sudo dnf clean -q dbcache plugins metadata

Upgrade your distro.
$ sudo dnf --releasever=rawhide --setopt=deltarpm=false distro-sync --nogpgcheck --allowerasing

It's always "exciting" to use a rolling release where you can test out the latest and greatest features. Lots of features are planned for Fedora 24, but I'm most eager to test Wayland, the new display protocol that is going to replace X. It seems some users already have a good and stable enough experience using it in Fedora Rawhide. Can't wait to try it out on my T4210.

The upgrade was painfully slow. First, I had to downgrade certain packages, like VLC from the RPMFusion repository, back to their Fedora 22 versions (see the last command of the console output above, with the --allowerasing option). Then, I had to download a total of 1860 packages. That alone took me around three hours.

However, the upgrade failed due to a conflict with Python 3.5. I just realized that I had upgraded my Python to 3.5 using Copr. And to make matters worse, by default DNF does not cache downloaded packages! No choice but to redo everything again. In the end, I wasted another three hours.

First things first: let's enable caching for DNF. Next, temporarily remove packages (wine-* and texlive-*) to reduce the number of packages to download, and remove the Python 3.5 installed earlier from Cool Other Package Repo (COPR). Then repeat the command to upgrade your distro and reboot.
$ echo 'keepcache=true' | sudo tee -a /etc/dnf/dnf.conf
$ sudo dnf remove wine* texlive-*
$ sudo dnf remove python35-python3*

Once you've successfully upgraded, your system should have GNOME 3.19.2, Wayland 1.9.0, and Linux kernel 4.4.0. Some interesting observations while testing Fedora 24 follow.

Updates during booting
This happened twice, and I needed to reboot to complete the upgrade. It seemed that systemd was instructed to handle the upgrade, which was totally new to me; I was under the impression that during the upgrade all the packages would simply be overwritten.

Wayland is the default display server
Previously, you had to manually switch to Wayland in the GNOME login shell (click your username and select from the gear icon). Now it's the reverse: if you want to use X (which you should, as not all apps have been ported to Wayland yet), you have to select it manually by picking 'GNOME for X' from the menu.

Apps that fail to work
Shutter, the screenshot capture tool, does not work. I suspect this is due to lack of support and the security model of Wayland, as capturing the content of other windows is not allowed. Terminal, the default GNOME terminal emulator, when given a custom window size, shrinks every time it regains focus. The Dash to Dock GNOME extension does not work either and has been disabled. It's best to check all the Bugzilla bug reports on Wayland at GNOME or Red Hat. Wayland is getting there, but you can always fall back to X11.

Natural scrolling in Touchpad
I'm not sure why this was set as the default, but it's fricking annoying. Basically, under natural scrolling, the screen moves in the reverse direction of your fingers, similar to using a mobile phone or tablet. Differentiating between natural and non-natural scrolling is easy: the former focuses on moving the content, the latter on moving the scrollbar.

Error calling 'lxd forkstart......

In full detail, the exact error message:
error: Error calling 'lxd forkstart test-centos-6 /var/lib/lxd/containers /var/log/lxd/test-centos-6/lxc.conf': err='exit status 1'

While rebooting my lappy after two days, I encountered the above error message again while trying to start my container through LXD. Reading through the LXD issue reports, these are the typical steps to troubleshoot this issue. Note that I've installed LXD through source code compilation, as there is no RPM package available for Fedora 23.

First things first: as LXD was built from source, it was started manually by running the command below. The benefit of starting the LXD daemon this way is that it lets you monitor all the debugging messages, as shown below.
$ su -c 'lxd --group wheel --debug --verbose'

INFO[11-14|14:10:24] LXD is starting                          path=/var/lib/lxd
WARN[11-14|14:10:24] Per-container AppArmor profiles disabled because of lack of kernel support 
INFO[11-14|14:10:24] Default uid/gid map: 
INFO[11-14|14:10:24]  - u 0 100000 65536 
INFO[11-14|14:10:24]  - g 0 100000 65536 
INFO[11-14|14:10:24] Init                                     driver=storage/dir
INFO[11-14|14:10:24] Looking for existing certificates        cert=/var/lib/lxd/server.crt key=/var/lib/lxd/server.key
DBUG[11-14|14:10:24] Container load                           container=test-busybox
DBUG[11-14|14:10:24] Container load                           container=test-ubuntu-cloud
DBUG[11-14|14:10:24] Container load                           container=test-centos-7
INFO[11-14|14:10:24] LXD isn't socket activated 
INFO[11-14|14:10:24] REST API daemon: 
INFO[11-14|14:10:24]  - binding socket                        socket=/var/lib/lxd/unix.socket

The first step to troubleshoot is to ensure that the default bridge interface, lxcbr0, used by LXD is up and running.
$ ifconfig lxcbr0
lxcbr0: error fetching interface information: Device not found

Next, start the 'lxc-net' service that creates this bridge interface, then check that the bridge interface is up.
$ sudo systemctl start lxc-net

$ ifconfig lxcbr0
lxcbr0: flags=4163  mtu 1500
        inet  netmask  broadcast
        inet6 fe80::fcd3:baff:fefd:5bd7  prefixlen 64  scopeid 0x20
        ether fe:7a:fa:dd:06:cd  txqueuelen 0  (Ethernet)
        RX packets 5241  bytes 301898 (294.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7610  bytes 11032257 (10.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Next, check the status of the 'lxc-net' service. Why do we need to do so? Remember that the 'lxc-net' service creates a virtual switch, where three things are set up. First, the bridge itself, which links to an existing network interface connected to the outside world. Next, a DNS server, which resolves domain names. And lastly, a DHCP server, which assigns new IP addresses to the containers. The DNS and DHCP services are provided by the Dnsmasq daemon.
$ sudo systemctl status lxc-net -l

● lxc-net.service - LXC network bridge setup
   Loaded: loaded (/usr/lib/systemd/system/lxc-net.service; enabled; vendor preset: disabled)
   Active: active (exited) since Sat 2015-11-14 16:13:24 MYT; 13s ago
  Process: 9807 ExecStop=/usr/libexec/lxc/lxc-net stop (code=exited, status=0/SUCCESS)
  Process: 9815 ExecStart=/usr/libexec/lxc/lxc-net start (code=exited, status=0/SUCCESS)
 Main PID: 9815 (code=exited, status=0/SUCCESS)
   Memory: 404.0K
      CPU: 46ms
   CGroup: /system.slice/lxc-net.service
           └─9856 dnsmasq -u nobody --strict-order --bind-interfaces --pid-file=/run/lxc/ --listen-address --dhcp-range, --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative

Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: started, version 2.75 cachesize 150
Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify
Nov 14 16:13:24 localhost.localdomain dnsmasq-dhcp[9856]: DHCP, IP range --, lease time 1h
Nov 14 16:13:24 localhost.localdomain dnsmasq-dhcp[9856]: DHCP, sockets bound exclusively to interface lxcbr0
Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: reading /etc/resolv.conf
Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: using nameserver
Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: read /etc/hosts - 2 addresses
Nov 14 16:13:24 localhost.localdomain systemd[1]: Started LXC network bridge setup.

Expect more posts to come on using LXD in Fedora 23.

Fedora 23 Cloud Image Through Vagrant With VirtualBox and Libvirt Backend Provider

While testing LXD, I had to constantly switch between Ubuntu 15.10 and Fedora 23 to troubleshoot certain issues. However, my local Fedora 23 installation has been "contaminated" by the numerous tweaks I've made to get LXD to work. Hence, to make sure these changes can be reproduced in a fresh Fedora environment, I've found that using Vagrant with the Fedora 23 Cloud image fulfills that requirement.

Setting this up in Ubuntu 15.10 was pretty straightforward. First, we need to install Vagrant and VirtualBox, then check that we have the latest version and watch for any issues.
$ sudo apt-get install vagrant virtualbox
$ vagrant version
Installed Version: 1.7.4
Latest Version: 1.7.4
You're running an up-to-date version of Vagrant!

$ VBoxManage --version

Next, install the libvirt provider plugin and the necessary libraries. Skip this step if you want to stick with the default VirtualBox provider.
$ sudo apt-get install libvirt libvirt-dev
$ vagrant plugin install vagrant-libvirt

Installing the 'vagrant-libvirt' plugin. This can take a few minutes...
Installed the plugin 'vagrant-libvirt (0.0.32)'!

Next, download the Base Cloud image for Vagrant. There are two versions: a VirtualBox image and a libvirt/KVM image. Since we're running this in GNU/Linux, let's use the libvirt/KVM image.
$ aria2c -x 4

$ aria2c -x 4

Once we've downloaded the image, import it to Vagrant.
$ vagrant box add fedora/23

==> box: Box file was not detected as metadata. Adding it directly...
==> box: Adding box 'fedora/23' (v0) for provider: 
    box: Unpacking necessary files from: file:///home/ang/Projects/vagrant/
==> box: Successfully added box 'fedora/23' (v0) for 'virtualbox'!

Do the same for the libvirt image. We can add both images under the same name, in this case, 'fedora/23'.
$ vagrant box add fedora/23

==> box: Box file was not detected as metadata. Adding it directly...
==> box: Adding box 'fedora/23' (v0) for provider: 
    box: Unpacking necessary files from: file:///home/ang/vagrant/
==> box: Successfully added box 'fedora/23' (v0) for 'libvirt'!

List the available images. Note that the Fedora 23 boxes share the same name but sit under different providers.
$ vagrant box list
base      (virtualbox, 0)
fedora/23 (libvirt, 0)
fedora/23 (virtualbox, 0)

Let's create a Fedora 23 Vagrant instance.
$ mkdir f23_cloud_virtualbox
$ cd f23_cloud_virtualbox
$ vagrant init fedora/23
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`` for more information on using Vagrant.

Start and boot up your new Fedora 23 Cloud instance. If you don't specify a provider, Vagrant will use VirtualBox as its backend by default; hence, the --provider parameter is optional.
$ vagrant up --provider=virtualbox
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'fedora/23'...

Let's try the libvirt provider and create all the necessary folders. At the moment, Vagrant only allows one provider per active machine.
$ mkdir f23_cloud_libvirt
$ cd f23_cloud_libvirt
$ vagrant init fedora/23

Once done, let's boot this machine up. However, it seems we have a problem starting the machine due to the 'default' storage pool.
$ vagrant up --provider=libvirt
Bringing machine 'default' up with 'libvirt' provider...
There was error while creating libvirt storage pool: Call to virStoragePoolDefineXML failed: operation failed: pool 'default' already exists with uuid 9aab798b-f428-47dd-a6fb-181db2b20432

Google returned some answers suggesting that we check the status of the pool. Let's try it out.
$ virsh pool-list --all
 Name                 State      Autostart 
 default              inactive   no 

Let's start the 'default' pool and also toggle it to auto start.
$ virsh pool-start default
Pool default started

$ virsh pool-autostart default
Pool default marked as autostarted

Check the status of the 'default' pool again.
$ virsh pool-list --all
 Name                 State      Autostart 
 default              active     yes     

Retry booting our machine using the libvirt backend provider.
$ vagrant up --provider=libvirt
Bringing machine 'default' up with 'libvirt' provider...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...

Lastly, SSH to our machine.
$ vagrant ssh
[[email protected] ~]$ cat /etc/fedora-release 
Fedora release 23 (Twenty Three)

Symfony Installation For Developer and System Administrator

Symfony, the PHP framework, has won the PHP framework market-share battle, as quite a few crucial PHP projects have already transitioned to or use either the Symfony2 framework or its components. While I'm not a fan of Symfony due to its Javaism, which introduces unnecessary complexity and diverges from the original PHP design principles, you can't ignore the influence it has brought to the PHP world.

Four years ago, I was evaluating it for a new project, but decided to go with CodeIgniter and Kohana due to the project size and time frame. Always go for something you're familiar with instead of venturing into new, unknown territory; the money and time spent do not justify it. Everyone wants it fast and cheap, and doesn't really care about the longevity of any project. Enough rambling, let us revisit the installation.

There are many ways to install or use the Symfony framework, but the recommended way is to use the Symfony installer, although installation through Composer (deprecated), PEAR (deprecated), or GNU/Linux distro packages is still feasible.

First, get the Symfony installer, which you can obtain by downloading the binary directly and installing it locally. This method was popularized by developers who want a universal quick installation method across GNU/Linux and MacOS. Curl's -LsS parameters mean follow redirects (-L) and run silently (-s), but still show errors (-S). Go to Explain Shell for the full parameter details.
$ sudo curl -LsS -o /usr/local/bin/symfony
$ sudo chmod a+x /usr/local/bin/symfony

Quick and convenient, right? Indeed, from the point of view of a developer who wants to bootstrap an application using the Symfony framework quickly. However, for a system administrator this is a big no-no, especially on a production server, where the priorities are stability and security. Why so? We should always verify the authenticity of any downloaded installer to check for corruption or tampering. While there is a way to verify Symfony components, we can't seem to find one for the Symfony installer. Hence, such an installation method, while convenient, lacks assurance.
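As an aside, this is roughly what a checksum-based verification workflow looks like. The file and hash below are generated locally purely for illustration, since I couldn't find an official checksum published for the Symfony installer.

```shell
# Create a sample "installer" and record its SHA-256 checksum.
# (Illustrative only; a real vendor would publish the .sha256 file.)
printf 'demo installer' > /tmp/installer
sha256sum /tmp/installer > /tmp/installer.sha256

# Later, before trusting the file, verify it against the recorded checksum.
sha256sum -c /tmp/installer.sha256
```

If the file had been tampered with, sha256sum -c would report FAILED and exit non-zero.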

If you're a system administrator, surely you will prefer the default packages that come with your GNU/Linux distro, where you are guaranteed well-tested packages and automated security updates. That is not possible if you install the Symfony framework manually, where you have to keep track of any security advisories yourself. The trade-off is that you have to stick with a legacy but stable version of the framework, which may not be supported anymore.

For example, in Fedora 22, which was released a few weeks back, the available Symfony version is 2.5.11. Unfortunately, after checking the release schedule and roadmap checker, there will be no security fixes after July 2015, and it is advisable to upgrade to version 2.7.x.
$ dnf info php-symfony | grep Version
Version     : 2.5.11

If you're starting a new Symfony project, are you going to use the default and outdated Symfony packages that come with your distro? Surely not. Who in their right mind would develop against an unsupported version? Furthermore, developers always like fancy new toys.

This is one of the dilemmas that arise when the distro packages do not catch up with the release cycle of the software. Another good example: in Fedora 22, there are still packages for the PEAR channel for Symfony, which is no longer supported due to the transition to Composer. I'm not sure why Symfony 2.5 was selected, but obviously there is a mismatch between the Fedora 22 and Symfony release timelines. Also, I think CentOS 6 or 7 will have a similarly unsupported version.
$ dnf search symfony | grep channel
php-channel-symfony.noarch : Adds symfony project channel to PEAR
php-channel-symfony2.noarch : Adds channel to PEAR

In the end, stick to the simpler way: the Symfony installer. Even though you may take on some security risk, that can be mitigated by verifying its components and using the Security Advisories Checker tool.

Linux Containers (LXC) in Fedora 22 Rawhide - Part 3

Continuing from Part 1 and Part 2, we'll discuss another issue caused by the default LXC installation in Fedora 22: no default bridge network is created, although one is set in the config file of each container.

Let's create a dummy container to view the default bridge network interface.
$ sudo lxc-create -t download -n foo -- -d centos -r 6 -a amd64
$ sudo cat /var/lib/lxc/foo/config | grep lxc.network.link
lxc.network.link = lxcbr0

However, as I mentioned earlier, the bridge interface lxcbr0 is not created by default. Note that the bridge interface virbr0 was created by the libvirt installation.
$ ip link show | grep br0
6: virbr0:  mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
7: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN mode DEFAULT group default qlen 500

Or you can use the brctl command to show the available bridge interfaces. If you can't find the command, just install the bridge-utils package.
$ sudo dnf install bridge-utils
$ brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.525400c28250       yes             virbr0-nic

Instead of changing the default item in the container's config file every time we create a container, we can resolve this issue in two ways: first, by overwriting the default network interface name; second, by creating the lxcbr0 bridge interface manually.

For the first method, just overwrite the default network interface name.
$ sudo sed -i s/lxcbr0/virbr0/g /etc/lxc/default.conf 
$ cat /etc/lxc/default.conf | grep lxc.network.link
lxc.network.link = virbr0

The issue with such an approach is that you'll share the same bridge network interface with libvirt, which primarily manages KVM (Kernel-based Virtual Machine). Thus, if you need additional customization, for example a different IP range, it is best to create a separate bridge network interface, which leads us to the second method.

First, let's duplicate the XML file that define the default bridge network.
$ sudo cp /etc/libvirt/qemu/networks/default.xml /etc/libvirt/qemu/networks/lxcbr0.xml

Next, we need to generate a random UUID (universally unique identifier) and MAC (media access control) address for our new bridge network interface, named lxcbr0.

Generating UUID.
$ uuidgen

Generating MAC address.
$ MACADDR="52:54:$(dd if=/dev/urandom count=1 2>/dev/null | md5sum | sed 's/^\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4/')"; echo $MACADDR
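To double-check that the pipeline above really yields a well-formed address, we can validate its format with a regular expression (a small sanity-check sketch, not part of the original steps):

```shell
# Re-run the same generator and assert the result looks like aa:bb:cc:dd:ee:ff.
MACADDR="52:54:$(dd if=/dev/urandom count=1 2>/dev/null | md5sum | sed 's/^\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4/')"
echo "$MACADDR" | grep -Eq '^([0-9a-f]{2}:){5}[0-9a-f]{2}$' && echo "valid MAC: $MACADDR"
```

The 52:54 prefix is the locally-administered range that libvirt/QEMU conventionally uses, so the generated address will not clash with real hardware vendors.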

Update the lxcbr0.xml file we've just duplicated and add in both the UUID and MAC address to the file.

The final XML file is shown below:
<!--
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit lxcbr0
or other application using the libvirt API.
-->
<network>
  <name>lxcbr0</name>
  <uuid></uuid>
  <forward mode='nat'/>
  <bridge name='lxcbr0' stp='on' delay='0'/>
  <mac address='52:54:f0:ec:cb:a3'/>
  <ip address='' netmask=''>
    <dhcp>
      <range start='' end=''/>
    </dhcp>
  </ip>
</network>

Enable, auto start, and start the lxcbr0 bridge interface.
$ sudo virsh net-define /etc/libvirt/qemu/networks/lxcbr0.xml
$ sudo virsh net-autostart lxcbr0
$ sudo virsh net-start lxcbr0

Now both bridge interfaces are created and enabled. You can create any container using the default lxcbr0 bridge network interface.
$ brctl show
bridge name     bridge id               STP enabled     interfaces
lxcbr0          8000.00602f7e384b       yes             lxcbr0-nic
virbr0          8000.525400c28250       yes             veth1HV308

There are many other ways to create and set up a bridge network interface, but using the virsh command is probably the easiest and fastest: all the necessary steps to configure DHCP through Dnsmasq have been automated, as observed through the Dnsmasq instance after we've started the lxcbr0 bridge network interface.
$ ps aux | grep [l]xcbr0
nobody    9443  0.0  0.0  20500  2424 ?        S    01:08   0:00 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/lxcbr0.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
root      9444  0.0  0.0  20472   208 ?        S    01:08   0:00  \_ /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/lxcbr0.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

Details of the lxcbr0.conf file.
$ sudo cat /var/lib/libvirt/dnsmasq/lxcbr0.conf 
##OVERWRITTEN AND LOST.  Changes to this configuration should be made using:
##    virsh net-edit lxcbr0
## or other application using the libvirt API.
## dnsmasq conf file created by libvirt

Linux Containers (LXC) in Fedora 22 Rawhide - Part 2

In Part 1, we learned how to set up LXC in Fedora 22, and at the same time, we encountered quite a few issues and possible workarounds to get it working. In this post, we'll keep looking into these workarounds to find better or alternative solutions.

One of the issues is the deprecation of the YUM command in favour of DNF for managing packages. The changes are not backward compatible and breakage is certain. Instead of creating a container and downloading all the basic packages ourselves, we can build a container using the download template.

Let's try the download template method. Once you've run the command below, a list of distro images will be shown. Note that not all distros can be created through this method; for example, Arch Linux is missing from the image list below, so you still have to fall back to its file-based template for container creation.

Next, you will be prompted to key in your distribution, release, and architecture. Once you've keyed in your selection, the command will continue to download the image. This may take a while, depending on your Internet speed.
$ sudo lxc-create -t download -n download-test
Setting up the GPG keyring
Downloading the image index

centos  6       amd64   default 20150507_02:16
centos  6       i386    default 20150507_02:16
centos  7       amd64   default 20150507_02:16
debian  jessie  amd64   default 20150506_22:42
debian  jessie  armel   default 20150506_22:42
debian  jessie  armhf   default 20150503_22:42
debian  jessie  i386    default 20150506_22:42
debian  sid     amd64   default 20150506_22:42
debian  sid     armel   default 20150506_22:42
debian  sid     armhf   default 20150506_22:42
debian  sid     i386    default 20150506_22:42
debian  wheezy  amd64   default 20150506_22:42
debian  wheezy  armel   default 20150505_22:42
debian  wheezy  armhf   default 20150506_22:42
debian  wheezy  i386    default 20150506_22:42
fedora  19      amd64   default 20150507_01:27
fedora  19      armhf   default 20150507_01:27
fedora  19      i386    default 20150507_01:27
fedora  20      amd64   default 20150507_01:27
fedora  20      armhf   default 20150507_01:27
fedora  20      i386    default 20150507_01:27
gentoo  current amd64   default 20150507_14:12
gentoo  current armhf   default 20150507_14:12
gentoo  current i386    default 20150507_14:12
opensuse        12.3    amd64   default 20150507_00:53
opensuse        12.3    i386    default 20150507_00:53
oracle  6.5     amd64   default 20150507_11:40
oracle  6.5     i386    default 20150507_11:40
plamo   5.x     amd64   default 20150506_21:36
plamo   5.x     i386    default 20150506_21:36
ubuntu  precise amd64   default 20150507_03:49
ubuntu  precise armel   default 20150507_03:49
ubuntu  precise armhf   default 20150507_03:49
ubuntu  precise i386    default 20150507_03:49
ubuntu  trusty  amd64   default 20150507_03:49
ubuntu  trusty  armhf   default 20150507_03:49
ubuntu  trusty  i386    default 20150506_03:49
ubuntu  trusty  ppc64el default 20150507_03:49
ubuntu  utopic  amd64   default 20150507_03:49
ubuntu  utopic  armhf   default 20150507_03:49
ubuntu  utopic  i386    default 20150507_03:49
ubuntu  utopic  ppc64el default 20150507_03:49
ubuntu  vivid   amd64   default 20150507_03:49
ubuntu  vivid   armhf   default 20150507_03:49
ubuntu  vivid   i386    default 20150506_03:49
ubuntu  vivid   ppc64el default 20150507_03:49

Distribution: centos
Release: 6
Architecture: amd64

Downloading the image index
Downloading the rootfs 
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

You just created a CentOS container (release=6, arch=amd64, variant=default)

To enable sshd, run: yum install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

Once the container has been created, we start and attach to it.
$ sudo lxc-start -n download-test
$ sudo lxc-attach -n download-test

# uname -a
Linux download-test 4.0.1-300.fc22.x86_64 #1 SMP Wed Apr 29 15:48:25 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/centos-release 
CentOS release 6.6 (Final)

Instead of prompting you for the distribution, release, and architecture, you can simply create a container with one line of command. Note the extra double dashes (--) before the required arguments: all parameters after the (--) are passed to the template rather than to the lxc-create command. Container creation should be very fast the second time, as the program caches the downloaded images.
$ sudo lxc-create -t download -n download-test -- -d centos -r 6 -a amd64
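The double-dash behaviour can be illustrated with a toy shell function (the `wrapper` function below is made up for this demonstration and is not part of LXC): everything before `--` is consumed by the wrapper itself, while everything after it is forwarded verbatim, mirroring how lxc-create hands the trailing options to the template.

```shell
# Toy demonstration of the "--" separator convention.
wrapper() {
  own=""
  # Collect arguments until we hit the "--" separator.
  while [ $# -gt 0 ] && [ "$1" != "--" ]; do own="$own $1"; shift; done
  # Drop the separator itself; the rest is forwarded untouched.
  [ "${1:-}" = "--" ] && shift
  echo "wrapper args:$own"
  echo "template args: $*"
}

wrapper -t download -n download-test -- -d centos -r 6 -a amd64
# → wrapper args: -t download -n download-test
# → template args: -d centos -r 6 -a amd64
```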

To see the options available for a particular template, use the command below. You can substitute 'download' with any template name found in /usr/share/lxc/templates/.
$ lxc-create -t download -h

Linux Containers (LXC) in Fedora 22 Rawhide - Part 1

While Docker, an application container, is widely popular right now, I've decided to try LXC, a machine container that behaves like a virtual machine (think VirtualBox or VMware) but with near bare-metal performance. As I was running Fedora Rawhide (F22), let's try to install and set up LXC on this distro.

Installation is pretty much straightforward.
$ sudo dnf install lxc lxc-templates lxc-extra

Checking our installed version against the latest available version. Our installed version is on par with the current release.
$ lxc-ls --version

The first thing to do is to check our LXC configuration. As emphasized in red below, the Cgroup memory controller is not enabled by default, as it incurs additional memory overhead. It can be enabled by adding the boot parameter cgroup_enable=memory to the Grub boot loader. For now, we will keep that in mind and stick to the defaults.
$ lxc-checkconfig

Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-4.0.1-300.fc22.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: missing
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
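Speaking of cgroup_enable=memory, enabling it boils down to appending the parameter to GRUB_CMDLINE_LINUX in /etc/default/grub and regenerating the Grub config (on Fedora: sudo grub2-mkconfig -o /boot/grub2/grub.cfg). The sketch below performs the edit on a scratch copy so nothing on the running system is touched:

```shell
# Work on a scratch copy instead of the real /etc/default/grub.
printf 'GRUB_CMDLINE_LINUX="rhgb quiet"\n' > /tmp/grub.demo

# Append cgroup_enable=memory inside the existing quoted kernel command line.
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 cgroup_enable=memory"/' /tmp/grub.demo
cat /tmp/grub.demo
# → GRUB_CMDLINE_LINUX="rhgb quiet cgroup_enable=memory"
```

On a real system, remember that the change only takes effect after regenerating the Grub config and rebooting.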

Before we can create our container, let's find out which templates, or GNU/Linux distros, are available.
$ ll /usr/share/lxc/templates/
total 348K
-rwxr-xr-x. 1 root root  11K Apr 24 03:22 lxc-alpine*
-rwxr-xr-x. 1 root root  14K Apr 24 03:22 lxc-altlinux*
-rwxr-xr-x. 1 root root  11K Apr 24 03:22 lxc-archlinux*
-rwxr-xr-x. 1 root root 9.5K Apr 24 03:22 lxc-busybox*
-rwxr-xr-x. 1 root root  29K Apr 24 03:22 lxc-centos*
-rwxr-xr-x. 1 root root  11K Apr 24 03:22 lxc-cirros*
-rwxr-xr-x. 1 root root  17K Apr 24 03:22 lxc-debian*
-rwxr-xr-x. 1 root root  18K Apr 24 03:22 lxc-download*
-rwxr-xr-x. 1 root root  48K Apr 24 03:22 lxc-fedora*
-rwxr-xr-x. 1 root root  28K Apr 24 03:22 lxc-gentoo*
-rwxr-xr-x. 1 root root  14K Apr 24 03:22 lxc-openmandriva*
-rwxr-xr-x. 1 root root  15K Apr 24 03:22 lxc-opensuse*
-rwxr-xr-x. 1 root root  40K Apr 24 03:22 lxc-oracle*
-rwxr-xr-x. 1 root root  11K Apr 24 03:22 lxc-plamo*
-rwxr-xr-x. 1 root root 6.7K Apr 24 03:22 lxc-sshd*
-rwxr-xr-x. 1 root root  25K Apr 24 03:22 lxc-ubuntu*
-rwxr-xr-x. 1 root root  13K Apr 24 03:22 lxc-ubuntu-cloud*

Let's proceed by creating our first container, a CentOS 6 distro. Unfortunately, as seen below, the creation failed due to the deprecation of the yum command, which was redirected to the dnf command.
$ sudo lxc-create -t centos -n centos-test

Host CPE ID from /etc/os-release: cpe:/o:fedoraproject:fedora:22
This is not a CentOS or Redhat host and release is missing, defaulting to 6 use -R|--release to specify release
Checking cache download in /var/cache/lxc/centos/x86_64/6/rootfs ... 
Downloading centos minimal ...
Yum command has been deprecated, redirecting to '/usr/bin/dnf -h'.
See 'man dnf' and 'man yum2dnf' for more information.
To transfer transaction metadata from yum to DNF, run:
'dnf install python-dnf-plugins-extras-migrate && dnf-2 migrate'

Yum command has been deprecated, redirecting to '/usr/bin/dnf --installroot /var/cache/lxc/centos/x86_64/6/partial -y --nogpgcheck install yum initscripts passwd rsyslog vim-minimal openssh-server openssh-clients dhclient chkconfig rootfiles policycoreutils'.
See 'man dnf' and 'man yum2dnf' for more information.
To transfer transaction metadata from yum to DNF, run:
'dnf install python-dnf-plugins-extras-migrate && dnf-2 migrate'

Config error: releasever not given and can not be detected from the installroot.
Failed to download the rootfs, aborting.
Failed to download 'centos base'
failed to install centos
lxc-create: lxccontainer.c: create_run_template: 1202 container creation template for centos-test failed
lxc-create: lxc_create.c: main: 274 Error creating container centos-test

The above error is a good example of how the transition from YUM to DNF caused unnecessary breakage. It turned out that /usr/bin/yum is a shell script that displays a notification message. To resolve this, we need to point /usr/bin/yum to the actual yum program. There is a way to bypass this step, which we'll discuss in Part 2.
$ sudo mv /usr/bin/yum /usr/bin/yum2dnf
$ sudo ln -s /usr/bin/yum-deprecated /usr/bin/yum
$ ll /usr/bin/yum
lrwxrwxrwx. 1 root root 23 May  5 23:40 /usr/bin/yum -> /usr/bin/yum-deprecated*

Let us try again. Although there is a notification, the creation of the container runs smoothly. Since we're creating it for the first time, it will take a while to download all the packages.
$ sudo lxc-create -t centos -n centos-test
Download complete.
Copy /var/cache/lxc/centos/x86_64/6/rootfs to /var/lib/lxc/centos-test/rootfs ... 
Copying rootfs to /var/lib/lxc/centos-test/rootfs ...
Storing root password in '/var/lib/lxc/centos-test/tmp_root_pass'
Expiring password for user root.
passwd: Success

Container rootfs and config have been created.
Edit the config file to check/enable networking setup.

The temporary root password is stored in:


The root password is set up as expired and will require it to be changed
at first login, which you should do as soon as possible.  If you lose the
root password or wish to change it without starting the container, you
can change it from the host by running the following command (which will
also reset the expired flag):

        chroot /var/lib/lxc/centos-test/rootfs passwd

Checking our newly created container.
$ sudo lxc-ls

Checking the container status.
$ sudo lxc-info -n centos-test
Name:           centos-test
State:          STOPPED

Start our newly created container. Yet again, another error.
$ sudo lxc-start -n centos-test
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 346 To get more details, run the container in foreground mode.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

Let's try again, but with foreground mode (-F).
$ sudo lxc-start -F -n centos-test
lxc-start: conf.c: instantiate_veth: 2672 failed to attach 'vethM9Q6RT' to the bridge 'lxcbr0': Operation not permitted
lxc-start: conf.c: lxc_create_network: 2955 failed to create netdev
lxc-start: start.c: lxc_spawn: 914 failed to create the network
lxc-start: start.c: __lxc_start: 1164 failed to spawn 'centos-test'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

I was quite surprised that Fedora did not create the lxcbr0 bridge interface automatically. Instead, we will use the existing virbr0 provided by libvirtd.
$ sudo yum install libvirt-daemon
$ sudo systemctl start libvirtd

Check the bridge network interface.
$ brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.525400c28250       yes             virbr0-nic

Edit our container config file and change the network link from lxcbr0 to virbr0.
$ sudo vim /var/lib/lxc/centos-test/config
lxc.network.link = virbr0

Try to start the container again, this time, another '819 Permission denied' error.
$ sudo lxc-start -F -n centos-test
lxc-start: conf.c: lxc_mount_auto_mounts: 819 Permission denied - error mounting /usr/lib64/lxc/rootfs/proc/sys/net on /usr/lib64/lxc/rootfs/proc/net flags 4096
lxc-start: conf.c: lxc_setup: 3833 failed to setup the automatic mounts for 'centos-test'
lxc-start: start.c: do_start: 699 failed to setup the container
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 2
lxc-start: start.c: __lxc_start: 1164 failed to spawn 'centos-test'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

After struggling and googling for answers for the past few hours, it dawned on me that the '819 Permission denied' error is related to SELinux policy. I did a quick check by disabling SELinux and rebooting the machine, and was able to start the container.

Also, just to confirm the SELinux error for lxc-start.
$ sudo grep lxc-start /var/log/audit/audit.log | tail -n 1
type=AVC msg=audit(1430849851.869:714): avc:  denied  { mounton } for  pid=3780 comm="lxc-start" path="/usr/lib64/lxc/rootfs/proc/1/net" dev="proc" ino=49148 scontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tcontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tclass=dir permissive=0

Start the SELinux Alert Browser and run the commands below to add the security policy.
$ sealert

$ sudo grep lxc-start /var/log/audit/audit.log | audit2allow -M mypol
******************** IMPORTANT ***********************
To make this policy package active, execute:

semodule -i mypol.pp

$ sudo semodule -i mypol.pp

Start our container again and check its status.
$ sudo lxc-start -n centos-test 
[[email protected] ~]$ sudo lxc-info -n centos-test
Name:           centos-test
State:          RUNNING
PID:            6742
CPU use:        0.44 seconds
BlkIO use:      18.55 MiB
Memory use:     12.14 MiB
KMem use:       0 bytes
Link:           veth4SHUE1
 TX bytes:      578 bytes
 RX bytes:      734 bytes
 Total bytes:   1.28 KiB

Attach to our container. There is no login needed.
$ sudo lxc-attach -n centos-test
[[email protected] /]# uname -a
Linux centos-test 4.0.1-300.fc22.x86_64 #1 SMP Wed Apr 29 15:48:25 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

[[email protected] /]# cat /etc/centos-release 
CentOS release 6.6 (Final)

Reinstall Grub Through Chroot an LUKS Partition in Fedora

As I mentioned before, I'm currently triple-booting my laptop with three different operating systems: Windows 7, Fedora 22, and Ubuntu 15.04. One of the issues when dual-booting Fedora and Ubuntu is that each distro will update and overwrite the existing Grub boot loader whenever there is a new kernel upgrade. One major problem is that Ubuntu's Grub update does not recognize the LUKS partition and always corrupts the Grub boot loader.

To temporarily resolve this, we have to boot the machine using the Fedora Live CD, mount the encrypted partition, chroot into it, update Grub from Fedora itself, unmount the partition, and lastly reboot the machine. Details as follows.

First, once you've booted the Fedora Live CD, check the device name of your LUKS partition.
$ lsblk | grep -B 2 luks
├─sda5                                          8:5    0     1G  0 part  /boot
├─sda6                                          8:6    0   204G  0 part  
│ └─luks-e927b9ed-a83a-453f-8ef7-4983a3d68589 253:0    0   204G  0 crypt /

As we have obtained the device name, /dev/sda6, we shall proceed to decrypt the partition. Before that, let's verify again that the partition is a LUKS partition. If the command succeeds and echoes an exit code of zero, then we can safely confirm that /dev/sda6 is indeed a LUKS partition.
$ sudo cryptsetup isLuks /dev/sda6 && echo $?

We can also verify it by checking the LUKS header of that partition.
$ sudo cryptsetup luksDump /dev/sda6 | head -n 8
LUKS header information for /dev/sda6

Version:        1
Cipher name:    aes
Cipher mode:    xts-plain64
Hash spec:      sha1
Payload offset: 4096
MK bits:        512

Next, we're going to decrypt the LUKS partition; type in your password when prompted.
$ sudo cryptsetup luksOpen /dev/sda6 fedora-root

After we've decrypted the partition, we'll need to mount all the necessary file systems before we can chroot into it.
$ sudo udisks --mount /dev/mapper/fedora-root
$ sudo mount -t proc proc /media/fedora-root/proc
$ sudo mount -t sysfs sys /media/fedora-root/sys
$ sudo mount -o bind /dev /media/fedora-root/dev

Note that we're using udisks to automount fedora-root under the /media folder. The equivalent manual steps are:
$ sudo mkdir /media/fedora-root
$ sudo mount /dev/mapper/fedora-root /media/fedora-root

Since my /boot partition is located on a separate partition, we'll need to mount it inside the root tree as well, so the Grub update can see it.
$ sudo mount /dev/sda5 /media/fedora-root/boot

Next, chroot into the root partition and update Grub. Note that on Fedora the Grub config file lives under /boot/grub2.
$ sudo chroot /media/fedora-root
# grub2-install /dev/sda
# grub2-mkconfig -o /boot/grub2/grub.cfg

Lastly, exit from the chroot, unmount our LUKS partition, and reboot the machine. The correct Grub boot loader with the correct boot parameters will be installed and load properly.
$ exit
$ sudo umount /media/fedora-root
$ sudo cryptsetup luksClose fedora-root
$ sudo reboot

Expanding RPM's Built-in Macro Values

While looking into Drupal's directory structure in Fedora, I stumbled upon the file /usr/lib/rpm/macros.d/macros.drupal7, which is a macro configuration file for the RPM command.
$ cat /usr/lib/rpm/macros.d/macros.drupal7 
%drupal7            %{_datadir}/drupal7
%drupal7_modules    %{drupal7}/modules
%drupal7_themes     %{drupal7}/themes
%drupal7_libraries  %{_sysconfdir}/drupal7/all/libraries

# No-op macro to allow spec compatibility with RPM < 4.9 (no fileattrs)
%drupal7_find_provides_and_requires %{nil}

Let's try to expand the values of the above built-in RPM macros.
$ cat /usr/lib/rpm/macros.d/macros.drupal7  | awk '{print $1}' | grep ^% | xargs -I % sh -c 'echo -en "%\t"; rpm --eval %' | column -t
%drupal7                             /usr/share/drupal7
%drupal7_modules                     /usr/share/drupal7/modules
%drupal7_themes                      /usr/share/drupal7/themes
%drupal7_libraries                   /etc/drupal7/all/libraries

Why not create a Bash function for the above command instead? Put this in your $HOME/.bashrc file.
# expanding the value of the rpm's built-in macros.
function rpm_macro() {
    if [[ -z "$1" ]]; then
        echo "No filename supplied"
        return 1
    fi
    cat "$1" | awk '{print $1}' | grep ^% |\
    xargs -I % sh -c 'echo -en "%\t"; rpm --eval %' | column -t
}

Try out our newly created bash function.
$ rpm_macro macros.drupal7 
%drupal7                             /usr/share/drupal7
%drupal7_modules                     /usr/share/drupal7/modules
%drupal7_themes                      /usr/share/drupal7/themes
%drupal7_libraries                   /etc/drupal7/all/libraries

On Using Fedora Rawhide (F22)

I'd been using Fedora Rawhide (F22) for a while until my hard disc failed on me, causing file system corruption so bad that I couldn't boot into the system. Here are some of the lessons I learned from using a bleeding edge release.

1. Dual-boot or triple-boot your system.
Install multiple Operating Systems (OS) on your machine, preferably different GNU/Linux distros, for example, a stable Fedora (F21) alongside Fedora Rawhide (F22). If you're dual-booting between Windows and GNU/Linux, make sure you pick a common file system such as ext2/3/4; quite a few programs exist that let you access your GNU/Linux partition from Windows, like Ext2Fsd, Linux Reader, or Ext2Read. If you intend to use block device encryption, as in Linux Unified Key Setup (LUKS), you can try DoxBox.

2. Ctrl-Alt-F1/F2/F3/F4/Fn
There were a few incidents where, after upgrading to the latest kernel, I couldn't log in through the graphical user interface or X and was stuck in the console. The best way is to wait a few days (which is why you should dual-boot with other distros or OSes) for any updates or fixes. Boot up the system, but log in through a different virtual terminal using the Ctrl-Alt-Fn keyboard shortcuts. Run the yum update from the console and you should be able to boot up normally afterwards. Or you can boot using the last working kernel version, which you can pick in the GNU Grub boot loader and which is likely to work.

3. nmcli
The console tool to manage NetworkManager. Setting up Wifi in the console used to be quite troublesome, but since the release of nmcli, we have a far easier tool to manage our wireless connection. This is especially true when you have to switch to a different terminal to update your distribution without a LAN cable (see point 2). Example usages are shown below.

Check available Wifi connections. Yes, that is a bar graph in the console. Awesome, right?
$ nmcli dev wifi list
*  SSID         MODE   CHAN  RATE       SIGNAL  BARS  SECURITY
*  AAA          Infra  2     54 Mbit/s  74      ▂▄▆_  WPA1 WPA2
   BBB          Infra  2     54 Mbit/s  20      ▂___  WPA1 WPA2
   CCC          Infra  1     54 Mbit/s  35      ▂▄__  WPA2

To make a Wifi connection, use the command below. To prevent the Bash shell from saving your password in the history, prepend an extra space before the command.
  !----- extra space
$  nmcli dev wifi connect AAA password 
Device 'wlp3s0' successfully activated with 'bx12345e-x2w3-112z-kk33-e348f22345qa'.

4. Recovery Disk
If you don't dual-boot with different GNU/Linux distros, use a recovery Live CD or USB. Find an extra unused thumb drive and install a rescue image on it. It will save you a lot of time, especially when disaster strikes, like a hard disc failure, and you would otherwise have to wait to download a full Live CD.

Conclusion: if you want to try an unstable, early GNU/Linux release, be prepared for breakage and constant restarts. Do remember to back up daily. Or you can switch to a rolling-release distro like Arch Linux, where packages are continually updated instead of requiring re-installation.

DNF Unofficially Replaced Yum In Fedora 22

As I was setting up my Fedora F22 (Rawhide) installation, I noticed that Yellowdog Updater, Modified (YUM), the default package manager, has been deprecated in favour of the new Dandified Yum (DNF). An email in the mailing list confirmed my assumption ahead of the Fedora 22 Beta release in the coming week. An example is shown below from when I tried to install the Google Chrome web browser.

Using the good old RPM Package Manager (RPM), the installation failed due to failed dependencies.
$ sudo rpm -ivh google-chrome-stable_current_x86_64.rpm
[sudo] password for ang:
warning: google-chrome-stable_current_x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 7fac5991: NOKEY
error: Failed dependencies:
 lsb >= 4.0 is needed by google-chrome-stable-41.0.2272.118-1.x86_64

Instead of searching for the package names of the dependencies ourselves, we can use the yum localinstall command to resolve them for us. Unfortunately, the yum command has been delegated to dnf, where the localinstall command does not exist.
$ sudo yum localinstall google-chrome-stable_current_x86_64.rpm
Yum command has been deprecated, use dnf instead.
See 'man dnf' and 'man yum2dnf' for more information.
To transfer transaction metadata from yum to DNF, run 'dnf migrate'
Redirecting to '/usr/bin/dnf localinstall google-chrome-stable_current_x86_64.rpm'

No such command: localinstall. Please use /usr/bin/dnf --help
It could be a DNF plugin command.

Hence, we use the dnf command directly. I'm still puzzled why we need 122 packages just to install Google Chrome. Most likely the dependencies of the LSB packages and all the Perl libraries.
$ sudo dnf install ./google-chrome-stable_current_x86_64.rpm

Transaction Summary
Install  122 Packages

Total size: 66 M
Total download size: 21 M
Installed size: 245 M
Is this ok [y/N]:

I still prefer YUM over DNF, simply due to familiarity. However, both still lag behind apt-get, despite DNF trying its best to narrow the gap.

Creating Live USB Media for Fedora Rawhide (F22)

Since my hard disc failed on me a few days back, I had to reinstall my GNU/Linux environment again. As I didn't have any USB thumbdrive with me, my first thought was to perform the installation through CD. However, after comparing the price of empty CDs against a USB thumbdrive, it's far more economical to use a thumbdrive. Furthermore, you can "burn" multiple different ISO images to the same thumbdrive.

Instead of using the last stable Fedora release (F21), I opted for the Rawhide (F22) release. You can download the nightly ISO image. Note that this is a network installation image, hence the small image size of 500-plus MB.

The nightly image is quite buggy and the installer may not work. If you're stuck with the nightly image, consider trying the Alpha pre-release image instead.

To reduce the download time, we'll use Aria2, a console download program that supports multiple parallel HTTP connections. To be on the safe side, we're using a maximum of 4 connections per server (-x 4); the default is 1. You can use a different value, but your mileage may vary. Also, it's considered poor etiquette to open too many connections.
$ aria2c -x 4 -o fedora_rawhide_f22.iso
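
Before burning the downloaded image, it's good practice to verify it against the published checksum. A minimal sketch follows; the file name and the checksum value are placeholders, so use the actual CHECKSUM file published alongside the nightly image.

```shell
# Verify a downloaded image against its published SHA-256 checksum.
# Returns success (exit status 0) only when the hashes match.
verify_iso() {
    local file="$1" expected="$2"
    local actual
    actual=$(sha256sum "$file" | awk '{print $1}')
    [ "$actual" = "$expected" ]
}

# Usage (the checksum here is a placeholder, not a real Fedora value):
# verify_iso fedora_rawhide_f22.iso "<sha256 from the CHECKSUM file>"
```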

Plug in your USB thumbdrive and check the device name using the lsblk command, which lists the available block devices. This is currently my go-to command to see all available devices and partitions. As we can see below, the device name of the USB thumbdrive is /dev/sdb.
$ lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 119.2G  0 disk  
├─sda1           8:1    0   499M  0 part  
├─sda2           8:2    0   300M  0 part  /boot/efi
├─sda3           8:3    0   128M  0 part  
├─sda4           8:4    0  96.3G  0 part  /
├─sda5           8:5    0  17.3G  0 part  
├─sda6           8:6    0     1G  0 part  
└─sda7           8:7    0   3.8G  0 part  
  └─cryptswap1 252:0    0   3.8G  0 crypt [SWAP]
sdb              8:16   1   7.5G  0 disk  
└─sdb1           8:17   1   7.5G  0 part
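
Since dd will happily overwrite whatever device you point it at, a quick sanity check doesn't hurt. A sketch using the kernel's sysfs removable flag; pass whatever device name lsblk reported.

```shell
# Return success only if the named block device is flagged removable
# in sysfs (1 = removable media, 0 = fixed disk).
is_removable() {
    [ "$(cat /sys/block/"$1"/removable 2>/dev/null)" = "1" ]
}

# Usage: is_removable sdb && echo "sdb is removable, safe to burn"
```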

Let's burn the ISO image using the dd command. It took roughly 1 minute and 20 seconds.
$ time sudo dd if=fedora_rawhide_f22.iso of=/dev/sdb bs=4M
138+1 records in
138+1 records out
579862528 bytes (580 MB) copied, 80.0856 s, 7.2 MB/s

real    1m20.113s
user    0m0.016s
sys     0m5.256s
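
After dd finishes (and a sync), another quick check is to compare the first ISO-sized chunk of the device back against the image. A sketch, assuming GNU stat and cmp; the device is larger than the image, so we must not compare past the image's end.

```shell
# Compare the first N bytes of the target against the ISO, where N is
# the ISO's size in bytes. cmp exits 0 only when the bytes match.
verify_write() {
    local iso="$1" dev="$2"
    cmp -n "$(stat -c %s "$iso")" "$iso" "$dev"
}

# Usage (as root): verify_write fedora_rawhide_f22.iso /dev/sdb
```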

To verify that we've burned the image successfully, we can test it using a virtual machine like QEMU (Quick Emulator) instead of the intended physical machine. Note that this is under Ubuntu 15.04 Vivid Vervet.
$ sudo apt-get install qemu
$ sudo qemu-system-x86_64 -hda /dev/sdb -m 1024 -vga std

If you can see the result in the screenshot below, then you've successfully burned the ISO to the USB thumbdrive.