Showing posts with label lxd.

Rust Installation in Ubuntu 18.10

When was the last time I looked at Rust? Oh right, it was almost 5 years ago (how time flies). Amazon's Firecracker piqued my interest in Rust again and I'm curious to check it out once more. There are several installation methods available. These days, it's easier to use a custom tool like Rustup or Docker to manage and switch between different versions compared to the default distro packages.

Using Rustup
This is the default installation method. However, we're installing this inside an LXC/LXD container. This is the fastest way to get Rust running in your environment compared to other methods (more on this later).
$ lxc launch ubuntu:18.10 rust-rustup
$ lxc exec rust-rustup bash
root@rust-rustup:~# curl https://sh.rustup.rs -sSf | sh

This will download and install the official compiler for the Rust programming 
language, and its package manager, Cargo.

It will add the cargo, rustc, rustup and other commands to Cargo's bin 
directory, located at:

  /root/.cargo/bin

This path will then be added to your PATH environment variable by modifying the
profile file located at:

  /root/.profile

You can uninstall at any time with rustup self uninstall and these changes will
be reverted.

Current installation options:

   default host triple: x86_64-unknown-linux-gnu
     default toolchain: stable
  modify PATH variable: yes

1) Proceed with installation (default)
2) Customize installation
3) Cancel installation
> 1
......
info: installing component 'rustc'
info: installing component 'rust-std'
info: installing component 'cargo'
info: installing component 'rust-docs'
info: default toolchain set to 'stable'

  stable installed - rustc 1.31.1 (b6c32da9b 2018-12-18)

Rust is installed now. Great!

To get started you need Cargo's bin directory ($HOME/.cargo/bin) in your PATH 
environment variable. Next time you log in this will be done automatically.

To configure your current shell run source $HOME/.cargo/env
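For reference, that `env` file is tiny; sourcing it effectively just prepends Cargo's bin directory to `PATH`. A minimal sketch of what it does (assuming the default install location):

```shell
# Equivalent of "source $HOME/.cargo/env" (sketch): prepend Cargo's bin
# directory to PATH so rustc, cargo, and rustup can be found.
export PATH="$HOME/.cargo/bin:$PATH"
```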

Checking Rust-based tools version.
root@rust-rustup:~# rustc --version; cargo --version; rustup --version
rustc 1.31.1 (b6c32da9b 2018-12-18)
cargo 1.31.0 (339d9f9c8 2018-11-16)
rustup 1.16.0 (beab5ac2b 2018-12-06)

Using Default Distro Package Manager
Again, bootstrap the environment using LXC/LXD.
$ lxc launch ubuntu:18.10 rust-pkg
$ lxc exec rust-pkg bash          
root@rust-pkg:~# apt update; apt upgrade
root@rust-pkg:~# apt install rustc

Checking Rust-based tools version.
root@rust-pkg:~# rustc --version; cargo --version; rustup --version
rustc 1.30.0
cargo 1.30.0
rustup: command not found

Using Docker Image
Using the official Docker image (which is based on Debian). The image size seemed way too big, roughly around 1.6 GB.
$ docker pull rust
$ docker image list | grep rust
rust                latest              d6daf33d7ea6        3 days ago          1.63GB

Luckily, a slimmer image is available, roughly half the size. You just have to pull using the right tag. The reduction in size was due to clean-up steps and a slimmer base Docker image.
$ docker pull rust:slim
$ docker image list | grep rust
rust                slim                a374accc3257        3 days ago          967MB
rust                latest              d6daf33d7ea6        3 days ago          1.63GB

Checking the container and Rust-based tools version.
$ docker run --rm -it rust bash
root@<container-id>:/# rustc --version; cargo --version; rustup --version
rustc 1.31.1 (b6c32da9b 2018-12-18)
cargo 1.31.0 (339d9f9c8 2018-11-16)
rustup 1.16.0 (beab5ac2b 2018-12-06)

Bridging a Wireless NIC?

In our previous post, we set up Pi-hole in LXD by bridging through a macvlan network adapter, so our containers share the network segment with the host machine's network. The limitation of such a setup is that this kind of bridging only works for an Ethernet network adapter, not a Wifi network adapter, because "many wireless cards don't allow spoofing of the source address" (shown in an example later) and also because of a limitation of 802.11. Read this answer for a more complete explanation.

Following this guide, I tried to create a bridge using the bridge control tool, `brctl`, and add the wireless network interface, `wlp3s0`, to it.
$ sudo brctl addbr wbr0

$ brctl show wbr0
bridge name     bridge id               STP enabled     interfaces
wbr0            8000.000000000000       no

$ sudo brctl addif wbr0 wlp3s0
can't add wlp3s0 to bridge wbr0: Operation not supported

To resolve the error shown above, we need to enable the `4addr` option on our Wifi adapter. `4addr` is used so that an "IEEE 802.3 (Ethernet) frame gets encapsulated in a IEEE 802.11 (WLAN) frame".
$ sudo iw dev wlp3s0 set 4addr on
$ sudo brctl addif wbr0 wlp3s0

Trying to obtain an IP from our bridge interface, `wbr0`.
$ sudo dhclient -d wbr0
Internet Systems Consortium DHCP Client 4.3.5
Copyright 2004-2016 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Listening on LPF/wbr0/12:34:56:78:90:01
Sending on   LPF/wbr0/12:34:56:78:90:01
Sending on   Socket/fallback
DHCPDISCOVER on wbr0 to 255.255.255.255 port 67 interval 3 (xid=0x444ed350)
DHCPDISCOVER on wbr0 to 255.255.255.255 port 67 interval 3 (xid=0x444ed350)
......

And suddenly, we lost our Wifi connectivity and couldn't find the Wifi network adapter anymore. For security reasons, it's hard to spoof the source MAC address.

To undo this, let's remove all the changes we've made. We may also need to reboot the machine to regain Wifi connectivity.
$ sudo brctl delbr wbr0
bridge wbr0 is still up; can't delete it

$ sudo iw dev wlp3s0 set 4addr off
command failed: Device or resource busy (-16)

$ sudo brctl delif wbr0 wlp3s0
$ sudo iw dev wlp3s0 set 4addr off

$ sudo ifconfig wbr0 down
$ sudo brctl delbr wbr0

$ sudo systemctl restart NetworkManager

Nevertheless, there are still ways to make this work, although they are far more complicated. Since macvlan does not work with a wireless adapter, there is an alternative using ipvlan. However, while this was proposed for inclusion in LXD, it was postponed since macvlan provides similar features. Furthermore, DHCP will not work in either method anyway.

This Week I Learned 2018 - Week 50

Last week post or something else from the past instead.

What is the one crucial thing when buying insurance? Make sure it's guaranteed renewable. If not, after a big claim, the said issue will be excluded from your policy upon renewal. If your insurance policy is not guaranteed renewable, make sure it has unlimited coverage. Read the Bank Negara Malaysia (BNM)'s guidelines on this. Meanwhile, something related: when it comes to insurance claims, you can claim from multiple insurers for Personal Accident (PA) or life policies. For medical, you can only claim from one insurer.

Do we need to push so hard for Science, Technology, Engineering and Mathematics (STEM) education among young people? Yes and no. Yes, if we want to stay competitive in this industry. No, because this will create an oversupply of labour and thus keep wages low. That hardly justifies young people going into the STEM industry, where wages are too low and education fees are too high for those looking for a good university.

What are the best books of 2018? (via HN and Reddit) Well, you can go through the lists from NPR, Goodreads, The New Yorker, Science Friday, The Wall Street Journal (politics, children, science fiction, and mysteries), Esquire, Amazon Best Sellers or by category (note best seller), The Guardian, Powell, Five Books (science, fiction, and politics), Library Journal, People, Mental Floss, Indigo, Bill Gates himself (summer and winter), Barnes & Noble, Book Page, Financial Times, History Today, Space (old and new), Smithsonian (history, science, travel, food, and children), and AV Club. There is one book that caught my attention and is found in most of the recommended lists, Madeline Miller's Circe. If you don't have good material to read for the new year, just check the best books of last year.

What happens when bad water quality meets the monsoon month (December)? Twenty dead fishes. A similar thing happened last year around December, when some Betta fishes died en masse. Was it water hardness, pH level, ammonia level, nitrite level, or diseases? Our conclusion, with some googling, suggested that all are possible reasons. A drastic water change (like 100%) during the rainy season will shock the fishes, leading to a weakened immune system. Furthermore, irregular water changes increase the possibility of ammonia poisoning, and overfeeding without removing the remains will lead to nitrite poisoning.

One obvious symptom was a group of Betta fishes cuddling together at the corner of the tank. Last year, the same thing happened to our female sorority tank and we thought it was because these fishes were "bonding". Our naivety caused the total wipeout of all the female Bettas.


How do you troubleshoot a DHCP issue within a container? Use tcpdump. `lxdbr0` is the default bridge network adapter used by LXD.
sudo tcpdump -ni lxdbr0 port 67

Using LXD's Instance Types to Emulate Public Cloud (Amazon, Google, or Azure) Specifications

One of the challenges when developing against public cloud providers like Amazon, Google, or Azure is how to emulate, as closely as possible, the specification of their cloud instances locally. LXD, the system container manager, has a feature where you can specify an instance type during container creation. The public cloud specifications are based on the mapping done by the instance type project. This is not a full emulation of the actual public cloud environment, just the essential resource allocations like CPU, memory, disk size, and others. Nevertheless, it's a good and quick way to bootstrap your container to roughly match the resources allocated by these public cloud offerings.

Before that, please check the available CPU cores and memory your machine has. My lappy has 4 CPU cores and 7 GB of memory.
$ nproc && free -g
4
              total        used        free      shared  buff/cache   available
Mem:              7           6           0           0           1           0
Swap:             0           0           0
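Since an instance type simply reserves CPU and memory, a quick sanity check that the host can fit, say, a 1 CPU / 1 GB instance type can be sketched in plain shell (the `need_*` values below are illustrative):

```shell
# Sanity check (sketch): does the host have at least the CPU cores and
# memory that the target instance type would allocate?
need_cpu=1
need_mem_gb=1
have_cpu=$(nproc)
have_mem_gb=$(free -g | awk '/^Mem:/ {print $2}')
if [ "$have_cpu" -ge "$need_cpu" ] && [ "$have_mem_gb" -ge "$need_mem_gb" ]; then
    fits=yes
else
    fits=no
fi
echo "fits: $fits"
```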

How do instance types work in LXD? Let's create a container based on AWS's t2.micro, which has a specification of 1 CPU and 1 GB RAM. All three commands below are equivalent, just using different syntaxes.
$ lxc launch ubuntu:18.04 c1 -t aws:t2.micro # <cloud>:<instance type>
$ lxc launch ubuntu:18.04 c2 -t t2.micro # <instance type>
$ lxc launch ubuntu:18.04 c3 -t c1-m1 # c<CPU>-m<RAM in GB>

Check the specification of the created containers.
$ lxc config show c1 | grep -E "cpu|memory"
  limits.cpu: "1"
  limits.memory: 1024MB

Again, an alternative way to get the config details of the container.
$ lxc config get c1 limits.cpu
1
$ lxc config get c1 limits.memory
1024MB
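The `c<CPU>-m<RAM>` shorthand above maps one-to-one onto `limits.cpu` and `limits.memory`. A tiny illustration of that mapping in plain shell (a hypothetical helper, not part of the `lxc` client):

```shell
# Parse a c<CPU>-m<RAM> shorthand (e.g. "c1-m1") into the two limits it
# expands to.
spec="c1-m1"
cpu=${spec%-m*}   # strip the "-m<RAM>" suffix -> "c1"
cpu=${cpu#c}      # strip the leading "c"      -> "1"
mem=${spec#*-m}   # keep what follows "-m"     -> "1"
echo "limits.cpu=${cpu} limits.memory=${mem}GB"
# -> limits.cpu=1 limits.memory=1GB
```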

Pi-hole with LXD - Installation and Setup

Pi-hole is a wrapper around your DNS server that blocks all advertisements and trackers. We're using it on our home network to block all those unnecessary, bandwidth-wasting contents. Setting it up for any of your devices is quite straightforward; just make sure your router points to it as the DNS server.

While a Docker image exists, we have installed it within an LXD container since we already have an LXD host in our small homelab server, Kabini (more on this in coming posts).

First, we set up the container based on Ubuntu 18.04.
$ lxc launch ubuntu:18.04 pihole
$ lxc list -c=ns4Pt
+--------+---------+----------------------+----------+------------+
|  NAME  |  STATE  |         IPV4         | PROFILES |    TYPE    |
+--------+---------+----------------------+----------+------------+
| pihole | RUNNING | 10.53.105.102 (eth0) | default  | PERSISTENT |
+--------+---------+----------------------+----------+------------+

Looking at the table above, notice that the container was created based on the default profile and the IP we obtained is within the 10.x.x.x range. What we need to do is create a new profile which will make the container accessible to other machines on the LAN. Hence, we need to switch from bridge to macvlan.

The `eth0` network adapter links to your host's network adapter, which may be named differently, for example, `enp1s0` (LAN). However, you can't bridge a Wifi interface to an Ethernet interface, as a Wifi interface, by default, only accepts a single MAC address from a client.
$ lxc profile copy default macvlan
$ lxc profile device set macvlan eth0 parent enp1s0
$ lxc profile device set macvlan eth0 nictype macvlan

Stop the `pihole` container so we can switch the profile to `macvlan`.
$ lxc stop pihole
$ lxc profile apply pihole macvlan
Profiles macvlan applied to pihole
$ lxc start pihole
$ lxc list -c=ns4Pt
+--------+---------+----------------------+----------+------------+
|  NAME  |  STATE  |         IPV4         | PROFILES |    TYPE    |
+--------+---------+----------------------+----------+------------+
| pihole | RUNNING | 192.168.0.108 (eth0) | macvlan  | PERSISTENT |
+--------+---------+----------------------+----------+------------+

Next, enter the container and install Pi-hole.
$ lxc exec pihole bash
root@pihole:~# curl -sSL https://install.pi-hole.net | bash

LXC/LXD 3 - Installation, Setup, and Discussion

It has been a while (about three years) since I last looked into LXC/LXD (around version 2.0.0). As we're celebrating the end of 2018 and embracing the new year 2019, it's good to revisit LXC/LXD (the latest version is 3.7.0) to see what changes have been made to the project.

Installation wise, `snap` has replaced `apt-get` as the preferred installation method so we can always get the latest and greatest updates. One of the issues I faced last time was that support for non-Debian distros like CentOS/Fedora and the like was non-existent. To make it work, you had to compile the source code on your own. Even so, certain features were not implemented or made possible. Hence, `snap` is a long-awaited way to get LXC/LXD to work on most GNU/Linux distros out there.

Install the packages as usual.
$ sudo apt install lxd zfsutils-linux

The `lxd` pre-installation script will ask which version you want to install. If you choose `latest`, it will install the latest version using `snap`. Otherwise, for the stable production 3.0 release, it will install the version that comes with the package.


You can verify the installation method and version of the LXD binary.
$ which lxd; lxd --version
/snap/bin/lxd
3.7

The next step is to configure LXD's settings, especially storage. In our case, we're using ZFS, which has better storage efficiency. The only default value changed was the new storage pool name.
$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]: lxd
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=45GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

If you want to manage the containers as a normal user, add yourself to the `lxd` group and refresh the changes.
$ sudo adduser $USER lxd
$ newgrp lxd
$ id $USER | tr ',' '\n'
uid=1000(ang) gid=1000(ang) groups=1000(ang)
4(adm)
7(lp)
24(cdrom)
27(sudo)
30(dip)
46(plugdev)
116(lpadmin)
126(sambashare)
127(docker)
134(libvirt)
997(lxd)

Next, we're going to create our first container and show its status. Downloading the whole template container image is going to take a while.
$ lxc launch ubuntu:18.04 c1   
Creating c1
Starting c1

$ lxc list -c=ns4Pt
+------+---------+----------------------+----------+------------+
| NAME |  STATE  |         IPV4         | PROFILES |    TYPE    |
+------+---------+----------------------+----------+------------+
| c1   | RUNNING | 10.53.105.243 (eth0) | default  | PERSISTENT |
+------+---------+----------------------+----------+------------+

Upgrading to Ubuntu 18.04 Bionic Beaver

Yes, the regular update of the Ubuntu distro on my lappy. While reading back through my old posts, I just realized that I've written a personal note for almost every Ubuntu release, like 17.10, 17.04, 15.10, and 13.04. Not sure why, but I didn't jot down any upgrade notes for 16.10, 16.04, 14.10, 14.04, 13.10, and earlier.

The upgrade was done as usual but with a few hiccups. A full upgrade was possible with a few manual interventions in the package management.

First, not enough free disk space in the '/boot' partition.

The upgrade has aborted. The upgrade needs a total of 127 M free
space on disk '/boot'. Please free at least an additional 82.7 M of
disk space on '/boot'. You can remove old kernels using 'sudo apt
autoremove' and you could also set COMPRESS=xz in
/etc/initramfs-tools/initramfs.conf to reduce the size of your
initramfs.
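The other fix suggested by the message is a one-line change to `/etc/initramfs-tools/initramfs.conf` (shown here as a config fragment):

```
COMPRESS=xz
```

After changing it, the initramfs has to be regenerated, e.g. with `sudo update-initramfs -u`, for the smaller images to materialize.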

Resolved this by removing all the previous Linux kernels, and I was surprised to learn that my machine had so many different versions lying around. No wonder there was limited space available.

Second, the upgrade was halted due to a packaging dependency. Not sure why. Googling around for answers and trying a few usual solutions did not help at all; I kept getting the same old error message.

E: Sub-process /usr/bin/dpkg returned an error code (1)

dpkg: error processing archive 
   /var/cache/apt/archives/libmariadb3_3.0.3-1build1_amd64.deb (--unpack):
 trying to overwrite '/usr/lib/x86_64-linux-gnu/mariadb/plugin/dialog.so', 
   which is also in package libmariadb2:amd64 2.3.3-1
 Errors were encountered while processing:
 /var/cache/apt/archives/libmariadb3_3.0.3-1build1_amd64.deb
 E: Sub-process /usr/bin/dpkg returned an error code (1)

In the end, the Synaptic tool saved the day by resolving all the conflicts.


Is it just me or something else? `apt` doesn't seem to have the right default options to resolve conflicts compared to Synaptic.

Now for the changes. Reading through the release notes, I learned a few things and realized that I had quite lost touch with the server side of the Ubuntu distro.

1/ Netplan, the network configuration abstraction renderer. Basically, it's just a tool to manage networking through YAML files. Surprisingly, the console tool was written in C instead of the regularly used Python. Not sure why, but surely there must be a good reason.

2/ New features are only available for new installations, not upgrades. For example, a swap file instead of a swap partition, Python 3 over Python 2, and full disk encryption (LUKS) instead of folder encryption.

3/ Subiquity, the new server installer, is available for server users. Definitely a DIY solution to differentiate themselves from the default Debian installer.

4/ LXD 3.0. A better alternative or solution to Vagrant or Docker. I've lost track of this project. Maybe it's the right time to look into it and get my homelab machine to run it again.

5/ chrony replaced ntpd (there are comparisons as well). One good thing is that chrony is licensed under GPLv2.

6/ On the desktop front, from the GNOME 3.28 release notes, Boxes is getting some much-needed love. The previous version was so buggy that it made you wonder why it was ever released in the first place.


This Week I Learned - 2017 Week 15

Last week post or you can browse through the whole series.

While debugging a Makefile, I accidentally `rm -rf`'ed my home folder. Lesson learned: always back up and sync your changes regularly. Nevertheless, it's always a good fresh start when your home folder contains not a single file or folder. It's good to have a weekly clean up of your machine: review, keep, or remove. Otherwise, there will be a lot of leftover files pending.

It has been a while since I worked on a weekend. The serenity of the environment improves your productivity tenfold. There are no sounds other than the air-con, the traffic, and your typing. You're basically in the zone, focused solely on the task at hand. No more stupid shenanigans. In hindsight, you have to find or create your own optimal environment and zone. It all starts with a system that leads to a habit, good habits.

#1 How to read more books? Lots of good tips on increasing the volume of books you can read. It's already early April and I've only managed to finish 2 books, so I'm not really on track to finish 12 books this year. Thinking back, reading style, book choices, timing, and context are what's causing the slowness. One of the best strategies is to switch between different books if you're stuck or bored. Some books need more mental energy to get through. While reading 2 pages per day can develop a good habit, it's not fast enough to catch up with my piling reading list.

#2 Engineer's Disease. The unconscious thought that can lead to an arrogant and condescending personality. Maybe because such behaviour "stems from the OCD and emotional detachment our peoples tend to have, mixed in with a good dose of raging insecurity"? Good forum discussions to ponder, especially for those working in software development.

#3 Do teenagers and adults have different learning capabilities? Time, or perceived available time. Also discipline, attention, and focus. The discussion at HN gave a lot of strategies to attack the problem: simple daily practice and learning, together with different learning strategies. What to learn then? Fundamentals. There is an interesting discussion on software development being a dead-end job after 35-40.

#4 On understanding the fundamentals of Vim. Before you install any Vim plugin, it's best to learn whether the feature already exists by default.

#5 System Design Primer. If you want to learn how to design large scale systems. However, premature optimization is still evil. Knowing something can be done right doesn't mean it should be done now. There are always contexts and constraints. Solutions looking for problems always end up wasting everyone's resources. This HN user's experience on scaling your system accurately illustrates such a scenario.

#6 Looking busy at work? Most people don't realize that pretending to work and looking busy is actually far harder than doing the actual work. Faking will deplete you psychologically, as your thoughts, actions, and words are not in sync. However, there are always exceptions. Certain groups of people thrive on such behaviour without caring about any form of repercussion, while some are just stuck with a mind-numbing, boring job. There is a saying by Napoleon Hill: "If you are not learning while you're earning, you are cheating yourself out of the better portion of your compensation." Unless you're stuck with certain constraints, move on. You're not a tree!

#7 LXD is finally available for Fedora. Not as a native RPM package but through Snap. I'm going to reformat another workstation and install Fedora with it. One less reason to stick with Ubuntu. That only leaves the DEB packages, where, I believe, there is no way Fedora/CentOS/Red Hat can dethrone the number of available packages provided by Debian. I'm not looking for a rolling release like Arch but for the availability of different software. Maybe Snap, the universal GNU/Linux package, can change that?

This Week I Learned - 2016 Week 40

Last week post or the whole series.

My sentiments exactly. See the comment below regarding chasing fads in technology development, especially in the PHP programming language. Do I actually miss PHP? Not really. But I think most web development systems can still be built using PHP-based solutions. For barrier of entry to web development, PHP is still the best choice.
Incidentally, this isn't unique to the front-end. I've seen the same thing happen with the SQL->ORM->Mongo->Couch->SQL nonsense, or the myriad of templating engines written for PHP, a language that itself is a templating engine.

Using Node.js? Need a basic system with essential features to bootstrap your project? Look no further than Hackathon Starter.

The difference between Ponzi, Pyramid, and Matrix schemes. A lot of young people, especially fresh graduates, need to be aware and prevent themselves from falling for such scams. The pitch of being your own boss or retiring early may sound too good to be true.

Sometimes the documentation written for a certain API, Data::ICal, is so confusing that you have to resort to searching source code written by others using the same API, at here, here, here, and here. Can't blame the API author, as the standard itself, RFC 5545, is just as confusing and complicated.

Google Interview University (via HN). Instead of working for a company, why not strive to work with great people instead? Google is so big and not every team is equal. I agree with one of the comments: this is a good compilation of resources for computer science study. Which reminds me again of the Programmer Competency Matrix. Instead of focusing on the computer science stuff, why not focus on building stuff? Someone needs to read the interviews of famous programmers on their backgrounds.

Getting older but still enjoy working as a programmer? Read Reflections of an "Old" Programmer, especially the comments from the blog post itself, HN, and Reddit. The main question here is: how do you age gracefully as a programmer? Lifelong learning, especially the fundamentals (the core of computer science, not the latest greatest fad frameworks), as those things never change. I blogged about this last year, during my career detour into a non-programmer role, still technical though.

I didn't realize that to use PowerTop properly, you first need to calibrate it after installation to collect measurement data. The whole process will take a while and networking will be disabled during that period.
$ sudo apt-get install powertop
$ sudo powertop --calibrate
$ sudo powertop --auto-tune

Besides that, you can turn off unwanted running services to save a bit more battery.
$ service --status-all

The upgrade to LXD 2.3 failed due to a spelling bug in the bridge network upgrade script. I fixed the bug manually and restarted the installation process.
$ sudo apt-get install -f

However, newly created containers didn't have a default network bridge. Re-attaching it resolved the issue.
$ lxc network attach lxdbr0 test-alpine
$ lxc restart test-alpine

This Week I Learned - 2016 Week 12

Last week post or the whole series.

#1 LXD 2.0 blog post series. A write-up by Stéphane Graber on the upcoming LXD 2.0 release. Since the REST API support is pretty much stable these days, you can't blame people for building the numerous web front ends (lxd-webui and lxd-webgui) that exist. Between the two different container implementations, I still prefer LXC to Docker. The unfortunate case is that Docker has the popularity and financial backing to move things faster.

#2 "End of file during parsing". Encountered this error message when I was customizing my Emacs. This is due to an excess open or close parenthesis. Surprised to know that the debugging procedure is quite tedious. I'm still not getting used to the enormously long Emacs default key bindings.

#3 Emacs' Sequence of Actions at Startup. For Emacs 24.x. That is one long list of items to run during program startup.

#4 .PHONY in a Makefile. It just dawned on me that the obvious reason we set certain targets as .PHONY is that they are not file targets!

#5 Certain metals have an antiseptic effect. Hence why musical instruments and some door knobs are made from brass.

#6 Which activity brings out the worst in people? Sad but true, I have to agree with the discussion in the top post. Inheritance, mostly caused by the wives/husbands of the siblings.

#7 Seeing the Current Value of a Variable. Either 'C-h v' or 'M-x describe-variable' to find the value of a configuration item in Emacs.

#8 Telsa's Eulogy. I was surprised to find out about her passing. I used to follow her blog in the early days of the GNOME project, when Alan Cox was still a Linux kernel maintainer.

Troubleshooting Dynamic Host Configuration Protocol (DHCP) Connection in LXD, Part 1: The Dnsmasq Server

While testing LXD, the GNU/Linux container hypervisor, one of the issues I encountered was certain containers failing to obtain an IP address after booting up. Hence, for the past few days, while scratching my head investigating the issue, I've gained some understanding of how DHCP works and learned a few tricks on how to troubleshoot a DHCP connection.

DHCP is a client/server protocol where the client obtains an IP address from the server. Thus, to troubleshoot any connection issue, we should look in two places: the server side and the client side.

Is Dnsmasq up and running?
First, the server end. As I mentioned in my previous post, in LXD, the lxcbr0 bridge interface is basically a virtual switch which, through Dnsmasq, provides network infrastructure services like Domain Name System (DNS) and DHCP. If DHCP is not working, the first thing to check is whether Dnsmasq has been started correctly. Pay attention to all lines that contain the word 'dnsmasq' and check for any errors.
$ sudo systemctl status lxc-net -l
● lxc-net.service - LXC network bridge setup
   Loaded: loaded (/usr/lib/systemd/system/lxc-net.service; enabled; vendor preset: disabled)
   Active: active (exited) since Wed 2015-11-18 21:04:24 MYT; 1s ago
  Process: 21863 ExecStop=/usr/libexec/lxc/lxc-net stop (code=exited, status=0/SUCCESS)
  Process: 21891 ExecStart=/usr/libexec/lxc/lxc-net start (code=exited, status=0/SUCCESS)
 Main PID: 21891 (code=exited, status=0/SUCCESS)
   Memory: 408.0K
      CPU: 39ms
   CGroup: /system.slice/lxc-net.service
           └─21935 dnsmasq -u nobody --strict-order --bind-interfaces --pid-file=/run/lxc/dnsmasq.pid --listen-address 10.0.3.1 --dhcp-range 10.0.3.2,10.0.3.254 --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative

Nov 18 21:04:24 localhost.localdomain dnsmasq[21935]: started, version 2.75 cachesize 150
Nov 18 21:04:24 localhost.localdomain dnsmasq[21935]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify
Nov 18 21:04:24 localhost.localdomain dnsmasq-dhcp[21935]: DHCP, IP range 10.0.3.2 -- 10.0.3.254, lease time 1h
Nov 18 21:04:24 localhost.localdomain dnsmasq-dhcp[21935]: DHCP, sockets bound exclusively to interface lxcbr0
Nov 18 21:04:24 localhost.localdomain dnsmasq[21935]: reading /etc/resolv.conf
Nov 18 21:04:24 localhost.localdomain dnsmasq[21935]: using nameserver 192.168.42.1#53
Nov 18 21:04:24 localhost.localdomain dnsmasq[21935]: read /etc/hosts - 2 addresses
Nov 18 21:04:24 localhost.localdomain systemd[1]: Started LXC network bridge setup.

As LXD is still actively under development, there are still many pending issues; you may want to walk through the '/usr/libexec/lxc/lxc-net' script to investigate further. Although, from my experience, a simple service restart, 'systemctl restart lxc-net', should be sufficient.

Failed to create listening socket?
A few days back, one of the issues I experienced was that the Dnsmasq server failed to start due to a failure in creating the listening socket.
......
Nov 14 20:43:18 localhost.localdomain systemd[1]: Starting LXC network bridge setup...
Nov 14 20:43:18 localhost.localdomain lxc-net[24314]: dnsmasq: failed to create listening socket for 10.0.3.1: Cannot assign requested address
Nov 14 20:43:18 localhost.localdomain dnsmasq[24347]: failed to create listening socket for 10.0.3.1: Cannot assign requested address
Nov 14 20:43:18 localhost.localdomain dnsmasq[24347]: FAILED to start up
Nov 14 20:43:18 localhost.localdomain lxc-net[24314]: Failed to setup lxc-net.
Nov 14 20:43:18 localhost.localdomain systemd[1]: Started LXC network bridge setup.
......

Alternatively, you can also check through the Systemd journal log.
$ journalctl -u lxc-net.service 
$ journalctl -u lxc-net.service | grep -i 'failed to'

The question to raise when looking into this error is: which other process is trying to bind to port 53, the default DNS port? There are several ways to check this.
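One portable check, a sketch assuming a Linux /proc filesystem: port 53 appears as 0x0035 in the kernel's hex socket tables, so we can count its sockets without netstat or ss installed.

```shell
# Count sockets bound to port 53 (hex 0035) straight from the
# kernel's socket tables; no extra tools needed.
count=$(awk 'FNR > 1 && $2 ~ /:0035$/' /proc/net/tcp /proc/net/udp | wc -l | tr -d ' ')
echo "sockets on port 53: $count"
```

On systems with iproute2, 'sudo ss -lnptu' gives the same information along with the owning process names.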

Are there any other running Dnsmasq instances besides the one started by the 'lxc-net' service? Note that the output below was reformatted to improve readability. The other instances were created by libvirt and vagrant-libvirt.
$ ps -o pid,cmd -C dnsmasq
  PID CMD
 2851 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/vagrant-libvirt.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

 2852 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/vagrant-libvirt.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

 2933 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

 2934 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

21935 dnsmasq -u nobody --strict-order --bind-interfaces --pid-file=/run/lxc/dnsmasq.pid --listen-address 10.0.3.1 --dhcp-range 10.0.3.2,10.0.3.254 --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative

Is there any process currently listening on port 53 using the same IP address, 10.0.3.1?
$ sudo netstat -anp | grep :53 | grep LISTEN
tcp        0      0 10.0.3.1:53             0.0.0.0:*               LISTEN      21935/dnsmasq       
tcp        0      0 192.168.124.1:53        0.0.0.0:*               LISTEN      2933/dnsmasq        
tcp        0      0 192.168.121.1:53        0.0.0.0:*               LISTEN      2851/dnsmasq        
tcp6       0      0 fe80::fc7b:93ff:fe7a:53 :::*                    LISTEN      21935/dnsmasq   

In my case (I didn't manage to capture the output), another orphaned Dnsmasq instance was preventing the 'lxc-net' service from launching a new Dnsmasq instance on the lxcbr0 interface. If I remember correctly, these were instances left over from my earlier debugging of the '/usr/libexec/lxc/lxc-net' script.

Error calling 'lxd forkstart......

In full, the exact error message:
error: Error calling 'lxd forkstart test-centos-6 /var/lib/lxd/containers /var/log/lxd/test-centos-6/lxc.conf': err='exit status 1'

Again, after rebooting my laptop two days later, I encountered the above error message while trying to start my container through LXD. Reading through the LXD issue reports, these are the typical steps to troubleshoot it. Note that I installed LXD through source code compilation, as there is no RPM package available for Fedora 23.

First things first: as LXD was built from source, it was started manually by running the command below. The benefit of starting the LXD daemon this way is that it lets you monitor all the debugging messages, as shown below.
$ su -c 'lxd --group wheel --debug --verbose'

INFO[11-14|14:10:24] LXD is starting                          path=/var/lib/lxd
WARN[11-14|14:10:24] Per-container AppArmor profiles disabled because of lack of kernel support 
INFO[11-14|14:10:24] Default uid/gid map: 
INFO[11-14|14:10:24]  - u 0 100000 65536 
INFO[11-14|14:10:24]  - g 0 100000 65536 
INFO[11-14|14:10:24] Init                                     driver=storage/dir
INFO[11-14|14:10:24] Looking for existing certificates        cert=/var/lib/lxd/server.crt key=/var/lib/lxd/server.key
DBUG[11-14|14:10:24] Container load                           container=test-busybox
DBUG[11-14|14:10:24] Container load                           container=test-ubuntu-cloud
DBUG[11-14|14:10:24] Container load                           container=test-centos-7
INFO[11-14|14:10:24] LXD isn't socket activated 
INFO[11-14|14:10:24] REST API daemon: 
INFO[11-14|14:10:24]  - binding socket                        socket=/var/lib/lxd/unix.socket
......


The first step to troubleshoot is to ensure that the default bridge interface, lxcbr0, used by LXD is up and running.
$ ifconfig lxcbr0
lxcbr0: error fetching interface information: Device not found

Next, start the 'lxc-net' service that creates this bridge interface, then check that the bridge is up.
$ sudo systemctl start lxc-net

$ ifconfig lxcbr0
lxcbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.3.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::fcd3:baff:fefd:5bd7  prefixlen 64  scopeid 0x20<link>
        ether fe:7a:fa:dd:06:cd  txqueuelen 0  (Ethernet)
        RX packets 5241  bytes 301898 (294.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7610  bytes 11032257 (10.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Next, check the status of the 'lxc-net' service. Why do we need to do so? Remember that the 'lxc-net' service creates a virtual switch, where three things are set up. First, the bridge itself, which links to an existing network interface connected to the outside world. Next, a DNS server, which resolves domain names. And lastly, a DHCP server, which assigns new IP addresses to the containers. The DNS and DHCP services are provided by the Dnsmasq daemon.
$ sudo systemctl status lxc-net -l

● lxc-net.service - LXC network bridge setup
   Loaded: loaded (/usr/lib/systemd/system/lxc-net.service; enabled; vendor preset: disabled)
   Active: active (exited) since Sat 2015-11-14 16:13:24 MYT; 13s ago
  Process: 9807 ExecStop=/usr/libexec/lxc/lxc-net stop (code=exited, status=0/SUCCESS)
  Process: 9815 ExecStart=/usr/libexec/lxc/lxc-net start (code=exited, status=0/SUCCESS)
 Main PID: 9815 (code=exited, status=0/SUCCESS)
   Memory: 404.0K
      CPU: 46ms
   CGroup: /system.slice/lxc-net.service
           └─9856 dnsmasq -u nobody --strict-order --bind-interfaces --pid-file=/run/lxc/dnsmasq.pid --listen-address 10.0.3.1 --dhcp-range 10.0.3.2,10.0.3.254 --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative

Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: started, version 2.75 cachesize 150
Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify
Nov 14 16:13:24 localhost.localdomain dnsmasq-dhcp[9856]: DHCP, IP range 10.0.3.2 -- 10.0.3.254, lease time 1h
Nov 14 16:13:24 localhost.localdomain dnsmasq-dhcp[9856]: DHCP, sockets bound exclusively to interface lxcbr0
Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: reading /etc/resolv.conf
Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: using nameserver 192.168.1.1#53
Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: read /etc/hosts - 2 addresses
Nov 14 16:13:24 localhost.localdomain systemd[1]: Started LXC network bridge setup.

Expect more posts to come on using LXD in Fedora 23.

Linux Containers (LXC) with LXD Hypervisor, Part 3 : Transferring Files Between Host and Container

Other articles in the series:
In this part 3, we're going to explore how to copy file(s) from the host to the container and vice versa. Copying a file from the host to the container is done with the 'lxc file push <filename> <container-name>/<path>' command. You must append a forward slash (/) and a directory path to the container name for it to work, as shown below.
$ echo "a" > foobar
$ md5sum foobar 
60b725f10c9c85c70d97880dfe8191b3  foobar
$
$ lxc file push foobar test-centos-6
error: Invalid target test-centos-6
$
$ lxc file push foobar test-centos-6/tmp
error: exit status 255: mntns dir: /proc/16875/ns/mnt
open container: Is a directory

$ lxc file push foobar test-centos-6/tmp

Similarly, to copy a file from the container, use the 'lxc file pull <container-name>/<path> .' command. Remember to put the dot (.), which indicates the current folder as the destination.
$ lxc file pull test-centos-6/tmp/foobar .
$ md5sum foobar
60b725f10c9c85c70d97880dfe8191b3  foobar
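To convince yourself a transfer preserved the file byte-for-byte, compare checksums on both ends. The sketch below uses a plain cp as a stand-in for the push/pull pair, since it needs no running container:

```shell
# Round-trip integrity check; 'cp' stands in for
# 'lxc file push' followed by 'lxc file pull'.
workdir=$(mktemp -d)
printf 'a\n' > "$workdir/foobar"
cp "$workdir/foobar" "$workdir/foobar.pulled"
sum_src=$(md5sum "$workdir/foobar"        | awk '{print $1}')
sum_dst=$(md5sum "$workdir/foobar.pulled" | awk '{print $1}')
echo "$sum_src $sum_dst"
rm -r "$workdir"
```

Matching checksums on both sides confirm the copy is identical, exactly as the md5sum outputs above show.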

Since an LXC container is essentially a glorified chroot environment, you can also create or copy files and folders directly in the chroot directory.
$ cd /var/lib/lxd/containers/test-centos-6/rootfs/tmp
$ touch create_file_directly_in_chroot_folder

Repeat similar steps, but inside the container.
$ lxc exec test-centos-6 /bin/bash
$ cd /tmp
$ touch create_file_directly_in_container

Check these files from the host. Note the file ownership.
$ ll /var/lib/lxd/containers/test-centos-6/rootfs/tmp/
total 0
-rw-rw-r-- 1 ang    ang    0 Sep  29 02:00 create_file_directly_in_chroot_folder
-rw-r--r-- 1 100000 100000 0 Sep  29 02:00 create_file_directly_in_container

The same listing, but from inside the LXC container.
[root@test-centos-6 tmp]# ll
total 0
-rw-rw-r-- 1 65534 65534 0 Sep 28 14:00 create_file_directly_in_chroot_folder
-rw-r--r-- 1 root  root  0 Sep 28 14:00 create_file_directly_in_container
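The ownership flip above is just the id map at work. With the default map used here ('u 0 100000 65536'), container uid N shows up on the host as 100000 + N, while host uids outside the map appear inside the container as the overflow uid 65534 ('nobody'). A quick arithmetic sketch:

```shell
# Default LXD id map: container uids 0-65535 -> host uids 100000-165535.
map_base=100000
container_uid=0                          # root inside the container
host_uid=$((map_base + container_uid))
echo "container uid $container_uid == host uid $host_uid"
echo "unmapped host files appear in the container as uid 65534"
```

This is why the file created as container root is owned by 100000 on the host, and the file created by the host user shows up as 65534 inside.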

While this is doable, we shouldn't create files or folders directly in the container's chroot folder from the host. Use the 'lxc file push' and 'lxc file pull' commands to preserve the file ownership and permissions.

Linux Containers (LXC) with LXD Hypervisor, Part 2 : Importing Container Images Into LXD

Other articles in the series:
In Part 2, we're going to discuss different ways of importing LXC container images into LXD. By default, when you create an LXC container using the 'lxc launch' command, the tool will download and cache the container image from the remote server. For example, to create a new CentOS 7 LXC container:
$ lxc remote add images images.linuxcontainers.org
$ lxc launch images:centos/7/amd64 centos

While waiting for the CentOS 7 image to be downloaded, you can check the LXD log file.
$ sudo tail -n2 /var/log/lxd/lxd.log
t=2015-08-30T00:13:22+0800 lvl=info msg="Image not in the db downloading it" image=69351a66510eecabf11ef7dfa94af40e20cf15c346ae08b3b0edd726ef3be10c server=https://images.linuxcontainers.org:8443
t=2015-08-30T00:13:22+0800 lvl=info msg="Downloading the image" image=69351a66510eecabf11ef7dfa94af40e20cf15c346ae08b3b0edd726ef3be10c

Unfortunately, if you have a slow network like me (see screenshot below), it's best to use a network monitoring tool to check whether you're still downloading the image. In my case, I'm using bmon. Note my pathetic network speed. An average LXC container image is around 50 MB; at an average download rate of 20 kB/s, it would take around 40-plus minutes to finish the download. Without a download progress indicator, we have to go to all this trouble just to check whether the import is still running.
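A back-of-envelope check of that estimate with shell arithmetic (ignoring protocol overhead; real transfers vary):

```shell
# ~50 MiB image downloaded at ~20 KiB/s.
size_kib=$((50 * 1024))
rate_kib_per_s=20
minutes=$(( size_kib / rate_kib_per_s / 60 ))
echo "~${minutes} minutes"
```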


Alternatively, there is also another way to import container images: the 'lxd-images' tool, a Python script which supports two additional image sources on top of the default one mentioned just now. These two sources are local BusyBox images and Ubuntu Cloud images from the official release streams. Additionally, since version 0.14, download progress tracking has been added to the tool, which solves the hassle we encountered earlier.

Let's run the 'lxd-images' command and see its help message.
$ lxd-images
error: the following arguments are required: action
usage: lxd-images [-h] {import} ...

LXD: image store helper

positional arguments:
  {import}
    import    Import images

optional arguments:
  -h, --help  show this help message and exit

Examples:
 To import the latest Ubuntu Cloud image with an alias:
    /usr/bin/lxd-images import ubuntu --alias ubuntu

 To import the latest Ubuntu 14.04 LTS 64bit image with some aliases:
    /usr/bin/lxd-images import lxc ubuntu trusty amd64 --alias ubuntu --alias ubuntu/trusty

 To import a basic busybox image:
    /usr/bin/lxd-images import busybox --alias busybox

UPDATE: Since LXD version 0.17, 'lxd-images import lxc' command has been deprecated in favour of using the 'lxc launch' command.

Let's download and cache a CentOS 6 LXC container image into LXD, and compare it with using the 'lxc launch' command to import a container image. Notice the differences. First, the verbosity is higher: at least we know what is going on behind the scenes, such as which files are being downloaded. Second, we can track the progress of the download. Third, we can attach additional metadata, such as aliases, to the downloaded container image.
$ lxd-images import lxc centos 6 amd64 --alias centos/6                                                                                      
Downloading the GPG key for https://images.linuxcontainers.org
Downloading the image list for https://images.linuxcontainers.org
Validating the GPG signature of /tmp/tmprremowyo/index.json.asc
Downloading the image: https://images.linuxcontainers.org/images/centos/6/amd64/default/20150829_02:16/lxd.tar.xz
Progress: 1 %

However, from my understanding of the Python code of the 'lxd-images' tool, the container image is downloaded without using multiple simultaneous connections. Hence, it will take a while (if you have a slow connection like me) to download any container image. To work around this, you can download and import the container image manually using a third-party download tool like aria2, which supports multiple simultaneous connections.

In a previous LXC version, if I remember correctly before version 0.15, the CentOS 7 image was not found in the default image source listing, but it still existed on the web site.
$ lxd-images import lxc centos 7 amd64 --alias centos/7
Downloading the GPG key for https://images.linuxcontainers.org
Downloading the image list for https://images.linuxcontainers.org
Validating the GPG signature of /tmp/tmpgg6sob2e/index.json.asc
Requested image doesn't exist.

Download and import the container image directly.
$ aria2c -x 4 https://images.linuxcontainers.org/images/centos/7/amd64/default/20150619_02:16/lxd.tar.xz

Import the downloaded container image in unified tarball format.
$ lxc image import lxd.tar.xz --alias centos/7
Image imported with fingerprint: 1d292b81f019bcc647a1ccdd0bb6fde99c7e16515bbbf397e4663503f01d7d1c

In short, just use 'lxd-images' tool to import any container images from the default source.

For the next part of the series, we're going to look into sharing files between the LXC container and the host. Till the next time.

Linux Container (LXC) with LXD Hypervisor, Part 1: Installation and Creation

For the past few weeks, I've been looking into creating LXC containers for both the Fedora and Ubuntu distros. One of the creation methods is through downloading a pre-built image.
$ lxc-create -t download -n test-container -- -d ubuntu -r trusty -a amd64

However, creating unprivileged containers is rather cumbersome, and the list of language bindings for the APIs is limited. What if we had a daemon, or container hypervisor, that monitors and manages all the containers? In addition, the daemon would handle all the security privileges and provide a RESTful web API for remote management. Well, that's the purpose of LXD, the LXC container hypervisor. Think of it as a glorified LXC 'download' creation method with additional features.

Since the LXD project is under the management of Canonical Ltd, the company behind Ubuntu, it's recommended to use Ubuntu if you don't want to install through source code compilation.

The installation and setup of LXD shown below were done on Ubuntu 15.04.

Firstly, install the LXD package.
$ sudo apt-get install lxd
......
Warning: The home dir /var/lib/lxd/ you specified already exists.
Adding system user 'lxd' (UID 125) ...
Adding new user 'lxd' (UID 125) with group 'nogroup' ...
The home directory '/var/lib/lxd/' already exists. Not copying from '/etc/skel'.
adduser: Warning: The home directory '/var/lib/lxd/' does not belong to the user you are currently creating.
Adding group 'lxd' (GID 137) ...
Done.
......

From the message above, note that your current login user does not belong to the group 'lxd' (GID 137) yet. To update your current login user's groups during the current session, run the command below so that you don't need to log out and log back in.
$ newgrp lxd

Check the current login user's groups. You should see that the current login user now belongs to the group 'lxd' (GID 137).
$ id $USER | tr ',' '\n'
uid=1000(ang) gid=1000(ang) groups=1000(ang)
4(adm)
24(cdrom)
27(sudo)
30(dip)
46(plugdev)
115(lpadmin)
131(sambashare)
137(lxd)

$ groups
ang adm cdrom sudo dip plugdev lpadmin sambashare lxd
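For scripts, a non-interactive way to verify the membership took effect. This is a small sketch; 'id -nG' lists the current user's group names:

```shell
# Set in_lxd=yes only if the current user is in the 'lxd' group.
if id -nG | tr ' ' '\n' | grep -qx lxd; then
    in_lxd=yes
else
    in_lxd=no
fi
echo "member of lxd group: $in_lxd"
```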

Next, we need to add the remote server which contains the pre-built container images.
$ lxc remote add images images.linuxcontainers.org
Generating a client certificate. This may take a minute...

List all the available pre-built container images from the server we just added. Pay attention to the colon (:) at the end of the command, as it is needed; otherwise, the command will list the local downloaded images. The list is quite long, so I've reformatted the layout and only show the top two.
$ lxc image list images:
+-----------------------+-------------+-------+----------------+------+-----------------------------+
|   ALIAS               | FINGERPRINT |PUBLIC |  DESCRIPTION   | ARCH |        UPLOAD DATE          |
+-----------------------+-------------+-------+----------------+------+-----------------------------+
|centos/6/amd64 (1 more)|460c2c6c4045 |yes    |Centos 6 (amd64)|x86_64|Jul 25, 2015 at 11:17am (MYT)|
|centos/6/i386 (1 more) |60f280890fcc |yes    |Centos 6 (i386) |i686  |Jul 25, 2015 at 11:20am (MYT)|
......

Let's create our first container using CentOS 6 pre-built image.
$ lxc launch images:centos/6/amd64 test-centos-6
Creating container...error: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket: no such file or directory

Reading through this troubleshooting ticket, it seems the LXD daemon was not started. Let's start it. Note that I'm still using the old 'service' command to start the daemon instead of the 'systemctl' command. As they say, old habits die hard. It will take a while for me to fully transition from SysVinit to Systemd. ;-)
$ sudo service lxd restart
$ sudo service lxd status
● lxd.service - Container hypervisor based on LXC
   Loaded: loaded (/lib/systemd/system/lxd.service; indirect; vendor preset: enabled)
   Active: active (running) since Ahd 2015-07-26 00:28:51 MYT; 10s ago
 Main PID: 13260 (lxd)
   Memory: 276.0K
   CGroup: /system.slice/lxd.service
           ‣ 13260 /usr/bin/lxd --group lxd --tcp [::]:8443

Jul 26 00:28:51 proliant systemd[1]: Started Container hypervisor based on LXC.
Jul 26 00:28:51 proliant systemd[1]: Starting Container hypervisor based on LXC...

Finally, create and launch our container using the CentOS 6 pre-built image. Compared to the 'lxc-create' command, the parameters are simpler. This will take a while, as the program needs to download the pre-built CentOS 6 image, which averages around 50 MB; more on this later.
$ lxc launch images:centos/6/amd64 test-centos-6
Creating container...done
Starting container...done

Checking the status of our newly created container.
$ lxc list
+---------------+---------+-----------+------+-----------+-----------+
|     NAME      |  STATE  |   IPV4    | IPV6 | EPHEMERAL | SNAPSHOTS |
+---------------+---------+-----------+------+-----------+-----------+
| test-centos-6 | RUNNING | 10.0.3.46 |      | NO        | 0         |
+---------------+---------+-----------+------+-----------+-----------+

More details on our container.
$ lxc info test-centos-6
Name: test-centos-6
Status: RUNNING
Init: 14572
Ips:
  eth0: IPV4    10.0.3.46
  lo:   IPV4    127.0.0.1
  lo:   IPV6    ::1

Check the downloaded pre-built image. Subsequent container creations will reuse the same cached image.
$ lxc image list
+-------+--------------+--------+------------------+--------+-------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |   DESCRIPTION    |  ARCH  |          UPLOAD DATE          |
+-------+--------------+--------+------------------+--------+-------------------------------+
|       | 460c2c6c4045 | yes    | Centos 6 (amd64) | x86_64 | Jul 26, 2015 at 12:51am (MYT) |
+-------+--------------+--------+------------------+--------+-------------------------------+

You can also use the fingerprint to create and start a container from the same image.
$ lxc launch 460c2c6c4045 test-centos-6-2                                                                    
Creating container...done
Starting container...done

As I mentioned, the downloaded pre-built CentOS 6 image is roughly 50 MB. The file is located in the '/var/lib/lxd/images' folder. The fingerprint is just the first 12 characters of the hash string that forms the file name.
$ sudo ls -lh /var/lib/lxd/images
total 50M
-rw-r--r-- 1 root root 50M Jul  26 00:51 460c2c6c4045a7756faaa95e1d3e057b689512663b2eace6da9450c3288cc9a1
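The relationship between the two is plain substring truncation; the listed fingerprint is the first 12 hex characters of the 64-character (SHA-256) file name, which bash substring expansion makes easy to see:

```shell
# Full image file name (a SHA-256 hash) and its short fingerprint.
full=460c2c6c4045a7756faaa95e1d3e057b689512663b2eace6da9450c3288cc9a1
short=${full:0:12}    # bash substring expansion: first 12 characters
echo "$short"
```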

Now, let's enter the container. Please note that the pre-built image contains only the minimum necessary packages, so quite a few things are missing. For example, wget, the downloader, is not installed by default.
$ lxc exec test-centos-6 /bin/bash
[root@test-centos-6 ~]#
[root@test-centos-6 ~]# cat /etc/redhat-release
CentOS release 6.6 (Final)

[root@test-centos-6 ~]# wget
bash: wget: command not found

To exit from the container, simply type the 'exit' command.
[root@test-centos-6 ~]# exit
exit
$ 

To stop the container, just run this command.
$ lxc stop test-centos-6
$ lxc list
+-----------------+---------+-----------+------+-----------+-----------+
|      NAME       |  STATE  |   IPV4    | IPV6 | EPHEMERAL | SNAPSHOTS |
+-----------------+---------+-----------+------+-----------+-----------+
| test-centos-6   | STOPPED |           |      | NO        | 0         |
+-----------------+---------+-----------+------+-----------+-----------+

For the next part of the series, we're going to look into importing container images into LXD. Till the next time.

Linux Containers (LXC) in Ubuntu 15.04

Last month, I tried out LXC in Fedora 22 (F22), with some limitations and missing features. I tried but failed to get unprivileged containers to work, and there are no RPM packages for LXD. Although you can compile the code and create the RPM yourself, it's not worth the time spent doing so. Hence, it's best to switch to Ubuntu, which has the latest LXC support, since one of the project leaders, Stéphane Graber, works for Canonical Ltd, the company that manages Ubuntu.

Installation is pretty much straightforward, just apt-getting it.
$ sudo apt-get install lxc

Check the default LXC configuration. Compared to LXC in F22, the Cgroup memory controller is enabled by default, and the kernel here is still 3.19 compared to 4.0.1.
$ lxc-checkconfig 
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-3.19.0-10-generic
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

One of the issues encountered with LXC in F22 was that the installation did not create the default lxcbr0 bridge interface. Not so in Ubuntu.
$ cat /etc/lxc/default.conf | grep network.link
lxc.network.link = lxcbr0

Checking the activated bridge interface, lxcbr0.
$ brctl show
bridge name     bridge id               STP enabled     interfaces
lxcbr0          8000.000000000000       no

Instead of creating a new LXC container as the root user, we can create unprivileged containers as a normal, non-root user.
$ lxc-create -n test-ubuntu -t ubuntu
lxc_container: conf.c: chown_mapped_root: 3394 No mapping for container root
lxc_container: lxccontainer.c: do_bdev_create: 849 Error chowning /home/ang/.local/share/lxc/test-ubuntu/rootfs to container root
lxc_container: conf.c: suggest_default_idmap: 4534 You must either run as root, or define uid mappings
lxc_container: conf.c: suggest_default_idmap: 4535 To pass uid mappings to lxc-create, you could create
lxc_container: conf.c: suggest_default_idmap: 4536 ~/.config/lxc/default.conf:
lxc_container: conf.c: suggest_default_idmap: 4537 lxc.include = /etc/lxc/default.conf
lxc_container: conf.c: suggest_default_idmap: 4538 lxc.id_map = u 0 100000 65536
lxc_container: conf.c: suggest_default_idmap: 4539 lxc.id_map = g 0 100000 65536
lxc_container: lxccontainer.c: lxcapi_create: 1320 Error creating backing store type (none) for test-ubuntu
lxc_container: lxc_create.c: main: 274 Error creating container test-ubuntu

From the above error, we need to define the uid mappings for both user and group. Duplicate LXC's default.conf into our own home directory and add the mappings. The last command below also grants our user the right to attach up to two veth devices to the lxcbr0 bridge.
$ mkdir -p ~/.config/lxc
mkdir: created directory ‘/home/ang/.config/lxc’
$ cp /etc/lxc/default.conf ~/.config/lxc/
$ echo "lxc.id_map = u 0 100000 65536" >> ~/.config/lxc/default.conf
$ echo "lxc.id_map = g 0 100000 65536" >> ~/.config/lxc/default.conf
$ echo "$USER veth lxcbr0 2" | sudo tee -a /etc/lxc/lxc-usernet
ang veth lxcbr0 2

Checking back our own user's default.conf config file.
$ cat ~/.config/lxc/default.conf 
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
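Each 'lxc.id_map' line reads, left to right: id type ('u' for uids, 'g' for gids), first id inside the container, first id on the host, and the size of the range. Annotated as a reading aid (not new configuration):

```
lxc.id_map = u 0 100000 65536
#            |  |  |      `-- map a range of 65536 consecutive ids
#            |  |  `--------- ...onto host ids starting at 100000
#            |  `------------ container ids starting at 0 (root)...
#            `--------------- 'u' = uids ('g' = gids)
```

So container root (uid 0) runs as the unprivileged host uid 100000, which is the whole point of an unprivileged container.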

Try to create our unprivileged container again. As the error below indicates, unprivileged containers can only be created through the download template.
$ lxc-create -n test-ubuntu -t ubuntu
This template can't be used for unprivileged containers.
You may want to try the "download" template instead.
lxc_container: lxccontainer.c: create_run_template: 1108 container creation template for test-ubuntu failed
lxc_container: lxc_create.c: main: 274 Error creating container test-ubuntu

Re-run the command to create the container but using the download template. This will take a while.
$ lxc-create -t download -n test-ubuntu -- -d ubuntu -r trusty -a amd64
Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created an Ubuntu container (release=trusty, arch=amd64, variant=default)

To enable sshd, run: apt-get install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

Start the container in daemon, or background, mode. It seems we have an error here.
$ lxc-start -n test-ubuntu -d
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 346 To get more details, run the container in foreground mode.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

Start the container again, this time in foreground mode.
$ lxc-start -n test-ubuntu -F
lxc-start: start.c: print_top_failing_dir: 102 Permission denied - could not access /home/ang.  Please grant it 'x' access, or add an ACL for the container root.
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 2
lxc-start: start.c: __lxc_start: 1164 failed to spawn 'test-ubuntu'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

To fix this, we need to grant 'x' (search) access to our $HOME directory.
$ sudo chmod +x $HOME
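What the fix actually changes is the search ('x') bit for 'other' users, so the container's root can traverse the path down to its rootfs. A sketch of the effect on a scratch directory:

```shell
# Demonstrate the effect of 'chmod o+x' on directory traversal bits.
d=$(mktemp -d)
chmod 700 "$d"
before=$(stat -c '%a' "$d")   # 700: group/other have no access
chmod o+x "$d"
after=$(stat -c '%a' "$d")    # 701: others may now traverse (but not list)
echo "$before -> $after"
rmdir "$d"
```

Note that 'o+x' on a directory allows traversal only, not listing its contents, so this is a fairly safe change to $HOME.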

Let's try again.
$ lxc-start -n test-ubuntu -d
$ lxc-attach -n test-ubuntu

Compared to Fedora 22, LXC in Ubuntu 15.04 is easier to set up, although we still need to reconfigure it to enable unprivileged container creation. In short, if you want good LXC support, use Ubuntu 15.04.