
Rust Installation in Ubuntu 18.10

When was the last time I looked at Rust? Oh right, it was almost 5 years ago (how time flies). Amazon's Firecracker piqued my interest in Rust again, so I'm curious to check it out once more. There are several installation methods available. These days, it's easier to use a custom tool like Rustup or Docker to manage and switch between different versions compared to the default distro packages.

Using Rustup
This is the official installation method. However, here we're installing it inside an LXC/LXD container, which is the fastest way to get Rust running in your environment compared to the other methods (more on this later).
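Assuming a fresh Ubuntu 18.10 container named rust-rustup (matching the name used below), bootstrap it first:
$ lxc launch ubuntu:18.10 rust-rustup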
$ lxc exec rust-rustup bash
root@rust-rustup:~# curl https://sh.rustup.rs -sSf | sh

This will download and install the official compiler for the Rust programming 
language, and its package manager, Cargo.

It will add the cargo, rustc, rustup and other commands to Cargo's bin 
directory, located at:

  /root/.cargo/bin

This path will then be added to your PATH environment variable by modifying the
profile file located at:

  /root/.profile

You can uninstall at any time with rustup self uninstall and these changes will
be reverted.

Current installation options:

   default host triple: x86_64-unknown-linux-gnu
     default toolchain: stable
  modify PATH variable: yes

1) Proceed with installation (default)
2) Customize installation
3) Cancel installation
> 1
......
info: installing component 'rustc'
info: installing component 'rust-std'
info: installing component 'cargo'
info: installing component 'rust-docs'
info: default toolchain set to 'stable'

  stable installed - rustc 1.31.1 (b6c32da9b 2018-12-18)

Rust is installed now. Great!

To get started you need Cargo's bin directory ($HOME/.cargo/bin) in your PATH 
environment variable. Next time you log in this will be done automatically.

To configure your current shell run source $HOME/.cargo/env

Checking the versions of the Rust-based tools.
root@rust-rustup:~# rustc --version; cargo --version; rustup --version
rustc 1.31.1 (b6c32da9b 2018-12-18)
cargo 1.31.0 (339d9f9c8 2018-11-16)
rustup 1.16.0 (beab5ac2b 2018-12-06)
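To confirm the toolchain actually works, we can build and run a throwaway hello-world project (the project name hello below is arbitrary):
root@rust-rustup:~# source $HOME/.cargo/env
root@rust-rustup:~# cargo new hello
root@rust-rustup:~# cd hello && cargo run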

Using Default Distro Package Manager
Again, bootstrap the environment using LXC/LXD.
$ lxc launch ubuntu:18.10 rust-pkg
$ lxc exec rust-pkg bash          
root@rust-pkg:~# apt update; apt upgrade
root@rust-pkg:~# apt install rustc

Checking the versions of the Rust-based tools.
root@rust-pkg:~# rustc --version; cargo --version; rustup --version
rustc 1.30.0
cargo 1.30.0
rustup: command not found

Using Docker Image
Using Docker's official image (which is based on Debian). The image size seemed way too big, roughly around 1.6 GB.
$ docker pull rust
$ docker image list | grep rust
rust                latest              d6daf33d7ea6        3 days ago          1.63GB

Luckily, a slimmer image is available, roughly half the size; you just have to pull the right tag. The reduction in size is due to extra clean-up steps and a slimmer base Docker image.
$ docker pull rust:slim
$ docker image list | grep rust
rust                slim                a374accc3257        3 days ago          967MB
rust                latest              d6daf33d7ea6        3 days ago          1.63GB

Checking the versions of the Rust-based tools inside the container.
$ docker run --rm -it rust bash
root@<container id>:/# rustc --version; cargo --version; rustup --version
rustc 1.31.1 (b6c32da9b 2018-12-18)
cargo 1.31.0 (339d9f9c8 2018-11-16)
rustup 1.16.0 (beab5ac2b 2018-12-06)
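As a usage sketch, assuming a Cargo project exists in the current directory on the host, the slim image can compile it without installing Rust locally by mounting the directory into the container:
$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp rust:slim cargo build --release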

Pi-hole with LXD - Installation and Setup

Pi-hole is a DNS server wrapper that blocks advertisements and trackers. We're using it on our home network to block all that unnecessary, bandwidth-wasting content. Setting it up for any of your devices is quite straightforward; just make sure your router points to it as the DNS server.

While a Docker image exists, we installed it within an LXD container since we already have an LXD host on our small homelab server, Kabini (more on this in coming posts).

First we setup the container based on Ubuntu 18.04.
$ lxc launch ubuntu:18.04 pihole
$ lxc list -c=ns4Pt
+--------+---------+----------------------+----------+------------+
|  NAME  |  STATE  |         IPV4         | PROFILES |    TYPE    |
+--------+---------+----------------------+----------+------------+
| pihole | RUNNING | 10.53.105.102 (eth0) | default  | PERSISTENT |
+--------+---------+----------------------+----------+------------+

Looking at the table above, notice that the container was created with the default profile and the IP obtained is within the 10.x.x.x range. What we need to do is create a new profile that makes the container accessible to other machines on the LAN. Hence, we need to switch from bridged networking to macvlan.

The `eth0` device in the profile links to your host's network adapter, which may be named differently, for example `enp1s0` (LAN). Note that you can't use a Wi-Fi interface here, as Wi-Fi by default only accepts a single MAC address per client.
$ lxc profile copy default macvlan
$ lxc profile device set macvlan eth0 parent enp1s0
$ lxc profile device set macvlan eth0 nictype macvlan
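You can verify the new profile before applying it:
$ lxc profile show macvlan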

Stop the `pihole` container so we can switch the profile to `macvlan`.
$ lxc stop pihole
$ lxc profile apply pihole macvlan
Profiles macvlan applied to pihole
$ lxc start pihole
$ lxc list
$ lxc list -c=ns4Pt
+--------+---------+----------------------+----------+------------+
|  NAME  |  STATE  |         IPV4         | PROFILES |    TYPE    |
+--------+---------+----------------------+----------+------------+
| pihole | RUNNING | 192.168.0.108 (eth0) | macvlan  | PERSISTENT |
+--------+---------+----------------------+----------+------------+

Next, enter the container and install Pi-hole.
$ lxc exec pihole bash
root@pihole:~# curl -sSL https://install.pi-hole.net | bash
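Once the installer finishes, point a client (or the host itself) at the container's address, 192.168.0.108 in the example above, and confirm that queries resolve through Pi-hole; a quick check with dig, assuming it is installed:
$ dig @192.168.0.108 example.com +short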

LXC/LXD 3 - Installation, Setup, and Discussion

It has been a while (like three years) since I last looked into LXC/LXD (around version 2.0.0). As we're celebrating the end of 2018 and embracing the new year 2019, it's good to revisit LXC/LXD (the latest version is 3.7.0) to see what changes have been made to the project.

Installation-wise, `snap` has replaced `apt-get` as the preferred installation method, so we can always get the latest and greatest updates. One of the issues I faced last time was that support for non-Debian distros like CentOS/Fedora was non-existent. To make it work, you had to compile the source code on your own, and even then, certain features were not implemented or possible. Hence, `snap` is a long-awaited way to get LXC/LXD working on most GNU/Linux distros out there.

Install the packages as usual.
$ sudo apt install lxd zfsutils-linux

The `lxd` pre-installation script will ask you which version you want to install. If you choose `latest`, it will install the latest version using `snap`. Otherwise, for the stable production 3.0 release, it will install the version that came with the package.


You can verify the installation method and version of the LXD binary.
$ which lxd; lxd --version
/snap/bin/lxd
3.7

The next step is to configure LXD's settings, especially storage. In our case here, we're using ZFS, which has better storage efficiency. The only default value we changed was the new storage pool name.
$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]: lxd
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=45GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

If you want to manage the container as a normal user, add yourself to the `lxd` group and refresh the changes.
$ sudo adduser $USER lxd
$ newgrp lxd
$ id $USER | tr ',' '\n'
uid=1000(ang) gid=1000(ang) groups=1000(ang)
4(adm)
7(lp)
24(cdrom)
27(sudo)
30(dip)
46(plugdev)
116(lpadmin)
126(sambashare)
127(docker)
134(libvirt)
997(lxd)

Next, we're going to create our first container and show its status. Downloading the whole template container image is going to take a while.
$ lxc launch ubuntu:18.04 c1   
Creating c1
Starting c1

$ lxc list -c=ns4Pt
+------+---------+----------------------+----------+------------+
| NAME |  STATE  |         IPV4         | PROFILES |    TYPE    |
+------+---------+----------------------+----------+------------+
| c1   | RUNNING | 10.53.105.243 (eth0) | default  | PERSISTENT |
+------+---------+----------------------+----------+------------+
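From here, working inside the container is the same as before, for example:
$ lxc exec c1 bash
root@c1:~# cat /etc/os-release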

This Week I Learned - 2016 Week 40

Last week's post or the whole series.

My sentiments exactly. See the comment below regarding chasing fads in technology development, especially around the PHP programming language. Do I actually miss PHP? Not really. But I think most web development problems can still be solved with PHP-based solutions, and for barrier of entry to web development, PHP is still the best choice.
Incidentally, this isn't unique to the front-end. I've seen the same thing happen with the SQL->ORM->Mongo->Couch->SQL nonsense, or the myriad of templating engines written for PHP, a language that itself is a templating engine.

Using Node.js? Need a basic system with essential features to bootstrap your project? Look no further than Hackathon Starter.

The difference between Ponzi, Pyramid, and Matrix schemes. A lot of young people, especially fresh graduates, need to be aware of these and protect themselves from falling for such scams. The pitch of being your own boss or retiring early may sound too good to be true.

Sometimes the documentation written for a certain API, Data::ICal in this case, is so confusing that you have to resort to searching source code written by others using the same API, at here, here, here, and here. Can't blame the API author, as the standard itself, RFC 5545, is just as confusing and complicated.

Google Interview University (via HN). Instead of working for a company, why not strive to work with great people instead? Google is so big and not every team is equal. I agree with one of the comments: this is a good compilation of resources for computer science study. Which reminds me again of the Programmer Competency Matrix. Instead of focusing on the computer science stuff, why not focus on building stuff? Someone needs to read the interviews of famous programmers about their backgrounds.

Getting older but still enjoy working as a programmer? Read Reflections of an "Old" Programmer, especially the comments on the blog post itself, HN, and Reddit. The main question here is how do you age gracefully as a programmer? Lifelong learning, especially the fundamentals (the core of computer science, not the latest and greatest fad frameworks), as those things never change. I blogged about this last year, during my career detour into a non-programmer (though still technical) role.

Didn't realize that to use PowerTop properly, you first need to calibrate it after installation to collect measurement data. The whole process takes a while, and networking will be disabled during that period.
$ sudo apt-get install powertop
$ sudo powertop --calibrate
$ sudo powertop --auto-tune

Besides that, you can turn off unwanted running services to save a bit more battery.
$ service --status-all
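For example, assuming Bluetooth is unused, its service can be stopped and disabled (the unit name may differ across distros):
$ sudo systemctl stop bluetooth.service
$ sudo systemctl disable bluetooth.service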

Upgrading to LXD 2.3 failed due to a spelling bug in the bridge network upgrade script. I fixed the bug manually and restarted the installation process.
$ sudo apt-get install -f

However, newly created containers don't have a default network bridge attached. Re-attaching it resolved the issue.
$ lxc network attach lxdbr0 test-alpine
$ lxc restart test-alpine

Experience on Setting Up Alpine Linux

Starting out as one of the little-known GNU/Linux distros, Alpine Linux has gained a lot of traction due to its featureful yet tiny size and the emergence of Linux container implementations like Docker and LXC. Although I came across it numerous times while testing out Docker and LXC, I didn't pay much attention to it until recently, while troubleshooting LXD. To summarize, I really like Alpine Linux's minimalist approach; for server or hardware appliance usage, nothing beats simple and direct.

My setup is based on an LXC container in Fedora 23. Unfortunately, you still can't create unprivileged containers in Fedora, hence I have no choice but to do everything as the root user. Not the best outcome, but I can live with that. Setup and creation are pretty much straightforward thanks to this guide. The steps are as follows.

Install the necessary packages and make sure the lxcbr0 bridge interface is up.
$ sudo dnf install lxc lxc-libs lxc-extra lxc-templates
$ sudo systemctl restart lxc-net
$ sudo systemctl status lxc-net
$ ifconfig lxcbr0

Create our container. By default, LXC will download the apk package manager binary and all the necessary default packages to create the container. Start the 'test-alpine' container once it has been set up successfully.
$ sudo lxc-create -n test-alpine -t alpine
$ sudo lxc-start -n test-alpine

Access the container through the console and press 'Enter'. Log in as the 'root' user without any password, just press Enter. Note that to exit from the console, press 'Ctrl+a q'.
$ sudo lxc-console -n test-alpine

Next, bring up the eth0 interface so we can obtain an IP address and connect to the Internet. Check your eth0 network interface once done. Instead of SysV or systemd, Alpine Linux uses OpenRC as its default init system. I had a hard time adjusting from SysV to systemd and am glad Alpine Linux did not jump on the systemd bandwagon.
test-alpine:~# rc-service networking start
 * Starting networking ... *   lo ...ip: RTNETLINK answers: File exists
 [ !! ]
 *   eth0 ... [ ok ]

test-alpine:~# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:16:3E:6B:F7:8B  
          inet addr:10.0.3.147  Bcast:10.0.3.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1562 (1.5 KiB)  TX bytes:1554 (1.5 KiB)
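To have networking come up automatically on the next boot instead of starting the service by hand, add it to OpenRC's boot runlevel; a minimal sketch:
test-alpine:~# rc-update add networking boot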

Next, configure our system. Similar to Debian's dpkg-reconfigure, Alpine has a list of setup commands to configure your system, and I prefer the consistent and sensible naming used here. This is something that other GNU/Linux distros should follow. I'm looking at you, CentOS/Red Hat/Fedora.
test-alpine:~# setup-
setup-acf        setup-bootable         setup-hostname      setup-mta     setup-timezone
setup-alpine     setup-disk             setup-interfaces    setup-ntp     setup-xen-dom0
setup-apkcache   setup-dns              setup-keymap        setup-proxy   setup-xorg-base
setup-apkrepos   setup-gparted-desktop  setup-lbu           setup-sshd

Next, set up the package repository and let the system pick the fastest mirror. I like that we can pick the fastest mirror from the console, which is something you can't easily do in Debian/Ubuntu.
# setup-apkrepos

1) nl.alpinelinux.org
2) dl-2.alpinelinux.org
3) dl-3.alpinelinux.org
4) dl-4.alpinelinux.org
5) dl-5.alpinelinux.org
6) dl-6.alpinelinux.org
7) dl-7.alpinelinux.org
8) distrib-coffee.ipsl.jussieu.fr
9) mirror.yandex.ru
10) mirrors.gigenet.com
11) repos.lax-noc.com
12) repos.dfw.lax-noc.com
13) repos.mia.lax-noc.com
14) mirror1.hs-esslingen.de
15) mirrors.centarra.com
16) liskamm.alpinelinux.uk
17) mirrors.2f30.org
18) mirror.leaseweb.com

r) Add random from the above list
f) Detect and add fastest mirror from above list
e) Edit /etc/apk/repositores with text editor

Enter mirror number (1-18) or URL to add (or r/f/e/done) [f]: 
Finding fastest mirror... 
  3.07 http://nl.alpinelinux.org/alpine/
  4.43 http://dl-2.alpinelinux.org/alpine/
  4.18 http://dl-3.alpinelinux.org/alpine/
  4.43 http://dl-4.alpinelinux.org/alpine/
  7.56 http://dl-5.alpinelinux.org/alpine/
  4.45 http://dl-6.alpinelinux.org/alpine/
ERROR: http://dl-7.alpinelinux.org/alpine/edge/main: No such file or directory
 12.75 http://distrib-coffee.ipsl.jussieu.fr/pub/linux/alpine/alpine/
  3.27 http://mirror.yandex.ru/mirrors/alpine/
  3.55 http://mirrors.gigenet.com/alpinelinux/
 27.07 http://repos.lax-noc.com/alpine/
  3.87 http://repos.dfw.lax-noc.com/alpine/
 20.34 http://repos.mia.lax-noc.com/alpine/
  3.55 http://mirror1.hs-esslingen.de/pub/Mirrors/alpine/
ERROR: http://mirrors.centarra.com/alpine/edge/main: network error (check Internet connection and firewall)
  4.96 http://liskamm.alpinelinux.uk/
  4.45 http://mirrors.2f30.org/alpine/
  5.61 http://mirror.leaseweb.com/alpine/
Added mirror nl.alpinelinux.org
Updating repository indexes... done.

Update our system. Even though there are more than five thousand packages, it is still not comparable to Debian's massive list of available packages. But this is understandable given the small number of contributors and their limited free time.
test-alpine:~# apk update
fetch http://dl-6.alpinelinux.org/alpine/v3.2/main/x86_64/APKINDEX.tar.gz
fetch http://nl.alpinelinux.org/alpine/v3.2/main/x86_64/APKINDEX.tar.gz
v3.2.3-104-g838b3e3 [http://dl-6.alpinelinux.org/alpine/v3.2/main]
v3.2.3-104-g838b3e3 [http://nl.alpinelinux.org/alpine/v3.2/main]
OK: 5289 distinct packages available

Let's continue by installing a software package. We'll use the Git version control system as our example. Installation is straightforward, with enough details shown.
test-alpine:~# apk add git
(1/13) Installing run-parts (4.4-r0)
(2/13) Installing openssl (1.0.2d-r0)
(3/13) Installing lua5.2-libs (5.2.4-r0)
(4/13) Installing lua5.2 (5.2.4-r0)
(5/13) Installing ncurses-terminfo-base (5.9-r3)
(6/13) Installing ncurses-widec-libs (5.9-r3)
(7/13) Installing lua5.2-posix (33.3.1-r2)
(8/13) Installing ca-certificates (20141019-r2)
(9/13) Installing libssh2 (1.5.0-r0)
(10/13) Installing curl (7.42.1-r0)
(11/13) Installing expat (2.1.0-r1)
(12/13) Installing pcre (8.37-r1)
(13/13) Installing git (2.4.1-r0)
Executing busybox-1.23.2-r0.trigger
Executing ca-certificates-20141019-r2.trigger
OK: 23 MiB in 28 packages
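A quick check that the freshly installed package works:
test-alpine:~# git --version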


So far, I love the simplicity provided by Alpine Linux. In coming months, there will be more posts on this tiny distro. Stay tuned.

Error calling 'lxd forkstart......

In full detail, the exact error message:
error: Error calling 'lxd forkstart test-centos-6 /var/lib/lxd/containers /var/log/lxd/test-centos-6/lxc.conf': err='exit status 1'

While rebooting my laptop after two days, I encountered the above error message again when trying to start my container through LXD. Reading through the LXD issue reports, these are the typical steps to troubleshoot this issue. Note that I've installed LXD through source code compilation as there is no RPM package available for Fedora 23.

First things first: as LXD was built from source, it was started manually by running the command below. The benefit of starting the LXD daemon this way is that it lets you monitor all the debugging messages, as shown below.
$ su -c 'lxd --group wheel --debug --verbose'

INFO[11-14|14:10:24] LXD is starting                          path=/var/lib/lxd
WARN[11-14|14:10:24] Per-container AppArmor profiles disabled because of lack of kernel support 
INFO[11-14|14:10:24] Default uid/gid map: 
INFO[11-14|14:10:24]  - u 0 100000 65536 
INFO[11-14|14:10:24]  - g 0 100000 65536 
INFO[11-14|14:10:24] Init                                     driver=storage/dir
INFO[11-14|14:10:24] Looking for existing certificates        cert=/var/lib/lxd/server.crt key=/var/lib/lxd/server.key
DBUG[11-14|14:10:24] Container load                           container=test-busybox
DBUG[11-14|14:10:24] Container load                           container=test-ubuntu-cloud
DBUG[11-14|14:10:24] Container load                           container=test-centos-7
INFO[11-14|14:10:24] LXD isn't socket activated 
INFO[11-14|14:10:24] REST API daemon: 
INFO[11-14|14:10:24]  - binding socket                        socket=/var/lib/lxd/unix.socket
......


The first step to troubleshoot is to ensure that the default bridge interface, lxcbr0, used by LXD is up and running.
$ ifconfig lxcbr0
lxcbr0: error fetching interface information: Device not found

Next, start the 'lxc-net' service that creates this bridge interface, then check that the bridge interface is up.
$ sudo systemctl start lxc-net

$ ifconfig lxcbr0
lxcbr0: flags=4163  mtu 1500
        inet 10.0.3.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::fcd3:baff:fefd:5bd7  prefixlen 64  scopeid 0x20
        ether fe:7a:fa:dd:06:cd  txqueuelen 0  (Ethernet)
        RX packets 5241  bytes 301898 (294.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7610  bytes 11032257 (10.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Next, check the status of the 'lxc-net' service. Why do we need to do so? Remember that the 'lxc-net' service creates a virtual switch where three things are set up. First, the bridge itself, which links to an existing network interface connected to the outside world. Next, a DNS server which resolves domain names. And lastly, a DHCP server which assigns new IP addresses to the containers. The DNS and DHCP services are provided by the Dnsmasq daemon.
$ sudo systemctl status lxc-net -l

● lxc-net.service - LXC network bridge setup
   Loaded: loaded (/usr/lib/systemd/system/lxc-net.service; enabled; vendor preset: disabled)
   Active: active (exited) since Sat 2015-11-14 16:13:24 MYT; 13s ago
  Process: 9807 ExecStop=/usr/libexec/lxc/lxc-net stop (code=exited, status=0/SUCCESS)
  Process: 9815 ExecStart=/usr/libexec/lxc/lxc-net start (code=exited, status=0/SUCCESS)
 Main PID: 9815 (code=exited, status=0/SUCCESS)
   Memory: 404.0K
      CPU: 46ms
   CGroup: /system.slice/lxc-net.service
           └─9856 dnsmasq -u nobody --strict-order --bind-interfaces --pid-file=/run/lxc/dnsmasq.pid --listen-address 10.0.3.1 --dhcp-range 10.0.3.2,10.0.3.254 --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative

Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: started, version 2.75 cachesize 150
Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify
Nov 14 16:13:24 localhost.localdomain dnsmasq-dhcp[9856]: DHCP, IP range 10.0.3.2 -- 10.0.3.254, lease time 1h
Nov 14 16:13:24 localhost.localdomain dnsmasq-dhcp[9856]: DHCP, sockets bound exclusively to interface lxcbr0
Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: reading /etc/resolv.conf
Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: using nameserver 192.168.1.1#53
Nov 14 16:13:24 localhost.localdomain dnsmasq[9856]: read /etc/hosts - 2 addresses
Nov 14 16:13:24 localhost.localdomain systemd[1]: Started LXC network bridge setup.
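Since the bridge disappears after every reboot unless the service runs at boot, enabling it should keep this error from recurring; a sketch, assuming systemd manages lxc-net as shown above:
$ sudo systemctl enable lxc-net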

Expect more posts to come on using LXD in Fedora 23.

Linux Containers (LXC) with LXD Hypervisor, Part 3 : Transferring Files Between Host and Container

Other articles in the series:
In this part 3, we're going to explore how to copy files from the host to the container and vice versa. Copying a file from the host to the container is simply a matter of using the 'lxc file push <filename> <container-name>/<path>' command. You must append a forward slash (/) and a directory path to the container name for it to work, as shown below.
$ echo "a" > foobar
$ md5sum foobar 
60b725f10c9c85c70d97880dfe8191b3  foobar
$
$ lxc file push foobar test-centos-6
error: Invalid target test-centos-6
$
$ lxc file push foobar test-centos-6/tmp
error: exit status 255: mntns dir: /proc/16875/ns/mnt
open container: Is a directory

$ lxc file push foobar test-centos-6/tmp

Similarly, to copy a file from the container, just use the 'lxc file pull <container-name>/<filename> .' command. Remember to put the dot (.), which indicates the destination, in this case the current folder.
$ lxc file pull test-centos-6/tmp/foobar .
$ md5sum foobar
60b725f10c9c85c70d97880dfe8191b3  foobar
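The destination doesn't have to be the current directory; you can also pull straight to an explicit path (the path below is just an example):
$ lxc file pull test-centos-6/tmp/foobar /tmp/foobar-copy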

As an LXC container is actually a glorified chroot environment, you can create or copy files or folders to and from the chroot directory directly.
$ cd /var/lib/lxd/containers/test-centos-6/rootfs/tmp
$ touch create_file_directly_in_chroot_folder

Repeat similar steps, but inside the container.
$ lxc exec test-centos-6 /bin/bash
$ cd /tmp
$ touch create_file_directly_in_container

Checking these files from the host. Note the file permissions.
$ ll /var/lib/lxd/containers/test-centos-6/rootfs/tmp/
total 0
-rw-rw-r-- 1 ang    ang    0 Sep  29 02:00 create_file_directly_in_chroot_folder
-rw-r--r-- 1 100000 100000 0 Sep  29 02:00 create_file_directly_in_container

Similarly, from inside the LXC container.
[root@test-centos-6 tmp]# ll
total 0
-rw-rw-r-- 1 65534 65534 0 Sep 28 14:00 create_file_directly_in_chroot_folder
-rw-r--r-- 1 root  root  0 Sep 28 14:00 create_file_directly_in_container

While this is doable, we shouldn't create files or folders directly in the container's chroot folder from the host. Use the 'lxc file push' and 'lxc file pull' commands to preserve the file permissions.

Linux Containers (LXC) with LXD Hypervisor, Part 2 : Importing Container Images Into LXD

Other articles in the series:
In Part 2, we're going to discuss different ways of importing LXC container images into LXD. By default, when you create an LXC container using the 'lxc launch' command, the tool will download and cache the container image from the remote server. For example, to create a new CentOS 7 LXC container:
$ lxc remote add images images.linuxcontainers.org
$ lxc launch images:centos/7/amd64 centos

While waiting for the CentOS 7 image to be downloaded, you can check the LXD log file.
$ sudo tail -n2 /var/log/lxd/lxd.log
t=2015-08-30T00:13:22+0800 lvl=info msg="Image not in the db downloading it" image=69351a66510eecabf11ef7dfa94af40e20cf15c346ae08b3b0edd726ef3be10c server=https://images.linuxcontainers.org:8443
t=2015-08-30T00:13:22+0800 lvl=info msg="Downloading the image" image=69351a66510eecabf11ef7dfa94af40e20cf15c346ae08b3b0edd726ef3be10c

Unfortunately, if you have a slow network like me (see screenshot below), it's best to use a network monitoring tool to check whether you're still downloading the image. In my case, I'm using bmon. Note my pathetic network speed. An average LXC container image is around 50 MB, so at an average download rate of 20 kB/s, it should take over half an hour to finish the download. Without a download progress indicator, we have to go to all this trouble just to check whether the import is still running.


Alternatively, there is another way to import container images: the 'lxd-images' tool, a Python script which supports two additional image sources on top of the default one mentioned just now. These two sources are local BusyBox images and Ubuntu Cloud images from the official release streams. Additionally, since version 0.14, download progress tracking has been added to the tool, which solves the hassle we encountered above.

Let's run the 'lxd-images' command and see its help message.
$ lxd-images
error: the following arguments are required: action
usage: lxd-images [-h] {import} ...

LXD: image store helper

positional arguments:
  {import}
    import    Import images

optional arguments:
  -h, --help  show this help message and exit

Examples:
 To import the latest Ubuntu Cloud image with an alias:
    /usr/bin/lxd-images import ubuntu --alias ubuntu

 To import the latest Ubuntu 14.04 LTS 64bit image with some aliases:
    /usr/bin/lxd-images import lxc ubuntu trusty amd64 --alias ubuntu --alias ubuntu/trusty

 To import a basic busybox image:
    /usr/bin/lxd-images import busybox --alias busybox

UPDATE: Since LXD version 0.17, the 'lxd-images import lxc' command has been deprecated in favour of the 'lxc launch' command.

Let's try to download and cache a CentOS 6 LXC container image into LXD, and compare it with using the 'lxc launch' command to import a container image. Notice the differences. First, verbosity is higher; at least we know what is going on behind the scenes, such as which files are being downloaded. Secondly, we can track the progress of the download. Third, we can attach additional metadata, like aliases, to the downloaded container image.
$ lxd-images import lxc centos 6 amd64 --alias centos/6                                                                                      
Downloading the GPG key for https://images.linuxcontainers.org
Downloading the image list for https://images.linuxcontainers.org
Validating the GPG signature of /tmp/tmprremowyo/index.json.asc
Downloading the image: https://images.linuxcontainers.org/images/centos/6/amd64/default/20150829_02:16/lxd.tar.xz
Progress: 1 %

However, from my understanding of the 'lxd-images' Python code, the container image is downloaded without using multiple simultaneous connections. Hence, it will take a while (if you have a slow connection like me) just to download any container image. To work around this, you can download and import the container image manually using a third-party download tool like Aria2, which supports multiple simultaneous connections.

In a previous LXD version, if I remember correctly before version 0.15, the CentOS 7 image was not found in the default image source listing (see emphasis in bold red) but still existed on the web site.
$ lxd-images import lxc centos 7 amd64 --alias centos/7
Downloading the GPG key for https://images.linuxcontainers.org
Downloading the image list for https://images.linuxcontainers.org
Validating the GPG signature of /tmp/tmpgg6sob2e/index.json.asc
Requested image doesn't exist.

Download and import the container image directly.
$ aria2c -x 4 https://images.linuxcontainers.org/images/centos/7/amd64/default/20150619_02:16/lxd.tar.xz

Import the downloaded container image in unified tarball format.
$ lxc image import lxd.tar.xz --alias centos/7
Image imported with fingerprint: 1d292b81f019bcc647a1ccdd0bb6fde99c7e16515bbbf397e4663503f01d7d1c
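Assuming the alias set above, you can confirm the image is now cached locally and, if you like, launch a container from it:
$ lxc image list
$ lxc launch centos/7 test-centos-7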

In short, just use the 'lxd-images' tool to import container images from the default source.

For the next part of the series, we're going to look into sharing files between the LXC container and the host. Till the next time.

Linux Container (LXC) with LXD Hypervisor, Part 1: Installation and Creation

For the past few weeks, I've been looking into creating LXC containers for both the Fedora and Ubuntu distros. One of the creation methods is downloading a pre-built image.
$ lxc-create -t download -n test-container -- -d ubuntu -r trusty -a amd64

However, creating unprivileged containers is rather cumbersome and the list of language bindings for the APIs is limited. What if we had a daemon, or container hypervisor, that monitors and manages all the containers? And in addition to that, the daemon also handles all the security privileges and provides a RESTful web API for remote management? Well, that's the purpose of LXD, the LXC container hypervisor. Think of it as a glorified LXC 'download' creation method with additional features.

The LXD project is managed by Canonical Ltd, the company behind Ubuntu. Hence, it's recommended to use Ubuntu if you don't want to install it through source code compilation.

Installation and setup of LXD as shown below was done in Ubuntu 15.04.

Firstly, install the LXD package.
$ sudo apt-get install lxd
......
Warning: The home dir /var/lib/lxd/ you specified already exists.
Adding system user 'lxd' (UID 125) ...
Adding new user 'lxd' (UID 125) with group 'nogroup' ...
The home directory '/var/lib/lxd/' already exists. Not copying from '/etc/skel'.
adduser: Warning: The home directory '/var/lib/lxd/' does not belong to the user you are currently creating.
Adding group 'lxd' (GID 137) ...
Done.
......

From the message above, note that your current login user does not belong to the group 'lxd' (GID 137) yet. To update your current login user's groups during the current session, run the command below so that you don't need to log out and log back in again.
$ newgrp lxd

Check your current login user's groups. You should see that the current login user now belongs to the group 'lxd' (GID 137).
$ id $USER | tr ',' '\n'
uid=1000(ang) gid=1000(ang) groups=1000(ang)
4(adm)
24(cdrom)
27(sudo)
30(dip)
46(plugdev)
115(lpadmin)
131(sambashare)
137(lxd)

$ groups
ang adm cdrom sudo dip plugdev lpadmin sambashare lxd

Next, we need to add the remote server which contains the pre-built container images.
$ lxc remote add images images.linuxcontainers.org
Generating a client certificate. This may take a minute...

List all the available pre-built container images from the server we've just added. Pay attention to the colon (:) at the end of the command as it is needed; otherwise, the command will list locally downloaded images. The list is quite long, so I've reformatted the layout and only show the top two entries.
$ lxc image list images:
+-----------------------+-------------+-------+----------------+------+-----------------------------+
|   ALIAS               | FINGERPRINT |PUBLIC |  DESCRIPTION   | ARCH |        UPLOAD DATE          |
+-----------------------+-------------+-------+----------------+------+-----------------------------+
|centos/6/amd64 (1 more)|460c2c6c4045 |yes    |Centos 6 (amd64)|x86_64|Jul 25, 2015 at 11:17am (MYT)|
|centos/6/i386 (1 more) |60f280890fcc |yes    |Centos 6 (i386) |i686  |Jul 25, 2015 at 11:20am (MYT)|
......

Let's create our first container using CentOS 6 pre-built image.
$ lxc launch images:centos/6/amd64 test-centos-6
Creating container...error: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket: no such file or directory

Reading through this troubleshooting ticket, it seems that the LXD daemon was not started. Let's start it. Note that I'm still using the old 'service' command to manage the daemon instead of the 'systemctl' command. As they say, old habits die hard. It will take a while for me to fully transition from SysVinit to systemd. ;-)
$ sudo service lxd restart
$ sudo service lxd status
● lxd.service - Container hypervisor based on LXC
   Loaded: loaded (/lib/systemd/system/lxd.service; indirect; vendor preset: enabled)
   Active: active (running) since Ahd 2015-07-26 00:28:51 MYT; 10s ago
 Main PID: 13260 (lxd)
   Memory: 276.0K
   CGroup: /system.slice/lxd.service
           ‣ 13260 /usr/bin/lxd --group lxd --tcp [::]:8443

Jul 26 00:28:51 proliant systemd[1]: Started Container hypervisor based on LXC.
Jul 26 00:28:51 proliant systemd[1]: Starting Container hypervisor based on LXC...

Finally, create and launch our container using the CentOS 6 pre-built image. Compared to the 'lxc-create' command, at least the parameters are simpler. This will take a while as the program needs to download the pre-built CentOS 6 image, which averages around 50-plus MB in size; more on this later.
$ lxc launch images:centos/6/amd64 test-centos-6
Creating container...done
Starting container...done

Checking the status of our newly created container.
$ lxc list
+---------------+---------+-----------+------+-----------+-----------+
|     NAME      |  STATE  |   IPV4    | IPV6 | EPHEMERAL | SNAPSHOTS |
+---------------+---------+-----------+------+-----------+-----------+
| test-centos-6 | RUNNING | 10.0.3.46 |      | NO        | 0         |
+---------------+---------+-----------+------+-----------+-----------+

A more detailed status of our container.
$ lxc info test-centos-6
Name: test-centos-6
Status: RUNNING
Init: 14572
Ips:
  eth0: IPV4    10.0.3.46
  lo:   IPV4    127.0.0.1
  lo:   IPV6    ::1

Checking the downloaded pre-built image. Subsequent container creations will use the same cached image.
$ lxc image list
+-------+--------------+--------+------------------+--------+-------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |   DESCRIPTION    |  ARCH  |          UPLOAD DATE          |
+-------+--------------+--------+------------------+--------+-------------------------------+
|       | 460c2c6c4045 | yes    | Centos 6 (amd64) | x86_64 | Jul 26, 2015 at 12:51am (MYT) |
+-------+--------------+--------+------------------+--------+-------------------------------+

You can also use the fingerprint to create and launch a container from the same image.
$ lxc launch 460c2c6c4045 test-centos-6-2                                                                    
Creating container...done
Starting container...done

As I mentioned, the downloaded pre-built CentOS 6 image is roughly 50-plus MB. The file is located within the '/var/lib/lxd/images' folder. The fingerprint shown is just the first 12 characters of the hashed file name.
$ sudo ls -lh /var/lib/lxd/images
total 50M
-rw-r--r-- 1 root root 50M Jul  26 00:51 460c2c6c4045a7756faaa95e1d3e057b689512663b2eace6da9450c3288cc9a1

Now, let's enter the container. Please note that the pre-built image contains only the minimum necessary packages, so there are quite a few things missing. For example, wget, the downloader, is not installed by default.
$ lxc exec test-centos-6 /bin/bash
[root@test-centos-6 ~]#
[root@test-centos-6 ~]# cat /etc/redhat-release
CentOS release 6.6 (Final)

[root@test-centos-6 ~]# wget
bash: wget: command not found
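Missing tools can be installed with the distro's package manager as usual, for example:
[root@test-centos-6 ~]# yum install -y wget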

To exit from the container, simply type the 'exit' command.
[root@test-centos-6 ~]# exit
exit
$ 

To stop the container, just run this command.
$ lxc stop test-centos-6
$ lxc list
+-----------------+---------+-----------+------+-----------+-----------+
|      NAME       |  STATE  |   IPV4    | IPV6 | EPHEMERAL | SNAPSHOTS |
+-----------------+---------+-----------+------+-----------+-----------+
| test-centos-6   | STOPPED |           |      | NO        | 0         |
+-----------------+---------+-----------+------+-----------+-----------+

For the next part of the series, we're going to look into importing container images into LXD. Till the next time.

Linux Containers (LXC) in Ubuntu 15.04

Last month, I was trying out LXC in Fedora 22 (F22), with some limitations and missing features. I tried but failed to get unprivileged containers to work, and there are no RPM packages for LXD. Although you can compile the code and create the RPM yourself, it is not worth the time spent doing so. Hence, it's best to switch to Ubuntu, which has the latest LXC support, since one of the project leaders, Stéphane Graber, works for Canonical Ltd, the company that manages Ubuntu.

Installation is pretty much straightforward, just apt-getting it.
$ sudo apt-get install lxc

Checking the default LXC configuration. Compared to LXC in F22, the Cgroup memory controller is enabled by default, and the kernel is still 3.19 compared to 4.0.1.
$ lxc-checkconfig 
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-3.19.0-10-generic
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

One of the issues encountered with LXC in F22 was that the installation did not create the default lxcbr0 bridge interface. Not so in Ubuntu.
$ cat /etc/lxc/default.conf | grep network.link
lxc.network.link = lxcbr0

Checking the activated bridge interface, lxcbr0.
$ brctl show
bridge name     bridge id               STP enabled     interfaces
lxcbr0          8000.000000000000       no

Instead of creating a new LXC container as the root user, we can create unprivileged containers as a normal, non-root user.
$ lxc-create -n test-ubuntu -t ubuntu
lxc_container: conf.c: chown_mapped_root: 3394 No mapping for container root
lxc_container: lxccontainer.c: do_bdev_create: 849 Error chowning /home/ang/.local/share/lxc/test-ubuntu/rootfs to container root
lxc_container: conf.c: suggest_default_idmap: 4534 You must either run as root, or define uid mappings
lxc_container: conf.c: suggest_default_idmap: 4535 To pass uid mappings to lxc-create, you could create
lxc_container: conf.c: suggest_default_idmap: 4536 ~/.config/lxc/default.conf:
lxc_container: conf.c: suggest_default_idmap: 4537 lxc.include = /etc/lxc/default.conf
lxc_container: conf.c: suggest_default_idmap: 4538 lxc.id_map = u 0 100000 65536
lxc_container: conf.c: suggest_default_idmap: 4539 lxc.id_map = g 0 100000 65536
lxc_container: lxccontainer.c: lxcapi_create: 1320 Error creating backing store type (none) for test-ubuntu
lxc_container: lxc_create.c: main: 274 Error creating container test-ubuntu

From the above error, we need to define the id mappings for both user and group. Duplicate LXC's default.conf to our own home directory and add in the mappings.
$ mkdir -p ~/.config/lxc
mkdir: created directory ‘/home/ang/.config/lxc’
$ cp /etc/lxc/default.conf ~/.config/lxc/
$ echo "lxc.id_map = u 0 100000 65536" >> ~/.config/lxc/default.conf
$ echo "lxc.id_map = g 0 100000 65536" >> ~/.config/lxc/default.conf
$ echo "$USER veth lxcbr0 2" | sudo tee -a /etc/lxc/lxc-usernet
ang veth lxcbr0 2

Check our own user's default.conf config file again.
$ cat ~/.config/lxc/default.conf 
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536

Try to create our unprivileged container again. As the error below indicates, unprivileged containers can only be created through the download template.
$ lxc-create -n test-ubuntu -t ubuntu
This template can't be used for unprivileged containers.
You may want to try the "download" template instead.
lxc_container: lxccontainer.c: create_run_template: 1108 container creation template for test-ubuntu failed
lxc_container: lxc_create.c: main: 274 Error creating container test-ubuntu

Re-run the command to create the container but using the download template. This will take a while.
$ lxc-create -t download -n test-ubuntu -- -d ubuntu -r trusty -a amd64
Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created an Ubuntu container (release=trusty, arch=amd64, variant=default)

To enable sshd, run: apt-get install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

Start the container in daemon, or background, mode. It seems we have an error here.
$ lxc-start -n test-ubuntu -d
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 346 To get more details, run the container in foreground mode.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

Start the container again, this time in foreground mode.
$ lxc-start -n test-ubuntu -F
lxc-start: start.c: print_top_failing_dir: 102 Permission denied - could not access /home/ang.  Please grant it 'x' access, or add an ACL for the container root.
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 2
lxc-start: start.c: __lxc_start: 1164 failed to spawn 'test-ubuntu'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

To fix this, we need to grant execute access to our $HOME directory.
$ sudo chmod +x $HOME

Let's try again.
$ lxc-start -n test-ubuntu -d
$ lxc-attach -n test-ubuntu
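When done, the container can be stopped, and destroyed if no longer needed, as the same unprivileged user:
$ lxc-stop -n test-ubuntu
$ lxc-destroy -n test-ubuntu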

Compared to Fedora 22, LXC in Ubuntu 15.04 is easier to set up, although we still need to reconfigure it to enable unprivileged container creation. In short, if you want good LXC support, use Ubuntu 15.04.

grantpt failed: Read-only file system

Probably one of the weirder bugs that I've encountered; this has happened quite a few times over the past few weeks. When I tried to launch a container I had just created, LXC showed me the error message below about failing to allocate a pty.
$ sudo lxc-start -n foobar -F
lxc-start: console.c: lxc_console_create: 580 Read-only file system - failed to allocate a pty
lxc-start: start.c: lxc_init: 442 failed to create console
lxc-start: start.c: __lxc_start: 1124 failed to initialize the container
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

PTY? It's an abbreviation for pseudoterminal, which according to Wikipedia,
"is a pair of pseudo-devices, one of which, the slave, emulates a real text terminal device, the other of which, the master, provides the means by which a terminal emulator process controls the slave."
To debug this, I tried to launch a new Tmux session, which also failed. Suspecting that my Tmux session had somehow become corrupted, I tried to open GNOME Terminal and got the error message "grantpt failed: Read-only file system", as shown below.


Google's search results did suggest a quick temporary solution, which seemed to solve the issue. But still, the question remains: what caused /dev/pts to have the wrong permissions?
$ sudo mount -o remount,gid=5,mode=620 /dev/pts
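To confirm the remount took effect (or to inspect the state before applying it), check the current mount options of /dev/pts:
$ mount | grep devpts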

Linux Containers (LXC) in Fedora 22 Rawhide - Part 3

Continuing from Part 1 and Part 2, we'll discuss another issue caused by the default LXC installation in Fedora 22: no default bridge network is created, although one is set in the config file for each container.

Let's create a dummy container to view the default bridge network interface.
$ sudo lxc-create -t download -n foo -- -d centos -r 6 -a amd64
$ sudo cat /var/lib/lxc/foo/config | grep lxc.network.link
lxc.network.link = lxcbr0

However, as I mentioned earlier, the bridge interface lxcbr0 is not created by default. Note that the bridge interface virbr0 was created by the libvirt installation.
$ ip link show | grep br0
6: virbr0:  mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
7: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN mode DEFAULT group default qlen 500

Or you can use the brctl command to show the available bridge interfaces. If you can't find the command, just install the bridge-utils package.
$ sudo dnf install bridge-utils
$ brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.525400c28250       yes             virbr0-nic

Instead of changing the lxc.network.link item in the container's config file every time we create a container, there are two ways to resolve this issue: first, by overwriting the default network interface name; second, by creating the lxcbr0 bridge interface manually.

For the first method, just overwrite the default network interface name.
$ sudo sed -i s/lxcbr0/virbr0/g /etc/lxc/default.conf 
$ cat /etc/lxc/default.conf | grep lxc.network.link
lxc.network.link = virbr0

The issue with such an approach is that you'll share the same bridge network interface with libvirt, which primarily manages KVM (Kernel-based Virtual Machine) guests. Thus, if you need additional customization, for example a different IP range, it's best to create a dedicated bridge network interface, which leads us to the second method.

First, let's duplicate the XML file that defines the default bridge network.
$ sudo cp /etc/libvirt/qemu/networks/default.xml /etc/libvirt/qemu/networks/lxcbr0.xml

Next, we need to generate a random UUID (universally unique identifier) and MAC (media access control) address for our new bridge network interface named lxcbr0.

Generating UUID.
$ uuidgen
5df6886c-1dfe-44ca-8865-ebed91bd2646

Generating MAC address.
$ MACADDR="52:54:$(dd if=/dev/urandom count=1 2>/dev/null | md5sum | sed 's/^\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4/')"; echo $MACADDR
52:54:f0:ec:cb:a3

Update the lxcbr0.xml file we've just duplicated, adding both the UUID and MAC address to it.

The final XML file is shown below:
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit lxcbr0
or other application using the libvirt API.
-->

<network>
  <name>lxcbr0</name>
  <uuid>5df6886c-1dfe-44ca-8865-ebed91bd2646</uuid>
  <forward mode='nat'/>
  <bridge name='lxcbr0' stp='on' delay='0'/>
  <mac address='52:54:f0:ec:cb:a3'/>
  <ip address='192.168.125.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.125.2' end='192.168.125.254'/>
    </dhcp>
  </ip>
</network>

Define, set to auto-start, and start the lxcbr0 bridge network.
$ sudo virsh net-define /etc/libvirt/qemu/networks/lxcbr0.xml
$ sudo virsh net-autostart lxcbr0
$ sudo virsh net-start lxcbr0
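You can verify that both bridge networks are now defined, active, and set to autostart:
$ sudo virsh net-list --all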

Now both bridge interfaces are created and enabled. You can create any container using the default lxcbr0 bridge network interface.
$ brctl show
bridge name     bridge id               STP enabled     interfaces
lxcbr0          8000.00602f7e384b       yes             lxcbr0-nic
virbr0          8000.525400c28250       yes             veth1HV308
                                                        virbr0-nic

There are many other ways to create and set up a bridge network interface, but using the virsh command is probably the easiest and fastest. All the necessary steps to configure DHCP through Dnsmasq have been automated, as can be observed from the Dnsmasq instance running after we've started the lxcbr0 bridge network interface.
$ ps aux | grep [l]xcbr0
nobody    9443  0.0  0.0  20500  2424 ?        S    01:08   0:00 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/lxcbr0.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
root      9444  0.0  0.0  20472   208 ?        S    01:08   0:00  \_ /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/lxcbr0.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

Details of the lxcbr0.conf file.
$ sudo cat /var/lib/libvirt/dnsmasq/lxcbr0.conf 
##WARNING:  THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
##OVERWRITTEN AND LOST.  Changes to this configuration should be made using:
##    virsh net-edit lxcbr0
## or other application using the libvirt API.
##
## dnsmasq conf file created by libvirt
strict-order
pid-file=/var/run/libvirt/network/lxcbr0.pid
except-interface=lo
bind-dynamic
interface=lxcbr0
dhcp-range=192.168.125.2,192.168.125.254
dhcp-no-override
dhcp-lease-max=253
dhcp-hostsfile=/var/lib/libvirt/dnsmasq/lxcbr0.hostsfile
addn-hosts=/var/lib/libvirt/dnsmasq/lxcbr0.addnhosts

Linux Containers (LXC) in Fedora 22 Rawhide - Part 2

In Part 1, we learned how to set up LXC in Fedora 22 and, at the same time, encountered quite a few issues and possible workarounds to get it working. In this post, we'll keep looking into these workarounds to find better or alternative solutions.

One of the issues is the deprecation of the YUM command in favour of DNF for managing packages. The changes are not meant to be backward compatible, and breakage is certain. Instead of creating a container and downloading all the basic packages, we can build a container using the download template.

Let's try the download template method. Once you've run the command below, a list of distro images will be shown. Note that not all distros can be created through this method; for example, Arch Linux is missing from the image list below, so you still have to fall back to the regular distro templates for container creation.

Next, you will be prompted to key in your distribution, release, and architecture. Once you've keyed in your selection, the command will continue to download the image. This may take a while, depending on your Internet speed.
$ sudo lxc-create -t download -n download-test
Setting up the GPG keyring
Downloading the image index

---
DIST    RELEASE ARCH    VARIANT BUILD
---
centos  6       amd64   default 20150507_02:16
centos  6       i386    default 20150507_02:16
centos  7       amd64   default 20150507_02:16
debian  jessie  amd64   default 20150506_22:42
debian  jessie  armel   default 20150506_22:42
debian  jessie  armhf   default 20150503_22:42
debian  jessie  i386    default 20150506_22:42
debian  sid     amd64   default 20150506_22:42
debian  sid     armel   default 20150506_22:42
debian  sid     armhf   default 20150506_22:42
debian  sid     i386    default 20150506_22:42
debian  wheezy  amd64   default 20150506_22:42
debian  wheezy  armel   default 20150505_22:42
debian  wheezy  armhf   default 20150506_22:42
debian  wheezy  i386    default 20150506_22:42
fedora  19      amd64   default 20150507_01:27
fedora  19      armhf   default 20150507_01:27
fedora  19      i386    default 20150507_01:27
fedora  20      amd64   default 20150507_01:27
fedora  20      armhf   default 20150507_01:27
fedora  20      i386    default 20150507_01:27
gentoo  current amd64   default 20150507_14:12
gentoo  current armhf   default 20150507_14:12
gentoo  current i386    default 20150507_14:12
opensuse        12.3    amd64   default 20150507_00:53
opensuse        12.3    i386    default 20150507_00:53
oracle  6.5     amd64   default 20150507_11:40
oracle  6.5     i386    default 20150507_11:40
plamo   5.x     amd64   default 20150506_21:36
plamo   5.x     i386    default 20150506_21:36
ubuntu  precise amd64   default 20150507_03:49
ubuntu  precise armel   default 20150507_03:49
ubuntu  precise armhf   default 20150507_03:49
ubuntu  precise i386    default 20150507_03:49
ubuntu  trusty  amd64   default 20150507_03:49
ubuntu  trusty  armhf   default 20150507_03:49
ubuntu  trusty  i386    default 20150506_03:49
ubuntu  trusty  ppc64el default 20150507_03:49
ubuntu  utopic  amd64   default 20150507_03:49
ubuntu  utopic  armhf   default 20150507_03:49
ubuntu  utopic  i386    default 20150507_03:49
ubuntu  utopic  ppc64el default 20150507_03:49
ubuntu  vivid   amd64   default 20150507_03:49
ubuntu  vivid   armhf   default 20150507_03:49
ubuntu  vivid   i386    default 20150506_03:49
ubuntu  vivid   ppc64el default 20150507_03:49
---

Distribution: centos
Release: 6
Architecture: amd64

Downloading the image index
Downloading the rootfs 
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created a CentOS container (release=6, arch=amd64, variant=default)

To enable sshd, run: yum install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

Once the container has been created, we start and attach to it.
$ sudo lxc-start -n download-test
$ sudo lxc-attach -n download-test

# uname -a
Linux download-test 4.0.1-300.fc22.x86_64 #1 SMP Wed Apr 29 15:48:25 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/centos-release 
CentOS release 6.6 (Final)

Instead of being prompted for the distribution, release, and architecture, you can simply create a container with a one-line command. Note the extra double dash (--) before the template arguments. All parameters after the (--) are passed to the template rather than to the lxc-create command. Container creation should be very fast the second time around, as the program caches the downloaded images.
$ sudo lxc-create -t download -n download-test -- -d centos -r 6 -a amd64

To see the options available for a particular template, use the command below. You can substitute 'download' with any template name found in /usr/share/lxc/templates/.
$ lxc-create -t download -h

Linux Containers (LXC) in Fedora 22 Rawhide - Part 1

While Docker, an application container, is widely popular right now, I've decided to try LXC, a machine container that behaves like a virtual machine from VirtualBox or VMware but with near bare-metal performance. As I'm running Fedora Rawhide (F22), let's try to install and set up LXC in this distro.

Installation is pretty much straightforward.
$ sudo dnf install lxc lxc-templates lxc-extra

Checking our installed version against the latest available release; our installed version is on par with the current release.
$ lxc-ls --version
1.1.2

The first thing to do is to check our LXC configuration. As highlighted below, the Cgroup memory controller is not enabled by default, as it incurs additional memory overhead. It can be enabled by adding the boot parameter cgroup_enable=memory to the GRUB boot loader (a sketch for doing so follows the lxc-checkconfig output below). For now, we will keep that in mind and stick to the default.
$ lxc-checkconfig

Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-4.0.1-300.fc22.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: missing
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
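
Should you later decide to enable the memory controller, a minimal sketch would be to append the parameter to the kernel command line in /etc/default/grub and regenerate the GRUB configuration. This assumes GRUB2 on a BIOS machine (the grub.cfg path differs on EFI systems), and <existing parameters> stands for whatever is already on that line.
$ sudo vim /etc/default/grub
GRUB_CMDLINE_LINUX="<existing parameters> cgroup_enable=memory"
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
$ sudo reboot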

Before we can create our container, let's find out which templates, or GNU/Linux distros, are available.
$ ll /usr/share/lxc/templates/
total 348K
-rwxr-xr-x. 1 root root  11K Apr 24 03:22 lxc-alpine*
-rwxr-xr-x. 1 root root  14K Apr 24 03:22 lxc-altlinux*
-rwxr-xr-x. 1 root root  11K Apr 24 03:22 lxc-archlinux*
-rwxr-xr-x. 1 root root 9.5K Apr 24 03:22 lxc-busybox*
-rwxr-xr-x. 1 root root  29K Apr 24 03:22 lxc-centos*
-rwxr-xr-x. 1 root root  11K Apr 24 03:22 lxc-cirros*
-rwxr-xr-x. 1 root root  17K Apr 24 03:22 lxc-debian*
-rwxr-xr-x. 1 root root  18K Apr 24 03:22 lxc-download*
-rwxr-xr-x. 1 root root  48K Apr 24 03:22 lxc-fedora*
-rwxr-xr-x. 1 root root  28K Apr 24 03:22 lxc-gentoo*
-rwxr-xr-x. 1 root root  14K Apr 24 03:22 lxc-openmandriva*
-rwxr-xr-x. 1 root root  15K Apr 24 03:22 lxc-opensuse*
-rwxr-xr-x. 1 root root  40K Apr 24 03:22 lxc-oracle*
-rwxr-xr-x. 1 root root  11K Apr 24 03:22 lxc-plamo*
-rwxr-xr-x. 1 root root 6.7K Apr 24 03:22 lxc-sshd*
-rwxr-xr-x. 1 root root  25K Apr 24 03:22 lxc-ubuntu*
-rwxr-xr-x. 1 root root  13K Apr 24 03:22 lxc-ubuntu-cloud*

Let's proceed by creating our first container, a CentOS 6 distro. Unfortunately, as seen below, the creation failed because the deprecated Yum command has been redirected to the DNF command.
$ sudo lxc-create -t centos -n centos-test

Host CPE ID from /etc/os-release: cpe:/o:fedoraproject:fedora:22
This is not a CentOS or Redhat host and release is missing, defaulting to 6 use -R|--release to specify release
Checking cache download in /var/cache/lxc/centos/x86_64/6/rootfs ... 
Downloading centos minimal ...
Yum command has been deprecated, redirecting to '/usr/bin/dnf -h'.
See 'man dnf' and 'man yum2dnf' for more information.
To transfer transaction metadata from yum to DNF, run:
'dnf install python-dnf-plugins-extras-migrate && dnf-2 migrate'

Yum command has been deprecated, redirecting to '/usr/bin/dnf --installroot /var/cache/lxc/centos/x86_64/6/partial -y --nogpgcheck install yum initscripts passwd rsyslog vim-minimal openssh-server openssh-clients dhclient chkconfig rootfiles policycoreutils'.
See 'man dnf' and 'man yum2dnf' for more information.
To transfer transaction metadata from yum to DNF, run:
'dnf install python-dnf-plugins-extras-migrate && dnf-2 migrate'

Config error: releasever not given and can not be detected from the installroot.
Failed to download the rootfs, aborting.
Failed to download 'centos base'
failed to install centos
lxc-create: lxccontainer.c: create_run_template: 1202 container creation template for centos-test failed
lxc-create: lxc_create.c: main: 274 Error creating container centos-test

The above error is a good example of why the transition from the YUM to the DNF command was an unnecessary change that caused breakage. It turns out that /usr/bin/yum is now a shell script that merely displays a notification message. To resolve this, we need to point /usr/bin/yum back to the actual yum program. There is a way to bypass this step, which we'll discuss in Part 2.
$ sudo mv /usr/bin/yum /usr/bin/yum2dnf
$ sudo ln -s /usr/bin/yum-deprecated /usr/bin/yum
$ ll /usr/bin/yum
lrwxrwxrwx. 1 root root 23 May  5 23:40 /usr/bin/yum -> /usr/bin/yum-deprecated*

Let's try again. Although the deprecation notification still appears, the creation of the container will run smoothly. Since we're creating this for the first time, it will take a while to download all the packages.
$ sudo lxc-create -t centos -n centos-test
......
Complete!
Download complete.
Copy /var/cache/lxc/centos/x86_64/6/rootfs to /var/lib/lxc/centos-test/rootfs ... 
Copying rootfs to /var/lib/lxc/centos-test/rootfs ...
Storing root password in '/var/lib/lxc/centos-test/tmp_root_pass'
Expiring password for user root.
passwd: Success

Container rootfs and config have been created.
Edit the config file to check/enable networking setup.

The temporary root password is stored in:

        '/var/lib/lxc/centos-test/tmp_root_pass'

The root password is set up as expired and will require it to be changed
at first login, which you should do as soon as possible.  If you lose the
root password or wish to change it without starting the container, you
can change it from the host by running the following command (which will
also reset the expired flag):

        chroot /var/lib/lxc/centos-test/rootfs passwd

Checking our newly created container.
$ sudo lxc-ls
centos-test  
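
For a slightly more informative listing, lxc-ls in this release should also support a fancy mode that shows the state and IP addresses of each container:
$ sudo lxc-ls --fancy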

Checking the container status.
$ sudo lxc-info -n centos-test
Name:           centos-test
State:          STOPPED

Start our newly created container. Yet again, another error.
$ sudo lxc-start -n centos-test
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 346 To get more details, run the container in foreground mode.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

Let's try again, but in foreground mode (-F).
$ sudo lxc-start -F -n centos-test
lxc-start: conf.c: instantiate_veth: 2672 failed to attach 'vethM9Q6RT' to the bridge 'lxcbr0': Operation not permitted
lxc-start: conf.c: lxc_create_network: 2955 failed to create netdev
lxc-start: start.c: lxc_spawn: 914 failed to create the network
lxc-start: start.c: __lxc_start: 1164 failed to spawn 'centos-test'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

I was quite surprised that Fedora did not create the lxcbr0 bridge interface automatically. Instead, we will use the virbr0 bridge provided by libvirtd.
$ sudo yum install libvirt-daemon
$ sudo systemctl start libvirtd

Check the bridge network interface.
$ brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.525400c28250       yes             virbr0-nic

Edit our container config file and change the network link from lxcbr0 to virbr0.
$ sudo vim /var/lib/lxc/centos-test/config
lxc.network.link = virbr0
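
If you would rather not open an editor, the same change can be made with a one-line sed substitution (assuming the config still contains the default lxcbr0 entry in the format shown above):
$ sudo sed -i 's/lxc.network.link = lxcbr0/lxc.network.link = virbr0/' /var/lib/lxc/centos-test/config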

Try to start the container again; this time we hit another '819 Permission denied' error.
$ sudo lxc-start -F -n centos-test
lxc-start: conf.c: lxc_mount_auto_mounts: 819 Permission denied - error mounting /usr/lib64/lxc/rootfs/proc/sys/net on /usr/lib64/lxc/rootfs/proc/net flags 4096
lxc-start: conf.c: lxc_setup: 3833 failed to setup the automatic mounts for 'centos-test'
lxc-start: start.c: do_start: 699 failed to setup the container
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 2
lxc-start: start.c: __lxc_start: 1164 failed to spawn 'centos-test'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

After struggling and googling for an answer for the past few hours, it finally dawned on me that the '819 Permission denied' error is related to SELinux policy. I did a quick check by disabling SELinux and rebooting the machine, and was able to start the container.
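
If you just want to confirm that SELinux is the culprit without a reboot, temporarily switching to permissive mode should be sufficient (this does not persist across reboots):
$ sudo setenforce 0
$ getenforce
Permissive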

Also, just to confirm that the lxc-start error comes from SELinux, check the audit log.
$ sudo grep lxc-start /var/log/audit/audit.log | tail -n 1
type=AVC msg=audit(1430849851.869:714): avc:  denied  { mounton } for  pid=3780 comm="lxc-start" path="/usr/lib64/lxc/rootfs/proc/1/net" dev="proc" ino=49148 scontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tcontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tclass=dir permissive=0

Start the SELinux Alert Browser and run the commands below to add the security policy.
$ sealert -b

$ sudo grep lxc-start /var/log/audit/audit.log | audit2allow -M mypol
******************** IMPORTANT ***********************
To make this policy package active, execute:

semodule -i mypol.pp

$ sudo semodule -i mypol.pp
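
To double-check that the module was actually loaded, list the installed policy modules and filter for ours:
$ sudo semodule -l | grep mypol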

Start our container again and check its status.
$ sudo lxc-start -n centos-test 
$ sudo lxc-info -n centos-test
Name:           centos-test
State:          RUNNING
PID:            6742
CPU use:        0.44 seconds
BlkIO use:      18.55 MiB
Memory use:     12.14 MiB
KMem use:       0 bytes
Link:           veth4SHUE1
 TX bytes:      578 bytes
 RX bytes:      734 bytes
 Total bytes:   1.28 KiB

Attach to our container. There is no login needed.
$ sudo lxc-attach -n centos-test
[root@centos-test /]# uname -a
Linux centos-test 4.0.1-300.fc22.x86_64 #1 SMP Wed Apr 29 15:48:25 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

[root@centos-test /]# cat /etc/centos-release
CentOS release 6.6 (Final)