PlantUML is always my go-to diagramming tool whenever I need to understand and document an existing legacy system. There are two things I love about this tool. First, you can generate any UML diagram from a mere textual description, and since plain text is a universal format, it is easy to keep under version control as well. Second, you don't need to worry about layout; you can focus on the modelling and let the program decide on that.

Below are some notes I jotted down while setting it up on a new machine. I believe it was around June 2015, but I've updated them for Fedora 24 (Rawhide).

$ sudo dnf install plantuml graphviz java-1.8.0-openjdk

If you want the latest pre-compiled PlantUML release, you'll have to download the JAR file and run it manually.
$ java -jar plantuml.jar -version

For installations through the distro's repository, there is a shell script, '/usr/bin/plantuml', that sets up all the necessary Java environment details for you.
$ file `which plantuml`
/usr/bin/plantuml: Bourne-Again shell script, ASCII text executable

Since PlantUML uses Graphviz as its renderer, we need to verify that PlantUML can detect it.
$ java -jar plantuml.jar -testdot
The environment variable GRAPHVIZ_DOT has been set to /usr/bin/dot
Dot executable is /usr/bin/dot
Dot version: dot - graphviz version 2.38.0 (20140413.2041)
Installation seems OK. File generation OK

Another way is to check the PlantUML version, which prints the same diagnostics.
$ java -jar plantuml.jar -version
PlantUML version 8027 (Sat Jun 20 18:13:59 MYT 2015)
(GPL source distribution)
OpenJDK Runtime Environment
OpenJDK 64-Bit Server VM

The environment variable GRAPHVIZ_DOT has been set to /usr/bin/dot
Dot executable is /usr/bin/dot
Dot version: dot - graphviz version 2.38.0 (20140413.2041)
Installation seems OK. File generation OK

Now generate our sample diagram with verbosity on (useful for detecting issues).
$ java -jar plantuml.jar -verbose sample.txt 
(0.000 - 117 Mo) 114 Mo - PlantUML Version 8033
(0.074 - 117 Mo) 113 Mo - GraphicsEnvironment.isHeadless() true
(0.074 - 117 Mo) 113 Mo - Forcing -Djava.awt.headless=true
(0.074 - 117 Mo) 113 Mo - java.awt.headless set as true
(0.085 - 117 Mo) 113 Mo - Setting current dir: .
(0.085 - 117 Mo) 113 Mo - Setting current dir: /home/hojimi
(0.087 - 117 Mo) 112 Mo - Using default charset
(0.093 - 117 Mo) 112 Mo - Setting current dir: /home/hojimi
(0.099 - 117 Mo) 112 Mo - Setting current dir: /home/hojimi
(0.100 - 117 Mo) 112 Mo - Reading file: sample.txt
(0.100 - 117 Mo) 112 Mo - name from block=null
(0.728 - 117 Mo) 93 Mo - Creating file: /home/hojimi/sample.png
(0.776 - 117 Mo) 90 Mo - Creating image 121x126
(0.812 - 117 Mo) 90 Mo - Ok for com.sun.imageio.plugins.png.PNGMetadata
(0.860 - 117 Mo) 89 Mo - File size : 2131
(0.861 - 117 Mo) 89 Mo - Number of image(s): 1

Here is the textual description in 'sample.txt' that produces that UML diagram.
$ cat sample.txt 
Alice -> Bob
Bob -> Alice

Experiences On Using Static Code Analysis Tools for Python

Static code analysis, as the name implies, is the analysis of the non-running source code of a program. This can be done manually through code reviews, where an experienced developer inspects and walks through the code to find potential programming mistakes. However, such a manual process is time consuming and can be complemented by automated static code analysis tools.

In the Python world, Pyflakes, PyChecker, and Pylint are the common static code analysis tools. This post discusses my experience applying these tools to Subdown, an open-source image scraper console tool written in Python.

To evaluate and compare these three static code analysis tools for the Python programming language, I've picked an open-source project called Subdown. The program is an image downloader console tool for Reddit, an online news sharing community. The site is organized into multiple subreddits, smaller communities grouped by topic or interest. The program consists of a single-file Python script which crawls a targeted subreddit for external image URLs and downloads these images asynchronously. First, download the sample script.
$ wget

Pyflakes is the most basic of these tools. It only parses the Python source files to check for errors; it does not check for coding style violations. The warning shown below indicates that the Subdown program imports an unused module named 'mimetypes'. Loading unnecessary modules slows down program start-up and consumes additional memory.
$ pyflakes
'mimetypes' imported but unused
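Under the hood, Pyflakes works by parsing the source into an abstract syntax tree and inspecting it without executing anything. As a rough illustration of the idea (a toy sketch, not Pyflakes' actual implementation), an unused-import check can be written with the stdlib ast module:

```python
import ast

def unused_imports(source):
    """Toy check: return names that are imported but never referenced."""
    tree = ast.parse(source)
    imported, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imported.add(alias.asname or alias.name.split('.')[0])
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported.add(alias.asname or alias.name)
        elif isinstance(node, ast.Name):
            used.add(node.id)
    return sorted(imported - used)

code = "import mimetypes\nimport sys\nprint(sys.argv)\n"
print(unused_imports(code))  # ['mimetypes']
```

The real Pyflakes additionally tracks scopes, aliases, `__all__` exports, and much more, but the principle is the same: a purely static walk over the parsed source.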

Similar to Pyflakes, PyChecker also parses and checks source files for errors, hence it shares similar warnings with Pyflakes. Furthermore, PyChecker also imports and executes the Python modules for additional validation. The result below shows a warning that the same Python module, gevent, was imported into the application in two separate ways when it should be done once.
$ pychecker
Processing module subdown (
 ImportError: No module named _winreg
 :28: self is not first method argument
 Imported module (mimetypes) not used
 Using import and from ... import for (gevent)

However, this is a false positive. As shown in its code below, the Subdown program imports the gevent module. On the second line, it imports from gevent again, but only the monkey module, so it can "monkey patch" the existing behaviour to work around a limitation of the standard socket module. Monkey patching is a feature of dynamically typed programming languages where we can extend or modify the behaviour of methods, attributes, or functions at run-time. This technique is typically used to work around the constraint of not being able to modify an existing library directly.
16 import gevent
17 from gevent import monkey; monkey.patch_socket()
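To illustrate monkey patching itself, here is a minimal, self-contained sketch (the class and function names are made up for this example). Rebinding a method on an existing class at run-time is, in spirit, what gevent's monkey.patch_socket() does to the standard socket module:

```python
class Greeter:
    def greet(self):
        return "hello"

def patched_greet(self):
    # New behaviour injected at run-time, without touching the class source
    return "hello, patched!"

# Monkey patch: rebind the method on the existing class.
# Every existing and future instance now uses the new behaviour.
Greeter.greet = patched_greet

print(Greeter().greet())  # hello, patched!
```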

Pylint, the next static code analysis tool in our evaluation, is the most comprehensive, with many additional features. See Appendix A for the detailed output when run against the Subdown program. Instead of just checking for Python code errors like the previous two tools, it also checks for coding style violations and code smells. Code style is validated against Python's PEP 8 style guide. A code smell, meanwhile, is a piece of inefficient code that, while it may run correctly, still has room for improvement through refactoring. All findings are categorized into five message types as shown below:
  • (C) convention, for programming standard violation
  • (R) refactor, for bad code smell
  • (W) warning, for python specific problems
  • (E) error, for probable bugs in the code
  • (F) fatal, if an error occurred which prevented pylint from doing further processing
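As a quick illustration, a deliberately sloppy snippet like the following (hypothetical code, not taken from Subdown) would trip several of these categories at once; the exact message IDs and names vary between Pylint versions:

```python
import os            # (W) warning: unused-import

def check(x):        # (C) convention: missing docstring
    if x:
        return True
        print("done")   # (W) warning: unreachable code after return
    return False

print(check(1))  # True
```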

Comparing the sample result below with the previous two tools, we get a similar warning about the unused import. However, there is a new warning the others did not report: 'Unreachable code'.
W:198,12: Unreachable code (unreachable)
C:200, 0: Missing function docstring (missing-docstring)
W: 11, 0: Unused import mimetypes (unused-import)

Extracting the portion of code shown below which corresponds to the 'Unreachable code' warning, we see that line 198 will never be executed due to the raise statement on line 197. Upon re-raising the exception on line 197, control leaves the function immediately. This is a good example of how a static code analysis tool can uncover incorrect assumptions made by the developer.
194         try:
195             get_subreddit(subreddit, max_count, timeout, page_timeout)
196         except Exception as e:
197             raise
198             puts(
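The behaviour is easy to confirm at run-time. In the sketch below (simplified from the pattern above, with made-up names), a bare raise re-raises the current exception immediately, so the statement after it never executes:

```python
log = []

def fetch():
    try:
        1 / 0
    except ZeroDivisionError:
        raise                  # re-raises immediately; control leaves here
        log.append("logged")   # unreachable, exactly what Pylint warns about

try:
    fetch()
except ZeroDivisionError:
    pass

print(log)  # [] -- the line after 'raise' never ran
```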

While writing this post, I found another tool called Pylama, a helper tool that wraps several code linters such as Pyflakes and Pylint. However, there is an issue integrating it with Pylint. You may give it a try, but YMMV.

Swift in Fedora 24 (Rawhide)

Swift, the language developed by Apple as a replacement for Objective-C, was recently open sourced. However, the official binaries are only available for Ubuntu and Mac OS X. Hence, for a Fedora user like myself, the only option is to build it from source.

First, install all the necessary packages.
$ sudo dnf install git cmake ninja-build clang uuid-devel libuuid-devel libicu-devel libbsd-devel libedit-devel libxml2-devel sqlite-devel swig python-devel ncurses-devel pkgconfig

Next, create our working folder.
$ mkdir swift-lang

Clone the minimum set of repositories needed to build Swift.
$ git clone swift
$ git clone clang
$ git clone cmark
$ git clone llvm

If you have a slow Internet connection and experience disconnections during cloning, it is best to clone partially; otherwise, you'll have to restart from the beginning every time.
$ git clone --depth 1 llvm
$ cd llvm
$ git fetch --unshallow

If you have a good Internet connection, you can proceed to clone the remaining repositories.
$ git clone lldb
$ git clone llbuild
$ git clone swiftpm
$ git clone
$ git clone

As Swift was configured to build on Ubuntu or Debian, you may encounter several issues during compilation. These are my workarounds.

/usr/bin/which: no ninja in ...
In Fedora, the Ninja build binary is named 'ninja-build', but the Swift build script expects it to be 'ninja'. We create a symlink to bypass that.
$ sudo ln -s /usr/bin/ninja-build /usr/bin/ninja

Missing ioctl.h
During compilation, the ioctl.h header file was not found, as the build script assumes it is located in '/usr/include/x86_64-linux-gnu', as shown below.
header "/usr/include/x86_64-linux-gnu/sys/ioctl.h"

A temporary workaround is to symlink the folder that contains these files.
$ sudo mkdir -p /usr/include/x86_64-linux-gnu/
$ sudo ln -s /usr/include/sys/ /usr/include/x86_64-linux-gnu/sys

pod2man conversion failure
The 'pod2man' tool doesn't seem to convert the POD file to a man page, as illustrated by the error message below.
FAILED: cd /home/hojimi/Projects/swift-lang/build/Ninja-ReleaseAssert/swift-linux-x86_64/docs/tools && /usr/bin/pod2man --section 1 --center Swift\ Documentation --release --name swift --stderr /home/hojimi/Projects/swift-lang/swift/docs/tools/swift.pod > /home/hojimi/Projects/swift-lang/build/Ninja-ReleaseAssert/swift-linux-x86_64/docs/tools/swift.1
Can't open swift: No such file or directory at /usr/bin/pod2man line 68.

Judging from this error message, the 'swift.pod' file has been corrupted and emptied. You'll need to restore it from the repository.
$ git checkout -- docs/tools/swift.pod

We need to disable the '--name swift' parameter. This is done by commenting out the 'MAN_FILE' variable.
$ sed -i 's/MAN_FILE/#MAN_FILE/g' swift/docs/tools/CMakeLists.txt

Once all the workarounds have been applied, we'll proceed with our compilation. You do not really need to pass the '-j 4' parameter for parallel compilation; by default, Ninja will compile using all available CPU cores, which really reduces compilation time. Also, we just want the release (-R) build without any debugging information attached.
$ ./swift/utils/build-script -R -j 4

Add our compiled binary's path to the system PATH.
$ cd build/Ninja-ReleaseAssert/swift-linux-x86_64/bin/
$ export PATH=$PATH:`pwd`

Lastly, check our compiled binary.
$ swift --version
Swift version 2.2-dev (LLVM 7bae82deaa, Clang 587b76f2f6, Swift 1171ed7081)
Target: x86_64-unknown-linux-gnu

Be warned, compilation takes quite a while, possibly several hours, depending on your machine's specification and the type of build. I noticed my laptop was burning hot, with all four CPU cores running at 100% most of the time. It's recommended that during compilation you place your laptop near a fan or somewhere with good ventilation. See below how the temperature exceeded the high threshold of 86.0°C.
$ sensors
Adapter: Virtual device
temp1:        +95.0°C  (crit = +98.0°C)

Adapter: ISA adapter
fan1:        4510 RPM

Adapter: ISA adapter
Physical id 0:  +97.0°C  (high = +86.0°C, crit = +100.0°C)
Core 0:         +94.0°C  (high = +86.0°C, crit = +100.0°C)
Core 1:         +97.0°C  (high = +86.0°C, crit = +100.0°C)

Under normal usage, the average temperature is roughly 50°C.
$ sensors
Adapter: Virtual device
temp1:        +46.0°C  (crit = +98.0°C)

Adapter: ISA adapter
fan1:        3525 RPM

Adapter: ISA adapter
Physical id 0:  +49.0°C  (high = +86.0°C, crit = +100.0°C)
Core 0:         +49.0°C  (high = +86.0°C, crit = +100.0°C)
Core 1:         +45.0°C  (high = +86.0°C, crit = +100.0°C)

From Fedora 23 To Fedora 24 (Rawhide)

So there I was, looking at my screen, realizing Fedora 23 is too stable, or rather, too boring. Hence, I've decided to upgrade to Rawhide, the upcoming Fedora 24, which is expected to be released by 17th May 2016. Compared with my upgrade from Fedora 21 to Fedora 22 (Rawhide), I hope there will be no major issues this time.

Configure your DNF for Rawhide.
$ sudo dnf upgrade dnf
$ sudo dnf install dnf-plugins-core fedora-repos-rawhide
$ sudo dnf config-manager --set-disabled fedora updates updates-testing
$ sudo dnf config-manager --set-enabled rawhide
$ sudo dnf clean -q dbcache plugins metadata

Upgrade your distro.
$ sudo dnf --releasever=rawhide --setopt=deltarpm=false distro-sync --nogpgcheck --allowerasing

It's always 'exciting' to run a rolling release where you can test the latest and greatest features. For Fedora 24, lots of features are planned, but I'm most eager to test Wayland, the new display protocol that is going to replace X. It seems some users already have a good and stable enough experience using it in Fedora Rawhide. Can't wait to try it out on my T4210.

The upgrade was painfully slow. First, I had to downgrade certain packages, like VLC from the RPMFusion repository, back to their Fedora 22 versions (hence the --allowerasing option in the command above). Then I had to download a total of 1,860 packages. That alone took me around three-plus hours.

However, the upgrade failed due to a conflict with Python 3.5. I just realized that I had upgraded my Python to 3.5 using Copr. And to make matters worse, by default DNF does not cache downloaded packages! I had no choice but to redo everything again. In the end, I wasted another three hours.

First things first: let's enable caching for DNF. Next, temporarily remove all those packages (wine-* and texlive-*) to reduce the number of packages to download, and remove the Python 3.5 I installed earlier from Cool Other Package Repo (COPR). Then repeat the distro-sync command to upgrade your distro again and reboot.
$ echo 'keepcache=true' | sudo tee -a /etc/dnf/dnf.conf
$ sudo dnf remove wine* texlive-*
$ sudo dnf remove python35-python3*

Once you've successfully upgraded, your system should have GNOME 3.19.2, Wayland 1.9.0, and Linux kernel 4.4.0. Here are some interesting observations while testing Fedora 24.

Updates during booting
This happened twice, and I needed to reboot to complete the upgrade. It seemed that systemd was instructed to handle the upgrade, which was totally new to me. I was under the impression that during the upgrade all the packages would be overwritten in place. See the screenshot below.

Wayland is the default display server
Previously, you had to manually switch to Wayland in the GNOME login screen (click your username and then select it from the gear icon). Now it is the reverse: if you want to use X (which you should, as not all apps have been ported to Wayland yet), you have to select it manually by picking 'GNOME for X' from the menu.

Apps that fail to work
Shutter, the screenshot capture tool, does not work. I suspect this is due to Wayland's security model, as grabbing the contents of other windows is not allowed. GNOME Terminal, the default terminal emulator, will shrink every time it is refocused when set to a custom window size. The Dash to Dock GNOME extension does not work either and has been disabled. It is best to check all the Wayland bug reports in GNOME's or Red Hat's Bugzilla. Wayland is getting there, but still, you can always fall back to X11.

Natural scrolling in Touchpad
I'm not sure why this was made the default, but it's fricking annoying. Basically, under natural scrolling, the content moves in the same direction as your fingers, similar to using a mobile phone or tablet. Differentiating between natural and non-natural scrolling is easy: for the former, think of moving the content; for the latter, think of moving the scrollbar.