
This Week I Learned - 2016 Week 47

Last week post or the whole series.

What an excruciating, stressful week. So many things to follow up and so many things broken, including my own body, to the point that I've lost 3kg. On the positive side, when you're down with sickness, your perspective towards your environment changes, in a slightly off-putting way.

On Git. I just realized, unintentionally, that my git-fu has increased by 0.5% over the last two weeks. When resolving conflicts through rebasing and merging, it seems the LOCAL and REMOTE branches are interpreted differently across merge tools; P4Merge interprets them differently too. To summarize: LOCAL is the original, REMOTE is the changes you want to add, regardless of whether it's a rebasing or a merging process.

Now, after you've resolved all the conflicts, either through rebasing or merging, to visualize and compare your changes (just to be sure), we can use the `git diff` command to compare two ranges.
$ git diff branchX..branchY

Note the double-dot specifying the range there. You can use either double-dot or triple-dot to specify ranges, but they produce different types of output. The Venn diagrams and commit trees below show the differences.
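As a rough sketch of the difference in a throwaway repository (branch and file names here are made up for illustration):

```shell
# Toy repo: branchX and branchY diverge from a common base commit.
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com && git config user.name you
echo base > file && git add file && git commit -qm base
git checkout -qb branchX
echo x > only-on-x && git add only-on-x && git commit -qm x
git checkout -qb branchY HEAD~1      # branchY diverges from the base

echo y > only-on-y && git add only-on-y && git commit -qm y

# Two-dot: diff between the two branch tips; mentions both
# only-on-x (removed) and only-on-y (added).
git diff branchX..branchY

# Three-dot: diff from the merge base (common ancestor) to the tip
# of branchY; mentions only only-on-y.
git diff branchX...branchY
```

In other words, `branchX..branchY` answers "how do the two tips differ", while `branchX...branchY` answers "what has branchY done since the branches diverged".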

There is one practice that I do follow when using Git with feature branches, which is to commit early and commit often. However, one of the issues is that the feature branch history gets cluttered with many tiny commits. While this is useful when working with others on the same branch (we're aware of what's going on), it's best to squash these commits before merging into the `master` branch. There are two ways (with different behaviours) to squash all the commits.

First is to squash these commits when merging into the `master` branch. The source branch, in this case `branchX`, will be thrown away.
$ git checkout master
$ git merge --squash branchX
$ git commit

Second is to squash these commits when rebasing. This is my preferred method, where we still keep the source branch. However, if you have a lot of commits, it's quite slow, as you have to assign a rebasing action (either squash or fixup) to each commit. Some Git client tools support squashing commits this way, but I've never really explored them.
$ git checkout branchX
$ git rebase -i `git merge-base branchX master`

That's about it for this week; more stuff to come in the coming weeks as we approach the end of the year.

This Week I Learned - 2016 Week 44

Last week post or the whole series.

Every time I reread Perl in about 2 hours 20 minutes, there are always new insights that increase my understanding of Perl itself. I can finally grok the intricacy of the three data types: scalar, array, and hash, especially the latter two. To make it short: don't use lists as containers for arrays or hashes; just declare and initialize both types as an anonymous array (using brackets) or hash (using braces). You can then use them through references and the arrow (`->`) operator. Example as shown.
use v5.10;

my $contact = {
    name => 'John Doe',
    mobiles => [
        {carrier => 'at&t', no => '111-222-333'},
        {carrier => 't-mobile', no => '444-555-666'},
    ],
};

say $contact->{mobiles}->[0]->{carrier}; # at&t
say $contact->{mobiles}->[1]->{carrier}; # t-mobile

Another key concept is that Perl calls by reference. This means when you pass a variable to a subroutine, it's a reference to the original value, not a copy. Any modification to the variable will be reflected within the scope or context of the code until the end of the execution. The most important takeaway is that the arguments to a subroutine are a list of scalars (yes, @_ is a list) and the elements of @_ are aliases for the passed parameters. According to the perlsub documentation (emphasis added),
Therefore, if you called a function with two arguments, those would be stored in $_[0] and $_[1] . The array @_ is a local array, but its elements are aliases for the actual scalar parameters. In particular, if an element $_[0] is updated, the corresponding argument is updated (or an error occurs if it is not updatable). 

While we are on Perl (yes, we're still talking about it), what are the preferred ways to check whether an element exists in an array? There are two common ways: one using the built-in grep function, the other using the `List::MoreUtils` module.
# using grep
if (grep { $_ eq $element } @list) { ... }

# using List::MoreUtils
use List::MoreUtils qw(any);
if ( any { $_ eq $element } @list ) { ... }

My migration back to Vim from Sublime Text seems to be progressing quite well. It's nice to expose yourself to another editor and reapply certain features back to your default editor. First is updating the tmux tab with Vim's opened file name. This was one of those things that you want to fix but never remember to do. Next, fixing copying from the clipboard not working in Windows. Next, setting indentation rules (like tab only) by file type. And lastly, how to open the most recently opened files, something I didn't realize existed in the default settings.

Remember Lorem Ipsum, the filler or placeholder text commonly used in graphic design before the actual content is available? Well, this practice is also known as Greeking. In Perl, there is a module, Text::Greeking, which provides such a feature. There are others as well, like the usual Text::Lorem. But nothing can compare to the sheer bullshit of Lingua::ManagementSpeak, which can generate grammatically meaningful sentences of pure management speak.

"a web browser is a JS interpreter". So true. Maybe I should start looking into all these "ancient" technologies (e.g. Tcl/Tk) instead of chasing latest greatest fad.

It's always a grey area if you choose to do development work in either the porn or the gambling industry. But I never realized that the advertising industry is just as shady.

Hate being prompted for a password every time you commit to a Git repository through SSH? Save some typing by caching the credentials. There are two ways.

Through SSH.
$ eval `ssh-agent -s`
$ ssh-add ~/.ssh/id_rsa_key
$ ssh-add -l
$ ssh

Through Git.
$ git config --global credential.helper cache

This Week I Learned - 2016 Week 42

Last week post or the whole series.

Interesting week indeed. It has been a while since I last encountered so many different types of personalities who want, or don't want, to be developers.

As usual, what have I learned this week? The usual stuff.

If you're running GNU/Linux and want a way to manage different Windows OS versions through Vagrant, you can try this Vagrantfile. Installation and setup are pretty much straightforward; just make sure the Vagrantfile is downloaded. Unfortunately, the login still fails to work.
$ sudo install virtualbox vagrant
$ vagrant plugin install winrm winrm-fs

$ mkdir vagrant_win
$ cd vagrant_win
$ wget
$ IE=Win7IE8 vagrant up

Sanic, a Python 3.5+ asynchronous web server. The discussion at HN seems rather interesting. While this is nothing new, an asynchronous database layer like asyncpg seems rather useful for improving your DB query speed.

Issue with Babun's memory conflicts after Windows updates? Try rebasing, though not the Git kind of rebasing. Cygwin is still the better and preferred choice for a Unix experience on Windows. Yes, I know there is Bash on Windows.
1) Exit babun.
2) cmd /c %SYSTEMDRIVE%\Users\%USERNAME%\.babun\cygwin\bin\dash.exe -c '/usr/bin/rebaseall -v'

Customizing HTML's file inputs. Probably the most comprehensive guide on the different techniques to change the default behaviour.

Web framework benchmarks. Round 10 has one of the most humorous write-ups.
The project returns with significant restructuring of the toolset and Travis CI integration. Fierce battles raged between the Compiled Empire and the Dynamic Rebellion and many requests died to bring us this data. Yes, there is some comic relief, but do not fear—the only jar-jars here are Java.
What happens when you rename a branch in Git? Plenty of things. First, you rename it locally. Next, you rename it remotely (which is the same as removing the old branch and adding a new one). After that, either update your upstream URL or check out a fresh copy of the new branch. Lastly, you may need to batch update your commit messages.
$ git branch -m new_name
$ git branch -m old_name new_name
$ git push origin :old_name
$ git push --set-upstream origin new_name
$ git filter-branch -f --msg-filter 'sed "s/foo/bar/"' master..HEAD

The database schema for StackOverflow is publicly accessible. I was surprised that it's such a straightforward design and nothing fancy at all. Well, it's just a CRUD app with some additional tweaks here and there. However, the ranking formula is far more interesting when compared to the different algorithms used by other popular forum-like sites.

So many ways to iterate through a Perl array. Implementations 1, 4, and 5 are what I normally use, but the 5th method is still my favourite.

Source code syntax highlighting through Javascript? Just found out today that, besides highlight.js, there is also Prism.js. The former seems to support more languages, but the latter is used by quite a few popular projects.

The NBA season is going to start soon; maybe it's time for me to learn some statistics through a certain API. Can't wait to see what surprises the 2016/2017 season will give us.

This Week I Learned - 2016 Week 38

Last week post or the whole series. Interesting stuff learned this week.

Encountered this error message when checking a USB thumb drive with the `fdisk` command. The particular thumb drive was burned with an ISO file through the `dd` command.
$ sudo fdisk -l
GPT PMBR size mismatch (1432123 != 15765503) will be corrected by w(rite).
Disk /dev/sdc: 7.5 GiB, 8071938048 bytes, 15765504 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 8C18967D-CB41-4EF1-8958-4E495054958D

Device     Start     End Sectors   Size Type
/dev/sdc1     64   17611   17548   8.6M Microsoft basic data
/dev/sdc2  17612   23371    5760   2.8M EFI System
/dev/sdc3  23372 1432075 1408704 687.9M Microsoft basic data

Following the instructions given, running the device through `gparted` seems to resolve the issue.

Perl's hash initialization, referencing, and dereferencing. Seriously, I need to get this right and read more of Perl's FAQs.
# Normal way, without referencing.
%foobar = (a => '1', b => '2');
say $foobar{a};

# Using referencing. More readable.
$foobar = {a => '1', b => '2'};
say $foobar->{a};

# Alternatively.
$foobar_ref = \%foobar;
say $foobar_ref->{a};

Finding properties of the event target in Javascript.
$('foo').bind('click', function () {
    // inside here, `this` refers to the element that was clicked
});

How do you add a trailing slash if none is found? Regex, regex.
$string =~ s!/*$!/!; # Add a trailing slash

Protocol-relative URLs. While we're on the HTTP protocol, I was made aware that the anchor should be the last item in a URL.

The CSS image sprite technique using an HTML unordered list. One of the issues encountered: if you have a single-line text link, how do you align the text vertically in the middle? Make sure the `line-height` is equal to the `height` of the `li` element.

Git merge conflict? Just abort the whole process.
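A minimal sketch in a throwaway repository (`git merge --abort` has been available since Git 1.7.4, and restores the pre-merge working tree):

```shell
# Toy repo with a conflicting merge, then abort it.
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com && git config user.name you
echo one > file && git add file && git commit -qm base
git checkout -qb topic
echo two > file && git commit -qam topic
git checkout -q -
echo three > file && git commit -qam other

git merge topic || true   # conflict: both branches changed 'file'
git merge --abort         # back to the pre-merge state
cat file                  # three
```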

Similarly, to discard all changes on a diverged local branch, there are two ways. The first method is more to my liking.
# Method 1
$ git branch -D phobos
$ git checkout --track -b phobos origin/phobos

# Method 2
$ git checkout phobos
$ git reset --hard origin/phobos

Debugging Dockerfile. Something I learned this week but in a separate and longer post.

Starting a new software project but not sure about which technology stack to use? Read this slide as a guide.

This Week I Learned - 2016 Week 37

Last week post or the whole series.

As we're moving to the end of the third quarter of the year, more things pop up for me to follow up on. Interestingly but not surprisingly, life is as monotonous as ever. Yes, it can be routine, but that's probably the only way, through sheer discipline, to follow through on your plans.

The components for setting up my homelab using the AMD 5350 have been bought and set up accordingly. The only remaining tasks are to install the necessary OS and configuration. More writeups on this in the near future.

As usual, something I learned this week.

Looking into Makefiles, specifically GNU Make. Extracting parameters from a target? Yes, it's doable, but it's not pretty. See the code below. If your target is not an actual physical file, make it a '.PHONY' target instead; otherwise you will encounter a "No rule to make target" error. Next, we need 'eval' when extracting and assigning the parameters passed; otherwise the 'PARAMS' assignment will be executed as a command.
.PHONY: action

action:
	$(eval PARAMS := $(filter-out $@,$(MAKECMDGOALS)))
	@echo $(PARAMS)

# Swallow the extra goals so make does not try to build them as targets.
%:
	@:

Interestingly, there are four ways to assign variables in Make. The 'Set If Absent' way of variable declaration and initialization is quite handy. Funny though: Perl, which is known for its brevity, does not have such a language construct.
# Lazy Set. Value is expanded and set when used.
FOO = bar

# Immediate Set. Value is set when declared.
FOO := bar

# Set If Absent.
FOO ?= bar

# Append.
FOO += bar

Write it down, make it happen. Never underestimate the power of writing. Sometimes the pen is mightier than the sword.

'git commit --allow-empty'. My goodness! I wasn't aware this option existed in Git. How many times have I adjusted a space just to create an empty dummy commit? While we're on Git, if you seem to have "misused" it somehow, there are many ways to recover.
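A minimal example in a throwaway repository (the commit message here is arbitrary; empty commits are handy for things like triggering CI builds):

```shell
# An empty commit: no staged changes needed.
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com && git config user.name you
git commit --allow-empty -m "trigger rebuild"
git log --oneline      # shows the empty commit
```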

Web development is a layer upon layer upon layer of abstraction hacks? I firmly believe so. It's messy, plagued with multiple choices, and feels like the wild, wild west. HN user meredydd mentioned that a modern web application today consists of five programming languages and three frameworks. Interestingly, I never realized there are so many choices. Maybe future Javascript, ES2016, can reduce that paradox of choice by standardizing on the same language for frontend and backend, as in isomorphic Javascript? But that also raises another interesting question: is web development a constant rewrite of existing applications to newer technologies?

This Week I Learned - 2016 Week 35

Last week post or you might want to check out the whole series. As usual, some findings around the Internet.

September. It's almost at the end of the third quarter of the year. What have you done for the past eight months? No energy to do anything else after work? Switch. Do important things before work instead. This means you'll need to be a morning person and change your sleeping habits. Let's see how this goes. One small thing at a time but in a habitual and persistent way.

Everyone should be inspired by Norman Borlaug. We need more people like him instead of psychopaths who want to see the world burn.

How do you make a copy of another hash in Perl? Easy: for a shallow copy, just use the syntax below. Or simply use the Clone module if you have a deeply nested hash.
my %copy = %$hash;

Two ways of using grep in Perl, which I always confuse: first by expression, and second by code block. Examples as shown.
if (grep /a/, @cities) {..}

if (grep { length($_) == 6 } @cities) {..}

The strange case of Perl's DBI returning 0e0 (considered as true zero) if zero rows were affected.
sub do {
    my($dbh, $statement, $attr, @bind_values) = @_;
    my $sth = $dbh->prepare($statement, $attr) or return undef;
    $sth->execute(@bind_values) or return undef;
    my $rows = $sth->rows;
    ($rows == 0) ? "0E0" : $rows; # always return true if no error
}

"MySQL error 1449: The user specified as a definer does not exist." How annoying and rather inconvenient for me.

What are Git's caret (^) and tilde (~)?
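In short: `~N` walks back N commits, always following the first parent, while `^N` picks the N-th parent of a merge commit. A toy history to poke at (branch and commit names are made up):

```shell
# Toy history: three commits, plus a side branch merged back in.
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com && git config user.name you
for i in 1 2 3; do echo $i > f && git add f && git commit -qm "c$i"; done
git checkout -qb side HEAD~2               # branch off the first commit
echo s > g && git add g && git commit -qm "side"
git checkout -q -
git merge -q -m "merge" side

git rev-parse HEAD^2     # second parent of the merge: the tip of 'side'
git rev-parse HEAD~2     # two first-parent steps back: commit c2
git rev-parse 'HEAD^^'   # ^ applied twice, same commit as HEAD~2
```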

This Week I Learned - 2016 Week 29

Looking back to the last week post or you might want to check out the whole series.

One of the issues for beginners using Perl is understanding and differentiating the referencing and dereferencing syntax used by the different data types, especially arrays and hashes. Inspired by this site, the table below (generated by tablesgenerator) shows the different ways to instantiate, reference, dereference, and access these data types.

                                $scalar             @list                     %hash                     FILE
Instantiating it                $scalar = "a";      @list = ("a", "b");       %hash = ("a" => "b");     -
Instantiating a reference to it $ref = \"a";        $ref = ["a", "b"];        $ref = {"a" => "b"};      -
Referencing it                  $ref = \$scalar;    $ref = \@list;            $ref = \%hash;            $ref = \*FILE;
Dereferencing it                $$ref or ${$ref}    @$ref or @{$ref}          %$ref or %{$ref}          <$ref>
Accessing an element            -                   ${$ref}[1] or $ref->[1]   ${$ref}{a} or $ref->{a}   -

Git, the rebasing workflow. Better still, understand how Git works and learn some Git branching, visually.
$ git fetch
$ git rebase origin/master
$ git checkout master
$ git merge insert_awesome_topical_branch_name_here
$ git push origin master
$ git branch -d insert_awesome_topical_branch_name_here

Kimchi, web interface to KVM. Didn't realize this exists.

Good HN discussion on creating productive habits. Some of the interesting notes are:
  • Appreciate and be grateful for what you have, and stop caring about things that make you unhappy.
  • Use a chess clock to remind you of the time you've spent doing something else.
  • Set an expected time for a task, as work fills the time you've allotted to it.
  • Meditation helps with focus.
  • Just get started; leave no excuses not to start. You finish a task by starting.
  • Habit formation, through daily small steps (in other words, easy ones) until it's ingrained in you, which is obtained through persistence and discipline. Remember: habit > inspiration.
  • Complete something early in the morning. Something simple. Apply that mindset to your whole day. Also known as a pre-game routine.
  • Eliminate the inessential. Minimize and focus on the important things. Less is more.

Similarly, another HN discussion: can't concentrate on tasks?

  • Low dopamine perhaps? Sleep and eat well. Take care of your mental health as well.
  • Better deadline management.
  • Morning is the best time to work, as your glucose level is high when you start your day.
  • You will need a deep work environment.

This Week I Learned - 2016 Week 28

Looking back to the last week post or you might want to check out the whole series.

Great rule when picking up any technologies for your development stack.
If a project is innovative in a business sense, then choose a boring technology. If it is boring in a business sense, then choose an interesting technology.

Newscombinator's best-of bookmarks. Every link is worth your precious time.

Caching your GitHub password in Git. Seriously, do this if you commit early and commit often to a remote repository.
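A sketch of the relevant configuration for HTTPS remotes; the one-hour timeout is an arbitrary example (the default is 15 minutes):

```shell
# Keep credentials in memory for an hour instead of prompting each time.
git config --global credential.helper 'cache --timeout=3600'
```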

The Golden Age of Autodidacts. Don't be a passive learner; be a self-directed one instead. Knowledge workers like programmers should always improve their learning skills. Don't let the feeling of inadequacy stop you; incorporate purpose or meaning into your learning. Start analyzing your learning and work patterns. Adapt and adjust. It's never too late to start anything.

This Week I Learned - 2016 Week 26

Last week post or the whole series.

What's the difference between the Git config options for 'push.default'? To prevent yourself from accidentally pushing local branches over the wrong remote branches, stick with the 'simple' behaviour of 'git push'.
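Setting it explicitly (since Git 2.0, 'simple' is already the default):

```shell
# Push only the current branch, and only to its matching upstream.
git config --global push.default simple
```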

Why do you need to support Perl's PSGI?

One of the dilemmas faced by any programmer: what should I program? (via Slashdot) Someone in the forum joked, "I know how to post a comment, but I don't know what to say." Funny indeed, but true as well. Yes, you can learn one programming language per year, but in the end, it will be quite a waste if nothing is created.

12 years of web programming (via Reddit). Sad but true. Layer upon layer upon layer of abstraction which, in the end, just produces HTML.

Another approach to subroutine parameter validation in Perl. This is from REST::Client; "Tim Toady" at work here, using the Carp module.
croak "REST::Client exception: First argument to request must be one of GET, PUT, POST, DELETE, OPTIONS, HEAD" unless $method =~ /^(get|put|post|delete|options|head)$/i;
croak "REST::Client exception: Must provide a url to $method" unless $url;
croak "REST::Client exception: headers must be presented as a hashref" if $headers && ref $headers ne 'HASH';

This Week I Learned - 2016 Week 14

Last week post or the whole series.

#1 Replace Git Bash with MinTTY. Even though you can run Bash on Ubuntu on Windows right now, the most acceptable way (without using the dreadful Windows command line) before this was through Cygwin and MinTTY. Don't like MinTTY? Well, you've got Babun and MSYS2, both based on Cygwin. But still, nothing beats a Vagrant emulated environment.

#2 12 years, 12 lessons working at ThoughtWorks. (HN thread, Reddit thread) Some beg to differ. His retrospective team approach, especially the four key questions, should be applied by any software team. Note that ThoughtWorks is both a software house and a consulting firm.

#3 BPF Compiler Collection. Efficient Linux kernel tracing and performance analysis. You can read the docs and try it out. Only for Linux kernel 4.1 and above though. A complement to Brendan Gregg's Linux performance material, but with a different approach.

#4 Bret Victor's bookshelf. Some people are just prolific book readers. I always love his idea of reactive documents, an implementation of his concept of Explorable Explanations.

#5 Startups in Montréal. E14N is the only one I'm aware of. Anyway, the discussion at HN about the place is far more interesting. Language racism is real and alive there, culturally and systematically forced upon you.

#6 Effective code review, or fault-finding and blame? Why do you need code review in the first place if trivial matters such as coding conventions still cannot be properly enforced? Note that tools exist to fix most of these issues, and it's a no-brainer to rectify them (it's just a command away). The root cause is still the lack of a healthy culture that values quality over faster delivery. Or maybe the software industry itself does not promote integrity (Lobsters thread)? Or maybe we applied the wrong approach?

#7 perlootut - Object-Oriented Programming in Perl Tutorial. Holy macaroni! I never realized that Perl's built-in object-oriented feature is so limited. In other words, an object in Perl is a glorified hash. Yes, you have to write your own classes from scratch!

#8 How to start gnome-terminal in fullscreen. Nobody bothered to add or enable this feature as a sensible default, and you have to resort to multiple workarounds to get it to work. While I can understand reducing the UI clutter (or dumbing down) in GNOME, does nobody actually use gnome-terminal in fullscreen mode? It seems that GKH also has issues with gnome-terminal itself.

This Week I Learned - 2016 Week 11

Last week post or the whole series.

#1 Undoing a git rebase. I made a mistake while 'fixup'-ing a previous commit during rebasing. Instead of trying to fix it, you might as well undo the rebase through these two commands, provided you haven't done anything else beforehand.
$ git reset --hard ORIG_HEAD
$ git rebase --abort

#2 NextBug, a bug recommender for Bugzilla based on textual similarity. It's rare to find an interesting academic project with immediate impact on the industry. This makes the developer aware of the context of the issue being looked into. Video presentation of the tool as well as published papers here, here, and here.

#3 Journal of Software Engineering Research and Development. A surprising find, especially this paper: Patch rejection in Firefox: negative reviews, backouts, and issue reopening. There seems to be a lot of interesting research done in the field of Empirical Software Engineering (ESE). The most prolific group in the industry in this field is most likely the Microsoft ESE group.

#4 Open Source Society University. Is pursuing a typical degree in Computer Science, compared to being self-taught, a sensible choice these days? Not anymore; but unfortunately, a degree is the minimum requirement if you need to work overseas and get past the Human Resources department.

#5 Pollen, a book publishing system written in Racket. Right, another publishing system in another exotic programming language. Why not? I've had enough of Sphinx, anyway.

#6 Lumen, a PHP micro-framework based on Laravel. Yes, another PHP micro-frameworks.

#7 pkgr, create DEB or RPM package for Ruby, NodeJS, or Go application.

This Week I Learned - 2016 Week 06

Previous post.

#1 PatternCraft. Learning design patterns through StarCraft. Never underestimate the importance of software metaphors in abstracting software engineering concepts.

#2 Ask HN: Best curated newsletters? Need a way to reduce your time on the net but still fear missing out? Pick your favourite curated newsletters. Cron.weekly seems to have plenty of links I've found interesting if you're into system administration. Mandarin Weekly caught my attention as well.

#3 How Git merging turns you into a GITar Hero. Till today, I still don't understand why developers fail to see the benefit of Git rebasing. Maybe the complexity of the merged trees indicates productivity or a sense of accomplishment? You know, software engineers tend to over-analyze and over-engineer.

#4 Linux Performance Analysis in 60,000 Milliseconds. Using the uptime, dmesg, vmstat, mpstat, pidstat, iostat, free, sar, and top commands, you can get an overview of the resource usage of a system. Don't want to go through the hassle of all these commands? Just use Glances, a web or console-based monitoring tool written in Python. Or perhaps htop, an interactive process viewer, or iotop, a disk I/O monitoring tool.

#5 Ping Sweep. A fun activity to do with nephews during CNY. We all learned how to find all the available hosts connected to the Access Point (AP). From the list of IP addresses, we divided these hosts into mobile and computing devices. They had fun scanning the network, where they both overloaded the Wifi router by "nmapping" it. The seed of learning has been planted; it's really up to them to explore further. Hopefully, by the next CNY, they will have moved even further ahead and know which particular field in IT they want to venture into.

#6 Janice Kaplan: "The Gratitude Diaries". It's time to reflect on and appreciate what we have and where we are. How? Keep a gratitude journal.

#7 Today I Learned (TIL) is a famous subreddit. For technology related (programming or system administration), there are TIL collections created by Josh Branchaud, hashrocket, Jake Worth, and thoughtbot.

This Week I Learned - 2016 Week 04

In case you missed it, last week's post.

#1 Shōwa Genroku Rakugo Shinjū (official site). Caught my attention with its unique and mature story line. Definitely way different from the regular shounen action anime. Basically a story about the journey of an apprentice rakugo storyteller. The fast paced dialogues and art style reminded me of The Tatami Galaxy (Reddit discussion).

#2 Mobile Suit Gundam: The Origin (official site). Probably the anime series I like best after Mobile Suit Gundam: The 08th MS Team in the Gundam franchise. Love the old school animation and original character designs. For some amusement, there is a good discussion on the worst Gundam protagonist.

#3 Social media friends are mostly fake (Reddit discussion). I agree with one of the comments: there is a distinctive difference between a contact and a real friend. You should treat every social network as a contact list for networking purposes, nothing more and nothing less. Real friends should be interacted with in physical life.

#4 Empanada. So this is the "Mat Salleh" (Portuguese, to be exact) name for the Malaysian snack called "karipap" or "curry puff". Knowing this makes me crave Empanadas, especially the large ones filled with sweet potato, chicken meat, and curry spice.

#5 The Bookbinder. Remember how, before you submit your final thesis or dissertation, you need to bind it with a fabric or fake leather cover and gold foil lettering or seal? The video illustrates the step-by-step process of doing it.

#6 How to create and apply patch in Git. Till today, I still can't remember how to do it properly.
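For reference, one common round trip is `git format-patch` plus `git am`; a minimal sketch in a throwaway setup (paths and commit messages are arbitrary examples):

```shell
# Toy example: export the latest commit as a patch file, then apply it
# (with author and message intact) to a second clone.
cd "$(mktemp -d)"
git init -q src && cd src
git config user.email you@example.com && git config user.name you
echo one > file && git add file && git commit -qm "first"
git clone -q . ../dst
echo two >> file && git commit -qam "second"
git format-patch -1 HEAD           # writes 0001-second.patch

cd ../dst
git config user.email you@example.com && git config user.name you
git am ../src/0001-second.patch    # history now includes "second"
```

For uncommitted changes, `git diff > some.patch` plus `git apply some.patch` works too, but without preserving authorship.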

#7 Conversational Commerce (HN discussion). I've noticed that social networks are slowly being replaced by social messaging. Are we going back to the days when ICQ/AIM and IRC were popular?

#8 PostgreSQL Query Plan Visualization (HN discussion). The most aesthetic visualization of PostgreSQL's execution plan for a SQL statement through the EXPLAIN command (more explanation on its usage). It is also inspired by another tool. Unfortunately, both are web-based tools, which sometimes isn't applicable if you have sensitive SQL queries that should remain confidential.

Swift in Fedora 24 (Rawhide)

Swift, the language developed by Apple that is set to replace Objective-C, was recently open sourced. However, the existing binaries are only available for Ubuntu and Mac OS. Hence, for a Fedora user like myself, the only option is to install it through source code compilation.

First, install all the necessary packages.
$ sudo dnf install git cmake ninja-build clang uuid-devel libuuid-devel libicu-devel libbsd-devel libbsd-devel libedit-devel libxml2-devel libsqlite3-devel swig python-devel ncurses-devel pkgconfig

Next, create our working folder.
$ mkdir swift-lang

Clone the minimum repositories to build Swift.
$ git clone swift
$ git clone clang
$ git clone cmark
$ git clone llvm

If you have a slow internet connection and experience disconnections during cloning, it's best to clone partially; otherwise, you have to restart from the beginning again.
$ git clone --depth 1 llvm
$ cd llvm
$ git fetch --unshallow

If you have a great Internet connection, you can proceed with the remaining repositories.
$ git clone lldb
$ git clone llbuild
$ git clone swiftpm
$ git clone
$ git clone

As Swift was configured to work on Ubuntu or Debian, you may encounter several issues during compilation. These are my workarounds.

/usr/bin/which: no ninja in ...
In Fedora, the Ninja Build binary is named 'ninja-build', but Swift's build script expects it to be 'ninja'. We create a symlink to bypass that.
$ sudo ln -s /usr/bin/ninja-build /usr/bin/ninja

Missing ioctl.h
During compilation, the ioctl.h header file was not found as the build script assumed it's located in '/usr/include/x86_64-linux-gnu' as shown below.
header "/usr/include/x86_64-linux-gnu/sys/ioctl.h"

Temporary workaround is to symlink the folder that contains these files.
$ sudo mkdir -p /usr/include/x86_64-linux-gnu/
$ sudo ln -s /usr/include/sys/ /usr/include/x86_64-linux-gnu/sys

pod2man conversion failure
The 'pod2man' tool doesn't seem to convert the POD file to a man page, as illustrated in the error message below.
FAILED: cd /home/hojimi/Projects/swift-lang/build/Ninja-ReleaseAssert/swift-linux-x86_64/docs/tools && /usr/bin/pod2man --section 1 --center Swift\ Documentation --release --name swift --stderr /home/hojimi/Projects/swift-lang/swift/docs/tools/swift.pod > /home/hojimi/Projects/swift-lang/build/Ninja-ReleaseAssert/swift-linux-x86_64/docs/tools/swift.1
Can't open swift: No such file or directory at /usr/bin/pod2man line 68.

Based on this error message, the 'swift.pod' file has been corrupted and emptied. You'll need to restore it from the repository.
$ git checkout -- docs/tools/swift.pod

We need to disable the '--name swift' parameter. This is done by commenting out the 'MAN_FILE' variable.
$ sed -i 's/MAN_FILE/#MAN_FILE/g' swift/docs/tools/CMakeLists.txt

Once all the workarounds have been applied, we'll proceed with our compilation. You do not really need to set the '-j 4' parameter for parallel compilation (which can really reduce compilation time), as by default Ninja Build will compile code using all the available CPU cores. Also, we just want the release (-R) build without any debugging information attached.
$ ./swift/utils/build-script -R -j 4

Add our compiled binary path to the system path.
$ cd /build/Ninja-ReleaseAssert/swift-linux-x86_64/bin/
export PATH=$PATH:`pwd`

Lastly, check our compiled binary.
$ swift --version
Swift version 2.2-dev (LLVM 7bae82deaa, Clang 587b76f2f6, Swift 1171ed7081)
Target: x86_64-unknown-linux-gnu

Be warned, compilation can take quite a while, maybe several hours, depending on your machine specification and the type of build. I've noticed my lappy was burning hot, as all four CPU cores were running at 100% most of the time. It's recommended that during compilation you place your lappy near a fan or any place with good ventilation. See that the temperature exceeds the high threshold of 86.0°C.
$ sensors
Adapter: Virtual device
temp1:        +95.0°C  (crit = +98.0°C)

Adapter: ISA adapter
fan1:        4510 RPM

Adapter: ISA adapter
Physical id 0:  +97.0°C  (high = +86.0°C, crit = +100.0°C)
Core 0:         +94.0°C  (high = +86.0°C, crit = +100.0°C)
Core 1:         +97.0°C  (high = +86.0°C, crit = +100.0°C)

Under normal usage, the average temperature is roughly 50°C.
$ sensors
Adapter: Virtual device
temp1:        +46.0°C  (crit = +98.0°C)

Adapter: ISA adapter
fan1:        3525 RPM

Adapter: ISA adapter
Physical id 0:  +49.0°C  (high = +86.0°C, crit = +100.0°C)
Core 0:         +49.0°C  (high = +86.0°C, crit = +100.0°C)
Core 1:         +45.0°C  (high = +86.0°C, crit = +100.0°C)

Switching Between Different Commits in Git

PyVim, an implementation of the Vim editor in Python, caught my attention while browsing HackerNews recently. After trying it out through an installation with Python's pip installer, I've decided to install the latest version from its GitHub repository instead.

Before that, let's set up the Python Virtual Environment.
$ cd /tmp
$ mkdir pyvim
mkdir: created directory ‘pyvim’

$ cd pyvim/
$ virtualenv -p /usr/bin/python2.7 venv
Running virtualenv with interpreter /usr/bin/python2.7
New python executable in venv/bin/python2.7
Also creating executable in venv/bin/python
Installing setuptools, pip...done.

$ source venv/bin/activate
(venv)$ which python

Clone the PyVim Git repository into the folder with the Virtual Environment you've created in the previous step.
(venv)$ git clone
Cloning into 'pyvim'...
remote: Counting objects: 196, done.
remote: Compressing objects: 100% (51/51), done.
remote: Total 196 (delta 29), reused 0 (delta 0), pack-reused 143
Receiving objects: 100% (196/196), 597.72 KiB | 165.00 KiB/s, done.
Resolving deltas: 100% (93/93), done.
Checking connectivity... done.

You'll obtain the directory structure below.
(venv)$ tree -L 2
├── pyvim
│   ├── docs
│   ├── examples
│   ├── LICENSE
│   ├── pyvim
│   ├── README.rst
│   ├──
│   └── tests
└── venv
    ├── bin
    ├── include
    ├── lib
    └── lib64 -> lib

10 directories, 4 files

Next, install the latest PyVim and all the necessary Python packages within the Virtual Environment.
(venv)$ cd pyvim
$ python install
Finished processing dependencies for pyvim==0.0.2

Run the PyVim program; the output below shows that the latest committed version is broken.
(venv)$ pyvim --help
Traceback (most recent call last):
  File "/tmp/pyvim/venv/bin/pyvim", line 9, in 
    load_entry_point('pyvim==0.0.2', 'console_scripts', 'pyvim')()
  File "/tmp/pyvim/venv/lib/python2.7/site-packages/pkg_resources/", line 519, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/tmp/pyvim/venv/lib/python2.7/site-packages/pkg_resources/", line 2630, in load_entry_point
    return ep.load()
  File "/tmp/pyvim/venv/lib/python2.7/site-packages/pkg_resources/", line 2310, in load
    return self.resolve()
  File "/tmp/pyvim/venv/lib/python2.7/site-packages/pkg_resources/", line 2316, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/tmp/pyvim/venv/lib/python2.7/site-packages/pyvim-0.0.2-py2.7.egg/pyvim/entry_points/", line 17, in 
  File "/tmp/pyvim/venv/lib/python2.7/site-packages/pyvim-0.0.2-py2.7.egg/pyvim/", line 27, in 
  File "/tmp/pyvim/venv/lib/python2.7/site-packages/pyvim-0.0.2-py2.7.egg/pyvim/", line 17, in 
ImportError: No module named reactive

Let's look for a tagged stable working version, but it seems the author did not create any tags.
(venv)$ git tag

Since there are no tags, we'll need to find the hash of the last stable commit; in our case, that's version 0.0.2. The results are shown using the git log command with summarized output.
(venv)$ git log --oneline --decorate | cut -c -80
1ec47f1 (HEAD, origin/master, origin/HEAD, master) Command functions rename
d842f06 add docopt to install_requires in
1fdd937 Override ControlT from prompt-toolkit: don't swap characters before curs
c944a28 Implemented the :cq command.
eaa4b1e Fix: use accepts_force also for bw/bd
4920b74 Added :bd as keybinding to buffer close
f179bd6 Implemented scroll offset.
6c160ce Show 'No \! allowed' when used for commands not supporting it.
b1d9813 Fix python 3/2 compatibility for urllib.
892188c Fixed typo in README.txt
2409ad7 Some rephrasing in the README.
40cfe66 Reload option for :edit and :open
9bb4975 Added ':open' as an alias for ':edit'.
e33db19 Added ':h' alias for ':help'
a010ea8 Implemented ControlD and ControlU key bindings, for scrolling half a pag
39c72b1 Implemented ControlE and ControlY key bindings
ad880c1 Auto closes new/empty buffers when they are hidden. This solves the :q i
082ce60 Added accept_force parameter to commands decorator. 'bp'/'bn' now also a
dfed3a3 Fix a bug where a user could leave a buffer with unsaved changes by issu
1ff1bac fix ctrl-f shortcut
5369f5d Abstraction of I/O backends. Now it is possible to open .gz files and ht
75d3a3b Mention alternatives in README.rst
10dcb2d Added ControlW n/v key bindings for splitting windows.
d13f5e6 Added PageUp/PageDown key bindings.
4009f8b New screenshot for cjk characters.
34b6175 Fixed NameErrors in .pyvimrc example.
cc0b333 Adding shorthands for split and vsplit
78c2225 Pypi release 0.0.2 -- (0.0.1 release failed)
5083d0b Pypy release 0.0.1
df71609 Usable pyvim version. - Layouts: horizontal/vertical splits + tabs. - Ma
fb129f5 Initial ptvim version.
a60a0b6 Initial commit

From the result above, commit 78c2225 is the first public release. Let's switch our HEAD to that commit.
(venv)$ git checkout 78c2225
Note: checking out '78c2225'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at 78c2225... Pypi release 0.0.2 -- (0.0.1 release failed)

Re-install and re-run the program again. It seems this version works without any issues.
(venv)$ python install
(venv)$ pyvim --help
pyvim: Pure Python Vim clone.
    pyvim [-p] [-o] [-O] [-u ] [...]

    -p           : Open files in tab pages.
    -o           : Split horizontally.
    -O           : Split vertically.
    -u  : Use this .pyvimrc file instead.

To reset the HEAD back to origin/master.
(venv) $ git reset --hard origin/master
HEAD is now at 1ec47f1 Command functions rename

Confirm we're at the latest HEAD through the git log command.
(venv) $ git log --oneline --decorate -1
1ec47f1 (HEAD, origin/master, origin/HEAD, master) Command functions rename

Instead of searching through the log, we can tag a particular commit.
(venv)$ git tag -a v0.0.2 -m "Release 0.0.2" 78c2225

Let's check again through the git log and git tag commands.
(venv) $ git log --oneline --decorate | grep 'HEAD\|tag'
1ec47f1 (HEAD, origin/master, origin/HEAD, master) Command functions rename
78c2225 (tag: v0.0.2) Pypi release 0.0.2 -- (0.0.1 release failed)

$ git tag

Instead of using a particular commit hash, we can switch directly by using the tag name.
$ git checkout v0.0.2
Previous HEAD position was 1ec47f1... Command functions rename
HEAD is now at 78c2225... Pypi release 0.0.2 -- (0.0.1 release failed)

$ git status
HEAD detached at v0.0.2
nothing to commit, working directory clean

Using Multiple Drush Version For All Users

Drush is a wonderful command line utility for managing Drupal sites. However, on some occasions, you will need to install different Drush versions on your local development machine to test out different features.

Following the official installation guide, we're going to set up a globally accessible Drush installation. The benefit of this method is that any user can run the Drush command. Note that I'm currently using Fedora Rawhide (F22), but this should be applicable to all GNU/Linux distros.

First, make sure you've already installed the Git version control tool. If not, just install it.
$ sudo yum install git php php-pear
$ sudo pear install Console_Table

Next, we'll clone Drush from its GitHub repository. Rather than following the convention of installing it in the /usr/local directory, I opt for the /opt directory to differentiate packages installed by the distribution from those installed by myself, although the Drush command still depends on the PHP binary from the distribution.
$ sudo git clone /opt/drush
Cloning into '/opt/drush'...
remote: Counting objects: 29817, done.
remote: Compressing objects: 100% (17/17), done.
remote: Total 29817 (delta 6), reused 0 (delta 0), pack-reused 29798
Receiving objects: 100% (29817/29817), 11.04 MiB | 1.75 MiB/s, done.
Resolving deltas: 100% (16973/16973), done.
Checking connectivity... done.

Following that, we'll need to find the available Drush versions. This is done through the git tag command.
$ cd /opt/drush
$ git tag

Let's try version 6.5 first, using the git checkout command.
$ sudo git checkout 6.5.0                                                                                                  
Note: checking out '6.5.0'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at 0a7918a... Prep for 6.5.0

Later, we need to create a soft link to the drush command and verify that our link works. Instead of using /usr/local/bin, we'll put the symbolic link in the /usr/bin directory. More on this later.
$ sudo ln -s /opt/drush/drush /usr/bin/drush
$ which drush

Checking our Drush version.
$ drush --version
 Drush Version   :  6.5.0 

To switch to another Drush version, for example version 5.1, we'll use the git checkout command again.
$ sudo git checkout 5.1.0
Previous HEAD position was 0a7918a... Prep for 6.5.0
HEAD is now at 1583e0d... Prep for 5.1

$ drush --version
drush version 5.1

How about the latest version, 7? As shown below, version 7 uses Composer to install all its dependencies.
$ sudo git checkout 7.0.0-alpha9
Previous HEAD position was 1583e0d... Prep for 5.1
HEAD is now at f10919a... Prep for 7.0.0-alpha9.

$ drush --version
Unable to load autoload.php. Drush now requires Composer in order to install its dependencies and autoload classes. Please see

Installation of Composer, the dependency manager for PHP, is straightforward, although executing a downloaded script directly is not recommended.
$ sudo mkdir /opt/composer
$ cd /opt/composer/

$ sudo sh -c 'curl -sS | php'
#!/usr/bin/env php
All settings correct for using Composer

Composer successfully installed to: /opt/composer/composer.phar
Use it: php composer.phar

Now, similarly, we'll create the symbolic link for composer.phar in the /usr/bin directory.
$ sudo ln -s /opt/composer/composer.phar /usr/bin/composer

$ composer --version
Composer version 1.0-dev (eadc167b121359986f542cc9cf976ecee3fcbf69) 2015-03-02 18:20:22

Why do we need to put it in the /usr/bin directory instead of /usr/local/bin? The reason is that the default $PATH for the sudo command, as shown below, does not include the /usr/local/bin directory.
$ sudo bash
# echo $PATH

Since we've installed Composer and satisfied Drush version 7's dependencies, we'll check our installation again.
$ cd /opt/drush
$ sudo composer install
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Warning: The lock file is not up to date with the latest changes in composer.json. You may be getting outdated dependencies. Run update to update them.

Lastly, check our Drush version again.
$ drush --version                                                                                                          
 Drush Version   :  7.0.0-alpha9 

Hence, by using Git, we can easily install and switch between different Drush versions within one installation.

Setting Up Git Repo Locally and Push to Remote

To be more specific, how do you set up a local Git repository with branches and tags and then later push it to a remote origin URL? It turned out to be quite simple. Steps as follows.

1. Setting up the remote central bare project repository.
$ mkdir project
$ cd project
$ git init --bare

For any central shared Git repository, use the --bare parameter, since pushing into a repository with a checked-out working directory is problematic. What's the difference between a normal and a bare repository? A normal repository contains a working directory plus the actual repository, the hidden .git folder, whereas a bare repository contains only the contents of the .git folder.

If you're coming from a Subversion background, you'll notice that the repository data and your checked-out repository have different tree layouts.
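The difference is easy to see on disk. A quick sketch (throwaway directories; the names are made up):

```shell
# Create one normal and one bare repository side by side (illustration only).
tmp=$(mktemp -d)
cd "$tmp"
git init -q normal        # working directory + hidden .git folder
git init -q --bare bare   # only the contents of .git, no working tree

ls -A normal              # prints: .git
ls bare                   # HEAD, config, objects, refs, ...

git -C normal rev-parse --is-bare-repository   # prints: false
git -C bare rev-parse --is-bare-repository     # prints: true
```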

2. Setting up your local project repository.
$ mkdir project
$ cd project
$ git init

3. Add in all your code, branches, or tags if necessary.

4. Set the remote origin URL.
$ git remote add origin [email protected]:foobar/project.git
$ git remote -v
origin  [email protected]:foobar/project.git (fetch)
origin  [email protected]:foobar/project.git (push)

5. Push all your branches and tags to the remote origin URL.
$ git push origin --tags
$ git push origin --all

Git Learning Progress

It's like one of those rare days where you find enjoyment and gain achievement through learning something new. As I'm slowly getting acquainted with Git, the more I use it, the more I understand how it's supposed to work. More on that in another post. Right now, regarding my skill in Git, on a scale of 1 to 10, I rate myself around 3.

Almost Daily Git Rebasing Workflow

It used to be cumbersome and frustrating when I first learned how to do a rebase, but these days, I'm slowly getting used to it. Yes, occasionally you still make mistakes, but branching is cheap and you can always recover from those mistakes. Let's look at my almost daily rebasing workflow. Typical steps as follows:

Getting the latest version of the remote master branch.
$ git fetch
$ git rebase origin/master
Current branch master is up to date.

Create a new topic or feature branch from the master branch. Make sure you're in the master branch.
$ git checkout master
Already on 'master'

$ git checkout -b feature-x
Switched to a new branch 'feature-x'

Let’s create some dummy commits.
$ touch foo1; git add foo1; git commit -m "foo1"
$ touch foo2; git add foo2; git commit -m "foo2"

Inspired by David Baumgold’s great rebasing guide, find the last commit where you first branched off the master branch to create the feature-x branch.
$ git merge-base feature-x master

Next, optionally, if you like to have small and frequent commits, you should always squash, reword, or fixup your local changes through interactive rebasing before rebasing against the remote or origin master branch.
$ git rebase -i 8454f7f3b1b9e224134d4336683597fb1ad290fa

Or, using a different syntax, if you want to go back a few commits before the current HEAD.
$ git rebase -i HEAD~2

Interactively rebase both dummy commits.
reword b2dabc0 foo1
fixup d4add26 foo2

[detached HEAD 6af5a09] Add feature-x
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 foo1
[detached HEAD 7994cf7] Add feature-x
 2 files changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 foo1
 create mode 100644 foo2
Successfully rebased and updated refs/heads/feature-x.

If you realize that you’ve made a mistake after a successful rebasing, you can always undo it.
$ git reset --hard ORIG_HEAD

Rebasing against the master branch. In other words, changes in your feature-x branch will be reapplied on top of the latest changes in the master branch. Often you will need to fix or skip conflicts (something I need to practice more, as I always mess up the merging).
$ git rebase origin/master

Optional steps, only if you encounter conflicts.
$ git rebase --skip
$ git mergetool
$ git rebase --continue
$ git rebase --abort

If you've already published your changes, in this case, if the feature-x branch has been pushed to the remote server before, you’ll need to force-push your changes. Although some say a forced update is bad, it's a compulsory step after rebasing a published topic/feature branch against the master branch.
$ git push -f origin feature-x

Understanding Git Rebase

For a Git beginner like me, Git rebase seems cryptic and hard to understand. The one-line help description of the command states that this tool will "Forward-port local commits to the updated upstream head". Forward-port? Local commits? Updated upstream head? Sounds confusing? Yup, me too, even after I read the definitions and explanations of these terms.

After several days of googling and constantly reading through online tutorials and manuals, I finally managed to grasp a basic understanding of how and why Git rebase works, mostly from the excellent CERN guide to Git and Charles Duan's guide to Git.

To summarize my understanding of Git rebase:
  1. Rebasing is about managing commit history / log
  2. It's an alternative to conventional merging, but more refined
  3. There are two scenarios where you will need rebasing:
    • To squash or combine our local commits before merging with remote branches
    • To keep your local branch up-to-date with remote branches without merging
We will deepen our understanding by going through a step-by-step guide of doing a rebase for the above-mentioned scenarios. Before that, let's set up our Git as follows. You can skip the user name and email if you've already done so.
$ git config --global user.name "John Doe"
$ git config --global user.email [email protected]

$ git config --global color.ui auto
$ git config --global color.branch auto
$ git config --global color.diff auto
$ git config --global color.status auto
$ git config --global alias.ll 'log --oneline --decorate --graph --all'

Let's create a local Git repository before we can proceed with rebasing.
$ mkdir -p /tmp/foobar
$ cd /tmp/foobar
$ git init
Initialized empty Git repository in /tmp/foobar/.git/

Create a few changesets, sets of modified files, in the master branch. We're using the naming convention of [branch name]c[sequence] for each file name that represents a changeset.
$ touch mc1; git add mc1; git commit -m "mc1"
$ touch mc2; git add mc2; git commit -m "mc2"
$ touch mc3; git add mc3; git commit -m "mc3"

Visualize our changes so far using the alias we've created.
$ git ll
* 7665913 (HEAD, master) mc3
* 9ef4878 mc2
* 2f8d692 mc1

Scenario 1 : Squashing Local Commits

Imagine that you want to add a new feature; surely you're going to create a new branch, let's call it new-feature, and work on it locally (on your development machine). Let's try that.
$ git checkout -b new-feature
Switched to a new branch 'new-feature'

Check our log and the available branches. If you notice, the current HEAD, the new-feature branch, and the master branch all point to the same hash.
$ git ll
* 7665913 (HEAD, new-feature, master) mc3
* 9ef4878 mc2
* 2f8d692 mc1

$ git branch -a
* new-feature

A feature is like a task which we can further break down into sub-tasks. Also, it is good practice to commit early and commit often, as you can break a problem down into a set of smaller problems and tackle them one by one.

Let's try to simulate that in the new-feature branch. Each nfX is a sub-task needed to implement the new feature.
$ touch nf1; git add nf1; git commit -m "nf1"
$ touch nf2; git add nf2; git commit -m "nf2"
$ touch nf3; git add nf3; git commit -m "nf3"
$ touch nf4; git add nf4; git commit -m "nf4"
$ touch nf5; git add nf5; git commit -m "nf5"

Check the history log again. The new-feature branch is ahead of the master branch by 5 commits.
$ git ll
* 466b238 (HEAD, new-feature) nf5
* 61f6e91 nf4
* 7f80d86 nf3
* bb93e3a nf2
* 65d8d8a nf1
* 7665913 (master) mc3
* 9ef4878 mc2
* 2f8d692 mc1

Instead of merging all those sub-task commits (useful to you but not to others) into the main branch, a better approach is to squash or consolidate them into one single commit through git rebase.
# last 5 commits
$ git rebase -i HEAD~5

# if the master branch or other branches are behind your new-feature branch
$ git rebase -i master

The previous command will start the interactive mode for us to squash all our related commits and group them into one.
pick d69307e nf1
pick 4e9cd86 nf2
pick 6449f6a nf3
pick 6acfd6d nf4
pick f29e1db nf5

# Rebase 1245945..f29e1db onto 1245945
# Commands:
#  p, pick = use commit
#  r, reword = use commit, but edit the commit message
#  e, edit = use commit, but stop for amending
#  s, squash = use commit, but meld into previous commit
#  f, fixup = like "squash", but discard this commit's log message
#  x, exec = run command (the rest of the line) using shell
# These lines can be re-ordered; they are executed from top to bottom.
# If you remove a line here THAT COMMIT WILL BE LOST.
# However, if you remove everything, the rebase will be aborted.
# Note that empty commits are commented out

Rearrange and amend the necessary actions for these related commits.
pick f29e1db nf5
squash 6acfd6d nf4
squash 6449f6a nf3
squash 4e9cd86 nf2
squash d69307e nf1

The next step is to summarize and rewrite all the commit messages as shown below.
# This is a combination of 5 commits.
# The first commit's message is:

# This is the 2nd commit message:


# This is the 3rd commit message:


# This is the 4th commit message:


# This is the 5th commit message:


# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# HEAD detached from 1245945
# You are currently editing a commit while rebasing branch 'new-feature' on '1245945'.
# Changes to be committed:
#   (use "git reset HEAD^1 ..." to unstage)
#       new file:   nf1
#       new file:   nf2
#       new file:   nf3
#       new file:   nf4
#       new file:   nf5

Here, we just summarize it as:
implement new-feature 

# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# HEAD detached from 1245945
# You are currently editing a commit while rebasing branch 'new-feature' on '1245945'.
# Changes to be committed:
#   (use "git reset HEAD^1 ..." to unstage)
#       new file:   nf1
#       new file:   nf2
#       new file:   nf3
#       new file:   nf4
#       new file:   nf5

Once successful, Git will show the result of the rebase.
[detached HEAD 82c66c9] implement new-feature
5 files changed, 0 insertions(+), 0 deletions(-)
create mode 100644 nf1
create mode 100644 nf2
create mode 100644 nf3
create mode 100644 nf4
create mode 100644 nf5
Successfully rebased and updated refs/heads/new-feature.

Check our history log again. Notice all those commits of nf1 till nf5 have been squashed or combined into a new commit, 82c66c9, and the new-feature branch is ahead of the master branch by 1 commit. Basically, we're using rebase to maintain a linear history.
$ git ll
* 82c66c9 (HEAD, new-feature) implement new-feature
* 1245945 (master) mc3
* 2e803fb mc2
* 885e8be mc1

Last step, merge our new feature into the master branch.
$ git checkout master

$ git merge new-feature
Updating 1245945..82c66c9
nf1 | 0
nf2 | 0
nf3 | 0
nf4 | 0
nf5 | 0
5 files changed, 0 insertions(+), 0 deletions(-)
create mode 100644 nf1
create mode 100644 nf2
create mode 100644 nf3
create mode 100644 nf4
create mode 100644 nf5

Checking our history log again.
$ git ll
* 82c66c9 (HEAD, new-feature, master) implement new-feature
* 1245945 mc3
* 2e803fb mc2
* 885e8be mc1
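As an aside, when the goal is to squash *everything* on the branch into a single commit (no picking and choosing), a soft reset to the branch point gives the same end result without the interactive editor. A sketch in a throwaway repository (file and branch names mirror the ones above):

```shell
# Rebuild a master branch and a new-feature branch (illustration only).
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git symbolic-ref HEAD refs/heads/master   # force the branch name 'master'
gc() { touch "$1"; git add "$1"; git -c user.name=demo -c [email protected] commit -q -m "$1"; }
gc mc1

git checkout -q -b new-feature
gc nf1; gc nf2; gc nf3

# Move the branch pointer back to the branch point, keeping changes staged...
git reset -q --soft master
# ...and record them all as one commit.
git -c user.name=demo -c [email protected] commit -q -m "implement new-feature"

git rev-list --count master..new-feature   # prints: 1
```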

Scenario 2 : Keep your local branch up-to-date

If you noticed in Scenario 1, the master branch stayed stagnant without any additional commits. What if, while we're developing on the branch, there are other commits merged into the master branch, such as other features or hotfixes?

Let's try again, but this time, we're going to create a hotfix branch and add a sample commit to fix an issue. Our commit in the hotfix branch is currently the HEAD and is ahead of the master branch by 1 commit.
$ git checkout -b hotfix
Switched to a new branch 'hotfix'

$ touch hf1; git add hf1; git commit -m "hf1"

$ git ll
* f229ff9 (HEAD, hotfix) hf1
* 82c66c9 (new-feature, master) implement new-feature
* 1245945 mc3
* 2e803fb mc2
* 885e8be mc1

During that period, some changes were committed to the master branch. Let's add a few commits to it as well. Checking our commit log again, you'll notice a divergence between the hotfix and master branches. In other words, we have a forked commit history.
$ git checkout master
$ touch mc4; git add mc4; git commit -m "mc4"
$ touch mc5; git add mc5; git commit -m "mc5"

$ git ll
* bbb1a2b (HEAD, master) mc5
* 4472d3e mc4
| * f229ff9 (hotfix) hf1
* 82c66c9 (new-feature) implement new-feature
* 1245945 mc3
* 2e803fb mc2
* 885e8be mc1

Before we proceed with any merging or rebasing, please make a copy of the current foobar folder. We're going to show the difference between merging with and without rebase.
$ cp -rv /tmp/foobar /tmp/foobar.orig

First, we try the merge without rebase. After merging, we're going to add one additional commit to make our commit log more meaningful.
$ git checkout master
Switched to branch 'master'

$ git merge hotfix
Merge made by the 'recursive' strategy.
hf1 | 0
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 hf1

$ touch mc6; git add mc6; git commit -m "mc6"

Pay attention to the commit log, which we're going to compare with the rebase method. Notice the additional merge commit 15ea73b as well as the forked history.
$ git ll
* 40fdd57 (HEAD, master) mc6
*   15ea73b Merge branch 'hotfix'
| * f229ff9 (hotfix) hf1
* | bbb1a2b mc5
* | 4472d3e mc4
* 82c66c9 (new-feature) implement new-feature
* 1245945 mc3
* 2e803fb mc2
* 885e8be mc1

Next, restore our snapshot of the repo from before merging the hotfix branch.
$ rm -rf /tmp/foobar
$ cp -rv /tmp/foobar.orig /tmp/foobar
$ cd /tmp/foobar

Continue, this time merging with rebase.
$ git checkout hotfix
Switched to branch 'hotfix'

$ git rebase master
First, rewinding head to replay your work on top of it...
Applying: hf1

$ git ll
* cfd2dae (HEAD, hotfix) hf1
* bbb1a2b (master) mc5
* 4472d3e mc4
* 82c66c9 (new-feature) implement new-feature
* 1245945 mc3
* 2e803fb mc2
* 885e8be mc1

$ git checkout master
Switched to branch 'master'

$ git merge hotfix
Updating bbb1a2b..cfd2dae
hf1 | 0
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 hf1

$ touch mc6; git add mc6; git commit -m "mc6"

Compared to the non-rebase merge, we obtain a linear history graph without an additional merge commit, and no forked history log either.
$ git ll
* e0615b2 (HEAD, master) mc6
* f8df51d (hotfix) hf1
* bbb1a2b mc5
* 4472d3e mc4
* 82c66c9 (new-feature) implement new-feature
* 1245945 mc3
* 2e803fb mc2
* 885e8be mc1

Comparing the history logs with and without rebase, I think I finally grok the need for Git rebase compared to typical merging.