Friday, January 10, 2014

Short update on lacking updates

Just as a short update for all of you:

I have followed the fate of many bloggers: right now my private and professional life keeps me busy. This will probably last until summer, so patience is your best friend right now.

I don't even have time to install my new Raspberry Pi with 512 MB RAM, but that will probably be the topic of my first new post. Hopefully, RetroShare 0.6 will be released by that time ;)

Thanks for your understanding!

Oh, and have a happy new year :)

Tuesday, February 12, 2013

Fix corrupted JPEGs made by the Samsung Galaxy S2

While organizing some of the pictures I took with my Samsung Galaxy S2, I've encountered one file that couldn't be opened. Being a nerd, I couldn't resist and had to investigate the issue. I think I've already spent more than two hours searching the not-so-all-knowing internet for solutions, but in the end it came down to using my brain and reading the EXIF specification (PDF).

tl;dr: The Samsung Galaxy S2 can occasionally create corrupted JPEGs, i.e., files that don't follow the specifications.

The Problem

Standard (linux) picture viewer applications would just say that they can't open the file. That's obviously not sufficient to get to the bottom of this, so I used GIMP and ImageMagick's convert, which both gave me the same information:
$ convert test.jpg out.jpg
convert: Corrupt JPEG data: 1072 extraneous bytes before marker 0xd8 `test.jpg' @ warning/jpeg.c/EmitMessage/231.
convert: Invalid JPEG file structure: two SOI markers `test.jpg' @ error/jpeg.c/EmitMessage/236.
convert: missing an image filename `out.jpg' @ error/convert.c/ConvertImageCommand/3011.
So, the two valuable pieces of information were:
1072 extraneous bytes before marker 0xd8
and
two SOI markers

The EXIF standard

The only helpful thing that googling those error messages brought up was the advice to use a hex editor. (Yay!)
My corrupt file starts with
FF D8 FF E1  00 0E 45 78  69 66 00 00  49 49 2A 00 ...
What you see here is a JPEG file (FF D8) followed by some EXIF information (FF E1). Using the EXIF specification (PDF), we learn that FF E1 marks the start of Application Segment 1 (APP1):
Offset (Hex)  Name                         Code (Hex)
0000          SOI (Start Of Image) Marker  FFD8
0002          APP1 Marker                  FFE1
0004          APP1 Length                  xxxx
0006          Identifier                   4578 6966 00 ("Exif"00)
000B          Pad                          00
000C          APP1 Body
Okay, so FFD8 is an SOI marker and the error message says that the file has two of them, which apparently is a bad thing. So I searched for another occurrence of FFD8 and found one at 0x442 = 1090. The decoder also complained about 1072 extraneous bytes before marker 0xd8, and that fits: the file starts with the SOI marker (2 bytes), the APP1 marker (2 bytes) and the declared 14-byte APP1 segment, i.e. 18 bytes, and 1090 - 18 = 1072. So, is the SOI marker here wrong?
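
By the way, you don't strictly need a hex editor for this kind of search; a shell one-liner along these lines (a sketch, not what I actually used) lists the offsets of all FFD8 occurrences:
# dump the file as one continuous hex string and list every occurrence of ffd8;
# grep -b reports offsets in hex characters, two per byte, so halve them to get
# byte offsets (and ignore odd results, those matches straddle a byte boundary)
xxd -p test.jpg | tr -d '\n' | grep -b -o ffd8
# 0:ffd8      -> byte 0x0, the SOI at the start of the file
# 2180:ffd8   -> byte 2180/2 = 1090 = 0x442, the second SOI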

A valid JPEG file

Since I didn't have the slightest idea of what exactly was wrong here, I opened another JPEG that works and was taken only minutes before the corrupt one. Comparing them by fast-switching between the console tabs (exploiting the brain's low-level visual processing and attention guidance is fun), I noticed three things:
  1. FFD8 can be found at the same position in both files, so that is not the problem.
  2. The first difference is in the APP1 length.
  3. The difference is huge!
See for yourselves:

Corrupt file:
FF D8 FF E1  00 0E 45 78  69 66 00 00  49 49 2A 00 ...
Valid file:
FF D8 FF E1  E0 42 45 78  69 66 00 00  49 49 2A 00 ...
The length of the APP1 segment in the corrupt file is only 0xE = 14? That should be far too small.

I then started randomly increasing the length in the corrupted file to see what error messages convert would give me, but that's more like being in a completely dark room with a metal bucket and throwing rocks until I hear that I've hit the bucket.
But let's see what is at the end of the APP1 segment in the valid file:
0xE042: FD CF FF D9 FF DB 00 84
At 0xE044, which is the segment length 0xE042 plus the two bytes of the SOI marker, we find FFD9. The EXIF specification tells us that this is the EOI (End Of Image) marker, followed by FFDB, the DQT (Define Quantization Table) marker (see Table 40 of the specification). As far as I can tell, everything is where it should be.

Overflow

Now back in the corrupt file, I searched for FFD9FFDB and found it at 0x10010. Do you see it already?
Minus the two bytes for the SOI marker, the length of the APP1 segment should be 0x1000E, which unfortunately can't be stored in only two bytes. What CAN be stored in two bytes is the lower 16 bits, 0x000E, which is exactly what we see as the length in the APP1 segment header. A classic integer overflow, the first one I've observed in the wild!
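
You can replay the truncation in any shell; the stored length is just the real length masked to 16 bits:
printf 'real: %#x  stored: %#x\n' $((0x1000E)) $((0x1000E & 0xFFFF))
# real: 0x1000e  stored: 0xe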

The EXIF specification is clear:
APP1 consists of the APP1 marker, Exif identifier code, and the attribute information itself. The size of APP1 including all these elements shall not exceed the 64 Kbytes specified in the JPEG standard.
Oops.

Solution

From my understanding, the APP1 segment contains the thumbnail at the end. I reckon that it can be recalculated and stored properly by most image processing applications, so let's try shortening the data there to get under 64 Kbytes. I removed 20 bytes directly before the FFD9FFDB, which yields a new APP1 segment length of 0x1000E - 0x14 = 0xFFFA, and stored this new length at 0x0004.

It seems like this worked! The JPEG can now be opened again without any errors, not even regarding the thumbnail, which my edit truncated and which is not so important to me.

This is the only time I've encountered this problem with pictures taken using my Samsung Galaxy S2, so this should be a one-time fix. If it happens again, I think I'll have to write a little script to do that for me.
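
In case I do, here is a rough sketch of what that script might look like (file names are placeholders, and it hard-codes the same 20-byte cut as the manual fix):
#!/bin/bash
# Sketch of an automated fix, assuming the exact corruption pattern analyzed
# above: an APP1 length that overflowed 16 bits, with the thumbnail's EOI
# marker (ffd9) directly followed by the DQT marker (ffdb). This is NOT a
# general-purpose JPEG repair tool.
in=corrupt.jpg
out=fixed.jpg

# byte offset of the first ffd9ffdb sequence (grep -b counts hex characters,
# two per byte, hence the division)
hexoff=$(xxd -p "$in" | tr -d '\n' | grep -b -o ffd9ffdb | head -n1 | cut -d: -f1)
eoi=$((hexoff / 2))

# the APP1 segment ends right after the EOI marker (eoi + 2); its length is
# counted from the length field at offset 0x4, and we cut 20 bytes before EOI
newlen=$((eoi + 2 - 4 - 20))

head -c $((eoi - 20)) "$in"  > "$out"    # everything up to the cut
tail -c +$((eoi + 1)) "$in" >> "$out"    # the EOI marker and the rest

# patch the two big-endian length bytes at offset 0x4
printf "$(printf '\\x%02x\\x%02x' $((newlen >> 8)) $((newlen & 0xFF)))" |
  dd of="$out" bs=1 seek=4 conv=notrunc 2>/dev/null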

Friday, February 8, 2013

Compiling RetroShare for the Raspberry Pi, revisited

With my emulated Raspberry Pi set up, I wanted to compile the newest version of RetroShare to check both whether the pseudo-cross-compilation actually works and whether my last how-to is still valid.

At the time of writing this, the RetroShare wiki gives these instructions:
sudo apt-get install -y g++ libbz2-dev libcunit1-dev libgnome-keyring-dev libgpg-error-dev libgpgme11-dev libprotobuf-dev libqt4-dev libssh-dev libssl-dev libupnp-dev libxss-dev qt4-qmake subversion
cd ~/
svn co svn://svn.code.sf.net/p/retroshare/code/trunk retroshare
cd ~/retroshare/libbitdht/src && qmake && make clean && make && \
cd ~/retroshare/openpgpsdk/src && qmake && make clean && make && \
cd ~/retroshare/libretroshare/src && qmake && make clean && \
cp ~/retroshare/libretroshare/src/Makefile ~/retroshare/libretroshare/src/Makefile.old &&\
cat ~/retroshare/libretroshare/src/Makefile | perl -pe 's/^(INCPATH[^\n].*)\n/$1 -I\/usr\/lib\/arm-linux-gnueabihf\/glib-2.0\/include\n/g' > ~/retroshare/libretroshare/src/Makefile.new &&\
mv ~/retroshare/libretroshare/src/Makefile.new ~/retroshare/libretroshare/src/Makefile &&\
make && \
cd ~/retroshare/retroshare-nogui/src && qmake && make clean && make && \
cd ~/retroshare/retroshare-gui/src && qmake && make clean && make
I want to play it safe and not use the latest version from svn, because the project is very active at the moment and I don't want to sit here wondering whether any compiler errors are my fault or the result of an incomplete commit.

So, download the latest version: http://sourceforge.net/projects/retroshare/files/RetroShare/0.5.4d/RetroShare-v0.5.4d.tar.gz

Problems with architectural chroot

First of all, it seems like the chroot isn't perfect. For instance, I can't get an internet connection, so wget fails, and I also can't sudo.

As a consequence, the installation of packages has to be done from within the QEMU environment. A lot has changed since version 0.5.4b, so I'll stick to what their wiki says about what we need to install.
sudo apt-get install -y g++ libbz2-dev libcunit1-dev libgnome-keyring-dev libgpg-error-dev libgpgme11-dev libprotobuf-dev libqt4-dev libssh-dev libssl-dev libupnp-dev libxss-dev qt4-qmake
I was about to say "Activate the swapfile", but we don't need it, because we have the RAM of our host linux machine.

Exit the QEMU environment now with sudo reboot.

Compiling RetroShare

Enter the architectural chroot like described in my last post.

Then, enter the development directory and start compiling. I've adapted the commands given above to fit my directory structure and increased the number of threads to make use of my i5's four cores.
cd ~/development/RetroShare-v0.5.4d/trunk/
cd libbitdht/src && qmake && make clean && make -j4 && \
cd ../../openpgpsdk/src && qmake && make clean && make -j4 && \
cd ../../libretroshare/src && qmake && make clean && \
cp Makefile Makefile.old &&\
cat Makefile | perl -pe 's/^(INCPATH[^\n].*)\n/$1 -I\/usr\/lib\/arm-linux-gnueabihf\/glib-2.0\/include\n/g' > Makefile.new &&\
mv Makefile.new Makefile &&\
make -j4 && \
cd ../../retroshare-nogui/src && qmake && make clean && make -j4 && \
cd ../../retroshare-gui/src && qmake && make clean && make -j4
Wow! That was incredibly fast! It worked like a charm and it only took 30 minutes. If this isn't a full-scale success, I don't know what is ;)

The only thing left to do now is to strip the executable
strip RetroShare
and copy it to the actual Raspberry Pi. It works like a charm!
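
For the copying, a plain scp does the job; the hostname and target path here are of course specific to my setup, adjust them to yours:
scp RetroShare pi@raspberrypi:/home/pi/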

A huge "Thank you" to the guys from RetroShare for making it easier and easier to compile this awesome piece of software for the Raspberry Pi!

Thursday, February 7, 2013

Faster compiling on an emulated Raspberry Pi on Linux

In my last article about RetroShare on the Raspberry Pi, I've written about my experiences ...

Unfortunately, the 256 MB RAM of my Raspi are barely enough to keep RetroShare running; even a simple 'sudo apt-get update' can crash it. The RetroShare project itself is very active at the moment, with bugs constantly being fixed and features added. But compiling RetroShare takes all the resources the Raspi has, so I thought about alternatives.

"Simple", I thought, "Emulation!"

Spoiler alert: Emulating a Raspberry Pi using QEMU is slow.

I have no experience with emulation whatsoever, so there is no real reason for choosing QEMU over other emulators like VirtualBox; I just found instructions and information for this approach first.

What we need to get started

The Emulator

My desktop PC is running Ubuntu 12.04, the rest of the system specifications shouldn't matter. (Yes, I know that this version is a bit outdated, but as a True Believer in Murphy's Law I tend to not change a running system.)

Installing QEMU

Install the package qemu-kvm-extras the way you usually install packages, e.g., using aptitude or
sudo apt-get install qemu-kvm-extras
This will also install some dependencies that are needed.

You will also need to download a QEMU-ready linux kernel for the Raspberry Pi, which you can do here: http://xecdesign.com/downloads/linux-qemu/kernel-qemu
Alternatively, you can compile your own kernel.

Preparing the environment

Create a directory in which our experiment will take place. I chose:
mkdir -p ~/development/raspberrypi-qemu

Raspbian Wheezy

I used the newest version of Raspbian Wheezy available at http://www.raspberrypi.org/downloads at the time of writing:

Torrent:          2012-12-16-wheezy-raspbian.zip.torrent
Direct download:  2012-12-16-wheezy-raspbian.zip
SHA-1:            514974a5fcbbbea02151d79a715741c2159d4b0a
Default login:    Username: pi / Password: raspberry

Download it and unpack the image file into the directory you prepared in the last step.

Plan A: Running an emulated Raspberry Pi

In the directory of the image, run the following command:
qemu-system-arm -kernel kernel-qemu -cpu arm1176 -m 256 -M versatilepb -no-reboot -serial stdio -append "root=/dev/sda2 panic=1" -hda 2012-12-16-wheezy-raspbian.img
The parameters have the following functions:
-kernel kernel-qemu
the QEMU-ready kernel we just downloaded
-cpu arm1176
the CPU we want to emulate (ARM1176JZF-S (700 MHz))
-m 256
how much RAM should be available (in MB)
-M versatilepb
the machine type to emulate
-no-reboot
exit instead of rebooting
-serial stdio
redirects the serial port to the standard I/O
-append "root=/dev/sda2 panic=1"
where to find the root partition, depends on the image
-hda 2012-12-16-wheezy-raspbian.img
what should be used as hard drive, in this case our image
Now, you might be tempted to simply increase the amount of available RAM, but it doesn't work that way: the versatilepb machine we emulate doesn't support more than 256 MB of RAM. On the other hand, we can circumvent the problems of creating a swap file on an SD card, because most likely the image is not located on one. Increasing the available memory is easy now, just add a sufficiently large swap file.
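
For example, with the same recipe as in my first compilation post (pick whatever size you like, 512 MB here):
sudo dd if=/dev/zero of=/swapfile bs=1M count=512
sudo mkswap /swapfile
sudo swapon /swapfile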

We also shouldn't forget to resize our image, because right now we only have about 200 MB of free space left. Again: There already are many articles on the net covering this, so I'll only quickly describe what I did.
  1. With QEMU not running, you can use qemu-img to resize an image:
    qemu-img resize 2012-12-16-wheezy-raspbian.img +1G
  2. For reasons I won't go into here, Raspbian's built-in functionality of growing the partition to fill the SD card won't work, so I did it the hard way.
  3. Boot your emulated Raspberry Pi again using QEMU
  4. Resize the partition using fdisk
    sudo fdisk /dev/sda
    It should look similar to this:
    Device Boot  Start       End    Blocks   Id  System
    /dev/sda1     8192    122879     57344    c  W95 FAT32 (LBA)
    /dev/sda2   122880   5885951   2881536   83  Linux
    You need to delete partition 2 and create it again with the same start, but this time with the highest allowed value for end; see the keystroke sketch after this list.
  5. Resize the filesystem using
    sudo resize2fs /dev/sda2
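
To spell out the fdisk part: this is roughly the dialogue, with the start sector 122880 taken from the listing above. Verify it with p on your own image before deleting anything:
sudo fdisk /dev/sda
# at the fdisk prompt:
#   p        print the partition table and note the start of /dev/sda2
#   d, 2     delete partition 2
#   n, p, 2  create a new primary partition 2
#   122880   same first sector as before
#   <Enter>  accept the default (largest possible) last sector
#   w        write the new table and exit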

It's too slow

Before we start compiling RetroShare again, let's check up on the speed. This is what I get from the QEMU Raspberry Pi:
pi@raspberrypi:~$ cat /proc/cpuinfo | grep MIPS
BogoMIPS        : 565.24
And this is from my actual Raspberry Pi
pi@raspberrypi:~$ cat /proc/cpuinfo | grep MIPS
BogoMIPS        : 697.95
So, my emulated raspi running on an Intel Core i5 is actually slower than the real raspi ... which is not what I wanted. I mean, okay, if I had the machine running for something else anyway, that wouldn't be a problem. But I still want it to be faster.

Plan B: Architectural chroot a.k.a. chroot Voodoo

Looking for solutions to speed up QEMU, I stumbled upon another approach: architectural chroot!

Now, I'm familiar with chroot (at least I thought so) and I've used it hundreds of times when my Ubuntu got f*cked up because of an update or some other stuff. And I remember the difficulties when I tried to chroot into a 64 bit system from a 32 bit Live-CD. But it seems like there is a way around this. Coincidentally, we're already close to what we need: a static version of QEMU.

Install it via apt-get (or build it yourself)
sudo apt-get install qemu-user-static
We need to mount the image using loopback, but since the image contains multiple partitions, we require kpartx
$ sudo kpartx -a -v 2012-12-16-wheezy-raspbian.img 
add map loop0p1 (252:8): 0 114688 linear /dev/loop0 8192
add map loop0p2 (252:9): 0 5763072 linear /dev/loop0 122880

$ sudo mkdir -p /mnt/temp
$ sudo mount /dev/mapper/loop0p2 /mnt/temp
Now, copy the static QEMU binary into the image and mount the special directories:
sudo cp /usr/bin/qemu-arm-static /mnt/temp/usr/bin
sudo mount -o bind /dev /mnt/temp/dev
sudo mount -o bind /proc /mnt/temp/proc
sudo mount -o bind /sys /mnt/temp/sys
Before we can enter the chroot environment, it's time for the magic! As root (a simple sudo won't work), run this (all in one line):
echo ':arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:' > /proc/sys/fs/binfmt_misc/register
This registers the static QEMU we copied as the interpreter for ARM binaries with the kernel. The path specified needs to be valid on both your linux machine and inside the Raspberry Pi environment, which is why we copied the binary there first.
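
The kernel exposes the registration under the name we registered (arm), so we can check that it took effect:
cat /proc/sys/fs/binfmt_misc/arm
# enabled
# interpreter /usr/bin/qemu-arm-static
# flags:
# offset 0
# magic 7f454c46...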

Now we can chroot:
sudo chroot /mnt/temp
Did it work?
$ uname -a
Linux localhost 2.6.32 #58-Ubuntu SMP Thu Jan 24 15:28:10 UTC 2013 armv7l GNU/Linux

Hooray! Welcome to your (much faster) Raspberry Pi environment! Now, let's do some compiling ;)

Cleaning up

To avoid inconsistencies, make sure you never use QEMU and the chroot at the same time! Even more importantly, you need to completely unmount the image before you start QEMU. Otherwise you might see some undesirable side effects.
sudo umount /mnt/temp/dev
sudo umount /mnt/temp/proc
sudo umount /mnt/temp/sys
sudo umount /mnt/temp
sudo kpartx -d -v 2012-12-16-wheezy-raspbian.img

Acknowledgements

I also want to give credit to the articles that helped me do this:

Monday, October 15, 2012

Compiling RetroShare for the Raspberry Pi

The question is not if you are paranoid,
it is if you are paranoid enough.
I'm a fan of darknets. I've always been. I don't want this to turn into a political discussion - it is a "tech" blog - so it should be sufficient to say that I like and use RetroShare.

RetroShare is a decentralized sharing application/network that is, in my opinion, the best darknet software available right now. I've already tried WASTE, GnuNet and AllianceP2P, and they couldn't convince me for long.
Maybe I'll write an article about the pros and cons of RetroShare some time, but in this one I want to describe my attempts to get it running on my Raspberry Pi, so it can serve as a 24/7 node in the RetroShare network, forwarding connections without consuming much power.

When I heard about the Raspberry Pi, the geek inside me wanted to have one. Fabian is still pissed that my Raspi arrived sooner than his - that is, that it arrived at all - although I ordered it much later. (I think he wants to go for Parallella now ...)

I had already managed to compile RetroShare 0.5.3 on and for my Raspi, but it segfaulted whenever I wanted to add a friend or open the settings menu. With the recent release of 0.5.4, I thought I'd give it another try.

Be warned: The compiling alone takes a few hours. I didn't write down exact times, but it's probably best if you have some other tasks to attend to for 2 hours while one subproject is compiling.

What we need to get started

Raspberry Pi and Raspbian Wheezy
I'm starting with my Raspberry Pi (Model B) with a sufficiently large SD card (4 GB should be enough) and a freshly installed Raspbian Wheezy (build 2012-09-18). I updated it today (October 15th) with apt-get update && apt-get upgrade and haven't modified it otherwise, besides putting my dotfiles under revision control using git.

In case that information changes on the Raspi website, I'll duplicate the links and information here, so we're all on the same page:
Torrent:          2012-09-18-wheezy-raspbian.zip.torrent
Direct download:  2012-09-18-wheezy-raspbian.zip
SHA-1:            3bc788d447bc88feaae8382d61364eaba1088e78
Default login:    Username: pi / Password: raspberry

RetroShare sources
The RetroShare version we need is 0.5.4b

Create a directory development in your home directory, cd there, download the sources and unpack them.
mkdir ~/development
mkdir ~/development/RetroShare-v0.5.4b
cd ~/development
wget http://sourceforge.net/projects/retroshare/files/RetroShare/0.5.4b/RetroShare-v0.5.4b.tar.gz
cd RetroShare-v0.5.4b
tar -xvf ../RetroShare-v0.5.4b.tar.gz

Required packages

My primary source for instructions is the RetroShare wiki page UnixCompile in today's version. It is a bit outdated, because they've removed gnupg and introduced openpgp but didn't update their instructions. Install the following packages with:
sudo apt-get install libqt4-dev g++ libupnp-dev libssl-dev libgnome-keyring-dev libbz2-dev libxss-dev

Then you need to compile the projects in the subdirectories, but some modifications are still necessary to make it work under Debian. The first two subprojects should compile fine:
cd ~/development/RetroShare-v0.5.4b/trunk/
cd libbitdht/src && qmake && make clean && make -j2
cd ../../openpgpsdk/src && qmake && make clean && make -j2
The problems start with libretroshare, and there are two changes you need to make to it:

Firstly, even though the preprocessor commands are there, you still need to tell the make process that you are on Debian and that your libupnp is of a different version. You do this by editing the file libretroshare.pro in the libretroshare/src/ directory: after line 221
DEFINES *= UBUNTU
you add (not replace) these lines
DEFINES *= DEBIAN
DEFINES *= UPNP_VERSION=10617
because we do in fact have version 1.6.17 of libupnp installed. (See this thread in the RetroShare forum.)

Secondly, someone hardcoded the location of glib-2.0 into that project file in a very system-dependent way. You need to change this line (it should be directly below the previous change, at line 224 now)
INCLUDEPATH += /usr/include/glib-2.0/ /usr/lib/glib-2.0/include
to this
INCLUDEPATH += $$system(pkg-config --cflags glib-2.0 | sed -e "s/-I//g")
(Source)
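
If you're curious what qmake will substitute there, just run the pkg-config command yourself; on a Debian-based ARM system the output should look roughly like this (the exact paths depend on your installation):
pkg-config --cflags glib-2.0
# -I/usr/include/glib-2.0 -I/usr/lib/arm-linux-gnueabihf/glib-2.0/include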

cd ../../libretroshare/src && qmake && make clean && make -j2

And now the real fun starts. The Raspberry Pi has 256 MB RAM, of which at least 16 MB need to be reserved for the video core, so only 240 MB RAM are left. Unfortunately, this is not enough to compile retroshare-gui; it will quit with something like
virtual memory exhausted: Cannot allocate memory
g++: internal compiler error: Killed (program cc1plus)
when it tries to compile qrc_images.cpp.

Super hot update
The irony of the situation is that just today an enhanced version of the Raspberry Pi with 512 MB of RAM was announced, which should make the following swap file part obsolete.

Now, I've heard that very, very bad things happen if you create a swap file on a flash storage device like the SD card your Raspi uses or a USB stick. But as a matter of fact, Raspbian Wheezy already uses a swap file by default, so it can't be that bad. All you need to do is create another swap file of sufficient size, say 256 MB, which you delete afterwards just to be on the safe side:
sudo dd if=/dev/zero of=swapfile bs=1M count=256
sudo mkswap swapfile
sudo swapon swapfile
Then you can compile retroshare-gui
cd ../../retroshare-gui/src && qmake && make clean && make -j2
and deactivate the swap file again:
sudo swapoff swapfile
sudo rm swapfile
This is also a good opportunity to turn off the default swap file, as it really isn't that healthy for your SD card. (Hoping that actually running RetroShare is possible with only 256 MB RAM.)
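
If I'm not mistaken, Raspbian manages its default swap file through dphys-swapfile, so disabling it should look like this (double-check on your system):
sudo dphys-swapfile swapoff              # turn the managed swap file off now
sudo update-rc.d dphys-swapfile remove   # and keep it from coming back at boot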

Done

And that's it, you've just compiled RetroShare. This article is already way too long, so I'll stop here and cover configuring and running it in a later post. Hopefully soon.

Sunday, October 14, 2012

Step-wise howto for dotfiles with git

After implementing and using the approach to back up and manage linux configuration files for multiple machines, I want to give a short reference and summary, both for you and for me, so it can easily be looked up.

Step 1: Create a repository at github

Log into your account at github (if you don't have one, create one), and click on "Create a new repo". There you should enter a good name for your repository (I suggest "dotfiles" or similar) and - if you want - a description. Then click on "create repository". Done ;)

In case you haven't done so already, you should add the public SSH keys of your machines to the list of authorized keys at github, otherwise you won't be able to push to your git repositories using SSH.
(In short, you have to copy the contents of ~/.ssh/id_dsa.pub into the box at the github site. If you have no idea what I'm talking about, you'd better read up on SSH public key authentication.)
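
For reference, a minimal sketch; the file names depend on the key type, so adjust if you use RSA instead of DSA:
ssh-keygen -t dsa            # only if you don't have a key pair yet
cat ~/.ssh/id_dsa.pub        # paste this output into the box at github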

Step 2: Initialize the dotfiles directory and repository

At first, you need to create the .dotfiles directory and move the dotfiles that you want to put under revision control there.
mkdir ~/.dotfiles
mv ~/.bashrc ~/.dotfiles/bashrc
mv ~/.bash_aliases ~/.dotfiles/bash_aliases
mv ~/.screenrc ~/.dotfiles/screenrc
mv ~/.vimrc ~/.dotfiles/vimrc
Then you need to place the symlinker script symlinks.sh there (you'll find my version in the October 1 post below), make it executable and execute it.
cd ~/.dotfiles
chmod u+x symlinks.sh
./symlinks.sh

Then initialize your repository and link it to github. This only needs to be done once.
git init
git config --global user.name "YOUR NAME"
git config --global user.email yourmail@something.com
git remote add origin git@github.com:GITHUB_USERNAME/dotfiles.git
where "YOUR NAME" and yourmail@something.com are the name and mail address with which your commits will be signed. You don't have to do this and you don't have to provide your real name or address there. Decide for yourself.
But GITUB_USERNAME has to be your github username (who would have guessed ...).

Step 3: Track changes with git

Now and whenever you have changed something in your .dotfiles folder, add those changes or files to the repository and push it to github.
git add bashrc
git add bash_aliases
git add screenrc
git add vimrc
git add symlinks.sh
git commit -m "First commit with some rc-files and symlinks.sh"
git push origin master

Step 4: Add additional machines to your dotfiles management

Whenever you want to place another machine under your git dotfile management, you need to check out that repository on that machine.
mkdir ~/.dotfiles
git clone git@github.com:GITHUB_USERNAME/dotfiles.git ~/.dotfiles
git config --global user.name "YOUR NAME"
git config --global user.email yourmail@something.com

Step 5: Pull changes from your github repository to your local machine

I haven't thought about whether I want to automate this step and how. For the moment, manually pulling changes from github seems the best solution to me, since I'm not changing my dotfiles on a daily basis.
git pull origin master
and sometimes
~/.dotfiles/symlinks.sh
if you just added the machine to the management system or if new dotfiles have been put under revision control.

Additional remarks

I've found it more comfortable to set github as the upstream repository with
git push -u origin master
because after that I only have to use
git push
and
git pull

Also, I wrote this article based on the bash histories of my machines, so in case you encounter an error, please tell me in the comments so that I can correct this article. Thanks ;)

Monday, October 1, 2012

Backup and manage linux configuration files for multiple machines

If you have multiple computers running linux, you've probably faced the same problem I do right now:
Dealing with your own personally customized configuration files like .vimrc, .bashrc, .screenrc, etc., which are called dotfiles. This includes situations like:
  • You setup a new machine and want to configure it the way you want
  • You upgrade your linux distro and now you have to merge your changes into the new config files
  • You already have multiple machines with different configurations but forgot what is where
I discussed this with a good friend (who also has a blog and you should totally check it out) and he suggested using git to manage them.

Now, I'm certainly not the first one to think about this, and the problem has been solved often enough. I also don't want to copy or repeat what others have already written elsewhere, firstly because I probably can't make it much better, and secondly because I'm lazy. So I'll just explain the basic steps and link to the sources where I got it from.

Git and Github

The first thing I found was the post Using git and github to manage your dotfiles by Michael Smalley, and I liked it very much. It is a good starting point that explains how to set up git and also shows a small script to do some managing of the dotfiles.

This is where I'll proceed from, because Fabian (the friend mentioned above) wouldn't stop annoying me until I signed up at github.


But you shouldn't stop there, because there are some issues that can be solved even better.

Different machines need different dotfiles

I wasn't quite sure about the symlinking thing, so I continued searching. I found the post Why I use git and puppet to manage my dotfiles, and what immediately convinced me was this statement:
Of course if I only used the default master branch of git I may as well be storing all of my dotfiles in Dropbox. Since many of my machines need to have their own various customizations I use a branch for each machine. Then I periodically rebase each individual branch on the latest master and use git cherry-pick to move changes from the custom branch to the master branch.
If your machines are on different upgrade levels or even on different distributions, dotfiles that might work on one machine can easily be invalid on another, even if you want to have the same configuration on all of them.

Symlinks with Puppet

I had never heard of Puppet, but the clue seems to be that you don't specify steps, but only goals, and Puppet ensures that your goals are fulfilled.
file { "/home/${id}/.bashrc":
 ensure => link,
 target => "/home/${id}/config/my.bashrc",
}
With this configuration, Puppet will create the symlink, but only if it doesn't exist already. The command for this is
puppet apply symlinks.pp
assuming of course your file is called symlinks.pp

Edit:
Actually, I have to revoke what I said there. I've tried it just now, and the problem is that if the files already exist, they will be deleted. At least, I couldn't find the original .bashrc anymore; there was only the symlink to the one in my .dotfiles directory.
I will use a modified version of the bash script from the first link I provided and post it here later.

Edit 2:
Here it is. My script is based on the one by Michael Smalley, but I didn't like the idea of a separate folder for the old configuration files. Instead, I move old configuration files that are not symlinks to the same .dotfiles folder, but add a suffix that indicates when they were moved there and from which machine. That way, you can put them under revision control and merge them later into your current configuration file.

Edit 3: Small correction to the script; it didn't link when there was no previous dotfile to be moved to .dotfiles.

#!/bin/bash

dir=~/.dotfiles                    # dotfiles directory
time=$(date +%Y%m%d_%H%M%S)       # %M = minutes, not %m (that's the month)
oldsuffix="old.$(hostname)-$time"

# list of files/folders to symlink in homedir
files="bashrc bash_aliases screenrc vimrc"

##########

# move any existing dotfiles in homedir to the .dotfiles directory, then create symlinks
echo "Moving any existing dotfiles from ~ to $dir with suffix $oldsuffix"
for file in $files; do
  absfile=~/.$file
  # if file exists and is no symlink, move it to .dotfiles
  if [[ -e $absfile ]] && ! [[ -h $absfile ]]; then
    mv ~/.$file $dir/$file.$oldsuffix
  fi
  # if file doesn't exist, link it
  if ! [[ -e $absfile ]]; then
    echo "Creating symlink to $file in home directory."
    ln -s $dir/$file ~/.$file
  fi
done

By the way, you can find my github repository of my dotfiles here: https://github.com/TheSentry/dotfiles

Reuse existing dotfiles

Because other people had the same problem, there are already lots of repositories for dotfiles available. You can browse them and pick some you like, e.g. at http://dotfiles.org/

And don't forget: Dotfiles Are Meant to Be Forked


That's it. I'm lazy and I want to actually implement this solution on my computers, because until now there is no backup or management of my dotfiles whatsoever. If you spot any errors here or have questions or problems that you are too lazy to solve yourself, comment below and I'll see what I can do ;)