Haui's bytes http://pdes-net.org/x-haui/blog/html/ news, diary, journal, whatever en-us Wed, 12 Mar 2014 00:00:00 +0100 http://pdes-net.org/x-haui/blog/html/2014/03/12/who_is_phoning_home.html http://pdes-net.org/x-haui/blog/html/2014/03/12/who_is_phoning_home.html <![CDATA[Who Is Phoning Home]]>

Who Is Phoning Home

The current development in consumer electronics like tablets and mobile phones couldn’t be worse - at least if you care even a little bit about privacy. A few years ago, spyware and adware that came bundled with other software was unwelcome to (almost) every Windows user. Internet discussion boards were overrun by people trying to get rid of annoying browser toolbars that had been installed together with their favorite file sharing tool. Others even refused to use the Opera web browser because it was ad-sponsored.

Nowadays, we don’t have these problems anymore. Spyware and adware are now socially accepted and more popular than ever before. Naturally, the name has changed - the euphemism App just sounds cool and innocent.

Many (or even most) apps for Android or iOS, the two major operating systems found on mobile phones and tablets, come with some important built-in consumer information right away. To retrieve these ads, an internet connection is needed, because the advertising is tailored specifically to your needs (I know, it’s great). Apart from this, an active internet connection opens up another great field of application: while ads are transmitted to your device, your personal data may be sent from your device to trustworthy servers around the world. So it’s a win-win situation for everybody - except for you.

The permission concepts of Android and iOS are, of course, completely useless because nobody cares anyway - “A flashlight app that requires access to the Internet, email accounts and GPS information - seems legit!”. In addition, these modern operating systems are an ideal breeding ground for such privacy-nightmare applications because of their standardized APIs. In contrast to a classical desktop system, it’s extremely easy to collect data like calendar events or emails automatically, because the APIs already include ready-to-run methods for these purposes.

However, apart from avoiding this new technology altogether, there isn’t that much you can do about the aforementioned issues. Still, there are ways to figure out which data is sent from and to your device, and it’s even possible to filter this traffic. I’ll describe one of these ways in the following. But be warned - there’s no app for this and you will need a root shell ;)

../../../_images/bridge.png

The image shows the basic setup. The PC acts as a Wi-Fi hotspot and forwards the traffic received from a tablet to the DSL router. To turn the PC into a hotspot, a Wi-Fi USB adapter that can act as an access point (check with iw list | grep AP) is required. Once the adapter is properly installed on your system, it’s easy to create a hotspot with hostapd. The Arch Wiki contains some information about the configuration, but the default configuration file is quite self-explanatory and just needs a few adjustments. After the setup is done, brctl (on Debian contained in the bridge-utils package) is used to create a bridge that connects the wireless and the wired network interfaces:

brctl addbr br0
brctl addif br0 eth0

Don’t forget to add the correct bridge configuration line to your hostapd.conf - hostapd will then add the wireless interface to the bridge by itself:

bridge=br0
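
For reference, here’s what a complete minimal hostapd.conf along these lines could look like - the interface name, SSID, channel and passphrase below are just placeholders you’ll have to adapt to your setup:

# minimal example configuration - adjust to your hardware and network
interface=wlan0
bridge=br0
driver=nl80211
ssid=testnet
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=changeme1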

After the interfaces are brought up, you may start hostapd:

ip link set dev eth0 up
ip link set dev wlan0 up
ip link set dev br0 up
hostapd -B /etc/hostapd/hostapd.conf

If no errors occurred, you should be able to connect your wireless device to the newly created hotspot. As all traffic now flows through the PC system, we’re able to record and inspect the network packets. tcpdump is the tool of choice for this task:

tcpdump -ni br0 -w tablet.pcap

The command collects all packets passing the bridge interface and writes them to a file tablet.pcap. No need to mention that the command must be run with root privileges. Once enough packets are collected, the PCAP file can be inspected with Wireshark. We can then check, for example, whether the login data for our favorite shopping app is sent over an SSL-secured connection or as plain text. As a thorough explanation of Wireshark’s (and tcpdump’s) capabilities could easily fill an entire book, I recommend you take a look at the documentation if you’re interested in topics like filter expressions. However, basic knowledge of the TCP/IP protocol suite is mandatory for this.
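
If firing up Wireshark feels like overkill for a first look, tcpdump can also read the capture back in and apply filter expressions directly - the host and port values below are of course just examples:

# show the DNS queries the tablet sent
tcpdump -nr tablet.pcap port 53
# print unencrypted HTTP traffic to one specific server in ASCII
tcpdump -nr tablet.pcap -A 'tcp port 80 and host 203.0.113.10'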

I mentioned earlier that the setup not only allows us to capture all network traffic, but also enables us to filter (and even modify) it. A few basic firewall rules are enough to stop all communication between the tablet and a specific IP or IP range:

iptables -A FORWARD -d 192.168.1.0/24 -j DROP
iptables -A FORWARD -s 192.168.1.0/24 -j DROP

In the example, I’ve used a private IP address range - in a real-world scenario you would most likely pick a non-private IP. Filtering by IP addresses, however, is not always a satisfying solution. Sometimes the packet payload is far more important than the destination or source information. To drop all packets containing a specific string like “adserver”, iptables’ string matching extension is very useful:

iptables -A FORWARD -m string --string "adserver" --algo=kmp -j DROP
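
In practice you’d probably want to narrow such a rule down - restricting it to a protocol and port keeps the string match from inspecting every single packet. A sketch for unencrypted HTTP traffic:

iptables -A FORWARD -p tcp --dport 80 -m string --string "adserver" --algo bm -j DROP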

For this kind of deep packet inspection, better tools probably exist, but I’m not familiar with them (and my iptables skills have also become quite rusty).

All in all, a Linux-driven hotspot opens up completely new possibilities compared to a standard access point. Still, inspecting the traffic and creating proper firewall rules is a rather cumbersome procedure.

]]>
Wed, 12 Mar 2014 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2014/03/05/cutom_git_prompt.html http://pdes-net.org/x-haui/blog/html/2014/03/05/cutom_git_prompt.html <![CDATA[Custom Git Prompt]]>

Custom Git Prompt

Of all the revision control systems I know, Git is my personal favorite. I use it for all my revision control needs at work and at home. As a consequence, I have quite a few Git repositories residing in my home directory. To keep track of all these repositories, I modified my bash prompt so that it displays the most important Git info for the current directory (only if the directory is a Git repo, of course). My solution basically consists of two parts. First, I wrote a simple Perl script that summarizes and colors the output of git status -s. Second, I created a bash script that is sourced at the end of the ~/.bashrc. This bash script overrides the standard cd command and checks whether the new directory is a Git repository. If the check doesn’t fail, the output of the Perl script is integrated into the prompt.
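
Stripped of all details, the cd override boils down to something like the following sketch - here a plain git status --porcelain stands in for my Perl helper, and the caching described below is left out:

# simplified sketch of the idea behind the sourced prompt script
__git_prompt() {
    # only emit something when the current directory is inside a work tree
    if git rev-parse --is-inside-work-tree &>/dev/null; then
        local branch changes
        branch=$(git symbolic-ref --short HEAD 2>/dev/null || echo detached)
        changes=$(git status --porcelain 2>/dev/null | wc -l)
        echo "(${branch}: ${changes} changed)"
    fi
}

cd() {
    builtin cd "$@" || return
    PS1="\u@\h \w $(__git_prompt)\n\$ "
}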

In a first version, the bash prompt was completely refreshed after every command executed in the shell. For very large Git repositories, however, this led to a noticeable lag after every command. So, I decided to rerun the git status script only if the last command executed in the shell was likely to modify the status of the repo. My list of commands, which makes no claim to be complete, includes vim, cp, mv and, of course, git. To manually rerun the status script, and thus update the prompt, I added the command g, which simply triggers the git status script. As a drawback of this solution, changes made in the same repo are not visible across multiple concurrent shell sessions. For my needs, however, it works well enough. See the following screenshot for a short demonstration:

../../../_images/gitstatus.png

Please note that my prompt spans two lines by default (three when inside a Git directory).

You can download the two scripts here. Move gitstatus into ~/bin/ and make it executable. Drop gitstatus.sh in your home directory as .gitstatus and add the following line to your .bashrc

. ~/.gitstatus

You may of course modify the GITPREFIX and GITSUFFIX variables in gitstatus.sh to fit your needs.

]]>
Wed, 05 Mar 2014 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2013/09/12/find_files_in_apt_packages.html http://pdes-net.org/x-haui/blog/html/2013/09/12/find_files_in_apt_packages.html <![CDATA[Find files in APT packages]]>

Find files in APT packages

On a Debian-based Linux distribution like Crunchbang, it’s quite easy to determine which package a specific file on the system belongs to. Issuing dpkg-query -S /usr/lib32/gconv/UNICODE.so on my desktop system tells me that the file belongs to the libc6-i386 package - the 32-bit shared libraries for AMD64.

Sometimes, however, it comes in handy to know which package provides a file that is not present on the filesystem. One prominent example is the tunctl program, which allows the creation of TUN/TAP interfaces on Linux. A search for tunctl via aptitude search tunctl doesn’t yield any results. That’s where apt-file comes into play. After installing it via aptitude install apt-file and updating the index with apt-file update, we can start a more advanced search. Using apt-file find tunctl, we learn that the file /usr/sbin/tunctl is included in the uml-utilities package. For more sophisticated searches, apt-file offers various advanced options, e.g. for case-insensitive searches or searches based on regular expressions.
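
A sketch of what such a search could look like - double-check the exact option names against the apt-file manpage on your system:

# treat the pattern as a regular expression, anchored to the file name
apt-file -x search '/tunctl$'
# ignore case
apt-file -i search TUNCTL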

]]>
Thu, 12 Sep 2013 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2013/09/06/sudoku.html http://pdes-net.org/x-haui/blog/html/2013/09/06/sudoku.html <![CDATA[Sudoku]]>

Sudoku

Solving sudoku puzzles usually requires nothing more than a pen and some time. Solving 50 sudoku puzzles, however, requires a huge amount of time. 50 sudoku puzzles...? Yep, to get the solution for Problem 96 on ProjectEuler, 50 sudokus need to be solved first. I solved my first 49 problems on ProjectEuler a few years ago and recently rediscovered the website. So I started with the sudoku problem and got the solution quite quickly using a simple brute-force algorithm. I’m not going to post the solution for the problem, just the C++ code for my sudoku solver. After compiling the program with g++ -std=c++11 -O6 -o sudokusolver sudokusolver.cpp, the sudokus given in an input file are solved with a simple recursion-based algorithm. An example is given below:

Trying to solve:
---------
003020600
900305001
001806400
008102900
700000008
006708200
002609500
800203009
005010300
---------
    |
    V
---------
483921657
967345821
251876493
548132976
729564138
136798245
372689514
814253769
695417382
---------

Each sudoku in the input file must consist of 9 consecutive lines containing the initial values; blank fields are represented by zeros. Multiple sudokus have to be separated by a line containing at least one dash.

]]>
Fri, 06 Sep 2013 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2013/09/06/gnuplot.html http://pdes-net.org/x-haui/blog/html/2013/09/06/gnuplot.html <![CDATA[Gnuplot]]>

Gnuplot

Some years ago, when I had to create plots from measurement data for a student research project, I first started using Gnuplot. Although several powerful alternatives exist, I never felt the need to switch away from Gnuplot and I’m still using it for all my plotting tasks. In almost all cases, these tasks consist of visualizing data that was acquired with the help of other tools, such as Iperf. Sometimes this data is not suitable to be fed directly into Gnuplot, but requires some preprocessing first. My usual approach was to write a Bash or Perl script to preprocess and sanitize the collected data and then dynamically generate a Gnuplot script for this specific data set. Depending on the data, however, this felt like a cumbersome and suboptimal approach. The two scripts res.pl and wrapper.sh illustrate this point. The Perl script is called by the bash script and transforms the input data into a whitespace-separated data set. The bash script then extracts some additional information, e.g. the axis labels. Then, a Gnuplot script is generated and executed to finally create the plot. Although this works, a more elegant way would have been to use a Gnuplot library, such as Chart::Graph::Gnuplot. I’ve written the small demo script frequency.pl to illustrate the usage of the aforementioned library. The script counts the character frequencies in an input file and creates a plot like the following from the collected data.

http://pdes-net.org/x-haui/pics/frequency.png

All in all, the usage of this library feels much more comfortable, especially when dealing with poorly formatted input data that requires a great amount of preprocessing. Of course, Gnuplot bindings for languages other than Perl do exist as well.
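
For the record, the “generate a throwaway Gnuplot script from the shell” pattern mentioned above boils down to something like this - data file name, column layout and labels are made up for the example:

#!/bin/bash
# feed a dynamically generated Gnuplot script to gnuplot via a heredoc
DATA="measurements.dat"
gnuplot <<EOF
set terminal png size 800,600
set output "plot.png"
set xlabel "time [s]"
set ylabel "throughput [Mbit/s]"
plot "$DATA" using 1:2 with linespoints title "iperf run"
EOF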

]]>
Fri, 06 Sep 2013 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2012/09/26/encoding_hell.html http://pdes-net.org/x-haui/blog/html/2012/09/26/encoding_hell.html <![CDATA[Encoding Hell]]>

Encoding Hell

Some years ago, when I first started using Linux, my system’s locale was set to ISO8859-15. Over the years, I switched to UTF-8, of course. Though I now tend to use proper filenames for all my files, once in a while I still come across witnesses of the old days, when I was littering my filenames with crappy [1] or even crappier [2] characters. In my defence I have to say that lots of these files carry names I didn’t choose myself, because they were auto-generated by CD rippers or other software. Some files even date back to the time when I was exclusively using Windows and didn’t care about filenames or encodings at all.
Using the command from my posting about rename can usually fix all these filenames, but this might not always be what you want - a folder named glückliche_kühe is renamed to gl_ckliche_k_he - not a perfect solution. What you might really want is to convert the filename from one encoding to another, and good for you, somebody already did all the work and created a nifty little program called convmv, which supports 124 different encodings. The syntax is quite easy:

convmv -f iso8859-15 -t utf-8 *

This shows which filenames in the current directory would be converted from ISO8859-15 to UTF-8; the conversion only actually happens if you explicitly add the --notest option to the command line.
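
Once the preview looks right, the actual conversion - recursively, if you like - could then look like this (the -r switch tells convmv to descend into subdirectories):

convmv -r -f iso8859-15 -t utf-8 --notest *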

That’s the easy way, but let’s assume you want to work with the glückliche_kühe folder without re-encoding the filename. Be aware that some graphical file managers may not handle filenames with wrong encodings correctly. On my system, krusader couldn’t open the ISO8859-15 encoded test folder, while gentoo (yes, this is indeed a file manager) only displayed a warning. Additionally, there are situations where no graphical environment is available at all.

So, the far more interesting question is how to work with these files in a shell environment. The naive approach cd glückliche_kühe fails because the ISO8859-15 ü is different from the UTF-8 ü - our UTF-8 environment will correctly respond that there’s no such folder. A simple ls will show a question mark for every crappier character in the filename, and that’s not exactly useful either, since we can’t uniquely identify the names this way. How would you change into glückliche_kühe if there’s also a folder called gl_ckliche_k_he? Typing cd gl?ckliche_k?he is ambiguous, since the question mark is treated as a glob character by Bash and matches any single character. Depending on the situation, this might or might not do what you want, as the pattern gl?ckliche_k?he can expand to more than one matching filename.
One solution is to run ls with the -b option - this way, we instruct ls to print unprintable characters as octal escapes:

user@localhost /tmp/test $ ls -b
gl\374ckliche_k\374he

This gives us something to work with. echo can interpret these escape sequences and Bash’s command substitution offers a way to use echo’s output as a value.

user@localhost /tmp/test $ cd "$(echo -e "gl\0374ckliche_k\0374he")"
user@localhost /tmp/test/glückliche_kühe $ pwd
/tmp/test/glückliche_kühe

There are three things you should note here. First of all, in order to mark the escape sequences as octal numbers, you need to add a leading zero, as I did in this example. Secondly, the -e parameter is required to tell echo to interpret escape sequences rather than printing the literal characters. The last thing is not exactly related to the encoding problem, but is always worth mentioning: the quotes are there for a reason!

So, now the encoding hell shouldn’t look so scary anymore - at least not with respect to filenames. ;)

Oh, and by the way, if you just want to check if you got any wrongly encoded filenames, this one-liner could help:

find . -print0 | xargs -0 ls -db  | egrep "\\\[0-9\]{3}"


[1] every character c, that is not in [a-zA-Z0-9._-]+
[2] every character c, where utf8(c) != iso8859-15(c)

]]>
Wed, 26 Sep 2012 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2012/09/12/rebuild_debian_packages.html http://pdes-net.org/x-haui/blog/html/2012/09/12/rebuild_debian_packages.html <![CDATA[Rebuilding Debian packages]]>

Rebuilding Debian packages

Most of the software installed via APT, Debian’s package management system, runs perfectly fine without any reason to complain. In some rare cases, however, you might find yourself unsatisfied with a package and have the itch to recompile it. For me, Debian’s package for the Vim text editor is one of these cases - the package available in the repositories was compiled without support for the Perl interface. Of course, one could just visit vim.org, download the latest sources for Vim, check the build requirements and install the missing libraries manually, call ./configure with the correct parameters, compile the program and finally install it. Apart from being a quite cumbersome procedure, this version of Vim would not show up in APT’s database. So, there has to be a better way to do this, and indeed, there is one.

First of all, two packages and their dependencies are required for the next steps - build-essential and devscripts. They should be available in the repositories and can be installed as usual:

su root -c "apt-get install build-essential devscripts"

Once this is done, we’ll change to our developer directory and download the sources for Vim as well as the build dependencies.

mkdir -p ~/devel
cd ~/devel
apt-get source vim
su root -c "apt-get build-dep vim"

When this is finished, a new directory ~/devel/vim-*VERSION*/ should contain the sources for Vim as well as the Debian-specific patches and configuration. Now, one could make all kinds of changes to Vim’s source code, but we just want to modify a configuration parameter. This is done by editing the debian/rules file, which contains the default configure flags for the package. The flags defined here are passed to the configure script during the build process. The Perl interface can be enabled by changing the parameter --disable-perlinterp to --enable-perlinterp. Thereafter, you just need to invoke the following command and wait until the compilation process is finished:

debuild -us -uc

If no errors occurred, you’ll find several *.deb files inside your ~/devel directory. To install Vim, just pick vim-*VERSION*_*ARCH*.deb and install it via dpkg, e.g. on my box:

su root -c "dpkg -i vim_7.3.547-4_amd64.deb"

vim --version should now show +perl instead of -perl, and :perldo is finally available. ;)
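
One caveat that’s easy to overlook: the next apt-get upgrade may happily replace your rebuilt package with the stock version from the repositories again. Putting the package on hold prevents that:

su root -c "echo vim hold | dpkg --set-selections"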

]]>
Wed, 12 Sep 2012 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2012/09/07/delete_all_files_except_one.html http://pdes-net.org/x-haui/blog/html/2012/09/07/delete_all_files_except_one.html <![CDATA[Delete all files except one]]>

Delete all files except one

A couple of days ago I was asked if I knew an easy way to delete all but one file in a directory. If you didn’t already guess it from this blog entry’s title, there is a simple way - or, to be more precise, there are several ways. The first one is quite straightforward and uses the find command:

find . -not -name do_not_delete_me -delete

This works recursively and also preserves files named do_not_delete_me contained in sub-folders of the current directory:

user@host /tmp/test $ ls -R
.:
a  b  c  do_not_delete_me  foo

./a:
foo

./b:
bar  do_not_delete_me

./c:
baz
user@host /tmp/test $ find . -not -name do_not_delete_me -delete
find: cannot delete `./b': Directory not empty
user@host /tmp/test $ ls -R
.:
b  do_not_delete_me

./b:
do_not_delete_me

As you can see, find tries to delete the folder b but fails because the folder is not empty. If you don’t care for files in sub-directories, it gets a bit more complicated with find:

find . -mindepth 1 -maxdepth 1 -not -name do_not_delete_me -exec rm -rf -- {} +

The -mindepth/-maxdepth parameters tell find not to descend into sub-directories, because we’re not interested in their contents. This should also save some execution time - especially if the directory hierarchy is really deep.

While this works well, Bash’s pattern matching offers an easier solution for this:

rm -rf !(do_not_delete_me)

As the manpage explains, the text enclosed by the parentheses is treated as a pattern list, i.e. constructs like !(*.jpg|*.png) are perfectly valid. Note, however, that this extended pattern syntax requires the extglob shell option - see the snippet below. If you don’t care about files in sub-directories, this might be the preferred way - it’s shorter and maybe even faster than the solutions using find.
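
With extglob enabled, the complete recipe therefore looks like this:

shopt -s extglob
# delete everything in the current directory except do_not_delete_me
rm -rf !(do_not_delete_me)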

No matter which solution you choose, refrain from error-prone constructs like rm -rf `ls | grep -v do_not_delete_me`.

]]>
Fri, 07 Sep 2012 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2012/08/17/new_design.html http://pdes-net.org/x-haui/blog/html/2012/08/17/new_design.html <![CDATA[New design]]>

New design

A few days ago, I decided to give my blog a new look. Consequently, I wanted to upgrade nanoblogger from version 3.3 to at least 3.4 or even 3.5. Thinking about the upgrade procedure, however, gave me a headache - there are just too many small fixes and workarounds I had tinkered into nanoblogger’s source code. After spending some time searching for alternatives, I eventually ended up with Tinkerer, a Python-based static blog compiler. Apart from being actively developed, Tinkerer has two advantages over nanoblogger that I especially want to emphasize.

First of all, Tinkerer is fast. Completely rebuilding my blog takes just about 2 seconds - nanoblogger needs over 3 minutes for the same task. Secondly, Tinkerer offers source code highlighting for many programming and markup languages by using Pygments.

Additionally, transferring the old blog postings from nanoblogger was easier than expected. I wrote a small shell script that converts the *.txt files inside nanoblogger’s data directory into a format known by Tinkerer. Of course, this just automates some steps of the process and can’t spare you the work of manually fixing errors and warnings Tinkerer might report. Still, it saved me a lot of work.

On the downside, I already stumbled upon some bugs - if you plan to use Tinkerer for your own blog and repeatedly get unexplainable UnicodeErrors, this might save you a lot of trouble.

]]>
Fri, 17 Aug 2012 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2012/08/15/hardly_known.html http://pdes-net.org/x-haui/blog/html/2012/08/15/hardly_known.html <![CDATA[Hardly known]]>

Hardly known

Most Linux and some Ubuntu users know a certain set of command-line programs for interactive shell usage. Most importantly, there are the standard tools from the GNU core utilities, which cover many aspects of everyday work. You’ll find these tools preinstalled on almost every Linux-based desktop or server system (embedded systems often tend to use all-in-one tools like BusyBox as a replacement for the core utilities). Additionally, some of the commonly used tools like grep or strings are found in separate packages, which are also available on most systems.

Read more...

]]>
Wed, 15 Aug 2012 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2012/04/10/default_parameters.html http://pdes-net.org/x-haui/blog/html/2012/04/10/default_parameters.html <![CDATA[Default parameters]]>

Default parameters

Recently I came across an interesting snippet of C++ code:

#include <iostream>
#include <string>
class Base {
    public:
    virtual void message1(){ std::cout << "Base message1" << std::endl; }
    virtual void message2(std::string param = "Base message2"){ std::cout << param << std::endl; }
};
class Derived : public Base {
    public:
    virtual void message1(){ std::cout << "Derived message1" << std::endl; }
    virtual void message2(std::string param = "Derived message2"){ std::cout << param << std::endl; }
};
int main(){
    Derived d;
    Base *base = &d;
    base->message1();
    base->message2();
    return 0;
}

If you compile this with g++ and run the produced binary you’ll get the following output:

Derived message1
Base message2

At first glance this might look a little confusing. It seems like the correct overriding function is only called for message1 but not for message2. However, if you change the function bodies as follows, you can see that the correct function is called both times:

class Base {
    public:
    virtual void message1(){ std::cout << "Base::message1 Base message1" << std::endl; }
    virtual void message2(std::string param = "Base message2"){ std::cout << "Base::message2 " << param << std::endl; }
};
class Derived : public Base {
    public:
    virtual void message1(){ std::cout << "Derived::message1 Derived message1" << std::endl; }
    virtual void message2(std::string param = "Derived message2"){ std::cout << "Derived::message2 "<< param << std::endl; }
};

Derived::message1 Derived message1
Derived::message2 Base message2

As you can see, the correct overriding functions of the derived class are called in both cases. The real problem stems from the default parameter for message2. Default parameters of C++ functions are resolved at compile time based on the static type (here: Base), whereas the function to call is determined at run time from the dynamic type (here: Derived). As a rule of thumb, you should never change the values of default parameters in overriding functions. Although it’s legal and the result is well defined, doing so will only lead to confusion and subtle errors. Another way to avoid this issue is to abstain from using default parameters for virtual functions at all. If you want to know more about this and many other quirks of C++, you should take a look at Scott Meyers’ book Effective C++.

]]>
Tue, 10 Apr 2012 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2012/01/29/security_through_obscurity_.html http://pdes-net.org/x-haui/blog/html/2012/01/29/security_through_obscurity_.html <![CDATA[Security through obscurity]]>

Security through obscurity

From the variety of available email clients, I found Claws Mail to be my favorite (maybe ‘cause after 6 years of Linux, I still haven’t found the time to configure mutt...). Anyway, in today’s posting I will not praise the advantages of Claws Mail, but rant a little about one of its “security” features. Like most programs, Claws Mail stores its configuration in a separate directory in the user’s home folder. This folder contains, among other things, all account information. Since Claws Mail doesn’t offer any kind of password manager or “master password”, one would think that the passwords for the mail accounts are stored in plain text. However, the accountrc file contains base64-encoded strings of DES-encrypted passwords. At this point, one should wonder how the program can encrypt the passwords without asking the user for a password. The solution is simple - the password is hardcoded into the binary. With this knowledge it’s obvious that this approach is a clear case of security through obscurity. Given the accountrc file and the binary, everyone can easily decrypt the passwords, e.g. with this standalone C program. If you’re asking for more security than restrictive file permissions on your home folder can provide, you still have several options: patch Claws Mail’s source code to use a real password safe for storing the passwords, use file encryption (either for your complete home folder, or just for ~/.claws-mail, e.g. with encfs), or switch to another email client.

]]>
Sun, 29 Jan 2012 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2012/01/26/toggle_ssl_.html http://pdes-net.org/x-haui/blog/html/2012/01/26/toggle_ssl_.html <![CDATA[Toggle SSL]]>

Toggle SSL

To switch easily between the HTTP and HTTPS version of a website, I wrote a small plugin for Vimperator that can be found here. Save it into ~/.vimperator/plugins/ and restart Firefox. You should now be able to switch between the HTTP and HTTPS version of a website by pressing \h.

]]>
Thu, 26 Jan 2012 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2011/06/23/advanced_i_o_redirection.html http://pdes-net.org/x-haui/blog/html/2011/06/23/advanced_i_o_redirection.html <![CDATA[Advanced I/O redirection]]>

Advanced I/O redirection

Recently I had to commit a bunch of changes via SVN. Because it’s really recommended to review all changes made in the working directory before actually committing the data, I issued svn status | grep ^M to see all files that had been modified since the last commit. The result was a fairly long list of files, and I wanted to check which changes were actually made to each individual file. Of course, every SVN user knows about svn diff, or even better svn diff | less, which gives a complete diff of all modified files. However, I don’t really like this output... it just glues diff after diff together, and if you scroll too fast you will miss one or more small but important changes. That’s why I wanted a mechanism that shows one diffed file at a time until I explicitly proceed to the next one. My first approach was a simple one-liner:

svn status | grep ^M | awk '{print $2}' | while read l; do echo "****** $l ******"; svn diff "$l" ; read tmp; done

As you will notice, this doesn’t really work - both read commands read from the same stdin, so read tmp consumes the next filename from the pipe instead of waiting for keyboard input. One elegant solution involves the shell builtin exec:

#!/bin/bash
exec 3<&0
svn status | grep ^M | awk '{print $2}' | while read l; do echo "****** $l ******"; svn diff "$l" | less ; read tmp <&3 ;done

The line following the shebang duplicates the current stdin (file descriptor 0) onto a new file descriptor 3, i.e. 0 and 3 both read from the keyboard, which is the default in a newly created shell. In the next line, file descriptor 0 is redirected several times (remember: a | b redirects the stdout of a into the stdin of b), so the first read gets its lines from the awk command. The second read, however, reads its input from file descriptor 3, which still refers to what file descriptor 0 pointed at when the script started, i.e. it reads the keyboard input (I also piped svn diff through less, but that’s just a small enhancement unrelated to the main problem). This is just a simple example of the power of Bash’s redirection - more complex ones do exist ;)

]]>
Thu, 23 Jun 2011 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2011/01/09/flashplayer_issues.html http://pdes-net.org/x-haui/blog/html/2011/01/09/flashplayer_issues.html <![CDATA[Flashplayer issues]]>

Flashplayer issues

While older versions of Adobe’s Flashplayer for Linux made content like Youtube videos accessible via the /tmp filesystem, the latest versions hide these files from the user by exploiting a feature of unlink:

If the name was the last link to a file but any processes
still have the file open the file will remain in existence until
the last file descriptor referring to it is closed.

In other words, the Flash player creates a new file in /tmp, deletes the file right away with unlink, but keeps the file handle open, so the Flash player process can still access it. This, however, may lead to confusion - df reveals that the free space on /tmp is shrinking, while du doesn’t show any growing files at all. One way to fix this is simple - use library preloading to override the original unlink function used by Firefox: download the tgz archive, unpack it and run make. If the previous steps were successful, you should now have a file unlink.so. The last step is to tell Firefox (or, more precisely, the dynamic linker) to use the unlink function from this file rather than the one from your C standard library:

LD_PRELOAD=/path/to/unlink.so firefox

The LD_PRELOAD environment variable tells the dynamic linker to load the given library before all others, so its unlink function takes precedence over the one from the C library. You might want to add an alias like the following to your environment, but for obvious reasons you shouldn’t globally export LD_PRELOAD.

alias ff="LD_PRELOAD=/path/to/unlink.so firefox"

Yet there is one drawback to this solution: even if you close Firefox, the files in /tmp will persist, so you may want to delete them manually from time to time...
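
By the way, if you only want to grab a particular video once, there’s also a way that works without the preload trick: the deleted file is still reachable through the player process’s open file descriptors. Roughly (PID and fd number have to be looked up first, e.g. with lsof):

# list deleted but still open files belonging to firefox
lsof -nP +L1 -c firefox
# copy the stream out via the still-open descriptor (replace PID and FD)
cp /proc/PID/fd/FD /tmp/video.flv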

]]>
Sun, 09 Jan 2011 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2010/12/17/some_lesser_known_bash_tricks.html http://pdes-net.org/x-haui/blog/html/2010/12/17/some_lesser_known_bash_tricks.html <![CDATA[Some lesser-known Bash tricks]]>

Some lesser-known Bash tricks

Though alternatives like the zsh exist, the Bourne-again shell is still the de facto standard among Unix shells. Maybe that’s why some people refer to it as the Windows of the shells - although there are better alternatives around by now, most users still stick with it. I am one of these users - that’s why today’s blog entry is about some useful, but little-known Bash features. ;-) Note that you’ll need at least Bash 4.0 for some of them.

The Bash built-in shopt allows you to (de)activate various options in order to control optional shell behavior. shopt called without an argument gives you an overview of all available options. To activate a feature, simply issue shopt -s OPTION - if you’d like to deactivate the feature again, a shopt -u OPTION suffices. If you wonder what’s so tricky about this, just read on - basic knowledge of shopt is needed to benefit from the following.

Everybody knows about the extremely useful for loop, which allows you to perform the same command for all (or a subset of all) files in a directory. Its syntax is pretty straightforward:

for file in *; do echo "Touching $file"; touch "$file"; done

This will touch every file in the current directory and tell you about it (not really a real-world example, but you get the point). However, sometimes you’d also like to work on the files in all subdirectories - two popular solutions for this are the find command and recursion. Most users, however, forget about or simply don’t know Bash’s globstar option. If it is set (shopt -s globstar), you can use the following construct to also touch the files found in all subdirectories:

for file in **/*; do echo "Touching $file"; touch "$file"; done

Just want to touch all mp3 files? Here you go:

for file in **/*.mp3; do echo "Touching $file"; touch "$file"; done

Considering the previous example, you may notice that globbing doesn’t include hidden files - which in most cases makes sense. Nevertheless, you can alter this default behavior by enabling the dotglob option using shopt.

While the above examples mostly cover batch processing, some options only influence interactive shell usage. If cdspell is set, Bash will generously ignore spelling mistakes in the directory component of a cd command:

user@host /var/tmp $ mkdir example_
user@host /var/tmp $ cd example
-bash: cd: example: No such file or directory
user@host /var/tmp $ shopt -s cdspell
user@host /var/tmp $ cd example
example_
user@host /var/tmp/example_ $

autocd does a very similar job - if you enter a valid directory name without prepending cd, you will automatically change to that directory. These are just some of the available options - man bash knows and explains them all, so start reading.... ;-)

]]>
Fri, 17 Dec 2010 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2010/11/27/discontinuation_of_yaydl.html http://pdes-net.org/x-haui/blog/html/2010/11/27/discontinuation_of_yaydl.html <![CDATA[Discontinuation of yaydl]]>

Discontinuation of yaydl

If you’re one of the few yaydl users out there, you might have noticed that I didn’t put too much effort in the project recently. As I don’t see any chance of maintaining yaydl in an appropriate way over the next months, I decided to discontinue the whole thing. Feel free to use the script as long as it works for you, but please don’t email me any bug reports or the like. If you’re looking for an alternative, I suggest you take a closer look at clive or youtube-dl.

]]>
Sat, 27 Nov 2010 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2010/08/18/vim_tips.html http://pdes-net.org/x-haui/blog/html/2010/08/18/vim_tips.html <![CDATA[Vim tips]]>

Vim tips

Almost 4PM...time for some vim tips. :-)

  • autocmd Vim’s powerful autocmd feature can be used to automatically perform certain commands when a specific event occurs. The events that can be used as triggers range from creating a new file to resizing vim’s window. A complete list of available triggers can be obtained by typing :help autocmd-events in vim. So, how’s this useful? Let’s say you write most of your Perl scripts in vim, why should you insert the shebang and some other stuff manually in a new file, when the editor can do this for you? The following two steps show you how it’s done:

    1. Create a new file ~/.vim/skeletons/skeleton.pl containing a shebang for Perl as well as the recommended use strict/warnings statements:

      p=$(which perl); mkdir -p ~/.vim/skeletons; cat << EOF > ~/.vim/skeletons/skeleton.pl
      #!$p
      use strict;
      use warnings;
      EOF
      
    2. Put the following in your ~/.vimrc

      autocmd BufNewFile *.pl 0r ~/.vim/skeletons/skeleton.pl | :normal G
      

    Now, when you’re creating a new *.pl file, it is automatically prepended with the contents of ~/.vim/skeletons/skeleton.pl and vim starts at the end of the file. Needless to say, you can use multiple autocmd commands to support languages other than Perl.

  • Syntax check Everyone knows about vim’s :make command, but did you know that it’s possible to set the make program for each file type separately?

    autocmd FileType perl set makeprg=perl\ -c\ %\ $*
    

    By adding this to your ~/.vimrc, :make will no longer invoke make file but perl -c file instead when you’re editing a Perl script. As usual, Perl is just an example - e.g. Ruby programmers might use ruby -c or the like.

  • Y? There’s some inconsistency between deleting and yanking in vim: dd deletes the current line, D deletes from the cursor to the end of the line. yy yanks the current line, but Y also yanks the current line... To yank all characters from the cursor position to the end of the line, you either need to type y$, or add a custom mapping for Y to your ~/.vimrc:

    map Y y$
  • Matchit Typing % in normal mode finds the next item in the current line or under the cursor and jumps to its match. Items include C-style comments, parentheses and some preprocessor statements. Unfortunately, there’s no native support for HTML or LaTeX, but there’s a handy little plugin that adds support for these and many other languages: Matchit.

Enough for one day....

]]>
Wed, 18 Aug 2010 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2010/07/24/yaydl_1_5_2.html http://pdes-net.org/x-haui/blog/html/2010/07/24/yaydl_1_5_2.html <![CDATA[yaydl 1.5.2]]>

yaydl 1.5.2

yaydl 1.5.2 fixes the support for youtube....

Download the tar.gz

]]>
Sat, 24 Jul 2010 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2010/07/20/bandwidth_monitors.html http://pdes-net.org/x-haui/blog/html/2010/07/20/bandwidth_monitors.html <![CDATA[Bandwidth monitors]]>

Bandwidth monitors

There are many tools available that allow you to monitor (among other things) the current downstream rate of your internet connection. Some of them, like dstat and bwm-ng, are handy console applications, whereas others integrate nicely into your desktop - two popular examples would be conky and gkrellm. So, in general there’s no real need for the following Bash one-liner, unless you’re just an ordinary user working on some poorly equipped Linux box that doesn’t offer any of the tools mentioned above. In that case, you’ll be glad to have a dirty solution like the following available:

r=$(cat /sys/class/net/eth0/statistics/rx_bytes) ; while [ 1 ]; do n=$(cat /sys/class/net/eth0/statistics/rx_bytes); d=$(((n-r) / 1024 ));r=$n; echo "$d KB/s"; sleep 1;done

There is no need to mention that eth0 must be replaced by the name of your primary interface.

]]>
Tue, 20 Jul 2010 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2010/07/18/remove_exif_data.html http://pdes-net.org/x-haui/blog/html/2010/07/18/remove_exif_data.html <![CDATA[Remove Exif data]]>

Remove Exif data

Sometimes it’s advantageous to remove Exif metadata from image files, for example when posting images online. Fortunately, that’s not a big deal since we’re using Linux:

mogrify -strip image.jpg

...or if you want to process more files:

mogrify -strip *.jpg
]]>
Sun, 18 Jul 2010 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2010/07/18/important_notice_11.html http://pdes-net.org/x-haui/blog/html/2010/07/18/important_notice_11.html <![CDATA[Important notice!!11]]>

Important notice!!11

http://pdes-net.org/x-haui/pics/scheine_thumb.jpg ]]>
Sun, 18 Jul 2010 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2010/04/21/linux_mplayer_and_the_zdf_mediathek.html http://pdes-net.org/x-haui/blog/html/2010/04/21/linux_mplayer_and_the_zdf_mediathek.html <![CDATA[Linux, mplayer and the ZDF Mediathek]]>

Linux, mplayer and the ZDF Mediathek

While the idea behind the ZDF Mediathek isn’t bad at all, the actual implementation is a pain in the ass - especially the Flash version of the website, which causes my Firefox to crash again and again... So I tried the HTML version of the site, which has two major advantages: 1.) Firefox doesn’t crash anymore and 2.) one can watch the videos with any external program like vlc or mplayer. However, there’s still a huge drawback: the videos are streamed via the Real-Time Streaming Protocol or the Microsoft Media Server protocol, so basic operations like fast-forwarding, rewinding or pausing should be avoided. Additionally, as no (significant) buffering is performed, your internet connection will be in use for the whole runtime of a video, limiting other online activities. Looking for an easy solution to this, I checked mplayer’s manpage and found the -dumpstream option. The rest was some elementary bash scripting:

mplayer -dumpfile "$(date +%y_%m_%d_%H_%M.dump)" -dumpstream "$(curl -s "$(curl -s "$LINK" | egrep "<li>DSL\s*2000\s*<a href=.*asx" | sed -r 's#.*href="([^"]+)".*#\1#')" | egrep -o 'mms://[^"]+')"

This will save any(?) video from the Mediathek to a local file called *current_date*.dump. If you didn’t figure it out by yourself, $LINK must be set to (or replaced by) the actual URL pointing to your video (you’ll need the URL of the HTML version, or do some additional preprocessing first). Before you ask: of course I wrote an easy-to-use, ready-to-run script for this - it even does some limited error checking. It can be found here. Update: Seems like this only works for a few videos, so don’t be too disappointed if it fails...

]]>
Wed, 21 Apr 2010 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2010/04/10/yaydl_1_5_1.html http://pdes-net.org/x-haui/blog/html/2010/04/10/yaydl_1_5_1.html <![CDATA[yaydl 1.5.1]]>

yaydl 1.5.1

Version 1.5.1 comes with support for video.golem.de (ok....not as big as youtube, but who cares...) BTW: If you want to be informed about new versions without reading my blog (shame on you!), you might want to subscribe to yaydl on freshmeat.

Download the tar.gz

]]>
Sat, 10 Apr 2010 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2010/04/08/yaydl_1_5.html http://pdes-net.org/x-haui/blog/html/2010/04/08/yaydl_1_5.html <![CDATA[yaydl 1.5]]>

yaydl 1.5

I know, you’ve all been waiting for it, so without any further ado, here it is, yaydl 1.5! It includes all new features from version 1.4a, as well as support for custom fmt codes. As usual, I also fixed some bugs - check out the changelog for details.

Download the tar.gz

]]>
Thu, 08 Apr 2010 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2010/04/02/yaydl_1_4_5a.html http://pdes-net.org/x-haui/blog/html/2010/04/02/yaydl_1_4_5a.html <![CDATA[yaydl 1.4.5a]]>

yaydl 1.4.5a

Still an alpha version, but YouTube works again!

Download the tar.gz

]]>
Fri, 02 Apr 2010 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2010/02/15/yaydl_1_4a.html http://pdes-net.org/x-haui/blog/html/2010/02/15/yaydl_1_4a.html <![CDATA[yaydl 1.4a]]>

yaydl 1.4a

It’s already been more than one month since I released yaydl 1.3.7, so it’s time for a small status update. The attached preview of the upcoming release already includes two main new features: support for playlists and 1080p videos on Youtube. Please note that these improvements aren’t fully stable yet and might include some bugs - so if you’re a fan of rock-stable software, keep your hands off this alpha ;-)

Download the tar.gz

]]>
Mon, 15 Feb 2010 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2010/02/02/lyrics_fetcher_for_moc.html http://pdes-net.org/x-haui/blog/html/2010/02/02/lyrics_fetcher_for_moc.html <![CDATA[Lyrics fetcher for moc]]>

Lyrics fetcher for moc

As the newest alpha version of moc comes with the ability to display song lyrics, I thought it might be a nice feature to fetch these lyrics automatically. So I cobbled together a few lines of Perl code that actually work pretty well. Nevertheless, it’s just a quick hack with a lot of bugs, so don’t expect too much. :-)

Download

]]>
Tue, 02 Feb 2010 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2010/02/01/howto_convert_windows_style_line_endings_to_unix_style_and_vice_versa_.html http://pdes-net.org/x-haui/blog/html/2010/02/01/howto_convert_windows_style_line_endings_to_unix_style_and_vice_versa_.html <![CDATA[HowTo: Convert Windows-style line endings to Unix-style (and vice versa)]]>

HowTo: Convert Windows-style line endings to Unix-style (and vice versa)

Though there are some tools (e.g. dos2unix) available to convert between DOS/Windows (\r\n) and Unix (\n) line endings, you’d sometimes like to solve this rather simple task with tools available on any Linux box you connect to. So, here are some examples - just pick the one that fits you best: Windows to Unix:

#Shell
perl -pe '$_=~s#\r\n#\n#' < windows.txt [> linux.txt]
sed -r 's/\r$//' windows.txt [> linux.txt]

#Vim
:%s/\r$//
:set ff=unix

Unix to Windows (do you really want this?)

#Shell
perl -pe '$_=~s#\n#\r\n#' < linux.txt [> windows.txt]
sed -r 's/$/\r/' linux.txt [> windows.txt]

#Vim
:set ff=dos

Don’t forget to save the file when you’re using vim.
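
By the way, if you’re unsure which line endings a file actually uses, the file utility will tell you:

file windows.txt
#typically prints something like: windows.txt: ASCII text, with CRLF line terminators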

]]>
Mon, 01 Feb 2010 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2010/01/06/yaydl_1_3_7.html http://pdes-net.org/x-haui/blog/html/2010/01/06/yaydl_1_3_7.html <![CDATA[yaydl 1.3.7]]>

yaydl 1.3.7

A new year - a new version of yaydl. Apart from the usual bugfixes, one new feature found its way into this release: from now on, yaydl has direct support for input files, which renders the workaround via shell I/O redirection obsolete. As always, the command-line option for this and all other functions can be found in the readme file. ;-)

Download the tar.gz

]]>
Wed, 06 Jan 2010 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2009/11/04/yaydl_1_3_6.html http://pdes-net.org/x-haui/blog/html/2009/11/04/yaydl_1_3_6.html <![CDATA[yaydl 1.3.6]]>

yaydl 1.3.6

After so many people asked me to waste some more blog entries on yaydl, I eventually found some spare time to comply with their request. ;-) However, if you were expecting some fancy new features, don’t get too overexcited: v1.3.6 is just another bugfix release. Yet, I’m confident I’ll extend yaydl’s functionality till autumn 2010. :D

Download the tar.gz

]]>
Wed, 04 Nov 2009 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2009/08/03/yaydl_1_3_5.html http://pdes-net.org/x-haui/blog/html/2009/08/03/yaydl_1_3_5.html <![CDATA[yaydl 1.3.5]]>

yaydl 1.3.5

One very small bugfix, barely worth mentioning....

Download the tar.gz

]]>
Mon, 03 Aug 2009 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2009/07/09/recursive_rename.html http://pdes-net.org/x-haui/blog/html/2009/07/09/recursive_rename.html <![CDATA[Recursive Rename]]>

Recursive Rename

Referring to my last blog entry, I’m proud to present an enhanced version of Larry Wall’s/Robin Baker’s famous script (p)rename. The main difference between the original and recrename is that the latter recursively renames all files/folders in the directory tree when invoked with a directory name as an argument. So, here’s an example of how it works:

#"." represents the current directory...(more arguments are allowed)
recrename -n 's#\s+#_#g ;y#A-Z#a-z#;s#[^a-z0-9_\-.]#_#g;s#_+#_#g' .

If everything looks ok, you may omit the -n for the changes to take effect. Update: Since two programs for two very similar operations are a waste of precious disk space, I merged rename and recrename. From now on, recrename only works recursively if you explicitly add -r to your command line. Furthermore, I changed the default behaviour so that modifications now only affect basenames, even if the filename argument is a complete path, i.e. recrename 's#\s+#_#' "/home/foo/bar baz/bla blub" changes "bla blub" to bla_blub but leaves "bar baz" untouched.

Download

]]>
Thu, 09 Jul 2009 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2009/06/15/rename.html http://pdes-net.org/x-haui/blog/html/2009/06/15/rename.html <![CDATA[rename]]>

rename

One of the most annoying things about copying files from Windows systems to your local Linux box is the strange filenames Windows users consider to be state-of-the-art. Most of these names seem to consist solely of uppercase letters, special characters and, of course, lots of whitespace... However, as we’re running Linux, the solution is just one simple command away: prename (rename on some systems), which is included in the standard Perl distribution, makes it possible:

#just pretend....
rename -n 's#\s+#_#g ;y#A-Z#a-z#;s#[^a-z0-9_\-.]#_#g;s#_+#_#g' *

#"sanitize" all filenames in the current directory if the output
#from above is ok
rename 's#\s+#_#g ;y#A-Z#a-z#;s#[^a-z0-9_\-.]#_#g;s#_+#_#g' *

BTW: This won’t work out-of-the-box with find’s exec-option!

]]>
Mon, 15 Jun 2009 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2009/05/23/concatenate_pdf_files.html http://pdes-net.org/x-haui/blog/html/2009/05/23/concatenate_pdf_files.html <![CDATA[Concatenate PDF files]]>

Concatenate PDF files

There are several ways of concatenating multiple PDF files into one single PDF file. Unfortunately, most of these ways just don’t work at all, or the resulting PDF is not what you’d expect it to be. However, there’s one convenient way using ghostscript that always worked pretty well for me:

gs -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=output.pdf *.pdf

Since this is a pretty complex command, we’ll hide its complexity for future use: Put the code shown below into a newly created file /usr/local/bin/mkpdf

#!/bin/bash

[[ $# -lt 2 ]] && { echo "Usage: $0 output.pdf <inputfiles.pdf>" ; exit 1; }

OUTPUT="$1"
shift
gs -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile="$OUTPUT" "$@"

After setting up the appropriate file permissions (chmod 755 /usr/local/bin/mkpdf), mkpdf output.pdf <inputfiles.pdf> does exactly the same as the command mentioned above.

]]>
Sat, 23 May 2009 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2009/05/15/yaydl_1_3.html http://pdes-net.org/x-haui/blog/html/2009/05/15/yaydl_1_3.html <![CDATA[yaydl 1.3]]>

yaydl 1.3

Version 1.3 of yaydl fixes the support for video.google.com and improves the sound extracting feature. Check out the changelog/README for details.

Download the tar.gz

]]>
Fri, 15 May 2009 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2009/05/09/yaydl_1_2.html http://pdes-net.org/x-haui/blog/html/2009/05/09/yaydl_1_2.html <![CDATA[yaydl 1.2]]>

yaydl 1.2

This version fixes and improves the support for dailymotion.com. The default file format for videos from dailymotion is no longer flv, but mp4. 56k users, don’t panic... --forceflv still works. ;-)

Download the tar.gz

]]>
Sat, 09 May 2009 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2009/05/06/yaydl_1_1_5.html http://pdes-net.org/x-haui/blog/html/2009/05/06/yaydl_1_1_5.html <![CDATA[yaydl 1.1.5]]>

yaydl 1.1.5

As you might already have noticed, yaydl --sound fails from time to time when processing YouTube videos, i.e. the resulting file isn’t a valid mp3 file but just raw data. The reason for this error lies in the file format: earlier versions of yaydl always picked the flv version of a video, whereas newer versions try to get the higher-resolution mp4 files. Unfortunately, mplayer -dumpaudio doesn’t work properly with these files. Therefore, yaydl now uses ffmpeg -vn -acodec libmp3lame -ab 192k for non-flv video files from YouTube. Nevertheless, I would still prefer to use mplayer for this purpose, but I didn’t get it to work... So, if you know how to extract the audio track from an mp4 file with mplayer, just drop me a note.

Download the tar.gz

]]>
Wed, 06 May 2009 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2009/03/17/yaydl_1_1.html http://pdes-net.org/x-haui/blog/html/2009/03/17/yaydl_1_1.html <![CDATA[yaydl 1.1]]>

yaydl 1.1

No bugfixes this time, just support for another video sharing site (sevenload) :-)

Download the tar.gz Just download the perl script

]]>
Tue, 17 Mar 2009 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2009/03/14/yaydl_1_0.html http://pdes-net.org/x-haui/blog/html/2009/03/14/yaydl_1_0.html <![CDATA[yaydl 1.0]]>

yaydl 1.0

After more than one year of development, I finally decided to release yaydl 1.0! I rewrote large parts of the existing source code from scratch for this release, but I also added lots of new features and support for one new video sharing website (dailymotion). The main project page contains some more information.

Download the tar.gz Just download the perl script

]]>
Sat, 14 Mar 2009 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2009/03/11/irssi_colors_pl.html http://pdes-net.org/x-haui/blog/html/2009/03/11/irssi_colors_pl.html <![CDATA[Irssi - colors.pl]]>

Irssi - colors.pl

Just a quick hack to display some sort of color table in irssi. After loading the script, /colors will give you a list of all colors available in irssi as well as a short description of how to use them.

Download

]]>
Wed, 11 Mar 2009 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2009/03/08/mfactorial.html http://pdes-net.org/x-haui/blog/html/2009/03/08/mfactorial.html <![CDATA[mfactorial]]>

mfactorial

Recently, I was playing around with the GNU Multiple Precision Arithmetic Library. While most of the resulting code snippets are pretty much useless, the attached C program might be worth a blog entry. It calculates the factorial of a given number. At first glance, nothing special at all, but it does this task rather fast:

/* Naive algorithm*/
time ./fak  500000 > /dev/null

real    2m4.756s
user    2m4.637s
sys 0m0.094s


/*mfactorial*/
time ./mfactorial 500000 > /dev/null

real    0m7.855s
user    0m13.226s
sys 0m0.050s

Maybe I’ll re-implement the whole thing using a real clever algorithm (e.g. split-recursive) some day, but for now, I’m content with the current speed. :-)

Download

BTW: The compiling instructions are included in the *.c file
]]>
Sun, 08 Mar 2009 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2009/03/08/yaydl_0_9_1a.html http://pdes-net.org/x-haui/blog/html/2009/03/08/yaydl_0_9_1a.html <![CDATA[yaydl 0.9.1a]]>

yaydl 0.9.1a

I just finished my work on yaydl 0.9.1a. The most significant change might be the support for HD videos on YouTube. Furthermore, if no HD version is available, yaydl will by default download the videos intended for iPods (mp4 instead of flv). You may override this behaviour by adding the --forceflv parameter to your command line.

Download the latest version

]]>
Sun, 08 Mar 2009 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2009/01/31/geballte_kompetenz.html http://pdes-net.org/x-haui/blog/html/2009/01/31/geballte_kompetenz.html <![CDATA[Geballte Kompetenz]]>

Geballte Kompetenz

This afternoon, at around 4 p.m., the terrible truth came to light: every page indexed by Google may harm your computer!!!1eleven If you missed the whole thing, you can fortunately still fall back on the screenshots below. *g*

http://pdes-net.org/x-haui/images/google_small.png http://pdes-net.org/x-haui/images/foobar_small.png
http://pdes-net.org/x-haui/images/web_small.png http://pdes-net.org/x-haui/images/x-haui_small.png
]]>
Sat, 31 Jan 2009 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2009/01/12/graphical_notifications_for_centerim.html http://pdes-net.org/x-haui/blog/html/2009/01/12/graphical_notifications_for_centerim.html <![CDATA[Graphical notifications for centerim]]>

Graphical notifications for centerim

Using centerim’s external configuration file makes it pretty easy to get notified about incoming messages. However, if you simply added something like the following to your ~/.centerim/external, you’d be informed about *every* single incoming message, which doesn’t make much sense, at least not in my eyes...

%action  notification
event msg
proto all
status all
options nowait

%exec
#!/bin/bash
#msg=`cat`
nick=$(head -n 1 $CONTACT_INFODIR/info)
echo "new message from $nick"  | dzen2 -bg red -fg white -p

To deal with this problem, I wrote two small perl scripts: centerim_notify.pl & offline.pl. The first one takes care of the notifications and stores the screen name of your IM buddy in a hidden file (~/.centerim/.seen) so you won’t get notified again. The second one removes a chat partner from that list when he logs off. So, here’s how to use them:

  • Save both scripts to ~/.centerim/scripts/ and make them executable (chmod +x ...)
  • Put the following into your ~/.centerim/external:
%action  notification
event msg
proto all
status all
options nowait

%exec
#!/bin/bash
nick=$(head -n 1 $CONTACT_INFODIR/info)
~/.centericq/scripts/centerim_notify.pl "$nick"

%action offline
event offline
proto all
status all
options nowait

%exec
#!/bin/bash
nick=$(head -n 1 $CONTACT_INFODIR/info)
~/.centericq/scripts/offline.pl "$nick"
  • Create a bash alias that deletes the “seen file” every time you launch centerim:
echo "alias centerim='rm -f ~/.centericq/.seen && centerim -o'" >> ~/.bashrc
  • Run centerim!

Requirements: dzen2, perl and centerim ;-)
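
For the curious, here’s a rough shell sketch of the dedup logic centerim_notify.pl implements - not the actual script (which is Perl), just the idea of remembering who already triggered a notification:

#!/bin/bash
# sketch only: notify via dzen2 the first time a contact writes, then remember the nick
seen=~/.centericq/.seen        # same file the alias above removes on startup
nick="$1"
touch "$seen"
if ! grep -qx -- "$nick" "$seen"; then
    echo "new message from $nick" | dzen2 -bg red -fg white -p 5
    echo "$nick" >> "$seen"
fi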

]]>
Mon, 12 Jan 2009 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2009/01/02/yaydl_0_9_0a.html http://pdes-net.org/x-haui/blog/html/2009/01/02/yaydl_0_9_0a.html <![CDATA[yaydl 0.9.0a]]>

yaydl 0.9.0a

Time for another update - yaydl 0.9.0a is now capable of reading URLs from STDIN, permitting shell commands like the following one:

yaydl.pl --stdin < file_containing_multiple_urls

Additionally, I fixed some minor bugs affecting the routines for vimeo/metacafe/google.

Download the latest version

]]>
Fri, 02 Jan 2009 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2008/12/30/_howto_delete_vim_s_backup_files.html http://pdes-net.org/x-haui/blog/html/2008/12/30/_howto_delete_vim_s_backup_files.html <![CDATA[[HowTo] Delete vim's backup files]]>

[HowTo] Delete vim’s backup files

In general, vim’s auto backup function (:set backup) is a rather useful feature that has saved me, or more precisely, some of my files, several times. Nevertheless, all those files with an appended tilde flooding your filesystem are quite annoying. So, here’s a way to get rid of them in one go:

#find all backup files in your home directory
find ~ -name "*~"
#find and delete all backup files in your home directory
find ~ -name "*~" -delete
#find all backup files and delete them only if the original files exist!
find ~ -name "*~" -exec bash -c '[ -e "${1%?}" ] && rm -f -- "$1";' _ {} \;
]]>
Tue, 30 Dec 2008 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2008/11/12/vimrc.html http://pdes-net.org/x-haui/blog/html/2008/11/12/vimrc.html <![CDATA[vimrc]]>

vimrc

My current vimrc file contains some settings I consider to be essential for my daily work with vim. I hope this might give you some ideas for your own configuration.

]]>
Wed, 12 Nov 2008 00:00:00 +0100
http://pdes-net.org/x-haui/blog/html/2008/09/25/yaydl_0_8_9a_update_.html http://pdes-net.org/x-haui/blog/html/2008/09/25/yaydl_0_8_9a_update_.html <![CDATA[yaydl 0.8.9a (update)]]>

yaydl 0.8.9a (update)

yaydl - yet another update... ;-) This version contains support for another video site - vimeo.com. Have fun! UPDATE: version 0.8.9a provides support for video.google.com and includes some bugfixes.

Download the latest version

]]>
Thu, 25 Sep 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/09/24/yaydl_0_8_7a_update_.html http://pdes-net.org/x-haui/blog/html/2008/09/24/yaydl_0_8_7a_update_.html <![CDATA[yaydl 0.8.7a (update)]]>

yaydl 0.8.7a (update)

A new alpha version of yaydl is available now. Downloading videos from youtube, myvideo and metacafe should work fine, whereas the support for clipfish is currently on hold. BTW: A final version will be released along with DNF ;-) UPDATE: I’ve just finished my work on version 0.8.7a - everything should be working fine again. :-)

Download the latest version

]]>
Wed, 24 Sep 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/09/14/dhttpd_svg_z_patch.html http://pdes-net.org/x-haui/blog/html/2008/09/14/dhttpd_svg_z_patch.html <![CDATA[dhttpd svg(z) patch]]>

dhttpd svg(z) patch

If you’re using dhttpd you might have noticed that it sends the wrong HTTP headers for svg(z) files. ;) To fix this problem, all you need to do is apply the patch linked below.
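
After patching, the headers should look roughly like this (localhost and the file names are just placeholders here):

curl -sI http://localhost/test.svg  | grep -i content-type
#   Content-Type: image/svg+xml
curl -sI http://localhost/test.svgz | grep -i -E 'content-(type|encoding)'
#   Content-Type: image/svg+xml
#   Content-Encoding: gzip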

Download

]]>
Sun, 14 Sep 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/29/what_s_my_public_ip_.html http://pdes-net.org/x-haui/blog/html/2008/08/29/what_s_my_public_ip_.html <![CDATA[What's my public IP?]]>

What’s my public IP?

Here are two ways to determine your public IP address using curl, sed and grep. I recommend the second one.

curl -s www.wieistmeineip.de | grep class=\"ip\" | sed -r 's#(.*>)(.*)(<.*)#\2#'
curl -s checkip.dyndns.org | sed -r 's#(.*: )([0-9.]*)(<.*)#\2#'
]]>
Fri, 29 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/25/useful_bash_functions_ii.html http://pdes-net.org/x-haui/blog/html/2008/08/25/useful_bash_functions_ii.html <![CDATA[Useful bash functions II]]>

Useful bash functions II

Most perl scripts I write start like this:

#!/usr/bin/perl
use strict;
use warnings;
#maybe the GPL or some other stuff...

Additionally, it’s always necessary to set the “x flag” via chmod - all in all quite annoying. That’s why I added the function shown below to my ~/.bashrc:

np() {
    gplfile=~/Dokumente/gpl.txt
    if [ $# -eq 0 ]
    then
        echo "filename required..."
        return
    fi
    if [ -e "$1" ]
    then
        echo "file already exists!"
        return
    fi
    touch "$1" || { echo "can't touch $1" ; return ; }
    echo "#!/usr/bin/perl" >> "$1"
    if [ "$2" = "gpl" ] && [ -e "$gplfile" ]
    then
        cat "$gplfile" >> "$1"
    fi
    echo "" >> "$1"
    echo "use strict;" >> "$1"
    echo "use warnings;" >> "$1"
    echo "use 5.10.0;" >> "$1"
    echo "use feature 'say';" >> "$1"
    echo "" >> "$1"
    chmod 700 "$1"
    [ -n "$EDITOR" ] || EDITOR=vim
    $EDITOR "$1"
}

Usage: np filename [gpl]

]]>
Mon, 25 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/21/useful_bash_functions_i.html http://pdes-net.org/x-haui/blog/html/2008/08/21/useful_bash_functions_i.html <![CDATA[Useful bash functions I]]>

Useful bash functions I

Put the following in your ~/.bashrc

up() {
    # no argument: behave like a plain "cd .."
    if [ $# -eq 0 ]
    then
        cd ..
        return
    fi
    # reject arguments that contain anything but digits
    if [ "${1//[^0-9]/}" != "$1" ]
    then
        echo "Not a number"
        return
    fi
    # build "../../..." with $1 repetitions and change into it
    STRING=""
    for (( i=0; i<$1 ; i++ ))
    do
        STRING="$STRING../"
    done
    cd "$STRING"
}

up is equivalent to cd .. and up N jumps up N directories in the directory tree.
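
For example:

up      # same as cd ..
up 3    # same as cd ../../..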

]]>
Thu, 21 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/18/battery_status_script.html http://pdes-net.org/x-haui/blog/html/2008/08/18/battery_status_script.html <![CDATA[Battery status script]]>

Battery status script

Some weeks ago, my laptop suddenly turned off because its battery was completely flat. To prevent this from happening again, I wrote a small script that notifies me when a critical charge state is reached. Additionally, it shuts the computer down before the battery does. ;-) Please note that this isn’t meant to be a general purpose solution, so feel free to adapt it to your own needs.
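
The actual script is the download below; as a rough illustration of the idea, a minimal version could look like this, assuming a /sys/class/power_supply/BAT0 interface and dzen2 for the notification:

#!/bin/bash
# sketch only: warn at a low charge level, shut down at a critical one (needs root)
bat=/sys/class/power_supply/BAT0
status=$(cat "$bat/status")
capacity=$(cat "$bat/capacity")
if [ "$status" = "Discharging" ]; then
    if [ "$capacity" -le 5 ]; then
        shutdown -h now
    elif [ "$capacity" -le 15 ]; then
        echo "battery at ${capacity}%" | dzen2 -bg red -fg white -p 10
    fi
fi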

Download

]]>
Mon, 18 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/15/screen_ein_kurzer_einblick.html http://pdes-net.org/x-haui/blog/html/2008/08/15/screen_ein_kurzer_einblick.html <![CDATA[screen - ein kurzer Einblick]]>

screen - ein kurzer Einblick

I’ve written a short beginner’s tutorial on using GNU screen. You can find it here.

]]>
Fri, 15 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/13/indent_a_file_in_2_seconds.html http://pdes-net.org/x-haui/blog/html/2008/08/13/indent_a_file_in_2_seconds.html <![CDATA[Indent a file in 2 seconds]]>

Indent a file in 2 seconds

Indenting a whole file in Vim is pretty easy: type the following in normal mode:

ggVG=

Done!
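
A quick breakdown, plus an equivalent that skips visual mode:

gg   - jump to the first line
V    - start linewise visual mode
G    - extend the selection to the last line
=    - re-indent the selection

gg=G achieves the same without entering visual mode.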

]]>
Wed, 13 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/11/new_website_online_.html http://pdes-net.org/x-haui/blog/html/2008/08/11/new_website_online_.html <![CDATA[new website online!]]>

new website online!

A few days ago, I stumbled across Volker Birk’s blog. ‘Cause I really liked the design, I decided to rebuild my own website using NanoBlogger. Although I’m not completely satisfied with it yet, I think the result is already pretty decent. :-) BTW: The old site is still available here.

]]>
Mon, 11 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/11/palindromic_numbers.html http://pdes-net.org/x-haui/blog/html/2008/08/11/palindromic_numbers.html <![CDATA[palindromic numbers]]>

palindromic numbers

Find all numbers from 1 to 999999 that are palindromic in base 2 and base 16.
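
The actual program is the download below; as a rough shell illustration of the brute-force check (run over a small range here, since spawning bc and rev for every number is slow):

for ((i=1; i<=999; i++)); do
    bin=$(echo "obase=2; $i" | bc)   # binary representation
    hex=$(printf '%x' "$i")          # hexadecimal representation
    # print the number if both representations read the same backwards
    [ "$bin" = "$(rev <<< "$bin")" ] && [ "$hex" = "$(rev <<< "$hex")" ] && echo "$i"
done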

Download

]]>
Mon, 11 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/11/remove_the_last_character_from_a_string_variable_bash_.html http://pdes-net.org/x-haui/blog/html/2008/08/11/remove_the_last_character_from_a_string_variable_bash_.html <![CDATA[Remove the last character from a string/variable (bash)]]>

Remove the last character from a string/variable (bash)

echo ${var:0:${#var}-1}    # substring from index 0, length minus one
echo ${var%?}              # strip the shortest suffix matching a single character
echo $var | sed 's/.$//'   # let sed delete the last character

You could also use chop ;)

]]>
Mon, 11 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/11/bash_list_just_directories.html http://pdes-net.org/x-haui/blog/html/2008/08/11/bash_list_just_directories.html <![CDATA[bash: list just directories]]>

bash: list just directories

ls -d */
ls -l | grep ^d
find . -maxdepth 1 -type d
ls -F | grep /$
ls -1 | while read line; do if [ -d "$line" ]; then echo $line; fi; done
ls -1|while read l;do [ -d "$l" ]&&echo $l;done
perl -e 'foreach(glob(".* *")){print "$_\n" if (-d $_)}'
perl -e 'opendir(DIR,".");foreach(readdir(DIR)){print $_ ."\n" if(-d $_);}'
just kidding ;)
...
]]>
Mon, 11 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/11/zig_zag_scan_in_perl.html http://pdes-net.org/x-haui/blog/html/2008/08/11/zig_zag_scan_in_perl.html <![CDATA[zig-zag scan in Perl]]>

zig-zag scan in Perl

zig-zag scan in Perl :-)

]]>
Mon, 11 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/11/hint_on_2_dimensional_arrays_in_perl_.html http://pdes-net.org/x-haui/blog/html/2008/08/11/hint_on_2_dimensional_arrays_in_perl_.html <![CDATA[hint on 2-dimensional arrays in Perl:]]>

hint on 2-dimensional arrays in Perl:

$width = @{$array[0]};    # number of columns (first row evaluated in scalar context)
$height = @array;         # number of rows
]]>
Mon, 11 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/11/rot13_encryption_of_a_file_.html http://pdes-net.org/x-haui/blog/html/2008/08/11/rot13_encryption_of_a_file_.html <![CDATA[rot13 "encryption" of a file:]]>

rot13 “encryption” of a file:

# rot13 is self-inverse, so the same command also decodes the file
tr A-Za-z N-ZA-Mn-za-m < file
]]>
Mon, 11 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/11/calculate_the_sum_of_digits_of_a_given_number.html http://pdes-net.org/x-haui/blog/html/2008/08/11/calculate_the_sum_of_digits_of_a_given_number.html <![CDATA[Calculate the sum of digits of a given number]]>

Calculate the sum of digits of a given number

echo 1234 | sed 's/\(.\)/\1\+/g' | sed 's/.$//' | bc -l   # 1234 -> 1+2+3+4+ -> 1+2+3+4 -> bc
perl -e 'print eval join "+", split //, 12345;'           # split into digits, join with +, eval
]]>
Mon, 11 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/11/perl_password_generator.html http://pdes-net.org/x-haui/blog/html/2008/08/11/perl_password_generator.html <![CDATA[perl password generator]]>

perl password generator

Another password generator, written in perl. As it’s more advanced than the old bash script, I recommend using this one if you’re too lazy to create a secure password yourself. The special thing about this script is that you can specify the characters that may occur in the password (e.g. it’s possible to create a password containing just the letters a, b and c - don’t do this...).

Download

]]>
Mon, 11 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/11/bash_password_generator.html http://pdes-net.org/x-haui/blog/html/2008/08/11/bash_password_generator.html <![CDATA[bash password generator]]>

bash password generator

A simple shell script that generates a random password from a given set of characters.
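
The script itself is the download below; the core of such a generator is essentially a one-liner along these lines (character set and length chosen arbitrarily here):

tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16; echo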

Download

]]>
Mon, 11 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/11/irssi_kick_pl.html http://pdes-net.org/x-haui/blog/html/2008/08/11/irssi_kick_pl.html <![CDATA[Irssi - kick.pl]]>

Irssi - kick.pl

Ever wanted to kick multiple users with just one command?

kick.pl makes it possible!

Usage:

/kck user1 user2 user3

Download

]]>
Mon, 11 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/11/irssi_wiki_pl.html http://pdes-net.org/x-haui/blog/html/2008/08/11/irssi_wiki_pl.html <![CDATA[Irssi - wiki.pl]]>

Irssi - wiki.pl

This script provides a fast way to “search” wikipedia from irssi.

Example:

/wiki rolling stones

prints http://de.wikipedia.org/wiki/Rolling_Stones in the current channel

and

/ewiki rolling stones

expands to http://en.wikipedia.org/wiki/Rolling_Stones

Download

Note: this script requires curl.

]]>
Mon, 11 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/10/irssi_f_h_pl.html http://pdes-net.org/x-haui/blog/html/2008/08/10/irssi_f_h_pl.html <![CDATA[Irssi - f@h.pl]]>

Irssi - f@h.pl

Folding@home (F@H) is the most powerful distributed computing cluster in the world and one of the world’s largest distributed computing projects. The goal of the project is “to understand protein folding, misfolding, and related diseases.” To keep everyone on IRC informed about my current work progress, I created this small script. :-)

Download

Note: this script requires curl.

]]>
Sun, 10 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/10/irssi_mocnp_pl.html http://pdes-net.org/x-haui/blog/html/2008/08/10/irssi_mocnp_pl.html <![CDATA[Irssi - mocnp.pl]]>

Irssi - mocnp.pl

This is a now-playing script for moc and irssi. It also enables you to control moc from irssi. I recommend applying my patch to moc, as it’s the only way to determine whether you turned shuffle/repeat on or off.

Usage:

/mocnp     prints the currently played song in the active channel
/play      starts playing, if moc's in state "pause"
/pause     pause the current song
/start     starts moc if it's not running or if it's in mode "stop"
/stop      stops moc
/next      next song
/prev      previous song
/shuffle   turns shuffle on/off (requires >=mocp 2.5)
/repeat    turns repeat on/off (requires >=mocp 2.5)

Download

]]>
Sun, 10 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/10/irssi_nicklist_pl.html http://pdes-net.org/x-haui/blog/html/2008/08/10/irssi_nicklist_pl.html <![CDATA[Irssi - nicklist.pl]]>

Irssi - nicklist.pl

This script displays a nicklist at the top of your irssi window. Since it uses irssi’s split function, it works without any external programs. Although it works fine for me, a newer, better version will come soon...

Download

]]>
Sun, 10 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/10/shuffle_repeat_patch_for_moc.html http://pdes-net.org/x-haui/blog/html/2008/08/10/shuffle_repeat_patch_for_moc.html <![CDATA[shuffle/repeat patch for moc]]>

shuffle/repeat patch for moc

A lot of moc’s functions can easily be controlled via command-line switches. Unfortunately, by default there’s no way to determine whether shuffle/repeat is turned on or off without checking moc’s ncurses interface. So, I created a small patch that enables moc to display these two values along with the song information on the command line.

Applying this patch is pretty easy:

patch interface.c patch

Example for the altered output:

State: PAUSE
File: /my/favorite/musicfile.mp3
Title: foobar
Artist: foo
SongTitle: foobar
Album: bar
TotalTime: 01:28
TimeLeft: 00:54
TotalSec: 88
CurrentTime: 00:34
CurrentSec: 34
Bitrate: 192Kbps
AvgBitrate: 199Kbps
Rate: 44KHz
Shuffle: on
Repeat: off

Download

]]>
Sun, 10 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/10/yaydl_yet_another_youtube_downloader_no_longer_maintained_.html http://pdes-net.org/x-haui/blog/html/2008/08/10/yaydl_yet_another_youtube_downloader_no_longer_maintained_.html <![CDATA[yaydl - yet another youtube downloader (No longer maintained!)]]>

yaydl - yet another youtube downloader (No longer maintained!)

yaydl - more than a youtube downloader!

Currently supported sites:

  • youtube
  • metacafe.com
  • clipfish.de
  • myvideo.de
  • video.google.com
  • vimeo.com
  • dailymotion
  • sevenload
  • video.golem.de

Further features:

  • downloading of multiple videos “simultaneously”
  • encoding from flv to xvid using mencoder or ffmpeg
  • extracting the soundtrack of a video using ffmpeg or mplayer+lame
  • id3-tagging of the extracted sound files
  • auto-renaming
  • support for HD videos on youtube/dailymotion

Requirements:

  • Getopt::Long
  • LWP::UserAgent
  • MP3::Info
  • Term::ProgressBar

On debian-based systems, an apt-get install libwww-perl libgetopt-long-descriptive-perl libmp3-info-perl libterm-progressbar-perl will suffice.
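
On other systems, the modules can presumably be pulled in straight from CPAN, e.g.:

cpan LWP::UserAgent MP3::Info Term::ProgressBar   # Getopt::Long ships with perl itself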

Installation:

Just run the included install file. :-)

Changelog

Download the current version:

Download

]]>
Sun, 10 Aug 2008 00:00:00 +0200
http://pdes-net.org/x-haui/blog/html/2008/08/10/brute.html http://pdes-net.org/x-haui/blog/html/2008/08/10/brute.html <![CDATA[brute]]>

brute

Now that’s something pretty useless...

A simple MD5 brute forcer, written in C.

It calculates all combinations of a given set of characters and compares the MD5 sum of each combination to a given hash. brute computes about 3 million hashes per second on a Pentium 4 (2.6 GHz). Usage:

brute length hash

length represents the maximum length of a string, i.e. if it’s 6, brute won’t try words longer than 6 characters. hash is an md5 hash, for example 5cfa779519a9789fad5dfe0de784ee4c
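
To try it out, you can hash a known word first and then let brute recover it - assuming the built-in character set covers lowercase letters (the hash below is the well-known MD5 of “abc”):

printf '%s' abc | md5sum
#   900150983cd24fb0d6963f7d28e17f72  -
brute 4 900150983cd24fb0d6963f7d28e17f72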

Download

]]>
Sun, 10 Aug 2008 00:00:00 +0200