Recovering data from a "wet" Nokia

Let's skip over why the phone fell into seawater; suffice it to say that two-year-old kids don't know much about electronics. All of this happened about a year ago, but I think it's still worth telling...

So, a Nokia 6101 took a bath at the beach and suddenly stopped working (who could blame it?); unfortunately, that model doesn't have an external memory card, but an integrated chip. The owner, who wasn't me, had some important stuff in the phone's memory, and she was willing to pay to recover it.

She first asked a couple of electricians and phone shops, but they all said it was impossible to recover data from an oxidized phone. Then I suggested calling a data-recovery company or Nokia itself; but three companies (yes, the kind that can recover data from a burnt hard disk for thousands of dollars) said they couldn't recover data from a proprietary chip, and Nokia politely said they don't deal with repairs or recoveries. Not even an official Nokia repairman could help find a solution.

Before throwing in the towel, I disassembled the phone to see what was inside. Nothing was really oxidized, but maybe some sea salt was still on the circuits. So, as the masters teach, I soaked the phone in distilled water for a few minutes, then dried it with a hairdryer. The LCD display was gone, and no signal came through the USB cable (DKU-5). Here's a picture of the dismantled phone:


However, a tiny little battery (highlighted in red) caught my attention. It was soldered onto the circuit board and, according to my tester, it was dead. Could that be the real problem? I broke the solder joints and replaced the battery with a new one (cost: 3€). Let's reconnect the phone battery (with some tape) and see if the patient is alive again...



And it is! With Nokia PC Suite and gnokii, all the surviving data could pass through the USB cable.

And this was the simple story of a phone owner who was about to lose all the data on her phone because of a lousy 3€ battery, who was willing to pay - God knows how much - to recover it all, and who was completely abandoned by Nokia and by a bunch of professional (???) data-recovery companies.

The moral of the tale is different depending on who you are in the story:

  • If you are the owner of the phone, and your phone absolutely has to fall into seawater, do not despair: it's still possible to recover your data, somewhere, somehow, for just 3€;
  • If you are one of the data-recovery companies: go find another job;
  • If you are Nokia: do not design phones with tiny, lousy, soldered batteries without even telling your official technical support partners.

Strange perspectives on Google Maps

Satellite pictures sometimes suffer from strong perspective problems. What about these two buildings in Milan that seem about to fall down?


Original link

The "falling" effect is due to the fact that the pictures merged into the global map are (almost) assonometric projections taken in two different directions. Of course, the higher are the buildings in the map the stronger is this kind of mental (rather than optical) illusion. Here are a view on Milan and another one on NY, Empire State Building:


Original link

Original link

I collected some other Milan maps (one, two, three) with the same problem. But all around the world's maps there are tons of similar views, far more interesting than these...

HTML page with pictures

Quick and simple: a short bash script to create an HTML page with all the pictures in the current directory.

echo -e "<html><head><title>Icons</title></head><body>\n" > file.html
ls -1 *.png | while read a; do echo -e "<img src='./$a'/>\n" >> file.html; done
echo -e "</body></html>\n" >> file.html

Just add different wildcards for every image format you want to include.
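
For example, to also pick up JPEG and GIF files, the loop becomes something like this (just a sketch; extend the pattern list to taste, and 2>/dev/null silences ls when a pattern matches nothing):

ls -1 *.png *.jpg *.jpeg *.gif 2>/dev/null | while read a; do echo -e "<img src='./$a'/>\n" >> file.html; done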

Online vs. offline hashes

Someone asked me why the hash of any string produced by Hash'em all! is different from the hash of the same string produced in bash by

echo "string" | md5sum

The reason is simple: the "echo" command automatically adds a newline at the end of the input string. The "-n" option tells the command not to add it. So

echo -n "string" | md5sum

will give the same result as Hash'em all!. So simple... but someone was going crazy over this.
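
If you want to see the culprit byte with your own eyes, dump what actually reaches the hashing command:

echo "string" | od -c     # note the trailing \n
echo -n "string" | od -c  # no trailing newline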

Open in new windows with HTML 4.01 Strict

As every web programmer should know, in HTML 4.01 Strict the "target" attribute of links is no longer allowed. I would like to personally thank the WWW Consortium for this, because I hate links that automatically open in a new window (while my best friend Firefox helps me keep them in new tabs instead of new windows). However, under certain circumstances (?) it may be desirable to disturb the user by forcing the browser not to leave the current page (i.e. you don't want users to go away from your page); strange but true, you don't care about breaking the user's browsing history, but breaking the W3C standards would feel like a profanation. JavaScript comes to the rescue: just replace every

<a href="http://..." target="_blank">


with

<a href="http://..." onclick="window.open(this.href); return false;">


Welcome to the dark side of standard compliance!

FS-independent bash backup

Dirty and quick, a little script to back up an entire hard drive from bash:

#!/bin/bash
# License: do what you want but cite my blog ;)
# http://binaryunit.blogspot.com
#
# *** superSalva 1.0 by Eugenio Rustico ***
# Backup utility for ALL types of partitions
#
# FEATURES
# - Easy disk/partition imaging *even of unknown filesystems*
# - On-the-fly compression: no need for temporary files
# - Customizable process
#
# LIMITS
# - Could be faster
# - No fs-specific support
# - Reads and compresses even all-zero regions
#
# TODO:
# - Support for decompressor without zcat equivalent
# - Support for creating (better if bootable) iso images
# - Support for md5sum integrity verification (!)
# - Free space checking
# - Wizard
# - Final statistics and estimated time
# - Trap for CTRL+C

# Device to be backed up, even if NTFS or unknown. May be a partition or a whole disk
export DEVICE=/dev/hda6

# AUTOMATIC: device capacity, in kb (the device must be mounted for df to see it)
export DEVICE_DIM=`df -k | grep $DEVICE | awk '{print $2}'`

# Currently unused
export DEVICE_FREE=`df -k | grep $DEVICE | awk '{print $4}'`

# Destination/source directory. If used as destination, it should have as much free space as $DEVICE
# Do NOT put the destination on the same drive you're backing up!
# export DIR=/d/ripristino/e
export DIR=/pozzo

# Destination/source base filename. During backup files are overwritten.
export FILENAME=Immagine_E_20_6_2006

# On-the-fly compression commands. Should support reading from stdin and writing to stdout
# DEFAULT: gzip, well-known
# EXPERIMENTAL: 7zip, slower but better compression. YOU MUST HAVE 7zip ALREADY INSTALLED. But does "7cat" exist?
export COMPRESSOR=gzip
export DECOMPRESSOR=zcat

# Compression parameters
# Currently just a compression level passed to gzip
export COMPRESSOR_PARAMS=-9

# Compressed file extension. Optional but useful
export EXTENSION=gz

# Number of pieces to skip while backing up/restoring
# Useful for testing and for resuming interrupted backups
# CHANGE THIS ONLY IF YOU KNOW WHAT YOU ARE DOING
# Default: 0
export SKIP=0

# Block size. NOTE: the default piece dimensions depend on it!
# CHANGE THIS ONLY IF YOU KNOW WHAT YOU ARE DOING
export BLOCK_SIZE=1024

# Dimension of the pieces to compress and back up, in kb.
# Too small = too many pieces, not useful
# Too large = inefficient compression
# MAX = 4194303 (if the destination fs is not FAT, it may be higher), few pieces
# MEDIUM VALUES:
# 524288 (512 MB)
# 262144 (256 MB)
# 131072 (128 MB), many pieces
# 65536 (64 MB)
# 32768 (32 MB), definitely too many pieces!
# MIN = 1 (nonsense)
# DEFAULT = 1048576 (1 GB, RECOMMENDED)
# NOTE: these values are block-size dependent (here we use 1024-byte blocks)
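# (e.g. PIECE_DIM=32768 with BLOCK_SIZE=1024 means 32768*1024 bytes = 32 MB per piece)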
export PIECE_DIM=32768

# AUTOMATIC: number of pieces
# (off by one, but the loop below is zero-based, so it works out)
export NUM=$(($DEVICE_DIM/$PIECE_DIM))

# Action: BACKUP or RESTORE
export ACTION=BACKUP

# Want to see what I'm doing?
export VERBOSE=1

echo
echo
echo " *** superSalva 1.0 *** "
echo
echo
echo "Device: $DEVICE ($DEVICE_DIM kb)"
echo Compressor: $COMPRESSOR
echo Decompressor: $DECOMPRESSOR
echo Parameters: $COMPRESSOR_PARAMS
echo Destination: $DIR/$FILENAME.NUM.TOT.$EXTENSION
echo Device: $DEVICE
echo Pieces: $(($NUM+1)) pieces, $PIECE_DIM kb each
echo Action: $ACTION
echo
echo "Ready? CTRL+C to abort, ENTER to start. This may take a LONG time."
read
echo
echo

export SUM=0
for i in `seq $SKIP $NUM`
do
export FILEN="$DIR/$FILENAME.$(($i+1)).$(($NUM+1)).$EXTENSION"
export SK=$(($i*$PIECE_DIM))
if [ "$ACTION" == "RESTORE" ]
then
echo "* Decompressing and writing piece $(($i+1)) of $(($NUM+1)) (kb $(($SK+1)) to $((($i+1)*$PIECE_DIM)))..."
export COMMAND="$DECOMPRESSOR $FILEN | dd of=$DEVICE seek=$SK count=$PIECE_DIM bs=$BLOCK_SIZE"
if [ $VERBOSE == 1 ]; then echo $COMMAND; fi
$COMMAND
echo "* Successfully restored $FILEN"
echo
else
echo "* Reading and compressing piece $(($i+1)) of $(($NUM+1)) (kb $(($SK+1)) to $((($i+1)*$PIECE_DIM)))..."
export COMMAND="dd if=$DEVICE skip=$SK count=$PIECE_DIM bs=$BLOCK_SIZE | $COMPRESSOR $COMPRESSOR_PARAMS > $FILEN"
if [ $VERBOSE == 1 ]; then echo $COMMAND; fi
$COMMAND
export LAST_DIM=$((`ls -l $FILEN | awk {'print $5'}`/1000))
echo "* Saved $FILEN ($LAST_DIM kb, ratio $(((100*$LAST_DIM)/$PIECE_DIM))%)"
export SUM=$(($SUM+$LAST_DIM))
echo
fi
done

echo
if [ "$ACTION" == "RESTORE" ]
then
echo "Finished. $DEVICE seems to be restored."
else
echo "Finished. $(($NUM+1)) files, tot $SUM kb ($((100*$SUM/$DEVICE_DIM))% of original $DEVICE size)"
fi
echo Bye!
echo

Thanks to dd. Features:
  • On the fly compression
  • No need for temporary files
  • File-system independent
  • Backup and restore facilities
  • Keeps master boot records
  • ...
TODO: lots of things (checksumming facilities, a configuration wizard, free space checking, command tests...); the complete TODO list is inside the script.
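
Until the checksumming item on that list lands, a manual integrity check is easy to do by hand; a quick sketch, using the $DIR/$FILENAME naming scheme configured in the script above:

# Hash every piece right after the backup finishes...
md5sum /pozzo/Immagine_E_20_6_2006.*.gz > /pozzo/Immagine_E_20_6_2006.md5

# ...and verify them before restoring
md5sum -c /pozzo/Immagine_E_20_6_2006.md5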

Call for developers: stop designing for IE!

There's one big truth you learn once you have a little experience in web design and/or development: you get stuck dealing with Internet Explorer bugs.

The work of web developers is nowadays made of two parts:

  1. Write the website and test it (usually one test is enough for Firefox 1, Firefox 2, Opera, Safari, Camino, Konqueror, etc. on any platform)
  2. Rewrite the CSS and test it again with Internet Explorer 6 and 7
Internet Explorer (any version) is not standards compliant. Yes, we're talking about W3C standards, the ones crafted with effort by the members of this worldwide consortium to bring a bit of order to the chaos of internet technologies and to improve website usability. Ironically, Microsoft is a W3C member, despite its well-known attitude of not complying with many international standards.

Now, what's the problem? Simply, because of Microsoft's whims regarding standards (especially CSS), most web developers have to do double the work. Personally, I'm tired of wasting time (and money) double-debugging my work. And because so many people use Internet Explorer (mostly not intentionally), no one would pay for a website that isn't viewable in IE.

A solution? If you get to decide about your website (read: if it's not commissioned work, or you can talk it over with your boss), do the right thing: do not test and redesign your CSS for IE. Just leave a warning like this one:

Warning: due to a bug in Internet Explorer, this website may look ugly. To view this page correctly, please use a standards-compliant browser. Thanks.

This may seem rough, but smart readers will catch the irony and, above all, it's perfect if you're tired of wasting your work time on IE. Adding a sponsored link to Firefox (one like "Get Firefox for better browsing", you know what I'm talking about) below the warning makes it even more useful.

Please note that this way you won't lose all your IE users: they will just see the website as IE naturally renders it (ugly), they will be informed of the cause, and they will still be able to use it. Moreover, if they start using Firefox (or Opera, or...) they'll get a better browsing experience (especially people who still use primitive non-tabbed browsers like IE6). And they'll be thankful to you.

Web developers (and their bosses) should keep in mind that if a standards-compliant website is not displayed correctly in IE, this is not a problem with the website itself: it's a problem with IE. And users, both geeks and inexperienced ones, have the right to know that someone gave them a buggy browser. Hiding this by building a separate stylesheet just for IE feeds users' ignorance and keeps web development tedious and time-wasting.

A live example of this philosophy is Hash'em all!, one of my latest works: IE is the only browser in which the central DIVs are not sized correctly and stretch across the whole window from left to right. I'll add some screenshots ASAP.

You probably don't need proof that IE causes so many problems for developers, but in case you do, references and similar initiatives are just a web search away.
UPDATE: I added to Hash'em all! a link to a campaign called "Ugly on IE, not for my fault". If you wish to do the same, don't hesitate to add the same image (or a better one) linking to this post.

Hash'em all: free online text and file hashing

I couldn't find an online hashing service that allowed hashing text strings and files with several different algorithms (MD5, SHA1, SHA512, RIPEMD, Whirlpool, HAVAL, etc.); so I made one. And after hours and hours of creative effort, here's the name I came up with: Hash'em all!.

The layout is really simple, but (big surprise!) there's a rendering problem with Internet Explorer (any version). The central DIVs are sized correctly in Firefox, Opera, Safari, Konqueror, Camino and every other standards-compliant web browser; unfortunately, IE is not on this list (even though MS is part of the W3C consortium... bah!).

I will *never* redesign the site to work around IE bugs with a CSS selector and a separate stylesheet: I have no intention of doing the work twice because of Microsoft's whims.
Under IE, Hash'em all! is still usable, but ugly. If you want a nice interface, please use a W3C-compliant browser. And if you'd like to know what the problem is: it seems there's no way in IE to have a centered, auto-adapting DIV, because IE totally ignores the display: table; CSS property.

Features in brief:

  • It's free
  • It's fast (less than 0.1 second for most queries)
  • There's no limit on the number of queries
  • It can hash files up to 10 MB
  • You can choose up to 35 different digests
  • Allows empty strings
  • It's cute (but that's subjective...)
Yes, there are a few Google ads. But it's free (as in free beer, for now) ;]
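
If you want to double-check the site's output from your shell, the coreutils hashing tools cover a few of the algorithms (a quick sketch; printf avoids the trailing-newline trap discussed in the "Online vs. offline hashes" post above):

for h in md5sum sha1sum sha256sum sha512sum; do
    printf '%s' "string" | $h
done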

Workaround: dpkg goes Segmentation Fault

So, your apt-get [dist-]upgrade stops when a post/pre-installation script dies with a segmentation fault. The strange thing is that the same script, extracted from your favourite .deb package, works when launched standalone. It's probably a debconf or dpkg bug but, frankly, I didn't even google to check whether it's a known one: I hit this problem quite often on my Debian testing (especially while upgrading libc6, tzdata, console-common), so I chose to spend my time looking for a workaround. And I finally found a very simple one.

This is, more or less, the error I get (translated from my Italian-localized system):

(Reading database ... 184524 files and directories currently installed.)
Preparing to replace libc6 2.7-3 (using libc6 2.7-5) ...
Unpacking replacement libc6 ...
Setting up libc6 (2.7-5) ...
dpkg: error processing libc6 (--install):
 subprocess post-installation script killed by signal (Segmentation fault)
Errors were encountered while processing:
 libc6
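
By the way, if you want to verify the "works standalone" part yourself, the maintainer scripts dpkg runs live under /var/lib/dpkg/info/, so you can inspect and launch them by hand (libc6 is just the package from my example):

# Look at the script dpkg is choking on
less /var/lib/dpkg/info/libc6.postinst

# Run it standalone, with the argument dpkg itself would pass
sh /var/lib/dpkg/info/libc6.postinst configure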

In a nutshell, my solution is:
  • Unpack the .deb package causing the problem and grab the meta-files
  • Clear the pre/post-installation script (depending on which one causes the segmentation fault)
  • Pack a new .deb package with these fake scripts and install it
  • Manually run the scripts you replaced (do this before the installation if it was a preinst or a pre/post-rm)
  • Done: you can go on with your dist-upgrade
That's easy to do within a bash shell if you know the right commands. And they are:

# Make a copy of the crashing package
cp /var/cache/apt/archives/crashing_pkg_x.y.z.deb ./crashing_package.deb

# Extract the package data
dpkg-deb -x crashing_package.deb ./temp_dir

# Extract the package meta-files
dpkg-deb -e crashing_package.deb ./temp_dir/DEBIAN

# Save the original script, then replace it with a fake one
# (the fake needs a shebang and exec permission to run)
cp ./temp_dir/DEBIAN/postinst ./postinst.orig
echo -e '#!/bin/sh\nexit 0' > ./temp_dir/DEBIAN/postinst
chmod 755 ./temp_dir/DEBIAN/postinst

# Pack a new modified package
dpkg-deb -b ./temp_dir/ mod_package.deb

# Install the hacked package (it should not segfault now)
dpkg -i mod_package.deb

# Manually run the original postinst script ("configure" is the argument dpkg itself would pass)
sh ./postinst.orig configure

# Clean up
rm -rf ./temp_dir
rm crashing_package.deb mod_package.deb postinst.orig
Dirty and functional, as usual. But pay attention when running the scripts manually: they may crash, refuse to go on, or contain debconf commands that bash can't execute; so my advice is: take a look inside and try to do by hand, step by step, what the script was supposed to do (uninstalling packages, stopping processes, restarting daemons, upgrading configurations, and so on).

I guess you're experienced enough to change them to suit your needs (and, above all, I'm in a hurry now!). So, good luck and may the GNU-force be with you =)