Thursday, June 14, 2012

Hardware's hard facts

We just bought some toys to do a Tamias test deployment. We started small with just 15 nodes, each made of :
  • A barebone chassis that has a motherboard, CPU, chipset and loads of integrated stuff we won't use : the Shuttle XS36V
  • A 1TB 2.5" hard disk from Samsung
  • A 2GB RAM stick from Elixir (unrelated to the card game, for you French readers)
Now, the law of large numbers predicts that for millions of hardware pieces, a few of them will be faulty. I was really interested to see how lucky we would be with the Tamias hardware, and here is the result for the case of not so large numbers :

 Part              Qty bought   Qty faulty   Remark
 Shuttle XS36V     15           2            Won't boot from USB key, but installs from USB CD
 Samsung 1TB HDD   15           0            So far so good
 Elixir 2GB RAM    15           2            1 won't P.O.S.T., 1 fails during Debian install
So that's it: a 13% failure rate for the RAM. I was expecting one failure and hoping for zero... The motherboards could arguably be rated at 0% failure because the two odd ones actually work, we just had to boot them from a USB CD. I should check whether their BIOS release is the same as on the other 13 units, but since we got them to work, we decided not to RMA them. The memory was RMA'd and replaced swiftly; Memtest86 (using the Debian memtest86+ package) passed, so we expect no problems there either...

Wednesday, May 2, 2012

FTDI USB driver woes on Debian LiveCD

Just a quick note that might be useful to some of you Xbee users. I wrote this little piece of software to visualize and plot Xbee accelerometer data a while back.

Now, my brother tried to use it, and since he is not into Linux at the moment, I prepared a LiveCD for him using the Debian Live project. The reason I did that is that the python program needs wxPython, which is not included in the default Debian LiveCD.

The problem is that the kernel currently used (at the time of build: started Mon, 05 Mar 2012 20:21:01 -0600 and ended Mon, 05 Mar 2012 20:52:50 -0600) includes a buggy version of the FTDI USB serial driver. Details about the bug can be found here :
http://comments.gmane.org/gmane.linux.usb.general/44150

The change needed in any python software (including my own xviz) to work around this bug, so that you don't need to replace the kernel or rebuild the LiveCD, is to look for a line like this:
 self._port = serial.Serial(self._comport, 9600)
and add an rtscts= parameter like this:
 self._port = serial.Serial(self._comport, 9600, rtscts=1)

This tells the driver to open the serial port using RTS/CTS flow control, which in practice disables it, because the buggy driver inverts both states, duh.
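
If you want to check the workaround outside of xviz, here is a minimal standalone sketch; the device path /dev/ttyUSB0 and the 9600 baud rate are assumptions, so adjust them to your setup:

 # Minimal standalone check of the workaround (not part of xviz).
 # /dev/ttyUSB0 and 9600 baud are assumptions, adjust them to your setup.
 import serial

 port = serial.Serial("/dev/ttyUSB0", 9600, rtscts=1)  # rtscts=1 works around the buggy driver
 try:
     print(port.readline())  # should print one line coming from the Xbee
 finally:
     port.close()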

Tuesday, April 17, 2012

From information overflow, to a spotless mind

From Charles-Philippe Larivière...
It seems that I have been suffering from an acute case of writer's block for the first few months of 2012.

And what better way to start the year than with a non-technical, vaguely metaphysical post. It is fun to see how I got to posting about this. While I write these first few lines, I still haven't chosen a title. As B. Pascal put it a long time ago, the last thing I'll do is figure out what should go first.

So, a not-so-recent piece of news suggested that the last DRAM chip maker in Japan, Elpida, would go bankrupt. And then it did. You had never heard of it before, had you ? Well, that's how I started to think again about memory.

As self-centered, egotistical human beings, we basically build our memories from our own experience. Unfortunately, some things go well beyond our understanding and require taking a few thousand steps back to behold the big picture. That's where History comes into play. You could see it as a purveyor of fabricated memories, or as a conservation mechanism for the collective memory of mankind. Notice how History is both important and out of control at the same time. This is the setting that gave so much strength to the book 1984, in my own self-centered opinion.

The folk memory of fishermen in the Fukushima prefecture says that when a quake hits, a tsunami will follow. It also says that you had better go out to sea before the wave becomes too steep to climb. This is how a few boats were saved.

Our History is full of important lessons that should be remembered, and yet, we are now bathing in useless information, both figuratively and literally speaking. From the electromagnetic waves of wifi, cellphones, satellites, land TV and radio, to the clusterfuck of social media, RSS aggregation and whatever will come next - we are even drowning in information. And although the face value of it depends on how much each piece of information relates to you, chances are that not a billionth of it is worth your attention.

This is like dilution and filtering. The important information is diluted, your interests might not match those of dedicated mainstream portals, and obtaining what you want is frustrating. Meanwhile, the human brain receives billions of sensory reports from all over the body, and yet you can read these words without having to deal with them. Only the valuable information makes it to the conscious level. Why can't our computers do this work of filtering for us, skipping over all that is not useful, skimming the things that we must read ?

Granted, this is hard from a computer software point of view. This is why regular users don't have it. Big companies do, however. That's how your webmail is free of monetary cost. Instead, computer programs are laughing at your information and selling select pieces to each other.

...to Sandow Birk
But eventually, you would expect the people governing us to employ such filters, be they humans or computers, to isolate what is important, focus on it, and rule accordingly. So, how come a megaquake from more than 1000 years ago was forgotten ? How come a ridiculously low $5 million cut can be decided on a tsunami advisory program ? When will people stop doing exactly the same wrong thing every once in a long while, that is, long enough to not remember a bit...

NOTE: the two paintings came to my attention through the "google filter" and this page.

Tuesday, December 27, 2011

Looking back on 2011

Dear readers,

So the year 2011 is coming to an end and I have ultimately failed my secret goal of posting more articles than last year, the main reason being that I did not post much in the last three months. The explanation is just a single word : Tamias. Tamias is a pet name, a kind of chipmunk (although some folks complained that our logo looks like a mutated rat or something, hi Ed!). It could have been the name of a pet project, but it really is the pet name of a project we have here, at the office.

I can't really explain it all in one post, so let's say a few very basic things to keep it short and interesting.


  1. Tamias is based on Tahoe-LAFS, which makes it awesome from the start
  2. Tamias allows users sharing a mutual interest to gather together and create their own cloudish storage system by providing storage space to the distributed storage "cloud". Sorry for the buzzword, you can really replace cloud with network, p2p network, overlay network, whatever.
  3. Users on the same network exchange identity information with each other (essentially public keys) and can share files privately and securely. Privately means that someone learning the access information (what is called a capability here, think of it as a self-certifying URL; see the small sketch below) from someone else cannot use it. Securely means that file blocks are stored encrypted and thus not readable by the storage node (this is guaranteed by Tahoe-LAFS).
  4. The first beta version of Tamias was released on December 25th here.
This version works, but it is not very user-friendly, so we only expect the most computer-savvy users to try it out and report errors, problems, bugs and so on. In the future, we will of course provide a (much much much) better UI, and new features like user self-introduction, user search, sharing delegation, network federation, etc...
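
To make the capability idea a bit more concrete, here is a minimal sketch using the plain Tahoe-LAFS web API that Tamias builds upon. The Tamias privacy extensions are not shown, and the local gateway on the default web port 3456 is an assumption about your setup:

 # Store a file through a local Tahoe-LAFS gateway and get a capability back.
 # Assumes a node running with the default web port 3456.
 import requests

 gateway = "http://127.0.0.1:3456"

 # Uploading a file returns a capability string (something like "URI:CHK:...").
 # Whoever holds that string can fetch the file, nobody else can.
 cap = requests.put(gateway + "/uri", data=b"hello Tamias").text.strip()
 print("capability:", cap)

 # Download it again using nothing but the capability.
 print(requests.get(gateway + "/uri/" + cap).content)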

The Tamias website is at https://tamias.iijlab.net and has a few tutorials about compiling the software or setting up a node. As it is based on Tahoe-LAFS, it includes Tahoe-LAFS and builds upon it. However, you cannot use your existing Tahoe-LAFS grid without modifications, because we added a couple of features on the storage server side that were needed for the privacy extensions.

Short post for a long story, but here's a scoop for you all : the next release will be much better and is planned for next spring...

Happy year-end holidays to you all !
Jean

Wednesday, September 21, 2011

Realtime visualization for Xbee+Accelerometer data

Here is the long-awaited follow-up to the Xbee experiments that my little brother outsourced to me.

Before I get to the description of the software itself, if you want to try this out, be sure to follow the guidelines in the previous post on the same topic. You are then ready to proceed.

And here is what you get with this software :
  • Realtime visualization of all three axes
  • Can be adapted to many different accelerometers by tuning some global variables (see the sketch after this list)
  • Recording of samples to file, with timestamps and X, Y and Z values
  • Automatically generated plots for each axis and a combined 3-axis plot
  • Exported files named according to experiment date and time (for easier sorting)
  • All the above, in pure opensource python goodness
  • English and French localization
  • Theoretically works from within a LiveCD
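
To give you an idea of the per-accelerometer tuning mentioned in the list above, here is the kind of global variables to expect. The names, values and conversion formula below are hypothetical, so check xviz.py for the real ones:

 # Hypothetical tuning globals, the actual names in xviz may differ.
 ADC_MAX = 1023        # 10-bit ADC full-scale reading coming from the Xbee
 G_RANGE = 3.0         # accelerometer full-scale range, in g
 ZERO_G_OFFSET = 512   # raw reading that corresponds to 0 g

 def raw_to_g(raw):
     # Convert a raw ADC sample to an acceleration in g.
     return (raw - ZERO_G_OFFSET) * (2.0 * G_RANGE / ADC_MAX)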

Now let me briefly describe what the software does. As you can see in the screenshot on the side, the user interface is quite easy to understand. The three rectangle-shaped black boxes show whatever comes from the Xbee chip hooked up to the USB port, in realtime.

There are also three clickable buttons. The leftmost button, labelled 'start', triggers the beginning of the recording. You will usually push this just before your experiment. The middle button, named 'stop', obviously halts the recording. It also takes care of dumping all the samples into a CSV file and generating the four plots : a combined plot with all 3 axes and three separate plots, one per axis. Of course, the last button closes the USB port and quits the program. Beware that it does not check whether an experiment is still running, so you will lose data unless you push the stop button first.
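
For the curious, here is a minimal sketch of the record-and-plot idea. This is not the actual xviz code: it assumes the Xbee sends one "x,y,z" text line per sample on /dev/ttyUSB0 at 9600 baud, records for a fixed 10 seconds instead of using buttons, and only draws the combined plot:

 # Minimal record-and-plot sketch (not the actual xviz code).
 # Assumes one "x,y,z" text line per sample on /dev/ttyUSB0 at 9600 baud.
 import csv
 import time
 import serial
 import matplotlib.pyplot as plt

 port = serial.Serial("/dev/ttyUSB0", 9600, rtscts=1)
 samples = []                                   # (timestamp, x, y, z)

 start = time.time()
 while time.time() - start < 10:                # record for 10 seconds
     x, y, z = (float(v) for v in port.readline().decode().strip().split(","))
     samples.append((time.time(), x, y, z))
 port.close()

 # Dump the samples to a CSV file named after the experiment date and time.
 name = time.strftime("experiment-%Y%m%d-%H%M%S")
 with open(name + ".csv", "w", newline="") as f:
     csv.writer(f).writerows([("timestamp", "x", "y", "z")] + samples)

 # One combined plot with all 3 axes.
 t = [s[0] - samples[0][0] for s in samples]
 for i, label in enumerate("xyz", start=1):
     plt.plot(t, [s[i] for s in samples], label=label)
 plt.legend()
 plt.savefig(name + "-xyz.png")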

Enough talking now: here is the python archive. All you have to do is extract the contents somewhere and run the software by launching the xviz.py script file.

I also uploaded this to the GitHub social coding site, feel free to fork it, submit patches or whatever !

https://github.com/jean-/xviz