Experimenting with filesystems

Since macOS Sierra introduced the new Apple filesystem (APFS), designed by none other than Dominic Giampaolo (of BeOS fame), I thought I’d give it a try. So I created an APFS volume on an external 3G hard disk, copied a few pictures onto it, and started playing with it.

APFS runs FSCK every time???

It mostly worked, although it comes with dire warnings, and you have to pass an insane option to the diskutil commands to get rid of them:

ddd@Marypuce Pictures> diskutil apfs list
WARNING:  You are using a pre-release version of the Apple File System called
          APFS which is meant for evaluation and development purposes only.
          Files stored on APFS volumes may not be accessible in future releases
          of macOS.  You should back up all of your data before using APFS and
          regularly back up data while using APFS, including before upgrading
          to future releases of macOS.

But things quickly went south as soon as I disconnected and reconnected the disk. It did not mount instantly, because an fsck (file system check) process was running. Once it completed, I could see my disk, but it took minutes. So I tried ejecting the disk again. And sure enough, fsck was running again the next time I mounted the disk.

So I decided to try something else. I installed ZFS for OSX. I had only heard praise for ZFS, how great it was and this and that, so I thought it would be interesting.

ZFS can’t remount an external disk without some black magic?

Again, things went smoothly. Well, mostly. You have to activate some special option for the disk to “look like” HFS+ if you want Photos to be able to use it.
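
For what it’s worth, if I remember correctly this “look like HFS+” switch is a dataset property in ZFS for OSX, set with something along these lines (the property name is from memory and may well differ in your version):

# Assumption: the property that makes a ZFS dataset advertise itself as HFS+
sudo zfs set com.apple.mimic_hfs=on PhotosZFS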

But again, things went south as soon as I disconnected the disk from one machine to put it in another one. I did something terribly wrong, you see: I ejected the disk on one Mac, and attached it to another. And I got this helpful little error message:

sudo zfs mount PhotosZFS
cannot open 'PhotosZFS': pool I/O is currently suspended

It looks like this is a standard issue with ZFS. You have to do some magic to export or import your ZFS pools. That much I could understand. What I cannot understand is this response, from a guy with the nickname ilovezfs:

I see in IRC that the disk was actually disconnected and reconnected while the pool was imported. Given that this is a single partition pool, not a raidz or mirror vdev, there is no reason to expect the pool to continue to function after the device has been disconnected and reconnected without exporting it first. At that point, your only choice is to reboot.

So now you have this supposedly enterprise-grade, secure, checksummed, snapshotting, almost magical filesystem, but unmounting a disk and reconnecting it to another computer is so verboten that there is “no reason to expect the pool to continue to function”? Well, yes, there is: every other filesystem on earth gets that right. And suggesting that the fix is to reboot? Give me a break.
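
For the record, the blessed dance is apparently to export the pool on the first machine before unplugging the disk, then import it on the second one, something along these lines:

# On the first Mac, before unplugging the disk
sudo zpool export PhotosZFS

# On the second Mac, once the disk is plugged in
sudo zpool import PhotosZFS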

I’ll try ZFS again in 10 years, when it knows how to deal with external disks and does not lose 350GB of data on its first day of operation.


macOS Sierra Mail bug

This week, I started using macOS Sierra. Overall, like iOS 10, it’s another of these recent Apple releases where what you gain is not extraordinarily compelling, but where you discover, as you go, various things that you lost for no good reason.

Here is one I found today. Apparently, macOS Sierra cannot send mail with picture attachments. That seems like a pretty big one. (Update: it apparently depends on the machine; see the end of this post.)

If I send an e-mail with a picture attachment from a machine running OSX 10.11, here is what I see in my Inbox in macOS Sierra:

[Screenshot: Screen Shot 2016-10-06 at 12.38.09.png]

So far, so good. Notice that the mail was sent to an Exchange server.

But now, let’s send an e-mail with a picture, this time from macOS Sierra. Here is what it looks like in my Inbox:

[Screenshot: Screen Shot 2016-10-06 at 12.39.44.png]

Now, something is obviously missing. In Outlook, I see some weird message telling me that the attachment was removed:

[Screenshot: Screen Shot 2016-10-06 at 12.40.09.png]

What is really curious is that it seems to depend on the server being used, not on the client. If I send the same kind of e-mail to a Google Mail account, it looks like this, with a large empty box at the top and my picture attachment lost at the bottom (still not good, but at least the picture is not entirely lost):

[Screenshot: Screen Shot 2016-10-06 at 12.42.46.png]

I filed a bug report with Apple on this. It seems pretty major to me, and I really wonder how they could have missed it. Is there something special with my setup? Do you see the same thing?

Update: I tried sending an e-mail with attachment from another Mac also running macOS Sierra, and I have no problem at all… except that the aspect ratio of the picture is all wrong on Outlook. So the problem is not with every instance of macOS Sierra, which is good news for Apple and bad news for the Apple Mail developers (bugs that don’t always happen are harder to figure out).

Restarting a Blogmax “private” blog

Back when I was working for HP, I was using Blogmax to build a daily blog of my activities. That was quite useful as a self-reference, but also helped my team members follow what I was doing (I was the only one working from France, most of the team being in the US).

When I started Taodyne, I stopped doing that because a) we were all in the same room, b) we did not necessarily want to publicise everything we were doing, and c) I didn’t have the time. I now really regret it, as it would have been very interesting as a searchable archive.

So I’ve decided to restart a private blog. Not private in the sense that it’s hidden or that you can’t read it, but in the sense that it’s really notes for myself. If they happen to be useful for someone else, good. But be warned, it’s unlikely my private blog will be of any interest to you. I insert a reference here so that Google starts indexing it🙂

5 ways Tao3D is so much better than HTML5

It’s the ultimate showdown. On the left side, the current contender, the web. Thousands of developers building exciting new technologies using JavaScript, HTML5 and mashed potatoes. On the right side, a tiny French startup with Tao3D, a brand new way to describe documents. Guess who wins? (TL;DR: You know the web has to win. But why?)




Why are there new JavaScript frameworks every day?

At Google I/O 2015, Google announced Polymer 1.0, a “modern web API” according to the web site. To me, it looks a bit like AngularJS mixed with Bootstrap.js, except it’s entirely different. Google also recently bought Firebase which looks to me a bit like Ionic, except of course it’s entirely different. And just now, I discovered Famous, which seems a bit similar to Reveal.js along with Three.js, except of course it’s entirely different.

Don’t get me wrong, I’m all in favor of competition, and there is something unique about all these frameworks. But this proliferation also demonstrates that there’s something seriously wrong with the web today. And I’d like to explain what it is.


Reason #5: HTML5 is way too verbose

Consider the Hello Famous example on the Famous front page. It’s a whole 38 lines of JavaScript just to make a single picture spin. What the Hulk? In Tao3D, it only takes 4 lines. Four tiny miserable lines on one side, vs. 38 with today’s leading web technologies? We are not talking about a mere 2x or 3x factor, but practically 10x. And it’s not an exception either. On a slightly more complex clock animation, Tao3D is 33x shorter than JavaScript. Don’t you want to save your carpal tunnel from premature death?
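
To give an idea of the flavor, here is a minimal sketch of a spinning picture in Tao3D (not the actual demo from the Taodyne site, and the image file name is made up):

locally
    rotatez 20 * time                               // spin around the Z axis over time
    color "white"
    texture "picture.png"                           // any image file next to the document
    rectangle 0, 0, texture_width, texture_height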

Due to limitations with WordPress and code colorization, I have to ask you to read the rest on the Taodyne web site.

Shader-based text animations

With shaders, it is possible to create interesting text animations.

The code

The following code lets you create a dancing script by using a vertex shader to move the y coordinate up and down over time, and a fragment shader to create colorization effects:

import BlackAndWhiteThemes

theme "WhiteOnBlack"
base_slide "Dancing script",

    contents 0,
        // Create the shaders animating and colorizing the text
            vertex_shader <>
            fragment_shader <>
        shader_set time := page_time mod 60

        text_box 0, 0, 0.6*slide_width, slide_height,
            align 0.5
            vertical_align 0.5
            color "#F4B63C"
            font "Tangerine", 120, bold
            shader_set parms := 0.8, 0.03
            paragraph "Fun text effects, 50 lines of code"
            color "white"
            font "Arial", 60
            shader_set parms := 0.3, 0.01
            paragraph "Animate text easily with Tao3D!!!"

            color "lightblue"
            font "Courier", 20
            align 1.0
            shader_set parms := 0.2, 0.7
            paragraph "http://bit.ly/1HWCGvd"

Going further

This technique is extremely powerful. By adjusting the shader, you can easily get effects such as fuzzy text, text with punched holes inside, flaming text, glowing text, and so on.
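
As one example among these (a minimal sketch of my own, not one of the shipped Tao3D shaders), punching holes through whatever is being drawn is just a matter of a fragment shader that discards fragments on a regular grid:

// Sketch: discard fragments inside a regular grid of circles in screen space
void main()
{
    vec2 cell = fract(gl_FragCoord.xy / 20.0) - 0.5;   // 20-pixel cells
    if (length(cell) < 0.3)                            // inside the hole radius?
        discard;                                       // punch the hole
    gl_FragColor = gl_Color;                           // otherwise keep the incoming color
}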

Christophe de Dinechin

Detecting video changes

Tao3D can capture live video, and makes it quite easy to perform image analysis using shaders. Here is a simple example that shows how to highlight moving parts in a video by colorizing them in red and blue.


Capturing live video

Tao3D can capture live video into a texture by using the VLCAudioVideo module and the qtcapture:// URL on MacOSX or dshow:// on Windows (I am not sure what the right URL would be on Linux).

So below is a simple program that would capture a live video stream and display it at the center of the screen:

import VLCAudioVideo

// Select the capture URL depending on the platform
WEBCAM -> if is_available mac then "qtcapture://" else "dshow://"

// Bind the live video stream to the current texture and draw it full size
movie_texture WEBCAM
rectangle texture_width, texture_height

Saving a texture

If we want to detect movement, we need to save a reference frame to compare to. This can be done by using a dynamic texture in which we play the movie, but that we only update at specific times. Here is what it looks like to capture a snapshot every second:

// Texture ID of the last snapshot we took
lastTex -> 0

// Refresh the snapshot during the first tenth of every second
if time mod 1 < 0.1 then
    frame_texture 640, 480,
        color "white"
        movie_texture WEBCAM
        rectangle texture_width, texture_height
    // Remember the texture that frame_texture just generated
    lastTex := texture

The solution above is a bit crude, since it hard-codes the frame texture size. Normally, we should capture the actual frame size from the input video:

lastTex -> 0
frameWidth  -> 640
frameHeight -> 480
if time mod 1 < 0.1 then
    frame_texture frameWidth, frameHeight,
        color "white"
        movie_texture WEBCAM
        rectangle texture_width, texture_height
        // Record the actual size of the video frame for the next snapshot
        frameWidth  := texture_width
        frameHeight := texture_height
    lastTex := texture

Comparing textures

A simple shader program that compares two textures and highlights the difference in red/blue will look like this:

uniform sampler2D last;     // reference snapshot (texture unit 0)
uniform sampler2D cur;      // current video frame (texture unit 1)

void main()
{
    // Sample the reference snapshot and the current frame
    vec4 old = texture2D(last, gl_TexCoord[0].st);
    vec4 new = texture2D(cur, gl_TexCoord[1].st);

    // Grey base image, plus the red-channel difference between new and old
    gl_FragColor = vec4(0.3 * vec3(old.r + old.g + old.b), 1.0)
                 + vec4(1.0, 0.0, 0.0, 0.0) * (new - old);
}

The last line adds a grey base image and a red channel that contains the difference between the new and old picture. Where the red component of the current frame exceeds that of the snapshot, the result rises above the grey base and the area shows up red; where it is lower, it drops below the grey base and the area looks blue-green by contrast. That is where the red and blue highlights on moving parts come from.

I will save this program in a file called diff.fs for easier editing. Remember that Tao3D will reload the shader code when you change it.

Comparing the snapshot and the current image

We can now run the shader program to compare the snapshot and the current image from the camera:

import Slides
import VLCAudioVideo

WEBCAM -> if is_available mac then "qtcapture://" else "dshow://"

base_slide "Test",
    lastTex -> 0

    color "white"
    contents 0,
        if time mod 1 < 0.1 then
            frameWidth  -> 640
            frameHeight -> 480
            frame_texture frameWidth, frameHeight,
                color "white"
                movie_texture WEBCAM
                rectangle texture_width, texture_height
                frameWidth := texture_width
                frameHeight := texture_height
            lastTex := texture
        // Display the last snapshot on the left
        texture lastTex
        rectangle -400, 0, 640, 480

    contents 0,
        texture_unit 0
        texture lastTex
        texture_unit 1
        movie_texture WEBCAM
            fragment_shader_file "diff.fs"
        shader_set last := 0
        shader_set cur  := 1
        rectangle 400, 0, 640, 480

The result will look something like this:

It’s of course more interesting when it helps highlight small movements:

Going further

This is only a starting point. You can explore ideas such as:

  • Using other video sources than the webcam, for example a movie file.
  • Exploring the capabilities of shaders. The Filters module shows a few common algorithms in image processing that can serve as a starting point.
  • Storing more than one texture for finer analysis.

That was easy!

Christophe de Dinechin