At Taodyne, we mostly use Apple Pages to create our documents. For large documents, I’d like to be able to create numbered chapters, something like “Chapter 1”, “Chapter 2”, and so on. Apple Pages does not seem to have that feature. Rather than get used to that limitation, let’s fix it.
Apple Pages can read numbered chapters from Word
One thing I observed is that when you open a Microsoft Word document that contains numbered chapters, Apple Pages preserves that formatting. In other words, even if the user interface does not let you edit numbered lists with text in them, the rendering engine knows how to render them, and regular editing within Pages correctly renumbers these chapters.
To verify that my recollection of this capability of Pages was correct, I first created a document in Microsoft Word that looks like this:
Section 1 – Hello
Chapter 1 – This is a chapter
I. This is a numbered section
1. This is a numbered sub-section
It doesn’t just “look like” this. The Section and Chapter text were edited in the Numbering section of Microsoft Word, so this is auto-numbering.
Then I saved this document to disk, and imported it into Pages. And indeed, when I edit it in Pages, numbering works just like in Microsoft Word.
The Pages XML format
Let’s look inside the document to see what’s there. A quick tour through the command line shows that Apple Pages documents are really zipped collections of files, including XML files representing the document itself:
% unzip Hello.pages
Archive:  Hello.pages
 extracting: thumbs/PageCapThumbV2-1.tiff
 extracting: QuickLook/Thumbnail.jpg
 extracting: QuickLook/Preview.pdf
 extracting: buildVersionHistory.plist
  inflating: index.xml
The most interesting of these documents is the index.xml file. It contains the actual description of the document in XML format. And if I look inside, I see something interesting:
<sf:list-label-typeinfo sf:type="text"><sf:text-label sf:type="decimal" sf:format="Section %L -" sf:first="1"/>
So the sf:format attribute accepts a fairly general format string, with %L marking where the number should go.
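As a rough illustration of what the renderer presumably does with such a format string, here is a minimal sketch, assuming %L is simply replaced by the item’s decimal number. The render_label helper is hypothetical, not part of Pages:

```python
def render_label(fmt: str, number: int) -> str:
    """Expand a Pages-style label format string.

    %L marks the spot where the item number goes;
    everything else is literal text.
    """
    return fmt.replace("%L", str(number))

print(render_label("Section %L -", 1))  # Section 1 -
print(render_label("Chapter %L -", 2))  # Chapter 2 -
```

In the real format, sf:type can also select roman numerals or letters instead of decimal numbers; this sketch only covers the decimal case shown above.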
The solution for adding chapter numbers
So the solution for adding chapter numbers is simple:
- You will need Microsoft Word once, to create a document with the kind of chapter numbering you need. You may use multiple levels of numbering (e.g. chapter, section, etc.).
- Import this document into Pages. This will give you a new list style.
- When you want to number chapters, select the given list style.
- To edit the formatting of the numbering text, select the whole line, change colors or fonts, and then, in the list style, select “Redefine style for selection”. In other words, the list style defines the font and color of the numbering independently of the paragraph style, and can do so for multiple levels.
Now, you have proper chapter numbering in Apple Pages.
The whole essay is a bit long, but definitely worth reading. It goes through the history of the OLPC project (including its roots in early experiments), through musings about the best choice of operating system, to suggestions on how to move the project forward successfully, after what appears to have been a severe crisis.
No matter what, Krstić is right that the whole experience will not have been in vain. But it’s too bad that a project like this can die for purely political reasons. On one hand, OLPC could not have seen the light of day without the effective support of someone like Nicholas Negroponte. On the other hand, if we are to trust Krstić’s earlier essay, Things to remember when reading news about OLPC, he’s now almost a liability to the project:
To those on the outside and looking in: remember that, though he takes the liberty of speaking in its name, Nicholas is not OLPC. OLPC is Walter Bender, Scott Ananian, Chris Ball, Mitch Bradley, Mark Foster, Marco Pesenti Gritti, Mary Lou Jepsen, Andres Salomon, Richard Smith, Michael Stone, Tomeu Vizoso, John Watlington, Dan Williams, Dave Woodhouse, and the community, and the rest of the people who worked days, nights, and weekends without end, fighting like warrior poets to make this project work. Nicholas wasn’t the one who built the hardware, or wrote the software, or deployed the machines. Nicholas talks, but these people’s work walks.
Makes you wonder who really “invented” the OLPC… Two earlier posts may be relevant to this topic:
A friend pointed me to Microsoft Surface. Apart from the annoying fact that the site does not work with Apple’s Safari (though it does with Firefox), I find it pretty interesting. Surface is, in short, a new breed of user interface using the same kind of multitouch screen you find on the Apple iPhone. (Update: It turns out this is not a touchscreen).
This is not the first time I have actually seen something like that: HP also had something similar in the works. There is even a blog dedicated to these topics. But Microsoft Surface is interesting for two reasons. The first is that it’s the first time I see something that might actually be a real user interface, and not a mock-up or an impressive hardware test. The second is… secret.
Anyway, if we one day get to the point where we have multitouch capabilities in foldable screens…
Thought recognition is coming. New articles on this topic pop up regularly. But what would a thought-driven user interface look like?
The XL programming language began with questions like this. At the time, I was thinking more of speech recognition. I was trying to figure out how object-oriented programming (which I had just discovered back then) could be used to program a speech-centric user interface. It turns out that it’s probably quite difficult.
The reason is relatively easy to explain. In a graphical user interface (GUI), you have a finite (and relatively small) number of objects on screen. You pick one object, for example a menu, then another, and so on. One of the key design features of the GUI is that it should be non-modal, i.e. at any given point in time, you should be able to pick this or that menu freely. This is very different from old text-based programs, where you would typically switch, for instance, between text editing mode, text formatting mode, page layout mode, printer selection mode, and so on. This basic tenet was a mindset revolution for programmers in the early days of GUIs. The original Macintosh Human Interface Guidelines insist on that point as early as page 12. Today, it’s much harder to find web pages explaining that fundamental aspect, because programmers only know about modeless programming.
But a speech-based user interface, on the contrary, is extremely modal: everything depends on what was said before. Consider the word “it” in “Find the Smith file and print it”. I will often use the more general term vocabulary-based user interface (VUI) for any kind of user interface where you “talk” to a machine. With a voice mail system, for example, the vocabulary can be digits you type on a keypad, like 1221 to get voice mail. The problem is that the vocabulary for speech can be thousands of words, so at any given point in time, you have thousands of possible modes.
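To make the modality concrete, here is a toy sketch of a vocabulary-based interface where the meaning of a command depends on what was said before. The VocabularyUI class and its two-word grammar are entirely hypothetical, just enough to show how “it” only makes sense in context:

```python
class VocabularyUI:
    """Toy vocabulary-based interface: the interpretation of a
    command depends on context carried over from earlier commands."""

    def __init__(self):
        self.last_object = None  # context set by a previous "find"

    def handle(self, command: str) -> str:
        words = command.lower().rstrip(".").split()
        if words[0] == "find":
            # "find X" names an object and makes it the current context
            self.last_object = " ".join(words[1:])
            return f"found {self.last_object}"
        if words[0] == "print":
            # "print it" is modal: it refers to whatever was found before
            target = self.last_object if words[1] == "it" else " ".join(words[1:])
            if target is None:
                return "print what?"  # "it" is meaningless without context
            return f"printing {target}"
        return "unknown command"

ui = VocabularyUI()
ui.handle("Find the Smith file")
print(ui.handle("print it"))  # printing the smith file
```

The same utterance, “print it”, succeeds or fails depending on the history of the dialogue, which is exactly the modality a GUI tries to avoid.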