I posted on Twitter about how much I like Muse for iPad. People were curious about what was so special about it, and I thought I could sum it up in a tweet or two. But by the third reply in the thread I realized "I should have put this on the blog." So here it is…
Selections in Muse
Muse does a great many things well, and I could probably do a whole series on little things worth emulating in our apps. But one particularly representative example stands out to me: Selections.
Anyone who’s used a graphics program — your Photoshops, your Pixelmators, your MacPaints — knows there are two ways to make a selection:
- Drag a box.
- Lasso an arbitrary shape.
Dragging a box is simple and fast: one click, one drag — and the drag can be adjusted in real time so only the click needs to be targeted. Fast. Easy. Minimal twitch factor. But not very precise.
For precision (selecting around a non-box thing without cutting into background) there’s the lasso. It lets you carefully trace the path of the selection you want. Carefully being the operative word. You need to target each and every point along the path as you’re selecting it. Using the lasso is sweaty.
Muse lets you use either a box or a lasso to make selections. No big innovation there. You make lasso selections with the Pencil. It’s very nice to be able to use the iPad’s most precise instrument for Muse’s most precise operation, but that’s not breaking new ground either.
Modality’s All in the Mind
What’s amazing is how Muse presents modality.
See, in Photoshop and its kin, tools are modal. Everything behaves differently depending on what tool is currently selected (what mode the editor is in) and — importantly — this means we, as users, need to manage the state of the tools to get the results we want.
If we want to make a box selection, for example, we have to be in box-selection-mode. If we’re accidentally in lasso-selection-mode instead, we end up drawing a diagonal-squiggle-selection, not the box we had originally intended.
Being developers, we know there's a variable called currentTool or something buried in Photoshop that keeps track of this mode so that selection operations can refer to it later to know whether to draw a box or a lasso or what have you.
What may not be as clear is that, for the user, this is also state they have to hold in their mental model of the app. They have to consult the "am I in box mode?" flag in their brain before making a selection, because different gestures (drag-a-box vs. trace-a-shape) and different levels of precision (breezy vs. sweaty) will be required depending on the answer.
This means before a user can let muscle memory take over and perform the gesture they know they need to perform, they literally need to context switch. Halt everything, consult state, potentially switch mode, and then attempt to resume the half-subconscious gesture they were trying to make a moment ago.
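To make the point concrete, here's a toy sketch of what modal tool state looks like in code. The names (a currentTool variable, a handleDrag function) are hypothetical stand-ins, not Photoshop's actual internals — the point is simply that one gesture means two different things depending on a piece of hidden state:

```typescript
// Hypothetical sketch of modal tool state — not any real app's internals.
type Tool = "boxSelect" | "lassoSelect";

let currentTool: Tool = "boxSelect";

// The same drag gesture is interpreted differently depending on the mode.
function handleDrag(points: Array<[number, number]>): string {
  if (currentTool === "boxSelect") {
    // Only the first and last points matter: they define the rectangle,
    // so the user can be imprecise everywhere in between.
    const [x0, y0] = points[0];
    const [x1, y1] = points[points.length - 1];
    return `box from (${x0},${y0}) to (${x1},${y1})`;
  }
  // In lasso mode every point matters: the traced path *is* the
  // selection boundary, so the user must be precise throughout.
  return `lasso through ${points.length} points`;
}
```

Identical input, different result — which is exactly why the user has to carry a copy of currentTool around in their head.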
Let’s Get Physical
So how does Muse deal with this? You make a lasso selection with the Pencil as normal. To make a box selection instead, just hold the Pencil at a low angle, as if you were shading with it:
Let’s look at that video again. That little flip of the Pencil in the hand? That’s the context switch. Muse takes it completely out of mental state and turns it into a physical thing, amenable to muscle memory.
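Mechanically, this kind of posture-based mode switch is simple to express. iPadOS reports Apple Pencil tilt as an altitude angle (0 when the pencil lies flat against the screen, π/2 when it's perpendicular), so the mode can be derived from the hardware rather than stored as app state. A minimal sketch — the threshold and function name here are my own guesses, not Muse's actual implementation:

```typescript
// Mode derived from pencil posture, not from a stored tool setting.
// The threshold is a hypothetical value, not Muse's real one.
type SelectionMode = "lasso" | "box";

// Below ~30° of altitude, treat the pencil as "shading" the screen.
const SHADING_THRESHOLD = Math.PI / 6;

function selectionMode(altitudeAngle: number): SelectionMode {
  return altitudeAngle < SHADING_THRESHOLD ? "box" : "lasso";
}
```

Note there's no variable to manage: the "mode" exists only for the duration of the gesture, read fresh from the angle of the user's hand.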
This is a level up, but also not unique. Anyone who uses Photoshop does so with one hand on the keyboard so as to hold down one or more modifier keys while working. ⌥⇧⌘-drag for special behavior. That sort of thing.
What impresses me is the careful thought around the interaction. The Muse developers realized they needed precise (lasso) and imprecise (box) selection UI in their app. And they realized a (small p) pencil has precise (line) and imprecise (shade) modes physically built into it.
Connecting those dots isn’t hard, but thinking about a single interaction in our apps carefully enough to see the dots is. It goes to show we shouldn’t take any interaction for granted. Muse certainly doesn’t, and the careful design that results makes it an absolute joy to use.