A Spectre Retrospective – Part 2: Design

Sebastiaan de With
Apr 11, 2019
9 min read

One month ago we launched Spectre, our new long-exposure photography app. It was very well-received — I was so grateful to see not just enthusiastic responses about the capabilities of the app and the stunning shots people have taken with it, but also many delighted comments at the work we put into making its design special.

Spectre — if you’re not familiar — is an app that lets you take long exposures: photos that previously required a bulky setup of a big camera, a tripod, filters, and more.

Image courtesy of SIGMA.

Spectre gets around all this with computational photography. It takes hundreds of photos and combines them in real time. It uses computer vision to correct for handheld shots, so there’s usually no need for a tripod.

Coming from traditional long exposure photography, the Spectre experience is game-changing. Just hold the phone steady and tap:

A simple and kind of magical experience: hand-held long exposures.

This isn’t the first camera app I’ve designed: I previously wrote about the design of Halide, our first app, which gives people more control over their iPhone camera. But Spectre is an entirely different beast, with its own serious design challenge: hiding the complexity behind all that technology.

We really wanted to enable users to just take photos without fiddling with settings. We envisioned a tool usable by the average iPhone user, not just photography buffs, which meant most users would be long exposure novices. It also meant showing people when this was the right tool for the job, and inspiring their creativity.

Making It Indistinguishable From Magic

We wanted Spectre to follow Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.”

But magic is confusing — terrifying, even — if you aren’t the magician. That meant pairing our (rather advanced) technology with a simple and usable interface.

Our first Spectre prototype did several really cool things:

  • Took many photos over a variable amount of time and combined them into a single shot
  • Used AI to stabilize the shot to remove typical blurriness induced by the user’s hand movement (on most phones)
  • Saved the shot as a Live Photo

You could see these as a set of ingredients. The app is the final dish. We had to ‘just’ figure out the steps in between…


The most important thing you can do as a designer to make great new technology usable is to simplify. A time machine can look like this:

But we imagine a time machine more like this:

An ideal design would be as simple as putting in a year and pressing a button.

If we added buttons and switches for everything, we’d end up with an incredibly complicated interface!

Sometimes Fewer Features Are Better

At times, it seems easy or even necessary to add buttons and switches, though! For instance, long exposures require you to set an exposure time.

Image via Digital Trends.

I love this aspect of long exposures on cameras, so I really wanted to maintain the delight of selecting an exposure time and the corresponding satisfaction of tactile camera controls. An unlimited or huge number of exposure times would add incredible complexity to the app, though.

In early tests, we simply limited it to three options. We even named them Short, Medium, and Long (S, M, L) — but eventually decided to maintain the camera metaphor by showing the actual exposure times of 3, 5 and 9 seconds.

I assumed that when we gave the beta version to friends and photographers for feedback, they’d ask for more options. Nobody ever did. Even after our launch, we’ve had very few requests for more exposure time options.

In cases like this, you can get away with fewer features to create a better user experience. We weren’t targeting professional photographers with Spectre: it’s a utility for every iPhone user, and limiting the options made the app far less overwhelming.

Limitations on features can help make your app easy to understand.

Machine Learning As A Design Solution

There are many ways Spectre can merge the hundreds of images taken during the exposure process into the final photo. We found two distinct use cases that required different methods.

The first is the most physically accurate. Without getting too technical, think of it as a moving average:

Yep, this is a hand-held 5 second exposure taken in a windy spot.

In the image above, you can see how the water is essentially ‘ghostly’ between the rocks in the bottom left area of the image. Over time, a long exposure ‘averages’ the image, which creates that ghostly effect.

However, people create incredibly magical imagery with light painting or night-time cityscapes. Due to limitations in iPhone hardware, our averaging method doesn’t do it justice. We realized we needed a special “light-trails mode” that uses a different algorithm.
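To illustrate the difference between the two approaches, here’s a simplified sketch (not Spectre’s actual pipeline): averaging frames smooths motion into that ghostly blur, while a “lighten”-style blend keeps the brightest value ever seen at each pixel, which is what lets light trails persist instead of fading out.

```python
def blend_average(frames):
    """Average blend: each pixel is the mean across all frames.
    Moving subjects smear into a ghostly blur."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def blend_light_trails(frames):
    """'Lighten' blend: keep the brightest value seen at each pixel,
    so moving lights leave persistent trails."""
    return [max(px) for px in zip(*frames)]

# Toy example: a bright "light" moves across a dark 1x4 strip over 4 frames.
frames = [
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0],
]
print(blend_average(frames))       # [0.25, 0.25, 0.25, 0.25] — a dim ghost
print(blend_light_trails(frames))  # [1.0, 1.0, 1.0, 1.0] — a solid trail
```

Averaging dims the moving light to a quarter of its brightness; the lighten blend keeps the full trail.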

Unfortunately, making you choose between modes greatly complicates the experience. Most people — including me, a ‘serious’ photographer — would be happier if the camera simply did the right thing and made the choice for them. Until very recently, that notion was a laughable fiction.

Enter Machine Learning. We trained a neural network to recognize situations that generate beautiful light trails, such as fireworks, light painting and city streets — just to see how feasible it would be as a method to greatly simplify the user experience.

We were shocked by how often the AI was right.

In our testing of the first version with Machine Learning-based scene detection, it got the type of blending right about nine out of ten times. We found ourselves very rarely overriding it.

Allowing users to ignore this otherwise mandatory choice made it possible for anyone to just open the app and shoot without thinking about specifics.
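The decision logic can be surprisingly small once the classifier does the hard work. This is a hypothetical sketch — the function name, threshold, and mode names are mine, not Spectre’s — of how a scene classifier’s confidence might be combined with a manual override that always wins:

```python
def choose_blend_mode(scene_confidence, manual_override=None, threshold=0.5):
    """Pick a blend mode from a (hypothetical) scene classifier score.

    scene_confidence: probability (0-1) that the scene will produce
    light trails (fireworks, light painting, city streets).
    manual_override: 'average' or 'light_trails' if the user chose
    one explicitly, else None.
    """
    if manual_override is not None:
        return manual_override  # the user's choice always wins
    return "light_trails" if scene_confidence >= threshold else "average"

print(choose_blend_mode(0.92))                             # light_trails
print(choose_blend_mode(0.12))                             # average
print(choose_blend_mode(0.92, manual_override="average"))  # average
```

The key design point is the last case: even a confident model yields to the user.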

That’s really what computational photography is all about: using all the smarts in your smartphone to get technology out of the way, so you can focus on the photo (and art of photography) itself.

We have an indication (a light bulb! Get it?) in case users want to see what Spectre is ‘thinking.’ They can override it with the manual control, in case they disagree with the AI.

In all, it taught us that Machine Learning is a very powerful tool for enhancing the user experience if used right — but to also be mindful of its limitations and always empower the user to make their own decisions.

Familiarity Aids Usability

I’ll admit it: Ben and I like to rethink things. When we first designed Halide, we had some pretty off-the-wall designs and ideas for how we could rethink a phone camera interface. In the end, we struck a fine balance between familiarity and clever new design touches.

Halide strikes a balance between conventional and clever.

For Spectre, we had plenty of ideas to rethink how to browse your resulting shots, how a shutter button could look (or even where it could live), and how we’d lay out the user interface.

But given the already complicated premise of long exposure photography, the best possible thing we could do — and as a designer this is hard to admit — was to stick to a fairly conventional design and let the actual features surprise and delight.

You can see this in our traditional shutter button layout, the identical handling of Live Photo playback in our reviewer, and even in putting a more familiar face on a feature we carried over from Halide:

Much like in Halide, users can swipe up and down on the viewfinder to adjust exposure. Up is brighter, down is darker.

To anchor it a bit to the experience of the stock camera app, we added a nice self-explanatory bar with a sun icon to indicate your exposure adjustment. We even removed the ‘EV’ label from the brightness value we show at the top of the interface; most users of Spectre won’t know or care what an EV is anyway.
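The gesture-to-exposure mapping is essentially a linear scale with clamping. Here’s a hypothetical sketch — the function, the full-height-swipe range, and the ±3 EV limit are my assumptions, not Spectre’s actual values:

```python
def pan_to_ev_bias(pan_points, viewfinder_height, max_ev=3.0):
    """Map a vertical pan distance (positive = upward swipe, in points)
    to an exposure bias in EV stops. Hypothetical mapping: a full-height
    swipe spans the whole +/-max_ev range, clamped at the ends."""
    ev = (pan_points / viewfinder_height) * (2 * max_ev)
    return max(-max_ev, min(max_ev, ev))

# Swiping up a quarter of a 668-point viewfinder brightens by 1.5 EV:
print(pan_to_ev_bias(167.0, 668.0))    # 1.5
# A wild downward fling is clamped to the darkest allowed bias:
print(pan_to_ev_bias(-1000.0, 668.0))  # -3.0
```

Clamping is what keeps the gesture forgiving: no swipe, however enthusiastic, can push the exposure outside a sane range.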

The Little Touches

An iPhone Inside Your iPhone

And then there’s the delight you can put into tiny things. Did you see our incredibly adorable tiny phone? It shows users relative phone movement. If you move too much, it indicates that you should hold your phone steadier to get a great result.

A lot of work and refinement went into this little control to ensure it maps correctly to movement and feels natural. It’s also a wonderfully fun interaction: you can really swing it off the screen if you want. As a thoughtful detail, we also made sure it’s designed to look like the phone users are holding in their hands.
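Part of making a motion indicator feel natural is not driving it with raw sensor readings, which jitter constantly. A common technique (a sketch of the general idea, not Spectre’s actual filter or constants) is an exponential moving average over the motion samples:

```python
def smooth_motion(samples, alpha=0.2):
    """Exponential moving average over raw motion samples (e.g. per-frame
    device attitude deltas), so an on-screen indicator glides smoothly
    instead of twitching with every sensor reading. alpha and the sample
    source are illustrative values, not Spectre's actual tuning."""
    smoothed = []
    value = 0.0
    for s in samples:
        value = alpha * s + (1 - alpha) * value
        smoothed.append(value)
    return smoothed

# A sudden, sustained tilt ramps in gradually rather than jumping:
print(smooth_motion([1.0, 1.0, 1.0])[:2])  # starts at 0.2, then 0.36, ...
```

A small alpha trades responsiveness for smoothness; tuning that trade-off is exactly the kind of refinement that makes a control like this feel right.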

A Real Dial

Remember how I talked about the exposure duration dial, and how I really wanted it to feel like a tactile control like the one you have on a camera? The delightful clickiness of it, its precision, the appearance of etched metal… Delicious. How could we possibly translate this to a digital user interface? You’d just be tapping on glass.

It could’ve been as boring as a scrolling picker (like the one in the Clock app’s alarm screen). I’ve never found those pleasant at all. To this day, I never feel like one ends where I want it to end, and its excess movement makes it feel imprecise and clumsy. A dropdown would be a totally digital interface, and a very snooze-worthy one at that.

I’d designed this dial in one of my UI explorations and before I knew it, Ben had started working on a set of custom spring physics to make it work. As I launched our latest build, I actually giggled as I was playing with it. It felt perfect.
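At its core, a spring animation like this is a tiny physics simulation stepped once per frame. Here’s a minimal sketch of a damped spring using semi-implicit Euler integration — the constants are illustrative, not Ben’s actual tuning:

```python
def spring_step(position, velocity, target, stiffness=180.0,
                damping=20.0, dt=1 / 60):
    """One semi-implicit Euler step of a damped spring: accelerate toward
    the target, bleed off velocity, then move. Stepped every frame, this
    gives the bouncy-but-settling feel of a physical dial detent."""
    accel = stiffness * (target - position) - damping * velocity
    velocity += accel * dt
    position += velocity * dt
    return position, velocity

# Animate a dial value snapping from 0 toward a detent at 1.0:
pos, vel = 0.0, 0.0
for _ in range(120):  # two seconds at 60 fps
    pos, vel = spring_step(pos, vel, target=1.0)
print(round(pos, 3))  # settles at ~1.0
```

With stiffness above the critical damping point, the dial overshoots slightly and rings before settling, which is a big part of why it feels clicky rather than rubbery.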

The only issue was that my big silly thumb would totally obscure the control. No worries — this happens in real life with cameras, too.

But we didn’t have to also emulate the limitations of the physical world along with its wonderful tactile qualities: this is the digital world. We can make tiny phones fly. We can use a machine brain to pick the right settings for you.

We can make the dial expand gracefully as you select an exposure time.

A dial that’s better than its real life equivalent.

This is probably my favorite part of the interface. Everything about it feels and looks perfect: Ben’s custom spring physics, my visual design iterations, and the completely custom typeface Jelmar made for the numerals. Three people worked hard on this little doodad — and most users probably won’t even notice the work we put into it!

Halide Router Wide is our custom typeface by Jelmar Geertsma. We use it throughout the app to give Spectre its signature look.

I would argue that’s how you know that you’ve made something well.

When you come across an article set in a superb typeface for reading, some excellent airport signage or a well designed door handle, you don’t think about it or even notice it. If anything you might just notice being happier, or simply not frustrated. That’s how you know a design is great: it disappears.

That’s Spectre

There are many other small touches, visible and invisible, that we applied to make Spectre special. I hope you notice them — or delightfully miss them as the app works its magic.

In the end, what really validated us and the work we put in has been seeing the incredible images from our users. It truly is the greatest feeling. Thanks for making Spectre a success, and thank you for reading!

Other posts we’re currently working on include an extended guide to using Spectre, Ben’s dive into Spectre’s technology, a look at the design of the Spectre icon and its onboarding process, and more.

Have questions or ideas for an article? Don’t hesitate to reach out to us!