How many times have we all caught ourselves writing code like this:
func makeNext() -> Int {
    let tmp = state
    state += 1
    return tmp
}
We want to manipulate some state, but we also want to return it in its pre-manipulated condition. In other words, we’d like to insert a bit of code right after our return statement:
// No, this doesn't work.
func makeNext() -> Int {
    return state
    state += 1
}
But we can’t because nothing after the return will get executed. So instead we stick our state in a tmp variable, do what we gotta do, and then return the tmp.
So what? It’s just one more line of code, right?
The danger here (and with tmp variables everywhere) is that, by definition, they’re exact copies of types we’re actively working with in the same scope. How they differ isn’t semantically clear, and neither the compiler nor code review can stop us from accidentally substituting one for the other:1
func makeNext() -> Int {
    let tmp = state
    state += 1
    return state // Uh oh!
}
Thankfully Swift does provide a way for us to squeeze code in after the return of a method — the documentation just doesn’t make this immediately obvious:
Use defer to write a block of code that is executed after all other code in the function…
Yay!
…just before the function returns.
Oh… Um… Well it turns out they’re talking about the function returning on the stack and not the return statement executing in code. So doing something like this works perfectly:
func makeNext() -> Int {
    defer {
        state += 1
    }
    return state
}
This is true regardless of whether state has value semantics or not.
Note this not only avoids confusing our original and tmp variables, but now the intent of our code has been made clear:
Before, we couldn’t know why we were forking off a tmp unless we analyzed everything that messed with the original and followed it all the way through to the eventual return of its tmp copy.2 Here, it’s clear from the get-go we want to do some stuff with state. But first we’re going to return it.
1: Actually, in this trivial example, the compiler will warn us that tmp is unused. But we can easily imagine situations where that will not be the case.↩︎
2: And then, if you’re anything like me, all the way back to the top to find where tmp was made in an attempt to remember what it was before we changed it.↩︎
Finally, an architecture think piece for the rest of us:
PSA: No one is forcing you to implement multiple DataSources in one Controller. To initiate network calls in viewDidLoad. To parse JSONs in UIViewController. To hard-wire Views with Singleton instances.
If you do that, it’s your fault; don’t blame MVC.
I whole-heartedly endorse all the advice Aleksandar gives within.
(Thanks to Ryan for the link!)
Let’s say we have a truck. We want it to be safe, so we give it some seatbelts and airbags.
struct Truck {
    var seatbeltFastened: Bool
    var airbagsEnabled: Bool
}
And because we care about safety, we don’t want to let the truck drive unless the seatbelts are fastened.
extension Truck {
    func drive() throws {
        guard seatbeltFastened else {
            throw SafetyError.seatbelt
        }
        guard airbagsEnabled else {
            throw SafetyError.airbag
        }
        // drive away...
    }
}
Now we can drive with confidence:
do {
    try myTruck.drive()
} catch {
    print("Safety violation! Driving disabled!")
}
This is pretty nice. To be safe, we have to verify the seatbelt is fastened and the airbags are enabled before we drive. By putting the check in drive() itself, we don’t have to remember to make the check every time we use the truck. And throws gives us a handy way to recover from any exceptional situations where driving isn’t safe.
Here’s the thing about throws, though; it tends to leak up into abstractions built on top of it.
Let’s say we’re building a shipping API, for example. We might want to ship a package via a truck:
func ship(package: Package, truck: Truck) throws {
    truck.add(package)
    try truck.drive()
}
Our ship(package:truck:) function looks pretty clean, but note that it’s marked throws. Some logic around how our truck drives has leaked up into our shipping logic, forcing us to deal with it whenever we ship:
do {
    try ship(package: myPackage, truck: myTruck)
} catch {
    //???
}
This exposes a few problems:
- Nothing tells a caller of ship() that any errors will come from driving or that the solution might be to, say, fasten seatbelts.1
- We want our truck to be safe. But if we validate its safety when we use it, we leak details up to anything that calls it, limiting its composability.
This is an example of complected concerns. We have two concepts here, driving and safety. We have to pull the two apart:
extension Truck {
    func validateSafety() throws {
        guard seatbeltFastened else {
            throw SafetyError.seatbelt
        }
        guard airbagsEnabled else {
            throw SafetyError.airbag
        }
    }

    func drive() {
        // just drive...
    }
}
This is cool in the sense that, having separated our validation from action, ship(...) no longer has to care about the state of the Truck we pass it:
do {
    try myTruck.validateSafety()
    ship(package: myPackage, truck: myTruck)
} catch {
    print("Safety Violation!")
}
But now we have to remember to check the safety of our truck every time before we use it! If we forget once, disaster.2
So what if instead of verifying safety in a method, we make safety a feature of a type?
struct SafeTruck {
    let value: Truck

    init(_ truck: Truck) throws {
        try truck.validateSafety()
        value = truck
    }
}
We’ve taken Truck and wrapped it in a new type, SafeTruck, which can only be created with a Truck that meets its safety requirements.
Which means we can now rewrite ship(package:truck:) to take a SafeTruck instead of a Truck:
func ship(package: Package, truck: SafeTruck) {
    truck.value.add(package)
    truck.value.drive()
}
Now Swift does all the work for us:
ship(package: myPackage, truck: myTruck)
//🛑 Cannot convert value of type 'Truck'
// to expected argument type 'SafeTruck'
This isn’t magic. We haven’t somehow abstracted away the need to catch validation errors. We’ve just moved the implicit check we’d previously made whenever we used a truck into an explicit check we make on initialization:
do {
    let safeTruck = try SafeTruck(myTruck)
} catch {
    print("Truck is not safe!")
}
First, note that by moving validation from a thing that happens on use (where it could be buried beneath twelve layers of abstraction) to something that happens on creation, we’ve front-loaded it. We now get to handle errors where we have the most specific knowledge about them. To put it another way: there’s little doubt why try SafeTruck(myTruck) might fail.
But we’ve also isolated our checks. We only have to write try...catch once (on initialization). After that (if our use case permits) we can reuse our safe, validated truck without having to recheck its safety.3
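Collapsing the types above into one runnable sketch makes the front-loading visible (the ship calls are elided here; the two example trucks are made up for illustration):

```swift
enum SafetyError: Error { case seatbelt, airbag }

struct Truck {
    var seatbeltFastened: Bool
    var airbagsEnabled: Bool

    func validateSafety() throws {
        guard seatbeltFastened else { throw SafetyError.seatbelt }
        guard airbagsEnabled else { throw SafetyError.airbag }
    }
}

// A SafeTruck can only exist if validation succeeded at init.
struct SafeTruck {
    let value: Truck
    init(_ truck: Truck) throws {
        try truck.validateSafety()
        value = truck
    }
}

let unbuckled = Truck(seatbeltFastened: false, airbagsEnabled: true)
let buckled = Truck(seatbeltFastened: true, airbagsEnabled: true)

print((try? SafeTruck(unbuckled)) == nil) // true — fails once, up front
print((try? SafeTruck(buckled)) != nil)   // true — reusable with no re-check
```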
And because we’ve made safety a feature of our type, we have all the brains of Swift’s tireless type checker behind us, making sure we never make a mistake.
After all, when given a type safe language it only makes sense to put safety in types.
1: True, we can deduce this information by matching for a specific error (SafetyError.seatbelt in this case). But knowing the specific error requires we know the implementation of ship(...) well enough to know what methods on Truck get called — and then know Truck well enough to know which methods throw and why.↩︎
2: Where by “disaster” I mean “a bug”.↩︎
3: Note this is only true because Truck is a value type. If SafeTruck stored a reference to a truck instead of a value, truck could be mutated behind its back. There’d be no way to guarantee a truck that was safe at initialization would still be safe later when actually used.↩︎
Wile E. Coyote’s problem wasn’t that the Roadrunner was fast (roadrunners top out at 32 km/h, coyotes at 69 km/h), it was that he was wily. If he did the obvious, simplest thing and ran the bird down instead of trying to be clever, he’d have no difficulty.
Meditate on this before writing each line of code.
Remember how, when Swift introduced us to the concept of Optional, all of a sudden explanations around the theory and implementation of monads became relevant1 and interesting?2 Thanks to the await/async proposal currently being discussed in Swift evolution, the same is about to happen to another rather computer-sciencey concept: coroutines.
We’re still a long way off from any concrete implementation of coroutines in Swift, but that doesn’t mean we can’t explore some of the conceptual underpinnings around how and why we might like to use them.
But to do that we need to begin at the beginning.
What’s a subroutine? Technically, it’s a sequence of program instructions that performs a specific task, packaged as a unit. But we know it better as the simple, everyday function.3
We tend to take functions for granted, and many of their characteristics get overlooked as just “the way things work”. For example, the execution of instructions in a function always starts at the top, never in the middle. And when we leave a function (via explicit or implicit return) we tear it down and can never get back into it:
func makeSplines() -> [Spline] {
    var splines = [Spline(1)]   // We always start here
    return splines              // Once we leave here…
    splines.append(Spline(2))   // …this never happens.
}
Again, this is so familiar to us it hardly seems worth mentioning. This is just the way functions work. But it’s not always desirable. Consider the following:
func reticulate(splines: [Spline]) -> [Spline] {
    let encabulated = encabulate(splines)
    let frobbed = frob(encabulated)
    return frobbed
}
Because subroutines always start at the top and can only exit once, any call to reticulate has to wait for encabulate and frob to complete before moving on. If these subroutines block us for a long time, we might wish we could return from reticulate early to do some work. But as we saw above, once we exit we can never get back — everything after the return gets thrown away.
The customary way of working around this in Swift is with completion handlers:
func reticulate(splines: [Spline],
                completion: ([Spline]) -> Void) {
    encabulate(splines) { encabulated in
        frob(encabulated) { frobbed in
            completion(frobbed)
        }
    }
    return
}
But let’s examine what we’ve actually done here. We’ve moved the entire body of reticulate into closures passed to encabulate and frob. Then we moved all our return values into these closures' parameters. This frees up reticulate to exit immediately because it has no body left to execute and no values left to return.
But remember, subroutines are thrown out the moment they exit. If that’s true and reticulate returns before encabulate or frob are done with their work, how does any of this still exist when we try to run their completion handlers?
The answer lies in our use of closures. From the section on closures in The Swift Programming Language:
A closure can capture constants and variables from the surrounding context in which it is defined. The closure can then refer to and modify the values of those constants and variables from within its body, even if the original scope that defined the constants and variables no longer exists.
We can see this even more clearly if we do a little work in reticulate before calling encabulate:
func reticulate(splines: [Spline],
                completion: ([Spline]) -> Void) {
    let sorted = splines.sorted()
    let reversed = sorted.reversed()
    encabulate(splines) { encabulated in
        completion(encabulated + sorted + reversed)
    }
    return
}
Here we see that not only does our completion closure still exist, it’s able to make use of values defined in reticulate long after it’s exited.
In fact, from a certain point of view, when we define this closure we’re saving the state and position of execution within reticulate. And when we run the closure, we’re sort of resuming execution of reticulate right where it left off.
When a closure is used to preserve the execution environment of a routine like this, it’s called a continuation. And passing continuations into other routines to be called in lieu of returning is known as continuation passing style (CPS).
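To make the shape of CPS concrete, here’s a toy, self-contained version using plain integers — encabulate and frob are made-up stand-ins, kept synchronous for simplicity:

```swift
// CPS versions of two hypothetical steps: instead of
// returning a value, each takes a continuation and
// calls it with the result.
func encabulate(_ n: Int, completion: (Int) -> Void) {
    completion(n + 1)
}

func frob(_ n: Int, completion: (Int) -> Void) {
    completion(n * 2)
}

// reticulate's "body" and "return value" now live entirely
// inside the nested continuations.
func reticulate(_ n: Int, completion: (Int) -> Void) {
    encabulate(n) { encabulated in
        frob(encabulated) { frobbed in
            completion(frobbed)
        }
    }
}

reticulate(3) { result in
    print(result) // (3 + 1) * 2 = 8
}
```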
All of which is a pretty cool trick! But it’s also messy:
func reticulate(splines: [Spline],
                completion: ([Spline]) -> Void) {
    // ❶ Execution starts here.
    // ❷ But immediately jumps over this...
    encabulate(splines) { encabulated in
        // ❹ Sometime later this is called...
        frob(encabulated) { frobbed in
            // ❻ Finally we call the completion,
            // which acts like a return even though
            // it's deeply nested.
            completion(frobbed)
        }
        // ❺ ...but returns immediately.
    }
    // ❸ ...and returns down here.
    return
}
Between the rat’s nest of closures and the execution path that bounces up and down like an EKG, this CPS syntax is far from ideal. All we really want is a way to say “suspend self, pass control, be ready to resume.” But the concept is so antithetical to the core definition of a subroutine (start at top, exit once forever) that it’s hard to express.
Enter coroutines.4
Coroutines are different from subroutines. Actually, they’re a more general form of subroutines that don’t follow the “has to start at top” and “can only exit once” rules. Instead, coroutines can exit whenever they call other coroutines. And the next time they’re called, instead of starting from the top again, they pick up right where they left off.
This makes them naturally suited to expressing the “suspend self, pass control, be ready to resume” concept subroutines have such trouble with. To a coroutine, that’s just a simple call. They don’t need to pass around all that continuation baggage.5
Once we have the ability to define coroutines, we can rewrite our example (using the proposed Swift syntax) as simply:
func reticulate(splines: [Spline])
    async -> [Spline] {
    let encabulated = await encabulate(splines)
    let frobbed = await frob(encabulated)
    return frobbed
}
The async in the func declaration marks this as a coroutine. The await operator marks locations where the coroutine can suspend itself (and, later, resume from).
Compare this to our original, blocking piece of code:
func reticulate(splines: [Spline]) -> [Spline] {
    let encabulated = encabulate(splines)
    let frobbed = frob(encabulated)
    return frobbed
}
We can see that simply switching from a subroutine to a coroutine gives us our desired non-blocking behavior with near identical expressiveness and zero boilerplate.
And that’s just the tip of the coroutine iceberg. They’re the foundation of handy abstractions like Actors and Futures. They’re an incredible tool for parsing and lexing.
And, if we think about it, coroutines are essentially tiny little state machines. What can’t we do with tiny little state machines?!
Exploring all these will have to wait for future posts, though. Probably after we get a Swift implementation to play around with. Let’s keep our fingers crossed for v5!
1: For geeky values of “relevant”.↩︎
2: For highly geeky values of “interesting”.↩︎
3: Or, in cases where the subroutine has access to the state of an object, a method.↩︎
4: FINALLY! ↩︎
5: At least conceptually. Coroutines have to stash the state of their execution environment somewhere. And in theory Swift’s coroutine implementation could just be sugar for rewriting

let foo = await bar()
//rest of the body

into

bar(continuation: { foo in
    //rest of the body
})

But as an abstraction, coroutines let us think about suspending and resuming rather than passing continuations. And they do this regardless of how they “actually” work under the hood.↩︎
Some of you have probably noticed I almost never use the second person here (and rarely the first). It’s because I don’t feel like I’m writing to you. I feel like we’re working together, here. Trying to understand the misunderstood and fix the broken. Figuring it out.
But now I’m writing to you.
I’m asking you, actually. I’m asking, “What do you think about the Google manifesto?”
Do you think it’s been blown out of proportion? That maybe it made some good points? Do you think there’s an equivalence between diverse, inclusive speech and speech against diversity and inclusion?
Fair enough. How about the Nazis who showed up in Charlottesville?
They have the same manifesto. Do you support them? If you’ve already bought into the genetic inferiority of all women, it’s really not much of a jump to throw blacks and jews on the fire as well, is it?
If you believe we have to be “open and honest” about women’s genetic limitations so we can “help” them find their proper role in tech, maybe you would stand arm-in-arm with the alt-right to defend that statue of Robert E. Lee? Lee, who wrote “the blacks are immeasurably better off” as slaves and “the painful discipline they are undergoing, is necessary for their instruction as a race” and “will prepare & lead them to better things.” It’s the same manifesto.
The Charlottesville Nazis, the alt-right, who proudly march alongside them, our president, who makes excuses for each, and this Google manifesto prat — they are all of a piece and they are all selling the same scam. Support for one is support for all.
And support of any means you will find no support here. There will never be a “we” that includes you. Your only goal is to misinform and break.
We’re here to understand and fix.
Let’s play some battleship! Assuming a standard 10x10 board, we’ll need two collections:
let xs = 1...10
let ys = ["A", "B", "C", "D", "E",
          "F", "G", "H", "I", "J"]
If we wanted to hit every square on the grid, we’d have to iterate over both lists in a nested loop.1
for x in xs {
    for y in ys {
        target(x, y)
    }
}
There’s nothing wrong with this as far as it goes. But it doesn’t do a great job of expressing what we’re actually trying to accomplish — that is, iterating over every ordered pair of ys and xs.
A more declarative way to go about our task might be to generate a new collection which is explicitly the cartesian product of the two sets, and iterate over that:
product(xs, ys).forEach { x, y in
    target(x, y)
}
Great! Only one minor problem: product doesn’t exist.
Building something like product isn’t super hard. The trickiest part is getting all the generics right so it can be used with, say, a closed range just as well as with an array.
func product<X, Y>(_ xs: X, _ ys: Y) ->
    [(X.Element, Y.Element)]
    where X: Collection, Y: Collection {
    var orderedPairs: [(X.Element, Y.Element)] = []
    for x in xs {
        for y in ys {
            orderedPairs.append((x, y))
        }
    }
    return orderedPairs
}
But when we calculate all the ordered pairs of two sets, the size of the resultant set is (as the name implies) the product of both sets. That’s no big deal when we’re talking a 10x10 grid. But if we’re dealing with 10k elements, we can benefit from a lazier approach that stores the individual sets and generates their products on the fly, as needed.
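We can sanity-check the behavior and the size claim with a tiny eager version — the same shape as the function above, applied to two small example sets:

```swift
// Eagerly builds every ordered pair of two collections.
func product<X: Collection, Y: Collection>(
    _ xs: X, _ ys: Y
) -> [(X.Element, Y.Element)] {
    var orderedPairs: [(X.Element, Y.Element)] = []
    for x in xs {
        for y in ys {
            orderedPairs.append((x, y))
        }
    }
    return orderedPairs
}

let pairs = product(1...3, ["A", "B"])
print(pairs.count) // 6 — the product of the two counts
print(pairs[0])    // (1, "A")
```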
Let’s start by building an iterator:
public struct CartesianProductIterator<X, Y>:
    IteratorProtocol where
    X: IteratorProtocol,
    Y: Collection {

    public typealias Element = (X.Element, Y.Element)

    public mutating func next() -> Element? {
        //...
    }
}
Why is our generic type X an iterator while we force Y to conform to Collection? If you look at the nested loop in our naive example above, you’ll see that while we iterate over xs only once, we actually loop over ys a number of times (an xs.count number of times, to be precise).
IteratorProtocol allows us to iterate over a set exactly once, so it’s perfect for our X type. But only Collection guarantees us the ability to non-destructively traverse a sequence over and over. So Y must be a little more constrained.
Let’s add an initializer to store our iterator, collection, and related curiosities as properties:
private var xIt: X
private let yCol: Y
private var x: X.Element?
private var yIt: Y.Iterator

public init(xs: X, ys: Y) {
    xIt = xs
    yCol = ys
    x = xIt.next()
    yIt = yCol.makeIterator()
}
First note xIt is a var. Iterators mutate themselves in the course of serving up next(), so our copy of xs must be mutable.
Also, our ultimate goal here is to take values from xIt and for each of them iterate over all the values of yCol. We prep for this by pulling the first value out of xIt into x and making an iterator for yCol called yIt.
And note x needs to be optional. We’ll ultimately iterate over xIt until we hit the end — and we’ll know we hit the end when x is nil.2
With all that settled, let’s move on to our implementation of next().
The first step of next() is simple; pull a value out of yIt each time it’s called and pair it with the same ol' x we set in the initializer (providing, of course, x isn’t nil):
public mutating func next() -> Element? {
    guard let someX = x else {
        return nil
    }
    guard let someY = yIt.next() else {
        return nil
    }
    return (someX, someY)
}
There. Now each call to next() returns x and whatever the next value of yIt is. But what do we do once we hit the end of yIt? We want to bump x to the next value of xIt, create a new yIt from our collection — and then do the whole thing over again.
Anytime we say to ourselves “…and then do the whole thing over again,” it’s a sign recursion is in our future.
There’s nothing magical about recursion. To do it, we just need to call a method from within the implementation of itself. We’ll do it here when we run out of values in yIt:
public mutating func next() -> Element? {
    guard let someX = x else {
        return nil
    }
    guard let someY = yIt.next() else {
        return next() // Recursion!
    }
    return (someX, someY)
}
But there are two things we need to pay attention to whenever we write recursive routines.
The first is making sure we don’t loop indefinitely. We’ll do that by setting conditions for terminating the recursion, and then making sure we move towards that condition with each iteration.
Our termination condition already exists. It’s that top guard. If x is ever nil, we’re out.
So all we have to do is make sure we’re moving towards a state where x is nil:
public mutating func next() -> Element? {
    guard let someX = x else {
        return nil
    }
    guard let someY = yIt.next() else {
        yIt = yCol.makeIterator()
        x = xIt.next()
        return next()
    }
    return (someX, someY)
}
There. Now every time we hit the end of yIt, we not only make a new one from our collection, but we also pull the next x from xIt. Eventually, xIt will run out of elements, and x will be nil. End of recursion.
The second thing to look out for when writing recursively is blowing the stack. A stack overflow happens when you call too many functions in a row without returning.3
We can easily see how recursion might cause this to happen. Basically, when we call next() from inside next() we dig one level deeper in the stack. If calling next() in next() causes us to call next(), now we’re another level deep. Get too deep, and we error out with a busted stack.
Thankfully we can see the only time we call next() from inside next() is when yIt.next() returns nil. So the only way we can dig deeper in the stack is if yIt.next() returns nil many times in a row. And the only way that could happen is if yCol is empty.4
So we’ll short-circuit that specific case with a guard:
public mutating func next() -> Element? {
    guard !yCol.isEmpty else {
        return nil
    }
    guard let someX = x else {
        return nil
    }
    guard let someY = yIt.next() else {
        yIt = yCol.makeIterator()
        x = xIt.next()
        return next()
    }
    return (someX, someY)
}
And so, finally, we can lazily iterate over every ordered pair of our collections — all while only storing the collections themselves, not their product. But the syntax is a bit awkward:
let xs = 1...10
let ys = ["A", "B", "C", "D", "E",
          "F", "G", "H", "I", "J"]

var it = CartesianProductIterator(
    xs: xs.makeIterator(),
    ys: ys)

while let (x, y) = it.next() {
    target(x, y)
}
Let’s wrap this in a Sequence to get access to for...in, forEach, and the rest.
public struct CartesianProductSequence<X, Y>:
    Sequence where
    X: Sequence,
    Y: Collection {

    public typealias Iterator =
        CartesianProductIterator<X.Iterator, Y>

    private let xs: X
    private let ys: Y

    public init(xs: X, ys: Y) {
        self.xs = xs
        self.ys = ys
    }

    public func makeIterator() -> Iterator {
        return Iterator(xs: xs.makeIterator(),
                        ys: ys)
    }
}
And as a finishing touch, let’s add a top-level function to make chaining a little more readable:
public func product<X, Y>(_ xs: X, _ ys: Y) ->
    CartesianProductSequence<X, Y> where
    X: Sequence,
    Y: Collection {
    return CartesianProductSequence(xs: xs, ys: ys)
}
And just like that, our battleship dreams have become reality:
let xs = 1...10
let ys = ["A", "B", "C", "D", "E",
          "F", "G", "H", "I", "J"]

product(xs, ys).forEach { x, y in
    target(x, y)
}
As ever, here’s a gist of all this together in one place. Compliments of the house.
1: I know Battleship™ was invented before Descartes or something, and insists on calling out the lettered Y axis before the numbered X one. I tried a version of this post where the imaginary target(_:_:) function took its y parameter before the x. I was constitutionally incapable of publishing it.↩︎
2: Or it could be nil right now. If xIt is an iterator over an empty set, x will be nil by the end of initialization.↩︎
3: Ironically, I didn’t find the answer here particularly compelling.↩︎
4: Specifically: if yCol is empty, then every time we call next(), we’re going to immediately call return next() over and over again until we hit the end of xIt. If xIt is long, this could easily be enough to overflow the stack.↩︎
Good news, everyone!
While my original post on this topic might be of some small interest in a general “strategies for using expression patterns” kind of way, Swift provides a much better solution for the specific problem of matching NSErrors.
NSErrors are bridged to error structs in Swift.
What does that mean in practice? You can find all the gory details in SE-0112, but the long and the short of it is, depending on the domain of a given NSError, Swift automatically bridges it to a struct describing that domain. In the case of NSURLErrorDomain, for example, Swift will bridge to a URLError:
let error = NSError(domain: NSURLErrorDomain,
                    code: NSURLErrorTimedOut)
error is URLError //> true
This alone lets us deal with NSErrors in a much more declarative way without thinking about domains (and that’s not to mention all the goodies it gives us that we used to have to dig through userInfo for — like failingURL!)
But there’s more:
Looks like Foundation provides the ~=(T.Code, T) matcher for you.
So to match a timeout in Swift without writing any code, it’s actually as simple as:
catch URLError.timedOut {
    print("timed out!")
}
Wow! As Doug Gregor writes,
You shouldn’t need to match on NSError nowadays, unless someone is vending an error code not marked with NS_ERROR_ENUM.
I wish I had learned about this a lot earlier. Spread the word! Please and thank you.
EXCITING UPDATE!
Swift is way more on top of this than I originally gave it credit for. The TL;DR is you can actually match NSErrors using their structs like so:
catch URLError.timedOut {
    print("timed out!")
}
The updated post over here has all the details.
For context, the rest of the original is provided below. But seriously, check out the update.
Say we’re writing some sort of networking library on top of URLSession. We might define some domain-specific errors that only make sense in the context of our library — unexpected headers or things like that:
enum RequestError: Error {
    case missingContentType
    case missingBody
    //...
}
And these would be really easy for a client using our library to catch:
let net = AmazingNetworkThing()

do {
    try net.fetchRequest(myRequest)
} catch RequestError.missingContentType {
    print("unknown content type")
} catch {
    //...
}
But the vast majority of network-related errors would actually be raised by the URL loading system itself. And that’s Foundation-level API, which means it raises NSErrors. Which, in turn, means a consumer of our library would have to catch a simple timeout with something like:
do {
    try net.fetchRequest(myRequest)
} catch let error as NSError where
    error.domain == NSURLErrorDomain &&
    error.code == NSURLErrorTimedOut {
    print("timed out!")
}
By comparison, this feels overly wordy and very imperative.
We could, of course, catch every NSError thrown by URLSession in our library and wrap it in a custom enum-based error for easier matching by the client. But there are a few problems with this approach:
- NSURLErrorDomain defines constants for 49 error codes. And there could be more that are undocumented. Which brings us to…
- URLSession. By abstracting over part of its API in our library, we’re putting ourselves on the hook for keeping pace with Apple devs from point-release to point-release. And they have a bigger team than ours. Which also suggests…
- URLSession is much more widely adopted than our library. Consumers of our lib way down the stack might be expecting an NSURLErrorDomain error rather than our custom, one-off wrapper.

So rather than wrapping the error, a better solution is to wrap the matcher.
struct ErrorMatcher {
    let domain: String
    let code: Int
}

func ~= (p: ErrorMatcher, v: Error) -> Bool {
    return
        p.domain == (v as NSError).domain &&
        p.code == (v as NSError).code
}
See this post for more on that funky ~= operator.
Using this, we can more elegantly and declaratively match any NSError:
do {
    try net.fetchRequest(myRequest)
} catch ErrorMatcher(
    domain: NSURLErrorDomain,
    code: NSURLErrorTimedOut) {
    print("timed out!")
}
And if we were to add a little sugar for a domain of particular concern to us, well who would blame us?
extension ErrorMatcher {
    static func urlDomain(_ c: Int) -> ErrorMatcher {
        return ErrorMatcher(
            domain: NSURLErrorDomain,
            code: c)
    }
}

// Then...
do {
    try net.fetchRequest(myRequest)
} catch ErrorMatcher.urlDomain(NSURLErrorTimedOut) {
    print("timed out!")
}
And if a given domain/code pair were common enough, we could even:
extension ErrorMatcher {
    static let urlTimeout =
        ErrorMatcher.urlDomain(NSURLErrorTimedOut)
}

// Thus...
do {
    try net.fetchRequest(myRequest)
} catch ErrorMatcher.urlTimeout {
    print("timed out!")
}
Thankfully we don’t have to toss out all these existing, localized, very well documented NSError babies with the bath water of imperative matching. Leveraging Swift’s expression patterns, we can have declarative enum-like matching in our catch clauses — even with NSErrors.
Say we’re writing an HTTP library. We’re going to want a way to deal with headers.
func addHeader(_ header: String, value: String) {
    //...
}
Take a look at the signature of addHeader. On the surface, there’s no problem here. The spec roughly defines headers as a list of key/value pairs with both the key and the value being text. Seems pretty straightforward:
addHeader("Contant-Type", value: "text/html")
But it’s not the wild west. HTTP headers have a number of well-known keys. And some, like “Content-Type” here, are used over and over again. And if we look above, we’ll see I mistyped it.
No problem. We’ll define a constant to use instead:
let kContentType = "Content-Type"
addHeader(kContentType, value: "text/html")
A great solution for the problem at hand. But we haven’t dealt with the root issue: the interface is still inherently stringly typed. Nothing actually enforces the use of our constant, so…
//Me in a different file.
//After three years.
//And a bottle of Buffalo Trace.
addHeader("cantnt-tipy", value: "max/headroom")
Right. This is why we have enums.
enum HeaderKey {
    case accept
    case contentType
    case userAgent
    //...
}

func addHeader(_ header: HeaderKey, value: String) {
    //...
}
addHeader(.contentType, value: "max/headroom")
Great! Clean and very swifty. We could stop here…
Except that well-known headers aren’t the whole story. Custom headers are very much a thing.
addHeader("X-Coke-Type", value: "New™ Coke®")
//🛑 cannot convert value of type 'String'
// to expected argument type 'HeaderKey'
How do we make room in our enum for unexpected and unknowable keys like this? We’ll capture them in an associated value:
enum HeaderKey {
    case accept, contentType, userAgent
    case other(String)
}
addHeader(.contentType, value: "max/headroom")
addHeader(.other("X-Coke-Type"),
          value: "New™ Coke®")
And now we have an interesting decision to make. Do we want to enforce safe, well-known constants and provide an option to specify arbitrary strings? Or do we want to allow arbitrary strings and provide an option to specify safe, well-known constants?
Above I’ve chosen the former. But if the situation calls for the latter, we could easily make HeaderKey
conform to ExpressibleByStringLiteral
:
extension HeaderKey: ExpressibleByStringLiteral {
  public init(stringLiteral value: String) {
    self = .other(value)
  }
  //...
}
Then we could write our custom headers without .other
:
addHeader("X-Coke-Type", value: "New™ Coke®")
Now, of course, there’s nothing to stop us from fat-fingering “Contant-Type” as a string literal. But HeaderKey
is still there in the signature and we can use .contentType
if we choose.
Which of these approaches is correct? Neither and both — it’s a trade-off that depends on the use case. For our HTTP header example, though, it feels right to prioritize enumeration over custom strings.
Speaking of conformance to string protocols, so far we’ve been focusing on cleaning up the call site. But remember, ultimately, headers are text. So when we pass them to our networking libraries et al., we’ll need to treat them like strings. That’s what CustomStringConvertible
is for:
extension HeaderKey: CustomStringConvertible {
  public var description: String {
    switch self {
    case .accept: return "Accept"
    case .contentType: return "Content-Type"
    case .userAgent: return "User-Agent"
    case .other(let s): return s
    }
  }
}
At this point, we might ask “Why not RawRepresentable
?” It’s true, RawRepresentable does almost exactly what we want. But it carries with it the extra overhead of initializing with a raw value which we’ll never use.1 And String(describing:)
is the canonical way “to convert an instance of any type to its preferred representation as a string.”
func addHeader(_ header: HeaderKey, value: String) {
  let headerText = String(describing: header)
  libraryExpectingAString(headerText)
}
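Putting it all together, here’s a minimal, self-contained sketch (same names as above) showing that both well-known and custom keys reduce to the plain text our libraries expect:

```swift
// Consolidated sketch of the type built up over this post.
enum HeaderKey: ExpressibleByStringLiteral, CustomStringConvertible {
  case accept, contentType, userAgent
  case other(String)

  // Lets string literals stand in for custom keys.
  init(stringLiteral value: String) {
    self = .other(value)
  }

  // The wire-format text for each key.
  var description: String {
    switch self {
    case .accept: return "Accept"
    case .contentType: return "Content-Type"
    case .userAgent: return "User-Agent"
    case .other(let s): return s
    }
  }
}

let wellKnown = String(describing: HeaderKey.contentType)         // "Content-Type"
let custom = String(describing: HeaderKey.other("X-Coke-Type"))   // "X-Coke-Type"
```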
Very rarely, in life or code, is any list absolute or certain. Situations come up all the time where we need an “escape hatch” from our carefully calculated set of pre-defined options.
When that happens, we don’t need to throw our hands up in despair and make everything a String
. Enumerations (combined with CustomStringConvertible
and maybe even ExpressibleByStringLiteral
) let us accommodate the 20% case without jeopardizing the safety and convenience of the other 80%.
1: Still, RawRepresentable
is way cooler than it’s often given credit for, and those interested should read Ole Begemann’s amazing write up on manually implementing the protocol.↩︎
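For the curious, a hand-rolled conformance might look something like this (a sketch of the idea, not Ole’s implementation; the enum is repeated here so the snippet stands alone):

```swift
enum HeaderKey {
  case accept, contentType, userAgent
  case other(String)
}

// Manual RawRepresentable conformance. Note the failable init
// we're obligated to provide — the "extra overhead" mentioned
// above — even though, thanks to .other, it can never fail.
extension HeaderKey: RawRepresentable {
  var rawValue: String {
    switch self {
    case .accept: return "Accept"
    case .contentType: return "Content-Type"
    case .userAgent: return "User-Agent"
    case .other(let s): return s
    }
  }

  init?(rawValue: String) {
    switch rawValue {
    case "Accept": self = .accept
    case "Content-Type": self = .contentType
    case "User-Agent": self = .userAgent
    default: self = .other(rawValue)
    }
  }
}
```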
Before a thing can be read, it must be seen. And this is a problem with Swift’s1 “not” operator: !
. It’s comparatively thin so it doesn’t leave a lot of ink on the page. And unlike a .
or ,
, it’s also tall, so it isn’t able to use its negative space to stake out territory.
Compare:
foo.bar.baz
foo!bar!baz
(.many, .unique, .books)
(!many, !unique, !books)
The .
pops out as a “nonletter”. The !
blends together with whatever’s around it. Which makes it non-ideal for a “reevaluate this entire expression as the opposite of whatever it was” operator.
Surprisingly, the rise of boutique “programmer’s fonts” hasn’t really helped us out here. Everything from Courier Prime to Source Code Pro2 renders !
more or less the same: an undifferentiated, thin line. Even Hoefler&Co’s Operator, which goes out of its way to be ugly in the name of readability, toes the line when it comes to the humble !
.
But where $600 typefaces fail us, we can oft find salvation in unicode and emoji:
prefix operator ❗️
prefix public func ❗️(a: Bool) -> Bool {
  return !a
}
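Pasted into a playground, the whole thing is a quick sanity check — the custom operator behaves identically to the built-in !:

```swift
// The operator, as defined above.
prefix operator ❗️
prefix func ❗️(a: Bool) -> Bool {
  return !a
}

let lights = false
let flipped = ❗️lights   // same result as !lights
```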
I’m not going to claim this hasn’t been a controversial change for some of my teams. But it’s indisputable that it stands out:
foo!bar!baz
foo❗️bar❗️baz
(!many, !unique, !books)
(❗️many, ❗️unique, ❗️books)
Of course, a ❗️ is a little more difficult to type than a “!”. But that’s not a bug, it’s a feature! Because…
Let’s look at a simple conditional.
if homer.isLickingToads { ... }
This is the very definition of readable. Why? Because (English-speaking) brains are highly adapted to parse English, and this reads like English: “If Homer is licking toads…”.
Now let’s look at its negation:
if ❗️homer.isLickingToads { ... }
Hmm. This is a little less readable3 because it doesn’t parse quite right. “If not Homer is licking toads…” ultimately makes sense, but only because we disengage our language center and engage our logic circuits to eval it. This creates friction.
Now, it’s really important to note we write all our code in “logic mode”. So typing out ❗️homer.isLickingToads
is the most natural, easiest thing to do at coding time — even though it’s (slightly) more difficult to read afterwards.
This asymmetry is important to call out because it’s the opposite of useful. We write code only once but read it many times thereafter. If we have to introduce a burden, we want to shift it to the writer, not the reader.4 So we would like this to read:
if homer.isNotLickingToads { ... }
Our linguistic brains parse this just fine. And that troublesome ❗️ has vanished altogether!
On the down side, our logical brains now need to write more code. How much more? Just:
extension Homer {
  var isNotLickingToads: Bool {
    return ❗️isLickingToads
  }
}
That seems a totally worthwhile tradeoff.
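As a self-contained sketch (with a hypothetical Homer type assumed, and the plain ! standing in for ❗️ so it compiles anywhere):

```swift
struct Homer {
  var isLickingToads = false

  // The one-line "burden" shifted from reader to writer.
  var isNotLickingToads: Bool {
    return !isLickingToads
  }
}

let homer = Homer()
if homer.isNotLickingToads {
  // Reads like English — no easy-to-miss negation operator.
}
```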
1: Along with every other common programming language… though interestingly this is one of those cases where C-based languages diverged from their ALGOL roots. ALGOL and its non-C derivatives use NOT
.↩︎
2: Which is the best and is totally what you should be using.↩︎
3: And if you feel there’s nothing wrong with this, please substitute your own arbitrary number of parentheses and ||
s until it becomes scary.↩︎
4: It’s probably also worth pointing out that, until we make this shift, code review is either Sisyphean or useless.↩︎
So much good stuff in this post from Microsoft’s CVP of Office Development, Terry Crowley. The excerpts speak for themselves:
If essential complexity growth is inevitable, you want to do everything you can to reduce ongoing accidental or unnecessary complexity.
And:
What I found is that advocates for these new technologies tended to confuse the productivity benefits of working on a small code base with the benefits of the new technology itself — efforts using a new technology inherently start small so the benefits get conflated.
And:
The dynamic you see with especially long-lived code bases like Office is that the amount of framework code becomes dominated over time by the amount of application code and in fact frameworks over time get absorbed into the overall code base…
This means that you eventually pay for all that code that lifted your initial productivity. So “free code” tends to be “free as in puppy” rather than “free as in beer”.