In this post, I will share some of my favourite ‘customizations’ that make Swift an even more pleasant environment than it already is. Most of it comes straight from Warp, the big data analysis tool I’ve been developing, and focuses on concurrency and data processing.
Dealing with the main thread
Most code that interacts with the user interface is required to run on the main thread. Not doing so can cause all sorts of weird lock-ups that are hard to debug. Using the AssertMainThread utility function, you can verify that a particular piece of code only ever runs on the main thread. It also uses a neat little trick with default parameters to show the file and line number of the code that should have been run on the main thread. When compiling in release mode, the assert is a no-op and the whole thing is optimized away by the compiler.
internal func AssertMainThread(file: StaticString = __FILE__, line: UWord = __LINE__) {
    assert(NSThread.isMainThread(), "Code at \(file):\(line) must run on main thread!")
}
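For illustration, a UI method could assert this contract at its start (the method name and body below are hypothetical, just to show the usage):

/* Hypothetical example: the assert documents and enforces the threading contract,
   and compiles away entirely in release builds. */
func updateProgressBar(progress: Double) {
    AssertMainThread()
    // ...update the progress indicator here...
}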
If you need to initiate UI work from a background thread/queue, you can use the AsyncMain function – the code block passed to it is executed on the main thread as soon as it has time to do so. This is like ‘posting a message’ to the main queue – the background thread should not (and cannot) wait for the UI work to complete. Thanks to Swift’s syntactic sugar, you can omit the parentheses, which makes your code look clear and clean: AsyncMain { /* your UI work here */ }.
internal func AsyncMain(block: () -> ()) {
    dispatch_async(dispatch_get_main_queue(), block)
}
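A typical pattern is to kick off the heavy lifting on a background queue and hop back to the main queue only for the UI update. The sketch below is illustrative; the view controller, its outlet and the calculation method are made-up names, not code from Warp:

import Cocoa

/* Hypothetical view controller: recalculates a value in the background and
   shows the result in a label via AsyncMain. */
class StatisticsViewController: NSViewController {
    @IBOutlet weak var resultLabel: NSTextField!

    func recalculate() {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
            let result = self.performExpensiveCalculation() // heavy work, off the main thread
            AsyncMain {
                self.resultLabel.stringValue = result // UI update, safely on the main thread
            }
        }
    }

    private func performExpensiveCalculation() -> String {
        // Placeholder for the real (expensive) work
        return "done"
    }
}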
Dealing with the sad path
Swift makes it really easy for a programmer to reason in the ‘happy path’ – when everything in your program works just as you had in mind. Using optionals, you can make sure that if things don’t work out as planned, no serious damage is done. This however makes it difficult to present users with any sort of error message. History has also shown that programmers really ought to pay more attention to the ‘sad path’. Luckily, Swift lets us do this by introducing a so-called ‘fallible type’. This is conceptually the same as an optional type, but instead of becoming nil on error, it can carry an error message. The code can be found here; usage is as follows:
func askUserInput() -> Fallible<String> {
    /* return either .Failure("error message") to indicate error,
       or QBEFallible(return value) to indicate success */
}

switch askUserInput() {
case .Success(let input):
    println("Successfully got input: \(input.value)")
case .Failure(let errorString):
    println("Something went wrong: \(errorString)")
}

func stringToInt(x: String) -> Fallible<Int> { ... }

let number = askUserInput().use { stringToInt($0) } /* number is a Fallible<Int> */
The nice thing about this particular implementation is that you can chain operations on fallible values with ‘.use’, and any error message is automatically propagated (e.g. if askUserInput fails in the second example, the ‘number’ variable will be the failure value from askUserInput; stringToInt will not be executed in that case).
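For reference, here is a minimal sketch of what such a type can look like in Swift 1.x (illustrative only; the actual implementation linked above is more elaborate). The Box helper is needed because Swift 1.x cannot store a generic payload directly in a multi-payload enum:

/* Minimal sketch of a fallible type (illustrative; the real implementation differs). */
final class Box<T> {
    let value: T
    init(_ value: T) { self.value = value }
}

enum Fallible<T> {
    case Success(Box<T>)
    case Failure(String)

    /* Chain an operation: apply the block on success, propagate the error message on failure. */
    func use<U>(block: T -> Fallible<U>) -> Fallible<U> {
        switch self {
        case .Success(let box): return block(box.value)
        case .Failure(let message): return .Failure(message)
        }
    }
}

/* Convenience constructor mirroring QBEFallible(value) in the examples above. */
func QBEFallible<T>(value: T) -> Fallible<T> {
    return .Success(Box(value))
}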
Logging from different background threads at once
If you use the built-in println function in Swift to log messages from different threads running concurrently, you may notice that from time to time, garbled text appears on the console. This is because println doesn’t force threads to take turns writing to the console. The following alternative does, however:
internal func Log(message: String) {
    dispatch_async(dispatch_get_main_queue()) {
        println(message)
    }
}
This function basically submits each log message to the main queue, from where it is written to the console sequentially. Note that because this operation is asynchronous, log messages may appear (much) later than expected (or not at all, if the main thread is blocked or crashes before the log message is processed) but, well, if that’s the case then you have bigger fish to fry. dispatch_async does guarantee that the println calls are executed in order of submission (i.e. log messages from the same thread will always appear in order).
Dealing with the future
Applications that make extensive use of multi-threading to perform lengthy calculations in the background need to deal with several issues:
- Avoiding the recalculation of something expensive, and not starting a new calculation for something that is already being calculated;
- Calculating expensive values lazily, only when they are needed;
- Cancelling jobs when the expensive result is no longer needed;
- Performing expensive work on different queues (e.g. different QoS classes in libdispatch);
- Dealing with errors from background threads.
I usually use a class called ‘Future’ to deal with this. A Future represents a value that can be obtained asynchronously; if a calculation for the value is not yet in progress, it is started as soon as the value is first requested. If a calculation is already in progress, the request callback is put on a ‘waiting list’ and is called as soon as the calculation finishes. The expensive calculation is represented as a callback that takes a ‘job’ object as one of its parameters; using the job object, it can check whether it is supposed to cancel its work or continue, and it can obtain the queue to which it should submit subtasks. The job object also tracks progress: any subtask can report progress values to it, and it will coalesce them and report them to anyone listening on the future object. Finally, the job object provides a log method that serializes log messages and attaches a ‘job number’, so messages from jobs running in parallel can easily be distinguished.
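To make the shape of this more concrete, here is a strongly simplified sketch (the class used in Warp is more elaborate; names such as Job, async and get are assumptions for this example, and synchronization of the shared state – which a real implementation needs – is omitted for brevity):

import Foundation

/* Simplified sketch of the Future/Job idea (illustrative). Progress reporting and
   error handling are left out, and access to the shared state is not synchronized. */
class Job {
    let queue: dispatch_queue_t
    private(set) var cancelled = false

    init(queue: dispatch_queue_t) { self.queue = queue }
    func cancel() { cancelled = true }

    /* Submit a subtask to the queue associated with this job. */
    func async(block: () -> ()) { dispatch_async(queue, block) }
}

class Future<T> {
    /* The expensive calculation: receives the job (for cancellation checks and
       subtasks) and a callback through which it delivers its result. */
    typealias Producer = (Job, (T) -> ()) -> ()

    private let producer: Producer
    private let job: Job
    private var cachedValue: T? = nil
    private var waiting: [(T) -> ()] = []
    private var started = false

    init(queue: dispatch_queue_t, producer: Producer) {
        self.producer = producer
        self.job = Job(queue: queue)
    }

    /* Request the value: the first request starts the calculation, later requests
       join the waiting list or are served from the cached result. */
    func get(callback: (T) -> ()) {
        if let value = cachedValue {
            callback(value)
            return
        }
        waiting.append(callback)
        if !started {
            started = true
            self.producer(job) { value in
                self.cachedValue = value
                for waiter in self.waiting {
                    waiter(value)
                }
                self.waiting.removeAll()
            }
        }
    }
}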
Swift is a neat language
I first started programming in Java around 2004 – I liked the simplicity of the language, but it also made life difficult in many subtle ways (anyone remember fiddling with the CLASSPATH…). As it was also the language adopted by academia and XML reference implementations, many APIs were over-engineered to the maximum extent possible. Soon after, I discovered C++ and instantly loved it. The great thing about C++ is that it lets you program the programming language itself: you can overload operators and use various other language constructs to make your life as a programmer easier, and in turn your programs better and more robust. Everyone keeps going on about how dangerous and cumbersome C(++)’s pointers are, but the fact is that they’re easy to work with if you tell the language how you want to deal with them (e.g. using this, which is the equivalent of ARC for C++). You need a custom parser? No problem, write your own using ‘almost’ EBNF syntax right in the source code!
In the years after, I discovered that some things that were notoriously difficult in C++ or even Java (string handling! working with JSON! asynchronous code!) could be done much more easily in scripting languages like JavaScript and even PHP. And then there was Swift. Swift is the first language I’ve seen that (1) is native and interfaces well with C libraries, (2) provides proper support for closures and first-class functions (allowing functional programming and asynchronous constructs) and (3) is as easy to use as the scripting languages when it comes to mundane tasks, such as dealing with strings. Paired with the features of libdispatch, Swift is a powerful beast for data processing. But above all, Swift is the first language that I can customize like I could customize C++. Yes, you can write that parser in Swift as well. Swift is a very neat language indeed.
Update (June 14, 2015): Swift 2.0 (announced last Monday) brings more goodies to the table that make some of the above items even easier to implement. As Swift 2.0 supports ‘multi-payload enumerations’, the ‘Fallible’ type presented above can be used in a more natural way: you can return “.Success(someResult)” rather than “QBEFallible(someResult)” (which is easier on the eyes) and no longer need to use “.value” on the success value (i.e. after “case .Success(let x)”, x is the actual result value rather than a ‘box’ containing that value). You should also mark the QBEFallible.do method with the @warn_unused_result attribute, so that it is never inadvertently used to silence errors.
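For illustration, the chaining method from the earlier examples could then look like this under Swift 2.0 (again a sketch, not the actual Warp code):

/* Sketch of the fallible type under Swift 2.0 (illustrative): the generic success
   value is stored directly in the enum, so no Box and no .value unwrap are needed. */
enum Fallible<T> {
    case Success(T)
    case Failure(String)

    /* Chaining as before; @warn_unused_result makes the compiler warn when a caller
       ignores the returned Fallible, so errors cannot be silenced by accident. */
    @warn_unused_result func use<U>(block: T -> Fallible<U>) -> Fallible<U> {
        switch self {
        case .Success(let value): return block(value)
        case .Failure(let message): return .Failure(message)
        }
    }
}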
Someone over at Hacker News suggested the following with respect to the ‘Log’ function. It is a good idea indeed; note however that if you work with libraries that perform their own logging from other threads, your messages may get mixed up with theirs. If you use the main thread for logging, you at least coordinate with the system libraries, which all seem to log on the main thread.
You could just make a logging serial queue that you send all log messages to; that way there’s no risk of activity on the main thread slowing down your logging. In (Obj-)C:
static dispatch_queue_t queue;

void log(NSString *message) {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        queue = dispatch_queue_create("logging", DISPATCH_QUEUE_SERIAL);
    });
    dispatch_async(queue, ^{
        NSLog(@"%@", message);
    });
}