Hitting the Network with Swift Doesn't Have to Be a Big Deal

It's all too common when scanning a Swift codebase to come across the dreaded lines of code import Alamofire or import Moya. At some point, almost any iOS app will need to send or receive data from an API. The adage is that the job of an iOS developer is turning JSON into rectangles on a screen. Sending and receiving JSON is a big part of iOS development, and there are a million and one ways to do it.

Avoiding dependencies

Being dependent on third-party code for your app to work isn't ideal. For side projects, it's achievable to have zero dependencies. Have I mentioned how fast a project compiles without them? Often in a professional work environment, there are unavoidable requirements to use frameworks and libraries, but it's still possible to minimise dependencies. Using Alamofire, Moya, SwiftyJSON, or a variety of other libraries to make requests or parse responses is common in almost any iOS codebase, but often without good reason. That isn't to say there's no use for any of the aforementioned libraries, but I do think they're often used unnecessarily, and without justification.

Benefits of avoiding dependencies

There are many reasons why I prefer to avoid dependencies in an iOS codebase, and a few overlap with my reasons for disliking networking libraries.

  1. Networking on iOS is easy. This is probably the most important reason, and something that's easy to overlook. URLSession is fantastic, and makes it really easy to perform API requests in an app. Codable is equally great, and makes JSON encoding and decoding a breeze. These are first-party frameworks which make it appealing to ditch the burden that comes with third-party ones.
  2. Third-party frameworks are often difficult to debug. Ever tried debugging a request that's gone through Alamofire, potentially another network layer, and the result is handled by a framework such as RxSwift? It's next to impossible, and a lot of that pain and hurt is saved by using good ol' URLSession.
  3. Third-party libraries add to compile time. They often contain code that isn't related to the specific action or two you're using the library for, and hence slow down compile times unnecessarily. Not to mention using a dependency manager such as CocoaPods will slow things down by virtue of the fact that it exists. The advantage to writing your own networking code is that you only have to write what you need - and that's all the compiler will ever have to compile.
  4. Support. You can update your networking code at any point in time to add, remove, or modify functionality. If a new version of Swift comes out, you can update your code then and there. With third-party dependencies there is no guarantee they'll update at all. And even the well supported ones might be a bit behind. Being in control by writing your own code is always my preferred way of doing things - within reason.

Yes, I'm mentioning URLSession a lot. And sure, you can argue that everything eventually uses URLSession under the hood. But calling it directly is advantageous for the reasons mentioned above. It's a lot easier when your code doesn't go through 14 different layers before finally doing the thing you want it to do.

The ideal

In my mind, for an app that sends simple HTTP requests back and forth to an API, a network manager should be simple. You should be able to create a request (with the URL, headers, and body), tell the manager what you expect back (typically a data object that can be decoded to a Codable object), and have it perform the request, completing either successfully or with an error. What I've built suits the needs of my app, Petty, but is generic enough to suit most needs an app might have. As you'll see, there's an element of business logic to it that is unique to the app, but it's separated from the actual API request code - which can be used in almost any app.

The solution

The solution is straightforward. An APIRequest class manages the request - it contains a Request object, and has a performRequest method that can be called to interface with the network. This method completes asynchronously with the Swift Result type - meaning it either completes successfully, returning the expected model object, or fails with an error.

So, it's time to see some code.

The code

First up, there's a Request object. It's initialised with everything needed for an API call - a base URL, a relative URL, an HTTP method (GET, POST, etc.), a request timeout interval, and finally HTTP headers and a request body - both of which are optional, as they aren't needed for every request.

// MARK: Request method
enum RequestMethod: String {
    case POST
    case GET
    // Can add additional cases if `POST` and `GET` don't cover all your needs
}

// MARK: Typealias
typealias Body = Data
typealias HttpHeaders = [String: String]

// MARK: Request
struct Request {
    private let baseURL: String
    private let relativeURL: String
    private let method: RequestMethod
    private let timeoutInterval: Double
    private let headers: HttpHeaders?
    private let body: Body?

    init(baseURL: String, relativeURL: String, method: RequestMethod, timeoutInterval: Double = 24.0, headers: HttpHeaders? = nil, body: Body? = nil) {
        self.baseURL = baseURL
        self.relativeURL = relativeURL
        self.method = method
        self.timeoutInterval = timeoutInterval
        self.headers = headers
        self.body = body
    }
}

// MARK: Request extension
extension Request {

    private var requestURL: URL? { URL(string: baseURL + relativeURL) }

    var request: URLRequest? {
        guard let url = requestURL else { return nil }
        var request = URLRequest(url: url)
        request.httpMethod = method.rawValue
        request.httpBody = body
        request.allHTTPHeaderFields = headers
        request.timeoutInterval = timeoutInterval
        return request
    }

}

Note how all of the stored properties are private. After initialising a Request object, the only public property is request - and that's the only one the APIRequest class needs to work with.
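
As a quick illustration - with a placeholder base URL and path - creating a Request and pulling out its URLRequest might look like this:

let request = Request(
    baseURL: "https://api.example.com",   // placeholder URL
    relativeURL: "/v1/prices",
    method: .GET,
    headers: ["Content-Type": "application/json"]
)

if let urlRequest = request.request {
    // Hand the URLRequest straight to URLSession, or let APIRequest (below) do it
    print(urlRequest)
}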

The APIRequest class is where the magic happens. It takes any type - T - so long as it conforms to Decodable, and this is the object expected back from the request. It has two public methods - performRequest and cancelRequest - as well as a few private helper methods. I won't explain it all - feel free to copy and paste this code - but the idea is that the performRequest method is called, and the method will either complete successfully with the expected model object, or fail with an Error object.

// MARK: API Request
class APIRequest<T: Decodable> {

    typealias Completion = (Result<T, APIErrors>) -> Void

    var request: Request
    private var task: URLSessionDataTask?

    init(request: Request) {
        self.request = request
    }

    // MARK: Public API
    func performRequest(completion: @escaping Completion) {

        // makeSession already completes with a failure if the URLRequest
        // can't be built, so there's no need to call completion again here
        guard let dataTask = makeSession(completion: completion) else { return }
        task = dataTask
        task?.resume()
    }

    func cancelRequest() {
        task?.cancel()
        task = nil
    }

    // MARK: Private helpers
    private func makeSession(completion: @escaping Completion) -> URLSessionDataTask? {

        guard let request = request.request else {
            completion(.failure(.requestError))
            return nil
        }
        let task = URLSession.shared.dataTask(with: request) { [weak self] data, response, error in
            self?.parseResponse(data, response: response, error: error, completion: completion)
        }
        return task
    }

    private func parseResponse(_ data: Data?, response: URLResponse?, error: Error?, completion: @escaping Completion) {

        if let error = error {
            completion(.failure(.responseError(error)))
            return
        }
        guard let data = data else {
            completion(.failure(.dataError))
            return
        }
        do {
            let decoder = JSONDecoder()
            let responseObject = try decoder.decode(T.self, from: data)
            completion(.success(responseObject))
        } catch {
            completion(.failure(.serialisationError))
        }
    }

}

That's it. That's the whole API request manager - a functioning network layer which can be used to easily create a request and perform it.

To put a bow on it, here's the enum I'm currently using for API errors. It could be more comprehensive, but using it will mean the above code compiles.

// MARK: API Errors
enum APIErrors: Error {
    case requestError
    case responseError(_ error: Error)
    case dataError
    case serialisationError
}
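
Before layering anything app-specific on top, here's a minimal sketch of using these pieces directly - the Price model and URL below are placeholders for illustration:

// Hypothetical model, purely for illustration
struct Price: Decodable {
    let station: String
    let amount: Double
}

let priceRequest = Request(baseURL: "https://api.example.com", relativeURL: "/v1/prices", method: .GET)
let priceAPIRequest = APIRequest<[Price]>(request: priceRequest)
priceAPIRequest.performRequest { result in
    switch result {
    case .success(let prices):
        print("Fetched \(prices.count) prices")
    case .failure(let error):
        print("Request failed: \(error)")
    }
}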

Application-specific requests

At this stage, requests can be made, but further effort is required to truly benefit from the solid foundation. All the code until this point is meant to be generic - it can be used in any app. From here on, things get application-specific, and the example I'm using is what I've done for one of my apps - Petty.

Start by creating a protocol which any future network request must conform to. This protocol uses an associated type which must conform to Decodable - the object type expected back from the request. Of course, not every network request will have a response, or expect a Decodable one, but you can easily modify things so that the network manager handles a raw Data object or an empty response; I won't cover that as part of this post. By focusing on returning a decoded JSON object, most use cases are covered. Each PettyAPIRequest must have a request to be executed, and an instance of the APIRequest class to execute it. There's also a getData method which will go off and make the request, completing with the result. Of course, any class conforming to PettyAPIRequest can implement this method however it wants, but the default implementation below should cover most needs.

protocol PettyAPIRequest: PettyRequest {
    associatedtype ReturnType: Decodable
    var request: Request { get set }
    var apiRequest: APIRequest<ReturnType> { get set }
    func getData(completion: @escaping (Result<ReturnType, APIErrors>) -> Void)

    // All Petty API requests require a bearer token, hence the required init with token
    init(with token: String)
}

extension PettyAPIRequest {

    func getData(completion: @escaping (Result<ReturnType, APIErrors>) -> Void) {
        apiRequest.performRequest(completion: completion)
    }
}

Until this point things have been quite generic, but here's where it gets application-specific. Any class conforming to this protocol must be initialised with a bearer token, which all API requests in Petty need, and must also subclass PettyRequest - a class (see below) containing some generic things such as the API key (bad practice, I know, I know), base URL, and some default headers.

class PettyRequest {
    let apiKey = Constants.API.Key
    let baseURL = Constants.API.BaseURL
    lazy var defaultHeaders: [String: String] = [
        "Content-Type":"application/json",
        "apikey": apiKey
    ]
}

Real-world use

So, that's all the boilerplate out of the way. It shouldn't take long to get to this point, and have a fully-functioning network layer. You have control over how your iOS application interacts with the network. But now you want to make a request. Let's do that!

One of the requests that Petty might need to make is an HTTP GET request for the price of petrol at every station in the state of New South Wales. Here's how to create this request in less than 25 lines of code.

class AllStationDataRequest: PettyRequest, PettyAPIRequest {

    typealias ReturnType = AllStationData

    private let relativeURL = Constants.API.URL.allPrices
    private var bearer: String
    private lazy var headers: [String: String] = {
        var headers = defaultHeaders
        headers["authorization"] = "Bearer " + bearer
        return headers
    }()

    required init(with token: String) {
        self.bearer = token
        super.init()
    }

    lazy var request = Request(baseURL: baseURL, relativeURL: relativeURL, method: .GET, headers: headers, body: nil)

    lazy var apiRequest = APIRequest<AllStationData>(request: request)

}

Hopefully it's self-explanatory, and the simplicity of this code is a result of good foundations laid earlier, but I'll call out a few things:

  1. The ReturnType typealias is set to AllStationData - a struct which conforms to Codable and is the format data is expected to be returned in.
  2. The request is initialised with an authorisation/bearer token. This is specific to Petty, as every request needs this token, but requests in your application might not. Or only some requests might need authorisation, in which case it wouldn't be required as part of initialisation.
  3. Note how an authorization header is added to the existing defaultHeaders which are inherited from PettyRequest.
  4. When interfacing with this object, the getData method is used. It comes from the PettyAPIRequest protocol extension, so its existence isn't immediately obvious, but it's there and it's how requests are made. You can optionally implement it as part of this class, and your implementation will take precedence over the protocol's default - as sketched below.
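
For example, a conforming class could shadow the default implementation to add logging around the call - a hypothetical sketch:

// In AllStationDataRequest - takes precedence over the protocol extension's default
func getData(completion: @escaping (Result<AllStationData, APIErrors>) -> Void) {
    print("Fetching all station data...")
    apiRequest.performRequest(completion: completion)
}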

Making a request in your code

To use this in your code - in a view model or wherever else you make network requests - it's as simple as initialising the request object, and calling the getData method. From there, check the result for success or failure, and do as you like with the result.

let request = AllStationDataRequest(with: bearerToken)
request.getData { result in
    switch result {
    case .success(let object):
        print("Success. We got \(object) back.")
        // Do with `object` as you like...
    case .failure(let error):
        print("Oh no. Error! \(error)")
        // Do with `error` as you like...
    }
}

What has been achieved?

That's it! (For real this time.) Feel free to use this code in your own projects, or base your own networking layer on it. Note you will need to make customisations, but the foundations are in place.

So, what have we achieved?

  1. We own the networking layer of our application, with no reliance on third parties.
  2. The implementation is abstracted away (which is a nice advantage of third-party networking libraries), but we still have full control. The result is neater code throughout the project.
  3. We can easily create a new request object, customise it with request parameters, and use it in various places across the application with only a few lines of code each time.
  4. We cleanly handle both success and failure cases.
  5. We take full advantage of Swift language features (the Result type, Codable, generics, etc.) to build a flexible interface for network requests.

Closed Loop CGM

This blog. It's been a while. There's good reason for that, however. Most of the tech/developer things I would write about here I get to talk about on the podcast I've been recording weekly for over a year now. Since it began, we've changed the name. It's now Cup of Tech.

Just over 18 months ago, I wrote extensively about using a continuous glucose monitor (CGM) for the first time, after 16 years of manually drawing blood every time I wanted to know my blood sugar level. It's safe to say that the Dexcom CGM was life changing. For all of my complaining, and unhappiness with the lack of control over the alarms, the Dexcom is worthwhile. The benefits far outweigh the drawbacks. I've consistently had the best HbA1c results of my life in the last 18 months, and my diabetes control continues to improve on a quarterly basis.

So, what am I getting at? This blog has been updated less and less recently because the podcast is my outlet for talking about software development news. However, I really enjoyed writing the 5-part series on CGM, and the feedback I received was positive. Today I'm trying out a new CGM, and I figure this is as good of a place as any to write about it.

As part of upgrading my insulin pump to the latest and greatest Medtronic model, Medtronic threw in 6 months of their CGM system for "free." I'm not really sure what to expect. It's generally agreed that Dexcom make the best CGM product in the world. It's also unquestioned that it's the most accurate, and requires the least calibration. So, why the downgrade? Well, you had me at free. Only kidding. Well, somewhat. As Medtronic make both the insulin pump and the CGM, the new model of insulin pump can do some fancy things that I'm keen to try out - such as automatically adjusting insulin levels based on real-time blood glucose data. If my sugar rises slowly overnight, the Medtronic 670G system is supposed to notice this and increase insulin delivery slowly until my blood sugar is back in check.

The promise of the "closed-loop" 670G system is exciting. I'm not sure how well it will work in practice, but I'll hopefully write about the experience on this blog. The extra calibrations will be annoying, but I'd like to be at a point with this CGM system where I can trust the readings it gives. I'm not sure if that's a realistic goal, having heard a few things about its accuracy especially when compared to Dexcom. I'm also not set on sticking with the Medtronic system for all of the 6 months. One of the nice things about the Dexcom is that it's almost "set and forget." I do the calibrations when I can, and know that I can always trust the readings it gives - they're incredibly accurate. If the Medtronic CGM doesn't allow for this same level of peace-of-mind, switching back to the Dexcom will be the best option.

As with any new technology, I'm excited to try it out. Could be great, could be horrible. Being me, I'll be skeptical at first but would love to be proven wrong.

Customising the Menu Bar of a Catalyst App Using UIMenuBuilder

I'm in the process of building out a Mac app for one of my iOS side projects - Petty - using the new "Catalyst" tools announced by Apple at WWDC this year. Petty has become my iOS development playground, as I use it to experiment with new iOS technologies. This was the case with Siri Shortcuts last year. My motivation for finishing work on the Mac app for Petty is that I'll be presenting a talk, Updating Your App for iOS 13, at the /dev/world/2019 conference in Melbourne, and part of that talk will cover bringing the iOS app to macOS.

The menu bar is one of the most iconic parts of the Mac GUI. It's consistent, helpful, and rather intuitive. It helps people discover an app's actions and abilities, and also provides a little extra for power users - keyboard shortcuts for those actions. Naturally, most Mac apps feel more "at home" on the Mac if they take advantage of system features such as the menu bar. Admittedly, Petty is a simple app with few hidden features. That said, I'd still like to customise the menu bar when Petty is run on the Mac.

This post is about customising the menus in the menu bar programmatically, with Swift. It's also possible to achieve this using storyboards, as demonstrated in the WWDC19 session video, Introducing iPad Apps for Mac - there's a live demo towards the end of that session if you're interested.


Let's build and run Petty on the Mac.

Menu bar after building and running with no customisation.

By default, this is what the menu bar looks like when Petty is first built and run on the Mac. Without customisation, there's an application menu, plus File, Edit, Format, View, Window, and Help menus.

To help customise the menu bar, there's a new method (inherited from UIResponder) which can be overridden in the AppDelegate: buildMenu(with:). The first step is to override this and call super.

override func buildMenu(with builder: UIMenuBuilder) {
    super.buildMenu(with: builder)
    /* Do something */
}

The first objective is to remove the menus that aren't wanted. There is text input in Petty (in the form of a search field), so the "Edit" menu should stay. There's no need to format text, so the "Format" menu can go. To do this, within the buildMenu method, we call builder.remove(menu: .format). Building and running again will show that the Format menu is no longer present. You'll notice in the screenshot above that there's also a "Services" sub-menu in the main application menu. This is unnecessary for Petty, so it's going to go as well. Removing it is the same as removing the Format menu, except we specify .services instead. I'm also going to remove the .toolbar-related menu items.

At this point, the buildMenu method is as follows:

override func buildMenu(with builder: UIMenuBuilder) {
    super.buildMenu(with: builder)

    builder.remove(menu: .services)
    builder.remove(menu: .format)
    builder.remove(menu: .toolbar)
}

Everything remaining in the menu at this point can stay there. The actions in the "View", "Window", and "Edit" menus are all taken care of by the system - at least for Petty.

There are a couple of actions that should be added to the menu bar. In the iOS version of Petty, it's possible to pull-to-refresh on the table view. This will reload the visible data. This action is also possible on macOS, and the Catalyst tools bring this feature across nicely. However, Mac users may expect other ways to refresh data. The Command-R keyboard shortcut is a common way to refresh data on the Mac, and it would be nice to use this shortcut to reload data in Petty on the Mac.

To add to the menu bar, either the insertChild or insertSibling method can be called on the builder object. Calling insertChild allows you to place a UIMenu object at either the start (top) or end (bottom) of an existing menu - for example, atStartOfMenu: .file. Inserting a sibling allows for a more precise insertion, either before (above) or after (below) another menu - for example, afterMenu: .about means we want to insert a menu after the About menu. In the case of the reload data action, it should be put at the top of the File menu.

let refreshCommand = UIKeyCommand(input: "R", modifierFlags: [.command], action: #selector(reloadData))
refreshCommand.title = "Reload data"
let reloadDataMenu = UIMenu(title: "Reload data", image: nil, identifier: UIMenu.Identifier("reloadData"), options: .displayInline, children: [refreshCommand])
builder.insertChild(reloadDataMenu, atStartOfMenu: .file)

There's a bit happening in the code above. First, a UIKeyCommand is created. This is where the desired keyboard shortcut - in this case Command-R - is specified for the action. The refresh command also needs a title, which is shown in the menu. We then initialise a UIMenu with the same title, optionally provide it an image (ignored in this case), and optionally give it an identifier. Options are also specified. Here, .displayInline is given, which tells the system that this command belongs in the menu we're adding it to, and doesn't open yet another submenu. Then we pass it an array containing one object - the refresh command created earlier. Finally, we insert the reloadDataMenu as a child on the builder object we got from overriding the buildMenu method, and specify that it should go at the start of the File menu. Note that when constructing the refreshCommand, a selector is specified. That's the code that will run when the menu item is pressed, or when the Command-R keyboard shortcut is used. If you want to specify a sender for that method, its type is simply AppDelegate.

Success! There's now a reload action in the menu bar.

This is what the buildMenu method should look like now:

override func buildMenu(with builder: UIMenuBuilder) {
    super.buildMenu(with: builder)

    builder.remove(menu: .services)
    builder.remove(menu: .format)
    builder.remove(menu: .toolbar)

    let refreshCommand = UIKeyCommand(input: "R", modifierFlags: [.command], action: #selector(reloadData))
    refreshCommand.title = "Reload data"
    let reloadDataMenu = UIMenu(title: "Reload data", image: nil, identifier: UIMenu.Identifier("reloadData"), options: .displayInline, children: [refreshCommand])
    builder.insertChild(reloadDataMenu, atStartOfMenu: .file)
}

The second action to add to the menu bar for Petty is a quick action to open the application settings. The iOS version of the app has its own settings screen, and it can be opened by tapping the settings icon on the main screen of the app. This is also the case when using the app on macOS - settings can be opened by tapping a button. However, Mac users are accustomed to opening application preferences either via the application menu, or via the keyboard shortcut Command-, (Command-comma). It makes sense to support this in Petty, too.

The code is as follows, and is pretty similar to the reload data action:

let preferencesCommand = UIKeyCommand(input: ",", modifierFlags: [.command], action: #selector(openPreferences))
preferencesCommand.title = "Preferences..."
let preferencesMenu = UIMenu(title: "Preferences...", image: nil, identifier: UIMenu.Identifier("openPreferences"), options: .displayInline, children: [preferencesCommand])
builder.insertSibling(preferencesMenu, afterMenu: .about)

Note the input value is different. The keyboard shortcut is Command-, (Command-comma), not Command-R, and that's specified in the UIKeyCommand object. This shortcut is also tied to a different action - openPreferences. The other difference is that we're inserting it by calling the insertSibling method on the builder object, which allows us to specify that it belongs after the .about menu. The system knows what the About menu is - in this case, it's the "About Petty" action in the main application menu. The system has also put this action in its own section of the menu.

The preferences menu makes an appearance.


What has been achieved? We've programmatically modified the menu bar in a Catalyst Mac app - removing unneeded actions, and adding our own custom ones. The power of the menu bar is far greater than what's been explored in this post, but it's a decent start, and satisfies the needs for menu bar customisation while building a macOS version of Petty. Of course, there are better ways to write and manage the code when a menu bar becomes more complex, or when the items in the menu bar differ from screen to screen in your app. Naturally, it's also not good practice to put too much code in the AppDelegate. That said, hopefully this post acts as a helpful guide for getting started with customising the menu bar for your own macOS apps that are being brought across from iOS using Catalyst.

The finished menu bar, after the customisation. It's what's on the inside of those menus that counts!


Below is the finished Swift code from this post. Note that in order for the following code to compile, you'll need to implement reloadData and openPreferences methods, and prefix them with @objc.

override func buildMenu(with builder: UIMenuBuilder) {
    super.buildMenu(with: builder)

    builder.remove(menu: .services)
    builder.remove(menu: .format)
    builder.remove(menu: .toolbar)

    let refreshCommand = UIKeyCommand(input: "R", modifierFlags: [.command], action: #selector(reloadData))
    refreshCommand.title = "Reload data"
    let reloadDataMenu = UIMenu(title: "Reload data", image: nil, identifier: UIMenu.Identifier("reloadData"), options: .displayInline, children: [refreshCommand])
    builder.insertChild(reloadDataMenu, atStartOfMenu: .file)


    let preferencesCommand = UIKeyCommand(input: ",", modifierFlags: [.command], action: #selector(openPreferences))
    preferencesCommand.title = "Preferences..."
    let preferencesMenu = UIMenu(title: "Preferences...", image: nil, identifier: UIMenu.Identifier("openPreferences"), options: .displayInline, children: [preferencesCommand])
    builder.insertSibling(preferencesMenu, afterMenu: .about)
}
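
As a minimal sketch - the bodies below are placeholders, and how these calls are routed will depend on your app's architecture - those selector methods might look like this in the AppDelegate:

@objc func reloadData() {
    // Placeholder: tell whichever screen owns the data to refresh,
    // e.g. via a notification the main view controller observes
    NotificationCenter.default.post(name: Notification.Name("PettyReloadData"), object: nil)
}

@objc func openPreferences() {
    // Placeholder: ask the UI layer to present the settings screen
    NotificationCenter.default.post(name: Notification.Name("PettyOpenPreferences"), object: nil)
}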

What Will Happen to Buddybuild at WWDC19?

When Apple acquired TestFlight in 2014, they integrated it into App Store Connect (called iTunes Connect at the time), but kept the purpose of the service very much the same - allowing developers to easily distribute pre-release builds of their software. I'm curious as to the direction Apple will take with Buddybuild.

I see a few ways it could go. Buddybuild could become a premium CI service, as it was previously, with a monthly or annual fee for developers, with the intention of making money. I don't see this as the most likely outcome, but it's a possibility. Apple tends to build mass-market consumer services when looking to grow services revenue, not niche developer tools.

Alternatively, Buddybuild, the service we once knew, may return, integrated into App Store Connect. Developers might have free, or inexpensive, access to a CI/CD service that will run tests for them, build their app, and distribute it where it needs to go automatically. This approach would interest me the most, and I think it's the most likely. The fact that Apple didn't shut the whole service down upon acquisition shows CI still interests them, and that they have plans to continue the service, at the very least for existing customers - though I'm sure that will change. It would also mean Apple is exploring an area they haven't before - they would have direct access to source code, be it hosted by Apple itself, or pulled from a third-party code hosting service such as GitHub or Bitbucket.

A final approach which I believe is plausible would be that they take the technology behind Buddybuild (its integrations, build scripts, and hardware infrastructure) and use it for something else - such as remote code compilation. This could be especially useful should Apple be looking at bringing Xcode to iPad. I could imagine it being a bit like Google Stadia but for developers and their Xcode projects.

It's also possible that none of the three approaches mentioned above are the route Apple take. WWDC19 is going to be an interesting conference, and I look forward to seeing what Apple has been up to in the world of CI/CD.

Petty 2.1

The iOS version of Petty, my app for displaying real-time petrol prices in the state of New South Wales, was updated to v2.1 today.

The full release notes for this version are as follows:

* Adds an additional Siri Shortcut for showing the real-time price at a given station. This will show up as a suggested Shortcut around iOS where it's relevant, and can also be added to Siri and accessed with your voice. This can be done by tapping the "Add to Siri" button at the bottom of the screen when looking at the prices for a particular station. Please note, this feature requires Petty Premium to be unlocked for it to work.
* Updates the design of the station view - the map takes up more space and is more of a priority on this screen.
* Improves error handling if an attempt to buy the in-app purchase to unlock Petty Premium is unsuccessful.
* Updates the icon on Apple Watch.
* Updates the display name of the Petty widget.
* Adds the app version and build number to the bottom of the Settings page.
* Lots of other small bug fixes and improvements throughout the app.

I'm most excited about the additional shortcut. There are now a few things you can do with Siri Shortcuts inside of Petty - such as finding out the prices at a particular station, or having Petty find the nearest or cheapest petrol around. This is especially handy if you're driving, can't look at or touch your iPhone, and need to use Siri hands-free to interact with your iPhone.

I didn't write on this blog about the initial Siri features when I added them back in September, so consider this a belated introductory post. I'm pretty happy with how the shortcuts work in Petty, and I hope you enjoy using them just as much.

Petty is available on the App Store as a free download, with a one-time in-app purchase to unlock Premium mode.

Tools for an iOS Developer

Working efficiently with better tools

In the interest of becoming a better, more focused iOS software developer, I've been thinking a bit about the tools I use to work.

iOS developers are fortunate not to need a lot of expensive software to be able to get their job done. Most of the expense associated with developing for the iOS platform comes from having to buy hardware; the Mac, iPhone, iPad, and Apple Watch aren't cheap these days. Once the necessary hardware is acquired, it's possible to get most of our work done with a few "free" applications - mainly Xcode and a terminal.

While it's possible to do the job with minimal software, there are some tools which can make life easier, or boost productivity in ways not possible with the default software stack. When iOS development was exclusively a hobby for me, I never gave much thought to paid software tools. It's difficult to justify spending money on these things. Now, working in and around Xcode somewhere between 24 and 40 hours a week (depending on whether Uni is in session), I've decided to allocate myself more of a budget for these tools. The benefit is two-fold: I find it easier to do my work - which is always a nice thing - and my increased efficiency benefits the person/people/company paying me, as I'm likely to develop and test something faster, and move onto the next thing sooner. A further bonus is that buying these tools for work means I also have access to better tools when working on side projects!

There are many tools out there for iOS developers. I've chosen three that I think are worth paying for, and am quite satisfied using myself.

Tower 3

Git is the version control system of choice for most iOS developers. The full power of Git is accessible from the command line, it's installed by default on any Mac, and it can be used completely for free.

Git can be powerful but scary. No one wants to make a mistake and realise they've deleted a feature that's still in progress! Some people are amazingly skilled at using Git and all its features from the command line, but I am not one of those people. I prefer to use a GUI Git client - I find it friendlier, and it better suits how I use Git. There are a few popular GUI Git clients out there, including GitHub Desktop, Atlassian's Sourcetree, and Tower. Having used all three, Tower feels the most polished. I feel comfortable using it, and in control when performing actions with Git. I like having everything displayed in a GUI, and not having to rely exclusively on the command line.

I recently upgraded from Tower 2 to Tower 3. Some features are "nice to have," such as support for dark mode in macOS. There are other new features which improve my workflow and productivity noticeably. Being able to rewrite commit messages and delete specific commits - generally before having pushed to remote - is fantastic! Completing these tasks from the command line would be tedious, and I'd be fearful of making a mistake! Considering I'm in and out of a Git client all day, every day at work, and just as frequently when working on side projects, it makes sense to choose something I feel comfortable with and can use efficiently.

Although Tower is, under the hood, performing actions that you could do from a command line, or with an alternative free Git client, I see it as a worthwhile tool to have while writing software.

Charles

Charles is a proxy which captures all network traffic from either a physical iOS device or a simulator while developing, allowing you to inspect the traffic being sent and received, as well as modify requests and responses for testing and debugging purposes. There are certainly other ways to mock the traffic coming in and out of your app, but Charles is the most straightforward way I've found to do this. I'm also pretty sure Charles has a lot of great features I don't use, but even just for basic inspecting and modifying of network traffic, it's worth the money. One of my favourite features is being able to "breakpoint" a request, meaning Charles will always stop when a certain request is being sent or received, allowing you to modify it every time.

Using Charles helps me feel a little more confident when working with API calls, knowing how easy it makes it to test an app, check edge cases, or throw data at an app during development before the backend API is ready for use. Though not an essential tool, it's one I'm glad to have available, and it means there's one less thing to worry about when building iOS apps.

Sherlock

A newer tool on the scene for iOS developers is Sherlock, a simulator UI inspection tool. It differs from the inbuilt Xcode View Hierarchy Debugger in that it allows you to change the UI on the fly, while the app is running in the simulator. It's a handy tool for testing Auto Layout constraints, and for changing label text and other UI element attributes such as colours, across multiple device screen sizes, without having to build and run on a simulator each time. No setup is required - Sherlock handles installing itself on the simulator and attaching to the app you're currently running from Xcode. It's delightfully easy to use. While not an essential tool, I've found it speeds up my workflow dramatically when building UI, and it's now a tool I wouldn't want to work without when doing iOS development.

Wrapping up

Good tools can make your work easier and more enjoyable. Saving a few minutes at a time, multiple times a day, adds up quickly. Everyone values different tools, but recently I've been quite impressed with Tower, Charles, and Sherlock. I see them as worthwhile purchases, and wouldn't want to be doing my job as an iOS developer without them.

Shipping Side Projects 🚀

I'm a huge fan of side projects. They're an opportunity to exercise creativity however you see fit. They provide you with a chance to work on something where you set 100% of the requirements. They give you a way to learn new skills and new technologies without any time constraints.

For some, side projects exist only to the creator. They aren't built to ship, and they aren't built with the thought that anyone else will ever see or use the project. For others, myself included, one of the best things about working on a side project is the moment it's ready to ship. I enjoy working on side projects and enjoy shipping them just as much.

In thinking about how to approach side projects going forward, I've come to realise that the kind of apps I want to work on as side projects typically aren't going to be fully-featured. At least at first. It's no secret that a considerable amount of my side project time over the last six months has been spent on a weekly podcast. Though it doesn't involve writing code, recording and editing the podcast has been amongst the most enjoyable side project work I've done. The podcast is time-consuming, and I don't necessarily have the time I once did to spend on more polished mobile app side projects - such as with Petty, and HeartMonitor - which were both mostly feature-complete at launch. Petty has received regular updates since its launch, adding features based on user feedback, but at the time it shipped there wasn't a whole lot more I'd hoped to add for 1.0.

There are some side projects I'd like to start working on, but to get them to a point where I'm completely satisfied would take longer than is ideal. As I said, I enjoy shipping side projects. I'm thinking about a different approach going forward. It's not a new or novel concept, but I'm coming around to the idea of shipping the bare minimum and adding to it over time. I guess it's the software MVP approach. I'm not sure if shipping software with limited features will make adding additional features more motivating than if it weren't out in the world yet, but it's something I'd like to try.

The idea of shipping more frequently can be applied to existing projects too. If new iPhones come out, why not release a small update that adds support? That's enough for one update; it doesn't have to wait until a bunch of new features are also ready.

The intention is not to ship poor-quality software, but to have a wider range of public projects, and to ship updates to them more frequently. This might mean v1.0 isn't quite where I'd like it to be, but that's okay. It can then be followed up with fairly continuous updates. Add a small feature one week? Cool, ship it. Fix a tiny bug the next? Ship that too! Updates don't have to be huge. But hopefully, the enjoyment of shipping will still be felt from shipping fairly often.

Software developers are lucky. We can build the software that we want to exist. Going forward, I'm going to try and ship more work more often, even if the project doesn't tick every feature box that I'd like for each release.

So, We're Making a Podcast

So, we're making a podcast. It's called So cast, and it's an excuse for Kai, Malin, and me to drink coffee and have a chat each week, now that they've moved halfway around the world.

We're all software developers so, naturally, most of our conversations end up being about something tech related. Hopefully we're able to provide unique and interesting insights through these weekly discussions, and hopefully you find the show interesting enough to subscribe to.

We've hit the ground running and have five great episodes for you to listen to.

The show kicked off when we were fortunate enough to have access to the Apple Podcast Studio at WWDC, where we spoke about our WWDC experience. The second episode covers our early thoughts on the iOS 12 and watchOS 5 beta. In the third we discuss home automation and smart assistants, with a bit about Siri Shortcuts at the end. In episode 4 we talk about new MacBook Pros, and the 10th anniversary of the App Store. And finally in the fifth, we talk about everything from buying a new Mac, to what cloud storage to use, a whole bunch about emoji, and Apple Watch move streaks.

We've got some great future topics planned, and plan on recording and releasing weekly. If the show sounds like something you'd be interested in, it's likely available in your podcast player of choice, including iTunes, Pocket Casts, and Overcast.

If you've got any feedback, or would just like to find out when we release new episodes, you can follow the show account on Twitter: @So_cast.

"Hey Siri, what on earth is a Shortcut?"

At WWDC this year, Apple introduced Siri Shortcuts - a new way for developers to integrate their apps with Siri on iOS and watchOS.

I'm in the process of writing a talk on Siri Shortcuts to present at the fantastic /dev/world/2018 conference in late August, and one of the things I'm struggling the most with is how to explain Shortcuts, and how to differentiate between the types of Shortcuts. From a developer's point of view, there are many things referred to as Shortcuts. This post is an attempt to clear the thoughts in my mind, and hopefully help clarify things for you too.

Shortcuts, the app

In my experience from talking to people, "Shortcuts" tends to refer to Shortcuts the app, which is essentially Apple's replacement for the Workflow app after acquiring the team behind it in early 2017. The Shortcuts app allows you to chain actions together, and run them as a group. For example, you might have a "Running late" Shortcut that sends a message to your boss saying you'll be late, starts a "hurry" playlist, and opens Maps with directions to work. A lot of the integrations in Shortcuts at the moment are first-party actions - including controls such as toggling Wi-Fi or Do Not Disturb settings or interacting with built-in Apple apps. Shortcuts will shine once it's possible to run shortcuts (yes, that means something different here) as a part of the shortcuts you can create in the Shortcuts app. Have I confused you enough yet?

Siri Shortcuts

There is a new Shortcuts API available to developers. In this context, a shortcut is an action or task that is often repeated with a somewhat predictable pattern. Developers are responsible for "donating" a shortcut to the system when a user performs a relevant action, and the system takes care of when and where to suggest the shortcut. Suggestions are displayed both on the lock screen as notifications, and under the Siri app suggestions when you swipe down on the home screen. Examples of shortcuts include suggesting a lunch order when it's nearing midday, or opening a document that you often open when you first get to work. These suggestions also appear on the Apple Watch Siri watch face.

Shortcuts can be created in one of two ways. Firstly, using an NSUserActivity. You should create these for specific activities or screens within your app where a notable action is performed, and where there is a possibility of someone needing to return to the app in that state for whatever reason. NSUserActivity is used for handoff features allowing someone to pick up on one device where they left off on another. These activities can be "donated" to the system when a user performs a relevant task, and as long as the isEligibleForPrediction flag is set to true, they will begin to surface around iOS when they are relevant. A simple example of a useful NSUserActivity would be one that's donated every time someone starts a workout in a workout app. Over time, the system will get a sense for when this action is performed - perhaps when someone gets to the gym, or every morning at 6 am - and intelligently suggest it ahead of time.
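
A minimal donation sketch - the activity type and workout scenario here are hypothetical:

import UIKit

func donateStartWorkoutActivity(from viewController: UIViewController) {
    // Hypothetical reverse-DNS activity type; also declare it under
    // NSUserActivityTypes in the app's Info.plist
    let activity = NSUserActivity(activityType: "com.example.workouts.start")
    activity.title = "Start workout"
    activity.isEligibleForSearch = true
    activity.isEligibleForPrediction = true  // opts the activity into Siri suggestions
    // Assigning the activity to the current view controller donates it to the system
    viewController.userActivity = activity
}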

The second way to donate a shortcut is by creating an INInteraction, built specifically for shortcut functionality. An INInteraction contains an INIntent which describes the user's request. If the purpose of the shortcut is to resume state in an app, then similar to donating an NSUserActivity, handling an intent is done from the App Delegate with the application(_:continue:restorationHandler:) method.
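
Donating an interaction might look like this - OrderCoffeeIntent stands in for a custom intent class generated from an intent definition file:

import Intents

func donateCoffeeOrder() {
    let intent = OrderCoffeeIntent()  // hypothetical generated intent class
    intent.suggestedInvocationPhrase = "Coffee order"
    let interaction = INInteraction(intent: intent, response: nil)
    interaction.donate { error in
        if let error = error {
            print("Donation failed: \(error)")
        }
    }
}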

Intents Extension

Here's where things get interesting (but maybe a tad confusing). Up until now, the shortcuts I've mentioned are good for simple suggestions. These suggestions, when interacted with, kick the user to your app, where you handle the rest of the interaction. There are a lot of practical use cases for that, but what if you want to do more? It might not always be necessary to open the app to achieve a task. For example, if I order coffee at 9 am every morning through a shortcut, do I want the app to open every time? Not only is it slow, but I might have no interest in customising the order - it's the same every day! This is where an Intents Extension comes into play. An intents extension allows you to run code from an extension bundle, meaning you can perform tasks without opening your main app (the extension can't touch code from the app bundle). It's suggested that any shared code or business logic the extension needs be moved to a shared framework, and then imported into the relevant targets in your project (including the intents extension). It is also possible to start an activity from a shortcut, and then resume that activity in the app - useful if the user would otherwise wait too long for a server response - but it's recommended to design with the idea that the entire activity or task can be completed from the extension.
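
A bare-bones intents extension might be structured like this - the OrderCoffee types again stand in for classes and protocols generated from an intent definition file:

import Intents

class IntentHandler: INExtension {
    override func handler(for intent: INIntent) -> Any {
        // Return an object capable of handling the given intent
        return OrderCoffeeIntentHandler()
    }
}

class OrderCoffeeIntentHandler: NSObject, OrderCoffeeIntentHandling {
    func handle(intent: OrderCoffeeIntent, completion: @escaping (OrderCoffeeIntentResponse) -> Void) {
        // Place the order here - ideally via code in a shared framework -
        // without ever launching the main app
        completion(OrderCoffeeIntentResponse(code: .success, userActivity: nil))
    }
}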

Intents UI extension

Sometimes, a binary success/failure response isn't enough to properly convey the result of the shortcut that just ran. For this case, Apple provides an Intents UI extension, which allows your intent to show a view controller and accompany the response with custom UI. These extensions are only supported on iOS (not watchOS). The one limitation of these view controllers is that they don't receive touch events, so design with the assumption that the user can't interact with the view. Content can be updated as required, however. Extensions that use maps (e.g. ride sharing) will already have a map provided by the system, so it's not necessary to also display a map in the UI extension. More on the different domains later on.

Interacting with shortcuts by voice

Hopefully, at this point, you've got a reasonable understanding of what a Siri shortcut is, and of the different uses for the word "shortcut." I've written about intents up to this point with the idea that they're surfaced as suggestions, and run from either a lock screen notification or the shortcuts area of the Siri search page. There is another way these shortcuts can be surfaced around the operating system, and that's via voice command using Siri. You can suggest that someone adds a shortcut from your app to Siri via a button in your app which opens an INUIAddVoiceShortcutViewController. A user can also add a shortcut to Siri themselves, from the Siri options in the Settings app. There's a convenience aspect to being able to run a shortcut from Siri, but one of the biggest advantages I've found to running them from Siri instead of manually is that you're able to provide voice feedback on the request. An Intents extension (without UI) will show the name of the shortcut it's running, as well as the app running it, then provide a success or failure indicator, or optionally kick the user back to the main iOS app. There is no other feedback. With an Intents UI extension, visual feedback can be given through the UI. When a shortcut is activated with Siri, it can provide voice feedback on the request to the user, regardless of whether there is a UI component, making for a great Siri experience. In this way, Siri can become conversational. Asking Siri something simple such as, "When is the next bus to the city?" or, "Are the Dragons up?" could return a voice-based response, making it a great way to get an answer on the go, or when making the request from across the room. I believe these shortcuts can be run from both HomePod and Apple Watch, even if the app with the shortcut doesn't have an Apple Watch app. This might only be the case with certain categories of SiriKit apps, however - I haven't been able to test it yet.
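
Presenting the system's "Add to Siri" screen from a button is straightforward - a sketch, again using the hypothetical OrderCoffeeIntent:

import IntentsUI
import UIKit

func presentAddToSiri(from viewController: UIViewController) {
    guard let shortcut = INShortcut(intent: OrderCoffeeIntent()) else { return }
    let addShortcutViewController = INUIAddVoiceShortcutViewController(shortcut: shortcut)
    // Set a delegate (INUIAddVoiceShortcutViewControllerDelegate) to dismiss it when done
    viewController.present(addShortcutViewController, animated: true)
}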

SiriKit Domains and Intents

Integrating your app with a voice assistant is tricky. People talk in different ways. The way I ask for directions might be different from the way you do. How does a voice assistant know what you mean? SiriKit uses intents, which are types of request the user can make. Up until now, SiriKit integration has been limited to intents in specific "domains," as Apple calls them. These domains are Messaging, Lists and Notes, Workouts, Payments, VoIP Calling, Visual Codes, Photos, Ride Booking, Car Commands, CarPlay, and Restaurant Reservations. Previously, if your app didn't fit into one of these categories, you weren't able to integrate with Siri. Intents for these domains aren't new. What is new is the ability for any developer to create a custom intent, meaning any app can integrate with SiriKit. Each action needs a defined intent. These are defined through the new intent definition file that can be added in Xcode, and include fields for any required parameters for either the request or the response.

With the old set of domains, there was room for ambiguity in the request. You, as the developer, could say that you didn't understand the request, or that you required more information. There's no room for ambiguity with the new custom intents, as they're triggered by a custom phrase with no ability to provide input. At the time a shortcut is added to Siri, the person adding it must tie a custom phrase to it. For example, you, a completely sensible human, might use the phrase, "Coffee order" for your morning coffee order, whereas I, a questionably sensible person, might use the phrase, "Banana peel."

Shortcut intents can take parameters as far as you, the developer, are concerned. The same "Order coffee" intent that you define might have an option for the number of sugars someone wants with their coffee. However, that number must be set at the time the shortcut is added. A "coffee order" shortcut can't have one sugar one day, and none the next. Once it's set up, the parameters are fixed. Different shortcuts with unique trigger phrases would need to be set up by the user if they wish to have multiple options available when ordering coffee from Siri. You might suggest a shortcut for a coffee with no sugar after a user places this order a few times, but once it's added to Siri, it cannot be changed without removing the shortcut or adding a new one altogether.

Wrapping up

Many hours later, this post has certainly helped me come up with more precise distinctions between the types of shortcuts developers can build into their apps, and what it means for users of these apps. I hope it can clarify some of the questions you have, too. Maybe now the next time you're talking to someone about shortcuts, you'll better be able to pick up on exactly what aspect of shortcuts they're talking about - Shortcuts the app, the suggestions, Intents, or the Siri Shortcuts. If there's anything I've missed, or you'd like to ask a question, feel free to reach out on Twitter.

Parallel Testing in Xcode 10

Parallel testing

At the Apple Worldwide Developers Conference last week, Xcode 10 was announced with a bunch of new features and enhancements to various developer tools. One of the features that caught my eye was parallel testing.

We should all be writing tests for our code. Unit tests run relatively quickly and are used to test small sections of code, generally in isolation. UITests are another form of test that, as the name implies, test the UI of your application. They do this by running through full flows in your app - such as a purchase flow from start to end - ensuring that all the expected UI elements are present and that each button and control works as expected. UITests are useful for catching regressions, and for feeling confident that nothing broke after making a change to your app, but unfortunately can take a while to run. Dozens of 30-second flows in your app add up, and suddenly you might find your test suite taking 30+ minutes to run.

Enter parallel testing.

Previously only available through xcodebuild when using separate simulators, and now available for all projects in Xcode 10, parallel testing allows multiple tests to run simultaneously, with the main advantage being dramatically shortened XCTest and XCUITest run times.
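
For reference, the same can be driven from the command line - a sketch with a placeholder scheme and destination, using xcodebuild's parallel testing flags:

xcodebuild test \
  -scheme MyApp \
  -destination 'platform=iOS Simulator,name=iPhone XS' \
  -parallel-testing-enabled YES \
  -maximum-parallel-testing-workers 4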

How to enable parallel testing

  1. Select your target scheme in Xcode, and choose "Edit Scheme..."
  2. Find the settings for "Test", and press the "Info" tab
  3. You'll see a list of your unit and UI tests; press the associated "Options..." button
  4. Select "Execute in parallel on Simulator"
  5. Optionally select "Randomize execution order"

Running tests in parallel

It's only possible to run Unit tests in parallel on macOS. Both Unit tests and UI tests can be run in parallel on iOS and tvOS.

When running tests in parallel, Xcode will run them on a clone of the same simulator. Most Macs should be capable of running at least two cloned simulators in parallel. Modern Macs with more RAM and more processor cores should be capable of running even more tests simultaneously.

Tests are split up by class and allocated to each simulator clone as Xcode sees fit. This means your test run is only as fast as the longest-running test class. For this reason, it's important to keep each test class as concise as possible, and to consider splitting tests into as many classes as is practical.

Considerations

There are some things to consider now that tests can run in parallel, and optionally, in a random order.

Ensure that tests are able to run independently of one another and that no test relies on the test that comes before or after it to set up or clean up. Each test should be truly independent of all other tests. You are no longer able to ensure that test A will finish before test B begins, so this independence is important.
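
For example, a test class that owns all of its own state - Counter is a trivial stand-in for the type under test, defined inline so the sketch is self-contained:

import XCTest

final class Counter {
    private(set) var value = 0
    func increment() { value += 1 }
    func reset() { value = 0 }
}

final class CounterTests: XCTestCase {
    private var counter: Counter!

    override func setUp() {
        super.setUp()
        counter = Counter()  // fresh state for every test, shared with nothing
    }

    override func tearDown() {
        counter = nil
        super.tearDown()
    }

    func testIncrement() {
        counter.increment()
        XCTAssertEqual(counter.value, 1)
    }

    func testReset() {
        counter.increment()
        counter.reset()
        XCTAssertEqual(counter.value, 0)
    }
}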

Performance tests will not achieve maximum performance when running in parallel. Apple suggests putting performance tests in their own bundle and disabling parallel testing for this bundle.

Wrapping up

Parallel testing certainly made for an impressive demo during the Platforms State of the Union at WWDC last week. It's something that will save countless hours of development time. Long test times can discourage additional tests from being written, and anything combatting that is a benefit to software quality going forward.