
Reducer Protocol in Practice

Episode #208 • Oct 10, 2022 • Free Episode

We celebrate the release of the Composable Architecture’s new reducer protocol and dependency management system by showing how they improve the case studies and demos that come with the library, as well as a larger more real-world application.

Introduction

We have spent the past 7 episodes completely reinventing the manner in which one creates features in the Composable Architecture. Something as seemingly innocent as putting a protocol in front of the fundamental reducer type completely changed the way we think about implementing reducers, composing reducers, providing dependencies to reducers, and even testing them.

And this week we are finally releasing an update to the library that brings all of those tools to the public, and there’s even a whole bunch of other tools and features shipping that we didn’t have time to cover in the past episodes. We’d like to celebrate the release by highlighting a couple of places where this new style of reducer has greatly simplified how we approach problems with the library. We will take a look at a few case studies and demo applications that ship with the library, as well as look at our open-source word game, isowords, to see what the reducer protocol can do in a large, real-world application.

Let’s dig in!

Recursive case study

Let’s start by looking at a few case studies where the protocol-style of reducers has greatly simplified things.

There’s a case study that demonstrates how one can deal with recursive state. In this demo you can add rows to a list, and then each row is capable of drilling down to a new list where you can add rows, and on and on and on.

Right now we have the git repo of the library pointed to a commit just before the reducer protocol was merged, which was a27a7a5. Let’s see how this kind of feature was built in the previous version of the Composable Architecture.

It starts out just as any other kind of feature, where we model the domain. For example, the state just holds onto an identified array of state so that we can show it in a list, but interestingly it is a recursive data type:

struct NestedState: Equatable, Identifiable {
  let id: UUID
  var name: String = ""
  var rows: IdentifiedArrayOf<NestedState> = []
}

The NestedState type holds onto a collection of itself. This is how we can allow drilling down any number of levels.

The action enum is similar: it contains a case that references itself so that we can express actions that happen any number of layers deep:

enum NestedAction: Equatable {
  case addRowButtonTapped
  case nameTextFieldChanged(String)
  case onDelete(IndexSet)
  indirect case row(
    id: NestedState.ID, action: NestedAction
  )
}

One thing to note is that we had to mark the enum as indirect in order to allow it to be recursive.

And then there’s an environment for the demo’s features, of which there is only one, a UUID generator:

struct NestedEnvironment {
  var uuid: () -> UUID
}

With the domain defined we can then define the reducer that implements the logic for the feature. This is a little tricky though. It would be easy enough to implement the logic for just a single list of rows. But we also need to implement the logic for a drill down from a list to a new list, and then another drill down, and on and on and on. Somehow the reducer itself needs to be recursive just as the state and actions were.

To aid in this, the demo comes with a little reducer helper called recurse. It allows you to implement a reducer like normal by being handed state, action and environment, but it also hands you a reference to self that can be used to perform recursive logic:

extension Reducer {
  static func recurse(
    _ reducer:
      @escaping (Self, inout State, Action, Environment)
        -> Effect<Action, Never>
  ) -> Self {

    var `self`: Self!
    self = Self { state, action, environment in
      reducer(self, &state, action, environment)
    }
    return self
  }
}

The implementation is simple enough. You define an implicitly unwrapped optional reducer up front, create a new reducer that captures that value, and then assign the value. That little dance allows you to tie the loop of recursion.
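The same knot-tying trick can be seen in isolation with a plain recursive function. Here `tieTheKnot` and `factorial` are hypothetical stand-ins for illustration, not part of the library:

```swift
// A standalone sketch of the same "tie the loop" trick, with a plain
// function in place of a reducer (hypothetical example, not library code).
func tieTheKnot<A, B>(
  _ f: @escaping ((A) -> B, A) -> B
) -> (A) -> B {
  var tied: ((A) -> B)!
  tied = { a in f(tied, a) }
  return tied
}

let factorial: (Int) -> Int = tieTheKnot { recur, n in
  n <= 1 ? 1 : n * recur(n - 1)
}
print(factorial(5))  // 120
```

Just as in the reducer helper, the closure captures the implicitly unwrapped variable before it is assigned, and by the time the closure actually runs, the variable holds the very function being defined.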

With that helper defined you can now define the demo’s logic in the standard way with the one small addition that we will invoke the self reducer inside the recursive row action:

let nestedReducer = Reducer<
  NestedState, NestedAction, NestedEnvironment
>.recurse { `self`, state, action, environment in
  switch action {
  case .addRowButtonTapped:
    state.rows.append(
      NestedState(id: environment.uuid())
    )
    return .none

  case let .nameTextFieldChanged(name):
    state.name = name
    return .none

  case let .onDelete(indexSet):
    state.rows.remove(atOffsets: indexSet)
    return .none

  case .row:
    return self.forEach(
      state: \.rows,
      action: /NestedAction.row(id:action:),
      environment: { $0 }
    )
    .run(&state, action, environment)
  }
}

This reducer is a little bit mind-bendy, but it gets the job done. Needing to .forEach on the self in order to run the reducer on the collection of children is wild, but once that is done you get something really powerful out of it.

Let’s see what this looks like when we port this over to the ReducerProtocol. I’ll switch over to version 0.41, and we’ll see that the domain modeling looks basically the same, except we now use @Dependency in order to instantly get access to a fully controllable UUID generator:

struct Nested: ReducerProtocol {
  struct State: Equatable, Identifiable {
    let id: UUID
    var name: String = ""
    var rows: IdentifiedArrayOf<State> = []
  }

  enum Action: Equatable {
    case addRowButtonTapped
    case nameTextFieldChanged(String)
    case onDelete(IndexSet)
    indirect case row(id: State.ID, action: Action)
  }

  @Dependency(\.uuid) var uuid

  …
}

Where things start to really differ is how the actual logic of the reducer is implemented. Rather than having a closure that takes state and action, we implement a property called body:

var body: some ReducerProtocol<State, Action> {
  …
}

This is the way to compose reducers in the Composable Architecture, and it may not seem like we are composing anything, but we really are. We not only need to run the logic for a particular list of rows, but also the logic for the drill downs of those rows, and the drill downs of the drill downs, etc.

So, we implement the logic for just a single list of rows by constructing a Reduce value that is handed some state and an action so that we can mutate the state and return any effects necessary.

Then the real magic happens. We invoke the .forEach operator on that reducer in order to run another reducer on each row of the collection of child states. But, which reducer do we want to run? We want to recursively run the same reducer on each row.

Previously that required some tricks to get a recursive handle on the reducer so that we could invoke it for each row, but now that isn’t necessary at all. The reducers constructed in the body property are evaluated lazily: to run the reducer, the library first invokes the body property to construct a reducer, and only then calls its reduce method.
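This laziness can be seen with a plain computed property. The `Node` type below is a hypothetical example, not library code:

```swift
// Hypothetical sketch: a type that references itself in a computed
// property. Nothing recurses until `child` is actually accessed.
struct Node {
  var depth: Int
  var child: Node { Node(depth: depth + 1) }  // built on demand, like `body`
}

let root = Node(depth: 0)
print(root.child.child.depth)  // 2 — two levels constructed, no infinite loop
```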

This means we can recursively use Self inside the body property:

var body: some ReducerProtocol<State, Action> {
  Reduce { state, action in
    switch action {
    case .addRowButtonTapped:
      state.rows.append(State(id: self.uuid()))
      return .none

    case let .nameTextFieldChanged(name):
      state.name = name
      return .none

    case let .onDelete(indexSet):
      state.rows.remove(atOffsets: indexSet)
      return .none

    case .row:
      return .none
    }
  }
  .forEach(\.rows, action: /Action.row(id:action:)) {
    Self()
  }
}

This will run the Self reducer on each row of the collection as actions come into the system. This is so much simpler and clearer than the contortions we had to put ourselves through previously.

But the benefits go beyond what we are seeing here. Because reducers are now expressed as types that expose a reduce method or a body property, Swift can do a much better job at optimizing and inlining code. That causes stack traces to become much slimmer, which can lead to performance improvements and decrease memory usage.
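To get a feel for why, here is a minimal sketch contrasting the two styles. The `SketchReducer` protocol and `CounterReducer` type are hypothetical, not the library’s types:

```swift
// Hypothetical sketch, not the library's types. A concrete conforming
// struct gives the compiler a known reduce implementation it can
// specialize and inline; a stored closure is an opaque function value
// that must be called through.
protocol SketchReducer {
  associatedtype State
  associatedtype Action
  func reduce(into state: inout State, action: Action)
}

struct CounterReducer: SketchReducer {
  func reduce(into state: inout Int, action: Bool) {
    state += action ? 1 : -1
  }
}

let counterClosure: (inout Int, Bool) -> Void = { state, action in
  state += action ? 1 : -1
}

var a = 0
CounterReducer().reduce(into: &a, action: true)  // statically dispatched
var b = 0
counterClosure(&b, true)                         // called through a closure
print(a, b)  // 1 1
```

Both produce the same result, but in the struct-based version the compiler sees the concrete implementation at the call site, which is what makes the aggressive inlining in the stack traces below possible.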

To see this in concrete terms, let’s stub some data in this recursion case study so that we can drill down 20 levels deep, and we’ll put a breakpoint in the action that is sent when the add button is tapped. We will also switch the build configuration to “Release” so that we get a realistic picture of how the app behaves when running in production.

If we run in the simulator, drill down all 20 times, and then add a row, the breakpoint will trigger and we will see a pretty sizable stack trace. Let’s copy and paste it into a text document.

There are 165 stack frames, but the first stack frame with our code is at #129. So 36 of these frames are just things that iOS and SwiftUI are doing that we have no control over. Further, 98 of these stack frames have the label “[inlined]”, which means they aren’t actual stack frames; they get optimized away.

This means that 165 minus 98 is the true number of stack frames, which is 67, and of those 67 stack frames, 36 are out of our control. So our code constitutes only 31 stack frames even though we have a highly composed feature spanning 20 layers of functionality.

We can even quickly delete all lines that contain “[inlined]” to see just how short and succinct this stack trace is.

We can see that a little helper method in the _ForEachReducer isn’t getting inlined for some reason. Perhaps it’s a little too heavyweight and so Swift decided not to inline it. That’s ok, we don’t need to inline everything.

Let’s quickly compare this to how things used to be before the reducer protocol. We aren’t going to run the older version to get the stack trace. Instead, we’ve already done all of that work, and I have the stack trace I can paste right here.

This is the stack trace from drilling down 20 levels deep and then tapping the add button. It has 191 stack frames, but the first stack frame from our code happens at #155. So again, about 36 stack frames are due to just iOS and SwiftUI right out of the gate.

However, if we search for “[inlined]” in this stack trace we will see that only 42 frames were inlined, as opposed to 98 stack frames when using the reducer protocol. This means that our application code constitutes a whopping 113 stack frames once you remove all of the inlined frames and the frames that are out of our control. That’s more than 3 times the number of frames in the protocol version of this feature.

To see something even more shocking, let’s take a look at the stack trace back in version 0.39 of the library. This was a few releases ago, and it was before we made a series of sweeping performance improvements to the library a few weeks ago. I’ll paste in the stack trace of running the exact same case study, drilling down 20 levels, and then tapping the add button.

There are now a whopping 347 stack frames, of which only 42 have been inlined. Removing those stack frames and the ones that are out of our control we will find that our application contributes 269 stack frames, which is nearly 10 times more than when using the reducer protocol. This is absolutely massive, and should come with some performance benefits and decrease in memory usage.

Preview dependencies

While performance improvements to the library are certainly welcome, by far the biggest improvement made to the library thanks to the reducer protocol is the new, shiny dependency management system. We are going to show how this new system completely changes the way we deal with dependencies by looking at isowords in a moment, but before then we can show off an improvement we made in the final release of the library that was not covered in an episode. And it was all thanks to a suggestion from a community member that participated in the public beta.

As we covered in the episodes, when you register a dependency with the library you must always specify a “live” value that is used when the application runs in the simulator or on device. It’s the version of the dependency that can actually interact with the outside world, including making network requests, accessing location managers, or who knows what else.

You can also provide a “test” value, and that will be used when testing your feature with the TestStore, and typically we like to construct an instance of the dependency that performs an XCTFail if any of its endpoints are invoked. This gives us the nice behavior of forcing us to account for how dependencies are used in tests.

For the final release of the library we added one more type of dependency you can provide: a “preview” value. This is the version of the dependency that will be used when running your feature in an Xcode preview. This gives you a chance to provide a baseline of data and functionality without using an actual, live dependency. You of course don’t have to provide a preview value, and if you don’t it will default to the live value.

Let’s take a look at the speech recognition demo application to see how this works. Recall that this demo shows off how to use Apple’s Speech framework to live transcribe audio into a text transcript on the screen. Let’s quickly demo that in the simulator.

The way this works is that we have defined a SpeechClient dependency that represents the interface to how one interacts with the Speech framework in iOS:

struct SpeechClient {
  var finishTask: @Sendable () async -> Void
  var requestAuthorization: @Sendable
    () async ->
      SFSpeechRecognizerAuthorizationStatus
  var startTask: @Sendable
    (SFSpeechAudioBufferRecognitionRequest) async ->
      AsyncThrowingStream<
        SpeechRecognitionResult, Error
      >
}

It has 3 simple endpoints. One for asking for authorization to recognize speech, one for starting a speech recognition task, and then one for stopping the task.

We provide a number of implementations of this interface. The most important one is the “live” client, which actually calls out to Apple’s APIs under the hood.

We even use an actor under the hood in order to serialize access to Apple’s framework. That’s a technique we will discuss on Point-Free sometime in the future.

There’s also the “unimplemented” testValue that simply causes a test failure if any of its endpoints are called.

There’s also this super interesting previewValue.

It’s an implementation of the SpeechClient that emulates how the speech APIs work without actually calling out to any of Apple’s APIs. When you start a speech recognition task with this client it will just send back a stream of transcripts that spell out a bunch of “lorem ipsum” text. It even dynamically changes the cadence of the words to emulate longer words taking longer to say. This allows you to see how the client’s behavior flows through your feature’s logic without needing to call Apple’s APIs.
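The cadence idea can be sketched in a few lines. The 100ms-per-character rate below is a made-up stand-in, not the actual timing used by the previewValue:

```swift
// Hypothetical sketch of the cadence idea: each word's simulated delay
// scales with its length, so longer words "take longer to say".
let transcript = "Lorem ipsum dolor sit amet"
for word in transcript.split(separator: " ") {
  let delayMs = word.count * 100  // made-up rate: 100ms per character
  print("\(word) after \(delayMs)ms")
}
```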

And the reason you would want to do that is because many times it is not possible to use Apple’s APIs. In particular, in SwiftUI previews. It is just not possible to use the Speech framework in SwiftUI previews. Same goes for Core Location, Core Motion, and a lot more. If you want to run features that use those technologies in a preview, you have to go the extra mile to control those dependencies so you can supply stubbed out data and behavior. Otherwise your feature will just be broken in the preview and you won’t be able to iterate on its logic or styling quickly.

If we hop to SpeechRecognition.swift and go to the bottom of the file we will see that a preview is provided, and it’s quite simple:

struct SpeechRecognitionView_Previews: PreviewProvider {
  static var previews: some View {
    SpeechRecognitionView(
      store: Store(
        initialState: SpeechRecognition.State(
          transcribedText: "Test test 123"
        ),
        reducer: SpeechRecognition()
      )
    )
  }
}

There’s no mention of dependencies at all.

Based on how we developed the dependency story in our past episodes we would be using the live speech client in this preview, which means accessing the Speech framework’s APIs, which means we would just have a broken preview. Nothing would actually work.

But, if we run the preview and hit the record button, we will see that the feature emulates what happens in practice when running the app. A stream of words slowly animate onto the screen, as if we were speaking those words and having the app live transcribe it.

This is happening because when registering dependencies with the library you get to specify a version of the dependency to use only in previews, allowing you to provide some stubbed data and logic. Had the preview been using the live implementation we would have a completely non-functional preview, as can be seen by overriding the dependency to be the liveValue:

struct SpeechRecognitionView_Previews: PreviewProvider {
  static var previews: some View {
    SpeechRecognitionView(
      store: Store(
        initialState: SpeechRecognition.State(
          transcribedText: "Test test 123"
        ),
        reducer: SpeechRecognition()
          .dependency(\.speechClient, .liveValue)
      )
    )
  }
}

The preview is completely broken now, which means you lose the ability to iterate on how text flows onto the screen. Maybe you want to play with styling or animations. With the live dependency that actually interacts with the Speech framework that would be impossible, but thanks to our “lorem” client it’s very easy.

And this all works because we have provided a previewValue in our conformance to the TestDependencyKey protocol:

extension SpeechClient: TestDependencyKey {
  static let previewValue = {
    …
  }()
  …
}

Remember, it’s not necessary to provide this. We don’t want to make it harder for you to adopt this dependency system. If you choose not to provide a previewValue, it will default to the liveValue in previews. And you can always override dependencies directly on the reducer when constructing your preview.

ifCaseLet

There’s one last thing we want to show off in the demo applications that come with the library before we hop over to isowords. In the episodes discussing the reducer protocol we showed how dealing with optional and array state in the library used to be fraught. It was on you to wield the APIs correctly, in particular you needed to combine the child and parent reducers in a very particular order.

We re-imagined what these operations could look like by making them into methods defined on the reducer protocol so that we could enforce the order under the hood, thus baking more correctness into the API.

We showed off how this looked in the voice memos demo application, where the root feature needs to conditionally run a reducer on some optional state and be able to run a reducer on each element of a collection. It looked something like this:

var body: some ReducerProtocol<State, Action> {
  Reduce { state, action in
    …
  }
  .ifLet(
    \.recordingMemo, action: /Action.recordingMemo
  ) {
    RecordingMemo()
  }
  .forEach(
    \.voiceMemos, action: /Action.voiceMemo(id:action:)
  ) {
    VoiceMemo()
  }
}

To use ifLet you first identify the optional state you want to operate on, as well as the actions the child domain uses, and then specify the reducer you want to run on that optional state. And similarly for forEach.

Well, those operators work great for optional and collection state, but there’s another kind of state that is important to be able to handle: enums. For that reason we have also added an ifCaseLet operator to the library.

We have an example of this in the Tic-Tac-Toe demo application, which models its root state as an enum for whether or not the user is logged in:

public enum State: Equatable {
  case login(Login.State)
  case newGame(NewGame.State)

  public init() { self = .login(Login.State()) }
}

Then we can compose a reducer together that runs a reducer on each case of the enum in addition to a reducer that handles the root level logic of the application:

public var body: some ReducerProtocol<State, Action> {
  Reduce { state, action in
    …
  }
  .ifCaseLet(/State.login, action: /Action.login) {
    Login()
  }
  .ifCaseLet(/State.newGame, action: /Action.newGame) {
    NewGame()
  }
}

This operator bakes in the same safety features as ifLet and forEach, making it easier to use correctly.

isowords

So, this new release is looking pretty great for simplifying features built in the Composable Architecture. The new protocol is capable of expressing recursive features in a simple, natural way, and we’ve even added new powerful features to the dependency system that we didn’t get a chance to talk about in episodes.

Let’s now turn our attention to isowords, our open source word game built entirely in SwiftUI and the Composable Architecture. It’s a highly modularized code base, with each core feature of the application put into its own module, and it’s a pretty complex application, needing to deal with lots of effects, including network requests, Game Center, randomness, audio players, haptics and more.

We also have an extensive test suite, both unit tests and snapshot tests, for all major parts of the application, which means we lean heavily on controlling our dependencies. By embracing the ReducerProtocol we were able to delete a massive amount of unnecessary code and simplify some of our most complicated reducers.

Let’s take a look.

If you recall, we kicked off our reducer protocol series of episodes by showing all the problems with the library that we think could be solved with the protocol. The boilerplate associated with explicit environments of dependencies was a huge problem. We showed this by adding a dependency to a leaf node feature in isowords, the settings screen:

struct SettingsEnvironment {
  var someValue: Int
  …
}

…and saw how that seemingly innocent change reverberated throughout the entire application. We had to update every feature that touched the settings feature by adding this dependency to their environments, then updating their initializers to handle that new dependency since the features are modularized, and then pass that new dependency down to settings. And then we had to do it all over again for every feature that touched a feature that touched the settings feature. And on, and on, and on until we got to the entry point of the application. And if that wasn’t bad enough, the tests were also broken and needed to be updated, but we didn’t even attempt to do that in the episode.

All in all, it took us 8 minutes to accomplish this in the episode, and that’s with movie magic editing to try to make the experience less painful for our viewers, while still trying to communicate just how painful it is to do in real life.

Let’s see what this looks like with reducer protocols. We’ve already got the settings feature converted to the new ReducerProtocol, and it uses the @Dependency property wrapper to specify which dependencies it needs:

public struct Settings: ReducerProtocol {
  @Dependency(\.apiClient) var apiClient
  @Dependency(\.applicationClient) var applicationClient
  @Dependency(\.audioPlayer) var audioPlayer
  @Dependency(\.build) var build
  @Dependency(\.fileClient) var fileClient
  @Dependency(\.mainQueue) var mainQueue
  @Dependency(\.remoteNotifications.register)
  var registerForRemoteNotifications
  @Dependency(\.serverConfig.config) var serverConfig
  @Dependency(\.storeKit) var storeKit
  @Dependency(\.userNotifications) var userNotifications

  …
}

If the dependency we want to add to this feature happens to already exist, whether it’s a dependency that ships with the library or a first-party dependency that you defined, then we can just add it directly. For example, suppose the settings feature all of a sudden needs access to the current date. That’s as simple as this:

@Dependency(\.date) var date

Miraculously everything still compiles, even tests. There is no step 2. In less than 10 seconds we can add a new dependency without changing any feature that needs to interact with the settings feature. Settings will be automatically provided a live date dependency when run in the simulator or on a device, and in tests it will be provided an “unimplemented” version that causes a test failure if it is ever used, forcing you to think about how this dependency might affect your feature’s logic.

For example, if we run the settings tests right now we will find that all tests pass because nothing in the reducer is actually using the date dependency. If we start using it somewhere, like say just computing the current date inside an effect:

case .binding(\.$developer.currentBaseUrl):
  return .fireAndForget {
    [url = state.developer.currentBaseUrl.url] in

    await self.apiClient.setBaseUrl(url)
    await self.apiClient.logout()
    _ = self.date.now
  }

We now get a test failure because an unimplemented dependency is being used:

testSetApiBaseUrl(): Unimplemented: @Dependency(.date)

This is incredible. The test suite is letting us know that something tricky is happening in our feature that we aren’t yet asserting against, and so we should do something about that.

To get the test passing we need to stub out the date dependency with something we control, like a constant date:

store.dependencies.date.now =
  Date(timeIntervalSinceReferenceDate: 1234567890)

And now the test passes. Of course, we didn’t add a new assertion, but that’s also because we didn’t actually use the date in any meaningful way. If we had then there would actually be some more work to do here.

So, it takes less than 10 seconds to add a dependency to a feature if that dependency happens to already be available. What if you need to register a whole new dependency with the library?

For example, in the past episode demonstrating the problem of environments, we added an integer to the environment to show how things go wrong. Let’s do the same here. It starts by creating a new type that represents a key that can be used to find a dependency in the global, nebulous blob of dependencies:

private enum SomeValueKey: DependencyKey {
}

The bare minimum you need to provide this conformance is a liveValue, which is the value used when running your application on a device or simulator. Right now we’ll just use an integer:

private enum SomeValueKey: DependencyKey {
  static let liveValue = 42
}

…but more generally this is where you would construct an implementation of some dependency client that interacts with the real world, such as making network requests, interacting with databases, file systems and more.

With the key defined, we now need to provide a computed property on DependencyValues for accessing and setting the dependency:

extension DependencyValues {
  var someValue: Int {
    get { self[SomeValueKey.self] }
    set { self[SomeValueKey.self] = newValue }
  }
}

DependencyValues is the global, nebulous blob of dependencies, and so this computed property “registers” the dependency with the library so that it can be instantly used from any reducer.

And this little dance to register the dependency might seem a little weird, but really it’s no different than what one has to do to register an environment value with SwiftUI, which allows you to implicitly push values deep into a view hierarchy. In fact, we modeled our dependency system heavily off of how environment values work in SwiftUI.
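The pattern can be sketched in miniature. The `Sketch`-prefixed types below are hypothetical stand-ins; the library’s real DependencyValues storage is more sophisticated:

```swift
// Hypothetical miniature of the key/computed-property pattern; the
// library's actual DependencyValues storage is more involved.
protocol SketchDependencyKey {
  associatedtype Value
  static var liveValue: Value { get }
}

struct SketchDependencyValues {
  private var storage: [ObjectIdentifier: Any] = [:]

  subscript<Key: SketchDependencyKey>(key: Key.Type) -> Key.Value {
    get { storage[ObjectIdentifier(key)] as? Key.Value ?? Key.liveValue }
    set { storage[ObjectIdentifier(key)] = newValue }
  }
}

private enum SketchSomeValueKey: SketchDependencyKey {
  static let liveValue = 42
}

extension SketchDependencyValues {
  var someValue: Int {
    get { self[SketchSomeValueKey.self] }
    set { self[SketchSomeValueKey.self] = newValue }
  }
}

var values = SketchDependencyValues()
print(values.someValue)  // 42, falling back to liveValue
values.someValue = 0
print(values.someValue)  // 0, the overridden value
```

The key type indexes into a heterogeneous store, and the computed property gives it an ergonomic, discoverable name, exactly the shape of SwiftUI’s EnvironmentKey and EnvironmentValues.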

With that little bit of work done, we instantly get the ability to fetch this dependency from the global DependencyValues store:

// @Dependency(\.date) var date
@Dependency(\.someValue) var someValue

And we can start using it right in the reducer:

// _ = self.date.now
_ = self.someValue

This was incredibly easy to do. If I hadn’t been blabbering the whole time I could have added this dependency in under a minute, and the whole application still builds as do all tests.

Speaking of tests, how does registering new dependencies with the library affect tests? Let’s run them and find out.

Well, looks like we got a failure:

testSetApiBaseUrl(): @Dependency(.someValue) has no test implementation, but was accessed from a test context:

Location:
  SettingsFeature/Settings.swift:204
Key:
  SomeValueKey
Value:
  Int

Dependencies registered with the library are not allowed to use their default, live implementations when run in a ‘TestStore’.

To fix, override ‘someValue’ with a mock value in your test by mutating the ‘dependencies’ property on your ‘TestStore’. Or, if you’d like to provide a default test value, implement the ‘testValue’ requirement of the ‘DependencyKey’ protocol.

This helpfully lets us know that we haven’t provided a test implementation of our dependency. It even tells us exactly which dependency it is and where we used it.

And it’s pretty clear we don’t have a test value for this dependency by looking at its conformance to DependencyKey:

private enum SomeValueKey: DependencyKey {
  static let liveValue = 42
}

The library is taking a stance on how live dependencies are allowed to be used in tests.

On the one hand, we want to make it easy for you to get started with the dependency system by not forcing you to provide a live value and a test value just to get something up on the screen. So, we require only a liveValue at the bare minimum, and then the testValue will be derived from that liveValue.

However, we do not think it’s ever appropriate to use live dependencies in tests. This could lead you to making network requests, accidentally tracking analytics events that don’t represent true user behavior, or trampling on the global, shared user defaults in your application. None of that is ideal, so the library forces its opinion on users.

Luckily, the fix is easy. You just need to supply a version of the dependency that is appropriate for use in tests. You can do this on a test-by-test basis by overriding the dependency on the test store:

store.dependencies.someValue = 42

Now the test passes, and so if we were using this value in some real way we could make an assertion on that logic.

Or you can drop that line from the test, and instead provide all tests with a default test value by augmenting the dependency key:

private enum SomeValueKey: DependencyKey {
  static let liveValue = 42
  static let testValue = 0
}

Now tests still pass.

Now, for some very simple dependencies it may be fine to stub in a test value for all tests to use without failure, but as we mentioned a moment ago it can be very powerful to know when features are using dependencies that you didn’t account for so that you can strengthen your assertions.

So, most of the time we recommend leaving out the testValue in your dependencies so that you can get that instant feedback when something starts using the dependency.

If your dependency is very complicated, having a whole bunch of endpoints you can interact with, like say a file system client that can create, read, update and delete files, then you may want to provide an “unimplemented” version of the dependency that invokes XCTFail whenever any of its endpoints are accessed.

For example, the audio player dependency that allows us to play sound effects and music in the game has 8 different endpoints:

public struct AudioPlayerClient {
  public var load: @Sendable ([Sound]) async -> Void
  public var loop: @Sendable (Sound) async -> Void
  public var play: @Sendable (Sound) async -> Void
  public var secondaryAudioShouldBeSilencedHint:
    @Sendable () async -> Bool
  public var setGlobalVolumeForMusic:
    @Sendable (Float) async -> Void
  public var setGlobalVolumeForSoundEffects:
    @Sendable (Float) async -> Void
  public var setVolume:
    @Sendable (Sound, Float) async -> Void
  public var stop: @Sendable (Sound) async -> Void

  …
}

If one of these endpoints is used in a test where we didn’t explicitly override it, we want a failure to let us know exactly which endpoint was accessed. And for that reason we perform a little bit of upfront work to provide an unimplemented version of the client that causes a test failure if the endpoint is ever accessed:

extension AudioPlayerClient: TestDependencyKey {
  public static let previewValue = Self.noop

  public static let testValue = Self(
    load: XCTUnimplemented("\(Self.self).load"),
    loop: XCTUnimplemented("\(Self.self).loop"),
    play: XCTUnimplemented("\(Self.self).play"),
    secondaryAudioShouldBeSilencedHint: XCTUnimplemented(
      "\(Self.self).secondaryAudioShouldBeSilencedHint",
      placeholder: false
    ),
    setGlobalVolumeForMusic: XCTUnimplemented(
      "\(Self.self).setGlobalVolumeForMusic"
    ),
    setGlobalVolumeForSoundEffects: XCTUnimplemented(
      "\(Self.self).setGlobalVolumeForSoundEffects"
    ),
    setVolume: XCTUnimplemented(
      "\(Self.self).setVolume"
    ),
    stop: XCTUnimplemented("\(Self.self).stop")
  )
}

These XCTUnimplemented functions are provided by our XCTestDynamicOverlay library, which automatically comes with the Composable Architecture:

import XCTestDynamicOverlay

It allows you to define test helpers in application code, which is usually not possible because the XCTest framework is not available outside of test targets.

This forces us to override each individual endpoint we expect to be used in the user flow we are testing.
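For example, a test of a flow that plays and loops sounds might override just those two endpoints, leaving the other six unimplemented so they still fail the test if accessed. The closure shapes here follow the `AudioPlayerClient` interface shown above:

```swift
// Sketch: override only the endpoints the flow under test
// should touch; any other endpoint still fails the test.
store.dependencies.audioPlayer.play = { _ in }
store.dependencies.audioPlayer.loop = { _ in }
```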

So, things are looking pretty incredible. It’s so easy to add new dependencies, and by default the library guides you to do so in the safest way possible when it comes to testing. There’s one other thing we want to show, which is something we discussed in past episodes.

Sometimes dependencies can be quite heavyweight or difficult to build, especially if they depend on a 3rd party framework, like Firebase, FFmpeg, a web socket library, or who knows what else. In those cases we like to separate the interface of the dependency, which is usually super lightweight and builds very quickly, from the implementation of the live dependency, which actually needs access to the heavyweight stuff.

We’ve got one example of needing to do this. Our API client separates interface from implementation because the interface only needs access to a few basic things that build quite fast:

.target(
  name: "ApiClient",
  dependencies: [
    "SharedModels",
    "XCTestDebugSupport",
    .product(
      name: "CasePaths", package: "swift-case-paths"
    ),
    .product(
      name: "Dependencies",
      package: "swift-composable-architecture"
    ),
    .product(
      name: "XCTestDynamicOverlay",
      package: "xctest-dynamic-overlay"
    ),
  ]
),

The live implementation, however, needs access to the ServerRouter library:

.target(
  name: "ApiClientLive",
  dependencies: [
    "ApiClient",
    "ServerRouter",
    "SharedModels",
    "TcaHelpers",
    .product(
      name: "Dependencies",
      package: "swift-composable-architecture"
    ),
  ],
  exclude: ["Secrets.swift.example"]
),

…which is the thing that actually constructs the router that powers both the API client for the iOS app and the router for the server. It uses our parsing library to do that, which incurs a small compilation cost:

.target(
  name: "ServerRouter",
  dependencies: [
    "SharedModels",
    .product(
      name: "Tagged", package: "swift-tagged"
    ),
    .product(
      name: "Parsing", package: "swift-parsing"
    ),
    .product(
      name: "URLRouting", package: "swift-url-routing"
    ),
    .product(
      name: "XCTestDynamicOverlay",
      package: "xctest-dynamic-overlay"
    ),
  ]
),

We can see this in concrete terms by building each library. If we build the ApiClient library we will see it takes about 3 seconds, so quite fast. And if we build the ApiClientLive library we will see it takes about 8 seconds. Still pretty fast, but it is definitely slower. And in the future the live library could get slower and slower to build.

But the cool thing is that any feature that needs the API client never has to incur the cost of the live API client, and hence the cost of the parsing library and router. Features that need the API client only need the interface, and so they just incur the 3 second compilation cost. That means it will be faster to iterate on, and as we’ve mentioned many times on Point-Free, beyond raw compilation times, the less you build into a feature the more stable certain tools become, such as SwiftUI previews. We’ve seen cases where previews were completely broken for a feature module, but by eliminating access to a few live dependencies, especially certain Apple frameworks, we were able to restore preview functionality.

We can see how the dependency registration process works when interfaces and implementations are separated. In the ApiClient library we register the ApiClient type as a dependency value by defining a TestDependencyKey, which only requires that we provide a testValue, and optionally a previewValue. No liveValue is necessary at this point:

extension DependencyValues {
  public var apiClient: ApiClient {
    get { self[ApiClient.self] }
    set { self[ApiClient.self] = newValue }
  }
}
extension ApiClient: TestDependencyKey {
  public static let previewValue = Self.noop
  public static let testValue = Self(…)
}

Also note here that we didn’t introduce a whole new type to conform to TestDependencyKey. Often it is possible to conform a dependency’s interface directly to the dependency key protocols, and this can save us from the boilerplate and ceremony of defining yet another type just to register the dependency.

And then in the ApiClientLive library we can fully conform to the DependencyKey protocol by providing the live value:

extension ApiClient: DependencyKey {
  public static let liveValue = Self.live(
    sha256: { Data(SHA256.hash(data: $0)) }
  )
  …
}

So that’s pretty cool.

So, we’ve seen how to add a new dependency to a feature, but the entire application has already been converted to the new dependency system, and so what did that look like? Well, there was a ton of code we were able to delete.

The environment for the root app feature was so big that we had to put it in its own file, even though we typically like to define all domain-related types together. The file was 140 lines, consisting of an import for every dependency the entire application uses, a struct with fields for every single dependency, an initializer that takes every dependency and assigns it, and then at the bottom some useful instances of the environment, such as an unimplemented one for tests and a “no-op” one handy for previews.

Those 140 lines of code squash down to just 9 declarations in the AppReducer struct:

@Dependency(\.fileClient) var fileClient
@Dependency(\.gameCenter.turnBasedMatch.load)
var loadTurnBasedMatch
@Dependency(\.database.migrate) var migrate
@Dependency(\.mainRunLoop.now.date) var now
@Dependency(\.dictionary.randomCubes) var randomCubes
@Dependency(\.remoteNotifications)
var remoteNotifications
@Dependency(\.serverConfig.refresh)
var refreshServerConfig
@Dependency(\.userDefaults) var userDefaults
@Dependency(\.userNotifications) var userNotifications

This is less than half the number of dependencies the full application uses. The app reducer doesn’t need things like an API client, or the haptic feedback generator, or store kit, or most of it really.

Further, not only did we get to whittle down the dependencies to just the 9 this feature needs, but we further whittled some dependencies down to just the one single endpoint the feature needs. For example, the only thing we need from the Game Center dependency is the ability to load turn based matches:

@Dependency(\.gameCenter.turnBasedMatch.load)
var loadTurnBasedMatch

The Game Center client has 15 other endpoints besides this load one, and we are making it very visible to anyone looking at this code that we do not need any of that. We just need the one endpoint.

Same goes for the database client:

@Dependency(\.database.migrate) var migrate

…the dictionary client:

@Dependency(\.dictionary.randomCubes) var randomCubes

…the server config client:

@Dependency(\.serverConfig.refresh)
var refreshServerConfig

…and even the main run loop:

@Dependency(\.mainRunLoop.now.date) var now

This makes it clear we’re not even doing any time-based asynchrony in this feature. We just need a way of getting the current date.

So this is a huge win for the root level app feature, but the wins multiplied with every single feature module in the entire application. We were able to delete 26 environment structs, which meant deleting 26 public initializers, and we deleted even more places where we had to transform a parent environment into a child environment. It’s hard to measure exactly, but we certainly deleted close to, if not over, 1,000 lines of code.

There are a couple of other fun things in the isowords code base. We have a reducer operator defined that can enhance any existing reducer with one that performs haptic feedback when certain events happen. At the call site it looks like this:

.haptics(
  isEnabled: \.isHapticsEnabled,
  triggerOnChangeOf: \.selectedCubeFaces
)

The operator’s first argument allows us to specify whether or not haptics are even enabled, which can be determined by reading a boolean from the feature’s state. The second argument determines when haptic feedback should be triggered: we specify a piece of equatable state, and when that state changes the feedback fires.

The cool part about this is that the haptics operator gets to hide some details from us that we don’t have to care about at the call site. In particular, the haptics functionality is implemented via a private type that conforms to the ReducerProtocol, and it depends on the feedbackGenerator dependency:

private struct Haptics<
  Base: ReducerProtocol, Trigger: Equatable
>: ReducerProtocol {
  let base: Base
  let isEnabled: (Base.State) -> Bool
  let trigger: (Base.State) -> Trigger

  @Dependency(\.feedbackGenerator) var feedbackGenerator

  var body: some ReducerProtocol<
    Base.State, Base.Action
  > {
    self.base
      .onChange(of: self.trigger) { _, _, state, _ in
        guard self.isEnabled(state)
        else { return .none }

        return .fireAndForget {
          await self.feedbackGenerator
            .selectionChanged()
        }
      }
  }
}

The feature invoking this functionality doesn’t need to know where it gets its dependencies from. That can be completely hidden.
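To connect the private type back to the call site, the public `.haptics` operator can be a small extension on the reducer protocol that simply wraps the private reducer. This is a sketch of what such an extension might look like; the exact signature in isowords may differ:

```swift
extension ReducerProtocol {
  // Hypothetical sketch of the public entry point that wraps
  // the private Haptics reducer shown above. Key paths like
  // \.isHapticsEnabled satisfy the function parameters.
  func haptics<Trigger: Equatable>(
    isEnabled: @escaping (State) -> Bool,
    triggerOnChangeOf trigger: @escaping (State) -> Trigger
  ) -> some ReducerProtocol<State, Action> {
    Haptics(base: self, isEnabled: isEnabled, trigger: trigger)
  }
}
```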

The only time we will actually care is when writing tests, in which case we will get some failing tests if the feedback generator is invoked without being properly stubbed. But as soon as that happens we can just stub the dependency, either by putting in a no-op if we just want to quiet the error, or with something that tracks some state so that we can confirm the generator was invoked the way we expect.
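Both options might look like the following sketch, assuming the feedback generator’s `selectionChanged` endpoint is an async closure as in the reducer above, and using the library’s `ActorIsolated` helper for sendable mutable state (the exact client shape is an assumption):

```swift
// Option 1: quiet the failure with a no-op.
store.dependencies.feedbackGenerator.selectionChanged = {}

// Option 2: track invocations so the test can assert on them.
let didGenerateFeedback = ActorIsolated(false)
store.dependencies.feedbackGenerator.selectionChanged = {
  await didGenerateFeedback.setValue(true)
}
// … send actions that should trigger feedback …
await didGenerateFeedback.withValue { XCTAssertTrue($0) }
```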

There’s another example of this in a sounds reducer operator. It layers complex sound effect logic on top of the game without having to muddy the game reducer, which is already extremely complex. This is done with a private GameSounds reducer, which also needs some dependencies but the parent doesn’t need to know anything about that:

private struct GameSounds<
  Base: ReducerProtocol<Game.State, Game.Action>
>: ReducerProtocol {
  @Dependency(\.audioPlayer) var audioPlayer
  @Dependency(\.date) var date
  @Dependency(\.dictionary.contains)
  var dictionaryContains
  @Dependency(\.mainQueue) var mainQueue

  …
}

And then inside the body we have a very complex reducer, because the logic guiding sound effects is quite complex, but amazingly this can be kept fully separate from the game reducer. We don’t need to litter the game logic code with all of this gnarly sound effect logic, which makes it easier to edit each reducer in isolation.

There’s one last example we want to look at in isowords that is quite advanced, and this is something that was quite awkward to accomplish before the reducer protocol. The feature that handles all of the logic for leaderboards is called LeaderboardResults. The file that has this logic has a preview that shows all the different variations it handles.

It’s quite generic. This one reducer handles the logic for game leaderboards, word leaderboards and daily challenge leaderboards. That is 3 pretty significantly different use cases to package up into a single feature.

The thing is, though, that they all basically work the same, they just need to be customized in the way they filter their results. To handle this we make the entire feature generic over the type of time scope that can be used:

public struct LeaderboardResults<TimeScope>:
ReducerProtocol {
  …
}

This allows us to use a time scope of past day/week/all time for game and word leaderboards, and for daily challenges we use the date that represents which day we are fetching results for.

This allows us to consolidate a massive amount of code which otherwise would need to be duplicated. And the best part is that the type we define to conform to the ReducerProtocol provides a natural place for us to define the generic:

LeaderboardResults<TimeScope>

Previously this was quite awkward with the Reducer struct. There’s no way to make a value generic. We can’t do something like this:

let leaderboardResultsReducer<TimeScope> = Reducer { … }

Instead, we had to define a function that takes no arguments just so that we could get access to a generic:

func leaderboardResultsReducer<TimeScope>() -> Reducer<
  LeaderboardResultsState<TimeScope>,
  LeaderboardResultsAction<TimeScope>,
  LeaderboardResultsEnvironment<TimeScope>
> { … }

This makes it much nicer to create these kinds of super generic, reusable components that can be mixed into other features.

Also, interestingly, these kinds of super generic components don’t necessarily need to leverage the dependency system. For example, LeaderboardResults has only one dependency, an async endpoint for loading results for a game mode and time scope, and it specifies it as a regular property:

public struct LeaderboardResults<TimeScope>:
ReducerProtocol {
  …
  public let loadResults: @Sendable
    (GameMode, TimeScope) async throws -> ResultEnvelope
  …
}

It’s not appropriate to use @Dependency for this because this needs to be customized at the point of creating the LeaderboardResults reducer. Dependency values are perfect for statically-known, global dependencies, but this dependency is super generic.

Instead, the feature that mixes LeaderboardResults into its functionality can lean on an @Dependency dependency in order to grab the endpoint it wants to pass along:

Scope(state: \.solo, action: /Action.solo) {
  LeaderboardResults(
    loadResults: self.apiClient.loadSoloResults
  )
}
Scope(state: \.vocab, action: /Action.vocab) {
  LeaderboardResults(
    loadResults: self.apiClient.loadVocabResults
  )
}

In this case we grab the loadSoloResults and loadVocabResults endpoints from the API client, and configure LeaderboardResults with those functions.

Conclusion

That concludes our quick overview of the latest release of the Composable Architecture, which introduces the reducer protocol and a whole new dependency management system. It’s worth noting that this update is 100% backwards compatible with the previous version of the library. If you already have a large application built with the library, there is no reason to stop everything and update everything right now. You can do it slowly, on your own time, piece by piece, and we even have some upgrade guides that give pointers on how to do that.

We have even made these changes compatible for people who can’t yet upgrade to Xcode 14 and Swift 5.7. The features that require Swift 5.7 tools gracefully degrade to Swift 5.6-friendly code, making it even easier for you to incrementally adopt the new reducer protocol when you are ready.

So, that’s it for this week, and next week we start a completely different topic.

Until next time!

