If there is one word I would use to describe what we have done so far it would be: “wow”.
In just one line of code we are expressing the idea of sharing a piece of state with the file system. Using @Shared
with file storage looks almost identical to using @Shared
with user defaults, but it works beautifully for more complex data types. Any change made to the shared state is automatically saved to disk, and if anyone else ever saves data straight to that file, the @Shared
state in the app will immediately update.
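Concretely, the two declarations look nearly identical. Here is a rough sketch using the "count" key and the favoriteFacts file that appear elsewhere in this tour (the exact model code may differ slightly):
@ObservationIgnored
@Shared(.appStorage("count")) var count = 0

@ObservationIgnored
@Shared(.fileStorage(.documentsDirectory.appending(component: "favoriteFacts.json")))
var favoriteFacts: [Fact] = []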
But things get even better. Even though the @Shared
property wrapper typically interacts with outside systems that we do not control, such as user defaults and the file system, it was still built in a way that makes it possible to test any of your code using @Shared
. And it can be done with no additional setup work.
It’s amazing to see, so let’s write a very basic test for our feature.
Let’s hop over to the project’s test file…
Which is using Swift’s new native Testing framework, and we’ll add a suite and test:
import Testing
@testable import TourOfSharing
@Suite
@MainActor
struct FactFeatureTests {
  @Test func basics() {
  }
}
In this test we can create an instance of our model so that we can invoke its methods and assert on how its state changes:
@Test func basics() {
  let model = FactFeatureModel()
}
The easiest thing to test would be that tapping the “Increment” button does indeed cause the count
to increase by one:
model.incrementButtonTapped()
#expect(model.count == 1)
And this test passes. But this isn’t too impressive yet.
Next we could try emulating the user tapping on the “Get fact” button:
await model.factButtonTapped()
After this button is tapped we of course do not expect the count to change:
#expect(model.count == 1)
And further, after the async work is completed we expect that the model’s fact
is somehow mutated:
#expect(model.fact == <#???#>)
How can we predict what this value is? If the code we are exercising makes a live network request to the Numbers API, then we can’t possibly know what kind of fact they are going to return.
However, remember that we did provide a specialized goodFacts
for our FactClient
that did return something that was easy to predict:
#expect(model.fact == "1 is a good number!")
Perhaps the goodFacts
can be used in tests too so that we can make this assertion?
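As a refresher, that client was defined earlier in the tour, and in spirit it looks something like this sketch (the exact shape of FactClient may differ from what is shown here):
// A sketch only: every number is simply reported to be a good number,
// which makes the returned fact easy to predict in tests and previews.
extension FactClient {
  static var goodFacts: Self {
    Self(fetch: { number in "\(number) is a good number!" })
  }
}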
Well, if we run tests we will find that the suite fails with the following error:
Issue recorded: @Dependency(FactClient.self) has no test implementation, but was accessed from a test context: …
It is not allowed to use a dependency in a test without explicitly overriding it. This is a great test failure to have. It forces you to prove that you know which dependencies are going to be accessed when executing a user flow, and gives you great coverage over how your features will behave.
To override a dependency just for one test we can wrap the body of the test in withDependencies
:
@Test func basics() async {
  await withDependencies {
  } operation: {
    …
  }
}
The first trailing closure allows us to change the dependencies however we want, and then the last trailing closure will be executed with those dependencies changed, which means we can override the fact client to get a passing test:
await withDependencies {
  $0[FactClient.self] = .goodFacts
} operation: {
  …
}
It is important to note that the dependencies will be changed for just that one lexical scope and will not interfere with any other part of our code base that happens to be running at the same time.
However, needing to write our test code in withDependencies
and incurring an extra level of indentation is a bit of a pain. There is a better way that works most of the time. Our Dependencies library comes with a test trait that allows us to change the dependencies for a single test:
@Test(.dependency(…))
But to get access to this trait we must further depend on the DependenciesTestSupport
module and import it:
import DependenciesTestSupport
This is a library bundled in the Dependencies package that provides testing tools. It is only appropriate to depend on this library from test targets, and should never be done from modules that will ship in an app.
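If your test target lives in a SwiftPM package, wiring this up might look roughly like the following Package.swift excerpt (the target names here are placeholders for your own):
// Package.swift (excerpt) — target names are illustrative.
.testTarget(
  name: "TourOfSharingTests",
  dependencies: [
    "TourOfSharing",
    .product(name: "DependenciesTestSupport", package: "swift-dependencies"),
  ]
)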
And now we can specify the instance of FactClient
we want to use for this test:
@Test(.dependency(FactClient.…))
And remember how we defined a “good facts” client that simply tells us that every number is a good number? Well, that sounds like an appropriate fact client to use for this test so that we can predict the fact returned:
@Test(.dependency(FactClient.goodFacts))
That is all it takes. The FactClient
dependency will be overridden for just this one test, allowing us to predict the values it returns and assert how our feature behaves when invoking the dependency. In fact, our tests now pass, and do so immediately, deterministically, and all without ever making a network request to the outside world.
Let’s keep going. Next let’s emulate the user tapping on the star icon to save this fact as one of their favorites:
model.favoriteFactButtonTapped()
After invoking that method we expect a fact to be added to the favorites, but also there are some complications here:
#expect(
  model.favoriteFacts == [
    Fact(id: <#UUID#>, number: <#Int#>, savedAt: <#Date#>, value: <#String#>)
  ]
)
First, the Fact type must be made Equatable, which is easy enough to do:
struct Fact: Codable, Equatable, Identifiable {
  …
}
Second, only some of these properties are easy enough to fill in:
#expect(
  model.favoriteFacts == [
    Fact(
      id: <#UUID#>,
      number: 1,
      savedAt: <#Date#>,
      value: "1 is a good number!"
    )
  ]
)
But what do we do about the id
and savedAt
? They are completely unpredictable, and in our feature code we are reaching out to the uncontrolled UUID and Date initializers:
Fact(id: UUID(), number: count, savedAt: Date(), value: fact),
Whenever you see UUID()
or Date()
in your code you should know that you have an uncontrolled dependency in your code, and it is going to complicate testing. Without controlling these dependencies we really have no choice but to weaken our assertions by just asserting on the bits of data we can predict:
model.favoriteFactButtonTapped()
// #expect(
//   model.favoriteFacts == [
//     Fact(
//       id: <#UUID#>,
//       number: 1,
//       savedAt: <#Date#>,
//       value: "1 is a good number!"
//     )
//   ]
// )
#expect(model.favoriteFacts.count == 1)
#expect(model.favoriteFacts.map(\.number) == [1])
#expect(model.favoriteFacts.map(\.value) == ["1 is a good number!"])
This test passes, and it’s certainly a way to move forward, but it’s also a bit of a bummer. Asserting on just these bits of data individually means that we lose exhaustivity in asserting against the entire fact. What if in the future there is additional data in the fact that could have subtle logic that we want to test? It would be our responsibility to update this assertion to assert on the new data, whereas when asserting like this:
#expect(
  model.favoriteFacts == [
    Fact(
      id: <#UUID#>,
      number: 1,
      savedAt: <#Date#>,
      value: "1 is a good number!"
    )
  ]
)
…we get all of that for free. It forces us to make sure we are asserting on how everything changes in our feature so that we do not miss anything.
Let’s correct this. Let’s add a dependency on the UUID and date generator to our model:
@ObservationIgnored
@Dependency(\.date.now) var now
@ObservationIgnored
@Dependency(\.uuid) var uuid
And then rather than reaching out to the uncontrollable UUID
and Date
initializers we will instead use these dependencies:
$0.insert(
  Fact(id: uuid(), number: count, savedAt: now, value: fact),
  at: 0
)
This makes it possible for us to control how these dependencies behave in certain contexts, such as tests.
For example, we can now use the dependency
test trait to override the uuid
and date
dependencies:
@Test(
  .dependency(FactClient.goodFacts),
  .dependency(\.date.now, …),
  .dependency(\.uuid, …)
)
For the date dependency we can just choose a single date that we want the date.now
dependency to always return:
.dependency(\.date.now, Date(timeIntervalSince1970: 1234567890)),
And for the uuid
dependency the library comes with a special “incrementing” version of the UUID generator that simply returns an ever-incrementing UUID when you invoke it, starting at 0:
.dependency(\.uuid, .incrementing)
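To get a feel for what “incrementing” means, here is a small sketch of the values such a generator produces, along with the integer-based UUID initializer the Dependencies library provides for writing assertions against them:
import Dependencies
import Foundation

// A sketch of the incrementing generator's output.
let generator = UUIDGenerator.incrementing
print(generator())  // 00000000-0000-0000-0000-000000000000
print(generator())  // 00000000-0000-0000-0000-000000000001

// UUID(0), UUID(1), … construct those same values, which is what makes
// assertions like `UUID(0)` below possible.
print(UUID(0) == UUID(uuidString: "00000000-0000-0000-0000-000000000000")!)  // true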
Now we can predict exactly what these dependencies will return when our feature code executes:
#expect(
  model.favoriteFacts == [
    Fact(
      id: UUID(0),
      number: 1,
      savedAt: Date(timeIntervalSince1970: 1234567890),
      value: "1 is a good number!"
    )
  ]
)
And just like that we have a passing test that is exhaustively proving how the state in the favoriteFacts
property changes. If in the future we add more state to the Fact
that has some subtle logic, we will be forced to assert on how it changes here.
Let’s move on with the final bit of behavior in the feature, which is deleting a fact:
model.deleteFacts(indexSet: [0])
#expect(model.favoriteFacts == [])
And this test passes. We now have a test that covers a full user flow in our feature: the user taps the increment button, then gets a fact for that number, then decides to save that fact to their favorites, and finally removes the fact from their list of favorites. And each step of the way we are asserting on how the feature’s state changes.
It may not seem very impressive, but honestly the fact that we can even write a test at all is impressive. Remember that the count
and favoriteFacts
are synchronized to external systems that are out of our control, such as user defaults and the file system. If we had written this in the naive way, by reaching out to those global storage systems directly and mutating them, then it would have been much harder to write these tests.
But with the @Shared
property wrapper we get to mostly just forget that the state is being synchronized to an external system, and instead treat it as regular state. And we personally think that is absolutely incredible.
But we really want to prove it to you. So let’s play around with these tests a bit more in order to show you just how much work @Shared
is doing for us.
We can easily see exactly what it would look like if we had naively interacted with those global, mutable storage systems. We can use the .dependency
trait to run this test with a live user defaults and live file system while still using the “good facts” fact client:
@Test(
  .dependency(\.defaultAppStorage, .standard),
  .dependency(\.defaultFileStorage, .fileSystem),
  …
)
func basics() async {
  …
}
Now when we run the test it instantly fails. The count
is no longer 1 and the favoriteFacts
array now holds a lot more facts. This is happening because we are reading from the same user defaults and file system that the simulator is using, and so our changes in the simulator are bleeding over to the test.
And further, the test is also bleeding over into the simulator. If we run the app in the simulator we will see that the count has been incremented and that a fact was added to our list of favorites. Each time we run this test we will add more and more data to the file that powers the app in the simulator, and that is really annoying.
Now you may be thinking this is no big deal, after all can’t we just clean up the app storage and file storage before the test runs? We could even perform this work in the init
of the test suite, which is kind of like the setUp
we are all familiar with from XCTest:
@Suite
@MainActor
struct FactFeatureTests {
  init() {
    UserDefaults.standard.removeObject(forKey: "count")
    try? FileManager.default.removeItem(
      at: .documentsDirectory.appending(component: "favoriteFacts.json")
    )
  }
  …
}
That cleans those external systems so that we start from scratch, and indeed the test now passes.
But this still isn’t ideal for two reasons. First of all, we have to remember to do this cleaning process. And if in the future we start saving data to another file, or user defaults key, or even some completely different system, we will have to remember to update this test and any other tests to make sure they all start with a clean slate. This was the exact same problem we ran into with previews when we were grappling with uncontrolled dependencies.
Another downside to this approach is that it is erasing all of the data in our simulator…
So if we spent a lot of time building up a certain kind of data in the simulator, then it would only take a single run of this test to wipe it all away.
So, that’s bad, but there is a much more pernicious effect to clearing this state in the init
of the test suite. We can no longer run multiple tests in parallel. Swift Testing greatly differs from XCTest in how it runs its tests by default. XCTest can run tests in parallel, but each test runs in its own process. Swift Testing, on the other hand, runs tests in parallel in the same process. So all tests that are running at the same time are sharing the same resources, including any global state.
And this means if we have two tests in this suite running, they are each going to be sharing the same user defaults and file system. And that means when one test clears out those storage systems, the other test is going to be left in a bad spot.
We can see this concretely by simply copying and pasting our test and renaming it to anotherBasics
:
@Test(
  .dependency(FactClient.goodFacts),
  .dependency(\.defaultAppStorage, .standard),
  .dependency(\.defaultFileStorage, .fileSystem)
)
func anotherBasics() async {
  let model = FactFeatureModel()
  …
}
This new test passes completely fine in isolation, but it will now cause the entire suite to fail. The assertion that the count remained unchanged after tapping the “Get fact” button fails:
await model.factButtonTapped()
#expect(model.count == 1)
Expectation failed: (model.count → 0) == 1
This is happening because while this test suspends to perform the work in factButtonTapped, the other test starts up and removes the “count” key from user defaults. That change is noticed by the @Shared value we are using in this test, and so its value resets back to 0.
This means we simply cannot run these tests in parallel, and it’s because we are reaching out to external global systems that we do not control. We could of course serialize the tests using the .serialized
testing trait:
@Suite(.serialized)
@MainActor
struct FactFeatureTests {
  …
}
…and now the suite passes. But still, even this is not ideal. First of all, it’s a bummer to have to run tests in serial just because our code is written in a suboptimal way. This means our test suite is going to run a lot slower than necessary.
But also, this .serialized
trait only serializes the tests in this one suite. Every other test in the test target will still run in parallel alongside this suite, and those tests could interfere with the user defaults or file system, causing mysterious test failures. In order for this approach to work you will actually need to serialize every test in your entire target that touches user defaults or the file system. And good luck doing that!
So it is actually a superpower of the @Shared
property wrapper that it controls its dependencies under the hood, allowing us to write tests in a very natural way, that run in parallel, and without us having to do extra work to clean up external systems behind us:
@Suite//(.serialized)
@MainActor
struct FactFeatureTests {
  …
  @Test(.dependency(FactClient.goodFacts))
  func basics() async {
    …
  }
  @Test(.dependency(FactClient.goodFacts))
  func anotherBasics() async {
    …
  }
}
These two tests pass, even though they are run in parallel, and it’s all because @Shared
goes to great lengths to control its dependencies and provide each test with its own unique scratch pad of dependencies.
And it’s easy to take this for granted. It works so seamlessly that we may not even realize just how great this is. And for what it’s worth, as far as we can tell there is not a single dependencies library in the Swift community that works with Swift’s native testing framework. They all require one to serialize every test suite to work with Swift Testing because they all reach out to a global storage of dependencies.
Our Dependencies library does not require this, and it’s all thanks to the fact that our library is built on TaskLocals, which quarantine dependencies to tasks instead of relying on global mutable state. When these tests are run in parallel, they are each given their own distinct copies of dependencies. There is no sharing between them whatsoever.
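As a simplified illustration of how task locals isolate values (this is just the language feature the library builds on, not its actual internals), consider:
// A toy example of task-local isolation — not the library's implementation.
enum Current {
  @TaskLocal static var fact = "live fact"
}

func demo() async {
  let taskA = Task {
    Current.$fact.withValue("test A's fact") {
      Current.fact  // "test A's fact" — only this task sees the override
    }
  }
  let taskB = Task {
    Current.fact  // "live fact" — unaffected by task A's binding
  }
  print(await taskA.value, await taskB.value)
}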
We can even do something fun by printing the test name after each line is executed in each test:
@Test(.dependency(FactClient.goodFacts))
@MainActor
func basics() async {
  let model = FactFeatureModel(); print(#function)
  model.incrementButtonTapped(); print(#function)
  #expect(model.count == 1); print(#function)
  await model.factButtonTapped(); print(#function)
  #expect(model.count == 1); print(#function)
  #expect(model.fact == "1 is a good number!"); print(#function)
  model.favoriteFactButtonTapped(); print(#function)
  #expect(model.favoriteFacts.map(\.number) == [1]); print(#function)
  #expect(model.favoriteFacts.map(\.value) == ["1 is a good number!"]); print(#function)
  model.deleteFacts(indexSet: [0]); print(#function)
  #expect(model.favoriteFacts == []); print(#function)
}

@Test(.dependency(FactClient.goodFacts))
@MainActor
func anotherBasics() async {
  let model = FactFeatureModel(); print(#function)
  model.incrementButtonTapped(); print(#function)
  #expect(model.count == 1); print(#function)
  await model.factButtonTapped(); print(#function)
  #expect(model.count == 1); print(#function)
  #expect(model.fact == "1 is a good number!"); print(#function)
  model.favoriteFactButtonTapped(); print(#function)
  #expect(model.favoriteFacts.map(\.number) == [1]); print(#function)
  #expect(model.favoriteFacts.map(\.value) == ["1 is a good number!"]); print(#function)
  model.deleteFacts(indexSet: [0]); print(#function)
  #expect(model.favoriteFacts == []); print(#function)
}
And when we run the test and inspect the logs we will see that these tests are indeed interleaving:
◇ Test run started.
↳ Testing Library Version: 102 (arm64-apple-ios13.0-simulator)
◇ Suite FactFeatureTests started.
◇ Test anotherBasics() started.
◇ Test basics() started.
anotherBasics()
anotherBasics()
anotherBasics()
basics()
basics()
basics()
anotherBasics()
anotherBasics()
anotherBasics()
anotherBasics()
anotherBasics()
anotherBasics()
anotherBasics()
basics()
✔ Test anotherBasics() passed after 0.020 seconds.
basics()
basics()
basics()
basics()
basics()
basics()
✔ Test basics() passed after 0.022 seconds.
✔ Suite FactFeatureTests passed after 0.022 seconds.
✔ Test run with 2 tests passed after 0.022 seconds.
And so if dependencies were not quarantined to each test we would run the risk of one test making changes that the other test can see. But luckily that is not the case.
We have now seen that using the fileStorage
strategy with our @Shared
property wrapper does not affect the testability of your features one bit. The mutations to this state in tests will not bleed over from test to test or to the simulator. You can even run multiple tests in parallel in the same process, like what Swift Testing does by default. And we feel that our Dependencies library is pretty much the only library out there that even allows this.
There is one more topic to discuss before ending this tour of our new Sharing library. There is a 3rd persistence strategy that ships with the library, and it is called inMemory
. It isn’t going to be as useful as the appStorage
or fileStorage
strategies, but it does have its place.
The inMemory
strategy allows you to hold a piece of state that is accessible globally in your entire application such that it will be reset back to its default when the app is killed and relaunched. It is appropriate for data that you want accessible everywhere, but that doesn’t need to be persisted. And you may wonder why you wouldn’t just use a global variable for that. But mutable globals in Swift are no longer concurrency safe, and so won’t even compile in Swift 6 mode without extra work. And further, our inMemory
strategy is also friendly to testing, so that multiple running tests will not trample over each other while reading from and writing to the global state.
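For a sense of the difference, here is a small sketch (the EventTracker type is hypothetical, not part of the tour’s app):
import Sharing

// A nonisolated global mutable variable like this is rejected in Swift 6
// language mode because it is not concurrency-safe:
// var globalEvents: [String] = []

// In contrast, shared in-memory state is safe to read and mutate from anywhere:
struct EventTracker {
  @Shared(.inMemory("events")) var events: [String] = []

  func track(_ event: String) {
    $events.withLock { $0.append(event) }
  }
}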
Let’s take a quick look.
We are going to add a very simple debug feature to our app just to show off what the inMemory
strategy is capable of. Suppose we wanted to keep a temporary list of events for our app that we did not need to persist. It would just be a collection of strings that we could reference at any time to see what events had been tracked.
We could represent this as a @Shared
array of events that uses the inMemory
key:
@Observable
@MainActor
class FactFeatureModel {
  @ObservationIgnored
  @Shared(.inMemory("events")) var events: [String] = []
  …
}
And if we don’t want to have to type out all of this information every time we could of course define a type-safe key that can be used more easily:
extension SharedKey where Self == InMemoryKey<[String]>.Default {
  static var events: Self {
    Self[.inMemory("events"), default: []]
  }
}
Now we can do the following:
@ObservationIgnored
@Shared(.events) var events
With this defined we can start tracking events, such as when the “Increment” button is tapped:
func incrementButtonTapped() {
  $events.withLock { $0.append("Increment Button Tapped") }
  …
}
Or the “Decrement” button:
func decrementButtonTapped() {
  $events.withLock { $0.append("Decrement Button Tapped") }
  …
}
The “Get Fact” button:
func factButtonTapped() async {
  $events.withLock { $0.append("Get Fact Button Tapped") }
  …
}
The favoriting button:
func favoriteFactButtonTapped() {
  $events.withLock { $0.append("Favorite Fact Button Tapped") }
  …
}
And finally the delete button:
func deleteFacts(indexSet: IndexSet) {
  $events.withLock { $0.append("Delete Fact") }
  …
}
This will mutate a piece of state that is shared globally with the entire app, and it can be accessed and mutated from anywhere. But, this state is completely safe to use from multiple threads, thanks to the withLock
method.
We can create some UI for displaying these events. Let’s add some local state to our view for displaying a sheet:
@State var eventsPresented = false
Then we’ll add a button to the view for flipping this state to true
:
.toolbar {
  ToolbarItem {
    Button("Events") { eventsPresented = true }
  }
}
We can present a view when the boolean flips to true
:
.sheet(isPresented: $eventsPresented) {
  EventsView()
}
And this EventsView
is quite easy to implement:
struct EventsView: View {
  @Shared(.events) var events
  var body: some View {
    Form {
      ForEach(events.reversed(), id: \.self) { event in
        Text(event)
      }
    }
  }
}
Notice that we are using the @Shared(.events)
right in the view, even though previously the only place we used it was in an observable model. This shows why it is so powerful to be able to use the tools anywhere. We aren’t forced to keep everything in the view or keep everything in observable models. We can choose to hold onto state in the way that makes most sense for us, and right now this EventsView
is so simple we might as well just hold onto it directly in the view.
That’s all it takes, and our feature works exactly as we would expect. We can run it in the simulator, tap around on a few things, and then confirm that our events were tracked.
It would even be possible to write tests for this. At the end of our existing test we can simply assert on how we expect the events
array to look:
#expect(
  model.events == [
    "Increment Button Tapped",
    "Get Fact Button Tapped",
    "Favorite Fact Button Tapped",
    "Delete Fact",
  ]
)
This assertion passes just fine.
We can even copy-and-paste this assertion over to the anotherBasics
test…
And running the suite together also passes. This is amazing to see because remember these tests are running in parallel and the events data is a global mutable blob of state. Both tests are reading from and writing to global state, but thanks to how the inMemory
strategy was designed, each test gets its own unique blob of mutable state. They are not actually seeing the same shared state. And so there is no concern of these tests trampling on each other if they write to the shared state.
And that is the conclusion to our tour of our new Sharing library. We have seen that the @Shared
property wrapper is a tool for sharing state with many parts of your app, as well as with external storage systems. The library even comes with 3 important strategies right out of the box: appStorage
, which uses user defaults, fileStorage
, which stores data as bytes on the disk, and inMemory
, which stores state only in memory that will be cleared out when the app is killed and relaunched.
The @Shared
property wrapper, and these 3 persistence strategies, can be used basically anywhere in your app. They can be used directly in a SwiftUI view, or used in an @Observable
model, or in a UIKit view controller, or in some random helper function you have squirreled away in your code base. You no longer have to stratify your persisted data into two different worlds, where you get a nice modern API when working inside a SwiftUI view, but have to deal with older legacy APIs everywhere else.
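For example, here is a quick sketch of a hypothetical UIKit screen (not part of the tour’s app) that reads and writes the very same shared state the SwiftUI feature uses, with no bridging layer required:
import Sharing
import UIKit

final class CounterViewController: UIViewController {
  // The same "count" that the SwiftUI feature accesses via @Shared(.appStorage("count")).
  @Shared(.appStorage("count")) var count = 0

  @objc func incrementButtonTapped() {
    $count.withLock { $0 += 1 }  // visible to the SwiftUI feature immediately
  }
}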
And amazingly, even though these persistence strategies interact with outside systems out of our control, your code remains testable each step of the way. You can invoke methods on your model and assert how state changes, but those changes will be quarantined to the test running. Those changes will not be visible to the simulator, nor will they be visible to other tests running in parallel or running later.
And if you can believe it, what we have covered so far really only scratches the surface of what the Sharing library has to offer. One of its most powerful features is the ability for you to create your own persistence strategies. The most obvious one being SQLite, where you can hold onto state in your feature in a simple manner, but secretly under the hood the data is being persisted to a SQLite database.
And there’s even more exotic forms of persistence strategies. For example, you could have a strategy that keeps a piece of data in sync with an external server. This would be great for feature flags and A/B tests. You could flip a setting on your server and have it immediately propagate to every app install, instantly.
But all of that will have to wait.
Until next time!