The fact that the test suite passes proves that only two tests in the entire suite require analytics: the tests for clearing todos and deleting todos. If we were to start instrumenting more parts of our application we would instantly get feedback on which tests need to be updated. We wouldn’t need to hunt them down or audit the entire test suite to figure out where we should be further asserting on analytics events.
This means that if you come back to this code six months from now to add some more analytics, you won’t even have to think about which tests need to be updated. You can just run the entire suite and see which test cases trip up on the unimplemented client. That’s the power of being explicit and exhaustive about which dependencies your test cases actually use.
However, there is one suboptimal aspect of what we have done so far: when an unimplemented dependency is used it crashes the whole test suite. No other test will run, and that’s going to be really annoying in practice. In a long test suite it takes just a single failure to stop the entire run, and we’ll have no idea which other tests failed until we fix the first one.
So having the unimplemented dependencies was a nice way to get our feet wet with the concept of exhaustively describing our dependencies, but can we do better? Yes we can, but it comes with a few new complications that have to be worked out.
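To recap the problem concretely, here is a minimal sketch of the fatalError-style approach. The `AnalyticsClient` shape and names here are illustrative, not the exact code from earlier:

```swift
struct AnalyticsClient {
  var track: (String) -> Void
}

extension AnalyticsClient {
  // The moment any test exercises this endpoint, fatalError traps and
  // terminates the entire test process, so no later tests get to run.
  static let unimplemented = AnalyticsClient(
    track: { _ in fatalError("AnalyticsClient.track is unimplemented.") }
  )
}
```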
What if instead of doing a `fatalError` inside each endpoint of our dependency we put in an `XCTFail`? That would make our test fail, while also letting the rest of the suite run.
Let’s try it out with our simplest dependency, the `UUID` initializer.
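Here’s one way that could look, as a sketch under the assumption that the dependency is modeled as a `() -> UUID` closure; the placeholder value and the `unimplemented` name are hypothetical:

```swift
import Foundation
import XCTest

extension UUID {
  // A test-only UUID generator: calling it records a test failure via
  // XCTFail, but because XCTFail returns Void while the endpoint must
  // still produce a UUID, it also returns a recognizable placeholder so
  // the rest of the suite can keep running.
  static let unimplemented: () -> UUID = {
    XCTFail("UUID dependency is unimplemented.")
    return UUID(uuidString: "DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF")!
  }
}
```

This immediately surfaces one of the complications alluded to above: unlike `fatalError`, which returns `Never`, `XCTFail` returns `Void`, so we are forced to conjure a return value for the endpoint after registering the failure.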