Xcode 12.5 Introduces a New Way to Handle Expected Unit Test Failures
In my previous blog post, I wrote about a couple of ways to either suppress or skip expected test failures. In Xcode 12.5, Apple introduced a new API, XCTExpectFailure, that provides a better way to handle expected (and hopefully temporary and fixable in the near future) test failures. In today's blog post, we will go over it.
Please note that this API is only available in Xcode 12.5 and later, so if you're using an older Xcode, you might want to upgrade it before trying out this tutorial.
For this guide, we will write unit tests for the function separateZerosAndOnes, which is responsible for separating 0's and 1's in a given array.
func testThatFunctionSeparatesZerosAndOnes() {
    var input = [0, 1, 0, 1, 1, 1, 0]
    let utility = UtilityClass()
    utility.separateZerosAndOnes(input: &input)
    XCTAssertEqual(input, [1, 0, 0, 0, 1, 1, 1])
}
Unfortunately, our test is failing for an unknown reason, and we don't have enough time to debug it. So we will use XCTExpectFailure to let Xcode know that this is an expected failure. We do this by calling the XCTExpectFailure API anywhere before the assertion. If the test fails, the failure is indicated with a gray icon, but it is suppressed and treated as if the test was ignored.
func testThatFunctionSeparatesZerosAndOnes() {
    XCTExpectFailure("<https://jira.com/mycompany/34555> Test failed due to bug in function which separated out 0's and 1's in the given array")
    var input = [0, 1, 0, 1, 1, 1, 0]
    let utility = UtilityClass()
    utility.separateZerosAndOnes(input: &input)
    XCTAssertEqual(input, [1, 0, 0, 0, 1, 1, 1])
}
If you run the test, it shows the expected failure on the failing line, but since it's marked with a gray icon, it's not a hard failure, and the build can still proceed to merge.
Wrapping a Failing Test in an XCTExpectFailure Closure
The XCTExpectFailure API also allows wrapping the failing code in a closure so that only failures happening inside the closure are reported as expected, while everything outside is treated in the standard way.
func testThatFunctionSeparatesZerosAndOnes() {
    var input = [0, 1, 0, 1, 1, 1, 0]
    let utility = UtilityClass()
    utility.separateZerosAndOnes(input: &input)
    XCTExpectFailure("<https://google.com> Test failed due to bug in function which separated out 0's and 1's in the given array") {
        XCTAssertEqual(input, [1, 0, 0, 0, 1, 1, 1])
    }
    XCTAssertEqual(input, [0, 0, 0, 1, 1, 1, 1])
}
In this example, since we wrap the first assertion inside the XCTExpectFailure closure, its failure is regarded as expected and reported as such. In contrast, if any failure occurs outside of the closure (for example, in the final assertion), it is reported as a hard failure blocking the merge.
Using Issue Matching with XCTExpectFailure
XCTExpectFailure allows granular control over which failures are treated as expected when the test fails. The XCTExpectFailure API accepts an XCTExpectedFailure.Options instance that specifies an issue-matching filter. This filter indicates which failures should be handled as expected; failures that do not match the filter are reported as ordinary test failures.
Let's say we want to use the XCTExpectFailure API only when an assertion fails. If the test fails with an assertion failure, it will be regarded as an expected failure; otherwise, it will be raised as a standard test failure.
Consider the following method, which separates out 0's and 1's from the passed array of integers and throws an error if the passed array is empty.
func separateZerosAndOnes(input: inout [Int]) throws {
    guard !input.isEmpty else {
        throw AlgorithmError.emptyArray
    }
    var low = 0
    var high = input.count - 1
    while low < high {
        if input[low] == 1 {
            input[low] = 0
            input[high] = 1
            high -= 1
        } else {
            low += 1
        }
    }
}
Our job is to write some tests for this method.
func testThatFunctionSeparatesZerosAndOnes() throws {
    var input = [0, 1, 0, 1, 1, 1, 0]
    let expectFailureOptions = XCTExpectedFailure.Options()
    expectFailureOptions.issueMatcher = { issue in
        return issue.type == .assertionFailure
    }
    try UtilityClass().separateZerosAndOnes(input: &input)
    XCTExpectFailure("<https://google.com> Test failed due to bug in function which separated out 0's and 1's in the given array", options: expectFailureOptions)
    XCTAssertEqual(input, [0, 0, 0, 1, 1, 1, 1])
}
In the above example, I expect failures only from the XCTAssertEqual call. If the test fails for any other reason, we want it reported explicitly.
To enforce this behavior, I am using XCTExpectedFailure.Options. In these options, I use the issue matcher to specify which type of issues the XCTExpectFailure API should handle, so that only failures occurring for that reason are treated as expected.
In the above example, I know separateZerosAndOnes is buggy, so I have specified that I expect all failures resulting from assertion failures. But if the test fails for reasons other than this, such failures need urgent attention and should be fixed.
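Since we know the method is buggy, here is a minimal standalone sketch (plain Swift, outside XCTest) that runs it against the sample input and shows the wrong output the assertion trips on; the inline comment marks the bug:

```swift
enum AlgorithmError: Error { case emptyArray }

func separateZerosAndOnes(input: inout [Int]) throws {
    guard !input.isEmpty else { throw AlgorithmError.emptyArray }
    var low = 0
    var high = input.count - 1
    while low < high {
        if input[low] == 1 {
            // Bug: overwrites input[high] instead of swapping,
            // so the original value at input[high] is lost.
            input[low] = 0
            input[high] = 1
            high -= 1
        } else {
            low += 1
        }
    }
}

var input = [0, 1, 0, 1, 1, 1, 0]
try separateZerosAndOnes(input: &input)
print(input) // [0, 0, 0, 0, 1, 1, 1] -- four zeros, but the input only had three
```

Because the buggy branch drops a value, the result ends up with one 0 too many, so the assertion against [0, 0, 0, 1, 1, 1, 1] fails every time.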
Issue types not specified in the issue matcher
What happens if the test fails for a reason other than assertionFailure? For example, if the test throws an error, that won't be considered an expected failure and will be explicitly reported.
The method separateZerosAndOnes throws an error if an empty array is passed. If we pass an empty array and the method throws, the failure will be explicitly reported, since we did not specify the thrown-error condition in the issueMatcher of the passed XCTExpectedFailure.Options.
But if we broadened the issueMatcher predicate to also accept issue.type == .thrownError, the thrown error would be treated as an expected failure as well.
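To illustrate the broadened predicate without spinning up a test run, here is a small stand-in sketch; IssueKind is a hypothetical enum standing in for XCTIssue.IssueType, and the closure mirrors what you would assign to XCTExpectedFailure.Options().issueMatcher:

```swift
// Hypothetical stand-in for XCTIssue.IssueType, just to illustrate the predicate.
enum IssueKind { case assertionFailure, thrownError, uncaughtException }

// Broadened matcher: treat both assertion failures and thrown errors as expected.
let matcher: (IssueKind) -> Bool = { kind in
    kind == .assertionFailure || kind == .thrownError
}

print(matcher(.assertionFailure))  // true  -> expected failure
print(matcher(.thrownError))       // true  -> expected failure
print(matcher(.uncaughtException)) // false -> reported as a real failure
```

Any issue type the predicate rejects still surfaces as a standard, merge-blocking failure.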
Analyzing Test Report
Finally, we will analyze the test report to see what interesting information we can find. If you look at the failure reason we passed to XCTExpectFailure, we also pass a URL enclosed in angle brackets. It refers to the ticket URL tracking the test failure so that engineers can follow up and fix it.
var input = [0, 1, 0, 1, 1, 1, 0]
XCTExpectFailure("<https://google.com> Test failed due to bug in function which separated out 0's and 1's in the given array", options: expectFailureOptions)
try UtilityClass().separateZerosAndOnes(input: &input)
XCTAssertEqual(input, [0, 0, 0, 1, 1, 1, 1])
Run the test and jump to the test report. Under the report, if you select the Expected Failures tab, you will see the failing test along with a little Bug icon. Clicking this icon takes you to the ticket URL passed in the failure reason.
That way, you not only track the failing tests which need a fix, but can also jump directly to the ticket created to track the failure.
What happens if tests marked as expected failures suddenly start passing?
Tests are marked with XCTExpectFailure when we know they consistently fail. This lets developers see which tests are failing while marking those failures as non-blocking for merge. But what happens if someone suddenly fixes the known bug? Tests marked with the expected-failure annotation will then explicitly start failing and become merge-blocking.
If you know a failing test was intentionally fixed, you might want to get rid of the XCTExpectFailure around it so that we can always expect it to pass.
Do this only if you understand what made the test suddenly pass. If you're unsure, it's probably a flaky test which passes now but will fail at a later time, so watch out for any signs of flakiness before refactoring further.
In the above example, if I fix the known bug in the separateZerosAndOnes method, I can refactor the test to something like this:
Fixing a Bug
while low < high {
    if input[low] == 1 {
        let temp = input[low]
        input[low] = input[high]
        input[high] = temp
        high -= 1
    } else {
        low += 1
    }
}
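As a quick sanity check, here is a standalone sketch (plain Swift, outside XCTest) of the fixed method, confirming it now separates the sample input correctly and still throws for an empty array; AlgorithmError is reused from the article's snippet:

```swift
enum AlgorithmError: Error { case emptyArray }

// The fixed method: a proper swap instead of overwriting input[high].
func separateZerosAndOnes(input: inout [Int]) throws {
    guard !input.isEmpty else { throw AlgorithmError.emptyArray }
    var low = 0
    var high = input.count - 1
    while low < high {
        if input[low] == 1 {
            let temp = input[low]
            input[low] = input[high]
            input[high] = temp
            high -= 1
        } else {
            low += 1
        }
    }
}

var input = [0, 1, 0, 1, 1, 1, 0]
try separateZerosAndOnes(input: &input)
print(input) // [0, 0, 0, 1, 1, 1, 1]

// The empty-array error path still throws as before.
do {
    var empty: [Int] = []
    try separateZerosAndOnes(input: &empty)
} catch {
    print("threw: \(error)")
}
```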
Refactoring Test Code
func testThatFunctionSeparatesZerosAndOnes() throws {
    var input = [0, 1, 0, 1, 1, 1, 0]
    try UtilityClass().separateZerosAndOnes(input: &input)
    XCTAssertEqual(input, [0, 0, 0, 1, 1, 1, 1])
}
As you can see, the test now passes and there are no failures of any kind.
Summary
To summarize, it's a welcome change to be able to suppress known failures temporarily before eventually fixing them. As I described in the last post, Apple already had a couple of XCTest APIs to handle known failures, but they improved the flow by introducing an additional API.
The new API has a few benefits. First, it creates a dedicated tab showing expected test failures, and second, it provides a way to link and jump directly to the ticket tracking the issue affecting the broken test. That way, we not only get visibility in the CI environment, but can also pull additional context from the existing ticket.
I also like how it delivers additional granularity by letting us handle expected failures in the context of closures, or pass additional options that control which kinds of failures are marked as expected. No doubt you can use it to significantly improve test failure visibility and tracking, and to save time by temporarily marking failing tests as such until they are fixed.