How to ensure your tests can run in the Bazel sandbox
See also: How to ensure your code builds with Bazel
This documentation is a work in progress; feel free to add any useful stuff that you may have uncovered.
Background
When Bazel executes tests via bazel test, its behavior differs from a standard go test in several important ways. One of the most prominent is its usage of a “sandbox”; when Bazel runs a test, it stages all the tests and the data that they need (the “runfiles”) in a special, sandboxed directory. This directory is entirely managed by Bazel, does not have a path that is statically known, and isn’t inside your checkout. By design, this makes it difficult to find artifacts that you haven’t told Bazel you depend on. This is good! If your test depends on the presence of a file or tool, that should be declared explicitly – but it does take a little work to do it right.
At the time of writing, our Bazel builds and tests in CI all occur in a bespoke Docker image, cockroachdb/bazel. This image is different from the normal builder image and, notably, has a lot less stuff installed on it. This is intentional, and makes it substantially easier to identify missing dependencies.
Reproducing a broken test
A test failure can be reproduced locally with bazel test; for example, if the package in question is pkg/path/to/dir, you can run bazel test pkg/path/to/dir:all or, equivalently, ./dev test pkg/path/to/dir. You may also want to set the --test_env=GO_TEST_WRAP_TESTV=1 command-line flag, which causes the tests to be more verbose (like go test's -test.v flag). Consider running the test in the cockroachdb/bazel Docker image to avoid a scenario where you get a test working locally but it fails in CI; to run the builder image, use ./dev builder.
Failed tests will be displayed prominently:
cockroach$ bazel test pkg/internal/rsg/yacc:yacc_test --test_env=GO_TEST_WRAP_TESTV=1 --verbose_failures
INFO: Analyzed target //pkg/internal/rsg/yacc:yacc_test (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
FAIL: //pkg/internal/rsg/yacc:yacc_test (see /private/var/tmp/_bazel_ricky/be70b24e7357091e16c49d70921b7985/execroot/cockroach/bazel-out/darwin-fastbuild/testlogs/pkg/internal/rsg/yacc/yacc_test/test.log)
Target //pkg/internal/rsg/yacc:yacc_test up-to-date:
bazel-bin/pkg/internal/rsg/yacc/yacc_test_/yacc_test
INFO: Elapsed time: 0.268s, Critical Path: 0.11s
INFO: 2 processes: 1 internal, 1 darwin-sandbox.
INFO: Build completed, 1 test FAILED, 2 total actions
//pkg/internal/rsg/yacc:yacc_test FAILED in 0.1s
/private/var/tmp/_bazel_ricky/be70b24e7357091e16c49d70921b7985/execroot/cockroach/bazel-out/darwin-fastbuild/testlogs/pkg/internal/rsg/yacc/yacc_test/test.log
INFO: Build completed, 1 test FAILED, 2 total actions
The test’s logs are in the file that the output mentions; if you find it more convenient, you can find the same logs in the _bazel/testlogs directory in your workspace (in this case, that file would be at _bazel/testlogs/pkg/internal/rsg/yacc/yacc_test/test.log).
Cookbook
Missing files and data
See cockroachdb/cockroach PR #61796 (“bazel: make `pkg/internal/rsg/yacc` test runnable in Bazel”) for an example of the concepts discussed here.
Many tests that succeed in go test but fail in Bazel will do so because a file can’t be found in the sandbox. Generally, you solve these problems by declaring the dependency in the test’s BUILD.bazel file, and then looking the file up at the correct path in the test.
To make sure that a file is captured in a test’s runfiles, add it to the data attribute of the _test target in the BUILD.bazel file, for example:
go_test(
    name = "yacc_test",
    ...
    data = ["//pkg/sql/parser:sql.y"],  # NEW
)
Any string you add to data must be a valid label. You can add anything to a target’s data – source code, generated code, binary data, etc. You may find glob useful, since it allows you to dynamically capture a set of files under a directory, as in the sketch below.
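For instance, here is a minimal sketch (the target name and paths are illustrative, not taken from a real BUILD.bazel file) of capturing every file under a testdata/ directory with glob:

go_test(
    name = "example_test",
    ...
    # Capture all files under testdata/ (recursively) in the test's runfiles.
    data = glob(["testdata/**"]),
)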
Generally, data files may end up in an unpredictable location, so you will probably have to dynamically determine the locations of any files you need. We have a few helper libraries to make this easy for you:
The helper library at pkg/build/bazel provides some general-purpose utilities:

BuiltWithBazel() returns true iff this test was built with Bazel. (The other functions in this library will panic unless this returns true, so always check that you’ve BuiltWithBazel() first.) You can use this to perform some Bazel-only logic, for example:

if bazel.BuiltWithBazel() {
    // Bazel-only logic
} else {
    // `go test`-only logic
}

Runfile() returns the absolute path to a file you’ve depended on (see the usage sketch after this list). See pkg/build/bazel/bazel.go (and the library that this one wraps, at vendor/github.com/bazelbuild/rules_go/go/tools/bazel) for full documentation.

If your test contains a testdata/ directory, you can use testutils.TestDataPath() to easily get a path to any file in testdata – for example, testutils.TestDataPath(t, "a.txt") locates the file testdata/a.txt. Note that Gazelle auto-generates a dependency on the testdata files for you in this case.
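Putting these together, here is a minimal sketch of how a test might use these helpers. The package name and file paths are illustrative, and the exact Runfile() signature is an assumption here – consult pkg/build/bazel/bazel.go for the authoritative API:

package mypkg_test // illustrative package name

import (
	"os"
	"testing"

	"github.com/cockroachdb/cockroach/pkg/build/bazel"
	"github.com/cockroachdb/cockroach/pkg/testutils"
)

func TestReadDeclaredFiles(t *testing.T) {
	if bazel.BuiltWithBazel() {
		// Resolve a runfile that was declared in this test's `data` attribute.
		path, err := bazel.Runfile("pkg/sql/parser/sql.y") // assumed (string, error) signature
		if err != nil {
			t.Fatal(err)
		}
		if _, err := os.ReadFile(path); err != nil {
			t.Fatal(err)
		}
	}

	// Works under both Bazel and plain `go test`, as long as testdata/a.txt exists.
	if _, err := os.ReadFile(testutils.TestDataPath(t, "a.txt")); err != nil {
		t.Fatal(err)
	}
}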
Using the Go toolchain in the sandbox
Some tests (namely lints) may require calling into the go toolchain from within the test itself. We have a helper function to make this easy (make sure to add @go_sdk//:files to your data). This function sets GOROOT and updates the PATH so that the go binary that Bazel used to compile your code can be found.
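The helper itself isn’t reproduced here; as a rough, hypothetical sketch only (the function name setGoEnv and the runfiles path below are made up for illustration – see pkg/build/bazel for the real helper), it does something along these lines:

package lint // illustrative package name

import (
	"os"
	"path/filepath"

	"github.com/cockroachdb/cockroach/pkg/build/bazel"
)

// setGoEnv is a hypothetical sketch, not the real helper: it locates the
// hermetic `go` binary staged in the runfiles via the @go_sdk//:files data
// dependency, points GOROOT at that SDK, and prepends its bin/ dir to PATH.
func setGoEnv() error {
	goBin, err := bazel.Runfile("external/go_sdk/bin/go") // runfiles path is an assumption
	if err != nil {
		return err
	}
	if err := os.Setenv("GOROOT", filepath.Dir(filepath.Dir(goBin))); err != nil {
		return err
	}
	return os.Setenv("PATH", filepath.Dir(goBin)+string(os.PathListSeparator)+os.Getenv("PATH"))
}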
Build configuration issues
The make build (and the way that it’s triggered in CI) is more or less the “source of truth” for how Cockroach is built in different configurations, including tests. It’s possible that something’s been missed in how that build configuration has been “translated” to Bazel. If you suspect the build configuration needs to be changed in some way, contact a member of the Dev-Inf team.
geos
Some tests require the geos library. You can declare a dependency on geos by adding "//c-deps:libgeos" to the data for your tests.
Other concerns
Test size
In Bazel, all tests can be annotated with a “size”. This implies a default timeout (you can also set a custom timeout). We have four test suites, one for each size – //pkg:small_tests, //pkg:medium_tests, //pkg:large_tests, and //pkg:enormous_tests. The test suites each run all the tests in-tree of the given size. //pkg:all_tests contains all of the tests from the four test suites above.
Make sure your test is appropriately sized when you run it. Bazel will tell you how long it takes your test to run – as a rule of thumb, small tests should be < 30s (< 10s is better!), medium < 2m, large < 5m, and enormous anything longer.
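Sizes are declared on the test target itself; for example, here is a sketch (target name illustrative) of a go_test annotated as small:

go_test(
    name = "yacc_test",
    ...
    size = "small",  # implies the default timeout for small tests
)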