Table of Contents
...
Slices and maps contain pointers to the underlying data, so be conscious of ownership when passing them as parameters or returning them as values.
On the other hand, do not copy slices unnecessarily: heap allocations are expensive and we want to avoid them when possible.
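As a minimal sketch of the ownership point (the `Config` type, its `settings` field, and the `Settings` method here are hypothetical, invented purely for illustration), returning the internal map directly would let callers mutate state the owner still relies on, whereas copying at the boundary keeps ownership clear:

```go
package main

import "fmt"

// Config is a hypothetical type that owns a map of settings.
type Config struct {
	settings map[string]string
}

// Settings returns a copy of the internal map so callers cannot
// mutate the Config's own state through the returned value.
func (c *Config) Settings() map[string]string {
	out := make(map[string]string, len(c.settings))
	for k, v := range c.settings {
		out[k] = v
	}
	return out
}

func main() {
	c := &Config{settings: map[string]string{"region": "us-east1"}}
	s := c.Settings()
	s["region"] = "mutated"           // only the copy changes
	fmt.Println(c.settings["region"]) // still "us-east1"
}
```

Whether to copy or to hand out the internal map is a deliberate ownership decision; the point is to make that decision consciously rather than by accident.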
...
The goal of every test is to observe some behavior of some functionality and assert that it matches the expected behavior of that functionality. In particular, the test is committed and then re-run on every PR, so that if any later PR causes unintended changes to that behavior (e.g. a modification to the functionality itself, to a library it uses, or a refactor), the committed test will detect that change and fail, alerting those making the modifications to a potential regression.
However, one thing a good test generally wants to avoid, but which can happen very easily when writing data-driven tests, is observing and asserting behavior beyond the actual functionality being tested, by including overly broad observations and assertions in the test. Doing so makes the test more brittle, adds toil to maintaining it, and may ultimately defeat the actual purpose of the test completely: detecting regressions and stopping them from being merged unintentionally.
For a contrived example, a test verifying the behavior of basic addition in some library might check that `1 + 1` equals the expected output `2`, which is fine. But if that test did so by, say, including the result in a string that for some reason also included the current operating system version, e.g. “MacOS 15.2”, and comparing that string, then the test would be quite brittle: different engineers running it on different hosts and OS versions would get different results, and the test would need to be updated every time MacOS is upgraded, even though no change to addition in the library was made; it has become sensitive to unrelated factors outside the behavior it wanted to test. At best, constantly updating this test due to environmental changes outside the library adds unnecessary toil and friction. Worse yet, engineers would likely become habituated to blindly updating the test to make it pass every time it fails, as its failures and the need to update it become “expected”. Once that happens, any real, genuine regressions risk being missed, because the test is simply, but now mistakenly, updated as a matter of course to reflect whatever new output is observed, since that has become the norm.
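As a rough sketch of the difference (the `addlib` package and its `Add` function are hypothetical, and the brittle variant uses the OS name rather than the version for simplicity), the brittle test folds unrelated environment details into the asserted value, while the focused test asserts only the behavior under test:

```go
package addlib

import (
	"fmt"
	"runtime"
	"testing"
)

// Add stands in for the hypothetical library function under test.
func Add(a, b int) int { return a + b }

// Brittle: the asserted string depends on the host OS, not just on Add,
// so it breaks whenever the environment changes.
func TestAddBrittle(t *testing.T) {
	got := fmt.Sprintf("1 + 1 = %d on %s", Add(1, 1), runtime.GOOS)
	if got != "1 + 1 = 2 on darwin" {
		t.Fatalf("unexpected output: %q", got)
	}
}

// Focused: the assertion covers only the behavior being tested.
func TestAddFocused(t *testing.T) {
	if got := Add(1, 1); got != 2 {
		t.Fatalf("Add(1, 1) = %d, want 2", got)
	}
}
```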
The above is of course an absurd and very clearly contrived example, but more subtle cases like this arise often, particularly when working with logictests and --rewrite,
where a query or a trace thereof might include a table ID or range ID – making the test depend on factors like the total number of tables – or might enumerate all tables to verify an introspection utility.
A well-written logic test should choose its SELECT statement and its WHERE clause so that it observes and asserts only output that is directly relevant to the tested behavior and fully specified by the test’s constructed conditions.
For example, if testing a builtin table that returns the names of all columns in each table in the cluster, one would want to select only the rows for tables the test itself created, or perhaps also include specific, intentionally selected system
or internal tables to further verify that the functionality correctly handles system and internal tables. Such a test, however, would not want to just select every system and internal table; doing so would produce output determined by the total set of all system tables and the columns of each, not just the deliberately constructed and chosen test conditions, making the test overly broad in its assertions and brittle, much like the contrived example above.
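As a hedged sketch of what such a narrowed test might look like (the table name `t1`, the choice of `information_schema.columns` as the introspection target, and the exact expected-output formatting are illustrative assumptions, not a prescription), the WHERE clause restricts the output to a table the test itself created, so the expected results are fully determined by the test's own setup:

```
statement ok
CREATE TABLE t1 (a INT PRIMARY KEY, b STRING)

query TT
SELECT table_name, column_name
FROM information_schema.columns
WHERE table_name = 't1'
ORDER BY column_name
----
t1  a
t1  b
```

Dropping the WHERE clause would make the expected output enumerate every column of every system and internal table, which is exactly the kind of overly broad assertion described above.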
...