1// Copyright (C) 2022 The Qt Company Ltd.
2// Copyright (C) 2016 Intel Corporation.
3// SPDX-License-Identifier: LicenseRef-Qt-Commercial OR GFDL-1.3-no-invariants-only
4
5/*!
6 \page qtest-overview.html
7 \title Qt Test Overview
8 \brief Overview of the Qt unit testing framework.
9
10 \ingroup frameworks-technologies
11 \ingroup qt-basic-concepts
12
13 \keyword qtestlib
14
 15    Qt Test is a framework for unit testing Qt-based applications and libraries.
 16    It provides
 17    all the functionality commonly found in unit testing frameworks, as
 18    well as extensions for testing graphical user interfaces.
19
 20    Qt Test is designed to ease the writing of unit tests for Qt-based
 21    applications and libraries:
22
23 \table
24 \header \li Feature \li Details
25 \row
26 \li \b Lightweight
27 \li Qt Test consists of about 6000 lines of code and 60
28 exported symbols.
29 \row
30 \li \b Self-contained
31 \li Qt Test requires only a few symbols from the Qt Core module
32 for non-gui testing.
33 \row
34 \li \b {Rapid testing}
 35        \li Qt Test needs no special test-runners and no special
 36        test registration.
37 \row
38 \li \b {Data-driven testing}
39 \li A test can be executed multiple times with different test data.
40 \row
41 \li \b {Basic GUI testing}
42 \li Qt Test offers functionality for mouse and keyboard simulation.
43 \row
44 \li \b {Benchmarking}
45 \li Qt Test supports benchmarking and provides several measurement back-ends.
46 \row
47 \li \b {IDE friendly}
48 \li Qt Test outputs messages that can be interpreted by Qt Creator, Visual
49 Studio, and KDevelop.
50 \row
51 \li \b Thread-safety
52 \li The error reporting is thread safe and atomic.
53 \row
54 \li \b Type-safety
 55        \li Extensive use of templates prevents errors introduced by
 56        implicit type casting.
57 \row
58 \li \b {Easily extendable}
59 \li Custom types can easily be added to the test data and test output.
60 \endtable
61
62 You can use a Qt Creator wizard to create a project that contains Qt tests
63 and build and run them directly from Qt Creator. For more information, see
64 \l {Qt Creator: Build and run tests}.
65
66 \target qttest-creating-a-test
67 \section1 Creating a Test
68
69 To create a test, subclass QObject and add one or more private slots to it. Each
70 private slot is a test function in your test. QTest::qExec() can be used to execute
71 all test functions in the test object.
72
73 In addition, you can define the following private slots that are \e not
74 treated as test functions. When present, they will be executed by the
75 testing framework and can be used to initialize and clean up either the
76 entire test or the current test function.
77
78 \list
79 \li \c{initTestCase()} will be called before the first test function is executed.
80 \li \c{initTestCase_data()} will be called to create a global test data table.
81 \li \c{cleanupTestCase()} will be called after the last test function was executed.
82 \li \c{init()} will be called before each test function is executed.
83 \li \c{cleanup()} will be called after every test function.
84 \endlist
85
86 Use \c initTestCase() for preparing the test. Every test should leave the
87 system in a usable state, so it can be run repeatedly. Cleanup operations
88 should be handled in \c cleanupTestCase(), so they get run even if the test
89 fails.
90
91 Use \c init() for preparing a test function. Every test function should
92 leave the system in a usable state, so it can be run repeatedly. Cleanup
93 operations should be handled in \c cleanup(), so they get run even if the
94 test function fails and exits early.
95
96 Alternatively, you can use RAII (resource acquisition is initialization),
97 with cleanup operations called in destructors, to ensure they happen when
98 the test function returns and the object moves out of scope.
99
100 If \c{initTestCase()} fails, no test function will be executed. If \c{init()} fails,
101    the upcoming test function will not be executed and the test will proceed
102    to the next test function.
103
104 Example:
105 \snippet code/doc_src_qtestlib.cpp 0
106
107 Finally, if the test class has a static public \c{void initMain()} method,
108 it is called by the QTEST_MAIN macros before the QApplication object
109    is instantiated. This was added in Qt 5.14.
110
111 For more examples, refer to the \l{Qt Test Tutorial}.
112
113 \section1 Increasing Test Function Timeout
114
115    Qt Test limits the run-time of each test to catch infinite loops and similar
116 bugs. By default, any test function call will be interrupted after five
117 minutes. For data-driven tests, this applies to each call with a distinct
118 data-tag. This timeout can be configured by setting the \c QTEST_FUNCTION_TIMEOUT
119 environment variable to the maximum number of milliseconds that is acceptable
120 for a single call to take. If a test takes longer than the configured timeout,
121 it is interrupted, and \c qFatal() is called. As a result, the test aborts by
122 default, as if it had crashed.
123
124    To set \c QTEST_FUNCTION_TIMEOUT from the command line on Linux or \macos, enter:
125
126 \badcode
127 QTEST_FUNCTION_TIMEOUT=900000
128 export QTEST_FUNCTION_TIMEOUT
129 \endcode
130
131 On Windows:
132 \badcode
133 SET QTEST_FUNCTION_TIMEOUT=900000
134 \endcode
135
136 Then run the test inside this environment.
137
138 Alternatively, you can set the environment variable programmatically in the
139 test code itself, for example by calling, from the
140 \l{Creating a Test}{initMain()} special method of your test class:
141
142 \badcode
143 qputenv("QTEST_FUNCTION_TIMEOUT", "900000");
144 \endcode
145
146 To calculate a suitable value for the timeout, see how long the test usually
147 takes and decide how much longer it can take without that being a symptom of
148 some problem. Convert that longer time to milliseconds to get the timeout value.
149 For example, if you decide that a test that takes several minutes could
150    reasonably take up to twenty minutes, say on a slow machine,
151 multiply \c{20 * 60 * 1000 = 1200000} and set the environment variable to
152 \c 1200000 instead of the \c 900000 above.
153
154 \if !defined(qtforpython)
155 \section1 Building a Test
156
157 You can build an executable that contains one test class that typically
158 tests one class of production code. However, usually you would want to
159 test several classes in a project by running one command.
160
161 See \l {Chapter 1: Writing a Unit Test}{Writing a Unit Test} for a step by
162 step explanation.
163
164 \section2 Building with CMake and CTest
165
166 You can use CMake and
167 \l{https://cmake.org/cmake/help/latest/manual/ctest.1.html}{CTest} to create
168 a test. CTest enables you to include or exclude tests based on a regular
169 expression that is matched against the test name. You can further apply the
170 \c LABELS property to a test and CTest can then include or exclude tests
171 based on those labels.
172    All labeled targets will be run when the \c test target is called on the
173    command line.
174
175 \note On Android, if you have one connected device or emulator, tests will
176 run on that device. If you have more than one device connected, set the
177 environment variable \c {ANDROID_DEVICE_SERIAL} to the
178 \l {Android: Query for devices}{ADB serial number} of the device that
179 you want to run tests on.
180
181    There are several other advantages to using CMake. For example, the result of
182    a test run can be published on a web server using CDash with virtually no
183    effort.
184
185    CTest works with many different unit test frameworks, and works out of the
186    box with Qt Test.
187
188 The following is an example of a CMakeLists.txt file that specifies the
189 project name and the language used (here, \e mytest and C++), the Qt
190 modules required for building the test (Qt Test), and the files that are
191 included in the test (\e tst_mytest.cpp).
192
193 \quotefile code/doc_src_cmakelists.txt
194
195 For more information about the options you have, see \l {Build with CMake}.
196
197 \section2 Building with qmake
198
199 If you are using \c qmake as your build tool, just add the
200 following to your project file:
201
202 \snippet code/doc_src_qtestlib.pro 1
203
204 If you would like to run the test via \c{make check}, add the
205 additional line:
206
207 \snippet code/doc_src_qtestlib.pro 2
208
209 To prevent the test from being installed to your target, add the
210 additional line:
211
212 \snippet code/doc_src_qtestlib.pro 3
213
214 See the \l{Building a Testcase}{qmake manual} for
215 more information about \c{make check}.
216
217 \section2 Building with Other Tools
218
219 If you are using other build tools, make sure that you add the location
220 of the Qt Test header files to your include path (usually \c{include/QtTest}
221 under your Qt installation directory). If you are using a release build
222 of Qt, link your test to the \c QtTest library. For debug builds, use
223 \c{QtTest_debug}.
224
225 \endif
226
227 \section1 Qt Test Command Line Arguments
228
229 \section2 Syntax
230
231 The syntax to execute an autotest takes the following simple form:
232
233 \snippet code/doc_src_qtestlib.qdoc 2
234
235 Substitute \c testname with the name of your executable. \c
236 testfunctions can contain names of test functions to be
237 executed. If no \c testfunctions are passed, all tests are run. If you
238 append the name of an entry in \c testdata, the test function will be
239 run only with that test data.
240
241 For example:
242
243 \snippet code/doc_src_qtestlib.qdoc 3
244
245 Runs the test function called \c toUpper with all available test data.
246
247 \snippet code/doc_src_qtestlib.qdoc 4
248
249 Runs the \c toUpper test function with all available test data,
250 and the \c toInt test function with the test data row called \c
251 zero (if the specified test data doesn't exist, the associated test
252 will fail and the available data tags are reported).
253
254 \snippet code/doc_src_qtestlib.qdoc 5
255
256    Runs the \c testMyWidget test function, outputs every signal
257 emission and waits 500 milliseconds after each simulated
258 mouse/keyboard event.
259
260 \section2 Options
261
262 \section3 Logging Options
263
264 The following command line options determine how test results are reported:
265
266 \list
267 \li \c -o \e{filename,format} \br
268 Writes output to the specified file, in the specified format (one
269 of \c txt, \c csv, \c junitxml, \c xml, \c lightxml, \c teamcity
270 or \c tap). Use the special filename \c{-} (hyphen) to log to
271 standard output.
272 \li \c -o \e filename \br
273 Writes output to the specified file.
274 \li \c -txt \br
275 Outputs results in plain text.
276 \li \c -csv \br
277 Outputs results as comma-separated values (CSV) suitable for
278 import into spreadsheets. This mode is only suitable for
279 benchmarks, since it suppresses normal pass/fail messages.
280 \li \c -junitxml \br
281 Outputs results as a \l{JUnit XML} document.
282 \li \c -xml \br
283 Outputs results as an XML document.
284 \li \c -lightxml \br
285 Outputs results as a stream of XML tags.
286 \li \c -teamcity \br
287 Outputs results in \l{TeamCity} format.
288 \li \c -tap \br
289 Outputs results in \l{Test Anything Protocol} (TAP) format.
290 \endlist
291
292 The first version of the \c -o option may be repeated in order to log
293 test results in multiple formats, but no more than one instance of this
294 option can log test results to standard output.
295
296 If the first version of the \c -o option is used, neither the second version
297 of the \c -o option nor the \c -txt, \c -xml, \c -lightxml, \c -teamcity,
298 \c -junitxml or \c -tap options should be used.
299
300 If neither version of the \c -o option is used, test results will be logged to
301 standard output. If no format option is used, test results will be logged in
302 plain text.
303
304 \section3 Test Log Detail Options
305
306 The following command line options control how much detail is reported
307 in test logs:
308
309 \list
310 \li \c -silent \br
311 Silent output; only shows fatal errors, test failures and minimal status
312 messages.
313 \li \c -v1 \br
314 Verbose output; shows when each test function is entered.
315 (This option only affects plain text output.)
316 \li \c -v2 \br
317 Extended verbose output; shows each \l QCOMPARE() and \l QVERIFY().
318 (This option affects all output formats and implies \c -v1 for plain text output.)
319 \li \c -vs \br
320 Shows all signals that get emitted and the slot invocations resulting from
321 those signals.
322 (This option affects all output formats.)
323 \endlist
324
325 \section3 Testing Options
326
327 The following command-line options influence how tests are run:
328
329 \list
330 \li \c -functions \br
331 Outputs all test functions available in the test, then quits.
332 \li \c -datatags \br
333 Outputs all data tags available in the test.
334 A global data tag is preceded by ' __global__ '.
335 \li \c -eventdelay \e ms \br
336 If no delay is specified for keyboard or mouse simulation
337 (\l QTest::keyClick(),
338 \l QTest::mouseClick() etc.), the value from this parameter
339 (in milliseconds) is substituted.
340 \li \c -keydelay \e ms \br
341 Like -eventdelay, but only influences keyboard simulation and not mouse
342 simulation.
343 \li \c -mousedelay \e ms \br
344 Like -eventdelay, but only influences mouse simulation and not keyboard
345 simulation.
346 \li \c -maxwarnings \e number \br
347        Sets the maximum number of warnings to output. Defaults to 2000;
348        set to 0 for unlimited.
349 \li \c -nocrashhandler \br
350 Disables the crash handler on Unix platforms.
351 On Windows, it re-enables the Windows Error Reporting dialog, which is
352 turned off by default. This is useful for debugging crashes.
353 \li \c -repeat \e n \br
354        Runs the test suite \e n times or until a test fails. Useful for finding
355        flaky tests. If \e n is negative, the tests are repeated forever. This is intended
356 as a developer tool, and is only supported with the plain text logger.
357 \li \c -skipblacklisted \br
358        Skips blacklisted tests. This option is intended to allow more accurate
359 measurement of test coverage by preventing blacklisted tests from inflating
360 coverage statistics. When not measuring test coverage, it is recommended to
361 execute blacklisted tests to reveal any changes in their results, such as
362 a new crash or the issue that caused blacklisting being resolved.
363
364 \li \c -platform \e name \br
365 This command line argument applies to all Qt applications, but might be
366 especially useful in the context of auto-testing. By using the "offscreen"
367 platform plugin (-platform offscreen) it's possible to have tests that use
368 QWidget or QWindow run without showing anything on the screen. Currently
369 the offscreen platform plugin is only fully supported on X11.
370 \endlist
371
372 \section3 Benchmarking Options
373
374 The following command line options control benchmark testing:
375
376 \list
377 \li \c -callgrind \br
378 Uses Callgrind to time benchmarks (Linux and \macos).
379 \li \c -perf \br
380        Uses Linux perf events to time benchmarks.
381 \li \c -tickcounter \br
382 Uses CPU tick counters to time benchmarks. Requires hardware support.
383 \li \c -eventcounter \br
384 Counts events received during benchmarks.
385 \li \c -minimumvalue \e n \br
386 Sets the minimum acceptable measurement value.
387 \li \c -minimumtotal \e n \br
388 Sets the minimum acceptable total for repeated executions of a test function.
389 \li \c -iterations \e n \br
390 Sets the number of accumulation iterations.
391 \li \c -median \e n \br
392 Sets the number of median iterations.
393 \li \c -vb \br
394 Outputs verbose benchmarking information.
395 \endlist
396
397 \section3 Miscellaneous Options
398
399 \list
400 \li \c -help \br
401 Outputs the possible command line arguments and gives some useful help.
402 \endlist
403
404 \section1 Qt Test Environment Variables
405
406 You can set certain environment variables in order to affect
407 the execution of an autotest:
408
409 \list
410 \li \c QTEST_DISABLE_CORE_DUMP \br
411 Setting this variable to a non-zero value will disable the generation
412 of a core dump file.
413 \li \c QTEST_DISABLE_STACK_DUMP \br
414 Setting this variable to a non-zero value will prevent Qt Test from
415 printing a stacktrace in case an autotest times out or crashes.
416 \li \c QTEST_FATAL_FAIL \br
417 Setting this variable to a non-zero value will cause a failure in
418 an autotest to immediately abort the entire autotest. This is useful
419 to e.g. debug an unstable or intermittent failure in a test, by
420 launching the test in a debugger. Support for this variable was
421 added in Qt 6.1.
422 \endlist
423
424 \target qttest-creating-a-benchmark
425 \section1 Creating a Benchmark
426
427 To create a benchmark, follow the instructions for creating a test and then add a
428 \l QBENCHMARK macro or \l QTest::setBenchmarkResult() to the test function that
429 you want to benchmark. In the following code snippet, the macro is used:
430
431 \snippet code/doc_src_qtestlib.cpp 12
432
433 A test function that measures performance should contain either a single
434 \c QBENCHMARK macro or a single call to \c setBenchmarkResult(). Multiple
435 occurrences make no sense, because only one performance result can be
436 reported per test function, or per data tag in a data-driven setup.
437
438 Avoid changing the test code that forms (or influences) the body of a
439 \c QBENCHMARK macro, or the test code that computes the value passed to
440 \c setBenchmarkResult(). Differences in successive performance results
441 should ideally be caused only by changes to the product you are testing.
442    Changes to the test code can potentially result in a misleading report of
443 a change in performance. If you do need to change the test code, make
444 that clear in the commit message.
445
446    In a performance test function, the \c QBENCHMARK macro or the call to
447    \c setBenchmarkResult() should be followed by a verification step using
448    \l QCOMPARE(), \l QVERIFY(), and so on. You can then flag a performance result
449    as \e invalid if a code path other than the intended one was measured. A performance analysis tool
450 can use this information to filter out invalid results.
451 For example, an unexpected error condition will typically cause the program
452 to bail out prematurely from the normal program execution, and thus falsely
453 show a dramatic performance increase.
454
455 \section2 Selecting the Measurement Back-end
456
457 The code inside the QBENCHMARK macro will be measured, and possibly also repeated
458 several times in order to get an accurate measurement. This depends on the selected
459 measurement back-end. Several back-ends are available. They can be selected on the
460 command line (see \l{Benchmarking Options}):
461
462 \target testlib-benchmarking-measurement
463
464 \table
465 \header \li Name
466 \li Command-line Argument
467 \li Availability
468 \row \li Walltime
469 \li (default)
470 \li All platforms
471 \row \li CPU tick counter
472 \li -tickcounter
473             \li Windows, \macos, Linux, many UNIX-like systems
474 \row \li Event Counter
475 \li -eventcounter
476 \li All platforms
477 \row \li Valgrind Callgrind
478 \li -callgrind
479 \li Linux (if installed)
480 \row \li Linux Perf
481 \li -perf
482 \li Linux
483 \endtable
484
485 In short, walltime is always available but requires many repetitions to
486 get a useful result.
487 Tick counters are usually available and can provide
488 results with fewer repetitions, but can be susceptible to CPU frequency
489 scaling issues.
490 Valgrind provides exact results, but does not take
491 I/O waits into account, and is only available on a limited number of
492 platforms.
493    Event counting is available on all platforms; it reports the number of events
494    received by the event loop before they are sent to their corresponding
495 targets (this might include non-Qt events).
496
497 The Linux Performance Monitoring solution is available only on Linux and
498 provides many different counters, which can be selected by passing an
499 additional option \c {-perfcounter countername}, such as \c {-perfcounter
500 cache-misses}, \c {-perfcounter branch-misses}, or \c {-perfcounter
501 l1d-load-misses}. The default counter is \c {cpu-cycles}. The full list of
502 counters can be obtained by running any benchmark executable with the
503 option \c -perfcounterlist.
504
505 \note
506 \list
507 \li Using the performance counter may require enabling access to non-privileged
508 applications.
509 \li Devices that do not support high-resolution timers default to
510 one-millisecond granularity.
511 \endlist
512
513 See \l {Chapter 5: Writing a Benchmark}{Writing a Benchmark} in the Qt Test
514 Tutorial for more benchmarking examples.
515
516 \section1 Using Global Test Data
517
518 You can define \c{initTestCase_data()} to set up a global test data table.
519 Each test is run once for each row in the global test data table. When the
520 test function itself \l{Chapter 2: Data Driven Testing}{is data-driven},
521 it is run for each local data row, for each global data row. So, if there
522 are \c g rows in the global data table and \c d rows in the test's own
523    data table, the number of runs of this test is \c g times \c d.
524
525 Global data is fetched from the table using the \l QFETCH_GLOBAL() macro.
526
527 The following are typical use cases for global test data:
528
529 \list
530 \li Selecting among the available database backends in QSql tests to run
531 every test against every database.
532 \li Doing all networking tests with and without SSL (HTTP versus HTTPS)
533 and proxying.
534 \li Testing a timer with a high precision clock and with a coarse one.
535 \li Selecting whether a parser shall read from a QByteArray or from a
536 QIODevice.
537 \endlist
538
539 For example, to test each number provided by \c {roundTripInt_data()} with
540 each locale provided by \c {initTestCase_data()}:
541
542 \snippet code/src_qtestlib_qtestcase_snippet.cpp 31
543
544 On the command-line of a test you can pass the name of a function (with no
545 test-class-name prefix) to run only that one function's tests. If the test
546 class has global data, or the function is data-driven, you can append a data
547 tag, after a colon, to run only that tag's data-set for the function. To
548 specify both a global tag and a tag specific to the test function, combine
549 them with a colon between, putting the global data tag first. For example
550
551 \snippet code/doc_src_qtestlib.qdoc 6
552
553 will run the \c zero test-case of the \c roundTripInt() test above (assuming
554 its \c TestQLocale class has been compiled to an executable \c testqlocale)
555 in each of the locales specified by \c initTestCase_data(), while
556
557 \snippet code/doc_src_qtestlib.qdoc 7
558
559 will run all three test-cases of \c roundTripInt() only in the C locale and
560
561 \snippet code/doc_src_qtestlib.qdoc 8
562
563 will only run the \c zero test-case in the C locale.
564
565 Providing such fine-grained control over which tests are to be run can make
566 it considerably easier to debug a problem, as you only need to step through
567 the one test-case that has been seen to fail.
568
569*/
570
571/*!
572 \page qtest-tutorial.html
573 \brief A short introduction to testing with Qt Test.
574 \nextpage {Chapter 1: Writing a Unit Test}{Chapter 1}
575 \ingroup best-practices
576
577 \title Qt Test Tutorial
578
579 This tutorial introduces some of the features of the Qt Test framework. It
580 is divided into six chapters:
581
582 \list 1
583 \li \l {Chapter 1: Writing a Unit Test}{Writing a Unit Test}
584 \li \l {Chapter 2: Data Driven Testing}{Data Driven Testing}
585 \li \l {Chapter 3: Simulating GUI Events}{Simulating GUI Events}
586 \li \l {Chapter 4: Replaying GUI Events}{Replaying GUI Events}
587 \li \l {Chapter 5: Writing a Benchmark}{Writing a Benchmark}
588 \li \l {Chapter 6: Skipping Tests with QSKIP}{Skipping Tests}
589 \endlist
590
591
592*/