// qttestlib-manual.qdoc — internal/contributor copy; these are not the official API docs, which are at https://doc.qt.io/
// Copyright (C) 2022 The Qt Company Ltd.
// Copyright (C) 2016 Intel Corporation.
// SPDX-License-Identifier: LicenseRef-Qt-Commercial OR GFDL-1.3-no-invariants-only

/*!
\page qtest-overview.html
\title Qt Test Overview
\brief Overview of the Qt unit testing framework.

\ingroup frameworks-technologies
\ingroup qt-basic-concepts

\keyword qtestlib

Qt Test is a framework for unit testing Qt-based applications and libraries.
Qt Test provides all the functionality commonly found in unit testing
frameworks as well as extensions for testing graphical user interfaces.

Qt Test is designed to ease the writing of unit tests for Qt-based
applications and libraries:

\table
\header \li Feature \li Details
\row
\li \b Lightweight
\li Qt Test consists of about 6000 lines of code and 60 exported symbols.
\row
\li \b Self-contained
\li Qt Test requires only a few symbols from the Qt Core module for
non-GUI testing.
\row
\li \b {Rapid testing}
\li Qt Test needs no special test-runners and no special registration
for tests.
\row
\li \b {Data-driven testing}
\li A test can be executed multiple times with different test data.
\row
\li \b {Basic GUI testing}
\li Qt Test offers functionality for mouse and keyboard simulation.
\row
\li \b {Benchmarking}
\li Qt Test supports benchmarking and provides several measurement back-ends.
\row
\li \b {IDE friendly}
\li Qt Test outputs messages that can be interpreted by Qt Creator, Visual
Studio, and KDevelop.
\row
\li \b Thread-safety
\li The error reporting is thread-safe and atomic.
\row
\li \b Type-safety
\li Extensive use of templates prevents errors introduced by implicit
type casting.
\row
\li \b {Easily extendable}
\li Custom types can easily be added to the test data and test output.
\endtable

You can use a Qt Creator wizard to create a project that contains Qt tests
and build and run them directly from Qt Creator. For more information, see
\l {Qt Creator: Build and run tests}.

\target qttest-creating-a-test
\section1 Creating a Test

To create a test, subclass QObject and add one or more private slots to it.
Each private slot is a test function in your test. QTest::qExec() can be
used to execute all test functions in the test object.

In addition, you can define the following private slots that are \e not
treated as test functions. When present, they will be executed by the
testing framework and can be used to initialize and clean up either the
entire test or the current test function.

\list
\li \c{initTestCase()} will be called before the first test function is executed.
\li \c{initTestCase_data()} will be called to create a global test data table.
\li \c{cleanupTestCase()} will be called after the last test function was executed.
\li \c{init()} will be called before each test function is executed.
\li \c{cleanup()} will be called after every test function.
\endlist

Use \c initTestCase() for preparing the test. Every test should leave the
system in a usable state, so it can be run repeatedly. Cleanup operations
should be handled in \c cleanupTestCase(), so they get run even if the test
fails.

Use \c init() for preparing a test function. Every test function should
leave the system in a usable state, so it can be run repeatedly. Cleanup
operations should be handled in \c cleanup(), so they get run even if the
test function fails and exits early.

Alternatively, you can use RAII (resource acquisition is initialization),
with cleanup operations called in destructors, to ensure they happen when
the test function returns and the object goes out of scope.

If \c{initTestCase()} fails, no test function will be executed. If \c{init()}
fails, the following test function will not be executed; the test will
proceed to the next test function.

Example:
\snippet code/doc_src_qtestlib.cpp 0

Finally, if the test class has a public static \c{void initMain()} method,
it is called by the QTEST_MAIN macros before the QApplication object is
instantiated. This was added in Qt 5.14.

For more examples, refer to the \l{Qt Test Tutorial}.

\section1 Increasing Test Function Timeout

QtTest limits the run-time of each test to catch infinite loops and similar
bugs. By default, any test function call will be interrupted after five
minutes. For data-driven tests, this applies to each call with a distinct
data tag. This timeout can be configured by setting the
\c QTEST_FUNCTION_TIMEOUT environment variable to the maximum number of
milliseconds that is acceptable for a single call to take. If a test takes
longer than the configured timeout, it is interrupted, and \c qFatal() is
called. As a result, the test aborts by default, as if it had crashed.

To set \c QTEST_FUNCTION_TIMEOUT from the command line on Linux or \macos, enter:

\badcode
QTEST_FUNCTION_TIMEOUT=900000
export QTEST_FUNCTION_TIMEOUT
\endcode

On Windows:
\badcode
SET QTEST_FUNCTION_TIMEOUT=900000
\endcode

Then run the test inside this environment.

Alternatively, you can set the environment variable programmatically in the
test code itself, for example by calling, from the
\l{Creating a Test}{initMain()} special method of your test class:

\badcode
qputenv("QTEST_FUNCTION_TIMEOUT", "900000");
\endcode

To calculate a suitable value for the timeout, see how long the test usually
takes and decide how much longer it can take without that being a symptom of
some problem. Convert that longer time to milliseconds to get the timeout
value. For example, if you decide that a test that takes several minutes
could reasonably take up to twenty minutes, for example on a slow machine,
multiply \c{20 * 60 * 1000 = 1200000} and set the environment variable to
\c 1200000 instead of the \c 900000 above.
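That arithmetic can be sketched in a POSIX shell; the twenty-minute budget
below is just the example figure from the paragraph above:

```shell
# Convert a 20-minute timeout budget to milliseconds and export it.
minutes=20
timeout_ms=$((minutes * 60 * 1000))
echo "$timeout_ms"
export QTEST_FUNCTION_TIMEOUT="$timeout_ms"
```

Running the test from this shell then applies the larger timeout.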

\if !defined(qtforpython)
\section1 Building a Test

You can build an executable that contains one test class that typically
tests one class of production code. However, usually you would want to
test several classes in a project by running one command.

See \l {Chapter 1: Writing a Unit Test}{Writing a Unit Test} for a
step-by-step explanation.

\section2 Building with CMake and CTest

You can use \l {Building with CMake and CTest} to create a test.
\l{https://cmake.org/cmake/help/latest/manual/ctest.1.html}{CTest} enables
you to include or exclude tests based on a regular expression that is
matched against the test name. You can further apply the \c LABELS property
to a test and CTest can then include or exclude tests based on those labels.
All labeled targets will be run when the \c {test} target is called on the
command line.

\note On Android, if you have one connected device or emulator, tests will
run on that device. If you have more than one device connected, set the
environment variable \c {ANDROID_DEVICE_SERIAL} to the
\l {Android: Query for devices}{ADB serial number} of the device that
you want to run tests on.

There are several other advantages to using CMake. For example, the result
of a test run can be published on a web server using CDash with virtually
no effort.

CTest scales to very different unit test frameworks, and works out of the
box with QTest.

The following is an example of a CMakeLists.txt file that specifies the
project name and the language used (here, \e mytest and C++), the Qt
modules required for building the test (Qt Test), and the files that are
included in the test (\e tst_mytest.cpp).

\quotefile code/doc_src_cmakelists.txt

For more information about the options you have, see \l {Build with CMake}.

\section2 Building with qmake

If you are using \c qmake as your build tool, just add the following to
your project file:

\snippet code/doc_src_qtestlib.pro 1

If you would like to run the test via \c{make check}, add the additional
line:

\snippet code/doc_src_qtestlib.pro 2

To prevent the test from being installed to your target, add the additional
line:

\snippet code/doc_src_qtestlib.pro 3

See the \l{Building a Testcase}{qmake manual} for more information about
\c{make check}.

\section2 Building with Other Tools

If you are using other build tools, make sure that you add the location
of the Qt Test header files to your include path (usually \c{include/QtTest}
under your Qt installation directory). If you are using a release build
of Qt, link your test to the \c QtTest library. For debug builds, use
\c{QtTest_debug}.

\endif

\section1 Qt Test Command Line Arguments

\section2 Syntax

The syntax to execute an autotest takes the following simple form:

\snippet code/doc_src_qtestlib.qdoc 2

Replace \c testname with the name of your executable. \c testfunctions can
contain names of test functions to be executed. If no \c testfunctions are
passed, all tests are run. If you append the name of an entry in
\c testdata, the test function will be run only with that test data.

For example:

\snippet code/doc_src_qtestlib.qdoc 3

Runs the test function called \c toUpper with all available test data.

\snippet code/doc_src_qtestlib.qdoc 4

Runs the \c toUpper test function with all available test data, and the
\c toInt test function with the test data row called \c zero (if the
specified test data doesn't exist, the associated test will fail and the
available data tags are reported).

\snippet code/doc_src_qtestlib.qdoc 5

Runs the \c testMyWidget test function, outputs every signal emission and
waits 500 milliseconds after each simulated mouse/keyboard event.
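The invocations described above can be sketched in a shell. The executable
name \c tst_qstring is hypothetical, and the commands are only echoed here
so the sketch is self-contained rather than run against a real binary:

```shell
# Hypothetical test binary; the commands are echoed, not executed.
testname=./tst_qstring
echo "$testname toUpper"                          # toUpper with all its data rows
echo "$testname toUpper toInt:zero"               # toInt restricted to the row "zero"
echo "$testname -vs -mousedelay 500 testMyWidget" # log signals, slow down mouse events
```

In a real run you would drop the \c echo and invoke the binary directly.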

\section2 Options

\section3 Logging Options

The following command line options determine how test results are reported:

\list
\li \c -o \e{filename,format} \br
Writes output to the specified file, in the specified format (one of
\c txt, \c csv, \c junitxml, \c xml, \c lightxml, \c teamcity or \c tap).
Use the special filename \c{-} (hyphen) to log to standard output.
\li \c -o \e filename \br
Writes output to the specified file.
\li \c -txt \br
Outputs results in plain text.
\li \c -csv \br
Outputs results as comma-separated values (CSV) suitable for import into
spreadsheets. This mode is only suitable for benchmarks, since it
suppresses normal pass/fail messages.
\li \c -junitxml \br
Outputs results as a \l{JUnit XML} document.
\li \c -xml \br
Outputs results as an XML document.
\li \c -lightxml \br
Outputs results as a stream of XML tags.
\li \c -teamcity \br
Outputs results in \l{TeamCity} format.
\li \c -tap \br
Outputs results in \l{Test Anything Protocol} (TAP) format.
\endlist

The first version of the \c -o option may be repeated in order to log
test results in multiple formats, but no more than one instance of this
option can log test results to standard output.

If the first version of the \c -o option is used, neither the second version
of the \c -o option nor the \c -txt, \c -xml, \c -lightxml, \c -teamcity,
\c -junitxml or \c -tap options should be used.

If neither version of the \c -o option is used, test results will be logged
to standard output. If no format option is used, test results will be
logged in plain text.

\section3 Test Log Detail Options

The following command line options control how much detail is reported
in test logs:

\list
\li \c -silent \br
Silent output; only shows fatal errors, test failures and minimal status
messages.
\li \c -v1 \br
Verbose output; shows when each test function is entered.
(This option only affects plain text output.)
\li \c -v2 \br
Extended verbose output; shows each \l QCOMPARE() and \l QVERIFY().
(This option affects all output formats and implies \c -v1 for plain text output.)
\li \c -vs \br
Shows all signals that get emitted and the slot invocations resulting from
those signals.
(This option affects all output formats.)
\endlist

\section3 Testing Options

The following command-line options influence how tests are run:

\list
\li \c -functions \br
Outputs all test functions available in the test, then quits.
\li \c -datatags \br
Outputs all data tags available in the test.
A global data tag is preceded by ' __global__ '.
\li \c -eventdelay \e ms \br
If no delay is specified for keyboard or mouse simulation
(\l QTest::keyClick(), \l QTest::mouseClick() etc.), the value from this
parameter (in milliseconds) is substituted.
\li \c -keydelay \e ms \br
Like -eventdelay, but only influences keyboard simulation and not mouse
simulation.
\li \c -mousedelay \e ms \br
Like -eventdelay, but only influences mouse simulation and not keyboard
simulation.
\li \c -maxwarnings \e number \br
Sets the maximum number of warnings to output; 0 for unlimited, defaults to
2000.
\li \c -nocrashhandler \br
Disables the crash handler on Unix platforms. On Windows, it re-enables the
Windows Error Reporting dialog, which is turned off by default. This is
useful for debugging crashes.
\li \c -repeat \e n \br
Runs the test suite \e n times or until a test fails. Useful for finding
flaky tests. If \e n is negative, the tests are repeated forever. This is
intended as a developer tool, and is only supported with the plain text
logger.
\li \c -skipblacklisted \br
Skips the blacklisted tests. This option is intended to allow more accurate
measurement of test coverage by preventing blacklisted tests from inflating
coverage statistics. When not measuring test coverage, it is recommended to
execute blacklisted tests to reveal any changes in their results, such as
a new crash or the issue that caused blacklisting being resolved.

\li \c -platform \e name \br
This command line argument applies to all Qt applications, but might be
especially useful in the context of auto-testing. By using the "offscreen"
platform plugin (\c{-platform offscreen}) it's possible to have tests that
use QWidget or QWindow run without showing anything on the screen. Currently
the offscreen platform plugin is only fully supported on X11.
\endlist

\section3 Benchmarking Options

The following command line options control benchmark testing:

\list
\li \c -callgrind \br
Uses Callgrind to time benchmarks (Linux and \macos).
\li \c -perf \br
Uses Linux perf events to time benchmarks.
\li \c -tickcounter \br
Uses CPU tick counters to time benchmarks. Requires hardware support.
\li \c -eventcounter \br
Counts events received during benchmarks.
\li \c -minimumvalue \e n \br
Sets the minimum acceptable measurement value.
\li \c -minimumtotal \e n \br
Sets the minimum acceptable total for repeated executions of a test function.
\li \c -iterations \e n \br
Sets the number of accumulation iterations.
\li \c -median \e n \br
Sets the number of median iterations.
\li \c -vb \br
Outputs verbose benchmarking information.
\endlist

\section3 Miscellaneous Options

\list
\li \c -help \br
Outputs the possible command line arguments and gives some useful help.
\endlist

\section1 Qt Test Environment Variables

You can set certain environment variables in order to affect the execution
of an autotest:

\list
\li \c QTEST_DISABLE_CORE_DUMP \br
Setting this variable to a non-zero value will disable the generation
of a core dump file.
\li \c QTEST_DISABLE_STACK_DUMP \br
Setting this variable to a non-zero value will prevent Qt Test from
printing a stacktrace in case an autotest times out or crashes.
\li \c QTEST_FATAL_FAIL \br
Setting this variable to a non-zero value will cause a failure in an
autotest to immediately abort the entire autotest. This is useful, for
example, to debug an unstable or intermittent failure in a test by
launching the test in a debugger. Support for this variable was added
in Qt 6.1.
\endlist

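A minimal sketch of using one of the variables above from a shell, before
launching a (hypothetical) test binary under a debugger:

```shell
# Abort the whole autotest on the first failure, so a debugger
# stops right where the intermittent failure happens.
export QTEST_FATAL_FAIL=1
echo "$QTEST_FATAL_FAIL"
```

Unsetting the variable (or setting it to 0) restores the default behavior.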
\target qttest-creating-a-benchmark
\section1 Creating a Benchmark

To create a benchmark, follow the instructions for creating a test and then
add a \l QBENCHMARK macro or \l QTest::setBenchmarkResult() to the test
function that you want to benchmark. In the following code snippet, the
macro is used:

\snippet code/doc_src_qtestlib.cpp 12

A test function that measures performance should contain either a single
\c QBENCHMARK macro or a single call to \c setBenchmarkResult(). Multiple
occurrences make no sense, because only one performance result can be
reported per test function, or per data tag in a data-driven setup.

Avoid changing the test code that forms (or influences) the body of a
\c QBENCHMARK macro, or the test code that computes the value passed to
\c setBenchmarkResult(). Differences in successive performance results
should ideally be caused only by changes to the product you are testing.
Changes to the test code can potentially result in a misleading report of
a change in performance. If you do need to change the test code, make
that clear in the commit message.

In a performance test function, the \c QBENCHMARK macro or the
\c setBenchmarkResult() call should be followed by a verification step
using \l QCOMPARE(), \l QVERIFY(), and so on. You can then flag a
performance result as \e invalid if a code path other than the intended
one was measured. A performance analysis tool can use this information to
filter out invalid results. For example, an unexpected error condition
will typically cause the program to bail out prematurely from the normal
program execution, and thus falsely show a dramatic performance increase.

\section2 Selecting the Measurement Back-end

The code inside the QBENCHMARK macro will be measured, and possibly also
repeated several times in order to get an accurate measurement. This
depends on the selected measurement back-end. Several back-ends are
available. They can be selected on the command line
(see \l{Benchmarking Options}):

\target testlib-benchmarking-measurement

\table
\header \li Name
\li Command-line Argument
\li Availability
\row \li Walltime
\li (default)
\li All platforms
\row \li CPU tick counter
\li -tickcounter
\li Windows, \macos, Linux, many UNIX-like systems.
\row \li Event Counter
\li -eventcounter
\li All platforms
\row \li Valgrind Callgrind
\li -callgrind
\li Linux (if installed)
\row \li Linux Perf
\li -perf
\li Linux
\endtable

In short, walltime is always available but requires many repetitions to
get a useful result. Tick counters are usually available and can provide
results with fewer repetitions, but can be susceptible to CPU frequency
scaling issues. Valgrind provides exact results, but does not take I/O
waits into account, and is only available on a limited number of
platforms. Event counting is available on all platforms and provides the
number of events that were received by the event loop before they are
sent to their corresponding targets (this might include non-Qt events).

The Linux Performance Monitoring solution is available only on Linux and
provides many different counters, which can be selected by passing an
additional option \c {-perfcounter countername}, such as \c {-perfcounter
cache-misses}, \c {-perfcounter branch-misses}, or \c {-perfcounter
l1d-load-misses}. The default counter is \c {cpu-cycles}. The full list of
counters can be obtained by running any benchmark executable with the
option \c -perfcounterlist.

\note
\list
\li Using the performance counter may require enabling access for
non-privileged applications.
\li Devices that do not support high-resolution timers default to
one-millisecond granularity.
\endlist

See \l {Chapter 5: Writing a Benchmark}{Writing a Benchmark} in the Qt Test
Tutorial for more benchmarking examples.

\section1 Using Global Test Data

You can define \c{initTestCase_data()} to set up a global test data table.
Each test is run once for each row in the global test data table. When the
test function itself \l{Chapter 2: Data Driven Testing}{is data-driven},
it is run for each local data row, for each global data row. So, if there
are \c g rows in the global data table and \c d rows in the test's own
data table, the number of runs of this test is \c g times \c d.
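As a quick worked example of that product, with made-up row counts:

```shell
# Hypothetical counts: 3 global rows (say, locales) times 4 local data rows.
g=3
d=4
echo $((g * d))  # total runs of the test function
```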

Global data is fetched from the table using the \l QFETCH_GLOBAL() macro.

The following are typical use cases for global test data:

\list
\li Selecting among the available database backends in QSql tests to run
every test against every database.
\li Doing all networking tests with and without SSL (HTTP versus HTTPS)
and proxying.
\li Testing a timer with a high precision clock and with a coarse one.
\li Selecting whether a parser shall read from a QByteArray or from a
QIODevice.
\endlist

For example, to test each number provided by \c {roundTripInt_data()} with
each locale provided by \c {initTestCase_data()}:

\snippet code/src_qtestlib_qtestcase_snippet.cpp 31

On the command line of a test you can pass the name of a function (with no
test-class-name prefix) to run only that one function's tests. If the test
class has global data, or the function is data-driven, you can append a
data tag, after a colon, to run only that tag's data-set for the function.
To specify both a global tag and a tag specific to the test function,
combine them with a colon between, putting the global data tag first.
For example:

\snippet code/doc_src_qtestlib.qdoc 6

will run the \c zero test-case of the \c roundTripInt() test above (assuming
its \c TestQLocale class has been compiled to an executable \c testqlocale)
in each of the locales specified by \c initTestCase_data(), while

\snippet code/doc_src_qtestlib.qdoc 7

will run all three test-cases of \c roundTripInt() only in the C locale and

\snippet code/doc_src_qtestlib.qdoc 8

will only run the \c zero test-case in the C locale.

Providing such fine-grained control over which tests are to be run can make
it considerably easier to debug a problem, as you only need to step through
the one test-case that has been seen to fail.

*/

/*!
\page qtest-tutorial.html
\brief A short introduction to testing with Qt Test.
\nextpage {Chapter 1: Writing a Unit Test}{Chapter 1}
\ingroup best-practices

\title Qt Test Tutorial

This tutorial introduces some of the features of the Qt Test framework. It
is divided into six chapters:

\list 1
\li \l {Chapter 1: Writing a Unit Test}{Writing a Unit Test}
\li \l {Chapter 2: Data Driven Testing}{Data Driven Testing}
\li \l {Chapter 3: Simulating GUI Events}{Simulating GUI Events}
\li \l {Chapter 4: Replaying GUI Events}{Replaying GUI Events}
\li \l {Chapter 5: Writing a Benchmark}{Writing a Benchmark}
\li \l {Chapter 6: Skipping Tests with QSKIP}{Skipping Tests}
\endlist

*/