cameraoverview.qdoc
// Copyright (C) 2021 The Qt Company Ltd.
// SPDX-License-Identifier: LicenseRef-Qt-Commercial OR GFDL-1.3-no-invariants-only

/*!
\page cameraoverview.html
\title Camera Overview
\brief Camera viewfinder, still image capture, and video recording.
\ingroup explanations-graphicsandmultimedia

The Qt Multimedia API provides a number of camera-related classes, so you
can access images and videos from mobile device cameras or web cameras.
There are both C++ and QML APIs for common tasks.

\section1 Camera Features

In order to use the camera classes, a quick overview of the way a camera
works is needed. If you're already familiar with this, you can skip ahead to
\l {camera-tldr}{Camera implementation details}.
For a more detailed explanation of how a camera works, see the following YouTube
clip.

\youtube qS1FmgPVLqw

\section2 The Lens Assembly

At one end of the camera is the lens assembly (one or
more lenses, arranged to focus light onto the sensor). The lenses
themselves can sometimes be moved to adjust things like focus and zoom. They
might also be fixed in an arrangement for a good balance between maintaining
focus and cost.

\image how-focus-works.gif "An animation of how focus works."

\image Zoom.gif "An animation of how zoom works."

Some lens assemblies can be adjusted automatically so that
objects at different distances from the camera can be kept in focus.
This is usually done by measuring how sharp a particular area of the
frame is, and then adjusting the lens assembly to find the peak sharpness. In
some cases, the camera will always use the center of the frame for this.
In other cases, a camera may also allow this target focus region to be specified.
Some examples of this feature include:
\list
\li Face focus: Using computer vision to detect and use one or more faces as the
    target.
\li Touch to focus: Enabling the user to manually select an area via the preview
    screen.
\endlist

\section2 The Sensor
Once light arrives at the sensor, it gets converted into digital pixels.
This process depends on a number of factors, but ultimately comes down
to two things:
\list
\li The length of time the conversion is allowed to take, also known as the
    exposure time.
\li How bright the light is.
\endlist

The longer a conversion is allowed to take, the better the resulting image
quality. Using a flash can assist with letting more light hit the sensor,
allowing it to convert pixels faster, giving better quality for the same
amount of time. Conversely, allowing a longer conversion time can let you
take photos in darker environments, \b{as long as the camera is steady}. If the
camera moves while the sensor is recording, the resulting image is blurred.

\section2 Image Processing
After the image has been captured by the sensor, the camera firmware performs
a number of image processing tasks on it to compensate for sensor
characteristics, current lighting, and desired image properties. Faster sensor
pixel conversion times may introduce digital noise, so some amount of image
processing can be done to remove this, based on the camera sensor settings.

The color of the image can also be adjusted at this stage to compensate for
different light sources - fluorescent lights and sunlight give very different
appearances to the same object, so the image can be adjusted based on the
white balance of the picture (due to the different color temperatures of the
light sources).
\image image_processing.png "5 examples of various image processing techniques."

Some forms of "special effects" can also be performed at this stage. Black
and white, sepia, or "negative" style images can be produced.

\section2 Recording for Posterity
Finally, once a perfectly focused, exposed and processed image has been
created, it can be put to good use. Camera images can be further processed
by application code (for example, to detect bar-codes, or to stitch together a
panoramic image), or saved to a common format like JPEG, or used to create a movie.
Many of these tasks have classes to assist them.

\target camera-tldr
\section1 Camera Implementation Details
\section2 Detecting and Selecting a Camera

Before using the camera APIs, you should check that a camera is available at
runtime. If none is available, you can disable camera-related features
in your application. To perform this check in C++, use the
\l QMediaDevices::videoInputs() function, as shown in the example below:

\snippet multimedia-snippets/camera.cpp Camera overview check
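
For instance, a minimal availability check might look like the following sketch
(the helper name is illustrative, not part of the snippet above):

\code
#include <QMediaDevices>

bool isCameraAvailable()
{
    // At least one video input must be reported by the multimedia backend.
    return !QMediaDevices::videoInputs().isEmpty();
}
\endcode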

Access a camera using the \l QCamera class in C++ or the \l Camera
type in QML.

When multiple camera devices are available, you can specify which one to use.

In C++:

\snippet multimedia-snippets/camera.cpp Camera selection
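
As a rough sketch of such a selection, you might match on the device description
(the matching criterion and helper name here are only illustrative):

\code
#include <QCamera>
#include <QCameraDevice>
#include <QMediaDevices>

QCamera *createCamera(const QString &wantedDescription)
{
    // Use the first device whose description matches; otherwise fall back
    // to the default camera.
    const QList<QCameraDevice> devices = QMediaDevices::videoInputs();
    for (const QCameraDevice &device : devices) {
        if (device.description() == wantedDescription)
            return new QCamera(device);
    }
    return new QCamera(QMediaDevices::defaultVideoInput());
}
\endcode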

In QML, you can select the camera by setting the \l{Camera::cameraDevice} property.
In C++, you can also select a camera device by its physical position rather than
by camera info. This is useful on mobile devices, which often have a
front-facing and a back-facing camera.

In C++:

\snippet multimedia-snippets/camera.cpp Camera overview position
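
A sketch of selecting by position can be as simple as passing the desired
\l QCameraDevice::Position to the QCamera constructor:

\code
#include <QCamera>
#include <QCameraDevice>

QCamera *createFrontCamera()
{
    // If no camera exists at this position, the default camera is used instead.
    return new QCamera(QCameraDevice::FrontFace);
}
\endcode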

If no QCameraDevice is specified, the default device will be used. The
default device is chosen based on information provided by the operating
system. On desktop platforms, the default camera is commonly set by the
end-user in the system settings. On a mobile device, the back-facing
camera is usually the default camera device. You can get the default
camera device with \l QMediaDevices::defaultVideoInput() in C++, or
\l MediaDevices.defaultVideoInput in QML.

The default camera device may change over time, for example as a result of the
end-user disconnecting the current default camera device. Application
developers may track the change by querying the default camera device
again when the signal \l QMediaDevices::videoInputsChanged is emitted.
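
A sketch of such tracking, assuming you keep a \l QMediaDevices instance around to
provide the signal:

\code
#include <QCameraDevice>
#include <QMediaDevices>
#include <QObject>

void trackDefaultCamera(QMediaDevices *devices)
{
    QObject::connect(devices, &QMediaDevices::videoInputsChanged, [] {
        // Re-query the default device whenever the set of video inputs changes.
        const QCameraDevice newDefault = QMediaDevices::defaultVideoInput();
        // Switch the capture session over to newDefault here if desired.
    });
}
\endcode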

\section2 Preview

While not strictly necessary, it's often useful to be able to see
what the camera is pointing at. This is known as a preview.

Depending on whether you're using QML or C++, you can do this in multiple ways.
In QML, you can use a \l Camera and a \l VideoOutput together, connected
through a \l CaptureSession.

\qml
Item {
    VideoOutput {
        id: output
        anchors.fill: parent
    }
    CaptureSession {
        videoOutput: output

        camera: Camera {
            // You can adjust various settings in here
        }
    }
}
\endqml

In C++, your choice depends on whether you are using widgets or QGraphicsView.
The \l QVideoWidget class is used in the widgets case, and \l QGraphicsVideoItem
is useful for QGraphicsView.

\snippet multimedia-snippets/camera.cpp Camera overview viewfinder
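
A rough sketch of the widgets case (window layout and lifetime management are
left out for brevity):

\code
#include <QCamera>
#include <QMediaCaptureSession>
#include <QMediaDevices>
#include <QVideoWidget>

void showViewfinder(QVideoWidget *viewfinder)
{
    auto *camera = new QCamera(QMediaDevices::defaultVideoInput(), viewfinder);
    auto *session = new QMediaCaptureSession(viewfinder);

    session->setCamera(camera);
    session->setVideoOutput(viewfinder);   // the widget displays the live preview

    viewfinder->show();
    camera->start();
}
\endcode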

For advanced usage (like processing preview frames as they come, which enables
detection of objects or patterns), you can also use your own QVideoSink and set
that as the videoOutput for the QMediaCaptureSession. In this case, you will need to
render the preview image yourself by processing the data received from the
videoFrameChanged() signal.

\snippet multimedia-snippets/camera.cpp Camera overview surface
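
A minimal sketch of wiring up your own sink (the frame handling shown is only a
placeholder):

\code
#include <QCamera>
#include <QMediaCaptureSession>
#include <QMediaDevices>
#include <QObject>
#include <QVideoFrame>
#include <QVideoSink>

void processFrames(QMediaCaptureSession *session)
{
    auto *camera = new QCamera(QMediaDevices::defaultVideoInput(), session);
    auto *sink = new QVideoSink(session);

    session->setCamera(camera);
    session->setVideoOutput(sink);

    QObject::connect(sink, &QVideoSink::videoFrameChanged,
                     [](const QVideoFrame &frame) {
        // Inspect or convert the frame here (for example with frame.toImage())
        // and render the result yourself.
    });

    camera->start();
}
\endcode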

On mobile devices, the preview image is by default oriented in the same way as the device.
Thus, as the user rotates the device, the preview image will switch between portrait and
landscape mode. Once you start recording, the orientation will be locked. To avoid a poor
user experience, you should also lock the orientation of the application's user interface
while recording. This can be achieved using the
\l{QWindow::contentOrientation}{contentOrientation} property of QWindow.
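
A sketch of such a lock; how strictly the hint is honored depends on the platform:

\code
#include <QScreen>
#include <QWindow>

void lockUiOrientation(QWindow *window)
{
    // Freeze the UI in whatever orientation the screen currently reports.
    // Reset to Qt::PrimaryOrientation once recording has finished.
    window->setContentOrientation(window->screen()->orientation());
}
\endcode
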
\section2 Still Images

After setting up a viewfinder and finding something photogenic, we need to
initialize a new QImageCapture object in order to capture an image. All that is
then needed is to start the camera and capture the image.

\snippet multimedia-snippets/camera.cpp Camera overview capture
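
A rough equivalent of that flow (the output file name is only an example, and a
real application should wait for \l QImageCapture::readyForCaptureChanged before
capturing):

\code
#include <QCamera>
#include <QImageCapture>
#include <QMediaCaptureSession>
#include <QMediaDevices>

void takePhoto(QMediaCaptureSession *session)
{
    auto *camera = new QCamera(QMediaDevices::defaultVideoInput(), session);
    auto *imageCapture = new QImageCapture(session);

    session->setCamera(camera);
    session->setImageCapture(imageCapture);

    camera->start();
    imageCapture->captureToFile("photo.jpg");   // example file name
}
\endcode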

\section2 Movies

Previously we saw code that allowed the capture of a still image. Recording
video requires the use of a \l QMediaRecorder object.

To record video, we create a camera object as before, but this time, in addition
to the viewfinder, we also initialize a media recorder object.

\snippet multimedia-snippets/camera.cpp Camera overview movie
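
A corresponding sketch (the output location and helper name are illustrative):

\code
#include <QCamera>
#include <QMediaCaptureSession>
#include <QMediaDevices>
#include <QMediaRecorder>
#include <QUrl>

void startRecording(QMediaCaptureSession *session)
{
    auto *camera = new QCamera(QMediaDevices::defaultVideoInput(), session);
    auto *recorder = new QMediaRecorder(session);

    session->setCamera(camera);
    session->setRecorder(recorder);

    camera->start();
    recorder->setOutputLocation(QUrl::fromLocalFile("clip.mp4"));  // example name
    recorder->record();
    // Call recorder->stop() when you are done.
}
\endcode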

Signals from the \l QMediaRecorder can be connected to slots to react to
changes in the state of the encoding process or error events. Recording
starts when \l QMediaRecorder::record() is called. This causes the signal
\l{QMediaRecorder::}{recorderStateChanged()} to be emitted. Recording is
controlled by the record(), stop(), and pause() slots of QMediaRecorder.
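
For example, reacting to state changes and errors might look like this sketch
(the lambda bodies are placeholders):

\code
#include <QMediaRecorder>
#include <QObject>

void watchRecorder(QMediaRecorder *recorder)
{
    QObject::connect(recorder, &QMediaRecorder::recorderStateChanged,
                     [](QMediaRecorder::RecorderState state) {
        if (state == QMediaRecorder::RecordingState) {
            // Update the UI to show that recording is in progress.
        }
    });
    QObject::connect(recorder, &QMediaRecorder::errorOccurred,
                     [](QMediaRecorder::Error error, const QString &errorString) {
        // Report errorString to the user; it is a human-readable description.
    });
}
\endcode
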
\section2 Controlling the Imaging Pipeline

Now that the basics of capturing images and movies are covered, there are a number
of ways to control the imaging pipeline to implement some interesting techniques.
As explained earlier, several physical and electronic elements combine to determine
the final images, and you can control them with different classes.

\section3 Focus and Zoom

QCamera allows you to set the general focus policy by means of the
\l {QCamera::FocusMode}{FocusMode} enum, which covers settings such as
\l {QCamera::FocusModeAuto} and \l {QCamera::FocusModeInfinity}.

For camera hardware that supports it, \l QCamera::FocusModeAutoNear allows
imaging of things that are close to the sensor. This is useful in applications
like bar-code recognition or business card scanning.
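
A sketch of selecting a focus mode, together with a custom focus point for a
touch-to-focus style interaction (the coordinates are an arbitrary example):

\code
#include <QCamera>
#include <QPointF>

void configureFocus(QCamera *camera)
{
    // Close-up focusing, for example for scanning a bar-code.
    if (camera->isFocusModeSupported(QCamera::FocusModeAutoNear))
        camera->setFocusMode(QCamera::FocusModeAutoNear);

    // Focus on a specific region, given in relative frame coordinates
    // where (0.5, 0.5) is the center of the frame.
    camera->setCustomFocusPoint(QPointF(0.25, 0.75));
}
\endcode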

In addition to focus, QCamera allows you to control any available zoom
functionality using \l{QCamera::setZoomFactor}{setZoomFactor()} or
\l{QCamera::zoomTo}{zoomTo()}. The available zoom range might be limited or
entirely fixed to unity (1:1). The allowed range can be checked with
\l{QCamera::minimumZoomFactor}{minimumZoomFactor()} and
\l{QCamera::maximumZoomFactor}{maximumZoomFactor()}.
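
For instance, a zoom request clamped to the supported range might look like this
(the target factor is arbitrary):

\code
#include <QCamera>
#include <QtGlobal>

void zoomIn(QCamera *camera)
{
    const float wanted = 2.0f;
    const float factor = qBound(camera->minimumZoomFactor(), wanted,
                                camera->maximumZoomFactor());

    // Ramp smoothly towards the target; see zoomTo() for how the rate
    // argument is interpreted.
    camera->zoomTo(factor, 1.0f);
}
\endcode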

\section3 Exposure, Shutter Speed and Flash

There are a number of settings that affect the amount of light that hits the
camera sensor, and hence the quality of the resulting image.

The main settings for automatic image taking are the
\l {QCamera::ExposureMode}{exposure mode} and \l {QCamera::FlashMode}{flash mode}.
Several other settings (such as ISO sensitivity and exposure time) are usually
managed automatically, but can also be overridden if desired.

Finally, you can control the flash hardware (if present) using QCamera. In
some cases the hardware may also double as a torch.
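
A sketch of adjusting exposure, flash, and torch, guarded by the corresponding
support checks (the chosen modes are arbitrary examples):

\code
#include <QCamera>

void configureExposureAndFlash(QCamera *camera)
{
    // Prefer settings suited to low-light scenes, if the backend supports them.
    if (camera->isExposureModeSupported(QCamera::ExposureNight))
        camera->setExposureMode(QCamera::ExposureNight);

    // Fire the flash automatically when the scene is too dark.
    if (camera->isFlashModeSupported(QCamera::FlashAuto))
        camera->setFlashMode(QCamera::FlashAuto);

    // Some hardware can also keep the light on continuously as a torch.
    if (camera->isTorchModeSupported(QCamera::TorchOn))
        camera->setTorchMode(QCamera::TorchOn);
}
\endcode
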
\target camera_image_processing
\section3 Image Processing

The QCamera class lets you adjust the image processing part of the pipeline.
These settings include:
\list
\li \l {QCamera::WhiteBalanceMode}{white balance}
    (also known as color temperature)
\endlist

Most cameras support automatic settings for all of these, so you shouldn't need
to adjust them unless the user wants a specific setting.
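
For example, forcing a fluorescent white balance when the backend supports it
(a sketch; the chosen mode is arbitrary):

\code
#include <QCamera>

void configureWhiteBalance(QCamera *camera)
{
    if (camera->isWhiteBalanceModeSupported(QCamera::WhiteBalanceFluorescent))
        camera->setWhiteBalanceMode(QCamera::WhiteBalanceFluorescent);
}
\endcode
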
\section1 Examples

There are both C++ and QML examples available.

\section2 C++ Examples

\annotatedlist camera_examples

\section2 QML Examples

\annotatedlist camera_examples_qml

\section1 Reference Documentation

\section2 C++ Classes

\annotatedlist multimedia_camera

\section2 QML Types

\annotatedlist camera_qml
*/