cameraoverview.qdoc
// Copyright (C) 2021 The Qt Company Ltd.
// SPDX-License-Identifier: LicenseRef-Qt-Commercial OR GFDL-1.3-no-invariants-only

/*!
\page cameraoverview.html
\title Camera Overview
\brief Camera viewfinder, still image capture, and video recording.
\ingroup explanations-graphicsandmultimedia

The Qt Multimedia API provides a number of camera-related classes, so you
can access images and videos from mobile device cameras or web cameras.
There are both C++ and QML APIs for common tasks.

\section1 Camera Features

To use the camera classes, it helps to have a quick overview of the way a
camera works. If you're already familiar with this, you can skip ahead to
\l {camera-tldr}{Camera implementation details}.
For a more detailed explanation of how a camera works, see the following
YouTube clip.

\youtube qS1FmgPVLqw

\section2 The Lens Assembly

At one end of the camera is the lens assembly (one or more lenses, arranged
to focus light onto the sensor). The lenses themselves can sometimes be moved
to adjust things like focus and zoom. They might also be fixed in an
arrangement chosen as a good balance between maintaining focus and cost.

\image how-focus-works.gif "An animation of how focus works"

\image Zoom.gif "An animation of how zoom works"

Some lens assemblies can be adjusted automatically so that objects at
different distances from the camera can be kept in focus. This is usually
done by measuring how sharp a particular area of the frame is, and then
adjusting the lens assembly until the peak sharpness is found. In some cases,
the camera will always use the center of the frame for this. In other cases,
a camera may also allow this target focus region to be specified. Some
examples of this feature include:
\list
\li Face focus: Using computer vision to detect and use one or more faces as
the focus target.
\li Touch to focus: Enabling the user to manually select an area via the
preview screen.
\endlist

\section2 The Sensor
Once light arrives at the sensor, it gets converted into digital pixels.
This process depends on a number of factors, but ultimately comes down to
two things:
\list
\li The length of time the conversion is allowed to take, also known as the
exposure time.
\li How bright the light is.
\endlist

The longer a conversion is allowed to take, the better the resulting image
quality. Using a flash lets more light hit the sensor, allowing it to convert
pixels faster and giving better quality for the same exposure time.
Conversely, allowing a longer conversion time lets you take photos in darker
environments, \b{as long as the camera is steady}. If the camera moves while
the sensor is recording, the resulting image is blurred.

\section2 Image Processing
After the image has been captured by the sensor, the camera firmware performs
various image processing tasks on it to compensate for sensor characteristics,
current lighting, and desired image properties. Faster sensor pixel conversion
times may introduce digital noise, so some amount of image processing can be
done to remove this, based on the camera sensor settings.

The color of the image can also be adjusted at this stage to compensate for
different light sources: fluorescent lights and sunlight give very different
appearances to the same object, so the image can be adjusted based on the
white balance of the picture (due to the different color temperatures of the
light sources).
\image image_processing.png "Five examples of image processing techniques"

Some forms of "special effects" can also be performed at this stage. Black
and white, sepia, or "negative" style images can be produced.

\section2 Recording for Posterity
Finally, once a perfectly focused, exposed, and processed image has been
created, it can be put to good use. Camera images can be further processed
by application code (for example, to detect barcodes, or to stitch together
a panoramic image), saved in a common format like JPEG, or used to create a
movie. Many of these tasks have classes to assist them.

\target camera-tldr
\section1 Camera Implementation Details
\section2 Detecting and Selecting a Camera

Before using the camera APIs, you should check that a camera is available at
runtime. If none is available, you can disable camera-related features in
your application. To perform this check in C++, use the
\l QMediaDevices::videoInputs() function, as shown in the example below:

    \snippet multimedia-snippets/camera.cpp Camera overview check

Access a camera using the \l QCamera class in C++ or the \l Camera
type in QML.

When multiple camera devices are available, you can specify which one to use.

In C++:

    \snippet multimedia-snippets/camera.cpp Camera selection

In QML, you can select the camera by setting the \l{Camera::cameraDevice} property.
In C++, you can also select a camera device by its physical position rather than
by camera info. This is useful on mobile devices, which often have a
front-facing and a back-facing camera.

In C++:

    \snippet multimedia-snippets/camera.cpp Camera overview position

If no QCameraDevice is specified, the default device is used. The
default device is chosen based on information provided by the operating
system. On desktop platforms, the default camera is commonly set by the
end user in the system settings. On a mobile device, the back-facing
camera is usually the default camera device. You can get the default
camera device with \l QMediaDevices::defaultVideoInput() in C++, or
\l{QtMultimedia::MediaDevices::defaultVideoInput}{MediaDevices.defaultVideoInput}
in QML.

The default camera device may change over time, for example as a result of the
end user disconnecting the current default camera device. Application
developers can track such changes by querying the default camera device
again when the \l QMediaDevices::videoInputsChanged signal is emitted.
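
For illustration only (this is not one of the shared snippets), tracking the
default device could look like the following sketch. The \c CameraTracker
class and its member names are hypothetical; the QMediaDevices instance is
kept only so that its change signal can be connected to.

\code
#include <QCamera>
#include <QCameraDevice>
#include <QMediaDevices>

class CameraTracker : public QObject
{
    Q_OBJECT
public:
    explicit CameraTracker(QObject *parent = nullptr)
        : QObject(parent)
    {
        // Re-query the default device whenever a camera is connected or removed.
        connect(&m_devices, &QMediaDevices::videoInputsChanged,
                this, &CameraTracker::updateDefaultCamera);
        updateDefaultCamera();
    }

private slots:
    void updateDefaultCamera()
    {
        const QCameraDevice device = QMediaDevices::defaultVideoInput();
        if (device.isNull())
            return; // no camera available; consider disabling camera features
        m_camera.setCameraDevice(device);
    }

private:
    QMediaDevices m_devices;
    QCamera m_camera;
};
\endcode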

\section2 Preview

While not strictly necessary, it's often useful to be able to see
what the camera is pointing at. This is known as a preview.

Depending on whether you're using QML or C++, you can do this in multiple ways.
In QML, you can use a \l Camera and a \l VideoOutput together in a
\l CaptureSession.

\qml
Item {
    VideoOutput {
        id: output
        anchors.fill: parent
    }
    CaptureSession {
        videoOutput: output

        Camera {
            // You can adjust various settings in here
        }
    }
}
\endqml

In C++, your choice depends on whether you are using widgets or QGraphicsView.
The \l QVideoWidget class is used in the widgets case, and \l QGraphicsVideoItem
is useful for QGraphicsView.

    \snippet multimedia-snippets/camera.cpp Camera overview viewfinder

For advanced usage (like processing preview frames as they arrive, which enables
detection of objects or patterns), you can also use your own QVideoSink and set
that as the videoOutput for the QMediaCaptureSession. In this case, you need to
render the preview image yourself by processing the data received from the
videoFrameChanged() signal.

    \snippet multimedia-snippets/camera.cpp Camera overview surface

On mobile devices, the preview image is by default oriented in the same way as
the device. Thus, as the user rotates the device, the preview image switches
between portrait and landscape mode. Once you start recording, the orientation
is locked. To avoid a poor user experience, you should also lock the orientation
of the application's user interface while recording. This can be achieved using
the \l{QWindow::contentOrientation}{contentOrientation} property of QWindow.
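
A minimal illustrative sketch (not one of the shared snippets) could report the
current screen orientation as the window's content orientation when recording
starts, and revert to the primary orientation when it stops. The \c window and
\c recorder pointers are assumed to exist in your application:

\code
#include <QMediaRecorder>
#include <QScreen>
#include <QWindow>

void lockUiOrientationWhileRecording(QWindow *window, QMediaRecorder *recorder)
{
    QObject::connect(recorder, &QMediaRecorder::recorderStateChanged,
                     window, [window](QMediaRecorder::RecorderState state) {
        if (state == QMediaRecorder::RecordingState) {
            // Report the orientation in effect when recording starts, so the
            // system UI stays aligned with the locked camera orientation.
            window->reportContentOrientationChange(window->screen()->orientation());
        } else {
            // Follow the device orientation again once recording has stopped.
            window->reportContentOrientationChange(Qt::PrimaryOrientation);
        }
    });
}
\endcode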

\section2 Still Images

After setting up a viewfinder and finding something photogenic, we need to
initialize a new QImageCapture object to capture an image. All that is then
needed is to start the camera and capture the image.

    \snippet multimedia-snippets/camera.cpp Camera overview capture

\section2 Movies

Previously we saw code that allowed the capture of a still image. Recording
video requires the use of a \l QMediaRecorder object.

To record video we need to create a camera object as before, but this time,
as well as creating a viewfinder, we also initialize a media recorder object.

    \snippet multimedia-snippets/camera.cpp Camera overview movie

Signals from the \l QMediaRecorder can be connected to slots to react to
changes in the state of the encoding process or to error events. Recording
starts when \l QMediaRecorder::record() is called, which causes the
\l{QMediaRecorder::}{recorderStateChanged()} signal to be emitted. Recording
is controlled by the record(), stop(), and pause() slots of QMediaRecorder.
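
For illustration only (not one of the shared snippets), wiring up those signals
and starting a recording could look like the following sketch; the \c recorder
object is assumed to already be attached to a capture session:

\code
#include <QDebug>
#include <QMediaRecorder>

void startRecording(QMediaRecorder *recorder)
{
    // React to state changes (stopped, recording, paused).
    QObject::connect(recorder, &QMediaRecorder::recorderStateChanged,
                     [](QMediaRecorder::RecorderState state) {
        qDebug() << "Recorder state changed to" << state;
    });

    // React to errors reported by the encoding pipeline.
    QObject::connect(recorder, &QMediaRecorder::errorOccurred,
                     [](QMediaRecorder::Error error, const QString &errorString) {
        qDebug() << "Recording error" << error << errorString;
    });

    recorder->record();   // later: recorder->pause() or recorder->stop()
}
\endcode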

\section2 Controlling the Imaging Pipeline

Now that the basics of capturing images and movies are covered, there are a
number of ways to control the imaging pipeline to implement some interesting
techniques. As explained earlier, several physical and electronic elements
combine to determine the final images, and you can control them with
different classes.

\section3 Focus and Zoom

QCamera allows you to set the general focus policy by means of the
\l {QCamera::FocusMode}{FocusMode} enum. This covers settings such as
\l {QCamera::FocusModeAuto} and \l {QCamera::FocusModeInfinity}.

For camera hardware that supports it, \l QCamera::FocusModeAutoNear allows
imaging of things that are close to the sensor. This is useful in applications
like barcode recognition, or business card scanning.
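
As an illustrative sketch (not one of the shared snippets), enabling close-up
focusing could look like this, assuming \c camera is a QCamera that is already
part of a capture session:

\code
#include <QCamera>

void enableCloseUpFocus(QCamera *camera)
{
    // Only switch modes if the hardware actually supports near focusing.
    if (camera->isFocusModeSupported(QCamera::FocusModeAutoNear))
        camera->setFocusMode(QCamera::FocusModeAutoNear);
    else
        camera->setFocusMode(QCamera::FocusModeAuto);
}
\endcode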

In addition to focus, QCamera allows you to control any available zoom
functionality using \l{QCamera::setZoomFactor}{setZoomFactor()} or
\l{QCamera::zoomTo}{zoomTo()}. The available zoom range might be limited or
entirely fixed to unity (1:1). The allowed range can be checked with
\l{QCamera::minimumZoomFactor}{minimumZoomFactor()} and
\l{QCamera::maximumZoomFactor}{maximumZoomFactor()}.
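
For illustration (not one of the shared snippets), a requested zoom factor
could be clamped to the supported range like this, assuming \c camera is an
active QCamera:

\code
#include <QCamera>
#include <algorithm>

void zoomSmoothly(QCamera *camera, float requestedFactor)
{
    // Clamp the request to what the hardware reports as supported.
    const float factor = std::clamp(requestedFactor,
                                    camera->minimumZoomFactor(),
                                    camera->maximumZoomFactor());

    // The second argument is the zoom rate, in powers of two per second.
    camera->zoomTo(factor, 1.0f);
}
\endcode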

\section3 Exposure, Shutter Speed and Flash

There are a number of settings that affect the amount of light that hits the
camera sensor, and hence the quality of the resulting image.

The main settings for automatic image taking are the
\l {QCamera::ExposureMode}{exposure mode} and \l {QCamera::FlashMode}{flash mode}.
Several other settings, such as the ISO setting and the exposure time, are
usually managed automatically, but can also be overridden if desired.

Finally, you can control the flash hardware (if present) using QCamera. In
some cases the hardware may also double as a torch.
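
As an illustrative sketch (not one of the shared snippets), selecting automatic
flash and a night-oriented exposure mode could look like this, with \c camera
again assumed to be an active QCamera:

\code
#include <QCamera>

void configureLowLightCapture(QCamera *camera)
{
    // Let the camera fire the flash only when it decides it is needed.
    if (camera->isFlashModeSupported(QCamera::FlashAuto))
        camera->setFlashMode(QCamera::FlashAuto);

    // Prefer a night exposure program when the hardware offers one.
    if (camera->isExposureModeSupported(QCamera::ExposureNight))
        camera->setExposureMode(QCamera::ExposureNight);

    // ISO and exposure time stay automatic unless explicitly overridden,
    // for example with setManualIsoSensitivity() or setManualExposureTime().
}
\endcode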

\target camera_image_processing
\section3 Image Processing

The QCamera class lets you adjust the image processing part of the pipeline.
These settings include:
\list
    \li \l {QCamera::WhiteBalanceMode}{white balance}
    (also known as color temperature)
\endlist

Most cameras support automatic settings for all of these, so you shouldn't need
to adjust them unless the user wants a specific setting.
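
For illustration only (not one of the shared snippets), overriding the automatic
white balance for an indoor scene could look like this, with \c camera assumed
to be an active QCamera:

\code
#include <QCamera>

void useFluorescentWhiteBalance(QCamera *camera)
{
    // Fall back to the automatic mode if the preset is not supported.
    if (camera->isWhiteBalanceModeSupported(QCamera::WhiteBalanceFluorescent))
        camera->setWhiteBalanceMode(QCamera::WhiteBalanceFluorescent);
    else
        camera->setWhiteBalanceMode(QCamera::WhiteBalanceAuto);
}
\endcode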

\section1 Examples

There are both C++ and QML examples available.

\section2 C++ Examples

\annotatedlist camera_examples

\section2 QML Examples

\annotatedlist camera_examples_qml

\section1 Reference Documentation

\section2 C++ Classes

\annotatedlist multimedia_camera

\section2 QML Types

\annotatedlist camera_qml

*/