This document defines a set of JavaScript APIs that allow local media, including audio and video, to be requested from a platform.
This document is not complete. It is subject to major changes and, while early experimentation is encouraged, it is therefore not intended for implementation. The API is based on preliminary work done in the WHATWG.
Access to multimedia streams (video, audio, or both) from local devices (video cameras, microphones, Web cams) can have a number of uses, such as real-time communication, recording, and surveillance.
This document defines the APIs used to get access to local devices that can generate multimedia stream data. This document also defines the MediaStream API by which JavaScript is able to manipulate the stream data or otherwise process it.
This specification defines conformance criteria that apply to a single product: the user agent that implements the interfaces that it contains.
Conformance requirements phrased as algorithms or specific steps may be implemented in any manner, so long as the end result is equivalent. (In particular, the algorithms defined in this specification are intended to be easy to follow, and not intended to be performant.)
Implementations that use ECMAScript to implement the APIs defined in this specification must implement them in a manner consistent with the ECMAScript Bindings defined in the Web IDL specification [[!WEBIDL]], as this specification uses that specification and terminology.
The EventHandler interface represents a callback used for event handlers as defined in [[!HTML5]].

The concepts queue a task and fire a simple event are defined in [[!HTML5]].

The terms event handlers and event handler event types are defined in [[!HTML5]].
A source is the "thing" providing the source of a media stream track. The source is the broadcaster of the media itself. A source can be a physical webcam, microphone, local video or audio file from the user's hard drive, network resource, or static image.
Some sources have an identifier which must be unique to the application (un-guessable by another application) and persistent between application sessions (i.e., the identifier for a given source device and application must stay the same across sessions, but must not be guessable by another application). Sources that must have an identifier are camera and microphone sources; local file sources are not required to have an identifier. Source identifiers let the application save, identify the availability of, and directly request specific sources.

Other than the identifier, other bits of source identity are never directly available to the application until the user agent connects a source to a track. Once a source has been "released" to the application (either via a permissions UI, pre-configured allow-list, or some other release mechanism) the application will be able to discover additional source-specific capabilities.
Sources do not have constraints -- tracks have constraints. When a source is connected to a track, it must, possibly in combination with UA processing (e.g., downsampling), conform to the constraints present on that track (or set of tracks).
Sources will be released (un-attached) from a track when the track is ended for any reason.
On the MediaStreamTrack object, sources are represented by a sourceType attribute. The behavior of APIs associated with the source's capabilities and settings changes depending on the source type.
Sources have capabilities and settings. The capabilities and settings are "owned" by the source and are common to any (multiple) tracks that happen to be using the same source (e.g., if two different track objects bound to the same source ask for the same capability or setting information, they will get back the same answer).
A setting refers to the immediate, current value of the source's (optionally constrained) capabilities. Settings are always read-only.
A source's settings can change dynamically over time due to environmental conditions, sink configurations, or constraint changes. A source's settings must always conform to the current set of mandatory constraints that all of the tracks it is bound to have defined, and should do its best to conform to the set of optional constraints specified.
Although settings are a property of the source, they are only exposed to the application through the tracks attached to the source. The ConstrainablePattern interface provides this exposure.
A conforming user-agent must support all the setting names defined in this spec.
Source capabilities are the intrinsic "features" of a source object. For each source setting, there is a corresponding capability that describes whether it is supported by the source and if so, what the range of supported values are. As with settings, capabilities are exposed to the application via the ConstrainablePattern interface.
The values of the supported capabilities must be normalized to the ranges and enumerated types defined in this specification.
A getCapabilities() call on a track returns the same underlying per-source capabilities for all tracks connected to the source.
Source capabilities are effectively constant. Applications should be able to depend on a specific source having the same capabilities for any session.
Constraints are an optional track feature for restricting the range of allowed variability on a source. Without provided track constraints, implementations are free to select a source's settings from the full ranges of its supported capabilities, and to adjust those settings at any time for any reason.
Constraints are exposed on tracks via the ConstrainablePattern interface, which includes an API for dynamically changing constraints. Note that getUserMedia() also permits an initial set of constraints to be applied when the track is first obtained.
It is possible for two tracks sharing the same source to apply contradictory constraints. The ConstrainablePattern interface supports the calling of an error handler when a conflicting constraint is requested. After successful application of constraints on a track (and its associated source), if at any later time the track becomes overconstrained, the user agent MUST change the track to the muted state.
A correspondingly-named constraint exists for each corresponding source setting name and capability name. In general, user agents will have more flexibility to optimize the media streaming experience the fewer constraints are applied, so application authors are strongly encouraged to use mandatory constraints sparingly.
RTCPeerConnection is defined in [[!WEBRTC10]].

The two main components in the MediaStream API are the MediaStreamTrack and the MediaStream interfaces. The MediaStreamTrack object represents media originating from a single media source in the user agent, e.g. video from a web camera. A MediaStream is used to group several MediaStreamTrack objects into one unit that can be rendered in a media element or recorded.

Each MediaStream can contain zero or more MediaStreamTrack objects. All tracks in a MediaStream are intended to be synchronized when rendered. Different MediaStream objects do not need to be synchronized.
While the intent is to synchronize tracks, it could be better in some circumstances to permit tracks to lose synchronization. In particular, when tracks are remotely sourced and real-time [[!WEBRTC10]], it can be better to allow loss of synchronization than to accumulate delays or risk glitches and other artifacts. Implementations are expected to understand the implications of choices regarding synchronization of playback and the effect that these have on user perception.
A MediaStreamTrack represents content comprising one or more channels, where the channels have a defined, well-known relationship to each other (such as a stereo or 5.1 audio signal). A channel is the smallest unit considered in this API specification.

A MediaStream object has an input and an output that represent the combined input and output of all the object's tracks. The output of the MediaStream controls how the object is rendered, e.g., what is saved if the object is recorded to a file or what is displayed if the object is used in a video element.
A new MediaStream object can be created from accessible media sources (ones that do not require any additional permissions) using the MediaStream() constructor. The constructor argument can either be an existing MediaStream object, in which case all the tracks of the given stream are added to the new MediaStream object, or an array of MediaStreamTrack objects. The latter form makes it possible to compose a stream from different source streams.

Both MediaStream and MediaStreamTrack objects can be cloned. This allows for greater control since the separate instances can be manipulated and consumed individually. A cloned MediaStream contains clones of all member tracks from the original stream.
When a MediaStream object is being generated from a local file (as opposed to a live audio/video source), the user agent SHOULD stream the data from the file in real time, not all at once. The MediaStream object is also used in contexts outside getUserMedia, such as [[!WEBRTC10]]. In both cases, ensuring a realtime stream reduces the ease with which pages can distinguish live video from pre-recorded video, which can help protect the user's privacy.
The MediaStream() constructor composes a new stream out of existing tracks. It takes an optional argument of type MediaStream or an array of MediaStreamTrack objects. When the constructor is invoked, the UA must run the following steps:

1. Let stream be a newly constructed MediaStream object.
2. Initialize stream's id attribute to a newly generated value.
3. If the constructor's argument is present, run the sub steps that correspond to the argument type:
   - An array of MediaStreamTrack objects: Run the following sub steps for each MediaStreamTrack in the array:
     1. Add track: Let track be the MediaStreamTrack about to be processed.
     2. Add track to stream's track set.
   - A MediaStream object: Run the sub steps labeled Add track (above) for every MediaStreamTrack in the argument stream's track set.
4. If stream's track set is empty or only contains ended tracks, set stream's active attribute to false, otherwise set it to true.
5. Return stream.
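A minimal sketch of how an application might use this constructor; the variable names (cameraStream, micStream) are illustrative and are assumed to be streams obtained earlier, e.g. via getUserMedia():

```javascript
// Compose a new stream from the video track of one existing stream and
// the audio track of another.
var composedStream = new MediaStream([
  cameraStream.getVideoTracks()[0],
  micStream.getAudioTracks()[0]
]);

// Passing an existing MediaStream instead copies all of its tracks into
// the newly constructed stream.
var copyStream = new MediaStream(cameraStream);
```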
The tracks of a MediaStream are stored in a track set. The track set MUST contain the MediaStreamTrack objects that correspond to the tracks of the stream. The relative order of the tracks in the set is user agent defined and the API will never put any requirements on the order. The proper way to find a specific MediaStreamTrack object in the set is to look it up by its id.
An object that reads data from the output of a MediaStream is referred to as a MediaStream consumer. The list of MediaStream consumers currently includes the media elements [[HTML5]], RTCPeerConnection [[WEBRTC10]], MediaRecorder [[mediastream-rec]] and ImageCapture [[mediastream-imagecap]].

MediaStream consumers must be able to handle tracks being added and removed. This behavior is specified per consumer.
A MediaStream object is said to be inactive when it does not have any tracks or all tracks belonging to the stream have ended. Otherwise the stream is active. A MediaStream can start its life as inactive if it is constructed without any tracks.
When a MediaStream goes from being active to inactive, the user agent MUST queue a task that sets the object's active attribute to false and fire a simple event named inactive at the object. When a MediaStream goes from being inactive to active, the user agent MUST queue a task that sets the object's active attribute to true and fire a simple event named active at the object.
If the stream's activity status changed due to a user request, the task source for this task is the user interaction task source. Otherwise the task source for this task is the networking task source.
When a MediaStream object is created, the user agent MUST generate an identifier string, and MUST initialize the object's id attribute to that string. A good practice is to use a UUID, which is 36 characters long in its canonical form.

The id attribute MUST return the value to which it was initialized when the object was created.
Returns a sequence of MediaStreamTrack objects representing the audio tracks in this stream.

The getAudioTracks() method MUST return a sequence that represents a snapshot of all the MediaStreamTrack objects in this stream's track set whose kind is equal to "audio". The conversion from the track set to the sequence is user agent defined and the order does not have to be stable between calls.
Returns a sequence of MediaStreamTrack objects representing the video tracks in this stream.

The getVideoTracks() method MUST return a sequence that represents a snapshot of all the MediaStreamTrack objects in this stream's track set whose kind is equal to "video". The conversion from the track set to the sequence is user agent defined and the order does not have to be stable between calls.
Returns a sequence of MediaStreamTrack objects representing all the tracks in this stream.

The getTracks() method MUST return a sequence that represents a snapshot of all the MediaStreamTrack objects in this stream's track set, regardless of kind. The conversion from the track set to the sequence is user agent defined and the order does not have to be stable between calls.
The getTrackById() method MUST return the first MediaStreamTrack object in this stream's track set whose id is equal to trackId. The method MUST return null if no track matches the trackId argument.
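A minimal sketch of enumerating tracks and looking one up by id; "stream" is an assumed MediaStream obtained elsewhere:

```javascript
// List every track's kind, label and id.
stream.getTracks().forEach(function (track) {
  console.log(track.kind + ": " + track.label + " (" + track.id + ")");
});

var firstAudio = stream.getAudioTracks()[0];
if (firstAudio) {
  // getTrackById() returns the matching track, or null if none matches.
  var sameTrack = stream.getTrackById(firstAudio.id);
}
```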
Adds the given MediaStreamTrack to this MediaStream.

When the addTrack() method is invoked, the user agent MUST run the following steps:

1. Let track be the MediaStreamTrack argument and stream this MediaStream object.
2. If track is already in stream's track set, then abort these steps.
3. Add track to stream's track set.
Removes the given MediaStreamTrack from this MediaStream.

When the removeTrack() method is invoked, the user agent MUST remove the track, indicated by the method's argument, from the stream's track set, if present.
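A minimal sketch of moving an audio track between two streams; sourceStream and targetStream are assumed MediaStream objects:

```javascript
var audioTrack = sourceStream.getAudioTracks()[0];
if (audioTrack) {
  targetStream.addTrack(audioTrack);    // no-op if already in the track set
  sourceStream.removeTrack(audioTrack); // no-op if not present
}
```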
Clones the given MediaStream and all its tracks.

When the MediaStream.clone() method is invoked, the user agent MUST run the following steps:

1. Let streamClone be a newly constructed MediaStream object.
2. Initialize streamClone's id attribute to a newly generated value.
3. Let clonedTracks be a list that contains the result of running MediaStreamTrack.clone() on all the tracks in the stream on which this method was called.
4. Set streamClone's track set to clonedTracks.
5. Return streamClone.
The MediaStream.active attribute returns true if the MediaStream is active (see inactive), and false otherwise.

When a MediaStream object is created, its active attribute MUST be set to true, unless stated otherwise (for example by the MediaStream() constructor algorithm).
The event handler for the event type active MUST be supported by all objects implementing the MediaStream interface.

The event handler for the event type inactive MUST be supported by all objects implementing the MediaStream interface.

The event handler for the event type addtrack MUST be supported by all objects implementing the MediaStream interface.

The event handler for the event type removetrack MUST be supported by all objects implementing the MediaStream interface.

A MediaStreamTrack object represents a media source in the user agent. Several MediaStreamTrack objects can represent the same media source, e.g., when the user chooses the same camera in the UI shown by two consecutive calls to getUserMedia().
The data from a MediaStreamTrack object does not necessarily have a canonical binary form; for example, it could just be "the video currently coming from the user's video camera". This allows user agents to manipulate media in whatever fashion is most suitable on the user's platform.

A script can indicate that a track no longer needs its source with the MediaStreamTrack.stop() method. When all tracks using a source have been stopped, the permission given for that source is revoked and the source is stopped. If the data is being generated from a live source (e.g., a microphone or camera), then the user agent SHOULD remove any active "on-air" indicator for that source. If the data is being generated from a prerecorded source (e.g. a video file), any remaining content in the file is ignored. An implementation may use a per-source reference count to keep track of source usage, but the specifics are out of scope for this specification.
If there is no stored permission to use that source, the UA SHOULD also remove the "permission granted" indicator for the source.
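A minimal sketch of a page releasing the devices it has acquired; "stream" is an assumed MediaStream previously obtained via getUserMedia():

```javascript
// Stopping every track releases the underlying sources, allowing the UA
// to turn off any "on-air" indicator for them.
stream.getTracks().forEach(function (track) {
  track.stop();
});
```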
A MediaStreamTrack has two stages in its life-cycle: live and ended. A newly created MediaStreamTrack can be in either stage depending on how it was created. For example, cloning an ended track results in a new ended track. The current stage is reflected by the object's readyState attribute.

In the live state, the track is active and media is available for use by consumers (but may be replaced by zero-information-content if the MediaStreamTrack is muted or disabled, see below).
A muted or disabled MediaStreamTrack renders either silence (audio), black frames (video), or a zero-information-content equivalent. For example, a video element sourced by a muted or disabled MediaStreamTrack (contained within a MediaStream) is playing, but the rendered content is the muted output.

When all tracks connected to a source are muted or disabled, the "on-air" or "recording" indicator for that source can be turned off; when a track is no longer muted or disabled, it MUST be turned back on.

The muted/unmuted state of a track reflects whether the source provides any media at this moment. The enabled/disabled state is under application control and determines whether the track outputs media (to its consumers). Hence, media from the source only flows when a MediaStreamTrack object is both unmuted and enabled.
A MediaStreamTrack is muted when the source is temporarily unable to provide the track with data. A track can be muted by a user. Often this action is outside the control of the application; it could be the result of the user hitting a hardware switch, or toggling a control in the operating system or browser chrome. A track can also be muted by the user agent.

Applications are able to enable or disable a MediaStreamTrack to prevent it from rendering media from the source. A muted track will, however, regardless of the enabled state, render silence and blackness. A disabled track is logically equivalent to a muted track, from a consumer point of view.

For a newly created MediaStreamTrack object, the following applies: the track is always enabled unless stated otherwise (for example when cloned) and the muted state reflects the state of the source at the time the track is created.
A MediaStreamTrack object is said to end when the source of the track is disconnected or exhausted.

A MediaStreamTrack can be detached from its source, meaning that the track is no longer dependent on the source for media data. If no other MediaStreamTrack is using the same source, the source will be stopped. MediaStreamTrack attributes such as kind and label MUST NOT change values when the source is detached.
When a MediaStreamTrack object ends for any reason (e.g., because the user rescinds the permission for the page to use the local camera, because the data comes from a finite file and the file's end has been reached and the user has not requested that it be looped, because the application invoked the stop() method on the MediaStreamTrack object, or because the UA has instructed the track to end for any reason), it is said to be ended.

When a MediaStreamTrack track ends for any reason other than the stop() method being invoked, the user agent MUST queue a task that runs the following steps:

1. If the track's readyState attribute already has the value ended, then abort these steps.
2. Set track's readyState attribute to ended.
3. Detach track's source.
4. Fire a simple event named ended at the object.

If the end of the stream was reached due to a user request, the event source for this event is the user interaction event source.
There are two concepts related to the media flow for a live MediaStreamTrack: muted or not, and enabled or disabled.

Muted refers to the input to the MediaStreamTrack. If live samples are not made available to the MediaStreamTrack, it is muted. Muted is outside the control of the application, but can be observed by the application by reading the muted attribute and listening to the associated events mute and unmute. There can be several reasons for a MediaStreamTrack to be muted: the user pushing a physical mute button on the microphone, the user toggling a control in the operating system, the user clicking a mute button in the browser chrome, the UA (on behalf of the user) muting, etc.

Enabled/disabled, on the other hand, is available to the application to control (and observe) via the enabled attribute.

The result for the consumer is the same in the sense that whenever a MediaStreamTrack is muted or disabled (or both) the consumer gets zero-information-content, which means silence for audio and black frames for video. In other words, media from the source only flows when a MediaStreamTrack object is both unmuted and enabled. For example, a video element sourced by a muted or disabled MediaStreamTrack (contained in a MediaStream) is playing but rendering blackness.

For a newly created MediaStreamTrack object, the following applies: the track is always enabled unless stated otherwise (for example when cloned) and the muted state reflects the state of the source at the time the track is created.
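A minimal sketch of the distinction in practice: the application toggles enabled, while muted can only be observed. "stream" is an assumed MediaStream obtained earlier, and the button element is illustrative:

```javascript
var videoTrack = stream.getVideoTracks()[0];

// Application-controlled: a disabled track renders black frames to its
// consumers.
document.querySelector("#toggle-video").onclick = function () {
  videoTrack.enabled = !videoTrack.enabled;
};

// Source-controlled: the application can only observe mute/unmute.
videoTrack.onmute = function () {
  console.log("Source is temporarily unable to provide data");
};
videoTrack.onunmute = function () {
  console.log("Source is providing data again");
};
```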
Constraints are set on tracks and may affect sources. Whether Constraints were provided at track initialization time or need to be established later at runtime, the APIs defined in the ConstrainablePattern Interface allow the retrieval and manipulation of the constraints currently established on a track.

Each track maintains an internal version of the Constraints structure, namely a mandatory set of constraints (no duplicates), and an optional ordered list of individual constraint objects (which may contain duplicates). The internally stored constraint structure is exposed to the application by the constraints attribute, and may be modified by the applyConstraints() method.
When applyConstraints() is called, a user agent MUST queue a task to evaluate those changes when the task queue is next serviced. Similarly, if the sourceType changes, then the user agent MUST perform the same actions to re-evaluate the constraints of each track affected by that source change.

If the MediaStreamError event named 'overconstrained' is fired, the track MUST be muted until either new satisfiable constraints are applied or the existing constraints become satisfiable.
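A minimal sketch of constraining a live track after it has been obtained. The callback-style applyConstraints() signature shown here (constraints, success callback, error callback) is an assumption inferred from the ConstraintErrorCallback mentioned in this specification; the authoritative shape is defined in the ConstrainablePattern Interface. "videoTrack" is an assumed live video MediaStreamTrack:

```javascript
videoTrack.applyConstraints(
  { width: { min: 640 }, height: { min: 480 } },
  function () {
    console.log("Constraints applied");
  },
  function (error) {
    // None of the new constraints were applied.
    console.log("Rejected constraint: " + error.constraintName);
  }
);

// If the track later becomes overconstrained, it is muted and an
// 'overconstrained' event fires on it.
videoTrack.onoverconstrained = function () {
  console.log("Track muted: source can no longer satisfy its constraints");
};
```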
The MediaStreamTrack.kind attribute MUST return the string "audio" if the object represents an audio track, or "video" if the object represents a video track.

Unless a MediaStreamTrack object is created as part of a special-purpose algorithm that specifies how the track id must be initialized, the user agent MUST generate an identifier string and initialize the object's id attribute to that string. See MediaStream.id for guidelines on how to generate such an identifier.

An example of an algorithm that specifies how the track id must be initialized is the algorithm to represent an incoming network component with a MediaStreamTrack object. [[!WEBRTC10]]

The MediaStreamTrack.id attribute MUST return the value to which it was initialized when the object was created.

User agents MAY label audio and video sources (e.g., "Internal microphone" or "External USB Webcam"). The MediaStreamTrack.label attribute MUST return the label of the object's corresponding source, if any. If the corresponding source has or had no label, the attribute MUST instead return the empty string.
The MediaStreamTrack.enabled attribute controls the enabled state for the object.

On getting, the attribute MUST return the last value to which it was set. On setting, it MUST be set to the new value, regardless of whether the MediaStreamTrack object has been detached from its source or not.

Thus, after a MediaStreamTrack is detached from its source, its enabled attribute still changes value when set; it just doesn't do anything with that new value.

The MediaStreamTrack.muted attribute MUST return true if the track is muted, and false otherwise.
The event handler for the event type mute MUST be supported by all objects implementing the MediaStreamTrack interface.

The event handler for the event type unmute MUST be supported by all objects implementing the MediaStreamTrack interface.

If the track is backed by a read-only source (such as a file), or the track's source is shared so that this track cannot modify any of the source's settings, the readonly attribute MUST return the value true. Otherwise, it must return the value false.

If the track is sourced by a remote source (for example, an RTCPeerConnection), the remote attribute MUST return the value true. Otherwise, it must return the value false.

The readyState attribute represents the state of the track. It MUST return the value to which the user agent last set it.

The event handler for the event type ended MUST be supported by all objects implementing the MediaStreamTrack interface.

Clones the given MediaStreamTrack.

When the MediaStreamTrack.clone() method is invoked, the user agent MUST run the following steps:

1. Let trackClone be a newly constructed MediaStreamTrack object.
2. Initialize trackClone's id attribute to a newly generated value.
3. Let trackClone inherit this track's underlying source, kind, label, readyState, and enabled attributes, as well as its currently active constraints.
4. Return trackClone.
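A minimal sketch of why cloning a track is useful: the clone can be manipulated independently of the original. "videoTrack" is an assumed live MediaStreamTrack:

```javascript
// Disable only the clone (e.g., to hide a local preview) while the
// original track keeps flowing to its consumers.
var previewTrack = videoTrack.clone();
previewTrack.enabled = false;
var previewStream = new MediaStream([previewTrack]);
```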
When a MediaStreamTrack object's stop() method is invoked, the user agent MUST run the following steps:

1. Let track be the current MediaStreamTrack object.
2. If track is sourced by a non-local source, then abort these steps.
3. Set track's readyState attribute to ended.
4. Detach track's source.

The task source for the tasks queued for the stop() method is the DOM manipulation task source.
See ConstrainablePattern Interface for the definition of this method.
See ConstrainablePattern Interface for the definition of this event handler.
live: The track is active (the track's underlying media source is making a best-effort attempt to provide data in real time). The output of a track in the live state can be switched on and off with the enabled attribute.

ended: The track has ended (the track's underlying media source is no longer providing data, and will never provide more data for this track). Once a track enters this state, it never exits it. For example, a video track in a MediaStream ends if the user unplugs the USB web camera that acts as the track's media source.

camera: The source of the MediaStreamTrack is a local video-producing camera source.

microphone: The source of the MediaStreamTrack is a local audio-producing microphone source.

The addtrack and removetrack events use the MediaStreamTrackEvent interface.
Firing a track event named e with a MediaStreamTrack track means that an event with the name e, which does not bubble (except where otherwise stated) and is not cancelable (except where otherwise stated), and which uses the MediaStreamTrackEvent interface with the track attribute set to track, MUST be created and dispatched at the given target.

Constructs a new MediaStreamTrackEvent.

The track attribute represents the MediaStreamTrack object associated with the event.
Browsers provide a media pipeline from sources to sinks. In a browser, sinks are the <img>, <video> and <audio> tags. Traditional sources include streamed content, files, and web resources. The media produced by these sources typically does not change over time - these sources can be considered to be static.
The sinks that display these sources to the user (the actual tags themselves) have a variety of controls for manipulating the source content. For example, an <img> tag scales down a huge source image of 1600x1200 pixels to fit in a rectangle defined with width="400" and height="300".
The getUserMedia API adds dynamic sources such as microphones and cameras - the characteristics of these sources can change in response to application needs. These sources can be considered to be dynamic in nature. A <video> element that displays media from a dynamic source can either perform scaling or it can feed back information along the media pipeline and have the source produce content more suitable for display.
Note: This sort of feedback loop is obviously just enabling an "optimization", but it's a non-trivial gain. This optimization can save battery, allow for less network congestion, etc...
Note that MediaStream sinks (such as <video>, <audio>, and even RTCPeerConnection) will continue to have mechanisms to further transform the source stream beyond that which the Settings, Capabilities, and Constraints described in this specification offer. (The sink transformation options, including those of RTCPeerConnection, are outside the scope of this specification.)
The act of changing or applying a track constraint may affect the settings of all tracks sharing that source and consequently all down-level sinks that are using that source. Many sinks may be able to take these changes in stride, such as the <video> element or RTCPeerConnection. Others, like the Recorder API, may fail as a result of a source setting change.

The RTCPeerConnection is an interesting object because it acts simultaneously as both a sink and a source for over-the-network streams. As a sink, it has source transformational capabilities (e.g., lowering bit-rates, scaling-up or down resolutions, adjusting frame-rates), and as a source it could have its own settings changed by a track source (though in this specification sources with the remote attribute set to true do not consider the current constraints applied to a track).
To illustrate how changes to a given source impact various sinks, consider the following example. This example only uses width and height, but the same principles apply to any of the Settings exposed in this specification. In the first figure a home client has obtained a video source from its local video camera. The source's width and height settings are 800 pixels by 600 pixels, respectively. Three MediaStream objects on the home client contain tracks that use this same sourceId. The three media streams are connected to three different sinks: a <video> element (A), another <video> element (B), and a peer connection (C). The peer connection is streaming the source video to an away client. On the away client there are two media streams with tracks that use the peer connection as a source. These two media streams are connected to two <video> element sinks (Y and Z).
Note that at this moment, all of the sinks on the home client must apply a transformation to the original source's provided dimension settings. A is scaling the video up (resulting in loss of quality), B is scaling the video down, and C is also scaling the video up slightly for sending over the network. On the away client, sink Y is scaling the video way down, while sink Z is not applying any scaling.
Using the ConstrainablePattern interface, one of the tracks requests a higher resolution (1920 by 1200 pixels) from the home client's video source.
Note that the source change immediately affects all of the tracks and sinks on the home client, but does not impact any of the sinks (or sources) on the away client. With the increase in the home client source video's dimensions, sink A no longer has to perform any scaling, while sink B must scale down even further than before. Sink C (the peer connection) must now scale down the video in order to keep the transmission constant to the away client.
While not shown, an equally valid settings change request could be made of the away client video source (the peer connection on the away client's side). This would not only impact sink Y and Z in the same manner as before, but could lead to re-negotiation with the peer connection on the home client in order to alter the transformation that it is applying to the home client's video source. Such a change is NOT REQUIRED to change anything related to sink A or B or the home client's video source.
Note that this specification does not define a mechanism by which a change to the away client's video source could automatically trigger a change to the home client's video source. Implementations may choose to make such source-to-sink optimizations as long as they only do so within the constraints established by the application, as the next example demonstrates.
It is fairly obvious that changes to a given source will impact sink consumers. However, in some situations changes to a given sink may also be cause for implementations to adjust a source's settings. This is illustrated in the following figures. In the first figure below, the home client's video source is sending a video stream sized at 1920 by 1200 pixels. The video source is also unconstrained, such that the exact source dimensions are flexible as far as the application is concerned. Two MediaStream objects contain tracks with the same sourceId, and those MediaStreams are connected to two different <video> element sinks A and B. Sink A has been sized to width="1920" and height="1200" and is displaying the source's video content without any transformations. Sink B has been sized smaller and, as a result, is scaling the video down to fit its rectangle of 320 pixels across by 200 pixels down.
When the application changes sink A to a smaller dimension (from 1920 to 1024 pixels wide and from 1200 to 768 pixels tall), the browser's media pipeline may recognize that none of its sinks require the higher source resolution, and needless work is being done both on the part of the source and on sink A. In such a case and without any other constraints forcing the source to continue producing the higher resolution video, the media pipeline MAY change the source resolution:
In the above figure, the home client's video source resolution was changed to the greater of that from sinkA and from sinkB in order to optimize playback. While not shown above, the same behavior could apply to peer connections and other sinks.
It is possible that constraints can be applied to a track
which a source is unable to satisfy, either because the source itself
cannot satisfy the constraint or because the source is already
satisfying a conflicting constraint. When this happens, the
applyConstraints()
call will fail and call the user-provided
ConstraintErrorCallback, without applying any of the new
constraints. Since no change in constraints occurs in this case, there
is also no required change to the source itself as a result of this
condition. Here is an example of this behavior.
In this example, two media streams each have a video track that share the same source. The first track initially has no constraints applied. It is connected to sink N. Sink N has a width and height of 800 by 600 pixels and is scaling down the source's resolution of 1024 by 768 to fit. The other track has a mandatory constraint forcing off the source's fill light; it is connected to sink P. Sink P has a width and height equal to that of the source.
Now, the first track adds a mandatory constraint that the fill light should be forced on. At this point, both mandatory constraints cannot be satisfied by the source (the fill light cannot be simultaneously on and off at the same time). Since this state was caused by the first track's attempt to apply a conflicting constraint, the constraint application fails and there is no change in the source's settings or the constraints on either track.
Let's look at a slightly different situation starting from the same point. In this case, instead of the first track attempting to apply a conflicting constraint, the user physically locks the camera into a mode where the fill light is on. At this point the source can no longer satisfy the second track's mandatory constraint that the fill light be off. The second track is transitioned into the muted state and receives an overconstrained event. At the same time, the source notes that its remaining active sink only requires a resolution of 800 by 600 and so it adjusts its resolution down to match (this is an optional optimization that the user agent is allowed to make given the situation).
At this point, it is the responsibility of the application to address the problem that led to the overconstrained situation, perhaps by removing the fill light mandatory constraint on the second track or by closing the second track altogether and informing the user.
A MediaStream may be assigned to media elements as defined in HTML5 [[HTML5]]. A MediaStream is not preloadable or seekable and represents a simple, potentially infinite, linear media timeline. The timeline starts at 0 and increments linearly in real time as long as the MediaStream is playing. The timeline does not increment when the MediaStream is paused.
UAs that support this specification MUST support the following partial interface, which allows a MediaStream to be assigned directly to a media element.
Holds the MediaStream that provides media for this element.
This attribute overrides both the src
attribute and
any <source> elements. Specifically, if
srcObject
is specified, the UA MUST use it as the
source of media, even if the src
attribute is also
set or <source> children are present. If the value of
srcObject
is replaced or set to null the UA MUST
re-run the
media element load algorithm
We may want to allow direct assignment of other types as well
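A minimal sketch of attaching a stream to a media element via srcObject; "stream" is an assumed MediaStream (e.g., obtained from getUserMedia()):

```javascript
var video = document.querySelector("video");
// srcObject takes precedence over the src attribute and <source> children.
video.srcObject = stream;
video.onloadedmetadata = function () {
  video.play();
};

// Setting srcObject back to null causes the UA to re-run the media
// element load algorithm:
// video.srcObject = null;
```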
The UA runs the media element load algorithm to obtain media for the media element to display. As defined in the [[HTML5]] specification, this algorithm has two basic phases: the resource selection algorithm chooses the resource to play and resolves its URI, and then the resource fetch phase loads the resource. Both of these phases are potentially simplified when using a MediaStream. First of all, srcObject takes priority over other means of specifying the resource, and it provides the object itself rather than a URI. Therefore, there is no need to run the resource selection algorithm. Secondly, when the UA reaches the resource fetch algorithm with a MediaStream, the MediaStream is a local object so there is nothing to fetch. Therefore, the following modifications/restrictions to the media element load algorithm apply:

Whenever the user agent runs the media element load algorithm, if srcObject is specified, the UA must immediately go to the resource fetch phase of the algorithm.

Whenever the user agent runs the media element load algorithm, reaches the resource fetch phase of this algorithm, and determines that the media resource in question is a MediaStream, it MUST immediately abort the resource selection algorithm, setting the media.readyState to HAVE_NOTHING if media is not yet available and to HAVE_ENOUGH_DATA once it is.
For each MediaStreamTrack in the MediaStream, including those that are added after the UA enters the media element load algorithm, the UA MUST create a corresponding AudioTrack or VideoTrack as defined in [[HTML5]]. Since the order in the MediaStream's track set is undefined, no requirements are put on how the AudioTrackList and VideoTrackList are ordered.

The properties of the AudioTrack and VideoTrack objects MUST be initialized as follows:

- Let AudioTrack.id and VideoTrack.id have the value of the corresponding MediaStreamTrack.id attribute.
- Let AudioTrack.kind and VideoTrack.kind be "main".
- Let AudioTrack.label and VideoTrack.label have the value of the corresponding MediaStreamTrack.label attribute.
- Let AudioTrack.language and VideoTrack.language be the empty string.

Let the media resource, represented by the MediaStream object, indicate to the media element load algorithm that all audio tracks and all live video tracks (represented by a MediaStreamTrack with the readyState attribute set to live) should be enabled. This allows the media element load algorithm to set AudioTrack.enabled, VideoTrack.selected and VideoTrackList.selectedIndex accordingly.
(Note that since the MediaStream is potentially endless, the UA does not exit the media element load algorithm until the MediaStream moves from the active to the inactive state.)
If a MediaStreamTrack is removed from a MediaStream that is being played by a media element, the corresponding AudioTrack or VideoTrack MUST be removed as well.
The UA MUST NOT buffer data from a MediaStream. When playing, the UA MUST always play the current data from the stream.
When the MediaStream moves from the active to the inactive state, the UA MUST raise an ended event on the media element and set its ended attribute to true. Note that once ended equals true, the media element will not play media even if new tracks are added to the MediaStream (causing it to return to the active state) unless autoplay is true or the JavaScript restarts the element, e.g., by calling play().

The nature of the MediaStream places certain restrictions on the behavior and attribute values of the associated media element and on the operations that can be performed on it, as shown below:
| Attribute Name | Attribute Type | Valid Values When Using a MediaStream | Additional considerations |
|---|---|---|---|
| currentSrc | DOMString | the empty string | When srcObject is specified the UA MUST set this to the empty string. |
| preload | DOMString | none | A MediaStream cannot be preloaded. |
| buffered | TimeRanges | buffered.length MUST return 0. | A MediaStream cannot be preloaded. Therefore, the amount buffered is always an empty TimeRange. |
| networkState | unsigned short | NETWORK_IDLE | The media element does not fetch the MediaStream so there is no network traffic. |
| readyState | unsigned short | HAVE_NOTHING, HAVE_ENOUGH_DATA | A MediaStream may be created before there is any data available, for example when a stream is received from a remote peer. The value of the readyState of the media element MUST be HAVE_NOTHING before the first media arrives and HAVE_ENOUGH_DATA once the first media has arrived. |
| currentTime | double | Any non-negative value. The initial value is 0 and the value increments linearly in real time whenever the stream is playing. | The value is the current stream position, in seconds. On any attempt to set this attribute, the user agent must throw an InvalidStateError exception. |
| duration | unrestricted double | Infinity | A MediaStream does not have a pre-defined duration. |
| seeking | boolean | false | A MediaStream is not seekable. Therefore, this attribute MUST always have the value false. |
| defaultPlaybackRate | double | 1.0 | A MediaStream is not seekable. Therefore, this attribute MUST always have the value 1.0 and any attempt to alter it MUST fail. |
| playbackRate | double | 1.0 | A MediaStream is not seekable. Therefore, this attribute MUST always have the value 1.0 and any attempt to alter it MUST fail. |
| played | TimeRanges | played.length MUST return 1. played.start(0) MUST return 0. played.end(0) MUST return the last known currentTime. | A MediaStream's timeline always consists of a single range, starting at 0 and extending up to the currentTime. |
| seekable | TimeRanges | seekable.length MUST return 0. | A MediaStream is not seekable. |
| startDate | Date | Not-a-Number (NaN) | A MediaStream does not specify a timeline offset. |
| loop | boolean | true, false | Setting the loop attribute has no effect since a MediaStream has no defined end and therefore cannot be looped. |
All errors defined in this specification implement the following interface:
The name of the error.

This attribute is only used for some types of errors. For MediaStreamError with a name of ConstraintNotSatisfiedError, this attribute MUST be set to the name of the constraint that caused the error.
The following interface is defined for cases when a MediaStreamError is raised as an event:
TODO
The following events fire on MediaStream objects:

| Event name | Interface | Fired when... |
|---|---|---|
| active | Event | The MediaStream became active (see inactive). |
| inactive | Event | The MediaStream became inactive. |
| addtrack | MediaStreamTrackEvent | A new MediaStreamTrack has been added to this stream. Note that this event is not fired when the script directly modifies the tracks of a MediaStream. |
| removetrack | MediaStreamTrackEvent | A MediaStreamTrack has been removed from this stream. Note that this event is not fired when the script directly modifies the tracks of a MediaStream. |
The following events fire on MediaStreamTrack objects:

| Event name | Interface | Fired when... |
|---|---|---|
| mute | Event | The MediaStreamTrack object's source is temporarily unable to provide data. |
| unmute | Event | The MediaStreamTrack object's source is live again after having been temporarily unable to provide data. |
| overconstrained | MediaStreamErrorEvent | This error event fires asynchronously for each affected track (when multiple tracks share the same source) after the user agent has evaluated the current constraints against a given source and found that they cannot be satisfied. Due to being over-constrained, the user agent must mute each affected track. The affected track(s) will remain muted until the application adjusts the constraints to accommodate the source's capabilities. |
| ended | Event | The MediaStreamTrack object's source will no longer provide any data, either because the user revoked the permissions, or because the source device has been ejected, or because the remote peer permanently stopped sending data. |
The following event fires on MediaDevices objects:

| Event name | Interface | Fired when... |
|---|---|---|
| devicechange | Event | The set of media devices, available to the user agent, has changed. The current list of devices can be retrieved with the enumerateDevices() method. |
This section describes an API that the script can use to query the user agent about connected media input and output devices (for example a web camera or a headset).
Returns the MediaDevices object associated with this Navigator object.

The MediaDevices object is the entry point to the API used to examine and get access to media devices available to the user agent.

When a new media input or output device is made available, the user agent MUST queue a task that fires a simple event named devicechange at the MediaDevices object.

The event handler for the event type devicechange MUST be supported by all objects implementing the MediaDevices interface.

Collects information about the user agent's available media input and output devices. The method MUST only return information that the script is authorized to access (TODO expand authorized).
When the enumerateDevices() method is called, the user agent must queue a task that runs the following steps:

1. Let resultCallback be the callback indicated by the method's first argument.
2. If this method has been called previously within this application session, let oldList be the list of MediaDeviceInfo objects that was produced at that call (resultList); otherwise, let oldList be an empty list.
3. Let resultList be an empty list.
4. Probe the user agent for available media devices, and run the following sub steps for each discovered device, device:
   1. If device is represented by a MediaDeviceInfo object in oldList, append that object to resultList, abort these sub steps and continue with the next device (if any).
   2. Let deviceInfo be a new MediaDeviceInfo object to represent device.
   3. If device belongs to the same physical device as a device already represented in oldList or resultList, initialize deviceInfo's groupId member to the groupId value of the existing MediaDeviceInfo object. Otherwise, let deviceInfo's groupId member be a newly generated unique identifier.
   4. Append deviceInfo to resultList.
5. Invoke resultCallback with resultList as its argument.
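A minimal sketch of the callback form described by the algorithm above; note that label may be the empty string when the script is not authorized to see it or the device has no label:

```javascript
navigator.mediaDevices.enumerateDevices(function (devices) {
  devices.forEach(function (info) {
    console.log(info.kind + ": " + info.label +
                " (id=" + info.deviceId + ", group=" + info.groupId + ")");
  });
});

// Re-query when the set of available devices changes.
navigator.mediaDevices.ondevicechange = function () {
  // call enumerateDevices() again to obtain the updated list
};
```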
MediaDeviceInfo objects represent the result of a call to MediaDevices.enumerateDevices().

deviceId: The unique id for the represented device.

kind: Describes the kind of the represented device.

label: A label describing this device (for example "External USB Webcam"). If the device has no associated label, then this dictionary member MUST return the empty string.

groupId: Returns the group identifier of the represented device. Two devices have the same group identifier if they belong to the same physical device; for example a headset.
audioinput: Represents an audio input device; for example a microphone.

audiooutput: Represents an audio output device; for example a pair of headphones.

videoinput: Represents a video input device; for example a webcam.
This section extends NavigatorUserMedia and MediaDevices with APIs to request permission to access media input devices available to the user agent.

This method is kept on NavigatorUserMedia for legacy purposes. See MediaDevices.getUserMedia().

The getSupportedConstraints method is provided to allow the application to determine which constraints the User Agent recognizes.

Returns a dictionary whose members are the constraint keys known to the User Agent for the kind given as argument. A supported constraint MUST be represented by a member whose name is the constraint name and whose value is true. Any constraint names not supported by the User Agent MUST not be present in the returned dictionary.
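A minimal sketch of checking constraint support before making a constraint required. The method is assumed here to be exposed on navigator.mediaDevices and to take the media kind as its argument, as described above:

```javascript
var supported = navigator.mediaDevices.getSupportedConstraints("video");
if (!supported.frameRate) {
  // An unrecognized required constraint would be silently discarded by
  // WebIDL, so fall back rather than relying on it.
  console.log("frameRate is not a recognized constraint on this UA");
}
```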
Prompts the user for permission to use their Web cam or other video or audio input.
(Remove when other issues are removed. This is only here to keep the issues from being renumbered)
The constraints argument is an object of type MediaStreamConstraints.

The successCallback will be invoked with a suitable MediaStream object as its argument if the user accepts valid tracks as described below.

The errorCallback will be invoked if there is a failure in finding valid tracks or if the user denies permission, both as described below.
When the getUserMedia() method is called, the user agent MUST run the following steps:

1. Let constraints be the method's first argument.
2. Let successCallback be the callback indicated by the method's second argument.
3. Let errorCallback be the callback indicated by the method's third argument.
4. Let requestedMediaTypes be the set of media types in constraints with either a dictionary value or a value of "true".
5. If requestedMediaTypes is the empty set, let error be a new MediaStreamError object whose name attribute has the value NotSupportedError and jump to the step labeled Error Task below.
6. Let finalSet be an (initially) empty set.
7. If successCallback is null, abort these steps.
8. For each media type T in requestedMediaTypes, run the following sub steps, where CS is the constraint structure provided for T:
   1. Let candidateSet be all possible tracks of media type T that the browser could return.
   2. If candidateSet is the empty set, let error be a new MediaStreamError object whose name attribute has the value NotFoundError and jump to the step labeled Error Task below.
   3. For each required ('min', 'max', or 'exact') constraint provided for a constraint name in CS:
      1. If the constraint is not supported by the browser, jump to the step labeled Constraint Failure below.
      2. Remove from the candidateSet any track that cannot satisfy the value given for the constraint in CS, if any.
      3. If the candidateSet no longer contains at least one track, jump to the step labeled Constraint Failure below. Otherwise, continue with the next required constraint.
   4. Let the secondPassSet be the current contents of the candidateSet. Note that unknown properties are discarded by WebIDL, which means that unknown/unsupported required constraints will silently disappear. To avoid this being a surprise, application authors are expected to first use the getSupportedConstraints() method as shown in the Examples.
   5. For each constraint key-value pair in the "advanced" sequence of CS, in order:
      1. If the constraint is not supported by the browser, skip it and continue with the next constraint.
      2. Remove from the secondPassSet any tracks that cannot satisfy the value given for the constraint.
      3. If the secondPassSet is now empty, let the secondPassSet be the current contents of the candidateSet. Otherwise, let the candidateSet be the current contents of the secondPassSet.
   6. Let the thirdPassSet be the current contents of the candidateSet.
   7. For all non-required ('ideal' or bare-value) constraints in CS, identify the maximum number of such constraint pairs that could be satisfied by at least one track in thirdPassSet.
   8. The decision of which of these non-required constraints to satisfy is completely up to the user agent, as long as the number of constraints satisfied matches the number identified in the previous step.
   9. For the non-required constraints the user agent has decided to satisfy, remove from the thirdPassSet any tracks that cannot satisfy those constraints.
   10. If the thirdPassSet is now empty, let the thirdPassSet be the current contents of the candidateSet. Otherwise, let the candidateSet be the current contents of the thirdPassSet.
   11. Final: Add the tracks in the candidateSet to the finalSet.
9. Return, and run the remaining steps asynchronously.
10. Optionally, e.g., based on a previously-established user preference, for security reasons, or due to platform limitations, jump to the step labeled Permission Failure below.
11. Prompt the user in a user agent specific manner for permission to provide the entry script's origin with a MediaStream object representing a media stream. The provided media MUST include precisely one track of each media type in requestedMediaTypes from the finalSet. The decision of which tracks to choose from the finalSet is completely up to the user agent and may be determined by asking the user. Once selected, the source of a MediaStreamTrack MUST NOT change.

    Define the event that should be raised when the user agent changes its choice of track.

    User agents are encouraged to default to using the user's primary or system default camera and/or microphone (when possible) to generate the media stream. User agents MAY allow users to use any media source, including pre-recorded media files.

    If the user grants permission to use local recording devices, user agents are encouraged to include a prominent indicator that the devices are "hot" (i.e. an "on-air" or "recording" indicator), as well as a "device accessible" indicator indicating that the page has been granted access to the source.

    If the user denies permission, jump to the step labeled Permission Failure below. If the user never responds, this algorithm stalls on this step.

    If the user grants permission but a hardware error such as an OS/program/webpage lock prevents access, jump to the step labeled Unavailable Failure below.

    If the user grants permission but device access fails for any reason other than those listed above, jump to the step labeled General Failure below.
12. Let stream be the MediaStream object for which the user granted permission.
13. Queue a task to invoke successCallback with stream as its argument.
14. Abort these steps.
15. Permission Failure: Let error be a new MediaStreamError object whose name attribute has the value PermissionDeniedError and jump to the step labeled Error Task below.
16. Constraint Failure: Let error be a new MediaStreamError object whose name attribute has the value ConstraintNotSatisfiedError and whose constraintName attribute is set to the name of the constraint that caused the error, and jump to the step labeled Error Task below.
17. Unavailable Failure: Let error be a new MediaStreamError object whose name attribute has the value SourceUnavailable and jump to the step labeled Error Task below.
18. General Failure: Let error be a new MediaStreamError object whose name attribute has the value AbortError and jump to the step labeled Error Task below.
19. Error Task: Queue a task to invoke errorCallback with error as its argument.
The task source for these tasks is the user interaction task source.
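A minimal sketch of invoking the algorithm above through the legacy NavigatorUserMedia entry point; the error names shown are those used by the failure steps:

```javascript
navigator.getUserMedia(
  { audio: true, video: true },
  function (stream) {
    // Success: the stream contains exactly one audio and one video track.
    document.querySelector("video").srcObject = stream;
  },
  function (error) {
    if (error.name === "ConstraintNotSatisfiedError") {
      console.log("Could not satisfy constraint: " + error.constraintName);
    } else {
      console.log("getUserMedia failed: " + error.name);
    }
  }
);
```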
The MediaStreamConstraints dictionary is used to instruct the UA what sort of MediaStreamTracks to include in the MediaStream returned by getUserMedia().

video: If true, it requests that the returned MediaStream contain a video track. If a Constraints structure is provided, it further specifies the nature and settings of the video track. If false, the MediaStream MUST NOT contain a video track.

audio: If true, it requests that the returned MediaStream contain an audio track. If a Constraints structure is provided, it further specifies the nature and settings of the audio track. If false, the MediaStream MUST NOT contain an audio track.
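A minimal sketch of a MediaStreamConstraints value matching the algorithm above: 'min', 'max' and 'exact' entries are required, while 'ideal' and bare values are not, and "advanced" holds an ordered list of ConstraintSets:

```javascript
var constraints = {
  audio: true,
  video: {
    width:  { min: 640, ideal: 1280 },
    height: { min: 480, ideal: 720 },
    advanced: [{ frameRate: { min: 30 } }]
  }
};
// constraints is then passed as the first argument to getUserMedia().
```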
Add explanation of handleEvent
Add explanation of handleEvent
The user agent is encouraged to reserve resources when it has determined that a given call to getUserMedia() will succeed. It is preferable to reserve the resource prior to invoking the success callback provided by the web page. Subsequent calls to getUserMedia() (in this page or any other) should treat the resource that was previously allocated, as well as resources held by other applications, as busy. Resources marked as busy should not be provided as sources to the current web page, unless specified by the user. Optionally, the user agent may choose to provide a stream sourced from a busy source but only to a page whose origin matches the owner of the original stream that is keeping the source busy.
This document recommends that in the permission grant dialog or device selection interface (if one is present), the user be allowed to select any available hardware as a source for the stream requested by the page (provided the resource is able to fulfill mandatory constraints, if any were specified). Although not specifically recommended as best practice, note that some user agents may support the ability to substitute a video or audio source with local files and other media. A file picker may be used to provide this functionality to the user.
This document also recommends that the user be shown all resources that are currently busy as a result of prior calls to getUserMedia() (in this page or any other page that is still alive) and be allowed to terminate that stream and utilize the resource for the current page instead. If possible in the current operating environment, it is also suggested that resources currently held by other applications be presented and treated in the same manner. If the user chooses this option, the track corresponding to the resource that was provided to the page whose stream was affected must be removed.
A MediaStream may contain more than one video and audio track. This makes it possible to include video from two or more webcams in a single stream object, for example. However, the current API does not allow a page to express a need for multiple video streams from independent sources.
It is recommended that multiple calls to getUserMedia() from the same page be allowed as a way for pages to request multiple, discrete, video or audio streams.
A single call to getUserMedia() will always return a stream with either zero or one audio tracks, and either zero or one video tracks. If a script calls getUserMedia() multiple times before reaching a stable state, this document advises the UI designer that the permission dialogs should be merged, so that the user can give permission for the use of multiple cameras and/or media sources in one dialog interaction. The constraints on each getUserMedia call can be used to decide which stream gets which media sources.
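As a rough, non-normative sketch of that pattern (assuming the facingMode property is supported and using hypothetical handler names), a page might issue two calls so that each resulting stream is tied to a different camera:
// First call: request the user-facing camera.
navigator.mediaDevices.getUserMedia(
  { video: { facingMode: { exact: "user" } } }, gotFrontStream, logError);

// Second call: request the environment-facing camera. The UA may merge
// the two permission prompts into a single dialog.
navigator.mediaDevices.getUserMedia(
  { video: { facingMode: { exact: "environment" } } }, gotBackStream, logError);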
The Constrainable pattern allows its consumers to inspect and adjust
the properties of the object that implements it. It is broken out as a
separate set of definitions so that it can be referred to by other
specifications. The core concept is that of a Capability, which consists
of a property or feature of an object and the set of its possible
values, which may be specified either as a range or as an enumeration.
For example, a camera might be capable of framerates (a property)
between 20 and 50 frames per second (a range) and may be able to be
positioned (a property) facing towards the user, away from the user, or
to the left or right of the user (an enumerated set.) The application
can examine a ConstrainablePattern object's set of Capabilities via the
getCapabilities()
accessor.
The application can select the (range of) values it wants for an
object's Capabilities by means of basic and/or advanced ConstraintSets
and the applyConstraints()
method. A ConstraintSet consists
of the names of one or more properties of the object plus the desired
value (or a range of desired values) for each of them. Each of those
property/value pairs can be considered to be an individual constraint.
For example, the application may set a ConstraintSet containing two
constraints, the first stating that the framerate of a camera be between
30 and 40 frames per second (a range) and the second that the camera
should be facing the user (a specific value). How the individual
constraints interact depends on whether and how they are given in the
basic Constraint structure, which is a ConstraintSet with an additional
'advanced' property, or whether they are in a ConstraintSet in the
advanced list. The behavior is as follows: all 'min', 'max', and 'exact'
constraints in the basic Constraint structure are together treated as
the 'required' set, and if it is not possible to satisfy simultaneously
all of those individual constraints for the indicated property names,
the UA MUST call the errorCallback
. Otherwise, it must
apply the required constraints. Next, it will consider any
ConstraintSets given in the 'advanced' list, in the order in which they
are specified, and will try to satisfy/apply each complete ConstraintSet
(i.e., all constraints in the ConstraintSet together), but will skip a
ConstraintSet if and only if it cannot satisfy/apply it in its entirety.
Next, the UA MUST attempt to apply, individually, any 'ideal' constraint
or a constraint given as a bare value for the property. Of these
properties, it MUST satisfy the largest number that it can, in any
order. Finally, the UA MUST call the successCallback
.
Important note: If JavaScript applications using this API want the
attributes in the constraints to be used by the browser, the JavaScript
code has to first check, via getSupportedConstraints()
,
that all the named properties that are used are supported by the
browser. The reason for this is that WebIDL drops any unsupported names
from the dictionary holding the constraints, so the browser does not see
them and the unsupported names end up being silently ignored. This will
cause confusing programming errors as the JavaScript code will be
setting constraints but the browser will be ignoring them. Browsers that
support (recognize) the name of a required constraint but cannot satisfy
it will generate an error, while browsers that do not support the given
name of the constraint will not generate an error.
The definition and behavior of 'ideal', along with that of bare values in the basic constraint structure (which are assumed to mean 'ideal'), have not yet been agreed upon, or even thoroughly discussed.
The following examples may help to understand how constraints work. The first shows a basic Constraint structure. Three constraints are given, each of which the UA will attempt to satisfy individually. Depending upon the resolutions available for this camera, it is possible that not all three constraints can be satisfied at the same time. If so, the user agent will satisfy two if it can, or only one if not even two constraints can be satisfied together. Note that if not all three can be satisfied simultaneously, it is possible that there is more than one combination of two constraints that could be satisfied. If so, the user agent will choose.
var supports = navigator.mediaDevices.getSupportedConstraints("video");
if (!supports["aspectRatio"]) {
  // Treat like an error.
}
var constraints = {
  width: 1280,
  height: 720,
  aspectRatio: 1.5
};
This next example adds a small bit of complexity. The ideal values
are still given for width and height, but this time with minimum
requirements on each as well that must be satisfied. If it cannot
satisfy either the width or height minimum it will call the
errorCallback
. Otherwise, it will try to satisfy the width,
height, and aspectRatio target values as well and then call the
successCallback
.
var supports = navigator.mediaDevices.getSupportedConstraints("video");
if (!supports["aspectRatio"]) {
  // Treat like an error.
}
var constraints = {
  width: { min: 640, ideal: 1280 },
  height: { min: 480, ideal: 720 },
  aspectRatio: 1.5
};
This example illustrates the full control possible with the Constraints structure by adding the 'advanced' property. In this case, the user agent behaves the same way with respect to the required constraints, but before attempting to satisfy the ideal values it will process the 'advanced' list. In this example the 'advanced' list contains two ConstraintSets. The first specifies width and height constraints, and the second specifies an aspectRatio constraint. Note that in the advanced list, these bare values are treated as 'exact' values. This example represents the following: "I need my video to be at least 640 pixels wide and at least 480 pixels high. My preference is for precisely 1920x1280, but if you can't give me that, give me an aspectRatio of 4x3 if at all possible. If even that is not possible, give me a resolution as close to 1280x720 as possible."
var supports = navigator.mediaDevices.getSupportedConstraints("video");
if (!supports["width"] || !supports["height"]) {
  // Treat like an error.
}
var constraints = {
  width: { min: 640, ideal: 1280 },
  height: { min: 480, ideal: 720 },
  advanced: [
    { width: 1920, height: 1280 },
    { aspectRatio: 1.3333333333 }
  ]
};
The ordering of advanced ConstraintSets is significant. In the preceding example it is impossible to satisfy both the 1920x1280 ConstraintSet and the 4x3 aspect ratio ConstraintSet at the same time. Since the 1920x1280 occurs first in the list, the user agent will attempt to satisfy it first. Application authors can therefore implement a backoff strategy by specifying multiple optional ConstraintSets for the same property. For example, an application might specify three optional ConstraintSets, the first asking for a framerate greater than 500, the second asking for a framerate greater than 400, and the third asking for one greater than 300. If the UA is capable of setting a framerate greater than 500, it will (and the subsequent two ConstraintSets will be trivially satisfied.) However, if the UA cannot set the framerate above 500, it will skip that ConstraintSet and attempt to set the framerate above 400. If that fails, it will then try to set it above 300. If the UA cannot satisfy any of the three ConstraintSets, it will set the framerate to any value it can get. If the developer wanted to insist on 300 as a lower bound, he could provide that as a 'min' value in the basic ConstraintSet. In that case, the UA would fail altogether if it couldn't get a value over 300, but would choose a value over 500 if possible, then try for a value over 400.
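As a non-normative sketch of the framerate backoff strategy just described (assuming frameRate is a supported property and reusing the getSupportedConstraints() check from the earlier examples):
var supports = navigator.mediaDevices.getSupportedConstraints("video");
if (!supports["frameRate"]) {
  // Treat like an error.
}
var constraints = {
  // Hard lower bound: the call fails altogether if 300 cannot be exceeded.
  frameRate: { min: 300 },
  // Backoff tiers, tried in order: above 500 first, then above 400; the
  // final tier above 300 is already guaranteed by the required minimum.
  advanced: [
    { frameRate: { min: 500 } },
    { frameRate: { min: 400 } }
  ]
};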
Note that, unlike basic constraints, the constraints within a
ConstraintSet in the advanced list must be satisfied together or skipped
together. Thus, {width: 1920, height: 1280} is a request for that
specific resolution, not a request for that width or that height. One
can think of the basic constraints as requesting an or (non-exclusive)
of the individual constraints, while each advanced ConstraintSet is
requesting an and of the individual constraints in the ConstraintSet. An
application may inspect the full set of Constraints currently in effect
via the getConstraints()
accessor.
The specific value that the UA chooses for a Capability is referred
to as a Setting. For example, if the application applies a ConstraintSet
specifying that the framerate must be at least 30 frames per second, and
no greater than 40, the Setting can be any intermediate value, e.g., 32,
35, or 37 frames per second. The application can query the current
settings of the object's Capabilities via the
getSettings()
accessor.
Due to the limitations of the interface definition language used in this specification, it is not possible for other interfaces to inherit or implement ConstrainablePattern. Therefore the WebIDL definitions given are only templates to be copied. Each interface that wishes to make use of the functionality defined here will have to provide its own copy of the WebIDL for the functions and interfaces given here. However it can refer to the semantics defined here, which will not change. See MediaStreamTrack Interface Definition for an example of this.
The getCapabilities() method returns the dictionary of the capabilities that the object supports.
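A minimal usage sketch follows, assuming a video MediaStreamTrack named videoTrack that has copied the getCapabilities() and getSettings() accessors described here:
// Inspect what the source can do.
var capabilities = videoTrack.getCapabilities();
if (capabilities.frameRate) {
  console.log("frameRate supported from " + capabilities.frameRate.min +
              " to " + capabilities.frameRate.max);
}

// Inspect the value the UA actually chose.
var settings = videoTrack.getSettings();
console.log("current frameRate: " + settings.frameRate);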
It is possible that the underlying hardware may not exactly
map to the range defined in the registry entry. Where this is
possible, the entry should define how to translate and scale the
hardware's setting onto the values defined in the entry. For
example, suppose that a registry entry defines a hypothetical
fluxCapacitance capability that is defined to be the range from
-10 (min) to 10 (max), but there are common hardware devices
that support only values of "off", "medium", and "full". The
registry entry might specify that for such hardware, the user
agent should map the range value of -10 to "off", 10 to "full",
and 0 to "medium". It might also indicate that given a
ConstraintSet imposing a strict value of 3, the user agent
should attempt to set the value of "medium" on the hardware, and that
getSettings()
should return a fluxCapacitance of 0, since that is
the value defined as corresponding to "medium".
The Constraints returned are those that were the argument to the most recent successful call of applyConstraints(), maintaining the order in which they were specified. Note that some of the optional ConstraintSets returned may not be currently satisfied. To check which ConstraintSets are currently in effect, the application should use getSettings(), which reports the values actually chosen by the UA, including those established by applyConstraints(). Note that the actual setting of a property must be a single value. The applyConstraints() algorithm for applying constraints is stated below. Here are some preliminary definitions that are used in the statement of the algorithm:
When applyConstraints is called, the UA must queue a task to run the following steps:
Constraint names that the UA does not recognize are silently ignored; applications should therefore verify support via the getSupportedConstraints() method, as shown in the Examples below. If the required constraints cannot be satisfied, the UA must queue a task to invoke the errorCallback, passing it a new MediaStreamError with name ConstraintNotSatisfiedError and constraintName set to any of the required constraints that could not be satisfied, and return; the existingConstraints remain in effect in this case. Otherwise, the UA must apply the newConstraints and queue a task to invoke the successCallback. From this point on until applyConstraints() is called successfully again, getConstraints() must return the newConstraints that were passed as an argument to this call. The UA may choose new settings for the Capabilities of the object at any time. When it does so it must attempt to satisfy the current Constraints, in the manner described in the algorithm above.
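A minimal, non-normative usage sketch, assuming a MediaStreamTrack named videoTrack and the callback-based applyConstraints(constraints, successCallback, errorCallback) form implied by the algorithm above:
var newConstraints = {
  width: { min: 640 },
  advanced: [{ frameRate: 60 }]
};
videoTrack.applyConstraints(newConstraints, function () {
  // Success: from now on, getConstraints() returns newConstraints.
  console.log("constraints applied");
}, function (error) {
  // A required constraint could not be satisfied; the previously
  // applied constraints remain in effect.
  console.log(error.name + ": " + error.constraintName);
});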
The definition of how multiple unorderedConstraints are to be satisfied together is still very much under discussion. Also, please see issue 6 about 'ideal' not yet being defined.
An event named overconstrained must be supported by all objects implementing the ConstrainablePattern pattern. The UA must raise a MediaStreamErrorEvent named "overconstrained" if changing circumstances at runtime result in it no longer being able to satisfy the requiredConstraints from the currently valid Constraints. This MediaStreamErrorEvent must contain a MediaStreamError whose name is "overconstrainedError", and whose constraintName attribute is set to one of the requiredConstraints that can no longer be satisfied. The message attribute of the MediaStreamError SHOULD contain a string that is useful for debugging. The conditions under which this error might occur are platform and application-specific. For example, the user might physically manipulate a camera in a way that makes it impossible to provide a resolution that satisfies the constraints. The UA MAY take other actions as a result of the overconstrained situation.
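A non-normative sketch of reacting to this event, assuming the object exposes an onoverconstrained event handler and that the event exposes its MediaStreamError via an error attribute (both names are assumptions; the concrete interface copying this pattern defines them):
videoTrack.onoverconstrained = function (event) {
  // A required constraint can no longer be satisfied at runtime.
  console.log("overconstrained: " + event.error.constraintName);
  console.log("details: " + event.error.message);
};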
The error carried by this event is a MediaStreamError holding a required constraint that could not be satisfied.
An example of Constraints that could be passed into applyConstraints() or returned as a value of constraints is below. It uses the properties defined in the Track property registry.
var supports = navigator.mediaDevices.getSupportedConstraints("video");
if (!supports["facingMode"]) {
  // Treat like an error.
}
var constraints = {
  "width": { "min": 640 },
  "height": { "min": 480 },
  "advanced": [
    { "width": 650 },
    { "width": { "min": 650 } },
    { "frameRate": 60 },
    { "width": { "max": 800 } },
    { "facingMode": "user" }
  ]
};
Here is another example, specifically for a video track where I must have a particular camera and have separate preferences for the width and height:
var supports = navigator.mediaDevices.getSupportedConstraints("video");
if (!supports["sourceId"]) {
  // Treat like an error.
}
var constraints = {
  sourceId: { "exact": "20983-20o198-109283-098-09812" },
  advanced: [
    { width: { min: 800, max: 1200 } },
    { height: { min: 600 } }
  ]
};
And here's one for an audio track:
var supports = navigator.mediaDevices.getSupportedConstraints("audio");
if (!supports["sourceId"] || !supports["gain"]) {
  // Treat like an error.
}
var constraints = {
  advanced: [
    { sourceId: "64815-wi3c89-1839dk-x82-392aa" },
    { gain: 0.5 }
  ]
};
Here's an example of use of ideal:
var supports = navigator.mediaDevices.getSupportedConstraints("video");
if (!supports["aspectRatio"] || !supports["facingMode"]) {
  // Treat like an error.
}
navigator.mediaDevices.getUserMedia({
  "video": {
    "width": { "min": 320, "ideal": 1280, "max": 1920 },
    "height": { "min": 240, "ideal": 720, "max": 1080 },
    "frameRate": 30,  // Shorthand for ideal.
    // "facingMode": "environment" would be optional.
    "facingMode": { "exact": "environment" }
  }
}, ...);
Here's an example of "I want 720p, but I can accept up to 1080p and down to VGA.":
var supports = navigator.mediaDevices.getSupportedConstraints("video");
if (!supports["width"] || !supports["height"]) {
  // Treat like an error.
}
navigator.mediaDevices.getUserMedia({
  "video": {
    "width": { "min": 640, "ideal": 1280, "max": 1920 },
    "height": { "min": 480, "ideal": 720, "max": 1080 }
  }
}, ...);
Here's an example of "I want a front-facing camera and it must be VGA.":
var supports = navigator.mediaDevices.getSupportedConstraints("video");
if (supports["facingMode"]) {
  navigator.mediaDevices.getUserMedia({
    "video": {
      "facingMode": { "exact": "user" },
      "width": { "exact": 640 },
      "height": { "exact": 480 }
    }
  }, ...);
}
There is a single IANA registry that defines the constrainable properties of all objects that implement the Constrainable pattern. The registry entries must contain the name of each property along with its set of legal values. The registry entries for MediaStreamTrack are defined below. The syntax for the specification of the set of legal values depends on the type of the values. In addition to the standard atomic types (boolean, long, double, DOMString), legal values include lists of any of the atomic types, plus min-max ranges, as defined below.
List values must be interpreted as disjunctions. For example, if a property 'facingMode' for a camera is defined as having legal values ["left", "right", "user", "environment"], this means that 'facingMode' can have the value "left", the value "right", the value "environment" or the value "user". Similarly Constraints restricting 'facingMode' to ["user", "left", "right"] would mean that the UA should select a camera (or point the camera, if that is possible) so that "facingMode" is either "user", "left", or "right". This Constraint would thus request that the camera not be facing away from the user, but would allow the UA to choose among the other directions.
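A non-normative sketch of such a Constraint, assuming that a list of acceptable values may be given directly for a DOMString-valued property:
// Any of these three facing modes is acceptable; the UA chooses among them,
// but must not select a camera facing away from the user.
var constraints = {
  facingMode: ["user", "left", "right"]
};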
Capabilities is a dictionary containing one or more key-value pairs, where each key must be a constrainable property defined in the registry, and each value must be a subset of the set of values defined for that property in the registry. The exact syntax of the value expression depends on the type of the property but is of type ConstraintValues. The Capabilities dictionary specifies the subset of the constrainable properties and values from the registry that the UA supports. Note that a UA may support only a subset of the properties that are defined in the registry, and may support a subset of the set of values for those properties that it does support.
Note that Capabilities are returned from the UA to the application,
and cannot be specified by the application. However, the application
can control the Settings that the UA chooses for Capabilities by means
of Constraints.
An example of a Capabilities dictionary is shown below. This example is not very realistic in that a browser would actually be required to support more settings than just these.
{ "frameRate": { "min": 1.0, "max": 60.0 }, "facingMode": ["user", "environment"] }
A Setting is a dictionary containing one or more key-value pairs. It must contain each key returned in getCapabilities(). There must be a single value for each key, and the value must be a member of the set defined for that property by getCapabilities(). The Settings dictionary contains the actual values that the UA has chosen for the object's Capabilities. The exact syntax of the value depends on the type of the property.
An example of a Settings dictionary is shown below. This example is not very realistic in that a browser would actually be required to support more settings than just these.
{ "frameRate": 30.0, "facingMode": "user" }
Due to the limitations of WebIDL, interfaces implementing the Constrainable Pattern cannot simply subclass Constraints and ConstraintSet as they are defined here. Instead they must provide their own definitions that follow this pattern. See MediaTrackConstraints for an example of this.
Each member of a ConstraintSet corresponds to a Capability and specifies a subset of its legal values. Applying a ConstraintSet instructs the UA to restrict the setting of the corresponding Capabilities to the specified values or ranges of values. A given property MAY occur both in the basic Constraints set and in the advanced ConstraintSets list, and MAY occur at most once in each ConstraintSet in the advanced list.
The list of ConstraintSets that the UA must attempt to satisfy, in order, skipping
only those that cannot be satisfied. The order of these
ConstraintSets is significant. In particular, when they are passed
as an argument to applyConstraints
, the UA must try to satisfy them in the
order that is specified. Thus if optional ConstraintSets C1 and C2
can be satisfied individually, but not together, then whichever of
C1 and C2 is first in this list will be satisfied, and the other
will not. The UA must
attempt to satisfy all optional ConstraintSets in the list, even
if some cannot be satisfied. Thus, in the preceding example, if
optional constraint C3 is specified after C1 and C2, the UA will
attempt to satisfy C3 even though C2 cannot be satisfied. Note
that a given property name may occur only once in each
ConstraintSet but may occur in more than one ConstraintSet.
This sample code exposes a button. When clicked, the button is disabled and the user is prompted to offer a stream. The user can cause the button to be re-enabled by providing a stream (e.g., giving the page access to the local camera) and then disabling the stream (e.g., revoking that access).
<input type="button" value="Start" onclick="start()" id="startBtn"> <script> var startBtn = document.getElementById('startBtn'); function start() { navigator.mediaDevices.getUserMedia({ audio: true, video: true }, gotStream, logError); startBtn.disabled = true; } function gotStream(stream) { stream.oninactive = function () { startBtn.disabled = false; }; } function logError(error) { log(error.name + ": " + error.message); } </script>
This example allows people to take photos of themselves from the local video camera. Note that the forthcoming Image Capture specification may provide a simpler way to accomplish this.
<article>
  <style scoped>
    video { transform: scaleX(-1); }
    p { text-align: center; }
  </style>
  <h1>Snapshot Kiosk</h1>
  <section id="splash">
    <p id="errorMessage">Loading...</p>
  </section>
  <section id="app" hidden>
    <p><video id="monitor" autoplay></video> <canvas id="photo"></canvas>
    <p><input type=button value="📷" onclick="snapshot()">
  </section>
  <script>
    navigator.mediaDevices.getUserMedia({ video: true }, gotStream, noStream);
    var video = document.getElementById('monitor');
    var canvas = document.getElementById('photo');

    function gotStream(stream) {
      video.srcObject = stream;
      stream.oninactive = noStream;
      video.onloadedmetadata = function () {
        canvas.width = video.videoWidth;
        canvas.height = video.videoHeight;
        document.getElementById('splash').hidden = true;
        document.getElementById('app').hidden = false;
      };
    }

    function noStream() {
      document.getElementById('errorMessage').textContent = 'No camera available.';
    }

    function snapshot() {
      canvas.getContext('2d').drawImage(video, 0, 0);
    }
  </script>
</article>
This specification defines the following new error names
This section is non-normative; it specifies no new behaviour, but instead summarizes information already present in other parts of the specification.
This document extends the Web platform with the ability to manage input devices for media - in this iteration, microphones and cameras. It also allows the manipulation of audio output devices (speakers and headphones).
Without authorization (to the “drive-by web”), it offers the ability to tell how many devices there are of each class. The identifiers for the devices are designed to not be useful for a fingerprint that can track the user between origins, but the number of devices adds to the fingerprint surface.
When authorization is given, this document describes how to get access to, and use, media data from the devices mentioned. This data may be sensitive; advice is given that indicators should be supplied to indicate that devices are in use, but both the nature of authorization and the indicators of in-use devices are platform decisions.
Authorization may be given on a case-by-case basis, or be persistent. In the case of a case-by-case authorization, it is important that the user be able to say “no” in a way that prevents the UI from blocking user interaction until permission is given - either by offering a way to say a “persistent NO” or by not using a modal permissions dialog.
It is possible to use constraints so that the failure of a getUserMedia call will return information about devices on the system without prompting the user, which increases the surface available for fingerprinting. The UA should consider limiting the rate at which failed getUserMedia calls are allowed in order to limit this additional surface.
In the case of persistent authorization, it is important that it’s easy to find the list of granted permissions and revoke permissions that the user wishes to revoke.
Once permission has been granted, the UA should make two things readily apparent to the user:
IANA is requested to register the following properties as specified in [[!RTCWEB-CONSTRAINTS]]:
The following constraint names are defined to apply to both video
and audio
MediaStreamTrack
objects:
Property Name | Values | Notes |
---|---|---|
sourceType | SourceTypeEnum | The type of the source of the MediaStreamTrack. Note that the setting of this property is uniquely determined by the source that is attached to the Track. In particular, getCapabilities() will return only a single value for sourceId/Type. This property can therefore be used for initial media selection with getUserMedia(). However, it is not useful for subsequent media control with applyConstraints, since any attempt to set a different value will result in an unsatisfiable ConstraintSet. |
sourceId | DOMString | The application-unique identifier for this source. The same identifier MUST be valid between sessions of this application, but MUST also be different for other applications. Some sort of GUID is recommended for the identifier. Note that the setting of this property is uniquely determined by the source that is attached to the Track. In particular, getCapabilities() will return only a single value for sourceId/Type. This property can therefore be used for initial media selection with getUserMedia(). However, it is not useful for subsequent media control with applyConstraints, since any attempt to set a different value will result in an unsatisfiable ConstraintSet. |
groupId | DOMString | The group identifier for this source. Two devices have the same group identifier if they belong to the same physical device; for example the audio input and output devices of a headset. |
The following properties are defined to apply only to video
MediaStreamTrack
objects:
Property Name | Values | Notes |
---|---|---|
width | ConstrainLong | The width or width range, in pixels, of the video source. As a capability, the range should span the video source's pre-set width values with min being the smallest width and max being the largest width. |
height | ConstrainLong | The height or height range, in pixels, of the video source. As a capability, the range should span the video source's pre-set height values with min being the smallest height and max being the largest height. |
frameRate | ConstrainDouble | The exact desired frame rate (frames per second) or frameRate range of the video source. If the source does not natively provide a frameRate, or the frameRate cannot be determined from the source stream, then this value MUST refer to the user agent's vsync display rate. |
aspectRatio | ConstrainDouble | The exact aspect ratio (width in pixels divided by height in pixels), represented as a double rounded to the tenth decimal place. |
facingMode | ConstrainDOMString | The members of the enum describe the directions that the camera can face, as seen from the user's perspective. Valid values for the strings in the ConstrainDOMString are the values of enum VideoFacingModeEnum. |
Below is an illustration of the video facing modes in relation to
the user.
The following properties are defined to apply only to audio
MediaStreamTrack
objects:
Property Name | Values | Notes |
---|---|---|
volume | ConstrainDouble | The volume or volume range of the audio source, as a percentage. A volume of 0.0 is silence, while a volume of 1.0 is the maximum supported volume. Note that any ConstraintSet that specifies values outside of this range can never be satisfied. |
sampleRate | ConstrainLong | The sample rate in samples per second for the audio data. |
sampleSize | ConstrainLong | The linear sample size in bits. This constraint can only be satisfied for audio devices that produce linear samples. |
echoCancelation | boolean | When one or more audio streams is being played in the processes of various microphones, it is often desirable to attempt to remove the sound being played from the input signals recorded by the microphones. This is referred to as echo cancellation. There are cases where it is not needed and it is desirable to turn it off so that no audio artifacts are introduced. This allows applications to control this behavior. |
This section will be removed before publication.
getSupportedConstraints() method.
The editors wish to thank the Working Group chairs and Team Contact, Harald Alvestrand, Stefan Håkansson and Dominique Hazaël-Massieux, for their support. Substantial text in this specification was provided by many people including Jim Barnett, Harald Alvestrand, Travis Leithead, and Stefan Håkansson.