The WEBRTC effort aims to create a set of specifications that allows browsers to function as effective platforms for applications that use and exchange real-time, interactive media, including audio and video.
This document defines a set of APIs that allow local media to be requested from a platform, media to be sent over the network to another browser or device implementing the WEBRTC protocols, and media received from another browser or device to be processed and displayed locally.
There are a number of facets to video-conferencing in HTML, including obtaining local media, transmitting it to a remote peer, and rendering media using video or audio elements. This document defines the APIs used for these features.
Prompts the user for permission to use their Web cam or other video or audio input.
The options argument is a string of comma-separated values, each of which is itself a space-separated list of tokens, the first token of which is from the following list: "audio", "video". The "video" value may be followed by the tokens "user" or "environment" to indicate the preferred cameras to use.
If the user accepts, the successCallback is invoked, with a suitable LocalMediaStream object as its argument.
If the user declines, the errorCallback (if any) is invoked.
When the getUserMedia() method is called, the user agent must run the following steps:
Let options be the method's first argument.
Let successCallback be the callback indicated by the method's second argument.
Let errorCallback be the callback indicated by the method's third argument, if any, or null otherwise.
If successCallback is null, abort these steps.
Let audio be false.
Let video be false.
Let camera preference be the empty set.
Split options on commas to obtain list of options.
For each string option in list of options, run the following substeps:
Split option on spaces to obtain list of suboptions.
If the first token in list of suboptions is a case-sensitive match for the string "audio", let audio be true.
If the first token in list of suboptions is a case-sensitive match for the string "video", run these subsubsteps:
Let video be true.
If list of suboptions contains a token that is a case-sensitive match for the string "user", add any cameras that face towards the user to the camera preference set.
If list of suboptions contains a token that is a case-sensitive match for the string "environment", add any cameras that face away from the user to the camera preference set.
If both audio and video are still false, then throw a NOT_SUPPORTED_ERR exception and abort these steps.
Return, and run the remaining steps asynchronously.
Optionally, e.g. based on a previously-established user preference, for security reasons, or due to platform limitations, jump to the step labeled failure below.
Prompt the user in a user-agent-specific manner for permission to provide the entry script's origin with a LocalMediaStream object representing a media stream.
If audio is true, then the provided media should include an audio track. If audio is false, then the provided media must not include an audio track.
If video is true, then the provided media should include a video track. If video is false, then the provided media must not include a video track.
User agents are encouraged to default to using the user's primary or system default camera and/or microphone (as appropriate) to generate the media stream. User agents may allow users to use any media source, including pre-recorded media files.
If video is true, then the user agent should encourage the user to provide a camera from the camera preference set.
User agents may wish to offer the user more control over the provided media. For example, a user agent could offer to enable a camera light or flash, or to change settings such as the frame rate or shutter speed.
If the user grants permission to use local recording devices, user agents are encouraged to include a prominent indicator that the devices are "hot" (i.e. an "on-air" or "recording" indicator).
If the user denies permission, jump to the step labeled failure below. If the user never responds, this algorithm stalls on this step.
Let stream be the LocalMediaStream object for which the user granted permission.
Queue a task to invoke successCallback with stream as its argument.
Abort these steps.
Failure: If errorCallback is null, abort these steps.
Let error be a new NavigatorUserMediaError object whose code attribute has the numeric value 1 (PERMISSION_DENIED).
Queue a task to invoke errorCallback with error as its argument.
The task source for these tasks is the user interaction task source.
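As a non-normative illustration, the synchronous portion of the algorithm above (from splitting options through the NOT_SUPPORTED_ERR check) can be sketched in ECMAScript. The function name and return shape are illustrative only, not part of any API:

```javascript
// Hypothetical sketch of getUserMedia()'s options parsing (non-normative).
function parseGetUserMediaOptions(options) {
  let audio = false;
  let video = false;
  const cameraPreference = new Set();
  // Split options on commas to obtain the list of options.
  for (const option of options.split(',')) {
    // Split each option on spaces to obtain its suboptions.
    const suboptions = option.trim().split(/\s+/);
    if (suboptions[0] === 'audio') audio = true;
    if (suboptions[0] === 'video') {
      video = true;
      // "user" / "environment" express a camera-facing preference.
      if (suboptions.includes('user')) cameraPreference.add('user');
      if (suboptions.includes('environment')) cameraPreference.add('environment');
    }
  }
  // If both audio and video are still false, throw NOT_SUPPORTED_ERR.
  if (!audio && !video) throw new Error('NOT_SUPPORTED_ERR');
  return { audio, video, cameraPreference: [...cameraPreference] };
}
```

For example, parsing 'audio,video user' yields audio and video both true, with a preference for user-facing cameras.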
The PERMISSION_DENIED value is defined by the NavigatorUserMediaError interface.
A voice chat feature in a game could attempt to get access to the user's microphone by calling the API as follows:
<script>
 navigator.getUserMedia('audio', gotAudio);
 function gotAudio(stream) {
   // ... use 'stream' ...
 }
</script>
A video-conferencing system would ask for both audio and video:
<script>
 function beginCall() {
   navigator.getUserMedia('audio,video user', gotStream);
 }
 function gotStream(stream) {
   // ... use 'stream' ...
 }
</script>
The MediaStream
interface is used to
represent streams of media data, typically (but not necessarily) of audio and/or video
content, e.g. from a local camera or a remote site. The data from a MediaStream
object does not necessarily have a canonical
binary form; for example, it could just be "the video currently coming from the user's
video camera". This allows user agents to manipulate media streams in whatever fashion
is most suitable on the user's platform.
Each MediaStream
object can represent zero
or more tracks, in particular audio and video tracks. Tracks can contain multiple
channels of parallel data; for example a single audio track could have nine channels of
audio data to represent a 7.2 surround sound audio track.
Each track represented by a MediaStream
object has a corresponding MediaStreamTrack
object.
A MediaStream
object has an input and an
output. The input depends on how the object was created: a LocalMediaStream
object generated by a getUserMedia()
call, for instance, might take
its input from the user's local camera, while a MediaStream
created by a PeerConnection
object will take as input the data received
from a remote peer. The output of the object controls how the object is used, e.g. what
is saved if the object is written to a file, what is displayed if the object is used in
a video
element, or indeed what is transmitted to a remote peer if the
object is used with a PeerConnection
object.
Each track in a MediaStream
object can be
disabled, meaning that it is muted in the object's output. All tracks are initially
enabled.
A MediaStream
can be
finished,
indicating that its inputs have forever stopped providing data. When a MediaStream
object is finished, all its tracks are muted
regardless of whether they are enabled or disabled.
The output of a MediaStream
object must
correspond to the tracks in its input. Muted audio tracks must be replaced with
silence. Muted video tracks must be replaced with blackness.
A new MediaStream
object can be created
from a list of MediaStreamTrack
objects
using the MediaStream()
constructor. The list of MediaStreamTrack
objects can be the track list of another stream, a subset of the track list of
a stream or a composition of MediaStreamTrack
objects from different MediaStream
objects.
The ability to duplicate a MediaStream
, i.e.
create a new MediaStream
object from the track
list of an existing stream, allows for greater control since separate
MediaStream
instances can be manipulated and consumed
individually. This can be used, for instance, in a video-conferencing scenario to display
the local video from the user's camera and microphone in a local monitor, while only
transmitting the audio to the remote peer (e.g. in response to the user using a "video
mute" feature). Combining tracks from different MediaStream
objects into a new MediaStream
makes it
possible to, e.g., record selected tracks from a conversation involving several
MediaStream
objects with a single
MediaStreamRecorder
.
The LocalMediaStream
interface is used
when the user agent is generating the stream's data (e.g. from a camera or streaming it
from a local video file). It allows authors to control individual tracks during the
generation of the content, e.g. to allow the user to temporarily disable a local camera
during a video-conference chat.
When a LocalMediaStream
object is being
generated from a local file (as opposed to a live audio/video source), the user agent
should stream the data from the file in real time, not all at once. This reduces the
ease with which pages can distinguish live video from pre-recorded video, which can
help protect the user's privacy.
The MediaStream(trackList)
constructor must return a new MediaStream
object with a newly generated label.
A new MediaStreamTrack
object is created
for every unique underlying media source in trackList and appended
to the new MediaStream
object's track list
according to the track ordering constraints.
A MediaStream
object is said to end
when the user agent learns that no more data will ever be forthcoming for this
stream.
When a MediaStream object ends for any reason (e.g. because the user rescinds the permission for the page to use the local camera, or because the data comes from a finite file and the file's end has been reached and the user has not requested that it be looped, or because the stream comes from a remote peer and the remote peer has permanently stopped sending data), it is said to be finished. When this happens for any reason other than the stop() method being invoked, the user agent must queue a task that runs the following steps:
If the object's readyState
attribute has the value
ENDED
(2) already, then abort these steps. (The
stop()
method was probably called just before
the stream stopped for other reasons, e.g. the user clicked an in-page stop button
and then the user-agent-provided stop button.)
Set the object's readyState
attribute to ENDED
(2).
Fire a simple event named ended
at
the object.
As soon as a MediaStream
object is finished, the stream's
tracks start outputting only silence and/or blackness, as appropriate, as defined earlier.
If the end of the stream was reached due to a user request, the task source for this task is the user interaction task source. Otherwise the task source for this task is the networking task source.
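Non-normatively, the finishing behavior above can be sketched as follows; the class and method names are hypothetical and model only the readyState bookkeeping, not the media pipeline:

```javascript
// Non-normative sketch: readyState moves from LIVE (1) to ENDED (2)
// exactly once, and the ended event fires only on that transition.
const LIVE = 1, ENDED = 2;

class MediaStreamSketch {
  constructor() {
    this.readyState = LIVE;
    this.onended = null;
    this.endedFired = 0;
  }
  // Models the task queued when no more data will ever be forthcoming.
  handleSourceEnded() {
    if (this.readyState === ENDED) return; // stop() already ran; abort
    this.readyState = ENDED;
    this.endedFired++;
    if (this.onended) this.onended();
  }
}
```

Note how calling handleSourceEnded() a second time is a no-op, mirroring the "abort these steps" check for streams already in the ENDED state.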
Returns a MediaStreamTrackList object representing the tracks that can be enabled and disabled.
A MediaStream
can have multiple audio
and video sources (e.g. because the user has multiple microphones, or because the
real source of the stream is a media resource with
many media tracks). The stream represented by a MediaStream
thus has zero or more tracks.
The tracks
attribute must return an
array host object for objects of type
MediaStreamTrack
that is fixed
length and read only. The same object must be returned each time the
attribute is accessed. [[!WEBIDL]]
The array must contain the MediaStreamTrack
objects that correspond to the
tracks of the stream. The relative order of all tracks in a user agent must be
stable. All audio tracks must precede all video tracks. Tracks that come from a
media resource whose format defines an order must be
in the order defined by the format; tracks that come from a media resource whose format does not define an order must be
in the relative order in which the tracks are declared in that media resource. Within these constraints, the order is
user-agent defined.
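A non-normative sketch of these ordering constraints follows (helper name illustrative). Tracks of any other, user-agent-defined kind are placed last here, which is one permitted choice since the specification leaves their relative position user-agent defined:

```javascript
// Non-normative: audio tracks precede video tracks, and the relative
// order within each kind is preserved (a stable partition).
function orderTracks(tracks) {
  const audio = tracks.filter(t => t.kind === 'audio');
  const video = tracks.filter(t => t.kind === 'video');
  const other = tracks.filter(t => t.kind !== 'audio' && t.kind !== 'video');
  return [...audio, ...video, ...other];
}
```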
Begins recording the stream. The returned MediaStreamRecorder
object provides access to the
recorded data.
When the record()
method is invoked, the user
agent must return a new MediaStreamRecorder
object associated with the
stream.
The readyState
attribute represents the
state of the stream. It must return the value to which the user agent last set it
(as defined below). It can have the following values: LIVE or
ENDED.
When a MediaStream
object is created,
its readyState
attribute must be set to
LIVE
(1), unless it is being created using the MediaStream()
constructor whose argument is a list of
MediaStreamTrack
objects whose underlying
media sources will never produce any more data, in which case the MediaStream
object must be created with its readyState
attribute set to ENDED
(2).
An event handler for the ended event must be supported by all objects implementing the MediaStream interface.
When a LocalMediaStream object's stop() method is invoked, the user agent must queue a task that runs the following steps:
If the object's readyState
attribute is in the
ENDED
(2) state, then abort these
steps.
Permanently stop the generation of data for the stream. If the data is being generated from a live source (e.g. a microphone or camera), and no other stream is being generated from a live source, then the user agent should remove any active "on-air" indicator. If the data is being generated from a prerecorded source (e.g. a video file), any remaining content in the file is ignored. The stream is finished. The stream's tracks start outputting only silence and/or blackness, as appropriate, as defined earlier.
Set the object's readyState
attribute to ENDED
(2).
Fire a simple event named ended
at the object.
The task source for the tasks
queued for the stop()
method is the DOM manipulation task
source.
The MediaStreamTrack.kind attribute must return the string "audio" if the object's corresponding track is or was an audio track, "video" if the corresponding track is or was a video track, and a user-agent defined string otherwise.
When a LocalMediaStream
object is
created, the user agent must generate a globally unique identifier string, and must
initialize the object's label
attribute to that string. Such strings
must only use characters in the ranges U+0021, U+0023 to U+0027, U+002A to U+002B,
U+002D to U+002E, U+0030 to U+0039, U+0041 to U+005A, U+005E to U+007E, and must be
36 characters long.
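Non-normatively, the character-set and length constraints can be expressed as follows. This sketch does not guarantee the global uniqueness a real user agent must provide (e.g. via a UUID source); it also restricts itself to digits and upper-case letters, a subset of the permitted ranges:

```javascript
// Non-normative: exactly 36 characters, each drawn from the ranges
// U+0021, U+0023-0027, U+002A-002B, U+002D-002E, U+0030-0039,
// U+0041-005A, and U+005E-007E.
const LABEL_RE =
  /^[\u0021\u0023-\u0027\u002A-\u002B\u002D-\u002E\u0030-\u0039\u0041-\u005A\u005E-\u007E]{36}$/;

function generateLabel() {
  // Digits and upper-case letters all fall within the allowed ranges.
  const alphabet = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ';
  let label = '';
  for (let i = 0; i < 36; i++) {
    label += alphabet[Math.floor(Math.random() * alphabet.length)];
  }
  return label;
}
```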
When a MediaStream
is created to
represent a stream obtained from a remote peer, the label
attribute
is initialized from information provided by the remote source.
When a MediaStream
is created from
another using the MediaStream()
constructor, the label
attribute
is initialized to a newly generated value.
The label
attribute must return the value to
which it was initialized when the object was created.
The label of a MediaStream
object is unique to the source of the stream, but that does not mean it is not
possible to end up with duplicates. For example, a locally
generated stream could be sent from one user to a remote peer using PeerConnection
, and then sent back to the original
user in the same manner, in which case the original user will have multiple streams
with the same label (the locally-generated one and the one received from the remote
peer).
User agents may label audio and video sources (e.g. "Internal microphone" or
"External USB Webcam"). The MediaStreamTrack.label
attribute
must return the label of the object's corresponding track, if any. If the
corresponding track has or had no label, the attribute must instead return the
empty string.
Thus the kind
and label
attributes do not change value, even if the MediaStreamTrack
object is disassociated from its
corresponding track.
The MediaStreamTrack.enabled
attribute, on getting, must return the last value to which it was set. On setting,
it must be set to the new value, and then, if the MediaStreamTrack
object is still associated with a
track, must enable the track if the new value is true, and disable it
otherwise.
Thus, after a MediaStreamTrack
is disassociated from its track,
its enabled
attribute still changes value
when set, it just doesn't do anything with that new value.
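This setter behavior can be sketched non-normatively (class and property names hypothetical):

```javascript
// Non-normative: the setter always records the new value, but only
// affects the underlying track while the object is still associated.
class MediaStreamTrackSketch {
  constructor(underlyingTrack) {
    this._enabled = true;
    this._track = underlyingTrack; // null once disassociated
  }
  get enabled() { return this._enabled; }
  set enabled(value) {
    this._enabled = value;
    if (this._track) this._track.enabled = value;
  }
  disassociate() { this._track = null; }
}
```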
Creates a Blob
of the recorded data, and invokes the provided
callback with that Blob
.
When the getRecordedData()
method is called, the user agent must run the following steps:
Let callback be the callback indicated by the method's first argument.
If callback is null, abort these steps.
Let data be the data that was streamed by the
MediaStream
object from which the
MediaStreamRecorder
was created
since the creation of the MediaStreamRecorder
object.
Return, and run the remaining steps asynchronously.
Generate a file containing data in a format
supported by the user agent for use in audio
and
video
elements.
Let blob be a Blob
object representing the
contents of the file generated in the previous step. [[!FILE-API]]
Queue a task to invoke callback with blob as its argument.
The getRecordedData()
method can
be called multiple times on one MediaStreamRecorder
object; each time, it will
create a new file as if this was the first time the method was being called. In
particular, the method does not stop or reset the recording when the method is
called.
Note that the following is actually only a partial interface, but ReSpec does not yet support that.
Mints a Blob URL to refer to the given MediaStream
.
When the createObjectURL()
method is called
with a MediaStream
argument, the user agent
must return a unique Blob URL for the given MediaStream
. [[!FILE-API]]
For audio and video streams, the data exposed on that stream must be in a format
supported by the user agent for use in audio
and video
elements.
A Blob URL is the same as what the
File API specification calls a Blob URI, except that anything in the
definition of that feature that refers to File
and Blob
objects is hereby extended to also apply to MediaStream
and LocalMediaStream
objects.
This sample code exposes a button. When clicked, the button is disabled and the user is prompted to offer a stream. The user can cause the button to be re-enabled by providing a stream (e.g. giving the page access to the local camera) and then disabling the stream (e.g. revoking that access).
<input type="button" value="Start" onclick="start()" id="startBtn">
<script>
 var startBtn = document.getElementById('startBtn');
 function start() {
   navigator.getUserMedia('audio,video', gotStream);
   startBtn.disabled = true;
 }
 function gotStream(stream) {
   stream.onended = function () {
     startBtn.disabled = false;
   }
 }
</script>
This example allows people to record a short audio message and upload it to the server. This example even shows rudimentary error handling.
<input type="button" value="⚫" onclick="msgRecord()" id="recBtn">
<input type="button" value="◼" onclick="msgStop()" id="stopBtn" disabled>
<p id="status">To start recording, press the ⚫ button.</p>
<script>
 var recBtn = document.getElementById('recBtn');
 var stopBtn = document.getElementById('stopBtn');
 function report(s) {
   document.getElementById('status').textContent = s;
 }
 function msgRecord() {
   report('Attempting to access microphone...');
   navigator.getUserMedia('audio', gotStream, noStream);
   recBtn.disabled = true;
 }
 var msgStream, msgStreamRecorder;
 function gotStream(stream) {
   report('Recording... To stop, press the ◼ button.');
   msgStream = stream;
   msgStreamRecorder = stream.record();
   stopBtn.disabled = false;
   stream.onended = function () {
     msgStop();
   }
 }
 function msgStop() {
   report('Creating file...');
   stopBtn.disabled = true;
   msgStream.onended = null;
   msgStream.stop();
   msgStreamRecorder.getRecordedData(msgSave);
 }
 function msgSave(blob) {
   report('Uploading file...');
   var x = new XMLHttpRequest();
   x.open('POST', 'uploadMessage');
   x.send(blob);
   x.onload = function () {
     report('Done! To record a new message, press the ⚫ button.');
     recBtn.disabled = false;
   };
   x.onerror = function () {
     report('Failed to upload message. To try recording a message again, press the ⚫ button.');
     recBtn.disabled = false;
   };
 }
 function noStream() {
   report('Could not obtain access to your microphone. To try again, press the ⚫ button.');
   recBtn.disabled = false;
 }
</script>
This example allows people to take photos of themselves from the local video camera.
<article>
 <style scoped>
  video { transform: scaleX(-1); }
  p { text-align: center; }
 </style>
 <h1>Snapshot Kiosk</h1>
 <section id="splash">
  <p id="errorMessage">Loading...</p>
 </section>
 <section id="app" hidden>
  <p><video id="monitor" autoplay></video> <canvas id="photo"></canvas>
  <p><input type=button value="📷" onclick="snapshot()">
 </section>
 <script>
  navigator.getUserMedia('video user', gotStream, noStream);
  var video = document.getElementById('monitor');
  var canvas = document.getElementById('photo');
  function gotStream(stream) {
    video.src = URL.createObjectURL(stream);
    video.onerror = function () {
      stream.stop();
    };
    stream.onended = noStream;
    video.onloadedmetadata = function () {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      document.getElementById('splash').hidden = true;
      document.getElementById('app').hidden = false;
    };
  }
  function noStream() {
    document.getElementById('errorMessage').textContent = 'No camera available.';
  }
  function snapshot() {
    canvas.getContext('2d').drawImage(video, 0, 0);
  }
 </script>
</article>
A PeerConnection
allows two users to
communicate directly, browser-to-browser. Communications are coordinated via a
signaling channel provided by script in the page via the server, e.g. using
XMLHttpRequest
.
Calling "new PeerConnection(configuration, signalingCallback)" creates a PeerConnection object.
The configuration string gives the address of a STUN or TURN server to use to establish the connection. [STUN] [TURN]
The allowed formats for this string are:
"TYPE 203.0.113.2:3478"
Indicates a specific IP address and port for the server.
"TYPE relay.example.net:3478"
Indicates a specific host and port for the server; the user agent will look up the IP address in DNS.
"TYPE example.net"
Indicates a specific domain for the server; the user agent will look up the IP address and port in DNS.
The "TYPE" is one of: "STUN", "STUNS", "TURN", "TURNS".
The signalingCallback argument is a method that will be invoked
when the user agent needs to send a message to the other host over the signaling
channel. When the callback is invoked, convey its first argument (a string) to the
other peer using whatever method is being used by the Web application to relay
signaling messages. (Messages returned from the other peer are provided back to the
user agent using the processSignalingMessage()
method.)
A PeerConnection
object has an associated
PeerConnection
signaling
callback, a PeerConnection
ICE
Agent,
a PeerConnection
readiness state and an SDP Agent. These
are initialized when the object is created.
When the PeerConnection()
constructor is invoked, the
user agent must run the following steps. This algorithm has a synchronous
section (which is triggered as part of the event loop algorithm).
Steps in the synchronous section are marked with ⌛.
Let serverConfiguration be the constructor's first argument.
Let signalingCallback be the constructor's second argument.
Let connection be a newly created PeerConnection
object.
Create an ICE Agent and let connection's PeerConnection
ICE Agent be that ICE
Agent. [ICE]
If serverConfiguration contains a U+000A LINE FEED (LF) character or a U+000D CARRIAGE RETURN (CR) character (or both), remove all characters from serverConfiguration after the first such character.
Split serverConfiguration on spaces to obtain configuration components.
If configuration components has two or more components, and the first component is a case-sensitive match for one of the strings "STUN", "STUNS", "TURN", or "TURNS", then run the following substeps:
Let server type be STUN if the first component of configuration components is "STUN" or "STUNS", and TURN otherwise (the first component of configuration components is "TURN" or "TURNS").
Let secure be true if the first component of configuration components is "STUNS" or "TURNS", and false otherwise.
Let host be the contents of the second component of configuration components up to the character before the first U+003A COLON character (:), if any, or the entire string otherwise.
Let port be the contents of the second component of configuration components from the character after the first U+003A COLON character (:) up to the end, if any, or the empty string otherwise.
Configure the PeerConnection
ICE Agent's STUN or
TURN server as follows:
If the given IP address, host name, domain name, or port are invalid, then the user agent must act as if no STUN or TURN server is configured.
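Non-normatively, the configuration-string parsing in the steps above can be sketched as follows (function name illustrative; returning null models the "act as if no STUN or TURN server is configured" fallback):

```javascript
// Non-normative sketch of the serverConfiguration parsing steps.
function parseServerConfiguration(serverConfiguration) {
  // Remove everything from the first CR or LF onwards, if any.
  const firstLine = serverConfiguration.split(/[\r\n]/)[0];
  // Split on spaces to obtain configuration components.
  const components = firstLine.split(' ').filter(c => c !== '');
  if (components.length < 2) return null;
  const type = components[0];
  if (!['STUN', 'STUNS', 'TURN', 'TURNS'].includes(type)) return null;
  const serverType = (type === 'STUN' || type === 'STUNS') ? 'STUN' : 'TURN';
  const secure = type === 'STUNS' || type === 'TURNS';
  // Host is everything before the first colon; port everything after.
  const colon = components[1].indexOf(':');
  const host = colon === -1 ? components[1] : components[1].slice(0, colon);
  const port = colon === -1 ? '' : components[1].slice(colon + 1);
  return { serverType, secure, host, port };
}
```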
Let the connection's PeerConnection
signaling
callback be signalingCallback.
Set connection's PeerConnection
readiness state
to NEW
(0).
Set connection's PeerConnection
ice state
to NEW
(0).
Set connection's PeerConnection
sdp state
to NEW
(0).
Let connection's localStreams
attribute be an empty
read-only MediaStream
array.
[[!WEBIDL]]
Let connection's remoteStreams
attribute be an empty
read-only MediaStream
array.
[[!WEBIDL]]
Return connection, but continue these steps asynchronously.
Await a stable state. The synchronous section consists of the remaining steps of this algorithm. (Steps in synchronous sections are marked with ⌛.)
⌛ If the ice state is NEW, the user agent must queue a task to start gathering ICE addresses and set the ice state to ICE_GATHERING.
⌛ Once ICE address gathering is complete, if there are any streams in localStreams, the SDP Agent will send the initial SDP offer. The initial SDP offer MUST contain both the ICE candidate information and the SDP representing the media descriptions for all the streams in localStreams.
During the lifetime of the PeerConnection object, the following procedures are followed:
If a local media stream has been added and an SDP offer needs to be sent, and the ICE state is not NEW or ICE_GATHERING, and the SDP Agent state is NEW or SDP_IDLE, then queue a task to send an SDP offer and change the SDP state to SDP_WAITING.
If an SDP offer has been received, and the SDP state is NEW or SDP_IDLE, pass the ICE candidates from the SDP offer to the ICE Agent and change its state to ICE_CHECKING. Construct an appropriate SDP answer, update the remote streams, queue a task to send the SDP answer, and set the SDP Agent state to SDP_IDLE.
When the sdpState changes from NEW to some other state, the readyState changes to NEGOTIATING.
If the ICE Agent finds a candidate that forms a valid connection, the ice state is changed to ICE_CONNECTED.
When the ICE Agent finishes checking all candidates: if a connection has been found, the ice state is changed to ICE_COMPLETED; if no connection has been found, it is changed to ICE_FAILED.
If the iceState is ICE_CONNECTED or ICE_COMPLETED and the SDP state is SDP_IDLE, the readyState is set to ACTIVE.
If the iceState is ICE_FAILED, a task is queued that calls the close() method.
The close() method causes the system to wait until the sdpState is SDP_IDLE; it then sends an SDP offer terminating all media, changes the readyState to CLOSING, stops all ICE processing, and changes the iceState to ICE_CLOSED. Once an SDP answer to this offer is received, the readyState is changed to CLOSED.
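As a non-normative sketch, the rule for when readyState reaches ACTIVE can be expressed using the constant values defined later in this section (function name illustrative):

```javascript
// Non-normative: readyState is ACTIVE only once ICE has connected (or
// completed) and the SDP Agent is idle; otherwise negotiation continues.
const ICE_CONNECTED = 0x400, ICE_COMPLETED = 0x500, SDP_IDLE = 0x1000;
const NEGOTIATING = 1, ACTIVE = 2;

function computeReadyState(iceState, sdpState) {
  if ((iceState === ICE_CONNECTED || iceState === ICE_COMPLETED) &&
      sdpState === SDP_IDLE) {
    return ACTIVE;
  }
  return NEGOTIATING; // still negotiating (once past NEW)
}
```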
User agents may negotiate any codec and any resolution, bitrate, or other quality
metric. User agents are encouraged to initially negotiate for the native resolution of
the stream. For streams that are then rendered (using a video
element),
user agents are encouraged to renegotiate for a resolution that matches the rendered
display size.
Starting with the native resolution means that if the Web application
notifies its peer of the native resolution as it starts sending data, and the peer
prepares its video
element accordingly, there will be no need for a
renegotiation once the stream is flowing.
All SDP media descriptions for streams represented by MediaStream objects must include a label attribute ("a=label:") whose value is the value of the MediaStream object's label attribute. [SDP] [SDPLABEL]
PeerConnections must not generate any candidates for media streams whose media descriptions do not have a label attribute ("a=label:"). [ICE] [SDP] [SDPLABEL]
When a user agent starts receiving media for a component and a candidate was
provided for that component by a PeerConnection
, the user agent must follow these
steps:
Let connection be the PeerConnection
expecting this media.
If there is already a MediaStream
object
for the media stream to which this component belongs, then associate the component
with that media stream and abort these steps. (Some media streams have multiple
components; this API does not expose the role of these individual components in
ICE.)
Create a MediaStream
object to represent
the media stream. Set its label
attribute to the value of the SDP Label
attribute for that component's media stream.
Queue a task to run the following substeps:
If the connection's PeerConnection
readiness
state is CLOSED
(3), abort these steps.
Add the newly created MediaStream
object to the end of connection's remoteStreams
array.
Fire a stream event named addstream
with the newly created
MediaStream
object at the connection object.
When a PeerConnection
finds that a stream
from the remote peer has been removed (its port has been set to zero in a media
description sent on the signaling channel), the user agent must follow these steps:
Let connection be the PeerConnection
associated with the stream being
removed.
Let stream be the MediaStream
object that represents the media stream being
removed, if any. If there isn't one, then abort these steps.
By definition, stream is now finished.
A task is thus queued to update stream and fire an event.
Queue a task to run the following substeps:
If the connection's PeerConnection
readiness
state is CLOSED
(3), abort these steps.
Remove stream from connection's
remoteStreams
array.
Fire a stream event named removestream
with stream at the connection object.
The task source for the tasks listed in this section is the networking task source.
To prevent network sniffing from allowing a fourth party to establish a connection to a peer using the information sent out-of-band to the other peer and thus spoofing the client, the configuration information should always be transmitted using an encrypted connection.
When a message relayed from the remote peer over the signaling channel is received by the Web application, pass it to the user agent by calling the processSignalingMessage() method.
The order of messages is important. Passing messages to the user agent in a different order than they were generated by the remote peer's user agent can prevent a successful connection from being established or degrade the connection's quality if one is established.
When the processSignalingMessage()
method is invoked, the user agent must
run the following steps:
Let message be the method's argument.
Let connection be the PeerConnection
object on which the method was
invoked.
If connection's PeerConnection
readiness
state is CLOSED
(3), throw an
INVALID_STATE_ERR
exception.
If the first four characters of message are not "SDP" followed by a U+000A LINE FEED (LF) character, then abort these steps. (This indicates an error in the signaling channel implementation. User agents may report such errors to their developer consoles to aid debugging.)
Future extensions to the PeerConnection
interface might use other prefix
values to implement additional features.
Let sdp be the string consisting of all but the first four characters of message.
Pass the sdp to the PeerConnection
SDP Agent as a
subsequent offer or answer, to be interpreted as appropriate given the current
state of the SDP Agent. [ICE]
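Non-normatively, the prefix check and extraction above can be sketched as follows (function name illustrative; returning null models aborting the steps):

```javascript
// Non-normative: only strings beginning with "SDP" plus a line feed are
// processed; anything else indicates a signaling-channel bug.
function extractSdp(message) {
  if (message.slice(0, 4) !== 'SDP\n') return null; // abort these steps
  // sdp is all but the first four characters of message.
  return message.slice(4);
}
```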
When a PeerConnection ICE Agent forms a connection to the far side and enters the ICE_CONNECTED state, the user agent must queue a task that sets the PeerConnection object's PeerConnection readiness state to ACTIVE (2) and then fires a simple event named open at the PeerConnection object.
The readyState attribute must return the PeerConnection object's PeerConnection readiness state, represented by a number from the following list:

- PeerConnection.NEW (0)
- PeerConnection.NEGOTIATING (1)
- PeerConnection.ACTIVE (2)
- PeerConnection.CLOSING (4): The PeerConnection object is terminating all media and is in the process of closing the ICE Agent and SDP Agent.
- PeerConnection.CLOSED (3): The close() method has been invoked.

The iceState attribute must return the state of the PeerConnection object's ICE Agent, represented by a number from the following list:

- PeerConnection.NEW (0)
- PeerConnection.ICE_GATHERING (0x100)
- PeerConnection.ICE_WAITING (0x200)
- PeerConnection.ICE_CHECKING (0x300)
- PeerConnection.ICE_CONNECTED (0x400)
- PeerConnection.ICE_COMPLETED (0x500)
- PeerConnection.ICE_FAILED (0x600)
- PeerConnection.ICE_CLOSED (0x700)

The sdpState attribute must return the state of the PeerConnection object's SDP Agent, represented by a number from the following list:

- PeerConnection.NEW (0)
- PeerConnection.SDP_IDLE (0x1000)
- PeerConnection.SDP_WAITING (0x2000)
- PeerConnection.SDP_GLARE (0x3000)

Attempts to start sending the given stream to the remote peer.
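Because readyState, iceState, and sdpState are plain numbers, a small lookup table can make log output readable. The constant values come from the attribute definitions; the helper itself is illustrative, not part of the API:

```javascript
// Numeric ICE state values from the PeerConnection interface, mapped
// to readable names (this helper is illustrative, not part of the API).
var ICE_STATE_NAMES = {
  0x000: 'NEW',
  0x100: 'ICE_GATHERING',
  0x200: 'ICE_WAITING',
  0x300: 'ICE_CHECKING',
  0x400: 'ICE_CONNECTED',
  0x500: 'ICE_COMPLETED',
  0x600: 'ICE_FAILED',
  0x700: 'ICE_CLOSED'
};

function iceStateName(state) {
  return ICE_STATE_NAMES[state] || 'UNKNOWN';
}
```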
When the other peer starts sending a stream in this manner, an addstream
event is fired at the
PeerConnection
object.
When the addStream()
method is
invoked, the user agent must run the following steps:
Let stream be the method's argument.
If the PeerConnection
object's
PeerConnection
readiness
state is CLOSED
(3), throw an
INVALID_STATE_ERR
exception.
If stream is already in the PeerConnection
object's localStreams
object, then abort
these steps.
Add stream to the end of the PeerConnection
object's localStreams
object.
Return from the method.
Have the PeerConnection
add a
media stream for stream the next time the user agent
provides a stable state. Any other
pending stream additions and removals must be processed at the same time.
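The steps above can be sketched directly. The CLOSED check, the duplicate test, and the append come from the algorithm; the plain array standing in for the live localStreams object and the scheduleUpdate() hook (standing in for deferring work to the next stable state) are assumptions:

```javascript
var CLOSED = 3;

// Sketch of the addStream() steps. `localStreams` is a plain array
// standing in for the live MediaStream array; scheduleUpdate() is a
// hypothetical stand-in for batching the media change at the next
// stable state.
function addStream(connection, stream) {
  if (connection.readyState === CLOSED) {
    throw new Error('INVALID_STATE_ERR');
  }
  if (connection.localStreams.indexOf(stream) !== -1) {
    return; // already being sent; nothing to do
  }
  connection.localStreams.push(stream);
  connection.scheduleUpdate(); // batched with other pending additions/removals
}
```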
Stops sending the given stream to the remote peer.
When the other peer stops sending a stream in this manner, a removestream
event is fired at the
PeerConnection
object.
When the removeStream()
method
is invoked, the user agent must run the following steps:
Let stream be the method's argument.
If the PeerConnection
object's
PeerConnection
readiness
state is CLOSED
(3), throw an
INVALID_STATE_ERR
exception.
If stream is not in the PeerConnection
object's localStreams
object, then abort
these steps.
Remove stream from the PeerConnection
object's localStreams
object.
Return from the method.
Have the PeerConnection
remove the
media stream for stream the next time the user agent
provides a stable state. Any other
pending stream additions and removals must be processed at the same time.
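Mirroring the addition case, the removal steps can be sketched as follows; again the plain array and the scheduleUpdate() hook are illustrative assumptions rather than the real live object:

```javascript
var CLOSED = 3;

// Sketch of the removeStream() steps: unknown streams are ignored,
// known ones are removed, and the actual media change is deferred
// via the hypothetical scheduleUpdate() hook.
function removeStream(connection, stream) {
  if (connection.readyState === CLOSED) {
    throw new Error('INVALID_STATE_ERR');
  }
  var i = connection.localStreams.indexOf(stream);
  if (i === -1) {
    return; // not currently being sent; nothing to do
  }
  connection.localStreams.splice(i, 1);
  connection.scheduleUpdate(); // batched with other pending additions/removals
}
```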
Returns a live array containing the streams that the user agent is currently
attempting to transmit to the remote peer (those that were added with addStream()
).
Specifically, it must return the read-only MediaStream
array that the attribute was set to when the
PeerConnection
's constructor ran.
Returns a live array containing the streams that the user agent is currently receiving from the remote peer.
Specifically, it must return the read-only MediaStream
array that the attribute was set to when the
PeerConnection
's constructor ran.
This array is updated when addstream
and removestream
events are fired.
When the close()
method is invoked,
the user agent must run the following steps:
If the PeerConnection
object's
PeerConnection
readiness
state is CLOSED
(3), throw an
INVALID_STATE_ERR
exception.
Destroy the PeerConnection
ICE Agent, abruptly ending any active ICE processing and any active
streaming, and releasing any relevant resources (e.g. TURN permissions).
Set the object's PeerConnection
readiness
state to CLOSED
(3).
The localStreams
and remoteStreams
objects remain in the
state they were in when the object was closed.
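The close() steps can be sketched as well: a second call throws, the ICE Agent is destroyed, and the stream arrays are deliberately left untouched. The destroyIceAgent() hook is an illustrative stand-in for releasing ICE resources:

```javascript
var CLOSED = 3;

// Sketch of close(): calling it on a closed object throws, the
// (illustrative) destroyIceAgent() hook abruptly ends ICE processing
// and streaming, and localStreams/remoteStreams are preserved as-is.
function close(connection) {
  if (connection.readyState === CLOSED) {
    throw new Error('INVALID_STATE_ERR');
  }
  connection.destroyIceAgent(); // releases e.g. TURN permissions
  connection.readyState = CLOSED;
}
```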
The following event types have corresponding event handler attributes that must be supported by all objects implementing the PeerConnection interface:

- connecting
- open
- open; the handler is called any time the readyState, iceState, or sdpState changes.
- addstream
- removestream

When two peers decide they are going to set up a connection to each other, they both go through these steps. The STUN/TURN server configuration describes a server they can use to obtain things like their public IP address or to set up NAT traversal. They also have to send data for the signaling channel to each other using the same out-of-band mechanism they used to establish that they were going to communicate in the first place.
// the first argument describes the STUN/TURN server configuration
var local = new PeerConnection('TURNS example.net', sendSignalingChannel);

// if we have a message from the other side, pass it along here
local.processSignalingMessage(...);

// (aLocalStream is some LocalMediaStream object)
local.addStream(aLocalStream); // start sending video

function sendSignalingChannel(message) {
  ... // send message to the other side via the signaling channel
}

function receiveSignalingChannel(message) {
  // call this whenever we get a message on the signaling channel
  local.processSignalingMessage(message);
}

local.onaddstream = function (event) {
  // (videoElement is some <video> element)
  videoElement.src = URL.createObjectURL(event.stream);
};
Although progress is being made, there is currently not enough agreement on the data channel to write it up. This section will be filled in as rough consensus is reached.
A Window
object has a strong reference to any PeerConnection
objects created from the constructor whose
global object is that Window
object.
The addstream
and removestream
events use the MediaStreamEvent
interface:
Firing a stream event
named e with a MediaStream
stream means that an event
with the name e, which does not bubble (except where otherwise
stated) and is not cancelable (except where otherwise stated), and which uses the
MediaStreamEvent
interface with the
stream
attribute set to stream, must be created and dispatched at the given target.
The stream
attribute represents the
MediaStream
object associated with the
event.
The initMediaStreamEvent()
method must initialize the event in a manner analogous to the similarly-named
method in the DOM Events interfaces. [[!DOM-LEVEL-3-EVENTS]]
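The "firing a stream event" definition can be sketched in script. The non-bubbling, non-cancelable defaults and the stream attribute come from the definition above; the plain-object event and the handler-based dispatch are simplifying assumptions in place of real DOM event dispatch:

```javascript
// Sketch of "firing a stream event" named `name` with MediaStream
// `stream`: build an event using the MediaStreamEvent shape (a
// `stream` attribute, does not bubble, not cancelable) and dispatch
// it at the target via its on<name> handler.
function fireStreamEvent(target, name, stream) {
  var event = {
    type: name,
    bubbles: false,
    cancelable: false,
    stream: stream
  };
  var handler = target['on' + name];
  if (typeof handler === 'function') {
    handler.call(target, event);
  }
  return event;
}
```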
The following event fires on MediaStream
objects:
| Event name | Interface | Fired when... |
|---|---|---|
| ended | Event | The MediaStream object will no longer stream any data, either because the user revoked the permissions, or because the source device has been ejected, or because the remote peer stopped sending data, or because the stop() method was invoked. |
The following events fire on PeerConnection
objects:
| Event name | Interface | Fired when... |
|---|---|---|
| connecting | Event | The ICE Agent has begun negotiating with the peer. This can happen multiple times during the lifetime of the PeerConnection object. |
| open | Event | The ICE Agent has finished negotiating with the peer. |
| message | MessageEvent | A data UDP media stream message was received. |
| addstream | MediaStreamEvent | A new stream has been added to the remoteStreams array. |
| removestream | MediaStreamEvent | A stream has been removed from the remoteStreams array. |
This registration is for community review and will be submitted to the IESG for review, approval, and registration with IANA.
This format is used for encoding UDP packets transmitted by potentially hostile Web page content via a trusted user agent to a destination selected by a potentially hostile remote server. To prevent this mechanism from being abused for cross-protocol attacks, all the data in these packets is masked so as to appear to be random noise. The intent of this masking is to reduce the potential attack scenarios to those already possible previously.
However, this feature still allows random data to be sent to destinations that might not normally have been able to receive it, such as hosts within the victim's intranet. If a service within such an intranet cannot handle receiving UDP packets containing random noise, it might be vulnerable to attack from this feature.
Fragment identifiers cannot be used with application/html-peer-connection-data
as URLs cannot be used to identify streams that use this format.
This section will be removed before publication.
Need a way to indicate the type of the SDP when passing SDP strings.
The editors wish to thank the Working Group chairs, Harald Alvestrand and Stefan Håkansson, for their support.