A pure Rust implementation of the WebRTC API

Last update: May 22, 2022

WebRTC.rs

License: MIT Discord Twitter

A pure Rust implementation of the WebRTC stack: a rewrite of the Pion WebRTC stack (http://Pion.ly) in Rust.

Sponsored with 💖 by

Stream Chat embark

Roadmap

WebRTC.rs

Work in Progress Towards 1.0

Contributions and pull requests are welcome!

GitHub

https://github.com/webrtc-rs/webrtc
Comments
  • 1. Project roadmap

    Hi, first of all, I am a Pion lover. It's great to see movement in the Rust world. Since this project is very recent, how serious is it? I am planning long-term for some of our software components; Rust might be valuable for us, but we need WebRTC support. So please don't take my question as offensive in any way. I'm just curious whether this is something I could base our stack on in the future.

    Cheers Chris

    Reviewed by chrisprobst at 2020-10-30 15:48
  • 2. File descriptor (socket) leak

    Sockets opened in ICEGatherer seem to be leaked. They are never closed, even when the PeerConnections are closed. Eventually, my server becomes unavailable with a "too many open files" error.

    I've investigated webrtc's source code and found that CandidateBase.close() is a no-op. Of course, that is because tokio's UdpSocket does not provide close(). Even so, the socket should be closed when it is dropped, so I suspect the socket is never being dropped.

    RTCPeerConnection holds Arc<PeerConnectionInternal>, and there are other holders too. In v0.2.1 the peer connection itself had no problem and was dropped correctly. However, the other holders seem not to drop it. I have no idea where the others are, but internal_rtcp_writer (peer_connection.rs, line 170) or pci (peer_connection.rs, lines 1163, 1345, 1393) may have created a reference cycle.

    If possible, those references should be replaced by std::sync::Weak to break the cycles. Pion and other WebRTC libraries written in garbage-collected languages may not suffer from this issue because the GC can detect and free circular references. Because Rust has no GC, we should use weak references to avoid this problem. It would also fix other memory leaks.
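
    A minimal sketch of the idea, with a Weak back-reference and hypothetical stand-in types (not the actual webrtc-rs structs):

        use std::sync::{Arc, Weak};

        // Hypothetical stand-ins for RTCPeerConnection and PeerConnectionInternal.
        struct PeerConnectionInternal {
            // A Weak back-reference breaks the Arc cycle: once the last external
            // Arc<PeerConnection> is dropped, both halves are freed and their
            // sockets can finally be dropped.
            parent: Weak<PeerConnection>,
        }

        struct PeerConnection {
            internal: Arc<PeerConnectionInternal>,
        }

        fn new_peer_connection() -> Arc<PeerConnection> {
            // Arc::new_cyclic (stable since Rust 1.60) wires up the back-reference.
            Arc::new_cyclic(|weak_parent| PeerConnection {
                internal: Arc::new(PeerConnectionInternal {
                    parent: weak_parent.clone(),
                }),
            })
        }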

    Reviewed by qbx2 at 2021-10-26 06:11
  • 3. [WebRTC] double check simulcast example and play-from-disk-renegotiation example

    Pion's simulcast example can receive two streams with rid = "f" and "h" ~~and the play-from-disk-renegotiation example shows a video grid when "add video" is clicked multiple times.~~

    Reviewed by rainliu at 2021-09-24 07:16
  • 4. Firefox SDP parsing fails due to missing C line on media level

    Hey,

    I noticed that, at least in my modified reflect example, Firefox throws a DOM exception when applying the answer SDP.

    Example SDP
    v=0
    o=- 4985367504486208344 356629000 IN IP4 0.0.0.0
    s=-
    t=0 0
    a=fingerprint:sha-256 D5:67:46:98:F8:D8:05:4B:D5:42:4E:FD:04:C7:03:93:B9:90:C8:AD:A2:82:3E:AE:24:E3:4A:B8:54:2D:76:31
    a=group:BUNDLE 1 2
    m=audio 0 UDP/TLS/RTP/SAVPF 0
    a=candidate:167090039 1 udp 2130706431 :: 50760 typ host
    a=candidate:167090039 2 udp 2130706431 :: 50760 typ host
    a=candidate:3498147984 1 udp 2130706431 192.168.1.76 59816 typ host
    a=candidate:3498147984 2 udp 2130706431 192.168.1.76 59816 typ host
    a=candidate:3122685691 1 udp 2130706431 192.168.1.33 57564 typ host
    a=candidate:3122685691 2 udp 2130706431 192.168.1.33 57564 typ host
    a=end-of-candidates
    m=video 9 UDP/TLS/RTP/SAVPF 120
    c=IN IP4 0.0.0.0
    a=setup:active
    a=mid:1
    a=ice-ufrag:nzCUobTruNCKeWEl
    a=ice-pwd:pSowjNuaNylpeKBFgTsFUNwQRdrzDQxe
    a=rtcp-mux
    a=rtcp-rsize
    a=rtpmap:120 VP8/90000
    a=fmtp:120 max-fs=12288;max-fr=60
    a=rtcp-fb:120 nack
    a=rtcp-fb:120 nack pli
    a=rtcp-fb:120 ccm fir
    a=rtcp-fb:120 goog-remb
    a=rtcp-fb:120 transport-cc
    a=extmap:7 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01
    a=recvonly
    m=application 9 UDP/DTLS/SCTP webrtc-datachannel
    c=IN IP4 0.0.0.0
    a=setup:active
    a=mid:2
    a=sendrecv
    a=sctp-port:5000
    a=ice-ufrag:nzCUobTruNCKeWEl
    a=ice-pwd:pSowjNuaNylpeKBFgTsFUNwQRdrzDQxe
    

    This causes the following exception to be thrown in Firefox:

    Uncaught (in promise) DOMException: SIPCC Failed to parse SDP: SDP Parse Error on line 39:  c= connection line not specified for every media level, validation failed.
    

    This makes sense: the c= line is missing for the audio media section.
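
    For comparison, giving the audio section its own connection line, mirroring the placeholder address the other media sections already carry, would look like:

        m=audio 0 UDP/TLS/RTP/SAVPF 0
        c=IN IP4 0.0.0.0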

    This is the relevant part of the Firefox source code: https://searchfox.org/mozilla-central/source/third_party/sipcc/sdp_main.c#955

    Reviewed by k0nserv at 2021-12-16 10:17
  • 5. datachannel intermittently closes with an error

    First, I want to say thank you for building this nice project! I have an issue where the data channel intermittently closes with an error.

    webrtc: 0.1.1, Rust: 1.55 stable

    Case 1. [2021-10-05T17:11:04Z WARN webrtc_sctp::association] [] failed to read packets on net_conn: Alert is Fatal or Close Notify

    Case 2. [2021-10-05T17:16:18Z WARN webrtc_sctp::association] [] failed to handle_inbound: chunk too short

    In my case, I used a data channel with max_retransmits = 0 and ordered = false. These errors occurred even without heavy load. When I use other WebRTC libraries (pion, aiortc, werift), I don't have this issue of the data channel closing.

    Reviewed by qbx2 at 2021-10-05 17:39
  • 6. UDP connection (socket) not closed

    hi

    I am using this repo for a streaming case. The streaming client is a web browser with the WebRTC API. It was working well except for a memory issue. When I dug further, I noticed that the host opens a new UDP port whenever a new peer connection is created, but the port never gets closed when the client drops.

        [Rolling] 2022-03-28T16:57:37.824594+08:00 - WARN - webrtc_ice::agent::agent_internal - [controlled]: Failed to close candidate udp4 prflx 192.168.0.100:63937 related :0: the agent is closed
        [Rolling] 2022-03-28T16:57:37.824713+08:00 - INFO - webrtc_ice::agent::agent_internal - [controlled]: Setting new connection state: Failed
        [Rolling] 2022-03-28T16:57:37.825166+08:00 - INFO - webrtc::peer_connection - ICE connection state changed: failed
        [Rolling] 2022-03-28T16:57:37.825233+08:00 - INFO - vccplayer::player - Connection State has changed failed
        [Rolling] 2022-03-28T16:57:37.825279+08:00 - INFO - webrtc::peer_connection - peer connection state changed: failed
        [Rolling] 2022-03-28T16:57:37.825323+08:00 - INFO - vccplayer::player - Peer Connection State has changed: failed

    I looked at the Go version of Pion (something like https://github.com/pion/webrtc/issues/629) and tried to close the peer, but had no luck. Since I am not running in a cluster mode, single-port mode is not possible.

    So what can I do to get rid of it? Maybe add a timeout feature to the UDP connections?

    Reviewed by shiqifeng2000 at 2022-03-28 06:10
  • 7. ReadRTCP doesn't work

    Digging into how/why reading RTCP results in the data stream being corrupt.

    I've

    • Confirmed that in Pion, the code that I am trying to write works
    • Isolated the issue to a simple repro
    • Formed a suspicion that the issue is caused by a shared lock between RTP and RTCP, meaning the RTP only gets read when an RTCP packet comes in.

    I'll dig deeper, and amend this issue as I do.

    Discussed in https://github.com/webrtc-rs/webrtc/discussions/150

    Originally posted by robashton, January 13, 2022: As mentioned over on the Interceptor repo (https://github.com/webrtc-rs/interceptor/issues/2) and indeed the subsequent pull request (https://github.com/webrtc-rs/interceptor/pull/3), I want to get hold of the RTCP (specifically, the sender reports) being sent by the web browser so I can do some A/V sync downstream of the Rust code.

    The documentation for Pion suggests that we should use the receiver passed to us in OnTrack to perform this function, as this will run any interceptors and give you the final RTCP. (Hence how I discovered the issues there). ( https://github.com/pion/webrtc/blob/master/examples/rtcp-processing/main.go#L31 )

    Indeed, on ingest, without the call to read_rtcp, those interceptors never get executed and presumably the RTCP being sent to us is just dropped on the floor (as far as I can tell). If I call read_rtcp, it breaks the actual RTP somehow, even if I do nothing with the results.

    Adding the following code to the on_track handler in the save-to-disk example means the interceptors get run and I get some RTCP (including the sender reports in that println), but it also means the data stream breaks.

        tokio::spawn(async move {
            loop {
                tokio::select! {
                    // Drain RTCP from the receiver so the interceptors get executed.
                    result = receiver.read_rtcp() => {
                        println!("RTCP {:?}", result);
                    }
                }
            }
        });
    

    What am I missing? How are we supposed to make sure the interceptors get executed on ingest and that things like sender reports are made available to us?

    Reviewed by robashton at 2022-01-15 11:42
  • 8. Use error enums instead of Anyhow

    Original discussion on discord, and further issue raised here: https://github.com/webrtc-rs/rtp/issues/14

    Anyhow is great, but maybe not a good fit for webrtc.rs. To summarize:

    • Anyhow is mostly meant for applications, not libraries.
    • Anyhow doesn't implement std::error::Error (it can't), which makes it hard to interoperate in apps that rely on that.
    • Not having explicit enumerations of possible errors means the webrtc.rs API effectively has "hidden" code paths. It's not possible for a user to know which errors could potentially be returned from an API call returning anyhow::Result<X>.

    Way forward

    • Replace anyhow with idiomatic Rust error enums.
    • Keep thiserror to help implement said enums.
    • Maintain ergonomics for ? use internally in webrtc.rs (work with From traits and rewrap errors).
    • Bring out possible errors in API calls, either via types or documentation (a sketch of the enum approach follows below).
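
    For illustration, a minimal sketch of the kind of per-crate error enum this implies, using thiserror and a From conversion so internal ? use keeps working (the variant names are made up, not the final webrtc-rs error types):

        use thiserror::Error;

        #[derive(Debug, Error)]
        pub enum RtpError {
            #[error("packet too short: {0} bytes")]
            PacketTooShort(usize),
            #[error("header extension id {0} out of range")]
            InvalidExtensionId(u16),
            // #[from] generates the From impl, so `?` still converts io::Error.
            #[error("io error: {0}")]
            Io(#[from] std::io::Error),
        }

        // Callers now see an explicit, documented set of failure cases.
        pub fn parse_header(buf: &[u8]) -> Result<(), RtpError> {
            if buf.len() < 12 {
                return Err(RtpError::PacketTooShort(buf.len()));
            }
            Ok(())
        }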

    This issue will be used to coordinate the effort which will span all the sub-crates.

    • [x] sdp
    • [x] util (done, but deliberately marked as draft)
    • [x] mdns
    • [x] stun
    • [x] turn
    • [x] ice
    • [x] dtls
    • [x] rtp
    • [x] rtcp
    • [x] srtp
    • [x] sctp
    • [x] media
    • [x] interceptor
    • [x] data
    • [x] webrtc
    Reviewed by algesten at 2021-10-05 10:59
  • 9. re-export rtcp/rtp/interceptor/data/media

    I'm building an SFU based on WebRTC.rs, and I find adding the related dependencies to my Cargo.toml a bit inconvenient. There is also potential for version-mismatch problems. So I think re-exporting the dependencies under the webrtc crate might help (not sure if all of them should be re-exported, though).
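
    A minimal sketch of what such re-exports could look like in the top-level webrtc crate (assuming the sub-crates remain dependencies of it; not a statement about the current crate layout):

        // In webrtc's lib.rs: re-export the sub-crates so downstream users can
        // depend on `webrtc` alone and always get matching versions.
        pub use data;
        pub use interceptor;
        pub use media;
        pub use rtcp;
        pub use rtp;

    Downstream code could then write e.g. use webrtc::rtp::...; instead of adding rtp to its own Cargo.toml.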

    Reviewed by wdv4758h at 2021-10-25 03:36
  • 10. Lack of built-in packet loss handling solutions

    Hello,

    I'm trying to use this library to run some experiments with real-time desktop streaming, but the video stream is corrupted almost immediately because of packet loss/reordering. What's the suggested approach for handling this kind of issue? I've explored the examples but I can't see anything related to handling network errors; maybe I've missed it.

    For anyone interested in exploring the project, this is the repo and the source code (right now mostly undocumented): https://github.com/aegroto/remotia/tree/webrtc-nohardrework

    Reviewed by aegroto at 2021-11-29 10:02
  • 11. Datachannel leak

    Steps to reproduce:

    1. Set your on_data_channel:
        peer_connection
            .on_data_channel(Box::new(move |d: Arc<RTCDataChannel>| {
                let d_label = d.label().to_owned();
                let d_id = d.id();
                println!("New DataChannel {} {}", d_label, d_id);

                // Keep only a Weak reference so this task does not itself keep the channel alive.
                let dc = Arc::downgrade(&d);
                tokio::spawn(async move {
                    use std::sync::Weak;
                    let mut int = tokio::time::interval(Duration::from_secs(1));
                    loop {
                        int.tick().await;
                        // Watch the reference counts; they should reach zero once the
                        // connection is closed and every holder drops the channel.
                        dbg!(Weak::strong_count(&dc));
                        dbg!(Weak::weak_count(&dc));
                    }
                });
                Box::pin(async {})
            }))
            .await;
    
    2. Make a connection and then close it
    3. The peer connection is disconnected and dropped, but the strong_count of the data channel never reaches zero

    Cause (my guess): sctp's Association::accept_stream does not cancel even when the SCTP stream is closed. Therefore, RTCSctpTransport::accept_data_channels never returns from DataChannel::accept and keeps a reference to param.data_channels.

    Reviewed by qbx2 at 2021-10-20 05:05
  • 12. Inactive tracks should not be stopped

    As showcased in https://github.com/webrtc-rs/webrtc/issues/191, tracks that go from e.g. sendonly to inactive in the course of applying a remote offer (via set_remote_description) are stopped:

    https://github.com/webrtc-rs/webrtc/blob/de83067295e4394cb3a66fe9ceffc7865e2f4217/src/peer_connection/mod.rs#L1314-L1316

    My understanding of the specification is that when a media description becomes inactive that shouldn't cause the corresponding transceiver to stop.

    I took a stab at removing this stopping behaviour because it causes the bug in https://github.com/webrtc-rs/webrtc/issues/191, and that works insofar as it stops the endless loop.

    On the JS side reactivating the media description via

    transceiver.sender.replaceTrack(newTrack);
    transceiver.direction = "sendonly";
    

    and negotiating does not cause webrtc-rs to bring the transceiver back from inactive to recvonly.

    Reviewed by k0nserv at 2022-05-18 08:11
  • 13. Endless negotiation loop

    I've encountered a case where on_negotiation_needed fires, and if you negotiate as a consequence of that, an endless offer/answer loop occurs.

    Reproduction steps

    1. Browser sends an offer
    2. Server sends an answer
    3. Optionally steps 1 and 2 happen a few times, for example by first negotiating a data channel and then a video track (the video track is sendonly on the browser side and recvonly on the server side).
    4. We remove the video track with removeTrack in the browser
    5. The browser sends an offer with the diff:
    @@ -36,7 +36,7 @@
    
     a=extmap:9 urn:ietf:params:rtp-hdrext:sdes:mid
     a=extmap:10 urn:ietf:params:rtp-hdrext:sdes:rtp-stream-id
     a=extmap:11 urn:ietf:params:rtp-hdrext:sdes:repaired-rtp-stream-id
    -a=sendonly
    +a=inactive
     a=msid:743400ce-3696-4cb3-b1a2-3fef7647be68 952e921c-dbbc-46ce-a8b4-f4bfa96972c9
     a=rtcp-mux
     a=rtcp-rsize
    
    6. The server creates an answer with the diff:
    @@ -40,5 +40,5 @@
    
     a=extmap:4 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01
     a=extmap:9 urn:ietf:params:rtp-hdrext:sdes:mid
     a=extmap:11 urn:ietf:params:rtp-hdrext:sdes:repaired-rtp-stream-id
    -a=recvonly
    +a=inactive
    
    7. The connection status reaches "stable" on the server side.
    8. on_negotiation_needed fires
    9. We create an offer and perform a negotiation
    10. Go to step 8.

    As we loop through steps 8 and 9, the only thing that differs between each offer and answer is the o= line; this is in accordance with the specification.

    Details

    My assumption here is that after step 7 we don't expect on_negotiation_needed to fire, as evidenced by the fact that there is no meaningful change to the generated offer.

    The reason we keep firing on_negotiation_needed is that the check in check_negotiation_needed corresponding to step 5.4 of the "check if negotiation is needed" process indicates we should negotiate.

    The transceiver in question is stopped because we stop it when the remote description is applied with an inactive direction:

    https://github.com/webrtc-rs/webrtc/blob/de83067295e4394cb3a66fe9ceffc7865e2f4217/src/peer_connection/mod.rs#L1315

    Workaround

    It strikes me that a local fix here could be to generate an offer, then diff it against the current local description, and stop the negotiation if only the o= line differs.
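
    A minimal sketch of that check as a plain string comparison (the helper name is made up; this is not webrtc-rs API):

        /// Returns true when two SDP strings are identical apart from their "o=" lines.
        /// A sketch of the workaround described above.
        fn same_except_origin(current_local: &str, new_offer: &str) -> bool {
            fn strip_origin(sdp: &str) -> Vec<&str> {
                sdp.lines().filter(|line| !line.starts_with("o=")).collect()
            }
            strip_origin(current_local) == strip_origin(new_offer)
        }

        // If it returns true, skip set_local_description / signaling and break the loop.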

    Reviewed by k0nserv at 2022-05-17 14:04
  • 14. Support completely unreliable datachannels

    It appears that the DataChannel parameters id, maxRetransmits & maxPacketLifetime were not translated completely from https://github.com/pion/webrtc/blob/master/datachannel.go#L26

    In Pion these parameters can have a value of nil, which allows setting up data channels that are completely unordered & unreliable. There's a big difference between an id or maxRetransmits of nil and one of 0, which webrtc-rs currently conflates.
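
    In Rust terms, the natural way to express that distinction is Option; a minimal sketch with a hypothetical init struct (field names mimic the DataChannel parameters, but this is not the actual webrtc-rs API):

        // Hypothetical illustration of the nil-vs-0 distinction.
        struct DataChannelInit {
            ordered: bool,
            // None = parameter unset (reliable by default);
            // Some(0) = explicitly zero retransmits (fully unreliable).
            max_retransmits: Option<u16>,
            // None = let the implementation negotiate the stream id;
            // Some(0) = explicitly use stream id 0.
            id: Option<u16>,
        }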

    This will allow interop with https://github.com/triplehex/webrtc-unreliable .

    This PR requires the usage of the following PRs to function correctly: https://github.com/webrtc-rs/sctp/pull/10 https://github.com/webrtc-rs/data/pull/5

    Reviewed by connorcarpenter at 2022-04-30 23:10
  • 15. Ability to retrieve stats from a peer connection

    I see that the stats module exists, but does not have any content. I was hoping to contribute a pull request adding RTP stats, and wanted to make sure that 1) this is not stepping on work that someone else is doing, and 2) I implement it in a way that is likely to get merged.

    I was planning to follow the model of Pion, using stats.go and GetStats as the basis for the API.

    This would work out as an async fn get_stats(&self) -> StatsReport, with, I'm sure, some intervening modules along the way. Does this seem correct?
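
    A rough sketch of the proposed shape (all names here are assumptions modeled on Pion's stats.go, not existing webrtc-rs types):

        use std::collections::HashMap;

        // Illustrative stats type, loosely modeled on Pion's stats.go.
        #[derive(Debug, Clone)]
        pub struct InboundRtpStreamStats {
            pub ssrc: u32,
            pub packets_received: u64,
            pub bytes_received: u64,
        }

        // A report maps a stats-object id to its stats, as in the W3C stats model.
        pub type StatsReport = HashMap<String, InboundRtpStreamStats>;

        // The entry point would then look roughly like:
        //     impl RTCPeerConnection {
        //         pub async fn get_stats(&self) -> StatsReport { /* ... */ }
        //     }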

    /cc @mickel8

    Reviewed by sax at 2022-03-30 17:16
A library for easily creating WebRTC data channel connections in Rust

Cyberdeck A library for easily creating WebRTC data channel connections in Rust.

Mar 27, 2022
A multiplayer web based roguelike built on Rust and WebRTC

Gorgon A multiplayer web-based roguelike built on Rust and WebRTC. License This project is licensed under either of Apache License, Version 2.0, (LICE

Oct 18, 2021
Easy-to-use wrapper for WebRTC DataChannels peer-to-peer connections written in Rust and compiling to WASM.

Easy-to-use wrapper for WebRTC DataChannels peer-to-peer connections written in Rust and compiling to WASM.

May 24, 2022
All-batteries included GStreamer WebRTC producer

webrtcsink All-batteries included GStreamer WebRTC producer, that tries its best to do The Right Thing™. Use case The webrtcbin element in GStreamer i

May 12, 2022
Modrinth API is a simple library for using Modrinth's API in Rust projects

Ferinth is a simple library for using the Modrinth API in Rust projects. It uses reqwest as its HTTP(S) client and deserialises responses to typed structs using serde.

May 20, 2022
The Safe Network Core. API message definitions, routing and nodes, client core api.

safe_network The Safe Network Core. API message definitions, routing and nodes, client core api. License This Safe Network repository is licensed unde

Apr 30, 2022
Backroll is a pure Rust implementation of GGPO rollback networking library.

backroll-rs Backroll is a pure Rust implementation of GGPO rollback networking library. Development Status This is still in an untested alpha stage. A

May 23, 2022
An implementation of the ZITADEL gRPC API in Rust.

An implementation of the ZITADEL gRPC API in Rust. Complemented with other useful elements such as ServiceAccount auth.

May 17, 2022
rseip (eip-rs) - EtherNet/IP in pure Rust

rseip rseip (eip-rs) - EtherNet/IP in pure Rust Features Pure Rust Library Asynchronous Extensible Explicit Messaging (Connected / Unconnected) Open S

May 12, 2022
Pure Rust MQTT client

NOTE: Archived. No further development under this repo. Follow progress of a different implementation here Pure rust MQTT client which strives to be s

May 1, 2022
Docker daemon API in Rust

Bollard: an asynchronous rust client library for the docker API Bollard leverages the latest Hyper and Tokio improvements for an asynchronous API cont

May 18, 2022
A rust client and structures to interact with the Clever-Cloud API.

Clever-Cloud Software Development Kit - Rust edition This crate provides structures and client to interact with the Clever-Cloud API. Status This crat

Feb 5, 2022
Revolt backend API server, built with Rust.

Delta Description Delta is a blazing fast API server built with Rust for Revolt. Features: Robust and efficient API routes for running a chat platform

May 24, 2022
Podman-api-rs - Rust interface to Podman (libpod).

podman-api Rust interface to Podman Usage Add the following to your Cargo.toml file [dependencies] podman-api = "0.1" SSL Connection To enable HTTPS c

May 19, 2022
ZeroNS: a name service centered around the ZeroTier Central API

ZeroNS: a name service centered around the ZeroTier Central API ZeroNS provides names that are a part of ZeroTier Central's configured networks; once

May 20, 2022
A wrapper for the Google Cloud DNS API

cloud-dns is a crate providing a client to interact with Google Cloud DNS v1

Jan 26, 2022
Obtain (wildcard) certificates from let's encrypt using dns-01 without the need for API access to your DNS provider.

Agnos Presentation Agnos is a single-binary program allowing you to easily obtain certificates (including wildcards) from Let's Encrypt using DNS-01 c

Feb 2, 2022
Hammerspoon plugin and API to interact with Yabai socket directly

Yabai.spoon NOTE: no longer using it or intending to maintain it see #2. Yabai.spoon is lua 5.4 library to interact with yabai socket directly within

May 17, 2022
🔌 A curseforge proxy server, keeping your API key safe and sound.

🔌 CFPROXY - The curseforge proxy server. Curseforge has locked down their API and now restricts access without authentication. This spells trouble f

May 16, 2022