RTC Feature Complete: What's Next for Sans-I/O WebRTC

January 18, 2026

Introduction

With the release of rtc 0.8.0, the sans-I/O WebRTC implementation has reached a significant milestone: full feature parity with the async-based webrtc crate and comprehensive W3C WebRTC API compliance. This article reflects on what we've achieved and outlines the roadmap for what comes next.


Achievement: Feature Parity with Async WebRTC

The rtc crate now provides all the functionality of the webrtc crate, reimagined with sans-I/O principles. Here's a summary of the complete feature set:

Protocol Stack

Layer | Feature                                | Status
ICE   | Host, SRFLX, Relay candidates          | βœ… Complete
ICE   | Trickle ICE                            | βœ… Complete
ICE   | ICE-TCP (passive & active)             | βœ… Complete
ICE   | mDNS for privacy                       | βœ… Complete
DTLS  | DTLS 1.2 with certificate fingerprints | βœ… Complete
SRTP  | AES-CM, AES-GCM cipher suites          | βœ… Complete
SCTP  | Reliable & unreliable data channels    | βœ… Complete
RTP   | Header extensions, payload types       | βœ… Complete
RTCP  | SR, RR, NACK, PLI, FIR, TWCC           | βœ… Complete

Peer Connection API

Feature                                        | Status
createOffer() / createAnswer()                 | βœ… Complete
setLocalDescription() / setRemoteDescription() | βœ… Complete
addIceCandidate() (local & remote)             | βœ… Complete
addTrack() / removeTrack()                     | βœ… Complete
createDataChannel()                            | βœ… Complete
getStats() with StatsSelector                  | βœ… Complete
getSenders() / getReceivers()                  | βœ… Complete
getTransceivers()                              | βœ… Complete
Connection state events                        | βœ… Complete
ICE state events                               | βœ… Complete
Data channel events                            | βœ… Complete
Track events                                   | βœ… Complete

Interceptor Framework

Interceptor     | Purpose                                | Status
NACK Generator  | Request retransmission of lost packets | βœ… Complete
NACK Responder  | Respond to NACK with cached packets    | βœ… Complete
Sender Report   | Generate RTCP SR for senders           | βœ… Complete
Receiver Report | Generate RTCP RR for receivers         | βœ… Complete
TWCC Sender     | Add transport-wide sequence numbers    | βœ… Complete
TWCC Receiver   | Generate TWCC feedback                 | βœ… Complete
Simulcast       | RID/MID header extensions              | βœ… Complete

WebRTC Stats API

Stats Type                      | Coverage
RTCPeerConnectionStats          | 100%
RTCTransportStats               | 100%
RTCIceCandidateStats            | 100%
RTCIceCandidatePairStats        | 89%
RTCCertificateStats             | 100%
RTCCodecStats                   | 100%
RTCDataChannelStats             | 100%
RTCInboundRtpStreamStats        | 60%*
RTCOutboundRtpStreamStats       | 67%*
RTCRemoteInboundRtpStreamStats  | 83%
RTCRemoteOutboundRtpStreamStats | 83%

*Media encoding/decoding stats require application-provided data (roadmap item)


Achievement: W3C WebRTC Specification Compliance

The rtc crate follows the W3C WebRTC specification closely:

Specification Conformance

  • WebRTC 1.0 β€” Peer connection lifecycle, SDP negotiation, ICE handling
  • WebRTC Stats β€” Statistics identifiers and types
  • JSEP β€” JavaScript Session Establishment Protocol
  • ICE β€” Interactive Connectivity Establishment
  • Trickle ICE β€” Incremental ICE candidate exchange
  • mDNS β€” Multicast DNS for ICE candidates
  • DTLS 1.2 β€” Datagram Transport Layer Security
  • SRTP β€” Secure Real-time Transport Protocol
  • SCTP over DTLS β€” Data channel transport

Achievement: Sans-I/O Architecture Benefits

By building WebRTC with sans-I/O principles, we've achieved:

Runtime Independence

// Works with any async runtime or no runtime at all
let mut pc = RTCPeerConnection::new(config)?;

// You control the I/O loop
loop {
    // Poll for outgoing data
    while let Some(msg) = pc.poll_write() {
        socket.send_to(&msg.message, msg.transport.peer_addr)?;
    }

    // Handle incoming data; in practice, recv with a timeout derived
    // from the protocol's next deadline so the timeout branch still runs
    let (n, peer_addr) = socket.recv_from(&mut buf)?;
    pc.handle_read(TaggedBytesMut { ... })?;

    // Handle timeouts
    pc.handle_timeout(Instant::now())?;
}

Deterministic Testing

#[test]
fn test_with_controlled_time() {
    let fixed_time = Instant::now();

    // All operations use explicit timestamps
    pc.handle_read(packet, fixed_time)?;
    pc.handle_timeout(fixed_time)?;

    let stats = pc.get_stats(fixed_time, StatsSelector::None);
    assert_eq!(stats.packets_received, expected);
}

Zero Hidden I/O

  • No background tasks or threads
  • No hidden network operations
  • No implicit timers
  • Complete application control

What's Next: The Roadmap

With feature parity achieved, the focus shifts to four key areas: browser interoperability, performance, test coverage, and code quality.


Focus 1: Browser Interoperability

Ensuring seamless interoperability with all major browsers is critical for real-world deployments. While the core protocol implementation is complete, comprehensive browser testing and compatibility verification are ongoing.

Target Browsers

Browser         | Platform              | Status         | Priority
Chrome/Chromium | Windows, macOS, Linux | πŸ”„ In Progress | High
Firefox         | Windows, macOS, Linux | πŸ”„ In Progress | High
Safari          | macOS, iOS            | πŸ“‹ Planned     | High
Edge            | Windows               | πŸ“‹ Planned     | Medium
Mobile Chrome   | Android               | πŸ“‹ Planned     | Medium
Mobile Safari   | iOS                   | πŸ“‹ Planned     | Medium

Interoperability Test Scenarios

Data Channels:

  • Reliable ordered channels
  • Unreliable unordered channels
  • Multiple concurrent channels
  • Large message fragmentation
  • Binary and text messages

Media:

  • Audio-only calls (Opus codec)
  • Video-only streams (VP8, VP9, H.264)
  • Audio + Video combined
  • Simulcast with layer switching
  • Screen sharing

ICE & Connectivity:

  • Direct host-to-host connection
  • STUN-assisted connectivity
  • TURN relay fallback
  • Trickle ICE candidate exchange
  • ICE restart mid-session
  • mDNS candidate handling

SDP Negotiation:

  • Offer/Answer exchange
  • Renegotiation (add/remove tracks)
  • Codec negotiation
  • Extension negotiation
  • Rejected media sections

Browser-Specific Quirks

Each browser has its own WebRTC implementation quirks that need to be handled:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                     Browser Compatibility Matrix                            β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  Issue                          β”‚ Chrome β”‚ Firefox β”‚ Safari β”‚ Edge          β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  SDP format variations          β”‚   βœ“    β”‚    βœ“    β”‚   ⚠    β”‚    βœ“          β”‚
β”‚  ICE candidate formatting       β”‚   βœ“    β”‚    βœ“    β”‚   ⚠    β”‚    βœ“          β”‚
β”‚  DTLS role handling             β”‚   βœ“    β”‚    βœ“    β”‚   ⚠    β”‚    βœ“          β”‚
β”‚  Data channel negotiation       β”‚   βœ“    β”‚    βœ“    β”‚   βœ“    β”‚    βœ“          β”‚
β”‚  Simulcast configuration        β”‚   βœ“    β”‚    ⚠    β”‚   ⚠    β”‚    βœ“          β”‚
β”‚  TWCC support                   β”‚   βœ“    β”‚    βœ“    β”‚   ⚠    β”‚    βœ“          β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
  βœ“ = Works as expected   ⚠ = Requires special handling   βœ— = Known issues

Automated Browser Testing

Planned infrastructure:

  1. Selenium/Playwright tests β€” Automated browser control for E2E testing
  2. WebDriver BiDi β€” Modern browser automation protocol
  3. CI integration β€” Run browser tests on every PR
  4. Cross-platform matrix β€” Test on Windows, macOS, Linux
# Example CI configuration (planned)
browser-interop:
  strategy:
    matrix:
      browser: [chrome, firefox, safari, edge]
      os: [ubuntu-latest, macos-latest, windows-latest]
  steps:
    - run: cargo build --release
    - run: ./run-browser-tests.sh ${{ matrix.browser }}

Known Issues to Address

  • Safari SDP parsing edge cases
  • Firefox simulcast layer negotiation
  • Mobile browser power management
  • Browser-specific codec preferences
  • ICE candidate timing differences

Focus 2: Performance Engineering

Performance is not an afterthoughtβ€”it's a core requirement for real-time communication. This focus area encompasses systematic benchmarking, profiling, and optimization across the entire stack.

Benchmarking Infrastructure

Before optimizing, we need to measure. A comprehensive benchmarking infrastructure is essential.

Planned benchmark suite:

// Using criterion for statistical rigor
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion, Throughput};

fn bench_datachannel_throughput(c: &mut Criterion) {
    let mut group = c.benchmark_group("datachannel");

    for size in [64, 1024, 16384, 65536].iter() {
        group.throughput(Throughput::Bytes(*size as u64));
        group.bench_with_input(
            BenchmarkId::new("send", size),
            size,
            |b, &size| {
                b.iter(|| {
                    dc.send(&message[..size])
                });
            },
        );
    }
    group.finish();
}

fn bench_rtp_pipeline(c: &mut Criterion) {
    c.bench_function("rtp_parse", |b| {
        b.iter(|| RtpPacket::unmarshal(&packet_bytes))
    });

    c.bench_function("rtp_marshal", |b| {
        b.iter(|| packet.marshal_to(&mut buffer))
    });

    c.bench_function("srtp_encrypt", |b| {
        b.iter(|| context.encrypt_rtp(&mut packet))
    });

    c.bench_function("srtp_decrypt", |b| {
        b.iter(|| context.decrypt_rtp(&mut packet))
    });
}

criterion_group!(benches, bench_datachannel_throughput, bench_rtp_pipeline);
criterion_main!(benches);

Benchmark categories:

Category   | Metrics                 | Tools
Throughput | Messages/sec, Bytes/sec | criterion, custom
Latency    | p50, p99, p999          | criterion, hdr_histogram
Memory     | Allocations, peak usage | dhat, heaptrack
CPU        | Cycles per operation    | perf, flamegraph
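For the latency category, percentile reporting can be as simple as the nearest-rank method over collected samples; hdr_histogram does this more efficiently at scale, but the arithmetic is worth seeing once. A small sketch, not the planned harness:

```rust
/// Nearest-rank percentile: the smallest sample such that at least p%
/// of all samples are <= it. `samples` need not be pre-sorted.
fn percentile(samples: &[u64], p: f64) -> u64 {
    assert!(!samples.is_empty() && (0.0..=100.0).contains(&p));
    let mut sorted = samples.to_vec();
    sorted.sort_unstable();
    // Nearest-rank: ceil(p/100 * N), converted to a zero-based index.
    let rank = ((p / 100.0) * sorted.len() as f64).ceil() as usize;
    sorted[rank.saturating_sub(1)]
}

fn main() {
    // Synthetic latency samples in microseconds.
    let samples: Vec<u64> = (1..=100).collect();
    assert_eq!(percentile(&samples, 50.0), 50);
    assert_eq!(percentile(&samples, 99.0), 99);
    assert_eq!(percentile(&samples, 100.0), 100);
    println!("p50={} p99={}", percentile(&samples, 50.0), percentile(&samples, 99.0));
}
```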

Profiling and Analysis

Profiling workflow:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                         Performance Analysis Workflow                       β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                                                                             β”‚
β”‚   1. Baseline         2. Profile           3. Analyze                       β”‚
β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”        β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”                       β”‚
β”‚   β”‚ Run     │───────▢│ Collect │─────────▢│ Generateβ”‚                       β”‚
β”‚   β”‚ Bench   β”‚        β”‚ Samples β”‚          β”‚ Reports β”‚                       β”‚
β”‚   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜        β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                       β”‚
β”‚        β”‚                  β”‚                    β”‚                            β”‚
β”‚        β–Ό                  β–Ό                    β–Ό                            β”‚
β”‚   criterion          perf record          flamegraph                        β”‚
β”‚   results            + perf script        + hotspot analysis                β”‚
β”‚                                                                             β”‚
β”‚   4. Optimize         5. Validate          6. Document                      β”‚
β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”        β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”                       β”‚
β”‚   β”‚ Apply   │───────▢│ Re-run  │─────────▢│ Record  β”‚                       β”‚
β”‚   β”‚ Changes β”‚        β”‚ Bench   β”‚          β”‚ Gains   β”‚                       β”‚
β”‚   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜        β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                       β”‚
β”‚                                                                             β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Profiling tools:

  • perf β€” Linux performance counters, CPU profiling
  • flamegraph β€” Visualize hot code paths
  • heaptrack β€” Memory allocation profiling
  • cargo-llvm-lines β€” Generic code bloat analysis
  • valgrind/cachegrind β€” Cache behavior analysis

DataChannel Optimization

WebRTC DataChannels are increasingly used for high-throughput applications. Optimization targets:

SCTP layer:

Optimization       | Description                                   | Expected Impact
Chunk batching     | Combine small messages into fewer SCTP chunks | Reduce overhead 20-40%
Zero-copy I/O      | Avoid buffer copies in send/receive path      | Reduce CPU usage
TSN tracking       | Optimize sequence number management           | Reduce memory allocations
Congestion control | Tune SCTP congestion parameters               | Improve throughput stability
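Chunk batching is essentially a greedy packing decision: accumulate queued messages until the next one would overflow the MTU, then flush. A simplified sketch of the packing logic, ignoring SCTP's per-chunk headers:

```rust
/// Greedy batcher: pack queued messages into buffers of at most `mtu`
/// bytes, preserving order. Real SCTP bundling adds per-chunk headers;
/// this sketch only shows the packing decision.
fn batch(messages: &[&[u8]], mtu: usize) -> Vec<Vec<u8>> {
    let mut out: Vec<Vec<u8>> = Vec::new();
    let mut current: Vec<u8> = Vec::with_capacity(mtu);
    for msg in messages {
        assert!(msg.len() <= mtu, "message larger than MTU must be fragmented");
        // Flush the current buffer when the next message would not fit.
        if current.len() + msg.len() > mtu && !current.is_empty() {
            out.push(std::mem::take(&mut current));
        }
        current.extend_from_slice(msg);
    }
    if !current.is_empty() {
        out.push(current);
    }
    out
}

fn main() {
    let big = [0u8; 400];
    let small = [0u8; 100];
    let msgs: Vec<&[u8]> = vec![&big[..], &big[..], &big[..], &small[..]];
    let packets = batch(&msgs, 1200);
    // Four sends collapse into two wire packets: 400+400+400 = 1200, then 100.
    assert_eq!(packets.len(), 2);
    assert_eq!(packets[0].len(), 1200);
    assert_eq!(packets[1].len(), 100);
    println!("{} packets", packets.len());
}
```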

Application layer:

  • Message framing optimization
  • Backpressure handling
  • Buffer pool for allocations

Performance targets:

Metric                         | Baseline | Target     | Notes
Throughput (reliable, ordered) | TBD      | > 500 Mbps | Single channel
Throughput (unreliable)        | TBD      | > 1 Gbps   | Best-effort
Latency (1KB message)          | TBD      | < 1 ms     | p99
Messages/second                | TBD      | > 100K     | Small messages

RTP/RTCP Pipeline Optimization

Media transport is latency-sensitive and high-volume.

Packet processing:

Incoming RTP Packet
        β”‚
        β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ UDP Receive   β”‚ ← Goal: zero-copy receive
β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜
        β”‚
        β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ SRTP Decrypt  β”‚ ← Goal: hardware AES-NI
β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜
        β”‚
        β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ RTP Parse     β”‚ ← Goal: minimal validation
β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜
        β”‚
        β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Interceptors  β”‚ ← Goal: inline, no allocations
β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜
        β”‚
        β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Jitter Buffer β”‚ ← Goal: lock-free, pre-allocated
β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜
        β”‚
        β–Ό
    Application

Specific optimizations:

  • SIMD parsing β€” Use SIMD instructions for header parsing where beneficial
  • AES-NI β€” Ensure hardware acceleration for SRTP
  • Inline interceptors β€” Compile-time interceptor composition (already implemented via generics)
  • Pre-allocated buffers β€” Avoid per-packet allocations
  • Branch prediction β€” Optimize common code paths

ICE Performance

Connection establishment time directly impacts user experience.

Optimization areas:

Phase               | Current | Target  | Approach
Candidate gathering | TBD     | < 100ms | Parallel STUN queries
Connectivity checks | TBD     | < 500ms | Prioritized pair testing
DTLS handshake      | TBD     | < 200ms | Session resumption
Total time-to-media | TBD     | < 1s    | Combined optimizations

Techniques:

  • Aggressive candidate nomination
  • Parallel connectivity checks
  • STUN response caching
  • Optimized candidate pair sorting

Memory Optimization

Real-time systems benefit from predictable memory behavior.

Goals:

  • Minimize allocations in hot paths
  • Use buffer pools for packet buffers
  • Pre-allocate data structures where possible
  • Reduce memory fragmentation

Tracking:

// Example: Using dhat for allocation profiling
#[global_allocator]
static ALLOC: dhat::Alloc = dhat::Alloc;

#[test]
fn test_allocations_in_hot_path() {
    let _profiler = dhat::Profiler::new_heap();

    // Run hot path code
    for _ in 0..10000 {
        process_rtp_packet(&packet);
    }

    // Analyze allocation count and sizes
}

Continuous Performance Monitoring

CI integration:

  • Run benchmarks on every PR
  • Track performance regressions
  • Publish benchmark results
  • Alert on significant regressions

Planned dashboard metrics:

  • Throughput trends over time
  • Latency percentiles
  • Memory usage patterns
  • CPU efficiency

Focus 3: Test Coverage

Current State

The codebase has growing test coverage, but there's room for improvement:

Category          | Current         | Target
Unit tests        | Partial         | 80%+ line coverage
Integration tests | 28 test files   | Comprehensive scenarios
Interop tests     | Chrome, Firefox | All major browsers
Fuzz tests        | Limited         | Critical parsers

Unit Test Expansion

Priority areas for unit testing:

  1. SDP parsing and generation β€” Complex edge cases in offer/answer
  2. ICE state machine β€” All state transitions and error conditions
  3. DTLS handshake β€” Certificate validation, cipher negotiation
  4. SCTP association β€” Stream management, congestion control
  5. Interceptor logic β€” NACK timing, RTCP report generation

Example test structure:

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_ice_state_transitions() {
        // Test all valid state transitions
        // Test invalid transition handling
        // Test event emission
    }

    #[test]
    fn test_nack_timing_accuracy() {
        // Verify NACK generation timing
        // Test RTT-based retransmit intervals
    }
}

Integration Test Expansion

Planned integration test scenarios:

  • Multi-party conferencing (3+ peers)
  • Renegotiation (adding/removing tracks mid-session)
  • Network condition simulation (packet loss, delay, reordering)
  • Long-running sessions (stability over hours)
  • ICE restart scenarios
  • TURN relay failover
  • Simulcast layer switching
  • DataChannel flow control under load

Fuzz Testing

Critical parsers should be fuzz-tested:

// Using cargo-fuzz (libFuzzer); each fuzz_target! lives in its own
// file under fuzz/fuzz_targets/
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    let _ = RtpPacket::unmarshal(data);
});

// In a second target file:
fuzz_target!(|data: &[u8]| {
    let _ = StunMessage::unmarshal(data);
});

Priority fuzz targets:

  • RTP/RTCP packet parsing
  • STUN message parsing
  • SDP parsing
  • SCTP chunk parsing
  • DTLS record parsing

Focus 4: Code Quality and Tech Debt

TODO/FIXME Cleanup

The codebase currently contains 104 TODO/FIXME comments that need to be addressed:

$ grep -r "TODO\|FIXME" --include="*.rs" | wc -l
104

Categorization plan:

Category          | Action
Missing features  | Implement or document as "won't fix"
Performance notes | Create benchmark, then optimize
Error handling    | Improve error messages and recovery
Code cleanup      | Refactor or remove dead code
Documentation     | Add missing docs

Tracking approach:

  1. Create GitHub issues for each significant TODO
  2. Prioritize by impact (user-facing vs internal)
  3. Address in dedicated cleanup sprints
  4. Add CI check to prevent new untracked TODOs
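The CI check in step 4 can be a few lines of shell: count TODO/FIXME markers and fail when the count exceeds a committed baseline. A sketch, run here against a throwaway fixture rather than the real source tree (the script and baseline value are illustrative):

```shell
# Sketch of the planned TODO gate: fail the build when the TODO/FIXME
# count grows past a committed baseline, so new TODOs must be tracked
# as issues before they land.
set -eu

# Fixture standing in for the real source tree.
workdir=$(mktemp -d)
mkdir -p "$workdir/src"
printf '// TODO: handle renegotiation\nfn a() {}\n' > "$workdir/src/a.rs"
printf '// FIXME: avoid this copy\nfn b() {}\n' > "$workdir/src/b.rs"

baseline=2
count=$(grep -rE "TODO|FIXME" --include="*.rs" "$workdir/src" | wc -l | tr -d ' ')
echo "found $count TODO/FIXME comments (baseline $baseline)"
if [ "$count" -gt "$baseline" ]; then
    echo "error: untracked TODOs added; file an issue and raise the baseline" >&2
    exit 1
fi
echo "TODO gate passed"
```

In CI, `$workdir/src` would be the repository's source directory and the baseline would live in a checked-in file, lowered as TODOs are resolved.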

Documentation Improvements

  • Complete rustdoc coverage for public APIs
  • Add architecture decision records (ADRs)
  • Improve inline code comments for complex algorithms
  • Create troubleshooting guide
  • Add performance tuning guide

API Refinements

Some APIs may benefit from refinement based on user feedback:

  • Error types consolidation
  • Builder pattern consistency
  • Event handling ergonomics
  • Configuration validation

Community Contribution Opportunities

We welcome contributions in these areas:

Good First Issues

  • Documentation improvements
  • Adding missing unit tests
  • Resolving simple TODO comments
  • Example improvements

Intermediate

  • Integration test scenarios
  • Benchmark implementations
  • Bug fixes with clear reproduction

Advanced

  • Performance optimizations
  • New interceptor implementations
  • Protocol extensions

See CONTRIBUTING.md for guidelines.


Conclusion

The rtc crate has achieved its primary goal: a complete, W3C-compliant WebRTC implementation using sans-I/O principles. With feature parity established, the focus now shifts to making it faster, more reliable, and easier to use.

The sans-I/O architecture provides a solid foundation for these improvements. By separating protocol logic from I/O, we can:

  • Benchmark and optimize without network variability
  • Test deterministically with controlled time
  • Profile precisely with no background noise

We're excited about the road ahead and welcome community participation in shaping the future of Rust WebRTC.


Get Involved

Have ideas for performance improvements or want to contribute tests? Open an issue or join the discussion on Discord!

