🥧 Savoury implementation of the QUIC transport protocol and HTTP/3

Overview

quiche


quiche is an implementation of the QUIC transport protocol and HTTP/3 as specified by the IETF. It provides a low-level API for processing QUIC packets and handling connection state. The application is responsible for providing I/O (e.g. socket handling) as well as an event loop with support for timers.

For more information on how quiche came about and some insights into its design you can read a post on Cloudflare's blog that goes into some more detail.

Who uses quiche?

Cloudflare

quiche powers Cloudflare edge network's HTTP/3 support. The cloudflare-quic.com website can be used for testing and experimentation.

curl

quiche can be integrated into curl to provide support for HTTP/3.

NGINX (unofficial)

quiche can be integrated into NGINX using an unofficial patch to provide support for HTTP/3.

Getting Started

Command-line apps

Before diving into the quiche API, here are a few examples of how to use the tools provided as part of the quiche-apps crate.

After cloning the project using the command from the building section, the client can be run as follows:

 $ cargo run --manifest-path=tools/apps/Cargo.toml --bin quiche-client -- https://cloudflare-quic.com/

while the server can be run as follows:

 $ cargo run --manifest-path=tools/apps/Cargo.toml --bin quiche-server -- \
      --cert tools/apps/src/bin/cert.crt \
      --key tools/apps/src/bin/cert.key

(note that the certificate provided is self-signed and should not be used in production)

Use the --help command-line flag to get a more detailed description of each tool's options.

Connection setup

The first step in establishing a QUIC connection using quiche is creating a configuration object:

let mut config = quiche::Config::new(quiche::PROTOCOL_VERSION)?;

This is shared among multiple connections and can be used to configure a QUIC endpoint.
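
For example — as a sketch only (the exact set_application_protos signature has changed across quiche releases, and the values below are arbitrary examples, not recommendations) — a configuration might look like:

```rust
let mut config = quiche::Config::new(quiche::PROTOCOL_VERSION)?;

// ALPN identifiers in wire format (here HTTP/3 draft-29, as an example).
config.set_application_protos(b"\x05h3-29")?;

// Idle timeout in milliseconds.
config.set_max_idle_timeout(5000);

// Flow control and stream concurrency limits (arbitrary example values).
config.set_initial_max_data(10_000_000);
config.set_initial_max_stream_data_bidi_local(1_000_000);
config.set_initial_max_stream_data_bidi_remote(1_000_000);
config.set_initial_max_streams_bidi(100);
config.set_initial_max_streams_uni(100);
```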

On the client-side the connect() utility function can be used to create a new connection, while accept() is for servers:

// Client connection.
let conn = quiche::connect(Some(&server_name), &scid, &mut config)?;

// Server connection.
let conn = quiche::accept(&scid, None, &mut config)?;

Handling incoming packets

Using the connection's recv() method, the application can process incoming packets from the network that belong to that connection:

loop {
    let read = socket.recv(&mut buf).unwrap();

    let read = match conn.recv(&mut buf[..read]) {
        Ok(v) => v,

        Err(quiche::Error::Done) => {
            // Done reading.
            break;
        },

        Err(e) => {
            // An error occurred, handle it.
            break;
        },
    };
}

Generating outgoing packets

Outgoing packets are generated using the connection's send() method instead:

loop {
    let write = match conn.send(&mut out) {
        Ok(v) => v,

        Err(quiche::Error::Done) => {
            // Done writing.
            break;
        },

        Err(e) => {
            // An error occurred, handle it.
            break;
        },
    };

    socket.send(&out[..write]).unwrap();
}

When packets are sent, the application is responsible for maintaining a timer to react to time-based connection events. The timer expiration can be obtained using the connection's timeout() method.

let timeout = conn.timeout();

The application is responsible for providing a timer implementation, which can be specific to the operating system or networking framework used. When a timer expires, the connection's on_timeout() method should be called, after which additional packets might need to be sent on the network:

// Timeout expired, handle it.
conn.on_timeout();

// Send more packets as needed after timeout.
loop {
    let write = match conn.send(&mut out) {
        Ok(v) => v,

        Err(quiche::Error::Done) => {
            // Done writing.
            break;
        },

        Err(e) => {
            // An error occurred, handle it.
            break;
        },
    };

    socket.send(&out[..write]).unwrap();
}
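
The timer pattern above can be sketched with a plain blocking std::net::UdpSocket, using the socket's read timeout as the timer. This is a minimal, self-contained illustration in which a fixed Duration stands in for the value conn.timeout() would return (quiche returns an Option<Duration>, with None meaning no timer is armed):

```rust
use std::net::UdpSocket;
use std::time::{Duration, Instant};

/// Block on the socket until a packet arrives or `timeout` expires.
/// Returns Ok(Some(len)) for a packet, Ok(None) if the timer fired first.
fn recv_or_timeout(
    socket: &UdpSocket,
    timeout: Duration,
    buf: &mut [u8],
) -> std::io::Result<Option<usize>> {
    socket.set_read_timeout(Some(timeout))?;
    match socket.recv_from(buf) {
        Ok((len, _from)) => Ok(Some(len)),
        // Both kinds can signal a read timeout, depending on the platform.
        Err(e)
            if e.kind() == std::io::ErrorKind::WouldBlock
                || e.kind() == std::io::ErrorKind::TimedOut =>
        {
            Ok(None)
        }
        Err(e) => Err(e),
    }
}

fn main() -> std::io::Result<()> {
    let socket = UdpSocket::bind("127.0.0.1:0")?;
    let mut buf = [0u8; 65535];

    let start = Instant::now();
    match recv_or_timeout(&socket, Duration::from_millis(50), &mut buf)? {
        // A packet arrived in time: it would be fed to conn.recv() here.
        Some(len) => println!("got {} bytes", len),
        // The timer fired: this is where conn.on_timeout() would be called,
        // followed by the usual conn.send() loop.
        None => println!("timer fired after {:?}", start.elapsed()),
    }
    Ok(())
}
```

A non-blocking event loop (epoll, kqueue, etc.) would instead register the timeout with its poller, but the control flow is the same.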

Sending and receiving stream data

After some back and forth, the connection will complete its handshake and will be ready for sending or receiving application data.

Data can be sent on a stream by using the stream_send() method:

if conn.is_established() {
    // Handshake completed, send some data on stream 0.
    conn.stream_send(0, b"hello", true)?;
}
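
Stream 0 here is not arbitrary: QUIC stream IDs encode who opened the stream and whether it is bidirectional in their two low-order bits (RFC 9000, Section 2.1). A small self-contained illustration of that encoding:

```rust
// Bit 0 of a stream ID identifies the initiator (0 = client, 1 = server);
// bit 1 identifies the directionality (0 = bidirectional, 1 = unidirectional).
fn describe_stream(id: u64) -> (&'static str, &'static str) {
    let initiator = if id & 0x01 == 0 { "client" } else { "server" };
    let direction = if id & 0x02 == 0 { "bidirectional" } else { "unidirectional" };
    (initiator, direction)
}

fn main() {
    // Stream 0, used above, is the first client-initiated bidirectional stream;
    // subsequent ones are 4, 8, 12, and so on.
    assert_eq!(describe_stream(0), ("client", "bidirectional"));
    assert_eq!(describe_stream(4), ("client", "bidirectional"));
    assert_eq!(describe_stream(1), ("server", "bidirectional"));
    assert_eq!(describe_stream(2), ("client", "unidirectional"));
    assert_eq!(describe_stream(3), ("server", "unidirectional"));
}
```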

The application can check whether there are any readable streams by using the connection's readable() method, which returns an iterator over all the streams that have outstanding data to read.

The stream_recv() method can then be used to retrieve the application data from the readable stream:

if conn.is_established() {
    // Iterate over readable streams.
    for stream_id in conn.readable() {
        // Stream is readable, read until there's no more data.
        while let Ok((read, fin)) = conn.stream_recv(stream_id, &mut buf) {
            println!("Got {} bytes on stream {}", read, stream_id);
        }
    }
}

HTTP/3

The quiche HTTP/3 module provides a high-level API for sending and receiving HTTP requests and responses on top of the QUIC transport protocol.
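
As a rough sketch (exact types and signatures vary between quiche releases, so treat this as illustrative rather than definitive), a client might issue a request over an established connection like this:

```rust
let h3_config = quiche::h3::Config::new()?;

// Create the HTTP/3 connection once the QUIC handshake is established.
let mut h3_conn = quiche::h3::Connection::with_transport(&mut conn, &h3_config)?;

let req = vec![
    quiche::h3::Header::new(":method", "GET"),
    quiche::h3::Header::new(":scheme", "https"),
    quiche::h3::Header::new(":authority", "cloudflare-quic.com"),
    quiche::h3::Header::new(":path", "/"),
];

let stream_id = h3_conn.send_request(&mut conn, &req, true)?;

// Responses arrive as events polled off the HTTP/3 connection.
loop {
    match h3_conn.poll(&mut conn) {
        Ok((stream_id, quiche::h3::Event::Headers { .. })) => {
            // Response headers received on stream_id.
        },

        Ok((stream_id, quiche::h3::Event::Data)) => {
            // Body data can be read with h3_conn.recv_body().
        },

        Err(quiche::h3::Error::Done) => break,

        _ => (),
    }
}
```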

Have a look at the examples/ directory for more complete examples of how to use the quiche API, including examples of how to use quiche in C/C++ applications (see below for more information).

Calling quiche from C/C++

quiche exposes a thin C API on top of the Rust API that can be used to more easily integrate quiche into C/C++ applications (as well as in other languages that allow calling C APIs via some form of FFI). The C API follows the same design as the Rust one, modulo the constraints imposed by the C language itself.

When running cargo build, a static library called libquiche.a will be built automatically alongside the Rust one. This is fully stand-alone and can be linked directly into C/C++ applications.

Note that in order to enable the FFI API, the ffi feature must be enabled (it is disabled by default), by passing --features ffi to cargo.

Building

quiche requires Rust 1.50 or later to build. The latest stable Rust release can be installed using rustup.

Once the Rust build environment is set up, the quiche source code can be fetched using git:

 $ git clone --recursive https://github.com/cloudflare/quiche

and then built using cargo:

 $ cargo build --examples

cargo can also be used to run the testsuite:

 $ cargo test

Note that BoringSSL, which is used to implement QUIC's cryptographic handshake based on TLS, needs to be built and linked to quiche. This is done automatically when building quiche using cargo, but requires the cmake command to be available during the build process. On Windows you also need NASM. The official BoringSSL documentation has more details.

Alternatively, you can use a custom build of BoringSSL by configuring the BoringSSL directory with the QUICHE_BSSL_PATH environment variable:

 $ QUICHE_BSSL_PATH="/path/to/boringssl" cargo build --examples

Building for Android

To build quiche for Android, you need the following:

  • Install the Android NDK (13b or higher), using Android Studio or directly.
  • Set the ANDROID_NDK_HOME environment variable to the NDK path, e.g.
 $ export ANDROID_NDK_HOME=/usr/local/share/android-ndk
  • Install the Rust toolchains for the Android architectures needed:
 $ rustup target add aarch64-linux-android arm-linux-androideabi armv7-linux-androideabi i686-linux-android x86_64-linux-android

Note that the minimum API level is 21 for all target architectures.

Depending on the NDK version used, follow one of the procedures below:

NDK version >= 19

For NDK version 19 or higher (21 recommended), you can build more simply using cargo-ndk. Install cargo-ndk (v2.0 or later) first:

 $ cargo install cargo-ndk

You can build the quiche library using the following procedure. Note that -t <architecture> and -p <NDK version> are mandatory.

 $ cargo ndk -t arm64-v8a -p 21 -- build --features ffi

See build_android_ndk19.sh for more information.

Note that building with NDK version 18 appears to be broken.

NDK version < 18

If you need to use NDK version < 18 (gcc), you can build quiche in the following way.

To prepare the cross-compiling toolchain, run the following command:

 $ tools/android/setup_android.sh

It will create a standalone toolchain for the arm64/arm/x86 architectures under the $TOOLCHAIN_DIR/arch directory. If the TOOLCHAIN_DIR environment variable is not set, the current directory is used.

Once it has run successfully, run the following script to build libquiche:

 $ tools/android/build_android.sh --features ndk-old-gcc

It will build binaries for aarch64, armv7 and i686. Parameters passed to this script are forwarded to cargo build. For example, to build a release binary with verbose logs:

 $ tools/android/build_android.sh --features ndk-old-gcc --release -vv

Building for iOS

To build quiche for iOS, you need the following:

  • Install Xcode command-line tools. You can install them with Xcode or with the following command:
 $ xcode-select --install
  • Install the Rust toolchain for iOS architectures:
 $ rustup target add aarch64-apple-ios x86_64-apple-ios
  • Install cargo-lipo:
 $ cargo install cargo-lipo

To build libquiche, run the following command:

 $ cargo lipo --features ffi

or

 $ cargo lipo --features ffi --release

The iOS build is tested with Xcode 10.1 and Xcode 11.2.

Building Docker images

To build the Docker images, run the following command:

 $ make docker-build

You can find the quiche Docker images on the following Docker Hub repositories:

cloudflare/quiche

Provides a server and client installed in /usr/local/bin.

cloudflare/quiche-qns

Provides the script to test quiche within the quic-interop-runner.

The latest tag is updated whenever quiche's master branch is updated.

Copyright

Copyright (C) 2018-2019, Cloudflare, Inc.

See COPYING for the license.

Comments
  • NGINX server doenst establish HTTP3 connection

    NGINX server doenst establish HTTP3 connection

    Im using macOS Catalina Version 10.15.7

    nginx version: nginx/1.16.1 (quiche-c2703b3) built by clang 11.0.3 (clang-1103.0.32.62) built with OpenSSL 1.1.0 (compatible; BoringSSL) (running with BoringSSL) TLS SNI support enabled configure arguments: --prefix=/Users/samy/NGINX-New/nginx-1.16.1 --build=quiche-c2703b3 --with-http_ssl_module --with-http_v2_module --with-http_v3_module --with-openssl=../quiche/deps/boringssl --with-quiche=../quiche

    Nginx Conf file: http { server { # Enable QUIC and HTTP/3. listen 443 quic reuseport;

        # Enable HTTP/2 (optional).
    listen 443 ssl http2;
    
        ssl_certificate      cert.pem;
        ssl_certificate_key  key.pem;
    
        # Enable all TLS versions (TLSv1.3 is required for QUIC).
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    
        # Request buffering in not currently supported for HTTP/3.
        proxy_request_buffering off;
    
        # Add Alt-Svc header to negotiate HTTP/3.
        add_header alt-svc 'h3-29=":443"; ma=86400';
      
    }
    

    }

    When i start the nginx server, and try to fetch the page from GOOGLE CHROME CANARY(with enable quic flag and quic-version=h3-29 set), it still goes through HTTP2 and not HTTP3.

    If i comment out listen 443 ssl http2(To enable HTTP/2), i get Connection refused in the browser. With curl it returns, HTTP/2 200 server: nginx/1.16.1 date: Tue, 10 Nov 2020 02:32:54 GMT content-type: text/html content-length: 608 last-modified: Mon, 09 Nov 2020 21:43:41 GMT etag: "5fa9b80d-260" alt-svc: h3-29=":443"; ma=86400 accept-ranges: bytes

    opened by Samy432 16
  • Nginx http3 closes connection with big html page

    Nginx http3 closes connection with big html page

    I got a problem while testing an nginx server patched with Quiche implementation of HTTP/3 with curl: when I try to send multiple consecutive request for a small html page (~1kb), nginx responds correctly

        [email protected]:~# ./curl/src/curl https://192.168.19.128?[1-5] -Ik --http3
    
    [1/5]: https://192.168.19.128?1 --> <stdout>
    --_curl_--https://192.168.19.128?1
    HTTP/3 200
    server: nginx/1.16.1
    date: Mon, 25 Nov 2019 13:44:21 GMT
    content-type: text/html
    content-length: 924
    last-modified: Mon, 25 Nov 2019 12:07:59 GMT
    etag: "5ddbc41f-39c"
    alt-svc: h3-23=":443"; ma=86400
    accept-ranges: bytes
    
    
    [2/5]: https://192.168.19.128?2 --> <stdout>
    --_curl_--https://192.168.19.128?2
    HTTP/3 200
    server: nginx/1.16.1
    date: Mon, 25 Nov 2019 13:44:21 GMT
    content-type: text/html
    content-length: 924
    last-modified: Mon, 25 Nov 2019 12:07:59 GMT
    etag: "5ddbc41f-39c"
    alt-svc: h3-23=":443"; ma=86400
    accept-ranges: bytes
    
    
    [3/5]: https://192.168.19.128?3 --> <stdout>
    --_curl_--https://192.168.19.128?3
    HTTP/3 200
    server: nginx/1.16.1
    date: Mon, 25 Nov 2019 13:44:21 GMT
    content-type: text/html
    content-length: 924
    last-modified: Mon, 25 Nov 2019 12:07:59 GMT
    etag: "5ddbc41f-39c"
    alt-svc: h3-23=":443"; ma=86400
    accept-ranges: bytes
    
    
    [4/5]: https://192.168.19.128?4 --> <stdout>
    --_curl_--https://192.168.19.128?4
    HTTP/3 200
    server: nginx/1.16.1
    date: Mon, 25 Nov 2019 13:44:21 GMT
    content-type: text/html
    content-length: 924
    last-modified: Mon, 25 Nov 2019 12:07:59 GMT
    etag: "5ddbc41f-39c"
    alt-svc: h3-23=":443"; ma=86400
    accept-ranges: bytes
    
    
    [5/5]: https://192.168.19.128?5 --> <stdout>
    --_curl_--https://192.168.19.128?5
    HTTP/3 200
    server: nginx/1.16.1
    date: Mon, 25 Nov 2019 13:44:21 GMT
    content-type: text/html
    content-length: 924
    last-modified: Mon, 25 Nov 2019 12:07:59 GMT
    etag: "5ddbc41f-39c"
    alt-svc: h3-23=":443"; ma=86400
    accept-ranges: bytes
    

    If I try to make a single request to a medium/big html file, nginx respond correctly again, but when I try to make multiple consecutive request to a medium/big html page (>=30kb), nginx stop responding after an arbitrary number of requests (2-5 requests normally). Here's an example made of 10 requests to the https://cloudflare-quic.com html page (which I downloaded on my server):

       [email protected]:~# ./curl/src/curl -Ik https://192.168.19.128/cloudflare.html?[1-10] --http3 -v
    
    [1/10]: https://192.168.19.128/cloudflare.html?1 --> <stdout>
    --_curl_--https://192.168.19.128/cloudflare.html?1
    *   Trying 192.168.19.128:443...
    * Sent QUIC client Initial, ALPN: h3-23
    * h3 [:method: HEAD]
    * h3 [:path: /cloudflare.html?1]
    * h3 [:scheme: https]
    * h3 [:authority: 192.168.19.128]
    * h3 [user-agent: curl/7.67.0-DEV]
    * h3 [accept: */*]
    * Using HTTP/3 Stream ID: 0 (easy handle 0x5614ee569460)
    > HEAD /cloudflare.html?1 HTTP/3
    > Host: 192.168.19.128
    > user-agent: curl/7.67.0-DEV
    > accept: */*
    >
    < HTTP/3 200
    HTTP/3 200
    < server: nginx/1.16.1
    server: nginx/1.16.1
    < date: Mon, 25 Nov 2019 13:53:43 GMT
    date: Mon, 25 Nov 2019 13:53:43 GMT
    < content-type: text/html
    content-type: text/html
    < content-length: 106072
    content-length: 106072
    < vary: Accept-Encoding
    vary: Accept-Encoding
    < etag: "5ddbdc21-19e58"
    etag: "5ddbdc21-19e58"
    < alt-svc: h3-23=":443"; ma=86400
    alt-svc: h3-23=":443"; ma=86400
    < accept-ranges: bytes
    accept-ranges: bytes
    
    <
    * Excess found: excess = 27523 url = /cloudflare.html (zero-length body)
    * Connection #0 to host 192.168.19.128 left intact
    
    [2/10]: https://192.168.19.128/cloudflare.html?2 --> <stdout>
    --_curl_--https://192.168.19.128/cloudflare.html?2
    * Found bundle for host 192.168.19.128: 0x5614ee56db00 [can multiplex]
    * Re-using existing connection! (#0) with host 192.168.19.128
    * Connected to 192.168.19.128 (192.168.19.128) port 443 (#0)
    * h3 [:method: HEAD]
    * h3 [:path: /cloudflare.html?2]
    * h3 [:scheme: https]
    * h3 [:authority: 192.168.19.128]
    * h3 [user-agent: curl/7.67.0-DEV]
    * h3 [accept: */*]
    * Using HTTP/3 Stream ID: 4 (easy handle 0x5614ee56b2b0)
    > HEAD /cloudflare.html?2 HTTP/3
    > Host: 192.168.19.128
    > user-agent: curl/7.67.0-DEV
    > accept: */*
    >
    * Got h3 for stream 0, expects 4
    * Got h3 for stream 0, expects 4
    * Got h3 for stream 0, expects 4
    * Got h3 for stream 0, expects 4
    [...]
    

    access.log of this requests:

    192.168.19.129 - - [26/Nov/2019:12:41:56 +0100] "HEAD /cloudflare.html?1 HTTP/3" 200 106072 "-" "curl/7.67.0-DEV" "-" 3d6f3588d11d45d4cffa3c9ee8e3dcbe
    192.168.19.129 - - [26/Nov/2019:12:41:56 +0100] "HEAD /cloudflare.html?2 HTTP/3" 200 106072 "-" "curl/7.67.0-DEV" "-" d4b886015b99af3c53618895959f90f2
    

    error.log is empty.

    It stucks on this screen repeating "Got h3 for stream 0, expects 4". Also I noticed that, when testing on smaller pages, that the smallest the file the bigger is the number of requests fullfilled before stop responding and start printing the error "Got h3 for stream x, expecting y", whith the relation that y=x+4. Also the access.log and the error.log are clean, meaning that it could maybe be some king of parameter missing in the server configuration, but I'm not sure about it. Does anyone have an idea of what the problem could be?

    My config

    nginx version:

    nginx version: nginx/1.16.1
    built by gcc 9.2.1 20191008 (Ubuntu 9.2.1-9ubuntu2)
    built with OpenSSL 1.1.0 (compatible; BoringSSL) (running with BoringSSL)
    TLS SNI support enabled
    configure arguments: 
    --prefix=/root/nginx-1.16.1 
    --with-http_ssl_module 
    --with-http_v2_module 
    --with-http_v3_module 
    --with-openssl=../quiche/deps/boringssl 
    --with-quiche=../quiche
    

    nginx.conf:

    user root;
    # you must set worker processes based on your CPU cores, nginx does not benefit from setting more than that
    worker_processes auto; #some last versions calculate it automatically
    
    # number of file descriptors used for nginx
    # the limit for the maximum FDs on the server is usually set by the OS.
    # if you don't set FD's then OS settings will be used which is by default 2000
    worker_rlimit_nofile 100000;
    
    # only log critical errors
    error_log logs/error.log crit;
    error_log  logs/error.log debug;
    error_log  logs/error.log  notice;
    error_log  logs/error.log  info;
    
    # provides the configuration file context in which the directives that affect connection processing are specified.
    events {
        # determines how much clients will be served per worker
        # max clients = worker_connections * worker_processes
        # max clients is also limited by the number of socket connections available on the system (~64k)
        worker_connections 4000;
    
        # optimized to serve many clients with each thread, essential for linux -- for testing environment
        use epoll;
    
        # accept as many connections as possible, may flood worker connections if set too low -- for testing environment
        multi_accept on;
    }
    
    http {
        # cache informations about FDs, frequently accessed files
        # can boost performance, but you need to test those values
        open_file_cache max=200000 inactive=20s;
        open_file_cache_valid 30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors on;
    
        # to boost I/O on HDD we can disable access logs
        access_log on;
    
        # copies data between one FD and other from within the kernel
        # faster than read() + write()
        sendfile on;
    
        # send headers in one piece, it is better than sending them one by one
        tcp_nopush on;
    
        # don't buffer data sent, good for small data bursts in real time
        tcp_nodelay on;
    
        # reduce the data that needs to be sent over network -- for testing environment
        gzip on;
        # gzip_static on;
        gzip_min_length 10240;
        gzip_comp_level 1;
        gzip_vary on;
        gzip_disable msie6;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types
            # text/html is always compressed by HttpGzipModule
            text/css
            text/javascript
            text/xml
            text/plain
            text/x-component
            application/javascript
            application/x-javascript
            application/json
            application/xml
            application/rss+xml
            application/atom+xml
            font/truetype
            font/opentype
            application/vnd.ms-fontobject
            image/svg+xml;
    
        # allow the server to close connection on non responding client, this will free up memory
        reset_timedout_connection on;
    
        # request timed out -- default 60
        client_body_timeout 10;
    
        # if client stop responding, free up memory -- default 60
        send_timeout 2;
    
        # server will close connection after this time -- default 75
        keepalive_timeout 30;
    
        # number of requests client can make over keep-alive -- for testing environment
        keepalive_requests 100000;
        
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
      ########################################################
    	########################################################  
            server {
    
    
            access_log  logs/access.log  main;
         	  sendfile on;
        	  tcp_nopush on;
        	  tcp_nodelay on;
        	  keepalive_timeout 65;
        	  types_hash_max_size 2048;
        	  # server_tokens off;
            gzip  on;
          
    		    # Enable QUIC and HTTP/3.
            listen 443 quic reuseport;
    
            # Enable HTTP/2 (optional).
            listen 443 ssl http2;
    
            ssl_certificate      certificate.pem;
            ssl_certificate_key  key.pem;
    
            # Enable all TLS versions (TLSv1.3 is required for QUIC).
            ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
            
            # Add Alt-Svc header to negotiate HTTP/3.
            add_header alt-svc 'h3-23=":443"; ma=86400';
     
            listen       80;
            server_name  localhost;
    
            location / {
                root   html;
                index  index.html index.htm;
            }
     
            error_page   500 502 503 504  /50x.html;
            location = /50x.html {
                root   html;
            } 
            ###Limits the maximum number of concurrent HTTP/3 streams in a connection.
            http3_max_concurrent_streams 256;
            
            ###Limits the maximum number of requests that can be served on a single HTTP/3 connection, 
            ###after which the next client request will lead to connection closing and the need of establishing a new connection.
            http3_max_requests 20000;
            
            ###Limits the maximum size of the entire request header list after QPACK decompression.
            http3_max_header_size 100000k;
            
            ###Sets the per-connection incoming flow control limit.
            http3_initial_max_data 2000000m;
            
            ###Sets the per-stream incoming flow control limit.
            http3_initial_max_stream_data 1000000m;
            
            ###Sets the timeout of inactivity after which the connection is closed.
            http3_idle_timeout 1500000m;
        }
     ########################################################
    	########################################################
    }
    

    Curl version

    curl 7.67.0-DEV (x86_64-pc-linux-gnu) libcurl/7.67.0-DEV BoringSSL zlib/1.2.11 nghttp2/1.39.2 quiche/0.1.0
    Release-Date: [unreleased]
    Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
    Features: AsynchDNS HTTP2 HTTP3 HTTPS-proxy IPv6 Largefile libz NTLM NTLM_WB SSL UnixSockets
    
    opened by GiuseppeDiPalma 16
  • Why cant webtransport trigger http3 data event?

    Why cant webtransport trigger http3 data event?

    I changed the headers in accordance to https://github.com/cloudflare/quiche/issues/1132#issuecomment-1018293465 and the connection got established in https://googlechrome.github.io/samples/webtransport/client.html. But when I send a bidirectional stream, the event quiche::h3::Event::Data is not triggered. I think the bidirectional frame has this format: https://www.ietf.org/id/draft-ietf-webtrans-http3-02.html#name-bidirectional-streams. Any help would be appreciated.

    opened by kalradivyanshu 15
  • Unexpected http0.9 protocol when using cURL

    Unexpected http0.9 protocol when using cURL

    :wave: Hey There (Thanks for the great library!),

    I seem to be running into something unexpected while testing quiche (through the C FFI), and using a locally built cURL. When using cURL I must specify the: --http0.9 option, but only when using my version (not the rust example).

    In my C++ Code I run:

     // _config has been initialized with: quiche_config_new(QUICHE_PROTOCOL_VERSION);
    quiche_config_set_application_protos(
          _config.get(),
          const_cast<uint8_t *>(
              reinterpret_cast<const uint8_t *>(QUICHE_H3_APPLICATION_PROTOCOL)),
          sizeof(QUICHE_H3_APPLICATION_PROTOCOL) - 1);
    

    The definition of: QUICHE_H3_APPLICATION_PROTOCOL comes from quiche.h, and seems to be properly set to: #define QUICHE_H3_APPLICATION_PROTOCOL "\x05h3-23", which is what I expect.

    When I try to use cURL though I get:

    *   Trying ::1:8080...
    * Sent QUIC client Initial, ALPN: h3-23
    * quiche: recv() unexpectedly returned -1 (errno: 111, socket 3)
    * connect to ::1 port 8080 failed: Connection refused
    *   Trying 127.0.0.1:8080...
    * Sent QUIC client Initial, ALPN: h3-23
    * h3 [:method: GET]
    * h3 [:path: /]
    * h3 [:scheme: https]
    * h3 [:authority: localhost:8080]
    * h3 [user-agent: curl/7.67.0-DEV]
    * h3 [accept: */*]
    * Using HTTP/3 Stream ID: 0 (easy handle 0x55e202f95280)
    > GET / HTTP/3
    > Host: localhost:8080
    > user-agent: curl/7.67.0-DEV
    > accept: */*
    > 
    * Received HTTP/0.9 when not allowed
    
    * Connection #0 to host localhost left intact
    curl: (1) quiche: recv() unexpectedly returned -1 (errno: 111, socket 3)
    

    This error goes away if I provide the: --http0.9 flag to cURL, and the request completes successfully.

    When running the example http-server written in rust: ./target/debug/examples/http3-server --listen 0.0.0.0:8080 with cURL I am able to successfully query without providing the http0.9 flag:

    ./src/curl -k --tlsv1.3 --http3 https://localhost:8080/ && echo ""
    Not Found!
    

    (It should be noted while running the C example with: ./examples/http3-server 0.0.0.0 8080, I get a straight up failure).

    I'm curious if this is the only place I call: quiche_config_set_application_protos, and it doesn't (to the best of my knowledge) include http0.9, how is curl seeing this 0.9 data? Is there a config option I need to set when using the C API?

    opened by Mythra 15
  • Nginx configures multiple sites to share 443/udp ports

    Nginx configures multiple sites to share 443/udp ports

    [email protected] conf 18:30:35 # pwd
    /usr/local/nginx/conf
    [email protected] conf # grep -n quic nginx.conf
    177:        listen      443 quic    reuseport;
    248:        listen      443 quic    reuseport;
    [email protected] conf # nginx -t               
    nginx: [emerg] duplicate listen options for 0.0.0.0:443 in /usr/local/nginx/conf/nginx.conf:248
    nginx: configuration file /usr/local/nginx/conf/nginx.conf test failed
    

    Is this a bug?

    opened by xmapst 15
  • bubble up STOP_SENDING frame to H3 event

    bubble up STOP_SENDING frame to H3 event

    This is trying to address issue #585. The QUIC frame STOP_SENDING triggers a H3 event of StopSending, so that the application can react accordingly.

    Update:

    • Handled TODO items in stream_shutdown to send out STOP_SENDING and RESET_STREAM frames.
    • Reply RESET_STREAM frame when received STOP_SENDING frame.
    • Bubble up StopSending and ResetStream H3 events only for user request streams.
    opened by keepsimple1 14
  • Expose error code / message of APPLICATION_CLOSE when receiving the frame

    Expose error code / message of APPLICATION_CLOSE when receiving the frame

    It would be very useful to be able to get access to the error code / message of an received APPLICATION_CLOSE frame. This way a user of quiche can better understand on why a connection was closed in the first place.

    For example I am using quiche in netty and currently building an h3 layer on top of it in which I would love to expose some more details about why a remote peer closed the connection

    opened by normanmaurer 13
  • quiche_conn_recv() == -10 on windows

    quiche_conn_recv() == -10 on windows

    I run client.c and server.c on windows but I got this error: failed to process packet: -10 returned from the quiche_conn_recv() function

    I set port:5555 and host="localhost"

    I can't understand why this error occurred? Can I get more debug info or logging somehow to better understand the details of this failure?

    Capture20 Capture21

    opened by foroughmajidi 13
  • curl/libcurl not supporting --http3

    curl/libcurl not supporting --http3

    Hello,

    While I understand this may not be the most appropriate place or repo for my issue I am just seeking help with running --http3 command following the Cloudflare Quick & HTTP3 article (https://blog.cloudflare.com/http3-the-past-present-and-future/) and I'm hoping this is the fastest and most active way.

    I have followed the instruction for "If you're running macOS, we've also made it easy to install an HTTP/3 equipped version of curl via Homebrew"

    Step 1) Install Homebrew Curl

    $ brew install --HEAD -s https://raw.githubusercontent.com/cloudflare/homebrew-cloudflare/master/curl.rb

    Step 2) Added Homebrew curl path to bash profile and checked along with version.

    $ curl -V
    curl 7.69.0-DEV (x86_64-apple-darwin18.7.0) libcurl/7.69.0-DEV SecureTransport zlib/1.2.11 brotli/1.0.7 libidn2/2.3.0 librtmp/2.3
    Release-Date: [unreleased]
    Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp smb smbs smtp smtps telnet tftp
    Features: AsynchDNS brotli IDN IPv6 Largefile libz NTLM NTLM_WB SSL UnixSockets
    
    $ which curl
    /usr/local/opt/curl/bin/curl
    
    $ curl-config --version
    libcurl 7.69.0-DEV
    

    Step 4) THE ISSUE

    When I run the example command it says libcurl version doesn't support this.

    $ curl -I https://blog.cloudflare.com/ --http3
    curl: option --http3: the installed libcurl version doesn't support this
    curl: try 'curl --help' or 'curl --manual' for more information
    

    I can find very little information on this online...So I'm hoping someone would be able to help.

    Tony

    opened by tonyclemmey 13
  • Why is the Connection boxed by default?

    Why is the Connection boxed by default?

    Hey,

    I was just skimming over the API and I was wondering why connect and accept return a boxed Connection? Connection is neither a trait object nor dynamically sized, so why the need for the box?

    opened by NeoLegends 13
  • follow cloudflare quiche to enable nginx support http3. but can not access it from curl(build with ngtcp2) or chrome(canary)

    Hi all

    I just followed this guide https://blog.cloudflare.com/experiment-with-http-3-using-nginx-and-quiche/ to enable HTTP/3 support in nginx.

    You can see below that nginx listens on 443 for both TCP and UDP:

        [[email protected] sbin]# netstat -nalp | grep :443
        tcp        0      0 0.0.0.0:443      0.0.0.0:*      LISTEN      18549/nginx: master
        udp        0      0 0.0.0.0:443      0.0.0.0:*                  18549/nginx: master

    Here are the configure parameters:

        [[email protected] sbin]# ./nginx -V
        nginx version: nginx/1.16.1
        built by gcc 8.5.0 20210514 (Red Hat 8.5.0-4) (GCC)
        built with OpenSSL 1.1.0 (compatible; BoringSSL) (running with BoringSSL)
        TLS SNI support enabled
        configure arguments: --prefix=/webapps/nginx --sbin-path=/webapps/nginx/sbin/nginx --conf-path=/webapps/nginx/config/nginx.conf --error-log-path=/webapps/nginx/logs/nginx_error.log --http-log-path=/webapps/nginx/logs/nginx_access.log --pid-path=/webapps/nginx/logs/nginx.pid --user=nginx --group=nginx --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-mail --with-mail_ssl_module --with-http_v2_module --with-http_v3_module --with-openssl=../quiche/deps/boringssl --with-quiche=../quiche

    Here is the config file:

    server {
        listen 443 quic reuseport;
        listen 443 ssl http2;
        server_name www.kaixinduole.com;
        access_log logs/www.kaixinduole.com.https.log main;
        ssl_certificate /cert/www.kaixinduole.com/www.kaixinduole.com.pem;
        ssl_certificate_key /cert/www.kaixinduole.com/www.kaixinduole.com.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        #ssl_protocols TLSv1.3;
        add_header alt-svc 'h3-23=":443"; ma=86400';
        location / {
            root   html;
            proxy_pass http://backend;
            index  index.html index.htm;
        }
    
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
    
    server {
        listen       80 http2;
        server_name  www.kaixinduole.com;
        access_log  logs/www.kaixinduole.com.http.log  main;
        #ssl_certificate /cert/www.kaixinduole.com/www.kaixinduole.com.pem;
        #ssl_certificate_key /cert/www.kaixinduole.com/www.kaixinduole.com.key;
        #ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        #add_header alt-svc 'h3-23=":443"; ma=86400';
    
        location / {
            root   html;
            #proxy_pass http://backend;
            index  index.html index.htm;
        }
    
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
    server {
        listen       80;
        server_name  www.bianmingkai.com;
        access_log  logs/www.bianmingkai.com.http.log  main;
        #ssl_certificate /cert/www.bianmingkai.com/www.bianmingkai.com.pem;
        #ssl_certificate_key /cert/www.bianmingkai.com/www.bianmingkai.com.key;
        #ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        #add_header alt-svc 'h3-23=":443"; ma=86400';
    
        location / {
            root   html;
            #proxy_pass http://backend;
            index  index.html index.htm;
        }
    }

    But I can not access it via curl (curl already supports HTTP/3):

        [email protected]:/usr/local/src/curl/src# ./curl --http3 https://nghttp2.org:4433/ -I
        HTTP/3 200
        server: nghttp3/ngtcp2 server
        content-type: text/html
        content-length: 6616

        [email protected]:/usr/local/src/curl/src# ./curl --http3 https://www.kaixinduole.com:443/ -I
        curl: (56) Failed to connect to www.kaixinduole.com port 443 after 8 ms: Failure when receiving data from the peer

    Could you please help point out the reason? Thanks very much.

    opened by ShawnBian 12
  • how can I improve the stream_capacity?

    I want to send 1MB in a request, but the capacity is 13411 bytes. This is my code (based on the h3 examples):

        println!("sending HTTP request {:?}", req);
        let stream_id = h3_conn.send_request(&mut conn, &req, false).unwrap();
        println!("---- stream_id {}", stream_id);
        println!("---- stream_capacity {:?}", conn.stream_capacity(stream_id));

        let mut body_written = 0;
        let msg = vec![0; 1024 * 100];
        loop {
            let written = match h3_conn.send_body(&mut conn, stream_id, &msg, false) {
                Ok(v) => v,
                Err(quiche::h3::Error::Done) => {
                    println!("send_body() Done");
                    0
                },
                Err(e) => {
                    println!("{} send_body() failed: {:?}", conn.trace_id(), e);
                    return;
                },
            };
            println!("---- send_body {}", written);
            body_written += written;
            // if written >= (1024*1024) {
                break;
            // }
        }

        println!("---- total send_body {}", body_written);
        req_sent = true;

    This is the log:

        sending HTTP request [":method: POST", ":scheme: https", ":authority: 172.168.1.16", ":path: /upload", "user-agent: quiche"]
        ---- stream_id 0
        ---- stream_capacity Ok(13411)
        ---- send_body 13406
        ---- total send_body 13406

    opened by zhangbtciab 0
  • how to upload a img with http3?

    I upload an image with the call h3_conn.send_body(&mut conn, stream_id, &body, true).unwrap(); and I append the header (b"content-type", b"multipart/form-data"). quiche HTTP/3 is the client and nginx-quic is the server. I received response code 415.

    This is the server log:

        Content-Type is not multipart/form-data and no Content-Disposition header found, client: 172.168.1.35, server: nginx-quic, request: "POST /upload HTTP/3.0"

    opened by zhangbtciab 0
  • Can't download big files by using quiche integrated with curl

    I have a file server (caddy) running on localhost using the HTTP/3 protocol, serving one file of 2319MiB. When I try to download that file using curl with HTTP/3 support, the download stops at 129MiB (see attached screenshot). The same file can be downloaded successfully using the HTTP/2 protocol or a web browser (Firefox in my case).

    Can anyone give me insights into what exactly is happening here and how to resolve this issue?

    opened by GauravChawda 0
  • some questions about the flamegraph of quiche-server

    When I profiled quiche-server, I found that a lot of time was spent in quiche::h3::qpack::huffman::Decoder::decode4. Why are so many memcpys generated here?

    (flamegraph image attached)

    opened by yejingx 7
  • Is it possible to limit the rate of a stream?

    I'm using quiche to run a QUIC/H3 server. Sometimes we want to rate limit (say, at most 2Mbps) a stream (an HTTP POST) from a client; is that possible with quiche?

    opened by keepsimple1 0
Releases(0.16.0)
  • 0.16.0(Oct 13, 2022)

  • 0.15.0(Oct 7, 2022)

    Breaking Changes:

    • Due to the changes needed to support connection migration, many APIs now require an additional SocketAddr parameter, including accept(), connect(), RecvInfo, SendInfo.

    Highlights:

    • Added support for connection migration.
    • Added support for the HTTP/3 PRIORITY_UPDATE frame (see h3::Connection::send_priority_update_for_request()).
    • Added support for the BBR congestion control algorithm (note: this requires the application to also support pacing).
    • Many more bug fixes and performance improvements.

    Full changelog at https://github.com/cloudflare/quiche/compare/0.14.0...0.15.0

    Source code(tar.gz)
    Source code(zip)
  • 0.14.0(May 23, 2022)

    Breaking Changes:

    • The Connection structure doesn't need to be pinned anymore, so the accept() and connect() functions now return a plain Connection rather than wrap it in Box<Pin<...>>.

    Highlights:

    • Fixed a potential crash that can be triggered by a stream being stopped.
    • Fixed UB caused by mutable Connection references being aliased when executing BoringSSL callbacks.
    • Added HTTP/3 events to qlog.

    Full changelog at https://github.com/cloudflare/quiche/compare/0.13.0...0.14.0

    Source code(tar.gz)
    Source code(zip)
  • 0.13.0(May 16, 2022)

    Breaking Changes:

    Highlights:

    • Updated delivery rate estimation to draft-01 and HyStart++ to draft-04.
    • Added support for the PRIORITY_UPDATE HTTP/3 frame: a new PriorityUpdate value has been added to the h3::Event enum that is returned by h3::Connection::poll() to notify the application when a priority update has been received, and the new h3::Connection::take_last_priority_update() method has been added to let applications retrieve the priority value for a specific prioritized element.

    Full changelog at https://github.com/cloudflare/quiche/compare/0.12.0...0.13.0

    Source code(tar.gz)
    Source code(zip)
  • 0.12.0(Feb 4, 2022)

  • 0.11.0(Jan 27, 2022)

    Breaking Changes:

    • The quiche git repository has been converted to a workspace layout. This should only affect projects that import the git repository directly, rather than using the package published on crates.io.
    • The quiche_conn_session() function has been temporarily removed from the FFI API due to a bug that would cause applications to access already-freed memory. The functionality will be re-implemented in a future release.

    Highlights:

    • Window-based flow control implementation. Applications can limit the amount of memory allocated to store incoming data by using the Config::set_max_connection_window() and Config::set_max_stream_window() APIs.
    • Update to QLOG v0.3, including support for the JSON-SEQ serialization format.
    • Many bug fixes and performance improvements.

    Full changelog at https://github.com/cloudflare/quiche/compare/0.10.0...0.11.0

    Source code(tar.gz)
    Source code(zip)
  • 0.10.0(Sep 23, 2021)

    Breaking Changes:

    • The new StreamReset value has been added to the Error enum. This can be returned by Connection::stream_recv() to notify the application when the peer has reset a particular stream.
    • The new Reset value has been added to the h3::Event enum. This is returned by h3::Connection::poll() to notify the application when the peer has reset a particular stream.
    • The h3::Config::set_max_header_list_size() method was renamed to h3::Config::set_max_field_section_size() to align to the renaming of the corresponding HTTP/3 setting in the spec.
    • Support for building with Android NDK < 19 was dropped.

    Highlights:

    • Support for Proportional Rate Reduction (RFC6937) for the CUBIC congestion control algorithm.
    • Support for Hystart++ draft-03 for both CUBIC and Reno.
    • Support for loss recovery adaptive packet reordering thresholds.
    • New APIs added to send and receive DATAGRAM frames as Vec<u8> to avoid copying data (see Connection::dgram_send_vec() and Connection::dgram_recv_vec()).
    • Support for qlog draft-02.
    • New APIs added to: expose the server name requested by the client (Connection::server_name()), expose the locally-generated connection error (Connection::local_error()), expose whether the connection timed-out (Connection::is_timed_out()).
    • Many bug fixes and performance improvements.

    Full changelog at https://github.com/cloudflare/quiche/compare/0.9.0...0.10.0

    Source code(tar.gz)
    Source code(zip)
  • 0.9.0(Jun 8, 2021)

    Breaking Changes:

    • Many public APIs have been updated to support network path awareness (see below):
      • The accept() and connect() functions now take a std::net::SocketAddr parameter representing the address of the peer on the other side of the connection.
      • The Connection::recv() method takes a RecvInfo parameter which should be populated with the address the UDP datagram was received from. Note that additional fields may be added in the future to support new features.
      • The Connection::send() method returns an additional SendInfo value which carries additional metadata for the newly generated QUIC packet, for example the address it needs to be sent to. Note that additional fields may be added in the future to support new features.
    • The InvalidStreamState error value has been updated to include the stream ID that caused the error.
    • The h3::Header and h3::HeaderRef types now use byte vectors (Vec<u8>) rather than strings to represent header names and values. Their constructors have been updated accordingly to take byte slices (&[u8]) rather than string slices (&str). The FFI API is unaffected.

    Highlights:

    • Support for QUIC v1 (RFC 8999, 9000, 9001, 9002)
    • Support for resuming connections and sending early data (0-RTT resumption) on the client (previously this was only supported on the server-side).
    • Support for network path awareness, which lays the foundation for network migration support (to be added in a future release).
    • Support for calculating packet pacing delays (requires support on the application). See the documentation for the at field of the SendInfo structure for more details.
    • Several improvements to the CUBIC congestion control implementation.
    • Many bug fixes and performance improvements.

    Full changelog at https://github.com/cloudflare/quiche/compare/0.8.1...0.9.0

    Source code(tar.gz)
    Source code(zip)
  • 0.8.1(Apr 15, 2021)

    Highlights:

    • The ConnectionError structure's fields are now properly accessible by applications.
    • The HTTP/3 Finished event is no longer triggered multiple times when a misbehaving application repeatedly calls the recv_body() method after Error::Done is returned.

    Full changelog at https://github.com/cloudflare/quiche/compare/0.8.0...0.8.1

    Source code(tar.gz)
    Source code(zip)
  • 0.8.0(Apr 13, 2021)

    Breaking Changes:

    • The HTTP/3 Data and Datagram events are now edge-triggered like the other HTTP/3 events. This means they will only be reported by the poll() method once; applications need to consume all of the buffered data before those events are re-armed.

    Highlights:

    Full changelog at https://github.com/cloudflare/quiche/compare/0.7.0...0.8.0

    Source code(tar.gz)
    Source code(zip)
  • 0.7.0(Feb 1, 2021)

    Breaking Changes:

    Highlights:

    Full changelog at https://github.com/cloudflare/quiche/compare/0.6.0...0.7.0

    Source code(tar.gz)
    Source code(zip)
  • 0.6.0(Oct 26, 2020)

    Highlights:

    • Initial support for the DATAGRAM extension.
    • New API for HTTP/3 graceful connection shutdown (GOAWAY).
    • Improved congestion control algorithms with appropriate byte counting.
    • Many bug fixes and performance improvements.

    Full changelog at https://github.com/cloudflare/quiche/compare/0.5.1...0.6.0

    Source code(tar.gz)
    Source code(zip)
  • 0.5.1(Jul 17, 2020)

    Highlights:

    • Fixed potential crash when PTO is triggered
    • Relaxed build configuration to avoid failures when building BoringSSL due to spurious warnings.
    • Removed dependency on Go and Perl during build.

    Full changelog at https://github.com/cloudflare/quiche/compare/0.5.0...0.5.1

    Source code(tar.gz)
    Source code(zip)
  • 0.5.0(Jul 14, 2020)

    Highlights:

    • Support for draft-28 and draft-29 (alongside draft-27).
    • Support for transport streams prioritization, as well as HTTP/3 extensible priorities.
    • Many bug fixes and performance improvements.

    Full changelog at https://github.com/cloudflare/quiche/compare/0.4.0...0.5.0

    Source code(tar.gz)
    Source code(zip)
  • 0.4.0(May 15, 2020)

    Highlights:

    • Support for configuring custom trust stores for validating certificates.
    • Support for generating qlog traces.
    • Support for sending DATA_BLOCKED and STREAM_DATA_BLOCKED frames.
    • CUBIC congestion controller (in addition to the Reno one).
    • Support for Hystart++ for both Reno and CUBIC controllers.
    • Dropped support for draft versions older than draft-27.
    • Many bug fixes and performance improvements.

    Full changelog at https://github.com/cloudflare/quiche/compare/0.3.0...0.4.0

    Source code(tar.gz)
    Source code(zip)
  • 0.3.0(Mar 13, 2020)

    Highlights:

    • Support for draft-25 and draft-27 (alongside draft-23 and draft-24).
    • Preliminary scaffolding for supporting multiple congestion control algorithms.
    • Support for newer Android NDKs.
    • More aggressive backpressure on stream writes.
    • Delivery rate estimation exposed via Stats API.
    • Various bug fixes and performance improvements.

    Full changelog at https://github.com/cloudflare/quiche/compare/0.2.0...0.3.0

    Source code(tar.gz)
    Source code(zip)
  • 0.2.0(Jan 8, 2020)

    Highlights:

    • Support for draft-24 and draft-23.
    • Support for receiving 0-RTT data.
    • Loss recovery improvements.
    • Misc. bug fixes.

    Full changelog at https://github.com/cloudflare/quiche/compare/0.1.0...0.2.0

    Source code(tar.gz)
    Source code(zip)