HTTP 2.0 client & server implementation for Rust.

Overview

H2

A Tokio-aware HTTP/2 client & server implementation for Rust.


More information about this crate can be found in the crate documentation.

Features

  • Client and server HTTP/2 implementation.
  • Implements the full HTTP/2 specification.
  • Passes h2spec.
  • Focus on performance and correctness.
  • Built on Tokio.

Non goals

This crate is intended to only be an implementation of the HTTP/2 specification. It does not handle:

  • Managing TCP connections
  • HTTP/1.1 upgrade
  • TLS
  • Any feature not described by the HTTP/2 specification.

This crate is now used by hyper, which provides all of these features.

Usage

To use h2, first add this to your Cargo.toml:

[dependencies]
h2 = "0.3"

Next, add this to your crate (on Rust 2018 or later, no `extern crate h2;` declaration is needed):

use h2::server::Connection;

fn main() {
    // ...
}
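
A slightly fuller sketch may help orient new users: a minimal server loop built from the `handshake`, `accept`, and `send_response` calls in the h2 0.3 server API. The bind address and the empty 200 response are placeholders.

use h2::server;
use http::{Response, StatusCode};
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let listener = TcpListener::bind("127.0.0.1:5928").await?;
    loop {
        let (socket, _peer_addr) = listener.accept().await?;
        tokio::spawn(async move {
            // Drive the HTTP/2 connection preface and SETTINGS exchange.
            let mut connection = server::handshake(socket).await.expect("handshake");
            // Each accepted item is one inbound HTTP/2 stream.
            while let Some(result) = connection.accept().await {
                let (_request, mut respond) = result.expect("accept");
                let response = Response::builder()
                    .status(StatusCode::OK)
                    .body(())
                    .expect("response");
                // `true` ends the stream immediately (no response body).
                respond.send_response(response, true).expect("send_response");
            }
        });
    }
}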

FAQ

How does h2 compare to solicit or rust-http2?

The h2 library has implemented more of the details of the HTTP/2 specification than any other Rust library. It also passes the h2spec set of tests. The h2 library is rapidly approaching "production ready" quality.

Besides the above, Solicit is built on blocking I/O and does not appear to be actively maintained.

Is this an embedded Java SQL database engine?

No.

Comments
  • Redesign the h2::Error type

    Yea, I think we'd need to redesign the h2::Error type some, so that it can include if it's a stream error, or a connection error (GOAWAY). Then we'd be better equipped to answer that programmatically.

    Originally posted by @seanmonstar in https://github.com/hyperium/hyper/issues/2500#issuecomment-821585355

    AFAICT we want DynConnection::error and ConnectionInner::error to at the very least be an Option<frame::GoAway> instead of a plain Option<Reason>.
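
    A hypothetical shape for such a redesign (none of these names exist in h2's public API; this is only a sketch of the stream/connection distinction being asked for):

    /// Hypothetical error kind distinguishing stream- from connection-level
    /// failures; not part of h2 today.
    #[derive(Debug)]
    pub enum H2ErrorKind {
        /// A RST_STREAM affected only this stream.
        Stream { reason: u32 },
        /// A GOAWAY tore down the whole connection.
        Connection { reason: u32, last_stream_id: u32, debug_data: Vec<u8> },
        /// An I/O failure on the underlying transport.
        Io(std::io::Error),
    }

    impl H2ErrorKind {
        /// Callers could then branch programmatically, e.g. retry requests
        /// that died to a graceful GOAWAY but not stream-level resets.
        pub fn is_go_away(&self) -> bool {
            matches!(self, H2ErrorKind::Connection { .. })
        }
    }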

    opened by nox 16
  • Incorrect connection error on implicit stream close from window update

    The server is allowed to send window updates for streams which are (locally) already closed, which is handled here:

    https://github.com/carllerche/h2/blob/master/src/proto/streams/streams.rs#L398

    However, in the case where the received ID is higher than self.next_stream_id, this causes the connection to error with Reason::PROTOCOL_ERROR, rather than implicitly closing any idle streams.

    According to the HTTP/2 spec,

    WINDOW_UPDATE can be sent by a peer that has sent a frame bearing the END_STREAM flag. This means that a receiver could receive a WINDOW_UPDATE frame on a "half-closed (remote)" or "closed" stream. A receiver MUST NOT treat this as an error (see Section 5.1).

    In particular, treating it as a connection error causes all simultaneous requests to fail the next time we poll the connection, even if they would have succeeded.
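
    A minimal sketch (assumed types, not h2's internals) of the leniency RFC 7540 asks for here:

    #[derive(Debug)]
    enum StreamState {
        Open,
        HalfClosedRemote,
        Closed,
    }

    fn on_window_update(state: &StreamState, increment: u32) -> Result<(), &'static str> {
        match state {
            // Stream still open: credit the peer-granted send window.
            StreamState::Open => {
                let _ = increment; // apply to the stream's flow-control window
                Ok(())
            }
            // RFC 7540 §6.9: WINDOW_UPDATE on a "half-closed (remote)" or
            // "closed" stream MUST NOT be treated as an error; drop the frame.
            StreamState::HalfClosedRemote | StreamState::Closed => Ok(()),
        }
    }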

    opened by rbtying 16
  • Add `stream_id` accessors to public API types

    Problem:

    Applications may want to access the underlying h2 stream ID for diagnostics, etc. Stream IDs were not previously exposed in public APIs.

    Solution:

    The frame::StreamId type is now publicly exported and has RustDoc comments. The public API types SendStream, RecvStream, ReleaseCapacity, client::ResponseFuture, and server::SendResponse now all have stream_id methods which return the stream ID of the corresponding stream.
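
    A hedged usage sketch of these accessors (stream_id is exposed on SendStream and RecvStream in h2 0.3; treat the exact paths as approximate):

    async fn send_and_trace(
        client: &mut h2::client::SendRequest<bytes::Bytes>,
        request: http::Request<()>,
    ) -> Result<(), h2::Error> {
        // `send_request` hands back the response future plus the send-side
        // stream handle.
        let (response, stream) = client.send_request(request, true)?;
        // Record the underlying HTTP/2 stream id for diagnostics.
        println!("request sent on stream {:?}", stream.stream_id());
        let _response = response.await?;
        Ok(())
    }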

    Closes #289.

    Signed-off-by: Eliza Weisman [email protected]

    enhancement 
    opened by hawkw 16
  • Split Travis build into stages

    This PR rewrites our Travis config to use Travis' Build Stages feature. It also includes my PR with fixes to codecov.io coverage uploading (#61). I've rewritten the step for publishing RustDoc to GitHub Pages to use a Travis Deploy stage, removing the dependency on travis-cargo.

    Closes #59 Closes #61

    opened by hawkw 16
  • Panics in browser environment

    Panic happens due to usage of Instant::now() which is not implemented in the stdlib for the wasm32-unknown-unknown target.

    A solution would be to use a more general wasm-timer crate, which re-exports std::time::Instant for native targets and uses web_sys to implement it for wasm32.
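
    A sketch of that substitution (assuming the wasm-timer crate as a dependency):

    // Use std's Instant natively, and wasm-timer's re-implementation on
    // wasm32, where std::time::Instant::now() panics.
    #[cfg(not(target_arch = "wasm32"))]
    use std::time::Instant;
    #[cfg(target_arch = "wasm32")]
    use wasm_timer::Instant;

    fn measure() {
        let start = Instant::now(); // no longer panics on wasm32-unknown-unknown
        // ... work ...
        println!("took {:?}", start.elapsed());
    }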

    Here is a patch which I have been using for a while in production: https://github.com/boxdot/h2/commit/a14c89c72ed5725c1626a22cc07f996b617d6d5c.

    Please let me know if it is acceptable, then I will create a PR.

    opened by boxdot 13
  • Requests missing an :authority pseudo-header return PROTOCOL_ERROR

    We are using nghttpx to upgrade requests from HTTP to H2. nghttpx omits the :authority pseudo-header when creating the request to forward on via H2; this appears to be consistent with the spec:

    The :authority pseudo-header field includes the authority portion of the target URI ([RFC3986], Section 3.2). The authority MUST NOT include the deprecated userinfo subcomponent for http or https schemed URIs.

    To ensure that the HTTP/1.1 request line can be reproduced accurately, this pseudo-header field MUST be omitted when translating from an HTTP/1.1 request that has a request target in origin or asterisk form (see [RFC7230], Section 5.3). Clients that generate HTTP/2 requests directly SHOULD use the :authority pseudo-header field instead of the Host header field. An intermediary that converts an HTTP/2 request to HTTP/1.1 MUST create a Host header field if one is not present in a request by copying the value of the :authority pseudo-header field.

    However, it appears that h2 incorrectly rejects the request with PROTOCOL_ERROR when this happens. It also seems like there should be some logging around this code path, but I couldn't get it to trigger.
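
    A sketch (using the http crate, not h2's actual validation code) of the fallback the spec implies, deriving the authority from the Host header when the :authority pseudo-header is absent:

    fn effective_authority(req: &http::Request<()>) -> Option<&str> {
        req.uri()
            .authority()
            .map(|a| a.as_str())
            // Per RFC 7540 §8.1.2.3, requests translated from HTTP/1.1 carry
            // Host instead of :authority.
            .or_else(|| req.headers().get(http::header::HOST)?.to_str().ok())
    }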

    bug 
    opened by zackangelo 13
  • Help? It hangs

    I've implemented a simple example using h2 by copying client and server examples into a single binary.

    Server is the exact copy of the example, and the client is modified to do requests in a loop.

    It hangs after 5957 iterations.

    How to reproduce:

    # git clone https://github.com/stepancheg/h2-hang/
    # cd h2-hang
    # cargo run
    

    h2.rs

    Output is:

    ...
    5957
    GOT request: Request { method: GET, uri: https://localhost:8080/, version: HTTP/2.0, headers: {}, body: RecvStream { inner: ReleaseCapacity { inner: OpaqueStreamRef { stream_id: StreamId(11915), ref_count: 2 } } } }
    >>>> sending data
    GOT RESPONSE: Response { status: 200, version: HTTP/2.0, headers: {}, body: RecvStream { inner: ReleaseCapacity { inner: OpaqueStreamRef { stream_id: StreamId(11915), ref_count: 2 } } } }
    GOT CHUNK = b"hello wo"
    

    I'd say it is a bug, but probably I missed something.

    opened by stepancheg 13
  • Be more lenient with streams in the `pending_send` queue.

    The is_peer_reset() check doesn't quite cover all the cases where we call clear_queue, such as when we call recv_err. Instead of trying to make the check more precise, let's gracefully handle spurious entries in the queue.
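
    The shape of that leniency, as a self-contained sketch (assumed types, not h2's actual queue):

    struct PendingStream {
        id: u32,
        closed: bool,
    }

    fn flush_pending_send(queue: &mut Vec<PendingStream>) {
        while let Some(stream) = queue.pop() {
            if stream.closed {
                // A spurious entry (e.g. cleared by recv_err after being
                // queued); skip it instead of treating it as a logic error.
                continue;
            }
            println!("sending queued frames for stream {}", stream.id);
        }
    }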

    This fixes the issue I mentioned at https://github.com/carllerche/h2/pull/258#issuecomment-381768145.

    opened by goffrie 12
  • Http2 protocol violation

    Hello,

    I am reporting an issue I encountered while using the tonic grpc server (v0.8.0). We use nginx as a gRPC reverse proxy and hit the following corner case. Full details are here: https://trac.nginx.org/nginx/ticket/2376 (you can also find a pcap capture of the issue in the ticket).

    With this setup [client] === grpc stream (with ssl) ===> [nginx] === grpc stream (cleartext) ===> [backend]

    and considering the following grpc service description

    service Agent {
         rpc StreamToServer(stream Empty) returns (Empty) {}
    }
    

    If the http2 server generated by tonic responds early to the client without fully consuming the input stream:

    impl Agent {
        async fn stream_to_server (
            &self,
            request: Request<Streaming<Empty>>,
        ) -> Result<Response<Empty>, Status> {
            let _ = request.into_inner();
            // we don't care of the request at all
            // Just sending back our response before end of client stream
            Ok(Response::new(Empty {}))
        }
    }
    

    The following frames are emitted by tonic/hyper/h2 in response to the call:

    HEADERS[1],  DATA[1] (GRPC empty data),  HEADERS[1] (TRAILING with flag end_stream) , RST_STREAM(error: CANCEL)[1]
    

    This specific sequence of frames causes nginx to misbehave and not forward the DATA & RST_STREAM back to the client. After discussion with an nginx maintainer, this is caused by the last RST_STREAM(error: CANCEL), which is invalid with regard to the spec; it should be a RST_STREAM(NO_ERROR).

    As per the RFC

       ... A server can
       send a complete response prior to the client sending an entire
       request if the response does not depend on any portion of the request
       that has not been sent and received.  When this is true, a server MAY
       request that the client abort transmission of a request without error
       by sending a RST_STREAM with an error code of NO_ERROR after sending
       a complete response (i.e., a frame with the END_STREAM flag).
       Clients MUST NOT discard responses as a result of receiving such a
       RST_STREAM, though clients can always discard responses at their
       discretion for other reasons.
    

    I tracked down where this RST_STREAM(CANCEL) comes from; at first I thought it was in tonic, but it ended up being in the impl Drop of h2:

    h2::proto::streams::streams::maybe_cancel streams.rs:1467
    h2::proto::streams::streams::drop_stream_ref::{closure#0} streams.rs:1443
    h2::proto::streams::counts::Counts::transition<h2::proto::streams::streams::drop_stream_ref::{closure_env#0}, ()> counts.rs:127
    h2::proto::streams::streams::drop_stream_ref streams.rs:1442
    h2::proto::streams::streams::{impl#12}::drop streams.rs:1403
    

    https://github.com/hyperium/h2/blob/756384f4cdd62bce3af7aa53a156ba2cc557b5ec/src/proto/streams/streams.rs#L1466

    So it seems to be h2's generic way of handling end of stream, and now I am stuck in my investigation and need some guidance on where to go from here.

    • Replacing Reason::CANCEL with Reason::NO_ERROR solves my issue with nginx (tested locally) and seems more appropriate with regard to the RFC (see the sketch below), but I don't know if there are unexpected side effects for other cases
    • I don't know if it is tonic or hyper that misuses h2 and requires some special handling for this scenario somewhere

    We really like tonic and nginx, and we would appreciate a way forward that makes them happy to work together.
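
    For reference, the RFC-suggested behavior expressed against h2's public API (send_reset and Reason::NO_ERROR exist in h2 0.3; whether this is the right place to change anything is exactly the open question):

    use bytes::Bytes;
    use h2::{Reason, SendStream};

    // After a complete response (a frame with END_STREAM) has been sent,
    // ask the client to stop the request body without signaling an error.
    fn abort_request_body_without_error(stream: &mut SendStream<Bytes>) {
        stream.send_reset(Reason::NO_ERROR);
    }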

    opened by erebe 11
  • PUSH example

    Hello, I'm trying to use the new push_promise feature. I edited the server example to reply with an index.html page and push a script.js file, but it ends up with a PROTOCOL_ERROR and I'm unable to debug it:

    
    use h2::server;
    use http::{Response, Request, StatusCode};
    
    use tokio::net::TcpListener;
    
    #[tokio::main]
    pub async fn main() {
        let mut listener = TcpListener::bind("127.0.0.1:5927").await.expect("Bind error");
    
        // Accept all incoming TCP connections.
        loop {
            if let Ok((socket, _peer_addr)) = listener.accept().await {
                // Spawn a new task to process each connection.
                tokio::spawn(async move {
                    // Start the HTTP/2.0 connection handshake
                    let mut h2 = server::handshake(socket).await.expect("Handshake error");
                    // Accept all inbound HTTP/2.0 streams sent over the
                    // connection.
                    while let Some(request) = h2.accept().await {
                        let (request, mut respond) = request.expect("Incoming request error");
                        println!("Received request: {:?}", request);
    
                        // Build a response with no body
                        let response = Response::builder()
                            .status(StatusCode::OK)
                            .body(())
                            .expect("First response build error");
    
                        // Send the response back to the client
                        let mut stream = respond.send_response(response, false)
                            .expect("First response send error");
    
                        let contents = b"index.html contents".to_vec();
                        stream.send_data(contents.into(), true).expect("index.html send error");
    
                        // prepare the PUSH request
                        let request = Request::builder()
                            .uri("script.js")
                            .body(())
                            .expect("script.js request build error");
                        
                        // init the PUSH sequence
                        let mut push = respond.push_request(request)
                            .expect("script.js request send error");
    
                        // Build a response with no body
                        let response = Response::builder()
                            .status(StatusCode::OK)
                            .body(())
                            .expect("Second response build error");
    
                        // Send the response back to the client
                        let mut stream = push.send_response(response, false)
                            .expect("Second response send error");
    
                        let contents = b"script.js contents".to_vec();
                        stream.send_data(contents.into(), true).expect("script.js send error");
                    }
                });
            }
        }
    }
    

    Can you please help me create a working PUSH example? Thank you.

    opened by nappa85 11
  • Deadlock with tokio 1.7.0

    With the latest tokio 1.7.0 there is a deadlock when the tokio runtime is shutting down.

    lib versions:

    h2 = "0.3.3"
    hyper = "0.14.9"
    tokio = "1.7.0"
    

    The problem

    #[tokio::test(flavor = "multi_thread", worker_threads = 1)]
    async fn test_meta_cluster_write_on_non_leader() -> anyhow::Result<()> {
        // - Start a server
        // - spawn a task sending RPC to the server...
        Ok(())
    }
    

    The problem code snippet is a unit test that brings up a grpc server and a client that keeps sending RPCs to the server in another tokio task.

    When the test quits (and the tokio runtime is shutting down), the task that keeps sending RPCs is still running. Then there is a deadlock that hangs the world and never quits.

    The same code works fine with tokio 1.6.0. In 1.7.0 a new feature was added (https://github.com/tokio-rs/tokio/pull/3752) which I believe causes this problem.

    The detail

    • The deadlock happens when the tokio runtime is shutting down and tries to drop a stream: in src/proto/streams/streams.rs, it acquires the stream's lock to do some cleanup work.

    • Then, while still holding the lock (the me guard), maybe_cancel() tries to wake up the task this stream belongs to.

    • Because the tokio runtime is closed, another round of dropping happens. Finally, in src/proto/connection.rs, it tries to acquire the same lock again to release resources: deadlock.

    All of this happens on one thread with tokio 1.7.0.
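
    A self-contained sketch of the same deadlock shape (plain std types, not h2's code):

    use std::sync::{Arc, Mutex};

    struct NeedsLock(Arc<Mutex<Vec<u32>>>);

    impl Drop for NeedsLock {
        fn drop(&mut self) {
            // Re-acquires the lock; std::sync::Mutex is not reentrant, so
            // this deadlocks (or panics) if the current thread already
            // holds it.
            let _guard = self.0.lock().unwrap();
        }
    }

    fn main() {
        let shared = Arc::new(Mutex::new(Vec::new()));
        let value = NeedsLock(shared.clone());
        let _guard = shared.lock().unwrap(); // first acquisition (cf. frame 53)
        drop(value); // Drop re-locks the held Mutex (cf. frame 4): deadlock
    }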

    impl Drop for OpaqueStreamRef {
        fn drop(&mut self) {
            drop_stream_ref(&self.inner, self.key);
        }
    }
    
    fn drop_stream_ref(inner: &Mutex<Inner>, key: store::Key) {
    
        let mut me = match inner.lock() {
            Ok(inner) => inner,
            // ...
        };
    
        // ...
    
        me.counts.transition(stream, |counts, stream| {
            maybe_cancel(stream, actions, counts);
            // ...
        });
    }
    
    
    impl<T, P, B> Drop for Connection<T, P, B>
    // ...
    {
        fn drop(&mut self) {
    
            {
                // Check if lock is held
                let _v = self.inner.streams.inner.try_lock();
                if _v.is_err() {
                    let bt = backtrace::Backtrace::new();
                    tracing::debug!("--- bt: {:?}", bt);
                }
            }
    
            // BUG: recv_eof requires lock on self.inner.streams.inner
            let _ = self.inner.streams.recv_eof(true);
        }
    }
    

    Stack summary at the deadlock:

       4: <h2::proto::connection::Connection<T,P,B> as core::ops::drop::Drop>::drop
                 at /Users/drdrxp/xp/vcs/h2/src/proto/connection.rs:574:30
    
         ...
    
      52: h2::proto::streams::streams::drop_stream_ref
                 at /Users/drdrxp/xp/vcs/h2/src/proto/streams/streams.rs:1423:5
      53: <h2::proto::streams::streams::OpaqueStreamRef as core::ops::drop::Drop>::drop
                 at /Users/drdrxp/xp/vcs/h2/src/proto/streams/streams.rs:1380:9
    

    The entire backtrace (the first lock acquisition is at frame 53):

       0: backtrace::backtrace::libunwind::trace
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.60/src/backtrace/libunwind.rs:90:5
          backtrace::backtrace::trace_unsynchronized
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.60/src/backtrace/mod.rs:66:5
       1: backtrace::backtrace::trace
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.60/src/backtrace/mod.rs:53:14
       2: backtrace::capture::Backtrace::create
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.60/src/capture.rs:176:9
       3: backtrace::capture::Backtrace::new
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.60/src/capture.rs:140:22
       4: <h2::proto::connection::Connection<T,P,B> as core::ops::drop::Drop>::drop
                 at /Users/drdrxp/xp/vcs/h2/src/proto/connection.rs:574:30
       5: core::ptr::drop_in_place<h2::proto::connection::Connection<tonic::transport::service::io::BoxedIo,h2::client::Peer,hyper::proto::h2::SendBuf<bytes::bytes::Bytes>>>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
       6: core::ptr::drop_in_place<h2::client::Connection<tonic::transport::service::io::BoxedIo,hyper::proto::h2::SendBuf<bytes::bytes::Bytes>>>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
       7: core::ptr::drop_in_place<futures_util::future::either::Either<futures_util::future::poll_fn::PollFn<hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>,h2::client::Connection<tonic::transport::service::io::BoxedIo,hyper::proto::h2::SendBuf<bytes::bytes::Bytes>>>>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
       8: core::ptr::drop_in_place<futures_util::future::try_future::into_future::IntoFuture<futures_util::future::either::Either<futures_util::future::poll_fn::PollFn<hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>,h2::client::Connection<tonic::transport::service::io::BoxedIo,hyper::proto::h2::SendBuf<bytes::bytes::Bytes>>>>>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
       9: core::ptr::drop_in_place<futures_util::future::future::map::Map<futures_util::future::try_future::into_future::IntoFuture<futures_util::future::either::Either<futures_util::future::poll_fn::PollFn<hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>,h2::client::Connection<tonic::transport::service::io::BoxedIo,hyper::proto::h2::SendBuf<bytes::bytes::Bytes>>>>,futures_util::fns::MapErrFn<hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>>>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
      10: core::ptr::drop_in_place<futures_util::future::future::Map<futures_util::future::try_future::into_future::IntoFuture<futures_util::future::either::Either<futures_util::future::poll_fn::PollFn<hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>,h2::client::Connection<tonic::transport::service::io::BoxedIo,hyper::proto::h2::SendBuf<bytes::bytes::Bytes>>>>,futures_util::fns::MapErrFn<hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>>>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
      11: core::ptr::drop_in_place<futures_util::future::try_future::MapErr<futures_util::future::either::Either<futures_util::future::poll_fn::PollFn<hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>,h2::client::Connection<tonic::transport::service::io::BoxedIo,hyper::proto::h2::SendBuf<bytes::bytes::Bytes>>>,hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
      12: core::ptr::drop_in_place<(futures_util::future::try_future::MapErr<futures_util::future::either::Either<futures_util::future::poll_fn::PollFn<hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>,h2::client::Connection<tonic::transport::service::io::BoxedIo,hyper::proto::h2::SendBuf<bytes::bytes::Bytes>>>,hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>,futures_util::future::future::Map<futures_util::stream::stream::into_future::StreamFuture<futures_channel::mpsc::Receiver<hyper::common::never::Never>>,hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>)>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
      13: core::ptr::drop_in_place<core::option::Option<(futures_util::future::try_future::MapErr<futures_util::future::either::Either<futures_util::future::poll_fn::PollFn<hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>,h2::client::Connection<tonic::transport::service::io::BoxedIo,hyper::proto::h2::SendBuf<bytes::bytes::Bytes>>>,hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>,futures_util::future::future::Map<futures_util::stream::stream::into_future::StreamFuture<futures_channel::mpsc::Receiver<hyper::common::never::Never>>,hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>)>>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
      14: core::ptr::drop_in_place<futures_util::future::select::Select<futures_util::future::try_future::MapErr<futures_util::future::either::Either<futures_util::future::poll_fn::PollFn<hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>,h2::client::Connection<tonic::transport::service::io::BoxedIo,hyper::proto::h2::SendBuf<bytes::bytes::Bytes>>>,hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>,futures_util::future::future::Map<futures_util::stream::stream::into_future::StreamFuture<futures_channel::mpsc::Receiver<hyper::common::never::Never>>,hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>>>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
      15: core::ptr::drop_in_place<hyper::proto::h2::client::conn_task<futures_util::future::try_future::MapErr<futures_util::future::either::Either<futures_util::future::poll_fn::PollFn<hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>,h2::client::Connection<tonic::transport::service::io::BoxedIo,hyper::proto::h2::SendBuf<bytes::bytes::Bytes>>>,hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>,futures_util::future::future::Map<futures_util::stream::stream::into_future::StreamFuture<futures_channel::mpsc::Receiver<hyper::common::never::Never>>,hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>>::{{closure}}>
                 at /Users/drdrxp/xp/vcs/hyper/src/proto/h2/client.rs:176:11
      16: core::ptr::drop_in_place<core::future::from_generator::GenFuture<hyper::proto::h2::client::conn_task<futures_util::future::try_future::MapErr<futures_util::future::either::Either<futures_util::future::poll_fn::PollFn<hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>,h2::client::Connection<tonic::transport::service::io::BoxedIo,hyper::proto::h2::SendBuf<bytes::bytes::Bytes>>>,hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>,futures_util::future::future::Map<futures_util::stream::stream::into_future::StreamFuture<futures_channel::mpsc::Receiver<hyper::common::never::Never>>,hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>>::{{closure}}>>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
      17: core::ptr::drop_in_place<tokio::runtime::task::core::Stage<core::future::from_generator::GenFuture<hyper::proto::h2::client::conn_task<futures_util::future::try_future::MapErr<futures_util::future::either::Either<futures_util::future::poll_fn::PollFn<hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>,h2::client::Connection<tonic::transport::service::io::BoxedIo,hyper::proto::h2::SendBuf<bytes::bytes::Bytes>>>,hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>,futures_util::future::future::Map<futures_util::stream::stream::into_future::StreamFuture<futures_channel::mpsc::Receiver<hyper::common::never::Never>>,hyper::proto::h2::client::handshake<tonic::transport::service::io::BoxedIo,tonic::body::BoxBody>::{{closure}}::{{closure}}::{{closure}}>>::{{closure}}>>>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
      18: tokio::runtime::task::core::CoreStage<T>::set_stage::{{closure}}
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/core.rs:296:35
      19: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/loom/std/unsafe_cell.rs:14:9
      20: tokio::runtime::task::core::CoreStage<T>::set_stage
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/core.rs:296:9
      21: tokio::runtime::task::core::CoreStage<T>::drop_future_or_output
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/core.rs:262:13
      22: tokio::runtime::task::harness::cancel_task::{{closure}}
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/harness.rs:392:9
      23: core::ops::function::FnOnce::call_once
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ops/function.rs:227:5
      24: <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panic.rs:347:9
      25: std::panicking::try::do_call
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panicking.rs:401:40
      26: <unknown>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panicking.rs:429:6
      27: std::panicking::try
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panicking.rs:365:19
      28: std::panic::catch_unwind
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panic.rs:434:14
      29: tokio::runtime::task::harness::cancel_task
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/harness.rs:391:15
      30: tokio::runtime::task::harness::Harness<T,S>::shutdown
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/harness.rs:168:19
      31: tokio::runtime::task::raw::shutdown
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/raw.rs:130:5
      32: tokio::runtime::task::raw::RawTask::shutdown
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/raw.rs:90:18
      33: tokio::runtime::task::Task<S>::shutdown
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/mod.rs:164:9
      34: tokio::runtime::task::Notified<S>::shutdown
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/mod.rs:177:9
      35: tokio::runtime::queue::Inject<T>::push
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/queue.rs:521:13
      36: tokio::runtime::thread_pool::worker::Shared::schedule::{{closure}}
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/thread_pool/worker.rs:735:13
      37: tokio::macros::scoped_tls::ScopedKey<T>::with
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/macros/scoped_tls.rs:74:22
      38: tokio::runtime::thread_pool::worker::Shared::schedule
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/thread_pool/worker.rs:722:9
      39: tokio::runtime::thread_pool::worker::<impl tokio::runtime::task::Schedule for alloc::sync::Arc<tokio::runtime::thread_pool::worker::Worker>>::schedule
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/thread_pool/worker.rs:712:9
      40: tokio::runtime::task::core::Scheduler<S>::schedule::{{closure}}
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/core.rs:172:36
      41: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/loom/std/unsafe_cell.rs:10:9
      42: tokio::runtime::task::core::Scheduler<S>::schedule
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/core.rs:168:9
      43: tokio::runtime::task::harness::Harness<T,S>::wake_by_ref
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/harness.rs:139:13
      44: tokio::runtime::task::harness::Harness<T,S>::wake_by_val
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/harness.rs:133:9
      45: tokio::runtime::task::waker::wake_by_val
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/waker.rs:102:5
      46: core::task::wake::Waker::wake
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/task/wake.rs:217:18
      47: h2::proto::streams::prioritize::Prioritize::schedule_send
                 at /Users/drdrxp/xp/vcs/h2/src/proto/streams/prioritize.rs:124:17
      48: h2::proto::streams::send::Send::schedule_implicit_reset
                 at /Users/drdrxp/xp/vcs/h2/src/proto/streams/send.rs:242:9
      49: h2::proto::streams::streams::maybe_cancel
                 at /Users/drdrxp/xp/vcs/h2/src/proto/streams/streams.rs:1446:9
      50: h2::proto::streams::streams::drop_stream_ref::{{closure}}
                 at /Users/drdrxp/xp/vcs/h2/src/proto/streams/streams.rs:1424:9
      51: h2::proto::streams::counts::Counts::transition
                 at /Users/drdrxp/xp/vcs/h2/src/proto/streams/counts.rs:127:19
      52: h2::proto::streams::streams::drop_stream_ref
                 at /Users/drdrxp/xp/vcs/h2/src/proto/streams/streams.rs:1423:5
      53: <h2::proto::streams::streams::OpaqueStreamRef as core::ops::drop::Drop>::drop
                 at /Users/drdrxp/xp/vcs/h2/src/proto/streams/streams.rs:1380:9
      54: core::ptr::drop_in_place<h2::proto::streams::streams::OpaqueStreamRef>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
      55: core::ptr::drop_in_place<h2::client::ResponseFuture>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
      56: core::ptr::drop_in_place<futures_util::future::future::map::Map<h2::client::ResponseFuture,<hyper::proto::h2::client::ClientTask<tonic::body::BoxBody> as core::future::future::Future>::poll::{{closure}}>>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
      57: core::ptr::drop_in_place<futures_util::future::future::Map<h2::client::ResponseFuture,<hyper::proto::h2::client::ClientTask<tonic::body::BoxBody> as core::future::future::Future>::poll::{{closure}}>>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
      58: core::ptr::drop_in_place<hyper::client::dispatch::Callback<http::request::Request<tonic::body::BoxBody>,http::response::Response<hyper::body::body::Body>>::send_when<futures_util::future::future::Map<h2::client::ResponseFuture,<hyper::proto::h2::client::ClientTask<tonic::body::BoxBody> as core::future::future::Future>::poll::{{closure}}>>::{{closure}}::{{closure}}>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
      59: core::ptr::drop_in_place<futures_util::future::poll_fn::PollFn<hyper::client::dispatch::Callback<http::request::Request<tonic::body::BoxBody>,http::response::Response<hyper::body::body::Body>>::send_when<futures_util::future::future::Map<h2::client::ResponseFuture,<hyper::proto::h2::client::ClientTask<tonic::body::BoxBody> as core::future::future::Future>::poll::{{closure}}>>::{{closure}}::{{closure}}>>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
      60: core::ptr::drop_in_place<hyper::client::dispatch::Callback<http::request::Request<tonic::body::BoxBody>,http::response::Response<hyper::body::body::Body>>::send_when<futures_util::future::future::Map<h2::client::ResponseFuture,<hyper::proto::h2::client::ClientTask<tonic::body::BoxBody> as core::future::future::Future>::poll::{{closure}}>>::{{closure}}>
                 at /Users/drdrxp/xp/vcs/hyper/src/client/dispatch.rs:242:9
      61: core::ptr::drop_in_place<core::future::from_generator::GenFuture<hyper::client::dispatch::Callback<http::request::Request<tonic::body::BoxBody>,http::response::Response<hyper::body::body::Body>>::send_when<futures_util::future::future::Map<h2::client::ResponseFuture,<hyper::proto::h2::client::ClientTask<tonic::body::BoxBody> as core::future::future::Future>::poll::{{closure}}>>::{{closure}}>>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
      62: core::ptr::drop_in_place<tokio::runtime::task::core::Stage<core::future::from_generator::GenFuture<hyper::client::dispatch::Callback<http::request::Request<tonic::body::BoxBody>,http::response::Response<hyper::body::body::Body>>::send_when<futures_util::future::future::Map<h2::client::ResponseFuture,<hyper::proto::h2::client::ClientTask<tonic::body::BoxBody> as core::future::future::Future>::poll::{{closure}}>>::{{closure}}>>>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ptr/mod.rs:192:1
      63: tokio::runtime::task::core::CoreStage<T>::set_stage::{{closure}}
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/core.rs:296:35
      64: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/loom/std/unsafe_cell.rs:14:9
      65: tokio::runtime::task::core::CoreStage<T>::set_stage
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/core.rs:296:9
      66: tokio::runtime::task::core::CoreStage<T>::drop_future_or_output
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/core.rs:262:13
      67: tokio::runtime::task::harness::cancel_task::{{closure}}
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/harness.rs:392:9
      68: core::ops::function::FnOnce::call_once
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ops/function.rs:227:5
      69: <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panic.rs:347:9
      70: std::panicking::try::do_call
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panicking.rs:401:40
      71: <unknown>
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panicking.rs:429:6
      72: std::panicking::try
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panicking.rs:365:19
      73: std::panic::catch_unwind
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panic.rs:434:14
      74: tokio::runtime::task::harness::cancel_task
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/harness.rs:391:15
      75: tokio::runtime::task::harness::Harness<T,S>::shutdown
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/harness.rs:168:19
      76: tokio::runtime::task::raw::shutdown
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/raw.rs:130:5
      77: tokio::runtime::task::raw::RawTask::shutdown
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/raw.rs:90:18
      78: tokio::runtime::task::core::Header::shutdown
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/core.rs:306:13
      79: tokio::runtime::thread_pool::worker::Core::pre_shutdown
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/thread_pool/worker.rs:557:13
      80: tokio::runtime::thread_pool::worker::Context::run
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/thread_pool/worker.rs:332:9
      81: tokio::runtime::thread_pool::worker::run::{{closure}}
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/thread_pool/worker.rs:303:17
      82: tokio::macros::scoped_tls::ScopedKey<T>::set
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/macros/scoped_tls.rs:61:9
      83: tokio::runtime::thread_pool::worker::run
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/thread_pool/worker.rs:300:5
      84: tokio::runtime::thread_pool::worker::Launch::launch::{{closure}}
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/thread_pool/worker.rs:279:45
      85: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/blocking/task.rs:42:21
      86: tokio::runtime::task::core::CoreStage<T>::poll::{{closure}}
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/core.rs:243:17
      87: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/loom/std/unsafe_cell.rs:14:9
      88: tokio::runtime::task::core::CoreStage<T>::poll
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/core.rs:233:13
      89: tokio::runtime::task::harness::poll_future::{{closure}}
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/harness.rs:427:23
      90: <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panic.rs:347:9
      91: std::panicking::try::do_call
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panicking.rs:401:40
      92: <unknown>
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/harness.rs:422:18
      93: std::panicking::try
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panicking.rs:365:19
      94: std::panic::catch_unwind
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panic.rs:434:14
      95: tokio::runtime::task::harness::poll_future
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/harness.rs:414:19
      96: tokio::runtime::task::harness::Harness<T,S>::poll_inner
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/harness.rs:89:9
      97: tokio::runtime::task::harness::Harness<T,S>::poll
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/harness.rs:59:15
      98: tokio::runtime::task::raw::poll
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/raw.rs:104:5
      99: tokio::runtime::task::raw::RawTask::poll
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/raw.rs:66:18
     100: tokio::runtime::task::Notified<S>::run
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/mod.rs:171:9
     101: tokio::runtime::blocking::pool::Inner::run
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/blocking/pool.rs:265:17
     102: tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}}
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/blocking/pool.rs:245:17
     103: std::sys_common::backtrace::__rust_begin_short_backtrace
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/sys_common/backtrace.rs:125:18
     104: std::thread::Builder::spawn_unchecked::{{closure}}::{{closure}}
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/thread/mod.rs:481:17
     105: <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panic.rs:347:9
     106: std::panicking::try::do_call
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panicking.rs:401:40
     107: <unknown>
                 at /Users/drdrxp/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.7.0/src/runtime/task/harness.rs:422:18
     108: std::panicking::try
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panicking.rs:365:19
     109: std::panic::catch_unwind
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panic.rs:434:14
     110: std::thread::Builder::spawn_unchecked::{{closure}}
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/thread/mod.rs:480:30
     111: core::ops::function::FnOnce::call_once{{vtable.shim}}
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/ops/function.rs:227:5
     112: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/alloc/src/boxed.rs:1575:9
          <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/alloc/src/boxed.rs:1575:9
          std::sys::unix::thread::Thread::new::thread_start
                 at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/sys/unix/thread.rs:71:17
     113: __pthread_body
     114: __pthread_start
    
    opened by drmingdrmer 10
  • fix: panic in pop_frame()

    We hit this panic in our production environment, so this change handles the condition before panicking. Stack backtrace:
       0: rust_begin_unwind
                 at /rustc/59eed8a2aac0230a8b53e89d4e99d55912ba6b35/library/std/src/panicking.rs:517:5
       1: core::panicking::panic_fmt
                 at /rustc/59eed8a2aac0230a8b53e89d4e99d55912ba6b35/library/core/src/panicking.rs:101:14
       2: core::panicking::panic
                 at /rustc/59eed8a2aac0230a8b53e89d4e99d55912ba6b35/library/core/src/panicking.rs:50:5
       3: h2::proto::streams::flow_control::FlowControl::send_data
                 at /build/vendor/h2/src/proto/streams/flow_control.rs:176:9
       4: h2::proto::streams::prioritize::Prioritize::pop_frame::{{closure}}
                 at /build/vendor/h2/src/proto/streams/prioritize.rs:737:33
       5: tracing::span::Span::in_scope
                 at /build/vendor/tracing/src/span.rs:982:9
       6: h2::proto::streams::prioritize::Prioritize::pop_frame
                 at /build/vendor/h2/src/proto/streams/prioritize.rs:736:29
       7: h2::proto::streams::prioritize::Prioritize::poll_complete
                 at /build/vendor/h2/src/proto/streams/prioritize.rs:497:19
       8: h2::proto::streams::send::Send::poll_complete
                 at /build/vendor/h2/src/proto/streams/send.rs:297:9
       9: h2::proto::streams::streams::Inner::poll_complete
                 at /build/vendor/h2/src/proto/streams/streams.rs:850:16
      10: h2::proto::streams::streams::Streams<B,P>::poll_complete
                 at /build/vendor/h2/src/proto/streams/streams.rs:180:9
      11: h2::proto::connection::Connection<T,P,B>::poll
                 at /build/vendor/h2/src/proto/connection.rs:253:36
      12: <h2::client::Connection<T,B> as core::future::future::Future>::poll
      13: ...
    
    opened by aftersnow 0
  • Consider moving hyper's h2 support types to h2

    My current project uses h2 directly (without hyper), negotiated via TLS with ALPN. To support an AsyncRead+AsyncWrite transport on top of an HTTP2 stream I had to write an adapter like hyper's H2Upgraded, which was non-trivial due to buffering and flow-control capacity management.

    Now I wonder if you would consider moving the h2 support types from hyper to h2 and expose them, so users of h2 can build binary protocols on top of h2 and even can take advantage of BDP support. I don't believe this would bloat h2 with code unrelated to HTTP2 since it does support upgrading to stream-based protocols.
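
    For context, the rough shape of such an adapter (assumed names; a real version like hyper's H2Upgraded also handles buffered writes, trailers, and error mapping):

    use bytes::Bytes;

    /// Wraps one HTTP/2 stream as a byte pipe.
    struct H2Io {
        send: h2::SendStream<Bytes>,
        recv: h2::RecvStream,
        /// Unread remainder of the most recent DATA frame.
        read_buf: Bytes,
    }

    // An AsyncRead impl would poll `recv.poll_data()`, copy out of `read_buf`,
    // and release consumed bytes via `recv.flow_control().release_capacity(n)`.
    // An AsyncWrite impl would call `send.reserve_capacity(len)`, await
    // `send.poll_capacity()`, then `send.send_data(chunk, false)`.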

    I know the chance of this is slim, but I had to ask. :)

    opened by cloneable 0
  • server: Ignore `NotConnected` errors on shutdown

    Related to https://github.com/hyperium/tonic/issues/1183

    I started to see this error come up when debugging the above tonic issue. To me, ignoring this error seems okay in this context, but I'd like to have some other eyes on it.

    cc @seanmonstar @hawkw

    opened by LucioFranco 1
  • ignore ENOTCONN error during closing state of Connection

    If the underlying socket is already closed during State::Closing, it is not an error to get ENOTCONN when calling shutdown, so ignore it.

    This can happen when a connection is local to the host and the kernel has already shut down the socket.

    related to hyperium/hyper#3070 (with this change, i can't reproduce the behaviour there anymore)

    Not completely sure about also checking the reason for NO_ERROR; maybe we should also check the initiator for Library?

    IMO this change is ok, because AFAICS this only happens when we come from handle_poll2_result, where it's already noted that we're 'already going away', and it does not make sense to bubble an ENOTCONN up when trying to shut down a connection that is already shut down.
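
    The proposed tolerance, as a standalone sketch over any tokio transport (not the actual h2 change):

    use std::io;
    use tokio::io::{AsyncWrite, AsyncWriteExt};

    /// Shut down the transport, treating ENOTCONN as success: if the kernel
    /// already shut the socket down, there is nothing left to do.
    async fn shutdown_lenient<T: AsyncWrite + Unpin>(io: &mut T) -> io::Result<()> {
        match io.shutdown().await {
            Err(e) if e.kind() == io::ErrorKind::NotConnected => Ok(()),
            other => other,
        }
    }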

    opened by flumm 0
  • Receiving `FRAME_SIZE_ERROR` in complex setup

    I'm building an alternative implementation of a (mostly) decentralised networking protocol. This protocol consists of the following services:

    • Client (any HTTP client, could be curl or an application in any programming language)
    • Outway (plain HTTP server, acts as a reverse proxy to the Inway)
    • Inway (mTls HTTPS server, acts as a reverse proxy to a service)
    • Service (any HTTP/HTTPS endpoint)

    The traffic flow is like this (going both ways):

    flowchart LR
      Client --plain HTTP 1.1--> Outway --HTTP 2 mTLS--> Inway --> Service
    

    Both the Outway and the Inway use warp for the server implementation and reqwest for the client side of the reverse proxy. The configuration of reqwest is as follows:

    Outway

            ClientBuilder::new()
                .use_rustls_tls()
                .build()?;
    

    Inway

            ClientBuilder::new()
                .use_rustls_tls()
                .tls_built_in_root_certs(false)
                .add_root_certificate(ca_cert)
                .identity(identity)
                .https_only(true)
                .build()?;
    

    I tried to verify this is not an issue with my reverse proxy implementation (which is very basic). To check this I used curl (with TLS client authentication) directly with the Inway. This works fine. So the following flow seems to work:

    flowchart LR
      Client --HTTP2 mTLS--> Inway --> Service
    

    So it seems that this is an issue between the Outway and the Inway, although I'm not 100% sure.

    The error I'm getting is BrokenPipe in hyper. This doesn't seem to happen when I either force http1 in the reqwest::ClientBuilder of the Outway or remove the request body stream. Not sure if it's relevant, but this is how I stream the request body in the reverse proxy:

    let stream = self
                .body
                .map(|buf| buf.map(|mut buf| buf.copy_to_bytes(buf.remaining())));
    

    Trace logs (in the Inway with https://httpbin.org/anything as the service endpoint):

     TRACE h2::codec::framed_write          > FramedWrite::buffer; frame=Settings { flags: (0x0), initial_window_size: 1048576, max_frame_size: 16384, max_header_list_size: 16777216 }
     DEBUG h2::codec::framed_write          > send frame=Settings { flags: (0x0), initial_window_size: 1048576, max_frame_size: 16384, max_header_list_size: 16777216 }
     TRACE h2::frame::settings              > encoding SETTINGS; len=18
     TRACE h2::frame::settings              > encoding setting; val=InitialWindowSize(1048576)
     TRACE h2::frame::settings              > encoding setting; val=MaxFrameSize(16384)
     TRACE h2::frame::settings              > encoding setting; val=MaxHeaderListSize(16777216)
     TRACE h2::codec::framed_write          > encoded settings rem=27
     TRACE h2::server                       > state=Flushing(_)
     TRACE h2::codec::framed_write          > queued_data_frame=false
     TRACE h2::codec::framed_write          > flushing buffer
     TRACE h2::server                       > flush.poll=Ready
     TRACE h2::proto::streams::flow_control > inc_window; sz=65535; old=0; new=65535
     TRACE h2::proto::streams::flow_control > inc_window; sz=65535; old=0; new=65535
     TRACE h2::proto::streams::prioritize   > Prioritize::new; flow=FlowControl { window_size: Window(65535), available: Window(65535) }
     DEBUG h2::proto::connection            > Connection; peer=Server
     TRACE h2::server                       > connection established!
     TRACE h2::proto::streams::recv         > set_target_connection_window; target=1048576; available=65535, reserved=0
     TRACE h2::proto::connection            > connection.state=Open
     TRACE h2::codec::framed_read           > poll
     TRACE h2::codec::framed_write          > FramedWrite::buffer; frame=WindowUpdate { stream_id: StreamId(0), size_increment: 983041 }
     DEBUG h2::codec::framed_write          > send frame=WindowUpdate { stream_id: StreamId(0), size_increment: 983041 }
     TRACE h2::frame::window_update         > encoding WINDOW_UPDATE; id=StreamId(0)
     TRACE h2::codec::framed_write          > encoded window_update rem=13
     TRACE h2::proto::streams::flow_control > inc_window; sz=983041; old=65535; new=1048576
     TRACE h2::proto::streams::prioritize   > poll_complete
     TRACE h2::proto::streams::prioritize   > schedule_pending_open
     TRACE h2::codec::framed_write          > queued_data_frame=false
     TRACE h2::codec::framed_write          > flushing buffer
     TRACE h2::proto::connection            > connection.state=Open
     TRACE h2::codec::framed_read           > poll
     TRACE h2::codec::framed_read           > read.bytes=27
     TRACE h2::codec::framed_read           > FramedRead::decode_frame; offset=27
     TRACE h2::codec::framed_read           > decoding frame from 27B
     TRACE h2::codec::framed_read           > frame.kind=Settings
     DEBUG h2::codec::framed_read           > received frame=Settings { flags: (0x0), enable_push: 0, initial_window_size: 2097152, max_frame_size: 16384 }
     TRACE h2::proto::connection            > recv SETTINGS frame=Settings { flags: (0x0), enable_push: 0, initial_window_size: 2097152, max_frame_size: 16384 }
     TRACE h2::codec::framed_write          > FramedWrite::buffer; frame=Settings { flags: (0x1: ACK) }
     DEBUG h2::codec::framed_write          > send frame=Settings { flags: (0x1: ACK) }
     TRACE h2::frame::settings              > encoding SETTINGS; len=0
     TRACE h2::codec::framed_write          > encoded settings rem=9
     TRACE h2::proto::settings              > ACK sent; applying settings
     TRACE h2::codec::framed_read           > poll
     TRACE h2::codec::framed_read           > read.bytes=13
     TRACE h2::codec::framed_read           > FramedRead::decode_frame; offset=13
     TRACE h2::codec::framed_read           > decoding frame from 13B
     TRACE h2::codec::framed_read           > frame.kind=WindowUpdate
     DEBUG h2::codec::framed_read           > received frame=WindowUpdate { stream_id: StreamId(0), size_increment: 5177345 }
     TRACE h2::proto::connection            > recv WINDOW_UPDATE frame=WindowUpdate { stream_id: StreamId(0), size_increment: 5177345 }
     TRACE h2::proto::streams::flow_control > inc_window; sz=5177345; old=65535; new=5242880
     TRACE h2::proto::streams::prioritize   > assign_connection_capacity; inc=5177345
     TRACE h2::codec::framed_read           > poll
     TRACE h2::codec::framed_read           > read.bytes=63
     TRACE h2::codec::framed_read           > FramedRead::decode_frame; offset=63
     TRACE h2::codec::framed_read           > decoding frame from 63B
     TRACE h2::codec::framed_read           > frame.kind=Headers
     TRACE h2::frame::headers               > loading headers; flags=(0x4: END_HEADERS)
     TRACE h2::hpack::decoder               > decode
     TRACE h2::hpack::decoder               > rem=54 kind=Indexed
     TRACE h2::hpack::decoder               > rem=53 kind=Indexed
     TRACE h2::hpack::decoder               > rem=52 kind=LiteralWithIndexing
     TRACE h2::hpack::decoder               > rem=37 kind=LiteralWithoutIndexing
     TRACE h2::hpack::decoder               > rem=30 kind=LiteralWithIndexing
     TRACE h2::hpack::decoder               > rem=25 kind=LiteralWithIndexing
     TRACE h2::hpack::decoder               > rem=15 kind=LiteralWithIndexing
     DEBUG h2::codec::framed_read           > received frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
     TRACE h2::proto::connection            > recv HEADERS frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
     TRACE h2::proto::streams::flow_control > inc_window; sz=1048576; old=0; new=1048576
     TRACE h2::proto::streams::flow_control > inc_window; sz=2097152; old=0; new=2097152
     TRACE h2::proto::streams::streams      > recv_headers; stream=StreamId(1); state=State { inner: Idle }
     TRACE h2::proto::streams::recv         > opening stream; init_window=1048576
     TRACE h2::proto::streams::store        > Queue::push
     TRACE h2::proto::streams::store        >  -> first entry
     TRACE h2::proto::streams::counts       > transition_after; stream=StreamId(1); state=State { inner: Open { local: AwaitingHeaders, remote: Streaming } }; is_closed=false; pending_send_empty=true; buffered_send_data=0; num_recv=1; num_send=0
     TRACE h2::codec::framed_read           > poll
     TRACE h2::codec::framed_read           > read.bytes=9
     TRACE h2::codec::framed_read           > FramedRead::decode_frame; offset=9
     TRACE h2::codec::framed_read           > decoding frame from 9B
     TRACE h2::codec::framed_read           > frame.kind=Data
     DEBUG h2::codec::framed_read           > received frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) }
     TRACE h2::proto::connection            > recv DATA frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) }
     TRACE h2::proto::streams::recv         > recv_data; size=0; connection=1048576; stream=1048576
     TRACE h2::proto::streams::flow_control > send_data; sz=0; window=1048576; available=1048576
     TRACE h2::proto::streams::state        > recv_close: Open => HalfClosedRemote(AwaitingHeaders)
     TRACE h2::proto::streams::flow_control > send_data; sz=0; window=1048576; available=1048576
     TRACE h2::proto::streams::counts       > transition_after; stream=StreamId(1); state=State { inner: HalfClosedRemote(AwaitingHeaders) }; is_closed=false; pending_send_empty=true; buffered_send_data=0; num_recv=1; num_send=0
     TRACE h2::codec::framed_read           > poll
     TRACE h2::codec::framed_read           > read.bytes=9
     TRACE h2::codec::framed_read           > FramedRead::decode_frame; offset=9
     TRACE h2::codec::framed_read           > decoding frame from 9B
     TRACE h2::codec::framed_read           > frame.kind=Settings
     DEBUG h2::codec::framed_read           > received frame=Settings { flags: (0x1: ACK) }
     TRACE h2::proto::connection            > recv SETTINGS frame=Settings { flags: (0x1: ACK) }
     DEBUG h2::proto::settings              > received settings ACK; applying Settings { flags: (0x0), initial_window_size: 1048576, max_frame_size: 16384, max_header_list_size: 16777216 }
     TRACE h2::proto::streams::recv         > update_initial_window_size; new=1048576; old=1048576
     TRACE h2::codec::framed_read           > poll
     TRACE h2::proto::streams::prioritize   > poll_complete
     TRACE h2::proto::streams::prioritize   > schedule_pending_open
     TRACE h2::codec::framed_write          > queued_data_frame=false
     TRACE h2::codec::framed_write          > flushing buffer
     TRACE h2::proto::streams::streams      > next_incoming; id=StreamId(1), state=State { inner: HalfClosedRemote(AwaitingHeaders) }
     TRACE h2::server                       > received incoming
     TRACE h2::proto::connection            > connection.state=Open
     TRACE h2::codec::framed_read           > poll
     TRACE h2::proto::streams::prioritize   > poll_complete
     TRACE h2::proto::streams::prioritize   > schedule_pending_open
     TRACE h2::codec::framed_write          > flushing buffer
     DEBUG h2::client                       > binding client connection
     DEBUG h2::client                       > client connection bound
     TRACE h2::codec::framed_write          > FramedWrite::buffer; frame=Settings { flags: (0x0), enable_push: 0, initial_window_size: 2097152, max_frame_size: 16384 }
     DEBUG h2::codec::framed_write          > send frame=Settings { flags: (0x0), enable_push: 0, initial_window_size: 2097152, max_frame_size: 16384 }
     TRACE h2::frame::settings              > encoding SETTINGS; len=18
     TRACE h2::frame::settings              > encoding setting; val=EnablePush(0)
     TRACE h2::frame::settings              > encoding setting; val=InitialWindowSize(2097152)
     TRACE h2::frame::settings              > encoding setting; val=MaxFrameSize(16384)
     TRACE h2::codec::framed_write          > encoded settings rem=27
     TRACE h2::proto::streams::flow_control > inc_window; sz=65535; old=0; new=65535
     TRACE h2::proto::streams::flow_control > inc_window; sz=65535; old=0; new=65535
     TRACE h2::proto::streams::prioritize   > Prioritize::new; flow=FlowControl { window_size: Window(65535), available: Window(65535) }
     DEBUG h2::proto::connection            > Connection; peer=Client
     TRACE h2::proto::streams::recv         > set_target_connection_window; target=5242880; available=65535, reserved=0
     TRACE h2::proto::streams::flow_control > inc_window; sz=2097152; old=0; new=2097152
     TRACE h2::proto::streams::flow_control > inc_window; sz=65535; old=0; new=65535
     TRACE h2::proto::connection            > connection.state=Open
     TRACE h2::proto::streams::send         > send_headers; frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }; init_window=65535
     TRACE h2::proto::streams::prioritize   > Prioritize::queue_frame; stream.id=StreamId(1)
     TRACE h2::proto::streams::prioritize   > schedule_send stream.id=StreamId(1)
     TRACE h2::proto::streams::store        > Queue::push
     TRACE h2::proto::streams::store        >  -> first entry
     TRACE h2::proto::streams::prioritize   > reserve_capacity; stream.id=StreamId(1) requested=1 effective=1 curr=0
     TRACE h2::proto::streams::prioritize   > try_assign_capacity; stream.id=StreamId(1)
     TRACE h2::proto::streams::prioritize   > requested=1 additional=1 buffered=0 window=65535 conn=65535
     TRACE h2::proto::streams::prioritize   > assigning capacity=1
     TRACE h2::proto::streams::stream       >   assigned capacity to stream; available=1; buffered=0; id=StreamId(1); max_buffer_size=1048576
     TRACE h2::proto::streams::stream       >   notifying task
     TRACE h2::proto::streams::prioritize   > available=1 requested=1 buffered=0 has_unavailable=true
     TRACE h2::proto::streams::recv         > release_capacity; size=0
     TRACE h2::proto::streams::recv         > release_connection_capacity; size=0, connection in_flight_data=0
     TRACE h2::proto::streams::prioritize   > send_data; sz=0 requested=1
     TRACE h2::proto::streams::prioritize   > buffered=0
     TRACE h2::proto::streams::prioritize   > available=1 buffered=0
     TRACE h2::proto::streams::prioritize   > Prioritize::queue_frame; stream.id=StreamId(1)
     TRACE h2::proto::streams::prioritize   > schedule_send stream.id=StreamId(1)
     TRACE h2::proto::streams::store        > Queue::push
     TRACE h2::proto::streams::store        >  -> already queued
     TRACE h2::proto::streams::counts       > transition_after; stream=StreamId(1); state=State { inner: Open { local: Streaming, remote: AwaitingHeaders } }; is_closed=false; pending_send_empty=false; buffered_send_data=0; num_recv=0; num_send=1
     TRACE h2::proto::streams::prioritize   > reserve_capacity; stream.id=StreamId(1) requested=1 effective=1 curr=1
     TRACE h2::proto::streams::prioritize   > reserve_capacity; stream.id=StreamId(1) requested=0 effective=0 curr=1
     TRACE h2::proto::streams::prioritize   > assign_connection_capacity; inc=1
     TRACE h2::proto::streams::prioritize   > send_data; sz=0 requested=0
     TRACE h2::proto::streams::prioritize   > buffered=0
     TRACE h2::proto::streams::state        > send_close: Open => HalfClosedLocal(AwaitingHeaders)
     TRACE h2::proto::streams::prioritize   > reserve_capacity; stream.id=StreamId(1) requested=0 effective=0 curr=0
     TRACE h2::proto::streams::prioritize   > available=0 buffered=0
     TRACE h2::proto::streams::prioritize   > Prioritize::queue_frame; stream.id=StreamId(1)
     TRACE h2::proto::streams::prioritize   > schedule_send stream.id=StreamId(1)
     TRACE h2::proto::streams::store        > Queue::push
     TRACE h2::proto::streams::store        >  -> already queued
     TRACE h2::proto::streams::counts       > transition_after; stream=StreamId(1); state=State { inner: HalfClosedLocal(AwaitingHeaders) }; is_closed=false; pending_send_empty=false; buffered_send_data=0; num_recv=0; num_send=1
     TRACE h2::proto::streams::streams      > drop_stream_ref; stream=Stream { id: StreamId(1), state: State { inner: HalfClosedLocal(AwaitingHeaders) }, is_counted: true, ref_count: 2, next_pending_send: None, is_pending_send: true, send_flow: FlowControl { window_size: Window(65535), available: Window(0) }, requested_send_capacity: 0, buffered_send_data: 0, send_task: Some(Waker { data: 0x133508020, vtable: 0x102bf9240 }), pending_send: Deque { indices: Some(Indices { head: 0, tail: 2 }) }, next_pending_send_capacity: None, is_pending_send_capacity: false, send_capacity_inc: true, next_open: None, is_pending_open: false, is_pending_push: false, next_pending_accept: None, is_pending_accept: false, recv_flow: FlowControl { window_size: Window(2097152), available: Window(2097152) }, in_flight_recv_data: 0, next_window_update: None, is_pending_window_update: false, reset_at: None, next_reset_expire: None, pending_recv: Deque { indices: None }, recv_task: None, pending_push_promises: Queue { indices: None, _p: PhantomData }, content_length: Omitted }
     TRACE h2::proto::streams::counts       > transition_after; stream=StreamId(1); state=State { inner: HalfClosedLocal(AwaitingHeaders) }; is_closed=false; pending_send_empty=false; buffered_send_data=0; num_recv=0; num_send=1
     TRACE h2::proto::streams::streams      > drop_stream_ref; stream=Stream { id: StreamId(1), state: State { inner: HalfClosedRemote(AwaitingHeaders) }, is_counted: true, ref_count: 2, next_pending_send: None, is_pending_send: false, send_flow: FlowControl { window_size: Window(2097152), available: Window(0) }, requested_send_capacity: 0, buffered_send_data: 0, send_task: Some(Waker { data: 0x134813000, vtable: 0x102bcc300 }), pending_send: Deque { indices: None }, next_pending_send_capacity: None, is_pending_send_capacity: false, send_capacity_inc: false, next_open: None, is_pending_open: false, is_pending_push: false, next_pending_accept: None, is_pending_accept: false, recv_flow: FlowControl { window_size: Window(1048576), available: Window(1048576) }, in_flight_recv_data: 0, next_window_update: None, is_pending_window_update: false, reset_at: None, next_reset_expire: None, pending_recv: Deque { indices: None }, recv_task: None, pending_push_promises: Queue { indices: None, _p: PhantomData }, content_length: Omitted }
     TRACE h2::proto::streams::counts       > transition_after; stream=StreamId(1); state=State { inner: HalfClosedRemote(AwaitingHeaders) }; is_closed=false; pending_send_empty=true; buffered_send_data=0; num_recv=1; num_send=0
     TRACE h2::codec::framed_read           > poll
     TRACE h2::codec::framed_read           > read.bytes=27
     TRACE h2::codec::framed_read           > FramedRead::decode_frame; offset=27
     TRACE h2::codec::framed_read           > decoding frame from 27B
     TRACE h2::codec::framed_read           > frame.kind=Settings
     DEBUG h2::codec::framed_read           > received frame=Settings { flags: (0x0), max_concurrent_streams: 128, initial_window_size: 65536, max_frame_size: 16777215 }
     TRACE h2::proto::connection            > recv SETTINGS frame=Settings { flags: (0x0), max_concurrent_streams: 128, initial_window_size: 65536, max_frame_size: 16777215 }
     TRACE h2::codec::framed_write          > FramedWrite::buffer; frame=Settings { flags: (0x1: ACK) }
     DEBUG h2::codec::framed_write          > send frame=Settings { flags: (0x1: ACK) }
     TRACE h2::frame::settings              > encoding SETTINGS; len=0
     TRACE h2::codec::framed_write          > encoded settings rem=36
     TRACE h2::proto::settings              > ACK sent; applying settings
     TRACE h2::proto::streams::prioritize   > recv_stream_window_update; stream.id=StreamId(1) stream.state=State { inner: HalfClosedLocal(AwaitingHeaders) } inc=1 flow=FlowControl { window_size: Window(65535), available: Window(0) }
     TRACE h2::codec::framed_read           > poll
     TRACE h2::codec::framed_read           > read.bytes=13
     TRACE h2::codec::framed_read           > FramedRead::decode_frame; offset=13
     TRACE h2::codec::framed_read           > decoding frame from 13B
     TRACE h2::codec::framed_read           > frame.kind=WindowUpdate
     DEBUG h2::codec::framed_read           > received frame=WindowUpdate { stream_id: StreamId(0), size_increment: 2147418112 }
     TRACE h2::proto::connection            > recv WINDOW_UPDATE frame=WindowUpdate { stream_id: StreamId(0), size_increment: 2147418112 }
     TRACE h2::proto::streams::flow_control > inc_window; sz=2147418112; old=65535; new=2147483647
     TRACE h2::proto::streams::prioritize   > assign_connection_capacity; inc=2147418112
     TRACE h2::codec::framed_read           > poll
     TRACE h2::codec::framed_write          > FramedWrite::buffer; frame=WindowUpdate { stream_id: StreamId(0), size_increment: 5177345 }
     DEBUG h2::codec::framed_write          > send frame=WindowUpdate { stream_id: StreamId(0), size_increment: 5177345 }
     TRACE h2::frame::window_update         > encoding WINDOW_UPDATE; id=StreamId(0)
     TRACE h2::codec::framed_write          > encoded window_update rem=49
     TRACE h2::proto::streams::flow_control > inc_window; sz=5177345; old=65535; new=5242880
     TRACE h2::proto::streams::prioritize   > poll_complete
     TRACE h2::proto::streams::prioritize   > schedule_pending_open
     TRACE h2::proto::streams::prioritize   > popped; stream.id=StreamId(1) stream.state=State { inner: HalfClosedLocal(AwaitingHeaders) }
     TRACE h2::proto::streams::prioritize   > is_pending_reset=false
     TRACE h2::proto::streams::prioritize   > pop_frame; frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
     TRACE h2::proto::streams::store        > Queue::push
     TRACE h2::proto::streams::store        >  -> first entry
     TRACE h2::proto::streams::counts       > transition_after; stream=StreamId(1); state=State { inner: HalfClosedLocal(AwaitingHeaders) }; is_closed=false; pending_send_empty=false; buffered_send_data=0; num_recv=0; num_send=1
     TRACE h2::proto::streams::prioritize   > writing frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
     TRACE h2::codec::framed_write          > FramedWrite::buffer; frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
     DEBUG h2::codec::framed_write          > send frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
     TRACE h2::proto::streams::prioritize   > schedule_pending_open
     TRACE h2::proto::streams::prioritize   > popped; stream.id=StreamId(1) stream.state=State { inner: HalfClosedLocal(AwaitingHeaders) }
     TRACE h2::proto::streams::prioritize   > is_pending_reset=false
     TRACE h2::proto::streams::prioritize   > data frame sz=0 eos=false window=0 available=0 requested=0 buffered=0
     TRACE h2::proto::streams::prioritize   > sending data frame len=0
     TRACE h2::proto::streams::flow_control > send_data; sz=0; window=65535; available=0
     TRACE h2::proto::streams::flow_control > send_data; sz=0; window=2147483647; available=2147483647
     TRACE h2::proto::streams::prioritize   > pop_frame; frame=Data { stream_id: StreamId(1) }
     TRACE h2::proto::streams::store        > Queue::push
     TRACE h2::proto::streams::store        >  -> first entry
     TRACE h2::proto::streams::counts       > transition_after; stream=StreamId(1); state=State { inner: HalfClosedLocal(AwaitingHeaders) }; is_closed=false; pending_send_empty=false; buffered_send_data=0; num_recv=0; num_send=1
     TRACE h2::proto::streams::prioritize   > writing frame=Data { stream_id: StreamId(1) }
     TRACE h2::codec::framed_write          > FramedWrite::buffer; frame=Data { stream_id: StreamId(1) }
     DEBUG h2::codec::framed_write          > send frame=Data { stream_id: StreamId(1) }
     TRACE h2::proto::streams::prioritize   > reclaimed frame=Data { stream_id: StreamId(1) } sz=0
     TRACE h2::proto::streams::prioritize   > schedule_pending_open
     TRACE h2::proto::streams::prioritize   > popped; stream.id=StreamId(1) stream.state=State { inner: HalfClosedLocal(AwaitingHeaders) }
     TRACE h2::proto::streams::prioritize   > is_pending_reset=false
     TRACE h2::proto::streams::prioritize   > data frame sz=0 eos=true window=0 available=0 requested=0 buffered=0
     TRACE h2::proto::streams::prioritize   > sending data frame len=0
     TRACE h2::proto::streams::flow_control > send_data; sz=0; window=65535; available=0
     TRACE h2::proto::streams::flow_control > send_data; sz=0; window=2147483647; available=2147483647
     TRACE h2::proto::streams::prioritize   > pop_frame; frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) }
     TRACE h2::proto::streams::counts       > transition_after; stream=StreamId(1); state=State { inner: HalfClosedLocal(AwaitingHeaders) }; is_closed=false; pending_send_empty=true; buffered_send_data=0; num_recv=0; num_send=1
     TRACE h2::proto::streams::prioritize   > writing frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) }
     TRACE h2::codec::framed_write          > FramedWrite::buffer; frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) }
     DEBUG h2::codec::framed_write          > send frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) }
     TRACE h2::proto::streams::prioritize   > reclaimed frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) } sz=0
     TRACE h2::proto::streams::prioritize   > schedule_pending_open
     TRACE h2::codec::framed_write          > queued_data_frame=false
     TRACE h2::codec::framed_write          > flushing buffer
     TRACE h2::proto::connection            > connection.state=Open
     TRACE h2::codec::framed_read           > poll
     TRACE h2::proto::streams::prioritize   > poll_complete
     TRACE h2::proto::streams::prioritize   > schedule_pending_open
     TRACE h2::codec::framed_write          > flushing buffer
     TRACE h2::proto::connection            > connection.state=Open
     TRACE h2::codec::framed_read           > poll
     TRACE h2::proto::streams::prioritize   > poll_complete
     TRACE h2::proto::streams::prioritize   > schedule_pending_open
     TRACE h2::codec::framed_write          > flushing buffer
     TRACE h2::proto::connection            > connection.state=Open
     TRACE h2::codec::framed_read           > poll
     TRACE h2::codec::framed_read           > read.bytes=9
     TRACE h2::codec::framed_read           > FramedRead::decode_frame; offset=9
     TRACE h2::codec::framed_read           > decoding frame from 9B
     TRACE h2::codec::framed_read           > frame.kind=Settings
     DEBUG h2::codec::framed_read           > received frame=Settings { flags: (0x1: ACK) }
     TRACE h2::proto::connection            > recv SETTINGS frame=Settings { flags: (0x1: ACK) }
     DEBUG h2::proto::settings              > received settings ACK; applying Settings { flags: (0x0), enable_push: 0, initial_window_size: 2097152, max_frame_size: 16384 }
     TRACE h2::proto::streams::recv         > update_initial_window_size; new=2097152; old=2097152
     TRACE h2::codec::framed_read           > poll
     TRACE h2::codec::framed_read           > read.bytes=17
     TRACE h2::codec::framed_read           > FramedRead::decode_frame; offset=17
     TRACE h2::codec::framed_read           > decoding frame from 17B
     TRACE h2::codec::framed_read           > frame.kind=GoAway
     DEBUG h2::codec::framed_read           > received frame=GoAway { error_code: FRAME_SIZE_ERROR, last_stream_id: StreamId(1) }
     TRACE h2::proto::connection            > recv GOAWAY frame=GoAway { error_code: FRAME_SIZE_ERROR, last_stream_id: StreamId(1) }
     TRACE h2::codec::framed_read           > poll
     TRACE h2::proto::connection            > codec closed
     TRACE h2::proto::streams::streams      > Streams::recv_eof
     TRACE h2::proto::streams::state        > recv_eof; state=HalfClosedLocal(AwaitingHeaders)
     TRACE h2::proto::streams::prioritize   > clear_queue; stream.id=StreamId(1)
     TRACE h2::proto::streams::prioritize   > assign_connection_capacity; inc=0
     TRACE h2::proto::streams::counts       > transition_after; stream=StreamId(1); state=State { inner: Closed(Error(Io(BrokenPipe, None))) }; is_closed=true; pending_send_empty=true; buffered_send_data=0; num_recv=0; num_send=1
     TRACE h2::proto::streams::counts       > dec_num_streams; stream=StreamId(1)
     TRACE h2::proto::connection            > connection.state=Closing(NO_ERROR, Library)
     TRACE h2::proto::connection            > connection closing after flush
     TRACE h2::codec::framed_write          > flushing buffer
     TRACE h2::proto::connection            > connection.state=Closed(NO_ERROR, Library)
     TRACE h2::proto::streams::streams      > Streams::recv_eof
     TRACE h2::proto::streams::streams      > drop_stream_ref; stream=Stream { id: StreamId(1), state: State { inner: Closed(Error(Io(BrokenPipe, None))) }, is_counted: false, ref_count: 1, next_pending_send: None, is_pending_send: false, send_flow: FlowControl { window_size: Window(65535), available: Window(0) }, requested_send_capacity: 0, buffered_send_data: 0, send_task: None, pending_send: Deque { indices: None }, next_pending_send_capacity: None, is_pending_send_capacity: false, send_capacity_inc: true, next_open: None, is_pending_open: false, is_pending_push: false, next_pending_accept: None, is_pending_accept: false, recv_flow: FlowControl { window_size: Window(2097152), available: Window(2097152) }, in_flight_recv_data: 0, next_window_update: None, is_pending_window_update: false, reset_at: None, next_reset_expire: None, pending_recv: Deque { indices: None }, recv_task: None, pending_push_promises: Queue { indices: None, _p: PhantomData }, content_length: Omitted }
     TRACE h2::proto::streams::counts       > transition_after; stream=StreamId(1); state=State { inner: Closed(Error(Io(BrokenPipe, None))) }; is_closed=true; pending_send_empty=true; buffered_send_data=0; num_recv=0; num_send=0
     TRACE h2::proto::streams::send         > send_headers; frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }; init_window=2097152
     TRACE h2::proto::streams::prioritize   > Prioritize::queue_frame; stream.id=StreamId(1)
     TRACE h2::proto::streams::prioritize   > schedule_send stream.id=StreamId(1)
     TRACE h2::proto::streams::store        > Queue::push
     TRACE h2::proto::streams::store        >  -> first entry
     TRACE h2::proto::streams::counts       > transition_after; stream=StreamId(1); state=State { inner: HalfClosedRemote(Streaming) }; is_closed=false; pending_send_empty=false; buffered_send_data=0; num_recv=1; num_send=0
     TRACE h2::proto::streams::prioritize   > reserve_capacity; stream.id=StreamId(1) requested=1 effective=1 curr=0
     TRACE h2::proto::streams::prioritize   > try_assign_capacity; stream.id=StreamId(1)
     TRACE h2::proto::streams::prioritize   > requested=1 additional=1 buffered=0 window=2097152 conn=5242880
     TRACE h2::proto::streams::prioritize   > assigning capacity=1
     TRACE h2::proto::streams::stream       >   assigned capacity to stream; available=1; buffered=0; id=StreamId(1); max_buffer_size=409600
     TRACE h2::proto::streams::stream       >   notifying task
     TRACE h2::proto::streams::prioritize   > available=1 requested=1 buffered=0 has_unavailable=true
     TRACE h2::proto::streams::prioritize   > send_data; sz=291 requested=1
     TRACE h2::proto::streams::prioritize   > buffered=291
     TRACE h2::proto::streams::prioritize   > try_assign_capacity; stream.id=StreamId(1)
     TRACE h2::proto::streams::prioritize   > requested=291 additional=290 buffered=291 window=2097152 conn=5242879
     TRACE h2::proto::streams::prioritize   > assigning capacity=290
     TRACE h2::proto::streams::stream       >   assigned capacity to stream; available=291; buffered=291; id=StreamId(1); max_buffer_size=409600
     TRACE h2::proto::streams::prioritize   > available=291 requested=291 buffered=291 has_unavailable=true
     TRACE h2::proto::streams::store        > Queue::push
     TRACE h2::proto::streams::store        >  -> already queued
     TRACE h2::proto::streams::state        > send_close: HalfClosedRemote => Closed
     TRACE h2::proto::streams::prioritize   > reserve_capacity; stream.id=StreamId(1) requested=0 effective=291 curr=291
     TRACE h2::proto::streams::prioritize   > available=291 buffered=291
     TRACE h2::proto::streams::prioritize   > Prioritize::queue_frame; stream.id=StreamId(1)
     TRACE h2::proto::streams::prioritize   > schedule_send stream.id=StreamId(1)
     TRACE h2::proto::streams::store        > Queue::push
     TRACE h2::proto::streams::store        >  -> already queued
     TRACE h2::proto::streams::counts       > transition_after; stream=StreamId(1); state=State { inner: Closed(EndStream) }; is_closed=false; pending_send_empty=false; buffered_send_data=291; num_recv=1; num_send=0
     TRACE h2::proto::streams::streams      > drop_stream_ref; stream=Stream { id: StreamId(1), state: State { inner: Closed(EndStream) }, is_counted: true, ref_count: 2, next_pending_send: None, is_pending_send: true, send_flow: FlowControl { window_size: Window(2097152), available: Window(291) }, requested_send_capacity: 291, buffered_send_data: 291, send_task: Some(Waker { data: 0x134813000, vtable: 0x102bcc300 }), pending_send: Deque { indices: Some(Indices { head: 0, tail: 1 }) }, next_pending_send_capacity: None, is_pending_send_capacity: false, send_capacity_inc: true, next_open: None, is_pending_open: false, is_pending_push: false, next_pending_accept: None, is_pending_accept: false, recv_flow: FlowControl { window_size: Window(1048576), available: Window(1048576) }, in_flight_recv_data: 0, next_window_update: None, is_pending_window_update: false, reset_at: None, next_reset_expire: None, pending_recv: Deque { indices: None }, recv_task: None, pending_push_promises: Queue { indices: None, _p: PhantomData }, content_length: Omitted }
     TRACE h2::proto::streams::counts       > transition_after; stream=StreamId(1); state=State { inner: Closed(EndStream) }; is_closed=false; pending_send_empty=false; buffered_send_data=291; num_recv=1; num_send=0
     TRACE h2::proto::streams::streams      > drop_stream_ref; stream=Stream { id: StreamId(1), state: State { inner: Closed(EndStream) }, is_counted: true, ref_count: 1, next_pending_send: None, is_pending_send: true, send_flow: FlowControl { window_size: Window(2097152), available: Window(291) }, requested_send_capacity: 291, buffered_send_data: 291, send_task: Some(Waker { data: 0x134813000, vtable: 0x102bcc300 }), pending_send: Deque { indices: Some(Indices { head: 0, tail: 1 }) }, next_pending_send_capacity: None, is_pending_send_capacity: false, send_capacity_inc: true, next_open: None, is_pending_open: false, is_pending_push: false, next_pending_accept: None, is_pending_accept: false, recv_flow: FlowControl { window_size: Window(1048576), available: Window(1048576) }, in_flight_recv_data: 0, next_window_update: None, is_pending_window_update: false, reset_at: None, next_reset_expire: None, pending_recv: Deque { indices: None }, recv_task: None, pending_push_promises: Queue { indices: None, _p: PhantomData }, content_length: Omitted }
     TRACE h2::proto::streams::counts       > transition_after; stream=StreamId(1); state=State { inner: Closed(EndStream) }; is_closed=false; pending_send_empty=false; buffered_send_data=291; num_recv=1; num_send=0
     TRACE h2::proto::connection            > connection.state=Open
     TRACE h2::codec::framed_read           > poll
     TRACE h2::proto::streams::prioritize   > poll_complete
     TRACE h2::proto::streams::prioritize   > schedule_pending_open
     TRACE h2::proto::streams::prioritize   > popped; stream.id=StreamId(1) stream.state=State { inner: Closed(EndStream) }
     TRACE h2::proto::streams::prioritize   > is_pending_reset=false
     TRACE h2::proto::streams::prioritize   > pop_frame; frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
     TRACE h2::proto::streams::store        > Queue::push
     TRACE h2::proto::streams::store        >  -> first entry
     TRACE h2::proto::streams::counts       > transition_after; stream=StreamId(1); state=State { inner: Closed(EndStream) }; is_closed=false; pending_send_empty=false; buffered_send_data=291; num_recv=1; num_send=0
     TRACE h2::proto::streams::prioritize   > writing frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
     TRACE h2::codec::framed_write          > FramedWrite::buffer; frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
     DEBUG h2::codec::framed_write          > send frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
     TRACE h2::proto::streams::prioritize   > schedule_pending_open
     TRACE h2::proto::streams::prioritize   > popped; stream.id=StreamId(1) stream.state=State { inner: Closed(EndStream) }
     TRACE h2::proto::streams::prioritize   > is_pending_reset=false
     TRACE h2::proto::streams::prioritize   > data frame sz=291 eos=true window=291 available=291 requested=291 buffered=291
     TRACE h2::proto::streams::prioritize   > sending data frame len=291
     TRACE h2::proto::streams::flow_control > send_data; sz=291; window=2097152; available=291
     TRACE h2::proto::streams::flow_control > send_data; sz=291; window=5242880; available=5242880
     TRACE h2::proto::streams::prioritize   > pop_frame; frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) }
     TRACE h2::proto::streams::counts       > transition_after; stream=StreamId(1); state=State { inner: Closed(EndStream) }; is_closed=true; pending_send_empty=true; buffered_send_data=0; num_recv=1; num_send=0
     TRACE h2::proto::streams::counts       > dec_num_streams; stream=StreamId(1)
     TRACE h2::proto::streams::prioritize   > writing frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) }
     TRACE h2::codec::framed_write          > FramedWrite::buffer; frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) }
     DEBUG h2::codec::framed_write          > send frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) }
     TRACE h2::codec::framed_write          > queued_data_frame=true
     TRACE h2::codec::framed_write          > queued_data_frame=true
     TRACE h2::codec::framed_write          > flushing buffer
     TRACE h2::proto::streams::prioritize   > reclaimed frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) } sz=0
     TRACE h2::proto::streams::prioritize   > schedule_pending_open
     TRACE h2::codec::framed_write          > flushing buffer
    
    opened by dmeijboom 1
Releases
  • v0.3.15(Oct 24, 2022)

    What's Changed

    • Remove B: Buf bound on SendStream's parameter by @djkoloski in https://github.com/hyperium/h2/pull/614
    • Add accessor for StreamId u32 by @ehaydenr in https://github.com/hyperium/h2/pull/639 (see the sketch after this list)
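
    A minimal sketch of the new accessor: reading the numeric ID of the stream a response will be sent on. The helper name is hypothetical, and exposing the value via a From<StreamId> for u32 conversion is an assumption based on the PR title; the exact spelling may differ.

        use bytes::Bytes;
        use h2::server::SendResponse;

        // Hypothetical helper: log which stream a response will be sent on.
        // Assumes the accessor from #639 is exposed as `From<StreamId> for u32`.
        fn log_stream_id(respond: &SendResponse<Bytes>) {
            let id: u32 = respond.stream_id().into();
            println!("responding on stream {}", id);
        }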

    New Contributors

    • @ehaydenr made their first contribution in https://github.com/hyperium/h2/pull/639
  • v0.3.14(Aug 16, 2022)

    • Add Error::is_reset function (see the sketch after this list).
    • Bump MSRV to Rust 1.56.
    • Return RST_STREAM(NO_ERROR) when the server responds early.
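
    A sketch of the new predicate in action, distinguishing stream-level resets from connection-level failures; is_go_away, is_io, and reason are pre-existing companions on h2::Error, and the describe name is illustrative.

        use h2::Error;

        // Classify a request failure: a stream-level RST_STREAM is often
        // safe to retry, while GOAWAY and I/O errors affect the connection.
        fn describe(err: &Error) {
            if err.is_reset() {
                println!("stream reset: {:?}", err.reason());
            } else if err.is_go_away() {
                println!("connection going away: {:?}", err.reason());
            } else if err.is_io() {
                println!("underlying I/O error: {}", err);
            } else {
                println!("other h2 error: {}", err);
            }
        }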

    New Contributors

    • @djkoloski made their first contribution in https://github.com/hyperium/h2/pull/616
    • @bruceg made their first contribution in https://github.com/hyperium/h2/pull/618
    • @ryanrussell made their first contribution in https://github.com/hyperium/h2/pull/620
    • @kckeiks made their first contribution in https://github.com/hyperium/h2/pull/625
    • @erebe made their first contribution in https://github.com/hyperium/h2/pull/634
  • v0.3.13(Mar 31, 2022)

  • v0.3.12(Mar 9, 2022)

    • Avoid time operations that can panic (#599)
    • Bump MSRV to Rust 1.49 (#606)
    • Fix header decoding error when a header name is contained at a continuation header boundary (#589)
    • Remove I/O type names from handshake tracing spans (#608)

    New Contributors

    • @LPardue made their first contribution in https://github.com/hyperium/h2/pull/602
    • @hikaricai made their first contribution in https://github.com/hyperium/h2/pull/589
  • v0.3.11(Jan 26, 2022)

  • v0.3.10(Jan 7, 2022)

  • v0.3.9(Dec 9, 2021)

  • v0.3.8(Dec 8, 2021)

    • Add "extended CONNECT support". Adds h2::ext::Protocol, which is used for request and response extensions to connect new protocols over an HTTP/2 stream.
    • Add max_send_buffer_size options to client and server builders, and a default of ~400KB. This acts like a high-water mark for the poll_capacity() method.
    • Fix panic if receiving malformed HEADERS with stream ID of 0.
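
    A sketch combining both additions: the builder caps the send buffer, and the Protocol extension becomes the :protocol pseudo-header on an extended CONNECT. The build_parts name, URI, and "websocket" protocol are illustrative placeholders.

        use h2::{client, ext::Protocol};
        use http::Request;

        // Cap the send-buffer high-water mark and build an extended CONNECT
        // request; h2 reads the Protocol extension when sending the request.
        fn build_parts() -> (client::Builder, Request<()>) {
            let mut builder = client::Builder::new();
            builder.max_send_buffer_size(400 * 1024); // mirrors the ~400KB default

            let mut request = Request::connect("https://example.com/chat")
                .body(())
                .unwrap();
            request
                .extensions_mut()
                .insert(Protocol::from_static("websocket"));

            (builder, request)
        }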
  • v0.3.7(Oct 22, 2021)

    • Fix panic if the server sends a malformed frame on a stream the client was about to open.
    • Fix server to treat :status in a request as a stream error instead of a connection error.
  • v0.3.6(Sep 30, 2021)

  • v0.3.5(Sep 29, 2021)

    • Fix sending of very large headers. Previously, when a single header was too big to fit in a single HEADERS frame, an error was returned. Now the header block is split across CONTINUATION frames and sent correctly.
    • Widen the buffered send-data counter to a larger integer type.
    • Refactor the error format to include what initiated the error (remote, local, or user), whether it was a stream-level or connection-level error, and any received debug data.
  • v0.3.4(Aug 20, 2021)

    • Fix panic when encoding an HPACK header table size update over a certain size.
    • Fix SendRequest to wake up connection when dropped.
    • Fix potential hang if RecvStream is placed in the request or response extensions.
    • Stop calling Instant::now when the configured maximum number of reset streams is zero.
  • v0.3.3(Apr 29, 2021)

  • v0.3.2(Mar 25, 2021)

  • v0.3.1(Mar 25, 2021)

    • Add Connection::max_concurrent_recv_streams() getter (see the sketch after this list).
    • Add Connection::max_concurrent_send_streams() getter.
    • Fix client to ignore receipt of 1xx HEADERS frames.
    • Fix incorrect calculation of pseudo-header lengths when determining if a received header is too big.
    • Reduce monomorphized code size of internal code.
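
    A sketch of reading both limits right after the client handshake; the tokio TcpStream transport and print_limits name are illustrative, and any I/O type implementing AsyncRead + AsyncWrite works.

        use tokio::net::TcpStream;

        // After the handshake, the connection exposes the peer's advertised
        // MAX_CONCURRENT_STREAMS (our send limit) and our own (the recv limit).
        async fn print_limits(io: TcpStream) -> Result<(), h2::Error> {
            let (_send_request, connection) = h2::client::handshake(io).await?;
            println!("recv streams: {}", connection.max_concurrent_recv_streams());
            println!("send streams: {}", connection.max_concurrent_send_streams());
            // The connection future still needs to be driven to make progress,
            // e.g. via tokio::spawn.
            Ok(())
        }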
  • v0.3.0(Feb 26, 2021)

  • v0.2.7(Nov 10, 2020)

    • Fix stream ref count when sending a push promise.
    • Fix receiving empty DATA frames in response to a HEAD request.
    • Fix handling of the client disabling server push (the ENABLE_PUSH setting).
  • v0.2.6(Jul 14, 2020)

  • v0.2.5(Jul 13, 2020)

  • v0.2.4(Mar 30, 2020)

  • v0.2.3(Mar 30, 2020)

    • Fix server so it can accept CONNECT requests that omit :scheme and :path, as the specification requires.
    • Fix receiving a GOAWAY frame so it no longer updates the receive max stream ID; it should only update the max send stream ID.
  • v0.2.2(Mar 30, 2020)

  • v0.2.1(Dec 6, 2019)

  • v0.2.0(Dec 3, 2019)

    • Add server support for PUSH_PROMISEs (#327).
    • Add server::Connection::set_initial_window_size and client::Connection::set_initial_window_size, which can adjust the INITIAL_WINDOW_SIZE setting on an existing connection (#421; see the sketch after this list).
    • Update to http v0.2.
    • Update to tokio v0.2.
    • Update from futures 0.1 to std::future::Future.
    • Change unstable-stream feature to stream.
    • Change ReserveCapacity to FlowControl (#423).
    • Move the Stream implementations behind the optional stream cargo feature, disabled by default. Specific async and poll functions are now inherent, and Stream can be re-enabled with the stream feature.
    • Remove From<io::Error> for Error.
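
    A sketch of adjusting the setting on a live server connection; the serve name and the 1 MiB figure are arbitrary choices for illustration.

        use tokio::net::TcpStream;

        // Raise SETTINGS_INITIAL_WINDOW_SIZE after the handshake; h2 sends
        // the updated SETTINGS frame to the peer.
        async fn serve(io: TcpStream) -> Result<(), h2::Error> {
            let mut connection = h2::server::handshake(io).await?;
            connection.set_initial_window_size(1024 * 1024)?;

            while let Some(request) = connection.accept().await {
                let (_request, mut respond) = request?;
                // Answer every request with an empty 200 response.
                let _stream = respond.send_response(http::Response::new(()), true)?;
            }
            Ok(())
        }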
Incomplete Redis client and server implementation using Tokio - for learning purposes only

mini-redis mini-redis is an incomplete, idiomatic implementation of a Redis client and server built with Tokio. The intent of this project is to provi

Tokio 2.3k Jan 4, 2023
rust_arango enables you to connect with ArangoDB server, access to database, execute AQL query, manage ArangoDB in an easy and intuitive way, both async and plain synchronous code with any HTTP ecosystem you love.

Foretag 3 Mar 24, 2022
Async Lightweight HTTP client using system native library if possible. (Currently under heavy development)

Async Lightweight HTTP Client (aka ALHC) What if we need async but also lightweight http client without using such a large library like reqwest, isahc

SteveXMH 7 Dec 15, 2022
rinflux is Rust based influx client implementation that have been inspired from influx other language implementation, developed with 💖

Unofficial InfluxDB Driver for Rust This library is a work in progress. This means a feature you might need is not implemented yet or could be handled

Workfoxes 1 Apr 7, 2022
This code features a viper-client, which can connect to a viper-server, a custom interface made for Comelit devices.

Viper Client ?? (WIP) This is code for my intercom; specifically for the Comelit Mini Wi-Fi MSFV. This features a ViperClient which can talk to the Co

Gerard 4 Feb 20, 2023
Affine-client is a client for AFFINE based on Tauri

Affine Client affine-client is a client for AFFINE based on Tauri Supported Platforms Windows Linux MacOS Download https://github.com/m1911star/affine

Horus 216 Dec 25, 2022
An intentionally-limited Rust implementation of the Redis server with no external dependencies.

lil-redis An intentionally-limited Rust implementation of the Redis server. lil redis is an accessible implementation of a very basic Redis server (wi

Miguel Piedrafita 37 Jan 1, 2023
🦀 REST API client implementation for freee, auto-generated from OpenAPI specification.

freee-rs REST API client implementation for freee, auto-generated from OpenAPI specification. Getting Started Add to your Cargo.toml as follows: [depe

Naoki Ikeguchi 3 Jul 14, 2022
Python implementation of CeresDB client.

CeresDB Python Client Introduction The python client for CeresDB. CeresDB is a high-performance, distributed, schema-less, cloud native time-series da

CeresDB 6 Feb 28, 2023
Sharded, concurrent mini redis that support http interface implemented in rust

Rudis A mini version of redis server that provides http interface implemented in Rust. The in-memorry kv-storage is sharded and concurrent safe. Inspi

Lorenzo Cao 43 May 30, 2023
TDS 7.2+ (mssql / Microsoft SQL Server) async driver for rust

Tiberius A native Microsoft SQL Server (TDS) client for Rust. Supported SQL Server versions Version Support level Notes 2019 Tested on CI 2017 Tested

Prisma 189 Dec 25, 2022
FeOphant - A SQL database server written in Rust and inspired by PostgreSQL.

A PostgreSQL inspired SQL database written in Rust.

Christopher Hotchkiss 27 Dec 7, 2022
Redis compatible server framework for Rust

Redis compatible server framework for Rust Features Create a fast custom Redis compatible server in Rust Simple API. Support for pipelining and telnet

Josh Baker 61 Nov 12, 2022
A Rust-based comment server using SQLite and an intuitive REST API.

soudan A Rust-based comment server using SQLite and an intuitive REST API. Soudan is built with simplicity and static sites in mind. CLI usage See sou

Elnu 0 Dec 19, 2022
Notification demon + web server using async Rust

Async Rust example Road to the asynchronous Rust Table of Contents About the Project Screenshots Tech Stack Features Getting Started Prerequisites Clo

Edem Khadiev 4 Feb 9, 2023
This is a small demo of how to transform a simple single-server RocksDB service written in Rust into a distributed version using OmniPaxos.

OmniPaxos Demo This is a small demo of how to transform a simple single-server RocksDB service into a distributed version using OmniPaxos. Related res

Harald Ng 6 Jun 28, 2023
Query is a Rust server for your remote SQLite databases and a CLI to manage them.

Query Query is a Rust server for your remote SQLite databases and a CLI to manage them. Table Of Contents Run A Query Server CLI Install Use The Insta

Víctor García 6 Oct 6, 2023
Cassandra DB native client written in Rust language. Find 1.x versions on https://github.com/AlexPikalov/cdrs/tree/v.1.x Looking for an async version? - Check WIP https://github.com/AlexPikalov/cdrs-async

CDRS CDRS is looking for maintainers CDRS is Apache Cassandra driver written in pure Rust. ?? Looking for an async version? async-std https://github.c

Alex Pikalov 338 Jan 1, 2023
CouchDB client-side library for the Rust programming language

Chill Chill is a client-side CouchDB library for the Rust programming language, available on crates.io. It targets Rust Stable. Chill's three chief de

null 35 Jun 26, 2022