A purpose-built proxy for the Linkerd service mesh. Written in Rust.

Overview

This repo contains the transparent proxy component of Linkerd2. While the Linkerd2 proxy is heavily influenced by the Linkerd 1.X proxy, it comprises an entirely new codebase implemented in the Rust programming language.

This proxy's features include:

  • Transparent, zero-config proxying for HTTP, HTTP/2, and arbitrary TCP protocols.
  • Automatic Prometheus metrics export for HTTP and TCP traffic.
  • Transparent, zero-config WebSocket proxying.
  • Automatic, latency-aware, layer-7 load balancing.
  • Automatic layer-4 load balancing for non-HTTP traffic.
  • Automatic TLS (experimental).
  • An on-demand diagnostic tap API.

This proxy is primarily intended to run on Linux in containerized environments like Kubernetes, though it may also work on other Unix-like systems (like macOS).

The proxy supports service discovery via DNS and the linkerd2 Destination gRPC API.

The Linkerd project is hosted by the Cloud Native Computing Foundation (CNCF).

Building the project

A Makefile is provided to automate most build tasks. It provides the following targets:

  • make build -- Compiles the proxy on your local system using cargo
  • make clean -- Cleans the build target on the local system using cargo clean
  • make test -- Runs unit and integration tests on your local system using cargo
  • make test-flakey -- Runs all tests, including those that may fail spuriously
  • make package -- Builds a tarball at target/release/linkerd2-proxy-${PACKAGE_VERSION}.tar.gz. If PACKAGE_VERSION is not set in the environment, the local git SHA is used.
  • make docker -- Builds a Docker container image that can be used for testing. If the DOCKER_TAG environment variable is set, the image is given this name. Otherwise, the image is not named.

Cargo

Usually, Cargo, Rust's package manager, is used to build and test this project. If you don't have Cargo installed, we suggest getting it via https://rustup.rs/.

Repository Structure

This project is broken into many small libraries, or crates, so that components may be compiled and tested independently.

Code of conduct

This project is for everyone. We ask that our users and contributors take a few minutes to review our code of conduct.

License

linkerd2-proxy is copyright 2018 the linkerd2-proxy authors. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use these files except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Comments
  • profiling: add benchmark and profiling scripts

    profiling: add benchmark and profiling scripts

    This is essentially @pothos' PR #278, but rebased & updated to work with the current master. In addition, I've changed the profiling proxy to be run as a separate bin target (run with cargo run --bin profile) rather than a test case that does nothing on most test runs and runs the proxy in profiling mode when an env var is set.

    Description from the original PR:

    A benchmark/profiling script for local development or a CI helps to catch performance regressions early and find spots for optimization.

    The benchmark setup consists of a cargo test that reuses the test infrastructure to forward localhost connections. This test is skipped by default unless an env var is set. The benchmark load comes from a fortio server and client for HTTP/gRPC req/s and latency measurement, and from an iperf server and client for TCP throughput measurement. In addition to the fortio UI for inspecting the benchmark data, the results are also written to a summary text file, which can be used to plot the difference between the summary results of, e.g., two git branches.

    The profiling setup is the same as above but also runs "perf" or "memory-profiler" to sample the call stacks, either at runtime or on heap-allocation calls. This requires a special debug build with optimizations, which can be generated with a build script. The results can be inspected as interactive flamegraph SVGs in the browser.

    Please follow the instructions in the profiling/README.md file on how the scripts are used.

    Signed-off-by: Kai Lüke [email protected]

    Closes #278.

    Signed-off-by: Eliza Weisman [email protected] Co-authored-by: Kai Lüke [email protected]

    opened by hawkw 31
  • Access Logging

    Access Logging

    Access logging is very important functionality for my team as we wish to maintain feature-parity with our existing AWS ALB-based approach. This functionality was requested here and was marked as help wanted, so thought I'd take a stab at implementing it.

    I'm creating this as a draft as it still needs more testing and benchmarking, and I'm new to tower and so might have made some rookie errors. However, I wanted to open a draft as an opportunity to get some early feedback.

    The basic design consists of an AccessLogLayer that instruments both requests and responses that flow through it, in a similar manner to how handle_time is already computed. I'm not a massive fan of this, but it was the only way I could easily see to get accurate processing-time metrics. I've tried to avoid any memory allocation on the hot path, although there are possibly more atomic increments than I would like. The performance impact with the feature disabled (i.e., LINKERD2_PROXY_ACCESS_LOG_FILE not set) should be minimal.

    The results of this instrumentation are then sent over an mpsc channel to an AccessLogCollector that writes them in a space-delimited format to a file specified by an environment variable. It buffers in memory and flushes on termination, and on write if more than FLUSH_TIMEOUT_SECS have elapsed since the last flush. This makes the access logging best-effort, much like AWS ALBs.
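
    The PR implements this as a tower Layer/Service pair. As a rough sketch of the flow described above (time the call, hand a record to the collector over an mpsc channel, never block the request path), with hypothetical names and a Tokio channel standing in for the real types:

        use std::time::{Duration, Instant};
        use tokio::sync::mpsc;

        // Hypothetical record type; the field names are illustrative, not the PR's.
        struct AccessLogRecord {
            method: String,
            uri: String,
            status: u16,
            processing_time: Duration,
        }

        // Time the inner call, then hand a record to the collector task.
        // Best-effort: if the channel is full, the record is dropped rather
        // than blocking the request path.
        async fn instrumented<F>(
            tx: &mpsc::Sender<AccessLogRecord>,
            method: String,
            uri: String,
            call: F,
        ) -> u16
        where
            F: std::future::Future<Output = u16>,
        {
            let start = Instant::now();
            let status = call.await;
            let _ = tx.try_send(AccessLogRecord {
                method,
                uri,
                status,
                processing_time: start.elapsed(),
            });
            status
        }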

    An example deployment scenario using this functionality might deploy a fluent-bit sidecar to ship the logs, or write to /dev/stderr and use a log shipper deployed as a DaemonSet.

    opened by tustvold 26
  • Add Route timeouts

    Add Route timeouts

    If a service profile specifies a timeout for a given route, this applies a timeout waiting for responses on that route.

    Timeouts show up in proxy metrics like this:

    route_response_total{direction="outbound",dst="localhost:80",status_code="504",classification="failure",error="timeout"} 1
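
    Conceptually, this is timeout middleware applied per route. A minimal sketch, assuming tower's off-the-shelf timeout middleware rather than the proxy's actual stack:

        use std::time::Duration;
        use tower::timeout::Timeout;

        // Wrap a route's service so responses slower than the profile's limit
        // fail with a timeout error (surfaced as the 504 failure above).
        // `route_svc` and `limit` are illustrative.
        fn with_route_timeout<S>(route_svc: S, limit: Duration) -> Timeout<S> {
            Timeout::new(route_svc, limit)
        }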
    
    opened by seanmonstar 19
  • feat: add json log format as an option

    feat: add json log format as an option

    JSON logs can be easily parsed in logging sinks such as Elasticsearch.

    resolves https://github.com/linkerd/linkerd2/issues/2491

    cc @hawkw

    Please bear with me as this is my first PR in Rust, unless you count a message change. :)

    Could you please break down the anatomy of the following line:

    https://github.com/linkerd/linkerd2-proxy/blob/610309ebc240494426f9169000c46da9d5fe9647/linkerd/app/core/src/trace.rs#L12

    opened by naseemkullah 15
  • Update dependencies to remove duplicate `rand` versions

    Update dependencies to remove duplicate `rand` versions

    This branch updates the proxy's dependencies to eliminate multiple incompatible versions of the rand crate.

    Previously we depended on both rand 0.4.3 and 0.5.1. After this branch, we depend on rand 0.6.3 (the latest) and 0.5.1. However, PR #169 will also update trust-dns-resolver (the only crate depending on rand 0.5.1) to a version that depends on rand 0.6.3, so once both this branch and #169 are merged, we will depend only on rand 0.6.3.

    Note that this is a fairly large change to Cargo.toml: it was necessary to update many of the proxy's dependencies in order to consolidate on one rand version. Additionally, I had to push branches of some of those dependencies in order to update their rand dependency, so it's currently necessary to patch some of the proxy's dependencies. When those branches merge upstream, the patches can be removed.

    Signed-off-by: Eliza Weisman [email protected]

    opened by hawkw 15
  • trace: Apache Common Log Format access logging

    trace: Apache Common Log Format access logging

    This branch builds on @tustvold's work in #601. The original PR description from that branch:

    Access logging is very important functionality for my team as we wish to maintain feature-parity with our existing AWS ALB-based approach. This functionality was requested here and was marked as help wanted, so thought I'd take a stab at implementing it.

    I'm creating this as a draft as it still needs more testing and benchmarking, and I'm new to tower and so might have made some rookie errors. However, I wanted to open a draft as an opportunity to get some early feedback.

    The basic design consists of an AccessLogLayer that instruments both requests and responses that flow through it, in a similar manner to how handle_time is already computed. I'm not a massive fan of this, but it was the only way I could easily see to get accurate processing-time metrics. I've tried to avoid any memory allocation on the hot path, although there are possibly more atomic increments than I would like. The performance impact with the feature disabled (i.e., LINKERD2_PROXY_ACCESS_LOG_FILE not set) should be minimal.

    The results of this instrumentation are then sent over an mpsc channel to an AccessLogCollector that writes them in a space-delimited format to a file specified by an environment variable. It buffers in memory and flushes on termination, and on write if more than FLUSH_TIMEOUT_SECS have elapsed since the last flush. This makes the access logging best-effort, much like AWS ALBs.

    An example deployment scenario using this functionality might deploy a fluent-bit sidecar to ship the logs, or write to /dev/stderr and use a log shipper deployed as a DaemonSet.

    The additional changes in this branch are:

    • Updated against the latest state of the main branch.
    • Changed the tracing configuration to use per-layer filtering, so that the access log layer only sees access log spans, while the stdout logging layer does not see the access log spans (although, it could if we wanted it to...)
    • Changed the format for outputting the access log to the Apache Common Log Format. Note that this format does not include all the data that the access log spans currently collect; I excluded that data so that the output is compatible with tools that ingest the Apache log format. In a follow-up PR, we can add the ability to control what format the access log is written in, and add an alternative format that includes all the access log data that Linkerd's spans can collect (I suggest newline-delimited JSON for this).
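
    For reference, a line in Apache Common Log Format looks like this (illustrative values):

        127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326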

    Of course, a huge thank you to @tustvold for all their work on this; I only updated the branch with the latest changes and made some minor improvements. :)

    Co-authored-by: Raphael Taylor-Davies [email protected]

    Closes #601

    opened by hawkw 13
  • Separate max destination resolutions from route cache capacity

    Separate max destination resolutions from route cache capacity

    Currently, the proxy places a limit on the number of active routes in the route cache. This limit defaults to 100 routes, and is intended to prevent the proxy from requesting more than 100 lookups from the Destination service.

    However, in some cases, such as Prometheus scraping a large number of pods, the proxy hits this limit even though none of those requests actually result in requests to service discovery (since Prometheus scrapes pods by their IP addresses).

    This branch implements @briansmith's suggestion in https://github.com/linkerd/linkerd2/issues/1322#issuecomment-407161829. It splits the router capacity limit into two separate, configurable limits: one that sets an upper bound on the number of concurrently active destination lookups, and one that limits the capacity of the router cache.

    I've done some preliminary testing using the lifecycle tests, where a single Prometheus instance is configured to scrape a very large number of proxies. In these tests, neither limit is reached.

    Signed-off-by: Eliza Weisman [email protected]

    opened by hawkw 13
  • remove busy loop from destination background future during shutdown

    remove busy loop from destination background future during shutdown

    When the proxy is shutting down, once there are no more outbound connections, the sender side of the resolver channel is dropped. In the admin background thread, when the destination background future is notified of the closure, instead of shutting itself down, it just busy-loops. Now, after seeing shutdown, the background future ends as well.

    While examining this, I noticed all the background futures are joined together into a single Future before being spawned on a dedicated current_thread executor. Join in this case is inefficient, since every single time one of the futures is ready, they are all polled again. Since we have an executor handy, it's better to allow it to manage each of the futures individually.
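
    A minimal sketch of the difference (shown with modern Tokio for brevity; the proxy at the time used futures 0.1 on a current_thread executor):

        use std::future::Future;

        // Spawning each background future as its own task means a wakeup polls
        // only that task. The joined alternative,
        // `futures::future::join_all(backgrounds)`, re-polls every future
        // whenever any one of them is woken. Must be called within a runtime.
        fn spawn_backgrounds<F>(backgrounds: Vec<F>)
        where
            F: Future<Output = ()> + Send + 'static,
        {
            for fut in backgrounds {
                tokio::spawn(fut);
            }
        }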


    I can't prove that this is the cause for https://github.com/linkerd/linkerd2/issues/1231, but this was the first obvious thing I could find.

    opened by seanmonstar 13
  • Added identity header support

    Added identity header support

    Passes the identity as described in https://github.com/linkerd/linkerd2/issues/3756.

    So firstly, until yesterday, I had never seen a single line of Rust code before. I know Rust uses explicit memory management, and I'm sure I've got some things wrong because really, I have no idea what I'm doing.

    I've probably inserted my identity header logic in the wrong place in the stack, and I'm not sure if the identity parameter gets wired up correctly at the right time or not.

    I haven't implemented an integration test because I'm not sure where I should put it, or whether there is a good test to model my tests from. But the tests that are needed here are:

    • Validate that the identity header is passed when present.
    • Validate that any client-supplied identity headers, for both TLS and non-TLS clients, are stripped.
    opened by jroper 12
  • replace proxy::http usage of tower-h2 with hyper

    replace proxy::http usage of tower-h2 with hyper

    This shrinks much of the proxy::http::glue module, which previously existed to abstract over both hyper and tower-h2. This changes it so hyper's HTTP/2 support is used instead of tower-h2. As a result, the glue module is able to get much smaller, now mostly just being glue for hyper + tower (as tower stabilizes, hyper will eventually provide support itself, and the glue can go away altogether).

    One functional change is that HTTP/2 requests have lost the ability to wait for a reconnect. The reason is that hyper::Client cannot implement NewService the same way tower_h2::Client could: hyper's Client cannot pre-emptively prepare a connection without a Request. This "loss" makes HTTP/2 function the same way HTTP/1 has the whole time. With the eventual introduction of general retries, this functionality should be regained, but even better, working for both HTTP/1 and HTTP/2.

    Due to this change, I'm hesitant to merge this right before announcing Linkerd 2.1. Ideally, this change could be tested internally for some time, through development and integration tests.

    opened by seanmonstar 12
  • ci/workflows: Add CIFuzz integration

    ci/workflows: Add CIFuzz integration

    Add the CIFuzz GitHub Action to build and run fuzzers on pull requests.

    This will run the fuzzers for 600 seconds (split between all the fuzzers) in CI when a pull request is made; it is a service provided by OSS-Fuzz. It can help catch bugs early and catch regressions.

    Documentation for CIFuzz can be found here: https://google.github.io/oss-fuzz/getting-started/continuous-integration/

    Signed-off-by: David Korczynski [email protected]

    opened by DavidKorczynski 10
  • distribute: Add a backend cache

    distribute: Add a backend cache

    NewDistribute will be used to implement per-route traffic distribution policies so that, for instance, each request route may use a different traffic split. In this setup, we want each instance (re-constructed as configuration changes) to reuse a common set of concrete services so that load-balancer state need not be lost when a policy changes.

    This change adds a distribute::BackendCache module that produces a NewDistribute for an updated, cached set of backends.
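
    As an illustration of the caching idea only (the BackendCache shape and get_or_make name below are hypothetical, not the actual module's API):

        use std::collections::HashMap;
        use std::sync::{Arc, Mutex};

        // Reuse the concrete service already built for a backend across policy
        // updates, so load-balancer state is not thrown away. `S` is the
        // concrete service type.
        #[derive(Clone)]
        struct BackendCache<S> {
            services: Arc<Mutex<HashMap<String, S>>>,
        }

        impl<S: Clone> BackendCache<S> {
            fn get_or_make(&self, backend: &str, make: impl FnOnce() -> S) -> S {
                self.services
                    .lock()
                    .unwrap()
                    .entry(backend.to_string())
                    .or_insert_with(make)
                    .clone()
            }
        }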

    opened by olix0r 0
  • build(deps): bump tj-actions/changed-files from 35.1.0 to 35.2.1

    build(deps): bump tj-actions/changed-files from 35.1.0 to 35.2.1

    Bumps tj-actions/changed-files from 35.1.0 to 35.2.1.

    Release notes and changelog (sourced from tj-actions/changed-files's releases and changelog; truncated):

    • v35.2.1 (2023-01-02): full changelog at https://github.com/tj-actions/changed-files/compare/v35...v35.2.1
    • v35.2.0 (2022-12-30): fixed "[BUG] files_ignore used with files not ignoring as expected" (#901); full changelog at https://github.com/tj-actions/changed-files/compare/v35...v35.2.0
    • v35.1.2 (2022-12-29): full changelog at https://github.com/tj-actions/changed-files/compare/v35...v35.1.2
    • v35.1.1: full changelog at https://github.com/tj-actions/changed-files/compare/v35...v35.1.1
    • v35 (2023-01-02): fixed "[BUG] On pull_request_review error" (#906); closed "Dependency Dashboard" (#27)

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies github_actions 
    opened by dependabot[bot] 0
  • Make the HTTP request queue per-load balancer

    Make the HTTP request queue per-load balancer

    • Update the outbound logical HTTP stack to only use buffering around load balancers.
    • Remove profiles::split in favor of the (cloneable) distribute
    • Move routers from profiles to outbound
    • Routers are cloneable, use shared balancers.
    opened by olix0r 1
  • allow port ranges in opaque ports environment variable

    allow port ranges in opaque ports environment variable

    Depends on #2079

    Currently, the LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION environment variable accepts a comma-delimited list of individual port numbers. Each of these ports is then stored as a separate cache entry in the port policy cache to indicate that the port is opaque.

    Unfortunately, this does not work well when a very large number of ports are marked as opaque, because the proxy-injector will generate a massive value for that environment variable, listing every individual port. This can cause issues when the manifest becomes larger than the Kubernetes API can reasonably handle. In addition, the proxy keeps every one of those ports in memory as a separate cache entry, increasing proxy memory usage. See linkerd/linkerd2#9803 for details.

    This branch changes the proxy so that the LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION environment variable may contain a list of individual port numbers and port ranges, which are specified as <low>-<high>.

    Opaque ports are now stored using a RangeInclusiveSet from the rangemap crate, rather than by creating individual cache entries for each port. This means we are no longer storing a separate entry in the cache for every port in a range, reducing memory consumption when there are very large ranges of opaque ports.
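
    A minimal sketch of the parsing and storage idea, assuming the rangemap crate (parse_opaque_ports is a hypothetical name, not the proxy's actual parser):

        use rangemap::RangeInclusiveSet;

        // Individual ports and <low>-<high> ranges go into one range set, so a
        // range like 1024-65535 costs a single entry instead of tens of
        // thousands. (rangemap panics on an inverted range; real code would
        // validate low <= high first.)
        fn parse_opaque_ports(s: &str) -> Result<RangeInclusiveSet<u16>, std::num::ParseIntError> {
            let mut ports = RangeInclusiveSet::new();
            for part in s.split(',').map(str::trim).filter(|p| !p.is_empty()) {
                match part.split_once('-') {
                    Some((lo, hi)) => ports.insert(lo.trim().parse()?..=hi.trim().parse()?),
                    None => {
                        let port: u16 = part.parse()?;
                        ports.insert(port..=port);
                    }
                }
            }
            Ok(ports)
        }

    A lookup like ports.contains(&3306) then answers "is this port opaque?" against the set of ranges rather than a per-port cache entry.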

    This is the proxy half of the work towards resolving linkerd/linkerd2#9803. Once this branch lands, we'll also have to change the proxy-injector so that it no longer handles opaque port ranges by generating a list of all the individual ports in the range, and instead simply forwards the ranges as they were specified when generating the environment variable.

    opened by hawkw 0
  • remove fixed inbound policy mode

    remove fixed inbound policy mode

    Currently, the inbound proxy is capable of operating in two modes: a fixed mode, where no policy controller address is provided, and inbound port policies are configured by the environment variables LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION, LINKERD2_PROXY_INBOUND_PORTS_REQUIRE_IDENTITY, and LINKERD2_PROXY_INBOUND_PORTS_REQUIRE_TLS; and a discovery-based mode, where port policies are discovered from the policy controller. In practice, the fixed mode is never actually used, so we probably ought to remove it entirely in 2.13.

    This branch removes the fixed mode for inbound port policy. Starting the proxy without the LINKERD2_PROXY_POLICY_SVC_ADDR and LINKERD2_PROXY_POLICY_SVC_IDENTITY environment variables set is now an error. If the LINKERD2_PROXY_INBOUND_PORTS_REQUIRE_IDENTITY or LINKERD2_PROXY_INBOUND_PORTS_REQUIRE_TLS environment variables are present, the proxy will now log a warning and ignore their values. The LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION environment variable is still supported, however, in order to allow the proxy to avoid performing policy controller lookups when configured with a large list of opaque ports.
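
    A sketch of the new startup requirement (the variable names come from the description above; the error handling is illustrative):

        use std::env;

        // The policy controller address and identity must now both be set;
        // starting without either is an error.
        fn required_policy_config() -> Result<(String, String), String> {
            let addr = env::var("LINKERD2_PROXY_POLICY_SVC_ADDR")
                .map_err(|_| "LINKERD2_PROXY_POLICY_SVC_ADDR must be set".to_string())?;
            let identity = env::var("LINKERD2_PROXY_POLICY_SVC_IDENTITY")
                .map_err(|_| "LINKERD2_PROXY_POLICY_SVC_IDENTITY must be set".to_string())?;
            Ok((addr, identity))
        }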

    The Store::spawn_fixed method on linkerd_app_inbound::policy::Store is no longer used when running the proxy normally, but it is retained for testing purposes, and has been made test-only.

    Finally, it was necessary to add a mock implementation of the policy controller for integration tests to pass, since a policy controller is now required. This is based on the implementation of a mock policy controller added in #1992, but without the outbound policy API. PR #1992 should be able to rebase cleanly once this branch merges.

    Closes #2076

    opened by hawkw 1
  • populate policy cache with defaults configured by env vars

    populate policy cache with defaults configured by env vars

    Currently, the proxy-injector may set the following environment variables to configure the policies that the proxy should use for specific inbound ports:

    • LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION
    • LINKERD2_PROXY_INBOUND_PORTS_REQUIRE_IDENTITY
    • LINKERD2_PROXY_INBOUND_PORTS_REQUIRE_TLS

    However, when the proxy is configured with a policy service address, these environment variables are never used, even when they are set. Instead, the proxy will always look up those ports with the policy controller, which will then tell it to do...what the environment variable would have already told it to do (because the env vars are generated based on the same annotations that would tell the policy controller what policy to send). This means we do unnecessary policy resolutions for those ports.

    This branch updates the proxy so that these environment variables are always honored, even when a policy controller is configured. This way, the unnecessary lookups are avoided. Critically, we should note that we will never do a policy lookup for a port that is configured by one of these environment variables.

    If a port is present in both LINKERD2_PROXY_INBOUND_PORTS (the list of ports for which we proactively start policy controller watches), as well as configured by one of the policy env vars, we will still look that port up with the policy controller, rather than using the env var configuration. A warning is logged in this case.
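
    The lookup decision reduces to something like the following sketch, with hypothetical arguments:

        use std::collections::HashSet;

        // Ports in LINKERD2_PROXY_INBOUND_PORTS are always looked up with the
        // policy controller; ports configured only by the policy env vars are
        // served from the default cache without a lookup.
        fn needs_policy_lookup(port: u16, proactive: &HashSet<u16>, env_defaults: &HashSet<u16>) -> bool {
            proactive.contains(&port) || !env_defaults.contains(&port)
        }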

    opened by hawkw 0