web3-proxy

Overview

Web3-proxy is a fast caching and load balancing proxy for web3 (Ethereum or similar) JsonRPC servers.

Signed transactions (eth_sendRawTransaction) are sent in parallel to the configured private RPCs (eden, ethermine, flashbots, etc.).

All other requests are sent to an RPC server on the latest block (alchemy, moralis, rivet, your own node, or one of many other providers). If multiple servers are in sync, they are prioritized by active_requests/soft_limit. Note that this means that the fastest server is most likely to serve requests and slow servers are unlikely to ever get any requests.
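The prioritization above can be sketched as follows. This is a minimal illustration of the active_requests/soft_limit heuristic; the struct and function names are hypothetical, not the actual web3-proxy types:

```rust
// Sketch of the load-balancing heuristic described above: among the
// in-sync servers, pick the one with the lowest
// active_requests / soft_limit ratio, so lightly loaded and
// high-capacity servers are chosen first.
// (Names here are illustrative, not the real web3-proxy types.)
struct Backend {
    name: &'static str,
    active_requests: u32,
    soft_limit: u32,
}

fn pick_backend<'a>(in_sync: &'a [Backend]) -> Option<&'a Backend> {
    in_sync.iter().min_by(|a, b| {
        let load_a = a.active_requests as f64 / a.soft_limit as f64;
        let load_b = b.active_requests as f64 / b.soft_limit as f64;
        load_a.partial_cmp(&load_b).expect("loads are finite")
    })
}
```

Because the comparison is a ratio, a server with a large soft_limit keeps winning until its active request count catches up, which is why slow (low soft_limit) servers rarely see traffic.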

Each server has different limits to configure. The soft_limit is the number of parallel active requests at which a server starts to slow down. The hard_limit is the level at which a server starts returning rate-limit or other errors.
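A hedged sketch of what such a per-server entry might look like in the TOML config. Only soft_limit and hard_limit are taken from the description above; the table name, url, and everything else are illustrative assumptions, not the actual web3-proxy schema (see config/example.toml for the real format):

```toml
# Illustrative only -- consult config/example.toml for the real schema.
[balanced_rpcs.local_node]
url = "ws://127.0.0.1:8546"
soft_limit = 200    # parallel requests where this server starts to slow down
hard_limit = 1000   # parallel requests where this server starts rate limiting or erroring
```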

$ cargo run --release -p web3-proxy -- --help
   Compiling web3-proxy v0.1.0 (/home/bryan/src/web3-proxy/web3-proxy)
    Finished release [optimized + debuginfo] target(s) in 17.69s
     Running `target/release/web3-proxy --help`
Usage: web3-proxy [--port <port>] [--workers <workers>] [--config <config>]

Web3-proxy is a fast caching and load balancing proxy for web3 (Ethereum or similar) JsonRPC servers.

Options:
  --port            what port the proxy should listen on
  --workers         number of worker threads
  --config          path to a toml of rpc servers
  --help            display usage information

Start the server with the defaults (listen on http://localhost:8544 and use ./config/example.toml, which proxies to a local websocket on 8546 and Ankr's public ETH node):

cargo run --release -p web3-proxy

Check that the proxy is working:

curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"web3_clientVersion","id":1}' 127.0.0.1:8544

You can copy config/example.toml to config/production-$CHAINNAME.toml and then run docker-compose up --build -d to start proxies for many chains.

Flame Graphs

Flame graphs make finding slow code painless:

$ cat /proc/sys/kernel/kptr_restrict
1
$ echo 0 | sudo tee /proc/sys/kernel/kptr_restrict
0
$ CARGO_PROFILE_RELEASE_DEBUG=true cargo flamegraph

GDB

Run the proxy under gdb for advanced debugging:

cargo build --release && RUST_LOG=web3_proxy=debug rust-gdb --args target/release/web3-proxy --listen-port 7503 --rpc-config-path ./config/production-eth.toml

Load Testing

Test the proxy:

wrk -s ./data/wrk/getBlockNumber.lua -t12 -c400 -d30s --latency http://127.0.0.1:8544
wrk -s ./data/wrk/getLatestBlockByNumber.lua -t12 -c400 -d30s --latency http://127.0.0.1:8544

Test geth:

wrk -s ./data/wrk/getBlockNumber.lua -t12 -c400 -d30s --latency http://127.0.0.1:8545
wrk -s ./data/wrk/getLatestBlockByNumber.lua -t12 -c400 -d30s --latency http://127.0.0.1:8545

Test erigon:

wrk -s ./data/wrk/getBlockNumber.lua -t12 -c400 -d30s --latency http://127.0.0.1:8945
wrk -s ./data/wrk/getLatestBlockByNumber.lua -t12 -c400 -d30s --latency http://127.0.0.1:8945

Note: Testing with getLatestBlockByNumber.lua is less meaningful because the latest block keeps changing, so one run is likely to be very different from another.


Comments
  • investigate fee estimation being low

    A user reported that they had several transactions run out of gas after switching to LlamaNodes. Investigate if they just got unlucky or if there are some flags we can tune. I have a suspicion this is just a coincidence.

    I have had gas estimation be low using nodes before. Brownie defaults to increasing estimates by 20% or 30% because this was such a common problem.

    opened by WyseNynja 4
  • double check archive stat dashboard

    https://eth.llamarpc.com/user/stats/detailed

    Make sure archive requests are being grouped properly. The archive flag should be a bool, not an int; or, if it is an int, it should be joined with the non-archive requests.

    opened by WyseNynja 1
  • Handle very large, very slow responses

    I tried to run a debug_traceBlockByNumber using this tool connected to Erigon over WebSocket and this is the result:

    2022-06-20T16:33:54.300492Z DEBUG proxy_web3_rpc: web3_proxy::app: Received request: Single(JsonRpcRequest { id: RawValue(1), method: "debug_traceBlockByNumber", .. })
    2022-06-20T16:34:04.306695Z  INFO block_receiver: web3_proxy::connections: new head: 0xb685…c903 rpc=Web3Connection { url: "ws://192.168.10.50:8544", .. } new_block_num=14997562
    2022-06-20T16:34:04.306775Z DEBUG block_receiver: web3_proxy::connections: all head: 0xb685…c903 rpc=Web3Connection { url: "ws://192.168.10.50:8544", .. } new_block_num=14997562
    2022-06-20T16:34:05.584122Z  INFO block_receiver: web3_proxy::connections: new head: 0xa4c2…1217 rpc=Web3Connection { url: "ws://192.168.10.50:8544", .. } new_block_num=14997563
    2022-06-20T16:34:05.584163Z DEBUG block_receiver: web3_proxy::connections: all head: 0xa4c2…1217 rpc=Web3Connection { url: "ws://192.168.10.50:8544", .. } new_block_num=14997563
    2022-06-20T16:34:05.584188Z  INFO block_receiver: web3_proxy::connections: new head: 0xc373…8483 rpc=Web3Connection { url: "ws://192.168.10.50:8544", .. } new_block_num=14997564
    2022-06-20T16:34:05.584201Z DEBUG block_receiver: web3_proxy::connections: all head: 0xc373…8483 rpc=Web3Connection { url: "ws://192.168.10.50:8544", .. } new_block_num=14997564
    2022-06-20T16:34:12.572834Z  INFO block_receiver: web3_proxy::connections: new head: 0xdf46…b691 rpc=Web3Connection { url: "ws://192.168.10.50:8544", .. } new_block_num=14997565
    2022-06-20T16:34:12.572867Z DEBUG block_receiver: web3_proxy::connections: all head: 0xdf46…b691 rpc=Web3Connection { url: "ws://192.168.10.50:8544", .. } new_block_num=14997565
    2022-06-20T16:34:54.303245Z  WARN web3_proxy::frontend::errors: Responding with error: JsonRpcForwardedResponse { id: RawValue(null), .. }
    2022-06-20T16:35:11.120685Z ERROR ethers_providers::transports::ws: err=Protocol(ResetWithoutClosingHandshake)
    2022-06-20T16:35:11.120834Z ERROR ethers_providers::transports::ws: WebSocket connection closed unexpectedly
    2022-06-20T16:35:11.121147Z  WARN subscribe_pending_transactions: web3_proxy::connection: subscription ended
    2022-06-20T16:35:11.121173Z  WARN subscribe_new_heads: web3_proxy::connection: subscription ended
    2022-06-20T16:35:11.121275Z  WARN web3_proxy::connections: block_receiver exited!
    2022-06-20T16:35:11.121334Z  INFO web3_proxy::connections: subscriptions over: Web3Connections { inner: [Web3Connection { url: "ws://192.168.10.50:8544", .. }], .. }
    2022-06-20T16:35:11.121405Z  INFO web3_proxy: app_handle exited x=Ok(())
    

    On the cURL side:

    curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"debug_traceBlockByNumber","params":["latest"],"id":1}' 127.0.0.1:8544
    {"jsonrpc":"2.0","id":null,"error":{"code":-32099,"message":"deadline has elapsed"}}
    

    I then tried with a specific block number just so it would be reproducible, and this time the issue was with the response size specifically:

    2022-06-20T16:39:33.588660Z DEBUG proxy_web3_rpc: web3_proxy::app: Received request: Single(JsonRpcRequest { id: RawValue(1), method: "debug_traceBlockByNumber", .. })
    2022-06-20T16:39:38.092209Z ERROR ethers_providers::transports::ws: err=Capacity(MessageTooLong { size: 526286051, max_size: 16777216 })
    2022-06-20T16:39:38.092242Z ERROR ethers_providers::transports::ws: WebSocket connection closed unexpectedly
    2022-06-20T16:39:38.092393Z  WARN subscribe_new_heads: web3_proxy::connection: subscription ended
    2022-06-20T16:39:38.092462Z  WARN proxy_web3_rpc: web3_proxy::connections: Backend server error! self=Web3Connections { inner: [Web3Connection { url: "ws://192.168.10.50:8544", .. }], .. } e=oneshot canceled
    2022-06-20T16:39:38.092496Z  WARN subscribe_pending_transactions: web3_proxy::connection: subscription ended
    2022-06-20T16:39:38.092532Z  WARN web3_proxy::connections: block_receiver exited!
    2022-06-20T16:39:38.092569Z  INFO web3_proxy::connections: subscriptions over: Web3Connections { inner: [Web3Connection { url: "ws://192.168.10.50:8544", .. }], .. }
    2022-06-20T16:39:38.092774Z  INFO web3_proxy: app_handle exited x=Ok(())
    
    curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"debug_traceBlockByNumber","params":["0xE4D849"],"id":1}' 127.0.0.1:8544
    curl: (52) Empty reply from server
    

    Testing Erigon directly with websocat does work.
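    For context, the MessageTooLong error in the log above comes from a client-side cap on websocket message size (max_size: 16777216 bytes = 16 MiB). The guard behaves roughly like the following sketch (illustrative standard-library Rust, not the actual ethers/tungstenite implementation):

```rust
// Illustrative sketch of the kind of size guard behind the
// MessageTooLong error in the log above; NOT the actual
// ethers/tungstenite code.
const MAX_MESSAGE_SIZE: usize = 16 * 1024 * 1024; // 16 MiB, matching max_size in the log

#[derive(Debug, PartialEq)]
enum WsError {
    MessageTooLong { size: usize, max_size: usize },
}

fn check_message_size(payload: &[u8]) -> Result<(), WsError> {
    if payload.len() > MAX_MESSAGE_SIZE {
        // A 500+ MB trace response, as in the log, trips this check
        // and the websocket connection is torn down.
        return Err(WsError::MessageTooLong {
            size: payload.len(),
            max_size: MAX_MESSAGE_SIZE,
        });
    }
    Ok(())
}
```

    The 526286051-byte debug_traceBlockByNumber response is far over the limit, so the connection drops before the proxy can forward anything, and curl sees an empty reply.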

    opened by quickchase 2