A collection of lower-level libraries for composable network services.

Overview

Actix Net

Example

See actix-server/examples and actix-tls/examples for some basic examples.

MSRV

This repo's Minimum Supported Rust Version (MSRV) is 1.46.0.

License

The crates in this repo are licensed under either of:

• Apache License, Version 2.0
• MIT license

at your option.

Code of Conduct

Contribution to the actix-net repo is organized under the terms of the Contributor Covenant. The Actix team promises to intervene to uphold that code of conduct.

Comments
  • actix-rt: leaking memory in every request

    actix-rt: leaking memory in every request

    I just compiled the example from GitHub and ran Apache Bench (ab -n 1000000 -c 64 http://127.0.0.1:8001/test/john/index.html), and memory was constantly growing. After 1 million requests it was at 3.1 GB of heap.

    I could reproduce the same on my Catalina MacBook Pro and on a Linux server. It's as if every request leaks a few KB of memory.

    opened by Lesiuk 30
  • Connection not closed properly

    Connection not closed properly

    I've been using the 1.x version of actix-web for months and have had to restart my app every now and then (sometimes after minutes, sometimes after days): a lot of ESTABLISHED connections are left hanging, eventually causing a "too many open files" error (I've already increased the limit drastically). I'm running my server with keep-alive disabled; the rest of the settings are the defaults. I have since tried upgrading to 2.0.0 to see if it solves the problem, but it's the same thing.

    The service itself gets around 500-1000 requests per second in production currently.

    opened by orangesoup 29
  • Tracking issue for std::future migration

    Tracking issue for std::future migration

    This issue tracks the migration.

    Crates migrated:

    • [x] - actix-threadpool #46
    • [x] - actix-rt #47
    • [x] - actix-codec #48
    • [x] - actix-service (discussion needed) #57
    • [x] - actix-server-config (Needs bugfixing)
    • [x] - actix-server (Needs bugfixing)
    • [x] - actix-utils
    • [x] - actix-connect
    • [x] - actix-ioframe

    note: these are in rough dependency order

    Decisions:

    1. The old Future trait had Item and Error associated types. This made it similar to the Result type, and thus every function that returned a future was fallible. The new Future has only one associated type, Output, which denotes an infallible result.

      Several places return a future whose result is not used anywhere, or whose error type is (). Which of these occurrences should be replaced with Future<Output=Item>, and which with Future<Output=Result<Item,Error>>?

    2. Usage of Pin<&mut Self> in places similar to Future::poll, i.e. Service::poll_ready and ActorFuture::poll in actix.

      There is a reason for using these Pins in the poll method; should we upgrade our definitions of Future-like traits to use them too? Or maybe not, since they were primarily introduced to support await in the form of generators. We need more input from qualified people.

    3. Usage and form of macros / functions to ease pinning and unpinning.

      I am currently working on actix-service, and it is a massive chore to always create an unsafe block in order to create or destructure a Pin<&mut T>. This can sometimes be solved by pin projections from the pin-utils crate, but those do not allow splitting borrows (creating multiple pins to multiple fields of a struct), which is a massive pain.
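
    For illustration, the split-borrow limitation described in point 3 can be worked around with a hand-written projection. A minimal std-only sketch (this is not actix-service's code; the Join type is hypothetical):

```rust
use std::future::{ready, Future};
use std::pin::Pin;
use std::ptr;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A combinator holding two futures; polling it needs pinned access to
// *both* fields at once, which pin-utils' projection macro can't give.
struct Join<A, B> {
    a: A,
    b: B,
}

impl<A, B> Join<A, B> {
    // Manual projection: sound as long as we never move `a` or `b`
    // out of the pinned struct. This is the "split borrow".
    fn project(self: Pin<&mut Self>) -> (Pin<&mut A>, Pin<&mut B>) {
        unsafe {
            let this = self.get_unchecked_mut();
            (Pin::new_unchecked(&mut this.a), Pin::new_unchecked(&mut this.b))
        }
    }
}

impl<A: Future<Output = ()>, B: Future<Output = ()>> Future for Join<A, B> {
    type Output = ();
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        // Both fields borrowed (pinned) at the same time.
        let (a, b) = self.project();
        // NOTE: a real join must not re-poll already-completed futures;
        // that bookkeeping is elided to keep the projection the focus.
        match (a.poll(cx), b.poll(cx)) {
            (Poll::Ready(()), Poll::Ready(())) => Poll::Ready(()),
            _ => Poll::Pending,
        }
    }
}

// Minimal no-op waker so the sketch runs without an executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(ptr::null(), &VTABLE)) }
}

fn main() {
    let mut j = Box::pin(Join { a: ready(()), b: ready(()) });
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    assert!(matches!(j.as_mut().poll(&mut cx), Poll::Ready(())));
}
```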

    enhancement help wanted 
    opened by semtexzv 17
  • WIP: Convert actix-service to use RefCell instead of UnsafeCell

    WIP: Convert actix-service to use RefCell instead of UnsafeCell

    Hi, this is a first PoC implementation to gather feedback and make sure I am on the right path.

    For a smooth conversion I have added AXCell, which is similar to the former Cell but uses RefCell under the hood. I also added a couple of concurrency tests, with delays on the first and second service, to validate that there are never two simultaneous mutable borrows, which would lead to a panic with RefCell.
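
    A minimal sketch of such a RefCell-backed shared cell (the AXCell name is from this PR, but the API shown here is an illustrative guess, not the PR's exact code):

```rust
use std::cell::{RefCell, RefMut};
use std::rc::Rc;

// Shared single-threaded cell, analogous to the old UnsafeCell-based
// Cell but with runtime borrow checking instead of unsafe code.
struct AXCell<T>(Rc<RefCell<T>>);

impl<T> AXCell<T> {
    fn new(value: T) -> Self {
        AXCell(Rc::new(RefCell::new(value)))
    }

    fn clone(&self) -> Self {
        AXCell(Rc::clone(&self.0))
    }

    // Panics if a mutable borrow is already active, which is exactly
    // the condition the PR's concurrency tests try to rule out.
    fn get_mut(&self) -> RefMut<'_, T> {
        self.0.borrow_mut()
    }
}

fn main() {
    let a = AXCell::new(0u32);
    let b = a.clone();
    *a.get_mut() += 1;
    *b.get_mut() += 1; // sequential borrows are fine
    assert_eq!(*a.get_mut(), 2);
}
```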

    In benchmarks, the pipeline with the new AndThen performs on par with the UnsafeCell baseline implementation:

    AndThen with UnsafeCell #2              time:   [54.157 ns 54.850 ns 55.605 ns]
    AndThen with RefCell #2                 time:   [54.976 ns 55.624 ns 56.353 ns]                                    
    Pipeline::and_then based Rc<RefCell>    time:   [54.696 ns 55.260 ns 55.866 ns]
    

    Please let me know if this approach is OK.

    opened by dunnock 15
  • fix "Can not register server socket. The parameter is incorrect." on Windows

    fix "Can not register server socket. The parameter is incorrect." on Windows

    PR Type

    Bug Fix

    PR Checklist

    Check your PR fulfills the following:

    • [ ] Tests for the changes have been added / updated.
    • [ ] Documentation comments have been added / updated.
    • [ ] A changelog entry has been made for the appropriate packages.
    • [x] Format code with the latest stable rustfmt

    Overview

    This should fix another place where this issue has occurred.

    Closes #221

    opened by danylaporte 13
  • Refactor LocalWaker

    Refactor LocalWaker

    ~Simplify LocalWaker register method with replace mutable pointer method~

    Refactor LocalWaker: is_registered has no real use, because the only use of the registration is to wake up the task it refers to, and wake does not need the registration status; the behavior is the same whether a task is registered or not. So it has been removed in favor of wake. Without this method the UnsafeCell is no longer necessary, so I have proposed a later refactor from UnsafeCell to Cell.
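
    The Cell-based design described above can be sketched roughly like this (illustrative, not the PR's actual code):

```rust
use std::cell::Cell;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::task::{Wake, Waker};

// Single-threaded waker slot: register replaces any previous waker,
// and wake consumes whatever is registered (a no-op when the slot is
// empty), so no is_registered query is needed.
#[derive(Default)]
struct LocalWaker {
    waker: Cell<Option<Waker>>,
}

impl LocalWaker {
    fn register(&self, waker: &Waker) {
        self.waker.set(Some(waker.clone()));
    }

    fn wake(&self) {
        if let Some(waker) = self.waker.take() {
            waker.wake();
        }
    }
}

// Test waker that records whether it was woken.
struct Flag(AtomicBool);

impl Wake for Flag {
    fn wake(self: Arc<Self>) {
        self.0.store(true, Ordering::SeqCst);
    }
}

fn main() {
    let flag = Arc::new(Flag(AtomicBool::new(false)));
    let lw = LocalWaker::default();
    lw.wake(); // empty slot: same behavior, simply does nothing
    lw.register(&Waker::from(flag.clone()));
    lw.wake();
    assert!(flag.0.load(Ordering::SeqCst));
}
```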

    opened by botika 12
  • Limit of max_connections is not enforced on backlog

    Limit of max_connections is not enforced on backlog

    Here's an example service with max_connections set to 2 and backlog of 16 (actix-web v3.3.0):

    const MAX_CONNECTIONS: usize = 2;
    const BACKLOG: i32 = 16;
    const WORKERS: usize = 1;
    
    const ADDR: &str = "127.0.0.1:8888";
    
    #[actix_web::main]
    async fn main() -> std::io::Result<()> {
        simplelog::CombinedLogger::init(vec![simplelog::TermLogger::new(
            simplelog::LevelFilter::Info,
            simplelog::Config::default(),
            simplelog::TerminalMode::Stderr,
        )])
        .expect("logging configuration failed");
    
        log::info!(
            "Running server with {} workers / {} max connections / {} listen backlog on {}",
            WORKERS,
            MAX_CONNECTIONS,
            BACKLOG,
            ADDR
        );
    
        actix_web::HttpServer::new(move || {
            actix_web::App::new().route("/", actix_web::web::get().to(greet))
        })
        .on_connect(move |_, _| {
            log::info!(
                "number of open fds = {}",
                procfs::process::Process::myself()
                    .expect("creating procfs reader failed")
                    .fd_count()
                    .expect("getting fd count failed")
            )
        })
        .max_connections(MAX_CONNECTIONS)
        .backlog(BACKLOG)
        .workers(WORKERS)
        .bind(ADDR)?
        .run()
        .await
    }
    
    async fn greet() -> impl actix_web::Responder {
        "howdy!\n"
    }
    

    One can run it:

    $ cargo run
    

    And then use the following Go program to create one connection every 100ms without sending any requests:

    package main
    
    import (
    	"log"
    	"net"
    	"time"
    )
    
    func main() {
    	i := 0
    	for {
    		i++
    
    		_, err := net.Dial("tcp", "127.0.0.1:8888")
    		if err != nil {
    			log.Fatalf("Connection #%d failed: %s", i, err)
    		}
    
    		log.Printf("Connection #%d succeeded", i)
    
    		time.Sleep(time.Millisecond * 100)
    	}
    }
    

    The Go program will happily churn along until it hits the ulimit.

    What I expect from actix is to never have more than 2 (limit) + 16 (backlog) connections open at any point, which corresponds to 22 open file descriptors (let's say under 30 for simplicity). What I see is that the number gets a lot higher:

    01:32:00 [INFO] Running server with 1 workers / 2 max connections / 16 listen backlog on 127.0.0.1:8888
    01:32:00 [INFO] Starting 1 workers
    01:32:00 [INFO] Starting "actix-web-service-127.0.0.1:8888" service on 127.0.0.1:8888
    01:32:06 [INFO] number of open fds = 21
    01:32:06 [INFO] number of open fds = 22
    01:32:11 [INFO] number of open fds = 23
    01:32:11 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:32:11 [INFO] number of open fds = 22
    01:32:16 [INFO] number of open fds = 39
    01:32:16 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:32:16 [INFO] number of open fds = 38
    01:32:21 [INFO] number of open fds = 54
    01:32:21 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:32:21 [INFO] number of open fds = 53
    01:32:26 [INFO] number of open fds = 69
    01:32:26 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:32:26 [INFO] number of open fds = 68
    01:32:31 [INFO] number of open fds = 79
    01:32:31 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:32:31 [INFO] number of open fds = 78
    01:32:36 [INFO] number of open fds = 94
    01:32:36 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:32:36 [INFO] number of open fds = 93
    01:32:41 [INFO] number of open fds = 103
    01:32:41 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:32:41 [INFO] number of open fds = 102
    01:32:46 [INFO] number of open fds = 118
    01:32:46 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:32:46 [INFO] number of open fds = 117
    01:32:51 [INFO] number of open fds = 128
    01:32:51 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:32:51 [INFO] number of open fds = 127
    01:32:56 [INFO] number of open fds = 143
    01:32:56 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:32:56 [INFO] number of open fds = 142
    01:33:01 [INFO] number of open fds = 153
    01:33:01 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:33:01 [INFO] number of open fds = 152
    01:33:06 [INFO] number of open fds = 168
    01:33:06 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:33:06 [INFO] number of open fds = 167
    01:33:11 [INFO] number of open fds = 177
    01:33:11 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:33:11 [INFO] number of open fds = 176
    01:33:16 [INFO] number of open fds = 192
    01:33:16 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:33:16 [INFO] number of open fds = 191
    01:33:21 [INFO] number of open fds = 202
    01:33:21 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:33:21 [INFO] number of open fds = 201
    01:33:26 [INFO] number of open fds = 217
    01:33:26 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:33:26 [INFO] number of open fds = 216
    01:33:31 [INFO] number of open fds = 227
    01:33:31 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:33:31 [INFO] number of open fds = 226
    01:33:36 [INFO] number of open fds = 242
    01:33:36 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:33:36 [INFO] number of open fds = 241
    01:33:41 [INFO] number of open fds = 251
    01:33:41 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:33:41 [INFO] number of open fds = 250
    01:33:46 [INFO] number of open fds = 266
    01:33:46 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:33:46 [INFO] number of open fds = 265
    01:33:51 [INFO] number of open fds = 276
    01:33:51 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:33:51 [INFO] number of open fds = 275
    01:33:56 [INFO] number of open fds = 291
    01:33:56 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:33:56 [INFO] number of open fds = 290
    01:34:01 [INFO] number of open fds = 300
    01:34:01 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:34:01 [INFO] number of open fds = 299
    01:34:06 [INFO] number of open fds = 315
    01:34:06 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:34:06 [INFO] number of open fds = 314
    01:34:11 [INFO] number of open fds = 325
    01:34:11 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:34:11 [INFO] number of open fds = 324
    01:34:16 [INFO] number of open fds = 340
    01:34:16 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:34:16 [INFO] number of open fds = 339
    01:34:21 [INFO] number of open fds = 350
    01:34:21 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:34:21 [INFO] number of open fds = 349
    01:34:26 [INFO] number of open fds = 365
    01:34:26 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:34:26 [INFO] number of open fds = 364
    01:34:31 [INFO] number of open fds = 374
    01:34:31 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:34:31 [INFO] number of open fds = 373
    01:34:36 [INFO] number of open fds = 388
    01:34:36 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:34:36 [INFO] number of open fds = 388
    01:34:41 [INFO] number of open fds = 399
    01:34:41 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:34:41 [INFO] number of open fds = 398
    01:34:46 [INFO] number of open fds = 414
    01:34:46 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:34:46 [INFO] number of open fds = 413
    01:34:51 [INFO] number of open fds = 424
    01:34:51 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:34:51 [INFO] number of open fds = 423
    01:34:56 [INFO] number of open fds = 439
    01:34:56 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:34:56 [INFO] number of open fds = 438
    01:35:01 [INFO] number of open fds = 448
    01:35:01 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:35:01 [INFO] number of open fds = 447
    01:35:06 [INFO] number of open fds = 463
    01:35:06 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:35:06 [INFO] number of open fds = 462
    01:35:11 [INFO] number of open fds = 472
    01:35:11 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:35:11 [INFO] number of open fds = 472
    01:35:16 [INFO] number of open fds = 488
    01:35:16 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:35:16 [INFO] number of open fds = 487
    01:35:21 [INFO] number of open fds = 498
    01:35:21 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:35:21 [INFO] number of open fds = 497
    01:35:26 [INFO] number of open fds = 513
    01:35:26 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:35:26 [INFO] number of open fds = 512
    01:35:31 [INFO] number of open fds = 522
    01:35:31 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:35:31 [INFO] number of open fds = 521
    01:35:36 [INFO] number of open fds = 537
    01:35:36 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:35:36 [INFO] number of open fds = 536
    01:35:41 [INFO] number of open fds = 547
    01:35:41 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:35:41 [INFO] number of open fds = 546
    01:35:46 [INFO] number of open fds = 562
    01:35:46 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:35:46 [INFO] number of open fds = 561
    01:35:51 [INFO] number of open fds = 572
    01:35:51 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:35:51 [INFO] number of open fds = 571
    01:35:56 [INFO] number of open fds = 587
    01:35:56 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:35:56 [INFO] number of open fds = 586
    01:36:01 [INFO] number of open fds = 596
    01:36:01 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:36:01 [INFO] number of open fds = 595
    01:36:06 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:36:06 [INFO] number of open fds = 615
    01:36:06 [INFO] number of open fds = 622
    01:36:11 [INFO] number of open fds = 621
    01:36:11 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:36:11 [INFO] number of open fds = 620
    01:36:16 [INFO] number of open fds = 636
    01:36:16 [INFO] Accepting connections on 127.0.0.1:8888 has been resumed
    01:36:16 [INFO] number of open fds = 635
    

    From what I can see, the backlog is drained faster than it should be. In the strace output one can see (this is the very beginning of the Go connection generation):

    $ sudo strace -Tt -f -p $(pidof actix-conn-limit) -e accept4,close
    strace: Process 5159 attached with 3 threads
    [pid  5405] 01:38:16 accept4(6, {sa_family=AF_INET, sin_port=htons(35126), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 19 <0.000042>
    [pid  5405] 01:38:16 accept4(6, 0x7fe321cf4550, [128], SOCK_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable) <0.000019>
    [pid  5404] 01:38:16 close(21)          = 0 <0.000023>
    [pid  5404] 01:38:16 close(20)          = 0 <0.000011>
    [pid  5404] 01:38:16 syscall_332(0, 0, 0, 0xfff, 0, 0x10) = -1 (errno 14) <0.000013>
    [pid  5404] 01:38:16 syscall_332(0xffffffffffffff9c, 0x7fe31c006b00, 0, 0xfff, 0x7fe321eee620, 0) = 0 <0.000015>
    [pid  5404] 01:38:16 close(20)          = 0 <0.000010>
    [pid  5405] 01:38:17 accept4(6, {sa_family=AF_INET, sin_port=htons(35128), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 20 <0.000018>
    [pid  5405] 01:38:17 accept4(6, 0x7fe321cf4550, [128], SOCK_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable) <0.000017>
    [pid  5404] 01:38:17 close(21)          = 0 <0.000013>
    [pid  5404] 01:38:17 syscall_332(0xffffffffffffff9c, 0x7fe31c006240, 0, 0xfff, 0x7fe321eee620, 0) = 0 <0.000025>
    [pid  5404] 01:38:17 close(21)          = 0 <0.000010>
    [pid  5405] 01:38:17 accept4(6, {sa_family=AF_INET, sin_port=htons(35130), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 21 <0.000029>
    [pid  5405] 01:38:17 accept4(6, 0x7fe321cf4550, [128], SOCK_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable) <0.000012>
    [pid  5405] 01:38:17 accept4(6, {sa_family=AF_INET, sin_port=htons(35132), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 22 <0.000017>
    [pid  5405] 01:38:17 accept4(6, 0x7fe321cf4550, [128], SOCK_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable) <0.000015>
    [pid  5404] 01:38:21 close(20)          = 0 <0.000056>
    [pid  5404] 01:38:21 close(20)          = 0 <0.000029>
    [pid  5404] 01:38:21 syscall_332(0xffffffffffffff9c, 0x7fe31c007320, 0, 0xfff, 0x7fe321eee620, 0) = 0 <0.000019>
    [pid  5404] 01:38:21 close(20)          = 0 <0.000026>
    [pid  5404] 01:38:21 close(19)          = 0 <0.000033>
    [pid  5404] 01:38:21 close(19)          = 0 <0.000033>
    [pid  5404] 01:38:21 syscall_332(0xffffffffffffff9c, 0x7fe31c02ae30, 0, 0xfff, 0x7fe321eee620, 0) = 0 <0.000018>
    [pid  5404] 01:38:21 close(19)          = 0 <0.000021>
    [pid  5405] 01:38:21 accept4(6, {sa_family=AF_INET, sin_port=htons(35134), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 19 <0.000039>
    [pid  5405] 01:38:21 accept4(6, {sa_family=AF_INET, sin_port=htons(35136), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 20 <0.000035>
    [pid  5405] 01:38:21 accept4(6, {sa_family=AF_INET, sin_port=htons(35138), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 23 <0.000021>
    [pid  5405] 01:38:21 accept4(6, {sa_family=AF_INET, sin_port=htons(35140), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 24 <0.000022>
    [pid  5405] 01:38:21 accept4(6, {sa_family=AF_INET, sin_port=htons(35142), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 25 <0.000020>
    [pid  5405] 01:38:21 accept4(6, {sa_family=AF_INET, sin_port=htons(35144), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 26 <0.000021>
    [pid  5405] 01:38:21 accept4(6, {sa_family=AF_INET, sin_port=htons(35146), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 27 <0.000021>
    [pid  5405] 01:38:21 accept4(6, {sa_family=AF_INET, sin_port=htons(35148), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 28 <0.000021>
    [pid  5405] 01:38:21 accept4(6, {sa_family=AF_INET, sin_port=htons(35150), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 29 <0.000021>
    [pid  5405] 01:38:21 accept4(6, {sa_family=AF_INET, sin_port=htons(35152), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 30 <0.000020>
    [pid  5405] 01:38:21 accept4(6, {sa_family=AF_INET, sin_port=htons(35154), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 31 <0.000020>
    [pid  5405] 01:38:21 accept4(6, {sa_family=AF_INET, sin_port=htons(35156), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 32 <0.000015>
    [pid  5405] 01:38:21 accept4(6, {sa_family=AF_INET, sin_port=htons(35158), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 33 <0.000017>
    [pid  5405] 01:38:21 accept4(6, {sa_family=AF_INET, sin_port=htons(35160), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 34 <0.000031>
    [pid  5405] 01:38:21 accept4(6, {sa_family=AF_INET, sin_port=htons(35162), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 35 <0.000018>
    [pid  5405] 01:38:21 accept4(6, {sa_family=AF_INET, sin_port=htons(35164), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 36 <0.000016>
    [pid  5405] 01:38:21 accept4(6, {sa_family=AF_INET, sin_port=htons(35166), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_CLOEXEC) = 37 <0.000015>
    [pid  5405] 01:38:21 accept4(6, 0x7fe321cf4550, [128], SOCK_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable) <0.000018>
    

    In my mind, max_connections should never allow more than 2 consecutive accept4 calls without a close between them.
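
    For illustration, the accounting I expect can be sketched as a counting guard around accept (this is a sketch of the expectation, not actix-server's implementation; names are hypothetical):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

const MAX_CONNECTIONS: usize = 2;

// Counter shared between the accept loop and connection handlers.
struct ConnCounter(AtomicUsize);

// Dropping the guard (i.e. closing the connection) frees the slot.
struct ConnGuard(Arc<ConnCounter>);

impl Drop for ConnGuard {
    fn drop(&mut self) {
        (self.0).0.fetch_sub(1, Ordering::SeqCst);
    }
}

// Reserve a slot. While this returns None, the accept loop should not
// call accept4 at all, leaving further clients queued in the kernel
// backlog (at most `backlog` of them).
fn try_acquire(counter: &Arc<ConnCounter>) -> Option<ConnGuard> {
    if counter.0.fetch_add(1, Ordering::SeqCst) >= MAX_CONNECTIONS {
        counter.0.fetch_sub(1, Ordering::SeqCst);
        None
    } else {
        Some(ConnGuard(Arc::clone(counter)))
    }
}

fn main() {
    let counter = Arc::new(ConnCounter(AtomicUsize::new(0)));
    let a = try_acquire(&counter).unwrap();
    let _b = try_acquire(&counter).unwrap();
    assert!(try_acquire(&counter).is_none()); // at the limit: stop accepting
    drop(a); // connection closed
    assert!(try_acquire(&counter).is_some()); // slot is available again
}
```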

    opened by bobrik 12
  • actix-rt: Make the process of running System in existing Runtime more clear

    actix-rt: Make the process of running System in existing Runtime more clear

    PR Type

    Other (Usability enhancement)

    PR Checklist

    Check your PR fulfills the following:

    • [x] Tests for the changes have been added / updated.
    • [x] Documentation comments have been added / updated.
    • [x] A changelog entry has been made for the appropriate packages.
    • [x] Format code with the latest stable rustfmt

    Overview

    When I first tried to start the actix-web server using an existing tokio Runtime, I honestly found the interface a bit unfriendly.

    In an attempt to make other folks' lives easier, I did the following:

    • Added doc-comments with an example and a note about Arbiters still using their own Runtime objects.
    • Added a new method which is less flexible but simpler to use to attach a System to a given Runtime.

    These changes aren't breaking and are hopefully useful.

    opened by popzxc 12
  • rework actix-threadpool

    rework actix-threadpool

    PR Type

    Refactor

    PR Checklist

    Check your PR fulfills the following:

    • [x] Tests for the changes have been added / updated.
    • [x] Documentation comments have been added / updated.
    • [x] A changelog entry has been made for the appropriate packages.
    • [x] Format code with the latest stable rustfmt

    Overview

    actix-threadpool spawns a fixed number of threads in a static pool that stays alive for the whole application lifetime. I feel this is overkill for the majority of use cases and may cause confusion.

    This PR reworks the pool to use a dynamic strategy with the following behavior:

    • Rename the worker thread from "actix-web" to "actix-threadpool-worker".
    • Start with 1 idle thread; when more jobs come in, spawn more threads until hitting the upper limit.
    • Any pool thread kept idle for 5 minutes de-spawns itself.

    This PR is still not complete and needs some help on tests and benchmarks.
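
    The spawn-on-demand idea can be sketched with std primitives only (illustrative, not this PR's code; the upper thread limit is elided, and a real pool would use a proper MPMC queue rather than a mutex-guarded receiver):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::mpsc::{channel, Receiver, RecvTimeoutError, Sender};
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

type Job = Box<dyn FnOnce() + Send + 'static>;

struct DynPool {
    tx: Sender<Job>,
    rx: Arc<Mutex<Receiver<Job>>>,
    idle: Arc<AtomicUsize>,
    timeout: Duration,
}

impl DynPool {
    fn new(timeout: Duration) -> Self {
        let (tx, rx) = channel();
        DynPool {
            tx,
            rx: Arc::new(Mutex::new(rx)),
            idle: Arc::new(AtomicUsize::new(0)),
            timeout,
        }
    }

    fn spawn(&self, job: Job) {
        self.tx.send(job).expect("pool receiver alive");
        // Spawn a new worker only when no idle worker can pick the job up.
        if self.idle.load(Ordering::SeqCst) == 0 {
            self.spawn_worker();
        }
    }

    fn spawn_worker(&self) {
        let rx = Arc::clone(&self.rx);
        let idle = Arc::clone(&self.idle);
        let timeout = self.timeout;
        thread::Builder::new()
            .name("actix-threadpool-worker".into())
            .spawn(move || loop {
                idle.fetch_add(1, Ordering::SeqCst);
                let job = rx.lock().unwrap().recv_timeout(timeout);
                idle.fetch_sub(1, Ordering::SeqCst);
                match job {
                    Ok(job) => job(),
                    // Idle too long (or pool dropped): despawn this worker.
                    Err(RecvTimeoutError::Timeout) | Err(RecvTimeoutError::Disconnected) => break,
                }
            })
            .expect("failed to spawn worker");
    }
}

fn main() {
    let pool = DynPool::new(Duration::from_millis(200));
    let (tx, rx) = channel();
    pool.spawn(Box::new(move || tx.send(42u32).unwrap()));
    assert_eq!(rx.recv_timeout(Duration::from_secs(5)).unwrap(), 42);
}
```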

    opened by fakeshadow 12
  • Migrate actix-service to std::future

    Migrate actix-service to std::future

    Originally from @semtexzv.

    The big picture, starting with actix-service:

    Using Pin<&mut Self> in poll_ready, since it usually polls futures; not in call, which we might want to change. There is an issue with the IntoFuture trait no longer being provided by futures, so I wrote a simple one and am using it throughout the codebase. This is just a first iteration and will require some work; also, the BoxedService impl currently does not work, since it polls a future in the call method.

    Decisions that we need to make:

    1. The old Future trait had Item and Error associated types. This made it similar to the Result type, and thus every function that returned a future was fallible. The new Future has only one associated type, Output, which denotes an infallible result.

    Several places return a future whose result is not used anywhere, or whose error type is (). Which of these occurrences should be replaced with Future<Output=Item>, and which with Future<Output=Result<Item,Error>>?

    2. Usage of Pin<&mut Self> in places similar to Future::poll, i.e. Service::poll_ready and ActorFuture::poll in actix.

    There is a reason for using these Pins in the poll method; should we upgrade our definitions of Future-like traits to use them too? Or maybe not, since they were primarily introduced to support await in the form of generators. We need more input from qualified people.

    3. Usage and form of macros / functions to ease pinning and unpinning.

    I am currently working on actix-service, and it is a massive chore to always create an unsafe block in order to create or destructure a Pin<&mut T>. This can sometimes be solved by pin projections from the pin-utils crate, but those do not allow splitting borrows (creating multiple pins to multiple fields of a struct), which is a massive pain.
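
    The distinction in decision 1 amounts to choosing between two service shapes. A sketch (these simplified traits are illustrative, not actix-service's real definitions):

```rust
use std::future::{ready, Future, Ready};
use std::pin::Pin;
use std::ptr;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// New model: fallibility is opt-in via Result in the Output type.
trait InfallibleService<Req> {
    type Response;
    type Future: Future<Output = Self::Response>;
    fn call(&mut self, req: Req) -> Self::Future;
}

#[allow(dead_code)]
trait FallibleService<Req> {
    type Response;
    type Error;
    type Future: Future<Output = Result<Self::Response, Self::Error>>;
    fn call(&mut self, req: Req) -> Self::Future;
}

// An echo service whose calls can never fail fits the first shape.
struct Echo;

impl InfallibleService<u32> for Echo {
    type Response = u32;
    type Future = Ready<u32>;
    fn call(&mut self, req: u32) -> Self::Future {
        ready(req)
    }
}

// Minimal no-op waker so the sketch runs without an executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = Echo.call(7);
    assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Ready(7));
}
```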

    enhancement help wanted question 
    opened by cdbattags 12
  • Start the std::future and tokio 0.2 migration

    Start the std::future and tokio 0.2 migration

    This PR contains the work to start migrating the actix ecosystem to std::future and tokio 0.2.

    Currently, the actix-rt, actix-codec and actix-threadpool libraries are done. A big part of this migration will be actix-service, so I invite you to help.

    @fafhrd91 I'd like you to create a separate branch, so this work can live in this repository instead of my fork.

    opened by semtexzv 12
  • how to close connection immediately?

    how to close connection immediately?

    If a client's connection appears to be an attack, how can I close it immediately and insert the address into a dynamic blacklist?

        let listener = TcpListener::bind(...).await?;
    
        loop {
            let (socket, addr) = listener.accept().await?;
    
            if blacklist.contains(&addr) {
                continue;
            }
    
            tokio::spawn(async move {
    
                ...
    
                if connection_is_attacking() {
                    blacklist.insert(addr);
                    connection.close_immediately();
                }
    
                ...
    
            });
        }
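
    For what it's worth, a TCP stream has no close_immediately method in std or tokio; dropping the stream closes the socket, and shutdown stops both directions explicitly. A std-based sketch of the idea from the question (connection_is_attacking is a hypothetical placeholder):

```rust
use std::collections::HashSet;
use std::net::{Shutdown, SocketAddr, TcpListener, TcpStream};

// Placeholder for real attack-detection logic.
fn connection_is_attacking(_stream: &TcpStream) -> bool {
    true
}

fn handle(stream: TcpStream, addr: SocketAddr, blacklist: &mut HashSet<SocketAddr>) {
    if connection_is_attacking(&stream) {
        blacklist.insert(addr);
        // Stop both directions right away; the fd itself is closed
        // when `stream` is dropped at the end of this scope.
        let _ = stream.shutdown(Shutdown::Both);
    }
}

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let _client = TcpStream::connect(listener.local_addr()?)?;
    let (stream, peer) = listener.accept()?;

    let mut blacklist = HashSet::new();
    handle(stream, peer, &mut blacklist);
    assert!(blacklist.contains(&peer));
    Ok(())
}
```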
    
    opened by lithbitren 0
  • PoC proxy protocol stream wrapper

    PoC proxy protocol stream wrapper

    PR Type

    INSERT_PR_TYPE

    PR Checklist

    Check your PR fulfills the following:

    • [ ] Tests for the changes have been added / updated.
    • [ ] Documentation comments have been added / updated.
    • [ ] A changelog entry has been made for the appropriate packages.
    • [ ] Format code with the latest stable rustfmt

    Overview

    opened by robjtede 0
  • add MPTCP socket protocol (optional)

    add MPTCP socket protocol (optional)

    PR Type

    Feature

    Overview

    Add the possibility to use the MPTCP protocol at the socket level for users of ServerBuilder.

    MPTCP has been more widely available since Linux kernel 5.6, but it still needs to be enabled manually with sysctl net.mptcp.enabled=1 (and, of course, MPTCP is only available on Linux).

    The new MPTCP struct gives the user the option to decide how to handle the case where MPTCP is not available on the host: either crash, or fall back to regular TCP.
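
    The fallback policy can be pictured like this (enum and function names are illustrative, not necessarily this PR's exact API):

```rust
// How to behave when the kernel lacks MPTCP support.
#[derive(Clone, Copy, Debug, PartialEq)]
enum MpTcp {
    Disabled,    // plain TCP, never try MPTCP
    TcpFallback, // try MPTCP, silently fall back to TCP
    NoFallback,  // try MPTCP, fail hard if unavailable
}

fn choose_protocol(mode: MpTcp, mptcp_available: bool) -> Result<&'static str, &'static str> {
    match (mode, mptcp_available) {
        (MpTcp::Disabled, _) => Ok("tcp"),
        (_, true) => Ok("mptcp"),
        (MpTcp::TcpFallback, false) => Ok("tcp"),
        (MpTcp::NoFallback, false) => Err("MPTCP requested but not available on this host"),
    }
}

fn main() {
    assert_eq!(choose_protocol(MpTcp::TcpFallback, false), Ok("tcp"));
    assert_eq!(choose_protocol(MpTcp::TcpFallback, true), Ok("mptcp"));
    assert!(choose_protocol(MpTcp::NoFallback, false).is_err());
}
```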

    opened by Martichou 1
  • impl Transform for Option

    impl Transform for Option

    This topic continues https://github.com/actix/actix-web/pull/2858 ; the whole goal is to .wrap an optional middleware. The Condition middleware exists but requires a transformer even when unused. It seems to me that using an Option is more natural. But since that PR was in a different crate than Transform's, a newtype had to be introduced, and .into() required an explicit annotation. Sadly, this was non-optimal.

    @fakeshadow suggested that this feature should rather be moved into Transform's crate, so that .wrap can natively take an Option<SomeMiddleware>. This would also mean moving the ConditionMiddleware code.
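
    The idea can be sketched with a drastically simplified Transform-like trait, where None behaves as the identity (this is a hypothetical simplification, not actix-service's real Transform):

```rust
// A simplified stand-in for actix-service's Transform: wraps one
// service function into another.
trait Wrap {
    fn wrap(&self, inner: Box<dyn Fn(i32) -> i32>) -> Box<dyn Fn(i32) -> i32>;
}

// None = identity transform, Some(t) = delegate to t; this is what
// lets `.wrap(maybe_middleware)` take an Option directly.
impl<T: Wrap> Wrap for Option<T> {
    fn wrap(&self, inner: Box<dyn Fn(i32) -> i32>) -> Box<dyn Fn(i32) -> i32> {
        match self {
            Some(t) => t.wrap(inner),
            None => inner,
        }
    }
}

// Example middleware: doubles the inner service's output.
struct Double;

impl Wrap for Double {
    fn wrap(&self, inner: Box<dyn Fn(i32) -> i32>) -> Box<dyn Fn(i32) -> i32> {
        Box::new(move |x| inner(x) * 2)
    }
}

fn main() {
    let svc: Box<dyn Fn(i32) -> i32> = Box::new(|x| x + 1);
    let wrapped = Some(Double).wrap(svc);
    assert_eq!(wrapped(3), 8); // (3 + 1) * 2

    let svc: Box<dyn Fn(i32) -> i32> = Box::new(|x| x + 1);
    let plain = None::<Double>.wrap(svc);
    assert_eq!(plain(3), 4); // identity: middleware skipped
}
```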

    opened by KoltesDigital 1