cargo-lambda is a Cargo subcommand to help you work with AWS Lambda.

Overview

cargo-lambda

The new subcommand creates a basic Rust package from a well-defined template to help you start writing AWS Lambda functions in Rust.

The build subcommand compiles AWS Lambda functions natively and produces artifacts that you can then upload to AWS Lambda or use with other ecosystem tools, like the SAM CLI or the AWS CDK.

The watch subcommand boots a development server that emulates interactions with the AWS Lambda control plane. This subcommand also reloads your Rust code as you work on it.

The invoke subcommand sends requests to the control plane emulator to test and debug interactions with your Lambda functions.

Installation

Install this subcommand on your host machine with Cargo itself:

cargo install cargo-lambda

Usage

New

The new command creates a new Rust package with a basic skeleton to help you start writing AWS Lambda functions with Rust. The package is created in a new subdirectory inside the directory where the command is invoked. Run cargo lambda new PACKAGE-NAME to generate your new package.

This command uses templates packaged as zip files or stored in local directories. The default template supports HTTP Lambda functions, as well as functions that receive events defined in the aws_lambda_events crate. You can provide your own template using the --template flag.

The files Cargo.toml, README.md, and src/main.rs in the template are parsed with Liquid to dynamically render different files based on a series of global variables. You can see all the variables in the source code.

After creating a new package, you can use the build command described below to compile the source code.
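The default event template generates a handler roughly along these lines. This is a sketch: the exact template contents vary by version, and the Request and Response types here are illustrative.

```rust
use lambda_runtime::{run, service_fn, Error, LambdaEvent};
use serde::{Deserialize, Serialize};

// Illustrative event payload; the real template may define different fields.
#[derive(Deserialize)]
struct Request {
    command: String,
}

// Illustrative response returned to the caller.
#[derive(Serialize)]
struct Response {
    req_id: String,
    msg: String,
}

// Handler invoked once per Lambda event.
async fn function_handler(event: LambdaEvent<Request>) -> Result<Response, Error> {
    Ok(Response {
        req_id: event.context.request_id,
        msg: format!("Command {} executed.", event.payload.command),
    })
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    run(service_fn(function_handler)).await
}
```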

Build

Within a Rust project that includes a Cargo.toml file, run the cargo lambda build command to natively compile the Lambda functions in the project. The resulting artifacts, such as binaries or zip files, are placed in the target/lambda directory. This is an example of the output produced by this command:

❯ tree target/lambda
target/lambda
├── delete-product
│   └── bootstrap
├── dynamodb-streams
│   └── bootstrap
├── get-product
│   └── bootstrap
├── get-products
│   └── bootstrap
└── put-product
    └── bootstrap

5 directories, 5 files

Build - Output Format

By default, cargo-lambda produces a binary artifact for each Lambda function in the project. However, you can configure cargo-lambda to produce a ready-to-upload zip artifact.

The --output-format parameter controls the output format. The two current options are zip and binary, with binary being the default.

Example usage to create a zip:

cargo lambda build --output-format zip

Build - Architectures

By default, cargo-lambda compiles the code for the Linux x86-64 architecture. You can compile for Linux ARM architectures by providing the right target:

cargo lambda build --target aarch64-unknown-linux-gnu

ℹ️ Starting in version 0.6.2, you can use the shortcut --arm64 to compile your functions for Linux ARM architectures:

cargo lambda build --arm64

Build - Compilation Profiles

By default, cargo-lambda compiles the code in debug mode. If you want to compile in release mode instead, provide the --release flag:

cargo lambda build --release

When you compile your code in release mode, cargo-lambda strips all debug symbols from the binaries to reduce their size.

Build - How does it work?

cargo-lambda uses Zig and cargo-zigbuild to compile the code for the right architecture. If Zig is not installed on your host machine, the first time you run cargo-lambda it will guide you through some installation options. If you run cargo-lambda in a non-interactive shell, the build process will fail until you install that dependency.

Watch

⚠️ This subcommand used to be called start. Both names still work, as start is an alias for watch.

The watch subcommand emulates the AWS Lambda control plane API. Run this command at the root of a Rust workspace and cargo-lambda will use cargo-watch to hot-compile changes in your Lambda functions. Use the --no-reload flag to avoid hot compilation.

⚠️ This command works best with version 0.5.1 of the Lambda Runtime. Previous versions of the runtime are likely to crash with serialization errors.

cargo lambda watch

The function is not compiled until the first time that you try to execute it. See the invoke command to learn how to execute a function. Cargo will run the command cargo run --bin FUNCTION_NAME to try to compile the function. FUNCTION_NAME can be either the name of the package if the package has only one binary, or the binary name in the [[bin]] section if the package includes more than one binary.

Watch - Environment variables

If you need to set environment variables for your function to run, you can specify them in the metadata section of your Cargo.toml file.

Use the section package.metadata.lambda.env to set global variables that will be applied to all functions in your package:

[package]
name = "basic-lambda"

[package.metadata.lambda.env]
RUST_LOG = "debug"
MY_CUSTOM_ENV_VARIABLE = "custom value"
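At runtime these are ordinary process environment variables, so your function reads them with std::env. A minimal sketch; the env_or helper is hypothetical, not part of cargo-lambda:

```rust
use std::env;

// Hypothetical helper: read a variable set through
// [package.metadata.lambda.env], with a fallback for when it is absent.
fn env_or(key: &str, default: &str) -> String {
    env::var(key).unwrap_or_else(|_| default.to_string())
}

fn main() {
    // Under `cargo lambda watch`, RUST_LOG would be "debug" per the example above.
    println!("RUST_LOG = {}", env_or("RUST_LOG", "info"));
}
```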

If you have more than one function in the same package, and you want to set specific variables for each one of them, you can use a section named after each one of the binaries in your package, package.metadata.lambda.bin.BINARY_NAME:

[package]
name = "lambda-project"

[package.metadata.lambda.env]
RUST_LOG = "debug"

[package.metadata.lambda.bin.get-product.env]
GET_PRODUCT_ENV_VARIABLE = "custom value"

[package.metadata.lambda.bin.add-product.env]
ADD_PRODUCT_ENV_VARIABLE = "custom value"

[[bin]]
name = "get-product"
path = "src/bin/get-product.rs"

[[bin]]
name = "add-product"
path = "src/bin/add-product.rs"

Watch - Function URLs

The emulator server includes support for Lambda function URLs out of the box. Since we're working locally, these URLs are under the /lambda-url path instead of under a subdomain. The function that you're trying to access through a URL must respond to Request events using lambda_http, or raw ApiGatewayV2httpRequest events.

You can create functions compatible with this feature by running cargo lambda new --http FUNCTION_NAME.

To access a function via its HTTP endpoint, start the watch subcommand cargo lambda watch, then send requests to the endpoint http://localhost:9000/lambda-url/FUNCTION_NAME. You can add any additional path after the function name, or any query parameters.
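A minimal handler compatible with the URL emulator looks roughly like the following sketch, similar in shape to what cargo lambda new --http generates; the `name` query parameter and greeting logic are illustrative, not part of the template:

```rust
use lambda_http::{run, service_fn, Body, Error, Request, RequestExt, Response};

// Responds to requests under /lambda-url/FUNCTION_NAME, reading an
// optional `name` query parameter (e.g. ?name=Rust).
async fn function_handler(event: Request) -> Result<Response<Body>, Error> {
    let name = event
        .query_string_parameters()
        .first("name")
        .unwrap_or("world")
        .to_string();
    Ok(Response::builder()
        .status(200)
        .header("content-type", "text/plain")
        .body(format!("Hello, {name}!").into())?)
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    run(service_fn(function_handler)).await
}
```

With the watch server running, a request like http://localhost:9000/lambda-url/FUNCTION_NAME?name=Rust would reach this handler.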

Invoke

The invoke subcommand helps you send requests to the control plane emulator.

If your Rust project only includes one function, in the package's main.rs file, you can invoke it by sending the data that you want to process, without extra arguments. For example:

$ cargo lambda invoke --data-ascii '{"command": "hi"}'

If your project includes more than one function, or the binary has a different name than the package, you must provide the name of the Lambda function that you want to invoke, and the payload that you want to send. If you don't know your function's name, you can find it in one of two places:

  • If your Cargo.toml file includes a [package] section, and it does not include a [[bin]] section, the function's name is in the name attribute under the [package] section.
  • If your Cargo.toml file includes one or more [[bin]] sections, the function's name is in the name attribute under the [[bin]] section that you want to compile.

In the following example, basic-lambda is the function's name as indicated in the package's [[bin]] section:

$ cargo lambda invoke basic-lambda --data-ascii '{"command": "hi"}'

Cargo-Lambda compiles functions on demand when they receive their first invocation. It's normal for the first invocation to take a long time if your code has not been compiled by the host compiler before. After the first compilation, Cargo-Lambda re-compiles your code every time you make a change, without requiring any further invocation requests.

Invoke - Ascii data

The --data-ascii flag allows you to send a payload directly from the command line:

cargo lambda invoke basic-lambda --data-ascii '{"command": "hi"}'

Invoke - File data

The --data-file flag allows you to read the payload from a file in your file system:

cargo lambda invoke basic-lambda --data-file examples/my-payload.json

Invoke - Example data

The --data-example flag allows you to fetch an example payload from the aws-lambda-events repository, and use it as your request payload. For example, if you want to use the example-apigw-request.json payload, you have to pass the name apigw-request into this flag:

cargo lambda invoke http-lambda --data-example apigw-request

After the first download, these examples are cached in your home directory, under .cargo/lambda/invoke-fixtures.

Rust version

This project works with Rust 1.59 and above.

Comments
  • Add GMP Linker Support

    Add GMP Linker Support

    I am attempting to use the multi-party-ecdsa library which uses GMP in my Rust lambda function. GMP is a precision arithmetic library which is commonly used in cryptography research, especially via Rust, so its support would be widely useful for the systems, security, and applied cryptography communities.

    Running cargo lambda build causes the following error:

    error: linking with /Users/daryakaviani/Library/Caches/cargo-zigbuild/0.11.1/zigcc-x86_64-unknown-linux-gnu.sh
    ...
    ld.lld: error: unable to find library -lgmp
    

    This is specific to cargo-lambda's usage of the zigbuild linker, because cargo build works successfully when gmp is installed. Also, cargo zigbuild works, so it could be the specific way cargo-lambda handles environment variables in relation to the zigbuild linker. This error is introduced exclusively when I import multi-party-ecdsa in src/main.rs. To confirm that this is a machine-agnostic issue, I tried it on x86-64 Ubuntu, x86-64 Amazon Linux, x86-64 macOS, and ARM M1 macOS and received the same results. I attempted a workaround on Linux by disabling the zigbuild linker:

    cargo lambda build --disable-zig-linker
    
    cargo lambda deploy \
      --iam-role arn:aws:iam::XXXXXXXXXXXX:role/rust_lambda_test \
      func_name
    
    cargo lambda invoke --remote \
    --data-ascii '{"params": "some params"}' \
    --output-format json \
    func_name
    

    This successfully compiled my artifacts and deployed the function, but invocation caused:

    Error: Runtime.ExitError
    
      × RequestId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Error: Runtime exited with error: exit status 1
    

    Is it possible for cargo-zigbuild or cargo-lambda to be updated for GMP support? If there is an existing workaround, please let me know and feel free to move this to Discussions!

    @calavera @messense

    opened by daryakaviani 12
  • `cargo lambda build --release` no matching package named `lambda_http` found

    `cargo lambda build --release` no matching package named `lambda_http` found

    $ cargo lambda build --release
    error: no matching package named `lambda_http` found
    location searched: registry `crates-io`
    required by package `my-lambda v0.1.0`

    Hi folks! I might be doing something bone-headed. I am seeing this error when I try to run the command above after running cargo lambda new my-lambda and changing into the created package directory.

    But then, the weird thing is that I can do this:

     kleb@klebs-MacBook-Pro /tmp                                                                                                                                           [9:56:10]
    > $ mkdir scratch
    
    kleb@klebs-MacBook-Pro /tmp                                                                                                                                           [9:56:12]
    > $ cd scratch
    
    kleb@klebs-MacBook-Pro /tmp/scratch                                                                                                                                   [9:56:15]
    > $ cargo init --lib
         Created library package
    
    kleb@klebs-MacBook-Pro /tmp/scratch                                                                                                                                   [9:56:17]
    > $ vim Cargo.toml                                                                                                                                                  [±master ●]
    
    kleb@klebs-MacBook-Pro /tmp/scratch                                                                                                                                   [9:56:27]
    > $ cat Cargo.toml                                                                                                                                                  [±master ●]
    [package]
    name = "scratch"
    version = "0.1.0"
    edition = "2021"
    
    # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
    
    [dependencies]
    lambda_http = "0.6.0"
    
    kleb@klebs-MacBook-Pro /tmp/scratch                                                                                                                                   [9:56:29]
    > $ cargo build                                                                                                                                                     [±master ●]
        Updating crates.io index
      Downloaded itoa v1.0.3
      Downloaded 1 crate (10.5 KB) in 1.11s
       Compiling proc-macro2 v1.0.43
       Compiling quote v1.0.21
       Compiling unicode-ident v1.0.2
       Compiling syn v1.0.99
       Compiling serde_derive v1.0.141
       Compiling serde v1.0.141
       Compiling autocfg v1.1.0
       Compiling libc v0.2.126
       Compiling cfg-if v1.0.0
       Compiling log v0.4.17
       Compiling futures-core v0.3.21
       Compiling pin-project-lite v0.2.9
       Compiling itoa v1.0.3
       Compiling once_cell v1.13.0
       Compiling fnv v1.0.7
       Compiling futures-task v0.3.21
       Compiling memchr v2.5.0
       Compiling futures-util v0.3.21
       Compiling pin-utils v0.1.0
       Compiling httparse v1.7.1
       Compiling futures-channel v0.3.21
       Compiling matches v0.1.9
       Compiling try-lock v0.2.3
       Compiling ryu v1.0.11
       Compiling tower-service v0.3.2
       Compiling percent-encoding v2.1.0
       Compiling serde_json v1.0.82
       Compiling httpdate v1.0.2
       Compiling tower-layer v0.3.1
       Compiling encoding_rs v0.8.31
       Compiling base64 v0.13.0
       Compiling mime v0.3.16
       Compiling tracing-core v0.1.29
       Compiling form_urlencoded v1.0.1
       Compiling tokio v1.20.1
       Compiling num-traits v0.2.15
       Compiling num-integer v0.1.45
       Compiling want v0.3.0
       Compiling mio v0.8.4
       Compiling socket2 v0.4.4
       Compiling num_cpus v1.13.1
       Compiling time v0.1.44
       Compiling tokio-macros v1.8.0
       Compiling tracing-attributes v0.1.22
       Compiling pin-project-internal v1.0.11
       Compiling async-stream-impl v0.3.3
       Compiling async-stream v0.3.3
       Compiling pin-project v1.0.11
       Compiling tracing v0.1.36
       Compiling tower v0.4.13
       Compiling bytes v1.2.1
       Compiling query_map v0.5.0
       Compiling chrono v0.4.19
       Compiling serde_urlencoded v0.7.1
       Compiling http v0.2.8
       Compiling http-body v0.4.5
       Compiling http-serde v1.1.0
       Compiling aws_lambda_events v0.6.3
       Compiling hyper v0.14.20
       Compiling tokio-stream v0.1.9
       Compiling lambda_runtime_api_client v0.6.0
       Compiling lambda_runtime v0.6.0
       Compiling lambda_http v0.6.0
       Compiling scratch v0.1.0 (/private/tmp/scratch)
        Finished dev [unoptimized + debuginfo] target(s) in 28.00s
    
    kleb@klebs-MacBook-Pro /tmp/scratch                                                                                                                                   [9:57:00]
    > $                                                            
    

    In other words, lambda seems to work OK outside of cargo-lambda.

    Here is the Cargo.toml for my lambda package:

    :!/bin/cat my-lambda/Cargo.toml
    [package]
    name = "my-lambda"
    version = "0.1.0"
    edition = "2021"
    
    # Starting in Rust 1.62 you can use `cargo add` to add dependencies
    # to your project.
    #
    # If you're using an older Rust version,
    # download cargo-edit(https://github.com/killercup/cargo-edit#installation)
    # to install the `add` subcommand.
    #
    # Running `cargo add DEPENDENCY_NAME` will
    # add the latest version of a dependency to the list,
    # and it will keep the alphabetic ordering for you.
    
    [dependencies]
    lambda_http = { version = "0.6.0", default-features = false, features = ["apigw_http"] }
    lambda_runtime = "0.6.0"
    tokio = { version = "1", features = ["macros"] }
    tracing = { version = "0.1", features = ["log"] }
    tracing-subscriber = { version = "0.3", default-features = false, features = ["fmt"] }
    
    

    Did I do something wrong during cargo-lambda setup? I am pretty tired at the moment after an all-nighter and might be missing something obvious.

    I will check back after I pass out asleep for a little while and see if I can solve 🤣🙏

    Otherwise, I am looking forward to using cargo-lambda! It seems great! Thank you!!

    opened by tklebanoff 11
  • Deploy doesn't work in workspace

    Deploy doesn't work in workspace

    Hey, it seems like cargo-lambda can't deploy things in a workspace?

    tree:

    ├── Cargo.lock
    ├── Cargo.toml
    ├── crates
    │   ├── app_1
    │   │   ├── Cargo.toml
    │   │   └── src
    │   ├── app_2
    │   │   ├── Cargo.toml
    │   │   └── src
    │   ├── app_3
    │   │   ├── Cargo.toml
    │   │   ├── migrations
    │   │   └── src
    ├── src
    │   ├── config.rs
    │   ├── lib.rs
    │   ├── notifier.rs
    

    app_2 is the only one that has a lambda and depends on lambda_http etc.

    root Cargo.toml:

    [package]
    name = "project"
    version = "0.1.0"
    edition = "2021"
    
    [profile.dev.package."*"]
    opt-level = 2
    
    [workspace]
    members = ["crates/*"]
    

    Running cargo lambda deploy --iam-role arn:aws:iam::610581918146:role/AWS_LAMBDA_FULL_ACCESS app_2 gives me Error: × binary file for app_2 not found, use `cargo lambda build` to create it

    If I take app_2, move it to another folder outside of this workspace and build it there, then cargo-lambda is able to deploy it.

    opened by drager 9
  • Allow users to specify on which interface to bind

    Allow users to specify on which interface to bind

    I added an option, -a, that allows users to specify on which interface to bind. It defaults to 127.0.0.1. While trying to test my changes, I ran into the error I mentioned on !259. I ran into the same error without my changes, so there is a good chance my test was invalid.

    Why not listen on 0.0.0.0 by default? One could. A bit of research says that it would not bind to IPv6 addresses. Binding to both 0.0.0.0 and :: would solve that.

    Resolves #259

    opened by taylor1791 8
  • `cargo deploy` to deploy several functions at once

    `cargo deploy` to deploy several functions at once

    Would be nice to have the feature to deploy several functions at once (as it is done in this example https://github.com/softprops/serverless-aws-rust-multi ). Also would be nice to have ability to put some sort of prefix/suffix to function names.

    I tried the serverless plugin first, but it looks abandoned and doesn't work: it tries to compile my code with an ancient version of Rust and fails. cargo lambda at least compiles and builds fine. Deploying just one function also works fine. However, scaling it to several functions involves a lot of boilerplate.

    Would be nice to have some sort of descriptor as part of build file...

    What do you think? Perhaps I can try to contribute when I have some free time.

    opened by fancywriter 8
  • "Invalid cross-device link" when running `cargo lambda new blah` on Manjaro

    ~/projects  cargo lambda new sa-solver-sns-result-ws-dispatcher
    ? Is this function an HTTP function? No
    ? AWS Event type that this function receives sns::SnsEvent
    Error: × failed to move package: from "/tmp/.tmpX6qXqg" to "sa-solver-sns-result-ws-dispatcher"
      ╰─▶ Invalid cross-device link (os error 18)

    ~/projects  uname -a
    Linux gauss 5.10.105-1-MANJARO #1 SMP PREEMPT Fri Mar 11 14:12:33 UTC 2022 x86_64 GNU/Linux
    ~/projects  cargo --version
    cargo 1.62.0-nightly (dba5baf 2022-04-13)

    bug 
    opened by sdd 8
  • Environment variables are deleted in AWS

    Environment variables are deleted in AWS

    First of all, amazing work, I really like this project!

    But we discovered today that in case you have an existing Lambda in AWS that already has environment variables (given in our case with terraform / cdk) but your Lambda code itself doesn't define any, the already existing ones in AWS are overwritten / deleted on deploy, which is bad. Is this intended / is there a solution?

    edit: Here is an example where there were environment variables before the deployment which were gone after it:

    cargo lambda -vv deploy --region eu-central-1 --timeout 5
    TRACE run: deploying project options=Deploy { remote_config: RemoteConfig { profile: None, region: Some("eu-central-1"), alias: None, retry_attempts: 1 }, function_config: FunctionDeployConfig { memory: None, enable_function_url: false, disable_function_url: false, timeout: Some(Timeout(5)), env_var: None, env_file: None, tracing: None, role: None, layer: None }, lambda_dir: None, manifest_path: "Cargo.toml", binary_name: None, binary_path: None, s3_bucket: None, extension: false, compatible_runtimes: ["provided.al2"], output_format: Text, name: None }
    ▸▹▹▹▹ loading binary data
    DEBUG run: searching package metadata name="update" target_matches=false
    DEBUG run: using deploy configuration from metadata config=DeployConfig { memory: None, timeout: Timeout(30), env: {}, env_file: None, tracing: PassThrough, iam_role: None, layers: None }
    ▸▹▹▹▹ deploying function
    DEBUG run: updating function's configuration config=UpdateFunctionConfiguration { handle: Handle { client: Client { connector: DynConnector, middleware: DynMiddleware, retry_policy: Standard { config: Config { initial_retry_tokens: 500, retry_cost: 5, no_retry_increment: 1, timeout_retry_cost: 10, max_attempts: 3, initial_backoff: 5s, max_backoff: 20s, base: 0x55f747c5ceb0 }, shared_state: CrossRequestRetryState { quota_available: Mutex { data: 501, poisoned: false, .. } } }, timeout_config: Config { api: Api { call: Unset, call_attempt: Unset }, http: Http { connect: Unset, write: Unset, read: Unset, tls_negotiation: Unset }, tcp: Tcp { connect: Unset, write: Unset, read: Unset } }, sleep_impl: Set(TokioSleep) }, conf: Config }, inner: Builder { function_name: Some("update"), role: None, handler: None, description: None, timeout: None, memory_size: None, vpc_config: None, environment: Some(Environment { variables: "*** Sensitive Data Redacted ***" }), runtime: None, dead_letter_config: None, kms_key_arn: None, tracing_config: Some(TracingConfig { mode: Some(PassThrough) }), revision_id: None, layers: None, file_system_configs: None, image_config: None, ephemeral_storage: None } }
    ▹▹▹▹▹ deploying function
    DEBUG run: uploading zip to Lambda
    🔍 function arn: arn:aws:lambda:eu-central-1:578944296431:function:update:12
    
    opened by Dgame 7
  • GitHub Action

    GitHub Action

    It'd be nice to provide a GitHub Action that uses the Docker Image described in #230. That way people that want to use Cargo Lambda in GitHub Actions don't have to worry about installing it.

    We'd probably need a new repository in the org for that.

    help wanted good first issue 
    opened by calavera 7
  • env_logger spams hyper logs when turned on

    env_logger spams hyper logs when turned on

    I'm not sure if this is a problem with lambda_http or cargo-lambda.

    With this code:

    use dotenv::dotenv;
    use lambda_http::{
        http::{Method, StatusCode},
        service_fn,
        tower::ServiceBuilder,
        Request, Response,
    };
    use lambda_http::{Body, IntoResponse, RequestExt};
    use serde_json::json;
    use tower_http::cors::{Any, CorsLayer};
    
    // Corrected: the response must be wrapped in Ok() to match the return type,
    // and the unused request parameter is prefixed with an underscore.
    async fn function_handler(
        _request: Request,
    ) -> Result<Response<Body>, lambda_http::Error> {
        Ok(Response::builder()
            .status(StatusCode::CREATED)
            .header("Content-Type", "application/json")
            .body("".into())
            .unwrap())
    }
    
    #[tokio::main]
    async fn main() -> Result<(), lambda_http::Error> {
        env_logger::init();
        dotenv().ok();
    
        let cors_layer = CorsLayer::new()
            .allow_methods(vec![Method::GET, Method::POST, Method::OPTIONS])
            .allow_origin(Any);
    
        let handler = ServiceBuilder::new()
            .layer(cors_layer)
        .service(service_fn(function_handler));
    
        lambda_http::run(handler).await?;
    
        Ok(())
    }
    

    and running cargo lambda watch I will get my terminal spammed:

    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] reuse idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] flushed 126 bytes 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] parsed 4 headers 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::conn] incoming body is empty 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] pooling idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] reuse idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] flushed 126 bytes 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] parsed 4 headers 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::conn] incoming body is empty 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] pooling idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] reuse idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] flushed 126 bytes 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] parsed 4 headers 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::conn] incoming body is empty 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] pooling idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] reuse idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] flushed 126 bytes 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] parsed 4 headers 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::conn] incoming body is empty 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] pooling idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] reuse idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] flushed 126 bytes 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] parsed 4 headers 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::conn] incoming body is empty 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] pooling idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] reuse idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] flushed 126 bytes 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] parsed 4 headers 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::conn] incoming body is empty 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] pooling idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] reuse idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] flushed 126 bytes 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] parsed 4 headers 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::conn] incoming body is empty 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] pooling idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] reuse idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] flushed 126 bytes 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] parsed 4 headers 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::conn] incoming body is empty 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] pooling idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] reuse idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] flushed 126 bytes 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] parsed 4 headers 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::conn] incoming body is empty 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] pooling idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] reuse idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] flushed 126 bytes 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] parsed 4 headers 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::conn] incoming body is empty 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] pooling idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] reuse idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] flushed 126 bytes 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::io] parsed 4 headers 
    [2022-07-14T12:30:18Z DEBUG hyper::proto::h1::conn] incoming body is empty 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] pooling idle connection for ("http", 127.0.0.1:9000) 
    [2022-07-14T12:30:18Z DEBUG hyper::client::pool] reuse idle connection for ("http", 127.0.0.1:9000)
    
    opened by drager 7
  • feat: validate build target for AWS Lambda, and check target component in toolchain

    feat: validate build target for AWS Lambda, and check target component in toolchain

    Hi @calavera, just creating this as a draft PR for you to review when you have time; hopefully this resolves #30.

    I've also added a test for adding the target component to the toolchain, and confirmed it works as expected. It looks like the check to confirm if a target component is present in the toolchain should be "zero cost", as it involves just a lookup to see if a directory exists or not in the rustup home.

    --

    Below are the changes added as part of this feature:

    • validate that the build target passed in to --target is currently supported for deployments to AWS Lambda
    • check if the target component is installed in the host toolchain, and install it with rustup target add <component> if needed
    opened by rnag 7
  • Error with cargo lambda build on Windows

    Error with cargo lambda build on Windows

    Hello,

    I have tried following the readme on how to get up and running with this tool, but I have hit this problem on two separate Windows 10 computers. When I get to the lambda build command I get the error below.

    PS C:\temp\test_fn> cargo lambda build --release
    Error: 
      × The prompt configuration is invalid: Available options can not be empty
    

    I tried to run this with various other switches but I cannot get it to build. Other commands like cargo lambda watch were working correctly.

    I tried to search to see if others had the issue but could not find any, apologies if it is already logged.

    Thank you, Adam

    opened by acottis 6
  • Create native installers

    The initial installation experience with cargo install is suboptimal. This command takes several minutes because it needs to download and compile the code on the host machine. It'd be much better if we had native installers. This is a list of installers that we should probably support:

    • [x] Homebrew (linux and mac)
    • [x] Scoop (windows)
    • [ ] NPM (any OS with Node installed, good for people that use the AWS CDK)
    • [x] pip (any OS with Python installed, good for people that use the AWS SAM CLI)
    help wanted good first issue 
    opened by calavera 2
  • Allow to send ClientContext information when invoking a function

    The server can already take that information from an HTTP header. The invoke command should expose a way to provide that information and serialize it in request headers.

    commands/invoke 
    opened by calavera 0
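As a sketch of what this could look like from the command line: the AWS Lambda Invoke API carries client context as base64-encoded JSON in the `X-Amz-Client-Context` header. Assuming the local emulator accepts the same header on the standard invocations endpoint (the port, endpoint path, and function name below are illustrative, not confirmed cargo-lambda behavior):

```shell
# Base64-encode the ClientContext JSON, as the Lambda Invoke API expects.
CTX=$(printf '%s' '{"custom":{"feature_flag":"on"}}' | base64)

# Send it along with the invocation payload to the local emulator.
# The URL shape assumes the standard Lambda Invoke endpoint; adjust as
# needed, and tolerate the emulator not running in this sketch.
curl -s \
  -H "X-Amz-Client-Context: ${CTX}" \
  -d '{"command":"ping"}' \
  "http://127.0.0.1:9000/2015-03-31/functions/my-function/invocations" || true
```

A flag on the invoke command could then build this header from a file or inline JSON instead of making the user encode it by hand.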
  • Allow to send Cognito information when invoking a function

    The server can already take that information from an HTTP header. The invoke command should expose a way to provide that information and serialize it in request headers.

    commands/invoke 
    opened by calavera 0
Releases: v0.14.0