Create full-fledged APIs for static datasets without writing a single line of code.

Overview

ROAPI

ROAPI automatically spins up read-only APIs for static datasets without requiring you to write a single line of code. It builds on top of Apache Arrow and Datafusion. The core of its design can be boiled down to the following:

  • Query frontends to translate SQL, GraphQL and REST API queries into Datafusion plans.
  • Datafusion for query plan execution.
  • Data layer to load datasets from a variety of sources and formats with automatic schema inference.
  • Response encoding layer to serialize intermediate Arrow record batches into the various formats requested by the client.

See below for a high level diagram:

[ROAPI design diagram]

Installation

Install pre-built binary

pip install roapi-http

Check out the GitHub release page for pre-built binaries for each platform. Pre-built Docker images are also available at ghcr.io/roapi/roapi-http.
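
For example, to pull the pre-built image locally (the latest tag is assumed here; see the registry for other available tags):

docker pull ghcr.io/roapi/roapi-http:latest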

Install from source

cargo install --locked --git https://github.com/roapi/roapi --branch main --bin roapi-http

Usage

Quick start

Spin up APIs for test_data/uk_cities_with_headers.csv and test_data/spacex_launches.json:

roapi-http \
    --table "uk_cities=test_data/uk_cities_with_headers.csv" \
    --table "test_data/spacex_launches.json"

On Windows, the full scheme (file:// or filesystem://) must be provided in the table URI, and double quotes (") should be used instead of single quotes (') to work around Windows command-line quoting limitations:

roapi-http \
    --table "uk_cities=file://d:/path/to/uk_cities_with_headers.csv" \
    --table "file://d:/path/to/test_data/spacex_launches.json"

Or using docker:

docker run -t --rm -p 8080:8080 ghcr.io/roapi/roapi-http:latest --addr 0.0.0.0:8080 \
    --table "uk_cities=test_data/uk_cities_with_headers.csv" \
    --table "test_data/spacex_launches.json"

Query tables using SQL, GraphQL or REST:

curl -X POST -d "SELECT city, lat, lng FROM uk_cities LIMIT 2" localhost:8080/api/sql
curl -X POST -d "query { uk_cities(limit: 2) {city, lat, lng} }" localhost:8080/api/graphql
curl "localhost:8080/api/tables/uk_cities?columns=city,lat,lng&limit=2"

Get inferred schema for all tables:

curl 'localhost:8080/api/schema'

Config file

You can also configure multiple table sources using a YAML config file, which supports more advanced, format-specific table options:

addr: 0.0.0.0:8084
tables:
  - name: "blogs"
    uri: "test_data/blogs.parquet"

  - name: "ubuntu_ami"
    uri: "test_data/ubuntu-ami.json"
    option:
      format: "json"
      pointer: "/aaData"
      array_encoded: true
    schema:
      columns:
        - name: "zone"
          data_type: "Utf8"
        - name: "name"
          data_type: "Utf8"
        - name: "version"
          data_type: "Utf8"
        - name: "arch"
          data_type: "Utf8"
        - name: "instance_type"
          data_type: "Utf8"
        - name: "release"
          data_type: "Utf8"
        - name: "ami_id"
          data_type: "Utf8"
        - name: "aki_id"
          data_type: "Utf8"

  - name: "spacex_launches"
    uri: "https://api.spacexdata.com/v4/launches"
    option:
      format: "json"

  - name: "github_jobs"
    uri: "https://jobs.github.com/positions.json"

To serve tables using the config file:

roapi-http -c ./roapi.yml

See the config documentation for more options, including using a Google spreadsheet as a table source.

Response serialization

By default, ROAPI encodes responses in JSON format, but you can request different encodings by specifying the ACCEPT header:

curl -X POST \
    -H 'ACCEPT: application/vnd.apache.arrow.stream' \
    -d "SELECT launch_library_id FROM spacex_launches WHERE launch_library_id IS NOT NULL" \
    localhost:8080/api/sql
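
The other encodings listed under Features below can be requested the same way. As a sketch, assuming your build supports Parquet response encoding, the same query could be fetched as Parquet with:

curl -X POST \
    -H 'ACCEPT: application/vnd.apache.parquet' \
    -d "SELECT launch_library_id FROM spacex_launches WHERE launch_library_id IS NOT NULL" \
    localhost:8080/api/sql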

REST API query interface

You can query tables through the REST API by sending GET requests to /api/tables/{table_name}. Query operators are specified as query params.

The REST query frontend currently supports the following query operators:

  • columns
  • sort
  • limit
  • filter

To sort column col1 in ascending order and col2 in descending order, set the query param to sort=col1,-col2.

To find all rows with col1 equal to the string 'foo', set the query param to filter[col1]='foo'. You can also do basic comparisons with filters; for example, the predicate 0 <= col2 < 5 can be expressed as filter[col2]gte=0&filter[col2]lt=5.
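
Putting these operators together, here is a sketch of a combined query against the uk_cities table from the quick start (the city, lat and lng columns appear in the earlier examples; the latitude bounds are arbitrary):

curl "localhost:8080/api/tables/uk_cities?columns=city,lat,lng&sort=-lat&filter[lat]gte=52&filter[lat]lt=54&limit=3"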

GraphQL query interface

To query tables using GraphQL, send the query through POST request to /api/graphql endpoint.

The GraphQL query frontend supports the same set of operators as the REST query frontend. Here is how you can apply various operators in a query:

{
    table_name(
        filter: {
            col1: false
            col2: { gteq: 4, lt: 1000 }
        }
        sort: [
            { field: "col2", order: "desc" }
            { field: "col3" }
        ]
        limit: 100
    ) {
        col1
        col2
        col3
    }
}

SQL query interface

To query tables using a subset of standard SQL, send the query in a POST request to the /api/sql endpoint. This is the only query interface that supports table joins.
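
As a minimal sketch of a join, assuming two registered tables named table1 and table2 that share an id column (neither is part of the quick-start data):

curl -X POST \
    -d "SELECT t1.id, t2.value FROM table1 t1 JOIN table2 t2 ON t1.id = t2.id LIMIT 10" \
    localhost:8080/api/sql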

Features

Query layer:

  • REST API GET
  • GraphQL
  • SQL
  • join between tables
  • support query on nested struct fields
  • index
  • protocol
    • gRPC
    • MySQL
    • Postgres

Response serialization:

  • JSON application/json
  • Arrow application/vnd.apache.arrow.stream
  • Parquet application/vnd.apache.parquet
  • msgpack

Data layer:

Misc:

  • auto gen OpenAPI doc for rest layer
  • query input type conversion based on table schema
  • stream arrow encoding response
  • authentication layer

Development

The core of ROAPI, including the query frontends and the data layer, lives in the self-contained columnq crate. It takes queries and outputs Arrow record batches. Data sources are also loaded and stored in memory as Arrow record batches.

The roapi-http crate wraps columnq with an HTTP-based API layer. It serializes the Arrow record batches produced by columnq into different formats based on the client request.

Building ROAPI with SIMD optimization requires a nightly Rust toolchain.
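
A sketch of such a build, assuming SIMD support is gated behind a cargo feature named simd (check the repository for the actual feature name and flags):

rustup toolchain install nightly
cargo +nightly build --release --bin roapi-http --features simd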

Build Docker image

docker build --rm -t ghcr.io/roapi/roapi-http:latest .

Comments
  • database support

    For 0.7.1 and a configuration of:

    tables:
      - name: "table_foo"
        uri: "mysql://username:password@localhost:3306/database"
        #uri: "table_foo=mysql://username:password@localhost:3306/database"
    
    

    In any case I get:

     Invalid table URI: cannot detect table extension from uri: mysql://username:password@server:port/databas
    
    

    back as a result. How can I tell roapi to actually use either Postgres or MySQL? In both cases the table is not identified.

    opened by geoHeil 19
  • Lazy load parquet

    @houqp as requested, the initial PoC of using TableProvider trait instead of explicit MemTable, to enable direct queries on the source storage via datafusion.

    opened by dispanser 14
  • Expecting qualified names

    Hi all,

    I'm using version 0.4.4 and the REST API, and was able to successfully use roapi with a projection/sort/limit pattern with the following call: curl -v "http://127.0.0.1:8084/api/tables/cities?&columns=City,State&sort=LatD&limit=3"

    Unfortunately, when I run this call more than once, I start to receive messages such as: {"code":400,"error":"query_execution","message":"Failed to execute query: Error during planning: No field named 'cities.LatD'. Valid fields are 'cities.City', 'cities.State'."}* Connection #0 to host 127.0.0.1 left intact

    Moreover, I can't successfully qualify the names of the columns; I tried many variations but without success: curl -v "http://127.0.0.1:8084/api/tables/cities?&sort=LatD&columns=cities.City,cities.State&limit=3", which is misleading given that the error statement claims that the expected fields are 'cities.City', 'cities.State'.

    Any direction will be welcome.

    bug upstream 
    opened by Seddryck 10
  • pip install error

    The problem is as follows: when I execute "pip install roapi-http", I get: Collecting roapi-http Could not find a version that satisfies the requirement roapi-http (from versions: ) No matching distribution found for roapi-http. So, how do I fix it?

    opened by cymqqqq 9
  • datafusion-objectstore-s3

    Currently ColumnQ uses rusoto

    A lot has been done with Object stores on datafusion and I'm wondering if it could be interesting to move to: https://github.com/datafusion-contrib/datafusion-objectstore-s3

    opened by tiphaineruy 8
  • basic example fails on osx

    when toying around with the basic example from the main README document I observe HTTP URI resolution problems:

    roapi-http --version
    roapi-http 0.5.3
    roapi-http --table "uk_cities=https://raw.githubusercontent.com/roapi/roapi/main/test_data/uk_cities_with_headers.csv"
    Error: failed to lookup address information: nodename nor servname provided, or not known
    
    
    # strangely - this fails with the same error when running locally:
    wget https://raw.githubusercontent.com/roapi/roapi/main/test_data/uk_cities_with_headers.csv
    roapi-http --table "uk_cities=uk_cities_with_headers.csv"
    Error: failed to lookup address information: nodename nor servname provided, or not known
    

    This happens when running natively on macOS Catalina.

    However, when executed inside your docker container, it works just fine.

    opened by geoHeil 8
  • issue compiling roapi-http v0.1.13  could not compile `actix-cors`

    Hello, I'm trying to compile roapi-http from the source zip.

    I get an error when I compile:

    cargo install --verbose --path ./roapi-http --bin roapi-http building ... ...

    error[E0053]: method `error_response` has an incompatible type for trait
      --> /usr/local/cargo/git/checkouts/actix-extras-3214827614a42f65/ab3bdb6/actix-cors/src/error.rs:53:5
       |
    53 |     fn error_response(&self) -> HttpResponse {
       |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected struct `BaseHttpResponse`, found struct `actix_web::HttpResponse`
       |
       = note: expected fn pointer `fn(&CorsError) -> BaseHttpResponse<actix_web::dev::Body>`
                  found fn pointer `fn(&CorsError) -> actix_web::HttpResponse`

    error: aborting due to previous error

    For more information about this error, try `rustc --explain E0053`.
    error: could not compile `actix-cors`
    
    opened by Diluted 8
  • Add dynamic register and update table function.

    Use example

    curl -X POST http://172.24.16.1:8080/api/table \
       -H 'Content-Type: application/json' \
       -d '[
        {
            "tableName": "uk_cities2",
            "uri": "./test_data/uk_cities_with_headers.csv"
        }
    ]'
    
    opened by zemelLeong 7
  • Looking into source of logging overhead

    https://tech.marksblogg.com/roapi-rust-data-api.html mentioned that changing logging level from info to error resulted in 50% speed boost, this is certainly not expected :(

    bug 
    opened by houqp 7
  • Build wheels with rustls

    The upgrade that came with the SIMD build in the Dockerfile (#81) broke the columnq and roapi-http builds. I tried fixing this in multiple ways, but in the end upgrading the container to a more recent version and using rustls instead of native-tls (openssl) seems to work. I am not sure if my solution is acceptable, but I am not sure how to fix it otherwise.

    I tried combining this with #31 but I only could get the x86_64 build to work.

    opened by ekroon 7
  • Failed to load Delta table with partitions

    When I tried to load a Delta table with partitions, it throws the following exception

    Error: DeltaTable error: Failed to apply transaction log: Invalid JSON in log record Caused by: 0: Failed to apply transaction log: Invalid JSON in log record 1: Invalid JSON in log record 2: invalid type: null, expected a string at line 1 column 195

    I believe it fails because the table commit log has an entry

    {"add":{"path":"file_year=__HIVE_DEFAULT_PARTITION__/file_month=__HIVE_DEFAULT_PARTITION__/part-00000-4cfe8fb8-3905-4ae6-8b44-e7612753064c.c000.snappy.parquet","partitionValues":{"file_year":null,"file_month":null},"size":3331,"modificationTime":1628075908000,"dataChange":true}}
    
    bug 
    opened by mmuru 7
  • `SQL like '%中文%'` is not supported

    The HTTP REST API gives the message: Failed to execute query: Arrow error: External error: Arrow error: External error: Execution error: Arrow error: External error: Internal error: Data type LargeUtf8 not supported for scalar operation 'like' on string array. This was likely caused by a bug in DataFusion's code and we would welcome that you file an bug report in our issue tracker

    Maybe DataFusion does not support this.

    opened by tempbottle 0
  • Issue: Queries load results into memory

    Currently all queries load the results into memory

    //in columnq/query/graphql.rs
    pub async fn exec_query(
        dfctx: &datafusion::execution::context::SessionContext,
        q: &str,
    ) -> Result<Vec<arrow::record_batch::RecordBatch>, QueryError> {
        query_to_df(dfctx, q)?
            .collect()
            .await
            .map_err(QueryError::query_exec)
    }
    // columnq/query/rest.rs
    pub async fn query_table(
        dfctx: &datafusion::execution::context::SessionContext,
        table_name: &str,
        params: &HashMap<String, String>,
    ) -> Result<Vec<RecordBatch>, QueryError> {
        let df = table_query_to_df(dfctx, table_name, params)?;
        df.collect().await.map_err(QueryError::query_exec)
    }
    

    So all these queries use a method called collect which, as per the docs, loads the query results into memory.

    Convert the logical plan represented by this DataFrame into a physical plan and execute it, collecting all resulting batches into memory Executes this DataFrame and collects all results into a vector of RecordBatch.
    

    There is another method called collect_partitioned which does not load into memory but returns the result as Result<Vec<Vec<RecordBatch>>>

    Executes this DataFrame and collects all results into a vector of vector of RecordBatch maintaining the input partitioning.
    

    So, can I create a PR changing the current query functions to use the collect_partitioned method, or can you suggest any other alternatives to perform queries without loading them into memory? Thank you.

    opened by elliot14A 2
  • Error when using delta tables with partitions

    I get an error when I try to use a delta table with partitions.

    I can reproduce the error with this sample data:

    df = pd.read_csv("https://raw.githubusercontent.com/quankiquanki/skytrax-reviews-dataset/master/data/airport.csv")
    cols = df.columns
    df["year"] = pd.to_datetime(df["date"]).dt.year
    df = df[cols.insert(0, "year")]

    write_deltalake("./data/airport1", df, partition_by=["year"])
    df = df[cols.insert(len(cols), "year")]
    write_deltalake("./data/airport2", df, partition_by=["year"])

    Request: localhost:8084/api/tables/airport1?columns=year
    Returns the column "airport_name"

    Request: localhost:8084/api/tables/airport2?columns=year
    Returns an error: {"code":400,"error":"query_execution","message":"Failed to execute query: Arrow error: Invalid argument error: column types must match schema types, expected Dictionary(UInt16, Utf8) but found Dictionary(UInt16, Int64) at column index 0"}

    opened by dominikpeter 0
  • Feature Request: SQL Endpoint with Parameters

    Hi there

    This project really looks promising! I have one thing I think should be changed: the SQL endpoint does not seem to support parameters, or at least it's not documented. That encourages users to create their SQL by string concatenation, which is not good from a security perspective (SQL injection). I propose to either use a JSON body:

    {
    "sql": "SELECT * FROM table where name = $name",
    "parameters": {"$name": "something that is properly escaped\" drop table table"}
    }
    

    Or make use of headers:

    POST URLTOTABLE
    X-SQL-Parameter-Name: something that is properly escaped" drop table table
    
    SELECT * FROM table where name = $name
    

    What do you think?

    opened by aersam 0
  • Parquet s3 urls not working when `use_memory_table` is set to false

    s3 urls were not working when use_memory_table is set to false. It was returning the following error:

    Error: DataFusion error: Internal error: No suitable object store found for s3://roapitest/blogs_flattened.parquet. This was likely caused by a bug in DataFusion's code and we would welcome that you file an bug report in our issue tracker
    

    So I went through the datafusion repo and did a little bit of debugging, and found out that while using use_memory_table: false it was using a function from datafusion called infer_schema, which in turn uses object_store, which is actually causing the error. object_store is a dashmap containing the schema info of path, s3, and hdfs urls. Only local filesystem urls are present in object_store. We have to manually configure s3 urls with the bucket name using a function called register_store on the RuntimeEnv of the SessionContext. I was able to use register_store, and the required functionality to use s3 without loading into memory seems to work, but in columnq there is a global SessionContext, and when tables are loaded, for example a parquet file in columnq/src/table/parquet.rs, a local SessionContext is used.

    //in columnq/src/columnq.rs
    pub fn new_with_config(config: SessionConfig) -> Self {
            let dfctx = SessionContext::with_config(config);
            let bucket_name = "roapi-test";
            let region = std::env::var("AWS_REGION").unwrap();
            let endpoint = std::env::var("AWS_ENDPOINT_URL").unwrap();
            let secret_access_key = std::env::var("AWS_SECRET_ACCESS_KEY").unwrap();
            let access_key_id = std::env::var("AWS_ACCESS_KEY_ID").unwrap();
            let s3_builder = AmazonS3Builder::new()
                .with_access_key_id(access_key_id)
                .with_bucket_name(bucket_name)
                .with_endpoint(endpoint)
                .with_region(region)
                .with_secret_access_key(secret_access_key)
                .build()
                .map_err(|e| {
                    println!("{:?}", e);
                    ColumnQError::MissingOption
                })
                .unwrap();
            dfctx.runtime_env().object_store_registry.register_store(
                "s3",
                "roapi-test",
                Arc::new(s3_builder),
            );
            let schema_map = HashMap::<String, arrow::datatypes::SchemaRef>::new();
            Self {
                dfctx,
                schema_map,
                kv_catalog: HashMap::new(),
            }
        }
    
    //in columnq/src/table/parquet.rs
    pub async fn to_datafusion_table(t: &TableSource) -> Result<Arc<dyn TableProvider>, ColumnQError> {
        let opt = t
            .option
            .clone()
            .unwrap_or_else(|| TableLoadOption::parquet(TableOptionParquet::default()));
        let TableOptionParquet { use_memory_table } = opt.as_parquet()?;
    
        if *use_memory_table {
            to_mem_table(t).await
        } else {
            let table_url = ListingTableUrl::parse(t.get_uri_str())?;
            let options = ListingOptions::new(Arc::new(ParquetFormat::default()));
    
            let schemaref = match &t.schema {
                Some(s) => Arc::new(s.into()),
                None => {
                    let ctx = SessionContext::new();
                   
                    let bucket_name = "roapi-test";
                    let region = std::env::var("AWS_REGION").unwrap();
                    let endpoint = std::env::var("AWS_ENDPOINT_URL").unwrap();
                    let secret_access_key = std::env::var("AWS_SECRET_ACCESS_KEY").unwrap();
                    let access_key_id = std::env::var("AWS_ACCESS_KEY_ID").unwrap();
                    let s3_builder = AmazonS3Builder::new()
                        .with_access_key_id(access_key_id)
                        .with_bucket_name(bucket_name)
                        .with_endpoint(endpoint)
                        .with_region(region)
                        .with_secret_access_key(secret_access_key)
                        .build()
                        .map_err(|e| {
                            println!("{:?}", e);
                            ColumnQError::MissingOption
                        })?;
                    ctx.runtime_env().object_store_registry.register_store(
                        "s3",
                        "roapi-test",
                        Arc::new(s3_builder),
                    );
                    let s = options.infer_schema(&ctx.state(), &table_url).await?;
                    println!("6");
                    s
                }
            };
    
            let table_config = ListingTableConfig::new(table_url)
                .with_listing_options(options)
                .with_schema(schemaref);
            Ok(Arc::new(ListingTable::try_new(table_config)?))
        }
    }
    

    As you can see, I have to use the register_store function twice in order to make s3 urls work, and that is only for parquet files. If I want it to work for csv files as well, I have to use the function a third time, which I think is not efficient.

    Do you think there is a better way to do it, like having a single global SessionContext?

    opened by elliot14A 3