Allow DataFusion to resolve queries across remote query engines while pushing down as much compute as possible.

Overview

DataFusion Federation

crates.io docs.rs

The goal of this repo is to allow DataFusion to resolve queries across remote query engines while pushing down as much compute as possible.

Check out the examples to get a feel for how it works.

Potential use-cases:

  • Querying across SQLite, MySQL, PostgreSQL, ...
  • Pushing down SQL or Substrait plans.
  • DataFusion -> Flight SQL -> DataFusion
  • ..

Status

The project is in alpha status. Contributions are welcome; landing a PR grants commit access.

Comments
  • Support multiple FlightSQL Endpoints + Auth Options

    This PR adds support for the following FlightSQL features:

    • Combining multiple streams when the returned Ticket has multiple endpoints
    • Support for basic and PKI auth

    I also attempted a refactor following DataFusion's suggested practice for calling async methods within the execute method of a physical plan (see the sketch after this item). As discussed on the prior PR, obtaining the schema is still painful even with this approach. I do think it is slightly cleaner than try_unfold, and it can be cleaned up further if we find a better way to obtain the expected schema of the flight stream.

    opened by devinjdangelo 4
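
    Below is a minimal sketch of one such pattern, assuming a hypothetical fetch_remote_stream helper in place of the actual FlightSQL client call: the async setup is wrapped in futures::stream::once and flattened into the batch stream, so ExecutionPlan::execute never needs to block.

    ```rust
    use datafusion::arrow::datatypes::SchemaRef;
    use datafusion::error::Result;
    use datafusion::physical_plan::stream::RecordBatchStreamAdapter;
    use datafusion::physical_plan::SendableRecordBatchStream;
    use futures::TryStreamExt;

    /// Hypothetical async setup step: connect to the remote engine and obtain
    /// its stream of record batches (stands in for the FlightSQL do_get call).
    async fn fetch_remote_stream(_sql: String) -> Result<SendableRecordBatchStream> {
        unimplemented!("open the FlightSQL connection and return its batch stream")
    }

    /// A stream an ExecutionPlan::execute implementation can return without
    /// blocking: stream::once defers the async connection until first poll,
    /// and try_flatten forwards the remote batches (or the connection error).
    fn execute_stream(schema: SchemaRef, sql: String) -> SendableRecordBatchStream {
        let stream = futures::stream::once(fetch_remote_stream(sql)).try_flatten();
        Box::pin(RecordBatchStreamAdapter::new(schema, stream))
    }
    ```

    Note that this still requires knowing the schema up front, which is exactly the pain point discussed above.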
  • Add Flight SQL Federation example

    I added a Flight SQL example.

    cc @devinjdangelo since you showed interest. I do feel FlightSQLExecutor.execute is somewhat complicated, making me wonder if keeping it async is warranted.

    The FlightSqlService is rather basic for now. It may be possible to add extension points for things like authentication so it becomes usable as a library; a rough sketch of an auth hook follows this item.

    opened by backkem 4
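
    As a rough illustration of such an extension point, the usual tonic approach is an interceptor that validates credentials before a request reaches the FlightSqlService; the header value below is a placeholder, and none of this is the example's actual code.

    ```rust
    use tonic::{metadata::MetadataValue, Request, Status};

    /// Reject requests that don't carry the expected Authorization header.
    /// Wire it up with e.g. FlightServiceServer::with_interceptor(svc, check_auth).
    fn check_auth(req: Request<()>) -> Result<Request<()>, Status> {
        // Placeholder credentials: "admin:password", base64-encoded.
        let expected = MetadataValue::from_static("Basic YWRtaW46cGFzc3dvcmQ=");
        match req.metadata().get("authorization") {
            Some(value) if expected == value => Ok(req),
            _ => Err(Status::unauthenticated("missing or invalid credentials")),
        }
    }
    ```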
  • feat: add support for producing SQL in multiple dialects

    Federation wasn't working with the Postgres dialect, and ConnectorX was crashing on an example using Postgres, so this PR fixes both issues.

    The dialect parameter allows extending query generation to other dialects in the future (see the sketch after this item).

    opened by sardination 1
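
    As an illustration of what a dialect parameter buys (a sketch of the idea, not the crate's actual API): the same identifier has to be rendered differently depending on the target engine.

    ```rust
    /// Illustrative stand-in for a dialect parameter.
    #[derive(Clone, Copy)]
    enum Dialect {
        Postgres,
        MySql,
        Sqlite,
    }

    /// Quote an identifier the way the target engine expects.
    fn quote_ident(dialect: Dialect, ident: &str) -> String {
        match dialect {
            // Postgres and SQLite use double quotes; MySQL uses backticks.
            Dialect::Postgres | Dialect::Sqlite => format!("\"{ident}\""),
            Dialect::MySql => format!("`{ident}`"),
        }
    }

    fn main() {
        assert_eq!(quote_ident(Dialect::MySql, "TrackId"), "`TrackId`");
    }
    ```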
  • Pass VirtualExecutionPlan Schema to SQLExecutors

    This PR simplifies FlightSQL execution plans by inferring the schema from the VirtualExecutionPlan, which is built from the sub-logical plan sliced out of the overall query. This avoids the need for block_on and simplifies the code further.

    Additionally, I added a new async trait method, get_table_schema, so SQLExecutors can implement arbitrary logic to report the overall schema of each table (an illustrative sketch follows this item).

    opened by devinjdangelo 1
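
    A hedged sketch of what such an async schema hook can look like; the trait name and signature here are illustrative (and assume the async_trait crate), not necessarily what the crate ships.

    ```rust
    use std::sync::Arc;
    use async_trait::async_trait;
    use datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef};
    use datafusion::error::Result;

    /// Illustrative version of an async get_table_schema hook.
    #[async_trait]
    trait RemoteSchemaSource: Send + Sync {
        /// Ask the remote engine for the Arrow schema of a table.
        async fn get_table_schema(&self, table_name: &str) -> Result<SchemaRef>;
    }

    /// Trivial implementation that reports a fixed single-column schema.
    struct StaticSource;

    #[async_trait]
    impl RemoteSchemaSource for StaticSource {
        async fn get_table_schema(&self, _table_name: &str) -> Result<SchemaRef> {
            let field = Field::new("TrackId", DataType::Int64, false);
            Ok(Arc::new(Schema::new(vec![field])))
        }
    }
    ```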
  • ci: fixes to the ci

    1. Included commitlint.config.js to fix the PullRequest action.
    2. The Release action now runs only when commits touch the datafusion-federation crate.
    3. Git-ignored the files generated by the PullRequest action.
    4. Fixed the manifest file of the datafusion-federation crate to remove defaults.

    opened by rajantikare 0
  • Add SessionStateProvider

    I added a SessionStateProvider to allow setting a different SessionState based on the incoming request. This can be used to resolve queries differently depending on, for example, auth headers; a rough sketch of the idea follows this item.

    opened by backkem 0
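
    For illustration, the shape of such a provider might look like the following; the actual trait added by the PR may differ in name and signature.

    ```rust
    use async_trait::async_trait;
    use datafusion::execution::context::SessionState;
    use tonic::metadata::MetadataMap;

    /// Illustrative per-request hook: pick the SessionState (catalogs, config,
    /// credentials) for a request, e.g. based on its Authorization header.
    #[async_trait]
    trait StateProvider: Send + Sync {
        async fn state_for_request(&self, headers: &MetadataMap) -> SessionState;
    }
    ```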
  • Capitalize Cargo.toml and Make SQLExecutor::execute Sync

    This project is super cool already, and I especially loved looking through the optimizer/analyzer code. I think the sqlite-partial example is a fantastic proof of concept! Thank you for your work @backkem!

    I had some difficulty getting the project to compile and work with rust-analyzer initially. Capitalizing all instances of Cargo.toml fixed that.

    We can eliminate the need for tokio::block_on in the VirtualExecutionPlan by making SQLExecutor::execute sync. We can still support async code in those execution plans by using methods like futures::stream::unfold to return an async stream from a sync function. See for example here: https://github.com/devinjdangelo/DataWeb/blob/main/webengine/src/web_source.rs#L308, and the sketch after this item.

    I tried for a while to improve the CXExecutor to use new_record_batch_iter and then futures::stream::iter to expose it as an async stream. This fails because the returned iterator is not a true Iterator and is not thread-safe, so in general it cannot be used within a stream.

    I'll open an issue and think more about how we can support true streaming / batched execution with connector-x.

    opened by devinjdangelo 0
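
    A minimal, self-contained sketch of that unfold pattern; the paging scheme and fetch_page helper are made up for illustration.

    ```rust
    use futures::stream::{self, Stream};

    /// Hypothetical page-by-page fetch against a remote source; pretends the
    /// source is exhausted after three pages of ten rows.
    async fn fetch_page(offset: u64) -> Option<Vec<u64>> {
        (offset < 30).then(|| (offset..offset + 10).collect())
    }

    /// A sync function that still yields asynchronously produced data:
    /// unfold drives fetch_page lazily each time the stream is polled.
    fn rows_stream() -> impl Stream<Item = Vec<u64>> {
        stream::unfold(0u64, |offset| async move {
            let page = fetch_page(offset).await?;
            Some((page, offset + 10))
        })
    }
    ```

    Inside a physical plan, the resulting stream would then be wrapped in a RecordBatchStreamAdapter together with the expected schema.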
  • How does a federated node handle "predicate pushdown"?

    Hello all,

    I tested a SQL query like: SELECT t.TrackId, t.Name AS TrackName, a.Title AS AlbumTitle, ar.Name AS ArtistName FROM Track t JOIN Album a ON t.AlbumId = a.AlbumId JOIN Artist ar ON a.ArtistId = ar.ArtistId WHERE t.TrackId > 375 ORDER BY t.TrackId LIMIT 10

    and got the following logical plan:

    (logical plan screenshot)

    Why is the t.TrackId filter not pushed down?

    How can this optimization be done? (A sketch for inspecting the optimized plan follows this item.)

    opened by oikomi 1
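
    One way to check where the filter ends up is to print the optimized logical plan; here is a sketch assuming the Chinook tables are already registered through the federation layer.

    ```rust
    use datafusion::error::Result;
    use datafusion::prelude::SessionContext;

    /// Print the optimized logical plan so you can see whether the TrackId
    /// predicate lands inside the federated sub-plan or stays above it.
    async fn show_plan(ctx: &SessionContext) -> Result<()> {
        let df = ctx
            .sql("SELECT t.TrackId FROM Track t WHERE t.TrackId > 375 ORDER BY t.TrackId LIMIT 10")
            .await?;
        let plan = df.into_optimized_plan()?;
        println!("{}", plan.display_indent());
        Ok(())
    }
    ```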
  • chore(deps,cargo): update tonic requirement from 0.10.2 to 0.11.0

    Updates the requirements on tonic to permit the latest version.

    Changelog

    Sourced from tonic's changelog.

    0.11.0 (2024-02-08)

    BREAKING CHANGES:

    • Removed NamedService from the transport module, please import it via tonic::server::NamedService.
    • MSRV bumped to 1.70.

    Features

    • Added zstd compression support.
    • Added connection timeout for connect_with_connector_lazy.
    • Upgrade rustls to v0.22
    • Feature gate server implementation for tonic-reflection.

    0.10.2 (2023-09-28)

    Bug Fixes

    • web: Client decoding incomplete buffer bug (#1540) (83e363a)

    0.10.1 (2023-09-21)

    Bug Fixes

    0.10.0 (2023-09-08)

    Bug Fixes

    Features

    • build: Add optional default unimplemented stubs (#1344) (aff1daf)
    • core: amortize many ready messages into fewer, larger buffers (#1423) (76eedc1)
    • tonic-types: add ability to extract rich error details from google.rpc.Status (#1430) (5fd635a)
    • transport: Add Router::into_router (#1442) (ea06a1b)
    • transport: Expose TcpConnectInfo fields (#1449) (74b079c)
    • web: Add GrpcWebClientService (#1472) (dc29c17)

    ... (truncated)

    dependencies rust 
    opened by dependabot[bot] 0
  • chore(deps,cargo): update datafusion requirement from 34.0.0 to 36.0.0

    Updates the requirements on datafusion to permit the latest version.

    Changelog

    Sourced from datafusion's changelog.

    (The quoted excerpt contains only the Apache license header and the changelog-generator configuration; no release notes are shown.)

    dependencies rust 
    opened by dependabot[bot] 0
  • chore(deps,cargo): update arrow requirement from 49.0.0 to 50.0.0

    Updates the requirements on arrow to permit the latest version.

    Changelog

    Sourced from arrow's changelog.

    50.0.0 (2024-01-08)

    Full Changelog

    Breaking changes:

    Implemented enhancements:

    • Support get offsets or blocks info from arrow file. #5252 [arrow]
    • Make regexp_match take scalar pattern and flag #5246 [arrow]
    • Cannot access pen state website on arrow-row #5238 [arrow]
    • RecordBatch with_schema's error message is hard to read #5227 [arrow]
    • Support cast between StructArray. #5219 [arrow]
    • Remove nightly-only simd feature and related code in ArrowNumericType #5185 [arrow]
    • Use Vec instead of Slice in ColumnReader #5177 [parquet]
    • Request to Memmap Arrow IPC files on disk #5153 [arrow]
    • GenericColumnReader::read_records Yields Truncated Records #5150 [parquet]
    • Nested Schema Projection #5148 [parquet] [arrow]
    • Support specifying quote and escape in Csv WriterBuilder #5146 [arrow]
    • Support casting of Float16 with other numeric types #5138 [arrow]
    • Parquet: read parquet metadata with page index in async and with size hints #5129 [parquet]
    • Cast from floating/timestamp to timestamp/floating #5122 [arrow]
    • Support Casting List To/From LargeList in Cast Kernel #5113 [arrow]
    • Expose a path for converting bytes::Bytes into arrow_buffer::Buffer without copy #5104 [arrow]
    • API inconsistency of ListBuilder make it hard to use as nested builder #5098 [arrow]
    • Parquet: don't truncate min/max statistics for float16 and decimal when writing file #5075 [parquet]
    • Parquet: derive boundary order when writing columns #5074 [parquet]
    • Support new Arrow PyCapsule Interface for Python FFI #5067 [arrow]
    • 48.0.1 arrow patch release #5050 [parquet] [arrow]
    • Binary columns do not receive truncated statistics #5037 [parquet]
    • Re-evaluate Explicit SIMD Aggregations #5032 [arrow]
    • Min/Max Kernels Should Use Total Ordering #5031 [arrow]
    • Allow zip compute kernel to take Scalar / Datum #5011 [arrow]
    • Add Float16/Half-float logical type to Parquet #4986 [parquet]
    • feat: cast (Large)List to FixedSizeList #5081 [arrow] (wjones127)
    • Update Parquet Encoding Documentation #5051 [parquet]

    Fixed bugs:

    • json schema inference can't handle null field turned into object field in subsequent rows #5215 [arrow]
    • Invalid trailing content after Z in timezone is ignored #5182 [arrow]
    • Take panics on a fixed size list array when given null indices #5169 [arrow]

    ... (truncated)

    dependencies rust 
    opened by dependabot[bot] 0
  • chore(deps,cargo): update arrow-flight requirement from 49.0.0 to 50.0.0

    Updates the requirements on arrow-flight to permit the latest version.

    Changelog

    Sourced from arrow-flight's changelog.

    50.0.0 (2024-01-08): the quoted changelog excerpt is identical to the arrow 50.0.0 excerpt above.

    dependencies rust 
    opened by dependabot[bot] 0
  • chore(deps,cargo): update datafusion-substrait requirement from 34.0.0 to 36.0.0

    Updates the requirements on datafusion-substrait to permit the latest version.

    Changelog

    Sourced from datafusion-substrait's changelog.

    (As with the datafusion update above, the quoted excerpt contains only the Apache license header and changelog-generator configuration.)

    dependencies rust 
    opened by dependabot[bot] 0
Owner
Community maintained unofficial extensions for Apache Arrow DataFusion
Distributed compute platform implemented in Rust, and powered by Apache Arrow.

Ballista: Distributed Compute Platform Overview Ballista is a distributed compute platform primarily implemented in Rust, powered by Apache Arrow. It

Ballista 2.3k Jan 3, 2023
Python package to compute levensthein distance in rust

Contents Introduction Installation Usage License Introduction Rust implementation of levensthein distance (https://en.wikipedia.org/wiki/Levenshtein_d

Thibault Blanc 2 Feb 21, 2022
Toy library for neural networks in Rust using Vulkan compute shaders

descent Toy library for neural networks in Rust using Vulkan compute shaders. Features Multi-dimensional arrays backed by Vulkan device memory Use Rus

Simon Brown 71 Dec 16, 2022
Mars is a rust machine learning library. [Goal is to make Simple as possible]

Mars Mars (ma-rs) is an blazingly fast rust machine learning library. Simple and Powerful! ?? ?? Contribution: Feel free to build this project. This i

KoBruh 3 Dec 25, 2022
Simulation of sand falling down in a cave built using nannou (Rust)

nannou-sand-simulation Learning nannou, an open-source creative-coding toolkit for Rust, by implementing a visualization for a simulation of sand fall

Luciano Mammino 3 Dec 20, 2022
Apache Arrow DataFusion and Ballista query engines

DataFusion is an extensible query execution framework, written in Rust, that uses Apache Arrow as its in-memory format.

The Apache Software Foundation 2.9k Jan 2, 2023
xh is a friendly and fast tool for sending HTTP requests. It reimplements as much as possible of HTTPie's excellent design, with a focus on improved performance.

xh is a friendly and fast tool for sending HTTP requests. It reimplements as much as possible of HTTPie's excellent design, with a focus on improved performance

Mohamed Daahir 3.4k Jan 6, 2023
Getting the token's holder info and pushing to a web server.

Purpose of this program I've made this web scraper so you can use it to get the holder's amount from BSCscan and it will upload for you in JSON format

null 3 Jul 7, 2022
A Rust library for evaluating log4j substitution queries in order to determine whether or not malicious queries may exist.

log4j_interpreter A Rust library for evaluating log4j substitution queries in order to determine whether or not malicious queries may exist. Limitatio

Fastly 22 Nov 7, 2022
Grsql is a great tool to allow you set up your remote sqlite database as service and CRUD(create/read/update/delete) it using gRPC.

Grsql is a great tool to allow you set up your remote sqlite database as service and CRUD (create/ read/ update/ delete) it using gRPC. Why Create Thi

Bruce Yuan 33 Dec 16, 2022
An experimental implementation of Arc against Apache Datafusion

box This is an experimental repository to perform a proof of concept replacement of the Apache Spark executor for Arc with Apache DataFusion. This is

tripl.ai 1 Nov 26, 2021
CloudLLM is a Rust library designed to seamlessly bridge applications with remote Language Learning Models (LLMs) across various platforms.

CloudLLM CloudLLM is a Rust library designed to seamlessly bridge applications with remote Language Learning Models (LLMs) across various platforms. W

null 4 Oct 13, 2023
A tool that allow you to run SQL-like query on local files instead of database files using the GitQL SDK.

FileQL - File Query Language FileQL is a tool that allow you to run SQL-like query on local files instead of database files using the GitQL SDK. Sampl

Amr Hesham 39 Mar 12, 2024
An object-relational in-memory cache, supports queries with an SQL-like query language.

qlcache An object-relational in-memory cache, supports queries with an SQL-like query language. Warning This is a rather low-level library, and only p

null 3 Nov 14, 2021
A query builder that builds and typechecks queries at compile time

typed-qb: a compile-time typed "query builder" typed-qb is a compile-time, typed, query builder. The goal of this crate is to explore the gap between

ferrouille 3 Jan 22, 2022
Rust library to parse, deparse and normalize SQL queries using the PostgreSQL query parser

This Rust library uses the actual PostgreSQL server source to parse SQL queries and return the internal PostgreSQL parse tree.

pganalyze 37 Dec 18, 2022
The joker_query is a cute query builder, with Joker can implement most complex queries with sugar syntax

joker_query The joker_query is most sugared query builder of Rust, with joker_query can implement most complex queries with sugar syntax Features − (O

DanyalMh 8 Dec 13, 2023
Remote-Archive is a utility for exploring remote archive files without downloading the entire contents of the archive.

[WIP] REMOTE-ARCHIVE Remote-Archive is a utility for exploring remote archive files without downloading the entire contents of the archive. The idea b

null 4 Nov 7, 2022
Resolve JavaScript/TypeScript module with Rust

ES Resolve JavaScript/TypeScript module resolution in Rust Installation cargo add es_resolve Get Started use std::path::{Path, PathBuf}; use es_resolv

wang chenyu 2 Oct 12, 2022
Load and resolve Cargo configuration.

cargo-config2 Load and resolve Cargo configuration. This library is intended to accurately emulate the actual behavior of Cargo configuration, for exa

Taiki Endo 6 Jan 10, 2023