
Overview

SQLx

🧰 The Rust SQL Toolkit


Built with ❤️ by The LaunchBadge team

SQLx is an async, pure Rust SQL crate featuring compile-time checked queries without a DSL.

  • Truly Asynchronous. Built from the ground-up using async/await for maximum concurrency.

  • Type-safe SQL (if you want it) without DSLs. Use the query!() macro to check your SQL and bind parameters at compile time. (You can still use dynamic SQL queries if you like.)

  • Database Agnostic. Support for PostgreSQL, MySQL, SQLite, and MSSQL.

  • Pure Rust. The Postgres and MySQL/MariaDB drivers are written in pure Rust using zero unsafe†† code.

† The SQLite driver uses the libsqlite3 C library as SQLite is an embedded database (the only way we could be pure Rust for SQLite is by porting all of SQLite to Rust).

†† SQLx uses #![forbid(unsafe_code)] unless the sqlite feature is enabled. As the SQLite driver interacts with C, those interactions are unsafe.


  • Cross-platform. Being native Rust, SQLx will compile anywhere Rust is supported.

  • Built-in connection pooling with sqlx::Pool.

  • Row streaming. Data is read asynchronously from the database and decoded on-demand.

  • Automatic statement preparation and caching. When using the high-level query API (sqlx::query), statements are prepared and cached per-connection.

  • Simple (unprepared) query execution including fetching results into the same Row types used by the high-level API. Supports batch execution and returning results from all statements.

  • Transport Layer Security (TLS) where supported (MySQL and PostgreSQL).

  • Asynchronous notifications using LISTEN and NOTIFY for PostgreSQL.

  • Nested transactions with support for save points.

  • Any database driver for changing the database driver at runtime. An AnyPool connects to the driver indicated by the URI scheme.
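
As an illustrative sketch of the nested-transaction support above (the `accounts` table and values are hypothetical; assumes the `postgres` feature and a live connection), beginning a transaction inside another one creates a savepoint under the hood:

```rust
use sqlx::{Connection, PgConnection};

async fn adjust_balance(conn: &mut PgConnection) -> Result<(), sqlx::Error> {
    // Outer transaction: BEGIN
    let mut tx = conn.begin().await?;

    // Inner transaction: issues a SAVEPOINT
    let mut inner = tx.begin().await?;
    sqlx::query("UPDATE accounts SET balance = balance - 10 WHERE id = $1")
        .bind(1_i64)
        .execute(&mut *inner)
        .await?;
    // Roll back to the savepoint; the outer transaction is unaffected
    inner.rollback().await?;

    // Commit the outer transaction
    tx.commit().await?;
    Ok(())
}
```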

Install

SQLx is compatible with the async-std, tokio, and actix runtimes, and with the native-tls and rustls TLS backends. When adding the dependency, you must choose a feature that combines a runtime with a TLS backend.

# Cargo.toml
[dependencies]
# tokio + rustls
sqlx = { version = "0.5", features = [ "runtime-tokio-rustls" ] }
# async-std + native-tls
sqlx = { version = "0.5", features = [ "runtime-async-std-native-tls" ] }

Bundling the runtime and TLS backend into a single feature, rather than selecting each separately, is a workaround for a Cargo issue.

Cargo Feature Flags

  • runtime-async-std-native-tls (on by default): Use the async-std runtime and native-tls TLS backend.

  • runtime-async-std-rustls: Use the async-std runtime and rustls TLS backend.

  • runtime-tokio-native-tls: Use the tokio runtime and native-tls TLS backend.

  • runtime-tokio-rustls: Use the tokio runtime and rustls TLS backend.

  • runtime-actix-native-tls: Use the actix runtime and native-tls TLS backend.

  • runtime-actix-rustls: Use the actix runtime and rustls TLS backend.

  • postgres: Add support for the Postgres database server.

  • mysql: Add support for the MySQL/MariaDB database server.

  • mssql: Add support for the MSSQL database server.

  • sqlite: Add support for the self-contained SQLite database engine.

  • any: Add support for the Any database driver, which can proxy to a database driver at runtime.

  • macros: Add support for the query*! macros, which allow compile-time checked queries.

  • migrate: Add support for the migration management and migrate! macro, which allow compile-time embedded migrations.

  • uuid: Add support for UUID (in Postgres).

  • chrono: Add support for date and time types from chrono.

  • time: Add support for date and time types from the time crate (an alternative to chrono; preferred by the query! macros if both are enabled).

  • bstr: Add support for bstr::BString.

  • git2: Add support for git2::Oid.

  • bigdecimal: Add support for NUMERIC using the bigdecimal crate.

  • decimal: Add support for NUMERIC using the rust_decimal crate.

  • ipnetwork: Add support for INET and CIDR (in postgres) using the ipnetwork crate.

  • json: Add support for JSON and JSONB (in postgres) using the serde_json crate.

  • tls: Add support for TLS connections.

Usage

Quickstart

# Cargo.toml
[dependencies]
sqlx = { version = "0.5", features = [ "runtime-async-std-native-tls", "postgres" ] }
async-std = { version = "1.6", features = [ "attributes" ] }

use sqlx::postgres::PgPoolOptions;
// use sqlx::mysql::MySqlPoolOptions;
// etc.

#[async_std::main]
// or #[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    // Create a connection pool
    //  for MySQL, use MySqlPoolOptions::new()
    //  for SQLite, use SqlitePoolOptions::new()
    //  etc.
    let pool = PgPoolOptions::new()
        .max_connections(5)
        .connect("postgres://postgres:password@localhost/test").await?;

    // Make a simple query to return the given parameter
    let row: (i64,) = sqlx::query_as("SELECT $1")
        .bind(150_i64)
        .fetch_one(&pool).await?;

    assert_eq!(row.0, 150);

    Ok(())
}

Connecting

A single connection can be established using any of the database connection types and calling connect().

use sqlx::Connection;
use sqlx::sqlite::SqliteConnection;

let conn = SqliteConnection::connect("sqlite::memory:").await?;

Generally, you will want to instead create a connection pool (sqlx::Pool) in order for your application to regulate how many server-side connections it's using.

let pool = MySqlPool::connect("mysql://user:pass@host/database").await?;

Querying

In SQL, queries can be separated into prepared (parameterized) or unprepared (simple). Prepared queries have their query plan cached, use a binary mode of communication (lower bandwidth and faster decoding), and utilize parameters to avoid SQL injection. Unprepared queries are simple and intended only for use cases where a prepared statement will not work, such as various database commands (e.g., PRAGMA, SET, or BEGIN).

SQLx supports all operations with both types of queries. In SQLx, a &str is treated as an unprepared query and a Query or QueryAs struct is treated as a prepared query.

// low-level, Executor trait
conn.execute("BEGIN").await?; // unprepared, simple query
conn.execute(sqlx::query("DELETE FROM table")).await?; // prepared, cached query

Prefer the high-level query interface whenever possible. To make this easier, there are finalizers on the query types that avoid the need to wrap them with an executor.

sqlx::query("DELETE FROM table").execute(&mut conn).await?;
sqlx::query("DELETE FROM table").execute(&pool).await?;

The execute query finalizer returns the number of affected rows, if any, and drops all received results. In addition, there are fetch, fetch_one, fetch_optional, and fetch_all to receive results.
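
For example (an illustrative sketch assuming the pool from the Quickstart and a hypothetical users table; the affected-row count is read from the database-specific query result, e.g. PgQueryResult):

```rust
// `execute` drops any returned rows and reports how many were affected.
let result = sqlx::query("DELETE FROM users WHERE id = $1")
    .bind(42_i64)
    .execute(&pool)
    .await?;

println!("deleted {} rows", result.rows_affected());
```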

The Query type returned from sqlx::query will return Row<'conn> from the database. Column values can be accessed by ordinal or by name with row.get(). As the Row retains an immutable borrow on the connection, only one Row may exist at a time.

The fetch query finalizer returns a stream-like type that iterates through the rows in the result sets.

// provides `try_next`
use futures::TryStreamExt;

let mut rows = sqlx::query("SELECT * FROM users WHERE email = ?")
    .bind(email)
    .fetch(&mut conn);

while let Some(row) = rows.try_next().await? {
    // map the row into a user-defined domain type
    let email: &str = row.try_get("email")?;
}

To assist with mapping the row into a domain type, there are two idioms that may be used:

let mut stream = sqlx::query("SELECT * FROM users")
    .map(|row: PgRow| {
        // map the row into a user-defined domain type
    })
    .fetch(&mut conn);

#[derive(sqlx::FromRow)]
struct User { name: String, id: i64 }

let mut stream = sqlx::query_as::<_, User>("SELECT * FROM users WHERE email = ? OR name = ?")
    .bind(user_email)
    .bind(user_name)
    .fetch(&mut conn);

Instead of a stream of results, we can use fetch_one or fetch_optional to request one required or optional result from the database.
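
For instance, a lookup that may match zero or one rows can be sketched with fetch_optional (reusing the derived `User` type above; the bind value is illustrative):

```rust
let maybe_user: Option<User> = sqlx::query_as::<_, User>("SELECT * FROM users WHERE id = ?")
    .bind(42_i64)
    .fetch_optional(&mut conn)
    .await?;

match maybe_user {
    Some(user) => println!("found user {}", user.name),
    None => println!("no such user"),
}
```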

Compile-time verification

We can use the sqlx::query! macro to achieve compile-time syntactic and semantic verification of the SQL, with output to an anonymous record type where each SQL column is a Rust field (using raw identifiers where needed).

let countries = sqlx::query!(
        "
SELECT country, COUNT(*) as count
FROM users
WHERE organization = ?
GROUP BY country
        ",
        organization
    )
    .fetch_all(&pool) // -> Vec<{ country: String, count: i64 }>
    .await?;

// countries[0].country
// countries[0].count

Differences from query():

  • The input (or bind) parameters must be given all at once (and they are compile-time validated to be the right number and the right type).

  • The output type is an anonymous record. In the above example the type would be similar to:

    { country: String, count: i64 }
  • The DATABASE_URL environment variable must be set at build time to a database which it can prepare queries against; the database does not have to contain any data but must be the same kind (MySQL, Postgres, etc.) and have the same schema as the database you will be connecting to at runtime.

    For convenience, you can use a .env file to set DATABASE_URL so that you don't have to pass it every time:

    DATABASE_URL=mysql://localhost/my_database
    

The biggest downside to query!() is that the output type cannot be named (due to Rust not officially supporting anonymous records). To address that, there is a query_as!() macro that is identical except that you can name the output type.

// no traits are needed
struct Country { country: String, count: i64 }

let countries = sqlx::query_as!(Country,
        "
SELECT country, COUNT(*) as count
FROM users
WHERE organization = ?
GROUP BY country
        ",
        organization
    )
    .fetch_all(&pool) // -> Vec<Country>
    .await?;

// countries[0].country
// countries[0].count

Safety

This crate uses #![forbid(unsafe_code)] to ensure everything is implemented in 100% Safe Rust.

If the sqlite feature is enabled, this is downgraded to #![deny(unsafe_code)] with #![allow(unsafe_code)] on the sqlx::sqlite module. There are several places where we interact with the C SQLite API. We try to document each call for the invariants we're assuming. We absolutely welcome auditing of, and feedback on, our unsafe code usage.

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Comments
  • [Discussion] Dynamic Query Builder Design


    We have received a lot of requests, and have a significant need ourselves, for a dynamic query builder in SQLx that can generate and execute valid SQL with the appropriate bind parameters for a given database backend at runtime.

    This is to do things like:

    • Dynamically add columns from the query based on what data is needed
    • Dynamically add joins to the query from other tables
    • Dynamically add where clauses based on user input
    • Generate INSERT INTO foo(...) VALUES(...) statements with a variable number of records in the VALUES expression, taking into account the SQL flavor's max number of bind parameters per query (which we can build a more specialized API for on top of this)

    MVP

    I'm thinking the minimum viable product for this would be something like the following:

    pub struct QueryBuilder<DB> { ... }
    
    impl<DB> QueryBuilder<DB> {
        /// Create a builder with the initial SQL literal to append to
        pub fn new(init: impl Into<String>) -> Self { ... }
    
        /// Push more SQL code
        /// Push more SQL code
        pub fn push(&mut self, sql: impl Display) -> &mut Self { ... }

        /// Push a bind parameter to the SQL and encode its value
        /// RFC: should we be taking values here, or bind once the query is built?
        pub fn push_bind(&mut self, value: impl Encode<DB>) -> &mut Self { ... }

        /// Append a comma-separated list, with the closure being repeatedly called to add the values between commas until it returns `false`.
        ///
        /// This seems a bit higher level but it would be really useful for generating `VALUES` lists with binds
        pub fn comma_sep(&mut self, f: impl FnMut(&mut QueryBuilder<DB>) -> bool) -> &mut Self { ... }

        /// Finalize the query so it can be executed
        ///
        /// I'm thinking this clones the query so the builder can be reused
        pub fn build(&mut self) -> sqlx::Query<'static, DB> { ... }
    
        /// Alternatively, we have `.execute()`, `.fetch()`, `.fetch_all()`, etc. on this struct
    }
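
    To make this concrete, here is a hypothetical usage sketch against the proposed API above; none of these names exist in SQLx yet, so this is pseudocode, not compiling code:

    ```rust
    // Hypothetical: builds a query with a dynamically added filter.
    let mut builder = QueryBuilder::<Postgres>::new("SELECT * FROM users WHERE 1 = 1");

    if let Some(name) = name_filter {
        builder.push(" AND name = ");
        builder.push_bind(name);
    }

    // `build` finalizes the SQL and binds for execution
    let users = builder.build().fetch_all(&pool).await?;
    ```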
    

    Thoughts to the Future

    • Some sort of macro that can do interpolations and dynamic repetitions like quote::quote!()
      • Perhaps still somehow checking the query at compile time
      • The catch here is that the syntax would have to be tokenizable by Rust's lexer and I think there's a few things about SQL which violate that (single-quote strings, for one)
    • A dedicated API for generating efficient INSERT INTO statements
      • Perhaps a proc-macro that generates an insertion builder for a given table with specific columns and return values

    Non-Goals

    • We don't want to go overboard and implement a fully fledged query DSL yet, that's a discussion for another day

    cc @mehcode @izik1 @danielakhterov

    opened by abonander 31
  • Upgrade to time 0.3


    Continuation of #1444

    Still left to do:

    • [x] Find a solution for https://github.com/time-rs/time/discussions/356
    • [x] Update documentation

    Closes #1277

    opened by paolobarbolini 26
  • Request for Comments: Possible Strategies for Public Dependency Version Management


    The Situation

    As of writing, we have a lot of PRs piled up for the version 0.6.0 release, many of which are just breaking upgrades of dependencies: #1426, #1455, #1505, #1529, #1733

    Since these are exposed in our public API (although rustls is getting fixed so it's no longer public), we have to cut a backwards-incompatible (0.{x+1}.0) release when we upgrade them to backwards-incompatible versions to maintain our SemVer guarantees.

    However, we currently don't cut breaking releases very often (the last one was over a year ago) for reasons which I'll get into.

    I'm not a huge fan of this situation for a couple main reasons:

    • It forces people to stay on older, possibly insecure or broken versions of these dependencies until we get around to cutting a new release.
    • When we do cut a new release, it forces people to accept possibly quite inconvenient breaking changes to SQLx itself just to upgrade some dependencies.

    Options

    I've thought of a handful of possible solutions here, but none of them feel like The Correct One to me, so I'd like to get some feedback and do some workshopping here. The last one I came up with while writing this and is definitely very interesting, but I'm mentioning all of them for the sake of discussion.

    Cut More Granular Breaking Releases More Often

    This has the benefit of not forcing people to wait forever to upgrade dependencies, and makes upgrading more palatable as there would often be fewer breaking changes in SQLx itself, or none at all.

    However, because we only provide bugfixes to the latest 0.x release, this means that people who choose to remain on old versions (or just forget to upgrade) won't receive those fixes. Maybe a possible solution here would be to start offering LTS releases, but that's a lot of work to do for free.

    The main reasons why we're not already doing this are that I personally feel like breaking-change releases should have some meat to them to justify the effort of upgrading, and I don't really like the optics of frequent breaking-change releases. When I look at a crate's version history and see that it puts out a breaking release every month or so, it doesn't exactly inspire a lot of confidence. It shows that it's maintained, sure, but also that it's probably in a pretty experimental state or maybe has an inexperienced maintainer who doesn't put a lot of thought into their API design. That's not the kind of vibe we want to put out with SQLx.

    Use Version Ranges for Public Dependencies

    For dependency upgrades where there were no breaking changes to the parts of that dependency's API that SQLx uses directly, we could use a version range for that dependency which allows the root crate in the dependency graph to select any compatible version in that range. I have previously used this strategy in my img_hash crate (which I haven't had the time or energy to maintain, sorry) for the integration with the image crate.

    As for testing in CI, I think we'd be okay just testing against the latest version in the range for a dependency, assuming we only do this when we can just widen the range without having to change the code and the previous range was covered by past CI runs.

    This is ostensibly the option I like the most, however we're still back to square one for dependency upgrades with breaking changes to APIs that we need in SQLx (e.g. to provide Encode/Decode impls), so I would ideally like to find a more general solution that covers both cases. Although, since the API surface that SQLx touches is generally rather small for any integration, having to increase the minimum version for the range due to breaking changes that matter to us should hopefully be quite rare.

    A big issue though, which is something @mehcode brought up a long time ago when I originally suggested doing this for SQLx, is the potential of the root crate failing to constrain the version range, e.g. someone using sqlx::types::OffsetDateTime and never adding time to their own dependencies to constrain the version appropriate to their usage. A cargo update could naively perform a breaking-change upgrade, as SQLx's version requirements aren't always going to match the end user's, whether they remember to specify them or not.

    We would probably need to remove those reexports to eliminate that hazard, although the macros use them so that they aren't affected if the root crate renames one of these dependencies. We could hide them, but it's still potentially an issue if the end user only uses those types through, e.g., query!(), where they never have to name them themselves.

    We would need to force the user to include the packages in their own dependencies and never rename them, then the macros could just assume that those crates exist in the global namespace.

    Move Type Integrations to External Crates

    This one obviously falls afoul of the orphan rule unless we use newtype wrappers, which have a significant impact on ergonomics.

    However, we still have the same issue of how to reference the types from the macros. If the sqlx facade depends on them with a version range and reexports them, then we have the version range constraint issue. If it doesn't then we still have to force the user to directly depend on the crates and not rename them. There's also the whole new issue of how to assign version numbers to these crates when they have to be semantically versioned against both sqlx-core and the crate they're providing the integration for.

    Overall, this feels inferior to the previous option.

    Provide a Separate Cargo Feature for Each Version of a Dependency

    This was suggested for time 0.2 and time 0.3 but we tried it and it didn't work very well.

    For crates that don't have this issue, we could do this, but it doesn't seem like it would scale well, and we'd need to make a breaking-change release anyway when we decide to drop support for old versions of crates.

    Implement Our Own Containers for Various Types In-Tree

    I've thought about doing this for types like Postgres' MACADDR8, for which an appropriate crate doesn't really exist right now. We already do this for more niche, database-specific types, again namely from Postgres thanks to its rich type system.

    This would fix all the issues with external dependencies, but create a whole slew of new ones.

    For one, we rely pretty heavily on the external crates for parsing from text formats since hand-rolling that would be redundant. We would also be taking on the burden of designing useful APIs for all of these types. We could provide conversions to their counterparts in popular crates for that, but then we're back to square one with the semver issue again.

    However, while writing this I had a bolt-from-the-blue idea:

    :zap: Implement Wrapper Types and Provide Conversions via Version Ranges :zap:

    I had to run this past @mehcode because I wasn't sure whether it was absolutely brilliant or completely insane. I think the jury is still out on that.

    We would create wrapper types using existing crates for their internal functionality, on separate feature flags that aren't tied to our semver, and then provide conversions to the types publicly through a separate package with as wide of a version range as possible, e.g.:

    [dependencies]
    _time_internal = { version = "0.3", package = "time" }
    time = { version = ">=0.2, <0.4", optional = true }
    
    pub struct TimestampTz(_time_internal::OffsetDateTime);
    
    // impl Type, Encode, Decode
    // also Serialize, Deserialize behind a feature flag
    
    // plus a conservative set of inherent methods copied from `OffsetDateTime`
    
    // also impl Type, Encode, Decode for `time::OffsetDateTime` if enabled
    // as well as `From<TimestampTz> for time::OffsetDateTime`
    // these impls would touch the smallest subset of the `time` API that they could, e.g.
    // using `TimestampTz` for the heavy lifting and then just converting to the final type via unix_timestamp_nanos + to_offset
    // which is supported by both time 0.2 and time 0.3
    

    We can then upgrade _time_internal with impunity, the macros can directly reference TimestampTz, and we don't have the issue of unconstrained version ranges because the user would only interact with the ranged time dependency if they were already using it in their own crate, where it should be properly constrained.

    This would also let us "fix" things like time's default human-readable serialization which is not ISO-8601 compatible even though that's most often the format returned from web APIs.

    For new versions of crates where we can't feasibly include them in the version range, we could include them as a new feature.

    This would be a pretty sweeping breaking change, however I think once we rip the band-aid off it'll be smooth sailing from there and we can save breaking releases for actual breaking changes to SQLx. This might let us someday soon cut a 1.0!

    @mehcode was also concerned about allowing query_as!() invocations to continue using time::OffsetDateTime directly, and suggested having it invoke Into internally. I think that would be best combined with reworking the typechecking code to use const-eval and const-panic that I've been talking about doing for ages.

    opened by abonander 24
  • unknown message type: '\u{0}' for postgres


    During benchmarking, I got a lot of errors like the following:

    ERROR sqlx_core::pool::connection] error occurred while flushing the connection: encountered unexpected or invalid data: unknown message type: '\u{0}'

    I don't know if it's related to this issue, but in Postgres I also get the following warning: [550] WARNING: there is no transaction in progress (the app does have a transaction, and it seems for some reason it was not started)

    opened by dmitryb-dev 22
  • Unwrap on from_utf8 causes panic with non-UTF-8 database server/client encodings


    Environment

    • Rust toolchain version: 1.53.0-nightly (74874a690 2021-03-30) x86_64-pc-windows-msvc
    • IDE name and version: CLion 2020.3.3 (CL-203.7717.62)
    • Operating system: Windows 10 10.0
    • sqlx version:
      • name = "sqlx-core"
      • version = "0.5.1"
      • source = "registry+https://github.com/rust-lang/crates.io-index"
      • checksum = "b1cad9cae4ca8947eba1a90e8ec7d3c59e7a768e2f120dc9013b669c34a90711"

    Describing problem

    If the Postgres database has an encoding other than UTF-8, sqlx cannot connect to it due to an error in converting the message bytes to UTF-8. (Sample code was attached as a screenshot in the original issue.)

    What is in the DSN does not matter. When I run this code, I get the panic shown in a screenshot in the original issue.

    Full stacktrace is available here - https://gist.github.com/ShagonRU/6bff4316a33895021b3aaed8554be16a

    ===

    Next are my assumptions based on the stacktrace. After getting an error (or another response) from the database, this code ( sqlx-core-0.5.1\src\postgres\message\response.rs:139:26 ) panics.

    Decode uses this trait impl ( sqlx-core-0.5.1\src\io\decode.rs:13:9 ), and that decode is called here ( sqlx-core-0.5.1\src\postgres\connection\stream.rs:95:48 ).

    So, as a result: if the database is not in UTF-8 encoding, the user will simply get a panic and will not be able to do anything about it. There is also no explanation anywhere about this. Moreover, the server can be in UTF-8 encoding, but if the client is not, there will be the same panic; for example, my Docker container with Postgres gives the same error as my local Postgres. I can only connect to a remote server from my system. (Fortunately I have a remote server, as well as a second non-Windows system, but this is still confusing.)

    Possible solutions

    • Remove .unwrap() and return a correct error about the encoding (though I don't know what the user can do with this error)
    • Find out what encoding was used and decode messages to UTF-8 from it.
    opened by chamoretto 20
  • Pool::acquire times out


    I'm using a connection pool with a larger number of tokio tasks, see the example below. The tasks run a small query, do some work without holding on to a connection, then run another query. In the example, the second call to acquire quickly starts failing with PoolTimedOut, or it takes too long, roughly connect_timeout seconds.

    Any ideas would be appreciated.

    use sqlx::mysql::*;
    async fn run_queries(i: u32, pool: &sqlx::Pool<MySql>) {
        {
            let mut conn = pool.acquire().await.unwrap();
    
            let _ = sqlx::query("SELECT 1").fetch_one(&mut conn).await.unwrap();
    
            // conn gets dropped here and should be returned to the pool
        }
    
        // (do some other work here without holding on to a connection)
    
        {
            let start = std::time::Instant::now();
    
            // this often fails (PoolTimedOut) or takes connect_timeout seconds
            let mut _conn = pool.acquire().await.unwrap();
    
            println!("{} pool.acquire() took {:?}", i, std::time::Instant::now() - start);
    
            // (run another query...)
    
        }
    }
    
    #[tokio::main]
    async fn main() {
        let pool = MySqlPoolOptions::new()
            .max_connections(1) // also fails with higher counts, e.g. 5
            .connect(&std::env::var("DATABASE_URL").unwrap())
            .await
            .unwrap();
    
        let mut handles = vec![];
        for i in 0..100 {
            let pool = pool.clone();
    
            handles.push(tokio::spawn(async move {
                run_queries(i, &pool).await;
            }));
        }
    
        futures::future::join_all(handles).await;
    }
    
    [dependencies]
    sqlx = { version = "0.4.0-beta.1", default-features = false, features = [ "runtime-tokio", "mysql" ] }
    tokio = "0.2.22"
    futures = "0.3.5"
    

    Output snippets:

    thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: PoolTimedOut'
    pool.acquire() took 30.001958452s
    

    MySQL Engine version 8.0.17. Tested with stable-x86_64-unknown-linux-gnu and stable-x86_64-pc-windows-msvc.

    opened by skuzins 20
  • [Ready] Add support for building in "offline/decoupled" mode

    TODO:

    • [x] add support in sqlx-macros
    • [x] rebase on ~develop (will necessitate creating a new PR)~ master
    • [x] implement cargo sqlx prepare
    • [x] integration test
    • [x] document offline mode
    opened by abonander 19
  • Proposal: remove runtime features and async-std support. Still using async-std? Please make yourself heard!


    TL;DR: we want to drop existing runtime features and use Tokio everywhere in SQLx.

    If you are still using runtime-async-std-[native-tls, rustls], please give this issue a :-1: so we know you're out there!

    The runtime-actix-[native-tls, rustls] features would also go away, but as long as Actix is still based on Tokio, SQLx should continue to work with it just fine. Currently, these features exist purely to try to prevent confusion, but they also seem to cause a good bit of confusion at the same time, so getting rid of them doesn't seem controversial to me.

    Background and Motivation

    SQLx is rather unique in its class as an async library that needs heavy runtime support but still tries to support multiple runtimes. Most async libraries are either entirely runtime-agnostic, or committed solely to one runtime.

    SQLx was initially envisioned as an async-std-only library. We had intended to use it with Tide as our web backend stack at Launchbadge. Very soon after release, the request to support Tokio became very popular (heh, we initially labeled that issue wontfix), being echoed both on Discord and on Reddit, and so we added Tokio in 0.2.0.

    We soon added support for the Actix runtime (which is Tokio under the hood) as well, and also settled on Actix-web for our web stack for various reasons. (We're considering Axum for new projects, which is also Tokio-based.)

    Since then, supporting multiple runtimes has turned out to be quite the maintenance burden.

    We had to create a whole subcrate just to provide a unified interface, using mutually exclusive Cargo features because we conditionally re-export items from one of tokio, async-std or actix-rt under conflicting names, and so can't have more than one enabled at once. This has caused a lot of issues, especially because we force the user to pick a runtime even when they would prefer not to (like if they're just trying to use the derive macros).

    The mutual exclusion is also necessary to pick between tokio-native-tls, tokio-rustls, async-std-native-tls and async-std-rustls even though they're all more or less identical, just linking different crates. We believe this could also be resolved by Cargo weak features which will be stabilized in Rust 1.60, but is still worth noting.

    We have 69 CI passes, which take around 15 minutes to run, thanks to having to cover all the possible combinations of the databases, database versions, runtimes and TLS providers that we support, although to keep from going overboard we only test the oldest and newest versions we support of each database.

    Getting rid of runtime features in general and committing solely to Tokio would simplify SQLx, and usage of it, quite a bit.

    These days, Tokio is the superior option.

    • Tokio is actively maintained, while async-std hasn't had a release in 5 months. Last commit to async-std/master was 2 months ago.
    • Tokio has a larger ecosystem with several killer apps:
      • hyper, reqwest, actix-web, rusoto, axum, tokio-tungstenite, tonic, redis, tokio-postgres
      • async-std has just Tide and Surf, I think, and those have stagnated as well.
    • Tokio has a stronger community:
      • Tokio is more popular, with 8x more downloads on crates.io compared to async-std.
      • Tokio has 540 contributors on Github compared to async-std's 126.

    Implementation

    • Drop the runtime-* matrix of features.
    • Remove the sqlx-rt crate and replace usages of sqlx_rt with tokio in other crates.
    • Add native-tls and rustls features which would no longer have to be mutually exclusive; if both are enabled, native-tls should take priority.

    What would happen to async-std users?

    If there's still a significant cohort of async-std SQLx users, we don't want to just kick them to the curb. To inform our decision, we want to gauge how many of them there are, which is why you should :-1: this issue if you would be negatively affected by this change.

    There's also async-compat, which can be used to make Tokio-based crates work on async-std by spawning a single-threaded Tokio runtime onto a background thread.

    It should work well enough for SQLx, with the exception of Pool. PoolConnection wants to spawn a task on-drop to make sure its buffers are flushed and the connection is still usable, so just wrapping a future that uses Pool or PoolConnection in Compat won't be sufficient: the drop handler runs after the tokio::runtime::EnterGuard is dropped. It shouldn't panic, since the drop handler already checks whether the runtime is available, but it does mean that every connection checked out from the Pool would be immediately closed after use, which kind-of defeats the purpose.

    The solution I'm thinking of is to allow providing a tokio::runtime::Handle in PoolOptions, but getting a handle to the Tokio runtime that async-compat spawns would require depending on Tokio in your application, as async-compat doesn't provide direct access to it:

    let pool: sqlx::PgPool = Compat::new(async {
        sqlx::pool::PoolOptions::new()
            .tokio_handle(tokio::runtime::Handle::current())
            .connect("<DATABASE_URL>")
            .await
    }).await?;
    

    Alternatively, this could work without depending on Tokio (I'm thinking we'd provide both):

    let pool: sqlx::PgPool = Compat::new(async {
        sqlx::pool::PoolOptions::new()
            .use_current_tokio()
            .connect("<DATABASE_URL>")
            .await
    }).await?;
    proposal 
    opened by abonander 18
  • more pool fixes

    more pool fixes

    • a task that is marked woken but didn't actually wake before being cancelled will instead wake the next task in the queue

    • a task that wakes but doesn't get a connection will put itself back in the queue instead of waiting until it times out with no way to be woken

    • the idle reaper now won't run if there are tasks waiting for a connection, and also uses the proper SharedPool::release() to return validated connections to the pool so waiting tasks get woken

    closes #622, #1210

    (hopefully for good this time)

    Signed-off-by: Austin Bonander [email protected]

    opened by abonander 18
  • Cannot get past

    Cannot get past "Are you sure you want to drop the database" prompt

    docker pull rust
    docker run -it rust
    cargo install sqlx-cli
    DATABASE_URL=postgres://foo@localhost/bar_test sqlx database drop
    
    Are you sure you want to drop the database at postgres://foo@localhost/bar_test? [n/Y]
    

    I press Y and then get, on a new line:

    Are you sure you want to drop the database at postgres://foo@localhost/bar_test? [Y/n]
    

    (Note the Y/n instead of n/Y)

    I press Y and get the previous prompt again. Repeat ad infinitum.

    I eventually press CTRL-C to bail. I get back to the terminal, but I've lost my cursor.

    opened by xpe 17
  • MySQL VARCHAR is mapped to `Vec<u8>` instead of `String`

    MySQL VARCHAR is mapped to `Vec<u8>` instead of `String`

    The following is my database schema; I expect the VARCHAR and TEXT types to both be mapped to Rust's String. However, I get the error shown in the log below.

    fyp/slide-service> describe `users`;
    +---------------+--------------+------+-----+-------------------+-----------------------------------------------+
    | Field         | Type         | Null | Key | Default           | Extra                                         |
    +---------------+--------------+------+-----+-------------------+-----------------------------------------------+
    | id            | varchar(255) | NO   | PRI | NULL              |                                               |
    | username      | varchar(50)  | NO   | UNI | NULL              |                                               |
    | email         | varchar(255) | NO   | UNI | NULL              |                                               |
    | full_name     | varchar(60)  | YES  |     | NULL              |                                               |
    | password_hash | varchar(255) | NO   |     | NULL              |                                               |
    | avatar_url    | text         | YES  |     | NULL              |                                               |
    | bio           | text         | YES  |     | NULL              |                                               |
    | created_at    | timestamp    | NO   |     | CURRENT_TIMESTAMP | DEFAULT_GENERATED                             |
    | updated_at    | timestamp    | NO   |     | CURRENT_TIMESTAMP | DEFAULT_GENERATED on update CURRENT_TIMESTAMP |
    +---------------+--------------+------+-----+-------------------+-----------------------------------------------+
    

    This is the compile-time error

    error[E0308]: mismatched types
      --> cmd/auth-service/src/db/user.rs:23:20
       |
    23 |         let user = sqlx::query_as!(User, "SELECT * FROM users WHERE username = ?", username)
       |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected struct `std::string::String`, found struct `Vec`
       |
       = note: expected struct `std::string::String`
                  found struct `Vec<u8>`
       = note: this error originates in the macro `$crate::sqlx_macros::expand_query` (in Nightly builds, run with -Z macro-backtrace for more info)
    

    The following is the Rust struct.

    #[derive(Debug, Deserialize, Serialize, sqlx::FromRow, SimpleObject)]
    pub struct User {
        pub id: String,
        pub username: String,
        pub email: String,
        #[serde(skip)]
        #[graphql(skip)]
        pub password_hash: String,
        pub full_name: Option<String>,
        pub bio: Option<String>,
        pub avatar_url: Option<String>,
        pub created_at: DateTime<Utc>,
        pub updated_at: DateTime<Utc>,
    }
    

    One more note: I'm not sure if this is related, but I'm using PlanetScale's MySQL, which uses Vitess under the hood.

    db:mysql macros bug:db 
    opened by marcustut 16
  • Cannot create users or databases using sqlx

    Cannot create users or databases using sqlx

    Bug Description

    I can't create a database user; it gives me this error: DatabaseError("error returned from database: syntax error at or near "$1""). It works if I hard-code the values in the query, but it breaks when I try to use bindings.

    Minimal Reproduction

            sqlx::query("CREATE USER $1 WITH PASSWORD $2")
                .bind(&app_user)
                .bind(&app_password)
                .execute(&mut conn)
                .await?;
    
            sqlx::query("ALTER DATABASE $1 OWNER TO $2")
                .bind(&db_name)
                .bind(&app_user)
                .execute(&mut conn)
                .await?;
    

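    For context (an editor's note, not part of the original report): PostgreSQL only accepts bind parameters in value positions of plannable statements; utility statements such as CREATE USER and ALTER DATABASE take no parameters at all, and identifiers can never be bound. The names have to be spliced into the SQL text with careful quoting. A minimal sketch, with hypothetical helper names:

```rust
// Hypothetical helpers (not part of sqlx): quote an identifier and a
// string literal for safe interpolation into Postgres DDL.
fn quote_ident(name: &str) -> String {
    // Double any embedded double quotes, then wrap in double quotes.
    format!("\"{}\"", name.replace('"', "\"\""))
}

fn quote_literal(value: &str) -> String {
    // Double any embedded single quotes, then wrap in single quotes.
    format!("'{}'", value.replace('\'', "''"))
}

fn main() {
    let app_user = "app_user";
    let app_password = "s3cret";
    let sql = format!(
        "CREATE USER {} WITH PASSWORD {}",
        quote_ident(app_user),
        quote_literal(app_password)
    );
    // The built string would then go to sqlx::query(&sql).execute(...).
    println!("{sql}"); // CREATE USER "app_user" WITH PASSWORD 's3cret'
}
```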
    Info

    • SQLx version: 0.6.2
    • SQLx features enabled: ["runtime-tokio-rustls", "postgres", "chrono", "uuid", "migrate", "decimal"]
    • Database server and version: Postgres
    • Operating system: Linux
    • rustc --version: 1.66.0
    bug 
    opened by Alfiec7113 0
  • Don't run EXPLAIN nullability analysis on Materialize

    Don't run EXPLAIN nullability analysis on Materialize

    Materialize is a PostgreSQL-like database that, similar to CockroachDB, does not support PostgreSQL's EXPLAIN output. Extend the fix from PR #1248 to Materialize, too, so that sqlx can still be used with Materialize.

    See #1248.

    opened by benesch 0
  • Nested arrays can't be encoded: PgHasArrayType is not satisfied

    Nested arrays can't be encoded: PgHasArrayType is not satisfied

    Bug Description

    Contrary to what the code in #1945 implies, passing nested arrays as bind parameters does not work:

    Slices of strings

    let a = &[&["a".to_string()], &["b".to_string()]];
    let mut query = sqlx::query("SELECT $1::text[]");
    query = query.bind(&a[..]);
    query.execute(&db).await.unwrap();
    
    // the trait bound `&[std::string::String; 1]: PgHasArrayType` is not satisfied
    

    Vecs of strings

    let a = vec![vec!["a".to_string()], vec!["b".to_string()]];
    let mut query = sqlx::query("SELECT $1::text[]");
    query = query.bind(a);
    query.execute(&db).await.unwrap();
    
    // the trait bound `Vec<String>: PgHasArrayType` is not satisfied
    

    Vecs of integers

    let a: Vec<Vec<i32>> = vec![vec![1], vec![2]];
    let mut query = sqlx::query("SELECT $1::int[]");
    query = query.bind(a);
    query.execute(&db).await.unwrap();
    
    // the trait bound `Vec<i32>: PgHasArrayType` is not satisfied
    

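    A workaround sketch (an editor's note, not an sqlx API): since the element type Vec<i32> doesn't implement PgHasArrayType, the nesting can be flattened into a flat element array plus per-row lengths, both of which sqlx can bind as ordinary one-dimensional arrays, and the structure rebuilt in SQL or in application code:

```rust
// Flatten a nested Vec into a flat element array plus per-row lengths.
// Both outputs are plain Vec<i32>, which sqlx can bind as int4[].
fn flatten(nested: &[Vec<i32>]) -> (Vec<i32>, Vec<i32>) {
    let flat = nested.iter().flatten().copied().collect();
    let lens = nested.iter().map(|row| row.len() as i32).collect();
    (flat, lens)
}

fn main() {
    let nested = vec![vec![1], vec![2, 3]];
    let (flat, lens) = flatten(&nested);
    println!("{flat:?} {lens:?}"); // [1, 2, 3] [1, 2]
}
```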
    Info

    • SQLx version: I've tried both 0.6.x and 0.5.x
    • SQLx features enabled: postgres, all-types, runtime-tokio-native-tls
    • Database server and version: Postgres 14
    • Operating system: mac
    • rustc --version: 1.63
    bug 
    opened by seanlinsley 1
  • Documentation request for using Serialize,Deserialize properly with different data types

    Documentation request for using Serialize,Deserialize properly with different data types

    Is your feature request related to a problem? Please describe. Currently it's not very obvious when/how serialization and deserialization with serde work and when they do not. To make them work, we need to jump through a bunch of hoops not documented anywhere (at least I couldn't find them, except in a very old issue).

    Describe the solution you'd like Document that the dependency versions need to match and that the crates being included should enable the serde feature on dependent crates like uuid and chrono.

    Describe alternatives you've considered Alternatives I have considered (and gone through): searching the issue list for old issues, searching Google, reading through the sqlx code, banging my head on the table, etc. Basically, I don't want others to have to go through the same.

    Additional context My guess is that out of every 100 users of sqlx, 99 would use the uuid and chrono crates at a minimum and have uuid and datetime types in their tables. Leaving such basic functionality undocumented is not good for promoting sqlx.

    enhancement 
    opened by option-greek 0
  • Index on CHAR(N) columns not used when using strings (at least on Postgres)

    Index on CHAR(N) columns not used when using strings (at least on Postgres)

    Bug Description

    It could possibly not be considered a bug, but it is certainly surprising and very unfortunate behavior.

    Basically, if you have a table with a CHAR(N) column and an index on that column, and you use a string (even one of the correct length) in a simple query, that string is sent as text, which causes the query not to use the index. This is quite surprising and very unfortunate when the index is critical for decent performance.

    Minimal Reproduction

    Set up the database

        let pool =
            PgPool::connect("postgresql://postgres:postgres@localhost:5432/test_database").await?;
    
        sqlx::query(
            "create table if not exists test_table (id serial primary key, random_str CHAR(32))",
        )
        .execute(&pool)
        .await
        .unwrap();
    
        sqlx::query("create index if not exists test_index on test_table (random_str)")
            .execute(&pool)
            .await
            .unwrap();
    

    Insert some data

        let mut rng = rand::thread_rng();
        for i in 0..100000 {
            if i % 10000 == 0 {
                println!("{i}");
            }
            let random_str = Alphanumeric.sample_string(&mut rng, 32);
    
            sqlx::query("insert into test_table (random_str) values ($1)")
                .bind(random_str)
                .execute(&pool)
                .await
                .unwrap();
        }
    

    Run a search query and analyze its plan

        let search_str = Alphanumeric.sample_string(&mut rng, 32);
    
        let rows =
            sqlx::query("explain analyze select random_str from test_table where random_str = $1")
                .bind(search_str)
                .fetch_all(&pool)
                .await
                .unwrap();
    
        for row in rows {
            let s: String = row.get(0);
            println!("{s}")
        }
    

    For me, this prints the following (more or less):

    Gather  (cost=1000.00..7008.10 rows=1701 width=33) (actual time=39.426..42.269 rows=0 loops=1)
      Workers Planned: 1
      Workers Launched: 1
      ->  Parallel Seq Scan on test_table  (cost=0.00..5838.00 rows=1001 width=33) (actual time=30.099..30.099 rows=0 loops=2)
            Filter: ((random_str)::text = 'OkcKbzBxtX9mLUpHTVEJ5jnvlXi96k3J'::text)
            Rows Removed by Filter: 170114
    Planning Time: 1.132 ms
    Execution Time: 42.330 ms
    

    I.e. it is doing a sequential scan. This occurs because the bind parameter is typed as text, so the CHAR(32) column is cast to text for the comparison and PostgreSQL can't use the index, which is on the CHAR(32) value.

    If I change the search query slightly like this...

    explain analyze select random_str from test_table where random_str = $1::CHAR(32) -- Note the added explicit type for the bind parameter.
    

    ... then I instead get this output:

    Index Only Scan using test_index on test_table  (cost=0.42..4.44 rows=1 width=33) (actual time=0.033..0.034 rows=0 loops=1)
      Index Cond: (random_str = 'aj5zIad3fFzYjFfANaW1LDnnQUntGsHB'::character(32))
      Heap Fetches: 0
    Planning Time: 1.085 ms
    Execution Time: 0.394 ms
    

    That's a huge difference. This is a subtle issue that took me a while to track down, and quite unfortunate behavior.

    Info

    • SQLx version: sqlx = { version = "0.6.2", features = ["runtime-tokio-rustls", "postgres"] }
    • SQLx features enabled: sqlx = { version = "0.6.2", features = ["runtime-tokio-rustls", "postgres"] }
    • Database server and version: PostgreSQL 14.6
    • Operating system: Reproduction done on Windows 10 but same behavior observed in Linux.
    • rustc --version: rustc 1.66.0 (69f9c33d7 2022-12-12)
    bug 
    opened by Victor-N-Suadicani 0
Releases(v0.5.10)
Owner
launchbadge
launchbadge