A simple and fast linear algebra library for games and graphics

Overview

glam


A simple and fast 3D math library for games and graphics.

Development status

glam is in beta stage. Base functionality has been implemented and the look and feel of the API has solidified.

Features

  • f32 types
    • vectors: Vec2, Vec3, Vec3A and Vec4
    • square matrices: Mat2, Mat3, Mat3A and Mat4
    • a quaternion type: Quat
    • affine transformation types: Affine2 and Affine3A
  • f64 types
    • vectors: DVec2, DVec3 and DVec4
    • square matrices: DMat2, DMat3 and DMat4
    • a quaternion type: DQuat
    • affine transformation types: DAffine2 and DAffine3
  • i32 types
    • vectors: IVec2, IVec3 and IVec4
  • u32 types
    • vectors: UVec2, UVec3 and UVec4
  • bool types
    • vectors: BVec2, BVec3 and BVec4
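
The following minimal sketch (not taken from glam's documentation) shows a few of these types working together:

use glam::{Mat4, Quat, Vec3};

fn main() {
    // Compose a transform from scale, rotation and translation.
    let transform = Mat4::from_scale_rotation_translation(
        Vec3::ONE,
        Quat::from_rotation_y(std::f32::consts::FRAC_PI_2),
        Vec3::new(1.0, 2.0, 3.0),
    );
    // Transform a point by the matrix.
    let p = transform.transform_point3(Vec3::X);
    println!("{:?}", p);
}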

SIMD

The Vec3A, Vec4, Quat, Mat2, Mat3A, Mat4, Affine2 and Affine3A types use 128-bit wide SIMD vector types for storage on x86, x86_64 and wasm32 architectures. As a result, these types are all 16-byte aligned and, depending on the size of the type or of its members, may contain internal padding. This results in some wasted space in the case of Vec3A, Mat3A, Affine2 and Affine3A. However, the use of SIMD generally results in better performance than scalar math.
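
The size and alignment differences are easy to observe directly. A minimal sketch, assuming a SIMD-enabled build on x86_64 (with the scalar-math feature the alignments may differ):

use glam::{Vec3, Vec3A, Vec4};
use std::mem::{align_of, size_of};

fn main() {
    // Vec3 is a plain 3 x f32 struct; Vec3A is padded to a full 16-byte
    // SIMD register, which is the "wasted space" mentioned above.
    assert_eq!(size_of::<Vec3>(), 12);
    assert_eq!(size_of::<Vec3A>(), 16);
    assert_eq!(align_of::<Vec3A>(), 16);
    assert_eq!(align_of::<Vec4>(), 16);
}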

glam outperforms similar Rust libraries for common operations as tested by the mathbench project.

Enabling SIMD

SIMD is supported on x86, x86_64 and wasm32 targets.

  • SSE2 is enabled by default on x86_64 targets.
  • To enable SSE2 on x86 targets add -C target-feature=+sse2 to RUSTFLAGS.
  • To enable simd128 on wasm32 targets add -C target-feature=+simd128 to RUSTFLAGS.

Note that SIMD on wasm32 passes tests but has not been benchmarked; performance may or may not be better than scalar math.
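
One way to check that the flags took effect is to query the target features at compile time; this is an illustrative sketch rather than anything provided by glam:

fn main() {
    // cfg! is evaluated at compile time, so this reflects whether the
    // binary was built with the relevant SIMD target features enabled.
    if cfg!(any(target_feature = "sse2", target_feature = "simd128")) {
        println!("SIMD target features enabled");
    } else {
        println!("falling back to scalar math");
    }
}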

no_std support

no_std support can be enabled by compiling with --no-default-features to disable std support and --features libm to provide the math functions that are otherwise only available in std. For example:

[dependencies]
glam = { version = "0.20.2", default-features = false, features = ["libm"] }

To support both std and no_std builds in a project, you can use the following in your Cargo.toml:

[features]
default = ["std"]

std = ["glam/std"]
libm = ["glam/libm"]

[dependencies]
glam = { version = "0.20.2", default-features = false }
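
For reference, a minimal no_std crate using this configuration might look like the sketch below; the direction function is hypothetical and only illustrates that float math (here normalize_or_zero) resolves through libm instead of std:

#![no_std]

use glam::Vec3;

// With default features disabled and "libm" enabled, operations that need
// float math still work without the standard library.
pub fn direction(from: Vec3, to: Vec3) -> Vec3 {
    (to - from).normalize_or_zero()
}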

Optional features

  • approx - traits and macros for approximate float comparisons
  • bytemuck - for casting into slices of bytes
  • libm - required to compile with no_std
  • mint - for interoperating with other 3D math libraries
  • num-traits - required to compile with no_std; it is included automatically when the libm feature is enabled
  • rand - implementations of Distribution trait for all glam types.
  • serde - implementations of Serialize and Deserialize for all glam types. Note that serialization should work between builds of glam with and without SIMD enabled
  • rkyv - implementations of Archive, Serialize and Deserialize for all glam types. Note that serialization is not interoperable with and without the scalar-math feature. It should work between all other builds of glam. Endian conversion is currently not supported
  • bytecheck - to perform archive validation when using the rkyv feature
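
As an illustration of the serde point above, glam types round-trip through their plain component representation regardless of whether SIMD is enabled. A sketch, assuming the serde feature is enabled and serde_json is added as an extra dependency just for this example:

use glam::Vec3;

fn main() {
    let v = Vec3::new(1.0, 2.0, 3.0);
    // Serialize to JSON and back; the serialized form is just the components.
    let json = serde_json::to_string(&v).unwrap();
    let back: Vec3 = serde_json::from_str(&json).unwrap();
    assert_eq!(v, back);
    println!("{}", json);
}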

Feature gates

  • scalar-math - compiles with SIMD support disabled
  • debug-glam-assert - adds assertions in debug builds which check the validity of parameters passed to glam to help catch runtime errors
  • glam-assert - adds validation assertions to all builds
  • cuda - forces glam types to match expected cuda alignment

Minimum Supported Rust Version (MSRV)

The minimum supported version of Rust for glam is 1.52.1.

wasm32 SIMD intrinsics require Rust 1.54.0.

Conventions

Column vectors

glam interprets vectors as column matrices (also known as "column vectors"), meaning that when transforming a vector with a matrix the matrix goes on the left, e.g. v' = Mv. DirectX uses row vectors; OpenGL uses column vectors. There are pros and cons to both.
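
A small sketch of the convention (illustrative, not from the documentation):

use glam::{Mat4, Vec3, Vec4};

fn main() {
    let m = Mat4::from_translation(Vec3::new(1.0, 0.0, 0.0));
    // A point in homogeneous coordinates (w = 1).
    let v = Vec4::new(0.0, 0.0, 0.0, 1.0);
    // Column-vector convention: the matrix goes on the left, v' = M * v.
    let v_prime = m * v;
    assert_eq!(v_prime, Vec4::new(1.0, 0.0, 0.0, 1.0));
}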

Column-major order

Matrices are stored in column-major order. Each column vector is stored in contiguous memory.
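
For example (an illustrative sketch), the column-major layout is visible when flattening a matrix to an array:

use glam::{Mat3, Vec3};

fn main() {
    let m = Mat3::from_cols(
        Vec3::new(1.0, 2.0, 3.0), // column 0 (x_axis)
        Vec3::new(4.0, 5.0, 6.0), // column 1 (y_axis)
        Vec3::new(7.0, 8.0, 9.0), // column 2 (z_axis)
    );
    // Each column is laid out contiguously, one after another.
    assert_eq!(
        m.to_cols_array(),
        [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
    );
}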

Co-ordinate system

glam is co-ordinate system agnostic and intends to support both right-handed and left-handed conventions.
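
For example, view matrices are provided for both conventions; a minimal sketch:

use glam::{Mat4, Vec3};

fn main() {
    let eye = Vec3::new(0.0, 1.0, 5.0);
    let target = Vec3::ZERO;
    let up = Vec3::Y;
    // Separate constructors are provided for each handedness.
    let view_rh = Mat4::look_at_rh(eye, target, up);
    let view_lh = Mat4::look_at_lh(eye, target, up);
    assert_ne!(view_rh, view_lh);
}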

Design Philosophy

The design of this library is guided by a desire for simplicity and good performance.

  • No generics and minimal traits in the public API for simplicity of usage
  • All dependencies are optional (e.g. mint, rand and serde)
  • Follows the Rust API Guidelines where possible
  • Aiming for 100% test coverage
  • Common functionality is benchmarked using Criterion.rs

Architecture

See ARCHITECTURE.md for details on glam's internals.

Inspirations

There were many inspirations for the interface and internals of glam from the Rust and C++ worlds.

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Contribution

Contributions in any form (issues, pull requests, etc.) to this project must adhere to Rust's Code of Conduct.

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Thank you to all of the glam contributors!

Support

If you are interested in contributing or have a request or suggestion start a discussion on GitHub. See CONTRIBUTING.md for more information for contributors.

The Game Development in Rust Discord and Bevy Engine Discord servers are not official support channels but can be good places to ask for help with glam.

Attribution

glam contains code ported from the following C++ libraries:

  • DirectXMath - MIT License - Copyright (c) 2011-2020 Microsoft Corp
  • Realtime Math - MIT License - Copyright (c) 2018 Nicholas Frechette
  • GLM - MIT License - Copyright (c) 2005 - G-Truc Creation

See ATTRIBUTION.md for details.

Comments
  • Consider adding transform types

    Consider adding transform types

    When dealing with transforms that only contain position, orientation and possibly scale, many operations can be performed more efficiently than with a general purpose matrix.

    A while ago I experimented with transform types which contained position and orientation (as a quaternion) much like Unreal's FTransform, but I found these to not be particularly more efficient or to save much space in practice, so they are not enabled by default. They can be enabled using the transform-types feature.

    Another option might be to have these kinds of transforms backed by a matrix with simplified methods/operators similar to Transform4D described in Foundations of Game Engine Development Volume 1.

    enhancement 
    opened by bitshifter 28
  • Replacing x86/x86_64 specific SIMD vectors with core_simd library

    Replacing x86/x86_64 specific SIMD vectors with core_simd library

    So while looking through the library, I noticed that it only has SIMD vector support for the x86/x86_64 architectures. While this is honestly great for 99% of users, having to rely on scalar math for architectures like ARM and wasm32, where performance is typically more of an issue, is honestly disappointing. I've recently been experimenting with SIMD and found it difficult to use, until I found the official core_simd crate. I think that allowing core_simd to be used via a feature flag would be very useful, since the work of supporting multiple architectures is already being done by the core_simd team, and it would allow SIMD vectors on many more architectures with little to no work needing to be done on your side.

    opened by billyb2 21
  • Support for const types

    Support for const types

    Currently, constant types don't seem to be supported:

    error[E0015]: calls in constants are limited to constant functions, tuple structs and tuple variants
      --> src/main.rs:
       |
    17 | const INIT_POS: Vec3 = vec3(1.0, 1.0, 1.0);
       |                        ^^^^^^^^^^^^^^^^^^^
    

    I can't construct the tuple manually either, since there is only crate-level visibility. Would it be possible to implement this?

    enhancement 
    opened by ApoorvaJ 19
  • Alternate Vec3 that's actually 12 bytes

    Alternate Vec3 that's actually 12 bytes

    I've found that when dealing with mesh data files and meshes in memory and similar, you sometimes really want a Vec3 type that's guaranteed to actually be 12 bytes (3 * mem::size_of::<f32>()) in size.

    I could of course implement my own Vec3 and have it by the side, but I think this need might be common enough that it might be worth including (since glam's SIMD Vec3 is stuck at 16 bytes, for pretty good reasons).

    One idea might be to export a RawVec3 type or something that's essentially equivalent to the current non-SIMD fallback, even if SIMD is enabled. What do you think?

    opened by hrydgard 19
  • Expose mul_add for vector types.

    Expose mul_add for vector types.

    Fixes #269.

    This PR does the following:

    • Exposes "mul_add" on all vector types.
    • Updates documentation for the function
    • Forces the scalar implementations to use f32::mul_add and f64::mul_add for the types where it's relevant.
    • Updates the relevant test names as "mul_add" is no longer private.
    opened by james7132 15
  • Add "cuda" feature for enabling cuda-native alignment requirements on *Vec2 and *Vec4

    Add "cuda" feature for enabling cuda-native alignment requirements on *Vec2 and *Vec4

    CUDA prefers specific alignments for Vec2 and Vec4 types in order to use more efficient memory loads. This PR adds a "cuda" feature that enables those alignments using cfg_attr in order that glam can be used to generate efficient PTX code.

    This is currently a draft as the changes are somehow affecting DAffine2 and causing some of those tests to fail. As far as I can see that type shouldn't be affected, but it's hard for me to trace back through all the macro invocations to understand where it's getting affected.

    opened by anderslanglands 11
  • project function

    project function

    It would be nice if there was a function for projecting one vector onto another normalized vector. Here's an example code snippet that I implemented inside the project:

    trait Vec3Project {
        fn project(self, vector_to_project_onto: Self) -> Self;
    }
    
    impl Vec3Project for Vec3 {
        fn project(self, vector_to_project_onto: Self) -> Self {
           self.dot(vector_to_project_onto) * vector_to_project_onto     
        }
    }
    

    An example of where it would be useful can be found at this repo: https://github.com/ForesightMiningSoftwareCorporation/bevy_transform_gizmo

    And here is the video of it in action:

    https://user-images.githubusercontent.com/2632925/119217311-33ac8a00-ba8e-11eb-9dfb-db0b9c13cd84.mp4

    NOTE: That repo itself doesn't implement that function. I cloned it and noticed a helpful place in the source code for it, tested it, and it seems to work well. The repo's implementation referenced this article, which the example function above is based on: https://en.wikipedia.org/wiki/Vector_projection

    opened by TCROC 11
  • Affine3D

    Affine3D

    Part of https://github.com/bitshifter/glam-rs/issues/25. A simpler version of https://github.com/bitshifter/glam-rs/pull/156.

    This PR introduces the Affine3D type, implemented as 3x Vec4 (NOTE: only f32 version included).

    Benchmark results on my Intel MacBook (best of a few runs):

    | op                | Mat4 sse2 | Affine3D sse2 |   | Mat4 scalar | Affine3D scalar |
    | ----------------- | --------- | ------------- | - | ----------- | --------------- |
    | inverse           | 12 ns     | 9 ns 🥇       |   | 31 ns       | 16 ns 🥇        |
    | Self * Self       | 6.1 ns    | 4.2 ns 🥇     |   | 21 ns       | 14 ns 🥇        |
    | transform point3  | 2.6 ns 🥇 | 3.8 ns        |   | 3.8 ns      | 3.5 ns 🥇       |
    | transform vector3 | 2.5 ns 🥇 | 3.6 ns        |   | 3.3 ns      | 2.9 ns 🥇       |

    ( 🥇 is the fastest in each pair of columns)

    It would be nice to speed up the sse2 vector transforms more, as those are probably among the most common operations to do with a matrix. I've tried several different approaches, but I can't get further than this. Unless someone has a better idea, I think we should just recommend turning your Affine3D into a Mat4 on SSE2 targets when doing a lot of mat * vec transforms.

    It is also possible that for some platforms (in particular spirv) a 4x Vec3 may actually be faster.

    opened by emilk 11
  • Expose named fields for non-simd vector types

    Expose named fields for non-simd vector types

    The motivation is that it's kind of a pain to pay the cost of accessing fields only through function calls when you're not actually using SIMD. This is coming up for Embark both in WASM and (now) GPU code. Opening this issue to see what your thoughts/concerns are about this.

    There are a couple of options for implementation... the simplest would be to just make the types that can be plain structs with public named fields, i.e.

    pub struct Vec3 {
        pub x: f32,
        pub y: f32,
        pub z: f32,
    }
    

    Another option is to use the strategy that nalgebra uses, which is to create specific "component bag" structs and then implement Deref to them where it makes sense. For example:

    struct XYZ {
        x: f32,
        y: f32,
        z: f32,
    }
    
    struct RGB {
        r: f32,
        g: f32,
        b: f32,
    }
    impl Deref<XYZ> for Vec3 {
       //...
    }
    impl Deref<RGB> for Vec3 {
       //...
    }
    

    Now you can write either vec.x or vec.r and get the same thing as vec.x() currently.

    discussion papercut 
    opened by fu5ha 11
  • Primitive integer ops for UVec and IVec

    Primitive integer ops for UVec and IVec

    I'm thinking about replacing my own vector types with glam, but there is some missing functionality I need first:

    • Not
    • BitAnd
    • BitOr
    • BitXor
    • Shl
    • Shr
    • ~~Rem~~

    Would a PR to add these impls to UVec and IVec be welcome?

    opened by bonsairobo 10
  • Add `fn is_finite(&self) -> bool` member to all types

    Add `fn is_finite(&self) -> bool` member to all types

    Having a function like this is super-useful to guard against non-finite values. For instance:

    let v = foo.cross(bar) / det;
    let v = v.normalize();
    if v.is_finite() {
       Some(v)
    } else {
       None
    }
    

    or debug_assert!(transform.is_finite());

    Naming

    There is already an is_nan which returns a mask. This sets a precedent that something called is_finite should also return a mask. However, is_entirely_finite doesn't feel like a great name to me. Maybe functions that return a mask should have an m suffix or something instead?

    Anyway, I'll be happy to change the name of this, this is just a first draft!

    opened by emilk 10
  • Bounds / Box / Range types

    Bounds / Box / Range types

    A situation I frequently find myself in when working with vectors is that I have a 2D rectangle or 3D volume and I need to know if a particular coordinate is "in bounds", i.e. I have an axis-aligned bounding box and I need to know if a point is within it. It is straightforward to keep a minimum and maximum coordinate and compare each component with >=/<=, but this is verbose and therefore error-prone. I like to have a single object encapsulating the bounds, especially since this makes it easy to tack on convenience features such as "expand the bounds to include this point" and "offset the bounds".

    There doesn't seem to be a good library for this with glam or in Rust in general. "Glamour" has Box2D/Box3D structs, but Glamour is somewhat heavyweight and expects you to use its own specially-typed vector struct idiom; also it currently doesn't seem to be compatible with Glam 0.22 (it requires 0.21). It would be great to have a Box/Bounds/AABB/multidimensional range struct that uses Glam types natively.

    I previously contributed Bound2 and Bound3 classes to the CPML vector math library for Lua, and would be happy to create a PR or addon crate for Rust implementing a bounds object. It would be helpful, though, if I could get feedback on how best to fit glam's idioms while I work on the PR (for starters: should it be named Box2, Bound2 or something else? Would Box be preferred, or specialized IBox2/DBox3 type classes? Would it be better to do this as a glam-rs PR or a new crate?).

    opened by mcclure 0
  • Make BVec4A available when scalar-math feature is enabled

    Make BVec4A available when scalar-math feature is enabled

    This just matches what was done for BVec3A. There are a bunch of functions in scalar/vec4.rs that return BVec4 instead of BVec4A though, like cmpge. I tried wrangling the template files but wasn't sure how to fix that part; it's a bit annoying because switching on scalar-math creates compile errors when these functions are being used. I currently only need scalar-math to run tests in miri because miri doesn't support SIMD.

    opened by rodolphito 3
  • [feature] provide a method to project Vec4 and Vec3 from homogeneous space

    [feature] provide a method to project Vec4 and Vec3 from homogeneous space

    What problem does this solve or what need does it fill?

    Currently Vec3 provides extend(w) to unproject into homogeneous coordinates (giving a Vec4 with homogeneous coordinate w). Maybe I missed it, but there is no inverse to extend(), or the documentation doesn't mention it.

    What solution would you like?

    I would prefer to write something like this

    transform.translation = view.w_axis.project()

    rather than

    transform.translation = view.w_axis.xyz() / view.w_axis.w;

    My example is for Vec4; the same applies to homogeneous representations of Vec2 in the vector space R3 using Vec3.

    What alternative(s) have you considered?

    Keep writing rather obvious code

    This was originally a request in bevy https://github.com/bevyengine/bevy/issues/6916#

    enhancement good first issue 
    opened by seichter 2
  • Bool vector type mismatch between scalar and SIMD implementations

    Bool vector type mismatch between scalar and SIMD implementations

    The type of the boolean vectors used by scalar and SIMD implementations of the same vector type seems to be different.

    Consider, for example, Vec3A. The SIMD implementation of the vector exposes this function signature:

    pub fn select(mask: BVec3A, if_true: Self, if_false: Self) -> Self {
        // ...
    }
    

    whereas the scalar implementation of the vector exposes this different function signature:

    pub fn select(mask: BVec3, if_true: Self, if_false: Self) -> Self {
        // ...
    }
    

    This seems to be a regression from 0.22.0, which:

    Added u32 implementation of BVec3A and BVec4 when SIMD is not available. These are used instead of aliasing to the bool implementations.

    Presumably, this problem wasn't noticed before because, on scalar platforms, BVec3 and BVec3A were interchangeable.

    opened by Radbuglet 2
  • Quat::slerp dot threshold prevents extrapolation when difference in angle is smaller than about π / 49.65

    Quat::slerp dot threshold prevents extrapolation when difference in angle is smaller than about π / 49.65

    I’m using Quat::slerp to extrapolate rotations, and it works perfectly when the angle is greater than about π / 49.65, in which case the dot product is 0.9994995, suspiciously close to the dot threshold of 0.9995 where Quat::slerp falls back to Quat::lerp. However, when the angle is smaller than that, e.g. π / 60.00, then extrapolation no longer works properly, and the rotation goes slower as the s parameter to Quat::slerp goes further beyond 1.0.

    Example code:

    let start = Quat::from_axis_angle(Vec3::X, 0.0);
    let end   = Quat::from_axis_angle(Vec3::X, PI / 49.65);
    println!("{:?} {:?}",
        Quat::dot(start, end),
        Quat::slerp(start, end, 49.65 * 2.0).to_axis_angle());
    // => 0.99949956 (Vec3(-0.023733087, -0.0, -0.0), 6.2778816)
    
    let start = Quat::from_axis_angle(Vec3::X, 0.0);
    let end   = Quat::from_axis_angle(Vec3::X, PI / 60.00);
    println!("{:?} {:?}",
        Quat::dot(start, end),
        Quat::slerp(start, end, 60.00 * 2.0).to_axis_angle());
    // => 0.99965733 (Vec3(1.0000001, 0.0, 0.0), 2.5490494)
    

    Notice how in the first example, the resulting angle is 2π, i.e. properly extrapolated; whereas in the second example, the resulting angle is not even close to 2π.

    I’m not quite sure why the special case for small angles exists; is it merely an optimization? Or is it there to avoid problems with precision? I saw in #57 that the implementation of Quat::slerp was based on that in cgmath. But cgmath falls back to nlerp, not lerp, which might behave differently.

    It would be nice to have extrapolation work as expected, like it does for e.g. Vec3::lerp. Do you think that would be acceptable, perhaps as a separate method?

    (Or maybe I shouldn’t use quaternions for animations.)

    help wanted 
    opened by chloekek 0