Safetensors

Simple, safe way to store and distribute tensors

Overview

This repository implements a new, simple format for storing tensors safely (as opposed to pickle) while still being fast (zero-copy).

Getting started

import torch
from safetensors import safe_open
from safetensors.torch import save_file

tensors = {
    "weight1": torch.zeros((1024, 1024)),
    "weight2": torch.zeros((1024, 1024))
}
save_file(tensors, "model.safetensors")

tensors = {}
with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    for key in f.keys():
        tensors[key] = f.get_tensor(key)

Python documentation

Format

  • 8 bytes: N, an unsigned 64-bit little-endian integer, containing the size of the header
  • N bytes: a JSON UTF-8 string representing the header. The header is a dict like {"TENSOR_NAME": {"dtype": "float16", "shape": [1, 16, 256], "data_offsets": [X, Y]}}, where X and Y are the start and end offsets of the tensor data in the byte buffer. A special key __metadata__ is allowed and contains a free-form string-to-string map (see the sketch after this list).
  • Rest of the file: the byte buffer.
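
As an illustration, the header can be parsed with nothing but the standard library. A minimal sketch, assuming a file written as in the getting-started example and skipping error handling:

import json
import struct

with open("model.safetensors", "rb") as f:
    # First 8 bytes: little-endian u64 giving the header size N.
    (header_size,) = struct.unpack("<Q", f.read(8))
    # Next N bytes: JSON header mapping tensor names to dtype/shape/offsets.
    header = json.loads(f.read(header_size))

print(header.get("__metadata__", {}))
for name, info in header.items():
    if name != "__metadata__":
        print(name, info["dtype"], info["shape"], info["data_offsets"])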

Yet another format ?

The main rationale for this crate is to remove the need for pickle, which PyTorch uses by default. There are other formats out there used in machine learning, as well as more general-purpose formats.

Let's take a look at alternatives and why this format is deemed interesting. This is my very personal and probably biased view:

| Format | Safe | Zero-copy | Lazy loading | No file size limit | Layout control | Flexibility | Bfloat16 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| pickle (PyTorch) | βœ— | βœ— | βœ— | βœ“ | βœ— | βœ“ | βœ“ |
| H5 (Tensorflow) | βœ“ | βœ— | βœ“ | βœ“ | ~ | ~ | βœ— |
| SavedModel (Tensorflow) | βœ“ | βœ— | βœ— | βœ“ | βœ“ | βœ— | βœ“ |
| MsgPack (flax) | βœ“ | βœ“ | βœ— | βœ“ | βœ— | βœ— | βœ“ |
| Protobuf (ONNX) | βœ“ | βœ— | βœ— | βœ— | βœ— | βœ— | βœ“ |
| Cap'n'Proto | βœ“ | βœ“ | ~ | βœ“ | βœ“ | ~ | βœ— |
| Arrow | ? | ? | ? | ? | ? | ? | βœ— |
| Numpy (npy,npz) | βœ“ | ? | ? | βœ— | βœ“ | βœ— | βœ— |
| SafeTensors | βœ“ | βœ“ | βœ“ | βœ“ | βœ“ | βœ— | βœ“ |

  • Safe: Can I use a file randomly downloaded and expect not to run arbitrary code ?
  • Zero-copy: Does reading the file require more memory than the original file ?
  • Lazy loading: Can I inspect the file without loading everything ? And loading only some tensors in it without scanning the whole file (distributed setting) ?
  • Layout control: Lazy loading is not necessarily enough, since if the information about tensors is spread out in your file, then even if the information is lazily accessible you might have to access most of your file to read the available tensors (incurring many DISK -> RAM copies). Controlling the layout to keep fast access to single tensors is important.
  • No file size limit: Is there a limit to the file size ?
  • Flexibility: Can I save custom code in the format and be able to use it later with zero extra code ? (~ means we can store more than pure tensors, but no custom code)
  • Bfloat16: Does the format support native bfloat16 (meaning no weird workarounds are necessary)? This is becoming increasingly important in the ML world.

Main oppositions

  • Pickle: Unsafe, runs arbitrary code
  • H5: Apparently now discouraged for TF/Keras. Seems like a great fit otherwise actually. Some classic use-after-free issues: https://www.cvedetails.com/vulnerability-list/vendor_id-15991/product_id-35054/Hdfgroup-Hdf5.html. On a very different level than pickle security-wise. Also 210k lines of code vs ~400 lines for this lib currently.
  • SavedModel: Tensorflow specific (it contains TF graph information).
  • MsgPack: No layout control to enable lazy loading (important for loading specific parts in distributed setting)
  • Protobuf: Hard 2GB max file size limit
  • Cap'n'proto: Float16 support is not present (link), so using a manual wrapper over a byte buffer would be necessary. Layout control seems possible but not trivial, as buffers have limitations (link).
  • Numpy (npz): No bfloat16 support. Vulnerable to zip bombs (DOS).
  • Arrow: No bfloat16 support. Seems to require decoding (link).

Notes

  • Zero-copy: No format is really zero-copy in ML: the data needs to go from disk to RAM/GPU RAM (that takes time). Also, in PyTorch/numpy you need a mutable buffer, and we don't really want to mutate a mmapped file, so one copy is really necessary to use the thing freely in user code. That being said, zero-copy is achievable in Rust if it's wanted and safety can be guaranteed by some other means. SafeTensors is not zero-copy for the header. The choice of JSON is pretty arbitrary, but since deserializing it takes a tiny fraction of the time required to load the actual tensor data and it is human-readable, I went that way (the header is also tiny compared to the tensor data).

  • Endianness: Little-endian. This can be modified later, but it feels really unnecessary at the moment.

  • Order: 'C' or row-major. This seems to have won. We can add that information later if needed.

  • Stride: No striding, all tensors need to be packed before being serialized. I have yet to see a case where it seems useful to have a strided tensor stored in a serialized format (see the sketch below).
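
In PyTorch terms this means strided (non-contiguous) views should be packed before saving. A minimal sketch:

import torch
from safetensors.torch import save_file

t = torch.zeros((1024, 1024)).t()       # a transposed, non-contiguous view
save_file({"weight1": t.contiguous()}, "model.safetensors")  # pack it first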

Benefits

Since we can invent a new format we can propose additional benefits:

  • Prevent DOS attacks: We can craft the format in such a way that it's almost impossible to use malicious files to DOS-attack a user. Currently there's a 100MB limit on the size of the header to prevent parsing extremely large JSON. Also, when reading the file, there's a guarantee that addresses in the file do not overlap in any way, meaning that when you're loading a file you should never exceed the size of the file in memory.

  • Faster load: PyTorch seems to be the fastest format to load among the major ML formats. However, it does seem to have an extra copy on CPU, which we can bypass in this lib (link). Currently, loading the entire file on CPU is still slightly slower than PyTorch on some platforms, but it's not entirely clear why.

  • Lazy loading: in distributed (multi-node or multi-GPU) settings, it's nice to be able to load only part of the tensors on the various models. For BLOOM, using this format brought loading the model on 8 GPUs from 10 minutes with regular PyTorch weights down to 45 seconds. This really speeds up feedback loops when developing on the model. For instance, you don't have to keep separate copies of the weights when changing the distribution strategy (for instance Pipeline Parallelism vs Tensor Parallelism). See the sketch below.
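
For instance, a hypothetical tensor-parallel setup could read only the rows a given rank needs, without touching the rest of the file. A minimal sketch (tensor name, rank and world size are assumptions):

from safetensors import safe_open

rank, world_size = 0, 8   # assumed distributed setup
with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    weight_slice = f.get_slice("weight1")
    rows = weight_slice.get_shape()[0]
    per_rank = rows // world_size
    shard = weight_slice[rank * per_rank : (rank + 1) * per_rank]  # only this slice is read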

Comments
  • Torch SD-based models tensor invalid for input size


    There might be a slight discrepancy between the loading and saving process in safetensors. When loading an SD-based model like SD 1.4 packaged into a PyTorch checkpoint (we'll call it sd-v1-4.ckpt), we can package its state_dict and discard the torch format.

    Packaging as safe_tensors

        import torch
        from safetensors.torch import save_file
        # shared_pointers and check_file_size are helpers from the conversion
        # script (not shown here).

        sf_filename = "sd-v1-4.safetensors"
        filename = "sd-v1-4.ckpt"

        loaded = torch.load(filename)
        loaded = loaded['state_dict']

        # appears to pop nothing in this case
        shared = shared_pointers(loaded)
        for shared_weights in shared:
            for name in shared_weights[1:]:
                loaded.pop(name)

        loaded = {k: v.contiguous() for k, v in loaded.items()}

        save_file(loaded, sf_filename, metadata={"format": "pt"})

        check_file_size(sf_filename, filename)
    

    Loading the tensors

    load_file('sd-v1-4.safetensors', device='cpu')

    Results in error:

    File "venv\lib\site-packages\safetensors\torch.py", line 99, in load_file result[k] = f.get_tensor(k) RuntimeError: shape '[1280, 1280, 3, 3]' is invalid for input of size 7290352

    Expected behaviour: safetensors fails while trying to save unexpected tensor data, or creates tensors which can be loaded.
    Affected version: safetensors=2.4.0, torch=1.12.1+cu113

    ckpt size: 3.97 GB, 4,265,381,888 bytes (4,265,380,512 bytes)
    safetensors size: 3.97 GB, 4,265,148,416 bytes (4,265,146,304 bytes)
    SHA: fe4efff1e174c627256e44ec2991ba279b3816e364b49f9be2abc0b3ff3f8556

    Using pruned version of CompVis/stable-diffusion-v-1-4-original

    Apologies if this is already fixed with the addition of more dtypes. Will try to get more info by running through the check output and debug info of this specific tensor.
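
    A key-by-key round-trip comparison between the original checkpoint and the converted file might help narrow this down. A minimal sketch, reusing the paths above:

        import torch
        from safetensors.torch import load_file

        original = torch.load("sd-v1-4.ckpt")["state_dict"]
        converted = load_file("sd-v1-4.safetensors", device="cpu")

        for name, tensor in original.items():
            other = converted.get(name)
            if other is None:
                print("missing:", name)
            elif other.shape != tensor.shape or not torch.equal(other, tensor.to(other.dtype)):
                print("mismatch:", name, tuple(tensor.shape), tuple(other.shape))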

    opened by pattontim 21
  • Zero copy is not used when torch memory is sparse and/or has not been garbage collected


    Used

    os.environ['SAFETENSORS_FAST_GPU'] = '1'

    I observed on the webui that despite setting device='cuda' with this flag and using safetensors.torch.load_file, it was taking almost 45 seconds to load a 4GB .safetensors file. However, when trying to replicate it in a separate program but using the same libraries, the model loads fast and only takes a few seconds. These can be represented by these curves, each a 60s time chunk:

    [Image: long load time observed, CPU fallback] The program is executed at the red line.

    It appears that due to some pollution of memory, webui always falls back to loading from CPU by default. This pollution appears to persist, and if you terminate the webui and then run a separate program which uses safetensors, it also falls back to loading using slow CPU copy.

    Steps to replicate:

    1. Load a program that loads various large torch files to CPU (regular torch files with torch.load), and loads a large file to GPU (safetensors).
    2. Terminate it after loading the torch CPU files and GPU for a few seconds.
    3. Within 10 seconds, try to launch an external program using safetensors to GPU with the fast gpu flag. The program resorts to slow copy via CPU. *Does not replicate if you interrupt the second external program and try again.
    [Image: cancel to slow copy after pollution]

    safetensors tested: 0.2.5

    Next steps:

    • print how much memory CUDA sees as available at runtime (see the sketch after this list)

    • disable windows 10 Hardware-accelerated GPU scheduling

    • figure out how the webui pollutes space (I think it loads a few models with torch to CPU, 5GB to memory)
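
    A minimal sketch for the first step, assuming a CUDA build of PyTorch that provides torch.cuda.mem_get_info:

        import torch

        free, total = torch.cuda.mem_get_info()   # bytes on the current device
        print(f"free: {free / 2**30:.2f} GiB / total: {total / 2**30:.2f} GiB")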

    opened by pattontim 13
  • Python slice API proposition


    Much better API:

    with PySafeFile(self.local, framework="pt") as f:
        tensor = f["test"][:, :, 1:2]
    

    Design:

    • Context manager This is meant to open a file, we need to close it, and this is the Python way (lazy load on a real buffer instead of a file doesn't have real advantages since torch is already able to not copy a single thing with striding in the first place)
    • Framework arg: I played with other ideas; if the code were framework-agnostic it would be easier on users for any kind of failure and to adapt directly within Python. However, memoryview doesn't handle dtype, and bfloat16 is only handled by pytorch, so using it as an intermediary is not really as viable as I thought.
    • f["test"] This object is ~equivalent to TensorView<'data> except we cannot have a lifetime on it, so it owns the Mmap instead. This is not as torch.Tensor at this point, only when you slice into does it become torch.Tensor.
    • [:, :, 1:2] This is where we can calculate how to slice into the data, creating the buffer iteratively (so only once) as PyByteArray so we do actually avoid allocating the entire buffer here (hopefully). In order to avoid shape shenanigans, we simply return the torch.Tensor directly.

    Pros:

    • handles a lot of boilerplate for caller.
    • Feels more pythonesque

    Cons:

    • Unsure about soundness of keeping the Mmap struct (in PySafeFile); it's meant to be used as a context manager, so it should clean up after itself.
    • Lots more boilerplate in the binding code.
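
    For comparison, partial reads in this spirit are exposed in current safetensors through safe_open plus get_slice. A minimal sketch (file and tensor names hypothetical):

        from safetensors import safe_open

        with safe_open("model.safetensors", framework="pt") as f:
            tensor = f.get_slice("test")[:, :, 1:2]   # returns a torch.Tensor
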
    opened by Narsil 11
  • Make GPU loading faster by removing all extra CPU copies.


    Loads models on GPU roughly 2x as fast (depends on hardware).

    It works by not allocating on CPU, and directly allocating + memsetting the memory on the GPU. Afaik, there's no way to do that in PyTorch: all the storage methods have an intermediary CPU alloc.

    @davidhewitt Sorry to ping you out of the blue, but you have been a huge help on tokenizers.

    The present library aims to prevent arbitrary code execution when loading weights (in particular Pytorch). https://www.youtube.com/watch?v=2ethDz9KnLk

    At HF, we haven't yet fully committed to actually switching formats, but multiple other nice features could come out of it, namely this handy 2x speedup when loading tensors on GPU (because we can remove the CPU allocations entirely). However, the present solution uses a good dose of unsafe.

    • Do you have ideas on removing this unsafe ?
    • Or validating it ?

    Currently there is no way (afaik) to access an equivalent of cudaMemcpy directly from torch. That indirection would help put the safety back in pytorch and not in this crate. However, after a healthy dose of looking, I couldn't find anything. Still, since pytorch lives in Python world, I'm guessing it's always going to require passing a PyBufferArray which we have to reallocate (because of the trailing '\0' at least).

    The current PR does the following (a rough sketch follows the list):

    • It figures out the libcudart being used by pytorch itself
    • Loads it
    • When creating tensors on GPU, it will allocate an empty buffer (managed by Pytorch) through torch.empty(shape, dtype, device).
    • Then lookup cudaMemcpyHostToDevice.
    • Call it directly to set the GPU RAM with the actual tensors on disk.
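
    A rough, hypothetical illustration of those steps with ctypes (not the PR's actual code; the library name, function name and error handling are assumptions):

        import ctypes
        import torch

        cudart = ctypes.CDLL("libcudart.so")   # assumed: the libcudart found via pytorch
        CUDA_MEMCPY_HOST_TO_DEVICE = 1

        def load_to_gpu(host_bytes: bytes, shape, dtype) -> torch.Tensor:
            # Destination buffer is allocated and owned by PyTorch.
            dst = torch.empty(shape, dtype=dtype, device="cuda")
            err = cudart.cudaMemcpy(
                ctypes.c_void_p(dst.data_ptr()),    # device pointer
                host_bytes,                         # host pointer (bytes buffer)
                ctypes.c_size_t(len(host_bytes)),
                ctypes.c_int(CUDA_MEMCPY_HOST_TO_DEVICE),
            )
            assert err == 0, f"cudaMemcpy failed with error {err}"
            return dst
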
    opened by Narsil 7
  • Found out about `torch.asarray` + `torch.Storage` combo.


    Extremely fast (I'm guessing it's just pointer arithmetic, and the actual load time would hit when actually using the tensors).

    benches/test_pt.py::test_pt_pt_load_cpu PASSED
    benches/test_pt.py::test_pt_sf_load_cpu PASSED
    
    
    ------------------------------------------------------------------------------------- benchmark: 2 tests ------------------------------------------------------------------------------------
    Name (time in ms)            Min                 Max                Mean            StdDev              Median                IQR            Outliers       OPS            Rounds  Iterations
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    test_pt_sf_load_cpu       4.4699 (1.0)        9.8395 (1.0)        5.5089 (1.0)      0.9217 (1.0)        5.2233 (1.0)       0.7899 (1.0)         24;13  181.5238 (1.0)         175           1
    test_pt_pt_load_cpu     162.7417 (36.41)    183.1321 (18.61)    173.9028 (31.57)    8.4244 (9.14)     174.7930 (33.46)    13.4904 (17.08)         2;0    5.7503 (0.03)          6           1
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    
    Legend:
      Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
      OPS: Operations Per Second, computed as 1 / Mean
    =============================================================================================================== 2 passed, 4 deselected in 8.96s ================================================================================================================
    

    I checked against transformers and I get:

    ============== DUMMY ====================
    Loaded in 0:00:03.180369
    Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
    [{'generated_text': 'test.\n\nThe first thing to note is that the "new" version of the app is not'}]
    Ran in 0:00:04.339221
    ============== NO ALLOC ====================
    Lookup in 0:00:00.000292
    model from config in 0:00:00.581647
    tokenizer in 0:00:00.641293
    Loaded in 0:00:00.648781
    Loaded weights in 0:00:00.654212
    Loaded on model in 0:00:00.658909
    Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
    Ran in 0:00:01.828281
    [{'generated_text': 'test.\n\nThe first thing to note is that the "new" version of the app is not'}]
    Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
    Ran in 0:00:02.982728
    [{'generated_text': 'test.\n\nThe first thing to note is that the "new" version of the app is not'}]
    

    And without safetensors

    ============== DUMMY (PT) ====================
    Loaded in 0:00:04.272222
    Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
    [{'generated_text': 'test.\n\nThe first thing to note is that the "new" version of the app is not'}]
    Ran in 0:00:05.432200
    

    https://gist.github.com/Narsil/57ddec904ee795cc6e78eb13562f0c36

    I am not sure where the 500-600ms come from when loading the model with no_init_weights from accelerate; I'm guessing it's torch internals, but I'm not sure.

    The NO_ALLOC case is more of a showcase, but maybe at some point we can manage to integrate it into transformers.

    opened by Narsil 5
  • Implement benchmarking using Criterion


    TL;DR: add benchmarking for serialization & deserialization.

    Not sure if we will do parallelization (https://github.com/huggingface/safetensors/issues/15), but either way, I thought it would probably be a good idea to add some benchmarking.

    I'm using criterion, which is a popular Rust benchmarking framework (also used in tokenizers).

    Can be run as:

    cargo bench
    

    Produces output as:

    Serlialize 10_MB
    time:   [427.15 Β΅s 436.21 Β΅s 446.56 Β΅s]                             
    change: [-0.3956% +0.7567% +1.9834%] (p = 0.32 > 0.05)
    No change in performance detected.
    
    Deserlialize 10_MB                     
    time:   [5.5303 Β΅s 5.5634 Β΅s 5.5997 Β΅s]                                
    change: [-0.4513% +0.5147% +1.5265%] (p = 0.32 > 0.05)
    No change in performance detected.
    

    Read more about cargo bench here. Note: since criterion is a dev-dependency, it does not get added to the production build.

    opened by mishig25 5
  • Compatibility with `torch.save()`?


    Very cool project!

    I was wondering if there's a possibility to use this package as the "backend" for torch.save to enable easier integration with downstream projects that are already using torch.save/load...

    Seems like it might be possible since torch.save and torch.load have a pickle_module= argument. Something like:

    import safetensors
    torch.save(my_tensor, "my_tensor.pt", pickle_module=safetensors)   # or safetensors.torch
    
    opened by dconathan 4
  • [Critical] Design defect: endianness is not stored and can be arbitrary


    import numpy as np
    import safetensors.numpy
    
    a = safetensors.numpy.save({"a": np.array(range(6), dtype='>u4')})
    b = safetensors.numpy.save({"a":np.array(range(6), dtype='<u4')})
    print(a)
    print(b)
    
    b'7\x00\x00\x00\x00\x00\x00\x00{"a":{"dtype":"U32","shape":[6],"data_offsets":[0,24]}}\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00\x05'
    b'7\x00\x00\x00\x00\x00\x00\x00{"a":{"dtype":"U32","shape":[6],"data_offsets":[0,24]}}\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00'
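
    Until endianness is stored in the format, one defensive workaround on the writer side is to normalize arrays to little-endian before saving. A minimal sketch:

        import numpy as np
        import safetensors.numpy

        def to_little_endian(arr: np.ndarray) -> np.ndarray:
            # Swap big-endian arrays to their little-endian equivalent.
            if arr.dtype.byteorder == '>':
                return arr.astype(arr.dtype.newbyteorder('<'))
            return arr

        tensors = {"a": np.array(range(6), dtype='>u4')}
        payload = safetensors.numpy.save({k: to_little_endian(v) for k, v in tensors.items()})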
    
    opened by KOLANICH 3
  • Get rid of pyO3, use `ctypes` + C FFI


    Getting rid of pyO3 and using ctypes instead would allow us to:

    • get rid of the requirement of having and executing the Rust compiler in order to build a wheel;
    • reduce bloat by reusing the shared library
    opened by KOLANICH 3
  • pip install .[dev]


    For doc-builds & in general, pip install .[dev] is preferred over python setup.py develop

    1. Previously, pip install .[dev] was erroring because the lines below were missing: https://github.com/huggingface/safetensors/blob/109f54c1f390e2906098f3e5144eff4272aac384/bindings/python/pyproject.toml#L1-L3
    2. For the extras management, I've copied it from transformers
    opened by mishig25 3
  • Replace `std::ffi::c_uint` w/ `std::os::raw::c_uint`


    Getting the error below on an Intel CPU MacBook:

    cargo check inside safetensors/bindings/python

    (base) Mishigs-MacBook-Pro:python mishig$ cargo check
        Checking safetensors-python v0.2.3 (/Users/mishig/Desktop/safetensors/bindings/python)
    error[E0412]: cannot find type `c_uint` in module `std::ffi`
      --> src/lib.rs:33:16
       |
    33 | ) -> std::ffi::c_uint;
       |                ^^^^^^ not found in `std::ffi`
       |
    help: consider importing this type alias
       |
    3  | use std::os::raw::c_uint;
    

    However, the error goes away when std::ffi::c_uint is replaced with std::os::raw::c_uint.

    I don't know what the difference between the two is; their docs look identical.

    opened by mishig25 3
  • hi, may ask differences between safetensors vs flatbuffer?


    It looks like both of them are data storage libraries, but FlatBuffers can store not only tensors but also structures and many custom data types. Can safetensors do that as well?

    Furthermore, can safetensors be used cross-platform with a very tiny dependency footprint, for instance on Android or iOS?
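
    For context, safetensors itself stores only named tensors plus an optional free-form string-to-string __metadata__ map (no custom structures or code). A minimal sketch:

        import torch
        from safetensors import safe_open
        from safetensors.torch import save_file

        save_file(
            {"embedding": torch.zeros((10, 4))},
            "tiny.safetensors",
            metadata={"format": "pt", "note": "strings only, no custom types"},
        )

        with safe_open("tiny.safetensors", framework="pt") as f:
            print(f.metadata())   # the free-form string map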

    opened by jinfagang 2
  • Allow setting tensor via `safe_open`


    Ideally we want to support the following API:

    with safe_open(filename, mode="w", metadata=metadata) as f:
        f.get_tensor("embedding")[::2] = my_tensor 
    
    opened by thomasw21 0
  • nvidia + AMD gpu cards potential incompatibility


    https://www.reddit.com/r/StableDiffusion/comments/z8mnak/comment/iyzztaf/?utm_source=share&utm_medium=web2x&context=3

    Currently SAFETENSORS_FAST_GPU=1 expects CUDA cards. In case of a hybrid setup, the code will likely find the CUDA part, yet if we're allocating on the AMD card we need to not use the fast path.

    We need to check:

    • that this is indeed a failure mode
    • Fix it
    opened by Narsil 0
  • save and load pytorch models?


    Currently, one option to save a model with PyTorch is to use (https://pytorch.org/tutorials/beginner/saving_loading_models.html#save-load-entire-model):

    # Save/Load Entire Model

    # Save:
    torch.save(model, PATH)

    # Load:
    # Model class must be defined somewhere
    model = torch.load(PATH)
    model.eval()
    

    Would it be possible to use safetensors to do the same?
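
    For the state_dict route (as opposed to pickling the whole model object), something along these lines should work. A minimal sketch; the model class still has to be defined in code, and TheModelClass is hypothetical:

        import torch
        from safetensors.torch import save_file, load_file

        # Save: only the weights (state_dict), no pickled Python objects.
        save_file(model.state_dict(), "model.safetensors")   # `model` as above

        # Load: rebuild the architecture in code, then fill in the weights.
        model = TheModelClass()
        model.load_state_dict(load_file("model.safetensors", device="cpu"))
        model.eval()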

    opened by its-jd 1
Releases (v0.2.7)

Owner: Hugging Face, the AI community building the future.