xsv - A fast CSV command line toolkit written in Rust.

Overview

xsv is a command line program for indexing, slicing, analyzing, splitting and joining CSV files. Commands should be simple, fast and composable:

  1. Simple tasks should be easy.
  2. Performance trade-offs should be exposed in the command line interface.
  3. Composition should not come at the expense of performance.

This README contains information on how to install xsv, in addition to a quick tour of several commands.

Dual-licensed under MIT or the UNLICENSE.

Available commands

  • cat - Concatenate CSV files by row or by column.
  • count - Count the rows in a CSV file. (Instantaneous with an index.)
  • fixlengths - Force a CSV file to have same-length records by either padding or truncating them.
  • flatten - A flattened view of CSV records. Useful for viewing one record at a time. e.g., xsv slice -i 5 data.csv | xsv flatten.
  • fmt - Reformat CSV data with different delimiters, record terminators or quoting rules. (Supports ASCII delimited data.)
  • frequency - Build frequency tables of each column in CSV data. (Uses parallelism to go faster if an index is present.)
  • headers - Show the headers of CSV data. Or show the intersection of all headers between many CSV files.
  • index - Create an index for a CSV file. This is very quick and provides constant time indexing into the CSV file.
  • input - Read CSV data with exotic quoting/escaping rules.
  • join - Inner, outer and cross joins. Uses a simple hash index to make it fast.
  • partition - Partition CSV data based on a column value.
  • sample - Randomly draw rows from CSV data using reservoir sampling (i.e., use memory proportional to the size of the sample).
  • reverse - Reverse order of rows in CSV data.
  • search - Run a regex over CSV data. Applies the regex to each field individually and shows only matching rows.
  • select - Select or re-order columns from CSV data.
  • slice - Slice rows from any part of a CSV file. When an index is present, this only has to parse the rows in the slice (instead of all rows leading up to the start of the slice).
  • sort - Sort CSV data.
  • split - Split one CSV file into many CSV files, in chunks of N rows each.
  • stats - Show basic types and statistics of each column in the CSV file. (i.e., mean, standard deviation, median, range, etc.)
  • table - Show aligned output of any CSV data using elastic tabstops.
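The sample command above uses reservoir sampling, which keeps memory proportional to the sample rather than the input. As an illustration of the technique (a sketch in Python, not xsv's actual Rust implementation):

```python
import random

def reservoir_sample(rows, k, seed=0):
    """Uniform random sample of up to k rows in one pass and O(k) memory."""
    rng = random.Random(seed)
    reservoir = []
    for i, row in enumerate(rows):
        if i < k:
            reservoir.append(row)
        else:
            # Row i survives with probability k/(i+1); replace a random slot.
            j = rng.randrange(i + 1)
            if j < k:
                reservoir[j] = row
    return reservoir
```

Because only the reservoir is held in memory, this scales to inputs far larger than RAM, which is exactly why sample works on huge CSV files.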

A whirlwind tour

Let's say you're playing with some of the data from the Data Science Toolkit, which contains several CSV files. Maybe you're interested in the population counts of each city in the world. So grab the data and start examining it:

$ curl -LO https://burntsushi.net/stuff/worldcitiespop.csv
$ xsv headers worldcitiespop.csv
1   Country
2   City
3   AccentCity
4   Region
5   Population
6   Latitude
7   Longitude

The next thing you might want to do is get an overview of the kind of data that appears in each column. The stats command will do this for you:

$ xsv stats worldcitiespop.csv --everything | xsv table
field       type     min            max            min_length  max_length  mean          stddev         median     mode         cardinality
Country     Unicode  ad             zw             2           2                                                   cn           234
City        Unicode   bab el ahmar  Þykkvibaer     1           91                                                  san jose     2351892
AccentCity  Unicode   Bâb el Ahmar  ïn Bou Chella  1           91                                                  San Antonio  2375760
Region      Unicode  00             Z9             0           2                                        13         04           397
Population  Integer  7              31480498       0           8           47719.570634  302885.559204  10779                   28754
Latitude    Float    -54.933333     82.483333      1           12          27.188166     21.952614      32.497222  51.15        1038349
Longitude   Float    -179.983333    180            1           14          37.08886      63.22301       35.28      23.8         1167162

The xsv table command takes any CSV data and formats it into aligned columns using elastic tabstops. You'll notice that it even gets alignment right with respect to Unicode characters.
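A much-simplified illustration of that alignment step (real elastic tabstops align per-block, and xsv accounts for Unicode display widths; this sketch just pads each column to its widest cell):

```python
def align(rows):
    """Pad each cell to the width of its column's widest cell.

    Simplification: len() counts code points, not display width,
    so wide or combining characters would misalign here.
    """
    widths = [max(len(cell) for cell in col) for col in zip(*rows)]
    return [
        "  ".join(cell.ljust(w) for cell, w in zip(row, widths)).rstrip()
        for row in rows
    ]
```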

So, this command takes about 12 seconds to run on my machine, but we can speed it up by creating an index and re-running the command:

$ xsv index worldcitiespop.csv
$ xsv stats worldcitiespop.csv --everything | xsv table
...

Which cuts it down to about 8 seconds on my machine. (And creating the index takes less than 2 seconds.)

Notably, the same type of "statistics" command in another CSV command line toolkit takes about 2 minutes to produce similar statistics on the same data set.
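Statistics like mean and standard deviation can be gathered in a single streaming pass over the data. A sketch of Welford's online algorithm (illustrative only; xsv itself builds on the streaming-stats crate):

```python
import math

def online_mean_stddev(values):
    """Welford's one-pass algorithm: mean and population stddev
    without storing the data."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in values:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return (mean, math.sqrt(m2 / n)) if n else (0.0, 0.0)
```

One pass with O(1) memory is what lets stats handle files that don't fit in RAM.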

Creating an index gives us more than just faster statistics gathering. It also makes slice operations extremely fast because only the sliced portion has to be parsed. For example, let's say you wanted to grab the last 10 records:

$ xsv count worldcitiespop.csv
3173958
$ xsv slice worldcitiespop.csv -s 3173948 | xsv table
Country  City               AccentCity         Region  Population  Latitude     Longitude
zw       zibalonkwe         Zibalonkwe         06                  -19.8333333  27.4666667
zw       zibunkululu        Zibunkululu        06                  -19.6666667  27.6166667
zw       ziga               Ziga               06                  -19.2166667  27.4833333
zw       zikamanas village  Zikamanas Village  00                  -18.2166667  27.95
zw       zimbabwe           Zimbabwe           07                  -20.2666667  30.9166667
zw       zimre park         Zimre Park         04                  -17.8661111  31.2136111
zw       ziyakamanas        Ziyakamanas        00                  -18.2166667  27.95
zw       zizalisari         Zizalisari         04                  -17.7588889  31.0105556
zw       zuzumba            Zuzumba            06                  -20.0333333  27.9333333
zw       zvishavane         Zvishavane         07      79876       -20.3333333  30.0333333

These commands are instantaneous because they run in time and memory proportional to the size of the slice (which means they will scale to arbitrarily large CSV data).
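Conceptually, an index of this kind is a table of byte offsets, one per row: slicing seeks directly to the first requested row and parses only the slice. A minimal sketch (hypothetical, not xsv's on-disk index format):

```python
import csv

def build_offset_index(f):
    """Record the byte offset at which each row starts (binary file)."""
    offsets = []
    pos = f.tell()
    for line in f:
        offsets.append(pos)
        pos += len(line)
    return offsets

def slice_rows(f, offsets, start, count):
    """Parse only rows [start, start+count): one seek, then read."""
    f.seek(offsets[start])
    lines = [f.readline().decode("utf-8") for _ in range(count)]
    return list(csv.reader(lines))
```

Without the index, reaching row 3173948 means parsing every row before it; with it, the cost depends only on the size of the slice.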

Switching gears a little bit, you might not always want to see every column in the CSV data. In this case, maybe we only care about the country, city and population. So let's take a look at 10 random rows:

$ xsv select Country,AccentCity,Population worldcitiespop.csv \
  | xsv sample 10 \
  | xsv table
Country  AccentCity       Population
cn       Guankoushang
za       Klipdrift
ma       Ouled Hammou
fr       Les Gravues
la       Ban Phadèng
de       Lüdenscheid      80045
qa       Umm ash Shubrum
bd       Panditgoan
us       Appleton
ua       Lukashenkivske

Whoops! It seems some cities don't have population counts. How pervasive is that?

$ xsv frequency worldcitiespop.csv --limit 5
field,value,count
Country,cn,238985
Country,ru,215938
Country,id,176546
Country,us,141989
Country,ir,123872
City,san jose,328
City,san antonio,320
City,santa rosa,296
City,santa cruz,282
City,san juan,255
AccentCity,San Antonio,317
AccentCity,Santa Rosa,296
AccentCity,Santa Cruz,281
AccentCity,San Juan,254
AccentCity,San Miguel,254
Region,04,159916
Region,02,142158
Region,07,126867
Region,03,122161
Region,05,118441
Population,(NULL),3125978
Population,2310,12
Population,3097,11
Population,983,11
Population,2684,11
Latitude,51.15,777
Latitude,51.083333,772
Latitude,50.933333,769
Latitude,51.116667,769
Latitude,51.133333,767
Longitude,23.8,484
Longitude,23.2,477
Longitude,23.05,476
Longitude,25.3,474
Longitude,23.1,459

(The xsv frequency command builds a frequency table for each column in the CSV data. This one only took 5 seconds.)
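A per-column frequency table amounts to one counter per column. A minimal single-threaded sketch (xsv additionally parallelizes the work when an index is present):

```python
import csv
from collections import Counter

def frequency_tables(lines, limit=5):
    """Count distinct values per column; empty fields count as (NULL)."""
    rows = list(csv.reader(lines))
    header, data = rows[0], rows[1:]
    tables = {}
    for i, field in enumerate(header):
        counts = Counter(row[i] or "(NULL)" for row in data)
        tables[field] = counts.most_common(limit)
    return tables
```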

So it seems that most cities do not have a population count associated with them at all. No matter—we can adjust our previous command so that it only shows rows with a population count:

$ xsv search -s Population '[0-9]' worldcitiespop.csv \
  | xsv select Country,AccentCity,Population \
  | xsv sample 10 \
  > sample.csv
$ xsv table sample.csv
Country  AccentCity       Population
es       Barañáin         22264
es       Puerto Real      36946
at       Moosburg         4602
hu       Hejobaba         1949
ru       Polyarnyye Zori  15092
gr       Kandíla          1245
is       Ólafsvík         992
hu       Decs             4210
bg       Sliven           94252
gb       Leatherhead      43544

Erk. Which country is at? No clue, but the Data Science Toolkit has a CSV file called countrynames.csv. Let's grab it and do a join so we can see which countries these are:

$ curl -LO https://gist.githubusercontent.com/anonymous/063cb470e56e64e98cf1/raw/98e2589b801f6ca3ff900b01a87fbb7452eb35c7/countrynames.csv
$ xsv headers countrynames.csv
1   Abbrev
2   Country
$ xsv join --no-case Country sample.csv Abbrev countrynames.csv | xsv table
Country  AccentCity       Population  Abbrev  Country
es       Barañáin         22264       ES      Spain
es       Puerto Real      36946       ES      Spain
at       Moosburg         4602        AT      Austria
hu       Hejobaba         1949        HU      Hungary
ru       Polyarnyye Zori  15092       RU      Russian Federation | Russia
gr       Kandíla          1245        GR      Greece
is       Ólafsvík         992         IS      Iceland
hu       Decs             4210        HU      Hungary
bg       Sliven           94252       BG      Bulgaria
gb       Leatherhead      43544       GB      Great Britain | UK | England | Scotland | Wales | Northern Ireland | United Kingdom

Whoops, now we have two columns called Country and an Abbrev column that we no longer need. This is easy to fix by re-ordering columns with the xsv select command:

$ xsv join --no-case Country sample.csv Abbrev countrynames.csv \
  | xsv select 'Country[1],AccentCity,Population' \
  | xsv table
Country                                                                              AccentCity       Population
Spain                                                                                Barañáin         22264
Spain                                                                                Puerto Real      36946
Austria                                                                              Moosburg         4602
Hungary                                                                              Hejobaba         1949
Russian Federation | Russia                                                          Polyarnyye Zori  15092
Greece                                                                               Kandíla          1245
Iceland                                                                              Ólafsvík         992
Hungary                                                                              Decs             4210
Bulgaria                                                                             Sliven           94252
Great Britain | UK | England | Scotland | Wales | Northern Ireland | United Kingdom  Leatherhead      43544

Perhaps we can do this with the original CSV data? Indeed we can—because joins in xsv are fast.

$ xsv join --no-case Abbrev countrynames.csv Country worldcitiespop.csv \
  | xsv select '!Abbrev,Country[1]' \
  > worldcitiespop_countrynames.csv
$ xsv sample 10 worldcitiespop_countrynames.csv | xsv table
Country                      City                   AccentCity             Region  Population  Latitude    Longitude
Sri Lanka                    miriswatte             Miriswatte             36                  7.2333333   79.9
Romania                      livezile               Livezile               26      1985        44.512222   22.863333
Indonesia                    tawainalu              Tawainalu              22                  -4.0225     121.9273
Russian Federation | Russia  otar                   Otar                   45                  56.975278   48.305278
France                       le breuil-bois robert  le Breuil-Bois Robert  A8                  48.945567   1.717026
France                       lissac                 Lissac                 B1                  45.103094   1.464927
Albania                      lumalasi               Lumalasi               46                  40.6586111  20.7363889
China                        motzushih              Motzushih              11                  27.65       111.966667
Russian Federation | Russia  svakino                Svakino                69                  55.60211    34.559785
Romania                      tirgu pancesti         Tirgu Pancesti         38                  46.216667   27.1

The !Abbrev,Country[1] syntax means, "remove the Abbrev column and remove the second occurrence of the Country column." Since we joined with countrynames.csv first, the first Country name (fully expanded) is now included in the CSV data.

This xsv join command takes about 7 seconds on my machine. The performance comes from constructing a very simple hash index of one of the CSV data files given. The join command does an inner join by default, but it also has left, right and full outer join support too.
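The hash index behind join can be sketched as a map from join key to rows: index one file, then stream the other and look each key up (lowercasing keys mimics --no-case). An illustrative sketch, not xsv's implementation:

```python
from collections import defaultdict

def inner_join(left, left_key, right, right_key):
    """Hash join: index the right-hand rows by key, then stream the left."""
    index = defaultdict(list)
    for row in right:
        index[row[right_key].lower()].append(row)
    for lrow in left:
        # Only one side is held in memory; the other is streamed.
        for rrow in index.get(lrow[left_key].lower(), []):
            yield lrow + rrow
```

Building the index costs one pass over the smaller file; each probe is then a constant-time lookup, which is why the 3-million-row join above finishes in seconds.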

Installation

Binaries for Windows, Linux and macOS are available from GitHub.

If you're a macOS Homebrew user, then you can install xsv from homebrew-core:

$ brew install xsv

If you're a macOS MacPorts user, then you can install xsv from the official ports:

$ sudo port install xsv

If you're a Nix/NixOS user, you can install xsv from nixpkgs:

$ nix-env -i xsv

Alternatively, you can compile from source by installing Cargo (Rust's package manager) and installing xsv using Cargo:

cargo install xsv

Compiling from this repository also works similarly:

git clone git://github.com/BurntSushi/xsv
cd xsv
cargo build --release

Compilation will probably take a few minutes depending on your machine. The binary will end up in ./target/release/xsv.

Benchmarks

I've compiled some very rough benchmarks of various xsv commands.

Motivation

Here are several valid criticisms of this project:

  1. You shouldn't be working with CSV data because CSV is a terrible format.
  2. If your data is gigabytes in size, then CSV is the wrong storage type.
  3. Various SQL databases provide all of the operations available in xsv with more sophisticated indexing support. And the performance is a zillion times better.

I'm sure there are more criticisms, but the impetus for this project was a 40GB CSV file that was handed to me. I was tasked with figuring out the shape of the data inside of it and coming up with a way to integrate it into our existing system. It was then that I realized that every single CSV tool I knew about was woefully inadequate. They were just too slow or didn't provide enough flexibility. (Another project I had involved a few dozen CSV files. They were smaller than 40GB, but they were each supposed to represent the same kind of data. Yet they all had different and unintuitive column names. Useful CSV inspection tools were critical here—and they had to be reasonably fast.)

The key ingredients for helping me with my task were indexing, random sampling, searching, slicing and selecting columns. All of these things made dealing with 40GB of CSV data a bit more manageable (or dozens of CSV files).

Getting handed a large CSV file once was enough to launch me on this quest. From conversations I've had with others, CSV data files this large don't seem to be a rare event. Therefore, I believe there is room for a tool that has a hope of dealing with data that large.

Naming collision

This project is unrelated to another similar project with the same name: https://mj.ucw.cz/sw/xsv/

Comments
  • xsv partition subcommand

    READY FOR MERGE (I hope).

    I've tried to follow the standard xsv coding style and to re-use existing support code where possible.

    But I wanted to show my initial work and ask for any general feedback now.

    One interesting wrinkle that I noticed: the split command has a --output flag, but it ignores it. I'm thinking that perhaps, instead of having a --filename TEMPLATE argument, both split and partition should have an --output argument that defaults to {}.csv using my new FilenameTemplate type. Would this be a reasonable approach?

    TODO

    • [x] Empty strings in partition column
    • [x] Create output directory if it does not exist
    • [x] Sanitize filenames to contain only shell-safe characters
    • [x] Collisions between sanitized field values
    • [x] Test files with no headers & partitioning based on column number
    • [x] Test --filename argument, including prefix
    • [x] Invalid --filename arguments, including no {} or two {}
    • [x] Modify both partition and split to use the same filename template system, possibly as --output instead of --filename?
    • More as I think of them
    opened by emk 26
  • thread '<main>' panicked at 'index out of bounds: the len is 0 but the index is 9', src/select.rs:352

    I caught this panic while performing a join:

    xsv join --no-case url images.csv url machines.csv
    thread '<main>' panicked at 'index out of bounds: the len is 0 but the index is 9', src/select.rs:352
    Image,index,url,Year,Manufacturer,Model,Title,Serial,Stock,Pricing,Description,index,url
    

    I was able to get by it by flipping the two CSVs to be joined.

    opened by zacstewart 13
  • Workaround for rust 1.27.0 not compiling on macOS 10.10

    xsv 0.13.0 depends on Rust and requires the latest version (1.27.0 as of now). I've found that on my machine Rust fails to compile, probably because I'm running 10.10 Yosemite (my Mac is ancient):

    https://github.com/rust-lang/rust/issues/51838

    Luckily there is a workaround for this.

    In the general case: you want to install package X, which depends on the latest version of package Y, and for some reason (poverty, laziness) you can't use the latest version of package Y on your machine. Just cd into /usr/local/Cellar/Y and symlink the last known good version to the latest version. This was inspired by https://stackoverflow.com/questions/19664535/how-can-i-prevent-homebrew-from-upgrading-vtk-dependency-for-pcl/19665408#19665408

    In this specific case

    cd /usr/local/Cellar/rust
    ln -s 1.24.1 1.27.0

    This tricks Homebrew into thinking it already has 1.27.0, so it won't download it, fail to compile, and end up all fubar.

    question 
    opened by STA-WSYNC 9
  • Add homebrew-core installation instructions

    xsv was added to homebrew-core in https://github.com/Homebrew/homebrew-core/pull/11427, so this change mentions that installation option in the readme. The wording was adapted from ripgrep's readme: https://github.com/BurntSushi/ripgrep/tree/685cc6c5622b02fd5a53c8bc953176b159c780e4#installation

    opened by josephfrazier 8
  • Bug in search regex ?

    Was messing around with the Tranco list and xsv (0.13.0) today and came across a possible bug.

    xsv search -s 2 'uk$' top-1m.csv | grep -Ev 'uk$'         
    1,google.com
    

    Surely that's not right ?

    opened by udf2457 7
  • Q: Does xsv have an equivalent operation to csvkit's csvclean ?

    I need a fast filter to parse csv file lines and drop those that are unparsable, as with https://csvkit.readthedocs.io/en/1.0.2/scripts/csvclean.html. csvclean works in a shell pipe (PR 781) but is limited in speed.

    Does xsv have a similar method?

    opened by kmatt 7
  • Partition into files based on columns?

    Wow, xsv is cool!

    I was exploring Pachyderm for large-scale data ingestion tasks, and one use-case popped up fairly often. This is basically a map/reduce implementation using an immutable, versioned cluster file system, and worker jobs running inside of Docker containers. You read files from /pfs/$INPUT_NAME/* and write them to /pfs/out.

    Imagine an input file which looks like:

    value,count
    FL,11957
    CA,11816
    TX,10157
    IL,5633
    OH,5556
    GA,4535
    NY,4088
    MI,4008
    NJ,3890
    CO,3690
    

    I would love to be able to write something like:

    xsv partition value /pfs/out
    

    ...and create a file named /pfs/out/FL containing 11957, a file named /pfs/out/CA containing 11816, and so on. It would also be OK if the files contained the partitioning column: FL,11957 and CA,11816. If there are more than two columns, they should all be included. All rows with the same value column should be in the appropriate output file.

    Does xsv have support for anything like this? Would you like support for something like this? If not, no worries, it would be a trivial standalone tool using your csv library. But I thought I'd check whether you wanted it upstream first.

    enhancement 
    opened by emk 7
  • Compilation failure under rustc 1.0.0-beta.2

    This could be me doing something wrong...

    $ cargo build --release
       Compiling byteorder v0.3.7
       Compiling threadpool v0.1.4
       Compiling streaming-stats v0.1.23
       Compiling regex v0.1.28
       Compiling rustc-serialize v0.3.12
       Compiling libc v0.1.6
       Compiling log v0.3.1
    /Users/luis.casillas/.cargo/registry/src/github.com-1ecc6299db9ec823/streaming-stats-0.1.23/src/lib.rs:1:1: 1:41 error: unstable feature
    /Users/luis.casillas/.cargo/registry/src/github.com-1ecc6299db9ec823/streaming-stats-0.1.23/src/lib.rs:1 #![feature(collections, core, std_misc)]
                                                                                                             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    note: this feature may not be used in the beta release channel
    error: aborting due to previous error
    Build failed, waiting for other jobs to finish...
    Could not compile `streaming-stats`.
    
    $ git log | head -n 5
    commit c19cf0bf43d001ee0555d943eb6703c058152877
    Author: Andrew Gallant <[email protected]>
    Date:   Thu Apr 16 17:50:25 2015 -0400
    
        crates.io merit badge
    
    opened by ldcasillas-progreso 7
  • Feature Request: be able to specify sample-size as a percentage

    If sample-size is between 0 and 1 exclusive, it will be treated as a percentage of the total rowcount of the CSV.

    So if I want my sample to be 20% of the CSV: xsv sample 0.20 file.csv -o output.csv

    Otherwise, if sample-size > 1, it will be treated as a rowcount, like before.

    opened by jqnatividad 6
  • Trying to concat rows of ~55000 CSV files with a cumulative size of 1.4gb, xsv killed by oom_reaper

    Hi, so I have 55161 csv files in a directory (1.csv to 55161.csv) . I'm trying to concat them all with:

    xsv cat rows $(ls *.csv | sort -n) -o daily.csv
    

    But xsv is being killed by the oom_reaper after exhausting all of my 32gb of RAM. Does anyone know why this is happening? I wouldn't really expect xsv cat to use very much memory at all, much less over 30gb of memory!

    Does anyone know what's going on?

    bug 
    opened by smabie 6
  • help <command>?

    xsv help currently shows the basic help text; however, xsv help <command> doesn't do anything useful (it shows the same text). Command-specific help goes through xsv <command> --help.

    Since it already namespaces, it would be nice if help could be given a command to get details. It would also be convenient when trying to find how to do something: xsv help to get the listing, then <UP> <space> <command> to get more specific help on the command.

    opened by xmo-odoo 6
  • "@list" external parameter list

    Hi, is there an equivalent to xsv cat --output out.csv @list.txt to support fetching parameters from a separate file?

    e.g.: when using cat to assemble a lot of splits.

    • Every single split is listed in list.txt
    split1.csv
    split2.csv
    split3.csv
    …
    
    • The above command is used to assemble the splits from the list into out.csv
    opened by DrYak 0
  • Derive `Default` for `TypedMinMax`

    Hello,

    I am reviewing potential clippy false positives/negatives after a change to clippy::derivable_impls. In this case the lint seems correct; if not, your feedback is welcome.

    opened by kraktus 0
  • --compress-program option for xsv split?

    Today I learned how fast xsv split can be. The only thing that I feel that it's missing is a --compress-program switch (or similar) that would allow the user to compress created split files. @BurntSushi do you feel like it's a good fit for this tool?

    opened by d33tah 2
  • Bump crossbeam-channel from 0.2.4 to 0.4.4

    Bumps crossbeam-channel from 0.2.4 to 0.4.4.

    Changelog

    Sourced from crossbeam-channel's changelog.

    Version 0.8.1

    • Support targets that do not have atomic CAS on stable Rust (#698)

    Version 0.8.0

    • Bump the minimum supported Rust version to 1.36.
    • Bump crossbeam-channel to 0.5.
    • Bump crossbeam-deque to 0.8.
    • Bump crossbeam-epoch to 0.9.
    • Bump crossbeam-queue to 0.3.
    • Bump crossbeam-utils to 0.8.

    Version 0.7.3

    • Fix breakage with nightly feature due to rust-lang/rust#65214.
    • Bump crossbeam-channel to 0.4.
    • Bump crossbeam-epoch to 0.8.
    • Bump crossbeam-queue to 0.2.
    • Bump crossbeam-utils to 0.7.

    Version 0.7.2

    • Bump crossbeam-channel to 0.3.9.
    • Bump crossbeam-epoch to 0.7.2.
    • Bump crossbeam-utils to 0.6.6.

    Version 0.7.1

    • Bump crossbeam-utils to 0.6.5.

    Version 0.7.0

    • Remove ArcCell, MsQueue, and TreiberStack.
    • Change the interface of ShardedLock to match RwLock.
    • Add SegQueue::len().
    • Rename SegQueue::try_pop() to SegQueue::pop().
    • Change the return type of SegQueue::pop() to Result.
    • Introduce ArrayQueue.
    • Update dependencies.

    Version 0.6.0

    • Update dependencies.

    Version 0.5.0

    • Update crossbeam-channel to 0.3.
    • Update crossbeam-utils to 0.6.
    • Add AtomicCell, SharedLock, and WaitGroup.

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump smallvec from 0.6.5 to 0.6.14

    Bumps smallvec from 0.6.5 to 0.6.14.

    Release notes

    Sourced from smallvec's releases.

    v0.6.14

    • Fix a possible buffer overflow in insert_many (#252, #254).

    v0.6.13

    • Use the maybe-uninit crate in place of soon-to-be-deprecated std::mem::uninitialized (#180). When built with Rust 1.36 or later, this fixes a source of undefined behavior. It also fixes deprecation warnings in Rust 1.39 and later, and test failures when run in MIRI. In Rust 1.35 and earlier it provides some safety improvements but does not completely eliminate undefined behavior. (However, we are not aware of any cases where the undefined behavior causes bugs in practice in those toolchains.)

    v0.6.12

    • Move code using default fn into its own module (#161).

    v0.6.11

    • The unstable alloc feature is no longer needed. This crate can now build with the std feature disabled on stable Rust 1.36 or later (#159).

    v0.6.10

    • Fix a bug in extend with certain iterators (#150).
    • Fix soundness bugs in the grow method (#151, #152).
    • Fix typo in docs (#144).

    v0.6.9

    • Remove dependency on unreachable crate (#140)

    v0.6.8

    • Don't leak memory if an iterator panics during extend (#137)
    • Update the unstable union feature for better forward compatibility (#135)

    v0.6.7

    • Add an optional feature to use the unstable may_dangle attribute (#133).

    v0.6.6

    • Fix possible over-allocation in from_slice (#122)
    • Optional nightly-only specialization feature for from_slice optimization (#123)
    • New from_raw_parts constructor (#130)
    • Documentation and testing improvements (#125, #129)
Latest release: 0.13.0

Owner: Andrew Gallant
Related projects

• qsv - Ultra-fast CSV data-wrangling toolkit: a command line program for indexing, slicing, analyzing, splitting, enriching, validating & joining CSV files. (Joel Natividad, 398 stars, Jan 3, 2023)
• rsv - A command line tool, written in Rust, for small and big CSV, TXT and Excel files (especially >10G). (Zhuang Dai, 39 stars, Jan 30, 2023)
• promkit - A toolkit for building your own interactive command-line tools in Rust, utilizing crossterm. (70 stars, Dec 18, 2022)
• swmon - Small command-line tool to switch monitor inputs from the command line. (William D. Jones, 5 stars, Aug 20, 2022)
• 📺 tv (Tidy Viewer) - A cross-platform CLI csv pretty printer that uses column styling to maximize viewer enjoyment. (Alex Hallam, 1.8k stars, Jan 2, 2023)
• Aztec Connect Data Gobbler - A tool for collecting rollup blocks from the Aztec Connect rollup and exporting them to csv, using only L1 as its source. (Lasse Herskind, 6 stars, Feb 17, 2023)
• sniffer - A tool to quickly inspect csv and flat-file files for basic information. (Daniel B, 10 stars, Apr 4, 2023)
• fast-csv-to-json - A simple CLI tool for converting CSV file content to JSON. ("A quick CSV-to-JSON CLI tool I built in an hour and then spent two days optimizing.") (Ming Chang, 3 stars, Apr 5, 2023)
• csv-grep - A simple and efficient terminal UI, written with ratatui.rs, for reading, viewing and quickly analysing csv files right on the terminal. (Anthony Ezeabasili, 16 stars, Mar 10, 2024)
• ⚡️ A lightning-fast and minimal calendar command line, similar to cal, written in Rust 🦀. (Arthur Henrique, 36 stars, Jan 1, 2023)
• A blazing fast ⚡ command line license generator for your open source projects, written in Rust 🚀. (Shoubhit Dash, 43 stars, Dec 30, 2022)
• line2httppost - Simple tool that reads lines from stdin and sends each line as a separate POST request to a specified URL (the TCP connection is reused). (Vitaly Shukela, 3 stars, Jan 3, 2023)
• Pink - A command-line tool inspired by the Unix man command; it displays custom-formatted text pages in the terminal using a subset of HTML-like tags. (3 stars, Nov 2, 2023)
• clap - A full featured, fast Command Line Argument Parser for Rust: simple-to-use, efficient, and full-featured. (10.4k stars, Jan 10, 2023)
• A blazingly fast command-line tool for converting Chinese punctuation to English punctuation. (Hogan Lee, 9 stars, Dec 23, 2022)
• uwuifyy - A robust, customizable, blazingly-fast, efficient and easy-to-use command line application to uwu'ify your text. (Hamothy, 43 stars, Dec 12, 2022)
• SKYULL - A command-line interface (CLI), in development, that creates REST API project structure templates so that starting a new project is easy and fast: with just a few primary configurations, such as the project name, you can get going quickly. (Gabriel Michaliszen, 4 stars, May 9, 2023)
• moon-phases - A command-line application to show the moon phase for a given date and time as a text string, emoji, or numeric value. (mirrorwitch, 3 stars, Oct 7, 2023)