Overview

LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values.


Authors: Sanjay Ghemawat ([email protected]) and Jeff Dean ([email protected])

Features

  • Keys and values are arbitrary byte arrays.
  • Data is stored sorted by key.
  • Callers can provide a custom comparison function to override the sort order.
  • The basic operations are Put(key,value), Get(key), Delete(key).
  • Multiple changes can be made in one atomic batch.
  • Users can create a transient snapshot to get a consistent view of data.
  • Forward and backward iteration is supported over the data.
  • Data is automatically compressed using the Snappy compression library.
  • External activity (file system operations etc.) is relayed through a virtual interface so users can customize the operating system interactions.

Documentation

LevelDB library documentation is online and bundled with the source code.

Limitations

  • This is not a SQL database. It does not have a relational data model, it does not support SQL queries, and it has no support for indexes.
  • Only a single process (possibly multi-threaded) can access a particular database at a time.
  • There is no client-server support built in to the library. An application that needs such support must wrap its own server around the library.

Getting the Source

git clone --recurse-submodules https://github.com/google/leveldb.git

Building

This project supports CMake out of the box.

Building for POSIX

Quick start:

mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Release .. && cmake --build .

Building for Windows

First generate the Visual Studio 2017 project/solution files:

mkdir build
cd build
cmake -G "Visual Studio 15" ..

The default configuration builds for x86. For a 64-bit build, run:

cmake -G "Visual Studio 15 Win64" ..

To compile the Windows solution from the command-line:

devenv /build Debug leveldb.sln

or open leveldb.sln in Visual Studio and build from within.

Please see the CMake documentation and CMakeLists.txt for more advanced usage.

Contributing to the leveldb Project

The leveldb project welcomes contributions. leveldb's primary goal is to be a reliable and fast key/value store. Changes that are in line with the features/limitations outlined above, and meet the requirements below, will be considered.

Contribution requirements:

  1. Tested platforms only. We generally will only accept changes for platforms that are compiled and tested. This means POSIX (for Linux and macOS) or Windows. Very small changes will sometimes be accepted, but consider that more of an exception than the rule.

  2. Stable API. We strive very hard to maintain a stable API. Changes that require modifications to projects using leveldb may be rejected if they do not provide a sufficient benefit to the project.

  3. Tests: All changes must be accompanied by a new (or changed) test, or a sufficient explanation as to why a new (or changed) test is not required.

  4. Consistent Style: This project conforms to the Google C++ Style Guide. To ensure your changes are properly formatted please run:

    clang-format -i --style=file <file>
    

Submitting a Pull Request

Before any pull request will be accepted the author must first sign a Contributor License Agreement (CLA) at https://cla.developers.google.com/.

Squash your changes down to a single commit and rebase on google/leveldb/master. This keeps the commit timeline linear and more easily synced with the internal repository at Google. More information is available at GitHub's About Git rebase page.

Performance

Here is a performance report (with explanations) from the run of the included db_bench program. The results are somewhat noisy, but should be enough to get a ballpark performance estimate.

Setup

We use a database with a million entries. Each entry has a 16-byte key and a 100-byte value. Values used by the benchmark compress to about half their original size.

LevelDB:    version 1.1
Date:       Sun May  1 12:11:26 2011
CPU:        4 x Intel(R) Core(TM)2 Quad CPU    Q6600  @ 2.40GHz
CPUCache:   4096 KB
Keys:       16 bytes each
Values:     100 bytes each (50 bytes after compression)
Entries:    1000000
Raw Size:   110.6 MB (estimated)
File Size:  62.9 MB (estimated)

Write performance

The "fill" benchmarks create a brand new database, in either sequential, or random order. The "fillsync" benchmark flushes data from the operating system to the disk after every operation; the other write operations leave the data sitting in the operating system buffer cache for a while. The "overwrite" benchmark does random writes that update existing keys in the database.

fillseq      :       1.765 micros/op;   62.7 MB/s
fillsync     :     268.409 micros/op;    0.4 MB/s (10000 ops)
fillrandom   :       2.460 micros/op;   45.0 MB/s
overwrite    :       2.380 micros/op;   46.5 MB/s

Each "op" above corresponds to a write of a single key/value pair. I.e., a random write benchmark goes at approximately 400,000 writes per second.

Each "fillsync" operation costs much less (0.3 millisecond) than a disk seek (typically 10 milliseconds). We suspect that this is because the hard disk itself is buffering the update in its memory and responding before the data has been written to the platter. This may or may not be safe based on whether or not the hard disk has enough power to save its memory in the event of a power failure.

Read performance

We list the performance of reading sequentially in both the forward and reverse direction, and also the performance of a random lookup. Note that the database created by the benchmark is quite small. Therefore the report characterizes the performance of leveldb when the working set fits in memory. The cost of reading a piece of data that is not present in the operating system buffer cache will be dominated by the one or two disk seeks needed to fetch the data from disk. Write performance will be mostly unaffected by whether or not the working set fits in memory.

readrandom  : 16.677 micros/op;  (approximately 60,000 reads per second)
readseq     :  0.476 micros/op;  232.3 MB/s
readreverse :  0.724 micros/op;  152.9 MB/s

LevelDB compacts its underlying storage data in the background to improve read performance. The results listed above were done immediately after a lot of random writes. The results after compactions (which are usually triggered automatically) are better.

readrandom  : 11.602 micros/op;  (approximately 85,000 reads per second)
readseq     :  0.423 micros/op;  261.8 MB/s
readreverse :  0.663 micros/op;  166.9 MB/s

Some of the high cost of reads comes from repeated decompression of blocks read from disk. If we supply enough cache to leveldb so that it can hold the uncompressed blocks in memory, read performance improves again:

readrandom  : 9.775 micros/op;  (approximately 100,000 reads per second before compaction)
readrandom  : 5.215 micros/op;  (approximately 190,000 reads per second after compaction)

Repository contents

See doc/index.md for more explanation. See doc/impl.md for a brief overview of the implementation.

The public interface is in include/leveldb/*.h. Callers should not include or rely on the details of any other header files in this package. Those internal APIs may be changed without warning.

Guide to header files:

  • include/leveldb/db.h: Main interface to the DB: Start here.

  • include/leveldb/options.h: Control over the behavior of an entire database, and also control over the behavior of individual reads and writes.

  • include/leveldb/comparator.h: Abstraction for user-specified comparison function. If you want just bytewise comparison of keys, you can use the default comparator, but clients can write their own comparator implementations if they want custom ordering (e.g. to handle different character encodings, etc.).

  • include/leveldb/iterator.h: Interface for iterating over data. You can get an iterator from a DB object.

  • include/leveldb/write_batch.h: Interface for atomically applying multiple updates to a database.

  • include/leveldb/slice.h: A simple module for maintaining a pointer and a length into some other byte array.

  • include/leveldb/status.h: Status is returned from many of the public interfaces and is used to report success and various kinds of errors.

  • include/leveldb/env.h: Abstraction of the OS environment. A posix implementation of this interface is in util/env_posix.cc.

  • include/leveldb/table.h, include/leveldb/table_builder.h: Lower-level modules that most clients probably won't use directly.

Comments
  • Comprehensive, Native Windows Support


    Now, before you tell me this is a lot of work: I know, and am working on it (and almost done). Ideally, I would like to have my changes merged here, so I have a few questions and concerns for my current port.

    Questions

    Should I target a specific C++ standard?

    Currently, my code depends on a few C++11 features, which can be easily removed with a few macros. This makes the code less readable; however, if C++03 support is desired, I will gladly change my implementation to conform to the older standard.

    How to handle Unicode filesystem support?

    Currently, LevelDB uses char-based (narrow) strings for all filesystem operations, which does not translate well to Windows systems (since narrow strings use the ANSI or OEM legacy codepages, not UTF-8, for backwards compatibility). This means paths using international characters or emojis are not supported by a simple port, which I consider an undesirable outcome for a modern library. None of the current forks of levelDB solve this fundamental issue, leading me to create my own implementation. Possible solutions include:

    1. A narrow (UTF-8) API on *Nix, and a wide (UTF-16) API on Windows, using a typedef to determine the proper path type.
    2. Converting all narrow strings from UTF-8 to UTF-16 before calling WinAPI functions.
    3. Providing both a narrow (ANSI) and wide (UTF-16) API on Windows.

    The 2nd option, although the least amount of work, is the least appealing to me, since the expected encoding for paths from levelDB would then conflict with the entirety of the WinAPI. The 3rd option duplicates code to support both the narrow and wide WinAPI, which would increase the amount of work required to maintain levelDB. The first option is a happy medium: it minimizes redundancy and is consistent with expectations about *Nix and Windows paths. I am, however, amenable to any suggestions the levelDB authors may have.

    Intellectual Property

    To emulate the behavior of mmap on Windows, I used a very lightweight library (<250 lines of code) from Steven Lee, mman-win32. However, looking over your contributor license agreement, it seems that my port would not satisfy Google's CLA until I remove this code from my implementation. If this is the case, I could easily use the raw WinAPI functions rather than the emulated mmap in my Windows port. Please notify me if I should remove this code prior to submitting a pull request.

    Other Changes

    CMake Build System

    I introduced a CMake build system, which retains most of the same logic as the existing Makefile. The existing Makefile has not been deprecated.

    AppVeyor Continuous Integration

    To ensure changes do not break the Windows builds, I am planning to add an AppVeyor configuration, which enables continuous integration on Windows using MSVC.

    Summary

    If there is still interest for native Windows support, and the proposed changes are amenable to the levelDB authors, I would gladly submit a pull request.

    enhancement 
    opened by Alexhuszagh 27
  • Provide a shared library


    Original issue 27 created by quadrispro on 2011-08-09T12:57:55.000Z:

    Please add a target into the Makefile to compile a shared library object.

    Thanks in advance for any reply.

    opened by cmumford 22
  • CMake Support


    Hi, @cmumford

    Does it make sense if we add CMake support to leveldb? If the answer is YES, I will try to do it.

    There are some useful LLVM tools like clang-tidy/woboq that need CMake support. We would get automatic code formatting, static checks, and an online code browser if there were a CMakeLists.txt.

    Any comments are appreciated. Thanks.

    enhancement 
    opened by liuchang0812 19
  • Compaction error: IO error: .../xxxxx.ldb: Too many open files


    I also read the issue 181

    LevelDB's above a certain size (about 40 GB) seems to cause leveldb to open every single file in the database without closing anything in between.

    Also, it seems it opens every file twice, for some reason.

    My problem is almost the same.

    OS: FreeBSD 10.1-RELEASE amd64
    Leveldb: master branch (also tested 1.18, 1.17, ..., 1.14)
    Dataset: 99G, snappy compressed, 58612 *.sst files
    ulimit -n: 706995
    kern.maxfiles: 785557
    kern.maxfilesperproc: 706995

    The dataset was generated by leveldb 1.8.0, running for several months. Last week I restarted the server, and then the issue occurred.

    It seems to open every *.sst file twice, and never closes them.

    $ fstat -m|grep leveldb|wc
      117223 1055007 8668825
    

    58612 * 2 ~= 117223 < 706995 (system limit)

    $ fstat -m|grep leveldb
    USER     CMD          PID   FD MOUNT      INUM MODE         SZ|DV R/W
    root     leveldb-tools 67098   67 /         92326 -rw-r--r--  1594319  r
    root     leveldb-tools 67098   68 /         92326 -rw-r--r--  1594319  r
    root     leveldb-tools 67098   69 /         45578 -rw-r--r--  2124846  r
    root     leveldb-tools 67098   70 /         45578 -rw-r--r--  2124846  r
    root     leveldb-tools 67098   71 /         45579 -rw-r--r--  2123789  r
    root     leveldb-tools 67098   72 /         45579 -rw-r--r--  2123789  r
    root     leveldb-tools 67098   73 /         45580 -rw-r--r--  2125455  r
    root     leveldb-tools 67098   74 /         45580 -rw-r--r--  2125455  r
    root     leveldb-tools 67098   75 /         45581 -rw-r--r--  2123795  r
    root     leveldb-tools 67098   76 /         45581 -rw-r--r--  2123795  r
    root     leveldb-tools 67098   77 /         45582 -rw-r--r--  2122645  r
    root     leveldb-tools 67098   78 /         45582 -rw-r--r--  2122645  r
    root     leveldb-tools 67098   79 /         45583 -rw-r--r--  2119487  r
    root     leveldb-tools 67098   80 /         45583 -rw-r--r--  2119487  r
    root     leveldb-tools 67098   81 /         45584 -rw-r--r--  2117737  r
    root     leveldb-tools 67098   82 /         45584 -rw-r--r--  2117737  r
    ... more ....
    

    As above, each file is opened twice (the same inode numbers: 92326, 92326, 45578, 45578, ...)

    $ tail -f LOG
    2016/08/10-11:17:48.121149 802006400 Recovering log #18223888
    2016/08/10-11:17:48.329778 802006400 Delete type=2 #18223889
    2016/08/10-11:17:48.333491 802006400 Delete type=3 #18223887
    2016/08/10-11:17:48.333993 802006400 Delete type=0 #18223888
    2016/08/10-11:17:48.388989 802007400 Compacting 58608@0 + 0@1 files
    2016/08/10-11:20:14.324576 802007400 compacted to: files[ 58608 0 0 0 0 0 0 ]
    2016/08/10-11:20:14.325108 802007400 Compaction error: IO error: ..../leveldb/18223891.ldb: Too many open files
    

    After the IO error, the number of open files dropped to 87580:

    fstat -m | grep leveldb | wc
       87580  788220 6476498
    

    And the program consumes 100% CPU:

      PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
    67098 root          6  35    0  5293M  3607M uwait   4  48:55 100.00% leveldb-tools
    

    But there is no disk I/O at all:

      PID USERNAME     VCSW  IVCSW   READ  WRITE  FAULT  TOTAL PERCENT COMMAND
    67098 root            0    216      0      0      0      0   0.00% leveldb-tools
    

    After that, seeks, gets, and puts all fail.

    I've tried changing leveldb_options_set_max_open_files() to 100, 1024, and 400000, but it did not work.

    bug 
    opened by rchunping 18
  • Concurrency support for multiple processes (1 exclusive initializer / n readers)


    Original issue 176 created by shri314 on 2013-06-10T20:03:38.000Z:

    Can the designers of leveldb explain the rationale behind the design decision of not supporting multiple processes in the leveldb implementation?

    The documentation clearly says, under Concurrency section that: "A database may only be opened by one process at a time. The leveldb implementation acquires a lock from the operating system to prevent misuse."

    Currently I can see that when one process opens leveldb, it uses fcntl with an RW (exclusive) lock. However, this is severely limiting, as no other process can ever open the same database, even if it only wants to inspect the database contents for read-only purposes.

    The use case for example is - one process exclusively opens leveldb database and fills up the database, then closes it. Then n different processes start reading that database.

    question 
    opened by cmumford 17
  • There is a static initializer generated in util/comparator.cc


    Original issue 75 created by [email protected] on 2012-03-13T10:14:38.000Z:

    Static initializers are totally fine in 99% of the projects. However in Chrome we are trying to remove them as they significantly slow down startup due to disk seeks.

    There is only one static initializer generated by leveldb:

        $ nm libleveldb.a | grep _GLOBAL__I
        0000000000000050 t _GLOBAL__I__ZN7leveldb10ComparatorD2Ev

    A global instance of BytewiseComparatorImpl is created at static initialization time in util/comparator.cc:

        // Intentionally not destroyed to prevent destructor racing
        // with background threads.
        static const Comparator* bytewise = new BytewiseComparatorImpl;

        const Comparator* BytewiseComparator() { return bytewise; }

    I tried to replace BytewiseComparator() with CreateBytewiseComparator(), so that it returns a new instance every time it is called. But then I ran into ownership issues when it is used in the Options class. I initially made Options call CreateBytewiseComparator() in its constructor and delete it in its destructor (I also provided the correct implementations of the copy constructor/assignment operator). The problem is that the comparator must live longer than the Options instance that owns it, since the client seems to still use the pointer after Options goes out of scope.

    Therefore I was also thinking about a totally different approach and wanted to add atomicops and CallOnce (GoogleOnceInit) from V8 to leveldb. That way we can keep BytewiseComparator() as it is and initialize the global instance the first time it is used. Adding all these dependencies might seem overkill. This is why I'm not directly sending a CL to you. They might serve you later though.

    What do you think?

    opened by cmumford 17
  • Add DB::SuspendCompactions() and DB:: ResumeCompactions() methods


    Original issue 184 created by chirino on 2013-07-01T13:33:37.000Z:

    If an application wants to take a consistent backup of the leveldb data files, it needs to ensure that the background compaction threads are not modifying those files.

    opened by cmumford 15
  • Xcode 9 / Swift 4 warnings


    There are 3 warnings when building a project in Xcode 9 with Swift 4.

    Two warnings are the same, for lines 274 and 275: Possible misuse of comma operator here - Cast expression to void to silence warning

    and on line 1350: Code will never be executed

    opened by saldous 14
  • Add O_CLOEXEC to open calls.


    This prevents file descriptors from leaking to child processes.

    When compiled for older (pre-2.6.23) kernels which lack support for O_CLOEXEC, there is no change in behavior. With newer kernels, child processes will no longer inherit leveldb's file handles, which reduces the chances of accidentally corrupting the database.

    Fixes #623

    cla: yes 
    opened by adam-azarchs 13
  • 'string' file not found



    Getting this error while compiling an iOS project. It looks like there is some C++ code in the project which is not being compiled properly.

        /Users/cvi/Desktop/Ritesh/quintessence-learning/iOSApp/Pods/leveldb-library/include/leveldb/slice.h:21:10: error: 'string' file not found
        #include <string>
                 ^
        :0: error: could not build Objective-C module 'CoreFoundation'

    question 
    opened by ritesh-chandora 13
  • LevelDB on Windows


    Hi, I used MSYS2 and the MinGW compiler. If this pull request is interesting for you, please merge; if not, simply reject. I tested the code on Windows/Linux/macOS; it is used in the FastoNoSQL application.

    opened by topilski 12
  • [BUG] LevelDB data loss after a crash when deployed on GlusterFS


    Description

    We run a simple workload on LevelDB that inserts two key-value pairs. The two inserts end up going to different log files, and the first insert is set as asynchronous.

    The file system trace we observed is shown below:

    1 append("3.log") # first insert
    2 create("4.log")
    3 close("3.log")
    4 append("4.log") # second insert
    5 fdatasync("4.log")
    

    When deployed on GlusterFS, the first append (line 1) may return successfully, yet the data fails to persist to disk. This is due to a common write optimization in distributed file systems, which delays write submission to the server and tells the application that the write has finished without error.

    When any failure happens during the write submission, GlusterFS will make close (line 3) return -1 to propagate the error. However, since LevelDB doesn't check the error returned by close, it is not aware of any error that happened during the first insert.

    In GlusterFS, fdatasync("4.log") will only persist data in 4.log, not 3.log. Therefore, if any crash happens after the fdatasync (line 5), LevelDB will not recover the first insert after reboot.

    As a consequence, there is data loss of the first insert but not the second, which violates the ordering guarantee provided by LevelDB.

    Fix

    To fix the problem, we could add error-handling logic for the close operation. Basically, when an error happens, we should consider the previous append to have failed, and either redo it or call fsync on that specific log file to force the file system to persist the write.

    opened by cns2022 0
  • Unused warn in `third_party/benchmark/src/complexity.cc`


    I tried to build the source shortly after I cloned the repo:

    [ 71%] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/complexity.cc.o
    leveldb/third_party/benchmark/src/complexity.cc:85:10: error: variable 'sigma_gn' set but not used [-Werror,-Wunused-but-set-variable]
      double sigma_gn = 0.0;
             ^
    1 error generated.
    make[2]: *** [third_party/benchmark/src/CMakeFiles/benchmark.dir/complexity.cc.o] Error 1
    make[1]: *** [third_party/benchmark/src/CMakeFiles/benchmark.dir/all] Error 2
    make: *** [all] Error 2
    

    It seems that sigma_gn is not used anywhere.

    The solution is to simply remove these two lines:

    LeastSq MinimalLeastSq(const std::vector<int64_t>& n,
                           const std::vector<double>& time,
                           BigOFunc* fitting_curve) {
    -  double sigma_gn = 0.0;
    + //  double sigma_gn = 0.0;
      double sigma_gn_squared = 0.0;
      double sigma_time = 0.0;
      double sigma_time_gn = 0.0;
    
      // Calculate least square fitting parameter
      for (size_t i = 0; i < n.size(); ++i) {
        double gn_i = fitting_curve(n[i]);
    -     sigma_gn += gn_i;
    + //    sigma_gn += gn_i;
        sigma_gn_squared += gn_i * gn_i;
        sigma_time += time[i];
        sigma_time_gn += time[i] * gn_i;
      }
    
    opened by JasonkayZK 2
  • Throw specific exception instead of assert


    I noticed that there are many asserts in the program which may cause levelDB to crash. May I know whether there are specific reasons to use asserts instead of throwing specific exceptions?

    For example, in write_batch.cc

    void WriteBatchInternal::SetContents(WriteBatch* b, const Slice& contents) {
      assert(contents.size() >= kHeader);
      b->rep_.assign(contents.data(), contents.size());
    }
    

    Thank you.

    opened by DavidXU12345 0
Releases(1.23)
  • 1.23(Feb 23, 2021)

    • Sync MANIFEST before closing in db_impl when creating a new DB. Add logging with debugging information when failing to load a version set.
    • Optimize leveldb block seeks to utilize the current iterator location. This is beneficial when iterators are reused and seeks are not random but increasing. It is additionally beneficial with larger block sizes and keys with common prefixes.
    • Merge pull request #862 from rex4539:https
    • Documentation fixes
    • Merge pull request #855 from cmumford/submodule-fix
    • (test) Merge pull request #853 from cmumford:benchmark
    • Merge pull request #854 from cmumford:printf-fix
    • (cmumford/printf-fix) Fixed fprintf of 64-bit value.
    • (cmumford/benchmark) Added google/benchmark submodule.
    • Internal test cleanup
    • Internal cleanup migrating StatusOr.
    • Merge pull request #822 from jl0x61:bugFix
    • Merge pull request #819 from wzk784533:master
    • avoid unnecessary memory copy
    • Merge pull request #798 from lntotk:master
    • Fix accidental double std:: qualifiers.
    • Add some std:: qualifiers to types and functions.
    • Switch from C headers to C++ headers.
    • change const to constexpr
    • remove unnessary status judge
    • Remove leveldb::port::kLittleEndian.
    • Remove Windows workarounds in some tests.
    • Add Env::Remove{File,Dir} which obsolete Env::Delete{File,Dir}.
    • Defend against inclusion of windows.h in tests that invoke Env::DeleteFile.
    • Add WITHOUT ROWID to SQLite benchmark.
    • Merge pull request #756 from pwnall/third_party_2
    • Switch testing harness to googletest.
    • Move CI to Visual Studio 2019.
    • Allow different C/C++ standards when this is used as a subproject.
    • Align CMake configuration with related projects.
    • Remove redundant PROJECT_SOURCE_DIR usage from CMake config.
    • Fix installed target definition.
    • Added return in Version::Get::State::Match to quiet warning.
    • Using CMake's check_cxx_compiler_flag to check support for -Wthread-safety.
    • Fix tsan problem in env_test.
    • Merge pull request #698 from neal-zhu:master
    • Simplify unlocking in DeleteObsoleteFiles.
    • Add "leveldb" subdirectory to public include paths.
    • Align EnvPosix and EnvWindows.
    • Disable exceptions and RTTI in CMake configuration.
    • cache Saver in State object fix bug(uninitialized options pointer in State)
    • remove TODO in Version::ForEachOverlapping
    • use ForEachOverlapping to impl Get
    • Merge pull request #386 from ivanabc:master
    • unsigned char -> uint8_t
    • Add explicit typecasts to avoid compiler warning.
    • Guard DBImpl::versions_ by mutex_.
    • Converted two for-loops to while-loops.
    • Switch to using C++ 11 override specifier.
    • Added unit test for InternalKey::DecodeFrom with empty string.
    • Merge pull request #411 from proller:assert1
    • Using std::ostringstream in key DebugString.
    • Merge pull request #457 from jellor:patch-2
    • Fix EnvPosix tests on Travis CI.
    • Merge pull request #624 from adam-azarchs:master
    • Clean up util/coding.{h,cc}.
    • Initialize Stats::start_ before first use in Stats::Start().
    • Merge pull request #365 from allangj:c-strict-prototypes
    • Add argument definition for void c functions.
    • Consolidate benchmark code to benchmarks/.
    • Convert missed virtual -> override in db_test.cc.
    • Merge pull request #679 from smartxworks:optimize-readseq
    • Merge pull request #278 from wankai:master
    • don't check current key in DBIter::Next()
    • Add O_CLOEXEC to open calls.
    • broken db: fix assertion in leveldb::InternalKey::Encode, mark base as corrupt
    • set const property
    • reduce lock's range in DeleteObsoleteFiles
    • block_builder header file dependency fixed
    Source code(tar.gz)
    Source code(zip)
  • 1.22(May 3, 2019)

    • Corrected formatting to be compliant with the Google C++ Style Guide.
    • Specifically export the WriteBatch::Handler inner class for Windows link.
    • Merge pull request #665 from cheng-chang:coding.
    • Merge pull request #669 from pavel-pimenov:fix-readme-windows-mkdir.
    • Merge pull request #472 from zhoudayang:patch-1.
    • Merge pull request #339 from richcole-at-amazon:master.
    • Restore soname versioning with CMake build.
    • Other miscellaneous cleanups, fixes, and improvements.
    Source code(tar.gz)
    Source code(zip)
  • 1.21(Mar 29, 2019)

    • Switched to using Copybara for project synchronization.
    • Minor cleanup in ports.
    • Silence unused argument warnings in MSVC.
    • Add tests for empty keys and values.
    • Switch corruption_test to use InMemEnv.
    • Replace AtomicPointer with std::atomic.
    • Make InMemoryEnv more consistent with filesystem based Env's.
    • Align windows_logger with posix_logger.
    • Improve CI configuration and added AppVeyor (Windows CI) badge to README.
    • Added native support for Windows.
    • Make WriteBatch::ApproximateSize() const.
    • Fix PosixWritableFile::Sync() on Apple systems.
    • Fix fdatasync() feature detection in opensource build.
    • C++11 cleanup for util/mutexlock.h.
    • Rework threading in env_posix.cc.
    • Remove InitOnce from the port API.
    • Expose WriteBatch::Append().
    • Fix documentation for log file growth.
    • Add move constructor to Status.
    • Replace port_posix with port_stdcxx.
    • Reimplement ConsumeDecimalNumber.
    • Document the building process.
    • Replace NULL with nullptr in C++ files.
    • Remove PLATFORM_IS_LITTLE_ENDIAN from port/posix.h.
    • Add more thread safety annotations.
    • Require C++11.
    • Replace SIZE_MAX with std::numeric_limits.
    • Add CMake build support.
    • Enable thread safety annotations.
    • leveldb::DestroyDB will now delete empty directories.
    • Replace SSE-optimized CRC32C in POSIX port with external library.
    • Fix file writing bug in CL 170738066.
    • Fix use of uninitialized value in LRUHandle.
    • Fix issue #474: a race between the f*_unlocked() STDIO calls in env_posix.cc and concurrent application calls to fflush(NULL).
    • Use __APPLE__ instead of OS_MACOS. The former is compiler-provided.
    • Report missing CURRENT manifest file as database corruption.
    • LevelDB: Add WriteBatch::ApproximateSize().
    • Other minor fixes, code cleanup, and documentation improvements.
    Source code(tar.gz)
    Source code(zip)
  • v1.20(Mar 2, 2017)

    • Convert documentation to markdown.
    • Implement support for Intel crc32 instruction (SSE 4.2). Based on https://github.com/google/leveldb/pull/309.
    • Limit the number of read-only files the POSIX Env will have open.
    • Add option for maximum file size.
  • v1.19(Aug 11, 2016)

    • A Snappy change broke test assumptions about the size of compressed output; fixed.
    • Fix problems in LevelDB's caching code.
    • Fix LevelDB build when asserts are enabled in release builds. (#367).
    • Change std::uint64_t to uint64_t (#354).
    • Fixes a bug encountered when reading records from leveldb files that have been split, as in a [] input task split.
    • Deleted redundant null ptr check prior to delete. (#338).
    • Fix signed/unsigned mismatch on VC++ builds.
    • Put build artifacts in a subdirectory.
    • Added continuous build integration via Travis CI.
    • Log compaction output file's level along with its number.
    • Misc. improvements to README file.
    • Fix Android/MIPS build (#115).
    • Only compile TrimSpace on Linux (#310).
    • Use xcrun to determine Xcode.app path instead of using a hardcoded path.
    • Add "approximate-memory-usage" property to leveldb::DB::GetProperty.
    • Add leveldb::Cache::Prune.
    • Fix size_t/int comparison/conversion issues.
    • Added leveldb::Status::IsInvalidArgument() method.
    • Suppress error reporting after seeking but before a valid First or Full record is encountered.
    • #include -> (#280).
    • Now attempts to reuse the preceding MANIFEST and log file when re-opened.
    • Add benchmark that measures cost of repeatedly opening the database.
    • Added a new fault injection test.
    • Add arm64 support to leveldb.
  • v1.18(Sep 16, 2014)

    • Update version number to 1.18
    • Replace the basic fprintf call with a call to fwrite in order to work around the apparent compiler optimization/rewrite failure that we are seeing with the new toolchain/iOS SDKs provided with Xcode6 and iOS8.
    • Fix ALL the header guards.
    • Created a README.md with the LevelDB project description.
    • Added a new CONTRIBUTING file.
    • Don't implicitly convert uint64_t to size_t or int. Either preserve it as uint64_t, or explicitly cast. This fixes MSVC warnings about possible value truncation when compiling this code in Chromium.
    • Added a DumpFile() library function that encapsulates the guts of the "leveldbutil dump" command. This will allow clients to dump data to their log files instead of stdout. It will also allow clients to supply their own environment.
    • leveldb: Remove unused function 'ConsumeChar'.
    • leveldbutil: Remove unused member variables from WriteBatchItemPrinter.
    • OpenBSD, NetBSD and DragonflyBSD have _LITTLE_ENDIAN, so define PLATFORM_IS_LITTLE_ENDIAN like on FreeBSD. This fixes:
      • issue #143
      • issue #198
      • issue #249
    • Switch from <cstdatomic> to <atomic>. The former never made it into the standard and doesn't exist in modern gcc versions at all. The latter contains everything that leveldb was using from the former. This problem was noticed when porting to Portable Native Client, where no memory barrier is defined. The fact that <cstdatomic> is missing normally goes unnoticed since memory barriers are defined for most architectures.
    • Make Hash() treat its input as unsigned. Before this change LevelDB files from platforms with different signedness of char were not compatible. This change fixes: issue #243
    • Verify checksums of index/meta/filter blocks when paranoid_checks set.
    • Invoke all tools for iOS with xcrun. (This was causing problems with the new Xcode 5.1.1 image on pulse.)
    • include <sys/stat.h> only once, and fix the following linter warning: "Found C system header after C++ system header"
    • When encountering a corrupted table file, return Status::Corruption instead of Status::InvalidArgument.
    • Support cygwin as build platform, patch is from https://code.google.com/p/leveldb/issues/detail?id=188
    • Fix typo, merge patch from https://code.google.com/p/leveldb/issues/detail?id=159
    • Fix typos and comments, and address the following two issues:
      • issue #166
      • issue #241
    • Add missing db synchronize after "fillseq" in the benchmark.
    • Removed unused variable in SeekRandom: value (issue #201)
  • v1.17(Sep 15, 2014)

    1. Cleanup: delete unused IntSetToString

      It was added in http://cr/19491949 (and was referenced at the time). The last reference was removed in http://cr/19507363.

      This fixes warning/error with pre-release crosstoolv18:

      'std::string leveldb::{anonymous}::IntSetToString(const std::set<long unsigned int>&)' defined but not used [-Werror=unused-function]
      
    2. Added arm64 and armv7s to the iOS build, as suggested on the leveldb mailing list.

    3. Changed local variable type from int to size_t

      This eliminates compiler warning/error and resolves issue #146

  • v1.16(Sep 15, 2014)

    • Make Log::Reader not report a corruption when the last record in a log file is truncated.
    • Fix issue #230: variable created but not utilized.
    • Remove comment that referenced a removed feature.
  • v1.15(Sep 15, 2014)

    • Switched from mmap-based writing to simpler stdio-based writing. Has a minor impact (0.5 microseconds) on microbenchmarks for asynchronous writes. Synchronous writes speed up from 30ms to 10ms on linux/ext4. Should be much more reliable on diverse platforms.
    • Compaction errors now immediately put the database into a read-only mode (until it is re-opened). As a downside, a disk running out of space and then space being freed will require a re-open to recover, whereas previously that happened automatically. On the plus side, many corruption possibilities go away.
    • When a synchronous log write succeeds but the sync fails, force the DB into an error state so that all future writes fail.
    • Repair now regenerates sstables that exhibit problems.
    • Fix issue #224: use native memory barriers on OS X.
    • Fix issue #218: QNX build is broken.
    • Fix build on iOS with Xcode 5.
    • Make tests compile and pass on Windows.
  • v1.14(Sep 15, 2014)

    Fix issues #206, #207

    Also,

    • Fix link to bigtable paper in docs.
    • New sstables will have the file extension .ldb. .sst files will continue to be recognized.
    • When building for iOS, use xcrun to execute the compiler. This may affect issue #183.
  • v1.13(Sep 15, 2014)

    Fix issues #83, #93, #188, #196.

    Additionally, fix the bug described in https://groups.google.com/d/msg/leveldb/yL6h1mAOc20/vLU64RylIdMJ where a large contiguous keyspace of deleted data was not getting compacted.

    Also fix a bug where options.max_open_files was not getting clamped properly.

  • v1.12(Sep 15, 2014)

  • v1.11(Sep 15, 2014)

  • v1.10(Sep 15, 2014)

    Fixes issues:

    • #153 - thanks feniksgordonfreeman
    • #159
    • #162
    • #172

    Additionally,

    • Remove calls to exit(1).
    • Fix unused-variable warnings from clang.
    • Fix possible overflow error related to num_restart value >= (2^32/4).
    • Add leveldbutil to .gitignore.
    • Add better log messages when Write is stalled on a compaction.
  • v1.9(Sep 15, 2014)

  • v1.8(Sep 15, 2014)

  • v1.7(Sep 15, 2014)

    Details:

    • Fix shared library building.
    • Reorganize linking commands so flags like --as-needed can be passed.
    • C binding exports version numbers.
    • Fix small typos in documentation.
  • v1.6(Sep 15, 2014)

    Highlights

    • Mmap at most 1000 files on POSIX to improve performance for large databases.
    • Support for more architectures (thanks to Alexander K.)

    Building and porting

    • HP/UX support (issue #132)
    • AtomicPointer for ia64 (issue #129)
    • Sparc v9 support (issue #130)
    • Atomic ops for powerpc
    • Use -fno-builtin-memcmp only when using g++
    • Simplify IOS build rules (issue #120)
    • Use CXXFLAGS instead of CFLAGS when invoking C++ compiler (issue #124)
    • Fix snappy shared library problem (issue #100)
    • Fix shared library installation path regression
    • Endian-ness detection tweak for FreeBSD

    Bug fixes

    • Stop ignoring FLAGS_open_files in db_bench
    • Make bloom test behavior agnostic to endian-ness

    Performance

    • Limit number of mmapped files to 1000 to improve perf for large dbs
    • Do not delay for 1 second on shutdown path (issue #131)

    Misc

    • Make InMemoryEnv return a no-op logger
    • C binding now has a wrapper for free (issue #123)
    • Add thread-safety annotations
    • Added an in-process lock table (issue #126)
    • Make RandomAccessFile and SequentialFile non-copyable
  • v1.5(Sep 15, 2014)

    1. Remove obsolete Android port files.
    2. Remove static initializer.
    3. Fix endian-ness detection.
    4. Fix build on various platforms.
    5. Improve Android port speed.
  • v1.4(Sep 15, 2014)

    In particular, we add a new FilterPolicy class. An instance of this class can be supplied in Options when opening a database. If supplied, the instance is used to generate summaries of keys (e.g., a bloom filter) which are placed in sstables. These summaries are consulted by DB::Get() so we can avoid reading sstable blocks that are guaranteed to not contain the key we are looking for.

    This change provides one implementation of FilterPolicy based on bloom filters.

    Other changes:

    • Updated version number to 1.4.
    • Some build tweaks.
    • C binding for CompactRange.
    • A few more benchmarks: deleteseq, deleterandom, readmissing, seekrandom.
    • Minor .gitignore update.
  • v1.3(Sep 15, 2014)
