Distributed transactional key-value database, originally created to complement TiDB

Overview


Website | Documentation | Community Chat


TiKV is an open-source, distributed, transactional key-value database. Unlike traditional NoSQL systems, TiKV provides not only classical key-value APIs but also transactional APIs with ACID compliance. Built in Rust and powered by Raft, TiKV was originally created to complement TiDB, a distributed HTAP database compatible with the MySQL protocol.

The design of TiKV ('Ti' stands for titanium) is inspired by some great distributed systems from Google, such as BigTable, Spanner, and Percolator, and some of the latest achievements in academia in recent years, such as the Raft consensus algorithm.

If you're interested in contributing to TiKV, or want to build it from source, see CONTRIBUTING.md.


TiKV is a graduated project of the Cloud Native Computing Foundation (CNCF). If you are an organization that wants to help shape the evolution of technologies that are container-packaged, dynamically-scheduled and microservices-oriented, consider joining the CNCF. For details about who's involved and how TiKV plays a role, read the CNCF announcement.


With the implementation of the Raft consensus algorithm in Rust and consensus state stored in RocksDB, TiKV guarantees data consistency. Placement Driver (PD), which is introduced to implement auto-sharding, enables automatic data migration. The transaction model is similar to Google's Percolator with some performance improvements. TiKV also provides snapshot isolation (SI), snapshot isolation with lock (SQL: SELECT ... FOR UPDATE), and externally consistent reads and writes in distributed transactions.
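As a rough illustration of the Percolator-style commit protocol, the sketch below models prewrite, commit, and snapshot reads over an in-memory map. This is a simplified toy, not TiKV's actual code: real TiKV persists these columns in RocksDB, replicates them through Raft, and obtains timestamps from PD; all names and types here are invented for illustration.

```rust
use std::collections::HashMap;

// Toy in-memory model of Percolator-style two-phase commit.
// The data/lock/write columns follow the paper's terminology,
// but everything here is simplified for illustration.
#[derive(Default)]
pub struct Store {
    data: HashMap<(String, u64), String>, // (key, start_ts) -> value
    lock: HashMap<String, (String, u64)>, // key -> (primary key, start_ts)
    write: HashMap<(String, u64), u64>,   // (key, commit_ts) -> start_ts
}

impl Store {
    // Phase 1: lock every key of the transaction, pointing at the primary.
    pub fn prewrite(
        &mut self,
        primary: &str,
        start_ts: u64,
        mutations: &[(String, String)],
    ) -> bool {
        for (k, _) in mutations {
            if self.lock.contains_key(k) {
                return false; // write conflict: another txn holds the lock
            }
        }
        for (k, v) in mutations {
            self.data.insert((k.clone(), start_ts), v.clone());
            self.lock.insert(k.clone(), (primary.to_string(), start_ts));
        }
        true
    }

    // Phase 2: replace each lock with a write record at commit_ts.
    pub fn commit(&mut self, start_ts: u64, commit_ts: u64, keys: &[String]) {
        for k in keys {
            let owned = matches!(self.lock.get(k), Some((_, ts)) if *ts == start_ts);
            if owned {
                self.lock.remove(k);
                self.write.insert((k.clone(), commit_ts), start_ts);
            }
        }
    }

    // Snapshot read: the latest version committed at or before read_ts.
    pub fn get(&self, key: &str, read_ts: u64) -> Option<&String> {
        let start_ts = (0..=read_ts)
            .rev()
            .find_map(|ts| self.write.get(&(key.to_string(), ts)))?;
        self.data.get(&(key.to_string(), *start_ts))
    }
}
```

A transaction first prewrites all of its keys (locking them against the primary), then commits them at a later timestamp; a reader at `read_ts` only sees versions committed at or before `read_ts`, which is the essence of snapshot isolation.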

TiKV has the following key features:

  • Geo-Replication

    TiKV uses Raft and the Placement Driver to support Geo-Replication.

  • Horizontal scalability

    With PD and carefully designed Raft groups, TiKV excels in horizontal scalability and can easily scale to 100+ TBs of data.

  • Consistent distributed transactions

    Similar to Google's Spanner, TiKV supports externally-consistent distributed transactions.

  • Coprocessor support

    Similar to HBase, TiKV implements a coprocessor framework to support distributed computing.

  • Cooperates with TiDB

    Thanks to internal optimizations, TiKV and TiDB work together as a compelling database solution with high horizontal scalability, externally consistent transactions, and support for both RDBMS and NoSQL design patterns.

Governance

See Governance.

Documentation

For instructions on deployment, configuration, and maintenance of TiKV, see the TiKV documentation on our website. For more details on the concepts and designs behind TiKV, see Deep Dive TiKV.

Note:

We have migrated our documentation from TiKV's wiki page to the official website. The original wiki page is discontinued. If you have any suggestions or issues regarding the documentation, offer your feedback here.

TiKV adopters

You can view the list of TiKV Adopters.

TiKV roadmap

You can see the TiKV Roadmap.

TiKV software stack

The TiKV software stack

  • Placement Driver: PD is the cluster manager of TiKV, which periodically checks replication constraints to balance load and data automatically.
  • Store: Each Store contains a RocksDB instance and stores data on the local disk.
  • Region: Region is the basic unit of Key-Value data movement. Each Region is replicated to multiple Nodes. These multiple replicas form a Raft group.
  • Node: A physical node in the cluster. Within each node, there are one or more Stores. Within each Store, there are many Regions.

When a node starts, the metadata of the Node, Store, and Regions is recorded in PD. The status of each Region and Store is reported to PD regularly.
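To illustrate how Regions partition the key space, here is a minimal, hypothetical sketch of locating the Region that owns a key. The types are invented for illustration; real TiKV clients cache a routing table fetched from PD rather than scanning a list.

```rust
// A Region owns a half-open key range [start, end).
// An empty `end` means "up to positive infinity".
#[derive(Debug)]
pub struct Region {
    pub id: u64,
    pub start: Vec<u8>, // inclusive
    pub end: Vec<u8>,   // exclusive; empty means +infinity
}

// Find the Region whose range contains `key` (byte-wise ordering).
pub fn locate<'a>(regions: &'a [Region], key: &[u8]) -> Option<&'a Region> {
    regions.iter().find(|r| {
        key >= r.start.as_slice() && (r.end.is_empty() || key < r.end.as_slice())
    })
}
```

When a Region grows too large it is split into two adjacent ranges, and PD may move replicas between Stores; the lookup logic stays the same.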

Try TiKV

TiKV was originally a component of TiDB. To run TiKV you must build and run it with PD, which is used to manage a TiKV cluster. You can use TiKV together with TiDB or separately on its own.

We provide multiple deployment methods, but it is recommended to use our Ansible deployment for production environments. The TiKV documentation is available on TiKV's website.

Testing deployment

Production deployment

For the production environment, use TiDB Ansible to deploy the cluster.

Client drivers

Currently, the interfaces to TiKV are the TiDB Go client and the TiSpark Java client.

If you want to try the Go client, see Go Client.

Security

Security audit

A third-party security audit was performed by Cure53. See the full report here.

Reporting Security Vulnerabilities

To report a security vulnerability, please send an email to the TiKV-security group.

See Security for the process and policy followed by the TiKV project.

Communication

Communication within the TiKV community abides by the TiKV Code of Conduct. Here is an excerpt:

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

Social Media

Slack

Join the TiKV community on Slack - Sign up and join channels on TiKV topics that interest you.

WeChat

The TiKV community is also available on WeChat. If you want to join our WeChat group, send a request email to [email protected] with your personal information, including the following:

  • WeChat ID (Required)
  • A contribution you've made to TiKV, such as a PR (Required)
  • Other basic information

We will invite you right away.

License

TiKV is under the Apache 2.0 license. See the LICENSE file for details.

Acknowledgments

  • Thanks to etcd for providing some great open-source tools.
  • Thanks to RocksDB for their powerful storage engine.
  • Thanks to rust-clippy. We do love the great project.

Comments
  • [WIP] UCP: Add Tracing to Coprocessor

    [WIP] UCP: Add Tracing to Coprocessor

    UCP #5714

    What have you changed?

    Add some OpenTracing info in coprocessor requests.

    What is the type of the changes?

    • New feature (a change which adds functionality)

    How is the PR tested?

    • Unit test

    Does this PR affect documentation (docs) or should it be mentioned in the release notes?

    Yes

    Does this PR affect tidb-ansible?

    No

    contribution 
    opened by Renkai 97
  • copr: chunk time use new 8 byte format

    copr: chunk time use new 8 byte format

    Signed-off-by: TennyZhuang [email protected]

    What have you changed?

    Use new 8 byte chunk time format.

    What is the type of the changes?

    • New feature (a change which adds functionality)
    • Improvement (a change which is an improvement to an existing feature)

    How is the PR tested?

    • Unit test

    Refer to a related PR or issue link (optional)

    Related to https://github.com/pingcap/tidb/pull/14278

    Benchmark result if necessary (optional)

    Any examples? (optional)

    contribution 
    opened by TennyZhuang 91
  • tidb_query: fix read empty value for the clustered PK column in the 2nd index with latin1_bin

    tidb_query: fix read empty value for the clustered PK column in the 2nd index with latin1_bin

    What problem does this PR solve?

    Issue Number: close https://github.com/pingcap/tidb/issues/24548

    Problem Summary:

    Latin1Bin was not judged as a Bin collation.

    What is changed and how it works?

    What's Changed:

    Latin1Bin should be judged as a Bin collation.

    Related changes

    • Need to cherry-pick to the release branch 5.0

    Check List

    Tests

    • Unit test

    Side effects

    • n/a

    Release note

    • Fix read empty value for the clustered primary key column in the secondary index when collation is latin1_bin.
    sig/coprocessor contribution status/LGT2 status/can-merge size/M needs-cherry-pick-release-5.0 
    opened by lysu 89
  • PCP-14: localize some metrics in servers

    PCP-14: localize some metrics in servers

    What have you changed?

    Use a new macro make_auto_flush_static_metric! to make localized, auto-flush able metrics for CounterVec.

    What is the type of the changes?

    • Improvement (a change which is an improvement to an existing feature)

    How is the PR tested?

    • Unit test

    Does this PR affect documentation (docs) or should it be mentioned in the release notes?

    No

    Does this PR affect tidb-ansible?

    No

    Refer to a related PR or issue link (optional)

    https://github.com/tikv/tikv/issues/5708

    Benchmark result if necessary (optional)

    Any examples? (optional)

    component/performance contribution status/can-merge 
    opened by Renkai 83
  • PCP-32: Optimize Chunk Encoder in components/tidb_query/src/codec/batch/lazy_column_vec.rs

    PCP-32: Optimize Chunk Encoder in components/tidb_query/src/codec/batch/lazy_column_vec.rs

    PCP #5729

    What have you changed?

    Implemented efficient codec for transforming row datum into Chunk format.

    What is the type of the changes?

    • Improvement (a change which is an improvement to an existing feature)

    How is the PR tested?

    • [x] Unit test

    Does this PR affect documentation (docs) or should it be mentioned in the release notes?

    No

    Does this PR affect tidb-ansible?

    No

    Refer to a related PR or issue link (optional)

    https://github.com/tikv/tikv/issues/5729

    Benchmark result if necessary (optional)

    Any examples? (optional)

    sig/coprocessor contribution status/can-merge 
    opened by Renkai 82
  • log: ensure panic output is flushed to the log

    log: ensure panic output is flushed to the log

    What problem does this PR solve?

    Problem Summary: The panic message may still be missing from the log. The previous fix (https://github.com/tikv/tikv/pull/9955) is not elegant; e.g., for debug builds, the wait duration is still too small.

    Fix #8998

    What is changed and how it works?

    What's Changed: This PR solves the issue at the root. The root cause is that the panic log has not yet been processed by the async log worker when _exit is called, so we use an async guard to wait for the async log worker to finish.
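    The guard approach can be illustrated with a minimal, self-contained sketch (invented names, not TiKV's actual slog setup): log lines are queued to an async worker thread, and finishing the guard closes the channel and joins the worker, so everything already queued, including a panic message, is drained before the process exits.

```rust
use std::sync::mpsc;
use std::thread;

// Toy async logger with a guard that flushes on shutdown.
// A real logger would write to a file; here the worker collects
// lines into a Vec so the behavior is observable.
pub struct AsyncLogGuard {
    tx: Option<mpsc::Sender<String>>,
    worker: Option<thread::JoinHandle<Vec<String>>>,
}

impl AsyncLogGuard {
    pub fn new() -> Self {
        let (tx, rx) = mpsc::channel::<String>();
        // The worker blocks on the channel until all senders are dropped.
        let worker = thread::spawn(move || rx.iter().collect::<Vec<String>>());
        AsyncLogGuard { tx: Some(tx), worker: Some(worker) }
    }

    pub fn log(&self, line: &str) {
        self.tx.as_ref().unwrap().send(line.to_string()).unwrap();
    }

    // Drop the sender so the channel closes, then join the worker:
    // every queued line (e.g. a panic message) is drained first.
    pub fn finish(mut self) -> Vec<String> {
        drop(self.tx.take());
        self.worker.take().unwrap().join().unwrap()
    }
}
```

    Without the join, calling _exit while lines are still queued would lose them, which is exactly the failure mode described above.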

    Check List

    Tests

    • Manual test (Use debug build)
    • Unit test

    Release note

    • Ensure panic output is flushed to the log
    status/LGT2 type/bugfix status/can-merge needs-cherry-pick-release-4.0 size/L needs-cherry-pick-release-5.0 
    opened by Connor1996 64
  • *: Support both Prost and rust-protobuf libraries

    *: Support both Prost and rust-protobuf libraries

    To use Prost, set the PROST env var, e.g.,: PROST=1 make dev. If using Cargo, use --no-default-features --features prost-codec.

    The most notable change is threading the prost-codec/protobuf-codec features through the Cargo.tomls of all crates. In addition, in order to make this work I had to move integration tests and benchmarks into their own crate (tests). This is because Cargo features do not interact perfectly with dev-dependencies.

    We're using a Git dep for Prost in order to get some optimisations which are on master, but not in the latest release. We can change to a crates.io dep when there is another release.

    We must allow the identity_conversion lint because there are some conversions which are meaningful with rust-protobuf, but no-ops with Prost.

    The changes to src/coprocessor/endpoint.rs are because Prost does not permit setting a custom recursion limit. We only did this for tests previously. We now use the default recursion limit all the time for both codecs; the test must be adjusted so that we hit the higher limit.

    PTAL @breeswish @BusyJay @overvenus

    What have you changed?

    Add a feature flag for Prost which builds TiKV and its deps using Prost rather than rust-protobuf as the protobuf codec.

    What is the type of the changes?

    • Engineering (engineering change which doesn't change any feature or fix any issue)

    How is the PR tested?

    make dev

    Does this PR affect documentation (docs) or should it be mentioned in the release notes?

    Should have dev docs (to come) and mentioned in release docs.

    Does this PR affect tidb-ansible?

    No

    Refer to a related PR or issue link (optional)

    https://github.com/tikv/tikv/issues/2452

    component/gRPC status/can-merge 
    opened by nrc 62
  • coprocessor/expression: push down scalar functions

    coprocessor/expression: push down scalar functions

    Update

    The content in this issue is outdated. Please refer to https://github.com/tikv/tikv/issues/5751 for the latest list.

    Click to expand the original content

    Feature Request

    In the coprocessor, some functions for read operations have been pushed down to TiKV so that the computation is distributed across regions. For example, when TiDB receives a SQL query like

    select sum(col1+col2) from table1
    

    TiDB would push down the computation sum(col1+col2) to each region (TiKV) of the table.

    There are still many functions that need to be pushed down to TiKV. Some have already been implemented, while the rest may need your help.
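    The fan-out and merge behind such a push-down can be sketched as follows. This is a toy model with invented function names; the real coprocessor evaluates typed expression trees over encoded rows, but the shape is the same: each region computes a partial aggregate, and the coordinator merges the partials.

```rust
// Each region computes sum(col1 + col2) over only its own rows.
pub fn region_partial_sum(rows: &[(i64, i64)]) -> i64 {
    rows.iter().map(|(c1, c2)| c1 + c2).sum()
}

// The coordinator (TiDB, in this sketch) merges the partial sums.
pub fn merge_partials(partials: &[i64]) -> i64 {
    partials.iter().sum()
}
```

    Because addition is associative, merging partial sums gives the same result as summing all rows centrally, while the row scans run in parallel on the regions.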

    Here is the list of functions to be pushed down; you may pick one and make a pull request:

    Casting

    • [X] CastIntAsInt
    • [X] CastIntAsReal
    • [X] CastIntAsString
    • [X] CastIntAsDecimal
    • [X] CastIntAsTime
    • [X] CastIntAsDuration
    • [X] CastIntAsJson
    • [X] CastRealAsInt
    • [X] CastRealAsReal
    • [X] CastRealAsString
    • [X] CastRealAsDecimal
    • [X] CastRealAsTime
    • [X] CastRealAsDuration
    • [X] CastRealAsJson
    • [X] CastDecimalAsInt
    • [X] CastDecimalAsReal
    • [X] CastDecimalAsString
    • [X] CastDecimalAsDecimal
    • [X] CastDecimalAsTime
    • [X] CastDecimalAsDuration
    • [X] CastDecimalAsJson
    • [X] CastStringAsInt
    • [X] CastStringAsReal
    • [X] CastStringAsString
    • [X] CastStringAsDecimal
    • [X] CastStringAsTime
    • [X] CastStringAsDuration
    • [X] CastStringAsJson
    • [X] CastTimeAsInt
    • [X] CastTimeAsReal
    • [X] CastTimeAsString
    • [X] CastTimeAsDecimal
    • [X] CastTimeAsTime
    • [X] CastTimeAsDuration
    • [X] CastTimeAsJson
    • [X] CastDurationAsInt
    • [X] CastDurationAsReal
    • [X] CastDurationAsString
    • [X] CastDurationAsDecimal
    • [X] CastDurationAsTime
    • [X] CastDurationAsDuration
    • [X] CastDurationAsJson
    • [X] CastJsonAsInt
    • [X] CastJsonAsReal
    • [X] CastJsonAsString
    • [X] CastJsonAsDecimal
    • [X] CastJsonAsTime
    • [X] CastJsonAsDuration
    • [X] CastJsonAsJson

    Compare

    • [X] CoalesceInt
    • [X] CoalesceReal
    • [X] CoalesceDecimal
    • [X] CoalesceString
    • [X] CoalesceTime
    • [X] CoalesceDuration
    • [X] CoalesceJson
    • [X] LTInt
    • [X] LTReal
    • [X] LTDecimal
    • [X] LTString
    • [X] LTTime
    • [X] LTDuration
    • [X] LTJson
    • [X] LEInt
    • [X] LEReal
    • [X] LEDecimal
    • [X] LEString
    • [X] LETime
    • [X] LEDuration
    • [X] LEJson
    • [X] GTInt
    • [X] GTReal
    • [X] GTDecimal
    • [X] GTString
    • [X] GTTime
    • [X] GTDuration
    • [X] GTJson
    • [X] GreatestInt @bb7133 #3113
    • [X] GreatestReal @bb7133 #3113
    • [X] GreatestDecimal @bb7133 #3113
    • [X] GreatestString @bb7133 #3113
    • [X] GreatestTime @bb7133 #3113
    • [X] LeastInt @bb7133 #3113
    • [X] LeastReal @bb7133 #3113
    • [X] LeastDecimal @bb7133 #3113
    • [X] LeastString @bb7133 #3113
    • [X] LeastTime @bb7133 #3113
    • [X] IntervalInt @bb7133 #3330
    • [X] IntervalReal @bb7133 #3330
    • [X] GEInt
    • [X] GEReal
    • [X] GEDecimal
    • [X] GEString
    • [X] GETime
    • [X] GEDuration
    • [X] GEJson
    • [X] EQInt
    • [X] EQReal
    • [X] EQDecimal
    • [X] EQString
    • [X] EQTime
    • [X] EQDuration
    • [X] EQJson
    • [X] NEInt
    • [X] NEReal
    • [X] NEDecimal
    • [X] NEString
    • [X] NETime
    • [X] NEDuration
    • [X] NEJson
    • [X] NullEQInt
    • [X] NullEQReal
    • [X] NullEQDecimal
    • [X] NullEQString
    • [X] NullEQTime
    • [X] NullEQDuration
    • [X] NullEQJson

    Arithmetic

    • [X] PlusReal
    • [X] PlusDecimal
    • [X] PlusInt
    • [X] MinusReal
    • [X] MinusDecimal
    • [X] MinusInt
    • [X] MultiplyReal
    • [X] MultiplyDecimal
    • [X] MultiplyInt
    • [X] MultiplyIntUnsigned
    • [X] DivideReal
    • [X] DivideDecimal
    • [X] IntDivideInt @bb7133 #3030
    • [X] IntDivideDecimal @bb7133 #3030
    • [X] ModReal @bb7133 #3030
    • [X] ModDecimal @bb7133 #3030
    • [X] ModInt @bb7133 #3030

    Math

    • [X] AbsInt
    • [X] AbsUInt
    • [X] AbsReal
    • [X] AbsDecimal
    • [X] CeilIntToDec
    • [X] CeilIntToInt
    • [X] CeilDecToInt
    • [X] CeilDecToDec
    • [X] CeilReal
    • [X] FloorIntToDec
    • [X] FloorIntToInt
    • [X] FloorDecToInt
    • [X] FloorDecToDec
    • [X] FloorReal
    • [X] RoundReal @colinback #3621
    • [X] RoundInt @intellild #3395
    • [X] RoundDec @colinback #3621
    • [X] RoundWithFracReal @colinback #3621
    • [X] RoundWithFracInt @colinback #3621
    • [X] RoundWithFracDec @colinback #3621
    • [X] Log1Arg @sllt #3603
    • [X] Log2Args @sllt #3603
    • [X] Log2 @sllt #3379
    • [X] Log10 @sllt #3379
    • [X] Rand @xiangyuf #3415
    • [X] RandWithSeed @xiangyuf #3415
    • [X] Pow @smallyard #3475
    • [X] Conv @niedhui #3691
    • [X] CRC32 @TennyZhuang #3374
    • [X] Sign @Observer42 #3518
    • [X] Sqrt @xiangyuf #3476
    • [X] Acos @malc0lm #3482
    • [X] Asin @malc0lm #3482
    • [X] Atan1Arg @Observer42 #3520
    • [X] Atan2Args @Observer42 #3520
    • [X] Cos @vkorenev #3410
    • [X] Cot @mtunique #3543
    • [X] Degrees @mtunique #3543
    • [X] Exp @niedhui #3686
    • [X] PI @sweetIan #3382
    • [X] Radians @niedhui #3683
    • [X] Sin @liufuyang #3406
    • [X] Tan @arosspope #3456
    • [X] TruncateInt @niedhui #3532
    • [X] TruncateReal @niedhui #3633
    • [X] TruncateDecimal @niedhui #3637

    Operator

    • [X] LogicalAnd
    • [X] LogicalOr
    • [X] LogicalXor
    • [X] UnaryNot
    • [X] UnaryMinusInt
    • [X] UnaryMinusReal
    • [X] UnaryMinusDecimal
    • [X] DecimalIsNull
    • [X] DurationIsNull
    • [X] RealIsNull
    • [X] StringIsNull
    • [X] TimeIsNull
    • [X] IntIsNull
    • [X] JsonIsNull
    • [X] BitAndSig
    • [X] BitOrSig
    • [X] BitXorSig
    • [X] BitNegSig
    • [X] IntIsTrue
    • [X] RealIsTrue
    • [X] DecimalIsTrue
    • [X] IntIsFalse
    • [X] RealIsFalse
    • [X] DecimalIsFalse
    • [X] LeftShift @spongedu #3391
    • [X] RightShift @spongedu #3391

    Control

    • [X] IfNullInt
    • [X] IfNullReal
    • [X] IfNullDecimal
    • [X] IfNullString
    • [X] IfNullTime
    • [X] IfNullDuration
    • [X] IfInt
    • [X] IfReal
    • [X] IfDecimal
    • [X] IfString
    • [X] IfTime
    • [X] IfDuration
    • [X] IfNullJson
    • [X] IfJson
    • [X] CaseWhenInt
    • [X] CaseWhenReal
    • [X] CaseWhenDecimal
    • [X] CaseWhenString
    • [X] CaseWhenTime
    • [X] CaseWhenDuration
    • [X] CaseWhenJson

    Encryption

    • [ ] AesDecrypt
    • [ ] AesEncrypt
    • [X] Compress @niedhui #3856
    • [X] MD5 @Hijiao #3554
    • [ ] Password
    • [ ] RandomBytes
    • [X] SHA1 @haoxiang47 #3612
    • [X] SHA2 @spongedu #3649
    • [X] Uncompress @niedhui #3856
    • [X] UncompressedLength @niedhui #3856

    Info

    • [ ] ~Database~
    • [ ] ~FoundRows~
    • [ ] ~CurrentUser~
    • [ ] ~User~
    • [ ] ~ConnectionID~
    • [ ] ~LastInsertID~
    • [ ] ~LastInsertIDWithID~
    • [ ] ~Version~
    • [ ] ~TiDBVersion~
    • [ ] ~RowCount~

    Miscellaneous

    • [ ] Sleep
    • [ ] Lock
    • [ ] ReleaseLock
    • [ ] DecimalAnyValue
    • [ ] DurationAnyValue
    • [ ] IntAnyValue
    • [ ] JSONAnyValue
    • [ ] RealAnyValue
    • [ ] StringAnyValue
    • [ ] TimeAnyValue
    • [X] InetAton @rleungx #3659
    • [X] InetNtoa @rleungx #3659
    • [X] Inet6Aton @sweetIan #3480
    • [X] Inet6Ntoa @sweetIan #3519
    • [X] IsIPv4 @opensourcegeek #3460
    • [ ] IsIPv4Compat
    • [ ] IsIPv4Mapped
    • [X] IsIPv6 @opensourcegeek #3479
    • [ ] UUID

    Like

    • [X] LikeSig
    • [X] RegexpBinarySig @bb7133 #3196
    • [X] RegexpSig @bb7133 #3196

    JSON

    • [X] JsonExtractSig
    • [X] JsonUnquoteSig
    • [X] JsonTypeSig
    • [X] JsonSetSig
    • [X] JsonInsertSig
    • [X] JsonReplaceSig
    • [X] JsonRemoveSig
    • [X] JsonMergeSig
    • [X] JsonObjectSig
    • [X] JsonArraySig

    Time

    • [X] DateFormatSig
    • [ ] DateLiteral
    • [x] DateDiff @edwardpku #3937
    • [ ] NullTimeDiff
    • [ ] TimeStringTimeDiff
    • [ ] DurationStringTimeDiff
    • [ ] DurationDurationTimeDiff
    • [ ] StringTimeTimeDiff
    • [ ] StringDurationTimeDiff
    • [ ] StringStringTimeDiff
    • [ ] TimeTimeTimeDiff
    • [X] Date @hawkingrei #3428
    • [X] Hour @koushiro #3753
    • [X] Minute @koushiro #3753
    • [X] Second @koushiro #3753
    • [X] MicroSecond @koushiro #3753
    • [X] Month @chux0519 #3569
    • [X] MonthName @koushiro #3735
    • [ ] NowWithArg
    • [ ] NowWithoutArg
    • [X] DayName @koushiro #3774
    • [X] DayOfMonth @koushiro #3774
    • [X] DayOfWeek @koushiro #3774
    • [X] DayOfYear @koushiro #3774
    • [X] WeekWithMode @AbnerZheng #3857
    • [X] WeekWithoutMode @AbnerZheng #3861
    • [X] WeekDay @koushiro #3871
    • [X] WeekOfYear @koushiro #3871
    • [X] Year @Kingwl #3622
    • [x] YearWeekWithMode @AbnerDBFan #3876
    • [x] YearWeekWithoutMode @AbnerDBFan #3876
    • [ ] GetFormat
    • [ ] SysDateWithFsp
    • [ ] SysDateWithoutFsp
    • [ ] ~CurrentDate~
    • [ ] CurrentTime0Arg
    • [ ] CurrentTime1Arg
    • [ ] Time
    • [ ] TimeLiteral
    • [ ] UTCDate
    • [ ] UTCTimestampWithArg
    • [ ] UTCTimestampWithoutArg
    • [x] AddDatetimeAndDuration @koushiro #3899
    • [x] AddDatetimeAndString @koushiro #3899
    • [x] AddTimeDateTimeNull @koushiro #4063
    • [ ] AddStringAndDuration
    • [ ] AddStringAndString
    • [ ] AddTimeStringNull
    • [x] AddDurationAndDuration @GinYM #3984
    • [x] AddDurationAndString @DCjanus #4010
    • [x] AddTimeDurationNull @koushiro #4063
    • [ ] AddDateAndDuration
    • [ ] AddDateAndString
    • [ ] SubDatetimeAndDuration
    • [ ] SubDatetimeAndString
    • [ ] SubTimeDateTimeNull
    • [ ] SubStringAndDuration
    • [ ] SubStringAndString
    • [ ] SubTimeStringNull
    • [ ] SubDurationAndDuration
    • [ ] SubDurationAndString
    • [ ] SubTimeDurationNull
    • [ ] SubDateAndDuration
    • [ ] SubDateAndString
    • [ ] UnixTimestampCurrent
    • [ ] UnixTimestampInt
    • [ ] UnixTimestampDec
    • [ ] ConvertTz
    • [ ] MakeDate
    • [ ] MakeTime
    • [ ] PeriodAdd
    • [ ] PeriodDiff
    • [ ] Quarter
    • [ ] SecToTime
    • [ ] TimeToSec
    • [ ] TimestampAdd
    • [x] ToDays @GinYM #3978
    • [ ] ToSeconds
    • [ ] UTCTimeWithArg
    • [ ] UTCTimeWithoutArg
    • [ ] Timestamp1Arg
    • [ ] Timestamp2Args
    • [ ] TimestampLiteral
    • [X] LastDay @WPH95 #3556
    • [ ] StrToDateDate
    • [ ] StrToDateDatetime
    • [ ] StrToDateDuration
    • [ ] FromUnixTime1Arg
    • [ ] FromUnixTime2Arg
    • [ ] ExtractDatetime
    • [ ] ExtractDuration
    • [ ] AddDateStringString
    • [ ] AddDateStringInt
    • [ ] AddDateStringDecimal
    • [ ] AddDateIntString
    • [ ] AddDateIntInt
    • [ ] AddDateDatetimeString
    • [ ] AddDateDatetimeInt
    • [ ] SubDateStringString
    • [ ] SubDateStringInt
    • [ ] SubDateStringDecimal
    • [ ] SubDateIntString
    • [ ] SubDateIntInt
    • [ ] SubDateDatetimeString
    • [ ] SubDateDatetimeInt
    • [ ] FromDays
    • [ ] TimeFormat
    • [ ] TimestampDiff

    String functions

    • [X] BitLength @spongedu #3376
    • [X] Bin @spongedu #3397
    • [X] ASCII @spongedu #3436
    • [ ] Char
    • [X] CharLength @spongedu #3461
    • [X] Concat @crazycs520 #3654
    • [x] ConcatWS @kg88 #3818
    • [ ] Convert
    • [X] Elt @spongedu #3555
    • [ ] ExportSet3Arg
    • [ ] ExportSet4Arg
    • [ ] ExportSet5Arg
    • [x] FieldInt @manifoldQAQ #4007
    • [x] FieldReal @manifoldQAQ #4007
    • [x] FieldString @manifoldQAQ #4007
    • [ ] FindInSet
    • [ ] Format
    • [ ] FormatWithLocale
    • [X] FromBase64 @niedhui #3716
    • [X] HexIntArg @sweetIan #3478
    • [X] HexStrArg @sweetIan #3478
    • [ ] Insert
    • [ ] InsertBinary
    • [ ] Instr
    • [ ] InstrBinary
    • [x] LTrim @spongedu #3400
    • [X] Left @spongedu #3413
    • [x] LeftBinary
    • [X] Length @spongedu #3376
    • [x] Locate2Args @gaodayue #4016
    • [x] Locate3Args @gaodayue #4016
    • [x] LocateBinary2Args @gaodayue #4016
    • [x] LocateBinary3Args @gaodayue #4016
    • [X] Lower @spongedu #3433
    • [x] Lpad @niedhui #3943
    • [x] LpadBinary @niedhui #3943
    • [ ] MakeSet
    • [ ] OctInt @yjhmelody #3605
    • [ ] OctString
    • [ ] Ord
    • [ ] Quote
    • [X] RTrim @spongedu #3400
    • [ ] Repeat
    • [x] Replace @lerencao #4360
    • [X] Reverse @spongedu #3435
    • [X] ReverseBinary @spongedu #3435
    • [X] Right @rleungx #3653
    • [x] RightBinary @niedhui #3982
    • [x] Rpad @niedhui #3914
    • [x] RpadBinary @niedhui #3914
    • [X] Space @niedhui #3841
    • [X] Strcmp @niedhui #3879
    • [X] Substring2Args @niedhui #3472
    • [X] Substring3Args @niedhui #3472
    • [X] SubstringBinary2Args @niedhui #3813
    • [X] SubstringBinary3Args @niedhui #3813
    • [X] SubstringIndex @niedhui #3717
    • [X] ToBase64 @niedhui #3716
    • [X] Trim1Arg @niedhui #3698
    • [X] Trim2Args @niedhui #3698
    • [X] Trim3Args @niedhui #3698
    • [X] UnHex @sweetIan #3469
    • [X] Upper @spongedu #3433

    Other

    • [X] BitCount @spongedu #3394
    • [ ] GetParamString
    • [ ] GetVar
    • [ ] RowSig
    • [ ] SetVar
    • [ ] ValuesDecimal
    • [ ] ValuesDuration
    • [ ] ValuesInt
    • [ ] ValuesJSON
    • [ ] ValuesReal
    • [ ] ValuesString
    • [ ] ValuesTime
    • [X] InInt @winoros #2411
    • [X] InReal @winoros #2411
    • [X] InDecimal @winoros #2411
    • [X] InString @winoros #2411
    • [X] InTime @winoros #2411
    • [X] InDuration @winoros #2411
    • [X] InJson @winoros #2411
    help wanted sig/coprocessor difficulty/easy 
    opened by AndreMouche 60
  • copr: vectorize `Upper`

    copr: vectorize `Upper`

    PCP #5751

    Signed-off-by: wangwangwar [email protected]

    What have you changed?

    Vectorize Upper

    What is the type of the changes?

    Improvement

    How is the PR tested?

    • Unit test
    • Integration test

    Does this PR affect documentation (docs) or should it be mentioned in the release notes?

    No. It should not change the behavior.

    Does this PR affect tidb-ansible?

    No

    Refer to a related PR or issue link (optional)

    https://github.com/tikv/copr-test/pull/102

    Benchmark result if necessary (optional)

    Any examples? (optional)

    sig/coprocessor contribution status/LGT1 status/can-merge 
    opened by wangwangwar 58
  • add trace info to coprocessor call

    add trace info to coprocessor call

    UCP #5714

    What have you changed?

    Add some minitrace info in coprocessor requests.

    What is the type of the changes?

    • New feature (a change which adds functionality)

    How is the PR tested?

    • Unit test

    Does this PR affect documentation (docs) or should it be mentioned in the release notes?

    Yes

    Does this PR affect tidb-ansible?

    No

    contribution status/can-merge 
    opened by Renkai 57
  • cmd: refactor tikv server startup

    cmd: refactor tikv server startup

    What have you changed?

    Refactors server.rs in cmd. There are no functional changes here, just moving code around to make it clearer what is happening and how components are related. There should be no functional or performance impact.

    The main idea is to break up the huge function into many little ones. To do that I had to use a new object (TiKV) to track the various components.

    I think there are some improvements that could be made, by moving some parts of these tasks into the components themselves. There are a few very small changes like that, however, for this PR I tried to avoid it as far as possible and only change the one module.

    What is the type of the changes?

    • Engineering (engineering change which doesn't change any feature or fix any issue)

    PTAL @overvenus @BusyJay

    component/server status/can-merge 
    opened by nrc 55
  • *: introduce slog_panic and SlogFormat

    *: introduce slog_panic and SlogFormat

    What is changed and how it works?

    Issue Number: Ref #12842

    What's Changed:

    These two are helpers to utilize the static KV pairs in the logger. In the
    past, we used `logger.list()` to try to format the configured KV pairs,
    but it does not work as values are omitted.
    

    Check List

    Tests

    • Unit test

    Release note

    None
    
    size/XL release-note-none 
    opened by BusyJay 1
  • Logging from coprocessor is noisy

    Logging from coprocessor is noisy

    Bug Report

    What version of TiKV are you using?

    TiKV 
    Release Version:   6.5.0
    Edition:           Community
    Git Commit Hash:   47b81680f75adc4b7200480cea5dbe46ae07c4b5
    Git Commit Branch: heads/refs/tags/v6.5.0
    UTC Build Time:    2022-12-21 09:03:22
    Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
    Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
    Profile:           dist_release
    

    What operating system and CPU are you using?

    Fedora Linux 37, Linux 6.0.12, Intel Core i7

    Steps to reproduce

    Run TiUP Playground with v6.5.0 Run TiUP Bench TPCC (prepare and run)

    What did you expect?

    No noisy logging

    What happened?

    With log format set to JSON:

    $ jq 'select(.level == "WARN") .caller' .tiup/data/TRUCY5d/tikv-0/tikv.log | sort | uniq -c | sort -nr
        300 "endpoint.rs:780"
         58 "subscription_track.rs:159"
          3 "server.rs:1877"
          1 "store.rs:1722"
          1 "server.rs:997"
          1 "server.rs:545"
          1 "lib.rs:542"
          1 "client.rs:163"
    
    grep endpoint.rs:780 .tiup/data/TRUCY5d/tikv-0/tikv.log | tail -5 | jq .
    
    {
      "level": "WARN",
      "caller": "endpoint.rs:780",
      "message": "error-response",
      "time": "2022/12/30 09:40:53.496 +01:00",
      "err": "Key is locked (will clean up) primary_lock: 7480000000000000525F72038000000000000002038000000000000009 lock_version: 438406913302462506 key: 7480000000000000585F72038000000000000002038000000000000009038000000000000BEE lock_ttl: 20021 txn_size: 1 lock_for_update_ts: 438406913302462506 use_async_commit: true min_commit_ts: 438406913315569692"
    }
    {
      "level": "WARN",
      "caller": "endpoint.rs:780",
      "message": "error-response",
      "time": "2022/12/30 09:40:53.551 +01:00",
      "err": "Key is locked (will clean up) primary_lock: 7480000000000000585F7203800000000000000203800000000000000103800000000000086A lock_version: 438406913315569683 key: 74800000000000005A5F7203800000000000000203800000000000000A03800000000000086A lock_ttl: 20032 txn_size: 20 lock_for_update_ts: 438406913328676870 min_commit_ts: 438406913328676886"
    }
    {
      "level": "WARN",
      "caller": "endpoint.rs:780",
      "message": "error-response",
      "time": "2022/12/30 09:40:53.807 +01:00",
      "err": "Key is locked (will clean up) primary_lock: 7480000000000000525F72038000000000000001038000000000000005 lock_version: 438406913381367836 key: 7480000000000000585F72038000000000000001038000000000000005038000000000000BF5 lock_ttl: 20013 txn_size: 1 lock_for_update_ts: 438406913381367836 use_async_commit: true min_commit_ts: 438406913394475028"
    }
    {
      "level": "WARN",
      "caller": "endpoint.rs:780",
      "message": "error-response",
      "time": "2022/12/30 09:40:53.975 +01:00",
      "err": "Key is locked (will clean up) primary_lock: 7480000000000000585F72038000000000000008038000000000000001038000000000000862 lock_version: 438406913420689438 key: 7480000000000000585F72038000000000000008038000000000000001038000000000000862 lock_ttl: 20001 txn_size: 10 lock_type: Del lock_for_update_ts: 438406913433534489 min_commit_ts: 438406913433534503"
    }
    {
      "level": "WARN",
      "caller": "endpoint.rs:780",
      "message": "error-response",
      "time": "2022/12/30 09:40:54.169 +01:00",
      "err": "Key is locked (will clean up) primary_lock: 7480000000000000525F7203800000000000000A038000000000000001 lock_version: 438406913499070467 key: 7480000000000000585F7203800000000000000A038000000000000001038000000000000BF1 lock_ttl: 20001 txn_size: 1 lock_for_update_ts: 438406913499070467 use_async_commit: true min_commit_ts: 438406913499070478"
    }
    

    As these are warning-level messages, the only way to filter them out is to raise the log level to error, which might cause important messages to be missed. This doesn't seem to be an abnormal situation, and it isn't user-actionable.
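    Since raising the log level risks hiding real errors, a post-hoc workaround is to filter the known-noisy callers out when reading the log. A rough sketch (the caller names come from the counts above; the log path is illustrative):

    ```shell
    # Drop the two known-noisy WARN sources before reviewing the log.
    # Caller names are taken from the counts above; adjust the path to your deployment.
    grep -v -e 'endpoint.rs:780' -e 'subscription_track.rs:159' tikv.log
    ```

    This leaves the log file itself untouched, so nothing is lost if one of these callers later turns out to matter.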

    opened by dveeden 1
  • Logging from backup-stream is noisy

    Logging from backup-stream is noisy

    Bug Report

    What version of TiKV are you using?

    TiKV 
    Release Version:   6.5.0
    Edition:           Community
    Git Commit Hash:   47b81680f75adc4b7200480cea5dbe46ae07c4b5
    Git Commit Branch: heads/refs/tags/v6.5.0
    UTC Build Time:    2022-12-21 09:03:22
    Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
    Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
    Profile:           dist_release
    

    What operating system and CPU are you using?

    Fedora Linux 37, Linux 6.0.12, Intel Core i7

    Steps to reproduce

    Run a TiUP Playground. Then run tiup bench tpcc prepare.

    What did you expect?

    No noisy logging

    What happened?

    grep 'subscription_track.rs:159' .tiup/data/TRUCY5d/tikv-0/tikv.log
    
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:02.301 +01:00","new_region":"id: 2 start_key: 748000FFFFFFFFFFFFFE00000000000000F8 region_epoch { conf_ver: 1 version: 2 } peers { id: 3 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:02.311 +01:00","new_region":"id: 4 start_key: 748000FFFFFFFFFFFFFD00000000000000F8 end_key: 748000FFFFFFFFFFFFFE00000000000000F8 region_epoch { conf_ver: 1 version: 3 } peers { id: 5 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:02.322 +01:00","new_region":"id: 6 start_key: 748000FFFFFFFFFFFFFC00000000000000F8 end_key: 748000FFFFFFFFFFFFFD00000000000000F8 region_epoch { conf_ver: 1 version: 4 } peers { id: 7 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:02.344 +01:00","new_region":"id: 8 start_key: 748000FFFFFFFFFFFFFB00000000000000F8 end_key: 748000FFFFFFFFFFFFFC00000000000000F8 region_epoch { conf_ver: 1 version: 5 } peers { id: 9 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:03.766 +01:00","new_region":"id: 10 start_key: 7480000000000000FF0400000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 6 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:03.912 +01:00","new_region":"id: 10 start_key: 7480000000000000FF0600000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 7 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:04.064 +01:00","new_region":"id: 10 start_key: 7480000000000000FF0800000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 8 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:04.345 +01:00","new_region":"id: 10 start_key: 7480000000000000FF0A00000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 9 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:04.501 +01:00","new_region":"id: 10 start_key: 7480000000000000FF0C00000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 10 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:04.646 +01:00","new_region":"id: 10 start_key: 7480000000000000FF0E00000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 11 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:04.818 +01:00","new_region":"id: 10 start_key: 7480000000000000FF1000000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 12 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:05.021 +01:00","new_region":"id: 10 start_key: 7480000000000000FF1200000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 13 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:05.170 +01:00","new_region":"id: 10 start_key: 7480000000000000FF1400000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 14 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:05.315 +01:00","new_region":"id: 10 start_key: 7480000000000000FF1600000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 15 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:05.462 +01:00","new_region":"id: 10 start_key: 7480000000000000FF1800000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 16 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:05.611 +01:00","new_region":"id: 10 start_key: 7480000000000000FF1A00000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 17 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:05.753 +01:00","new_region":"id: 10 start_key: 7480000000000000FF1C00000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 18 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:05.896 +01:00","new_region":"id: 10 start_key: 7480000000000000FF1E00000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 19 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:06.066 +01:00","new_region":"id: 10 start_key: 7480000000000000FF2000000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 20 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:06.210 +01:00","new_region":"id: 10 start_key: 7480000000000000FF2200000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 21 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:06.351 +01:00","new_region":"id: 10 start_key: 7480000000000000FF2400000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 22 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:06.504 +01:00","new_region":"id: 10 start_key: 7480000000000000FF2600000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 23 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:06.661 +01:00","new_region":"id: 10 start_key: 7480000000000000FF2800000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 24 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:06.812 +01:00","new_region":"id: 10 start_key: 7480000000000000FF2A00000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 25 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:06.956 +01:00","new_region":"id: 10 start_key: 7480000000000000FF2C00000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 26 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:07.114 +01:00","new_region":"id: 10 start_key: 7480000000000000FF2E00000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 27 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:07.269 +01:00","new_region":"id: 10 start_key: 7480000000000000FF3000000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 28 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:07.411 +01:00","new_region":"id: 10 start_key: 7480000000000000FF3200000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 29 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:07.565 +01:00","new_region":"id: 10 start_key: 7480000000000000FF3400000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 30 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:07.731 +01:00","new_region":"id: 10 start_key: 7480000000000000FF3600000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 31 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:07.890 +01:00","new_region":"id: 10 start_key: 7480000000000000FF3800000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 32 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:08.044 +01:00","new_region":"id: 10 start_key: 7480000000000000FF3A00000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 33 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:08.192 +01:00","new_region":"id: 10 start_key: 7480000000000000FF3C00000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 34 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:08.341 +01:00","new_region":"id: 10 start_key: 7480000000000000FF3E00000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 35 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:08.482 +01:00","new_region":"id: 10 start_key: 7480000000000000FF4000000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 36 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:08.629 +01:00","new_region":"id: 10 start_key: 7480000000000000FF4200000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 37 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:08.806 +01:00","new_region":"id: 10 start_key: 7480000000000000FF4400000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 38 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:08.959 +01:00","new_region":"id: 10 start_key: 7480000000000000FF4600000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 39 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:09.105 +01:00","new_region":"id: 10 start_key: 7480000000000000FF4800000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 40 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:09.256 +01:00","new_region":"id: 10 start_key: 7480000000000000FF4A00000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 41 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:09.410 +01:00","new_region":"id: 10 start_key: 7480000000000000FF4C00000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 42 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:25:09.569 +01:00","new_region":"id: 10 start_key: 7480000000000000FF4E00000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 43 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:31:58.991 +01:00","new_region":"id: 10 start_key: 7480000000000000FF5000000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 44 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:31:59.173 +01:00","new_region":"id: 10 start_key: 7480000000000000FF5200000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 45 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:31:59.370 +01:00","new_region":"id: 10 start_key: 7480000000000000FF5400000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 46 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:31:59.528 +01:00","new_region":"id: 10 start_key: 7480000000000000FF5600000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 47 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:31:59.704 +01:00","new_region":"id: 10 start_key: 7480000000000000FF5800000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 48 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:31:59.858 +01:00","new_region":"id: 10 start_key: 7480000000000000FF5A00000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 49 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:32:00.012 +01:00","new_region":"id: 10 start_key: 7480000000000000FF5C00000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 50 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:32:00.176 +01:00","new_region":"id: 10 start_key: 7480000000000000FF5E00000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 51 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:32:00.376 +01:00","new_region":"id: 10 start_key: 7480000000000000FF6000000000000000F8 end_key: 748000FFFFFFFFFFFFFB00000000000000F8 region_epoch { conf_ver: 1 version: 52 } peers { id: 11 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:32:30.596 +01:00","new_region":"id: 104 start_key: 7480000000000000FF5E5F720380000000FF0000000303800000FF0000007EE5000000FC end_key: 7480000000000000FF6000000000000000F8 region_epoch { conf_ver: 1 version: 53 } peers { id: 105 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:32:50.635 +01:00","new_region":"id: 104 start_key: 7480000000000000FF5E5F720380000000FF0000000503800000FF000000FDC6000000FC end_key: 7480000000000000FF6000000000000000F8 region_epoch { conf_ver: 1 version: 54 } peers { id: 105 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:33:20.642 +01:00","new_region":"id: 104 start_key: 7480000000000000FF5E5F720380000000FF0000000703800000FF0000017CAC000000FC end_key: 7480000000000000FF6000000000000000F8 region_epoch { conf_ver: 1 version: 55 } peers { id: 105 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:34:48.657 +01:00","new_region":"id: 102 start_key: 7480000000000000FF5C5F720380000000FF0000000303800000FF0000000005038000FF0000000002600380FF0000000000000100FE end_key: 7480000000000000FF5E00000000000000F8 region_epoch { conf_ver: 1 version: 52 } peers { id: 103 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:35:48.129 +01:00","new_region":"id: 94 start_key: 7480000000000000FF545F720380000000FF0000000403800000FF0000000004038000FF0000000004050000FD end_key: 7480000000000000FF5600000000000000F8 region_epoch { conf_ver: 1 version: 48 } peers { id: 95 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:35:48.701 +01:00","new_region":"id: 102 start_key: 7480000000000000FF5C5F720380000000FF0000000503800000FF0000000009038000FF0000000005D60380FF0000000000000200FE end_key: 7480000000000000FF5E00000000000000F8 region_epoch { conf_ver: 1 version: 53 } peers { id: 103 store_id: 1 }"}
    {"level":"WARN","caller":"subscription_track.rs:159","message":"backup stream observer refreshing void subscription.","time":"2022/12/30 09:36:58.724 +01:00","new_region":"id: 102 start_key: 7480000000000000FF5C5F720380000000FF0000000803800000FF0000000003038000FF0000000009570380FF0000000000000B00FE end_key: 7480000000000000FF5E00000000000000F8 region_epoch { conf_ver: 1 version: 54 } peers { id: 103 store_id: 1 }"}
    

    This is with the log format set to JSON.

    For users of TiKV it isn't clear whether this indicates a problem or not. Note that these are warning-level messages.

    If these messages are normal and expected, their level should probably be INFO.

    Note that this is a TiUP playground without any backup or PITR going on.

    opened by dveeden 4
  • Logging from sst_importer is noisy

    Logging from sst_importer is noisy

    Bug Report

    What version of TiKV are you using?

    TiKV 
    Release Version:   6.5.0
    Edition:           Community
    Git Commit Hash:   47b81680f75adc4b7200480cea5dbe46ae07c4b5
    Git Commit Branch: heads/refs/tags/v6.5.0
    UTC Build Time:    2022-12-21 09:03:22
    Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
    Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
    Profile:           dist_release
    

    What operating system and CPU are you using?

    Fedora 37, Linux 6.0.12, Intel Core i7

    Steps to reproduce

    Run a TiDB Playground

    What did you expect?

    No noisy logging with the default log level

    What happened?

    [2022/12/30 09:12:51.255 +01:00] [INFO] [sst_importer.rs:442] ["shrink cache by tick"] ["retain size"=0] ["shrink size"=0]
    [2022/12/30 09:13:01.257 +01:00] [INFO] [sst_importer.rs:442] ["shrink cache by tick"] ["retain size"=0] ["shrink size"=0]
    [2022/12/30 09:13:11.258 +01:00] [INFO] [sst_importer.rs:442] ["shrink cache by tick"] ["retain size"=0] ["shrink size"=0]
    [2022/12/30 09:13:21.259 +01:00] [INFO] [sst_importer.rs:442] ["shrink cache by tick"] ["retain size"=0] ["shrink size"=0]
    [2022/12/30 09:13:31.260 +01:00] [INFO] [sst_importer.rs:442] ["shrink cache by tick"] ["retain size"=0] ["shrink size"=0]
    [2022/12/30 09:13:41.261 +01:00] [INFO] [sst_importer.rs:442] ["shrink cache by tick"] ["retain size"=0] ["shrink size"=0]
    [2022/12/30 09:13:51.262 +01:00] [INFO] [sst_importer.rs:442] ["shrink cache by tick"] ["retain size"=0] ["shrink size"=0]
    [2022/12/30 09:14:01.264 +01:00] [INFO] [sst_importer.rs:442] ["shrink cache by tick"] ["retain size"=0] ["shrink size"=0]
    [2022/12/30 09:14:11.266 +01:00] [INFO] [sst_importer.rs:442] ["shrink cache by tick"] ["retain size"=0] ["shrink size"=0]
    

    This is not very informative to a TiKV user, though it might be useful to a TiKV developer.
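    To see how much of the log each call site produces, the text-format lines above can be summarized per (level, caller) pair with standard tools. A rough sketch, assuming the default bracketed TiKV text log format shown above:

    ```shell
    # Count occurrences per (level, caller) pair. With '[' and ']' as field
    # separators, $4 is the level and $6 the caller in this log format.
    awk -F'[][]' '{print $4, $6}' tikv.log | sort | uniq -c | sort -nr
    ```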

    opened by dveeden 1
  • raft-engine: remove confusing API cut logs

    raft-engine: remove confusing API cut logs

    What is changed and how it works?

    Issue Number: Ref #12842

    What's Changed:

    The API is supposed to be used together with `append`, but nothing in
    the code makes that clear. This PR merges `cut_logs` and `append` to
    reduce confusion and mistakes.
    

    Check List

    Tests

    • Unit test
    • Integration test

    Release note

    None
    
    size/L release-note-none 
    opened by BusyJay 1
  • raftstore-v2: publish tablet in raftstore thread only

    raftstore-v2: publish tablet in raftstore thread only

    What is changed and how it works?

    Issue Number: Ref #12842

    What's Changed:

    Publishing a tablet in the apply thread is unsafe. This PR moves the
    operation to the raftstore thread. It also fixes a panic that can occur when
    two splits are applied at a time, and makes sure the cache is cleared after
    a tablet is published.
    

    Check List

    Tests

    • Unit test
    • Integration test

    Release note

    None
    
    status/LGT1 size/XXL release-note-none 
    opened by BusyJay 2
Releases (v6.5.0)
  • v6.5.0(Dec 29, 2022)

  • v5.1.5(Dec 28, 2022)

    Bug fixes

    • Fix the issue of time parsing error that occurs when the DATETIME values contain a fraction and Z #12739
    • Fix a bug that replica reads might violate the linearizability #12109
    • Fix a bug that Regions might be overlapped if Raftstore is busy #13160
    • Fix the TiKV panic issue that occurs when applying snapshot is aborted #11618
    • Fix a bug that TiKV might panic if it has been running for 2 years or more #11940
    • Fix the panic issue that might occur when the source peer catches up logs by snapshot in the Region merge process #12663
    • Fix the issue that TiKV panics when performing type conversion for an empty string #12673
    • Fix a bug that stale messages cause TiKV to panic #12023
    • Fix the panic issue that might occur when a peer is being split and destroyed at the same time #12825
    • Fix the TiKV panic issue that occurs when the target peer is replaced with the peer that is destroyed without being initialized when merging a Region #12048
    • Fix the issue that TiKV reports the invalid store ID 0 error when using Follower Read #12478
    • Fix the possible duplicate commit records in pessimistic transactions when async commit is enabled #12615
    • Support configuring the unreachable_backoff item to avoid Raftstore broadcasting too many messages after one peer becomes unreachable #13054
    • Fix the issue that successfully committed optimistic transactions may report the Write Conflict error when the network is poor #34066
    • Fix the wrong expression of Unified Read Pool CPU in dashboard #13086
  • v6.1.3(Dec 5, 2022)

  • v5.3.4(Nov 24, 2022)

  • v6.4.0(Nov 17, 2022)

    Improvements

    • Add a new configuration item apply-yield-write-size to control the maximum number of bytes that the Apply thread can write for one Finite-state Machine in one round of poll, and relieve Raftstore congestion when the Apply thread writes a large volume of data #13313 @glorv
    • Warm up the entry cache before migrating the leader of Region to avoid QPS jitter during the leader transfer process #13060 @cosven
    • Support pushing down the json_contains operator to Coprocessor #13592 @lizhenhuan
    • Add the asynchronous function for CausalTsProvider to improve the flush performance in some scenarios #13428 @zeminzhou

    Bug fixes

    • Fix the issue that TiDB fails to start on Gitpod when there are multiple cgroup and mountinfo records #13660 @tabokie
    • Fix the wrong expression of a TiKV metric tikv_gc_compaction_filtered #13537 @Defined2014
    • Fix the performance issue caused by the abnormal delete_files_in_range #13534 @tabokie
    • Fix abnormal Region competition caused by expired lease during snapshot acquisition #13553 @SpadeA-Tang
    • Fix errors occurred when FLASHBACK fails in the first batch #13672 #13704 #13723 @HuSharp
  • v6.1.2(Oct 24, 2022)

    Improvements

    • Support configuring the unreachable_backoff item to avoid Raftstore broadcasting too many messages after one peer becomes unreachable #13054 @5kbpers
    • Support configuring the RocksDB write stall settings to a value smaller than the flow control threshold #13467 @tabokie

    Bug fixes

    • Fix the issue that the snapshot data might be incomplete caused by batch snapshot across Regions #13553 @SpadeA-Tang
    • Fix the issue of QPS drop when flow control is enabled and level0_slowdown_trigger is set explicitly #11424 @Connor1996
    • Fix the issue that causes permission denied error when TiKV gets an error from the web identity provider and fails back to the default provider #13122 @3pointer
    • Fix the issue that the TiKV service is unavailable for several minutes when a TiKV instance is in an isolated network environment #12966 @cosven
  • v5.4.3(Oct 13, 2022)

    Improvements

    • Support configuring the RocksDB write stall settings to a value smaller than the flow control threshold #13467
    • Support configuring the unreachable_backoff item to avoid Raftstore broadcasting too many messages after one peer becomes unreachable #13054

    Bug fixes

    • Fix the issue of continuous SQL execution errors in the cluster after the PD leader is switched or PD is restarted #12934
    • Cause: This issue is caused by a TiKV bug: after heartbeat requests fail, TiKV does not retry sending heartbeat information to the PD client until TiKV reconnects to the PD client. As a result, the Region information on the failed TiKV node becomes outdated, TiDB cannot get the latest Region information, and SQL execution errors occur.
    • Affected versions: v5.3.2 and v5.4.2. This issue has been fixed in v5.3.3 and v5.4.3. If you are using v5.4.2, you can upgrade your cluster to v5.4.3.
    • Workaround: In addition to upgrading, you can restart the TiKV nodes that cannot send Region heartbeats to PD, until there are no Region heartbeats left to send.
    • Fix the issue that causes permission denied error when TiKV gets an error from the web identity provider and fails back to the default provider #13122
    • Fix the issue that the PD client might cause deadlocks #13191
    • Fix the issue that Regions might be overlapped if Raftstore is busy #13160
  • v6.3.0(Sep 30, 2022)

    Improvements

    • Support configuring the unreachable_backoff item to avoid Raftstore broadcasting too many messages after one peer becomes unreachable #13054 @5kbpers
    • Improve the fault tolerance of TSO service #12794 @pingyu
    • Support dynamically modifying the number of sub-compaction operations performed concurrently in RocksDB (rocksdb.max-sub-compactions) #13145 @ethercflow
    • Optimize the performance of merging empty Regions #12421 @tabokie
    • Support more regular expression functions #13483 @gengliqi
    • Support automatically adjusting the thread pool size based on the CPU usage #13313 @glorv
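
    Several of the items above are configuration entries that can now also be changed online. A static-config sketch of the new RocksDB knob (the value shown is illustrative):

    ```toml
    [rocksdb]
    # Number of sub-compaction operations performed concurrently;
    # since v6.3.0 this can also be modified dynamically without a restart.
    max-sub-compactions = 3
    ```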

    Bug fixes

    • Fix the issue that PD does not reconnect to TiKV after the Region heartbeat is interrupted #12934 @bufferflies
    • Fix the issue that Regions might be overlapped if Raftstore is busy #13160 @5kbpers
    • Fix the issue that the PD client might cause deadlocks #13191 @bufferflies #12933 @BurtonQin
    • Fix the issue that TiKV might panic when encryption is disabled #13081 @jiayang-zheng
    • Fix the wrong expression of Unified Read Pool CPU in Dashboard #13086 @glorv
    • Fix the issue that the TiKV service is unavailable for several minutes when a TiKV instance is in an isolated network environment #12966 @cosven
    • Fix the issue that TiKV mistakenly reports a PessimisticLockNotFound error #13425 @sticnarf
    • Fix the issue that PITR might cause data loss in some situations #13281 @YuJuncen
    • Fix the issue that causes checkpoint not advanced when there are some long pessimistic transactions #13304 @YuJuncen
    • Fix the issue that TiKV does not distinguish the datetime type (DATETIME, DATE, TIMESTAMP and TIME) and STRING type in JSON #13417 @YangKeao
    • Fix incompatibility with MySQL of comparison between JSON bool and other JSON value #13386 #37481 @YangKeao
  • v5.3.3(Sep 14, 2022)

  • v6.1.1(Sep 1, 2022)

    Improvements

    • Support compressing the metrics response using gzip to reduce the HTTP body size #12355 @winoros
    • Support reducing the amount of data returned for each request by filtering out some metrics using the server.simplify-metrics configuration item #12355 @glorv
    • Support dynamically modifying the number of sub-compaction operations performed concurrently in RocksDB (rocksdb.max-sub-compactions) #13145 @ethercflow
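
    The metrics improvements above map to the server section of the TiKV config. A minimal sketch (assuming the kebab-case spelling of the item named in the notes):

    ```toml
    [server]
    # Filter out some metrics to reduce the amount of data
    # returned for each /metrics request.
    simplify-metrics = true
    ```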

    Bug fixes

    • Fix a bug that Regions might be overlapped if Raftstore is busy #13160 @5kbpers
    • Fix the issue that PD does not reconnect to TiKV after the Region heartbeat is interrupted #12934 @bufferflies
    • Fix the issue that TiKV panics when performing type conversion for an empty string #12673 @wshwsh12
    • Fix the issue of inconsistent Region size configuration between TiKV and PD #12518 @5kbpers
    • Fix the issue that encryption keys are not cleaned up when Raft Engine is enabled #12890 @tabokie
    • Fix the panic issue that might occur when a peer is being split and destroyed at the same time #12825 @BusyJay
    • Fix the panic issue that might occur when the source peer catches up logs by snapshot in the Region merge process #12663 @BusyJay
    • Fix the issue of frequent PD client reconnection that occurs when the PD client meets an error #12345 @Connor1996
    • Fix potential panic when parallel recovery is enabled for Raft Engine #13123 @tabokie
    • Fix the issue that the Commit Log Duration of a new Region is too high, which causes QPS to drop #13077 @Connor1996
    • Fix rare panics when Raft Engine is enabled #12698 @tabokie
    • Avoid redundant log warnings when proc filesystem (procfs) cannot be found #13116 @tabokie
    • Fix the wrong expression of Unified Read Pool CPU in dashboard #13086 @glorv
    • Fix the issue that when a Region is large, the default region-split-check-diff might be larger than the bucket size #12598 @tonyxuqqi
    • Fix the issue that TiKV might panic when Apply Snapshot is aborted and Raft Engine is enabled #12470 @tabokie
    • Fix the issue that the PD client might cause deadlocks #13191 @bufferflies #12933 @BurtonQin
  • v6.2.0(Aug 23, 2022)

    For the complete and official release notes, see https://docs.pingcap.com/tidb/v6.2/release-6.2.0.

    Improvements

    • Support compressing the metrics response using gzip to reduce the HTTP body size #12355 @glorv
    • Improve the readability of the TiKV panel in Grafana Dashboard #12007 @kevin-xianliu
    • Optimize the commit pipeline performance of the Apply operator #12898 @ethercflow
    • Support dynamically modifying the number of sub-compaction operations performed concurrently in RocksDB (rocksdb.max-sub-compactions) #13145 @ethercflow

    Bug fixes

    • Avoid reporting WriteConflict errors in pessimistic transactions #11612 @sticnarf
    • Fix the possible duplicate commit records in pessimistic transactions when async commit is enabled #12615 @sticnarf
    • Fix the issue that TiKV panics when modifying the storage.api-version from 1 to 2 #12600 @pingyu
    • Fix the issue of inconsistent Region size configuration between TiKV and PD #12518 @5kbpers
    • Fix the issue that TiKV keeps reconnecting PD clients #12506, #12827 @Connor1996
    • Fix the issue that TiKV panics when performing type conversion for an empty string #12673 @wshwsh12
    • Fix the issue of time parsing error that occurs when the DATETIME values contain a fraction and Z #12739 @gengliqi
    • Fix the issue that the perf context written by the Apply operator to TiKV RocksDB is coarse-grained #11044 @LykxSassinator
    • Fix the issue that TiKV fails to start when the configuration of backup/import/cdc is invalid #12771 @3pointer
    • Fix the panic issue that might occur when a peer is being split and destroyed at the same time #12825 @BusyJay
    • Fix the panic issue that might occur when the source peer catches up logs by snapshot in the Region merge process #12663 @BusyJay
    • Fix the panic issue caused by analyzing statistics when max_sample_size is set to 0 #11192 @LykxSassinator
    • Fix the issue that encryption keys are not cleaned up when Raft Engine is enabled #12890 @tabokie
    • Fix the issue that the get_valid_int_prefix function is incompatible with TiDB. For example, the FLOAT type was incorrectly converted to INT #13045 @guo-shaoge
    • Fix the issue that the Commit Log Duration of a new Region is too high, which causes QPS to drop #13077 @Connor1996
    • Fix the issue that PD does not reconnect to TiKV after the Region heartbeat is interrupted #12934 @bufferflies
  • v5.4.2(Jul 8, 2022)

    Improvements

    • Reload TLS certificate automatically for each update to improve availability #12546
    • Improve the health check to detect unavailable Raftstore, so that the TiKV client can update Region Cache in time #12398
    • Transfer the leadership to CDC observer to reduce latency jitter #12111

    Bug Fixes

    • Fix the panic issue caused by analyzing statistics when max_sample_size is set to 0 #11192
    • Fix the potential issue of mistakenly reporting TiKV panics when exiting TiKV #12231
    • Fix the panic issue that might occur when the source peer catches up logs by snapshot in the Region merge process #12663
    • Fix the panic issue that might occur when a peer is being split and destroyed at the same time #12825
    • Fix the issue of frequent PD client reconnection that occurs when the PD client meets an error #12345
    • Fix the issue of time parsing error that occurs when the DATETIME values contain a fraction and Z #12739
    • Fix the issue that TiKV panics when performing type conversion for an empty string #12673
    • Fix the possible duplicate commit records in pessimistic transactions when async commit is enabled #12615
    • Fix the issue that TiKV reports the invalid store ID 0 error when using Follower Read #12478
    • Fix the issue of TiKV panic caused by the race between destroying peers and batch splitting Regions #12368
    • Fix the issue that tikv-ctl returns an incorrect result due to its wrong string match #12329
    • Fix the issue of failing to start TiKV on AUFS #12543
  • v5.3.2(Jun 29, 2022)

    Improvements

    • Reduce the system call by the Raft client and increase CPU efficiency #11309
    • Improve the health check to detect unavailable Raftstore, so that the TiKV client can update Region Cache in time #12398
    • Transfer the leadership to CDC observer to reduce latency jitter #12111
    • Add more metrics for the garbage collection module of Raft logs to locate performance problems in the module #11374

    Bug Fixes

    • Fix the issue of frequent PD client reconnection that occurs when the PD client meets an error #12345
    • Fix the issue of time parsing error that occurs when the DATETIME values contain a fraction and Z #12739
    • Fix the issue that TiKV panics when performing type conversion for an empty string #12673
    • Fix the possible duplicate commit records in pessimistic transactions when async commit is enabled #12615
    • Fix the bug that TiKV reports the invalid store ID 0 error when using Follower Read #12478
    • Fix the issue of TiKV panic caused by the race between destroying peers and batch splitting Regions #12368
    • Fix the issue that successfully committed optimistic transactions may report the Write Conflict error when the network is poor #34066
    • Fix the issue that TiKV panics and destroys peers unexpectedly when the target Region to be merged is invalid #12232
    • Fix a bug that stale messages cause TiKV to panic #12023
    • Fix the issue of intermittent packet loss and out of memory (OOM) caused by the overflow of memory metrics #12160
    • Fix the potential panic issue that occurs when TiKV performs profiling on Ubuntu 18.04 #9765
    • Fix the issue that tikv-ctl returns an incorrect result due to its wrong string match #12329
    • Fix a bug that replica reads might violate the linearizability #12109
    • Fix the TiKV panic issue that occurs when the target peer is replaced with the peer that is destroyed without being initialized when merging a Region #12048
    • Fix a bug that TiKV might panic if it has been running for 2 years or more #11940
  • v6.1.0(Jun 13, 2022)

    Improvements

    • Improve the old value hit rate of CDC when using in-memory pessimistic lock #12279
    • Improve the health check to detect unavailable Raftstore, so that the TiKV client can update Region Cache in time #12398
    • Support setting memory limit on Raft Engine #12255
    • TiKV automatically detects and deletes the damaged SST files to improve the product availability #10578
    • CDC supports RawKV #11965
    • Support splitting a large snapshot file into multiple files #11595
    • Move the snapshot garbage collection from Raftstore to background thread to prevent snapshot GC from blocking Raftstore message loops #11966
    • Support dynamically setting the maximum message length (max-grpc-send-msg-len) and the maximum batch size of gRPC messages (raft-msg-max-batch-size) #12334
    • Support executing online unsafe recovery plan through Raft #10483
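
    The memory limit and the two dynamically settable gRPC items above live in the TiKV config file. A hedged sketch (section and item names assumed from the notes; the values are illustrative):

    ```toml
    [raft-engine]
    # Cap the memory used by Raft Engine (new in v6.1.0).
    memory-limit = "1GB"

    [server]
    # Both items below can now be modified dynamically.
    max-grpc-send-msg-len = 10485760   # maximum gRPC message length, in bytes
    raft-msg-max-batch-size = 128      # maximum batch size of gRPC Raft messages
    ```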

    Bug fixes

    • Fix the issue that the Raft log lag is increasing when a TiKV instance is taken offline #12161
    • Fix the issue that TiKV panics and destroys peers unexpectedly because the target Region to be merged is invalid #12232
    • Fix the issue that TiKV reports the failed to load_latest_options error when upgrading from v5.3.1 or v5.4.0 to v6.0.0 #12269
    • Fix the issue of OOM caused by appending Raft logs when the memory resource is insufficient #11379
    • Fix the issue of TiKV panic caused by the race between destroying peers and batch splitting Regions #12368
    • Fix the issue of TiKV memory usage spike in a short time after stats_monitor falls into a dead loop #12416
    • Fix the issue that TiKV reports the invalid store ID 0 error when using Follower Read #12478
  • v5.4.1(May 13, 2022)

    Improvements

    • Support displaying multiple Kubernetes clusters in the Grafana dashboard #12104

    Bug Fixes

    • Fix the issue that TiKV panics and destroys peers unexpectedly because the target Region to be merged is invalid #12232
    • Fix a bug that stale messages cause TiKV to panic #12023
    • Fix the issue of intermittent packet loss and out of memory (OOM) caused by the overflow of memory metrics #12160
    • Fix the potential panic issue that occurs when TiKV performs profiling on Ubuntu 18.04 #9765
    • Fix a bug that replica reads might violate the linearizability #12109
    • Fix the TiKV panic issue that occurs when the target peer is replaced with the peer that is destroyed without being initialized when merging a Region #12048
    • Fix a bug that TiKV might panic if it has been running for 2 years or more #11940
    • Reduce the TiCDC recovery time by reducing the number of the Regions that require the Resolve Locks step #11993
    • Fix the panic issue caused by deleting snapshot files when the peer status is Applying #11746
    • Fix the issue that destroying a peer might cause high latency #10210
    • Fix the panic issue caused by invalid assertion in resource metering #12234
    • Fix the issue that slow score calculation is inaccurate in some corner cases #12254
    • Fix the OOM issue caused by the resolved_ts module and add more metrics #12159
    • Fix the issue that successfully committed optimistic transactions may report the Write Conflict error when the network is poor #34066
    • Fix the TiKV panic issue that occurs when replica read is enabled on a poor network #12046
  • v5.2.4(Apr 26, 2022)

  • v6.0.0(Apr 6, 2022)

    Improvements

    • Improve the Raftstore sampling accuracy for large key range batches #11039
    • Add the correct "Content-Type" for debug/pprof/profile to make the Profile more easily identified #11521
    • Renew the lease time of the leader infinitely when the Raftstore has heartbeats or handles read requests, which helps reduce latency jitter #11579
    • Choose the store with the least cost when switching the leader, which helps improve performance stability #10602
    • Fetch Raft logs asynchronously to reduce the performance jitter caused by blocking the Raftstore #11320
    • Support the QUARTER function in vector calculation #5751
    • Support pushing down the BIT data type to TiKV #30738
    • Support pushing down the MOD function and the SYSDATE function to TiKV #11916
    • Reduce the TiCDC recovery time by reducing the number of the Regions that require the Resolve Locks step #11993
    • Support dynamically modifying raftstore.raft-max-inflight-msgs #11865
    • Support EXTRA_PHYSICAL_TABLE_ID_COL_ID to enable dynamic pruning mode #11888
    • Support calculation in buckets #11759
    • Encode the keys of RawKV API V2 as user-key + memcomparable-padding + timestamp #11965
    • Encode the values of RawKV API V2 as user-value + ttl + ValueMeta and encode delete in ValueMeta #11965
    • TiKV Coprocessor supports the Projection operator #12114
    • Support dynamically modifying raftstore.raft-max-size-per-msg #12017
    • Support monitoring multi-k8s in Grafana #12014
    • Transfer the leadership to CDC observer to reduce latency jitter #12111
    • Support dynamically modifying raftstore.apply_max_batch_size and raftstore.store_max_batch_size #11982
    • RawKV V2 returns the latest version upon receiving the raw_get or raw_scan request #11965
    • Support the RCCheckTS consistency reads #12097
    • Support dynamically modifying storage.scheduler-worker-pool-size (the thread count of the Scheduler pool) #12067
    • Control the use of CPU and bandwidth by using the global foreground flow controller to improve the performance stability of TiKV #11855
    • Support dynamically modifying readpool.unified.max-thread-count (the thread count of the UnifyReadPool) #11781
    • Use the TiKV internal pipeline to replace the RocksDB pipeline and deprecate the rocksdb.enable-multibatch-write parameter #12059
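
    Many of the items above make existing pool and batch sizes dynamically modifiable. Their static spellings in the TiKV config file look roughly like this (a sketch; the values shown are illustrative, not recommendations):

    ```toml
    [readpool.unified]
    # Thread count of the UnifyReadPool; now adjustable online.
    max-thread-count = 8

    [storage]
    # Thread count of the Scheduler pool; now adjustable online.
    scheduler-worker-pool-size = 4

    [raftstore]
    # Batch sizes for the Store and Apply loops and the Raft inflight
    # message window; all now adjustable online.
    store-max-batch-size = 256
    apply-max-batch-size = 256
    raft-max-inflight-msgs = 256
    ```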

    Bug Fixes

    • Fix the panic issue caused by deleting snapshot files when the peer status is Applying #11746
    • Fix the issue of QPS drop when flow control is enabled and level0_slowdown_trigger is set explicitly #11424
    • Fix the issue that destroying a peer might cause high latency #10210
    • Fix a bug that TiKV cannot delete a range of data (unsafe_destroy_range cannot be executed) when the GC worker is busy #11903
    • Fix a bug that TiKV panics when the data in StoreMeta is accidentally deleted in some corner cases #11852
    • Fix a bug that TiKV panics when performing profiling on an ARM platform #10658
    • Fix a bug that TiKV might panic if it has been running for 2 years or more #11940
    • Fix the compilation issue on the ARM64 architecture caused by missing SSE instruction set #12034
    • Fix the issue that deleting an uninitialized replica might cause an old replica to be recreated #10533
    • Fix the bug that stale messages cause TiKV to panic #12023
    • Fix the issue that undefined behavior (UB) might occur in TsSet conversions #12070
    • Fix a bug that replica reads might violate the linearizability #12109
    • Fix the potential panic issue that occurs when TiKV performs profiling on Ubuntu 18.04 #9765
    • Fix the issue that tikv-ctl returns an incorrect result due to its wrong string match #12049
    • Fix the issue of intermittent packet loss and out of memory (OOM) caused by the overflow of memory metrics #12160
    • Fix the potential issue of mistakenly reporting TiKV panics when exiting TiKV #12231
  • v6.0.0-alpha(Mar 4, 2022)

  • v5.3.1(Mar 3, 2022)

    Feature enhancements

    • Update the proc filesystem (procfs) to v0.12.0 #11702
    • Improve the error log report in the Raft client #11959
    • Increase the speed of inserting SST files by moving the verification process to the Import thread pool from the Apply thread pool #11239

    Bug fixes

    • Fix a bug that TiKV cannot delete a range of data (unsafe_destroy_range cannot be executed) when the GC worker is busy #11903
    • Fix the issue that destroying a peer might cause high latency #10210
    • Fix a bug that the any_value function returns a wrong result when regions are empty #11735
    • Fix the issue that deleting an uninitialized replica might cause an old replica to be recreated #10533
    • Fix the metadata corruption issue when Prepare Merge is triggered after a new election is finished but the isolated peer is not informed #11526
    • Fix the deadlock issue that happens occasionally when coroutines run too fast #11549
    • Fix the potential deadlock and memory leak issues when profiling flame graphs #11108
    • Fix the rare data inconsistency issue when retrying a prewrite request in pessimistic transactions #11187
    • Fix a bug that the configuration resource-metering.enabled does not work #11235
    • Fix the issue that some coroutines leak in resolved_ts #10965
    • Fix the issue of reporting false "GC can not work" alert under low write flow #9910
    • Fix a bug that tikv-ctl cannot return the correct Region-related information #11393
    • Fix the issue that a down TiKV node causes the resolved timestamp to lag #11351
    • Fix a panic issue that occurs when Region merge, ConfChange, and Snapshot happen at the same time in extreme conditions #11475
    • Fix the issue that TiKV cannot detect the memory lock when TiKV performs a reverse table scan #11440
    • Fix the issue of negative sign when the decimal divide result is zero #29586
    • Fix a memory leak caused by the monitoring data of statistics threads #11195
    • Fix the issue of TiCDC panic that occurs when the downstream database is missing #11123
    • Fix the issue that TiCDC adds scan retries frequently due to the Congest error #11082
    • Fix the issue that batch messages are too large in Raft client implementation #9714
    • Collapse some uncommon storage-related metrics in Grafana dashboard #11681
  • v5.1.4(Feb 22, 2022)

    Feature enhancements

    • Update the proc filesystem (procfs) to v0.12.0 #11702
    • Improve the error log report in the Raft client #11959
    • Increase the speed of inserting SST files by moving the verification process to the Import thread pool from the Apply thread pool #11239

    Bug fixes

    • Fix a bug that TiKV cannot delete a range of data (unsafe_destroy_range cannot be executed) when the GC worker is busy #11903
    • Fix the issue that destroying a peer might cause high latency #10210
    • Fix a bug that the any_value function returns a wrong result when regions are empty #11735
    • Fix the issue that deleting an uninitialized replica might cause an old replica to be recreated #10533
    • Fix the metadata corruption issue when Prepare Merge is triggered after a new election is finished but the isolated peer is not informed #11526
    • Fix the deadlock issue that happens occasionally when coroutines run too fast #11549
    • Fix the potential deadlock and memory leak issues when profiling flame graphs #11108
    • Fix the rare data inconsistency issue when retrying a prewrite request in pessimistic transactions #11187
    • Fix a bug that the configuration resource-metering.enabled does not work #11235
    • Fix the issue that some coroutines leak in resolved_ts #10965
    • Fix the issue of reporting false "GC can not work" alert under low write flow #9910
    • Fix a bug that tikv-ctl cannot return the correct Region-related information #11393
    • Fix the issue that a down TiKV node causes the resolved timestamp to lag #11351
    • Fix a panic issue that occurs when Region merge, ConfChange, and Snapshot happen at the same time in extreme conditions #11475
    • Fix the issue that TiKV cannot detect the memory lock when TiKV performs a reverse table scan #11440
    • Fix the issue of negative sign when the decimal divide result is zero #29586
    • Fix a memory leak caused by the monitoring data of statistics threads #11195
    • Fix the issue of TiCDC panic that occurs when the downstream database is missing #11123
    • Fix the issue that TiCDC adds scan retries frequently due to the Congest error #11082
    • Fix the issue that batch messages are too large in Raft client implementation #9714
    • Collapse some uncommon storage-related metrics in Grafana dashboard #11681
  • v5.4.0(Feb 10, 2022)

    Improvements

    • Coprocessor supports paging API to process requests in a stream-like way #11448
    • Support read-through-lock so that read operations do not need to wait for secondary locks to be resolved #11402
    • Add a disk protection mechanism to avoid panic caused by disk space drainage #10537
    • Support archiving and rotating logs #11651
    • Reduce the system call by the Raft client and increase CPU efficiency #11309
    • Coprocessor supports pushing down substring to TiKV #11495
    • Improve the scan performance by skip reading locks in the Read Committed isolation level #11485
    • Reduce the default thread pool size used by backup operations and limit the use of thread pool when the stress is high #11000
    • Support dynamically adjusting the sizes of the Apply thread pool and the Store thread pool #11159
    • Support configuring the size of the snap-generator thread pool #11247
    • Optimize the issue of global lock race that occurs when there are many files with frequent reads and writes #250
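
    Several improvements above concern thread-pool sizing. A config sketch for the Raftstore-side pools (kebab-case item names assumed from the release notes; the values are illustrative defaults, not recommendations):

    ```toml
    [raftstore]
    # The Apply and Store thread pools can now be resized dynamically.
    apply-pool-size = 2
    store-pool-size = 2
    # Size of the snap-generator thread pool (configurable since v5.4.0).
    snap-generator-pool-size = 2
    ```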

    Bug fixes

    • Fix the issue that the MVCC deletion records are not cleared by GC #11217
    • Fix the issue that retrying prewrite requests in the pessimistic transaction mode might cause the risk of data inconsistency in rare cases #11187
    • Fix the issue that GC scan causes memory overflow #11410
    • Fix the issue that RocksDB flush or compaction causes panic when the disk capacity is full #11224
  • v5.0.6(Dec 30, 2021)

    Improvements

    • Increase the speed of inserting SST files by moving the verification process to the Import thread pool from the Apply thread pool #11239
    • Add more metrics for the garbage collection module of Raft logs to locate performance problems in the module #11374
    • Collapse some uncommon storage-related metrics in Grafana dashboard #11681

    Bug fixes

    • Fix the issue that a down TiKV node causes the resolved timestamp to lag #11351
    • Fix the issue that TiKV cannot detect the memory lock when TiKV performs a reverse table scan #11440
    • Fix the issue that the accumulation of GC tasks might cause TiKV to be OOM (out of memory) #11410
    • Fix the issue of TiKV panic that occurs when the files do not exist when TiDB Lightning imports data #10438
    • Fix the issue that the node of a TiKV replica is down after the node gets snapshots because TiKV cannot modify the metadata accurately #10225
    • Fix the leak issue of the backup thread pool #10287
    • Fix the issue of casting illegal strings into floating-point numbers #23322
  • v4.0.16(Dec 17, 2021)

    Compatibility changes

    • Before v4.0.16, when TiDB converts an illegal UTF-8 string to a Real type, an error is reported directly. Starting from v4.0.16, TiDB processes the conversion according to the legal UTF-8 prefix in the string #11466

    Improvements

    • Reduce disk space consumption by adopting the zstd algorithm to compress SST files when restoring data using Backup & Restore or importing data using Local-backend of TiDB Lightning #11469

    Bug fixes

    • Fix a panic issue that occurs when Region merge, ConfChange, and Snapshot happen at the same time in extreme conditions #11475
    • Fix the issue of negative sign when the decimal divide result is zero #29586
    • Fix the issue that the average latency of the by-instance gRPC requests is inaccurate in TiKV metrics #11299
    • Fix the issue of TiCDC panic that occurs when the downstream database is missing #11123
    • Fix the issue that the Raft connection is broken when the channel is full #11047
    • Fix the issue that TiDB cannot correctly identify whether the Int64 types in Max/Min functions are a signed integer or not, which causes the wrong calculation result of Max/Min #10158
    • Fix the issue that CDC adds scan retries frequently due to the Congest error #11082
  • v5.2.3(Dec 2, 2021)

    Bug fix

    • Fix the issue that the GcKeys task does not work when it is called with multiple keys. Due to this issue, compaction filter GC might not drop the MVCC deletion information. #11217
  • v5.0.5(Dec 2, 2021)

    Bug fix

    • Fix the issue that the GcKeys task does not work when it is called with multiple keys. Due to this issue, compaction filter GC might not drop the MVCC deletion information. #11217
  • v5.1.3(Dec 3, 2021)

    Bug fix

    • Fix the issue that the GcKeys task does not work when it is called with multiple keys. Due to this issue, compaction filter GC might not drop the MVCC deletion information. #11217
  • v5.3.0(Nov 29, 2021)

    Improvements

    • Enhance disk space protection to improve storage stability
    • Simplify the algorithm of L0 flow control #10879
    • Improve the error log report in the raft client module #10944
    • Improve logging threads to avoid them becoming a performance bottleneck #10841
    • Add more statistics types of write queries #10507

    Bug Fixes

    • Fix the issue of unavailable TiKV caused by Raftstore deadlock when migrating Regions. The workaround is to disable the scheduling and restart the unavailable TiKV. #10909
    • Fix the issue that CDC adds scan retries frequently due to the Congest error #11082
    • Fix the issue that the Raft connection is broken when the channel is full #11047
    • Fix the issue that batch messages are too large in Raft client implementation #9714
    • Fix the issue that some coroutines leak in resolved_ts #10965
    • Fix a panic issue that occurs to the coprocessor when the size of response exceeds 4 GiB #9012
    • Fix the issue that snapshot Garbage Collection (GC) misses GC snapshot files when snapshot files cannot be garbage collected #10813
    • Fix a panic issue caused by timeout when processing Coprocessor requests #10852
    • Fix a memory leak caused by monitoring data of statistics threads #11195
    • Fix a panic issue caused by getting the cgroup information from some platforms #10980
  • v5.2.2(Oct 29, 2021)

    Improvements

    • Simplify the algorithm of L0 flow control #10879
    • Improve the error log report in the Raft client module #10983
    • Make the slow log of the TiKV coprocessor only consider the time spent on processing requests #10841
    • Drop logs instead of blocking threads when the slogger thread is overloaded and its queue fills up #10841
    • Add more statistics types of write queries #10507

    Bug Fixes

    • Fix the issue that CDC adds scan retries frequently due to the Congest error #11082
    • Fix the issue that the Raft connection is broken when the channel is full #11047
    • Fix the issue that batch messages are too large in the Raft client implementation #9714
    • Fix the issue that some coroutines leak in resolved_ts #10965
    • Fix a panic issue that occurs to the coprocessor when the response size exceeds 4 GiB #9012
    • Fix the issue that snapshot Garbage Collection (GC) misses GC snapshot files when snapshot files cannot be garbage collected #10813
    • Fix a panic issue that occurs when processing coprocessor requests times out #10852
  • v5.1.2(Sep 27, 2021)

    Improvements

    • Support dynamically modifying TiCDC configurations #10645
    • Reduce the size of Resolved TS message to save network bandwidth #2448
    • Limit the counts of peer stats in the heartbeat message reported by a single store #10621

    Bug Fixes

    • Fix a bug that some files are missed during the import of snapshot files when upgrading TiKV from v3.x to v4.x or v5.x #10902
    • Fix the issue that the GC (Garbage Collection) failure (such as a corrupted file) of a single snapshot file stops the GC process of all other GC-able files #10813
    • Make the slow log of the TiKV coprocessor only consider the time spent on processing requests #10841
    • Drop logs instead of blocking threads when the slogger thread is overloaded and its queue fills up #10841
    • Fix a panic caused by timeout when processing Coprocessor requests #10852
    • Fix the TiKV panic issue that occurs when upgrading from a pre-5.0 version with Titan enabled #10842
    • Fix the issue that TiKV of a newer version cannot be rolled back to v5.0.x #10842
    • Fix the issue that TiKV might delete files before ingesting them into RocksDB #10438
    • Fix the parsing failure caused by the left pessimistic locks #26404
  • v5.0.4(Sep 14, 2021)

    Improvements

    • Limit the TiCDC sink's memory consumption #10305
    • Add the memory-bounded upper limit for the TiCDC old value cache #10313

    Bug Fixes

    • Fix the wrong tikv_raftstore_hibernated_peer_state metric #10330
    • Fix the wrong arguments type of the json_unquote() function in the coprocessor #10176
    • Skip clearing callback during graceful shutdown to avoid breaking ACID in some cases #10353 #10307
    • Fix a bug that the read index is shared for replica reads on a Leader #10347
    • Fix the wrong function that casts DOUBLE to DOUBLE #25200
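The memory-bounded upper limit added for the TiCDC old value cache in #10313 can be illustrated with a byte-capped FIFO cache that evicts its oldest entries once the cap is exceeded (a hypothetical sketch, not TiKV's implementation; the type and field names are assumptions):

```rust
use std::collections::VecDeque;

// Hypothetical sketch: a FIFO cache whose total payload size is capped in
// bytes; the oldest entries are evicted once the cap is exceeded.
struct BoundedCache {
    cap_bytes: usize,
    used_bytes: usize,
    entries: VecDeque<(String, Vec<u8>)>,
}

impl BoundedCache {
    fn new(cap_bytes: usize) -> Self {
        BoundedCache { cap_bytes, used_bytes: 0, entries: VecDeque::new() }
    }

    fn insert(&mut self, key: String, value: Vec<u8>) {
        self.used_bytes += value.len();
        self.entries.push_back((key, value));
        // Evict from the front (oldest) until back under the byte cap.
        while self.used_bytes > self.cap_bytes {
            if let Some((_, old)) = self.entries.pop_front() {
                self.used_bytes -= old.len();
            } else {
                break;
            }
        }
    }
}

fn main() {
    let mut cache = BoundedCache::new(8);
    cache.insert("a".into(), vec![0; 4]);
    cache.insert("b".into(), vec![0; 4]);
    cache.insert("c".into(), vec![0; 4]); // forces eviction of "a"
    assert_eq!(cache.entries.len(), 2);
    assert!(cache.used_bytes <= 8);
    println!("entries={} bytes={}", cache.entries.len(), cache.used_bytes);
}
```

Bounding by bytes rather than entry count matters here because cached old values vary widely in size, so a count-based limit would not cap memory use.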