High Availability FAQs

This document summarizes the FAQs related to high availability of TiDB.

How is TiDB strongly consistent?

Data is redundantly replicated among TiKV nodes using the Raft consensus algorithm to ensure recoverability when a node failure occurs.

At the bottom layer, TiKV uses a replicated log + state machine model to replicate data. For write requests, the data is first written to the Leader, which then replicates the command to its Followers as log entries. Once a majority of nodes in the group have received a log entry, that entry is committed and can be applied to the state machine.
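To illustrate the majority rule described above, the following Go sketch shows when a log entry becomes committed in a three-replica group. The type and function names are illustrative only and are not taken from the TiKV codebase.

```go
package main

import "fmt"

// logEntry models a single Raft log record: a command plus the term in
// which the Leader proposed it. This is a hypothetical type for illustration.
type logEntry struct {
	term    uint64
	command string
}

// committed reports whether an entry can be applied to the state machine:
// it requires acknowledgements from a majority of the replica group
// (the Leader's own copy counts as one acknowledgement).
func committed(acks, replicas int) bool {
	return acks >= replicas/2+1
}

func main() {
	entry := logEntry{term: 3, command: "PUT k1=v1"}

	// In a 3-replica group, the Leader plus one Follower form a majority,
	// so the write survives the loss of the third replica.
	fmt.Println(entry.command, "committed:", committed(2, 3)) // true

	// With only the Leader's own copy, the entry is not yet committed.
	fmt.Println(entry.command, "committed:", committed(1, 3)) // false
}
```

This is why losing a minority of replicas does not lose committed data: every committed entry is already present on at least one surviving node, and Raft elects a new Leader that has it.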

The TiDB architecture fully supports geo-distribution and multi-active deployment, so your data and applications are always on. Outages are transparent to your applications, and your data recovers automatically. How well this works depends on network latency and stability; it is recommended to keep the latency within 5 ms. TiDB already has production use cases of this kind. For details, see Three Data Centers in Two Cities Deployment.
