TiDB 6.5.1 Release Notes
Release date: March 10, 2023
TiDB version: 6.5.1
Quick access: Quick start | Production deployment | Installation packages
Compatibility changes
Starting from February 20, 2023, the telemetry feature is disabled by default in new versions of TiDB and TiDB Dashboard, including v6.5.1, and usage information is not collected or shared with PingCAP. If a cluster uses the default telemetry configuration before upgrading to these versions, the telemetry feature is disabled after the upgrade. See TiDB Release Timeline for specific versions.
- The default value of the `tidb_enable_telemetry` system variable is changed from `ON` to `OFF`.
- The default value of the TiDB `enable-telemetry` configuration item is changed from `true` to `false`.
- The default value of the PD `enable-telemetry` configuration item is changed from `true` to `false`.
Starting from v1.11.3, the telemetry feature is disabled by default in newly deployed TiUP, and usage information is not collected. If you upgrade from a TiUP version earlier than v1.11.3 to v1.11.3 or a later version, the telemetry feature keeps the same status as before the upgrade.
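After an upgrade, you can confirm the telemetry status from a SQL client; the following is a minimal sketch using the `tidb_enable_telemetry` system variable mentioned above:

```sql
-- Check whether TiDB telemetry is enabled (OFF by default since v6.5.1)
SHOW VARIABLES LIKE 'tidb_enable_telemetry';

-- Optionally opt back in explicitly:
SET GLOBAL tidb_enable_telemetry = ON;
```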
- Modifying column types on partitioned tables is no longer supported because of potential correctness issues. #40620 @mjonss
- The default value of the TiKV `advance-ts-interval` configuration item is changed from `1s` to `20s`. You can modify this configuration item to reduce latency and improve the timeliness of Stale Read data. See Reduce Stale Read latency for details.
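If your workload needs fresher Stale Read data, you can set the interval back to a smaller value in the TiKV configuration file. A minimal sketch, assuming the item lives under the `[resolved-ts]` section (as the full name `resolved-ts.advance-ts-interval` suggests):

```toml
# tikv.toml: trade extra cross-region traffic for fresher Stale Read data
[resolved-ts]
advance-ts-interval = "1s"  # default is "20s" starting from v6.5.1
```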
Improvements
TiDB
Starting from v6.5.1, the TiDB cluster deployed by TiDB Operator v1.4.3 or higher supports IPv6 addresses. This means that TiDB can support a larger address space and bring you better security and network performance.
- Full support for IPv6 addressing: TiDB supports using IPv6 addresses for all network connections, including client connections, internal communication between nodes, and communication with external systems.
- Dual-stack support: If you are not ready to fully switch to IPv6 yet, TiDB also supports dual-stack networks. This means that you can use both IPv4 and IPv6 addresses in the same TiDB cluster and choose a network deployment mode that prioritizes IPv6 by configuration.
For more information on IPv6 deployment, see TiDB on Kubernetes documentation.
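As an illustration only (not part of the release notes), a dual-stack deployment typically binds the TiDB server to the IPv6 wildcard address so that both IPv4 and IPv6 clients can connect; whether IPv4-mapped connections are accepted depends on the operating system's dual-stack settings. The advertised address below is a hypothetical documentation-range value:

```toml
# tidb.toml sketch (assumption: OS dual-stack networking is enabled)
host = "::"                           # IPv6 wildcard; most Linux setups also accept IPv4-mapped clients
advertise-address = "[2001:db8::10]"  # hypothetical IPv6 address advertised to the cluster
```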
- Support specifying the SQL script executed upon TiDB cluster initialization #35624 @morgo

  TiDB v6.5.1 adds a new configuration item `initialize-sql-file`. When you start a TiDB cluster for the first time, you can specify the SQL script to be executed by configuring the command-line parameter `--initialize-sql-file`. You can use this feature when you need to perform operations such as modifying the value of a system variable, creating a user, or granting privileges. For more information, see documentation.

- Clear expired Region cache regularly to avoid memory leaks and performance degradation #40461 @sticnarf
- Add a new configuration item `--proxy-protocol-fallbackable` to control whether to enable PROXY protocol fallback mode. When this parameter is set to `true`, TiDB accepts both PROXY client connections and client connections without any PROXY protocol header #41409 @blacktear23
- Improve the accuracy of Memory Tracker #40900 #40500 @wshwsh12
- When the plan cache fails to take effect, the system returns the reason as a warning #40210 @qw4990
- Improve the optimizer strategy for out-of-range estimation #39008 @time-and-fate
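The `initialize-sql-file` feature above can be sketched as follows; the file name and the statements are hypothetical examples, not part of the release:

```sql
-- init.sql: executed once, when the cluster is bootstrapped for the first time
SET GLOBAL tidb_slow_log_threshold = 1000;
CREATE USER 'app'@'%' IDENTIFIED BY 'app_password';
GRANT SELECT, INSERT, UPDATE, DELETE ON app_db.* TO 'app'@'%';
```

You would then start the first tidb-server instance with `--initialize-sql-file=init.sql`; the script does not run again on subsequent restarts.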
TiKV
- Support starting TiKV on a CPU with less than 1 core #13586 #13752 #14017 @andreid-db
- Increase the thread limit of the Unified Read Pool (`readpool.unified.max-thread-count`) to 10 times the CPU quota, to better handle high-concurrency queries #13690 @v01dstar
- Change the default value of `resolved-ts.advance-ts-interval` from `"1s"` to `"20s"`, to reduce cross-region traffic #14100 @overvenus
TiFlash
Tools
Backup & Restore (BR)
TiCDC
- Enable pull-based sink to optimize system throughput #8232 @hi-rustin
- Support storing redo logs to GCS-compatible or Azure-compatible object storage #7987 @CharlesCheung96
- Implement MQ sink and MySQL sink in the asynchronous mode to improve the sink throughput #5928 @amyangfei @CharlesCheung96
Bug fixes
TiDB
- Fix the issue that the `pessimistic-auto-commit` configuration item does not take effect for point-get queries #39928 @zyguan
- Fix the issue that the `INSERT` or `REPLACE` statements might panic in long session connections #40351 @winoros
- Fix the issue that `auto analyze` causes graceful shutdown to take a long time #40038 @xuyifangreeneyes
- Fix the issue that data race might occur during DDL ingestion #40970 @tangenta
- Fix the issue that data race might occur when an index is added #40879 @tangenta
- Fix the issue that the adding index operation is inefficient due to invalid Region cache when there are many Regions in a table #38436 @tangenta
- Fix the issue that TiDB might deadlock during initialization #40408 @Defined2014
- Fix the issue that unexpected data is read because TiDB improperly handles `NULL` values when constructing key ranges #40158 @tiancaiamao
- Fix the issue that the value of system variables might be incorrectly modified in some cases due to memory reuse #40979 @lcwangchao
- Fix the issue that a TTL task fails if the primary key of the table contains an `ENUM` column #40456 @lcwangchao
- Fix the issue that TiDB panics when adding a unique index #40592 @tangenta
- Fix the issue that some truncate operations cannot be blocked by MDL when truncating the same table concurrently #40484 @wjhuang2016
- Fix the issue that TiDB cannot restart after global bindings are created for partition tables in dynamic trimming mode #40368 @Yisaer
- Fix the issue that reading data using the "cursor read" method might return an error because of GC #39447 @zyguan
- Fix the issue that the `EXECUTE` information is null in the result of `SHOW PROCESSLIST` #41156 @YangKeao
- Fix the issue that when `globalMemoryControl` is killing a query, the `KILL` operation might not end #41057 @wshwsh12
- Fix the issue that TiDB might panic after `indexMerge` encounters an error #41047 #40877 @guo-shaoge @windtalker
- Fix the issue that the `ANALYZE` statement might be terminated by `KILL` #41825 @XuHuaiyu
- Fix the issue that goroutine leak might occur in `indexMerge` #41545 #41605 @guo-shaoge
- Fix the issue of potential wrong results when comparing unsigned `TINYINT`/`SMALLINT`/`INT` values with `DECIMAL`/`FLOAT`/`DOUBLE` values smaller than `0` #41736 @LittleFall
- Fix the issue that enabling `tidb_enable_reuse_chunk` might lead to memory leaks #40987 @guo-shaoge
- Fix the issue that data race in time zone might cause data-index inconsistency #40710 @wjhuang2016
- Fix the issue that the scan detail information during the execution of `batch cop` might be inaccurate #41582 @you06
- Fix the issue that the upper concurrency of `cop` is not limited #41134 @you06
- Fix the issue that the `statement context` in `cursor read` is mistakenly cached #39998 @zyguan
- Periodically clean up stale Region cache to avoid memory leaks and performance degradation #40355 @sticnarf
- Fix the issue that using plan cache on queries that contain `year <cmp> const` might get a wrong result #41626 @qw4990
- Fix the issue of large estimation errors when querying with a large range and a large amount of data modification #39593 @time-and-fate
- Fix the issue that some conditions cannot be pushed down through Join operators when using Plan Cache #40093 #38205 @qw4990
- Fix the issue that IndexMerge plans might generate incorrect ranges on the SET type columns #41273 #41293 @time-and-fate
- Fix the issue that Plan Cache might cache FullScan plans when processing `int_col <cmp> decimal` conditions #40679 #41032 @qw4990
- Fix the issue that Plan Cache might cache FullScan plans when processing `int_col in (decimal...)` conditions #40224 @qw4990
- Fix the issue that the `ignore_plan_cache` hint might not work for `INSERT` statements #40079 #39717 @qw4990
- Fix the issue that Auto Analyze might hinder TiDB from exiting #40038 @xuyifangreeneyes
- Fix the issue that incorrect access intervals might be constructed on Unsigned Primary Keys in partitioned tables #40309 @winoros
- Fix the issue that Plan Cache might cache Shuffle operators and return incorrect results #38335 @qw4990
- Fix the issue that creating Global Binding on partitioned tables might cause TiDB to fail to start #40368 @Yisaer
- Fix the issue that query plan operators might be missing in slow logs #41458 @time-and-fate
- Fix the issue that incorrect results might be returned when TopN operators with virtual columns are mistakenly pushed down to TiKV or TiFlash #41355 @Dousir9
- Fix the issue of inconsistent data when adding indexes #40698 #40730 #41459 #40464 #40217 @tangenta
- Fix the issue of getting the `Pessimistic lock not found` error when adding indexes #41515 @tangenta
- Fix the issue of misreported duplicate key errors when adding unique indexes #41630 @tangenta
- Fix the issue of performance degradation when using `paging` in TiDB #40741 @solotzg
TiKV
- Fix the issue that Resolved TS causes higher network traffic #14092 @overvenus
- Fix the data inconsistency issue caused by network failure between TiDB and TiKV during the execution of a DML after a failed pessimistic DML #14038 @MyonKeminta
- Fix an error that occurs when casting the `const Enum` type to other types #14156 @wshwsh12
- Fix the issue that the paging in a cop task is inaccurate #14254 @you06
- Fix the issue that the `scan_detail` field is inaccurate in `batch_cop` mode #14109 @you06
- Fix a potential error in the Raft Engine that might cause TiKV to detect Raft data corruption and fail to restart #14338 @tonyxuqqi
PD
- Fix the issue that the execution of `replace-down-peer` slows down under certain conditions #5788 @HunDunDM
- Fix the issue that PD might unexpectedly add multiple Learners to a Region #5786 @HunDunDM
- Fix the issue that the Region Scatter task generates redundant replicas unexpectedly #5909 @HunDunDM
- Fix the PD OOM issue that occurs when the calls of `ReportMinResolvedTS` are too frequent #5965 @HunDunDM
- Fix the issue that the Region Scatter might cause uneven distribution of leaders #6017 @HunDunDM
TiFlash
- Fix the issue that semi-joins use excessive memory when calculating Cartesian products #6730 @gengliqi
- Fix the issue that TiFlash log search is too slow #6829 @hehechen
- Fix the issue that TiFlash cannot start because files are mistakenly deleted after repeated restarts #6486 @JaySon-Huang
- Fix the issue that TiFlash might report an error when performing a query after adding a new column #6726 @JaySon-Huang
- Fix the issue that TiFlash does not support IPv6 configuration #6734 @ywqzzy
Tools
Backup & Restore (BR)
- Fix the issue that the connection failure between PD and tidb-server causes PITR backup progress not to advance #41082 @YuJuncen
- Fix the issue that TiKV cannot listen to PITR tasks due to the connection failure between PD and TiKV #14159 @YuJuncen
- Fix the issue that PITR does not support configuration changes for PD clusters #14165 @YuJuncen
- Fix the issue that the PITR feature does not support CA-bundles #38775 @3pointer
- Fix the issue that when a PITR backup task is deleted, the residual backup data causes data inconsistency in new tasks #40403 @joccau
- Fix the issue that causes panic when BR parses the `backupmeta` file #40878 @MoCuishle28
- Fix the issue that restore is interrupted due to failure in getting the Region size #36053 @YuJuncen
- Fix the issue that the frequency of `resolve lock` is too high when there is no PITR backup task in the TiDB cluster #40759 @joccau
- Fix the issue that restoring data to a cluster on which log backup is running causes the log backup files to become unrestorable #40797 @Leavrth
- Fix the panic issue that occurs when attempting to resume backup from a checkpoint after a full backup failure #40704 @Leavrth
- Fix the issue that PITR errors are overwritten #40576 @Leavrth
- Fix the issue that checkpoints do not advance in PITR backup tasks when the advance owner and gc owner are different #41806 @joccau
TiCDC
- Fix the issue that changefeed might get stuck in special scenarios such as when scaling in or scaling out TiKV or TiCDC nodes #8174 @hicqu
- Fix the issue that precheck is not performed on the storage path of redo log #6335 @CharlesCheung96
- Fix the issue of insufficient duration that redo log can tolerate for S3 storage failure #8089 @CharlesCheung96
- Fix the issue that `transaction_atomicity` and `protocol` cannot be updated via the configuration file #7935 @CharlesCheung96
- Fix the issue that the checkpoint cannot advance when TiCDC replicates an excessively large number of tables #8004 @overvenus
- Fix the issue that applying redo log might cause OOM when the replication lag is excessively high #8085 @CharlesCheung96
- Fix the issue that the performance degrades when redo log is enabled to write meta #8074 @CharlesCheung96
- Fix a bug that the context deadline is exceeded when TiCDC replicates data without splitting large transactions #7982 @hi-rustin
- Fix the issue that pausing a changefeed when PD is abnormal results in incorrect status #8330 @sdojjy
- Fix the data inconsistency that occurs when replicating data to a TiDB or MySQL sink and when `CHARACTER SET` is specified on a column that has a non-null unique index without a primary key #8420 @asddongmen
- Fix the panic issue in table scheduling and blackhole sink #8024 #8142 @hicqu
TiDB Data Migration (DM)
- Fix the issue that the `binlog-schema delete` command fails to execute #7373 @liumengya94
- Fix the issue that the checkpoint does not advance when the last binlog is a skipped DDL #8175 @D3Hunter
- Fix a bug that when the expression filters of both "update" and "non-update" types are specified in one table, all
UPDATE
statements are skipped #7831 @lance6716
TiDB Lightning
- Fix the issue that TiDB Lightning prechecks cannot find dirty data left by previously failed imports #39477 @dsdashun
- Fix the issue that TiDB Lightning panics in the split-region phase #40934 @lance6716
- Fix the issue that the conflict resolution logic (`duplicate-resolution`) might lead to inconsistent checksums #40657 @gozssky
- Fix the issue that TiDB Lightning might incorrectly skip conflict resolution when all but the last TiDB Lightning instance encounters a local duplicate record during a parallel import #40923 @lichunzhu
- Fix the issue that when importing data in Local Backend mode, the target columns do not automatically generate data if the compound primary key of the imported target table has an `auto_random` column and no value for the column is specified in the source data #41454 @D3Hunter