
Commit 9424d36

joccau authored and Benjamin2037 committed
!430 7.1.8-5: update the v8.5.1 code to 7.1.8-5 || tidb_security_advanced_branch_pr=95

* executor: fix a bug that global temporary table send cop request (pingcap#588…
* statistics: fix the panic when to async load stats with dropped index …
* executor: fix prepared protocol charset (pingcap#58872) (pingcap#58931)
* *: Update client-go and verify all read ts (pingcap#58909)
* integration test: fix test case "br_pitr" (pingcap#58876)
* session: add indexes for `mysql.analyze_jobs` (pingcap#58134) (pingcap#58355)
* ddl: fix args count for modify column (pingcap#58855) (pingcap#58858)
* planner: correct plan when scan tidb related cluster table with KeepOr…
* planner: Fix vector not truncated after CBO (pingcap#58809) (pingcap#58844)
* ddl: Fix vector index for high dimensional vectors (pingcap#58717) (pingcap#58835)
* ddl: Fix issue with concurrent update getting reverted by BackfillData…
* statistics: stats cache set default quota as 20% (pingcap#58013) (pingcap#58817)
* executor: change the evaluation order of columns in `Update` and `Inse…
* statistics: add recover to protect background task (pingcap#58739) (pingcap#58767)
* ttl: fix the infinite waiting for delRateLimiter when `tidb_ttl_delete…
* ttl: reduce some warnings logs when locking TTL tasks (pingcap#58306) (pingcap#58783)
* ttl: retry the rows when del rate limiter returns error in delWorker (…
* ttl: reschedule task to other instances when shrinking worker (pingcap#57703) (pingcap#58778)
* ttl: fix the issue that one task losing heartbeat will block other tas…
* ttl: fix the issue that the task is not cancelled after transfering ow…
* ddl: fix job state overridden when concurrent updates don't overlap in…
* ttl: set the job history status to `cancelled` if it's removed in GC a…
* ttl: fix the timezone issue and panic in the caller of `getSession` (#…
* ddl: fix version syncer doesn't print who hasn't synced on partial syn…
* ttl: fix the issue that `DROP TABLE` / `ALTER TABLE` will keep job run…
* br/stream: allow pitr to create oversized indices (pingcap#58433) (pingcap#58527)
* ttl: set a result for timeout scan task during shrinking scan worker (…
* executor: fix time zone issue when querying slow log (pingcap#58455) (pingcap#58577)
* table: fix the issue that the default value for `BIT` column is wrong …
* statistics: temporarily skip handling errors for DDL events (pingcap#58609) (pingcap#58634)
* sessionctx: fix null max value to leading wrong warning (pingcap#57898) (pingcap#57935)
* planner: convert cartesian semi join with other nulleq condition to cr…
* planner: fix idxMergePartPlans forget to deal with RootTaskConds (pingcap#585…
* domain: change some stats log level as WARN (pingcap#58316) (pingcap#58555)
* planner: quickly get total count from index/column (pingcap#58365) (pingcap#58431)
* planner, expr: eval readonly user var during plan phase (pingcap#54462) (pingcap#58540)
* metrics: add col/idx name(s) for BackfillProgressGauge and BackfillTot…
* br: refactor test to use wait checkpoint method (pingcap#57612) (pingcap#58498)
* executor: reuse chunk in hash join v2 during restoring (pingcap#56936) (pingcap#58018)
* executor: fix goroutine leak when exceed quota in hash agg (pingcap#58078) (pingcap#58462)
* copr: fix the issue that busy threshold may redirect batch copr to fol…
* statistics: skip non-exicted table when to init stats (pingcap#58381) (pingcap#58394)
* planner: fix incorrectly using the schema for plan cache (pingcap#57964) (pingcap#58090)
* *: use DDL subscriber updating stats meta (pingcap#57872) (pingcap#58387)
* planner, runtime_filter: Remove redundant logs whose meaning can be di…
* statistics: remove dead code (pingcap#58412) (pingcap#58442)
* planner: Use/force to apply prefer range scan (pingcap#56928) (pingcap#58444)
* statistics: gc the statistics correctly after drop the database (pingcap#5730…
* ddl: Fixed partition interval from DayMinute to just Minute. (pingcap#57738) (pingcap#58019)
* executor: Enlarge the timeout for fetching TiFlash system tables (pingcap#579

1 parent 485c646 · commit 9424d36

226 files changed: +5026 -2308 lines


DEPS.bzl

Lines changed: 6 additions & 6 deletions

@@ -6985,13 +6985,13 @@ def go_deps():
         name = "com_github_tikv_client_go_v2",
         build_file_proto_mode = "disable_global",
         importpath = "github.com/tikv/client-go/v2",
-        sha256 = "cdcad188042c4d716dd9d4a304a2e36bc9d4edccaf86a19b85b1682f01df193c",
-        strip_prefix = "github.com/tikv/client-go/[email protected].20241121061241-006dfb024c26",
+        sha256 = "f896c60fc81339cc03e7dcb38b70bcfa368bbb2a94c6c7f3f5da6a4795d237f0",
+        strip_prefix = "github.com/tikv/client-go/[email protected].20250113074216-d66e460ff577",
         urls = [
-            "http://bazel-cache.pingcap.net:8080/gomod/github.com/tikv/client-go/v2/com_github_tikv_client_go_v2-v2.0.8-0.20241121061241-006dfb024c26.zip",
-            "http://ats.apps.svc/gomod/github.com/tikv/client-go/v2/com_github_tikv_client_go_v2-v2.0.8-0.20241121061241-006dfb024c26.zip",
-            "https://cache.hawkingrei.com/gomod/github.com/tikv/client-go/v2/com_github_tikv_client_go_v2-v2.0.8-0.20241121061241-006dfb024c26.zip",
-            "https://storage.googleapis.com/pingcapmirror/gomod/github.com/tikv/client-go/v2/com_github_tikv_client_go_v2-v2.0.8-0.20241121061241-006dfb024c26.zip",
+            "http://bazel-cache.pingcap.net:8080/gomod/github.com/tikv/client-go/v2/com_github_tikv_client_go_v2-v2.0.8-0.20250113074216-d66e460ff577.zip",
+            "http://ats.apps.svc/gomod/github.com/tikv/client-go/v2/com_github_tikv_client_go_v2-v2.0.8-0.20250113074216-d66e460ff577.zip",
+            "https://cache.hawkingrei.com/gomod/github.com/tikv/client-go/v2/com_github_tikv_client_go_v2-v2.0.8-0.20250113074216-d66e460ff577.zip",
+            "https://storage.googleapis.com/pingcapmirror/gomod/github.com/tikv/client-go/v2/com_github_tikv_client_go_v2-v2.0.8-0.20250113074216-d66e460ff577.zip",
         ],
     )
     go_repository(

br/pkg/restore/snap_client/systable_restore_test.go

Lines changed: 1 addition & 1 deletion

@@ -116,5 +116,5 @@ func TestCheckSysTableCompatibility(t *testing.T) {
 //
 // The above variables are in the file br/pkg/restore/systable_restore.go
 func TestMonitorTheSystemTableIncremental(t *testing.T) {
-    require.Equal(t, int64(218), session.CurrentBootstrapVersion)
+    require.Equal(t, int64(219), session.CurrentBootstrapVersion)
 }

br/pkg/task/common_test.go

Lines changed: 1 addition & 1 deletion

@@ -99,7 +99,7 @@ func TestUrlNoQuery(t *testing.T) {

 func TestTiDBConfigUnchanged(t *testing.T) {
     cfg := config.GetGlobalConfig()
-    restoreConfig := enableTiDBConfig()
+    restoreConfig := tweakLocalConfForRestore()
     require.NotEqual(t, config.GetGlobalConfig(), cfg)
     restoreConfig()
     require.Equal(t, config.GetGlobalConfig(), cfg)

br/pkg/task/restore.go

Lines changed: 3 additions & 3 deletions

@@ -980,7 +980,7 @@ func runSnapshotRestore(c context.Context, mgr *conn.Mgr, g glue.Glue, cmdName s
     }

     // pre-set TiDB config for restore
-    restoreDBConfig := enableTiDBConfig()
+    restoreDBConfig := tweakLocalConfForRestore()
     defer restoreDBConfig()

     if client.GetSupportPolicy() {
@@ -1415,9 +1415,9 @@ func filterRestoreFiles(
     return
 }

-// enableTiDBConfig tweaks some of configs of TiDB to make the restore progress go well.
+// tweakLocalConfForRestore tweaks some of configs of TiDB to make the restore progress go well.
 // return a function that could restore the config to origin.
-func enableTiDBConfig() func() {
+func tweakLocalConfForRestore() func() {
     restoreConfig := config.RestoreFunc()
     config.UpdateGlobal(func(conf *config.Config) {
         // set max-index-length before execute DDLs and create tables
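
The rename keeps the helper's save-and-restore shape visible in the hunk above: snapshot the global TiDB config, apply restore-friendly overrides (for example a larger max-index-length), and hand back a closure that puts the original values back. A minimal standalone sketch of that pattern — the names `Config`, `globalConf`, and `tweakLocalConfForRestoreSketch` are illustrative, not code from this commit, which uses `config.RestoreFunc()` and `config.UpdateGlobal` from the TiDB config package:

package main

import "fmt"

// Config stands in for TiDB's global config; only the one field the hunk's
// comment mentions (max-index-length) is modeled here.
type Config struct {
	MaxIndexLength int
}

var globalConf = Config{MaxIndexLength: 3072}

// tweakLocalConfForRestoreSketch mirrors the shape of tweakLocalConfForRestore:
// capture the current global config, apply restore-time overrides, and return
// a closure that restores the original values.
func tweakLocalConfForRestoreSketch() func() {
	saved := globalConf               // analogous to config.RestoreFunc()
	globalConf.MaxIndexLength = 12288 // allow oversized indices while restoring
	return func() { globalConf = saved }
}

func main() {
	undo := tweakLocalConfForRestoreSketch()
	defer undo() // same `defer restoreDBConfig()` / `defer restoreCfg()` pattern as the diffs
	fmt.Println(globalConf.MaxIndexLength) // 12288 for the duration of the restore
}

The common_test.go hunk above exercises exactly this contract: the global config differs right after the call and is equal to the original again after invoking the returned function.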

br/pkg/task/stream.go

Lines changed: 3 additions & 0 deletions

@@ -1287,6 +1287,9 @@ func restoreStream(
     ctx, cancelFn := context.WithCancel(c)
     defer cancelFn()

+    restoreCfg := tweakLocalConfForRestore()
+    defer restoreCfg()
+
     if span := opentracing.SpanFromContext(ctx); span != nil && span.Tracer() != nil {
         span1 := span.Tracer().StartSpan(
             "restoreStream",

br/tests/br_encryption/run.sh

Lines changed: 2 additions & 35 deletions

@@ -59,39 +59,6 @@ insert_additional_data() {
     done
 }

-wait_log_checkpoint_advance() {
-    echo "wait for log checkpoint to advance"
-    sleep 10
-    local current_ts=$(python3 -c "import time; print(int(time.time() * 1000) << 18)")
-    echo "current ts: $current_ts"
-    i=0
-    while true; do
-        # extract the checkpoint ts of the log backup task. If there is some error, the checkpoint ts should be empty
-        log_backup_status=$(unset BR_LOG_TO_TERM && run_br --skip-goleak --pd $PD_ADDR log status --task-name $TASK_NAME --json 2>br.log)
-        echo "log backup status: $log_backup_status"
-        local checkpoint_ts=$(echo "$log_backup_status" | head -n 1 | jq 'if .[0].last_errors | length == 0 then .[0].checkpoint else empty end')
-        echo "checkpoint ts: $checkpoint_ts"
-
-        # check whether the checkpoint ts is a number
-        if [ $checkpoint_ts -gt 0 ] 2>/dev/null; then
-            if [ $checkpoint_ts -gt $current_ts ]; then
-                echo "the checkpoint has advanced"
-                break
-            fi
-            echo "the checkpoint hasn't advanced"
-            i=$((i+1))
-            if [ "$i" -gt 50 ]; then
-                echo 'the checkpoint lag is too large'
-                exit 1
-            fi
-            sleep 10
-        else
-            echo "TEST: [$TEST_NAME] failed to wait checkpoint advance!"
-            exit 1
-        fi
-    done
-}
-
 calculate_checksum() {
     local db=$1
     local checksum=$(run_sql "USE $db; ADMIN CHECKSUM TABLE $TABLE;" | awk '/CHECKSUM/{print $2}')
@@ -170,7 +137,7 @@ run_backup_restore_test() {
         checksum_ori[${i}]=$(calculate_checksum "$DB${i}") || { echo "Failed to calculate checksum after insertion"; exit 1; }
     done

-    wait_log_checkpoint_advance || { echo "Failed to wait for log checkpoint"; exit 1; }
+    . "$CUR/../br_test_utils.sh" && wait_log_checkpoint_advance $TASK_NAME || { echo "Failed to wait for log checkpoint"; exit 1; }

     #sanity check pause still works
     run_br log pause --task-name $TASK_NAME --pd $PD_ADDR || { echo "Failed to pause log backup"; exit 1; }
@@ -270,7 +237,7 @@ test_backup_encrypted_restore_unencrypted() {
     # Insert additional test data
     insert_additional_data "insert_after_full_backup" || { echo "Failed to insert additional data"; exit 1; }

-    wait_log_checkpoint_advance || { echo "Failed to wait for log checkpoint"; exit 1; }
+    . "$CUR/../br_test_utils.sh" && wait_log_checkpoint_advance $TASK_NAME || { echo "Failed to wait for log checkpoint"; exit 1; }


     # Stop and clean the cluster
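
Both the removed inline loop and the shared `wait_log_checkpoint_advance` helper compare the task's checkpoint against a timestamp built from wall-clock time via `int(time.time() * 1000) << 18`: TiDB/PD TSO timestamps keep the physical milliseconds in the high bits and an 18-bit logical counter in the low bits. A small illustrative Go sketch of that composition (standalone, not code from this commit):

package main

import (
	"fmt"
	"time"
)

// TiDB/PD TSO layout: physical milliseconds shifted left by 18 bits,
// with the 18-bit logical counter occupying the low bits.
const logicalBits = 18

// composeTS matches the scripts' `int(time.time() * 1000) << 18` expression,
// leaving the logical part zero.
func composeTS(t time.Time) uint64 {
	return uint64(t.UnixMilli()) << logicalBits
}

func main() {
	now := composeTS(time.Now())
	// A reported checkpoint TS greater than `now` means log backup has advanced
	// past the moment the test recorded, which is the condition the wait loop
	// polls for before proceeding.
	fmt.Println(now)
}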

br/tests/br_pitr/config/tidb-max-index-length.toml

Lines changed: 11 additions & 0 deletions

@@ -0,0 +1,11 @@
+# config of tidb
+
+max-index-length = 12288
+
+[security]
+ssl-ca = "/tmp/backup_restore_test/certs/ca.pem"
+ssl-cert = "/tmp/backup_restore_test/certs/tidb.pem"
+ssl-key = "/tmp/backup_restore_test/certs/tidb.key"
+cluster-ssl-ca = "/tmp/backup_restore_test/certs/ca.pem"
+cluster-ssl-cert = "/tmp/backup_restore_test/certs/tidb.pem"
+cluster-ssl-key = "/tmp/backup_restore_test/certs/tidb.key"

br/tests/br_pitr/incremental_data/ingest_repair.sql

Lines changed: 3 additions & 0 deletions

@@ -46,3 +46,6 @@ ALTER TABLE test.pairs9 CHANGE y y2 varchar(20);

 -- test partition
 ALTER TABLE test.pairs10 ADD INDEX i1(y);
+
+
+CREATE INDEX huge ON test.huge_idx(blob1, blob2);

br/tests/br_pitr/prepare_data/ingest_repair.sql

Lines changed: 3 additions & 1 deletion

@@ -43,4 +43,6 @@ INSERT INTO test.pairs10 VALUES (1,1,"1"),(2,2,"2"),(3,3,"3"),(4,4,"4"),(5,5,"5"
 -- test no need to repair
 CREATE TABLE test.pairs11(x int auto_increment primary key, y int DEFAULT RAND(), z int DEFAULT RAND());
INSERT INTO test.pairs11 VALUES (),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),();
-ALTER TABLE test.pairs11 ADD UNIQUE KEY u1(x, y);
+ALTER TABLE test.pairs11 ADD UNIQUE KEY u1(x, y);
+
+CREATE TABLE test.huge_idx(id int AUTO_INCREMENT, blob1 varchar(1000), blob2 varchar(1000));
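
The new `huge` index covers two `varchar(1000)` columns. With a 4-byte-per-character charset such as `utf8mb4`, the key can reach roughly 8000 bytes, which is beyond TiDB's default `max-index-length` of 3072 and is why the PITR test starts TiDB with the raised 12288 limit from the config file above. A back-of-the-envelope check (illustrative only; exact key-size accounting in TiDB also includes per-column overhead):

package main

import "fmt"

func main() {
	const bytesPerChar = 4  // utf8mb4 worst case
	const varcharLen = 1000 // blob1 and blob2 in test.huge_idx
	keyLen := 2 * varcharLen * bytesPerChar

	fmt.Println(keyLen)          // 8000
	fmt.Println(keyLen > 3072)   // true: exceeds the default max-index-length
	fmt.Println(keyLen <= 12288) // true: fits under the limit in tidb-max-index-length.toml
}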

br/tests/br_pitr/run.sh

Lines changed: 16 additions & 44 deletions

@@ -21,10 +21,17 @@ CUR=$(cd `dirname $0`; pwd)
 # const value
 PREFIX="pitr_backup" # NOTICE: don't start with 'br' because `restart services` would remove file/directory br*.
 res_file="$TEST_DIR/sql_res.$TEST_NAME.txt"
+TASK_NAME="br_pitr"
+
+restart_services_allowing_huge_index() {
+    echo "restarting services with huge indices enabled..."
+    stop_services
+    start_services --tidb-cfg "$CUR/config/tidb-max-index-length.toml"
+    echo "restart services done..."
+}

 # start a new cluster
-echo "restart a services"
-restart_services
+restart_services_allowing_huge_index

 # prepare the data
 echo "prepare the data"
@@ -38,7 +45,7 @@ echo "prepare_delete_range_count: $prepare_delete_range_count"

 # start the log backup task
 echo "start log task"
-run_br --pd $PD_ADDR log start --task-name integration_test -s "local://$TEST_DIR/$PREFIX/log"
+run_br --pd $PD_ADDR log start --task-name $TASK_NAME -s "local://$TEST_DIR/$PREFIX/log"

 # run snapshot backup
 echo "run snapshot backup"
@@ -70,39 +77,8 @@ incremental_delete_range_count=$(run_sql "select count(*) DELETE_RANGE_CNT from
 echo "incremental_delete_range_count: $incremental_delete_range_count"

 # wait checkpoint advance
-echo "wait checkpoint advance"
-sleep 10
 current_ts=$(python3 -c "import time; print(int(time.time() * 1000) << 18)")
-echo "current ts: $current_ts"
-i=0
-while true; do
-    # extract the checkpoint ts of the log backup task. If there is some error, the checkpoint ts should be empty
-    log_backup_status=$(unset BR_LOG_TO_TERM && run_br --skip-goleak --pd $PD_ADDR log status --task-name integration_test --json 2>br.log)
-    echo "log backup status: $log_backup_status"
-    checkpoint_ts=$(echo "$log_backup_status" | head -n 1 | jq 'if .[0].last_errors | length == 0 then .[0].checkpoint else empty end')
-    echo "checkpoint ts: $checkpoint_ts"
-
-    # check whether the checkpoint ts is a number
-    if [ $checkpoint_ts -gt 0 ] 2>/dev/null; then
-        # check whether the checkpoint has advanced
-        if [ $checkpoint_ts -gt $current_ts ]; then
-            echo "the checkpoint has advanced"
-            break
-        fi
-        # the checkpoint hasn't advanced
-        echo "the checkpoint hasn't advanced"
-        i=$((i+1))
-        if [ "$i" -gt 50 ]; then
-            echo 'the checkpoint lag is too large'
-            exit 1
-        fi
-        sleep 10
-    else
-        # unknown status, maybe somewhere is wrong
-        echo "TEST: [$TEST_NAME] failed to wait checkpoint advance!"
-        exit 1
-    fi
-done
+. "$CUR/../br_test_utils.sh" && wait_log_checkpoint_advance $TASK_NAME

 # dump some info from upstream cluster
 # ...
@@ -122,8 +98,7 @@ check_result() {
 }

 # start a new cluster
-echo "restart services"
-restart_services
+restart_services_allowing_huge_index

 # non-compliant operation
 echo "non compliant operation"
@@ -142,8 +117,7 @@ run_br --pd $PD_ADDR restore point -s "local://$TEST_DIR/$PREFIX/log" --full-bac
 check_result

 # start a new cluster for incremental + log
-echo "restart services"
-restart_services
+restart_services_allowing_huge_index

 echo "run snapshot restore#2"
 run_br --pd $PD_ADDR restore full -s "local://$TEST_DIR/$PREFIX/full"
@@ -155,7 +129,7 @@ check_result

 # start a new cluster for incremental + log
 echo "restart services"
-restart_services
+restart_services_allowing_huge_index

 echo "run snapshot restore#3"
 run_br --pd $PD_ADDR restore full -s "local://$TEST_DIR/$PREFIX/full"
@@ -169,8 +143,7 @@ if [ $restore_fail -ne 1 ]; then
 fi

 # start a new cluster for corruption
-echo "restart a services"
-restart_services
+restart_services_allowing_huge_index

 file_corruption() {
     echo "corrupt the whole log files"
@@ -196,8 +169,7 @@ if [ $restore_fail -ne 1 ]; then
 fi

 # start a new cluster for corruption
-echo "restart a services"
-restart_services
+restart_services_allowing_huge_index

 file_lost() {
     echo "lost the whole log files"
