session: add indexes for mysql.analyze_jobs
#58134
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## master #58134 +/- ##
================================================
+ Coverage 73.2296% 75.2632% +2.0335%
================================================
Files 1675 1724 +49
Lines 462254 472345 +10091
================================================
+ Hits 338507 355502 +16995
+ Misses 102970 94676 -8294
- Partials 20777 22167 +1390
Flags with carried forward coverage won't be shown.
Tested locally:
Before:
explain SELECT
MIN(TIMESTAMPDIFF(SECOND, aj.start_time, CURRENT_TIMESTAMP)) AS min_duration
FROM (
SELECT
MAX(id) AS max_id
FROM
mysql.analyze_jobs
WHERE
table_schema = '1'
AND table_name = '1'
AND state = 'failed'
AND partition_name IN ('1')
GROUP BY
partition_name
) AS latest_failures
JOIN mysql.analyze_jobs aj ON aj.id = latest_failures.max_id;
+------------------------------------+--------+---------+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|id |estRows |task |access object |operator info |
+------------------------------------+--------+---------+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|StreamAgg_18 |1.00 |root | |funcs:min(Column#31)->Column#28 |
|└─Projection_52 |1.00 |root | |timestampdiff(SECOND, mysql.analyze_jobs.start_time, 2024-12-16 17:25:42)->Column#31 |
| └─Projection_50 |1.00 |root | |mysql.analyze_jobs.start_time |
| └─TopN_21 |1.00 |root | |Column#30, offset:0, count:1 |
| └─Projection_51 |1.00 |root | |mysql.analyze_jobs.start_time, timestampdiff(SECOND, mysql.analyze_jobs.start_time, 2024-12-16 17:25:42)->Column#30 |
| └─IndexJoin_29 |1.00 |root | |inner join, inner:Selection_25, outer key:Column#14, inner key:mysql.analyze_jobs.id, equal cond:eq(Column#14, mysql.analyze_jobs.id) |
| ├─Selection_36(Build) |0.80 |root | |not(isnull(Column#14)) |
| │ └─HashAgg_39 |1.00 |root | |group by:mysql.analyze_jobs.partition_name, funcs:max(mysql.analyze_jobs.id)->Column#14 |
| │ └─TableReader_46 |0.00 |root | |data:Selection_45 |
| │ └─Selection_45 |0.00 |cop[tikv]| |eq(mysql.analyze_jobs.partition_name, "1"), eq(mysql.analyze_jobs.state, "failed"), eq(mysql.analyze_jobs.table_name, "1"), eq(mysql.analyze_jobs.table_schema, "1")|
| │ └─TableFullScan_44|10000.00|cop[tikv]|table:analyze_jobs|keep order:false, stats:pseudo |
| └─Selection_25(Probe) |0.64 |root | |not(isnull(timestampdiff("SECOND", mysql.analyze_jobs.start_time, 2024-12-16 17:25:42))) |
| └─TableReader_24 |0.80 |root | |data:TableRangeScan_23 |
| └─TableRangeScan_23 |0.80 |cop[tikv]|table:aj |range: decided by [Column#14], keep order:false, stats:pseudo |
+------------------------------------+--------+---------+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------+
explain SELECT
MIN(TIMESTAMPDIFF(SECOND, aj.start_time, CURRENT_TIMESTAMP)) AS min_duration
FROM (
SELECT
MAX(id) AS max_id
FROM
mysql.analyze_jobs
WHERE
table_schema = '1'
AND table_name = '1'
AND state = 'failed'
AND partition_name IN ('1')
GROUP BY
partition_name
) AS latest_failures
JOIN mysql.analyze_jobs aj ON aj.id = latest_failures.max_id;
+----------------------------+--------+---------+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|id |estRows |task |access object |operator info |
+----------------------------+--------+---------+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|StreamAgg_12 |1.00 |root | |funcs:avg(Column#17)->Column#16 |
|└─Projection_26 |0.00 |root | |cast(timestampdiff(SECOND, mysql.analyze_jobs.start_time, mysql.analyze_jobs.end_time), decimal(20,0) BINARY)->Column#17 |
| └─TopN_13 |0.00 |root | |mysql.analyze_jobs.id:desc, offset:0, count:5 |
| └─TableReader_21 |0.00 |root | |data:TopN_20 |
| └─TopN_20 |0.00 |cop[tikv]| |mysql.analyze_jobs.id:desc, offset:0, count:5 |
| └─Selection_19 |0.00 |cop[tikv]| |eq(mysql.analyze_jobs.partition_name, ""), eq(mysql.analyze_jobs.state, "finished"), eq(mysql.analyze_jobs.table_name, "1"), eq(mysql.analyze_jobs.table_schema, "1"), isnull(mysql.analyze_jobs.fail_reason)|
| └─TableFullScan_18|10000.00|cop[tikv]|table:analyze_jobs|keep order:false, stats:pseudo |
+----------------------------+--------+---------+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
After:
explain SELECT
MIN(TIMESTAMPDIFF(SECOND, aj.start_time, CURRENT_TIMESTAMP)) AS min_duration
FROM (
SELECT
MAX(id) AS max_id
FROM
mysql.analyze_jobs
WHERE
table_schema = '1'
AND table_name = '1'
AND state = 'failed'
AND partition_name IN ('1')
GROUP BY
partition_name
) AS latest_failures
JOIN mysql.analyze_jobs aj ON aj.id = latest_failures.max_id;
StreamAgg_18,1.00,root,"",funcs:min(Column#32)->Column#28
└─Projection_57,1.00,root,"","timestampdiff(SECOND, mysql.analyze_jobs.start_time, 2024-12-16 17:36:19)->Column#32"
└─Projection_55,1.00,root,"",mysql.analyze_jobs.start_time
└─TopN_21,1.00,root,"","Column#31, offset:0, count:1"
└─Projection_56,1.00,root,"","mysql.analyze_jobs.start_time, timestampdiff(SECOND, mysql.analyze_jobs.start_time, 2024-12-16 17:36:19)->Column#31"
└─IndexJoin_29,1.00,root,"","inner join, inner:Selection_25, outer key:Column#14, inner key:mysql.analyze_jobs.id, equal cond:eq(Column#14, mysql.analyze_jobs.id)"
├─Selection_36(Build),0.80,root,"",not(isnull(Column#14))
│ └─StreamAgg_41,1.00,root,"","group by:mysql.analyze_jobs.partition_name, funcs:max(mysql.analyze_jobs.id)->Column#14"
│ └─IndexReader_51,0.00,root,"",index:IndexRangeScan_50
│ └─IndexRangeScan_50,0.00,cop[tikv],"table:analyze_jobs, index:idx_schema_table_partition_state(table_schema, table_name, partition_name, state)","range:[""1"" ""1"" ""1"" ""failed"",""1"" ""1"" ""1"" ""failed""], keep order:true, stats:pseudo"
└─Selection_25(Probe),0.64,root,"","not(isnull(timestampdiff(""SECOND"", mysql.analyze_jobs.start_time, 2024-12-16 17:36:19)))"
└─TableReader_24,0.80,root,"",data:TableRangeScan_23
└─TableRangeScan_23,0.80,cop[tikv],table:aj,"range: decided by [Column#14], keep order:false, stats:pseudo"
explain SELECT
MIN(TIMESTAMPDIFF(SECOND, aj.start_time, CURRENT_TIMESTAMP)) AS min_duration
FROM (
SELECT
MAX(id) AS max_id
FROM
mysql.analyze_jobs
WHERE
table_schema = '1'
AND table_name = '1'
AND state = 'failed'
AND partition_name IN ('1')
GROUP BY
partition_name
) AS latest_failures
JOIN mysql.analyze_jobs aj ON aj.id = latest_failures.max_id;
StreamAgg_12,1.00,root,"",funcs:avg(Column#17)->Column#16
└─Projection_27,0.00,root,"","cast(timestampdiff(SECOND, mysql.analyze_jobs.start_time, mysql.analyze_jobs.end_time), decimal(20,0) BINARY)->Column#17"
└─TopN_14,0.00,root,"","mysql.analyze_jobs.id:desc, offset:0, count:5"
└─IndexLookUp_22,0.00,root,"",""
├─IndexRangeScan_18(Build),0.00,cop[tikv],"table:analyze_jobs, index:idx_schema_table_partition_state(table_schema, table_name, partition_name, state)","range:[""1"" ""1"" """" ""finished"",""1"" ""1"" """" ""finished""], keep order:false, stats:pseudo"
└─TopN_21(Probe),0.00,cop[tikv],"","mysql.analyze_jobs.id:desc, offset:0, count:5"
└─Selection_20,0.00,cop[tikv],"",isnull(mysql.analyze_jobs.fail_reason)
└─TableRowIDScan_19,0.00,cop[tikv],table:analyze_jobs,"keep order:false, stats:pseudo"
🔢 Self-check (PR reviewed by myself and ready for feedback.)
/test all
/approve for bootstrap
🔢 Self-check (PR reviewed by myself and ready for feedback.)
/hold
Tested again:
tiup playground v8.4.0 --db.binpath /Users/rustin/code/tidb/bin/tidb-server
mysql> show create table mysql.analyze_jobs;
ERROR 2013 (HY000): Lost connection to MySQL server during query
No connection. Trying to reconnect...
Connection id: 3338665990
Current database: *** NONE ***
+--------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+--------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| analyze_jobs | CREATE TABLE `analyze_jobs` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`table_schema` char(64) NOT NULL DEFAULT '',
`table_name` char(64) NOT NULL DEFAULT '',
`partition_name` char(64) NOT NULL DEFAULT '',
`job_info` text NOT NULL,
`processed_rows` bigint unsigned NOT NULL DEFAULT '0',
`start_time` timestamp NULL DEFAULT NULL,
`end_time` timestamp NULL DEFAULT NULL,
`state` enum('pending','running','finished','failed') NOT NULL,
`fail_reason` text DEFAULT NULL,
`instance` varchar(512) NOT NULL COMMENT 'address of the TiDB instance executing the analyze job',
`process_id` bigint unsigned DEFAULT NULL COMMENT 'ID of the process executing the analyze job',
PRIMARY KEY (`id`) /*T![clustered_index] CLUSTERED */,
KEY `update_time` (`update_time`),
KEY `idx_schema_table_state` (`table_schema`,`table_name`,`state`),
KEY `idx_schema_table_partition_state` (`table_schema`,`table_name`,`partition_name`,`state`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin |
+--------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.03 sec)
tiup playground v8.4.0 --tag 20240329-test5
mysql> show create table mysql.analyze_jobs;
ERROR 2013 (HY000): Lost connection to MySQL server during query
No connection. Trying to reconnect...
Connection id: 4078960646
Current database: *** NONE ***
+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| analyze_jobs | CREATE TABLE `analyze_jobs` (
`id` bigint(64) unsigned NOT NULL AUTO_INCREMENT,
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`table_schema` char(64) NOT NULL DEFAULT '',
`table_name` char(64) NOT NULL DEFAULT '',
`partition_name` char(64) NOT NULL DEFAULT '',
`job_info` text NOT NULL,
`processed_rows` bigint(64) unsigned NOT NULL DEFAULT '0',
`start_time` timestamp NULL DEFAULT NULL,
`end_time` timestamp NULL DEFAULT NULL,
`state` enum('pending','running','finished','failed') NOT NULL,
`fail_reason` text DEFAULT NULL,
`instance` varchar(512) NOT NULL COMMENT 'address of the TiDB instance executing the analyze job',
`process_id` bigint(64) unsigned DEFAULT NULL COMMENT 'ID of the process executing the analyze job',
PRIMARY KEY (`id`) /*T![clustered_index] CLUSTERED */,
KEY `update_time` (`update_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin |
+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)
tiup playground v8.4.0 --db.binpath /Users/rustin/code/tidb/bin/tidb-server --tag 20240329-test5
mysql> show create table mysql.analyze_jobs;
ERROR 2013 (HY000): Lost connection to MySQL server during query
No connection. Trying to reconnect...
Connection id: 786432006
Current database: *** NONE ***
+--------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+--------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| analyze_jobs | CREATE TABLE `analyze_jobs` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`table_schema` char(64) NOT NULL DEFAULT '',
`table_name` char(64) NOT NULL DEFAULT '',
`partition_name` char(64) NOT NULL DEFAULT '',
`job_info` text NOT NULL,
`processed_rows` bigint unsigned NOT NULL DEFAULT '0',
`start_time` timestamp NULL DEFAULT NULL,
`end_time` timestamp NULL DEFAULT NULL,
`state` enum('pending','running','finished','failed') NOT NULL,
`fail_reason` text DEFAULT NULL,
`instance` varchar(512) NOT NULL COMMENT 'address of the TiDB instance executing the analyze job',
`process_id` bigint unsigned DEFAULT NULL COMMENT 'ID of the process executing the analyze job',
PRIMARY KEY (`id`) /*T![clustered_index] CLUSTERED */,
KEY `update_time` (`update_time`),
KEY `idx_schema_table_state` (`table_schema`,`table_name`,`state`),
KEY `idx_schema_table_partition_state` (`table_schema`,`table_name`,`partition_name`,`state`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin |
+--------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.02 sec)
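As an additional sanity check for the upgrade path, a minimal sketch of verifying that the bootstrap version was bumped, assuming it is recorded in the mysql.tidb table under the tidb_server_version key (the key name is an assumption here, not taken from this PR; the expected value is whatever this PR bumps the version to):
-- Verify the cluster picked up the new bootstrap version after the upgrade.
mysql> SELECT VARIABLE_VALUE FROM mysql.tidb WHERE VARIABLE_NAME = 'tidb_server_version';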
/unhold
🔢 Self-check (PR reviewed by myself and ready for feedback.)
/retest
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: benmeadowcroft, D3Hunter, Leavrth, qw4990, tiancaiamao
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/retest
In response to a cherrypick label: new pull request created to branch
@Rustin170506: The following test failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
I tested this PR again locally to evaluate the performance of creating the indexes. For 100k rows, it takes 16 seconds to create them; that is not terribly slow, but it still takes some time. So I decided to undo part of this change: we will only create the new indexes for new clusters, and we will not create them for old clusters during the upgrade process. Normally, for smaller clusters, this would not be a problem, but for some huge clusters we can ask users to create the indexes manually instead of blocking the upgrade process.
> ALTER TABLE mysql.analyze_jobs ADD INDEX `idx_schema_table_state` (`table_schema`, `table_name`, `state`)
[2024-12-30 14:17:40] completed in 5 s 755 ms
> ALTER TABLE mysql.analyze_jobs ADD INDEX `idx_schema_table_partition_state` (`table_schema`, `table_name`, `partition_name`, `state`)
[2024-12-30 14:17:52] completed in 11 s 860 ms
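For operators of large existing clusters that skip the automatic creation, a minimal sketch of adding and verifying the indexes by hand; the index names and column lists come from the SHOW CREATE TABLE output above, while the IF NOT EXISTS clause is an assumption about the TiDB version in use and can be dropped where unsupported:
-- Check whether the indexes are already present.
SHOW INDEX FROM mysql.analyze_jobs;
-- Add them manually if missing; IF NOT EXISTS is assumed to be supported here.
ALTER TABLE mysql.analyze_jobs ADD INDEX IF NOT EXISTS `idx_schema_table_state` (`table_schema`, `table_name`, `state`);
ALTER TABLE mysql.analyze_jobs ADD INDEX IF NOT EXISTS `idx_schema_table_partition_state` (`table_schema`, `table_name`, `partition_name`, `state`);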
…_pr=95
* executor: fix a bug that global temporary table send cop request (pingcap#588…
* statistics: fix the panic when to async load stats with dropped index …
* executor: fix prepared protocol charset (pingcap#58872) (pingcap#58931)
* *: Update client-go and verify all read ts (pingcap#58909)
* integration test: fix test case "br_pitr" (pingcap#58876)
* session: add indexes for `mysql.analyze_jobs` (pingcap#58134) (pingcap#58355)
* ddl: fix args count for modify column (pingcap#58855) (pingcap#58858)
* planner: correct plan when scan tidb related cluster table with KeepOr…
* planner: Fix vector not truncated after CBO (pingcap#58809) (pingcap#58844)
* ddl: Fix vector index for high dimensional vectors (pingcap#58717) (pingcap#58835)
* ddl: Fix issue with concurrent update getting reverted by BackfillData…
* statistics: stats cache set default quota as 20% (pingcap#58013) (pingcap#58817)
* executor: change the evaluation order of columns in `Update` and `Inse…
* statistics: add recover to protect background task (pingcap#58739) (pingcap#58767)
* ttl: fix the infinite waiting for delRateLimiter when `tidb_ttl_delete…
* ttl: reduce some warnings logs when locking TTL tasks (pingcap#58306) (pingcap#58783)
* ttl: retry the rows when del rate limiter returns error in delWorker (…
* ttl: reschedule task to other instances when shrinking worker (pingcap#57703) (pingcap#58778)
* ttl: fix the issue that one task losing heartbeat will block other tas…
* ttl: fix the issue that the task is not cancelled after transfering ow…
* ddl: fix job state overridden when concurrent updates don't overlap in…
* ttl: set the job history status to `cancelled` if it's removed in GC a…
* ttl: fix the timezone issue and panic in the caller of `getSession` (#…
* ddl: fix version syncer doesn't print who hasn't synced on partial syn…
* ttl: fix the issue that `DROP TABLE` / `ALTER TABLE` will keep job run…
* br/stream: allow pitr to create oversized indices (pingcap#58433) (pingcap#58527)
* ttl: set a result for timeout scan task during shrinking scan worker (…
* executor: fix time zone issue when querying slow log (pingcap#58455) (pingcap#58577)
* table: fix the issue that the default value for `BIT` column is wrong …
* statistics: temporarily skip handling errors for DDL events (pingcap#58609) (pingcap#58634)
* sessionctx: fix null max value to leading wrong warning (pingcap#57898) (pingcap#57935)
* planner: convert cartesian semi join with other nulleq condition to cr…
* planner: fix idxMergePartPlans forget to deal with RootTaskConds (pingcap#585…
* domain: change some stats log level as WARN (pingcap#58316) (pingcap#58555)
* planner: quickly get total count from index/column (pingcap#58365) (pingcap#58431)
* planner, expr: eval readonly user var during plan phase (pingcap#54462) (pingcap#58540)
* metrics: add col/idx name(s) for BackfillProgressGauge and BackfillTot…
* br: refactor test to use wait checkpoint method (pingcap#57612) (pingcap#58498)
* executor: reuse chunk in hash join v2 during restoring (pingcap#56936) (pingcap#58018)
* executor: fix goroutine leak when exceed quota in hash agg (pingcap#58078) (pingcap#58462)
* copr: fix the issue that busy threshold may redirect batch copr to fol…
* statistics: skip non-exicted table when to init stats (pingcap#58381) (pingcap#58394)
* planner: fix incorrectly using the schema for plan cache (pingcap#57964) (pingcap#58090)
* *: use DDL subscriber updating stats meta (pingcap#57872) (pingcap#58387)
* planner, runtime_filter: Remove redundant logs whose meaning can be di…
* statistics: remove dead code (pingcap#58412) (pingcap#58442)
* planner: Use/force to apply prefer range scan (pingcap#56928) (pingcap#58444)
* statistics: gc the statistics correctly after drop the database (pingcap#5730…
* ddl: Fixed partition interval from DayMinute to just Minute. (pingcap#57738) (pingcap#58019)
* executor: Enlarge the timeout for fetching TiFlash system tables (pingcap#579…
What problem does this PR solve?
Issue Number: close #57996
Problem Summary:
What changed and how does it work?
This PR adds two indexes to the mysql.analyze_jobs table to speed up queries. I also bumped the bootstrap version from 239 to 240.
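For reference, the two index definitions, exactly as they appear in the SHOW CREATE TABLE mysql.analyze_jobs output earlier in this thread:
KEY `idx_schema_table_state` (`table_schema`,`table_name`,`state`),
KEY `idx_schema_table_partition_state` (`table_schema`,`table_name`,`partition_name`,`state`)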
Check List
Tests
Side effects
Documentation
Release note
Please refer to Release Notes Language Style Guide to write a quality release note.