br: concurrently set tiflash replicas #63360
base: master
Conversation
Signed-off-by: Jianjun Liao <[email protected]>
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: (no approvers yet). The full list of commands accepted by this bot can be found here. It needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
Hi @Leavrth. Thanks for your PR. PRs from untrusted users cannot be marked as trusted. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Codecov Report: ❌ Patch coverage is … Additional details and impacted files:

```
@@             Coverage Diff              @@
##              master    #63360       +/- ##
================================================
+ Coverage   72.7782%   74.0205%   +1.2423%
================================================
  Files          1832       1848        +16
  Lines        495769     495842        +73
================================================
+ Hits         360812     367025      +6213
+ Misses       113016     106132      -6884
- Partials      21941      22685       +744
```
Flags with carried forward coverage won't be shown.
@Leavrth: The following tests failed; say `/retest` to rerun all failed tests.
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
```go
workerpool := tidbutil.NewWorkerPool(16, "repair ingest index")
eg, ectx := errgroup.WithContext(ctx)
for _, sql := range sqls {
	resetSQL := sql
```
I remember that in our Go version (1.22+), the loop variable is per-iteration, so we can use `sql` directly in the for-loop closure instead of copying it to `resetSQL`.
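For reference, a minimal self-contained sketch of the semantics this relies on, assuming the module targets Go 1.22 or newer (not verified here): since Go 1.22, each iteration of a `for` range loop declares a fresh loop variable, so goroutines no longer capture one shared `sql`.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	sqls := []string{"alter table t1 ...", "alter table t2 ...", "alter table t3 ..."}

	var wg sync.WaitGroup
	for _, sql := range sqls {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// With Go 1.22+, `sql` is a fresh variable each iteration,
			// so every goroutine prints its own statement; no
			// `resetSQL := sql` copy is needed.
			fmt.Println(sql)
		}()
	}
	wg.Wait()
}
```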
```go
for _, sql := range sqls {
	resetSQL := sql
	workerpool.ApplyWithIDInErrorGroup(eg, func(id uint64) error {
		resetSession := resetSessions[id%uint64(len(resetSessions))]
```
Seems the IDs are already allocated from 1 up to the worker pool size, so we can use `resetSessions[id-1]` instead of the modulo.
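A small self-contained sketch of why `id-1` works under this comment's assumption that worker IDs run from 1 to the pool size (the range is taken from the comment, not verified against `tidbutil` here):

```go
package main

import "fmt"

func main() {
	const poolSize = 4
	sessions := make([]string, poolSize) // stand-ins for resetSessions
	for i := range sessions {
		sessions[i] = fmt.Sprintf("session-%d", i)
	}

	// Assuming worker IDs run from 1 to poolSize, both expressions stay
	// in bounds, but id-1 is the direct 1:1 mapping; id%poolSize wraps
	// the largest ID back to index 0, which works but obscures intent.
	for id := uint64(1); id <= poolSize; id++ {
		fmt.Println(id, sessions[id%uint64(len(sessions))], sessions[id-1])
	}
}
```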
```go
workerpool.ApplyWithIDInErrorGroup(eg, func(id uint64) error {
	resetSession := resetSessions[id%uint64(len(resetSessions))]
	log.Info("reset tiflash replica", zap.String("sql", sql))
	return resetSession.ExecuteInternal(ectx, resetSQL)
```
If one SQL fails, should we continue to execute the rest of them?
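To make the trade-off concrete, here is a hedged sketch of the two behaviors. With `errgroup.WithContext`, the first error returned from a task cancels the shared context, so remaining statements that honor it stop early; the variant below records failures and keeps going. `execute` is a hypothetical stand-in for `resetSession.ExecuteInternal`, not the PR's code:

```go
package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/errgroup"
)

// execute is a purely illustrative stand-in for running one statement.
func execute(sql string) error {
	if sql == "sql-2" {
		return fmt.Errorf("simulated failure for %s", sql)
	}
	return nil
}

func main() {
	sqls := []string{"sql-1", "sql-2", "sql-3"}

	var (
		eg   errgroup.Group
		mu   sync.Mutex
		errs []error
	)
	for _, sql := range sqls {
		eg.Go(func() error { // per-iteration `sql` capture is safe on Go 1.22+
			if err := execute(sql); err != nil {
				// Record the failure and return nil so the remaining
				// statements still run; returning err instead would make
				// an errgroup created via WithContext cancel the shared
				// context and fail fast.
				mu.Lock()
				errs = append(errs, err)
				mu.Unlock()
			}
			return nil
		})
	}
	_ = eg.Wait()
	for _, err := range errs {
		fmt.Println("deferred failure:", err)
	}
}
```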
What problem does this PR solve?
Issue Number: close #63177
Problem Summary:
Currently, BR sets TiFlash replicas one by one, without any concurrency.
What changed and how does it work?
Optimize the speed of setting TiFlash replicas after log restore by issuing the statements concurrently through a worker pool.
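As a rough sketch of the shape of the change (names like `runSQL` are illustrative stand-ins, not the PR's actual helpers), the sequential loop becomes a bounded fan-out over an errgroup:

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// runSQL is a hypothetical stand-in for executing one
// ALTER TABLE ... SET TIFLASH REPLICA statement on a session.
func runSQL(ctx context.Context, sql string) error {
	fmt.Println("executing:", sql)
	return nil
}

func main() {
	ctx := context.Background()
	sqls := []string{
		"ALTER TABLE db.t1 SET TIFLASH REPLICA 1",
		"ALTER TABLE db.t2 SET TIFLASH REPLICA 1",
	}

	eg, ectx := errgroup.WithContext(ctx)
	eg.SetLimit(16) // mirrors the pool size of 16 used in the diff above
	for _, sql := range sqls {
		eg.Go(func() error { return runSQL(ectx, sql) })
	}
	if err := eg.Wait(); err != nil {
		fmt.Println("restore failed:", err)
	}
}
```

In the real diff the workers also reuse a fixed set of sessions (`resetSessions`, one per worker ID) rather than opening a session per statement.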
Check List
Tests
Side effects
Documentation
Release note
Please refer to Release Notes Language Style Guide to write a quality release note.