
goroutine leak: goroutine count increases during long-term operation #9047

@bufferflies

Description

Bug Report

The number of PD goroutines keeps increasing during long-term cluster operation.

What did you do?

Trigger condition: repeatedly remove and re-add a scheduler, for example with the script below:

#!/bin/bash

for i in $(seq 1 50); do
    tiup ctl:v6.5.6 pd -u http://127.0.0.1:2379 scheduler remove balance-leader-scheduler
    sleep 10
    tiup ctl:v6.5.6 pd -u http://127.0.0.1:2379 scheduler add balance-leader-scheduler
    sleep 20
done
[Screenshot: PD goroutine count metric rising over the course of the test]
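The leak can also be observed from outside the process. Assuming pd-server exposes the standard Go net/http/pprof handlers on its client URL (the default in regular builds), a small poller like the sketch below prints the total goroutine count from the pprof goroutine profile while the loop above runs; the address and polling interval here are only examples.

package main

import (
	"bufio"
	"fmt"
	"net/http"
	"time"
)

// Polls the PD pprof goroutine profile and prints the reported total,
// so the leak shows up as a steadily growing number while the
// remove/add loop above is running.
func main() {
	const url = "http://127.0.0.1:2379/debug/pprof/goroutine?debug=1"
	for {
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println("request failed:", err)
		} else {
			// With debug=1 the first line of the profile looks like:
			// "goroutine profile: total 123"
			line, _ := bufio.NewReader(resp.Body).ReadString('\n')
			resp.Body.Close()
			fmt.Printf("%s %s", time.Now().Format(time.RFC3339), line)
		}
		time.Sleep(30 * time.Second)
	}
}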

Root cause:

decoder := schedule.ConfigSliceDecoder(schedulerCfg.Type, schedulerCfg.Args)
// A throwaway scheduler is constructed here, with a brand-new OperatorController
// and an in-memory storage, just to run the config decoder. Nothing stops or
// cleans up these temporary objects afterwards, so the goroutines started on
// their behalf are leaked on every scheduler remove/add cycle.
tmp, err := schedule.CreateScheduler(schedulerCfg.Type, schedule.NewOperatorController(c.ctx, nil, nil), storage.NewStorageWithMemoryBackend(), decoder)
if err != nil {
	return err
}
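For illustration, here is a minimal, self-contained Go sketch of that pattern (hypothetical names, not PD code): a constructor ties a background goroutine to a long-lived parent context, and the caller discards the returned object with no way to stop the goroutine, so every call leaks one goroutine.

package main

import (
	"context"
	"fmt"
	"runtime"
	"time"
)

// controller stands in for a component whose constructor starts a
// background goroutine bound to the parent context.
type controller struct{}

func newController(ctx context.Context) *controller {
	c := &controller{}
	go func() {
		// Runs until the *parent* context is cancelled. If the parent is the
		// server's root context, this goroutine effectively never exits.
		<-ctx.Done()
	}()
	return c
}

func main() {
	ctx := context.Background() // long-lived, never cancelled

	for i := 0; i < 50; i++ {
		// The caller only wants a throwaway instance (e.g. to decode a
		// config), but each call leaks the goroutine started above.
		_ = newController(ctx)
	}

	time.Sleep(100 * time.Millisecond)
	fmt.Println("goroutines:", runtime.NumGoroutine())
}

A common remedy for this pattern is to derive a cancellable child context for the throwaway instance and cancel it once the decoding step is done, or to reuse an already-running controller instead of constructing a new one; which approach the actual fix takes is not shown in this issue.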

What did you expect to see?

The goroutine count stays steady.

What did you see instead?

The goroutine count kept growing throughout the test.

What version of PD are you using (pd-server -V)?

v6.5.6

Metadata

Labels

affects-5.4: This bug affects the 5.4.x (LTS) versions.
affects-6.1: This bug affects the 6.1.x (LTS) versions.
affects-6.5: This bug affects the 6.5.x (LTS) versions.
affects-7.1: This bug affects the 7.1.x (LTS) versions.
impact/leak
report/customer: Customers have encountered this bug.
severity/major
type/bug: The issue is confirmed as a bug.
