Lightning OOM during write/ingest to TiKV when importing a large amount of data #59947

@D3Hunter

Description

Bug Report

Please answer these questions before submitting your issue. Thanks!

1. Minimal reproduce step (Required)

The issue happened in two cases, one with 2 TB of data and another with 12 TB, both on v8.1.2. Our current guess at the root cause is a bug in pebble that causes a burst in memory usage and an OOM. Because the time window is very short, and pebble may allocate memory through cgo, we have not been able to get a useful heap profile.
We also noticed that pebble was upgraded to v1.1 in 888a58b as of v8.1.1, so we imported the same data using v7.5.5, and it succeeded.
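(Go's heap profiler only records allocations made through the Go runtime, so memory obtained via cgo, i.e. C `malloc`, never shows up in a heap profile. A minimal diagnostic sketch, Linux-only and not part of Lightning itself, that compares the Go heap against the process RSS to surface such off-heap growth:)

```go
// memcheck.go: compare Go-heap usage with process RSS (hypothetical helper,
// not from the Lightning codebase).
package main

import (
	"fmt"
	"os"
	"runtime"
	"strconv"
	"strings"
)

// readRSSKiB parses VmRSS from /proc/self/status (Linux only).
func readRSSKiB() (int64, error) {
	data, err := os.ReadFile("/proc/self/status")
	if err != nil {
		return 0, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "VmRSS:") {
			fields := strings.Fields(line)
			if len(fields) >= 2 {
				return strconv.ParseInt(fields[1], 10, 64)
			}
		}
	}
	return 0, fmt.Errorf("VmRSS not found")
}

func main() {
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	rss, err := readRSSKiB()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// HeapAlloc covers Go allocations only; memory allocated via cgo never
	// appears here, so a large gap between RSS and the Go heap points at
	// off-heap (e.g. C malloc) allocations.
	fmt.Printf("Go heap: %d MiB, RSS: %d MiB\n", ms.HeapAlloc>>20, rss/1024)
}
```

(A large, fast-growing RSS alongside a flat Go heap would be consistent with the cgo-allocation hypothesis above.)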

We also found a pebble OOM issue, cockroachdb/pebble#3039, which might be related.

So the trigger conditions might include (a hedged configuration sketch follows the list):

  • importing a large data set, say 2 TB (we have tested with smaller data sets of several hundred GB and saw no such issue)
  • using a version >= v8.1.1, which includes the pebble upgrade
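(For reference, a minimal Lightning configuration sketch for the import path involved, i.e. physical/local-backend mode; all paths and addresses below are placeholders, not values taken from the affected clusters:)

```toml
# tidb-lightning.toml -- hypothetical minimal config; paths/addresses are placeholders
[lightning]
status-addr = ":8289"          # status HTTP port; also serves /debug/pprof for profiling

[tikv-importer]
backend = "local"              # physical import mode: sort KV pairs locally, then ingest into TiKV
sorted-kv-dir = "/data/sorted-kv"

[mydumper]
data-source-dir = "/data/export"

[tidb]
host = "127.0.0.1"
port = 4000
status-port = 10080
pd-addr = "127.0.0.1:2379"
```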

2. What did you expect to see? (Required)

success

3. What did you see instead (Required)

OOM during the ingest phase.

4. What is your TiDB version? (Required)

v8.1.2
