
Conversation

@adiholden (Contributor) commented Aug 29, 2023

No memory limit on loading from file or replicating.

When loading a snapshot created by the same server configuration (memory and
number of shards), we can still build a different dash table segment directory tree,
because the tree shape depends on the order in which entries were inserted. Therefore,
when loading data from a snapshot or from replication, the conservative memory checks
might fail, as the new tree might have more segments. Because we don't want to fail
loading a snapshot from the same server configuration, we disable these checks during
loading and replication.

fixes #1708

@adiholden force-pushed the memory_limit_check branch 2 times, most recently from 85f5015 to 1705817 on August 29, 2023 10:42
bool apply_memory_limit =
!owner_->IsReplica() && !(ServerState::tlocal()->gstate() == GlobalState::LOADING);

PrimeEvictionPolicy evp{cntx,
@royjacobson (Contributor) commented Aug 29, 2023:
Is this something we'll need to adjust when executing REPLICAOF NO ONE on a replica?

@royjacobson (Contributor) replied:
ah, nvm

@royjacobson previously approved these changes Aug 29, 2023

@royjacobson (Contributor) left a comment:
lgtm

int64_t(memory_budget_ - key.size()),
ssize_t(soft_budget_limit_),
this,
apply_memory_limit};

// If we are over limit in non-cache scenario, just be conservative and throw.
if (!caching_mode_ && evp.mem_budget() < 0) {
@adiholden (Contributor, Author) commented:
@romange Redis does not limit memory when loading an rdb file, as I wrote in another PR comment. Do you think we should do the same? If I change this `if` to check apply_memory_limit, we will not fail on the memory limit when loading at all. Currently I removed only the conservative memory limit check.
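
For illustration, a minimal sketch of that broader variant, assuming the names from the diff above (not the merged change):

// Sketch only: also gate the conservative check on apply_memory_limit, so
// loading and replication would never fail on the budget at all.
if (apply_memory_limit && !caching_mode_ && evp.mem_budget() < 0) {
  throw std::bad_alloc{};  // assumed failure path; the real handler may differ
}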

@adiholden (Contributor, Author) commented:
@romange Based on our chat, I updated the PR to not apply memory checks when replicating or loading.

@adiholden changed the title from "fix(server): Dont apply conservative memory limit when loading/replicating" to "fix(server): Dont apply memory limit when loading/replicating" on Aug 30, 2023
fix(server): Dont apply memory limit when loading/replicating

When loading a snapshot created by the same server configuration (memory and
number of shards), we can still build a different dash table segment directory tree,
because the tree shape depends on the order in which entries were inserted. Therefore,
when loading data from a snapshot or from replication, the conservative memory checks
might fail, as the new tree might have more segments. Because we don't want to fail
loading a snapshot from the same server configuration, we disable these checks during
loading and replication.

Signed-off-by: adi_holden <[email protected]>
soft_limit_(soft_limit),
cntx_(cntx),
can_evict_(can_evict),
apply_memory_limit_(apply_memory_limit) {
@romange (Collaborator) commented Aug 30, 2023:

Instead of passing apply_memory_limit=false, you can pass mem_budget as INT64_MAX.

@romange (Collaborator) added:
Then the apply_memory_limit parameter is not needed.
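
A sketch of what that suggestion could look like at the call site (argument list abbreviated with /* ... */; assumed, not the merged code):

// Neutralize the budget check with a huge budget instead of a flag.
int64_t budget = apply_memory_limit ? int64_t(memory_budget_ - key.size()) : INT64_MAX;
PrimeEvictionPolicy evp{cntx, /* ... */ budget, ssize_t(soft_budget_limit_), this};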

@adiholden (Contributor, Author) replied:
There is a problem with this solution: we use evp.mem_budget() to update db_slice.memory_budget_, so if we pass INT64_MAX we will corrupt db_slice.memory_budget_.
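
To make the concern concrete, a self-contained toy with hypothetical names; only the flow of the budget value mirrors the real code:

#include <cstdint>
#include <iostream>

// Toy policy: it decrements its budget as entries are inserted, and the
// caller later adopts the remaining budget as its own bookkeeping state.
struct ToyEvictionPolicy {
  int64_t budget;
  void OnInsert(int64_t bytes) { budget -= bytes; }
  int64_t mem_budget() const { return budget; }
};

int main() {
  ToyEvictionPolicy evp{INT64_MAX};            // "disable" the limit via a huge budget
  evp.OnInsert(1024);                          // insertions still shrink it
  int64_t memory_budget_ = evp.mem_budget();   // slice adopts ~INT64_MAX: bogus
  std::cout << memory_budget_ << '\n';
}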

@adiholden merged commit ac3b8b2 into main on Sep 3, 2023
@adiholden deleted the memory_limit_check branch on September 3, 2023 08:24
Successfully merging this pull request may close these issues:
bad_alloc on replica when master near maxmemory (cache_mode=true)