fix(server): Dont apply memory limit when loading/replicating #1760
Conversation
Force-pushed 85f5015 to 1705817
bool apply_memory_limit =
    !owner_->IsReplica() && !(ServerState::tlocal()->gstate() == GlobalState::LOADING);

PrimeEvictionPolicy evp{cntx,
Is this something we'll need to adjust when executing REPLICAOF NO ONE
on a replica?
ah, nvm
lgtm
src/server/db_slice.cc
Outdated
                        int64_t(memory_budget_ - key.size()),
                        ssize_t(soft_budget_limit_),
                        this,
                        apply_memory_limit};

// If we are over limit in non-cache scenario, just be conservative and throw.
if (!caching_mode_ && evp.mem_budget() < 0) {
@romange Redis does not limit memory when loading an rdb file, as I wrote in another PR comment. Do you think we should do the same? If I change this `if` to check apply_memory_limit, we will not fail on the memory limit when loading at all. Currently I removed only the conservative memory limit.
@romange Based on our chat, I updated the PR to not apply memory checks when replicating or loading
…ating

When loading a snapshot created by the same server configuration (memory and number of shards) we will create a different dash table segment directory tree, because the tree shape is related to the order of entry insertion. Therefore, when loading data from a snapshot or from replication, the conservative memory checks might fail, as the new tree might have more segments. Because we don't want to fail loading a snapshot from the same server configuration, we disable these checks on loading and replication.

Signed-off-by: adi_holden <[email protected]>
Force-pushed b3b92fd to e8b24b4
      soft_limit_(soft_limit),
      cntx_(cntx),
      can_evict_(can_evict),
      apply_memory_limit_(apply_memory_limit) {
Instead of passing apply_memory_limit=false you can pass mem_budget as INT64_MAX.
apply_memory_limit parameter is not needed
There is a problem with this solution: we use evp.mem_budget() to update db_slice.memory_budget_, so if we pass INT64_MAX we will mess up db_slice.memory_budget_.
No memory limit on loading from file or replicating.

When loading a snapshot created by the same server configuration (memory and number of shards) we will create a different dash table segment directory tree, because the tree shape is related to the order of entry insertion. Therefore, when loading data from a snapshot or from replication, the conservative memory checks might fail, as the new tree might have more segments. Because we don't want to fail loading a snapshot from the same server configuration, we disable these checks on loading and replication.
fixes #1708