H/T to VladLazar for a similar change that inspired this one:
VladLazar@682aea5
The problem statement:
```
We have seen a couple of races between the application of `segment.ms`
and the normal append path. They had the following pattern in common:
1. application of `segment.ms` begins
2. a call to `segment::append` is interleaved
3. the append finishes first and advances the dirty offsets, which
the rolling logic in `segment.ms` does not expect
-- or --
4. `segment.ms` releases the current appender while the append is
ongoing, which the append logic does not expect
```
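To make the interleaving concrete, here is a deliberately simplified, thread-style C++ sketch of the two fibers involved. The names (`appender`, `segment`, `roll_for_segment_ms`, `do_append`) are illustrative stand-ins, not the actual Redpanda types, and Seastar's future-based concurrency is elided; the numbered comments map to the steps above.

```cpp
#include <optional>

struct appender {
    long dirty_offset{0};
    void write(long records) { dirty_offset += records; }  // advances dirty offsets
};

struct segment {
    std::optional<appender> app{appender{}};
};

// Housekeeping fiber: (1) application of segment.ms begins...
void roll_for_segment_ms(segment& s) {
    // ...it decides the segment is due for a roll, then
    // (4) releases the current appender. If an append is still in
    //     flight, that fiber now holds a dangling reference.
    s.app.reset();
}

// Append fiber, interleaved with the roll:
void do_append(segment& s) {
    appender& a = *s.app;  // (2) appender referenced mid-roll
    a.write(1);            // (3) finishes first and advances dirty offsets
                           //     the rolling logic does not expect, or
                           //     crashes if (4) has already run
}
```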
The proposed fix was to introduce a new appender lock to the segment and to
ensure that it is held both while appending and while segment.ms rolling is
in progress. This addressed problem #3, but wasn't sufficient to address
problem #4.
The issue with introducing another lock to the segment is that the
unexpected behavior during an append happens in the context of an
already-referenced segment: the appending fiber may take a reference to the
appender, only for the appender to be destructed by the housekeeping fiber
before segment::append() is called, resulting in a segfault.
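The following sketch, using the same illustrative names as above with the proposed per-segment lock added, shows why that lock cannot close the window: the appender reference is taken before the lock is ever acquired.

```cpp
#include <mutex>
#include <optional>

struct appender {
    std::mutex lock;  // the proposed new per-segment appender lock
    long dirty_offset{0};
};

struct segment {
    std::optional<appender> app{std::in_place};
};

// Housekeeping fiber: it can win the race for `lock`, finish rolling,
// and destroy the appender before the appending fiber below ever
// reaches its own lock acquisition.
void roll_for_segment_ms(segment& s) {
    {
        std::scoped_lock guard(s.app->lock);
        // ... rolling work under the new lock ...
    }
    s.app.reset();  // appender destructed; outstanding references dangle
}

// Append fiber:
void do_append(segment& s) {
    appender& a = *s.app;            // reference taken *before* locking
    // <-- the housekeeping fiber may run to completion here
    std::scoped_lock guard(a.lock);  // too late: `a` may be destructed
    a.dirty_offset += 1;
}
```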
This patch extends usage of the existing disk_log_impl::_segments_rolling_lock
to cover the entire duration of an append (i.e. not just the underlying
segment::append() call), ensuring that segment.ms rolls and appends are
mutually exclusive.
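A minimal sketch of the resulting shape, in the same simplified model with std::mutex standing in for the Seastar-friendly lock used in the real code: both the roll and the whole append run under one log-level lock, so they can no longer interleave.

```cpp
#include <mutex>
#include <optional>

struct appender {
    long dirty_offset{0};
    void write(long records) { dirty_offset += records; }
};

struct log {
    // Stands in for disk_log_impl::_segments_rolling_lock; the real code
    // uses a Seastar-friendly mutex rather than std::mutex.
    std::mutex segments_rolling_lock;
    std::optional<appender> app{appender{}};
};

// Housekeeping fiber: the whole roll runs under the log-level lock.
void apply_segment_ms(log& l) {
    std::scoped_lock guard(l.segments_rolling_lock);
    l.app.reset();    // release the old appender...
    l.app.emplace();  // ...and install one for the new segment
}

// Append fiber: the lock now spans everything from referencing the
// appender to completing the write, not just the inner
// segment::append() call, so rolls and appends cannot interleave.
void do_append(log& l) {
    std::scoped_lock guard(l.segments_rolling_lock);
    appender& a = *l.app;
    a.write(1);
}
```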
(cherry picked from commit 78a9749)