
Commit 4d1102b

andrwng authored and vbotbuildovich committed
archival: remove upload retry limit for segments
In a previous PR [1] we began to rely on the archiver loop to retry, and moved away from relying on `cloud_io::remote` for retries in two ways:

1. setting an explicit `disallow` retry policy on the retry node passed to the remote, and
2. setting the `max_retries` passed to `remote::upload_segment()` to 1.

In practice, we saw that _not_ relying on the remote resulted in an uptick in the `vectorized_cloud_storage_failed_uploads` metric, which is monitored and alerted on. In [2] we reverted change (1), but didn't notice change (2). This commit reverts change (2).

[1] redpanda-data#25951
[2] redpanda-data#26969

(cherry picked from commit 7f409da)
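For context, here is a minimal sketch of the retry split the commit message describes. All names and signatures below (`upload_once`, `upload_with_retries`, the result enum) are hypothetical and simplified; they are not Redpanda's actual `cloud_io::remote` API. The point it illustrates: with `max_retries == 1`, the remote attempts the upload exactly once, so every transient failure surfaces to the caller (the archiver loop) and is counted as a failed upload, whereas a larger retry budget lets the remote absorb transient failures internally.

// Illustrative sketch only: names, types, and behavior are hypothetical and
// do not reflect Redpanda's actual cloud_io::remote implementation.
#include <cstddef>
#include <iostream>

enum class upload_result { success, failed };

// Stand-in for a single upload attempt; fails the first two calls to
// simulate a transient error.
upload_result upload_once() {
    static int calls = 0;
    return (++calls <= 2) ? upload_result::failed : upload_result::success;
}

// With max_retries == 1 the loop body runs once, so a transient failure is
// reported to the caller. With a larger budget, the failure is retried here
// and never shows up in the caller's failure count.
upload_result upload_with_retries(size_t max_retries) {
    upload_result result = upload_result::failed;
    for (size_t attempt = 0; attempt < max_retries; ++attempt) {
        result = upload_once();
        if (result == upload_result::success) {
            break;
        }
        // A real implementation would back off here before retrying.
    }
    return result;
}

int main() {
    // max_retries = 1: the transient failure surfaces to the caller.
    std::cout << (upload_with_retries(1) == upload_result::success) << '\n'; // 0
    // max_retries = 3: the retry loop absorbs the transient failures.
    std::cout << (upload_with_retries(3) == upload_result::success) << '\n'; // 1
}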

File tree: 1 file changed (+1 −7)


src/v/cluster/archival/ntp_archiver_service.cc

Lines changed: 1 addition & 7 deletions
@@ -1554,13 +1554,7 @@ ss::future<ntp_archiver_upload_result> ntp_archiver::upload_segment(
     // index in the background.
     auto upload_segment_ready = co_await ss::coroutine::as_future(
       _remote.upload_segment(
-        get_bucket_name(),
-        path,
-        meta.size_bytes,
-        get_stream,
-        rtc,
-        lazy_abort,
-        1));
+        get_bucket_name(), path, meta.size_bytes, get_stream, rtc, lazy_abort));
 
     // As noted above, check whether 'get_stream' was called. If not, close the
     // upload stream.
