
Conversation

michalsosn (Contributor)

No description provided.

The code under review:

if not items:
    return

query_size_limit = env.NEPTUNE_QUERY_MAX_REQUEST_SIZE.get()
batch_size_limit_env = env.NEPTUNE_QUERY_SERIES_BATCH_SIZE.get()
max_workers = env.NEPTUNE_QUERY_MAX_WORKERS.get()
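
For context, a minimal sketch of how these limits might drive the batching and fan-out. Here env is the settings module already used in the PR's snippet, while split_into_batches, fetch_batch, and fetch_all_series are hypothetical names introduced only to illustrate the shape of the logic; they are not taken from the PR's diff.

from concurrent.futures import ThreadPoolExecutor

def split_into_batches(items, batch_size):
    # Naive chunking by item count only; the real code presumably also
    # respects the serialized request size (query_size_limit).
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

def fetch_all_series(items, fetch_batch):
    if not items:
        return []

    batch_size = env.NEPTUNE_QUERY_SERIES_BATCH_SIZE.get()
    max_workers = env.NEPTUNE_QUERY_MAX_WORKERS.get()

    batches = split_into_batches(items, batch_size)

    # Fan the batches out over at most max_workers threads.
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(fetch_batch, batches))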
A reviewer (Contributor) left a suggested change:

- max_workers = env.NEPTUNE_QUERY_MAX_WORKERS.get()
+ max_workers = max(env.NEPTUNE_QUERY_MAX_WORKERS.get(), 10)

michalsosn (Author) replied:

Should we clip this value? If someone limits concurrency to 2 workers, it should be fine to split the request into at most 2 groups, since there won't necessarily be any benefit to going above that number.
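
To make the two options concrete (illustrative only, not code from this PR; batches here stands for a hypothetical list of request groups):

# Reviewer's suggestion: enforce a floor of 10 workers, even when
# NEPTUNE_QUERY_MAX_WORKERS is configured lower.
max_workers = max(env.NEPTUNE_QUERY_MAX_WORKERS.get(), 10)

# Author's alternative: respect the configured limit and instead cap the
# number of groups, so a user who sets 2 workers gets at most 2 groups.
num_groups = min(len(batches), env.NEPTUNE_QUERY_MAX_WORKERS.get())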

gabrys (Contributor) commented on Aug 28, 2025:

Please test this against simply changing the default

NEPTUNE_QUERY_SERIES_BATCH_SIZE = EnvVariable[int]("NEPTUNE_QUERY_SERIES_BATCH_SIZE", int, 10_000)

to

NEPTUNE_QUERY_SERIES_BATCH_SIZE = EnvVariable[int]("NEPTUNE_QUERY_SERIES_BATCH_SIZE", int, 1000)
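
One possible way to run that comparison without editing the default is to override the variable in the test environment; a sketch, assuming EnvVariable reads os.environ when .get() is called:

import os

# Override for the duration of the test process; must happen before the
# value is read via NEPTUNE_QUERY_SERIES_BATCH_SIZE.get().
os.environ["NEPTUNE_QUERY_SERIES_BATCH_SIZE"] = "1000"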
