
Conversation

Owaiseimdad
Contributor

@Owaiseimdad Owaiseimdad commented Aug 25, 2025

Describe your changes

The data for the network tab is already in seconds.
I validated against Apple's Activity Monitor and the network tab of Checkmate.

Write your issue number after "Fixes "

Fixes #2814

Please ensure all items are checked off before requesting a review. "Checked off" means you need to add an "x" character between brackets so they turn into checkmarks.

  • (Do not skip this or your PR will be closed) I deployed the application locally.
  • (Do not skip this or your PR will be closed) I have performed a self-review and testing of my code.
  • I have included the issue # in the PR.
  • I have added i18n support to visible strings (instead of <div>Add</div>, use):
const { t } = useTranslation();
<div>{t('add')}</div>
  • I have not included any files that are not related to my pull request, including package-lock.json and package.json if dependencies have not changed
  • I didn't use any hardcoded values (otherwise it will not scale, and will make it difficult to maintain consistency across the application).
  • I made sure font sizes, color choices etc are all referenced from the theme. I don't have any hardcoded dimensions.
  • My PR is granular and targeted to one specific feature.
  • I ran npm run format in server and client directories, which automatically formats your code.
  • I took a screenshot or a video and attached to this PR if there is a UI change.

Summary by CodeRabbit

  • Bug Fixes
    • Corrected rate calculations in hourly hardware monitoring stats to align with proper time units.
    • Affects five metrics; displayed rates will appear 1000x lower than before.
    • Dashboards, alerts, and exports using these metrics will reflect the adjusted scale going forward.
    • Historical records remain unchanged; only new calculations/visualizations use the corrected values.


coderabbitai bot commented Aug 25, 2025

Walkthrough

Adjusted the denominator in five rate calculations within the hourlyStats facet of buildHardwareDetailsPipeline in server/src/db/mongo/modules/monitorModuleQueries.js from dividing by 1000 to dividing by 1, effectively scaling the computed rates by 1/1000. No other parts of the pipeline were changed.

Changes

Cohort / File(s) Summary
Hourly stats rate denominator adjustment
server/src/db/mongo/modules/monitorModuleQueries.js
In buildHardwareDetailsPipeline → hourlyStats facet, changed five rate-calculation denominators from { $divide: [{ $subtract: ["$$tLast","$$tFirst"] }, 1000] } to dividing by 1, scaling resulting rates by 1/1000.
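To illustrate the effect outside the pipeline, here is a plain-JS sketch with made-up sample values (not the actual aggregation code), assuming tFirst/tLast are epoch milliseconds:

```javascript
// Hypothetical sample: 5 MB transferred over a 10-second window,
// with timestamps in epoch milliseconds (JavaScript Date convention).
const first = 0;                   // bytes counter at window start
const last = 5_000_000;            // bytes counter at window end
const tFirst = 1_724_500_000_000;  // ms
const tLast = 1_724_500_010_000;   // ms (10 s later)

// Old expression: divide elapsed ms by 1000 -> rate in bytes per second
const perSecond = (last - first) / ((tLast - tFirst) / 1000);

// New expression: divide elapsed ms by 1 -> rate in bytes per millisecond
const perMillisecond = (last - first) / ((tLast - tFirst) / 1);

console.log(perSecond);      // 500000 (500 kB/s)
console.log(perMillisecond); // 500 (1000x smaller)
```

This is where the "displayed rates will appear 1000x lower" note in the summary comes from.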

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

P.S. Canadians divide by 1 to keep things simple; Americans ask why it isn’t divided by freedom units.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
server/src/db/mongo/modules/monitorModuleQueries.js (1)

656-657: Align time-base units across all delta metrics

Right now, three of our "delta" fields are calculating rates in seconds and the other five in milliseconds, which will skew any combined charts or totals. Let's pick one unit, either per-ms (divide by (tLast - tFirst)/1) or per-sec (divide by (tLast - tFirst)/1000), and apply it uniformly.

Affected metrics (currently dividing by (tLast - tFirst)/1000):

  • server/src/db/mongo/modules/monitorModuleQueries.js:656-657  deltaErrOut
  • server/src/db/mongo/modules/monitorModuleQueries.js:711-712  deltaDropIn
  • server/src/db/mongo/modules/monitorModuleQueries.js:766-767  deltaDropOut

The other five delta metrics above these lines are still using (tLast - tFirst)/1.

Action items:

  • If we standardize on per-millisecond rates: update the three listed to divide by 1 (i.e. (tLast - tFirst)/1) and adjust any UI labels to "per ms."
  • If we standardize on per-second rates: revert the other five metrics to divide by 1000 instead of 1 and label them "per s."

Let’s get all eight speaking the same language—no more apples vs. maple syrup math! 🍁🇨🇦🏒🇺🇸

🧹 Nitpick comments (3)
server/src/db/mongo/modules/monitorModuleQueries.js (3)

400-439: Prefer $dateDiff for clarity and to avoid unit mistakes

To make intent explicit and prevent future regressions, compute elapsed time with $dateDiff and a single conversion point. Example for bytesSentPerSecond:

- $divide: [{ $subtract: ["$$last", "$$first"] }, { $divide: [{ $subtract: ["$$tLast", "$$tFirst"] }, 1000] }],
+ $let: {
+   vars: {
+     // Milliseconds between tFirst and tLast; preserves sub-second precision
+     dtMs: { $dateDiff: { startDate: "$$tFirst", endDate: "$$tLast", unit: "millisecond" } }
+   },
+   in: {
+     $cond: [
+       { $gt: ["$$dtMs", 0] },
+       { $divide: [{ $subtract: ["$$last", "$$first"] }, { $divide: ["$$dtMs", 1000] }] },
+       0
+     ]
+   }
+ }

Do the same for the other seven metrics. It reads better, is less error-prone, and makes the “per second” intent unambiguous.
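The same zero-length-window guard can be sketched outside the pipeline as a plain-JS helper (hypothetical function name, assuming millisecond timestamps):

```javascript
// Hypothetical helper mirroring the $cond guard: returns a per-second
// rate, or 0 when the window has zero (or negative) duration.
function ratePerSecond(first, last, tFirstMs, tLastMs) {
  const dtMs = tLastMs - tFirstMs;       // elapsed milliseconds
  if (dtMs <= 0) return 0;               // avoid divide-by-zero, like $cond
  return (last - first) / (dtMs / 1000); // bytes per second
}

console.log(ratePerSecond(0, 5_000_000, 0, 10_000)); // 500000
console.log(ratePerSecond(0, 5_000_000, 0, 0));      // 0
```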


432-433: Field naming vs. units

The field is named bytesSentPerSecond. If you intentionally switch to per‑millisecond, either:

  • keep per‑second values and the current name, or
  • rename to bytesSentPerMillisecond and update client-side labels/units.

Otherwise you’ll confuse users more than a border agent asking an American to convert Fahrenheit to Celsius on the spot.


171-793: Integration check: ensure UI labels/legends still match the unit

Given the PR description (“network tab data are already in seconds”), confirm the client expects per‑second for these series. If the backend flips to per‑ms while the UI still shows “B/s”, end users will read values that look 1000× lower.

Happy to scan the client for legends/formatters and propose a coordinated change if needed.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 3147f02 and 5e6dbfb.

📒 Files selected for processing (1)
  • server/src/db/mongo/modules/monitorModuleQueries.js (5 hunks)
🔇 Additional comments (2)
server/src/db/mongo/modules/monitorModuleQueries.js (2)

425-437: Double-check the updatedAt field type

I couldn’t locate any Mongoose schema or explicit timestamps settings for updatedAt in the codebase, so it’s safest to manually verify:

  • Inspect the schema definition for your “checks”/“monitor” models to see if timestamps: true is enabled (Mongoose will then use Date objects in milliseconds).
  • Search any DB migration or model-registration files for custom updatedAt handling or default values.
  • Review any UI/consumer code to confirm whether updatedAt values are treated as epoch seconds or JavaScript Date objects.

Once you confirm whether updatedAt is a Date (ms since epoch) or a numeric epoch in seconds, you can either remove the /1000 division and switch to $dateDiff, or leave the division in place—just ensure all similar aggregations stay consistent.
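For reference, if updatedAt really is a Mongoose Date (i.e. a JavaScript Date), subtracting two of them yields milliseconds, which is exactly what makes the /1000 necessary for per-second rates. A quick sketch:

```javascript
// Two Date objects 10 seconds apart; subtraction coerces to epoch milliseconds.
const a = new Date("2025-08-25T00:00:00Z");
const b = new Date("2025-08-25T00:00:10Z");

const diffMs = b - a;        // 10000 (milliseconds)
const diffS = diffMs / 1000; // 10 (seconds)

console.log(diffMs, diffS);  // 10000 10
```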

And hey, if it really is in seconds, you can divide all you want—just don’t ask your Canadian neighbors for snow removal tips, eh? 🍁


432-433: I've added scripts to verify the denominators in all five metric calculations. The results will confirm whether they’re mixing /1 (ms) and /1000 (s). Once we have the output, we can proceed with the correct recommendation.

Comment on lines +432 to 433
$divide: [{ $subtract: ["$$last", "$$first"] }, { $divide: [{ $subtract: ["$$tLast", "$$tFirst"] }, 1] }],
},

🛠️ Refactor suggestion

Patch-ready diff (revert the five denominators back to seconds)

If updatedAt is a Date (milliseconds), apply this diff to fix all five changed spots:

@@
-        $divide: [{ $subtract: ["$$last", "$$first"] }, { $divide: [{ $subtract: ["$$tLast", "$$tFirst"] }, 1] }],
+        $divide: [{ $subtract: ["$$last", "$$first"] }, { $divide: [{ $subtract: ["$$tLast", "$$tFirst"] }, 1000] }],
@@
-        $divide: [{ $subtract: ["$$last", "$$first"] }, { $divide: [{ $subtract: ["$$tLast", "$$tFirst"] }, 1] }],
+        $divide: [{ $subtract: ["$$last", "$$first"] }, { $divide: [{ $subtract: ["$$tLast", "$$tFirst"] }, 1000] }],
@@
-        $divide: [{ $subtract: ["$$last", "$$first"] }, { $divide: [{ $subtract: ["$$tLast", "$$tFirst"] }, 1] }],
+        $divide: [{ $subtract: ["$$last", "$$first"] }, { $divide: [{ $subtract: ["$$tLast", "$$tFirst"] }, 1000] }],
@@
-        $divide: [{ $subtract: ["$$last", "$$first"] }, { $divide: [{ $subtract: ["$$tLast", "$$tFirst"] }, 1] }],
+        $divide: [{ $subtract: ["$$last", "$$first"] }, { $divide: [{ $subtract: ["$$tLast", "$$tFirst"] }, 1000] }],
@@
-        $divide: [{ $subtract: ["$$last", "$$first"] }, { $divide: [{ $subtract: ["$$tLast", "$$tFirst"] }, 1] }],
+        $divide: [{ $subtract: ["$$last", "$$first"] }, { $divide: [{ $subtract: ["$$tLast", "$$tFirst"] }, 1000] }],

Also applies to: 471-472, 510-511, 565-566, 601-602

🤖 Prompt for AI Agents
In server/src/db/mongo/modules/monitorModuleQueries.js around lines 432-433 (and
also update the same pattern at 471-472, 510-511, 565-566, 601-602), the
aggregation is dividing the millisecond timestamp difference by 1 which treats
updatedAt as seconds; change the denominator back to 1000 so the expression
becomes { $divide: [{ $subtract: ["$$tLast", "$$tFirst"] }, 1000] } (apply this
exact replacement at all five locations).


@llamapreview llamapreview bot left a comment


Auto Pull Request Review from LlamaPReview

🎯 TL;DR & Recommendation

Recommendation: Request Changes

This PR addresses a unit conversion issue but introduces a critical inconsistency by not updating all network metrics, risking misleading data visualizations.

🚨 Critical Issues (P1)

  • server/src/db/mongo/modules/monitorModuleQueries.js: Introduces inconsistent units across network metrics, potentially causing misleading visualizations where some metrics are in bytes/second and others remain in bytes/millisecond.

💡 Suggestions (P2)

  • server/src/db/mongo/modules/monitorModuleQueries.js: Includes redundant divide-by-1 operations that add cognitive load without value; simplifying them to a direct subtraction would improve maintainability.


  { $gt: [{ $subtract: ["$$tLast", "$$tFirst"] }, 0] },
  {
-   $divide: [{ $subtract: ["$$last", "$$first"] }, { $divide: [{ $subtract: ["$$tLast", "$$tFirst"] }, 1000] }],
+   $divide: [{ $subtract: ["$$last", "$$first"] }, { $divide: [{ $subtract: ["$$tLast", "$$tFirst"] }, 1] }],

P2 | Confidence: High

Division by 1 is a redundant operation that adds cognitive load without value. The expression { $divide: [X, 1] } is equivalent to X. Removing this unnecessary operation would simplify the aggregation pipeline and improve readability without changing behavior.

Suggested change
- $divide: [{ $subtract: ["$$last", "$$first"] }, { $divide: [{ $subtract: ["$$tLast", "$$tFirst"] }, 1] }],
+ $divide: [{ $subtract: ["$$last", "$$first"] }, { $subtract: ["$$tLast", "$$tFirst"] }]

},
},
in: {
$cond: [

[Contextual Comment: refers to code near line 428, anchored at line 429.]


P1 | Confidence: High

The change replaces a division by 1000 with division by 1 in network metric calculations. While the PR description states the data is already in seconds, this change fundamentally alters the rate calculation from bytes-per-millisecond to bytes-per-second. However, there's a critical inconsistency: the change was applied to only 5 of 6 similar metric calculations (deltaBytesSent, deltaBytesRecv, deltaPacketsSent, deltaPacketsRecv, deltaErrIn), but deltaErrOut remains unchanged. This creates inconsistent units across related metrics, potentially causing misleading visualizations where 5 metrics use bytes/second while deltaErrOut uses bytes/millisecond (1000x larger). The frontend likely expects consistent units across all network metrics.

Code Suggestion:

$divide: [{ $subtract: ["$$last", "$$first"] }, { $divide: [{ $subtract: ["$$tLast", "$$tFirst"] }, 1] }],

Collaborator

@ajhollid ajhollid left a comment


I don't believe this is correct; timestamp subtraction gives differences in milliseconds, not seconds.

What leads you to conclude the time difference is in seconds?

More likely, the data returned from Capture is not consistent, perhaps returning bytes in some cases and kilobytes in others.

@mertssmnoglu
Member

@Owaiseimdad

I want to share my setup.
In my case (Linux x86_64), Capture returns 934999033 bytes as the total download on the wlp1s0 interface.
I opened my system monitor, converted the bytes to MiB (934999033 / 1024 / 1024), and it equals the reported total download size. It's fine.
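For the record, that conversion works out to about 891.7 MiB (quick sketch using the figure above):

```javascript
const totalBytes = 934999033;         // value Capture reported for wlp1s0
const mib = totalBytes / 1024 / 1024; // bytes -> MiB

console.log(mib.toFixed(1)); // "891.7"
```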

If the values are inconsistent on capture side, please open an issue on Capture repo so I can deep dive into the problem and fix it if possible.

@ajhollid
Collaborator

@mertssmnoglu values are correct for me too, also Linux x86.

Does anyone have MacOS they can test and confirm on?

So far it seems only MacOS has data off by a factor of 1000.

@gorkem-bwl
Contributor

> @mertssmnoglu values are correct for me too, also Linux x86.
>
> Does anyone have MacOS they can test and confirm on?
>
> So far it seems only MacOS has data off by a factor of 1000.

I can test on Mac, guys (regardless of the fact that it's rare to deploy Capture on a Mac lol)

@ajhollid
Collaborator

@gorkem-bwl sure if you can test and verify that network stats are accurate that would be really great.

I have thus far not been able to replicate the issue 🤔
