Enhance benchmark_rounds.sh with scheduling/orchestration breakdown and verbose logging #336
Conversation
Code Review (Gemini Code Assist)
This pull request enhances the benchmark script with more detailed timing metrics and verbose logging. The changes are well-structured. I've identified a couple of areas for improvement: one is a minor output formatting issue in the awk script that causes misaligned columns, and the other is an opportunity to refactor some duplicated error-handling logic in the shell script to improve maintainability. My suggestions address these points.
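To illustrate the second suggestion, the repeated failure handling could be collected into one helper along these lines. This is only a sketch: the function name, the `LOG_FILE` variable, and the messages are hypothetical, not taken from `benchmark_rounds.sh`.

```bash
# Hypothetical helper: centralizes the "benchmark run failed" handling
# that would otherwise be repeated after each run_example.py invocation.
fail_run() {
  local round="$1" exit_code="$2"
  echo "ERROR: round ${round} failed (exit ${exit_code})" >&2
  # Point the user at the verbose log if one was requested.
  if [[ -n "${LOG_FILE:-}" ]]; then
    echo "See ${LOG_FILE} for full run_example.py output" >&2
  fi
  exit "${exit_code}"
}

# Usage at each call site (run_one_round is a placeholder name):
#   run_one_round "$i" || fail_run "$i" "$?"
```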
Force-pushed from 3a57965 to 75d95c4:
Enhance benchmark_rounds.sh with scheduling/orchestration breakdown and verbose logging

Add per-round scheduling and orchestration time columns to the benchmark report, enabling finer-grained latency analysis. Add a -v/--verbose flag that captures full run_example.py console output to a timestamped log file for easier debugging of benchmark failures.
Force-pushed from 75d95c4 to bcf8a6d
Summary
- Add per-round scheduling time (`sched_start` → `sched_end`) and orchestration time (`orch_start` → `orch_end`) columns to the benchmark report, alongside the existing total elapsed time. Includes avg and trimmed-avg statistics for each breakdown.
- Add a `-v`/`--verbose` flag that saves the complete `run_example.py` console output to a timestamped log file (`benchmark_YYYYMMDD_HHMMSS.log`), making it easier to diagnose failures or unexpected results.
- Add a `new_round()` helper for cleaner round-boundary detection based on both `sched_seen` and `orch_seen` thread tracking (a sketch of this detection logic follows the list).
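A rough sketch of how this kind of per-round breakdown can be pulled out of the output stream with awk. Everything here is illustrative: the marker names, the input format, and the exact `new_round()` criterion are assumptions based on the summary above, not the script's actual implementation.

```awk
# Assumed input: lines of "<seconds> <marker>", e.g. "12.301 sched_start".
function new_round() {
    # A round is complete once both a scheduling and an orchestration
    # phase have been observed (mirrors the sched_seen/orch_seen tracking).
    return sched_seen && orch_seen
}
$2 == "sched_start" { s0 = $1; sched_seen = 1 }
$2 == "sched_end"   { sched = $1 - s0 }
$2 == "orch_start"  { o0 = $1; orch_seen = 1 }
$2 == "orch_end"    {
    orch = $1 - o0
    if (new_round()) {
        round++
        printf "round %2d  sched %7.3fs  orch %7.3fs\n", round, sched, orch
        sched_sum += sched; orch_sum += orch
        sched_seen = 0; orch_seen = 0
    }
}
END {
    if (round > 0)
        printf "avg       sched %7.3fs  orch %7.3fs\n", sched_sum / round, orch_sum / round
}
```

Fed lines of `<seconds> <marker>`, this prints one row per round plus an average, roughly matching the columns described above.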
Motivation

Previously the benchmark script only reported total elapsed time per round, making it difficult to tell whether latency regressions came from scheduling overhead or orchestration logic. Additionally, when a benchmark run failed, all `run_example.py` output was discarded (`> /dev/null`), requiring manual re-runs to debug.
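The `-v`/`--verbose` flag described above addresses that second point. A minimal sketch of the general shape, assuming a simple argument loop; the flag parsing, variable names, and the exact `run_example.py` invocation below are placeholders rather than the actual script:

```bash
VERBOSE=0
for arg in "$@"; do
  case "$arg" in
    -v|--verbose) VERBOSE=1 ;;
  esac
done

if [[ "$VERBOSE" -eq 1 ]]; then
  # Timestamped log name matching the benchmark_YYYYMMDD_HHMMSS.log pattern.
  LOG_FILE="benchmark_$(date +%Y%m%d_%H%M%S).log"
  # Keep the full console output instead of discarding it.
  python run_example.py 2>&1 | tee -a "$LOG_FILE"
else
  # Previous behavior: output discarded.
  python run_example.py > /dev/null 2>&1
fi
```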