From 5544c411b3bf69f03af0621f40a12e72fb48c807 Mon Sep 17 00:00:00 2001
From: Gyokhan Kochmarla
Date: Sun, 22 Feb 2026 16:37:20 +0100
Subject: [PATCH] docs(agents): add code style and git usage guides alongside
 remote benchmark workflow

The project needed clear guidelines for AI agents to follow when
committing code, formatting different languages, and executing tests.
Two rule files were added to strictly enforce Python formatting, Zig
conventions, and the Conventional Commits standard. A remote
benchmarking workflow was also added so that performance testing via
SSH can be automated on remote environments.

These files reside under the .agents/ directory so they are picked up
automatically by agent workflows without polluting user space.

Signed-off-by: Antigravity
---
 .agents/rules/code-style-guide.md            | 36 +++++++++++++
 .agents/rules/git-usage-guide.md             | 38 +++++++++++++
 .../run-benchmarks-and-validate-locally.md   | 52 ++++++++++++++++++
 .../run-benchmarks-and-validate-remote.md    | 54 +++++++++++++++++++
 4 files changed, 180 insertions(+)
 create mode 100644 .agents/rules/code-style-guide.md
 create mode 100644 .agents/rules/git-usage-guide.md
 create mode 100644 .agents/workflows/run-benchmarks-and-validate-locally.md
 create mode 100644 .agents/workflows/run-benchmarks-and-validate-remote.md

diff --git a/.agents/rules/code-style-guide.md b/.agents/rules/code-style-guide.md
new file mode 100644
index 0000000..c09fd33
--- /dev/null
+++ b/.agents/rules/code-style-guide.md
@@ -0,0 +1,36 @@
+---
+trigger: always_on
+---
+
+# Code Style Guide
+
+## Python
+We strictly adhere to the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html).
+- **Indentation:** Use 4 spaces per indentation level.
+- **Line Length:** Maximum 80 characters.
+- **Tools:** Use `ruff` for linting and formatting to enforce these standards automatically. Use `mypy` for strict type checking.
+- **Documentation:** Use Google-style docstrings for documenting modules, classes, and functions. Every module, function, and class must have a docstring.
+
+## Bash
+For shell scripting, the main goals are readability, maintainability, and safety.
+- **Linting:** Always run scripts through [`shellcheck`](https://www.shellcheck.net/) and resolve any warnings.
+- **Formatting:** Use [`shfmt`](https://github.com/mvdan/sh) for consistent formatting (suggested: 2 spaces for indentation).
+- **Safety:** Always start scripts with `set -euo pipefail` to catch errors, uninitialized variables, and hidden pipe failures early.
+- **Best Practices:**
+  - ALWAYS quote your variables (e.g., `"$var"`) to prevent word splitting and globbing issues.
+  - Use `$(command)` for command substitution instead of backticks (`` `command` ``).
+  - Use function declarations like `my_func() { ... }` instead of `function my_func { ... }`.
+  - Prefer descriptive variable names over terse ones.
+
+## Zig
+Zig's standard library and compiler set a strong precedent for style. Prioritize explicitness, code readability, and leveraging the built-in toolchain.
+- **Formatting:** Always run `zig fmt` on your codebase before submitting changes. The formatter is the absolute source of truth for indentation, line breaks, and bracket placement.
+- **Naming Conventions:**
+  - Use `PascalCase` for types (structs, enums, unions, errors).
+  - Use `camelCase` for functions, variables, and struct members.
+  - Use `snake_case` for file names and directory names.
+- **Best Practices:**
+  - Avoid `catch unreachable` unless you can prove the error will never happen. If you use it, document *why* it is safe.
+  - Value explicit error handling and propagation (`try`).
+  - Keep functions focused and small.
+  - Add doc comments (`///`) for all public APIs, structs, and complex internal logic.
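The Bash rules above (strict mode, quoting, `$(command)` substitution, function style) can be illustrated with a minimal, shellcheck-clean sketch; the script and its names are hypothetical, not part of the repository:

```shell
#!/usr/bin/env bash
# Minimal sketch of the Bash rules above: strict mode, quoted variables,
# $(...) substitution, and POSIX-style function declarations.
set -euo pipefail

greet_host() {
  # Descriptive, quoted local variable instead of a bare $1.
  local host_name="$1"
  printf 'Hello from %s\n' "$host_name"
}

# $(command) instead of backticks for command substitution.
current_host="$(hostname)"
greet_host "$current_host"
```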
diff --git a/.agents/rules/git-usage-guide.md b/.agents/rules/git-usage-guide.md
new file mode 100644
index 0000000..13fba25
--- /dev/null
+++ b/.agents/rules/git-usage-guide.md
@@ -0,0 +1,38 @@
+---
+trigger: always_on
+---
+
+# Git Usage Guide
+
+As an agent working on this repository, you must adhere to the following Git workflow and commit message conventions.
+
+## Commit Message Format
+
+We strictly follow the [Conventional Commits](https://www.conventionalcommits.org/) specification for our commit messages.
+
+1. **Header:** The commit header must use the conventional commit format: `<type>(<scope>): <description>`.
+   * Examples of types: `feat`, `fix`, `docs`, `style`, `refactor`, `perf`, `test`, `chore`.
+2. **Body:** Explain the changes made and *why* they are needed. Do not just describe *what* changed, as the diff already shows that. Focus on the reasoning and context.
+3. **Footer:** Every commit must have a sign-off in the footer.
+
+## Pre-Commit Requirements
+
+Before committing your changes, you must guarantee code quality by running the appropriate checks based on the file types modified:
+
+* **Python:** The code must pass `ruff` linting and formatting checks.
+* **Bash:** The code must be checked with `shellcheck`, and any warnings or errors must be resolved.
+* **Zig:** Any Zig code must build successfully and pass its tests.
+
+## Integration Testing
+
+Before considering a goal "done" or creating a pull request, you **must run all integration tests**.
+
+Execute the following script from the root directory to run the full test suite:
+```bash
+./tests/run_all.sh
+```
+Ensure all tests pass successfully.
+
+## Performance Benchmarks
+
+If your changes are performance-related (e.g., optimizations, memory management, algorithmic improvements), you **must run the relevant benchmarks** to verify the impact of your changes. Include the benchmark results in your final report or pull request.
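As a quick illustration of the header rule above, a hypothetical pre-commit check could validate the `type(scope): description` shape with a regex; the regex and sample header are assumptions for illustration, not actual project tooling:

```shell
#!/usr/bin/env bash
# Hypothetical check: does a commit header match the Conventional Commits
# shape `type(scope): description` described above?
set -euo pipefail

header='docs(agents): add code style and git usage guides'
if printf '%s' "$header" |
  grep -Eq '^(feat|fix|docs|style|refactor|perf|test|chore)(\([a-z0-9-]+\))?: .+'; then
  echo "header OK"
else
  echo "header violates Conventional Commits" >&2
  exit 1
fi
```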
diff --git a/.agents/workflows/run-benchmarks-and-validate-locally.md b/.agents/workflows/run-benchmarks-and-validate-locally.md
new file mode 100644
index 0000000..eb833bd
--- /dev/null
+++ b/.agents/workflows/run-benchmarks-and-validate-locally.md
@@ -0,0 +1,52 @@
+---
+description: Run all benchmarks locally, validate against previous results, and inform user of regressions
+---
+
+# Run Benchmarks and Validate
+
+This workflow guides the agent to start the MQTT server, execute the full benchmark suite via `uv`, collect the new results, compare them with previous results and defined thresholds, and generate a summarized report.
+
+## Step 1: Start the ProtoMQ Server in the Background
+Start the MQTT server in `ReleaseSafe` optimization mode and run it in the background.
+
+// turbo
+1. Use the `run_command` tool from the root directory to execute:
+   `zig build -Doptimize=ReleaseSafe run-server &`
+2. Wait a few seconds to ensure the server is running and accepting connections.
+
+## Step 2: Set Up and Run the Benchmarks
+Use `uv` within the `benchmarks` directory to execute the benchmarks sequentially.
+
+// turbo
+1. Change to the `benchmarks` directory and sync dependencies:
+   `cd benchmarks && uv sync`
+2. Execute the benchmark suite sequentially (using `run_command`):
+   - `uv run protomq-bench-b1`
+   - `uv run protomq-bench-b2`
+   - `uv run protomq-bench-b3`
+   - `uv run protomq-bench-b4`
+   - `uv run protomq-bench-b5`
+   - `uv run protomq-bench-b6`
+   - `uv run protomq-bench-b7`
+
+## Step 3: Stop the Server
+Ensure the server is stopped after the benchmarks finish to avoid port conflicts and dangling processes.
+
+// turbo
+1. Terminate the server process:
+   `pkill -f "zig-out/bin/server" || pkill -f "zig build.*run"`
+
+## Step 4: Analyze and Compare the Results
+Read the newly generated results and compare them against past ones.
+
+1. Use `list_dir` on `benchmarks/results/` to locate the latest hardware directory and its `latest/` contents.
+2. Read the new JSON outputs for each benchmark using `view_file`.
+3. Locate older result JSON files to use as a baseline (or refer to established thresholds in past summaries).
+4. Analyze crucial metrics such as p99 latency, throughput (msg/s), and memory usage.
+
+## Step 5: Inform the User
+Present a concise report directly to the user containing:
+- Confirmation of which benchmarks completed successfully.
+- The vital performance metrics extracted from the JSON results.
+- A clear indication of any **regressions** or **improvements** compared to earlier runs.
+- A recommendation on whether a performance regression requires troubleshooting.
diff --git a/.agents/workflows/run-benchmarks-and-validate-remote.md b/.agents/workflows/run-benchmarks-and-validate-remote.md
new file mode 100644
index 0000000..616150a
--- /dev/null
+++ b/.agents/workflows/run-benchmarks-and-validate-remote.md
@@ -0,0 +1,54 @@
+---
+description: Run all benchmarks on a remote device via SSH, validate against previous results, and inform user of regressions
+---
+
+# Run Benchmarks and Validate on Remote Device
+
+This workflow guides the agent to copy the project to a remote device via SSH, start the MQTT server, execute the full benchmark suite via `uv`, copy the results back to the local repository, compare them with previous results and defined thresholds, and generate a summarized report.
+
+**Prerequisites:** Ensure the user provides the remote SSH connection string (e.g., `user@remote_host`) and the remote target directory.
+
+## Step 1: Copy Project to Remote Device
+Sync the current project to the remote device, excluding large generated directories.
+
+1. Ask the user for the `<remote_host>` (e.g., `user@hostname`) and the `<remote_dir>` if not already provided.
+2. Use the `run_command` tool to execute:
+   `rsync -avz --exclude='.git' --exclude='.zig-cache' --exclude='zig-out' --exclude='benchmarks/.venv' ./ <remote_host>:<remote_dir>`
+
+## Step 2: Start the ProtoMQ Server on Remote
+Start the MQTT server in `ReleaseSafe` optimization mode and run it in the background on the remote device.
+
+1. Execute:
+   `ssh <remote_host> "cd <remote_dir> && zig build -Doptimize=ReleaseSafe run-server > server.log 2>&1 &"`
+
+## Step 3: Set Up and Run the Benchmarks on Remote
+Execute the benchmark suite sequentially on the remote device.
+
+1. Execute:
+   `ssh <remote_host> "cd <remote_dir>/benchmarks && uv sync && uv run protomq-bench-b1 && uv run protomq-bench-b2 && uv run protomq-bench-b3 && uv run protomq-bench-b4 && uv run protomq-bench-b5 && uv run protomq-bench-b6 && uv run protomq-bench-b7"`
+
+## Step 4: Stop the Server on Remote
+Ensure the server is stopped after the benchmarks finish to avoid port conflicts and dangling processes.
+
+1. Terminate the server process:
+   `ssh <remote_host> 'pkill -f "zig-out/bin/server" || pkill -f "zig build.*run"'`
+
+## Step 5: Copy Results Back to Local Repository
+Retrieve the newly generated benchmark results from the remote device.
+
+1. Execute:
+   `rsync -avz <remote_host>:<remote_dir>/benchmarks/results/ ./benchmarks/results/`
+
+## Step 6: Analyze and Compare the Results
+Read the newly synchronized results and compare them against past ones.
+
+1. Use `list_dir` on `benchmarks/results/` to locate the latest hardware directory and its `latest/` contents.
+2. Read the new JSON outputs for each benchmark using `view_file`.
+3. Locate older result JSON files to use as a baseline.
+4. Analyze crucial metrics such as p99 latency, throughput (msg/s), and memory usage.
+
+## Step 7: Inform the User
+Present a concise report directly to the user containing:
+- Confirmation of which benchmarks completed successfully.
+- The vital performance metrics extracted from the JSON results.
+- A clear indication of any **regressions** or **improvements** compared to earlier runs.
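The threshold comparison in the analysis step (Step 4 locally, Step 6 remotely) can be sketched in shell; the metric values and the 5% threshold below are illustrative assumptions, since the actual result-JSON schema is not shown, and a real run would read the numbers from the baseline and latest result files:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the baseline comparison: flag a p99 latency
# regression beyond an allowed threshold. Values and the 5% threshold
# are assumptions; real runs would parse them from the result JSON.
set -euo pipefail

baseline_p99_ms="12.0"
latest_p99_ms="14.0"
threshold_pct="5" # allowed worsening, in percent

# awk handles the floating-point math that bash lacks.
regressed="$(awk -v base="$baseline_p99_ms" -v new="$latest_p99_ms" -v t="$threshold_pct" \
  'BEGIN { if (new > base * (1 + t / 100)) print "yes"; else print "no" }')"

if [ "$regressed" = "yes" ]; then
  echo "REGRESSION: p99 latency ${baseline_p99_ms}ms -> ${latest_p99_ms}ms"
fi
```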