36 changes: 36 additions & 0 deletions .agents/rules/code-style-guide.md
---
trigger: always_on
---

# Code Style Guide

## Python
We strictly adhere to the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html).
- **Indentation:** Use 4 spaces per indentation level.
- **Line Length:** Maximum 80 characters.
- **Tools:** Use `ruff` for linting and formatting to enforce these standards automatically. Use `mypy` for static type checking.
- **Documentation:** Use Google-style docstrings for documenting modules, classes, and functions. Every module, function, and class must have a docstring.
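As a minimal sketch of the docstring convention above, here is a hypothetical function (the name `scale` and its behavior are invented for illustration) with the Google-style `Args`/`Returns` sections, run through `python3` to confirm it is valid Python:

```shell
# Hypothetical example of a Google-style docstring; ruff/mypy are not run here.
scaled="$(python3 - <<'EOF'
def scale(values, factor):
    """Scales each value by a constant factor.

    Args:
        values: Numbers to scale.
        factor: Multiplier applied to each value.

    Returns:
        A new list with each value multiplied by factor.
    """
    return [v * factor for v in values]

print(scale([1, 2, 3], 2))
EOF
)"
echo "$scaled"
```

`ruff` and `mypy` would additionally flag line-length, import, and annotation issues that this sketch does not exercise.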

## Bash
For shell scripting, the main goals are readability, maintainability, and safety.
- **Linting:** Always run scripts through [`shellcheck`](https://www.shellcheck.net/) and resolve any warnings.
- **Formatting:** Use [`shfmt`](https://github.com/mvdan/sh) for consistent formatting (suggested: 2 spaces for indentation).
- **Safety:** Always start scripts with `set -euo pipefail` so that errors, unset variables, and failures in the middle of a pipeline abort the script early.
- **Best Practices:**
- ALWAYS quote your variables (e.g., `"$var"`) to prevent word splitting and globbing issues.
- Use `$(command)` for command substitution instead of backticks (`` `command` ``).
- Use function declarations like `my_func() { ... }` instead of `function my_func { ... }`.
- Prefer descriptive variable names over terse ones.
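The practices above can be sketched in one small script; the file name and function are illustrative, not part of this repository:

```shell
#!/usr/bin/env bash
# Minimal sketch of the Bash conventions above.
set -euo pipefail

# Function declaration style: name() { ... }, not "function name { ... }".
count_lines() {
  local input_file="$1"
  # Quote every expansion; use $(...) rather than backticks.
  wc -l < "$input_file"
}

printf 'one\ntwo\n' > /tmp/style_demo.txt
line_count="$(count_lines /tmp/style_demo.txt)"
echo "line count: ${line_count}"
```

Running it through `shellcheck` and `shfmt -i 2` should produce no complaints.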

## Zig
Zig's standard library and compiler set a strong precedent for style. Prioritize explicitness, code readability, and leveraging the built-in toolchain.
- **Formatting:** Always run `zig fmt` on your codebase before submitting changes. The formatter is the absolute source of truth for indentation, line breaks, and bracket placement.
- **Naming Conventions:**
- Use `PascalCase` for types (structs, enums, unions, error sets).
- Use `camelCase` for functions.
- Use `snake_case` for variables, struct fields, file names, and directory names.
- **Best Practices:**
- Avoid `catch unreachable` unless you can mathematically prove the error will never happen. If you use it, document *why* it is safe.
- Value explicit error handling and propagation (`try`).
- Keep functions focused and small.
- Add doc comments (`///`) for all public APIs, structs, and complex internal logic.
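A sketch of the pre-submit commands this section implies, assuming a standard `zig build` project with sources under `src/` and a `test` build step; it is guarded so it degrades gracefully when `zig` is not on `PATH`:

```shell
# Guarded sketch; the src/ path and test step are assumptions about the project layout.
if command -v zig >/dev/null 2>&1; then
  zig fmt --check src || echo "zig fmt found unformatted files"
  zig build test || echo "zig tests failed"
else
  echo "zig not found on PATH; skipping checks"
fi
summary="zig pre-submit checks attempted"
echo "$summary"
```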
38 changes: 38 additions & 0 deletions .agents/rules/git-usage-guide.md
---
trigger: always_on
---

# Git Usage Guide

As an agent working on this repository, you must adhere to the following Git workflow and commit message conventions.

## Commit Message Format

We strictly follow the [Conventional Commits](https://www.conventionalcommits.org/) specification for our commit messages.

1. **Header:** The commit header must use the conventional commit format: `<type>(<scope>): <description>`.
* Examples of types: `feat`, `fix`, `docs`, `style`, `refactor`, `perf`, `test`, `chore`.
2. **Body:** Explain the changes made and *why* they are needed. Do not just describe *what* changed, as the diff already shows that. Focus on the reasoning and context.
3. **Footer:** Every commit must include a `Signed-off-by` trailer in the footer.
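A hypothetical commit following all three rules — the `parser` scope, the change itself, and the author identity are invented for illustration, demonstrated in a throwaway repository:

```shell
# Build a throwaway repo and make one Conventional Commit with a sign-off.
tmp="$(mktemp -d)"
cd "$tmp"
git init -q .
git config user.name "Example Agent"
git config user.email "agent@example.com"
echo "demo" > demo.txt
git add demo.txt
git -c commit.gpgsign=false commit -q -m "feat(parser): add tolerant mode for malformed packets

Malformed CONNECT packets from legacy clients were dropped outright.
Accepting them behind a flag avoids breaking existing deployments.

Signed-off-by: Example Agent <agent@example.com>"
git log -1 --format=%B
```

Note the body explains *why* the change exists, not what the diff already shows.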

## Pre-Commit Requirements

Before committing your changes, you must guarantee code quality by running the appropriate checks based on the file types modified:

* **Python:** The code must pass `ruff` checks, covering both linting and formatting.
* **Bash:** The code must be checked with `shellcheck` to resolve any warnings or errors.
* **Zig:** Any Zig code must be successfully built and tested.

## Integration Testing

Before considering a goal "done" or creating a pull request, you **must run all integration tests**.

Execute the following script from the root directory to run the full test suite:
```bash
./tests/run_all.sh
```
Ensure all tests pass successfully.

## Performance Benchmarks

If your changes are performance-related (e.g., optimizations, memory management, algorithmic improvements), you **must run the relevant benchmarks** to verify their impact. Include the benchmark results in your final report or pull request.
52 changes: 52 additions & 0 deletions .agents/workflows/run-benchmarks-and-validate-locally.md
---
description: Run all benchmarks locally, validate against previous results, and inform user of regressions
---

# Run Benchmarks and Validate

This workflow guides the agent to start the MQTT server, execute the full benchmark suite via `uv`, collect the new results, compare them with previous ones and defined thresholds, and generate a summarized report.

## Step 1: Start the ProtoMQ Server in the Background
Start the MQTT server in `ReleaseSafe` optimization mode and run it in the background.

// turbo
1. Use the `run_command` tool from the root directory to execute:
`zig build -Doptimize=ReleaseSafe run-server &`
2. Wait a few seconds to ensure the server is successfully running and accepting connections.
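One way to make step 2 deterministic is to poll the port instead of sleeping a fixed interval. In this sketch a throwaway Python HTTP server stands in for the real ProtoMQ server, and port 18123 is an arbitrary choice:

```shell
# Stand-in for "zig build ... run-server &": any process listening on a port.
python3 -m http.server 18123 >/dev/null 2>&1 &
server_pid=$!
server_ready="no"
# Poll for up to ~10 seconds until the port accepts connections.
for _ in $(seq 1 50); do
  if python3 -c "import socket; socket.create_connection(('127.0.0.1', 18123), 1).close()" 2>/dev/null; then
    server_ready="yes"
    break
  fi
  sleep 0.2
done
echo "server ready: ${server_ready}"
kill "$server_pid" 2>/dev/null
```

For the real server, substitute the MQTT listen port in the probe.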

## Step 2: Setup and Run the Benchmarks
Use `uv` within the `benchmarks` directory to execute the benchmarks sequentially.

// turbo
1. Change to the `benchmarks` directory and sync dependencies:
`cd benchmarks && uv sync`
2. Execute the benchmark suite sequentially (using `run_command`):
- `uv run protomq-bench-b1`
- `uv run protomq-bench-b2`
- `uv run protomq-bench-b3`
- `uv run protomq-bench-b4`
- `uv run protomq-bench-b5`
- `uv run protomq-bench-b6`
- `uv run protomq-bench-b7`

## Step 3: Stop the Server
Ensure the server is stopped after benchmarks finish to avoid port conflicts and dangling processes.

// turbo
1. Terminate the server process:
`pkill -f "zig-out/bin/server" || pkill -f "zig build.*run"`
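A cleaner alternative to pattern-matching with `pkill` is to capture the PID when the server is launched and kill exactly that process. A long-running `sleep` stands in for the server in this sketch:

```shell
# Capture the PID at launch, then terminate that exact process.
sleep 60 &
server_pid=$!
kill "$server_pid"
wait "$server_pid" 2>/dev/null || true
stop_message="server process ${server_pid} stopped"
echo "$stop_message"
```

This avoids accidentally killing unrelated processes whose command lines happen to match the pattern.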

## Step 4: Analyze and Compare the Results
Read the newly generated results and compare them against past ones.

1. Use `list_dir` on `benchmarks/results/` to locate the latest hardware directory and its `latest/` contents.
2. Read the new JSON outputs for each benchmark using `view_file`.
3. Locate older result JSON files to use as a baseline (or refer to established thresholds in past summaries).
4. Analyze the crucial metrics, such as `p99 latency`, `throughput (msg/s)`, and `memory usage`.
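The comparison in this step can be sketched as follows. The JSON field names (`p99_latency_ms`, `throughput_msg_s`) are assumptions about the result schema, and the sample baseline/latest files are fabricated for illustration:

```shell
# Fabricated baseline and latest results with an assumed schema.
mkdir -p /tmp/bench_demo
cat > /tmp/bench_demo/baseline.json <<'EOF'
{"p99_latency_ms": 12.0, "throughput_msg_s": 50000}
EOF
cat > /tmp/bench_demo/latest.json <<'EOF'
{"p99_latency_ms": 15.0, "throughput_msg_s": 48000}
EOF
report="$(python3 - <<'EOF'
import json

base = json.load(open("/tmp/bench_demo/baseline.json"))
new = json.load(open("/tmp/bench_demo/latest.json"))
# For latency, an increase is a regression; for throughput, a decrease is.
for metric, higher_is_better in [("p99_latency_ms", False), ("throughput_msg_s", True)]:
    change = (new[metric] - base[metric]) / base[metric] * 100
    regressed = change < 0 if higher_is_better else change > 0
    tag = "REGRESSION" if regressed else "ok"
    print(f"{metric}: {change:+.1f}% ({tag})")
EOF
)"
echo "$report"
```

In practice, thresholds (e.g., tolerating small run-to-run noise) would replace the strict "any worsening is a regression" rule used here.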

## Step 5: Inform the User
Present a concise report directly to the user containing:
- Confirmation of which benchmarks completed successfully.
- The vital performance metrics extracted from the JSON results.
- A clear indication of any **regressions** or **improvements** compared to earlier runs.
- Conclude with a recommendation if a performance regression requires troubleshooting.
54 changes: 54 additions & 0 deletions .agents/workflows/run-benchmarks-and-validate-remote.md
---
description: Run all benchmarks on a remote device via SSH, validate against previous results, and inform user of regressions
---

# Run Benchmarks and Validate on Remote Device

This workflow guides the agent to copy the project to a remote device via SSH, start the MQTT server, execute the full benchmark suite via `uv`, copy the results back to the local repository, compare them with previous ones and defined thresholds, and generate a summarized report.

**Prerequisites:** Ensure the user provides the remote SSH connection string (e.g., `user@remote_host`) and the remote target directory.

## Step 1: Copy Project to Remote Device
Sync the current project to the remote device, excluding large generated directories.

1. Ask the user for the `<SSH_TARGET>` (e.g., `user@hostname`) and `<REMOTE_DIR>` if not already provided.
2. Use the `run_command` tool to execute:
`rsync -avz --exclude='.git' --exclude='.zig-cache' --exclude='zig-out' --exclude='benchmarks/.venv' ./ <SSH_TARGET>:<REMOTE_DIR>`

## Step 2: Start the ProtoMQ Server on Remote
Start the MQTT server in `ReleaseSafe` optimization mode and run it in the background on the remote device.

1. Execute:
`ssh <SSH_TARGET> "cd <REMOTE_DIR> && nohup zig build -Doptimize=ReleaseSafe run-server > server.log 2>&1 &"`
2. Wait a few seconds, then check `server.log` on the remote host if needed to confirm the server started and is accepting connections. (`nohup` keeps the server alive after the SSH session ends.)

## Step 3: Setup and Run the Benchmarks on Remote
Execute the benchmark suite sequentially on the remote device.

1. Execute:
`ssh <SSH_TARGET> "cd <REMOTE_DIR>/benchmarks && uv sync && uv run protomq-bench-b1 && uv run protomq-bench-b2 && uv run protomq-bench-b3 && uv run protomq-bench-b4 && uv run protomq-bench-b5 && uv run protomq-bench-b6 && uv run protomq-bench-b7"`

## Step 4: Stop the Server on Remote
Ensure the server is stopped after benchmarks finish to avoid port conflicts and dangling processes.

1. Terminate the server process:
`ssh <SSH_TARGET> 'pkill -f "zig-out/bin/server" || pkill -f "zig build.*run"'`

## Step 5: Copy Results Back to Local Repository
Retrieve the newly generated benchmark results from the remote device.

1. Execute:
`rsync -avz <SSH_TARGET>:<REMOTE_DIR>/benchmarks/results/ ./benchmarks/results/`

## Step 6: Analyze and Compare the Results
Read the newly synchronized results and compare them against past ones.

1. Use `list_dir` on `benchmarks/results/` to locate the latest hardware directory and its `latest/` contents.
2. Read the new JSON outputs for each benchmark using `view_file`.
3. Locate older result JSON files to use as a baseline.
4. Analyze the crucial metrics, such as `p99 latency`, `throughput (msg/s)`, and `memory usage`.

## Step 7: Inform the User
Present a concise report directly to the user containing:
- Confirmation of which benchmarks completed successfully.
- The vital performance metrics extracted from the JSON results.
- A clear indication of any **regressions** or **improvements** compared to earlier runs.
- Conclude with a recommendation if a performance regression requires troubleshooting.