HYPERFLEET-556 - feat - update mock HyperFleet API to generate more realistic payloads #80
Conversation
to generate payloads matching production shape
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings. Use the following commands to manage reviews:
Walkthrough

The mock Hyperfleet API server now reads CLUSTER_COUNT via clusterCountFromEnv(), defaulting when unset and logging on parse errors; main aborts if the resolved count is <= 0. Cluster generation was refactored: createCluster(i) produces deterministic IDs/UUID-like names, per-cluster created/updated times, generation, created_by/updated_by, labels, and dynamic status.conditions from buildConditions(generation). The Hyperfleet cluster spec is now populated via buildOCMSpec and helpers, adding OCM subscription/version/upgrade info, href wiring, state/status derivation, and provider/region enrichment for AWS and GCP. Startup logging reports bind address, total marshaled JSON size, and approximate bytes per cluster.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
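The walkthrough notes that each cluster's status.conditions are now derived dynamically from buildConditions(generation). As a minimal sketch of what such a helper could look like — the condition types and the generation-based rule below are assumptions for illustration, not the PR's actual logic:

```go
package main

import "fmt"

// buildConditions derives per-cluster status conditions from the cluster's
// generation. The condition types and the generation%5 rule are illustrative
// assumptions, not the mock server's real derivation.
func buildConditions(generation int) []map[string]any {
	var conds []map[string]any
	for _, t := range []string{"Available", "Progressing", "Degraded"} {
		status := "False"
		// Mark clusters Available by default; flag every fifth
		// generation as still Progressing.
		if t == "Available" || (t == "Progressing" && generation%5 == 0) {
			status = "True"
		}
		conds = append(conds, map[string]any{
			"type":                t,
			"status":              status,
			"observed_generation": generation,
			"message":             fmt.Sprintf("%s for generation %d", t, generation),
		})
	}
	return conds
}
```

Returning plain `map[string]any` values keeps the sketch close to the ad-hoc JSON shape a mock server typically marshals.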
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
test/mock-hyperfleet-api/main.go (1)
13-21: ⚠️ Potential issue | 🟠 Major

Cap `CLUSTER_COUNT` before prebuilding the payload.

This path materializes every cluster map and then a second full JSON buffer at startup. With the much larger payload shape, any large positive `CLUSTER_COUNT` can make the mock slow or OOM before it serves a request.

Possible guardrail:
```diff
 const (
 	port                = 8888
 	defaultClusterCount = 100
+	maxClusterCount     = 10000
 	clustersPath        = "/api/hyperfleet/v1/clusters"
 	ocmBase             = "/api/clusters_mgmt/v1"
 	ocmClusters         = ocmBase + "/clusters"
 	maxUpgradePatches   = 20
 )
@@
 func main() {
 	clusterCount := clusterCountFromEnv()
-	if clusterCount <= 0 {
-		log.Fatalf("CLUSTER_COUNT must be greater than 0, got %d", clusterCount)
-	}
 	clusterList := createClusterList(clusterCount)
@@
 func clusterCountFromEnv() int {
 	s := os.Getenv("CLUSTER_COUNT")
 	if s == "" {
 		return defaultClusterCount
 	}
 	n, err := strconv.Atoi(s)
 	if err != nil {
 		log.Printf("Invalid CLUSTER_COUNT %q, using default %d", s, defaultClusterCount)
 		return defaultClusterCount
 	}
+	if n <= 0 || n > maxClusterCount {
+		log.Fatalf("CLUSTER_COUNT must be between 1 and %d, got %d", maxClusterCount, n)
+	}
 	return n
 }
```

Also applies to: 33-41, 61-71
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/mock-hyperfleet-api/main.go` around lines 13 - 21, The startup currently prebuilds full cluster maps and JSON buffers using an uncontrolled CLUSTER_COUNT which can cause OOM; add a cap and use the capped value when constructing payloads: read the configured CLUSTER_COUNT (env or flag), compute cappedCount = min(requestedCount, defaultClusterCount) or a hard upper bound (e.g., defaultClusterCount or a new constant), and use cappedCount everywhere you build cluster data/JSON (places that reference defaultClusterCount, clustersPath payload creation, and any code around maxUpgradePatches). Ensure the cap is applied before any large allocations for the prebuilt cluster map and JSON buffer so startup never materializes more than the cappedCount.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/mock-hyperfleet-api/main.go`:
- Line 46: The mock HTTP server registers only clustersPath via
http.HandleFunc(clustersPath, jsonHandler(clusterBytes)) but the JSON payload
(clusterBytes and similar fixtures) emits hrefs under
"/api/clusters_mgmt/v1/..." and "/api/accounts_mgmt/v1/..." which will 404 when
dereferenced; either add handlers for those advertised routes (register
http.HandleFunc for the clusters_mgmt and accounts_mgmt prefixes) that serve the
corresponding fixture bytes, or change the fixtures (e.g., clusterBytes and the
other bytes served in the ranges mentioned) to emit non-actionable or mock-local
hrefs that match the existing registered routes (or relative links), ensuring
the href values and registered paths (clustersPath, any other paths you add) are
consistent across the mock.
- Around line 101-106: fakeUUID generates segments larger than their UUID widths
because fmt's width specifiers are minimums; update fakeUUID so each numeric
part is explicitly masked to the UUID field size before formatting (e.g., mask
the first segment to 32 bits/0xFFFFFFFF for the 8-hex field, the 2nd to 16 bits
for 4-hex, the 3rd to 12/0x0FFF with the 0x4000 bit set, the 4th to 14/0x3FFF
with the 0x8000 bit set, and the last to 48 bits/0xFFFFFFFFFFFF for the 12-hex
field) to guarantee 8-4-4-4-12 output; modify the fakeUUID function accordingly
so createCluster() gets valid UUID-shaped strings for name and external_id.
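Applied to code, the masking that prompt describes might look like the following — the multiplier constants are illustrative placeholders, not the mock's real ones:

```go
package main

import "fmt"

// fakeUUID builds a deterministic 8-4-4-4-12 UUID-shaped string from i.
// fmt's width specifiers are minimums, so each segment is masked to its
// exact field width; the version (0x4xxx) and variant (0x8xxx) bits are
// forced as in RFC 4122. The multiplier constants are illustrative.
func fakeUUID(i int) string {
	n := uint64(i)
	return fmt.Sprintf("%08x-%04x-%04x-%04x-%012x",
		(n*2654435761)&0xFFFFFFFF,   // 32 bits -> exactly 8 hex chars
		(n*40503)&0xFFFF,            // 16 bits -> 4 hex chars
		((n*12343)&0x0FFF)|0x4000,   // 12 bits + version-4 nibble
		((n*54321)&0x3FFF)|0x8000,   // 14 bits + RFC 4122 variant bits
		(n*99999989)&0xFFFFFFFFFFFF, // 48 bits -> 12 hex chars
	)
}
```

With every segment masked before formatting, the output is guaranteed to be exactly 36 characters for any input.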
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0d01a610-13ca-4e72-b929-0e732a28cb57
📒 Files selected for processing (1)
test/mock-hyperfleet-api/main.go
to guarantee valid UUID format
This is a test mock server where the operator controls the env var intentionally. Capping it would limit profiling use cases.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/mock-hyperfleet-api/main.go`:
- Around line 85-88: fakeClusterID produces variable-length hex segments because
multiplications overflow past 8 hex digits; mirror the fakeUUID fix by masking
each computed segment to 32 bits before formatting (e.g., cast or apply &
0xffffffff to the i*2654435761, i*40503, i*12345678 results) so that the "%08x"
fields never exceed 8 hex chars, and ensure the first segment uses the same
fixed-width mask consistent with "%07x" (the initial i % 0x0FFFFFFF). Update the
fakeClusterID function to apply these masks to the multiplier results while
keeping the existing fmt.Sprintf layout.
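A sketch of that fix, using the multipliers and layout quoted in the prompt — the surrounding function shape is assumed:

```go
package main

import "fmt"

// fakeClusterID builds a fixed-width deterministic hex ID. Each multiplier
// result is masked to 32 bits before formatting so the "%08x" fields never
// exceed 8 hex chars, and the first segment stays within 7 hex chars via
// the modulo. (Sketch of the suggested fix, not the file's actual code.)
func fakeClusterID(i int) string {
	n := uint64(i)
	return fmt.Sprintf("%07x%08x%08x%08x",
		n%0x0FFFFFFF,              // < 2^28 -> at most 7 hex chars
		(n*2654435761)&0xFFFFFFFF, // masked to 32 bits
		(n*40503)&0xFFFFFFFF,
		(n*12345678)&0xFFFFFFFF,
	)
}
```

The result is always 7 + 8 + 8 + 8 = 31 characters, regardless of how large `i` grows.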
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e01d9431-31d5-405a-85ae-c10414658772
📒 Files selected for processing (1)
test/mock-hyperfleet-api/main.go
Category: JIRA — The PR title references HYPERFLEET-556 but this work corresponds to HYPERFLEET-740.
multiplier results to 32 bits to fix hex overflow
I had already closed 740 when I realized we’d benefit from a more realistic payload during profiling (556), so this work is tracked under 556. I’ve kept the cross-reference to 740 in the body for context.
error in CLUSTER_COUNT log message
OCM response structure
distribution
and deduplicate state logic
ciaranRoche
left a comment
Small thing, createdAt is computed with the same formula here as in createCluster on line 112. If someone later tweaks the timestamp logic in one place but not the other, the top-level created_time and the spec's creation_timestamp/activity_timestamp will silently diverge.
Since buildOCMSpec already receives id and uuid from the caller, might be worth just passing createdAt in too:
```go
func buildOCMSpec(i int, id, uuid string, createdAt time.Time) map[string]any {
```

Then you compute it once in createCluster and it stays consistent everywhere.
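A minimal sketch of that suggestion — the field names and the timestamp formula here are placeholders, not the file's actual ones:

```go
package main

import (
	"fmt"
	"time"
)

// buildOCMSpec receives createdAt from the caller instead of recomputing it,
// so the spec timestamps cannot drift from the top-level created_time.
// Field names are illustrative, not the mock's exact schema.
func buildOCMSpec(i int, id, uuid string, createdAt time.Time) map[string]any {
	return map[string]any{
		"id":                 id,
		"external_id":        uuid,
		"creation_timestamp": createdAt.UTC().Format(time.RFC3339),
		"activity_timestamp": createdAt.UTC().Format(time.RFC3339),
	}
}

func createCluster(i int) map[string]any {
	// Single source of truth for the cluster's creation time
	// (placeholder formula).
	createdAt := time.Unix(1_700_000_000, 0).Add(time.Duration(i) * time.Minute)
	id := fmt.Sprintf("mock-cluster-%04d", i)
	uuid := fmt.Sprintf("mock-uuid-%04d", i)
	return map[string]any{
		"id":           id,
		"created_time": createdAt.UTC().Format(time.RFC3339),
		"spec":         buildOCMSpec(i, id, uuid, createdAt),
	}
}
```

Because `createdAt` flows from one computation, a later tweak to the formula changes both timestamps together.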
HyperShift HostedCluster spec
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rafabene

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Summary
Test plan
- `go build ./test/mock-hyperfleet-api/` compiles
- `CLUSTER_COUNT=500 go run ./test/mock-hyperfleet-api/` then `curl localhost:8888/api/hyperfleet/v1/clusters` returns valid JSON

Summary by CodeRabbit