
HYPERFLEET-556 - feat - update mock HyperFleet API to generate more realistic payloads #80

Merged
pnguyen44 merged 11 commits into main from HYPERFLEET-556
Mar 24, 2026
Conversation

@pnguyen44
Contributor

@pnguyen44 pnguyen44 commented Mar 20, 2026

Summary

  • Update mock HyperFleet API to generate payloads matching production shape
  • Supports HYPERFLEET-556

Test plan

  • go build ./test/mock-hyperfleet-api/ compiles
  • CLUSTER_COUNT=500 go run ./test/mock-hyperfleet-api/ then curl localhost:8888/api/hyperfleet/v1/clusters returns valid JSON

Summary by CodeRabbit

  • Chores
    • Mock server now emits richer, deterministic cluster records with per-cluster IDs, varied timestamps, creators, labels and generation.
    • Cluster specs are populated with OCM-related details: subscription/version info, provider and region metadata, and cluster href/state enrichment.
    • Status payloads include multiple, adapter-specific conditions in addition to Ready/Available.
    • CLUSTER_COUNT is strictly validated; invalid or non-positive values now cause the server to exit.
    • Logging reports total payload size and estimated per-cluster byte counts for clearer test diagnostics.

@openshift-ci openshift-ci bot requested review from aredenba-rh and yasun1 March 20, 2026 20:26
@coderabbitai

coderabbitai bot commented Mar 20, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

The mock Hyperfleet API server now reads CLUSTER_COUNT via clusterCountFromEnv(), defaulting when unset and logging on parse errors; main aborts if the resolved count is <= 0. Cluster generation was refactored: createCluster(i) produces deterministic IDs/UUID-like names, per-cluster created/updated times, generation, created_by/updated_by, labels, and dynamic status.conditions from buildConditions(generation). The Hyperfleet cluster spec is now populated via buildOCMSpec and helpers, adding OCM subscription/version/upgrade info, href wiring, state/status derivation, and provider/region enrichment for AWS and GCP. Startup logging reports bind address, total marshaled JSON size, and approximate bytes per cluster.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 0.00% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The PR title clearly identifies the main change—updating the mock HyperFleet API to generate more realistic payloads—and includes the tracking issue HYPERFLEET-556.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
test/mock-hyperfleet-api/main.go (1)

13-21: ⚠️ Potential issue | 🟠 Major

Cap CLUSTER_COUNT before prebuilding the payload.

This path materializes every cluster map and then a second full JSON buffer at startup. With the much larger payload shape, any large positive CLUSTER_COUNT can make the mock slow or OOM before it serves a request.

Possible guardrail
 const (
 	port                = 8888
 	defaultClusterCount = 100
+	maxClusterCount     = 10000
 	clustersPath        = "/api/hyperfleet/v1/clusters"
 	ocmBase             = "/api/clusters_mgmt/v1"
 	ocmClusters         = ocmBase + "/clusters"
 
 	maxUpgradePatches = 20
 )
@@
 func main() {
 	clusterCount := clusterCountFromEnv()
-	if clusterCount <= 0 {
-		log.Fatalf("CLUSTER_COUNT must be greater than 0, got %d", clusterCount)
-	}
 
 	clusterList := createClusterList(clusterCount)
@@
 func clusterCountFromEnv() int {
 	s := os.Getenv("CLUSTER_COUNT")
 	if s == "" {
 		return defaultClusterCount
 	}
 	n, err := strconv.Atoi(s)
 	if err != nil {
 		log.Printf("Invalid CLUSTER_COUNT %q, using default %d", s, defaultClusterCount)
 		return defaultClusterCount
 	}
+	if n <= 0 || n > maxClusterCount {
+		log.Fatalf("CLUSTER_COUNT must be between 1 and %d, got %d", maxClusterCount, n)
+	}
 	return n
 }

Also applies to: 33-41, 61-71

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/mock-hyperfleet-api/main.go` around lines 13 - 21, The startup currently
prebuilds full cluster maps and JSON buffers using an uncontrolled CLUSTER_COUNT
which can cause OOM; add a cap and use the capped value when constructing
payloads: read the configured CLUSTER_COUNT (env or flag), compute cappedCount =
min(requestedCount, defaultClusterCount) or a hard upper bound (e.g.,
defaultClusterCount or a new constant), and use cappedCount everywhere you build
cluster data/JSON (places that reference defaultClusterCount, clustersPath
payload creation, and any code around maxUpgradePatches). Ensure the cap is
applied before any large allocations for the prebuilt cluster map and JSON
buffer so startup never materializes more than the cappedCount.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@test/mock-hyperfleet-api/main.go`:
- Line 46: The mock HTTP server registers only clustersPath via
http.HandleFunc(clustersPath, jsonHandler(clusterBytes)) but the JSON payload
(clusterBytes and similar fixtures) emits hrefs under
"/api/clusters_mgmt/v1/..." and "/api/accounts_mgmt/v1/..." which will 404 when
dereferenced; either add handlers for those advertised routes (register
http.HandleFunc for the clusters_mgmt and accounts_mgmt prefixes) that serve the
corresponding fixture bytes, or change the fixtures (e.g., clusterBytes and the
other bytes served in the ranges mentioned) to emit non-actionable or mock-local
hrefs that match the existing registered routes (or relative links), ensuring
the href values and registered paths (clustersPath, any other paths you add) are
consistent across the mock.
- Around line 101-106: fakeUUID generates segments larger than their UUID widths
because fmt's width specifiers are minimums; update fakeUUID so each numeric
part is explicitly masked to the UUID field size before formatting (e.g., mask
the first segment to 32 bits/0xFFFFFFFF for the 8-hex field, the 2nd to 16 bits
for 4-hex, the 3rd to 12/0x0FFF with the 0x4000 bit set, the 4th to 14/0x3FFF
with the 0x8000 bit set, and the last to 48 bits/0xFFFFFFFFFFFF for the 12-hex
field) to guarantee 8-4-4-4-12 output; modify the fakeUUID function accordingly
so createCluster() gets valid UUID-shaped strings for name and external_id.
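The masking described above can be sketched as follows. This is a hypothetical reconstruction, not the PR's actual code: the segment multipliers and the function shape are illustrative, but the fix it demonstrates is the one the comment asks for — mask each numeric part to its UUID field width (with the version and variant bits set) so the 8-4-4-4-12 shape holds no matter how large the input gets.

```go
package main

import "fmt"

// fakeUUID masks each segment to its UUID field width before formatting.
// fmt's width specifiers are minimums, so without the masks a large input
// could spill past the 8-4-4-4-12 layout. Multipliers are illustrative.
func fakeUUID(i int) string {
	n := uint64(i)
	return fmt.Sprintf("%08x-%04x-%04x-%04x-%012x",
		uint32(n*2654435761),          // masked to 32 bits for the 8-hex field
		uint16(n*40503),               // masked to 16 bits for the 4-hex field
		(n*12345)&0x0FFF|0x4000,       // 12 bits plus the version-4 marker
		(n*6789)&0x3FFF|0x8000,        // 14 bits plus the RFC 4122 variant bit
		(n*9973)&0xFFFFFFFFFFFF,       // masked to 48 bits for the 12-hex field
	)
}

func main() {
	u := fakeUUID(42)
	fmt.Println(u, len(u)) // always 36 characters: 32 hex digits + 4 hyphens
}
```

Because every segment is masked before formatting, the output length is fixed, so createCluster() always receives a valid UUID-shaped string.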

---

Outside diff comments:
In `@test/mock-hyperfleet-api/main.go`:
- Around line 13-21: The startup currently prebuilds full cluster maps and JSON
buffers using an uncontrolled CLUSTER_COUNT which can cause OOM; add a cap and
use the capped value when constructing payloads: read the configured
CLUSTER_COUNT (env or flag), compute cappedCount = min(requestedCount,
defaultClusterCount) or a hard upper bound (e.g., defaultClusterCount or a new
constant), and use cappedCount everywhere you build cluster data/JSON (places
that reference defaultClusterCount, clustersPath payload creation, and any code
around maxUpgradePatches). Ensure the cap is applied before any large
allocations for the prebuilt cluster map and JSON buffer so startup never
materializes more than the cappedCount.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 0d01a610-13ca-4e72-b929-0e732a28cb57

📥 Commits

Reviewing files that changed from the base of the PR and between a86043c and affd0e3.

📒 Files selected for processing (1)
  • test/mock-hyperfleet-api/main.go

@pnguyen44 pnguyen44 marked this pull request as draft March 20, 2026 20:45
@pnguyen44
Contributor Author

Verify each finding against the current code and only fix it if needed.

In @test/mock-hyperfleet-api/main.go around lines 13 - 21, The startup currently
prebuilds full cluster maps and JSON buffers using an uncontrolled CLUSTER_COUNT
which can cause OOM; add a cap and use the capped value when constructing
payloads: read the configured CLUSTER_COUNT (env or flag), compute cappedCount =
min(requestedCount, defaultClusterCount) or a hard upper bound (e.g.,
defaultClusterCount or a new constant), and use cappedCount everywhere you build
cluster data/JSON (places that reference defaultClusterCount, clustersPath
payload creation, and any code around maxUpgradePatches). Ensure the cap is
applied before any large allocations for the prebuilt cluster map and JSON
buffer so startup never materializes more than the cappedCount.

This is a test mock server where the operator controls the env var intentionally. Capping it would limit profiling use cases.

@pnguyen44 pnguyen44 marked this pull request as ready for review March 23, 2026 18:37
@openshift-ci openshift-ci bot requested review from Mischulee and ciaranRoche March 23, 2026 18:37

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@test/mock-hyperfleet-api/main.go`:
- Around line 85-88: fakeClusterID produces variable-length hex segments because
multiplications overflow past 8 hex digits; mirror the fakeUUID fix by masking
each computed segment to 32 bits before formatting (e.g., cast or apply &
0xffffffff to the i*2654435761, i*40503, i*12345678 results) so that the "%08x"
fields never exceed 8 hex chars, and ensure the first segment uses the same
fixed-width mask consistent with "%07x" (the initial i % 0x0FFFFFFF). Update the
fakeClusterID function to apply these masks to the multiplier results while
keeping the existing fmt.Sprintf layout.
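A minimal sketch of that fix, assuming the multipliers named in the comment (the function body here is reconstructed for illustration, not copied from the PR): casting each product to uint32 truncates it to 32 bits, so the "%08x" fields can never exceed 8 hex characters, and the leading segment keeps its 7-hex width via the 0x0FFFFFFF modulus.

```go
package main

import "fmt"

// fakeClusterID masks every multiplier result to 32 bits (the uint32 casts)
// before formatting, so the "%08x" segments stay exactly 8 hex characters
// and the total ID length is fixed. Layout mirrors the review comment.
func fakeClusterID(i int) string {
	n := uint64(i)
	return fmt.Sprintf("%07x%08x%08x%08x",
		n%0x0FFFFFFF,           // first segment capped to 7 hex digits
		uint32(n*2654435761),   // truncated to 32 bits
		uint32(n*40503),        // truncated to 32 bits
		uint32(n*12345678),     // truncated to 32 bits
	)
}

func main() {
	id := fakeClusterID(123456789)
	fmt.Println(id, len(id)) // always 7+8+8+8 = 31 characters
}
```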

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: e01d9431-31d5-405a-85ae-c10414658772

📥 Commits

Reviewing files that changed from the base of the PR and between affd0e3 and be9a596.

📒 Files selected for processing (1)
  • test/mock-hyperfleet-api/main.go

@rafabene
Contributor

Category: JIRA

The PR title references HYPERFLEET-556 but this work corresponds to
HYPERFLEET-740 (which you created as a dependency for 556). The PR title
should reference HYPERFLEET-740 so the mock API work is properly tracked
against its own ticket. The "Supports HYPERFLEET-556" note in the body is fine
for cross-referencing, but the title should carry the specific ticket ID.

@pnguyen44
Contributor Author

The PR title references HYPERFLEET-556 but this work corresponds to
HYPERFLEET-740 (which you created as a dependency for 556). The PR title
should reference HYPERFLEET-740 so the mock API work is properly tracked
against its own ticket. The "Supports HYPERFLEET-556" note in the body is fine
for cross-referencing, but the title should carry the specific ticket ID.

I had already closed 740 when I realized we’d benefit from a more realistic payload during profiling (556), so this work is tracked under 556. I’ve kept the cross-reference to 740 in the body for context.

Contributor

@ciaranRoche ciaranRoche left a comment


Small thing: createdAt is computed with the same formula here as in createCluster on line 112. If someone later tweaks the timestamp logic in one place but not the other, the top-level created_time and the spec's creation_timestamp/activity_timestamp will silently diverge.

Since buildOCMSpec already receives id and uuid from the caller, might be worth just passing createdAt in too:

func buildOCMSpec(i int, id, uuid string, createdAt time.Time) map[string]any {

Then you compute it once in createCluster and it stays consistent everywhere.
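The suggestion can be sketched like this. The timestamp formula, map keys, and placeholder ID scheme below are illustrative stand-ins (the real ones live in main.go); the point shown is the plumbing: createdAt is computed exactly once in createCluster and threaded into buildOCMSpec, so the two fields cannot diverge.

```go
package main

import (
	"fmt"
	"time"
)

// createdAtFor stands in for the real timestamp formula in main.go.
func createdAtFor(i int) time.Time {
	base := time.Date(2026, 1, 1, 0, 0, 0, 0, time.UTC)
	return base.Add(time.Duration(i) * time.Hour) // illustrative formula
}

// buildOCMSpec now receives createdAt from the caller instead of recomputing it.
func buildOCMSpec(i int, id, uuid string, createdAt time.Time) map[string]any {
	return map[string]any{
		"id":                 id,
		"external_id":        uuid,
		"creation_timestamp": createdAt.Format(time.RFC3339),
	}
}

func createCluster(i int) map[string]any {
	id := fmt.Sprintf("cluster-%d", i) // placeholder ID scheme
	uuid := fmt.Sprintf("uuid-%d", i)  // placeholder
	createdAt := createdAtFor(i)       // computed exactly once
	return map[string]any{
		"id":           id,
		"created_time": createdAt.Format(time.RFC3339),
		"spec":         buildOCMSpec(i, id, uuid, createdAt),
	}
}

func main() {
	c := createCluster(3)
	spec := c["spec"].(map[string]any)
	fmt.Println(c["created_time"] == spec["creation_timestamp"]) // true
}
```

With a single computation site, a later tweak to the formula automatically propagates to both the top-level field and the spec.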

@rafabene
Contributor

/lgtm

@openshift-ci openshift-ci bot added the lgtm label Mar 24, 2026
@openshift-ci

openshift-ci bot commented Mar 24, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rafabene

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@pnguyen44 pnguyen44 merged commit da32353 into main Mar 24, 2026
8 checks passed
@pnguyen44 pnguyen44 deleted the HYPERFLEET-556 branch March 24, 2026 17:57
