Conversation
PhilippMatthes
left a comment
Love this progress! But let's revisit how we want to handle unknown resources.
pkg/multicluster/client.go
Outdated
```go
// ClustersForGVK returns all clusters that serve the given GVK.
// If no remote clusters are configured, only the home cluster is returned.
// For fallback GVKs with remote clusters, the home cluster is appended
// because resources might have been written there as a fallback.
func (c *Client) ClustersForGVK(gvk schema.GroupVersionKind) []cluster.Cluster {
```
We should change the strictness of the multicluster client. Before, any GVK that wasn't recognized would simply fall back to the home cluster.
What we should do, in my opinion: specify every GVK that cortex uses, so that implicit fallbacks are no longer supported. This way we avoid shadowing errors.
So for any GVK there would be two options:
- There is a specified remote cluster
- It is specified to be handled in cortex's home cluster
If neither option applies, the resource is unknown and we should return an error.
This also means we should bubble up this error in the client implementation. Ideally, the WatchesMulticluster implementation should then return an error as well, which would be used like:

```go
bldr := ... //
var err error
bldr, err = bldr.WatchesMulticluster(...)
if err != nil { /* handle or os.Exit(1) */ }
// ...
```

In the long run, this will make our implementation much less error prone.
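To make the strict behavior concrete, here is a minimal self-contained sketch of the idea, using simplified stand-in types (a local `gvk` struct and plain string cluster names) rather than the real cortex/controller-runtime types:

```go
package main

import "fmt"

// gvk is a simplified stand-in for schema.GroupVersionKind.
type gvk struct{ Group, Version, Kind string }

// client sketches the proposed strict multicluster client: every GVK must
// be declared either as home-managed or remote-managed up front.
type client struct {
	homeGVKs map[gvk]bool
	remote   map[gvk][]string // cluster names per GVK (stand-in for remoteClusters)
}

// clustersForGVK returns the clusters serving the GVK. Unknown GVKs yield
// an error instead of silently falling back to the home cluster, so
// misconfigurations surface early rather than being shadowed.
func (c *client) clustersForGVK(g gvk) ([]string, error) {
	clusters := append([]string{}, c.remote[g]...)
	if c.homeGVKs[g] {
		clusters = append(clusters, "home")
	}
	if len(clusters) == 0 {
		return nil, fmt.Errorf("unknown GVK %s/%s/%s: not declared for home or any remote cluster",
			g.Group, g.Version, g.Kind)
	}
	return clusters, nil
}

func main() {
	c := &client{
		homeGVKs: map[gvk]bool{{"", "v1", "ConfigMap"}: true},
		remote:   map[gvk][]string{{"cortex.cloud", "v1alpha1", "Decision"}: {"remote-a"}},
	}
	fmt.Println(c.clustersForGVK(gvk{"cortex.cloud", "v1alpha1", "Decision"})) // [remote-a] <nil>
	_, err := c.clustersForGVK(gvk{"", "v1", "Pod"})
	fmt.Println(err != nil) // true: Pod was never declared
}
```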
pkg/multicluster/client.go
Outdated
```go
// Fallback GVKs that are written to the home cluster if no router match is found.
Fallbacks []FallbackConfig `json:"fallbacks,omitempty"`
// Apiserver overrides that map GVKs to remote clusters.
APIServerOverrides []APIServerOverride `json:"apiservers,omitempty"`
```
Nothing wrong with this, but I had an idea which is probably clearer:

```yaml
apiservers:
  # All resources provided under home will be managed in the
  # cluster where cortex is deployed. Note that any resources
  # not specified here must be given under remotes, otherwise
  # the multicluster client will fail.
  home:
    # Note: also needs to include kubernetes secret and configmap gvks.
    gvks: ...
  # Configuration of resources managed in remote clusters.
  remotes:
    - host: "apiserver-url"
      gvks: # Used for List, Get, Watches
        - "cortex.cloud/v1alpha1/Decision"
        - "cortex.cloud/v1alpha1/DecisionList"
        # ...
      # Used to match the correct cluster for a given resource
      # through the implemented resource routers.
      labels: # Used for Create/Update/Delete
        az: "qa-de-1b"
```
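If that layout were adopted, the corresponding Go config types might look roughly like this; field names, tags, and types here are assumptions for illustration, not the actual cortex structs:

```go
package main

import "fmt"

// APIServersConfig mirrors the proposed `apiservers` YAML layout (hypothetical).
type APIServersConfig struct {
	// Home lists GVKs managed in the cluster where cortex is deployed.
	Home HomeConfig `json:"home"`
	// Remotes configures resources managed in remote clusters.
	Remotes []RemoteConfig `json:"remotes"`
}

type HomeConfig struct {
	GVKs []string `json:"gvks"`
}

type RemoteConfig struct {
	Host   string            `json:"host"`
	GVKs   []string          `json:"gvks"`   // used for List, Get, Watches
	Labels map[string]string `json:"labels"` // matched by resource routers for Create/Update/Delete
}

func main() {
	cfg := APIServersConfig{
		Home: HomeConfig{GVKs: []string{"v1/Secret", "v1/ConfigMap"}},
		Remotes: []RemoteConfig{{
			Host:   "apiserver-url",
			GVKs:   []string{"cortex.cloud/v1alpha1/Decision", "cortex.cloud/v1alpha1/DecisionList"},
			Labels: map[string]string{"az": "qa-de-1b"},
		}},
	}
	fmt.Println(len(cfg.Remotes), cfg.Remotes[0].Labels["az"]) // 1 qa-de-1b
}
```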
📝 Walkthrough

This pull request introduces multi-cluster resource routing infrastructure to support distributing Kubernetes workloads across home and remote clusters. Core changes include a ResourceRouter abstraction, a HypervisorResourceRouter implementation for zone-based routing, a refactored multicluster Client API with ClustersForGVK discovery, updated controller initialization with explicit error handling, and Helm configuration additions to declare supported resource kinds per cluster.
Sequence Diagram

```mermaid
sequenceDiagram
    actor Client
    participant MCRouter as Multicluster Router
    participant ResRouter as Resource Router<br/>(e.g., Hypervisor)
    participant HomCluster as Home Cluster
    participant RemCluster as Remote Cluster
    Client->>MCRouter: Request operation on Resource GVK
    MCRouter->>MCRouter: ClustersForGVK lookup
    activate MCRouter
    MCRouter->>MCRouter: Check homeGVKs map
    Note over MCRouter: Identify home + remote<br/>clusters serving GVK
    deactivate MCRouter
    alt Write Operation
        MCRouter->>ResRouter: clusterForWrite(gvk, obj)
        ResRouter->>ResRouter: Match(obj, cluster.labels)
        ResRouter-->>MCRouter: Selected cluster (by AZ/routing rule)
        MCRouter->>RemCluster: Execute operation
        RemCluster-->>MCRouter: Result
    else Read Operation
        par Parallel Read
            MCRouter->>HomCluster: List/Get from home
            HomCluster-->>MCRouter: Home results
        and
            MCRouter->>RemCluster: List/Get from remote
            RemCluster-->>MCRouter: Remote results
        end
        MCRouter->>MCRouter: Aggregate results
    end
    MCRouter-->>Client: Combined result/status
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~70 minutes
🚥 Pre-merge checks
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Test Coverage Report: 📊 67.3%
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/knowledge/kpis/controller.go (1)
Lines 484-488: ⚠️ Potential issue | 🟡 Minor

Fix copy-paste artifacts in comment and variable name.

The comment says "datasources" and the variable is named `ds`, but this predicate handles `KPI` objects, not `Datasource` objects.

📝 Proposed fix

```diff
 &v1alpha1.KPI{}, builder.WithPredicates(predicate.NewPredicateFuncs(func(obj client.Object) bool {
-	// Only react to datasources matching the scheduling domain.
-	ds := obj.(*v1alpha1.KPI)
-	return ds.Spec.SchedulingDomain == c.Config.SchedulingDomain
+	// Only react to KPIs matching the scheduling domain.
+	kpi := obj.(*v1alpha1.KPI)
+	return kpi.Spec.SchedulingDomain == c.Config.SchedulingDomain
 })),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/knowledge/kpis/controller.go` around lines 484 - 488, The predicate created in builder.WithPredicates using predicate.NewPredicateFuncs refers to "datasources" and uses variable name ds although it handles KPI objects; update the inline comment to mention KPIs (e.g., "Only react to KPIs matching the scheduling domain.") and rename the variable ds to a clearer name like kpi in the predicate function (obj.(*v1alpha1.KPI)) so the comment and variable reflect the actual type handled by the predicate.
🧹 Nitpick comments (4)
helm/bundles/cortex-nova/values.yaml (1)
Lines 83-103: Consider centralizing shared `home.gvks` entries to reduce drift across bundles. This list is now duplicated across multiple bundle `values.yaml` files. A shared values fragment/template would reduce maintenance risk when GVKs evolve.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@helm/bundles/cortex-nova/values.yaml` around lines 83 - 103, The shared GVK list under apiservers.home.gvks is duplicated across bundles; extract this array into a single shared values fragment (e.g., a common values file or Helm library chart) and update each bundle's values.yaml to reference/include that fragment instead of duplicating entries; specifically, move the list currently under apiservers.home.gvks (the cortex.cloud/*, kvm.cloud.sap/* and v1/Secret entries) into the shared fragment and replace the duplicated block in each bundle with an include/reference, ensuring keystoneSecretRef remains untouched.

pkg/multicluster/client.go (2)
Lines 181-200: Consider guarding against a nil HomeCluster. When `homeGVKs[gvk]` is true but `HomeCluster` is nil (e.g., an uninitialized client), the method appends `nil` to the returned slice. Callers iterating and calling `.GetClient()` would panic.

♻️ Option: Add nil guard or validation

```diff
 if isHome {
+	if c.HomeCluster == nil {
+		return nil, errors.New("HomeCluster is not initialized")
+	}
 	clusters = append(clusters, c.HomeCluster)
 }
```

Alternatively, validate `HomeCluster` is set during `InitFromConf` to catch this earlier.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/multicluster/client.go` around lines 181 - 200, The ClustersForGVK method can append a nil HomeCluster when homeGVKs[gvk] is true; update ClustersForGVK to guard against a nil c.HomeCluster: after computing isHome, if isHome && c.HomeCluster == nil return an error (or at minimum do not append nil) so callers don't receive nil cluster entries; locate and change the logic in ClustersForGVK (references: ClustersForGVK, HomeCluster, homeGVKs, remoteClusters, remoteClustersMu) to either validate HomeCluster during InitFromConf or return an explicit error when a GVK is claimed as home but HomeCluster is unset.
Lines 351-367: Note: partial failure can leave clusters in an inconsistent state. `DeleteAllOf` stops on the first cluster error after potentially succeeding on previous clusters. This is a known limitation of distributed operations. Consider documenting this behavior or returning a multi-error aggregate if stricter consistency guarantees are needed in the future.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/multicluster/client.go` around lines 351 - 367, The current Client.DeleteAllOf implementation exits on the first cluster error which can leave other clusters already mutated; update DeleteAllOf (or add a documented variant) to collect errors from each cl.GetClient().DeleteAllOf call instead of returning immediately so callers can see all failures — iterate clusters from Client.ClustersForGVK, accumulate per-cluster errors (e.g., into a slice and return as an aggregated error via errors.Join or a multi-error type) and still attempt DeleteAllOf on all clusters; alternatively add a clear comment on Client.DeleteAllOf that it stops on first error if you prefer not to change behavior.pkg/multicluster/builder_test.go (1)
Lines 53-75: Test validates a nil HomeCluster edge case rather than typical usage. The test name suggests verifying that home GVKs return the home cluster, but `HomeCluster` is never set. This tests that `ClustersForGVK` returns a slice containing `nil` when `homeGVKs` is configured but `HomeCluster` is uninitialized. In production, callers iterating over the result would panic calling `.GetClient()` on `nil`. Consider either:
- Adding a guard in `ClustersForGVK` to skip appending a nil `HomeCluster`
- Renaming this test to clarify it tests the uninitialized edge case
- Adding a separate test with a mock `HomeCluster` to verify typical behavior

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/multicluster/builder_test.go` around lines 53 - 75, The test TestClient_ClustersForGVK_HomeGVKReturnsHomeCluster currently asserts that a GVK marked in Client.homeGVKs yields a nil HomeCluster because Client.HomeCluster is uninitialized; update the implementation of Client.ClustersForGVK to guard against appending a nil HomeCluster (check c.HomeCluster != nil before appending) so callers won't receive nil entries, and then add a separate test that sets Client.HomeCluster to a mock/non-nil remoteCluster to verify the normal behavior; reference the Client struct fields homeGVKs, HomeCluster and the method ClustersForGVK when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/multicluster/client_test.go`:
- Around line 1226-1253: The test TestClient_ConcurrentAddRemoteAndRead is flaky
because readers call ClustersForGVK(configMapGVK) before any cluster exists; to
stabilize it, ensure the Client starts with at least one entry for configMapGVK
by pre-populating c.remoteClusters[configMapGVK] = []remoteCluster{{}}
(protected by remoteClustersMu if needed) before spawning reader and writer
goroutines; reference Client, ClustersForGVK, remoteClusters, remoteClustersMu,
and configMapGVK when making this change.
In `@pkg/multicluster/routers.go`:
- Around line 21-25: The Match method in HypervisorResourceRouter uses a value
type assertion obj.(hv1.Hypervisor) which fails when Kubernetes passes a
pointer; update Match to accept pointer types (or both pointer and value) by
asserting obj.(*hv1.Hypervisor) or using a type switch to handle *hv1.Hypervisor
and hv1.Hypervisor, then use the dereferenced value (or the value case) for
subsequent logic so routing works for the client.Object pointer inputs.
---
Outside diff comments:
In `@internal/knowledge/kpis/controller.go`:
- Around line 484-488: The predicate created in builder.WithPredicates using
predicate.NewPredicateFuncs refers to "datasources" and uses variable name ds
although it handles KPI objects; update the inline comment to mention KPIs
(e.g., "Only react to KPIs matching the scheduling domain.") and rename the
variable ds to a clearer name like kpi in the predicate function
(obj.(*v1alpha1.KPI)) so the comment and variable reflect the actual type
handled by the predicate.
---
Nitpick comments:
In `@helm/bundles/cortex-nova/values.yaml`:
- Around line 83-103: The shared GVK list under apiservers.home.gvks is
duplicated across bundles; extract this array into a single shared values
fragment (e.g., a common values file or Helm library chart) and update each
bundle's values.yaml to reference/include that fragment instead of duplicating
entries; specifically, move the list currently under apiservers.home.gvks (the
cortex.cloud/*, kvm.cloud.sap/* and v1/Secret entries) into the shared fragment
and replace the duplicated block in each bundle with an include/reference,
ensuring keystoneSecretRef remains untouched.
In `@pkg/multicluster/builder_test.go`:
- Around line 53-75: The test
TestClient_ClustersForGVK_HomeGVKReturnsHomeCluster currently asserts that a GVK
marked in Client.homeGVKs yields a nil HomeCluster because Client.HomeCluster is
uninitialized; update the implementation of Client.ClustersForGVK to guard
against appending a nil HomeCluster (check c.HomeCluster != nil before
appending) so callers won't receive nil entries, and then add a separate test
that sets Client.HomeCluster to a mock/non-nil remoteCluster to verify the
normal behavior; reference the Client struct fields homeGVKs, HomeCluster and
the method ClustersForGVK when making the change.
In `@pkg/multicluster/client.go`:
- Around line 181-200: The ClustersForGVK method can append a nil HomeCluster
when homeGVKs[gvk] is true; update ClustersForGVK to guard against a nil
c.HomeCluster: after computing isHome, if isHome && c.HomeCluster == nil return
an error (or at minimum do not append nil) so callers don't receive nil cluster
entries; locate and change the logic in ClustersForGVK (references:
ClustersForGVK, HomeCluster, homeGVKs, remoteClusters, remoteClustersMu) to
either validate HomeCluster during InitFromConf or return an explicit error when
a GVK is claimed as home but HomeCluster is unset.
- Around line 351-367: The current Client.DeleteAllOf implementation exits on
the first cluster error which can leave other clusters already mutated; update
DeleteAllOf (or add a documented variant) to collect errors from each
cl.GetClient().DeleteAllOf call instead of returning immediately so callers can
see all failures — iterate clusters from Client.ClustersForGVK, accumulate
per-cluster errors (e.g., into a slice and return as an aggregated error via
errors.Join or a multi-error type) and still attempt DeleteAllOf on all
clusters; alternatively add a clear comment on Client.DeleteAllOf that it stops
on first error if you prefer not to change behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 424be9b7-5587-4274-960b-c65f38badd85
📒 Files selected for processing (20)
- cmd/main.go
- helm/bundles/cortex-cinder/values.yaml
- helm/bundles/cortex-ironcore/values.yaml
- helm/bundles/cortex-manila/values.yaml
- helm/bundles/cortex-nova/values.yaml
- helm/bundles/cortex-pods/values.yaml
- internal/knowledge/extractor/trigger.go
- internal/knowledge/kpis/controller.go
- internal/scheduling/cinder/filter_weigher_pipeline_controller.go
- internal/scheduling/machines/filter_weigher_pipeline_controller.go
- internal/scheduling/manila/filter_weigher_pipeline_controller.go
- internal/scheduling/nova/detector_pipeline_controller.go
- internal/scheduling/nova/filter_weigher_pipeline_controller.go
- internal/scheduling/pods/filter_weigher_pipeline_controller.go
- pkg/multicluster/builder.go
- pkg/multicluster/builder_test.go
- pkg/multicluster/client.go
- pkg/multicluster/client_test.go
- pkg/multicluster/routers.go
- pkg/multicluster/routers_test.go
```diff
 func TestClient_ConcurrentAddRemoteAndRead(t *testing.T) {
 	scheme := newTestScheme(t)

-	homeCache := &fakeCache{}
-	homeCluster := newFakeClusterWithCache(scheme, homeCache)
-
-	// No remote clusters configured - both object and list will use home cluster
 	c := &Client{
-		HomeCluster: homeCluster,
-		HomeScheme:  scheme,
+		HomeCluster:    newFakeCluster(scheme),
+		HomeScheme:     scheme,
+		remoteClusters: map[schema.GroupVersionKind][]remoteCluster{},
 	}

-	ctx := context.Background()
 	var wg sync.WaitGroup

-	obj := &corev1.ConfigMap{}
-	list := &corev1.ConfigMapList{}
-	field := "metadata.name"
-	extractValue := func(obj client.Object) []string {
-		return []string{obj.GetName()}
+	// Readers
+	for range 10 {
+		wg.Go(func() {
+			for range 100 {
+				if _, err := c.ClustersForGVK(configMapGVK); err != nil {
+					t.Errorf("unexpected error: %v", err)
+				}
+			}
+		})
 	}

-	err := c.IndexField(ctx, obj, list, field, extractValue)
-	if err != nil {
-		t.Fatalf("unexpected error: %v", err)
+	// Writers
+	for range 5 {
+		wg.Go(func() {
+			for range 100 {
+				c.remoteClustersMu.Lock()
+				c.remoteClusters[configMapGVK] = append(c.remoteClusters[configMapGVK], remoteCluster{})
+				c.remoteClustersMu.Unlock()
```
TestClient_ConcurrentAddRemoteAndRead is nondeterministically flaky.
On Line 1240, readers expect ClustersForGVK(configMapGVK) to always succeed, but the test starts with no configured cluster for that GVK. If readers run before writers, they can hit the unknown-GVK error path and fail intermittently.
🩹 Suggested stabilization patch
```diff
 func TestClient_ConcurrentAddRemoteAndRead(t *testing.T) {
 	scheme := newTestScheme(t)
 	c := &Client{
 		HomeCluster: newFakeCluster(scheme),
 		HomeScheme:  scheme,
-		remoteClusters: map[schema.GroupVersionKind][]remoteCluster{},
+		remoteClusters: map[schema.GroupVersionKind][]remoteCluster{
+			configMapGVK: {{cluster: newFakeCluster(scheme)}}, // ensure readers always have at least one route
+		},
 	}
@@
 	// Writers
 	for range 5 {
 		wg.Go(func() {
 			for range 100 {
 				c.remoteClustersMu.Lock()
-				c.remoteClusters[configMapGVK] = append(c.remoteClusters[configMapGVK], remoteCluster{})
+				c.remoteClusters[configMapGVK] = append(
+					c.remoteClusters[configMapGVK],
+					remoteCluster{cluster: newFakeCluster(scheme)},
+				)
 				c.remoteClustersMu.Unlock()
 			}
 		})
 	}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func TestClient_ConcurrentAddRemoteAndRead(t *testing.T) {
	scheme := newTestScheme(t)
	c := &Client{
		HomeCluster: newFakeCluster(scheme),
		HomeScheme:  scheme,
		remoteClusters: map[schema.GroupVersionKind][]remoteCluster{
			configMapGVK: {{cluster: newFakeCluster(scheme)}}, // ensure readers always have at least one route
		},
	}
	var wg sync.WaitGroup
	// Readers
	for range 10 {
		wg.Go(func() {
			for range 100 {
				if _, err := c.ClustersForGVK(configMapGVK); err != nil {
					t.Errorf("unexpected error: %v", err)
				}
			}
		})
	}
	// Writers
	for range 5 {
		wg.Go(func() {
			for range 100 {
				c.remoteClustersMu.Lock()
				c.remoteClusters[configMapGVK] = append(
					c.remoteClusters[configMapGVK],
					remoteCluster{cluster: newFakeCluster(scheme)},
				)
				c.remoteClustersMu.Unlock()
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/multicluster/client_test.go` around lines 1226 - 1253, The test
TestClient_ConcurrentAddRemoteAndRead is flaky because readers call
ClustersForGVK(configMapGVK) before any cluster exists; to stabilize it, ensure
the Client starts with at least one entry for configMapGVK by pre-populating
c.remoteClusters[configMapGVK] = []remoteCluster{{}} (protected by
remoteClustersMu if needed) before spawning reader and writer goroutines;
reference Client, ClustersForGVK, remoteClusters, remoteClustersMu, and
configMapGVK when making this change.
```go
func (h HypervisorResourceRouter) Match(obj any, labels map[string]string) (bool, error) {
	hv, ok := obj.(hv1.Hypervisor)
	if !ok {
		return false, errors.New("object is not a Hypervisor")
	}
```
Type assertion will fail for pointer types.
Kubernetes objects are passed as pointers implementing client.Object. The caller in clusterForWrite passes obj any where obj originates from client.Object parameters. The assertion obj.(hv1.Hypervisor) will fail when obj is *hv1.Hypervisor, causing all Hypervisor routing to fail with "object is not a Hypervisor".
🐛 Proposed fix: use pointer type assertion

```diff
 func (h HypervisorResourceRouter) Match(obj any, labels map[string]string) (bool, error) {
-	hv, ok := obj.(hv1.Hypervisor)
+	hv, ok := obj.(*hv1.Hypervisor)
 	if !ok {
 		return false, errors.New("object is not a Hypervisor")
 	}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func (h HypervisorResourceRouter) Match(obj any, labels map[string]string) (bool, error) {
	hv, ok := obj.(*hv1.Hypervisor)
	if !ok {
		return false, errors.New("object is not a Hypervisor")
	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/multicluster/routers.go` around lines 21 - 25, The Match method in
HypervisorResourceRouter uses a value type assertion obj.(hv1.Hypervisor) which
fails when Kubernetes passes a pointer; update Match to accept pointer types (or
both pointer and value) by asserting obj.(*hv1.Hypervisor) or using a type
switch to handle *hv1.Hypervisor and hv1.Hypervisor, then use the dereferenced
value (or the value case) for subsequent logic so routing works for the
client.Object pointer inputs.
Changes
- `ResourceRouter` interface that can match a GVK to a cluster (for now only the hypervisor CRD, by AZ)
- `APIServerOverride` config
- `List` queries all clusters for the GVK and merges results
- `Get` tries all clusters and returns the first found (skips NotFound)
- `Create`/`Update`/`Delete`/`Patch` use the ResourceRouter to match the object to the correct cluster via labels
- `DeleteAllOf` runs on all clusters for the GVK
- `Status`/`SubResource` writes are routed via the ResourceRouter like top-level writes
- `WatchesMulticluster` sets up watches on all clusters serving the GVK