diff --git a/docs/baremetal/api-references/index.md b/docs/baremetal/api-references/index.md index f0afbf1..e0d4696 100644 --- a/docs/baremetal/api-references/index.md +++ b/docs/baremetal/api-references/index.md @@ -2,5 +2,5 @@ This section provides detailed API reference documentation for the IronCore Bare Metal Management components. -* [**metal-operator**](/baremetal/api-references/metal-operator): The core bare metal management component that manages the lifecycle of bare metal servers. -* [**boot-operator**](/baremetal/api-references/boot-operator): Responsible for providing boot images and Ignition configurations to bare metal servers. +- [**metal-operator**](/baremetal/api-references/metal-operator): The core bare metal management component that manages the lifecycle of bare metal servers. +- [**boot-operator**](/baremetal/api-references/boot-operator): Responsible for providing boot images and Ignition configurations to bare metal servers. diff --git a/docs/baremetal/architecture/discovery.md b/docs/baremetal/architecture/discovery.md index b9ab470..7e512fd 100644 --- a/docs/baremetal/architecture/discovery.md +++ b/docs/baremetal/architecture/discovery.md @@ -44,7 +44,7 @@ spec: ``` 3. The `EndpointReconciler` watches for changes to the `Endpoint` and looks up the MAC address in the [MACAddress database](https://ironcore-dev.github.io/metal-operator/concepts/endpoints.html#configuration) -to find a matching MAC address prefix end derive from that the initial credentials, protocol, and other information needed to create a BMC resource. +to find a matching MAC address prefix and derive from that the initial credentials, protocol, and other information needed to create a BMC resource. 4. If a MAC address prefix is found in the database, the `EndpointReconciler` creates a `BMC` and `BMCSecret` resource. Here is an example of a `BMC` resource: diff --git a/docs/baremetal/architecture/index.md b/docs/baremetal/architecture/index.md index df85736..fd3ab1b 100644 --- a/docs/baremetal/architecture/index.md +++ b/docs/baremetal/architecture/index.md @@ -9,7 +9,7 @@ server management and in-band server boot automation. The out-of-band automation is responsible for the initial provisioning of bare metal servers. Here the main component is the `metal-operator`, which is responsible for managing the lifecycle of bare metal server. In the out-of-band -network, BMCs (Baseboard Management Controllers) are assigned IP addresses (in our case via FeDHCP) and are then reachable +network, BMCs (Baseboard Management Controllers) are assigned IP addresses (via FeDHCP) and are then reachable via the `metal-operator`. The `metal-operator` can then perform actions like [discovering](/baremetal/architecture/discovery) and [provisioning](/baremetal/architecture/provisioning) servers. It is also responsible for the [maintenance](/baremetal/architecture/maintenance) workflow of bare metal servers. 
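+
+For illustration, the `Endpoint` that FeDHCP creates for a newly leased BMC might look like the following sketch (field names follow the metal-operator API and are indicative only; see the [discovery section](/baremetal/architecture/discovery) for the full flow):
+
+```yaml
+apiVersion: metal.ironcore.dev/v1alpha1
+kind: Endpoint
+metadata:
+  name: device-23-11-8a-33-cf-ea
+spec:
+  # MAC address observed by FeDHCP in the out-of-band network
+  macAddress: "23:11:8A:33:CF:EA"
+  # IP address leased to the BMC
+  ip: 192.168.100.10
+```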
diff --git a/docs/baremetal/architecture/maintenance.md b/docs/baremetal/architecture/maintenance.md index e9b05bf..e1d99cd 100644 --- a/docs/baremetal/architecture/maintenance.md +++ b/docs/baremetal/architecture/maintenance.md @@ -2,5 +2,5 @@ TODO: -* Describe the maintenance process -* Describe the extension points here \ No newline at end of file +- Describe the maintenance process +- Describe the extension points here \ No newline at end of file diff --git a/docs/baremetal/architecture/provisioning.md b/docs/baremetal/architecture/provisioning.md index f7220c4..4b65bbb 100644 --- a/docs/baremetal/architecture/provisioning.md +++ b/docs/baremetal/architecture/provisioning.md @@ -1,10 +1,10 @@ # Server Provisioning This section describes how the provisioning of bare metal servers is handled in IronCore's bare metal automation. -In the [discovery section](/baremetal/architecture/discovery) we discussed how servers are discovered and first time -booted and how they are transitioned into an `Available` state. Now we will focus on the provisioning process, and -one can use such a `Server` resource to provision a custom operating system and automate the software installation on -such a server. +The [discovery section](/baremetal/architecture/discovery) describes how servers are discovered, booted for the first +time, and transitioned into an `Available` state. This section focuses on the provisioning process and how +you can use a Server resource to provision a custom operating system and automate the software installation on +a server. ## Claiming a Server diff --git a/docs/baremetal/index.md b/docs/baremetal/index.md index 5747ab1..63cebca 100644 --- a/docs/baremetal/index.md +++ b/docs/baremetal/index.md @@ -18,7 +18,8 @@ The core components of the bare metal management in IronCore include: ## Concepts and Usage Guides -Usage guides and concepts for the `metal-operator` API types can be found in the [metal-operator documentation](https://ironcore-dev.github.io/metal-operator/concepts/). +Usage guides and concepts for the `metal-operator` API types can be found in the [metal-operator documentation](https://ironcore-dev.github.io/metal-operator/concepts). + ## Prerequisites diff --git a/docs/baremetal/kubernetes/cloud-controller-manager.md b/docs/baremetal/kubernetes/cloud-controller-manager.md index e1f0c0c..ce29aec 100644 --- a/docs/baremetal/kubernetes/cloud-controller-manager.md +++ b/docs/baremetal/kubernetes/cloud-controller-manager.md @@ -2,24 +2,24 @@ [Cloud-Controller-Manager](https://kubernetes.io/docs/concepts/architecture/cloud-controller) (CCM) is the bridge between Kubernetes and a cloud-provider. CCM uses the cloud-provider (IronCore Bare Metal API in this case) API to manage these -resources. We have implemented the [cloud provider interface](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go) -in the [`cloud-provider-metal`](https://github.com/ironcore-dev/cloud-provider-metal) repository. -Here's a more detail on how these APIs are implemented in the IronCore bare metal cloud-provider for different objects. +resources. The [cloud provider interface](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go) +is implemented in the [`cloud-provider-metal`](https://github.com/ironcore-dev/cloud-provider-metal) repository. +Here is more detail on how these APIs are implemented in the IronCore bare metal cloud-provider for different objects.
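+
+Since every method below resolves a Node to its backing ServerClaim, a minimal claim is sketched here for orientation (a sketch only; field names are indicative, so consult the metal-operator usage guides for the authoritative schema):
+
+```yaml
+apiVersion: metal.ironcore.dev/v1alpha1
+kind: ServerClaim
+metadata:
+  name: worker-1              # matches the Node name
+  namespace: default
+spec:
+  power: "On"                 # desired power state of the claimed Server
+  serverRef:
+    name: server-sample       # the physical Server backing this Node
+  image: my-os-image:latest   # OS image provisioned onto the server
+  ignitionSecretRef:
+    name: worker-1-ignition   # Ignition configuration applied on first boot
+```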
-## Node lifecycle +## Node Lifecycle ### InstanceExists -`InstanceExists` checks if a node with the given name exists in the cloud provider. In IronCore bare metal a `Node` -is represented by a `ServerClaim` object. The `InstanceExists` method checks if a `ServerClaim` with the given name exists. +The `InstanceExists` method checks if a node with the given name exists in the cloud provider. In IronCore bare metal, a Node +is represented by a ServerClaim object. The `InstanceExists` method checks if a ServerClaim with the given name exists. ### InstanceShutdown -`InstanceShutdown` checks if a node with the given name is shutdown in the cloud provider. Here, the instane controller -checks if the `ServerClaim` and the claimed `Server` object are in the `PowerOff` state. +The `InstanceShutdown` method checks if a node with the given name is shut down in the cloud provider. Here, the instance controller +checks if the ServerClaim and the claimed Server object are in the `PowerOff` state. ### InstanceMetadata -`InstanceMetadata` retrieves the metadata of a node with the given name. In IronCore bare metal, this method retrieves -the topology labels from a `Server` object which is claimed by the `ServerClaim` of the node. Additional labels of the -`Server` object are also added to the node object. +The `InstanceMetadata` method retrieves the metadata of a node with the given name. In IronCore bare metal, this method retrieves +the topology labels from a Server object that is claimed by the ServerClaim of the node. Additional labels of the +Server object are also added to the Node object. diff --git a/docs/baremetal/kubernetes/gardener.md b/docs/baremetal/kubernetes/gardener.md index 630c75b..2ca8f9b 100644 --- a/docs/baremetal/kubernetes/gardener.md +++ b/docs/baremetal/kubernetes/gardener.md @@ -10,9 +10,9 @@ There are two main components in the Gardener integration with IronCore: ## Machine Controller Manager (MCM) The [machine-controller-manager-provider-ironcore](https://github.com/ironcore-dev/machine-controller-manager-provider-ironcore-metal) -is responsible for managing the lifecycle of `Nodes` in a Kubernetes cluster. Here the MCM in essence is translating -Gardener `Machine` resource to `ServerClaims` and wrapping the `user-data` coming from the Gardner OS extensions into -an Ignition `Secret`. +is responsible for managing the lifecycle of Nodes in a Kubernetes cluster. In essence, the MCM translates +a Gardener Machine resource into a ServerClaim and wraps the `user-data` coming from the Gardener OS extensions into +an Ignition Secret. ## Gardener Extension Provider diff --git a/docs/baremetal/operations-guide/overview.md b/docs/baremetal/operations-guide/overview.md index 12e83d6..46e1f24 100644 --- a/docs/baremetal/operations-guide/overview.md +++ b/docs/baremetal/operations-guide/overview.md @@ -7,4 +7,4 @@ This guide provides operational instructions for system operators, covering depl - Routine maintenance tasks - Monitoring & alerts -For more detailed troubleshooting information, refer to the [Troubleshooting](../troubleshooting/index.md) section. +For more detailed troubleshooting information, refer to the [Troubleshooting](/baremetal/troubleshooting/) section. diff --git a/docs/community/contributing.md b/docs/community/contributing.md index e51fda3..a2af4c4 100644 --- a/docs/community/contributing.md +++ b/docs/community/contributing.md @@ -33,7 +33,7 @@ avoids unnecessary work and helps align your contribution with the project's dir ## Making a Contribution -### 1. 
Fork and Clone +### Fork and Clone Fork the repository you want to contribute to and clone it locally: @@ -42,7 +42,7 @@ git clone git@github.com:/.git cd ``` -### 2. Create a Branch +### Create a Branch Create a feature branch from `main`: @@ -57,7 +57,7 @@ git fetch upstream main git rebase upstream/main ``` -### 3. Make Your Changes +### Make Your Changes - Follow the [coding](/community/style-guide/coding) and [documentation](/community/style-guide/documentation) style guides for code, testing, and documentation standards. - Keep commits small and focused — each commit should be correct independently. @@ -69,7 +69,7 @@ git rebase upstream/main git commit -s -m "Add support for feature X" ``` -### 4. Submit a Pull Request +### Submit a Pull Request Push your branch and open a pull request against `main`: @@ -82,7 +82,7 @@ In your pull request description: - Reference any related issues (e.g., `Fixes #123`). - Tag a relevant maintainer if you need a specific reviewer — check the `CODEOWNERS` file in the repository. -### 5. Run Checks +### Run Checks Before submitting, run the project's checks locally to catch issues early. See [Running tests](/community/style-guide/coding#running-tests) in the coding style guide for details: diff --git a/docs/iaas/api-references/index.md b/docs/iaas/api-references/index.md index 2904411..832ff7a 100644 --- a/docs/iaas/api-references/index.md +++ b/docs/iaas/api-references/index.md @@ -4,9 +4,9 @@ This section provides detailed API references for the IronCore IaaS types. The IronCore aggregated API server exposes the following API groups: -* [**Core**](/iaas/api-references/core): The core API group contains the foundational types used in IronCore IaaS. -* [**Compute**](/iaas/api-references/compute): The compute API group contains types related to virtual machines and compute resources. -* [**Storage**](/iaas/api-references/storage): The storage API group contains types related to storage resources. -* [**Networking**](/iaas/api-references/networking): The networking API group contains types related to networking resources. -* [**IPAM**](/iaas/api-references/ipam): The IPAM API group contains types related to IP address management. -* [**Common**](/iaas/api-references/common): The common API group contains types that are shared across multiple API groups. +- [**Core**](/iaas/api-references/core): The core API group contains the foundational types used in IronCore IaaS. +- [**Compute**](/iaas/api-references/compute): The compute API group contains types related to virtual machines and compute resources. +- [**Storage**](/iaas/api-references/storage): The storage API group contains types related to storage resources. +- [**Networking**](/iaas/api-references/networking): The networking API group contains types related to networking resources. +- [**IPAM**](/iaas/api-references/ipam): The IPAM API group contains types related to IP address management. +- [**Common**](/iaas/api-references/common): The common API group contains types that are shared across multiple API groups. diff --git a/docs/iaas/architecture/networking.md b/docs/iaas/architecture/networking.md index 6cf0a28..ef144b6 100644 --- a/docs/iaas/architecture/networking.md +++ b/docs/iaas/architecture/networking.md @@ -23,11 +23,11 @@ correctly propagated across the IronCore installation. ## `ironcore` and `ironcore-net` -`ironcore-net` is a global coordination service within an IronCore installation. 
Therefore, it is a single instance and -the place where all network related decisions like reservation of unique IP addresses, allocation of unique network IDs, etc. are made. +The `ironcore-net` service is the global coordination point within an IronCore installation. Therefore, it is a single instance and +the place where all network-related decisions like reservation of unique IP addresses, allocation of unique network IDs, etc. are made. -`ironcore-net` has apart from its [own API](https://github.com/ironcore-dev/ironcore-net/tree/main/api/core/v1alpha1) two main components: -- **apinetlet**: This component is responsible from translating the user-facing API objects from the `networking` resource group into the internal representation used by `ironcore-net`. +The `ironcore-net` service has, apart from its [own API](https://github.com/ironcore-dev/ironcore-net/tree/main/api/core/v1alpha1), two main components: +- **apinetlet**: This component is responsible for translating the user-facing API objects from the `networking` resource group into the internal representation used by `ironcore-net`. - **metalnetlet**: This component is interfacing with the `metalnet` API to manage cluster-level networking resources like `NetworkInterface` which are requested globally in the `ironcore-net` API but are implemented by `metalnet` on a hypervisor level. ### Example `apinetlet` flow @@ -56,9 +56,9 @@ for translating and allocating the necessary resources in `ironcore-net` to ensu ### `metalnetlet` and `metalnet` -`metalnetlet` and `metalnet` work together to provide the necessary networking capabilities for `Machines` in an IronCore on +The `metalnetlet` and `metalnet` components work together to provide the necessary networking capabilities for Machines in an IronCore instance on a hypervisor host. In a compute cluster, the `metalnetlet` will create for each `Node` in the cluster a corresponding -`Node` object in the `ironcore-net` API. This `Node` object represents the hypervisor host and is used to manage the networking resources +Node object in the `ironcore-net` API. This Node object represents the hypervisor host and is used to manage the networking resources which should be available on this host. The image below illustrates the relationship between `metalnetlet` and `metalnet`: @@ -68,7 +68,7 @@ The image below illustrates the relationship between `metalnetlet` and `metalnet The `NetworkInterface` creation flow will look like this: 1. A provider (in this case `libvirt-provider`) will create a virtual machine against the libvirt daemon on a hypervisor host. In case a `NetworkInterface` should be attached to this virtual machine, the `machinepoollet` will call the corresponding -`AttachNetworkInterface` method on the [`MachineRuntime`](/iaas/architecture/runtime-interface#machineruntime-interface) +`AttachNetworkInterface` method on the [`MachineRuntime`](/iaas/architecture/runtime-interface#machineruntime-interface) interface implemented by the `libvirt-provider`. The `libvirt-provider` itself then has a plugin into the `ironcore-net` API to create a `NetworkInterface` resource in the `ironcore-net` API. 2. The `metalnetlet` will then watch for changes to the `NetworkInterface` resource and create the corresponding `NetworkInterface` @@ -78,10 +78,10 @@ virtual network interface back to the `ironcore-net` API by updating the status 4. 
The `libvirt-provider` will poll the `ironcore-net` API to get the updated status of the `NetworkInterface` and will use the PCI address in the status to attach the virtual network interface to the virtual machine instance. -`LoadBalancer` and `NATGateways` resources follow a similar flow. Here, however, the compute provider is not involved. -The `apinetlet` will translate the `ironcore` `LoadBalancer` or `NATGateway` resource into the corresponding `ironcore-net` -objects. Those will be scheduled on `ironcore-net` `Nodes`. Onces this is done, the `metalnetlet` will watch those resources -and create the corresponding `LoadBalancer` or `NATGateway` objects in the `metalnet` API. +LoadBalancer and NATGateway resources follow a similar flow. Here, however, the compute provider is not involved. +The `apinetlet` translates the IronCore LoadBalancer or NATGateway resource into the corresponding `ironcore-net` +objects. Those are scheduled on `ironcore-net` Nodes. Once this is done, the `metalnetlet` watches those resources +and creates the corresponding `LoadBalancer` or `NATGateway` objects in the `metalnet` API. ### `metalnet`, `dpservice` and `metalbond` @@ -92,4 +92,4 @@ The following figure depicts the basic working flow of creating two interfaces f ![Metalnet Dpservice Metalbond](/metalnet-dpservice-metalbond.png) -`metalnet` controllers watch the metalnet objects such as `Network` and `NetworkInterface`. Upon arriving of a new `NetworkInterface`, `metalnet` communicates with `dpservice` to obtain a newly generated underlying IPv6 address for a corresponding interface's overlay IP address. For example, on `Server-A`, `metalnet` obtains an underlying IPv6 address, `2001:db8::1`, for `interface-A` with a private IP `10.0.0.1`. This encapsulation routing information is announced by `metalnet`'s embedded `metalbond-client` to `metalbond-server` running on a region router, and further synchronised by other `metalbond` instances. For instance, `metalbond` on `Server-B` shall receive this `10.0.0.1@2001:db8::1` information and push it into its local `dpservice` via gRPC. `dpservice` uses these routing information to perform overlay packet encapsulation and decapsulation to achieve communication among VMs running on different servers. Meanwhile, `metalnet` also picks a pci addresss of a VF, such as `0000:3b:00.2`, and attach it as part of `NetworkInterface` status, which is further utilised by `ironcore-net` and `vm-provider`/`libvirt-provider` to create a VM. +The `metalnet` controllers watch the metalnet objects such as Network and NetworkInterface. Upon arrival of a new NetworkInterface, `metalnet` communicates with `dpservice` to obtain a newly generated underlying IPv6 address for a corresponding interface's overlay IP address. For example, on `Server-A`, `metalnet` obtains an underlying IPv6 address, `2001:db8::1`, for `interface-A` with a private IP `10.0.0.1`. This encapsulation routing information is announced by the embedded `metalbond-client` to `metalbond-server` running on a region router, and further synchronized by other `metalbond` instances. For instance, `metalbond` on `Server-B` receives this `10.0.0.1@2001:db8::1` information and pushes it into its local `dpservice` via gRPC. The `dpservice` uses this routing information to perform overlay packet encapsulation and decapsulation to achieve communication among VMs running on different servers. 
Meanwhile, `metalnet` also picks a PCI address of a VF, such as `0000:3b:00.2`, and attaches it as part of the NetworkInterface status, which is further used by `ironcore-net` and `libvirt-provider` to create a VM. diff --git a/docs/iaas/architecture/providers/brokers.md b/docs/iaas/architecture/providers/brokers.md index be02ddf..7a0433d 100644 --- a/docs/iaas/architecture/providers/brokers.md +++ b/docs/iaas/architecture/providers/brokers.md @@ -9,8 +9,8 @@ Below is an example of how a `machinepoollet` and `machinebroker` will translate ![Brokers](/brokers.png) Brokers are useful in scenarios where IronCore should not run in a single cluster but rather have a federated -environment. For example, in a federated environment, every hypervisor node in a compute cluster would announce it's -`MachinePool` inside the compute cluster. A `MachinePoollet`/`MachineBroker` in this compute cluster could now announce -a logical `MachinePool` "one level up" as a logical compute pool in e.g. an availability zone cluster. The broker concept +environment. For example, in a federated environment, every hypervisor node in a compute cluster would announce its +MachinePool inside the compute cluster. A MachinePoollet/MachineBroker in this compute cluster could now announce +a logical MachinePool "one level up" as a logical compute pool in e.g. an availability zone cluster. The broker concept allows you to design a cluster topology which might be important for large-scale deployments of IronCore (e.g., managing a whole datacenter region with multiple AZs). diff --git a/docs/iaas/architecture/providers/ceph-provider.md b/docs/iaas/architecture/providers/ceph-provider.md index 22db0b4..9ddbf51 100644 --- a/docs/iaas/architecture/providers/ceph-provider.md +++ b/docs/iaas/architecture/providers/ceph-provider.md @@ -7,8 +7,8 @@ The [`ceph-provider`](https://github.com/ironcore-dev/ceph-provider) contains tw ## `ceph-volume-provider` -The `ceph-volume-provider` implements the IronCore `VolumeRuntime` interface to manage volumes in a Ceph cluster. A -`CreateVolume` IRI call results in the creation of a `ceph image` in the cluster, which can be used as a block device +The `ceph-volume-provider` implements the IronCore `VolumeRuntime` interface to manage volumes in a Ceph cluster. +A `CreateVolume` IRI call results in the creation of a `ceph image` in the cluster, which can be used as a block device for virtual machines. The following diagram visualizes the connection between the `ceph-volume-provider` and the `volumepoollet`: @@ -28,7 +28,7 @@ graph TD ``` Once a `Volume` has been created by the `ceph-volume-provider`, it returns the access information to the `volumepoollet` -which then propages this as status information to the `Volume` resource in the IronCore API. On the consumer side, e.g. +which then propagates this as status information to the Volume resource in the IronCore API. On the consumer side, e.g. [`libvirt-provider`](/iaas/architecture/providers/libvirt-provider), the `Volume` resource is then used to attach the block device to a virtual machine instance using the credentials provided in the `Volume` status. 
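+
+To make this hand-over concrete, the status of a provisioned `Volume` might carry access information along these lines (a sketch; field names follow the IronCore storage API and all values are illustrative):
+
+```yaml
+status:
+  state: Available
+  access:
+    driver: ceph                   # storage driver serving this volume
+    volumeAttributes:
+      monitors: 10.0.0.1:6789      # Ceph monitor endpoint
+      image: pool/volume-sample    # RBD image backing the volume
+    secretRef:
+      name: volume-sample-access   # credentials consumed by e.g. libvirt-provider
+```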
diff --git a/docs/iaas/architecture/providers/libvirt-provider.md b/docs/iaas/architecture/providers/libvirt-provider.md index 2e2f55b..c7f8836 100644 --- a/docs/iaas/architecture/providers/libvirt-provider.md +++ b/docs/iaas/architecture/providers/libvirt-provider.md @@ -1,6 +1,6 @@ # `libvirt-provider` -The [`libvirt-proivder`](https://github.com/ironcore-dev/libvirt-provider) implements the +The [`libvirt-provider`](https://github.com/ironcore-dev/libvirt-provider) implements the [`MachineRuntime` interface](/iaas/architecture/runtime-interface#machineruntime-interface). It interfaces directly with the `libvirt` daemon running on a hypervisor host to manage virtual machine instances. @@ -9,7 +9,7 @@ IronCore `compute` resource group into a `domain.xml` representing the virtual m ## Overview -The relationship beween the `machinepoollet` and the `libvirt-provider` is illustrated in the graph below: +The relationship between the `machinepoollet` and the `libvirt-provider` is illustrated in the graph below: ```mermaid graph TD @@ -25,7 +25,7 @@ graph TD C -- defines --> VC[Supported MachineClasses] ``` -Here the `machinepoollet` announces it's `MachinePool` and watches `Machines` scheduled on this pool as described in the +Here the `machinepoollet` announces its MachinePool and watches Machines scheduled on this pool as described in the [Scheduling and Orchestration Section](/iaas/architecture/scheduling). The `libvirt-provider` is then invoked by the `machinepoollet` via the `MachineRuntime` interface method the `libvirt-provider` diff --git a/docs/iaas/architecture/runtime-interface.md b/docs/iaas/architecture/runtime-interface.md index 4099e10..bf55421 100644 --- a/docs/iaas/architecture/runtime-interface.md +++ b/docs/iaas/architecture/runtime-interface.md @@ -9,7 +9,7 @@ There are three main runtime interfaces in IronCore: - `VolumeRuntime`: This interface is used for managing storage resources, such as block storage. - `BucketRuntime`: This interface is used for managing object storage, such as S3-compatible buckets. -Implementations of these interfaces are done by provider-specific components. More infomation about the provider can +Provider-specific components implement these interfaces. More information about the providers can be found in the [provider concept documentation](/iaas/architecture/providers/). The definition of the runtime interfaces can be found in IronCore's [`iri` package](https://github.com/ironcore-dev/ironcore/tree/main/iri/). @@ -53,7 +53,7 @@ methods to attach volumes or network interfaces to a `Machine` if a change in th Similar to the `MachineRuntime`, the `VolumeRuntime` interface is responsible for managing block storage resources in IronCore. Here the `volumepoollet` takes a similar role as the `machinepoollet` for the `MachineRuntime` and invokes `CreateVolume`, -`DeleteVolume`, `ExpandVolume`, and other methods to manage `Volume` resources. +`DeleteVolume`, `ExpandVolume`, and other methods to manage Volume resources. ```proto service VolumeRuntime { diff --git a/docs/iaas/architecture/scheduling.md b/docs/iaas/architecture/scheduling.md index 58cb38b..94f81b4 100644 --- a/docs/iaas/architecture/scheduling.md +++ b/docs/iaas/architecture/scheduling.md @@ -1,21 +1,21 @@ # Scheduling and Orchestration This section provides an overview of the scheduling and orchestration mechanisms used in IronCore's Infrastructure as -a Service (IaaS) layer. 
It covers the concepts of `Pools`, poollets, and the IronCore Runtime Interface (IRI), +a Service (IaaS) layer. It covers the concepts of Pools, poollets, and the IronCore Runtime Interface (IRI), which together enable efficient resource management and allocation. ## Pools and Classes -The core concept in IronCore's scheduling architecture is the resource `Pool`. A `Pool` is announced by a poollet which represents -an entity onto which resources can be scheduled. The announcement of a `Pool` resource is done by a poollet, which -also provides in the `Pool` status the `AvailableMachineClasses` a `Pool` supports. A `Class` in this context represents -a list of resource-specific capabilities that a `Pool` can provide, such as CPU, memory, and storage. +The core concept in IronCore's scheduling architecture is the resource Pool. A Pool represents +an entity onto which resources can be scheduled. It is announced by a poollet, which +also reports in the Pool status the MachineClasses the Pool supports. A Class in this context represents +a list of resource-specific capabilities that a Pool can provide, such as CPU, memory, and storage. -`Pools` and `Classes` are defined for all major resource types in IronCore, including compute and storage. Resources in the -`networking` API have no `Pool` concept, as they are not scheduled but rather provided on-demand by the network related +Pools and Classes are defined for all major resource types in IronCore, including compute and storage. Resources in the +`networking` API have no Pool concept, as they are not scheduled but rather provided on-demand by the network-related components. The details are described in the [networking section](/iaas/architecture/networking). -An example definition of a `Pool` from the `compute` API group (`MachinePool`) is shown below: +An example definition of a Pool from the `compute` API group (MachinePool) is shown below: ```yaml apiVersion: compute.ironcore.dev/v1alpha1 @@ -29,7 +29,7 @@ status: - name: machineclass-sample ``` -The corresponding `MachineClass` defines the capabilities of the pool, such as CPU and memory: +The corresponding MachineClass defines the capabilities of the pool, such as CPU and memory: ```yaml apiVersion: compute.ironcore.dev/v1alpha1 @@ -43,19 +43,19 @@ capabilities: ## Scheduling -If a `Machine` or `Volume` resource has been created, the IronCore scheduler will look for a suitable `Pool` which -supports the defined `MachineClass` or `VolumeClass`. If a suitable `Pool` is found, the scheduler will set the `.spec.machinePoolRef` -or `.spec.volumePoolRef` field of the `Machine` or `Volume` resource to the name of the `Pool`. This reference indicates -the `poollet` responsible for the announced `Pool` that something needs to be done, such as creating a `Machine` or `Volume` resource. +If a Machine or Volume resource has been created, the IronCore scheduler looks for a suitable Pool that +supports the defined MachineClass or VolumeClass. If a suitable Pool is found, the scheduler sets the `.spec.machinePoolRef` +or `.spec.volumePoolRef` field of the Machine or Volume resource to the name of the Pool. This reference indicates +to the `poollet` responsible for the announced Pool that something needs to be done, such as creating a Machine or Volume resource. 
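+
+As a sketch (API group and names follow the `compute` examples above), a Machine that has passed scheduling would look roughly like this:
+
+```yaml
+apiVersion: compute.ironcore.dev/v1alpha1
+kind: Machine
+metadata:
+  name: machine-sample
+spec:
+  machineClassRef:
+    name: machineclass-sample   # requested capabilities
+  machinePoolRef:
+    name: machinepool-sample    # set by the scheduler once a suitable Pool is found
+```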
-The current schedule implementation works on a best-effort basis, meaning that it will try to find a suitable `Pool` and -assign the correct `Pool` but it does not guarantee that the `Machine` or `Volume` will be created. A new `Reservation` -based scheduling mechanism which is described in this [enhancement proposal](https://github.com/ironcore-dev/ironcore/blob/main/docs/proposals/11-scheduling.md) +The current scheduler implementation works on a best-effort basis, meaning that it tries to find and assign a suitable Pool +but does not guarantee that the Machine or Volume will be created. A new Reservation-based +scheduling mechanism described in this [enhancement proposal](https://github.com/ironcore-dev/ironcore/blob/main/docs/proposals/11-scheduling.md) should provide a more robust scheduling mechanism in the future. -## Poollets +## Poollets -The `poollets` responsibilities besides announcing the `Pool` are manifold and are depicted in the diagram below: +A poollet's responsibilities extend beyond announcing the Pool and are depicted in the diagram below: ![Pools and Poollets](/poolsandpoollets.png) @@ -69,7 +69,7 @@ The `poollets` responsibilities besides announcing the `Pool` are manifold and a The key role in managing the resources is defined in the [IronCore Runtime Interface (IRI)](/iaas/architecture/runtime-interface), which provides a well-defined gRPC interface for each resource group. -This architecture with resources scheduled on `Pools` and `poollets` acting as the resource managers by invoking a backend -interface API corresponds to the same model Kubernetes is using with Kubelet announcing `Nodes` and `Pods` being scheduled -on `Nodes`. The Kubelet here interact via the Container Runtime Interface (CRI) with the container runtime, to manifest -the actual instance of a `Pod`. +This architecture with resources scheduled on Pools and poollets acting as the resource managers by invoking a backend +interface API corresponds to the same model Kubernetes uses with the Kubelet announcing Nodes and Pods being scheduled +on Nodes. The Kubelet interacts with the container runtime via the Container Runtime Interface (CRI) to create +the actual Pod instance. diff --git a/docs/iaas/getting-started.md b/docs/iaas/getting-started.md index aa169f5..15c9fc9 100644 --- a/docs/iaas/getting-started.md +++ b/docs/iaas/getting-started.md @@ -1,10 +1,10 @@ -# Getting started with Infrastructure as a Service +# Getting Started with Infrastructure as a Service This section provides a comprehensive guide to getting started with IronCore's Infrastructure as a Service (IaaS) layer. It covers the prerequisites, local setup, and how to create and manage resources such as virtual machines, storage, and networking. -Before you are using IronCore IaaS, please make yourself familiar with the core concepts of IronCore as described in the +Before you use IronCore IaaS, make yourself familiar with the core concepts of IronCore as described in the [architecture overview section](/iaas/architecture/). ## Local Setup @@ -25,13 +25,13 @@ The fastest way to get started with IronCore IaaS is to run it locally inside a To do that clone the `ironcore-in-a-box` repository and run the provided script: -```bash +```shell git clone https://github.com/ironcore-dev/ironcore-in-a-box.git ``` To start the IronCore stack you simply run: -```bash +```shell make up ``` @@ -43,10 +43,10 @@ for more details on how to run the stack on macOS or Windows. 
### Usage -Once all IronCore components are up and running, you can start using the IaaS layer. `ironcore-in-a-box` provides a -conclusive example on how to create a `Machine` and all necessary resources to run a virtual machine [here](https://github.com/ironcore-dev/ironcore-in-a-box/blob/main/examples/machine/machine.yaml). +Once all IronCore components are up and running, you can start using the IaaS layer. The `ironcore-in-a-box` project provides a +comprehensive example of how to create a Machine and all necessary resources to run a virtual machine [here](https://github.com/ironcore-dev/ironcore-in-a-box/blob/main/examples/machine/machine.yaml). -```bash +```shell kubectl apply -f https://raw.githubusercontent.com/ironcore-dev/ironcore-in-a-box/refs/heads/main/examples/machine/machine.yaml ``` diff --git a/docs/iaas/kubernetes/cloud-controller-manager.md b/docs/iaas/kubernetes/cloud-controller-manager.md index 6f431ec..b78cdb8 100644 --- a/docs/iaas/kubernetes/cloud-controller-manager.md +++ b/docs/iaas/kubernetes/cloud-controller-manager.md @@ -3,11 +3,11 @@ [Cloud-Controller-Manager](https://kubernetes.io/docs/concepts/architecture/cloud-controller) (CCM) is the bridge between Kubernetes and a cloud-provider. CCM is responsible for managing cloud-specific infrastructure resources such as `Routes`, `LoadBalancer` and `Instances`. CCM uses the cloud-provider (IronCore in this case) APIs to manage these -resources. We have implemented the [cloud provider interface](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go) -in the [`cloud-provider-ironcore`](https://github.com/ironcore-dev/cloud-provider-ironcore) repository. -Here's a more detail on how these APIs are implemented in the IronCore IaaS cloud-provider for different objects. +resources. The [cloud provider interface](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go) +is implemented in the [`cloud-provider-ironcore`](https://github.com/ironcore-dev/cloud-provider-ironcore) repository. +Here is more detail on how these APIs are implemented in the IronCore IaaS cloud-provider for different objects. -## Node lifecycle +## Node Lifecycle The Node Controller within the CCM ensures that the Kubernetes cluster has an up-to-date view of the available `Nodes` and their status, by interacting with cloud-provider's API. This allows Kubernetes to manage workloads effectively and @@ -17,15 +17,15 @@ Below is the detailed explanation on how APIs are implemented by `cloud-provider ### Instance Exists -- `InstanceExists` method checks for the node existence. `Machine` object from IronCore represents the `Node` instance. +- The `InstanceExists` method checks whether the node exists. A Machine object from IronCore represents the Node instance. ### Instance Shutdown -- `InstanceShutdown` checks if the node instance is in shutdown state +- The `InstanceShutdown` method checks if the node instance is in a shutdown state. ### Instance Metadata -InstanceMetadata returns metadata of a node instance, which includes : +The `InstanceMetadata` method returns metadata of a node instance, which includes: - `ProviderID`: Provider is combination of ProviderName(Which is nothing but set to `IronCore`) - `InstanceType`: InstanceType is set to referencing MachineClass name by the instance. @@ -33,34 +33,34 @@ InstanceMetadata returns metadata of a node instance, which includes : - `Zone`: Zone is set to referenced MachinePool name. 
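+
+Put together, a Node registered through the CCM might end up with metadata along these lines (a sketch; the exact `providerID` format is an implementation detail of `cloud-provider-ironcore`, and all values are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Node
+metadata:
+  name: worker-1
+  labels:
+    node.kubernetes.io/instance-type: machineclass-sample   # from InstanceType
+    topology.kubernetes.io/zone: machinepool-sample         # from Zone
+spec:
+  providerID: ironcore://default/machine-sample             # from ProviderID
+```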
-## Load balancing for Services of type LoadBalancer +## Load Balancing for Services of Type LoadBalancer -`LoadBalancer` service allows external access to Kubernetes services within a cluster, ensuring traffic is distributed -effectively. Within the CCM there is a controller that listens for `Service` objects of type `LoadBalancer`. It then +A LoadBalancer service allows external access to Kubernetes services within a cluster, ensuring traffic is distributed +effectively. Within the CCM there is a controller that listens for Service objects of type LoadBalancer. It then interacts with cloud provider specific APIs to provision, configure, and manage the load balancer. Below is the detailed explanation on how APIs are implemented in IronCore cloud-provider. ### GetLoadBalancerName -- `GetLoadBalancerName` return LoadBalancer's name based on the service name +- The `GetLoadBalancerName` method returns the LoadBalancer name based on the service name. ### Ensure LoadBalancer -- `EnsureLoadBalancer` gets the LoadBalancer name based on service name. -- Checks if IronCore `LoadBalancer` object already exists. If not it gets the `port` and `protocol`, `ipFamily` information from the service and creates a new LoadBalancer object in the IronCore. -- Newly created LoadBalancer will be associated with Network reference provided in cloud configuration. -- Then `LoadBalancerRouting` object is created with the destination IP information retrieved from the nodes (Note: `LoadBalancerRouting` is internal object to IronCore). Later, this information is used at the IronCore API level to describe the explicit targets in a pool traffic is routed to. -- IronCore supports two types of LoadBalancer `Public` and `Internal`. If LoadBalancer has to be of type Internal, "service.beta.kubernetes.io/ironcore-load-balancer-internal" annotation needs to be set to true, otherwise it will be considered as public type. +- The `EnsureLoadBalancer` method gets the LoadBalancer name based on the service name. +- It checks if an IronCore LoadBalancer object already exists. If not, it gets the `port`, `protocol`, and `ipFamily` information from the service and creates a new LoadBalancer object in IronCore. +- The newly created LoadBalancer is associated with the Network reference provided in the cloud configuration. +- A LoadBalancerRouting object is then created with the destination IP information retrieved from the nodes (note: LoadBalancerRouting is an internal object in IronCore). This information is used at the IronCore API level to describe the explicit targets in a pool that traffic is routed to. +- IronCore supports two types of LoadBalancer: `Public` and `Internal`. If the LoadBalancer needs to be of type Internal, you must set the `service.beta.kubernetes.io/ironcore-load-balancer-internal` annotation to `true`. Otherwise, it is considered public. ### Update LoadBalancer -- `UpdateLoadBalancer` gets the `LoadBalancer` and `LoadBalancerRouting` objects based on service name. -- If there is any change in the nodes(added/removed), LoadBalancerRouting destinations are updated. +- The `UpdateLoadBalancer` method gets the LoadBalancer and LoadBalancerRouting objects based on the service name. +- If there is any change in the nodes (added/removed), the LoadBalancerRouting destinations are updated. ### Delete LoadBalancer -- EnsureLoadBalancerDeleted gets the LoadBalancer name based on service name, check if it exists and deletes it. 
+- The `EnsureLoadBalancerDeleted` method gets the LoadBalancer name based on the service name, checks if it exists, and deletes it. ## Managing Routes @@ -70,21 +70,21 @@ interfaces to ensure this functionality. ### Creating Routes -- Create route method retrieve the machine object corresponding to the target node name. -- Iterates over all target node addresses and identify the matching network interface using internal IPs. -- If a matching network interface is found, proceed with finding prefix. If a prefix already exists, that means the route is already present. -- If the prefix is not found, add it to the network interface specification. +- The `CreateRoute` method retrieves the Machine object corresponding to the target node name. +- It iterates over all target node addresses and identifies the matching NetworkInterface using internal IPs. +- If a matching NetworkInterface is found, it proceeds with finding the prefix. If a prefix already exists, the route is already present. +- If the prefix is not found, it adds it to the NetworkInterface specification. -### Deleting Route +### Deleting Routes -- Delete route method retrieves the machine object corresponding to the target node name. -- Then iterates over all target node addresses and identify the matching network interface using internal IPs. -- If a matching network interface is found, attempt to remove the prefix. -- Check for the prefix in the network interface's spec and remove it if present. +- The `DeleteRoute` method retrieves the Machine object corresponding to the target node name. +- It iterates over all target node addresses and identifies the matching NetworkInterface using internal IPs. +- If a matching NetworkInterface is found, it attempts to remove the prefix. +- It checks for the prefix in the NetworkInterface spec and removes it if present. -### List Routes +### Listing Routes -- List route method retrieves all the network interfaces matching the given namespace, network, and cluster label. -- Iterates over each network interface and compiles route information based on its prefixes. -- It also verifies that the network interface is associated with a machine reference and retrieves node addresses based on the machine reference name. -- Based on all the collected information (Name, DestinationCIDR, TargetNode, TargetNodeAddresses) `Route` list is returned. +- The `ListRoutes` method retrieves all NetworkInterfaces matching the given namespace, network, and cluster label. +- It iterates over each NetworkInterface and compiles route information based on its prefixes. +- It also verifies that the NetworkInterface is associated with a Machine reference and retrieves node addresses based on the Machine reference name. +- Based on all the collected information (Name, DestinationCIDR, TargetNode, TargetNodeAddresses), the Route list is returned. 
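+
+Tying the load-balancer behavior above back to everyday usage, a Service like the following causes the CCM to provision an internal IronCore LoadBalancer (the annotation is the one described above; everything else is a standard Kubernetes Service):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+  annotations:
+    # without this annotation the LoadBalancer is of type Public
+    service.beta.kubernetes.io/ironcore-load-balancer-internal: "true"
+spec:
+  type: LoadBalancer
+  selector:
+    app: my-app
+  ports:
+    - port: 80
+      protocol: TCP
+      targetPort: 8080
+```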
diff --git a/docs/iaas/kubernetes/csi-driver.md b/docs/iaas/kubernetes/csi-driver.md index 69b5c04..7ab010e 100644 --- a/docs/iaas/kubernetes/csi-driver.md +++ b/docs/iaas/kubernetes/csi-driver.md @@ -47,18 +47,18 @@ explanation of how the APIs are implemented in the IronCore CSI driver for diffe ### Volume Creation -- `CreateVolume` method is called when a new `PersistentVolumeClaim` is created +- The `CreateVolume` method is called when a new PersistentVolumeClaim is created - Validates the storage class parameters and volume capabilities -- Create a new `Volume` object in IronCore with specified parameters -- Set up the volume with the appropriate size, access mode, and other configurations +- Creates a new Volume object in IronCore with specified parameters +- Sets up the volume with the appropriate size, access mode, and other configurations - Returns a unique volume ID that will be used to identify the volume in later operations ### Volume Deletion -- `DeleteVolume` method is called when a `PersistentVolume` is deleted -- Retrieve the volume using the volume ID +- The `DeleteVolume` method is called when a PersistentVolume is deleted +- Retrieves the volume using the volume ID - Performs cleanup operations if necessary -- Delete the `Volum`e object from IronCore +- Deletes the Volume object from IronCore - Ensures all associated resources are properly cleaned up ## Node Operations @@ -67,17 +67,17 @@ The CSI driver runs as a node plugin on each Kubernetes node to handle volume mo ### Volume Publishing -- `NodePublishVolume` is called when a volume needs to be mounted on a node +- The `NodePublishVolume` method is called when a volume needs to be mounted on a node - Validates the volume capabilities and access mode -- Create the necessary mount point on the node +- Creates the necessary mount point on the node - Mounts the volume using the appropriate filesystem -- Set up the required permissions and mount options +- Sets up the required permissions and mount options ### Volume Unpublishing -- `NodeUnpublishVolume` is called when a volume needs to be unmounted from a node +- The `NodeUnpublishVolume` method is called when a volume needs to be unmounted from a node - Unmounts the volume from the specified mount point -- Clean up any temporary files or directories +- Cleans up any temporary files or directories - Ensures the volume is properly detached from the node ## Controller Operations @@ -86,24 +86,24 @@ The CSI driver also runs as a controller plugin to manage volume provisioning an ### Volume Attachment -- `ControllerPublishVolume` is called when a volume needs to be attached to a node +- The `ControllerPublishVolume` method is called when a volume needs to be attached to a node - Validates the node information and volume capabilities -- Attaches the `Volume` to the specified node +- Attaches the Volume to the specified node - Returns the device path that will be used for mounting ### Volume Detachment -- `ControllerUnpublishVolume` is called when a volume needs to be detached from a node +- The `ControllerUnpublishVolume` method is called when a volume needs to be detached from a node - Detaches the volume from the specified node -- Perform any necessary cleanup operations +- Performs any necessary cleanup operations - Ensures the volume is properly detached before returning ## Volume Expansion The CSI driver supports online volume expansion (if allowed by the `StorageClass`), allowing volumes to be resized without downtime. 
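+
+For example, expansion is only attempted when the StorageClass permits it, as in this sketch (the `provisioner` name and the `type` parameter are indicative; check your ironcore-csi-driver deployment for the actual values):
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: ironcore-expandable
+provisioner: csi.ironcore.dev   # indicative driver name
+allowVolumeExpansion: true      # required for online expansion
+parameters:
+  type: fast                    # hypothetical mapping to an IronCore VolumeClass
+```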
-- `ExpandVolume` is called when a volume needs to be resized +- The `ExpandVolume` method is called when a volume needs to be resized - Validates the new size and volume capabilities -- Resizes the `Volume` in IronCore -- Update the filesystem if necessary +- Resizes the Volume in IronCore +- Updates the filesystem if necessary - Returns the new size of the volume diff --git a/docs/iaas/kubernetes/index.md b/docs/iaas/kubernetes/index.md index 41765ce..deb5c82 100644 --- a/docs/iaas/kubernetes/index.md +++ b/docs/iaas/kubernetes/index.md @@ -15,7 +15,7 @@ The typical provider-specific integration points in Kubernetes are the following As for CNI and CRI you can use almost any implementation that is compatible with Kubernetes. -For CSI, IronCore provider an own implementation of the [CSI interface](/iaas/kubernetes/csi-driver). +For CSI, IronCore provides its own implementation of the [CSI interface](/iaas/kubernetes/csi-driver). Additionally, the IronCore [Cloud Controller Manager](/iaas/kubernetes/cloud-controller-manager) provides the necessary integration points of handling Loadbalancing and other provider specific integrations like the `Node` lifecycle and topology information. diff --git a/docs/iaas/operations-guide/overview.md b/docs/iaas/operations-guide/overview.md index 3c41e34..14a46ab 100644 --- a/docs/iaas/operations-guide/overview.md +++ b/docs/iaas/operations-guide/overview.md @@ -7,4 +7,4 @@ This guide provides operational instructions for system operators, covering depl - Routine maintenance tasks - Monitoring & alerts -For more detailed troubleshooting information, refer to the [Troubleshooting](../troubleshooting/index.md) section. +For more detailed troubleshooting information, refer to the [Troubleshooting](/iaas/troubleshooting/) section. diff --git a/docs/iaas/usage-guides/compute.md b/docs/iaas/usage-guides/compute.md index d6348d1..a315639 100644 --- a/docs/iaas/usage-guides/compute.md +++ b/docs/iaas/usage-guides/compute.md @@ -1,6 +1,6 @@ # Compute Resources -IronCore compute resources are `Machines`, their associated `Machineclasses` and `MachinePools` that allow you to define, provision, and manage virtual machines. This guide explains the core compute resource types and how to use them. +IronCore compute resources are Machines, their associated MachineClasses, and MachinePools that allow you to define, provision, and manage virtual machines. This guide explains the core compute resource types and how to use them. ## Machine @@ -74,7 +74,7 @@ A `MachinePool` is a resource in IronCore that represents a pool of compute reso the infrastructure's compute configuration used to provision and manage `Machines`, ensuring resource availability and compatibility with associated `MachineClasses`. -> Note:One `machinepoollet` is responsible for one `MachinePool`. +> Note: One `machinepoollet` is responsible for one MachinePool. Details on how `MachinePools` are announced and used can be found in the [Pools and Poollets](/iaas/architecture/scheduling) section. diff --git a/docs/iaas/usage-guides/index.md b/docs/iaas/usage-guides/index.md index f9e53fd..81af13e 100644 --- a/docs/iaas/usage-guides/index.md +++ b/docs/iaas/usage-guides/index.md @@ -7,4 +7,4 @@ operations. Those guides are intended to give the user a comprehensive understan More examples and detailed usage can be found in the [end to end examples](https://github.com/ironcore-dev/ironcore/tree/main/config/samples/e2e) in the `ironcore` repository. 
-Detailed API references for the IronCore IaaS types can be found in the [API References](../api-references/) section. +Detailed API references for the IronCore IaaS types can be found in the [API References](/iaas/api-references/) section. diff --git a/docs/iaas/usage-guides/storage.md b/docs/iaas/usage-guides/storage.md index a0cfb6f..3a4f701 100644 --- a/docs/iaas/usage-guides/storage.md +++ b/docs/iaas/usage-guides/storage.md @@ -1,10 +1,10 @@ # Storage Resources -IronCore storage resources are -- `Volumes`, their associated `Volumeclasses` and `VolumePools` that allow you to define, provision, and manage Block devices in the IronCore infrastructure. +IronCore storage resources are: +- Volumes, their associated VolumeClasses, and VolumePools that allow you to define, provision, and manage block devices in the IronCore infrastructure. -- `Buckets`, their associated `Bucketclasses` and `BucketPools`, that allow you to define, provision, and manage the object storage such as files or data blobs. +- Buckets, their associated BucketClasses, and BucketPools that allow you to define, provision, and manage object storage such as files or data blobs. -- `VolumeSnapshots`, that allow users to take a point-in-time snapshot of an IronCore `Volume` content. It can be used to restore the data in case of data loss or to migrate the data to a different cluster or storage system. Also an IronCore `Volume` can be provisioned by referencing a `VolumeSnapshot`. +- VolumeSnapshots that allow you to take a point-in-time snapshot of a Volume's content. You can use them to restore data in case of data loss or to migrate data to a different cluster or storage system. You can also provision a Volume by referencing a VolumeSnapshot. This guide explains the core storage resource types and how to use them. @@ -84,7 +84,7 @@ spec: ``` ### Key Fields: -- `providerID` (`string`): The `providerId` helps the controller identify and communicate with the correct storage system within the specific backened storage provider. +- `providerID` (`string`): The `providerId` helps the controller identify and communicate with the correct storage system within the specific backend storage provider. ## Bucket @@ -175,7 +175,7 @@ spec: ### Key Fields: -- `ProviderID` (`string`): The `providerId` helps the controller identify and communicate with the correct storage system within the specific backened storage provider. +- `ProviderID` (`string`): The `providerId` helps the controller identify and communicate with the correct storage system within the specific backend storage provider. ## VolumeSnapshot @@ -197,4 +197,4 @@ spec: ### Key Fields: -- `volumeRef` (`string`): `volumeRef` refers to the name of an IronCore `volume` to create a volumeSnapshot. +- `volumeRef` (`string`): `volumeRef` refers to the name of an IronCore Volume to create a VolumeSnapshot. diff --git a/docs/network-automation/index.md b/docs/network-automation/index.md index 261db36..9f6cb3a 100644 --- a/docs/network-automation/index.md +++ b/docs/network-automation/index.md @@ -1,16 +1,16 @@ # Network Automation -The IronCore project provides a robust framework for automating network management tasks. We are leveraging +The IronCore project provides a robust framework for automating network management tasks. It leverages a Kubernetes-based architecture to streamline the deployment, configuration, and monitoring of network devices. ## Key Features -- **Devices Discovery**: Automatically discover network devices. 
+- **Device Discovery**: Automatically discover network devices. - **Provisioning**: Automate the provisioning of network devices. - **Configuration Management**: Manage and apply configurations across multiple devices. ## Getting Started -In the IronCore project we support different vendors and device types through dedicated operators. Below are some of the key components: +The IronCore project supports different vendors and device types through dedicated operators. Below are some of the key components: - [network-operator](https://ironcore.dev/network-operator/): Automation for Cisco NX-OS devices. - [sonic-operator](https://github.com/ironcore-dev/sonic-operator/): Automation for Sonic Edgecore switches. diff --git a/docs/overview/index.md b/docs/overview/index.md index 70d5e44..4214e37 100644 --- a/docs/overview/index.md +++ b/docs/overview/index.md @@ -6,7 +6,7 @@ can transform your workflows. ## IronCore Architecture -Here's a visual representation of IronCore's two layers. The bare metal management and network automation layers +The following diagram shows IronCore's two layers. The bare metal management and network automation layers belong to the infrastructure management domain, while the IaaS layer provides cloud-like capabilities on top. ## Infrastructure as a Service (IaaS) diff --git a/docs/overview/principles.md b/docs/overview/principles.md index 379fd77..b3df920 100644 --- a/docs/overview/principles.md +++ b/docs/overview/principles.md @@ -1,17 +1,16 @@ # Project Design Principles -## 1. Declarative Kubernetes APIs +## Declarative Kubernetes APIs All functionality must be exposed via declarative Kubernetes APIs. Use Custom Resource Definitions (CRDs) or the API aggregation layer where appropriate. Services should model their state using Kubernetes resources. -## 2. Minimal API Surface +## Minimal API Surface When introducing new features, expose only the essential fields. Avoid over-designing or leaking internal implementation details into the API. -## 3. Separation of Concerns -Clearly separate different problem domains. Don’t mix unrelated concerns in a single API or component. Define strict API contracts between boundaries. +## Separation of Concerns +Clearly separate different problem domains. Do not mix unrelated concerns in a single API or component. Define strict API contracts between boundaries. -## 4. Kubernetes API Conventions -Follow official Kubernetes API conventions when designing or extending APIs: -➡️ [Kubernetes API Conventions Guide](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md) +## Kubernetes API Conventions +Follow official Kubernetes API conventions when designing or extending APIs. See the [Kubernetes API Conventions Guide](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md) for details. -## 5. No Scripting for Deployment +## No Scripting for Deployment Avoid Bash, Python, or other scripting languages for deployment tasks. All components must be declaratively deployable using Kubernetes manifests or tools like Helm or Kustomize.