4 changes: 2 additions & 2 deletions docs/baremetal/api-references/index.md
@@ -2,5 +2,5 @@

This section provides detailed API reference documentation for the IronCore Bare Metal Management components.

* [**metal-operator**](/baremetal/api-references/metal-operator): The core bare metal management component that manages the lifecycle of bare metal servers.
* [**boot-operator**](/baremetal/api-references/boot-operator): Responsible for providing boot images and Ignition configurations to bare metal servers.
- [**metal-operator**](/baremetal/api-references/metal-operator): The core bare metal management component that manages the lifecycle of bare metal servers.
- [**boot-operator**](/baremetal/api-references/boot-operator): Responsible for providing boot images and Ignition configurations to bare metal servers.
2 changes: 1 addition & 1 deletion docs/baremetal/architecture/discovery.md
@@ -44,7 +44,7 @@ spec:
```

3. The `EndpointReconciler` watches for changes to the `Endpoint` and looks up the MAC address in the [MACAddress database](https://ironcore-dev.github.io/metal-operator/concepts/endpoints.html#configuration)
to find a matching MAC address prefix end derive from that the initial credentials, protocol, and other information needed to create a BMC resource.
to find a matching MAC address prefix and derive from that the initial credentials, protocol, and other information needed to create a BMC resource.
4. If a MAC address prefix is found in the database, the `EndpointReconciler` creates a `BMC` and `BMCSecret` resource.

Here is an example of a `BMC` resource:
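
For orientation, a minimal sketch of such a resource follows; the actual example is collapsed in this diff, so the field names and values here are illustrative assumptions rather than the documented ones:

```yaml
apiVersion: metal.ironcore.dev/v1alpha1
kind: BMC
metadata:
  name: my-bmc
spec:
  endpointRef:
    name: my-endpoint     # Endpoint created for the discovered MAC address
  protocol:
    name: Redfish         # protocol derived from the MACAddress database entry
    port: 8000
  bmcSecretRef:
    name: my-bmc-secret   # BMCSecret holding the initial credentials
```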
2 changes: 1 addition & 1 deletion docs/baremetal/architecture/index.md
@@ -9,7 +9,7 @@ server management and in-band server boot automation.

The out-of-band automation is responsible for the initial provisioning of bare metal servers. Here the main component
is the `metal-operator`, which is responsible for managing the lifecycle of bare metal server. In the out-of-band
network, BMCs (Baseboard Management Controllers) are assigned IP addresses (in our case via FeDHCP) and are then reachable
network, BMCs (Baseboard Management Controllers) are assigned IP addresses (via FeDHCP) and are then reachable
via the `metal-operator`. The `metal-operator` can then perform actions like [discovering](/baremetal/architecture/discovery)
and [provisioning](/baremetal/architecture/provisioning) servers. It is also responsible for the [maintenance](/baremetal/architecture/maintenance)
workflow of bare metal servers.
4 changes: 2 additions & 2 deletions docs/baremetal/architecture/maintenance.md
@@ -2,5 +2,5 @@

TODO:

* Describe the maintenance process
* Describe the extension points here
- Describe the maintenance process
- Describe the extension points here
8 changes: 4 additions & 4 deletions docs/baremetal/architecture/provisioning.md
@@ -1,10 +1,10 @@
# Server Provisioning

This section describes how the provisioning of bare metal servers is handled in IronCore's bare metal automation.
In the [discovery section](/baremetal/architecture/discovery) we discussed how servers are discovered and first time
booted and how they are transitioned into an `Available` state. Now we will focus on the provisioning process, and
one can use such a `Server` resource to provision a custom operating system and automate the software installation on
such a server.
In the [discovery section](/baremetal/architecture/discovery) you can find how servers are discovered, booted for the
first time, and transitioned into an `Available` state. This section focuses on the provisioning process and how
you can use a Server resource to provision a custom operating system and automate the software installation on
a server.

## Claiming a Server
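
The section body is collapsed in this diff. As a hedged sketch of the idea, a claim is typically expressed through a `ServerClaim` that binds an `Available` server and references a boot image plus an Ignition secret; the field names below are assumptions for illustration:

```yaml
apiVersion: metal.ironcore.dev/v1alpha1
kind: ServerClaim
metadata:
  name: my-claim
  namespace: default
spec:
  power: "On"                       # desired power state once the claim binds
  serverRef:
    name: my-server                 # an Available Server (a label selector could be used instead)
  image: my-registry/my-os:latest   # hypothetical OS image reference
  ignitionSecretRef:
    name: my-ignition               # automates the software installation on first boot
```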

3 changes: 2 additions & 1 deletion docs/baremetal/index.md
@@ -18,7 +18,8 @@ The core components of the bare metal management in IronCore include:

## Concepts and Usage Guides

Usage guides and concepts for the `metal-operator` API types can be found in the [metal-operator documentation](https://ironcore-dev.github.io/metal-operator/concepts/).
Usage guides and concepts for the `metal-operator` API types can be found in the [metal-operator documentation](https://ironcore-dev.github.io/metal-operator/concepts).


## Prerequisites

22 changes: 11 additions & 11 deletions docs/baremetal/kubernetes/cloud-controller-manager.md
@@ -2,24 +2,24 @@

[Cloud-Controller-Manager](https://kubernetes.io/docs/concepts/architecture/cloud-controller) (CCM) is the bridge
between Kubernetes and a cloud-provider. CCM uses the cloud-provider (IronCore Bare Metal API in this case) API to manage these
resources. We have implemented the [cloud provider interface](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go)
in the [`cloud-provider-metal`](https://github.com/ironcore-dev/cloud-provider-metal) repository.
Here's a more detail on how these APIs are implemented in the IronCore bare metal cloud-provider for different objects.
resources. The [cloud provider interface](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go)
is implemented in the [`cloud-provider-metal`](https://github.com/ironcore-dev/cloud-provider-metal) repository.
Here is more detail on how these APIs are implemented in the IronCore bare metal cloud-provider for different objects.

## Node lifecycle
## Node Lifecycle

### InstanceExists

`InstanceExists` checks if a node with the given name exists in the cloud provider. In IronCore bare metal a `Node`
is represented by a `ServerClaim` object. The `InstanceExists` method checks if a `ServerClaim` with the given name exists.
The `InstanceExists` method checks if a node with the given name exists in the cloud provider. In IronCore bare metal, a Node
is represented by a ServerClaim object. The `InstanceExists` method checks if a ServerClaim with the given name exists.

### InstanceShutdown

`InstanceShutdown` checks if a node with the given name is shutdown in the cloud provider. Here, the instane controller
checks if the `ServerClaim` and the claimed `Server` object are in the `PowerOff` state.
The `InstanceShutdown` method checks if a node with the given name is shut down in the cloud provider. Here, the instance controller
checks if the ServerClaim and the claimed Server object are in the `PowerOff` state.

### InstanceMetadata

`InstanceMetadata` retrieves the metadata of a node with the given name. In IronCore bare metal, this method retrieves
the topology labels from a `Server` object which is claimed by the `ServerClaim` of the node. Additional labels of the
`Server` object are also added to the node object.
The `InstanceMetadata` method retrieves the metadata of a node with the given name. In IronCore bare metal, this method retrieves
the topology labels from a Server object that is claimed by the ServerClaim of the node. Additional labels of the
Server object are also added to the Node object.
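
As a hedged illustration of what these methods read, a claimed Server might carry the power state and topology labels roughly as follows; label keys and field names are assumptions, not taken from this PR:

```yaml
apiVersion: metal.ironcore.dev/v1alpha1
kind: Server
metadata:
  name: my-server
  labels:
    topology.kubernetes.io/region: my-region   # copied to the Node by InstanceMetadata
    topology.kubernetes.io/zone: my-zone
spec:
  power: "Off"          # desired power state set via the ServerClaim
status:
  state: Reserved       # bound to a ServerClaim
  powerState: PowerOff  # InstanceShutdown reports true in this state
```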
6 changes: 3 additions & 3 deletions docs/baremetal/kubernetes/gardener.md
@@ -10,9 +10,9 @@ There are two main components in the Gardener integration with IronCore:
## Machine Controller Manager (MCM)

The [machine-controller-manager-provider-ironcore](https://github.com/ironcore-dev/machine-controller-manager-provider-ironcore-metal)
is responsible for managing the lifecycle of `Nodes` in a Kubernetes cluster. Here the MCM in essence is translating
Gardener `Machine` resource to `ServerClaims` and wrapping the `user-data` coming from the Gardner OS extensions into
an Ignition `Secret`.
is responsible for managing the lifecycle of Nodes in a Kubernetes cluster. Here the MCM in essence translates
Gardener Machine resources into ServerClaims and wraps the `user-data` coming from the Gardener OS extensions into
an Ignition Secret.
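
A hedged sketch of that wrapping, with names and keys chosen for illustration only:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-machine-ignition
  namespace: my-shoot-namespace
type: Opaque
stringData:
  ignition: |           # user-data from the Gardener OS extension, wrapped for the ServerClaim
    {
      "ignition": { "version": "3.4.0" },
      "passwd": { "users": [ { "name": "my-user" } ] }
    }
```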

## Gardener Extension Provider

2 changes: 1 addition & 1 deletion docs/baremetal/operations-guide/overview.md
@@ -7,4 +7,4 @@ This guide provides operational instructions for system operators, covering depl
- Routine maintenance tasks
- Monitoring & alerts

For more detailed troubleshooting information, refer to the [Troubleshooting](../troubleshooting/index.md) section.
For more detailed troubleshooting information, refer to the [Troubleshooting](/baremetal/troubleshooting/) section.
10 changes: 5 additions & 5 deletions docs/community/contributing.md
@@ -33,7 +33,7 @@ avoids unnecessary work and helps align your contribution with the project's dir

## Making a Contribution

### 1. Fork and Clone
### Fork and Clone

Fork the repository you want to contribute to and clone it locally:

@@ -42,7 +42,7 @@
cd <repository>
```

### 2. Create a Branch
### Create a Branch

Create a feature branch from `main`:

@@ -57,7 +57,7 @@
git rebase upstream/main
```

### 3. Make Your Changes
### Make Your Changes

- Follow the [coding](/community/style-guide/coding) and [documentation](/community/style-guide/documentation) style guides for code, testing, and documentation standards.
- Keep commits small and focused — each commit should be correct independently.
@@ -69,7 +69,7 @@
git commit -s -m "Add support for feature X"
```

### 4. Submit a Pull Request
### Submit a Pull Request

Push your branch and open a pull request against `main`:

@@ -82,7 +82,7 @@
- Reference any related issues (e.g., `Fixes #123`).
- Tag a relevant maintainer if you need a specific reviewer — check the `CODEOWNERS` file in the repository.

### 5. Run Checks
### Run Checks

Before submitting, run the project's checks locally to catch issues early. See
[Running tests](/community/style-guide/coding#running-tests) in the coding style guide for details:
12 changes: 6 additions & 6 deletions docs/iaas/api-references/index.md
@@ -4,9 +4,9 @@ This section provides detailed API references for the IronCore IaaS types.

The IronCore aggregated API server exposes the following API groups:

* [**Core**](/iaas/api-references/core): The core API group contains the foundational types used in IronCore IaaS.
* [**Compute**](/iaas/api-references/compute): The compute API group contains types related to virtual machines and compute resources.
* [**Storage**](/iaas/api-references/storage): The storage API group contains types related to storage resources.
* [**Networking**](/iaas/api-references/networking): The networking API group contains types related to networking resources.
* [**IPAM**](/iaas/api-references/ipam): The IPAM API group contains types related to IP address management.
* [**Common**](/iaas/api-references/common): The common API group contains types that are shared across multiple API groups.
- [**Core**](/iaas/api-references/core): The core API group contains the foundational types used in IronCore IaaS.
- [**Compute**](/iaas/api-references/compute): The compute API group contains types related to virtual machines and compute resources.
- [**Storage**](/iaas/api-references/storage): The storage API group contains types related to storage resources.
- [**Networking**](/iaas/api-references/networking): The networking API group contains types related to networking resources.
- [**IPAM**](/iaas/api-references/ipam): The IPAM API group contains types related to IP address management.
- [**Common**](/iaas/api-references/common): The common API group contains types that are shared across multiple API groups.
24 changes: 12 additions & 12 deletions docs/iaas/architecture/networking.md
@@ -23,11 +23,11 @@ correctly propagated across the IronCore installation.

## `ironcore` and `ironcore-net`

`ironcore-net` is a global coordination service within an IronCore installation. Therefore, it is a single instance and
the place where all network related decisions like reservation of unique IP addresses, allocation of unique network IDs, etc. are made.
The `ironcore-net` service is a global coordination service within an IronCore installation. Therefore, it is a single instance and
the place where all network-related decisions like reservation of unique IP addresses, allocation of unique network IDs, etc. are made.

`ironcore-net` has apart from its [own API](https://github.com/ironcore-dev/ironcore-net/tree/main/api/core/v1alpha1) two main components:
- **apinetlet**: This component is responsible from translating the user-facing API objects from the `networking` resource group into the internal representation used by `ironcore-net`.
The `ironcore-net` service has, apart from its [own API](https://github.com/ironcore-dev/ironcore-net/tree/main/api/core/v1alpha1), two main components:
- **apinetlet**: This component is responsible for translating the user-facing API objects from the `networking` resource group into the internal representation used by `ironcore-net`.
- **metalnetlet**: This component is interfacing with the `metalnet` API to manage cluster-level networking resources like `NetworkInterface` which are requested globally in the `ironcore-net` API but are implemented by `metalnet` on a hypervisor level.
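
The collapsed example below walks through the full flow; for orientation only, a minimal sketch of the kind of user-facing object the `apinetlet` picks up might look like this (group, version, and status fields are assumptions):

```yaml
apiVersion: networking.ironcore.dev/v1alpha1
kind: Network
metadata:
  name: my-network
  namespace: my-project
spec: {}                  # the unique network ID is allocated centrally in ironcore-net
status:
  state: Available        # set once the internal ironcore-net representation exists
```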

### Example `apinetlet` flow
@@ -56,9 +56,9 @@ for translating and allocating the necessary resources in `ironcore-net` to ensu

### `metalnetlet` and `metalnet`

`metalnetlet` and `metalnet` work together to provide the necessary networking capabilities for `Machines` in an IronCore on
The `metalnetlet` and `metalnet` components work together to provide the necessary networking capabilities for Machines in an IronCore instance on
a hypervisor host. In a compute cluster, the `metalnetlet` will create for each `Node` in the cluster a corresponding
`Node` object in the `ironcore-net` API. This `Node` object represents the hypervisor host and is used to manage the networking resources
Node object in the `ironcore-net` API. This Node object represents the hypervisor host and is used to manage the networking resources
which should be available on this host.

The image below illustrates the relationship between `metalnetlet` and `metalnet`:
@@ -68,7 +68,7 @@ The image below illustrates the relationship between `metalnetlet` and `metalnet
The `NetworkInterface` creation flow will look like this:
1. A provider (in this case `libvirt-provider`) will create a virtual machine against the libvirt daemon on a hypervisor host.
In case a `NetworkInterface` should be attached to this virtual machine, the `machinepoollet` will call the corresponding
`AttachNetworkInterface` method on the [`MachineRuntime`](/iaas/architecture/runtime-interface#machineruntime-interface)
`AttachNetworkInterface` method on the [`MachineRuntime`](/iaas/architecture/runtime-interface#machineruntime-interface)
interface implemented by the `libvirt-provider`. The `libvirt-provider` itself then has a plugin into the `ironcore-net`
API to create a `NetworkInterface` resource in the `ironcore-net` API.
2. The `metalnetlet` will then watch for changes to the `NetworkInterface` resource and create the corresponding `NetworkInterface`
@@ -78,10 +78,10 @@
4. The `libvirt-provider` will poll the `ironcore-net` API to get the updated status of the `NetworkInterface` and will
use the PCI address in the status to attach the virtual network interface to the virtual machine instance.

`LoadBalancer` and `NATGateways` resources follow a similar flow. Here, however, the compute provider is not involved.
The `apinetlet` will translate the `ironcore` `LoadBalancer` or `NATGateway` resource into the corresponding `ironcore-net`
objects. Those will be scheduled on `ironcore-net` `Nodes`. Onces this is done, the `metalnetlet` will watch those resources
and create the corresponding `LoadBalancer` or `NATGateway` objects in the `metalnet` API.
LoadBalancer and NATGateway resources follow a similar flow. Here, however, the compute provider is not involved.
The `apinetlet` translates the IronCore LoadBalancer or NATGateway resource into the corresponding `ironcore-net`
objects. Those are scheduled on `ironcore-net` Nodes. Once this is done, the `metalnetlet` watches those resources
and creates the corresponding `LoadBalancer` or `NATGateway` objects in the `metalnet` API.

### `metalnet`, `dpservice` and `metalbond`

@@ -92,4 +92,4 @@ The following figure depicts the basic working flow of creating two interfaces f

![Metalnet Dpservice Metalbond](/metalnet-dpservice-metalbond.png)

`metalnet` controllers watch the metalnet objects such as `Network` and `NetworkInterface`. Upon arriving of a new `NetworkInterface`, `metalnet` communicates with `dpservice` to obtain a newly generated underlying IPv6 address for a corresponding interface's overlay IP address. For example, on `Server-A`, `metalnet` obtains an underlying IPv6 address, `2001:db8::1`, for `interface-A` with a private IP `10.0.0.1`. This encapsulation routing information is announced by `metalnet`'s embedded `metalbond-client` to `metalbond-server` running on a region router, and further synchronised by other `metalbond` instances. For instance, `metalbond` on `Server-B` shall receive this `10.0.0.1@2001:db8::1` information and push it into its local `dpservice` via gRPC. `dpservice` uses these routing information to perform overlay packet encapsulation and decapsulation to achieve communication among VMs running on different servers. Meanwhile, `metalnet` also picks a pci addresss of a VF, such as `0000:3b:00.2`, and attach it as part of `NetworkInterface` status, which is further utilised by `ironcore-net` and `vm-provider`/`libvirt-provider` to create a VM.
The `metalnet` controllers watch the metalnet objects such as Network and NetworkInterface. Upon arrival of a new NetworkInterface, `metalnet` communicates with `dpservice` to obtain a newly generated underlying IPv6 address for a corresponding interface's overlay IP address. For example, on `Server-A`, `metalnet` obtains an underlying IPv6 address, `2001:db8::1`, for `interface-A` with a private IP `10.0.0.1`. This encapsulation routing information is announced by the embedded `metalbond-client` to `metalbond-server` running on a region router, and further synchronized by other `metalbond` instances. For instance, `metalbond` on `Server-B` receives this `10.0.0.1@2001:db8::1` information and pushes it into its local `dpservice` via gRPC. The `dpservice` uses this routing information to perform overlay packet encapsulation and decapsulation to achieve communication among VMs running on different servers. Meanwhile, `metalnet` also picks a PCI address of a VF, such as `0000:3b:00.2`, and attaches it as part of the NetworkInterface status, which is further used by `ironcore-net` and `libvirt-provider` to create a VM.
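
A hedged sketch of the resulting metalnet `NetworkInterface`, reusing the values from this example; the exact field layout is an assumption:

```yaml
apiVersion: networking.metalnet.ironcore.dev/v1alpha1
kind: NetworkInterface
metadata:
  name: interface-a
  namespace: default
spec:
  networkRef:
    name: my-network
  ips:
    - 10.0.0.1                  # overlay IP of interface-A
  nodeName: server-a
status:
  state: Ready
  pciAddress: "0000:3b:00.2"    # VF picked by metalnet, later used to create the VM
  # underlay route announced via metalbond: 10.0.0.1@2001:db8::1
```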
6 changes: 3 additions & 3 deletions docs/iaas/architecture/providers/brokers.md
@@ -9,8 +9,8 @@ Below is an example of how a `machinepoollet` and `machinebroker` will translate
![Brokers](/brokers.png)

Brokers are useful in scenarios where IronCore should not run in a single cluster but rather have a federated
environment. For example, in a federated environment, every hypervisor node in a compute cluster would announce it's
`MachinePool` inside the compute cluster. A `MachinePoollet`/`MachineBroker` in this compute cluster could now announce
a logical `MachinePool` "one level up" as a logical compute pool in e.g. an availability zone cluster. The broker concept
environment. For example, in a federated environment, every hypervisor node in a compute cluster would announce its
MachinePool inside the compute cluster. A MachinePoollet/MachineBroker in this compute cluster could now announce
a logical MachinePool "one level up" as a logical compute pool in e.g. an availability zone cluster. The broker concept
allows you to design a cluster topology which might be important for large-scale deployments of IronCore (e.g., managing
a whole datacenter region with multiple AZs).
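
A hedged sketch of such an announced logical pool; only the concept comes from the text above, while the shape and provider ID scheme are hypothetical:

```yaml
apiVersion: compute.ironcore.dev/v1alpha1
kind: MachinePool
metadata:
  name: az1-logical-pool                          # announced "one level up" in the AZ cluster
spec:
  providerID: machinebroker://compute-cluster-1   # hypothetical provider ID
status:
  availableMachineClasses:
    - name: my-machine-class                      # aggregated from the hypervisor pools below
```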