**`astro.config.mjs`** (9 changes: 5 additions & 4 deletions)

```diff
@@ -440,12 +440,13 @@ export default defineConfig({
       slug: 'aws/enterprise',
     },
     {
-      label: 'Single Sign-On',
-      autogenerate: { directory: '/aws/enterprise/sso' },
+      label: 'Kubernetes',
+      autogenerate: { directory: '/aws/enterprise/kubernetes' },
+      collapsed: true,
     },
     {
-      label: 'Kubernetes Executor',
-      slug: 'aws/enterprise/kubernetes-executor',
+      label: 'Single Sign-On',
+      autogenerate: { directory: '/aws/enterprise/sso' },
     },
     {
       label: 'Enterprise Image',
```
**`src/content/docs/aws/enterprise/kubernetes/concepts.md`** (new file, 100 additions)
---
title: Concepts
description: Concepts & Architecture
template: doc
sidebar:
  order: 2
tags: ["Enterprise"]
---

> **Review comment (Contributor):** Should we add a diagram here similar to the one we have in Notion?
>
> **Reply (Collaborator, Author):** Yes, we can add the one in Notion. I'll add a commit in a bit.

## Concepts & Architecture

This conceptual guide explains how LocalStack runs inside a Kubernetes cluster, how workloads are executed, and how networking and DNS behave in a Kubernetes-based deployment.


## How the LocalStack pod works

The LocalStack pod runs the LocalStack runtime and acts as the central coordinator for all emulated AWS services within the cluster.

Its primary responsibilities include:

* Exposing the LocalStack edge endpoint and AWS service API ports
* Receiving and routing incoming AWS API requests
* Orchestrating services that require additional compute (for example Lambda, Glue, ECS, and EC2)
* Managing the lifecycle of compute workloads spawned on behalf of AWS services

From a Kubernetes perspective, the LocalStack pod is a standard pod that fully participates in cluster networking. It is typically exposed through a Kubernetes `Service`, and all AWS API interactions—whether from inside or outside the cluster—are routed through this pod.


## Execution modes

LocalStack supports two execution modes for running compute workloads:

* Docker executor
* Kubernetes-native executor

### Docker executor

The Docker executor runs workloads as containers started via a Docker runtime that is accessible from the LocalStack pod. This provides a simple, self-contained execution model without Kubernetes-level scheduling.

However, Kubernetes does not provide a Docker daemon inside pods by default. To use the Docker executor in Kubernetes, the LocalStack pod must be given access to a Docker-compatible runtime (commonly via a Docker-in-Docker sidecar), which adds complexity and security concerns.

### Kubernetes-native executor

The Kubernetes-native executor runs workloads as Kubernetes pods. In this mode, LocalStack communicates directly with the Kubernetes API to create, manage, and clean up pods on demand.

This execution mode provides stronger isolation, better security, and full integration with Kubernetes scheduling, resource limits, and lifecycle management.
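Because child pods are created through the Kubernetes API, the ServiceAccount under which LocalStack runs needs permission to manage pods. A minimal RBAC sketch follows; the name is illustrative, and the Helm chart or operator may already provision the equivalent for you:

```yaml
# RBAC sketch (illustrative name; possibly already handled by the chart/operator):
# allow the LocalStack ServiceAccount to manage pods in its namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: localstack-pod-manager
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["create", "get", "list", "watch", "delete"]
```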

The execution mode is configured using the `CONTAINER_RUNTIME` environment variable.
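In a Helm-based deployment this could look like the following sketch, assuming the values `kubernetes` and `docker` select the respective executors:

```yaml
# values.yaml fragment (sketch): select the Kubernetes-native executor
extraEnvVars:
  - name: CONTAINER_RUNTIME
    value: "kubernetes"  # assumption: "docker" selects the Docker executor
```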


## Child pods

For compute-oriented AWS services, LocalStack can execute workloads either within the LocalStack pod itself or as separate Kubernetes pods.

When the Kubernetes-native executor is enabled, LocalStack launches compute workloads as dedicated Kubernetes pods (referred to here as *child pods*). These include:

* Lambda function invocations
* Glue jobs
* ECS tasks and Batch jobs
* EC2 instances
* RDS databases
* Apache Airflow workflows
* Amazon Managed Service for Apache Flink
* Amazon DocumentDB databases
* Redis instances
* CodeBuild containers

For example, each Glue job run or ECS task invocation results in a new pod created from the workload’s configured runtime image and resource requirements.

These child pods execute independently of the LocalStack pod. Kubernetes is responsible for scheduling them, enforcing resource limits, and managing their lifecycle. Most child pods are short-lived and terminate once the workload completes, though some services (such as Lambda) may keep pods running for longer periods.
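You can observe child pods being created and torn down alongside a workload run, for example:

```bash
# Watch pods appear as workloads (e.g. a Lambda invocation) are dispatched,
# then terminate once the workload completes
kubectl get pods --watch
```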


## Networking model

LocalStack runs as a standard Kubernetes pod and is accessed through a Kubernetes `Service` that exposes the edge API endpoint and any additional service ports.

Other pods within the cluster communicate with LocalStack through this Service using normal Kubernetes DNS resolution and cluster networking.
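For example, in-cluster clients can typically reach LocalStack at the standard Service DNS name (assuming a Service named `localstack` in the `default` namespace):

```bash
# From any pod in the cluster
aws sts get-caller-identity \
  --endpoint-url "http://localstack.default.svc.cluster.local:4566"
```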

When the Kubernetes-native executor is enabled, child pods communicate with LocalStack in the same way, by sending API requests over the cluster network to the LocalStack Service.


## DNS behavior

LocalStack includes a DNS server capable of resolving AWS-style service endpoints.

In a Kubernetes deployment:

* The DNS server can be exposed through the same Kubernetes Service as the LocalStack API ports.
* This allows transparent resolution of AWS service hostnames and `localhost.localstack.cloud` to LocalStack endpoints from within the cluster.
> **Review comment (Contributor):** Something else I thought of!

Suggested change:

```diff
 * This allows transparent resolution of AWS service hostnames and `localhost.localstack.cloud` to LocalStack endpoints from within the cluster.
+* If a custom domain is used to refer to the LocalStack Kubernetes service (via `LOCALSTACK_HOST`), then this name and subdomains of this name are also resolved by the LocalStack DNS server.
```

This enables applications running in Kubernetes to interact with LocalStack using standard AWS SDK endpoint resolution without additional configuration.
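One way to route a pod's DNS lookups through LocalStack is an explicit `dnsConfig` pointing at the LocalStack Service; a sketch, assuming the Service's ClusterIP is `10.96.0.50`:

```yaml
# Pod spec fragment (sketch): send this pod's DNS queries to LocalStack
dnsPolicy: "None"
dnsConfig:
  nameservers:
    - 10.96.0.50   # ClusterIP of the LocalStack Service (substitute your own)
```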


## When to choose the Kubernetes-native executor
> **Review comment (Contributor):** *praise* I really like this section!

The Kubernetes-native executor should be used when LocalStack is deployed inside a Kubernetes cluster and workloads must run reliably and securely.

It is the recommended execution mode for nearly all Kubernetes deployments, because Kubernetes does not include a Docker daemon inside pods and does not provide native Docker access. The Kubernetes-native executor aligns with Kubernetes’ workload model, enabling pod-level isolation, scheduling, and resource governance.

The Docker executor should only be used in Kubernetes environments that have been explicitly modified to provide Docker runtime access to the LocalStack pod. Such configurations are uncommon, often restricted, and can introduce security risks. As a result, the Kubernetes-native executor is the operationally supported and recommended execution mode for Kubernetes-based deployments.
> **Review comment (Contributor):** *question* Do we want to mention that we won't provide support for use cases of the Docker executor in Kubernetes clusters?

**`src/content/docs/aws/enterprise/kubernetes/deploy-helm-chart.md`** (new file, 245 additions)
---
title: Deploy with Helm
description: Install and run LocalStack on Kubernetes using the official Helm chart.
template: doc
sidebar:
  order: 3
tags: ["Enterprise"]
---

> **Review comment (Contributor):** Could we make the Helm chart 4 and the operator 3 to emphasise the operator more?

## Introduction

A Helm chart is a package that bundles Kubernetes manifests into a reusable, configurable deployment unit. It makes applications easier to install, upgrade, and manage.

Using the LocalStack Helm chart lets you deploy LocalStack to Kubernetes with set defaults while still customizing resources, persistence, networking, and environment variables through a single `values.yaml`. This approach is especially useful for teams running LocalStack in shared clusters or CI environments where repeatable, versioned deployments matter.

## Getting Started

This guide shows you how to install and run LocalStack on Kubernetes using the official Helm chart. It walks you through adding the Helm repository, installing and configuring LocalStack, and verifying that your deployment is running and accessible in your cluster.

## Prerequisites

* **Kubernetes** 1.19 or newer
* **Helm** 3.2.0 or newer
* A working Kubernetes cluster (self-hosted, managed, or local)
* `kubectl` installed and configured for your cluster
* Helm CLI installed and available in your shell `PATH`
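
You can quickly confirm the CLI tooling is available:

```bash
kubectl version --client
helm version
```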

:::note
**Namespace note:** All commands in this guide assume installation into the **`default`** namespace.
If you’re using a different namespace:
* Add `--namespace <name>` (and `--create-namespace` on first install) to Helm commands
* Add `-n <name>` to `kubectl` commands
:::

## Install

### 1) Add Helm repo

```bash
helm repo add localstack https://localstack.github.io/helm-charts
helm repo update
```

### 2) Install with default configuration

```bash
helm install localstack localstack/localstack
```

This creates the LocalStack resources in your cluster using the chart defaults.
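To see what the release installed, you can inspect its status:

```bash
helm status localstack
```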

:::note
### Install LocalStack Pro

If you want to use the `localstack-pro` image, create a `values.yaml` file:

```yaml
image:
  repository: localstack/localstack-pro

extraEnvVars:
  - name: LOCALSTACK_AUTH_TOKEN
    value: "<your auth token>"
```

Then install using your custom values:

```bash
helm install localstack localstack/localstack -f values.yaml
```

:::

#### Auth token from a Kubernetes Secret

If your auth token is stored in a Kubernetes Secret, you can reference it using `valueFrom`:

```yaml
extraEnvVars:
  - name: LOCALSTACK_AUTH_TOKEN
    valueFrom:
      secretKeyRef:
        name: <name of the secret>
        key: <name of the key in the secret containing the API key>
```
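
The referenced Secret can be created up front, for example (hypothetical secret name and key; match whatever your `values.yaml` references):

```bash
# Hypothetical names -- align them with the secretKeyRef above
kubectl create secret generic localstack-auth \
  --from-literal=auth-token="<your auth token>"
```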
> **Review comment (Contributor, on lines +72 to +85):** I think we need to include this section in the callout as it refers to the values.yml file. If a user skips the callout then they skip straight to "you can reference it from..." without telling them where to put it.

Suggested change (move the closing `:::` below the subsection so it stays inside the callout):

````diff
-:::
-
 #### Auth token from a Kubernetes Secret

 If your auth token is stored in a Kubernetes Secret, you can reference it using `valueFrom`:

 ```yaml
 extraEnvVars:
   - name: LOCALSTACK_AUTH_TOKEN
     valueFrom:
       secretKeyRef:
         name: <name of the secret>
         key: <name of the key in the secret containing the API key>
 ```
+
+:::
````


## Configure

The chart ships with sensible defaults, but most production-ish setups will want a small `values.yaml` to customize behavior.
> **Review comment (Contributor):** I'm not sure the "-ish" is required.

Suggested change:

```diff
-The chart ships with sensible defaults, but most production-ish setups will want a small `values.yaml` to customize behavior.
+The chart ships with sensible defaults, but most production setups will want a small `values.yaml` to customize behavior.
```


### View all default values

```bash
helm show values localstack/localstack
```

### Override values with a custom `values.yaml`

Create a `values.yaml` and apply it during install/upgrade:

```bash
helm upgrade --install localstack localstack/localstack -f values.yaml
```

:::note
Keep the existing **parameters table** in this page (or embed it as a collapsible section).

If you’re migrating from the existing Kubernetes docs page, preserve the parameter names and meaning so users can “diff” old vs new without re-learning.
:::

> **Review comment (Contributor):** Is this line an internal note, or are we keeping it in the docs? If so, I don't understand what it means.

## Verify

### 1) Check the Pod status

```bash
kubectl get pods
```

After a short time, you should see the LocalStack Pod in `Running` status:

```text
NAME                          READY   STATUS    RESTARTS   AGE
localstack-7f78c7d9cd-w4ncw   1/1     Running   0          1m9s
```
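
You can also check the startup logs (assuming the default Deployment name `localstack`):

```bash
kubectl logs deploy/localstack --tail=50
```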

### 2) Optional: Port-forward to access LocalStack from localhost

If you’re running a **local cluster** (for example, k3d) and LocalStack is not exposed externally, port-forward the service:

```bash
kubectl port-forward svc/localstack 4566:4566
```

Now verify connectivity with the AWS CLI:

```bash
aws sts get-caller-identity --endpoint-url "http://localhost:4566"
```

Example response:

```json
{
  "UserId": "AKIAIOSFODNN7EXAMPLE",
  "Account": "000000000000",
  "Arn": "arn:aws:iam::000000000000:root"
}
```
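
As a further smoke test, you can create and list a resource through the forwarded endpoint:

```bash
aws --endpoint-url http://localhost:4566 s3 mb s3://demo-bucket
aws --endpoint-url http://localhost:4566 s3 ls
```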

## Common customizations

### Enable persistence

If you want state to survive Pod restarts, enable PVC-backed persistence:

* Set: `persistence.enabled = true`

Example `values.yaml`:

```yaml
persistence:
  enabled: true
```

:::note
This is especially useful for workflows where you seed resources or rely on state across restarts.
:::
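
After enabling persistence, you can verify that a PersistentVolumeClaim was created and bound (the PVC name depends on the release):

```bash
kubectl get pvc
```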


### Set Pod resource requests and limits

Some environments (notably **EKS on Fargate**) may terminate Pods with low/default resource allocations. Consider setting explicit requests/limits:
> **Review comment (Contributor):** Suggested change:

```diff
-Some environments (notably **EKS on Fargate**) may terminate Pods with low/default resource allocations. Consider setting explicit requests/limits:
+Some environments (notably **EKS on Fargate**) may terminate the LocalStack pod if not configured with reasonable requests/limits:
```


```yaml
resources:
  requests:
    cpu: 1
    memory: 1Gi
  limits:
    cpu: 2
    memory: 2Gi
```

### Add env vars and startup scripts
> **Review comment (Contributor):** Suggested change:

```diff
-### Add env vars and startup scripts
+### Add environment variables and startup scripts
```


You can inject environment variables or run a startup script to:

* pre-configure LocalStack
* seed AWS resources
* tweak LocalStack behavior

Use:

* `extraEnvVars` for environment variables
* `startupScriptContent` for startup scripts

Example pattern:

```yaml
extraEnvVars:
  - name: DEBUG
    value: "1"

startupScriptContent: |
  echo "Starting up..."
  # add your initialization logic here
```

### Install into a different namespace

Use `--namespace` and create it on first install:

```bash
helm install localstack localstack/localstack --namespace localstack --create-namespace
```

Then include the namespace on kubectl commands:

```bash
kubectl get pods -n localstack
```

### Update installation

```bash
helm repo update
helm upgrade localstack localstack/localstack
```

If you use a `values.yaml`:

```bash
helm upgrade localstack localstack/localstack -f values.yaml
```


### Helm chart options

Run:

```bash
helm show values localstack/localstack
```

Keep the parameter tables on this page for quick reference (especially for common settings like persistence, resources, env vars, and service exposure).