diff --git a/doc/modules/ROOT/nav.adoc b/doc/modules/ROOT/nav.adoc
index 5e2270e7..11ff5332 100644
--- a/doc/modules/ROOT/nav.adoc
+++ b/doc/modules/ROOT/nav.adoc
@@ -1,31 +1,33 @@
-* xref:1.primer.adoc[]
-* xref:2.messages.adoc[]
-* xref:sans_io_philosophy.adoc[]
-* xref:http_protocol_basics.adoc[]
-// * xref:header_containers.adoc[]
-* xref:message_bodies.adoc[]
-* Serializing
-* Parsing
-* xref:Message.adoc[]
-* Server
-** xref:server/servers-intro.adoc[Servers]
-** xref:server/route-handlers.adoc[Route Handlers]
-** xref:server/router.adoc[Router]
-** xref:server/routers.adoc[Routers Deep Dive]
-** xref:server/route-patterns.adoc[Route Patterns]
-** xref:server/serve-static.adoc[Serving Static Files]
-** xref:server/serve-index.adoc[Directory Listings]
-** xref:server/bcrypt.adoc[BCrypt Password Hashing]
-// ** xref:server/middleware.adoc[Middleware]
-// ** xref:server/errors.adoc[Error Handling]
-// ** xref:server/params.adoc[Route Parameters]
-// ** xref:server/advanced.adoc[Advanced Topics]
-// ** xref:server/cors.adoc[CORS]
-* Compression
-** xref:compression/zlib.adoc[ZLib]
-** xref:compression/brotli.adoc[Brotli]
-* Design Requirements
-** xref:design_requirements/serializer.adoc[Serializer]
-** xref:design_requirements/parser.adoc[Parser]
-// * xref:reference:boost/http.adoc[Reference]
-* xref:reference.adoc[Reference]
+* xref:2.http-tutorial/2.http-tutorial.adoc[HTTP Tutorial]
+** xref:2.http-tutorial/2a.what-is-http.adoc[What is HTTP]
+** xref:2.http-tutorial/2b.urls-and-resources.adoc[URLs and Resources]
+** xref:2.http-tutorial/2c.message-anatomy.adoc[Message Anatomy]
+** xref:2.http-tutorial/2d.methods.adoc[Methods]
+** xref:2.http-tutorial/2e.status-codes.adoc[Status Codes]
+** xref:2.http-tutorial/2f.headers.adoc[Headers]
+** xref:2.http-tutorial/2g.content-negotiation.adoc[Content Negotiation and Body Encoding]
+** xref:2.http-tutorial/2h.connection-management.adoc[Connection Management]
+** xref:2.http-tutorial/2i.caching.adoc[Caching]
+** xref:2.http-tutorial/2j.authentication.adoc[Authentication and Security]
+** xref:2.http-tutorial/2k.http2.adoc[HTTP/2]
+** xref:2.http-tutorial/2l.http3.adoc[HTTP/3 and QUIC]
+* xref:3.messages/3.messages.adoc[HTTP Messages]
+** xref:3.messages/3a.containers.adoc[Containers]
+** xref:3.messages/3b.serializing.adoc[Serializing]
+** xref:3.messages/3c.parsing.adoc[Parsing]
+* xref:4.servers/4.servers.adoc[HTTP Servers]
+** xref:4.servers/4a.http-worker.adoc[HTTP Worker]
+** xref:4.servers/4b.route-handlers.adoc[Route Handlers]
+** xref:4.servers/4c.routers.adoc[Routers]
+** xref:4.servers/4d.route-patterns.adoc[Route Patterns]
+** xref:4.servers/4e.serve-static.adoc[Serving Static Files]
+** xref:4.servers/4f.serve-index.adoc[Directory Listings]
+** xref:4.servers/4g.bcrypt.adoc[BCrypt]
+* xref:5.compression/5.compression.adoc[Compression]
+** xref:5.compression/5a.zlib.adoc[ZLib]
+** xref:5.compression/5b.brotli.adoc[Brotli]
+* xref:6.design/6.design.adoc[Design]
+** xref:6.design/6a.sans-io.adoc[Sans-I/O Philosophy]
+** xref:6.design/6b.parser.adoc[Parser]
+** xref:6.design/6c.serializer.adoc[Serializer]
+* xref:7.reference/7.reference.adoc[Reference]
diff --git a/doc/modules/ROOT/pages/2.http-tutorial/2.http-tutorial.adoc b/doc/modules/ROOT/pages/2.http-tutorial/2.http-tutorial.adoc
new file mode 100644
index 00000000..5ee62dc6
--- /dev/null
+++ b/doc/modules/ROOT/pages/2.http-tutorial/2.http-tutorial.adoc
@@ -0,0 +1,43 @@
+//
+// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
+//
+// Distributed under the Boost Software License, Version 1.0. (See accompanying
+// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+//
+// Official repository: https://github.com/cppalliance/http
+//
+
+= Introduction to HTTP
+
+Every time you click a link, a small conversation takes place between
+two machines. One asks a question; the other answers it. The language
+they speak is HTTP, and it is the most widely used application protocol
+on Earth. Billions of requests flow through it every hour--web pages,
+images, API calls, video streams--all carried by the same simple
+exchange of text messages that Tim Berners-Lee sketched on a notepad
+in 1990.
+
+You are about to learn that language from the ground up.
+
+We start with the basics: what HTTP actually is, and how clients and
+servers find each other through URLs. From there you will look inside
+the messages themselves--their structure, the methods that give them
+purpose, and the status codes that report what happened. You will see
+how headers quietly orchestrate everything from content types to
+caching policies, and how content negotiation lets a single resource
+serve different representations to different clients.
+
+Then the picture gets more interesting. You will learn how connections
+are opened, reused, and closed--and why getting this right matters more
+than most people realize. Caching will show you how the Web avoids
+doing the same work twice. Authentication will reveal how identity and
+trust are woven into the protocol without breaking its stateless design.
+
+Finally, you will follow HTTP's evolution into its modern forms: the
+binary multiplexing of HTTP/2, and the QUIC-based transport of HTTP/3
+that eliminates decades-old performance bottlenecks at the transport
+layer.
+
+None of this requires prior networking experience. Each section builds
+on the last, and by the end you will read raw HTTP traffic the way a
+mechanic reads an engine--seeing not just what is happening, but _why_.
diff --git a/doc/modules/ROOT/pages/2.http-tutorial/2a.what-is-http.adoc b/doc/modules/ROOT/pages/2.http-tutorial/2a.what-is-http.adoc
new file mode 100644
index 00000000..7c20b905
--- /dev/null
+++ b/doc/modules/ROOT/pages/2.http-tutorial/2a.what-is-http.adoc
@@ -0,0 +1,403 @@
+//
+// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
+//
+// Distributed under the Boost Software License, Version 1.0. (See accompanying
+// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+//
+// Official repository: https://github.com/cppalliance/http
+//
+
+= What is HTTP
+
+Every time you click a link, load a webpage, or submit a form, a
+quiet conversation takes place between your computer and a machine
+somewhere else in the world. The language they speak is HTTP--the
+Hypertext Transfer Protocol. It is the protocol that moves the Web:
+billions of images, pages, videos, and API calls, every single day.
+
+Tim Berners-Lee designed HTTP in 1990-91 at CERN, the European
+particle physics lab in Geneva. The Internet already existed, but
+there was no uniform way to request a document from another machine
+and know what you got back. Was it a picture? A text file? A
+spreadsheet? HTTP solved both problems at once: it gave computers a
+standard way to ask for things _and_ a standard way to describe
+what those things are. That single insight--that retrieving data is
+useless unless you know its type--became the foundation of the World
+Wide Web.
+
+== Clients and Servers
+
+HTTP is a client-server protocol. One program, the _client_, opens a
+connection and sends a request. Another program, the _server_, receives
+the request and sends back a response. Then the cycle can repeat.
+
+The most familiar client is your web browser. When you type
+`http://example.com/index.html` into the address bar, the browser
+connects to the server at `example.com` and asks for the resource
+`/index.html`. The server locates the file, wraps it in a response,
+and sends it back. The browser then renders what it received.
+
+But clients are not just browsers. A command-line tool like `curl`,
+a mobile app fetching JSON from an API, or a search-engine crawler
+indexing pages--these are all HTTP clients. Anything that sends an
+HTTP request qualifies.
+
+Servers, likewise, range from a single laptop running a test server
+to vast clusters of machines behind a load balancer serving millions
+of requests per second. The protocol does not care about scale; the
+conversation is the same.
+
+== URLs: Naming Resources
+
+Before HTTP can fetch something, it needs to know _where_ that
+something is. This is the job of the URL, or Uniform Resource Locator.
+A URL has three essential parts:
+
+[source]
+----
+http://www.example.com:80/docs/tutorial.html
+^^^^ ^^^^^^^^^^^^^^^^ ^^ ^^^^^^^^^^^^^^^^^^^
+scheme host port path
+----
+
+* The **scheme** (`http`) tells the client which protocol to use.
+* The **host** (`www.example.com`) identifies the server, either as a
+ domain name or an IP address.
+* The **port** (`80`) selects the specific service on that host. Port 80
+ is the default for HTTP and is usually omitted.
+* The **path** (`/docs/tutorial.html`) names the resource on the server.
+
+When the port is left out, the client assumes 80 for `http` and 443
+for `https`. This is why you almost never see `:80` in everyday URLs.
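+
+For example, these two URLs name exactly the same resource; the
+explicit `:80` adds nothing:
+
+[source]
+----
+http://www.example.com/index.html
+http://www.example.com:80/index.html
+----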
+
+URLs are powerful because they are universal. The same format works
+for web pages, images, API endpoints, and resources accessed through
+other protocols like FTP (`ftp://files.example.com/report.pdf`). They
+replaced a world where accessing a remote file meant knowing the right
+tool, the right login sequence, and the right series of commands.
+
+== Requests and Responses
+
+An HTTP conversation is a simple exchange: one request, one response.
+Every request asks the server to do something; every response reports
+what happened.
+
+=== The Request
+
+A request begins with a _request line_ that contains three pieces of
+information: the method, the request target, and the HTTP version.
+
+[source]
+----
+GET /index.html HTTP/1.1
+Host: www.example.com
+Accept: text/html
+----
+
+The first line says: "Using HTTP/1.1, please `GET` me the resource at
+`/index.html`." The lines that follow are _headers_--metadata about the
+request. Here, `Host` tells the server which website is being addressed
+(a single server can host many sites), and `Accept` says the client
+prefers HTML.
+
+=== The Response
+
+The server answers with a _status line_ followed by its own headers and,
+optionally, a body:
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: text/html
+Content-Length: 96
+
+<html>
+<head><title>Hello</title></head>
+<body>
+<p>Welcome to the tutorial.</p>
+</body>
+</html>
+----
+
+The status line reports the protocol version, a numeric status code
+(`200`), and a human-readable reason phrase (`OK`). The headers describe
+the payload: its type (`text/html`) and size in bytes (`96`). After a
+blank line, the body carries the actual content.
+
+This request-response pattern is the heartbeat of the Web. Every link
+you click, every image that loads, every API call your app makes follows
+exactly this structure.
+
+== Methods
+
+The _method_ in the request line tells the server what action to
+perform. HTTP defines several methods, but three dominate everyday use:
+
+[cols="1,4"]
+|===
+|Method |Purpose
+
+|**GET**
+|Retrieve a resource. This is what browsers use when you navigate to a
+page or load an image. A GET should never change anything on the server;
+it is purely a read operation.
+
+|**POST**
+|Send data to the server for processing. Form submissions, file uploads,
+and API calls that create new records typically use POST. The data
+travels in the request body.
+
+|**HEAD**
+|Identical to GET, but the server returns only the headers--no body.
+This is useful for checking whether a resource exists or has changed
+without downloading the entire thing.
+|===
+
+Other methods exist (`PUT`, `DELETE`, `PATCH`, `OPTIONS`), and they play
+important roles in RESTful APIs. But GET and POST account for the vast
+majority of traffic on the Web.
+
+The distinction between GET and POST matters. GET requests can be
+bookmarked, cached, and repeated safely. POST requests carry side
+effects--submitting an order twice is not the same as submitting it
+once. This is why browsers warn you before resubmitting a form.
+
+== Status Codes
+
+The three-digit status code in the response is how the server
+communicates outcome. Codes are grouped by their first digit:
+
+[cols="1,3,3"]
+|===
+|Range |Category |Examples
+
+|**1xx**
+|Informational
+|`100 Continue`
+
+|**2xx**
+|Success
+|`200 OK`, `201 Created`, `204 No Content`
+
+|**3xx**
+|Redirection
+|`301 Moved Permanently`, `302 Found`, `304 Not Modified`
+
+|**4xx**
+|Client error
+|`400 Bad Request`, `403 Forbidden`, `404 Not Found`
+
+|**5xx**
+|Server error
+|`500 Internal Server Error`, `503 Service Unavailable`
+|===
+
+A `200` means everything worked. A `404` means the resource does not
+exist--probably the most recognizable error code in the world. A `301`
+tells the client the resource has moved and provides the new URL in a
+`Location` header. A `500` means something went wrong on the server's
+side.
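+
+A redirect, for example, needs no body at all; the `Location` header
+does the work (the addresses here are illustrative):
+
+[source]
+----
+GET /old-page.html HTTP/1.1
+Host: www.example.com
+----
+
+[source]
+----
+HTTP/1.1 301 Moved Permanently
+Location: http://www.example.com/new-page.html
+Content-Length: 0
+----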
+
+The reason phrase (`OK`, `Not Found`) is for humans reading raw
+messages. Software uses the numeric code exclusively. You could replace
+`200 OK` with `200 All Good` and nothing would break.
+
+== Headers
+
+Headers are the metadata layer of HTTP. They appear in both requests
+and responses as `Name: value` pairs, one per line, between the start
+line and the body.
+
+Some headers you will encounter constantly:
+
+**Request headers:**
+
+* `Host` -- the domain name of the server (required in HTTP/1.1)
+* `User-Agent` -- identifies the client software
+* `Accept` -- media types the client is willing to receive
+* `Accept-Language` -- preferred human languages
+* `If-Modified-Since` -- makes the request conditional, asking the
+ server to send the resource only if it changed after a given date
+
+**Response headers:**
+
+* `Content-Type` -- the media type of the body
+* `Content-Length` -- size of the body in bytes
+* `Date` -- when the response was generated
+* `Server` -- identifies the server software
+* `Last-Modified` -- when the resource was last changed
+* `Location` -- the URL to redirect to (used with 3xx status codes)
+* `Cache-Control` -- directives for caching behavior
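+
+Several of these headers work together. A client that already holds a
+cached copy can send the resource's `Last-Modified` date back in
+`If-Modified-Since`; if nothing has changed, the server replies
+`304 Not Modified` and omits the body (the path and dates here are
+illustrative):
+
+[source]
+----
+GET /logo.png HTTP/1.1
+Host: www.example.com
+If-Modified-Since: Mon, 03 Feb 2026 09:15:00 GMT
+----
+
+[source]
+----
+HTTP/1.1 304 Not Modified
+Date: Sat, 07 Feb 2026 12:00:00 GMT
+----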
+
+Headers make HTTP extensible. A client and server can agree on new
+headers without changing the protocol itself. This openness is one of
+the reasons HTTP has survived and thrived for over three decades.
+
+== Content Types
+
+When a server sends a response, how does the client know whether the
+body is HTML, a JPEG image, or a JSON document? The answer is the
+`Content-Type` header, which carries a _media type_ (historically
+called a MIME type).
+
+A media type has two parts separated by a slash: a top-level type and
+a subtype.
+
+[cols="2,3"]
+|===
+|Media Type |Meaning
+
+|`text/html`
+|An HTML document
+
+|`text/plain`
+|Plain text, no formatting
+
+|`image/jpeg`
+|A JPEG photograph
+
+|`image/png`
+|A PNG image
+
+|`application/json`
+|A JSON data structure
+
+|`application/octet-stream`
+|Arbitrary binary data (the catch-all)
+
+|`video/mp4`
+|An MP4 video file
+|===
+
+The top-level type gives the client a general idea even if it does not
+recognize the specific subtype. If the browser receives `image/webp`
+and has no WebP decoder, it at least knows the content is an image and
+can decide what to do--perhaps show a placeholder or offer to
+download the file.
+
+Media types are the reason HTTP can carry anything, not just hypertext.
+A single protocol serves web pages, streams video, delivers fonts, and
+transfers API payloads, all because every response says exactly what it
+contains.
+
+== Statelessness
+
+HTTP is _stateless_: the server does not remember anything about
+previous requests. Each request is independent--the server processes
+it in isolation and forgets about it the moment the response is sent.
+
+This sounds like a limitation, but it is actually a powerful design
+choice. Statelessness means any server in a cluster can handle any
+request. There is no session to maintain, no affinity to preserve.
+Scaling becomes straightforward: add more servers and distribute the
+load.
+
+Of course, real applications need state. A shopping cart must persist
+between page views. A login must be remembered. HTTP solves this
+through _cookies_--small pieces of data that the server sends in a
+`Set-Cookie` header and the client returns in a `Cookie` header on
+subsequent requests. Cookies bolt stateful sessions onto a stateless
+protocol without changing the protocol itself.
+
+== Connections and TCP
+
+HTTP does not define how bytes travel across the network. It delegates
+that responsibility to TCP (Transmission Control Protocol), which
+guarantees reliable, in-order delivery. Before any HTTP message can be
+exchanged, the client and server establish a TCP connection through a
+three-way handshake:
+
+. The client sends a SYN packet.
+. The server responds with SYN-ACK.
+. The client replies with ACK.
+
+Only then can data flow. This handshake adds one full round-trip of
+latency before the first byte of the request is even sent. Between
+New York and London, that round-trip alone takes roughly 56
+milliseconds over fiber.
+
+Early HTTP (1.0) opened a new TCP connection for every single request.
+A page with ten images meant eleven connections: one for the HTML and
+one for each image. The overhead was enormous.
+
+HTTP/1.1 changed this with _persistent connections_. By default, the
+connection stays open after a response, ready for the next request.
+This eliminates repeated handshakes and lets TCP ramp up to full
+throughput. A `Connection: close` header signals that one party is
+done and the connection should be torn down.
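+
+For example, a client that wants only a single resource can ask the
+server to close the connection after responding:
+
+[source]
+----
+GET /status.html HTTP/1.1
+Host: www.example.com
+Connection: close
+----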
+
+HTTP/1.1 also introduced _pipelining_: a client can send several
+requests in a row without waiting for responses. In practice,
+pipelining proved fragile and was rarely used, but the idea of
+eliminating round-trip delays carried forward into HTTP/2's
+multiplexing.
+
+== A Brief History
+
+HTTP has evolved in discrete steps, each addressing the shortcomings
+of the previous version:
+
+**HTTP/0.9 (1991)** -- The original one-line protocol. Only GET was
+supported. No headers, no status codes, no content types. The server
+sent back raw HTML and closed the connection.
+
+**HTTP/1.0 (1996)** -- Added version numbers, headers, status codes,
+and content-type negotiation. For the first time, HTTP could carry
+images, video, and other media--not just hypertext. Each request still
+required its own TCP connection.
+
+**HTTP/1.1 (1997)** -- The workhorse of the modern Web. Persistent
+connections, chunked transfer encoding, the `Host` header for virtual
+hosting, and cache-control directives. HTTP/1.1 corrected architectural
+flaws and introduced the performance optimizations that made the Web
+commercially viable.
+
+**HTTP/2 (2015)** -- A binary framing layer over the same semantics.
+Multiplexed streams eliminate head-of-line blocking at the application
+level. Header compression (`HPACK`) reduces overhead. Server push
+allows preemptive delivery of resources.
+
+**HTTP/3 (2022)** -- Replaces TCP with QUIC, a UDP-based transport
+with built-in encryption and independent stream multiplexing. This
+eliminates head-of-line blocking at the transport level and reduces
+connection setup to a single round-trip.
+
+Despite these changes, the fundamental model has not moved: a client
+sends a request, a server sends a response, and headers describe
+everything. Code written against HTTP/1.1 semantics still works in
+an HTTP/3 world.
+
+== Putting It All Together
+
+Here is a complete HTTP/1.1 exchange, the kind that happens invisibly
+thousands of times as you browse:
+
+[source]
+----
+GET /about.html HTTP/1.1
+Host: www.example.com
+User-Agent: Mozilla/5.0
+Accept: text/html
+
+----
+
+[source]
+----
+HTTP/1.1 200 OK
+Date: Sat, 07 Feb 2026 12:00:00 GMT
+Server: Apache/2.4.54
+Content-Type: text/html
+Content-Length: 81
+
+<html>
+<head><title>About</title></head>
+<body>
+<p>About us.</p>
+</body>
+</html>
+----
+
+The request identifies the resource, the protocol version, and what the
+client is prepared to accept. The response confirms success, describes
+the payload, and delivers the content. Two small text messages, and a
+page appears on screen.
+
+Every layer of the Web--browsers, servers, proxies, caches, CDNs,
+APIs--is built on this exchange. Understanding it gives you a
+foundation for everything that follows.
diff --git a/doc/modules/ROOT/pages/2.http-tutorial/2b.urls-and-resources.adoc b/doc/modules/ROOT/pages/2.http-tutorial/2b.urls-and-resources.adoc
new file mode 100644
index 00000000..73704e54
--- /dev/null
+++ b/doc/modules/ROOT/pages/2.http-tutorial/2b.urls-and-resources.adoc
@@ -0,0 +1,319 @@
+//
+// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
+//
+// Distributed under the Boost Software License, Version 1.0. (See accompanying
+// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+//
+// Official repository: https://github.com/cppalliance/http
+//
+
+= URLs and Resources
+
+Every building in a city has an address. Without addresses, you could
+describe a building by its appearance or its neighborhood, but you could
+never tell a taxi driver exactly where to go. Before URLs existed, finding
+something on the Internet was like that -- you needed to know which
+application to open, which server to connect to, which protocol to speak,
+and which directory to look in. URLs collapsed all of that into a single
+string that anyone could share, bookmark, or click.
+
+This section explains what resources are, how URLs name them, and how
+the pieces of a URL work together to tell an HTTP client exactly what
+to fetch and from where.
+
+== Resources
+
+A resource is anything that can be served over the web. The term is
+deliberately broad. A resource might be a static file sitting on disk -- an
+HTML page, a JPEG photograph, a PDF manual. It might also be a program
+that generates content on demand: a search engine returning results for
+your query, a stock ticker streaming live prices, or an API endpoint
+returning JSON.
+
+What makes something a resource is not its format or its origin, but the
+fact that it can be identified by a name and retrieved by a client. HTTP
+does not care whether the bytes come from a file, a database, a camera
+feed, or a script. As far as the protocol is concerned, a resource is
+whatever the server sends back.
+
+=== Media Types
+
+When a server sends a resource, the client needs to know what kind of data
+it is receiving. A stream of bytes could be an image, a web page, or a
+compressed archive -- the bits alone do not say. HTTP solves this with
+*media types* (also called MIME types), a labeling system borrowed from
+email.
+
+A media type is a two-part string: a primary type and a subtype, separated
+by a slash:
+
+[cols="1a,2a"]
+|===
+|Media Type|Meaning
+
+|`text/html`
+|An HTML document
+
+|`text/plain`
+|Plain text with no formatting
+
+|`image/jpeg`
+|A JPEG photograph
+
+|`image/png`
+|A PNG image
+
+|`application/json`
+|JSON-formatted data
+
+|`application/octet-stream`
+|Arbitrary binary data (the catch-all)
+
+|===
+
+The server communicates the media type in the `Content-Type` header:
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: image/png
+Content-Length: 4096
+
+<...4096 bytes of image data...>
+----
+
+The client reads this header and decides how to handle the body. A browser
+renders `text/html` as a web page, displays `image/jpeg` as a picture, and
+might offer to download `application/octet-stream` as a file. Media types
+are the reason the web can serve every kind of content through a single
+protocol.
+
+== URLs
+
+A Uniform Resource Locator (URL) is the address of a resource on the
+Internet. It tells a client three things at once: _how_ to access the
+resource (the protocol), _where_ the resource lives (the server), and
+_which_ resource to retrieve (the path).
+
+[source]
+----
+http://www.example.com/seasonal/index-fall.html
+----
+
+This single string replaces what used to be a paragraph of instructions:
+"Open your FTP client, connect to this server, log in with these
+credentials, navigate to this directory, switch to binary mode, and
+download this file." A URL encodes all of that context into a compact,
+shareable format.
+
+URLs are a subset of a broader concept called Uniform Resource Identifiers
+(URIs). The HTTP specification uses the term URI, but in practice nearly
+every URI you encounter is a URL. The distinction matters mainly in
+specifications; for day-to-day work with HTTP, the two terms are
+interchangeable.
+
+=== Anatomy of a URL
+
+A URL can contain up to eight components. Most URLs use only a few of them,
+but the full general form is:
+
+[source]
+----
+scheme://user:password@host:port/path?query#fragment
+----
+
+The three most important parts are the *scheme*, the *host*, and the
+*path*. Here is how they break down for a typical HTTP URL:
+
+[source]
+----
+ http://www.example.com:8080/tools/hammers?color=blue&sort=price#reviews
+ \__/ \______________/\__/\____________/ \____________________/\_____/
+scheme host port path query fragment
+----
+
+[cols="1a,3a"]
+|===
+|Component|Description
+
+|*scheme*
+|The protocol to use. For web traffic this is `http` or `https`. The scheme
+ends at the first `:` character and is case-insensitive.
+
+|*host*
+|The server's address -- either a domain name like `www.example.com` or an
+IP address like `192.168.1.1`. This is where the client will open a
+connection.
+
+|*port*
+|The TCP port on the server. If omitted, the default for the scheme is used
+(80 for `http`, 443 for `https`).
+
+|*path*
+|The specific resource on the server, structured like a filesystem path.
+Each segment is separated by `/`.
+
+|*query*
+|Additional parameters passed to the server, introduced by `?`. Query
+strings are typically formatted as `name=value` pairs separated by `&`.
+
+|*fragment*
+|A reference to a specific section _within_ the resource, introduced by
+`#`. Fragments are used only by the client -- they are never sent to the
+server.
+
+|===
+
+=== Schemes
+
+The scheme is the first thing a client reads. It determines which protocol
+to use for retrieving the resource. Although HTTP and HTTPS dominate the
+web, URLs support many schemes:
+
+[cols="1a,3a"]
+|===
+|Scheme|Example
+
+|`http`
+|`\http://www.example.com/index.html` -- standard web traffic, port 80
+
+|`https`
+|`\https://www.example.com/secure` -- HTTP over TLS, port 443
+
+|`ftp`
+|`ftp://ftp.example.com/pub/readme.txt` -- file transfer
+
+|`mailto`
+|`mailto:user@example.com` -- email address
+
+|`file`
+|`file:///home/user/notes.txt` -- local filesystem
+
+|===
+
+For HTTP programming, you will work almost exclusively with `http` and
+`https`. The scheme tells your code whether to open a plain TCP connection
+or negotiate a TLS handshake before sending the first request.
+
+=== The Request-Target
+
+When a client sends an HTTP request, the URL does not appear in the
+message exactly as you see it in a browser's address bar. The scheme and
+host are stripped away, and only the *request-target* is placed on the
+request line. For most requests, the request-target is the path plus any
+query string:
+
+[source]
+----
+GET /tools/hammers?color=blue HTTP/1.1
+Host: www.example.com
+----
+
+The host is conveyed separately in the `Host` header. This split exists
+because a single server can host many domain names (virtual hosting), and
+the request-target alone would not identify which site the client wants.
+
+For requests sent through a proxy, the full URL (called the *absolute
+form*) may appear on the request line instead:
+
+[source]
+----
+GET http://www.example.com/tools/hammers HTTP/1.1
+----
+
+Understanding the request-target matters because when you build or parse
+HTTP messages, you are working with this extracted piece of the URL, not
+the full address.
+
+== Percent-Encoding
+
+URLs were designed to be transmitted safely across every protocol on the
+Internet, so they are restricted to a small set of characters: letters,
+digits, and a handful of punctuation marks. Any character outside this safe
+set must be *percent-encoded* -- replaced with a `%` sign followed by two
+hexadecimal digits representing the character's byte value.
+
+[cols="1a,1a,1a"]
+|===
+|Character|ASCII Code|Encoded Form
+
+|space
+|32 (0x20)
+|`%20`
+
+|`#`
+|35 (0x23)
+|`%23`
+
+|`%`
+|37 (0x25)
+|`%25`
+
+|`/`
+|47 (0x2F)
+|`%2F`
+
+|`?`
+|63 (0x3F)
+|`%3F`
+
+|===
+
+For example, a search query containing spaces and special characters:
+
+[source]
+----
+GET /search?q=hello%20world%21 HTTP/1.1
+Host: www.example.com
+----
+
+Here `%20` represents a space and `%21` represents an exclamation mark.
+
+Several characters have reserved meanings inside a URL -- `/` separates
+path segments, `?` introduces the query, `#` marks a fragment, and `:`
+separates the scheme. If you need these characters to appear as literal
+data (for instance, a filename that contains a question mark), you must
+percent-encode them. Conversely, encoding characters that are already safe
+is technically allowed but can cause interoperability problems, so it is
+best avoided.
+
+Applications should encode unsafe characters before transmitting a URL and
+decode them when processing one. Getting this wrong is a common source of
+bugs: double-encoding a URL that is already encoded, or failing to encode
+a user-supplied value before inserting it into a path or query string.
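+
+The double-encoding mistake is easy to recognize: the `%` of an
+already-encoded sequence gets encoded again as `%25`:
+
+[source]
+----
+Original value:  hello world
+Encoded once:    hello%20world
+Encoded twice:   hello%2520world   (wrong)
+----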
+
+== Relative URLs
+
+Not every URL needs to spell out the scheme and host. A *relative URL*
+is a shorthand that omits the parts which can be inferred from context.
+If you are already viewing a page at
+`\http://www.example.com/tools/index.html`, a link to `./hammers.html` is
+understood to mean `\http://www.example.com/tools/hammers.html`.
+
+The URL from which missing parts are inherited is called the *base URL*.
+It is usually the URL of the document that contains the link:
+
+[source]
+----
+Base URL: http://www.example.com/tools/index.html
+Relative URL: ./hammers.html
+Resolved URL: http://www.example.com/tools/hammers.html
+----
+
+Relative URLs make content portable. A set of HTML pages that link to
+each other with relative paths can be moved to a different server or a
+different directory without breaking any links, because the references
+adjust automatically to the new base.
+
+In HTTP messages, the request-target is already relative to the server,
+so the concept shows up naturally. When your code constructs a request,
+it uses the path portion of a URL -- which is itself a relative reference
+resolved against the connection's host.
+
+== Next Steps
+
+You now know what resources are, how URLs name them, and how the pieces
+of a URL map onto an HTTP request. The next section breaks open the
+messages themselves:
+
+* xref:2.http-tutorial/2c.message-anatomy.adoc[Message Anatomy] -- start lines, headers, and bodies
diff --git a/doc/modules/ROOT/pages/2.http-tutorial/2c.message-anatomy.adoc b/doc/modules/ROOT/pages/2.http-tutorial/2c.message-anatomy.adoc
new file mode 100644
index 00000000..cb757951
--- /dev/null
+++ b/doc/modules/ROOT/pages/2.http-tutorial/2c.message-anatomy.adoc
@@ -0,0 +1,376 @@
+//
+// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
+//
+// Distributed under the Boost Software License, Version 1.0. (See accompanying
+// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+//
+// Official repository: https://github.com/cppalliance/http
+//
+
+= Message Anatomy
+
+If HTTP is the language that moves the Web, then HTTP messages are the
+sentences. Every image that loads, every form that submits, every API
+call that returns JSON--each one is carried inside a small, precisely
+formatted block of text that both sides of the conversation agree to
+understand.
+
+The beauty of HTTP messages is that they are _plain text_. You can
+read them with your own eyes, type them by hand, and debug them with
+nothing more than a terminal. This transparency is not an accident; it
+is a deliberate design choice that made the early Web possible for
+anyone with a text editor and a network socket. Binary protocols may
+be more compact, but HTTP's readability is the reason it was adopted
+so fast, and the reason you can learn its structure in a single
+sitting.
+
+== The Three Parts
+
+Every HTTP message--request or response--consists of exactly three
+parts, always in the same order:
+
+[source]
+----
+start-line CRLF
+header-field CRLF
+header-field CRLF
+...
+CRLF
+[ message-body ]
+----
+
+. The **start line** says what the message is about. For a request,
+ it identifies the action and the target. For a response, it reports
+ what happened.
+. The **header fields** carry metadata: the type and size of the body,
+ caching instructions, authentication credentials, and anything else
+ the sender wants the receiver to know.
+. The **body** (optional) carries the actual payload--an HTML page,
+ a JSON document, an image, or nothing at all.
+
+The start line and headers are ASCII text, one item per line, each
+terminated by a carriage return followed by a line feed (written
+`CRLF`, ASCII 13 then ASCII 10). After the last header, a blank
+line--a bare `CRLF` with nothing before it--marks the boundary
+between metadata and data. Everything after that blank line is the
+body.
+
+This structure never changes. Every HTTP/1.x message you will ever
+encounter follows it. Once you can identify these three parts, you can
+read any HTTP exchange.
+
+== Request Messages
+
+A request message begins with a _request line_ that contains three
+fields separated by spaces:
+
+[source]
+----
+method SP request-target SP HTTP-version CRLF
+----
+
+Here is a concrete example:
+
+[source]
+----
+GET /docs/tutorial.html HTTP/1.1
+Host: www.example.com
+Accept: text/html
+User-Agent: curl/8.4.0
+
+----
+
+The request line `GET /docs/tutorial.html HTTP/1.1` says three things
+at once:
+
+* **GET** is the method--the action the client wants the server to
+ perform. Methods are covered in detail in a later section.
+* **/docs/tutorial.html** is the request target--the resource being
+ addressed, usually the path component of the URL.
+* **HTTP/1.1** is the protocol version, telling the server which
+ dialect of HTTP the client speaks.
+
+After the request line come the headers (`Host`, `Accept`,
+`User-Agent`), and after the blank line comes the body. This
+particular request has no body because GET asks for data rather than
+sending it.
+
+A POST request, by contrast, typically carries a body:
+
+[source]
+----
+POST /api/login HTTP/1.1
+Host: www.example.com
+Content-Type: application/json
+Content-Length: 41
+
+{"username":"alice","password":"s3cret!"}
+----
+
+Here `Content-Type` tells the server the body is JSON, and
+`Content-Length` tells it the body is exactly 41 bytes long. The
+body begins immediately after the blank line.
+
+== Response Messages
+
+A response message begins with a _status line_ instead of a request
+line, but the rest of the structure is identical:
+
+[source]
+----
+HTTP-version SP status-code SP reason-phrase CRLF
+----
+
+For example:
+
+[source]
+----
+HTTP/1.1 200 OK
+Date: Sat, 07 Feb 2026 12:00:00 GMT
+Content-Type: text/html
+Content-Length: 96
+
+<html>
+<head><title>Hello</title></head>
+<body>
+<p>Welcome to the tutorial.</p>
+</body>
+</html>
+----
+
+The status line `HTTP/1.1 200 OK` reports:
+
+* **HTTP/1.1** -- the protocol version the server is using.
+* **200** -- a numeric status code indicating success. Codes are
+ grouped by their first digit: 2xx means success, 3xx means
+ redirection, 4xx means client error, 5xx means server error.
+ Status codes are covered in their own section.
+* **OK** -- a human-readable reason phrase. Software ignores this;
+ you could replace `OK` with `All Good` and nothing would break.
+ The phrase exists solely to help humans scanning raw traffic.
+
+After the status line come headers, the blank line, and the body
+carrying the HTML document.
+
+== On the Wire
+
+Understanding the physical format matters when debugging. Here is
+what the request from the earlier example looks like as raw bytes
+flowing across the network, with invisible characters made visible:
+
+[source]
+----
+G E T / d o c s / t u t o r i a l . h t m l H T T P / 1 . 1 CR LF
+H o s t : w w w . e x a m p l e . c o m CR LF
+A c c e p t : t e x t / h t m l CR LF
+CR LF
+----
+
+Every line, including the start line, ends with `CR LF` (bytes `0x0D
+0x0A`). The blank line after the headers is just `CR LF` by itself--no
+characters before it. That bare `CRLF` is what separates the header
+section from the body. It is so important that the HTTP specification
+requires it even when the message has no body and no headers at all.
+
+A few practical details worth knowing:
+
+* **Whitespace in headers.** A header field is a name, a colon, optional
+ whitespace, and a value: `Content-Type: text/html`. Leading and
+ trailing whitespace around the value is stripped by the parser.
+* **Case insensitivity.** Header field names are case-insensitive.
+ `Content-Type`, `content-type`, and `CONTENT-TYPE` all mean the
+ same thing.
+* **One header per line.** Older specifications allowed "folding" a
+ long header across multiple lines by starting continuation lines
+ with whitespace, but HTTP/1.1 (RFC 9112) has deprecated this
+ practice. Modern implementations reject folded headers.
+* **Robustness.** The specification says `CRLF`, but many real-world
+ implementations also accept a bare `LF`. Robust parsers tolerate
+ this, although strict parsers reject it.
+
+== Header Fields
+
+Header fields are the metadata layer of HTTP. They appear between the
+start line and the body as `Name: value` pairs, one per line.
+
+[source]
+----
+Content-Type: text/html; charset=utf-8
+Content-Length: 4821
+Cache-Control: max-age=3600
+----
+
+Headers serve many roles. Some describe the body (`Content-Type`,
+`Content-Length`). Some control caching (`Cache-Control`, `ETag`).
+Some carry authentication tokens (`Authorization`). Some influence
+connection behavior (`Connection`, `Transfer-Encoding`). The protocol
+is extensible--any sender can introduce new header names, and
+receivers that do not recognize them simply ignore them, and
+intermediaries pass them along unchanged.
+
+Headers are classified into broad categories:
+
+* **Request headers** supply information about the request or the
+ client: `Host`, `User-Agent`, `Accept`, `Authorization`.
+* **Response headers** supply information about the response or the
+ server: `Server`, `Retry-After`, `WWW-Authenticate`.
+* **Representation headers** describe the body: `Content-Type`,
+ `Content-Length`, `Content-Encoding`, `Content-Language`.
+* **General headers** apply to the message as a whole rather than to
+ the body: `Date`, `Connection`, `Transfer-Encoding`, `Via`.
+
+A deeper exploration of individual headers appears in a later section.
+What matters here is the _structure_: every header is a name-value
+pair on its own line, headers can appear in any order, the same name
+can appear more than once, and the entire header block ends with a
+blank line.
+
+== The Message Body
+
+The body is the payload. It is everything that comes after the blank
+line separating headers from data. It can contain HTML, JSON, XML,
+images, video, compressed archives, or nothing at all.
+
+Not every message has a body. Responses to HEAD requests never
+include one. Responses with status codes 1xx, 204, and 304 never
+include one. GET requests rarely carry a body, though the protocol
+does not forbid it.
+
+When a body _is_ present, the receiver needs to know how large it is.
+HTTP provides three mechanisms:
+
+**Content-Length.** The simplest case. The sender states the exact
+number of bytes:
+
+[source]
+----
+Content-Length: 4821
+----
+
+The receiver reads exactly 4821 bytes after the blank line and knows
+the body is complete.
+
+**Chunked transfer encoding.** When the sender does not know the
+total size in advance--for example, when generating content
+dynamically--it can send the body in chunks. Each chunk is preceded
+by its size in hexadecimal, and the stream ends with a zero-length
+chunk:
+
+[source]
+----
+Transfer-Encoding: chunked
+
+18
+This is the first chunk.
+f
+Second chunk!!!
+0
+
+----
+
+The receiver reads each chunk size, consumes that many bytes, and
+repeats until it sees `0`. This mechanism lets servers begin
+transmitting before the entire response is generated.
+
+**Connection close.** A server may simply close the connection when
+the body is finished. This only works for responses (a client cannot
+close and still expect a reply) and only when neither `Content-Length`
+nor chunked encoding is in use. It is the least desirable method
+because it prevents connection reuse.
+
+== Requests vs. Responses
+
+Requests and responses share the same three-part structure. The only
+structural difference is the start line:
+
+[cols="1,2,2"]
+|===
+|Part |Request |Response
+
+|**Start line**
+|`GET /index.html HTTP/1.1`
+|`HTTP/1.1 200 OK`
+
+|**Headers**
+|`Host: example.com` +
+`Accept: text/html`
+|`Content-Type: text/html` +
+`Content-Length: 512`
+
+|**Body**
+|_(often empty for GET)_
+|`...`
+|===
+
+This symmetry is intentional. Because both directions use the same
+header syntax and the same body framing rules, a single parser can
+handle either direction with only the start-line logic swapped out.
+
+== Message Flow
+
+HTTP messages travel in one direction: _downstream_. Every sender is
+upstream of the receiver, regardless of whether the message is a
+request or a response. When a client sends a request through two
+proxies to an origin server, the request flows downstream from client
+to server. When the server sends the response back through those same
+proxies, the response also flows downstream--this time from server to
+client.
+
+The terms _inbound_ and _outbound_ describe the two legs of the
+journey. Messages travel inbound toward the origin server and outbound
+back to the client:
+
+[source]
+----
+Client ──► Proxy A ──► Proxy B ──► Server (inbound)
+Client ◄── Proxy A ◄── Proxy B ◄── Server (outbound)
+----
+
+This distinction matters when proxies modify or inspect messages in
+transit. A proxy that adds a `Via` header, for instance, annotates the
+message as it passes downstream in either direction.
+
+== A Complete Exchange
+
+Putting it all together, here is an annotated HTTP/1.1 exchange--the
+kind that happens thousands of times while you browse a single page:
+
+[source]
+----
+GET /about.html HTTP/1.1 <1>
+Host: www.example.com <2>
+User-Agent: Mozilla/5.0 <3>
+Accept: text/html <4>
+ <5>
+----
+
+<1> Request line: method, target, version
+<2> Required in HTTP/1.1--identifies which site on the server
+<3> Identifies the client software
+<4> Tells the server what the client prefers to receive
+<5> Blank line--end of headers, no body follows
+
+[source]
+----
+HTTP/1.1 200 OK <1>
+Date: Sat, 07 Feb 2026 12:00:00 GMT <2>
+Server: Apache/2.4.54 <3>
+Content-Type: text/html <4>
+Content-Length: 81 <5>
+ <6>
+<html> <7>
+<head><title>About</title></head>
+<body>
+<p>About us.</p>
+</body>
+</html>
+----
+
+<1> Status line: version, code, reason
+<2> When the response was generated
+<3> Server software
+<4> The body is HTML
+<5> The body is exactly 81 bytes
+<6> Blank line--end of headers, body follows
+<7> The body itself
+
+Two small text messages, a handful of headers each, and a page appears
+on screen. Every layer of the Web--browsers, servers, proxies, caches,
+CDNs, APIs--is built on this exchange. The format has not changed in
+any fundamental way since 1997. Code that can parse these three parts
+correctly can participate in the largest distributed system ever built.
diff --git a/doc/modules/ROOT/pages/2.http-tutorial/2d.methods.adoc b/doc/modules/ROOT/pages/2.http-tutorial/2d.methods.adoc
new file mode 100644
index 00000000..717ce148
--- /dev/null
+++ b/doc/modules/ROOT/pages/2.http-tutorial/2d.methods.adoc
@@ -0,0 +1,427 @@
+//
+// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
+//
+// Distributed under the Boost Software License, Version 1.0. (See accompanying
+// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+//
+// Official repository: https://github.com/cppalliance/http
+//
+
+= Methods
+
+The method is the verb at the beginning of every HTTP request. It tells
+the server what the client wants done. When your browser loads a page,
+it sends `GET`. When you submit a login form, it sends `POST`. When a
+deployment script uploads a new version of a file, it sends `PUT`. The
+rest of the message--headers, body, URL--supplies the nouns and
+adjectives, but the method is the action word that drives the entire
+conversation.
+
+HTTP defines a small set of standard methods, each with precise
+semantics that the rest of the Web's infrastructure relies on. Caches
+decide what to store based on the method. Proxies decide what to retry
+based on the method. Browsers decide whether to warn you about
+resubmitting a form based on the method. Understanding what each method
+means--and two critical properties called _safety_ and
+_idempotency_--is the key to understanding why HTTP works the way it
+does.
+
+== GET
+
+GET is the workhorse of the Web. It asks the server to return a
+representation of the resource identified by the request target. Every
+link you click, every image that loads, every stylesheet and script
+your browser fetches--all of them use GET.
+
+[source]
+----
+GET /docs/tutorial.html HTTP/1.1
+Host: www.example.com
+Accept: text/html
+----
+
+The server responds with the resource:
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: text/html
+Content-Length: 4821
+
+<html>
+<head><title>Tutorial</title></head>
+<body>
+...
+</body>
+</html>
+----
+
+A GET request should not change anything on the server. Loading a page
+ten times should produce the same result as loading it once, and the
+server's state should be identical afterward. This property makes GET
+requests safe to bookmark, cache, prefetch, and retry on failure. It
+is the foundation that allows search engines to crawl the Web without
+accidentally placing orders or deleting files.
+
+GET requests normally carry no body. All the information the server
+needs is in the URL and the headers.
+
+== HEAD
+
+HEAD is identical to GET, with one difference: the server returns only
+the status line and headers--no body. The response headers should
+match what a GET to the same resource would produce.
+
+[source]
+----
+HEAD /docs/tutorial.html HTTP/1.1
+Host: www.example.com
+----
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: text/html
+Content-Length: 4821
+Last-Modified: Mon, 03 Feb 2026 09:15:00 GMT
+----
+
+This is useful in several situations:
+
+* **Checking existence.** A `200` means the resource exists; a `404`
+ means it does not. You learn this without downloading the content.
+* **Checking freshness.** By comparing the `Last-Modified` or `ETag`
+ header against a cached copy, a client can decide whether to
+ download the full resource.
+* **Discovering size.** The `Content-Length` header reveals the body
+ size before committing to the transfer, useful for progress bars or
+ deciding whether a download is worth the bandwidth.
+
+HEAD is required by the HTTP/1.1 specification. Every server that
+implements GET for a resource must also support HEAD.
+
+== POST
+
+POST sends data to the server for processing. Unlike GET, which only
+retrieves, POST is designed to cause something to happen: submit a
+form, upload a file, create a new record, trigger a computation.
+
+[source]
+----
+POST /api/orders HTTP/1.1
+Host: www.example.com
+Content-Type: application/json
+Content-Length: 58
+
+{"item":"widget","quantity":3,"shipping":"express"}
+----
+
+[source]
+----
+HTTP/1.1 201 Created
+Location: /api/orders/7742
+Content-Type: application/json
+
+{"id":7742,"status":"confirmed"}
+----
+
+The data travels in the request body, and the `Content-Type` header
+tells the server how it is encoded. Form submissions typically use
+`application/x-www-form-urlencoded`; API calls commonly use
+`application/json`; file uploads use `multipart/form-data`.
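+
+A traditional HTML form submission, for example, sends its fields as
+URL-encoded pairs in the body:
+
+[source]
+----
+POST /login HTTP/1.1
+Host: www.example.com
+Content-Type: application/x-www-form-urlencoded
+Content-Length: 33
+
+username=alice&password=s3cret%21
+----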
+
+POST is neither safe nor idempotent. Submitting the same order twice
+may create two orders. This is why browsers show a warning dialog when
+you try to refresh a page that was the result of a POST: "Are you sure
+you want to resubmit the form?" The browser cannot know whether
+sending the request again is harmless or catastrophic, so it asks.
+
+POST is also the most flexible method. When no other method fits--when
+the action does not map neatly to "retrieve," "replace," or
+"delete"--POST is the general-purpose fallback. Running a search that
+requires a large query body, triggering a batch job, sending a
+message--all of these are reasonable uses of POST.
+
+== PUT
+
+PUT asks the server to create or replace the resource at the given URL
+with the contents of the request body. If the resource already exists,
+PUT replaces it entirely. If it does not exist, PUT creates it.
+
+[source]
+----
+PUT /docs/readme.txt HTTP/1.1
+Host: www.example.com
+Content-Type: text/plain
+Content-Length: 34
+
+Updated product list coming soon!
+----
+
+If the resource is new, the server responds with `201 Created`. If it
+replaced an existing resource, the server responds with `200 OK` or
+`204 No Content`.
+
+The critical distinction between PUT and POST is that PUT is
+_idempotent_: sending the same PUT request ten times has exactly the
+same effect as sending it once. The final state of the resource is
+identical regardless of how many times the request is repeated. POST
+makes no such guarantee. This difference matters in unreliable
+networks: if a PUT request times out and the client does not know
+whether it succeeded, it can safely retry. A POST cannot be retried
+as safely.
+
+PUT replaces the _entire_ resource. If you PUT a document with three
+fields to a URL that previously held a document with five fields, the
+resource now has three fields. The other two are gone. For partial
+updates, use PATCH.
+
+== DELETE
+
+DELETE asks the server to remove the resource identified by the
+request target.
+
+[source]
+----
+DELETE /api/orders/7742 HTTP/1.1
+Host: www.example.com
+----
+
+[source]
+----
+HTTP/1.1 204 No Content
+----
+
+A `204 No Content` response indicates that the server successfully
+processed the deletion and has nothing more to say. A `200 OK` with a
+body is also valid if the server wants to return a confirmation message.
+
+Like PUT, DELETE is idempotent. Deleting a resource that has already
+been deleted should not produce an error in principle--the end state
+(the resource is gone) is the same. In practice, many servers return
+`404 Not Found` on repeated deletes, which is technically acceptable
+since idempotency refers to the server's state, not the response code.
+
+DELETE requests do not typically carry a body, though the protocol does
+not forbid one.
+
+== PATCH
+
+PATCH applies a partial modification to a resource. Where PUT replaces
+the whole thing, PATCH changes only the parts you specify.
+
+[source]
+----
+PATCH /api/users/42 HTTP/1.1
+Host: www.example.com
+Content-Type: application/json
+Content-Length: 27
+
+{"email":"new@example.com"}
+----
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: application/json
+
+{"id":42,"name":"Alice","email":"new@example.com"}
+----
+
+PATCH is neither safe nor idempotent in the general case. Whether it
+is idempotent depends on the patch format: setting a field to a
+specific value is idempotent; incrementing a counter is not. The
+protocol leaves this to the application.
+
+PATCH was not part of the original HTTP specification. It was added
+later in RFC 5789 because the lack of a partial-update method forced
+developers into awkward workarounds: either sending the entire resource
+via PUT every time a single field changed, or overloading POST for
+everything.
+
+== OPTIONS
+
+OPTIONS asks the server what communication capabilities are available
+for a given resource, or for the server as a whole.
+
+[source]
+----
+OPTIONS /api/orders HTTP/1.1
+Host: www.example.com
+----
+
+[source]
+----
+HTTP/1.1 200 OK
+Allow: GET, POST, OPTIONS
+----
+
+The `Allow` header in the response lists the methods the server
+supports for that resource. An `OPTIONS *` request (using an asterisk
+as the request target) asks about the server's general capabilities
+rather than a specific resource.
+
+OPTIONS is most visible in the context of CORS (Cross-Origin Resource
+Sharing). When a web page at `app.example.com` tries to call an API
+at `api.example.com`, the browser first sends an OPTIONS request--a
+_preflight_--to ask the API server whether cross-origin requests are
+permitted. The API server responds with headers like
+`Access-Control-Allow-Origin` and `Access-Control-Allow-Methods`. Only
+if the preflight succeeds does the browser send the actual request.
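+
+A preflight exchange, in simplified form, looks like this (the origin
+and the allowed methods shown are illustrative):
+
+[source]
+----
+OPTIONS /api/orders HTTP/1.1
+Host: api.example.com
+Origin: https://app.example.com
+Access-Control-Request-Method: POST
+----
+
+[source]
+----
+HTTP/1.1 204 No Content
+Access-Control-Allow-Origin: https://app.example.com
+Access-Control-Allow-Methods: GET, POST, OPTIONS
+----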
+
+== TRACE and CONNECT
+
+Two additional methods appear in the specification but serve
+specialized purposes:
+
+**TRACE** asks the server to echo back the request it received,
+allowing the client to see what intermediaries (proxies, gateways)
+may have modified along the way. TRACE is a diagnostic tool. It
+carries no body, and the response body contains the exact request
+message the server received. Most servers disable TRACE in production
+because it can expose sensitive header information.
+
+**CONNECT** asks an intermediary (usually an HTTP proxy) to establish
+a TCP tunnel to a destination server. This is how HTTPS traffic passes
+through HTTP proxies: the client sends `CONNECT api.example.com:443`,
+the proxy opens a raw TCP connection to that address, and from that
+point forward the proxy blindly relays bytes in both directions. The
+client and destination server then perform a TLS handshake through the
+tunnel.
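+
+The tunnel request itself is short. A client behind a proxy might send
+the following (the success reason phrase varies by proxy):
+
+[source]
+----
+CONNECT api.example.com:443 HTTP/1.1
+Host: api.example.com:443
+----
+
+[source]
+----
+HTTP/1.1 200 Connection Established
+----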
+
+Neither method appears in everyday application code, but both are
+important pieces of HTTP's infrastructure.
+
+== Safe Methods
+
+A method is _safe_ if it does not change the state of the server. A
+safe request is an observation: it looks at something but does not
+alter it. The safe methods are:
+
+* **GET**
+* **HEAD**
+* **OPTIONS**
+* **TRACE**
+
+Safety is a promise from the protocol to the user. When you click a
+link, the browser uses GET because the protocol guarantees that
+following a link will never, by itself, cause a purchase, a deletion,
+or any other side effect. This is why search engines can crawl every
+link they find without fear of triggering dangerous actions.
+
+The promise is semantic, not absolute. A server _could_ delete a
+database row every time it receives a GET request, but it would be
+violating the protocol's contract. Intermediaries, caches, and clients
+all assume safe methods are safe and make optimization decisions
+accordingly. A server that breaks this assumption will encounter
+unpredictable behavior.
+
+== Idempotent Methods
+
+A method is _idempotent_ if making the same request multiple times
+produces the same result as making it once. The server's state after
+N identical requests is the same as after one. The idempotent methods
+are:
+
+* **GET**
+* **HEAD**
+* **PUT**
+* **DELETE**
+* **OPTIONS**
+* **TRACE**
+
+All safe methods are automatically idempotent--reading something
+twice does not change anything. PUT and DELETE are also idempotent:
+uploading the same file twice leaves one copy; deleting the same
+resource twice leaves it deleted.
+
+POST is the notable exception. Submitting the same order form twice
+creates two orders. Sending the same message twice delivers it twice.
+This is why idempotency matters: when a network error occurs after a
+request is sent but before the response arrives, the client does not
+know whether the request succeeded. For idempotent methods, the safe
+choice is to retry. For POST, the client must use other mechanisms
+(confirmation pages, unique transaction tokens) to avoid duplication.
+
+== Method Properties at a Glance
+
+[cols="1,1,1,1"]
+|===
+|Method |Safe |Idempotent |Request Body
+
+|**GET**
+|Yes
+|Yes
+|No
+
+|**HEAD**
+|Yes
+|Yes
+|No
+
+|**POST**
+|No
+|No
+|Yes
+
+|**PUT**
+|No
+|Yes
+|Yes
+
+|**DELETE**
+|No
+|Yes
+|Optional
+
+|**PATCH**
+|No
+|No
+|Yes
+
+|**OPTIONS**
+|Yes
+|Yes
+|Optional
+
+|**TRACE**
+|Yes
+|Yes
+|No
+
+|**CONNECT**
+|No
+|No
+|No
+|===
+
+This table is worth memorizing. Caches, proxies, and retry logic
+throughout the Web's infrastructure depend on these properties. A
+cache knows it can store GET responses. A proxy knows it can retry a
+PUT after a connection failure. A browser knows it must warn before
+resending a POST. Every one of these decisions flows from the method's
+safety and idempotency.
+
+== Extension Methods
+
+HTTP is extensible. The specification defines the methods above, but
+servers are free to implement additional methods. The WebDAV protocol,
+for example, adds `LOCK`, `UNLOCK`, `MKCOL`, `COPY`, and `MOVE` for
+remote file management. Application frameworks sometimes define custom
+methods for specialized operations.
+
+A server that receives a method it does not recognize should respond
+with `501 Not Implemented`. A server that recognizes a method but does
+not allow it for a particular resource should respond with
+`405 Method Not Allowed` and include an `Allow` header listing the
+methods that _are_ permitted.
+
+Proxies that encounter an unknown method should relay the request
+downstream if possible, following the principle: be conservative in
+what you send, be liberal in what you accept.
+
+== Next Steps
+
+You now know what each HTTP method means, when to use it, and the
+safety and idempotency guarantees that make the Web's infrastructure
+possible. The next section covers the other half of the conversation:
+
+* xref:2.http-tutorial/2e.status-codes.adoc[Status Codes] -- how the server reports what happened
diff --git a/doc/modules/ROOT/pages/2.http-tutorial/2e.status-codes.adoc b/doc/modules/ROOT/pages/2.http-tutorial/2e.status-codes.adoc
new file mode 100644
index 00000000..415945e6
--- /dev/null
+++ b/doc/modules/ROOT/pages/2.http-tutorial/2e.status-codes.adoc
@@ -0,0 +1,558 @@
+//
+// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
+//
+// Distributed under the Boost Software License, Version 1.0. (See accompanying
+// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+//
+// Official repository: https://github.com/cppalliance/http
+//
+
+= Status Codes
+
+When you send someone a letter, you eventually learn what happened.
+The letter arrived and they wrote back. The letter was returned
+because they moved. The address did not exist. The post office was
+closed. In every case, the outcome falls into one of a small number
+of categories, and you adjust your next step accordingly.
+
+HTTP works the same way. Every response begins with a three-digit
+number--the _status code_--that tells the client exactly what
+happened. Did the request succeed? Did the resource move somewhere
+else? Did the client make a mistake? Did the server break? The
+status code answers all of these questions before the client reads a
+single byte of the body. It is the first thing a program checks,
+and often the only thing it needs.
+
+== The Status Line
+
+The status code lives in the first line of every response, called
+the _status line_:
+
+[source]
+----
+HTTP/1.1 200 OK
+----
+
+Three fields, separated by spaces:
+
+* **HTTP/1.1** -- the protocol version.
+* **200** -- the status code, a three-digit integer.
+* **OK** -- the _reason phrase_, a short human-readable description.
+
+Software makes decisions based on the numeric code alone. The
+reason phrase is entirely cosmetic. You could replace `OK` with
+`Everything Is Fine` or even leave it blank, and the protocol
+would still work. The phrase exists so that a human reading raw
+traffic with a packet sniffer or terminal can understand what
+happened at a glance.
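+
+For illustration, here is a minimal sketch of splitting a status line
+into its three fields. It is not this library's parser; a real one
+also validates the version and rejects non-digit codes.
+
+[source,cpp]
+----
+#include <iostream>
+#include <optional>
+#include <string>
+#include <string_view>
+
+struct status_line
+{
+    std::string version;   // "HTTP/1.1"
+    int code = 0;          // 200
+    std::string reason;    // "OK" (may be empty)
+};
+
+std::optional<status_line> parse_status_line( std::string_view s )
+{
+    auto sp1 = s.find( ' ' );
+    if( sp1 == std::string_view::npos )
+        return std::nullopt;
+    auto sp2 = s.find( ' ', sp1 + 1 );
+    auto code = ( sp2 == std::string_view::npos )
+        ? s.substr( sp1 + 1 )
+        : s.substr( sp1 + 1, sp2 - sp1 - 1 );
+    if( code.size() != 3 )
+        return std::nullopt;
+    status_line out;
+    out.version = std::string( s.substr( 0, sp1 ) );
+    out.code = ( code[0] - '0' ) * 100
+             + ( code[1] - '0' ) * 10
+             + ( code[2] - '0' );
+    if( sp2 != std::string_view::npos )
+        out.reason = std::string( s.substr( sp2 + 1 ) );   // cosmetic only
+    return out;
+}
+
+int main()
+{
+    auto sl = parse_status_line( "HTTP/1.1 404 Not Found" );
+    if( sl )
+        std::cout << sl->code << " (" << sl->reason << ")\n"; // 404 (Not Found)
+}
+----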
+
+== The Five Classes
+
+Status codes are not random. The first digit divides all possible
+codes into five classes, each representing a fundamentally different
+kind of outcome:
+
+[cols="1,2,4"]
+|===
+|Range |Class |Meaning
+
+|**1xx**
+|Informational
+|The request was received and the server is continuing to process it.
+
+|**2xx**
+|Success
+|The request was received, understood, and accepted.
+
+|**3xx**
+|Redirection
+|The client must take additional action to complete the request.
+
+|**4xx**
+|Client Error
+|The request contains an error and cannot be fulfilled.
+
+|**5xx**
+|Server Error
+|The server failed to fulfill a valid request.
+|===
+
+This grouping is the single most useful thing to memorize about
+status codes. When you encounter an unfamiliar code, its first digit
+immediately tells you whose problem it is: a 4xx means the client
+did something wrong; a 5xx means the server did. A 3xx means nobody
+did anything wrong--the client just needs to take one more step. Even
+software that does not recognize a specific code can fall back to
+the class: an unknown `237` is treated as a generic success, and an
+unknown `569` as a generic server error.
+
+== 1xx: Informational
+
+Informational codes are provisional. The server is acknowledging the
+request but has not finished processing it yet. These responses
+never carry a body.
+
+**100 Continue.** This is the most important 1xx code. When a client
+wants to send a large body--say, a 500 MB file upload--it can ask
+the server for permission first by including an `Expect:
+100-continue` header. The server replies with `100 Continue` if it
+is willing to accept the body, or with an error code (such as `413
+Content Too Large`) if it is not. This handshake avoids wasting
+bandwidth sending a massive payload that the server would reject:
+
+[source]
+----
+PUT /uploads/backup.tar.gz HTTP/1.1
+Host: storage.example.com
+Content-Length: 524288000
+Expect: 100-continue
+
+----
+
+[source]
+----
+HTTP/1.1 100 Continue
+----
+
+Only after receiving the `100` does the client begin transmitting
+the body.
+
+**101 Switching Protocols.** The server is switching to a different
+protocol at the client's request. This is how HTTP connections
+upgrade to WebSocket: the client sends an `Upgrade: websocket`
+header, and if the server agrees, it responds with `101` and the
+connection transitions from HTTP to the WebSocket protocol.
+
+**103 Early Hints.** A relatively recent addition (RFC 8297). The
+server sends preliminary `Link` headers so the browser can start
+preloading stylesheets or fonts while the server is still computing
+the final response. When the real response arrives, the browser has
+already fetched several critical resources.
+
+== 2xx: Success
+
+These are the codes every client hopes to see. The request worked.
+
+**200 OK.** The universal success code. For a GET, it means the
+requested resource is in the body. For a POST, it means the action
+completed and the body contains the result. It is by far the most
+common status code on the Web:
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: text/html
+Content-Length: 612
+
+<html>
+<head><title>Welcome</title></head>
+...
+</html>
+----
+
+**201 Created.** The request succeeded _and_ a new resource was
+created as a result. Servers typically return this after a POST that
+creates a record. The response includes a `Location` header
+pointing to the newly created resource:
+
+[source]
+----
+HTTP/1.1 201 Created
+Location: /api/users/4281
+Content-Type: application/json
+
+{"id":4281,"name":"Alice"}
+----
+
+**204 No Content.** The request succeeded, but there is nothing to
+send back. Common after a DELETE that removes a resource, or a PUT
+that updates one when the client does not need a copy of the result.
+The response has headers but no body:
+
+[source]
+----
+HTTP/1.1 204 No Content
+----
+
+**206 Partial Content.** The server is delivering only part of the
+resource because the client asked for a range. This is the mechanism
+behind resumable downloads and video streaming. When you pause a
+download and resume it later, the client sends a `Range` header
+asking for bytes from where it left off, and the server replies
+with `206`:
+
+[source]
+----
+GET /videos/lecture.mp4 HTTP/1.1
+Host: cdn.example.com
+Range: bytes=1048576-
+----
+
+[source]
+----
+HTTP/1.1 206 Partial Content
+Content-Range: bytes 1048576-5242879/5242880
+Content-Length: 4194304
+Content-Type: video/mp4
+
+...(bytes 1048576 through 5242879)...
+----
+
+== 3xx: Redirection
+
+Redirection codes tell the client that the resource it asked for is
+available, but somewhere else--or that the client already has a
+valid copy. The client must take one more step to get what it wants.
+
+**301 Moved Permanently.** The resource has a new, permanent URL.
+The server includes the new address in a `Location` header, and all
+future requests should use it. Search engines update their indexes
+when they see a 301. This is the code a website sends when it
+changes its domain name:
+
+[source]
+----
+HTTP/1.1 301 Moved Permanently
+Location: https://www.new-example.com/about
+
+----
+
+**302 Found.** The resource is temporarily at a different URL. The
+client should follow the `Location` header for this request but
+continue using the original URL for future requests. Login pages use
+this constantly--after you submit your credentials, the server
+redirects you to the page you originally wanted:
+
+[source]
+----
+HTTP/1.1 302 Found
+Location: /dashboard
+
+----
+
+**303 See Other.** Tells the client to retrieve the result at a
+different URL using GET, regardless of what method the original
+request used. This is the standard response after a POST that
+creates or processes something: the server says "the operation
+worked, now GET the result here." This pattern prevents the
+double-submit problem--if the user refreshes the page, the browser
+re-issues a GET rather than re-posting the form.
+
+**307 Temporary Redirect.** Like 302, but with an important
+guarantee: the client _must_ use the same method. If the original
+request was a POST, the redirect must also be a POST. HTTP/1.1
+introduced 307 to resolve an ambiguity in 302, where browsers
+historically changed POST to GET during redirects.
+
+**308 Permanent Redirect.** Like 301, but with the same
+method-preservation guarantee as 307. A POST stays a POST. Use 308
+when you permanently move an API endpoint that receives POST
+requests.
+
+**304 Not Modified.** This code has nothing to do with location
+changes. It is the server's way of saying "you already have the
+latest version." When a client sends a conditional request with an
+`If-Modified-Since` or `If-None-Match` header, the server can
+respond with 304 instead of transmitting the entire resource again.
+No body is sent--the client uses its cached copy:
+
+[source]
+----
+GET /style.css HTTP/1.1
+Host: www.example.com
+If-None-Match: "abc123"
+
+----
+
+[source]
+----
+HTTP/1.1 304 Not Modified
+ETag: "abc123"
+Cache-Control: max-age=3600
+
+----
+
+The 304 saves bandwidth and time. On a busy site, the majority of
+requests for static assets are answered this way.
+
+== 4xx: Client Error
+
+The 4xx family means the client did something wrong. The request was
+malformed, unauthorized, or asked for something that does not exist.
+The server understood the request well enough to know it cannot be
+fulfilled.
+
+**400 Bad Request.** The catch-all for malformed requests. The
+server could not parse the request because of invalid syntax, a
+missing required field, or a body that does not match the declared
+`Content-Type`. When an API returns 400, it usually includes a body
+explaining exactly what was wrong:
+
+[source]
+----
+HTTP/1.1 400 Bad Request
+Content-Type: application/json
+
+{"error":"'email' field is required"}
+----
+
+**401 Unauthorized.** Despite its name, this code means
+_unauthenticated_. The client has not provided credentials, or the
+credentials it provided are invalid. The response includes a
+`WWW-Authenticate` header telling the client how to authenticate:
+
+[source]
+----
+HTTP/1.1 401 Unauthorized
+WWW-Authenticate: Bearer realm="api"
+----
+
+**403 Forbidden.** The server knows who the client is but refuses
+to grant access. The difference from 401 is important: 401 means
+"I don't know who you are--please log in," while 403 means "I know
+who you are, and you are not allowed." Re-authenticating will not
+help.
+
+**404 Not Found.** The most famous status code in the world. The
+server cannot find the resource at the requested URL. Every web
+user has encountered a 404 page. In API design, servers sometimes
+return 404 instead of 403 to hide the existence of a resource from
+unauthorized clients--if you are not supposed to know it exists,
+the server tells you it does not exist.
+
+**405 Method Not Allowed.** The URL is valid, but the method is
+not supported for it. A resource might accept GET and POST but not
+DELETE. The server must include an `Allow` header listing the
+methods it does support:
+
+[source]
+----
+HTTP/1.1 405 Method Not Allowed
+Allow: GET, POST, HEAD
+----
+
+**408 Request Timeout.** The client took too long to finish sending
+its request. Servers set a timeout and, if the client has not
+completed the request within it, respond with 408 and close the
+connection.
+
+**409 Conflict.** The request conflicts with the current state of
+the resource. A common example is trying to create a user account
+with a username that already exists, or uploading a file that would
+overwrite a newer version.
+
+**429 Too Many Requests.** The client has been rate-limited. It sent
+too many requests in a given period of time. The server often
+includes a `Retry-After` header indicating how long the client
+should wait before trying again:
+
+[source]
+----
+HTTP/1.1 429 Too Many Requests
+Retry-After: 30
+Content-Type: application/json
+
+{"error":"Rate limit exceeded. Try again in 30 seconds."}
+----
+
+== 5xx: Server Error
+
+The 5xx codes mean the server knows the request was valid but
+something went wrong on its side. The fault lies with the server,
+not the client.
+
+**500 Internal Server Error.** The generic server-side failure. An
+unhandled exception, a null pointer, a failed database query--
+anything that the server did not anticipate. It is the server
+equivalent of a shrug. When you see 500 in production logs, it
+means code needs to be fixed:
+
+[source]
+----
+HTTP/1.1 500 Internal Server Error
+Content-Type: text/html
+
+<html><body><h1>Something went wrong.</h1></body></html>
+----
+
+**502 Bad Gateway.** A server acting as a gateway or proxy received
+an invalid response from the upstream server it was forwarding to.
+If your application sits behind a reverse proxy like Nginx, a 502
+usually means your application process crashed or is not listening.
+
+**503 Service Unavailable.** The server is temporarily unable to
+handle requests--usually because it is overloaded or undergoing
+maintenance. Unlike 500, this code implies the problem is transient
+and the client should try again later. A `Retry-After` header may
+indicate when the server expects to recover:
+
+[source]
+----
+HTTP/1.1 503 Service Unavailable
+Retry-After: 120
+----
+
+**504 Gateway Timeout.** Like 502, but specifically about time. The
+proxy or gateway did not receive a response from the upstream server
+within the allowed time. This is the timeout version of 502.
+
+== The Redirection Maze: 301 vs. 302 vs. 303 vs. 307 vs. 308
+
+The redirect codes deserve a closer look because their history is
+tangled and the distinctions matter in practice.
+
+The original HTTP/1.0 specification defined 301 (permanent) and 302
+(temporary). The intention was that clients should preserve the
+original HTTP method when following a redirect--if you POST to a
+URL and get a 302 back, you should POST to the new URL too.
+Browsers, however, ignored this and changed POST to GET when
+following 302 redirects.
+
+HTTP/1.1 added 303 and 307 to clean up the mess, and 308 followed
+later (RFC 7538) to give permanent redirects the same guarantee:
+
+[cols="1,2,3"]
+|===
+|Code |Type |Method preserved?
+
+|**301**
+|Permanent
+|No -- browsers may change POST to GET
+
+|**302**
+|Temporary
+|No -- browsers may change POST to GET
+
+|**303**
+|Temporary (see other)
+|Always changes to GET (by design)
+
+|**307**
+|Temporary
+|Yes -- method must not change
+
+|**308**
+|Permanent
+|Yes -- method must not change
+|===
+
+The practical rule: use 301 or 308 for permanent moves, 302 or 307
+for temporary ones. If you specifically want the client to switch to
+GET after a POST (the "post/redirect/get" pattern), use 303. If you
+need the client to keep the same method, use 307 or 308.
+
+== Reading a Status Code in Context
+
+A status code never appears alone. It arrives alongside headers and
+sometimes a body that together tell the full story. Here is a
+complete exchange where the client requests a page that has moved:
+
+[source]
+----
+GET /old-page HTTP/1.1 <1>
+Host: www.example.com
+----
+
+[source]
+----
+HTTP/1.1 301 Moved Permanently <2>
+Location: /new-page <3>
+Content-Length: 0
+----
+
+[source]
+----
+GET /new-page HTTP/1.1 <4>
+Host: www.example.com
+----
+
+[source]
+----
+HTTP/1.1 200 OK <5>
+Content-Type: text/html
+Content-Length: 94
+
+<html>
+<head><title>New Page</title></head>
+<body>You found the new page.</body>
+</html>
+----
+
+<1> The client requests the original URL
+<2> The server says the resource has permanently moved
+<3> The `Location` header provides the new URL
+<4> The client automatically follows the redirect
+<5> The resource is delivered from its new home
+
+Browsers perform this redirect chain automatically and invisibly.
+The user sees only the final page and the new URL in the address
+bar. Programs and libraries do the same, though most limit the
+number of redirects they will follow (typically 20) to avoid
+infinite loops.
+
+== Codes You Will See Every Day
+
+Out of the dozens of defined status codes, a handful appear in the
+vast majority of real-world traffic:
+
+[cols="1,3"]
+|===
+|Code |When you will see it
+
+|`200`
+|Almost every successful page load, API call, and resource fetch.
+
+|`201`
+|After creating a resource via POST in a REST API.
+
+|`204`
+|After a successful DELETE, or a PUT that needs no response body.
+
+|`301`
+|When a website changes its URL structure or migrates to a new
+domain.
+
+|`304`
+|When the browser checks its cache and the resource has not changed.
+
+|`400`
+|When an API request has missing or invalid parameters.
+
+|`401`
+|When you forget to include your authentication token.
+
+|`403`
+|When you are authenticated but lack permission.
+
+|`404`
+|When the URL does not match any resource on the server.
+
+|`500`
+|When the server has an unhandled bug.
+
+|`502`
+|When the reverse proxy cannot reach the application server.
+
+|`503`
+|When the server is overloaded or down for maintenance.
+|===
+
+Memorizing these twelve codes covers the overwhelming majority of
+HTTP traffic. The rest are important in specific contexts--range
+requests, content negotiation, WebDAV--but these are the ones that
+appear in logs, error pages, and debugging sessions day after day.
+
+== Why the First Digit Matters
+
+The five-class design is not just a convenience for humans. The HTTP
+specification (RFC 9110) requires that software treat an unrecognized
+status code as equivalent to the x00 code of its class. If a client
+receives a response with status code `299`, and it does not know what
+`299` means, it must treat it as `200`. If it receives `599`, it must
+treat it as `500`.
+
+This rule makes the protocol future-proof. New status codes can be
+defined without breaking existing clients. As long as the first digit
+is correct, old software will do something reasonable--it just will
+not do anything _specific_ to the new code. This is one of the small
+design decisions that allowed HTTP to evolve for over three decades
+without fragmenting the Web.
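+
+The fallback rule is mechanical, which makes it easy to sketch. The
+function below (illustrative only, not a library API) maps any code a
+client does not specifically handle onto the x00 code of its class
+before dispatching on it.
+
+[source,cpp]
+----
+#include <iostream>
+
+// RFC 9110: treat an unrecognized status code as the x00 code of
+// the same class.
+int effective_status( int code )
+{
+    switch( code )
+    {
+    // Codes this (hypothetical) client handles specially.
+    case 200: case 201: case 204: case 206:
+    case 301: case 302: case 303: case 304: case 307: case 308:
+    case 400: case 401: case 403: case 404: case 405: case 429:
+    case 500: case 502: case 503: case 504:
+        return code;
+    default:
+        return ( code / 100 ) * 100;   // 299 -> 200, 569 -> 500
+    }
+}
+
+int main()
+{
+    std::cout << effective_status( 299 ) << '\n';   // 200
+    std::cout << effective_status( 569 ) << '\n';   // 500
+}
+----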
diff --git a/doc/modules/ROOT/pages/2.http-tutorial/2f.headers.adoc b/doc/modules/ROOT/pages/2.http-tutorial/2f.headers.adoc
new file mode 100644
index 00000000..97d9ade2
--- /dev/null
+++ b/doc/modules/ROOT/pages/2.http-tutorial/2f.headers.adoc
@@ -0,0 +1,635 @@
+//
+// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
+//
+// Distributed under the Boost Software License, Version 1.0. (See accompanying
+// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+//
+// Official repository: https://github.com/cppalliance/http
+//
+
+= Headers
+
+An HTTP message without headers is like a package with no shipping label.
+The bytes inside might be a web page, a photograph, or a stock quote,
+but neither the sender nor the receiver would know what to do with them.
+Headers are the metadata layer that gives HTTP its power: they describe
+the payload, declare the sender's preferences, control caching, carry
+authentication tokens, and negotiate the terms of every exchange. Two
+programs that have never communicated before can cooperate perfectly, as
+long as they read each other's headers.
+
+You already know from the previous section that headers sit between the
+start line and the body as `Name: value` pairs, one per line. This
+section goes deeper. You will learn _which_ headers matter, _why_ they
+exist, and how they work together to orchestrate everything from a
+simple page load to a complex API transaction.
+
+== Header Syntax
+
+Every header field follows the same format:
+
+[source]
+----
+Field-Name: field-value
+----
+
+A name, a colon, optional whitespace, and a value. The name is
+case-insensitive -- `Content-Type`, `content-type`, and `CONTENT-TYPE`
+all mean the same thing. The value, however, _is_ case-sensitive for
+most headers unless the specification for that header says otherwise.
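+
+Because names are case-insensitive, any code that looks up a header
+must compare names without regard to case. A minimal sketch of such a
+comparison (illustrative, not this library's API) looks like this:
+
+[source,cpp]
+----
+#include <cctype>
+#include <iostream>
+#include <string_view>
+
+// Case-insensitive comparison suitable for header field names.
+// Values generally keep whatever case the sender used.
+bool iequals( std::string_view a, std::string_view b )
+{
+    if( a.size() != b.size() )
+        return false;
+    for( std::size_t i = 0; i < a.size(); ++i )
+        if( std::tolower( static_cast<unsigned char>( a[i] ) ) !=
+            std::tolower( static_cast<unsigned char>( b[i] ) ) )
+            return false;
+    return true;
+}
+
+int main()
+{
+    std::cout << iequals( "Content-Type", "content-type" ) << '\n';   // 1
+    std::cout << iequals( "Content-Type", "Content-Length" ) << '\n'; // 0
+}
+----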
+
+[source]
+----
+Content-Type: text/html; charset=utf-8
+Cache-Control: max-age=3600, must-revalidate
+Accept: text/html, application/json;q=0.9, */*;q=0.8
+----
+
+Some values are simple tokens. Others carry parameters separated by
+semicolons, or lists of items separated by commas. The `Accept` header
+above says: "I prefer HTML, I can handle JSON almost as well, and in a
+pinch I will take anything." Those `q` values are quality weights --
+a number from 0 to 1 that ranks the client's preference.
+
+A message can contain any number of headers, and the same header name
+can appear more than once. When it does, the effect is the same as if
+the values were combined into a single comma-separated list. These two
+forms are equivalent:
+
+[source]
+----
+Accept-Encoding: gzip
+Accept-Encoding: deflate
+----
+
+[source]
+----
+Accept-Encoding: gzip, deflate
+----
+
+Headers can appear in any order. By convention, though, clients send
+the `Host` header first in an HTTP/1.1 request, because servers that
+support virtual hosting want it as early as possible.
+
+== Categories of Headers
+
+HTTP defines dozens of standard headers. Remembering them all is
+unnecessary, but understanding how they are organized makes the list
+manageable. Headers fall into four broad categories based on their role
+in the message.
+
+=== Request Headers
+
+Request headers carry information _from the client to the server_.
+They describe who is making the request, what the client can accept,
+and any conditions attached to the request.
+
+[cols="1a,3a"]
+|===
+|Header|Purpose
+
+|`Host`
+|The domain name (and optional port) of the server. Required in
+HTTP/1.1 because a single machine often hosts many sites.
+
+|`User-Agent`
+|Identifies the client software -- browser name, version, and platform.
+Servers use this to tailor responses or log traffic patterns.
+
+|`Accept`
+|Media types the client is willing to receive, ranked by preference.
+
+|`Accept-Language`
+|Human languages the client prefers, such as `en`, `fr`, or `de`.
+
+|`Accept-Encoding`
+|Compression algorithms the client supports, such as `gzip` or `br`
+(Brotli).
+
+|`Referer`
+|The URL of the page that led the client to make this request. Useful
+for analytics and back-link tracking.
+
+|`Authorization`
+|Credentials for authenticating the client -- a bearer token, a Basic
+username/password pair, or other scheme.
+
+|`Cookie`
+|Previously stored cookies being returned to the server.
+
+|===
+
+Here is a realistic request with several of these headers in action:
+
+[source]
+----
+GET /api/products?category=tools HTTP/1.1
+Host: www.example.com
+User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
+Accept: application/json
+Accept-Language: en-US,en;q=0.9
+Accept-Encoding: gzip, br
+Authorization: Bearer eyJhbGciOiJIUzI1NiJ9...
+Cookie: session_id=abc123
+----
+
+Every header here helps the server make a better decision. `Accept`
+says the client wants JSON, not HTML. `Accept-Encoding` says the server
+can compress the response. `Authorization` proves the client is logged
+in. Without these headers, the server would have to guess -- or refuse
+the request.
+
+=== Response Headers
+
+Response headers carry information _from the server to the client_.
+They describe the server itself, provide instructions about the
+response, and sometimes set up future interactions.
+
+[cols="1a,3a"]
+|===
+|Header|Purpose
+
+|`Server`
+|Identifies the server software, similar to how `User-Agent` identifies
+the client.
+
+|`Date`
+|The date and time when the response was generated.
+
+|`Location`
+|The URL to redirect to. Used with 3xx status codes to send the client
+somewhere else.
+
+|`Retry-After`
+|Tells the client how long to wait before retrying, typically used with
+`503 Service Unavailable`.
+
+|`Set-Cookie`
+|Sends a cookie to the client for storage. The client will return it in
+subsequent `Cookie` headers.
+
+|`WWW-Authenticate`
+|Challenges the client to provide credentials. Sent with
+`401 Unauthorized` to initiate authentication.
+
+|===
+
+=== Representation Headers
+
+Representation headers describe the body of the message -- what it is,
+how big it is, and how it has been encoded. They appear in both requests
+and responses, because both directions can carry a body.
+
+[cols="1a,3a"]
+|===
+|Header|Purpose
+
+|`Content-Type`
+|The media type of the body: `text/html`, `application/json`,
+`image/png`, and so on. This is the single most important header for
+interpreting the payload.
+
+|`Content-Length`
+|The size of the body in bytes. Lets the receiver know exactly how much
+data to expect.
+
+|`Content-Encoding`
+|Any compression applied to the body, such as `gzip` or `br`. The
+receiver must decompress before processing.
+
+|`Content-Language`
+|The natural language of the body, such as `en` or `fr`.
+
+|`Transfer-Encoding`
+|How the body is framed for transport. The value `chunked` means the
+body arrives in pieces, each prefixed by its size.
+
+|===
+
+When a server sends a compressed JSON response, the representation
+headers make the whole arrangement explicit:
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: application/json; charset=utf-8
+Content-Encoding: gzip
+Transfer-Encoding: chunked
+----
+
+The client sees `Content-Type` and knows it is JSON. It sees
+`Content-Encoding: gzip` and knows it must decompress. It sees
+`Transfer-Encoding: chunked` and knows the body will arrive in
+pieces rather than all at once. Every step is unambiguous.
+
+=== General Headers
+
+Some headers belong to the message as a whole rather than to the
+request, the response, or the body. These are general headers --
+they can appear in either direction and affect how the message is
+transported or processed.
+
+[cols="1a,3a"]
+|===
+|Header|Purpose
+
+|`Connection`
+|Controls whether the TCP connection stays open after the response.
+`keep-alive` is the HTTP/1.1 default; `close` signals that the
+connection should be torn down.
+
+|`Via`
+|Records the intermediate proxies or gateways a message has passed
+through. Each hop appends its own entry.
+
+|`Cache-Control`
+|Directives for how caches along the path should handle the message.
+This header is so important it gets its own section below.
+
+|`Date`
+|The timestamp of the message, used for caching calculations and log
+correlation.
+
+|===
+
+== Conditional Headers
+
+One of the most practical things headers do is make requests
+_conditional_. A conditional request says: "Only give me this
+resource if something has changed." This avoids transferring data the
+client already has, saving bandwidth and time.
+
+The mechanism relies on validators -- values that identify a specific
+version of a resource. HTTP supports two kinds:
+
+**Last-Modified dates.** The server includes a `Last-Modified` header
+in the response, telling the client when the resource was last changed:
+
+[source]
+----
+HTTP/1.1 200 OK
+Last-Modified: Thu, 15 Jan 2026 08:30:00 GMT
+Content-Type: text/html
+Content-Length: 5120
+
+...
+----
+
+When the client requests the resource again, it includes the date in an
+`If-Modified-Since` header:
+
+[source]
+----
+GET /index.html HTTP/1.1
+Host: www.example.com
+If-Modified-Since: Thu, 15 Jan 2026 08:30:00 GMT
+----
+
+If the resource has not changed, the server responds with
+`304 Not Modified` and no body -- the client uses its cached copy.
+If it _has_ changed, the server sends the full `200 OK` response
+with the updated resource.
+
+**Entity tags.** An `ETag` is an opaque identifier -- often a hash --
+that the server assigns to a specific version of a resource:
+
+[source]
+----
+HTTP/1.1 200 OK
+ETag: "a3f2b7c"
+Content-Type: application/json
+Content-Length: 892
+
+{"products": [...]}
+----
+
+The client echoes it back in an `If-None-Match` header:
+
+[source]
+----
+GET /api/products HTTP/1.1
+Host: www.example.com
+If-None-Match: "a3f2b7c"
+----
+
+If the ETag still matches, the server returns `304 Not Modified`.
+ETags are more precise than dates because they change whenever the
+content changes, regardless of the clock. They also handle the edge
+case where a resource is modified, then reverted -- the date changes
+but the content is the same. An ETag catches that.
+
+[cols="1a,2a"]
+|===
+|Request Header|Meaning
+
+|`If-Modified-Since`
+|Send the resource only if it changed after this date.
+
+|`If-None-Match`
+|Send the resource only if its ETag differs from this value.
+
+|`If-Match`
+|Proceed only if the resource's current ETag matches. Used to prevent
+overwriting someone else's changes during an update.
+
+|`If-Unmodified-Since`
+|Proceed only if the resource has not changed since this date.
+
+|===
+
+Conditional requests are the foundation of efficient caching. Without
+them, every page view would require a full download even if nothing
+changed.
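+
+On the server side, the revalidation decision is small. The sketch
+below uses hypothetical names and handles only a single ETag; a real
+server also accepts lists of ETags, the `*` wildcard, and weak
+comparison.
+
+[source,cpp]
+----
+#include <optional>
+#include <string>
+
+struct response_plan
+{
+    int status;          // 304 or 200
+    bool send_body;      // false for 304
+    std::string etag;    // current validator, echoed either way
+};
+
+// 'if_none_match' is the If-None-Match value from the request, if
+// any; 'current_etag' identifies the version the server would serve.
+response_plan revalidate(
+    std::optional<std::string> const& if_none_match,
+    std::string const& current_etag )
+{
+    if( if_none_match && *if_none_match == current_etag )
+        return { 304, false, current_etag };   // client copy is current
+    return { 200, true, current_etag };        // send the full resource
+}
+----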
+
+== Cache-Control
+
+Caching is one of the most consequential behaviors that headers govern.
+The `Cache-Control` header tells every cache along the path -- the
+browser, any proxy, any CDN -- how to store, reuse, and validate
+responses. Getting caching right can make a site feel instant; getting
+it wrong can serve stale data or defeat the cache entirely.
+
+`Cache-Control` carries one or more comma-separated directives:
+
+[source]
+----
+Cache-Control: public, max-age=86400, must-revalidate
+----
+
+This says: any cache may store the response (`public`), it is fresh
+for 86,400 seconds -- one day (`max-age`), and after that the cache
+must check with the server before reusing it (`must-revalidate`).
+
+The most important directives:
+
+[cols="1a,3a"]
+|===
+|Directive|Meaning
+
+|`public`
+|Any cache (browser or shared proxy) may store the response. Even
+responses that would normally be private become cacheable.
+
+|`private`
+|Only the user's browser may cache this response. Shared caches like
+CDNs must not store it. Useful for responses tailored to one user.
+
+|`no-cache`
+|A cache may store the response, but must revalidate with the server
+before every reuse. This guarantees freshness without giving up caching
+entirely.
+
+|`no-store`
+|The response must not be stored by any cache at all. Used for
+sensitive data like banking pages or personal health records.
+
+|`max-age=`
+|How long the response stays fresh, in seconds from the time of the
+request. Replaces the older `Expires` header.
+
+|`s-maxage=`
+|Like `max-age`, but applies only to shared (proxy) caches.
+
+|`must-revalidate`
+|Once the response becomes stale, a cache must not serve it without
+first confirming with the server. Prevents serving expired content
+during outages.
+
+|===
+
+A static asset like a CSS file or an image that never changes can
+carry an aggressive cache policy:
+
+[source]
+----
+Cache-Control: public, max-age=31536000, immutable
+----
+
+This tells caches the resource is good for an entire year and will
+never change at that URL. The `immutable` directive prevents browsers
+from revalidating even when the user hits reload.
+
+By contrast, a personalized dashboard page might use:
+
+[source]
+----
+Cache-Control: private, no-cache
+----
+
+Only the browser may store it, and it must check with the server every
+time. This balances performance (the browser avoids a full download if
+the content has not changed) with correctness (the user always sees
+fresh data).
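+
+Inside a cache, the freshness decision reduces to arithmetic on
+`max-age`. The sketch below is deliberately simplified (hypothetical
+types; it ignores `Age`, `Expires`, and heuristic freshness):
+
+[source,cpp]
+----
+#include <chrono>
+
+using clock_type = std::chrono::system_clock;
+
+// One stored entry: when the response was received and the directives
+// parsed from its Cache-Control header.
+struct cache_entry
+{
+    clock_type::time_point received;
+    std::chrono::seconds max_age{ 0 };
+    bool no_cache = false;   // "no-cache": stored, but always revalidate
+};
+
+// Fresh entries may be served directly; stale or no-cache entries
+// require revalidation with the origin server first.
+bool is_fresh( cache_entry const& e, clock_type::time_point now )
+{
+    if( e.no_cache )
+        return false;
+    return ( now - e.received ) < e.max_age;
+}
+----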
+
+== Content-Type in Depth
+
+The `Content-Type` header deserves special attention because it affects
+how every participant in the chain interprets the body. Its value is a
+media type, optionally followed by parameters:
+
+[source]
+----
+Content-Type: text/html; charset=utf-8
+----
+
+The media type `text/html` tells the receiver the body is an HTML
+document. The `charset=utf-8` parameter specifies the character
+encoding. Without the charset, the receiver must guess -- and guessing
+often goes wrong, producing garbled text.
+
+Common `Content-Type` values you will encounter constantly:
+
+[cols="1a,2a"]
+|===
+|Value|Typical Use
+
+|`text/html; charset=utf-8`
+|Web pages
+
+|`application/json`
+|API responses and request bodies
+
+|`application/x-www-form-urlencoded`
+|HTML form submissions (the default encoding)
+
+|`multipart/form-data`
+|File uploads and forms with binary data
+
+|`image/png`, `image/jpeg`, `image/webp`
+|Images
+
+|`application/octet-stream`
+|Arbitrary binary data -- the "I don't know what this is" fallback
+
+|===
+
+When a client sends a POST request with a JSON body, it must set
+`Content-Type` so the server knows how to parse the payload:
+
+[source]
+----
+POST /api/orders HTTP/1.1
+Host: www.example.com
+Content-Type: application/json
+Content-Length: 51
+
+{"item":"hammer","quantity":2,"shipping":"express"}
+----
+
+If the client omits `Content-Type`, the server has no reliable way to
+determine whether the body is JSON, XML, form data, or something else.
+This is a frequent cause of "400 Bad Request" errors in API work.
+
+== Cookies and State
+
+HTTP is stateless -- the server remembers nothing about previous
+requests. But real applications need continuity: a shopping cart must
+persist across page views, a login must survive navigation. Cookies
+solve this by using two headers to graft state onto a stateless
+protocol.
+
+The server creates a cookie by including a `Set-Cookie` header in the
+response:
+
+[source]
+----
+HTTP/1.1 200 OK
+Set-Cookie: session_id=xyz789; Path=/; HttpOnly; Secure; Max-Age=3600
+----
+
+The browser stores this cookie and returns it in a `Cookie` header on
+every subsequent request to the same site:
+
+[source]
+----
+GET /cart HTTP/1.1
+Host: www.example.com
+Cookie: session_id=xyz789
+----
+
+The server reads `session_id` and looks up the associated session -- the
+user's cart, their login state, their preferences. The protocol itself
+has not changed; it is still a standalone request and response. The
+cookie is simply a token that lets the server correlate requests.
+
+The attributes on `Set-Cookie` control security and scope:
+
+[cols="1a,3a"]
+|===
+|Attribute|Effect
+
+|`Path=/`
+|The cookie is sent for every path on the site.
+
+|`HttpOnly`
+|JavaScript cannot access this cookie, reducing the risk of cross-site
+scripting (XSS) attacks stealing session tokens.
+
+|`Secure`
+|The cookie is only sent over HTTPS connections.
+
+|`Max-Age=3600`
+|The cookie expires after 3,600 seconds (one hour). Without `Max-Age`
+or `Expires`, the cookie lives only until the browser is closed.
+
+|`SameSite=Lax`
+|The cookie is not sent on cross-site requests initiated by third-party
+sites, protecting against cross-site request forgery (CSRF).
+
+|===
+
+Cookies are the most widely used mechanism for session management on the
+web. They work because both sides cooperate: the server issues the token,
+the client returns it, and the protocol never needs to know what it
+means.
+
+== Extension and Custom Headers
+
+HTTP is designed to be extended. Any sender can introduce a new header
+name, and receivers that do not recognize it simply pass it through
+unchanged. This extensibility is one of the reasons HTTP has thrived for
+over three decades -- new capabilities are added by defining new headers,
+not by changing the protocol.
+
+Historically, custom headers used an `X-` prefix to signal that they
+were experimental or non-standard:
+
+[source]
+----
+X-Request-ID: 8f14e45f-ceea-467f-a8f1-06f3b7c9d6e2
+X-RateLimit-Remaining: 42
+----
+
+This convention was deprecated in 2012 (RFC 6648) because too many
+`X-` headers became permanent standards, and the prefix added confusion
+rather than clarity. Modern practice is to choose a descriptive name
+without the prefix and register it with IANA if it gains widespread
+use.
+
+Custom headers are common in APIs. Rate-limiting headers tell clients
+how many requests they have left. Tracing headers like `X-Request-ID`
+correlate a single user action across multiple services. CDN headers
+report cache hit status. These all work because HTTP's header mechanism
+is open-ended by design.
+
+== A Complete Exchange
+
+Putting it all together, here is an annotated exchange that uses many
+of the headers discussed in this section:
+
+[source]
+----
+GET /products/42 HTTP/1.1 <1>
+Host: api.example.com <2>
+Accept: application/json <3>
+Accept-Encoding: gzip <4>
+If-None-Match: "b5c8d3e" <5>
+Authorization: Bearer eyJhbGci... <6>
+Cookie: session_id=abc123 <7>
+----
+
+<1> Request line -- fetch product 42
+<2> Which server to talk to
+<3> The client wants JSON
+<4> The client can decompress gzip
+<5> Conditional: skip the body if the ETag has not changed
+<6> Authentication credentials
+<7> Session cookie
+
+[source]
+----
+HTTP/1.1 200 OK <1>
+Date: Sat, 07 Feb 2026 14:30:00 GMT <2>
+Content-Type: application/json <3>
+Content-Encoding: gzip <4>
+Content-Length: 412 <5>
+ETag: "c9a1f0b" <6>
+Cache-Control: private, max-age=60 <7>
+Set-Cookie: prefs=dark; Path=/; Secure <8>
+
+{"id":42,"name":"Claw Hammer",...} <9>
+----
+
+<1> Success
+<2> When the response was generated
+<3> The body is JSON
+<4> Compressed with gzip
+<5> 412 bytes after compression
+<6> New ETag for this version of the resource
+<7> Only the browser should cache this, fresh for 60 seconds
+<8> A cookie recording the user's theme preference
+<9> The JSON payload
+
+Six request headers and seven response headers, and the entire
+interaction is self-describing. The client knows how to decompress and
+parse the body. The server knows the client is authenticated and prefers
+JSON. The browser knows how long to cache the result and which cookie to
+store. Every decision is made by reading headers -- no out-of-band
+knowledge required.
+
+This is the design insight that makes HTTP so durable. New requirements
+-- compression, authentication, caching, rate limiting, content
+negotiation -- are all layered on through headers rather than baked into
+the protocol's core syntax. The message format has not changed since
+1997, yet it handles workloads that its designers never imagined.
diff --git a/doc/modules/ROOT/pages/2.http-tutorial/2g.content-negotiation.adoc b/doc/modules/ROOT/pages/2.http-tutorial/2g.content-negotiation.adoc
new file mode 100644
index 00000000..cf418bb5
--- /dev/null
+++ b/doc/modules/ROOT/pages/2.http-tutorial/2g.content-negotiation.adoc
@@ -0,0 +1,434 @@
+//
+// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
+//
+// Distributed under the Boost Software License, Version 1.0. (See accompanying
+// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+//
+// Official repository: https://github.com/cppalliance/http
+//
+
+= Content Negotiation and Body Encoding
+
+A single URL can mean different things to different clients. A French
+speaker and an English speaker visiting the same page should each get
+content in their own language. A modern browser that understands WebP
+images should not be forced to download a larger JPEG. A phone on a
+slow cellular link should receive a compressed response, not the raw
+megabyte that a desktop on fiber could swallow without noticing.
+
+HTTP solves this through two related mechanisms. _Content negotiation_
+lets the client describe what it prefers and the server choose the best
+available representation. _Body encoding_ lets either side compress or
+transform the payload so it travels efficiently across the wire. Together
+they turn a single resource into something that adapts--to languages,
+formats, and network conditions--without requiring a different URL for
+every variation.
+
+== The Problem of Multiple Representations
+
+Consider a documentation site that publishes the same article in English,
+French, and Japanese. Each version lives on the server as a separate
+file, but the URL that users share and bookmark is the same:
+
+[source]
+----
+https://docs.example.com/guide/getting-started
+----
+
+When a request arrives for that URL, the server must decide which file
+to send. It could guess. It could ask the client to choose from a list.
+Or it could look at the request headers, where the client has already
+declared its preferences, and pick the best match automatically.
+
+HTTP calls these different files _variants_ of the same resource. The
+process of selecting among them is content negotiation. The idea extends
+beyond language: a resource might have variants in different media types
+(HTML versus JSON), different character encodings, or different
+compression formats.
+
+== Server-Driven Negotiation
+
+The most common approach is _server-driven_ (or _proactive_)
+negotiation. The client sends preference headers with every request.
+The server examines them, compares them against the available variants,
+and returns the best match.
+
+This happens transparently. The user clicks a link; the browser sends
+its preferences; the server picks a variant; the page loads. No extra
+round-trips, no menus to click through.
+
+The downside is that the server must guess when the client's preferences
+do not perfectly match any available variant. If the server has English
+and French but the client wants Spanish, the server has to decide what
+to do--return English, return French, or reject the request. HTTP gives
+the server tools to make a reasonable choice, but it cannot read minds.
+
+== The Accept Headers
+
+Clients express preferences through four request headers, each
+corresponding to a different dimension of the response:
+
+[cols="2a,3a"]
+|===
+|Header|Controls
+
+|`Accept`
+|Which media types the client can handle. Matched against the response's
+`Content-Type`.
+
+|`Accept-Language`
+|Which human languages the client prefers. Matched against
+`Content-Language`.
+
+|`Accept-Encoding`
+|Which content encodings (compression algorithms) the client supports.
+Matched against `Content-Encoding`.
+
+|`Accept-Charset`
+|Which character sets the client can display. Matched against the
+`charset` parameter of `Content-Type`.
+
+|===
+
+A real browser request carries several of these at once:
+
+[source]
+----
+GET /guide/getting-started HTTP/1.1
+Host: docs.example.com
+Accept: text/html, application/xhtml+xml, */*
+Accept-Language: fr, en;q=0.8
+Accept-Encoding: gzip, br
+----
+
+This request says: "I prefer HTML or XHTML, but will accept anything.
+I want French if you have it, with English as a fallback. I can
+decompress gzip and Brotli."
+
+== Quality Values
+
+Not every preference is equal. A client might strongly prefer French but
+tolerate English in a pinch, and refuse Turkish entirely. HTTP expresses
+this with _quality values_--a numeric weight between `0.0` and `1.0`
+attached to each option with the `q` parameter.
+
+[source]
+----
+Accept-Language: fr;q=1.0, en;q=0.8, de;q=0.5, tr;q=0.0
+----
+
+A quality of `1.0` means "this is exactly what I want." A quality of
+`0.0` means "do not send this under any circumstances." If no `q`
+parameter is present, the default is `1.0`.
+
+The server reads these values, compares them against the variants it
+has, and picks the one with the highest combined match. If the best
+available match has a quality of `0.0`, the server should not return
+it--a `406 Not Acceptable` response is more appropriate.
+
+Quality values apply to all four Accept headers. A media type preference
+list might look like this:
+
+[source]
+----
+Accept: text/html;q=1.0, application/json;q=0.9, text/plain;q=0.5
+----
+
+The server learns that HTML is most desired, JSON is almost as good,
+and plain text is acceptable but not ideal. This flexibility lets
+clients degrade gracefully rather than fail when the server lacks a
+perfect match.
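+
+Selecting a variant is then a matter of parsing the q values and
+taking the highest-ranked option that actually exists on the server.
+The sketch below is a simplification with hypothetical names: it
+handles `Accept-Language` only, ignores wildcards, and trims
+whitespace crudely.
+
+[source,cpp]
+----
+#include <iostream>
+#include <sstream>
+#include <string>
+#include <vector>
+
+struct preference
+{
+    std::string tag;   // e.g. "fr"
+    double q = 1.0;    // default quality when no q= parameter appears
+};
+
+// Parse a value such as "fr;q=1.0, en;q=0.8, tr;q=0.0".
+std::vector<preference> parse_accept_language( std::string const& value )
+{
+    std::vector<preference> out;
+    std::istringstream items( value );
+    std::string item;
+    while( std::getline( items, item, ',' ) )
+    {
+        preference p;
+        auto semi = item.find( ';' );
+        p.tag = item.substr( 0, semi );
+        p.tag.erase( 0, p.tag.find_first_not_of( ' ' ) );
+        p.tag.erase( p.tag.find_last_not_of( ' ' ) + 1 );
+        auto q = item.find( "q=" );
+        if( semi != std::string::npos && q != std::string::npos )
+            p.q = std::stod( item.substr( q + 2 ) );
+        out.push_back( p );
+    }
+    return out;
+}
+
+// Pick the available variant the client ranks highest; an empty
+// result is a candidate for 406 Not Acceptable.
+std::string choose_variant(
+    std::vector<preference> const& prefs,
+    std::vector<std::string> const& available )
+{
+    std::string best;
+    double best_q = 0.0;
+    for( auto const& p : prefs )
+        for( auto const& v : available )
+            if( v == p.tag && p.q > best_q )
+            {
+                best = v;
+                best_q = p.q;
+            }
+    return best;
+}
+
+int main()
+{
+    auto prefs = parse_accept_language( "fr;q=1.0, en;q=0.8, tr;q=0.0" );
+    std::cout << choose_variant( prefs, { "en", "ja" } ) << '\n';   // en
+}
+----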
+
+=== Wildcards
+
+The `*` character serves as a wildcard in Accept headers.
+`Accept: text/*` means any text subtype is acceptable. `Accept: \*/*`
+means any media type at all is acceptable. Wildcards typically carry
+a lower quality value than specific types, so the server prefers an
+exact match when one exists:
+
+[source]
+----
+Accept: text/html;q=1.0, text/*;q=0.5, */*;q=0.1
+----
+
+Here the client strongly prefers HTML, will accept other text formats,
+and will grudgingly take anything else rather than get nothing.
+
+== The 406 Response
+
+When the server cannot satisfy any of the client's stated preferences
+and all matching qualities are zero, it responds with:
+
+[source]
+----
+HTTP/1.1 406 Not Acceptable
+Content-Type: text/html
+
+<html>
+<body>The requested resource is only available in Japanese.</body>
+</html>
+----
+
+In practice, many servers choose to send the closest available variant
+anyway, reasoning that _something_ is better than an error page. The
+specification permits this--`406` is a tool, not a mandate.
+
+== The Vary Header
+
+Content negotiation creates a complication for caches. A cache stores a
+response keyed by its URL. But if the same URL can produce different
+responses depending on `Accept-Language`, a cache that blindly serves
+the first response it stored will send French pages to English speakers.
+
+The `Vary` header solves this. The server includes it in the response to
+tell caches which request headers influenced the choice of variant:
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: text/html
+Content-Language: fr
+Vary: Accept-Language
+----
+
+This tells any cache: "I chose this response based on the
+`Accept-Language` header. If a future request has a different
+`Accept-Language` value, do not serve this cached copy--ask the origin
+server again."
+
+A `Vary` header can list multiple fields:
+
+[source]
+----
+Vary: Accept-Language, Accept-Encoding
+----
+
+Caches that implement `Vary` correctly store multiple variants of the
+same URL and match incoming requests against the stored request headers.
+Getting `Vary` right is essential for any system that sits between
+clients and origin servers--proxies, CDNs, and reverse caches all
+depend on it.
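+
+Conceptually, a Vary-aware cache keys each stored response by the URL
+plus the request's values for every header named in `Vary`. The
+structures below are hypothetical, meant only to make that idea
+concrete:
+
+[source,cpp]
+----
+#include <initializer_list>
+#include <map>
+#include <string>
+
+// Request header values remembered alongside a stored response.
+using header_map = std::map<std::string, std::string>;
+
+// The lookup key: URL plus the varied header values, e.g.
+// { {"accept-language","fr"}, {"accept-encoding","br"} }.
+struct cache_key
+{
+    std::string url;
+    header_map varied;
+
+    bool operator<( cache_key const& other ) const
+    {
+        if( url != other.url )
+            return url < other.url;
+        return varied < other.varied;
+    }
+};
+
+// Build the key for an incoming request, given the Vary list that
+// came back with the origin's response (names lowercased by caller).
+cache_key make_key(
+    std::string const& url,
+    header_map const& request_headers,
+    std::initializer_list<std::string> vary )
+{
+    cache_key key{ url, {} };
+    for( auto const& name : vary )
+    {
+        auto it = request_headers.find( name );
+        key.varied[ name ] =
+            it != request_headers.end() ? it->second : std::string();
+    }
+    return key;
+}
+----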
+
+== Content Encoding
+
+Content negotiation decides _what_ to send. Content encoding decides
+_how to compress it_ before it travels across the network.
+
+When a server has a large HTML page, sending it uncompressed wastes
+bandwidth and time. If the client supports compression, the server can
+encode the body with an algorithm like gzip, Brotli, or deflate, and
+the client decompresses it on arrival. The original media type does not
+change--a gzip-compressed HTML page is still `text/html`. Only the
+transport representation changes.
+
+=== The Content-Encoding Process
+
+The flow is straightforward:
+
+. The client sends `Accept-Encoding` listing the algorithms it supports.
+. The server picks one (or none) and compresses the body.
+. The server adds a `Content-Encoding` header naming the algorithm.
+. The client reads `Content-Encoding`, decompresses, and processes the
+ original content.
+
+[source]
+----
+GET /report.html HTTP/1.1
+Host: www.example.com
+Accept-Encoding: gzip, br
+----
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: text/html
+Content-Encoding: gzip
+Content-Length: 3907
+
+<...3907 bytes of gzip-compressed HTML...>
+----
+
+The `Content-Length` reflects the compressed size, not the original.
+The `Content-Type` still says `text/html` because that is what the
+body _is_ once decompressed.
+
+=== Common Content-Encoding Algorithms
+
+[cols="1a,3a"]
+|===
+|Token|Description
+
+|`gzip`
+|The most widely supported algorithm. Based on the DEFLATE algorithm
+wrapped in the gzip file format. Virtually every HTTP client and server
+understands it.
+
+|`deflate`
+|DEFLATE data wrapped in the zlib format. Less common than gzip in
+practice because implementations historically disagreed about whether
+the zlib wrapper was present.
+
+|`br`
+|Brotli, a newer algorithm developed by Google. Achieves better
+compression ratios than gzip, especially for text. Supported by all
+modern browsers, typically only over HTTPS.
+
+|`identity`
+|No encoding applied. This token exists so clients can explicitly
+express a preference for uncompressed content using quality values.
+
+|===
+
+A client that wants to explicitly reject uncompressed responses can
+send:
+
+[source]
+----
+Accept-Encoding: gzip;q=1.0, identity;q=0.0
+----
+
+If the server cannot compress the response, it should send a
+`406 Not Acceptable` rather than ignore the prohibition--though, again,
+real-world servers vary in how strictly they follow this.
+
+== Transfer Encoding
+
+Content encoding compresses the _payload_. Transfer encoding changes
+how the _message_ is delivered. The distinction matters: content
+encoding is about the resource, transfer encoding is about the
+transport.
+
+The primary transfer encoding in HTTP/1.1 is _chunked encoding_. It
+exists to solve a specific problem: how do you send a response when
+you do not know its total size in advance?
+
+=== The Problem
+
+Normally, a server declares the body size in `Content-Length` so the
+client knows when the body ends. But if the server is generating
+content dynamically--streaming search results, compressing on the fly,
+assembling a page from multiple database queries--it may not know the
+total size until it is finished. Without `Content-Length`, and on a
+persistent connection, the client has no way to tell where one response
+ends and the next begins.
+
+=== Chunked Encoding
+
+Chunked transfer encoding breaks the body into a series of chunks, each
+preceded by its size in hexadecimal. A zero-length chunk signals the
+end of the body:
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: text/plain
+Transfer-Encoding: chunked
+
+1a
+We hold these truths to be
+1b
+ self-evident, that all men
+0
+
+----
+
+Each chunk begins with a line containing the chunk size (in hex),
+followed by a CRLF, then that many bytes of data, then another CRLF.
+The final chunk has a size of `0`, and after it the body is complete.
+
+This mechanism lets the server begin transmitting before the entire
+response is generated, reducing latency for the client. It also
+preserves persistent connections--the client reads chunks until it
+sees the terminating zero-length chunk, then it knows the next bytes
+on the connection belong to a new response.
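+
+Decoding follows the same steps in reverse: read a hexadecimal size
+line, read that many bytes, repeat until the zero-length chunk. The
+sketch below is a simplification (illustrative names, whole body
+already in memory, no chunk extensions or trailer fields):
+
+[source,cpp]
+----
+#include <iostream>
+#include <optional>
+#include <string>
+#include <string_view>
+
+// Reassemble a chunked body, or return nullopt if the framing is bad.
+std::optional<std::string> decode_chunked( std::string_view input )
+{
+    std::string out;
+    std::size_t pos = 0;
+    for( ;; )
+    {
+        auto eol = input.find( "\r\n", pos );
+        if( eol == std::string_view::npos )
+            return std::nullopt;
+        std::size_t size = 0;             // parse the hex chunk size
+        for( std::size_t i = pos; i < eol; ++i )
+        {
+            char c = input[ i ];
+            int d = ( c >= '0' && c <= '9' ) ? c - '0'
+                  : ( c >= 'a' && c <= 'f' ) ? c - 'a' + 10
+                  : ( c >= 'A' && c <= 'F' ) ? c - 'A' + 10 : -1;
+            if( d < 0 )
+                return std::nullopt;
+            size = size * 16 + d;
+        }
+        pos = eol + 2;
+        if( size == 0 )
+            return out;                   // final chunk reached
+        if( pos + size + 2 > input.size() )
+            return std::nullopt;          // truncated body
+        out.append( input.substr( pos, size ) );
+        pos += size + 2;                  // skip chunk data and its CRLF
+    }
+}
+
+int main()
+{
+    auto body = decode_chunked( "5\r\nHello\r\n7\r\n, world\r\n0\r\n\r\n" );
+    if( body )
+        std::cout << *body << '\n';       // Hello, world
+}
+----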
+
+=== Combining Content and Transfer Encoding
+
+Content encoding and transfer encoding can be applied together. A
+server might gzip-compress a dynamically generated HTML page and then
+send the compressed data in chunks:
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: text/html
+Content-Encoding: gzip
+Transfer-Encoding: chunked
+
+a3f
+<...chunk of gzip-compressed data...>
+7b2
+<...another chunk...>
+0
+
+----
+
+The client first reassembles the chunks, then decompresses the gzip
+payload, and finally processes the HTML. The two encodings are
+undone in the reverse of the order they were applied: transfer
+encoding is unwrapped first, content encoding second.
+
+== Character Sets
+
+Text-based media types carry an optional `charset` parameter on the
+`Content-Type` header that tells the client how to decode bytes into
+characters:
+
+[source]
+----
+Content-Type: text/html; charset=utf-8
+----
+
+Without this parameter, the client must guess the encoding--and guessing
+is a reliable source of garbled text. UTF-8 has become the dominant
+encoding on the web, handling virtually every script in use today. Older
+encodings like `iso-8859-1` (Latin-1) still appear, particularly on
+legacy systems.
+
+Clients can declare character-set preferences in the `Accept-Charset`
+header, but modern practice has largely moved past this. Most clients
+support UTF-8 and most servers send it. The header remains in the
+specification for completeness, but you will rarely need to set it
+explicitly.
+
+== A Complete Negotiated Exchange
+
+Here is an exchange that exercises several negotiation mechanisms at
+once. The client is a browser in France requesting a documentation page:
+
+[source]
+----
+GET /guide/getting-started HTTP/1.1
+Host: docs.example.com
+Accept: text/html;q=1.0, application/json;q=0.5
+Accept-Language: fr;q=1.0, en;q=0.7
+Accept-Encoding: gzip, br
+----
+
+The server has a French HTML variant and decides to compress it with
+Brotli:
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: text/html; charset=utf-8
+Content-Language: fr
+Content-Encoding: br
+Content-Length: 8421
+Vary: Accept-Language, Accept-Encoding
+
+<...8421 bytes of Brotli-compressed French HTML...>
+----
+
+The response headers tell the full story: the body is HTML in UTF-8
+(`Content-Type`), written in French (`Content-Language`), compressed
+with Brotli (`Content-Encoding`), and 8421 bytes in compressed form
+(`Content-Length`). The `Vary` header warns caches that both language
+and encoding influenced the choice, so future requests with different
+values for those headers need a fresh lookup.
+
+The client decompresses the Brotli payload and renders the French HTML
+page. The entire negotiation--language selection, format preference,
+compression--happened in a single round-trip, guided entirely by headers.
diff --git a/doc/modules/ROOT/pages/2.http-tutorial/2h.connection-management.adoc b/doc/modules/ROOT/pages/2.http-tutorial/2h.connection-management.adoc
new file mode 100644
index 00000000..ce3e7f97
--- /dev/null
+++ b/doc/modules/ROOT/pages/2.http-tutorial/2h.connection-management.adoc
@@ -0,0 +1,439 @@
+//
+// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
+//
+// Distributed under the Boost Software License, Version 1.0. (See accompanying
+// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+//
+// Official repository: https://github.com/cppalliance/http
+//
+
+= Connection Management
+
+Opening a TCP connection is like dialing a phone number before you can
+speak. There is a pause--a handshake--where both sides agree that a
+line is open. In the early days of the Web, every single HTTP request
+paid that cost: dial, say one thing, hang up, dial again, say the next
+thing. A page with ten images meant eleven phone calls. The Web
+worked, but the wasted time was enormous. Connection management is the
+story of how HTTP learned to keep the line open, send more per call,
+and eventually carry entire conversations over a single wire.
+
+== The Cost of a New Connection
+
+Before any HTTP message can travel, the client and server must
+establish a TCP connection through a three-way handshake:
+
+. The client sends a *SYN* packet.
+. The server responds with *SYN+ACK*.
+. The client replies with *ACK*.
+
+Only after this exchange can data flow. The handshake adds one full
+round-trip of latency before the first byte of the request is even
+sent. Between New York and London, that round-trip takes roughly 56
+milliseconds over fiber. For a small resource--a 304 Not Modified
+response, a tiny icon--the handshake can consume more than half the
+total time.
+
+[source]
+----
+Client Server
+ |--- SYN ----------------------->|
+ |<------------- SYN+ACK --------|
+ |--- ACK ----------------------->|
+ |--- GET /index.html HTTP/1.1 -->|
+ |<------------ HTTP/1.1 200 OK --|
+----
+
+TCP also starts slowly on purpose. A new connection uses _slow start_,
+throttling the amount of data in flight until the network proves it
+can handle more. During slow start the congestion window roughly
+doubles every round trip, so a fresh connection cannot send data at
+full speed; it needs time to ramp up. A connection that has already
+exchanged a modest amount of data is significantly faster than a new
+one.
+
+These two costs--handshake delay and slow start--make opening a new
+connection surprisingly expensive. Every optimization in this section
+exists to avoid paying them more than necessary.
+
+== HTTP/1.0: One Request, One Connection
+
+The original HTTP/1.0 protocol treated every request as an isolated
+event. The client opened a TCP connection, sent one request, received
+one response, and the connection was torn down. Loading a web page
+with an HTML document and three images required four separate TCP
+connections:
+
+[source]
+----
+[connect] GET /page.html → 200 OK [close]
+[connect] GET /logo.png → 200 OK [close]
+[connect] GET /photo.jpg → 200 OK [close]
+[connect] GET /style.css → 200 OK [close]
+----
+
+Each connection paid the handshake cost. Each started with a fresh
+slow-start window. As web pages grew richer--dozens of images,
+stylesheets, and scripts--the accumulated overhead became the dominant
+source of latency. Users stared at blank screens while their browsers
+quietly opened and closed connections.
+
+== Persistent Connections
+
+The fix was obvious: keep the connection open. Rather than hanging up
+after each response, the client and server could reuse the same TCP
+connection for multiple requests.
+
+HTTP/1.0 introduced this informally through a `Connection:
+Keep-Alive` header. If the client included this header, and the server
+echoed it back, the connection stayed open after the response:
+
+[source]
+----
+GET /page.html HTTP/1.0
+Host: www.example.com
+Connection: Keep-Alive
+
+----
+
+[source]
+----
+HTTP/1.0 200 OK
+Content-Type: text/html
+Content-Length: 3104
+Connection: Keep-Alive
+
+...
+----
+
+Both sides had to agree. If the server did not return `Connection:
+Keep-Alive`, the client assumed the connection would close. Every
+request that wanted persistence had to ask for it explicitly.
+
+HTTP/1.1 reversed the default. Persistent connections became automatic.
+An HTTP/1.1 connection stays open after every response unless one side
+explicitly signals otherwise with `Connection: close`:
+
+[source]
+----
+GET /style.css HTTP/1.1
+Host: www.example.com
+Connection: close
+
+----
+
+This single change eliminated enormous overhead. With connection reuse,
+the handshake cost is paid once, TCP slow start ramps up once, and all
+subsequent requests on that connection benefit from a warmed-up pipe.
+For a page requiring N resources from the same server, persistent
+connections save (N-1) round trips--often seconds of real-world
+latency.
+
+Either side can close a persistent connection at any time, even without
+sending `Connection: close` first. Servers close idle connections to
+free resources. Clients close connections they no longer need. The
+protocol requires that both sides tolerate unexpected closes and be
+prepared to retry requests.
+
+== The Connection Header
+
+The `Connection` header controls per-hop connection behavior. It is
+a hop-by-hop header, meaning it applies only to the immediate link
+between two participants and must not be forwarded by proxies.
+
+The header carries three kinds of values:
+
+* **`close`** -- signals that the connection should be shut down after
+ the current request/response.
+* **`Keep-Alive`** -- in HTTP/1.0, explicitly requests persistence.
+ Unnecessary in HTTP/1.1, where persistence is the default.
+* **Header field names** -- lists other hop-by-hop headers that must
+ be removed before forwarding. This "protects" headers from
+ accidental propagation through proxy chains.
+
+[source]
+----
+Connection: close
+----
+
+[source]
+----
+Connection: Keep-Alive
+Keep-Alive: timeout=30, max=100
+----
+
+The `Keep-Alive` header (when present alongside `Connection:
+Keep-Alive`) can include hints about how long the sender expects to
+hold the connection open and how many more requests it anticipates.
+These are advisory, not guarantees. A server that says `timeout=30`
+may still close the connection after five seconds if it needs the
+resources.
+
+A critical rule for intermediaries: proxies must parse the
+`Connection` header, remove it and every header it names, and then
+forward the message. A proxy that blindly relays `Connection:
+Keep-Alive` to an origin server creates a well-known failure called
+the "dumb proxy" problem--the server believes the proxy wants
+persistence, the proxy does not understand persistence, and the
+connection hangs.
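+
+A sketch of that stripping rule follows. It collects the tokens listed in
+`Connection`, then drops the `Connection` header itself and every header it
+names before the message is forwarded; a real proxy also removes the other
+always-hop-by-hop fields such as `TE`, `Upgrade`, and `Proxy-Authorization`:
+
+[source,cpp]
+----
+#include <algorithm>
+#include <cctype>
+#include <sstream>
+#include <string>
+#include <utility>
+#include <vector>
+
+using headers = std::vector<std::pair<std::string, std::string>>;
+
+static std::string lower(std::string s)
+{
+    std::transform(s.begin(), s.end(), s.begin(),
+        [](unsigned char c) { return std::tolower(c); });
+    return s;
+}
+
+// Remove Connection and every header it names (case-insensitively)
+// before forwarding a message to the next hop.
+headers strip_hop_by_hop(headers const& in)
+{
+    std::vector<std::string> drop = { "connection" };
+    for (auto const& h : in)
+    {
+        if (lower(h.first) != "connection")
+            continue;
+        std::istringstream tokens(h.second);
+        std::string name;
+        while (std::getline(tokens, name, ','))
+        {
+            auto b = name.find_first_not_of(" \t");
+            auto e = name.find_last_not_of(" \t");
+            if (b != std::string::npos)
+                drop.push_back(lower(name.substr(b, e - b + 1)));
+        }
+    }
+    headers out;
+    for (auto const& h : in)
+        if (std::find(drop.begin(), drop.end(), lower(h.first)) == drop.end())
+            out.push_back(h);
+    return out;
+}
+----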
+
+== Parallel Connections
+
+While persistent connections eliminated repeated handshakes, they did
+not solve another problem: serialization. On a single persistent
+connection, each request had to wait for the previous response to
+finish. If the server took 500 milliseconds to generate one resource,
+everything behind it queued up.
+
+Browsers worked around this by opening multiple TCP connections in
+parallel. Instead of one pipe, they opened several--typically six per
+host in modern browsers:
+
+[source]
+----
+Connection 1: GET /page.html → 200 OK → GET /app.js → 200 OK
+Connection 2: GET /style.css → 200 OK → GET /font.woff → 200 OK
+Connection 3: GET /logo.png → 200 OK
+Connection 4: GET /hero.jpg → 200 OK
+Connection 5: GET /icon.svg → 200 OK
+Connection 6: GET /data.json → 200 OK
+----
+
+Parallel connections overlap the delays. While one connection waits
+for a response, others are already transferring data. Users see
+images loading simultaneously across the page, which feels faster even
+when the wall-clock time is similar.
+
+The downsides are real:
+
+* Each connection pays its own handshake and slow-start costs.
+* Six connections consume six times the memory and CPU on both client
+ and server.
+* Under limited bandwidth, parallel streams compete for the same
+ pipe, and each one moves proportionally slower.
+* A hundred users each opening six connections means 600 connections
+ for the server to manage.
+
+Parallel connections are a pragmatic workaround, not an elegant
+solution. They exist because HTTP/1.x lacks multiplexing.
+
+== Pipelining
+
+HTTP/1.1 introduced _pipelining_ as an attempt at better concurrency
+within a single connection. With pipelining, a client can send several
+requests in a row without waiting for responses:
+
+[source]
+----
+Client                         Server
+ |--- GET /a.html ----------------->|
+ |--- GET /b.css ------------------>|
+ |--- GET /c.js ------------------->|
+ |<------------- 200 OK (a.html) ---|
+ |<------------- 200 OK (b.css) ----|
+ |<------------- 200 OK (c.js) -----|
+----
+
+By dispatching requests early, the client eliminates the dead time
+between sending a request and receiving the previous response. The
+server can even begin processing requests in parallel internally.
+
+But pipelining has a fatal constraint: **responses must arrive in the
+same order as the requests.** HTTP/1.1 messages carry no sequence
+numbers, so neither side can match responses to requests if they
+arrive out of order. This requirement creates _head-of-line blocking_.
+
+=== Head-of-Line Blocking
+
+Suppose the client pipelines three requests and the server can
+generate the second and third responses quickly, but the first takes
+a long time. The fast responses must wait, fully buffered, until the
+slow one finishes:
+
+[source]
+----
+Client                         Server
+ |--- GET /slow ------------------->|
+ |--- GET /fast1 ------------------>| fast1 ready, but must wait
+ |--- GET /fast2 ------------------>| fast2 ready, but must wait
+ |                                  | ...processing /slow...
+ |<------------ 200 OK (/slow) -----|
+ |<------------ 200 OK (/fast1) ----|
+ |<------------ 200 OK (/fast2) ----|
+----
+
+A single slow response blocks everything behind it. The server wastes
+memory buffering completed responses. If the connection fails
+mid-pipeline, the client must re-request everything it has not
+received--possibly triggering duplicate processing for non-idempotent
+requests.
+
+Additional problems made pipelining fragile in practice:
+
+* Many proxies and intermediaries did not support it correctly.
+* Servers had to buffer potentially large responses out of order.
+* Detecting whether an intermediary supports pipelining was unreliable.
+* Only idempotent requests, such as GET and HEAD, were safe to pipeline.
+
+Due to these issues, browser support for pipelining remained limited.
+Most browsers shipped with it disabled by default. The idea was
+sound--eliminating round-trip delays is always valuable--but the
+execution within HTTP/1.1's constraints was impractical. The real
+solution required a protocol-level change.
+
+== Closing Connections
+
+Connection management is not just about opening and keeping
+connections alive. Knowing _when_ and _how_ to close them correctly
+is equally important.
+
+=== Signaled Close
+
+Either party can signal its intent to close by including `Connection:
+close` in a request or response. After the client sends this header,
+it must not send additional requests on that connection. After the
+server sends it, the client knows the connection will end once the
+response is fully received.
+
+=== Idle Timeouts
+
+Servers close persistent connections that sit idle too long. A
+connection consuming resources but carrying no traffic is a liability.
+Typical idle timeouts range from 5 to 120 seconds, depending on the
+server's load and configuration. Clients must be prepared for the
+connection to vanish at any time and should reopen a new one when
+needed.
+
+=== Graceful Close
+
+TCP connections are bidirectional--each side has an independent
+input and output channel. A _full close_ shuts down both channels
+at once. A _half close_ shuts down only one, leaving the other
+open.
+
+The HTTP specification recommends that applications perform a _graceful
+close_ by first closing their output channel (signaling "I have
+nothing more to send") and then waiting for the peer to close its
+output channel. This avoids a dangerous race condition: if you close
+the input channel while the peer is still sending, the operating
+system may issue a TCP RST (reset), which wipes out any data the
+peer has received but not yet read.
+
+This matters most with pipelined connections. Imagine you pipelined
+ten requests and have received responses for the first eight, sitting
+unread in your buffer. Now your ninth request arrives at a server that
+has already decided to close. The server's RST wipes your buffer, and
+you lose the eight perfectly good responses you already had.
+
+The graceful close protocol:
+
+. Close your output channel (half close).
+. Continue reading from the input channel.
+. When the peer also closes, or a timeout expires, close fully.
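+
+Here is a minimal sketch of those three steps using the POSIX socket API
+(error handling and the timeout omitted for brevity):
+
+[source,cpp]
+----
+#include <sys/socket.h>
+#include <unistd.h>
+
+// Gracefully close a connected socket: half-close our output, drain
+// whatever the peer still has in flight, then release the descriptor.
+void graceful_close(int fd)
+{
+    ::shutdown(fd, SHUT_WR);            // 1. "I have nothing more to send"
+
+    char buf[4096];
+    while (::read(fd, buf, sizeof(buf)) > 0)
+        ;                               // 2. keep reading until the peer closes
+
+    ::close(fd);                        // 3. both directions done; close fully
+}
+----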
+
+=== Retries and Idempotency
+
+When a connection closes unexpectedly, the client must decide whether
+to retry the request. For idempotent methods--GET, HEAD, PUT,
+DELETE--retrying is safe because repeating the operation produces
+the same result. For non-idempotent methods like POST, retrying risks
+duplication. This is why browsers warn before resubmitting a form:
+the connection may have closed after the server processed the request
+but before the response arrived.
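+
+A client's retry decision can be reduced to a small predicate. The sketch
+below considers only method idempotency; real applications may layer on their
+own rules, such as idempotency keys for POST:
+
+[source,cpp]
+----
+#include <string>
+
+// GET, HEAD, PUT, DELETE, OPTIONS, and TRACE are defined as idempotent:
+// repeating them produces the same result, so an unexpected close can be
+// answered with a retry. POST is not, which is why browsers warn before
+// resubmitting a form.
+bool safe_to_retry(std::string const& method)
+{
+    return method == "GET"     || method == "HEAD"   ||
+           method == "PUT"     || method == "DELETE" ||
+           method == "OPTIONS" || method == "TRACE";
+}
+----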
+
+== How HTTP/2 Changed the Picture
+
+HTTP/2 addressed HTTP/1.1's connection limitations at the protocol
+level. Rather than opening multiple TCP connections or attempting
+fragile pipelining, HTTP/2 introduced _multiplexing_ over a single
+connection.
+
+An HTTP/2 connection breaks messages into small binary _frames_,
+each tagged with a stream identifier. Multiple streams flow
+concurrently over the same TCP connection, and their frames can
+interleave freely:
+
+[source]
+----
+Single TCP connection:
+ [stream 1: request ]
+ [stream 3: request ]
+ [stream 1: response frame 1]
+ [stream 3: response frame 1]
+ [stream 1: response frame 2]
+ [stream 3: response frame 2]
+----
+
+Because each frame identifies its stream, responses can arrive in any
+order and be reassembled correctly. Head-of-line blocking at the HTTP
+layer is eliminated. A slow response on stream 1 no longer blocks a
+fast response on stream 3.
+
+The consequences for connection management are dramatic:
+
+* **One connection per origin.** HTTP/2 uses a single TCP connection
+ between client and server, eliminating the overhead of multiple
+ parallel connections.
+* **No domain sharding.** With multiplexing, the workaround of
+ splitting resources across subdomains becomes counterproductive--it
+ prevents the protocol from prioritizing and compressing effectively.
+* **Stream prioritization.** The client can indicate which streams
+ matter most, allowing the server to allocate bandwidth intelligently.
+* **Flow control.** Both per-stream and per-connection flow control
+ prevent any one stream from monopolizing the pipe.
+
+However, HTTP/2 still runs over TCP, which imposes its own ordering
+constraints. If a TCP packet is lost, the entire connection stalls
+until that packet is retransmitted--even streams whose data was not
+in the lost packet. This is _transport-layer_ head-of-line blocking,
+and it is the problem that HTTP/3, built on QUIC over UDP, was
+designed to solve. But that is a story for a later section.
+
+== Practical Summary
+
+The evolution of HTTP connection management follows a clear arc toward
+doing more with fewer connections:
+
+[cols="1,2,3"]
+|===
+|Era |Strategy |Connection behavior
+
+|**HTTP/1.0**
+|One request per connection
+|Open, request, respond, close. Expensive and wasteful.
+
+|**HTTP/1.0+**
+|Keep-Alive
+|Opt-in persistence via `Connection: Keep-Alive`. A significant
+improvement, but both sides had to agree explicitly.
+
+|**HTTP/1.1**
+|Persistent by default
+|Connections stay open unless `Connection: close` is sent. Pipelining
+attempted but largely failed due to head-of-line blocking.
+
+|**Browsers**
+|Parallel connections
+|Up to six TCP connections per host to work around HTTP/1.x
+serialization. Effective but resource-heavy.
+
+|**HTTP/2**
+|Multiplexed streams
+|One connection per origin. Binary framing eliminates HTTP-layer
+head-of-line blocking. Stream priorities and flow control replace
+the need for multiple connections.
+|===
+
+The lesson running through all of this is that connections are
+expensive, and the protocol's history is a series of increasingly
+elegant solutions to that single economic fact. Keep connections open.
+Reuse them. And when one connection is not enough, multiplex rather
+than multiply.
+
+== Next Steps
+
+You now understand how HTTP connections are opened, reused, and closed,
+and why each generation of the protocol refined the strategy. The next
+section covers the mechanism that prevents the Web from doing the same
+work twice:
+
+* xref:2.http-tutorial/2i.caching.adoc[Caching] -- how clients and servers avoid redundant transfers
diff --git a/doc/modules/ROOT/pages/2.http-tutorial/2i.caching.adoc b/doc/modules/ROOT/pages/2.http-tutorial/2i.caching.adoc
new file mode 100644
index 00000000..839b7d61
--- /dev/null
+++ b/doc/modules/ROOT/pages/2.http-tutorial/2i.caching.adoc
@@ -0,0 +1,497 @@
+//
+// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
+//
+// Distributed under the Boost Software License, Version 1.0. (See accompanying
+// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+//
+// Official repository: https://github.com/cppalliance/http
+//
+
+= Caching
+
+The fastest HTTP request is the one that never reaches the server. Every
+time a browser fetches a stylesheet it already downloaded five seconds ago,
+or a CDN re-asks an origin for a logo that has not changed in a year, the
+network does work that produces no new information. The bytes travel the
+same wires, consume the same bandwidth, and impose the same latency--all
+to deliver an answer both sides already know.
+
+HTTP caching exists to eliminate this waste. A _cache_ stores a copy of a
+response and reuses it for subsequent matching requests, avoiding the
+round-trip to the origin server entirely when the stored copy is still
+valid. The result is faster page loads, lower bandwidth costs, and reduced
+server load. A single busy origin serving millions of users would collapse
+under the weight of redundant traffic if caches did not absorb the vast
+majority of it.
+
+The mechanism is deceptively simple in concept--save the response, serve it
+again later--but the details matter. How long is a stored response usable?
+How does a cache know when the original has changed? Who is allowed to
+store what? HTTP answers these questions through a set of headers and rules
+that give servers precise control over how their responses are cached, and
+give caches the tools to serve content efficiently without ever delivering
+stale data by accident.
+
+== Freshness: When a Stored Response is Good Enough
+
+A cached response does not stay valid forever. The server that generated
+it knows how volatile its content is, and HTTP provides two ways for the
+server to express this: a relative lifetime and an absolute expiration
+date.
+
+=== Cache-Control: max-age
+
+The modern and preferred approach is the `Cache-Control: max-age`
+directive. The value is the number of seconds the response may be
+considered _fresh_ from the moment it was generated:
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: text/html
+Cache-Control: max-age=3600
+
+
+...
+----
+
+This response tells any cache that stores it: "You may serve this copy
+for the next 3600 seconds (one hour) without contacting me." During that
+window the cache satisfies requests instantly--a _cache hit_. After the
+window closes the stored copy becomes _stale_, and the cache must check
+with the server before using it again.
+
+=== The Expires Header
+
+Before `Cache-Control` existed, HTTP/1.0 used the `Expires` header to
+specify an absolute date and time after which the response should no
+longer be considered fresh:
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: text/html
+Expires: Thu, 01 Jan 2026 00:00:00 GMT
+
+
+...
+----
+
+Absolute dates depend on the server's clock being accurate, which proved
+unreliable in practice. If both `Expires` and `Cache-Control: max-age`
+are present, `max-age` takes priority. New implementations should use
+`max-age`.
+
+=== The Age Header
+
+When a shared cache (such as a CDN) stores a response and later serves it,
+the `Age` header tells the next recipient how many seconds the response
+has been sitting in that cache:
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: text/html
+Cache-Control: max-age=3600
+Age: 1800
+
+
+...
+----
+
+A client receiving this response knows that 1800 of the original 3600
+seconds of freshness have already elapsed, leaving 1800 seconds of
+remaining freshness. Without the `Age` header, downstream caches would
+have no way to account for time spent in upstream caches.
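+
+The arithmetic a cache performs before serving a stored response is small.
+This sketch boils the age calculation down to its essence, ignoring the clock
+skew and network delay corrections a full implementation would apply:
+
+[source,cpp]
+----
+#include <cstdint>
+
+// `max_age` comes from Cache-Control, `age` from the Age header (0 if
+// absent), and `resident` is how many seconds the response has been
+// sitting in this cache since it was received.
+bool is_fresh(std::int64_t max_age, std::int64_t age, std::int64_t resident)
+{
+    std::int64_t current_age = age + resident;
+    return current_age < max_age;
+}
+
+// For the example above: max-age=3600 with Age: 1800 leaves the response
+// fresh in this cache for another 1800 seconds.
+----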
+
+=== Heuristic Caching
+
+If a response carries no `Cache-Control` or `Expires` header at all,
+caches do not simply refuse to store it. HTTP allows them to apply a
+_heuristic_: if the `Last-Modified` header is present, a common rule of
+thumb is to treat the response as fresh for roughly 10% of the time since
+it was last modified. A page last modified a year ago might be cached for
+about five weeks; a page modified yesterday, for about two hours.
+
+Heuristic caching is a sensible default, but it is unpredictable. Servers
+that care about caching behavior should always include an explicit
+`Cache-Control` header.
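+
+The 10% rule of thumb is equally simple to express. Both the factor and any
+upper bound (many caches cap the result at about a day) are implementation
+choices rather than protocol requirements:
+
+[source,cpp]
+----
+#include <cstdint>
+
+// Heuristic freshness: with no explicit lifetime, treat the response as
+// fresh for one tenth of the time between Last-Modified and Date.
+std::int64_t heuristic_freshness(std::int64_t date, std::int64_t last_modified)
+{
+    std::int64_t interval = date - last_modified; // seconds
+    return interval > 0 ? interval / 10 : 0;
+}
+----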
+
+== Validation: Checking Without Re-Downloading
+
+When a cached copy goes stale, the cache does not have to throw it away
+and fetch the entire response from scratch. Instead it can ask the server:
+"Has this resource changed since I last fetched it?" If the answer is no,
+the server sends back a tiny `304 Not Modified` response with no body,
+and the cache marks its existing copy as fresh again. This is called
+_revalidation_, and it can save enormous amounts of bandwidth.
+
+HTTP supports two revalidation mechanisms, each based on a different kind
+of identifier that the server attaches to the original response.
+
+=== Last-Modified and If-Modified-Since
+
+The simplest approach uses timestamps. The server includes a
+`Last-Modified` header in the response:
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: text/html
+Last-Modified: Mon, 15 Jan 2026 10:00:00 GMT
+Cache-Control: max-age=3600
+
+
+...
+----
+
+When the cached copy becomes stale and a client requests the same
+resource, the cache sends a _conditional request_ with an
+`If-Modified-Since` header carrying the stored timestamp:
+
+[source]
+----
+GET /index.html HTTP/1.1
+Host: www.example.com
+If-Modified-Since: Mon, 15 Jan 2026 10:00:00 GMT
+----
+
+If the resource has not changed, the server responds:
+
+[source]
+----
+HTTP/1.1 304 Not Modified
+Cache-Control: max-age=3600
+----
+
+No body is transferred. The cache refreshes the freshness lifetime of its
+stored copy and serves it to the client. If the resource _has_ changed,
+the server responds with a full `200 OK` and the new content.
+
+=== ETags and If-None-Match
+
+Timestamps have limitations. A file might be rewritten with identical
+content, changing its modification date without changing its meaning. Or
+changes might happen faster than the one-second granularity of HTTP dates.
+_Entity tags_ (ETags) solve both problems. An ETag is an opaque
+identifier--often a hash or version string--that the server generates
+for a specific version of a resource:
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: text/html
+ETag: "a1b2c3d4"
+Cache-Control: max-age=3600
+
+
+...
+----
+
+When revalidating, the cache sends the stored ETag in an `If-None-Match`
+header:
+
+[source]
+----
+GET /index.html HTTP/1.1
+Host: www.example.com
+If-None-Match: "a1b2c3d4"
+----
+
+If the server's current ETag for the resource matches, nothing has changed
+and the server returns `304 Not Modified`. If the ETag differs, the server
+returns the new content with a `200 OK` and a new ETag.
+
+When both `If-Modified-Since` and `If-None-Match` are present in the same
+request, the ETag comparison takes precedence. Servers are encouraged to
+send both `ETag` and `Last-Modified` in responses, because each serves
+different consumers: ETags provide precise cache validation, while
+`Last-Modified` is useful for crawlers, content-management systems, and
+HTTP/1.0 caches that do not understand ETags.
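+
+On the server side, handling a conditional request comes down to comparing
+validators and choosing between a full response and an empty 304. A minimal
+sketch, ignoring `*`, comma-separated lists, and weak comparison:
+
+[source,cpp]
+----
+#include <optional>
+#include <string>
+
+struct validation_result
+{
+    int status;         // 200 or 304
+    std::string etag;   // sent back either way
+};
+
+// `current` is the ETag of the resource as it exists now; `if_none_match`
+// is the value from the request, if the request was conditional.
+validation_result revalidate(std::string const& current,
+                             std::optional<std::string> const& if_none_match)
+{
+    if (if_none_match && *if_none_match == current)
+        return { 304, current };   // unchanged: headers only, no body
+    return { 200, current };       // changed or unconditional: full body
+}
+----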
+
+=== Weak Validators
+
+Sometimes a cosmetic change--a whitespace fix, a comment edit--should
+not force every cache in the world to re-download the resource. HTTP
+supports _weak validators_ for this purpose. A weak ETag is prefixed
+with `W/`:
+
+[source]
+----
+ETag: W/"v2.6"
+----
+
+A weak ETag signals that the resource is _semantically equivalent_ even
+if the bytes are not identical. Caches can still use weak ETags for
+revalidation, but certain operations that require exact byte-level
+matching (such as range requests) demand strong validators.
+
+== Cache-Control Directives
+
+The `Cache-Control` header is the primary tool for controlling caching
+behavior. Directives can appear in both responses and requests, each
+serving a different purpose. The most important response directives are
+summarized below.
+
+=== Controlling Who May Cache
+
+[cols="2a,3a"]
+|===
+|Directive|Meaning
+
+|`public`
+|Any cache--browser, proxy, CDN--may store the response. This is the
+default for most responses, but stating it explicitly can override
+restrictions that would otherwise apply (for example, to responses that
+required authentication).
+
+|`private`
+|Only the end user's browser may store the response. Shared caches such
+as proxies and CDNs must not. Use this for personalized content--a user's
+account page, a shopping cart, anything tied to a session.
+
+|===
+
+=== Controlling Storage and Reuse
+
+[cols="2a,3a"]
+|===
+|Directive|Meaning
+
+|`no-store`
+|The response must not be stored by any cache at all. Use this for
+sensitive data that should never be written to disk--bank statements,
+medical records, authentication tokens.
+
+|`no-cache`
+|The response _may_ be stored, but must not be served to a client without
+first revalidating with the origin server. Despite its misleading name,
+`no-cache` does not prevent caching--it prevents _unvalidated_ reuse.
+
+|`max-age=`
+|The response is fresh for the given number of seconds. After that it
+becomes stale and must be revalidated.
+
+|`s-maxage=`
+|Like `max-age`, but applies only to shared caches (proxies, CDNs). A
+response with `max-age=60, s-maxage=3600` tells browsers to revalidate
+after one minute, but allows CDNs to serve the cached copy for an hour.
+
+|===
+
+=== Controlling Stale Behavior
+
+[cols="2a,3a"]
+|===
+|Directive|Meaning
+
+|`must-revalidate`
+|Once the response becomes stale, the cache must not serve it without
+successful revalidation. If the origin server is unreachable, the cache
+must return a `504 Gateway Timeout` rather than serve stale content.
+
+|`proxy-revalidate`
+|Same as `must-revalidate`, but applies only to shared caches.
+
+|`immutable`
+|Tells the cache that the response body will never change. Even when the
+user manually reloads the page, the browser may skip revalidation. This
+is ideal for versioned static assets (like `app.v3.js`) whose URL changes
+whenever their content changes.
+
+|===
+
+=== Request-Side Directives
+
+Clients can also include `Cache-Control` directives in their requests to
+influence how caches along the path behave:
+
+[cols="2a,3a"]
+|===
+|Directive|Meaning
+
+|`no-cache`
+|Forces the cache to revalidate before serving a stored response. Browsers
+send this on a normal page reload.
+
+|`no-store`
+|Tells intermediate caches not to store the response.
+
+|`max-age=0`
+|The client will not accept a cached response older than zero
+seconds--effectively requiring revalidation.
+
+|`max-stale=`
+|The client is willing to accept a response that has been stale for up to
+the specified number of seconds. Useful for unreliable network conditions
+where some content is better than none.
+
+|`min-fresh=`
+|The client wants a response that will remain fresh for at least the
+specified number of seconds.
+
+|`only-if-cached`
+|The client wants a response only if it is already in the cache. If no
+cached response is available, the cache returns `504 Gateway Timeout`
+instead of fetching from the origin.
+
+|===
+
+== Types of Caches
+
+Caches exist at multiple points along the path between a client and an
+origin server. Each type serves a different purpose.
+
+=== Private Caches
+
+A _private cache_ belongs to a single user--typically the browser's
+built-in cache. It stores responses on disk or in memory and serves them
+when the same user revisits a page. Because no other user can access it, a
+private cache is the only appropriate place to store personalized content.
+
+Every modern browser maintains a private cache. When you load a page and
+then press the back button, the browser often serves the previous page
+from its cache without any network activity at all. This is why "back" is
+nearly instantaneous even on a slow connection.
+
+=== Shared Caches
+
+A _shared cache_ sits between multiple clients and the origin server.
+Shared caches come in two flavors:
+
+_Proxy caches_ are forward proxies deployed by a network operator--an ISP
+or a corporate IT department--to reduce outbound bandwidth. All users on
+the network share the same cache, so a popular resource fetched by one
+user can be served to another without reaching the origin.
+
+_Reverse proxy caches_ (including CDNs) are deployed by the content
+provider in front of the origin server. They absorb traffic, distribute
+content to servers closer to end users, and shield the origin from flash
+crowds. When millions of users request the same news article within
+seconds, the CDN serves its cached copy and the origin barely notices.
+
+The `private` and `s-maxage` directives exist specifically to let servers
+control behavior differently for browser caches and shared caches, because
+what is safe to store in a user's own browser is not always safe to store
+on a shared proxy.
+
+== Cache Keys and the Vary Header
+
+A cache identifies a stored response by its URL. Two requests for the same
+URL normally receive the same cached response. But content negotiation
+complicates this: the same URL might produce a French HTML page for one
+client and a gzip-compressed English JSON response for another.
+
+The `Vary` header, discussed in the content negotiation section, tells
+caches which request headers influenced the server's choice of response.
+A cache that respects `Vary` stores multiple variants keyed by the URL
+_plus_ the values of the headers listed in `Vary`:
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: text/html
+Content-Encoding: gzip
+Vary: Accept-Encoding, Accept-Language
+Cache-Control: max-age=3600
+----
+
+This response instructs caches to maintain separate stored copies for
+different combinations of `Accept-Encoding` and `Accept-Language`. A
+request with `Accept-Encoding: br` will not match a stored response that
+was compressed with gzip, even though the URL is identical.
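+
+One way to picture this is as a two-part cache key: the URL, plus the
+request's values for every header named in `Vary`. The sketch below builds
+such a key; real caches also normalize the header values before comparing:
+
+[source,cpp]
+----
+#include <string>
+#include <utility>
+#include <vector>
+
+// `vary` holds the lowercase names listed in the Vary header, and `req`
+// holds the request's headers with lowercase names. A request matches a
+// stored response only if the whole key is equal.
+std::string cache_key(
+    std::string const& url,
+    std::vector<std::string> const& vary,
+    std::vector<std::pair<std::string, std::string>> const& req)
+{
+    std::string key = url;
+    for (auto const& name : vary)
+    {
+        key += '\n';
+        key += name;
+        key += ':';
+        for (auto const& h : req)
+            if (h.first == name)
+                key += h.second;   // an absent header contributes an empty value
+    }
+    return key;
+}
+----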
+
+== Cache Busting
+
+A response cached with a long `max-age` cannot be revoked. Once a CDN
+has stored a response for a year, no header the server sends afterward
+will reach that CDN until the year expires. The server has, in effect,
+relinquished control of the URL for the duration of the freshness
+lifetime.
+
+The standard solution is _cache busting_: encoding a version identifier
+into the URL itself. When the content changes, the URL changes, and the
+old cached response is simply never requested again:
+
+[source]
+----
+<link rel="stylesheet" href="style.v3.css">
+<script src="app.v3.js"></script>
+----
+
+The HTML page that references these URLs uses `no-cache` (forcing
+revalidation on every load), while the assets themselves carry
+`max-age=31536000, immutable`--one year, and no revalidation even on
+reload. When the stylesheet changes, the HTML is updated to reference
+`style.v4.css`, and the old `v3` response ages out of caches on its own.
+
+This pattern separates _mutable resources_ (the HTML page) from
+_immutable versioned assets_ (stylesheets, scripts, images), giving
+each the caching strategy it deserves.
+
+== A Complete Caching Exchange
+
+Here is a sequence that exercises freshness, staleness, and revalidation.
+A browser requests a product page:
+
+[source]
+----
+GET /products/widget HTTP/1.1
+Host: shop.example.com
+Accept: text/html
+Accept-Encoding: gzip
+----
+
+The server responds with a fresh copy:
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: text/html; charset=utf-8
+Content-Encoding: gzip
+Cache-Control: max-age=600, must-revalidate
+ETag: "8f14e45f"
+Last-Modified: Sat, 07 Feb 2026 12:00:00 GMT
+Content-Length: 4821
+Vary: Accept-Encoding
+
+
+...
+----
+
+The browser caches this response. For the next ten minutes (600 seconds),
+any request for the same URL is served directly from the browser cache
+with no network activity.
+
+After ten minutes the cached copy is stale. The user navigates to the
+same page again. The browser sends a conditional request:
+
+[source]
+----
+GET /products/widget HTTP/1.1
+Host: shop.example.com
+Accept: text/html
+Accept-Encoding: gzip
+If-None-Match: "8f14e45f"
+If-Modified-Since: Sat, 07 Feb 2026 12:00:00 GMT
+----
+
+The server checks. The product page has not changed, so it responds:
+
+[source]
+----
+HTTP/1.1 304 Not Modified
+Cache-Control: max-age=600, must-revalidate
+ETag: "8f14e45f"
+----
+
+No body is transferred. The browser resets the freshness clock on its
+stored copy and renders the page instantly. The entire revalidation
+exchange--a small request and a tiny response--consumed a fraction of
+the bandwidth that a full download would have required.
+
+If the product page _had_ changed, the server would have returned a
+`200 OK` with the new content, a new `ETag`, and a new `Last-Modified`
+date. The browser would replace its stored copy and render the updated
+page. Either way, the user sees correct content; caching only decides
+how much network work is needed to get it.
diff --git a/doc/modules/ROOT/pages/2.http-tutorial/2j.authentication.adoc b/doc/modules/ROOT/pages/2.http-tutorial/2j.authentication.adoc
new file mode 100644
index 00000000..ce716b4a
--- /dev/null
+++ b/doc/modules/ROOT/pages/2.http-tutorial/2j.authentication.adoc
@@ -0,0 +1,425 @@
+//
+// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
+//
+// Distributed under the Boost Software License, Version 1.0. (See accompanying
+// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+//
+// Official repository: https://github.com/cppalliance/http
+//
+
+= Authentication and Security
+
+HTTP is stateless. The server remembers nothing about the last request
+when the next one arrives. Yet the Web is full of private data--bank
+accounts, medical records, corporate documents, personal photographs.
+Someone has to stand at the door and ask, "Who are you, and can you
+prove it?"
+
+Authentication is how HTTP answers that question. The protocol does not
+store passwords or manage sessions; it simply defines a conversation
+pattern--a _challenge_ from the server, a _response_ from the client--
+that lets identity be proven within each request. This pattern is
+general enough to support simple password checks, cryptographic digests,
+and modern token-based systems, all using the same pair of headers and
+the same status code that HTTP has carried since 1996.
+
+== The Challenge/Response Framework
+
+Every HTTP authentication exchange follows the same shape, regardless of
+the scheme in use. The server challenges, and the client responds.
+
+When a client requests a protected resource without credentials, the
+server does not simply refuse. It tells the client _how_ to
+authenticate:
+
+[source]
+----
+GET /account/balance HTTP/1.1
+Host: bank.example.com
+----
+
+[source]
+----
+HTTP/1.1 401 Unauthorized
+WWW-Authenticate: Basic realm="Online Banking"
+----
+
+The `401 Unauthorized` status code means: "I know what you asked for,
+but I need proof of who you are before I hand it over." The
+`WWW-Authenticate` header tells the client which authentication scheme
+to use and provides any parameters the scheme requires.
+
+The client gathers credentials--usually by prompting the user--and
+retries the request with an `Authorization` header:
+
+[source]
+----
+GET /account/balance HTTP/1.1
+Host: bank.example.com
+Authorization: Basic YWxpY2U6czNjcjN0
+----
+
+If the credentials are valid, the server returns the resource normally:
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: application/json
+
+{"balance": 4217.83}
+----
+
+If they are not, the server sends another `401` and the cycle repeats.
+This three-step dance--request, challenge, authorized request--is the
+foundation of all HTTP authentication.
+
+== Security Realms
+
+A single server often protects different resources with different
+passwords. A corporate intranet might have one set of credentials for
+financial reports and another for the employee directory. HTTP handles
+this through _realms_.
+
+The `realm` parameter in the `WWW-Authenticate` header names the
+protected area:
+
+[source]
+----
+WWW-Authenticate: Basic realm="Corporate Financials"
+----
+
+When the browser encounters this challenge, it displays the realm name
+to the user, so they know _which_ username and password to enter. A
+request to a different part of the same server might trigger a different
+challenge:
+
+[source]
+----
+WWW-Authenticate: Basic realm="Employee Directory"
+----
+
+Realms let a server partition its resources into independent protection
+spaces, each with its own set of authorized users. The browser
+remembers which credentials belong to which realm and sends them
+automatically on subsequent requests to the same space.
+
+== Basic Authentication
+
+Basic authentication is the oldest and simplest HTTP authentication
+scheme. It is supported by virtually every client and server, and it
+works like this:
+
+. The client joins the username and password with a colon:
+ `alice:s3cr3t`
+. It encodes the result using Base64: `YWxpY2U6czNjcjN0`
+. It sends the encoded string in the `Authorization` header.
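+
+The encoding step is mechanical. The sketch below exists only to make it
+concrete; production code should use a vetted Base64 routine rather than
+hand-rolling one:
+
+[source,cpp]
+----
+#include <cstdio>
+#include <string>
+
+// Minimal Base64 encoder, enough to show how "alice:s3cr3t" becomes
+// "YWxpY2U6czNjcjN0".
+static std::string base64(std::string const& in)
+{
+    static char const tbl[] =
+        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
+    std::string out;
+    std::size_t i = 0;
+    for (; i + 2 < in.size(); i += 3)
+    {
+        unsigned v = (unsigned char)in[i] << 16 |
+                     (unsigned char)in[i + 1] << 8 |
+                     (unsigned char)in[i + 2];
+        out += tbl[v >> 18];
+        out += tbl[(v >> 12) & 63];
+        out += tbl[(v >> 6) & 63];
+        out += tbl[v & 63];
+    }
+    if (i + 1 == in.size())            // one trailing byte
+    {
+        unsigned v = (unsigned char)in[i] << 16;
+        out += tbl[v >> 18];
+        out += tbl[(v >> 12) & 63];
+        out += "==";
+    }
+    else if (i + 2 == in.size())       // two trailing bytes
+    {
+        unsigned v = (unsigned char)in[i] << 16 |
+                     (unsigned char)in[i + 1] << 8;
+        out += tbl[v >> 18];
+        out += tbl[(v >> 12) & 63];
+        out += tbl[(v >> 6) & 63];
+        out += '=';
+    }
+    return out;
+}
+
+int main()
+{
+    // Prints: Authorization: Basic YWxpY2U6czNjcjN0
+    std::printf("Authorization: Basic %s\n", base64("alice:s3cr3t").c_str());
+}
+----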
+
+A complete exchange:
+
+[source]
+----
+GET /family/photos HTTP/1.1
+Host: www.example.com
+----
+
+[source]
+----
+HTTP/1.1 401 Unauthorized
+WWW-Authenticate: Basic realm="Family"
+----
+
+[source]
+----
+GET /family/photos HTTP/1.1
+Host: www.example.com
+Authorization: Basic YWxpY2U6czNjcjN0
+----
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: text/html
+
+...
+----
+
+Base64 is an encoding, not encryption. Anyone who intercepts the
+`Authorization` header can trivially decode it and read the password in
+plain text. Basic authentication is only safe when the connection itself
+is encrypted with TLS--that is, when the URL begins with `https`. Over
+plain HTTP, it is no more secure than shouting your password across a
+crowded room.
+
+Despite this weakness, Basic authentication remains useful. It is easy
+to implement, universally understood, and perfectly adequate when
+layered on top of HTTPS. Many internal tools and APIs still rely on it.
+
+== Digest Authentication
+
+Digest authentication was designed to fix Basic's most glaring flaw:
+sending the password in the clear. Instead of transmitting the actual
+password, the client sends a _digest_--a one-way cryptographic hash
+that proves knowledge of the password without revealing it.
+
+=== The Core Idea
+
+The server and the client both know the secret password. Instead of
+sending that password, the client computes a hash of the password mixed
+with other values, and sends the hash. The server performs the same
+computation and compares results. If they match, the client must have
+known the password. An attacker who intercepts the hash cannot reverse
+it to recover the original password.
+
+=== Preventing Replay Attacks
+
+A hash alone is not enough. If an attacker captures the digest, they
+could replay it to the server and gain access without knowing the
+password. Digest authentication prevents this with a _nonce_--a unique
+value the server generates for each challenge.
+
+The client mixes the nonce into its hash computation. Because the nonce
+changes with each challenge, yesterday's captured digest is useless
+today.
+
+=== A Digest Exchange
+
+[source]
+----
+GET /financials/forecast.xlsx HTTP/1.1
+Host: corp.example.com
+----
+
+[source]
+----
+HTTP/1.1 401 Unauthorized
+WWW-Authenticate: Digest
+ realm="Corporate Financials",
+ nonce="7c4f8e2a9b3d1c5f",
+ qop="auth"
+----
+
+The server provides a realm, a fresh nonce, and the quality of
+protection (`qop`) it supports.
+
+[source]
+----
+GET /financials/forecast.xlsx HTTP/1.1
+Host: corp.example.com
+Authorization: Digest
+ username="bob",
+ realm="Corporate Financials",
+ nonce="7c4f8e2a9b3d1c5f",
+ uri="/financials/forecast.xlsx",
+ qop=auth,
+ nc=00000001,
+ cnonce="a1b2c3d4",
+ response="3b8a21f6c4e7d9b0a5f2e8c1d4b7a6e3"
+----
+
+The `response` field is the hash. It incorporates the username,
+password, realm, nonce, request method, URI, and a client-generated
+nonce (`cnonce`). The `nc` (nonce count) tracks how many times the
+client has used this nonce, adding another layer of replay protection.
+
+[source]
+----
+HTTP/1.1 200 OK
+Content-Type: application/vnd.ms-excel
+
+<...spreadsheet data...>
+----
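+
+For the `qop=auth` mode shown above, the `response` value is computed in two
+stages, roughly as sketched below. The hash function is passed in rather than
+implemented here; any MD5 routine returning lowercase hex will do:
+
+[source,cpp]
+----
+#include <functional>
+#include <string>
+
+// Digest "response" computation for qop=auth (see RFC 7616). `md5_hex`
+// is any callable producing the lowercase hexadecimal MD5 of its input.
+std::string digest_response(
+    std::function<std::string(std::string const&)> const& md5_hex,
+    std::string const& user, std::string const& realm,
+    std::string const& password, std::string const& method,
+    std::string const& uri, std::string const& nonce,
+    std::string const& nc, std::string const& cnonce)
+{
+    std::string ha1 = md5_hex(user + ":" + realm + ":" + password);
+    std::string ha2 = md5_hex(method + ":" + uri);
+    return md5_hex(ha1 + ":" + nonce + ":" + nc + ":" +
+                   cnonce + ":auth:" + ha2);
+}
+----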
+
+Digest authentication never sends the password over the wire. However,
+it has seen limited adoption. In practice, most deployments choose Basic
+authentication over TLS, which provides stronger overall security with
+less complexity.
+
+== Bearer Tokens
+
+Modern APIs rarely ask users for a password on every request. Instead,
+the client authenticates once--typically through a login page or an
+OAuth 2.0 flow--and receives a _token_. This token is a string that
+represents the client's identity and permissions. On subsequent
+requests, the client presents the token rather than a username and
+password.
+
+The `Bearer` scheme carries these tokens:
+
+[source]
+----
+GET /api/user/profile HTTP/1.1
+Host: api.example.com
+Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
+----
+
+The server validates the token--checking its signature, its expiration,
+and the permissions it grants--and either serves the resource or rejects
+the request.
+
+Bearer tokens are opaque to the protocol. HTTP does not know or care
+what is inside them. They might be JSON Web Tokens (JWTs) containing
+encoded claims, or random strings that the server looks up in a
+database. The protocol's only job is to carry them in the
+`Authorization` header.
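+
+From the server's perspective, the first step is simply to pull the token out
+of the header; everything after that depends on what kind of token the
+deployment uses. A small sketch:
+
+[source,cpp]
+----
+#include <cstddef>
+#include <optional>
+#include <string>
+
+// Extract the token from an Authorization header value. A production
+// parser would also compare the scheme name case-insensitively and
+// tolerate extra whitespace.
+std::optional<std::string> bearer_token(std::string const& authorization)
+{
+    constexpr char prefix[] = "Bearer ";
+    constexpr std::size_t n = sizeof(prefix) - 1;
+    if (authorization.compare(0, n, prefix) != 0)
+        return std::nullopt;           // not a Bearer credential
+    return authorization.substr(n);    // the opaque token
+}
+----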
+
+Like Basic authentication, Bearer tokens must be protected by TLS. A
+stolen token grants the same access as a stolen password, and tokens
+travel in every request header. HTTPS ensures they stay confidential.
+
+== 401 Versus 403
+
+Two status codes relate to authentication and authorization, and the
+distinction between them matters:
+
+[cols="1a,3a"]
+|===
+|Status|Meaning
+
+|`401 Unauthorized`
+|The server does not know who you are. Provide valid credentials and
+try again. The response includes a `WWW-Authenticate` header describing
+how to authenticate.
+
+|`403 Forbidden`
+|The server knows who you are, but you are not allowed to access this
+resource. Re-authenticating will not help--your identity is established,
+but your permissions are insufficient.
+
+|===
+
+A `401` is a question: "Who are you?" A `403` is a verdict: "I know
+who you are, and the answer is no."
+
+Some servers return `404 Not Found` instead of `403` to hide the
+existence of a resource from unauthorized users. If an attacker cannot
+tell whether a URL leads to a protected page or nothing at all, that
+itself is a layer of defense.
+
+== Proxy Authentication
+
+Authentication can happen at intermediaries, not just origin servers. A
+corporate proxy might require employees to identify themselves before
+any request reaches the open Internet. HTTP supports this with a
+parallel set of headers and a dedicated status code.
+
+[cols="1a,1a"]
+|===
+|Origin Server|Proxy Server
+
+|`401 Unauthorized`
+|`407 Proxy Authentication Required`
+
+|`WWW-Authenticate`
+|`Proxy-Authenticate`
+
+|`Authorization`
+|`Proxy-Authorization`
+
+|===
+
+The exchange follows the same challenge/response pattern. The proxy
+sends a `407` with a `Proxy-Authenticate` header; the client retries
+with `Proxy-Authorization`. Both origin-server and proxy authentication
+can coexist in the same request, each using its own set of headers.
+
+== HTTPS and Transport Security
+
+HTTP authentication proves _identity_, but it does not protect the
+_conversation_. Headers, bodies, and credentials all travel as readable
+text unless the connection is encrypted. This is where TLS--Transport
+Layer Security--enters the picture.
+
+When a URL begins with `https`, the client and server perform a TLS
+handshake before any HTTP data is exchanged. This handshake establishes
+three properties:
+
+**Encryption.** All data between client and server is encrypted.
+Eavesdroppers see only opaque bytes.
+
+**Server authentication.** The server presents a certificate proving
+its identity. The client verifies the certificate against a trusted
+chain of certificate authorities. This prevents an attacker from
+impersonating the server.
+
+**Integrity.** Every message includes a cryptographic checksum. If a
+single byte is altered in transit, the receiver detects the tampering
+and discards the message.
+
+TLS does not replace HTTP authentication--it complements it. HTTP
+authentication answers "who is the client?" TLS answers "is this really
+the server, and is anyone listening?" Together they provide both ends of
+the trust equation.
+
+Without TLS, Basic credentials are exposed, Bearer tokens can be
+stolen, and even Digest authentication is vulnerable to sophisticated
+attacks. In modern practice, HTTPS is not optional for any
+authenticated endpoint. It is the foundation on which all other
+security mechanisms rest.
+
+== A Complete Authenticated Exchange
+
+Here is an annotated exchange that ties together the concepts from this
+section. A client accesses a protected API endpoint:
+
+[source]
+----
+GET /api/orders HTTP/1.1 <1>
+Host: api.example.com <2>
+----
+
+<1> The client requests a protected resource
+<2> Over an HTTPS connection (implied by the API)
+
+The server challenges:
+
+[source]
+----
+HTTP/1.1 401 Unauthorized <1>
+WWW-Authenticate: Bearer realm="Orders API" <2>
+----
+
+<1> Authentication required
+<2> The server expects a Bearer token
+
+The client authenticates (perhaps through an OAuth flow) and retries:
+
+[source]
+----
+GET /api/orders HTTP/1.1 <1>
+Host: api.example.com <2>
+Authorization: Bearer eyJhbGciOi... <3>
+----
+
+<1> Same request, repeated
+<2> Same host
+<3> Now carrying a valid token
+
+The server validates the token and responds:
+
+[source]
+----
+HTTP/1.1 200 OK <1>
+Content-Type: application/json <2>
+Cache-Control: private, no-store <3>
+
+[{"id": 1, "item": "Claw Hammer"}, ...] <4>
+----
+
+<1> Success
+<2> JSON response
+<3> Sensitive data--no caching allowed
+<4> The protected resource
+
+The `Cache-Control: private, no-store` directive is worth noting.
+Authenticated responses often contain data specific to one user. Caching
+such responses in a shared proxy would leak private data to other users.
+The `no-store` directive tells every cache along the path--browser,
+proxy, CDN--that this response must never be stored.
+
+Authentication, authorization, and transport security each solve a
+different piece of the same puzzle. Authentication proves identity.
+Authorization determines what that identity may access. TLS ensures
+the entire conversation remains private. HTTP weaves all three into its
+stateless request/response model through a handful of headers and status
+codes--no sessions, no stored state, just a protocol-level conversation
+that scales to billions of requests per day.
diff --git a/doc/modules/ROOT/pages/2.http-tutorial/2k.http2.adoc b/doc/modules/ROOT/pages/2.http-tutorial/2k.http2.adoc
new file mode 100644
index 00000000..851f16c8
--- /dev/null
+++ b/doc/modules/ROOT/pages/2.http-tutorial/2k.http2.adoc
@@ -0,0 +1,487 @@
+//
+// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
+//
+// Distributed under the Boost Software License, Version 1.0. (See accompanying
+// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+//
+// Official repository: https://github.com/cppalliance/http
+//
+
+= HTTP/2
+
+For twenty years, HTTP/1.1 carried the Web on its back. It was
+reliable, well understood, and ubiquitous. It was also, by modern
+standards, painfully slow. Every browser window that loaded a complex
+page was quietly opening six TCP connections to the same server,
+queuing dozens of requests behind each other, and wasting bandwidth
+on headers that repeated the same information request after request.
+Developers invented clever workarounds--image sprites, domain sharding,
+inlined resources--but these were band-aids on a protocol that had
+never been designed for the Web it created.
+
+HTTP/2, standardized in 2015 as RFC 7540 and later revised as
+RFC 9113, replaces the text-based wire format of HTTP/1.1 with a
+binary framing layer. The semantics are identical: methods, status
+codes, headers, URIs--everything your application already understands
+remains unchanged. What changes is how those messages are encoded,
+transported, and multiplexed over the network. The result is faster
+page loads, fewer connections, less overhead, and a simpler deployment
+story.
+
+== Why HTTP/1.1 Hit a Wall
+
+To appreciate what HTTP/2 fixes, you need to see what was broken.
+
+HTTP/1.1 delivers responses _sequentially_ on each connection. If a
+client sends two requests on the same connection, the server must
+finish sending the first response before it can begin the second. This
+is called _head-of-line blocking_. While the server is busy with a
+slow database query for request one, request two--which might be a
+tiny icon file ready to go--waits in line.
+
+Browsers compensated by opening up to six parallel connections per
+origin. A page with sixty resources across two origins could use twelve
+simultaneous connections. But each connection requires a TCP handshake
+(one round-trip) and, over HTTPS, a TLS handshake (another round-trip
+or two). On a transatlantic link with 80ms round-trip time, those
+handshakes alone cost hundreds of milliseconds before a single byte
+of content arrives.
+
+HTTP/1.1 pipelining was supposed to help: the client could send
+several requests without waiting for responses. In practice it was
+fragile, poorly supported by intermediaries, and never widely deployed.
+The problem needed a deeper solution.
+
+== From SPDY to HTTP/2
+
+Google began experimenting with an alternative in 2009 under the name
+SPDY (pronounced "speedy"). The goals were ambitious: cut page load
+times in half without requiring website authors to change their
+content. Lab tests on the top 25 websites showed pages loading up to
+55% faster.
+
+By 2012, SPDY was supported in Chrome, Firefox, and Opera, and major
+sites like Google, Twitter, and Facebook were serving traffic over it.
+Seeing this momentum, the IETF HTTP Working Group adopted SPDY as the
+starting point for an official successor to HTTP/1.1. Over the next
+three years, SPDY and the emerging HTTP/2 standard coevolved: SPDY
+served as the experimental branch where proposals were tested in
+production before being folded into the specification.
+
+In May 2015, RFC 7540 (HTTP/2) and RFC 7541 (HPACK header compression)
+were published. Google retired SPDY shortly after. By the time the
+standard was approved, dozens of production-ready client and server
+implementations already existed--an unusually smooth launch for a
+major protocol revision.
+
+== The Binary Framing Layer
+
+The single most important change in HTTP/2 is invisible to
+applications: the replacement of HTTP/1.1's newline-delimited text
+format with a binary framing layer.
+
+In HTTP/1.1, a request looks like this on the wire:
+
+[source]
+----
+GET /index.html HTTP/1.1\r\n
+Host: www.example.com\r\n
+Accept: text/html\r\n
+\r\n
+----
+
+Parsing this requires scanning for line endings, handling optional
+whitespace, and dealing with varying termination sequences--a process
+that is error-prone and surprisingly expensive at scale.
+
+HTTP/2 replaces this with fixed-length binary frames. Each frame
+begins with a nine-byte header:
+
+[source]
+----
++-----------------------------------------------+
+|               Length (24 bits)                |
++---------------+---------------+---------------+
+|    Type (8)   |   Flags (8)   |
++-+-------------+---------------+---------------+
+|R|         Stream Identifier (31 bits)         |
++-+---------------------------------------------+
+|             Frame Payload (0...)              |
++-----------------------------------------------+
+----
+
+* **Length** tells the receiver how many bytes of payload follow.
+* **Type** identifies what the frame carries (headers, data, settings,
+ and so on).
+* **Flags** carry frame-specific signals, such as "this is the last
+ frame of the message."
+* **Stream Identifier** tags every frame with the stream it belongs to,
+ so frames from different streams can be interleaved on a single
+ connection.
+
+Binary framing is more compact, faster to parse, and unambiguous.
+The client and server handle the encoding transparently--applications
+continue to work with the same HTTP methods, headers, and status codes
+they always have.
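+
+Inside a protocol implementation, decoding that fixed-size header is a few
+lines of code. A sketch, with fields taken big-endian from the wire (a real
+implementation would also check the length against the negotiated
+`SETTINGS_MAX_FRAME_SIZE`):
+
+[source,cpp]
+----
+#include <cstdint>
+
+struct frame_header
+{
+    std::uint32_t length;      // 24-bit payload length
+    std::uint8_t  type;        // DATA, HEADERS, SETTINGS, ...
+    std::uint8_t  flags;
+    std::uint32_t stream_id;   // 31 bits; the high bit is reserved
+};
+
+// `p` points at the nine bytes of a frame header.
+frame_header parse_frame_header(unsigned char const* p)
+{
+    frame_header h;
+    h.length    = std::uint32_t(p[0]) << 16 |
+                  std::uint32_t(p[1]) << 8  | p[2];
+    h.type      = p[3];
+    h.flags     = p[4];
+    h.stream_id = (std::uint32_t(p[5]) & 0x7F) << 24 |
+                   std::uint32_t(p[6]) << 16 |
+                   std::uint32_t(p[7]) << 8  |
+                   std::uint32_t(p[8]);
+    return h;
+}
+----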
+
+== Streams, Messages, and Frames
+
+HTTP/2 introduces three layers of abstraction within a single TCP
+connection:
+
+**Frame**::
+The smallest unit of communication. Every frame has a type--`HEADERS`,
+`DATA`, `SETTINGS`, `WINDOW_UPDATE`, `PUSH_PROMISE`, `PING`,
+`GOAWAY`, `RST_STREAM`, `PRIORITY`, or `CONTINUATION`--and carries
+the stream identifier in its header.
+
+**Message**::
+A complete HTTP request or response, composed of one or more frames.
+A `HEADERS` frame begins the message; zero or more `DATA` frames
+carry the body; a flag on the final frame marks the end.
+
+**Stream**::
+A bidirectional flow of frames within the connection, identified by
+a unique integer. Client-initiated streams use odd identifiers (1, 3,
+5, ...); server-initiated streams use even identifiers. Both sides
+increment a simple counter to avoid collisions.
+
+All communication happens over a single TCP connection. The connection
+carries many concurrent streams. Each stream carries one message
+exchange. Each message is broken into frames that can be interleaved
+with frames from other streams. On the receiving end, frames are
+reassembled into messages using the stream identifier.
+
+This layering is the foundation of everything else HTTP/2 offers.
+
+== Multiplexing
+
+Multiplexing is the headline feature. It solves head-of-line blocking
+at the HTTP layer in a single stroke.
+
+In HTTP/1.1, loading a page with a stylesheet, a script, and three
+images from the same origin requires the browser to either queue
+requests behind each other on one connection or open multiple
+connections. With HTTP/2, all five requests can be sent immediately
+on a single connection, and the server can interleave the responses:
+
+[source]
+----
+Connection (single TCP)
+ ├─ Stream 1: GET /page.html → 200 OK (HTML body)
+ ├─ Stream 3: GET /style.css → 200 OK (CSS body)
+ ├─ Stream 5: GET /app.js → 200 OK (JS body)
+ ├─ Stream 7: GET /hero.jpg → 200 OK (image data)
+ └─ Stream 9: GET /logo.png → 200 OK (image data)
+----
+
+The server does not have to finish sending the CSS before it starts
+on the JavaScript. It can send a chunk of the image, then a chunk of
+the HTML, then more of the image--whatever order is optimal. Frames
+from different streams are interleaved freely and reassembled by the
+receiver.
+
+The practical consequences are significant:
+
+* A single connection replaces the six-connection workaround, reducing
+ TLS handshakes, memory, and socket overhead.
+* Domain sharding becomes unnecessary--and in fact harmful, because it
+ splits the single compression context and priority tree.
+* Image sprites and CSS/JS concatenation lose their primary motivation.
+ Individual files can be cached, invalidated, and loaded independently.
+* Page load times drop because requests are no longer blocked behind
+ unrelated responses.
+
+== Stream Prioritization
+
+When dozens of streams are in flight at once, not all of them are
+equally urgent. The CSS that unblocks page rendering matters more than
+a background image below the fold. HTTP/2 lets the client express these
+priorities so the server can allocate bandwidth and processing time
+intelligently.
+
+Each stream can be assigned a _weight_ (an integer from 1 to 256) and
+a _dependency_ on another stream. Together, these form a prioritization
+tree:
+
+* Streams that depend on a parent should receive resources only after
+ the parent is served.
+* Sibling streams share resources in proportion to their weights.
+
+For example, if stream A (weight 12) and stream B (weight 4) are
+siblings, stream A should receive three-quarters of the available
+bandwidth and stream B one-quarter. If stream C depends on stream D,
+then D should be fully served before C begins receiving data.
+
+The client can update priorities at any time--when the user scrolls,
+for instance, images that have moved on-screen can be reprioritized
+above those that have scrolled off.
+
+Priorities are _hints_, not mandates. The server should respect them,
+but it is free to adapt. A good HTTP/2 server interleaves frames from
+multiple priority levels so that a slow high-priority stream does not
+starve everything else.
+
+== Header Compression (HPACK)
+
+HTTP/1.1 headers are verbose and repetitive. Every request to the same
+origin sends the same `Host`, `User-Agent`, `Accept`, and cookie
+headers--often 500 to 800 bytes of identical text, request after
+request. On pages that generate dozens of requests, the header overhead
+alone can fill the initial TCP congestion window and add an entire
+round-trip of latency.
+
+HTTP/2 addresses this with HPACK (RFC 7541), a compression scheme
+designed specifically for HTTP headers. HPACK uses two techniques:
+
+**Static table.** A predefined table of 61 common header field/value
+pairs (`:method: GET`, `:status: 200`, `content-type: text/html`, and
+so on). These can be referenced by index instead of transmitted in
+full.
+
+**Dynamic table.** A per-connection table that both sides maintain.
+When a header field is sent for the first time, it is added to the
+dynamic table. Subsequent requests that use the same field can
+reference the table entry instead of retransmitting the value.
+
+The result is dramatic. On the second request to the same origin, most
+headers are transmitted as single-byte index references. If nothing
+has changed between requests--common for polling--the header overhead
+drops to nearly zero.
+
+Consider two successive requests:
+
+[source]
+----
+Request 1:
+ :method: GET
+ :path: /api/items
+ :authority: api.example.com
+ accept: application/json
+ cookie: session=abc123
+
+Request 2:
+ :method: GET
+ :path: /api/items/42 ← only this changed
+ :authority: api.example.com
+ accept: application/json
+ cookie: session=abc123
+----
+
+In HTTP/1.1, both requests transmit every header in full. In HTTP/2,
+the second request transmits only the changed `:path` value; everything
+else is implied by the dynamic table. Where HTTP/1.1 might send 400
+bytes of headers on the second request, HTTP/2 sends perhaps 20.
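+
+The dynamic table idea can be sketched in a few lines. This deliberately
+ignores the static table, table size limits and eviction, real index
+numbering, and Huffman coding; it only shows why repeated headers become
+cheap:
+
+[source,cpp]
+----
+#include <cstddef>
+#include <map>
+#include <string>
+#include <utility>
+
+// First occurrence of a (name, value) pair goes over as a literal and is
+// remembered; later occurrences are sent as a small index reference.
+struct hpack_like_encoder
+{
+    std::map<std::pair<std::string, std::string>, std::size_t> table;
+
+    std::string encode(std::string const& name, std::string const& value)
+    {
+        auto key = std::make_pair(name, value);
+        auto it = table.find(key);
+        if (it != table.end())
+            return "index " + std::to_string(it->second);  // a byte or two
+        table.emplace(key, table.size() + 1);
+        return "literal " + name + ": " + value + " (added to table)";
+    }
+};
+----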
+
+HPACK was designed to resist the CRIME attack that compromised earlier
+compression approaches (SPDY originally used zlib). By using index-based
+referencing and Huffman coding instead of general-purpose compression,
+HPACK avoids leaking secrets through compression side channels.
+
+== Flow Control
+
+Multiplexing many streams on one connection creates a resource
+allocation problem: a large download should not starve smaller,
+time-sensitive requests. HTTP/2 solves this with a flow control
+mechanism modeled on TCP's own window-based approach, but applied at
+the stream level.
+
+Each side of the connection advertises a _flow control window_--the
+number of bytes it is willing to receive--for each stream and for the
+connection as a whole. The default window is 65,535 bytes. As data
+is received, the window shrinks; the receiver sends `WINDOW_UPDATE`
+frames to replenish it.
+
+Key properties of HTTP/2 flow control:
+
+* It is _per-stream and per-connection_. A receiver can throttle one
+ stream without affecting others.
+* It is _directional_. Each side independently controls how much data
+ it is willing to accept.
+* It is _credit-based_. The sender can only transmit as many `DATA`
+ bytes as the receiver has permitted.
+* It is _hop-by-hop_, not end-to-end. A proxy between client and
+ server manages its own flow control windows on each side.
+
+Flow control applies only to `DATA` frames. Control frames like
+`HEADERS` and `SETTINGS` are always delivered without flow control,
+ensuring that the connection can always be managed even when data
+windows are exhausted.
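+
+On the sending side, the bookkeeping amounts to a credit counter per
+stream plus one for the connection. A minimal sketch, independent of
+any library:
+
+[source,cpp]
+----
+#include <cstdint>
+#include <stdexcept>
+
+class flow_window
+{
+    // Default initial window per the specification: 65,535 bytes.
+    std::int64_t credit_ = 65535;
+
+public:
+    // How many DATA bytes may still be sent.
+    std::int64_t available() const { return credit_; }
+
+    // Called after sending a DATA frame of n bytes.
+    void on_data_sent(std::int64_t n)
+    {
+        if(n > credit_)
+            throw std::logic_error("flow control violation");
+        credit_ -= n;
+    }
+
+    // Called when the peer sends WINDOW_UPDATE with increment n.
+    void on_window_update(std::int64_t n)
+    {
+        credit_ += n;
+    }
+};
+----
+
+A sender consults both the stream window and the connection window
+before emitting a `DATA` frame; the smaller of the two limits how much
+it may send.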
+
+== Server Push
+
+HTTP/2 introduced _server push_: the ability for a server to send
+resources to the client before the client requests them. When the
+server knows that an HTML page will need a particular stylesheet and
+script, it can push those resources alongside the initial response,
+eliminating the round-trip the client would spend discovering and
+requesting them.
+
+The mechanism works through `PUSH_PROMISE` frames. The server sends
+a `PUSH_PROMISE` containing the headers of the resource it intends
+to push. The client can accept the push (letting it populate the cache)
+or reject it with a `RST_STREAM` if the resource is already cached.
+
+In theory, server push was elegant. In practice, it proved difficult
+to use effectively. Servers had to guess what clients already had
+cached, and incorrect guesses wasted bandwidth. The overhead of
+implementing push correctly on both sides outweighed the latency
+savings in many deployments.
+
+Although RFC 9113, the 2022 revision of the HTTP/2 specification,
+still defines server push, browsers have largely removed support for
+it, and the feature is deprecated in practice.
+The same goal--hinting to the client about needed resources before the
+page HTML is fully parsed--is now better served by `103 Early Hints`
+responses, which tell the client what to preload without the complexity
+of push streams.
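+
+For illustration, an exchange using `103 Early Hints` (the resource
+names are examples):
+
+[source]
+----
+HTTP/1.1 103 Early Hints
+Link: </styles/main.css>; rel=preload; as=style
+Link: </app.js>; rel=preload; as=script
+
+HTTP/1.1 200 OK
+Content-Type: text/html
+Content-Length: 1234
+
+<!doctype html>...
+----
+
+The client can begin fetching the stylesheet and script while the
+server is still generating the HTML.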
+
+== Connection Establishment
+
+HTTP/2 runs over TCP, and in practice almost exclusively over TLS.
+All major browsers require HTTPS for HTTP/2 connections, even though
+the specification technically allows cleartext HTTP/2.
+
+For HTTPS connections, the client and server negotiate HTTP/2 during
+the TLS handshake using _ALPN_ (Application-Layer Protocol
+Negotiation). The client includes `h2` in its list of supported
+protocols in the TLS ClientHello message. If the server also supports
+HTTP/2, it selects `h2` in the ServerHello, and both sides begin
+speaking HTTP/2 immediately after the handshake completes. No extra
+round-trips are needed.
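+
+The details live in your TLS library, not in HTTP itself. As an
+OpenSSL-based illustration (not part of this library), a client offers
+`h2` and then checks what was negotiated:
+
+[source,cpp]
+----
+#include <openssl/ssl.h>
+
+#include <string_view>
+
+// ALPN wire format: each protocol name is length-prefixed.
+void offer_h2(SSL_CTX* ctx)
+{
+    static unsigned char const protos[] =
+        { 2, 'h', '2', 8, 'h', 't', 't', 'p', '/', '1', '.', '1' };
+    SSL_CTX_set_alpn_protos(ctx, protos, sizeof(protos)); // 0 on success
+}
+
+// After the handshake, see whether the server selected "h2".
+bool negotiated_h2(SSL* ssl)
+{
+    unsigned char const* proto = nullptr;
+    unsigned int len = 0;
+    SSL_get0_alpn_selected(ssl, &proto, &len);
+    return proto != nullptr &&
+        std::string_view(reinterpret_cast<char const*>(proto), len) == "h2";
+}
+----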
+
+Once the TLS handshake finishes, both sides send a _connection
+preface_: a `SETTINGS` frame declaring their configuration (maximum
+concurrent streams, initial window size, maximum header list size, and
+so on). The client also sends a well-known 24-byte magic string as a
+sanity check:
+
+[source]
+----
+PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n
+----
+
+This string is designed to fail clearly if an HTTP/1.1 server
+accidentally receives it, preventing silent protocol mismatches.
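+
+A server reading raw bytes can use the preface as a cheap sanity check
+before handing the connection to an HTTP/2 framing layer. A minimal
+sketch, independent of any library:
+
+[source,cpp]
+----
+#include <cstddef>
+#include <cstring>
+#include <string_view>
+
+// The 24-octet HTTP/2 client connection preface.
+constexpr std::string_view h2_preface =
+    "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n";
+
+bool is_h2_preface(char const* data, std::size_t n)
+{
+    return n >= h2_preface.size() &&
+        std::memcmp(data, h2_preface.data(), h2_preface.size()) == 0;
+}
+----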
+
+For the rare case of cleartext HTTP/2, the client can use the HTTP/1.1
+`Upgrade` mechanism:
+
+[source]
+----
+GET / HTTP/1.1
+Host: example.com
+Connection: Upgrade, HTTP2-Settings
+Upgrade: h2c
+----
+
+If the server supports HTTP/2, it responds with `101 Switching
+Protocols` and both sides switch to binary framing. If not, the
+exchange continues as normal HTTP/1.1.
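+
+A server that accepts the upgrade answers before switching to binary
+framing:
+
+[source]
+----
+HTTP/1.1 101 Switching Protocols
+Connection: Upgrade
+Upgrade: h2c
+----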
+
+== One Connection Per Origin
+
+HTTP/2 is designed around a single connection per origin. Where
+HTTP/1.1 browsers opened six connections to achieve parallelism,
+HTTP/2 multiplexes everything onto one. This has several benefits:
+
+* **Better compression.** A single HPACK dynamic table covers all
+ requests to the origin, maximizing header compression.
+* **Consistent prioritization.** All streams compete in a single
+ priority tree rather than across independent connections.
+* **Reduced overhead.** One TLS handshake, one TCP slow-start ramp,
+ fewer sockets consuming memory on client and server alike.
+* **Friendlier to the network.** Fewer competing TCP flows mean less
+ congestion and better utilization of available bandwidth.
+
+There is a trade-off. Because all streams share one TCP connection,
+a single lost packet forces TCP to retransmit and stalls _every_
+stream on that connection--head-of-line blocking returns, but at the
+transport layer rather than the application layer. On lossy networks
+(mobile, satellite), this can hurt performance.
+
+In practice, the benefits of compression, prioritization, and reduced
+overhead outweigh the TCP-level blocking penalty for most deployments.
+The transport-layer limitation is the primary motivation for HTTP/3,
+which replaces TCP with QUIC to give each stream independent loss
+recovery.
+
+== Frame Types at a Glance
+
+HTTP/2 defines ten frame types. Understanding them gives you a
+complete picture of what the protocol can express:
+
+[cols="2,4"]
+|===
+|Frame Type |Purpose
+
+|`DATA`
+|Carries the body of a request or response.
+
+|`HEADERS`
+|Opens a new stream and carries compressed HTTP headers.
+
+|`PRIORITY`
+|Declares a stream's weight and dependency.
+
+|`RST_STREAM`
+|Immediately terminates a stream (error or cancellation).
+
+|`SETTINGS`
+|Exchanges connection configuration between endpoints.
+
+|`PUSH_PROMISE`
+|Announces a server-initiated push stream (deprecated).
+
+|`PING`
+|Measures round-trip time and verifies connection liveness.
+
+|`GOAWAY`
+|Initiates graceful connection shutdown, telling the peer the
+last stream ID that was processed.
+
+|`WINDOW_UPDATE`
+|Adjusts the flow control window for a stream or the connection.
+
+|`CONTINUATION`
+|Continues a header block that did not fit in a single `HEADERS`
+or `PUSH_PROMISE` frame.
+|===
+
+The `GOAWAY` frame deserves special mention. It allows a server to
+drain gracefully: the server tells the client which streams were
+processed and which were not, so the client can safely retry
+unprocessed requests on a new connection. This is essential for
+zero-downtime deployments.
+
+== What HTTP/2 Means for Applications
+
+Because HTTP/2 preserves HTTP semantics, existing applications work
+without modification. But understanding the protocol lets you stop
+fighting it:
+
+* **Stop sharding domains.** Multiple origins prevent HTTP/2 from
+ using a single connection and split the compression context.
+ Consolidate resources onto one origin where possible.
+* **Stop concatenating and spriting.** Individual files multiplex
+ efficiently, cache independently, and invalidate granularly.
+ Bundling large files delays execution and wastes cache space when
+ a single component changes.
+* **Stop inlining resources.** Small CSS or JavaScript inlined into
+ HTML cannot be cached separately. With multiplexing, the cost of
+ an additional request is negligible.
+* **Do use priority hints.** Modern browsers set stream priorities
+ automatically, but server-side awareness of priorities (serving
+ critical CSS before background images) further improves perceived
+ performance.
+* **Do tune your TCP stack.** HTTP/2's single connection depends
+ heavily on TCP performance. A server with an initial congestion
+ window of 10 segments, TLS session resumption, and ALPN support
+ gives HTTP/2 the best foundation.
+
+By early 2026, HTTP/2 was deployed on more than 35% of all websites,
+and virtually all modern browsers support it. It remains the workhorse
+protocol for the majority of encrypted web traffic, even as HTTP/3
+gains ground with its QUIC-based transport. Understanding HTTP/2 is
+not just historical context--it is the protocol most of your requests
+travel over today.
diff --git a/doc/modules/ROOT/pages/2.http-tutorial/2l.http3.adoc b/doc/modules/ROOT/pages/2.http-tutorial/2l.http3.adoc
new file mode 100644
index 00000000..5eefc915
--- /dev/null
+++ b/doc/modules/ROOT/pages/2.http-tutorial/2l.http3.adoc
@@ -0,0 +1,344 @@
+//
+// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
+//
+// Distributed under the Boost Software License, Version 1.0. (See accompanying
+// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+//
+// Official repository: https://github.com/cppalliance/http
+//
+
+= HTTP/3 and QUIC
+
+HTTP/2 solved the problem of multiplexing requests over a single
+connection, but it inherited a deeper limitation from the layer
+beneath it. TCP, the transport protocol that carries HTTP/1.1 and
+HTTP/2 alike, treats everything it delivers as one continuous stream
+of bytes. It has no idea that the bytes it carries belong to different
+HTTP transactions. When a single TCP packet is lost, every stream on
+that connection stalls until the missing packet is retransmitted--even
+streams that have nothing to do with the lost data. This is
+_head-of-line blocking_ at the transport layer, and no amount of
+clever framing at the HTTP level can fix it. The problem lives in TCP
+itself.
+
+HTTP/3, standardized as RFC 9114 in June 2022, replaces TCP with a
+new transport protocol called QUIC. The HTTP semantics you already
+know--methods, status codes, headers, bodies--remain identical. What
+changes is everything below: how connections are established, how data
+is encrypted, how streams are multiplexed, and how the protocol
+recovers from packet loss. The result is a faster, more resilient
+transport for the same familiar request-response exchange.
+
+== The TCP Problem
+
+To understand why HTTP/3 exists, you need to see the problem it was
+designed to solve.
+
+TCP guarantees that bytes arrive in the order they were sent. If packet
+number 7 out of 20 is lost in transit, packets 8 through 20 sit in the
+receiver's buffer, fully intact, waiting. TCP will not deliver any of
+them to the application until packet 7 has been retransmitted and
+received. This guarantee of in-order delivery is what makes TCP
+reliable--and it is also what makes it slow when packets go missing.
+
+With HTTP/1.1, this was manageable because browsers opened multiple TCP
+connections--six or more to the same server. A lost packet on one
+connection only blocked that connection. The others continued normally.
+
+HTTP/2 changed the picture. It multiplexes all requests over a single
+TCP connection to eliminate the overhead of multiple handshakes and to
+enable stream prioritization. But now a single lost packet can stall
+_every_ in-flight request. Under poor network conditions--a mobile user
+on a train, a congested Wi-Fi link--HTTP/2 can actually perform worse
+than HTTP/1.1. At around 2% packet loss, the older protocol with its
+multiple connections often wins.
+
+This is not a flaw in HTTP/2's design. It is a fundamental mismatch
+between what HTTP needs (independent streams) and what TCP provides
+(a single ordered byte stream). Fixing it required changing the
+transport.
+
+== What Is QUIC
+
+QUIC is a general-purpose transport protocol that runs over UDP. It
+was originally developed by Google starting around 2012, then
+standardized by the IETF as RFC 9000 in 2021. The name is not an
+acronym; it is just pronounced "quick."
+
+Running over UDP may sound alarming--UDP is a bare-bones protocol
+with no reliability guarantees, no congestion control, and no
+connection concept. A UDP datagram is just an 8-byte header (source
+port, destination port, length, checksum) and a payload. Packets can
+arrive out of order, duplicated, or not at all.
+
+But QUIC does not rely on UDP for reliability. It _reimplements_
+everything TCP provides--reliable delivery, congestion control, flow
+control, connection establishment--inside its own protocol layer. UDP
+is simply the envelope that gets QUIC packets through middleboxes,
+firewalls, and NATs that already understand UDP port numbers. QUIC
+builds a sophisticated, reliable, encrypted transport on top of a
+deliberately minimal carrier.
+
+The key innovation is that QUIC knows about _streams_. Where TCP sees
+a single undifferentiated byte stream, QUIC sees multiple independent
+streams multiplexed within a single connection. Each stream has its
+own identifier and its own ordering guarantees. If a packet carrying
+data for stream 5 is lost, only stream 5 stalls. Streams 1 through 4
+and 6 onward continue to deliver data to the application without
+waiting.
+
+== Connection Establishment
+
+Setting up a TCP connection with TLS encryption takes at least two
+round-trips. First, the TCP three-way handshake (SYN, SYN-ACK, ACK)
+consumes one round-trip. Then the TLS handshake negotiates
+cryptographic keys in at least one more round-trip (two round-trips
+with TLS 1.2). Only after both handshakes complete can the client send
+its first HTTP request.
+
+QUIC collapses these into a single round-trip. Because QUIC integrates
+TLS 1.3 directly into its transport handshake, the cryptographic key
+exchange happens alongside connection establishment. The client sends
+its first flight of data, the server responds with its own
+cryptographic material _and_ the remaining connection setup, and after
+a single round-trip the connection is ready for application data.
+
+The improvement is even more dramatic on _returning_ connections.
+QUIC supports _0-RTT resumption_: when a client reconnects to a
+server it has visited before, it can send application data in its
+very first packet, before the handshake even completes. The client
+reuses cryptographic keys from the previous session to encrypt this
+early data. The server can begin processing the request immediately.
+Because 0-RTT data can be replayed by an attacker, servers typically
+accept it only for idempotent requests such as `GET`.
+
+[source]
+----
+First visit (1-RTT):
+ Client ----[Initial + crypto]----> Server
+ Client <---[crypto + handshake]--- Server
+ Client ----[HTTP request]--------> Server
+
+Return visit (0-RTT):
+ Client ----[Initial + crypto + HTTP request]----> Server
+ Client <---[crypto + HTTP response]-------------- Server
+----
+
+Compared to TCP+TLS 1.2, which needs three round-trips before
+the first byte of application data can be sent, 0-RTT means a
+returning client's request is already on the wire in the very first
+packet. On a path with 50 milliseconds of round-trip time, that
+saves 100 to 150 milliseconds of latency--perceptible to a human
+and significant at scale.
+
+== Built-In Encryption
+
+In the HTTP/1.1 and HTTP/2 world, encryption is optional. You can run
+plain HTTP over TCP, and TLS is a separate layer added on top.
+Millions of sites still serve traffic without encryption, and even
+encrypted connections expose TCP headers--sequence numbers, flags,
+window sizes--as plain text for any observer on the network path.
+
+QUIC makes encryption mandatory. There is no unencrypted mode. Every
+QUIC connection uses TLS 1.3, and the encryption covers not just the
+HTTP payload but most of the QUIC packet header as well. Only a small
+number of fields remain visible: a few flags and the connection ID.
+Everything else--including transport-level metadata that TCP left
+exposed--is encrypted.
+
+This is a security improvement and a practical one. Because middleboxes
+cannot inspect or modify QUIC headers, the protocol is resistant to
+ossification--the gradual process by which network equipment starts
+depending on header fields being in certain positions with certain
+values, making it impossible to evolve the protocol. TCP has suffered
+badly from ossification over the decades. QUIC sidesteps it by
+encrypting the fields that middleboxes might otherwise latch onto.
+
+== Stream Multiplexing Without Blocking
+
+This is the feature that justifies the entire endeavor.
+
+HTTP/2 multiplexes streams at the application layer, but all those
+streams share a single TCP byte stream underneath. QUIC multiplexes
+streams at the _transport_ layer. Each stream is an independent
+sequence of bytes with its own flow control. The transport knows which
+bytes belong to which stream because every QUIC frame carries a stream
+identifier.
+
+When a packet is lost, QUIC retransmits only the data for the affected
+stream. Other streams continue to deliver data to the application. The
+head-of-line blocking problem that plagued HTTP/2 over TCP simply does
+not exist.
+
+[source]
+----
+HTTP/2 over TCP (single ordered byte stream):
+
+ Stream A: [pkt1] [pkt2] [LOST] [pkt4] [pkt5]
+ Stream B: [pkt6] [pkt7] [pkt8]
+ Stream C: [pkt9]
+ ^^^^^
+ All streams blocked until
+ retransmit arrives
+
+HTTP/3 over QUIC (independent streams):
+
+ Stream A: [pkt1] [pkt2] [LOST] ...waiting...
+ Stream B: [pkt6] [pkt7] [pkt8] ✓ delivered
+ Stream C: [pkt9] ✓ delivered
+ ^^^^^
+ Only Stream A waits
+----
+
+The practical impact is most visible on lossy networks. A mobile user
+on cellular data experiences frequent packet loss as they move between
+towers. With HTTP/2, every lost packet freezes the entire page load.
+With HTTP/3, only the specific resource carried on the affected stream
+is delayed. The rest of the page continues to load.
+
+== Connection Migration
+
+TCP connections are identified by a four-tuple: source IP, source port,
+destination IP, destination port. If any of these change, the
+connection breaks. When a phone switches from Wi-Fi to cellular, its
+IP address changes, and every TCP connection is destroyed. The browser
+must re-establish connections from scratch--new handshakes, new
+slow-start ramp-up, new TLS negotiation.
+
+QUIC connections are identified by a _connection ID_, a token generated
+during the handshake. This ID is independent of the network addresses
+underneath. When a phone switches from Wi-Fi to cellular, the QUIC
+connection continues seamlessly--the client sends packets from its new
+IP address with the same connection ID, and the server recognizes it
+as the same connection. No new handshake, no lost state, no
+interrupted downloads.
+
+This matters for the modern web. Users walk between rooms, enter
+elevators, step outside buildings. Their devices constantly switch
+between networks. Connection migration means an HTTP/3 video stream
+does not skip, an ongoing file download does not restart, and an
+interactive application does not lose its session state.
+
+== QPACK Header Compression
+
+HTTP/2 introduced HPACK, a header compression scheme that exploits the
+redundancy in HTTP headers. Most requests to the same server carry
+nearly identical headers--the same `Host`, `User-Agent`,
+`Accept-Encoding`, and `Cookie` values over and over. HPACK maintains
+a dynamic table of recently seen header fields and replaces repeated
+values with compact index references.
+
+HPACK depends on both sides processing headers in strict order, because
+updates to the dynamic table must be synchronized. This works with
+TCP's in-order delivery but conflicts with QUIC's independent streams,
+where header blocks can arrive out of order.
+
+HTTP/3 replaces HPACK with QPACK (RFC 9204), a header compression
+scheme designed for out-of-order delivery. QPACK uses dedicated
+unidirectional streams (an encoder stream and a decoder stream) to
+synchronize dynamic table updates. Header
+blocks on request streams reference the table but do not modify it
+directly, so they can be processed in any order. The compression
+efficiency is comparable to HPACK, but the design respects QUIC's
+fundamental property of stream independence.
+
+== No Server Push
+
+HTTP/2 introduced _server push_, a feature that allowed the server to
+send resources to the client before the client asked for them. The
+idea was compelling: when a client requests an HTML page, the server
+already knows it will need the associated CSS and JavaScript files, so
+why wait for the client to discover and request them?
+
+In practice, server push proved difficult to use correctly. Pushed
+resources often collided with the browser's cache--the server sent
+files the client already had. The browser's prioritization of pushed
+resources was inconsistent across implementations. Many deployments
+disabled it or never enabled it.
+
+HTTP/3 (RFC 9114) still defines server push, but adoption remains
+minimal. The feature has been effectively deprecated in major browsers.
+The `103 Early Hints` status code, which tells the client to preload
+specific resources without actually pushing them, has emerged as a
+simpler and more predictable alternative.
+
+== The Protocol Stack
+
+The full HTTP/3 protocol stack differs significantly from its
+predecessors:
+
+[cols="1,1,1"]
+|===
+|HTTP/1.1 |HTTP/2 |HTTP/3
+
+|HTTP/1.1
+|HTTP/2 (binary framing, HPACK)
+|HTTP/3 (QPACK)
+
+|TLS (optional)
+|TLS (typically required)
+|_integrated into QUIC_
+
+|TCP
+|TCP
+|QUIC
+
+|IP
+|IP
+|UDP / IP
+|===
+
+The most striking change is the disappearance of TLS as a separate
+layer. In the HTTP/3 stack, encryption is not bolted on--it is woven
+into the transport. QUIC packets are encrypted by default, and the
+cryptographic handshake is inseparable from connection establishment.
+
+The move from TCP to UDP is equally significant. TCP has been the
+foundation of web traffic since the early 1990s. Replacing it with a
+protocol built on UDP--traditionally associated with real-time
+applications like DNS lookups, video calls, and gaming--represents a
+fundamental shift in how the web's transport layer works.
+
+== Discovery and Fallback
+
+A client cannot know in advance whether a server supports HTTP/3. The
+discovery mechanism works through the `Alt-Svc` (Alternative Service)
+header. When a client connects to a server over HTTP/1.1 or HTTP/2,
+the server can include this header in the response:
+
+[source]
+----
+Alt-Svc: h3=":443"; ma=86400
+----
+
+This tells the client: "I support HTTP/3 on UDP port 443. This
+information is valid for 86400 seconds (one day)." The client caches
+this hint and attempts an HTTP/3 connection on subsequent requests.
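+
+A client's cache entry for such a hint boils down to a protocol, an
+alternative authority, and an expiry time. A toy sketch (not library
+code):
+
+[source,cpp]
+----
+#include <chrono>
+#include <string>
+
+struct alt_svc_hint
+{
+    std::string protocol;   // "h3"
+    std::string authority;  // ":443" (same host, UDP port 443)
+    std::chrono::steady_clock::time_point expires; // now + ma seconds
+
+    bool usable() const
+    {
+        return std::chrono::steady_clock::now() < expires;
+    }
+};
+----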
+
+If the QUIC connection fails--because a firewall blocks UDP, because
+a middlebox interferes, or because the path does not support it--the
+client falls back to HTTP/2 over TCP. This graceful degradation means
+deploying HTTP/3 on a server carries no risk of breaking existing
+clients. Older clients that do not understand `Alt-Svc` simply ignore
+the header and continue using HTTP/2 or HTTP/1.1.
+
+Modern browsers also implement _QUIC connection racing_: when they
+learn a server supports HTTP/3, they attempt both a QUIC connection
+and a TCP connection simultaneously, and use whichever succeeds first.
+This avoids the penalty of trying QUIC and waiting for a timeout
+before falling back.
+
+== What Stays the Same
+
+Despite all the changes underneath, the conversation between client and
+server is the same one it has always been. A request still has a method,
+a target, and headers. A response still has a status code and a body.
+`GET /index.html` means the same thing over HTTP/3 that it meant over
+HTTP/1.1. Status `200` still means success. `Cache-Control` still
+governs caching. `Content-Type` still describes the payload.
+
+The semantics of HTTP--the shared vocabulary defined in RFC 9110--are
+independent of the version. HTTP/3 changes the _transport_, not the
+_language_. Code that constructs requests, inspects headers, or
+interprets status codes does not need to know or care whether the
+messages traveled over TCP or QUIC. This separation of semantics from
+transport is one of HTTP's most enduring design strengths, and it is
+the reason the protocol has evolved three times without breaking the
+web.
diff --git a/doc/modules/ROOT/pages/3.messages/3.messages.adoc b/doc/modules/ROOT/pages/3.messages/3.messages.adoc
new file mode 100644
index 00000000..9268dd52
--- /dev/null
+++ b/doc/modules/ROOT/pages/3.messages/3.messages.adoc
@@ -0,0 +1,104 @@
+//
+// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
+//
+// Distributed under the Boost Software License, Version 1.0. (See accompanying
+// file LICENSE_1_0.txt or copy at https://www.boost.org/LICENSE_1_0.txt)
+//
+// Official repository: https://github.com/cppalliance/http
+//
+
+= HTTP Messages
+
+Every HTTP interaction is a message exchange: a client sends a request,
+a server sends a response. This library gives you three complementary
+components for working with messages:
+
+* **Containers** build and inspect the start line and headers
+* **Serializer** transforms a container and body data into bytes for the wire
+* **Parser** transforms bytes from the wire into a container and body data
+
+These components are designed to work together but remain independently
+useful. You can build a message with a container, serialize it for
+transmission, and parse the reply — or use any one in isolation.
+
+== Containers Hold Headers, Not Bodies
+
+An HTTP message consists of a start line, header fields, and an optional
+body. This library keeps the body separate from the container. The
+`request` and `response` types hold only the start line and headers in
+their serialized wire format. This avoids parameterizing on body type
+and keeps the containers simple:
+
+[source,cpp]
+----
+request req(method::get, "/api/users");
+req.set(field::host, "example.com");
+req.set(field::accept, "application/json");
+
+// buffer() returns the serialized start line + headers
+std::cout << req.buffer();
+----
+
+Body data flows through the serializer or parser instead.
+
+== The Serializer and Parser Are Persistent Objects
+
+Both the serializer and parser allocate a fixed block of memory at
+construction and reuse it across every message on a connection. This
+eliminates per-message allocation and makes resource usage predictable.
+Create one of each when a connection is established and keep them alive
+for its duration:
+
+[source,cpp]
+----
+// One parser and one serializer per connection
+request_parser pr;
+serializer sr(cfg);  // cfg: configuration object, created elsewhere
+
+pr.reset();
+
+while (connection_open)
+{
+ pr.start();
+ // ... parse a request ...
+ // ... serialize a response ...
+}
+----
+
+== Two-Sided Interfaces
+
+The serializer and parser each expose two sides:
+
+[cols="1,2,2"]
+|===
+| |Input Side |Output Side
+
+|**Serializer**
+|Sink — accepts a container and body data from the caller
+|Stream — emits serialized HTTP bytes for writing to a socket
+
+|**Parser**
+|Stream — accepts raw bytes read from a socket
+|Source — yields a parsed container and body data to the caller
+|===
+
+This two-sided design follows naturally from their role as transformers:
+the serializer converts structured data into bytes, and the parser
+converts bytes into structured data. Neither side performs I/O directly.
+You feed data in on one side and pull results out the other.
+
+== Sans-I/O at Every Layer
+
+None of these components touch a socket. The parser consumes buffers
+you fill; the serializer produces buffers you drain. This means the
+same code works with any transport — Asio, io_uring, or a test harness
+that feeds canned data — without recompilation or abstraction layers.
+
+== Pages in This Section
+
+* xref:3.messages/3a.containers.adoc[Containers] — build and inspect requests,
+ responses, and header field collections
+* xref:3.messages/3b.serializing.adoc[Serializing] — transform messages and body
+ data into bytes for the wire
+* xref:3.messages/3c.parsing.adoc[Parsing] — transform bytes from the wire into
+ messages and body data
diff --git a/doc/modules/ROOT/pages/containers.adoc b/doc/modules/ROOT/pages/3.messages/3a.containers.adoc
similarity index 87%
rename from doc/modules/ROOT/pages/containers.adoc
rename to doc/modules/ROOT/pages/3.messages/3a.containers.adoc
index 51ed6e96..cb8fd1e5 100644
--- a/doc/modules/ROOT/pages/containers.adoc
+++ b/doc/modules/ROOT/pages/3.messages/3a.containers.adoc
@@ -37,10 +37,59 @@ the template complexity that comes from parameterizing on body type.
|===
+
+These types form an inheritance hierarchy rooted at `fields_base`:
+
+[mermaid]
+----
+classDiagram
+ fields_base <|-- fields
+ fields_base <|-- message_base
+ message_base <|-- request_base
+ message_base <|-- response_base
+ request_base <|-- request
+ request_base <|-- static_request
+ response_base <|-- response
+ response_base <|-- static_response
+
+ class fields_base["fields_base"] {
+ field storage and iteration
+ }
+ class message_base["message_base"] {
+ version, payload, keep-alive
+ }
+ class fields["fields"] {
+ standalone header collection
+ }
+ class request_base["request_base"] {
+ method, target
+ }
+ class response_base["response_base"] {
+ status code, reason
+ }
+ class request["request"] {
+ dynamic storage
+ }
+ class static_request["static_request"] {
+ externally-provided storage
+ }
+ class response["response"] {
+ dynamic storage
+ }
+ class static_response["static_response"] {
+ externally-provided storage
+ }
+----
+
All containers maintain this invariant: **contents are always valid HTTP**.
Operations that would produce malformed output throw an exception. This
means you can safely serialize any container at any time.
+
+WARNING: Syntactic validity does not imply semantic correctness. A
+container will happily hold contradictory headers, nonsensical status
+codes, or fields that create security vulnerabilities such as response
+splitting. The caller is responsible for ensuring the contents reflect
+intent.
+
== Working with Fields
The `fields` class stores a collection of header fields. Use it when you
@@ -462,4 +511,4 @@ container is modified.
Now that you can build and inspect HTTP messages, learn how to parse
incoming messages from the network:
-* xref:parsing.adoc[Parsing] — parse request and response messages
+* xref:3.messages/3c.parsing.adoc[Parsing] — parse request and response messages
diff --git a/doc/modules/ROOT/pages/serializing.adoc b/doc/modules/ROOT/pages/3.messages/3b.serializing.adoc
similarity index 98%
rename from doc/modules/ROOT/pages/serializing.adoc
rename to doc/modules/ROOT/pages/3.messages/3b.serializing.adoc
index 7072b346..73cf843a 100644
--- a/doc/modules/ROOT/pages/serializing.adoc
+++ b/doc/modules/ROOT/pages/3.messages/3b.serializing.adoc
@@ -221,8 +221,8 @@ co_await sink.write_eof(
For detailed information on compression services, see:
-* xref:compression/zlib.adoc[ZLib] — DEFLATE and gzip compression
-* xref:compression/brotli.adoc[Brotli] — Brotli compression
+* xref:5.compression/5a.zlib.adoc[ZLib] — DEFLATE and gzip compression
+* xref:5.compression/5b.brotli.adoc[Brotli] — Brotli compression
== Expect: 100-continue
diff --git a/doc/modules/ROOT/pages/parsing.adoc b/doc/modules/ROOT/pages/3.messages/3c.parsing.adoc
similarity index 96%
rename from doc/modules/ROOT/pages/parsing.adoc
rename to doc/modules/ROOT/pages/3.messages/3c.parsing.adoc
index 54b93906..85d8f659 100644
--- a/doc/modules/ROOT/pages/parsing.adoc
+++ b/doc/modules/ROOT/pages/3.messages/3c.parsing.adoc
@@ -224,8 +224,8 @@ you receive is the decoded content.
For detailed information on compression services, see:
-* xref:compression/zlib.adoc[ZLib] — DEFLATE and gzip decompression
-* xref:compression/brotli.adoc[Brotli] — Brotli decompression
+* xref:5.compression/5a.zlib.adoc[ZLib] — DEFLATE and gzip decompression
+* xref:5.compression/5b.brotli.adoc[Brotli] — Brotli decompression
== Handling Multiple Messages
@@ -303,4 +303,4 @@ Common errors include:
Now that you can parse incoming messages, learn how to produce outgoing
messages:
-* xref:serializing.adoc[Serializing] — produce HTTP messages for transmission
+* xref:3.messages/3b.serializing.adoc[Serializing] — produce HTTP messages for transmission
diff --git a/doc/modules/ROOT/pages/server/servers-intro.adoc b/doc/modules/ROOT/pages/4.servers/4.servers.adoc
similarity index 97%
rename from doc/modules/ROOT/pages/server/servers-intro.adoc
rename to doc/modules/ROOT/pages/4.servers/4.servers.adoc
index c91f707f..21231507 100644
--- a/doc/modules/ROOT/pages/server/servers-intro.adoc
+++ b/doc/modules/ROOT/pages/4.servers/4.servers.adoc
@@ -214,7 +214,7 @@ and passes it to the handler. This is how a single route definition
handles thousands of different URLs.
Patterns can also include wildcards, optional groups, and literal
-segments. The xref:server/route-patterns.adoc[Route Patterns] page
+segments. The xref:4.servers/4d.route-patterns.adoc[Route Patterns] page
explores the full syntax. For now, the key insight is that routing
transforms a flat stream of requests into organized, purpose-built
handler functions.
@@ -496,14 +496,14 @@ effortlessly.
The pages that follow dive into each component in detail:
-* xref:server/router.adoc[Router] -- the dispatch engine that maps
+* xref:4.servers/4c.routers.adoc[Routers] -- the dispatch engine that maps
requests to handlers, with method matching, handler chaining,
middleware, nested routers, and error handling
-* xref:server/route-patterns.adoc[Route Patterns] -- the full pattern
+* xref:4.servers/4d.route-patterns.adoc[Route Patterns] -- the full pattern
syntax including named parameters, wildcards, and optional groups
-* xref:server/serve-static.adoc[Serving Static Files] -- efficient
+* xref:4.servers/4e.serve-static.adoc[Serving Static Files] -- efficient
file serving with caching, range requests, and content negotiation
-* xref:server/serve-index.adoc[Directory Listings] -- browsable
+* xref:4.servers/4f.serve-index.adoc[Directory Listings] -- browsable
directory views in HTML, JSON, or plain text
Each page builds on the foundation laid here. The router page shows
diff --git a/doc/modules/ROOT/pages/4.servers/4a.http-worker.adoc b/doc/modules/ROOT/pages/4.servers/4a.http-worker.adoc
new file mode 100644
index 00000000..bfebe264
--- /dev/null
+++ b/doc/modules/ROOT/pages/4.servers/4a.http-worker.adoc
@@ -0,0 +1,10 @@
+//
+// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
+//
+// Distributed under the Boost Software License, Version 1.0. (See accompanying
+// file LICENSE_1_0.txt or copy at https://www.boost.org/LICENSE_1_0.txt)
+//
+// Official repository: https://github.com/cppalliance/http
+//
+
+= HTTP Worker
diff --git a/doc/modules/ROOT/pages/server/route-handlers.adoc b/doc/modules/ROOT/pages/4.servers/4b.route-handlers.adoc
similarity index 99%
rename from doc/modules/ROOT/pages/server/route-handlers.adoc
rename to doc/modules/ROOT/pages/4.servers/4b.route-handlers.adoc
index 5fced58b..3c6ae545 100644
--- a/doc/modules/ROOT/pages/server/route-handlers.adoc
+++ b/doc/modules/ROOT/pages/4.servers/4b.route-handlers.adoc
@@ -866,4 +866,4 @@ to _install_ a handler -- how to tell the router "when a `GET` request
arrives at `/posts/:id`, call _this_ function."
That is the job of the router, and it is the subject of the
-xref:server/router.adoc[next page].
+xref:4.servers/4c.routers.adoc[next page].
diff --git a/doc/modules/ROOT/pages/server/routers.adoc b/doc/modules/ROOT/pages/4.servers/4c.routers.adoc
similarity index 99%
rename from doc/modules/ROOT/pages/server/routers.adoc
rename to doc/modules/ROOT/pages/4.servers/4c.routers.adoc
index 7268787d..3e56bbe1 100644
--- a/doc/modules/ROOT/pages/server/routers.adoc
+++ b/doc/modules/ROOT/pages/4.servers/4c.routers.adoc
@@ -1171,9 +1171,9 @@ size.
== See Also
-* xref:server/route-handlers.adoc[Route Handlers] -- the handler
+* xref:4.servers/4b.route-handlers.adoc[Route Handlers] -- the handler
signature, `route_params`, and the `send()` method
-* xref:server/route-patterns.adoc[Route Patterns] -- the complete
+* xref:4.servers/4d.route-patterns.adoc[Route Patterns] -- the complete
pattern syntax with detailed matching examples
-* xref:server/serve-static.adoc[Serving Static Files] -- the
+* xref:4.servers/4e.serve-static.adoc[Serving Static Files] -- the
`serve_static` middleware
diff --git a/doc/modules/ROOT/pages/server/route-patterns.adoc b/doc/modules/ROOT/pages/4.servers/4d.route-patterns.adoc
similarity index 99%
rename from doc/modules/ROOT/pages/server/route-patterns.adoc
rename to doc/modules/ROOT/pages/4.servers/4d.route-patterns.adoc
index c7675ce0..cadb50a5 100644
--- a/doc/modules/ROOT/pages/server/route-patterns.adoc
+++ b/doc/modules/ROOT/pages/4.servers/4d.route-patterns.adoc
@@ -505,4 +505,4 @@ These patterns are invalid and produce parse errors:
== See Also
-* xref:router.adoc[Router] - request dispatch and handler registration
+* xref:4.servers/4c.routers.adoc[Routers] - request dispatch and handler registration
diff --git a/doc/modules/ROOT/pages/server/serve-static.adoc b/doc/modules/ROOT/pages/4.servers/4e.serve-static.adoc
similarity index 99%
rename from doc/modules/ROOT/pages/server/serve-static.adoc
rename to doc/modules/ROOT/pages/4.servers/4e.serve-static.adoc
index a21a2674..0c20a899 100644
--- a/doc/modules/ROOT/pages/server/serve-static.adoc
+++ b/doc/modules/ROOT/pages/4.servers/4e.serve-static.adoc
@@ -308,5 +308,5 @@ This configuration:
== See Also
-* xref:server/route-patterns.adoc[Route Patterns] — how request paths
+* xref:4.servers/4d.route-patterns.adoc[Route Patterns] — how request paths
are matched to handlers
diff --git a/doc/modules/ROOT/pages/server/serve-index.adoc b/doc/modules/ROOT/pages/4.servers/4f.serve-index.adoc
similarity index 97%
rename from doc/modules/ROOT/pages/server/serve-index.adoc
rename to doc/modules/ROOT/pages/4.servers/4f.serve-index.adoc
index debf38ab..58ad62c0 100644
--- a/doc/modules/ROOT/pages/server/serve-index.adoc
+++ b/doc/modules/ROOT/pages/4.servers/4f.serve-index.adoc
@@ -254,7 +254,7 @@ appropriate for local development but not for production.
== See Also
-* xref:server/serve-static.adoc[Serving Static Files] — file serving
+* xref:4.servers/4e.serve-static.adoc[Serving Static Files] — file serving
middleware that pairs with `serve_index`
-* xref:server/route-patterns.adoc[Route Patterns] — how request paths
+* xref:4.servers/4d.route-patterns.adoc[Route Patterns] — how request paths
are matched to handlers
diff --git a/doc/modules/ROOT/pages/bcrypt.adoc b/doc/modules/ROOT/pages/4.servers/4g.bcrypt.adoc
similarity index 100%
rename from doc/modules/ROOT/pages/bcrypt.adoc
rename to doc/modules/ROOT/pages/4.servers/4g.bcrypt.adoc
diff --git a/doc/modules/ROOT/pages/5.compression/5.compression.adoc b/doc/modules/ROOT/pages/5.compression/5.compression.adoc
new file mode 100644
index 00000000..56d1ef42
--- /dev/null
+++ b/doc/modules/ROOT/pages/5.compression/5.compression.adoc
@@ -0,0 +1,10 @@
+//
+// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
+//
+// Distributed under the Boost Software License, Version 1.0. (See accompanying
+// file LICENSE_1_0.txt or copy at https://www.boost.org/LICENSE_1_0.txt)
+//
+// Official repository: https://github.com/cppalliance/http
+//
+
+= Compression
diff --git a/doc/modules/ROOT/pages/compression/zlib.adoc b/doc/modules/ROOT/pages/5.compression/5a.zlib.adoc
similarity index 98%
rename from doc/modules/ROOT/pages/compression/zlib.adoc
rename to doc/modules/ROOT/pages/5.compression/5a.zlib.adoc
index 40178163..d2fe0be3 100644
--- a/doc/modules/ROOT/pages/compression/zlib.adoc
+++ b/doc/modules/ROOT/pages/5.compression/5a.zlib.adoc
@@ -213,4 +213,4 @@ ser_cfg.apply_gzip_encoder = true;
== See Also
-* xref:brotli.adoc[Brotli] — Higher compression ratio
+* xref:5.compression/5b.brotli.adoc[Brotli] — Higher compression ratio
diff --git a/doc/modules/ROOT/pages/compression/brotli.adoc b/doc/modules/ROOT/pages/5.compression/5b.brotli.adoc
similarity index 97%
rename from doc/modules/ROOT/pages/compression/brotli.adoc
rename to doc/modules/ROOT/pages/5.compression/5b.brotli.adoc
index 58be8c9b..2ecf5f51 100644
--- a/doc/modules/ROOT/pages/compression/brotli.adoc
+++ b/doc/modules/ROOT/pages/5.compression/5b.brotli.adoc
@@ -160,4 +160,4 @@ ser_cfg.apply_brotli_encoder = true;
== See Also
-* xref:zlib.adoc[ZLib] — DEFLATE/gzip compression
+* xref:5.compression/5a.zlib.adoc[ZLib] — DEFLATE/gzip compression
diff --git a/doc/modules/ROOT/pages/6.design/6.design.adoc b/doc/modules/ROOT/pages/6.design/6.design.adoc
new file mode 100644
index 00000000..1a4a728d
--- /dev/null
+++ b/doc/modules/ROOT/pages/6.design/6.design.adoc
@@ -0,0 +1,10 @@
+//
+// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
+//
+// Distributed under the Boost Software License, Version 1.0. (See accompanying
+// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+//
+// Official repository: https://github.com/cppalliance/http
+//
+
+= Design
diff --git a/doc/modules/ROOT/pages/sans_io_philosophy.adoc b/doc/modules/ROOT/pages/6.design/6a.sans-io.adoc
similarity index 100%
rename from doc/modules/ROOT/pages/sans_io_philosophy.adoc
rename to doc/modules/ROOT/pages/6.design/6a.sans-io.adoc
diff --git a/doc/modules/ROOT/pages/design_requirements/parser.adoc b/doc/modules/ROOT/pages/6.design/6b.parser.adoc
similarity index 100%
rename from doc/modules/ROOT/pages/design_requirements/parser.adoc
rename to doc/modules/ROOT/pages/6.design/6b.parser.adoc
diff --git a/doc/modules/ROOT/pages/design_requirements/serializer.adoc b/doc/modules/ROOT/pages/6.design/6c.serializer.adoc
similarity index 100%
rename from doc/modules/ROOT/pages/design_requirements/serializer.adoc
rename to doc/modules/ROOT/pages/6.design/6c.serializer.adoc
diff --git a/doc/modules/ROOT/pages/reference.adoc b/doc/modules/ROOT/pages/7.reference/7.reference.adoc
similarity index 100%
rename from doc/modules/ROOT/pages/reference.adoc
rename to doc/modules/ROOT/pages/7.reference/7.reference.adoc
diff --git a/doc/modules/ROOT/pages/http-protocol.adoc b/doc/modules/ROOT/pages/http-protocol.adoc
deleted file mode 100644
index 09f10668..00000000
--- a/doc/modules/ROOT/pages/http-protocol.adoc
+++ /dev/null
@@ -1,218 +0,0 @@
-//
-// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
-//
-// Distributed under the Boost Software License, Version 1.0. (See accompanying
-// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
-//
-// Official repository: https://github.com/cppalliance/http
-//
-
-= Introduction to HTTP
-
-This section covers the fundamentals of HTTP that you need to understand
-before using the library. After reading this, you'll know how HTTP sessions
-work, what constitutes a message, and what security pitfalls to avoid.
-
-== Sessions
-
-HTTP is a stream-oriented protocol between two connected programs: a client
-and a server. While the connection remains open, the client sends HTTP requests
-and the server sends HTTP responses. These messages are paired in order—each
-request has exactly one corresponding response.
-
-[source]
-----
-Client Server
- | |
- |-------- Request #1 ----------------->|
- |<------- Response #1 -----------------|
- | |
- |-------- Request #2 ----------------->|
- |<------- Response #2 -----------------|
- | |
- ˅ ˅
-----
-
-An HTTP/1.1 session typically proceeds as follows:
-
-1. Client establishes a TCP connection to the server
-2. Client sends a request
-3. Server processes the request and sends a response
-4. Steps 2-3 repeat until either party closes the connection
-
-=== Persistent Connections
-
-HTTP/1.1 connections are persistent by default. The same connection can be
-reused for multiple request/response exchanges, avoiding the overhead of
-establishing new TCP connections.
-
-A connection is closed when:
-
-* Either party sends `Connection: close`
-* An error occurs during parsing or I/O
-* A configurable idle timeout expires
-* The underlying transport is terminated
-
-=== Pipelining
-
-HTTP/1.1 allows clients to send multiple requests without waiting for
-responses (pipelining). Responses must arrive in the same order as requests.
-While the protocol supports this, many implementations handle it poorly,
-which is why this library parses one complete message at a time.
-
-== Messages
-
-HTTP messages consist of three parts: the start line, the headers, and
-an optional message body.
-
-[cols="1a,1a"]
-|===
-|HTTP Request|HTTP Response
-
-|
-[source]
-----
-GET /index.html HTTP/1.1
-User-Agent: Boost
-Host: example.com
-
-----
-|
-[source]
-----
-HTTP/1.1 200 OK
-Server: Boost.HTTP
-Content-Length: 13
-
-Hello, world!
-----
-
-|===
-
-=== Start Line
-
-The start line differs between requests and responses:
-
-**Request line**: `method SP request-target SP HTTP-version CRLF`
-
-**Status line**: `HTTP-version SP status-code SP reason-phrase CRLF`
-
-The library validates start lines strictly. Invalid syntax is rejected
-immediately rather than attempting recovery.
-
-=== Header Fields
-
-Headers are name-value pairs that provide metadata about the message.
-Each header occupies one line, terminated by CRLF:
-
-[source]
-----
-field-name: field-value
-----
-
-Important characteristics:
-
-* Field names are case-insensitive (`Content-Type` equals `content-type`)
-* Field values have leading and trailing whitespace stripped
-* The same field name may appear multiple times
-* Order of fields with the same name is significant
-
-The library tracks several headers automatically and enforces their semantics:
-
-[cols="1a,4a"]
-|===
-|Field|Description
-
-|*Connection*
-|Controls whether the connection stays open. Values include `keep-alive`
-and `close`. The library updates connection state based on this field.
-
-|*Content-Length*
-|Specifies the exact size of the message body in bytes. When present,
-the parser uses this to determine when the body ends.
-
-|*Transfer-Encoding*
-|Indicates transformations applied to the message body. The library
-supports `chunked`, `gzip`, `deflate`, and `brotli` encodings.
-
-|*Upgrade*
-|Requests a protocol switch (e.g., to WebSocket). The library detects
-this and makes the raw connection available for the new protocol.
-
-|===
-
-=== Message Body
-
-The body is a sequence of bytes following the headers. Its length is
-determined by:
-
-* `Content-Length` header (exact byte count)
-* `Transfer-Encoding: chunked` (length encoded in stream)
-* Connection close (for responses without length indication)
-
-The library handles body framing automatically during parsing and
-serialization. You provide or consume the raw body bytes.
-
-== Security Considerations
-
-HTTP implementation bugs frequently lead to security vulnerabilities.
-The library is designed to prevent common attacks by default.
-
-=== Request Smuggling
-
-Request smuggling exploits disagreements between servers about where
-one request ends and the next begins. This happens when:
-
-* Multiple `Content-Length` headers have different values
-* Both `Content-Length` and `Transfer-Encoding: chunked` are present
-* Malformed chunk sizes are interpreted differently
-
-The library rejects ambiguous requests. When both `Content-Length` and
-`Transfer-Encoding` appear, `Transfer-Encoding` takes precedence per
-RFC 9110, and `Content-Length` is removed from the parsed headers.
-
-=== Header Injection
-
-Header injection attacks insert unexpected headers by including CRLF
-sequences in field values. The library forbids CR, LF, and NUL characters
-in header values—attempts to include them throw an exception.
-
-[source,cpp]
-----
-// This throws - newlines not allowed in values
-req.set(field::user_agent, "Bad\r\nInjected-Header: evil");
-----
-
-=== Resource Exhaustion
-
-Attackers can exhaust server memory by sending:
-
-* Extremely long header lines
-* Too many header fields
-* Enormous message bodies
-
-The library provides configurable limits for all of these. When a limit
-is exceeded, parsing fails with a specific error code.
-
-[source,cpp]
-----
-// Configure limits via parser config
-request_parser::config cfg;
-cfg.headers.max_field_size = 8192; // Max bytes per header line
-cfg.headers.max_fields = 100; // Max number of headers
-cfg.body_limit = 1024 * 1024; // Max body size (1 MB)
-----
-
-=== Field Validation
-
-Field names must consist only of valid token characters. Field values
-must not contain control characters except horizontal tab. The library
-validates these constraints on every operation that creates or modifies
-headers.
-
-== Next Steps
-
-Now that you understand HTTP message structure and session management,
-learn how to work with the library's message containers:
-
-* xref:containers.adoc[Containers] — request, response, and fields types
diff --git a/doc/modules/ROOT/pages/index.adoc b/doc/modules/ROOT/pages/index.adoc
index 86d235e1..b851ac77 100644
--- a/doc/modules/ROOT/pages/index.adoc
+++ b/doc/modules/ROOT/pages/index.adoc
@@ -9,27 +9,29 @@
= Boost.HTTP
-HTTP powers the web, but implementing it correctly is surprisingly hard. Boost.HTTP
-is a portable C++ library that provides containers and algorithms for the HTTP/1.1
-protocol, giving you RFC-compliant message handling without the usual implementation
-headaches.
+HTTP powers the web, but implementing it correctly in C++ is surprisingly hard.
+Boost.HTTP gives you the entire HTTP/1.1 protocol stack — from raw parsing up
+to complete clients and servers — without tying you to any network library.
== What This Library Does
-* Provides modifiable containers for HTTP requests and responses
-* Parses incoming HTTP messages with configurable limits
-* Serializes outgoing messages with automatic chunked encoding
-* Handles content encodings (gzip, deflate, brotli)
-* Offers an Express.js-style router for request dispatch
-* Enforces RFC 9110 compliance to prevent common security issues
+* Implements HTTP at multiple levels of abstraction, from low-level parsing and
+ serialization to complete clients and servers
+* Sans-I/O from top to bottom — zero dependency on Asio, Corosio, or any
+ transport; you bring your own
+* Coroutines-only execution model built on Capy
+* Type-erased streams reflecting buffer-oriented I/O
+* Express.js-style router with pattern matching and middleware composition
+* High-level components: static file serving, form processing, cookie handling,
+ cryptographic utilities
+* Content encodings (gzip, deflate, brotli)
+* Strict RFC 9110 compliance to prevent common security issues
== What This Library Does Not Do
-* Network I/O — this is a Sans-I/O library by design
+* Provide a network transport — this is a Sans-I/O library by design
* HTTP/2 or HTTP/3 protocol support
* TLS/SSL handling
-* Cookie management or session state
-* Full HTTP client/server implementation (see Boost.Beast2 for I/O)
== Target Audience
@@ -38,37 +40,42 @@ handling. You should have:
* Familiarity with TCP/IP networking concepts
* Understanding of the HTTP request/response model
-* Experience with C++ move semantics and memory management
+* Experience with C++ coroutines
== Design Philosophy
-The library follows a Sans-I/O architecture that separates protocol logic from
-network operations. This design choice yields several benefits:
+Boost.HTTP follows a Sans-I/O architecture at every layer. Even high-level
+clients and servers operate on abstract, type-erased stream interfaces rather
+than concrete sockets. The library has no dependency on any I/O or networking
+library — not even in its implementation files.
-**Reusability.** The same protocol code works with any I/O framework — Asio,
-io_uring, or platform-specific APIs. Write the HTTP logic once, integrate
-it anywhere.
+This separation yields concrete benefits:
-**Testability.** Tests run as pure function calls without sockets, timers,
-or network delays. Coverage is higher, execution is faster, results are
+**Reusability.** The same code works with any I/O framework — Asio, io_uring,
+or platform-specific APIs. Write the HTTP logic once, plug in any transport.
+
+**Testability.** Tests run as pure function calls without sockets, timers, or
+network delays. Coverage is higher, execution is faster, results are
deterministic.
-**Security.** The parser is strict by default. Malformed input that could
-enable request smuggling or header injection is rejected immediately.
+**Security.** The parser is strict by default. Malformed input that could enable
+request smuggling or header injection is rejected immediately.
+
+The execution model is coroutines-only, built on Capy. All asynchronous
+operations are naturally expressed as coroutine awaits over type-erased
+streams that reflect buffer-oriented I/O.
== Requirements
-* C++11 compiler (see tested compilers below)
-* Boost libraries (core, system, optional)
+* C++20 compiler with coroutine support
+* Boost libraries (core, system)
* Link to the static or dynamic library
-The library supports `-fno-exceptions` and detects this automatically.
-
=== Tested Compilers
-* GCC: 5 to 14 (except 8.0.1)
-* Clang: 3.9, 4 to 18
-* MSVC: 14.1 to 14.42
+* GCC: 11 to 14
+* Clang: 14 to 18
+* MSVC: 14.3 to 14.42
== Code Conventions
@@ -122,13 +129,11 @@ Content-Length: 42
== Next Steps
-* xref:http-protocol.adoc[Introduction to HTTP] — understand HTTP sessions and message flow
-* xref:containers.adoc[Containers] — work with requests, responses, and fields
-* xref:parsing.adoc[Parsing] — parse incoming HTTP messages
-* xref:serializing.adoc[Serializing] — produce outgoing HTTP messages
-* xref:router.adoc[Router] — dispatch requests to handlers
-* xref:compression/zlib.adoc[ZLib Compression] — DEFLATE and gzip support
-* xref:compression/brotli.adoc[Brotli Compression] — high-ratio compression
+* xref:2.http-tutorial/2.http-tutorial.adoc[HTTP Tutorial] — learn the HTTP protocol
+* xref:3.messages/3.messages.adoc[HTTP Messages] — containers, serializing, and parsing
+* xref:4.servers/4.servers.adoc[HTTP Servers] — workers, route handlers, routers, and built-in middleware
+* xref:5.compression/5.compression.adoc[Compression] — gzip, deflate, brotli
+* xref:7.reference/7.reference.adoc[Reference] — complete API reference
== Acknowledgments
diff --git a/doc/modules/ROOT/pages/router.adoc b/doc/modules/ROOT/pages/router.adoc
deleted file mode 100644
index b1f220c0..00000000
--- a/doc/modules/ROOT/pages/router.adoc
+++ /dev/null
@@ -1,542 +0,0 @@
-//
-// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
-//
-// Distributed under the Boost Software License, Version 1.0. (See accompanying
-// file LICENSE_1_0.txt or copy at https://www.boost.org/LICENSE_1_0.txt)
-//
-// Official repository: https://github.com/cppalliance/http
-//
-
-= Router
-
-The router is an Express.js-style request dispatcher for HTTP servers. You
-register handlers for path patterns and HTTP methods, then dispatch incoming
-requests. The router matches requests against registered routes and invokes
-the appropriate handlers in order.
-
-Like the rest of this library, the router is Sans-I/O: it handles routing and
-response generation without performing network operations. A separate I/O
-layer manages connections and drives the protocol.
-
-== Quick Start
-
-[source,cpp]
-----
-#include
-
-using namespace boost::http;
-
-int main()
-{
- router r;
-
- r.add(method::get, "/hello",
- [](route_params& p)
- {
- p.status(status::ok);
- p.set_body("Hello, world!");
- return route::send;
- });
-
- r.add(method::get, "/users/:id",
- [](route_params& p)
- {
- auto id = p.param("id");
- p.status(status::ok);
- p.set_body("User: " + std::string(id));
- return route::send;
- });
-
- // Dispatch a request
- route_params params;
- // ... populate params from parsed request ...
- auto result = co_await r.dispatch(method::get, url, params);
-}
-----
-
-== Route Handlers
-
-A handler is any callable that accepts a reference to the params object and
-returns a `route_result`:
-
-[source,cpp]
-----
-route_result handler(route_params& p);
-----
-
-The return value tells the router what to do next:
-
-[cols="1,3"]
-|===
-|Value |Meaning
-
-|`route::send`
-|Response is ready. Send it to the client.
-
-|`route::next`
-|Continue to the next handler in the chain.
-
-|`route::next_route`
-|Skip remaining handlers in this route, try the next route.
-
-|`route::close`
-|Close the connection after sending any response.
-
-|`route::complete`
-|Request fully handled; no response to send.
-
-|`route::detach`
-|Handler took ownership of the session (advanced).
-|===
-
-Most handlers return `route::send` when they produce a response, or
-`route::next` when they perform setup work and defer to later handlers.
-
-== Adding Routes
-
-Use `add()` to register a handler for a specific HTTP method and path:
-
-[source,cpp]
-----
-router.add(method::get, "/users", get_users);
-router.add(method::post, "/users", create_user);
-router.add(method::get, "/users/:id", get_user);
-router.add(method::put, "/users/:id", update_user);
-router.add(method::delete_, "/users/:id", delete_user);
-----
-
-Use `all()` to match any HTTP method:
-
-[source,cpp]
-----
-router.all("/status", check_status);
-----
-
-== Path Patterns
-
-Route paths support named parameters and wildcards:
-
-[cols="1,2,2"]
-|===
-|Pattern |Example URL |Matches
-
-|`/users`
-|`/users`
-|Exact match
-
-|`/users/:id`
-|`/users/42`
-|Named parameter `id` = `"42"`
-
-|`/users/:id/posts/:pid`
-|`/users/42/posts/7`
-|Multiple parameters
-
-|`/files/*path`
-|`/files/docs/readme.txt`
-|Wildcard captures remainder
-|===
-
-For the complete pattern syntax including optional groups, escaping, and
-quoted names, see xref:server/route-patterns.adoc[Route Patterns].
-
-Access captured parameters in handlers:
-
-[source,cpp]
-----
-r.add(method::get, "/users/:id/posts/:pid",
- [](route_params& p)
- {
- auto user_id = p.param("id");
- auto post_id = p.param("pid");
- // ...
- return route::send;
- });
-----
-
-== Fluent Route Interface
-
-The `route()` method returns a fluent interface for registering multiple
-handlers on the same path:
-
-[source,cpp]
-----
-router.route("/users/:id")
- .add(method::get, get_user)
- .add(method::put, update_user)
- .add(method::delete_, delete_user)
- .all(log_access);
-----
-
-This is equivalent to calling `add()` separately for each method, but more
-concise when a path has multiple handlers.
-
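-Written out with `add()` and `all()`, the chain above corresponds roughly
-to the following, with registration order preserved:
-
-[source,cpp]
------
-router.add(method::get, "/users/:id", get_user);
-router.add(method::put, "/users/:id", update_user);
-router.add(method::delete_, "/users/:id", delete_user);
-router.all("/users/:id", log_access);
------
-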
-== Handler Chaining
-
-Multiple handlers can be registered for the same route. They execute in
-order until one returns something other than `route::next`:
-
-[source,cpp]
-----
-router.add(method::get, "/admin",
- [](route_params& p)
- {
- // Authentication check
- if (!is_authenticated(p))
- {
- p.status(status::unauthorized);
- p.set_body("Unauthorized");
- return route::send;
- }
- return route::next;
- },
- [](route_params& p)
- {
- // Authorization check
- if (!is_admin(p))
- {
- p.status(status::forbidden);
- p.set_body("Forbidden");
- return route::send;
- }
- return route::next;
- },
- [](route_params& p)
- {
- // Business logic
- p.status(status::ok);
- p.set_body("Admin panel");
- return route::send;
- });
-----
-
-This pattern separates concerns: authentication, authorization, and business
-logic each have their own handler.
-
-== Middleware
-
-Use `use()` to add middleware that runs for all routes matching a prefix:
-
-[source,cpp]
-----
-// Global middleware (runs for all routes)
-router.use(
- [](route_params& p)
- {
- p.res.set(field::server, "MyApp/1.0");
- return route::next;
- });
-
-// Path-specific middleware
-router.use("/api",
- [](route_params& p)
- {
- // Verify API key
- if (!p.req.exists(field::authorization))
- {
- p.status(status::unauthorized);
- return route::send;
- }
- return route::next;
- });
-----
-
-Middleware registered with `use()` matches prefix patterns. Middleware
-attached to `"/api"` runs for `"/api"`, `"/api/users"`, and `"/api/data"`.
-
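-For example, assuming the `/api` middleware above is registered before the
-route, a request for `/api/users` runs the API-key check first and reaches
-the route handler only if the check returned `route::next`:
-
-[source,cpp]
------
-router.add(method::get, "/api/users",
-    [](route_params& p)
-    {
-        // Runs after the /api middleware has passed the request along
-        p.status(status::ok);
-        p.res.set(field::content_type, "application/json");
-        p.set_body("[]");
-        return route::send;
-    });
------
-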
-== Error Handlers
-
-Register error handlers to catch failures during request processing:
-
-[source,cpp]
-----
-// Global error handler
-router.use(
- [](route_params& p, system::error_code ec)
- {
- p.status(status::internal_server_error);
- p.set_body("Error: " + ec.message());
- return route::send;
- });
-
-// Path-specific error handler
-router.use("/api",
- [](route_params& p, system::error_code ec)
- {
- p.status(status::internal_server_error);
- p.res.set(field::content_type, "application/json");
- p.set_body("{\"error\":\"" + ec.message() + "\"}");
- return route::send;
- });
-----
-
-Error handlers receive the error code that caused the failure. Return
-`route::next` to pass control to the next matching error handler.
-
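-For example, a path-specific error handler can respond to the failures it
-recognizes and defer everything else to the global handler. A sketch, with
-`is_not_found` standing in for whatever check the application uses:
-
-[source,cpp]
------
-router.use("/api",
-    [](route_params& p, system::error_code ec)
-    {
-        if (!is_not_found(ec))
-            return route::next; // let the global error handler respond
-        p.status(status::not_found);
-        p.res.set(field::content_type, "application/json");
-        p.set_body("{\"error\":\"not found\"}");
-        return route::send;
-    });
------
-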
-== Exception Handlers
-
-Register exception handlers with `except()`:
-
-[source,cpp]
-----
-router.except(
- [](route_params& p, std::exception_ptr ep)
- {
- try
- {
- std::rethrow_exception(ep);
- }
- catch (std::exception const& e)
- {
- p.status(status::internal_server_error);
- p.set_body(e.what());
- }
- return route::send;
- });
-----
-
-== Router Options
-
-Configure matching behavior when constructing the router:
-
-[source,cpp]
-----
-router r(
- router_options()
- .case_sensitive(true) // Paths are case-sensitive
- .strict(true)); // Trailing slash matters
-----
-
-[cols="1,1,3"]
-|===
-|Option |Default |Description
-
-|`case_sensitive`
-|`false`
-|When true, `/Users` and `/users` are different routes.
-
-|`strict`
-|`false`
-|When true, `/api` and `/api/` are different routes.
-
-|`merge_params`
-|`false`
-|When true, inherit parameters from parent routers.
-|===
-
-== Nested Routers
-
-Mount routers within routers for modular organization:
-
-[source,cpp]
-----
-// API routes
-router api;
-api.add(method::get, "/users", list_users);
-api.add(method::get, "/posts", list_posts);
-
-// Admin routes
-router admin;
-admin.add(method::get, "/stats", show_stats);
-admin.add(method::post, "/config", update_config);
-
-// Main router
-router app;
-app.use("/api", std::move(api));
-app.use("/admin", std::move(admin));
-----
-
-Routes are composed: `/api/users` matches `list_users`, `/admin/stats`
-matches `show_stats`.
-
-== Dispatching Requests
-
-Dispatch requests directly on the router:
-
-[source,cpp]
-----
-// Build routes
-router r;
-// ... add routes ...
-
-// Dispatch requests
-route_params p;
-p.url = parsed_url;
-p.req = parsed_request;
-
-auto result = co_await r.dispatch(
- p.req.method(),
- p.url,
- p);
-
-switch (result)
-{
-case route::send:
- // p.res contains response to send
- co_await send_response(p.res);
- break;
-
-case route::next:
- // No handler matched - send 404
- send_not_found();
- break;
-
-case route::close:
- // Close connection
- break;
-}
-----
-
-The router internally flattens routes for efficient dispatch. The first
-call to `dispatch()` finalizes the routing table automatically.
-
-== The route_params Object
-
-The standard `route_params` type contains everything handlers need:
-
-[source,cpp]
-----
-class route_params
-{
- urls::url_view url; // Parsed request target
- http::request req; // Request headers
- http::response res; // Response to build
- http::request_parser parser; // For body access
- http::serializer serializer; // For response output
- capy::datastore route_data; // Per-request storage
-    capy::datastore session_data; // Per-session storage
- suspender suspend; // For async operations
- capy::executor_ref ex; // Session executor
-};
-----
-
-Convenience methods simplify common operations:
-
-[source,cpp]
-----
-r.add(method::post, "/data",
- [](route_params& p)
- {
- // Set response status
- p.status(status::created);
-
- // Set response body
- p.set_body("Created");
-
- return route::send;
- });
-----
-
-== Async Operations
-
-Handlers can perform async work using `suspend`:
-
-[source,cpp]
-----
-r.add(method::get, "/slow",
- [](route_params& p)
- {
- return p.suspend(
- [](resumer resume)
- {
- // Called synchronously
- schedule_async_work([resume]()
- {
- // Called later, on completion
- resume(route::send);
- });
- });
- });
-----
-
-== Reading Request Bodies
-
-Use `read_body` for async body reading:
-
-[source,cpp]
-----
-r.add(method::post, "/upload",
- [](route_params& p)
- {
- return p.read_body(
- capy::string_body_sink(),
- [&p](std::string body)
- {
- // Body is now available
- process_upload(body);
- p.status(status::ok);
- return route::send;
- });
- });
-----
-
-== Complete Example
-
-[source,cpp]
-----
-#include
-
-using namespace boost::http;
-
-int main()
-{
- router r;
-
- // Middleware
- r.use([](route_params& p)
- {
- p.res.set(field::server, "MyApp/1.0");
- return route::next;
- });
-
- // Health check
- r.add(method::get, "/health",
- [](route_params& p)
- {
- p.status(status::ok);
- p.set_body("OK");
- return route::send;
- });
-
- // API routes
- r.route("/api/users")
- .add(method::get,
- [](route_params& p)
- {
- p.status(status::ok);
- p.res.set(field::content_type, "application/json");
- p.set_body("[{\"id\":1},{\"id\":2}]");
- return route::send;
- })
- .add(method::post,
- [](route_params& p)
- {
- p.status(status::created);
- return route::send;
- });
-
- r.add(method::get, "/api/users/:id",
- [](route_params& p)
- {
- auto id = p.param("id");
- p.status(status::ok);
- p.set_body("{\"id\":" + std::string(id) + "}");
- return route::send;
- });
-
- // Error handler
- r.use([](route_params& p, system::error_code ec)
- {
- p.status(status::internal_server_error);
- p.set_body(ec.message());
- return route::send;
- });
-
- // ... integrate with your I/O layer ...
-}
-----
-
-== Next Steps
-
-* xref:bcrypt.adoc[BCrypt] — secure password hashing for authentication
-* xref:sans_io_philosophy.adoc[Sans-I/O Philosophy] — design rationale
diff --git a/doc/outline.md b/doc/outline.md
new file mode 100644
index 00000000..e4ac4526
--- /dev/null
+++ b/doc/outline.md
@@ -0,0 +1,188 @@
+# Documentation Outline
+
+## 1. index.adoc — Introduction
+
+- Title + problem statement
+- What This Library Does
+ - HTTP at multiple abstraction levels (low-level to complete clients/servers)
+ - Sans-I/O top to bottom — no dependency on Corosio, Asio, or any transport
+ - Coroutines-only execution model (Capy-based)
+ - Type-erased streams reflecting buffer-oriented I/O
+ - High-level components: file serving, forms, cookies, cryptography
+- What This Library Does Not Do
+- Target Audience
+- Design Philosophy
+- Requirements
+- Code Conventions
+- Quick Example
+- Next Steps
+
+Existing page: `index.adoc` — rewrite in place
+
+## 2. 2.http-tutorial.adoc — HTTP Tutorial
+
+- Educate the reader on the HTTP protocol itself
+- Sessions, request/response model, message structure
+- Methods, status codes, headers
+- Security considerations
+
+Renamed from: `http-protocol.adoc`
+
+## 3. HTTP Messages
+
+### 3. 3.messages.adoc — HTTP Messages (intro)
+
+- Landing page for the section
+- How containers, serializing, and parsing relate
+- Overview of the data model and the two-sided stream architecture
+
+New page (no existing equivalent)
+
+### 3a. 3a.containers.adoc — Containers
+
+- `request`, `response`, `fields`
+- Methods, status codes, reason strings
+- Building and inspecting messages
+
+Renamed from: `containers.adoc`
+
+### 3b. 3b.serializing.adoc — Serializing
+
+- How the serializer works: persistent object tied to the session/socket lifetime
+- Two sides: input (sink — accepts message and body) and output (stream — emits serialized HTTP)
+- General concepts first, then the input side (sink interface), then the output side (stream interface)
+- Chunked encoding, content encoding
+- `Expect: 100-continue` handshake
+
+Renamed from: `serializing.adoc`
+
+### 3c. 3c.parsing.adoc — Parsing
+
+- How the parser works: persistent object tied to the session/socket lifetime
+- Two sides: input (stream — caller provides a read stream the parser draws from) and output (source — parsed message and body)
+- General concepts first, then the input side (stream interface), then the output side (source interface)
+- `request_parser`, `response_parser`
+- Incremental parsing, limits, error handling
+
+Renamed from: `parsing.adoc`
+
+## 4. HTTP Servers
+
+### 4. 4.servers.adoc — HTTP Servers (intro)
+
+- Landing page for the section
+- Overview of the server architecture
+- How the pieces fit together: worker, route handlers, routers, middleware
+
+Renamed from: `server/servers-intro.adoc`
+
+### 4a. 4a.http-worker.adoc — HTTP Worker
+
+- The core server loop
+- Connection management, session lifecycle
+
+New page (no existing equivalent)
+
+### 4b. 4b.route-handlers.adoc — Route Handlers
+
+- Smallest unit of server logic
+- Handler signature, communicating with the server
+- Middleware concepts as they apply to handlers
+
+Renamed from: `server/route-handlers.adoc`
+
+### 4c. 4c.routers.adoc — Routers
+
+- Express.js-style request dispatch
+- Nested routers
+- Middleware composition within routers
+
+Renamed from: `server/routers.adoc` (also merges `router.adoc`)
+
+### 4d. 4d.route-patterns.adoc — Route Patterns
+
+- Pattern syntax: literals, named parameters, wildcards, optional groups
+- Escaping special characters, quoted parameter names
+- Grammar reference
+- Matching behavior, router options
+- Pattern examples and error cases
+
+Renamed from: `server/route-patterns.adoc`
+
+### 4e. 4e.serve-static.adoc — Serving Static Files
+
+- Static file delivery
+- Content types, caching, conditional requests, range requests
+
+Renamed from: `server/serve-static.adoc`
+
+### 4f. 4f.serve-index.adoc — Directory Listings
+
+- Browsable directory listings
+
+Renamed from: `server/serve-index.adoc`
+
+### 4g. 4g.bcrypt.adoc — BCrypt Password Hashing
+
+- Secure password hashing and verification
+
+Renamed from: `bcrypt.adoc`
+
+## 5. Compression
+
+### 5. 5.compression.adoc — Compression (intro)
+
+- Landing page for the section
+- How compression is provided as separate, standalone libraries
+
+New page (no existing equivalent)
+
+### 5a. 5a.zlib.adoc — ZLib
+
+- DEFLATE, zlib format, gzip
+
+Renamed from: `compression/zlib.adoc`
+
+### 5b. 5b.brotli.adoc — Brotli
+
+- High-ratio compression
+
+Renamed from: `compression/brotli.adoc`
+
+## 6. Design
+
+### 6a. 6a.sans-io.adoc — Sans-I/O Philosophy
+
+- What Sans-I/O is and why it matters
+- Reusability, testability, determinism
+- Comparison with I/O-coupled designs
+
+Renamed from: `sans_io_philosophy.adoc`
+
+### 6b. 6b.parser.adoc — Parser
+
+- Comparison to Boost.Beast parser design
+- Memory allocation and utilization
+- Input buffer preparation, two-phase parsing
+- Use cases and interfaces
+
+Renamed from: `design_requirements/parser.adoc`
+
+### 6c. 6c.serializer.adoc — Serializer
+
+- Use cases and interfaces
+- Empty body, WriteSink body, BufferSink body
+
+Renamed from: `design_requirements/serializer.adoc`
+
+## 7. 7.reference.adoc — API Reference
+
+Existing page: `reference.adoc` — renamed
+
+---
+
+## Unmapped Existing Pages
+
+These pages do not map to a section in the new outline:
+
+- **`router.adoc`** — Standalone router page. Material merged into Routers (4c).