From ad4831bf243d2af338f8a01d5762dc444c62dbc0 Mon Sep 17 00:00:00 2001
From: Nigel Jones
Date: Wed, 18 Mar 2026 11:25:14 +0000
Subject: [PATCH 01/11] docs: condense README to elevator pitch (#478)

---
 README.md | 206 +++++++++++------------------------------------
 1 file changed, 39 insertions(+), 167 deletions(-)

diff --git a/README.md b/README.md
index ff5a7682e..442d8fa55 100644
--- a/README.md
+++ b/README.md
@@ -1,14 +1,15 @@
-
+Mellea logo
-# Mellea
-
-Mellea is a library for writing generative programs.
-Generative programming replaces flaky agents and brittle prompts
-with structured, maintainable, robust, and efficient AI workflows.
+# Mellea — build predictable AI without guesswork
+Inside every AI-powered pipeline, the unreliable part is the same: the LLM call itself.
+Silent failures, untestable outputs, no guarantees.
+Mellea wraps those calls in Python you can read, test, and reason about —
+type-annotated outputs, verifiable requirements, automatic retries.
 
 [//]: # ([![arXiv](https://img.shields.io/badge/arXiv-2408.09869-b31b1b.svg)](https://arxiv.org/abs/2408.09869))
-[![Docs](https://img.shields.io/badge/docs-live-brightgreen)](https://docs.mellea.ai/)
+[![Website](https://img.shields.io/badge/website-mellea.ai-blue)](https://mellea.ai/)
+[![Docs](https://img.shields.io/badge/docs-docs.mellea.ai-brightgreen)](https://docs.mellea.ai/)
 [![PyPI version](https://img.shields.io/pypi/v/mellea)](https://pypi.org/project/mellea/)
 [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/mellea)](https://pypi.org/project/mellea/)
 [![uv](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/uv/main/assets/badge/v0.json)](https://github.com/astral-sh/uv)
@@ -18,189 +19,60 @@
 [![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-3.0-4baaaa.svg)](CODE_OF_CONDUCT.md)
 [![Discord](https://img.shields.io/discord/1448407063813165219?logo=discord&logoColor=white&label=Discord&color=7289DA)](https://ibm.biz/mellea-discord)
-
-## Features
-
- * A standard library of opinionated prompting patterns.
- * Sampling strategies for inference-time scaling.
- * Clean integration between verifiers and samplers.
-   - Batteries-included library of verifiers.
-   - Support for efficient checking of specialized requirements using
-     activated LoRAs.
-   - Train your own verifiers on proprietary classifier data.
- * Compatible with many inference services and model families. Control cost
-   and quality by easily lifting and shifting workloads between:
-   - inference providers
-   - model families
-   - model sizes
 * Easily integrate the power of LLMs into legacy code-bases (mify).
- * Sketch applications by writing specifications and letting `mellea` fill in
-   the details (generative slots).
- * Get started by decomposing your large unwieldy prompts into structured and maintainable mellea problems.
-
-
-
-## Getting Started
-
-You can get started with a local install, or by using Colab notebooks.
-
-### Getting Started with Local Inference
-
-
-
-Install with [uv](https://docs.astral.sh/uv/getting-started/installation/):
+## Install
 
 ```bash
 uv pip install mellea
 ```
 
-Install with pip:
+See [installation docs](https://docs.mellea.ai/getting-started/installation) for extras (`[hf]`, `[watsonx]`, `[docling]`, `[all]`, …) and source installation.
 
-```bash
-pip install mellea
-```
-
-> [!NOTE]
-> `mellea` comes with some additional packages as defined in our `pyproject.toml`. If you would like to install all the extra optional dependencies, please run the following commands:
->
-> ```bash
-> uv pip install "mellea[hf]" # for Huggingface extras and Alora capabilities
-> uv pip install "mellea[watsonx]" # for watsonx backend
-> uv pip install "mellea[docling]" # for docling
-> uv pip install "mellea[smolagents]" # for HuggingFace smolagents tools
-> uv pip install "mellea[all]" # for all the optional dependencies
-> ```
->
-> You can also install all the optional dependencies with `uv sync --all-extras`
-
-> [!NOTE]
-> If running on an Intel mac, you may get errors related to torch/torchvision versions. Conda maintains updated versions of these packages. You will need to create a conda environment and run `conda install 'torchvision>=0.22.0'` (this should also install pytorch and torchvision-extra). Then, you should be able to run `uv pip install mellea`. To run the examples, you will need to use `python ` inside the conda environment instead of `uv run --with mellea `.
-
-> [!NOTE]
-> If you are using python >= 3.13, you may encounter an issue where outlines cannot be installed due to rust compiler issues (`error: can't find Rust compiler`). You can either downgrade to python 3.12 or install the [rust compiler](https://www.rust-lang.org/tools/install) to build the wheel for outlines locally.
-
-For running a simple LLM request locally (using Ollama with Granite model), this is the starting code:
-```python
-# file: https://github.com/generative-computing/mellea/blob/main/docs/examples/tutorial/example.py
-import mellea
-
-m = mellea.start_session()
-print(m.chat("What is the etymology of mellea?").content)
-```
-
-Then run it:
-> [!NOTE]
-> Before we get started, you will need to download and install [ollama](https://ollama.com/). Mellea can work with many different types of backends, but everything in this tutorial will "just work" on a Macbook running IBM's Granite 4 Micro 3B model.
-```shell
-uv run --with mellea docs/examples/tutorial/example.py
-```
-
-### Get Started with Colab
-
-| Notebook | Try in Colab | Goal |
-|----------|--------------|------|
-| Hello, World | Open In Colab | Quick‑start demo |
-| Simple Email | Open In Colab | Using the `m.instruct` primitive |
-| Instruct-Validate-Repair | Open In Colab | Introduces our first generative programming design pattern |
-| Model Options | Open In Colab | Demonstrates how to pass model options through to backends |
-| Sentiment Classifier | Open In Colab | Introduces the `@generative` decorator |
-| Managing Context | Open In Colab | Shows how to construct and manage context in a `MelleaSession` |
-| Generative OOP | Open In Colab | Demonstrates object-oriented generative programming in Mellea |
-| Rich Documents | Open In Colab | A generative program that uses Docling to work with rich-text documents |
-| Composing Generative Functions | Open In Colab | Demonstrates contract-oriented programming in Mellea |
-| `m serve` | Open In Colab | Serve a generative program as an openai-compatible model endpoint |
-| MCP | Open In Colab | Mellea + MCP |
-
-
-### Installing from Source
-
-If you want to contribute to Mellea or need the latest development version, see the
-[Getting Started](CONTRIBUTING.md#getting-started) section in our Contributing Guide for
-detailed installation instructions.
-
-## Getting started with validation
-
-Mellea supports validation of generation results through a **instruct-validate-repair** pattern.
-Below, the request for *"Write an email.."* is constrained by the requirements of *"be formal"* and *"Use 'Dear interns' as greeting."*.
-Using a simple rejection sampling strategy, the request is sent up to three (loop_budget) times to the model and
-the output is checked against the constraints using (in this case) LLM-as-a-judge.
-
-
-```python
-# file: https://github.com/generative-computing/mellea/blob/main/docs/examples/instruct_validate_repair/101_email_with_validate.py
-from mellea import MelleaSession
-from mellea.backends import ModelOption
-from mellea.backends.ollama import OllamaModelBackend
-from mellea.backends import model_ids
-from mellea.stdlib.sampling import RejectionSamplingStrategy
-
-# create a session with Mistral running on Ollama
-m = MelleaSession(
-    backend=OllamaModelBackend(
-        model_id=model_ids.MISTRALAI_MISTRAL_0_3_7B,
-        model_options={ModelOption.MAX_NEW_TOKENS: 300},
-    )
-)
-
-# run an instruction with requirements
-email_v1 = m.instruct(
-    "Write an email to invite all interns to the office party.",
-    requirements=["be formal", "Use 'Dear interns' as greeting."],
-    strategy=RejectionSamplingStrategy(loop_budget=3),
-)
-
-# print result
-print(f"***** email ****\n{str(email_v1)}\n*******")
-```
-
-
-## Getting Started with Generative Slots
-
-Generative slots allow you to define functions without implementing them.
-The `@generative` decorator marks a function as one that should be interpreted by querying an LLM.
-The example below demonstrates how an LLM's sentiment classification
-capability can be wrapped up as a function using Mellea's generative slots and
-a local LLM.
+## Example
+The `@generative` decorator turns a typed Python function into a structured LLM call.
+Docstrings become prompts, type hints become schemas — no parsers, no chains:
 
 ```python
-# file: https://github.com/generative-computing/mellea/blob/main/docs/examples/tutorial/sentiment_classifier.py#L1-L13
-from typing import Literal
+from pydantic import BaseModel
 from mellea import generative, start_session
 
+class UserProfile(BaseModel):
+    name: str
+    age: int
 
 @generative
-def classify_sentiment(text: str) -> Literal["positive", "negative"]:
-    """Classify the sentiment of the input text as 'positive' or 'negative'."""
+def extract_user(text: str) -> UserProfile:
+    """Extract the user's name and age from the text."""
 
-
-if __name__ == "__main__":
-    m = start_session()
-    sentiment = classify_sentiment(m, text="I love this!")
-    print("Output sentiment is:", sentiment)
+m = start_session()
+user = extract_user(m, text="User log 42: Alice is 31 years old.")
+print(user.name)  # Alice
+print(user.age)  # 31 — always an int, guaranteed by the schema
 ```
 
+## Learn More
+
+| Resource | |
+|---|---|
+| [mellea.ai](https://mellea.ai) | Vision, features, and live demos |
+| [docs.mellea.ai](https://docs.mellea.ai) | Full docs — tutorials, API reference, how-to guides |
+| [Colab notebooks](docs/examples/notebooks/) | Interactive examples you can run immediately |
+| [Code examples](docs/examples/) | Runnable examples: RAG, agents, IVR, MObjects, and more |
 
 ## Contributing
 
-We welcome contributions to Mellea! There are several ways to contribute:
+We welcome contributions of all kinds — bug fixes, new backends, standard library components, examples, and docs.
 
-1. **Contributing to this repository** - Core features, bug fixes, standard library components
-2. **Applications & Libraries** - Build tools using Mellea (host in your own repo with `mellea-` prefix)
-3. **Community Components** - Contribute to [mellea-contribs](https://github.com/generative-computing/mellea-contribs)
-
-Please see our **[Contributing Guide](CONTRIBUTING.md)** for detailed information on:
-- Getting started with development
-- Coding standards and workflow
-- Testing guidelines
-- How to contribute specific types of components
+- **[Contributing Guide](https://docs.mellea.ai/community/contributing-guide)** — development setup, workflow, and coding standards
+- **[Building Extensions](https://docs.mellea.ai/community/building-extensions)** — create reusable components in your own repo
+- **[mellea-contribs](https://github.com/generative-computing/mellea-contribs)** — community library for shared components
 
-Questions? Join our [Discord](https://ibm.biz/mellea-discord)!
+Questions? Join our [Discord](https://ibm.biz/mellea-discord).
 
 ### IBM ❤️ Open Source AI
 
-Mellea has been started by IBM Research in Cambridge, MA.
-
+Mellea was started by IBM Research in Cambridge, MA.
+
+---
+Licensed under the [Apache-2.0 License](LICENSE). Copyright © 2026 Mellea.

From 5b1eb5f44f06a646aced3a8fceb0a7ebd46e8faf Mon Sep 17 00:00:00 2001
From: Nigel Jones
Date: Wed, 18 Mar 2026 11:25:48 +0000
Subject: [PATCH 02/11] docs: link contributing guide to CONTRIBUTING.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 442d8fa55..464cf8627 100644
--- a/README.md
+++ b/README.md
@@ -63,7 +63,7 @@
 
 We welcome contributions of all kinds — bug fixes, new backends, standard library components, examples, and docs.
 
-- **[Contributing Guide](https://docs.mellea.ai/community/contributing-guide)** — development setup, workflow, and coding standards
+- **[Contributing Guide](CONTRIBUTING.md)** — development setup, workflow, and coding standards

From 6a8d35f9bda21763dbd204fe4fdecf4fdb7fda93 Mon Sep 17 00:00:00 2001
From: Nigel Jones
Date: Wed, 18 Mar 2026 11:29:12 +0000
Subject: [PATCH 03/11] docs: fix license badge link, vision statement, IVR spelling, wording tweaks

---
 README.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index 464cf8627..1443f6e65 100644
--- a/README.md
+++ b/README.md
@@ -4,8 +4,8 @@
 
 Inside every AI-powered pipeline, the unreliable part is the same: the LLM call itself.
 Silent failures, untestable outputs, no guarantees.
-Mellea wraps those calls in Python you can read, test, and reason about —
-type-annotated outputs, verifiable requirements, automatic retries.
+Mellea is a Python library for writing *generative programs* — replacing brittle prompts and flaky agents
+with structured, testable AI workflows built around type-annotated outputs, verifiable requirements, and automatic retries.
 
 [//]: # ([![arXiv](https://img.shields.io/badge/arXiv-2408.09869-b31b1b.svg)](https://arxiv.org/abs/2408.09869))
 [![Website](https://img.shields.io/badge/website-mellea.ai-blue)](https://mellea.ai/)
 [![Docs](https://img.shields.io/badge/docs-docs.mellea.ai-brightgreen)](https://docs.mellea.ai/)
 [![PyPI version](https://img.shields.io/pypi/v/mellea)](https://pypi.org/project/mellea/)
 [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/mellea)](https://pypi.org/project/mellea/)
 [![uv](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/uv/main/assets/badge/v0.json)](https://github.com/astral-sh/uv)
 [![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
 [![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://github.com/pre-commit/pre-commit)
-[![GitHub License](https://img.shields.io/github/license/generative-computing/mellea)](https://img.shields.io/github/license/generative-computing/mellea)
+[![GitHub License](https://img.shields.io/github/license/generative-computing/mellea)](https://github.com/generative-computing/mellea/blob/main/LICENSE)
 [![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-3.0-4baaaa.svg)](CODE_OF_CONDUCT.md)
 [![Discord](https://img.shields.io/discord/1448407063813165219?logo=discord&logoColor=white&label=Discord&color=7289DA)](https://ibm.biz/mellea-discord)
@@ -30,7 +30,7 @@
 ## Example
 The `@generative` decorator turns a typed Python function into a structured LLM call.
-Docstrings become prompts, type hints become schemas — no parsers, no chains:
+Docstrings become prompts, type hints become schemas — no templates, no parsers:
 
 ```python
 from pydantic import BaseModel
 from mellea import generative, start_session
 class UserProfile(BaseModel):
     name: str
     age: int
@@ -57,7 +57,7 @@
 | [mellea.ai](https://mellea.ai) | Vision, features, and live demos |
 | [docs.mellea.ai](https://docs.mellea.ai) | Full docs — tutorials, API reference, how-to guides |
 | [Colab notebooks](docs/examples/notebooks/) | Interactive examples you can run immediately |
-| [Code examples](docs/examples/) | Runnable examples: RAG, agents, IVR, MObjects, and more |
+| [Code examples](docs/examples/) | Runnable examples: RAG, agents, Instruct-Validate-Repair (IVR), MObjects, and more |

From f22963b0e542c9ce6b3e5b7f1f3724628d7d49fd Mon Sep 17 00:00:00 2001
From: Nigel Jones
Date: Wed, 18 Mar 2026 11:30:35 +0000
Subject: [PATCH 04/11] docs: replace Discord link with GitHub Discussions

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 1443f6e65..cf26811ab 100644
--- a/README.md
+++ b/README.md
@@ -67,7 +67,7 @@ We welcome contributions of all kinds — bug fixes, new backends, standard libr
 - **[Building Extensions](https://docs.mellea.ai/community/building-extensions)** — create reusable components in your own repo
 - **[mellea-contribs](https://github.com/generative-computing/mellea-contribs)** — community library for shared components
 
-Questions? Join our [Discord](https://ibm.biz/mellea-discord).
+Questions? Open a [GitHub Discussion](https://github.com/generative-computing/mellea/discussions).
 ### IBM ❤️ Open Source AI

From b11771dd2ae996190eca5913f237ed4c70ff7962 Mon Sep 17 00:00:00 2001
From: Nigel Jones
Date: Wed, 18 Mar 2026 11:31:23 +0000
Subject: [PATCH 05/11] docs: remove Discord badge

---
 README.md | 1 -
 1 file changed, 1 deletion(-)

diff --git a/README.md b/README.md
index cf26811ab..c856984c1 100644
--- a/README.md
+++ b/README.md
@@ -17,7 +17,6 @@
 [![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://github.com/pre-commit/pre-commit)
 [![GitHub License](https://img.shields.io/github/license/generative-computing/mellea)](https://github.com/generative-computing/mellea/blob/main/LICENSE)
 [![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-3.0-4baaaa.svg)](CODE_OF_CONDUCT.md)
-[![Discord](https://img.shields.io/discord/1448407063813165219?logo=discord&logoColor=white&label=Discord&color=7289DA)](https://ibm.biz/mellea-discord)

From 00460cdaa1f4c40ffa37476049d4447c5ee0d8bc Mon Sep 17 00:00:00 2001
From: Nigel Jones
Date: Wed, 18 Mar 2026 11:34:51 +0000
Subject: [PATCH 06/11] docs: use GitHub Discussions, fix table header

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index c856984c1..4fb7ef9ce 100644
--- a/README.md
+++ b/README.md
@@ -51,7 +51,7 @@
 ## Learn More
 
-| Resource | |
+| Resource | Description |
 |---|---|
 | [mellea.ai](https://mellea.ai) | Vision, features, and live demos |
 | [docs.mellea.ai](https://docs.mellea.ai) | Full docs — tutorials, API reference, how-to guides |
 | [Colab notebooks](docs/examples/notebooks/) | Interactive examples you can run immediately |
 | [Code examples](docs/examples/) | Runnable examples: RAG, agents, Instruct-Validate-Repair (IVR), MObjects, and more |
@@ -66,7 +66,7 @@ We welcome contributions of all kinds — bug fixes, new backends, standard libr
 - **[Building Extensions](https://docs.mellea.ai/community/building-extensions)** — create reusable components in your own repo
 - **[mellea-contribs](https://github.com/generative-computing/mellea-contribs)** — community library for shared components
 
-Questions? Open a [GitHub Discussion](https://github.com/generative-computing/mellea/discussions).
+Questions? See [GitHub Discussions](https://github.com/generative-computing/mellea/discussions).

From 5d6c0d59e31427a4bed728bccc40cda956b1117f Mon Sep 17 00:00:00 2001
From: Nigel Jones
Date: Wed, 18 Mar 2026 11:35:06 +0000
Subject: [PATCH 07/11] docs: fix landing page description

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 4fb7ef9ce..c0324c3e1 100644
--- a/README.md
+++ b/README.md
@@ -53,7 +53,7 @@
 | Resource | Description |
 |---|---|
-| [mellea.ai](https://mellea.ai) | Vision, features, and live demos |
+| [mellea.ai](https://mellea.ai) | Vision and features |
 | [docs.mellea.ai](https://docs.mellea.ai) | Full docs — tutorials, API reference, how-to guides |
 | [Colab notebooks](docs/examples/notebooks/) | Interactive examples you can run immediately |
 | [Code examples](docs/examples/) | Runnable examples: RAG, agents, Instruct-Validate-Repair (IVR), MObjects, and more |

From 9f791edd1d11833c6b8aa2fc30bde97d1418f2ab Mon Sep 17 00:00:00 2001
From: Nigel Jones
Date: Wed, 18 Mar 2026 11:42:32 +0000
Subject: [PATCH 08/11] docs: add capabilities section, fix table style

---
 README.md | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index c0324c3e1..787098c35 100644
--- a/README.md
+++ b/README.md
@@ -49,10 +49,19 @@
 print(user.name)  # Alice
 print(user.age)  # 31 — always an int, guaranteed by the schema
 ```
 
+## What Mellea Does
+
+- **Structured output** — `@generative` turns typed functions into LLM calls; Pydantic schemas are enforced at generation time
+- **Requirements & repair** — attach natural-language requirements to any call; Mellea validates and retries automatically
+- **Sampling strategies** — rejection sampling, majority voting, inference-time scaling with one parameter change
+- **Multiple backends** — Ollama, OpenAI, vLLM, HuggingFace, WatsonX, LiteLLM, Bedrock
+- **Legacy integration** — drop Mellea into existing codebases with `mify`
+- **MCP compatible** — expose any generative program as an MCP tool
+
 ## Learn More
 
 | Resource | Description |
-|---|---|
+| --- | --- |

From b7085175dd4604028ad2fe486247ae5eac9cf847 Mon Sep 17 00:00:00 2001
From: Nigel Jones
Date: Wed, 18 Mar 2026 12:43:45 +0000
Subject: [PATCH 09/11] Update README.md

Co-authored-by: Paul Schweigert

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 787098c35..95be4943a 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
 # Mellea — build predictable AI without guesswork
 
-Inside every AI-powered pipeline, the unreliable part is the same: the LLM call itself.
+Inside every AI-powered pipeline, the unreliable part is the same: the LLM calls itself.
 Silent failures, untestable outputs, no guarantees.
 Mellea is a Python library for writing *generative programs* — replacing brittle prompts and flaky agents
 with structured, testable AI workflows built around type-annotated outputs, verifiable requirements, and automatic retries.
From 39158e0cd2a73bdf204fe4fdecf4fdb7fda92 Mon Sep 17 00:00:00 2001
From: Nigel Jones
Date: Wed, 18 Mar 2026 12:46:41 +0000
Subject: [PATCH 10/11] Update README.md

Co-authored-by: Paul Schweigert

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 95be4943a..5074d8058 100644
--- a/README.md
+++ b/README.md
@@ -55,7 +55,7 @@
 - **Requirements & repair** — attach natural-language requirements to any call; Mellea validates and retries automatically
 - **Sampling strategies** — rejection sampling, majority voting, inference-time scaling with one parameter change
 - **Multiple backends** — Ollama, OpenAI, vLLM, HuggingFace, WatsonX, LiteLLM, Bedrock
-- **Legacy integration** — drop Mellea into existing codebases with `mify`
+- **Legacy integration** — easily drop Mellea into existing codebases with `mify`
 - **MCP compatible** — expose any generative program as an MCP tool

From 08e2a8014e29d499ecdf6784fcab1f0786a48570 Mon Sep 17 00:00:00 2001
From: Nigel Jones
Date: Wed, 18 Mar 2026 13:03:23 +0000
Subject: [PATCH 11/11] docs: fix grammar, clarify sampling strategies description

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 5074d8058..6c8cfea5f 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
 # Mellea — build predictable AI without guesswork
 
-Inside every AI-powered pipeline, the unreliable part is the same: the LLM calls itself.
+Inside every AI-powered pipeline, the unreliable part is the same: the LLM call itself.
 Silent failures, untestable outputs, no guarantees.
 Mellea is a Python library for writing *generative programs* — replacing brittle prompts and flaky agents
 with structured, testable AI workflows built around type-annotated outputs, verifiable requirements, and automatic retries.
 
@@ -53,7 +53,7 @@ print(user.age)  # 31 — always an int, guaranteed by the schema
 - **Structured output** — `@generative` turns typed functions into LLM calls; Pydantic schemas are enforced at generation time
 - **Requirements & repair** — attach natural-language requirements to any call; Mellea validates and retries automatically
-- **Sampling strategies** — rejection sampling, majority voting, inference-time scaling with one parameter change
+- **Sampling strategies** — run a generation multiple times and pick the best result; swap between rejection sampling, majority voting, and more with one parameter change
 - **Multiple backends** — Ollama, OpenAI, vLLM, HuggingFace, WatsonX, LiteLLM, Bedrock
 - **Legacy integration** — easily drop Mellea into existing codebases with `mify`
 - **MCP compatible** — expose any generative program as an MCP tool