Merged
3 changes: 3 additions & 0 deletions _typos.toml
@@ -15,3 +15,6 @@ exten = "exten"
invokable = "invokable"
typ = "typ"
Rabit = "Rabit"
byted = "byted"
Byted = "Byted"
bytedgpt = "bytedgpt"
64 changes: 32 additions & 32 deletions content/en/docs/eino/Eino: Cookbook.md

Large diffs are not rendered by default.

163 changes: 97 additions & 66 deletions content/en/docs/eino/FAQ.md

Large diffs are not rendered by default.

84 changes: 42 additions & 42 deletions content/en/docs/eino/core_modules/components/chat_model_guide.md
@@ -3,18 +3,18 @@ Description: ""
date: "2026-01-20"
lastmod: ""
tags: []
title: 'Eino: ChatModel Guide'
weight: 1
title: 'Eino: ChatModel User Guide'
weight: 8
---

## Overview
## Introduction

The Model component is used to interact with large language models. Its main purpose is to send user input messages to the language model and obtain the model's response. This component plays an important role in the following scenarios:
The Model component is a component for interacting with large language models. Its main purpose is to send user input messages to the language model and obtain the model's response. This component plays an important role in the following scenarios:

- Natural language conversations
- Natural language dialogue
- Text generation and completion
- Tool call parameter generation
- Multimodal interactions (text, images, audio, etc.)
- Parameter generation for tool calls
- Multimodal interaction (text, images, audio, etc.)

## Component Definition

@@ -42,8 +42,8 @@ type ToolCallingChatModel interface {

- Function: Generate a complete model response
- Parameters:
- ctx: Context object for passing request-level information, also used to pass the Callback Manager
- input: List of input messages
- ctx: Context object for passing request-level information and Callback Manager
- input: Input message list
- opts: Optional parameters for configuring model behavior
- Return values:
- `*schema.Message`: The response message generated by the model
@@ -52,7 +52,7 @@ type ToolCallingChatModel interface {
#### Stream Method

- Function: Generate model response in streaming mode
- Parameters: Same as the Generate method
- Parameters: Same as Generate method
- Return values:
- `*schema.StreamReader[*schema.Message]`: Stream reader for model response
- error: Error information during generation
@@ -72,7 +72,7 @@ type ToolCallingChatModel interface {

```go
type Message struct {
// Role indicates the role of the message (system/user/assistant/tool)
// Role represents the message role (system/user/assistant/tool)
Role RoleType
// Content is the text content of the message
Content string
@@ -82,18 +82,18 @@ type Message struct {
// UserInputMultiContent stores user input multimodal data, supporting text, images, audio, video, files
// When using this field, the model role is restricted to User
UserInputMultiContent []MessageInputPart
// AssistantGenMultiContent holds multimodal data output by the model, supporting text, images, audio, video
// AssistantGenMultiContent stores model output multimodal data, supporting text, images, audio, video
// When using this field, the model role is restricted to Assistant
AssistantGenMultiContent []MessageOutputPart
// Name is the sender name of the message
Name string
// ToolCalls is the tool call information in assistant messages
ToolCalls []ToolCall
// ToolCallID is the tool call ID for tool messages
// ToolCallID is the tool call ID of tool messages
ToolCallID string
// ResponseMeta contains response metadata
ResponseMeta *ResponseMeta
// Extra is used to store additional information
// Extra stores additional information
Extra map[string]any
}
```
@@ -102,8 +102,8 @@ The Message struct is the basic structure for model interaction, supporting:

- Multiple roles: system, user, assistant (ai), tool
- Multimodal content: text, images, audio, video, files
- Tool calls: Support for model calling external tools and functions
- Metadata: Including response reason, token usage statistics, etc.
- Tool calls: supports model calling external tools and functions
- Metadata: includes response reason, token usage statistics, etc.

### Common Options

@@ -121,12 +121,12 @@ type Options struct {
Model *string
// TopP controls the diversity of output
TopP *float32
// Stop specifies the conditions to stop generation
// Stop specifies conditions to stop generation
Stop []string
}
```

Options can be set using the following methods:
Options can be set in the following ways:

```go
// Set temperature
@@ -183,7 +183,7 @@ import (
"github.com/cloudwego/eino/schema"
)

// Initialize model (using OpenAI as an example)
// Initialize model (using openai as example)
cm, err := openai.NewChatModel(ctx, &openai.ChatModelConfig{
// Configuration parameters
})
@@ -192,11 +192,11 @@ cm, err := openai.NewChatModel(ctx, &openai.ChatModelConfig{
messages := []*schema.Message{
{
Role: schema.System,
Content: "你是一个有帮助的助手。",
Content: "You are a helpful assistant.",
},
{
Role: schema.User,
Content: "你好!",
Content: "Hello!",
},
}

@@ -282,7 +282,7 @@ handler := &callbacksHelper.ModelCallbackHandler{
return ctx
},
OnEnd: func(ctx context.Context, info *callbacks.RunInfo, output *model.CallbackOutput) context.Context {
fmt.Printf("Generation complete, Token usage: %+v\n", output.TokenUsage)
fmt.Printf("Generation complete, token usage: %+v\n", output.TokenUsage)
return ctx
},
OnEndWithStreamOutput: func(ctx context.Context, info *callbacks.RunInfo, output *schema.StreamReader[*model.CallbackOutput]) context.Context {
@@ -336,22 +336,22 @@ result, err := runnable.Invoke(ctx, messages, compose.WithCallbacks(helper))

## **Existing Implementations**

1. OpenAI ChatModel: Using OpenAI's GPT series models [ChatModel - OpenAI](/docs/eino/ecosystem_integration/chat_model/chat_model_openai)
2. Ollama ChatModel: Using Ollama local models [ChatModel - Ollama](/docs/eino/ecosystem_integration/chat_model/chat_model_ollama)
3. ARK ChatModel: Using ARK platform model services [ChatModel - ARK](/docs/eino/ecosystem_integration/chat_model/chat_model_ark)
1. OpenAI ChatModel: Using OpenAI's GPT series models [ChatModel - OpenAI](https://bytedance.larkoffice.com/wiki/NguEw85n6iJjShkVtdQcHpydnld)
2. Ollama ChatModel: Using Ollama local models [ChatModel - Ollama](https://bytedance.larkoffice.com/wiki/WWngw1XMViwgyYkNuZgcjZnxnke)
3. ARK ChatModel: Using ARK platform model services [ChatModel - ARK](https://bytedance.larkoffice.com/wiki/WUzzwaX8ricGwZk1i1mcJHHNnEl)
4. More: [Eino ChatModel](https://www.cloudwego.io/docs/eino/ecosystem_integration/chat_model/)

## Custom Implementation Reference
## Implementation Reference

When implementing a custom ChatModel component, pay attention to the following points:
When implementing a custom ChatModel component, note the following:

1. Make sure to implement common options
2. Make sure to implement the callback mechanism
3. Remember to close the writer after streaming output is complete
2. Make sure to implement callback mechanism
3. Remember to close the writer after completing output in streaming mode

### Option Mechanism

If a custom ChatModel needs Options beyond the common Options, you can use the component abstraction utility functions to implement custom Options, for example:
A custom ChatModel can use the component abstraction utility functions to implement custom Options when it needs Options beyond the common ones, for example:

```go
import (
@@ -445,7 +445,7 @@ func NewMyChatModel(config *MyChatModelConfig) (*MyChatModel, error) {
}

func (m *MyChatModel) Generate(ctx context.Context, messages []*schema.Message, opts ...model.Option) (*schema.Message, error) {
// 1. Process options
// 1. Handle options
options := &MyChatModelOptions{
Options: &model.Options{
Model: &m.model,
@@ -456,7 +456,7 @@ func (m *MyChatModel) Generate(ctx context.Context, messages []*schema.Message,
options.Options = model.GetCommonOptions(options.Options, opts...)
options = model.GetImplSpecificOptions(options, opts...)

// 2. Callback before starting generation
// 2. Callback before generation starts
ctx = callbacks.OnStart(ctx, &model.CallbackInput{
Messages: messages,
Config: &model.Config{
@@ -467,7 +467,7 @@ func (m *MyChatModel) Generate(ctx context.Context, messages []*schema.Message,
// 3. Execute generation logic
response, err := m.doGenerate(ctx, messages, options)

// 4. Handle error and completion callbacks
// 4. Handle errors and completion callback
if err != nil {
ctx = callbacks.OnError(ctx, err)
return nil, err
@@ -481,7 +481,7 @@ func (m *MyChatModel) Generate(ctx context.Context, messages []*schema.Message,
}

func (m *MyChatModel) Stream(ctx context.Context, messages []*schema.Message, opts ...model.Option) (*schema.StreamReader[*schema.Message], error) {
// 1. Process options
// 1. Handle options
options := &MyChatModelOptions{
Options: &model.Options{
Model: &m.model,
@@ -492,7 +492,7 @@ func (m *MyChatModel) Stream(ctx context.Context, messages []*schema.Message, op
options.Options = model.GetCommonOptions(options.Options, opts...)
options = model.GetImplSpecificOptions(options, opts...)

// 2. Callback before starting streaming generation
// 2. Callback before streaming generation starts
ctx = callbacks.OnStart(ctx, &model.CallbackInput{
Messages: messages,
Config: &model.Config{
@@ -501,18 +501,18 @@ func (m *MyChatModel) Stream(ctx context.Context, messages []*schema.Message, op
})

// 3. Create streaming response
// Pipe produces a StreamReader and a StreamWriter; writing to StreamWriter can be read from StreamReader, both are concurrency-safe.
// The implementation asynchronously writes generated content to StreamWriter and returns StreamReader as the return value
// ***StreamReader is a data stream that can only be read once. When implementing Callback yourself, you need to pass the data stream to callback via OnEndWithCallbackOutput and also return a data stream, requiring a copy of the data stream
// Considering this scenario always requires copying the data stream, the OnEndWithStreamOutput function will copy internally and return an unread stream
// The following code demonstrates one stream processing approach; the processing method is not unique
	// Pipe produces a StreamReader and a StreamWriter. Writes to the StreamWriter can be read from the StreamReader; both are concurrency-safe.
	// In the implementation, write generated content to the StreamWriter asynchronously and return the StreamReader as the return value.
	// ***StreamReader is a data stream that can only be read once. When implementing the callback yourself, you need to pass the data stream to the callback via OnEndWithStreamOutput and also return a data stream, which requires copying the stream.
	// Since this scenario always requires copying the stream, the OnEndWithStreamOutput function copies internally and returns an unread stream.
	// The following code demonstrates one way to handle the stream; it is not the only approach.
sr, sw := schema.Pipe[*model.CallbackOutput](1)

// 4. Start asynchronous generation
// 4. Start async generation
go func() {
defer sw.Close()

// Stream writing
// Stream write
m.doStream(ctx, messages, options, sw)
}()
