client.prompts.log(...)
-
-
-
Log to a Prompt.
You can use the query parameters version_id or environment to target an existing version of the Prompt. Otherwise, the default deployed version will be chosen.
Instead of targeting an existing version explicitly, you can pass Prompt details in the request body. In this case, we will check whether the details correspond to an existing version of the Prompt; if they do not, we will create a new version. This is helpful when you are storing or deriving your Prompt details in code.
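A minimal sketch of these targeting options (every identifier below is a placeholder, and the client call is left commented because it needs a configured client):

```python
# Three ways to target a version when logging; all IDs are placeholders.
by_version = {"version_id": "prv_XXXXXXXX", "output": "..."}
by_environment = {"environment": "production", "path": "persona", "output": "..."}
# Or pass Prompt details inline; a new version is created if they do not
# match an existing one:
by_details = {
    "path": "persona",
    "prompt": {
        "model": "gpt-4",
        "template": [{"role": "system", "content": "You are {{person}}."}],
    },
    "output": "...",
}
# client.prompts.log(**by_details)
```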
-
-
-
from humanloop import Humanloop
import datetime

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.log(
    path='persona',
    prompt={
        'model': 'gpt-4',
        'template': [
            {
                'role': "system",
                'content': 'You are {{person}}. Answer questions as this person. Do not break character.',
            }
        ],
    },
    messages=[{'role': "user", 'content': 'What really happened at Roswell?'}],
    inputs={'person': 'Trump'},
    created_at=datetime.datetime.fromisoformat("2024-07-18 23:29:35.178000+00:00"),
    provider_latency=6.5931549072265625,
    output_message={
        'content': "Well, you know, there is so much secrecy involved in government, folks, it's unbelievable. They don't want to tell you everything. They don't tell me everything! But about Roswell, it's a very popular question. I know, I just know, that something very, very peculiar happened there. Was it a weather balloon? Maybe. Was it something extraterrestrial? Could be. I'd love to go down and open up all the classified documents, believe me, I would. But they don't let that happen. The Deep State, folks, the Deep State. They're unbelievable. They want to keep everything a secret. But whatever the truth is, I can tell you this: it's something big, very very big. Tremendous, in fact.",
        'role': "assistant",
    },
    prompt_tokens=100,
    output_tokens=220,
    prompt_cost=1e-05,
    output_cost=0.0002,
    finish_reason='stop',
)
-
-
-
version_id:
typing.Optional[str] — A specific Version ID of the Prompt to log to.
-
environment:
typing.Optional[str] — Name of the Environment identifying a deployed version to log to.
-
run_id:
typing.Optional[str] — Unique identifier for the Run to associate the Log to.
-
path:
typing.Optional[str] — Path of the Prompt, including the name. This locates the Prompt in the Humanloop filesystem and is used as a unique identifier. For example: `folder/name` or just `name`.
-
id:
typing.Optional[str] — ID for an existing Prompt.
-
output_message:
typing.Optional[ChatMessageParams] — The message returned by the provider.
-
prompt_tokens:
typing.Optional[int] — Number of tokens in the prompt used to generate the output.
-
reasoning_tokens:
typing.Optional[int] — Number of reasoning tokens used to generate the output.
-
output_tokens:
typing.Optional[int] — Number of tokens in the output generated by the model.
-
prompt_cost:
typing.Optional[float] — Cost in dollars associated with the tokens in the prompt.
-
output_cost:
typing.Optional[float] — Cost in dollars associated with the tokens in the output.
-
finish_reason:
typing.Optional[str] — Reason the generation finished.
-
messages:
typing.Optional[typing.Sequence[ChatMessageParams]] — The messages passed to the provider chat endpoint.
-
tool_choice:
typing.Optional[PromptLogRequestToolChoiceParams] — Controls how the model uses tools. The following options are supported:
- `'none'` means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
- `'auto'` means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
- `'required'` means the model must call one or more of the provided tools.
- `{'type': 'function', 'function': {'name': <TOOL_NAME>}}` forces the model to use the named function.
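As a quick sketch, the four accepted forms look like this ("get_weather" is a made-up function name, not part of the API):

```python
# Illustration of the accepted tool_choice values.
tool_choice_none = "none"          # never call a tool; just generate a message
tool_choice_auto = "auto"          # let the model decide whether to call tools
tool_choice_required = "required"  # the model must call at least one tool
tool_choice_forced = {             # force a call to one named function
    "type": "function",
    "function": {"name": "get_weather"},
}
```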
-
prompt:
typing.Optional[PromptLogRequestPromptParams] — The Prompt configuration to use. Two formats are supported:
- An object representing the details of the Prompt configuration.
- A string representing the raw contents of a .prompt file.
A new Prompt version will be created if the provided details do not match any existing version.
-
start_time:
typing.Optional[dt.datetime] — When the logged event started.
-
end_time:
typing.Optional[dt.datetime] — When the logged event ended.
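For instance, start_time and end_time can bracket the provider call, and a latency in seconds (matching the provider_latency field below) can be derived from them; the client call stays commented since it needs a configured client:

```python
import datetime

# Bracket the (elided) provider call with UTC timestamps.
start = datetime.datetime.now(datetime.timezone.utc)
# ... call the model provider here ...
end = datetime.datetime.now(datetime.timezone.utc)

# Duration of the logged event in seconds.
latency_seconds = (end - start).total_seconds()
# client.prompts.log(..., start_time=start, end_time=end,
#                    provider_latency=latency_seconds)
```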
-
output:
typing.Optional[str] — Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.
-
created_at:
typing.Optional[dt.datetime] — User-defined timestamp for when the log was created.
-
error:
typing.Optional[str] — Error message if the log is an error.
-
provider_latency:
typing.Optional[float] — Duration of the logged event in seconds.
-
stdout:
typing.Optional[str] — Captured log and debug statements.
-
provider_request:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Raw request sent to the provider.
-
provider_response:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Raw response received from the provider.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — The inputs passed to the prompt template.
-
source:
typing.Optional[str] — Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Any additional metadata to record.
-
source_datapoint_id:
typing.Optional[str] — Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
-
trace_parent_id:
typing.Optional[str] — The ID of the parent Log to nest this Log under in a Trace.
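A sketch of nesting: log a parent first, then pass its ID as trace_parent_id on the child. The `.id` attribute on the log() response is an assumption here; substitute whatever your SDK version returns.

```python
# parent = client.prompts.log(path="agent/plan", output="drafted plan")
parent_id = "log_XXXXXXXX"  # placeholder for parent.id
child_kwargs = {
    "path": "agent/step",
    "output": "step result",
    "trace_parent_id": parent_id,  # nests this Log under the parent in a Trace
}
# client.prompts.log(**child_kwargs)
```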
-
user:
typing.Optional[str] — End-user ID related to the Log.
-
prompt_log_request_environment:
typing.Optional[str] — The name of the Environment the Log is associated with.
-
save:
typing.Optional[bool] — Whether the request/response payloads will be stored on Humanloop.
-
log_id:
typing.Optional[str] — This will identify a Log. If you don't provide a Log ID, Humanloop will generate one for you.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.update_log(...)
-
-
-
Update a Log.
Update the details of a Log with the given ID.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.update_log(
    id='id',
    log_id='log_id',
)
-
-
-
id:
str — Unique identifier for Prompt.
-
log_id:
str — Unique identifier for the Log.
-
output_message:
typing.Optional[ChatMessageParams] — The message returned by the provider.
-
prompt_tokens:
typing.Optional[int] — Number of tokens in the prompt used to generate the output.
-
reasoning_tokens:
typing.Optional[int] — Number of reasoning tokens used to generate the output.
-
output_tokens:
typing.Optional[int] — Number of tokens in the output generated by the model.
-
prompt_cost:
typing.Optional[float] — Cost in dollars associated with the tokens in the prompt.
-
output_cost:
typing.Optional[float] — Cost in dollars associated with the tokens in the output.
-
finish_reason:
typing.Optional[str] — Reason the generation finished.
-
messages:
typing.Optional[typing.Sequence[ChatMessageParams]] — The messages passed to the provider chat endpoint.
-
tool_choice:
typing.Optional[PromptLogUpdateRequestToolChoiceParams] — Controls how the model uses tools. The following options are supported:
- `'none'` means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
- `'auto'` means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
- `'required'` means the model must call one or more of the provided tools.
- `{'type': 'function', 'function': {'name': <TOOL_NAME>}}` forces the model to use the named function.
-
output:
typing.Optional[str] — Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.
-
created_at:
typing.Optional[dt.datetime] — User-defined timestamp for when the log was created.
-
error:
typing.Optional[str] — Error message if the log is an error.
-
provider_latency:
typing.Optional[float] — Duration of the logged event in seconds.
-
stdout:
typing.Optional[str] — Captured log and debug statements.
-
provider_request:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Raw request sent to the provider.
-
provider_response:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Raw response received from the provider.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — The inputs passed to the prompt template.
-
source:
typing.Optional[str] — Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Any additional metadata to record.
-
start_time:
typing.Optional[dt.datetime] — When the logged event started.
-
end_time:
typing.Optional[dt.datetime] — When the logged event ended.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.call_stream(...)
-
-
-
Call a Prompt.
Calling a Prompt calls the model provider before logging the request, responses and metadata to Humanloop.
You can use the query parameters version_id or environment to target an existing version of the Prompt. Otherwise, the default deployed version will be chosen.
Instead of targeting an existing version explicitly, you can pass Prompt details in the request body. In this case, we will check whether the details correspond to an existing version of the Prompt; if they do not, we will create a new version. This is helpful when you are storing or deriving your Prompt details in code.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.prompts.call_stream()
for chunk in response.data:
    print(chunk)
-
-
-
version_id:
typing.Optional[str] — A specific Version ID of the Prompt to log to.
-
environment:
typing.Optional[str] — Name of the Environment identifying a deployed version to log to.
-
path:
typing.Optional[str] — Path of the Prompt, including the name. This locates the Prompt in the Humanloop filesystem and is used as a unique identifier. For example: `folder/name` or just `name`.
-
id:
typing.Optional[str] — ID for an existing Prompt.
-
messages:
typing.Optional[typing.Sequence[ChatMessageParams]] — The messages passed to the provider chat endpoint.
-
tool_choice:
typing.Optional[PromptsCallStreamRequestToolChoiceParams] — Controls how the model uses tools. The following options are supported:
- `'none'` means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
- `'auto'` means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
- `'required'` means the model must call one or more of the provided tools.
- `{'type': 'function', 'function': {'name': <TOOL_NAME>}}` forces the model to use the named function.
-
prompt:
typing.Optional[PromptsCallStreamRequestPromptParams] — The Prompt configuration to use. Two formats are supported:
- An object representing the details of the Prompt configuration.
- A string representing the raw contents of a .prompt file.
A new Prompt version will be created if the provided details do not match any existing version.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — The inputs passed to the prompt template.
-
source:
typing.Optional[str] — Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Any additional metadata to record.
-
start_time:
typing.Optional[dt.datetime] — When the logged event started.
-
end_time:
typing.Optional[dt.datetime] — When the logged event ended.
-
source_datapoint_id:
typing.Optional[str] — Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
-
trace_parent_id:
typing.Optional[str] — The ID of the parent Log to nest this Log under in a Trace.
-
user:
typing.Optional[str] — End-user ID related to the Log.
-
prompts_call_stream_request_environment:
typing.Optional[str] — The name of the Environment the Log is associated with.
-
save:
typing.Optional[bool] — Whether the request/response payloads will be stored on Humanloop.
-
log_id:
typing.Optional[str] — This will identify a Log. If you don't provide a Log ID, Humanloop will generate one for you.
-
provider_api_keys:
typing.Optional[ProviderApiKeysParams] — API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
-
num_samples:
typing.Optional[int] — The number of generations.
-
return_inputs:
typing.Optional[bool] — Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
-
logprobs:
typing.Optional[int] — Include the log probabilities of the top n tokens in the provider_response.
-
suffix:
typing.Optional[str] — The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.call(...)
-
-
-
Call a Prompt.
Calling a Prompt calls the model provider before logging the request, responses and metadata to Humanloop.
You can use the query parameters version_id or environment to target an existing version of the Prompt. Otherwise, the default deployed version will be chosen.
Instead of targeting an existing version explicitly, you can pass Prompt details in the request body. In this case, we will check whether the details correspond to an existing version of the Prompt; if they do not, we will create a new version. This is helpful when you are storing or deriving your Prompt details in code.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.call(
    version_id='prv_Wu6zx1lAWJRqOyL8nWuZk',
    path='persona',
    messages=[{'role': "user", 'content': 'What really happened at Roswell?'}],
    inputs={'person': 'Trump'},
)
-
-
-
version_id:
typing.Optional[str] — A specific Version ID of the Prompt to log to.
-
environment:
typing.Optional[str] — Name of the Environment identifying a deployed version to log to.
-
path:
typing.Optional[str] — Path of the Prompt, including the name. This locates the Prompt in the Humanloop filesystem and is used as a unique identifier. For example: `folder/name` or just `name`.
-
id:
typing.Optional[str] — ID for an existing Prompt.
-
messages:
typing.Optional[typing.Sequence[ChatMessageParams]] — The messages passed to the provider chat endpoint.
-
tool_choice:
typing.Optional[PromptsCallRequestToolChoiceParams] — Controls how the model uses tools. The following options are supported:
- `'none'` means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
- `'auto'` means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
- `'required'` means the model must call one or more of the provided tools.
- `{'type': 'function', 'function': {'name': <TOOL_NAME>}}` forces the model to use the named function.
-
prompt:
typing.Optional[PromptsCallRequestPromptParams] — The Prompt configuration to use. Two formats are supported:
- An object representing the details of the Prompt configuration.
- A string representing the raw contents of a .prompt file.
A new Prompt version will be created if the provided details do not match any existing version.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — The inputs passed to the prompt template.
-
source:
typing.Optional[str] — Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Any additional metadata to record.
-
start_time:
typing.Optional[dt.datetime] — When the logged event started.
-
end_time:
typing.Optional[dt.datetime] — When the logged event ended.
-
source_datapoint_id:
typing.Optional[str] — Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
-
trace_parent_id:
typing.Optional[str] — The ID of the parent Log to nest this Log under in a Trace.
-
user:
typing.Optional[str] — End-user ID related to the Log.
-
prompts_call_request_environment:
typing.Optional[str] — The name of the Environment the Log is associated with.
-
save:
typing.Optional[bool] — Whether the request/response payloads will be stored on Humanloop.
-
log_id:
typing.Optional[str] — This will identify a Log. If you don't provide a Log ID, Humanloop will generate one for you.
-
provider_api_keys:
typing.Optional[ProviderApiKeysParams] — API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
-
num_samples:
typing.Optional[int] — The number of generations.
-
return_inputs:
typing.Optional[bool] — Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
-
logprobs:
typing.Optional[int] — Include the log probabilities of the top n tokens in the provider_response.
-
suffix:
typing.Optional[str] — The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.list(...)
-
-
-
Get a list of all Prompts.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.prompts.list(
    size=1,
)
for item in response:
    print(item)

# Alternatively, you can paginate page-by-page:
for page in response.iter_pages():
    print(page)
-
-
-
page:
typing.Optional[int] — Page number for pagination.
-
size:
typing.Optional[int] — Page size for pagination. Number of Prompts to fetch.
-
name:
typing.Optional[str] — Case-insensitive filter for Prompt name.
-
user_filter:
typing.Optional[str] — Case-insensitive filter for users in the Prompt. This filter matches against both the email address and name of users.
-
sort_by:
typing.Optional[FileSortBy] — Field to sort Prompts by.
-
order:
typing.Optional[SortOrder] — Direction to sort by.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.upsert(...)
-
-
-
Create a Prompt or update it with a new version if it already exists.
Prompts are identified by their ID or their path. The parameters (i.e. the prompt template, temperature, model, etc.) determine the versions of the Prompt.
You can provide a version_name and version_description to identify and describe your versions. Version names must be unique within a Prompt; attempting to create a version with a name that already exists will result in a 409 Conflict error.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.upsert(
    path='Personal Projects/Coding Assistant',
    model='gpt-4o',
    endpoint="chat",
    template=[
        {
            'content': 'You are a helpful coding assistant specialising in {{language}}',
            'role': "system",
        }
    ],
    provider="openai",
    max_tokens=-1,
    temperature=0.7,
    version_name='coding-assistant-v1',
    version_description='Initial version',
)
-
-
-
model:
str — The model instance used, e.g. gpt-4. See supported models.
-
path:
typing.Optional[str] — Path of the Prompt, including the name. This locates the Prompt in the Humanloop filesystem and is used as a unique identifier. For example: `folder/name` or just `name`.
-
id:
typing.Optional[str] — ID for an existing Prompt.
-
endpoint:
typing.Optional[ModelEndpoints] — The provider model endpoint used.
-
template:
typing.Optional[PromptRequestTemplateParams] — The template contains the main structure and instructions for the model, including input variables for dynamic values.
For chat models, provide the template as a ChatTemplate (a list of messages), e.g. a system message followed by a user message with an input variable. For completion models, provide a prompt template as a string.
Input variables should be specified with double curly bracket syntax: `{{input_name}}`.
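For instance, the two template shapes look like this (the input names "language", "question", "style", and "text" are made up for illustration):

```python
# Chat-model template: a list of messages with {{...}} input variables.
chat_template = [
    {"role": "system",
     "content": "You are a helpful assistant specialising in {{language}}."},
    {"role": "user", "content": "{{question}}"},
]

# Completion-model template: a single string.
completion_template = "Summarise the following in {{style}} style:\n\n{{text}}"
```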
-
template_language:
typing.Optional[TemplateLanguage] — The template language to use for rendering the template.
-
provider:
typing.Optional[ModelProviders] — The company providing the underlying model service.
-
max_tokens:
typing.Optional[int] — The maximum number of tokens to generate. Provide max_tokens=-1 to dynamically calculate the maximum number of tokens to generate given the length of the prompt.
-
temperature:
typing.Optional[float] — What sampling temperature to use when making a generation. Higher values mean the model will be more creative.
-
top_p:
typing.Optional[float] — An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
-
stop:
typing.Optional[PromptRequestStopParams] — The string (or list of strings) after which the model will stop generating. The returned text will not contain the stop sequence.
-
presence_penalty:
typing.Optional[float] — Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the generation so far.
-
frequency_penalty:
typing.Optional[float] — Number between -2.0 and 2.0. Positive values penalize new tokens based on how frequently they appear in the generation so far.
-
other:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Other parameter values to be passed to the provider call.
-
seed:
typing.Optional[int] — If specified, the model will make a best effort to sample deterministically, but this is not guaranteed.
-
response_format:
typing.Optional[ResponseFormatParams] — The format of the response. Only `{"type": "json_object"}` is currently supported for chat.
-
reasoning_effort:
typing.Optional[PromptRequestReasoningEffortParams] — Guidance on how many reasoning tokens the model should generate before creating a response to the prompt. OpenAI reasoning models (o1, o3-mini) expect an OpenAIReasoningEffort enum. Anthropic reasoning models expect an integer, which signifies the maximum token budget.
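A sketch of the two shapes this parameter can take; the effort level "medium" and the 2048-token budget below are illustrative values, not defaults:

```python
# OpenAI reasoning models: an effort-level enum value.
openai_reasoning_effort = "medium"

# Anthropic reasoning models: an integer maximum reasoning-token budget.
anthropic_reasoning_effort = 2048

# client.prompts.upsert(path="persona", model="o1",
#                       reasoning_effort=openai_reasoning_effort)
```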
-
tools:
typing.Optional[typing.Sequence[ToolFunctionParams]] — The tool specification that the model can choose to call if Tool calling is supported.
-
linked_tools:
typing.Optional[typing.Sequence[str]] — The IDs of the Tools in your organization that the model can choose to call if Tool calling is supported. The default deployed version of each tool is called.
-
attributes:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Additional fields to describe the Prompt. Helpful to separate Prompt versions from each other with details on how they were created or used.
-
version_name:
typing.Optional[str] — Unique name for the Prompt version. Version names must be unique for a given Prompt.
-
version_description:
typing.Optional[str] — Description of the version, e.g., the changes made in this version.
-
description:
typing.Optional[str] — Description of the Prompt.
-
tags:
typing.Optional[typing.Sequence[str]] — List of tags associated with this Prompt.
-
readme:
typing.Optional[str] — Long description of the Prompt.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.get(...)
-
-
-
Retrieve the Prompt with the given ID.
By default, the deployed version of the Prompt is returned. Use the query parameters version_id or environment to target a specific version of the Prompt.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.get(
    id='pr_30gco7dx6JDq4200GVOHa',
)
-
-
-
id:
str — Unique identifier for Prompt.
-
version_id:
typing.Optional[str] — A specific Version ID of the Prompt to retrieve.
-
environment:
typing.Optional[str] — Name of the Environment to retrieve a deployed Version from.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.delete(...)
-
-
-
Delete the Prompt with the given ID.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.delete(
    id='pr_30gco7dx6JDq4200GVOHa',
)
-
-
-
id:
str — Unique identifier for Prompt.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.move(...)
-
-
-
Move the Prompt to a different path or change the name.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.move(
    id='pr_30gco7dx6JDq4200GVOHa',
    path='new directory/new name',
)
-
-
-
id:
str — Unique identifier for Prompt.
-
path:
typing.Optional[str] — Path of the Prompt including the Prompt name, which is used as a unique identifier.
-
name:
typing.Optional[str] — Name of the Prompt.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.populate(...)
-
-
-
Retrieve the Prompt with the given ID, including the populated template.
By default, the deployed version of the Prompt is returned. Use the query parameters version_id or environment to target a specific version of the Prompt.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.populate(
    id='id',
    request={'key': 'value'},
)
-
-
-
id:
str — Unique identifier for Prompt.
-
request:
typing.Dict[str, typing.Optional[typing.Any]]
-
version_id:
typing.Optional[str] — A specific Version ID of the Prompt to retrieve to populate the template.
-
environment:
typing.Optional[str] — Name of the Environment to retrieve a deployed Version from to populate the template.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.list_versions(...)
-
-
-
Get a list of all the versions of a Prompt.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.list_versions(
    id='pr_30gco7dx6JDq4200GVOHa',
)
-
-
-
id:
str — Unique identifier for Prompt.
-
evaluator_aggregates:
typing.Optional[bool] — Whether to include Evaluator aggregate results for the versions in the response.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.delete_prompt_version(...)
-
-
-
Delete a version of the Prompt.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.delete_prompt_version(
    id='id',
    version_id='version_id',
)
-
-
-
id:
str — Unique identifier for Prompt.
-
version_id:
str — Unique identifier for the specific version of the Prompt.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.patch_prompt_version(...)
-
-
-
Update the name or description of the Prompt version.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.patch_prompt_version(
    id='id',
    version_id='version_id',
)
-
-
-
id:
str — Unique identifier for Prompt.
-
version_id:
str — Unique identifier for the specific version of the Prompt.
-
name:
typing.Optional[str] — Name of the version.
-
description:
typing.Optional[str] — Description of the version.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.set_deployment(...)
-
-
-
Deploy Prompt to an Environment.
Set the deployed version for the specified Environment. This version will then be used for calls made to the Prompt in this Environment.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.set_deployment(
    id='id',
    environment_id='environment_id',
    version_id='version_id',
)
-
-
-
id:
str — Unique identifier for Prompt.
-
environment_id:
str — Unique identifier for the Environment to deploy the Version to.
-
version_id:
str — Unique identifier for the specific version of the Prompt.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.remove_deployment(...)
-
-
-
Remove deployed Prompt from the Environment.
Remove the deployed version for the specified Environment. This version will no longer be used for calls made to the Prompt in this Environment.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.remove_deployment(
    id='id',
    environment_id='environment_id',
)
-
-
-
id:
str — Unique identifier for Prompt.
-
environment_id:
str — Unique identifier for the Environment to remove the deployment from.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.list_environments(...)
-
-
-
List all Environments and their deployed versions for the Prompt.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.list_environments(
    id='pr_30gco7dx6JDq4200GVOHa',
)
-
-
-
id:
str — Unique identifier for Prompt.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.update_monitoring(...)
-
-
-
Activate and deactivate Evaluators for monitoring the Prompt.
An activated Evaluator will automatically be run on all new Logs within the Prompt for monitoring purposes.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.update_monitoring(
    id='pr_30gco7dx6JDq4200GVOHa',
    activate=[{'evaluator_version_id': 'evv_1abc4308abd'}],
)
-
-
-
id:
str
-
activate:
typing.Optional[typing.Sequence[EvaluatorActivationDeactivationRequestActivateItemParams]] — Evaluators to activate for Monitoring. These will be automatically run on new Logs.
-
deactivate:
typing.Optional[typing.Sequence[EvaluatorActivationDeactivationRequestDeactivateItemParams]] — Evaluators to deactivate. These will not be run on new Logs.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.serialize(...)
-
-
-
Serialize a Prompt to the .prompt file format.
Useful for storing the Prompt with your code in a version control system, or for editing with an AI tool.
By default, the deployed version of the Prompt is returned. Use the query parameters version_id or environment to target a specific version of the Prompt.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.serialize(
    id='id',
)
-
-
-
id:
str — Unique identifier for Prompt.
-
version_id:
typing.Optional[str] — A specific Version ID of the Prompt to retrieve.
-
environment:
typing.Optional[str] — Name of the Environment to retrieve a deployed Version from.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.deserialize(...)
-
-
-
Deserialize a Prompt from the .prompt file format.
This returns a subset of the attributes required by a Prompt: the subset that defines the Prompt version (e.g. model, temperature, etc.).
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.deserialize(
    prompt='prompt',
)
-
-
-
prompt:
str
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.tools.call(...)
-
-
-
Call a Tool.
Calling a Tool with inputs runs the tool's source code and logs the result and metadata to Humanloop.
You can use the query parameters version_id or environment to target an existing version of the Tool. Otherwise, the default deployed version will be chosen.
Instead of targeting an existing version explicitly, you can pass Tool details in the request body. In this case, we will check whether the details correspond to an existing version of the Tool; if they do not, we will create a new version. This is helpful when you are storing or deriving your Tool details in code.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.tools.call()
-
-
-
version_id:
typing.Optional[str]β A specific Version ID of the Tool to call.
-
environment:
typing.Optional[str]β Name of the Environment identifying a deployed version to call.
-
path:
typing.Optional[str]β Path of the Tool, including the name. This locates the Tool in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str]β ID for an existing Tool.
-
tool:
typing.Optional[ToolKernelRequestParams]β Details of your Tool. A new Tool version will be created if the provided details are new.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β The inputs passed to the prompt template.
-
source:
typing.Optional[str]β Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Any additional metadata to record.
-
start_time:
typing.Optional[dt.datetime]β When the logged event started.
-
end_time:
typing.Optional[dt.datetime]β When the logged event ended.
-
source_datapoint_id:
typing.Optional[str]β Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
-
trace_parent_id:
typing.Optional[str]β The ID of the parent Log to nest this Log under in a Trace.
-
user:
typing.Optional[str]β End-user ID related to the Log.
-
tool_call_request_environment:
typing.Optional[str]β The name of the Environment the Log is associated to.
-
save:
typing.Optional[bool]β Whether the request/response payloads will be stored on Humanloop.
-
log_id:
typing.Optional[str]β This will identify a Log. If you don't provide a Log ID, Humanloop will generate one for you.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.log(...)
-
-
-
Log to a Tool.
You can use the query parameters version_id or environment to target an existing version of the Tool. Otherwise, the default deployed version will be chosen.
Instead of targeting an existing version explicitly, you can pass Tool details in the request body. In this case, we will check whether the details correspond to an existing version of the Tool; if not, we will create a new version. This is helpful when you are storing or deriving your Tool details in code.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.tools.log(
    path='math-tool',
    tool={
        'function': {
            'name': 'multiply',
            'description': 'Multiply two numbers',
            'parameters': {
                'type': 'object',
                'properties': {'a': {'type': 'number'}, 'b': {'type': 'number'}},
                'required': ['a', 'b'],
            },
        }
    },
    inputs={'a': 5, 'b': 7},
    output='35',
)
-
-
-
version_id:
typing.Optional[str]β A specific Version ID of the Tool to log to.
-
environment:
typing.Optional[str]β Name of the Environment identifying a deployed version to log to.
-
path:
typing.Optional[str]β Path of the Tool, including the name. This locates the Tool in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str]β ID for an existing Tool.
-
tool:
typing.Optional[ToolKernelRequestParams]β Details of your Tool. A new Tool version will be created if the provided details are new.
-
start_time:
typing.Optional[dt.datetime]β When the logged event started.
-
end_time:
typing.Optional[dt.datetime]β When the logged event ended.
-
output:
typing.Optional[str]β Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.
-
created_at:
typing.Optional[dt.datetime]β User defined timestamp for when the log was created.
-
error:
typing.Optional[str]β Error message if the log is an error.
-
provider_latency:
typing.Optional[float]β Duration of the logged event in seconds.
-
stdout:
typing.Optional[str]β Captured log and debug statements.
-
provider_request:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Raw request sent to provider.
-
provider_response:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Raw response received from the provider.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β The inputs passed to the prompt template.
-
source:
typing.Optional[str]β Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Any additional metadata to record.
-
source_datapoint_id:
typing.Optional[str]β Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
-
trace_parent_id:
typing.Optional[str]β The ID of the parent Log to nest this Log under in a Trace.
-
user:
typing.Optional[str]β End-user ID related to the Log.
-
tool_log_request_environment:
typing.Optional[str]β The name of the Environment the Log is associated to.
-
save:
typing.Optional[bool]β Whether the request/response payloads will be stored on Humanloop.
-
log_id:
typing.Optional[str]β This will identify a Log. If you don't provide a Log ID, Humanloop will generate one for you.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.update(...)
-
-
-
Update a Log.
Update the details of a Log with the given ID.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.tools.update(id='id', log_id='log_id')
-
-
-
id:
strβ Unique identifier for Tool.
-
log_id:
strβ Unique identifier for the Log.
-
output:
typing.Optional[str]β Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.
-
created_at:
typing.Optional[dt.datetime]β User defined timestamp for when the log was created.
-
error:
typing.Optional[str]β Error message if the log is an error.
-
provider_latency:
typing.Optional[float]β Duration of the logged event in seconds.
-
stdout:
typing.Optional[str]β Captured log and debug statements.
-
provider_request:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Raw request sent to provider.
-
provider_response:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Raw response received from the provider.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β The inputs passed to the prompt template.
-
source:
typing.Optional[str]β Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Any additional metadata to record.
-
start_time:
typing.Optional[dt.datetime]β When the logged event started.
-
end_time:
typing.Optional[dt.datetime]β When the logged event ended.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.list(...)
-
-
-
Get a list of all Tools.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
response = client.tools.list(size=1)
for item in response:
    print(item)
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)
-
-
-
page:
typing.Optional[int]β Page offset for pagination.
-
size:
typing.Optional[int]β Page size for pagination. Number of Tools to fetch.
-
name:
typing.Optional[str]β Case-insensitive filter for Tool name.
-
user_filter:
typing.Optional[str]β Case-insensitive filter for users in the Tool. This filter matches against both email address and name of users.
-
sort_by:
typing.Optional[FileSortBy]β Field to sort Tools by
-
order:
typing.Optional[SortOrder]β Direction to sort by.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.upsert(...)
-
-
-
Create a Tool or update it with a new version if it already exists.
Tools are identified by the
IDor theirpath. The name, description and parameters determine the versions of the Tool.You can provide
version_nameandversion_descriptionto identify and describe your versions. Version names must be unique within a Tool - attempting to create a version with a name that already exists will result in a 409 Conflict error.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.tools.upsert(
    path='math-tool',
    function={
        'name': 'multiply',
        'description': 'Multiply two numbers',
        'parameters': {
            'type': 'object',
            'properties': {'a': {'type': 'number'}, 'b': {'type': 'number'}},
            'required': ['a', 'b'],
        },
    },
    version_name='math-tool-v1',
    version_description='Simple math tool that multiplies two numbers',
)
-
-
-
path:
typing.Optional[str]β Path of the Tool, including the name. This locates the Tool in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str]β ID for an existing Tool.
-
function:
typing.Optional[ToolFunctionParams]β Callable function specification of the Tool shown to the model for tool calling.
-
source_code:
typing.Optional[str]β Code source of the Tool.
-
setup_values:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Values needed to setup the Tool, defined in JSON Schema format: https://json-schema.org/
-
attributes:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Additional fields to describe the Tool. Helpful to separate Tool versions from each other with details on how they were created or used.
-
tool_type:
typing.Optional[FilesToolType]β Type of Tool.
-
version_name:
typing.Optional[str]β Unique identifier for this Tool version. Each Tool can only have one version with a given name.
-
version_description:
typing.Optional[str]β Description of the Version.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.get(...)
-
-
-
Retrieve the Tool with the given ID.
By default, the deployed version of the Tool is returned. Use the query parameters version_id or environment to target a specific version of the Tool.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.tools.get(id='tl_789ghi')
-
-
-
id:
strβ Unique identifier for Tool.
-
version_id:
typing.Optional[str]β A specific Version ID of the Tool to retrieve.
-
environment:
typing.Optional[str]β Name of the Environment to retrieve a deployed Version from.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.delete(...)
-
-
-
Delete the Tool with the given ID.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.tools.delete(id='tl_789ghi')
-
-
-
id:
strβ Unique identifier for Tool.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.move(...)
-
-
-
Move the Tool to a different path or change the name.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.tools.move(id='tl_789ghi', path='new directory/new name')
-
-
-
id:
strβ Unique identifier for Tool.
-
path:
typing.Optional[str]β Path of the Tool including the Tool name, which is used as a unique identifier.
-
name:
typing.Optional[str]β Name of the Tool, which is used as a unique identifier.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.list_versions(...)
-
-
-
Get a list of all the versions of a Tool.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.tools.list_versions(id='tl_789ghi')
-
-
-
id:
strβ Unique identifier for the Tool.
-
evaluator_aggregates:
typing.Optional[bool]β Whether to include Evaluator aggregate results for the versions in the response
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.delete_tool_version(...)
-
-
-
Delete a version of the Tool.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.tools.delete_tool_version(id='id', version_id='version_id')
-
-
-
id:
strβ Unique identifier for Tool.
-
version_id:
strβ Unique identifier for the specific version of the Tool.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.update_tool_version(...)
-
-
-
Update the name or description of the Tool version.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.tools.update_tool_version(id='id', version_id='version_id')
-
-
-
id:
strβ Unique identifier for Tool.
-
version_id:
strβ Unique identifier for the specific version of the Tool.
-
name:
typing.Optional[str]β Name of the version.
-
description:
typing.Optional[str]β Description of the version.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.set_deployment(...)
-
-
-
Deploy Tool to an Environment.
Set the deployed version for the specified Environment. This version of the Tool will be used for calls made to the Tool in this Environment.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.tools.set_deployment(
    id='tl_789ghi',
    environment_id='staging',
    version_id='tv_012jkl',
)
-
-
-
id:
strβ Unique identifier for Tool.
-
environment_id:
strβ Unique identifier for the Environment to deploy the Version to.
-
version_id:
strβ Unique identifier for the specific version of the Tool.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.remove_deployment(...)
-
-
-
Remove deployed Tool from the Environment.
Remove the deployed version for the specified Environment. This version will no longer be used for calls made to the Tool in this Environment.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.tools.remove_deployment(id='tl_789ghi', environment_id='staging')
-
-
-
id:
strβ Unique identifier for Tool.
-
environment_id:
strβ Unique identifier for the Environment to remove the deployment from.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.list_environments(...)
-
-
-
List all Environments and their deployed versions for the Tool.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.tools.list_environments(id='tl_789ghi')
-
-
-
id:
strβ Unique identifier for Tool.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.update_monitoring(...)
-
-
-
Activate and deactivate Evaluators for monitoring the Tool.
An activated Evaluator will automatically be run on all new Logs within the Tool for monitoring purposes.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.tools.update_monitoring(
    id='tl_789ghi',
    activate=[{'evaluator_version_id': 'evv_1abc4308abd'}],
)
-
-
-
id:
str
-
activate:
typing.Optional[typing.Sequence[EvaluatorActivationDeactivationRequestActivateItemParams]]β Evaluators to activate for Monitoring. These will be automatically run on new Logs.
-
deactivate:
typing.Optional[typing.Sequence[EvaluatorActivationDeactivationRequestDeactivateItemParams]]β Evaluators to deactivate. These will not be run on new Logs.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.get_environment_variables(...)
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.tools.get_environment_variables(id='id')
-
-
-
id:
strβ Unique identifier for File.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.add_environment_variable(...)
-
-
-
Add an environment variable to a Tool.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.tools.add_environment_variable(
    id='id',
    request=[{'name': 'name', 'value': 'value'}],
)
-
-
-
id:
strβ Unique identifier for Tool.
-
request:
typing.Sequence[FileEnvironmentVariableRequestParams]
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.delete_environment_variable(...)
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.tools.delete_environment_variable(id='id', name='name')
-
-
-
id:
strβ Unique identifier for File.
-
name:
strβ Name of the Environment Variable to delete.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.list(...)
-
-
-
List all Datasets.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
response = client.datasets.list(size=1)
for item in response:
    print(item)
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)
-
-
-
page:
typing.Optional[int]β Page offset for pagination.
-
size:
typing.Optional[int]β Page size for pagination. Number of Datasets to fetch.
-
name:
typing.Optional[str]β Case-insensitive filter for Dataset name.
-
user_filter:
typing.Optional[str]β Case-insensitive filter for users in the Dataset. This filter matches against both email address and name of users.
-
sort_by:
typing.Optional[FileSortBy]β Field to sort Datasets by
-
order:
typing.Optional[SortOrder]β Direction to sort by.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.upsert(...)
-
-
-
Create a Dataset or update it with a new version if it already exists.
Datasets are identified by the ID or their path. The Datapoints determine the versions of the Dataset.
By default, the new Dataset version will be set to the list of Datapoints provided in the request. You can also create a new version by adding or removing Datapoints from an existing version by specifying action as add or remove respectively. In this case, you may specify the version_id or environment query parameters to identify the existing version to base the new version on. If neither is provided, the latest created version will be used.
You can provide version_name and version_description to identify and describe your versions. Version names must be unique within a Dataset - attempting to create a version with a name that already exists will result in a 409 Conflict error.
Humanloop also deduplicates Datapoints. If you try to add a Datapoint that already exists, it will be ignored. If you intentionally want to add a duplicate Datapoint, you can add a unique identifier to the Datapoint's inputs, such as {_dedupe_id: <unique ID>}.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.datasets.upsert(
    path='datasets/support-queries',
    datapoints=[
        {
            'messages': [
                {'role': "user", 'content': 'How do i manage my organizations API keys?\n'},
            ],
            'target': {'response': 'Hey, thanks for your questions. Here are steps for how to achieve: 1. Log in to the Humanloop Dashboard \n\n2. Click on "Organization Settings."\n If you do not see this option, you might need to contact your organization admin to gain the necessary permissions.\n\n3. Within the settings or organization settings, select the option labeled "API Keys" on the left. Here you will be able to view and manage your API keys.\n\n4. You will see a list of existing API keys. You can perform various actions, such as:\n - **Generate New API Key:** Click on the "Generate New Key" button if you need a new API key.\n - **Revoke an API Key:** If you need to disable an existing key, find the key in the list and click the "Revoke" or "Delete" button.\n - **Copy an API Key:** If you need to use an existing key, you can copy it to your clipboard by clicking the "Copy" button next to the key.\n\n5. **Save and Secure API Keys:** Make sure to securely store any new or existing API keys you are using. Treat them like passwords and do not share them publicly.\n\nIf you encounter any issues or need further assistance, it might be helpful to engage with an engineer or your IT department to ensure you have the necessary permissions and support.\n\nWould you need help with anything else?'},
        },
        {
            'messages': [
                {'role': "user", 'content': 'Hey, can do I use my code evaluator for monitoring my legal-copilot prompt?'},
            ],
            'target': {'response': 'Hey, thanks for your questions. Here are steps for how to achieve: 1. Navigate to your Prompt dashboard. \n 2. Select the `Monitoring` button on the top right of the Prompt dashboard \n 3. Within the model select the Version of the Evaluator you want to turn on for monitoring. \n\nWould you need help with anything else?'},
        },
    ],
    version_name='Initial version',
    version_description='Add two new questions and answers',
)
-
-
-
datapoints:
typing.Sequence[CreateDatapointRequestParams]β The Datapoints to create this Dataset version with. Modify the action field to determine how these Datapoints are used.
-
version_id:
typing.Optional[str]β ID of the specific Dataset version to base the created Version on. Only used when action is "add" or "remove".
-
environment:
typing.Optional[str]β Name of the Environment identifying a deployed Version to base the created Version on. Only used when action is "add" or "remove".
-
include_datapoints:
typing.Optional[bool]β If set to true, include all Datapoints in the response. Defaults to false. Consider using the paginated List Datapoints endpoint instead.
-
path:
typing.Optional[str]β Path of the Dataset, including the name. This locates the Dataset in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str]β ID for an existing Dataset.
-
action:
typing.Optional[UpdateDatesetAction]β The action to take with the provided Datapoints.
- If "set", the created version will only contain the Datapoints provided in this request.
- If "add", the created version will contain the Datapoints provided in this request in addition to the Datapoints in the target version.
- If "remove", the created version will contain the Datapoints in the target version except for the Datapoints provided in this request.
If "add" or "remove", one of the version_id or environment query parameters may be provided.
-
attributes:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Additional fields to describe the Dataset. Helpful to separate Dataset versions from each other with details on how they were created or used.
-
version_name:
typing.Optional[str]β Unique name for the Dataset version. Version names must be unique for a given Dataset.
-
version_description:
typing.Optional[str]β Description of the version, e.g., the changes made in this version.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.get(...)
-
-
-
Retrieve the Dataset with the given ID.
Unless include_datapoints is set to true, the response will not include the Datapoints. Use the List Datapoints endpoint (GET /{id}/datapoints) to efficiently retrieve Datapoints for a large Dataset.
By default, the deployed version of the Dataset is returned. Use the query parameters version_id or environment to target a specific version of the Dataset.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.datasets.get(
    id='ds_b0baF1ca7652',
    version_id='dsv_6L78pqrdFi2xa',
    include_datapoints=True,
)
-
-
-
id:
strβ Unique identifier for Dataset.
-
version_id:
typing.Optional[str]β A specific Version ID of the Dataset to retrieve.
-
environment:
typing.Optional[str]β Name of the Environment to retrieve a deployed Version from.
-
include_datapoints:
typing.Optional[bool]β If set to true, include all Datapoints in the response. Defaults to false. Consider using the paginated List Datapoints endpoint instead.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.delete(...)
-
-
-
Delete the Dataset with the given ID.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.datasets.delete(id='id')
-
-
-
id:
strβ Unique identifier for Dataset.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.move(...)
-
-
-
Move the Dataset to a different path or change the name.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.datasets.move(id='id')
-
-
-
id:
strβ Unique identifier for Dataset.
-
path:
typing.Optional[str]β Path of the Dataset including the Dataset name, which is used as a unique identifier.
-
name:
typing.Optional[str]β Name of the Dataset, which is used as a unique identifier.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.list_datapoints(...)
-
-
-
List all Datapoints for the Dataset with the given ID.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
response = client.datasets.list_datapoints(id='ds_b0baF1ca7652', size=1)
for item in response:
    print(item)
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)
-
-
-
id:
strβ Unique identifier for Dataset.
-
version_id:
typing.Optional[str]β A specific Version ID of the Dataset to retrieve.
-
environment:
typing.Optional[str]β Name of the Environment to retrieve a deployed Version from.
-
page:
typing.Optional[int]β Page number for pagination.
-
size:
typing.Optional[int]β Page size for pagination. Number of Datapoints to fetch.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.list_versions(...)
-
-
-
Get a list of the versions for a Dataset.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.datasets.list_versions(id='ds_b0baF1ca7652')
-
-
-
id:
strβ Unique identifier for Dataset.
-
include_datapoints:
typing.Optional[ListVersionsDatasetsIdVersionsGetRequestIncludeDatapoints]β If set to 'latest_saved', include datapoints for the latest saved version. Alternatively, 'latest_committed' (deprecated) includes datapoints for the latest committed version only.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.delete_dataset_version(...)
-
-
-
Delete a version of the Dataset.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.datasets.delete_dataset_version(id='id', version_id='version_id')
-
-
-
id:
strβ Unique identifier for Dataset.
-
version_id:
strβ Unique identifier for the specific version of the Dataset.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.update_dataset_version(...)
-
-
-
Update the name or description of the Dataset version.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.datasets.update_dataset_version(id='id', version_id='version_id')
-
-
-
id:
strβ Unique identifier for Dataset.
-
version_id:
strβ Unique identifier for the specific version of the Dataset.
-
name:
typing.Optional[str]β Name of the version.
-
description:
typing.Optional[str]β Description of the version.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.upload_csv(...)
-
-
-
Add Datapoints from a CSV file to a Dataset.
This will create a new version of the Dataset with the Datapoints from the CSV file.
If either version_id or environment is provided, the new version will be based on the specified version, with the Datapoints from the CSV file added to the existing Datapoints in the version. If neither version_id nor environment is provided, the new version will be based on the version of the Dataset that is deployed to the default Environment.
You can optionally provide a name and description for the new version using the version_name and version_description parameters.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.datasets.upload_csv(id='id')
-
-
-
id:
strβ Unique identifier for the Dataset
-
file:
core.Fileβ See core.File for more documentation
-
version_id:
typing.Optional[str]β ID of the specific Dataset version to base the created Version on.
-
environment:
typing.Optional[str]β Name of the Environment identifying a deployed Version to base the created Version on.
-
version_name:
typing.Optional[str]β Name for the new Dataset version.
-
version_description:
typing.Optional[str]β Description for the new Dataset version.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.set_deployment(...)
-
-
-
Deploy Dataset to Environment.
Set the deployed version for the specified Environment.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.datasets.set_deployment(
    id='ds_b0baF1ca7652',
    environment_id='staging',
    version_id='dsv_6L78pqrdFi2xa',
)
-
-
-
id:
strβ Unique identifier for Dataset.
-
environment_id:
strβ Unique identifier for the Environment to deploy the Version to.
-
version_id:
strβ Unique identifier for the specific version of the Dataset.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.remove_deployment(...)
-
-
-
Remove deployed Dataset from Environment.
Remove the deployed version for the specified Environment.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.datasets.remove_deployment(id='ds_b0baF1ca7652', environment_id='staging')
-
-
-
id:
strβ Unique identifier for Dataset.
-
environment_id:
strβ Unique identifier for the Environment to remove the deployment from.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.list_environments(...)
-
-
-
List all Environments and their deployed versions for the Dataset.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.datasets.list_environments(id='id')
-
-
-
id:
strβ Unique identifier for Dataset.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.log(...)
-
-
-
Submit Evaluator judgment for an existing Log.
Creates a new Log. The evaluated Log will be set as the parent of the created Log.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluators.log(parent_id='parent_id')
-
-
-
parent_id:
strβ Identifier of the evaluated Log. The newly created Log will have this one set as parent.
-
version_id:
typing.Optional[str]β ID of the Evaluator version to log against.
-
environment:
typing.Optional[str]β Name of the Environment identifying a deployed version to log to.
-
path:
typing.Optional[str]β Path of the Evaluator, including the name. This locates the Evaluator in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str]β ID for an existing Evaluator.
-
start_time:
typing.Optional[dt.datetime]β When the logged event started.
-
end_time:
typing.Optional[dt.datetime]β When the logged event ended.
-
output:
typing.Optional[str]β Generated output from the LLM. Only populated for LLM Evaluator Logs.
-
created_at:
typing.Optional[dt.datetime]β User defined timestamp for when the log was created.
-
error:
typing.Optional[str]β Error message if the log is an error.
-
provider_latency:
typing.Optional[float]β Duration of the logged event in seconds.
-
stdout:
typing.Optional[str]β Captured log and debug statements.
-
provider_request:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Raw request sent to provider. Only populated for LLM Evaluator Logs.
-
provider_response:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Raw response received from the provider. Only populated for LLM Evaluator Logs.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β The inputs passed to the prompt template.
-
source:
typing.Optional[str]β Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Any additional metadata to record.
-
source_datapoint_id:
typing.Optional[str]β Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
-
trace_parent_id:
typing.Optional[str]β The ID of the parent Log to nest this Log under in a Trace.
-
user:
typing.Optional[str]β End-user ID related to the Log.
-
create_evaluator_log_request_environment:
typing.Optional[str]β The name of the Environment the Log is associated to.
-
save:
typing.Optional[bool]β Whether the request/response payloads will be stored on Humanloop.
-
log_id:
typing.Optional[str]β This will identify a Log. If you don't provide a Log ID, Humanloop will generate one for you.
-
output_message:
typing.Optional[ChatMessageParams]β The message returned by the LLM. Only populated for LLM Evaluator Logs.
-
judgment:
typing.Optional[CreateEvaluatorLogRequestJudgmentParams]β Evaluator assessment of the Log.
-
marked_completed:
typing.Optional[bool]β Whether the Log has been manually marked as completed by a user.
-
spec:
typing.Optional[CreateEvaluatorLogRequestSpecParams]
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.list(...)
-
-
-
Get a list of all Evaluators.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
response = client.evaluators.list(size=1)
for item in response:
    print(item)
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)
-
-
-
page:
typing.Optional[int]β Page offset for pagination.
-
size:
typing.Optional[int]β Page size for pagination. Number of Evaluators to fetch.
-
name:
typing.Optional[str]β Case-insensitive filter for Evaluator name.
-
user_filter:
typing.Optional[str]β Case-insensitive filter for users in the Evaluator. This filter matches against both email address and name of users.
-
sort_by:
typing.Optional[FileSortBy]β Field to sort Evaluators by
-
order:
typing.Optional[SortOrder]β Direction to sort by.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.upsert(...)
-
-
-
Create an Evaluator or update it with a new version if it already exists.
Evaluators are identified by the ID or their path. The spec provided determines the version of the Evaluator.
You can provide version_name and version_description to identify and describe your versions. Version names must be unique within an Evaluator - attempting to create a version with a name that already exists will result in a 409 Conflict error.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluators.upsert(
    path='Shared Evaluators/Accuracy Evaluator',
    spec={
        'arguments_type': "target_required",
        'return_type': "number",
        'evaluator_type': 'python',
        'code': 'def evaluate(answer, target):\n    return 0.5',
    },
    version_name='simple-evaluator',
    version_description='Simple evaluator that returns 0.5',
)
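The code field of a Python Evaluator spec is a plain string containing a Python function. A quick way to sanity-check that string locally before upserting (a sketch; the evaluate name and the (answer, target) signature follow the example above, where arguments_type is "target_required" and return_type is "number"):

```python
# The spec's 'code' value, as a plain Python source string.
code = "def evaluate(answer, target):\n    return 0.5"

# Execute the source to define evaluate() in a local namespace.
namespace = {}
exec(code, namespace)

# 'target_required' implies (answer, target) parameters, and
# return_type 'number' implies a numeric result.
score = namespace["evaluate"]("some answer", "some target")
assert isinstance(score, float)
print(score)  # → 0.5
```

Running the function locally like this catches syntax errors and signature mismatches before the version is created on Humanloop.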
-
-
-
spec:
EvaluatorRequestSpecParams
-
path:
typing.Optional[str]β Path of the Evaluator, including the name. This locates the Evaluator in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str]β ID for an existing Evaluator.
-
version_name:
typing.Optional[str]β Unique name for the Evaluator version. Version names must be unique for a given Evaluator.
-
version_description:
typing.Optional[str]β Description of the version, e.g., the changes made in this version.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.get(...)
-
-
-
Retrieve the Evaluator with the given ID.
By default, the deployed version of the Evaluator is returned. Use the query parameters version_id or environment to target a specific version of the Evaluator.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluators.get(id='ev_890bcd')
-
-
-
id:
strβ Unique identifier for Evaluator.
-
version_id:
typing.Optional[str]β A specific Version ID of the Evaluator to retrieve.
-
environment:
typing.Optional[str]β Name of the Environment to retrieve a deployed Version from.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.delete(...)
-
-
-
Delete the Evaluator with the given ID.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluators.delete(id='ev_890bcd')
-
-
-
id:
strβ Unique identifier for Evaluator.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.move(...)
-
-
-
Move the Evaluator to a different path or change the name.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluators.move(id='ev_890bcd', path='new directory/new name')
-
-
-
id:
strβ Unique identifier for Evaluator.
-
path:
typing.Optional[str]β Path of the Evaluator including the Evaluator name, which is used as a unique identifier.
-
name:
typing.Optional[str]β Name of the Evaluator, which is used as a unique identifier.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.list_versions(...)
-
-
-
Get a list of all the versions of an Evaluator.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluators.list_versions(id='ev_890bcd')
-
-
-
id:
strβ Unique identifier for the Evaluator.
-
evaluator_aggregates:
typing.Optional[bool]β Whether to include Evaluator aggregate results for the versions in the response
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.delete_evaluator_version(...)
-
-
-
Delete a version of the Evaluator.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluators.delete_evaluator_version(id='id', version_id='version_id')
-
-
-
id:
strβ Unique identifier for Evaluator.
-
version_id:
strβ Unique identifier for the specific version of the Evaluator.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.update_evaluator_version(...)
-
-
-
Update the name or description of the Evaluator version.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluators.update_evaluator_version(id='id', version_id='version_id')
-
-
-
id:
strβ Unique identifier for Evaluator.
-
version_id:
strβ Unique identifier for the specific version of the Evaluator.
-
name:
typing.Optional[str]β Name of the version.
-
description:
typing.Optional[str]β Description of the version.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.set_deployment(...)
-
-
-
Deploy Evaluator to an Environment.
Set the deployed version for the specified Environment. This Evaluator will be used for calls made to the Evaluator in this Environment.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluators.set_deployment(
    id='ev_890bcd',
    environment_id='staging',
    version_id='evv_012def',
)
-
-
-
id:
strβ Unique identifier for Evaluator.
-
environment_id:
strβ Unique identifier for the Environment to deploy the Version to.
-
version_id:
strβ Unique identifier for the specific version of the Evaluator.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.remove_deployment(...)
-
-
-
Remove deployed Evaluator from the Environment.
Remove the deployed version for the specified Environment. This Evaluator will no longer be used for calls made to the Evaluator in this Environment.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluators.remove_deployment(id='ev_890bcd', environment_id='staging')
-
-
-
id:
strβ Unique identifier for Evaluator.
-
environment_id:
strβ Unique identifier for the Environment to remove the deployment from.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.list_environments(...)
-
-
-
List all Environments and their deployed versions for the Evaluator.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluators.list_environments(id='ev_890bcd')
-
-
-
id:
strβ Unique identifier for Evaluator.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.update_monitoring(...)
-
-
-
Activate and deactivate Evaluators for monitoring the Evaluator.
An activated Evaluator will automatically be run on all new Logs within the Evaluator for monitoring purposes.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluators.update_monitoring(id='id')
-
-
-
id:
str
-
activate:
typing.Optional[typing.Sequence[EvaluatorActivationDeactivationRequestActivateItemParams]]β Evaluators to activate for Monitoring. These will be automatically run on new Logs.
-
deactivate:
typing.Optional[typing.Sequence[EvaluatorActivationDeactivationRequestDeactivateItemParams]]β Evaluators to deactivate. These will not be run on new Logs.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.flows.log(...)
-
-
-
Log to a Flow.
You can use query parameters version_id or environment to target an existing version of the Flow. Otherwise, the default deployed version will be chosen.
-
-
-
from humanloop import Humanloop
import datetime

client = Humanloop(api_key="YOUR_API_KEY")
client.flows.log(
    id='fl_6o701g4jmcanPVHxdqD0O',
    flow={
        'attributes': {
            'prompt': {
                'template': 'You are a helpful assistant helping with medical anamnesis',
                'model': 'gpt-4o',
                'temperature': 0.8,
            },
            'tool': {
                'name': 'retrieval_tool_v3',
                'description': 'Retrieval tool for MedQA.',
                'source_code': 'def retrieval_tool(question: str) -> str:\n    pass\n',
            },
        }
    },
    inputs={
        'question': 'Patient with a history of diabetes and hypertension presents with chest pain and shortness of breath.'
    },
    output='The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.',
    log_status="incomplete",
    start_time=datetime.datetime.fromisoformat("2024-07-08 21:40:35+00:00"),
    end_time=datetime.datetime.fromisoformat("2024-07-08 21:40:39+00:00"),
)
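The start_time and end_time values in the example are built with datetime.datetime.fromisoformat, which parses timezone-aware ISO 8601 strings. A standalone sketch of just the timestamp handling used above:

```python
import datetime

# Timezone-aware timestamps, as passed for start_time/end_time above.
start_time = datetime.datetime.fromisoformat("2024-07-08 21:40:35+00:00")
end_time = datetime.datetime.fromisoformat("2024-07-08 21:40:39+00:00")

# Both carry a UTC offset, so the duration between them is well-defined.
assert start_time.tzinfo is not None
print((end_time - start_time).total_seconds())  # → 4.0
```

Using timezone-aware datetimes avoids ambiguity when Humanloop later compares or aggregates Log timings.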
-
-
-
version_id:
typing.Optional[str]β A specific Version ID of the Flow to log to.
-
environment:
typing.Optional[str]β Name of the Environment identifying a deployed version to log to.
-
messages:
typing.Optional[typing.Sequence[ChatMessageParams]]β List of chat messages that were used as an input to the Flow.
-
output_message:
typing.Optional[ChatMessageParams]β The output message returned by this Flow.
-
run_id:
typing.Optional[str]β Unique identifier for the Run to associate the Log to.
-
path:
typing.Optional[str]β Path of the Flow, including the name. This locates the Flow in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str]β ID for an existing Flow.
-
start_time:
typing.Optional[dt.datetime]β The start time of the Trace. Will be updated if a child Log with an earlier start time is added.
-
end_time:
typing.Optional[dt.datetime]β The end time of the Trace. Will be updated if a child Log with a later end time is added.
-
output:
typing.Optional[str]β Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.
-
created_at:
typing.Optional[dt.datetime]β User defined timestamp for when the log was created.
-
error:
typing.Optional[str]β Error message if the log is an error.
-
provider_latency:
typing.Optional[float]β Duration of the logged event in seconds.
-
stdout:
typing.Optional[str]β Captured log and debug statements.
-
provider_request:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Raw request sent to provider.
-
provider_response:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Raw response received from the provider.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β The inputs passed to the prompt template.
-
source:
typing.Optional[str]β Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Any additional metadata to record.
-
log_status:
typing.Optional[LogStatus]β Status of the Flow Log. When a Log is updated from incomplete to complete, no more Logs can be added to it.
-
source_datapoint_id:
typing.Optional[str]β Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
-
trace_parent_id:
typing.Optional[str]β The ID of the parent Log to nest this Log under in a Trace.
-
user:
typing.Optional[str]β End-user ID related to the Log.
-
flow_log_request_environment:
typing.Optional[str]β The name of the Environment the Log is associated to.
-
save:
typing.Optional[bool]β Whether the request/response payloads will be stored on Humanloop.
-
log_id:
typing.Optional[str]β This will identify a Log. If you don't provide a Log ID, Humanloop will generate one for you.
-
flow:
typing.Optional[FlowKernelRequestParams]β Flow used to generate the Trace.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.flows.update_log(...)
-
-
-
Update the status, inputs, output of a Flow Log.
Marking a Flow Log as complete will trigger any monitoring Evaluators to run. Inputs and output (or error) must be provided in order to mark it as complete.
The end_time log attribute will be set to match the time the log is marked as complete.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.flows.update_log(
    log_id='medqa_experiment_0001',
    inputs={
        'question': 'Patient with a history of diabetes and normal tension presents with chest pain and shortness of breath.'
    },
    output='The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.',
    log_status="complete",
)
-
-
-
log_id:
strβ Unique identifier of the Flow Log.
-
messages:
typing.Optional[typing.Sequence[ChatMessageParams]]β List of chat messages that were used as an input to the Flow.
-
output_message:
typing.Optional[ChatMessageParams]β The output message returned by this Flow.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β The inputs passed to the Flow Log.
-
output:
typing.Optional[str]β The output of the Flow Log. Provide None to unset the existing output value. Provide either this, output_message or error.
-
error:
typing.Optional[str]β The error message of the Flow Log. Provide None to unset the existing error value. Provide either this, output_message or output.
-
log_status:
typing.Optional[LogStatus]β Status of the Flow Log. When a Flow Log is updated to complete, no more Logs can be added to it. You cannot update a Flow Log's status from complete to incomplete.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.flows.get(...)
-
-
-
Retrieve the Flow with the given ID.
By default, the deployed version of the Flow is returned. Use the query parameters version_id or environment to target a specific version of the Flow.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.flows.get(id='fl_6o701g4jmcanPVHxdqD0O')
-
-
-
id:
strβ Unique identifier for Flow.
-
version_id:
typing.Optional[str]β A specific Version ID of the Flow to retrieve.
-
environment:
typing.Optional[str]β Name of the Environment to retrieve a deployed Version from.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.flows.delete(...)
-
-
-
Delete the Flow with the given ID.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.flows.delete(id='fl_6o701g4jmcanPVHxdqD0O')
-
-
-
id:
strβ Unique identifier for Flow.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.flows.move(...)
-
-
-
Move the Flow to a different path or change the name.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.flows.move(id='fl_6o701g4jmcanPVHxdqD0O', path='new directory/new name')
-
-
-
id:
strβ Unique identifier for Flow.
-
path:
typing.Optional[str]β Path of the Flow including the Flow name, which is used as a unique identifier.
-
name:
typing.Optional[str]β Name of the Flow.
-
directory_id:
typing.Optional[str]β Unique identifier for the Directory to move Flow to. Starts with dir_.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.flows.list(...)
-
-
-
Get a list of Flows.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
response = client.flows.list(size=1)
for item in response:
    print(item)
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)
-
-
-
page:
typing.Optional[int]β Page number for pagination.
-
size:
typing.Optional[int]β Page size for pagination. Number of Flows to fetch.
-
name:
typing.Optional[str]β Case-insensitive filter for Flow name.
-
user_filter:
typing.Optional[str]β Case-insensitive filter for users in the Flow. This filter matches against both email address and name of users.
-
sort_by:
typing.Optional[FileSortBy]β Field to sort Flows by
-
order:
typing.Optional[SortOrder]β Direction to sort by.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.flows.upsert(...)
-
-
-
Create or update a Flow.
Flows can also be identified by the ID or their path.
You can provide version_name and version_description to identify and describe your versions. Version names must be unique within a Flow - attempting to create a version with a name that already exists will result in a 409 Conflict error.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.flows.upsert(
    path='Personal Projects/MedQA Flow',
    attributes={
        'prompt': {
            'template': 'You are a helpful medical assistant helping with medical anamnesis. Answer {{question}}',
            'model': 'gpt-4o',
            'temperature': 0.8,
        },
        'tool': {
            'name': 'retrieval_tool_v3',
            'description': 'Retrieval tool for MedQA.',
            'source_code': 'def retrieval_tool(question: str) -> str:\n    pass\n',
        },
        'version_name': 'medqa-flow-v1',
        'version_description': 'Initial version',
    },
)
-
-
-
attributes:
typing.Dict[str, typing.Optional[typing.Any]]β A key-value object identifying the Flow Version.
-
path:
typing.Optional[str]β Path of the Flow, including the name. This locates the Flow in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str]β ID for an existing Flow.
-
version_name:
typing.Optional[str]β Unique name for the Flow version. Version names must be unique for a given Flow.
-
version_description:
typing.Optional[str]β Description of the version, e.g., the changes made in this version.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.flows.list_versions(...)
-
-
-
Get a list of all the versions of a Flow.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.flows.list_versions(id='fl_6o701g4jmcanPVHxdqD0O')
-
-
-
id:
strβ Unique identifier for Flow.
-
evaluator_aggregates:
typing.Optional[bool]β Whether to include Evaluator aggregate results for the versions in the response
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.flows.delete_flow_version(...)
-
-
-
Delete a version of the Flow.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.flows.delete_flow_version(id='id', version_id='version_id')
-
-
-
id:
strβ Unique identifier for Flow.
-
version_id:
strβ Unique identifier for the specific version of the Flow.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.flows.update_flow_version(...)
-
-
-
Update the name or description of the Flow version.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.flows.update_flow_version(id='id', version_id='version_id')
-
-
-
id:
strβ Unique identifier for Flow.
-
version_id:
strβ Unique identifier for the specific version of the Flow.
-
name:
typing.Optional[str]β Name of the version.
-
description:
typing.Optional[str]β Description of the version.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.flows.set_deployment(...)
-
-
-
Deploy Flow to an Environment.
Set the deployed version for the specified Environment. This Flow will be used for calls made to the Flow in this Environment.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.flows.set_deployment(
    id='fl_6o701g4jmcanPVHxdqD0O',
    environment_id='staging',
    version_id='flv_6o701g4jmcanPVHxdqD0O',
)
-
-
-
id:
strβ Unique identifier for Flow.
-
environment_id:
strβ Unique identifier for the Environment to deploy the Version to.
-
version_id:
strβ Unique identifier for the specific version of the Flow.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.flows.remove_deployment(...)
-
-
-
Remove deployed Flow from the Environment.
Remove the deployed version for the specified Environment. This Flow will no longer be used for calls made to the Flow in this Environment.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.flows.remove_deployment(id='fl_6o701g4jmcanPVHxdqD0O', environment_id='staging')
-
-
-
id:
strβ Unique identifier for Flow.
-
environment_id:
strβ Unique identifier for the Environment to remove the deployment from.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.flows.list_environments(...)
-
-
-
List all Environments and their deployed versions for the Flow.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.flows.list_environments(id='fl_6o701g4jmcanPVHxdqD0O')
-
-
-
id:
strβ Unique identifier for Flow.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.flows.update_monitoring(...)
-
-
-
Activate and deactivate Evaluators for monitoring the Flow.
An activated Evaluator will automatically be run on all new "completed" Logs within the Flow for monitoring purposes.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.flows.update_monitoring(
    id='fl_6o701g4jmcanPVHxdqD0O',
    activate=[{'evaluator_version_id': 'evv_1abc4308abd'}],
)
-
-
-
id:
str
-
activate:
typing.Optional[typing.Sequence[EvaluatorActivationDeactivationRequestActivateItemParams]]β Evaluators to activate for Monitoring. These will be automatically run on new Logs.
-
deactivate:
typing.Optional[typing.Sequence[EvaluatorActivationDeactivationRequestDeactivateItemParams]]β Evaluators to deactivate. These will not be run on new Logs.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.log(...)
-
-
-
Create an Agent Log.
You can use query parameters version_id or environment to target an existing version of the Agent. Otherwise, the default deployed version will be chosen.
If you create the Agent Log with a log_status of incomplete, you should later update it to complete in order to trigger Evaluators.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.agents.log(
    path='Banking/Teller Agent',
    agent={
        'provider': "anthropic",
        'endpoint': "chat",
        'model': 'claude-3-7-sonnet-latest',
        'reasoning_effort': 1024,
        'template': [
            {
                'role': "system",
                'content': 'You are a helpful digital assistant, helping users navigate our digital banking platform.',
            }
        ],
        'max_iterations': 3,
        'tools': [
            {
                'type': 'file',
                'link': {'file_id': 'pr_1234567890', 'version_id': 'prv_1234567890'},
                'on_agent_call': "continue",
            },
            {
                'type': 'inline',
                'json_schema': {
                    'name': 'stop',
                    'description': 'Call this tool when you have finished your task.',
                    'parameters': {
                        'type': 'object',
                        'properties': {
                            'output': {
                                'type': 'string',
                                'description': 'The final output to return to the user.',
                            }
                        },
                        'additionalProperties': False,
                        'required': ['output'],
                    },
                    'strict': True,
                },
                'on_agent_call': "stop",
            },
        ],
    },
)
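The second tool in the example is an inline JSON Schema tool that ends the run (on_agent_call set to "stop"). Its json_schema follows standard JSON Schema conventions; a minimal local shape check of that tool definition, as a sketch using only plain dict operations (no SDK call):

```python
# The inline 'stop' tool from the example above, as a plain dict.
stop_tool = {
    "type": "inline",
    "json_schema": {
        "name": "stop",
        "description": "Call this tool when you have finished your task.",
        "parameters": {
            "type": "object",
            "properties": {
                "output": {
                    "type": "string",
                    "description": "The final output to return to the user.",
                }
            },
            "additionalProperties": False,
            "required": ["output"],
        },
        "strict": True,
    },
    "on_agent_call": "stop",
}

# A common gotcha with strict schemas: every name in 'required'
# must also be declared under 'properties'.
params = stop_tool["json_schema"]["parameters"]
assert set(params["required"]) <= set(params["properties"])
print("schema ok")
```

Checking this locally avoids a round-trip failure when the Agent version is created with a malformed inline tool.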
-
-
-
version_id:
typing.Optional[str]β A specific Version ID of the Agent to log to.
-
environment:
typing.Optional[str]β Name of the Environment identifying a deployed version to log to.
-
run_id:
typing.Optional[str]β Unique identifier for the Run to associate the Log to.
-
path:
typing.Optional[str]β Path of the Agent, including the name. This locates the Agent in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str]β ID for an existing Agent.
-
output_message:
typing.Optional[ChatMessageParams]β The message returned by the provider.
-
prompt_tokens:
typing.Optional[int]β Number of tokens in the prompt used to generate the output.
-
reasoning_tokens:
typing.Optional[int]β Number of reasoning tokens used to generate the output.
-
output_tokens:
typing.Optional[int]β Number of tokens in the output generated by the model.
-
prompt_cost:
typing.Optional[float]β Cost in dollars associated to the tokens in the prompt.
-
output_cost:
typing.Optional[float]β Cost in dollars associated to the tokens in the output.
-
finish_reason:
typing.Optional[str]β Reason the generation finished.
-
messages:
typing.Optional[typing.Sequence[ChatMessageParams]]β The messages passed to the provider chat endpoint.
-
tool_choice:
typing.Optional[AgentLogRequestToolChoiceParams]β Controls how the model uses tools. The following options are supported:
'none' means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt. 'auto' means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt. 'required' means the model must call one or more of the provided tools. {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.
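The forced-function form of tool_choice is a plain dict alongside the three literal strings. A sketch of the accepted values (the tool name my_tool is hypothetical, for illustration only):

```python
# tool_choice accepts a literal string or a forced-function dict.
# 'my_tool' is a hypothetical tool name used for illustration.
forced = {"type": "function", "function": {"name": "my_tool"}}

# The string forms are passed as-is:
choice_auto = "auto"          # model decides whether to call tools
choice_none = "none"          # model must reply with a message
choice_required = "required"  # model must call at least one tool

assert forced["function"]["name"] == "my_tool"
```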
-
agent:
typing.Optional[AgentLogRequestAgentParams]β The Agent configuration to use. Two formats are supported:
- An object representing the details of the Agent configuration
- A string representing the raw contents of a .agent file
A new Agent version will be created if the provided details do not match any existing version.
-
start_time:
typing.Optional[dt.datetime]β When the logged event started.
-
end_time:
typing.Optional[dt.datetime]β When the logged event ended.
-
output:
typing.Optional[str]β Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.
-
created_at:
typing.Optional[dt.datetime]β User defined timestamp for when the log was created.
-
error:
typing.Optional[str]β Error message if the log is an error.
-
provider_latency:
typing.Optional[float]β Duration of the logged event in seconds.
-
stdout:
typing.Optional[str]β Captured log and debug statements.
-
provider_request:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Raw request sent to provider.
-
provider_response:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Raw response received from the provider.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β The inputs passed to the prompt template.
-
source:
typing.Optional[str]β Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Any additional metadata to record.
-
source_datapoint_id:
typing.Optional[str]β Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
-
trace_parent_id:
typing.Optional[str]β The ID of the parent Log to nest this Log under in a Trace.
-
user:
typing.Optional[str]β End-user ID related to the Log.
-
agent_log_request_environment:
typing.Optional[str]β The name of the Environment the Log is associated to.
-
save:
typing.Optional[bool]β Whether the request/response payloads will be stored on Humanloop.
-
log_id:
typing.Optional[str]β This will identify a Log. If you don't provide a Log ID, Humanloop will generate one for you.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.update_log(...)
-
-
-
Update a Log.
Update the details of a Log with the given ID.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.agents.update_log(
    id='ag_1234567890',
    log_id='log_1234567890',
    messages=[
        {'role': "user", 'content': 'I need to withdraw $1000'},
        {'role': "assistant", 'content': 'Of course! Would you like to use your savings or checking account?'},
    ],
    output_message={'role': "assistant", 'content': "I'm sorry, I can't help with that."},
    log_status="complete",
)
-
-
-
id:
strβ Unique identifier for Agent.
-
log_id:
strβ Unique identifier for the Log.
-
messages:
typing.Optional[typing.Sequence[ChatMessageParams]]β List of chat messages that were used as an input to the Flow.
-
output_message:
typing.Optional[ChatMessageParams]β The output message returned by this Flow.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β The inputs passed to the Flow Log.
-
output:
typing.Optional[str]β The output of the Flow Log. Provide None to unset the existing output value. Provide either this, output_message or error.
-
error:
typing.Optional[str]β The error message of the Flow Log. Provide None to unset the existing error value. Provide either this, output_message or output.
-
log_status:
typing.Optional[LogStatus]β Status of the Flow Log. When a Flow Log is updated to complete, no more Logs can be added to it. You cannot update a Flow Log's status from complete to incomplete.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.call_stream(...)
-
-
-
Call an Agent. The Agent will run on the Humanloop runtime and return a completed Agent Log.
If the Agent requires a tool call that cannot be run by Humanloop, execution will halt. To continue, pass the ID of the incomplete Log and the required tool call to the /agents/continue endpoint.
The agent will run for the maximum number of iterations, or until it encounters a stop condition, according to its configuration.
You can use query parameters version_id or environment to target an existing version of the Agent. Otherwise, the default deployed version will be chosen.
Instead of targeting an existing version explicitly, you can instead pass in Agent details in the request body. A new version is created if it does not match any existing ones. This is helpful in the case where you are storing or deriving your Agent details in code.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
response = client.agents.call_stream()
for chunk in response.data:
    print(chunk)
-
-
-
version_id:
typing.Optional[str]β A specific Version ID of the Agent to log to.
-
environment:
typing.Optional[str]β Name of the Environment identifying a deployed version to log to.
-
path:
typing.Optional[str]β Path of the Agent, including the name. This locates the Agent in the Humanloop filesystem and is used as a unique identifier. For example: `folder/name` or just `name`.
-
id:
typing.Optional[str]β ID for an existing Agent.
-
messages:
typing.Optional[typing.Sequence[ChatMessageParams]]β The messages passed to the provider chat endpoint.
-
tool_choice:
typing.Optional[AgentsCallStreamRequestToolChoiceParams]β Controls how the model uses tools. The following options are supported:
- `'none'` means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Agent.
- `'auto'` means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Agent.
- `'required'` means the model must call one or more of the provided tools.
- `{'type': 'function', 'function': {'name': <TOOL_NAME>}}` forces the model to use the named function.
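As a sketch of the shapes described above (plain Python values; `get_balance` is a hypothetical tool name used only for illustration):

```python
# Possible tool_choice values, mirroring the options described above.
# "get_balance" is a hypothetical tool name, not part of any real Agent.
tool_choice_none = "none"          # never call a tool; generate a message
tool_choice_auto = "auto"          # let the model decide
tool_choice_required = "required"  # must call at least one tool
tool_choice_forced = {
    "type": "function",
    "function": {"name": "get_balance"},  # force this specific tool
}
```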
-
agent:
typing.Optional[AgentsCallStreamRequestAgentParams]β The Agent configuration to use. Two formats are supported:
- An object representing the details of the Agent configuration
- A string representing the raw contents of a `.agent` file
A new Agent version will be created if the provided details do not match any existing version.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β The inputs passed to the prompt template.
-
source:
typing.Optional[str]β Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Any additional metadata to record.
-
start_time:
typing.Optional[dt.datetime]β When the logged event started.
-
end_time:
typing.Optional[dt.datetime]β When the logged event ended.
-
source_datapoint_id:
typing.Optional[str]β Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
-
trace_parent_id:
typing.Optional[str]β The ID of the parent Log to nest this Log under in a Trace.
-
user:
typing.Optional[str]β End-user ID related to the Log.
-
agents_call_stream_request_environment:
typing.Optional[str]β The name of the Environment the Log is associated with.
-
save:
typing.Optional[bool]β Whether the request/response payloads will be stored on Humanloop.
-
log_id:
typing.Optional[str]β This will identify a Log. If you don't provide a Log ID, Humanloop will generate one for you.
-
provider_api_keys:
typing.Optional[ProviderApiKeysParams]β API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
-
return_inputs:
typing.Optional[bool]β Whether to return the inputs in the response. If false, the response will contain an empty dictionary under `inputs`. This is useful for reducing the size of the response. Defaults to true.
-
include_trace_children:
typing.Optional[bool]β If true, populate `trace_children` for the returned Agent Log. Only applies when not streaming. Defaults to false.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.call(...)
-
-
-
Call an Agent. The Agent will run on the Humanloop runtime and return a completed Agent Log.
If the Agent requires a tool call that cannot be run by Humanloop, execution will halt. To continue, pass the ID of the incomplete Log and the required tool call to the /agents/continue endpoint.
The Agent will run for the maximum number of iterations, or until it encounters a stop condition, according to its configuration.
You can use the query parameters `version_id` or `environment` to target an existing version of the Agent. Otherwise, the default deployed version will be chosen.
Instead of targeting an existing version explicitly, you can pass Agent details in the request body. A new version is created if they do not match any existing one. This is helpful when you are storing or deriving your Agent details in code.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.agents.call(
    path='Banking/Teller Agent',
    messages=[
        {
            'role': "user",
            'content': "I'd like to deposit $1000 to my savings account from my checking account.",
        }
    ],
)
```
-
-
-
version_id:
typing.Optional[str]β A specific Version ID of the Agent to log to.
-
environment:
typing.Optional[str]β Name of the Environment identifying a deployed version to log to.
-
path:
typing.Optional[str]β Path of the Agent, including the name. This locates the Agent in the Humanloop filesystem and is used as a unique identifier. For example: `folder/name` or just `name`.
-
id:
typing.Optional[str]β ID for an existing Agent.
-
messages:
typing.Optional[typing.Sequence[ChatMessageParams]]β The messages passed to the provider chat endpoint.
-
tool_choice:
typing.Optional[AgentsCallRequestToolChoiceParams]β Controls how the model uses tools. The following options are supported:
- `'none'` means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Agent.
- `'auto'` means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Agent.
- `'required'` means the model must call one or more of the provided tools.
- `{'type': 'function', 'function': {'name': <TOOL_NAME>}}` forces the model to use the named function.
-
agent:
typing.Optional[AgentsCallRequestAgentParams]The Agent configuration to use. Two formats are supported:
- An object representing the details of the Agent configuration
- A string representing the raw contents of a .agent file A new Agent version will be created if the provided details do not match any existing version.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β The inputs passed to the prompt template.
-
source:
typing.Optional[str]β Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Any additional metadata to record.
-
start_time:
typing.Optional[dt.datetime]β When the logged event started.
-
end_time:
typing.Optional[dt.datetime]β When the logged event ended.
-
source_datapoint_id:
typing.Optional[str]β Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
-
trace_parent_id:
typing.Optional[str]β The ID of the parent Log to nest this Log under in a Trace.
-
user:
typing.Optional[str]β End-user ID related to the Log.
-
agents_call_request_environment:
typing.Optional[str]β The name of the Environment the Log is associated with.
-
save:
typing.Optional[bool]β Whether the request/response payloads will be stored on Humanloop.
-
log_id:
typing.Optional[str]β This will identify a Log. If you don't provide a Log ID, Humanloop will generate one for you.
-
provider_api_keys:
typing.Optional[ProviderApiKeysParams]β API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
-
return_inputs:
typing.Optional[bool]β Whether to return the inputs in the response. If false, the response will contain an empty dictionary under `inputs`. This is useful for reducing the size of the response. Defaults to true.
-
include_trace_children:
typing.Optional[bool]β If true, populate `trace_children` for the returned Agent Log. Only applies when not streaming. Defaults to false.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.continue_call_stream(...)
-
-
-
Continue an incomplete Agent call.
This endpoint allows continuing an existing incomplete Agent call, by passing the tool call requested by the Agent. The Agent will resume processing from where it left off.
The messages in the request will be appended to the original messages in the Log. You do not have to provide the previous conversation history.
The original log must be in an incomplete state to be continued.
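The continuation payload described above can be sketched as follows (the tool call ID and result values are hypothetical illustration values):

```python
import json

# Tool-result message answering the Assistant's earlier tool call.
# "tc_123" and the balance payload are hypothetical values.
tool_result = {
    "role": "tool",
    "tool_call_id": "tc_123",
    "content": json.dumps({"type": "checking", "balance": 5200}),
}

# These messages are appended to the original Log's messages;
# the prior conversation history does not need to be resent.
continuation_messages = [tool_result]
```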
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.agents.continue_call_stream(
    log_id='log_id',
    messages=[{'role': "user"}],
)
for chunk in response.data:
    print(chunk)
```
-
-
-
log_id:
strβ This identifies the Agent Log to continue.
-
messages:
typing.Sequence[ChatMessageParams]β The additional messages with which to continue the Agent Log. Often, these should start with the Tool messages with results for the previous Assistant message's tool calls.
-
provider_api_keys:
typing.Optional[ProviderApiKeysParams]β API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
-
include_trace_children:
typing.Optional[bool]β If true, populate `trace_children` for the returned Agent Log. Defaults to false.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.continue_call(...)
-
-
-
Continue an incomplete Agent call.
This endpoint allows continuing an existing incomplete Agent call, by passing the tool call requested by the Agent. The Agent will resume processing from where it left off.
The messages in the request will be appended to the original messages in the Log. You do not have to provide the previous conversation history.
The original log must be in an incomplete state to be continued.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.agents.continue_call(
    log_id='log_1234567890',
    messages=[
        {
            'role': "tool",
            'content': '{"type": "checking", "balance": 5200}',
            'tool_call_id': 'tc_1234567890',
        }
    ],
)
```
-
-
-
log_id:
strβ This identifies the Agent Log to continue.
-
messages:
typing.Sequence[ChatMessageParams]β The additional messages with which to continue the Agent Log. Often, these should start with the Tool messages with results for the previous Assistant message's tool calls.
-
provider_api_keys:
typing.Optional[ProviderApiKeysParams]β API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
-
include_trace_children:
typing.Optional[bool]β If true, populate `trace_children` for the returned Agent Log. Defaults to false.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.list(...)
-
-
-
Get a list of all Agents.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.agents.list(
    size=1,
)
for item in response:
    print(item)

# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)
```
-
-
-
page:
typing.Optional[int]β Page number for pagination.
-
size:
typing.Optional[int]β Page size for pagination. Number of Agents to fetch.
-
name:
typing.Optional[str]β Case-insensitive filter for Agent name.
-
user_filter:
typing.Optional[str]β Case-insensitive filter for users in the Agent. This filter matches against both email address and name of users.
-
sort_by:
typing.Optional[FileSortBy]β Field to sort Agents by.
-
order:
typing.Optional[SortOrder]β Direction to sort by.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.upsert(...)
-
-
-
Create an Agent or update it with a new version if it already exists.
Agents are identified by their `id` or `path`. The parameters (i.e. the template, temperature, model, etc.) and tools determine the versions of the Agent.
You can provide `version_name` and `version_description` to identify and describe your versions. Version names must be unique within an Agent - attempting to create a version with a name that already exists will result in a 409 Conflict error.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.agents.upsert(
    path='Banking/Teller Agent',
    provider="anthropic",
    endpoint="chat",
    model='claude-3-7-sonnet-latest',
    reasoning_effort=1024,
    template=[
        {
            'role': "system",
            'content': 'You are a helpful digital assistant, helping users navigate our digital banking platform.',
        }
    ],
    max_iterations=3,
    tools=[
        {
            'type': 'inline',
            'json_schema': {
                'name': 'stop',
                'description': 'Call this tool when you have finished your task.',
                'parameters': {
                    'type': 'object',
                    'properties': {
                        'output': {
                            'type': 'string',
                            'description': 'The final output to return to the user.',
                        }
                    },
                    'additionalProperties': False,
                    'required': ['output'],
                },
                'strict': True,
            },
            'on_agent_call': "stop",
        }
    ],
    version_name='teller-agent-v1',
    version_description='Initial version',
)
```
-
-
-
model:
strβ The model instance used, e.g. `gpt-4`. See supported models.
-
path:
typing.Optional[str]β Path of the Agent, including the name. This locates the Agent in the Humanloop filesystem and is used as a unique identifier. For example: `folder/name` or just `name`.
-
id:
typing.Optional[str]β ID for an existing Agent.
-
endpoint:
typing.Optional[ModelEndpoints]β The provider model endpoint used.
-
template:
typing.Optional[AgentRequestTemplateParams]β The template contains the main structure and instructions for the model, including input variables for dynamic values.
For chat models, provide the template as a ChatTemplate (a list of messages), e.g. a system message followed by a user message with an input variable. For completion models, provide a prompt template as a string.
Input variables should be specified with double curly bracket syntax: `{{input_name}}`.
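To illustrate the double curly bracket syntax, here is a minimal client-side rendering sketch. Humanloop performs the actual templating server-side; this naive substitution and the template values are for illustration only:

```python
import re

def render(template: str, inputs: dict) -> str:
    """Naive {{variable}} substitution, for illustration only.
    Unknown variables are left in place."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(inputs.get(m.group(1), m.group(0))),
        template,
    )

# Hypothetical chat template with two input variables.
chat_template = [
    {"role": "system", "content": "You are a helpful assistant for {{product}}."},
    {"role": "user", "content": "{{question}}"},
]
rendered = [
    {**msg, "content": render(msg["content"],
                              {"product": "Acme Bank",
                               "question": "What is my balance?"})}
    for msg in chat_template
]
```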
-
template_language:
typing.Optional[TemplateLanguage]β The template language to use for rendering the template.
-
provider:
typing.Optional[ModelProviders]β The company providing the underlying model service.
-
max_tokens:
typing.Optional[int]β The maximum number of tokens to generate. Provide `max_tokens=-1` to dynamically calculate the maximum number of tokens to generate given the length of the prompt.
-
temperature:
typing.Optional[float]β What sampling temperature to use when making a generation. Higher values mean the model will be more creative.
-
top_p:
typing.Optional[float]β An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
-
stop:
typing.Optional[AgentRequestStopParams]β The string (or list of strings) after which the model will stop generating. The returned text will not contain the stop sequence.
-
presence_penalty:
typing.Optional[float]β Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the generation so far.
-
frequency_penalty:
typing.Optional[float]β Number between -2.0 and 2.0. Positive values penalize new tokens based on how frequently they appear in the generation so far.
-
other:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Other parameter values to be passed to the provider call.
-
seed:
typing.Optional[int]β If specified, model will make a best effort to sample deterministically, but it is not guaranteed.
-
response_format:
typing.Optional[ResponseFormatParams]β The format of the response. Only `{"type": "json_object"}` is currently supported for chat.
-
reasoning_effort:
typing.Optional[AgentRequestReasoningEffortParams]β Guidance on how many reasoning tokens the model should generate before creating a response to the prompt. OpenAI reasoning models (o1, o3-mini) expect an `OpenAIReasoningEffort` enum. Anthropic reasoning models expect an integer, which signifies the maximum token budget.
-
tools:
typing.Optional[typing.Sequence[AgentRequestToolsItemParams]]
-
attributes:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Additional fields to describe the Agent. Helpful to separate Agent versions from each other with details on how they were created or used.
-
max_iterations:
typing.Optional[int]β The maximum number of iterations the Agent can run. This is used to limit the number of times the Agent model is called.
-
version_name:
typing.Optional[str]β Unique name for the Agent version. Each Agent can only have one version with a given name.
-
version_description:
typing.Optional[str]β Description of the Version.
-
description:
typing.Optional[str]β Description of the Agent.
-
tags:
typing.Optional[typing.Sequence[str]]β List of tags associated with this Agent.
-
readme:
typing.Optional[str]β Long description of the Agent.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.delete_agent_version(...)
-
-
-
Delete a version of the Agent.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.agents.delete_agent_version(
    id='ag_1234567890',
    version_id='agv_1234567890',
)
```
-
-
-
id:
strβ Unique identifier for Agent.
-
version_id:
strβ Unique identifier for the specific version of the Agent.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.patch_agent_version(...)
-
-
-
Update the name or description of the Agent version.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.agents.patch_agent_version(
    id='ag_1234567890',
    version_id='agv_1234567890',
    name='teller-agent-v2',
    description='Updated version',
)
```
-
-
-
id:
strβ Unique identifier for Agent.
-
version_id:
strβ Unique identifier for the specific version of the Agent.
-
name:
typing.Optional[str]β Name of the version.
-
description:
typing.Optional[str]β Description of the version.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.get(...)
-
-
-
Retrieve the Agent with the given ID.
By default, the deployed version of the Agent is returned. Use the query parameters `version_id` or `environment` to target a specific version of the Agent.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.agents.get(
    id='ag_1234567890',
)
```
-
-
-
id:
strβ Unique identifier for Agent.
-
version_id:
typing.Optional[str]β A specific Version ID of the Agent to retrieve.
-
environment:
typing.Optional[str]β Name of the Environment to retrieve a deployed Version from.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.delete(...)
-
-
-
Delete the Agent with the given ID.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.agents.delete(
    id='ag_1234567890',
)
```
-
-
-
id:
strβ Unique identifier for Agent.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.move(...)
-
-
-
Move the Agent to a different path or change the name.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.agents.move(
    id='ag_1234567890',
    path='new directory/new name',
)
```
-
-
-
id:
strβ Unique identifier for Agent.
-
path:
typing.Optional[str]β Path of the Agent including the Agent name, which is used as a unique identifier.
-
name:
typing.Optional[str]β Name of the Agent.
-
directory_id:
typing.Optional[str]β Unique identifier for the Directory to move the Agent to. Starts with `dir_`.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.list_versions(...)
-
-
-
Get a list of all the versions of an Agent.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.agents.list_versions(
    id='ag_1234567890',
)
```
-
-
-
id:
strβ Unique identifier for Agent.
-
evaluator_aggregates:
typing.Optional[bool]β Whether to include Evaluator aggregate results for the versions in the response.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.set_deployment(...)
-
-
-
Deploy Agent to an Environment.
Set the deployed version for the specified Environment. This Agent will be used for calls made to the Agent in this Environment.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.agents.set_deployment(
    id='id',
    environment_id='environment_id',
    version_id='version_id',
)
```
-
-
-
id:
strβ Unique identifier for Agent.
-
environment_id:
strβ Unique identifier for the Environment to deploy the Version to.
-
version_id:
strβ Unique identifier for the specific version of the Agent.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.remove_deployment(...)
-
-
-
Remove deployed Agent from the Environment.
Remove the deployed version for the specified Environment. This Agent will no longer be used for calls made to the Agent in this Environment.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.agents.remove_deployment(
    id='id',
    environment_id='environment_id',
)
```
-
-
-
id:
strβ Unique identifier for Agent.
-
environment_id:
strβ Unique identifier for the Environment to remove the deployment from.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.list_environments(...)
-
-
-
List all Environments and their deployed versions for the Agent.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.agents.list_environments(
    id='ag_1234567890',
)
```
-
-
-
id:
strβ Unique identifier for Agent.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.update_monitoring(...)
-
-
-
Activate and deactivate Evaluators for monitoring the Agent.
An activated Evaluator will automatically be run on all new Logs within the Agent for monitoring purposes.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.agents.update_monitoring(
    id='ag_1234567890',
    activate=[
        {'evaluator_version_id': 'ev_1234567890'},
        {'evaluator_id': 'ev_2345678901', 'environment_id': 'env_1234567890'},
    ],
    deactivate=[{'evaluator_version_id': 'ev_0987654321'}],
)
```
-
-
-
id:
str
-
activate:
typing.Optional[typing.Sequence[EvaluatorActivationDeactivationRequestActivateItemParams]]β Evaluators to activate for Monitoring. These will be automatically run on new Logs.
-
deactivate:
typing.Optional[typing.Sequence[EvaluatorActivationDeactivationRequestDeactivateItemParams]]β Evaluators to deactivate. These will not be run on new Logs.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.serialize(...)
-
-
-
Serialize an Agent to the `.agent` file format.
Useful for storing the Agent with your code in a version control system, or for editing with an AI tool.
By default, the deployed version of the Agent is returned. Use the query parameters `version_id` or `environment` to target a specific version of the Agent.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.agents.serialize(
    id='id',
)
```
-
-
-
id:
strβ Unique identifier for Agent.
-
version_id:
typing.Optional[str]β A specific Version ID of the Agent to retrieve.
-
environment:
typing.Optional[str]β Name of the Environment to retrieve a deployed Version from.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.agents.deserialize(...)
-
-
-
Deserialize an Agent from the `.agent` file format.
This returns a subset of the attributes required by an Agent. This subset is the part that defines the Agent version (e.g. `model`, `temperature`, etc.).
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.agents.deserialize(
    agent='agent',
)
```
-
-
-
agent:
str
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.directories.list()
-
-
-
Retrieve a list of all Directories.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.list()
```
-
-
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.directories.create(...)
-
-
-
Creates a Directory.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.create()
```
-
-
-
name:
typing.Optional[str]β Name of the directory to create.
-
parent_id:
typing.Optional[str]β ID of the parent directory. Starts with `dir_`.
-
path:
typing.Optional[str]β Path to create the directory in, relative to the root directory. If the path does not exist, it will be created. Includes name, e.g. `path/to/directory`.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.directories.get(...)
-
-
-
Fetches a directory by ID.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.get(
    id='id',
)
```
-
-
-
id:
strβ String ID of directory. Starts with `dir_`.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.directories.delete(...)
-
-
-
Delete the Directory with the given ID.
The Directory must be empty (i.e. contain no Directories or Files).
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.delete(
    id='id',
)
```
-
-
-
id:
strβ Unique identifier for Directory. Starts with `dir_`.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.directories.update(...)
-
-
-
Update the Directory with the given ID.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.update(
    id='id',
)
```
-
-
-
id:
strβ Unique identifier for Directory. Starts with `dir_`.
-
name:
typing.Optional[str]β Name to set for the directory.
-
parent_id:
typing.Optional[str]β ID of the parent directory. Specify this to move directories. Starts with `dir_`.
-
path:
typing.Optional[str]β Path to move the directory to, relative to the root directory. Specify this to move directories. Includes name, e.g. `path/to/directory`.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.files.list_files(...)
-
-
-
Get a paginated list of files.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.files.list_files()
```
-
-
-
page:
typing.Optional[int]β Page offset for pagination.
-
size:
typing.Optional[int]β Page size for pagination. Number of files to fetch.
-
name:
typing.Optional[str]β Case-insensitive filter for file name.
-
path:
typing.Optional[str]β Path of the directory to filter for. Returns files in this directory and all its subdirectories.
-
template:
typing.Optional[bool]β Filter to include only template files.
-
type:
typing.Optional[typing.Union[FileType, typing.Sequence[FileType]]]β List of file types to filter for.
-
environment:
typing.Optional[str]β Case-sensitive filter for files with a deployment in the specified environment. Requires the environment name.
-
sort_by:
typing.Optional[FileSortBy]β Field to sort files by.
-
order:
typing.Optional[SortOrder]β Direction to sort by.
-
include_raw_file_content:
typing.Optional[bool]β Whether to include the raw file content in the response. Currently only supported for Agents and Prompts.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.files.retrieve_by_path(...)
-
-
-
Retrieve a File by path.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.files.retrieve_by_path(
    path='path',
)
```
-
-
-
path:
strβ Path of the File to retrieve.
-
environment:
typing.Optional[str]β Name of the Environment to retrieve a deployed Version from.
-
include_raw_file_content:
typing.Optional[bool]β Whether to include the raw file content in the response. Currently only supported for Agents and Prompts.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluations.list(...)
-
-
-
Retrieve a list of Evaluations for the specified File.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.evaluations.list(
    file_id='pr_30gco7dx6JDq4200GVOHa',
    size=1,
)
for item in response:
    print(item)

# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)
```
-
-
-
file_id:
strβ Filter by File ID. Only Evaluations for the specified File will be returned.
-
page:
typing.Optional[int]β Page number for pagination.
-
size:
typing.Optional[int]β Page size for pagination. Number of Evaluations to fetch.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluations.create(...)
-
-
-
Create an Evaluation.
Create a new Evaluation by specifying the File to evaluate, and a name for the Evaluation. You can then add Runs to this Evaluation using the `POST /evaluations/{id}/runs` endpoint.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.create(
    evaluators=[{'version_id': 'version_id'}],
)
```
-
-
-
evaluators:
typing.Sequence[CreateEvaluationRequestEvaluatorsItemParams]β The Evaluators used to evaluate.
-
file:
typing.Optional[FileRequestParams]β The File to associate with the Evaluation. This File contains the Logs you're evaluating.
-
name:
typing.Optional[str]β Name of the Evaluation to help identify it. Must be unique within the associated File.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluations.add_evaluators(...)
-
-
-
Add Evaluators to an Evaluation.
The Evaluators will be run on the Logs generated for the Evaluation.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.add_evaluators(
    id='id',
    evaluators=[{'version_id': 'version_id'}],
)
```
-
-
-
id:
strβ Unique identifier for Evaluation.
-
evaluators:
typing.Sequence[AddEvaluatorsRequestEvaluatorsItemParams]β The Evaluators to add to this Evaluation.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluations.remove_evaluator(...)
-
-
-
Remove an Evaluator from an Evaluation.
The Evaluator will no longer be run on the Logs in the Evaluation.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.remove_evaluator(
    id='id',
    evaluator_version_id='evaluator_version_id',
)
```
-
-
-
id:
strβ Unique identifier for Evaluation.
-
evaluator_version_id:
strβ Unique identifier for Evaluator Version.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluations.get(...)
-
-
-
Get an Evaluation.
This includes the Evaluators associated with the Evaluation and metadata about the Evaluation, such as its name.
To get the Runs associated with the Evaluation, use the `GET /evaluations/{id}/runs` endpoint. To retrieve stats for the Evaluation, use the `GET /evaluations/{id}/stats` endpoint.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.get(
    id='ev_567yza',
)
```
-
-
-
id:
strβ Unique identifier for Evaluation.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluations.delete(...)
-
-
-
Delete an Evaluation.
The Runs and Evaluators in the Evaluation will not be deleted.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.delete(
    id='ev_567yza',
)
```
-
-
-
id:
strβ Unique identifier for Evaluation.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluations.list_runs_for_evaluation(...)
-
-
-
List all Runs for an Evaluation.
-
-
-
```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.list_runs_for_evaluation(
    id='id',
)
```
-
-
-
id:
strβ Unique identifier for Evaluation.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluations.create_run(...)
-
-
-
Create an Evaluation Run.
Optionally specify the Dataset and version to be evaluated.
Humanloop will automatically start generating Logs and running Evaluators where
orchestrated=true. If you are generating Logs yourself, you can set orchestrated=false and then generate and submit the required Logs via the API. If
dataset and version are provided, you can set use_existing_logs=True to reuse existing Logs, avoiding generating new Logs unnecessarily. Logs that are associated with the specified Version and have source_datapoint_id referencing a datapoint in the specified Dataset will be associated with the Run. To keep updated on the progress of the Run, you can poll the Run using the
GET /evaluations/{id}/runs endpoint and check its status.
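The polling step described above can be sketched as a small helper. This is a sketch under stated assumptions, not a confirmed implementation: the `runs` attribute on the list response, the `id`/`status` attributes on each Run, and the terminal status strings are all assumptions not spelled out in this reference.

```python
import time


def wait_for_run(client, evaluation_id, run_id, poll_interval=5.0, timeout=600.0):
    """Poll an Evaluation's Runs until the given Run reaches a terminal status."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        response = client.evaluations.list_runs_for_evaluation(id=evaluation_id)
        # Assumption: the response exposes the Runs as a `runs` list, and each
        # Run has `id` and `status` attributes with these terminal values.
        for run in response.runs:
            if run.id == run_id and run.status in ("completed", "cancelled"):
                return run
        time.sleep(poll_interval)
    raise TimeoutError(f"Run {run_id} did not finish within {timeout}s")
```

The helper returns the finished Run object so the caller can go on to fetch stats for the Evaluation.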
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluations.create_run(id='id')
-
-
-
id:
str – Unique identifier for Evaluation.
-
dataset:
typing.Optional[CreateRunRequestDatasetParams] – Dataset to use in this Run.
-
version:
typing.Optional[CreateRunRequestVersionParams] – Version to use in this Run.
-
orchestrated:
typing.Optional[bool] – Whether the Run is orchestrated by Humanloop. If True, Humanloop will generate Logs for the Run; dataset and version must be provided. If False, a Log for the Prompt/Tool should be submitted by the user via the API.
-
use_existing_logs:
typing.Optional[bool] – If True, the Run will be initialized with existing Logs associated with the Dataset and Version. If False, the Run will be initialized with no Logs. Can only be set to True when both dataset and version are provided.
-
request_options:
typing.Optional[RequestOptions] – Request-specific configuration.
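For the orchestrated=False path, the workflow of creating a Run and then submitting your own Logs can be sketched as below. This is a minimal sketch, not the definitive implementation: the `run.id` attribute on the returned Run is an assumption, and the Log IDs are placeholders produced by your own logging pipeline.

```python
def create_external_run(client, evaluation_id, log_ids):
    """Create a non-orchestrated Run and attach caller-generated Logs to it."""
    # Create a Run that we populate ourselves rather than having
    # Humanloop generate Logs (orchestrated=False).
    run = client.evaluations.create_run(id=evaluation_id, orchestrated=False)
    # Attach the Logs we generated and submitted via the API.
    # Assumption: the returned Run exposes its identifier as `run.id`.
    client.evaluations.add_logs_to_run(
        id=evaluation_id,
        run_id=run.id,
        log_ids=log_ids,
    )
    return run
```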
-
-
client.evaluations.add_existing_run(...)
-
-
-
Add an existing Run to the specified Evaluation.
This is useful if you want to compare the Runs in this Evaluation with an existing Run that exists within another Evaluation.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluations.add_existing_run(id='id', run_id='run_id')
-
-
-
id:
str – Unique identifier for Evaluation.
-
run_id:
str – Unique identifier for Run.
-
request_options:
typing.Optional[RequestOptions] – Request-specific configuration.
-
-
client.evaluations.remove_run(...)
-
-
-
Remove a Run from an Evaluation.
The Logs and Versions used in the Run will not be deleted. If this Run is used in any other Evaluations, it will still be available in those Evaluations.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluations.remove_run(id='id', run_id='run_id')
-
-
-
id:
str – Unique identifier for Evaluation.
-
run_id:
str – Unique identifier for Run.
-
request_options:
typing.Optional[RequestOptions] – Request-specific configuration.
-
-
client.evaluations.update_evaluation_run(...)
-
-
-
Update an Evaluation Run.
Specify
control=true to use this Run as the control Run for the Evaluation. You can cancel a running or pending Run, or mark a Run that uses external or human Evaluators as completed.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluations.update_evaluation_run(id='id', run_id='run_id')
-
-
-
id:
str – Unique identifier for Evaluation.
-
run_id:
str – Unique identifier for Run.
-
control:
typing.Optional[bool] – If True, this Run will be used as the control in the Evaluation. Stats for other Runs will be compared to this Run. This will replace any existing control Run.
-
status:
typing.Optional[EvaluationStatus] – Used to set the Run to cancelled or completed. Can only be used if the Run is currently pending or running.
-
request_options:
typing.Optional[RequestOptions] – Request-specific configuration.
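For a Run that uses external or human Evaluators, completion can be signalled as sketched below. One hedged assumption: the status is passed as the literal string "completed" rather than an EvaluationStatus enum member, which Fern-generated SDKs typically accept but this reference does not confirm.

```python
def finish_external_run(client, evaluation_id, run_id):
    """Mark a pending/running Run with external Evaluators as completed."""
    # Assumption: the string "completed" is an accepted EvaluationStatus value.
    return client.evaluations.update_evaluation_run(
        id=evaluation_id,
        run_id=run_id,
        status="completed",
    )
```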
-
-
client.evaluations.add_logs_to_run(...)
-
-
-
Add the specified Logs to a Run.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluations.add_logs_to_run(
    id='id',
    run_id='run_id',
    log_ids=['log_ids'],
)
-
-
-
id:
str – Unique identifier for Evaluation.
-
run_id:
str – Unique identifier for Run.
-
log_ids:
typing.Sequence[str] – The IDs of the Logs to add to the Run.
-
request_options:
typing.Optional[RequestOptions] – Request-specific configuration.
-
-
client.evaluations.get_stats(...)
-
-
-
Get Evaluation Stats.
Retrieve aggregate stats for the specified Evaluation. This includes the number of generated Logs for each Run and the corresponding Evaluator statistics (such as the mean and percentiles).
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluations.get_stats(id='id')
-
-
-
id:
str – Unique identifier for Evaluation.
-
request_options:
typing.Optional[RequestOptions] – Request-specific configuration.
-
-
client.evaluations.get_logs(...)
-
-
-
Get the Logs associated with a specific Evaluation.
This returns the Logs associated with all Runs within the Evaluation.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.evaluations.get_logs(id='id')
-
-
-
id:
str – String ID of the Evaluation. Starts with ev_ or evr_.
-
page:
typing.Optional[int] – Page number for pagination.
-
size:
typing.Optional[int] – Page size for pagination. Number of Logs to fetch.
-
run_id:
typing.Optional[typing.Union[str, typing.Sequence[str]]] – Filter by Run IDs. Only Logs for the specified Runs will be returned.
-
request_options:
typing.Optional[RequestOptions] – Request-specific configuration.
-
-
client.logs.list(...)
-
-
-
List all Logs for the given filter criteria.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
response = client.logs.list(file_id='file_123abc', size=1)
for item in response:
    print(item)

# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)
-
-
-
file_id:
str – Unique identifier for the File to list Logs for.
-
page:
typing.Optional[int] – Page number for pagination.
-
size:
typing.Optional[int] – Page size for pagination. Number of Logs to fetch.
-
version_id:
typing.Optional[str] – If provided, only Logs belonging to the specified Version will be returned.
-
id:
typing.Optional[typing.Union[str, typing.Sequence[str]]] – If provided, returns Logs whose IDs contain any of the specified values as substrings.
-
search:
typing.Optional[str] – If provided, only Logs that contain the provided string in their inputs or output will be returned.
-
metadata_search:
typing.Optional[str] – If provided, only Logs that contain the provided string in their metadata will be returned.
-
start_date:
typing.Optional[dt.datetime] – If provided, only Logs created after the specified date will be returned.
-
end_date:
typing.Optional[dt.datetime] – If provided, only Logs created before the specified date will be returned.
-
include_parent:
typing.Optional[bool] – If true, include the full parent Log in the response. Only applicable when retrieving Evaluator Logs.
-
in_trace_filter:
typing.Optional[typing.Union[bool, typing.Sequence[bool]]] – If true, return Logs that are associated with a Trace; if false, return Logs that are not associated with a Trace.
-
sample:
typing.Optional[int] – If provided, limit the response to a random subset of Logs from the filtered results. (This will be an approximate sample, not a strict limit.)
-
include_trace_children:
typing.Optional[bool] – If true, populate trace_children for the retrieved Logs. Only applicable when retrieving Flow or Agent Logs.
-
request_options:
typing.Optional[RequestOptions] – Request-specific configuration.
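The date filters above combine naturally with pagination. A minimal sketch, assuming the paginated response is iterable across pages (as the example earlier in this section suggests); the helper name and its defaults are illustrative only.

```python
import datetime as dt


def recent_logs(client, file_id, hours=24, page_size=50):
    """Collect all Logs for a File created in the last `hours` hours."""
    since = dt.datetime.now(dt.timezone.utc) - dt.timedelta(hours=hours)
    response = client.logs.list(file_id=file_id, start_date=since, size=page_size)
    # Assumption: iterating the response transparently walks every page,
    # yielding individual Logs.
    return list(response)
```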
-
-
client.logs.delete(...)
-
-
-
Delete Logs with the given IDs.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.logs.delete(id='prv_Wu6zx1lAWJRqOyL8nWuZk')
-
-
-
id:
typing.Optional[typing.Union[str, typing.Sequence[str]]] – Unique identifiers for the Logs to delete.
-
request_options:
typing.Optional[RequestOptions] – Request-specific configuration.
-
-
client.logs.get(...)
-
-
-
Retrieve the Log with the given ID.
-
-
-
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")
client.logs.get(id='prv_Wu6zx1lAWJRqOyL8nWuZk')
-
-
-
id:
str – Unique identifier for Log.
-
request_options:
typing.Optional[RequestOptions] – Request-specific configuration.
-
-