
feat: implement native tool calling in LiteAgent#4659

Closed
joaomdmoura wants to merge 5 commits into main from
joaomdmoura/add-native-tool-calling-to-lite-agent

Conversation


@joaomdmoura joaomdmoura commented Mar 1, 2026

  • Added support for native function calling in the LiteAgent class, allowing it to use the LLM's built-in capabilities for structured tool calls.
  • Introduced a new execution mode that decides, based on LLM capabilities, whether to use native tools or fall back to the ReAct text pattern.
  • Updated system prompts to accommodate the new native-tools functionality.
  • Enhanced the agent's invocation loop to handle native tool calls, improving overall performance and response accuracy.

Note

Medium Risk
Adds a new LiteAgent execution path that bypasses ReAct parsing and executes provider-native tool calls (including optional parallelism), which can change tool invocation order and message history shape. Also adjusts LLM hook handling to preserve tool-call lists, affecting hook/response behavior across executors.

Overview
LiteAgent can now run in a native function-calling mode when the underlying LLM reports supports_function_calling() and tools are present, falling back to the existing ReAct text loop otherwise.

In native mode, LiteAgent sends OpenAI-style tool schemas to the LLM, detects tool-call responses across multiple provider formats, executes the calls (optionally in parallel when safe), appends the assistant tool_calls turn plus the corresponding tool messages, and continues with a post-tool reasoning prompt. It also supports early return for result_as_answer tools and respects tool usage limits and caching.
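The message-history shape described above can be sketched as follows. This is a hypothetical helper illustrating the OpenAI tool-message format (assistant turn carrying tool_calls, then one "tool" message per executed call, linked by tool_call_id); it is not the PR's actual implementation:

```python
import json


def append_tool_turn(messages, tool_call_id, name, args, result):
    # Assistant turn that carries the structured tool call the LLM made.
    messages.append({
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": tool_call_id,
            "type": "function",
            "function": {"name": name, "arguments": json.dumps(args)},
        }],
    })
    # Matching tool-result message, keyed back to the call by its id.
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call_id,
        "content": str(result),
    })
    return messages
```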

System prompting is updated to use a new lite_agent_system_prompt_native_tools slice (no ReAct formatting instructions), and _setup_after_llm_call_hooks is updated to accept list responses and not stringify tool-call lists so native tool handling continues to work with hooks enabled. Extensive unit tests cover mode detection, prompt selection, tool execution/parallel batches, usage counters/limits, hook regression, and duplicate tool-name deduping.
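The hook fix can be illustrated with a small sketch (function name hypothetical): string answers may still be post-processed, but a list of tool-call objects must pass through untouched so native handling keeps working with hooks enabled:

```python
def normalize_hook_answer(answer):
    # List answers are native tool-call lists: return them unchanged
    # rather than stringifying, which would break tool execution.
    if isinstance(answer, list):
        return answer
    # Anything else is coerced to text as before.
    return str(answer)
```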

Written by Cursor Bugbot for commit d2a5256. This will update automatically on new commits.

…ll hooks

- Simplified the conversion of tools to OpenAI schema by removing redundant mapping.
- Updated the after-LLM call hooks to support list-type answers, ensuring native tool calls are returned unchanged.
- Added tests to verify correct usage count for native tools and ensure proper handling of duplicate tool names.
- Enhanced existing tests to confirm functionality with after-LLM hooks active, addressing previous issues with tool call processing.
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>

@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 2 potential issues.

Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.

```python
self.tools_results.append({
    "result": result,
    "tool_name": func_name,
    "tool_args": args_dict,
})
```


Shared mutable state accessed from thread pool workers

Medium Severity

_execute_native_tool_call appends to self.tools_results (a plain list) at the end of the method, but in the parallel path this method is submitted to a ThreadPoolExecutor. list.append is only incidentally atomic under CPython's GIL and isn't safe under free-threaded Python (PEP 703). The result ordering in tools_results is also non-deterministic across runs.
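One way to address this, sketched below under the assumption that results are collected from thread-pool workers: guard the shared list with an explicit lock instead of relying on CPython's GIL making list.append atomic. Class and method names here are hypothetical:

```python
import threading
from concurrent.futures import ThreadPoolExecutor


class ToolResults:
    """Thread-safe accumulator for tool results from parallel workers."""

    def __init__(self):
        self._lock = threading.Lock()
        self._items = []

    def append(self, item):
        # Explicit lock: safe under free-threaded Python (PEP 703) too.
        with self._lock:
            self._items.append(item)

    def snapshot(self):
        with self._lock:
            return list(self._items)


results = ToolResults()
with ThreadPoolExecutor(max_workers=4) as pool:
    for i in range(8):
        pool.submit(results.append, {"tool_name": f"tool_{i}"})
```

Note the lock fixes the data race but not the ordering: results still land in completion order, so callers needing a deterministic order should sort by a call index recorded with each item.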

Additional Locations (1)


```json
"conversation_history_instruction": "You are a member of a crew collaborating to achieve a common goal. Your task is a specific action that contributes to this larger objective. For additional context, please review the conversation history between you and the user that led to the initiation of this crew. Use any relevant information or feedback from the conversation to inform your task execution and ensure your response aligns with both the immediate task and the crew's overall goals.",
"feedback_instructions": "User feedback: {feedback}\nInstructions: Use this feedback to enhance the next output iteration.\nNote: Do not respond or add commentary.",
"lite_agent_system_prompt_with_tools": "You are {role}. {backstory}\nYour personal goal is: {goal}\n\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce all necessary information is gathered, return the following format:\n\n```\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n```",
"lite_agent_system_prompt_native_tools": "You are {role}. {backstory}\nYour personal goal is: {goal}",
```


Native tools prompt references ReAct "Final Answer" concept

Low Severity

The post_tool_reasoning prompt used after native tool calls tells the LLM to "provide the Final Answer," which is a ReAct-specific text pattern. In native tool calling mode, there's no parser looking for "Final Answer:" — the LLM may emit it literally as part of its text response, producing user-visible artifacts like "Final Answer: 42" instead of just "42."
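A possible rewording, sketched here with a hypothetical constant name (the actual post_tool_reasoning slice text is not shown in this excerpt): in native mode nothing parses for "Final Answer:", so the prompt should ask for a direct answer rather than the ReAct sentinel.

```python
# Hypothetical native-mode replacement for the post-tool reasoning
# prompt: no ReAct-style "Final Answer:" prefix is requested, so the
# model's text response can be surfaced to the user as-is.
POST_TOOL_REASONING_NATIVE = (
    "You have received the tool results above. "
    "Using them, respond to the original request directly, "
    "without any special prefixes or formatting markers."
)
```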

Additional Locations (1)


@joaomdmoura joaomdmoura closed this Mar 1, 2026