
Agent

v1.3.3 (Updated)

The Agent is the primary operator for sending prompts to large language models and receiving responses. It manages the full lifecycle of an LLM interaction: assembling conversations from input data, injecting system messages and context, sending API requests (with optional streaming), executing tool calls when the model requests them, and delivering final responses through multiple output formats.

  • Multi-provider LLM access through LiteLLM (OpenRouter, OpenAI, Groq, Ollama, Gemini, Anthropic, LM Studio, and custom endpoints)
  • Streaming responses with real-time table updates
  • Tool calling with multi-turn budget control and parallel execution
  • Structured output with JSON schema validation
  • Vision support via direct TOP image input or Context Grabber
  • Audio file input for multimodal models
  • Thinking tag filtering for reasoning models
  • Prompt caching for cost reduction on supported providers
  • Reasoning effort control for compatible models
  • CHOP channel outputs for monitoring agent state, tool metrics, and callback events
🔧 GetTool Enabled 0 tools

The Agent does not expose tools itself. Instead, it discovers and calls tools from other LOP operators connected to its Tool sequence.

The Agent acts as the tool caller, not the tool provider. Any LOP with a GetTool() method can be wired into the Agent’s External Op Tools sequence on the Tools page. The Agent collects all available tools at call time, sends their schemas to the model, and routes tool calls back to the owning operator for execution.

Both single-tool operators (like Tool DAT or Search) and multi-tool providers (like MCP Client, which can expose dozens of tools from a single connection) are supported.
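As an illustrative sketch of that contract (the function and field names here are assumptions, not the actual LOP API), a single-tool provider might pair an OpenAI-style function schema with a handler the Agent can route tool calls back to:

```python
# Hypothetical sketch of a GetTool-style provider -- the real LOP API may
# differ. A provider returns an OpenAI-style function schema plus a callable
# that the owning operator executes when the model requests the tool.

def get_tool():
    """Return (schema, handler) for one tool this operator exposes."""
    schema = {
        "type": "function",
        "function": {
            "name": "search_web",
            "description": "Search the web and return top results.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }

    def handler(arguments):
        # The owning operator executes the call and returns a result string.
        return f"results for: {arguments['query']}"

    return schema, handler
```

At call time the Agent would collect every such schema, send them to the model, and dispatch each requested call to the matching handler.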

  • Input 1 (DAT): A conversation table with columns role, message, id, timestamp. Each row represents a message in the conversation. The agent reads this table when Call Agent is pulsed. When Call on in1 Table Change is enabled, the agent automatically triggers whenever this input table updates.
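As a rough sketch (plain Python, outside TouchDesigner), the table's rows map onto the chat-message list sent to the model:

```python
# Sketch: how a conversation table with columns role, message, id, timestamp
# maps to the message list sent to the model. The column layout follows the
# docs above; the conversion itself is illustrative.
import time
import uuid

def make_row(role, message):
    """Build one conversation-table row: [role, message, id, timestamp]."""
    return [role, message, uuid.uuid4().hex[:8], str(int(time.time()))]

def table_to_messages(rows):
    """Convert table rows (header excluded) to chat messages."""
    return [{"role": r[0], "content": r[1]} for r in rows]

rows = [make_row("user", "Describe this network in one sentence.")]
messages = table_to_messages(rows)
```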

The Agent has 4 outputs:

  • Output 1: The conversation_dat table — the full conversation including the assistant’s response appended after each call
  • Output 2: The output_dat text DAT — the latest assistant response text (driven by the internal turn table formatter)
  • Output 3: The history_table — a running log of every API call with model, tokens, timing, and response metadata
  • Output 4: The turn_table — per-turn data collection showing the sequence of streaming chunks, tool calls, tool results, and responses within a single agent turn

To get a first response:

  1. Place an Agent LOP in your network.
  2. Create a Table DAT with columns role, message, id, timestamp.
  3. Add a row with role user and your prompt in the message column.
  4. Wire the Table DAT into the Agent’s first input.
  5. On the Agent page, pulse Call Agent.
  6. The response appears on output 1 (conversation table) and output 2 (response text).

The Agent supports three model selection modes, configured with Use Model From on the Model page:

  • ChatTD (default): Uses the global model and API server configured in ChatTD. This is the simplest option and shares settings across all operators.
  • Custom Model: Select a specific API Server and AI Model directly on the Agent’s Model page. Use the Search toggle and Model Search field to filter long model lists.
  • Controller: Point the Controller parameter to another operator that provides model selection (useful for centralized model management across multiple agents).

To stream responses:

  1. On the Agent page, enable Use Streaming.
  2. Optionally enable Update Table When Streaming to see the conversation table update in real time as chunks arrive.
  3. Pulse Call Agent. The response text on output 2 updates progressively as the model generates tokens.

To set up tool calling:

  1. On the Tools page, enable Use LOP Tools.
  2. In the External Op Tools sequence, add a row and drag a tool-providing operator (such as a Tool DAT, Search LOP, or MCP Client) into the OP field.
  3. Set the tool’s Active state:
    • enabled: The model can choose to use the tool when appropriate.
    • forced: The model must use this tool on the next call.
    • disabled: The tool is ignored for this call.
  4. Set Tool Turn Budget to control how many rounds of tool calling are allowed before the agent must produce a final response. A budget of 1 means one round of tools followed by a response. Higher budgets allow the model to call tools, see results, and call more tools iteratively.
  5. Enable Tool Follow-up Response (on by default) so the agent makes a follow-up API call after tool execution to produce a natural language summary. When disabled, the agent stops after executing tools without generating a response.
  6. Enable Parallel Tool Calls if you want the model to request multiple tools simultaneously (when the provider supports it). Tools execute concurrently for faster results.
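The budget behavior in steps 4-6 can be sketched as a loop (an illustration with stand-in functions, not the Agent's actual implementation):

```python
# Illustrative sketch of the tool-turn budget loop. `call_model` and
# `execute_tools` are stand-ins for the real API call and tool dispatch.

def run_agent(call_model, execute_tools, budget=1, follow_up=True):
    """Run tool rounds until the model answers or the budget is exhausted."""
    turns_used = 0
    response = call_model(tools_allowed=True)
    while response.get("tool_calls") and turns_used < budget:
        turns_used += 1
        results = execute_tools(response["tool_calls"])
        if not follow_up:
            # Stop after executing tools, without a final response.
            return {"content": None, "tool_calls": None, "tool_results": results}
        # Once the budget is spent, disallow further tool calls so the
        # model must produce a final text response.
        response = call_model(tools_allowed=turns_used < budget,
                              tool_results=results)
    return response

# Demo with stub functions: one tool round, then a final answer.
calls = {"n": 0}

def call_model(tools_allowed=True, tool_results=None):
    calls["n"] += 1
    if calls["n"] == 1 and tools_allowed:
        return {"content": None, "tool_calls": [{"id": "1", "name": "search"}]}
    return {"content": "final answer", "tool_calls": None}

def execute_tools(tool_calls):
    return [f"result of {c['name']}" for c in tool_calls]

result = run_agent(call_model, execute_tools, budget=1)
```

With a budget of 1 this makes exactly two API calls: one that requests tools and one follow-up that produces the final response.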

On the Context page:

  • Context Op: Wire a Context Grabber operator to inject additional context (text, images, files) into the conversation before sending to the model.
  • Send TOP Image: Enable this and set the TOP Image parameter to directly send a TouchDesigner TOP as an image with your prompt. The model must support vision.
  • Use Audio: Enable and set Audio File to send an audio file alongside the prompt for multimodal models that accept audio input.

To request structured output:

  1. On the I/O page, enable Structured Output.
  2. Create a Text DAT containing a valid JSON schema and wire it to the Schema DAT parameter.
  3. The Agent will instruct the model to return responses conforming to your schema, using strict mode with the OpenAI response format specification.
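For example, here is a minimal schema you might paste into the Schema DAT, together with the OpenAI-style strict response_format it would be wrapped in (the field names are illustrative):

```python
# Example JSON schema for the Schema DAT, plus the OpenAI-style strict
# response_format wrapper. The "sentiment" fields are illustrative.
import json

schema_text = """
{
  "type": "object",
  "properties": {
    "sentiment": {"type": "string", "enum": ["positive", "neutral", "negative"]},
    "score": {"type": "number"}
  },
  "required": ["sentiment", "score"],
  "additionalProperties": false
}
"""

schema = json.loads(schema_text)
response_format = {
    "type": "json_schema",
    "json_schema": {"name": "sentiment", "strict": True, "schema": schema},
}
```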

This is useful for extracting structured data from LLM responses — for example, parsing sentiment scores, extracting entities, or generating configuration objects.

Some reasoning models wrap their internal thought process in tags like <think>...</think>. On the I/O page:

  1. Set Thinking Filter Mode to control where filtering applies:
    • Filter Conversation & Display: Removes thinking tags from both the stored conversation and the display output.
    • Filter Conversation Only: Removes from the conversation history but keeps in display.
    • Filter Display (out2): Keeps in conversation but removes from the display output.
  2. Customize Thinking Phrases if your model uses different delimiters (comma-separated start and end tags).
  3. Optionally set Thinking Replacement Text to substitute filtered content.
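A minimal sketch of the filtering behavior, assuming simple start/end delimiters (the Agent's own implementation may differ):

```python
# Sketch of thinking-tag filtering with the default <think>,</think> phrases.
# Mirrors the documented behavior: remove tagged spans, optionally
# substituting replacement text.
import re

def filter_thinking(text, start="<think>", end="</think>", replacement=""):
    """Remove start..end spans (tags included) from text."""
    pattern = re.escape(start) + r".*?" + re.escape(end)
    return re.sub(pattern, replacement, text, flags=re.DOTALL)

raw = "<think>step by step...</think>The answer is 42."
clean = filter_thinking(raw)
```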

On the I/O page, the Output Mode parameter controls how the agent delivers its response:

  • conversation: Standard mode. The response is appended to the conversation table on output 1.
  • table: Response is formatted into a table structure.
  • parameter: Response is written to a parameter.
  • custom: For advanced use cases with custom response handling.

The Assign Perspective setting on the I/O page controls how input message roles are interpreted:

  • user (default): Input roles are passed through as-is.
  • assistant: Swaps user/assistant roles, useful when the agent should continue from the assistant’s perspective.
  • third_party: Concatenates all input messages into a single user message.
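The three modes can be sketched as follows (illustrative, not the Agent's internal code):

```python
# Sketch of the three Assign Perspective behaviors described above.

def apply_perspective(messages, mode="user"):
    if mode == "user":
        return list(messages)  # pass roles through as-is
    if mode == "assistant":
        # Swap user/assistant so the agent continues from the other side.
        swap = {"user": "assistant", "assistant": "user"}
        return [{**m, "role": swap.get(m["role"], m["role"])} for m in messages]
    if mode == "third_party":
        # Collapse all input into a single user message.
        combined = "\n".join(m["content"] for m in messages)
        return [{"role": "user", "content": combined}]
    raise ValueError(f"unknown mode: {mode}")

msgs = [{"role": "user", "content": "hi"},
        {"role": "assistant", "content": "hello"}]
```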

The Agent exposes its internal state as CHOP channels through an internal Script CHOP. These channels include callback events (on_task_start, on_task_complete, on_tool_call, on_task_error), agent state (agent_active, agent_streaming, task_idle, task_responding, etc.), tool metrics (tool_turns_used, tool_turn_budget, total_available_tools), and token counts (prompt_tokens, completion_tokens, total_tokens). Connect downstream CHOPs to monitor agent activity in real time.

Beyond the standard onTaskStart and onTaskComplete, the Agent fires two additional callbacks:

onToolCall: Fires when the model requests tool execution, before the tools actually run. The info dict contains a tool_calls list with each tool’s id, name, and arguments. Use this to log, filter, or intercept tool calls before execution.

onTaskError: Fires when an API call fails or tool execution encounters an error. The info dict contains an error object with type, message, code, and model fields. The Agent automatically formats common LiteLLM errors (authentication failures, rate limits, context window exceeded, service unavailable) into user-friendly messages.
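A sketch of how such formatting might look (the mapping text is illustrative, not the Agent's exact wording):

```python
# Sketch: mapping common LiteLLM error types to user-friendly messages,
# using the documented error fields (type, message, code, model).

FRIENDLY = {
    "AuthenticationError": "Authentication failed: check your API key.",
    "RateLimitError": "Rate limit hit: wait a moment and retry.",
    "ContextWindowExceededError": "Conversation too long for this model.",
    "ServiceUnavailableError": "Provider temporarily unavailable.",
}

def format_error(error):
    """Build a display string from an error dict, falling back to the raw message."""
    base = FRIENDLY.get(error["type"], error["message"])
    return f"[{error.get('model', 'unknown')}] {base}"

err = {"type": "RateLimitError", "message": "429", "code": 429, "model": "gpt-4o"}
```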

Tips:

  • Start with ChatTD model selection for quick setup, then switch to Custom Model when you need per-agent model control.
  • Set Max Tokens appropriately on the Model page. The default of 256 is conservative — increase it for longer responses.
  • Use Tool Turn Budget wisely. A budget of 1 is sufficient for most single-tool workflows. Increase to 3-5 for complex agentic tasks where the model needs to gather information iteratively.
  • Enable Prompt Caching on the I/O page when making repeated calls with similar conversation history. This reduces costs significantly on providers like Anthropic.
  • Use the Chain ID parameter when integrating with orchestration systems. Setting a consistent Chain ID groups related API calls together for tracing and analytics.
  • Monitor with CHOP channels rather than polling parameters. The CHOP output provides reactive state updates at 60fps.

Troubleshooting:

  • “Duplicate tool name detected”: Two operators in the Tool sequence expose tools with the same name. Remove one operator or rename the tools so they are unique.
  • Tool calls not executing: Verify Use LOP Tools is enabled on the Tools page and that tool operators are wired into the External Op Tools sequence with their Active state set to enabled or forced.
  • Empty responses: Check that Max Tokens on the Model page is set high enough. Very low values can cause truncated or empty responses.
  • Rate limit errors: The Agent surfaces provider-specific rate limit messages. Wait a moment and retry, or switch to a different provider/model.
  • Model not supporting images: If you see “content must be a string” errors, the selected model does not support multimodal input. Switch to a vision-capable model.
  • Streaming interruptions: Mid-stream errors are automatically reported. Check your network connection and the provider’s service status.
  • Tool budget exhausted: If the model keeps requesting tools but the budget is used up, increase Tool Turn Budget or simplify the task so fewer tool rounds are needed.
Call Agent (Call) op('agent').par.Call Pulse
Default:
False
Call on in1 Table Change (Onin1) op('agent').par.Onin1 Toggle
Default:
False
Use Streaming (Streaming) op('agent').par.Streaming Toggle

When enabled, responses are delivered in chunks as they are generated

Default:
False
Update Table When Streaming (Streamingupdatetable) op('agent').par.Streamingupdatetable Toggle

When enabled, conversation table is updated as streaming chunks arrive

Default:
False
Current Task (Taskcurrent) op('agent').par.Taskcurrent Str
Default:
"" (Empty String)
Timer (Timer) op('agent').par.Timer Float
Default:
0.0
Range:
0 to 1
Slider Range:
0 to 0
Active (Active) op('agent').par.Active Toggle
Default:
False
Cancel Current (Cancelcall) op('agent').par.Cancelcall Pulse

Cancel any currently active API call or tool execution

Default:
False
Agent Role Definition / Info Header
System Message DAT (Systemmessagedat) op('agent').par.Systemmessagedat OP
Default:
./system_message
System Message (Last) (Displaysysmess) op('agent').par.Displaysysmess Str
Default:
"" (Empty String)
Edit System Message (Editsysmess) op('agent').par.Editsysmess Pulse
Default:
False
Use System Message (Usesystemmessage) op('agent').par.Usesystemmessage Toggle

When disabled, system messages are not sent to the model (for models that do not support system messages)

Default:
False
Chain ID (Chainid) op('agent').par.Chainid Str

Chain ID for tracking calls in orchestration systems. When set, this will be used instead of auto-generating one.

Default:
"" (Empty String)
Header
Output Settings Header
Max Tokens (Maxtokens) op('agent').par.Maxtokens Int
Default:
256
Range:
0 to 1
Slider Range:
16 to 4096
Temperature (Temperature) op('agent').par.Temperature Float
Default:
0.0
Range:
0 to 1
Slider Range:
0 to 1
Reasoning Effort (Reasoningeffort) op('agent').par.Reasoningeffort Menu

Control provider reasoning level when supported (LiteLLM reasoning_effort). Default Off (not sent).

Default:
off
Options:
off, low, medium, high
Model Selection Header
Use Model From (Modelselection) op('agent').par.Modelselection Menu
Default:
chattd_model
Options:
chattd_model, custom_model, controller_model
Controller [ Model ] (Modelcontroller) op('agent').par.Modelcontroller OP
Default:
"" (Empty String)
Select API Server (Apiserver) op('agent').par.Apiserver StrMenu
Default:
openrouter
Menu Options:
  • openrouter (openrouter)
  • openai (openai)
  • groq (groq)
  • ollama (ollama)
  • gemini (gemini)
  • lmstudio (lmstudio)
  • custom (custom)
  • anthropic (anthropic)
AI Model (Model) op('agent').par.Model StrMenu
Default:
llama-3.2-11b-vision-preview
Menu Options:
  • allam-2-7b (allam-2-7b)
  • deepseek-r1-distill-llama-70b (deepseek-r1-distill-llama-70b)
  • gemma2-9b-it (gemma2-9b-it)
  • compound (groq/compound)
  • compound-mini (groq/compound-mini)
  • llama-3.1-8b-instant (llama-3.1-8b-instant)
  • llama-3.3-70b-versatile (llama-3.3-70b-versatile)
  • llama-4-maverick-17b-128e-instruct (meta-llama/llama-4-maverick-17b-128e-instruct)
  • llama-4-scout-17b-16e-instruct (meta-llama/llama-4-scout-17b-16e-instruct)
  • llama-guard-4-12b (meta-llama/llama-guard-4-12b)
  • llama-prompt-guard-2-22m (meta-llama/llama-prompt-guard-2-22m)
  • llama-prompt-guard-2-86m (meta-llama/llama-prompt-guard-2-86m)
  • kimi-k2-instruct (moonshotai/kimi-k2-instruct)
  • kimi-k2-instruct-0905 (moonshotai/kimi-k2-instruct-0905)
  • gpt-oss-120b (openai/gpt-oss-120b)
  • gpt-oss-20b (openai/gpt-oss-20b)
  • playai-tts (playai-tts)
  • playai-tts-arabic (playai-tts-arabic)
  • qwen3-32b (qwen/qwen3-32b)
  • whisper-large-v3 (whisper-large-v3)
  • whisper-large-v3-turbo (whisper-large-v3-turbo)
Search (Search) op('agent').par.Search Toggle
Default:
False
Model Search (Modelsearch) op('agent').par.Modelsearch Str
Default:
"" (Empty String)
Show Model Info (Showmodelinfo) op('agent').par.Showmodelinfo Toggle
Default:
False
Use LOP Tools (Usetools) op('agent').par.Usetools Toggle
Default:
False
Tool Follow-up Response (Toolfollowup) op('agent').par.Toolfollowup Toggle

When enabled, agent makes a follow-up API call after tool execution to generate a final response. When disabled, agent only executes tools without generating responses.

Default:
True
Tool Turn Budget (Toolturnbudget) op('agent').par.Toolturnbudget Int

Maximum number of tool turns the agent may take (initial tool turn counts as 1). Only applies when Allow Follow-up Tools is enabled.

Default:
1
Range:
1 to 1
Slider Range:
1 to 10
Parallel Tool Calls (Paralleltoolcalls) op('agent').par.Paralleltoolcalls Toggle

If enabled and tools are present, request parallel tool calls (LiteLLM parallel_tool_calls). Default Off.

Default:
False
LOP Tools Header
External Op Tools (Tool) op('agent').par.Tool Sequence
Default:
0
Active (Tool0active) op('agent').par.Tool0active Menu
Default:
enabled
Options:
enabled, disabled, forced
OP (Tool0op) op('agent').par.Tool0op OP
Default:
"" (Empty String)
Context Op (Contextop) op('agent').par.Contextop OP
Default:
"" (Empty String)
Use Audio (Useaudio) op('agent').par.Useaudio Toggle
Default:
False
Audio File (Audiofile) op('agent').par.Audiofile File
Default:
"" (Empty String)
Send TOP Image (Sendtopimage) op('agent').par.Sendtopimage Toggle

If enabled, send the TOP specified in Topimage directly with the prompt.

Default:
False
TOP Image (Topimage) op('agent').par.Topimage TOP

Specify a TOP operator to send as an image.

Default:
"" (Empty String)
Enable Prompt Caching (Enablepromptcaching) op('agent').par.Enablepromptcaching Toggle

Enable prompt caching for supported providers to reduce costs and improve performance.

Default:
False
Output Mode (Outputmode) op('agent').par.Outputmode Menu
Default:
conversation
Options:
conversation, table, parameter, custom
Structured Output (Structuredoutput) op('agent').par.Structuredoutput Toggle

Enable structured output with JSON schema validation. Requires a valid schema in Schema DAT.

Default:
False
Schema DAT (Schemadat) op('agent').par.Schemadat DAT

DAT containing JSON schema for structured output. Schema should be valid JSON in the DAT text.

Default:
"" (Empty String)
Jsonmode (Jsonmode) op('agent').par.Jsonmode Toggle
Default:
False
Thinking Filter Mode (Thinkingfilter) op('agent').par.Thinkingfilter Menu
Default:
none
Options:
none, filter_both, filter_convo_only, filter_text_only
Thinking Replacement Text (Thinkingreplace) op('agent').par.Thinkingreplace Str
Default:
"" (Empty String)
Thinking Phrases (Thinkingphrases) op('agent').par.Thinkingphrases Str
Default:
<think>,</think>
Assign Perspective (Perspective) op('agent').par.Perspective Menu
Default:
user
Options:
user, assistant, third_party
Conversation Format (Conversationformat) op('agent').par.Conversationformat Menu
Default:
input_roles
Options:
input_roles, defined_roles, clear_add_as_user, clear_add_as_assistant
Op Display Header
Icon (Icon) op('agent').par.Icon Menu
Default:
none
Options:
none, corner, big
Display Text (Displaytext) op('agent').par.Displaytext Toggle
Default:
False
Table (Tableview) op('agent').par.Tableview Toggle
Default:
False
Show Metadata (Showmetadata) op('agent').par.Showmetadata Toggle
Default:
False
Tool Format (Toolformat) op('agent').par.Toolformat Menu
Default:
original
Options:
original, bracket, plain, emoji, minimal
Callbacks Header
Callback DAT (Callbackdat) op('agent').par.Callbackdat DAT
Default:
ChatTD_callbacks
Edit Callbacks (Editcallbacksscript) op('agent').par.Editcallbacksscript Pulse
Default:
False
Create Callbacks (Createpulse) op('agent').par.Createpulse Pulse
Default:
False
onTaskStart (Ontaskstart) op('agent').par.Ontaskstart Toggle
Default:
False
onTaskComplete (Ontaskcomplete) op('agent').par.Ontaskcomplete Toggle
Default:
False
On Tool Call (Ontoolcall) op('agent').par.Ontoolcall Toggle
Default:
False
onTaskError (Ontaskerror) op('agent').par.Ontaskerror Toggle
Default:
False
Textport Debug Callbacks (Debugcallbacks) op('agent').par.Debugcallbacks Menu
Default:
Full Details
Options:
None, Errors Only, Basic Info, Full Details
Available Callbacks:
  • onTaskStart
  • onTaskComplete
  • onTaskError
  • onToolCall
Example Callback Structure:
def onTaskStart(info):
    # Called when the agent begins processing a request
    # info keys: model, max_tokens, temperature, system_message,
    #   use_system_message, audio_path, message (conversation list)
    pass

def onTaskComplete(info):
    # Called when the agent finishes processing (including after tool follow-ups)
    # info keys: response, agent_id, timestamp, is_streaming, is_final,
    #   model, max_tokens, temperature, tool_results (if tools were used),
    #   agent_tool_history (full tool call/result history)
    pass

def onTaskError(info):
    # Called when an API call or tool execution fails
    # info keys: error (dict with type, message, code, model),
    #   agent_id, timestamp, response_time, is_error, is_streaming
    pass

def onToolCall(info):
    # Called when the model requests tool execution (before tools run)
    # info keys: tool_calls (list of {id, type, function: {name, arguments}}),
    #   agent_id, timestamp
    pass
v1.3.3 (2026-03-01)
  • Explicitly set tool_choice='auto' for Groq compatibility when tools are enabled
  • Add budget enforcement before tool execution to drop calls after the budget is exhausted
  • Set tool_choice='none' on follow-up calls when the budget is exhausted (Groq compatibility)
  • Persist tags across follow-up calls for trace grouping
  • Improve budget status logging in follow-up calls
  • Add par.Traceapicall toggle to exclude the agent from trace generation; pass trace_api_call to ChatTD Customapicall
  • Initial commit
v1.3.2 (2025-09-01)

Cleaned the tool choice menu and added a force option.

Added the Chainid parameter; when read-only, it is set automatically.

v1.3.1 (2025-08-17)
  • Added duplicate tool name detection with clear error messages and API call abortion
  • Fixed Claude/Anthropic follow-up call compatibility by providing tools when budget exhausted
  • Enhanced Logger component to handle both 2-parameter and 3-parameter calls flexibly
  • Implemented proper DEBUG/INFO/WARNING/ERROR filtering based on Showlogs parameter
  • Converted 95% of verbose logs from INFO to DEBUG level, keeping only critical information as INFO
  • Improved conversation cleanup to prevent tool call loops and invalid argument propagation
  • Fixed tool call deduplication to prevent infinite loops from LLMs generating identical calls
  • Enhanced streaming tool detection across different providers during responses
  • Improved tool history storage and cleanup to prevent memory leaks
  • Fixed Reset method to properly clear tool_history_unified table
  • Better chain ID generation and tracking across multi-turn conversations
  • Enhanced compatibility with agent session and orchestrator components
  • Streamlined callback execution with consolidated logging to reduce overhead
  • Improved async tool execution with better cancellation and cleanup
  • Enhanced error messages for configuration issues and tool failures
v1.3.0 (2025-08-13)
  • Multi-Turn Tool Calling: Added Toolbudget parameter for multiple LLM calls within single request
  • Parallel Tool Execution: Added Paralleltoolcalls parameter for simultaneous tool execution
  • Turn Table System: New turn_table DAT captures all conversation events (streaming, tool calls, results)
  • Chain ID Tracking: chain_id parameter for tracking related calls across conversations
  • Reasoning Model Support: Added Reasoninglevel parameter for thinking models (o1-preview, o1-mini)

## Improvements

  • Tool Call Detection: Better tool call detection across providers during streaming
  • Tool History Management: Unified tool tracking with proper call/result correlation
  • Streaming Architecture: Turn-based streaming with real-time turn table updates
  • Output System: Decoupled output formatting - external scripts process turn table data
  • Callback System: Enhanced callbacks with tool history and chain context

## Bug Fixes

  • Tool Call Parsing: Fixed tool call detection in various response formats
  • Streaming Integration: Fixed tools not being detected during streaming responses
  • Turn Boundaries: Fixed issues with multi-turn conversation boundaries
  • Memory Management: Better cleanup of tool-related data structures

## Breaking Changes

  • Turn Table Primary: Turn table is now primary source of conversation data (not output_dat)
  • New Parameters: Toolbudget and Paralleltoolcalls parameters added
  • Chain ID Required: Chain IDs now required for proper multi-turn tracking
v1.2.3 (2025-07-24)

🧹 Code Refactoring & Maintenance

  • Improved Tool Loading Robustness: The agent's parse_tools method has been made more resilient. It now gracefully handles GetTool-enabled operators that do not explicitly provide a response_format in their callback info, defaulting to "json". This prevents an entire tool from failing to load and improves backward compatibility with older or non-compliant tool operators.
v1.2.2 (2025-07-22)

🚀 New Features

  • Built-in Thinking Filter: Integrated the functionality of the ThinkingFilter LOP directly into the Agent. This allows thinking blocks (e.g. <think>...</think>) to be removed from conversations and final responses without needing a separate operator.
    • Added Thinkingfilter (Filter Mode), Thinkingreplace (Replacement Text), and Thinkingphrases (Start/End Phrases) parameters to an "I/O" page.
    • The filter correctly processes both outgoing conversation history and incoming model responses, including real-time filtering of the output_dat during streaming.
  • UI Warning for Streaming + Tools: Added a dynamic parameter label system to warn users when both Streaming and Usetools are active, as this combination may be unstable. The parameter labels for both will change to include a warning symbol.

🐛 Critical Bug Fixes

  • Fixed Streaming Callback Flood: Resolved a critical bug where onTaskComplete and other finalization logic would execute on every single data chunk during a streaming response. The agent now correctly identifies the true final chunk, ensuring callbacks fire only once.
  • Restored Cancelcall for Streaming: The fix for the callback flood also resolved an issue where Cancelcall would fail during streaming because the current_api_call_id was being cleared prematurely. Cancellation now works as expected throughout the entire streaming process.
  • Corrected Tool Call Detection Structure: Fixed a structural parsing error where the agent would fail to find tool calls in the final chunk of a streaming response. The logic now correctly checks for tool calls in both response.choices[0].delta.tool_calls and response.choices[0].message.tool_calls, making tool detection more robust for different API response structures.

🧹 Code Refactoring & Maintenance

  • Removed Obsolete Parameters & Methods: Deprecated and removed the Outputmode and Conversationformat parameters, as their logic was superseded by direct handling within the HandleResponse method.
  • Disabled Dead Code: The obsolete methods associated with the old output modes (execute_output_mode, update_conversation_dat) have been neutralized to prevent confusion and improve code clarity.
v1.2.1 (2025-07-20)

🐛 Critical Bug Fix

  • Fixed Group Callback Mechanism: Resolved critical issue where group callbacks (used by orchestrator and other external systems) were not being executed
    • Root cause: Agent was storing callback info in self.group_callback and self.groupOP but looking for it in self.last_group_callback and self.last_groupOP
    • Solution: Added proper transfer of callback info to last_ variables in the Call method
    • Impact: Enables proper communication between Agent and orchestrator systems for multi-step workflows

    🔧 Technical Details

    • Modified Call method to properly transfer group callback information before API calls
    • This fix enables the Agent Orchestrator's autonomous mode to function correctly
    • Maintains backward compatibility with existing callback patterns
v1.2.0 (2025-07-13)

Added the tool manager for tool logging and tool history.

v1.1.3 (2025-07-01)
  • Enhanced Tool Callbacks for Orchestration:
    • The HandleResponse callback method now includes a new, comprehensive agent_tool_history object in its callbackInfo dictionary whenever tools are used.
    • This object preserves the complete context of a tool interaction, including the initial tool_calls generated by the model and the final tool_results from execution.
    • This critically solves an issue where tool call information was being lost on agents that use the "Tool Follow-up" feature, enabling robust, stateful orchestration.
  • Improved State Management:
    • The agent's internal tool history is now cleared after every API call cycle. This ensures the agent remains stateless and prevents tool call information from one operation from leaking into the next.
    • This change reinforces the design pattern where long-term conversation and history management is the responsibility of a higher-level component (like the Agent Orchestrator).
  • Backwards Compatibility:
    • The existing tool_results key is still populated in the callbackInfo to ensure that older components relying on it continue to function without modification.
v1.1.2 (2025-06-30)

This update focuses on a major refactoring of the agent's tool handling system, removing legacy code, improving robustness, and ensuring compatibility with modern, standardized tool providers.

#### ✨ Features & Enhancements

  • Generic Tool Result Handling: The _make_follow_up_call_with_tool_results method was completely overhauled. It no longer contains hardcoded logic for specific tools (like the old knowledge graph). It now intelligently formats any successful tool's dictionary output into a clean JSON string, making the agent compatible with any GetTool-based operator.
  • API-Aware Tool Roles: The agent now dynamically sets the message role for tool results ('function' for Gemini, 'tool' for others). This resolves a critical API incompatibility that was causing empty follow-up responses from the Gemini backend.
  • Streamlined Tool Parsing: Redundant calls to parse tools within the Call method have been eliminated. Tools are now parsed only once, improving efficiency and code readability.

#### 🐛 Bug Fixes

  • Robust Tool Loading: Fixed a KeyError: 'args' in the parse_tools method, allowing the agent to safely load newer tools that don't use the legacy args dictionary.
  • Corrected Follow-up Logic: Resolved a NameError and a subsequent critical logic flaw where processed tool results were being ignored in the final follow-up call to the LLM. The agent now correctly uses the processed content.

#### 🧹 Code Cleanup & Refactoring

  • Removed Legacy MCP Logic: All code related to the old MCP Client Manager, which was handled directly within the agent, has been removed. This aligns the agent with the new architecture where MCP clients are standard tool providers.
  • Removed Legacy Parameter Tool Handling: The specific logic for handling adjust_..._parameters tools within _call_tools_async has been removed, as this functionality is now managed by a dedicated Tool Parameter operator. The agent's tool execution method is now leaner and more focused.
v1.1.1 (2025-05-12)

Added JSON mode.

v1.1.0 (2025-05-03)

Moved to LiteLLM as backend.

Added improved model page + new model info display toggle.

Added the TOP Image parameter.

Added audio support (for Gemini multimodal and possibly other models).

v1.0.0 (2024-11-06)

Initial release