Agent
The Agent is the primary operator for sending prompts to large language models and receiving responses. It manages the full lifecycle of an LLM interaction: assembling conversations from input data, injecting system messages and context, sending API requests (with optional streaming), executing tool calls when the model requests them, and delivering final responses through multiple output formats.
Key Features
- Multi-provider LLM access through LiteLLM (OpenRouter, OpenAI, Groq, Ollama, Gemini, Anthropic, LM Studio, and custom endpoints)
- Streaming responses with real-time table updates
- Tool calling with multi-turn budget control and parallel execution
- Structured output with JSON schema validation
- Vision support via direct TOP image input or Context Grabber
- Audio file input for multimodal models
- Thinking tag filtering for reasoning models
- Prompt caching for cost reduction on supported providers
- Reasoning effort control for compatible models
- CHOP channel outputs for monitoring agent state, tool metrics, and callback events
Agent Tool Integration
The Agent does not expose tools itself. Instead, it discovers and calls tools from other LOP operators connected to its Tool sequence.
Use the Tool Debugger operator to inspect exact tool definitions, schemas, and parameters.
The Agent acts as the tool caller, not the tool provider. Any LOP with a GetTool() method can be wired into the Agent’s External Op Tools sequence on the Tools page. The Agent collects all available tools at call time, sends their schemas to the model, and routes tool calls back to the owning operator for execution.
Both single-tool operators (like Tool DAT or Search) and multi-tool providers (like MCP Client, which can expose dozens of tools from a single connection) are supported.
Input/Output
Inputs
- Input 1 (DAT): A conversation table with columns `role`, `message`, `id`, `timestamp`. Each row represents a message in the conversation. The agent reads this table when `Call Agent` is pulsed. When `Call on in1 Table Change` is enabled, the agent automatically triggers whenever this input table updates.
Outputs
The Agent has 4 outputs:
- Output 1: The `conversation_dat` table — the full conversation including the assistant’s response appended after each call
- Output 2: The `output_dat` text DAT — the latest assistant response text (driven by the internal turn table formatter)
- Output 3: The `history_table` — a running log of every API call with model, tokens, timing, and response metadata
- Output 4: The `turn_table` — per-turn data collection showing the sequence of streaming chunks, tool calls, tool results, and responses within a single agent turn
Usage Examples
Basic Conversation
1. Place an Agent LOP in your network.
2. Create a Table DAT with columns `role`, `message`, `id`, `timestamp`.
3. Add a row with role `user` and your prompt in the message column.
4. Wire the Table DAT into the Agent’s first input.
5. On the Agent page, pulse `Call Agent`.
6. The response appears on output 1 (conversation table) and output 2 (response text).
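The conversation table described above can also be filled from a script. Below is a minimal sketch in pure Python; the TouchDesigner-specific lines are shown as comments, and the operator path, message id, and timestamp values are illustrative assumptions, not values defined by the Agent:

```python
# Sketch of the conversation table the Agent expects on its first input.
# Column names and the "user" role come from the documentation above;
# the id and timestamp values here are made-up examples.
header = ["role", "message", "id", "timestamp"]
row = ["user", "Summarize this network in one sentence.", "msg_001", "2025-01-01T12:00:00"]

# In TouchDesigner you would write these into a Table DAT, for example:
#   t = op('conversation')        # hypothetical Table DAT name
#   t.clear()
#   t.appendRow(header)
#   t.appendRow(row)
#   op('agent').par.Call.pulse()  # equivalent to pulsing Call Agent

message = dict(zip(header, row))
print(message["role"])
```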
Selecting a Model
The Agent supports three model selection modes, configured with Use Model From on the Model page:
- ChatTD (default): Uses the global model and API server configured in ChatTD. This is the simplest option and shares settings across all operators.
- Custom Model: Select a specific `API Server` and `AI Model` directly on the Agent’s Model page. Use the `Search` toggle and `Model Search` field to filter long model lists.
- Controller: Point the `Controller` parameter to another operator that provides model selection (useful for centralized model management across multiple agents).
Streaming Responses
1. On the Agent page, enable `Use Streaming`.
2. Optionally enable `Update Table When Streaming` to see the conversation table update in real time as chunks arrive.
3. Pulse `Call Agent`. The response text on output 2 updates progressively as the model generates tokens.
Using Tools
1. On the Tools page, enable `Use LOP Tools`.
2. In the External Op Tools sequence, add a row and drag a tool-providing operator (such as a Tool DAT, Search LOP, or MCP Client) into the `OP` field.
3. Set the tool’s `Active` state:
   - enabled: The model can choose to use the tool when appropriate.
   - forced: The model must use this tool on the next call.
   - disabled: The tool is ignored for this call.
4. Set `Tool Turn Budget` to control how many rounds of tool calling are allowed before the agent must produce a final response. A budget of 1 means one round of tools followed by a response. Higher budgets allow the model to call tools, see results, and call more tools iteratively.
5. Enable `Tool Follow-up Response` (on by default) so the agent makes a follow-up API call after tool execution to produce a natural language summary. When disabled, the agent stops after executing tools without generating a response.
6. Enable `Parallel Tool Calls` if you want the model to request multiple tools simultaneously (when the provider supports it). Tools execute concurrently for faster results.
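The Tool Turn Budget semantics above can be illustrated with a toy loop. This is a pure-Python sketch of the bookkeeping, not the Agent's internal implementation; `tool_rounds_requested` is a stand-in for how many rounds the model would keep asking for tools:

```python
def run_turns(tool_rounds_requested: int, budget: int) -> list[str]:
    """Toy model of Tool Turn Budget: each round of tool calls consumes
    one unit of budget; once it is exhausted the agent must answer."""
    events = []
    rounds = 0
    while rounds < tool_rounds_requested and rounds < budget:
        events.append("tool_round")
        rounds += 1
    # With Tool Follow-up Response enabled, a final API call produces the answer.
    events.append("final_response")
    return events

# Budget of 1: one round of tools, then a response, even if the model
# would have kept calling tools.
print(run_turns(tool_rounds_requested=3, budget=1))
```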
Adding Context
On the Context page:
- Context Op: Wire a Context Grabber operator to inject additional context (text, images, files) into the conversation before sending to the model.
- Send TOP Image: Enable this and set the `TOP Image` parameter to directly send a TouchDesigner TOP as an image with your prompt. The model must support vision.
- Use Audio: Enable and set `Audio File` to send an audio file alongside the prompt for multimodal models that accept audio input.
Structured Output
1. On the I/O page, enable `Structured Output`.
2. Create a Text DAT containing a valid JSON schema and wire it to the `Schema DAT` parameter.
3. The Agent will instruct the model to return responses conforming to your schema, using strict mode with the OpenAI response format specification.
This is useful for extracting structured data from LLM responses — for example, parsing sentiment scores, extracting entities, or generating configuration objects.
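For the sentiment-score case, a schema like the following could go in the Text DAT wired to `Schema DAT`. This is an illustrative schema, not one shipped with the operator; the only hard requirement stated above is that the DAT text parse as valid JSON:

```python
import json

# Hypothetical JSON schema for extracting a sentiment judgment.
schema_text = """
{
  "type": "object",
  "properties": {
    "sentiment": {"type": "string", "enum": ["positive", "neutral", "negative"]},
    "confidence": {"type": "number", "minimum": 0, "maximum": 1}
  },
  "required": ["sentiment", "confidence"],
  "additionalProperties": false
}
"""

# The DAT text must be valid JSON, or structured output will fail.
schema = json.loads(schema_text)
print(sorted(schema["required"]))
```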
Filtering Thinking Tags
Some reasoning models wrap their internal thought process in tags like `<think>...</think>`. On the I/O page:
- Set `Thinking Filter Mode` to control where filtering applies:
  - Filter Conversation & Display: Removes thinking tags from both the stored conversation and the display output.
  - Filter Conversation Only: Removes from the conversation history but keeps in display.
  - Filter Display (out2): Keeps in conversation but removes from the display output.
- Customize `Thinking Phrases` if your model uses different delimiters (comma-separated start and end tags).
- Optionally set `Thinking Replacement Text` to substitute filtered content.
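The filtering behavior can be approximated with a small regex. This is a sketch of the idea, not the operator's actual implementation; the default `<think>,</think>` delimiters match the `Thinkingphrases` parameter default listed below:

```python
import re

def filter_thinking(text: str, start: str = "<think>", end: str = "</think>",
                    replacement: str = "") -> str:
    """Remove start..end spans, mimicking the built-in thinking filter."""
    pattern = re.escape(start) + r".*?" + re.escape(end)
    return re.sub(pattern, replacement, text, flags=re.DOTALL)

raw = "<think>Let me reason about this.</think>The answer is 42."
print(filter_thinking(raw))
```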
Output Modes
On the I/O page, the Output Mode parameter controls how the agent delivers its response:
- conversation: Standard mode. The response is appended to the conversation table on output 1.
- table: Response is formatted into a table structure.
- parameter: Response is written to a parameter.
- custom: For advanced use cases with custom response handling.
Assign Perspective
The Assign Perspective setting on the I/O page controls how input message roles are interpreted:
- user (default): Input roles are passed through as-is.
- assistant: Swaps user/assistant roles, useful when the agent should continue from the assistant’s perspective.
- third_party: Concatenates all input messages into a single user message.
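The third_party mode can be illustrated with a toy merge. This is a pure-Python sketch; the exact concatenation format the Agent uses internally is not documented here, so the `role: text` layout is an assumption for illustration:

```python
def third_party_merge(messages: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Concatenate all input messages into a single user message,
    mimicking the third_party perspective."""
    body = "\n".join(f"{role}: {text}" for role, text in messages)
    return [("user", body)]

merged = third_party_merge([("user", "Hi"), ("assistant", "Hello!")])
print(merged[0][0])
```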
Using the CHOP Outputs
The Agent exposes its internal state as CHOP channels through an internal Script CHOP. These channels include callback events (on_task_start, on_task_complete, on_tool_call, on_task_error), agent state (agent_active, agent_streaming, task_idle, task_responding, etc.), tool metrics (tool_turns_used, tool_turn_budget, total_available_tools), and token counts (prompt_tokens, completion_tokens, total_tokens). Connect downstream CHOPs to monitor agent activity in real time.
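A hedged sketch of polling these channels from a script. The channel names are the ones listed above; the `op()` path and the idea of reading through a Null CHOP are assumptions, so the TouchDesigner-specific lines are shown as comments:

```python
# Channel names documented for the Agent's CHOP output.
state_channels = ["agent_active", "agent_streaming", "task_idle", "task_responding"]
token_channels = ["prompt_tokens", "completion_tokens", "total_tokens"]

# In TouchDesigner you might read them like this (hypothetical path):
#   chop = op('agent_state')   # e.g. a Null CHOP wired to the Agent's CHOP output
#   busy = chop['agent_active'][0] > 0
#   spent = int(chop['total_tokens'][0])

print(len(state_channels) + len(token_channels))
```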
Callbacks
Beyond the standard onTaskStart and onTaskComplete, the Agent fires two additional callbacks:
onToolCall
Fires when the model requests tool execution, before the tools actually run. The info dict contains a tool_calls list with each tool’s id, name, and arguments. Use this to log, filter, or intercept tool calls before execution.
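A minimal `onToolCall` handler operating on the documented info shape. The sample `info` payload below is illustrative (the `search` tool name and argument string are invented), but its structure follows the `tool_calls` layout shown in the callback signatures later on this page:

```python
def onToolCall(info: dict) -> list[str]:
    """Log which tools the model wants to run before they execute."""
    names = [call["function"]["name"] for call in info["tool_calls"]]
    print("model requested tools:", names)
    return names

# Illustrative payload following the documented structure.
sample = {
    "tool_calls": [
        {"id": "call_1", "type": "function",
         "function": {"name": "search", "arguments": '{"query": "weather"}'}},
    ],
    "agent_id": "agent1",
    "timestamp": 0,
}
onToolCall(sample)
```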
onTaskError
Fires when an API call fails or tool execution encounters an error. The info dict contains an error object with type, message, code, and model fields. The Agent automatically formats common LiteLLM errors (authentication failures, rate limits, context window exceeded, service unavailable) into user-friendly messages.
Best Practices
- Start with ChatTD model selection for quick setup, then switch to Custom Model when you need per-agent model control.
- Set Max Tokens appropriately on the Model page. The default of 256 is conservative — increase it for longer responses.
- Use Tool Turn Budget wisely. A budget of 1 is sufficient for most single-tool workflows. Increase to 3-5 for complex agentic tasks where the model needs to gather information iteratively.
- Enable Prompt Caching on the I/O page when making repeated calls with similar conversation history. This reduces costs significantly on providers like Anthropic.
- Use the Chain ID parameter when integrating with orchestration systems. Setting a consistent Chain ID groups related API calls together for tracing and analytics.
- Monitor with CHOP channels rather than polling parameters. The CHOP output provides reactive state updates at 60fps.
Troubleshooting
- “Duplicate tool name detected”: Two operators in the Tool sequence expose tools with the same name. Remove one operator or reconfigure tool names to be unique.
- Tool calls not executing: Verify `Use LOP Tools` is enabled on the Tools page and that tool operators are wired into the `External Op Tools` sequence with their `Active` state set to `enabled` or `forced`.
- Empty responses: Check that `Max Tokens` on the Model page is set high enough. Very low values can cause truncated or empty responses.
- Rate limit errors: The Agent surfaces provider-specific rate limit messages. Wait a moment and retry, or switch to a different provider/model.
- Model not supporting images: If you see “content must be a string” errors, the selected model does not support multimodal input. Switch to a vision-capable model.
- Streaming interruptions: Mid-stream errors are automatically reported. Check your network connection and the provider’s service status.
- Tool budget exhausted: If the model keeps requesting tools but the budget is used up, increase `Tool Turn Budget` or simplify the task so fewer tool rounds are needed.
Parameters
`op('agent').par.Call` (Pulse) - Default: `False`
`op('agent').par.Onin1` (Toggle) - Default: `False`
`op('agent').par.Streaming` (Toggle) - When enabled, responses are delivered in chunks as they are generated. Default: `False`
`op('agent').par.Streamingupdatetable` (Toggle) - When enabled, the conversation table is updated as streaming chunks arrive. Default: `False`
`op('agent').par.Taskcurrent` (Str) - Default: `""` (empty string)
`op('agent').par.Timer` (Float) - Default: `0.0`, Range: 0 to 1, Slider Range: 0 to 0
`op('agent').par.Active` (Toggle) - Default: `False`
`op('agent').par.Cancelcall` (Pulse) - Cancel any currently active API call or tool execution. Default: `False`
`op('agent').par.Systemmessagedat` (OP) - Default: `./system_message`
`op('agent').par.Displaysysmess` (Str) - Default: `""` (empty string)
`op('agent').par.Editsysmess` (Pulse) - Default: `False`
`op('agent').par.Usesystemmessage` (Toggle) - When disabled, system messages are not sent to the model (for models that do not support system messages). Default: `False`
`op('agent').par.Chainid` (Str) - Chain ID for tracking calls in orchestration systems. When set, this is used instead of auto-generating one. Default: `""` (empty string)
`op('agent').par.Maxtokens` (Int) - Default: `256`, Range: 0 to 1, Slider Range: 16 to 4096
`op('agent').par.Temperature` (Float) - Default: `0.0`, Range: 0 to 1, Slider Range: 0 to 1
`op('agent').par.Modelcontroller` (OP) - Default: `""` (empty string)
`op('agent').par.Search` (Toggle) - Default: `False`
`op('agent').par.Modelsearch` (Str) - Default: `""` (empty string)
`op('agent').par.Showmodelinfo` (Toggle) - Default: `False`
Provider Model Documentation
Consult the documentation for your chosen provider to find supported models, API key information, and usage limits.
View LiteLLM Supported Providers →
`op('agent').par.Usetools` (Toggle) - Default: `False`
`op('agent').par.Toolfollowup` (Toggle) - When enabled, the agent makes a follow-up API call after tool execution to generate a final response. When disabled, the agent only executes tools without generating responses. Default: `True`
`op('agent').par.Toolturnbudget` (Int) - Maximum number of tool turns the agent may take (the initial tool turn counts as 1). Only applies when Allow Follow-up Tools is enabled. Default: `1`, Range: 1 to 1, Slider Range: 1 to 10
`op('agent').par.Paralleltoolcalls` (Toggle) - If enabled and tools are present, request parallel tool calls (LiteLLM `parallel_tool_calls`). Default: `False`
`op('agent').par.Tool` (Sequence) - Default: `0`
`op('agent').par.Tool0op` (OP) - Default: `""` (empty string)
Context
`op('agent').par.Contextop` (OP) - Default: `""` (empty string)
`op('agent').par.Useaudio` (Toggle) - Default: `False`
`op('agent').par.Audiofile` (File) - Default: `""` (empty string)
`op('agent').par.Sendtopimage` (Toggle) - If enabled, send the TOP specified in `Topimage` directly with the prompt. Default: `False`
`op('agent').par.Topimage` (TOP) - Specify a TOP operator to send as an image. Default: `""` (empty string)
`op('agent').par.Enablepromptcaching` (Toggle) - Enable prompt caching for supported providers to reduce costs and improve performance. Default: `False`
`op('agent').par.Structuredoutput` (Toggle) - Enable structured output with JSON schema validation. Requires a valid schema in Schema DAT. Default: `False`
`op('agent').par.Schemadat` (DAT) - DAT containing the JSON schema for structured output. The schema should be valid JSON in the DAT text. Default: `""` (empty string)
`op('agent').par.Jsonmode` (Toggle) - Default: `False`
`op('agent').par.Thinkingreplace` (Str) - Default: `""` (empty string)
`op('agent').par.Thinkingphrases` (Str) - Default: `<think>,</think>`
`op('agent').par.Displaytext` (Toggle) - Default: `False`
`op('agent').par.Tableview` (Toggle) - Default: `False`
`op('agent').par.Showmetadata` (Toggle) - Default: `False`
Callbacks
`op('agent').par.Callbackdat` (DAT) - Default: `ChatTD_callbacks`
`op('agent').par.Editcallbacksscript` (Pulse) - Default: `False`
`op('agent').par.Createpulse` (Pulse) - Default: `False`
`op('agent').par.Ontaskstart` (Toggle) - Default: `False`
`op('agent').par.Ontaskcomplete` (Toggle) - Default: `False`
`op('agent').par.Ontoolcall` (Toggle) - Default: `False`
`op('agent').par.Ontaskerror` (Toggle) - Default: `False`
Callbacks
The Agent supports the following callbacks: onTaskStart, onTaskComplete, onTaskError, onToolCall.
def onTaskStart(info):
# Called when the agent begins processing a request
# info keys: model, max_tokens, temperature, system_message,
# use_system_message, audio_path, message (conversation list)
pass
def onTaskComplete(info):
# Called when the agent finishes processing (including after tool follow-ups)
# info keys: response, agent_id, timestamp, is_streaming, is_final,
# model, max_tokens, temperature, tool_results (if tools were used),
# agent_tool_history (full tool call/result history)
pass
def onTaskError(info):
# Called when an API call or tool execution fails
# info keys: error (dict with type, message, code, model),
# agent_id, timestamp, response_time, is_error, is_streaming
pass
def onToolCall(info):
# Called when the model requests tool execution (before tools run)
# info keys: tool_calls (list of {id, type, function: {name, arguments}}),
# agent_id, timestamp
pass
Changelog
v1.3.3 - 2026-03-01
- Explicitly set `tool_choice='auto'` for Groq compatibility when tools enabled
- Add budget enforcement before tool execution to drop calls after budget exhausted
- Set `tool_choice='none'` on follow-up calls when budget exhausted for Groq compatibility
- Persist tags across follow-up calls for trace grouping
- Improve budget status logging in follow-up calls
- Add `par.Traceapicall` toggle to exclude agent from trace generation
- Pass `trace_api_call` to ChatTD `Customapicall`
- Initial commit
v1.3.2 - 2025-09-01
- Cleaned menu; added force option to tool choice.
- Added `Chainid` parameter; when read-only it is set automatically.
v1.3.1 - 2025-08-17
- Added duplicate tool name detection with clear error messages and API call abortion
- Fixed Claude/Anthropic follow-up call compatibility by providing tools when budget exhausted
- Enhanced Logger component to handle both 2-parameter and 3-parameter calls flexibly
- Implemented proper DEBUG/INFO/WARNING/ERROR filtering based on Showlogs parameter
- Converted 95% of verbose logs from INFO to DEBUG level, keeping only critical information as INFO
- Improved conversation cleanup to prevent tool call loops and invalid argument propagation
- Fixed tool call deduplication to prevent infinite loops from LLMs generating identical calls
- Enhanced streaming tool detection across different providers during responses
- Improved tool history storage and cleanup to prevent memory leaks
- Fixed Reset method to properly clear tool_history_unified table
- Better chain ID generation and tracking across multi-turn conversations
- Enhanced compatibility with agent session and orchestrator components
- Streamlined callback execution with consolidated logging to reduce overhead
- Improved async tool execution with better cancellation and cleanup
- Enhanced error messages for configuration issues and tool failures
v1.3.0 - 2025-08-13
- Multi-Turn Tool Calling: Added `Toolbudget` parameter for multiple LLM calls within a single request
- Parallel Tool Execution: Added `Paralleltoolcalls` parameter for simultaneous tool execution
- Turn Table System: New `turn_table` DAT captures all conversation events (streaming, tool calls, results)
- Chain ID Tracking: `chain_id` parameter for tracking related calls across conversations
- Reasoning Model Support: Added `Reasoninglevel` parameter for thinking models (o1-preview, o1-mini)
## Improvements
- Tool Call Detection: Better tool call detection across providers during streaming
- Tool History Management: Unified tool tracking with proper call/result correlation
- Streaming Architecture: Turn-based streaming with real-time turn table updates
- Output System: Decoupled output formatting - external scripts process turn table data
- Callback System: Enhanced callbacks with tool history and chain context
## Bug Fixes
- Tool Call Parsing: Fixed tool call detection in various response formats
- Streaming Integration: Fixed tools not being detected during streaming responses
- Turn Boundaries: Fixed issues with multi-turn conversation boundaries
- Memory Management: Better cleanup of tool-related data structures
## Breaking Changes
- Turn Table Primary: Turn table is now primary source of conversation data (not output_dat)
- New Parameters: `Toolbudget` and `Paralleltoolcalls` parameters added
- Chain ID Required: Chain IDs now required for proper multi-turn tracking
v1.2.3 - 2025-07-24
🧹 Code Refactoring & Maintenance
- Improved Tool Loading Robustness: The agent's `parse_tools` method has been made more resilient. It now gracefully handles `GetTool`-enabled operators that do not explicitly provide a `response_format` in their callback info, defaulting to `"json"`. This prevents an entire tool from failing to load and improves backward compatibility with older or non-compliant tool operators.
v1.2.2 - 2025-07-22
🚀 New Features
- Built-in Thinking Filter: Integrated the functionality of the `ThinkingFilter` LOP directly into the Agent. This allows for the removal of thinking blocks from conversations and final responses without needing a separate operator.
- Added `Thinkingfilter` (Filter Mode), `Thinkingreplace` (Replacement Text), and `Thinkingphrases` (Start/End Phrases) parameters to an "I/O" page.
- The filter correctly processes both outgoing conversation history and incoming model responses, including real-time filtering of the `output_dat` during streaming.
- UI Warning for Streaming + Tools: Added a dynamic parameter label system to warn users when both `Streaming` and `Usetools` are active, as this combination may be unstable. The parameter labels for both will change to include a warning symbol.
🐛 Critical Bug Fixes
- Fixed Streaming Callback Flood: Resolved a critical bug where `onTaskComplete` and other finalization logic would execute on every single data chunk during a streaming response. The agent now correctly identifies the true final chunk, ensuring callbacks fire only once.
- Restored `Cancelcall` for Streaming: The fix for the callback flood also resolved an issue where `Cancelcall` would fail during streaming because the `current_api_call_id` was being cleared prematurely. Cancellation now works as expected throughout the entire streaming process.
- Corrected Tool Call Detection Structure: Fixed a structural parsing error where the agent would fail to find tool calls in the final chunk of a streaming response. The logic now correctly checks for tool calls in both `response.choices[0].delta.tool_calls` and `response.choices[0].message.tool_calls`, making tool detection more robust for different API response structures.
🧹 Code Refactoring & Maintenance
- Removed Obsolete Parameters & Methods: Deprecated and removed the `Outputmode` and `Conversationformat` parameters, as their logic was superseded by direct handling within the `HandleResponse` method.
- Disabled Dead Code: The obsolete methods associated with the old output modes (`execute_output_mode`, `update_conversation_dat`) have been neutralized to prevent confusion and improve code clarity.
v1.2.1 - 2025-07-20
🐛 Critical Bug Fix
- Fixed Group Callback Mechanism: Resolved critical issue where group callbacks (used by orchestrator and other external systems) were not being executed
- Root cause: Agent was storing callback info in `self.group_callback` and `self.groupOP` but looking for it in `self.last_group_callback` and `self.last_groupOP`
- Solution: Added proper transfer of callback info to `last_` variables in the `Call` method
- Impact: Enables proper communication between Agent and orchestrator systems for multi-step workflows
- Modified `Call` method to properly transfer group callback information before API calls
- This fix enables the Agent Orchestrator's autonomous mode to function correctly
- Maintains backward compatibility with existing callback patterns
🔧 Technical Details
v1.2.0 - 2025-07-13
Added the tool manager for tool logging and tool history.
v1.1.3 - 2025-07-01
- Enhanced Tool Callbacks for Orchestration:
- The `HandleResponse` callback method now includes a new, comprehensive `agent_tool_history` object in its `callbackInfo` dictionary whenever tools are used.
- This object preserves the complete context of a tool interaction, including the initial `tool_calls` generated by the model and the final `tool_results` from execution.
- This critically solves an issue where tool call information was being lost on agents that use the "Tool Follow-up" feature, enabling robust, stateful orchestration.
- Improved State Management:
- The agent's internal tool history is now cleared after every API call cycle. This ensures the agent remains stateless and prevents tool call information from one operation from leaking into the next.
- This change reinforces the design pattern where long-term conversation and history management is the responsibility of a higher-level component (like the Agent Orchestrator).
- Backwards Compatibility:
- The existing `tool_results` key is still populated in the `callbackInfo` to ensure that older components relying on it continue to function without modification.
v1.1.2 - 2025-06-30
This update focuses on a major refactoring of the agent's tool handling system, removing legacy code, improving robustness, and ensuring compatibility with modern, standardized tool providers.
#### ✨ Features & Enhancements
- Generic Tool Result Handling: The `_make_follow_up_call_with_tool_results` method was completely overhauled. It no longer contains hardcoded logic for specific tools (like the old knowledge graph). It now intelligently formats any successful tool's dictionary output into a clean JSON string, making the agent compatible with any `GetTool`-based operator.
- API-Aware Tool Roles: The agent now dynamically sets the message role for tool results (`'function'` for Gemini, `'tool'` for others). This resolves a critical API incompatibility that was causing empty follow-up responses from the Gemini backend.
- Streamlined Tool Parsing: Redundant calls to parse tools within the `Call` method have been eliminated. Tools are now parsed only once, improving efficiency and code readability.
#### 🐛 Bug Fixes
- Robust Tool Loading: Fixed a `KeyError: 'args'` in the `parse_tools` method, allowing the agent to safely load newer tools that don't use the legacy `args` dictionary.
- Corrected Follow-up Logic: Resolved a `NameError` and a subsequent critical logic flaw where processed tool results were being ignored in the final follow-up call to the LLM. The agent now correctly uses the processed content.
#### 🧹 Code Cleanup & Refactoring
- Removed Legacy MCP Logic: All code related to the old MCP Client Manager, which was handled directly within the agent, has been removed. This aligns the agent with the new architecture where MCP clients are standard tool providers.
- Removed Legacy Parameter Tool Handling: The specific logic for handling `adjust_..._parameters` tools within `_call_tools_async` has been removed, as this functionality is now managed by a dedicated `Tool Parameter` operator. The agent's tool execution method is now leaner and more focused.
v1.1.1 - 2025-05-12
Added JSON mode.
v1.1.0 - 2025-05-03
Moved to LiteLLM as backend.
Added improved model page and a new model info display toggle.
Added image TOP parameter.
Added audio support (for Gemini multimodal, and possibly some other models).
v1.0.0 - 2024-11-06
Initial release