
Chat Operator

v2.0.0

The Chat LOP manages multi-turn conversations with AI models directly from the TouchDesigner parameter panel. It provides a dynamic message sequence where you define roles and content for each message, making it ideal for few-shot prompting, conversation prototyping, and building example dialogues that downstream operators like Agent can use as context.

  • Conversation Table (DAT, optional): A table with role, message, id, and timestamp columns. It is combined with the message sequence according to the Input Handling setting.
  • conversation_dat: A table DAT containing the full conversation state with role, message, id, and timestamp columns.

The Chat LOP excels at creating few-shot examples that teach an AI a specific response format.

  1. On the Messages page, click the + button on the Message sequence to add message blocks.
  2. Set up alternating user/assistant pairs as examples:
    • Block 0: Role = user, Text = Translate 'hello' to French.
    • Block 1: Role = assistant, Text = {"translation": "bonjour"}
    • Block 2: Role = user, Text = Translate 'goodbye' to Spanish.
    • Block 3: Role = assistant, Text = {"translation": "adios"}
  3. Wire the output of this Chat LOP into the first input of an Agent LOP.

When the Agent receives a new prompt, it uses these examples as context and follows the established format.
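The few-shot setup above amounts to a message list like the following plain-Python sketch (the role/content dict shape is an assumption modeled on common chat APIs, not the operator's internal format):

```python
# Hypothetical model of the message list built by the blocks above.
# The "role"/"content" keys follow the common OpenAI-style chat format
# (an assumption; the Chat LOP's own table uses role/message columns).
few_shot = [
    {"role": "user", "content": "Translate 'hello' to French."},
    {"role": "assistant", "content": '{"translation": "bonjour"}'},
    {"role": "user", "content": "Translate 'goodbye' to Spanish."},
    {"role": "assistant", "content": '{"translation": "adios"}'},
]

# A downstream Agent appends the live prompt after these examples,
# so the model continues the established JSON answer format.
prompt = {"role": "user", "content": "Translate 'thanks' to German."}
conversation = few_shot + [prompt]
```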

To generate an assistant response:
  1. Add one or more message blocks on the Messages page, ending with a user-role message.
  2. Pulse Call Assistant.
  3. The AI model responds and a new assistant message block is automatically appended to the sequence.

Call User reverses all roles before sending to the API — the model sees user messages as assistant and vice versa. This lets the AI generate the “user” side of a conversation.

  1. Set up your conversation with existing assistant messages.
  2. On the Conversation page, enable Use User Prompt and enter a prompt that describes what kind of user responses to generate.
  3. Pulse Call User on the Messages page.
  4. A new user message block is appended with the generated response.
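The role reversal that Call User performs can be modeled in a few lines of plain Python (an illustrative sketch, not the operator's actual code; how non-user/assistant roles are handled is an assumption):

```python
def reverse_roles(messages):
    """Swap user and assistant roles, as Call User does before sending
    the conversation to the API. Other roles (e.g. system) are assumed
    to pass through unchanged."""
    swap = {"user": "assistant", "assistant": "user"}
    return [{**m, "role": swap.get(m["role"], m["role"])} for m in messages]

# With roles reversed, the model's reply is really the user side of the
# dialogue, which is why it is appended as a user message block.
```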
To load an existing conversation from a table:
  1. Create a Table DAT with role and message columns (plus optional id and timestamp).
  2. Wire it into the Chat LOP’s input.
  3. Pulse Load from Input on the Conversation page to populate the message sequence from the table.

This pulse sets Input Handling to none, so the loaded messages are used as-is rather than being merged with the input again.
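The table-to-sequence conversion can be sketched in plain Python (a simplified model of Load from Input, assuming the first table row holds the column names):

```python
def rows_to_messages(rows):
    """Turn a role/message table (first row = header) into message dicts,
    mirroring what Load from Input does with a wired Table DAT.
    Optional id and timestamp columns are carried along if present."""
    header = rows[0]
    return [dict(zip(header, row)) for row in rows[1:]]
```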

The Input Handling menu on the Messages page controls how wired input data merges with the message sequence:

  • prepend — Input messages appear before the sequence messages
  • append — Input messages appear after the sequence messages
  • index — Insert at a specific position (set via Insert Index)
  • none — Ignore input entirely, use only the message sequence
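The four modes amount to a simple list merge; a minimal sketch (illustrative only, not the operator's implementation):

```python
def merge_messages(input_msgs, sequence_msgs, mode, insert_index=0):
    """Combine wired-input messages with the message sequence according
    to the Input Handling mode."""
    if mode == "prepend":
        return input_msgs + sequence_msgs
    if mode == "append":
        return sequence_msgs + input_msgs
    if mode == "index":
        return (sequence_msgs[:insert_index]
                + input_msgs
                + sequence_msgs[insert_index:])
    return list(sequence_msgs)  # "none": ignore the input entirely
```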

On the Conversation page, enable Use System Message and enter instructions in the System Message field. This is prepended to the conversation when calling the assistant.
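In effect, the toggle prepends one extra message to the outgoing conversation; a minimal sketch, again assuming the common role/content dict shape:

```python
def with_system_message(messages, system_text, use_system):
    """Prepend the System Message before calling the assistant, as the
    Use System Message toggle does (a simplified model)."""
    if not (use_system and system_text):
        return list(messages)
    return [{"role": "system", "content": system_text}] + messages
```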

  • Use the Chat LOP for static or semi-static conversation templates. For dynamic single-message injection, use the Add Message LOP instead.
  • Wire multiple Chat LOPs in series to build layered conversation contexts for an Agent.
  • Use Conversation ID on the Conversation page to tag conversations for tracking across your network.
  • Pulse Clear Conversation to reset the sequence to a single empty user message.
Active (Active) op('chat').par.Active Toggle
Default: False
Call Assistant (Callassistant) op('chat').par.Callassistant Pulse
Default: False
Call User (Calluser) op('chat').par.Calluser Pulse
Default: False
Input Handling (Inputhandling) op('chat').par.Inputhandling Menu
Default: prepend
Options: prepend, append, index, none
Insert Index (Insertindex) op('chat').par.Insertindex Int
Default: 0
Range: 0 to 1
Slider Range: 0 to 1
Message (Message) op('chat').par.Message Sequence
Default: 0
Role (Message0role) op('chat').par.Message0role Menu
Default: user
Options: user, assistant, system
Text (Message0text) op('chat').par.Message0text Str
Default: "" (Empty String)
Output Settings Header
Max Tokens (Maxtokens) op('chat').par.Maxtokens Int
Default: 256
Range: 0 to 1
Slider Range: 16 to 4096
Temperature (Temperature) op('chat').par.Temperature Float
Default: 0.0
Range: 0 to 1
Slider Range: 0 to 1
Model Selection Header
Use Model From (Modelselection) op('chat').par.Modelselection Menu
Default: chattd_model
Options: chattd_model, custom_model, controller_model
Controller [ Model ] (Modelcontroller) op('chat').par.Modelcontroller OP
Default: "" (Empty String)
Select API Server (Apiserver) op('chat').par.Apiserver Menu
Default: openrouter
Options: openrouter, openai, groq, ollama, gemini, lmstudio, custom
AI Model (Model) op('chat').par.Model StrMenu
Default: llama-3.2-11b-vision-preview
Menu Options:
  • gemini-1.5-flash (gemini-1.5-flash)
  • gemini-1.5-flash-002 (gemini-1.5-flash-002)
  • gemini-1.5-flash-8b (gemini-1.5-flash-8b)
  • gemini-1.5-flash-8b-001 (gemini-1.5-flash-8b-001)
  • gemini-1.5-flash-8b-latest (gemini-1.5-flash-8b-latest)
  • gemini-1.5-flash-latest (gemini-1.5-flash-latest)
  • gemini-1.5-pro (gemini-1.5-pro)
  • gemini-1.5-pro-002 (gemini-1.5-pro-002)
  • gemini-1.5-pro-latest (gemini-1.5-pro-latest)
  • gemini-2.0-flash (gemini-2.0-flash)
  • gemini-2.0-flash-001 (gemini-2.0-flash-001)
  • gemini-2.0-flash-exp (gemini-2.0-flash-exp)
  • gemini-2.0-flash-exp-image-generation (gemini-2.0-flash-exp-image-generation)
  • gemini-2.0-flash-lite (gemini-2.0-flash-lite)
  • gemini-2.0-flash-lite-001 (gemini-2.0-flash-lite-001)
  • gemini-2.0-flash-lite-preview (gemini-2.0-flash-lite-preview)
  • gemini-2.0-flash-lite-preview-02-05 (gemini-2.0-flash-lite-preview-02-05)
  • gemini-2.0-flash-preview-image-generation (gemini-2.0-flash-preview-image-generation)
  • gemini-2.0-flash-thinking-exp (gemini-2.0-flash-thinking-exp)
  • gemini-2.0-flash-thinking-exp-01-21 (gemini-2.0-flash-thinking-exp-01-21)
  • gemini-2.0-flash-thinking-exp-1219 (gemini-2.0-flash-thinking-exp-1219)
  • gemini-2.0-pro-exp (gemini-2.0-pro-exp)
  • gemini-2.0-pro-exp-02-05 (gemini-2.0-pro-exp-02-05)
  • gemini-2.5-flash (gemini-2.5-flash)
  • gemini-2.5-flash-lite (gemini-2.5-flash-lite)
  • gemini-2.5-flash-lite-preview-06-17 (gemini-2.5-flash-lite-preview-06-17)
  • gemini-2.5-flash-preview-05-20 (gemini-2.5-flash-preview-05-20)
  • gemini-2.5-flash-preview-tts (gemini-2.5-flash-preview-tts)
  • gemini-2.5-pro (gemini-2.5-pro)
  • gemini-2.5-pro-preview-03-25 (gemini-2.5-pro-preview-03-25)
  • gemini-2.5-pro-preview-05-06 (gemini-2.5-pro-preview-05-06)
  • gemini-2.5-pro-preview-06-05 (gemini-2.5-pro-preview-06-05)
  • gemini-2.5-pro-preview-tts (gemini-2.5-pro-preview-tts)
  • gemini-exp-1206 (gemini-exp-1206)
  • gemma-3-12b-it (gemma-3-12b-it)
  • gemma-3-1b-it (gemma-3-1b-it)
  • gemma-3-27b-it (gemma-3-27b-it)
  • gemma-3-4b-it (gemma-3-4b-it)
  • gemma-3n-e2b-it (gemma-3n-e2b-it)
  • gemma-3n-e4b-it (gemma-3n-e4b-it)
  • learnlm-2.0-flash-experimental (learnlm-2.0-flash-experimental)
Search (Search) op('chat').par.Search Toggle
Default: False
Model Search (Modelsearch) op('chat').par.Modelsearch Str
Default: "" (Empty String)
Use System Message (Usesystemmessage) op('chat').par.Usesystemmessage Toggle
Default: False
System Message (Systemmessage) op('chat').par.Systemmessage Str
Default: "" (Empty String)
Use User Prompt (Useuserprompt) op('chat').par.Useuserprompt Toggle
Default: False
User Prompt (Userprompt) op('chat').par.Userprompt Str
Default: "" (Empty String)
Clear Conversation (Clearconversation) op('chat').par.Clearconversation Pulse
Default: False
Load from Input (Loadfrominput) op('chat').par.Loadfrominput Pulse
Default: False
Conversation ID (Conversationid) op('chat').par.Conversationid Str
Default: "" (Empty String)
Callbacks Header
Callback DAT (Callbackdat) op('chat').par.Callbackdat DAT
Default: ChatTD_callbacks
Edit Callbacks (Editcallbacksscript) op('chat').par.Editcallbacksscript Pulse
Default: False
Create Callbacks (Createpulse) op('chat').par.Createpulse Pulse
Default: False
onTaskStart (Ontaskstart) op('chat').par.Ontaskstart Toggle
Default: False
onTaskComplete (Ontaskcomplete) op('chat').par.Ontaskcomplete Toggle
Default: False
Textport Debug Callbacks (Debugcallbacks) op('chat').par.Debugcallbacks Menu
Default: Full Details
Options: None, Errors Only, Basic Info, Full Details
Available Callbacks:
  • onTaskStart
  • onTaskComplete
Example Callback Structure:
def onTaskStart(info):
    # Called when a Call Assistant or Call User request begins
    # info contains: op, callType
    pass

def onTaskComplete(info):
    # Called when the AI response is received and added to the conversation
    # info contains: op, result, conversationID
    pass
v2.0.0 2025-07-30
  • Upgraded model page for release 2.0.0
v1.0.0 2024-11-06
  • Initial release