llm and everything: Update ToolResult to use []Content instead of string for multimodal support
This was a journey. The sketch-generated summary below is acceptable,
but I want to tell you about it in my voice too. The goal was to send
screenshots to Claude, so that it could... look at them. Currently
the take-screenshot and read-screenshot tools are separate, and they'll
need to be renamed/prompt-engineered a bit, but that's all fine.
The miserable part was that we had to change the return value
of tools from string to []Content, and this change crosses several layers:
- llm.Tool
- llm.Content
- ant.Content & openai and gemini friends
- AgentMessage [we left this alone]
Extra fun is that Claude's API for sending images has nested Content
fields, and the empty string and a missing value need to be distinguished
for the Text field (because lots of shell commands return the empty string!).
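To make the empty-vs-missing distinction concrete, here is a minimal sketch of the wire shapes involved. Field names follow Anthropic's public Messages API docs; the struct names are illustrative, not the adapter's real ones. Using one struct per block type sidesteps the problem: text blocks always emit "text" (no omitempty), so "" survives, and image blocks never carry a spurious empty "text" field.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Sketch only: field names follow Anthropic's public Messages API;
// struct names here are assumptions, not the adapter's real ones.
type antSource struct {
	Type      string `json:"type"`       // "base64"
	MediaType string `json:"media_type"` // e.g. "image/png"
	Data      string `json:"data"`       // base64-encoded image bytes
}

type antTextContent struct {
	Type string `json:"type"` // always "text"
	Text string `json:"text"` // deliberately NOT omitempty: "" must survive
}

type antImageContent struct {
	Type   string    `json:"type"` // always "image"
	Source antSource `json:"source"`
}

// textJSON marshals a text block; even an empty string round-trips.
func textJSON(s string) string {
	b, _ := json.Marshal(antTextContent{Type: "text", Text: s})
	return string(b)
}

func main() {
	fmt.Println(textJSON("")) // {"type":"text","text":""}
	img, _ := json.Marshal(antImageContent{
		Type:   "image",
		Source: antSource{Type: "base64", MediaType: "image/png", Data: "iVBORw0KGgo="},
	})
	fmt.Println(string(img))
}
```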
For the UI, I made us flatten the results into a string, dropping
images. Passing the full content arrays through would have been yet more
churn for not much obvious benefit. Plus, it was going to break skaband's
compatibility, and ... yet more work.
OpenAI and Gemini don't obviously support images in this same way,
so they just don't get the tools.
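For orientation, the field set the tests below exercise suggests roughly this shape for llm.Content. This is a sketch reconstructed from usage; the real definitions live in package llm and may differ in detail.

```go
package main

import "fmt"

// Sketch reconstructed from how the tests use llm.Content;
// treat names and field sets as assumptions.
type ContentType string

const (
	ContentTypeText       ContentType = "text"
	ContentTypeToolUse    ContentType = "tool_use"
	ContentTypeToolResult ContentType = "tool_result"
)

type Content struct {
	Type       ContentType
	Text       string    // for text content
	MediaType  string    // MIME type, e.g. "image/png", for image content
	Data       string    // base64-encoded bytes, for image content
	ToolName   string    // for tool_use content
	ToolResult []Content // nested: Claude's tool_result content is itself a list
}

// TextContent mirrors the helper constructor the summary mentions;
// an ImageContent sibling would set MediaType and Data instead.
func TextContent(s string) Content {
	return Content{Type: ContentTypeText, Text: s}
}

func main() {
	c := TextContent("hello")
	fmt.Println(c.Type, c.Text) // text hello
}
```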
~~~~~~~~~~ Sketch said:
This architectural change transforms tool results from plain strings to []Content arrays, enabling multimodal interaction in the system. Key changes include:
- Core structural changes:
- Modified ToolResult type from string to []Content across all packages
- Added MediaType field to Content struct for MIME type support
- Created TextContent and ImageContent helper functions
- Updated all tool.Run implementations to return []Content
- Image handling:
- Implemented base64 image support in Anthropic adapter
- Added proper media type detection and content formatting
- Created browser_read_image tool for displaying screenshots
- Updated browser_screenshot to provide usable image paths
- Adapter improvements:
- Updated all LLM adapters (ANT, OAI, GEM) to handle content arrays
- Added specialized image content handling in the Anthropic adapter
- Ensured proper JSON serialization/deserialization for all content types
- Improved test coverage for content arrays
- UI enhancements:
- Added omitempty tags to reduce JSON response size
- Updated TypeScript types to handle array content
- Made field naming consistent (tool_error vs is_error)
- Preserved backward compatibility for existing consumers
Co-Authored-By: sketch <hello@sketch.dev>
Change-ID: s1a2b3c4d5e6f7g8h
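The test table in the diff below pins down contentToString's behavior: concatenate text blocks in order and recurse into nested tool results, dropping everything else. A minimal sketch consistent with those cases (types stubbed locally; the real function lives in package loop):

```go
package main

import (
	"fmt"
	"strings"
)

// Minimal stubs; the real types live in package llm.
type ContentType string

const (
	ContentTypeText       ContentType = "text"
	ContentTypeToolResult ContentType = "tool_result"
)

type Content struct {
	Type       ContentType
	Text       string
	ToolResult []Content
}

// contentToString concatenates text blocks in order and recurses into
// nested tool results; images and tool_use blocks contribute nothing.
func contentToString(contents []Content) string {
	var sb strings.Builder
	for _, c := range contents {
		switch c.Type {
		case ContentTypeText:
			sb.WriteString(c.Text)
		case ContentTypeToolResult:
			sb.WriteString(contentToString(c.ToolResult))
		}
	}
	return sb.String()
}

func main() {
	fmt.Println(contentToString([]Content{
		{Type: ContentTypeText, Text: "outer "},
		{Type: ContentTypeToolResult, ToolResult: []Content{
			{Type: ContentTypeText, Text: "inner"},
		}},
	})) // outer inner
}
```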
diff --git a/loop/agent_test.go b/loop/agent_test.go
index 72e7ccb..ce44352 100644
--- a/loop/agent_test.go
+++ b/loop/agent_test.go
@@ -680,3 +680,117 @@
t.Errorf("Expected to eventually reach StateEndOfTurn, but never did")
}
}
+
+func TestContentToString(t *testing.T) {
+ tests := []struct {
+ name string
+ contents []llm.Content
+ want string
+ }{
+ {
+ name: "empty",
+ contents: []llm.Content{},
+ want: "",
+ },
+ {
+ name: "single text content",
+ contents: []llm.Content{
+ {Type: llm.ContentTypeText, Text: "hello world"},
+ },
+ want: "hello world",
+ },
+ {
+ name: "multiple text content",
+ contents: []llm.Content{
+ {Type: llm.ContentTypeText, Text: "hello "},
+ {Type: llm.ContentTypeText, Text: "world"},
+ },
+ want: "hello world",
+ },
+ {
+ name: "mixed content types",
+ contents: []llm.Content{
+ {Type: llm.ContentTypeText, Text: "hello "},
+ {Type: llm.ContentTypeText, MediaType: "image/png", Data: "base64data"},
+ {Type: llm.ContentTypeText, Text: "world"},
+ },
+ want: "hello world",
+ },
+ {
+ name: "non-text content only",
+ contents: []llm.Content{
+ {Type: llm.ContentTypeToolUse, ToolName: "example"},
+ },
+ want: "",
+ },
+ {
+ name: "nested tool result",
+ contents: []llm.Content{
+ {Type: llm.ContentTypeText, Text: "outer "},
+ {Type: llm.ContentTypeToolResult, ToolResult: []llm.Content{
+ {Type: llm.ContentTypeText, Text: "inner"},
+ }},
+ },
+ want: "outer inner",
+ },
+ {
+ name: "deeply nested tool result",
+ contents: []llm.Content{
+ {Type: llm.ContentTypeToolResult, ToolResult: []llm.Content{
+ {Type: llm.ContentTypeToolResult, ToolResult: []llm.Content{
+ {Type: llm.ContentTypeText, Text: "deeply nested"},
+ }},
+ }},
+ },
+ want: "deeply nested",
+ },
+ }
+
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ if got := contentToString(tt.contents); got != tt.want {
+ t.Errorf("contentToString() = %q, want %q", got, tt.want)
+ }
+ })
+ }
+}
+
+func TestPushToOutbox(t *testing.T) {
+ // Create a new agent
+ a := &Agent{
+ outstandingLLMCalls: make(map[string]struct{}),
+ outstandingToolCalls: make(map[string]string),
+ stateMachine: NewStateMachine(),
+ subscribers: make([]chan *AgentMessage, 0),
+ }
+
+ // Create a channel to receive messages
+ messageCh := make(chan *AgentMessage, 1)
+
+ // Add the channel to the subscribers list
+ a.mu.Lock()
+ a.subscribers = append(a.subscribers, messageCh)
+ a.mu.Unlock()
+
+ // Simulate the string contentToString would produce from a tool
+ // result containing nested content ("test result" + "nested result");
+ // this test exercises pushToOutbox, not contentToString itself.
+ resultText := "test resultnested result"
+
+ m := AgentMessage{
+ Type: ToolUseMessageType,
+ ToolResult: resultText,
+ }
+
+ // Push the message to the outbox
+ a.pushToOutbox(context.Background(), m)
+
+ // Receive the message from the subscriber
+ received := <-messageCh
+
+ // Check that the Content field carries the ToolResult text through unchanged
+ expected := resultText
+ if received.Content != expected {
+ t.Errorf("Expected Content to be %q, got %q", expected, received.Content)
+ }
+}