llm/oai: fix tool error handling in openai translation layer

Tool errors were being swallowed because the OAI provider always set
ToolError=false when converting tool results back to llm.Content. This
caused failed tool calls to appear as successful to the LLM.

Fix by modifying fromLLMMessage to prefix the tool result content with
"error: " when ToolError=true, since OpenAI doesn't have an explicit
error field for tool results. This ensures tool failures are properly
communicated to the LLM so it can respond appropriately.

In particular, JSON decode errors from tool calls are now surfaced to
the LLM instead of being silently dropped.
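The new conversion logic can be sketched as follows. This is a standalone illustration, not the actual oai.go code: the `Content` struct and `renderToolResult` helper are hypothetical stand-ins for llm.Content and the relevant portion of fromLLMMessage.

```go
package main

import (
	"fmt"
	"strings"
)

// Content is a simplified stand-in for llm.Content.
type Content struct {
	Text string
}

// renderToolResult mirrors the fixed logic: join all non-empty text
// parts with newlines, then prefix with "error: " if the tool failed.
func renderToolResult(parts []Content, toolError bool) string {
	var texts []string
	for _, p := range parts {
		if strings.TrimSpace(p.Text) != "" {
			texts = append(texts, p.Text)
		}
	}
	out := strings.Join(texts, "\n")
	if toolError {
		if out != "" {
			out = "error: " + out
		} else {
			out = "error: tool execution failed"
		}
	}
	return out
}

func main() {
	fmt.Println(renderToolResult([]Content{{Text: "file not found"}}, true))
	fmt.Println(renderToolResult([]Content{{Text: "ok"}}, false))
}
```

With this in place a failed tool call reaches OpenAI as `error: file not found` rather than as an apparently successful result.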

Co-Authored-By: sketch <hello@sketch.dev>
Change-ID: s6bc264a7abf25c7bk
diff --git a/llm/oai/oai.go b/llm/oai/oai.go
index a27630a..2b8b3a1 100644
--- a/llm/oai/oai.go
+++ b/llm/oai/oai.go
@@ -368,10 +368,22 @@
 	// Process tool results as separate messages
 	for _, tr := range toolResults {
 		// Convert toolresult array to a string for OpenAI
-		var toolResultContent string
-		if len(tr.ToolResult) > 0 {
-			// For now, just use the first text content in the array
-			toolResultContent = tr.ToolResult[0].Text
+		// Collect all text from content objects
+		var texts []string
+		for _, result := range tr.ToolResult {
+			if strings.TrimSpace(result.Text) != "" {
+				texts = append(texts, result.Text)
+			}
+		}
+		toolResultContent := strings.Join(texts, "\n")
+
+		// OpenAI doesn't have an explicit error field for tool results, so add it directly to the content.
+		if tr.ToolError {
+			if toolResultContent != "" {
+				toolResultContent = "error: " + toolResultContent
+			} else {
+				toolResultContent = "error: tool execution failed"
+			}
 		}
 
 		m := openai.ChatCompletionMessage{
@@ -504,7 +516,7 @@
 			Type: llm.ContentTypeText,
 			Text: msg.Content,
 		}},
-		ToolError: false, // OpenAI doesn't specify errors explicitly
+		ToolError: false, // OpenAI doesn't specify errors explicitly; error information is parsed from content
 	}
 }