| commit | b8a8f35348326b8daf69e31efc77d37fac4bd276 | [log] [tgz] |
|---|---|---|
| author | Philip Zeyliger <philip@bold.dev> | Mon Jun 02 07:39:37 2025 -0700 |
| committer | Philip Zeyliger <philip@bold.dev> | Mon Jun 02 17:01:19 2025 -0700 |
| tree | fa7a59c5951dbd3ecb5e6bf840ac7b45167a9e74 | |
| parent | b5739403b1b6ec6fac909b258bd47ce5a338940e [diff] |
loop: implement comprehensive conversation compaction system
"comprehensive" is over-stating it. Currently, users get
the dreaded:
error: failed to continue conversation: status 400 Bad Request:
{"type":"error","error":{"type":"invalid_request_error","message":"input
length and max_tokens exceed context limit: 197257 + 8192 > 200000,
decrease input length or max_tokens and try again"}}
That's... annoying. Instead, let's compact automatically. I was going to
start with adding a /compact command or button, but it turns out that
teasing that through the system is annoying, because the agent state
machine is intended to be somewhat single-threaded, and what do you do
when a /compact comes in while other things are going on? It's possible,
but it was genuinely easier to prompt my way into doing it
automatically.
I originally set the threshold to 75%, but given that 8192/200000 is only
about 4% (so input can safely grow to roughly 96% of the window before a
request fails), I just changed it to 94%.
We'll see how well it works!
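To make the arithmetic concrete, here is a minimal sketch of the trigger
described above, using the numbers from the error message; the constant and
function names are illustrative assumptions, not the actual code:

```go
// Illustrative only: compact once input tokens approach the point where
// input + max_tokens would exceed the model's context window.
package main

import "fmt"

const (
	contextWindow    = 200000 // model context limit, in tokens
	maxOutputTokens  = 8192   // max_tokens reserved for the model's reply (~4%)
	compactThreshold = 0.94   // compact once input reaches 94% of the window
)

func shouldCompact(inputTokens int) bool {
	return float64(inputTokens) >= compactThreshold*float64(contextWindow)
}

func main() {
	fmt.Println(shouldCompact(150000)) // false: plenty of headroom left
	fmt.Println(shouldCompact(197257)) // true: the request that produced the 400 above
}
```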
~~~~
Implement automatic conversation compaction to manage token limits and prevent
context overflow, with enhanced UX feedback and accurate token tracking.
Problem Analysis:
Large conversations could exceed model context limits, causing failures
when total tokens approached or exceeded the maximum context window.
Without automatic management, users would experience unexpected errors
and conversation interruptions in long sessions.
Implementation:
1. Automatic Compaction Infrastructure:
- Added ShouldCompact() method to detect when compaction is needed (see the sketch after the Technical Details section)
- Configurable token thresholds for different compaction triggers
- Integration with existing loop state machine for seamless operation
2. Accurate Token Counting:
- Enhanced context size estimation using actual token usage from LLM responses
- Track real token consumption rather than relying on estimates
- Account for tool calls, system prompts, and conversation history
3. Compaction Logic and Timing:
- Triggered at 94% of the context limit (configurable threshold; originally 75%)
- Preserves recent conversation context while compacting older messages
- Maintains conversation continuity and coherence
4. Enhanced User Experience:
- Visual indicators in webui when compaction occurs
- Token count display showing current usage vs limits
- Clear messaging about compaction status and reasoning
- Timeline updates to reflect compacted conversation state
5. UI Component Updates:
- sketch-timeline.ts: Added compaction status display
- sketch-timeline-message.ts: Enhanced message rendering for compacted state
- sketch-app-shell.ts: Token count integration and status updates
Technical Details:
- Thread-safe implementation with proper mutex usage
- Preserves conversation metadata and essential context
- Configurable compaction strategies for different use cases
- Comprehensive error handling and fallback behavior
- Integration with existing LLM provider implementations (Claude, OpenAI, Gemini)
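For illustration, a hedged sketch of how ShouldCompact() and the
response-driven token tracking described above might fit together; the Agent
type, its fields, and the OnResponse hook are assumptions made for this
example, not the actual implementation:

```go
// Assumed shape only: a loop-level agent that records real token usage from
// each LLM response and checks it against a configurable threshold.
package loop

import "sync"

type Agent struct {
	mu            sync.Mutex
	contextWindow int     // e.g. 200000 for Claude
	threshold     float64 // e.g. 0.94
	inputTokens   int     // actual usage reported by the most recent response
}

// OnResponse records the input-token usage reported by the LLM provider,
// rather than relying on a local estimate.
func (a *Agent) OnResponse(reportedInputTokens int) {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.inputTokens = reportedInputTokens
}

// ShouldCompact reports whether older messages should be summarized before
// the next request is sent.
func (a *Agent) ShouldCompact() bool {
	a.mu.Lock()
	defer a.mu.Unlock()
	return float64(a.inputTokens) >= a.threshold*float64(a.contextWindow)
}
```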
Testing:
- Added unit tests for ShouldCompact logic with various scenarios
- Verified compaction triggers at correct token thresholds
- Confirmed UI updates reflect compaction status accurately
- All existing tests continue to pass without regression
Benefits:
- Prevents context overflow errors in long conversations
- Maintains conversation quality while managing resource limits
- Provides clear user feedback about system behavior
- Enables unlimited conversation length with automatic management
- Improves overall system reliability and user experience
This system ensures sketch can handle conversations of any length while
maintaining performance and providing transparent feedback to users about
token usage and compaction activities.
Co-Authored-By: sketch <hello@sketch.dev>
Change-ID: s28a53f4e442aa169k
Sketch is an agentic coding tool. It draws the 🦉
Sketch runs in your terminal, has a web UI, understands your code, and helps you get work done. To keep your environment pristine, sketch starts a docker container and outputs its work onto a branch in your host git repository.
Sketch helps with most programming environments, but Sketch has extra goodies for Go.
go install sketch.dev/cmd/sketch@latest
sketch
Currently, Sketch runs on macOS and Linux. It uses Docker for containers.
| Platform | Installation |
|---|---|
| macOS | brew install colima (or Docker Desktop/Orbstack) |
| Linux | apt install docker.io (or equivalent for your distro) |
| WSL2 | Install Docker Desktop for Windows (docker entirely inside WSL2 is tricky) |
The sketch.dev service provides access to an LLM and gives you a way to access the web UI from anywhere.
Start Sketch by running sketch in a Git repository. It will open your browser to the Sketch chat interface, but you can also use the CLI interface. Use -open=false if you want to use just the CLI interface.
Ask Sketch about your codebase or ask it to implement a feature. It may take a little while for Sketch to do its work, so hit the bell (🔔) icon to enable browser notifications. We won't spam you or anything; it will notify you when the Sketch agent's turn is done, and there's something to look at.
When you start Sketch, it creates a Docker container sandbox for the agent to work in and pushes the agent's work to sketch/* branches in your host repository.
This design lets you run multiple sketches in parallel since they each have their own sandbox. It also lets Sketch work without worry: it can trash its own container, but it can't trash your machine.
Sketch's agentic loop uses tool calls (mostly shell commands, but also a handful of other important tools) to allow the LLM to interact with your codebase.
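As a rough illustration of that loop (this is the general pattern only, not
sketch's actual code; the types and function signatures here are invented for
the example):

```go
// Generic shape of an agentic tool-call loop: ask the model, run any tools
// it requests, feed the results back, and stop when it requests none.
package agent

type ToolCall struct {
	Name  string // e.g. "bash"
	Input string // e.g. the shell command to run
}

type Turn struct {
	Text      string     // assistant text shown to the user
	ToolCalls []ToolCall // tools the model wants to run, if any
}

func RunLoop(ask func(history []string) Turn, runTool func(ToolCall) string) {
	var history []string
	for {
		turn := ask(history)
		history = append(history, turn.Text)
		if len(turn.ToolCalls) == 0 {
			return // the agent's turn is done; control goes back to the user
		}
		for _, tc := range turn.ToolCalls {
			// Tool output (e.g. shell results) becomes part of the conversation.
			history = append(history, runTool(tc))
		}
	}
}
```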
Sketch is trained to make Git commits. When those happen, they are automatically pushed to the git repository where you started sketch, on branches named sketch/*.
Finding Sketch branches:
git branch -a --sort=creatordate | grep sketch/ | tail
The UI keeps track of the latest branch it pushed and displays it prominently. You can use standard Git workflows to pull those branches into your workspace:
git cherry-pick $(git merge-base origin/main sketch/foo)..sketch/foo
or merge the branch
git merge sketch/foo
or reset to the branch
git reset --hard sketch/foo
I.e., use the same workflows you would if you were pulling in a friend's Pull Request.
Advanced: You can ask Sketch to git fetch sketch-host and rebase onto another commit. The sketch-host remote points at the repository where you started Sketch, and we do a bit of git fetch refspec configuration to make origin/main work as a git reference.
Don't be afraid of asking Sketch to help you rebase, merge/squash commits, rewrite commit messages, and so forth; it's good at it!
The diff view shows you changes since Sketch started. Leaving comments on lines adds them to the chat box, and, when you hit Send (at the bottom of the page), Sketch goes to work addressing your comments.
You can interact directly with the container in three ways, including SSH and VS Code. For SSH, use the container's hostname, e.g. ssh sketch-ilik-eske-tcha-lott; we have automatically configured your SSH configuration to make these special hostnames work.
Using SSH (and/or VSCode) allows you to forward ports from the container to your machine. For example, if you want to start your development webserver, you can do something like this:
# Forward container port 8888 to local port 8000
ssh -L8000:localhost:8888 sketch-ilik-epor-tfor-ward
go run ./cmd/server
This makes http://localhost:8000/ on your machine point to localhost:8888 inside the container.
You can ask Sketch to browse a web page and take screenshots. There are tools both for taking screenshots and "reading images", the latter of which sends the image to the LLM. This functionality is handy if you're working on a web page and want to see what the in-progress change looks like.
Docker images, containers, and so forth tend to pile up. Ask Docker to prune unused images and containers:
docker system prune -a
See CONTRIBUTING.md for development guidelines.
Sketch is open source. It is right here in this repository! Have a look around and mod away.
If you want to run Sketch entirely without the sketch.dev service, you can set the flag -skaband-addr="" and then provide an ANTHROPIC_API_KEY environment variable. (More LLM services coming soon!)