Event Streaming
Real-time tool event streaming from backend to frontend
Mix implements a sophisticated 4-layer streaming architecture that provides real-time visibility into AI reasoning, tool execution, and results. This system transforms raw LLM provider events into user-facing updates via Server-Sent Events (SSE).
Architecture Overview
Key Insight: Events flow through distinct transformation layers, with tool execution happening AFTER provider streaming completes.
Event Flow Layers
1. LLM Provider Events (Raw Stream)
Location: go_backend/internal/llm/provider/provider.go
Raw streaming events from LLM providers (Anthropic, OpenAI):
- EventThinkingDelta - Claude's reasoning chunks
- EventContentDelta - Response text chunks
- EventToolUseStart - Tool invocation begins
- EventToolUseStop - Tool execution completes
- EventComplete - Final response ready
- EventError - Provider-level errors
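The provider layer can be pictured as a typed event stream. The sketch below is illustrative, assuming a string-backed event type and a minimal payload struct; the actual fields in provider.go may differ:

```go
package main

import "fmt"

// EventType enumerates the raw streaming events a provider can emit.
// The names mirror the list above; the underlying values are assumptions.
type EventType string

const (
	EventThinkingDelta EventType = "thinking_delta"
	EventContentDelta  EventType = "content_delta"
	EventToolUseStart  EventType = "tool_use_start"
	EventToolUseStop   EventType = "tool_use_stop"
	EventComplete      EventType = "complete"
	EventError         EventType = "error"
)

// ProviderEvent is one chunk of a provider stream.
type ProviderEvent struct {
	Type    EventType
	Content string // delta text for thinking/content events
	Err     error  // set only for EventError
}

func main() {
	// A consumer typically switches on the event type as chunks arrive.
	for _, ev := range []ProviderEvent{
		{Type: EventThinkingDelta, Content: "considering options"},
		{Type: EventContentDelta, Content: "Hello"},
		{Type: EventComplete},
	} {
		switch ev.Type {
		case EventThinkingDelta:
			fmt.Println("thinking:", ev.Content)
		case EventContentDelta:
			fmt.Println("content:", ev.Content)
		case EventComplete:
			fmt.Println("done")
		}
	}
}
```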
2. Agent Events (Session-Aware Wrappers)
Location: go_backend/internal/llm/agent/agent.go:29-36
Provider events enhanced with session context and streaming state:
- AgentEventTypeThinking - Reasoning with session ID
- AgentEventTypeResponse - Tool/content updates
- AgentEventTypeError - Application errors
- AgentEventTypeSummarize - Session summarization
- AgentEventTypeToolExecutionStart - Backend tool execution begins
- AgentEventTypeToolExecutionComplete - Backend tool execution finished
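The agent layer's job is to wrap raw provider payloads with session context before publishing. A minimal sketch, assuming a `SessionID` field and a string payload (the real AgentEvent in agent.go likely carries richer data):

```go
package main

import "fmt"

// AgentEventType mirrors the event names listed above.
type AgentEventType string

const (
	AgentEventTypeThinking              AgentEventType = "thinking"
	AgentEventTypeResponse              AgentEventType = "response"
	AgentEventTypeError                 AgentEventType = "error"
	AgentEventTypeSummarize             AgentEventType = "summarize"
	AgentEventTypeToolExecutionStart    AgentEventType = "tool_execution_start"
	AgentEventTypeToolExecutionComplete AgentEventType = "tool_execution_complete"
)

// AgentEvent wraps a provider-level payload with session context.
type AgentEvent struct {
	Type      AgentEventType
	SessionID string
	Payload   string
}

// wrapThinking turns a raw thinking delta into a session-aware agent event.
func wrapThinking(sessionID, delta string) AgentEvent {
	return AgentEvent{Type: AgentEventTypeThinking, SessionID: sessionID, Payload: delta}
}

func main() {
	ev := wrapThinking("sess-123", "weighing tool choice")
	fmt.Printf("%s [%s]: %s\n", ev.Type, ev.SessionID, ev.Payload)
}
```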
3. Pub/Sub Broadcasting
Location: go_backend/internal/pubsub/broker.go
- Multi-client support: Broadcasts to all connected sessions
- Buffered channels: Prevents blocking on slow clients
- Subscription management: Session-specific filtering
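The three broker properties above (multi-client fan-out, buffered channels, per-session filtering) can be sketched with buffered Go channels. This is a simplified illustration, not the broker.go implementation; the drop-on-full policy is one common way to avoid blocking on slow clients:

```go
package main

import (
	"fmt"
	"sync"
)

// Broker broadcasts events to all subscribers of a session
// without blocking on slow ones.
type Broker struct {
	mu   sync.Mutex
	subs map[string][]chan string // session ID -> subscriber channels
}

func NewBroker() *Broker {
	return &Broker{subs: make(map[string][]chan string)}
}

// Subscribe returns a buffered channel carrying events for one session.
func (b *Broker) Subscribe(sessionID string) <-chan string {
	ch := make(chan string, 64) // buffer absorbs bursts from the agent
	b.mu.Lock()
	b.subs[sessionID] = append(b.subs[sessionID], ch)
	b.mu.Unlock()
	return ch
}

// Publish fans an event out to every subscriber of the session.
// A full buffer means the client is too slow; the event is dropped
// rather than stalling every other subscriber.
func (b *Broker) Publish(sessionID, event string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, ch := range b.subs[sessionID] {
		select {
		case ch <- event:
		default: // slow client: drop instead of blocking
		}
	}
}

func main() {
	b := NewBroker()
	ch := b.Subscribe("sess-1")
	b.Publish("sess-1", "tool_execution_start")
	fmt.Println(<-ch) // prints "tool_execution_start"
}
```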
4. SSE Client Delivery
Location: go_backend/internal/http/sse_events.go
User-facing events delivered to web/desktop clients:
- thinking - AI reasoning in real time
- tool - Tool declaration and final results
- tool_execution_start - Backend tool execution begins
- tool_execution_complete - Backend tool execution finished
- complete - Final response ready
- error - Error handling
- permission - Interactive permission requests
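On the wire, each of these events is rendered in the Server-Sent Events format: an `event:` line naming the type, one or more `data:` lines with the payload, and a blank line terminating the frame. A minimal formatter (illustrative; the actual serialization in the SSE handler may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// formatSSE renders one event in Server-Sent Events wire format.
// Multi-line payloads become multiple "data:" lines, per the SSE spec.
func formatSSE(eventType, data string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "event: %s\n", eventType)
	for _, line := range strings.Split(data, "\n") {
		fmt.Fprintf(&b, "data: %s\n", line)
	}
	b.WriteString("\n") // blank line ends the frame
	return b.String()
}

func main() {
	fmt.Print(formatSSE("thinking", `{"delta":"weighing options"}`))
}
```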
Key Streaming Scenarios
1. AI Reasoning (Thinking)
When: During Claude's reasoning process
Purpose: Transparent AI decision-making
2. Tool Execution Flow
When: File operations, bash commands, API calls
Purpose: Real-time tool execution feedback with start/completion events
Key Point: Tools execute AFTER the provider finishes streaming, with dedicated start/completion events
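The ordering guarantee above can be sketched as a two-phase loop: drain the provider stream first, collecting any tool calls, and only then execute the tools, each bracketed by start/complete events. All names here are illustrative, not the agent.go API:

```go
package main

import "fmt"

// runTurn sketches the two-phase ordering: phase 1 consumes the entire
// provider stream; phase 2 runs the collected tools afterwards.
func runTurn(stream []string, emit func(string)) {
	var pendingTools []string
	for _, ev := range stream { // phase 1: drain the provider stream
		if ev == "tool_use" {
			pendingTools = append(pendingTools, ev)
		}
		emit("provider:" + ev)
	}
	for range pendingTools { // phase 2: only now execute tools
		emit("tool_execution_start")
		emit("tool_execution_complete")
	}
}

func main() {
	runTurn([]string{"content_delta", "tool_use", "complete"}, func(e string) {
		fmt.Println(e)
	})
}
```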
3. Permission System Integration
When: File modifications, system operations
Purpose: Security boundaries with error handling
Key Point: Permission checking is REACTIVE - tools fail and return errors; there are no interactive dialogs
Key Files
- mix_agent/internal/llm/provider/provider.go: Provider layer that handles LLM communication and streams raw provider events via StreamResponse() method
- mix_agent/internal/llm/agent/agent.go: Agent orchestration layer that processes provider events, executes tools, and publishes AgentEvent types
- mix_agent/internal/pubsub/broker.go: Core pubsub broadcasting system that manages event subscriptions and distribution across multiple subscribers
- mix_agent/internal/http/sse.go: SSE client delivery layer that manages persistent HTTP connections and streams events to frontend clients