Event Streaming

Real-time tool event streaming from backend to frontend

Mix implements a sophisticated 4-layer streaming architecture that provides real-time visibility into AI reasoning, tool execution, and results. This system transforms raw LLM provider events into user-facing updates via Server-Sent Events (SSE).

Architecture Overview

Key Insight: Events flow through distinct transformation layers, with tool execution happening AFTER provider streaming completes.

Event Flow Layers

1. LLM Provider Events (Raw Stream)

Location: go_backend/internal/llm/provider/provider.go

Raw streaming events from LLM providers (Anthropic, OpenAI):

  • EventThinkingDelta - Claude's reasoning chunks
  • EventContentDelta - Response text chunks
  • EventToolUseStart - Tool invocation begins
  • EventToolUseStop - Tool execution completes
  • EventComplete - Final response ready
  • EventError - Provider-level errors
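A minimal sketch of how these raw events might be modeled and consumed as a channel stream. The type and constant names below mirror the list above, but the struct fields and channel-based API are assumptions, not the actual definitions in provider.go:

```go
package main

import "fmt"

// ProviderEventType mirrors the raw stream event kinds listed above
// (illustrative; see provider.go for the real definitions).
type ProviderEventType string

const (
	EventThinkingDelta ProviderEventType = "thinking_delta"
	EventContentDelta  ProviderEventType = "content_delta"
	EventToolUseStart  ProviderEventType = "tool_use_start"
	EventToolUseStop   ProviderEventType = "tool_use_stop"
	EventComplete      ProviderEventType = "complete"
	EventError         ProviderEventType = "error"
)

// ProviderEvent is one chunk of the raw LLM stream.
type ProviderEvent struct {
	Type    ProviderEventType
	Content string // delta text for thinking/content events
}

func main() {
	// A provider typically exposes the stream as a channel of events.
	stream := make(chan ProviderEvent, 3)
	stream <- ProviderEvent{Type: EventThinkingDelta, Content: "Considering the request..."}
	stream <- ProviderEvent{Type: EventContentDelta, Content: "Here is the answer."}
	stream <- ProviderEvent{Type: EventComplete}
	close(stream)

	for ev := range stream {
		fmt.Println(ev.Type, ev.Content)
	}
}
```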

2. Agent Events (Session-Aware Wrappers)

Location: go_backend/internal/llm/agent/agent.go:29-36

Provider events enhanced with session context and streaming state:

  • AgentEventTypeThinking - Reasoning with session ID
  • AgentEventTypeResponse - Tool/content updates
  • AgentEventTypeError - Application errors
  • AgentEventTypeSummarize - Session summarization
  • AgentEventTypeToolExecutionStart - Backend tool execution begins
  • AgentEventTypeToolExecutionComplete - Backend tool execution finished
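The wrapping step can be sketched as a struct that attaches session context to a provider payload. The field names and the `wrapThinking` helper are hypothetical; agent.go defines the real shape:

```go
package main

import "fmt"

type AgentEventType string

const (
	AgentEventTypeThinking              AgentEventType = "thinking"
	AgentEventTypeResponse              AgentEventType = "response"
	AgentEventTypeError                 AgentEventType = "error"
	AgentEventTypeSummarize             AgentEventType = "summarize"
	AgentEventTypeToolExecutionStart    AgentEventType = "tool_execution_start"
	AgentEventTypeToolExecutionComplete AgentEventType = "tool_execution_complete"
)

// AgentEvent wraps a provider-level payload with session context
// and streaming state (fields are illustrative).
type AgentEvent struct {
	Type      AgentEventType
	SessionID string
	Content   string
	Done      bool // true once the turn has finished streaming
}

// wrapThinking (hypothetical helper) lifts a raw thinking delta
// into a session-aware event.
func wrapThinking(sessionID, delta string) AgentEvent {
	return AgentEvent{Type: AgentEventTypeThinking, SessionID: sessionID, Content: delta}
}

func main() {
	ev := wrapThinking("sess-123", "Evaluating tool options...")
	fmt.Printf("%s [%s]: %s\n", ev.Type, ev.SessionID, ev.Content)
}
```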

3. Pub/Sub Broadcasting

Location: go_backend/internal/pubsub/broker.go

  • Multi-client support: Broadcasts to all connected sessions
  • Buffered channels: Prevents blocking on slow clients
  • Subscription management: Session-specific filtering
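The three properties above suggest a broker shape like the following: a subscriber map guarded by a lock, buffered channels per subscriber, and a non-blocking send so a slow client drops events instead of stalling the publisher. This is a sketch under those assumptions, not the code in broker.go:

```go
package main

import (
	"fmt"
	"sync"
)

// Broker fans events out to subscribers over buffered channels
// (illustrative; buffer size and drop policy are assumptions).
type Broker[T any] struct {
	mu   sync.RWMutex
	subs map[chan T]struct{}
}

func NewBroker[T any]() *Broker[T] {
	return &Broker[T]{subs: make(map[chan T]struct{})}
}

// Subscribe returns a buffered channel that receives published events.
func (b *Broker[T]) Subscribe() chan T {
	ch := make(chan T, 64) // buffer absorbs bursts from the stream
	b.mu.Lock()
	b.subs[ch] = struct{}{}
	b.mu.Unlock()
	return ch
}

// Publish delivers to every subscriber; a full buffer is skipped
// rather than blocking the whole broadcast on one slow client.
func (b *Broker[T]) Publish(ev T) {
	b.mu.RLock()
	defer b.mu.RUnlock()
	for ch := range b.subs {
		select {
		case ch <- ev:
		default: // slow client: drop instead of blocking
		}
	}
}

func main() {
	b := NewBroker[string]()
	sub := b.Subscribe()
	b.Publish("tool_execution_start")
	fmt.Println(<-sub)
}
```

Session-specific filtering would sit on top of this: either one broker per session, or a filter applied at subscription time.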

4. SSE Client Delivery

Location: go_backend/internal/http/sse_events.go

User-facing events delivered to web/desktop clients:

  • thinking - AI reasoning in real-time
  • tool - Tool declaration and final results
  • tool_execution_start - Backend tool execution begins
  • tool_execution_complete - Backend tool execution finished
  • complete - Final response ready
  • error - Error handling
  • permission - Interactive permission requests

Key Streaming Scenarios

1. AI Reasoning (Thinking)

When: During Claude's reasoning process
Purpose: Transparent AI decision-making

2. Tool Execution Flow

When: File operations, bash commands, API calls
Purpose: Real-time tool execution feedback with start/completion events
Key Point: Tools execute AFTER provider completes streaming, with dedicated execution events
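The ordering guarantee can be sketched as: drain the provider stream first, collecting tool calls, then execute them with start/complete events. The `runTurn` function and event strings below are illustrative only:

```go
package main

import "fmt"

// runTurn sketches the ordering: provider events are fully drained
// before any collected tool call executes (names are illustrative).
func runTurn(stream []string, emit func(string)) {
	var pendingTools []string
	for _, ev := range stream {
		if ev == "tool_use" {
			pendingTools = append(pendingTools, ev) // collected, not run yet
			continue
		}
		emit(ev)
	}
	// Provider streaming is complete; only now do tools execute,
	// bracketed by dedicated execution events.
	for range pendingTools {
		emit("tool_execution_start")
		emit("tool_execution_complete")
	}
}

func main() {
	runTurn([]string{"thinking", "tool_use", "complete"}, func(ev string) {
		fmt.Println(ev)
	})
}
```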

3. Permission System Integration

When: File modifications, system operations
Purpose: Security boundaries with error handling
Key Point: Permission checking is REACTIVE - tools fail and return errors, no interactive dialogs

Key Files