
llm

Multi-provider LLM client layer with automatic failover across OpenRouter, Azure OpenAI, and the direct OpenAI, Google Gemini, and Anthropic Claude chat APIs. Includes model catalog management, pricing metadata, and preference-based model selection.

Used by cmd/worker, cmd/ui, cmd/run, cmd/seed-registry.

Usage

```go
import "cruvero/internal/llm"
```

Key Types / Interfaces

| Type | Source | Description |
| --- | --- | --- |
| Client | client.go | Interface: Chat and ChatWithModel methods |
| MultiClient | client.go | Multi-provider client routing to OpenRouter, Azure, OpenAI, Google, or Anthropic |
| Message | openrouter.go | Chat message with role and content |
| ChatResult | openrouter.go | LLM response with content and usage metrics |
| Usage | openrouter.go | Token counts (prompt, completion, total) and cost |
| FailoverChain | failover.go | Provider failover chain with health tracking and recovery |
| FailoverEvent | failover.go | Event emitted during provider failover |
| FailoverOptions | failover.go | Failover configuration (threshold, recovery interval, latency) |
| ModelInfo | models.go | Model metadata: architecture, pricing, context size |
| ModelStore | models.go | Interface for persisting model information |
| PostgresModelStore | models.go | PostgreSQL-backed model store |

Key Files

| File | Purpose |
| --- | --- |
| client.go | Client interface and MultiClient (provider routing) |
| openrouter.go | OpenRouter API client |
| azure.go | Azure OpenAI API client |
| openai_chat.go | Direct OpenAI Chat Completions API client |
| google.go | Direct Google Gemini GenerateContent API client |
| anthropic.go | Direct Anthropic Claude Messages API client |
| failover.go | Failover chain with circuit breaker and recovery |
| models.go | Model catalog and preference management |