kombify AI supports multiple AI providers through a unified interface. The model router automatically selects the best model for each task based on complexity, cost, and your preferences.

Provider tiers

Tier 1 — Direct API

| Provider | Models | Use case |
| --- | --- | --- |
| OpenAI | GPT-5-nano, GPT-4o | General chat, code generation |
| Anthropic | Claude Sonnet, Claude Haiku | Complex reasoning, analysis |
| Google AI | Gemini Pro, Gemini Flash | Multimodal, fast responses |

Tier 2 — Cloud platforms

| Provider | Models | Use case |
| --- | --- | --- |
| Azure AI Foundry | Hosted model endpoints | Enterprise deployments |

Tier 3 — Routers and local

| Provider | Models | Use case |
| --- | --- | --- |
| OpenRouter | 200+ models | Access to any model |
| Ollama | Self-hosted LLMs | Full privacy, no API costs |

Model routing logic

kombify AI selects models based on:
  1. Task complexity — Simple questions use fast, cheap models; complex tasks use more capable models
  2. User preference — You can pin a specific model in settings
  3. Cost budget — Monthly budget limits are respected
  4. Provider availability — Automatic fallback if a provider is down
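The four rules above can be sketched as a single selection function. This is an illustrative sketch only: the model identifiers come from the tables on this page, but the function name, rule ordering, budget threshold, and fallback choice are assumptions, not kombify AI's actual routing implementation.

```python
# Illustrative sketch of the routing rules described above.
# All names and thresholds here are assumptions for clarity.

DEFAULTS = {
    "simple_chat": "gpt-5-nano",
    "code_generation": "claude-sonnet",
    "complex_reasoning": "gpt-4o",
    "quick_summary": "gemini-flash",
}

CHEAP_FALLBACK = "gemini-flash"  # assumed low-cost choice when budget is tight


def select_model(task_type, pinned=None, budget_left=1.0,
                 available=frozenset(DEFAULTS.values())):
    """Pick a model: user pin > budget limit > task default > availability fallback."""
    if pinned:                      # user preference: a pinned model wins outright
        return pinned
    if budget_left < 0.1:           # cost budget: downgrade near the monthly limit
        return CHEAP_FALLBACK
    choice = DEFAULTS.get(task_type, "gpt-5-nano")  # complexity-based default
    if choice not in available:     # provider availability: automatic fallback
        for candidate in DEFAULTS.values():
            if candidate in available:
                return candidate
    return choice


print(select_model("code_generation"))                    # claude-sonnet
print(select_model("code_generation", pinned="gpt-4o"))   # gpt-4o
```

The design point the sketch captures is precedence: an explicit user pin short-circuits everything else, while cost and availability only adjust the complexity-based default.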

Default model selection

Task typeDefault modelReasoning
Simple chatGPT-5-nanoFast, very low cost
Code generationClaude SonnetStrong code capabilities
Complex reasoningGPT-4o / Claude SonnetBest overall performance
Quick summariesGemini FlashFast, cost-effective

Configuration

Override the default model globally in AI Settings > Models, or per conversation via the model picker dropdown.
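For illustration, a project-level override might look like the fragment below. The file name, keys, and structure are hypothetical; see the Configuration reference for the actual schema.

```yaml
# Hypothetical config fragment — not kombify AI's real schema.
ai:
  default_model: claude-sonnet   # pins the model for all tasks
  monthly_budget_usd: 50         # spending cap the router respects
  fallback_models:               # tried in order if the default is unavailable
    - gpt-4o
    - gemini-flash
```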

Further reading

- BYOK setup: configure your own API keys
- Configuration reference: all AI configuration options