OpenAI Integration
Simili Bot v0.2.0 supports OpenAI as a full alternative to Google Gemini for both embeddings and LLM analysis.
Services
Simili Bot uses OpenAI for:
- Text Embeddings - Convert text to vectors for semantic search
- LLM Analysis - AI reasoning, duplicate detection, routing, and triage
Embeddings
Supported models
| Model | Dimensions | Notes |
|---|---|---|
| text-embedding-3-small | 1536 | Default for OpenAI |
| text-embedding-3-large | 3072 | Highest quality |
| text-embedding-ada-002 | 1536 | Legacy, not recommended |
The dimensions value in your config must match the model’s output dimensions. Mismatches cause Qdrant collection errors.
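As a sanity check, a config loader can compare the configured dimensions against the model's known output size before touching Qdrant. The helper below is a hypothetical sketch (not Simili Bot's actual code); the model sizes come from the table above:

```python
# Hypothetical sketch: verify that the configured `dimensions` value
# matches the embedding model's native output size, so a mismatch is
# caught at startup rather than as a Qdrant collection error.
KNOWN_DIMENSIONS = {
    "text-embedding-3-small": 1536,
    "text-embedding-3-large": 3072,
    "text-embedding-ada-002": 1536,
}

def validate_embedding_config(model: str, dimensions: int) -> None:
    expected = KNOWN_DIMENSIONS.get(model)
    if expected is not None and expected != dimensions:
        raise ValueError(
            f"dimensions={dimensions} does not match {model} "
            f"(expected {expected})"
        )
```

Unknown model names pass through unchecked, so newer models do not require a code change.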
LLM analysis
Default LLM model: gpt-5.2
All LLM tasks are supported with OpenAI:
- Duplicate detection
- Quality assessment
- Issue routing
- Auto triage
Provider precedence
If both GEMINI_API_KEY and OPENAI_API_KEY are set, Gemini takes priority.
To force OpenAI, either:
- Only set OPENAI_API_KEY (do not set GEMINI_API_KEY), or
- Explicitly set provider: "openai" in your config
Configuration
Embeddings only with OpenAI
embedding:
  provider: "openai"
  api_key: "${OPENAI_API_KEY}"
  model: "text-embedding-3-small"
  dimensions: 1536
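The "${OPENAI_API_KEY}" syntax implies environment-variable substitution when the config is loaded. A minimal expansion helper might look like the following (assumed behavior; the regex and error handling are illustrative, not taken from Simili Bot's source):

```python
import os
import re

# Hypothetical sketch: expand ${VAR} placeholders in config values
# from the environment, failing loudly when a variable is missing.
_PLACEHOLDER = re.compile(r"\$\{([A-Z0-9_]+)\}")

def expand_env(value: str) -> str:
    def repl(match: re.Match) -> str:
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"environment variable {name} is not set")
        return os.environ[name]
    return _PLACEHOLDER.sub(repl, value)
```

Failing on a missing variable (rather than substituting an empty string) surfaces misconfiguration at startup instead of as an authentication error later.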
Full OpenAI stack
embedding:
  provider: "openai"
  api_key: "${OPENAI_API_KEY}"
  model: "text-embedding-3-small"
  dimensions: 1536

llm:
  provider: "openai"
  api_key: "${OPENAI_API_KEY}"
  model: "gpt-5.2"
Mixed providers
Use Gemini for LLM and OpenAI for embeddings (or vice versa):
embedding:
  provider: "openai"
  api_key: "${OPENAI_API_KEY}"
  model: "text-embedding-3-large"
  dimensions: 3072

llm:
  provider: "gemini"
  api_key: "${GEMINI_API_KEY}"
  model: "gemini-2.5-flash"
Error handling
Simili Bot uses the same exponential backoff retry logic for OpenAI as for Gemini; typed error handling ensures transient failures are retried correctly.
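The pattern can be sketched as follows. Everything here is illustrative (the TransientError type and with_retries helper are hypothetical names, not Simili Bot's API): transient errors are retried with exponentially growing delays plus jitter, while other errors propagate immediately.

```python
import random
import time

class TransientError(Exception):
    """Hypothetical marker for retryable failures (rate limits, timeouts)."""

# Hypothetical sketch of exponential backoff with jitter: retry only
# typed transient errors, doubling the delay on each attempt.
def with_retries(call, max_attempts: int = 5, base_delay: float = 0.5):
    for attempt in range(max_attempts):
        try:
            return call()
        except TransientError:
            if attempt == max_attempts - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

Catching only a typed error class is what keeps permanent failures (bad API key, invalid model name) from being retried pointlessly.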
Next steps