Open Models Configuration

RA.Aid supports a variety of open-source and compatible model providers. This guide covers configuration options and best practices for using different models with RA.Aid.

Overview

RA.Aid supports these model providers:

| Provider | Description | Key Features |
|----------|-------------|--------------|
| DeepSeek | Chinese AI lab that creates sophisticated LLMs | Strong, open models like R1 |
| Fireworks | Serverless AI inference platform for open-source models | High-performance inference, pay-per-token, variety of open models |
| OpenRouter | Multi-model gateway service | Access to 100+ models, unified API interface, pay-per-token |
| OpenAI-compatible | Self-hosted model endpoints | Compatible with Llama, Mistral, and other open models |
| Anthropic | Claude model series | 200k token context, strong tool use, JSON/XML parsing |
| Gemini | Google's multimodal models | Code generation in 20+ languages, parallel request support |
| Ollama | Local LLM hosting framework | Run models locally, no API keys required, offline usage |
| Bedrock | Amazon LLM platform on AWS | Large variety of models, serverless, leverages IAM policies |

Provider Configuration

DeepSeek Models

DeepSeek offers powerful reasoning models optimized for complex tasks.

# Environment setup
export DEEPSEEK_API_KEY=your_api_key_here

# Basic usage
ra-aid -m "Your task" --provider deepseek --model deepseek-reasoner

# With temperature control
ra-aid -m "Your task" --provider deepseek --model deepseek-reasoner --temperature 0.7

Available Models:

  • deepseek-reasoner: Optimized for reasoning tasks
  • Access via OpenRouter: deepseek/deepseek-r1
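
If you don't have a DeepSeek API key, the same R1 model can be reached through OpenRouter instead; a minimal sketch, assuming an OpenRouter account and key:

# Access DeepSeek R1 via OpenRouter
export OPENROUTER_API_KEY=your_api_key_here
ra-aid -m "Your task" --provider openrouter --model deepseek/deepseek-r1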

Advanced Configuration

Expert Tool Configuration

Configure the expert model for specialized tasks; this role usually benefits from a more powerful, slower reasoning model:

# DeepSeek expert
export EXPERT_DEEPSEEK_API_KEY=your_key
ra-aid -m "Your task" --expert-provider deepseek --expert-model deepseek-reasoner

# OpenRouter expert
export EXPERT_OPENROUTER_API_KEY=your_key
ra-aid -m "Your task" --expert-provider openrouter --expert-model mistralai/mistral-large-2411

# Gemini expert
export EXPERT_GEMINI_API_KEY=your_key
ra-aid -m "Your task" --expert-provider gemini --expert-model gemini-2.0-flash-thinking-exp-1219

# Ollama expert
ra-aid -m "Your task" --expert-provider ollama --expert-model qwq:32b
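
The expert provider does not have to match the main provider. As an illustrative sketch (the pairing here is arbitrary, and the main model falls back to the provider's default), you can combine an Anthropic main agent with a DeepSeek expert:

# Anthropic main agent with a DeepSeek expert
export ANTHROPIC_API_KEY=your_key
export EXPERT_DEEPSEEK_API_KEY=your_key
ra-aid -m "Your task" --provider anthropic --expert-provider deepseek --expert-model deepseek-reasoner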

Best Practices

  • Set environment variables in your shell configuration file (see the snippet after this list)
  • Use lower temperatures (0.1-0.3) for coding tasks
  • Test different models to find the best fit for your use case
  • Consider using expert mode for complex programming tasks
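
For example, a minimal snippet for ~/.bashrc or ~/.zshrc (the provider choice here is just one possible setup) keeps your keys available across sessions:

# Persist RA.Aid provider keys across shell sessions
export OPENROUTER_API_KEY=your_openrouter_key
export EXPERT_DEEPSEEK_API_KEY=your_deepseek_key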

Environment Variables

Complete list of supported environment variables:

| Variable | Provider | Purpose |
|----------|----------|---------|
| OPENROUTER_API_KEY | OpenRouter | Main API access |
| DEEPSEEK_API_KEY | DeepSeek | Main API access |
| FIREWORKS_API_KEY | Fireworks | Main API access |
| OPENAI_API_KEY | OpenAI-compatible | API access |
| OPENAI_API_BASE | OpenAI-compatible | Custom endpoint |
| ANTHROPIC_API_KEY | Anthropic | API access |
| GEMINI_API_KEY | Gemini | API access |
| AWS_PROFILE | Bedrock | API access |
| AWS_SECRET_ACCESS_KEY | Bedrock | API access |
| AWS_ACCESS_KEY_ID | Bedrock | API access |
| AWS_SESSION_TOKEN | Bedrock | API access |
| AWS_REGION | Bedrock | Region specification (default: us-east-1) |
| OLLAMA_BASE_URL | Ollama | Custom endpoint (default: http://localhost:11434) |
| EXPERT_OPENROUTER_API_KEY | OpenRouter | Expert tool |
| EXPERT_DEEPSEEK_API_KEY | DeepSeek | Expert tool |
| EXPERT_FIREWORKS_API_KEY | Fireworks | Expert tool |
| EXPERT_GEMINI_API_KEY | Gemini | Expert tool |
| EXPERT_OLLAMA_BASE_URL | Ollama | Expert tool endpoint |
| EXPERT_AWS_PROFILE | Bedrock | Expert tool |
| EXPERT_AWS_SECRET_ACCESS_KEY | Bedrock | Expert tool |
| EXPERT_AWS_ACCESS_KEY_ID | Bedrock | Expert tool |
| EXPERT_AWS_SESSION_TOKEN | Bedrock | Expert tool |
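
To illustrate how OPENAI_API_KEY and OPENAI_API_BASE work together, here is a sketch of pointing RA.Aid at a self-hosted OpenAI-compatible server (the URL and model name are placeholders for your own deployment):

# Self-hosted OpenAI-compatible endpoint
export OPENAI_API_KEY=your_key        # some local servers accept any value
export OPENAI_API_BASE=http://localhost:8000/v1
ra-aid -m "Your task" --provider openai-compatible --model your-hosted-model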

Troubleshooting

  • Verify API keys are set correctly
  • Check endpoint URLs for OpenAI-compatible setups
  • Monitor API rate limits and quotas
  • For Ollama, ensure the service is running (ollama list)
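
For example, a quick way to confirm a local Ollama server is reachable (assuming the default port):

# Verify Ollama is running and serving models
ollama list
curl http://localhost:11434/api/tags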

See Also