# Using Claude Code API Keys with MCP

## Overview

The Managing platform's MCP implementation now prioritizes **Claude (Anthropic)** as the default LLM provider when in auto mode. This allows you to use your Claude Code API keys seamlessly.

## Auto Mode Priority (Updated)

When using "auto" mode (the backend selects the provider), the priority order is now:

1. **Claude** (Anthropic) ← **Preferred** (Claude Code API keys)
2. Gemini (Google)
3. OpenAI (GPT)

The system automatically selects Claude if a Claude API key is configured.

## Setup with Claude Code API Keys

### Option 1: Environment Variables (Recommended)

Set the environment variable before running the API:

```bash
export Llm__Claude__ApiKey="your-anthropic-api-key"
dotnet run --project src/Managing.Api
```

Or on Windows:

```powershell
$env:Llm__Claude__ApiKey="your-anthropic-api-key"
dotnet run --project src/Managing.Api
```

### Option 2: User Secrets (Development)

```bash
cd src/Managing.Api
dotnet user-secrets set "Llm:Claude:ApiKey" "your-anthropic-api-key"
```

### Option 3: appsettings.Development.json

Add to `src/Managing.Api/appsettings.Development.json`:

```json
{
  "Llm": {
    "Claude": {
      "ApiKey": "your-anthropic-api-key",
      "DefaultModel": "claude-3-5-sonnet-20241022"
    }
  }
}
```

**⚠️ Note**: Don't commit API keys to version control!

## Getting Your Anthropic API Key

1. Go to the [Anthropic Console](https://console.anthropic.com/)
2. Sign in or create an account
3. Navigate to the **API Keys** section
4. Click **Create Key**
5. Copy your API key
6. Add it to your configuration using one of the methods above

## Verification

To verify Claude is being used:

1. Start the API
2. Check the logs for: `"Claude provider initialized"`
3. In the AI chat, the provider dropdown should show "Claude" as available
4. When using "Auto" mode, logs should show: `"Auto-selected provider: claude"`

## Using Claude Code API Keys with BYOK

If you want users to bring their own Claude API keys:

```typescript
// Frontend example
const response = await aiChatService.sendMessage(
  messages,
  'claude', // Specify Claude
  'user-anthropic-api-key' // The user's own Anthropic API key
)
```

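For illustration, a minimal end-to-end call might look like the sketch below. It assumes the `sendMessage(messages, provider, apiKey)` signature shown above; the message shape, the placeholder key variable, and the returned value are assumptions rather than the actual client contract.

```typescript
// Illustrative only: the message shape and return value are assumptions based on
// the sendMessage(messages, provider, apiKey) call shown above.
const userAnthropicApiKey = 'user-anthropic-api-key' // placeholder; supplied by the user at runtime

const messages = [
  { role: 'user', content: 'Show me my best backtests from the last month' },
]

// Pass the user's own Anthropic key as the third argument (BYOK).
const reply = await aiChatService.sendMessage(messages, 'claude', userAnthropicApiKey)
console.log(reply)
```
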
## Model Configuration

The default Claude model is `claude-3-5-sonnet-20241022` (Claude 3.5 Sonnet).

To use a different model, update `appsettings.json`:

```json
{
  "Llm": {
    "Claude": {
      "ApiKey": "your-key",
      "DefaultModel": "claude-3-opus-20240229" // Claude 3 Opus (more capable)
    }
  }
}
```

Available models:

- `claude-3-5-sonnet-20241022` - Latest, balanced (recommended)
- `claude-3-opus-20240229` - Most capable
- `claude-3-sonnet-20240229` - Balanced
- `claude-3-haiku-20240307` - Fastest

## Benefits of Using Claude

1. **MCP Native**: Claude has native MCP support
2. **Context Window**: Large context window (200K tokens)
3. **Tool Calling**: Excellent at structured tool use
4. **Reasoning**: Strong reasoning capabilities for trading analysis
5. **Code Understanding**: Great for technical queries

## Example Usage

Once configured, the AI chat will automatically use Claude:

**User**: "Show me my best backtests from the last month with a score above 80"

**Claude** will:

1. Understand the request
2. Call the `get_backtests_paginated` MCP tool with appropriate filters (a hypothetical payload is sketched after this list)
3. Analyze the results
4. Provide insights in natural language

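For illustration only, the filters for step 2 might look like the sketch below. Only the tool name `get_backtests_paginated` comes from this document; the parameter names (`from`, `to`, `minScore`, `page`, `pageSize`) are hypothetical and may not match the real tool schema.

```typescript
// Hypothetical MCP tool-call arguments for the query above.
// Parameter names are illustrative assumptions; check the actual tool schema.
const now = new Date()
const oneMonthAgo = new Date(now)
oneMonthAgo.setMonth(now.getMonth() - 1)

const toolCall = {
  name: 'get_backtests_paginated',
  arguments: {
    from: oneMonthAgo.toISOString(),
    to: now.toISOString(),
    minScore: 80,
    page: 1,
    pageSize: 20,
  },
}
// Claude would issue a call shaped roughly like this through the MCP tool-calling mechanism.
```
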
## Troubleshooting

### Claude not selected in auto mode

**Issue**: Logs show Gemini or OpenAI being selected instead of Claude

**Solution**:

- Verify the API key is configured: check logs for "Claude provider initialized"
- Ensure the key is valid and active
- Check the environment variable name: `Llm__Claude__ApiKey` (double underscores)

### API key errors

**Issue**: "Authentication error" or "Invalid API key"

**Solution**:

- Verify the key is copied correctly (no extra spaces)
- Check the key is active in the Anthropic Console
- Ensure you have credits/billing set up

### Model not found

**Issue**: "Model not found" error

**Solution**:

- Use supported model names from the list above
- Check model availability in your region
- Verify model name spelling in configuration

## Advanced: Multi-Provider Fallback

You can configure multiple providers for redundancy:

```json
{
  "Llm": {
    "Claude": {
      "ApiKey": "claude-key"
    },
    "Gemini": {
      "ApiKey": "gemini-key"
    },
    "OpenAI": {
      "ApiKey": "openai-key"
    }
  }
}
```

Auto mode will:

1. Try Claude first (as sketched below)
2. Fall back to Gemini if Claude fails
3. Fall back to OpenAI if Gemini fails

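A minimal TypeScript sketch of that fallback order follows. It is purely illustrative: the real selection happens in the backend, and the function and provider identifiers here are assumptions.

```typescript
// Illustrative sketch of the documented auto-mode order (Claude → Gemini → OpenAI).
// The actual selection logic lives in the backend; the names here are assumptions.
const providerPriority = ['claude', 'gemini', 'openai'] as const

type Provider = (typeof providerPriority)[number]

async function sendWithFallback(
  send: (provider: Provider) => Promise<string>
): Promise<string> {
  for (const provider of providerPriority) {
    try {
      return await send(provider) // the first provider that succeeds wins
    } catch (error) {
      console.warn(`Provider ${provider} failed, trying the next one`, error)
    }
  }
  throw new Error('All configured LLM providers failed')
}
```
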
## Cost Optimization

Claude pricing (as of 2024):

- **Claude 3.5 Sonnet**: $3/M input tokens, $15/M output tokens
- **Claude 3 Opus**: $15/M input tokens, $75/M output tokens
- **Claude 3 Haiku**: $0.25/M input tokens, $1.25/M output tokens

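As a rough worked example using the Claude 3.5 Sonnet prices above (the token counts are illustrative):

```typescript
// Estimate the cost of one Claude 3.5 Sonnet request from the per-million-token
// prices listed above. The token counts are illustrative assumptions.
const inputTokens = 2_000
const outputTokens = 500

const inputCostUsd = (inputTokens / 1_000_000) * 3 // $3 per million input tokens
const outputCostUsd = (outputTokens / 1_000_000) * 15 // $15 per million output tokens

console.log((inputCostUsd + outputCostUsd).toFixed(4)) // "0.0135" → about 1.35 cents
```
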
For cost optimization:

- Use **3.5 Sonnet** for general queries (recommended)
- Use **Haiku** for simple queries (if you need to reduce costs)
- Use **Opus** only for complex analysis requiring maximum capability

## Rate Limits

Anthropic rate limits for tier 1 (at the time of writing; check the Anthropic Console for current values):

- 50 requests per minute
- 40,000 tokens per minute
- 5 requests per second

For higher limits, upgrade your tier in the Anthropic Console.

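If you expect to hit these limits, a simple client-side retry with exponential backoff can help. The sketch below is illustrative only and assumes the thrown error exposes an HTTP `status` field, which may not match the actual client.

```typescript
// Minimal retry-with-backoff sketch for rate-limited (HTTP 429) calls.
// Purely illustrative; the error shape (`status`) is an assumption.
async function withRetry<T>(call: () => Promise<T>, maxAttempts = 3): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await call()
    } catch (error: any) {
      if (attempt >= maxAttempts || error?.status !== 429) throw error
      const delayMs = 2 ** (attempt - 1) * 1000 // back off: 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs))
    }
  }
}
```
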
## Security Best Practices

1. **Never commit API keys** to version control
2. **Use environment variables** or user secrets in development
3. **Use secure key management** (Azure Key Vault, AWS Secrets Manager) in production
4. **Rotate keys regularly**
5. **Monitor usage** for unexpected spikes
6. **Set spending limits** in the Anthropic Console

## Production Deployment

For production, use secure configuration:

### Azure App Service

```bash
az webapp config appsettings set \
  --name your-app-name \
  --resource-group your-rg \
  --settings Llm__Claude__ApiKey="your-key"
```

### Docker

```bash
docker run -e Llm__Claude__ApiKey="your-key" your-image
```

### Kubernetes

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: llm-secrets
type: Opaque
stringData:
  claude-api-key: your-key
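# To consume the secret, map it to the environment variable the API reads
# (Llm__Claude__ApiKey) in your Deployment's container spec, for example:
#
#   env:
#     - name: Llm__Claude__ApiKey
#       valueFrom:
#         secretKeyRef:
#           name: llm-secrets
#           key: claude-api-key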
```

## Next Steps

1. Configure your Claude API key
2. Start the API and verify the Claude provider is initialized
3. Test the AI chat with queries about backtests
4. Monitor usage and costs in the Anthropic Console
5. Adjust model selection based on your needs

## Support

For issues:

- Check logs for provider initialization
- Verify API key in Anthropic Console
- Test API key with direct API calls
- Review error messages in application logs