Overview
Improved the LLM configuration for the blog-api-server project and deployed it to the server.
LLM Configuration Improvements
Existing Issues
- Multiple API key environment variables (`ZAI_API_KEY`, `ANTHROPIC_API_KEY`)
- Complex provider branching logic
- Scattered model settings
Changes
Environment Variable Simplification
```
# Before
ZAI_API_KEY=xxx
ANTHROPIC_API_KEY=xxx
ZAI_MODEL=gpt-4o-mini
LLM=ZAI
```

```
# After
LLM=ZAI            # Provider (ZAI, OPENAI, ANTHROPIC)
LLM_API_KEY=xxx    # Single API key
LLM_MODEL=glm-4.7  # Default model
LLM_TIMEOUT=120    # Timeout (seconds)
```
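A minimal sketch of how these variables might be read at startup. The variable names come from the table above; treating `ZAI`, `glm-4.7`, and `120` as fallback defaults is my assumption:

```python
import os

# Provider selection; ZAI is assumed to be the project default.
LLM = os.getenv("LLM", "ZAI").upper()
LLM_API_KEY = os.getenv("LLM_API_KEY", "")
LLM_MODEL = os.getenv("LLM_MODEL", "glm-4.7")      # assumed default model
LLM_TIMEOUT = int(os.getenv("LLM_TIMEOUT", "120"))  # seconds, assumed default
```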
Automatic BASE_URL Configuration
```python
LLM_BASE_URLS = {
    "ZAI": "https://api.z.ai/api/coding/paas/v4",
    "OPENAI": "https://api.openai.com/v1",
    "ANTHROPIC": "https://api.anthropic.com/v1",
}
```
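With that mapping, the base URL can be resolved from the provider name alone, failing fast on an unsupported value. The helper name and error wording below are illustrative, not the project's actual code:

```python
LLM_BASE_URLS = {
    "ZAI": "https://api.z.ai/api/coding/paas/v4",
    "OPENAI": "https://api.openai.com/v1",
    "ANTHROPIC": "https://api.anthropic.com/v1",
}

def resolve_base_url(provider: str) -> str:
    """Look up the API base URL for a provider, case-insensitively."""
    try:
        return LLM_BASE_URLS[provider.upper()]
    except KeyError:
        raise ValueError(f"Unsupported LLM provider: {provider!r}") from None
```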
Code Structure Improvements
```python
class Translator:
    """LLM-based translator."""

    def __init__(self):
        self.api_key = LLM_API_KEY
        self.base_url = LLM_BASE_URL  # Auto-selected from the LLM provider
        self.model = LLM_MODEL
        self.timeout = LLM_TIMEOUT
```
Model Configuration
Default Model
- glm-4.7 (default)
- max_tokens: 8192
Supported Models
| Model | max_tokens |
|---|---|
| glm-4 | 8192 |
| glm-4.7 | 8192 |
| gpt-4o-mini | 4096 |
| gpt-4o | 8192 |
| claude-3-5-haiku | 8192 |
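The table above can be encoded as a simple lookup; the helper name and the conservative 4096 fallback for unknown models are my assumptions:

```python
MODEL_MAX_TOKENS = {
    "glm-4": 8192,
    "glm-4.7": 8192,
    "gpt-4o-mini": 4096,
    "gpt-4o": 8192,
    "claude-3-5-haiku": 8192,
}

def max_tokens_for(model: str) -> int:
    """Return the max_tokens cap for a model; unknown models get 4096 (assumed)."""
    return MODEL_MAX_TOKENS.get(model, 4096)
```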
Team Composition
Assembled the blog-api-server development team.
| Role | Name | Responsibilities |
|---|---|---|
| Team Lead | team-lead | Overall management |
| Developer | developer | Code writing, feature implementation |
| Deployer | deployer | Server deployment, infrastructure |
| Monitor | monitor | Log analysis, performance monitoring |
Server Deployment
Deployment Target
- Server: blog.fcoinfup.com (130.162.133.47)
- Path: /var/www/blog-api
Deployment Content
- `translator.py` update
- systemd service restart
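A sketch of the two deployment steps as commands; the `ubuntu` SSH user and the `blog-api` service name are assumptions inferred from the log (`deploy` defaults to a dry run so nothing is executed accidentally):

```python
import subprocess

SERVER = "blog.fcoinfup.com"       # deployment target from this log
REMOTE_PATH = "/var/www/blog-api"  # deployment path from this log

def deploy(dry_run: bool = True) -> list:
    """Build (and optionally run) the commands used to ship translator.py."""
    commands = [
        # Copy the updated module to the server ("ubuntu" user is assumed)
        ["scp", "translator.py", f"ubuntu@{SERVER}:{REMOTE_PATH}/translator.py"],
        # Restart the systemd unit (service name assumed from blog-api.service)
        ["ssh", f"ubuntu@{SERVER}", "sudo systemctl restart blog-api"],
    ]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)
    return commands
```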
Deployment Result
```
● blog-api.service - Blog API Server
     Active: active (running)
```
Next Steps
- Translation API testing
- Monitoring dashboard setup
- Log file rollover policy application