Overview

Aider is a command-line AI coding assistant that can edit code in your local git repository. It’s designed for pair programming workflows and supports multiple LLM providers. Connect Aider to ModelStack to access all supported models from your terminal.

Prerequisites

  • Python 3.8+ installed
  • Git repository initialized
  • ModelStack API key from your dashboard

Installation

Install Aider using pip:
python -m pip install aider-chat
Or using pipx (recommended):
pipx install aider-chat

Configuration

Method 1: Environment Variables

Set these environment variables in your shell:
export OPENAI_API_KEY="your_modelstack_api_key"
export OPENAI_API_BASE="https://api.modelstack.cc/v1"
Add these to your ~/.zshrc or ~/.bashrc to persist across sessions.
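Aider picks up both variables at startup, so a missing one is easy to catch before launching. A small pre-flight check makes this concrete (a hypothetical helper for illustration, not part of Aider or ModelStack):

```python
import os

def check_modelstack_env(env=None):
    """Return the names of any required ModelStack variables that are unset."""
    env = os.environ if env is None else env
    required = ("OPENAI_API_KEY", "OPENAI_API_BASE")
    return [name for name in required if not env.get(name)]

# Both variables present: nothing is reported missing.
missing = check_modelstack_env({
    "OPENAI_API_KEY": "your_modelstack_api_key",
    "OPENAI_API_BASE": "https://api.modelstack.cc/v1",
})
print(missing)  # []
```

Run it before starting a session; an empty list means the environment is configured.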

Method 2: Command-Line Flags

Pass the configuration directly when running Aider:
aider \
  --openai-api-key your_modelstack_api_key \
  --openai-api-base https://api.modelstack.cc/v1 \
  --model claude-sonnet-4-6
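If you use the flags often, a shell function in your ~/.zshrc or ~/.bashrc saves retyping them. A sketch, assuming your key lives in a MODELSTACK_API_KEY variable (modelstack_aider is a made-up name, not an Aider or ModelStack command):

```shell
# Hypothetical wrapper: forwards any extra arguments (e.g. --model) to aider.
modelstack_aider() {
  aider \
    --openai-api-key "$MODELSTACK_API_KEY" \
    --openai-api-base "https://api.modelstack.cc/v1" \
    "$@"
}

# Usage: modelstack_aider --model claude-sonnet-4-6
```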

Usage

Basic Usage

Start Aider in your project directory:
aider --model claude-sonnet-4-6
Aider will:
  1. Detect files in your git repo
  2. Connect to ModelStack
  3. Open an interactive chat session

Example Session

$ aider --model claude-sonnet-4-6

Aider v0.x.x
Model: claude-sonnet-4-6 via https://api.modelstack.cc/v1

> Add error handling to the login function in auth.py

# Aider will read auth.py, make changes, and show you a diff
# Type /yes to accept or /no to reject

Switching Models

Change models mid-session:
> /model gpt-4o
Or specify at startup:
aider --model gemini-2.5-pro

Available Commands

  • /add <file> - Add files to the chat context
  • /drop <file> - Remove files from context
  • /model <name> - Switch to a different model
  • /commit - Commit changes with AI-generated message
  • /undo - Undo the last change
  • /help - Show all commands
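All of these follow the same slash-prefix convention: a command name, then optional arguments. A minimal parser sketch shows the shape of that convention (illustrative only, not Aider's actual implementation):

```python
def parse_slash_command(line):
    """Split '/add foo.py bar.py' into ('add', ['foo.py', 'bar.py']).

    Returns None for lines that are not slash commands (plain chat input).
    """
    line = line.strip()
    if not line.startswith("/"):
        return None
    parts = line[1:].split()
    if not parts:
        return None
    return parts[0], parts[1:]

print(parse_slash_command("/add auth.py"))  # ('add', ['auth.py'])
print(parse_slash_command("fix the bug"))   # None
```

Anything that parses to None is treated as a chat message for the model rather than a command.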

Troubleshooting

  • “API key not found”: Ensure OPENAI_API_KEY is set, or pass the --openai-api-key flag.
  • “Model not found”: Use ModelStack’s standardized model names. See supported models.
  • Rate limit errors: Check your plan limits and consider upgrading if you hit them frequently.

Best Practices

  1. Start small: Add only relevant files to context with /add
  2. Match models to tasks: Choose faster models (Haiku, GPT-4o-mini) for simple tasks
  3. Review changes: Always review diffs before accepting with /yes
  4. Commit often: Use /commit to save progress incrementally


Last verified: 2026-03-07