Models API

The Models API allows you to list and manage available AI models across different providers. Dispersl supports multiple LLM providers including Anthropic, OpenAI, DeepSeek, and Google.

Overview

Dispersl is model-agnostic, allowing you to:

  • Use any supported LLM model
  • Swap providers on the fly
  • Maintain context across different models
  • Choose the best model for each task type

Supported Providers

  • Anthropic: Claude 3 Sonnet, Claude 3 Haiku, Claude 3 Opus
  • OpenAI: GPT-4, GPT-4 Turbo, GPT-3.5 Turbo
  • DeepSeek: DeepSeek R1, DeepSeek Coder
  • Google: Gemini Pro, Gemini Pro Vision

Endpoints

List Available Models

GET /models

Retrieves all available AI models with their specifications and tier requirements.

Headers

Authorization: Bearer YOUR_API_KEY
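For example, a minimal request sketch in Python using the requests library. The base URL https://api.dispersl.com is an assumption and may differ for your deployment:

import os
import requests

# Assumed base URL -- replace with your actual Dispersl API host.
BASE_URL = "https://api.dispersl.com"
API_KEY = os.environ["DISPERSL_API_KEY"]  # your API key

response = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print(response.json()["status"])  # "success"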

Response

{
  "models": [
    {
      "id": "anthropic/claude-3-sonnet",
      "name": "Claude 3 Sonnet",
      "description": "Anthropic's most balanced model for complex reasoning and analysis",
      "context_length": 200000,
      "tier_requirements": {
        "free_model": false
      }
    },
    {
      "id": "openai/gpt-4",
      "name": "GPT-4",
      "description": "OpenAI's most capable model for complex multi-step tasks",
      "context_length": 128000,
      "tier_requirements": {
        "free_model": false
      }
    },
    {
      "id": "openai/gpt-3.5-turbo",
      "name": "GPT-3.5 Turbo",
      "description": "Fast and efficient model for simpler tasks",
      "context_length": 16385,
      "tier_requirements": {
        "free_model": true
      }
    },
    {
      "id": "deepseek/deepseek-r1",
      "name": "DeepSeek R1",
      "description": "Advanced reasoning model optimized for code and mathematics",
      "context_length": 65536,
      "tier_requirements": {
        "free_model": false
      }
    },
    {
      "id": "google/gemini-pro",
      "name": "Gemini Pro",
      "description": "Google's multimodal model for text and vision tasks",
      "context_length": 32768,
      "tier_requirements": {
        "free_model": true
      }
    }
  ],
  "status": "success"
}
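To work with this response programmatically, one option is to mirror the schema with a small dataclass. The field names below follow the example response above; this is a sketch, not an official client:

from dataclasses import dataclass

@dataclass
class Model:
    id: str
    name: str
    description: str
    context_length: int
    free_model: bool

def parse_models(payload: dict) -> list[Model]:
    """Convert the /models response body into Model objects."""
    return [
        Model(
            id=m["id"],
            name=m["name"],
            description=m["description"],
            context_length=m["context_length"],
            free_model=m["tier_requirements"]["free_model"],
        )
        for m in payload["models"]
    ]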

Model Selection Guide

For Code Generation

  • Primary: anthropic/claude-3-sonnet - Excellent for complex code architecture
  • Alternative: deepseek/deepseek-r1 - Specialized for coding tasks
  • Budget: openai/gpt-3.5-turbo - Good for simple code generation

For Testing

  • Primary: openai/gpt-4 - Comprehensive test coverage
  • Alternative: anthropic/claude-3-sonnet - Good test reasoning
  • Budget: openai/gpt-3.5-turbo - Basic test generation

For Documentation

  • Primary: google/gemini-pro - Excellent for documentation
  • Alternative: anthropic/claude-3-sonnet - Clear explanations
  • Budget: openai/gpt-3.5-turbo - Simple documentation

For Git Operations

  • Primary: deepseek/deepseek-r1 - Good for version control logic
  • Alternative: openai/gpt-4 - Reliable for git workflows
  • Budget: openai/gpt-3.5-turbo - Basic git operations

For Chat/Analysis

  • Primary: anthropic/claude-3-sonnet - Best for analysis and reasoning
  • Alternative: openai/gpt-4 - Good conversational abilities
  • Budget: openai/gpt-3.5-turbo - Quick responses
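One way to encode this guide in client code is a task-to-model table with fallbacks. The task names below are illustrative conventions, not values defined by the API:

# Illustrative mapping based on the guide above; task names are not API values.
MODEL_PREFERENCES = {
    "code": ["anthropic/claude-3-sonnet", "deepseek/deepseek-r1", "openai/gpt-3.5-turbo"],
    "test": ["openai/gpt-4", "anthropic/claude-3-sonnet", "openai/gpt-3.5-turbo"],
    "docs": ["google/gemini-pro", "anthropic/claude-3-sonnet", "openai/gpt-3.5-turbo"],
    "git":  ["deepseek/deepseek-r1", "openai/gpt-4", "openai/gpt-3.5-turbo"],
    "chat": ["anthropic/claude-3-sonnet", "openai/gpt-4", "openai/gpt-3.5-turbo"],
}

def pick_model(task: str, available: set[str]) -> str:
    """Return the first preferred model for a task that is actually available."""
    for model_id in MODEL_PREFERENCES[task]:
        if model_id in available:
            return model_id
    raise ValueError(f"No preferred model available for task: {task}")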

Context Length Considerations

Model           | Context Length | Best For
Claude 3 Sonnet | 200,000 tokens | Large codebases, extensive documentation
GPT-4           | 128,000 tokens | Complex multi-step tasks
DeepSeek R1     | 65,536 tokens  | Code analysis and generation
Gemini Pro      | 32,768 tokens  | Multimodal tasks, documentation
GPT-3.5 Turbo   | 16,385 tokens  | Quick tasks, simple operations
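Before sending a large prompt, it can help to check it against the model's context_length. The sketch below uses a rough characters-per-token estimate (about 4 characters per token); this is an approximation, not a real tokenizer:

def fits_in_context(prompt: str, context_length: int, reserve_for_output: int = 4096) -> bool:
    """Rough check that a prompt leaves room for the model's response.

    Uses ~4 characters per token as a heuristic; a real tokenizer will differ.
    """
    estimated_tokens = len(prompt) // 4
    return estimated_tokens + reserve_for_output <= context_length

# A 500,000-character prompt will not fit in GPT-3.5 Turbo's 16,385 tokens,
# but it does fit in Claude 3 Sonnet's 200,000-token context.
print(fits_in_context("x" * 500_000, 16_385))   # False
print(fits_in_context("x" * 500_000, 200_000))  # True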

Tier Requirements

Free Tier Models

  • openai/gpt-3.5-turbo
  • google/gemini-pro

Pro/Enterprise Models

  • anthropic/claude-3-sonnet
  • anthropic/claude-3-opus
  • openai/gpt-4
  • deepseek/deepseek-r1
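Tier availability is exposed through the free_model flag in the /models response, so free-tier clients can filter on it directly:

def free_tier_models(payload: dict) -> list[str]:
    """Return IDs of models marked as available on the free tier."""
    return [
        m["id"]
        for m in payload["models"]
        if m["tier_requirements"]["free_model"]
    ]

# With the example response above, this yields:
# ["openai/gpt-3.5-turbo", "google/gemini-pro"]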

Error Responses

  • 400: Failed to initialize AI model
  • 401: No Bearer token provided
  • 500: Internal server error
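A sketch of handling these status codes on the client side; the retry suggestion for 500 is a client-side convention, not something the API mandates:

import requests

def list_models(base_url: str, api_key: str) -> dict:
    """Call GET /models and translate error statuses into clearer exceptions."""
    response = requests.get(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    if response.status_code == 401:
        raise RuntimeError("No Bearer token provided or token rejected; check your API key.")
    if response.status_code == 400:
        raise RuntimeError("Failed to initialize AI model; verify the request and try again.")
    if response.status_code >= 500:
        raise RuntimeError("Internal server error; retry later with backoff.")
    response.raise_for_status()
    return response.json()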

Usage in Agent Requests

When making requests to agent endpoints, specify the model in the request body:

{
  "prompt": "Your task description",
  "model": "anthropic/claude-3-sonnet",
  "context": "Additional context"
}
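For example, posting this body from Python might look like the sketch below. The /agent path and base URL are placeholders; substitute the actual agent endpoint from the agent documentation:

import os
import requests

BASE_URL = "https://api.dispersl.com"  # assumed host
AGENT_ENDPOINT = f"{BASE_URL}/agent"   # placeholder path; use the real agent endpoint

response = requests.post(
    AGENT_ENDPOINT,
    headers={"Authorization": f"Bearer {os.environ['DISPERSL_API_KEY']}"},
    json={
        "prompt": "Your task description",
        "model": "anthropic/claude-3-sonnet",
        "context": "Additional context",
    },
    timeout=120,
)
response.raise_for_status()
print(response.json())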

Best Practices

  1. Match model to task: Use specialized models for their strengths
  2. Consider context length: Choose models with sufficient context for your data
  3. Monitor costs: Balance performance with cost considerations
  4. Test different models: Experiment to find the best model for your use case
  5. Use free tier models: Start with free models for development and testing
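One way to apply points 3 and 5 is to default to a free-tier model outside production. The DISPERSL_ENV variable below is purely a local convention for this sketch, not an official setting:

import os

# Illustrative convention: default to a free-tier model during development.
DEFAULT_MODELS = {
    "development": "openai/gpt-3.5-turbo",      # free tier for iteration
    "production": "anthropic/claude-3-sonnet",  # stronger model when quality matters
}

def default_model() -> str:
    env = os.environ.get("DISPERSL_ENV", "development")
    return DEFAULT_MODELS.get(env, "openai/gpt-3.5-turbo")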