Supported Language Models

Locus AI provides a powerful ecosystem of language models, each bringing unique strengths to your workflow. By integrating multiple leading AI providers, we enable you to:

  • Choose the perfect model for each specific task
  • Combine different models for enhanced results
  • Validate findings across multiple AI perspectives
  • Optimize for speed, accuracy, or creativity as needed

This flexibility ensures you're not limited to a single AI approach, but can leverage the best capabilities of each model to achieve optimal results. Below, we'll explore the advantages of this multi-model approach and introduce you to our comprehensive suite of AI capabilities.

Advantages of Multiple LLMs

Diverse Perspectives and Capabilities

Having access to multiple LLMs provides several key advantages:

  1. Complementary Strengths

    • Different models excel at different tasks
    • Combine models for comprehensive analysis
    • Use specialized models for specific needs
  2. Validation and Cross-checking

    • Run the same query across multiple models
    • Compare responses for accuracy and completeness
    • Identify potential biases or limitations
  3. Optimization for Different Tasks

    • Use fast models for initial analysis
    • Switch to more precise models for detailed review
    • Balance speed vs accuracy based on needs

Benefits of Re-running AI Actions

Re-running actions with different models or settings can enhance your results:

  1. Iterative Refinement

    • Start with a general model for broad insights
    • Refine with specialized models for deeper analysis
    • Adjust temperature settings to explore different perspectives
  2. Quality Assurance

    • Verify findings across multiple models
    • Identify consistent patterns and insights
    • Reduce the impact of model-specific biases
  3. Creative Exploration

    • Use different temperature settings for varied outputs
    • Combine insights from multiple runs
    • Discover unexpected connections and ideas

Example Workflow

For analyzing a legal document:

  1. Use GPT-3.5-turbo-1106 for a quick initial summary
  2. Run Deepseek Reasoner for detailed logical analysis
  3. Verify with Claude 3 Haiku for ethical considerations
  4. Compare and synthesize insights from all models
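The steps above can be sketched as a simple pipeline. The `run_model` helper below is a hypothetical stand-in for whatever client call your integration uses; it is stubbed here so only the control flow is shown, and the model identifiers are illustrative.

```python
def run_model(model: str, task: str, document: str) -> str:
    """Placeholder for a real LLM call (e.g. via your provider's SDK)."""
    return f"[{model}] {task}: {document[:40]}"

def analyze_legal_document(document: str) -> dict:
    results = {}
    # 1. Quick initial summary with a fast, inexpensive model
    results["summary"] = run_model("gpt-3.5-turbo", "summary", document)
    # 2. Detailed logical analysis with a reasoning model
    results["reasoning"] = run_model("deepseek-reasoner", "logical analysis", document)
    # 3. Ethical review pass
    results["ethics"] = run_model("claude-3-haiku", "ethical review", document)
    # 4. Synthesis step comparing the three perspectives
    results["synthesis"] = "\n".join(results.values())
    return results
```

In a real integration, each `run_model` call would go to the corresponding provider's API and the synthesis step might itself be a final model call.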

Default Model

Locus Model

  • Type: Default model pointer
  • Points to: GPT-4o
  • Temperature: 0.1 (default)
  • Best for: General Locus AI usage
  • Key features:
    • Same capabilities as GPT-4o
    • Default model selection
    • Automatically uses optimal settings
    • Seamless fallback option

TIP

The Locus model is a pointer to our default model, currently set to GPT-4o. This ensures you're always using our recommended model configuration for general usage.

OpenAI

OpenAI provides state-of-the-art language models known for their versatility and powerful capabilities. To use OpenAI models, you'll need an API key from OpenAI. Learn how to get your API key.

Available Models

GPT-4o

  • Type: Premium model
  • Temperature: 0.1 (default)
  • Best for: High-precision tasks requiring advanced reasoning
  • Key features:
    • Maximum accuracy and reliability
    • Complex problem-solving capabilities
    • Detailed analysis and generation

GPT-4o-mini

  • Type: Optimized variant
  • Temperature: 0.7 (default)
  • Best for: Balance of performance and efficiency
  • Key features:
    • Faster processing times
    • More creative outputs
    • Cost-effective for routine tasks

GPT-3.5-turbo-1106

  • Type: Fast, efficient model
  • Temperature: 0.7 (default)
  • Best for: Quick, general-purpose tasks
  • Key features:
    • Rapid response times
    • Cost-effective processing
    • Good for most common use cases

Deepseek

Deepseek offers specialized models for chat and reasoning tasks, with both hosted and user-key options available. Learn how to get your API key.

Available Models

Deepseek Chat

  • Type: Versatile chat model
  • Temperature: 0.1 (default)
  • Context Window: 64,000 tokens
  • Key features:
    • Large context window
    • JSON output support
    • Available with user API key option
    • Fast response times

Deepseek Reasoner

  • Type: Specialized reasoning model
  • Temperature: 0.1 (default)
  • Context Window: 64,000 tokens
  • Key features:
    • Enhanced reasoning capabilities
    • JSON structured outputs
    • Backup model support
    • Extended timeout (300s)

Deepseek Reasoner US/EU

  • Type: Region-specific deployment
  • Temperature: 0.1 (default)
  • Note: Slower processing speed (🐌)
  • Key features:
    • Geographic optimization
    • Backup to Locus model
    • Extended timeout (6000s)
    • Hosted on DeepInfra

Anthropic

Anthropic's Claude models are known for their strong analytical capabilities and ethical considerations. You'll need an Anthropic API key to use these models. Learn how to access the Claude API.

Available Models

Claude 3 Haiku

  • Type: Latest generation model
  • Temperature: 0.8 (default)
  • Best for: Balanced performance and creativity
  • Key features:
    • Strong analytical capabilities
    • Excellent instruction following
    • Ethical considerations built-in
    • Newer, more capable version

Google Gemini

Google's Gemini models offer cutting-edge AI capabilities with fast processing times. API access is required for integration.

Available Models

Gemini 1.5 Flash

  • Type: High-speed model
  • Temperature: 0.8 (default)
  • Best for: Quick processing of large amounts of data
  • Key features:
    • Ultra-fast processing
    • Optimized for efficiency
    • Good for bulk operations

Choosing the Right Model

When selecting a model, consider:

  1. Task Requirements

    • Complexity of the task
    • Required accuracy level
    • Processing speed needs
    • Context window requirements
  2. Resource Considerations

    • API costs
    • Response time requirements
    • Temperature settings for creativity vs precision
    • Geographic requirements (for region-specific models)
  3. Integration Needs

    • API key requirements
    • Service availability
    • Temperature customization options
    • JSON output requirements
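The criteria above can be folded into a small selection helper. This is an illustrative sketch, not an official recommendation table; the model names come from the sections above, and the priority order is an assumption.

```python
def pick_model(*, needs_reasoning: bool = False,
               needs_speed: bool = False,
               needs_large_context: bool = False) -> str:
    """Pick a model name based on simple task requirements (illustrative)."""
    if needs_large_context:
        return "deepseek-chat"       # 64,000-token context window
    if needs_reasoning:
        return "gpt-4o"              # high-precision reasoning tasks
    if needs_speed:
        return "gemini-1.5-flash"    # ultra-fast bulk processing
    return "locus"                   # default pointer (currently GPT-4o)
```

A real router might also weigh API cost, regional requirements, and whether JSON output is needed.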

Glossary of Terms

Core Concepts

Temperature

  • What it is: A parameter that controls the randomness/creativity of the model's responses
  • Scale: 0.0 to 1.0
  • Low temperature (0.0-0.3):
    • More focused and deterministic outputs
    • Better for tasks requiring accuracy and consistency
    • Ideal for: fact-based Q&A, analysis, summarization
  • High temperature (0.7-1.0):
    • More creative and varied outputs
    • Better for tasks requiring creativity and exploration
    • Ideal for: brainstorming, creative writing, idea generation
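Under the hood, temperature divides the model's raw scores (logits) before they are turned into sampling probabilities. The toy example below shows the effect on a three-token distribution; note that a temperature of exactly 0 is usually treated as greedy (always pick the top token) rather than dividing by zero.

```python
import math

def apply_temperature(logits, temperature):
    """Convert logits to sampling probabilities at a given temperature.

    Lower temperatures sharpen the distribution toward the most likely
    token; higher temperatures flatten it, producing more varied output.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                               # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
low = apply_temperature(logits, 0.1)   # near-deterministic: top token dominates
high = apply_temperature(logits, 1.0)  # flatter: other tokens stay plausible
```

At temperature 0.1 the top token takes essentially all the probability mass, which is why low settings are recommended above for analysis and summarization.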

Context Window

  • What it is: The maximum amount of text the model can process in a single interaction
  • Measured in: Tokens (roughly 4 characters per token)
  • Importance:
    • Determines how much text can be analyzed at once
    • Affects the model's ability to maintain context
    • Larger windows allow for processing longer documents
  • Usage considerations:
    • Longer context = more thorough analysis but slower processing
    • Shorter context = faster processing but may miss broader context
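Using the rough 4-characters-per-token heuristic above, you can estimate whether a document will fit a model's context window before sending it. Exact counts require the provider's tokenizer; this is only a pre-check, and the 1,024-token output reserve is an assumption.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters/token heuristic."""
    return max(1, len(text) // 4)

def fits_context(text: str, context_window: int,
                 reserve_for_output: int = 1024) -> bool:
    """Check whether a prompt likely fits, leaving room for the response."""
    return estimate_tokens(text) + reserve_for_output <= context_window

# e.g. against Deepseek Chat's 64,000-token window:
doc = "x" * 200_000                  # ~50,000 estimated tokens
fits_context(doc, 64_000)
```

For documents that fail this check, split the text into chunks and summarize each chunk separately before a final synthesis pass.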

Additional Terms

API Base

  • The endpoint URL where the model service is hosted
  • Different bases may affect response times and regional compliance

Max Tokens

  • Maximum length of the response the model can generate
  • Helps control costs and response times
  • Should be set based on your output needs

Timeout

  • Maximum time allowed for the model to respond
  • Longer timeouts allow for more thorough processing
  • Shorter timeouts ensure responsive user experience

Response Format

  • Specifies how the model should structure its output
  • JSON format enables structured data processing
  • Free text format for natural language responses
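When a model is configured for JSON output (as the Deepseek models above support), downstream code typically validates the response before using it. A minimal sketch:

```python
import json

def parse_json_response(raw: str) -> dict:
    """Validate that a model's response is a JSON object before use."""
    data = json.loads(raw)           # raises ValueError on malformed output
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    return data

parse_json_response('{"verdict": "ok", "score": 0.9}')
```

Even with JSON mode enabled, defensive parsing like this guards against truncated or malformed responses.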

Backup Model

  • Fallback option if primary model is unavailable
  • Ensures service continuity
  • May have different capabilities or performance characteristics
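The backup-model behavior described above can be sketched as a simple fallback wrapper. The `call` signature is hypothetical; a real integration would also distinguish retryable errors (timeouts, rate limits) from permanent ones.

```python
def with_backup(call, primary: str, backup: str, prompt: str) -> str:
    """Try the primary model; fall back to the backup if it fails.

    `call` is any function that invokes a model and raises on failure.
    """
    try:
        return call(primary, prompt)
    except Exception:
        # Primary unavailable: fall back. The backup may differ in
        # capability or performance, as noted above.
        return call(backup, prompt)
```

This mirrors, for example, the Deepseek Reasoner configuration above, which lists the Locus model as its backup.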