LLM Setup



GenAI Studio offers a wide range of LLM providers for chatting and generative AI. Depending on your chosen provider, additional configuration may be necessary.

Supported Language Model Providers

Local Language Model Providers

  • Ollama (Built-in by default)
  • LM Studio
  • LocalAI
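Local providers such as Ollama expose an HTTP API on the machine running GenAI Studio. As a quick sanity check that the built-in Ollama instance is reachable, you can assemble the request it serves. A minimal sketch, assuming Ollama's stock defaults (port `11434`, a pulled `llama3` model — these are Ollama conventions, not GenAI Studio settings):

```python
import json
import urllib.request

# Ollama's default local endpoint (assumption: stock install, port unchanged).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_ollama_request(model: str, prompt: str) -> urllib.request.Request:
    """Assemble a non-streaming generate request for a local Ollama server."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_ollama_request("llama3", "Hello")
# Sending it requires a running Ollama instance:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

If the request fails with a connection error, the local provider is not running or is listening on a different port.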

Cloud Language Model Providers

  • OpenAI
  • Azure OpenAI
  • AWS Bedrock
  • Anthropic
  • Cohere
  • Google Gemini Pro
  • Hugging Face
  • Together AI
  • OpenRouter
  • Perplexity AI
  • Mistral API
  • Groq
  • KoboldCPP
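Cloud providers require an API key from the provider's own console, and several in the list above (OpenAI, Groq, OpenRouter, Together AI, Mistral) expose OpenAI-compatible chat endpoints, so switching between them is largely a matter of changing the base URL and key. A hedged sketch of that shared request shape — the base URLs below are the providers' publicly documented defaults, and the model name is only an example; GenAI Studio's own configuration screens may differ:

```python
import json
import os
import urllib.request

# Publicly documented OpenAI-compatible base URLs (assumptions, not
# GenAI Studio settings).
BASE_URLS = {
    "openai": "https://api.openai.com/v1",
    "groq": "https://api.groq.com/openai/v1",
    "openrouter": "https://openrouter.ai/api/v1",
}

def build_chat_request(
    provider: str, model: str, message: str, api_key: str
) -> urllib.request.Request:
    """Assemble an OpenAI-style chat completion request for a cloud provider."""
    payload = {"model": model, "messages": [{"role": "user", "content": message}]}
    return urllib.request.Request(
        f"{BASE_URLS[provider]}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Key issued by the provider's console; read from the environment.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_chat_request(
    "openai", "gpt-4o-mini", "Hello",
    os.environ.get("OPENAI_API_KEY", "sk-placeholder"),
)
```

Non-OpenAI-compatible providers (AWS Bedrock, Anthropic, Google Gemini Pro) use their own request formats and authentication schemes, which is why those providers need additional configuration when selected.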