About GenAI Studio

GenAI Studio is more than just an LLM fine-tuning tool; it's a powerful AI platform. Built on the foundation of AnythingLLM and enhanced with customizable features and efficient fine-tuning capabilities, GenAI Studio empowers users to rapidly develop AI models tailored to specific needs across a wide range of applications.

Key Features of GenAI Studio:

  • Built upon AnythingLLM:

    • Leverages AnythingLLM's robust model connectivity and RAG system for a solid foundation.

    • Provides a stable and reliable base for working with LLMs.

  • Extensive Customization:

    • Advantech AI Assistance: Integrates Advantech DeviceOn, EPD, SUSI domain applications, and top AI applications from Hugging Face.

    • Text-to-image, text-to-speech: Expands model capabilities to enable multi-modal generation.

    • Customizable RAG and RAGOps:

      • Offers chunk inspection and editing for precise control over generated content.

      • Supports customized chunk segmentation to cater to diverse application needs (see the chunking sketch after this list).

      • Allows each workspace to utilize distinct embedding and chat models.

      • Automatically synchronizes source documents, updating or removing their content in the knowledge base.

  • Efficient Model Fine-tuning:

    • Phison aiDAPTIV+ technology: Enables efficient full-parameter fine-tuning of 7B, 13B, 33B, and 70B language models even with limited GPU resources.

    • Accelerated model development: Shortens development cycles and improves efficiency.

  • Flexible Model Deployment:

    • Supports local and cloud-based models: Seamlessly integrates with local models served via Ollama or cloud-based models (e.g., OpenAI, Gemini, Anthropic); see the connection sketch after this list.

    • Traditional RAG support: Compatible with conventional Retrieval-Augmented Generation (RAG) techniques.
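
The chunk segmentation and per-workspace model settings above are configured through GenAI Studio's interface. As a point of reference only, the sketch below shows what customized chunk segmentation means in practice: a document is split into overlapping windows before embedding. It is not GenAI Studio code, and the chunk_size/overlap values are hypothetical, not product defaults.

```python
# Illustrative sketch only: a generic fixed-size chunker with overlap,
# showing the kind of segmentation policy that "customized chunk
# segmentation" refers to. Not GenAI Studio's implementation.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# A 1,200-character document yields three overlapping chunks.
doc = "x" * 1200  # placeholder document text
print(len(chunk_text(doc)))  # -> 3
```

Smaller chunks with more overlap tend to sharpen retrieval for short factual answers, while larger chunks preserve surrounding context; being able to tune this per application is the point of the feature.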
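
The local/cloud flexibility follows a common pattern: Ollama exposes an OpenAI-compatible HTTP endpoint, so the same chat-completion call can target a local model or a cloud provider by changing the connection settings. The sketch below illustrates that pattern with the openai Python client; it is not GenAI Studio's own API, and the model names are only examples.

```python
# Illustrative sketch only: switching between a local Ollama model and an
# OpenAI cloud model by changing client configuration. GenAI Studio manages
# these providers through its System Configuration pages.
from openai import OpenAI

# Local model served by Ollama's OpenAI-compatible endpoint
# ("llama3" is an example model name; the API key is unused but required).
local_client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Cloud model served by OpenAI (reads OPENAI_API_KEY from the environment).
cloud_client = OpenAI()

messages = [{"role": "user", "content": "Summarize what RAG is in one sentence."}]

local_reply = local_client.chat.completions.create(model="llama3", messages=messages)
cloud_reply = cloud_client.chat.completions.create(model="gpt-4o-mini", messages=messages)

print(local_reply.choices[0].message.content)
print(cloud_reply.choices[0].message.content)
```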