Overview
Track fine-tuning and deployment tasks with status updates and progress metrics.
Interactive task cards show the best Loss Rate, allow viewing historical records, and provide action buttons for common operations and status updates.
Visualized Presentation: The Card View presents a summary of each Finetune task in the form of individual cards. This visual approach is more intuitive and makes it easy to quickly browse and identify different tasks.
Focused Key Information: Each card highlights the key metrics of a task, such as the Finetune Result (Success or Failed), best Loss Rate, training progress (e.g., Epoch number), elapsed time, and occupied disk space.
Suitable Scenarios: The Card View is well-suited for an overview of a small to medium number of Finetune tasks, allowing users to quickly understand the current status and main performance indicators of each task. Users can easily compare the results of different tasks at a glance.
Operational Convenience: The Card View provides action buttons directly on each card, such as viewing details or canceling a task, enabling quick interaction.
Structured Data Display: The Table View presents detailed information for all Finetune tasks in a clear row and column structure. Each row represents a task, and each column represents a different attribute or metric of that task.
Comprehensive Information: The Table View can display more detailed information than the Card View, such as Task Name, Finetune Result, Best Loss Rate, Deploy Result, Disk Occupied, Schedule Information, Owner, and Action options.
Suitable Scenarios: The Table View is ideal for managing a large number of Finetune tasks, allowing users to quickly find specific tasks or compare detailed differences between tasks through sorting and filtering functionalities.
Data Comparison and Analysis: The Table View makes detailed comparison and analysis easier, such as comparing the best loss rates and resource consumption of different tasks.
The Card View emphasizes visualization and quick overview, suitable for rapidly grasping the status of a smaller number of tasks.
The Table View emphasizes structured and detailed information display, suitable for managing a large number of tasks and detailed data analysis.
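For readers who want to relate these columns to data, the sketch below shows one plausible per-task record behind the Table View. It is illustrative only; the interface name and every field name (e.g. FinetuneTask, bestLossRate, diskOccupiedGiB) are assumptions, not the product's actual API.

```typescript
// Hypothetical shape of one row in the Table View; names are illustrative.
interface FinetuneTask {
  taskName: string;
  finetuneResult: "Success" | "Failed" | "Running"; // "Running" is an assumed in-progress state
  bestLossRate?: number;   // minimum loss across epochs; absent until training reports one
  deployResult?: string;   // outcome of the deployment step, if any
  diskOccupiedGiB: number; // disk space occupied by this task's checkpoints
  schedule?: string;       // schedule information, if the task was scheduled
  owner: string;           // user who created the task
}
```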
Clicking a task name opens a user interface for managing and tracking the history of Large Language Model (LLM) fine-tuning experiments. The interface displays a structured table of fine-tuning runs, allowing users to monitor, compare, and manage experiments effectively. This table is essential for those fine-tuning LLMs, providing clear insights into past experiments and their results.
Each fine-tuning record offers three actions on the far right:
Schedule: Set up a schedule based on this record's parameters.
Load Parameter and Finetune: Fine-tune the model again using the current parameters.
Download Logs: Download the training logs for this record.
For successful training runs, the Loss Rate column shows the minimum loss achieved. Click the link to view a detailed breakdown.
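In other words, the value in this column is simply the smallest per-epoch loss from the run's history. A minimal sketch, assuming a hypothetical EpochRecord shape rather than the product's actual data model:

```typescript
// Hypothetical per-epoch record; field names are illustrative.
interface EpochRecord {
  epoch: number;
  lossRate: number;
}

// The Loss Rate column for a successful run shows the minimum loss across all epochs.
function bestLoss(epochs: EpochRecord[]): number | undefined {
  if (epochs.length === 0) return undefined;
  return Math.min(...epochs.map((e) => e.lossRate));
}
```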
The Epoch List page displays detailed information about each training epoch, including its loss rate, size, deployment quantity, and available actions. Below are the key components of the interface and their descriptions:
Epoch:
Represents the specific training epoch.
Loss Rate:
Displays the loss rate for each epoch, providing insight into the model’s training performance.
Values are shown in decimal format for precision.
Size:
Indicates the size of the model for each epoch, measured in GiB.
Deployment Quantity:
Shows how many instances of the epoch model have been deployed.
A value of 0 means the model has not been deployed, while other values indicate the number of deployments.
Actions:
Delete: Allows users to delete the corresponding epoch, freeing up disk space.
Import to Ollama's Inference Repo: Imports the epoch's model into Ollama's inference repository.
Create Workspace with this Inference: Creates a workspace that uses this epoch's model for inference.
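Putting the columns and actions together, one plausible shape for an Epoch List row is sketched below. The type and field names are assumptions made for illustration, not the product's API.

```typescript
// Hypothetical Epoch List row; names are illustrative.
type EpochAction =
  | "Delete"
  | "Import to Ollama's Inference Repo"
  | "Create Workspace with this Inference";

interface EpochRow {
  epoch: number;              // training epoch index
  lossRate: number;           // decimal loss value for this epoch
  sizeGiB: number;            // checkpoint size in GiB
  deploymentQuantity: number; // 0 means not deployed; otherwise the number of deployments
  actions: EpochAction[];     // actions available for this epoch
}
```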