Fine-Tuning LLMs in .NET Applications


๐ŸŽฏ Purpose of the Sample

The Fine-Tuning Demo showcases how to use the LM-Kit.NET SDK to fine-tune large language models (LLMs) for specific tasks, such as sentiment analysis, sarcasm detection, and functioning as a chemistry assistant, thereby enhancing their performance on each task.


๐Ÿ‘ฅ Industry Target Audience

This sample is particularly beneficial for developers and organizations in the following sectors:

  • ๐Ÿ”ฌ Machine Learning and AI Research: Researchers looking to optimize models for specific tasks.
  • ๐Ÿ’ป Software Development: Developers aiming to integrate task-specific language models into their applications.
  • ๐Ÿ“ž Customer Support: Teams enhancing automated support systems with models fine-tuned for specific queries and responses.
  • ๐Ÿซ Education: Developers building educational tools backed by models fine-tuned to provide accurate and relevant information.

๐Ÿš€ Problem Solved

Pre-trained language models are powerful, but without further training their performance on specialized tasks can fall short. The Fine-Tuning Demo addresses this by demonstrating how to fine-tune models for specific applications, significantly improving their accuracy and effectiveness.


๐Ÿ’ป Sample Application Description

The Fine-Tuning Demo is a console application that allows users to fine-tune language models for specific tasks. The demo includes fine-tuning experiments for sentiment analysis, sarcasm detection, and chemistry assistance.

โœจ Key Features

  • ๐Ÿ“ˆ Model Fine-Tuning: Fine-tune language models for specific tasks using the LoRA (Low-Rank Adaptation) technique.
  • ๐Ÿ” Task-Specific Training: Demonstrates fine-tuning for tasks such as sentiment analysis, sarcasm detection, and chemistry assistance.
  • ๐Ÿ“Š Progress Tracking: Displays progress, loss, and accuracy metrics during the fine-tuning process.
  • ๐Ÿ’พ Checkpointing: Supports saving and loading training checkpoints to resume training sessions.
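The features above can be sketched as a single training flow in C#. Note this is an illustrative outline only, not the verified LM-Kit.NET API: the type and member names used here (`LoadModel`, `LoraFinetuning`, `LoraRank`, `LoadTrainingData`, `ProgressChanged`, `Train`, `SaveCheckpoint`) are hypothetical placeholders; consult the SDK reference for the actual surface.

```csharp
using System;

// Hypothetical sketch of a LoRA fine-tuning run. Every LM-Kit-like
// identifier below is an illustrative placeholder, not verified API.
class FinetuningSketch
{
    static void Main()
    {
        var model = LoadModel("tinyllama-1.1b-1t-openorca");  // base model (placeholder loader)

        var trainer = new LoraFinetuning(model)               // hypothetical trainer type
        {
            LoraRank   = 16,                                  // size of the low-rank adapters
            BatchSize  = 8,
            Iterations = 4000
        };

        trainer.LoadTrainingData("sentiment_samples.txt");    // task-specific dataset

        trainer.ProgressChanged += p =>                       // progress, loss, and accuracy reporting
            Console.WriteLine($"iter {p.Iteration}: loss={p.Loss:F4} acc={p.Accuracy:P1}");

        trainer.Train();                                      // run the LoRA adaptation
        trainer.SaveCheckpoint("sentiment.checkpoint");       // resumable training state
    }
}
```

The design point worth noting is that LoRA trains small adapter matrices rather than the full model weights, which is why fine-tuning a 1.1B-parameter model is feasible on modest hardware.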

๐Ÿง  Supported Models

The sample supports the following model for fine-tuning:

  • TinyLLaMA 1.1B 1T OpenOrca

๐Ÿ› ๏ธ Getting Started

๐Ÿ“‹ Prerequisites

  • .NET Framework 4.6.2 or .NET 6.0

โ–ถ๏ธ Running the Application

  1. ๐Ÿ“‚ Clone the repository:

    git clone https://github.com/LM-Kit/lm-kit-net-samples.git
    
  2. ๐Ÿ“ Navigate to the project directory:

    cd lm-kit-net-samples/console_framework_4.62/finetuning
    

    or

    cd lm-kit-net-samples/console_net6/finetuning
    
  3. ๐Ÿ”จ Build and run the application:

    dotnet build
    dotnet run
    
  4. ๐Ÿ” Select the Fine-Tuning Experiment:

    • Uncomment the desired fine-tuning experiment in Program.cs.
    • Available experiments: Sentiment Analysis, Sarcasm Detection, Chemistry Assistant.

๐Ÿ’ก Example Usage

  1. Set the License Key (if available):

    LMKit.Licensing.LicenseManager.SetLicenseKey(""); // Set an optional license key here if available.
    
  2. Uncomment the Desired Experiment:

    // Uncomment the fine-tuning experiment you want to run:
    SentimentAnalysisFinetuning.RunTraining();
    //SarcasmDetectionFinetuning.RunTraining();
    //ChemistryAssistantFinetuning.RunTraining();
    
  3. Run the Application:

    dotnet run
    
  4. Monitor Progress: The console will display the progress, loss, and accuracy metrics during the fine-tuning process.

๐Ÿ› ๏ธ Special Commands

  • Resume Training: Load training checkpoints to resume previous training sessions.
  • Early Stop Conditions: Automatically stop training based on loss or maximum training duration.
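Conceptually, these two commands could be wired up as follows. As in the earlier sketch, every identifier here (`LoraFinetuning`, `LoadCheckpoint`, `StopOnLoss`, `MaxTrainingTime`, `Train`) is a hypothetical placeholder, not a confirmed LM-Kit.NET member:

```csharp
using System;

// Illustrative placeholders only; not verified LM-Kit.NET API.
var trainer = new LoraFinetuning(model);

// Resume Training: restore adapter and optimizer state from a prior run.
trainer.LoadCheckpoint("sentiment.checkpoint");

// Early Stop Conditions: halt on a loss target or a wall-clock budget.
trainer.StopOnLoss = 0.05;                          // stop once loss falls below this value
trainer.MaxTrainingTime = TimeSpan.FromHours(4);    // or after this much training time

trainer.Train();
```

Checkpointing matters here because fine-tuning runs can span thousands of iterations; being able to resume means an interrupted session does not discard hours of training.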

๐Ÿ”ฌ Fine-Tuning Experiments

๐Ÿงพ Sentiment Analysis Fine-Tuning

  • Purpose: Enhance the accuracy of LMKit's sentiment analysis engine, using a tiny LLaMA model.
  • Initial Accuracy: ~46%
  • Target Accuracy: 95% - 98%

๐Ÿ—จ๏ธ Sarcasm Detection Fine-Tuning

  • Purpose: Improve the accuracy of LMKit's sarcasm detection engine, using a tiny LLaMA model.
  • Initial Accuracy: ~50%
  • Target Accuracy: 85%+

โš—๏ธ Chemistry Assistant Fine-Tuning

  • Purpose: Fine-tune a small LLaMA model to function as a chemistry assistant.
  • Initial Accuracy: 16.67%
  • Target Accuracy:
    • Windows: 25.93% at iteration 3651
    • macOS: 38.89% at iteration 2570

By following these steps, developers can explore the functionalities of LM-Kit.NET and integrate advanced fine-tuning techniques into their applications, enhancing the accuracy and relevance of language models for specific tasks.