🧠 Choosing the Right Model for Your Use Case and Hardware

LM-Kit.NET provides flexibility and scalability, allowing you to integrate generative AI models into various .NET applications. However, achieving the desired performance requires choosing the right model and hardware setup for your use case, hardware capabilities, and optimization preferences. This guide covers the key factors to weigh when selecting a model for your specific needs.


🔍 Overview of LLM Objects in LM-Kit.NET

In LM-Kit.NET, most classes (such as MultiTurnConversation, SingleFunctionCall, TextTranslation, Categorization, or SentimentAnalysis) require an LLM (Large Language Model) object as a parameter. The LLM object is essentially a wrapper around the language model your application will use.

Currently, LM-Kit.NET supports models in the GGUF format, a highly efficient format for storing quantized and non-quantized models. For more details on GGUF, see the official GGUF documentation.

Here's an example of how you can load a GGUF model into LM-Kit.NET:

// Load a GGUF model from a local file path.
var model = new LMKit.Model.LLM("my-model.gguf");

The performance of your application depends on several factors, including the model you choose, the quantization method, and your hardware configuration. Let's break these factors down.


💡 Factors to Consider When Choosing a Model

When selecting a model for your use case, you should take the following into account:

1. Use Case and Intent

The model you choose should align with the type of task you're solving. For example:

  • For simple classification tasks, a smaller model with fewer parameters can be sufficient and even preferable for faster inference.
  • For more complex tasks, such as multi-turn conversations or text generation, larger models may yield more accurate and sophisticated responses.

If you're performing basic classification tasks, a small model like Phi3.5 running on a CPU will generally provide satisfactory performance.
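
For instance, here is a minimal sketch of such a classification task using the SentimentAnalysis class mentioned above. The model file name is illustrative, and the namespace and method shown are assumptions; check the API reference for the exact signatures:

// Illustrative small-model file name; any compact GGUF build works here.
var model = new LMKit.Model.LLM("phi-3.5-mini-Q4_K_M.gguf");

// Assumed namespace and method name; verify against the LM-Kit.NET API reference.
var classifier = new LMKit.TextAnalysis.SentimentAnalysis(model);
var sentiment = classifier.GetSentimentCategory("The delivery was fast and the product works great!");
Console.WriteLine(sentiment);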

2. Hardware Capabilities

The size of the model is directly related to the hardware resources required:

  • Small, tiny, or mini models (fewer than 3 billion parameters) can deliver acceptable inference times even on modern CPUs.
  • Larger models (6 billion parameters or more) typically require GPUs with at least 5 to 6 GB of VRAM for smooth inference. For instance, an 8B model would be ideal if you have access to a GPU with sufficient VRAM, providing faster responses than CPU-based inference.

LM-Kit.NET also supports multi-GPU setups, allowing you to scale up depending on your hardware availability.
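
As a sketch, this sizing rule can be encoded when deciding which GGUF file to load. The VRAM figure, helper function, and model file names below are hypothetical; substitute your own hardware detection:

// Hypothetical helper: plug in real VRAM detection for your target machines.
static string PickModelPath(double availableVramGB) =>
    availableVramGB >= 6
        ? "llama-3.1-8b-instruct-Q4_K_M.gguf"  // 8B-class model: needs roughly 5 to 6 GB of VRAM
        : "phi-3.5-mini-Q4_K_M.gguf";          // sub-3B model: acceptable speed on modern CPUs

var model = new LMKit.Model.LLM(PickModelPath(availableVramGB: 8));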

3. Quantization Mode

Quantization reduces the model size and memory footprint without sacrificing too much accuracy. LM-Kit.NET supports multiple quantization levels, and for most use cases, we recommend 4-bit quantization as it provides a good balance between model accuracy and computational efficiency.

This ensures faster performance, especially on devices with limited resources, such as CPUs or GPUs with lower VRAM capacities.
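
In practice, the quantization level is usually visible in the GGUF file name. The file name below is illustrative and follows the common convention where a Q4_K_M suffix marks a 4-bit (K-quant, medium) build:

// Illustrative name: the Q4_K_M suffix denotes 4-bit quantization.
var model = new LMKit.Model.LLM("my-model-Q4_K_M.gguf");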

4. Context Size

Another critical aspect is the context size, that is, the amount of input text the model can process in one inference cycle. Larger models generally support bigger context sizes, which is beneficial for tasks that require long-form text comprehension or multi-turn conversations. However, large context sizes also demand more memory and computation, so make sure your hardware can handle them.

When dealing with limited memory, it is advisable to:

  • Reduce the context size (see the sketch after this list),
  • Use a smaller model,
  • Or upgrade your hardware so it can accommodate the larger context sizes that complex tasks demand.
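
Here is a minimal sketch of capping the context size on a model you have already loaded, assuming the MultiTurnConversation constructor accepts a context-size argument (the namespace and parameter name are assumptions; verify them against the API reference for your version):

// Assumption: contextSize caps how many tokens one inference cycle can process;
// smaller values reduce the memory footprint on constrained hardware.
var chat = new LMKit.TextGeneration.MultiTurnConversation(model, contextSize: 2048);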

🔧 Benchmarking and Experimentation with LM-Kit.NET

LM-Kit.NET has been engineered with various optimizations to improve inference times across different hardware configurations. Given the variability of tasks and hardware setups, the best approach is to experiment with different models and hardware combinations. LM-Kit.NET makes this easy by allowing you to test various configurations quickly.

You can access all the benchmarked and validated models in our Hugging Face repository. This repository contains models of different sizes, quantization modes, and contexts, ensuring you have a range of options to find the best fit for your needs.

Testing these models across your target devices lets you compare performance and pick the most suitable configuration for your use case.
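
For instance, a simple timing harness can compare candidate models on your own hardware. The model file names are illustrative, and the MultiTurnConversation and Submit calls are assumptions based on the classes mentioned earlier:

using System.Diagnostics;

// Illustrative candidates downloaded from the Hugging Face repository.
string[] candidates = { "small-model-Q4_K_M.gguf", "large-model-Q4_K_M.gguf" };

foreach (var path in candidates)
{
    var model = new LMKit.Model.LLM(path);
    var chat = new LMKit.TextGeneration.MultiTurnConversation(model);

    var sw = Stopwatch.StartNew();
    chat.Submit("Summarize the benefits of 4-bit quantization in one sentence.");
    sw.Stop();

    Console.WriteLine($"{path}: {sw.ElapsedMilliseconds} ms");
}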


🔄 Performance Optimizations with LM-Kit.NET

LM-Kit.NET includes a wide range of optimization strategies designed to enhance model performance across various hardware configurations. While many of these strategies are built-in and operate seamlessly, some can be manually configured for greater control, such as:

  • Key-Value Cache Recycling: Reuse cached key-value data and minimize recomputation, improving memory efficiency during inference.
  • Token Healing: Repair the token boundary between the prompt and the generated text, improving completion quality (enabled in the snippet below).
  • Model Cache: Control how models are stored in memory to balance performance and resource usage.

These options, along with several others, can be fine-tuned using the LMKit.GlobalConfiguration class:

// Reuse key-value caches to avoid recomputing shared prompt prefixes.
LMKit.GlobalConfiguration.EnableKVCacheRecycling = true;

// Repair the prompt/completion token boundary for cleaner generations.
LMKit.GlobalConfiguration.EnableTokenHealing = true;

While these strategies are user-configurable, LM-Kit.NET incorporates many additional internal optimizations designed to improve inference speed and reduce resource consumption without requiring user intervention. These optimizations ensure that LM-Kit.NET is well-suited to a variety of devices, from CPUs to multiple GPU cards, and scales effectively across different hardware configurations.


🛠 Experimentation and Feedback

LM-Kit.NET makes it simple to experiment with various configurations. By testing different model sizes, quantization methods, and hardware setups, you can quickly find the optimal configuration for your specific use case.

We are committed to improving the speed and efficiency of LM-Kit.NET. If you have specific performance expectations or run into challenges, reach out to our team at LM-Kit Support and share your feedback; we continuously work on optimizations to enhance the user experience.