AI-Powered Named Entity Recognition for .NET Applications
🎯 Purpose of the Sample
This Named Entity Recognition (NER) Demo showcases how to use the LM-Kit.NET SDK to extract structured entities, such as people, organizations, dates, locations, email addresses, and more, from unstructured content in both text and image formats. It is designed to help developers automatically identify and classify real-world objects in natural language or visual documents.
The demo leverages the NamedEntityRecognition class, which abstracts complex language model behavior into a streamlined, high-level API. Using LM-Kit's robust multimodal inference and Dynamic Sampling technology, the NER engine supports accurate extraction from text, scanned PDFs, and even photographs of documents, all on-device and with low latency.
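As a rough sketch of that flow, the snippet below loads a model, creates the extractor, and prints each detected entity. Only the NamedEntityRecognition class name and the reported fields (type, value, confidence) come from this sample; the model-loading call, the Extract method, and the result member names are assumptions for illustration, so check the LM-Kit.NET API reference for the exact signatures:

    using System;
    using LMKit.Model;          // assumed namespace for model loading
    using LMKit.TextAnalysis;   // assumed namespace for NamedEntityRecognition

    class NerSketch
    {
        static void Main()
        {
            // Load a vision-capable model; the URI and constructor are illustrative.
            var model = new LM(new Uri("https://example.com/model.gguf")); // hypothetical

            var engine = new NamedEntityRecognition(model);

            // Run extraction on plain text; Extract() is an assumed method name.
            var entities = engine.Extract("Marie Curie joined the Sorbonne in 1906.");

            foreach (var entity in entities)
            {
                // Type, Value, and Confidence mirror the demo's console output,
                // but the exact property names are assumptions.
                Console.WriteLine($"{entity.Type}: \"{entity.Value}\" (confidence={entity.Confidence:0.00})");
            }
        }
    }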
👥 Industry Target Audience
This demo is especially suited for domains that routinely analyze high volumes of unstructured or semi-structured data:
- ⚖️ Legal & Compliance: Extract named parties, addresses, and dates from contracts and legal filings.
- 🏥 Healthcare & Life Sciences: Identify patient names, medication references, or temporal expressions in clinical notes.
- 📨 Contact Processing & CRM: Parse business cards, email threads, and meeting notes for contact information and company names.
- 📊 Business Intelligence: Structure market data with named products, organizations, or monetary values from reports.
- 📁 Government & Public Sector: Redact or categorize sensitive named entities in citizen documents or public records.
- 🕵️ Security & Intelligence: Detect IP addresses, URLs, and other identifiers in digital communication artifacts.
🚀 Problem Solved
Named entity extraction is traditionally complex due to diverse entity types, ambiguous contexts, and variable input quality (e.g., scanned vs. typed). Manual tagging is tedious and error-prone, especially when parsing documents at scale. The LM-Kit on-device NER demo solves this by offering:
- Automatic classification into structured types.
- Support for both textual and visual input.
- Flexibility to use predefined or custom entities.
- Fast, private, and offline-capable processing.
This enables more efficient automation of workflows in compliance, document indexing, search, anonymization, and data mining.
💻 Sample Application Description
The NER Demo is a console application that accepts the path to a text file or an image. It loads a selected language model, runs entity recognition on the input, and prints out the detected entities along with their type, confidence, and position (when possible).
✨ Key Features
- 📷 Text + Vision: Accepts plain text, images, or combined image and text input.
- 🏷️ Built-In & Custom Entities: Choose from built-in types like Person, Date, Organization, or define your own (e.g., DiseaseName).
- 📍 Positional Indexing: Returns the start and end positions of each entity in the original input (where applicable).
- 💬 Verbatim Output: Entities are returned exactly as written, with no normalization or paraphrasing.
- 📦 On-Device Processing: All inference is done locally, preserving privacy and avoiding cloud dependency.
🧠 Supported Models
This sample is compatible with all vision-enabled LM-Kit models, including:
- MiniCPM 2.6 o Vision 8.1B
- Alibaba Qwen 2.5 Vision (3B / 7B)
- Google Gemma 3 Vision (4B / 12B)
- Any custom model URI that supports multimodal inference
🛠️ Getting Started
📋 Prerequisites
- .NET 6.0 or higher
- ~3 GB or more of GPU VRAM (depending on model size)
- ImageMagick or similar tool if handling non-standard image formats
📥 Download the Project
- NER Demo Repository
(Note: Replace with the actual NER folder when ready)
▶️ Running the Application
📂 Clone the repository:
git clone https://github.com/LM-Kit/lm-kit-net-samples.git
📁 Navigate to the NER project directory:
cd lm-kit-net-samples/console_net/named_entity_recognition
🔨 Build and run the app:
dotnet build
dotnet run
💡 Example Usage
Set the License Key:
LMKit.Licensing.LicenseManager.SetLicenseKey(""); // Optional community license
Choose Your Model: The app will list predefined models or allow a custom model URI:
Please select the model you want to use:
0 - MiniCPM 2.6 o Vision 8.1B
1 - Alibaba Qwen 2.5 Vision 3B
...
Input the File Path: Provide the full path to either an image or text file:
> C:\Docs\passport_scan.png
Read the Output: The program will display each entity, its type, and confidence score:
7 detected entities | processing time: 00:00:01.342
Person: "Marie Curie" (confidence=0.96)
Organization: "Sorbonne" (confidence=0.92)
Date: "7 November 1867" (confidence=0.89)
🧩 Optional Customization
You can configure the extractor with custom entities like "DiseaseName" or "PatentNumber" by using:
engine.EntityDefinitions = new List<EntityDefinition>
{
    new EntityDefinition("DiseaseName"),
    new EntityDefinition("ICDCode")
};
This makes it ideal for vertical-specific use cases like medical NER or legal clause detection.
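For example, a medical pipeline might combine custom definitions with a confidence threshold. In the sketch below, only EntityDefinitions and the EntityDefinition constructor come from this sample; the Extract call, the clinicalNote variable, and the result property names are assumptions for illustration:

    // Restrict extraction to custom medical entities, then keep only
    // high-confidence hits (confidence is reported in the 0..1 range).
    engine.EntityDefinitions = new List<EntityDefinition>
    {
        new EntityDefinition("DiseaseName"),
        new EntityDefinition("ICDCode")
    };

    var results = engine.Extract(clinicalNote); // hypothetical method and input

    foreach (var entity in results)
    {
        if (entity.Confidence >= 0.8)
            Console.WriteLine($"{entity.Type}: \"{entity.Value}\"");
    }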
📚 Additional Notes
- Confidence Metric: Each result includes a confidence value between 0 and 1.
- OCR Support: If the input is an image, OCR is invoked automatically if the model and modality allow it.
- Fallback Handling: If positional indices are unavailable, entities are still extracted with type and value.