
TokenEvaluator.Net

Joseph
#opensource #nuget #developers #tokenization

Description

TokenEvaluator.Net is a simple, practical library for measuring the token count of a given text input according to the tokenizer of the language model the user specifies. This is crucial for efficient resource management when working with AI language models, such as OpenAI's GPT-3.5-turbo and others.

By providing a comprehensive and detailed evaluation of the token count, this library assists developers in understanding the cost, performance, and optimization aspects of their AI language model interactions.

Whether you’re running an AI chatbot, a content generator, or any application that leverages AI language models, understanding your token usage is fundamental. TokenEvaluator.Net fills this gap, offering a clear, accurate, and easy-to-use solution.

Features

  1. Precise token count calculations aligned with the specified language model
  2. Support for a diverse array of popular AI language models
  3. Efficient and lightweight architecture suitable for both integrated and standalone usage
  4. Open-source, fostering community contributions and ongoing enhancement

Unlock the power of accurate token counting with TokenEvaluator.Net - an essential tool for AI developers.

Supported Tokenizers

These are the currently supported tokenizers:

Supported Vision Models

These are the currently supported vision models:

Based on the OpenAI API documentation for vision-enabled models (as of 04/12/2023), the token count of an image depends on the size of the image and the detail option set on each image_url block.
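
As a rough illustration of those rules, the sketch below follows the tile-based calculation described in OpenAI's documentation: a flat 85 tokens in low-detail mode, and 85 base tokens plus 170 per 512-px tile in high-detail mode. The helper name and scaling steps here are drawn from that documentation and are not part of the TokenEvaluator.Net API.

using System;

public static class ImageTokenEstimator
{
    // Constants taken from the OpenAI vision documentation (as of 04/12/2023);
    // this helper is illustrative and is not part of the TokenEvaluator.Net API.
    private const int BaseTokens = 85;
    private const int TokensPerTile = 170;

    public static int Estimate(int width, int height, bool highDetail)
    {
        // detail: low costs a flat 85 tokens regardless of image size.
        if (!highDetail)
        {
            return BaseTokens;
        }

        // detail: high first scales the image to fit within a 2048x2048 square...
        var fitScale = Math.Min(1.0, 2048.0 / Math.Max(width, height));
        var w = width * fitScale;
        var h = height * fitScale;

        // ...then scales it so the shortest side is 768px...
        var shortScale = Math.Min(1.0, 768.0 / Math.Min(w, h));
        w *= shortScale;
        h *= shortScale;

        // ...and charges per 512x512 tile, plus the base cost.
        var tiles = (int)Math.Ceiling(w / 512.0) * (int)Math.Ceiling(h / 512.0);
        return BaseTokens + tiles * TokensPerTile;
    }
}

As a sanity check, a 1024x1024 image at high detail scales to 768x768, which is four 512-px tiles: 85 + 4 * 170 = 765 tokens, matching the example in OpenAI's documentation.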

NuGet Packages

TokenEvaluator.Net - available on NuGet

Getting Started

TokenEvaluator.Net can be used via dependency injection, or an instance can be created using a tightly-coupled factory class.

Dependency Injection

If you want to be able to inject an instance of this client into multiple classes, you can make use of the library's dependency injection extension to add all of the required interfaces and implementations to your service collection.

using Microsoft.Extensions.DependencyInjection;
using TokenEvaluator.Net;
using TokenEvaluator.Net.Dependency;

// Init a service collection, then use the extension method to add the library services.
IServiceCollection services = new ServiceCollection();
services.AddTokenEvaluatorNetServices();
services.AddSingleton<ITokenEvaluatorClient, TokenEvaluatorClient>();
var serviceProvider = services.BuildServiceProvider();

Then simply inject the service into your class constructors like so:

internal const string GeneratedText = "The quick, brown fox—enamored by the moonlit night—jumped over 10 lazily sleeping dogs near 123 Elm St. at approximately 7:30 PM. Isn't text tokenization interesting?";

public ClassConstructor(ITokenEvaluatorClient tokenClient)
{
    // Set the token encoding type directly...
    tokenClient.SetDefaultTokenEncoding(EncodingType.Cl100kBase);
    var encodingTokenCount = tokenClient.EncodedTokenCount(GeneratedText);

    // ...or choose a supported model
    tokenClient.SetDefaultTokenEncodingForModel(ModelType.TextDavinci003);
    var modelTokenCount = tokenClient.EncodedTokenCount(GeneratedText);
}
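
Alternatively, once the provider from the registration snippet above has been built, the client can be resolved directly. GetRequiredService is the standard Microsoft.Extensions.DependencyInjection call; the sample string here is just an illustration.

using System;
using Microsoft.Extensions.DependencyInjection;
using TokenEvaluator.Net;

// serviceProvider comes from the registration snippet above.
var tokenClient = serviceProvider.GetRequiredService<ITokenEvaluatorClient>();
tokenClient.SetDefaultTokenEncoding(EncodingType.Cl100kBase);
Console.WriteLine(tokenClient.EncodedTokenCount("Hello, tokens!"));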

Factory Implementation

Using this as a concrete, tightly-coupled implementation is fairly straightforward. Simply use the code below and all internal interface and service references will be initialised for you. This approach is harder to write tests against within your application, but it is ultimately the easiest way to get the client running.


using TokenEvaluator.Net;

var client = TokenEvaluatorClientFactory.Create();
client.SetDefaultTokenEncoding(EncodingType.Cl100kBase);

// GeneratedText is the same sample string used in the earlier examples.
var tokenCount = client.EncodedTokenCount(GeneratedText);

Unsafe Encoding

EncodedTokenCount allows the use of unsafe encoding. These methods use the unsafe keyword and are not recommended for production environments, but they are useful for benchmarking and testing purposes; refer to the Microsoft documentation for more information: Microsoft Docs: Unsafe code, pointer types, and function pointers.

The ‘unsafe’ parameter defaults to false, but can be set to true if required.

internal const string GeneratedText = "The quick, brown fox—enamored by the moonlit night—jumped over 10 lazily sleeping dogs near 123 Elm St. at approximately 7:30 PM. Isn't text tokenization interesting?";

public ClassConstructor(ITokenEvaluatorClient tokenClient)
{
    // Set the token encoding type directly...
    tokenClient.SetDefaultTokenEncoding(EncodingType.Cl100kBase);
    var encodingTokenCount = tokenClient.EncodedTokenCount(GeneratedText, unsafe: true, useParallelProcessing: false);

    // ...or choose a supported model
    tokenClient.SetDefaultTokenEncodingForModel(ModelType.TextDavinci003);
    var modelTokenCount = tokenClient.EncodedTokenCount(GeneratedText, unsafe: true, useParallelProcessing: false);
}

Parallel Encoding

The EncodedTokenCount, Encode, and Decode methods allow developers to make use of parallel processing (this utilises .NET's parallel threading library), which is useful for large text inputs and can significantly reduce the time taken to encode the text. This is not recommended for use in production environments, but is useful for benchmarking and testing purposes.

The ‘useParallelProcessing’ parameter defaults to true, but can be set to false if required.

internal const string GeneratedText = "The quick, brown fox—enamored by the moonlit night—jumped over 10 lazily sleeping dogs near 123 Elm St. at approximately 7:30 PM. Isn't text tokenization interesting?";

public ClassConstructor(ITokenEvaluatorClient tokenClient)
{
    // Set token encoding type
    tokenClient.SetDefaultTokenEncoding(EncodingType.Cl100kBase);
    var tokenCount = tokenClient.EncodedTokenCount(GeneratedText, unsafe: false, useParallelProcessing: true);
    var tokens = tokenClient.Encode(GeneratedText, useParallelProcessing: true);
    var decodedText = tokenClient.Decode(tokens, useParallelProcessing: true);
}
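
To gauge the benefit on your own inputs, a quick Stopwatch comparison is a reasonable sanity check. The sketch below uses only the factory and the useParallelProcessing parameter documented above; the repeated sample string is just a stand-in for a genuinely large input.

using System;
using System.Diagnostics;
using System.Linq;
using TokenEvaluator.Net;

var client = TokenEvaluatorClientFactory.Create();
client.SetDefaultTokenEncoding(EncodingType.Cl100kBase);

// A stand-in for a large input: the sample sentence repeated 1,000 times.
var largeText = string.Concat(Enumerable.Repeat("Isn't text tokenization interesting? ", 1000));

var sw = Stopwatch.StartNew();
var sequentialCount = client.EncodedTokenCount(largeText, useParallelProcessing: false);
Console.WriteLine($"Sequential: {sw.ElapsedMilliseconds} ms, {sequentialCount} tokens");

sw.Restart();
var parallelCount = client.EncodedTokenCount(largeText, useParallelProcessing: true);
Console.WriteLine($"Parallel:   {sw.ElapsedMilliseconds} ms, {parallelCount} tokens");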

Benchmarking

For the purposes of openness and transparency, a number of benchmark results are included below. The project used to run them is included within the Benchmark folder.

These results need some further review to determine realistically which approach is the most efficient, but they are included below for reference.
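
For readers who want to reproduce or extend these numbers, the sketch below shows roughly how a BenchmarkDotNet count-speed test of this kind is structured. It is a minimal illustration, not the repository's actual benchmark code: the class, method, and sample-text names are invented, and the assumption that TokenEvaluatorClientFactory.Create() returns an ITokenEvaluatorClient is noted in the comments.

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using TokenEvaluator.Net;

[MemoryDiagnoser]
public class CountSpeedBenchmarks
{
    // Assumes the factory result is assignable to ITokenEvaluatorClient.
    private ITokenEvaluatorClient _client = null!;

    // A stand-in corpus; the real benchmarks use their own sample text.
    private const string SampleText = "The quick, brown fox jumped over the lazy dog.";

    [GlobalSetup]
    public void Setup()
    {
        _client = TokenEvaluatorClientFactory.Create();
        _client.SetDefaultTokenEncoding(EncodingType.Cl100kBase);
    }

    [Benchmark]
    public int ManagedNonParallelCountSpeed() =>
        _client.EncodedTokenCount(SampleText, useParallelProcessing: false);

    [Benchmark]
    public int ManagedParallelCountSpeed() =>
        _client.EncodedTokenCount(SampleText, useParallelProcessing: true);
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<CountSpeedBenchmarks>();
}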

BenchmarkDotNet v0.13.6, Windows 11 (10.0.22621.1848/22H2/2022Update/SunValley2)
11th Gen Intel Core i7-11700K 3.60GHz, 1 CPU, 16 logical and 8 physical cores
.NET SDK 7.0.304
  [Host]     : .NET 7.0.7 (7.0.723.27404), X64 RyuJIT AVX2 [AttachedDebugger]
  DefaultJob : .NET 7.0.7 (7.0.723.27404), X64 RyuJIT AVX2

Token Count

| Method | Mean | Min | Q1 | Median | Max | Op/s | Gen0 | Gen1 | Allocated |
|---|---|---|---|---|---|---|---|---|---|
| TiktokenSharp CountSpeed | 1,574.6 μs | 1,526.2 μs | 1,541.8 μs | 1,563.4 μs | 1,639.0 μs | 635.1 | 248.0469 | 177.7344 | 2031.03 KB |
| SharpToken CountSpeed | 2,377.3 μs | 2,290.6 μs | 2,331.4 μs | 2,360.9 μs | 2,498.6 μs | 420.6 | 324.2188 | 226.5625 | 2658.29 KB |
| TokenEvaluatorNet Managed NonParallel CountSpeed | 1,621.9 μs | 1,543.0 μs | 1,605.4 μs | 1,616.0 μs | 1,693.9 μs | 616.6 | 238.2813 | 171.8750 | 1954.78 KB |
| TokenEvaluatorNet Unsafe NonParallel CountSpeed | 597.7 μs | 576.0 μs | 592.1 μs | 595.3 μs | 617.0 μs | 1,673.0 | 47.8516 | - | 391.77 KB |
| TokenEvaluatorNet Managed Parallel CountSpeed | 1,640.2 μs | 1,592.7 μs | 1,617.5 μs | 1,638.2 μs | 1,686.8 μs | 609.7 | 238.2813 | 171.8750 | 1954.78 KB |
| TokenEvaluatorNet Unsafe Parallel CountSpeed | 593.6 μs | 578.7 μs | 590.8 μs | 592.9 μs | 602.8 μs | 1,684.6 | 47.8516 | - | 391.77 KB |
| TikToken CountSpeed | 604.6 μs | 592.2 μs | 597.3 μs | 605.3 μs | 620.2 μs | 1,653.9 | 47.8516 | - | 391.74 KB |

Encode/Decode

| Method | Mean | Min | Q1 | Median | Max | Op/s | Gen0 | Gen1 | Allocated |
|---|---|---|---|---|---|---|---|---|---|
| TiktokenSharp EncodeDecode | 1,841.2 μs | 1,789.1 μs | 1,809.6 μs | 1,836.9 μs | 1,921.1 μs | 543.1 | 289.0625 | 164.0625 | 2383.89 KB |
| SharpToken EncodeDecode | 2,717.7 μs | 2,531.2 μs | 2,634.8 μs | 2,705.1 μs | 3,076.4 μs | 368.0 | 339.8438 | 226.5625 | 2802.97 KB |
| TokenEvaluatorNet EncodeDecode | 2,559.7 μs | 2,356.4 μs | 2,501.5 μs | 2,547.7 μs | 2,811.4 μs | 390.7 | 375.0000 | 371.0938 | 3005.72 KB |
| TikToken Unsafe EncodeDecode | 934.8 μs | 895.7 μs | 919.7 μs | 929.4 μs | 980.7 μs | 1,069.8 | 75.1953 | 7.8125 | 618.92 KB |
