Learning AI function calling in C# with Llama 3.2 SLM and Ollama running on your machine
I’ve been trying to wrap my head around function/tool calling for a while now, and I’m excited to share what I’ve learned with you. It’s a powerful way for developers to integrate advanced AI features directly into their applications. We’ll walk through the core concepts, set up the environment, and implement a practical example using a C# source generator.
Introduction
AI function calling allows AI models to interact with external tools or APIs by calling specific functions within their responses. This enables the AI to perform tasks that require real-time data or specific computations, which the model itself cannot handle directly.
For example, an AI model doesn’t know anything about the current weather in a specific location or the current time. By calling external functions, the AI can access the necessary data or perform the required computations to provide accurate and up-to-date information.
Understanding the Core Concepts
When you provide a prompt to the AI model, you can also specify a set of functions that the model can use. The model can decide to generate a structured output that includes the function to call and the necessary parameters.
For example, if you ask for the weather in Illzach, France, and pass a function to get the weather, the model generates an output that includes a call to the weather function with “Illzach, France” as the parameter.
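Concretely, that structured output is just a small JSON document. The exact wording depends on how the functions are described to the model, but a sketch of the shape used later in this article (assuming a GetCurrentWeather function like the one defined near the end) looks like this:

```json
{
  "name": "GetCurrentWeather",
  "parameters": { "city": "Illzach, France" }
}
```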
For this to work, three things are needed:
Supported Models: Ensure the AI model you are using supports function calling.
Function Definitions: Clearly defined functions that the AI can call, including the parameters they require.
Integration Setup: Proper setup to handle the structured output from the AI and make the actual API calls.
We will leverage:
Llama 3.2 SLM, a model supporting function calling
A C# source generator to easily define the function definitions passed to the AI model
Ollama to run the SLM and handle the overall integration
Setting Up Your Environment
To get started with AI function calling in C#, you need to:
Install Ollama to run the SLM and handle the integration
Pull the Llama 3.2 SLM model on your machine: ollama pull llama3.2:3b
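The snippets below also use the OllamaSharp library to talk to Ollama from C#. Assuming you are following along in your own project, you can add the NuGet package like this:

```
dotnet add package OllamaSharp
```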
Implementing a Practical Example
Let’s walk through a practical example of AI function calling in C# using a simple time service. We’ll define a function that takes a city name as a parameter and returns the current time.
```csharp
string GetCurrentTime(string city) =>
    $"It is {DateTime.Now.Hour}:{DateTime.Now.Minute} in {city}.";
```
We will need to pass a function description to the SLM model. We can use a C# attribute to achieve that goal.
```csharp
[Function("Get the current time for a city")]
public string GetCurrentTime(string city) =>
    $"It is {DateTime.Now.Hour}:{DateTime.Now.Minute} in {city}.";
```
The Function attribute will be used by the C# source generator to generate the function description to pass to the SLM model. We need a partial class, so that the other part of the class is generated by the source generator.
```csharp
public sealed partial class Functions
{
    [Function("Get the current time for a city")]
    string GetCurrentTime(string city) =>
        $"It is {DateTime.Now.Hour}:{DateTime.Now.Minute} in {city}.";
}
```
The source generator will generate the following code.
```csharp
partial class Functions
{
    public List<FunctionDetails> GetFunctionDetails() =>
    [
        new("GetCurrentTime", new FunctionParameters("city"), "Get the current time for a city")
    ];
}
```
It will also generate the FunctionDetails and FunctionParameters records.
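These records are not shown in the article, but a minimal sketch of what they could look like follows, assuming System.Text.Json property names that match the JSON the model returns later (“name”, “parameters”, “city”); the actual generated code lives in the aiPlayground repository.

```csharp
using System.Text.Json.Serialization;

// 👇🏼 Hypothetical sketch of the generated records; the real generator may differ.
public sealed record FunctionDetails(
    [property: JsonPropertyName("name")] string Name,
    [property: JsonPropertyName("parameters")] FunctionParameters FunctionParameters,
    string? Description = null);

public sealed record FunctionParameters(
    [property: JsonPropertyName("city")] string City);
```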
Note that the source generator does not yet handle the parameters; the parameter name city is currently hardcoded in the FunctionParameters record. I leave it as an exercise for the reader to improve the source generator to handle multiple parameters and their types.
We are now able to provide a prompt to the AI model specifying the functions that the model can use. To simplify this, we create a ChatRequestBuilder that builds an OllamaSharp ChatRequest from the user prompt, the model name, and the functions.
```csharp
var functions = new Functions();
var request = new ChatRequestBuilder()
    .SetModel("llama3.2:3b")
    .AddMessage(userMessage, "user")
    .AddFunctions(functions)
    .Build();
```
You can find the whole source code for ChatRequestBuilder in my aiPlayground repository.
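To give you an idea of what such a builder could do, here is a hypothetical sketch only: it assumes OllamaSharp’s ChatRequest exposes Model, Messages, and Stream, and it describes the available functions to the model in a system message. The real ChatRequestBuilder in the aiPlayground repository may be implemented differently.

```csharp
using System.Collections.Generic;
using System.Text.Json;
using OllamaSharp.Models.Chat;

// 👇🏼 Hypothetical sketch of a ChatRequestBuilder, not the article's actual implementation.
public sealed class ChatRequestBuilder
{
    private readonly List<Message> _messages = [];
    private string _model = "llama3.2:3b";

    public ChatRequestBuilder SetModel(string model)
    {
        _model = model;
        return this;
    }

    public ChatRequestBuilder AddMessage(string content, string role)
    {
        // Map the plain role string onto OllamaSharp's ChatRole values.
        var chatRole = role switch
        {
            "system" => ChatRole.System,
            "assistant" => ChatRole.Assistant,
            _ => ChatRole.User
        };
        _messages.Add(new Message { Role = chatRole, Content = content });
        return this;
    }

    public ChatRequestBuilder AddFunctions(Functions functions)
    {
        // 👇🏼 Tell the model which functions exist and which JSON shape to answer with.
        var functionsJson = JsonSerializer.Serialize(functions.GetFunctionDetails());
        _messages.Insert(0, new Message
        {
            Role = ChatRole.System,
            Content = $"You can call the following functions: {functionsJson}. " +
                      "Answer only with JSON of the form " +
                      "{\"name\": \"<function name>\", \"parameters\": {\"city\": \"<city>\"}}."
        });
        return this;
    }

    public ChatRequest Build() => new()
    {
        Model = _model,
        Messages = _messages,
        Stream = true
    };
}
```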
We can call the OllamaSharp Chat API to get the JSON response from the AI model.
```csharp
var jsonBuilder = new StringBuilder();

// 👇🏼 Calling OllamaSharp Chat and getting the response streamed back
var ollamaApiClient = new OllamaApiClient(new Uri("http://localhost:11434"));
await foreach (var responseStream in ollamaApiClient.Chat(request))
{
    jsonBuilder.Append(responseStream?.Message.Content);
}

Console.WriteLine($"User message: {userMessage}");

var json = jsonBuilder.ToString();
Console.WriteLine($"Received from SLM: {json}");
```
It is finally time to parse the JSON response and call the function.
```csharp
var result = functions.Execute(JsonSerializer.Deserialize<FunctionDetails>(json));
Console.WriteLine($"Function calling result: \"{result}\"");
```
To achieve that, we need to add the Execute method to the Functions class generated using the C# source generator.
```csharp
partial class Functions
{
    public string Execute(FunctionDetails? function)
    {
        return function?.Name switch
        {
            "GetCurrentTime" => GetCurrentTime(function.FunctionParameters.City)
        };
    }

    public List<FunctionDetails> GetFunctionDetails() =>
    [
        new("GetCurrentTime", new FunctionParameters("city"), "Get the current time for a city")
    ];
}
```
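Note that this switch expression has no default arm, so the runtime throws a SwitchExpressionException if the model replies with a function name it was not given. If you extend the generator, you might want it to emit a fallback arm, for example (hypothetical, not part of the article’s generator):

```csharp
return function?.Name switch
{
    "GetCurrentTime" => GetCurrentTime(function.FunctionParameters.City),
    // 👇🏼 Hypothetical fallback arm for unknown or missing function names.
    _ => $"Unknown function: {function?.Name ?? "<none>"}"
};
```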
Running the application will output the following:
```
User message: What time is it in Illzach, France?
Received from SLM: {"name": "GetCurrentTime","parameters": {"city": "Illzach, France"}}
Function calling result: "It is 9:53 in Illzach, France."
```
A Glimpse into the C# Source Code Generator
A source generator is a C# compiler feature that lets you generate additional C# source files at compile time. It can be used to automate repetitive tasks, reduce boilerplate code, and improve performance by generating code that would otherwise be written manually. In our case, we create a source generator implementing IIncrementalGenerator.
First, we need to create the FunctionAttribute, which carries the Description of the function to call.
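The article does not list the attribute’s code, but a minimal sketch of it could look like this (in practice the attribute source is added to the compilation by the generator itself, as described below):

```csharp
using System;

// 👇🏼 Minimal sketch of the marker attribute; the real one is emitted by the generator.
[AttributeUsage(AttributeTargets.Method)]
public sealed class FunctionAttribute : Attribute
{
    public FunctionAttribute(string description) => Description = description;

    public string Description { get; }
}
```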
It is time to implement void Initialize(IncrementalGeneratorInitializationContext context) from the IIncrementalGenerator interface. It will add the FunctionAttribute, FunctionDetails, and FunctionParameters to the compilation. Then it will collect all methods decorated with the FunctionAttribute and generate the source code.
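Here is a rough, hypothetical sketch of such a pipeline. FunctionAttributeSource, GetMethodDeclarationForSourceGen, and GenerateFunctionsClass are illustrative names I use here; the real generator in the aiPlayground repository may be organized differently.

```csharp
using System.Text;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Text;

// 👇🏼 Hypothetical sketch of the incremental generator pipeline.
[Generator]
public sealed partial class FunctionsSourceGenerator : IIncrementalGenerator
{
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        // Add FunctionAttribute (plus FunctionDetails and FunctionParameters) to the compilation.
        context.RegisterPostInitializationOutput(ctx =>
            ctx.AddSource("FunctionAttribute.g.cs",
                SourceText.From(FunctionAttributeSource, Encoding.UTF8)));

        // Find methods that carry at least one attribute, then keep only [Function] methods.
        // Assumes GetMethodDeclarationForSourceGen returns (MethodDeclarationSyntax method, bool isFunction),
        // using the attribute-matching logic shown below.
        var functionMethods = context.SyntaxProvider
            .CreateSyntaxProvider(
                predicate: static (node, _) =>
                    node is MethodDeclarationSyntax { AttributeLists.Count: > 0 },
                transform: static (ctx, _) => GetMethodDeclarationForSourceGen(ctx))
            .Where(static result => result.isFunction)
            .Select(static (result, _) => result.method)
            .Collect();

        // Generate the partial Functions class from the collected method declarations.
        context.RegisterSourceOutput(functionMethods,
            static (spc, methods) => GenerateFunctionsClass(spc, methods));
    }
}
```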
The transform helper goes through a method’s attributes and checks whether one of them is the [Function] attribute:

```csharp
// 👇🏼 Go through all attributes of the class.
foreach (var attributeListSyntax in methodDeclarationSyntax.AttributeLists)
foreach (var attributeSyntax in attributeListSyntax.Attributes)
{
    if (context.SemanticModel.GetSymbolInfo(attributeSyntax).Symbol is not IMethodSymbol attributeSymbol)
        continue; // if we can't get the symbol, ignore it

    var attributeName = attributeSymbol.ContainingType.ToDisplayString();

    // 👇🏼 Check the full name of the [Function] attribute.
    if (attributeName == $"{Namespace}.{AttributeName}")
        return (methodDeclarationSyntax, true);
}

return (methodDeclarationSyntax, false);
```
Finally, we extract the function names and descriptions from the filtered methods and generate the source code for the partial Functions class.
```csharp
private static List<(string name, string description)> ExtractFunctionDescriptions(
    ImmutableArray<MethodDeclarationSyntax> methodDeclarations)
{
    var functionNames = new List<(string name, string description)>();

    // Go through all filtered method declarations.
    foreach (var methodDeclarationSyntax in methodDeclarations)
    {
        // 👇🏼 Get the Function attribute from methodDeclarationSyntax
        var attributeSyntax = methodDeclarationSyntax.AttributeLists
            .SelectMany(x => x.Attributes)
            .First(x => x.Name.ToString() == "Function");

        var descriptionArgument = attributeSyntax.ArgumentList?.Arguments.First();
        if (descriptionArgument == null)
            continue;

        // 👇🏼 Get the attribute Description property
        var descriptionLiteral = (LiteralExpressionSyntax)descriptionArgument.Expression;
        var descriptionValue = descriptionLiteral.Token.ValueText;

        functionNames.Add(
            new ValueTuple<string, string>(methodDeclarationSyntax.Identifier.Text, descriptionValue));
    }

    return functionNames;
}
```
From these names and descriptions, the generator creates the function details and the function-name pattern matching. Given the following partial class annotated with the [Function] attribute:
```csharp
// 👇🏼 Partial class annotated with the [Function] attribute used to generate functions.
public sealed partial class Functions
{
    [Function("Get the current time for a city")]
    string GetCurrentTime(string city) =>
        $"It is {DateTime.Now.Hour}:{DateTime.Now.Minute} in {city}.";

    [Function("Get the current weather for a city")]
    string GetCurrentWeather(string city) => "The weather in " + city + " is sunny.";
}
```
The generator then produces the following:

```csharp
partial class Functions
{
    public string Execute(FunctionDetails? function)
    {
        return function?.Name switch
        {
            "GetCurrentTime" => GetCurrentTime(function.FunctionParameters.City),
            "GetCurrentWeather" => GetCurrentWeather(function.FunctionParameters.City)
        };
    }

    public List<FunctionDetails> GetFunctionDetails() =>
    [
        new("GetCurrentTime", new FunctionParameters("city"), "Get the current time for a city"),
        new("GetCurrentWeather", new FunctionParameters("city"), "Get the current weather for a city")
    ];
}
```
Conclusion
AI function/tool calling is a powerful mechanism that allows AI models to interact with external tools and APIs, enabling them to perform complex tasks that require real-time data or specific computations. By understanding the core concepts and setting up your environment correctly, you can easily integrate AI-driven functionality into your C# projects, enhancing their capabilities.
I hope this tutorial has provided you with a clear understanding of AI function calling in C# and how you can get started with it.