
Jan 22, 2025
What is Function Calling for LLMs?
Function calling is the process by which an LLM interacts with predefined external functions to extend its capabilities beyond language processing.
This feature bridges the gap between natural language understanding and actionable operations, enabling applications like API calls, task automation, and database queries. For example, imagine asking a chatbot for today’s stock price. The LLM doesn’t know the stock price directly but can call a stock API to retrieve and share the information.
Who Calls the Function?
In this process, the LLM does not call the function itself; that responsibility lies with your code. Using the example of asking, “What’s the IBM stock price today?”, the workflow unfolds as follows:
1. You query the LLM, “What’s the IBM stock price today?”, ensuring that the available functions are included in the context.
2. The LLM analyzes the prompt and suggests which function to call, providing the necessary arguments. At this stage, the suggested function must be executed within your code.
3. After obtaining the response from the function, a follow-up call is made to the LLM, passing along the function’s output. The LLM then integrates this data to generate the final response for the user.
Step-by-Step Function Calling
Function Definition
Functions must be predefined, detailing their name, parameters, expected input/output formats, and descriptions. For example, a stock price function might require parameters like ticker and date and return JSON with price, open, close, etc.
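As an illustration, here is how the stock price function might be declared in the OpenAI-style tools schema (the exact format varies by provider; the name get_stock_price and its parameters are hypothetical):

```python
# Hypothetical function definition in the OpenAI-style "tools" schema.
stock_price_tool = {
    "type": "function",
    "function": {
        "name": "get_stock_price",
        "description": "Retrieve the stock price for a ticker on a given date.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {"type": "string", "description": "Stock ticker symbol, e.g. IBM"},
                "date": {"type": "string", "description": "Date in YYYY-MM-DD format"},
            },
            "required": ["ticker", "date"],
        },
    },
}
```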
LLM Invocation
With the functions defined, invoke the LLM with both the user request and the function definitions.
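A minimal sketch of the invocation, assuming the OpenAI Python SDK (other providers expose a similar tools parameter):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
messages = [{"role": "user", "content": "What's the IBM stock price today?"}]

response = client.chat.completions.create(
    model="gpt-4o",            # placeholder model ID; use your provider's
    messages=messages,
    tools=[stock_price_tool],  # the definition from the previous step
)
```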
When invoked, the LLM generates a structured JSON output suggesting which function to call and with what arguments. For instance, the model might output something like:
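```json
{
  "name": "get_stock_price",
  "arguments": {
    "ticker": "IBM",
    "date": "2025-01-20"
  }
}
```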
Function Execution
Parse the LLM’s output, execute the suggested function, and return the results. Ensure graceful error handling to manage cases where the API fails or returns unexpected results. Executing the external function returns JSON, which is then passed back to the LLM so it can integrate the results.
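Continuing the sketch above, the tool call can be parsed and executed with basic error handling (get_stock_price stands in for your real API call):

```python
import json

tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

try:
    result = get_stock_price(**args)  # your actual stock API call goes here
except Exception as exc:
    # Return a structured error so the LLM can explain the failure gracefully.
    result = {"error": str(exc)}

result_json = json.dumps(result)
```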
Integrating Results
Invoke the LLM with the output from the function call. The function output should be sent along with the previous chat history and the function definitions.
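In the same sketch, the follow-up call appends the assistant’s tool-call turn and the function result to the chat history:

```python
messages.append(response.choices[0].message)  # the assistant's tool-call turn
messages.append({
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": result_json,
})

final = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=[stock_price_tool],  # keep definitions available for follow-up calls
)
print(final.choices[0].message.content)
```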
The LLM incorporates the function output into its response:
"The stock price of IBM on January 20, 2025, was $250.25."
Sample Code
Here is the sample code for this project.
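The sketch below ties the previous steps into one runnable loop. It assumes the OpenAI Python SDK, a placeholder model ID, and a stubbed get_stock_price helper in place of a real stock API; adapt the client and model ID to your provider.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def get_stock_price(ticker: str, date: str) -> dict:
    """Stub standing in for a real stock API call."""
    return {"ticker": ticker, "date": date, "price": 250.25}

tools = [{
    "type": "function",
    "function": {
        "name": "get_stock_price",
        "description": "Retrieve the stock price for a ticker on a given date.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {"type": "string"},
                "date": {"type": "string", "description": "YYYY-MM-DD"},
            },
            "required": ["ticker", "date"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the IBM stock price today?"}]

# Step 1: the LLM suggests which function to call and with what arguments.
response = client.chat.completions.create(
    model="gpt-4o", messages=messages, tools=tools  # placeholder model ID
)
message = response.choices[0].message

if message.tool_calls:
    messages.append(message)  # keep the assistant's tool-call turn in history
    for call in message.tool_calls:
        args = json.loads(call.function.arguments)
        result = get_stock_price(**args)  # Step 2: your code runs the function
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps(result),
        })
    # Step 3: the LLM integrates the function output into the final answer.
    final = client.chat.completions.create(
        model="gpt-4o", messages=messages, tools=tools
    )
    print(final.choices[0].message.content)
else:
    print(message.content)  # no function was needed; answer directly
```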
LLAMA 3 Function Calling
Historically, OpenAI’s GPT-4 introduced function calling, which opened the door to agentic workflows. Older versions of LLAMA didn’t support it out of the box, although it could be achieved with some fine-tuning. Later, LLAMA 3.1 introduced native function calling. To test function calling with LLAMA 3.2, add LLAMA 3.2 in your OmniModels section and use its Model ID within the code.

Applications and Use Cases
Conversational Agents
A virtual assistant capable of providing stock prices or checking financial data by calling external APIs can turn a passive Q&A model into a proactive problem-solver.
Data Retrieval and Integration
Accessing user information from a CRM system or retrieving data from SQL databases for analytics reduces manual lookup efforts by directly pulling structured data.
Complex Task Automation
Automating workflows, such as generating invoices or sending notifications via third-party APIs, streamlines multi-step operations, improving efficiency and reducing errors.