Function Calling with LLMs (GPT-4, LLAMA)

Jan 22, 2025

    What is Function Calling for LLMs?

    Function calling is the process by which an LLM interacts with predefined external functions to extend its capabilities beyond language processing. 

    This feature bridges the gap between natural language understanding and actionable operations, enabling applications like API calls, task automation, and database queries. For example, imagine asking a chatbot for today’s stock price. The LLM doesn’t know the stock price directly but can call a stock API to retrieve and share the information.

    Who Calls the Function?

    In this process, the LLM does not call the function directly; that responsibility lies with your code. Using the example question “What’s the IBM stock price today?”, the workflow unfolds as follows:

    1. You query the LLM, “What’s the IBM stock price today?”, ensuring that the available functions are included in the context.

    2. The LLM analyzes the prompt and suggests which function to call, providing the necessary arguments. At this stage, the suggested function must be executed within your code.

    3. After obtaining the response from the function, a follow-up call is made to the LLM, passing along the function’s output. The LLM then integrates this data to generate the final response for the user.

    Step-by-Step Function Calling

    Function Definition

    Functions must be predefined, detailing their name, parameters, expected input/output formats, and descriptions. For example, a stock price function might require parameters like ticker and date and return JSON with price, open, close, etc.


    import json
    import requests

    # Define function schemas describing the available tools to the LLM
    functions = [
        {
            "name": "get_stock_price",
            "description": "Fetch the current stock price of a specific publicly traded company.",
            "parameters": {
                "type": "object",
                "properties": {
                    "ticker": {
                        "type": "string",
                        "description": "The stock ticker symbol (e.g., IBM, TSLA) for a publicly traded company, to fetch current price."
                    }
                },
                "required": ["ticker"]
            }
        },
        {
            "name": "get_order_book",
            "description": "Fetch the order book data for a specific trading pair or stock ticker.",
            "parameters": {
                "type": "object",
                "properties": {
                    "ticker": {
                        "type": "string",
                        "description": "The stock ticker symbol (e.g., IBM, TSLA) to fetch order book."
                    }
                },
                "required": ["ticker"]
            }
        }
    ]
    
    def get_stock_price(ticker: str) -> dict:
        # Uses the Alpha Vantage demo API to fetch stock prices.
        # The "demo" API key works only for the IBM ticker.
        url = "https://www.alphavantage.co/query"
        params = {
            "function": "TIME_SERIES_INTRADAY",
            "symbol": ticker,
            "interval": "5min",
            "apikey": "demo"
        }
    
        response = requests.get(url, params=params)
    
        if response.status_code == 200:
            data = response.json()
            time_series = data.get("Time Series (5min)", {})
    
            # Extract the latest price based on the most recent timestamp
            if time_series:
                latest_timestamp = sorted(time_series.keys())[-1]
                latest_data = time_series[latest_timestamp]
                return {
                    "ticker": ticker,
                    "price": latest_data.get("1. open", "N/A")
                }
    
        # Fallback when the request fails or no time series is returned
        return {
            "ticker": ticker,
            "price": f"Price not available for {ticker}."
        }
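
    The function list above also declares get_order_book, whose body the article does not show. A minimal placeholder sketch, with purely illustrative mock values (Alpha Vantage's free API does not expose an order book), could look like this:


    # Hypothetical stand-in for the second declared function; the bid/ask
    # values below are mock data, not a real API response.
    def get_order_book(ticker: str) -> dict:
        return {
            "ticker": ticker,
            "bids": [{"price": "249.90", "size": 100}],
            "asks": [{"price": "250.10", "size": 80}]
        }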



    LLM Invocation

    With functions defined, the LLM needs to be invoked with the function definitions and the user request. 


      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
      model = "gpt-4"    # example model name
      system_prompt = "You are a helpful financial assistant."  # example prompt
      user_input = "What's the IBM stock price today?"

      # Initial model call: pass the function definitions so the model can
      # suggest a call rather than answer from its own knowledge
      response = client.chat.completions.create(
          model=model,
          messages=[
              {"role": "system", "content": system_prompt},
              {"role": "user", "content": user_input}
          ],
          functions=functions,
          function_call="auto"
      )

    In response, the LLM generates structured JSON suggesting which function to invoke and with which arguments. For instance, the model outputs:


    FunctionCall(arguments='{"ticker": "IBM"}', name='get_stock_price')

    Function Execution

    Parse the LLM’s output, execute the suggested function, and collect its results. Ensure graceful error handling for cases where the API fails or returns unexpected data. The executed function returns JSON, which is then passed back to the LLM to integrate into the final response:


    {    
      "ticker": "IBM",    
      "price": "250.25"
    }
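
    A minimal sketch of this dispatch step, assuming the response object from the call above and the imports and function implementations defined earlier are in scope:


      # Map the names the LLM may suggest to the real implementations
      available_functions = {
          "get_stock_price": get_stock_price,
          "get_order_book": get_order_book,
      }

      function_call = response.choices[0].message.function_call
      if function_call is not None:
          handler = available_functions.get(function_call.name)
          try:
              # Arguments arrive as a JSON string and must be parsed first
              arguments = json.loads(function_call.arguments)
              function_result = handler(**arguments)
          except (json.JSONDecodeError, TypeError) as exc:
              # Covers malformed arguments or an unknown function name
              function_result = {"error": str(exc)}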

    Integrating Results

    Invoke the LLM again with the function call's output, sent along with the previous chat history and the function definitions.


      # Send the function's output back to the model
      follow_up_response = client.chat.completions.create(
          model=model,
          messages=[
              {"role": "system", "content": system_prompt},
              {"role": "user", "content": user_input},
              {"role": "assistant", "content": "null", "function_call": response.choices[0].message.function_call},
              {"role": "function", "name": response.choices[0].message.function_call.name, "content": json.dumps(function_result)}
          ],
          functions=functions,
          function_call="auto"
      )


    The LLM incorporates the function output into its response:

    "The stock price of IBM on January 20, 2025, was $250.25."

    Sample Code

    Here is the sample code for this project.


    LLAMA 3 Function Calling

    Historically, OpenAI GPT-4 introduced function calling, which enabled agentic workflows. Older versions of LLAMA did not support it out of the box, although with some fine-tuning it was possible to achieve. LLAMA 3.1 later introduced native function calling. To test function calling with LLAMA 3.2, add LLAMA 3.2 in your OmniModels section and use its Model ID within the code.
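
    A sketch of the swap, assuming an OpenAI-compatible endpoint; the base URL and Model ID below are placeholders for whatever your OmniModels section shows:


      # Placeholder endpoint and Model ID: substitute the values from your
      # OmniModels section; the rest of the function-calling code is unchanged.
      client = OpenAI(
          base_url="https://your-endpoint.example.com/v1",  # placeholder
          api_key="YOUR_API_KEY"                            # placeholder
      )
      model = "llama-3.2"  # placeholder Model ID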


    Applications and Use Cases

    Conversational Agents

    A virtual assistant capable of providing stock prices or checking financial data by calling external APIs can turn a passive Q&A model into a proactive problem-solver.

    Data Retrieval and Integration

    Accessing user information from a CRM system or retrieving data from SQL databases for analytics reduces manual lookup efforts by directly pulling structured data.
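
    As an illustration, the same schema pattern from the stock example carries over; the function name and fields here are hypothetical:


    # Hypothetical CRM lookup schema, following the same pattern as above
    crm_functions = [
        {
            "name": "get_customer_record",
            "description": "Fetch a customer's profile from the CRM by email address.",
            "parameters": {
                "type": "object",
                "properties": {
                    "email": {
                        "type": "string",
                        "description": "The customer's email address."
                    }
                },
                "required": ["email"]
            }
        }
    ]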

    Complex Task Automation

    Automating workflows, such as generating invoices or sending notifications via third-party APIs, streamlines multi-step operations, improving efficiency and reducing errors.