What is Function Calling / Tool Use in LLMs?
TL;DR: Function calling lets LLMs trigger external actions — searching the web, running code, querying databases — by outputting structured JSON that your application executes.
The Core Idea
Without function calling, LLMs can only output text. With it, the model can pause its response and emit a structured JSON call such as {"function": "search", "args": {"query": "latest AI news"}}; your application executes the function and returns the result, and the model continues its answer with that real data.
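A minimal sketch of that loop in Python, using only the standard library. The model output string, the search function, and the registry are all illustrative stand-ins, not any particular vendor's API:

```python
import json

# Hypothetical model output: a structured tool call instead of prose.
model_output = '{"function": "search", "args": {"query": "latest AI news"}}'

# A stub implementation; a real app would call a search API here.
def search(query):
    return f"Results for: {query}"

# Registry mapping tool names the model may emit to real Python callables.
FUNCTIONS = {"search": search}

call = json.loads(model_output)
result = FUNCTIONS[call["function"]](**call["args"])
print(result)  # Results for: latest AI news
```

The key point is that the model never executes anything itself: it only names a function and supplies arguments, and your code decides whether and how to run it.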
How It Works (Step by Step)
1. You define the available tools (name, description, parameters) in JSON Schema.
2. You send a user message.
3. The model decides which tool to call and generates the JSON call.
4. Your application runs the function.
5. You send the result back to the model.
6. The model continues its response with the real data.
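The six steps above can be sketched end to end. Everything here is a stand-in: `get_weather`, the city, the temperature, and `fake_model` (which simulates an LLM API by returning a tool call on the first turn and a text answer on the second) are assumptions for illustration, not a real provider's interface:

```python
import json

# Step 1: define available tools as JSON Schema (names are illustrative).
tools = [{
    "name": "get_weather",
    "description": "Return the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

def get_weather(city):
    # Stub for a real weather API call.
    return {"city": city, "temp_c": 21}

def fake_model(messages, tools):
    # Stand-in for an LLM call: emits a tool call until it sees a tool result.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_weather", "args": {"city": "Paris"}}}
    return {"text": "It is 21 degrees Celsius in Paris right now."}

# Step 2: send the user message.
messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# Step 3: the model decides which tool to call and generates the JSON call.
reply = fake_model(messages, tools)
call = reply["tool_call"]

# Step 4: your application runs the function.
result = get_weather(**call["args"])

# Step 5: send the result back to the model.
messages.append({"role": "tool", "content": json.dumps(result)})

# Step 6: the model continues its response with the real data.
final = fake_model(messages, tools)
print(final["text"])
```

Real providers differ in the exact message shapes, but the loop itself — schema out, tool call in, result out, answer in — is the same everywhere.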
Real-World Applications
ChatGPT plugins use function calling. Claude's computer use feature uses tool calls to click, type, and read the screen. GitHub Copilot uses function calls to read files and run tests. Any chatbot that "looks things up" uses function calling under the hood.
Function Calling vs Prompting
Standard prompting: the LLM only generates text. Function calling: the LLM generates structured actions that trigger real code. That is the difference between a chatbot (which produces text) and an AI agent (which can act on the world).
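In practice, your application sits at exactly this fork: every model output is either plain text to display or a structured action to execute. A small dispatcher makes the distinction concrete (the "function"/"args" shape is the illustrative format from earlier, not a universal standard):

```python
import json

def handle(model_output):
    """Route a model output to the chatbot path (text) or the agent path (action)."""
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        call = None
    if isinstance(call, dict) and "function" in call:
        # Agent path: a structured action for your code to execute.
        return ("action", call["function"], call["args"])
    # Chatbot path: ordinary text to show the user.
    return ("text", model_output)

print(handle("Hello! How can I help?"))
print(handle('{"function": "search", "args": {"query": "AI news"}}'))
```

Production systems usually don't parse free-form output like this — providers return tool calls in a dedicated response field — but the branch itself (display text vs. execute action) is the core of every agent loop.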