Engineering

How AI Tools Work - Turning Language Into Action

January 24, 2026

Language models are good at understanding and generating text. But text alone cannot set a reminder, fetch a webpage, generate an image, or send an email. That is where tools come in.

What is a tool in AI?

A tool is a capability you give to a language model. It has a name, a description of what it does, and a schema describing what inputs it expects. The model does not execute the tool itself. It signals intent - "I want to call this tool with these arguments" - and the host system executes it on the model's behalf.
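
Concretely, the definition the model sees looks something like this (the field names below follow the common function-calling shape; exact details vary by provider):

    {
      "name": "create_reminder",
      "description": "Create a reminder for the user at a specific time.",
      "parameters": {
        "type": "object",
        "properties": {
          "content":   { "type": "string", "description": "What to remind the user about" },
          "remind_at": { "type": "string", "description": "When to fire, e.g. \"tomorrow at 9am\"" }
        },
        "required": ["content", "remind_at"]
      }
    }

The model answers with a matching call, e.g. {"name": "create_reminder", "arguments": {"content": "Call dentist", "remind_at": "tomorrow at 9am"}}, and the host takes it from there.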

This is how every major AI system works today. OpenAI calls it function calling; Anthropic calls it tool use. The concept is the same: the model decides what to do, and the host system decides how to do it safely.

Without tools, an AI assistant can only talk. With tools, saying "remind me to call the dentist tomorrow at 9am" triggers a real action. The model recognises the intent, calls a reminder tool with the right content and time, and the system creates an actual reminder that will notify you. This is what separates a chatbot from an agent.

How Iris implements tools

Iris is built on the Laravel AI SDK, which provides a clean contract for tool definitions. Each tool defines three things: a description so the model knows when to use it, a JSON schema so the model knows what arguments to pass, and a handler that executes the actual logic.
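
In plain PHP, that contract can be sketched as an interface. This is illustrative only - the Laravel AI SDK's real API will differ in its details:

    <?php

    // Illustrative tool contract - not the SDK's actual interface.
    interface Tool
    {
        // Unique name the model uses to address the tool.
        public function name(): string;

        // Tells the model when this tool is appropriate.
        public function description(): string;

        // JSON schema (as a PHP array) for the arguments the model must supply.
        public function schema(): array;

        // Runs the validated call and returns a structured result string.
        public function handle(array $arguments): string;
    }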

For example, the reminder tool tells the model: "Use this when the user asks to be reminded about something at a specific time." Its schema expects a content string, a remind_at time (natural language is fine - "tomorrow at 9am" works), and an optional recurrence. The handler validates the input, checks for scheduling clashes, creates the reminder in the database, and returns a structured result like REMINDER_CREATED id=123 content="Call dentist" remind_at="2026-02-19 09:00:00".
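
A reminder tool along those lines might look like the sketch below, implementing the contract above. Everything here is illustrative - in particular, strtotime stands in for whatever richer natural-language time parser Iris actually uses, and the hardcoded id stands in for the database insert:

    <?php

    class CreateReminderTool implements Tool
    {
        public function name(): string
        {
            return 'create_reminder';
        }

        public function description(): string
        {
            return 'Use this when the user asks to be reminded about something at a specific time.';
        }

        public function schema(): array
        {
            return [
                'type' => 'object',
                'properties' => [
                    'content'    => ['type' => 'string'],
                    'remind_at'  => ['type' => 'string'],
                    'recurrence' => ['type' => 'string'],
                ],
                'required' => ['content', 'remind_at'],
            ];
        }

        public function handle(array $arguments): string
        {
            // Parse the time; strtotime copes with simple phrases like "tomorrow 9am".
            $timestamp = strtotime($arguments['remind_at']);
            if ($timestamp === false) {
                return 'REMINDER_FAILED reason="could not parse time"';
            }

            // ... clash detection and the database insert would happen here ...

            return sprintf(
                'REMINDER_CREATED id=%d content="%s" remind_at="%s"',
                123, // stand-in for the new row's id
                $arguments['content'],
                date('Y-m-d H:i:s', $timestamp)
            );
        }
    }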

That structured result goes back to the model as context, and the model generates a natural response: "Done - I'll remind you to call the dentist tomorrow at 9am."

The full loop

When you send a message, here is what happens:

  1. The orchestrator builds the full context - your message, conversation history, relevant memories, matched skills, and the list of available tools with their schemas
  2. This goes to the AI model via whichever provider is active (Chutes, Ollama, etc.)
  3. The model reads everything and decides: reply with text, or call a tool
  4. If it calls a tool, the Laravel AI SDK validates the arguments against the schema and calls the handler
  5. The handler executes (creates a reminder, fetches a webpage, generates an image) and returns a structured result
  6. The SDK passes this result back to the model
  7. The model reads the result and generates a human-readable response
  8. Steps 3-7 can repeat - the model might chain several tool calls in sequence

This loop is controlled by MaxSteps. The main agent allows up to 10 tool calls per turn. Sub-agents are limited to 5 to prevent runaway loops during delegation.
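
Stripped to its essentials, the loop is a bounded iteration over model responses. The sketch below assumes a sendToModel helper and a simple response shape; both are placeholders for the SDK's real plumbing:

    <?php

    // Simplified agent loop - names and shapes are illustrative.
    function runTurn(array $messages, array $tools, int $maxSteps = 10): string
    {
        for ($step = 0; $step < $maxSteps; $step++) {
            // Send context plus tool schemas to whichever provider is active.
            $response = sendToModel($messages, $tools);

            if ($response->type === 'text') {
                return $response->text; // the model chose to reply directly
            }

            // The model requested a tool call: look it up and execute the handler.
            // (The SDK validates arguments against the schema before this point.)
            $tool = $tools[$response->toolName];
            $result = $tool->handle($response->arguments);

            // Feed the structured result back so the model can react to it.
            $messages[] = ['role' => 'tool', 'content' => $result];
        }

        return 'MaxSteps reached - stopping to avoid a runaway loop.';
    }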

27 tools and counting

Iris currently has 27 tools spanning memory, reminders, media generation, web browsing, document reading, email, calendar, home automation, webhooks, and agent delegation. Some examples:

  • "Remember that I prefer window seats" - stores a fact in long-term memory
  • "What is on this page?" with a URL - fetches and reads the webpage, with URL validation that rejects private IP ranges and localhost
  • "Generate an image of a sunset over mountains" - calls an image generation model via Chutes
  • "Delegate this research to an analyst" - spawns a sub-agent, streams its output, and returns the result
  • "Remind me to review the PR tomorrow morning" - creates a timed reminder with clash detection

Every tool returns consistent structured markers (REMINDER_CREATED, WEB_PAGE_FETCHED, SUBAGENT_SUCCESS, WEB_FETCH_FAILED) so the orchestrator can detect what happened and handle failures gracefully, even if the model does not relay the result properly.
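
Because those markers are machine-readable, the orchestrator can check an outcome without trusting the model's paraphrase of it. The check can be as simple as this (illustrative; the real logic is presumably richer):

    <?php

    // Illustrative outcome check based on the first token of a tool result.
    function toolCallFailed(string $result): bool
    {
        // e.g. "WEB_FETCH_FAILED reason=timeout" or "REMINDER_CREATED id=123 ..."
        $marker = strtok($result, ' ');

        return $marker !== false && str_ends_with($marker, '_FAILED');
    }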

Not every agent gets every tool

Tool scoping is important. The main agent gets access to all 27 tools, but when a sub-agent is spawned for delegation, it receives a reduced set - delegation tools, web fetch, weather, memory, reminders, and calendar. No media generation, no email, no webhooks. Sub-agents handle their specific task and nothing more.

On top of that, a security policy layer can enable or disable specific tools per user. If a tool should not be available in a certain context, it simply does not appear in the model's tool list.
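
Mechanically, scoping is just a filter applied to the tool list before it is sent to the model. A sketch, with a hypothetical per-user policy callback and illustrative tool names:

    <?php

    // Hypothetical scoping filter - runs before tools reach the model.
    function toolsFor(string $agentType, array $allTools, callable $policyAllows): array
    {
        // Reduced set for sub-agents; names here are illustrative.
        $subAgentSet = ['delegate', 'web_fetch', 'weather', 'memory', 'reminder', 'calendar'];

        return array_filter($allTools, function (Tool $tool) use ($agentType, $subAgentSet, $policyAllows) {
            if ($agentType === 'sub_agent' && !in_array($tool->name(), $subAgentSet, true)) {
                return false; // sub-agents never see media, email, or webhook tools
            }

            // The per-user security policy gets the final say.
            return $policyAllows($tool->name());
        });
    }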

Shortcuts for obvious requests

Not every request needs the full model loop. If the orchestrator detects an obvious pattern - like "generate an image of..." or "take a screenshot of..." - it bypasses the model entirely and calls the tool directly. Faster response, no wasted tokens. The model only gets involved when it needs to reason about what to do.
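
The fast path is ordinary pattern matching on the incoming message, run before the model is ever invoked. The regexes and tool helpers below are illustrative:

    <?php

    // Illustrative shortcut detection - generateImage and takeScreenshot
    // stand in for direct calls into the corresponding tool handlers.
    function tryShortcut(string $message): ?string
    {
        if (preg_match('/^generate an image of (.+)/i', $message, $m)) {
            return generateImage($m[1]);
        }

        if (preg_match('/^take a screenshot of (.+)/i', $message, $m)) {
            return takeScreenshot($m[1]);
        }

        return null; // no obvious pattern - fall through to the full model loop
    }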

The key insight

The model never has direct access to the database, external APIs, or the filesystem. Every action goes through a validated, schema-checked, scoped tool. The model decides what to do. The tool system decides whether and how to do it. That separation is what makes it safe to give an AI assistant real capabilities.