
Making an LLM chat client agentic

2025-12-21 in lagan

Say you have a minimal LLM chat client that does the following:

- reads a prompt from the user and appends it to a running message history
- sends the whole history to the model
- prints the model's reply, appends it to the history, and waits for the next prompt

Here’s mine in a gist.
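For concreteness, here is a minimal sketch of that kind of client, assuming a local Ollama server on its default port; the model name and function names are placeholders, not the ones in the gist.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint
MODEL = "llama3.2"                              # placeholder model name

def chat():
    messages = []  # the full conversation, resent to the model on every turn
    while True:
        messages.append({"role": "user", "content": input("> ")})
        resp = requests.post(
            OLLAMA_URL,
            json={"model": MODEL, "messages": messages, "stream": False},
        ).json()
        reply = resp["message"]
        messages.append(reply)        # keep the assistant turn in the history
        print(reply["content"])

if __name__ == "__main__":
    chat()
```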

To transform that into an agent:

- pass a set of tool definitions to the model along with the messages
- when the reply contains a tool call, run the tool and append its result to the history
- re-run inference on the updated history, looping until the model replies with no tool call (sketched below)

When the model produces a tool call, it’s the client that turns the crank on the next inference run. When there’s no tool call, we pop out of the loop, display the output, and wait for the user to prompt again.
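Here is a sketch of that loop, again assuming the Ollama endpoint from before and a single ping tool; the tool schema, model name, and helper names are illustrative, not the author's actual code.

```python
import subprocess
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "llama3.2"  # placeholder model name

# One tool, described to the model in the usual JSON-schema style.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "ping",
        "description": "Ping a host and return the command output.",
        "parameters": {
            "type": "object",
            "properties": {"host": {"type": "string"}},
            "required": ["host"],
        },
    },
}]

def run_tool(name, args):
    # Execute the requested tool on the client side.
    if name == "ping":
        out = subprocess.run(["ping", "-c", "3", args["host"]],
                             capture_output=True, text=True)
        return out.stdout or out.stderr
    return f"unknown tool: {name}"

def agent_turn(messages):
    # Keep running inference until the model answers without a tool call.
    while True:
        resp = requests.post(
            OLLAMA_URL,
            json={"model": MODEL, "messages": messages,
                  "tools": TOOLS, "stream": False},
        ).json()
        msg = resp["message"]
        messages.append(msg)
        if not msg.get("tool_calls"):
            return msg["content"]          # no tool call: pop out of the loop
        for call in msg["tool_calls"]:
            fn = call["function"]          # run the tool the model asked for
            result = run_tool(fn["name"], fn["arguments"])
            messages.append({"role": "tool", "content": result})
```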

Here’s my ping-enabled agent for a local Ollama-hosted model.
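The user-facing side of such an agent stays as small as the plain chat client. Building on the hypothetical agent_turn above, the outer loop never sees the tool calls at all:

```python
def main():
    messages = []
    while True:
        messages.append({"role": "user", "content": input("> ")})
        print(agent_turn(messages))  # tool calls are resolved inside the turn

if __name__ == "__main__":
    main()
```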
