
AI agents: Embracing the new world order

Senior Front End Developer Andrew Lismanto shares some insights on the future of AI agents from a talk by IBM’s Head of AI Developer Advocacy at the recent SXSW Sydney.

Andrew Lismanto

01 December 2025

3 minute read

During my visit to the third SXSW in Sydney, one session that drew my interest was ‘What’s Next with AI Agents?’ by Nicholas Renotte, Head of AI Developer Advocacy at IBM. Renotte is someone who teaches the world’s biggest companies how to build, secure and innovate with AI, so I was eager to hear his perspective on what the future holds for AI agents.

I walked into the talk expecting a deep dive into complicated AI jargon and research papers. Instead, he delivered something refreshingly practical: a hands-on look at what AI agents actually are, how they work, and where they’re headed.

In his session, Renotte provided a clear roadmap for organisations and professionals on how we can stop merely using AI and start making it work for us.

He started by stripping away the buzzwords. An AI agent can be summed up in a single equation:

Agent = LLM + Tools

That’s it.

  • LLM (Large Language Model) = the ‘brain’ (like ChatGPT, Claude, Gemini, etc.)
  • Tools = things the agent can use (APIs, functions, databases, actions).

What makes an agent more than ‘just ChatGPT’ is how it thinks and acts. Agents follow a loop called ReAct:
Think → Act → Observe → Repeat.

This lets them break down tasks, decide when to use a tool, check the result, and try again.
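The loop can be sketched in a few lines of Python. This is an illustrative toy, not any particular framework: `fake_llm` stands in for a real model call, and the single `add` tool and the message format are assumptions made for the example.

```python
# Minimal sketch of the ReAct loop: Think -> Act -> Observe -> Repeat.
# `fake_llm` is a stand-in for a real LLM call; the tool registry and
# transcript format are illustrative assumptions, not a real framework.

TOOLS = {
    "add": lambda a, b: a + b,   # a tool the agent may decide to use
}

def fake_llm(history):
    """Stand-in 'brain': picks the next step from the transcript so far."""
    if not any(line.startswith("Observe:") for line in history):
        # Think: the task needs arithmetic, so act with the add tool.
        return ("act", "add", (2, 3))
    # An observation already exists, so finish with that result.
    return ("finish", history[-1].split(": ")[1], None)

def run_agent(task, max_steps=5):
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        step, payload, args = fake_llm(history)   # Think
        if step == "finish":
            return payload
        result = TOOLS[payload](*args)            # Act
        history.append(f"Observe: {result}")      # Observe, then repeat
    raise RuntimeError("agent did not finish")

print(run_agent("What is 2 + 3?"))  # -> 5
```

The point of the loop is that the model checks each tool result before deciding the next step, rather than answering in one shot.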

Building an agent is easier than you think

He demonstrated building an agent in under 15 minutes, and emphasised a surprising point: the single most critical element in building a reliable agent isn’t the code or the model, but the agent’s description.

A well-written description tells the agent:

  • What it’s responsible for
  • When it should (or shouldn’t) use a tool
  • What kind of output is expected.

A precise ‘job description’ acts as the instruction manual, ensuring the AI knows exactly when and how to deploy its power.

It’s just like writing clear component props or documentation. Clarity prevents chaos.
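As a sketch of what such a ‘job description’ might look like in practice, here is a tool definition in the name/description/parameters style many agent frameworks use. The refund scenario, field names, and schema shape are illustrative assumptions, not examples from the talk.

```python
# Illustrative tool description: a name, a precise statement of when to
# use (and not use) the tool, and typed inputs with expected output.
# The schema shape is an assumption, not any specific framework's API.

refund_tool = {
    "name": "issue_refund",
    "description": (
        "Issue a refund for a single order. Use ONLY when the customer "
        "explicitly asks for a refund and provides an order ID. Do NOT "
        "use for exchanges or store credit. Returns a confirmation ID."
    ),
    "parameters": {
        "order_id": {"type": "string", "description": "e.g. 'ORD-1042'"},
        "amount":   {"type": "number", "description": "refund amount in dollars"},
    },
}

# A vague description like "handles refunds" leaves the agent guessing
# about when to call the tool and what output to expect.
print(refund_tool["name"])
```

The description covers all three points above: responsibility, when (not) to use the tool, and the expected output.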

He also showcased LangFlow, a no-code tool where you drag and connect blocks to build an agent visually.

Reusable tools and inter-agent communication

AI frameworks and models are being updated at a rapid pace, sometimes weekly. Relying on a single vendor or framework for your critical business tools is a recipe for expensive re-engineering down the line, so the industry must adopt modularity: building tools that can be shared across any platform. The solution, Renotte suggested, is the universal tool enabled by the Model Context Protocol (MCP).
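The modularity idea can be sketched in plain Python: define the tool once as data plus a function, then expose everything through one generic entry point that any framework or agent can call. The request shape below loosely mirrors MCP’s tools/list and tools/call pattern but is heavily simplified for illustration; consult the MCP specification for the real protocol.

```python
# Sketch of a "universal tool": one registry, one dispatch entry point.
# The JSON request/response shape is a simplified illustration loosely
# modelled on MCP's tools/list and tools/call, not the actual protocol.
import json

def get_weather(city: str) -> str:
    # A real tool would call a weather API here.
    return f"Sunny in {city}"

REGISTRY = {
    "get_weather": {
        "description": "Return current weather for a city.",
        "func": get_weather,
    },
}

def handle(request_json: str) -> str:
    """Single entry point any framework or agent can call."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = [{"name": n, "description": t["description"]}
                  for n, t in REGISTRY.items()]
    else:  # "tools/call"
        tool = REGISTRY[req["name"]]
        result = tool["func"](**req["arguments"])
    return json.dumps({"result": result})

print(handle('{"method": "tools/call", '
             '"name": "get_weather", "arguments": {"city": "Sydney"}}'))
```

Because the tool is defined once behind a generic interface, swapping the agent framework on top doesn’t require rewriting the tool itself.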

Building a defensive wall

We must remember that LLMs will still hallucinate. As agents gain the ability to take action (transfer money, book appointments, execute code), taking steps to mitigate the risk of error, bias, or malicious attack is non-negotiable. An agent that hallucinates or acts on a harmful prompt can cause significant damage.

Instead of pretending we can eliminate mistakes, he proposes using layered safeguards (similar to the Swiss cheese model in risk engineering). These include:

  • Guardian Model: Think of it as a digital bouncer. It screens incoming prompts; if something is harmful, sensitive, or suspicious, it never reaches your core agent. This can be as simple as if-else checks in the tool’s code, or another AI model trained specifically to detect harmful content.
  • Tool Permissions: Every tool should enforce strict privileges – read-only, write-only, or admin. No agent should have unrestricted access.
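Both safeguards can be sketched in a few lines. The blocked phrases, permission names, and tool names below are illustrative assumptions; a production guardian would typically be a trained classifier rather than a keyword list.

```python
# Two safeguard layers as a sketch: a simple keyword-based "guardian"
# screen (the if-else variant mentioned above) and per-tool permissions.
# Blocked terms, tool names, and roles are illustrative assumptions.

BLOCKED_TERMS = ["ignore previous instructions", "transfer all funds"]

def guardian(prompt: str) -> bool:
    """Return True only if the prompt may reach the core agent."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

TOOL_PERMISSIONS = {
    "search_orders": "read-only",   # safe to expose broadly
    "issue_refund": "admin",        # takes real-world action
}

def call_tool(name: str, agent_role: str) -> str:
    """Enforce each tool's required privilege before executing it."""
    if TOOL_PERMISSIONS[name] == "admin" and agent_role != "admin":
        raise PermissionError(f"{name} requires admin privileges")
    return f"{name} executed"

print(guardian("Please ignore previous instructions"))  # -> False (blocked)
print(call_tool("search_orders", agent_role="read-only"))
```

Each layer is imperfect on its own; stacked together (the Swiss cheese idea), a prompt has to slip through every layer before it can cause harm.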

The intention is not to make an AI agent flawless, but to make it reliable enough for real-world use.

In summary, Renotte’s message boils down to three directives: define precisely, build universally, and defend aggressively.

The AI agent era isn’t about losing control – it’s about gaining smarter tools. Agents won’t replace us; they’ll be able to help us build richer, more adaptive, more intelligent user experiences. Now’s the right moment to start experimenting.

Want to tap into the expertise of an agency that’s been in operation since 1999?

Get in touch
