Agentic AI: When Your AI Starts Making Plans (And You Don’t Like Them)
Just finished Agentic AI by Ken Huang.
It’s not bedtime reading unless your nightmares involve goal-driven machine agents improvising in your production environment.
Spoiler: they do now.
Here’s the setup:
You thought AI was some nerdy intern spitting out text. Cute.
Now imagine that intern gets bored of waiting on you, starts filing expense reports, scheduling meetings, and hiring contractors on Upwork.
That’s agentic AI — models with initiative, memory, and the gall to act without your permission.
But let’s talk security. Because this is where things get spicy.
The book dives deep into two ideas most people still haven’t clocked:
Your agents can be manipulated without being “hacked.”
Feed them poisoned data, whisper the right prompt, and they'll sabotage you with a smile.You can’t firewall a personality.
Agents are persistent, evolving, and contextual. Old-school defense doesn’t apply when the threat vector has feelings and a to-do list.
So how do you defend against rogue agents?
Huang outlines frameworks like MAESTRO and SHIELD. Cute acronyms. Helpful stuff.
But if you actually want to survive this shift?
You go ephemeral.
Delete your heroes.
Re-spawn your infrastructure.
Burn it all down and rebuild it before anyone gets too comfy.
Because what persists, gets owned.
Agents included.
The Big Idea?
Forget guarding the castle.
Start moving the damn castle every ten minutes.
Your agent doesn’t need a long memory. It needs just enough to act and die.
That’s where our team at R6 is already living — disposable NIMs, adaptive defense, AI that’s harder to track than a rat in a restaurant.
Final word?
If you’re still deploying long-lived, memory-laden agents with stable endpoints and static configs, you’re not deploying AI.
You’re building the world’s most helpful insider threat.
Read the book.
Then light a match.
— Zsolt
(Who’s not panicking. You are.)