By Byron V. Acohido
Stephen Klein didn’t simply stir the pot. He lit a fire.
Related: Klein’s LinkedIn debate
In a sharply worded post that quickly went viral on LinkedIn, the technologist and academic took direct aim at what he called the “hype-as-a-service” business model behind so-called agentic AI. His critique was blunt: what the industry is selling as autonomous, goal-directed intelligence is, in many cases, little more than brittle prompt chains and hard-coded workflows dressed up in fancy language.
In Klein’s view, most current agentic systems are glorified wrappers – task orchestrators stitched together from APIs and large language models. They’re not “agents,” he argues, unless they demonstrate the hallmarks of true autonomy: self-directed goal setting, adaptive reasoning, memory, and the ability to operate across changing environments with minimal human intervention. Anything less? Marketing noise.
To his credit, Klein struck a nerve. His post drew a wave of applause from engineers and skeptics frustrated by the overreach of AI branding. But the backlash was telling, too. A quieter chorus – industry practitioners, startup builders, a few thoughtful researchers – responded not with denial, but with a question: even if most of today’s systems aren’t fully agentic, aren’t they still meaningfully new?
Cybersecurity use cases
That’s where Klein’s clarity turns brittle. Because while his academic rigor is valuable, his framing misses what’s actually happening – not in the hype decks, but on the ground.
At RSAC 2025, I spoke with over a dozen cybersecurity vendors quietly integrating LLM-powered decision support into core operations. Simbian is using GenAI to power a co-pilot that helps SOC analysts prioritize alerts in real time. Corelight is using it to sift network telemetry for subtle threat patterns. Are these “agents” in the Kleinian sense? Not quite. Are they meaningfully changing how work gets done in high-stakes, regulated environments? Absolutely.
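Neither vendor has published its internals, so treat the following as a rough sketch of the general pattern rather than anyone’s actual product: an LLM scores and ranks raw alerts, while the analyst keeps the final call. The model name and prompt here are illustrative assumptions.

```python
# Hypothetical sketch of LLM-assisted SOC alert triage.
# Not Simbian's or Corelight's implementation; the model name
# and prompt are assumptions made for illustration.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage(alerts: list[dict]) -> list[dict]:
    """Ask an LLM to rank raw SIEM alerts by likely severity."""
    prompt = (
        "You are a SOC triage assistant. Rank these alerts from most to "
        "least urgent with a one-line rationale for each. Respond only "
        "with a JSON array of {id, rank, rationale} objects.\n\n"
        + json.dumps(alerts)
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    # A production system would validate this output before trusting it.
    return json.loads(resp.choices[0].message.content)

# The analyst, not the model, makes the final call on what to escalate:
ranked = triage([
    {"id": "a1", "source": "EDR", "detail": "lsass.exe memory read by unsigned binary"},
    {"id": "a2", "source": "IDS", "detail": "port scan from internal host"},
])
for item in sorted(ranked, key=lambda r: r["rank"]):
    print(item["rank"], item["id"], item["rationale"])
```

The point of the pattern is the division of labor: the model compresses triage time, but authority stays with the human in the loop.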
And it’s not just the security sector.
At NTT Data, I encountered one of the most grounded – and arguably most agentic – use cases yet. Their system currently uses traditional computer vision models to tag visual elements in live-stream video – helmet vs. no helmet, license plate vs. background. These pixel-level attributes inform Attribute-Based Encryption (ABE) that redacts content dynamically, preserving privacy while enforcing policy.
But what makes this truly next-gen is what comes next: NTT’s engineers are layering in Mistral, a compact, open-source vision-language model (VLM), locally fine-tuned to operate as a domain-specific AI agent. This isn’t a general-purpose chatbot. It’s an embedded model designed to interpret live video semantically – identifying nuanced events like theft or assault, flagging involved actors, and triggering differential encryption in real time.
In short: Mistral isn’t just adding inference – it’s becoming an embedded decision-maker. Trained on both private and public datasets, it brings contextual judgment to surveillance tasks that were once binary. That’s not hype. That’s a purpose-built agent system, architected for real-world autonomy under strict policy constraints.
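NTT hasn’t published this pipeline’s code, but the control flow it describes is easy to sketch under stated assumptions. In the toy version below, the detector, the event classifier, and the encryption call are stubs standing in for NTT’s CV models, the fine-tuned VLM, and a real ABE library:

```python
# Simplified, hypothetical sketch of the redaction pipeline described
# above. All three helper functions are stubs, not NTT's code.

POLICY = {  # attribute -> roles whose keys may decrypt that region
    "face": ["privacy_officer"],
    "license_plate": ["law_enforcement"],
}

def detect_attributes(frame):
    """Stub for stage 1: pixel-level CV tagging with bounding regions."""
    return [("face", (40, 40, 120, 120)), ("license_plate", (300, 500, 60, 20))]

def classify_event(frame):
    """Stub for stage 2: the VLM's semantic read of the whole scene."""
    return "theft"  # e.g. "theft", "assault", or "routine"

def abe_encrypt(frame, region, roles):
    """Stub for Attribute-Based Encryption of one region under a role policy."""
    print(f"encrypting {region} so only {roles} can decrypt")
    return frame

def process_frame(frame):
    event = classify_event(frame)
    for attribute, region in detect_attributes(frame):
        roles = list(POLICY.get(attribute, []))
        if event in ("theft", "assault"):
            roles.append("incident_responder")  # escalate access for investigators
        if roles:
            frame = abe_encrypt(frame, region, roles)
    return frame

process_frame(frame=object())
```

What makes the design notable is that the VLM’s judgment doesn’t decrypt anything itself; it only widens or narrows which role keys can, keeping the semantic layer subordinate to the cryptographic policy.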
Agentic AI citizens
Klein is right to call for clearer definitions. But in cases like this, the semantics are chasing something that’s already real – systems quietly reshaping how autonomy is engineered and applied.
Dr. Hidenori Tanaka, head of NTT’s Physics of AI group, takes this idea a step further. He envisions a future where LLM-enabled agents are not merely optimized for engagement, but purposefully designed with domain-specific personalities aligned to their intended use. Chatbots, he argues, are no longer inert tools; they are new actors in the societal fabric – “citizens,” in his words – shaping human cognition through everyday interaction.
Tanaka’s central insight is that AI personality should not be accidental. It is engineered – through system prompts, training data, and corporate incentives. And this, he warns, creates macro-level effects: if AI is universally optimized for comfort or virality, it risks reinforcing polarization and eroding public trust. Instead, he calls for a scientific discipline that can translate open-ended moral questions – What should an AI value? What does it mean to be kind? – into measurable benchmarks and controllable behaviors.
His goal is not to anthropomorphize machines but to embed deliberate design into how agents evolve. He wants to transform LLM development from an ad hoc enterprise into a grounded, interdisciplinary science – one rooted in physics, psychology, and ethics, and capable of cultivating agents that support, rather than distort, our shared cognitive space.
The coining of “agentic AI”
The truth is, the term agentic AI didn’t begin in academia. It crept into the lexicon in mid-2024, as the generative wave matured. With tools like LangChain, OpenAI’s Agents SDK, and AutoGen, developers began building systems that could remember context, select tools, pursue goals, and adapt their next steps based on real-world outcomes. The industry needed language to describe what felt like a new capability – and agentic sounded right.
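Stripped of any particular framework, the loop those tools wrap is small enough to show in a toy sketch. Here the “decide” heuristic and both tools are stand-ins for what would really be an LLM call; only the shape of the loop – remember, choose a tool, act, adapt – is the point:

```python
# Toy illustration of a minimal agent loop: keep memory, pick a
# tool, act, and feed the observation back into the next decision.
# decide() is a hard-coded stand-in for an LLM's reasoning step.

TOOLS = {
    "search": lambda q: f"results for {q!r}",
    "calculate": lambda expr: str(eval(expr)),  # demo only; never eval untrusted input
}

def decide(goal, memory):
    """Stand-in for the LLM: choose the next (tool, argument), or stop."""
    if not memory:
        return ("search", goal)
    if len(memory) == 1:
        return ("calculate", "6 * 7")
    return None  # goal judged complete

def run_agent(goal):
    memory = []  # context carried between steps
    while (step := decide(goal, memory)) is not None:
        tool, arg = step
        observation = TOOLS[tool](arg)           # act
        memory.append((tool, arg, observation))  # remember; shapes the next decision
    return memory

print(run_agent("current average data-breach cost"))
```

Whether that loop clears Klein’s bar for autonomy is exactly the debate; what’s not debatable is that it behaves differently from a single prompt-and-response exchange.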
Thought leaders like Andrew Ng – a pioneering AI educator and founder of DeepLearning.AI – helped popularize the term agentic AI in 2023 and 2024. Through his newsletters, courses, and public commentary, Ng framed agentic systems as LLM-powered applications capable of goal-seeking behavior and multi-step coordination – a framing that gave the term significant traction among developers and enterprise adopters. By late 2024, it was everywhere: product sheets, panel discussions, investor pitches.
Critics like Klein saw this as definitional drift. But I’d argue it’s closer to natural language evolution – messy, organic, shaped by use, not decree.
Hard lines vs. gradient adoption
Which brings us to the present tension: academic purists want hard lines. Practitioners are working in gradients.
And while we absolutely need to push back on misleading claims – especially when real-world trust and safety are on the line – we should be careful not to flatten the conversation into a binary. Because much of what’s now labeled agentic AI may fall short of Klein’s threshold, but that doesn’t make it trivial.
The shift is real. We’re moving from tools that merely respond to input, to systems that help initiate, coordinate, and execute. It’s not artificial general intelligence. It’s not even full autonomy. But it’s a different texture of software – and that matters.
In a recent essay I called Wither Genius?, I described how this shift is crowding the middle: the space once occupied by mid-tier professional fluency – the technical writer, the financial analyst, the policy drafter – is being compressed by LLMs that can now emulate structure and tone with alarming fluency. And yet, the upper and lower bounds of creativity – the instinct to ask a new question, the intuition to challenge the prompt – remain deeply human. The kind of genius expressed by Truman Capote’s narrative nuance, Rachel Springs’ creative social worldbuilding, or Frank Herbert’s philosophical scaffolding in Dune is still far beyond what language models can conjure. That frontier remains ours – for now.
What we’re seeing is scaffolding being laid for something new. That scaffolding may not meet every checkbox on Klein’s autonomy rubric, but it’s already supporting workflows, insights, and decision models that didn’t exist two years ago.
A new kind of agency
More importantly, it’s enabling a new kind of agency – not just in machines, but in people.
You see it in the daikon farmer tuning a Hugging Face model to automate irrigation. In the local teacher tweaking GenAI lesson plans for her students – but refusing to let the model track them. In the musicians who launched a streaming radio station in my coastal hometown, co-composing their scripts with AI.
None of this fits neatly into Klein’s frame. But it’s happening. And it’s powerful.
So yes – let’s call out overhyped claims. Let’s raise the bar for what we mean by agentic. But let’s also acknowledge the deeper transformation underway. This isn’t just a semantic debate. It’s the early friction of a new human-machine relationship – one that’s still taking shape.
Klein wants to define the term. The rest of us are trying to define the future.
Let’s not confuse the two. I’ll keep watch – and keep reporting.
Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.
(Editor’s note: A machine assisted in creating this content. I used ChatGPT-4o to accelerate research, to scale correlations, to distill complex observations and to tighten structure, grammar, and syntax. The analysis and conclusions are entirely my own, drawn from lived experience and editorial judgment honed over decades of investigative reporting.)