
I read the Gartner Hype Cycle for AI in IT Operations as soon as it came out last year. We were already helping shape that conversation. At the time, it felt like validation. The market was clearly moving toward AI-driven operations, agents, and automation.
I went back to it recently. Not because the report changed. Because we did.
The more I read it now, the more convinced I am that people are missing the real takeaway.
Gartner is not being conservative. It is being accurate. Most of what they highlight, especially around AI agents and autonomous systems, is still sitting at the Innovation Trigger, with a five- to ten-year path to maturity. That is not an indictment of the technology. It reflects how immature the execution layer still is.
Everyone is focused on model intelligence. Very few are focused on what happens after the model generates an answer.
If you read the report closely, the structure tells the story. The entire value chain ends with “act and orchestrate,” the point where systems are meant to move from insight to execution. That is the future the market keeps promising. Systems that do not just analyze, but actually fix.
Now look at where that category sits. Still early. Still experimental. Still not trusted.
That gap is the whole story.
We are very good at “sense,” “think,” and even “assist.” That is why AI assistants are everywhere. That is why every tool now has some form of copilot built in. Gartner identifies this as the most active area in the market right now.
But assistance is not execution.
And this is where things break down in the real world.
Consider something as basic as fixing a misconfiguration. Today’s flow looks polished on the surface. A tool detects an issue. An AI assistant generates a fix. Sometimes it even writes a patch. What happens next? A human reviews it. Tests it. Validates it. Rewrites parts of it.
That is not automation. It’s high-speed work generation.
This is the probabilistic tax in action. I wrote about this in our recent blog. Every AI-generated suggestion carries a hidden cost. Not in tokens, but in engineering hours. The more you rely on “maybe correct” systems, the more you shift that cost onto the human who has to make them production-safe.
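To make the shape of that cost concrete, here is a back-of-envelope sketch. Every number in it is made up for illustration; the point is the structure, not the values. The tax scales with volume and with how often “maybe correct” turns out to be wrong.

```python
# Back-of-envelope sketch of the probabilistic tax. All numbers are
# hypothetical placeholders, not measured data.
suggestions_per_week = 50     # AI-generated fixes a team triages
p_production_safe = 0.7       # share that is correct as generated
review_hours_each = 0.5       # every suggestion still gets reviewed
rework_hours_each = 2.0       # the rest get rewritten and re-tested

tax = suggestions_per_week * (
    review_hours_each + (1 - p_production_safe) * rework_hours_each
)
print(f"probabilistic tax: {tax:.0f} engineering hours per week")  # 55
```

Notice that the review line never goes to zero. As long as the output is probabilistic, every suggestion pays it.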
And this is why Gartner’s timeline holds up.
Because when execution is probabilistic, trust will always lag. When trust lags, adoption lags with it. And when adoption lags, meaningful progress at scale stays five to ten years out.
The report even calls out the core issue in multiple places without saying it directly. Production-ready capabilities remain limited. Skills are lacking. Organizations are not ready to remove humans from the loop. Most importantly, few organizations trust AI enough to let it act without validation.
That is not a model problem. That is a control problem.
Infrastructure, security, and operations do not operate on probability. They depend on determinism. A Terraform plan either applies or it doesn’t. A policy either passes or it fails. A pipeline either breaks or it ships.
There is no room for guesswork disguised as “this looks right.”
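A policy check, written down, makes the contrast obvious. Here is a minimal sketch in Python. The resource shape and field names are hypothetical, not any real provider’s schema; what matters is that the check returns a verdict, not a likelihood.

```python
# Minimal sketch of a deterministic policy check. The resource shape
# and field names are hypothetical, not a real tool's schema.
def check_encryption_policy(resource: dict) -> bool:
    """An S3 bucket either has KMS encryption configured or it does not."""
    return resource.get("server_side_encryption") == "aws:kms"

bucket = {"type": "aws_s3_bucket", "server_side_encryption": None}
print(check_encryption_policy(bucket))  # False, on every run, for everyone
```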
A year ago, we were not asking how to make AI smarter. We were asking how to make execution reliable enough to trust.
That is the problem ORL was designed to solve.
ORL is not another assistant. It is not just another model. It is the execution layer that the market is missing. It takes policy intent and translates it into deterministic code transformations. Not suggestions. Not drafts. Actual, governed changes that behave the same way every time.
Same input. Same output. No interpretation layer in between.
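To be concrete about what a deterministic code transformation means here, consider a sketch. This is not ORL’s actual interface; it is an illustration of the property, reusing the hypothetical resource shape from above. A remediation becomes a pure function of its input.

```python
from copy import deepcopy

# Sketch of a remediation as a pure function: same input, same output,
# no model in the loop at execution time. Names are hypothetical.
def enforce_encryption(resource: dict) -> dict:
    patched = deepcopy(resource)  # never mutate the caller's config
    patched["server_side_encryption"] = "aws:kms"
    return patched

bucket = {"type": "aws_s3_bucket", "server_side_encryption": None}
assert enforce_encryption(bucket) == enforce_encryption(bucket)
```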
And that changes what becomes possible.
Once execution is deterministic, you remove the need for constant human verification. You are no longer reviewing guesses. You are reviewing outcomes already aligned with policy, standards, and the way your systems actually behave.
That is what allows you to plug this into Git. Into CI pipelines. Into real workflows where things can break.
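Here is a sketch of what that looks like as a pipeline gate. The file name, field names, and check mirror the hypothetical sketches above and are assumptions, not ORL’s actual interface. The point is that the pipeline either breaks or it ships, with no judgment call in between.

```python
import json
import sys

# Hypothetical CI gate: load the rendered plan, apply the deterministic
# check, and exit nonzero on any violation so the pipeline fails hard.
def main() -> int:
    with open("plan.json") as f:          # hypothetical artifact name
        resources = json.load(f)
    failures = [r for r in resources
                if r.get("server_side_encryption") != "aws:kms"]
    for r in failures:
        print(f"policy failure: {r.get('type')} is unencrypted", file=sys.stderr)
    return 1 if failures else 0           # deterministic verdict: ship or break

if __name__ == "__main__":
    sys.exit(main())
```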
This is also why we are talking about the Gartner Hype Cycle again now. Not because it predicts the future, but because it explains why the present is stuck.

The industry is pouring energy into agents, copilots, and generative systems. And that is fine. That is where innovation starts. But without a deterministic execution layer, it all stalls before reaching production.
That five-year timeline is not inevitable. It is a byproduct of the architecture everyone is choosing.
If you keep building on probabilistic systems, you will wait. You will keep adding review layers. You will keep slowing down the very workflows you are trying to accelerate.
If you introduce determinism at execution time, the timeline collapses.
Agents stop being experiments. They start becoming infrastructure.
We pulled our perspective together here, and you can get your copy of the Gartner Hype Cycle as well:
https://www.gomboc.ai/gartner-hype-cycle-for-ai-in-it-operations
Read it. The gap between where we are and where we think we are is the whole story.
If you are still relying on “maybe,” then Gartner is right. It will take years.
If you are done paying the probabilistic tax, there is nothing stopping you from doing this on Monday.


