AI, AGI, and many other terms that have been flooding us lately are starting to cause confusion even among specialists.
The old question “What’s better, Windows or Linux?” has now evolved into: “Why isn’t the new version of brandname considered AGI?”
NGFW AI, SOC AI, Pentest AI, and many more have brought us to the point where, even from a marketing perspective, a new cybersecurity product without an “AI” label looks weaker than its competitors.
— Do our logs appear on dashboards automatically?
— Well… a script pulls them
— But automatically?
— Via cron
— Don’t forget to add AI
— And the dashboards render automatically
— Then we’re the first with Double AI 🦾
This is, of course, a weak attempt at a joke — but I’m sure someone from a product team smiled.
Are there benefits to the widespread adoption of AI? Absolutely — and significant ones. But there are also downsides that people don’t always like to acknowledge.
The image above perfectly illustrates the current state of the industry for me: one person lying there not understanding what’s happening, while two others are shouting: AI! GPT! AGI! LLM! It somewhat resembles a scene from a well-known film: everyone is chasing the White Rabbit, but no one has asked where it’s actually going.
AI is that rabbit. Many are already deep inside the “rabbit hole,” but don’t fully understand where the exit is — or why they went in.
In recent months, the nature of questions I receive in private messages has changed significantly. Previously, people would ask:
— “How does Kerberos work?”
— or at least “Why is this a vulnerability?”
Now the questions look more like:
— “Which AI do you use to complete labs on HTB?”
— “Share a prompt”
— “How do I bypass restrictions? It only gives hints but doesn’t write the exploit for me”
I increasingly recall how I used to read articles from Securelist, search for books, and try to understand not just how to pentest, but how the systems themselves work. Because how can you test something if you don’t understand how it functions?
“Give me the ready solution,” “generate an exploit,” and “explain briefly” — this is not a problem of the AI industry. It’s a problem of the people making such requests, and those who attach endless “AI” labels everywhere.
A production database dropped because your AI agent suggested it (after you told it you were working in a test environment to bypass its restrictions) is just one of many such problems.
Back to the rabbit analogy:
Today, you need to run alongside AI just to stay in place. To become more valuable — you’ll have to think faster than it.
What matters is not just getting the flag or the correct answer, but understanding why it is correct — and being able to explain it.
You can know how to use a tool without understanding the domain itself.
A separate pain point is security (of course, since we’re in infosec):
People are feeding AI systems with configs containing API keys, NDA-protected code, and internal data. Does anyone think about where that data goes next?
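To make the point above concrete, here is a minimal sketch of masking obvious secrets in a config snippet before it is pasted anywhere external. The key list, regex, and `redact_secrets` helper are my own illustrative assumptions, not an exhaustive or reliable filter:

```python
import re

# Hypothetical helper: masks values of common secret-looking keys before a
# config snippet is shared with an external AI assistant. The key list and
# regex are illustrative assumptions, not a complete secret scanner.
SECRET_KEYS = ("api_key", "apikey", "secret", "token", "password", "passwd")

def redact_secrets(text: str) -> str:
    # Match lines like `api_key = "abc123"` or `TOKEN: xyz` and mask the value.
    pattern = re.compile(
        r"(?i)\b(" + "|".join(SECRET_KEYS) + r")\b(\s*[:=]\s*)(\S+)"
    )
    return pattern.sub(lambda m: m.group(1) + m.group(2) + "[REDACTED]", text)

config = 'db_host = 10.0.0.5\napi_key = "sk-live-123456"\n'
print(redact_secrets(config))
```

Even a crude filter like this catches the careless cases; it says nothing, of course, about the NDA-protected code and internal context people paste in wholesale.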
As for CTFs that already explicitly ban AI usage — I won’t even comment.
The more AI is adopted, the more valuable humans will become — but specifically highly skilled ones.
I don’t know what true AGI will look like, but I doubt it will fully replicate creative thinking in pentesting anytime soon — especially when breaking application logic can cause far greater damage than discovering yet another unauthenticated RCE CVE.
Humans are dangerous because of their creativity and unpredictability. AI, to me, is still largely about algorithms — despite the progress of systems like XBOW and Mythos.
The “AI” label doesn’t make software better — or you smarter — until your pentest agent (or any tool) actually thinks faster than you do.
Automating reconnaissance or assisting in writing PoCs does not change the fact that today you must evolve faster than new models are released.
If tomorrow a new “XSUPERAI3000” passed everything instantly, I would still choose the person who spends evenings in labs, because they have creative thinking on their side.
At the same time, it’s important to understand: in the 21st century, the winner will be the specialist who not only continuously improves their skills but can also quickly learn and effectively use their own “Jarvis.”