In his view, the strength of AI lies not in imitating human cognitive functions, but in adding value to the human being:
"We don't need generalized artificial intelligence that imitates humans. We need augmented intelligence that elevates a person in their professional role to a new level — where we may let go of traditional expert dominance, but instead gain something new. It’s the power of a person who can think in broader systems and synthesize complex decisions into new forms of value."
This kind of integration requires systems thinking and an understanding that the era of “chatification” introduces a new type of thinking: we begin to think through dialogue with an artificial intelligence that has access to the world’s accumulated knowledge. The keys to that knowledge, however, are our prompts. If we cannot formulate a task properly, AI will operate at a correspondingly shallow level. This new type of thinking therefore challenges us immediately: we must learn to capture and articulate an idea, because the line between merely imitating and genuinely creating is surprisingly thin.
But over time, AI will learn to distinguish between genuine human thought and intuition — and mere imitation.
The era of chatification changes not only processes but also the logic of management. The more companies are prepared for these challenges, the sooner they will be able to increase efficiency and scale their results.
Let’s talk about AI and its nature. Why is this topic so important?
Many people have already tried using artificial intelligence. Sometimes the results are satisfying, other times not. But we all understand that AI is no longer something we can ignore.
AI did not emerge four years ago. These technologies have existed for over 70 years, starting with Alan Turing, the subject of the film “The Imitation Game” about cracking the Enigma cipher. He was a fascinating figure: at the very dawn of computing, he already predicted the rise of AI and proposed a test for evaluating it, the Turing Test. By 2025, ChatGPT was passing that test roughly 70% of the time. Think about what that means:
> 70% of people around the world cannot tell if they are talking to a human or an AI agent.
There are several levels of AI implementation in a company’s system. In explaining them, we’ll go through the challenges managers face.
First Level
Currently, the AI market is speculative. Companies invest substantial amounts in prototypes and early solutions, sometimes paying up to 200% more than they are actually worth. This happens largely because we think of AI as something abstract. For example, Elon Musk posts on X: “By 2030, we’ll create AI with all the cognitive functions of a human.” Many managers conclude: “What does that mean for us? What’s our strategy then? Let’s just wait until 2030.”
Surveys show that thousands of companies say AI is strategically important to them, but only ten of them actually experiment and develop anything. The rest think: “What does AI have to do with me? It’s not for me.” Claims like these deserve a critical approach.
Peek “under the hood” of AI and you’ll see it is built on servers full of graphics processors (GPUs), each containing on the order of 18 billion transistors; developers now compete for access to chips with 200 billion. Transistors are shrinking so quickly that the technology may eventually hit a hard physical limit to its viability. But few are talking about that.
Therefore, managers need to understand how this technology works, what its physical limitations are, and what to realistically expect from it. Such insights influence whether to invest in internal closed systems or to focus on global competitive models, into which hundreds of billions of dollars have already been invested.
Second Level
This level is when teams start using AI chat tools, but the company neither monitors nor facilitates their use. This leads to poor decision-making and conclusions, and the emergence of “shadow AI” in organizations. Managers must address this problem and move the team to the third level.
For example, a marketer may begin using chat tools to create a communication strategy but lacks prompts, context framing, or knowledge of appropriate methodology. The result is a superficially solid-looking, yet ineffective strategy that may fail in practice — and the company may not even realize that the failure came from a poorly prompted AI.
People ask something in a chat and when it doesn’t work, they say: “AI is just making stuff up.” Why? Because we think AI "thinks" or "talks" like us. We greet it, say thank you — but AI treats that as noise. Its neural setup processes information differently. Context is key.
People often say: “By the third minute, it starts responding well.” In fact, AI simply accumulates chat history and uses the context. We dislike its answers because we don’t use the tech correctly.
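The “by the third minute it responds well” effect is simply message accumulation: every request resends the full chat history, so later answers are conditioned on more context. A minimal sketch, where `fake_model` is a hypothetical stand-in for a real LLM endpoint:

```python
# Sketch: chat quality improves because each turn resends the history.
# `fake_model` is a hypothetical stand-in, not a real model call.

def fake_model(messages):
    # A real model conditions on all of `messages`; here we just
    # report how much context it would have received.
    context_size = sum(len(m["content"]) for m in messages)
    return f"(answer conditioned on {context_size} chars of context)"

history = [{"role": "system", "content": "You are a marketing assistant."}]

def ask(question):
    history.append({"role": "user", "content": question})
    answer = fake_model(history)  # the full history travels with every call
    history.append({"role": "assistant", "content": answer})
    return answer

first = ask("Draft a tagline.")
second = ask("Make it warmer.")  # this call sees the entire first exchange
```

The model never “warms up”; the prompt simply grows, which is why supplying context up front beats waiting for the third minute.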
Third Level
Let’s consider this from a management perspective. If you create the right conditions and infrastructure — a shared environment, prompt libraries, knowledge and tool bases, and know what not to base communication strategies on — then your outcome will be entirely different.
If we provide AI with solid examples, brand and client info, data sources, and statistics, AI will create a fundamentally different strategy. And the same strategy cannot be recreated by another marketer’s prompt — it will always be 70% unique to you.
The third level is when you begin developing agents (digital assistants). You design prompts that explain their role, company, goal, and tasks. You train them by uploading data, examples, and relevant knowledge. The results become significantly higher in accuracy and quality.
At this level, AI hallucinates and invents data far less often, because it finally has enough context.
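Designing such an agent largely comes down to assembling a structured system prompt: role, company, goal, tasks, and curated knowledge. A hedged sketch of that structure (field names and company details are illustrative, not any vendor’s API):

```python
# Sketch: an agent definition as role + company + goal + tasks + knowledge.
# All names here are illustrative; real agent platforms differ.

def build_system_prompt(role, company, goal, tasks, knowledge):
    lines = [
        f"You are {role} at {company}.",
        f"Your goal: {goal}",
        "Your tasks:",
        *[f"- {task}" for task in tasks],
        "Ground every answer in this knowledge base:",
        *[f"* {item}" for item in knowledge],
    ]
    return "\n".join(lines)

prompt = build_system_prompt(
    role="a communications strategist",
    company="Acme Retail",          # hypothetical company
    goal="produce a quarterly communication strategy",
    tasks=["segment the audience", "propose channels", "draft key messages"],
    knowledge=["2024 brand book summary", "client survey statistics"],
)
```

Uploading examples and data then fills out the knowledge section, which is what lifts accuracy and quality at this level.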
However, managers face new challenges. Imagine an office with 100 employees who each create 3–10 models to write, calculate, analyze, and correct.
Recruiters are already saying: “We prefer to hire people who use agents, as they bring 25–50% performance gains — up to 70% for developers, 30% for lawyers, analysts, and finance professionals.”
Within two years, recruiters will favor those who already work effectively with AI. But here’s the issue: if each of your 100 employees has agents, you end up managing 1,000 agents. Who manages them? Do they report to a CEO or HR manager? Who owns their knowledge repository? This matters — because those agents shape your company’s intellectual footprint.
There are no easy answers. Even if you deploy AI at work, in a few years you’ll face the question: Who manages the agents, and who ensures their security?
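The 100-employees-times-ten-agents arithmetic is easy to make concrete; the hard part is the governance column. A minimal sketch of an agent registry, with all names and storage paths hypothetical:

```python
# Sketch: once every employee runs agents, someone must track ownership.
# One registry row per agent: who owns it, where its knowledge lives.
# All identifiers and paths are hypothetical.

from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str
    owner: str            # the employee, but who is accountable above them?
    knowledge_store: str  # the company's intellectual footprint lives here

registry = [
    AgentRecord(
        name=f"agent-{e}-{i}",
        owner=f"employee-{e}",
        knowledge_store=f"kb/employee-{e}/",
    )
    for e in range(100)  # 100 employees
    for i in range(10)   # ~10 agents each -> 1,000 records to govern
]
```

A table like this answers “how many?” instantly; who audits it, and who owns the knowledge stores, remains the open management question.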
> The third level is challenging not just because it changes how you work — it enhances client value.
Many companies already have portals or apps. But when asked what they prefer, more than 50% of clients say they want a bot that enables them to complete actions with one tap. People who’ve interacted with AI don’t understand why they still need to log into portals and waste time.
Interfaces are shifting. The chat age is here.
Try communicating with AI through real-time video messages; it is surprisingly comfortable. Better still, skip the video itself: take the transcript and feed it to an agent. You work faster and inside your own context, which is a fundamentally higher level of learning.
Why will jobs change? Because agents perform parts of your professional role. Hence, your role will evolve too.
Why can’t you just buy a ready-made agent? Because agents need your context, your goals, your working methods — and that context changes over time. That’s why you must learn how to configure them yourself.
Fourth Level: Multi-Agent Systems
At this level, AI stops being a solo tool — interactions occur among several agents. Game theory and role distribution become important, with each role creating value for the system. Let’s return to our communication strategy:
Imagine three agents working on it:
- Agent 1 develops the strategy.
- Agent 2 analyzes your clients (reviews, comments, engagement) and critiques Agent 1’s work, perhaps even in real time.
- Agent 3 adapts the strategy in response to feedback and manages your content calendar.
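The three-agent loop above is, structurally, a critique-and-revise pipeline. A toy sketch, with trivial string-level functions standing in for the real agents:

```python
# Sketch: strategist -> critic -> adapter. Each function is a
# deterministic stand-in for what would be a separate AI agent.

def strategist(brief):
    # Agent 1: produce a first draft of the strategy.
    return f"Strategy draft for: {brief}"

def critic(draft, client_signals):
    # Agent 2: a real critic would mine reviews and comments; here we
    # simply flag any client signal the draft fails to mention.
    return [signal for signal in client_signals if signal not in draft]

def adapter(draft, critique):
    # Agent 3: fold the critique back into the strategy.
    for gap in critique:
        draft += f"\n+ address: {gap}"
    return draft

draft = strategist("spring campaign")
critique = critic(draft, ["price sensitivity", "spring campaign"])
final = adapter(draft, critique)
```

The value comes from the roles, not the individual calls: the critic exists only to challenge the strategist, and the adapter exists only to close the loop.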
This level demands a team of AI-savvy specialists. Not just tech people or coders — but thinkers. This is a whole new mindset.
With many agents, you can appoint managers or group them into departments. For example:
- One agent calculates, another writes, the third designs slides.
Agent-to-agent interaction runs smoothly, since each agent understands the others’ prompts and roles.
Once agents activate in sequence, the value chain they create is massive.
> The key to AI lies in systems thinking.
To build a multi-agent model, you must design inter-agent interaction. Without methodological knowledge, you can’t build proper context or structure.
The myth that all young people can work with GPT is false — only those with systems thinking truly succeed.
Fifth Level: AI-Powered Automation
Now we embed agents into business processes. This requires redesigning how systems operate — giving AI a proactive and autonomous role.
In the communications strategy example, an agent autonomously plans weekly content and optimizes it based on feedback — without your involvement.
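In code terms, the difference from level three is that the loop runs with no human step inside it: the agent plans, measures, and re-plans. A deterministic toy sketch of that feedback loop (topic names and weights are invented for illustration):

```python
# Sketch: an agent that plans weekly content and re-weights topics
# from engagement feedback, with no human inside the loop.

def plan_week(topic_weights, slots=5):
    # Fill the week's slots, favoring the highest-weighted topics.
    ranked = sorted(topic_weights, key=topic_weights.get, reverse=True)
    return [ranked[i % len(ranked)] for i in range(slots)]

def update_weights(topic_weights, engagement):
    # Feedback step: nudge each topic's weight toward observed engagement.
    return {topic: 0.5 * weight + 0.5 * engagement.get(topic, 0.0)
            for topic, weight in topic_weights.items()}

weights = {"how-to": 1.0, "news": 1.0, "case study": 1.0}
week1 = plan_week(weights)
weights = update_weights(weights, {"case study": 3.0})  # clients loved these
week2 = plan_week(weights)  # case studies now lead the plan
```

The manager only sees the outcome; the plan-measure-replan cycle itself is the agent’s job.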
Imagine this: the commercial director calls you at night — sales have plummeted. The team scrambles, everyone feels stressed.
Now imagine this instead:
AI messages you: “I identified a cash flow risk. I’ve contacted the director for context and analyzed the situation. Everything will stabilize.”
This is a different level of leadership entirely.
Some say digital transformation is over — what’s next is automation with AI elements. But understand: automation and AI are different.
Automation follows fixed logic. AI adjusts based on goals. They coexist, but their nature diverges.
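That distinction shows up even in toy code: automation hard-codes the rule, while an AI-style component is handed a goal and searches for the action that best serves it. A hedged sketch (the discount numbers and demand model are invented):

```python
# Sketch: fixed-logic automation vs. goal-driven adjustment.

def automated_discount(order_total):
    # Automation: the rule itself is hard-coded and never changes.
    return 0.10 if order_total > 100 else 0.0

def goal_driven_discount(order_total, candidates, predicted_profit):
    # "AI": given a goal (maximize predicted profit), choose the action.
    return max(candidates, key=lambda d: predicted_profit(order_total, d))

def profit(total, discount):
    # Toy demand model: deeper discounts sell more but earn less per unit.
    return total * (1 - discount) * (0.5 + discount)

best = goal_driven_discount(120, [0.0, 0.1, 0.2], profit)
```

Change the goal function and the second system changes its behavior; the first system changes only when someone rewrites the rule.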
Sixth Level: Cooperative AI
Now we focus on the person — with 100 trillion brain connections and unmatched intuition. This is something AI doesn’t possess.
Let’s return to our communication strategy. We might create an agent that gathers team context and factors it in (even for clients) before finalizing plans. Here, we fully embrace augmented intelligence in purposeful, high-value roles.
This isn’t a tech question anymore — it’s a matter of organizational design, acknowledging the new role of AI and the redefined role of humans working with AI.
At level six, a perfect arbiter emerges — one that can fairly make decisions and transform team dynamics.
No matter the AI level — agent-based or autonomous — you will miss the human element. AI, even with a trillion connections, is far from matching the human brain at 100 trillion.
Humans feel and synthesize in ways AI cannot.
When people recognized that gap, experts began experimenting with Cooperative AI. One such experiment runs a management decision with no in-person meeting:

- A question, drafted with AI, is shared with the team.
- Team members submit short written answers.
- The answers are fed into an AI model, which returns decision options.
- The AI-generated options are presented to the team, and the team often adopts them, because AI excels at fast, context-aware summarization.
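Structurally, that experiment is an aggregation step: short answers in, ranked decision options out. A toy sketch in which simple frequency counting stands in for the model’s summarization:

```python
# Sketch: one Cooperative AI round. Collect short answers, then let a
# model condense them into ranked options. Counter is a deterministic
# stand-in for what would really be an LLM summarization call.

from collections import Counter

def decision_options(answers, top_n=2):
    return [option for option, _ in Counter(answers).most_common(top_n)]

answers = [
    "cut price", "bundle", "cut price",
    "new channel", "bundle", "cut price",
]
options = decision_options(answers)
```

A real model would also merge near-duplicate phrasings and attach reasoning to each option, which is exactly the context-aware summarization the team values.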
> That’s why we don’t need generalized AI to mimic human cognition. We need augmented intelligence.
These technologies will handle routine tasks and free us up to focus on ideas, vision, leadership, and the intellectual parts of our roles that AI cannot fulfill.
As a result, we'll lay the foundation for a new corporate culture — where every team member, regardless of seniority, can fully realize their creative potential.