Why does AI only work within the logic of systems thinking?
Because when you deal with uncertainty, you need to create a system that enables you to stumble upon solutions. It’s essential that this system allows for flexibility, without rigid structures; in other words, you need to build a system that encourages experimentation. If you decide to learn from mistakes, those mistakes should come at a low cost. The task of a leader is not to foresee the future but to create a system that can discover it.
When introducing something new—like AI implementation in a company—existing processes need to change. How can a company prepare for that?
First of all, you have to acknowledge that no system is ever truly ready for something new. Don’t even entertain the notion that some parts are ready while others are not, or that some have a better culture and others worse. In reality, there are no “bad” cultures—just accept that nobody is completely ready.
Second, acknowledge that uncertainty is physically uncomfortable for people. They want clarity—because they have obligations, families, plans, and dreams. They will always perceive uncertainty as a threat. That’s why it's better to present clear concepts, provide clarity, and not expect “readiness.”
Third, this is about working at the system level. While something is still acquiring structure or clarity, there is no evidence yet that it will work systemically. Therefore, when we work with uncertainty, our main task is to consistently reduce that uncertainty by continuously running experiments. In doing so, we start accumulating proof and evidence.
If you come to your team promising something that then doesn’t work, you risk losing trust. I like the idea that there are three underappreciated but vital resources in a system: trust, time, and talent. These are critical for any system to function.
Speaking of talent and people, how can you identify those who should enter these experiments?
If you’ve decided to build an internal sandbox within the company, then it’s about defining roles. It means there’s a team that treats uncertainty professionally. For example, this team abandons the notion that they “know” something. Their professional attitude becomes: “I don’t know, but we’ll try to find out.” Such a team is engaged in testing hypotheses professionally. And naturally, they will invite other people from the company into this sandbox.
Inside the sandbox, there are defined rules for how ideas transform into assumptions, how the cost of validating an assumption is calculated, and so forth. In my view, this is the ideal model for working with AI.
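As an illustration only (this sketch is not from the program; all names, fields, and numbers are hypothetical), here is roughly what such sandbox rules might look like once written down: every idea decomposes into explicit assumptions, every assumption carries an estimated validation cost, and the cheapest untested assumption is always the next experiment, which keeps mistakes low-cost.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sandbox bookkeeping: ideas become explicit, testable
# assumptions, each with an estimated cost of validation.

@dataclass
class Assumption:
    statement: str                     # what we believe but have not yet shown
    validation_cost: float             # cheapest experiment we can imagine, in team-days
    validated: Optional[bool] = None   # None = not yet tested

@dataclass
class Idea:
    title: str
    assumptions: list = field(default_factory=list)

    def next_experiment(self) -> Optional[Assumption]:
        """Pick the cheapest untested assumption, so mistakes stay low-cost."""
        untested = [a for a in self.assumptions if a.validated is None]
        return min(untested, key=lambda a: a.validation_cost, default=None)

idea = Idea("AI-assisted ticket triage", [
    Assumption("A model can classify ticket urgency reliably", validation_cost=2.0),
    Assumption("Support staff will trust the model's suggestions", validation_cost=5.0),
])
print(idea.next_experiment().statement)  # the cheapest untested assumption comes first
```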
Let’s talk about graduates of our programs. When they first joined the learning process, what were their expectations or misconceptions about AI?
Great question. We regularly analyze participant surveys about the expectations people bring when entering the program or accelerator. (The kmbs AI Workshop for Teams program uses an accelerator model: asynchronous creative teamwork to design AI prototypes via tools like Open WebUI and Slack, alongside weekly online facilitation.) There are three very interesting recurring themes:
First, teams and top managers tend to believe they are already behind. That the world has changed and they didn’t catch up. We challenge this misconception by showing that it’s not true. I believe AI is a decade-long transformation—not a 1–3 year phenomenon. So, there’s still time.
Second, when participants enter the program, they hope AI will reduce their routine workload, so they can focus on creativity, art, or the customer. But this mindset often lacks a focus on creating something new. In reality, AI won’t reduce routine immediately. At certain stages, it may even increase it because you’ll need to reconfigure systems, generate data setups that don’t yet exist, and fundamentally change business processes. Routine can be optimized, but it won’t give you that bold dream you might hope for.
I believe we should stop focusing on reducing routine and instead aim to change the lifestyle of our employees. How they check email, how they receive tasks, how they process meetings—AI can transform many of these things. People need to live differently. This changes the entire focus and leads to a different set of assumptions.
For example, imagine an employee who processes email through headphones: what if the 70 messages that only need archiving were never touched manually again? And maybe it’s about culture after all. We shouldn’t confuse one person’s private workflow with the idea that everyone in the company can now work differently, and is allowed to do so.
So, this second misconception is that we want to reduce routine and gain time, when, in fact, we should be transforming our management style—how we work day to day.
The third misconception teams bring is the desire to move from “shadow” AI to systemic AI. What does shadow mean? Even if the company doesn’t formally deal with AI, employees are already using personal subscriptions—GPT, Gemini, or favorite apps. They work with them in secret and don’t talk about it.
Essentially, it becomes a stealth game—they do everything faster, but don’t report it. This happens because formal AI rules are not yet defined by the company. Leadership hasn’t announced a stance. Even leaders themselves often use shadow AI tools privately.
The crucial systemic step is to shift from shadow to structured AI usage.
But you can’t just walk into the office and declare, “Starting today, everyone uses AI.”
There are three levels of AI engagement in companies:
1. The first level is personal AI models. Some call them AI-bots, but I prefer to call them “personal AI models.” These models are mostly individual-focused and help a specific role become more effective professionally. On average, each role in a company will need 3 to 7 personal models—not counting models that only check email.
2. The second level is Agent AI. Agents are also models—but integrated into business processes, connected to data, other models, roles, etc. They support partial automation and often run multiple models under the hood.
3. The third level is Autonomous AI agents. These are delegated certain tasks and can act proactively, objectively, and autonomously.
Whatever we do at the personal level, the next evolutionary step for any model is to become an agent: part of a business process, autonomous. The sketch below illustrates the difference between these levels.
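To make the difference between the first two levels concrete, here is a hedged sketch rather than a definitive implementation: every function and object in it is hypothetical, and call_llm stands in for whichever chat-completion API a company actually uses. A personal model is a role-specific prompt that an individual invokes by hand; an agent is the same kind of call embedded in a business process, connected to data, and allowed to trigger the next step.

```python
# Hypothetical sketch of levels 1 and 2; call_llm is a placeholder for any
# chat-completion API (OpenAI, Gemini, a local model behind Open WebUI, etc.).

def call_llm(system_prompt: str, user_input: str) -> str:
    """Placeholder for a real model call."""
    raise NotImplementedError

# Level 1: a personal AI model, i.e. a role-specific prompt a person runs by hand.
def summarize_meeting(notes: str) -> str:
    return call_llm(
        "You are an assistant to a sales director. Summarize these meeting notes.",
        notes,
    )

# Level 2: an agent, i.e. the same capability embedded in a business process,
# connected to data (a hypothetical CRM here) and able to trigger the next step.
def triage_ticket(ticket: dict, crm) -> None:
    history = crm.fetch_customer_history(ticket["customer_id"])  # connected to data
    urgency = call_llm(
        "Classify this support ticket's urgency as low, medium, or high.",
        ticket["text"] + "\n\nCustomer history:\n" + history,
    )
    if "high" in urgency.lower():
        crm.escalate(ticket["id"])           # partial automation: the agent acts
    else:
        crm.queue_for_review(ticket["id"])   # a human stays in the loop
```

The third level would remove the human from that last branch for delegated tasks, which is exactly why it demands the most trust and the strongest system design.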
Can you share how teams evolve during the program? What are their initial ideas and what do they walk away with?
They leave with functioning prototypes. That’s important to us—we want the team to return to their companies with evidence that it works. That’s one of our core values.
If we look at these three levels—personal models, agents, and “agentic” behavior (the ability to act autonomously and influence the environment)—then we deal with three assumptions driving most of today’s ideas.
1. The first assumption, at the personal model level, is that the team will become more productive simply by using these tools and models.
2. The second assumption is that business processes can be improved through AI integration that reduces routine.
3. The third is: "If I have an autonomous AI, I can compete at a different level."
In reality, the first level brings a challenge, especially for HR and tech-HR professionals. You can’t just be an HR officer anymore. One current trend is the merging of HR and IT: as the role becomes more technical, HR begins managing not only people but also their personal AI models. In this way, HR professionals become partly IT professionals, too.
Who is responsible for AI implementation in most companies?
Current research shows that 97% of IT professionals see corporate AI as more of a risk than an opportunity. This means we need to retrain both users and IT specialists. It’s a new synthesis that requires a new perspective.
So, the first level, personal AI models, is a challenge for HR + IT.
The second is a challenge for new-style COOs—those constantly reengineering business processes with a human touch and embedding AI into them in ways that respect human roles.
The third level is for leaders—those expected to discover disruptive innovation and drive it to maintain competitive advantage. We call this “algorithmic competition”: some companies using AI will make better decisions in logistics, pricing, customer service, and values—and other companies won’t even understand how they’re doing it.
That’s not just math—it’s the realm of corporate AI, and the agentic level. It requires new technology, new frameworks—and poses significant leadership challenges. This is why we focus in our school on the second and third levels, training top managers and leaders to take the next step forward into complex AI integrations.
In the “AI Workshop for Teams” program, how do you know at what level each team is working?
We clearly define the level each team is operating at. If a team is at the first level, it’s important that they spend time testing many assumptions and building many prototypes with personal models. That means learning to work with personal models and redesigning how they work day to day.
If a company wants to prototype new business processes, we start right away with low-code platforms, build and launch business processes, and determine which models are needed.
If we’re working on innovation right away, then we spend much more time on idea design before testing any tech assumptions. We don’t dive into models, algorithms, or singularity—and I like that. We’re talking to managers, decision-makers—people who need to figure it out.
For example, Euromix, a large Ukrainian distributor, has been actively integrating AI into its business processes. The company participated in the accelerator program and became the third team to work within a sandbox. An internal AI R&D unit was created, headed by a leader from the first team, who now acts as an internal tutor. They deployed personal AI models, tested agents, evaluated sales rep performance, ran store analytics, and generated product assortment recommendations.
They’re now building a full internal sandbox, developing their version of a conversational assistant for internal use. Their initial request was typical—reduce routine, systematize processes, eliminate “shadow” work. Every team enters with assumptions, and I can confidently say that helping to dismantle these assumptions is one of the core values of what we do. More teams now arrive prepared, already equipped with personal models, and ready to work on business processes, agentic systems, or AI innovation.
Our accelerator has evolved—we now spend less time on personal models, more time on big challenges and ideas.
With EVA, we launched a corporate program through a one-day top-management workshop. They took it seriously. Within three to four months, they returned to us for the accelerator, now ready as a full team. By then, they had already launched internal corporate training—and said they’d never seen so many employees eager to learn before. That shows willingness to move toward systemic AI deployment.
One EVA team tested a bold idea—and thanks to the company’s tech maturity, produced two strong prototypes. That was one of the moments where we, as facilitators, created the space and they surpassed our expectations.
Who is usually ready to build a sandbox?
I believe new HR leaders will take on the responsibility of building sandboxes. The real question is: who is ready to accept responsibility? These changes affect about 30–40% of company roles. As a result, the CTO role is evolving, and so is HR’s. The CTO now operates partly in HR’s domain—and vice versa. This is not about laying people off—it's about changing roles and scopes of responsibility.
That’s why we insist: the key to implementing AI in corporations lies in systems thinking.
To summarize: What steps should a CEO or executive take now if they realize it's time to engage with AI?
I wouldn’t recommend taking many steps all at once. Why? Because the bias of “we’re behind” can distort today’s decision-making. In reality, there aren’t many steps needed.
Here are the 3 key steps for a CEO:
1. Understand that today, form is more important than function. That is, focus on building systems that support AI integration. It’s long-term—measured in decades.
2. Recognize that investments should be distributed unevenly: three dollars to the people, one to technology. People matter more than tools.
3. Leaders must stop viewing AI as just an IT tool. Even though IT professionals often present it that way, that would oversimplify what AI really is. AI fundamentally creates a new type of management—a swarming, collective, or additive intelligence.
Can AI adoption begin without clear agreements—without a roadmap, defined stages, or rules?
We need to embrace one basic rule about AI: over the next 2–3 years, we will make a lot of mistakes. And it’s important that we approach this with tolerance.
At the same time, if we choose to learn from our mistakes, those mistakes must not cost us layoffs, broken roles, abandoned ideas, or lost faith in the technology.
That’s why it’s critical for leaders and accelerator participants to build sandboxes with respect—for people, for systems, and for the inevitable role transformations. In this context, systems thinking is absolutely essential.