This won’t be about agentic frameworks, advanced RAG with HNSW, hierarchical caching or even prompt engineering.

Aside from a machine learning course in late 2020, my first real encounter with AI was through the GitHub Copilot preview in the fall of 2021. At that time, it made me wonder about the future of programming. It’s funny to look back on those early attempts, because it was little more than an advanced autocomplete, riddled with so many errors that it was barely usable. A couple of months later, in early 2022, it became a paid service, which turned me off completely, as I just didn’t see enough value at that level of execution.

The main reason I wanted to use GitHub Copilot—and many other tools of its kind since—was to do my job faster. I was just starting out as a software engineer back then, and without a formal computer science degree, I knew it would be very difficult for me to compete (to find a job) with more experienced developers who could solve problems more quickly. So, I tried to use tools that would help me close that gap and become more effective—speed plus quality. In essence, it’s the same reason we use an IDE instead of a notepad, plugins instead of purely manual changes, or a search engine instead of books in a library. This was just one more tool in my toolkit.

That said, increased efficiency—or labor productivity, in economic terms—has been the outcome of every industrial revolution. Why? Because capitalism is the dominant economic system today, and its goal is to increase profit. This can be achieved either extensively (by scaling up) or intensively (by earning more from the same resources). Without delving into theoretical justifications and calculations, using more effective methods and technologies lets a business owner earn more than their competitors, at least during the temporary lag while others catch up technologically. It also creates a competitive advantage, since the additional capital accumulated can be reinvested either in scaling up or in further technological upgrades.

The current active push for AI is, at its core, also about increasing business efficiency. Automation used to rest on clearly defined logic; now, with large language models (LLMs) integrated into decision-making chains, it enables much more flexible systems under the ReAct (reasoning + acting) paradigm.
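To make the ReAct idea concrete, here is a minimal sketch of such a reason-then-act loop. Everything in it is a hypothetical stand-in: `call_llm` is a stub where a real system would call a model API, and `lookup_stock` is a toy tool; the `Thought:`/`Action:`/`Observation:` line format is just one common convention, not a fixed standard.

```python
# Minimal ReAct-style loop: the "model" alternates between reasoning
# (Thought), calling tools (Action), and reading results (Observation),
# until it emits a Final Answer.

def call_llm(history):
    # Stubbed model: asks for data once, then answers.
    # A real implementation would send `history` to an LLM API.
    if not any(line.startswith("Observation:") for line in history):
        return "Thought: I need the current stock level.\nAction: lookup_stock[widget]"
    return "Thought: I have the data.\nFinal Answer: 42 widgets in stock"

# Registry of tools the agent may call; here a single hypothetical one.
TOOLS = {"lookup_stock": lambda item: "42"}

def react_loop(question, max_steps=5):
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        reply = call_llm(history)
        history.append(reply)
        last = reply.splitlines()[-1]
        if last.startswith("Final Answer:"):
            return last.removeprefix("Final Answer:").strip()
        if last.startswith("Action:"):
            # Parse e.g. "Action: lookup_stock[widget]" into name and argument.
            name, arg = last.removeprefix("Action:").strip().rstrip("]").split("[")
            history.append(f"Observation: {TOOLS[name](arg)}")
    return None

print(react_loop("How many widgets are in stock?"))  # → 42 widgets in stock
```

The flexibility comes from the fact that the model, not hard-coded logic, decides at each step which tool to call next, based on everything observed so far.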

So why did this make me wonder?

Because I had heard that programming is, to some extent, a creative profession (due to the need to find new solutions and think architecturally), one that was difficult to describe step-by-step and automate, as had been done with other processes before. The same was true of many other professions. So the vision of the future used to be framed in terms of robotization, just as mechanization had come before it—the replacement of manual labor, while intellectual labor always remained the prerogative of humans. No matter what systems we created before, they could not reason and independently make complex decisions based on planning. Now, they can.

Watching how LLMs (including VLMs) have developed over the past few years, and using them every day, I find it astonishing how far we have already come, despite their far-from-perfect reliability (hallucinations) and cost (the expense of training and mass inference). For now, those problems appear to be technical, not conceptual (although safety and security remain a huge question mark). And given the amount of resources (including human ingenuity and state/military funding) directed toward the creation of artificial general intelligence (AGI), development will continue, and so will its integration into business.

What does this mean?

I am observing a split among companies in their attitude toward AI.

Some companies unequivocally ban the use of any AI assistants at work, justifying it on the one hand with the potential leakage of trade secrets, and on the other, with a desire to ensure the quality of the work, making sure that people fully understand the problem and the solution they are working on themselves.

The other type of companies—we can call them early innovators or early adopters in the context of the innovation cycle—see the use of AI as a potential competitive advantage in various aspects:

  • Reducing the factor of human error through even greater automation.

  • Lowering costs by replacing more expensive human labor.

  • Increasing the speed of solution delivery.

For example, ever since 2021 I have continued to test and use AI tools, and until recently my main assistant was Perplexity, precisely because of the breadth of questions I solved with it. Now, though, I am achieving even greater speed using Cursor as my code editor. Yes, these tools are still far from perfect, but I can clearly see where I have cut down the time spent searching for options and for the best solution. Since it is a tool, you need to learn how to use it properly and understand its limitations. Nevertheless, comparing my experience in a large corporation with my work in a startup with a free-spirited vibe-coding culture, I see an incredible difference in the speed of creating solutions and in what can be achieved with a very small team. But! Using such assistants speeds up the creation of solutions; it does not guarantee their quality, as that requires a fundamental understanding of what you are working on, knowledge of good practices, and a broad skill set to avoid growth problems down the line.

This brings me to the idea that companies now need fewer people for the three reasons described above, as the same capitalist principles are at the core. AI is just another step on this path.

It is very hard for me to think about third- and fourth-order effects, but judging by the trends, the first to be replaced are entry-level positions whose intellectual work is tied directly to a computer, because AI already solves many such tasks better. This means that more experienced and competent specialists are next in line as the technology improves. This is worrying because, in parallel, no one has stopped the classic automation of manual labor, including robotization. So the pressure on the labor market is intensifying from both sides. One could argue that this is offset by a demographic dip, but from my observations the speed of AI development is measured in months, not human generations, so it far outpaces any demographic effects. And whereas companies used to hire new people as a source of new ideas and extra hands, they are now redirecting budgets from hiring to testing and implementing new technologies like AI—again, driven by competition and the fear of being left behind.

I see two directions: using AI as a tool, and the complete replacement of some human operations with multi-agent systems now—and of entire professions in the future. The thing is, this is happening almost invisibly. There are occasional statements from leaders of big companies warning of the coming social impact of AI and saying that society will need to “adapt.” But to an outsider, life looks as usual. Couples walk down the streets with strollers, cars drive by, stores are restocked with fresh products, new iPhones are released. It’s just that people now search for a job not for one or two months, but for six months to a year. Suddenly there is a rise of the gig economy: more Uber drivers, more food-delivery couriers, more freelancing and temporary positions without benefits. Why? Because of the speed of change. We don’t know how to evaluate and adapt so quickly, to change our worldview. Although new technologies have eliminated some professions and created new ones before, I don’t see many new professions emerging in the last couple of years that could compensate for the lost jobs, much less create additional ones.

There is also an asymmetry of information: it seems as though you are not keeping up in skills and lack competencies (the supply-side view), when in fact the demand itself has decreased.

We also like to generalize, thinking, “well, everyone else is working, busy with something, so the problem must be with me.”

There are also nuances here, like the migration of personnel from one industry to another, which raises competition. The integration of AI into the hiring process itself—not just at the search and resume-screening stage, but also in full-fledged interviews with an AI agent—places special demands on how one must now “know how to sell oneself.” And it is right to strive to adapt your skills to the demands of the time; it’s just that there is a limit to your locus of control. Employers will still need highly qualified personnel, whether for the skillful use of AI tools to increase efficiency or simply for rare, fundamental knowledge and the ability to create new solutions based on it. The best thing to do in such a situation is to keep trying while you “hope for the best but prepare for the worst.”

But the same kind of “invisible” changes happening in society, which people feel individually and experience internally, are happening in parallel in business. As I mentioned above, competition forces companies to “keep up” and watch their rivals, which means trying to implement the same methods of optimization and efficiency improvement (if not better ones) and redirecting budgets to research and innovation. It also means adapting company policy and processes to new requirements. For example, there is a shortage of talent that can help a company with AI adoption, so some companies hire specialized consulting firms. Some look for talent in the global pool and become more open to remote work. Some try to restructure their existing teams and create internal startups. And some, lacking resources, may need to raise venture capital to finance the transformation—as those are the rules of this game.