Artificial intelligence (AI) is like a soulless automaton: powerful, fast, tireless… but blind without the guidance of a human mind. This vision, which echoes in the imagination of great science fiction authors, perfectly describes the challenge of our time.
In an age where machines learn, write, analyze, and suggest, true intelligence remains the ability to ask the right questions, interpret weak signals, and make sound decisions.
And this is precisely where the concept of “human in the loop” comes into play: a model that places humans at the center of the digital revolution, not as passive spectators or simple proofreaders of algorithm-generated texts, but as active collaborators, interpreters, and decision-makers.
In the “human in the loop” model, an AI-based system can generate suggestions, analyze data, propose solutions, or automate tasks. However, it is always the human being who checks, approves, corrects, and decides whether to accept or modify what the machine produces.
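To make the pattern concrete, here is a minimal, purely illustrative sketch in Python. The names (generate_draft, review, ReviewDecision) are hypothetical stand-ins, not any real product or API; the only point is the shape of the loop: the machine proposes, the human checks, corrects, and decides.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewDecision:
    approved: bool
    final_text: str

def generate_draft(task: str) -> str:
    # Stand-in for the AI step: it produces a suggestion, not a decision.
    return f"Draft produced by the machine for: {task}"

def review(draft: str) -> ReviewDecision:
    # Stand-in for the human expert, who in reality reads, corrects,
    # or rejects the draft; here we simply simulate an approval with edits.
    return ReviewDecision(approved=True,
                          final_text=draft + " (checked and corrected by a human)")

def human_in_the_loop(task: str) -> Optional[str]:
    draft = generate_draft(task)   # the machine proposes
    decision = review(draft)       # the human verifies and decides
    return decision.final_text if decision.approved else None

print(human_in_the_loop("classify this patent application"))
```

Nothing leaves the loop without the reviewer's decision; that, and not any particular technology, is the essence of the pattern.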
The principle known as “human in the loop” is not just an operational guideline: it is a cultural and strategic vision that puts people at the center, despite (or perhaps thanks to) automation. It’s an invitation not to blindly delegate decisions to the algorithm, but to maintain control, responsibility, and a critical eye on every output generated by AI.
As is well known, AI systems are not immune to errors, including so-called “hallucinations”: situations in which the system produces false or unverified information and presents it convincingly as fact. Unlike random errors, these hallucinations arise because the model, working from probabilities learned from its training text, generates content that is plausible but unfounded.
In the world of intellectual property—a field where law, innovation, and strategy intertwine—the adoption of AI opens up exciting prospects. But it’s clear that no machine, no matter how sophisticated, can replace the expertise, ethics, and vision of those who live this profession responsibly every day.
To cite just one example of the advantageous use of AI tools under the human-in-the-loop approach: starting in May, the European Patent Office will adopt tools for transcribing the minutes of hearings with attorneys (the so-called Oral Proceedings) (source: EPO website, 08.04.2025, “Minutes of oral proceedings to be prepared with the assistance of AI” | epo.org), and it has for some time been using automatic systems to classify patent applications, under the careful supervision of its Examiners.
Furthermore, the same European office has launched the LIP (Legal Interactive Platform), an AI platform that allows professionals to search rulings, guidelines, and other legal sources in natural language, with remarkable speed and accuracy.
The sustainable future of innovation lies in this balance between automation and human discernment.
At the heart of the technological revolution we are experiencing, artificial intelligence has already acquired a central role. But in this scenario where machines become increasingly capable, the awareness that true value still lies in human beings is also growing.
We live in an era of unprecedented technological acceleration, where digital tools and innovations multiply every day.
In this context, the main challenge for professionals and businesses is not so much owning the technology, but staying up-to-date and knowing how to use it consciously and creatively.
The market today offers a vast range of software labelled as “AI-based”, but in reality many of these products are nothing more than customized interfaces built on existing language models such as ChatGPT or Gemini (formerly Bard). These are vertical solutions, certainly useful, but they do not represent true autonomous innovation.
AI is extremely skilled at processing data, finding correlations, and suggesting connections. But it has no intuition, no experience, no sensitivity. It doesn’t know the cultural context of an invention, it can’t read between the lines of a legal expression, nor can it intuit the strategic interests of a company.
For this reason, even in tasks where AI excels—such as patent research or document drafting—human supervision is essential.
It’s not simply about “controlling” AI: it’s about collaborating with it, using its power to enhance our capabilities, not replace them. AI can speed things up, it can make suggestions, but only humans can decide what is correct, what is relevant, what is right.
All this implies a profound change, even in the training of new generations of intellectual property professionals. If we want the “human in the loop” paradigm to be real and sustainable, we need to train experts capable of using AI competently, without ever losing sight of the larger picture.
A dual education is needed: on the one hand, a solid understanding of AI tools, their limitations, and the logic that governs them. On the other hand, a thorough cultural and legal education that allows one to make sense of data, understand the mechanisms of patent protection, and above all, exercise critical judgment.
It’s not enough to learn how to “use a prompt”: you need to understand why you make a certain strategic choice, how to truly protect an idea, and what the limits of human intervention are in creating and defending innovation.
The biggest risk today is not AI. The real danger is the loss of responsibility, the belief that a generative system can replace our role, our ethics, our intuition. To avoid this, we must build—starting from university and professional education and training—a culture of attention, verification, and interpretation.
It is precisely in this space between technology and humanity that the future of intellectual property can be born. A future where AI becomes an ally, but never a judge; a collaborator, but never a decision-maker.
The “human in the loop” paradigm reminds us that humanity is and must be at the center of law, innovation, and the protection of creativity.
Not out of nostalgia, but out of necessity.
And today’s task—for professionals, trainers, and institutions—is precisely this: to train people capable of guiding AI, rather than being subjected to it. Because true innovation is not that which replaces us, but that which completes us.