How AI will change everything: perspective from a physicist ranked among the world's top 2% most-cited scientists
What role does AI play in modern science and how might it change in the future? We asked physicist Victor Asadchy.
It’s difficult to get used to the pace at which AI masters new areas of intellectual activity. Some of these developments still seem like curiosities—for example, AI’s ability to play 'Mafia' and successfully deceive and convince other players.
News of AI’s successes in scientific fields is far more impressive. On September 22, a preprint appeared on arXiv (hosted by Cornell University) describing a new method for evaluating the mathematical abilities of LLMs. Its creators, Professor Moran Feldman of the University of Haifa (Israel) and Amin Karbasi, research director at Cisco Foundation AI, called it the "Gödel test," after the Austrian logician and mathematician Kurt Gödel. The essence of the Gödel test is that the AI is asked to solve not previously solved problems but entirely new, albeit relatively simple, ones. In effect, the LLM must independently formulate and verify new ideas, essentially making what we might call "discoveries."
For the experiment with GPT-5, the authors selected five recent problems in combinatorial optimization. For three of them, the LLM proposed solutions that the scientists evaluated as "almost correct": the key idea matched what the solution required, but the AI could not develop the reasoning into a complete, verifiable proof. GPT-5 failed to solve the remaining two problems.
The idea that AI will begin to practice science as a genuine scientist, capable of making discoveries, does not seem at all far-fetched to the creators of LLMs. A couple of days after the Feldman and Karbasi preprint was published, OpenAI CEO Sam Altman proposed a new criterion for AGI (Artificial General Intelligence: AI matching or exceeding human capabilities in virtually all cognitive tasks) in an interview with Axel Springer Global Reporters. According to Altman, an AI (say, GPT-8) could be considered AGI if it solved the problem of quantum gravity and could explain to humans how it did so. Altman also expects full-fledged AGI to emerge by 2030.

According to Altman, solving the problem of quantum gravity would be comparable to Albert Einstein’s general theory of relativity, "one of the most beautiful things humanity has ever created." British-Israeli theoretical physicist and quantum computing pioneer David Deutsch, Altman’s interlocutor, agreed that solving quantum gravity could serve as the "ultimate test for AGI."
How is AI already being used by working researchers? Could it practice science on par with human scientists, or even surpass them, in the foreseeable future? Will scientists' roles be reduced to composing prompts for AI? We put these questions to Victor Asadchy, a Belarusian physicist at Finland’s Aalto University who was ranked among the world’s top 2% most-cited scientists in the 2024 Stanford University and Elsevier ranking.
—Let’s assume that AI actually learns to prove unsolved mathematical hypotheses, even if they’re not complex. Does this mean it’s capable of independently engaging in science? Or is this too narrow a criterion?
—I believe such a test is just one criterion for evaluating AI’s ability to do science. Science isn’t limited to solving pre-formulated (and purely theoretical) problems. The scientific approach typically includes five elements:

1. observation;
2. problem formulation;
3. hypothesis proposition;
4. experiment;
5. data analysis and theory construction.
For example, people have observed rainbows in the sky since ancient times. At some point, they wanted to know why rainbows appear and why they are always opposite the Sun (problem formulation). Over a thousand years ago in the Middle East, a hypothesis emerged that rainbows arise from the refraction of sunlight in water droplets in the atmosphere. In the seventeenth century, Descartes and Newton conducted experimental research that established the nature of rainbows and built the corresponding theory.
The Gödel test evaluates only two of these five elements: hypothesis proposition, and data analysis with subsequent theory construction.
Which stages of the scientific method can AI currently handle? Let’s go through them in order, taking an example from a still-unexplained area: ball lightning.

Physicists have proposed many theoretical models of the nature of ball lightning, but there is still no experimental data confirming any particular one. AI is not yet capable of making observations, let alone conducting experiments to generate ball lightning. For that, it would need to be connected to "hardware," for example, integrated with a high-tech robot. Such a setup could in principle replace scientists, but robotics is still quite far from manipulations of that complexity.
However, problem formulation, hypothesis proposition, and theory construction are not insurmountable for AI; it can already handle them to varying degrees. I wouldn’t be surprised if it catches up with scientists in these areas within the next five years.
An example from my own teaching experience: AI now excels at creating assignments for undergraduate students. It can already compose, and correctly solve, simple problems that no one has previously formulated and that are not publicly available anywhere.
—Can AI accelerate scientific development? Could it trigger a breakthrough comparable to the scientific and technological revolutions of the mid-20th century? Perhaps AI itself will lead us into the technological singularity predicted by von Neumann?
—I believe it will accelerate science significantly. Not only will scientific discoveries appear more rapidly, but (especially) new ways of applying them will emerge even faster. Technologies will surprise us more and more.
I certainly acknowledge the possibility of a technological singularity.
—Do you think LLM-based AI assistants will be able to engage in science on par with human scientists in the foreseeable future? Or perhaps even surpass them?
—I think so. In applied physics, something similar has already happened. Modern computer modeling easily solves problems that scientists cannot solve analytically, without computational assistance. As a result, computer modeling is now actively used by scientists, including for forming hypotheses and collecting data.
—Could the development of AI technologies reduce demand for specialists in fields like fundamental physics in the near future? Do you see AI as a threat to scientists, who might be replaced by more compliant and budget-friendly "artificial scientific associates"?
—Probably not. As I mentioned earlier, scientific research typically involves five separate components, and AI won’t achieve full coverage of all of them anytime soon. Long before it does, many other professions are likely to disappear first.
At the current stage of AI development, I don’t foresee a complete displacement of humans from science. Most likely, scientists will actively use AI for building theories and conducting research. But as soon as the first true artificial superintelligence appears, everything could change quickly.

—Could scientists' roles eventually be reduced to correctly formulating task prompts for AI? This seems more difficult in physics than in mathematics, since physics requires practical experiments. But could this happen in theoretical physics, for instance?
—Saying scientists will be reduced to merely formulating prompts is too bold a statement. I would say that correctly formulating prompts will be an important skill for scientists, but far from the only one.
Theoretical physics is still tied to phenomena that are verified by experiments or observations, which distinguishes it from pure mathematics. Therefore, the replacement of even theoretical physicists by artificial intelligence isn’t anticipated in the near future.
—How do you, as a scientist, perceive AI assistants at their current stage of development? Do you see them as useful tools? Or do they pose some threat to people in your profession?
—I view them very positively. They are already highly useful tools, although many scientists have not yet realized this. Even now, AI can be applied, with varying probabilities of success, to a range of research tasks, and I think that range will only grow over time.
Of course, there’s also a threat from AI for people in my profession. Scientists who don’t adapt to AI’s existence may find themselves at a disadvantage in the future.
There’s also a threat from another direction. Scientists who rely too heavily on AI in their work instead of improving themselves with its help could soon be replaced by that very AI.