From explicit programming to the synthesis of solutions
For decades, computer science was grounded in a relatively stable principle: humans define the rules, computers execute the instructions, and the result is largely predictable, repeatable, and testable. That logic has not disappeared, but it is no longer the sole dominant model of computation. With the rise of large language models and, more broadly, foundation models, computing is shifting from the strict execution of predefined instructions toward systems that learn from data, generate likely outputs, write code, synthesize content, and interact with users through natural language.
This is not just an incremental improvement in software capability. It is not merely about faster systems or better automation. It is a change in how computation itself is understood and how computational value is produced. Where the central task once consisted in designing the right algorithm with the right sequence of explicit instructions, the emerging task is increasingly to build systems that can interpret intent, work under uncertainty, generalize from examples, and collaborate with humans and tools. That is why it makes sense to describe the current moment as a paradigm shift.
Why this change is deeper than a new software generation
In traditional computing, the machine mainly acted as an executor of logic. In the new phase, the machine also acts as a synthesizer. It not only computes over carefully structured inputs but can also process unstructured information, natural language, images, audio, and complex tasks in a way that is closer to statistical interpretation than to classical rule execution. The shift from deterministic to probabilistic computing is not a minor technical adjustment. It changes the theory, the practice, and even the teaching of computer science.
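The contrast can be sketched in a few lines. This is only an illustration: the service names and the probability scores below are invented, and the weighted sampler stands in for what, in a real system, would be a learned model's output distribution.

```python
import random

def deterministic_route(request: str) -> str:
    # Classical rule execution: the same input always yields the same output.
    rules = {"weather": "forecast-service", "invoice": "billing-service"}
    return rules.get(request, "fallback-service")

def probabilistic_route(request: str, seed=None) -> str:
    # Statistical interpretation: the answer is drawn from a distribution,
    # so different runs may legitimately produce different outputs.
    # (Scores are hypothetical; a real model would compute them from data.)
    rng = random.Random(seed)
    scores = {"forecast-service": 0.7, "billing-service": 0.2, "fallback-service": 0.1}
    services, weights = zip(*scores.items())
    return rng.choices(services, weights=weights, k=1)[0]

# The deterministic path is exactly repeatable; the probabilistic path
# is repeatable only if the source of randomness is pinned.
assert deterministic_route("weather") == deterministic_route("weather")
assert probabilistic_route("weather", seed=0) == probabilistic_route("weather", seed=0)
```

The practical consequence is the one the paragraph above names: testing a deterministic component means checking one expected output, while testing a probabilistic component means reasoning about distributions, seeds, and acceptable ranges of behavior.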
This is already visible in domains where usefulness is concrete and measurable. In weather forecasting, computational biology, software generation, optimization, and large-scale data analysis, machine learning models are no longer just auxiliary components. They are becoming central instruments for knowledge production and decision support. This means that computational power is no longer exhausted by the execution of a known algorithm. It increasingly includes the ability to generate hypotheses, propose options, test alternatives, and synthesize responses under complex conditions.
A new form of programming
One of the clearest indicators of this shift is that the role of the programmer is changing. Programming is no longer only the act of writing lines of code. It is increasingly the practice of expressing intent, defining constraints, providing examples, evaluating results, and checking the reliability of model outputs. Human oversight does not disappear, but it changes form. Instead of always specifying every step of the solution, the developer often defines the framework within which the model generates candidate solutions.
This creates new disciplines and new methods. Model evaluation, correctness checking, orchestration of specialized tools, retrieval-based architectures, risk management, and verification of generated code are becoming central parts of the computational workflow. Just as the web created new professions and new forms of technical expertise, generative AI is now producing a new scientific and professional ecology around computing.
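A minimal sketch of this workflow, under simplifying assumptions: the developer expresses intent as input/output examples, and candidate implementations are accepted only if they satisfy all of them. The two candidates below are hand-written stand-ins for model outputs; in a real pipeline they would be generated by a model and executed in a sandbox.

```python
# Intent expressed as examples rather than as an explicit algorithm:
# "given a list, return it in ascending order".
SPEC_EXAMPLES = [([3, 1, 2], [1, 2, 3]), ([], []), ([5], [5])]

def candidate_a(xs):
    # Plausible-looking but wrong: reverses instead of sorting.
    return list(reversed(xs))

def candidate_b(xs):
    # Correct: satisfies every example in the spec.
    return sorted(xs)

def verify(candidate):
    # Accept a candidate only if it matches all developer-provided examples.
    return all(candidate(inp) == out for inp, out in SPEC_EXAMPLES)

accepted = [f.__name__ for f in (candidate_a, candidate_b) if verify(f)]
print(accepted)  # prints ['candidate_b']
```

The division of labor is the point: the human defines the specification and the acceptance check; the generator, whatever it is, proposes candidates within that framework.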
New risks and new demands for reliability
Every paradigm shift comes with new forms of uncertainty. The power of large language models is accompanied by well-known issues such as hallucinations, instability, uneven reliability, and limited interpretability. This means the new computer science cannot rely only on persuasive outputs or apparent usefulness. It requires new testing methods, stronger safety procedures, more robust documentation practices, and clearer accountability mechanisms.
In high risk applications such as medicine, finance, critical infrastructure, or public administration, it is not enough for a model to produce a plausible answer. That answer must also be verifiably correct, safe, fair, and aligned with legal and institutional requirements. This is where the current transition reveals itself not merely as a technical development, but as a methodological and governance challenge.
The political dimension of the new computational era
If computer science is changing this deeply, then the question of who controls its core infrastructure becomes unavoidable. The next generation of computing should not depend entirely on a small number of closed platforms and opaque corporate ecosystems. It needs open standards, open source software, transparent evaluation, and strong open weight ecosystems so that universities, research centers, public institutions, and smaller firms can participate on fairer terms.
That matters not only for innovation, but also for democracy, scientific accountability, and digital autonomy. If this paradigm shift leads only to greater concentration of technical power, then computational progress will come with institutional dependency. But if it is linked to open infrastructure and shared technological commons, it can support a more creative, more participatory, and more resilient future for computer science.
Artificial intelligence, then, is not simply a new phase of automation. It is a transition toward a new model of computation, a new model of programming, and a new model for organizing knowledge. That is why it is reasonable to say that computer science is entering a genuine paradigm shift.
Important sources:
Communications of the ACM, “Does AI Now Represent a Paradigm Shift?”. This is the core conceptual source because it argues directly that super-large models are changing how computing work is done and how sophisticated requests are handled: https://cacm.acm.org/opinion/does-ai-now-represent-a-paradigm-shift/
Stanford HAI, “2025 AI Index Report”. It is relevant because it documents lower inference costs, better efficiency, and the broad diffusion of foundation models across research and industry: https://hai.stanford.edu/ai-index/2025-ai-index-report
Nature, “Probabilistic weather forecasting with machine learning”. It shows that the new computational approach extends beyond language into demanding scientific applications such as weather prediction: https://www.nature.com/articles/s41586-024-08252-9
Nature, “Highly accurate protein structure prediction with AlphaFold”. It is a landmark case of AI transforming scientific discovery in a major field of computational biology: https://www.nature.com/articles/s41586-021-03819-2
Google DeepMind, “AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms”. It is relevant because it illustrates the move from code generation to the use of models for algorithm design and tool improvement: https://deepmind.google/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/
Engineering at Meta, “Meta’s Infrastructure Evolution and the Advent of AI”. It is important because it argues that the next computing stack requires open standards, open source software, and open infrastructure: https://engineering.fb.com/2025/09/29/data-infrastructure/metas-infrastructure-evolution-and-the-advent-of-ai/