Why Disruptive Science Is Losing Ground

Dipak Kurmi

In an age where science enjoys unprecedented resources, advanced technology, and a vast global community of researchers, one might expect an unending cascade of transformative discoveries. The modern scientific landscape is equipped with supercomputers, particle accelerators, genetic engineering tools, and artificial intelligence systems that far exceed anything available in the eras that brought forth electricity, penicillin, or the internet. Yet, paradoxically, a disquieting concern has been gaining ground among policymakers, academics, and innovators: are truly revolutionary scientific breakthroughs becoming harder to achieve?

Mounting evidence suggests that despite the surge in research spending and output, science is experiencing diminishing returns in its capacity to disrupt and redefine our understanding of the world. The golden age of seismic innovations may be giving way to an era of incremental, cautious progress. This trend was brought sharply into focus by a 2023 study led by Russell Funk and his colleagues, which analysed millions of scientific papers and patents. Their findings were sobering: disruptive discoveries — the kind that change the trajectory of entire disciplines — have been steadily declining over the last few decades. While the volume of research has exploded, its relative originality and transformative impact appear to be waning.

Historically, disruptive science has been the engine of human progress. The germ theory of disease overthrew centuries of medical dogma, quantum mechanics redefined the laws of physics, and the internet reshaped the global economy and human interaction. Albert Einstein’s theory of relativity not only expanded upon Newtonian mechanics but fundamentally altered our concepts of space and time. Such advances do more than enrich knowledge; they reshape the very foundations of education, industry, and culture. If Funk’s analysis is correct, the frequency of such game-changing work is shrinking — a shift with potentially far-reaching consequences for technological progress, economic development, and humanity’s capacity to confront existential challenges.

This slowdown is reflected in other troubling indicators. Across numerous sectors — agriculture, medicine, computing — the resources needed to achieve breakthroughs have skyrocketed. In the semiconductor industry, for example, sustaining the pace predicted by Moore’s Law now demands 18 times as many researchers as it did in the 1970s. Similarly, the number of new drugs approved per billion dollars invested in pharmaceutical research has been in steady decline since 1950. This phenomenon, sometimes described as the “cost disease of research,” suggests that science is having to work harder and spend more for smaller gains.

The reasons behind this stagnation are complex and interwoven. One major factor is the bureaucratisation of science. In the mid-20th century, researchers often enjoyed greater intellectual freedom. Francis Crick, for instance, was able to set aside his formal doctoral work for months while he and James Watson pursued the double-helix model of DNA — a move that today would likely jeopardise careers and funding. By contrast, modern scientists often spend less than 20 percent of their time on actual research. Administrative duties, grant applications, compliance paperwork, and teaching obligations dominate their schedules, leaving limited scope for risky or exploratory work.

The “publish or perish” culture further exacerbates the problem. The race to maintain a steady flow of papers has encouraged “salami slicing” — breaking research into smaller, less substantial publications rather than aiming for bold, integrative studies. Between 1996 and 2023, the number of papers produced per researcher nearly doubled. Yet research has shown an inverse relationship between a scientist’s publication rate and the disruptiveness of their work. High output may correlate with professional security, but it rarely aligns with paradigm-shifting discoveries.

Ironically, technological progress itself may be part of the problem. While 17th-century pioneers like Robert Boyle could conduct revolutionary experiments in a modest townhouse, today’s frontiers in physics, genomics, or climate modelling require massive, expensive infrastructure. This imposes not only financial barriers but also delays, as researchers must navigate the complexities of accessing and maintaining these facilities. The time required for a scientist to reach the cutting edge of their field — to acquire the necessary expertise, tools, and institutional approval — has lengthened considerably. Specialisation, while necessary, can also narrow vision, making it harder to see connections across disciplines where disruptive potential often lies.

Even when science achieves remarkable feats, they may not register as disruptive according to conventional metrics. The case of AlphaFold, the AI system that cracked the decades-old protein-folding problem, is telling. While the breakthrough holds enormous significance for biology and medicine, it scored low on the “disruptiveness” scale used in bibliometric analyses because it built on an established body of knowledge rather than supplanting it. This points to another issue: the limitations of how we measure scientific impact. Citations, impact factors, and h-indices tend to reward incremental contributions that accumulate quickly rather than deep innovations whose significance may take years to be recognised.
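For readers curious how a landmark result can still score low on such a scale, the sketch below shows a simplified, CD-index-style calculation of the kind used in bibliometric studies such as Funk's: a work counts as disruptive when later papers cite it while ignoring its predecessors, and as consolidating when later papers cite it together with the prior literature. The function name, the toy citation sets, and the exact normalisation are illustrative assumptions for this article, not the study's actual code or data.

```python
def disruption_score(citers_of_focal, citers_of_references):
    """Toy CD-index-style score in [-1, 1]: close to +1 when later work
    cites the focal paper but not its references (disruptive), close to
    -1 when later work cites the focal paper alongside its references
    (consolidating)."""
    focal = set(citers_of_focal)       # later works citing the focal paper
    refs = set(citers_of_references)   # later works citing the focal paper's references
    n_i = len(focal - refs)            # cite the focal paper only
    n_j = len(focal & refs)            # cite both the focal paper and its references
    n_k = len(refs - focal)            # cite the references only
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0

# A consolidating pattern like AlphaFold's: later papers cite it together
# with the earlier protein-structure literature, pulling the score toward -1.
print(disruption_score(
    citers_of_focal={"p1", "p2", "p3"},
    citers_of_references={"p1", "p2", "p3", "p4"},
))  # -0.75
```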

The fixation on metrics has reshaped research priorities. When career advancement depends heavily on citation counts, journal prestige, and grant success rates, scientists may avoid unconventional ideas that could fail in the short term but succeed spectacularly in the long term. This risk aversion, compounded by institutional conservatism, can stifle the kind of intellectual audacity that historically produced radical change.

Yet, the picture is not uniformly bleak. Innovative funding models are emerging to encourage bold, high-risk research. The Covid-19 pandemic demonstrated how rapid-response grants could mobilise expertise in real time, leading to lifesaving vaccines and treatments. Countries like Germany and the United Kingdom have launched dedicated innovation agencies aimed at supporting risky, early-stage projects that traditional peer review might reject. In perhaps the most radical experiment, New Zealand has introduced grant lotteries, where funding is allocated randomly among qualified proposals to reduce systemic bias and introduce unpredictability into the selection process.

Another promising insight comes from the observation that smaller research teams tend to produce more disruptive work. With fewer coordination demands and greater autonomy, these teams can pursue unconventional approaches more freely. Moreover, diversity — in terms of gender, disciplinary background, and geographical representation — has been shown to boost creativity and broaden the range of problems tackled. Interestingly, some studies suggest that in-person collaboration remains more conducive to innovation than remote work, potentially because it allows for the spontaneous exchange of ideas that formal meetings often stifle.

A further possibility is that disruptive research is still being produced but is going unrecognised. Shorter attention spans, automated search algorithms, and herding effects in academic publishing may cause important work to be overlooked for years or even decades. This phenomenon, known as “sleeping beauties” in science, has historical precedents: Gregor Mendel’s foundational work on genetics lay dormant for decades before its significance was realised. If similar patterns hold today, the apparent decline in disruption could be more about visibility than actual scarcity.

Addressing the innovation slowdown requires more than simply increasing funding. It calls for a cultural and structural reorientation of the scientific enterprise. We must value depth over quantity, originality over short-term impact, and curiosity over conformity. Funding bodies could reward long-term, high-risk projects with flexible timelines rather than demanding constant measurable progress. Universities and research institutions could streamline administrative burdens and protect time for deep, uninterrupted thinking. Metrics could be reformed to recognise unconventional achievements and interdisciplinary work.

Equally important is cultivating an environment where failure is seen as an essential part of the process rather than a career-ending stigma. The history of science is replete with false starts, blind alleys, and experiments that “failed” in their immediate aims but seeded future breakthroughs. The spirit of exploration that fuelled past revolutions in science must be protected from the pressures of managerial efficiency and bureaucratic oversight.

The challenge before us is not to lament a lost golden age but to create the conditions for a new one. Scientific revolutions rarely arrive on schedule, but they flourish where freedom, diversity, risk-taking, and patience intersect. If the warnings of Funk and others are heeded, this period of slowdown could become a turning point rather than a decline. By acknowledging the problem, rethinking how we measure and reward progress, and making room for the unpredictable, we may yet rekindle the disruptive spirit that has propelled humanity forward.

As Russell Funk has aptly observed, recognising the slowdown is not a threat to science; it is the first step toward restoring its soul. The question is whether we will have the courage to make the changes necessary for that restoration — and whether, when the next great leap arrives, we will be ready to see it for what it truly is. 
(The writer can be reached at dipakkurmiglpltd@gmail.com)
 


