Have we run out of good ideas?
#161 - We’ve never had more resources or technology, and yet every breakthrough is harder won
Hello friends,
I hope you had a great week.
I was reading the book Abundance recently and hit a passage that stopped me. It quoted a 2020 study from Stanford and MIT with the irresistible title “Are Ideas Getting Harder to Find?” The economists behind it didn’t hedge their answer: yes.
The study looked across industries (medicine, agriculture, tech) and found the same pattern. Research effort keeps growing, but the breakthroughs per unit of effort are shrinking. In heart disease, clinical trials have multiplied while gains in lives saved have slowed. Cancer research has absorbed billions, spanned five decades, and carried the backing of multiple U.S. presidents, yet the goalposts keep moving.
It’s not that we’ve stopped making progress. It’s that each step forward costs more. That simple fact raises a harder question: if ideas are getting harder to find, what does that mean for the future pace of progress?
The productivity paradox of research
The headline finding from the paper referenced in Abundance is simple enough: we’re still making progress, but the productivity of research (i.e. the return we get on each new dollar, hour, or brain working on a problem) is falling.
In 1965, Gordon Moore observed that the number of transistors on a leading-edge chip had been doubling on a regular cadence (about every two years) and suggested this trend could continue. More transistors per chip typically deliver more compute for the same cost. But it’s not a law of physics; it’s an industry commitment that has been sustained by huge investments in science, engineering, and capital equipment.
Why does that matter here? Because keeping that two-year doubling going has gotten dramatically harder. Shrinking features into the nanometer range means fighting quantum effects, using ever more complex lithography, and coordinating massive, specialized teams. The study’s estimate is stark: to maintain roughly the same pace of improvement in semiconductors as in the 1970s, the number of researchers working on the problem is now about 18× larger. The output line (performance delivered to end users) kept rising at a familiar slope; the input line (people, money, tooling, complexity) rose much faster. That gap is the paradox in one picture.
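A back-of-the-envelope version of that arithmetic (my own sketch using the study's 18× figure, not the authors' exact model) treats research productivity as output growth per researcher. If the same pace of improvement now takes roughly 18 times the researchers, productivity has fallen roughly 18-fold:

```python
# Illustrative only: normalize the pace of improvement to 1.0 in both eras,
# and use the study's ~18x estimate for the semiconductor researcher count.
growth = 1.0              # same doubling cadence, held constant
researchers_1970s = 1.0   # normalized baseline
researchers_today = 18.0  # the study's rough estimate

productivity_1970s = growth / researchers_1970s
productivity_today = growth / researchers_today

decline_factor = productivity_1970s / productivity_today
print(decline_factor)  # 18.0 -- same output, 18x the input
```

The point of the toy calculation is that a flat output line plus a steeply rising input line is, by definition, a collapse in the ratio between them.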
Medicine shows the same dynamic. In heart disease, a surge in trials hasn’t been matched by equivalent gains in patient outcomes. Agricultural research is another example: decades of effort produced rapid yield increases through the mid-20th century, but yield growth in many developed countries has since flattened, despite more scientists and more money being thrown at the problem.
This isn’t about bad science or lazy scientists. It’s more about a shift in the ratio between input and output. Early breakthroughs often target obvious gaps or “low-hanging fruit”: think penicillin or the first transistors. Once those gains are made, the problems left on the table are harder, more complex, and more resistant to simple fixes.
You start hitting diminishing returns.
The paradox is that we’re living in a time of unprecedented resources, talent, and computational power, yet the marginal cost of each new step forward keeps rising. Understanding why is key to figuring out if this slowdown is inevitable — or if it’s something we can reverse.
When ambition meets complexity
Many of the breakthroughs we celebrate from the past came when the problem space was still relatively simple. Early antibiotics were discovered when the mechanisms of bacterial infection were still poorly understood, and before widespread resistance had evolved. Early spaceflight succeeded because the easiest orbital and lunar missions were still on the table. The first wave of computing gains came from shrinking transistors and optimizing straightforward architectures.
As fields mature, the nature of the problems changes. What’s left tends to be harder: more variables, more unknowns, more interdependencies. In medicine, the fight against infectious diseases yielded rapid, visible results; the fight against chronic and degenerative diseases like Alzheimer’s and most cancers is slower and messier. The enemy is less obvious, the biology more intricate, and the pathways to a cure more tangled.
This is true in energy as well. Replacing coal with natural gas or adding wind and solar to the grid was relatively straightforward compared to the current challenge of large-scale storage, transmission upgrades, and balancing intermittent supply with constant demand.
Complexity doesn’t just slow down timelines: it changes the type of ambition needed. Big goals are still worth setting, but they require deeper patience, broader coalitions, and more willingness to live with partial wins along the way. That’s why the rhetoric of “moonshots” can mislead. The original Moon landing was a monumental achievement, but it targeted a problem whose scope was finite and well-defined. Most of today’s scientific frontiers are not like that.
The institutional load
The way we organize science has changed. The lone researcher or small lab making a major breakthrough still exists, but most advances now come from large, distributed teams with multiple layers of oversight. This is partly by necessity — the scale of modern research demands specialized skills, expensive infrastructure, and collaboration across institutions and countries. But the coordination cost is real.
In the mid-20th century, projects like the Manhattan Project or Apollo program concentrated talent, funding, and decision-making in relatively flat structures. They moved fast, even by today’s standards. The Manhattan Project went from concept to a functioning atomic weapon in just over three years. Apollo took less than a decade from JFK’s speech to landing on the Moon.
Today, regulatory environments are more demanding, funding is fragmented, and intellectual property rules complicate information sharing. Clinical trials require exhaustive documentation and multi-year approval cycles. Large research grants often come with reporting requirements that consume substantial time and attention. The goal is to ensure safety, accountability, and rigor — all of which matter — but they also slow the loop between idea and result.
The irony is that many of these safeguards arose because earlier breakthroughs happened too quickly, sometimes without enough thought to unintended consequences. We built layers of process to prevent harm, but each layer adds friction. That friction might not stop innovation outright, but it changes its pace and character. Instead of quick leaps, we get careful, incremental steps.
Technology’s double edge
Tools like AI, advanced simulation, and high-throughput experimentation promise to speed up research. In many ways, they already have. Sequencing the human genome took over a decade and billions of dollars; today it can be done in hours for a few hundred dollars. AI models can scan vast datasets, suggest molecular structures, or predict protein folding with accuracy that would have been unthinkable twenty years ago.
When the tools improve, the frontier shifts. Cheaper genome sequencing doesn’t end the challenge of curing genetic diseases; it opens the door to new questions that are harder to answer. Protein-folding breakthroughs accelerate drug discovery, but they also surface more complex hypotheses that require years of testing.
In effect, technology raises the baseline of what’s possible, but it also raises the baseline of what’s expected. The problems left to solve are further out, more multi-dimensional, and often harder to verify experimentally. AI might cut the time to generate an idea, but testing that idea in the real world still runs into the same biological, physical, or logistical constraints as before.
The risk is assuming that better tools automatically mean faster progress. Sometimes they do; sometimes they simply make it possible to attempt problems we would have ignored as too difficult. That’s valuable, but it can make the experience of progress feel slower, because the big wins take longer to land.
Rethinking progress
Part of the “slowing down” story is about measurement. We tend to focus on transformative breakthroughs, the kind that shift an entire field, and discount incremental advances. In medicine, a treatment that extends average survival by six months might not grab headlines, but for patients, it matters. In energy, a one-percent improvement in efficiency at scale can have a larger impact than a single high-profile project.
The problem is that public and political attention gravitates toward moonshots. That creates a mismatch between what science is producing and what people perceive as meaningful progress. When the output is mostly incremental, the narrative shifts to stagnation, even if the cumulative effect is significant.
There are also fields where the slowdown is less visible. AI has advanced faster than most expected, renewable energy costs have fallen dramatically, and battery technology is improving steadily. These areas remind us that “harder to find” doesn’t mean “impossible to find”, but they also tend to be the exceptions, not the rule.
If the underlying reality is that problems are getting more complex and expensive to solve, it’s worth asking whether our expectations need to adjust. Measuring progress in smaller, more consistent gains might be more accurate than waiting for the next once-in-a-generation breakthrough. It doesn’t have the same political appeal, but it’s closer to how science actually works.
How do we define success?
The evidence that ideas are getting harder to find is hard to ignore. More people, more money, and more technology are being thrown at problems, yet the marginal return on that effort is shrinking. It’s not a failure of talent or will; it’s the natural result of tackling what’s left after decades of easier wins.
That doesn’t mean the future is bleak. It means that big leaps will demand even more focus, resources, and patience than in the past. Some fields will still deliver rapid, visible change, but many will move in smaller, steadier increments. AI might shift the balance if it can compress the cycle from hypothesis to proof, but it’s just as likely to push the frontier into even more complex territory.
The core of the issue is how we measure and communicate success: as incremental progress becomes more important and moonshot step-changes harder to achieve, how do we change the way we narrate and judge “success”?
The challenge for policymakers, investors, and scientists is to accept the reality of diminishing returns without lowering ambition. The pace may change, but the need for bold goals doesn’t. If anything, it becomes more important, because the harder the ideas are to find, the more deliberate we have to be about going after them.
Have a great weekend,
Giovanni