The New "Millionaire Programming": Throwing Generative AI at the Problem
Introduction: The “Millionaire” Philosophy
In the Japanese software engineering community, there is a concept known as “Millionaire Programming” (富豪的プログラミング).
The term was originally coined by UI researcher Toshiyuki Masui, known for his work on predictive text input and user interfaces, in an essay on spending computing resources freely. In English, the closest equivalent to his philosophy is the idiom: “throwing hardware at the problem.”
Traditionally, this approach prioritized development speed over runtime efficiency. Instead of spending weeks optimizing an algorithm to save 100MB of RAM, a “millionaire” programmer would simply utilize the abundance of modern hardware. We accepted that “hardware is cheap, but programmers are expensive,” and we solved problems by spending machine resources lavishly.
However, with the advent of the AI era, this concept is undergoing a fundamental transformation.
We are moving from throwing hardware resources at performance bottlenecks to throwing code generation effort at architectural uncertainty.
In an age where AI can instantly generate ten different implementation patterns, we have acquired a level of “abundance” that allows us to solve problems by spending generative cycles.
Old School: Throwing Hardware at the Problem
The traditional “Millionaire” approach was a strategy of substitution.
We utilized garbage collection, heavy frameworks, and verbose data structures. We didn’t mind if the code was resource-hungry, as long as it was robust and easy to write. We used the “wealth” of Moore’s Law to bypass the “poverty” of human time constraints.
The Constraint: Even with this approach, we were limited by human throughput. We could throw hardware at the code, but we still had to write the code ourselves.
New School: Throwing AI at the Problem
In the AI era, the definition of “wealth” has changed. We are no longer just resource-rich; we are generation-rich.
We can now afford to be “wasteful” with code generation. We can “throw” tokens and inference costs at a problem to explore the solution space.
The Shift in Workflow
1. Simultaneous Parallel Generation
Previously, comparing three different architectural approaches (e.g., Redux vs. Context API vs. Zustand) required significant reading and mental modeling. Now, I can simply ask an LLM to implement the feature using all three patterns.
I am throwing generative cycles at my own uncertainty. I can look at the actual code for all three approaches side-by-side before writing a single line of my own. I spend the AI’s “effort” to buy my own “certainty.”
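As a rough sketch of what that looks like in practice (assuming the official OpenAI Node SDK and a made-up shopping-cart feature spec; any LLM client with a chat-completion call would work the same way), I fire the same request once per pattern and read the results side by side:

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical feature spec and the three patterns I want to compare.
const featureSpec =
  "A shopping-cart widget: add/remove items, show a running total.";
const patterns = ["Redux Toolkit", "React Context + useReducer", "Zustand"];

// One prompt per architectural pattern, fired in parallel.
async function generateVariant(pattern: string): Promise<string> {
  const res = await client.chat.completions.create({
    model: "gpt-4o", // any capable model will do; this is just an example
    messages: [
      {
        role: "user",
        content: `Implement the following feature in React using ${pattern}:\n${featureSpec}`,
      },
    ],
  });
  return res.choices[0].message.content ?? "";
}

const variants = await Promise.all(patterns.map(generateVariant));

// Dump the three implementations side by side for human review.
variants.forEach((code, i) => {
  console.log(`\n===== ${patterns[i]} =====\n${code}`);
});
```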
2. The “Disposable Prototype” Pattern
In the past, “Fail Fast” was a slogan. Now, “Fail Parallel” is a reality. I often ask AI to write a quick, dirty script just to validate an idea, such as the sketch below. If it fails, I discard it immediately. The cost of throwing away code has dropped to near zero because the cost of generating it is near zero.
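Concretely, a disposable prototype is often nothing more than a throwaway check like this (a made-up example: can we afford to JSON.parse a multi-megabyte payload on every request?). It gets generated, run once, and deleted:

```typescript
// Throwaway check: how long does parsing a multi-megabyte JSON payload take?
// Generated on demand, run once, then discarded.
const payload = JSON.stringify(
  Array.from({ length: 500_000 }, (_, i) => ({
    id: i,
    name: `item-${i}`,
    price: i * 0.01,
  }))
);

console.log(`Payload size: ${(payload.length / 1024 / 1024).toFixed(1)} MB`);

const start = performance.now();
JSON.parse(payload);
console.log(`Parse time: ${(performance.now() - start).toFixed(1)} ms`);
```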
Benefits of Generative Abundance
1. Expanding the Search Space
When we write code manually, we tend to stick to what we know (Local Optima). If I know Python well, I’ll solve every problem with Python. By throwing AI at the problem, I can ask it to generate solutions in languages or paradigms I’m less familiar with. The AI might suggest a Rust-based microservice for a bottleneck I was trying to optimize in Node.js. The “search space” for solutions expands dramatically.
2. Deepening Understanding via Comparison
The best way to learn a trade-off is to see it. By generating Option A (High Performance, High Complexity) and Option B (Lower Performance, High Readability) and placing them next to each other, the abstract trade-off becomes concrete. This “rich” comparison process sharpens my architectural intuition.
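Here is a toy illustration of what that side-by-side contrast can look like (my own minimal example, not AI output): the same question, “do these two lists overlap?”, answered once for large inputs and once for the reader.

```typescript
// Option A: O(n + m) via a Set. Faster on large inputs,
// but the intent is buried in bookkeeping.
function overlapsFast(a: string[], b: string[]): boolean {
  const seen = new Set(a);
  for (const item of b) {
    if (seen.has(item)) return true;
  }
  return false;
}

// Option B: O(n * m) via nested scans. Slower, but reads like the spec.
function overlapsSimple(a: string[], b: string[]): boolean {
  return a.some((item) => b.includes(item));
}

console.log(overlapsFast(["a", "b"], ["c", "b"]));   // true
console.log(overlapsSimple(["a", "b"], ["c", "d"])); // false
```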
The New Bottleneck: Choice Fatigue
However, this new wealth brings a new problem: Decision Fatigue.
If traditional Millionaire Programming suffered from “Software Bloat,” AI-based Millionaire Programming suffers from “Choice Overload.” We are faced with a combinatorial explosion of valid options.
For a single web application, you might instantly be presented with:
- Next.js deployed on Vercel
- Remix running on Cloudflare Workers
- Astro hosted on Netlify
- Plain React on AWS S3
- …and dozens of other viable combinations.
To navigate this, the role of the senior engineer shifts. We are no longer just “builders”; we are “curators.” Our value lies not in how fast we type, but in how effectively we can filter the abundance of generated options to find the one that fits the business constraints.
Conclusion: From Craft to Curation
“Millionaire Programming” used to be about laziness—letting the hardware do the heavy lifting of memory management.
Today, it is about strategy. It is about leveraging the infinite patience and speed of AI to explore the map of possibilities before we commit to a path.
We are throwing AI at the problem to gain something far more valuable: better decisions.
In this new era, don’t be afraid to be a “millionaire.” Ask for five different implementations. Generate code you intend to throw away. Use this abundance to find the signal in the noise.