Green = Sustainable -> Compiler

Green Is a Compiler

The standard green data center question is: Is this facility green?

That is the wrong question. Too easy to answer badly.

The better question is: Can these green conditions be sustained?

That is a compiler question. A compiler takes declared inputs, checks them against rules, and returns a verdict — not a score, not a certification. A gate decision: PASS, FAIL, or UNKNOWN.

Green = Sustainable

Green means sustainable.

Not efficient today. Not renewable on paper. Not carbon neutral by accounting convention.

Sustainable means the conditions that make the facility green can be held over time, as the world changes around it. That one move changes everything — because a lot of things that currently pass as green stop compiling.

Lowest energy use may not be sustainable. A facility running PUE 1.05 on free-air cooling is impressively efficient. But some of that efficiency is borrowed from the climate envelope around it. If that envelope shifts over the operating life of the building, the free-air window narrows and the PUE climbs. The efficiency was not built into the system. It was leased from the atmosphere.

Renewable may not be sustainable. Hydro depends on watershed conditions. Solar depends on manufacturing, degradation, and end-of-life. Wind depends on grid integration and geography. RECs are accounting tools, not physical supply by themselves — a REC can match consumption on paper while the facility draws fossil generation at 2am. The electrons do not care about the certificate.

None of this means renewable energy is bad. It means the sustainability compile is more demanding than the green checklist.

Compiler Outputs

PASS — the claim holds across the declared time horizon, boundary, and stress conditions.

FAIL — the claim does not hold, or a prohibited dependency appears.

UNKNOWN — the witnesses are missing. The compile cannot run.

UNKNOWN is not a soft PASS.
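The gate semantics fit in a few lines of Python. The names here (`Verdict`, `compile_claim`, the rule shape) are mine, for illustration only; the point is that a missing witness short-circuits to UNKNOWN before any rule gets a chance to pass.

```python
from enum import Enum

class Verdict(Enum):
    PASS = "PASS"
    FAIL = "FAIL"
    UNKNOWN = "UNKNOWN"

def compile_claim(witnesses: dict, rules: list) -> Verdict:
    """Gate decision: every rule needs its witnesses before it can run."""
    for rule in rules:
        # A rule declares which measured inputs (witnesses) it needs.
        if any(w not in witnesses for w in rule["needs"]):
            return Verdict.UNKNOWN          # missing witness: the compile cannot run
        if not rule["check"](witnesses):
            return Verdict.FAIL             # rule evaluated and did not hold
    return Verdict.PASS

# A facility with no measured PUE returns UNKNOWN, not a soft PASS.
rules = [{"needs": ["pue_measured"], "check": lambda w: w["pue_measured"] < 1.4}]
print(compile_claim({}, rules))                     # Verdict.UNKNOWN
print(compile_claim({"pue_measured": 1.2}, rules))  # Verdict.PASS
```

Note the asymmetry: PASS requires every rule to run and hold, while a single missing witness ends the compile.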

What the Compile Checks

For GreenM3DC, the compile uses four structural checks.

INV — what must remain true

PUE must remain below a declared threshold, measured at the meter, not modeled at design. Renewable fraction must be matched to actual consumption, not just annual average. Carbon accounting must close within a declared reporting window.
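As a sketch, two of these invariants might compile like this. The thresholds, witness names, and hourly-matching details are my assumptions for illustration, not declared GreenM3DC values.

```python
# Illustrative INV checks; thresholds and witness names are assumptions.
def inv_pue(w, threshold=1.3):
    # Measured at the meter: total facility energy over IT energy.
    return w["facility_kwh"] / w["it_kwh"] <= threshold

def inv_renewable_match(w, required_fraction=0.9):
    # Matched to consumption hour by hour, not as an annual average.
    matched = sum(min(gen, load) for gen, load in zip(w["renewable_kwh_hourly"],
                                                      w["load_kwh_hourly"]))
    return matched / sum(w["load_kwh_hourly"]) >= required_fraction

witnesses = {
    "facility_kwh": 125_000, "it_kwh": 100_000,
    "renewable_kwh_hourly": [40, 10, 50], "load_kwh_hourly": [30, 30, 30],
}
print(inv_pue(witnesses))              # True: PUE 1.25, under the 1.3 threshold
print(inv_renewable_match(witnesses))  # False: low-generation hours drag matching down
```

The second check is where annual-average claims fall apart: total generation here exceeds total load, yet hourly matching still fails.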

NINV — what must never occur

Fossil fuel must not become the primary power source while the facility still claims to be green. Cooling capacity must not fall below heat load — thermal runaway is not a warning, it is a compile failure. Carbon neutrality must not rest entirely on purchased offsets with no internal reduction pathway.
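Prohibited dependencies can be sketched the same way. If any of these predicates is ever true, the compile fails; witness names and values are assumptions for the sketch.

```python
# Illustrative NINV checks: these must never be true.
def ninv_fossil_primary(w):
    # Fossil must not be the largest single source while the green claim stands.
    return max(w["sources_kwh"], key=w["sources_kwh"].get) == "fossil"

def ninv_cooling_floor(w):
    # Cooling capacity below heat load is thermal runaway: a compile failure.
    return w["cooling_kw"] < w["it_heat_kw"]

w = {"sources_kwh": {"solar": 60, "wind": 25, "fossil": 15},
     "cooling_kw": 1200, "it_heat_kw": 950}
print(ninv_fossil_primary(w))  # False: no prohibited dependency
print(ninv_cooling_floor(w))   # False: cooling still covers the heat load
```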

BOUND — where the claim holds

Free-air cooling efficiency is valid only within a declared ambient range. Outside that range the PUE claim does not compile — the model has left its boundary. The renewable claim holds at this grid location, with these generation sources, under these matching rules — not universally.
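The boundary check is the simplest of the four, and that is the point: the claim carries its envelope with it. The 5 to 24 C range below is a made-up envelope for the sketch, not a GreenM3DC figure.

```python
# Illustrative BOUND check: the free-air PUE claim only compiles inside a
# declared ambient envelope.
FREE_AIR_ENVELOPE_C = (5.0, 24.0)

def pue_claim_in_bounds(ambient_c, envelope=FREE_AIR_ENVELOPE_C):
    lo, hi = envelope
    return lo <= ambient_c <= hi

print(pue_claim_in_bounds(18.0))  # True: the claim holds here
print(pue_claim_in_bounds(31.0))  # False: the model has left its boundary
```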

MORPH — what must be able to change

When ambient conditions exceed the free-air cooling threshold, the mode must shift from free-air to mechanical cooling. That transition must be declared and tested, not assumed. When the primary renewable source degrades, there must be a declared substitution path — not a future intention, a structural commitment.
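A transition check can be sketched as a set of declared mode shifts. Mode names and the 24 C threshold are assumptions; what matters is that an undeclared shift does not compile, even if it is physically plausible.

```python
# Illustrative MORPH check: a mode shift only compiles if it was declared
# up front. Here only one direction has been declared, deliberately.
DECLARED_TRANSITIONS = {("free_air", "mechanical")}

def cooling_mode(ambient_c, threshold_c=24.0):
    return "free_air" if ambient_c <= threshold_c else "mechanical"

def morph_declared(before_c, after_c):
    shift = (cooling_mode(before_c), cooling_mode(after_c))
    # No shift is always fine; an actual shift must appear in the declaration.
    return shift[0] == shift[1] or shift in DECLARED_TRANSITIONS

print(morph_declared(20.0, 30.0))  # True: free_air -> mechanical is declared
print(morph_declared(30.0, 20.0))  # False: the return path was never declared
```

The second result is the structural commitment in miniature: assuming you can switch back is not the same as declaring that you can.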

These are four examples — one per category. The full GreenM3DC compile is built to run dozens of tests across the same four categories.

The point here is the structure. The list is the work.

Most facilities would not return PASS or FAIL on this compile. They would return UNKNOWN.

Not because they are failing, but because the witnesses are missing. No declared time horizon. No stress scenario. No lifecycle assessment of the hardware fleet.

UNKNOWN is not green. UNKNOWN is not sustainable.

Can you run this compile?

INV PUE_THRESHOLD · RENEWABLE_MATCH · CARBON_WINDOW

NINV FOSSIL_PRIMARY · COOLING_FLOOR · OFFSET_ONLY

BOUND FREE_AIR_ENVELOPE · RENEWABLE_LOCALITY · LOAD_DENSITY

MORPH COOLING_MODE_SHIFT · SOURCE_SUBSTITUTION · HARDWARE_EOL

Next: The IT asset list as structural input — what the BOM actually tells you about whether a facility can be sustained.

The GreenM3 Data Center Project

Back to Green Data Centers

I stopped writing about green data centers for a while because the conversation started feeling stale.

The same ideas kept showing up with new logos attached: renewable energy claims, PUE numbers, sustainability reports, renderings, commitments, and announcements. Some of the work is real. Some of it is excellent. But the public conversation has become predictable.

So I decided to come back a different way.

Instead of writing about another announced facility, I am going to write about my own fictional green data center — one that lets me test what "green" actually means when the claims have to hold.

The project starts with a simple physical frame:

100,000 square meters of floor area, 10 meters tall, for a total of 1,000,000 cubic meters of space.

That number — 1,000,000 cubic meters — is not arbitrary. It is a forcing function.

At that scale, the comfortable hand-waving that fills most green data center writing stops working. You cannot just say "we use renewable energy" and leave it there. You cannot cite a PUE number without explaining how you measured it. You cannot claim cooling efficiency without accounting for what happens when the ambient temperature spikes, the grid gets stressed, or the AI workload doubles overnight.

At 1,000,000 m³, every claim becomes a structural argument.

And structural arguments either hold or they do not.

What I Got Bored Of

The green data center space has a formula. You have seen it.

A press release announces that a new hyperscale facility will be powered by 100% renewable energy. There is a rendering. There are sustainability commitments. There is a PUE number that sounds impressive. The facility opens. The sustainability report comes out twelve months later. Much of it reads like marketing.

I am not saying the work is not real. Some of it is. But the industry conversation has become a loop.

The hard questions are usually avoided.

What does it actually mean to be green in a way that can be verified by someone other than the company making the claim?

What happens to green commitments during a prolonged drought, when cooling towers become a liability?

What happens when the local grid is stressed and diesel generators run for four hours?

What happens when the AI workload doubles overnight and the thermal profile of the building changes?

Those questions are more interesting to me than another announcement.

The Fictional Project as a Tool

So I built a fictional one.

No specific location. No owner. No PR constraints. Just a volume of space and the question:

What would it take to make this genuinely, structurally green?

Fictional does not mean unserious. It means unconstrained. It lets me test the claims without being trapped inside a vendor story, a corporate sustainability report, or a single site's limitations.

I use the word structurally deliberately.

I have been developing a way of thinking called StructuralTruth: the idea that any serious claim about a system should be expressible as:

  • invariants — what must remain true

  • violations — what must never occur

  • boundaries — where the claim holds

  • transformations — what is allowed to change

If you cannot express your "green" claim in those terms, you probably do not have a claim yet.

You have an aspiration.

The fictional data center is the test bed for applying that thinking to physical infrastructure.

The fictional project has a name: GreenM3DC.

M3 stands for the cubic meter — the fundamental unit of the space.

Everything I write in this series will be grounded in one question:

Does the claim hold?

Not does it sound right. Not does it appear in a sustainability report. Not does it support a nice rendering.

Does it actually hold, under measurement, over time, in real operating conditions?

That is the standard I am interested in. It is harder than it sounds.

Let's go.

Next: What does "green" actually mean? A structural definition that survives contact with reality.

How I Use ChatGPT and Claude Code Together — and Why I Don’t Mix Their Roles

Over the last several weeks, I’ve settled into a workflow that looks unusual on the surface but has proven extremely effective in practice:

  • ChatGPT for structural exploration and review

  • Claude Code for deterministic compilation and execution

  • No overlap between their responsibilities

The key is not which models I use—it’s how I separate their roles.

The Capability Asymmetry That Matters

Here is the practical difference that forced this separation: ChatGPT can hold long-lived reasoning but cannot touch the filesystem or run a command; Claude Code edits real files and runs real commands but is not where design thinking should live.

That tells you how each tool wants to be used.

ChatGPT = Structural Workspace

I use ChatGPT for:

  • Long-lived thinking

  • Naming and structure

  • Clarifying intent

  • Reviewing results after execution

I do not use it to touch the filesystem or “prove” code works.

Claude Code = Compiler

Claude Code is treated as a deterministic machine:

  • It edits real files

  • It runs real commands

  • It fails concretely

  • It enforces correctness through execution

No long-term reasoning. No design debates.

The Critical Rule

I never use ChatGPT to review Claude Code's chat output — only the results of what actually executed.

Instead, the loop is always:

Structure → Compile → Review

  1. ChatGPT defines structure

  2. Claude Code executes it

  3. ChatGPT reviews what actually happened

This avoids language-only feedback loops and keeps everything grounded in reality.

Why This Works

  • Exploration stays fast

  • Execution stays correct

  • Code becomes expendable

  • Structure becomes durable

I’m now applying this workflow to OS-level services for electrical and mechanical systems in AI data centers, where ambiguity is expensive and determinism matters.

Final Thought

Most AI frustration comes from asking one tool to do two incompatible jobs.

Once you separate exploration, compilation, and review, AI starts behaving like a real engineering toolchain—not a chatbot.

Writing Code with the Help of AI

I took computer programming classes at UC Berkeley and spent years trying to get better at programming afterward. While I understood the fundamentals, writing software always felt like it required an enormous amount of time and effort relative to the progress made. The work felt more about managing complexity than solving the underlying problems I cared about.

About a month ago, while exploring some ideas involving AI, I unexpectedly revisited writing code—this time with AI’s help. What surprised me wasn’t that the AI could write code, but that it fundamentally changed where the effort was spent. Instead of wrestling with syntax, frameworks, and coordination details, the work shifted toward defining structure, relationships, and invariants.

A week ago, Ray Ozzie wrote about his own experience collaborating with AI to design and prototype hardware and software systems. Ray is best known for creating Lotus Notes, and for his later work on large-scale distributed systems. His reflections strongly resonated with my own experience—but also highlighted something important.

What took weeks of focused effort in his case unfolded for me in hours.

Not because the problems were simpler, but because the approach was different.

> Having spent much of 2025 transforming the way I write code, a few months ago I decided to see how far I could push myself in collaborating with AI to tackle hardware design.
>
> The project - motivated by conversations with a customer - is nontrivial. Physical and cost constraints; both analog and digital domains; edge compute/storage ML; power challenges. Of course, Notecard for secure cloud backhaul.
>
> I worked on it on-and-off for about 3-4 weeks - surprised not just that the foundation models had so much knowledge of EE, but that they clearly had internalized a vast number of components’ datasheets. Several times I ran into roadblocks where ultrathink or deep research yielded specific choices I’d never have considered.
>
> — Ray Ozzie

Now I often spend 2–4 hours a day working on computer code. But the work itself is no longer about coding. I’m working on harder, more upstream problems, and the code is simply the executable output of the design.

This ability didn’t appear overnight. It comes from years working in product development as a project and program manager for both hardware and software—guiding execution across teams, understanding how the big picture fits together, and how small decisions compound. Over time, you learn to see systems as composed structures, where relationships matter more than parts, and where symmetries persist even as details change. What used to require explanation and persuasion now shows up as functional proof.

In the past, I might have written a paper or given a conference presentation. Now, in a fraction of the time, I can produce a functional proof—one that can be run, tested, and shared, and that scales to far more people than a paper or presentation ever could.

What also feels right is reaching out directly to Ray Ozzie. After connecting on LinkedIn, I was able to share a few thoughts on how his early technical decisions at Microsoft helped enable the company’s cloud evolution. I’m looking forward to exchanging perspectives on how AI is changing the way we think about coding, structure, and system design.

Why Acoustics Became My Path to Solving Hard Problems

When you’re trying to solve a hard problem, sometimes the only way forward is to take a completely different path. For most of my career, I worked in the world of the visual: graphics, printing, scanning, monitors, typography. Everything was about sight.

And then I realized — sight has limits.

Our eyes top out at around 60 hertz. That’s it. That’s the ceiling. Yet the world runs much faster. Structures change faster. Energy moves faster. Problems unfold faster. And we’ve built entire industries around the assumption that vision is enough.

It isn’t.

What changed my thinking was a conversation nearly fifteen years ago. A friend of mine, a software architect working on autonomous driving, told me something that stuck with me ever since:

> **“Sound solves the driving problems faster than vision.”**

He was right. Sound reacts faster. Sound carries more directional information. Sound sees around corners. And unlike vision, sound doesn’t care about lighting, weather, or glare. That idea opened a door for me that I didn’t fully walk through until much later.

I had worked on the Sound Manager for MacOS System 7, and some of the same developers moved with me from Apple to Microsoft. So sound wasn’t foreign to me — it was just sitting in the background of my career. Waiting.

Then the real shift happened.

A friend needed help with operations problems at Starbucks Coffee Roasting. And out of nowhere I said:

> **“Why don’t we use sound to count the beans?”**

It was obvious to me. Acoustic signatures are clean, distinct, and cheap to capture. You can count beans — accurately — for fractions of a penny. You can detect flow problems. You can measure consistency. You can treat the roasting line like an instrument.
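Counting by sound is, at bottom, onset detection. Here is a minimal sketch of the idea, assuming a mono sample stream and a hand-tuned threshold; none of this is the actual roasting-line pipeline, just the shape of it.

```python
import numpy as np

def count_impacts(signal, rate, threshold=0.5, dead_time_s=0.01):
    """Count discrete impact events (e.g. beans hitting a chute) in a mono signal.

    Envelope thresholding with a dead time so one impact is not counted twice.
    """
    env = np.abs(signal)
    min_gap = int(dead_time_s * rate)
    count, last = 0, -min_gap
    for i, v in enumerate(env):
        if v >= threshold and i - last >= min_gap:
            count += 1
            last = i
    return count

# Synthetic check: three clicks spaced 50 ms apart at 8 kHz.
rate = 8000
sig = np.zeros(rate)
for t in (0.1, 0.15, 0.2):
    sig[int(t * rate)] = 1.0
print(count_impacts(sig, rate))  # 3
```

A real line would need a bandpass filter and an adaptive threshold, but the economics hold: a microphone and this much math is fractions of a penny per count.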

The best part was that this random idea led me straight into the world of academic acoustics. I found a professor who had written papers on the acoustics of coffee bean roasting — which I didn’t even know was a real field — and I’ve been talking with him for more than six months now. Those conversations cracked open everything.

Because once you study how universities and the military use acoustics, you realize just how advanced the field really is.

From there I went deeper. Much deeper.

I revisited the signal-processing foundations I hadn’t touched since working on analog displays and power supplies decades ago. I reconnected with electromagnetic radiation engineers from my Apple days who had to battle compliance certifications at high frequencies. And I discovered something that surprised me:

> **There are way more engineers and funding in RF and high-frequency signal processing than in acoustics.**

So I asked myself the most obvious question:

**What software do they use?**

I found it — a DARPA-backed platform with twenty-four years of development behind it. And I spent a week at their user conference, talking to PhDs, researchers, and engineers who’ve spent their lives working in gigahertz domains.

That was the moment everything clicked.

If their methods work at gigahertz speeds, they will work at megahertz and kilohertz.

If the math works in RF, it works in acoustics.

If the structural patterns hold at high frequencies, they hold at low frequencies.

It all scales.

And so I spent the next couple of months digging into the mathematics — the real math — underneath signal processing. Complex signals. Phase. Time. Direction. Coherence. I/Q analysis. Energy emissions. The structures hidden inside the waves.
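The I/Q idea scales down exactly as claimed: a complex baseband sample stream carries phase directly, and the phase slope is instantaneous frequency. A minimal sketch with synthetic values (not any real RF capture):

```python
import numpy as np

# A complex baseband signal (I + jQ), offset 3 Hz from its carrier.
rate = 48_000
t = np.arange(4800) / rate
f_offset = 3.0
iq = np.exp(2j * np.pi * f_offset * t)          # I = cos, Q = sin

phase = np.unwrap(np.angle(iq))                 # instantaneous phase
f_inst = np.diff(phase) * rate / (2 * np.pi)    # instantaneous frequency in Hz
print(round(float(f_inst.mean()), 1))           # 3.0
```

The same three lines of math recover phase and frequency whether the samples came from a gigahertz receiver or a microphone; only the sample rate changes.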

That exploration pulled everything together.

All the fields I had touched in my career — typography, printing, sound, color, monitors, analog electronics, imaging, scanning — suddenly made sense as variations of the same underlying structure: **signals and the truths they reveal.**

And that’s why I’ve gone so deep into acoustics.

Not because it’s trendy.

Not because it’s a niche.

But because sound — more than anything else we have — reveals the true structure of the world in real time.

Acoustics isn’t an afterthought.

It’s the path.