I don’t usually do the “week in review” format, but this past week felt different. A lot happened in a short amount of time, and a surprising amount of it was directly relevant to things I spend my days thinking about at AMD and nights tinkering with in my homelab. So let me just walk through the stories that caught my eye and why I think they matter.
TSMC Keeps Printing Money (and Chips)
Let’s start with the biggest signal of the week. TSMC reported first-quarter revenue of 1.13 trillion New Taiwan dollars (roughly $35.6 billion), a 35% year-on-year rise. That’s not a blip. March alone saw a 30.7% month-over-month jump, and the 45.2% year-over-year growth for that same month suggests the Q2 ramp is already underway.
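For the skeptical, the conversion and growth math checks out on the back of an envelope. Only the NT$1.13 trillion figure and the 35% growth rate come from the report; the exchange rate here is my assumption:

```python
# Sanity-check TSMC's Q1 print. The NT$ revenue and 35% YoY growth are
# as reported; the NT$/USD exchange rate is an assumed average.
q1_revenue_ntd = 1.13e12      # NT$1.13 trillion, as reported
ntd_per_usd = 31.7            # assumed average NT$/USD rate

q1_revenue_usd_b = q1_revenue_ntd / ntd_per_usd / 1e9
print(f"Q1 revenue ≈ ${q1_revenue_usd_b:.1f}B")        # ≈ $35.6B

# Implied year-ago quarter, given 35% YoY growth
prior_q1_ntd_t = q1_revenue_ntd / 1.35 / 1e12
print(f"Implied Q1 a year ago ≈ NT${prior_q1_ntd_t:.2f}T")
```

The implied year-ago quarter of roughly NT$0.84 trillion is what makes the growth rate feel concrete: TSMC added about NT$290 billion of quarterly revenue in a single year.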
What makes this number interesting isn’t just the headline — it’s the composition. While smartphone and PC end markets took a hit due to memory shortages, the AI segment of TSMC’s business “pulled the weight,” according to SemiAnalysis analyst Sravan Kundojjala, who added that TSMC is on track to blow past its stated 30% annual growth target.
Working in the semiconductor space, this hits different. TSMC controls roughly 70% of the global advanced chip foundry market. Every major AI chip — NVIDIA’s accelerators, Apple’s custom silicon, and AMD’s data center GPUs — runs through TSMC’s fabs. That last part is my day job. When TSMC reports numbers like this, it’s not abstract finance news — it’s a signal about where the whole industry is headed. The capex levels ($52–56 billion for 2026) imply TSMC has visibility on 2027–2028 demand that is materially higher than 2026, and that visibility is almost entirely a function of the AI accelerator roadmap — specifically Nvidia’s Rubin generation, AMD’s MI400 series on N2, and custom silicon from Google, Amazon, and Meta.
The full earnings call is April 16. The one thing that’s clear from the monthly number alone: there is no demand collapse. Whatever happens to tariffs, whatever happens to the Taiwan-China relationship — the AI training-to-inference pipeline is still pulling more wafers off TSMC’s leading edge every month than it did the month before.
Big Tech Is Betting on Nuclear
This one I’ve been watching for a while. Major tech companies are putting real financial weight behind next-generation nuclear projects as they seek reliable electricity for power-hungry AI data centers. These deals are providing nuclear firms with capital and, just as importantly, a more credible commercial path at a moment when utilities and governments are scrambling to figure out how to support fast-rising data-center demand.
The framing I keep coming back to: AI didn’t just create demand for GPUs. It created demand for everything behind the GPUs: power, cooling, land, water. For years, nuclear startups struggled to move from promise to deployment because power markets were slow-moving and buyers were cautious. AI is changing that equation. If Microsoft, Google, Amazon, and others continue to lock in future energy supplies, power generation becomes a strategic technology layer, not just a background utility.
I run a DGX Spark in my homelab. That thing draws serious power for its footprint. Now multiply that by tens of thousands of nodes in a real data center, and you start to understand why hyperscalers are signing 20-year power agreements with nuclear plants that haven’t been built yet.
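Every number in this sketch is an assumption of mine (per-node draw, node count, cooling overhead), but it shows why the orders of magnitude push hyperscalers toward nuclear:

```python
# Back-of-envelope facility power for an AI training campus.
# All inputs are illustrative assumptions, not vendor figures.
node_power_kw = 10.0    # assumed draw of one multi-GPU training node
num_nodes = 50_000      # "tens of thousands of nodes"
pue = 1.3               # assumed power usage effectiveness (cooling, losses)

it_load_mw = node_power_kw * num_nodes / 1000   # kW -> MW
facility_mw = it_load_mw * pue                  # total grid draw
print(f"IT load ≈ {it_load_mw:.0f} MW, facility ≈ {facility_mw:.0f} MW")
```

A large nuclear reactor delivers on the order of 1,000 MW of electrical output, so a single campus at this scale would consume a meaningful fraction of one reactor’s entire capacity. Seen that way, the 20-year power agreements make sense from both sides of the table.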
Anthropic Is Turning Claude Into a Bug Hunter
This was the story I found most technically fascinating. Anthropic is tightening access to its new Claude Mythos Preview model and steering it toward defensive cybersecurity work rather than a broad public release. The company says the model is powerful enough to uncover serious software flaws across major operating systems and browsers, and it has launched Project Glasswing with partners including Amazon, Apple, Google, Microsoft, and Nvidia to help identify and fix high-severity vulnerabilities before attackers can weaponize similar capabilities.
Think about that for a second. The model has reportedly uncovered thousands of severe vulnerabilities in major operating systems and browsers. This is the benign version of a scary scenario: what happens when that same reasoning capability is turned toward offense? Anthropic seems to be acknowledging exactly that risk by choosing controlled deployment over a public release.
The story now is not just who has the smartest AI, but who can contain the security fallout from AI that can reason through exploits at machine speed. Anthropic’s move suggests frontier labs increasingly see controlled deployment, red-team access, and infrastructure partnerships as competitive necessities, not optional safety theater.
This matters a lot for the infrastructure world I operate in. If AI models can find vulnerabilities faster than humans can patch them, the security playbook needs a serious rethink.
Framework Is About to Do Something Big (and I’m Here for It)
I have a Framework laptop. It’s one of my daily drivers. So when Framework dropped a teaser this week, I paid close attention.
Modular PC maker Framework is hosting a “Next Gen” event on April 21st, and it looks like it might have a lot to do with Linux. Alongside a newsletter announcing the event, Framework posted a video on Thursday titled “Follow the white penguin,” featuring clear references to Linux — including the iconic penguin, the “I use Arch btw” meme, and a shot cycling through several Linux distro logos, including Ubuntu, Fedora, Arch, CachyOS, and Bazzite.
Framework also announced product availability in four new countries (New Zealand, Norway, Switzerland, and Singapore) and advised potential customers to hold off on placing orders until after the event. That last bit is the tell. They’re not just announcing; they’re launching something.
What I find genuinely compelling is the framing they chose. The announcement addresses any potential concerns about the company’s fate directly: “You might be reading all of this and thinking, is this a farewell letter to personal computing? Is this the end of Framework? No, this is a manifesto. No matter how inevitable the AI-takes-all scenario may sound, as long as there is a person in the world who still wants to own their means of computation, we will be here to build the hardware that enables it.”
That’s a sharp counter-positioning to the current moment. Framework is doubling down on ownership: computers you control at the deepest level, from choosing your OS to modifying your hardware to keeping your data and computation local rather than leasing it from the cloud.
As someone who runs local models on my homelab specifically because I want to control my own compute, this resonates. The DGX Spark sitting next to my desk is my rebuke to “AI as a subscription.” If Framework ships something that makes first-class Linux support official, that’s a big deal for the community.
The Capex Machine Keeps Running
One more number worth noting: Meta has signed a new deal to spend an extra $21 billion with CoreWeave between 2027 and 2032, on top of a prior $14.2 billion commitment. CoreWeave’s data centers, packed with hundreds of thousands of Nvidia GPUs, will support Meta’s growing AI training and inference needs. Meta’s 2026 capital expenditures are projected at $115–135 billion, nearly double 2025 levels.
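To put the CoreWeave commitment on an annual footing, spread it evenly across its term. The deal sizes are from the reporting; the even per-year split is my simplification:

```python
# Rough annualization of Meta's CoreWeave commitments.
# Deal sizes are as reported; the even split is a simplification.
new_deal_b = 21.0            # new 2027-2032 commitment, $B
prior_deal_b = 14.2          # earlier commitment, $B
years = 2032 - 2027 + 1      # 2027 through 2032, inclusive

total_b = new_deal_b + prior_deal_b
per_year_b = new_deal_b / years
print(f"Total: ${total_b:.1f}B; new deal ≈ ${per_year_b:.1f}B/yr")
```

Roughly $3.5 billion a year to a single neocloud provider, stacked on top of Meta’s own $115–135 billion capex, gives a sense of how much of this buildout is rented versus owned.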
And on the revenue side, Amazon said its cloud unit’s AI revenue run rate topped $15 billion in the first quarter, marking one of the clearest signals yet that hyperscaler AI spending is beginning to translate into measurable top-line growth. CEO Andy Jassy also said Amazon’s chips business, including Graviton and Trainium, now has an annual revenue run rate above $20 billion, roughly double the figure the company cited earlier this year.
That’s the flywheel closing: capex → infrastructure → revenue → more capex. Everyone’s been waiting for the “when does this translate to real money” moment. Looks like we’re in it.
What I’m Watching Next
The Framework event on April 21 is top of my list. If they ship a Linux-first or Linux-native product, it changes the calculus for anyone who’s been sitting on the fence. I’m also watching the TSMC full earnings call on April 16 closely — the margin guidance and commentary on N2 ramp velocity will tell us a lot about where 2026 is actually heading.
The pattern I keep seeing across all of these stories: the infrastructure layer of AI is hardening fast. Chips, power, security, hardware ownership — all of it is getting more defined and more contested. That’s actually an exciting place to be if you’re building in this space.
More to come.