I'm about to spend two years of my life working on autonomy. That's a real cost.
Two years is enough time to start a company. Enough time to get meaningfully deep in a different field. Enough time for the physical AI landscape to shift under me while I'm heads-down on one company's stack. I could be working on humanoids, on satellite networks, on consumer hardware, on AI platform infrastructure, or on something of my own. The cost of this choice is the value of every road I'm not taking.
And the cost is higher right now than it would be in most decades. Physical AI is at a fork. Foundation models are crossing from language into action. Humanoid companies are moving from demos to deployment. Waymo is scaling. Tesla is shipping Optimus. NVIDIA is betting the company on robotics as its next act. Whatever the next era of embodied intelligence looks like, it's being decided in the next five years. Two years of that window is a quarter of it. I don't get to spend this time in a fog.
So I'm going to write down, for myself, what I think the board looks like, where the opening is, and why this is the seat I'm taking. If the reasoning is wrong, I want it to be wrong on paper so I can see it. If it's right, I want it visible enough to hold me to it.
the board
Physical AI right now is a small number of serious players making different bets.
On the AV side, the economics have been validated. Waymo is running 450,000+ paid rides a week, drove 127 million fully autonomous miles in 2025, and raised at $126B in February 2026. Tesla has millions of paying FSD users and a Semi in commercial production. Aurora is hauling freight. Wayve is shipping to OEMs on an end-to-end bet. Applied is providing infrastructure and targeted autonomy stacks across six verticals. The remaining work is closing the gap between research artifacts and deployed systems across more ODDs, more platforms, more customers.
Humanoids are at an earlier stage. Figure, 1X, Physical Intelligence, Boston Dynamics, and Tesla Optimus are racing to build general-purpose manipulation and whole-body control. The research is impressive and moving fast: VLAs, world models, diffusion policies. Early deployment is starting to happen: Figure's humanoids running shifts at BMW, 1X taking $20K deposits for home robots shipping in 2026, Hyundai planning to mass-produce 30,000 Atlas humanoids annually by 2028, NVIDIA releasing GR00T foundation models for humanoid control. These are real, but they're early-product milestones, not scale-deployment milestones. It's roughly where AV was around 2014.
The infrastructure between the two is starting to converge. NVIDIA's DRIVE, Isaac, and GR00T are the same platform family: a deliberately unified compute, simulation, and foundation-model stack for physical AI across form factors. Applied is extending its toolchain into robotics through partnerships like LG Innotek in early 2026. Tesla's Optimus shares chip architecture with FSD. Jensen Huang was explicit at CES 2026: the ChatGPT moment for robotics is here, built on the same physical AI platform that powered AV.
What doesn't transfer is the domain expertise. Manipulation is not navigation. Humanoid whole-body control is not vehicle control. Human-robot interaction in homes and factories is a different safety problem than driving in traffic. The humanoid-specific stack (VLAs for manipulation, tactile sensing, whole-body RL) is being built fresh. AV didn't solve those problems.
So the real question, for someone choosing between these two areas, is: do you want to build domain expertise in an earlier, more open-ended field, or do you want to build infrastructure expertise in a deploying field, knowing the infrastructure half ports to humanoids when they mature?
why I'm choosing AV
Humanoids are earlier and culturally hotter. If they break out in three years, the engineers who staffed up early win disproportionately. That's real and I took it seriously.
The choice comes down to what kind of work the next two years produce, and what kind of work physical AI is about to need.
Value in physical AI is migrating from research to integration. This is the same pattern every technology category has followed. LLMs: OpenAI's moat isn't model quality, it's the API, developer ecosystem, and fine-tuning infrastructure that lets enterprises actually use the model. Cloud: AWS didn't invent distributed systems; they integrated them into a platform that everyone else builds on. Phones: Apple didn't invent touchscreens or ARM chips; they integrated them into something that shipped. In each category, the research layer gets commoditized over time. The integration layer is where value compounds.
Physical AI shows the same pattern. Waymo's $126B valuation is about deployment scale, not research quality. Applied's $15B is entirely integration infrastructure. Tesla's FSD value is about deploying on millions of vehicles, not research elegance. The companies capturing value are the ones shipping. The integration layer is where this industry is headed.
AV infrastructure is already porting to humanoids. Applied has declared robotics as one of its two core focuses alongside AV. Their research page features humanoid robots, tactile-aware dexterous hands, and mobile manipulators. Mobileye acquired humanoid company Mentee Robotics for $900M in January 2026, positioning their AV vision and perception work as the foundation for labor-oriented robotics. Tesla is converting Fremont capacity from Model S/X production toward Optimus, treating it as the next application of FSD's AI stack. NVIDIA Jetson Thor runs Boston Dynamics' Atlas and is the same compute platform family as NVIDIA DRIVE for AV. Isaac Sim and DRIVE Sim are built on the same Omniverse tech stack. At CES 2026, Jensen Huang stated that NVIDIA's automated-driving platform will be applied to developing humanoids. Figure, 1X, Agility, and Boston Dynamics all build on Isaac Sim and Isaac Lab.
This isn't a prediction. This is what physical AI is doing, right now.
From those two facts (value migrating to integration, and AV infrastructure already porting to humanoids), a specific kind of skill becomes important: bringing up a mature physical AI stack on new hardware in new contexts, repeatedly, across different sensor rigs, different operating environments, different customer requirements. Knowing where stacks break when they meet a new platform. Knowing how to bridge gaps between components. Knowing how to validate in simulation before deploying. The code is different between AV and humanoids, but the structural work of making a stack ship in a new context is domain-agnostic.
That work is already happening in AV, where stacks are mature enough to bring up on new platforms regularly. It's not happening much in humanoids yet because the stacks themselves are still being invented. When humanoids reach the integration phase, this kind of work will become load-bearing in that domain too.
Applied's onboard architecture role is specifically this work. Not research. Not component development. Taking the existing stack and making it ship in new ODDs. Two years of it is concentrated repetition of the skill that the next five years of physical AI will need most.
That's the bet.
where the opening is
The thing most observers miss about this moment is that the research has mostly caught up and the deployment hasn't. Foundation models for driving exist. VLAs exist. End-to-end policies exist. BEV transformers, world models, differentiable stacks: all of it exists in published form, works in papers, and demos on curated benchmarks. What doesn't yet exist, reliably, is these artifacts running on real hardware, in real ODDs, under real safety cases, at scale.
That's a different problem. It's solved by different people. Research headcount scales with problems; deployment headcount scales with customers, vehicles, and ODDs. Most of the work that has to happen before physical AI is useful is the second kind. The bottleneck isn't ideas anymore. It's integration, deployment, and making real systems work under real constraints.
Look at where the money is. Waymo's $126B valuation isn't about having the best research; it's about shipping 450,000 paid rides a week. Tesla's FSD is the industry's largest deployed ADAS, shipped on millions of vehicles despite running a more contested architectural approach than competitors. Applied's valuation comes from integrating research into a toolchain that almost every major OEM actually uses. The companies capturing value aren't the ones with the best research. They're the ones who figured out how to make it ship.
The opening, specifically: the integration layer between research artifacts and deployed systems, across multiple ODDs, is where the next decade of physical AI gets built, and it's under-supplied with the specific kind of person who can do it. Most researchers can't cross into systems. Most systems people can't read and adapt research. The people who can do both, at production scale, are rare, and the industry is going to need a lot of them.
That's the opening. That's where I'm positioning.
why Applied is the seat
Given the opening, the right seat is the company that will give me the most reps at the integration layer, across the widest range of ODDs, with the highest density of senior engineers doing this exact work.
Applied is that seat. Here's the specific case.
Breadth of ODDs. In two years at Tesla I'd know passenger cars extremely well. In two years at Waymo I'd know urban robotaxi extremely well. Both are deep single-ODD experiences. In two years at Applied I should know how autonomy stacks get brought up across passenger cars, trucks, mining equipment, defense platforms, industrial vehicles. Each bring-up is a fresh instance of the same underlying problem: take a stack, adapt it to a new sensor rig, new dynamics, new ODD, new customer, make it ship. That's the repetition I need.
Honesty about architectural uncertainty. Applied's research group under Wei Zhan is pushing foundation models and differentiable stacks. Their Fallback Stack org is staffing a classical deterministic safety net in parallel. Running parallel bets is the honest engineering response to not knowing which paradigm wins. I'd rather be at a company that acknowledges the uncertainty than one ideologically committed to a single answer.
Operator-grade exposure. Applied is still small enough that I'll see real business decisions get made. Which verticals to chase. How to price into OEMs. How to staff a new customer integration. What to build in-house versus partner for. That's the kind of exposure that turns into operator judgment. I don't get that at Waymo. I don't get it at a two-year-old humanoid startup.
The seat makes sense because it matches the opening. Integration layer, multiple ODDs, senior engineers to learn from, operator exposure. That's what I need given where the value is being created.
the approach
If the board is right and the opening is real, then the approach writes itself. What I do in the next two years matters more than where I do it, past the threshold of "reasonable seat."
Ship real things. Not research prototypes. Not demos. Production code, running on real vehicles, used by real customers. Every year I need at least one shipped artifact I can point to.
Go deep on the layer. I'm arriving with breadth. The next two years are about turning breadth into a specific kind of depth: be the person who knows how a stack actually fails when you bring it up on a new platform, and why, and what to do about it. That's the layer I want to own.
Read the research anyway. The PhDs will be reading the literature. I read it too, with a builder's eye. What does this let me deploy? Can this actually ship? I don't get to opt out of the literature just because I'm not publishing to it.
Manufacture proximity to people better than me. This is the thing the PhD gets by default that I don't. I have to build it manually. Reach out to senior engineers. Ask for help. Be the junior person in rooms where I'm lucky to be.
Build a public technical record. Blog posts, writing, talks, open source. In two years I should have a record that makes sense of who I am beyond my employer. That record is what turns into investor credibility and recruiting pull later.
No job is beneath me. If the CI pipeline is broken and fixing it takes three days, I do the three days. If shipping needs me to learn a sensor driver I've never touched, I learn it. The question is always "does the thing get done," not "is this task making me look technical."
Taste over knowledge. Every quarter, ask: of all the things I could be working on, why this? Is this the thing that matters, or the thing that's convenient? Picking well is the most underrated skill in technical work and it's the first operator skill.
what I'm here to earn
The right to have an opinion about autonomy architecture. Earned through shipped systems, debugged failures, integrations that worked when they shouldn't have.
The right to be trusted on a new problem. The senior engineers here earned that by building things. I earn it the same way.
The right to be the person the room looks at when the integration is breaking and nobody knows why.
The right to speak publicly about this industry. Earned by building the record, not by claiming the title.
The right to raise money on my own name when the time comes.
And the one that matters most: the right to ask people to trust me with their time and their careers. To walk up to engineers I respect, who have families and salaries and stability, and ask them to leave it to build something with me. That trust is the hardest thing to earn. It doesn't come from credentials or winning arguments. It comes from having done the work visibly, having been honest about what I don't know, and having built a track record people can point to when they explain to their spouse why they're taking the risk. Two years isn't enough to fully earn that. It's enough to start.