Indoor autonomy works. Just not at scale.

We started with a clear mission: Fix the last-meter delivery mess inside modern buildings.

  • 7,000+ weekly deliveries per luxury building

  • Overwhelmed staff

  • Stolen packages

  • A front desk that’s constantly asking: “Who is this for?”

So we built Lily — a fully autonomous concierge robot that takes your burrito from the delivery driver and brings it up to your 10th-floor apartment.

No pre-mapping. No Bluetooth beacons. No human babysitting.

Lily just delivered.


The Real Problem Showed Up

Lily could handle the job.

But everything else broke — at scale.

Sure, generalized Physical AI models can handle local tasks:

  • Pick up the mail.

  • Toss clothes in the washer.

  • Load the dishwasher.

Imagine Lily grabbing envelopes from the front door slot. Or a humanoid robot tossing towels in the laundry basket.

But now ask it to deliver to apartment 1206 in a brand-new building — and it stalls.

Not because it's unintelligent.

But because it has no global context.

  • Where is 1206?

  • What wing? What elevator? What keypad?

  • How do you avoid a hallway that just closed for maintenance — in real time?

  • How do you flag a burnt-out stairwell light to building ops — without a human in the loop?

And here’s the kicker:

How do you share that insight with the next robot? Retrain a massive action model? Append rules to a brittle logic tree — and hope it holds?

Patch over patch. Hack over hack. And soon, nothing’s stable.

Meanwhile, customers still expect their robots to just work.

And what’s worse — all of this assumes you’ve retrofitted the building with BLE beacons or APIs in every doorknob.


Most haven’t. Most won’t.

These aren't control issues.

They're not even perception problems.

No memory. No conductor.

A system problem. An infrastructure problem.


What’s missing is the foundation for scale:

  • Global spatial search: How does a robot instantly locate what matters in unfamiliar, multi-level buildings?

  • Global reasoning: How does it adapt in real time when the environment shifts?

  • Global orchestration: How do we propagate that learning fleet-wide, instantly?
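
To make the first of these concrete, here's a minimal sketch of what a shared spatial-search layer could look like. Every name in it is hypothetical (`BuildingGraph`, `Place`, the unit cost model); the point is that when the map is a fleet-shared data structure, a closed hallway is one graph write that every robot's next route respects, not a retrain.

```python
import heapq
from dataclasses import dataclass, field


@dataclass
class Place:
    """A named location in a building, e.g. 'apt-1206' or 'elevator-B'."""
    name: str
    floor: int
    neighbors: dict = field(default_factory=dict)  # neighbor name -> traversal cost


class BuildingGraph:
    """Fleet-shared map: any robot can read routes or write closures."""

    def __init__(self):
        self.places = {}

    def add_place(self, name, floor):
        self.places[name] = Place(name, floor)

    def connect(self, a, b, cost=1.0):
        self.places[a].neighbors[b] = cost
        self.places[b].neighbors[a] = cost

    def close(self, a, b):
        """Report a closure (maintenance, blocked hallway): one write, fleet-wide effect."""
        self.places[a].neighbors[b] = float("inf")
        self.places[b].neighbors[a] = float("inf")

    def route(self, start, goal):
        """Dijkstra over the live graph, so closures apply to the very next query."""
        dist = {start: 0.0}
        prev = {}
        queue = [(0.0, start)]
        while queue:
            d, u = heapq.heappop(queue)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, cost in self.places[u].neighbors.items():
                nd = d + cost
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(queue, (nd, v))
        if goal not in prev and goal != start:
            return None  # unreachable under current closures
        path = [goal]
        while path[-1] != start:
            path.append(prev[path[-1]])
        return path[::-1]


# One robot reports a closure; every robot's next route avoids it.
g = BuildingGraph()
for name, floor in [("lobby", 1), ("elevator-B", 1), ("hall-12W", 12), ("apt-1206", 12)]:
    g.add_place(name, floor)
g.connect("lobby", "elevator-B")
g.connect("elevator-B", "hall-12W", cost=3.0)
g.connect("hall-12W", "apt-1206")
print(g.route("lobby", "apt-1206"))  # -> ['lobby', 'elevator-B', 'hall-12W', 'apt-1206']
g.close("hall-12W", "apt-1206")      # hallway just closed for maintenance
print(g.route("lobby", "apt-1206"))  # -> None, until someone maps a detour
```
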

Autonomy doesn’t fail locally. Autonomy fails when you try to scale it.


What We’re Building Now

We quickly realized: Every other Physical AI company is going to run into the same wall.

So we shifted.

We’re building the infrastructure layer for Physical AI:

  • Global search

  • Global reasoning

  • Global orchestration

So robots don’t just move — they adapt.

So autonomy doesn’t just exist — it scales.

So Physical AI becomes unstoppable.


What We’re Sharing With the World

That Lily prototype in the video?

Same hardware as the humanoids you’re tracking.

But Lily runs completely offline.

Spatial search, spatial reasoning, spatial action — all local.

No cloud. No internet dependency. No human fallback.

We’re proud of that.

And we’re open-sourcing parts of it — from inverse kinematics to local nav and manipulation primitives.
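
For flavor, here's the kind of primitive we mean: a textbook closed-form IK solve for a two-link planar arm. An illustrative sketch, not a drop from our actual release:

```python
import math


def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a 2-link planar arm.

    Returns (shoulder, elbow) angles in radians that place the end
    effector at (x, y); raises ValueError if the target is out of reach.
    """
    # Law of cosines gives the elbow angle from the target distance.
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(c2)  # elbow-down branch; negate for elbow-up
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow)
    )
    return shoulder, elbow


# Example: a 0.4 m + 0.3 m arm reaching a point 0.5 m out, 0.2 m up.
print(two_link_ik(0.5, 0.2, l1=0.4, l2=0.3))
```
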


A Call to the Builders

To those building serious Physical AI: Figure.AI, Physical Intelligence, Agility, Unitree, OpenMind.

Let’s align.

Let’s build shared memory.

Shared context.

A common infrastructure for robots that need to work in the real world — now.

Get in touch with us at AugustMille.ai.

 

Next up: the prototypes, dead-ends, and messy breakthroughs that shaped our stack.

Subscribe. Share. Steal what’s useful.

Let’s make Physical AI unstoppable.
