There’s a version of this post that starts with “I’m not a developer” and ends with “anyone can build anything now.” That post is everywhere. It’s also not quite true, and it’s not what happened.
Here’s what actually happened: I built a functional AI travel app in a few days using Claude as my engineering partner. I never wrote a line of production code. But I wasn’t starting from zero either — and the gap between where I started and where you need to be to pull this off is worth being honest about.
What I already knew
I’m not a developer. But I’m not purely non-technical either. I’ve done some frontend work. I know my way around a terminal. I’ve used GitHub, bought domains, configured hosting — all from personal projects and an earlier startup. I understand how APIs work conceptually even if I’ve never built one.
That baseline mattered more than I expected. Not because I needed it to write code — I didn’t — but because it let me understand what Claude was doing well enough to make good decisions about it.
That’s the distinction worth drawing: technical literacy and coding ability are not the same thing. And in the AI-enabled builder era, the former has never been more valuable while the latter has never been less necessary.
The stack
Here’s what’s actually running Muévete:
- Frontend: Single HTML file. No framework. Leaflet.js for the map, CartoDB for tiles, OSRM for road routing.
- Backend: Two Vercel serverless functions — one proxying Anthropic, one proxying Pexels.
- AI: Claude Haiku via Anthropic API. About a penny per itinerary generation.
- Photos: Pexels API, server-side, with a scoring algorithm that filters out wedding photos and whiskey bottle product shots.
- Hosting: Vercel free tier. Auto-deploys on every GitHub push.
- Analytics: GA4 for custom events, Vercel Analytics for traffic.
- Total monthly cost at launch: Effectively zero until meaningful traffic.
What I learned during the build
Some of this I understood going in. Some of it I learned by doing.
Serverless functions. A small piece of backend code that lives on Vercel’s servers and acts as a middleman between your app and the APIs you’re calling. Your app calls your own endpoint. Your endpoint calls Anthropic or Pexels. Thirty lines of code that the entire security model depends on.
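The middleman pattern looks roughly like this. A minimal sketch, assuming Vercel's Node runtime — the file path, model choice, and request shape here are illustrative, not Muévete's actual code:

```javascript
// api/claude.js — hypothetical serverless proxy for the Anthropic API.
// The browser calls this endpoint; only the server ever sees the key.
async function handler(req, res) {
  if (req.method !== "POST") {
    return res.status(405).json({ error: "POST only" });
  }
  const upstream = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      // Read from an environment variable, never hardcoded.
      "x-api-key": process.env.ANTHROPIC_API_KEY,
      "anthropic-version": "2023-06-01",
    },
    body: JSON.stringify({
      model: "claude-3-haiku-20240307",
      max_tokens: 2048,
      messages: [{ role: "user", content: req.body.prompt }],
    }),
  });
  const data = await upstream.json();
  res.status(upstream.ok ? 200 : 502).json(data);
}

module.exports = handler;
```

That's the whole trick: the client never holds a secret, because the secret lives where the client can't see it.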
Environment variables. API keys live in a .env.local file locally and in your hosting dashboard in production. They never go in your codebase. They never go to GitHub. I knew this conceptually. I still had to debug it when vercel dev wasn’t reading mine correctly.
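Concretely, the local file is just key-value pairs. The variable names below are placeholders, not the app's real configuration:

```shell
# .env.local — read by `vercel dev` locally; set the same names in the
# Vercel dashboard for production. This file belongs in .gitignore.
ANTHROPIC_API_KEY=your-anthropic-key
PEXELS_API_KEY=your-pexels-key
```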
API cost modeling. Haiku costs roughly a penny per generation. The Pexels free tier covers 20,000 requests a month. At 10-15 photo requests per trip, that’s 1,300-2,000 generations before hitting a limit. Understanding these numbers — and being able to reason about them — is what lets you make intelligent tradeoffs between models, sources, and architecture without a CTO in the room.
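The arithmetic above fits in a few lines. A back-of-envelope sketch using the numbers from this post (the constants are the post's estimates, not billing data):

```javascript
// Worst-case capacity under the Pexels free tier.
const PEXELS_FREE_TIER = 20000;          // requests per month
const PHOTOS_PER_TRIP_HIGH = 15;         // upper end of 10-15 per trip
const HAIKU_COST_PER_GENERATION = 0.01;  // roughly a penny

const maxTrips = Math.floor(PEXELS_FREE_TIER / PHOTOS_PER_TRIP_HIGH);
const aiCostAtLimit = maxTrips * HAIKU_COST_PER_GENERATION;

console.log(maxTrips);      // about 1,333 trips/month before hitting the photo cap
console.log(aiCostAtLimit); // roughly $13 of Haiku spend at that volume
```

In other words: the photo API, not the AI, is the first ceiling you hit — which is exactly the kind of insight this math exists to surface.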
JSON schemas. The AI doesn’t return prose. It returns structured data — a JSON object the app parses and renders. Getting that schema right, and understanding why it breaks when the AI deviates from it, required me to think like an engineer even if I wasn’t writing the code.
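To make that concrete, here's a hypothetical slice of what an itinerary schema might look like, with field names invented for illustration. The point is that the client parses structured data, so any deviation — a prose preamble, a missing key — breaks rendering:

```javascript
// What the model is asked to return: pure JSON, no surrounding prose.
const raw = `{
  "city": "New Orleans",
  "days": [
    { "day": 1, "stops": [
      { "name": "French Quarter", "lat": 29.9584, "lng": -90.0644,
        "photoQuery": "French Quarter wrought iron balcony evening" }
    ] }
  ]
}`;

function parseItinerary(text) {
  const data = JSON.parse(text); // throws if the model wrapped JSON in prose
  if (!data.city || !Array.isArray(data.days)) {
    throw new Error("Itinerary missing required fields");
  }
  return data;
}

const itinerary = parseItinerary(raw);
console.log(itinerary.days[0].stops[0].name); // "French Quarter"
```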
What I never needed to learn
Syntax. Frameworks. How to actually debug code. The difference between a promise and a callback. I still don’t know these things in any meaningful way. Claude handled all of it.
What I provided was product judgment. When Claude suggested letting users supply their own API key to reduce hosting costs, I knew that was wrong — friction kills consumer products. When it pushed toward adding accounts early, I pushed back: we’d solve the same problem — saving and sharing itineraries — with shareable URLs instead. Persistent, social-ready, zero infrastructure, and no legal surface area. We also added social sharing links to improve virality. Both were product calls, not technical ones.
You can’t outsource that to the AI. It will do what you tell it. If you don’t know what to tell it, you’ll end up with something technically functional and practically useless.
The photo problem as a case study
Getting photos right took longer than the rest of the build combined. It’s worth walking through because it illustrates exactly where human judgment has to show up.
The first approach pulled images from Wikipedia. Logical in theory — Wikipedia has photos of almost everything. In practice: coat-of-arms SVGs, mathematical formulas, a CIA seal for a national park. We tried a five-step fallback chain. We tried filtering by file type and dimensions. We tried scraping article image lists. Nothing worked reliably.
Eventually I made a product decision: Pexels as the primary source, category icons as the honest fallback when nothing good is found. Pexels returned better photos, but “New Orleans French Quarter” still served up uninspiring stock shots of generic buildings. The fix was upstream — changing what the AI generates as a search query. Instead of a place name, I asked for an evocative phrase capturing street-level character. “French Quarter wrought iron balcony evening.” “Chinatown red lanterns night market.”
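The server-side filter is the other half of that decision. A minimal sketch of the kind of scoring involved — the reject words, weights, and field names here are invented for illustration, not the app's real algorithm:

```javascript
// Hard-reject photos whose descriptions suggest stock-photo noise.
const REJECT_WORDS = ["wedding", "bride", "bottle", "product", "logo"];

function scorePhoto(photo, query) {
  const alt = (photo.alt || "").toLowerCase();
  if (REJECT_WORDS.some((w) => alt.includes(w))) return -1; // hard reject
  let score = 0;
  // Reward overlap between the photo description and the search phrase.
  for (const word of query.toLowerCase().split(/\s+/)) {
    if (alt.includes(word)) score += 1;
  }
  // Prefer landscape images that fit a map card.
  if (photo.width > photo.height) score += 1;
  return score;
}

function bestPhoto(photos, query) {
  const ranked = photos
    .map((p) => ({ p, s: scorePhoto(p, query) }))
    .filter((x) => x.s >= 0)
    .sort((a, b) => b.s - a.s);
  return ranked.length ? ranked[0].p : null; // null → fall back to category icon
}
```

Notice how the evocative search phrase does double duty: it improves what Pexels returns, and it gives the scorer more words to match against.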
No amount of technical sophistication gets you there. That’s a product instinct about what the app is actually trying to do — make you feel like you’re somewhere, not just show you where it is on a map.
The minimum viable literacy
So what do you actually need to know to build in this era? My working list:
- How version control works. Git, GitHub, commit, push. Not deeply — just enough to not be afraid of it.
- What a server is and why it matters. The difference between code that runs in a browser and code that runs on a server is foundational to understanding how modern apps are structured.
- How to read an error message. Not fix it — just read it well enough to describe what’s happening to an AI that can fix it.
- What an API is. A way for software to talk to other software. You’re going to be stringing them together.
- Basic cost modeling. Requests times cost per request times expected volume. If you can’t do this math, you can’t make infrastructure decisions.
- Product judgment. What is this thing actually for? Who is it for? What does good look like? This one isn’t technical at all, but it’s the one that determines whether any of the rest of it matters.
That’s the list. It’s not long. Most of it isn’t even coding. But all of it is necessary.
What the gap actually looks like now
Smaller than it’s ever been. But not zero.
The people who think it’s zero are the ones building things that technically function and practically fail — because they couldn’t tell the AI what they actually wanted, couldn’t evaluate whether what came back was good, and couldn’t make the hundred small judgment calls that determine whether a product feels intentional or accidental.
The gap isn’t technical anymore. It’s conceptual. Can you think clearly about what you’re building? Can you communicate precisely? Do you know what good looks like?
If yes — you can build now. The tools are there. The cost is near zero. The only thing standing between an idea and a shipped product is clarity of thought.
Muévete AI is live at muevete.ai. The launch post is here.