EP 15 - What is the Figma MCP really like? Plus: China makes a big play for talent, and Qwen updates!
Key Takeaways
Business
- China (via companies like Alibaba) is making a major push in AI, accelerating competition for both talent and model leadership.
- Changes to H-1B visa dynamics create hiring and retention risks for U.S. startups, making alternative talent strategies and geographic flexibility more important.
- New platforms and SDKs (e.g., Cloudflare's Vibe SDK) open productization opportunities for companies that can integrate AI into scalable user experiences.
Technical
- Figma MCP has practical limitations for end-to-end web production — designers may still need engineering handoffs or additional tooling for robust implementations.
- Cloudflare's Vibe SDK enables integration with various AI tools and can be used to build scalable, real-time AI-powered features and workflows.
- AI podcasting presents distinct technical challenges (voice quality, editing, consistency, and tooling gaps) that make fully automated production difficult today.
Personal
- Hands-on experimentation is essential: try tooling like Figma MCP and the Vibe SDK yourself to understand trade-offs rather than relying on marketing claims.
- Plan hiring and career moves with immigration risk in mind — consider distributed teams or roles in jurisdictions with clearer talent pathways.
- Stay adaptable and keep learning new SDKs and LLM ecosystems to remain competitive as the AI tooling landscape evolves.
In this episode of The Build, Cameron Rohn and Tom Spencer dig into practical AI engineering and startup strategy as they unpack recent product work and industry shifts. They begin by outlining AI agent development workflows, contrasting memory systems with agent orchestration patterns and naming tools like LangSmith and MCP-based utilities for agent debugging and replay. The conversation then shifts to developer tooling and deployment, highlighting Vercel for frontend hosting and Supabase as a lightweight managed database and real-time layer that accelerates prototypes. They explore building-in-public strategies next, describing how telemetry, community feedback, and transparent roadmaps shorten iteration loops and enable early monetization experiments. The duo then examines concrete technical architecture decisions: when to use vector stores and embeddings versus structured memory, trade-offs in RAG pipelines, and how to compose microservices with serverless deployments and edge functions. They touch on AI agents, memory systems, and the role of developer workflows in sustaining velocity. Finally, they synthesize entrepreneurship insights about positioning, open-source contributions, and capital-efficient growth. The episode closes with a forward-looking call to action: developers and founders should iterate openly, instrument systems for learning, and keep architecture choices aligned with measurable product outcomes as they continue to build.
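For readers unfamiliar with the vector-store-versus-structured-memory trade-off mentioned above, a toy sketch can make it concrete. This is our illustration, not code from the episode: the 3-dimensional "embeddings" and memory contents are made up, and a real agent would use an embedding model and a proper vector database.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# "Vector store": free-text memories paired with toy embeddings.
# Recall is fuzzy — the best semantic match wins, even if imprecise.
vector_store = [
    ("user prefers dark mode", [0.9, 0.1, 0.0]),
    ("user is based in Tokyo", [0.1, 0.9, 0.2]),
]

# "Structured memory": exact facts the agent should never fuzzy-match,
# e.g. billing tier or quotas — a plain key lookup.
structured_memory = {"plan_tier": "pro", "api_quota": 1000}

def recall(query_embedding, k=1):
    """Semantic recall: return the top-k memories by cosine similarity."""
    ranked = sorted(
        vector_store,
        key=lambda m: cosine(query_embedding, m[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:k]]

print(recall([0.8, 0.2, 0.0]))         # fuzzy: nearest memory by meaning
print(structured_memory["plan_tier"])  # exact: no similarity search needed
```

The design point the hosts raise falls out of the sketch: embeddings are the right tool for recalling loosely related context, while facts that must be exact (quotas, tiers, IDs) belong in structured storage.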
© 2025 The Build. All rights reserved.