# Shipping AI Works v0.1
We shipped AI Works v0.1 last week. Here is how it happened and what we learned along the way.
## The Problem
Building AI workflows is still too fragmented. You have LLM providers over here, tool integrations over there, and a mess of glue code in between. We wanted a single workspace where teams could wire up AI pipelines visually and deploy them with one click.
## How We Built It
The first prototype took about two weeks. We started with a canvas-based workflow editor using React Flow, wired it to Deepgram's speech APIs for voice-triggered workflows, and added an execution engine that runs each node in sequence.
The tech stack is straightforward:
- React Flow for the visual editor
- Deepgram STT/TTS for voice nodes
- A lightweight execution engine in TypeScript
- Vercel for deployment
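To make the execution engine concrete, here is a minimal sketch of what "runs each node in sequence" could look like in TypeScript. The names (`WorkflowNode`, `executeSequential`) and the shape of a node are assumptions for illustration, not the actual AI Works API:

```typescript
// Illustrative node kinds; mirrors the six types discussed below.
type NodeKind = "input" | "output" | "llm" | "transform" | "condition" | "apiCall";

interface WorkflowNode {
  id: string;
  kind: NodeKind;
  // Each node transforms the value produced by the previous node.
  run: (input: unknown) => unknown | Promise<unknown>;
}

async function executeSequential(
  nodes: WorkflowNode[],
  initial: unknown
): Promise<unknown> {
  let value = initial;
  for (const node of nodes) {
    // Awaiting handles both sync and async node implementations.
    value = await node.run(value);
  }
  return value;
}
```

A two-node pipeline then reads naturally: pass the output of an uppercase transform into a transform that appends a suffix, and `executeSequential` threads the value through in order.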
## What We Learned
Keep the node types minimal. We started with 15 node types and cut the list to six. It turns out people only need input, output, LLM, transform, condition, and API call; everything else is a composition of those.
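Since everything else composes from those six, the set can be sketched as a discriminated union in TypeScript. The field names here are assumptions for illustration, not AI Works' real schema:

```typescript
// Hypothetical shapes for the six core node types.
type FlowNode =
  | { kind: "input"; name: string }
  | { kind: "output"; name: string }
  | { kind: "llm"; prompt: string }
  | { kind: "transform"; fn: (v: unknown) => unknown }
  | { kind: "condition"; test: (v: unknown) => boolean; then: FlowNode[]; else: FlowNode[] }
  | { kind: "apiCall"; url: string; method: "GET" | "POST" };

// A dropped "router" node, for instance, is just a condition over two branches.
const router: FlowNode = {
  kind: "condition",
  test: (v) => typeof v === "string" && v.length > 200,
  then: [{ kind: "llm", prompt: "Summarize the following text:" }],
  else: [{ kind: "output", name: "result" }],
};
```

A discriminated union like this also lets the execution engine switch exhaustively on `kind`, so adding or removing a node type is a compile-time event rather than a runtime surprise.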
Voice-first changes the UX. When you can speak to trigger a workflow, the whole interaction model shifts. We are still figuring out the right patterns here.
## What's Next
v0.2 is focused on collaboration — shared workspaces, real-time cursors, and a template library. We are also exploring agent-based nodes that can make autonomous decisions within a workflow.
Stay tuned.
