In 2012, Bret Victor gave a talk called “Inventing on Principle”. Bret’s ideas about immediacy in creative tools reminded me of a piece I wrote back in 2009 on reaching the speed of thought — the idea that the best tools collapse the gap between intention and result. Thirty minutes in, he demos an animation app for iPad where he performs an animation in real time: he drags an asset with his finger while the timeline plays, and the motion is captured exactly as he moves. No keyframes. No tweens. The tool gets out of the way and lets him just do the thing.

Bret never released that app, and for over a decade I wanted something like it to exist.

Recently, my wife — a surgeon — started creating short Osmosis-style instructional videos for her Instagram. I would sometimes help with production, but the tooling always added more friction than it should. We needed a lightweight, integrated way to animate different assets together and produce a complete explainer video without a complex post-production pipeline.

Muy is my answer to both of those things. Named after Eadweard Muybridge — the photographer who first captured motion in sequence — it is a browser-based animation tool where you record animations by performing them, not specifying them.

Product principles as constraints

Before building, I set five principles for the MVP. These were meant to function as constraints, differentiators, and a validation checklist.

  1. Web-based. Modern web APIs have come far enough that a PWA can deliver a UX indistinguishable from a native app. Figma is the proof. Running in the browser meant no install friction, cross-device access, and all the flexibility of the web platform.
  2. Optimized for iPad, but usable on desktop. Being web-based made this nearly automatic. I wanted to respect where people actually create content — not tethered to a workstation.
  3. No AI. There is already enough AI-generated slop in the world. Muy is deliberately about enabling human expression. The goal is to help people unlock their intent without AI assistance. I may add minor AI aids in the future, but the core tool is for human authorship.
  4. No back-end. Everything stays client-side. Projects are stored in IndexedDB. No authentication, no accounts — you just open the URL and use it, like Excalidraw. Simpler, cheaper, and it gives users a sense of ownership and control (see the persistence sketch just after this list).
  5. Production-ready. The MVP had to produce actual usable output for a wide range of applications — not just a demo. That meant real video export and the ability to compose a complete animation from scratch.
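
To make the no-back-end principle concrete, here is a minimal sketch of what client-side persistence can look like with raw IndexedDB. This is an illustration, not Muy's actual code; the database name, store name, and Project shape are all hypothetical:

```ts
// Minimal client-side persistence sketch. "muy-db" and "projects"
// are hypothetical names, not Muy's actual schema.

interface Project {
  id: string;
  name: string;
  updatedAt: number;
  data: unknown; // layers, tracks, asset references, etc.
}

function openDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open("muy-db", 1);
    request.onupgradeneeded = () => {
      // Runs on first open (or version bump): create the store keyed by id.
      request.result.createObjectStore("projects", { keyPath: "id" });
    };
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

async function saveProject(project: Project): Promise<void> {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const tx = db.transaction("projects", "readwrite");
    tx.objectStore("projects").put(project); // upsert by id
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}
```

No server, no session, no sync: the browser is the whole stack.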

Building with agents

The MVP took two weeks, working in iteration cycles — sometimes carefully designing features in Figma first, sometimes going straight to code. This was also a deliberate exercise in integrating agentic coding into a design workflow.

[Screenshot: an early version of the app]
Early version. Strong evidence that Figma is definitely still essential for agentic coding.

I used Plan Mode extensively for features that required careful thinking before implementation. I used different models for different tasks: frontier models like Claude for complex reasoning and architecture decisions, and lighter local models for more mechanical generation, to avoid burning token budget unnecessarily. Using a component library like shadcn/ui made a lot of things faster and easier while still leaving meaningful room for customization — particularly in the custom property widgets and the scrubber component.

[Screenshot: color and palette picker]

The process reinforced something I had suspected but not fully internalized: the value of agentic coding is not just speed. It is the ability to iterate quickly on changes while you explore a problem space. In this case, I was one of the users myself, along with a few friends, so the feedback loops were short.

The core mechanic

The centerpiece of Muy is performance recording. You select a layer, hit Play, and manipulate its properties while the animation plays. Every frame you move through, the new property value is recorded. When you stop, the animation reflects exactly what you did.
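
In sketch form, the loop is simple: while playback runs, every animation frame samples the property being manipulated and writes it into a per-property track. This is an illustration of the idea rather than Muy's actual implementation, and every name below is hypothetical:

```ts
// Record-while-playing sketch (hypothetical names, not Muy's code).
const FPS = 30;

type Track = Map<number, number>; // frame index -> recorded value

function recordWhilePlaying(
  track: Track,
  getLiveValue: () => number, // e.g. reads the dragged layer's current x
  durationFrames: number,
  onDone: () => void,
) {
  const start = performance.now();

  function tick(now: number) {
    const frame = Math.floor(((now - start) / 1000) * FPS);
    if (frame >= durationFrames) {
      onDone(); // playback reached the end; the performance is baked in
      return;
    }
    // Overwrite this frame with whatever the user is doing right now.
    track.set(frame, getLiveValue());
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}
```

Notice what is missing: there is no interpolation step. The track is the performance.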

Instead of specifying where something should be at each keyframe and trusting the software to interpolate, or spending too long planning what should be a simple animation, you just show the tool what you want. The interaction model is much closer to puppeteering than programming. Are we ready for vibe animating? Sorry I suggested that 😬️

Alongside position, you can also perform rotation, scale, transparency, and a character-by-character text or vector path reveal — all using the same record-while-playing model.
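
One plausible data shape for this (again hypothetical, not Muy's actual model) is a sparse map per property, with playback holding the most recent recorded value:

```ts
type PropertyName = "x" | "y" | "rotation" | "scale" | "opacity" | "reveal";

// One sparse track per performed property: frame index -> recorded value.
// "reveal" would be a 0..1 progress driving the character or path reveal.
type LayerTracks = Partial<Record<PropertyName, Map<number, number>>>;

// Hold the most recent recorded value at or before the requested frame
// (a naive linear scan, kept simple for clarity).
function valueAt(track: Map<number, number>, frame: number): number | undefined {
  for (let f = frame; f >= 0; f--) {
    const v = track.get(f);
    if (v !== undefined) return v;
  }
  return undefined;
}
```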

The interesting design challenge is not the recording itself — it is making the tool invisible enough that the performer stays in flow.

Try it out

The hosted version lives at muy.video. The code is open-source at github.com/jpfaraco/muy.

Saved projects are stored locally in IndexedDB, and you can export them as self-contained .muy files with inlined base64 assets. The app installs as a PWA and runs in standalone mode, indistinguishable from a native app.
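
The export itself needs nothing beyond standard web APIs. As a rough sketch (the real .muy format may differ), you can inline each asset as a base64 data URL and hand the result to the browser as a download:

```ts
// Export sketch: hypothetical structure, not the actual .muy format.
async function blobToDataUrl(blob: Blob): Promise<string> {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result as string);
    reader.onerror = () => reject(reader.error);
    reader.readAsDataURL(blob); // "data:image/png;base64,..."
  });
}

async function exportProject(
  project: { name: string },
  assets: Map<string, Blob>,
) {
  const inlined: Record<string, string> = {};
  for (const [id, blob] of assets) {
    inlined[id] = await blobToDataUrl(blob);
  }
  const file = new Blob([JSON.stringify({ ...project, assets: inlined })], {
    type: "application/json",
  });
  // Trigger a download of the self-contained project file.
  const a = document.createElement("a");
  a.href = URL.createObjectURL(file);
  a.download = `${project.name}.muy`;
  a.click();
  URL.revokeObjectURL(a.href);
}
```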

I hope to keep evolving it. And I hope other people find it useful for making things.