Plan One Shot, or Animate One Image Directly
Build a single scene end-to-end with reference images, or skip straight to direct image animation. Saved images stay in your library so you can reuse them without uploading again.
Reference uploads and generated outputs use stable `/uploads/...` media paths, so the app can serve them locally or from S3-backed object storage without any UI changes.
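As a rough illustration of how a stable `/uploads/...` path can be resolved against either backend, here is a minimal sketch. The backend names, bucket URL, and function name are assumptions for illustration, not the app's real configuration:

```typescript
// Hypothetical sketch: resolve a stable /uploads/... path to a servable URL.
// The s3BaseUrl default is a placeholder, not a real bucket.
type StorageBackend = "local" | "s3";

function resolveMediaUrl(
  uploadPath: string, // e.g. "/uploads/refs/inverter-hero.png"
  backend: StorageBackend,
  s3BaseUrl = "https://example-bucket.s3.amazonaws.com"
): string {
  if (!uploadPath.startsWith("/uploads/")) {
    throw new Error(`Expected a stable /uploads/... path, got: ${uploadPath}`);
  }
  // Local serving returns the path as-is; S3 serving prefixes the bucket URL.
  return backend === "local" ? uploadPath : `${s3BaseUrl}${uploadPath}`;
}
```

Because the path itself never changes, switching backends is a matter of changing the prefix, which is why the UI can stay untouched.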
Generation Mode
Choose whether we should create a fresh frame first or animate an existing image directly.
1. Reference Stack
Upload product, person, and support references. Place the strongest anchor first.
Upload your first anchor image
Start with the product or face that must stay the most consistent. Add more references for environment, styling, and extra details afterward.
Sign in to keep a reusable library of uploaded images and a persistent history of every generation for debugging.
2. Direct the Frame
Generate a visible OpenAI frame preview, regenerate until it is right, then continue to animation.
Tip: you can refer to Step 1 reference titles directly (example: "Use Ref 1 - Inverter hero as the main product").
3. Direct the Motion
This becomes the animation brief for xAI. Keep it focused on one continuous shot with synchronized speech and music.
Leave this blank if the clip should be silent or if no spoken line is needed.
Use this to test alternatives quickly. Keep `grok-imagine-video` if you are unsure.
Prompt Plan Preview
This is the exact plan that will be sent when you click generate.
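To make the preview concrete, a plan like the one described above might carry fields along these lines. This shape is an assumption pieced together from the steps in this section, not the real request payload:

```typescript
// Hypothetical shape of the one-shot prompt plan; the actual payload may differ.
interface PromptPlan {
  mode: "one-shot" | "direct-animate"; // Generation Mode choice
  framePrompt: string;                 // Step 2: frame direction for OpenAI
  motionPrompt: string;                // Step 3: animation brief for xAI
  spokenLine?: string;                 // optional; omitted for silent clips
  videoModel: string;                  // e.g. "grok-imagine-video"
  referencePaths: string[];            // stable /uploads/... paths from Step 1
  creditCost: number;                  // 15 per one-shot generation
}
```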
Credits required: 15 per one-shot generation
No references yet.
Latest Output
Track the current run with a visible timeline, then inspect the frame and final clip.
Generation History
Review previous runs, reopen outputs, and inspect failures for debugging.