Published April 8, 2026

Build an Automated Ad Generator with This New Tool (ElevenLabs Flows)

Beginner
Step 1

Open Flows and name the canvas

Log into ElevenLabs, click ElevenCreative in the sidebar, then open Flows. Click + New Flow and name it something reusable, such as “[your product line] ad template.” You are building a template, not a one-shot render, so the name matters.

Pro tip: This workflow was tested on the $5 Starter plan. Video generation in Flows is paid only, but you do not need Creator or Pro to run the image and video nodes in this guide, and you can test image generation for free.
Step 2

Add an image node with reference photos

Click the plus on the canvas and add an Image Generation node. Click Reference Images and upload photos of your real product. Repeat with different reference photos to help the AI lock in on the product’s design.


Now write a prompt that describes the scene, not the product. The references handle the product for you.

Prompt
The product (see references) on a clean seamless studio backdrop, soft diffused lighting from above, subtle shadow underneath, shallow depth of field, photorealistic product shot.

Run the node. If the scene is off, tweak the prompt and rerun. Only this node updates, so you are not paying for the rest of the pipeline while you iterate on the look.

Pro tip: Most people try to describe the product in the prompt. Don’t. Let the references do it. That separation is what keeps your real product consistent across every variation.
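The scene-only prompting pattern is easy to systematize if you manage many products. As a rough sketch (the function, file paths, and field names here are hypothetical, not part of Flows), you can keep reference photos and scene descriptions separate so each new ad only swaps the scene text:

```python
# Hypothetical sketch: keep product references and scene prompts separate,
# mirroring the Flows pattern where reference images define the product
# and the prompt describes only the scene.

PRODUCT_REFERENCES = [
    "refs/bottle_front.jpg",  # example paths, swap in your own photos
    "refs/bottle_side.jpg",
]

def scene_prompt(scene: str) -> str:
    """Build an image prompt that describes the scene, never the product."""
    return f"The product (see references) {scene}"

studio = scene_prompt(
    "on a clean seamless studio backdrop, soft diffused lighting from above, "
    "subtle shadow underneath, shallow depth of field, photorealistic product shot."
)
```

Because the product is never described in text, changing the scene string cannot drift the product's design, which is exactly the separation the pro tip relies on.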
Step 3

Connect a video node to the image

Add a Video Generation node to the canvas. Drag a line from the image node’s output into the video node’s start frame input. Drop in a short motion prompt and run it.

Prompt
Slow cinematic push-in on the product, subtle camera drift, soft light shifting across the surface, shallow depth of field.

Because the start frame came from your reference-based image, your real product carries through into the clip without ever being described in the motion prompt.

Pro tip: One of the most fun parts of this workflow is generating a different prompt each run. Click the text button on any node to have AI write a new prompt each time.
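If you want the same varied-prompt behavior when drafting outside the canvas, a variation picker is a few lines of Python. This is an illustrative sketch, not a Flows feature, and the motion phrases are made-up examples:

```python
import random

# A small pool of motion prompts; each run draws a different one,
# like clicking the AI prompt button on a node.
MOTION_PROMPTS = [
    "Slow cinematic push-in on the product, subtle camera drift.",
    "Gentle orbit around the product, soft light shifting across the surface.",
    "Static frame with a slow rack focus onto the product label.",
]

def next_motion_prompt(rng: random.Random = random) -> str:
    """Pick one motion prompt at random for the next run."""
    return rng.choice(MOTION_PROMPTS)
```

Passing a seeded `random.Random` makes a run reproducible when you want to regenerate a clip you liked.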
Step 4

Swap models, export, and clone

Every node in Flows has a model picker in its settings. On the image node, switch between the image models bundled into Flows to see which one handles your product references best.

Do the same on the video node for motion, and on the voice node when you add audio later. This is the hidden value of Flows: you can test one model against another without rebuilding the canvas or leaving the app.

Remember that clicking Run fires only that single node. To re-fire all upstream nodes as well, open the Run dropdown and choose Run till here.


When the output looks right, click Export on the video node to save the MP4. Then use the canvas menu to Duplicate Flow. Swap the reference photos and the scene prompt for your next product, and the second ad is done in minutes.
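Conceptually, duplicating a flow and swapping references is a deep copy plus two field updates. Here is a hypothetical sketch of that data model; Flows does not expose its internals, and every name below is invented for illustration:

```python
from copy import deepcopy

# Invented representation of a flow template: an image node with
# product references plus a downstream video node.
ad_template = {
    "name": "coffee line ad template",
    "image_node": {
        "references": ["refs/coffee_front.jpg", "refs/coffee_side.jpg"],
        "prompt": "The product (see references) on a clean studio backdrop.",
    },
    "video_node": {"prompt": "Slow cinematic push-in on the product."},
}

def clone_for_product(template: dict, name: str, references: list, scene: str) -> dict:
    """Duplicate the flow, then swap only the references and the scene prompt."""
    flow = deepcopy(template)  # deep copy so the original template is untouched
    flow["name"] = name
    flow["image_node"]["references"] = references
    flow["image_node"]["prompt"] = scene
    return flow

tea_ad = clone_for_product(
    ad_template,
    "tea line ad template",
    ["refs/tea_front.jpg"],
    "The product (see references) on a rustic wooden table, soft morning light.",
)
```

The deep copy matters: a shallow copy would share the nested node dicts, so editing the clone would silently corrupt the original template.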

Step 5

Going further: Add audio and build reusable ad formats

If you want audio on the clip, add a Text to Speech or Music node and connect it into a Mix Audio node alongside the video. This is how Flows layers voice or a soundtrack onto the clip without leaving the canvas.

Once you have one template working, build a small library of Flows for the formats you use most: a 15-second hero ad, a 30-second explainer, or a UGC talking head. Each one is a canvas you only build once, then clone forever.
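The format library can be tracked as a simple registry keyed by ad format. Again a hypothetical sketch; the durations and node lists are examples, not Flows data:

```python
# Invented registry of reusable ad formats and the nodes each canvas uses.
FLOW_LIBRARY = {
    "hero_15s": {"duration_s": 15, "nodes": ["image", "video", "music", "mix"]},
    "explainer_30s": {"duration_s": 30, "nodes": ["image", "video", "tts", "mix"]},
    "ugc_talking_head": {"duration_s": 20, "nodes": ["video", "tts", "mix"]},
}

def formats_within(max_seconds: int) -> list:
    """Return the format names that fit within a time budget."""
    return [name for name, spec in FLOW_LIBRARY.items()
            if spec["duration_s"] <= max_seconds]
```

A registry like this keeps the "build once, clone forever" library browsable, so picking the right template for a placement is a lookup rather than a memory test.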