
Guide to Better Cinematic AI Videos: Runway Gen-3 Prompt Strategies

Apr 30

3 min read


[Image: a Midjourney-generated image created with the Midjourney Automation Suite]

Making cinematic AI videos with tools like Runway Gen-3 is exciting. But getting the exact look and motion you want often comes down to your prompts. When Gen-3 first came out, many people found prompt writing tricky. The results weren't always what they expected. This guide shares simple ways to write better prompts for those high-quality, cinematic AI videos.

Start with Reference Images

One of the best ways to guide Runway Gen-3 is by using reference images. You can upload a photo and tell the AI to use it. A powerful method is setting the reference image as the *last frame* of the video. This means the AI figures out how to get from the start to that final image. It gives you a clear goal for the video's movement and look. Just upload your image, check the "last frame" option (currently works in Gen-3 Alpha, not Turbo mode), and write a prompt describing the motion that would lead to that picture.
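If you script your generations rather than clicking through the web UI, the last-frame workflow can be sketched as a request payload. The field names below are invented for illustration and are not Runway's actual API; check Runway's developer documentation for the real parameters.

```python
# Hypothetical request payload for a last-frame generation.
# Every field name here is made up for illustration only.

def last_frame_request(image_path: str, motion_prompt: str) -> dict:
    return {
        "model": "gen3-alpha",          # last-frame currently works in Alpha, not Turbo
        "reference_image": image_path,  # the picture the video should end on
        "image_position": "last",       # treat it as the final frame
        "prompt": motion_prompt,        # the motion that leads to that picture
    }

req = last_frame_request(
    "sunset_over_temple.png",
    "camera slowly pulls back as the sun sets behind the temple",
)
```

The point of the structure is the pairing: a target image plus a prompt describing how the shot arrives there.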

Keep Your Colors Right

Sometimes, Gen-3 can change the colors from your reference image. If your photo has soft, desaturated colors, the AI video might end up with much brighter, more vivid colors. This can make the final video look different from your vision. To fix this, tell the AI exactly what colors you want in your prompt.

Prompt Keywords for Color Control

Include phrases like:

  • desaturated pastel colors

  • muted colors

  • low contrast

Adding these types of descriptions helps the AI keep the colors consistent with your original idea or reference image throughout the whole clip.
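If you reuse the same color descriptors across many prompts, a tiny helper keeps them consistent. This is a plain string-building sketch; the function name is hypothetical and nothing here touches any real Runway API.

```python
# Hypothetical helper: prepend color-control descriptors to a Gen-3 prompt.
# Putting them first gives them more weight in the prompt.

COLOR_KEYWORDS = ["desaturated pastel colors", "muted colors", "low contrast"]

def with_color_control(base_prompt: str, keywords=COLOR_KEYWORDS) -> str:
    """Return the prompt with color descriptors placed at the front."""
    return ", ".join(keywords) + ", " + base_prompt

prompt = with_color_control("woman smiles and speaks, static camera")
# e.g. "desaturated pastel colors, muted colors, low contrast, woman smiles and speaks, static camera"
```

Keeping the descriptors in one list also makes it easy to swap in a different palette for a new project.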

Guiding Camera Motion

Controlling camera movement in Gen-3 can be hard. The AI sometimes zooms in even when you ask it to zoom out. But you can use certain prompt techniques to influence how the camera moves.

Controlling Camera Movement with Prompts

Try these approaches:

  • Use "static shot" if you want less camera motion.

  • Describe the viewpoint: "drone shot," "bird's eye view."

  • Mention directions: "elevates into the air," "tilts up to the moon." Adding environmental details (like the moon's position above a temple) gives the AI clues for motion.

  • Use "fpv fly through." This often produces a forward-moving, drone-like shot.

Instead of only using camera terms, think about how describing the scene and subject can affect the camera path. For example, prompting the camera to zoom in on a person's "eyes" can help the AI keep the focus on the face and might even make the person turn towards the camera.
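The camera techniques above can be collected as reusable prompt fragments. The keyword strings come from this article; the dictionary and function are an illustrative sketch, not part of any Runway tooling.

```python
# Hypothetical sketch: camera-motion keywords as reusable prompt fragments.
CAMERA_MOVES = {
    "static": "static shot",
    "drone": "drone shot, bird's eye view",
    "rise": "camera elevates into the air, tilts up to the moon",
    "fpv": "fpv fly through",
}

def camera_prompt(move: str, scene: str) -> str:
    """Lead with the camera keyword so it gets more weight, then the scene."""
    fragment = CAMERA_MOVES.get(move, "")
    return f"{fragment}, {scene}" if fragment else scene

shot = camera_prompt("fpv", "temple at night, moon rising above the roofline")
```

Note how the scene half still carries environmental clues (the moon above the temple) that give the AI a motion target.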

Getting precise control over AI generation, whether for video or images, often involves detailed prompt work and refinement. If you find yourself generating many variations, tools designed to automate and manage AI generation workflows can save significant time. Explore resources like the TitanXT Midjourney Automation Suite to see how it can help streamline your creative process.

Refining Your Prompts Takes Practice

You won't always get the perfect result on the first try. Getting the right prompts is a step-by-step process. You try something, see the video, and then adjust. You might try different keywords, move them to the start of the prompt to give them more weight, or add new details.

Think of animating a person talking and smiling. You might start by asking for "muted colors low contrast woman smiles and speaks." The video might look okay, but maybe something weird appears in the background, like an extra object. You might then add "static camera" to reduce movement and prevent the background issue. But maybe now she only smiles and doesn't speak. Next, you could try putting "smiling and speaking" at the very start of the prompt. Or maybe change the word "speaking" to "talking." Sometimes, a small word change can make a big difference and finally get the motion you wanted.

This trial-and-error is normal. The AI can react in surprising ways to different words. Finding what works best means testing ideas and making small changes to see what happens.
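The trial-and-error loop above (reorder keywords, swap a word, re-render, compare) can be partly mechanized by generating the variations up front. A minimal sketch, with all names hypothetical; you would still review each rendered clip by hand:

```python
# Hypothetical sketch of the refinement loop: generate prompt variations by
# reordering keyword phrases and swapping synonyms (e.g. "speaking" -> "talking").
from itertools import permutations

def prompt_variations(phrases, synonyms=None):
    """Yield unique prompts: every ordering of the phrases, each also emitted
    with the given synonym substitutions applied."""
    synonyms = synonyms or {}
    seen = set()
    for order in permutations(phrases):
        base = ", ".join(order)
        candidates = [base]
        for word, alt in synonyms.items():
            candidates.append(base.replace(word, alt))
        for c in candidates:
            if c not in seen:
                seen.add(c)
                yield c

variants = list(prompt_variations(
    ["smiling and speaking", "muted colors low contrast", "static camera"],
    synonyms={"speaking": "talking"},
))
```

Three phrases give six orderings, and each synonym swap doubles that, so small keyword sets already produce plenty of candidates to test.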

If you're using AI image generation tools like Midjourney alongside video tools, managing multiple prompt variations and generations can get complex. The TitanXT Midjourney Automation Suite is a tool built to help automate these tasks, allowing you to test prompt variations more easily and organize your output efficiently based on specific criteria.

Conclusion

Creating cinematic AI videos with Runway Gen-3 gets easier with these prompt strategies. Using reference images as the final frame, controlling colors with specific descriptors like "desaturated" or "muted," and influencing camera motion with environmental clues and strategic keywords are all valuable techniques. Remember that perfecting your prompts is often a process of trying, watching, and adjusting. Don't be afraid to experiment with different wording until you achieve the cinematic look you want.

For those working extensively with Midjourney generations and looking to enhance their workflow, consider how automation can help. The TitanXT Midjourney Automation Suite helps automate parts of the ideation and generation process, letting you focus more on refining your creative vision.
