
Bringing Images to Life: Exploring Midjourney's Video Capabilities

Jul 29

6 min read


A Midjourney generated image using Midjourney Automation Suite

Big news for creators! Midjourney just made a huge leap into video. Many of us woke up to a gallery full of animated images, a clear sign of Midjourney's new video model (V1) being live. This move changes the game. Midjourney was already a leader in image creation. Now, it is also a top contender in video generation, proving it can quickly catch up and even move ahead of others in the field.

The new video model handles all kinds of artistic styles. From detailed animation to photorealistic scenes and even animated drawings, the quality is impressive. What stands out is how consistent the results are: odd movements and strange artifacts are rare, and the videos move smoothly with solid animation throughout. Let's take a closer look at how the new feature works.

From Still to Motion: The Workflow

The Image-to-Video Approach

Unlike some other tools, Midjourney's video model does not directly turn text into video. It uses a two-step process: first, you create an image from text, and then you animate that image. Your starting image always forms the first frame of your video. You cannot use it as an end frame right now.

Here's how to begin (a rough sketch of this flow, in code, follows the list):

  • Start by creating an image using a classic prompt (like "Will Smith eating spaghetti"). Midjourney will make four images for you.

  • Once you have your images, you can animate one of them. You will see a new "Animate" button when you move your mouse over an image or right-click it.

  • Clicking on the image itself also shows new options for animation.
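
If you script your generations, the two-step structure looks roughly like the sketch below. Midjourney has no official public API, so the client object and every method and parameter name here (imagine, animate, motion, and so on) are invented purely for illustration.

```python
# A minimal sketch of the image-then-animate flow, assuming a hypothetical
# wrapper around Midjourney. None of these names are a real Midjourney API.
from typing import Optional


def animate_from_text(client, prompt: str, image_index: int = 0,
                      motion: str = "low",
                      manual_prompt: Optional[str] = None):
    """Step 1: generate four images from a text prompt.
    Step 2: animate one of them; that image becomes the video's first frame."""
    images = client.imagine(prompt)        # four candidate images come back
    start_frame = images[image_index]      # pick one of the four (0-3)
    return client.animate(
        start_frame,
        motion=motion,                     # "low" or "high"
        prompt=manual_prompt,              # None = automatic mode, text = manual mode
    )


# Example: animate the second image with gentle, automatic motion.
# video = animate_from_text(client, "Will Smith eating spaghetti",
#                           image_index=1, motion="low")
```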

Automatic vs. Manual Motion

You have two main ways to animate your image:

  • Automatic mode: This directly animates your image without letting you add a text prompt for further guidance.

  • Manual mode: You get to add a written prompt to guide the animation, helping you get the specific movement you want.

Low Motion vs. High Motion

Within both automatic and manual modes, you can pick "Low Motion" or "High Motion"; a small sketch of the four combinations follows the list below.

  • Low Motion creates less dramatic movements. These videos are often clearer and have very good quality.

  • High Motion makes more dynamic and active videos with bigger movements. The tool takes more "risk," so there can be a higher chance of little glitches. However, the movement is more pronounced.
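
If it helps to see the option space laid out, here is a tiny Python sketch (the class names are made up, nothing official) that simply enumerates the four combinations of animation mode and motion level described above.

```python
from enum import Enum
from itertools import product


class AnimationMode(Enum):
    AUTOMATIC = "automatic"   # no extra text prompt; Midjourney decides the motion
    MANUAL = "manual"         # you write a prompt describing the movement you want


class MotionLevel(Enum):
    LOW = "low motion"        # subtle movement, usually cleaner results
    HIGH = "high motion"      # bigger, riskier movement with more chance of small glitches


# The four combinations you choose from when animating an image.
for mode, level in product(AnimationMode, MotionLevel):
    print(f"{mode.value} + {level.value}")
```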

Midjourney quickly generates four videos at a time, often in less than a minute. This is a big plus compared to other tools that make one video at a time and take longer. The quality on display is very high.

To get a better look at one video, simply click on it. This lets you focus on the details. You can also use keyboard shortcuts to move between the generated videos.

Understanding Costs and Plans

Each video generation uses about 8 minutes of your Midjourney credit. To compare, a typical image generation uses about one minute. This means that generating a video is like making around eight images at once.

If you have a standard plan with 15 hours of generation per month, you can make a little over 100 video generations monthly. This offers great value for the price.
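
As a quick sanity check on those figures (about 8 minutes of credit per video generation, about 1 minute per image, 15 hours on the standard plan), the arithmetic works out like this:

```python
# Back-of-the-envelope check of the credit figures quoted above.
MINUTES_PER_VIDEO_JOB = 8     # approximate credit cost of one video generation
MINUTES_PER_IMAGE_JOB = 1     # approximate credit cost of one image generation
STANDARD_PLAN_HOURS = 15      # generation hours included in the standard plan

budget_minutes = STANDARD_PLAN_HOURS * 60
video_jobs_per_month = budget_minutes // MINUTES_PER_VIDEO_JOB
images_per_video_job = MINUTES_PER_VIDEO_JOB // MINUTES_PER_IMAGE_JOB

print(f"One video job costs about as much as {images_per_video_job} image jobs.")
print(f"A {STANDARD_PLAN_HOURS}-hour plan covers roughly {video_jobs_per_month} video jobs per month.")
# -> roughly 112 video generations, i.e. "a little over 100"
```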

For those with a Pro or Mega plan, you can use "relax mode." This allows for unlimited video generation, though videos might take longer. The big benefit is that these generations do not use up your credit. This makes Midjourney a very affordable choice for AI video creation if you have a higher-tier plan.

Managing these new video features and your Midjourney usage can be complex, especially with different generation modes and costs. Consider using the Midjourney Automation Suite from TitanXT to help streamline your workflow and optimize your creative process.

Refining Your Video Creations

Viewing and Control Options

The new interface offers helpful viewing tools:

  • Autoplay: Right-click on a video and check "autoplay video." All videos will then start playing on their own without needing you to hover your mouse over them.

  • Stop and Scrub: On a Mac, hold the Command key (or Control on Windows) to pause a playing video. Once paused, you can drag your mouse left or right across the video to scrub through it frame by frame. This helps you check details and pick the best video from your set.

Extending and Customizing

Midjourney also lets you extend your videos by 4-second blocks, up to a total of 21 seconds. You can do this with "Auto Extend" (no new prompt) or "Manual Extend" (add a prompt). Manual extend is useful if you want to change what happens in the added video segment.
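
The length of the initial clip is not stated above; assuming it is around 5 seconds, the 4-second blocks and the 21-second ceiling line up as follows.

```python
# Extension timeline, assuming (not stated above) a roughly 5-second initial clip.
INITIAL_SECONDS = 5
EXTENSION_SECONDS = 4
MAX_TOTAL_SECONDS = 21

length = INITIAL_SECONDS
extensions = 0
while length + EXTENSION_SECONDS <= MAX_TOTAL_SECONDS:
    length += EXTENSION_SECONDS
    extensions += 1
    print(f"After extension {extensions}: {length} seconds")
# -> 9, 13, 17, 21 seconds: four 4-second extensions reach the 21-second cap
```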

It is important to know that your video will use the same aspect ratio as your original image. If your image is tall (like a phone screen), your video will also be tall; you cannot give the video a different aspect ratio from the starting image.

You can also animate your own images that were not made in Midjourney. Simply drag and drop your photo into Midjourney, set it as the "start frame," and then add a prompt to animate it. For example, you could animate yourself tearing off a coat to reveal a "Superman" shirt. (The results can be fun, even if Superman gets long hair!)
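
In scripted form, reusing the invented client from the earlier sketch, animating your own photo only changes where the start frame comes from; upload_image and animate remain hypothetical names, not a real Midjourney API.

```python
# Continuing the hypothetical-client sketch from earlier; these are invented names.
def animate_uploaded_photo(client, path: str, prompt: str, motion: str = "high"):
    """Use your own photo as the start frame and guide the motion with a prompt."""
    start_frame = client.upload_image(path)   # the drag-and-drop equivalent
    return client.animate(start_frame, motion=motion, prompt=prompt)


# Example from the text: a coat torn off to reveal a "Superman" shirt.
# video = animate_uploaded_photo(
#     client, "me_in_a_coat.jpg",
#     "the person tears off their coat to reveal a Superman shirt underneath")
```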

Downloading Your Videos

When you want to download a video, you will see two buttons:

  • Standard Download: This gives you a smaller video size (e.g., 800x400 pixels). The quality within that small size is good, but sharing it on social media might cause quality loss as platforms re-encode it.

  • Social Media Download: This option creates a larger video (almost double the size), which fills more of your screen. This larger size helps the video look better when shared on social platforms, even if the raw detail quality remains similar to the standard version. Midjourney is also planning a video upscaling tool that will add more detail when videos are enlarged.

Mastering Video Prompts and Camera Control

Midjourney is new to video, so it does not have specific tools for camera movements like other platforms do. To control camera actions (like zoom out), you need to type exactly what you want into your prompt. This means strong prompting skills are key for video as well as images.
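
Because every camera move has to be spelled out in words, it can help to build prompts from reusable fragments. The snippet below is only a sketch; the exact phrasing is something to experiment with, not a documented Midjourney syntax.

```python
# Illustrative prompt builder for manual-mode video prompts.
# The fragment wording is an assumption, not an official syntax.
SUBJECT_MOTION = {
    "sing": "the two characters sing and rap, moving their hands expressively",
    "reveal": "the subject tears off their coat to reveal the shirt underneath",
}

CAMERA_MOTION = {
    "zoom_out": "the camera slowly zooms out",
    "shake": "the camera shakes slightly, with a handheld feel",
    "static": "the camera stays perfectly still",
}


def build_video_prompt(subject_key: str, camera_key: str) -> str:
    """Combine a subject-movement fragment with a camera-movement fragment."""
    return f"{SUBJECT_MOTION[subject_key]}; {CAMERA_MOTION[camera_key]}"


print(build_video_prompt("sing", "shake"))
```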

If you need help finding the right words for your prompts, resources like "Ultima library" can be very useful. This library offers keywords and prompt structures related to AI video, camera movements, subject movements, and more. It also helps with generating good initial images, which is critical since you start with an image before animating.

Harnessing the full power of Midjourney's video capabilities, especially when dealing with complex prompts or numerous generations, can be time-consuming. Streamline your creative work and manage your projects with ease by exploring the Midjourney Automation Suite from TitanXT.

Midjourney's Edge: A Head-to-Head Battle

Many have compared Midjourney's new video model against other leading AI video tools like Kling 2.1, MiniMax Hailuo 02, Google Veo, and Runway Gen-4. Here's a quick look at some key tests:

The "Flower Release" Challenge

In one test, an image of a girl holding a flower was used with a prompt to make the flower fly away into the sky with "magic particles."

  • Midjourney: Delivered the best animation. The flower flew away as asked, with small blue particles, and the video's colors stayed true from start to finish. A clear win.

  • Google Veo: The video was good, but the flower did not fly away, and the colors changed too much.

  • Kling: The flower did not fly away.

  • MiniMax: The flower appeared frozen, and the animation was not very exciting.

Dynamic Duo Battle

Another test involved a retro-style image of two people against distorted buildings, prompted to show the characters singing and rapping with hand movements, plus a slight camera shake.

  • Midjourney: The quality was very good, and it included a subtle camera shake, as requested. This made it stand out, even if the shake was more gentle than desired.

  • Google Veo: Censored the image due to "realistic images with people."

  • Kling: Dynamic, but the camera did not shake much.

  • Runway: Lacked quality, and the characters moved away from the camera.

  • MiniMax: Quality was not great.

It's worth noting that Midjourney's output in this test was 480p, while the others were 1080p. This showed that higher resolution does not always mean better results; Midjourney's animation and adherence to the prompt were superior.

Artistic Style Animation

A third test used an illustration-style image of a person, with a prompt for the jacket to move in the wind and planes in the background to fly correctly.

  • Midjourney: Provided good animation on the subject, showing the jacket moving naturally. The planes in the background moved in the right direction and did not disappear. Another win.

  • Google Veo: Again, censored the image.

  • MiniMax: Planes and scenery moved in the wrong direction, making the video unusable.

  • Kling: The main character was mostly still, though the background moved well.

  • Runway: The character and planes disappeared, leaving an empty video.

Final Thoughts

Midjourney's entry into AI video generation is a big step. It provides impressive animations, shows good control, and can sometimes outperform other leading tools. While it requires you to first create an image, its distinct way of working delivers quality. Stay tuned for future updates, like the video upscaling tool, which will make Midjourney videos even better.

As Midjourney continues to develop its video features, tools like the Midjourney Automation Suite from TitanXT can become invaluable for managing your creative projects, especially when dealing with multiple video generations and complex workflows. Take your Midjourney experience to the next level!
