
First Look at Midjourney Video and Key AI Updates You Should Know

Jul 28

5 min read


A Midjourney-generated image created using the Midjourney Automation Suite

The world of artificial intelligence saw big developments this week. From a surprising update about OpenAI's next model to the first previews of Midjourney's long-awaited video capability, there is plenty to discuss. Let's dive into the most important AI news you need to know.

OpenAI's Next Model Delayed with a Big Hint

Sam Altman, CEO of OpenAI, shared news about the company's upcoming open-weights model. Originally expected this month, the release has been pushed back. However, the delay comes with an intriguing reason: "Our research team did something unexpected and quite amazing and we think it will be very very worth the wait."

This vague but exciting message has sparked plenty of speculation. Many wonder whether OpenAI has discovered a new capability or achieved a genuine breakthrough. While some assumed a delayed open-weights model might not be top-tier, the community now expects OpenAI to deliver something impressive. That could be a gift for everyone in the AI space, possibly even shifting the lead back to OpenAI from competitors like Google.

A Sneak Peek at Midjourney Video

We are finally getting our first look at Midjourney's video model, which the team has been building in private. Midjourney's image generation has always stood out for its unique style and large community, and these early examples suggest its video output will be just as strong. Unlike some video models that focus on practical use cases, Midjourney appears to be building a model that excels at artistic style and visual flow.

Early Midjourney Video Examples

  • A girl looking out a window: This short clip shows rich detail and Midjourney's signature look. The motion is smooth, capturing simple actions well.

  • Moving newsprint bird: This example features a bird with a crown, appearing to be made of newsprint. You can see individual dots and paper texture. The wings move while keeping the distinct newsprint style consistent. This suggests the model will create detailed, stylized videos that may take longer to generate but offer higher quality.

  • Anime style animation: Given Midjourney's Niji model for anime, it is no surprise they are looking at this style. The animation and frame rate look good, though some small details (like sauce pouring on food) still need work artistically.

  • Black and white photograph style: This slow-motion clip features a man with an old-fashioned camera. While detailed, some felt this style could be created with existing image-to-video tools.

  • Spider-Verse inspired animation: This example is very impressive. It blends a slower character frame rate with faster effects, similar to the Spider-Man movies. The character rotates fully and spreads wings naturally, showing good consistency and physics for hair and clothing.

  • "How to Train Your Dragon" style: A girl in medieval armor interacts with a baby dragon. This clip has a lot of detail, even though it is compressed. It also features a high shutter speed effect, making it feel more real.

  • Mirror reflections: Several examples show realistic mirror reflections in different settings, from a futuristic lounge to a cartoon scene. The video keeps backgrounds stable and mirrors true to life, which is a major challenge for AI video.

  • 2D/3D animation blend: A consistent character with detailed hair and beard shows off this style. While the character's mouth moves without sound (a common issue without native audio), the blend looks strong.

  • Single subject movement: The model can keep most of a scene still while one small character moves. This is hard for AI video generators, which often move everything. This shows great control, though some elements like water waves remained static.

  • Varied styles: Other clips include stop motion miniature scenes, interior design, and fantasy elements like wizards DJing. These show Midjourney's ability to produce a wide range of artistic styles.

Midjourney’s video model looks well-suited for creative and artistic projects. If you are looking to streamline your Midjourney workflow and generate stunning visuals more efficiently, why not explore the Midjourney Automation Suite from TitanXT? It helps manage and enhance your creations.

These early videos come from blind testing, so they are not cherry-picked examples. This gives us a real idea of what the model can do. While not perfect, it seems particularly good at 2D animation, possibly better than many current popular models.

Other Key AI Developments

Topaz Labs Astra: A New Creative Video Upscaler

Topaz Labs, known for its leading AI upscaling tools, is introducing Astra. This is their first creative style upscaler for video. It can take any video, including AI-generated content, and upgrade it to crisp 4K resolution. Astra uses new "Starlight" models to add fine details and even "hallucinate" extra elements as it upscales, similar to the Magnific upscaler. Topaz Labs focuses on quality, so expect this to be a top-tier tool when it launches.

Meta AI’s Advances in Robotics

Meta AI is making big strides in robotics with V-JEPA 2. This 1.2-billion-parameter model, trained on video, lets robots plan tasks in environments they have never seen before. That means a robot could perform complex actions in your home, like opening a fridge and bringing you a drink. Meta's stated goal is "Advanced Machine Intelligence" (AMI) for robots, and it is releasing benchmarks to help other researchers evaluate models that interact with the physical world.
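To make "planning with a video world model" concrete, here is a minimal, purely hypothetical sketch of the underlying idea: sample candidate action sequences, roll a learned predictor forward in latent space, and pick the sequence whose predicted outcome lands closest to a goal. The `encode` and `predict_next` stubs below are stand-ins, not V-JEPA 2's real interfaces.

```python
import numpy as np

# Hypothetical stand-ins for a learned video world model; they only illustrate
# the planning loop, not Meta's actual V-JEPA 2 code.
def encode(observation):
    """Map an observation (e.g. camera frames) to a latent state."""
    return np.tanh(observation)

def predict_next(latent, action):
    """Predict the latent state after taking `action` in state `latent`."""
    return np.tanh(latent + 0.1 * action)

def plan(current_obs, goal_obs, horizon=5, num_candidates=256, action_dim=4):
    """Pick the first action of the sequence whose predicted end state is closest to the goal."""
    rng = np.random.default_rng(0)
    current, goal = encode(current_obs), encode(goal_obs)

    best_seq, best_dist = None, np.inf
    for _ in range(num_candidates):
        actions = rng.normal(size=(horizon, action_dim))
        latent = current
        for a in actions:
            latent = predict_next(latent, a)      # roll the world model forward
        dist = np.linalg.norm(latent - goal)      # distance from the desired outcome
        if dist < best_dist:
            best_seq, best_dist = actions, dist
    return best_seq[0]  # execute one action, then replan with fresh observations

print(plan(np.zeros(4), np.ones(4)))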

Updates from Higgsfield and Google Veo 3

Higgsfield has updated its platform with support for FLUX.1 Kontext, enabling edits like changing a character's clothing while keeping camera movements steady. Google Labs continues to improve its Flow site and has launched a new Veo 3 Fast model. Veo 3 Fast is more than twice as quick and keeps 720p resolution, though it may be slightly less accurate to the prompt. Google is also expanding Veo 3 access via API to many other image and video generation sites, making it much more widely available.
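For developers, that API access typically means the Gemini API. Below is a rough sketch of what a call could look like with the google-genai Python SDK; the call pattern follows Google's published Veo examples, but the Fast model identifier here is an assumption, so check the current docs before relying on it.

```python
import time
from google import genai

client = genai.Client()  # reads the Gemini API key from the environment

# Model name is an assumption -- confirm the current Veo 3 Fast identifier in Google's docs.
operation = client.models.generate_videos(
    model="veo-3.0-fast-generate-preview",
    prompt="A paper-textured bird with a crown flapping its wings, stop-motion style",
)

# Video generation is asynchronous: poll the long-running operation until it finishes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("veo_fast_clip.mp4")
```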

OpenAI's o3 Pro and Grok 3.5 on the Horizon

OpenAI has released o3 Pro for Pro subscribers in ChatGPT. The model is exceptionally strong at reasoning and research, but it is very slow, often taking a long time to answer even simple questions. Users describe it as "slow as molasses, but smart as a weapon." Despite its intelligence, it has already been "jailbroken," showing that even advanced models can be pushed past their guardrails.
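If you want to try the model programmatically, a minimal sketch with the official OpenAI Python SDK and its Responses API is below; the exact model identifier is an assumption, so verify it against OpenAI's current model list.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# "o3-pro" as a model id is an assumption -- confirm the exact name before use.
response = client.responses.create(
    model="o3-pro",
    input="Summarize the trade-offs of using a slow but strong reasoning model in production.",
)

print(response.output_text)  # expect long latencies; this model prioritizes depth over speed
```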

Grok 3.5 also appears to be arriving soon, with xAI testing it alongside a new voice mode. While an earlier release prediction from Elon Musk did not pan out, the new voice mode sounds good. Grok 3 was impressive at release, but competition has grown rapidly with Claude 4 and Gemini 2.5. Hopes are high for Grok 3.5 to be competitive, though perhaps not a huge leap at the very top level.

ByteDance Seaweed APT2: Long-Form Video Research

A new paper from ByteDance introduces Seaweed APT2, which focuses on adversarial post-training for real-time interactive video generation. It produces consistent, coherent videos up to a minute long at 24 frames per second (720p). While impressive for its real-time capability, longer videos show some details drifting over time, like a man growing a beard or mountains shifting shape. This research is not yet a product, but it is valuable for understanding long-form video consistency and interactive experiences, such as controlling virtual avatars.
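The basic recipe behind real-time interactive generation, producing video in short chunks, conditioning each new chunk on the tail frames of the previous one, and injecting user input between chunks, can be sketched in a few lines. Everything below is a hypothetical illustration of that loop, not ByteDance's code.

```python
import numpy as np

FRAMES_PER_CHUNK = 24   # one second of video at 24 fps
CONTEXT_FRAMES = 4      # how many trailing frames condition the next chunk

def generate_chunk(context_frames, control_signal, rng):
    """Hypothetical one-step generator: produce the next chunk of frames
    conditioned on recent frames and an interactive control signal."""
    base = context_frames.mean(axis=0)
    noise = rng.normal(scale=0.01, size=(FRAMES_PER_CHUNK,) + base.shape)
    return base + control_signal + noise

def generate_video(seconds, get_user_control, frame_shape=(720, 1280, 3), seed=0):
    rng = np.random.default_rng(seed)
    frames = [rng.normal(size=frame_shape) for _ in range(CONTEXT_FRAMES)]
    for t in range(seconds):
        control = get_user_control(t)                    # e.g. avatar pose or camera move
        context = np.stack(frames[-CONTEXT_FRAMES:])     # condition only on recent frames
        chunk = generate_chunk(context, control, rng)
        frames.extend(chunk)                             # append and keep rolling forward
    return np.stack(frames)

video = generate_video(seconds=3, get_user_control=lambda t: 0.0, frame_shape=(8, 8, 3))
print(video.shape)  # (CONTEXT_FRAMES + 3 * FRAMES_PER_CHUNK, 8, 8, 3)
```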

Runway ML Introduces Chat Mode

Runway, another leader in AI video, has launched a new chat mode. This feature offers an easy way for new users to work with Runway's models, creating video content through a conversational interface. It simplifies the process by suggesting the right models for your needs, letting you guide the storytelling with simple chat commands. For users just starting with Runway, this could be a great way to get familiar with its many tools.

Looking Ahead

This past week highlighted exciting growth, especially in AI video generation. The previews of Midjourney's video model are promising for creative projects. New upscaling tools and robotic advancements show AI's reach. And chat interfaces like Runway's make AI video creation easier for everyone.

Ready to create your own amazing Midjourney content? Enhance your process with the Midjourney Automation Suite from TitanXT. It is built to help you generate and manage your AI art more effectively. Stay tuned for more AI updates as they happen!

