
Comparing Midjourney's New Omni Reference Feature with Runway
May 8
4 min read

Midjourney users have a powerful new tool: Omni Reference. This feature lets you use an image as a guide for new creations, going beyond characters to include styles, objects, and more. How does it work, and how does it stack up against Runway's similar reference feature? Let's take a look.
What is Midjourney's Omni Reference?
Before Omni Reference, Midjourney had Character Reference. This was great for keeping the look of a specific person or character across different images. It even had a "character weight" to control how strongly that character appeared.
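For anyone who used it from the prompt bar rather than the web interface, Character Reference was driven by the --cref parameter, with --cw setting the character weight from 0 to 100. A rough sketch, with a placeholder URL standing in for your reference image:
a woman reading in a rainy cafe --cref https://example.com/character.png --cw 80 --v 6
Lower --cw values focus mostly on the face, while 100 also tries to carry over hair and clothing.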
Omni Reference in Midjourney Version 7 takes this concept much further. "Omni" means "all," and that's the key difference. You can now reference not just people, but also specific objects, overall styles, or elements from any image. You get more control over how much of the reference image influences your new generation.
Using Omni Reference in Midjourney
You can use Omni Reference through the Midjourney website interface. Simply drag an image file into the prompt bar. As you hold the mouse button down, options appear, allowing you to drop the image as an "Image Prompt," "Style Reference," or "Omni Reference."
For this exploration, dropping the image onto "Omni Reference" is the goal. You can also add a separate style reference image if you want to combine influences.
Key Settings for Omni Reference
Beyond just adding the image, there are settings that impact your results:
Midjourney Version 7: Ensure you have Version 7 selected in the settings. This feature is new to this version.
Stylization: This setting controls how much the AI adds artistic flair versus sticking strictly to the prompt and reference. Higher values mean more artistic interpretation.
Weirdness: This setting adds a touch of unexpected variation to your images.
Omni Strength: This is the crucial setting for Omni Reference, found by clicking the small icon next to your uploaded reference image. The default is 100, but you can adjust it from 0 up to 1000; a higher number means the AI adheres more closely to the reference image. Starting around 700 is a good spot for strong influence (see the example prompt after this list).
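If you prefer typing parameters to drag-and-drop, the same settings map (as I understand the Version 7 parameter names) to --oref for the Omni Reference image, --ow for Omni Strength, plus the usual --stylize and --weird flags. A minimal sketch, with a placeholder URL in place of your own image:
a man in a vintage pub --oref https://example.com/headshot.png --ow 700 --stylize 100 --weird 5 --v 7
Nudge --ow up or down depending on how literally you want the reference followed.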
Early Results and Experiences
Experimenting with Omni Reference can produce mixed results at first. Trying to reference things like product logos or specific characters from outside sources didn't always work as expected. In one instance, attempting to bring in a specific soft-drink can resulted only in a tiny portion of the logo appearing on a generic can.
Efforts to reference a personal headshot also yielded inconsistent results initially. Some images didn't look much like the person in the reference photo, even with moderately high Omni Strength settings.
However, success came when combining a headshot with a style reference image for a specific look. When trying to generate an image of a man in a vintage pub based on a personal photo and a desired image style, the results were much better. The AI created images that captured elements from both the person's likeness and the stylistic guide, showing the feature's potential when the elements align.
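If you're combining influences from the prompt bar instead of the web interface, the pairing might look something like this sketch, assuming --oref carries the likeness and --sref carries the style image (both URLs are placeholders):
a man in a vintage pub, warm lamplight --oref https://example.com/headshot.png --ow 400 --sref https://example.com/vintage-style.png --v 7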
Comparing to Runway Reference
Runway also offers a reference feature as part of its Gen-4 capabilities. It's specifically designed to help maintain consistent subjects and scenes, which is particularly useful for video generation. While Midjourney doesn't currently offer video generation directly, users often take Midjourney images and animate them in tools like Runway.
Comparing these two tools involves looking at how each platform uses reference images. Midjourney's Omni Reference aims for broad application (people, objects, styles), while Runway's feature focuses on subject/scene consistency, likely with an eye towards its core video functionality. The approach and interface differ between the two.
Enhance Your Midjourney Workflow
Exploring new features like Omni Reference can involve a lot of trial and error. Manually managing multiple styles, references, and settings for numerous generations can take significant time. If you're looking to streamline your creative process and experiment with different combinations efficiently, consider using automation tools.
Check out the Midjourney Automation Suite from TitanXT. This tool can help manage and automate your Midjourney tasks, allowing you to generate variations, test parameters, and organize your image creation more effectively. It's designed to save you time and help you focus on creativity.
Next Steps and Potential
Midjourney continues to evolve, with Version 7 bringing powerful new abilities like Omni Reference. While initial experiments might require tweaking settings, the potential to leverage image influences for styles, objects, and consistent looks is clear. As AI image generation tools like Midjourney and those with video capabilities like Runway develop, we'll see even more sophisticated ways to control and direct the creative process.
Managing a large volume of generations and experiments to get the exact results you want with features like Omni Reference can be resource-intensive. For users who need efficiency and scale, the Midjourney Automation Suite from TitanXT offers features to automate your prompting and generation workflows. Explore how automation can support your Midjourney adventures.