What's New in Midjourney: Understanding --exp, Omni Reference, and More

May 8

4 min read

[Image: A Midjourney-generated image created using the Midjourney Automation Suite]

Midjourney keeps getting better with frequent updates. If you've noticed new features or parameters appearing, you're not alone. This post covers recent changes, including the V7 model update, the experimental --exp parameter, the flexible Omni reference system, and changes to the --q parameter. Stay up to date and see how these tools can help you create better images.

Managing all these updates can be simple. A tool like the TitanXT Midjourney Automator can help you keep track of parameter changes and streamline your workflow.

Midjourney V7 Model and Seeds

The latest V7 image model is here. Along with a small change to how seeds work, this update aims to improve image results.

  • Prompts should give more accurate results.

  • Hands and bodies should look better and fit more naturally into the rest of the image.

  • Images often look nicer overall compared to previous versions.

Lightbox Editor Update

The Lightbox editor, used for working with images you have already created, has been updated: the vary and upscale buttons are now available directly inside the editor.

Introducing the --exp Parameter

A new parameter called --exp is available. This is an experimental setting for image style. Midjourney says it is similar to --stylize but works differently. It is meant to make images more detailed and creative.

You can set --exp with a number from 0 to 100. By default, it is 0, meaning it's not used. Values past 50 might not show much difference. You need to add it directly to your prompt text, as it is not a setting on the website or in Discord settings.
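For illustration, adding the parameter directly to a prompt (the prompt text and value here are just examples) could look like this on Discord:

  /imagine prompt: tree at sunset --exp 25

On the website, you would type the same text, including --exp 25, straight into the prompt bar.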

Seeing --exp in Action

Using a prompt like 'tree at sunset' with increasing values of --exp shows how the parameter changes results. At low values the image looked like a realistic photo; as --exp increased, it gradually shifted toward a painted or digital-painting style. The colors became lighter, drifting toward pastel tones that sometimes looked less natural.

The Power of Omni Reference

A big new feature is the Omni reference system (--oref). Midjourney version 6 had a character reference tool (--cref) for keeping characters consistent; Omni reference takes this further.

  • Use an image URL as a reference for characters, objects, vehicles, or non-human things.

  • Help Midjourney include specific items in your generated images.

  • You can even use an image sheet with several items (like a character, their clothes, and an object), and Midjourney can combine them into one scene.

How to Use Omni Reference

  • On the Midjourney website with V7 enabled, drag your reference image into the Omni reference box in the prompt bar. You can usually only drag one image.

  • On Discord, use `--oref` followed by the image URL, as in the example below. If the image is not already online, send it to yourself on Discord first and then copy its URL.
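For example, a Discord prompt with an Omni reference might look like this (the URL is a placeholder for wherever your reference image is hosted):

  /imagine prompt: rabbit wearing a black dress on a motorcycle driving in a desert --oref https://example.com/rabbit-reference.png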

Testing Omni Reference

Tests show that Omni reference works well. By providing reference images for a rabbit, a black dress, and a motorcycle and prompting 'rabbit wearing a black dress on a motorcycle driving in a desert', Midjourney created an image using those specific elements. Just remember to include important details like colors in your prompt, especially if they are not clear in the reference image.

Controlling Variation with --ow

Related to Omni reference is the --ow (omni weight) parameter, which controls how closely Midjourney follows the reference image. It accepts values from 0 to 1000, and the default is 100; higher values stick more closely to the reference. To change the style while using a reference, Midjourney suggests lowering --ow. Lowering the weight works with style words in your prompt and works even better when combined with a style reference (--sref).
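As a rough sketch of how this combines (the prompt, URL, and values are illustrative only), lowering the weight while adding style words might look like:

  rabbit wearing a black dress on a motorcycle, watercolor painting style --oref https://example.com/rabbit-reference.png --ow 25

You could also append an --sref image URL if you want the overall style to follow a second reference image.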

Other Recent Updates

Fast Mode Returns

Good news! Fast mode is back. While Omni reference still runs in a faster, higher-cost mode (about double the standard rate), regular fast mode is available again. This is helpful when you want quicker generation than relaxed mode offers, especially for upscaling.
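As a reminder of the standard commands (these are not new in this update), you can switch back on Discord with:

  /fast

or add --fast to the end of a single prompt to run just that job in fast mode.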

Changes to the --q Parameter

The --q parameter, which relates to generation quality, has changed in V7. You now have two options: --q 2 and --q 4.

  • --q 2 uses a model similar to the one before May 2nd, 2025.

  • --q 4 uses a different experimental mode. It might offer better details and coherence but could be slower.

  • Midjourney recommends using draft mode for lower-quality tests instead of relying on --q for that.
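For example, to try the experimental higher-quality mode on a prompt (the prompt text is illustrative):

  tree at sunset --q 4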

Returning Parameters: --weird and --tile

Two parameters seen in past versions are also back for V7: --weird and --tile.

  • --weird: Pushes results beyond Midjourney's normal visual style and is meant to create unusual or strange images.

  • --tile: Makes the edges of your image seamless so it can be used as a repeating pattern. This is useful for creating textures and backgrounds.
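Example prompts for both parameters (the prompt text and the --weird value are illustrative):

  stone wall texture --tile

  tree at sunset --weird 500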

Final Thoughts

Midjourney released many updates in a short time. Features like Omni reference open up new ways to control your image generation, making it easier to get specific results. The return of fast mode is also a welcome change for many users.

Tools designed to work with artificial intelligence image creation, like the TitanXT Midjourney Automator, can help you manage these updates and keep your workflow smooth and efficient. Give the new parameters and features a try and see how they change your prompting experience. What will you create next?

Whether you're experimenting with --exp, using --oref, or simply enjoying the benefits of V7 and fast mode, the landscape of AI imaging continues to grow. For users creating many images or managing large projects, automating parts of the process can be very beneficial. Explore options like the Midjourney Automation Suite from TitanXT to see how it fits into your creative routine.
