
Testing Midjourney v7's New Omni Reference Feature

2 days ago

6 min read


A Midjourney generated image using Midjourney Automation Suite

Welcome to Pixelia AI. We had planned a video about using voices in ChatGPT today, but then Midjourney released the Omni Reference system for version 7. It's like Character Reference in v6.1, but it works with more than just people. I've experimented a bit and am still figuring it out, so let's explore it together and see what it can do. This looks like it will be a big deal for Midjourney users.

Let's look at what the Midjourney team says about Omni Reference. They describe it as a new image reference system that can do much of what Character Reference did in v6, and more. Think of Omni Reference as telling Midjourney, "Hey, put this image concept into my new picture." It works for people, objects, vehicles, or non-human creatures.

Using Omni Reference in Midjourney v7

You need to use Model v7. On the web, make sure v7 is selected. On Discord, you attach the reference image with the `--oref` parameter followed by the image URL, and optionally adjust its strength with the `--ow` parameter.
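For example, a Discord prompt using Omni Reference might look like the line below. The image URL here is a placeholder, and the prompt text is purely illustrative:

```
/imagine prompt: A woman reading a book on a plush couch --oref https://example.com/reference.png --ow 100
```

On the web interface, the reference image and weight are set through the UI instead, so only the prompt text itself is typed.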

Understanding the Omni Weight Parameter (--ow)

The `--ow` (Omni Weight) parameter controls how closely Midjourney follows your reference image. On the web interface there's a slider for it. The value goes from 0 to 1000, and the default is 100.

If you want to change the style of the image, like turning a photo into an anime version, you should use a low weight. For example, `--ow 25`. If you want the person's face to look very similar or keep their clothes, you should use a higher weight, like `--ow 400`.
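To make the difference concrete, here are the two scenarios above written out as example prompts. The prompt wording is illustrative, not from Midjourney's documentation:

```
An anime version of the woman in the reference image --ow 25
A photo of the woman in her original outfit and glasses --ow 400
```

The low weight lets the new style dominate; the high weight keeps the reference's face and clothing closer to the original.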

Stylize (`--s`) and Experimental Atmosphere (`--x`) values also influence the image. If you use high `--s` or `--x` values, you may need a correspondingly higher `--ow` value. For example, with `--s 1000` and `--x 100`, you might use `--ow 400`.

Midjourney advises that unless you are using extremely high `--s` and `--x` values, you probably shouldn't go above roughly `--ow 400`. Pushing higher can make the results worse.
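As a rough sketch of that pairing, using the same parameter names as above, a prompt that combines high stylization with a matching high Omni Weight might read:

```
A woman reading on a couch, cinematic lighting --s 1000 --x 100 --ow 400
```

The idea is that the reference weight and the stylization pull against each other, so raising one usually means raising the other.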

Omni Reference should work with personalization, stylization, style references, and mood boards.

If you want a person in the image to hold something, like a sword, add that detail in your text prompt. Say, "A character holding a sword."

If you want to copy a person's look but use a low `--ow` value to change the overall style, make sure to list the features you want to keep in your prompt. For example, "An anime woman with blonde hair and red suspenders."

Testing Omni Reference: What Happened?

Let's test it out. I used an image of myself. To use an image as a reference on the web, click the image icon, upload your picture, and make sure Model v7 is active. In these tests, I kept other settings like the stylize and experimental values at zero so they wouldn't affect the results much, and set the aspect ratio to 2:3.

I used the same text prompt for many tests: "A woman with gray short hair and glasses sitting in a modern family room on a plush couch reading a book." I only changed the `--ow` value.
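Running the same prompt at a series of weights is tedious by hand. A minimal Python sketch of the sweep described above, which just builds the prompt strings (it does not call Midjourney, and the helper name is my own):

```python
# Build the same prompt at several Omni Weight values so the
# variations can be pasted or queued one by one.
BASE_PROMPT = (
    "A woman with gray short hair and glasses sitting in a modern "
    "family room on a plush couch reading a book"
)

def build_prompts(base: str, weights: list[int]) -> list[str]:
    """Append an --ow parameter to the base prompt for each weight."""
    return [f"{base} --ow {w}" for w in weights]

# The weights tested in this article.
prompts = build_prompts(BASE_PROMPT, [100, 200, 400, 600, 800, 1000])
for p in prompts:
    print(p)
```

Each printed line is a complete prompt variant differing only in its `--ow` value, matching the methodology of the tests below.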

Default Weight (--ow 100)

With the default `--ow 100`, the results didn't look much like me, though the clothes were kept and the setting matched. Across several images at this weight, none really resembled me, and there was more variety in the clothing and setting.

Higher Weights (--ow 200, 400, 600, 800, 1000)

At `--ow 200`, the results were still not very close. Sometimes faces looked a bit like me, but other things didn't match, like glasses. I noticed version 7 sometimes has trouble with hands and feet, which appeared in some images.

At `--ow 400`, results were a bit better for likeness, but still not quite right.

At `--ow 600` and `--ow 800`, some images started looking more like me.

At `--ow 1000`, the face was quite accurate. However, it changed the rest of the image quite a lot. Sometimes, there were strange things like double glasses, or odd looking noses and fingers. Still, most images were acceptable.

Comparing to Midjourney v6.1

I tried the same prompt and reference image in Midjourney v6.1 using Character Reference (`--cref`). Results at character weights of `--cw 50` and `--cw 100` were not very close to the reference image. Compared to version 6.1, Omni Reference in v7 showed better potential for likeness.
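For reference, the v6.1 equivalent pairs `--cref` (the reference image URL) with `--cw` (character weight, 0 to 100). A sketch of that syntax, with a placeholder URL:

```
A woman with gray short hair and glasses sitting in a modern family room on a plush couch reading a book --v 6.1 --cref https://example.com/reference.png --cw 100
```

Note the much narrower weight range compared to Omni Reference's 0 to 1000.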

From my tests, Omni Weight values around 400 to 500 often gave better results for similarity.

Creating many images and testing different parameters like this can be time-consuming. Streamline your creative process with tools like the TitanXT Midjourney Automation Suite, designed to manage batches of prompts and settings quickly.

Testing with Different Prompts and Details

I tried another prompt: "A woman with short gray hair and glasses standing on a sidewalk in a busy city." To ensure the whole person was shown, I added clothing details: "wearing a red dress and sandals." Mentioning sandals helps suggest the feet should be visible. I tested this at `--ow 100` and `--ow 500`.
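Written out in full, the two variants of this test look like this (the reference image is attached separately through the web UI):

```
A woman with short gray hair and glasses standing on a sidewalk in a busy city, wearing a red dress and sandals --ow 100
A woman with short gray hair and glasses standing on a sidewalk in a busy city, wearing a red dress and sandals --ow 500
```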

At `--ow 100`, some results looked like me, and the hands and feet were decent. Others had weird looking parts. The face in some was definitely me, though sometimes hair was slightly different.

At `--ow 500`, the results looked much better. The faces were definitely me, hands looked okay, and feet looked okay. Even with slightly wild hair due to wind, the overall images were good. With this specific image and prompt, `--ow 500` gave two good results.

Testing with a Transparent Background Image

I used an image of myself with a transparent background. The prompt was: "A woman with short gray hair and glasses wearing a flowery glowing dress and sandals." I started with `--ow 100` and no environment specified.

When no environment was in the prompt, the results had a black background, similar to the reference image. Some images looked like me, and hands/feet were okay. Some results were distorted or had strange details like one sock and one bare foot.

I tried again, adding an environment: "standing on a beach" and increased the weight to `--ow 400`.

With the beach added, some images included a background that looked like the original image's style, plus the beach. Some faces looked like me, though eyes were sometimes odd. Other images had white outlines, sand where it shouldn't be, or distorted faces and body parts. This shows that using transparent background images can be tricky.

Testing Style Change

I wanted to see if Omni Reference could change the style using a photo reference. I used the first image again and the prompt: "An oil painting of a woman with white hair and glasses walking in a forest." I started with `--ow 100`.

At `--ow 100`, the results did not look much like my face or hair, and they didn't really look like oil paintings.

I tried again with the same prompt, specifying "short white hair", and used a lower weight, `--ow 25`, as Midjourney suggested for style changes.

With `--ow 25`, the images looked much more like oil paintings. However, as expected, the likeness to my face was almost completely gone. It successfully changed the style, but lost the person's features.

Testing with Personalization (--p)

Finally, I tested the prompt that gave better results earlier: "A woman with gray short hair and glasses sitting in a modern family room on a plush couch reading a book." I combined it with `--ow 500` and turned on personalization (`--p`). I used my default mood board.
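That combination can be written as a single prompt line, with personalization toggled on via `--p` (which uses whatever profile or mood board is set as default):

```
A woman with gray short hair and glasses sitting in a modern family room on a plush couch reading a book --ow 500 --p
```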

The results were hit and miss. Some images might pass for me, others clearly didn't. Compared to earlier tests, the hands and faces seemed a bit better overall, which shows the feature is improving even in short time frames.

Managing various parameters like Omni Weight, Stylize, Experiment, and Personalization can quickly become complex. Tools designed for Midjourney automation, like the TitanXT suite, can help you organize and run these tests more efficiently, saving time and effort.

Final Thoughts on Omni Reference

So, that's a first look at the new Omni Reference. I'm still not sure exactly how to write prompts or what specific values to use for every situation. But it definitely has a lot of potential to be a great tool. When comparing the results to the old Character Reference in version 6, Omni Reference in v7 shows better likeness to the source image.

Right now, you can only use one image as an Omni Reference. Hopefully, in the future, it will support using more. This feature is still quite new. I'm looking forward to seeing how it gets better. Even in just a few days of testing, I've seen improvements.

Exploring new Midjourney features like Omni Reference is exciting. Stay creative and keep experimenting!

