Have you ever felt confused about the meaning of parameters such as CFG scale, seed, or negative prompt? You’ve come to the right place. In this guide, we will give you the most complete explanation of each parameter with clear examples. You might think you already know everything, but we guarantee you will learn something new. Let’s get started and unlock the full potential of Stable Diffusion with these parameters together.
We know it can be hard to come up with negative prompts, so we have pre-curated negative prompts on https://openart.ai/create for you to easily choose from. Here are some examples that demonstrate their effects.
General negative prompts: lowres, error, cropped, worst quality, low quality, jpeg artifacts, out of frame, watermark, signature
Negative prompts for people portraits: deformed, ugly, mutilated, disfigured, text, extra limbs, face cut, head cut, extra fingers, extra arms, poorly drawn face, mutation, bad proportions, cropped head, malformed limbs, mutated hands, fused fingers, long neck
Negative prompts for photorealistic images: illustration, painting, drawing, art, sketch
Here’s a general guide on what step number to use for different cases:
- If you’re testing a new prompt and want fast results so you can tweak your input, use 10-15 steps.
- When you find the prompt you like, increase the steps to 25.
- If you’re creating a face, an animal with fur, or any subject with detailed texture, and the generated images are missing some of those details, try bumping it up to 40!
Some people are used to creating images with 100 or 150 steps. This was useful for samplers like LMS, but it’s generally no longer needed with improved fast samplers like DDIM and DPM Solver++. Using a high number of steps with these samplers will likely waste your time and GPU power without any increase in image quality.
On OpenArt we’ve implemented the three samplers our users rely on most: Euler A, DDIM, and DPM Solver++. There is no rule about which sampler to use, so try all three and see which fits your prompt best; all of them are very fast and capable of producing coherent results in 15-25 steps.
There is one noticeable difference between the Euler A sampler and the other two that is worth mentioning. In this comparison you can see that Euler A results, compared to DPM Solver++, have smoother colors with less defined edges, giving them more of a “dreamy” look. Use Euler A if you prefer this effect in your generated images.
CFG Guidance Scale
The default CFG used on OpenArt is 7, which gives the best balance between creativity and generating what you want. Going lower than 5 is generally not recommended, as the images might start to look more like AI hallucinations, and going above 16 might start to produce images with ugly artifacts.
So when should you use different CFG scale values? The CFG scale can be divided into ranges, each suited to a different prompt type and goal:
- CFG 2 – 6: Creative, but might be too distorted and not follow the prompt. Can be fun and useful for short prompts
- CFG 7 – 10: Recommended for most prompts. Good balance between creativity and guided generation
- CFG 10 – 15: When you’re sure that your prompt is detailed and very clear on what you want the image to look like
- CFG 16 – 20: Not generally recommended unless the prompt is well-detailed. Might affect coherence and quality
- CFG >20: Almost never usable.
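Under the hood, the CFG scale is a single multiplier in classifier-free guidance: at each denoising step the model makes two noise predictions, one for your prompt and one unconditional (or for your negative prompt), and the scale controls how far the final prediction is pushed away from the unconditional one toward the prompt. Here is a minimal sketch of that blend, using tiny toy vectors in place of the model’s real noise predictions:

```python
import numpy as np

def cfg_combine(uncond_pred, cond_pred, cfg_scale):
    """Classifier-free guidance: push the prediction away from the
    unconditional (or negative-prompt) output, toward the prompt.
    Toy sketch -- real pipelines apply this to large latent tensors."""
    return uncond_pred + cfg_scale * (cond_pred - uncond_pred)

# Toy 1-D "noise predictions" standing in for the model's real outputs.
uncond = np.array([0.0, 0.0])
cond = np.array([1.0, -1.0])

print(cfg_combine(uncond, cond, 1.0))  # scale 1: just the conditional prediction
print(cfg_combine(uncond, cond, 7.0))  # scale 7: pushed 7x further from unconditional
```

This also shows why very high scales produce artifacts: the prediction is extrapolated far outside the range the model was trained to produce.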
Since the same seed and prompt combo gives the same image each time, we can use this property to our advantage in multiple ways:
- Control specific features of a character: in this example we changed the emotion, but this can also work for other physical features like hair color or skin color. The smaller the change, the more likely it is to work.
- Testing the effect of specific words: If you wonder what a specific word is changing in the prompt, you can use the same seed with a modified prompt to test it out. It’s good practice to test prompts this way, changing a single word or phrase each time.
- Change style: If you like the composition of an image but wonder how it would look in a different style, reuse the seed with a restyled prompt. This can be used for portraits, landscapes, or any scene you create.
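The reason this works is that the seed fully determines the random noise the diffusion process starts from: same seed, same starting noise, and (with the same prompt and settings) the same image. The sketch below illustrates that determinism with numpy; real pipelines use a torch generator and different latent shapes, so treat the shape and RNG here as stand-ins:

```python
import numpy as np

def initial_latents(seed, shape=(4, 64, 64)):
    """Same seed -> bit-identical starting noise, hence the same image
    for the same prompt and settings. The shape here is a stand-in for
    a typical Stable Diffusion latent; real pipelines use torch RNGs."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

a = initial_latents(42)
b = initial_latents(42)
c = initial_latents(43)

print(np.array_equal(a, b))  # True: identical starting point
print(np.array_equal(a, c))  # False: a different seed gives different noise
```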
The img2img feature works exactly the same way as txt2img; the only difference is that you provide an image to use as the starting point instead of the noise generated by the seed number.
So how to decide what strength to use? Here is a simple guide with examples:
- To create variations of an image, the suggested strength is 0.5-0.75, with the same prompt. This is useful when you like the composition of a created image but some of the details don’t look good enough, or when you want to create similar-looking images to ones you made in other software like Blender or Photoshop (in this case the prompt would be a description of the image).
- To change an image’s style while keeping it similar to the original, you can run a lower-strength img2img multiple times and get much better image fidelity than a single img2img pass at higher strength. For this example we used a strength of 0.25 four times: each time we generate an image, we feed it back into img2img and rerun it with the same prompt and strength until we get the style we need. If the same image were run through img2img once at a higher strength, you would quickly lose resemblance to the original.
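A way to build intuition for strength: in common img2img implementations (diffusers-style pipelines, for example), strength controls how far into the noise schedule your input image is pushed, and only the remaining denoising steps actually run. The arithmetic below is a simplified sketch of that behavior, not any specific library’s exact code:

```python
def img2img_steps(num_inference_steps, strength):
    """Simplified sketch: `strength` sets how much noise is added to the
    input image, and roughly what fraction of the denoising steps run.
    strength 1.0 ~ full txt2img; strength near 0 barely changes the image."""
    return min(int(num_inference_steps * strength), num_inference_steps)

# With 30 steps, low strength leaves most of the original image intact.
for s in (0.25, 0.5, 0.75, 1.0):
    print(f"strength {s}: {img2img_steps(30, s)} of 30 denoising steps run")
```

This is also why four passes at strength 0.25 preserve the original better than one pass at strength 1.0: each pass only lightly noises and re-denoises the image, drifting toward the new style a little at a time.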
Congrats on making it this far! You now have a comprehensive understanding of all Stable Diffusion parameters. If you would like to learn more about how to write better prompts, you can check out our Prompt Book. Definitely give it a try by creating some AI images on https://openart.ai/create.