AI Image Generation

Midjourney Prompt Engineering: The Complete Guide

Master Midjourney V8 Alpha, V7, Niji 7, video generation, and advanced prompt engineering techniques. From first prompt to expert-level image creation.

10,658 words · 53 min read · Updated April 1, 2026

April 2026 Update: V8 Alpha is maturing and V8.1 is in training.

  • V8 Alpha (launched March 17, 2026 at alpha.midjourney.com) now has Relax mode for Standard, Pro, and Mega subscribers, covering all commands except --hd and --q 4 combined. [22]
  • A new SREF/Moodboards version (--sv 7) is the default: 4x faster and 4x cheaper, with support for --hd, --p, --stylize, and --exp. [22]
  • V8.1 is actively training (announced March 21, 2026), targeting improved default aesthetics, creativity, coherence, image prompts, better moodboards/srefs, and possibly 2K default resolution. It is expected within 1-3 weeks; the current V8 Alpha is temporary and will be replaced when V8.1 ships. [23]
  • V8 Alpha known issues: over-processed, hyper-polished default aesthetic; reduced --stylize range at extremes; limited abstraction (the model "fixes" surreal prompts into legibility); age drift (subjects rendered older than specified); and occasional "Minecraft effect" blocky textures. Use --style raw to counter the over-polished look. [24][25]
  • V8 generates images roughly 5x faster than V7, with dramatically improved instruction-following, coherence, and text rendering (use "quotes" for best results). New parameters: --hd for native 2K resolution and --q 4 for extra coherence. V8 supports --chaos, --weird, --exp, --raw, and --stylize (up to 1000 recommended).
  • Cost note: --hd, --q 4, --sref, and Moodboard jobs cost 4x regular jobs during alpha. [21]
  • V7 personalization profiles, moodboards, and srefs are fully backwards-compatible with V8. [21]
  • New UI features: improved conversation mode for natural-language flow, "Grid Mode" for focusing on large image sets, and settings moved to sidebars. [21]
  • Post-V8 roadmap: an editing model first, then a V2 video model (a new compute cluster enables training larger video models). [18]
  • Personalization supports multiple named profiles with 5x faster setup, and you can select multiple active profiles simultaneously. [15] The personalization interface was redesigned on February 26, 2026: image-pair comparisons were replaced with a faster click-and-scroll grid system. [19]
  • Moodboards gained the --profile parameter for direct ID-based usage and can now be blended with --sref codes in a single prompt. [15][17]
  • Niji 7 (January 9, 2026) delivers cleaner linework, improved eye/reflection detail, and significantly reduced --sref style drift. --cref remains unavailable, but Personalization and Moodboards are fully supported on Niji 7 as of February 26, 2026. [3][19]
  • The web UI added Describe on Web (right-click any image for 4 text prompts), new aspect ratios, and batch operations for up to 2,000 items. [13][17]
  • The Rooms feature was removed on February 26, 2026. [16]
  • Video, Moodboards, Draft Mode, and all V7 features remain current. See the Changelog for full history. [1]

I’ve spent hundreds of hours testing Midjourney across every version, parameter combination, and style direction. This guide distills that experience into the comprehensive reference I wish existed when I started. Whether you’re crafting your first prompt or pushing the boundaries of what’s possible, the techniques are here.

Midjourney isn’t a magic prompt-to-image converter. It’s a sophisticated visual language system that responds to specific patterns, respects certain hierarchies, and rewards those who understand its architecture. The difference between generic AI art and stunning, intentional imagery is understanding these patterns.

The key insight: V7 fundamentally changed how prompts work. The old keyword-soup approach (“beautiful, stunning, 8k, detailed, masterpiece”) actively degrades your results. V7 understands natural language—write prompts like you’re describing a photograph to a skilled cinematographer, not tagging a stock photo database.

This guide covers everything from first installation to advanced techniques that most users never discover. Every parameter is documented with actual ranges, real examples, and the edge cases that trip up experienced users.


Table of Contents

Part 1: Foundations

  1. What is Midjourney?
  2. Getting Started
  3. Core Concepts
  4. The Prompt Hierarchy

Part 2: Parameters Mastery

  1. Version Selection
  2. Aspect Ratios
  3. Stylization
  4. Chaos and Weird
  5. Experimental Aesthetics

Part 3: Reference Systems

  1. Omni Reference
  2. Style Reference
  3. Image Weight
  4. Draft Mode

Part 4: Video Generation

  1. Image-to-Video Basics
  2. Extending and Looping
  3. Video Best Practices

Part 5: Genre Templates

  1. Cinematic Realism
  2. Portrait Photography
  3. Product Photography
  4. Fantasy and Sci-Fi
  5. Anime with Niji 7
  6. Architecture
  7. Abstract and Experimental

Part 6: Advanced Techniques

  1. Word Weighting
  2. Negative Prompts
  3. Seed Control
  4. Multi-Subject Composition
  5. Text Rendering

Part 7: Workflows and Optimization

  1. The Iteration Loop
  2. Cost Management
  3. Troubleshooting
  4. Version Migration

Part 8: Reference

  1. Parameter Cheat Sheet
  2. Changelog

What is Midjourney?

Midjourney is a generative AI system that creates images from text descriptions. Unlike traditional image editing or stock photography, you describe what you want to see, and Midjourney generates original images that match your vision.

What makes Midjourney different:

| Aspect | Midjourney | Competitors |
|---|---|---|
| Image quality | Industry-leading aesthetics | Variable |
| Natural language | V7 understands full sentences | Often keyword-dependent |
| Photorealism | Exceptional with V7 | Good to excellent |
| Anime/illustration | Niji models optimized | General-purpose |
| Video | Native support (June 2025) | Requires separate tools |
| Community | Integrated sharing/discovery | Varies |

What you can create:

  • Photorealistic images: Portraits, products, architecture, nature
  • Illustrations: Concept art, book covers, editorial
  • Anime and manga: Via specialized Niji models
  • Abstract art: Experimental, surreal compositions
  • Videos: 5-21 second animated clips from images

What Midjourney isn’t:

  • Not a photo editor (use Photoshop for that)
  • Not a character-consistent system (yet—improving rapidly)
  • Not a tool for recreating specific copyrighted characters
  • Not free (subscriptions from $10-120/month)

Getting Started

Account Setup

  1. Visit midjourney.com
  2. Sign in with Discord or create an account
  3. Choose a subscription:
| Plan | Price | Fast GPU | Relax GPU | Video Relax |
|---|---|---|---|---|
| Basic | $10/mo | 3.3 hrs | No | No |
| Standard | $30/mo | 15 hrs | Unlimited | No |
| Pro | $60/mo | 30 hrs | Unlimited | Yes |
| Mega | $120/mo | 60 hrs | Unlimited | Yes |

Expert tip: Start with Standard ($30/mo). The unlimited Relax mode is essential for experimentation—you’ll burn through Fast hours quickly while learning.

Your First Prompt

Open the web interface at midjourney.com/imagine and type:

A golden retriever sitting in autumn leaves, soft afternoon sunlight

That’s it. No special syntax needed. V7 understands natural language.

What you’ll get: Four variations of a golden retriever in fall scenery. From here, you can:

  • Upscale: Click U1-U4 to generate a high-resolution version
  • Vary: Click V1-V4 to create subtle variations
  • Reroll: Generate four new variations with the same prompt

Web vs Discord

| Feature | Web Interface | Discord |
|---|---|---|
| Ease of use | Easier | Steeper learning curve |
| Image organization | Built-in gallery | Scattered in channels |
| Video generation | Full support | Not available |
| Prompt editing | Visual interface | Text commands |
| Community | Explore tab | Channel browsing |
| Recommendation | Start here | Power users |

The web interface is now the primary experience. Discord works but lacks video generation and has a less intuitive workflow.


Core Concepts

How Prompts Work

Every Midjourney prompt is processed through a pipeline:

Your Text Prompt
      
[Text Encoder]  Converts words to mathematical embeddings
      
[Diffusion Model]  Generates image from noise, guided by embeddings
      
[Upscaler]  Increases resolution and detail
      
Final Image

What this means for you:

  1. Word order matters: Early words have more influence than later ones
  2. Specificity wins: “golden hour sunlight casting long shadows” beats “nice lighting”
  3. Contradictions confuse: “dark, bright, moody, cheerful” cancels itself out
  4. Less is often more: 50-150 tokens typically outperforms 300+ tokens
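The ordering principle in practice. These two illustrative prompts (not from official documentation) describe the same scene, but the second front-loads the subject and avoids contradictions:

```
# Weak: style first, subject buried, contradictory mood cues
cinematic beautiful moody bright photo of a lighthouse

# Strong: subject first, then context, then style
a weathered lighthouse on a rocky headland, storm clouds gathering,
cinematic photography, moody overcast light --ar 16:9
```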

The Token Economy

Midjourney doesn’t see your words—it sees tokens (roughly word pieces).

| Token Count | Effect | Best For |
|---|---|---|
| 10-30 | Very open interpretation | Abstract, experimental |
| 30-80 | Balanced control | Most prompts |
| 80-150 | Detailed control | Specific scenes |
| 150+ | Diminishing returns | May cause conflicts |

Expert tip: If your prompt exceeds 150 tokens, you’re probably over-specifying. Cut the adjective spam.
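A sketch of what trimming adjective spam looks like. Both illustrative prompts describe the same portrait, but the lean version spends its tokens on concrete details instead of quality keywords:

```
# Bloated: redundant quality keywords that V7 ignores or fights
masterpiece, best quality, ultra detailed, 8k, stunning, beautiful,
gorgeous portrait of an old sailor, highly detailed face

# Lean: concrete, specific details
portrait of an old sailor, deep wrinkles, salt-and-pepper beard,
overcast harbor light --s 100
```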

Quality Signals

V7 responds strongly to certain descriptive patterns:

Lighting (most impactful):

  • "golden hour light casting long shadows across weathered stone"
  • "Rembrandt lighting with soft fill from camera left"
  • "bioluminescent glow illuminating the fog"

Materials and textures:

  • "oxidized copper with verdigris patina"
  • "worn leather showing decades of use"
  • "translucent jade catching the light"

Atmosphere and mood:

  • "melancholic twilight atmosphere"
  • "oppressive industrial ambiance"
  • "ethereal dreamlike quality"

Technical camera terms:

  • "shot on medium format, shallow depth of field"
  • "85mm lens, f/1.8 aperture"
  • "anamorphic lens flare, 2.39:1 aspect"


The Prompt Hierarchy

Every effective prompt follows a hierarchy. Words at the top have the most influence.

┌─────────────────────────────────────────────────┐
  1. SUBJECT (who/what)           Most important 
     "elderly fisherman"                          
├─────────────────────────────────────────────────┤
  2. SUBJECT DETAILS (descriptors)               
     "weathered face, silver beard, kind eyes"   
├─────────────────────────────────────────────────┤
  3. CONTEXT (where/when)                        
     "on a wooden dock at dawn"                  
├─────────────────────────────────────────────────┤
  4. STYLE/MOOD (how it feels)                   
     "documentary photography, contemplative"     
├─────────────────────────────────────────────────┤
  5. TECHNICAL (camera/lighting)                 
     "shot on Leica, natural morning light"      
├─────────────────────────────────────────────────┤
  6. PARAMETERS (--ar, --s, etc.)   Fine-tuning 
     "--ar 3:2 --s 100 --v 7"                    
└─────────────────────────────────────────────────┘

Prompt Template

[SUBJECT] [SUBJECT DETAILS], [CONTEXT], [STYLE/MOOD], [TECHNICAL] --parameters

Example applying the hierarchy:

An elderly fisherman with a weathered face and silver beard, standing on a
wooden dock at dawn, documentary photography style, contemplative mood,
shot on Leica M11 with natural morning light, soft mist rising from the water
--ar 3:2 --s 100 --v 7

What most users miss: They start with style (“beautiful cinematic photo of…”) instead of subject. V7 weights early tokens heavily—lead with what you actually want to see.


Version Selection

V8 Alpha (March 17, 2026)

V8 is Midjourney's next-generation model, currently in alpha testing at alpha.midjourney.com. [21]

Strengths:

  • ~5x faster image generation than V7
  • Dramatically improved instruction-following and coherence
  • Native 2K resolution via the --hd parameter
  • Best text rendering yet (use "quotes" in prompts)
  • Enhanced aesthetic understanding through personalization, srefs, and moodboards
  • Extra coherence mode via --q 4
  • Full backwards compatibility with V7 personalization profiles, moodboards, and srefs

Generation modes:

| Mode | Speed | Cost | Best For |
|---|---|---|---|
| Fast | ~5x faster than V7 | 1x | Standard workflow |
| --hd | 4x slower | 4x | Native 2K resolution |
| --q 4 | 4x slower | 4x | Extra coherence |
| --sref / Moodboard | 4x slower | 4x | Style-guided generation |

Known limitations and issues (alpha):

  • Relax mode, initially absent, was added March 21 for Standard, Pro, and Mega subscribers (all commands except --hd and --q 4 combined) [22]
  • Image prompting and variations may behave differently than in V7
  • Over-processed aesthetic: default outputs can feel hyper-polished and artificial; use --style raw to counter [24]
  • Reduced stylization range: very high --stylize values produce less radical variation than V7 [24]
  • Limited abstraction: the model tends to "fix" surreal or non-representational prompts into something more legible [24]
  • Age drift: subjects are sometimes rendered older or more mature than specified [25]
  • Inconsistent outputs: the same prompt may yield three excellent results and one off-target image (alpha instability) [24]
  • "Minecraft effect": occasional blocky textures on certain prompt types [25]
  • Web-only: V8 Alpha requires alpha.midjourney.com; no Discord access [25]

New UI features:

  • Conversation mode for natural-language flow
  • "Grid Mode" for focusing on large image sets
  • Settings in sidebars (no longer blocking your view)

Usage:

a weathered lighthouse on volcanic cliffs at golden hour,
dramatic clouds, crashing waves --v 8 --hd

V8 Alpha prompt tips:

  • Use --style raw to reduce the default hyper-polished look and get grittier, more authentic results [24]
  • Specify cinematographic lighting precisely: "single overhead key light with no fill, hard shadows" beats "dramatic lighting" [24]
  • Reference photographers/directors by name for style anchoring (e.g., "Annie Leibovitz portraiture," "Roger Deakins cinematography") [24]
  • Describe the medium precisely: "35mm film photograph, grain, Kodak Portra 400 palette" narrows the solution space [24]
  • Effective --no patterns: --no blur, depth of field for flat graphics; --no smile, makeup for neutral portraits [24]
  • --stylize 100-400 produces the most useful range in V8; extreme values are less effective than in V7 [24]

V8.1 in development (announced March 21, 2026): A “big training run” for V8.1 began around March 21, targeting improved default aesthetics, creativity, coherence, image prompts, better moodboards/srefs, and possibly 2K default resolution. Expected 1-3 weeks from announcement. The current V8 Alpha is temporary software that will be replaced when V8.1 ships.23

When to use V8:

  • When you want the fastest generation
  • For text-heavy images
  • When coherence matters most
  • To take advantage of native 2K resolution

V7 (Default since June 2025)

V7 is Midjourney's current flagship model, released April 3, 2025. [2]

Strengths:

  • Natural language understanding (write sentences, not keywords)
  • Best photorealism to date
  • Dramatically improved text rendering
  • Better human anatomy (hands, bodies)
  • Improved spatial relationships
  • Personalization enabled by default

Generation modes:

| Mode | Speed | Cost | Best For |
|---|---|---|---|
| Turbo | Fastest | 2x normal | Final renders when time matters |
| Fast | Normal | 1x | Standard workflow |
| Relax | Queued | Included | Exploration, learning |
| Draft | 10x faster | 0.5x | Rapid iteration |

When to use V7:

  • Photorealistic images
  • Any prompt with complex natural language
  • Text rendering
  • When quality matters most
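Usage (an illustrative natural-language prompt in the same spirit as the V8 and Niji examples, not from the official docs):

```
A street vendor arranging oranges at a night market, neon signs
reflecting in rain puddles, candid documentary style, 35mm film look
--v 7 --ar 3:2 --s 150
```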

Niji 7 (January 2026)

Niji 7 is the specialized anime/manga model, released January 9, 2026. [3]

Strengths:

  • Crystal-clear eyes, reflections, and fine background details [3]
  • Improved coherence for complex poses and multi-arm setups
  • More literal prompt interpretation; handles specific color positions and hairstyles precisely
  • Better text rendering
  • Enhanced --sref performance with significantly reduced style drift [3]
  • Clean, flat linework aesthetic designed to highlight improved line quality

Limitations:

  • --cref is NOT supported; the team hints at a "more powerful secret surprise" alternative [3]
  • More literal than previous Niji versions, so adjust vibey prompts accordingly

Note: Personalization (--p) and Moodboards are fully supported as of February 26, 2026. [19]

Coming soon:

  • New character reference system to replace --cref (expected to exceed --cref capabilities)

Usage:

A determined young mage with crimson hair, casting fire magic,
intense expression, ancient library background --niji 7

When to use Niji 7:

  • Anime and manga-style illustrations
  • Character design
  • Eastern aesthetic illustrations
  • When you want cleaner linework

Niji 6 (Legacy)

Still available for backward compatibility.

When to use Niji 6:

  • You need style presets (--style expressive, --style cute, --style scenic)
  • Your workflow depends on --cref
  • You prefer the softer, less literal interpretation

Styles:

--niji 6 --style expressive  # Dynamic, stylized
--niji 6 --style cute        # Kawaii aesthetic
--niji 6 --style scenic      # Background focus
--niji 6 --style original    # Classic Niji look

Version Comparison

| Feature | V7 | Niji 7 | Niji 6 |
|---|---|---|---|
| Photorealism | Excellent | N/A | N/A |
| Anime | Good | Excellent | Excellent |
| Natural language | Best | Good | Moderate |
| Text rendering | Best | Good | Limited |
| --oref | Yes | No | No |
| --cref | No | No | Yes |
| --sref | Yes | Yes (best) | Yes |
| --p | Yes | Yes (Feb 2026) [19] | Optional |
| Style presets | No | No | Yes |

V8 Development Status (March 2026)

As of the March 4, 2026 office hours, V8 is functionally complete and launch-ready. [18] The distillation run (speed optimization) is about to begin and takes roughly one week; once complete, V8 will release as an opt-in, non-default model for a ~30-day pre-alpha phase before replacing V7 as default. [16][18] Guides and moderators began internal testing in late January, with multiple community rating parties through mid-February. [9][12]

Confirmed V8 features:

  • Native 2K resolution (2048px), eliminating the upscaler middleman for genuinely sharper output [14]
  • Massive improvements in text rendering (V7's weakest area) [14]
  • Better generation of complex subjects (creatures, centaurs, unusual anatomy)
  • Complete architectural rewrite (new codebase, supports 64px to 2048px+ native) [14]
  • Style references, moodboards, personalization, and the weird parameter all supported [12]
  • Style Creator and web profiles for community style sharing [11]
  • Built-in upscaling and editing capabilities [12]
  • New creation flow: 64 images at 256px for rapid exploration, then fan in and upscale winners [10]
  • Infrastructure switch from TPUs to GPUs with PyTorch (better-supported codebase, faster hiring) [11]
  • V8 "mini" variant designed for lower-tier hardware [11]
  • Push toward real-time preview generation
  • Speed gains: significant even for Turbo users, dramatic for non-Turbo workflows [18]

Launch caveats:

  • Image prompting and variations may behave differently during the initial rollout [18]
  • Relax mode, absent at V8 launch, is now available for Standard, Pro, and Mega subscribers (all commands except --hd and --q 4 combined) [22]
  • Some features will be refined based on user feedback post-launch [18]

Timeline (as of March 13, 2026):

  • Internal testing: January 2026 [9]
  • Rating parties: early to mid February 2026 [12]
  • Final rating round (V8 personalization calibration): February 20, 2026 [20]
  • Functionally complete: confirmed March 4, 2026 [18]
  • Distillation run: about to begin (~1 week duration) [18]
  • V8 Alpha launched: March 17, 2026 at alpha.midjourney.com (opt-in, non-default) [21]
  • Relax mode added: March 21, 2026 [22]
  • New SREF/Moodboards version (--sv 7): 4x faster, 4x cheaper; supports --hd, --p, --stylize, and --exp [22]
  • V8.1 training run began: ~March 21, 2026 (1-3 week duration, targeting improved aesthetics, coherence, and image prompts) [23]
  • Pre-alpha: ~30 days after launch, then V8 becomes the default [16]
  • Mobile app improvements planned after V8 launch [9]
  • 3D functionality with camera movement and reframing in development [9]

What's next after V8:

  • Editing model (first priority after V8 launch) [18]
  • V2 video model (a new compute cluster arriving March 2026 enables training much larger video models) [18]
  • Hardware projects: four in progress, including a wearable device and a warehouse-scale assembly project [10]
  • Batch mode expansion with a user-preference learning system [9]
  • Real-time AI models as the long-term destination [9]


Aspect Ratios

The --ar parameter sets image dimensions. Default is 1:1 (square).

Common Ratios

| Ratio | Shape | Use Case |
|---|---|---|
| 1:1 | Square | Social media, icons |
| 4:5 | Portrait | Instagram feed, mobile |
| 5:4 | Landscape | Desktop, presentations |
| 16:9 | Widescreen | YouTube, presentations |
| 6:11 | Tall portrait | Phone wallpapers, vertical posters |
| 9:16 | Vertical | Stories, TikTok, mobile |
| 21:9 | Ultrawide | Cinematic, film |
| 3:2 | Classic | Photography prints |
| 2:3 | Portrait | Vertical prints |

Platform-Specific Recommendations

| Platform | Ratio | Notes |
|---|---|---|
| Instagram Feed | 1:1 or 4:5 | 4:5 gets more screen space |
| Instagram Story | 9:16 | Full vertical |
| Twitter/X | 16:9 or 1:1 | 16:9 expands in feed |
| LinkedIn | 1.91:1 or 16:9 | Professional landscape |
| Pinterest | 2:3 | Vertical performs best |
| YouTube Thumbnail | 16:9 | Standard video format |
| Desktop Wallpaper | 16:9 or 21:9 | Match your monitor |

Composition Impact

Aspect ratio isn’t just dimensions—it fundamentally changes composition.

Wide ratios (16:9, 21:9):

  • Emphasize environment and context
  • Natural for landscapes and cityscapes
  • Cinematic feel
  • Subjects become part of a scene

Tall ratios (4:5, 9:16):

  • Focus attention on the subject
  • Natural for portraits and products
  • Intimate feel
  • More vertical information

Expert tip: For cinematic portraits, try 4:5 instead of the obvious 16:9. You get the subject-focused framing of portrait with enough context for storytelling.
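The same subject reframed by ratio alone. These illustrative prompts show how --ar shifts the composition before any other parameter is touched:

```
a lone hiker on a mountain ridge at sunrise --ar 21:9   # environment dominates
a lone hiker on a mountain ridge at sunrise --ar 4:5    # subject-focused
a lone hiker on a mountain ridge at sunrise --ar 9:16   # vertical story format
```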


Stylization

The --s parameter controls how much artistic interpretation V7 applies. Range: 0-1000. Default: 100.

Stylization Ranges

| Range | Effect | Best For |
|---|---|---|
| 0-50 | Minimal interpretation | Product photos, technical accuracy |
| 50-150 | Balanced (default) | General use, portraits |
| 150-300 | Noticeable style | Artistic photos, mood pieces |
| 300-500 | Strong style | Illustrations, conceptual |
| 500-1000 | Very stylized | Abstract, experimental |

Visual Examples

Portrait of a woman, soft window light --s 50
# Result: Clean, realistic, minimal embellishment

Portrait of a woman, soft window light --s 250
# Result: More artistic interpretation, enhanced mood

Portrait of a woman, soft window light --s 600
# Result: Distinctly stylized, dreamlike quality

Decision Framework

Use low stylization (0-100) when:

  • Creating product photography
  • You want photorealistic accuracy
  • Technical/documentation images
  • The prompt should be interpreted literally

Use medium stylization (100-300) when:

  • General creative work
  • Editorial photography
  • You want enhancement without extremes
  • Balancing realistic and artistic

Use high stylization (300+) when:

  • Creating illustrations or concept art
  • Abstract or experimental work
  • You want Midjourney's aesthetic to dominate
  • Pushing creative boundaries

Stylization + Style Raw

For maximum photorealism, combine low stylization with --style raw:

Portrait of a businessman, office background --s 50 --style raw --v 7

--style raw tells V7 to minimize its own aesthetic interpretation, giving you results closer to literal prompt fulfillment.


Chaos and Weird

Chaos (--chaos 0-100)

Controls variation between the four generated images. Default: 0.

| Value | Effect |
|---|---|
| 0 | Very similar outputs |
| 25 | Slight variations |
| 50 | Moderate variety |
| 75 | High variety |
| 100 | Maximum unpredictability |

When to use chaos:

  • Exploration phase: --chaos 50-75 to see diverse interpretations
  • Final render: --chaos 0-25 for consistent results
  • Finding direction: high chaos early, low chaos for refinement
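A sketch of the explore-then-refine pattern, with illustrative prompts:

```
# Exploration: cast a wide net, cheaply
abandoned greenhouse overtaken by vines --chaos 70 --draft

# Refinement: lock in the direction you liked
abandoned greenhouse overtaken by vines, dusty glass panes,
shafts of morning light --chaos 10
```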

Weird (–weird 0-3000)

Introduces unconventional, unexpected aesthetics. Default: 0.

| Range | Effect |
|---|---|
| 0 | Standard aesthetics |
| 100-500 | Subtle quirks |
| 500-1000 | Noticeable strangeness |
| 1000-2000 | Very unusual |
| 2000-3000 | Maximum weirdness |

When to use weird:

  • Surreal or dreamlike imagery
  • Breaking out of generic AI aesthetics
  • Concept art exploration
  • When "normal" feels too predictable

Combining Chaos and Weird

--chaos 50 --weird 500   # Varied outputs, each slightly quirky
--chaos 100 --weird 0    # Wild variations, normal aesthetic
--chaos 25 --weird 2000  # Similar outputs, all very weird

Expert tip: High weird can produce genuinely unusual imagery, but it’s inconsistent. Use it for exploration, then dial back for final renders.


Experimental Aesthetics

The --exp parameter adds enhanced detail, dynamics, and tone-mapped effects. Range: 0-100. Default: 0.

Effect Levels

| Value | Effect | Notes |
|---|---|---|
| 0 | Off (default) | Standard rendering |
| 5 | Subtle enhancement | Safe to combine with other params |
| 10 | Noticeable detail boost | Good starting point |
| 25 | Strong effect | Recommended max for mixing |
| 50 | Very strong | May reduce prompt accuracy |
| 100 | Maximum | Can overwhelm --stylize and --p |

What --exp Does

  • More detailed textures and surfaces
  • More dynamic, punchy compositions
  • Tone-mapped HDR-like appearance
  • Enhanced visual interest

Example combinations:

--exp 10 --s 200           # Enhanced detail, balanced style
--exp 25 --s 100           # Strong exp, controlled stylize
--exp 5 --style raw        # Subtle boost for photorealism

Warning: Parameter Conflicts

At high values (above 25-50), --exp can:

  • Overwhelm --stylize settings
  • Override personalization (--p)
  • Reduce image diversity

Expert tip: Keep --exp at 10-25 for most work. Higher values are for specific stylistic effects, not general quality improvement.


Omni Reference

The --oref parameter transfers subject characteristics from a reference image to your generation. This replaced --cref in V7.

Basic Usage

/imagine A woman in a red dress at a gala --oref [image URL]

What transfers:

  • Face and facial features
  • Body type and proportions
  • Clothing and accessories
  • Overall identity

Weight Control (--ow)

--ow 0-1000    # Omni weight (default 100)

| Weight | Effect |
|---|---|
| 0-30 | Loose inspiration, allows style changes |
| 30-60 | Moderate influence |
| 60-100 | Strong resemblance (default area) |
| 100-300 | Very close match |
| 300-1000 | Maximum fidelity |

Weight Interactions

The --ow parameter competes with --stylize and --exp for influence. When using high stylize or exp values, increase --ow to maintain reference consistency:

# High stylize needs higher ow to keep reference
--oref [url] --ow 200 --s 400

# High exp overwhelms default ow
--oref [url] --ow 300 --exp 25

# If you aren't using high stylize/exp, stay at moderate ow (100-400)

Expert tip: For most work without extreme --stylize or --exp, keep --ow under 400. Only go above moderate values when you need to preserve exact facial features or clothing details against strong style parameters.

Best Practices

Reference image quality matters:

  • High resolution, clear subject
  • Front-facing photos work best for faces
  • Consistent lighting in the reference
  • Minimal background distractions

Adjusting weight for style changes:

# Photo to anime conversion - lower weight
--oref [photo URL] --ow 40 --niji 7

# Maintaining strict likeness
--oref [photo URL] --ow 200 --v 7

Combining with style reference:

# Subject from one image, style from another
--oref [subject URL] --sref [style URL] --ow 100 --sw 150

Style Reference

The --sref parameter transfers aesthetic qualities from a reference image.

Basic Usage

/imagine A mountain landscape at sunset --sref [style image URL]

What transfers:

  • Color palette
  • Lighting style
  • Artistic technique
  • Overall mood/atmosphere
  • Compositional tendencies

Weight Control (–sw)

--sw 0-1000    # Style weight (default 100)
Weight Effect
0-50 Subtle influence
50-150 Balanced transfer
150-300 Strong style match
300-1000 Dominant style
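Dialing --sw against the same reference. These are illustrative prompts; the URL is a placeholder:

```
a quiet canal at dusk --sref [style URL] --sw 50    # hint of the reference
a quiet canal at dusk --sref [style URL] --sw 150   # clear style transfer
a quiet canal at dusk --sref [style URL] --sw 400   # reference dominates
```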

Multiple Style References

You can combine multiple style images:

--sref [url1] [url2]

The styles blend together. Use for creating unique aesthetic combinations.
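Blending two references with an overall weight, as an illustrative sketch (URLs are placeholders):

```
a city skyline at night --sref [url1] [url2] --sw 150
# Both styles contribute; lower --sw if the blend overwhelms the subject
```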

Best Practices

Works best with:

  • Distinctive, consistent styles
  • Clear aesthetic characteristics
  • Images with strong visual identity

Less effective for:

  • Very generic photos
  • Mixed or unclear styles
  • Images where the "style" isn't obvious

Expert tip: Niji 7 has the best --sref performance. If style transfer is critical, consider using Niji 7 even for non-anime content.

Using Old Style Reference Codes

If you have --sref codes from the V6 era, they won’t work directly in V7. Add --sv 4 to use legacy style codes:

/imagine A mountain landscape --sref 123456789 --sv 4
# --sv 4 tells V7 to interpret the code using the V6 style system
/imagine A mountain landscape --sref 123456789 --sv 6
# --sv 6 tells V7 to interpret the code using the V6.1 style system

V8 Alpha SREF update (March 2026): A new SREF/Moodboards version (--sv 7) is now the default in V8 Alpha. It's 4x faster and 4x cheaper than the previous version, and supports the --hd, --p, --stylize, and --exp parameters. Moodboards also default to --sv 7 with the same improvements. [22]

Note: While --sv 4 and --sv 6 maintain backward compatibility, consider re-generating style references in V7 or V8 for better results with the new models.


Image Weight

The --iw parameter controls how much influence a reference image has on your generation.

Basic Usage

/imagine [prompt] [image URL] --iw 1.5

Weight Range

Range: 0-2 (default 1)

| Weight | Effect |
|---|---|
| 0-0.5 | Prompt dominant |
| 0.5-1 | Balanced |
| 1-1.5 | Image dominant |
| 1.5-2 | Strong image influence |

Use Cases

Low weight (0-0.5): Use the image as loose inspiration while prompt dominates

Balanced (0.5-1): Equal influence from prompt and image

High weight (1.5-2): Create variations closely based on the image
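The three regimes side by side, as illustrative prompts (the image URL is a placeholder):

```
a fox in a snowy forest [image URL] --iw 0.3   # image as loose inspiration
a fox in a snowy forest [image URL] --iw 1     # balanced influence
a fox in a snowy forest [image URL] --iw 1.8   # close variation of the image
```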


Moodboards (Custom Style Profiles)

Moodboards let you create personalized style profiles by rating images. Instead of using a single --sref image, you build a stable aesthetic preference from multiple examples. [7] You can now create multiple named profiles, set a default, and even select multiple active profiles simultaneously. [15]

How Moodboards Work

  1. Create a moodboard at midjourney.com/personalize
  2. Rate images by clicking and scrolling through a grid (this replaced the old 1v1 comparison system on February 26, 2026); setup is now up to 5x faster [15][19]
  3. Apply with --p to use your default moodboard
  4. Apply with --p [mID] to use a specific moodboard
  5. Name and organize multiple profiles for different projects or collaborators [15]

Building a Stable Profile

| Ratings | Stability |
|---|---|
| 40 | Minimum for a usable profile [15] |
| 200 | Fairly stable, reliable results [15] |
| 2,000 | Maximum refinement, best consistency [15] |

Expert tip: Rate at least 200 images for a reliable moodboard. Include both likes AND dislikes; dislikes help Midjourney understand what to avoid. You can select multiple active profiles simultaneously for blended aesthetics. [15]

Using Moodboards

/imagine A forest path at dawn --p
# Uses your default moodboard

/imagine A forest path at dawn --p abc123
# Uses specific moodboard with ID abc123

/imagine A forest path at dawn --profile abc123
# Alternative syntax using --profile parameter

Moodboards vs Style Reference

Approach Best For
--sref One-off style from a single image
--p (Moodboard) Consistent personal aesthetic across projects

Blending Moodboards with --sref

You can combine moodboards with style reference codes in a single prompt for nuanced control [17]:

/imagine A portrait --p --sref [url] --sw 50
# Your moodboard aesthetic + subtle style reference influence

/imagine A portrait --sref 142710498 --profile drgmjoi 2jrqbw6
# Mix sref codes with multiple moodboard profiles

You can also share moodboard snapshots as codes (e.g., --profile 2jrqbw6) that others can use, or share a link to the live version that updates as you refine it. [17]


Draft Mode

Draft mode generates images at 10x speed for half the GPU cost. Essential for exploration.

Enabling Draft Mode

/imagine [prompt] --draft

Or toggle in web interface settings.

Draft vs Full Comparison

| Aspect | Draft | Full |
|---|---|---|
| Speed | ~10x faster | Standard |
| GPU cost | 50% | 100% |
| Detail | Reduced | Full |
| Best for | Exploration | Final output |

The Draft Workflow

1. Draft Mode Exploration (--draft)
   ├── Test 5-10 variations quickly
   ├── Identify promising directions
   └── Note effective parameters

2. Full Render Refinement
   ├── Remove --draft flag
   ├── Apply learned parameters
   └── Fine-tune with --seed

Expert tip: Always start in Draft mode. The cost savings add up, and you’ll explore more options. Only switch to full render when you’ve found a direction worth committing to.
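The two-stage workflow as prompts. These are illustrative; the seed value is a hypothetical placeholder:

```
# Stage 1: cheap, fast exploration
art deco hotel lobby, brass and emerald green --draft --chaos 50

# Stage 2: full render of the winning direction
art deco hotel lobby, brass and emerald green, marble floors,
warm lamplight --seed 123456 --s 200
```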


Image-to-Video Basics

Midjourney’s V1 Video Model launched June 19, 2025, enabling image-to-video animation.

How It Works

  1. Select any image (Midjourney-generated or uploaded)
  2. Click “Animate” button
  3. Choose options (Auto, Manual, Loop)
  4. Generate 5-second video clip

Motion Parameters

--motion low    # Still scenes, slow motion, subtle movement (default)
--motion high   # Big camera motions, larger character movements
--raw           # Reduces creative flair, more prompt control

Motion Comparison

| Setting | Effect | Best For |
| --- | --- | --- |
| Low | Subtle, cinematic movement | Portraits, still life, atmosphere |
| High | Dynamic, energetic motion | Action, landscapes, crowds |

Warning: High motion can produce unrealistic or glitchy movements. Start with low, increase only if needed.

Cost and Plans

  • Default batch size is 4 videos per prompt; reduce to 1 or 2 with --bs # to conserve GPU time
  • Standard, Pro, and Mega can generate HD video (Fast Mode only)
  • Only Pro and Mega get Relax Mode for video (SD only)
| Plan | Fast Video | Relax Video | HD Video |
| --- | --- | --- | --- |
| Basic | Yes | No | No |
| Standard | Yes | No | Yes (Fast only) |
| Pro | Yes | Yes (SD only) | Yes (Fast only) |
| Mega | Yes | Yes (SD only) | Yes (Fast only) |

HD Video Mode

HD Video mode (launched August 2025) renders at four times the pixel density of standard video, dramatically sharpening detail in motion.8

How to use HD Video:

1. Generate a standard video first
2. Click the HD option on the completed video
3. Wait for the high-resolution render

HD Video costs:

- Costs ~3.2x more than standard video
- Available on Standard, Pro, and Mega plans (Fast Mode only)
- Requires a standard video first (you can't generate HD directly)

| Mode | Resolution | Batch 1 | Batch 2 | Batch 4 (default) |
| --- | --- | --- | --- | --- |
| Standard (SD) | Base | 2 min | 4 min | 8 min |
| High Definition (HD) | 4x pixels | 7 min | 13 min | 26 min |
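The timing table above is roughly linear in batch size (~2 GPU minutes per SD video, ~6.5 per HD video, which lands near the quoted ~3.2x multiplier). A small estimator using only those approximations (the helper is illustrative, not an official API):

```python
# Approximate GPU minutes per video, derived from the timing table above.
MINUTES_PER_VIDEO = {"sd": 2.0, "hd": 6.5}

def estimate_video_minutes(mode: str, batch_size: int = 4) -> float:
    """Rough GPU-minute estimate for one video job (illustrative only)."""
    if batch_size not in (1, 2, 4):
        raise ValueError("batch size must be 1, 2, or 4 (set with --bs)")
    return MINUTES_PER_VIDEO[mode] * batch_size

print(estimate_video_minutes("sd"))     # 8.0 -> matches the table's default batch
print(estimate_video_minutes("hd", 2))  # 13.0
print(estimate_video_minutes("hd") / estimate_video_minutes("sd"))  # ~3.25x ratio
```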

When to use HD:

- Final delivery assets
- Large displays or projections
- Professional/commercial work
- When detail matters in motion

Expert tip: Always test in SD first. HD takes longer and costs more—only upgrade your best clips.


Extending and Looping

Extending Videos

You can extend any video by an additional 4 seconds, up to 4 times (21 seconds max).

Extension options:

- Auto: Automatically continues the video
- Manual: Adjust the prompt before extending

Best practices for extensions:

- Plan your narrative arc before starting
- The first 5 seconds should establish the scene
- Each extension should have a purpose
- Consider pacing—21 seconds is longer than you think

Creating Loops

The Loop option creates seamless looping videos where the first and last frames match.

Select image → Click "Loop" → Generate

Best for:

- Background animations
- Social media content
- Ambient visuals
- Cinemagraphs

Tips for better loops:

- Simple, repeatable motion works best
- Avoid complex camera movements
- Atmospheric elements (clouds, water, fire) loop naturally


Video Best Practices

When to Use Video

Good candidates for video:

- Atmospheric scenes (fog, rain, fire)
- Subtle movement (hair, fabric, water)
- Landscapes with environmental motion
- Portraits with minimal movement

Less ideal for video:

- Complex action sequences
- Multi-character scenes
- Precise choreography
- Technical accuracy requirements

Optimizing for Video

Before animating:

1. Generate the perfect still image first
2. Consider how elements might move
3. Avoid complex, interconnected subjects
4. Simple compositions animate better

Prompt adjustments:

# Good for video
Lone figure standing on cliff edge, wind blowing cape, dramatic clouds

# Less ideal for video
Group of dancers in synchronized formation, precise movements

Cost Management

At 8x image cost, video adds up fast:

Cost-effective workflow:

1. Explore in Draft mode (images)
2. Find the perfect composition
3. Generate the final high-quality still
4. Animate only the best version
5. Extend only if necessary


Cinematic Realism

The most effective pattern for photorealistic, cinematic results.

The Cinematic Template

[Shot type] by [Director], [subject physical description],
[action/pose], [costume/styling], [setting details],
captured with [Camera Body] using [Lens], [lighting description],
[mood/atmosphere summary]
--ar [ratio] --s [value] --p --no anime, cartoon, illustration, painting
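Because the template is pure slot-filling, batching variations is easy to script. A minimal sketch (the function and its field names are hypothetical; it only assembles a string in the template's order and does not call Midjourney):

```python
def cinematic_prompt(shot, director, subject, action, styling, setting,
                     camera, lens, lighting, mood, ar="21:9", s=200):
    """Assemble a prompt string following the cinematic template above."""
    body = (f"{shot} by {director}, {subject}, {action}, {styling}, {setting}, "
            f"captured with {camera} using {lens}, {lighting}, {mood}")
    return f"{body} --ar {ar} --s {s} --p --no anime, cartoon, illustration, painting"

print(cinematic_prompt(
    "Epic wide shot", "Denis Villeneuve",
    "lone figure in orange survival suit", "walking across endless salt flats",
    "weathered gear", "massive dust storm approaching on horizon",
    "ARRI ALEXA 65", "24mm f/2.0 lens",
    "harsh afternoon sun creating stark shadows", "desolate apocalyptic atmosphere"))
```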

Director Styles

| Director | Visual Style | Best For |
| --- | --- | --- |
| Ridley Scott | Atmospheric, textured, moody | Sci-fi, period drama, close-ups |
| Denis Villeneuve | Epic scale, desolate, geometric | Landscapes, wide shots |
| David Fincher | Dark, precise, unsettling | Thrillers, moody portraits |
| Roger Deakins | Silhouettes, natural light, poetic | Any lighting-focused shot |
| Alfonso Cuarón | Immersive, intimate, tracking | Character moments, tension |
| Wes Anderson | Symmetrical, pastel, whimsical | Stylized, centered compositions |
| Christopher Nolan | IMAX scale, practical, intense | Action, architecture |
| Terrence Malick | Golden hour, ethereal, nature | Landscapes, contemplative |

Camera Body Reference

| Camera | Aesthetic | Best For |
| --- | --- | --- |
| RED Komodo | Modern digital cinema | Close-ups, narrative |
| ARRI ALEXA | Film-like, rich color | Everything cinema |
| ARRI Alexa Mini | Same as ALEXA, smaller | Documentary, handheld |
| ARRI ALEXA 65 | Large format, epic | Landscapes, IMAX feel |
| RED V-Raptor | 8K, sharp, dynamic | Action, high detail |
| Sony Venice | Full-frame, versatile | Low light, anamorphic |
| Hasselblad | Medium format, luxury | Portraits, fashion |
| Leica M | Rangefinder, classic | Street, documentary |

Lens Pairings

| Focal Length | Effect | Best For |
| --- | --- | --- |
| 24mm f/1.4 | Wide, environmental | Landscapes, establishing |
| 35mm f/2.0 | Natural, versatile | Documentary, street |
| 50mm f/1.4 | Classic, balanced | General purpose |
| 85mm f/1.8 | Portrait, shallow DOF | Close-ups, portraits |
| 105mm f/2.0 | Compressed, intimate | Headshots |
| 135mm f/2.0 | Maximum compression | Tight portraits |

Complete Cinematic Examples

Close-up portrait:

Dramatic close-up portrait by Ridley Scott, young woman with pale skin
and auburn hair, intense green eyes staring directly at camera, subtle
freckles across nose, wearing dark wool coat, rain falling around her
face, captured with RED Komodo using 85mm f/1.8 lens, cold blue-silver
lighting with warm practical rim light, melancholic determined atmosphere
--ar 4:5 --s 150 --p --no anime, cartoon, illustration, painting

Wide cinematic:

Epic wide shot by Denis Villeneuve, lone figure in orange survival suit
walking across endless salt flats, geometric patterns in dried earth,
massive dust storm approaching on horizon, captured with ARRI ALEXA 65
using 24mm f/2.0 lens, harsh afternoon sun creating stark shadows,
desolate apocalyptic atmosphere
--ar 21:9 --s 200 --p --no anime, cartoon, illustration, painting

Critical: Never use actor names. Describe people physically. “Young woman with pale skin and auburn hair” not “Emma Stone.” Actor names create uncanny valley effects.


Portrait Photography

Lighting Patterns

| Pattern | Effect | Setup |
| --- | --- | --- |
| Rembrandt | Dramatic, classical | Key light 45° to the side, creating a triangle of light under the eye |
| Butterfly | Glamorous, flattering | Key light above and forward |
| Split | Dramatic, mysterious | Light from directly beside the subject |
| Rim/Edge | Separation, depth | Light from behind |
| Loop | Subtle shadow | Key light at a slightly shallower angle than Rembrandt |

Portrait Template

[Subject description], [expression/emotion], [pose],
[lighting pattern] lighting, shallow depth of field,
[background description], shot on [camera] with [lens]
--ar 4:5 --s 100 --v 7

Portrait Examples

Environmental portrait:

Middle-aged craftsman with salt-and-pepper beard, focused expression,
hands working on leather saddle, Rembrandt lighting from workshop window,
shallow depth of field, blurred tool-filled background, shot on
Hasselblad with 80mm f/1.9, documentary authenticity
--ar 4:5 --s 75 --style raw --v 7

Studio portrait:

Professional woman in her 30s, confident subtle smile, shoulders
turned slightly, butterfly lighting with soft fill, pure white
seamless background, shot on Phase One with 110mm f/2.8, clean
commercial aesthetic
--ar 4:5 --s 50 --v 7

Product Photography

Product Template

[Product] on [surface/platform], [background style],
[lighting setup], commercial photography, high detail,
[brand aesthetic description]
--ar 1:1 --s 50 --v 7 --style raw

Surface and Background Options

Surfaces:

- Polished marble (luxury)
- Raw concrete (industrial)
- Natural wood (organic)
- Brushed metal (tech)
- Colored acrylic (modern)

Backgrounds:

- Gradient (smooth transition)
- Seamless (solid color)
- Contextual (in-use setting)
- Abstract (artistic)

Product Examples

Luxury cosmetic:

Minimalist perfume bottle with gold cap on polished black marble surface,
gradient background from deep purple to black, dramatic rim lighting with
soft front fill, commercial photography, high detail, premium luxury
aesthetic, subtle reflections on marble
--ar 1:1 --s 25 --v 7 --style raw

Tech product:

Wireless earbuds case open showing earbuds inside, floating on
pure white seamless background, soft even lighting from all sides,
commercial product photography, high detail, clean Apple-style
minimalism, subtle shadow beneath
--ar 1:1 --s 50 --v 7 --style raw

Fantasy and Sci-Fi

Fantasy Template

[Character/scene description], [fantasy world details],
[magical elements], [lighting style],
[art style: painterly | concept art | illustration],
[artist influence if applicable]
--ar 16:9 --s 500 --weird 100 --v 7

Fantasy Examples

Epic fantasy:

Ancient elven queen seated on crystalline throne in vast cavern hall,
iridescent robes flowing with captured starlight, bioluminescent
flowers floating around her, massive glowing runes carved into
obsidian walls, ethereal volumetric lighting, painterly fantasy
illustration influenced by Craig Mullins and Alphonse Mucha
--ar 16:9 --s 600 --weird 150 --v 7

Dark fantasy:

Battle-scarred knight in tarnished armor standing in ruined cathedral,
sword planted in cracked stone floor, pale moonlight streaming through
shattered rose window, crows circling above, mist swirling at feet,
dark atmospheric concept art, Zdzisław Beksiński influence
--ar 16:9 --s 400 --weird 200 --v 7

Sci-Fi Template

[Subject/scene], [technology details], [environment],
[lighting: neon | holographic | industrial | sterile],
[aesthetic: cyberpunk | hard sci-fi | retro-futurism],
[mood description]
--ar 21:9 --s 300 --v 7

Sci-Fi Examples

Cyberpunk:

Solo mercenary in worn tactical gear navigating rain-soaked neon alley,
holographic advertisements flickering overhead, steam rising from
street grates, distant megastructures visible through smog, cyan and
magenta neon reflections on wet pavement, Blade Runner cyberpunk
aesthetic, oppressive urban atmosphere
--ar 21:9 --s 350 --v 7

Hard sci-fi:

Interior of generation ship agricultural bay, massive cylindrical
space with terraced farms curving overhead, artificial sun strip
running along central axis, workers in utilitarian jumpsuits tending
crops, visible structural engineering, hard science fiction aesthetic,
The Expanse influence, functional yet beautiful
--ar 21:9 --s 250 --v 7

Anime with Niji 7

Niji 7 Characteristics

Niji 7 produces cleaner, flatter artwork with improved linework. It interprets prompts more literally than previous versions.

Niji 7 Template

[Character description], [pose/action], [expression],
[setting/background], [specific style notes],
[color palette]
--niji 7 --ar [ratio]

Niji 7 Examples

Action scene:

Young mage with flowing crimson hair and determined golden eyes,
casting powerful fire spell with both hands raised, intense focused
expression, ancient library crumbling around her, debris floating
in magical energy, dynamic diagonal composition, warm orange and
red color palette with cool blue shadows
--niji 7 --ar 3:4

Character portrait:

Elegant noblewoman with silver hair in elaborate updo, wearing dark
blue Victorian-inspired gown with gold embroidery, subtle knowing
smile, half-body portrait, ornate palace balcony background with
moonlit garden visible, soft romantic atmosphere, detailed lace
and fabric textures
--niji 7 --ar 4:5

Style Transfer with Niji 7

Niji 7 has the best --sref performance:

[Your prompt] --niji 7 --sref [style image URL] --sw 150

Start with --sw 150 and adjust:

- Lower (50-100) for subtle influence
- Higher (200-300) for strong style matching

Migration from Niji 6

Niji 6 approach:

anime girl, beautiful, detailed eyes, colorful --niji 6 --style expressive

Niji 7 approach:

Young woman with vibrant teal hair and large expressive amber eyes,
wearing casual summer dress, cheerful smile, urban cafe background,
afternoon sunlight, contemporary anime style
--niji 7

Key changes:

- Write full descriptions, not keyword lists
- Be more literal and specific
- Style presets don’t exist—describe what you want
- Use --sref for consistent style


Architecture

Architecture Template

[Building/space type], [architectural style],
[time of day/lighting], [weather/atmosphere],
[perspective: eye-level | aerial | interior | detail],
architectural photography, clean lines
--ar 16:9 --s 150 --v 7 --style raw

Architectural Styles

| Style | Characteristics | Key Words |
| --- | --- | --- |
| Brutalist | Raw concrete, massive, geometric | Exposed concrete, monolithic |
| Minimalist | Clean lines, white, sparse | Negative space, pure forms |
| Art Deco | Ornate, geometric, luxurious | Gold accents, sunburst patterns |
| Gothic | Pointed arches, vertical, dramatic | Flying buttresses, rose windows |
| Japanese | Wood, paper, nature integration | Shoji screens, engawa, zen |
| Parametric | Flowing, computational, organic | Zaha Hadid, algorithmic curves |

Architecture Examples

Brutalist:

Brutalist concrete museum interior with dramatic skylights, afternoon
sun creating strong geometric shadows on exposed concrete walls, vast
empty gallery space with single sculpture, eye-level perspective
showing depth and scale, architectural photography by Hélène Binet
--ar 16:9 --s 100 --v 7 --style raw

Parametric:

Futuristic parametric architecture concert hall exterior, flowing white
curves inspired by Zaha Hadid, blue hour lighting with building interior
warmly illuminated, long exposure car light trails on surrounding roads,
wide establishing shot, architectural photography
--ar 16:9 --s 150 --v 7

Abstract and Experimental

Abstract Template

[Concept/emotion to express], [visual elements],
[color palette], [texture/material qualities],
[movement/energy description], abstract composition
--s 750 --weird 500 --chaos 50 --v 7

Abstract Examples

Emotional abstract:

The feeling of nostalgia dissolving into hope, fragmented memories
reforming as light, soft blues transitioning to warm amber, watercolor
textures bleeding into geometric shapes, gentle upward movement,
abstract emotional landscape
--ar 1:1 --s 800 --weird 750 --chaos 40 --v 7

Textural abstract:

Microscopic landscape of oxidized copper and crystalline salt
formations, verdigris greens and rust oranges, extreme macro detail,
mineral textures catching diffused light, abstract geological patterns
--ar 1:1 --s 500 --weird 300 --v 7

Pushing Boundaries

For truly experimental work:

- Push --weird above 1000
- Combine with --chaos 75+
- Use abstract emotional language
- Reference unconventional artists

The architecture of forgotten dreams, impossible geometries folding
through chromatic space, Escher meets Kandinsky, synesthetic color
relationships, visual music
--ar 1:1 --s 1000 --weird 2000 --chaos 75 --v 7

Word Weighting

Use :: syntax to control emphasis on specific elements.

Syntax

word::2      # Double emphasis
word::1.5    # 50% more emphasis
word::1      # Normal (default)
word::0.5    # Half emphasis
word::-1     # Negative (avoid)

Examples

ethereal::2 portrait of a warrior, dramatic lighting::1.5, mist::0.5

This prompt:

- Strongly emphasizes ethereal quality
- Moderately emphasizes dramatic lighting
- Reduces mist presence

When to Use Weighting

Useful for:

- Fine-tuning element balance
- Suppressing unwanted interpretations
- Emphasizing key features

Avoid when:

- Exploring first drafts
- A simple prompt already works without it
- You’re not sure what to emphasize

Expert tip: Word weighting is a refinement tool, not a first step. Get the basic prompt working, then use weighting to fine-tune.
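If you build weighted prompts programmatically, the `::` emphasis syntax is straightforward to emit from a phrase-to-weight mapping. A sketch (the helper is hypothetical; weight 1 is the default, so it is written without a suffix, and segments are comma-joined to mirror the example above):

```python
def weighted_prompt(parts: dict) -> str:
    """Join phrases using Midjourney's ::weight emphasis syntax."""
    segments = [phrase if weight == 1 else f"{phrase}::{weight}"
                for phrase, weight in parts.items()]
    return ", ".join(segments)

print(weighted_prompt({
    "ethereal": 2,                 # double emphasis
    "portrait of a warrior": 1,    # default weight, no suffix
    "dramatic lighting": 1.5,      # 50% more emphasis
    "mist": 0.5,                   # half emphasis
}))
# ethereal::2, portrait of a warrior, dramatic lighting::1.5, mist::0.5
```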


Negative Prompts

The --no parameter excludes elements from generation.

Basic Usage

/imagine Beautiful landscape --no people, text, watermark

Effective Negatives

| Goal | Negative |
| --- | --- |
| Photorealism | --no anime, cartoon, illustration, painting, drawing |
| Clean image | --no text, watermark, signature, frame, border |
| Natural look | --no oversaturated, HDR, artificial |
| Serious tone | --no cute, chibi, kawaii |
| Simple composition | --no busy, cluttered, crowded |

Best Practices

Do:

- Use specific, clear terms
- Address actual problems in your outputs
- Keep the list focused (3-5 items)

Don’t:

- Create exhaustive lists of everything you don’t want
- Use vague terms (“bad”, “ugly”)
- Negate things unlikely to appear anyway

The Cinematic Negative

For consistent photorealistic results:

--no anime, cartoon, illustration, painting, drawing, sketch, CGI, 3D render

Seed Control

Seeds enable reproducibility and controlled variation.

Basic Usage

/imagine [prompt] --seed 12345

Same prompt + same seed = very similar output.

Finding Seeds

After generation, click the image info to find the seed used. Note it for reproduction.

Seed Workflows

Variation workflow:

1. Generate with a random seed
2. Find a result you like
3. Note the seed
4. Make small prompt changes with the same seed
5. Compare variations

Batch consistency:

Scene in morning light --seed 54321
Scene in afternoon light --seed 54321
Scene in evening light --seed 54321

Using the same seed across related prompts creates more consistent compositions.
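Scripting the related prompts keeps the seed consistent by construction, which matters once a series grows past a few variants. A sketch (string-building only; it prints prompts rather than calling Midjourney):

```python
SEED = 54321
TEMPLATE = "Scene in {time} light --seed {seed}"

# Same seed across every variant, so compositions stay comparable.
prompts = [TEMPLATE.format(time=t, seed=SEED)
           for t in ("morning", "afternoon", "evening")]
for p in prompts:
    print(p)
# Scene in morning light --seed 54321
# Scene in afternoon light --seed 54321
# Scene in evening light --seed 54321
```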


Multi-Subject Composition

Complex scenes with multiple subjects require careful prompt construction.

Hierarchy Approach

List subjects in order of importance:

[Primary subject], [secondary subject], [tertiary subject],
[their relationship/interaction], [setting], [style]

Spatial Language

Use clear spatial descriptors:

In the foreground, [subject A]
In the middle ground, [subject B]
In the background, [subject C]

Or:

On the left, [subject A]
In the center, [subject B]
On the right, [subject C]

Example

Elderly grandmother and young granddaughter baking together in
sunlit kitchen, grandmother guiding child's hands rolling dough,
flour dusting the wooden counter, warm afternoon light from window,
vintage kitchen appliances in background, intimate family moment,
documentary photography style
--ar 3:2 --s 100 --v 7

Text Rendering

V7 dramatically improved text rendering in images.

Best Practices

Keep text short:

- Single words work best
- Short phrases (2-4 words) usually work
- Long sentences often fail

Use quotation marks:

Neon sign reading "OPEN" in storefront window

Specify typography:

Vintage poster with "JAZZ NIGHT" in art deco typography

Text Examples

Signage:

Rainy city street at night, neon diner sign reading "EAT" glowing
red through rain-streaked window, film noir atmosphere
--ar 16:9 --s 150 --v 7

Typography:

Minimalist book cover design, large serif typography reading "THE END"
centered on cream paper texture, literary fiction aesthetic
--ar 2:3 --s 100 --v 7

Limitations

Text rendering still struggles with:

- Long sentences
- Complex fonts
- Small text in busy images
- Multiple text elements

Expert tip: If text is critical, generate the image without text and add typography in post-processing.


The Iteration Loop

Professional workflow for Midjourney:

Phase 1: Explore (Draft Mode)

1. Enable Draft mode (--draft)
2. Write basic prompt with core concept
3. Generate 4-8 batches quickly
4. Identify promising directions
5. Note what works/doesn't

Goal: Find direction, not perfection. Speed matters.

Phase 2: Refine

1. Disable Draft mode
2. Take best concepts from Phase 1
3. Add specific details
4. Adjust parameters (--s, --chaos, etc.)
5. Generate in Fast mode
6. Compare variations

Goal: Narrow down to 2-3 strong options.

Phase 3: Perfect

1. Select best candidate
2. Note the seed
3. Make micro-adjustments to prompt
4. Use same seed for consistency
5. Upscale final choice

Goal: Polish the winner.

Time Allocation

| Phase | Time | Mode |
| --- | --- | --- |
| Explore | 60% | Draft |
| Refine | 30% | Fast |
| Perfect | 10% | Fast |

Most users invert this, spending too long perfecting first attempts. Explore more, perfect less.

Describe on Web

Right-click any image on the web interface and select “Describe” to generate four text prompts from the image.17 This is invaluable for reverse-engineering styles you admire—describe an image from the Explore page, then modify the resulting prompts to match your vision. Prompts auto-clear on page refresh.


Cost Management

Understanding GPU Time

  • Fast Mode: Uses GPU hours from your subscription
  • Relax Mode: Unlimited but queued (Standard+ plans)
  • Draft Mode: Half the GPU cost of regular
  • Video: ~8x the cost of images

Subscription Value

| Plan | Fast Hours | Relax | Video Relax | $/GPU Hour |
| --- | --- | --- | --- | --- |
| Basic | 3.3 hrs | No | No | $3.03 |
| Standard | 15 hrs | Yes | No | $2.00 |
| Pro | 30 hrs | Yes | Yes | $2.00 |
| Mega | 60 hrs | Yes | Yes | $2.00 |

Insight: Standard+ plans have much better value per GPU hour, plus unlimited Relax.

Cost-Saving Strategies

  1. Explore in Draft mode - Half cost, 10x faster
  2. Use Relax for exploration - Free (Standard+)
  3. Save Fast for finals - Only when quality matters
  4. Batch similar prompts - More efficient than one-offs
  5. Plan before generating - Think, then generate

Estimating Usage

| Action | Approx. GPU Minutes |
| --- | --- |
| 4 images (standard) | ~1 min |
| 4 images (draft) | ~0.5 min |
| Upscale | ~0.5 min |
| Video (4x 5 sec) | ~8 min |
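The per-action estimates above make session budgeting a one-liner. A sketch using only the table's approximate values (action names and the helper are illustrative):

```python
# Approximate GPU minutes per action, from the table above.
GPU_MINUTES = {
    "images_standard": 1.0,   # 4 images, standard
    "images_draft": 0.5,      # 4 images, draft mode
    "upscale": 0.5,
    "video_batch": 8.0,       # 4x 5-second videos
}

def session_minutes(actions: dict) -> float:
    """Total estimated GPU minutes for a session of {action: count}."""
    return sum(GPU_MINUTES[name] * count for name, count in actions.items())

# Example: a draft-first workflow like the one this guide recommends.
plan = {"images_draft": 10, "images_standard": 3, "upscale": 1, "video_batch": 1}
print(session_minutes(plan))  # 16.5 GPU minutes
```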

Troubleshooting

Common Issues

| Issue | Cause | Fix |
| --- | --- | --- |
| Blurry faces | Low --s or style conflict | Use --style raw, increase detail prompts |
| Wrong aspect | Default 1:1 | Specify --ar explicitly |
| Too artistic | High --s | Lower to 50-100 |
| Too literal | Low --s | Increase to 200+ |
| Inconsistent outputs | Random seed each run | Use --seed for consistency |
| Style overpowering | High --sw | Reduce --sw weight |
| Text not rendering | V7 limitation | Keep text short, use quotes |
| Hands look wrong | AI limitation | Crop or regenerate |
| Rooms not found | Feature removed Feb 26, 2026.16 | Use folders and the Organize page instead |

Parameter Conflicts

Avoid combining:

- --style raw + high --s (contradictory)
- --v 7 + --niji (pick one)
- Multiple strong references at 100% weight
- --exp 50+ + --stylize (exp overwhelms)
- --exp 50+ + --p (exp overrides)

Works well:

- --oref + --sref at moderate weights
- --chaos + --seed (varied but reproducible)
- --style raw + low --s (maximum photorealism)
- --exp 10-25 + --s 100-200 (enhanced, controlled)
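If you batch-generate prompts, the conflict list above can be encoded as a simple lint pass over a parameter dict. A minimal sketch (the helper and its thresholds are illustrative, checking only the combinations named above; "high --s" is assumed to mean above ~300):

```python
def lint_params(p: dict) -> list:
    """Warn about parameter combinations this guide flags as conflicting."""
    warnings = []
    if p.get("style") == "raw" and p.get("s", 0) > 300:  # assumed "high --s" cutoff
        warnings.append("--style raw + high --s are contradictory")
    if "v" in p and "niji" in p:
        warnings.append("--v and --niji: pick one model")
    if p.get("exp", 0) >= 50 and "s" in p:
        warnings.append("--exp 50+ overwhelms --stylize")
    if p.get("exp", 0) >= 50 and p.get("p"):
        warnings.append("--exp 50+ overrides --p")
    return warnings

print(lint_params({"style": "raw", "s": 800}))  # flags the raw/high-s conflict
print(lint_params({"exp": 15, "s": 150}))       # [] -- a recommended combination
```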

When Nothing Works

  1. Simplify - Remove parameters, shorten prompt
  2. Split - Try subject and style separately
  3. Seed hunt - Generate many, find good seed, iterate
  4. Reference - Use --sref with image showing your goal
  5. Version - Try different model version

Version Migration

V6 to V7 Migration

Old V6 style:

portrait, beautiful woman, dramatic lighting, 8k, detailed, masterpiece

New V7 style:

A contemplative portrait of a woman in her 30s, Rembrandt lighting
casting gentle shadows across her face, medium format photography
aesthetic with shallow depth of field

Key Changes

| Aspect | V6 | V7 |
| --- | --- | --- |
| Prompt style | Keywords | Natural language |
| Quality words | Helpful | Mostly ignored |
| Character ref | --cref | --oref |
| Personalization | Optional | Default |
| Default behavior | Stylized | More literal |

What to Stop Doing

  • Keyword spam (“beautiful, stunning, amazing”)
  • Quality modifiers (“8k, ultra detailed, masterpiece”)
  • Using --cref (it’s --oref now)
  • Short, comma-separated prompts

What to Start Doing

  • Write full sentences
  • Describe what you see, not what you want
  • Be specific about lighting, materials, mood
  • Use camera/lens terminology
  • Leverage personalization (--p)

Parameter Cheat Sheet

MODELS
--v 8           V8 Alpha (~5x faster, native 2K, best text) (Mar 2026)
--v 7           Default, best overall (June 2025)
--niji 7        Anime/manga (Jan 2026, best coherence)
--niji 6        Anime/manga (legacy, has --style options)
--draft         Fast iteration, 10x faster, half cost

V8-SPECIFIC
--hd            Native 2K resolution (4x cost)
--q 4           Extra coherence mode (4x cost)

ASPECT
--ar 16:9       Widescreen
--ar 21:9       Cinematic ultrawide
--ar 4:5        Portrait (Instagram)
--ar 6:11       Tall portrait (phone wallpapers)
--ar 9:16       Vertical (Stories)
--ar 1:1        Square
--ar 3:2        Classic photo
--ar 2:3        Portrait print

STYLE
--s 0-100       Photorealistic
--s 100-300     Balanced
--s 300-1000    Artistic
--style raw     Minimal AI interpretation
--p             Apply personalization (V7 default)

EXPERIMENTAL
--exp 0-100     Enhanced detail (10-25 sweet spot)
--chaos 0-100   Output variety
--weird 0-3000  Unconventional aesthetics

REFERENCES
--oref [url]    Subject/character (V7)
--ow 0-1000     Omni weight (default 100)
--sref [url]    Style transfer
--sw 0-1000     Style weight (default 100)
--iw 0-2        Image weight (default 1)

VIDEO (Web only)
--motion low    Subtle movement (default)
--motion high   Dynamic movement
--raw           More prompt control

QUALITY (V7 values: 1, 2, 4; different from V6)
--q 1           Standard quality (default)
--q 2           Higher detail, 2x cost
--q 4           Maximum detail, 4x cost
--seed [num]    Reproducibility

NEGATIVE
--no [items]    Exclude elements

Changelog

Date Change Source
2026-04-01 Added V8.1 training run details (announced March 21, targeting improved aesthetics/coherence/image prompts, 1-3 weeks). Added V8 Alpha known issues (over-processed aesthetic, reduced stylization range, limited abstraction, age drift, Minecraft effect, alpha instability). Added V8 Alpha prompt tips (--style raw, cinematographic lighting, medium precision, negative prompting patterns). Noted current V8 Alpha is temporary software to be replaced by V8.1. 232425
2026-03-23 V8 Alpha post-launch updates: Relax mode now available for Standard/Pro/Mega (except --hd + --q 4 combined). New SREF/Moodboards version (--sv 7) is 4x faster/cheaper, supports --hd, --p, --stylize, --exp. Moodboards default to --sv 7. Updated guide timeline to reflect actual launch. 22
2026-03-17 V8 Alpha launched at alpha.midjourney.com. ~5x faster generation, native 2K via --hd, extra coherence via --q 4, dramatically improved text rendering and instruction-following. --chaos, --weird, --exp, --raw, --stylize supported. --hd/--q 4/sref/Moodboard jobs cost 4x. Relax mode unavailable at launch. Full backwards compatibility with V7 profiles/moodboards/srefs. New UI: conversation mode, Grid Mode, sidebar settings. 21
2026-03-13 Added V8 final rating round (Feb 20, personalization calibration). Added V8 Relax mode unavailable at launch caveat. Corrected V7 quality parameter values (1, 2, 4). Added --sv 6 for V6.1 sref codes. V8 still not released as of March 13. 20
2026-03-12 Confirmed Niji 7 fully supports Personalization and Moodboards (Feb 26 update). Updated personalization interface description (grid replaces 1v1 comparisons). Removed “may not be fully available” caveat from Niji 7 section. V8 still not released as of March 12. 19
2026-03-07 V8 confirmed functionally complete and launch-ready (March 4 office hours). Updated timeline to mid-March release. Added launch caveats (image prompting/variations may differ). Added post-V8 roadmap (editing model, V2 video model with new compute cluster). Added exact video GPU minute costs and updated plan tier table with HD/Relax details from official docs. 18
2026-03-03 Updated V8 timeline (distillation run late Feb, opt-in release early March, ~30-day pre-alpha before default). Added --profile moodboard syntax and blending with --sref codes. Added Describe on Web feature. Added Rooms removal (Feb 26). 1617
2026-02-28 Updated V8 status (still pending as of Feb 28, native 2K confirmed, architectural rewrite). Enhanced moodboard/profiles section (multiple named profiles, 5x faster setup, stability tiers refined to 40/200/2000). 1415
2026-02-17 V8 status: final polish phase, multiple rating parties mid-Feb, release imminent. Confirmed V8 features (style refs, moodboards, editing). Added 6:11 aspect ratio, --ow interaction guidance, web platform updates (batch ops, auto param cleanup). 1213
2026-02-09 Updated V8 status (internal testing, rating party, TPU→GPU switch, new creation flow), enhanced Niji 7 details (--sref drift, eye quality, --cref alternative) 910
2026-01-20 Added HD Video mode section (4x resolution, ~3.2x cost, Pro/Mega only) 8
2026-01-17 Added V8 development status, Moodboards section, --sv 4 for legacy sref codes Web scan
2026-01-16 Added V7.1 roadmap info, verified Niji 7 coverage Web scan
2026-01-13 Guide created with V7, Niji 7, video coverage Multiple
2026-01-09 Niji 7 released with improved coherence 3
2025-06-19 V1 Video Model released 4
2025-06-17 V7 became default model 2
2025-04-30 V7 update: --exp parameter, editor improvements 5
2025-04-03 V7 released 2

References


  1. Midjourney Updates. Official changelog and announcements. 

  2. Midjourney Version Documentation. “Version 7 was released on April 3, 2025, and became the default model on June 17, 2025.” 

  3. Niji V7 Announcement. “Niji V7 is now live” - January 9, 2026. 

  4. V1 Video Model. Video generation released June 19, 2025. 

  5. V7 Update, Editor, and --exp. April 30, 2025 update details. 

  6. V8 Development Discussion. Community discussion on V8 training and roadmap details from David Holz Q&A. 

  7. Moodboards Feature. Midjourney personalization via moodboards and image rating. 

  8. HD Video Mode. “HD Video mode delivers 4x sharper AI-generated clips… costs roughly 3.2 times more than SD.” August 2025. 

  9. Office Hours Jan 22. V8 in final tuning, 3D functionality, mobile app plans, batch mode expansion. 

  10. Office Hours Feb 12. Rating party signaling V8 release, hardware projects, real-time 3D research. 

  11. V8 Development Overview. TPU to GPU/PyTorch switch, V8 mini variant, Style Creator, new dataset. 

  12. V8 Rating Party Updates. Multiple rating parties week of Feb 16, V8 release expected shortly after. Confirmed features: style refs, moodboards, personalization, weird, style creator, upscaling, editing. 

  13. Web Updates Jan 20, 2026. Added 6:11, 4:5, 5:4, 21:9 aspect ratios, batch operations for 2000 items, automatic irrelevant parameter stripping. 

  14. V8 Release Status. “Midjourney V8 could drop next week” — native 2K resolution, complete architecture rewrite, dramatically improved text rendering. Late February 2026. 

  15. Profiles and Moodboards. Multiple named profiles, 5x faster setup, select multiple active profiles, 40 ratings to start, stable by 200, improves up to 2000. 

  16. V8 Distillation and Release Timeline. Final distillation run began late February, ~1 week duration, then opt-in release with ~30-day pre-alpha before becoming default. Rooms feature removed February 26, 2026. 

  17. Describe on Web + Moodboard Blending. Right-click Describe generates 4 text prompts from any image. Moodboard blending with --sref codes and --profile parameter for direct moodboard ID usage. 

  18. V8 Functionally Complete — March 4 Office Hours. David Holz confirmed V8 is “functionally complete and launch-ready.” Distillation about to begin. Speed gains significant even for Turbo users. Image prompting/variations may differ during initial rollout. Post-V8 roadmap: editing model first, then V2 video model (new compute cluster in March enables larger video models). Also: Geeky Gadgets V8 overview

  19. Personalization and Web Updates. February 26, 2026. New personalization interface replaces 1v1 image comparisons with faster click-and-scroll grid. Personalization and Moodboards added to Niji 7. Rooms feature sunsetted. 

  20. V8 Rating Party - FINAL ROUND. February 20, 2026. Final round calibrating personalization systems specifically for V8. V8 launch approaching. Also: V8 Release Analysis — Relax mode won’t be available at V8 launch; Basic/Standard users forced to Fast/Turbo during initial rollout. 

  21. V8 Alpha Announcement. March 17, 2026. V8 Alpha available at alpha.midjourney.com. ~5x faster generation, native 2K via --hd, extra coherence via --q 4, improved text rendering (use “quotes”), --stylize up to 1000 recommended. --hd/--q 4/sref/Moodboard jobs cost 4x. Relax mode unavailable. Full V7 backwards compatibility. New UI: conversation mode, Grid Mode, sidebar settings. 

  22. Relax Mode for V8 Alpha. March 21, 2026. Relax mode now available for Standard, Pro, and Mega subscribers in V8 Alpha (all commands except --hd + --q 4 combined). New default SREF/Moodboards version (--sv 7) is 4x faster and 4x cheaper, supports --hd, --p, --stylize, and --exp. Moodboards also default to --sv 7

  23. V8.1 Training Announcement. March 21, 2026. “A big training run for a new version of V8 (it might even be called V8.1)” announced alongside Relax mode. Targeting improved default aesthetics, creativity, coherence, image prompts, better moodboards/srefs, and possibly 2K default resolution. Expected 1-3 weeks. Current V8 Alpha described as temporary software to be replaced. 

  24. Midjourney V8 Alpha Strengths, Weaknesses, and Prompt Tips. MindStudio analysis of V8 Alpha. Over-processed aesthetic, reduced --stylize range at extremes, limited abstraction capability. Prompt tips: cinematographic lighting specificity, --style raw for natural results, photographer/director references, medium precision, effective --no patterns. 

  25. Midjourney 8 vs 7: Why AI Creators Are Switching Back. Geeky Gadgets comparison. Documents “Minecraft effect,” age drift, web-only access limitation, and community criticism of V8 Alpha’s artistic limitations. V8.1 refinements planned.