Finding Your AI Art Style ("Production Design AI assistant"?)
PLUS: Alibaba's shocking new AI video model, and the “uncanny valley” of AI speech

Good Thursday, my fellow soloists!
Coolest AI Film Soloist of the week!
This week I want to present to you an AI short film made by Abel Art. I think he nailed it!
How did Abel Art achieve this level of quality using AI tools?
What stands out, in my opinion, is not just the initial concept, the crafting of the shots, the dynamic movement added to the images, or the editing, sound design, and musical composition, but how the artist navigates one of the most challenging aspects of AI-generated visuals: maintaining a consistent style.
So, would you like to know how to create your own personal style and look, and apply it consistently to build cohesive projects?
Keep reading!
🚨 Your feedback is immeasurably valuable 🚨 Respond to this email and tell me how you think we could add more value to this newsletter. I really appreciate it!

Finding Your AI Art Style with MidJourney
The Challenge of Consistency in AI Art
One of the hardest parts of using AI art software is figuring out how to find your style and use it consistently. Playing with these tools is insanely fun, and you could experiment until the end of humanity, but when you want to create a cohesive project, consistency becomes essential.

“Nosferatu” by Robert Eggers, 2024
The most immersive films have a carefully studied production design—think Nosferatu, Blade Runner, or Panic Room (yes, go see the making of!). Each of these movies has a recognizable aesthetic, a world that feels intentional and lived-in. That’s the level of style consistency we’re aiming for with AI-generated art.
Are you ready to iterate A LOT? Let’s do this!
Different Paths to the Same Goal
The great (and sometimes overwhelming) thing about AI art tools is that there isn’t just one way to achieve a specific style. You can arrive at your goal through different methods, but today I’ll break down the steps I’ve found most useful for defining and replicating a consistent style using MidJourney.
I believe MidJourney is one of the most powerful tools for creating cinematic and artistic images. The downside? It’s not free. But if you’re serious about crafting a distinctive style, it’s worth the investment.
Balancing Play & Structure in AI Art
This technology is so mind-blowing that it’s easy to get lost in endless iterations without a clear structure or goal. And that’s okay; playing and experimenting is a very important part of the process. But at some point, we will need to move from exploration to execution. That’s when the challenge begins:
Defining your style + Repeating it consistently
This used to be a major issue, but with new MidJourney features, style consistency is becoming easier to manage.
The Game-Changer: Moodboards
One of the most exciting new features is Moodboards. This allows you to:
Upload images that represent the style you want to achieve.
Generate a style code that can be added to every prompt.
Ensure visual consistency across all your AI-generated images.
Understanding How MidJourney Works
MidJourney already has a default style: a mix of all the images it has been trained on plus its own algorithms. If you prompt something like “a dog in a street,” it will use its built-in model to generate an image.

Prompt: A dog in the street
However, each generation involves a degree of chaos, or randomness, that produces a unique output every time. If you want to control the results, you need specific tools like --sref (style reference) and Moodboards.
Moodboards vs. --sref (Style Reference)
If you’re confused about when to use --sref vs. Moodboards, here’s the key difference:
Moodboards: Define your overall style: the artistic direction, textures, and color palettes. Think of it as your AI production designer assistant.
--sref: Controls the specific composition or look of an image. It lets you refine an individual shot within your established style.
A good way to approach it:
Use Moodboards first to establish a consistent artistic style.
Then use --sref for fine-tuned composition adjustments.
Applying This to Film & Design Projects
When developing a project (film, advertisement, or artwork), the first step is to define the visual language with your team, whether that’s your director, production designer, or concept artist. This process is known in the industry as creating the visual bible.

You could use The Shining’s visual style as a reference for your next project
Now, as an AI Film Soloist, you can do this process yourself.
Think of the Moodboard as an AI production designer assistant, allowing you to digitally curate your visual direction.
How to Recreate This Process with AI
1. Using Moodboards Directly

Collect image references (from the internet, your past AI generations, or your own work).
Upload these to a Moodboard profile.
Once added, the Moodboard generates a style code.
Use this style code in all prompts for consistent aesthetic output.

Prompt: A police officer in a dark room at night / Moodboard applied
Fine-tune with the stylize slider (--s 0-1000) to see how much influence the Moodboard has over the generations (see the sketch at the end of this section).
Prompt: A police officer in a dark room at night / Moodboard applied / --s 900
Chaos and Moodboards: throwing “chaos” into prompts can be fun when combined with your personalized Moodboard code.
Tip: I’ve noticed that Moodboards base their generations on the look, camera angles, lenses, and colors of the reference images, not on the types of characters, objects, or props that appear in them.
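To make this concrete, here is a minimal sketch in plain Python (not any official MidJourney API, just string-building) of how you could prepare a batch of prompts that keep the same Moodboard code but sweep the --s value, so you can paste them in one by one and compare the Moodboard’s influence. The profile code and base prompt are placeholders; substitute the code your own Moodboard generates.

```python
# Minimal sketch: build a batch of prompts that keep the same Moodboard code
# but sweep the stylize (--s) value, so you can compare its influence.
# "abc123" is a placeholder; substitute the code your own Moodboard generates.

MOODBOARD_CODE = "abc123"  # hypothetical Moodboard profile code
BASE_PROMPT = "a police officer in a dark room at night"

def moodboard_prompt(description: str, stylize: int) -> str:
    """Append the Moodboard profile code and a stylize value to a base prompt."""
    return f"{description} --profile {MOODBOARD_CODE} --s {stylize}"

# Low, medium, and high stylize values (the slider runs 0-1000).
for s in (100, 500, 900):
    print(moodboard_prompt(BASE_PROMPT, s))
```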
2. Refining Through Prompts
Start with a basic prompt.
Gradually add descriptive elements (e.g., specific adjectives, mediums, camera types, lenses, and lighting effects).
A police officer (basic prompt) / A police officer, Mystical (basic prompt + adjective)
Adjust one or two elements at a time to understand their effect.
This method helps you learn how MidJourney interprets prompts and refine your process (see the sketch below).
If Moodboards generate inconsistent results, you can:
Use your previous best generations as style references: drag the reference image into the prompt window and click the clip symbol.
Use the “Vary Strong” feature to iterate from a specific generation: hover over the image and click “Vary Strong.”
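As a rough illustration of the “one or two elements at a time” idea, here is a small Python sketch that layers descriptive elements onto a base prompt, producing one version per addition so each generation isolates the effect of a single change. The specific adjective, medium, lens, and lighting terms are only examples, not a required vocabulary.

```python
# Sketch: layer descriptive elements onto a base prompt one at a time,
# so each generation isolates the effect of a single addition.
# The elements below are examples only; swap in your own vocabulary.

base = "a police officer"
layers = [
    "mystical",                           # adjective
    "shot on 35mm film",                  # medium
    "50mm lens, shallow depth of field",  # lens
    "low-key lighting, hard shadows",     # lighting
]

prompt = base
print(prompt)  # version 0: the basic prompt
for element in layers:
    prompt = f"{prompt}, {element}"
    print(prompt)  # each version adds exactly one new element
```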
3. Mixing Both Methods for Ultimate Control
To achieve a highly refined and polished style, I use this triad process (see the sketch at the end of this section):

Triad process prompt
✅ Structured prompt formula (for textual control)
✅ Best past generations as style references (--sref for composition control)
✅ Moodboard profile code (for overall aesthetic consistency)

Prompt: A police officer in a dark room at night, mystical --profile moodboard, stylize 500
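If it helps, here is a hedged sketch of how the three ingredients could be assembled into one prompt string: the structured text formula, an --sref pointing to a previous best generation, and the Moodboard profile code with a stylize value. The image URL and profile code below are placeholders for your own assets.

```python
# Sketch of the triad: structured prompt + --sref (composition) + Moodboard profile (style).
# The image URL and profile code are placeholders for your own best generation and Moodboard.

def triad_prompt(
    subject: str,
    descriptors: str,
    sref_url: str,       # previous best generation used as a style reference
    profile_code: str,   # your Moodboard profile code
    stylize: int = 500,
) -> str:
    """Assemble one prompt string from the three ingredients described above."""
    return f"{subject}, {descriptors} --sref {sref_url} --profile {profile_code} --s {stylize}"

print(triad_prompt(
    subject="a police officer in a dark room at night",
    descriptors="mystical",
    sref_url="https://example.com/best-shot.png",  # placeholder URL
    profile_code="abc123",                         # placeholder code
))
```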
Why This Process Works
By following these steps, you’ll develop a distinct, repeatable style, one that feels uniquely yours. Even better, this approach is collaborative: you can share your Moodboard profile codes with others so they can build upon and refine the style further.
Final Thought: The ability to define and refine your visual style is what separates casual AI art from professional-level creative work. By combining structure with experimentation, you can transform your AI-generated art into something truly cinematic and immersive.

Best AI News links of the week
Image and video generation technologies
🛠️ AI Art Technology and Tools
Alibaba’s new open-source SOTA AI video suite, Wan 2.1, is here!
Project Starlight by Topaz Labs brings old videos back to life with AI
The “uncanny valley” of AI speech is here. Hume AI released Octave, a text-to-speech LLM that understands emotional context, AND Sesame unveiled an early glimpse of its expressive voice technology
Magnific “Structure Reference” lets you transform the look of any image while retaining the structural integrity of the subjects.
Ideogram launched its 2a model, a major update to the text-to-image platform.
Pika Labs upgrades AI video with 10-second clips, 1080p, and key frame control
Kaiber AI launched SuperStudio Pro, which provides a single infinite canvas where you can generate videos and images
OpenAI confirmed plans to integrate its Sora video-generation tool directly into the ChatGPT interface
📈 Market trends and Industry Impact
‘Emilia Pérez’ and ‘The Brutalist’ controversies, explained. I made a video about it too ;)
My Broader AI News picks so you’re in the loop
🤖 Tech Companies, Product Launches & Innovations, and LLM stuff

I hope these news items, tools, and insights help you prepare for the future!
Have a really nice end of the week and weekend.
Stay kind.
Rafa Tirado