About Michael Korican

A long-time media artist, Michael has been making films since 1978. He graduated from York University film school with Special Honours, winning the Famous Players Scholarship in his final year. The Rolling Stone Book of Rock Video called his first feature 'Recorded: Live!' "the first film about rock video". Michael served on the board of L.I.F.T. when he lived in Toronto during the eighties and managed the Bloor Cinema for Tom and Jerry. He has been prolific over his eight years in Victoria, making over thirty-five shorts, winning numerous awards, producing two works for BravoFACT! and receiving development funding for 'Begbie’s Ghost' through the CIFVF and BC Film.

Consistent Characters on OpenArt

Roboverse just revealed why some consistent AI characters look so good.

He demos OpenArt, where you can train a consistent character from:

  • a text prompt,
  • a single image, or
  • multiple images

He says, “The character weight slider controls how strongly your character’s features are preserved in the generated image. At higher values like 0.8 or 0.9 your character’s features will be strongly preserved, resulting in very consistent appearances…. Next is the preserve key features toggle that when turned on instructs the AI to maintain a very consistent appearance, particularly for elements like clothing, hairstyle and accessories. When turned off you can change their clothing and environment while keeping their face consistent.”
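As a rough sketch, the two controls he describes could be expressed as request parameters. The function and field names below are illustrative assumptions, not OpenArt's actual API:

```python
# Hypothetical sketch of OpenArt's character-consistency controls as
# request parameters. The names here are assumptions for illustration,
# not OpenArt's real API.

def build_generation_request(prompt, character_id,
                             character_weight=0.8,
                             preserve_key_features=True):
    """Assemble a request dict for a character-consistent generation.

    character_weight: 0.0-1.0; higher values (e.g. 0.8-0.9) preserve the
    character's features more strongly.
    preserve_key_features: when True, clothing, hairstyle and accessories
    are also kept consistent; when False, only the face is preserved and
    wardrobe/environment can change freely.
    """
    if not 0.0 <= character_weight <= 1.0:
        raise ValueError("character_weight must be between 0 and 1")
    return {
        "prompt": prompt,
        "character_id": character_id,
        "character_weight": character_weight,
        "preserve_key_features": preserve_key_features,
    }

# New outfit and setting, same face: high weight, toggle off.
req = build_generation_request("walking on a beach at sunset",
                               "my-hero", 0.9, False)
```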

And concludes:

“I’ve tested pretty much every AI platform out there and I can honestly say that OpenArt is by far the best for creating consistent characters. Nothing else even comes close.”

My take: one of the neat things on the OpenArt home page is the “See what others are creating” section, which shows you the models and prompts other artists are using. I do wish Roboverse’s on-screen text didn’t flicker – it tires my eyes.

What filmmakers really want to know on Reddit

Stephen Follows analyzed over 160,000 questions on Reddit to uncover what filmmakers really ask, need and struggle with.

Amazingly, 10 questions accounted for 52% of the total. They are, quoting Stephen:

1. What camera and gear should I buy for filmmaking on my budget?

The search for the “right” camera and kit never ends, no matter how much technology shifts. People want to know what will give them industry-standard results without breaking the bank. The conversation includes price brackets, compatibility, and whether brand or model really matters to a film’s success.

2. How do I start a career in film or get my foot in the door?

This is the practical follow-up to the film school debate. Filmmakers want straight answers about first jobs, entry points, and which cities or skills lead to real work. Many people are looking for pathways that do not depend on family connections or luck.

3. Is film school worth it or do I need to go to film school to work in the industry?

Filmmakers want clarity on the value of a formal degree versus real-world experience. They are trying to weigh debt against opportunity and want to know if there are shortcuts, hidden costs, or alternative routes into the business.

4. Which editing software should I use?

Software choice raises both budget and workflow issues. Filmmakers want to know which tools are worth learning for professional growth. Questions focus on cost, features, compatibility, and what is expected in professional settings.

5. How do I find cast, crew, or collaborators for my film?

Building a team is a constant sticking point. Most low-budget filmmakers do not have a professional network and are looking for reputable ways to meet actors, crew, or creative partners. Trust and reliability are major concerns, as is the need for effective group communication.

6. What are the legal, rights, permits, and music aspects of filmmaking?

Legal uncertainty is widespread. Filmmakers are confused about permissions, copyright, insurance, and protecting their work and collaborators. They want step-by-step advice that demystifies the paperwork.

7. How do I improve as a filmmaker, cinematographer, editor, writer, director, etc?

Self-development is a constant thread. Filmmakers search for the best courses, books, tutorials, and case studies. Clear recommendations are valued and people want to know what separates average work from great films.

8. Is my gear, equipment, location, or crew good enough for filmmaking?

Questions about minimum standards reflect deeper anxieties about competing in a crowded field. People want reassurance that their toolkit will not hold them back and want to know how far they can push limited resources.

9. How do I submit my film to festivals, distribute it, or what happens after my film is done?

People want clear instructions on taking their finished work to the next level. Festival strategies, navigating submissions, and understanding distribution channels are a minefield. Filmmakers want to know how to maximise exposure and what steps make the biggest difference.

10. How do I get feedback or critique on my work?

Constructive criticism is in high demand. Filmmakers want practical advice on scripts, edits, and showreels. They look for honest reactions to their work and advice on how to keep improving.

My take: here are my answers:

  1. The camera on your smartphone is totally adequate to film your first short movie.
  2. Make your own on-ramp by creating a brand somewhere online with a minimum viable product – you need to specialize and dominate that niche. Or move to a large production centre.
  3. Maybe, if you can afford it and you’re a people person. Otherwise, spend the money on your own films because every short film is an education unto itself.
  4. Davinci Resolve. Free or Studio.
  5. Your local film cooperative. Don’t have one? Start your own.
  6. Google is your friend. Don’t sweat it too much (and create your own music) for your first short festival films. As soon as your product becomes commercial, you need an entertainment lawyer on your team.
  7. Watch movies, watch tutorials, make weekend movies to practice techniques, challenge yourself. Just do it.
  8. See Answers One and Seven. Note: this is an audiovisual medium; audiences will forgive visuals that fall short but WILL NOT forgive bad sound. Luckily, great sound is easily achievable today.
  9. FilmFreeway.com
  10. Send me a link to your screener; I’ll watch anything and give you free notes on at least three things to improve.

FREE AI Video Course for Beginners

Seattle’s Yutao Han, aka Tao Prompts, has just released a 17-minute YouTube tutorial on how to create your first, free, AI-generated short movie.

Let’s assume you already have a script. You can write, right? If not, your favourite LLM can help you ideate and flesh out your thoughts.

“To actually make the AI videos, the method we’ll be using is: Image to Video. What this means is we’ll take a reference image and then use an AI video generator to turn it into a video. After generating thousands and thousands of videos I found that using reference images is how you’re going to get the most consistent results and the highest quality overall.”

The tools he highlights for generating images?

  • Midjourney
  • ChatGPT
  • Leonardo
  • Recraft

“When you’re using Recraft the resolution of the generated images is already pretty high at 1820 by 1024. That’s plenty enough for pretty much any AI video generator to get the maximum quality.”

The AI Video Generators he highlights?

  • Kling
  • Runway
  • Google Veo
  • Sora
  • Luma Labs
  • Pika Labs
  • Hailuo AI

Next processes? Generating voices, lip-syncing the audio, generating music and editing everything together.

My take: he calls this AI Animation, and it does follow the traditional animation process much more closely than live-action filmmaking.

Beyond chat: AI mini-agents ready to work for you

Gemini Gems are AI mini-agents that you can create with specific instructions and knowledge files, making them experts in particular tasks.

According to Google:

“Gems let you customize Gemini to create your own personal AI expert on any topic, and are starting to roll out for everyone at no cost in the Gemini app. Get started with one of our premade Gems or quickly create your own custom Gems, like a translator, meal planner or math coach. Just go to the “Gems manager” on desktop, write instructions, give it a name and then chat with it whenever you want. You can also upload files when creating a custom Gem, so it can reference even more helpful information.”

Some of the pre-made Gems:

  • Brainstormer: Helps generate ideas and concepts.
  • Career guide: Assists with career planning and job searches.
  • Coding partner: Provides support for coding tasks.
  • Learning coach: Helps with studying and learning new topics.
  • Writing editor: Assists with grammar, style, and clarity.

Google suggests using this format when writing instructions: Persona / Task / Context / Format. For instance, this is their prompt for Brainstormer:

Persona
  Your purpose is to inspire and spark creativity. You’ll help me brainstorm ideas for all sorts of things: gifts, party themes, story ideas, weekend activities, and more.
Task
  • Act like my personal idea generation tool coming up with ideas that are relevant to the prompt, original, and out-of-the-box.
  • Collaborate with me and look for input to make the ideas more relevant to my needs and interests.
Context
  • Ask questions to find new inspiration from the inputs and perfect the ideas.
  • Use an energetic, enthusiastic tone and easy to understand vocabulary.
  • Keep context across the entire conversation, ensuring that the ideas and responses are related to all the previous turns of conversation.
  • If greeted or asked what you can do, please briefly explain your purpose. Keep it concise and to the point, giving some short examples.
Format
  • Understand my request: Before you start throwing out ideas, clarify my request by asking pointed questions about interests, needs, themes, location, or any other detail that might make the ideas more interesting or tailored. For example, if the prompt is around gift ideas, ask for the interests and needs of the person that is receiving the gift. If the question includes some kind of activity or experience, ask about budget or any other constraint that needs to be applied to the idea.
  • Show me options: Offer at least three ideas tailored to the request, numbering each one of them so it’s easy to pick a favorite.
  • Share the ideas in an easy-to-read format, giving a short introduction that invites me to explore further.
  • Location-related ideas: If the ideas imply a location and, from the previous conversation context, the location is unclear, ask if there’s a particular geographic area where the idea should be located or a particular interest that can help discern a related geographic area.
  • Traveling ideas: When it comes to transportation, ask what is the preferred transportation to a location before offering options. If the distance between two locations is large, always go with the fastest option.
  • Check if I have something to add: Ask if there are any other details that need to be added or if the ideas need to be taken in a different direction. Incorporate any new details or changes that are made in the conversation.
  • Ask me to pick an idea and then dive deeper: If one of the ideas is picked, dive deeper. Add details to flesh out the theme but make it to the point and keep the responses concise.
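The Persona / Task / Context / Format structure above can be sketched as a small helper that assembles instruction text in that fixed order. The function is illustrative only; Gems are actually created in the Gemini app's Gems manager:

```python
# Sketch of assembling Gem instructions in Google's suggested
# Persona / Task / Context / Format order. Illustrative only: in
# practice you paste instructions into the Gems manager by hand.

def build_gem_instructions(persona, tasks, context, format_rules):
    sections = [
        ("Persona", [persona]),
        ("Task", tasks),
        ("Context", context),
        ("Format", format_rules),
    ]
    lines = []
    for title, items in sections:
        lines.append(title)
        for item in items:
            lines.append(f"  - {item}")
    return "\n".join(lines)

instructions = build_gem_instructions(
    persona="Your purpose is to inspire and spark creativity.",
    tasks=["Act like my personal idea generation tool."],
    context=["Use an energetic, enthusiastic tone."],
    format_rules=["Offer at least three numbered ideas."],
)
```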

My take: Google’s Gems are similar to OpenAI’s custom GPTs. I’ve made a few for my own use and they work very well, even in a free Google account. Canada now has a federal Minister of AI and Digital Innovation – maybe it’s time to bite the bullet and start exploring?

CineVic’s best film festival ever!

CineVic just concluded the best-ever Short Circuit Pacific Rim Film Festival in Victoria, BC, Canada, last weekend.

A new addition this year was Indie+Industry on the closing day that began with brunch at Vista 18 on top of the Chateau Victoria.

Then Johnny Brenneman and David Malysheff told us all about creating their new 8-episode series Bon Victoriage! for Telus Storyhive.

The next session saw Panta Mosleh, Shiraz Higgins and Heather Lindsay share their experiences and hard-won wisdom.

The last panel saw Daryl Litke (ACFC West Local 2020 Unifor), Wendy Newton (ICG 669), Andrea Moore (DGC BC) and Michael Rosser (IATSE 891) explain the benefits of working with unions and guilds.

My take: I met great people and got some valuable feedback on my upcoming feature documentary. Definitely CineVic’s best film festival yet, and, finally, after years of contributing in various ways, I wasn’t involved at all.


Gaze control, at last

Haydn Rushworth shares his AI filmmaking journey with all on YouTube and says, “Finally, AI Filmmaking Tools I DESPERATELY Need!”

He has a list of 18 categories of things he feels filmmakers need to be able to specify, and says:

“Number one and number two for me are gaze control and expression control.”

He explains:

“The reason you need gaze control or eye control is because where a character is looking in a story tells you everything about what they want or what they don’t want, what they’re afraid of. It shows you what their desires are, what their hopes, their dreams, their aspirations, the thing that they’re working towards. The thing that is most important to them in any given moment is revealed through what they are looking at.”

Dzine to the rescue! See their new Face Kit Expression Edit in action below.

He squeals, “She’s looking at the guy. She’s looking at the guy. She’s looking at the guy!”

Here’s the full tutorial:

My take: Haydn is right. More control is critical for all AI filmmakers.

New research into dialogue in movies

Stephen Follows asks the question: “Has the way movies use dialogue changed?”

The answer is simply, “Yes.”

In fact, according to his research, it is constantly changing. Using the subtitles from over 60,000 movies, his analysis allows these conclusions:

  • Comedies contain more dialogue than Horror flicks (duh)
  • Films from the Forties had more spoken words than those from the Seventies (and Oughts as well)
  • The share of runtime devoted to dialogue is now 50%, a level not seen since the Fifties.

My take: this is fascinating research and well worth the read. It calls into question the long-standing entreaty to “show, don’t tell.”

AI Video Prompt Battle!

Kevin Hutson of Futurepedia.io has just “Tested The Most Complex AI Video Prompts to See What’s Possible.”

The four AI video generators he visited are:

He concludes:

“We’re already to the point where you can make videos indistinguishable from reality or create entire short films and this will only keep getting better.”

My take: Very interesting to see where we are today — and arguably these are not the latest cutting-edge tools.

Tim’s AI Workflow for “The Bridge”

Tim Simmons of Theoretically Media made a CGAI (Computer Generated Artificial Intelligence) short film using Google’s new Veo 2 model:

He completes the package by taking us behind the scenes to reveal his workflow:

The software or services he used and their cost per month (or for this project)? See below:

  1. Midjourney – $30 (images)
  2. Gemini – free (prompts)
  3. ElevenLabs – $22 (voice)
  4. Hume – free (voice)
  5. Udio – $10 (music)
  6. Hedra – $10 (lip sync)
  7. Premiere – $60 (NLE)
  8. RunwayML – $30 (stylize)
  9. Magnific – $40 (creative upscale)
  10. Veo 2 – $1,500 (video at 50 cents/second)
  11. Topaz – $300 (upscale)
    TOTAL – $2,002 (plus 40 hours of Tim’s time)
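A quick sanity check on the arithmetic, using the figures from Tim's breakdown above:

```python
# Per-tool costs taken directly from Tim's breakdown above.
costs = {
    "Midjourney": 30, "Gemini": 0, "ElevenLabs": 22, "Hume": 0,
    "Udio": 10, "Hedra": 10, "Premiere": 60, "RunwayML": 30,
    "Magnific": 40, "Veo 2": 1500, "Topaz": 300,
}
total = sum(costs.values())  # matches the stated $2,002

# At 50 cents per generated second, the Veo 2 spend implies roughly
# 3,000 seconds (50 minutes) of raw generated footage.
veo_seconds = costs["Veo 2"] / 0.50
```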

In addition to the great AI news and advice, Tim is actually funny:

“At some point in the process Gemini and I definitely got into a bit of a groove and I just ended up ditching the reference images entirely. I have often said that working this way kind of feels a bit like being a writer/producer/director working remotely with a film crew in like let’s say Belgium and then your point of contact speaks English but none of the other department heads do. But like with all creative endeavours you know somehow it gets done.”

My take: Tim’s “shooting” ratio worked out to about 10:1, and there are many, many steps in this workflow. Basically, it’s a new form of animation — kinda takes me back to the early days of Machinima, which, in hindsight, was actually more linear than this process.

BONUS

Here is the Veo 2 Cheat Sheet by Henry Daubrez that Tim mentions.

1/ If you’re not using an LLM (Gemini, ChatGPT, whatever), you’re doing it wrong.

VEO 2 currently has a sweet spot when it comes to prompt length: too short is poor, while too long drops information, action, description, etc. I did a lot of back and forth to find my sweet spot, but once I got to a place that felt right, I used an LLM to help me keep my structure and length and to help me draft actions. I would then spend an extensive amount of time tweaking, iterating, removing words, changing their order, and adding others, but the draft would come from an LLM conversation I had built and trained to understand what my structure looked like and what counted as a success or a failure. I would also share the prompts that worked well, and the failures, for further reference. This ensured my LLM conversation became a true companion.

2/ Structure, structure, structure

Structure is important. Each recipe is different, but as with any generative text-to-something model, the “higher in the prompt carries more weight” rule seems to apply. So, in my case, I would start by describing the aesthetics I am looking for, the time of day, colors and mood, then move to camera, subject, action, and all the rest. Once again, you might have a different experience, but what is important is to stick to whatever structure you have as you move forward. Keeping it organized also makes it easier to edit later.
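One way to hold yourself to a fixed structure is to generate prompts from an ordered template. The field order below mirrors the recipe described above (aesthetics first, action last); the helper itself is just a sketch, not part of any VEO tooling:

```python
# Sketch: emit VEO 2 prompt fields in the same fixed order every time,
# with the highest-priority descriptors first. Field names are my own
# illustration of the recipe, not a VEO requirement.

FIELD_ORDER = ["aesthetics", "time_of_day", "colors", "mood",
               "camera", "subject", "action"]

def build_veo_prompt(**fields):
    # Skip any field left blank so edits later only change content,
    # never the overall structure.
    return ". ".join(fields[name] for name in FIELD_ORDER
                     if fields.get(name)) + "."

prompt = build_veo_prompt(
    aesthetics="Hand-painted watercolor look",
    time_of_day="golden hour",
    mood="quiet and wistful",
    camera="slow dolly-in",
    subject="a fox on a ridge",
    action="it turns toward the valley",
)
```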

3/ Only describe what you see in the frame

If you have a character you want to keep consistent, but you want a close-up on the face, for example, your reflex will be to describe the character from head to toe and then mention you want a close-up… It’s not that simple. If I tell VEO I want a face close-up but then proceed to describe the character’s feet, the close-up mention will be dropped by VEO. Once again, an LLM can help here: give it the instruction to only describe what is in the frame.

4/ Patience

Well, it can get costly to be patient, but even if you repeat the same structure, sometimes changing one word can still throw the entire thing off and totally change the aesthetics of your scene. It is by nature extremely consistent if you keep most words the same, but sometimes it happens. In those situations, retrace your steps and try to figure out which words are triggering the larger change.

5/ Documenting

When I started “Kitsune” (and did the same for all others), the first thing I did was start a Figjam file so I could save the successful prompts and come back to them for future reference. Why Figjam? So I could also upload 1 to 4 generations from this prompt, and browse through them in the future.

6/ VEO is the Midjourney of video

Currently, no other text-to-video tool (Minimax being the closest behind) has given me the feeling that I could provide strong art direction and actually get it. I have been a designer for nearly 20 years, and art direction has been one of the strongest foundations of most of my work. Dark, light, happy, sad, colorful or not – it doesn’t matter, as long as you have a point of view, and please… have a point of view. I recently watched a great video about the slow death of art direction in film, and oh boy, did VEO 2 deliver on making me feel listened to. Try starting your prompts with different kinds of medium (watercolor, for example), the mood you are trying to achieve, the kind of lighting you want, the dust in the rays of light, etc., which gets me to the next one.

7/ You can direct your colors in VEO

It’s as simple as mentioning the hues you want to have in the final result, in which quantity, and where. When I direct shots, I am constantly describing colors for two reasons: 1. Well, having a point of view and 2. reaching better consistency through text-to-video. If I have a strong and consistent mood but my character is slightly different because of text-to-video, the impact won’t be dramatic because a strong art direction helps a lot with consistency.

8/ Describe your life away

Some people asked me how I achieved good consistency between shots knowing it’s only text-to-video, and the answer is simple: I describe my characters, their unique traits, their clothing, their haircut, etc. – anything that could help someone visually impaired form a very precise mental representation of the subject.

9/ But don’t describe too much either…

It would be magical if you could stuff 3,000 words into the window and get exactly what you asked for, right? Well, it turns out VEO is amazing in its prompt adherence, but there is always a point where it starts dropping animations or visual elements when your prompt stretches on for a tad too long. This happens well before VEO’s character limit is reached, so don’t overdo it; it’s no use and will work against your results. For reference, 200-250 words seems like the sweet spot!
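A trivial guard against over-long prompts, using the 200-250 word sweet spot mentioned above (the thresholds are Henry's observation, not an official VEO limit):

```python
# Word-count check against the observed 200-250 word sweet spot.
# The thresholds are taken from Henry's cheat sheet, not from any
# documented VEO constraint.

def prompt_length_report(prompt, lo=200, hi=250):
    n = len(prompt.split())
    if n < lo:
        return n, "short: may be too sparse for a detailed scene"
    if n > hi:
        return n, "long: VEO may start dropping actions or elements"
    return n, "ok: inside the observed sweet spot"

words, verdict = prompt_length_report("a fox runs " * 75)  # 225 words
```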

10/ Natural movements but…

VEO is great with natural movements and this is also one of the reasons why I used it so extensively: people walking don’t walk in slow-motion. That being said, don’t try to be too ambitious on some of the expected movements: multiple camera movements won’t work, full 360 revolutions around a subject won’t work, anime-style crazy camera movements won’t work, etc… what it can do is already great, but there are still some limitations…