Hollywood A-Listers Buy Westwood Village Theatre

Pat Saperstein of Variety reports that Jason Reitman Acquires Fox Village Westwood Theater With Filmmakers Including Steven Spielberg, Christopher Nolan, JJ Abrams, Chloé Zhao.

“Jason Reitman has gathered more than two dozen filmmakers to help acquire Westwood’s historic Village Theater, which will program first-run and repertory programming.”

The 36 investors are:

  1. JJ Abrams
  2. Judd Apatow
  3. Damien Chazelle
  4. Chris Columbus
  5. Ryan Coogler
  6. Bradley Cooper
  7. Alfonso Cuarón
  8. Jonathan Dayton
  9. Guillermo del Toro
  10. Valerie Faris
  11. Hannah Fidell
  12. Alejandro González Iñárritu
  13. James Gunn
  14. Sian Heder
  15. Rian Johnson
  16. Gil Kenan
  17. Karyn Kusama
  18. Justin Lin
  19. Phil Lord
  20. David Lowery
  21. Christopher McQuarrie
  22. Chris Miller
  23. Christopher Nolan
  24. Alexander Payne
  25. Todd Phillips
  26. Gina Prince-Bythewood
  27. Jason Reitman
  28. Jay Roach
  29. Seth Rogen
  30. Emma Seligman
  31. Brad Silberling
  32. Steven Spielberg
  33. Emma Thomas
  34. Denis Villeneuve
  35. Lulu Wang
  36. Chloé Zhao

“The Fox Village, built in 1930, has hosted hundreds of premieres over the past 90 years, including Reitman’s own ‘Juno,’ ‘Licorice Pizza’ and many others…. The distinctive Spanish mission revival-style building is topped by a 170-foot neon-lit tower, making it a beacon for filmgoers on the Westside of Los Angeles.”

My take: interestingly, at least four on this list are Canadians: Jason Reitman, Seth Rogen, Emma Seligman and Denis Villeneuve. I’m glad this historic cinema is being saved.

GAME OVER! OpenAI’s Sora just won the text-to-video race

The inevitable arrival of script-to-screen technology is closer than ever.

OpenAI released test footage and announced, “Introducing Sora, our text-to-video model. All the clips in this video were generated directly by Sora without modification. Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions.”

See openai.com/sora for more.

“Sora is able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world.”

“The model has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions. Sora can also create multiple shots within a single generated video that accurately persist characters and visual style.”

See the Technical Research.

Beyond text-to-video, “Sora can also be prompted with other inputs, such as pre-existing images or video. This capability enables Sora to perform a wide range of image and video editing tasks — creating perfectly looping video, animating static images, extending videos forwards or backwards in time, etc.”

Sora can even replace the whole background in a video: “Diffusion models have enabled a plethora of methods for editing images and videos from text prompts…. One of these methods, SDEdit, … enables Sora to transform the styles and environments of input videos zero-shot.”
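Sora’s internals aren’t public, but the SDEdit idea referenced here (partially noising an input, then denoising it under a new text prompt) is easy to demonstrate with open-source tools. Here’s a minimal sketch using Hugging Face’s diffusers library on a single extracted frame; the model choice, prompt and file names are my own illustration, not anything from OpenAI:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# An open-source latent diffusion model as a stand-in for Sora's
# proprietary model, which is not publicly available.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

frame = Image.open("input_frame.png").convert("RGB")  # hypothetical frame

# SDEdit in one call: `strength` sets how far the input is noised before
# denoising resumes under the new prompt. Low strength preserves the
# original layout; high strength follows the prompt more freely.
edited = pipe(
    prompt="the same street scene, but in a lush jungle at sunset",
    image=frame,
    strength=0.6,
    guidance_scale=7.5,
).images[0]

edited.save("edited_frame.png")
```

Run frame by frame like this, the result would drift and flicker; Sora reportedly denoises spacetime patches of the whole clip at once, which is what keeps its zero-shot edits temporally coherent.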

My take: this is powerful stuff! Workers in media industries might want to start thinking about diversifying their skills….

Telefilm eyes feature films in Canada

Telefilm Canada has just published a report on Canadian Movie Consumption – Exploring the Health of Feature Film in Canada.

The study, by ERm Research, provides an “understanding of overall consumption patterns, media sources used by audiences, their decision-making process, genre preferences, barriers to watching more movies, and their theatrical moviegoing habits, as well as perceptions of Canadian content.” It surveyed 2,200 feature film consumers in Canada from September 17 to October 2, 2023.

Three of the report’s findings:

  1. 95% of Canadians aged 18+ have seen one or more feature films in the past year, with nearly three-quarters seeing a movie in theatres.
  2. Paid streaming accounts for 54% of all feature film consumption. Around nine in ten movie consumers use at least one streaming service, with most accessing multiple.
  3. French Canadian movie watchers are more inclined to see Canadian content theatrically and generally have a higher opinion of Canadian films.

Some things that stood out to me:

  1. 55% of the audience on opening night is under the age of 35, whereas by the second week 50% of the audience is 45 or older. (Page 33.)
  2. Canadian moviegoers see on average only 1.4 feature films annually. (Page 38.)
  3. The top five streamers in Canada are Netflix (67%), Amazon Prime (50%), Disney+ (39%), Crave (21%) and Apple TV+ (12%). (Page 35.)

You can download the full report here.

My take: not very encouraging. I think we need to take our cue from the Québécois, who see (and like) more Canadian films. Why is that? The obvious answer is that they’re watching French-language films, fare that Hollywood is not producing. A more nuanced answer is that they’re watching films that reflect life in their province. Unfortunately, because Canadian movies have highly limited access to cinema screens in the rest of Canada, Canadians outside of Quebec don’t have that luxury.

See every Canadian movie!

If your New Year’s resolution is to watch more Canadian films, Telefilm has you covered.

Their See It All website will help you discover Canadian movies, new (2023) and old (1973).

You can search the database of over 3,400 films by title, by new releases and by streaming platform.

My take: I wish we could search by director or cast members too!

Super Fast Screenplay Coverage

Jason Hellerman writes on No Film School that I Got My Black List Script Rated By AI … And This Is What It Scored.

Jason says, “An AI-driven program called Greenlight Coverage gives instant feedback on your script. You just upload it, and the AI software spits coverage back to you. It rates different parts of the script on a scale from 1-10 and then gives a synopsis, positive comments, and notes on what would make it better. The program even creates a cast list and movie comps, allowing you to have an AI question-and-answer session to ask specific questions about the script.”
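Greenlight Coverage’s stack is proprietary, but the workflow Jason describes (upload a script, get structured ratings back, then ask follow-up questions) can be sketched with any general-purpose LLM API. A toy illustration using OpenAI’s Python client; the rubric, model name and file name are my assumptions, not Greenlight’s:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical rubric; Greenlight's actual prompts and fine-tuned models
# are not public.
RUBRIC = (
    "You are a professional script reader producing coverage. Rate premise, "
    "plot, character, dialogue and structure from 1-10, then give a synopsis, "
    "positive comments, and concrete notes on what would make it better."
)

with open("my_script.txt") as f:  # hypothetical screenplay file
    script = f.read()

coverage = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": RUBRIC},
        {"role": "user", "content": script},
    ],
)
print(coverage.choices[0].message.content)

# The question-and-answer session is just further turns on the same thread.
follow_up = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": RUBRIC},
        {"role": "user", "content": script},
        {"role": "assistant", "content": coverage.choices[0].message.content},
        {"role": "user", "content": "Which scenes would you cut to tighten act two?"},
    ],
)
print(follow_up.choices[0].message.content)
```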

His script Himbo, which made The Black List in 2022 and was rated incredibly high by ScriptShadow, scored 6/10 on Greenlight Coverage.

He concludes:

“The truth is, I could see a read like this coming from a human being. Is it the best coverage? No. But as someone who has tested many services out there, I felt it gave better coverage than some paid sites, which are hit-and-miss depending on the person who reads your script. I look at AI as a tool that some writers may decide to use. I was happy I tried this tool, and I honestly was surprised by the feedback of the coverage.”

My take: I also participated in the beta test of Greenlight Coverage and asked the creator Jack Zhang the following questions via email.

Michael Korican: For folks used to buying coverage for their scripts, what are the main features of Greenlight Coverage that set it apart?
Jack Zhang: The speed, accuracy, consistency as well as reliability. Also the ability to ask follow-up questions that can provide guidance on how to pitch to investors and financiers, all the way to how to further develop certain characters. In the future, we will also include data from Greenlight Essentials.

MK: Writers sometimes wait weeks if not longer for coverage. How fast is Greenlight Coverage?
JZ: 15 minutes to 2 hours when they first upload their screenplay, depending on their subscribed package. The follow-up questions are answered instantly.

MK: In your testing of Greenlight Coverage, how have produced Hollywood scripts rated?
JZ: It’s a mixed bag; the universally critically acclaimed ones usually get a very high score, 8.5 to 9+, like The Godfather, Shawshank, etc. The bad ones like The Room got 3/10. It really depends on the screenplay and the film.

MK: Greenlight Coverage uses a neural network expert system; the coverage section is highly structured whereas the question section is open-ended. How is this done and what LLM does Greenlight Coverage use?
JZ: We are using large language models to power our back end and it is not one, but a few different ones as well as our proprietary model that was fine tuned based on industry veteran feedback.

MK: Why should those folks who hate AI give Greenlight Coverage a try for free?
JZ: I totally see where they are coming from and personally I also agree that in such a creative industry, the human touch is 100% needed. This is just a tool to help give quick feedback and an unbiased opinion on the screenplay. It is useful as another input to the script, but not the end all and be all for it.

btw, not to brag, but Greenlight Coverage gave my latest script, The Guerrilla Gardeners, 8/10. Wanna produce it?

Netflix releases viewership data for the first time

Jason Hellerman reports on No Film School that Netflix Releases All Its Streaming Data for the First Time Ever.

He points out that this is a huge story because the “notoriously secretive Netflix has published all its streaming numbers for the public to see” for the first time.

Netflix will publish the What We Watched: A Netflix Engagement Report twice a year.

The report has four columns:

  1. Title, both original and licensed
  2. Whether the title was available globally
  3. The premiere date for any Netflix TV series or film
  4. Hours viewed

Some takeaways:

  • The report’s six-month timeframe aggregates 100 billion hours viewed.
  • Over 60% of the titles appeared on Netflix’s weekly Top 10 lists.
  • 30% of all viewing was for non-English content, mainly Korean and Spanish.

Here’s the Netflix media release.

Here’s their six-month 18,000+ row spreadsheet.
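The spreadsheet is easy to explore programmatically if you want to dig further. A quick pandas sketch; the local file name, header position and exact column labels are my assumptions, so check them against the actual download:

```python
import pandas as pd

# Hypothetical local file name; inspect the sheet first and adjust
# `skiprows` to wherever the header row actually sits.
df = pd.read_excel("netflix_engagement_report_2023.xlsx", skiprows=5)
df.columns = ["Title", "Available Globally?", "Release Date", "Hours Viewed"]

# Top 10 titles by hours viewed.
print(df.nlargest(10, "Hours Viewed")[["Title", "Hours Viewed"]])

# How concentrated is viewing? Share of all hours going to the top 1% of titles.
total_hours = df["Hours Viewed"].sum()
top_slice = df.nlargest(max(1, len(df) // 100), "Hours Viewed")
share = top_slice["Hours Viewed"].sum() / total_hours
print(f"Top 1% of titles account for {share:.0%} of all hours viewed")
```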

My take: the industry has always wanted more transparency from Netflix and I don’t think it’s a coincidence that this report comes on the heels of the writers’ and actors’ strikes. I would love to see someone take this information and cross-reference it with genres, formats and actors. Will other streamers follow with their data?

Now and Then: how the short doc started with audio interviews

Rosie Hilder writes on Creative Bloq about How Oliver Murray made the 12-minute Now and Then, Last Beatles Song documentary.

Oliver Murray says,

“First of all, the most important thing for me was that it felt fresh and contemporary, so we started out by recording new audio interviews with the surviving members of the band, Sean Ono Lennon and Peter Jackson. It was important to record only audio because that’s my favourite way of getting intimate and conversational interview content.”

He adds,

“I took these interviews into the edit and made a kind of podcast cut of the story, which became our foundation for the timeline…. Interviews are always a big part of my process, and are where I start because more often than not the answers that you get to questions lead you somewhere you didn’t expect and change the course of the project, so I like to do those early. It’s always useful to start with audio because it’s also the most malleable and it’s possible to go back for pick up interviews. Archive footage or access (with a camera) to the people you’re talking to actually doing what they’re talking about is much harder to acquire.”

Rosie asks him, “What is your favourite part of the finished film?”

Oliver replies: “The emotional climax of the film is definitely the moment where we get to hear John’s isolated vocal for the first time. It’s quite an emotional moment to hear him emerge from that scratchy demo.”

My take: this confirms, for me, that sound is more important than picture. I think it would have been nice to have the dates displayed on each film clip used, because there are a lot of them and they bounce around in time, from now and then.

YouTube’s Dream Track and Music AI Tools

Sarah Fielding of Engadget reports that YouTube’s first AI-generated music tools can clone artist voices and turn hums into melodies.

The YouTube technology is called Dream Track for Shorts.

The mock-up asks you for a text prompt, then writes lyrics and music and has a voice-cloned artist sing the result.

YouTube is also testing Music AI Tools.

This is all possible due to Google DeepMind’s Lyria, their most advanced AI music generation model to date.

Cleo Abram also explains the real issue with AI music: When do artists get paid?

My take: AI is just a tool — I use it occasionally and will be exploring it more in 2024. What we as a society really need to figure out is what everyone’s going to do, and how they’ll get (not earn) enough money to live, when AI and Robotics make paid work redundant.

Filmed on iPhone

Apple’s October 30, 2023 event closed with this end credit: “This event was shot on iPhone and edited on Mac.”

That would be filmed using the iPhone 15 Pro Max with the free Blackmagic Camera app and Apple Log, then edited in Premiere Pro and finally graded and finished in DaVinci Resolve.

michael tobin adds further insight and commentary.

My take: Of course, now Apple has to film everything they do like this!

Technology saves “last Beatles song”

The Beatles have just released their last single, Now and Then.

It appears as a free VEVO music video as well as for-purchase products: various versions on vinyl and cassette.

“Now and Then” was a demo John Lennon recorded at The Dakota in the late 1970s. The main reason it’s The Beatles’ last single is that, until now, it was too hard to separate John’s vocals from the piano notes. Technology to the rescue.
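The machine-learning “de-mixing” reportedly came from the technology Peter Jackson’s team developed for Get Back. Their bespoke model isn’t public, but open-source source-separation tools such as Meta’s Demucs do the same kind of job on ordinary mixes. A minimal sketch (the audio file name is hypothetical):

```python
import subprocess

# Demucs (https://github.com/facebookresearch/demucs) splits a mix into
# stems; "--two-stems=vocals" writes vocals.wav and no_vocals.wav under
# ./separated/<model>/<track>/.
subprocess.run(
    ["python", "-m", "demucs", "--two-stems=vocals", "now_and_then_demo.mp3"],
    check=True,
)
```

Demucs is trained on modern multitrack mixes; a mono piano-and-vocal cassette demo is a far harder case, which is presumably why a purpose-built model was worth the effort.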

Want to know more? Check out this Parlogram documentary.

My take: I like this video most when it starts incorporating images from “Then” with footage from “Now,” e.g. at 1:47, 1:55, etc. I would have liked to have seen much more of this technique used. This is truly the visualization of Now and Then — show us more!

Unfortunately, I’m not overly enamoured with the song itself; I find it middling and melancholic. I also don’t like:

  • The graphics and the cover image — boring!
  • The first few shots of the video are over-sharpened and plop us in the “uncanny valley” — not a good start.
  • The lip sync is poor — if you’re going to use AI, why not go all the way and use AI to reshape the mouths for perfect sync?
  • I think they missed a great opportunity to have Paul and Ringo sing verses in their own voices. Again, why not go all in and use AI to voice clone George and have him sing a verse too?

As to “last singles”: I think they should give this treatment to the last song the Beatles actually recorded together, “The End.” Although, after more than half a century, perhaps it’s just time to move on.