About Michael Korican

A long-time media artist, Michael has been making films since 1978. Michael graduated from York University film school with Special Honours, winning the Famous Players Scholarship in his final year. The Rolling Stone Book of Rock Video called Michael's first feature 'Recorded: Live!' "the first film about rock video". Michael served on the board of L.I.F.T. when he lived in Toronto during the eighties and managed the Bloor Cinema for Tom and Jerry. He has been prolific during his eight years in Victoria, having made over thirty-five shorts, won numerous awards, produced two works for BravoFACT! and received development funding for 'Begbie’s Ghost' through the CIFVF and BC Film.

Michael Korican premieres new short film

My new short film The Dolphins premieres at the Victoria Event Centre on Thursday, February 1, 2018.

I’ve been privileged to make over 50 short films in four decades, starting with Super-8, moving into 16mm, mini-DV and now HD.

A lot of my movies have been entries into the wonderful competitions that CineVic has held over the years: Scrapshots, Reel to Reel, One Shot Wonders, Film Slam, and Film Festivus.

My latest film, however, belongs in the “self-motivated” category. These are the films I’ve made because I needed to make them. Films like Alpbach, Thankful, Awoken, Red Tape and The Dolphins. In each case, I wanted to document a moment in time or explore a creative challenge.

The creative challenge behind The Dolphins was, “Can I make a fiction film on vacation in Mexico?” Not a travelogue, but something with a theme and no crew; just me and my DSLR.

So it’s kinda ironic that it has its premiere at a film competition. Bryan Skinner is hosting the Alan Smithee Awards and The Dolphins is entered.

Bryan is making Open for Submissions, “a comedic, feature-length mockumentary about a newly appointed film festival Executive Director who must overcome sabotage and betrayal in order to save his job and keep the screens alight.” He created the Alan Smithee Awards to source films for his feature.

My take: I hope to see you there!

Kodak looks to the future and the past

There is good news and bad news from Kodak.

Some will remember Kodak as the leading photographic film company of the last millennium; it flirted with bankruptcy in 2012.

The good: Kodak has fully jumped into 360 VR with the Pixpro ORBIT360 4K:

“The KODAK PIXPRO Orbit360 4K VR Camera adopts a minimalist approach to an all-in-one 360° VR camera, with two fixed focus lenses housed by a futuristic camera body. Each curved lens is designed to work in tandem, to capture full 360° 4K Video and easily upload 360° videos and photos to social media platforms like Facebook and YouTube via the camera’s Smart Device App while on the go.”

The real news from CES 2018, however, is that Kodak plans two new cameras for later this year. See 2:05 in this report from Digital Trends:

The bad: Kodak has stated that the price for its upcoming Super 8 camera will be in the $2,500 to $3,000 range, which is three to five times more than originally planned.

They also released some test footage:

To my eye this is soft and jittery. I much prefer the rock-steady footage from Logmar:

My take: On one hand, I’m really looking forward to Kodak’s 360 camera that can fold out into a 180° 3D mode, because I feel this format has the best chance to win the immersive VR stakes. On the other hand, shame on Kodak for jacking up the price of their inferior Super 8 camera.

AI reads minds, makes pictures

As reported by Tristan Greene on The Next Web, scientists at Kyoto University in Japan have created a deep neural network that can decode brainwaves.

That’s right, AI that can read your mind.

Tristan summarizes:

“When these machines are learning to “read our minds” they’re doing it the exact same way human psychics do: by guessing. If you think of the letter “A” the computer doesn’t actually know what you’re thinking, it just knows what the brainwaves look like when you’re thinking it…. AI is able to do a lot of guessing though — so far the field’s greatest trick has been to give AI the ability to ask and answer its own questions at mind-boggling speed. The machine takes all the information it has — brainwaves in this case — and turns it into an image. It does this over and over until (without seeing the same image as the human, obviously) it can somewhat recreate that image.”

Or, as Guohua Shen, Tomoyasu Horikawa, Kei Majima and Yukiyasu Kamitani illustrate:

To my eye, some of the results look awfully reminiscent of William Turner’s oil paintings, particularly Snow Storm.

See the full paper.

My take: Let’s be honest. This technology, as amazing as it is, is not yet ‘magical.’ (Arthur C. Clarke’s third law is, “Any sufficiently advanced technology is indistinguishable from magic.”) However, if we think about it a bit and mull over the possibilities, this might one day allow you to transcribe your thoughts, paint pictures with your mind or even become telepathic.

Google uses neural net to synthesize female voice

Research at Google is making huge advances in text-to-speech (TTS) technology. Check this out:

From their Twitter post:

“Building on TTS models like ‘Tacotron’ and deep generative models of raw audio like ‘Wavenet’, we introduce ‘Tacotron 2’ a neural network architecture for speech synthesis directly from text.”

How do they do it? From their blog post:

“In a nutshell it works like this: We use a sequence-to-sequence model optimized for TTS to map a sequence of letters to a sequence of features that encode the audio. These features, an 80-dimensional audio spectrogram with frames computed every 12.5 milliseconds, capture not only pronunciation of words, but also various subtleties of human speech, including volume, speed and intonation. Finally these features are converted to a 24 kHz waveform using a WaveNet-like architecture.”
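The shape arithmetic in that description is easy to check. This little sketch (a hypothetical helper, not Google’s code; the real model is a large neural network) just computes what the quoted numbers imply: one 80-dimensional spectrogram frame every 12.5 milliseconds, then a 24 kHz waveform out of the WaveNet-like vocoder.

```python
def tacotron2_shapes(duration_s, hop_ms=12.5, n_mels=80, sample_rate=24_000):
    """Return (spectrogram frames, mel bins, waveform samples) for a clip."""
    frames = int(duration_s * 1000 / hop_ms)   # one 80-dim frame every 12.5 ms
    samples = int(duration_s * sample_rate)    # vocoder output at 24 kHz
    return frames, n_mels, samples

# One second of speech: 80 frames of 80 features, 24,000 output samples.
print(tacotron2_shapes(1.0))
```

So the intermediate representation is dramatically more compact than the waveform it becomes: 80 frames stand in for 24,000 samples per second of audio.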

The results are amazing.

Want more? Here’s the full research paper.

The limitations? Some complex words, sentiment and generation in real time. “Each of these is an interesting research problem on its own,” they conclude.

Listen to more samples.

My take: I’ve used TTS functionality to generate speech for songs and for voice-over. I love it! As the quality improves to the point where it becomes indistinguishable from human voice, I will admit that I’m not quite sure what that will mean in a future where we won’t be sure if the voice we’re hearing is human or robot.

Netflix in 2018

As we move into 2018, here’s a quick recap of Netflix’s streaming dominance:

  • Netflix subscribers around the world watched more than one billion hours per week
  • The average subscriber watched approximately 60 movies in 2017

As reported widely, Netflix intends to spend upwards of $8 billion on new content in 2018. The New York Times lists some of the new films coming this spring:

  • “The Polka King” starring Jack Black
  • “Step Sisters”
  • “A Futile and Stupid Gesture” starring Will Forte
  • “When We First Met”
  • A revival of the “Benji” franchise
  • “Roxanne Roxanne”
  • “Come Sunday” starring Chiwetel Ejiofor

Even more new TV series are coming this year.

Nevertheless, there’s backlash…

Ben Kuchera, writing on Polygon, complains that:

“Netflix believes that its business begins and ends in your living room, which means any movie it buys will lose its shot at a theatrical release.”

He goes on to quote Noah Baumbach talking about his Netflix experience with “The Meyerowitz Stories (New and Selected)”:

“To be clear, I didn’t make the movie with Netflix. I made the movie independently, as I’ve made all my movies. I wasn’t even thinking of an alternative — I was thinking this would be shown in theaters, as all my movies are. Netflix acquired it from my producer in post and they have their way that’s important to them…. But I think it’s a singular experience, seeing a movie in the theater. I think audiences should be given the opportunity to see things for the first time that way.”

Netflix is even using their model for blockbusters. The $90 million Bright got savaged by the critics but that doesn’t seem to have scared away viewers.

My take: Although not quoted above, Baumbach also went on to say, “We all end up there anyway — all movies are going to end up on these servers, and that’s great.” The reality is that theatrical releases are very expensive. And the cost per viewer, versus streaming, is astronomical. My advice: add a clause to your contract that lets you four-wall a cinema and hold exclusive screenings for your best supporters. An audience of three hundred or so viewers can’t bother Netflix too much, can it?

Google wants you to have the best selfie

Building on last year’s GIF builder, Motion Stills, Google Research has just released two more ‘appsperiments’ in time for your holiday merriment: Scrubbies and Selfissimo!

Scrubbies lets you “shoot a video in the app and then remix it by scratching it like a DJ. Scrubbing with one finger plays the video. Scrubbing with two fingers captures the playback so you can save or share it.”

Selfissimo! lets you “tap the screen to start a photoshoot. The app encourages you to pose and captures a photo whenever you stop moving. Tap the screen to end the session and review the resulting contact sheet.”

Are you worried that taking so many selfies might give you “selfitis” and turn you into a narcissist? Well, don’t. Snopes has debunked that supposed mental disorder.

What I love about Selfissimo! is that by taking the photos for you, it gives you more of a true photo session experience, heightened by the fact it only shoots in black and white. Think of the photo shoot scene in ‘Austin Powers: The Spy Who Shagged Me’, which is itself an homage to the photo shoot scene in Michelangelo Antonioni’s masterful 1966 film ‘Blow-Up’.

My take: I highly recommend Selfissimo! because it’s so much fun! Here’s to a great 2018, everyone!

Discoverability guide published

Andra Sheffer’s Independent Production Fund just gave everyone an early Christmas present.

It’s a PDF entitled ‘Be Discovered!’

Download this right away and learn a new strategy to help your work find its audience on the web — one that goes beyond Search Engine Optimization (SEO).

To summarize, the strategy is:

  1. Create an IMDb page
  2. Create a Wikipedia article
  3. Add JSON-LD Schema to your website pages

I know you’ve heard of IMDb and Wikipedia, but you might be scratching your head when it comes to the third thing.

Simply put, schema is a way of adding metadata (tags) to your data (content) so search engines will understand it better and index it properly. This idea underpins what’s known as the semantic web.

Luckily there are a couple of free tools from Google to help you:

You use the first one to quickly tag an existing page to create movie schema code you can add to the page, and the second one to double check that the new code is working without errors.
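To give a feel for what that movie schema code looks like, here’s a minimal sketch that builds a schema.org “Movie” object and wraps it in the JSON-LD script block you’d paste into a page. The field values are placeholders drawn from this newsletter, not from the guide itself, and a real page would carry more properties (description, image, and so on).

```python
import json

# A minimal schema.org "Movie" object; values here are illustrative.
movie = {
    "@context": "https://schema.org",
    "@type": "Movie",
    "name": "The Dolphins",
    "director": {"@type": "Person", "name": "Michael Korican"},
    "datePublished": "2018",
}

# Embedded in a web page as a JSON-LD <script> block:
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(movie, indent=2)
    + "\n</script>"
)
print(snippet)
```

The block sits invisibly in your page’s HTML; human visitors never see it, but crawlers read it and know they’re looking at a film rather than just a page that mentions one.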

Andra says if you follow this strategy:

“Ideally, all those ‘knowledge cards’ that pop up on the right side of your search screens or as the priority recommendation on mobile devices, will be Canadian web series, resulting from the use of these techniques and the metadata relationships that are discovered by search engines.”

My take: I love it when I learn something new, and the code for semantic indexing of your web content is new to me. Looks like I’m gonna be busy updating my webpages this holiday.

Battling AIs create new realities

The adage “Seeing is believing” is no longer true.

Three researchers, Ming-Yu Liu, Thomas Breuel and Jan Kautz, working for Nvidia, have created an AI that can generate lifelike images.

In their system, two neural networks learn together: a generator tries to fool a discriminator with ever more convincing fakes, while the discriminator learns to tell real from generated. These are generative adversarial networks, or GANs.
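To make the tug-of-war concrete, here is a stripped-down sketch of the adversarial update, nothing like Nvidia’s actual system: the “generator” is a single number theta producing fake samples around it, and the “discriminator” is a one-variable logistic classifier. All the names, distributions and learning rates are illustrative. After even a few alternating rounds, the discriminator learns to score the real data (clustered around 4) higher, and the generator drifts toward it.

```python
import math
import random

random.seed(1)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def real_sample():
    # "Real" data: samples clustered around 4.
    return random.gauss(4.0, 0.5)

theta = 0.0      # generator parameter: fakes are theta + noise
w, b = 0.0, 0.0  # discriminator: D(x) = sigmoid(w*x + b)
lr = 0.05

for step in range(30):
    real = real_sample()
    fake = theta + random.gauss(0.0, 0.5)

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake), i.e. try to fool D.
    d_fake = sigmoid(w * fake + b)
    theta += lr * (1 - d_fake) * w

# w > 0: the discriminator rates reals higher than fakes.
# theta > 0: the generator has moved toward the real data.
print(w, theta)
```

Real GANs replace these two scalars with deep networks over images, and getting the two players to improve in lockstep, rather than oscillate or collapse, is most of the engineering.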

See their paper and GitHub. A sample below:

My take: this is kinda scary. Neat to think of “environmental” filters to add to genuine footage (think Nighttime, Winter, Rainy, etc.) but that this technology can create genuine-looking unreal footage is downright Orwellian. How do we distinguish truth from fiction, real from fake? The only conclusion is that everything is now suspect. Sad.

The Attention Economy and the blockchain

Five years ago, I was thinking about a semitransparent way for creators to get paid for their work on the Internet.

Now, someone’s come along and made my dream a reality. And thrown in the blockchain and a cryptocurrency to boot.

Synereo, based in Israel, is…

“…developing tools which allow content creators to easily monetize original works without having to turn their channels into advertisment real estate, while granting their followers the opportunity to be rewarded for getting the word out. Simply put, the attention you generate online is worth money. The better the content you create, the more followers you have, the more attention flows around you. Synereo’s applications and monetary models enable you to tap into this resource and reap the fair share of what you create online. Not a content creator? In a world overloaded with information, good taste is as valuable as creative talent. Help curate quality content, and earn your share by promoting creators you appreciate.”

That’s interesting enough. That they intend to use the blockchain and a cryptocurrency to accomplish their goal makes this super-interesting. They have a phased roadmap to accomplish it all.

To be honest, there’s a whiff of a Ponzi scheme about the way compensation is distributed.

But wait, there’s more! You won’t get paid in cash, but in a new cryptocurrency called AMP. Super legit, right?

Here are three more charts on the AMP altcoin.

If you want to do this today:

  1. Get the Chrome Extension
  2. Sign up as a WildSpark Creator

Before you pooh-pooh this, see what CMF Trends has to say about them.

My take: I think the frictionless compensation the blockchain could deliver to creators (and potentially influencers) would go far in acknowledging their contributions to the sharing economy. Will it be WildSpark? Not sure. Will the old economy kick and scream about any and all disruption? For sure!

Become a Film Patron today!

My buddy Bryan Skinner needs your help.

He’s making his first feature mockumentary in February 2018, and to do so he’s raising money now.

“Open for Submissions” is about the shenanigans at a film festival, so it’s only natural that Bryan would hold a “Best of the Worst” competition to get the “bad” films he needs. Of course, I’m up for that challenge.

See the trailer for my entry, “The Dolphins”:

Here’s Bryan’s pitch video for “Open for Submissions”:

And here’s Bryan’s project video for “Open for Submissions”:

My take: I have supreme confidence in Bryan and his team being able to complete this project. If you are a filmmaker or you know creative people making art, you should back their vision and become a film patron. It’s easy and you will feel great!