“Engineered Arts, a UK-based designer and manufacturer of humanoid robots, recently showed off one of its most lifelike creations in a video posted on YouTube. The robot, called Ameca, is shown making a series of incredibly human-like facial expressions.”
But wait, there’s more! Meet Mesmer, even more lifelike:
This, of course, builds on the research of Dr. Paul Ekman and his exploration of expression.
“The K|Lens One lens, teased earlier this year by German company K|Lens, is finally about to be released on Kickstarter. They say that this is the world’s first light field lens that can be used with regular DSLR and mirrorless cameras — and it works for both stills and video. Designed for full-frame cameras, the lens is a “ground-breaking mix of state-of-the-art lens and software technology” which K|Lens says will open up new worlds of creativity to users.”
The lens shoots nine images at once, with each taking up 1/9th the area of the sensor in a 3×3 grid. Custom software then manipulates those images into the desired result.
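K|Lens hasn’t published the details of that software, but the first step is conceptually simple: slice the full sensor frame into its nine sub-views, which can then be compared for parallax. A rough sketch in Python (the frame dimensions and function name here are my own illustrative assumptions, not anything from K|Lens):

```python
import numpy as np

def split_into_subviews(frame: np.ndarray, rows: int = 3, cols: int = 3) -> list:
    """Slice a full sensor frame into a rows x cols grid of sub-views.

    Illustrative only: the real K|Lens software also has to correct
    distortion and estimate depth from the parallax between sub-views.
    """
    h, w = frame.shape[:2]
    sub_h, sub_w = h // rows, w // cols
    return [
        frame[r * sub_h:(r + 1) * sub_h, c * sub_w:(c + 1) * sub_w]
        for r in range(rows)
        for c in range(cols)
    ]

# Example: a stand-in 6000x4000 "sensor frame" yields nine 2000x1333 sub-views.
frame = np.zeros((6000, 4000, 3), dtype=np.uint8)
views = split_into_subviews(frame)
print(len(views), views[0].shape)  # 9 (2000, 1333, 3)
```

The depth information comes from how objects shift between those nine slightly different viewpoints, which is what makes refocusing after the fact possible.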
Because this lens turns any camera into a 3D camera, it could have applications for specific tasks like visual effects, where depth information is vital for compositing.
Aldred adds:
“Interestingly, while all of the software was developed in-house, the lens itself, they say, was developed in cooperation with Carl Zeiss Jena GmbH, who they say will also be doing all of the manufacturing. So, while K|Lens might be a company that few have heard of, it will essentially be a Zeiss lens. And not just their name stamped on somebody else’s product as Huawei did with Leica, as they’re actually making the thing.”
My take: I’ve blogged about the light field a few times in the last decade and I really like the promise. Could it be the end of out-of-focus shots forever? All we need now is a similar “sound field” that would let us capture every sound source at once and later go into the soundscape to re-record those sources much closer. Right? (Hmm. Is this that?)
“Thunderbird is an augmented reality (AR) -focused start-up supported by the display-centric OEM TCL. Now, the two brands have unveiled something apparently three years in the making: the new Smart Glasses Pioneer Version, with a groundbreaking color micro-LED display geared toward an optimal AR experience. This pair of spectacles is, as the name suggests, the kind of ‘true’ smart glasses that integrate a working, partially transparent display capable of overlaying a mixed-reality display over the wearer’s real-world surroundings. Thunderbird and TCL make the new device sound like a blend of features from the Facebook Ray-Bans and Xiaomi’s own concept Smart Glasses. They do integrate a camera — obtrusively found on the nose-piece — and touch controls on the outside of the ear-hooks to interact with the glasses and the content, phone-like apps, smart-home and -car controls they are rated to sync with.”
You can just about hear it: “HAL, unlock the enhancing algorithm.”
Google explains their new method:
“Diffusion models work by corrupting the training data by progressively adding Gaussian noise, slowly wiping out details in the data until it becomes pure noise, and then training a neural network to reverse this corruption process. Running this reversed corruption process synthesizes data from pure noise by gradually denoising it until a clean sample is produced.”
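That description maps almost directly onto code. Here is a minimal sketch of the forward “corruption” step only, using the standard closed-form expression; the noise schedule and image size are my own assumptions, and the trained network that reverses the process is the hard part Google is describing:

```python
import numpy as np

def forward_diffuse(x0: np.ndarray, t: int, betas: np.ndarray) -> np.ndarray:
    """Corrupt a clean sample x0 with t steps of Gaussian noise.

    Standard closed form: x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise,
    where alpha_bar_t is the cumulative product of (1 - beta) up to step t.
    A neural network is then trained to undo this, step by step.
    """
    alphas = 1.0 - betas
    alpha_bar_t = np.prod(alphas[:t])
    noise = np.random.randn(*x0.shape)
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * noise

# Example: with a linear schedule, the sample is barely touched early on
# and is essentially pure noise by the final step.
betas = np.linspace(1e-4, 0.02, 1000)
x0 = np.random.rand(64, 64)            # stand-in for a clean image
x_mid = forward_diffuse(x0, 500, betas)
x_end = forward_diffuse(x0, 1000, betas)
```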
“There is a moment at the end of the film’s second act when the artist David Choe, a friend of Bourdain’s, is reading aloud an e-mail Bourdain had sent him: “Dude, this is a crazy thing to ask, but I’m curious” Choe begins reading, and then the voice fades into Bourdain’s own: “…and my life is sort of shit now. You are successful, and I am successful, and I’m wondering: Are you happy?” I asked (director) Neville how on earth he’d found an audio recording of Bourdain reading his own e-mail. Throughout the film, Neville and his team used stitched-together clips of Bourdain’s narration pulled from TV, radio, podcasts, and audiobooks. “But there were three quotes there I wanted his voice for that there were no recordings of,” Neville explained. So he got in touch with a software company, gave it about a dozen hours of recordings, and, he said, “I created an A.I. model of his voice.” In a world of computer simulations and deepfakes, a dead man’s voice speaking his own words of despair is hardly the most dystopian application of the technology. But the seamlessness of the effect is eerie. “If you watch the film, other than that line you mentioned, you probably don’t know what the other lines are that were spoken by the A.I., and you’re not going to know,” Neville said. “We can have a documentary-ethics panel about it later.””
People seem offended that the director has literally put words into Bourdain’s mouth, albeit his own words. Personally, I don’t have an issue with this but think there should have been a disclaimer off the top revealing, “Artificial Intelligence was used to generate 45 seconds of Mr. Bourdain’s voiceover in this film.”
My take: what I want to know is, how can I license the Tony Bourdain AI to narrate my movie?
The two portals connect Vilnius’s Train Station with Lublin’s Central Square, about 600 km away.
Benediktas Gylys, initiator of PORTAL, says:
“Humanity is facing many potentially deadly challenges; be it social polarisation, climate change or economic issues. However, if we look closely, it’s not a lack of brilliant scientists, activists, leaders, knowledge or technology causing these challenges. It’s tribalism, a lack of empathy and a narrow perception of the world, which is often limited to our national borders. That’s why we’ve decided to bring the PORTAL idea to life – it’s a bridge that unifies and an invitation to rise above prejudices and disagreements that belong to the past. It’s an invitation to rise above the us and them illusion.”
My take: back in the early Nineties (before the Internet caught the public eye) I conceived of a similar network of interconnected public spaces, called Central Square. My vision was similar to Citytv’s Speakers’ Corner but was to be located in large outdoor public spaces and used to broadcast citizen reports, rants or demonstrations. It would have included sound, which PORTAL seems to have overlooked. As I recall, the feeds were to have appeared on television, on some of the high-numbered channels. Of course, once increased bandwidth could support Internet video, webcams took off instead. See EarthCam.com for a list.
Drone footage. You’ve seen lots of dreamy sequences from high in the sky. But on March 8, 2021, a small Minneapolis company released a 90-second video with footage the likes of which you’ve never seen before. Here’s the local KARE-TV coverage:
“Captured by filmmaker and expert drone pilot Jay Christensen of Minnesota-based Rally Studios, the astonishing 90-second sequence, called Right Up Our Alley, comprises a single shot that glides through Bryant Lake Bowl and Theater in Minneapolis. The film, which has so far been viewed more than five million times on Twitter alone, was shot using a first-person-view (FPV) Cinewhoop quadcopter, a small, zippy drone that’s used, as the name suggests, to capture cinematic footage.”
F = Fungible. “Fungible” assets are exchangeable for similar items. We can swap the dollars in each other’s pockets or change a $10 bill into two $5 bills without breaking a sweat.
T = Token. Specifically, a cryptographic token validated by the blockchain decentralized database.
N = Non. Duh.
So NFT is a Non-Fungible Token, or in other words, a unique asset that is validated by the blockchain. This solves the real-world problem of vouching for the provenance of that Van Gogh in your attic; in the digital world, the blockchain records changes in the price and ownership, etc. of an asset in a distributed ledger that can’t be hacked. (Just don’t lose your crypto-wallet.)
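In data-structure terms, all that makes a token non-fungible is a unique ID plus an append-only record of who has owned it. A toy sketch (nothing like a real chain’s contract code, and with none of the cryptography that makes the real thing tamper-proof):

```python
from dataclasses import dataclass, field

@dataclass
class NonFungibleToken:
    token_id: str                  # unique, so no two tokens are interchangeable
    asset_uri: str                 # what the token points at (artwork, film, ...)
    owner: str
    history: list = field(default_factory=list)  # append-only provenance record

    def transfer(self, new_owner: str, price: float) -> None:
        """Record a change of ownership; earlier entries are never rewritten."""
        self.history.append((self.owner, new_owner, price))
        self.owner = new_owner

# Example: provenance you can audit, unlike the Van Gogh in the attic.
token = NonFungibleToken("0x01", "ipfs://that-van-gogh", owner="alice")
token.transfer("bob", price=12.5)
print(token.owner, token.history)
```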
Early 2021 has seen an explosion in marketplaces for the creation and trading of NFTs. Like most asset bubbles, it’s all tulips until you need to sell and buyers are suddenly scarce.
But I believe NFTs hold the key to unleashing the power of the blockchain for film distribution.
“Non-fungible tokens are blockchain assets that are designed to not be equal. A movie ticket is an example of a non-fungible token. A movie ticket isn’t a ticket to any movie, anytime. It is for a very specific movie and a very specific time. Ownership NFTs provide blockchain security and convenience, but for a specific asset with a specific value.”
What if there were an NFT marketplace dedicated to streaming films? Filmmakers would mint a series of NFTs, and each viewer would redeem one NFT to stream the movie. This would allow for frictionless distribution and direct economic compensation to filmmakers.
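To make that flow concrete, here is a hypothetical marketplace in miniature (every name and number below is made up): the filmmaker mints a fixed run of tokens, viewers buy them, and each token unlocks a stream for its holder.

```python
class FilmDrop:
    """Toy model of a filmmaker minting a fixed run of streaming tokens."""

    def __init__(self, title: str, editions: int, price: float):
        self.title = title
        self.price = price
        # Mint one token per allowed stream; each is unique.
        self.tokens = {f"{title}-{n:05d}": None for n in range(editions)}

    def buy(self, token_id: str, viewer: str) -> None:
        assert self.tokens[token_id] is None, "token already sold"
        self.tokens[token_id] = viewer   # in the pitch above, payment goes straight to the filmmaker

    def redeem(self, token_id: str, viewer: str) -> str:
        assert self.tokens[token_id] == viewer, "not the token holder"
        return f"streaming '{self.title}' for {viewer}"

drop = FilmDrop("My Indie Feature", editions=10_000, price=4.99)
drop.buy("My Indie Feature-00042", "viewer_1")
print(drop.redeem("My Indie Feature-00042", "viewer_1"))
```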
My take: while I think NFTs hold promise in film distribution, the key will be to lower the gas price: the fee paid when minting an NFT in the first place.
“MetaHuman Creator is a cloud-streamed app designed to take real-time digital human creation from weeks or months to less than an hour, without compromising on quality. It works by drawing from an ever-growing library of variants of human appearance and motion, and enabling you to create convincing new characters through intuitive workflows that let you sculpt and craft the result you want. As you make adjustments, MetaHuman Creator blends between actual examples in the library in a plausible, data-constrained way. You can choose a starting point by selecting a number of preset faces to contribute to your human from the diverse range in the database.”
Right now, you can start with 18 different bodies and 30 hair styles.
“When you’re happy with your human, you can download the asset via Quixel Bridge, fully rigged and ready for animation and motion capture in Unreal Engine, and complete with LODs. You’ll also get the source data in the form of a Maya file, including meshes, skeleton, facial rig, animation controls, and materials.”
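Epic hasn’t said exactly how the blending works, but “blends between actual examples in the library in a plausible, data-constrained way” suggests something like a weighted mix of preset parameter vectors that never strays outside the scanned data. A deliberately naive sketch of that idea (my guess, not Epic’s method):

```python
import numpy as np

def blend_presets(presets: np.ndarray, weights: list) -> np.ndarray:
    """Blend preset face-parameter vectors with a convex combination.

    'presets' is an (n_presets, n_params) array of scanned examples.
    Normalising the weights keeps the result inside the convex hull of
    real faces, one crude way to stay "plausible and data-constrained".
    Purely illustrative.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # never extrapolate beyond the data
    return w @ presets

# Example: 70% preset A, 30% preset B across 500 hypothetical face parameters.
library = np.random.rand(3, 500)         # stand-in for the scanned preset library
new_face = blend_presets(library, [0.7, 0.3, 0.0])
```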
The takeaway is that your digital humans can live in your Unreal Engine environment. Is this the future of movies?
My take: This reminds me of my experiments in machinima ten years ago. I used a video game called The Movies, which had a character generator (it synced mouth movements with pre-recorded audio), environments and scenes for recording shots that I would then assemble into movies. See Cowboys and Aliens (The Harper Version) for one example. You know, in these COVID times, I wonder if Unreal Engine’s ability to mash together video games and VFX will become a safer way to create entertainment, one that does not require scores of people to film together in the same studio at the same time.
(The studio tour proper starts just before 14 minutes in this promotional video.)
“During the pandemic, one studio stayed open when most others closed. How? L.A. Castle Studios has developed ‘a better way to shoot.’ And owner Tim Pipher believes it’s the way of the future — perhaps no more so than for independent film. ‘I guess some of it comes down to luck,’ explained Pipher to No Film School. His studio has been slammed with work in the midst of the shutdowns. ‘COVID or no COVID, we think we’ve got a better way to shoot.'”
What sets this green-screen studio apart from others is the ability to shoot with a live-composited set.
Simply put, you and your actor can now create inside virtual reality.
How is this possible? It’s achieved by marrying movie making and video game 3D environments. The core software is Epic Games‘ Unreal Engine.
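Their real pipeline handles camera tracking, lighting and rendering inside the engine, but the heart of a live composite is easy to sketch: key out the green in each camera frame and put the rendered set behind the actor. A bare-bones, frame-at-a-time version (nothing like their actual setup; the HSV thresholds are guesses you would tune on set):

```python
import cv2
import numpy as np

def composite_frame(camera_bgr: np.ndarray, virtual_set_bgr: np.ndarray) -> np.ndarray:
    """Replace green-screen pixels in a camera frame with a rendered set.

    A minimal chroma key: in a real virtual-production pipeline the key,
    spill suppression and perspective-matched rendering happen live.
    """
    hsv = cv2.cvtColor(camera_bgr, cv2.COLOR_BGR2HSV)
    # Rough HSV range for studio green; tune for the actual lighting.
    green_mask = cv2.inRange(hsv, (35, 80, 80), (85, 255, 255))
    background = cv2.bitwise_and(virtual_set_bgr, virtual_set_bgr, mask=green_mask)
    foreground = cv2.bitwise_and(camera_bgr, camera_bgr, mask=cv2.bitwise_not(green_mask))
    return cv2.add(foreground, background)

# Example: composite one 1080p frame of actor footage over a rendered set.
actor_frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
rendered_set = np.zeros((1080, 1920, 3), dtype=np.uint8)
output = composite_frame(actor_frame, rendered_set)
```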
My take: I love this technology! Basically, it’s Star Trek’s Holodeck with green instead of black walls. Keep in mind that, as a filmmaker, you still have to address every component other than location: casting, costumes, makeup, props, blocking, lighting, shot selection and performance. Do I know any Unreal Engine gurus?