Irish band U2 has always embraced technology, and their latest tour continues that tradition with AR.
AR, short for augmented reality, superimposes digital information on top of your phone’s camera image.
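Under the hood, the core trick is easy to sketch: composite a virtual layer over live camera frames. Here’s a minimal Python/OpenCV illustration of that idea; it is emphatically not the U2 app’s code, and a real AR app would also track features like the album cover to anchor its content in 3D:

```python
# Minimal sketch of the core AR idea: superimpose a virtual layer on live
# camera frames. Illustration only; the U2 app additionally tracks the
# album cover in 3D to anchor its content, which this sketch omits.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # default webcam
overlay = np.zeros((120, 320, 3), np.uint8)    # stand-in "virtual" content
cv2.putText(overlay, "U2 eXPERIENCE", (10, 70),
            cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = overlay.shape[:2]
    roi = frame[20:20 + h, 20:20 + w]
    # Alpha-blend the virtual layer onto the camera image
    frame[20:20 + h, 20:20 + w] = cv2.addWeighted(roi, 0.4, overlay, 0.6, 0)
    cv2.imshow("AR sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```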
Fans attending the shows will be able to hold up their phones to reveal a huge iceberg and a virtual singing Bono.
You can download the U2 eXPERIENCE app here. To test-drive it, point it at the album cover for Songs of Experience. A virtual cover will float on top of the physical one, shatter into shards as music begins to play, and then an animated Bono will begin to sing.
As you move your phone side to side or up and down, you’ll see different angles of the holographic representation.
My take: this is pretty cool and might be many folks’ first experience of AR.
“That animal brain is not aware of anything, I am very confident of that. Hypothetically, somebody takes this technology, makes it better, and restores someone’s activity. That is restoring a human being. If that person has memory, I would be freaking out completely.”
My take: this subject gets murky very quickly. Witness the ethical issues the scientists raise in Nature; one even questions whether sustaining a brain without sensory stimuli would amount to torture. Heady stuff.
My take: basically, lack of a suitable camera is no longer an excuse for not filming. But everything else stays the same, starting with a great script and a smart plan.
“The new Pocket Cinema Camera 4K has a ton of features that’ll appeal to that market — like a mini XLR connector, LUT support, and 4K recording at 60 fps — but it still has limitations that’ll keep the camera confined to a niche audience (which, to be fair, is kind of true of every camera). Basically, unless you’re a filmmaker who’s typically in control of lighting and the overall environment they’ll be filming in, this camera probably isn’t for you. It doesn’t have in-body stabilization, and the small sensor will struggle in low light and require adaptors to get the depth of field you’d get from full frame or even Super 35 cameras. That might not matter to some filmmakers, but it could be an issue for people on fast shoots or traveling to unfamiliar locations.”
“The 65-inch display sits flat and sturdy on your wall, like a normal television, until you’re done with it. With one push of a button, the display descends down into its stand, rolling around a coil like wrapping paper. The screen can roll up completely for safe storage and easy transportation, or you can leave a small section of it sticking up, at which point the screen automatically shifts into a widgetized, information-providing display with weather and sports scores. LG’s device has almost nothing in common with most TVs, other than its size. Functionally, it’s more like a really big tablet.”
Fully unrolled, the aspect ratio is 16:9.
But wait, there’s more! It can roll down to 21:9, eliminating the black bars above and below widescreen movies.
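Some quick back-of-the-envelope Python shows what that means physically for a 65-inch panel (my own arithmetic from the stated aspect ratios, not LG’s published measurements):

```python
# How far does a 65-inch 16:9 panel roll down to show 21:9?
# My own arithmetic from the aspect ratios, not LG's spec sheet.
from math import hypot

diagonal = 65.0                           # inches
width = diagonal * 16 / hypot(16, 9)      # ~56.7 in
height_169 = width * 9 / 16               # ~31.9 in
height_219 = width * 9 / 21               # ~24.3 in

print(f"16:9 height: {height_169:.1f} in")
print(f"21:9 height: {height_219:.1f} in")
print(f"rolls down:  {height_169 - height_219:.1f} in")  # ~7.6 in
```

In other words, the screen only has to retract about seven and a half inches to swallow the letterbox bars.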
My take: I want one! I would hang it upside down from the ceiling, so it would mimic a cinema screen of yore.
Get ready for an onslaught of new immersive video cameras.
YouTube launched the VR180 format last year, and parent company Google has just partnered with Lenovo to make the world’s simplest point-and-shoot camera, the Mirage.
180 is shorthand for VR180, the format’s name for stereoscopic 3D video with a 180-degree field of view. The two front-facing lenses approximate your eyes, creating depth.
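If you’re wondering why two lenses are enough to create depth, it’s classic stereo vision: the same point lands at slightly different horizontal positions in the left and right images, and that disparity shrinks with distance. A quick sketch with illustrative numbers (assumed values, not Lenovo’s actual optics):

```python
# Why two lenses yield depth: the pinhole stereo relation Z = f * B / d.
# Numbers are illustrative assumptions, not the Mirage's real specs.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Distance to a point given its left/right pixel offset (disparity)."""
    return focal_px * baseline_m / disparity_px

FOCAL_PX = 1000.0    # assumed focal length, in pixels
BASELINE_M = 0.064   # ~64 mm lens spacing, roughly human eye separation

for d in (64, 32, 16, 8):  # disparity in pixels
    z = depth_from_disparity(FOCAL_PX, BASELINE_M, d)
    print(f"disparity {d:2d} px -> depth {z:.1f} m")
```

Halve the disparity and the estimated distance doubles, which is exactly why depth perception from a fixed pair of lenses fades for faraway objects.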
Lenovo has published the camera’s specs, but the biggest drawback I see is the lack of a view screen. It truly is a point-and-shoot camera, although you could use the onboard Wi-Fi to send the picture to your smartphone for viewing.
“VR180, like most things in VR right now, is the simple-but-usable version of what will someday be much cooler. It exists for a few reasons: because 360-degree video is actually really complicated to do well, because there aren’t many great ways to watch 360 video, and because even when they do watch super-immersive footage, viewers don’t tend to look around much. With VR180, your camera can look and operate more like a regular point-and-shoot, and viewers get a similarly immersive feel without having to constantly spin around.”
There’s also the YI Horizon VR180 coming soon, which includes a view screen, higher resolution and, I believe, HDMI out. See Think Media‘s review:
My take: I’m a big fan of 180 and can’t wait to play around with both of these cameras. (Also, I wish the ‘VR’ label would just go away since this technology is not “virtual reality” but basically “reality”. Virtual Reality to me means computer-generated environments; video games are a prime example. 180 is as close as we’re going to come to reality other than actually being there.)
Nova Spivack of the Arch Mission Foundation, whose mission is to preserve and disseminate humanity’s knowledge across time and space for the benefit of future generations, explains:
“We are very happy to announce that our first Arch [data crystals that last billions of years] library, containing the Isaac Asimov Foundation Trilogy, was carried as payload on today’s SpaceX Falcon Heavy launch, enroute to permanent orbit around the Sun. We are eternally grateful to Elon Musk and his incredible team for advocating the Arch Mission Foundation and giving us our first ride into space.”
This is not the first time messages have been sent into space physically; think of the Voyager Golden Records, which have been traveling since 1977.
Some will remember Kodak as the leading photography film company of the last millennium, which filed for bankruptcy in 2012.
The good: Kodak has fully jumped into 360 VR with the Pixpro ORBIT360 4K:
“The KODAK PIXPRO Orbit360 4K VR Camera adopts a minimalist approach to an all-in-one 360° VR camera, with two fixed focus lenses housed by a futuristic camera body. Each curved lens is designed to work in tandem, to capture full 360° 4K Video and easily upload 360° videos and photos to social media platforms like Facebook and YouTube via the camera’s Smart Device App while on the go.”
The real news from CES 2018 however is that Kodak plans two new cameras for later this year. See 2:05 in this report from Digital Trends:
The bad: Kodak has stated that the price for its upcoming Super 8 camera will be in the $2,500 to $3,000 range, which is three to five times more than originally planned.
They also released some test footage:
To my eye, this is soft and jittery. I much prefer the rock-steady footage from Logmar:
My take: On one hand, I’m really looking forward to Kodak’s 360 camera that can fold out into a 180 3D mode because I feel this format has the best chance to win the immersive VR stakes. On the other hand, shame on Kodak for jacking up the price of their inferior Super 8 camera.
As reported by Tristan Greene on The Next Web, scientists at Kyoto University in Japan have created a deep neural network that can decode brainwaves.
That’s right, AI that can read your mind.
Tristan summarizes:
“When these machines are learning to “read our minds” they’re doing it the exact same way human psychics do: by guessing. If you think of the letter “A” the computer doesn’t actually know what you’re thinking, it just knows what the brainwaves look like when you’re thinking it…. AI is able to do a lot of guessing though — so far the field’s greatest trick has been to give AI the ability to ask and answer its own questions at mind-boggling speed. The machine takes all the information it has — brainwaves in this case — and turns it into an image. It does this over and over until (without seeing the same image as the human, obviously) it can somewhat recreate that image.”
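In code, that guess-and-refine loop looks roughly like the sketch below: start from noise and nudge an image until a vision network’s features match the features decoded from brain activity. This is only a sketch of the spirit of the approach, not the researchers’ actual pipeline; the ‘decoded’ features here are a random stand-in for the output of a real brainwave decoder.

```python
# Hedged sketch of iterative image reconstruction from decoded features.
# Not the Kyoto team's actual pipeline; `decoded_features` is a random
# placeholder for what a trained brainwave decoder would produce.
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

with torch.no_grad():
    decoded_features = vgg(torch.rand(1, 3, 224, 224))  # placeholder target

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
opt = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    opt.zero_grad()
    loss = F.mse_loss(vgg(image), decoded_features)  # compare the "guess"
    loss.backward()
    opt.step()                                       # refine, over and over
    image.data.clamp_(0, 1)                          # keep pixels valid
```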
Or, as Guohua Shen, Tomoyasu Horikawa, Kei Majima and Yukiyasu Kamitani illustrate:
To my eye, some of the results look awfully reminiscent of William Turner‘s oil paintings, particularly Snow Storm.
My take: Let’s be honest. This technology, as amazing as it is, is not yet ‘magical.’ (Arthur C. Clarke‘s third law is, “Any sufficiently advanced technology is indistinguishable from magic.”) However, if we think about it a bit and mull over the possibilities, this might one day allow you to transcribe your thoughts, paint pictures with your mind or even become telepathic.
“Building on TTS models like ‘Tacotron’ and deep generative models of raw audio like ‘Wavenet’, we introduce ‘Tacotron 2’, a neural network architecture for speech synthesis directly from text.”
“In a nutshell it works like this: We use a sequence-to-sequence model optimized for TTS to map a sequence of letters to a sequence of features that encode the audio. These features, an 80-dimensional audio spectrogram with frames computed every 12.5 milliseconds, capture not only pronunciation of words, but also various subtleties of human speech, including volume, speed and intonation. Finally these features are converted to a 24 kHz waveform using a WaveNet-like architecture.”
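Those numbers make for easy arithmetic: at one 80-dimensional frame every 12.5 milliseconds and a 24 kHz output rate, the WaveNet-like vocoder has to synthesize exactly 300 audio samples for each mel frame. A quick sanity check in Python:

```python
# Sample-rate arithmetic straight from the quote's numbers.
SAMPLE_RATE = 24_000    # Hz, output waveform rate
FRAME_SHIFT_MS = 12.5   # one spectrogram frame every 12.5 ms
N_MELS = 80             # each frame is an 80-dimensional mel vector

def mel_frames(seconds: float) -> int:
    """How many mel frames Tacotron 2 emits for a clip of this length."""
    return round(seconds * 1000 / FRAME_SHIFT_MS)

def audio_samples(seconds: float) -> int:
    """How many samples the WaveNet-like vocoder must generate."""
    return round(seconds * SAMPLE_RATE)

print(mel_frames(3.0))                        # 240 frames of 80 values
print(audio_samples(3.0))                     # 72,000 samples at 24 kHz
print(audio_samples(3.0) // mel_frames(3.0))  # 300 samples per frame
```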
The limitations? Some complex words still trip it up, and it can’t yet convey sentiment or generate speech in real time. “Each of these is an interesting research problem on its own,” they conclude.
My take: I’ve used TTS functionality to generate speech for songs and for voice-over. I love it! As the quality improves to the point where it becomes indistinguishable from a human voice, I’ll admit I’m not quite sure what to make of a future where we can’t tell whether the voice we’re hearing is human or robot.