Volumetric filmmaking — when artists create interactive experiences using 3D-captured imagery in augmented, virtual, and mixed reality (AR/VR/MR) — is a nascent field with a growing movement behind it. If you haven’t heard of it yet, you’re not alone. Many see volumetric filmmaking as the future of video in immersive media creation, and it’s clear that leading organizations agree. Most recently, Facebook developed a VR camera in partnership with RED, and Adobe launched an AR content creation tool, Project Aero, in early June.
Vimeo’s Creator Labs is jumping in, too. One of the most exciting goals for the team is helping push Vimeo beyond 2D video, which means innovation — but also building a community. As a part of that, we hosted the second Volumetric Filmmaking Meetup at our Brooklyn office, in partnership with Scatter, Platt Creative, and Manhattan Edit Workshop, for a night of food, drinks, and all things volumetric video.
The night began with volumetric updates from Kyle Kukshtel, Cofounder and Integration Engineer at Scatter, a studio committed to creating and democratizing the tools for making volumetric films (and a co-host and co-creator of the meetup). As a leader in the volumetric space — its film Zero Days VR was nominated for an Emmy — Scatter keeps close tabs on the industry, and Kukshtel shared some of its latest happenings. “What’s really important about this field is that things change very quickly,” he said.
Didn’t make it out to Brooklyn? We’ve got you covered. Here’s a recap (and video) of the event:
Photogrammetry deep dive
Az Balabanian — founder of AZADUX, host of the Research VR podcast, and self-taught photogrammetry creative — gave a high-level overview of the photogrammetry space and its challenges, and explained how he’s tackling them head-on.
Photogrammetry is the process of creating 3D geometry from 2D images using Structure from Motion (SfM) algorithms. In case that didn’t entirely translate: you take lots of pictures of an object from every angle imaginable, feed those images into software, and the software reconstructs them into a 3D model. Though the technique is relatively new, high-end studios are already using photogrammetry in their productions — take Oats Studios, founded by District 9 and Chappie director Neill Blomkamp, for example.
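To make the SfM idea a bit more concrete, here’s a minimal sketch of the triangulation step at the heart of photogrammetric reconstruction: recovering a 3D point from two 2D observations when the camera views are known. This is an illustrative toy in Python/NumPy (the function and setup below are our own, not from any photogrammetry package); real SfM pipelines also estimate the camera poses themselves, from matched image features, and refine everything with bundle adjustment.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from two 2D
    observations and the cameras' 3x4 projection matrices."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A (last row of V^T).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Toy setup: one camera at the origin, a second shifted 1 unit along x.
K = np.eye(3)  # idealized intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0, 1.0])  # homogeneous ground-truth point
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]    # project into each image
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(X_est)  # ≈ [0.5, 0.2, 4.0]
```

Production tools run this kind of math across thousands of matched features per image pair, which is why capturing overlapping photos from every angle matters so much.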
From Claire Hentschker’s Shining360, which uses photogrammetry to reconstruct the hotel from Kubrick’s film The Shining
When getting started with photogrammetry, Az suggests having a working knowledge of photography, 3D modeling, and even a little VR. “With VR, you can do 3D modeling with your hands,” he said, referring to Oculus Medium. While building foundational knowledge of these niche technologies helps, you can still use everyday equipment to capture the 2D images. Az uses his DSLR and drone to capture all of his work for AZADUX.
“It requires trial and error to see how [photogrammetry] works, and understand how it works,” he said. With little documentation currently out there, Az hopes that sharing his photogrammetry know-how in person and through his podcast will help aspiring filmmakers, too.
Creating immersive music videos
Music videos can be considered as important as the songs themselves: popular artists garner hundreds of millions of views, and a select few rack up view counts equivalent to 40% of the world’s population. As such, music video budgets typically go toward safer, standard routes like YouTube, and sometimes 360 video. Rob Ruffler, a former VP of Viacom NEXT, and his business partner, David Liu, wanted to change that.
The result? Aeronaut, an immersive VR music video he directed for The Smashing Pumpkins’ Billy Corgan. “We wanted to create this participatory, immersive VR experience, where you — as the guest — could really experience music in a new way,” he said. Critics agreed: Aeronaut won the Cannes Digital Craft Grand Prix this past June.
Rob walked through the project, process, and challenges that came with creating Aeronaut — from plotting out the verses of the song (and hitting a wall over what the visuals would be) to working in Microsoft’s Mixed Reality Capture Studio in Redmond, WA, where he and David used 106 cameras capturing data at a whopping 10 GB per second to create a truly remarkable music video. (No VR headset in your life? You can watch it in good ol’ 2D here.)
3D scanning as art
While there are many technical aspects to volumetric filmmaking, that doesn’t mean it’s devoid of art and creativity. Far from it. Rosalie Yu is an artist who experiments with different ways of telling stories through digital media. Walking us through various commissions, projects, and installations, Rosalie showed how she uses low-cost 3D scanning to create memorable works of art.
Two studies Rosalie showed us were Skin Deep and Embrace in Progress. Skin Deep was an interactive installation with designer Alon Chitayat, where 3D-scanned self portraits were used as blank canvases for others to draw on. Embrace in Progress, which aimed to capture the motion of intimacy, was a series of 3D-printed sculptures built from 3D scans of physical movement.
You can get involved in Rosalie’s upcoming project, Knowing Together, at Teachers College in September.
Affordable volumetric capture
In a valiant effort to democratize volumetric video — AKA make it economically feasible for all creators — Ben Nunez and Sebastian Marino introduced us to the world of Evercoast, an affordable volumetric capture solution. Using off-the-shelf hardware and intuitive software, Evercoast showed a proof of concept that may serve filmmakers looking to dive into volumetric filmmaking:
- It’s a scalable system: start with as few as two cameras, grow to as many as 20+.
- Captured by a single computer: no server farm here; all filmed content goes into one machine. Which brings us to our next point…
- Completely portable: from airlines to Ubers, it’s easy to transport and set up.
- Automated calibration: what usually takes way too long now takes about 20 minutes.
- Real-time 3D camera preview: see what you’re shooting in a 3D space, as it happens.
- Cost-effective: if you want to democratize volumetric video, this part is pretty important.
Accessible volumetric distribution
Continuing Vimeo’s creator-first efforts, Casey Pugh, who heads up Creator Labs at Vimeo, illuminated some of the projects his team has been working on at the intersection of video and emerging technologies.
One of Creator Labs’ focuses is making volumetric content more accessible through tools that make experiences mobile-first. Fortunately, immersive experiences on the web are broadly supported thanks to WebGL and WebVR. And since we’re Vimeo, we’ve solved the hardest problem for you: fast video transcoding and delivery.
Casey showcased Creator Labs’ latest experiments, where WebGL, volumetric video, and Vimeo converge. Their demo, which is open source and available on GitHub, uses a video asset from Scatter’s DepthKit, the most widely used toolkit for volumetric video capture. For Oculus and Vive headset owners, this means you can now experience a volumetric video straight from your web browser.
For their second demo, they leveraged an Intel RealSense camera and Vimeo Live to create a live streaming volumetric experience in WebVR. Although this is just an experiment, the applications could be interesting. Imagine watching a live concert of your favorite band, and experiencing something better than front row seats could ever provide: the feeling of being right there next to them, on stage, in real time.
All in all, it was a successful night exploring the technical and creative possibilities of volumetric content. We’re excited to continue growing our community with future awesome (and educational) events.
Are you working on an interesting project and want to learn more about our tools? Don’t hesitate to get in touch: firstname.lastname@example.org.