EDIT: You can now do this workflow with the beta. The Merger in the zip file has an option to merge the alternating frames into a sequence of EXR files.

After checking out the Magic Lantern HDR footage yesterday (amazing) I decided to do a few tests here. Here's how the shots break down.

1. The original data alternates every frame between an overexposed and an underexposed image. The "Underexposed" set is all the, well, underexposed frames.

2. The "Overexposed" set is the same scene but about 4 EV brighter. The original data was 25 FPS. You can think about the Underexposed and Overexposed sets as two separate 12.5 FPS tracks.

3. "Mask" is the generated mask using only the overexposed frame. Rather than blending the two streams using some crazy scheme, we're only going to take data from the Underexposed track only when the Overexposed track needs it.

4. The two tracks aren't sync'd. Hence the ghosting in some videos. Let's pick frame 82. The frames will come in this order:
- Underexposed Frame 82
- Overexposed Frame 82
- Underexposed Frame 83.
So we need to synthesize an Underexposed frame in between 82 and 83. You could do this intelligently using optical flow, but I've just done a per-channel MAX operation.
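
In code, the hack is a one-liner; a sketch, assuming the two neighboring Underexposed frames as linear float arrays:

```python
import numpy as np

def synthesize_between(under_a, under_b):
    """Crude stand-in for flow-based interpolation: take the brighter
    value of each channel from the two surrounding Underexposed frames."""
    return np.maximum(under_a, under_b)
```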

5. "Retimed" is the synthesized Underexposed track brought up by 4 EV and then blended with the Overexposed track using Mask as the mask. Then we get a sequence of EXRs which we can load into After Effects and edit with Ginger HDR. *EDIT: fixed.*
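
The blend itself is simple once everything is in linear light. A sketch, assuming float frames, the clip_mask output from above, and the 4 EV differential (writing the result out to EXR is left to whatever EXR library you prefer):

```python
import numpy as np

def merge_to_hdr(over, retimed_under, mask, ev_gain=4.0):
    """Combine the two linear tracks into one HDR frame."""
    lifted = retimed_under * (2.0 ** ev_gain)  # bring the dark track up 4 stops
    m = mask[..., np.newaxis]                  # broadcast the H x W mask over RGB
    return over * (1.0 - m) + lifted * m
```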

6. Tonemapped. In the interest of full disclosure, the parameters shown in the video don't exactly match the final output. I couldn't resist changing the parameters some more and didn't have time to recapture.

A few points:

1. Personally, I have a religious conviction against raw exposure blending. The peculiarities of the exposure differential shouldn't determine what your final image looks like. IMO you should convert to HDR and then tonemap, which gives you far greater control over your image.

For example, you can't do the "Contrast" operation from the video properly if you just have two tracks fudged together. With a proper merge to HDR, every pixel moves away from the same linear reference point; with hacky blending, every pixel moves away from a different one. And don't get me started on interpolating clamped/unclamped values, non-monotonically increasing response curves (think about it), or trying to composite CG elements in linear space.
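
To make the contrast example concrete, here's roughly what a linear-space contrast operation looks like. The middle-grey pivot of 0.18 is just a common convention, not necessarily what Ginger HDR uses:

```python
import numpy as np

def linear_contrast(hdr, amount, pivot=0.18):
    """Push every pixel away from (or toward) one shared linear reference.
    Only well-defined if the whole frame lives in one consistent linear
    space, i.e. after a proper merge to HDR."""
    return pivot * (hdr / pivot) ** amount
```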

The other gotcha is that with exposure blending you have to make sure that every single edge in both tracks lines up. But if you use one track as your base and only take what you need from the other one then you have far fewer edges to worry about. Do you really want to match up every single leaf blowing in the wind?

2. This video isn't really the best test case. Making clamped-out light bulbs less clamped isn't that exciting. I'm really interested in seeing more daytime shots and fixing skies, shadows, and specular highlights from the sun. The footage is great for showing how Magic Lantern works and what it can do, but it's not so good at showing why you would want to do it.

3. What Magic Lantern has done is really, really cool. Being able to get HDR video out of a Canon SLR is amazing.

4. Frame rate is an issue. In this test the raw data was shot at 25 FPS, so the effective output rate was 12.5 FPS. If we want 30 FPS output, we have to shoot at 60 (bummer). Since the camera tops out at 1080p @ 30 FPS and 720p @ 60 FPS, we can only do HDR 1080p @ 15 FPS and HDR 720p @ 30 FPS. By comparison, with the Scarlet you should be able to do 3K @ 30 FPS with HDRx, if I remember correctly.

5. There should be better ways of blending frames. We should be able to use the information from the other tracks to guide the optical flow solver. If that works, we'd be able to output at the same FPS we shoot at, which would be awesome. It will be interesting to see what solutions everyone comes up with; any fast movement, quick pan, or camera shake is going to be a challenge. A bare-bones flow-based starting point is sketched below.
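
This sketch uses OpenCV's Farneback solver; note it's not exposure-aware and doesn't use the Overexposed track to guide the solve, which is exactly what a real solution should add:

```python
import cv2
import numpy as np

def interpolate_under(under_a, under_b, t=0.5):
    """Warp under_a partway toward under_b using dense optical flow.
    Frames are assumed to be 8-bit BGR; t=0.5 gives the halfway frame."""
    grey_a = cv2.cvtColor(under_a, cv2.COLOR_BGR2GRAY)
    grey_b = cv2.cvtColor(under_b, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(grey_a, grey_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = grey_a.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # Crude backward-warp approximation of the in-between frame.
    map_x = (xs - t * flow[..., 0]).astype(np.float32)
    map_y = (ys - t * flow[..., 1]).astype(np.float32)
    return cv2.remap(under_a, map_x, map_y, cv2.INTER_LINEAR)
```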

6. HDR should be something you don't notice. Hopefully, we're all done with the "HDR Look" that we all know and hate. It's fine for a zombie movie, but for me HDR is best used when the viewer doesn't know that you're using it.

7. I'm actually not a big fan of capturing an "Underexposed" and an "Overexposed" track. Rather, I'd prefer an "Even" track and a "Very Underexposed" track. You want to take as little data as possible from the darker track. Just because you can recover all the detail from the dark area under your bed doesn't mean that you should.

Congrats Magic Lantern! Well done.
