Not 23 or 25, but 24
Everybody and his sister is emailing me about this interview with James Cameron in which the director says he’d rather shoot 2K at 48 frames per second than 4K at 24 frames per second.
This is crazy talk.
Look, I’m no expert, okay? But I’ve been shooting quite a bit lately. I’ve done my share of experimenting with funny frame rates. Most recently, yesterday I did some side-by-side comparisons of a 3D sequence rendered out at 23.976 with pulldown added to the same sequence rendered at 59.94i. See, I was asked to create a DVD downrez of one of my shows for playback on an old-fashioned interlaced projector, and I wanted to be sure I was giving the client the best possible product. So I rendered out a test sequence with pulldown and one at 59.94i to compare the results for myself.
Pardon me while I digress into a little bit of background here.
Movies are older than television. Compared to all the electronic gee-whizzery of television, movies are pure simplicity. You’ve got yourself some kind of lens, a strip of film and a metal half-circle that sits between them. The half-circle — the shutter — spins. Whenever the shutter lets light in, a frame of film gets exposed. Whenever the shutter is blocking the light from the lens, the camera pulls the film down so the next frame can be exposed. If you spin the shutter at 48 revolutions per second and pull the film down 24 times per second, you get a movie.
But for a lot of complicated reasons that I won’t go into here, mostly because you can damn well google them yourself if you’re so inclined, television works differently. Specifically, television doesn’t — or rather, didn’t, back in the old days when it was first invented — draw a whole frame all at once. Instead it breaks the frame up into lines, and draws the frame from the top of the screen to the bottom, a line at a time.
Now, this was mostly okay, because old-school televisions worked by exciting a phosphor with an electron beam, and the phosphor continued to glow for a bit after the beam had moved on. But still, by the time the beam got to the bottom of the screen and started its trip back up, the top of the screen would have begun to grow dim.
So televisions that worked that way flickered noticeably. It was annoying.
Movie projectors had the same problem, at first. A movie projector is basically a camera in reverse; instead of letting light in to expose the film, it pushes light out through the film to project an image on a screen. Projectors have a shutter in them too; its job is to block the light coming through while the film is advanced one frame, just like in a camera.
But movie projectors that used a one-twenty-fourth shutter produced a noticeable and objectionable flicker. The light from the projector was being interrupted twenty-four times a second, so it worked like a very fast strobe.
The solution was simplicity itself: Just run the shutter twice (or even three times) as fast. The light from the projector still flickers, but it flickers forty-eight or seventy-two times a second, which your eye can’t really pick up.
Old-fashioned television worked on the same principle. The screen was constantly going from light to dark and back to light again, but by illuminating the even lines now and the odd lines now and then the even lines again, the flicker was reduced to the point where you could hardly notice it.
Of course, the catch is that every frame has to be divided up into fields — sets of odd or even lines — and those fields have to be drawn separately. And not just separately, but one sixtieth of a second apart. Well, almost. Technically the fields are drawn one hundred five-thousand-nine-hundred-and-ninety-fourths of a second apart. Which is precisely why everybody rounds up.
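If you want to check that arithmetic yourself, it’s a couple of lines (the exact NTSC field rate is 60000/1001, which is where the 59.94 comes from):

```python
# NTSC field timing: the "59.94" rate is exactly 60000/1001 fields per
# second, so consecutive fields land 1001/60000 s (= 100/5994 s) apart.
exact_field_rate = 60000 / 1001
field_interval = 1 / exact_field_rate

print(round(exact_field_rate, 4))       # 59.9401 fields per second
print(round(field_interval * 1000, 4))  # 16.6833 ms apart, vs 16.6667 for a true sixtieth
```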
Anyway, back to the point: Television cameras record in essentially the same way televisions display. That is to say, they record first the even-numbered lines of the frame, then a sixtieth of a second later they record the odd-numbered lines. If you look at both sets of lines together, you’ll notice that they’re slightly out of sync. That’s because they were recorded at different times. But that’s okay, because they’re played at slightly different times, too.
Television, in other words, gives you half the vertical resolution twice as fast.
Movies work nothing like this.
And yet they show movies on television all the time. How can this be? Sorcery, I say.
Well, sort of. When movies are shown on television, each frame is converted to a pair of fields, and those fields are recombined in such a way that they turn into 29.97 field-pairs (sort of like frames, but slightly different) per second. This process — of converting 24-frames-per-second material into 29.97-frames-per-second material — is called adding pulldown.
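A rough sketch of how the fields get doled out, if it helps (the [2, 3, 2, 3] cadence is the standard one; the function name is mine):

```python
# A sketch of 2:3 pulldown: each group of four film frames (A, B, C, D)
# gets spread across ten video fields, i.e. five field-pairs.
def pulldown_fields(frames, cadence=(2, 3, 2, 3)):
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * cadence[i % 4])  # each frame yields 2 or 3 fields
    return fields

print(pulldown_fields(["A", "B", "C", "D"]))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
# Four frames in, ten fields out: 24 fps x 10/8 = 30 fps worth of
# field-pairs, which the 0.1% NTSC slowdown then turns into 29.97.
```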
But the sequence I was testing with yesterday didn’t originate in a camera. If it had, I would have had only one option: adding pulldown. Because I shoot everything at 24 frames per second, using a one-forty-eighth shutter, just like in a movie camera. But because I was working with synthetic footage, I could tell my computer either to render out 24 frames per second, or 59.94 fields per second. So I did. I had it generate one of each, then burned them both to DVD and watched them on the only old-fashioned television I could find in the office.
The result? The 59.94 version looked like shit.
Okay, that’s kind of an oversimplification. The truth is, it looked perfect. It looked exactly like it would have looked if I’d shot the scene in real life, using a 59.94-fields-per-second video camera.
But it didn’t look good.
See, material played back at 59.94 frames per second — which is what we’ll call this, since you’re basically getting 59.94 complete but half-resolution images per second — has an entirely different motion quality than material played back at 24 frames per second. And the miracle is, this remains true even if you insert pulldown and play back at 59.94. The fact that the material was recorded at 24 means your eye is interpreting what you see as 24 frames per second, even though those frames are being delivered in a slightly funny way. That’s just how the brain works.
Contrary to what Jim-call-me-James Cameron has to say, a movie shot at 48 frames per second doesn’t look clearer or sharper than a movie shot at 24 frames per second. It looks cheap, because we’re all used to seeing motion pictures played back at 24 and things like game shows and news broadcasts played back at 59.94. A self-fulfilling prophecy? Maybe. We’re used to seeing that, so that’s what we expect, and when we see something that conforms more closely to the truth than it does to our expectations, we can’t shake the feeling that something’s not right.
Maybe in a different world, James Cameron would be right. Maybe in a different world, a world where cheap TV shows are shot on film and only the biggest-budget motion pictures are shot on video at a higher frame rate, the situation would be reversed and everybody would be lusting after that 60 Hz look. But that’s not the world we live in. In this world, 24 frames per second looks cinematic because that’s what we all grew up watching in the cinema. And trying to change that now would be a huge uphill battle for no actual payoff, since in the end, they’re all just pictures flickering in the dark.
RADIO DAZE
So KOMA-AM will soon be no more... replaced by the call letters KOKC.
I never talked about the radio wars in Oklahoma City while I was on the air at KTOK. Didn't seem appropriate, I guess. I also had several friends at KOMA-AM, and I didn't want it to seem like I was trashing them.
However, since all my friends have now been fired, can I offer a few thoughts on talk radio in Oklahoma City? I'll admit to being biased as heck towards KTOK (duh), so keep that in mind as you read this.
Let's start with KOMA (soon to be KOKC). Number one thing they could do: get rid of their program director. Whoever is in charge of running this station knows nothing about programming. Since the day KOMA-AM switched from oldies to talk, there has been no consistency. Shows are dropped from the lineup on a monthly basis, shows are switched around seemingly on a whim, etc. There's been no chance for any of their programming to find an audience. Hire someone who actually knows what they're doing (and no, I'm not applying for the job) and you've taken the first step towards competing.
Step number 2: revamp the news department. If the website is any indication, KOMA is suffering from the same problem that most news radio stations have: finding qualified journalists who want to work in radio. I'll admit it's tough to do, but no serious news radio station should have headlines like "Bloody Fetuses on Wheels" and "Tom Terrific Likes New Iraqi Government".
Step number 3: who's your audience? Are you going for the 35+ demo? Drop Rusty Humphries. Are you trying for the 25-54 numbers? Get rid of Michael Medved and Jim Bohannon (although I'll admit, 9-midnight is never going to be a top priority for a news/talker). This station has no personality. KTOK's got the Gods of Talk, WKY's the Local Talker, and KOMA's the Leftovers. That's not a good image to have.
Moving on to WKY. Biggest beef I had with WKY was this: good DJs do not necessarily make good talk show hosts (and KOMA, keep that in mind as well). Sports talk show hosts do not necessarily make good news talk show hosts (hey there Mr. Olbermann!). The best local talk show hosts are just that: locals who are talk show hosts. They have the time and ability to pay attention to what's going on in their locale, their state, and their country. To be good at it takes more than an hour or two of show prep. I can't imagine having to do morning drive on one station, take an hour break, and then do two more hours of talk. That's just silly.
The problem with having a stable of local talk show hosts is that you have to pay them. It's much easier to go to talent that's already in house, throw them a bone, and say "hey, it's all we can afford".
All this is my way of saying I understand the financial reasons for WKY's programming, but it doesn't have to be that way. There's another option for the KY gang...
FM style talk. One of the best shows here in the DC area is a program called The Junkies (originally the Sports Junkies). Four buddies who grew up together and were discovered on local cable access(!). It's not as foul as Howard Stern, but it's certainly "guy radio".
If I'm running WKY (and the Sports Animal), I run ESPN's "Mike and Mike" on the Sports Animal and move Steely and the gang to WKY 6-10. Move Mark Shannon to afternoon drive (3-6)(sans Stein), put Ron Black on from 10-noon, and run shock-jocker Tom Leykis late night. Leykis is controversial (at least he would be in Oklahoma City), and it might generate some desperately needed buzz for the station.
If either one of these stations would implement these changes, I have no doubt they'd see their ratings increase. I don't think either station would ever top KTOK, but it might provide more competition. And if either of these stations does implement any of these changes... I'm going into business as a consultant.
I don’t want to sound all full of myself here. I’m not great at this stuff. I’m pretty good. Better than average, if your sample includes everybody everywhere. Among people who do this professionally, I’m very much an amateur.
But a few times in my life, I’ve created something that hits just the right note with me. Maybe everybody else thinks it’s dumb, or doesn’t get it. But it gives me chills.
And that’s how this show was for me.
That little moment, over maybe thirty seconds, when the laughter died down and everybody got quiet? And then the show ended and nobody said anything, just stared for a second at a black screen and breathed?
Yeah. That’s why I do this.
Anyway, yesterday was a good day.
Oh, the technical stuff? Fine. Nerds.
I shot her with a 650W Arri key light that was rigged high to camera right to put the eyelight in her eye just east-northeast of her pupil. I used an Arri 300W backlight to throw insane contrast on her right cheek. I used no fill light, because fill lights are for pussies.
I used an f/2 with a five-stop ND filter, with an in-camera gain of +3db to blow the highlights on her cheek and a dialed-in color temperature of 5,000 K.
For the slow-mo cutaways, I overcranked to 60 frames per second, which I then played back at 24 in post. I used a ¹⁄₁₂₀ shutter, and I think I had the gain at +6db, but I can’t remember exactly because I didn’t bother to slate that stuff.
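For the curious, the slow-mo numbers work out like so (reading the ¹⁄₁₂₀ shutter as a 180-degree shutter is my gloss, not something from the shoot notes):

```python
# Shoot at 60 fps, play back at 24, and you get a 2.5x slowdown.
capture_fps, playback_fps = 60, 24
slowdown = capture_fps / playback_fps

# A 1/120 s shutter at 60 fps is the classic 180-degree shutter:
# open half of each frame's duration, just like a film camera's
# half-circle shutter spinning at the capture rate.
shutter_angle = (1 / 120) * capture_fps * 360

print(slowdown)       # 2.5
print(shutter_angle)  # 180.0
```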
I had to send a crewmember — i.e., one of my friend’s roommates who thought it’d be fun to watch the shoot and ended up getting drafted — to CVS to buy a case and some contact-lens solution. Why? Because if at all possible, have your talent take their contacts out. Contact lenses muddy up the irises and dull the color of the eyes, which are by far the most important things in the shot. And if you’re shooting even medium-tight with a resolution of 1K or higher, you can see the damn things, sitting there like a thin little circle on the sclera. So if you can manage it, shoot your talent without their contacts in.
Especially when you’re shooting women.
The nightmare scenario
Everybody who works in production in any capacity — the DP, the sound guy, the producer most especially — has the same nightmare.
We can’t use yesterday’s footage.
Now, let’s put this in perspective before I get into details: The talent didn’t die. The set, overlit, did not catch fire. The world did not end.
But out of the seven interview sessions I shot yesterday, over six hours of footage in all, I think I lost the sound from about forty minutes of it.
I had the talent mic’d with a lav, running to one of those little digital recorders that are all the rage these days. It’s basically an iPod in reverse; it’s got a little flash-memory card in it, and instead of copying sound from your computer then playing it back, you record sound on it and then copy it to your computer.
Neat idea, certainly. But man, you wanna talk about putting all your eggs in one basket? Here’s between forty minutes and two hours of sound, depending on what kind of memory chip you use. If anything bad happens to that chip, all your sound is just gone. Like it never happened.
I think that’s what bit me yesterday.
Now, I don’t think it’s a total loss. I was using the shotgun mic on the camera to record safety sound, and that’s still on the tape. But the full-bandwidth signal from the talent’s lav mic? I think that’s just gone.
So obviously this morning my thoughts turn to ways of preventing this from happening again. Since I know precious little about sound and sound recording, I’m kind of a blank slate. Does it make sense to split the main mic and send it to a solid-state recorder and the camera, for security? Should I just go ahead and double-mic my talent, and record the two signals separately? Should I stomp my feet and hold my breath until I finally get somebody to run sound during my shoots?
I dunno. I need to think about it.
But hey, I’ve faced my nightmare now. This is pretty much the worst that can happen, practically speaking. We got picture just fine and the producer will surely want to use it. But the sound is no good, and can’t be fixed in post. That’s about the worst possible case.
Well, no. It could still be worse. The talent in question is still, you know, alive. We can always reshoot. It could be much, much worse.
Man. I feel better already.
Too much light
So I’ve been shooting a lot with a Canon XL H1 high-definition camera lately, and I’ve learned a couple things. It’s still awfully early on, but I think my number-one lesson has been less fucking light.
I want to make this clear right from the start: I am not a professional cinematographer. I’m not even an especially skilled amateur cinematographer. I know the glass part goes toward the subject, but beyond that, I’m strictly a tyro.
I do have a little experience shooting high-definition with a Sony HDW-F900. The F900 was the first real high-definition camera suitable for creative cinematography. We’re going back about eight years here, which should tell you just how out-of-date my experience is. While it looks pretty grim compared to modern cameras like the Panavision Genesis, the F900 was a pretty astonishing piece of kit for its day.
When I first started shooting on the XL H1, the first thing I noticed was that the footage I was getting was almost as good as what I could get out of the F900. There are some real differences, obviously, especially when recording to the H1’s built-in HDV recorder. I would gnaw off my own genitals before I tried pulling a greenscreen key off HDV footage. But if you take a step back and consider the overall quality of the footage, then factor in that the F900 used to sell for about $100,000 without a lens while the H1 debuted for about $10,000 with lens, the results start looking very impressive indeed.
But in actual practice, I found myself struggling with the H1 against footage that was slightly soft, very grainy, and just depressingly flat.
I think I’m starting to find solutions for all those problems, though.
The H1 lives in a strange no-man’s-land between the cheap-ass camcorders you see at the Try-n-Save and professional cameras suitable for electronic news gathering or low-end digital cinematography. Unlike consumer cameras, the H1 comes with a very good lens, and through the use of some adapters that typically cost about as much as the camera body itself, can be fitted with cinema-grade prime lenses. But unlike professional cameras, the H1 also comes with a variety of “point and shoot” recording modes that, in my experience, make all the wrong choices when it comes to setting the camera’s various parameters.
The ones that matter here are focus, gain, aperture, shutter speed and color temperature.
The H1’s stock lens comes with an autofocus option. We’re all pretty used to autofocus, now that still cameras have pretty much perfected it. But here’s the thing: A still camera has to hold focus for a fraction of a second. A motion-picture camera has to hold focus for an hour. Those are entirely different universes.
When I first started shooting with the H1, I was tempted to just set up my shot with a tripod, engage the autofocus and go. Since I was handling the lights, running the camera and oh by the way also directing the shoot all at the same time, the prospect of having one less thing to worry about was seductive.
I only had to make that mistake once.
The autofocus on the H1 may be very good for its class. It may be the world’s best motion-picture autofocus, for all I know. But it still sucks. It won’t hold focus on a moving subject. And I don’t mean race cars here; I mean a subject sitting in a chair, talking and nodding and, like, breathing and stuff. This setting is more of a challenge than the H1’s autofocus can handle.
Now, I don’t mean the H1 can’t basically hold focus. In a setting like that, it’s not like the picture goes entirely out of focus. It just drifts a little. Which on a standard-definition camera wouldn’t matter, because you’d never even see it. The resolution of the image is too low to notice little changes like that. But when shooting high-definition, even the just-barely high-definition 1440 × 1080 footage that comes off an HDV tape, tiny twitches in the focus ring are highly visible. So the whole image ends up looking soft.
Now, let’s be fair here. The footage I shot that day was technically usable. I wouldn’t have been happy with it if broadcast HD had been my destination format, but since I was distributing on standard-definition DVD, I could have gone with it. I ended up not using it for creative reasons, and boy was I pleased about that.
But I learned my lesson. Do not, under any circumstances, use the autofocus on the H1. It’s just not worth it.
The other problem the H1 has out of the box is that it tries to handle all the variables that affect your exposure for you. Most everybody knows about aperture and shutter already; the aperture is the size of the hole the light comes through, and the shutter is how long the light is allowed in. But on digital cinema cameras, there’s a third variable: gain. Gain is roughly equivalent to the ISO rating of the film in a film camera. It’s measured in decibels, and it describes how the circuits in the camera either amplify or attenuate the signal coming in. Basically, the higher your gain, the more sensitive your camera is to light.
The first shots I took on the H1 were grainy as hell, and it didn’t take long for me to find out why. The camera was automatically adjusting the gain as I rolled to try to keep my shots exposed.
Fortunately, turning that shit off was just a matter of twisting a knob away from the detent horrifyingly marked “A” for “automatic.” As in “automatically make my footage look terrible, please.”
The H1 has gain settings that go from -3db all the way up to it-doesn’t-matter, because you should never, ever go above +6db and frankly you’d have to threaten me with physical violence to get me above +3db. Higher gain settings mean more grain, and we’re not talking about artistically justifiable film grain here. We’re talking digital grain, pixel-sized specks of blue and green in the shadows that make you want to seal your eyes shut with gaffer tape.
Between -3db and +3db, I can’t really tell much of a difference in overall image quality. Grain is minimal, and not objectionable. At +3db you start to see the beginnings of objectionable noise in shadow areas if you go with the camera’s default settings, but for reasons I’ll get into shortly you shouldn’t be doing that, so that’s okay.
It was when I was messing with the gain settings and shooting tests to see what the limits were that I realized why my shots to date had been so flat, both in terms of color and in terms of depth-of-field. This camera is really sensitive to light. At a gain setting of 0db — meaning the camera neither amplifies nor attenuates the electrical signal coming from the sensor — we’re looking at an ISO equivalence of about 320. That’s comparable to shooting with 320-speed film. Moving the gain to +3db gives us an ISO equivalence of 400. And +6db? That’s equivalent to ISO 800.
For the sake of comparison, though there may be one out there, I’ve never heard of a motion-picture film stock with an ISO equivalence above about 500. That’s what you’d use to shoot on the surface of the sun.
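If you want to play with the dB-to-ISO conversion, the usual rule of thumb is that 6db of gain equals one photographic stop, i.e. a doubling of sensitivity. Here’s a sketch taking the 320 figure above as the 0db base; the camera’s actual response needn’t follow this curve exactly, which is why these numbers land a little off the ones quoted above:

```python
# Rule-of-thumb conversion: every 6db of gain is one stop, so the
# effective ISO doubles every 6db. The 320 base is the H1's quoted
# 0db equivalence; everything else here is just the math.
def iso_equivalent(gain_db, base_iso=320):
    return base_iso * 2 ** (gain_db / 6)

for db in (-3, 0, 3, 6):
    print(f"{db:+d}db -> ISO {iso_equivalent(db):.0f}")
```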
So basically I’ve been overlighting the shit out of all my shots. I’ve been blasting so much light into the camera that I’ve had to stop down to f/4.0 or even smaller to get the right overall exposure, which means with a ⅓-inch sensor, my depth of field is measured in miles. Which, granted, makes the job of pulling focus easier, since everything between six and six thousand feet from the focal plane looks sharp. But it’s not very good for the ol’ creativity.
And on the subject of overlighting, I want to take a little side-trip here to talk about color balance. Color balance is one of those things that, I think, gets over-emphasized when doing a crash course on cinematography for people who just want to shoot their kids’ birthday parties. Yes, there are times when you want your shot to be white-balanced. But those times are less common than you might think.
We’ve gotta talk about film again for a second. This is a gross oversimplification, but in general, motion-picture film comes in two basic varieties: There’s daylight-balanced film, and tungsten-balanced film. If you light a neutral grey object with tungsten lights and shoot it with tungsten film, the grey object will look, well, grey. But if you shoot that same object with daylight film, the object will look really orange, as if the film had been tinted.
That might sound like a bad thing — you want your colors to look right, right? — but it’s more complex than it sounds. A common trick in cinematography is to light slightly warmer than the color balance of your film. If you’re shooting tungsten film and lighting with tungsten lights, maybe you’ll throw an orange gel over your key light to make the shot warmer.
Why would you want to do that? Why would you want to take a system that’s been set up to carefully balance all the colors in the shot and deliberately throw it out of whack?
Because balanced color is boring.
There’s also the fact that properly balanced color makes people’s faces look pretty blotchy. Check yourself out in the mirror sometime. Your face is not an even color. There are lighter areas and darker areas. Maybe the tip of your nose is a little redder than under your eyes, because it gets more sun. Maybe you’re a guy with a five-o’clock shadow. We’re not picture-perfect.
If you shoot people in a balanced setting — where your light is all one color, and your camera is calibrated to see that color as “white” — they’ll end up looking sickly and unattractive.
What does this have to do with the experience of shooting the XL H1? The little fucker is set, by default, to auto-white-balance. Which means it takes whatever light is coming in, says “Well, this must be neutral!” and goes with it.
And we just got through talking about how you rarely want your lighting to actually be neutral.
Well, fortunately the H1 also has an easy, readily accessible way to dial in your color balance manually. You can set the color temperature that you want to be neutral white by twisting a dial. If you’re shooting under tungsten lights, which have a color temperature of 3,200K, you might dial in a white balance temperature of 5,000K to make the shot look a little warm, to give it a little glow. If you’re shooting under sunlight, which has a color temperature of about 5,600K, you can ramp the white balance of the camera all the way up to 12,000K to get just the tone you want.
Oh, one more thing. So far, I’ve been talking about stuff that’s fairly easy to deal with: Turn the gain knob from “A” (shudder) to +0db or whatever you want to set your ISO equivalence. Turn the white-balance knob to “K” and dial in your desired balance temperature. You have to (a) know how to do this stuff and (b) be willing to screw with all the little settings every time you shoot, but in general, it’s not hard.
There’s one setting on the H1 that’s more difficult to change, but that’s just as vital if you want your shots to look good without color-correction. It has to do with the gamma curve.
Describing what gamma means is beyond the scope of this blog post — mostly because I only have a basic, intuitive understanding of it myself, and can’t explain it well. But basically it’s a matter of whether the camera is going to record an image that responds linearly to the light coming in, or whether it’s going to compress or expand one area or another of the tonal range.
The human eye is capable of discriminating between many more levels of brightness — what the nerds call “luminance” — than a camera is. If you take a picture in which the brightness curve is a straight line from pure black to pure white, that picture will end up looking really flat, because there’s so much less information there than what we can see with our eyes. In order to make the picture more visually pleasing, we make the picture look more contrasty by compressing the tonal range in the highlights and the shadows. This is often referred to as blowing out the whites and crushing the blacks.
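Here’s a toy version of that idea, with thresholds invented purely for illustration (a real camera’s curve rolls off smoothly rather than clipping hard):

```python
# A toy tone curve in the spirit described above: crush everything
# below a black point, clip everything above a knee point, and stretch
# the midtones between them. Input and output are normalized 0.0-1.0.
def tone_curve(x, black=0.08, knee=0.85):
    if x <= black:
        return 0.0                        # crush the blacks
    if x >= knee:
        return 1.0                        # blow out the highlights
    return (x - black) / (knee - black)   # stretch the midtones

for v in (0.05, 0.30, 0.60, 0.90):
    print(f"{v:.2f} -> {tone_curve(v):.3f}")
```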
Unfortunately, the H1 comes out of the box set to reproduce a fairly inoffensive luminance curve. As a result, until you get in there and screw with the presets, footage from the H1 will look very flat, and will need to have its contrast boosted in post. This isn’t optimal, because (a) who wants to take the time to color-correct every damn shot, and (b) stretching the contrast in post gives poorer results than getting good contrast in-camera anyway.
I don’t want to go into all the details of how to manipulate the H1’s custom presets here; a whole book could be written on that topic. But there are two settings that are relevant to this discussion: knee and black. The H1 lets you set the knee — “knee” here being a technical term that refers to the contrast in the highlights — to high, middle or low. Basically, don’t set it to low. Depending on your setup and what look you’re going for, setting the knee to either high or middle will give you a high-key image with nicely blown whites.
Black is easier; black should always be set to press. There is no reason why you should set black to stretch, unless you like washed-out, flat-looking footage.
Yes, there’s a conventional wisdom out there that you should set your luminance curve to be as flat as possible in-camera so you have as much exposure latitude as possible in post. If we were shooting 4:4:4 — or hell, even 4:2:2 — that might make sense. But this is HDV, man, and it’s 4:2:0, and looking for exposure latitude here is like looking for change in your couch to pay your mortgage. Yes, you might find a little if you’re lucky, but it won’t be as much as you need. Trying to capture a flat image in-camera and then boost the contrast in post will leave you with footage that looks like shit because it’s too flat, or footage that looks like shit because it’s too grainy in the mids.
Anyway, when shooting HDV on the H1, blow out your whites and crush your blacks using the camera’s preset feature. You’ll be happier.
So the good news about the XL H1 is that it’s capable of capturing footage that rivals what you can get out of a digital cinema camera that costs more per day to rent than the H1 does to buy. The bad news is that out of the box, the H1 produces powerfully awful pictures. But the rebuttal witness for the good news points out that once you get the H1 away from all the factory-default settings, understand what all the settings do and take manual control of them each time you shoot, you can get strikingly awesome footage.
Here’s a still from some test footage I shot yesterday. This was under tungsten light from two desk lamps, but I had the color balance temperature in-camera set to 5,000K to make the images pleasantly warm. I was shooting with +3db gain for an ISO equivalence of 400, with the lens zoomed all the way in to a focal length of 108mm and the aperture wide open, giving me a depth of field measured in inches. The knee was set to middle to give me blown-out but not ridiculous highlights, and the black was, as always, set to press to really crush the shadows and draw out all the detail possible from the midtones. This was shot in 1080/24p, with a ¹⁄₄₈ shutter. I scaled the still down from 1920×1080, obviously.