So you’re ready to blaze away at your indie film that is going to have some visual effects in it. You’ve got your camera (maybe even more than one). There’s a decent light kit and a friend to be your lighting person. The actors are ready. Hopefully you have a sound person with decent equipment. The makeup person is ready. The script is finalized (which hopefully has been through a number of revisions by now). Finally, your locations are picked out. LET’S GO!!
Then shooting day arrives. Our locations don’t look or sound the same as when we scouted them. The lights aren’t powerful enough. The image we’re getting in the camera with these lenses isn’t what we imagined when we wrote our scenes. We’re running out of time, so let’s just get through all the shots as quickly as possible. We HAVE to shoot them or we have no film. We’ll fix everything in post!
Fast-forward six months. We don’t know how to achieve the look we’re going for. Mistakes were made during shooting that we don’t know how to fix. There are experts who could help us, but we can’t afford to hire them.
Sound familiar? If you’re cowering behind your camera, don’t worry. You’ve joined the ranks of literally thousands of filmmakers, both indie and professional, who have made these mistakes with their first film and then repeated the offense multiple times afterwards.
There are many things that can and should be done when embarking on the filmmaking process. For this article, I’m going to focus mostly on the visual effects aspect but other areas will spill in because they all kind of link together anyway.
As a professional visual effects artist, one thing that I’ve observed on a fairly regular basis is something seemingly innocuous. I get a phone call or email from a director or producer looking for a quote on visual effects for their film. They’ve already shot it and they have some general ideas based on their script of what they want.
Notice anything wrong with this situation? Some of you may have guessed it. The call came at the wrong time in the production process. The time to bring in a visual effects artist or team is before you’ve shot a single frame of your film.
Visual effects, and visual post production for that matter (color grading, relighting, etc.), can be a highly technical task, and it is one that should never be taken lightly by a director or producer. Anyone doing so is setting the film up for failure and likely throwing production money out the window due to the costs associated with “fixing it in post”. Or worse yet, they resort to very poor work ethics by attempting to manipulate artists into free or below-rate labor to fix their mistakes. The latter is pretty inexcusable. If this is you, then on behalf of all of us who work hard for you, please consider changing your view and approach to creating films.
So what’s the first step?
Assuming you have a completed script, or at least a draft revision you’re proud of at this point, your next step is to do a script breakdown and organize your shots: every single shot required to compose your film. This is not the place for general notes. Get detailed.
It should look like a spreadsheet that organizes your scene numbers, the shot numbers for all the shots needed in each scene, which actors are in the shot, the shot type, the lens to be used along with its focal length, aperture, and shutter speed, and a description of what each shot is. For shots that involve a visual effect, add a column labeled VFX that contains a description of the desired effect.
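As a minimal sketch, here is one way such a shot list could be generated as a CSV file. The column names and the sample row are purely illustrative, not any industry standard; adapt them to your own production:

```python
import csv
import io

# Hypothetical shot-list columns -- adjust to fit your production.
COLUMNS = ["scene", "shot", "actors", "shot_type", "lens_mm",
           "aperture", "shutter_speed", "description", "vfx"]

rows = [
    {"scene": "12", "shot": "12A", "actors": "Sara", "shot_type": "wide",
     "lens_mm": "24", "aperture": "f/4", "shutter_speed": "1/48",
     "description": "Sara enters the junkyard",
     "vfx": "horde of flying mechanical insects (3D)"},
]

# Write the shot list as CSV text; writerows fills each row from the dict.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=COLUMNS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Most people will simply build this in Excel or Google Sheets, which is fine; the point is that every shot gets its own row, and every VFX shot gets a real description, not a vague note.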
Once you complete that, you bring in the VFX artist or the person who is going to act as the VFX Supervisor (working with a team). Together you go over each shot that involves a visual effect and do a technical breakdown, discussing and working out how each shot is going to be achieved. At this stage the notes are still fairly general: a “we’ll need to do this or that” for each shot.
Shot description: We want our lead actor to teleport from one spot to another during this shot.
Technical breakdown notes:
Is the shot handheld? Will we need motion tracking? Can we avoid that? We’ll need to shoot clean plates of our set without the actor. What does the effect look like? Does the actor disintegrate into particles and then reappear, or is the effect more like a ball of light that blooms over the actor as they disappear and reappear? If it’s the latter, we can achieve it without using a green screen.
To disintegrate our actor and then put them back together again, we’re going to need to isolate them by shooting against a green screen. If we can’t do a green screen shot of our actor, they’ll need to be rotoscoped out to separate them.
Do you see how there were more questions than answers in the technical breakdown notes? This is a good thing. It leads the team down a path of solving problems about HOW to shoot each shot so that the end result looks fantastic, rather than trying to band-aid the shot later on in post. Don’t be afraid to ask questions in these early steps. You will all work together to solve them now as you revise your notes and create a game plan for the shooting day.
This also identifies limitations of your project that exist because you don’t have the proper equipment, software, or budget to fulfill the vision. Here is where creative ideas flourish as you work to create a shot that you CAN execute within the constraints of your project. Better to execute within your means than to drop the ball into a splendid shattering mess of mediocrity and make your entire production team look bad, at which point, in true Deadpool-inspired style, you’ll have to open your credits with Directed by Doofus and Produced by Doofus Jr.
What about 3D stuffs?
More and more we’re seeing 3D effects in many films, both at the big Hollywood studio level and at the indie level. Sometimes this stuff ends up really amazing, and often it ends up rather forgettable. That said, you shouldn’t shy away from it if your story could use it. You’d be surprised how many applications of the 3D workflow can greatly enhance your film. Consider things like sign replacements, adding or enhancing building features in your scene, adding natural elements like trees and mountains, or extending your small set to make it look like part of a much, much larger setting.
The list can be quite long but you get my point. So how the heck do we not botch this up for our film? It’s easier than you might think but the key is asking up front and planning for it in pre-production. Let’s revisit our shot list shall we?
Shot description: Sara goes into the junkyard where she is confronted by a horde of flying mechanical insects.
Technical breakdown notes:
- Again we need to know if the shot is handheld or locked. If we’re going to do a motion match to a frenzied-looking shot, we’ll need to make sure our VFX team has the proper motion tracking software to perform this task, e.g., SynthEyes. We’ll also have to make sure we don’t introduce too much motion blur by way of wild camera moves, because that will make motion matching difficult. It is not impossible to motion track a shot like this, but it is time consuming, and time is money. If we shoot with the camera locked down, we save all the time that would be spent motion matching, and save money. What’s the shot worth to the film, and how much can we spend on this particular shot?
- We’ll need to capture a panoramic or 360 spherical HDR image of our set as it looks during the shooting of this shot, from the origin point of our mechanical insects. We’ll need this image to perform HDRI rendering of our mechanical insects. HDRI rendering is the process of lighting our 3D elements using image-based lighting (the intensity, color, and position of light sources from our set) to make them look like they were actually there on set. In addition, we’ll need to make sure someone draws a quick little map of our set showing where our lights were at the time of filming. This map should note how intense those lights were and what their color temperature was on set, e.g., 3400 to 5500 Kelvin, along with whatever gels were in use.
- We need exact information about our camera settings, as this will aid in matching our 3D image to the camera image when we composite them in post production. Let’s document the following on shoot day:
- Focal length
- Shutter speed
- Do we have a design for our mechanical insects? What software are we using to model and animate them? What are we using to render them? How long does that process take for this shot? How much does that end up costing? (**DON’T GUESS HERE! DO TESTS IN PRE-PRODUCTION!)
OK, what about shooting resolution? 2K, 4K, 6K, or infinikayyy?
This is actually a fairly important topic when it comes to all of your post production, not just films involving visual effects. The answer to this question really is found by asking some more questions. Where are you showing your film? Digitally on the web via Vimeo or YouTube? Will it be viewed mostly in theaters? For those films being shown on the internet, there is very little compelling reason to shoot your film in anything greater than 4k at the moment.
Most viewers are watching on a mobile device, a tablet, or a computer, or more rarely streaming to their TV from a site like Vimeo or YouTube. So why not just shoot in 2K? You could, and probably should, in most cases where your film will never actually be watched in theaters. The argument for shooting 4K for the web is that down-sampling from 4K to a 2K editing timeline yields a sharper image, even though the end product is in 2K. On the other hand, there is more than sufficient technical proof available to demonstrate that a properly shot 2K capture can be up-scaled with no quality change perceptible to the viewer. That debate is slightly out of scope for this article, but I encourage you to research it on your own. The proof is there.
However, and this is a big one, if you’re doing visual effects for a production that is basically going to YouTube then plan on processing your visual effects images no greater than 2k. Why? Well let’s take a look at our horde of insects example. Rendering each frame of that shot with detailed models and really good lighting takes time. Every time we bump up the resolution of each render to match our footage we’re increasing render time greatly. Sometimes into really crazy numbers.
Let’s say the test render of our proof-of-concept image is at 960 x 540 (three quarters the size of 1280 x 720 HD, a common size for draft previews) and comes out at 30 seconds per frame. Not bad. Now let’s do 720p and we get 1 minute. Hmm. Now 1080p (our 2K mark) and we get 3.5 to 4 minutes. Okay, 2160p (4K) and we’re at 8 to 10 minutes per frame. How many frames are in this shot again? The shot is 30 seconds long and we’re working at 24 frames per second. That’s 720 frames (30 x 24) at 10 minutes each: 7,200 minutes, or 120 hours.
See the problem here? Even with a 2K workflow we still end up at about 50 hours (720 frames at roughly 4 minutes each). Of course we’ll use render farms for this, but there’s a cost associated with that on top of the cost of actually creating the VFX themselves. Render farms typically charge by core-hours per frame. This also needs to be a line item in your production budget.
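The arithmetic above is worth scripting so you can compare resolutions and rates quickly. The per-frame times and the core-hour price below are illustrative placeholders only; replace them with numbers from your own pre-production tests and your render farm’s actual rate card:

```python
# Rough render-time and render-farm cost estimator.
# All per-frame times and prices are illustrative placeholders --
# run your own tests in pre-production to get real numbers.

def render_hours(shot_seconds, fps, minutes_per_frame):
    """Total render time in hours for one shot."""
    frames = shot_seconds * fps
    return frames * minutes_per_frame / 60.0

def farm_cost(hours, cores, price_per_core_hour):
    """Hypothetical render farm cost: hours x cores x core-hour rate."""
    return hours * cores * price_per_core_hour

# The 30-second, 24 fps insect shot from the example above:
print(render_hours(30, 24, 10))  # 4K at ~10 min/frame -> 120.0 hours
print(render_hours(30, 24, 4))   # 2K at ~4 min/frame  -> 48.0 hours
```

Running the same numbers at each resolution before you commit to a delivery format turns the “what’s this shot worth?” question from a guess into a line item.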
The moral of the story is that if you’re trying to keep up with the Joneses by shooting in infinikayyy to show off your amazing creation on the internet, then you’re doing your film a huge disservice by eating up a significant portion of your budget to satisfy your whim.
If you’re shooting for distribution in theaters then more likely than not you have the appropriate budget for the infinikayyy workflow and this will not be an issue.
Let’s talk about camera types
There are literally dozens of camera models and formats to choose from when determining what to shoot our film with. I’ll avoid the branding fan boy type talk here and stick to basic information as it relates specifically to visual effects. Image resolution and form factor aside (full frame, micro four thirds, etc.), what has the most impact on the quality of your image for visual effects and visual post production is the bit depth and color resolution of your image.
Most DSLR cameras encode your image with 4:2:0 chroma subsampling. Mid-level cameras like the Canon C300 encode natively at 4:2:2, while big boy cameras like the Arri line encode at a full 4:4:4. You can learn about what all this means HERE on Wikipedia.
So what does this mean for our film? The most immediate impact is on green screen shooting and rotoscoping in post production. If you are using DSLRs for your film, green screen shots can come out OK, but you had better light them perfectly or post production on them is going to require lots of TLC. This will affect your bottom-line costs in post.
You’ll learn from the wiki that a 4:2:0 image is actually throwing out a great deal of color information. This means we’re not getting much to work with when it comes to keying out a green screen or doing rotoscoping, and it also limits how far a color grade can be pushed in post. For general purposes in the indie realm, a 4:2:2 image is definitely sufficient for visual effects and good post color processing.
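For a concrete sense of how much each scheme keeps, here is a back-of-the-envelope comparison using the standard J:a:b subsampling notation (sample counts over a 4-pixel-wide, 2-row block; this ignores bit depth and codec compression, which also matter):

```python
# Chroma samples kept per 4x2 pixel block under J:a:b subsampling.
# Luma is sampled at every pixel regardless; only the two chroma
# channels get thinned out.

def chroma_fraction(a, b):
    """Fraction of chroma samples kept vs 4:4:4 over a 4x2 block."""
    return (a + b) / 8.0  # 4:4:4 keeps a=4 + b=4 = 8 per channel

for name, a, b in [("4:4:4", 4, 4), ("4:2:2", 2, 2), ("4:2:0", 2, 0)]:
    print(f"{name}: {chroma_fraction(a, b):.0%} of the chroma samples")
```

In other words, a 4:2:0 file keeps only a quarter of the chroma samples of a 4:4:4 capture, and half of what 4:2:2 keeps, which is exactly why green screen edges and heavy grades suffer on DSLR footage.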
There are many tools available on the market to capture your image at these higher color resolutions. Take a look at the Atomos line and other similar products.
Whether you’re shooting to an external recorder like the Atomos or not, you also need to make sure that you are shooting a “flat” or LOG image in camera. There are no DSLR cameras on the market at the moment that shoot a true raw video image, but you can get close to that baseline by getting your camera settings as close to a LOG-style image as possible. A good explanation of this is found HERE.
A few DSLRs offer presets you can download and apply to your camera to get you in the ballpark. If you make the fatal mistake of dialing your final super cool look into each camera during shooting, then don’t bother calling anyone to do your visual effects or post color processing: you’ve effectively ruined your film. Update your opening credits accordingly lol.
A final note on this: if you’re using multiple cameras, please try to match the models rather than mixing them all up. This extends to the settings on all your cameras. Match them up.
What do I send to my Visual Effects team?
Short answer? Money….lots and lots of money. Kidding ;). The footage you send to your VFX team should be as follows:
- The edited timeline of your scene that will feature the visual effect. This helps us see how the scene plays out in your final edit.
- The individual footage clips cut down to the in and out points that will have VFX created in them.
- The footage must be exactly as it came from the camera. Do not transcode it to another format. Do not process it with a color correction. It must be in its original form, trimmed to the desired edit. That’s it.
- Provide us with the LUT file you will be using to apply your final look on the scene. Your editor knows what this is. If they don’t then you might need to revisit your hiring choice.
Now you have a solid baseline of knowledge. Go out and have a lot of fun making your film! Try not to cut corners on these things. I promise you your production will benefit from applying these practices. Feel free to reach out with any questions.
Go make a cool film!
~ Tobias Steiner – Owner of FrameHaus LLC