Dead Run - A Short Horror/Action/Thriller Film
Sometimes, your fears are real.
In the foreboding darkness of a remote Alaskan forest, a young girl is pursued by something unseen, but not unheard. Determined, terrified, and alone, she runs for her life. With darkness closing in around her, she struggles to outpace the demons of the forest and finds that, sometimes, your fears are real.
Where to See Dead Run
In August and again in December of 2019, Dead Run was a featured film in the launches of two new video-on-demand (VOD) streaming platforms, House of Disaster TV and Four Thieves Productions. You can read more about them below and find links to their respective sites.
You can also, of course, Watch The Film Now on this web site.
We recommend you listen ONLY with full-range speakers or quality, full-coverage headphones (not laptop speakers) to enjoy the full effect of this immersive soundtrack.
House of Disaster TV (HODTV)
In September 2017, Dead Run was an official selection of the Horror of Damned Film Festival in Milan, Italy. The festival is in partnership with HODTV, launched in August 2019, a subscription-based, on-demand streaming platform "that selects and offers the best independent horror and thriller films worldwide."
We are both honored and thrilled to have been selected by HODTV to be featured in their premiere launch event. You can explore their curated content on the HODTV web site.
Four Thieves Productions
In June, 2019, we were approached by representatives from Four Thieves Productions about including Dead Run on their ad-based VOD streaming service to be launched in December 2019 and hosted on Hyvio. Four Thieves Productions "is dedicated to bringing quality short form content to audiences worldwide by presenting proven festival quality films".
Once again, we are both honored and thrilled to have been selected by Four Thieves Productions to be featured in the launch of their new streaming service. You can explore curated content from Four Thieves on the Four Thieves Productions page.
Project Concept & Initial Development
The Dead Run project began in the summer of 2014. It was as hot as summers get in Alaska that year. I found myself with the urge to make a horror film (I was a burgeoning filmmaker, after all), but it had to satisfy three criteria: (1) be fun to make, (2) be something I'd feel good showing to the world, and (3) allow me to learn everything I could about independent filmmaking during the process. Thus launched a three-year epic journey.
I knew I wanted a story that was unique (no zombies) and fast-paced out of the gate (non-stop action, minimal dialog), that could be made to an extremely high standard yet realistically produced with minimal cast and crew, and that included a character with whom the audience could empathize. After developing the initial story, I set out looking for just the right teenage girl to star in the film. I found her in the daughter of a friend. She was 14 (at the time), athletic, a dancer, and an actor with some stage experience... and was eager to participate. I scouted locations that fit my vision, which turned out to be my back yard: the Chugach Mountains of Alaska. I then fleshed out the script with final details, created a shooting script, decided on wardrobe, and planned a rehearsal shoot.
We shot some rehearsal footage one evening on location, and after reviewing the footage, made some changes to the script in light of what we'd learned (one of the benefits of being an independent filmmaker!). With our revised shooting script, we set dates for production. Getting the footage was, by far, the most enjoyable part and, as it turned out, the easiest as well. That three days of shooting set me off on a collision course with the ultimate experience in filmmaking. I delve into the details of the entire filmmaking process a little later on this page under Filmmaking Process & BTS.
Who I Am
My name is Shane Taylor, the writer, producer, director, cinematographer, post-production supervisor, sound designer, Sherpa and snack provider on Dead Run. I've been a photographer nearly my entire life, and in 2011 my interest turned to indie filmmaking. The goal of this 3-year project was to cut well below the surface, to dig deep into all aspects of the art of filmmaking so I could understand every detail of the process of producing a film, from beginning to end. The journey was nothing short of epic.
Long ago, when I edited my first video, the importance of the relationship between editor and director became apparent, as exemplified by Coppola and Murch, or the many directors who also take on the editor role in their films. Editors-turned-directors typically shoot for the edit. I had the opportunity in this project not only to explore the relationship between picture editor and director, but also the importance of sound to the whole of a film. It's been said that an audience will excuse an out-of-focus picture, even calling it 'artistic', but bad sound is unforgivable. I worked very hard to create an immersive 5.1 soundfield for Dead Run, along with a stereo version for the festival circuit. I now know that wherever I find myself in the filmmaking process in the future, on whatever project, I will be better prepared, more knowledgeable, and better able to contribute to the overall project's success.
I began submitting Dead Run to the festival circuit in June 2017 and continued through October 2018, during which time it was an official selection at eleven festivals and won eight awards; the laurels can be seen below. I would also love to collaborate with other creative people on new projects and continue making great films. You can read more about me on the About page, then visit my Contact page to get in touch.
On to the cast and crew!
Paisley is the talented young actor who plays the protagonist in Dead Run. She was an absolute joy to work with, dedicated to the story and the entire project.
She was a real trouper during the strenuous shooting schedule. Just getting to the location each day was a 1.5-mile hike, one way, and then she had to do a lot of running during the shoot, on uneven, unfamiliar ground... through the woods, for several hours. Then there was the hike out. With gear.
Later, during post-production, she endured two days of ADR sessions to re-record the sound for every shot. I was fortunate to have such a talented young woman star in Dead Run. Thank you, Paisley. Well done!
Leeann C Taylor
Makeup FX, Location Sound, Wardrobe
Leeann provided the makeup FX for Paisley, and tested and mixed the blood used on set. Her work was perfect! While on location, we were passed by a couple of hikers and one took a look at Paisley and said "Honey, are you OK?!"
Leeann provided a valuable second pair of eyes and ears on set to ensure we maintained continuity between shots and days, and served as a sounding board throughout the entire project.
She also served as location sound recordist, wiring Paisley for sound before each shoot; carried the field mixer and recorder for our wireless mic and shotgun; and operated the boom on various shots.
She also handled Paisley's wardrobe.
Leeann is also my wife and soulmate, a true believer in me and this movie, and a tireless supporter throughout the entire process. Thank you for all your love and support.
Amanda is a dear friend and has always been willing to lend a helping hand in any of my artistic endeavors.
She often served as muse to me as a photographer, modeling for my photo shoots on more than a few occasions.
I'm very grateful she agreed to help with the production of Dead Run.
Thank you Amanda.
Ella is Paisley's younger sister (by two years). She was the stand-in model for Paisley when Leeann was working out her makeup FX.
She was on-hand during a 2-day workshop I held to rehearse particular parts of the script with Paisley. She was also with us during the grueling 2-day ADR re-recording sessions. On both occasions, she provided valuable insight and encouragement to her sister, as only sisters can do.
I'm truly grateful that she was with us.
Thank you Ella.
There are quite a few individuals I don't know, and whom I'll probably never meet, who lent their talents to the production of the incredible soundtrack for Dead Run. Although some of them required no attribution for use of the sounds they've contributed to the world on www.freesound.org, I would be remiss if I didn't give credit where due. Below, I list the user names and the sounds (sound #) they created that ultimately found their way into the Dead Run soundfield. Thanks to all of you for the incredible work you do, and for sharing it with the world. They are, in alphabetical order:
Baxuvis - 276567
Benboncan - 81970
Blu3hand - 188161
BrainClaim - 267067, 267197, 273805, 274123
cargio0003 - 205902
ceich93 - 266413
CosmicEmbers - 160688
csengeri - 38228
D DJ Chronos - 33413
Dredile - 164272
Erdie - 23222, 24524
HerbertBoland - 76948
horseyfootage - 280202
jobro - 123075
Jovica - 3863, 16503, 74595, 123053, 177016
JuliusMabe - 263669
Kamyarbb - 146328
klankbeeld - 170387, 205186
nick121087 - 333832
Sonicfreak - 174450
strongheartbeat - 52651
studiorat - 22350
thanvannispen - 38100
waveplay - 193028
xDimebagx - 202239
yewbic - 31904, 33796
ZeSoundResearchinc. - 147874
Filmmaking Process and Behind The Scenes
Here I outline in some detail the process I used to produce Dead Run, including some roadblocks, detours, and full stops I encountered along the way, and provide details found nowhere else about the film. Warning: spoilers ahead. You might want to watch the film before reading further. Because this was a project specifically designed to teach me as much about filmmaking as possible, much of the time developing this ~9-minute film was spent researching, studying, reading textbooks, watching tutorials on YouTube, reading software manuals, experimenting, and talking to other filmmakers on professional forums, all while developing a workflow that would let me not only complete this film but also repeat the process. I'll get right to the heart of the matter and will provide links and references to many of the resources I found most helpful, in hopes they might be useful to others.
In developing this project, I had to be realistic about what could be accomplished given that, by design, I would be doing nearly everything myself. Although able to fund a much larger film project, I chose to make this one with zero budget. With the exception of a few key pieces of hardware added to my existing photographic inventory, the only costs in making this film were some awesome stage blood and paying the actress. I was focused on the technical aspects of filmmaking, not the scale. The story is presented visually and aurally through exposition, rather than through dialog. In fact, there are exactly five words of dialog in the film, four more than I scripted.
Once I had a general idea of what I wanted to shoot, I set out looking for locations. Living in the Chugach Mountains provided many options for shooting in various types of terrain: from barren, rocky, and sparse, to dense spruce and birch, fields of high grass, lush, soft, mossy ground, ground covered in crunchy lichen and dry leaves, and even raging rivers. I found an area very close to home that had all of that and more, including bears, only about a 1.5-mile hike from the car.
Leeann and I spent many hours hiking and exploring, noting all the many areas that were suitable for shooting, including GPS coordinates, helping me develop the details of the story I wanted to tell, which was intimately involved with the terrain. I knew I wanted a lot of running, so the area had to accommodate that, as well as the ability to shoot it. The area was in a valley, which would give us an extended shooting time each evening out of direct sunlight in order to facilitate the final dusk look I wanted for the film. As an experienced studio photographer, I knew how to light a scene, but I really had no desire to light any of this movie artificially. So I had to plan around available light.
The location notes helped me fill in details of the story based upon the terrain in which I planned to shoot. The antagonist was not only pursuing the protagonist, but was also the terrain itself. By this time I had decided on Paisley for the protagonist. She was fit, athletic, and had years of dance and stage experience. I did not write a formal script, since I had the entire story in my head. We decided to do a rehearsal shoot with Paisley, not only to give her a chance to get a feel for what we were asking of her, but also for me to see what she was capable of and to figure out how I was going to shoot it.
We spent several hours one drizzly afternoon shooting Paisley on location, trying out ideas, and working out the details. I used a Glidecam 4000 HD steadicam with my DSLR and shot entirely hand-held, and decided that it would be the best (and most efficient) way to shoot the film. I could rest it on the ground for slower shots, and even shoot inverted for low angle shots. I edited the raw footage together and we all watched it, taking note of what worked and what didn't, and what shots we would need to rehearse and pay special attention to during production. Some of this footage is provided in the BTS Rehearsal video below.
I created a detailed shot list that included much of the information a script would have contained (i.e., actor's business), given there was no dialog. The film was broken into seven scenes, which I called Dense Forest Wood, Rough Terrain, The River, The Fall, The Shelter, Rock Garden, and Capture, and consisted of nearly 80 different shots, some of which were tagged as optional. This did not include the pick-up point-of-view (POV) shots that would be needed in the edit, which I could do after the shoot with Paisley. The shot list included scene/shot/take number, type of shot (WS, MS, CU, etc.), any continuity concerns, GPS waypoint, the action to be shot, which mics were used (lavalier, shotgun), FPS, camera position, height, direction, and angle, focal length, shutter angle, and expected apertures. Each shot also contained additional descriptive language for Paisley to guide her acting.
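A shot list like this is really just a small structured dataset. As a purely hypothetical sketch (the field names below are my own shorthand, not the actual headings from the production documents), one entry could be modeled like this:

```python
from dataclasses import dataclass, field

# Hypothetical model of one shot-list entry. Field names and values are
# illustrative only; the real shot list was a hand-built document.
@dataclass
class ShotEntry:
    scene: str                 # e.g. "The Fall"
    shot: int                  # shot number within the scene
    shot_type: str             # WS, MS, CU, etc.
    gps_waypoint: str          # waypoint recorded while scouting
    action: str                # the action to be shot
    mics: list = field(default_factory=list)  # "lav", "shotgun"
    fps: int = 24              # frame rate for this shot
    focal_length_mm: int = 35  # planned focal length
    optional: bool = False     # tagged optional in the list
    notes: str = ""            # continuity concerns, acting direction

entry = ShotEntry(
    scene="The Fall",
    shot=3,
    shot_type="CU",
    gps_waypoint="WP-12",
    action="Girl lifts herself off the staub",
    mics=["lav"],
    notes="Check blood continuity from previous scene",
)
print(entry.scene, entry.shot_type, entry.mics)
```

Keeping each shot as one record like this makes it easy to sort by waypoint for an efficient shooting order, or to filter out the optional shots when time runs short.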
Having nailed down the shots we needed, we focused on any areas that needed special attention, or additional rehearsal time with Paisley. I scheduled a 2-day workshop with Paisley where we rehearsed several of the more important, tougher shots, like in The Fall scene. We also tested lav placement options that would be effective while running through the woods, as well as testing out our field recorder, boom and shotgun mic. We also took this opportunity to perfect our makeup and artificial blood FX capability. Behind the scenes footage from this workshop is provided below in the BTS Workshop video. It was also during this workshop that the title Dead Run finally came to me.
Feeling good about the shot list, and seeing how well Paisley performed and adapted during the workshop, we planned for a 3-day shoot in the following weeks. All throughout this time, and long before I began planning for Dead Run, I had already begun studying, researching, and reading on every aspect of filmmaking, including directing, cinematography, lighting, production sound recording, using microphones, editing, dialog replacement and editing, sound design and mixing, re-recording, Foley, creating sound FX, audio post-production in general, surround sound, and continuity. These texts gave me a glimpse into what I was in for and became a list of resources I often revisited (re-read, and added to) during post-production.
With a shot list and detailed equipment list in hand, including both bug spray and bear spray, we were finally ready to start shooting. Getting to location was a 1.5 mile hike in, so we packed light with minimal gear. This included a Nikon D810 DSLR with 24-120 f/4 zoom lens, Glidecam, batteries, lav and shotgun mic, boom, wireless system, sound bag with headphones, mixer, recorder, and snacks, water, a few tools, and first aid kit.
I decided to shoot in chronological order; it made the most sense given our short shooting schedule and continuity concerns, especially considering the introduction of blood midway and the expected wardrobe degradation. I knew Day 1 would be slow going as we all settled into our roles and might not produce much usable footage, which proved accurate. That evening we reviewed the footage, decided to refine exposure settings and mic placement, and determined that we had insufficient time, a total of approximately 8 hours on location over the remaining two days, to shoot everything we wanted. So we cut The Shelter scene, built around a small man-made shelter we had discovered, constructed of limbs arranged tepee-fashion and barely large enough to fit Paisley inside, and the Rock Garden scene, whose main feel we posited we could capture at the end of The Fall.
Over the following two days, we worked hard, efficiently, and fast to get the shots we needed for each of the remaining scenes, some requiring only a single take, others a few more for safety. Once wardrobe, makeup, and sound were set each day, we moved efficiently from shot to shot, location to location, working through Dense Forest Wood and Rough Terrain on the first of those days, and The River, The Fall, and Capture on the second. With a detailed shot list, reference sound from camera, sound from the lav for every shot (Paisley wore a NeoPax ankle strap with the transmitter), and the shotgun where appropriate, we used a simple clap from Paisley as our slate, much of which is captured in the humorous BTS Mark video below. As we shot, I was already editing the film together in my head.
A few days following the shoot with Paisley, I went back out on location alone with camera and Glidecam to capture the various POV shots I knew I would want (and need) for the edit, retracing our steps from scene to scene, getting the shots I needed. With that done, I now had a host of image files, sound files, and B-roll to sort through to start building the story. Some behind the scenes footage from production is captured in the BTS Production video below as well.
Post production is where the rubber met the road, and where the real work began. First, I needed to assess the picture and sound assets accumulated over the final 2 days of shooting, along with the B-roll footage captured later, compared to my shot list. I was very pleased with the footage captured, and the options it would provide for the picture edit. We had collected enough takes, angles, and shots, to tell the story appropriately.
A quick assessment of the sound, however, told me that, although acceptable for most shots, Paisley's actual breathing didn't fit what I heard in my head, and wanted for the film, and I'm not sure it ever could have under shooting conditions on location. In addition, I hadn't anticipated the sheer persistence of the nearby river in adding an annoying background drone that varied from scene to scene in volume as we moved closer to, or further away from, the source as we shot. Although needed in the final cut, the sound of the river in many shots would greatly hinder my efforts to edit the synced sound together into a consistent aural image. I'd have to deal with that later. Next up was the picture edit, where I worked toward a final, locked picture timeline that would serve as the basis for sound design, ADR, Foley, SFX, and everything else involved in creating an immersive 5.1 soundfield.
As my edits began taking form, I started thinking that this would be something that I would feel good about submitting to film festivals. Given this project was primarily about learning filmmaking, this realization and additional goal added another layer of complexity and further focus to the project. I already owned Adobe's Master Collection CS6 and a powerful HP workstation because of my photography work, and therefore had all the tools I needed to create Dead Run. This included Premiere Pro as Non-Linear Editor (NLE), Audition as my DAW (digital audio workstation) for sound design and editing, After Effects for special effects and title design, Speedgrade for color correction and grading, and finally Encore for creating Blu-ray discs, which would serve as backups to the final Digital Cinema Package (DCP) needed for film festival playback.
The first hat I wore in Post was as Editor, where I spent several weeks editing the footage in Premiere Pro, starting by just placing all the footage onto a timeline in (basically) chronological order, including multiple takes. After carefully reviewing each clip for how it met the needs of the story, I started cutting and rearranging until I had a first assembly, which included the raw sound recorded by the camera attached to each clip from the DSLR. I continued to refine this edit, adding sound clips from the field recorder, where the Lav appeared on the left channel and the Shotgun on the right. I began to get a feel for how I could manipulate the footage to change the story as I saw fit, to add tension in places, or to speed things up, or slow things down. The sound, although raw and unedited, provided a little more stimulation during the edit process.
After many iterations, I finally arrived at a cut that I felt did the story justice, and decided to lock the picture. Although rarely a true final cut, it is the point in post-production where the sound team can begin developing a synced sound track to lay under the image. For me, this meant I needed to render the picture to an appropriate file for Audition, load that file into Audition as my picture reference, and put on my Audio Post-Production hat.
Intermingled with my editing sessions, I was also (re)reading books by the masters, like In The Blink Of An Eye by Walter Murch, a small but powerful text that is a must-read. Other texts I found helpful were: Between The Scenes, by Jeffrey Michael Bays; The Innocence of the Eye, A Filmmaker's Guide, by Ed Spiegel; An Introduction to Film Analysis, by Michael Ryan and Melissa Lenos; Storytelling Techniques for Digital Filmmakers, by Ross Hockrow; and The Eye is Quicker, by Richard D. Pepperman. There were others, which are listed at the bottom of this page, but these provided the most insight for me. It goes without saying that I read the manual for each of the Adobe apps as well.
Special Visual FX
A key scene in Dead Run is when our young girl falls and loses consciousness. Upon awakening, she discovers that she has fallen on, and impaled herself with, a staub (a small, sharp tree stump). Of course, Paisley's mom would not have appreciated us having her actually fall on the staub during production, so this was added later in post using After Effects (AE). After we filmed her falling and then getting up off of the staub (which wasn't there), we did an insert shot of the (actual) staub once it was placed in the scene with the proper blood effects added. Following production, while I was out shooting additional POV footage, I also took some hi-res stills of the staub with blood effects added to use in AE. In Photoshop, I extracted the staub from the rest of the image and processed it to fit (visually consistent) within the scene. I then added the image of the staub to a layer above the footage of the shot in AE and created a tracked mask, frame-by-frame, so it became visible in the shot as she lifted herself off of it. It is a subtle, but important, graphic (and continuity) effect given the subsequent insert of the closeup of the staub in the following shot (especially if one misses it the first time and plays it back to see if it was there before).
Another visual effect I added in AE was the only glimpse we get of the girl's predator. This is in the form of only a shadow in the woods, one that was carefully crafted to be invisible; that is, until it moves... and changes shape. This effect was created with a complex mask and color layer above the footage. However, in order to have the shadow appear both behind some trees that were between it and the girl and in front of those trees more distant, the mask used had to be created frame-by-frame (no tracking) given changing perspective in the shot and camera movement and which trees would obscure the shadow. I feathered the mask to soften the edges and made it darker than it needed to be, knowing that I could later fade it as needed in Premiere Pro as a separate layer above the footage to blend it within the scene.
Two other key visual effects in Dead Run were created in Speedgrade. The first was the change in the look of the film when seen through the predator's eyes. This look is different from the overall look of the film to call out the POV being seen by the audience. The context makes it clear what we're experiencing. This is just a different grade used for the shots representing the predator's POV. These shots were then rendered separately and imported back into Premiere Pro as a separate layer above the footage. This kept the POV separate and independently controllable from the overall film's look.
The other special effect I produced in Speedgrade was the lightning flashes that appear late in the film. I did a bit of testing to find the right combination of tint, brightness, contrast, and highlight and shadow detail, that I felt looked realistic. This appeared as a separate color grade layer affecting just those few frames where I wanted the lightning flashes to occur. Once the look was created, I could place the flashes wherever I desired in the frame, and add the sound effects to match.
Post audio was by far the most complex, frustrating, time-consuming, and yet, surprisingly, the most enjoyable part of this project. At the time I also thought it to be the most difficult, that is, until I got to color (more on that later). I've read (and if you watch many movies, you'll also have experienced) that sound is treated as a second-class citizen on far too many production sets, with the oft-heard phrase "we'll fix it in post" being murmured by directors oblivious to the importance of sound to their picture. However, I must admit that I fixed all of my audio in post and, in fact, we created 99.9% of it from scratch. Chalk this up to inexperience, or zero budget, or designing a project with minimal crew or, ultimately, just to the fact that I wanted more than what I heard in our production sound recordings (plus it forced me to learn a lot more about film production). I had a vision (aurally) of what this film would be, and it wasn't there to capture during production, nor do I think it could have been. After all, I wanted to experience ALL aspects of filmmaking, and that includes ADR, dialog editing, and Foley, right?
Audio Post-Production - Dialog
I opened Audition, added my picture file to the top track as reference, and stared at a blank timeline for quite some time before realizing that I had a lot to do. OK, so, I needed dialog, or in my case, my five words, and a lot of heavy breathing, startles, grunts, falls, and screams. Unfortunately, this has to be recorded just like dialog does. So I scheduled two days with Paisley to do ADR (Automated or Additional, depending on who you ask, Dialog Replacement). To prepare for this intense two-day session, I converted a spare bedroom adjacent to my office into a recording studio. This involved converting the existing bed into mattress soundproofing, with the help of numerous blankets, quilts, and comforters used as sound baffles to suppress reverberation during recording sessions. I placed a spare monitor in the room, along with my Sennheiser 416 shotgun mic on a stand and a lav mic, both wired to my MOTU audio interface connected to my HP.
This setup allowed me to play video on the monitor for Paisley to view, while recording her performing synced to picture. From my studies, I knew I needed to provide prompts to Paisley embedded in the video for each and every clip we were to record. So, back in Premiere Pro, I created a cut from my locked picture that included handles, or additional frames not in the shots of the final cut, on the heads and tails of each clip to provide takeoff and landing zones for Paisley. I also added the standard visual cues consisting of two white vertical bars, one of which moved toward the other; the moment they met cued Paisley that critical recording was beginning. This proved highly effective. I used Premiere to do the recording, which I found later was the wrong choice. When recording Foley later in post, I used Audition in the same fashion, which turned out to be the much better approach.
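The cue-bar (or "streamer") idea is simple arithmetic: over a fixed lead-in, the moving bar travels from the frame edge to the static bar so that they meet exactly on the cue frame. As a hypothetical sketch (none of this tooling was used on Dead Run; the function name and parameters are my own), the bar's x position per frame could be computed like this:

```python
def streamer_positions(cue_frame, lead_in_frames, frame_width, target_x):
    """Hypothetical helper: x position of the moving cue bar for each
    frame of the lead-in, so it meets the static bar on the cue frame."""
    positions = {}
    for i in range(lead_in_frames + 1):
        frame = cue_frame - lead_in_frames + i
        t = i / lead_in_frames  # 0.0 at start of lead-in, 1.0 on cue
        # Linear travel from the right edge to the static bar's position.
        positions[frame] = round(frame_width - (frame_width - target_x) * t)
    return positions

# A 2-second lead-in at 24 fps on a 1920-wide frame, static bar at x=960.
pos = streamer_positions(cue_frame=480, lead_in_frames=48,
                         frame_width=1920, target_x=960)
print(pos[432])  # -> 1920 (bar starts at the right edge)
print(pos[480])  # -> 960  (bar lands on the static bar at the cue)
```

A constant-speed lead-in like this is what makes the cue learnable: after a few takes the performer internalizes the travel time and hits the mark without consciously watching the bars.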
After a few practice runs, Paisley was all over it, and we were set to go. Unfortunately, Paisley also had a cold, which involved some intermittent sniffling, hoarseness, and 'air' in her voice, which I thought I could take advantage of in her distressed state in the film. That turned out to be both right and wrong, but ultimately worked out. Over those two days, we recorded nearly 400 takes of all the shots in the final cut on both the lav and shotgun (simultaneously). From that raw material, I then had to review, sync, dissect, clean up, and edit together a suitable dialog track. This was one of those full-stop moments, where I ceased work on the film to study before proceeding. I read numerous textbooks on audio post-production, including multiple editions of the same book by the same author, books on editing dialog, mixing, mastering, sound effects, sound reproduction, and surround sound. I've included a complete list of my favorite texts and web resources at the bottom of this page. One of the most valuable texts to me for this phase was Dialogue Editing for Motion Pictures, 2nd ed., by John Purcell. This excellent treatment goes far beyond just editing, and is highly recommended for any independent filmmaker involved in his/her own sound design.
I also spent a lot of time on several of the best audio post-production forums on the Internet, with my favorite being the Post Production Forum on Gearspace.com. I communicated regularly with very skilled professionals making a living in Post Production on every issue imaginable. This community is full of individuals both very capable and willing to take the time to help anyone at any level of experience. This group was invaluable.
Back in Audition, I inserted a render of the cued picture timeline from Premiere Pro as picture reference, and began pulling in all the ADR takes we recorded onto the timeline in the Multi-track Editor so that each clip in the picture had one or more audio clips beneath it on different tracks, synced to the picture. Although time consuming, this was not difficult. I then auditioned each take against the others for each clip, marking the ones that had the best performance and feel, and marking multiple takes which, if combined, could be edited into an even better (or even different) performance.
Once I had each shot's audio edited onto a final version track, I replaced the picture reference with the original locked picture and edited the audio clips together on a new track, i.e., with cross-fades, etc. It wasn't quite that simple, since there was more editing, syncing, playing with the handles on each audio clip, and various track and clip effects added to end up with a dialog track that felt right, was consistent from shot to shot, and synced to picture exactly. Bad sync is one of my pet peeves and I spent extra effort to ensure sync was the best it could be across all my diegetic sounds (However, I discovered later in my journey that no matter how hard we filmmakers might try, desktop video players are notorious for introducing sync errors and even $100,000 theater systems can produce sync errors if not configured correctly). Thinking they might be useful, I created Markers in Audition at the beginning of each individual shot on the timeline. These references proved invaluable throughout the remainder of Post.
Audio Post-Production - Foley
Although at the time I believed my dialog (or DX) track was complete, it went through numerous iterations of tweaking as I progressed through Post and, indeed, was one of the last components tweaked before I decided Dead Run was finished (ha ha). But the film, with just picture and DX was sorely lacking in impact, so I moved on to Foley. Another full-stop moment. I had read about the art of Foley for years, but never had the opportunity to perform it, and now I had the chance. The first thing I did was go through the film multiple times creating a spotting list, where I noted various Foley elements that I needed for each shot, listing Marker number (i.e., shot), brief shot description, sounds needed, materials making those sounds, notes, and what Foley pass (steps, cloth, body) each would belong to. According to legend, Ben Burtt created the laser blaster sounds of Star Wars by hitting an antenna guy wire with a hammer. In my first Foley attempt, however, I chose instead to use the actual materials making the sounds (since they actually existed in my universe). So I strapped on an empty backpack and headed out to location and collected samples of moss, twigs, dry & damp leaves, both dry and green spruce branches, and lichen-covered rock. I collected some pea gravel, birch bark, jeans, tennis shoes, and the tank from wardrobe at home.
Once again, I converted the spare bedroom adjacent to my office into a Foley pit, similar to my recording booth for ADR (see photos below). It was there that I tested, experimented, and ultimately recorded the sounds that gave life to my character as she moved through her environment. I used the same cued picture I used for ADR sessions, but this time I rendered it out of Premiere Pro and loaded it into Audition as picture reference, using Audition as the cueing/recording platform. I began with the steps pass by recording, on my knees with sneakers on my hands, multiple takes of each and every footstep on screen, in every shot, using the various materials as appropriate: damp leaves, lichen, moss, dry leaves/twigs, and gravel. Wherever you could not see her feet, there still had to be steps, so I had to watch the legs and upper body motion to anticipate when steps would fall and create them, and the timing ultimately had to work out to sync later when the feet once again became visible. There was a rhythm that you could see and feel. This was especially important for the running shots. Yes, I tried it with the shoes on my feet, but wearing them on my hands worked better for me.
I repeated this process in two additional passes: cloth and body. The cloth pass captured the sounds of her jeans swishing and her clothes rubbing against spruce branches, or lichen-covered rocks, or sliding across the ground covered with leaves and moss. The body pass captured the sounds her body made in the environment, like when she put her hands on birch bark, or slid them across lichen-covered rocks, or when she moved her hands across her jeans, or when her arm rubbed against her tank, or she leaned back to rest with her head against a tree. All of these sounds might seem trivial, but when you turn off these tracks, you notice immediately that something just isn't right in the world. Listen for these details in the final movie.
After recording nearly 700 takes of Foley performed in sync with picture, I loaded them into Audition in a new session in the same fashion I did for ADR. After much auditioning, editing, and difficult decisions, I ended up with a proper Foley track, well, actually 22 of them. I decided to keep major elements separated on individual tracks in the master session to allow for additional control over individual Foley component levels in the final mix.
Audio Post-Production - Preparing for the Mix
Now, with an edited picture, including ADR and Foley, Dead Run was beginning to take shape. I knew by this point that I wanted to create a 5.1 soundtrack for my film, which some might think is overkill for a 9-minute short destined for the festival circuit (many of which won't even play 5.1). But I knew the effort was worth it when, during one of my DCP tests, I watched a guy in the theater audience jump and literally look left, just as the POV image on screen did, when a twig snapped over his left shoulder on the surround speakers during a tense scene. However, even though I did have a consumer-level 5.1 surround system attached to my HP, from my research I knew that it was insufficient for designing a 5.1 soundtrack for theatrical exhibition. I also knew that mixing in 5.1 is far more complex than mixing in stereo, which can effectively be done using a good pair of headphones. I needed a proper way to monitor and a calibrated room. Another full-stop moment, by far the longest.
This is where things got real. Being a physicist, and knowing more than a little about sound, I knew I couldn't just brush off the feeling that this had to be done right. Floyd Toole's book, Sound Reproduction, Loudspeakers and Rooms, was an invaluable resource, as was he on more than a few occasions when he graciously provided direct advice to me. Tomlinson Holman's (THX's namesake) book, Surround Sound, Up and Running, 2nd ed., is great for anyone wishing to work in surround sound. These and other texts helped me understand what I needed to adequately monitor and mix my film in my particular room. Small rooms are not ideal, which is why most professional dub stages are the size of small to medium movie theaters, with large, full-range speakers behind real cinema screens. All that makes a difference in the sound.
Reflections. It's all about reflections, and it gets really messy in small rooms. Low-frequency standing waves also wreak havoc in small rooms, and must be dealt with (or at least understood) if accurate monitoring (even in stereo) is to be achieved. With 5.1, the complexity is amplified by the addition of three more loudspeakers and a subwoofer. Placement within the space becomes critical (and I honestly didn't know just how critical until this exercise), as is control of reflections and standing waves. I researched and settled on a 5.1 monitoring system from JBL that fit my needs perfectly. Once it was ordered, received, and installed, I knew I was still not ready to start mixing. Long story short, after weeks of measurement microphones, SPL meter readings, standard test tones, room measurements using Room EQ Wizard (a very cool, free app), moving my sub around the room (sometimes by as little as 6 inches), and completely rearranging my space, I arrived at a room configuration that would provide the best possible monitoring environment for surround sound mixing.
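Those low-frequency standing waves follow directly from room geometry: each axial mode sits at f = n·c/2L for a dimension of length L. A small sketch (the room dimensions below are illustrative assumptions, not my actual room) shows why small rooms pile up problem frequencies right in the audible bass range:

```python
import math  # not strictly needed here, but typical for acoustics sketches

# Axial room-mode frequencies: f = (c / 2) * (n / L) per room dimension.
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def axial_modes(length_m, count=4):
    """First `count` axial standing-wave frequencies (Hz) for one dimension."""
    return [round(SPEED_OF_SOUND / 2 * n / length_m, 1) for n in range(1, count + 1)]

room = {"length": 5.0, "width": 3.6, "height": 2.4}  # metres (assumed)
for dim, size in room.items():
    print(dim, axial_modes(size))  # e.g. length -> [34.3, 68.6, 102.9, 137.2]
```

Modes from different dimensions that land near one another reinforce the same peaks and nulls, which is one reason moving the sub by mere inches changes the measured response so much.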
The results were absolutely stunning. One must realize, however, that as with even the best calibrated dub stages used on the highest budget features, these stunning results only occur within a relatively small sweet spot. In my room, this is basically where my head is when I sit in my chair with my back straight, exactly 48" from every monitor. I used the ITU-recommended configuration, where the Left and Right speakers are 30 deg off axis from Center, and the Left Surround and Right Surround are between 110 and 120 deg off Center on either side, all placed on a circle (in my case, with a radius of 48 inches). Since low frequencies are non-directional, subwoofer placement was dictated by the room and its nodal structure.
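The ITU layout is easy to turn into tape-measure numbers. A sketch converting the angles above into positions on my 48-inch listening circle (the channel names and sign convention are mine):

```python
import math

# ITU-R BS.775-style 5.1 layout: angles off the center axis; positive angles
# are to the listener's right. Radius matches the 48-inch circle described above.
RADIUS_IN = 48.0
ANGLES_DEG = {"C": 0, "L": -30, "R": 30, "Ls": -110, "Rs": 110}

def speaker_position(angle_deg, radius=RADIUS_IN):
    """(x, y) in inches from the listening position; +y points at the screen."""
    a = math.radians(angle_deg)
    return (round(radius * math.sin(a), 1), round(radius * math.cos(a), 1))

for name, ang in ANGLES_DEG.items():
    print(name, speaker_position(ang))  # C -> (0.0, 48.0), R -> (24.0, 41.6), ...
```

Negative y values for the surrounds confirm they sit behind the listening position, as the 110-120 degree recommendation intends.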
Here's why this is important. If your speakers are turned up too loud when you mix, you'll pull your faders down during the mix so it sounds right to your ears, which means lower (digital) levels in your audio files (and ultimately lower electrical levels at the speakers in the theater). Someone listening to your mix will then have to turn their volume up to hear the lower levels you recorded. In other words, the levels you've placed in the file become arbitrary: they depend completely on the volume set on your own monitors while you mix, and bear no relation to any standard reference that those listening downstream can use to replicate your intent. This is why movie theaters and dub stages work to standards and calibrate their systems to those standards. It is also why so many low-budget independent films are all over the map in terms of volume levels when shown at festivals.
The ultimate goal of such room calibration is to conform to the same SMPTE standards agreed to and used by theaters worldwide, which will then (do much to) ensure that what you produce in that room will translate accurately to a calibrated theater. Part of this is level calibration: a standardized pink noise signal at -20 dBFS is sent to each monitor in turn, and that monitor's volume is adjusted until a sound pressure level meter at the listening position reads 85 dB C-weighted, slow response (dBC). SMPTE standards dictate that the surrounds are adjusted 3 dB down, to 82 dBC, for cinema. This same procedure is used in dub stages and theaters all over the world during calibration, so that everyone is working to the same standard.
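That calibration pins the mapping between digital level and in-room loudness. A sketch of the resulting relationship, assuming a room calibrated exactly to the figures above:

```python
# SMPTE-style level mapping: -20 dBFS pink noise plays at the reference SPL
# (85 dBC for screen channels, 82 dBC for surrounds, per the text above).
REF_DBFS = -20.0

def expected_spl(signal_dbfs, channel="screen"):
    """Predicted SPL (dBC) in a calibrated room for a given digital level."""
    ref_spl = 82.0 if channel == "surround" else 85.0
    return ref_spl + (signal_dbfs - REF_DBFS)

print(expected_spl(-20.0))             # reference tone on a screen channel: 85.0
print(expected_spl(-31.0))             # a quieter element, e.g. soft dialog: 74.0
print(expected_spl(-20.0, "surround")) # same tone on a surround: 82.0
```

Because the mapping is fixed, a level written into the file means the same loudness on every calibrated system downstream, which is the whole point of the exercise.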
This worked great until rogue filmmakers and producers wanted to 'stand out' and started mixing their trailers and sometimes their movies louder than necessary. Like the loudness wars plaguing popular music today, where dynamic range (DR) has all but been squashed out of existence through compression, these ridiculously loud trailers and films prompted complaints from patrons and resulted in movie theaters turning down their master faders to non-standard levels. This caused complaints about properly mixed movies being played too soft, where dialog became inaudible. Quite the mess that is the source of much lamenting on Gearspace and other post-production forums frequented by sound professionals worldwide (it's unsurprising then that all three of the theaters I tested my Dead Run DCP in had their faders turned down below the SMPTE standard settings for a properly mixed soundtrack. As a result, my soundtrack was quieter than designed and what I heard in my room).
Typical dialog in a theater sits at about 60 dB, and peak sound levels I've recorded have reached 100 to 110 dB, for a 40-50 dB DR, which is much, much higher than any music you'll hear today. That's what gives movie soundtracks such appeal, emotion, and excitement. It is hoped by all that loudness wars won't ruin the movie-going experience like they have for radio. The best we, as filmmakers, can do is ensure the mix is right when it leaves our hands, and hope theaters maintain their calibrated systems. Another difficulty for indie filmmakers on the festival circuit is that while some festivals require DCPs to maximize the audience experience, others can only play Blu-ray discs or DVDs from a laptop connected to a consumer projector; not the way you'd like your hard work to be seen (or heard). DCI standards dictate that DCPs cannot be stereo (2.0), but must be either mono or 3.0 and up. However, most festivals that don't accept DCP play their BDs and DVDs in stereo, which is never ideal for motion pictures due to problems with phantom centers and people sitting closer to one speaker than the other, causing the diegetic sounds to be disconnected from the image. But if you want a wider, more interesting soundfield, it probably beats mono.
Given the variability of festival exhibition requirements, I decided to hedge my bets and produce both a 5.1 sound mix as well as a 2.0 stereo mix which I could put on BDs or digital files requested by many festivals. The stereo would also be useful ultimately for trailers, on-line screeners festivals use to judge entries, and either VOD or free-to-the-world release on Vimeo following a festival run. There are ways (and formulas) that can be used to mix down a 5.1 soundtrack to stereo, all of which make compromises. Most eliminate the LFE (low frequency effects) channel (the .1 in 5.1) completely. I knew that I wanted to put the LFE channel to good use in Dead Run, so I chose to create a separate 2.0 mix independently, but still based entirely on the 5.1 soundtrack. More on the technique I used later.
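For reference, one common ITU-style fold-down formula weights the center and surrounds at -3 dB (a 0.707 gain) into each side and drops the LFE entirely. This is not the custom mix I describe next; it is the kind of compromise that pushed me toward making my own:

```python
# A common ITU-style 5.1 -> 2.0 fold-down (illustrative, not my final method):
#   Lo = L + 0.707*C + 0.707*Ls
#   Ro = R + 0.707*C + 0.707*Rs
# The LFE channel is simply discarded, which is exactly what I wanted to avoid.
C_GAIN = SURROUND_GAIN = 0.707  # -3 dB

def fold_down(l, r, c, lfe, ls, rs):
    """Fold one 5.1 sample frame to stereo; note the LFE value is unused."""
    lo = l + C_GAIN * c + SURROUND_GAIN * ls
    ro = r + C_GAIN * c + SURROUND_GAIN * rs
    return lo, ro

print(fold_down(0.2, 0.2, 0.4, 0.9, 0.1, 0.1))  # the 0.9 LFE sample vanishes
```

Whatever energy lived in the LFE channel (here the 0.9 sample) simply disappears from the stereo result, which is why a separate, deliberate 2.0 mix can sound so much better than an automatic fold-down.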
Audio Post-Production - The Mix: 5.1
First, back to the mix. Full-stop over. Armed with a head full of new knowledge, and a properly calibrated mix room, I jumped back into Audition (in fact, I spent most of my Post time in Audition). I did another spotting session, making notes of sound effects (SFX), background/atmosphere sounds, LFE sounds, music, and drones I needed to fill out my mix. Once I knew where I needed certain types of sounds, I set out to determine which I could create myself and which were already in existence that I could use. Audition comes with some royalty-free SFX, a couple of which I used. I looked at a lot of SFX libraries that can be purchased, which were good, but expensive. There are options for purchasing individual SFX on-line cafeteria-style, which is an attractive option.
I ended up recording the river myself on another hike out to location, and processed it heavily to give it greater weight and make it more menacing. I also, during my Foley sessions, recorded some blood-squishing sounds and twig snaps that I used. The final sounds I created myself were the growls of the predator: I recorded my voice in numerous takes, then pitched it down and processed it using a free SSL plug-in called X-Orcism. With the exception of a couple of drones provided by Adobe, I sourced the remainder of my SFX from freesound.org.
Freesound is a community of artists who love creating sounds and sharing them with the world. Most are CC0 (Creative Commons Zero: free to use without restriction), and many others are free to use commercially, requiring only attribution (for the sounds I used, you'll find that attribution in the section just prior to this one above). I spent many hours on-line sampling sounds, downloading potential candidates for Dead Run. I spent months in the mix trying the various sounds, positioning, layering, editing, and cleaning up the sounds themselves, until I arrived at a 5.1 soundtrack that provided the immersive soundscape and ambiance I envisioned. This included flies buzzing, a heartbeat, birds flapping, twigs snapping, a raging river, crickets, thunder, atmosphere, drone beds, stabs, booms, and body-rumbling LFE, oh, and a little music at the end.
Audio Post-Production - The Mix: 2.0
As for stereo, I had to figure out how to create an effective stereo mix from my 5.1 mix. Initially I thought I would just create another stereo session in Audition, but given the clip and track effects and routing that already existed in the 5.1 mix, it became evident that a new, separate session would require a lot of duplication, making maintenance a nightmare. After much consternation, I developed a method using sends on each track in Audition to create an entirely separate mix inside my existing 5.1 session, terminating on a stereo bus. This allowed me to set the contribution of each track to the stereo mix independently from its contribution to the 5.1 mix, through send levels and panning. It was a bear to set up initially, but once established it provided tremendous flexibility, and it utilized all of the existing previously-edited clips on the 5.1 timeline, including both clip and track FX. In addition, most later changes to the 5.1 mix required no additional effort to ripple correctly to the 2.0 mix.
I went back and repeated this tactic for the 5.1 mix so that it all channeled similarly through a 5.1 bus; both the 2.0 and 5.1 mixes now flowed through two busses to the 5.1 master track. I then simply had to mute the 5.1 bus to have my 2.0 mix appear on the master track, or vice versa. When I mixed the 2.0 bus down, I would get a 5.1 file with information only in the L and R channels, all others being silent. After converting that file to stereo and increasing gain to compensate for Audition's automatic pan-law adjustment during conversion, I had my custom 2.0 mix. This approach allowed me to send even the LFE information to the L and R channels of my stereo mix which, granted, will be lost on systems incapable of producing such low frequencies. But on today's consumer bass-managed systems, those low frequencies will be stripped out and sent properly to a sub. Theatrical speakers (at least the screen speakers) are supposed to be full range anyway (down to 40 Hz), which should reproduce most of these sounds even in stereo (although I found this is not always true if the theater is out of calibration, or if they've turned down their bass to prevent crosstalk between adjacent theater rooms).
Speaking of bass management, I found out the hard way what traps await you if you use it in your mix, even for 5.1. In my room, my mix sounded fantastic, with booming bass that vibrated your chest. In early theater tests, however, I just wasn't feeling the bass. After much thought, I figured out a likely cause. In my room, my system strips anything below 80 Hz from the L/C/R channels and sends that information to the sub. In the theater, bass management isn't implemented, which is why the screen speakers are supposed to be full range. I was missing my low end, so clearly they weren't, and they probably produced little below 50-60 Hz. On a hunch, and as a solution, I set up additional bus tracks and routing, with low-pass and high-pass filters in combination, to manually strip low frequencies from tracks carrying them and send anything below 80 Hz to the LFE channel. On my next theatrical test, the difference was stunning. THERE was my bass!
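The routing fix above is essentially a crossover: low-pass each bass-heavy track into an LFE feed while the complement stays on the main channel. A crude first-order sketch in pure Python (real bass management uses much steeper filters, e.g. 4th-order Linkwitz-Riley; this only illustrates the split):

```python
import math

SAMPLE_RATE = 48000
CROSSOVER_HZ = 80.0  # the crossover point described in the text

def split_channel(samples, fs=SAMPLE_RATE, fc=CROSSOVER_HZ):
    """Return (main_highpassed, lfe_lowpassed) for one channel's samples."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * fc / fs)  # one-pole coefficient
    low, lows, highs = 0.0, [], []
    for x in samples:
        low += alpha * (x - low)   # one-pole low-pass state
        lows.append(low)           # content routed to the LFE bus
        highs.append(x - low)      # complement stays on the main channel
    return highs, lows

# A 40 Hz tone should land mostly on the LFE feed, not the main channel.
tone40 = [math.sin(2 * math.pi * 40 * n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]
highs, lows = split_channel(tone40)
print(max(map(abs, lows)), max(map(abs, highs[1000:])))
```

Run against a 40 Hz tone, the LFE feed carries nearly the full amplitude while the main channel is strongly attenuated, which is the behavior the theater's full-range screen speakers were supposed to provide on their own.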
Feeling pretty good about the mix, and realizing that I'd return to do some polishing at the end, I turned my attention back to the picture, where I would be tasked with correcting flaws in each clip related to color, brightness/contrast, and dynamic range, as well as deciding on an overall color grade for the film.
Color Correction and Grading
This part of Post was by far the most difficult for me, not only because I lacked experience doing it, but also because there are so very many ways things can go terribly wrong in the process of getting your raw footage to the theater screen. Speedgrade is Adobe's color grading application, which it acquired from IRIDAS in 2011, just prior to the release of CS6. Adobe hastily integrated Speedgrade into the CS6 suite, and by doing so introduced bugs and instability. Prior to Adobe's acquisition, Speedgrade sold for $20,000 per seat. It is unclear what Adobe plans to do with Speedgrade after removing key integration features from it, like Dynamic Link, in the latest releases. There is some speculation that parts of Speedgrade will be consumerized and dumped into Premiere Pro, and Speedgrade as a separate product will die.
I had two choices for my grading application: Speedgrade and DaVinci Resolve, which was available in a free version that could handle up to 2K footage. I began in Speedgrade, since I could create a Speedgrade project directly from Premiere Pro through Dynamic Link (the part they removed in versions beyond CS6). Speedgrade then launched with a timeline already in place, cut up by shot, and ready for color correction and grading. After some fits and starts dealing with bugs, crashes, and questions about downstream workflow, I took a detour, downloaded Resolve, and read its 400+ page manual to see what it offered. Each application had advantages over the other, and they use different paradigms: Speedgrade is layer-based, like Photoshop and pretty much every Adobe app, whereas Resolve is node-based.
At one point, after spending much time working with Patrick Palmer, Adobe's Speedgrade product manager, on some of the issues I was experiencing, I decided to give Resolve a go. I spent a week trying to find a viable way to get my timeline into, and looking ahead, ultimately back out of, Resolve in a way that would work with my existing workflow without a ton of additional work. Imports into Resolve required that I strip my timeline in Premiere to a single track, with no effects or any other funny stuff, which wasn't easy, but which I did. Still, Resolve would have required me to manually relink most of my footage, and undo what I did to it back in Premiere Pro before I could finish. So I punted and went back to Speedgrade which was waiting for me with timeline intact, and learned to work around the issues.
I could not find many great books on color correction and grading, and most of the YouTube videos you find are worthless. The best text I found was Alexis Van Hurkman's Color Correction Handbook, 2nd ed., which includes comparisons of all the latest professional grading applications, including Speedgrade and Resolve. Although I was familiar with monitor profiling and calibration, and owned both a spectrometer and a photometer for that purpose from my photography work, there were other things involved that I had not dealt with in photography, such as video levels, and the vectorscopes and waveform monitors used to assess video signal levels and color saturation. I was familiar with histograms, and the waveform and vectorscope displays were straightforward, but Speedgrade (CS6) used a vectorscope that differed from every other application on the market, with no apparent documentation, which was a bit of a hindrance. The latest version of Speedgrade now includes a 'standard' vectorscope.
I spent a lot of time on Lift Gamma Gain (liftgammagain.com) which calls itself The Colorist and Color Grading Forum. There I found working industry professionals who were very knowledgeable, sometimes opinionated, but always graciously helpful and informative. I gained much insight into how to properly calibrate my monitor so that my color correction and grading, just like with sound, would translate to the theater screen standards set by SMPTE and DCI for DCPs. Just like with sound, nearly every movie in the world that is digital (i.e., not actual film) is transmitted on a hard drive and shown as a DCP, defined by a set of standards that strictly define how these digital files are to be constructed. After much study, I was able to successfully profile and calibrate my monitor... and be confident that it was correct (or good enough). One program that was instrumental was DisplayCAL, a free application that walks you through the profiling and calibration, testing, and verification processes. It will also create LUTs that you can then use in your grading application, or any application like Premiere Pro that is not color-managed under Windows and therefore will not utilize your new monitor calibration to show you correct color.
After profiling my Dell U3011 monitor, I created a LUT that I then used as my calibration LUT in Speedgrade, which is specifically designed for this purpose. It takes the data in Speedgrade representing your footage, assumed to be Rec 709, and uses the LUT to translate it into your monitor's representation of Rec 709. Otherwise, just like with sound, if my monitor was adjusted via knobs to a severe blue tint, I would be adding a lot of yellow to remove that blue tint on my screen, but when that color data is ultimately projected at a theater, it will be far too yellow, because that is what the data actually is. Your badly calibrated monitor will prompt you to make the wrong adjustments for a properly calibrated theatrical screen.
During my 6 screen tests, although my color was very close to what I remember in my room, the brightness was not, and I brightened the grade a little more after each test. Since Dead Run was to be set at dusk, I feared having the film be too bright on screen in a darkened theater, and daylight just wouldn't be spooky. So I erred (too much so) on the side of dark. After these iterations, I now have a better feel for how my room translates to a theater screen (although that also depends on how well the projectors have been calibrated, how old the lamps are, how low the house lights go, etc.). Again, you can only do what you can do, and hope for the best. The rest is out of your hands. Professional Color Stages use $40,000 cinema-grade projectors and large screens to simulate the theater environment to eliminate as much variability as possible. We indies don't have such luxury. The good news is that I was able to get good, consistent results with my Dell high-end consumer monitor in three different theaters where it was tested prior to release.
Once I had my footage corrected for both color and brightness/contrast, for shot-to-shot consistency, and the overall grade settled on, I could then output my footage and pull it back into Premiere Pro to add titles, credits, etc., and finish for final delivery. This involved outputting each shot as a DPX file sequence that I then inserted on my timeline back in Premiere Pro above the original footage. Since it was the same footage basically, just graded, it matched up perfectly and was an easy import. With that done, every time I modified the grade in Speedgrade, I simply needed to re-export from Speedgrade, and the changes automatically appeared in Premiere Pro. By this time, I had my 5.1 mix from Audition on an audio track in Premiere Pro under my footage in a sequence I called DCP Master, so I could watch the film in finished form there. I had a second sequence for the stereo version, where I used a nested version of the DCP master sequence (picture only), paired with a stereo mix file from Audition, which was my master for the on-line screener and future BD creations through Encore.
Creating the Digital Cinema Package (DCP)
Now that I had my movie finished (again, ha ha... "films are never finished, only abandoned"), I needed to create a DCP. There are a lot of commercial entities out there willing to create your DCP for you from a digital file you provide them straight from your NLE, for a fee, some as low as $40-50 per minute of run time. So I could have had this done professionally for around $500, but what fun would that be? For the DIYer, there are a few ways to go about creating your own DCP, and I researched them all. Greg Popp, a contributor on LGG, saw one of my posts and reached out via PM with his phone number so I could call him to discuss in person. He provided great personal insight into creating DCPs for festivals, something he had just done himself using the same Adobe applications.
I later found that the information Greg provided was very consistent with what I had determined, based upon my own research, was perhaps the best way (at the moment) to create a DCP. It's not at all straightforward, but it is logical. Basically, it involves installing a plug-in called J2K from fnord (fnordware.com) that allows After Effects to read/write JPEG2000 files, the required frame format in DCPs. Once it is installed, from After Effects you import your DCP Master sequence from Premiere Pro via Dynamic Link, reinterpret the footage as 24 fps (assuming your Premiere timeline is 23.976 fps) and Rec 709, and export it as a JPEG2000 file sequence. The tricky part is the color management that has to take place. Although there are methods to do the color space conversion (from Rec 709 to DCI XYZ) outside of After Effects, I determined that using After Effects' color management features was the most robust. This involves setting the color space in After Effects to DCDM X'Y'Z' (Gamma 2.6) 5900K (by Adobe), which, upon import, forces a conversion of the Rec 709 footage into the proper DCI XYZ color space, making it look a little pinkish. That's OK, since this is the color space standard digital theater systems expect, and they will correctly convert it back to the right colors during projection.
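Under the hood, that conversion is a linearization, a matrix, a luminance normalization, and a 2.6 encoding gamma. Here is a per-pixel sketch of the commonly described DCI recipe; the exact Adobe profile (including its 5900K adaptation) may differ in detail, so treat this as illustrative only:

```python
# Sketch of a Rec 709 -> DCDM X'Y'Z' conversion for a single pixel.
# The 2.4 source gamma and 48/52.37 luminance normalization follow the
# commonly cited DCI recipe; these are assumptions, not After Effects' exact math.
RGB_TO_XYZ = [  # Rec 709 primaries, D65 white point
    (0.4124, 0.3576, 0.1805),
    (0.2126, 0.7152, 0.0722),
    (0.0193, 0.1192, 0.9505),
]

def rec709_to_dcdm(r, g, b, source_gamma=2.4):
    """Map display-referred Rec 709 values [0..1] to 12-bit X'Y'Z' code values."""
    lin = [c ** source_gamma for c in (r, g, b)]                  # linearize
    xyz = [sum(m * c for m, c in zip(row, lin)) for row in RGB_TO_XYZ]
    # Normalize to DCI reference white luminance, encode with gamma 2.6, quantize.
    return [round(4095 * (max(v, 0.0) * 48.0 / 52.37) ** (1 / 2.6)) for v in xyz]

print(rec709_to_dcdm(1.0, 1.0, 1.0))  # reference white
```

Feeding in reference white shows why XYZ-encoded frames look "a little pinkish" on an RGB display: the three code values come out unequal, with Z highest, even though the source pixel was neutral.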
After exporting your JPEG2000 picture files, you must do the same for your audio. DCPs require a separate mono WAV file for each channel to be played, in my case six: L, R, C, LFE, Ls, Rs. In Audition, once you mix down your 5.1 mix to a single six-channel file in the Waveform Editor, it is a simple matter to extract those channels into separate files; Audition has a menu option to do just that, and it appends _L, _R, etc., to each file name for you, indicating the proper channel. Put your six audio files and your thousands of JPEG2000 files in a directory and create the DCP. For this task, I used an open-source program called OpenDCP to package the picture and audio files in a DCI-standard format and to create the requisite XML files to accompany them. OpenDCP has the capability to both convert TIFF files into JPEG2000 format and convert the color from Rec 709 to DCI XYZ, but I believe After Effects is not only faster, but more accurate. Some people on the forums would agree, others would not. It worked perfectly for me. Fnord also has a plug-in that will do the color conversion in both directions, and works in both After Effects and Premiere Pro. Its color conversion varies only slightly from the After Effects approach, but it is different; I found it to be slightly darker.
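If your audio editor lacks such a menu option, the deinterleave itself is simple. A sketch using Python's standard wave module; the channel order and filename suffixes here are assumptions matching the list above:

```python
import wave

# Deinterleave a six-channel WAV into one mono file per channel, mirroring
# what Audition's channel-extract option does. Suffix order is assumed to be
# the SMPTE-style L, R, C, LFE, Ls, Rs used in the text.
SUFFIXES = ["_L", "_R", "_C", "_LFE", "_Ls", "_Rs"]

def split_channels(path_in, base_out):
    with wave.open(path_in, "rb") as w:
        n_ch, width, fs = w.getnchannels(), w.getsampwidth(), w.getframerate()
        raw = w.readframes(w.getnframes())
    frame = n_ch * width  # bytes per interleaved frame
    for ch in range(n_ch):
        # Collect this channel's bytes from every interleaved frame.
        data = b"".join(
            raw[i + ch * width : i + (ch + 1) * width]
            for i in range(0, len(raw), frame)
        )
        with wave.open(f"{base_out}{SUFFIXES[ch]}.wav", "wb") as out:
            out.setnchannels(1)
            out.setsampwidth(width)
            out.setframerate(fs)
            out.writeframes(data)
```

Calling `split_channels("mix.wav", "mix")` would produce `mix_L.wav` through `mix_Rs.wav`, ready for packaging.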
In OpenDCP, I needed only the packaging components of the program, since I had done everything else in After Effects. First I packaged the JPEG2000 files into a picture MXF file, then packaged the six WAV files into a sound MXF file. Next I created a properly formatted title for the DCP (which encodes a lot of information about the film in a standardized string of characters) using its handy title generator. OpenDCP then created a DCI-compliant DCP consisting of six files: the 2 MXF files, plus 4 XML files that contain information about the DCP, consistency checksums, info on how it should be played, etc. These six files are then copied to my DCP drive, ready for delivery to theaters. OpenDCP can also handle subtitles, if your film requires them.
DCPs are delivered on Linux-formatted hard drives installed in a standard carrier that slides into standard hardware in theaters worldwide. Did I say these were standard? In fact, CRU, the maker of the standard carriers (the DX115) and receivers, now provides preconfigured DCP kits consisting of carriers, with or without preformatted drives installed, and Peli cases, for prices much better than you could manage building your own. Formatting these drives is not something you can do natively on either a Mac or a PC; it requires Linux. Luckily, I once worked as a Unix sysadmin. Also, YouTube has already figured all this out for us, and good tutorials exist, especially one from Cine Tech Geek. Similar to what James describes in his video on cinetechgeek.com, I use a bootable ISO of Ubuntu Linux on a USB stick to boot my HP, after which I can mount, partition, format (correctly), and copy my DCP files over directly from my Windows NTFS-formatted internal drives to the CRU drive, ready for delivery to festivals. There are ways to do this on the PC and Mac using third-party applications, but your mileage may vary. Linux is by far the safest way. I've tested 6 different DCPs, 6 times in 3 different theaters, and none have had issues ingesting or playing.
Release into the wild
With DCP in hand, I was ready to release my creation into the wild. Festivals recommend that you test your DCP before submitting, or have it tested professionally. You really should test it in a theater if at all possible. At a minimum, test it, 15 seconds at a time, with the free downloadable demo version of easyDCP Player from Fraunhofer. The demo version can even do the color management and convert the DCP back to Rec 709 so that it appears on your computer monitor as you designed it. You can therefore test not only your color, but also your sound, in full surround, with this demo.
Behind the Scenes (BTS)
Here are four videos I created using BTS footage from both pre-production and production of Dead Run. I hope you find them informative and entertaining. There was never a dull moment on this film, and it was a blast to shoot.
Photos of Paisley's makeup taken for continuity between shooting days.
The Dead Run Foley room and pit.
Views of a portion of the 100+ track Dead Run audio timeline and color grading suite.
An Introduction to Film Analysis, by Michael Ryan and Melissa Lenos
Anatomy of Horror, The Masters of Occult Fiction, by Glen St. John Barclay
Audio Post Production In Your Project Studio, by Casey Kim
Audio Production & Post Production, by Woody Woodhall
Between The Scenes, by Jeffrey Michael Bays
Cinematic Storytelling, by Jennifer Van Sijll
Cinematography, by Mike Goodridge & Tim Grierson
Cinematography Screencraft, by Peter Ettedgui
Cinematography: Theory and Practice, 3rd ed, by Blain Brown
Color Correction Handbook, 2nd ed, by Alexis Van Hurkman
Continuity in Film and Video, 2nd ed, by Avril Rowlands
Directing: Film Techniques and Aesthetics, by Michael Rabiger & Mick Hurbis-Cherrier
Directors Close Up, 2nd ed, by Jeremy Kagan
Film Directing: Shot by Shot, Visualizing from Concept to Screen, by Steven D. Katz
Film Editing: Great Cuts Every Filmmaker & Movie Lover Must Know, by Gael Chandler
How To Shoot Video That Doesn't Suck, by Steve Stockman
In The Blink Of An Eye, by Walter Murch
Lighting for Digital Video & Television, 2nd ed, by John Jackman
Master Shots, Vol. 3, by Christopher Kenworthy
Mastering Audio: The Art and the Science, 3rd ed, by Bob Katz
Mixing & Mastering, by Bill Gibson
Motion Picture and Video Lighting, 2nd ed, by Blain Brown
My First Movie: Twenty Celebrated Directors Talk About Their First Film, edited by Stephen Lowenstein
Practical Recording Techniques, 4th ed, by Bruce and Jenny Bartlett
Producing & Directing The Short Film & Video, 5th ed, by David K. Irving & Peter W. Rea
Producing Great Sound for Film & Video, 4th ed, by Jay Rose
Rebel Without A Crew, by Robert Rodriguez
Setting Up Your Shots: Great Camera Moves Every Filmmaker Should Know, 2nd ed, by Jeremy Vineyard
Sound Reproduction, Loudspeakers and Rooms, by Floyd Toole
Storytelling Techniques for Digital Filmmakers, by Ross Hockrow
Surround Sound, Up and Running, 2nd ed, by Tomlinson Holman
The Art of Mixing, 2nd ed, by David Gibson
The Art of Short Form Content: From Concept to Color Correction, by Bryan Cook
The Beginning Filmmaker's Guide to Directing, by Renée Harmon
The Conversations: Walter Murch and the Art of Editing Film, by Michael Ondaatje
The DSLR Filmmaker's Handbook, by Barry Anderson & Janie L. Geyen
The Eye is Quicker, by Richard D. Pepperman
The Filmmaker's Eye: Learning (and Breaking) the Rules of Cinematic Composition, by Gustavo Mercado
The Filmmaker's Handbook: A Comprehensive Guide for the Digital Age, 3rd ed, by Steven Ascher & Edward Pincus
The Indie Producer's Handbook: Creative Producing from A to Z, by Myrl A. Schreibman
The Innocence of the Eye: A Filmmaker's Guide, by Ed Spiegel
The Mixing Engineer's Handbook, 3rd ed, by Bobby Owsinski
The Sound Effects Bible, by Ric Viers
Total Directing: Integrating Camera & Performance in Film & Television, by Tom Kingdon
"Only in the hottest years do we find our men dead... and this year it grows hot. Sometimes we find our men without their skin..." wait, what? GET TO THE CHOPPA!!