sam elia

Ear Training for Sound Designers and Audio Engineers: Essential Skills for Post-Production

I go into why ear training matters, how it improves efficiency and collaboration in post-production, and the top three tools to refine your skills—whether you’re a beginner or a seasoned pro.

For sound designers and audio engineers, critical listening is the backbone of creating great mixes and sound effects. Whether you're tuning an EQ, crafting a reverb, or balancing a mix, having finely tuned ears can make all the difference. A well-calibrated system is only as good as your ears, and while proper room treatment and tuning are essential, developing your ability to hear and interpret sound accurately is equally crucial. Ear training helps you identify frequencies, recognize subtle details in sound, and make informed decisions during production. Here's an overview of how ear training can improve your craft and the top three programs across budget levels to help you refine your listening skills.

Why Ear Training is Crucial

Train your ears like a pro!

Developing critical listening skills not only improves your technical abilities but also saves time and budget in the production process. Being able to quickly identify problematic frequencies or subtle flaws in production audio reduces the time spent chasing down issues during post-production. This efficiency leads to smoother sessions and happier clients, as revisions and reworks are minimized. Additionally, well-trained ears allow you to make confident creative decisions, ensuring your production achieves its desired impact without over-reliance on guesswork or endless iterations.

Ear training isn’t just for sound designers—it’s equally beneficial for directors, editors, and producers involved in post-production. When everyone on the team has a shared understanding of sound and terminology, communication becomes more efficient and productive. A producer who can articulate, for example, that dialogue sounds boxy around 300 Hz or that a background ambience feels overly bright above 8 kHz helps audio engineers address issues faster and more effectively. This shared language reduces misinterpretation and fosters a more collaborative and streamlined workflow.

Ear training enhances your ability to:

1. Identify Frequency Ranges: Recognizing frequencies allows you to pinpoint problem areas in dialogue, effects, or music and address them efficiently.

2. Distinguish Tonal Characteristics: Understanding how different audio elements sound helps create balance and depth in the mix.

3. Detect Subtle Changes: Being able to hear subtle EQ adjustments or phase issues ensures a polished, professional final product.

4. Improve Translation Across Systems: Critical listening ensures your mixes sound great on various playback systems, from cinema speakers to sound bars.

Ear Training Programs for Every Budget

Now that I’ve established why ear training is essential, the next question is: how do you train your ears effectively? While practice with real-world projects is valuable, targeted tools and programs can accelerate your growth by providing structured exercises and immediate feedback. These programs simulate the challenges you’ll face in post-production, such as identifying problematic frequencies, balancing sound design elements, and fine-tuning your effects chain. Whether you’re a beginner or a seasoned professional, investing in the right ear training tool can help sharpen your critical listening skills and take your audio work to the next level. Here are three top-rated programs across different budget levels to fit your needs and goals.

Free Option: SoundGym

Website: SoundGym 

Overview: SoundGym offers a robust suite of ear training exercises designed specifically for audio engineers and sound designers. With a free account, you gain access to games that improve your ability to recognize frequencies, dynamics, and stereo imaging.

Benefits:

Interactive Games: Games like EQ Mirror and Filter Expert focus on identifying specific frequency boosts and cuts.

Progress Tracking: Track your skill growth over time with detailed statistics.

Community Engagement: Compete with other users and join forums for shared tips and feedback.

Why It’s Great for Beginners: SoundGym is accessible and user-friendly, making it ideal for those starting their ear training journey. The gamified approach keeps learning fun and engaging.

Mid-Cost Option: TrainYourEars EQ Edition

Price: ~$99

Website: TrainYourEars

Overview: TrainYourEars EQ Edition is a program focused specifically on helping you understand and recognize EQ adjustments. By listening to audio samples with applied EQ changes, you'll learn to identify frequency ranges with precision.

Benefits:

Customizable Training Sessions: Tailor the exercises to your skill level and specific goals.

Practical Applications: Mimics real-world scenarios you’d face during mixing and mastering.

Detailed Feedback: Immediate feedback helps you understand mistakes and improve faster.

Why It’s Perfect for Intermediate Users: This program bridges the gap between beginner exercises and real-world mixing. It’s particularly useful for engineers aiming to fine-tune their frequency recognition skills.

High-Cost Option: Golden Ears by Dave Moulton

Price: ~$300

Website: Golden Ears

Overview: Created by renowned audio engineer Dave Moulton, Golden Ears is an intensive ear training program designed for professionals. It comes as a set of CDs or downloadable files featuring exercises in frequency recognition, dynamic range, and sound timbre.

Benefits:

Comprehensive Curriculum: Covers all aspects of audio, from basic frequency recognition to advanced critical listening.

High-Quality Audio Examples: Real-world examples and professional-level recordings ensure accurate learning.

Proven Results: Used by audio degree programs and industry professionals alike to hone their critical listening skills.

Why It’s Ideal for Pros: This program is for those who are serious about mastering critical listening. It’s thorough, challenging, and provides the depth required for professional growth.

If you're truly dedicated to advancing your skills, investing in Golden Ears is a worthwhile decision. While the upfront cost is higher, its comprehensive training prepares you for even the most demanding post-production challenges, whether that's recognizing subtle tonal differences or crafting the perfect sonic balance. Serious skills require serious tools, and Golden Ears delivers exactly that.

How to Incorporate Ear Training Into Your Routine

Develop a routine that includes 15–30 minutes of ear training each day.

Daily Practice: Dedicate 15–30 minutes each day to ear training. Consistency is key! I found that pairing ear training with a morning routine helped me stay consistent, setting the tone for a productive day.

Apply What You Learn: Test your skills in real-world scenarios while mixing or sound designing. After becoming proficient with my ear training, the biggest thing I've noticed is that knowing which frequency to EQ, without sweeping to zone in on the problem, not only improves your speed but also prevents over-EQing, resulting in more natural mixes (see the simple self-quiz sketch after this list).

Experiment with Different Tools: Use reference tracks and A/B testing to develop your critical listening further. Tools like spectrum analyzers or ear training apps can give you real-time feedback, making it easier to spot patterns and progress. FabFilter's Pro-Q spectrum analyzer was super helpful for me when I was starting out.

Stay Patient: Ear training is a gradual process. Celebrate small victories along the way!
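If you want a quick way to quiz yourself outside of a dedicated app, a few lines of Python can render a noise file with a random peaking-EQ boost for you to identify by ear. This is only a rough sketch: the filter uses the standard RBJ peaking formulas, and the gain, Q, and candidate frequencies are arbitrary values you can adjust to make the drill easier or harder.

```python
import random

import numpy as np
from scipy.io import wavfile
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """Biquad peaking EQ coefficients (RBJ Audio EQ Cookbook)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
noise = np.random.randn(fs * 5) * 0.1                       # 5 seconds of white noise
answer = random.choice([250, 500, 1000, 2000, 4000, 8000])  # the frequency to guess
b, a = peaking_eq(fs, answer, gain_db=9, q=2)
wavfile.write("eq_quiz.wav", fs, lfilter(b, a, noise).astype(np.float32))
print("The boosted frequency was", answer, "Hz")            # peek only after guessing
```

Listen to the file, commit to a guess, then check the printed answer; lowering the gain or widening the list of candidate frequencies is an easy way to scale the difficulty as your ears improve.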

Conclusion

Developing your critical listening skills is an investment in your craft. Whether you’re just starting out or are an experienced professional, ear training can take your audio work to the next level. By incorporating tools like SoundGym, TrainYourEars, or Golden Ears into your routine, you’ll build the skills needed to create mixes that not only sound great but translate across all systems.

Start training your ears today, and unlock the full potential of your audio projects!


Ready to elevate your project with expert sound design? Let's bring your vision to life! Explore my services or contact me today to discuss your project requirements. Thank you for visiting sameliaaudio.com. Stay tuned for more insights and analysis on cinematic storytelling through sound.

sam elia

Affordable Ultrasonic Mic? Sonorous Objects SO.104 Review with Audio Samples

I review the Sonorous Objects SO.104, an affordable ultrasonic microphone that's perfect for field recording and sound design.

Introduction

In my previous post on ultrasonic microphones, I mentioned ordering a stereo pair of the Sonorous Objects SO.104 after reviewing its promising specifications. Now, after some hands-on testing, I’m sharing my insights into how these microphones perform in real-world scenarios. While this review isn’t a comprehensive teardown with intricate measurement data, it offers a practical look at how the SO.104 performs in typical use cases. With example recordings and real-world observations, I’ll explore whether this mic delivers on its promise as an affordable ultrasonic option for field recording and sound design.

First Impressions of the Sonorous Objects SO.104 

Unboxing Experience

The Sonorous Objects SO.104 arrived in simple, no-frills packaging: a small cardboard box with each item securely bubble-wrapped and placed in individual plastic bags. A slip of paper included the microphones’ sequential serial numbers, suggesting they are paired by production order to reduce costs. While this approach can work well due to consistent materials and assembly, it’s worth noting that no two microphones are ever perfectly identical. For critical applications like stereo recording of orchestras or drum overheads, even minimal differences can affect the stereo image. However, at this price point, I wouldn’t expect the level of quality assurance seen in high-end or boutique microphones.

Spaced pair of SO.104 omnis

Build Quality

The SO.104 features a sturdy (3D-printed?) housing that encases the Primo EM258 capsule, mounted on a high-quality Neutrik XLR connector. This capsule is known for its wide frequency response (20 Hz to 70 kHz), making it ideal for capturing both audible and ultrasonic frequencies. Key specifications include:

Sensitivity: -32 dB

Self-noise: 20 dBA

Signal-to-noise ratio: 74 dB

Maximum SPL: 115 dB

This combination allows the SO.104 to capture intricate audio details, whether it’s subtle environmental sounds or audio that will be pitch-shifted. The low noise floor further enhances its usability for field recording and sound design.

Design and Aesthetics

Compact and lightweight, the SO.104 is easy to integrate into most setups with its standard XLR connector. However, the foam windscreens from the SO.100 series require mic clips to clamp low on the mic body, near the connector. Aesthetically, the microphones feel more like prototypes or DIY builds, with a somewhat unfinished look—not a dealbreaker, but worth noting for those who value polished design.

Setup and Accessories

The SO.104’s compact design ensures easy setup in both studio and field environments. While Sonorous Objects offers some accessories, I needed to source a stereo bar and weatherproof hard case elsewhere. This might be a minor inconvenience for those looking for a complete kit. Additionally, the included foam windscreens didn’t fit snugly, so I wouldn’t recommend them for outdoor use.

Performance Evaluation

Sound Quality: The Sonorous Objects SO.104 offers a natural sound profile typical of small-diaphragm condenser (SDC) omnidirectional microphones. The mics handle dynamic range well, with minimal distortion or clipping, even at high sound pressure levels (SPL). Thanks to the low self-noise of the Primo EM258 capsule, recordings maintain clarity, capturing intricate details like subtle environmental sounds. If you’re seeking an affordable mic for ultrasonic recording, the SO.104 holds its own in capturing a broad frequency range.

Polar Patterns and Sensitivity: As omnidirectional mics, the SO.104 performs admirably across a wide sound field, with good off-axis response in the audible range. In typical use cases, such as recording ambient soundscapes, the mics deliver a balanced stereo image. Notably, when recordings are slowed to analyze ultrasonic content, the microphones reveal natural tonal shifts and nuanced textures, showcasing their ability to capture ultrasonic frequencies effectively.

Overall Observations

The SO.104 offers a flat, natural response across the audible range, consistent with what you’d expect from SDC omnis. Though not laboratory-grade in precision, the SO.104’s sensitivity and broad frequency capture make it a valuable tool for field recording for sound designers, especially when capturing sounds for extreme downward pitch-shifting.

Microphone Data

Comparison with SE8 Matched Cardioid Pair

In my previous setup, I relied on a matched pair of SE8 cardioids for stereo recordings; they are well-regarded for their flat, natural sound and excellent performance in a variety of recording scenarios. The SE8 microphones are known for their precise off-axis rejection due to their cardioid polar pattern, which is ideal for capturing focused sound sources while minimizing unwanted noise from the surroundings. Their frequency response is fairly neutral across the audible range, with a slight high-frequency boost, delivering a transparent, uncolored sound ideal for critical recordings.

SE8 Matched Cardioid Pair Specs:

Polar Pattern: Cardioid

Frequency Response: 20 Hz - 20 kHz

Sensitivity: -35 dB

Self-Noise: 16 dBA

Max SPL: 136 dB

Condenser Membrane Size: 6 mm

The SE8 mics are generally considered versatile and flat in their performance, making them a reliable choice for a wide range of recording tasks, from vocals to ambient recordings. The cardioid pattern isolates sound effectively, offering great detail in the center while rejecting off-axis sounds. In my testing, I was surprised to see these mics are capable of recording detail up to around 30kHz. 

How the SO.104 Specs Compare to the SE8

The Sonorous Objects SO.104, on the other hand, brings a new dimension to the table, specifically targeting those interested in ultrasonic frequencies. While the SE8 is a more conventional choice for general-purpose recording, the SO.104 excels in capturing both audible and ultrasonic content, making it a unique tool for field recording and sound design.

Sonorous Objects SO.104 Specs:

Polar Pattern: Omnidirectional

Frequency Response: 20 Hz - 70 kHz

Sensitivity: -32 dB

Self-Noise: 20 dBA

Max SPL: 115 dB

Condenser Membrane Size: 5.8 mm

Compared to the SE8, the SO.104 offers a wider frequency range, extending into the ultrasonic spectrum (up to 70 kHz). This makes the SO.104 a great choice for capturing fine details in both the audible and ultrasonic ranges that the SE8 cannot. However, the SO.104 sacrifices some of the SE8's isolation benefits: its omnidirectional pattern picks up more ambient noise, though that same quality makes it ideal for capturing a broader soundscape.

It is worth noting that omnidirectional microphones are often described as sounding more natural than cardioids: they capture sound from all directions, much like our ears hear a space, and as pressure transducers they don't exhibit the proximity effect, the low-frequency boost cardioids produce when placed close to a source. Cardioids also become less directional at low frequencies, since longer wavelengths are harder to reject, so low-frequency sound from the sides and rear still bleeds into the recording.

The SO.104's rated sensitivity of -32 dB is actually slightly higher than the SE8's -35 dB, so it delivers a bit more output for the same source, though the difference is small and doesn't change its ability to capture nuanced sounds, especially in the ultrasonic range. Additionally, the SO.104's condenser membrane is slightly smaller (5.8 mm vs. 6 mm for the SE8); the difference is minor, though it may affect how each mic responds to transients and subtle textures. In my testing, I was surprised to see these mics capture detail right up to the 96 kHz limit of my 192 kHz recordings, and possibly beyond what the recorder could represent.
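For anyone double-checking those sensitivity figures, converting from dB to output voltage makes the comparison concrete (this assumes both specs use the usual 1 V/Pa reference):

```python
def sensitivity_mv_per_pa(db_re_1v_pa):
    # 0 dB = 1 V/Pa, so output in mV/Pa = 1000 * 10^(dB / 20)
    return 1000 * 10 ** (db_re_1v_pa / 20)

print(round(sensitivity_mv_per_pa(-32), 1))  # SO.104: ~25.1 mV/Pa
print(round(sensitivity_mv_per_pa(-35), 1))  # SE8:    ~17.8 mV/Pa (slightly less output)
```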

Overall, the SO.104 offers a unique feature set with its ability to capture ultrasonic frequencies, making it an exciting choice for those seeking to explore the high (frequency) end of the spectrum on a budget.

Real-World Examples and Data

To better demonstrate the SO.104's ultrasonic capabilities, I've included slowed-down audio recordings in various contexts. These environmental and foley recordings show how ultrasonic details emerge when the mic captures frequencies above the audible range for typical sound design use cases. When slowed, even the tiniest nuances become apparent, further emphasizing the mic's ability to reveal hidden layers of sound. When slowed down significantly, two octaves or more, recordings made with mics that lack high-frequency extension into the ultrasonic range begin to sound muffled and dull.
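For reference, the simplest way to produce this kind of extreme pitch-down is a varispeed-style shift: write the same samples out at a fraction of the original sample rate, so everything drops in pitch and stretches in time. A minimal sketch using the soundfile library (the filenames are placeholders):

```python
import soundfile as sf

# Load a high-sample-rate recording (e.g. 192 kHz) from an ultrasonic mic.
data, sr = sf.read("so104_take.wav")        # placeholder filename

# Writing the audio at a quarter of its original rate plays it back two
# octaves lower (and four times slower), pulling content recorded up
# around 70 kHz down into the audible range.
sf.write("so104_take_down_2oct.wav", data, sr // 4)
```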

Graphs and data collected from these examples will be shared below, where you'll see how the frequency response translates to real-world recordings.

Screenshot of the foley session comparing the SO.104 (top pair) and SE8 (bottom pair) reveals the difference in the two mics' ultrasonic responses. While the SE8s perform admirably, they simply do not have the high-frequency response required for extreme pitch shifting.

Example Recordings

SO.104 Recordings (Before)

SO.104 voice recordings before down pitching showing lots of ultrasonic content above 20kHz

SO.104 Recordings (After Down-pitching)

SO.104 voice recordings after down pitching 2 octaves showing lots of detail remaining up to 20kHz, resulting in a convincing pitch shift.

SE8 Recordings (Before)

SE8 Recordings (After Down-pitching)

SE8 voice recordings after down pitching two octaves are missing detail in the upper frequencies approaching 20kHz, resulting in a muffled and dull pitch shift.

Conclusion and Final Thoughts

The Sonorous Objects SO.104 proves to be a capable and affordable ultrasonic microphone, offering quality performance at its price point. While it may not compete with high-end microphones in terms of build quality or advanced quality assurance testing, the SO.104 delivers solid results for both field recording and sound design. Its compact design, low self-noise, and wide frequency response make it an excellent option for capturing subtle environmental sounds and ultrasonic frequencies.

For users with a limited microphone budget, the SO.104 is a valuable addition to any mic locker, especially when working with ambient recordings or sound effects that require detailed capture of high-frequency audio. It’s an ideal choice for those looking to explore ultrasonic recording without breaking the bank.

While the ill-fitting SO.100 series foam windscreens may not be the best choice for outdoor use, and additional accessories like a stereo bar and weatherproof hard case need to be sourced separately, the Sonorous Objects SO.104 provides great value for its price. Whether you're a professional sound designer or an enthusiast, this mic is a unique tool that delivers performance where it counts.


Ready to elevate your project with expert sound design? Let's bring your vision to life! Explore my services or contact me today to discuss your project requirements. Thank you for visiting sameliaaudio.com. Stay tuned for more insights and analysis on cinematic storytelling through sound.

sam elia

Unlocking Efficiency in Your Sound Kit: The Power of the Z-Cable

I share why the Z-cable is my go-to solution for simplifying audio connections and boosting efficiency. I also walk you through how to build your own Z-cable, tailored to your setup needs.

When you're setting up for live sound or location audio, the last thing you want is to be digging through a bag of adapters and connectors, trying to find the right one. Enter the Z-cable—one of my favorite compact, customizable, problem-solving tools that audio engineers can use to streamline their kit and workflow in time-sensitive situations. This post will dive into what a Z-cable is, why it's invaluable for sound engineers, and guide you on how you can make your own.

What is a Z-Cable?

At its core, a Z-cable is an adaptable audio connector tool designed to help audio engineers connect different equipment without needing multiple adapters or “turnaround” barrels. A typical Z-cable has a combination of connector options on each end, allowing for instant adaptability without extra bulk. In practice, this means fewer separate adapters, fewer cables, and a faster, more efficient setup.

A Z-cable is versatile enough to work as a female XLR-to-jack adapter, a male XLR-to-jack adapter, a Y-split cable, a gender reverser, or even a short patch cable. It’s a really useful tool to have in your kit, covering multiple needs with just one cable.

Be sure that when using your Z-cable, you do not combine two outputs or double up phantom power. The design I will outline uses a female TRS connector with a male-to-male TRS barrel adapter to prevent the accidental tip contact that would be possible with a male TRS design.

Why Audio Engineers Love Z-Cables

Z-cables are designed to be a flexible and efficient solution for real-time audio needs. Here’s why they’re a favorite among live sound and location audio engineers:

Space-Saving: Forget about cramming separate male-to-female and female-to-male adapters into your bag. A single Z-cable takes the place of several adapters, minimizing bulk and saving valuable space.

Increased Flexibility: In the fast-paced world of live sound, connections may need to change on the fly. With a Z-cable, you can instantly switch between different connector types and genders, reducing setup time and troubleshooting stress.

Cleaner Setups: Keeping cables organized is essential to maintaining clear sound and avoiding signal interference. Z-cables combine adapters into a single, streamlined unit, reducing cable clutter and allowing you to always find the right adapter.

Quick Problem-Solving: When something goes wrong, every second counts. With a Z-cable, you have built-in versatility to adapt connections quickly and efficiently.

DIY Guide: Building Your Own Z-Cable

If you're a hands-on audio engineer, building a Z-cable can be a simple yet rewarding project that lets you create something entirely customized to your unique setup. When I decided to make my own, I wanted a solution that would cut down on adapter clutter and be built to my own specifications. I spent around $30 on materials from Redco Audio and a couple of hours working on it and testing each stage to ensure it was as reliable and durable as I needed.

Here's my step-by-step guide, based on what I learned building my own Z-cable. If you’re ready to dive in and get a personalized, problem-solving addition to your kit, this guide will walk you through the process!

Materials Needed

  • XLR Male Connector (2) such as Neutrik NC3MXX

  • XLR Female Connector (2) such as Neutrik NC3FXX

  • TRS Female Connector (1) such as Neutrik NJ3FC6

  • TRS Gender Changer (1) - male-to-male (optional)

  • High-Quality Microphone Cable (about 5 feet, enough to cut four 1-foot lengths and strip the ends). I went with Mogami W2552.

  • Soldering Iron and Solder

  • Heat Shrink Tubing and Heat Gun (optional, for added durability and insulation. Electrical tape can be used instead, but it is less reliable and leaves sticky residue).

Step 1: Prepare the Cable

1. Start by cutting your cable to the desired length. For a compact Z-cable, 6–12 inches usually works well (be sure to leave enough length to work with your connectors).

2. Strip back the insulation at each end of the cable to expose the wires. You should see three wires inside: typically red (positive), white (negative), and a bare copper ground wire. Be careful not to expose too much bare wire, which can lead to shorts and other issues; heat shrink tubing greatly reduces these pitfalls.

Step 2: Attach the Connectors

Before you start soldering, remember to slide your heat shrink tubing and any other add-ons, such as strain relief, onto the cable!

1. Connect the male XLR connector to one end of the first length of cable and the female XLR connector to the other. Be sure to solder the wires correctly to the pins: Pin 1 to ground, Pin 2 to positive, and Pin 3 to negative.

2. Attach the next length of cable to the same female XLR connector, and attach a male XLR connector to the opposite end. Be sure to keep your wires matched up to the pins correctly. 

3. Attach the third length of cable to the second male XLR connector you just attached, and attach the last female XLR connector to the opposite end, keeping your wires matched to the correct pins.

4. Attach the final length of cable to the female XLR connector you just attached, and attach the female TRS connector to the end, being sure your wires are connected to the correct points: Sleeve to ground, Tip to positive, and Ring to negative. You may wish to attach or tether a male-to-male TRS gender-changer barrel for additional flexibility.


Whenever I am soldering cables, I like to have a pinout handy to avoid any crossed wires. Here is a custom pinout for this project:

The middle three connectors will each have two sets of wires soldered to the connection points. It may be helpful to twist these pairs together before tinning and soldering the connections.

Step 3: Test and Insulate

1. Before completing the build, test the cable with a cable tester such as the Behringer CT100 6-in-1 Cable Tester to ensure it’s correctly wired and functional. 

2. Once continuity is confirmed, use heat shrink tubing around the connections for additional insulation and to reinforce durability.

Tips for Customization

You can make Z-cables with a variety of connector types depending on your equipment needs. For example, you could expand the design I've outlined by adding left and right XLR legs off the TRS connector end.

Using color-coded connectors or heat shrink tubing can help you quickly identify each cable’s function in complex setups.

For even more adaptability, you could create multiple Z-cables in various lengths for specific applications, such as in-studio, on-stage, or field use.

Integrating Z-Cables with Your Audio Kit

The Z-cable fits right in with other problem-solving tools in an audio engineer’s kit, such as:

Inline Pads/Attenuators: These devices help you control high-level signals, and a Z-cable can seamlessly integrate with them to expand your options.

DI Boxes: Combining a Z-cable with a DI box lets you adapt connections for both balanced and unbalanced signals, making them more flexible in the field.

Signal Splitters and Combiners: Z-cables reduce clutter when used with splitters, making quick adjustments to the setup smoother and keeping everything clean and organized.

Wrapping Up

Whether you’re doing live sound or setting up in the field, a Z-cable is a powerful tool to have on hand. It helps you adapt faster, reduce clutter, and simplify setups, all while saving space in your kit. Building your own Z-cable can be a fun and rewarding project, letting you tailor it precisely to your needs.

With a Z-cable in your bag, you’ll be ready for whatever surprises your audio setup throws your way. So, roll up your sleeves and get ready to make this essential tool for your kit—you won’t regret it!


Ready to elevate your project with expert sound design? Let's bring your vision to life! Explore my services or contact me today to discuss your project requirements. Thank you for visiting sameliaaudio.com. Stay tuned for more insights and analysis on cinematic storytelling through sound.

sam elia

The Hidden World of Ultrasonic Sound: Using Ultrasonic Microphones in Sound Design

I explore how ultrasonic microphones can capture details far beyond human hearing, opening up new possibilities for sound design. From high-end mics to budget-friendly options, I compare the best choices to help you find the right fit for your projects.

In sound design, capturing details beyond the ordinary spectrum of human hearing opens up new creative possibilities. Ultrasonic microphones, capable of recording frequencies well beyond the typical 20 kHz range, allow sound designers to tap into an unheard world of textures and unique sounds. In this post, we’ll explore how ultrasonic microphones are used in sound design and compare popular options across different budget ranges to help you choose the right one for your projects.

What Is an Ultrasonic Microphone?

An ultrasonic microphone is a specialized type of microphone that records frequencies far beyond what standard microphones can capture, often up to 100 kHz or more. This is especially useful for capturing high-frequency details from animals like bats, insects, or small mechanical sounds that produce ultrasonic tones. The resulting recordings can then be pitched down much further in post-production to reveal nuanced sounds that bring new dimensions to sound design, often producing eerie, alien, or hyper-realistic textures.

It is important to keep in mind that a recorder capable of high sample rates (96 kHz, or preferably 192 kHz) will be needed to capture ultrasonic frequencies.
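The reason is the Nyquist limit: a digital recording can only represent frequencies up to half its sample rate, which in turn caps how far material can be pitched down before the top end runs out. A quick back-of-the-envelope check (the 70 kHz figure is just an example drawn from the mic specs discussed below):

```python
def nyquist(sample_rate_hz):
    # Highest frequency a digital recording can represent.
    return sample_rate_hz / 2

def after_pitch_down(freq_hz, octaves):
    # Each octave of downward pitch shift halves every frequency.
    return freq_hz / (2 ** octaves)

print(nyquist(48_000))    # 24 kHz: barely above hearing, nothing to spare
print(nyquist(96_000))    # 48 kHz
print(nyquist(192_000))   # 96 kHz: enough headroom for mics reaching 70 kHz

# Content captured at 70 kHz and pitched down two octaves lands at 17.5 kHz,
# still inside the audible range instead of vanishing.
print(after_pitch_down(70_000, 2))
```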

How Ultrasonic Microphones Are Used in Sound Design

Capturing Inaudible Textures: Ultrasonic microphones allow sound designers to record sounds that are usually inaudible. These could include high-frequency elements of machinery, the flutter of insect wings, or even ultrasonic emissions from certain materials. When slowed down, these sounds reveal intricate details and textures that enrich a soundscape.

High-Frequency Elements of Machinery: Imagine recording the ultrasonic frequencies produced by a spinning industrial fan. When slowed down, the sound could reveal rhythmic, pulsing tones with subtle metallic resonances, adding depth and realism to a scene set in a factory or sci-fi environment.

Flutter of Insect Wings: By capturing the high-pitched flutter of a dragonfly's wings with an ultrasonic microphone, you could slow down the recording to create a sound reminiscent of small, delicate engines or mechanical drones. 

Ultrasonic Emissions from Materials: Certain materials, such as glass or metal, emit faint ultrasonic vibrations when under stress or when rubbed together. When slowed down, these sounds could serve as eerie atmospheres for horror or suspense scenes, adding an unsettling, almost imperceptible layer of tension.

Enhanced Sound Manipulation: By recording sounds at ultrasonic frequencies, designers can slow them down to create exaggerated and surreal effects, perfect for film, game design, or VR applications where exaggerated, hyper-real soundscapes enhance immersion.

Example: In the 2018 God of War video game, the World Serpent, also known as Jörmungandr, is a massive, mythical creature that embodies the ocean and plays a significant role in the game's narrative and atmosphere. To achieve the ancient creature's resonant, otherworldly voice, the sound designer pitch-shifted human vocalizations and layered them with grumbles and rattles from various animal recordings.

Comparing Ultrasonic Microphones Across Budget Ranges

If you’re interested in diving into ultrasonic sound design, choosing the right microphone is essential. Here, we’ll compare three ultrasonic microphones across budget levels: two industry standard options in the Sanken CO-100K (high-end) and Sony ECM-100U (mid-range), and newcomer Sonorous Objects’ SO.104 Ultrasonic Omni Microphone (budget).

1. Sanken CO-100K

Price: High-end, around $2,500

Frequency Range: 20 Hz – 100 kHz

Polar Pattern: Omnidirectional

Connection: XLR

Best for: Professional sound designers, high-end studios, scientific research

Overview: The Sanken CO-100K is the gold standard for ultrasonic recording in sound design. With its ultra-wide frequency range extending up to 100 kHz, it captures high-fidelity ultrasonic sounds with remarkable detail and precision. Built for high-end studios and demanding production environments, the CO-100K is perfect for those needing extreme accuracy and depth, whether recording fine textures or using pitch-shifting techniques to unveil hidden details. Its dual-capsule design allows for a wider, more forgiving polar pattern at extreme high frequencies.

Pros:

  • Exceptional frequency range (20 Hz – 100 kHz)

  • High build quality for durability and reliability

Cons:

  • High cost, suited only for studios or sound designers with large budgets, though you may be able to find one for rent.

2. Sony ECM-100U

Price: Mid-range, around $1,000

Frequency Range: 20 Hz – 50 kHz

Polar Pattern: Cardioid

Connection: XLR

Best for: Intermediate sound designers, mid-range studio setups

Overview: The Sony ECM-100U is an excellent mid-range option for those interested in high-frequency sound capture without the hefty price tag of top-tier models. While its frequency range caps at 50 kHz, it’s still more than sufficient for most ultrasonic applications, capturing detailed audio that can be slowed down and processed effectively. Its cardioid polar pattern makes it ideal for focused recordings.

Pros:

  • Strong frequency response up to 50 kHz, suitable for ultrasonic recording

  • Cardioid pattern ideal for isolating sound sources

Cons:

  • Limited upper frequency extension compared to higher-end models like the Sanken CO-100K. With this microphone or one with a similar frequency response, the recordings will hold up to less pitch shifting (however, if you cannot record at a sample rate above 96kHz to capture those frequencies, it will make little difference).

3. Sonorous Objects SO.104 Ultrasonic Omni Microphone

Price: Budget, around $100

Frequency Range: 20 Hz – 70 kHz

Polar Pattern: Omnidirectional

Connection: XLR

Best for: Entry-level sound designers, experimental setups, smaller studios

Overview: The Sonorous Objects SO.104 Ultrasonic Omni Microphone is a compelling choice for those on a budget, delivering a frequency range that rivals more expensive models. It’s a versatile and affordable option for those new to ultrasonic recording or anyone looking to experiment with ultrasonic capture without a major investment. With its XLR connection, the SO.104 is easy to integrate into professional setups, making it a unique offering at this price point.

Pros:

  • Impressive frequency range at an affordable price

  • Compact and portable for on-the-go recording

Cons:

  • Lower build quality compared to high-end models, which may result in decreased durability and a shorter lifespan, especially in demanding or frequent use, and may point to other inconsistencies in their production. However, the low comparative price may offset these risks.

  • May lack some off-axis detail offered by more premium options.

Which Microphone Should You Choose?

Cheap plant mic options such as the Zoom H5 are only capable of recording up to 20 kHz. The SO.104 looks to be a significant upgrade over the handheld's mics.

For Professional and Scientific Use: The Sanken CO-100K is the best choice for high-end studios and professionals with the budget for top-tier detail and durability. If you're working on scientific research projects, like analyzing animal vocalizations or capturing fine material sounds at ultrasonic frequencies, the CO-100K's extended range makes it invaluable. It's also ideal for complex sound design projects, as its 100 kHz frequency extension allows recordings to be pitched down a full octave further before losing detail compared to mics such as the Sony ECM-100U.

For Versatile Mid-Range Use: The Sony ECM-100U strikes a balance between affordability and quality, offering a solid frequency range and a cardioid polar pattern well-suited for focused recordings. If you're aiming to capture clear, isolated sounds without a high price tag, this microphone is a solid choice– especially for sound designers who need robust ultrasonic capabilities but with recording setups that don't support sample rates above 96 kHz.

For Budget-Conscious and Experimental Use: The Sonorous Objects SO.104 is a compelling choice for those exploring ultrasonic sound on a budget. If you’re new to ultrasonic recording or looking to experiment with capturing high-frequency sounds in less controlled environments, the SO.104 is an affordable, well-specced microphone that still covers an extensive ultrasonic range. Its low price point also makes it ideal for fieldwork or other such risky setups, like placing the mic near unpredictable wildlife or in rugged outdoor environments, where damage to the microphones might be expected and their affordability allows for greater experimentation.

Final Thoughts

Ultrasonic microphones unlock a world of sound that’s otherwise inaccessible, adding depth, texture, and mystery to your sound design projects. From capturing the calls of bats to exploring the rich overtones of mechanical sounds, ultrasonic recording brings an unparalleled range of creative possibilities. Whether you’re investing in a high-end setup with the Sanken CO-100K or experimenting with the Sonorous Objects SO.104, ultrasonic microphones offer unique tools to enhance and expand your creative toolkit.

Encouraged by the published spec data, I recently purchased my own matched stereo pair of Sonorous Objects SO.104s. Stay tuned for an upcoming review, where I'll dive deeper into their performance, share insights from my hands-on experience, and provide some measurement data along with example recordings. With the right mic in hand, the hidden sounds of the world are yours to capture, manipulate, and transform.


Ready to elevate your project with expert sound design? Let's bring your vision to life! Explore my services or contact me today to discuss your project requirements. Thank you for visiting sameliaaudio.com. Stay tuned for more insights and analysis on cinematic storytelling through sound.

sam elia

THE ROLE OF RHYTHM AND TIMING IN SOUND DESIGN

I explore the fascinating world of sound design in film and how it can deviate from on-screen action to enhance storytelling.

Introduction

Sound design in film conveys meaning differently than imagery and must sometimes deviate from the action on screen to enhance the storytelling experience. Rhythm and timing play a crucial role in this aspect of sound design. Tighter micro-timing can make a sound feel powerful and advanced, while looser micro-timing can give the impression of something worn or damaged.

However, there are some notable exceptions where the sound design comes first. In the film "Baby Driver," the picture was cut to the soundtrack, allowing the imagery to align perfectly with the sound design and leading to a very satisfying, cohesive edit. The music the main character listens to throughout the film is integral to both his driving and the overall narrative; the effect is an incredibly tight, rhythmic groove that lets us feel the events of the film unfold from his perspective and underpins his emotional state.

Well-timed sound effects and music have a significant psychological impact, guiding the audience's emotions and attention. Despite exceptions like "Baby Driver", sound designers often must deviate from the visual imagery of the film.

Deviation from Visuals: An Artistic Choice

Sound does not always match on-screen actions, and this deviation is an artistic choice made by filmmakers and sound designers for various reasons:

  • Characterization: Enhancing the traits or state of a character.

  • Emotional Impact: Strengthening the emotional resonance of a scene.

  • Tension and Surprise: Using space to create or enhance anticipation and suspense.

  • Guiding Attention: Directing the audience’s focus to specific elements or moments.

In-Depth Example: Thor’s Hammer Mjölnir Reassembling in "Thor: Love and Thunder"

Scene Description

You can check out the clip with commentary from the director here.

Thor stands amidst a chaotic battle when he sees his long-lost hammer being wielded by a mysterious warrior in Asgardian armor closely resembling his own. He calls Mjölnir to his aid, but it does not obey. The mysterious warrior commands the shattered pieces of Mjölnir to take out several enemies. They then return, whizzing past Thor, and reassemble with a satisfying, rhythmic magical energy.

Sound Design Analysis

  • Initial Silence: The scene starts with a chaotic battle against unknown foes. As we catch glimpses of Mjölnir, the battle sounds take a bit of a backseat to the sounds of the hammer, pulling our focus to highlight the most important element of the scene.

  • Magical Build-Up: A low hum begins, gradually increasing in volume and complexity. Build-ups like this add anticipation and weight to what follows.

  • Rhythmic Pulses: Each piece of Mjölnir emits a distinct, rhythmic pulse and pitch as it locks into place. Note that these rapid, metallic impacts are similar but are all unique, avoiding the "machine gun" effect of repeated sounds.

  • Climactic Reassembly: The final pieces come together with a resonant, echoing boom, letting us know Mjölnir may be fragmented but it is no less powerful than before.

Deviation from Visuals

The interesting thing about this scene is that if you pay close attention, you will notice that the sound of each piece reassembling does not coincide with the visuals. The imagery is wild and chaotic, with uneven timing between fragments, which also outnumber the audible pings in the sound design, reinforcing the chaos of the battle. That visual chaos contrasts with the even pinging of the sound design elements, highlighting the heroic appearance of Mjölnir.

The magical hum and pulses are timed to build tension and emphasize the significance of Mjölnir's reassembly, showing how it remains powerful despite being fragmented, while the final boom is slightly delayed from the rest, creating a dramatic pause that heightens the moment's impact. Using fewer pings than visible hammer fragments also avoids cluttering the soundscape with overlapping sounds, keeping the intention clear.

Advice for Sound Designers

Knowing when to deviate from visuals and implement this technique can be challenging. As a sound designer, consider the emotional and narrative goals of the scene. Ask yourself:

  • What does the sound design need to convey beyond the visuals?

  • What is the primary goal? To enhance the emotional weight, or highlight a key narrative moment?

  • Deviating from the exact timing of visuals can be effective at emphasizing emotional beats, creating tension, focusing attention, or even enhancing surrealism. In this instance, the way the sound design deviates from the visuals ultimately serves to build up to the reveal of the mysterious fighter wielding Mjölnir- Thor's former love Jane Foster.

Collaborate closely with the director and editor to understand their vision and experiment with different timings to see what feels most impactful. Trust your intuition and the feedback from initial screenings to fine-tune the sound design for the desired effect.

Conclusion

Rhythm and timing in sound design are essential for creating a compelling auditory experience in film. Deviations from visual timing can significantly enhance the viewer’s emotional engagement and overall experience. The impact of the feeling conveyed by the sound is often more important than strictly following the imagery. These 'mismatches' are hard to notice because they feel right and natural.

Next time you watch a film, pay close attention to the sound design and how it complements or deviates from the visuals to enhance the storytelling. You might be surprised at how much the sound influences your perception and enjoyment of the movie.


Ready to elevate your project with expert sound design? Let's bring your vision to life! Explore my services or contact me today to discuss your project requirements. Thank you for visiting sameliaaudio.com. Stay tuned for more insights and analysis on cinematic storytelling through sound.

sam elia

HOW TO CREATE BROADBAND ABSORBERS WITH ROCKWOOL: A STEP-BY-STEP GUIDE

Learn how to build cost-effective, professional-grade broadband sound panels using Rockwool insulation.

Introduction

1 of 14 identical panels I made for my home studio, saving hundreds even after the cost of tools; the electric stapler was the real MVP.

When it comes to sound design and home studios, a well-treated space is paramount for achieving professional-grade results. Broadband absorbers are crucial for managing sound reflections, reducing reverberation, and improving overall acoustic quality in the room. By absorbing a wide range of frequencies, these panels help create a more controlled and pleasant sound environment. 

However, pre-made panels can be prohibitively expensive, and a DIY approach can provide a cost-effective solution for achieving professional-quality acoustic treatment that not only meets your acoustic requirements but also reflects your personal style and creativity.

For those embarking on a DIY project, Rockwool is a highly recommended material due to its exceptional acoustic properties. It provides even broadband absorption, is easy to handle, safe, cost-effective, and offers additional benefits such as fire resistance and moisture control. While fiberglass and other materials can also be effective for acoustic treatment, and will work pretty interchangeably with this guide, Rockwool was the best choice for my specific use case. 

  • Fiberglass is a great absorber but can be more challenging to handle safely due to its tendency to shed fine particles, which can be irritating to the skin and lungs. 

  • Rockwool is more user-friendly and easier to cut and shape without excessive dust. 

  • Rockwool’s price and availability happened to be better at the time of my purchase, so it is important to keep that in mind as you design your own panels.

Creating your own broadband absorbers not only allows for significant cost savings—often exceeding 50% compared to pre-made panels—but also enables customization in terms of size, depth, shape, and aesthetics. This guide will walk you through the process of constructing broadband absorbers using Rockwool insulation.

 

Cost Considerations

Get the best bang for your buck with custom DIY panels.

Creating your own broadband absorbers with Rockwool is not only a rewarding DIY project but also a smart financial decision. Let's explore the cost savings you can expect by opting for a DIY approach compared to purchasing pre-made panels.

Cost of Pre-made Panels

Pre-made sound panels typically range from $50 to $150 per panel, depending on the brand, size, and quality.

Cost of DIY Rockwool Panels

Rockwool Insulation: A pack of rockwool insulation (e.g. Safe'n'Sound) costs around $50-$70 and can typically make 6-12 panels, depending on the size and thickness.

Fabric: Acoustically transparent fabric costs around $5-$20 per yard, and you might need 1-2 yards per panel.

Wood Frame: Wood for the frame might cost around $10-$20 per panel.

Additional Materials: Miscellaneous materials like adhesive, screws, and mounting hardware might add an extra $5-$10 per panel.

Per panel, that works out to roughly $5-$12 for insulation, $5-$40 for fabric, $10-$20 for wood, and $5-$10 for hardware; with sensible choices, the total cost for a DIY panel typically comes in at well under $50, allowing you to better treat your space.

 

Materials and Tools Required

Materials

Try to find lumber that is not warped; warped boards make squaring the corners difficult.

  • Insulation batting: I used 3” deep, 15.25x47” Safe'n'Sound batting

  • Lumber: 4x1 planks allow internal space for the batting and a flush center brace

  • Acoustic Transparent (AT) fabric: I used Guilford of Maine (plus cheaper tulle backing fabric)

  • Wood screws

  • Corner brackets: Optional, for added stability.

  • Staples

  • Mounting hardware: French cleat recommended for secure and flush mounting

  • Stick-on foam bumpers: Optional, to prevent tilting/ vibrations

 

Tools

A speed square can help you achieve square corners quickly and easily.

  • Tape measure

  • Speed square

  • Pencil

  • Saw (jigsaw or circular saw)

  • Drill

  • Screwdriver

  • Fabric scissors

  • Staple gun (electric recommended)

  • Level

  • Safety equipment (gloves, mask, and goggles)

 

Planning and Measurements

Decide on the placement of your absorbers for optimal sound absorption. Prioritize key areas such as first reflection points, corners, and back walls. Consulting with acoustic treatment experts like GIK Acoustics, ATS Acoustics, or Acoustimac can provide valuable insights, as well as treatment options or mounting hardware that may be more difficult to DIY or source elsewhere.

Determine the size and quantity of absorbers needed based on your room's dimensions and acoustic requirements. For more information on this, check out my previous entries on mix room calibration part one and part two. Effective broadband absorbers typically combine a depth of Rockwool with an air gap to target specific frequencies, enhancing their efficiency. Utilize the "Product Data Sheet" or "Technical Data Sheet" for selecting your batting. 

This document typically contains detailed information about the product's acoustic properties, such as its sound absorption coefficient, density, thickness, and other relevant specifications. You can usually find this sheet on the manufacturer's website or by contacting them directly.
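As a rough starting point before digging into the data sheets, the common quarter-wavelength rule of thumb says a porous absorber remains effective down to roughly the frequency whose quarter wavelength equals the total depth (batting plus air gap). Here is a quick sketch of that calculation, using my 3-inch batting and an assumed 3-inch air gap purely as example numbers:

```python
# Quarter-wavelength rule of thumb for porous absorbers:
# effective down to roughly f = c / (4 * d), where d = batting depth + air gap.
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def lowest_effective_freq_hz(batting_in, air_gap_in):
    depth_m = (batting_in + air_gap_in) * 0.0254  # inches to metres
    return SPEED_OF_SOUND / (4 * depth_m)

# 3" batting mounted flush on the wall vs. the same panel spaced 3" off the wall
print(round(lowest_effective_freq_hz(3, 0)))  # ~1125 Hz
print(round(lowest_effective_freq_hz(3, 3)))  # ~563 Hz
```

The exact numbers depend on the material's published absorption data, but the takeaway holds: adding an air gap behind the panel extends useful absorption noticeably lower without any extra Rockwool.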

Selecting Fabric

Burlap-covered panels tend to look sloppy no matter how careful you are.

Choose a breathable AT fabric that allows sound to pass through, such as Guilford of Maine, which is specifically designed for acoustic treatments. When making your selection, look for the "Acoustic Performance Data Sheet" or "Acoustic Transparency Data Sheet." This document provides information on the fabric's acoustic properties, including its ability to allow sound to pass through while still providing adequate coverage for acoustic treatments. Alternatives like burlap or other heavy fabrics do not have this data and can often impair acoustic performance and durability. These fabrics will also not be as easy to tension correctly, leading to visual flaws. 

Calculate the amount of fabric needed based on your panel dimensions and the width of the fabric roll. You may find certain orientations lead to less waste or fewer cuts. 

In my case it was optimal to lay the long dimension of the panel across the width of the roll. The only waste was at the top and bottom of the panel, and it could be folded onto the back without additional cuts, since that small bit of extra fabric would not be seen once mounted.

 

Cutting the Frame

  • Measure and mark the 4x1 lumber to match the internal dimensions of the Rockwool panels.

  • Arrange the cuts to minimize waste and avoid knots, which can weaken the structure. 

  • Cut the lumber to size, ensuring precise measurements for a snug fit around the Rockwool without compressing the batting, as this will compromise the performance of the material. 

  • In my case I needed three lengths of cuts: the long sides (two per panel), the short sides (two per panel), and the center brace (one per panel), which is shorter so that it fits between the long sides and lies flush with the exterior edge of the frame.

Assembling the Frame

  • Align the corners of the cut lumber, using a speed square to ensure right angles. 

  • You may wish to sand out any rough or uneven edges, especially if using alternative fabrics. 

  • Drill pilot holes to prevent the wood from splitting, and join the pieces using screws—two per corner—leaving enough space along the edges for the staples. 

  • For additional stability, consider reinforcing with metal corner brackets or scrap wood cut into triangles; the latter can also help keep the batting in place. 

  • Attach AT backing fabric (e.g., tulle) to the interior of the frame with staples to hold the Rockwool in place.

 

Preparing the Fabric for Rockwool

  • Cut the fabric to size, allowing some extra material for overlap—you want to avoid staples right on the edge of the fabric. 

  • Lay the fabric flat, exterior side down. 

  • At this point you may wish to apply a spray glue to the fabric where the batting will be placed, however with a wood frame design this shouldn't be necessary. 

  • Handle the Rockwool carefully, wearing gloves and a mask to avoid skin irritation and inhalation of fibers. 

  • Lay the Rockwool flat in the center of the fabric.

 

Attaching and Wrapping the Frame

  • Place the frame onto the Rockwool, with the backing side up, centered on the fabric. 

  • Ensure the Rockwool is evenly distributed within the frame and is not being compressed or bunched up anywhere.

  • Begin stapling the fabric on each side of the center bracing, pulling it taut without overstretching.

  • Continue stapling evenly around the frame, spacing the staples about one staple width apart for even tension (electric staple gun highly recommended). 

  • Work from the center outwards to maintain even tension and avoid waves or distortions in the fabric. 

  • Begin by stapling from the middle to the right end on one side, then switch to the opposite side and staple its full width from left to right, and finish the first side from the middle out to the left end, keeping the fabric even and taut as you go. 

  • Fold and secure the corners neatly, tucking any excess fabric underneath for a clean finish. Ensure the corners are flat and uniform for the best appearance.

Uneven tension will show up as waves or distortions in the fabric's thread, texture, or design, but this should not have much effect on the acoustic performance of the panel. An even wrap is easier to achieve with a purpose-made fabric such as Guilford of Maine than with alternatives like burlap.

 

Mounting the Absorbers

  • Install the mounting hardware, such as French cleats, on the wall and the back of the absorbers.

  • You will want a secure mounting option, as the panels will typically exceed 5 lbs, largely due to the weight of the frame. 

  • Use a level and tape measure to ensure accurate placement. 

  • Attach stick-on foam bumpers to the rear side of the corners of the panels to prevent tilting and vibrations, ensuring a secure and stable installation.

Conclusion

In the field of sound design, a well-treated space is crucial for achieving professional-grade results. Broadband absorbers crafted with Rockwool offer a rewarding DIY solution, enhancing your acoustic environment while providing customization and significant cost savings.

With careful planning and execution, your DIY broadband absorbers will transform your studio into a professional-grade acoustic space. Remember, a well-treated space not only improves sound quality but also fosters creativity and productivity in your work. Let your creativity shine through each step of the process, and enjoy the enhanced sound quality that your custom broadband absorbers bring.


Ready to elevate your project with expert sound design? Let's bring your vision to life! Explore my services or contact me today to discuss your project requirements. Thank you for visiting sameliaaudio.com. Stay tuned for more insights and analysis on cinematic storytelling through sound.

sam elia

IMPULSE RESPONSES: A VERSATILE TOOL FOR THE SOUND DESIGNER'S TOOLBOX

Learn how IRs are a vital tool in audio engineering and sound design. From capturing realistic reverberations and crafting innovative effects, to improving workflows and other practical applications, IRs empower artists to push the boundaries of creativity.

In the realm of audio engineering and sound design, believability and authenticity are often paramount, and Impulse Responses (IRs) are a great means for achieving these ends. In this article, we'll explore the essence of impulse responses, examining what they are, their place in audio engineering and acoustics, as well as why they're so useful for crafting immersive soundscapes.

Definition of Impulse Responses (IRs)

At its core, an impulse response is a representation of a system's behavior when subjected to an impulse: a sharp, short burst of sound that contains all frequencies. In practical terms, an impulse is often represented as a theoretical idealization, such as a Dirac delta function, which has infinite amplitude and infinitesimal duration. Put simply, it's a snapshot of how a space, a piece of equipment, or an effect responds to an input. This snapshot captures not only the immediate reverberation but also the intricate nuances of reflections, echoes, and resonances that define a sound environment.

Impulses can also be used as a test signal to measure the response of a system, such as a room or a piece of audio equipment. By analyzing the response of the system to the impulse, valuable information about its characteristics, such as reverberation time, frequency response, and phase, can be obtained.

Overall, in acoustics, an impulse serves as a fundamental tool for understanding and characterizing the behavior of acoustic systems and for various applications in audio engineering, sound design, room acoustics, and signal processing.
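
To put that definition on slightly firmer footing: for a linear, time-invariant system with impulse response h(t), the output y(t) for any input x(t) is given by convolution:

$$ y(t) = (x * h)(t) = \int_{-\infty}^{\infty} x(\tau)\, h(t - \tau)\, d\tau $$

A convolution reverb plugin performs exactly this operation (in discrete form) between your audio and the loaded IR, which is why capturing h is enough to reproduce the space.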

Importance of IRs in Audio Engineering and Sound Design

The significance of IRs cannot be overstated. They serve as fundamental building blocks for creating realistic reverberations, emulating the sonic characteristics of different spaces or hardware, and sculpting creative effects that push the boundaries of imagination. Whether it's recreating the ambience of a concert hall, capturing the warmth of a vintage guitar amplifier, or conjuring otherworldly soundscapes, IRs provide the essential tools for sonic artists to paint with precision and depth.

Summary

In this article, we navigate the creation and uses of impulse responses by:

  • Exploring their many uses in audio production

  • Contrasting the linearity of IR reverbs with the nonlinearity of algorithmic reverbs

  • Uncovering the art and science behind crafting authentic IRs

Additionally, we cover their practical applications within room acoustics and how inverted impulse responses can be used for room correction, as well as showcase popular software solutions that harness the power of IRs for audio enhancement.

Join me as we unravel the mysteries of impulse responses, unlocking new dimensions of sonic creativity and fidelity.

Uses for Impulse Responses

Impulse responses (IRs) are versatile tools with a wide array of applications in audio engineering and sound design. Let's explore some of the primary uses of impulse responses:

Reverbs of Spaces or Hardware

One of the most common applications of IRs is in recreating the reverberation characteristics of physical spaces or hardware units. Because an IR captures the unique sonic fingerprint of a room, hall, or reverb unit, it lets us emulate specific acoustic environments, such as remote or unique locations on a film set, or hardware reverb units we can't always cart around with us, such as a full-sized plate reverb. This adds depth and dimension to recordings in a way that precisely matches our other material.
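
As a concrete illustration, here is a minimal convolution-reverb sketch in Python. The file names are hypothetical, and it assumes mono WAV files at the same sample rate:

```python
# Minimal convolution reverb sketch (assumes mono WAV files at the same sample rate).
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

sr_dry, dry = wavfile.read("dry_dialogue.wav")   # hypothetical file names
sr_ir, ir = wavfile.read("plate_reverb_ir.wav")
assert sr_dry == sr_ir, "Resample so the IR and the dry signal share a sample rate"

# Convert to float and normalize the IR so the wet signal stays at a sane level
dry = dry.astype(np.float64)
ir = ir.astype(np.float64)
ir /= np.max(np.abs(ir))

wet = fftconvolve(dry, ir)      # the reverb tail extends past the end of the dry signal
wet /= np.max(np.abs(wet))      # simple peak normalization

wavfile.write("wet_dialogue.wav", sr_dry, (wet * 32767).astype(np.int16))
```

In a DAW you would of course just load the IR into a convolution plugin; the sketch only shows that "applying an IR" really is a single convolution.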

Speaker Responses

A real bullhorn would break up and distort. Use distortion in front of your bullhorn IR for a more realistic effect.

IRs are also instrumental in capturing the sound of microphone placements on guitar amplifiers and other speakers. By sampling the interaction between a microphone and a speaker cabinet, IRs enable precise emulation of different microphone positions and types. In music, this can facilitate the exploration of authentic guitar tones without the need for extensive mic setup and experimentation.

In sound design and post production, this technique allows for realistic worldization that matches the rest of our audio. For example, we can take a line of dialogue or some diegetic music that is meant to be played from a speaker on screen and apply an impulse response of a matching radio, then run that processed signal through our chosen room reverb (ideally one that was taken on location in that same space) to realistically place it in our space seamlessly with the other elements.

It is worth noting that impulse responses are linear and do not account for speaker breakup or other forms of nonlinearity, such as distortion (more on that later).

Creative Effects

Beyond traditional reverb applications, IRs can be employed to create a variety of creative effects. Most common impulse loaders allow any WAV file to be run through them (often with some caveats, such as file length), resulting in unique timbral transformations and spatial effects. Try loading one shot samples of various instruments for a unique spin on a more traditional reverb, or experiment with musical loops for strange rhythmic delays.

By harnessing the power of impulse responses, audio engineers and sound designers can enhance their productions with lifelike reverberation, authentic speaker tones, and innovative sound effects. IRs remain indispensable tools in the modern audio toolbox, enabling artists to explore new sonic territories and push the boundaries of creativity.

Linearity in Impulse Responses vs Non-Linearity In Algorithmic Reverbs

There are two primary methods that stand out for creating reverberation effects: impulse responses and algorithmic reverbs. Both algorithmic reverbs and IRs serve as invaluable tools, allowing for the recreation of acoustic environments and the emulation of various audio hardware. However, it's essential to understand their strengths and weaknesses.

Cannot Capture Nonlinear Systems with IRs

One fundamental limitation of impulse responses is their inability to capture nonlinear systems such as compression or distortion accurately. While IRs excel at capturing the linear response of a system, they fall short when it comes to representing nonlinear behaviors. As a result, complex effects like dynamic range compression or harmonic distortion cannot be reproduced using standard impulse responses alone, and additional processing may be needed in conjunction with their use for more believable results.
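
Formally, convolution with an IR can only reproduce systems that obey superposition and scaling:

$$ T\{a\,x_1(t) + b\,x_2(t)\} = a\,T\{x_1(t)\} + b\,T\{x_2(t)\} $$

Compressors and distortion violate this condition (doubling the input does not simply double the output), which is why their character cannot be baked into an IR on its own.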

Heuristic Differentiation Of IRs From Algorithmic Reverbs

Impulse responses are static representations of real acoustic spaces or hardware units. They capture a snapshot of the reverberation characteristics at a specific moment in time. In contrast, algorithmic reverbs are dynamic in nature, using mathematical algorithms to simulate reverberation based on adjustable parameters such as room size, decay time, and diffusion, allowing you freedom to sculpt your space in a way that individual IR reverbs generally can’t. 

A practical way to understand the difference between IR reverbs and algorithmic reverbs, such as the beautiful and realistic algorithmic offerings from Exponential Audio, is to compare how each behaves when played alongside a printed, polarity-inverted copy of itself. It is worth noting that even some IR reverb plugins include non-linear processing features, such as chorusing or saturation, that will prevent a full null, similar to purely algorithmic reverbs.

Here is an example of the two different reverbs, applied to a recording of hands clapping:

When an IR reverb's output is played alongside a printed, polarity-inverted copy of itself, the two signals cancel each other out completely, because IR reverbs are linear and time-invariant. This is not the case with algorithmic reverbs. Listen to the differences in the examples below. Though faint, you will still be able to hear output from the algorithmic reverb when the copy is inverted, as the variations introduced by the algorithm will not fully null when summed. 
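
If you want to convince yourself of this behavior outside a DAW, here is a small, self-contained sketch with synthetic signals. The "algorithmic" reverb below is just a convolution with a randomly phased gain LFO applied per render, a crude stand-in for the internal modulation a real algorithmic reverb performs:

```python
# Null-test sketch: a convolution (IR) reverb is deterministic, linear, and
# time-invariant, so it cancels completely against a polarity-inverted print of
# itself; a reverb with per-render modulation does not.
import numpy as np

sr = 48000
rng = np.random.default_rng(0)
clap = rng.standard_normal(sr // 10) * np.exp(-np.linspace(0, 8, sr // 10))  # decaying burst
ir = rng.standard_normal(sr // 2) * np.exp(-np.linspace(0, 6, sr // 2))      # toy room IR

wet_a = np.convolve(clap, ir)   # "plugin" render
wet_b = np.convolve(clap, ir)   # "printed" render
print("IR reverb residual:", np.max(np.abs(wet_a + (-1.0) * wet_b)))  # 0.0, a perfect null

def modulated_reverb(x, h, rng):
    """Crude stand-in for an algorithmic reverb: a slow, randomly phased gain LFO
    makes every render slightly different, so prints never null against each other."""
    lfo = 1.0 + 0.01 * np.sin(2 * np.pi * 0.7 * np.arange(len(x)) / sr + rng.uniform(0, 2 * np.pi))
    return np.convolve(x * lfo, h)

wet_c = modulated_reverb(clap, ir, rng)
wet_d = modulated_reverb(clap, ir, rng)
print("Modulated reverb residual:", np.max(np.abs(wet_c + (-1.0) * wet_d)))  # small but nonzero
```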

Workflow Implications

While algorithmic reverbs offer a great degree of flexibility, they come with their own set of workflow implications. First, they may require more pre- and post-roll when punching in changes, taking more time and increasing session file size. Second, unlike static IRs, algorithmic reverbs require additional processing power to generate reverberation in real time. Since IR-based processing tends to be much more computationally efficient than algorithmic processing, knowing when to take advantage of this can improve CPU efficiency and keep your sessions running smoothly, especially as track counts increase and processing power is at a premium.

Crafting Impulse Responses: Multiple Paths to Authenticity

Crafting impulse responses is both an art and a science, offering multiple methods to capture the sonic characteristics of real-world spaces, miking distances, and hardware. Logic's Space Designer and Impulse Response Utility are great tools for creating your own impulse responses and provide excellent documentation to guide you through the process. Let's explore the common techniques for creating IRs and their respective pros and cons:

1. Convolution (Sine Sweep) Method:

This method involves playing a 20 Hz-20 kHz sine sweep through a system or space and recording the result. The recorded sweep is then deconvolved against the source sweep, a mathematical process that collapses the timing and level differences between the two into the start of a new IR file, which resembles a short blip. 

Pros: Offers precise control over the frequency range and amplitude of the impulse, resulting in accurate IRs. Can be particularly effective for capturing linear systems.

Cons: Requires more equipment and setup, and the deconvolution process can be time-consuming. You are also limited by the frequency range and timbral balance of your loudspeaker when recording IRs of spaces.

Tips: Ensure accurate playback and recording equipment, minimize background noise during recording, and use high-quality microphones for optimal results. Keep your recorded sweeps aligned with and the same length as the source sweep.
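
For the curious, here is a minimal sketch of generating an exponential sweep and its matching inverse filter. The sweep length, range, and sample rate are arbitrary example values, and in practice you would convolve the microphone recording of the sweep (rather than the dry sweep used here) with the inverse filter to obtain the IR:

```python
# Exponential sine sweep and its inverse filter - a minimal sketch.
# Convolving the recorded sweep with the inverse filter collapses it into an IR.
import numpy as np
from scipy.signal import fftconvolve

sr = 48000
T = 10.0                      # sweep length in seconds
f1, f2 = 20.0, 20000.0        # sweep range

t = np.arange(int(T * sr)) / sr
L = T / np.log(f2 / f1)
sweep = np.sin(2 * np.pi * f1 * L * (np.exp(t / L) - 1.0))

# Inverse filter: the time-reversed sweep with a decaying amplitude envelope
inv = sweep[::-1] * np.exp(-t / L)

# 'recorded' would normally be the microphone capture of the sweep in the room.
# Using the dry sweep itself should collapse to (approximately) a single spike.
recorded = sweep
ir = fftconvolve(recorded, inv)
ir /= np.max(np.abs(ir))      # the peak marks the arrival of the direct sound
```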

2. Starter Pistol (or Clapper) Method:

Be sure to use a method capable of generating low frequencies; a hand clap may not be the best choice!

A starter pistol or clapper is used to create a sharp impulse in a space or through a hardware unit. The resulting sound is recorded and trimmed so that any silence before and after the impulse is removed. The resulting file is used as an IR.

Pros: Simple and straightforward method requiring minimal equipment. Can capture the acoustic characteristics of a space or hardware unit effectively.

Cons: May lack precision compared to the sine sweep method, particularly in capturing low-frequency information. The recordings require careful attention during further editing to be used as an IR.

Tips: Pay attention to the timing and placement of the impulse source for precise results, and consider using multiple recordings from different positions for a comprehensive capture of the space or equipment's characteristics.
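
Here is a rough sketch of the trimming step, assuming a mono recording and hypothetical file names; the pre-peak padding, decay length, and fade times are arbitrary starting points:

```python
# Trimming a starter-pistol / clapper recording into a usable IR - a sketch.
import numpy as np
from scipy.io import wavfile

sr, rec = wavfile.read("clapper_in_stairwell.wav")   # hypothetical mono recording
rec = rec.astype(np.float64)

peak = np.argmax(np.abs(rec))                # locate the impulse itself
start = max(0, peak - int(0.005 * sr))       # keep ~5 ms before the peak
end = peak + int(3.0 * sr)                   # keep a few seconds of decay
ir = rec[start:end]

# Short fades avoid clicks at the edits, then normalize
fade = int(0.01 * sr)
ir[:fade] *= np.linspace(0, 1, fade)
ir[-fade:] *= np.linspace(1, 0, fade)
ir /= np.max(np.abs(ir))

wavfile.write("stairwell_ir.wav", sr, ir.astype(np.float32))
```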

By understanding the strengths and limitations of each method, audio engineers and sound designers can choose the most appropriate approach for capturing impulse responses that accurately represent the desired sonic characteristics. Whether using the precision of the convolution method or the simplicity of the starter pistol method, crafting IRs opens up a world of possibilities for creating immersive and authentic audio experiences.

Practical Applications: Harnessing Inverted Impulse Responses for Room Correction

Impulse responses can play a crucial role in system tuning and room correction, offering a way to correct both the frequency response and phase response. For more information on system tuning and room correction, you can check out my previous blog posts on the subject in Understanding Mix Room Calibration: Part 1 and Part 2. Here's a brief overview of how inverted impulse responses are utilized for this purpose.

Using IRs for Room Correction

Room correction involves measuring the acoustic properties of a listening space and applying corrective measures to compensate for anomalies in the system's response. Inverted IRs can be employed to mitigate the effects of room reflections and resonances in ways that simple EQ adjustments cannot; however, the technique is not without its pitfalls.

Ø (often read as the Greek letter phi, Φ) is commonly used to denote the polarity inversion control on EQs and consoles. Commonly mis-referenced as the "phase invert" button, the "polarity invert" button merely swaps the positive and negative halves of the waveform without affecting timing, whereas "phase invert" implies a shift in timing.

Volume Inversion

By inverting the amplitude of an impulse response and applying it to an audio signal, room correction systems can effectively cancel out the reflections caused by room boundaries. This helps to equalize the sound, resulting in improved clarity and detail in audio reproduction.

Phase Inversion

In addition to volume inversion, phase inversion techniques can be used to address phase discrepancies introduced by room reflections. Phase inversion may cause audible pre-ringing artifacts unless the accuracy of your fit and inversion is very high (greater than, say, 60 dB), or unless you use a program such as FIR Designer, which lets you choose which filters are linear phase and which are minimum phase, allowing you to reduce pre-ringing by using minimum phase in the low end, where pre-ringing is most noticeable.
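
To make the inversion idea concrete, here is a rough sketch of deriving a correction filter by inverting a measured room IR in the frequency domain with regularization. This is an illustration of the principle only, not the method used by any particular product, and the tap count and regularization amount are arbitrary choices:

```python
# Regularized inverse of a measured room IR - a conceptual sketch, not a
# substitute for dedicated tools like REW, Dirac Live, or FIR Designer.
import numpy as np

def correction_filter(room_ir: np.ndarray, n_taps: int = 8192, reg: float = 0.05) -> np.ndarray:
    H = np.fft.rfft(room_ir, n_taps)
    # Regularized inverse: ~1/H where the response is strong, limited gain in nulls
    H_inv = np.conj(H) / (np.abs(H) ** 2 + reg)
    h_inv = np.fft.irfft(H_inv, n_taps)
    # Shift so the (non-causal) inverse can be used as an FIR filter; the energy
    # that now sits before the main peak is the pre-response heard as pre-ringing.
    return np.roll(h_inv, n_taps // 2)
```

Note how the regularization term keeps the filter from trying to "fill in" deep nulls, which, as discussed elsewhere in this series, are better addressed with treatment and positioning than with gain.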

Showcase of Software Solutions for Room Correction Utilizing IRs

Several software solutions are available for room correction that utilize inverted impulse responses to correct room acoustics:

Sonarworks Reference: Sonarworks Reference offers a user-friendly interface for measuring room responses and generating corrective filters based on inverted impulse responses. It provides real-time monitoring and correction for headphones and speakers, allowing users to achieve a more accurate and consistent listening experience.

Dirac Live: Dirac Live utilizes advanced room correction algorithms to optimize the sound reproduction in listening environments. It measures room acoustics with precision and generates correction filters based on inverted impulse responses, resulting in improved clarity, imaging, and tonal balance.

Room EQ Wizard (REW): Room EQ Wizard is a powerful room measurement and correction tool that supports a wide range of measurement techniques and analysis tools. It allows users to measure room responses, generate corrective filters, and visualize room acoustics using inverted impulse responses and other measurement data.

By utilizing these software solutions, audio enthusiasts can effectively harness inverted impulse responses for room correction, achieving more accurate and faithful sound reproduction in their listening spaces.

Conclusion

Impulse responses are a crucial part of the modern audio engineer's and sound designer's toolbox, offering a versatile option for capturing, recreating, and manipulating the sonic characteristics of real-world spaces and audio hardware. Throughout this exploration, we've covered the many uses of impulse responses, uncovering their importance to sound design as well as their practical applications in audio reproduction.

From faithfully recreating the reverberation of real spaces to creating experimental effects, IRs provide unparalleled flexibility and precision in shaping soundscapes. Their ability to capture the intricate details of acoustic environments and hardware units enables audio engineers and sound designers to achieve immersive and authentic audio experiences.

It's also essential to recognize the limitations of impulse responses, particularly in capturing nonlinear systems. By understanding the principles of linearity and leveraging the appropriate methods for crafting and utilizing IRs, audio professionals can optimize their workflow efficiency and achieve more consistent and faithful sound reproduction.

As we conclude our journey through the world of impulse responses, I invite you to continue exploring and experimenting with IRs in your audio projects. Whether you're a seasoned professional or a budding enthusiast, there's always more to learn and discover in the realm of audio engineering. Embrace the power of impulse responses and unlock new dimensions of creativity and sonic fidelity in your productions.


Ready to elevate your project with expert sound design? Let's bring your vision to life! Explore my services or contact me today to discuss your project requirements. Thank you for visiting sameliaaudio.com. Stay tuned for more insights and analysis on cinematic storytelling through sound.


IMPERSONATE! HOW TO SOUND LIKE A DALEK: A CREATURE SOUND DESIGN TUTORIAL

I dive into the creature sound design of the Dalek voice. Learn from the unique vocal performances and effects that characterize these fan favorite foes to craft your own creature sound designs.

Introduction

With the recent announcement from the BBC that upcoming episodes and specials of Doctor Who will be exclusively available for streaming on Disney Plus, fans have eagerly awaited the return of the beloved Time Lord on May 10th. 

While we can't hop into a TARDIS to watch it right away, let's pass the time by diving into some creature sound design and exploring the iconic villains of the series, the Daleks.

These relentless extraterrestrial adversaries have been menacing the Doctor and captivating audiences since their introduction in 1963. With their jarringly distorted and robotic delivery, Daleks have become synonymous with fear and intimidation, embodying the beloved blend of sci-fi suspense and charm the series is known for.

In this tutorial, we'll journey through the evolution of the Dalek voice, tracing its origins from actors Peter Hawkins and David Graham to the contemporary techniques of Nicholas Briggs. 

  • We'll dissect the key components of vocal performance

  • Distortion and ring modulation effects

  • And reveal the secrets behind the Daleks' chilling vocalizations.

But our exploration doesn't stop there. 

Insights gained from breaking down such a timeless creature sound design can offer up inspiration for us to craft our own unique creature voices. The possibilities are boundless, and armed with the techniques explored in this tutorial, you'll be equipped to elevate your sound design to unparalleled levels.

So, fellow Whovians, let's embrace this challenge, unleash our creativity, and allow our imaginations to soar. Together, let's exterminate mediocrity and usher in a new era of sound design excellence!

Brief Overview of Daleks and their Iconic Voice

Daleks are a fictional extraterrestrial race of time-travelling, mutant-piloted robots from the British science fiction television series "Doctor Who." Created by writer Terry Nation, they first appeared in the series in 1963 and have been the Doctor's most iconic and formidable adversaries ever since. 

The Daleks' main form of communication is a harsh, metallic voice, which is instantly recognizable and has become one of their defining features, and that's what we're going to focus on.

Insights to Be Gained from Dalek Voice Design

With distinctive speech patterns and (near) absence of emotion, the Dalek voice underscores their uncompromising nature and stark contrast to the compassionate Doctor, solidifying their status as one of his greatest adversaries in the Doctor Who universe. 

Their catchphrase, "Exterminate!", has become synonymous with their ruthless nature and remains one of the most iconic lines in science fiction.

Studying the design of the Dalek voice can offer valuable insights for sound designers working on other creature sound designs:

Understanding of Emotional Impact: Analyzing how the Dalek voice evokes fear and intimidation can help sound designers grasp the emotional impact of different vocal elements. They can then apply this understanding to evoke specific emotions in their own creature designs, tailoring vocal characteristics to suit the intended effect.

Exploration of Vocal Modulation Techniques: Examining the techniques used to create the Dalek voice provides sound designers with a toolkit for manipulating vocalizations to achieve desired effects. By experimenting with similar techniques, they can create unique and memorable vocal signatures for other creatures, enhancing their distinctiveness and impact.

Consideration of Narrative Context: Breaking down how the Dalek voice reinforces the creatures' characterization and narrative role highlights the importance of aligning sound design choices with storytelling objectives. Sound designers can use this insight to ensure that creature vocalizations not only sound compelling but also serve to enrich the narrative and deepen audience engagement.

Importance of Actor Performance: The delivery of the Dalek voice by actors adds another layer of complexity to its design. Sound designers can learn from the skillful execution of the Dalek voice by actors, understanding how nuances in vocal performance contribute to the overall characterization of the creatures. This insight emphasizes the collaborative nature of sound design, where vocal performances and technical elements work together to bring creatures to life and enhance the audience's connection to the narrative.

Understanding the Dalek Voice

The iconic Dalek voice has a rich history dating back to its inception. Initially developed by actors Peter Hawkins and David Graham, the Dalek voice entails a robotic delivery that relies on changing pitch and speed in rather unsettling and inhuman ways to emote (nearly exclusively emotions akin to hatred). 

While the processing techniques have evolved over the decades, the core recipe developed at the BBC Radiophonic Workshop by sound designer Brian Hodgson largely remains the same: ring modulation with a 30 Hz sine wave and midrange-y distortion, which contribute to the distinctive Dalek sound. 

Even the current voice of the Daleks, Nicholas Briggs, uses a Moog Moogerfooger MF-102 Ring Modulator to process his voice. However, it's important to note that the unique sound of the Daleks is not solely a result of the equipment used, but also how Briggs acts while speaking through the ring modulator effect.

This highlights the collaborative nature of sound design, where technical elements and actor performances work hand in hand to bring creatures like the Daleks to life. Capturing a robotic yet menacing performance is essential for authentically recreating the Dalek voice, and performance without heavy effects can still read as Daleks. 

For example, director Graeme Harper deliberately used less distortion for their voices in the 1985 serial “Revelation of the Daleks,” showcasing how the actor’s interpretation is pivotal in capturing the essence of Daleks. Without the right performance, effects alone often fail to evoke the intended emotion or character attributes, underscoring the importance of finding the right balance between performance and technical elements in creature sound design.

Tools and Software

Let’s explore the practical aspects of bringing a Dalek voice to life. By understanding the tools and techniques at our disposal, we can effectively uncover the chilling and unmistakable essence of the Dalek voice, setting the stage for a new era of sound design excellence. Let's dive in!

Selecting the Right Microphone

When it comes to capturing the Dalek voice, the microphone choice is not too crucial due to the lo-fi nature of the effect, and you will find that most microphones will suffice. I’ll be offering a comparison between two popular options:

Shure SM57: Known for its durability and versatility, the SM57 is a cardioid dynamic microphone that can handle high sound pressure levels, making it suitable for close mic recording of intense performances. While it does not have a high pass filter, the mic naturally rolls off low (and “air”) frequencies, which we will be cutting for our design anyway.

AKG C414 XLS (cardioid, 40 Hz high-pass enabled): This large-diaphragm condenser microphone is a veritable Swiss Army knife in the studio, offering a detailed and transparent sound that competes with mics many times its price. I'll be using the selectable polar pattern in cardioid for better isolation and engaging the selectable high-pass filter at 40 Hz to reduce low-frequency rumble.

Voice Modulation Software Options

To achieve the distinctive Dalek-like effects, you'll need the right tools and software. 

DAW or Audio Editor: Software like Pro Tools provides a comprehensive platform for recording and editing your vocal performances. Its advanced features allow for precise control over your sound design process.

Distortion: Experimenting with distortion is key to achieving the harsh and metallic timbre of the Dalek voice. I will be comparing two options included with Pro Tools (SansAmp and AIR Distortion), but you may also try reamping your vocals through a guitar amp or passing them through a pedal for unique grit and texture.

EQ for Sculpting Tone: Enhancing the mid frequencies and limiting the bandwidth of your vocal recordings is essential for giving your voice that robotic character. The stock EQ plugins are all we'll need here; however, plugins such as FabFilter Pro-Q 3 have additional features that can help you reach your desired tone more quickly and consistently. 

Ring Modulation: One of the signature elements of the Dalek voice is ring modulation. I’ll be comparing the stock plugin Sci-Fi to iZotope’s VocalSynth 2, which both offer customizable ring modulation effects to transform your vocal performances into something truly alien. It is worth noting that plugins such as VocalSynth 2 are “All-In-One” options, and capable of handling this type of design all on their own.

Voice Modulation Techniques for Achieving Dalek-like Effects

Achieving the perfect Dalek-like effect requires a combination of recording techniques and post-processing. Here's a basic outline to get you started:

Record Your Lines: Begin by recording your vocal lines with the desired intensity and emotion, keeping in mind the malicious character of the Daleks.

Prepare Your Recording: Set clip gain to workable levels and remove unwanted sounds so that we do not amplify them when applying the effects.

Add Your Effects in Series with Rough Settings: Apply distortion, EQ, and ring modulation effects to your vocal recordings in series, starting with rough settings to establish the foundation of the Dalek sound.

Tweak Settings to Address Problems: Listen back to your recordings and tweak the settings of each effect to address any issues or inconsistencies. Experiment with different parameters until you achieve the desired Dalek-like effect, adjusting the intensity and balance of each effect as needed.
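
If it helps to see the whole chain in one place outside of a DAW, here is a minimal sketch in Python. The file name, drive amount, tanh clipper, and Butterworth band-pass are my own stand-ins for the plugins discussed above, not the exact settings listed later in this article, and a mono recording is assumed:

```python
# The Dalek chain in miniature: distortion -> band-limiting EQ -> 30 Hz ring mod.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

sr, voice = wavfile.read("dalek_line.wav")          # hypothetical mono recording
x = voice.astype(np.float64)
x /= np.max(np.abs(x))

# 1. Distortion: hard-ish clipping for a harsh, gritty timbre
drive = 8.0
x = np.tanh(drive * x)

# 2. EQ: band-limit roughly 80 Hz - 10 kHz to tame rumble and sizzle
sos = butter(2, [80, 10000], btype="bandpass", fs=sr, output="sos")
x = sosfilt(sos, x)

# 3. Ring modulation: multiply by a 30 Hz sine
t = np.arange(len(x)) / sr
x *= np.sin(2 * np.pi * 30.0 * t)

x /= np.max(np.abs(x))
wavfile.write("dalek_line_processed.wav", sr, (x * 32767).astype(np.int16))
```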

Recording Your Voice

Creating a suitable recording environment is crucial for capturing the Dalek voice effectively. Here are some tips to ensure optimal sound quality:

Setting Up Your Recording Space for Optimal Sound Quality

Close Mic with Pop Filter: Position your microphone close to your mouth and use a pop filter to minimize plosive sounds and ensure clear, crisp recordings.

Minimize Room Noise: Eliminate background noise by speaking from approximately 1 balled fist (3 to 4 inches) away from the microphone. This close proximity helps to capture a clean signal without unwanted ambient sounds.

Angle Capsule Off Axis: Position the microphone capsule slightly off-axis to reduce the impact of breath noises and sibilance, resulting in smoother recordings. Experiment with the exact positioning and angle so as to minimize plosive sounds (such as "p" and "b") and excessive sibilance (such as "s" and "sh"). Often positioning the mic slightly above and to the corner of your mouth angled down will yield the best results.

Room Treatment: Consider implementing room treatment techniques as discussed in previous blog posts (referenced as Part 1 and Part 2) to further improve the acoustics of your recording space.

Importance of Experimenting with Vocalizations and Effects

Finding the perfect Dalek sound requires experimentation and exploration. 

Don't be afraid to try different vocalizations, accents, and delivery styles to discover what works best for your interpretation of the Dalek voice- have fun with it! By experimenting with various vocal techniques and effects, you can refine your performance and achieve a truly immersive and convincing Dalek sound.

Similarly, experiment with your effects: 

  • Adjust the pitch: it helps when there are many characters but one voice actor

  • Utilize different distortion: find the right balance between color and intelligibility

  • Manipulate the EQ settings: the vocal futzes need to fit with the dialogue tracks 

All of these are key elements in shaping the character of the Dalek voice. 

By applying these techniques strategically, you can create a creature sound design that captures the essence of the Daleks' menacing and emotionless tone, and tailor the effects to your unique voice.

Practice Makes Perfect

Before diving into recording your Dalek voice, take the time to practice your line delivery. 

Whether you're performing the voice yourself or working with a voice actor, practicing line delivery ensures a more polished and convincing performance. 

Provide direction to your voice actor and consider doing alternate reads to explore different interpretations. If recording lines to video, use a cue track known as a "3 pop" to guide ADR cue timing: a 3-second countdown to the cue, marked by one frame of a 1 kHz sine wave at each of the 3 seconds preceding the line, helping to synchronize dialogue with on-screen action. Feel free to download and use the 3 pop below in your ADR and VO projects. Remember: 3… 2… 1… (line).
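
If you would rather roll your own cue track, here is one possible way to generate it. The 24 fps frame length and the exact beep placement (one pop at each of the 3 seconds before the line) are assumptions on my part, so adjust them to your project's frame rate and spec:

```python
# Generating a simple ADR cue track ("3 pop") - a sketch.
import numpy as np
from scipy.io import wavfile

sr, fps = 48000, 24
frame = int(sr / fps)                         # one video frame's worth of samples
track = np.zeros(4 * sr)                      # 3 s countdown plus 1 s where the line starts

t = np.arange(frame) / sr
beep = 0.5 * np.sin(2 * np.pi * 1000.0 * t)   # one frame of 1 kHz tone

for sec in (0, 1, 2):                         # pops at -3, -2, and -1 seconds before the line
    track[sec * sr : sec * sr + frame] = beep

wavfile.write("three_pop_48k.wav", sr, (track * 32767).astype(np.int16))
```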

Suggestions for Recording and Listening Back to Evaluate Progress

As you record and refine your Dalek voice, take breaks to listen to reference material and evaluate your progress. Listening to similar voices or performances can provide valuable insights and inspiration for tweaking your effects and delivery. Use headphones or studio monitors to listen closely to your recordings and make adjustments as needed to achieve the desired Dalek-like effect.

Editing and Post-Processing

Once you've captured your Dalek voice recordings, it's time to import them into your editing software for post-processing. Follow these steps to prepare your audio for further manipulation.

Preparing Your Raw Voice Recordings

Normalize Audio: Adjust the volume levels of your recordings to ensure consistency and optimal loudness. Aim for peak levels between -15 and -6 dBFS to leave enough headroom for additional processing. 

Once normalized, use tools such as WLM Plus, iZotope Insight, or Nugen LM-Correct to check your loudness in LUFS (Loudness Units relative to Full Scale). We will want to leave a little room for when we apply distortion, which increases loudness, so to be safe aim for roughly -30 LUFS, or about 3-6 LU below your target loudness spec (a quick programmatic way to check this is sketched after this list).

Strip Silence or Gate: Remove any unwanted silence or background noise from your recordings without cutting off beginnings/endings of words. Use strip silence and/ or gating techniques to speed up this process. Strip silence offers more control over the removal of silence but may require more manual adjustment, while gating is quicker but less precise. If you elect to use gating, place the gate before the rest of the effects in this signal chain.
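
As referenced above, here is a quick programmatic loudness check. It uses the third-party pyloudnorm and soundfile libraries as stand-ins for the metering plugins named earlier, and the file name and target value are hypothetical:

```python
# Checking integrated loudness (LUFS) of a prepared recording - a sketch.
import soundfile as sf        # pip install soundfile pyloudnorm
import pyloudnorm as pyln

data, sr = sf.read("dalek_line_prepped.wav")     # hypothetical file, float samples
meter = pyln.Meter(sr)                           # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)
print(f"Integrated loudness: {loudness:.1f} LUFS")

target = -24.0                                   # e.g. a broadcast-style target
print(f"Distance below target: {target - loudness:.1f} LU")
```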

Step-by-Step Guide to Post-Processing Techniques for Refining the Dalek Voice

Once your recordings are prepared, follow these steps to refine the Dalek voice using post-processing techniques:

First Apply Distortion: Enhance the harsh and metallic timbre of the Dalek voice by applying distortion effects. Try the following rough settings as a starting point and tweak to achieve the desired intensity and texture:

SansAmp:

PRE: 0

BUZZ: 100

PUNCH: 100

CRUNCH: 100

DRIVE: 100

LOW: 66

HIGH: 100

LEVEL: 30

AIR distortion:

DRIVE: 15dB

PRE SHAPE: 0%

HIGH CUT: 20kHz

MODE: WARP

OUTPUT: 100%

DC BIAS: 0%

THRESHOLD: -10dB FS

MIX: 100%

Next Apply EQ: Shape the tonal characteristics of your voice recordings by applying EQ adjustments. Focus on band limiting the recording to reduce mud and sizzle introduced by the distortion, as well as boosting the mid-range frequencies to emphasize the robotic quality of the Dalek voice, making subtle adjustments to other frequency bands as needed. Try these settings as a starting point:

EQ:

HPF: 12dB/oct @80Hz

LPF: 12dB/oct @10kHz

Lastly Apply Ring Modulation: Add the signature 30 Hz ring modulation effect to your voice recordings to further enhance their otherworldly quality. Try the following rough settings as a starting point and experiment with adjusting settings to achieve the desired level of modulation intensity:

Sci-Fi:

EFFECT: RINGMOD

FREQUENCY: 30 Hz

DEPTH: 100%

MODULATION: LFO

RATE: 0.1Hz

DEPTH: 0.0%

VocalSynth:

VOCODER: VINTAGE

SCALE: 50 SHIFT: 0 LEVEL: 33

OSC 1: SYNC SAW GAIN: 100 PITCH: 0 MOD: 0 SHAPE: 0 MOD: 0

OSC 2: SQUARE GAIN: 50 PITCH: +1 MOD: 100 SHAPE: 100 MOD: 100

NOISE GAIN: 100 LFO: SELECTED CRUSH: 0 MOD: 0 LFO: SAMPLE RATE: 5.0

POLYVOX

FORMANT: 0 HUMANIZE: 50 CHARACTER: 10 LEVEL: 100

RING MOD

FREQ: 30 LFO: 0.1 DEPTH: 0 MIX: 100

Results

If everything has gone well, we should end up with something close to the real deal. 

The processed recordings should capture the chilling and unmistakable essence of the Dalek voice, featuring a menacing and emotionless tone with a harsh metallic timbre that instills fear and intimidation. 

  • Distortion effects add character through grit and texture

  • EQ adjustments sculpt the tonal characteristics to emphasize the robotic quality and ensure the vocal futzes fit with the dialogue tracks 

  • And the ring modulation effect adds an otherworldly dimension. 

As you can hear below, the plugins you use can lead to very different flavors of the same general sound- give it a try and let the extermination begin!

Worldize Your Dialogue To Enhance The Overall Sound

To further enhance the Dalek voice and integrate it seamlessly into your audio production, you will need to worldize your dialogue. Use a send to apply reverb to your Dalek voice, ensuring it matches the reverb used for the other elements in your scene, such as Foley. 

This helps to create a cohesive sonic environment and adds depth to your overall sound design.

Importance of Persistence and Patience in Achieving Desired Results

Creating the perfect creature sound design takes time, experimentation, and patience. Don't be discouraged by setbacks or initial challenges. 

Try different combinations or orders of plugins, and try backing off on settings intensity to avoid overdoing it, an especially easy pitfall of using distortion effects. Persistence is key, so keep refining your approach and remain patient as you work towards achieving your desired results.

Applying Newly Acquired Dalek Voice Design Skills to Your Own Creature Voice Designs

Armed with the knowledge and techniques gained from this tutorial, you're now ready to embark on your journey of creating immersive creature voice designs. The skills you've acquired in crafting the Dalek voice can be applied to a wide range of projects, allowing you to unleash your creativity and elevate your sound design to new heights. 

Embrace the challenge, experiment with different vocalizations, and apply your newfound Dalek voice design skills to your own projects. Exterminate mediocrity and take your sound design to new heights!


Ready to elevate your project with expert sound design? Let's bring your vision to life! Explore my services or contact me today to discuss your project requirements. Thank you for visiting sameliaaudio.com. Stay tuned for more insights and analysis on cinematic storytelling through sound.


UNDERSTANDING MIX ROOM CALIBRATION: PART 2

Dive deeper into mix room calibration with Part 2 of the series.

Welcome back to part 2 of our exploration into mix room calibration! In part 1, I covered the importance of calibrating your mix room and the significant role that shaping the acoustic environment plays in achieving accurate sound reproduction. I discussed the importance of maintaining a calibrated reference level, the influence of frequency perception as described by the Fletcher-Munson curve, and the fundamental principles of room acoustics, including the impact of room modes on sound propagation. I also explored the various acoustic treatment options you can use to combat these issues, from broadband absorption to diffusion and bass traps.

Björn does not have ears

Now, in part 2, let’s continue our journey by diving into key concepts in mix room calibration. I'll explore essential tools and equipment for the calibration process, such as measurement microphones and analysis software. Additionally, I'll delve into the calibration process itself, including initial measurements, analysis of measurement data, and implementation and testing of calibration adjustments.

Furthermore, I'll discuss maintenance and future considerations for ensuring optimal mix room calibration over time, including regular room checks and updating calibration as needed to accommodate future equipment changes.

Join me as I unravel the intricacies of mix room calibration and empower you to create an ideal audio mixing environment where creativity can flourish!

Key Concepts in Mix Room Calibration

In this section, we'll outline some essential concepts that form the foundation of mix room calibration. Understanding these principles is helpful for planning your room treatment.

Bolt-Area Reference Diagram

The Bolt area is a diagram that was used to assess the distribution of room modes based on the dimensions of a chosen room. With a little bit of math, if your room dimensions fell within the Bolt area of the diagram, you could be confident that the room acoustics would be favorable. Nowadays, it is much easier to simply use software!

Schroeder Frequency

The Schroeder frequency marks the transition between the modal and diffuse sound field within a room. Understanding this frequency is essential for determining the appropriate balance between absorption and diffusion treatments. 

I definitely recommend using software for this, as the math is more complex; however, for a typical room this transition frequency falls between 100-300 Hz. It's important to note that corrective EQ cannot fix modal problems below the Schroeder frequency throughout the room, though it can be an effective option at the listening position (only).
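
For reference, the commonly used approximation is:

$$ f_S \approx 2000 \sqrt{\frac{RT_{60}}{V}} $$

with RT60 in seconds and the room volume V in cubic meters. As a rough worked example, a 50 m³ room with an RT60 of 0.3 s lands at about 2000 × √0.006 ≈ 155 Hz, right in the 100-300 Hz range mentioned above.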

Reverberation Time (RT60)

Orchestral halls and cathedrals can have extremely long RT60 values, sometimes exceeding 10 seconds!

Reverberation time, often denoted as RT60, measures the decay rate of sound within a room, or how quickly sound reduces by 60 dB after the source stops. Achieving the right balance of reverberation is critical for creating a conducive mixing environment with accurate sound reproduction. 
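
If you want a rough estimate before you measure, the classic Sabine equation (in metric units) relates RT60 to room volume and total absorption:

$$ RT_{60} \approx \frac{0.161\, V}{\sum_i S_i\, \alpha_i} $$

where V is the room volume in cubic meters, the S_i are surface areas in square meters, and the α_i are their absorption coefficients. Measured values from REW will always beat the estimate, but it is useful for sanity-checking how much absorption a target RT60 implies.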

In mixing rooms, a good value to aim for is 0.15-0.3 seconds, and for recording spaces up to 0.5 seconds is acceptable. Often, a "live half/ dead half" setup is employed, offering a best-of-both-worlds, efficient use of space and resources. 

The live half offers a more inspiring and natural sounding space for recording, where the room's ambience can enhance performances. It also provides a different perspective for listening to mixes, allowing engineers to hear how they translate to a more reflective environment like one you would find in a typical home. 

The dead half provides an accurate and controlled listening environment for critical listening and mixing tasks, allowing engineers to make precise decisions with minimal coloration from the room.

Tools And Equipment For Acoustic Analysis

In order to get the most out of your mix room calibration, a few simple tools can be used to accurately measure and adjust your playback system. Here, we'll explore the essential tools and equipment used in the calibration process.

A Tape Measure

This is a fundamental tool for assessing room dimensions and speaker/listener positioning. Accurate measurements are crucial for implementing precise acoustic treatments and optimizing the listening environment.

AMROC 

The AMROC room mode calculator is a free-to-use website offering predictive analysis that helps determine optimal speaker and listener placement based on room dimensions and acoustic properties. Simply plug the measurements from your tape measure into the form and AMROC will provide you with a wealth of useful information and calculations to plan your setup and room treatment. 

A Calibrated Measurement Microphone

This is essential for capturing accurate frequency response data within the listening environment. These omnidirectional microphones are specially designed and calibrated to ensure precise measurements across the audible frequency spectrum. 

I suggest miniDSP's UMIK-1 for its combination of features, quality, price, and ease of use. If you elect to go with an XLR measurement microphone, you will need an interface with adequate inputs and cabling to set up a loopback connection for timing reference, and will need to configure the software accordingly.

Room EQ Wizard (REW)

A waterfall graph is named so because its visual representation resembles the cascading flow of a waterfall. In this type of graph, data points are shown sequentially, with each point stacked on top of the previous one, creating a cascading effect similar to how water flows down in a waterfall. The bigger the stack, the longer the resonance.

This is a powerful, free software tool used for analyzing room acoustics and generating measurement data. REW provides comprehensive analysis tools, including frequency response graphs, waterfall plots, and spectrograms, to aid in the calibration process. 

Alternatives such as Dirac Live or the live sound software Smaart can also be used; however, they are often costly and offer minimal benefits over REW for our purposes. 

Programs such as Sonarworks or Audyssey are used to correct the response of your playback system, but they do not provide all of the information needed to plan room treatment. They are convenient, however, and newer monitors often include this functionality built in, eliminating the need for external signal processing.

Calibration Process

Calibrating a mix room is a systematic process that involves several key steps to ensure accurate and consistent sound reproduction. This section explores each step of the calibration process in detail.

Initial Measurements

The calibration process begins with taking comprehensive measurements of the room's acoustic properties using calibrated measurement microphones and specialized software tools like Room EQ Wizard (REW). These measurements provide valuable data on room modes, frequency response, and reverberation characteristics, which will help us determine how to approach our room treatment and positioning.

Analysis of REW Data

Once measurement data is collected, it is analyzed using software tools like REW to identify acoustic anomalies such as peaks, nulls, and resonances that may affect the accuracy of sound reproduction. This analysis guides subsequent calibration adjustments. 

REW’s website provides a wealth of in-depth information and guides on how to set up, utilize, and interpret your measurements.

Adjustment of Speaker and Listener Positions

Based on the analysis of measurement data, adjustments are made to the positioning of speakers and listeners within the room to optimize the listening environment and minimize acoustic anomalies. This may involve repositioning speakers to achieve better stereo imaging and adjusting the listener's position for optimal listening experience.

Treatment Recommendations for Room Acoustic Issues

Based on the analysis of measurement data and calibration adjustments, decisions can be made about implementing acoustic treatments to address specific room acoustic issues such as reflections, reverberation, and modal resonances. Many companies offer consultation services to assist with treatment recommendations, either for free or for a fee. However, starting with broadband absorption at your first reflection points is beneficial the majority of the time and is a pretty simple fix to implement.

Subtle EQ to Tame Peaks

Use an outboard 32-band EQ or the output DSP of your interface

Subtle equalization adjustments may be applied to tame frequency peaks and improve the overall frequency response at the listening position only. Aim for no more than ±6 dB of correction, and avoid touching frequencies above 1 kHz. 

It's important to note that excessive EQ should be avoided, as it can introduce additional phase and time-domain issues. Uneven frequency response is a much more apparent problem, however, and judicious EQ can help here. But EQ cannot effectively mitigate nulls: these problems occur in the time domain and must be fixed with room treatment or avoided through your positioning in the room.

Implementation and Testing

After initial calibration adjustments are made, it's essential to thoroughly test the changes and refine the calibration further as needed. This section outlines the iterative process of implementation, testing, and fine-tuning in mix room calibration.

Iterative Calibration Process

The calibration process often involves multiple iterations of measurement, adjustment, and testing to achieve optimal results. Averaging multiple measurement sweeps and applying 1/3 octave smoothing can provide a clearer picture of room response and aid in decision-making.

Listening Tests

Listening tests are crucial for evaluating the subjective impact of calibration adjustments on sound quality and perceived tonal balance. Engineers should listen to reference tracks and audio material across various genres to assess the effectiveness of the calibration. I find that your favorite songs are the best to use, as you will be highly sensitive to the impact your changes make; just make sure to use high-quality lossless files (WAV/AIFF) and avoid lossy files or streaming.

Maintenance and Future Considerations

Get your studio ready for warmer weather!

Maintaining optimal mix room calibration is crucial for ensuring consistent and accurate sound reproduction over time. Additionally, considering future equipment changes and advancements is essential for staying ahead in the ever-evolving world of audio mixing.

Regular Room Checks

Regular room checks should be conducted to monitor changes in acoustic properties over time. Factors such as temperature, humidity, and equipment aging can impact the effectiveness of calibration and may require periodic adjustments. 

These checks help ensure that the mix room maintains its optimal acoustic characteristics. I recommend checking the performance of your system each season change at minimum. Some engineers will check at the start of each session, especially in a shared studio space or when using an analog desk, but this may be overkill if your setup is digital and you do not change anything between sessions.

Updating Calibration as Needed

As equipment and technologies evolve, it's important to update mix room calibration accordingly to take advantage of advancements in audio reproduction and acoustic treatment. This may involve recalibrating speaker and listener positions, updating acoustic treatments, or implementing new software tools for analysis and measurement. 

Staying proactive in updating calibration ensures that the mix room remains optimized for the latest standards and technologies. However, there is something to be said for becoming intimately familiar with your setup over time.

Integration with Future Equipment Changes

Anticipating future equipment changes, such as upgrades or replacements, is crucial for maintaining compatibility and performance consistency in the mix room environment. Calibration adjustments may be necessary to accommodate changes in speaker characteristics, amplifier settings, or signal processing requirements. Integrating new equipment seamlessly into the existing calibration ensures a smooth transition and minimal disruption to the mixing workflow.

Conclusion

Mix room calibration is the cornerstone of achieving optimal audio mixing environments, ensuring accuracy, consistency, and reliability in sound reproduction. Achieving an optimal audio mixing environment requires a combination of careful calibration and attention to detail. By understanding the above principles and utilizing the right tools and techniques, anyone can create a space where creativity can thrive. This final section summarizes the importance of mix room calibration and provides some concluding thoughts on achieving an optimal audio mixing environment.

Summary of Importance of Mix Room Calibration

A well-calibrated mix room ensures that audio mixes translate accurately across various playback systems and environments, from studio monitors to headphones to consumer-grade speakers. With a properly calibrated mix room, engineers can make decisions more confidently and efficiently, leading to faster workflows and better results.

Additionally, tuning the system to personal preference is an important aspect of mix room calibration, as each engineer may have unique preferences and requirements based on their workflow and sonic goals. I prefer a slight "house bump": an intentional boost in the low-frequency range of my system's response curve to achieve a desired tonal balance. I find it helps my work translate better, especially to larger systems.

In conclusion, mix room calibration is not just a technical necessity but a creative imperative. By investing time and effort into calibration, you can unlock the full potential of your audio environment and elevate your craft.


Ready to elevate your project with expert sound design? Let's bring your vision to life! Explore my services or contact me today to discuss your project requirements. Thank you for visiting sameliaaudio.com. Stay tuned for more insights and analysis on cinematic storytelling through sound.


UNDERSTANDING MIX ROOM CALIBRATION: PART 1

Learn how to optimize room acoustics and fine-tune your playback system for top-notch sound quality in part 1 of this blog series

Accurate reproduction of audio is crucial to making critical sound design and mixing decisions. Simply upgrading to higher-end monitors is not enough to see much improvement; the acoustic environment and setup of your playback system are an equally important part of the equation, and treating problems here is often the most cost-effective way to improve your monitoring and ensure consistency and accuracy in your work. 

I’m going to dive into the many ways you can improve your room acoustics and calibrate your playback system.

Unleash Your Full Potential With Mix Room Calibration

My studio assistant, Björn

Mix room calibration refers to the process of optimizing the acoustic properties of a room to create an ideal environment for audio mixing. This involves adjusting various factors such as speaker placement, room acoustics, and equipment settings to achieve accurate and reliable sound reproduction.

It transcends technical necessity, emerging as a creative imperative. By dedicating time and effort to calibration, one can unleash the full potential of their audio environment, ultimately elevating their craft.

Ensuring accurate translation of audio mixes across various playback systems and environments, from studio monitors to headphones to consumer-grade speakers, is the hallmark of a well-calibrated mix room. 

This calibration empowers engineers to make decisions confidently and efficiently, resulting in streamlined workflows and superior outcomes. Moreover, tailoring the system to individual preferences is integral to mix room calibration, as each engineer may have distinct sonic goals and workflows. 

For instance, I personally utilize a subtle "house bump" in the low-frequency range to achieve a desired tonal balance in my home studio. I find this helps my work translate better, particularly to larger systems. 

Why Calibration Plays A Key Role in Audio Mixing

Let's look deeper into why calibration plays such a pivotal role in the intricate art of audio mixing.

Calibrated Reference Level

One key aspect of mix room calibration is maintaining a consistent listening volume, or reference level. 

Mixing at a consistent reference level not only ensures uniformity in the mixing process but also helps prevent hearing loss and fatigue by using a known, safe playback level, which is paramount in a profession where precise hearing is essential for quality work.

Resonant behavior can be less obvious at quieter volumes because it is masked by ambient noise, but it is still present. While it may be tempting to listen at low levels, the behavior of resonance in the listening environment is (for our purposes) linear, meaning the ratio of direct sound to resonant sound stays the same regardless of playback loudness. This is not strictly true, but it is a reasonable simplification.

Fletcher-Munson Curve

Another important consideration in mix room calibration is accounting for the non-linear nature of human hearing. 

The Fletcher-Munson curve, or equal loudness contour, is a way to show how our perception of loudness differs in relation to frequency. Failing to calibrate playback for reference level can result in misleading perceptions of frequency balance, leading to inaccurate mixes that do not translate to other environments. 

Setting your playback at too low of a level will lead you to overemphasize low and high frequencies.

The THX reference level of 85 dB SPL is intended for larger spaces. For home studios or smaller mix rooms, 78 dB SPL is a better, more comfortable target to work at. Because of the non-linear nature of human hearing, a slight (~1 dB) "house bump" of the low/sub frequencies may help your work translate better.

Understanding the critical role that calibration plays in achieving optimal sound reproduction lays a solid groundwork for our exploration into the intricacies of room acoustics. 

Let's shift our focus to examine the fundamental principles of room acoustics and how they influence the calibration process in creating an ideal audio mixing environment.

Understanding Room Acoustics

Room acoustics play a pivotal role in mix room calibration, as they directly influence how sound behaves within a space. Here, we'll explore the fundamentals of room acoustics, including room modes and common issues that affect mix room calibration.

Explanation of Room Modes

Room modes, also known as standing waves, are resonant frequencies that occur within an enclosed space, such as a room. 

When sound waves bounce off the walls, floor, and ceiling of a room, they can interfere constructively or destructively, resulting in certain frequencies being reinforced while others are canceled out. The frequencies at which this interference occurs depend on:

Sound resonates like ripples on water. Constructive and destructive interference is visible where the ripples intersect.

  • The room's dimensions

  • The location of the listener

  • And the placement of the sound source 

This constructive and destructive interference of sound waves leads to an uneven response that affects the accuracy of audio reproduction and the perceived tonal balance of sound in the room. 

Therefore, managing and minimizing the effects of room modes through acoustic treatment and proper room design is essential for achieving optimal sound quality in audio production, home theaters, and other listening environments.
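
For a simple rectangular room, the modal frequencies can be estimated directly from the dimensions:

$$ f_{(n_x, n_y, n_z)} = \frac{c}{2} \sqrt{\left(\frac{n_x}{L}\right)^2 + \left(\frac{n_y}{W}\right)^2 + \left(\frac{n_z}{H}\right)^2} $$

where c ≈ 343 m/s, L, W, and H are the room dimensions in meters, and the n values are whole numbers (0, 1, 2, and so on). For example, the first axial mode along a 5 m length sits at 343 / (2 × 5) ≈ 34 Hz, which is part of why small rooms struggle most in the low end.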

Room modes affect sound propagation throughout the space and, with it, the performance of the whole playback system; let's explore practical solutions for managing and optimizing room acoustics. The next section covers various acoustic treatment options aimed at addressing the challenges posed by room modes and other acoustic anomalies. 

Acoustic Treatment Options

The techniques we're about to explore for treating room acoustics aren't just confined to professional studios; they're rooted in fundamental principles of acoustics/ physics. 

These principles apply universally, whether you're outfitting a high-end recording facility or setting up a compact home studio. The primary distinguishing factors often revolve around budget constraints and available space. 

Broadband Absorption

Broadband absorption panels, made from materials such as Rockwool or Fiberglass batting, are perhaps the easiest way to improve your room acoustics. 

They are effective at controlling reflections and reverberation within a room down to approximately 100 Hz. 

  • Material density

  • Thickness

  • Air gap behind the panels 

Are factors that impact the effectiveness of absorption treatments and can be used to extend the lower end of the frequency range they can treat. I recommend using batting that is at least 2" thick for best results.

Rockwool and fiberglass batting materials are naturally fireproof and safe to use; however, they are a mild irritant if disturbed, so covering them with acoustically transparent fabric is highly recommended. 

I do not recommend making alternative batting out of materials such as denim. To be safe, commercially available denim batting requires chemical treatments for fireproofing and mold/fungus prevention, and as a result it is less cost effective than fiberglass while offering similar performance to Rockwool.

Considerations for Selecting Broadband Absorption

With a plethora of acoustic treatment options available, it's vital to approach the selection process thoughtfully and to understand the underlying principles guiding our choices, as each treatment method tackles specific acoustic issues.

Here are some helpful things to consider as we explore these acoustic treatment options:

  • Understanding published absorption data, such as ASTM C423 test results and NRC (Noise Reduction Coefficient) ratings, can help in selecting the right materials to address specific acoustic issues throughout the audible frequency range.

  • Ideal materials offer consistent absorption values across their treatment range; otherwise the room can easily become overly deadened, boxy, or even more uneven.

  • Cover DIY panels with acoustically transparent material. I like Guilford of Maine, as it is also a highly durable upholstery fabric. You can test unknown fabrics by blowing through them: if the fabric allows air to pass through easily, so will sound, and the fabric should be acceptable for use.

  • It is important not to overdo it: treating 50% or less of the room's surfaces and focusing on the first reflection points will deliver the best results. A rough way to estimate how much coverage a room actually needs is sketched below.
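The classic Sabine equation gives a ballpark figure for how much absorption a room needs to hit a target decay time. The Python sketch below is a simplified example: the target RT60 and absorption coefficient are assumptions for illustration, it ignores absorption already present in the room (furniture, drywall, people), and the Sabine model itself is only approximate in small, heavily damped spaces.

```python
# Rough sketch using the Sabine equation, RT60 = 0.161 * V / A (metric units),
# to estimate how much broadband absorption a room needs for a target decay time.
# The target RT60 and panel absorption coefficient are assumed example values,
# and absorption already present in the room is ignored (so this overestimates).

def sabine_absorption_needed(volume_m3: float, target_rt60_s: float) -> float:
    """Total absorption (metric sabins, m^2) required for a target RT60."""
    return 0.161 * volume_m3 / target_rt60_s

room_volume = 4.0 * 5.0 * 2.7   # width x length x height in meters
target_rt60 = 0.3               # a common small-room target (assumed)
panel_alpha = 0.9               # assumed average absorption coefficient of the panels

total_sabins = sabine_absorption_needed(room_volume, target_rt60)
panel_area = total_sabins / panel_alpha  # square meters of panel face to add

print(f"~{total_sabins:.1f} sabins needed, roughly {panel_area:.1f} m^2 of panels")
```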

Diffusion

Diffusion is another method used to control reflections in a room. Various methods of diffusion all serve to scatter sound waves, reducing the buildup of standing waves and flutter echoes. Diffusers are often a necessary addition to absorption for dealing with uneven response when positioning is limited.

Phase Grating Diffusers

Phase grating diffusers, also known as Schroeder diffusers or QRD diffusers, utilize a series of regularly spaced ridges or wells to scatter sound waves. 

  • The depths of the ridges or wells create slight differences in arrival time, introducing constructive and destructive phase interference patterns.

  • The width of each well is based upon half wavelengths.

  • The depth of the deepest well sets the lowest frequency the diffuser works at.

  • The well width sets the highest frequency the diffuser works at.

As sound waves interact with the surface, they are scattered both temporally and spatially. Phase grating diffusers are particularly effective at scattering mid and high-frequency sound waves, helping to create a more balanced, natural, and diffuse sound field within the room.
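For the curious, here is a rough Python sketch of the well-depth math behind a one-dimensional QRD, using the quadratic residue sequence the name refers to. The prime number, design frequency, and upper frequency limit are arbitrary example values, not recommendations for any particular room.

```python
# Sketch of quadratic residue (Schroeder/QRD) diffuser geometry:
# well depth d_n = (n^2 mod N) * lambda_design / (2N), well width ~ half the
# wavelength of the intended upper frequency limit. Example values only.

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def qrd_well_depths(prime_n: int, design_freq_hz: float) -> list[float]:
    """Well depths (meters) for a 1-D QRD with N wells."""
    wavelength = SPEED_OF_SOUND / design_freq_hz
    return [((n * n) % prime_n) * wavelength / (2 * prime_n) for n in range(prime_n)]

def qrd_well_width(max_freq_hz: float) -> float:
    """Approximate well width (meters) from the intended upper frequency limit."""
    return SPEED_OF_SOUND / (2 * max_freq_hz)

depths = qrd_well_depths(prime_n=7, design_freq_hz=500.0)  # ~lowest effective frequency
width = qrd_well_width(max_freq_hz=4000.0)                 # ~highest effective frequency

print([round(d * 100, 1) for d in depths], "cm deep wells")
print(round(width * 100, 1), "cm wide wells")
```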

Geometric Diffusers

Geometric diffusers feature irregular shapes and varying depths arranged across a surface. 

Unlike other diffuser types that rely on interference effects or resonance principles, geometric diffusers scatter sound waves based on the physical geometry of the diffuser elements. The irregularity in shape and depth creates multiple reflection points with varying path lengths, causing sound waves to scatter in different directions. 

Geometric diffusers are versatile and can be tailored to achieve specific scattering patterns and diffusion characteristics, making them suitable for a wide range of acoustic environments.

Skyline Diffusers

Skyline diffusers feature a series of alternating peaks and valleys arranged in a stepped or serrated pattern. The term "skyline" comes from the visual resemblance of the diffuser's profile to the jagged outline of a city skyline.

These diffusers scatter sound waves by reflecting them off the angled surfaces of the peaks and valleys, breaking up reflections and dispersing sound energy in different directions. They are fairly easy to DIY, and online calculators can help with all of the math.

Binary Amplitude Diffusers

Binary amplitude diffusers consist of a series of surfaces that alternate reflection and absorption. 

These diffusers are used to mitigate excessive high frequency absorption and improve mid and low frequency absorption. While binary amplitude diffusers may involve elements of both geometry and phase modulation, they are distinct in their design and function as diffusers.

Bass Traps

Low-frequency issues, often caused by fundamental room modes, can be mitigated using bass traps. 

Superchunk Absorbers

Superchunk absorbers are large volumes of Rockwool or fiberglass, similar in construction to broadband absorption panels, typically stacked to fill a room's corners.

  • The large volume required to target low frequencies can be impractical for smaller rooms.

  • Hanging broadband absorption in the corners can be an effective compromise, as the larger air gap increases the effectiveness in lower frequencies.

Helmholtz Resonators

Helmholtz resonators are smaller than superchunks; however, they are more difficult to create and deploy because their interaction with the room will often shift the resonant frequency slightly as they are fine-tuned.

  • They consist of a cavity of air or batting within a box. 

  • They have an opening often equipped with a neck or tube for fine-tuning. 

  • They are designed to target and absorb specific frequencies by vibrating at the resonant frequency of the cavity, canceling out the resonance in the room. 

Helmholtz resonators are particularly effective at addressing relatively narrow-band bass frequencies and can help reduce resonant peaks and standing waves.
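The resonant frequency of a simple Helmholtz trap can be estimated from the cavity volume and the neck dimensions. Below is a minimal Python sketch of that formula with an approximate end correction; the dimensions are made-up examples, and a real build will still need measurement and tuning in the room.

```python
# Minimal sketch of the Helmholtz resonance formula,
# f = (c / 2*pi) * sqrt(A / (V * L_eff)),
# using a common approximate end correction for the neck.
# All dimensions below are arbitrary example values.

import math

SPEED_OF_SOUND = 343.0  # m/s

def helmholtz_freq(cavity_volume_m3: float, neck_radius_m: float,
                   neck_length_m: float) -> float:
    neck_area = math.pi * neck_radius_m ** 2
    # Approximate end correction: add ~1.7 * radius to the physical neck length
    effective_length = neck_length_m + 1.7 * neck_radius_m
    return (SPEED_OF_SOUND / (2 * math.pi)) * math.sqrt(
        neck_area / (cavity_volume_m3 * effective_length))

# Example: ~100 L cavity with a 5 cm radius, 10 cm long port -> roughly mid-30s Hz
print(round(helmholtz_freq(0.1, 0.05, 0.10), 1), "Hz")
```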

Tuned Membranes

These consist of a stretched membrane, such as a thin fabric or film, mounted over a frame or enclosure. Like Helmholtz resonators, tuned membranes are designed to resonate at specific frequencies, but they do so through the vibration of the membrane itself rather than the resonance of a cavity, and are generally smaller.

Tuned membranes are often used to absorb lower-mid frequencies rather than bass frequencies.

Active Traps

Active bass traps utilize electronics and sensors to actively monitor and respond to the room's acoustic characteristics in real-time. 

They are essentially an active subwoofer with a built-in microphone that generates counteracting sound waves to cancel out specific problematic bass frequencies, effectively absorbing or attenuating them.

Active bass traps are often very costly, but do offer high levels of control and are space-efficient, though multiples are needed to be effective, further driving up cost.

They also tend to be stinky…

Limitations of Foam

While foam panels are commonly used for acoustic treatment in home studios because they are cheap, advertising is often misleading about their true capabilities.

  • Foam panels are too thin and light to address low-frequency problems, which are usually the biggest issues in small rooms.

  • Foam panels have uneven absorption and reflection properties.

  • In some cases, these uneven absorption and reflection properties can exacerbate other acoustic problems.

Because of this, I do not recommend using foam panels: flutter echoes rarely remain after broadband absorption is in place, and adding diffusion will treat the remaining issues purported to be treated by the foam, with the added benefit of smoothing the response of the room.

Identifying Common Room Acoustic Issues

Achieving optimal mix room calibration requires identifying and addressing common room acoustic issues that can impact sound quality and accuracy. Let's move on to the most common room issues I've run into and how to fix them.

Boundaries, Corners, First Reflection Points

Reflections and standing waves caused by room boundaries, corners, and first reflection points can create acoustic anomalies that affect the accuracy of sound reproduction. 

Room modes caused by these boundary interactions are categorized into three types:

  • Axial modes: These occur between two parallel surfaces in a room, such as between two walls or between the floor and ceiling. Axial modes are the strongest and most easily audible modes.

  • Tangential modes: These occur between four surfaces, such as between two pairs of opposite walls or between the walls and ceiling/floor. Tangential modes are weaker than axial modes but still contribute significantly to the room's acoustics.

  • Oblique modes: These occur between six surfaces, involving a more complex interaction of sound waves within the room. Oblique modes are generally weaker and less audible than axial and tangential modes.

Later I will describe how to utilize the AMROC website to predict these problems and analyze your room acoustics with REW.
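As a preview of what AMROC and similar calculators are doing under the hood, here is a short Python sketch of the rectangular-room mode equation. The room dimensions are example values, and the model assumes a simple rectangular room with rigid surfaces.

```python
# Sketch of the rectangular-room mode equation behind tools like AMROC:
# f(p, q, r) = (c / 2) * sqrt((p/L)^2 + (q/W)^2 + (r/H)^2)
# One nonzero index -> axial, two -> tangential, three -> oblique.
# Room dimensions below are example values.

import math
from itertools import product

SPEED_OF_SOUND = 343.0  # m/s

def room_modes(length_m, width_m, height_m, max_index=3):
    modes = []
    for p, q, r in product(range(max_index + 1), repeat=3):
        if (p, q, r) == (0, 0, 0):
            continue
        freq = (SPEED_OF_SOUND / 2) * math.sqrt(
            (p / length_m) ** 2 + (q / width_m) ** 2 + (r / height_m) ** 2)
        kind = {1: "axial", 2: "tangential", 3: "oblique"}[sum(i > 0 for i in (p, q, r))]
        modes.append((round(freq, 1), kind, (p, q, r)))
    return sorted(modes)

# Print the ten lowest modes of an example 5.0 x 4.0 x 2.7 m room
for freq, kind, idx in room_modes(5.0, 4.0, 2.7)[:10]:
    print(f"{freq:6.1f} Hz  {kind:10s} {idx}")
```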

To achieve the best sound quality in a room, the width should be about 1.6 times the height, and the length should be approximately 2.6 times the height.

Room Layout/ Shape

Parallel surfaces and irregular room shapes contribute to standing waves and modal resonances, requiring careful calibration and treatment to mitigate. 

  • In a custom built room, parallel surfaces are avoided to reduce or eliminate standing waves.

  • However, in a home studio or office, symmetrical rooms can be perfectly serviceable, provided they are not perfect cubes, circular, or exceedingly small, and thoughtful treatment is applied.

Listening Position

The listener's position within the room significantly influences the perception of sound, with optimal listening positions minimizing the impact of room acoustics on sound reproduction. The ideal listening position is about ⅓ of the way into the long dimension of the room.

Desk and Speaker/Sub Positioning

Proper desk and speaker/sub positioning are essential for optimizing the listening environment and minimizing acoustic issues such as comb filtering and frequency cancellations. 

Your monitors may offer guidance on best positioning; however, a good rule of thumb is placing the left and right monitors' front surfaces between 16” and 44” from the front wall and about 44” from the side walls.

In my experience, the distance to the side wall is less important for a good subjective set-up when paired with room treatments, and as long as the monitors are not up against the wall, bass frequency build up will at least be minimized. 

What is more impactful is creating an equilateral triangle between the speakers and your head at the listening position. If you are using a sub and are not able to set delays, it is ideal to position the sub at the midpoint of the front wall (assuming a symmetrical room) at an equal distance between your monitors as well as from your listening position.

be sure to get low

Sub Crawl

For irregularly shaped/ problem rooms, you may need to perform a sub crawl to find optimal positioning.

  • Place your sub at the listening position and begin physically moving around the room while listening to bass frequencies.

  • Once you have determined the best (most even) sounding location, you can move the sub there.

It is important to note that you will need to be able to set delays for the sub and monitors independently if the distances are not equal.
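The delay itself is simple arithmetic based on the speed of sound. Here is a tiny Python sketch of the calculation; the distances are example values, and in practice a measurement tool like REW should confirm the alignment.

```python
# Sketch of the arithmetic behind independent sub/monitor delays when the
# distances to the listening position are unequal. Distances are example values.

SPEED_OF_SOUND = 343.0  # m/s

def alignment_delay_ms(monitor_dist_m: float, sub_dist_m: float) -> float:
    """Positive result: delay the monitors; negative result: delay the sub."""
    return (sub_dist_m - monitor_dist_m) / SPEED_OF_SOUND * 1000.0

# Example: monitors 1.2 m from the listening position, sub 1.8 m away
print(round(alignment_delay_ms(1.2, 1.8), 2), "ms of monitor delay")  # ~1.75 ms
```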

Environmental Factors

When setting up a home studio in a noisy urban environment, addressing environmental factors such as street noise, HVAC systems, or windows with soundproofing may first be necessary to minimize external disturbances that could impact room acoustics and calibration efforts.

While acoustic treatment focuses on improving the quality of sound within a room by controlling its reflection, absorption, and diffusion characteristics, soundproofing is used for reducing the transmission of sound between spaces. 

Soundproofing involves adding materials or structures that block or absorb sound waves to minimize noise transfer. Consider starting with these simple soundproofing options:

  • Seal cracks/ gaps to the room

  • HVAC duct muffler/ vent silencer

  • Solid core doors

  • Double-paned windows

  • or MLV curtains.

In extreme situations, modification to the room structure may be required. Doubling drywall for added mass and affixing the second layer with an acoustic damping compound such as Green Glue, which damps vibration between the layers, is a common technique used to improve soundproofing and reduce sound transmission into the room.

I would not recommend using a space that requires modifications more extreme than this, however, as purpose-building a space from scratch will likely be a better option at that point.

Now that we understand what issues to look for and our options for dealing with them, we're ready to dive into the core principles of mix room calibration.

Be sure to check back next week for part 2, where we'll explore practical strategies and tools for implementing mix room calibration effectively in your own studio.


Ready to elevate your project with expert sound design? Let's bring your vision to life! Explore my services or contact me today to discuss your project requirements. Thank you for visiting sameliaaudio.com. Stay tuned for more insights and analysis on cinematic storytelling through sound.


APRIL FOOLS' DAY SPECIAL: UNVEILING THE SOUND ILLUSIONS BEHIND THE SCENES

I delve into the fascinating world of sound design for an April Fools' Day-themed blog post focusing on sound illusions.

April Fools' Day is nearly upon us, and what better way to celebrate than by unraveling some of the intriguing secrets lurking behind the scenes of sound design? Prepare to be amazed as we delve into the realms of auditory illusions and playful pranks that add an extra layer of excitement to our audio experiences. From mind-bending Shepard Tones to speech-jamming antics, let's explore the tricks of the trade that keep us on our toes.

Shepard Tone: The Infinite Ascent (or Descent)

Ever found yourself lost in the gripping tension of a climactic movie scene or the heart-pounding crescendo of a musical masterpiece? Chances are you've encountered the Shepard Tone, the auditory illusion that plays tricks on our perception of pitch. 

Crafted through a clever arrangement of overlapping tones that fade in and out cyclically, this sonic marvel creates the illusion of a pitch that endlessly ascends or descends, leaving us captivated by its hypnotic allure. Whether it's building suspense in a thriller or heightening the intensity of a game, the Shepard Tone never fails to keep us guessing, making it the perfect tool for orchestrating April Fools' Day surprises. 

One famous example of the Shepard Tone in action is the endless staircase in Super Mario 64, where the pitch seems to ascend indefinitely as Mario climbs the steps, adding to the game's surreal and mind-bending atmosphere.

How to create a Shepard Tone:

- Construct overlapping tones or sweeps, separated by octaves, with the higher octave starting on the same pitch where the lower octave ends.

- Fade the level of each tone throughout the duration: for rising pitches, fade the lowest octave in and the highest octave out.

- Carefully adjust frequency and amplitude modulation for seamless loops.

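If you'd like to experiment yourself, here is a rough Python/NumPy sketch that follows the steps above to synthesize a rising Shepard tone. The voice count, base frequency, and loop length are arbitrary example values, and exporting to a WAV file assumes the soundfile package is available.

```python
# Rough sketch of a rising Shepard tone: octave-spaced partials sweep upward
# while a bell-shaped level curve fades the lowest voices in and the highest
# voices out, so the loop seems to climb forever. Example values throughout.

import numpy as np

SR = 44100
DURATION = 8.0            # seconds per loop
NUM_VOICES = 6
F_LOW = 55.0              # bottom of the covered range (Hz)

t = np.linspace(0, DURATION, int(SR * DURATION), endpoint=False)
progress = t / DURATION                       # 0 -> 1 over the loop
output = np.zeros_like(t)

for voice in range(NUM_VOICES):
    # Each voice rises one octave per loop, offset by one octave per voice,
    # wrapping around so the pattern repeats.
    octave_position = (voice + progress) % NUM_VOICES
    freq = F_LOW * 2.0 ** octave_position
    # Bell-shaped level: silent at the extremes, loudest in the middle,
    # which hides the jump when a voice wraps back to the bottom.
    level = np.sin(np.pi * octave_position / NUM_VOICES) ** 2
    phase = 2 * np.pi * np.cumsum(freq) / SR  # integrate frequency for a smooth sweep
    output += level * np.sin(phase)

output /= np.max(np.abs(output))              # normalize before export
# e.g. soundfile.write("shepard.wav", output, SR)  # requires the soundfile package
```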

Speech Jammer: The Tongue-Twisting Tease

Imagine trying to speak during a Zoom meeting, only to find yourself tangled in a web of delays and tripping over your words. Enter the Speech Jammer, the mischievous phenomenon that turns ordinary speech into a comedic spectacle. 

This April 1st, be careful around live sound engineers- by introducing a slight delay between microphone input and monitor playback, they can throw our linguistic prowess into disarray, leaving us stumbling over our words in a frustrating display of verbal gymnastics. Whether it's for experimental purposes or simply for a good laugh, the Speech Jammer reminds us to embrace the unexpected twists and turns that make April Fools' Day so delightfully unpredictable (just be sure to prank them back after sound check).

How to create a Speech Jammer effect:

- Use a mixer or software application to introduce a slight delay between microphone input and audio output.

- Experiment with delay settings from 200-500 ms to achieve desired disruption effect.
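Here is a rough Python sketch of that idea using the sounddevice package (assumed to be installed): microphone input is written into a short buffer and played back after a fixed delay. The 200 ms value is just a starting point, and closed headphones are a must to avoid feedback.

```python
# Rough speech-jammer sketch: pass mic input to the output after a fixed delay.
# The per-sample loop is slow in Python; raise the blocksize or vectorize the
# buffer handling if you hear dropouts. Use closed headphones to avoid feedback.

import numpy as np
import sounddevice as sd

SR = 44100
DELAY_SECONDS = 0.2
delay_samples = int(SR * DELAY_SECONDS)
buffer = np.zeros((delay_samples, 1), dtype="float32")  # simple ring buffer
write_pos = 0

def callback(indata, outdata, frames, time, status):
    global write_pos
    if status:
        print(status)
    for i in range(frames):
        outdata[i] = buffer[write_pos]   # play what arrived DELAY_SECONDS ago
        buffer[write_pos] = indata[i]    # store the current input sample
        write_pos = (write_pos + 1) % delay_samples

with sd.Stream(samplerate=SR, channels=1, dtype="float32", callback=callback):
    print("Speech jammer running - press Enter to stop.")
    input()
```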

Masking: The Art of Sonic Concealment

Ever wondered how filmmakers manage to seamlessly blend background noise or cover up imperfections in audio recordings? Enter the art of masking, the technique that allows sound designers to conceal unwanted sounds with a little finesse. 

One common way to use masking to our advantage is by employing background music to cover noisy dialogue recordings, smoothing out rough edges and creating a more enjoyable experience for the audience. However, masking can also be unwanted. When there are competing elements in a mix, careful EQ, level, or even timing adjustments can allow the desired sounds to remain clear and prominent. FabFilter Pro-Q 3 even has a helpful feature in its spectrum analyzer to help identify masking quickly across multiple tracks called “show collisions,” where potential masking glows red.

How to create a Masking effect:

- Overlay unwanted sounds with louder or more prominent sounds.

- Adjust volume levels, frequency ranges, and timing for seamless integration.

Sweetening: The Subtle Symphony of Enhancement

Last but not least, we have sweetening, the secret ingredient that elevates audio quality to new heights. Through adjustments in processing and including additional layers, sweetening breathes life into recordings, infusing them with complexity, depth, and character. 

One iconic example of sweetening is the big, over-the-top Hollywood punches found in the Indiana Jones movies, where sound designer Ben Burtt enhances the impact of the punches through careful manipulation of processing and layering of samples.

Compare the following punch sound effects I have prepared below. The sweetened punch feels much more powerful thanks to careful layering of swishes and fabric to provide anticipation, synthesized elements to enhance the low end impact, and bag crunching to extend the duration while also providing texture. Additional layers may also be added such as vocal efforts or fruit mash foley to further build out your punches.

How to sweeten audio:

- Use equalizers, compressors, reverbs, and other effects processors to sculpt your recordings.

- Layer additional recordings to create rhythm and add detail.

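For a sense of how simple the layering arithmetic can be, here is a minimal Python sketch that offsets, gains, and sums a few hypothetical layers into one punch. The file names, offsets, and gain values are made up for illustration, and reading/writing WAV files assumes the soundfile package is installed.

```python
# Minimal layering sketch: offset and gain each element, then sum them.
# File names, offsets, and gains are hypothetical; all files are assumed
# to be mono-compatible and to share the 48 kHz sample rate.

import numpy as np
import soundfile as sf

SR = 48000

# (file, start offset in seconds, gain in dB) - hypothetical layer list
layers = [
    ("swish.wav",      0.00, -6.0),   # anticipation
    ("fist_hit.wav",   0.12,  0.0),   # core impact
    ("sub_thump.wav",  0.12, -3.0),   # synthesized low end
    ("bag_crunch.wav", 0.12, -8.0),   # texture / tail
]

def db_to_gain(db: float) -> float:
    return 10.0 ** (db / 20.0)

mix = np.zeros(SR * 2)  # 2-second canvas
for path, offset_s, gain_db in layers:
    audio, _sr = sf.read(path)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)          # fold to mono for simplicity
    start = int(offset_s * SR)
    end = min(start + len(audio), len(mix))
    mix[start:end] += db_to_gain(gain_db) * audio[:end - start]

mix /= max(1.0, np.max(np.abs(mix)))        # avoid clipping
sf.write("sweetened_punch.wav", mix, SR)
```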

Conclusion

In conclusion, the world of sound design is a playground of endless possibilities, where illusions abound and surprises await. As we celebrate April Fools' Day, let's marvel at the ingenuity of these auditory tricks and revel in the joy of sound. Whether you're a seasoned pro or an eager amateur, there's never been a better time to explore the magic that lies within our ears. Happy April Fools' Day, and may your sonic adventures be filled with laughter and delight!


Ready to elevate your project with expert sound design? Let's bring your vision to life! Explore my services or contact me today to discuss your project requirements. Thank you for visiting sameliaaudio.com. Stay tuned for more insights and analysis on cinematic storytelling through sound.


SOUND DESIGN FUNDAMENTALS: BACKGROUND SOUNDS

I delve into the intricacies of background sounds and show how these subtle yet powerful elements can elevate storytelling.

Backgrounds in sound design serve to establish the environment and atmosphere of a scene. They can include nat sound from real-world locations as well as other ambient sounds such as nature's elements, traffic, or people, adding depth and realism to the audio landscape. By layering backgrounds with foreground sounds, like dialogue or spot effects, sound designers create immersive experiences for audiences. Backgrounds also aid in storytelling by setting the mood and helping to convey the emotions and context of a scene. Additionally, they contribute to the overall balance and cohesion of the sound mix, ensuring that every element works together to support the narrative and enhance the viewer's engagement.

Differentiating Backgrounds and Ambiance

While backgrounds and ambiance are closely related in sound design, they serve distinct purposes and contribute differently to the overall experience of a scene.

Background sounds primarily focus on establishing the environment of a scene and providing diegetic sounds (i.e. sounds within the fictional environment, heard by both the characters and you) that immerse both you and the characters in the fictional world. These elements are spatialized with panning, EQ, and reverb to match the visual cues on screen, ensuring a seamless integration of sound with picture. They consist of specific, recognizable sounds that help to ground us in a particular setting.

The sound of ocean waves crashing against the shore, vehicle pass-bys, or birds flying overhead, can each sound totally different in a variety of ways, and the choices of which characteristics to present via the sound design all help transport you to the setting. For an easy example, we can explore the podrace scene in The Phantom Menace. Compare:

  • The heavy thunk-thunk-ing of the antagonist Sebulba's podracer

  • To young Anakin's, with its high engine whine reminiscent of a high performance F1 car.

These sounds provide context and contribute to the characterization of the opponents in the scene, enhancing your sense of immersion. 

Ambiances on the other hand, encompass a broader range of elements that create a general sense of atmosphere or mood and are often non-diegetic (i.e. sounds that come from outside the fictional world of the film and are not audible to the characters but are heard solely by you). While backgrounds are more literal and specific to the setting of a scene, ambiances typically include more abstract or subtle sounds that evoke a particular feeling or emotion. 


  • For example, while the low hum of a city at night or the shrill, tense droning in a haunted house would both be considered ambient sounds, only the latter is non-diegetic. Either way, ambiances set the overall tone of a scene and intentionally influence your emotional response. 

Backgrounds focus on specific diegetic environmental sounds to provide:

  • Context

  • Realism

  • And establish setting

Ambiances utilize a broader range of non-diegetic or atmospheric elements to:

  • Create mood

  • And evoke emotions 

Both are essential tools in sound design, and work together to enhance the cinematic experience for viewers. 

Walla and Background Dialogue

Backgrounds can also include voices, which will typically take the form of either walla or ambient/ background dialogue. While both serve to enrich the auditory makeup of a scene, they have distinct characteristics and purposes that set them apart. Let's compare the two.

Ambient dialogue is specific lines or interactions spoken by background characters in a scene. These lines are typically audible and may convey information about the setting, context, or mood of the scene. 

  • For example, in the opening scene of Jurassic Park, ambient dialogue is used throughout the scene while the park rangers are corralling the highly dangerous raptor from a cage to its enclosure. The erratic shouting and call outs from the rangers in the background lends legitimacy to the high stakes danger presented by the vicious dino. The tension builds as the crew prepares to open the gate, with more and more fearful shouts of ambient dialogue ringing out in reaction to the angry thrashing in the cage. When the gate opens, the raptor surges forward creating a gap in the enclosure, grabbing a ranger and mauling him as he screams out over the commotion of the panicking group behind the commanding dialogue of their leader, Robert Muldoon. 

Ambient dialogue like this adds tension and realism to a scene by creating the impression of a dynamic and lived-in environment. It helps to establish the context and atmosphere of the setting while also contributing to the overall narrative. Imagine the scene with no screaming from the rangers but instead muted conversations and professional jargon, or perhaps laughter and workers unprofessionally taunting the animal. The original treatment in the film builds your fear of the raptor; the former change suggests a highly professional team, well practiced and unafraid perhaps to the point of overconfidence, while the latter gives the impression of a low trained crew likely to make mistakes. These changes highlight the importance of ambient dialogue in shaping your perception and engagement with the narrative. Without the ambient dialogue as it is, the scene loses its sense of urgency and danger by shifting your focus away from the raptor and lowering the build up of tension, resulting in a portrayal that is ultimately not as terrifying, and fails to set up the events of the film as effectively.

Walla refers to the generalized murmur or indistinct chatter heard in crowded or busy scenes. Unlike ambient dialogue, walla does not consist of discernible words or phrases and is often used to simulate the sound of a crowd without drawing focus away from the main dialogue or action. The purpose of walla is more about creating a sense of presence and atmosphere rather than conveying specific information or interactions, and to add texture to a scene, making it feel more immersive and authentic- particularly in settings like busy streets, parties, or sports events. 

When selecting walla, I like to think of the mood I need to establish in a scene first, and then look at things like crowd size/ demographics etc. because often, the feeling is more important than being 100% accurate to the image, and some exaggeration will help the narrative.

For example, in A League Of Their Own, the crowd plays an important part in the advancing plot. Early on, the women are playing in smaller parks with half-empty crowds.

  • The walla reflects this with sounds of smaller, quieter crowds interspersed with the occasional heckler.

The line "Girls can't play ball!" shouted from a man in the stands is followed by scattered laughter amid the sparse conversations of the mostly disinterested attendees. Later however, as the women gain popularity and find themselves playing in Harvey Field (aka Major League Baseball's Wrigley Field), the crowds are transformed into much larger, engaged groups capable of thunderous applause. 

This evolution of the background walla throughout the film shows us a few things through audio alone:

  • How the initial hostility that the players faced was overcome

  • How the team has won the hearts of the fans

  • How the walla lends a sense of realism to the environment

  • How the walla contributes to the primary elements of the scene without distracting from the main action.

While both ambient dialogue and walla contribute to the richness and realism of a scene, they serve different purposes and have distinct characteristics. 

  • Ambient dialogue adds specific interactions and context to a scene, enhancing your understanding of the setting and characters.

  • Walla, in contrast, creates a sense of atmosphere and immersion by simulating the background noise of crowds or bustling environments.

Together, these elements work to bring the setting of the scene to life in vibrant detail.

Trans-Diegetic Elements and Scene Transitions

Occasionally, background sounds may be trans-diegetic and could even move into or out of the background at key moments. These trans-diegetic elements are often used to form scene transitions, typically utilizing a needle-drop of popular music that fits the scene. The track will begin as either diegetic (i.e. songs on a car radio) or non-diegetic (functioning as score), and then flip for the scene change.

Trans-diegetic sounds can be used to enhance the flow of the scene by creating a sense of continuity and immersion, blurring the boundaries between the narrative space and the soundtrack. 

This type of sound design plays a big part in the narrative style of Guardians of the Galaxy for example. 

  • Used as a way to highlight Peter's longing for a connection to his past on earth, trans-diegetic needle drops are used often throughout the series of films. In one early scene, after the guardians are arrested and arrive at the prison, a guard swipes Peter's walkman from his belongings and the song "Hooked on a Feeling" by Blue Swede starts playing over the headphones. Peter, hearing the song and desperate to preserve the one connection he has to his past, begins to struggle with the guard and gets tased, letting out a scream as the chorus enters with "ahhh// hooked on a feeling" now playing full range, non-diegetically. This trans-diegetic needle drop transition seamlessly bridges the gap between scenes while immersing you further into the film's universe. It’s also able to provide some comedic relief on top of it all.

Indicating Narrative Beats Through Background Sounds

Backgrounds and ambiance in sound design serve as more than just tools for creating immersive environments; they are also powerful indicators of narrative beats in film. At the beginning of any scene, the choice of background sounds and ambiance sets the tone, giving viewers a glimpse into the upcoming narrative developments. 

  • For example, serene nature sounds might precede a peaceful moment, while ominous ambient drones could hint at impending danger, as is the case in the push and pull of high and low intensity sounds prevalent in 1917. These elements subtly guide you through the story's twists and turns. By gradually intensifying background sounds, filmmakers can create a sense of foreboding, leading up to a climactic narrative event. 

Counterintuitively, lowering or removing backgrounds can have the same type of effect! 

These often subconscious cues reinforce emotional beats in the story, amplifying our engagement. Soft background music and gentle ambient sounds enhance tender moments between characters, while discordant sounds mirror conflicts, deepening the emotional impact. Additionally, backgrounds and ambiance draw attention to pivotal moments in the narrative. Sudden changes in background sounds or ambiance, like a dramatic music cue or a sudden absence of ambient noise, underscore the significance of key events or revelations. In essence, backgrounds and ambiance serve as subtle yet effective storytellers, guiding our understanding and emotional connection to the film's narrative.

Conclusion

The world of sound design is a rich and nuanced one, where every element plays a crucial role in shaping the cinematic experience. As we've explored in this brief discussion of background sounds, these subtle yet powerful components add depth, realism, and emotion to the stories we see on screen, with each sound serving a purpose, guiding viewers through the narrative and immersing them in the world of the film.


Ready to elevate your project with expert sound design? Let's bring your vision to life! Explore my services or contact me today to discuss your project requirements. Thank you for visiting sameliaaudio.com. Stay tuned for more insights and analysis on cinematic storytelling through sound.


THE SOUND DESIGN OF DUNE

I delve into the intricacies of how the sound design choices enhance the overall production of Dune.

This weekend is the U.S. release of the sequel to 2021's Dune, and as a fan of the books and both film adaptations, I couldn't be more excited. Denis Villeneuve's adaptation of Frank Herbert's seminal science fiction epic is a masterclass in world-building. With its immersive sound design by Theo Green and Mark Mangini and Oscar winning score by Hans Zimmer playing pivotal roles in transporting audiences to the harsh desert planet of Arrakis, the upcoming sequel is sure to please. In this analysis, I'll explore how the 2021 film's sound design contributes to the richness of Dune's universe.


Dune


Villeneuve adeptly captures the essence of Herbert's Dune in his adaptation. The story follows the journey of young nobleman Paul Atreides as he begins his destiny on Arrakis, navigating political intrigue, psychedelic spice powers, and the mysteries of the desert. Set in the distant spacefaring future of humanity, Dune features a notably technologically analog society. This quirk of Dune's setting is a result of the people's long-held animosity towards “thinking machines”: AIs initially created to help humanity which eventually wound up subjugating it. Despite humanity successfully freeing itself thousands of years ago, this hatred persists to the present, with a reliance on spice-enhanced human abilities taking the place of computing machines. The story centers on the titular desert planet, where noble houses vie for control and influence over it as the only source of the invaluable spice melange. Villeneuve knew the sound design would have to feel natural and organic to help convey these aspects of the setting and stay true to the source material.

Experimenting with Sound Design

Villeneuve's approach involved the sound design team from the outset. He encouraged creativity and experimentation, and ensured sound wasn't an afterthought but an integral part of the creative process. Thanks to this freedom to "make mistakes," happy accidents like the one which became the base sound of the combat shields could occur. In an interview, sound designer Theo Green stated that the sound came about from a patch on an analog synth that began malfunctioning, creating the glitchy pulsing bursts of noise used in the film. The team ran with it and the effect was then sent to the gfx department to develop a visual for the sound.

Holtzman Shields

The shield's sound is one of only 4 synthesized sounds among the 3,200 bespoke sounds created for the movie. Villeneuve strove for realism with the sound design, and wanted natural sounds for the unfamiliar things of Dune's universe. To accomplish this, sound designer Mark Mangini knew a large amount of acoustic recordings would be needed to convince the audience the futuristic images on screen were real, and so went out into Death Valley, buried his mic collection in the desert, and took out some aggression on the sand with a hammer... In all seriousness, the buried contact microphone recordings were the basis of the thumpers: a device used to bait the worms of Arrakis. He and his team also got incredible recordings of the sand dunes “singing” in the wind, a peculiar natural phenomenon that became a huge part of the ambiance of the desert and informed the sound design of the sandworm itself.

Sandworms

Villeneuve aimed to evoke awe for the sandworms. Revered as deities by the Fremen, they are the universe's lone source of the coveted spice. However, the initial design the team had come up with sounded far too wet and monster-like for Villeneuve, contradicting the planet's scarcity of water and the Fremen's symbiotic relationship with them. Villeneuve was able to collaborate with the sound designers and, after removing layers of wet caves and other animal sounds, this mad scientist approach eventually concocted a recipe for the worms involving whale calls and experimental breath recordings from deep within Mangini's own mouth. The end result, while still terrifying on a primal level, conveys the true nature of the worms as god-like beings acting as natural protectors of their planet, and the details that went into the deep guttural calls the team created were then used to inform the visible anatomy of the worm's gigantic maw.

Organic Ornithopters

Crafting the sound of the ornithopters required a blend of organic and mechanical elements to convey their hybrid nature. Villeneuve drew inspiration from their insect-like appearance, intentionally steering clear of conventional helicopter sounds to maintain authenticity. All layers of sound were recorded acoustically, with parts and servos sourced from old cars, beetle wings to capture variations in motion during flight, cat purrs when hovering, and the sound of tent straps flapping violently in a wind storm for the beating of the 'thopters' blades. The use of only acoustic sounds subconsciously signals to the viewer that the sounds are authentic and real, despite coming from an eclectic collection of sound sources, while synthesized sounds often betray this goal.

The Bene Gesserit Voice

The Voice is a crucial element of Herbert's Dune. It is the main tool of the Bene Gesserit, a religious order of matriarchs who use the strange ability to steer the politics of the Dune universe from behind the scenes. This shadowy organization, initially inspired by Herbert's aunts and their persistent religious brainwashing attempts, possesses a mystical connection to each of their female ancestors. To show this connection, Villeneuve and the sound design team employed simple yet effective techniques that also convey the compelling power and influence these characters possess. The Voice is primarily 3 ingredients: actor performance, multiple elderly woman overdubs, and a subharmonic effect created by reamping the voice recordings through powerful subwoofers, rattling the room, worldizing the sound and imparting a sense of undeniable power. The sound design also manages to show how proficient the user of the Voice is by simply slipping these ingredients out of sync, directly indicating the level of coordination and power the user possesses. This allows the audience to clearly realize the progression Paul makes in honing this strange ability, from his halting, uncoordinated early attempts during training to the powerful blast of Voice he can produce later on when his life is on the line.

Standout Score

Perhaps the most standout element of the film's sound is the Oscar-winning score. Zimmer's use of strange sounds gathered from unique (often purpose-built) instruments and world music tonalities blurs the line between score and sound design through its clever implementation. Traditional film scoring is often done in the style of John Williams: melodic themes played by an orchestra. Dune's score, however, is close to the opposite. You do not hear a traditional orchestra even once throughout the film. The themes are often not melodic, but rather ambient or rhythmic elements, reinforcing the feeling for the audience that the events unfolding on screen take place on a futuristic alien planet.

Zimmer's score uses specific elements for each character and setting, evoking and blending themes to highlight multiple undercurrents within scenes and evolving it along with the plot. One example is shortly after the Atreides' arrival on Dune. Jessica is shown with her mother, the Reverend Mother of the Bene Gesserit, who reminds Jessica of her duty to the order and the importance of its long-term plans. Later that evening Jessica and Leto talk. Initially, as Jessica acts on her mission as a Bene Gesserit, we hear the choir chants and whispers of the Bene Gesserit theme, reflecting her intent to advance the designs of the order. However, as the two continue their talk, her love for Leto and her commitment to their family- as well as her acknowledgement of the danger of their current circumstances with the enemy Harkonnens- rise to the surface. The score highlights this conflict in her mindset, adding in the Atreides theme over top. This blending of themes in Dune underscores the complexity of internal motivations represented in the film, as Villeneuve did not want to interrupt the narrative with constant voice overs for this necessary context. The result is a captivating ambiance throughout the film that informs the viewer of these subtexts, allowing the dense internal world found in the books to come to screen effortlessly.

Efficient Decision Making

Villeneuve's early, consistent involvement with his sound team fostered a dialogue and idea exchange through experimentation, resulting in a cohesive direction. Avoiding library temp effects allowed him to focus on the macro view of the mix during post-production, ensuring a refined end result. This approach allowed for a seamless integration of sound design with visual elements, elevating the overall quality of the film.


Ready to elevate your project with expert sound design? Let's bring your vision to life! Explore my services or contact me today to discuss your project requirements. Thank you for visiting sameliaaudio.com. Stay tuned for more insights and analysis on cinematic storytelling through sound.


THE SOUND DESIGN OF 1917

I delve into the intricacies of how the sound design choices enhance the overall production of 1917.

Welcome to the inaugural post on sameliaaudio.com. This blog endeavors to dissect the intricacies of cinematic sound design and storytelling by breaking down some of my favorite films, shows, and games to highlight how sound is an unseen but powerful guide capable of transporting us through time, evoking emotions, and immersing us in narratives.

Join me here, where I'll unravel the mysteries of sound design, share creative insights into the form and function of sound for film, and explore the myriad of profound ways in which sound design enriches stories.


1917


Known for its fantastic cinematography by Roger Deakins, Sam Mendes's 1917 delivers equally exceptional scoring by Thomas Newman, production sound by Stuart Wilson, mixing by Mark Taylor, sound design from Oliver Tarney, and dialogue by Rachel Tate. In this analysis, I'll delve into the intricacies of how the sound design choices enhance the overall production.

Immersive Opening Sequence

As the film opens, a haunting score featuring a reversed piano motif plays over a shot of a peaceful French countryside behind the front lines, untouched by war and filled with wildflowers and the muted sounds of spring. The overall effect of combining these unconventional modern score elements with delicate ambiance creates the feeling for the viewer that this is a moment frozen in time, as if it were a living memory or dream, a moment to savor before the peril that we know must lie ahead in the war. As the camera edges backwards, mirroring the reversed elements of the score, we are introduced to the protagonists (Tom Blake, a young and naive idealist, and his war-weary friend William Schofield, a veteran of the Somme) as they are awakened from a quick rest beneath a tree so that they may go receive their next mission briefing.

The camera continues to pull backwards as the two begin to follow, walking through the busy camp as the peaceful backdrop falls away to the harsh reality of the trenches. Gentle soft grass becomes gloopy mud, heavy gear strains with every movement, and the commotion of the base drowns out the natural world. The score continues to build during the walk, adding dissonance and drones, conveying the mounting uneasy feeling the two (but particularly Blake) are experiencing. As the camera continues backwards, they make their way through the trench while nervously discussing what their orders might be. After a brief quip from Schofield to change the subject and put his friend at ease, the score fades in perfect synchrony with the camera's shifting perspective, as it turns to now face forward while the two proceed ahead. The instrumentation also changes here, indicating a shift within the protagonists, as they have now put aside their fears, gained their composure, and have accepted their fate whatever it may be...

World Building Score

1917 is rife with this type of intricate interplay between visuals and sounds. In his score, Newman has the responsibility of helping the audience inhabit the same world as the characters Schofield and Blake- and succeeds massively. The score plays a critical role in shaping the emotional tone and intensity of the film due to the editing limitations of the one shot approach, complementing the nuanced performances of the lead characters and adding to the overall impact of key scenes. Stepping aside when needed, the score ebbs and flows throughout the film, allowing the other sounds to ground the viewer with authentic bits of world building punctuated with quiet moments to let things breathe.

Authenticity in Audio

1917 prioritized authenticity in sound design, and the team went to great pains to record authentic sounds. The weapons and vehicles, as well as 5.0 ambiance recordings for the very natural moments of quiet, were all recorded on location from the same perspective as the protagonist, rather than using the highly processed, close-miked Hollywood sounds typical of a blockbuster war movie, a choice which clearly aligns the viewer to the protagonist's perspective or attention in the moment, heightening immersion. Foley, too, plays a role in keeping the protagonist (and by extension their inner world) the focus for the viewers as the cameras are constantly moving throughout the film. Some of the Foley recordings, such as the muddy trench footsteps, were captured on location as well, in order to make them even more authentic. The cumulative effect of these unique sounds for the viewer is a much more intimate and personal experience closely locked to the protagonist with little distraction from the greater world surrounding them, while still grounding them in that world, allowing you to experience the story unfolding alongside the character.

Effective Collaboration

The collaborative effort between visuals and sounds on display in 1917 creates an immersive experience for the audience that far exceeds traditional editing, as all parts come together to tell the story in an intimate and gripping way. We experience the zeitgeist of the war from the perspective of a soldier caught up in the middle, highlighting their emotions, hopes, and fears. This intimate perspective naturally provides a much more critical depiction of war than typical war movies, in large part due to the production and design choices which humanize the soldiers, complementing the script's goal of creating sympathetic protagonists who are more victim than hero, and enabling the audience to experience the story alongside them.


Ready to elevate your project with expert sound design? Let's bring your vision to life! Explore my services or contact me today to discuss your project requirements. Thank you for visiting sameliaaudio.com. Stay tuned for more insights and analysis on cinematic storytelling through sound.

