
File This

July 30, 2020 by Aaron
Audio Instruction, Instructional Stuff, Music Business, Programming and Such, Published Work, Recording Magazine
5.1 surround, aac, aaron j. trumm, aaron trumm, ac-3, aiff, audio, audio file types, cdda, lossless, lossy, m.c. murph, mp3, nquit, ogg, recording magazine, wma

A Brief Discussion of Audio File Types

This article first appeared in Recording Magazine. I reprint it here with permission, and I encourage you to subscribe to that publication, as they are a stand-up bunch of folks!

In my article “Keeping Track”, we covered data.  We talked about the information you need to keep with your songs in order to sell, license and organize them.  We covered metatags: data about data that gets embedded in files.  We talked a little about the file types that carry metadata and how to use them, and that brought up a wider topic: audio file types.

There are hundreds of audio formats and an endless variety of settings and options.  So, without a whole lot of fanfare, we’ll dive into some of the formats that exist as of now, but first let’s delineate a few traits and categories.

Compression

An audio file (or a video file for that matter) is either compressed or uncompressed.  What this means is the file is either whole and complete or it has been squashed down to save space, like using a .zip file; or in physical terms, like using one of those infomercial vacuum bags to suck the air out of your Christmas sweaters.  A WAV file is uncompressed; an MP3 is compressed.

Don’t confuse compression or the lack thereof with the terms lossy or lossless.  Lossy and lossless are two types of compressed files.  If a file is lossy, it means some data has been thrown out because in theory that data isn’t necessary, usually because the human ear can’t hear it.  That data cannot be recovered.  On the other hand, a lossless file is compressed, but no data has been thrown out.  Think of the difference between cutting off the sleeves of your sweater (because it’d be fine as a vest) and sucking it in Mr. Popeil’s vacuum (lossy), and simply sucking it in the vacuum, but leaving it intact (lossless).  As you might guess, lossless files are generally bigger.  MP3s are lossy.  FLAC files are lossless.

File Format and Codec

You may never need to know this, but there is a difference between a file’s format and its codec.  The format, or file type, is simply the wrapper in which the audio data is kept.  The codec is the meat of how it’s encoded.  Not all file types support all codecs, but there are some surprising possibilities.  A WAV file might not be encoded with PCM, for example.  We don’t have room here for a comprehensive list, but it’s likely you’ll only ever need to worry about a few possibilities.  We’ll say more on those big ones momentarily.
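
A quick aside: if you’re ever curious which wrapper a mystery file actually uses, the container announces itself in the first few bytes.  Here’s a rough sketch in Python that just peeks at those bytes (the file name is made up):

    # Rough sketch: identify the container (the wrapper) from its first bytes.
    # This says nothing about the codec inside - a RIFF/WAVE wrapper usually
    # holds PCM, but it doesn't have to.
    def sniff_container(path):
        with open(path, "rb") as f:
            header = f.read(12)
        if header[:4] == b"RIFF" and header[8:12] == b"WAVE":
            return "WAV (RIFF wrapper)"
        if header[:4] == b"FORM" and header[8:12] in (b"AIFF", b"AIFC"):
            return "AIFF (IFF wrapper)"
        if header[:3] == b"ID3" or header[:2] in (b"\xff\xfb", b"\xff\xf3", b"\xff\xf2"):
            return "MP3 (ID3 tag or MPEG audio frame)"
        return "something else"

    print(sniff_container("mystery_track.wav"))  # hypothetical file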

Sample Rate, Bit Depth and Bit Rate

These are the main measurements of audio quality, and there can be some confusion about what they all mean.

Sample rate is used to refer to an original or uncompressed recording.  It’s how many times per second a snapshot of the signal is taken.  44.1k means 44.1 kilohertz, or 44,100 times in a second.  You probably know that CD quality is 44.1k, 16 bit.

Bit Depth is how many bits are in each sample.  If you record at 44.1k, 16 bit, you’re taking 44,100 16 bit samples every second.  Crudely, more bit depth corresponds to more dynamic range (roughly 6 dB per bit).

Bit Rate can be a bit fuzzier.  Bit rate simply means the number of bits that are processed over a given amount of time, and it is a measure that can be applied to any file.  A CD quality file is 1,411 kbps (kilobits per second), for example.  In practice, though, bitrate is more often used to refer to the quality of a compressed, lossy file.  To be crude again, it comes down to a measure of how much data we’ve thrown away.  The highest bit rate for mp3s is 320 kbps, and the default iTunes rate is 256.  A 128k MP3 is noticeably smaller than a 320k file, but in many situations, not all that different sounding.  A 32k MP3, however, would sound awful, except in special circumstances (audiobooks, for example, often use low bit rates, because that doesn’t much affect a spoken track).
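
If you like seeing the math, that 1,411 kbps figure is just sample rate times bit depth times channels.  Here’s a quick back-of-the-envelope sketch in Python (stereo assumed, and the four-minute song length is made up):

    # CD audio: 44,100 samples/sec x 16 bits/sample x 2 channels
    cd_bits_per_second = 44_100 * 16 * 2      # 1,411,200 bits/sec, i.e. ~1,411 kbps

    # Rough file sizes for a 4-minute (240 second) song at a few bit rates:
    for kbps in (1411, 320, 128, 32):
        megabytes = kbps * 1000 * 240 / 8 / 1_000_000   # bits -> bytes -> MB
        print(f"{kbps:>5} kbps -> about {megabytes:.0f} MB")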

The Big Ones

While there are actually tons of audio file types and different combinations of format/codec possibilities, there are only a few you’re likely to see very often.  In fact, we can narrow that down to three: WAV, AIFF, and MP3.

WAV (Waveform Audio File Format) files are Microsoft’s format, used in PC applications, and based on RIFF (resource interchange file format).  Usually WAV files are encoded using PCM (pulse code modulation) encoding, which is uncompressed and the same basic encoding used in CDs, but it is possible to encode a WAV file with other codecs, even compressed ones.  A “RIFF Wav” is a normal WAV file, and a “Broadcast WAV” is a WAV file with extended headers, originally used by broadcasters.  WAV files have .wav extensions.

AIFF (Audio Interchange File Format) files are Apple’s uncompressed format, built on the same chunk-based IFF structure that RIFF grew out of, and usually using PCM encoding.  The only practical difference between WAV and AIFF files is that AIFF files allow more metadata by default (so you can see stuff like album covers in iTunes), but you will notice that certain DAWs won’t deal with both.  That’s not a problem, as you can easily convert between them with something like SoX or FFmpeg, or free software like Audacity.  AIFF files typically carry .aif extensions.

EDIT with a sneak pro tip: AIFFs and WAVs carry the same uncompressed PCM audio, just in slightly different wrappers. So if Joe Schmo who uses GarageBand sends you a bunch of AIFFs that your Windows DAW can’t read, the fix is instant and lossless. Some software sniffs the file header rather than the extension, so simply renaming .aif to .wav occasionally works, but a quick conversion is the safer bet.
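
If you’d rather convert than rename, one FFmpeg call does it losslessly.  A sketch only – this assumes FFmpeg is installed and on your PATH, and the file names are made up:

    import subprocess

    # Lossless AIFF -> WAV: ffmpeg pulls the PCM out of the AIFF wrapper and
    # rewraps it as RIFF/WAVE. No audio quality is gained or lost either way.
    subprocess.run(["ffmpeg", "-i", "joes_garageband_track.aif",
                    "joes_garageband_track.wav"], check=True)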

MP3 (MPEG Audio Layer III) files are compressed, lossy and very common.  MP3 shouldn’t be confused with MPEG-3, which is a video format.  MP3 compression is done by throwing away data which isn’t needed, mostly due to a phenomenon in human hearing called auditory masking.  That’s a pretty fancy way of saying we don’t hear everything in an uncompressed file anyway, so we might as well throw some away to save space.  There’s no shortage of debate there, but it seems to work pretty well.  MP3 was a proprietary format, owned and licensed by the Fraunhofer Institute for Integrated Circuits, and that’s why not all software could make an MP3, at least until very recently.  The Fraunhofer Institute declared MP3 an obsolete format in May of 2017 and terminated its licensing program.  Whether this means the MP3 will die or proliferate further remains to be seen.  For now, it’s still the de facto compressed file format, and typically what you get when you rip a CD with iTunes or other software, or download that free track from your favorite polka band.

Other Major Formats

There are so many audio formats, we’d be hard pressed to talk about them all here, but there are a few you should know about.

CDDA (Compact Disc Digital Audio) is the format for compact discs.  It’s essentially the same uncompressed 44.1k, 16 bit PCM audio a WAV or AIFF holds, just with different headers.  If you happen across a .cdda file (probably ripped from a CD), you’ll probably be able to play it in anything that can play a WAV or AIFF.

AAC (Advanced Audio Coding) is a compressed, lossy format developed as an MPEG standard by a group of companies including Fraunhofer and Dolby, and designed to be a successor to MP3.  Apple subsequently developed a copy protected version that uses DRM (digital rights management) for the iTunes Store, and AAC is generally the format of files you buy from iTunes.

FLAC (Free Lossless Audio Codec) is exactly what it sounds like, a free, lossless, compressed format.  Great for archiving files, since it can reduce size up to 60% without losing any quality.

WMA (Windows Media Audio) was originally a compressed, lossy Windows format designed to compete with MP3.  It’s been expanded to include a lossless version, a multichannel version, and a lower bit rate version used for voice.  You may encounter Windows system files or other similar things in WMA format.  WMA files can be copy protected.

AC-3, better known as Dolby Digital, is a lossy 5.1 surround sound format used in DVDs, HDTV and DTV (digital television).  Its highest sample rate is 48k.  A side note: the “point one” in surround sound refers to a Low Frequency Effects (LFE) channel which has less bandwidth.  The LFE is where the shake-your-boots BOOM in movies comes from.

What To Use?

At this point your question may be why should I care, or what should I use?  The truth is, audio is audio, and when it comes to format choice, utility is the main consideration.  Your DAW will do what it does, and I recommend letting it do that.  When you’re deciding what to export, think about the use at hand.  You’ll want to export either WAV or AIFF for mastering, making CDs, importing into a video project, or other continuing full resolution work.  They’re really the same thing, so think about the software you’re using next, or what the person on the other end needs, and use that.

When it comes to delivery to the general public, think about the end user rather than entering into an endless debate about the perceptual quality of various algorithms or codecs.  If you’re selling downloads to normal people, you’ll probably want to use MP3s.  If you’re delivering files to a digital distributor, you’ll probably be asked for CD quality WAVs, and in some cases, distributors will take AAC files for iTunes.  If you want, you can also distribute lossless files in FLAC format, or give people access to WAVs, or even distribute OGG/Vorbis files, an open source container/codec combination very similar to MP3.  Beware, though, that not all players support these less common formats, and your user may end up with no way to listen.

As far as bit rate, I like to give my loving, devoted fans the highest quality MP3s I can, so those are encoded at 320k, but it’s also a good idea to make a 128k version for web-based preview listeners, because the smaller size will load faster and stream better.  Some submissions you make (say to internet radio or licensing folk) may have size limits, too, so those smaller MP3s are useful.  In the end, this is a judgement call, and if it’s for your own personal listening, then do whatever you like best.
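
For what it’s worth, here’s how I’d script those two MP3 versions from a WAV master – a sketch only, assuming FFmpeg (built with the LAME encoder) is installed, with made-up file names:

    import subprocess

    # One 320 kbps "store quality" MP3 and one 128 kbps "web preview" MP3
    # from the same WAV master (assumes ffmpeg with libmp3lame on your PATH).
    for kbps, label in ((320, "hq"), (128, "preview")):
        subprocess.run(
            ["ffmpeg", "-i", "my_song_master.wav",
             "-codec:a", "libmp3lame", "-b:a", f"{kbps}k",
             f"my_song_{label}.mp3"],
            check=True,
        )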

One other consideration is something we addressed in “Keeping Track”, which is metadata.  There are many situations where you’ll want some data other than audio in your file.  Whether it’s so consumers know who you are, or licensing agents know who to contact, you’ll need some extra info in there, so the file type you use to send to certain people needs to contain that data.  That’s what we covered in “Keeping Track”, so if you haven’t seen it, check that article out.

Resources

As with any very technical topic, an exploration of audio file types can go quite deep, and we don’t have room here to cover everything we could think of, so here are some recommendations for further reading:

  • Principles of Digital Audio by Ken Pohlmann
  • The Audio Expert (chapter 8 especially) by Ethan Winer
  • Mastering Audio by Bob Katz
  • How Music Got Free by Stephen Witt (for a great history of the MP3 format)
  • Any Wikipedia page about “audio file types” or specific types – google “WAV Wikipedia”, for example.

If you’re new to audio or recording, then hopefully we’ve helped you at least begin to sort out file types in digital audio, and if you’re a veteran, I hope you’ve reminded yourself of a few things here.  For the most part, file types are pretty straightforward, but you can run into confusion at times, especially when a DAW or other piece of software gives you a thousand choices.  It’s nice to remember a few basic tenets, cut through the noise, and get back to creating.  So file this away, and we’ll see you in the studio!

Did you know I have a master’s degree in “Music, Science and Technology” from Stanford University?  That means I can go back and forth between Macs and PCs in the studio, and talk at length about debt.  Find me on Facebook and Twitter and other various stuff @AaronJTrumm.

Valhalla DSP – Good Stuff, Man

May 14, 2020 by Aaron
Reviews
aaron j. trumm, audio, audio effects, audio plugins, delay, effects plugins, freq echo, freqecho, modulation, music production, nquit music, reverb, sound, space modulator, valhalla dsp, vintageverb

If you’ve followed me for long, you know I don’t often write reviews on my personal blog (I do it all the time for writing clients).

But, as I’ve grown my business both in music and in writing, I have occasionally come across really cool things which are also made by really cool companies. Sometimes (and this is new), I’ve even sought out an actual sponsorship relationship with them.

So today I thought I’d just quickly let you know about Valhalla DSP. They make effects plugins which are super cool, great sounding, and supremely easy to use.

Disclaimer And Background

The disclaimer – in this case, I do not have any kind of official sponsorship relationship with Valhalla DSP as of this writing.

However, they did very generously provide me with a license to their most popular product (Valhalla VintageVerb), when I was writing an article called the 5 Best Reverb Plugins Compared for Soundfly’s FlyPaper blog.

They were also kind enough to find an old account I had never touched, point out the other products I already had licenses to, and attach Valhalla VintageVerb to that. All in response to me simply asking if they had any demos, because I was writing an article.

This may seem normal, or inconsequential – but it is NOT. This kind of friendly, helpful response to a fellow professional is rare, and when companies are great with me, I want to be great back. So, I’m writing this as a way to give an extra thank you to Valhalla.

Love At First Hearing

I was already going to write something, because I love being treated well, but then I happened to actually start USING my Valhalla plugins, and after about a session and a half, I found myself inserting Valhalla effects pretty much as a default.

“Ok I need to play with a reverb here…” *mindlessly opens Valhalla VintageVerb*

“Hmm how bout a delay?” *Inserts Valhalla FreqEcho before realizing it*

“What if…”  *Valhalla SpaceModulator* “….oh..yeah…that!”

Great Sounding, Easy And Fast

The thing that makes me gravitate toward these plugins is they’re so easy, yet they sound really good. It’s not a hundred years before I find the basic sound I’m looking for. It’s – click click ah ha!

With simple interfaces and controls, it’s easy to find a preset, tweak a preset, and get a sound, and the thing always sounds great. Plus they load quickly and they have yet to jitter, drop, or crash.

And this is working with them in COVID-19 isolation on my laptop, not my monster studio desktop. (Granted, this laptop is no joke – but it’s not a full-on production machine.)

I’ve been able to quickly dial up long tail reverbs, short ambient rooms, shimmering modulation, and super important – a nice stereo vocal delay effect that I used to have to spend three times as long dialing up on another delay. I use that kind of short delay a LOT on vocals, and FreqEcho just saved me a LOT of time.

Affordable AF

Oh – and these things are affordable as hell. FreqEcho and SpaceModulator are freeish (their word – they’re free with purchase), and VintageVerb is a mere $50.

That’s sort of amazing.

That’s It, Go Try These Guys

That’s it, y’all. For now. I just wanted to sing these guys’ praises because I’m impressed. Maybe I’ll have more to say later, but for now, just go try Valhalla DSP. I recommend them!

Keeping Track

August 12, 2019 by Aaron
Instructional Stuff, Music Business, Published Work, Recording Magazine
aaron j. trumm, aaron trumm, audio, metadata, metatagging, metatags, recording magazine, what do i put in my metadata

Meta-tagging and the data you need to keep 

This article first appeared in Recording Magazine. I reprint it here with permission, and I encourage you to subscribe to that publication, as they are a stand-up bunch of folks!

It may not be a glamorous notion, but handling data properly is one of the biggest differences between a professional and an amateur in music production.  When it comes to selling music to other professionals or making sure your image is awesome when fans find you, good tagging and tracking of recordings is of the utmost importance.  So we’re just going to dive into the where, how and why of keeping your metadata. 

Metatags 

Simply put, metadata is data about other data.  In our case, it’s data about audio data.  Metatags are how files embed this metadata into themselves.  This could be anything from the title of a song to the format of a file.  Audio file formats include various amounts and types of metatags, and you’ll want to fill in this data where appropriate.  We’ll talk about what you’ll need in a bit, but let’s start by talking about a couple of file types and their metatags. 

WAV 

Wav files are our most common uncompressed audio format.  Technically these files have metadata in them, but it’s all technical data about the file (more on that in an upcoming article), and nothing you want to touch.  From our point of view, WAV files contain no extra data.  Don’t confuse this with the metadata that gets added to a Red Book CDDA disc, which is the way CD players know things like song title and artist name.  That data is stored on the disc, not in the .wav itself. 

EDIT for full accuracy: WAV files DO indeed carry metadata, not just technical stuff. Very few software packages can read this data, though, so it’s likely that whoever you send a .wav file to will never see it, even if you add it. AND it’s likely that if they save the file in any software that doesn’t deal with it, it’ll get lost. Nevertheless, some people still embed their metadata in their .wav masters – I am one. As of this edit, I use TagScanner for this job – it’s free and awesome.

MP3 

MP3s are currently the most common compressed format, which is usually how we deliver downloads to fans and associates.  MP3’s ubiquity may change, but probably not soon, and since MP3s carry so much useful metadata, they’re good for getting the concept down.  There are a couple of different metadata formats associated with MP3s – ID3v1 and ID3v2 – but we needn’t concern ourselves with that right now.  What’s important right now is what data we need. 

Others 

Many other file formats exist for audio, including AIFF, OGG and FLAC.  Some carry useful metadata and some don’t.  We’ll be diving into file formats in more depth in a later piece, so for now, let’s stick to the basic concepts and use MP3 as our basic guide. 

What You Need And Why 

So what data do we want to associate with a song?  Obviously the title and artist name would be a good start, but there’s a lot more than that.  To start to understand what you really want to embed in a file, you just need to think about the purpose of a recording.  There are a few potential purposes.  You might be selling the tracks to fans, you might be giving the tracks away for promotion, and you might be using the tracks for business-to-business transactions, such as licensing. 

The Author’s Database

That last scenario is the big one.  When you pitch a track to an agent or a music supervisor, there’s a wealth of information they need to make it easy to potentially use a song.  Fortunately, some of it is the same stuff you want your fans to have, and all of it also serves the purpose of creating a professional look and feel to your work.  So, let’s look at some information from the extended Mp3 standard that we might want. 

  • Title
  • Artist 
  • Album 
  • Album Artist (could be different than the track artist) 
  • Grouping 
  • Composer 
  • Year 
  • Track number (# of #) 
  • Disc # 
  • Genre 
  • Comment
  • BPM 
  • ISRC  (a unique identifier for a given recording) 
  • Publisher 
  • Copyright 
  • URL
  • Album Cover 
  • Lyrics 

Title, artist and album are pretty self-explanatory.  If your song has a featured artist, put that in the TITLE, not the artist.  This way, programs like iTunes don’t separate the song from your others, creating organizational havoc for your listeners.  Album artist might be different, say if you’re a guest on a compilation or something. 

Fill in the year, track number if the song is part of an album, and if the album is some sort of multi-disc set, fill in the disc number.  This is kind of an antiquated notion unless you’re also releasing a physical product like CDs. 

If you fill in your genre, make it as accurate as possible, and don’t overload it.  Use no more than 3 genres (most of the time, it’s a dropdown anyway).  Especially when it comes to licensing, your agent or music supervisor or indie film maker needs to be able to find you in a genre search and not be way off. 

Grouping was intended originally for movements in classical music, but you could use it for subgenres, or, as some people do, to identify the companies clearing the master and publishing sides of a song – which just means who a person has to deal with to pay for a license. 

In the composer section, you can list all the composers or the main composers, along with any writer information you have, such as writers’ percentages and performing rights identification numbers.  This is also a good place to identify a public domain song, or, if it’s a cover, the original writer. 

Comment may be the most useful section.  Here’s a good place for contact information, which is especially important if you’d like somebody to license your music and pay you!  It’s a good idea to put your website in here, writers splits if you don’t put them in composer, the key, time signature, or any songs or artists the song sounds like.  Also helpful is any identifying info from your performing rights organization, such as BMI # or ISWC (an international standard song identifier), or even the ISRC, which, if you have one, is a unique identifier connected to a given recording.  Another useful thing to do if you’re pitching for licensing is include a link to an instrumental version somewhere on the web.  The comment section is limited, so be judicious with your character count. 

BPM, or tempo, can be important for anyone looking to use your music in a video production, or for DJs who’d like to work your song into a set.  If the song is published, or you have a publishing company, you can enter that info and your copyright info (you can do this even if you have yet to register), and you may find a separate URL tag.  Use that, but also put your website in the comment section. 

Album cover is super important.  Notice that when you open an mp3 from a pro, there’s always an image.  Make sure, if your track is for sale or any kind of public consumption, that you get some kind of cover image in there.  Square is best.  Pro tip: album cover, title and artist are the only things you will see in every mp3 player. 

Of course, lyrics are also an option.  It’s up to you whether to include lyrics, but if you intend to pitch the song, the lyrics are quite helpful, and it’s a nice thing for fans to have.  Not all players will show the lyrics, but iTunes does. 

How To Do It 

If you’re wondering how to fill in your metatags for, say, an mp3, it’s not difficult.  Many software packages do this.  Some mastering software will let you create tags right in the software, some audio software such as Audacity has metatag features, and there are dedicated tagging packages such as Mp3Tag, TagScanner (both free), Tag & Rename and MediaMonkey, to name a VERY few.  A lot of packages support various formats like AIFF, as well.  You can even add metadata when using command line mp3 encoders, if you’re into that sort of thing. 
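
And if you are into that sort of thing, here’s a tiny tagging sketch using the Python library mutagen – an assumption on my part that you have it installed (pip install mutagen), and the file name and tag values are made up:

    from mutagen.easyid3 import EasyID3

    # EasyID3 exposes the common ID3 frames as a simple dictionary.
    # (If the mp3 has no ID3 tag at all yet, you may need to add one first.)
    tags = EasyID3("my_song.mp3")
    tags["title"] = "My Song (feat. Somebody)"
    tags["artist"] = "M.C. Murph"
    tags["album"] = "Some Album"
    tags["date"] = "2019"
    tags["genre"] = "Hip-Hop"
    tags.save()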

Editing MetaData in iTunes

Probably the easiest thing to do is open your song in iTunes, click “info”, and edit the tags to your heart’s content.  When you do this, you’ll notice that you can do this to any mp3, which means that anybody can change your tags around.  You’ll have to live with this.  It’s how people organize their collections. 

If you try a few different packages, you’ll see some variation in what tags are available.  iTunes doesn’t include the ISRC, URL, publisher or copyright tags for example.  This doesn’t mean they aren’t there, but if you were to open the file in a package that does use those tags (TagScanner, for example), you’d see they’re blank. 

TagScanner – the Author’s favorite free MetaTag Editor

In any case, you’ll edit your tags, and hit save.  The metadata is embedded into the file. 

But Wait… 

You can keep a lot of the data you need to associate with a track in the metatags embedded in the file, but you may find that you need more data at your fingertips.  For example, if you submit your track to a few micro-licensing libraries, you’ll find yourself entering in a description and mood over and over.  So, it behooves you to keep some information somewhere connected to your songs, so you don’t have to dig around every time you need it.  You can keep a simple spreadsheet, use a database, or there are a number of software packages and online solutions that can help you keep organized. 

You should keep all the data you would put in a metatag handy, plus more – here’s a big list of data you might want to keep if you were to use a simple spreadsheet (there’s a bare-bones sketch of one right after the list): 

  • Credits – any players, engineers, studios, etc that should be credited. 
  • Title 
  • Artist/Act Name – you may have more than one, so keep track if you do 
  • Description – a short description of the song that tells people the basic topic, mood, genre and instrumentation 
  • Moods 
  • Release date – you’ll get asked for this a lot 
  • Album – if it’s part of an album, keep track of which 
  • PRO # – the id number assigned when you register the song with BMI, ASCAP or your PRO. 
  • ISWC – the international song identifier used across PRO’s, issued by your PRO. 
  • ISRC – the unique identifier for the recording, issued by the RIAA in the U.S., or your ISRC agency, or your distributor (this is easiest). 
  • Publisher – if you have a publisher, or if it’s your own publishing company, or none if neither 
  • Writers splits – who owns what percentage of writing 
  • Master splits – who owns what percentage of the recording 
  • Genres – a short list of genres 
  • Time – of the track, not the day! 
  • BPM – tempo 
  • Time signature 
  • Key 
  • UPC – if you have a UPC for the track, keep it with the track info.  Keep the album UPC with your album info.  UPCs are issued by the GS1, or your distributor (this is easiest). 
  • Keywords – a comma separated list of keywords so you can copy/paste when asked for them on a form 
  • Lyrics 
  • Toggles – whether you’ve registered with your PRO, the copyright office, SoundScan or SoundExchange.  You can use Y/N, Yes/No, 1/0, whatever feels best to you. 
  • Notes – some general notes 
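
To make that concrete, here’s a bare-bones sketch of starting such a spreadsheet as a CSV file in Python – the columns just mirror the list above, and the example row is made up:

    import csv

    columns = [
        "Title", "Artist", "Album", "Release Date", "Description", "Moods",
        "Genres", "BPM", "Key", "Time Signature", "Time", "Credits",
        "PRO #", "ISWC", "ISRC", "UPC", "Publisher",
        "Writer Splits", "Master Splits", "Keywords", "Toggles", "Notes",
    ]

    with open("track_catalog.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=columns)
        writer.writeheader()
        writer.writerow({"Title": "My Song", "Artist": "M.C. Murph", "BPM": "92"})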

There You Have It 

There you go!  The topic may not have been as sexy as some other things, but metadata is the nuts and bolts of efficient communication, and it makes all the difference when you’re building a business out of music.  Keep your data clean, handy and up-to-date and you will find it a lot easier to spread the word, engage your partners, and make money.  Even if you don’t plan to build a music business, it’s nice to know where everything is.  So, fire up the computer and get organized! 

Mixing, EQ and Processing Questions

November 9, 2006 by Aaron
Audio Instruction
audio, audio questions, eq, equalization, mixing, processing, recording


I believe this is the last of the questions from years back that I have left… again, I don’t find anything I said blatantly dumb, but it’s interesting to see this old point of view…

Questions

I am a single acoustic act. Guitar and vocals. I use a Takamine FD360SC which I play through a Trace Elliot TA100R. For vocals I use a Shure Beta 58. Both are connected to a Mackie 1402VLZ. The Beta via channel 1 XLR and the Trace via a DI socket (pre or post eq). The Trace has a 5 band graphic eq, which I am currently leaving flat, it also has several onboard effects such as reverb and delay which I am not using. Both are eq’d at the board via a 3 band parametric eq. It never seems to me as though I am getting the correct mix! I generally boost the mid and the high on the vocal while leaving the low flat. I then boost the mid on the guitar and leave the high and low flat. Any golden rules ? Hopefully you will bear with me for a couple more questions, if not…I appreciate your time on the first. The next questions deal with effects. I have used reverb for years (Alesis Midiverb which is a 1/2 rack unit which I generally leave set to factory preset 21 “Medium/Warm” 1.4 sec delay). I am now using an Alesis Nonoverb for delay (because I don’t like ANY of the reverbs). Input and output set 3/4 of the way up and mix set 1/2. I don’t feel as though I am getting the most out of these effects. Any suggestions ? (Please don’t say “buy a Lexicon!”). Finally, I am considering adding the following: Compressor Rack mount Eq Chorus Noise gate Is there any particular order in which I should daisy chain them ? I want to run two loops one for guitar and one for vocals. Which effects should I dedicate to vocals and which to guitar ? My plan was: Vocals: Reverb+Delay+Eq+Noise gate Guitar: Compressor+Reverb+Delay+req+Noise gate

Answers

the golden rule is really really tweak a LOT until it sounds right 🙂 …vocals and guitars can clash, so you might try eqing them in opposite directions, like maybe CUT the mid on the guitar a little…usually to start with on vocals I cut the high a little, cut the low a little and boost the mid a little. Sometimes I totally cut the high and low and boost the mid all the way. I call that telephoning, although with that three band eq, you can’t really make the actual telephone effect exactly. But as a matter of fact, that extreme eq on the mackie sounds surprisingly normal on some mixes – unfortunately I doubt an acoustic guitar/vox mix would be one of those mixes 🙂 Sounds like you need to take a little more time with the mixes, and tweak everything in every direction just a little more. Also, the mixes might actually be pretty good, but your ears are tired – get 2nd opinions – also, if you’re going for a sound that sounds like you’re used to hearing on records, you’re probably missing some ingredients (like reverb and compression)
Don’t buy a lexicon, I’m really against buying the biggest baddest stuff without first pushing what you’ve got to the total limit. Well first off, learn the ins and outs of editing the effects (again the golden rule is play more) instead of just using presets….uhm, use them in layers, use them in extremes, use them very subtly – basically just spend more time running the gamut and experimenting – uhm…I would set the mix all the way wet, then feed the reverb returns to a channel on the board and mix the effect with the dry signal that way (and also, you can use the eq on the channel if you need to on the effect) – also if you’ve got a natural type sound (ie: guitar and voice) maybe you don’t want to use electronic reverbs so much – maybe you should find a great hall and record in there (I know I know, good luck on finding a hall 🙂 ) – then turn around and use your reverbs for some other kind of project – mostly though, just spend more time I think – every mix you do, go through every preset and see what’s the best, then if something isn’t PERFECT, start editing and tweaking – try everything under the sun

well some people other than me would probably know better and maybe tell you different, but here’s how I would chain them, assuming that this is to run an entire mix through: chorus, compressor, eq, noise gate – although I wouldn’t run an entire mix through a chorus, and I wouldn’t necessarily keep the setup the same for all projects – that’s another part of my little golden rule about playing – you should see how much repatching I do (I wish I had a magic digital auto-routing super patchbay 🙂 )

I would add compressor to vocals and maybe take it off guitar. You almost always need compressor for vox and I’m not sure I’ve ever compressed acoustic guitar. Like I said though, for the effects, I wouldn’t have them in an insert loop (that is, the delay and reverb) – what I do when in a studio where I can do this is, I have compression on the vocals on an insert loop, and effects like delay and reverb triggered with aux sends and returned to an actual channel on the board.

—
So yeah there’s those – if you want more, just email me a question at aarontrumm @ nquit . com and I’ll give you my opinion, which is mostly what it is when you talk about audio and art. 🙂

Meanwhile, you can still grab free tracks over at my music download site. 🙂

— Aaron

Graphic EQ or Parametric?

November 8, 2006 by Aaron
Audio Instruction
audio, eq, equalizers, graphic eq, parametric eq, recording


This is another one of the questions that came through 15 years ago that I didn’t lose – I definitely have a better, more sophisticated view now, but I don’t see anything blatantly wrong here 😉

Question

…I have been setting up a decent home studio over time now, and would like to pickup a rack mountable EQ for use at mixdown and to cut frequencies on certain tracks, especially guitar… Would I be better off getting a 15 or 31 band graphic, or will something like the ART dual channel 4 band parametric tube preamp be enough? Will the tube quality outweigh the lesser band control in terms of sound quality? Will 4 bands be enough?

Answer

First off, I very very rarely mix down with eq on the whole mix, but it can be a useful technique.

Well, ok. Obviously a 31 band graphic is probably better than a 15 band graphic, and probably more expensive. Parametric eq tends to give you a lot more control (parametric meaning you can change the frequencies you’re working with). It’s kind of a trade off in my opinion. Parametric is better, really, but you tend to get fewer bands with parametric, which means fewer frequencies to affect at once. I guess parametric eq is more expensive to make or something. It would be nice to have 31 bands of parametric! 🙂

As far as the tubes, I wouldn’t take that into consideration for gear like this until the end. I use tubes, and I like to, but frankly, I defy you to hear them working 🙂 I don’t know that I see the value of using tubes in an eq – I tend to just use tube preamps to track through and run mixes through. That having been said, there are tube-preamp units that have great eq sections, and I also might lean toward the ART, they make some cool stuff. As far as graphic eq’s, Rane pretty much makes the best, I think.

As far as four bands being enough, think about your applications, and your possible applications. Will you need to be affecting more than four bands all at once? Like I said, with parametric, you’ve got all the control for dialing in what frequencies you’re messing with, so that might be the ticket if you don’t need to affect more than four at once. As far as tube quality outweighing having fewer bands, no. I think you should never put tubes before functionality. If you need to get something without tubes so it’ll do what you need, get that, save up, and get a dual tube preamp to run the whole thing through. Also, by the way, you should know that quite a few engineers think the tube thing is crap (I don’t – like I said, I use ’em and love to). So it’s not necessarily a given that tubes are even better. And by the way, tubes burn out eventually.

—
And that was that question. Hmm – let me know if you have questions too – hopefully I have more and better info to answer with! aarontrumm @ nquit . com

And – get you some free music, while you’re at it, so you can see what I do with this “knowledge”.

A vocals mixing question

November 8, 2006 by Aaron
Audio Instruction
audio, mixing, recording, vocal mixing, vocals


Many (like 20) years ago, for a short time, I opened up to questions on my site and people would email me stuff…I have just a couple of those that haven’t been lost and here’s one…

Question

…you know how some CD’s are mixed so that the singer’s voice sounds “farther away” from your ears with the headphones on? and some sound very up close and in your face? well basically, my voice sounded way to up close and i was wondering how to get it to sort of blend in better with the rest of the mix. this may be a silly question but like i said, all of my recording so far has been instrumental so i havent had to worry about it so much, and of course i am self concious about my voice right now. i dont think its simply the recording level, because that just makes the sound quiet, but doesnt change the sound in the audio field. so is it a question of mic type or placement? or of how it is mixed? or some other professional equipment effect that i dont know about? i would love to hear your thoughts on this!

Answer

Actually this can be a tricky thing for me too. You’d be surprised that sometimes it really is level…try lowering the level on the vocals. Also yes, mic placement is important. You definitely get a different sound if you stand farther away. 1, 2, 3 feet and even farther are all distances I’ve used. Also, reverb can be what you’re looking for. If you try adding a little more reverb and less of the dry vox in the mix, it tends to sound “farther” away. Sometimes maybe even eq will help you here (with eq on vox I almost ALWAYS roll a little low end off, give it a little extra hi-mids and a tiny bit more high end). Sometimes I try making the vox a little thinner. Sometimes I try the opposite of what I’m thinking, and that sometimes helps. Try doubling up the vocals. (Doubling up sometimes cuts through my self consciousness too…don’t do it unless appropriate though)…tweak a lot, try all the things you can think of…also compression…technically, compression will do the opposite of what you’re asking, but compressing the vocal is so common that to your ear, it’s gonna’ sound more like a finished recording and thus less like naked vocals…at least that was true in my mind on a number of occasions.

Watch out for your self consciousness though. You could be getting mixes that are perfect, but sound WAY out front to you, because you’re scared (let other people hear the mixes…I try to have one or two very select people in the room with me mixing, or at least have them listen later). I always take what the vocalist tells me about the mix with a grain of salt. They (vocalists) almost always bury their vox. You should definitely try things out…try way too much reverb and not enough dry signal, and see what that does, etc…on every mix I’ll try one extreme and then another, and everything in between, just to see what the stuff REALLY needs to sound like.

First things first though, get over that self consciousness! 🙂 If you’re a guitarist singing some vocals, become a vocalist too. Take lessons, if that’ll make you feel more confident (it probably will, since it’ll probably make you better). If you’ve already taken lessons, take some more. I don’t know how good a vocalist I am, but on my recordings, I am confident and do whatever I want (even when I shouldn’t probably 🙂 ); one of the reasons is I took a singing class and it scared me to death and that helped; also I ALWAYS sing in my truck driving around (break the speakers in your vehicle, you’ll see *grin*). Make it a point to sing in front of people. Not just at concerts, but always. I don’t know how much of this stuff you already do, but it helps your MIXES, trust me.

Also, record your voice a LOT. Every DAY make three or four a cappella vocal recordings and maybe even some spoken word, and listen to it a lot (they don’t have to be good recordings – in fact, vary it a lot, listen to your voice in lots of different contexts). You know, get real used to hearing your voice on tape, ’cause it sounds different. I’ve recorded hundreds and hundreds of songs (way way way more than I’ve released, I mean some were on a freakin’ boom box, but hey, I got used to hearing my voice on tape)…all this so you won’t bury your vocals in the mix. 🙂 Now just watch out you don’t get cocky and crank ’em up too much!

An added ps: Don’t mix with the monitors loud. Vocals tend to get too hot in the mix this way. Also you burn your ears out, of course.

It’s quite alright with me, btw, if you want to send questions like this – I’ll post answers – I’m at aarontrumm @ nquit . com 🙂

Meanwhile, grab some free stuff over at aarontrumm.com 🙂

Basics Of Analog Audio Mixers

November 6, 2006 by Aaron
Audio Instruction
analog mixers, audio, basics, signal path


From a 1994 article for the University of New Mexico’s Intro to Electronic Music class:

Signal Flow:

  1. Mixing boards contain multiple “channels”, each of which has a separate input and is controlled separately. These channels are then assigned to either a “buss” or a master output. Every channel assigned to a given buss or master is mixed together and output in its blended fashion through the output jacks corresponding to each buss or master output.
  2. Channels on all mixers (with rare or no exception) have either an On switch or a Mute switch, or both. On switches simply turn that channel on. If no sound is coming out of a channel you think should be outputting, the first thing to check is the On or Mute switch. Mute switches do the opposite: they turn the channel off. If an On switch is not engaged, no sound will be output. If a Mute switch is not engaged, sound will be output.
  3. Signal fed to a given channel’s input proceeds first to its trim section, which allows the input level (volume) to be adjusted; then to its EQ (equalization) section; then to the pan-pot, which simply adjusts to which side of the stereo spectrum (see stereo concepts) the channel is placed; then to that channel’s “fader” (or gain knob, depending on the board). A fader (or gain knob) is simply a volume control. It controls how much overall level is being output from that channel. This signal flow is represented somewhat by the physical layout of each channel: input jacks are at the back, trim knobs are next, then the EQ section, then the pan-pot, then the fader.
  4. After flowing through trim, EQ, pan-pot and fader, signal is fed to all busses or master outputs that a given channel is assigned to. For example, if a channel is assigned to buss 1 (usually by pressing a button above or near the fader), then signal will be fed to buss 1, and buss 1 will output that signal. Busses are simply outputs. All busses have a volume control of their own. That volume controls the overall level of all the sound being fed to that buss (meaning this sound is usually a mix of many channels). There’s a toy sketch of this whole path just after this list.
  5. There are two basic configurations for output on mixing boards:
    1) All busses are automatically fed to the mixer’s outputs (which usually consist of at least two sets of outputs: one pair for monitoring and one pair to go directly to tape – “Monitor Out” and “Tape Out”).
    2) The mixer has a “master left-right” (which is simply the main master control buss), and individual busses can be assigned to that output. In this case busses are not automatically output to the master output jacks. In either situation, most boards have another set of output jacks which correspond to each individual buss. (For example, if there are four busses, there are four “buss output” jacks, in addition to the master output pairs.)
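
A toy sketch of that signal path in code, if it helps to see the order of operations – this is not real DSP, just plain multipliers standing in for trim, fader and buss levels, with the EQ left as a pass-through:

    # One channel: input -> trim -> EQ -> pan-pot -> fader.
    def channel(sample, trim=1.0, eq=lambda x: x, pan=0.5, fader=1.0):
        x = eq(sample * trim)                    # trim, then EQ
        left, right = x * (1.0 - pan), x * pan   # pan-pot splits the signal to L/R
        return left * fader, right * fader       # channel fader sets the overall level

    # A buss: every channel assigned to it is summed, then the buss level scales the blend.
    def buss(assigned_channels, buss_level=1.0):
        left = sum(l for l, _ in assigned_channels) * buss_level
        right = sum(r for _, r in assigned_channels) * buss_level
        return left, right

    # Two channels assigned to the same buss:
    mix = buss([channel(0.5, trim=0.8, pan=0.3), channel(0.2, trim=1.2, pan=0.7)])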

Common Channel Characteristics:

  1. INPUT – Channels on most boards consist of at least two input jacks (usually at the back): 1) Line input. This is usually an RCA (or “phono”) jack or quarter-inch (large “headphone”) jack (although really high-end professional equipment rarely uses anything except XLR connectors, even for line input…most good boards have a choice). This input accepts line-level input, that is, input that does not need to be boosted before proceeding to the channel controls. Examples include cassette decks, CD players, synths, etc. This input is usually labeled “line in” or “tape in”, or similar (sometimes there are both “line in” and “tape in” jacks; they are essentially the same).
    2) Mic input. This is usually an XLR (three-pronged, or “cannon”) connector, but it can be other types. This input accepts mic level input, that is, a level that DOES require boosting before it will be at an acceptable level. Examples include, of course, microphones, and presumably some other devices that output at mic level. This boosting is done by the preamp section of the channel. This input is usually labeled “mic” or “mic input” or similar.
  2. DIRECT OUT – Channels usually include a “direct out”. This is (as far as I know) always a line-level output. Direct out jacks carry simply the signal that is being output from the given channel. Direct out jacks are not affected by other channels, busses, or master output levels. Plugging into a direct out does not affect the channel’s output into busses or master outputs, and does not interfere with the flow of that channel into a mix.
  3. INSERT or ACCESS – Channels usually include an “insert”, or “access”. This can be accomplished in one of two ways: either with one jack that outputs the channel’s signal and then returns it, within the same jack, or with one jack that carries the channel’s output and another that returns it. Inserts interrupt the signal, and thus allow you to “insert” another device into that path. (For example, a person may use the insert to take the output of the channel to a compressor or effects processor, then return the modified sound to the channel…the modified sound is then what is fed to any busses or master outputs.) Plugging into an insert DOES affect the channel’s output to any busses or master outputs. It does NOT affect other individual channels.
  4. INPUT SELECTOR – Channels must have an input selector switch if they have more than one input jack. Typically this switch chooses either the mic or line input. That is, if the switch is set to “mic”, then input for the channel will be accepted from the mic jack and not from the line jack, and vice versa.

Thanks for reading! This was quite some time ago but still applies. If you want some current flavor, I’ve got free downloads available!

— Aaron

Basics of Equalization

November 5, 2006 by Aaron
Audio Instruction
aaron j. trumm, audio, eq, equalization, music, nquit music


From a 1994 article for the University of New Mexico’s Intro to Electronic Music class:

Equalization is a process of enhancing an audio signal. An equalizer is one type of signal processor, because it processes an audio signal.

Equalization (EQ for short) does basically one of two things: it either boosts or cuts a given frequency or range of frequencies by a chosen amount.

Outboard EQ processors come in two basic flavors: Parametric and Graphic.
Parametric EQ uses a system that allows a user to select which frequencies they’ll be working with. Thus, it is very flexible. Fully parametric EQ has completely selectable frequencies and bandwidth. Some EQs are semi-parametric, meaning they have some, but not complete, flexibility. Most mixing boards use some sort of semi-parametric EQ (although many use a system that is not parametric at all, but cannot be called graphic).
Graphic EQ has no method for changing which frequencies are affected, but it can still be quite flexible if it has a wide variety of bands. Graphic EQ derives its name from the fact that its sliders are configured in such a way that when a user looks at them, they graphically represent the configuration being used.

Mixing boards use an EQ system that is not usually truly parametric, but it is not a graphic system either (since, of course, it does not use sliders that graphically show the configuration). Many boards use a fixed-band EQ in which each knob affects a range of frequencies, such as high, mid and low (unlike graphic EQ, in which each slider boosts or cuts only one fixed frequency). Many boards use a semi-parametric system in which some sections of the EQ have a knob which will choose a frequency, usually accompanied by a bandwidth selector, which chooses how wide an area of frequencies will be affected. (This is nearly true parametric EQ, but usually only one or two frequency ranges provide this option, so we don’t normally call this a truly parametric EQ system.) Some boards have sections of their EQ that are semi-parametric because the frequency affected can be selected, but not the bandwidth.

EQ is used for many purposes, including enhancing an audio track, making a track more audible in a mix, fixing noise problems, helping audio tracks sound more real, or creating artistic effects such as “telephoning” (my word, meaning to cut the lows and some of the highs, making a track sound as if it’s coming through a telephone).
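
(A present-day aside: if you want to hear that telephone effect for yourself, here’s a rough sketch using the Python libraries scipy and soundfile – both assumptions on my part that you have them installed, and the file names are made up. It keeps roughly the 300 Hz to 3.4 kHz telephone band and cuts the rest.)

    import soundfile as sf                      # pip install soundfile
    from scipy.signal import butter, sosfilt    # pip install scipy

    # Read the track, band-pass it to the old telephone band, write it back out.
    audio, sample_rate = sf.read("vocal.wav")
    sos = butter(4, [300, 3400], btype="bandpass", fs=sample_rate, output="sos")
    sf.write("vocal_telephone.wav", sosfilt(sos, audio, axis=0), sample_rate)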
