Mixing Music For Lightning Returns: Final Fantasy XIII

Falk Au Yeong has been making waves in the video game industry with many of his recent projects, one of the most notable being the new Lightning Returns: Final Fantasy XIII. We were able to arrange an in-depth interview with him about his work and his favourite DAW – SONAR X3 Producer.

When did you first get into SONAR, and what version are you currently using?

I’ve been a Cakewalk user all the way since Cakewalk Pro Audio and Home Studio, mostly working with MIDI. The first version of SONAR I jumped into was SONAR 3 Producer, and I’m currently using X3 thanks to the extremely reasonable upgrade paths and goodies that come with each update. For example, I’m really digging integrated Melodyne right now and I’m not ashamed to say it!

What is the Video Game Orchestra? How did you become associated with them?

The Video Game Orchestra is a group that plays arrangements of video game music, founded by Shota Nakama in 2008. They’ve grown from humble beginnings to playing shows in all kinds of formats each year, from rock band, to acoustic, to full-blown “Rockestral” performances consisting of rock band, orchestra and choir. Recently VGO has also started to get involved in recording gigs and self-produced albums under Shota’s curation.

I got involved earlier this year when Shota e-mailed me asking if I’d be up to doing a series of small gigs for ‘a very important client’. Little did I know that I’d be working with one of the biggest independent game music production companies in Japan in real-time, via Skype and various audio streaming solutions. But that’s how we started! Crazy post-midnight sessions recording all manner of instruments, from drums to accordion to alto trombone to string quartet.

What roles did you play in Lightning Returns: Final Fantasy XIII?

Shota was contracted as an orchestrator for Masashi Hamauzu, one of the three composers for the game, and VGO would record his music. Initially, all I figured I’d be doing was prep for the recording sessions – stuff like preparing printed audio clicks, syncing with the demo material, tackling potential issues like pre-roll and tempo maps, even sequencing some temporary orchestral sections for context. We weren’t 100% sure at this point whether the tracking studio would be running other DAWs, so everything had to be prepared in a way that would make importing as quick and smooth as possible, given the amount of music we had to record.

Eventually VGO’s involvement evolved to the point that we were also contracted to mix the music we recorded. Masashi Hamauzu was with us in Boston for a week during the initial stages of mixing, where we auditioned everything via rendered stems at another studio and Hamauzu-san gave direction as to what he wanted out of the mixes. This relationship and direction continued over the course of the next month via the internet while the mixes were finished by me, natively in SONAR.


You’ve worked yourself into a position where you became the technical go-to guy on VGO’s Final Fantasy involvement. Can you elaborate on how you were able to do this?

Being in the right place at the right time, I guess! I am the type of person who learns why tools work the way they do, rather than simply learning how to use them. Whenever a problem cropped up, I already had a good idea of the solution. So while it is true that luck is involved, as it always is, you need to be prepared and fully confident in your ability to grab that chance with both hands when it comes around. You can’t sit around just waiting for ‘luck’ to strike, because if you then realize you weren’t prepared enough to make a good impression, you’ll want to kick yourself!

I also know my limits, and I am extremely careful about what I promise. Before I ever say I can do something, I make damn sure I can deliver. Dependability and honesty go a long way. It’s essentially an extension of ‘let your yes mean yes, and your no mean no’. To that end, for example, I subcontracted some tasks out to a great friend as a second set of ears and extra manpower during the initial stages of the mixing, because I knew I only had so many man-hours in myself each day.

Inside of SONAR, what did you use to work on this? What kind of workflow did you use on the music? Did you use existing audio or samples? Any sound design?

We tracked everything live, section by section, with percussion recorded in Japan. My first task was to organize everything so it could reasonably be worked with as a mix – everything from lining up the individual sections to follow that one rubato timing of the strings that we really liked, to comping and tightening up performances, to folding down the number of tracks to something practical (we tracked at 96kHz, so track count during playback actually did end up being a slight issue initially), to fixing a couple of significant copyist errors that no one caught during tracking itself but that were obvious in context.

As my task was to bring the composer, orchestrator and producer’s ideal aural palette into existence, our evaluations at many points also resulted in sound reinforcement – especially in terms of the really low percussion – the huge impacts that give that distinct Hollywood feel! There’s quite a smattering of sequenced material to beef up the production wherever it’s needed. I will say there are a couple of car explosions from a popular sample library buried somewhere in there.

SONAR itself comes with as complete a toolset as anyone could ask for to get everything done. While most of the reverb in the final mixes does come from outboard hardware units (my OCD and familiarity demand it!), I have the full version of BREVERB2 via Cakewalk/Overloud’s upgrade path and had absolutely no hesitation using it to fill in on certain sections where changes were required.

The simplicity of SONAR’s Freeze functionality meant that the additional sequencing wasn’t an issue at all. Again, due to the size and track count of the project files, I would typically freeze everything the moment I had it sequenced, as playing back a frozen stereo file is far less taxing than Kontakt triggering samples in real-time, regardless of how beastly your machine is. It was such a cinch in SONAR that there was absolutely no reason not to abuse the feature.

Of course, working in conjunction with other DAWs is a necessity in today’s environment, and this calls for efficient importing and exporting of raw audio data, so this project also put SONAR’s template capabilities through their paces. I had Track Templates for virtually everything imaginable – from importing sets of, say, ten tracks per string-section pass, to FX/bus routing, to logical send groupings for bouncing stems in a hurry. All completely painless.

What do you think of the recent increase of video game music production? Does SONAR allow you to produce high quality output for these productions?

I’ve been a video game nerd since I was a kid, and a VGM connoisseur for just about as long, so I’ve seen its evolution across the decades. I totally like where things are going as more and more people realize how important it is to the experience. George Lucas once said “The sound and music are 50% of the entertainment in a movie”, and for the long period when platforms weren’t powerful enough for fully-voiced characters, the music and sound effects of a game were of utmost importance.

Fast-forward to now, where we have almost photo-realistic experiences and fully realized orchestral scores, and despite the fact that music has to kowtow to voiceovers, our job has become no less important.

The most important epiphany I came to was that there simply is a difference in how Japanese developers approach game development as opposed to Western developers, and it reflects in the audio and music as well. Just like all other aspects of games, I believe there are strengths and weaknesses to both. The Western approach tends to underscore a lot more and integrate well with the experience, which lends itself much more to real-time interactivity based on what’s happening on-screen. The Japanese approach, on the other hand, values leitmotif above all else, and tends to be a lot more memorable and ‘collectible’ (if soundtrack sales figures are any indication, especially considering the difficulty of importing).

Having used SONAR for VGM-related work in everything from rock to electronica to R&B and orchestral, I’m just going to say right now that “video game music” really escapes any one definition! There are just so many different things required by different video game experiences that it’s not possible to pinpoint one single thing and say “There! You need to be good at that! And then you’re set for all video games under the sun!”

That being said though, SONAR, at its core, is an extremely powerful package, with a basic toolset that encompasses pretty much everything that you could imagine ever being required for any audio-related task.

Outside of pure familiarity, I stand by the product because it’s Windows-centric (like I mentioned previously, my personality is such that I really dig deep into technology to get what I want, so this just jives with me), has always had a reasonable upgrade path when those come about, and has hands-down the fastest workflow for the way I organize my productions. Want a parallel FX send? Right-click, boom, bam, all set.

What other projects and games have you worked on recently?

VGO also tracked choir for God Eater 2 for PlayStation Portable and PlayStation Vita, a game that is currently topping Japanese sales charts. I was heavily involved in that, and aside from the large choir sessions, tracked solo vocals in my home studio.

I’m also a huge Sonic fan, and my personal little claim to fame (fortunately or unfortunately) is contributing a bunch of music to the various crazy unofficial projects that the fanbase cooks up – the trailer music for a mod of Sonic Generations, or producing the soundtrack for two complete fan-made games, for example.

Last year, I also had a small part in sound editing for Bioshock Infinite as part of my internship with dSonic Inc., an audio outsource outfit based in Boston.

While this does almost border on jack-of-all-trades, master-of-none territory, I’m glad for all these different experiences across the various dichotomies of the game audio industry – from Western vs Japanese productions, to internships vs freelance, to official vs fan-driven undertakings.

What kind of rig do you use?

Right now I’m on an i7-3770K with 32GB of RAM running Windows 7, using MOTU hardware as my audio interface.

I mix primarily off headphones, which is another relic of familiarity; I harbour no false pretences about how different they are from monitors, but then again, when you’ve listened to music most of your adolescent and adult life through the cans, you actually have an easier time relating to how they sound. I’ve used the same model of AKG K702s for the past decade, always with a spare pair on hand in case something happens to the current pair. That’s how much I trust my perception with these. I also periodically check the low end with a pair of Dayton Audio BR-1s that I built myself.

I know you attended Berklee College of Music, which is a Mac-based studio environment. How did you manage working on your projects at home?

I got used to it! In terms of DAW cross-functionality, .wav files are a wonderful thing. While I do believe it’s good to be proficient with as many software and hardware packages as possible, certain facets of the various DAWs used at college cemented my resolve to keep SONAR as my primary DAW outside of college.

I don’t want to publicly lambaste any of the industry standards, but I’m not going to shy away from saying that there were multiple times at college when I was driven to the verge of foaming at the mouth having to sit through just one more real-time bounce.

Learn more about SONAR X3 Producer.