I recently had the chance to chat with composer Steve Pardo from SkewSound about how they approach scoring video games and VR experiences.
Last week I sat down with Steve to talk about a range of topics: what SkewSound is, how the team approached scoring Signs of the Sojourner (released earlier this year), and the differences between writing music for traditional video games and for VR. I had a lot of fun talking with him about his work, and I look forward to speaking with the team again in the future.
How Did SkewSound Get Started?
It’s a fun story. [SkewSound] is a team of four people: myself [Steve Pardo], another composer, Chris Wilson, and two sound designers, Dan Crislip and Nick Kallman. We worked together for a long while at Harmonix Music Systems on music games, some you may have heard of, like Rock Band, Dance Central, and Fantasia: Music Evolved. Then we were working together on a game that wasn’t a music game, which was a rare move for Harmonix. The game didn’t get made, but we tag-teamed on it as the audio team. Chris and I were both assigned music duties, so we worked really closely on the soundtrack. He and I built musical instruments for the game, like a bow guitar and an upright bass made from a drum shell and a banister. We found that building our own instruments and working together really inspired us. The four of us were always looking for ways to make whatever project we were working on unique, not just the standard approach to soundtrack and sound design. After that initial collaboration we found we worked really well together as a team. Eventually we all left Harmonix, reflected on our time on that one project, and decided to start our own company and carry the inspiration we found in each other into new projects.
Can You Tell Me a Little Bit About Your Background in Music?
I grew up in Chicago, which is a big jazz and blues town. My dad was heavily into jazz and bossa nova, and I have that harmonic and melodic vocabulary at the root of my musical identity. Throughout high school I was into jazz, the Beatles, and indie rock groups like Radiohead. I went to school for jazz and saxophone performance at the University of Miami. I stayed there a couple more years and got my Master’s degree in studio jazz writing, a program that wraps jazz and big band composition together with music production. I also met my wife there; she’s a classical flute player who also plays jazz, and we collaborate on every single project. Immediately after school I got a job at Harmonix, and my first project there was on the audio team for The Beatles: Rock Band, which was a dream project for me. I worked at Harmonix for ten years, and about halfway through I started taking audio lead positions, in other words serving as audio director, for games like Rock Band VR, Dance Central VR, and a few others. Then I decided to take that experience on the road and went full time with SkewSound.
In Video Games, What is the Difference Between Sound Design and the Soundtrack?
That is a great question. Think of it this way: audio is audio, and it’s reflecting what’s happening in the game. It’s also responding to gameplay or fitting alongside the arc of the narrative. From a career perspective it’s really helpful to have [sound design and the soundtrack] separated: composers on one side, sound designers on the other. Composers, in a perfect world, are composers in the traditional sense. They write songs, they write compositions, they’re music producers working in their studio or home studio. They play instruments, they record their own material, and they hand the music over to the audio team that’s part of the game development team.
And on the game development team we have sound designers. The sound designers have to implement that music and figure out a way to make it line up with the environment and the gameplay. They’re structuring the music to make sure it fits alongside what’s happening on the screen. They also have to create sound effects: they’re taking their microphones out into the field and recording, or they’re at home or in the recording studio making content from scratch, from synthesizers, or by manipulating existing sounds from sound libraries, and then implementing them. Implementing is the process of taking a sound you’ve created and putting it into the video game. We’re not engineers and we’re not coders, but we have to think like them. Sound designers have to put their sounds in the game in a way that fits what’s happening on the screen. And if there are any features the audio team can’t build by themselves because they’re rooted deeper in the game engine, then we talk to an audio engineer to help develop that feature. The sound designer is doing more than just creating sounds, as they’re also responsible for voice-over. They also make the menu screen sounds, “pause game” sounds, explosions, weapon sounds, and environmental sounds, just to name a few examples.
That’s how it would be in a perfect world, but this isn’t a perfect world. I [a composer] have to do sound design all the time, especially implementation for my music. Composers have to create music that considers the infinite possibilities [of sound design]. Having the composer implement their music into the game is an optimal solution from a delivery standpoint. Although if the composer isn’t used to doing that, they shouldn’t have to do it.
What is Your General Process for Composing for a Game Like ‘Signs of the Sojourner’?
Well, every game has its own story, not just from a narrative perspective but from a developer perspective as well. We consider who we’re working with, where they’re coming from and their experience, alongside the narrative and the kind of game it is. In the case of Signs of the Sojourner, we immediately hit it off tonally speaking. Signs of the Sojourner has a super-interesting game mechanic: a narrative mechanic housed within a card game. This mechanic is doing something interesting with gameplay. The cards represent moods and ways of speaking, and the environment of the game revolves around this mechanic, so we needed to support the narrative over the gameplay. All we needed to do was make the player feel like they’re a part of this world and its culture. So as the player explores farther and farther away from home, the culture and the music of the game feel increasingly alien. We went through a process of asking, “What does this mean for the music? What kind of music is the player used to hearing in their home world?” And once the player begins moving out, what are the boundaries, so to speak, of the instruments that are going to change and evolve? We decided to explore the story of the environment the player inhabits, the culture of the towns, and the characters that make up those towns.
Is There a Difference Between Composing for Video Games and Composing for VR?
That’s a really good question. Technically speaking, there isn’t [a difference]; they’re very similar. However, the experience of being in VR is wildly different. Your expectations are of reality rather than of a [traditional] video game. Just as in real life you don’t have an underscore mirroring every moment of your day, you wouldn’t want that in VR; it would sound silly. I think we can be more sensitive, as composers, when it comes to VR. You still want to tell and reflect the story, especially during important gameplay moments. However, the way you approach composing, I feel, should be more subtle and more a part of the world that you’re in.
From an implementation perspective, music is traditionally non-diegetic (not “in the game world”). It’s basically a 2D sound going right into your ears; it isn’t panned or attenuated in any particular direction, just like a score for a film. You can do that in VR as well, with the music coming right into your ears and not from any given direction. The problem is that in VR the player is rotating their head. In the 21st century we’re used to having earbuds in, and what ends up happening [with traditional video game music] is that we feel like we constantly have earbuds in our ears listening to this underscore. That’s fine for some things, but it takes you out of the experience rather than adding to it. So we need to be very sensitive to the technical limitations of music.

What we can do is render “virtual speakers” in front of and behind [the player]. Just like in the real world, those speakers stay where they are when you rotate your head [creating an illusion of movement with the sound]. It’ll feel more like the music is actually in the space instead of like you’re wearing earbuds. We don’t want to Mickey-Mouse every little thing that happens on the screen; if possible, we want the soundtrack to feel as one with the ambience of the VR environment. Now, keep in mind there are times when we want the music to tug at your heartstrings and be more emotional or scary. For example, say you’ve just engaged in a battle and we want your pulse to be high; that’s when a traditional approach to music makes sense.
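To make the "virtual speakers" idea concrete, here's a minimal sketch of my own (not SkewSound's actual implementation, and ignoring real HRTF-based spatialization that a game engine would provide): a speaker fixed in the world keeps its world-space bearing, and each frame we recompute its angle relative to the player's head yaw, so turning your head shifts where the music appears to come from.

```python
import math

def speaker_angle_relative_to_head(speaker_yaw_deg, head_yaw_deg):
    """Bearing of a world-fixed virtual speaker relative to the
    listener's facing direction, wrapped to (-180, 180].
    The speaker's world yaw never changes; only the head yaw does."""
    rel = (speaker_yaw_deg - head_yaw_deg) % 360.0
    if rel > 180.0:
        rel -= 360.0
    return rel

def constant_power_pan(rel_deg):
    """Very rough stereo gains for a source at rel_deg
    (0 = straight ahead, +90 = hard right), constant-power pan law."""
    pan = max(-1.0, min(1.0, rel_deg / 90.0))  # clamp to [-1, 1]
    theta = (pan + 1.0) * math.pi / 4.0        # map to 0..pi/2
    return math.cos(theta), math.sin(theta)    # (left gain, right gain)

# A virtual speaker fixed straight ahead in the world:
print(speaker_angle_relative_to_head(0.0, 0.0))   # 0.0 -- dead center
# Turn your head 90 degrees to the right and the same speaker
# now sits off to your left, so the music pans accordingly.
print(speaker_angle_relative_to_head(0.0, 90.0))  # -90.0
print(constant_power_pan(-90.0))                  # all gain in the left ear
```

A traditional non-diegetic score skips this step entirely (the "earbuds" feeling Steve describes); the world-fixed recomputation is what makes the music feel like it lives in the space.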
I’d like to give a big thank-you to Steve Pardo and SkewSound for taking the time to speak with me. I hope you enjoyed reading about their work on video game music.