Deepchild Wants You to Embrace the Dirty, Noisy Sounds of Old and Corrupted Media
1 Aug 2020
The former Berlin- and London-based dance DJ and producer (aka Rick Bull) has been on the crest of many electronic scenes. Find out how he does it.

Words: Dan Cole

 

Former Berlin- and London-based Australian producer Rick Bull has been on the crest of many electronic scenes. Riding high from his time on the Australian house circuit, the DJ, who most will know as Deepchild, moved to Berlin, where he took his techier sounds and outboard gear to the likes of Berghain and Tresor. Now residing back in Australia, with an ambient side project, the ethereal and contemplative Acharné, in tow, Bull divides his time between the studio, teaching, working with disadvantaged youths, and picking up the various project-based commissions that take him outside of his normal musical scope. All in all, he is an everyday producer with a project folder full of experience, and a passion for sourcing personally resonant sounds that form the building blocks of his music.

 

How do you start making music?

 

The short answer is that I’m constantly curating and creating. I’m always downloading reference material, sketching out single elements in a loop-based fashion without any perceived final outcome. I’ve broken my creative process down into constant brainstorming, creating elements of sound design or little motifs that all go into a big rendered folder. It’s very rare that I start with a completely blank slate. It does happen, but I’ve found over the years that I’m best served if the creative part of my process is without an agenda.

 

If I just wake up and feel like I want to create, but don’t know what I want or how to do it, I’ll go up the tree by one folder level and let my brain wander and think. For instance, when I was growing up in Saudi Arabia I was fascinated by Ronald Reagan talking about the Star Wars programme, which is a bona fide memory. I’ll jump on YouTube, watch videos and download these assets. I’m archiving audio that has a sort of resonance, and I’ll use it as source material for creating sounds or samples, or audio beds, or anything. This process is more about the relationship with the source material than the quality of the source material itself. It’s kind of an oblique strategy.

 

 

What sources of sample material would you recommend for new producers, and what is your filing process like?

 

I have a massive folder on my hard drive entitled ‘Tunes to sample’. In my case, it’s loosely broken down into sounds that excite me. Growing up I was fascinated by MPCs and the whole generation of sample-based synthesizers that came out in the early 90s, like the Korg 01/W and 05R/W, and the Roland JV-1080. All of these hardware sampling units promised this new take on reality. They sampled real saxophones and real pianos, and in a lot of cases these banks of sounds sounded nothing like the real thing. So in terms of my process now, there’ll be a whole folder of 80s sample synthesizers. I’ll go to YouTube and find someone playing a demo, and this will become sample material for me. Otherwise, I’ve got a pretty deep fascination with Japanese jazz from the 60s and 70s. The Alan Lomax archives are just massively inspiring. I wouldn’t sample them in the way that Moby has, because I feel that’s blatant cultural hijacking, but as archival recordings, I think they’re amazing.

 

I’m also fascinated by the sound of the media itself: the wax cylinder something was recorded on, the acetate. That’s another folder that I have: the hums, the crackles, the digital artefacts, which become layers. The Conet Project is another good one, because it’s weird and you can create your own little narrative around it.

 

I’m a big fan of all the Reaktor stuff, particularly all the obtuse patches no one has ever heard of, like the automatic generative ensembles. I’m generally not driven by a purist approach to fidelity. I’m happier to take a random poor recording, because all the noises are the things that become special to me. I’m really into tools and plugins that bring out hidden harmonic content in recordings; it’s really exciting to reveal those hidden sounds.

 

It’s an approach that runs counter to the way digital technology has pushed us to work: transparency, control, quantization. These things are valid, but I’m still really interested in finding the ghosts in the machine, or in the recording.

 

Is there a way in which you sample or render the files to get to this point?

 

I’m 100% Ableton, everything in the box. I have a good amount of hardware, but it generally only comes out for special occasions. My current trajectory is about finding source material that has unexpected colourisation. I’ll import a single sound into Ableton and experiment with compression and EQing in exaggerated ways to bring out tone colour. My workflow tends to change every six months; a year ago I would compress everything on the master bus before rendering without it. At the moment I have no compression on the master bus and am experimenting with running everything through a Neve emulation.

 

For any one project, I’ll imagine a real-world setup, except it will be a software setup, and I’ll try to stick with that for a whole bunch of sounds. I work best with restrictions in terms of my workflow. When I’m creating sounds I’ll generally just have two effects returns. Similarly, I’ll bash out sounds through a single plugin to develop a unified tonal palette.

 

 

Do you ever use sample libraries as a starting point, instead of source material?

 

I use sample libraries a lot when working in the context of hip-hop and trap production with young artists. It helps me get from point A to B as quickly as possible. I’ll use a bunch of ready-made kits and I’ll even use a reference instrumental, which I think is a great way of getting stuff done quickly.

 

And then for my own work, I might take a beat that I’ve made for a piece of youth work, one that really needs to sound as commercially viable as possible, and I’ll resample that beat, or something taken from the sample library, and Deepchild-ify it. That’s a really gratifying way of working, because I’ll have made it wearing my commercial hat, so it feels like I’m stealing it and turning it into my own. I find assuming different personas for different parts of the process useful, because it makes you feel like you’re never doing any work, even while you’re working.

 

I’ve found these creative strategies useful as a teacher as well, particularly with students who are super technically proficient but creatively stymied, because they have such a fixed idea of what they’re supposed to be doing, without any space for play or improvisation.

 

Can you extend this to how you actually write songs?

 

Yes, it’s all part of the same process really, particularly with the Acharné stuff. There’s obviously a clear tonal throughline, but the mandate is more like: write a piece of music, don’t spend more than an hour on it, it doesn’t have to be four minutes long, work quickly, generally don’t work with more than five or six tracks, drag and drop source material from wherever, record it from your laptop speaker and then forget about it.

 

With Innocence and Suburbia, a lot of it was made in airports, on the bus, maybe even on the toilet. It was just writing and then letting it go. It may mean you’ll have 40-50 sketches and most of them will be no good, but then you can weave the good ones together into a single track. You just need to keep constructing and deconstructing and let the pieces talk to each other. Whereas if I sit down with a clear vision in mind, inevitably I’ll end up getting too uptight and then the magic will be gone.

 

 

How do you know when a track is ready?

 

The material that I know is overworked, I chuck out. If it’s good enough, I find that with time it’s way more effective than I would have imagined. This is where the brain split occurs, between an engineering and a creative point of view, and ideally these two are working in synergy. Often, the creative stuff is there, but the engineering is just sub-par. My current trajectory is to up the engineering game.

 

I’ll finish a composition and then try to leave it, set it aside. There needs to be a sense that it still sounds useful after a week. I’ll give it the gym test, the walking test, I’ll stick it in the car, I’ll play it in the background. Creatively, if it feels like there’s too much going on, there’s too much going on. You know, I’ve previously released stuff and then pulled it offline, because I’ve realised, ‘how could I have done that?’

 

There are some artists who will wait one or two years to release something. I used to be that artist, but I’ve kind of swung back to the other side, which is a risky proposition, but I’m looking for that risk-taking to drive my process. It can seem like a really counterintuitive way to see if something’s really good.

 

You also do your own mastering. How do you approach it?

 

Mastering is a skill I’d never developed before, so I’m really trying to learn it. The first part of the process for me has been to acquire some really useful deconstruction tools: Ozone [iZotope], Metric AB [ADPTR AUDIO], and Expose, which shows the relative loudness of a track. Through this process I’m realising that there are so many flavours of the rainbow. I’m just going through a whole bunch of tracks that I think are exciting and trying to work out what is going on.

 

I was so enamoured by Cosmin TRG’s Izolat [2011] that it’s become a go-to reference for me. It’s really big Berghain techno, but try to stick that on a radio station with a limiter and it will disappear. So I’ll reference something like that, something from the Travis Scott album, and Fleetwood Mac’s ‘Rumours’. The recent Billie Eilish record just blew my mind; I have no idea what’s going on there. The mixing and mastering is so exceptional.

 

I’ll check the music against my reference material and run it through Expose or Metric AB, and then I’ll run my mix through Ozone, and then I might use a comparison: Ozone can analyse a reference track, and then I’ll apply EQ curves myself, or it will use its own algorithm. I’ll use those tools to learn about what’s going on, and I’ll try to improve things at the mix level, so that when it comes to mastering I’m just doing basic EQing, basic limiting and not too much else. Often, less is more.

 

You can do all of this stuff with basic Ableton stock plugins, but I’ve found these analysis tools are really helpful ways of learning to hear better. I know they’re not marketed as such but they’ve been great tools for me.
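(For anyone wanting to experiment with this kind of relative-loudness check without dedicated metering plugins, here is a minimal sketch of the idea: it measures the integrated loudness of a mix and a reference file in LUFS and reports the gap. It assumes the open-source Python libraries soundfile and pyloudnorm, and the file names are hypothetical; it illustrates the comparison concept rather than Deepchild’s own toolchain.)

```python
# Rough sketch of a relative-loudness comparison: measure the integrated
# loudness (LUFS, per ITU-R BS.1770) of a mix and a reference track, then
# report the gap. Assumes the soundfile and pyloudnorm libraries are
# installed; the file paths are hypothetical placeholders.
import soundfile as sf
import pyloudnorm as pyln


def integrated_lufs(path: str) -> float:
    """Return the integrated loudness of an audio file in LUFS."""
    data, rate = sf.read(path)   # load audio as a float array
    meter = pyln.Meter(rate)     # BS.1770 loudness meter
    return meter.integrated_loudness(data)


mix = integrated_lufs("my_mix.wav")            # hypothetical mix bounce
ref = integrated_lufs("reference_track.wav")   # hypothetical reference
print(f"Mix: {mix:.1f} LUFS | Reference: {ref:.1f} LUFS | Gap: {mix - ref:+.1f} LU")
```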

 

 

The speakers and headphones you use must also play an integral part in that?

 

They do. I’ve got DT 990 PROs [Beyerdynamic], open-backed headphones, which are really important. I also have HD 25s [Sennheiser] in my bag, as my mixes need to sound good on those too. In my studio I’ve got these Barefoot Sound Footprints, which are three-way monitors and are delightful. I pair them with some small Genelec speakers, which are great but have very little low-end, so I need to A/B through both. I’ll also often try to keep mix levels pretty low; this worked really well for me when I had neighbours. I’ll experiment with turning down the level until it’s barely audible, and sometimes I’ll mix the low-end with my fingers on the speaker cone to try to better judge the relationship between the kick and the bass. Strategies born from limitation have served me pretty well.