Previous month:
July 2010
Next month:
September 2010

Posts from August 2010

Modulate This Interview with Imagine Research CEO Jay LeBoeuf


I recently had a chance to meet Jay LeBoeuf, the CEO and founder of the San Francisco-based company Imagine Research, and learn more about his past and current work. Imagine Research is working on next-generation intelligent signal processing and machine learning technologies. I thought the work was fascinating, and Jay graciously agreed to take time out of his busy schedule to share some insights on his work and the field in general. He also has some suggestions on how you can get involved with helping to solve real-world problems in the digital audio and music realm.


Mark Mosher:  How long have you been involved in R&D work and how did you get started?

Jay LeBoeuf: I've always had a passion for music and technology – in undergrad (Cornell University), I was an electrical engineer, with a minor in music, and gigged with my band on weekends.  Everything suddenly made sense when I did a Master's at CCRMA (Stanford University).  If you understand audio software and technology at its lowest levels, you have this immense appreciation for the tools that our industry uses.  You also develop this urge to make new tools, and help bring new experimental technologies to market… which is how I ended up at Digi.

MM: Prior to founding Imagine Research, you were at Digidesign doing R&D on Pro Tools. What Pro Tools features that Modulate This readers might use daily did you have a hand in creating?

JL: Digi was such an amazing place and opportunity - I was one of the first team members on Pro Tools' transition from OS 9 to OS X.  I was on design and test teams for D-Control / ICON mixing console, the HD Accel Card, integration of M-Audio into the Pro Tools product line, and Pro Tools software releases 5.1.1 through 7.4.  In my later years, I researched techniques for intelligent audio analysis  - the field that I'm most excited about.

MM:  Do you feel that being an independent research firm allows you to work more on the "bleeding edge" than if you were doing the research from within a company?

JL: Absolutely.  Imagine Research was founded because this "bleeding edge" technology needs a helping hand into industry.  Most companies, especially in the MI space, keep their focus on their incremental features, compatibility, and bug fixes - and applied research is inherently difficult and risky to productize.

The U.S. National Science Foundation has been a great partner in helping us bring innovative, high-risk-high-reward technologies to market.  We've received several Small Business Innovation Research (SBIR) grants to address the feasibility and commercialization challenges of music information retrieval / intelligent audio analysis technologies.  I encourage all entrepreneurs to look into the SBIR program.

MM:  How does Imagine Research help companies leverage emerging and disruptive technologies yet build practical solutions?

JL: Close collaborations are key during the entire technology evaluation process.  We focus on end-user problems and the workflows enabled by technology.  The solution is what's important, and we try not to geek out and use unnecessarily sophisticated technology when a simpler solution works fine.  That said, the more disruptive technologies tend to spawn new ideas, features, and products - and you need a long-term partnership to capitalize on them!

MM: According to your web site,  Imagine Research is working on a platform for “machine learning”. Can you briefly tell us what machine learning is and offer some examples of how machine learning could be applied to change how composers and sound designers create?

JL: In short, machine learning algorithms allow a computer to be trained to recognize or predict something.  One way to train a machine learning algorithm to make predictions is to provide it with lots of positive and negative examples.  You can then reinforce its behavior by correcting it, or having your end-users correct its mistakes. 
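To make the "training by example" idea concrete, here is a toy sketch in Python - a minimal nearest-centroid classifier, not Imagine Research's actual algorithm. The two-number "feature vectors" and labels are invented for illustration; a real system would first extract features from the audio itself.

```python
# Minimal supervised-learning sketch: a nearest-centroid classifier.
# Feature vectors here are made up (imagine something like
# [loudness, brightness]); a real system extracts these from audio.

def train(examples):
    """examples: list of (label, feature_vector) pairs.
    Returns a dict mapping each label to its centroid (mean vector)."""
    sums, counts = {}, {}
    for label, vec in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, x in enumerate(vec):
            acc[i] += x
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(model, vec):
    """Return the label whose centroid is closest (Euclidean) to vec."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], vec))

# "Positive and negative examples" of two sound classes:
model = train([
    ("kick",  [0.9, 0.1]),
    ("kick",  [0.8, 0.2]),
    ("hihat", [0.2, 0.9]),
    ("hihat", [0.1, 0.8]),
])
print(predict(model, [0.85, 0.15]))  # → kick
```

Correcting a mistake - the reinforcement Jay mentions - amounts to adding the misclassified example, with its true label, to the training set and retraining.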

In our case, we use machine learning to enable machine hearing.  Our platform, MediaMined™,  listens to a sound and understands what it is listening to – exactly as human listeners can identify sounds.   

When software or hardware is capable of understanding what it is listening to, an enormous array of creative possibilities opens up: DAWs that are aware of each track's contents, search engines that listen to loops and sound effects and find similar-sounding content, and intelligent signal processing devices.  I'm confident that this will enable unprecedented access to content, faster and more creative workflows, and lower barriers to entry for novice musicians.
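As an illustration of the "search engine that finds similar-sounding content" idea (my sketch, not MediaMined), a similarity search can rank a library of sounds by how close their feature vectors are to a query sound. The file names and vectors below are invented.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    mag = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / mag

def most_similar(query, library):
    """library: dict of name -> feature vector. Returns names, best match first."""
    return sorted(library, key=lambda name: cosine(query, library[name]),
                  reverse=True)

# Hypothetical loop library with made-up 3-dimensional audio features:
library = {
    "loop_bright.wav": [0.1, 0.9, 0.3],
    "loop_dark.wav":   [0.9, 0.1, 0.2],
    "loop_mid.wav":    [0.5, 0.5, 0.5],
}
print(most_similar([0.8, 0.2, 0.25], library)[0])  # → loop_dark.wav
```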

MM: Are there non-musical applications for your platform?

JL: Absolutely.  Our platform was designed for sound-object recognition - so while I frequently discuss analyzing music loops, music samples, and sound effects, we can also understand any real-world sounds.  We're working on applying our techniques to video analysis, as well as exploratory projects involving biomedical signal processing (heart and breath sound analysis), security/surveillance, interactive games, and more than enough to keep us busy!

MM: How can app developers leverage your platform?

JL: While the specific platform details are still under wraps, I'd really enjoy talking with desktop, mobile, and web-based app developers.  We really welcome input at this early stage.  I'm happy to discuss at "info at imagine-research dot com".  For general information, announcements, and updates, please follow us on Twitter (@imagine-research).

MM: Imagine Research also creates "intelligent" algorithms for consumer audio and video products. Can you give us some examples of products that might be utilizing your algorithms?

JL:  Sure - check out JamLegend (think: Guitar Hero but online, free, social-networked, and it's one of the only music games where you can upload and play YOUR OWN music).  We developed the technology for users to play interactive Guitar Hero-style games with any MP3s.  So far, over 1.1 million tracks have been analyzed. 

We have a number of exciting partnerships with our MediaMined platform to be announced.  These applications directly aid musicians and creative professionals. 

MM: How do you think that the growth in cloud computing and the explosion of Smartphone processor power will change the landscape of digital audio?

JL: The most exciting thing to me is unparalleled access to content - we'll be able to access Terabytes of user-generated content, mash-ups, and manufacturer/content-provider material (loops, production music, samples, SFX),  online from any device. 

Music creation can now occur anywhere.  Smartphones provide a means to record / compose wherever and whenever the muse strikes.  With cloud-based access to every loop, sample, sound effect, and music track ever created, how do you begin to find that "killer loop" or sample in a massive cloud-based collection -- and -- on a mobile device?!?  Don’t worry, there’s some disruptive technology for that. 

MM: Do you have any words of advice you can give to Modulate This readers who might want to pursue a career in audio R&D?

JL: Full-time corporate R&D gigs typically require a graduate degree in music technology or music and audio signal processing (such as Stanford's CCRMA, UCSB's MAT program, NYU, etc.).  But let's talk about the most untapped resource for research: industry-academic collaboration.  The academics have boundless creativity and technical knowledge, but might not know the current real-world problems that need solving.  I'd encourage readers to reach out to professors and graduate students doing audio work that they find interesting.  Think big - the hardest problems are the ones worth solving. 



Mark Mosher
Electronic Musician, Music Tech & Technique Blogger, Boulder CO

How To Change Sonic Charge uTonic Drum Machine Patterns on the Fly Within Ableton Live


I love Ableton Live’s workflow, and while I’m a big fan of Ableton Sampler, Operator, Impulse, Drum Racks, etc., I do heavily utilize VSTs to extend Live’s range even further.  I try to find plugins that not only sound great, but also integrate into and complement Live’s workflow. One recent addition to my software rig is Sonic Charge’s fantastic pattern-based drum-machine synth uTonic.

In this post, I’ll show you how to use MIDI to change patterns on the fly within uTonic. This will allow you to use uTonic as a drum machine in conjunction with your other Live clips and scenes.


uTonic’s pattern engine can play one of 12 different rhythmic patterns (a-l) in sync with Live (or any other host). Thankfully, uTonic supports pattern selection via MIDI. To select Pattern “a” you send MIDI note C3 to uTonic. To select Pattern “b” send note C#3, and so on.
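Since the patterns map to a simple chromatic run of notes, you can compute the note for any pattern letter. One caveat: “C3” corresponds to different MIDI note numbers in different hosts; this sketch assumes C3 = MIDI note 60, so shift an octave if your setup disagrees.

```python
# Map uTonic pattern letters "a".."l" to MIDI note numbers.
# Assumes C3 = MIDI note 60 (host conventions differ by an octave).
C3 = 60

def pattern_note(letter):
    """Return the MIDI note number that selects uTonic pattern 'a'..'l'."""
    offset = ord(letter) - ord("a")
    if not 0 <= offset < 12:
        raise ValueError("uTonic patterns are 'a' through 'l'")
    return C3 + offset

print(pattern_note("a"))  # → 60 (C3)
print(pattern_note("b"))  # → 61 (C#3)
```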


Step 1 – Insert uTonic into a MIDI track.

Step 2 – Use the “Open Program” menu button to load a program. I picked “Alpha Blipp”.

Step 3 – Create a MIDI clip in Scene 1, rename it “Pattern a”, and enter a MIDI note at C3. Playing the clip will select “Pattern a”.


Step 4 – Create a MIDI clip in Scene 2, rename it to “Pattern b”, and enter a MIDI note for C#3. Playing the clip will select “Pattern b”.


Now when you switch from Scene 1 to Scene 2, uTonic will switch patterns as well.

Of course you could also use a MIDI keyboard or program a matrix controller to send these notes as well.

You can also edit MIDI mappings within uTonic using the menu “Edit MIDI Controller/Keys”.


You’ll see an overlay for existing mappings.


Here is a little excerpt from the manual on mapping:

Rectangular markers (like those on top of the drum channel selection buttons) indicate assignable MIDI keys, while oval markers indicate that you can assign MIDI controllers. Click once in a marker to quickly enter MIDI learn for a button or controller (you will see a flashing MIDI connector symbol). Now, simply, press the desired key or turn the desired knob on your hardware controller and you should see a note name or a controller number in the little marker.

Lastly, you can use Ableton Live 8’s device mapping feature to expose uTonic parameters to the device.

Step 1 – Click the triangle
Step 2 – Click the “Configure” button
Step 3 – Move a control on uTonic you want to map
Step 4 – You’ll see the corresponding parameter appear in the device. You can now MIDI map this using CTRL-M.



Mark Mosher
Electronic Musician, Synth Programmer, Boulder CO

Percussa AudioCube Production and Performance Notes for "I Hear Your Signals"


For my original music album "I Hear Your Signals" (download the album free) I use Percussa AudioCubes as performance controllers. In this post I’ll give you all the geeky details about how the controllers were applied in the project.

I used 4 AudioCubes plus Percussa's free MIDIbridge app on Windows to configure and route AudioCube signals to Ableton Live. I use the same MIDIbridge patch for every song which allows for consistent and predictable data mapping from the cubes to Ableton Live.

In general, I play a lot of the notes you hear on the album live via keyboards, Theremin, and Tenori-On. I tend to use the cubes as controllers - for scene launching and for real-time modulation of effects and synth parameters - and only use them for triggering notes from time to time.

The AudioCubes are configured in the following modes:

  • Cube 1 - Sensor (the red cube at 9:00 in the picture above): This cube sends MIDI CC information back to Live. I configure each face of the cube with a different color to give me visual feedback; the closer my finger or hand is to the sensor, the brighter the light. Currently, Sensor cubes need to be wired via USB.
  • Cubes 2 & 3 - Receivers (white cubes in the picture above): These send MIDI notes back to Live when a signal is received from Cube 4. I also send RGB light sequences to the cubes via MIDI clips in Ableton, so the cubes become light show elements and offer visual feedback. These cubes are also plugged in via USB so they can receive high-speed transmissions from the MIDI clips.
  • Cube 4 – Transmitter (green in the picture above): This cube is wireless. Aligning the faces of this cube with the faces of Cubes 2 & 3 triggers MIDI notes back to Ableton Live.

I then use Ableton Live's MIDI Map mode to map the MIDI CC and note information coming from the cubes to various functions within Live.
For Cube 1, CCs are mapped to device parameters and macros. These, in turn, are often routed to parameters within VSTs. For example, a cube face might modulate delay time on Ableton's native Ping Pong Delay effect, or the CC might map to a filter on a VST synth. Below is a snapshot of the MIDIBridge settings for Cube 1 (click to enlarge).
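To give a feel for the sensor-to-CC scaling involved, here's a hypothetical sketch (my illustration, not Percussa's code) that turns a normalized distance reading into a 0-127 CC value, with closer = higher to match the "closer, brighter" behavior described above.

```python
def distance_to_cc(distance, max_distance=1.0, invert=True):
    """Scale a sensor distance reading (0..max_distance) to a MIDI CC
    value 0..127. With invert=True, a closer hand yields a higher CC
    value (matching 'the closer the hand, the stronger the effect')."""
    d = max(0.0, min(distance, max_distance)) / max_distance
    if invert:
        d = 1.0 - d
    return round(d * 127)

print(distance_to_cc(0.0))  # hand touching the cube face → 127
print(distance_to_cc(1.0))  # hand at maximum sensor range → 0
```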


For Cubes 2 & 3, notes are triggered when the face of transmitter Cube 4 is detected. I route the notes to MIDI tracks holding Ableton instruments, VSTs, and/or racks. In some cases I route MIDI notes through a dummy track to SugarBytes Artillery II running on a send or on the master track for effects. Since Artillery II triggers effects via notes rather than CCs, this method lets me both control effects and play notes with signals from the transmitter cube, which only sends MIDI note information. In other words, by combining native Ableton effects with Artillery II, I can use any cube in the network to trigger effects.


1) “Arrival”

In this song I’m using AudioCubes as lighting and feedback elements in the live show. They were not used in composition or performance of the music. MIDI clips in Live are used to sequence the lights.

Continue reading "Percussa AudioCube Production and Performance Notes for "I Hear Your Signals"" »

Celebrating Leon Theremin's Birthday with Video, Notes, Links, and a Soundcloud Set


Leon Theremin (born Lev Sergeyevich Termen) was born on this day, August 15th, 1896. To help celebrate, I’m going to do a bit of a stream-of-consciousness post with some links on Leon and his wonderful instrument, plus some notes on my use of it. Hang in there till the end of the post, as I’ve created a Soundcloud set called “Theremin Action” which collects all the songs from REBOOT and I Hear Your Signals that use a Theremin sound or the Theremin as a controller.

Watch Embedded Video


As a youngster, I can vividly remember the first time I saw the movie The Day the Earth Stood Still. I was captivated by the sound of the Theremin in the film. On a related note, below is a pic I took of a reproduction of Gort last weekend while visiting Experience Music Project | Science Fiction Museum in Seattle.


The Theremin in the film was played by Samuel Hoffman and Paul Shure for a score by Bernard Herrmann. Thereminvox has re-posted an interview with Samuel Hoffman from Down Beat magazine, originally published on 02.09.1951, called “Dr. Hoffman Tells Whys, Wherefores of Theremin”, which is a fun read.

I started incorporating the Theremin sound in my original compositions before I actually owned a Theremin. I did so by emulating the sound and performance with synthesizers, playing via keyboard. For example, the song “They Walk Among Us” from my album REBOOT is loaded with Theremin lead lines. On my recent album I Hear Your Signals, the song “Arrival” has a Theremin sound as the lead instrument in the second chorus. I created the sound using a patch I programmed from “init” with FAW Circle.

In April I took delivery of a real Theremin – a Moog Etherwave. I did an unboxing review about it here.


While I am learning “classic” Theremin technique, I have also begun using the Theremin as a spatial controller for virtual digital synthesizers hosted in Ableton Live, in combination with Percussa AudioCubes. This combination allows me to control 6 dimensions of sound without touching a knob or dial (volume and pitch from the Theremin, 4 dimensions from the AudioCubes).


I used the Theremin HEAVILY in the song “Control Zone”. The pad and lead tracks, and the unusual-sounding guitar and bell sounds, were all played in real-time and recorded in one pass with only minimal editing after the fact.

I also used the Theremin as a controller in the song “Dark Signals” to play special effects sounds in various sections of the song.

I’m using a Theremin when I perform live now, and it’s just fantastic. I recently played a private event where I used the Theremin as a controller; after the show I got a lot of questions about it and some requests to play it. After all these years and advancements in technology, people are still intrigued by an electronic instrument patented in 1928. Clearly, the notion that you can perform music and control sound using spatial movement still fascinates.

As promised, here is a Soundcloud set with all my Theremin-related songs. Happy Birthday, Leon! And thanks, Bob and Moog Music. Theremin Action by MarkMosher

Mark Mosher
Electronic Musician, Thereminist, Boulder CO

Videos on The Making of Robbie Bronnimann's Upcoming Album "Rotations"


Robbie Bronnimann dropped me an email recently letting me know about a very cool series of videos he’s posting over at Sonic State.

First off, if you're not familiar with Robbie’s work, he’s a London-based composer and producer. He has released music under the name dba with vocalist Shaz Sparks and has been a writer/producer for The Sugababes. He has also collaborated on and produced the last few Howard Jones albums and toured with Howard (see this Modulate This post on their Australian tour).

Now Robbie is working on his own solo album project called Rotations and is shooting video along the way. The videos are great and show a behind-the-scenes view of how a seasoned producer works. I recommend you watch them all as Robbie does a great job explaining his studio and technology - and more importantly goes beyond technology and talks a lot about the creative process.

I’ve embedded the first 5 videos below to get you started. This is a work in progress, so you’ll want to bookmark the series (note: the videos are posted in reverse date order). My favorite is Video 5, where he brings in Shaz Sparks and his 5-year-old daughter to record vocals. He then builds up tracks using elements of their voices.

I'm really looking forward to Rotations!

Video 1: Robbie Bronnimann's Journey

Video 2: The Journey Begins

Video 3: Samples, Hits, and out of RAM

Video 3: Sound processing, recording the box


Video 4: Granular hits and transitions, DSI Evolver action  

Video 5: Recording Vocals


Mark Mosher
Electronic Musician, Boulder CO

Ableton Live 8.1.5 Now Available!


Update: Marc from Ableton Denver was kind enough to find the change log and post it as a comment. Check out the Change Log here.

From Ableton forum -

Ableton's developers have been busy improving quality and we have released the latest result of this ongoing effort, Live 8.1.5:
The changelog is still being written and will be posted here tomorrow (12 August).

Gerhard Behles, CEO
Bernd Roggendorf, CTO

Mark Mosher
Electronic Musician, Boulder CO

Mark Mosher Appearing at the Electro-Music Festival 2010 in New York September 10


One month from today I'll be appearing at this year's Electro-Music 2010 Festival. I'll be performing the new album I Hear Your Signals live, plus giving a one-hour talk on "Spatial, Visual, and Matrix Controllerism with Ableton Live".

Schedule for September 10th:

  • 5:00-6:00 pm - Spatial, Visual, and Matrix Controllerism with Ableton Live
  • 10:00-10:30 pm - Concert where I’ll be performing songs from “I Hear Your Signals”
Here is the full Friday schedule for all events.

If you’ve not heard about this event, check it out here - it looks like an awesome weekend. Drop me an email if you’re going to be there so we can meet up.

Mark Mosher
Electronic Musician, Boulder, CO