The video was shot at my sound check by fellow performer and Electric Trombone player Darren Kramer using a GoPro. The audio is from the GoPro camera.
At this show I’m using Ableton Live for live sequenced playback. For this song, “Dark Signals” (from my second album I HEAR YOUR SIGNALS), I’m using my Remote SL MKII keyboard along with Maschine (custom template made with Controller Editor) to play instruments in Live racks. As you’ll see in the video, I’m using pressure from the pads to modulate the synths.
The AudioCubes are receiving light commands via MIDI CCs from Ableton Live. I never use the on-board sounds in the Tenori-On and instead use it as a controller, pumping MIDI into synths hosted in Ableton Live.
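For anyone curious what those light commands look like on the wire, here’s a minimal sketch of building a raw MIDI Control Change message. The CC number and channel below are made-up examples for illustration, not Percussa’s actual assignments:

```python
# Sketch: raw MIDI bytes for a Control Change message of the kind that
# could drive a cube's light from a DAW. CC 20 / channel 0 are hypothetical.
def cc_message(channel, cc_number, value):
    """3-byte MIDI CC: status byte 0xB0 | channel, then controller, then value."""
    return bytes([0xB0 | (channel & 0x0F), cc_number & 0x7F, value & 0x7F])

print(cc_message(0, 20, 127).hex())  # b0147f
```

Any DAW automation lane sending CC data ultimately boils down to a stream of 3-byte messages like this.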
The visuals are based on live camera input processed in Resolume. The Resolume system receives controller and automation data from the Ableton rig via MIDI. I’m using the house projector to display my visuals on a screen that’s the width of the stage!
It’s a little hard to tell from this video because the audio is from the GoPro, but the sound at this show was amazing thanks to Gannon Kashiwa, who owns and operates GK:SOUND at the TARDIS in Denver. I’m working on a 3-camera edit from footage of the actual show performance which will have audio from the board – so stay tuned.
Mark Mosher Electronic Musician | Composer | Sound Designer | Performer Boulder, CO
I’ve been a fan of the track “Somebody That I Used to Know” by Gotye since I first heard it on the radio. I was surfing around tonight and noticed an article on the Novation website which mentions the band’s use of Ableton Live and Launchpad. The article mentioned the viral video, which I somehow missed – lol. It’s pretty cool, so I’ve embedded it below. I then found a live performance video which shows them performing the same song live. For me it’s nice to see Ableton and Launchpads in the wild with charting bands. Also, bringing a bunch of Launchpads along is certainly carry-on ready for touring.
Early last week I got a message from Denver sound designer James Kojac of Syndicate Synthetique: “I happen to have a pre-release DSI/Linn Tempest sitting in front of me… I figured you may want to swing by and check it out.”
Duh! Of course I jumped at the chance to see one of the first Tempest Drum Machines in the wild - #0026 to be precise.
While I was there, James was kind enough to take me through the paces, show me some of the kits he had created so far, and let me play and do a little sound design with it. Before I talk about first impressions let me give you a brief rundown on the core features.
Top Feature Summary
The Tempest is an analog drum machine first introduced at the Winter 2011 NAMM show, and is a collaboration between Dave Smith and fellow instrument designer Roger Linn.
16 velocity and pressure-sensitive pads arranged in an 8 x 2 array to facilitate both real-time and step entry of beats.
Two pressure- and position-sensitive Note FX slide controllers
6 analog voices, each with 2 analog oscillators plus 2 digital oscillators (with a large bank of included samples)
Dave Smith’s classic analog low-pass filter with audio-rate modulation, an additional high-pass filter, analog VCA with feedback, 5 envelopes, 2 LFOs, a variety of analog modulation routings. “Although optimized for drum sounds, it excels at tuned sounds as well, and even doubles as a 6-voice analog synth.”
In addition to the 6 direct voice outputs, there are stereo mix outputs and phones outputs, plus 2 inputs for foot switches or expression pedals, MIDI in/out and USB
90 panel controls and a bright 256 x 64 OLED display
A variety of unique effects are provided while maintaining a pure analog signal path:
Stereo analog compressor and distortion circuits affect the stereo output mix
Beat-synced delay is achieved by generating additional delayed note events within the sequencer
A beat-synced "stutter" effect is created entirely within the sequencer by looping short portions of the drumbeat on demand.
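The delay-as-sequencer-events idea above can be sketched in a few lines. This is my own illustration of the concept, not DSI’s implementation; the event format (time-in-beats, note, velocity) is hypothetical:

```python
# Sketch of a "delay" built from sequencer events rather than audio DSP:
# each note gets quieter copies scheduled later in the pattern, so the
# signal path stays purely analog.
def echo_events(events, delay_beats=0.5, repeats=2, decay=0.5):
    """Return events plus delayed, velocity-decayed copies of each note."""
    out = list(events)
    for t, note, vel in events:
        v = vel
        for i in range(1, repeats + 1):
            v = int(v * decay)
            if v <= 0:
                break  # echo has faded below audibility
            out.append((t + i * delay_beats, note, v))
    return sorted(out)

beat = [(0.0, 36, 100)]  # one kick on the downbeat
print(echo_events(beat))  # [(0.0, 36, 100), (0.5, 36, 50), (1.0, 36, 25)]
```

The “stutter” effect described above is the same trick in reverse: instead of adding future copies, the sequencer re-loops a short slice of already-recorded events on demand.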
I’ve played a lot of drum machines and synths in the past 20 years and found the Tempest exceeded my expectations. It has great GUI design, is extremely expressive, and to me was instantly addictive. I planned on stopping by for a quick look and ended up spending about 3 hours jamming! The Tempest was so cool, I think I would have stayed longer if I didn’t have to get up at 6:00am the next day – lol.
The OLED display is vivid, the machine looks great in the dark, and the pads offer visual feedback. The pads feel great, are taut and not spongy, don’t travel down much, and respond well to pressure. Once you spend a little time with the GUI you’ll find the dual-function “shift” concept to be quite brilliant and flexible, as it offers quick access to deeper parameters without interrupting performance workflow. I certainly didn’t master it in 3 hours, but I definitely started to “grok” the concept, and it didn’t take long at all to start making interesting music and sound. I got some immediate results with the analog synth architecture and created an interesting snare drum that James was kind enough to name after me in one of his kits – woot!
The sound and performance possibilities are simply stunning! It can sound huge and punchy. It can sound nasty without sounding brittle – and I mean nasty. It can also sound silky and warm. The modulation matrix, combined with modulation sources including pad strikes, pad pressure, the slide controllers and knob movement, makes the Tempest a live performance monster. And since you’re controlling parameters driving instances of analog synths, it’s insanely expressive. In other words, it has a massive sonic range and the potential for unique output based on an individual performer. It is truly a musical instrument and not some sort of cookie-cutter sample-based drum machine ROMpler.
Besides live playing on the pads, you can sequence and step-sequence patterns. There is a performance mode that allows you to assign patterns to slots and trigger them from the pads – plus play over the top. There are also modes that allow you to use the pads to play pitch intervals and set scales.
After a three-hour session (without cracking open a manual) I feel like I’ve just dipped my toe in the water, and there are a lot of other features under the hood that I didn’t experience.
Video of James Jaret Kojac Playing the Tempest
James (Syndicate Synthetique) has been working with the Tempest for a few weeks and was kind enough to let me film a quick jam which I think illustrates the potential and sonic range of the Tempest. In this video James is jamming with some of his own sounds along with some sounds initially designed by colleague Al Nesby of A23P and Acid Allstars. The video really shows off the range and performance aspects of the DSI Tempest analog drum machine.
From what I hear the unit should be available any day now. My friends at Sweetwater list the price at $1999.00 http://www.sweetwater.com/store/detail/Tempest (note I don’t benefit from sales of Tempest – just love Sweetwater and recommend you ask for Jeff Green).
This is not a comprehensive review since I only had three hours with the unit. I will say I was blown away by Tempest. Before I played it I thought 2K was pretty damn expensive for a drum machine. After playing it I realized this was way more than a drum machine and is an outright performance instrument with a massive sonic range. Expensive? Yes. Worth it? If you are looking for a desktop analog drum machine that doubles as an instrument this seems like a good value and I think it's worth a look.
With the great analog synth elements from Dave Smith combined with the design of Roger Linn, Tempest is more than a luxury synth item – it’s a tool that will allow you to uniquely express yourself as an electronic/experimental musician for both beat and synth oriented work. I also see Tempest being well suited as a companion throughout the entire process of making music from composition, to production through to performance.
I'm not immediately in the market for an analog instrument, but after playing Tempest, I'm quite intrigued and would put it on my wish list if I decided to add an analog performance instrument to my hardware rig.
Have you ever wanted to look over the shoulder of someone who’s spent multiple years programming and refining a custom application with Max/MSP/Jitter? Well, you’re in luck, as artist Robert Edgar kindly agreed to let me record a behind-the-scenes walkthrough of his "Simultaneous Opposites Engine" app when I was visiting Sunnyvale last April.
He describes the app as "a performance/navigation system for real-time traversal of existing video files, sorting through the audio and video a single frame at a time, in an arrhythmic spiraling motion". While this is not a commercial application, watching the interview will not only give you insights into Robert's innovative work, but will also give you a sense as to why people choose to program their own apps and how they use technologies like Cycling 74's to solve problems and express their art.
After the interview video I’ve embedded a few example videos to show some of Robert’s recent work. I’ve included a link to his Vimeo channel at the bottom of the post, which you should definitely check out, as it’s the home of over 60 videos created by Robert over the last few years, allowing you to see the progression of the technology and the art. I recommend you view all the videos full screen and in HD.
I met Tim at my recent concert at the Art Institute of California/Sunnyvale and he was kind enough to invite me over to see his latest development project, the MultiMultiTouchTouch. This custom solution offers players any number of arbitrarily-shaped multitouch areas with three-dimensional spatial control. Interaction with this space allows users to control and play virtual synthesizers using nothing but a Microsoft Kinect as the controller.
Ironically, the concept shown in Moog Music’s April Fools video “Introducing the Moog Polyphonic Theremin” is not only a reality, but Tim has one-upped this idea by providing polyphonic spatial control in multiple “frames”, AND more granular control than a Theremin with finger blob detection. In short MultiMultiTouchTouch is like having a polyphonic/multitimbral Theremin that can not only detect hand movements, but finger movements as well - from multiple players!!!
Luckily I brought my video camera along and recorded Tim describing and demoing the technology. I also give the MultiMultiTouchTouch a try at the end of the video. So, without further ado, I present the video “An Exclusive First Look at Tim Thompson's Kinect-Based Instrument: MultiMultiTouchTouch”.
Pass It On
I want to reiterate: this is real and NOT a late April Fools’ joke. Incredible work, Tim! Congrats. I can’t wait to see where Tim takes this, and I look forward to the possibility of doing some MultiMultiTouchTouch compositions and performances myself. To help Tim promote his work, share this video.
As you can hear, “Alone” is the emotional bottom of the story in REBOOT and is the only track on the album without a drum groove. This being the case, I wanted to come up with some sort of performance that would provide contrast to the other songs in the set. Back in September I came up with the idea of attempting to play all the lead melody and ambient noises solely from Percussa AudioCubes. I’ve decided to push on with this idea and perform the song this way. Here are the behind-the-scenes notes and a rehearsal video.
Goal
I normally configure my AudioCubes so there is 1 “Sensor” cube for controlling effects, 2 “Receiver” cubes for sending MIDI notes to Ableton Live to trigger clips, scenes, parameter settings…, and 1 cube as a “Sender” to trigger the Receiver cubes.
For “Alone” I wanted each cube face to play a different note according to a predefined user scale that matched the notes in the song. AudioCubes can detect objects in one of two ways – wirelessly (“Sender”/“Receiver” pairs), or through infrared (cube set to Sensor mode). In Sensor mode, an object’s (hand, other cube, cat…) proximity to a cube face is detected with infrared. Since I want to use my hand to trigger the notes, Sensor mode is the way to go.
Solution: Using Sensor Mode to Play Notes
While Sensor mode is normally used to send MIDI Continuous Controller (CC) values from 0-127 to control parameters on synths and in your DAW, there are also options to send MIDI notes based on minimum sensor thresholds.
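Conceptually, the threshold option turns a continuous sensor stream into note on/off gates. Here’s a rough sketch of that logic; the face-to-note mapping and threshold value are made-up examples for illustration, not my actual MIDIBridge settings:

```python
# Sketch: turning a continuous 0-127 sensor stream into note on/off events
# via a threshold. Mapping and threshold are hypothetical examples.
NOTE_FOR_FACE = {0: 60, 1: 62, 2: 64, 3: 67}  # one scale note per cube face
THRESHOLD = 40

def process_sensor(face, value, gate_open):
    """Return (midi_message_or_None, new_gate_state) for one sensor reading."""
    note = NOTE_FOR_FACE[face]
    if value >= THRESHOLD and not gate_open:
        return (("note_on", note, value), True)   # hand entered the zone
    if value < THRESHOLD and gate_open:
        return (("note_off", note, 0), False)     # hand left the zone
    return (None, gate_open)                      # no change: keep sustaining
```

Using the sensor value itself as the note-on velocity is one of several reasonable choices here; a fixed velocity would work just as well.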
Implementation: Configuring AudioCube Function in MIDIBridge
Modes and settings for each cube are configured in the free app Percussa MIDIBridge. Click the image below to see a larger version of the screenshot, which illustrates how I configured individual notes for Cube 1.
You can also see that even though I’ve set note triggering via the threshold, proximity to each cube face will cause the LED intensity to respond according to the normal response curve, with a different color for each face. I continued this method with Cubes 2 & 3 to program the rest of the notes.
Calibration
Like a Theremin, AudioCubes running in Sensor mode need to be calibrated. It’s not because they are analog, but instead to take into account the amount of ambient and infrared signal in the “control zone”. The screenshot above also shows how you can tweak gain to adjust for the room – and of course you could experiment with “Threshold” as well. The darker the room, the better for this particular method.
Virtual Ports, Ableton Live and Novation Launchpad
In my rig, MIDIBridge talks through virtual MIDI ports (MIDI Yoke) to Ableton Live. Live is playing some minimal background tracks from the original album, offering me a frame of reference for my performance. The signals from the AudioCubes are routed to various virtual instruments such as Camel Audio Alchemy, Absynth 5, and Sonic Charge Synplant. I assigned buttons on the Novation Launchpad to select and arm tracks (sometimes multiple tracks) so I can swap instruments out from under the cubes without having to load another MIDIBridge patch.
The End Result: “Alone” Rehearsal Video
I shot this video back in September when I first figured all this out. I’m now actively rehearsing it and hope to add it to the show soon. The key to playing this song is to play just behind the pocket to give the notes more emotional tension. The AudioCubes are plenty sensitive enough to achieve this, and the visual feedback not only helps the audience connect with the performance, but actually helps me with timing. As a musician, I really like the flow and feeling of the movement as well.
I shot this in 720p so if you have the bandwidth watch full screen at that resolution. I also captured audio right from my sound card so listen with good headphones or on a good sound system :^ ).
Listener/Viewer Notes
This video is in HD and I captured the audio at full fidelity right from my sound card, so listen with some good headphones or on a good system and select HD for full-screen viewing. The video and audio were captured in one continuous take with no content edits.
Composer Notes
To fit the back-story of my album, I set out to compose a song that sounded a bit alien in origin. To liberate myself from my typical compositional instrument, the keyboard, I decided to compose and perform the textures and melodies using only spatial controllers. In this case I used a Moog Etherwave Theremin and a Percussa AudioCube. Once I got going with this notion, I really got into using 6 dimensions of spatial control to go “Hendrix” with the Theremin. The title of the song has many meanings, one of which should be obvious to Theremin fans.
Producer Notes
I'm routing the Theremin's analog signal into Ableton Live and then converting the signal from pitch to MIDI in real-time. This signal is routed to various virtual instruments hosted in Live. I then use a Percussa AudioCube in Sensor mode to add 4 additional dimensions of modulation in real-time – 6 dimensions of spatial control in total. I'm changing the signal routing of the Theremin to route MIDI to different virtual instruments on the fly using the Novation Launchpad.
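For the curious, whatever pitch tracker you use, the final frequency-to-note-number step of pitch-to-MIDI conversion is the standard formula (this is a generic sketch, not the actual plug-in I use, which also handles tracking and glide):

```python
import math

# Convert a detected frequency (Hz) to the nearest MIDI note number.
# Standard equal-temperament formula with A4 = 440 Hz = MIDI note 69.
def freq_to_midi(freq_hz):
    return round(69 + 12 * math.log2(freq_hz / 440.0))

print(freq_to_midi(440.0))   # 69 (A4)
print(freq_to_midi(261.63))  # 60 (middle C)
```

Rounding to the nearest note is what quantizes the Theremin’s continuous pitch into discrete notes a virtual instrument can play.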