following on from emergencyofstate's example, here's a thread for application ideas... i don't think this is the same as feature requests... more that feature requests will make the application ideas possible.
here's a few ideas i've been hoping to see realised since aleph was announced:
cartesian sequencer based on makenoise rene capable of driving cv and midi controlled synths - had envisioned grid and arc control;
spectral processing effects for shaping audio input;
comb filter effect (something akin to ohmforce's ohmygod vst). -
My app idea:
Quad CV Gate/LFO brain.
4 independent CV sources. You can choose an LFO w/ variable waveforms or a Gate with every division. Each channel can also be either synced [external midi clock, or voltage gate from CVin1-4] or FREE running via the Aleph's internal clock. You can also modulate each CV source with incoming CV via the aleph's cv inputs.
I have a grid layout that allows the manipulation of the 4 LFOs' frequency or sync divisions as well as switching between multiple waveforms. I mean shiiit, now with the aleph there are all the buttons and multiple encoders to utilize; it's way too big for me to rectify at the moment.
I'll try to think more on it, and perhaps the most useful ideas can be merged with some of your guys' ideas into something really good. -
@eos do it
comb filter would be rad too -
i'm still very much trying to wrap my head around all this, so forgive, not an app...
i 'think' i'd like an operator that's a very robust lfo-function generator. so this operator could be used/pasted x4 into EoS's idea, or added into the lines module.
could that work?
also would really love to get a grid in midi out app, even very basic. -
the first application i want to work on is a 'dynamic function generator'
..along the lines of a CV'able + and morphing CV gate/trigger<->envelope<->LFO<->probability computer
quite general, i know. 4 cv in / 4 cv out can do a lot though! -
@emergencyofstate I dig this idea! I would like to try and send in cv and use the aleph to process these signals, with a 4x4 digital pin matrix inside to address the cv ins to the cv outs. just some basic functions like att, multiply, div, invert, slew and delay/phase would be very useful! (plus midi/monome input for ctrl dynamically?!)
-
not sure if its possible but it would nice to see a module for concatenative synthesis
rudimentary sampling has been mentioned and i'm excited to see the possibilities within the limits of the machine
[ and i'm not sure whether that will be accomplished by a specific set of operators or a dedicated module ] -
My app hopes...
-Basic MIDI sequencer from Grid or on screen out to gear. a la QY series. record MIDI/Loop Playback
-Audio sample player, if it can also sample that would be killer. Footswitch/Grid trigger.
-Print to screen scene settings at scene load, minimal GUI, our own graphics, video?
-Four channel mixer with compressor, EQ, Amp modeling, SVF, etc. Like the heart of a bunch of X0X boxes. ;)
+1 on comb filters, resonant Q, Karplus-Strong stuff (gets close with dubs)
+1 on digital LFOs (wavetable, sample & hold, +1 million other digital tricks for making waves.)
Shoot me down if it's beyond the scope of this hardware. I see the modular users on here will think of cool CV stuff also. -
thank you all so much for these ideas! it's really great to have a thread focussed on end-points (applications) rather than the building blocks that get you there (features).
most of what's been mentioned will be possible, and a bunch of it very soon (indeed some already is with a little imaginative beekeeping). we'll endeavour to make demo scenes (and drawings) to show how to construct some of these ideas in the near future! priorities, argh! -
we should put our heads together and 'hippie patch' something...brainstorm and work towards implementing an app / feature set to play with? we could start 'simple'....say...an LFO..and go from there? the bigger ideas seem out of reach (for me) and i could benefit from working on smaller blocks and then scaling upwards.
i'd like to start working on a CV IO scene. anyone down? -
ha!!! right as galapagoose posted that! we could move the 'small building block' idea to a dedicated thread to organize our thoughts.
-
my 'small building block' idea is more about specific operators and code-based things that need to be implemented rather than ruling out simple applications in this thread! please go ahead and talk about small block elements...
obviously to implement the LFOs effectively we need to finish the proposed 'RAMP' operator, but it'd be tough to know whether to prioritise something like that without the discussion of people's desired applications. as such that op (and midiout!) has been pushed toward the top of my working list. -
Cool! From what I'm reading here, that ramp operator you speak of would be useful for the handful of us eager to build CV tools. Would that be useful in creating "gate" or pulse signals? Or would that be better accomplished via a square wave?
-
gates are super easy to create. just an on & off signal. you can control the size of the gate with the 'on' level (0 = 0V, 32767 = 10V).
there's a DELAY operator on the way which will let you set specific millisecond delay time for the pulse length.
also, more generally, there's already plenty of tools to work with to make interesting CV! we'll push out these few additions and then do a tutorial on CV processing / generation building blocks. -
definitely, the aleph is already solid for CV.. can't wait for the tutorial!
-
i'm experimenting with feeding a CV in with a pulse and other waveforms to generate a CV trigger out. depending on the ADC's polling period and the waveform sent in (narrow pulse and sawtooth), i'm able to turn the pulse into a CV out trig with a bees configuration like:
ADC polling enabled...variable ADC polling period
CV pulse > ADC CV in > threshold > toggle > CV out
if the threshold is low (no pulse/peak) the toggle is off, and when the threshold is high (pulse/peak), the toggle is turned on and a CV trig is sent out.
problems:
despite using a narrow pulse or looking for the peak of a sawtooth, the toggle is re-triggering multiple times per CV in pulse, instead of a 1:1 ratio (1 pulse/peak in, 1 trig out). i was hoping that by tweaking the threshold limit (of CV in detection; i suppose this is useful for sawtooth height detection, as the pulses have a fixed height) and the ADC polling period, a 1:1 ratio would be established instead of 1 pulse in and multiple trigs out. i suspect that one pulse stays high across multiple polling passes and is detected each time, which causes the burst of triggers. but if the polling period is set too long, some pulses will go undetected entirely.
goal:
if able to establish a 1 cv in pulse/peak detection to 1 cv trig out, perhaps this idea could be extended further into dividing and multiplying the pulse stream, effectively turning a 1x CV in/out pair into a clock divider/multiplier/and more. perhaps by using timer, or the proposed 'delay' operator? or maybe even the proposed 'ramp' operator?
solid 'clock detection' from a CV source would certainly open up a lot of application potential.
thoughts and suggestions as to how this can be achieved welcome! -
i'm going to add a GATE feature to the ADC-- basically a comparator that makes the output binary, and only outputs on transitions between 0/max. this should solve the issues you're mentioning!
-
for a more general explanation, what's currently happening is the ADC is simply grabbing the state of the CV-IN at a given refresh rate. if the pulse is high for more than one frame it will send a high value output, and it doesn't care if the value has changed or not.
there's a new op in 0.3.8 (will post that today - currently testing) called 'IS' which will allow you to do some simple logic comparisons w/ edge detection. long story short — you'll be able to take a CV-IN and turn it into a pulse with 1 op. then you can MUL it to whatever cv_out level you want.
tehn's built-in comparator testing will be more robust, but this is a good interim solution! -
ADC GATE/comparator will aid in network configuration of clock processing & envelopes/env.followers
how "accurate" is the TIMER op? if fed a (theoretical) CV clock that output a pulse at exactly one second intervals, could we expect the same TIMER output report at every pulse? further, will TIMER output fluctuate based on processing load?
looking forward to testing out the 'IS' op -
not sure about envelope following but otherwise you're spot on.
re: following a cv clock
timing in the aleph is pretty good in general, though the accuracy is affected by anything that creates a whole bunch of interrupts (like turning the encoders really fast). still being refined though.
specifically you'll get inaccuracies induced by the ADC/PERIOD. i would suggest for accurate timing to send 16 or so divisions per clock and average them out for a balance between accuracy and consistency. -
jitter will be related to sampling period. high frequency sampling will reduce jitter, it'll end up like a weird sample and hold otherwise. i think it'll be something to test audibly and then adjust, but i do think you'll be able to get good sync?
-
yeah it sure seems possible. will definitely have to test by ear. is there any drawback to running high frequency sampling in terms of processing power? otherwise, i don't see why we wouldn't want to sample at a near max setting for the tightest possible sync (especially with ADC GATE incoming). it's a theoretical question/concern without (me) having worked much with this setup yet...
other application ideas:
DSP/CV bitcrusher
Stereo Phaser (would this be possible?)
DSP parameter/CV recorder application (can probably already be implemented in bees!)
CV Quantizer (the aleph could handle this quite well, i think) -
Speaking of gates, it would be cool to have some kind of random gates application that is related to an internal or external clock source. I hate to use the euro analogy but something like the zorlon cannon (mkii) or the random gate burst of the wogglebug.
Also, some kind of clocked random cv and gate app with the ability to freeze and loop what was just output. -
it seems like alot of the applications proposed here are focused on the aleph as a euro-modular accessory - LFO source, random gates, CV sequencer etc. i'm sure those will be useful but what i'm really hoping to see for the aleph are the kinds of applications that an analog modular can't easily do, things like granular synthesis, algorithmic composition w/ chaotic systems or some completely new ideas that really harness the power of this unique music platform that ezra and brian have created .. in short, there are plenty of inexpensive modules out there that can give you CVs for an analog synth, i want my soundcomputer to lead me to the future of beautiful computer music!
-
@meatus
Yeah for sure, I completely agree. I'm mainly interested in the audio/dsp side of things. I would kill for even a stripped down standalone version of mlr. But since I have nothing to offer in terms of development I felt like it was best to suggest applications that might be possible at this early stage. -
polygome, press cafe, corners and other monome apps for controlling midi/CV
for me, i'm mostly hoping to see applications that facilitate the unique kind of musical interaction that brian's controllers have created. [without being tethered to a computer.] -
i enjoy and love what i'm learning about voltage control so whats been proposed by the majority is awesome
before we get into unexpected/mindbending possibilities its natural that we each gravitate toward our initial reason for buying this thing...which likely fits into our musical comfort zone
most of what i want will be developed soon or is already possible with just "lines" -
@meatus, etc: I hear you. I'm approaching these 'app ideas' from my own perspective at this moment, sorry. -- some of these utility cv operations are concepts that right now, I can only even begin to remotely get my head around as far as mapping and accomplishing myself. Sure, its definitely not utilizing the full potential of things that are possible w/ the aleph. Excited to see some details of these more grandiose ideas!
-
I also agree: nice to see the cv ideas but I have more interest in the audio/routing/granular/in general sound processing side of things, too.
-
ditto - i'm imagining some pretty unique guitar effects and also, like gli, imagine that there's a lot that can be achieved with lines and the current operators.
-
i think aleph will cover a lot of ground in its many different guises, and i would agree that we've not even imagined the most revelatory uses yet!
regarding these more 'high-level' ideas, i think it would be useful to try and articulate how the aleph could aid these goals. for example when you mention 'granular synthesis' and 'algorithmic composition' these are immensely interesting ideas, but also very broad and lofty concepts. i think it'd be useful to try and articulate how the aleph would work in these contexts, specifically if there are things that make it more compelling than a computer based equivalent.
personally i'm interested in building a probabilistic tracker with samples & cv out. a lofty goal indeed but articulating how this could work i can begin to piece together what functionality needs to be added to bees and dsp modules to enable it (sample loading & playback, graphics drawing, new sequencing grid op etc). -
@galapagoose "sample loading & playback" yey yes yes
and clarification is certainly in order to help out devs trying to accomodate us (non-dev users)
for granular/conc synthesis, the bulk of work would likely be implementing some form of analysis module for the incoming (or stored) audio. users could then pretty easily create scenes based on their control method of choice.
i think -
grains: i'm working right now on good implementation of the fade params in aleph-lines. what this does is implement an arbitrarily sized crossfade between two read heads (or write heads) whenever position is changed. position changes arriving during a crossfade are ignored (or queued? probably ignored.)
the crossfade function could be whatever, going with a half-sine "window" for starters.
you can see how with settable fade times, fast position changes, multiple lines, and flexible routing, this gets into "granular synthesis" or "granular delay" territory pretty easily (and does some other stuff that's less well defined in the literature.) it is missing two ingredients to be a more classical general GS engine: 1) arbitrary playback rate and 2) more simultaneity.
so, next steps:
1) implement interpolated delay lines for arbitrary playback rate. (this would be trivial with floating point, and i might just eat those cycles and use it in this case.) just turned on the "rMul" and "rDiv" params in aleph-lines, which give you non-interpolated lo-fi rate changes of simple fractions. this is a little limited for classic granular synth effects, but it is very cool with filters and crossfeeding.
2) these modules are written in a pretty naive way. idea is to get something sounding and "feeling" good, and algorithmically correct (datatypes, values, order of operations.) algorithms being in place, will pursue more aggressive optimization, with the goal of more "voices" to play with in each module, or arbitrary combinations of voices (e.g. i would like to have 2x each of simple oscs, long delay lines, filters, with all the mix points for I/O.)
as far as controlling all the parameters for GS, that is sort of up to the user. will aim at providing a) the DSP architecture with low-level params exposed, b) the control processing toolkit that is BEES, and c) the avr32_lib boilerplate that pretty easily allows creation of single-purpose control apps using c. between these i think all needs should be covered.
anyways, there will be some more fun options in aleph-lines-0.1.1, working towards that stuff. also added CV outputs and filter parameter slew, so it is getting pretty interesting. -
@away: TIMER output is not perfect of course. for starters the timing of the whole system is limited to resolution of the control "tick" which is very close to 1ms.
so, you will see almost the same number every time (it may vary by 1 tick); that number should be very close to 1000 for each 1s pulse. due to the param scaling, if you hook it up directly to a time parameter it might not do exactly what you expect - it will be slightly fast. (i'm working on a solution to this; in a nutshell, parameters with units=seconds equate 1 bit of input variance to a 1/1024s change in the time param. i could make the system heartbeat go at that same rate; then all would be exact, but it would be pretty confusing to use 1024 ticks/s in TIMER and METRO and so on. alternatively, we can have inefficient math and/or rounding error in the param scaler. i dunno yet...)
if you choose your input clock period unluckily then yes it will jitter against the aleph's CV sampling frequency. in bees, timing may be affected by other things going on in a heavily loaded network or by lots of screen redraws or something like that. (i actually haven't seen slowdown directly caused by encoder interrupts, but it can easily be caused by rendering a bunch of text to the screen in response to those interrupts - like scrolling fast thru inputs list. which is why we should soon add a "page up/down" UI command.)
if you want super-accurate CV sampling timing, best consider a custom application that does little else. in that scenario, the avr32 is more than capable of doing audio-rate stuff, like streaming from sdcard to bfin. that particular functionality is a high priority for people so there is some considerable effort being put towards it. -
@zebra sweet
-
re: granular synthesis - what kind? i mean, we're all familiar with the typical granular concept of triggering short pieces of an audio file, but it'd be interesting to experiment with some of Curtis Roads' various granular concepts – e.g. wavelets, pulsarets, glissons, etc. some sort of glisson-based delay could be very interesting.
ha, i don't even have one, i'm just thinkin. -
How did I miss this epic granular post by zebra? This is right up the alley I was looking for with the Aleph. It's been two months since you were tweaking that granulator, so when can we see this released?
-
it is in lines.... pos_read and fade parameters.
what exactly do you want to do? the things i was talking about adding to lines (or some renamed variant) are interpolated buffer access (for arbitrary rate) and better/more arbitrary crossfade windows. are you missing those things, specifically?
cause no, i haven't made those feature additions. (i don't know if i'd call them "tweaks"...) they are lower priority than other things demanding my time; stability and basic features on the control side, making an initial release of the percussion synth, editor support stuff, all the changes in bees 0.5, etc.
those features can be accelerated if anyone wants to try their hand at DSP programming in the context of a single, well-defined task:
interpolated buffer access (and functional 32.32 arithmetic):
https://github.com/tehn/aleph/issues/8
... jeez, this is one of the oldest issues on there. i had been holding out hope that someone would take it on so i wouldn't have to.
different crossfades would be easier. there is a linear crossfade. i even had a sin/cos fade working (what is that, welch window?) but dropped it for efficiency (i think the full arbitrary mixing matrix is more important in lines, but could drop e.g. the filter mode mixing instead.)
also see this thread
https://github.com/tehn/aleph/issues/102
real granular "synthesis" would also require audio-rate grain triggering from the DSP (not 1ms-rate triggering via param changes,) and more "buffers" (but implying fewer routing options, probably all read heads mixed to a stereo bus with 2*N multiplies, then saturated and copied to mains/alts.) that's definitely not a "tweak" for aleph-lines, that's a totally different module and a good amount of work. -
@misk you are right, there should be some clarification.
when i say "real" or "classical" granular synthesis that is a catch-all for the kinds of low-level techniques you are talking about. you need a specialized playback engine.
that can happen on the blackfin but i'm not sure i'll actually be doing it. i'm just not personally passionate about that level of control over that kind of synthesis. i really need at least another developer on board who is passionate about it, doing the necessary research, etc. i can help develop a subsample scheduler, make the window functions, whatever, but from start to finish it's a big task with a lot of design decisions.
i am very much into extending the typical "echo" or "looper" controls to accommodate coarser kinds of granulated playback effects, a much more achievable goal in the short term. adding the relatively small features mentioned above would help: interpolated buffers allow those cloudy stochastic-pitch effects for granular delay and stuff; better crossfade windows would mean more transparent overlap; an optimization pass, and maybe some different decisions about routing/mixing, could squeeze another voice or two in. i'm experimenting right now with different logic for "perform read head crossfade during fade", which i think will help too. -
What I'd love to figure out is a way in lines to do short repeats, like a comb filter or Karplus-Strong stuff. When shortening the delay length it jumps between points, and it's not very effective for these sounds. Has anyone else attempted this? Any tips?
-
that's a good point.
consider this: bees uses 16b for control values, with a sign bit, so 32768 positive values. lines is currently limited to 60s (which i would like to double.) 60s is 2880000 frames. so there are a lot more possible delay times than control input values, in a linear mapping.
so right now, no, there's no way.
2 ways around it:
1) add a "fine tune" offset in samples for tuning short delay times.
2) make a new parameter type for exponential control of time.
for various reasons, 1) is more compelling to me; i'll add it to next version of lines unless there are objections. a "nudge" parameter, that bumps the read head position +/- 32k samples? -
PS physical modelling synthesis is gonna be another module. there's more to a decent string model than just a comb filter.
-
and PS, @c1t1zen i assume your emoticon signals disappointment. but lines is already pretty powerful for granular effects, and i really will add arbitrary rates when i possibly can. that lets you implement e.g. the "grainstorm" app and a lot more.
if there is something you are particularly interested in from the academic side of granular synthesis algorithms, i would like to know it. i do still want to make a granular playback engine, it's just lines without filters, without routing, with an internal scheduler, with arbitrary window generation, more voices.
i'm just not really geeked out enough on e.g. the curtis roads terminology to actually make those instruments. i don't even know what the right controls would be. (and honestly, i don't care that much; they don't sound that different to me!) so that's where i would ask for more input from the community, even if it's just pointing to a PD patch they like, or something. -
I think "fine tune" makes sense. The Granulator effect in AudioMulch is an interesting filter use. I'm not even that interested in creating sounds from a granular-synth but more of granulating an incoming signal.
I understand if it's a back burner issue. -
@c1t1zen I'm actually having some great fun constructing semi-granular scenes using lines with lots of pos_read manipulation, will post one up in the next day or two
@zebra I think actually just more voices in lines would make a really flexible granular engine. When you say without filters is this down to limits of the dsp chip? Would it be easier /possible to just have all the voices route through one or two filters. (i really love the sound of the aleph filters, especially with singing resonation, would be a shame to lose them in a granular module) .
Small adsr envelopes are useful too when going granular imho :) -
Are there any ADSR envelopes? I'd like to make some plucked type sounds. I've seen and used the slew and amp settings explained in the tutorial but it's not the classical ADSR of subtractive synthesis. Or maybe I'm not using them properly for ADSR.
Looking forward to your scene @duncan -
yeah, well it's a combination of the limits of the chip and my own time. i haven't done some basic optimization stuff like making an audio FIFO and doing everything as block-processing, which would speed things in many situations at the expense of a little latency. that's a bit of a project though..! some tricky implications. i'll get to it unless someone else does first. all the speed tweaks i've made so far are relatively superficial.
( also i've found i can *sometimes* and *barely* beat the compiler's optimization with hand ASM. so there's a lot of labor-intense optimization that could be done in various inner loops, *after* adding block processing. )
but anyways, that's good to hear. lines could still use some coarse optimization efforts, there's room for short-term options i think. maybe a good goal is to keep 1 or 2 filters (patchable, not fully routable), maybe simplify other routing (separate stereo output busses for adcs/delays/filters, each patched to 1+2 and/or 3+4 ?), maybe drop the full mode blending in the filters and instead use a single selectable mode? that target feature set might give us room for 24.8 interpolated buffer accesses and more buffers. (?? worth a try.)
for next minor rev (0.3.0) i'll add a fine-tune parameter, that's a good call.
you'll find the fade time tweaks in 0.2.0 can really help with pos-scrubby stuff. i would like some feedback on them really, and whether this behavior should be parameterized/changed (it's pretty simple):
- in aleph-lines from the current master release, there is a fade parameter, expressed actually as a rate in "ramp increment value per sample" (ugh, sorry).
- there are two read heads per line. when you change a read-head parameter (pos_read, delay, rMul, rDiv) it initiates a crossfade to the other read head, with the parameter change
- however, if a param change comes in while already fading, it will just jump to the new value.
now, in aleph-lines-0.2.0 :
- fade time is still expressed in that horrible unit but it is a more usable range
- pos/delay changes during crossfade are ignored. this is usually better, but might be bad in some applications, so...
i thought of making the fade coefficient visible directly, so you can set it in the middle and hear both taps.
also, the logic could be parameterized. at some point, however, too many damn parameters? i dunno.
anyways i got to do a round of bugfixes on BEES/avr32lib before doing fun cool DSP stuff that is cool and fun. -
my idea was that you can make ADSR envelopes in BEES, using presets for envelope stages. each preset can affect the relevant Slew parameter and value. in fact you can make any kind of envelope, so it's more flexible. so that's what i'd do right now.
but a lot of people have requested built-in envelopes with trigger inputs on the DSP side, so maybe my idea sucks. i do get that it's more convenient and intuitive to see the traditional controls. also more accurate timing of, say, AHDSR.
thing is, hardwiring envelope logic to every parameter that could possibly use it is bloat. running the logic at the audio-frame level is slow. etc. the slew system is efficient and flexible.
so, this really highlights the need for more modular DSP programs, maybe. eventually, i hope that more people can get into that side of the code; perhaps a first step is patching together variants of DSP modules from available code snippets / compilation units. and eventually talk about scripting that kind of code generation.
e.g., there is an env.c unit with an ASR envelope class and i forget what else. used in drum synth module. (it's coming out, i promise!)
PS where does your granulator ADSR go? affecting volume of sum of grain outputs? -
@c1t1zen yeah you can do standard adsr or ahdsr, whatever, with presets in BEES. at least that's the theory. i guess decay section is clunkiest, requiring a DELAY operator, and timing is quantized to 1ms.
and hm, in lines the slewable parameters are filter output mix and filter params. so it'd be hard to use on input. unless you route the delays in series and apply the "envelope" to the output of the first one.
i'd be into a lines variant with gated envelopes for inputs. everything's a trade!