made an aleph scene, made a video, found some more bugs

  • i realise that I never made a 'monome' video, but after all the hammering away at tutorials and bugs I thought I should share a scene and make a video.
    Grab the scene and watch the video here if you want to try

    http://monome.org/docs/aleph:bees:sharing:gridgrain

    A pretty rough take in the video, still learning how to play the scene!! so sorry about the volume jumps and out of tune pitch jumps!
    It's a direct feed from the Aleph, with an Eventide Space providing a plate reverb.
    I think the Aleph really deserves a bit more video promotion, this really isn't the film to convert the masses though, i'll keep working!

    loving this box

  • awesome stuff, thanks very much for sharing.

one unfortunate thing about the new bees version is that .scn files from 0.4.x need some tweaking to function on 0.5.x. it's not a big deal, but it's not a user-friendly process either, so i should just go through and post conversions of the posted scenes (in addition to the ones bundled in the release, which i already converted...)

to be honest this one needs loads of tweaking anyway and so rebuilding from scratch might be good practice in beekeeping!



  • nice mika vainio style crackles at the start. me likey.

  • @karaokaze yeah there's no latency at all with this unit . . at that point in the video i'm using a footpedal to start recording into the buffer each time i play a new note ( it also resets the writepos to 0 ), so what you're hearing is still the sequencer constantly playing back one bit of the sampled sound. At around 7.10 the live feed from the guitar is routed through, but i'm fading it in and out with the guitar volume knob, so not the best example of zero latency!

  • away from home but I have to try this when I get back!

@duncan_speakman ok, well just let me know if you change your mind. just posted a new public release: bees 0.5.2, lines 0.2.1. it's a pretty big improvement
    https://github.com/tehn/aleph/releases/latest

  • i went ahead and converted this scene, it seems to be correct but lmk if it seems wrong.

    also attaching the intermediate .json, just for fun.

    oh, i'm sorry about the garbage noise at startup in lines. fully initializing the audio buffer at startup was causing DSP stalls, still not sure why...

    i would like to release this drum synth module and then come back to lines, there's a few other simple changes to make: prune unused slew parameters, add a fine tune offset to delay time, lengthen buffers.

  • Drum synth sounds neat!
    Thanks for including the .json I want to open that up and look at the formatting.

  • Posted testing package for drum synth on github releases page, if anyone feels adventurous. @gli perhaps

  • sweet

  • and while you were all getting excited (rightly) about monome_sum, @zebra slips this one out, totally confused by some of the parameters but having fun!



    and another set of sounds in audio only . ..

    https://soundcloud.com/ofcircumstance/drumsyn-v-0-0-0/s-lRLu7

    lots of sub on both, get those headphones in



    (p.s. @zebra in terms of your plans, does anyone really need longer buffers than the ones we already have in Lines? wouldn't we really all like just more buffers ;) )

  • ha, could have done with reading that earlier :)

the timing of the STEP operator feels quite loose when it's controlling the drumsyn, when i get around to buying a usb/midi cable i'm going to try sequencing it from an octatrack to see how tight it feels then

  • @gli actually how did you find that? it's not listed on the modules page, am i missing a resource page somewhere I should be looking at . . . or did you just use intuition and type dsyn to see if anything was there? ;)

  • dsyn doc page not linked from anywhere yet i don't think. i just posted the dsyn build the other night and typed up the parameter description yesterday.

    timing:
    timing in BEES is really the issue. try drawing less stuff to the PLAY screen. each line of text rendered takes some time.

also you'll notice slowdowns when moving encoders. i have to fix this somehow. it will take some time to diagnose and refactor the hierarchy of interrupt handlers for gpio vs timers (and more importantly, encoder-polling timer handlers vs. callbacks from other timers; that's the hard one.)

    so generally, a dedicated app for really tight timings is called for, maybe?

    buffer:
    personally yeah, i want longer buffers, dealing with entire 2-4min musical sections. also agree that more buffers is of broader immediate utility.

    general question: i switched filters in aleph-waves to use only a single mode instead of mixing all of them. this is faster. is it an acceptable loss for waves? would it be acceptable for lines?

    also considering: what if lines had fewer routing parameters. say, just a stereo output bus for dry signals, one for delayed signals, one for delay+filter. each output bus could be hard routed to output channels 1+2 and/or 3+4.

    this also would save a lot of cycles compared to the full mixing matrix running at all times. save enough cycles, get another voice.

  • from a compositional/performance perspective I'm interested to know how you're thinking about using long buffers. Is it to play full sections and then overdub when they loop? (as an alternative to shorter loops) or something else entirely?

timing wise are there plans to be able to run ticks at audio rate (I'm thinking about the phasor type method in max) to control things like step or metro based lfos? At the moment all the communication is from BEES > dsp rather than the other way around right, or am I missing something that's already there :)

  • my two cents..
    on a personal note I love the routing parameters that Lines offers, especially in terms of integration with other hardware, I'd be sad to lose them (and once we get encoder destination switching in presets it'll be a really rich mixing environment :) )
    When you suggest a stereo output bus would each delay/dry/filter still be pan-able so it just goes to one physical output?

One (of many) of my favourite things about the Aleph is the audio in/out flexibility and the ability to address them all independently... I actually wish both Waves and the Drumsyn had similar routing possibilities.

    Filter-wise I don't think I'd mind single mode in Lines, but it would suck to lose it in Drumsyn (it's so perfect for giving high end click to low sounds)

  • 1) long buffers:

yeah, longer overdubs basically. but other stuff too. here are some recordings from my performances, examples of the stuff i like to do (bad quality, sorry)

    - short looping structure, but with metric variations, so you actually need a longer loop:
    http://catfact.net/snd/neow/swycre.mp3

    - long looping structure (>1min)
    http://catfact.net/snd/neow/lions_minimax.mp3

    - "section length" buffer is used as granulated source for much longer-form performance or installation
    http://catfact.net/snd/neow/iwne.mp3
    http://catfact.net/snd/neow/moca73.mp3

  • 2) timing.
bees can't do app ticks at audio rate, too much stuff going on. but a dedicated sequencing app could. would restructure the interrupt system so the bfin signals, via a GPIO pin, for new param values at the bottom of each frame.

  • for feedback, my opinions

    1. i need long buffers more than several buffers

2. single filter mode is an acceptable loss in waves and lines (maybe keep the mutant versions up and make spinoff modules so everybody is happy)

    3. i'm curious to know what lines could be like w/fewer routing options + an additional voice

    now that i'm off work i'll test drumsyn and report back

  • 3.) routing.

    cool, that's great to hear that you find the arbitrary routing useful.

    what i meant is, each mixing "group" has a stereo output bus. each thing in the group can be panned L/R and mixed to that bus. the whole bus is copied to each pair of outputs as requested. so you still get some flexibility... is it enough?
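a minimal sketch of what that group bus could look like, in plain C with floats for clarity (names, the linear pan law, and the fixed group size are all my assumptions, not lines code):

```c
#include <stddef.h>

/* hypothetical sketch of the proposed "group bus" routing:
   each source in a group has a level and an L/R pan into one
   stereo bus; the whole bus is then copied to whichever output
   pairs are enabled. illustrative only, not the lines code. */

#define GROUP_SIZE 4

typedef struct {
    float level[GROUP_SIZE];  /* per-source gain into the bus */
    float pan[GROUP_SIZE];    /* 0 = hard left, 1 = hard right */
    int to_out12;             /* copy bus to outputs 1+2? */
    int to_out34;             /* copy bus to outputs 3+4? */
} group_t;

/* mix one frame of GROUP_SIZE mono sources into out[4] */
static void group_mix(const group_t *g, const float *src, float out[4]) {
    float busL = 0.f, busR = 0.f;
    for (size_t i = 0; i < GROUP_SIZE; i++) {
        float x = src[i] * g->level[i];
        busL += x * (1.f - g->pan[i]);  /* linear pan for brevity */
        busR += x * g->pan[i];
    }
    out[0] = g->to_out12 ? busL : 0.f;
    out[1] = g->to_out12 ? busR : 0.f;
    out[2] = g->to_out34 ? busL : 0.f;
    out[3] = g->to_out34 ? busR : 0.f;
}
```

compared to a full mixing matrix (every source × every output), this is GROUP_SIZE pan/mix operations plus bus copies per frame instead of GROUP_SIZE × 4 independent mix points.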

    oh, the other possibility is going in the direction of waves: have patch points from everywhere to everywhere, not mix points. rely on external devices to attenuate as needed, and you don't get to have different mixes going to different places.

yeah, i wouldn't remove the multimode filter out of dsyn, i like the weird phase sweeps.

  • yeah, we could always roll up a bunch of variants of this stuff. once people start digging into the dsp codebase that will be one of the easiest steps to take.

    ultimately, modular code generation would be nice... i would love to work on this, but would need help also.

    one thing i'm doing right now is putting together heavily-commented "blank" templates for app and module development.

  • rad

    thank you so much z
    trying to break out of my shell and start messing with aleph at a deeper level

the annotated templates will help but learning dsyn is the first step

    perfect motivation for me right now

  • anyways as far as my personal work on lines, getting 4x 80s buffers is the goal. i think it's a solid use case. that's the theoretical limit of the SDRAM at 48k.

    2x 160s would be a useful alternative. so actually that is the most immediate goal, because the current lines supports two buffers. (something was going wrong when i tried it earlier, but it was probably something dumb.)

    adding soundfile->RAM support is another outstanding issue. i think it's straightforward, just a question of development time.

  • @gli thanks and you are welcome ;)

    will have something posted up in the next days, i think it will be fun and helpful

wow, loving these recordings @zebra .. amazing stuff.. ( are these with just Aleph/voice/viola ?) when will you be playing in europe? ;)
    my co-writer is a violin/viola player but she's been on tour since the Aleph arrived, i think she might wet herself when she gets to try it.

    thinking about it i realise that actually more ways to access two buffers would equally be super useful (maybe more so), most of the stuff I build in max just has various different players all accessing one buffer. I'm not sure if this is more or less complex programming wise than 4 x buffers.

on the dsp issue, I wonder who else is actually working on this stuff, @zebra seems to be the only voice in here on that front, are there quiet developers out there working on stuff? I read the comments about 'when people dig in' and really hope they will!
    I do really want to dive in (won't happen before June sadly due to composing deadlines) but i'm wondering how long it will be before i can contribute anything useful anyway, when it sounds like you really need some help from 'experienced' hands
    I can understand basic coding ( javascript) and I can build easily in max, but to make stuff work on the Aleph how much do we need to understand about chip architecture, memory issues, processor cycles etc etc ?!!! For example if we wanted to change the type of filter could we just copy and paste equations from some dsp textbook into the appropriate bit? (apologies if this is a dumb question)
    It leaves me thinking that I should just be working creatively within the (loose) limitations that you're already providing and hope that might help guide development of the lower level stuff.

  • so far i'm pretty much the only developer on dsp and avr32, brian has made some operators / working on serial coms, @phortran @eos @bens and a couple other people have contributed fixes.

    stoked to see it grow, bit by bit !

yes you can pretty much use the same algos as usual, but you have to read up a bit on fixed/fract math and the bfin architecture

    recent thread on github with @rick_monster
    https://github.com/tehn/aleph/pull/205
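the core of "fixed/fract math" is signed 1.31 fixed point. here is a plain-C sketch of what the blackfin intrinsics compute in hardware; the function names are mine, not the intrinsic names, and on the bfin you would use the real intrinsics instead:

```c
#include <stdint.h>

/* plain-C emulation of signed 1.31 fixed-point ("fract32")
   arithmetic, just to show the number format. illustrative only;
   on the blackfin you use the gcc built-in intrinsics. */
typedef int32_t fract32;   /* represents [-1.0, 1.0) */

#define FR32_MAX 0x7fffffff

/* convert a double in [-1, 1) to fract32 */
static fract32 float_to_fr32(double x) {
    return (fract32)(x * 2147483648.0);
}

/* 1.31 x 1.31 -> 1.31 multiply: take the high bits of the 64-bit
   product, shifted to drop the duplicated sign bit. */
static fract32 mult_fr32(fract32 a, fract32 b) {
    return (fract32)(((int64_t)a * (int64_t)b) >> 31);
}

/* saturating add, the other workhorse of fixed-point DSP:
   instead of wrapping around on overflow, clamp to full scale */
static fract32 add_fr32(fract32 a, fract32 b) {
    int64_t s = (int64_t)a + (int64_t)b;
    if (s > FR32_MAX) return FR32_MAX;
    if (s < -FR32_MAX - 1) return -FR32_MAX - 1;
    return (fract32)s;
}
```

so 0.5 × 0.5 comes out as 0.25 in fract32, and adding two full-scale values clips instead of wrapping, which is usually what you want in audio.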

eh no sorry, those recordings are just supercollider and viola. (simple patches, though... ) aleph is the new thing. i've hardly had time to play it cause busy coding it. couple of shows only since 'twas released.

ultimately i don't expect to do that much stuff with a single aleph unit, but maybe half as much.

  • omg, right here is this other thread too
    https://github.com/tehn/aleph/issues/204

so, the takeaway is that we are thinking of what kinds of other parameters to add to lines for indexing, and how to make timer usage / etc. more transparent.

    proposal: timeMul and timeAdd params that affect pos_read, pos_write, loop, delay

    porposal: MUL input for TIMER op. reported ticks are unit t = (MUL * ms)

    proposal: integer milliseconds param type for delay, pos, loop

    proposal: TIMER emits -1 when overflow. lines can automatically increment timeAdd (or something) on reception.

    by default, timeMul = 1, TIMER/MUL = 1, connect TIMER output to delay/loop/pos without any fudge.

    and/or, you can index things with reduced resolution for longer than 32k msec buffers, this way and that way as needed.
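a tiny sketch of how the timeMul / timeAdd proposal above could work on the receiving end. everything here is hypothetical, names taken from the proposal, and the 32768 ms window is my assumed overflow range for a 16-bit tick value:

```c
#include <stdint.h>

/* hypothetical sketch of the timeMul / timeAdd proposal: a small
   raw tick value from BEES can't address positions in a buffer
   longer than ~32k ms, so the DSP side scales and offsets it.
   not shipping code. */
typedef struct {
    int32_t time_mul;  /* scales incoming ticks (default 1) */
    int32_t time_add;  /* ms offset, bumped on TIMER overflow */
} time_map_t;

/* map a raw tick value to an absolute buffer position in ms */
static int32_t map_ticks_ms(const time_map_t *m, int32_t ticks) {
    return ticks * m->time_mul + m->time_add;
}

/* per the proposal, TIMER emits -1 on overflow; lines could then
   advance the offset by one full window so indexing continues. */
static void on_timer_overflow(time_map_t *m) {
    m->time_add += 32768 * m->time_mul;
}
```

with timeMul = 1 and timeAdd = 0 this degenerates to the current behaviour, i.e. ticks map straight to milliseconds; raising timeMul trades resolution for reach into longer buffers.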


    ???

  • i porpose

  • @duncan_speakman yeah changing or adding filter architecture no problem. as you will see, the code is very naive beyond using the blackfin intrinsics. it can be much faster with closer attention and additional work on the core architecture; for now, it is very simple, one needs to define:

    - init function
    - param-change function (called from SPI interrupt / controller)
    - process-frame function (called from audio codec interrupt)
- module data structure (located at an SDRAM address)

    module code, stack, and globals live in SRAM, which is limited to 64K.

    parameter descriptors are binaries built by a little helper program.

    (so far, not much weirder than making an audio plugin or max external.)
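as a rough picture of how those four pieces fit together, here is a hypothetical skeleton. all names and signatures are illustrative, the real conventions (and how the SDRAM base address is established) are in the actual modules in the aleph repo:

```c
#include <stdint.h>

/* hypothetical skeleton of a bfin module, following the four
   pieces listed above. names/types are illustrative only. */
typedef int32_t fract32;

/* module data structure: large state (e.g. audio buffers) lives
   in SDRAM; code, stack and globals must fit in 64K of SRAM. */
typedef struct {
    fract32 gain;
} module_data_t;

static module_data_t *data;

/* init function: called once at module load; in a real module the
   SDRAM base comes from the linker/loader setup */
void module_init(void *sdram_base) {
    data = (module_data_t *)sdram_base;
    data->gain = 0x7fffffff; /* unity gain in 1.31 fixed point */
}

/* param-change function: called from the SPI interrupt when the
   controller (bees) sends a new parameter value */
void module_set_param(uint32_t idx, int32_t v) {
    if (idx == 0) { data->gain = v; }
}

/* process-frame function: called from the audio codec interrupt
   once per sample frame; in[] and out[] are the 4 codec channels */
void module_process_frame(const fract32 in[4], fract32 out[4]) {
    for (int i = 0; i < 4; i++) {
        /* 1.31 fixed-point multiply by the gain parameter */
        out[i] = (fract32)(((int64_t)in[i] * data->gain) >> 31);
    }
}
```

the per-frame callback is where the cycle budget lives, the param callback is where controller values land, and everything else is bookkeeping.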

    for arithmetic DSP operations, learn and use the blackfin gcc intrinsics:
    http://blackfin.uclinux.org/doku.php?id=toolchain:built-in_functions

    for deeper understanding, or if you want to use assembly, check out the architecture reference:
    http://www.analog.com/static/imported-files/processor_manuals/Blackfin_pgr_rev2.2.pdf

    and i suppose i may as well add, the datasheet, though you probably don't really want to look at it:
    http://www.analog.com/static/imported-files/data_sheets/ADSP-BF531_BF532_BF533.pdf

    such DSP classes as i've made (in aleph/dsp) are functional, and i think pretty obvious in function. but they are nothing brilliant. feel free to use/adapt/add!

  • Ahhh, when I said I built in max I meant just in the patching environment, sorry!
    sadly a lot of the above goes way over my head, so it's going to be a while before I can contribute usefully to the low level stuff!
    ( I did try setting up a Linux toolchain but hit loads of errors and decided it needs to go on the back burner until summertime, i think i didn't really understand what my $PATH should be)

    If one did want to start learning to code dsp modules where would be the best place to start (from scratch I mean), with a sort of 'C for dummies' book?... I seem to remember an earlier thread with book