/led/level/map display using jitter

  • hi folks,

    i'm working on a demonstration / tutorial patch for how to use jitter to efficiently draw an antialiased GL scene to a monome grid.

    at present i'm rendering the full-res image at 256x256; it's then read back from the GPU, downsampled in jitter to a 16x16 grid, and formatted for output to the monome.

    my current patch draws a rectangle down one column of buttons to demonstrate the AA fader response, and then positions a circle of variable size around the grid. all of this is driven by line ramps so you can see the 'in-between' positions. i'm finding it incredibly difficult to get the scaling to work reliably so that the circle always draws as only one led when it's stationary at that led's location; instead, neighbouring leds light up to varying degrees, and not in any logical pattern across the grid (it looks seemingly random).

    i've been reading about jit.matrix's downsampling method and it seems to be a cheap / simple downsample. instead i'm hoping to do an averaging downsample (i.e. take the average of all source pixels per destination pixel), which i think should solve this issue. and to top it off, i'd like to do it on the GPU (using jit.gl.slab or similar) to keep CPU usage as low as possible.
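    for anyone who wants the averaging downsample spelled out: each destination led takes the mean of its 16x16 block of source pixels. here's a numpy sketch of that operation (illustrative only; the names are mine, and this runs CPU-side, unlike the GPU version i'm after):

```python
import numpy as np

def box_downsample(src, factor):
    """Average each factor x factor block of `src` into one destination pixel."""
    h, w = src.shape
    assert h % factor == 0 and w % factor == 0
    return src.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# a 256x256 frame with exactly one 16x16 led cell fully lit:
frame = np.zeros((256, 256))
frame[32:48, 64:80] = 255.0
grid = box_downsample(frame, 16)   # 16x16 array of led levels
```

    with this kind of downsample, a shape parked exactly on one led cell lights only that led, because every other cell averages to zero.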

    attached is the patch as it stands (super messy!) -- look inside the patcher in the bottom right called "jit-varbright"

  • here's a demonstration of how it might work in a context like 'corners'.

    all you need to do is connect a compatible device and hit the toggle switch for varbright
    if you go into the jit-varbright sub-patch you can change the size of the circle that's drawn. you'll also want to play with the @val for the jit.op to scale the output just right for maximum levels.

    using this cascaded blur + downsampling i haven't been able to get down to a single-led-sized circle, as the brightness drops off before the size has shrunk. any help would be much appreciated!

  • it's neat to think of how a higher resolution scene could be 'downsampled' and displayed on the monome --- in ways that maintained the source animation integrity + elegant handling of monome gui. opengl feels complex but ultra flexible.

  • ohh this is the updated dev-tools patch?! yippee

    I'm excited to learn about this jitter technique, been reading about it here for a while but never taken the dive. Thanks!

  • nice! will take a look soon...

  • @emergencyofstate i'm not very good with jitter myself, but i've been imagining using techniques like this for a while, so i'm trying to get my head around it. the implementation offered is simple, and hopefully there's someone here with a little more time to debug than i can offer right now..

  • ----------begin_max5_patcher----------

  • thanks for looking through this! i do remember looking at that patch you made but haven't given it enough of a look — will peruse it further..

    re: your changes
    reducing the plane count before any jitter CPU processing is definitely a good idea. do the op and scissors objects automatically adapt to the plane count as well as dims? i'm guessing so but wasn't certain…

    getting rid of that extra matrix is fine. i didn't even think to change the dimensions of the readback texture directly…

    i can tell you're unsure, but i understand: learning jitter is much more dark magic than max or msp. the tutorials are interesting but don't go nearly far enough to explain the weird and mysterious ways in which the objects function… if i understand correctly, the chain of gl.slabs runs entirely on the GPU, represented by the blue patch cables (my naming the context 'map-readback' could have caused confusion here), but i agree the best solution is a custom shader. i'm trying to get my head around drawing circles and squares in jitter natively first, though!!

    also, what those objects are doing is sequential anyway, i.e. each stage blurs by a factor of 2 (i don't understand the 'width' param of cf.blur at all, so i have no idea how much i'm blurring), then downsamples by a factor of two. perhaps i could write a shader that just did the averaging and spat out a 16x16 texture to the readback directly.
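    if each stage of the cascade really were "average four pixels, halve the resolution", then four such passes from 256 down to 16 would be mathematically identical to one direct 16x16 box average, which is encouraging for the single-shader idea. a quick numpy check of that equivalence (my own illustration, not the slab chain itself):

```python
import numpy as np

def avg_half(m):
    """One 2x2 averaging downsample pass (what an ideal averaging stage would do)."""
    h, w = m.shape
    return m.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
frame = rng.random((256, 256))

cascaded = frame
for _ in range(4):                     # 256 -> 128 -> 64 -> 32 -> 16
    cascaded = avg_half(cascaded)

direct = frame.reshape(16, 16, 16, 16).mean(axis=(1, 3))
assert np.allclose(cascaded, direct)   # the two routes agree
```

    (cf.blur is a gaussian-style blur rather than a plain box average, so the real chain won't match this exactly; the point is just that a direct 16x16 average is a legitimate single-pass replacement for an averaging cascade.)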

    the whole point of the above is that simply downsampling the matrix from 256 to 16 failed to give a smooth transition when a circle moved gradually from one cell to the next. i was reading about downsampling on the jitter forum and it seems there's no flexibility in how it happens, and i couldn't find an answer as to whether it averages or simply picks pixels and discards the rest (i noticed very few in-between led levels in practice, which suggests the latter). i'm striving for a smooth transition between states like tehn's sine/cosine fade function, but applied across a grid rather than a line.
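    for what it's worth, a true box average behaves exactly like a linear crossfade: as a one-cell-wide shape slides across a cell boundary, its brightness splits between the two cells in proportion to the overlap, and the total is conserved. a numpy sketch (mine, not the patch):

```python
import numpy as np

def box_downsample(src, factor):
    """Average each factor x factor block into one destination pixel."""
    h, w = src.shape
    return src.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# a 16px-wide bar pushed 4 source pixels past the boundary between cells 4 and 5:
frame = np.zeros((256, 256))
frame[:, 68:84] = 255.0
row = box_downsample(frame, 16)[0]
# row[4] = 255 * 12/16 = 191.25, row[5] = 255 * 4/16 = 63.75; they sum to 255
```

    tehn's sine/cosine fade is an equal-power version of the same idea: a linear split keeps the summed level constant, while sine/cosine keeps the summed energy constant.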

    also i'd like to move the jit.op processing inside the GL context. it's simply scaling the matrix's 0-255 values down to a 0-15 range to match the monome varbright capabilities. it wasn't clear to me how to accurately scale matrices using a shader (or some other function?)
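    the mapping itself is just an integer divide by 16 (i don't know which @op the patch uses, but taking 0-255 down to 0-15 is equivalent to a right-shift by 4). a tiny sketch of the intended mapping (hypothetical helper name, mine):

```python
def to_varbright(v):
    """Map an 8-bit level (0-255) to a monome varbright level (0-15): divide by 16."""
    return min(int(v), 255) >> 4

assert to_varbright(255) == 15
assert to_varbright(128) == 8
assert to_varbright(15) == 0
```

    in a shader the texture values would be 0.-1. floats instead, so the equivalent there would presumably be something like floor(v * 16.) clamped to 15 at readback; that part is my assumption about the pipeline, not something the patch does yet.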

    i guess i'm just writing this all up for interested folks to have a read through because i feel i've hit a bit of a wall with the programming and don't know what the path to success is. my dad always said trying to explain a problem to someone outside of it (no matter their knowledge base) is the best way to know where to go next...

  • "but combining these jitter slab processes into something mathematically predictable/understandable is beyond me right now :p"

    that's where i'm at. but! if what you're focused on is using jitter to format / parse LED commands that's a different story. seems like you're already there though.

  • @karaokaze
    well at least i have someone to blame now!