thoughts on designing for arc

  • I am writing this after a relatively short but intense time spent making a number of software sketches for the arc. I think of these as minimal environments for various sound-related functionalities that take advantage of the unique arc features. I am focusing on what ideas can be implemented using the arc4 alone, without the need for any other interface. So, no additional functionality coming from the grid, and no software requiring the user to use the computer for anything other than launching the program and initiating the communication with the arc. Just four "wheels" that can be spun in both directions.

    Interestingly, I often find myself thinking "I wish there was a way to switch functions without having to reach for the screen", or something similar. In other words, I am thinking of the button-press functionality that was in the older model of arcs. I understand that this was a difficult decision to make, and that it has been thoroughly discussed here. So the point of this comment is not to lament the loss of it, but rather to think of creative ways of developing functionally rich applications that are essentially "button-less". Personally, I find this both challenging and inspiring.

    It is possible that the need for a button to serve as a switch is there because that is the gesture/function connection we have learned (and reinforced) by interacting with thousands of devices built exclusively around buttons. The monome grid is an obvious example, but there are of course countless other button-driven devices in the musical and everyday world. However, an equivalent of a button press can be programmed as a response to a number of arc behaviors. For example, a change of rotation direction can register as the equivalent of a button press in the software. Or a switch from one arc wheel to another. Or a certain speed threshold can function as a trigger, etc.
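
    To make that concrete, here is a minimal JS sketch of such a "virtual press" built from a direction reversal or a speed threshold. This is just the logic, not an actual patch, and the threshold value is arbitrary:

    // Sketch: treat a direction reversal or a fast spin as a "virtual press".
    // Deltas arrive as signed ints from the encoder.
    function makeVirtualButton(speedThreshold) {
      var lastSign = 0;
      return function onDelta(delta) {
        var sign = delta > 0 ? 1 : -1;
        var pressed = false;
        // a change of rotation direction counts as a press...
        if (lastSign !== 0 && sign !== lastSign) pressed = true;
        // ...or a single delta above some speed threshold
        if (Math.abs(delta) >= speedThreshold) pressed = true;
        lastSign = sign;
        return pressed;
      };
    }

    var press = makeVirtualButton(20);      // threshold is a guess
    [3, 5, 4, -2, 6, 30].forEach(function (d) {
      if (press(d)) console.log("virtual press at delta", d);
    });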

    These kinds of questions inspire in me a kind of curiosity about the functional vocabulary of the arc…

    --How many rudimentary unique gestures can be performed using an arc4 (or an arc2, for that matter)?
    --How many conditioned gestures can be built up from that? (By this I mean gestures that are unique only in conjunction with another gesture performed at the same time, or in sequence. For example, rotating two wheels in the same direction resulting in one outcome, vs. rotating the same two wheels, but in opposite directions.)
    --How many levels of conditionality can be introduced without losing an intuitive understanding of what the outcomes are?
    --How many new/unique actions can be recognized as a result of specific LED feedback design (rather than only looking at the physical rotation of the wheels)? For example, an LED circle divided into a number of specific segments would instantly turn the "pot" into a rotary "switch".

    and

    --What are the best approaches to taking full advantage of the staggering 1024 ticks per revolution?

    Finally, I just wanted to point out that there are some interesting ideas discussed in this thread:
    http://post.monome.org/comments.php?DiscussionID=13138&page=1
    started by @GreaterThanZero, although it all pertains to the original arc design (with the button press). But GTZ describes a lot of unconventional approaches to both grid and arc, which are all inspiring.

    So, if anyone is working on some new ideas, or has some thoughts to toss in here, that would be great. With the new 2012 edition of the arc, I am sure there are a lot of people thinking things up… no?

    Sincerely,
    p.

  • The lack of a button makes it difficult for me to visualise (mentally) how to incorporate a delta-only arc, but that's mainly because I've only used one that has it.

    My first arc implementation (the chocolate grinder) was built using similar conceits. I wanted to ONLY use an arc, no buttons or anything else. My arc did have buttons though....

    There was a thread on the max forum a while back talking about all the different ways one can generate data from a single stream of information (in that case sensor data). I think the way to go is to map out things like that first, then see how each of them can be useful.

    You've got a stream of delta values coming in that are either positive or negative, small or large(r). The first and easiest mapping is to convert that into absolute numbers (i.e. a stream from 0-1 or 0-127, wrapping or not).

    You can also take the direction (positive or negative) and use just that as the delta value (as only +1/-1), then take the full delta (+4/-4) and use that to control another parameter. So direction controls inc/dec, but speed controls X.
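
    Roughly, in plain JS terms (a sketch only; the range and names are placeholders):

    // Sketch: one incoming delta stream, three derived control streams.
    function makeDeltaMapper(range) {            // e.g. range = 128 for 0-127
      var position = 0;
      return function onDelta(delta) {
        // 1) absolute value, wrapping around the ring
        position = ((position + delta) % range + range) % range;
        // 2) direction only: +1 / -1 (inc/dec)
        var direction = delta > 0 ? 1 : -1;
        // 3) speed: size of the delta, used to drive a second parameter
        var speed = Math.abs(delta);
        return { position: position, direction: direction, speed: speed };
      };
    }

    var map = makeDeltaMapper(128);
    console.log(map(4));    // { position: 4, direction: 1, speed: 4 }
    console.log(map(-9));   // { position: 123, direction: -1, speed: 9 }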

    With such small resolution you can also program micro-gestures to act as binary/discrete switches. A tiny double twist = button press, or something. As long as it has to happen within a time window, it would be hard to do it by accident.
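
    Something like this, maybe (sketch only; the window length and return tolerance are guesses):

    // Sketch: detect a tiny "double twist" (out and back) inside a time window.
    function makeDoubleTwist(windowMs, returnTolerance) {
      var startSign = 0, travel = 0, startTime = 0;
      return function onDelta(delta, nowMs) {
        var sign = delta > 0 ? 1 : -1;
        if (startSign === 0 || nowMs - startTime > windowMs) {
          startSign = sign; travel = 0; startTime = nowMs;   // new gesture
        }
        travel += delta;
        // direction has reversed and we came back near the starting point
        if (sign !== startSign && Math.abs(travel) <= returnTolerance) {
          startSign = 0;
          return true;                                       // "button press"
        }
        return false;
      };
    }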

  • I don't have an arc, but can see the attraction. Can you derive rate information from the dials: angular velocity and acceleration? Would those be any use? You could assign different behaviours to different velocity bands, that sort of thing; maybe programme in a little hysteresis to take you into different gestural 'zones'.
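
    In rough JS it might look something like this (the band edges and hysteresis margin are pure guesses, since I don't have the hardware):

    // Sketch: map |delta| per poll to a velocity "zone", with hysteresis so the
    // zone doesn't flicker when the speed sits right on a band edge.
    // It climbs or drops at most one zone per poll.
    function makeZoneTracker(edges, margin) {     // e.g. edges = [3, 10, 25]
      var zone = 0;
      return function onSpeed(speed) {
        if (zone < edges.length && speed > edges[zone] + margin) zone++;
        else if (zone > 0 && speed < edges[zone - 1] - margin) zone--;
        return zone;                              // 0 .. edges.length
      };
    }

    var zoneOf = makeZoneTracker([3, 10, 25], 1);
    // slow turning stays in zone 0, a fast spin climbs into zone 2 or 3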

  • interesting post, i think there are 2 other possibilities here. one is delta thresholds triggering some unique action (a turn over/under +5/-5 could be a threshold trigger). another possibility is a lack of movement entirely for a period of time.
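
    something like this, maybe (plain js sketch, the numbers are arbitrary):

    // Sketch: (1) a delta over/under +/-5 fires a one-shot trigger,
    //         (2) no movement at all for some period fires a different one.
    var IDLE_MS = 4000, THRESHOLD = 5;
    var lastMoveTime = Date.now();

    function onDelta(delta) {
      lastMoveTime = Date.now();
      if (Math.abs(delta) >= THRESHOLD) console.log("threshold trigger");
    }

    setInterval(function () {
      if (Date.now() - lastMoveTime > IDLE_MS) {
        console.log("idle trigger");
        lastMoveTime = Date.now();     // re-arm so it fires once per idle span
      }
    }, 250);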

  • Great comments guys!
    Lots to think about, but the things that immediately stand out for me from your posts are:
    --the fast "double twist" as a switch. am definitely going to try and test this as a behavior;
    --investigate further hysteresis and angular velocity concepts in relation to arc;
    --analyze the ideas that can be derived from LACK of interaction. no interaction over a specific period of time can be utilized as a kind of internal atrophy. this can be super useful/important in applications that are more systemic/generative in nature.

  • @laborcamp
    'internal atrophy' - love it! In modelling terms you should be able to draw a direct line from the energy put in by the user (a function of the square of the rotational velocity) to an energy state for the system driven by the arc, along with some decay rate or mechanism. Can't be done with buttons :)
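
    In code the model could be as small as this (a sketch; the decay constant and the frame rate are placeholders):

    // Sketch: user input adds energy proportional to velocity squared,
    // and the system's energy state decays a little every frame.
    var energy = 0;
    var DECAY = 0.98;          // per-frame decay factor (a guess)

    function onDelta(delta) {  // delta per poll ~ rotational velocity
      energy += delta * delta; // energy put in by the user
    }

    setInterval(function () {
      energy *= DECAY;         // "friction" -- the system cools down on its own
      // ...drive brightness / amplitude / whatever from `energy` here
    }, 33);                    // ~30 fps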

    [edit] I suspect that may be GreaterThanZero's 'Throw' patch though...
    http://post.monome.org/comments.php?DiscussionID=13138&page=2

  • @chrisbob12 : that's right: the GTZ Throw and Bellows patches are definitely good examples of ways of implementing friction- or gravity-like conditions. That's one reason why I linked to that thread in my original post.

    But your comment about timing pauses made me think of a slightly different approach to this idea. Not necessarily something that moves closer to physical modeling, but rather a function of a system. As in, if an arc wheel that is performing a certain function is left untouched for a given amount of time, it simply switches its function to the next one. Or it randomizes some parameters, etc. This, for example, could be a way of introducing tonal or structural variations into the system.
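
    A rough sketch of that behavior (plain JS, not a patch; the function list and timeout are made up):

    // Sketch: if a wheel is untouched for TIMEOUT ms, it advances to the
    // next function in its list (it could just as well randomize a parameter).
    var TIMEOUT = 10000;
    var functions = ["filter cutoff", "loop length", "grain density"];
    var current = 0;
    var lastTouched = Date.now();

    function onDelta(delta) {
      lastTouched = Date.now();
      // ...apply delta to functions[current] here
    }

    setInterval(function () {
      if (Date.now() - lastTouched > TIMEOUT) {
        current = (current + 1) % functions.length;
        lastTouched = Date.now();
        console.log("wheel now controls:", functions[current]);
      }
    }, 500);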

  • I've mentioned it elsewhere, but I really think the right answer is going to be pairing the arc with a leap motion controller when those are available. That would then restore the button press in new and interesting ways, potentially letting us mimic this sort of flexibility:
    http://www.disneyresearch.com/project/touche-touch-and-gesture-sensing-for-the-real-world/

    (though, if we had access to that tech, we wouldn't need the Leap)

  • This is definitely cool, and opens up a lot of possibilities. in the future. maybe.

    But we are where we are now. :-)

    p.

  • You are not wrong.


    The challenge w/ a double-turn or equivalent gesture is that your hand will generate some unpredictable amount of opposite-direction movement when you stop rotating (unless you stop by letting go of the knob).

    What would probably help for that kind of application is something like a framerate reducer, or strobe-light. So, rather than receive deltas every 10 ms or whatever, your app receives aggregated numbers for a larger span. (say, every 16th or 32nd note, quantized to the beat?)
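
    Something like this, maybe (plain JS sketch of the logic; the span length stands in for a 16th note at 120 bpm, and ideally you'd lock it to the transport rather than a timer):

    // Sketch: instead of reacting to every raw delta, sum them up and hand the
    // total to the app once per quantized span. A small opposite-direction
    // "bounce" at the end of a turn mostly cancels out inside the window.
    var SPAN_MS = 125;          // e.g. a 16th note at 120 bpm
    var accumulated = 0;

    function onDelta(delta) {
      accumulated += delta;
    }

    setInterval(function () {
      if (accumulated !== 0) {
        console.log("aggregated delta for this span:", accumulated);
        accumulated = 0;
      }
    }, SPAN_MS);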

    Worth exploring.

  • It's not an arc, but I'm picturing large, heavy wheel-like disks which you can spin, whose physical momentum keeps them going. You use your hand as a brake to stop or slow them.

    It's late, and I've had a beer ;)

    This would sit nicely next to the large monome which you play with carillon-style buttons.

  • Ha! I think I have seen this on LOST...

  • that does look pretty slick though...
    http://createdigitalmusic.com/files/2013/01/leap.png

  • The "throw" experiment came out of wanting exactly what chrisbob12 describes.

    There's a thread around here somewhere wherein Shimoda and I discuss the relative merit of various arcade spinners for use as physical oscillators. There's much promise to the idea. No LEDs, though.


    There's also discussion (w/ Lokey) of building a turntable-sized "arc 1". We found a DJ controller that was pretty close, but with fewer LEDs and only one brightness level.


    ...and then a bunch of you are already sick of me bemoaning the lack of oversized control options in the world. I mean, an audience can't see the connection between twisting a knob and the change in your music, but they'll sure know what's up if your encoder wheels look like this:
    http://www.marvelbuilding.com/wp-content/uploads/2011/03/steering-wheel-of-Wonderful-Kid-Bedroom-with-a-Pirate-Ship-Inside.jpg
    (though honestly, I was thinking more steampunk than pirate. a mixing board of two-handed levers, that sort of thing...)


    ...but we are so off topic now.

  • I don't know what I think about that. My gut response is that size is not (or should not be) the biggest concern. After all, most instruments are not much bigger than grids or arcs. As far as stage presence goes, I think arcs and grids are actually quite "present". More so than laptops, in my opinion.

    But I share the sentiment, that something does get lost from the live experience, when all we see is someone hunched over a laptop. However, a lot depends on the staging of events, venues, and approaches that do not neglect the visual component of the experience. In those respects, grids and arcs do offer a definite quality. And an interesting observation here would be to bring up the Leap again... which in actuality eliminates all physical presence of the instrument!... (BTW I did place an order for one, and am looking forward to giving this a go as well :-)

    But, back to the topic.
    Stage presence aside, I think it is interesting to explore the object/instrument in a very thorough way, searching for its unique qualities. I feel that the arc has something special to offer. And I am hoping that we could talk this through here. The best I can do to describe what I have in mind is what I earlier called a "functional vocabulary" of the arc. It seems important to have a sense of the scope. To understand where the "edges" are.

    When I was at the art academy, my drawing teacher had us do one of those still life drawings. Nothing special there. What was unusual was that when we were all finished, he had us continue drawing, adding more and more detail, filling in more information. We did this until the drawings were literally solid graphite surfaces with no distinguishable imagery left. The part of this experience that has stayed with me to this day was the physical realization, the experience, of the "edge" beyond which one could not move any further.

    I don't know exactly what would be an equivalent of that experience here, but I am hoping/looking for something similar.

  • It's less the size of the device than the size of the gesture. Moving your arm transmits more clearly than moving your finger, and putting your whole body into it makes things quite obvious. The trick is to not look stupid doing so. Which is what large controls facilitate.


    This varies by app, of course, but I think the arc is much more communicative to an audience than the grid is. It just lends itself well to feedback models where the LEDs extend what your hands do, whereas the grid is better suited towards guiding your hand.

    Note: "trails" is a huge exception that proves me completely wrong. But for the purpose of discussing a functional vocabulary, you should probably ignore that.


    (and yes, I am all about interaction design. This thread is exciting.)

  • One thing not yet explored on the arc, or on newer 2012 grids for that matter, is the idea of feedback that persists and fades slowly, overlapping and building like a visual reverb. I think what you're describing with the drawing experience could be captured this way. What I'm just not sure of is:

    * how to best sonify those visuals
    * how to best control them with your hands

    But it's a strong idea, and "graphite" is a fine name for the app.

  • I love graphite! There is a great short story by Shalamov about graphite that is just amazing; I will dig it up and bring it here. So, good point (and note taken) regarding Graphite as a name. For my sketches for arc I have been using the name Turbine (or Turbines), which I also quite like.

    The idea of "layers" can be useful in two ways: on one hand it can be a "record" of previous actions/gestures, but on the other hand, and maybe more interestingly, it could be a mapping of the unfolding variables. Foreshadowing. Again, maybe more useful in generative approaches. Computer extrapolating new paths based on previous decisions. Or maybe both can happen at the same time: a residue of the past, and extrapolation of the future, with the performance unfolding in between?

    The LED levels might make something of this kind doable, although 16 levels is not a huge range to work with...

  • Here is the very short, and final chapter from "Kolyma Tales" by Varlam Shalamov entitled "Graphite". This text is the first thing that crosses my mind every time I hear or read the word "graphite".
    p.

    PS
    I guess this definitely meets the criteria of "off topic" ;-)

  • Yeah, I don't know if I like this after all. It's interesting, but not yet good.

  • Probably needs less pinpoint accuracy, more "blur". Maybe the pointer should exert some random amount of influence on the LEDs to either side of it. Something along those lines.

    Don't know. I wish there wasn't such a large jump from "0" to "1" in brightness.

  • The fades are less linear now. Higher framerate, random chance of each pixel diminishing. Feels somewhat better.

  • whoa, those look lovely!
    have a very "burning coals" quality about them...

  • Great thread, and at the risk of going horribly off-topic, it prompts the following thoughts:

    @GTZ - I quite agree about the size of gesture being important. From any point of view, a person can exercise greater control and expressivity with a larger gesture (within limits). This may seem counter-intuitive, as it seems that one can exercise such precision with one's fingertips; however, one only has to watch a violinist making a vibrato to realise that the gesture is much more than the fingertips.

    @laborcamp - size of controller is not my biggest concern either; I was thinking more in terms of the inherent physics of the controller (harder to embody in a small controller). It's the mechanical physics which gives a traditional acoustic instrument some feedback and interactivity to the player (vibration, action, damping etc.)

    @laborcamp - one of the more enlightening moments in art class for me was when we were made to draw with our non-dominant hand, and also made to draw without lifting the pencil from the paper or looking at the drawing. I wonder if this kind of approach would be productive in using an arc?

  • Annoying timing; I think I'm experiencing one of your infamous serialosc dropouts, laborcamp.

    I built something to heighten the burning coal metaphor. Each LED is influenced a small amount by the brightness on either side of it, which then feeds back into the loop every frame, allowing flames to spread (while still diminishing over time). Pretty excited to see it in action. No devices detected.

    Grr.
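
    While I wait, the spread step is roughly this kind of loop -- a plain JS sketch of the logic rather than the actual patch, and the coefficients are guesses:

    // Sketch: 64-LED ring where each LED gains a little heat from its two
    // neighbours every frame and everything cools over time, so bright spots
    // spread outward like embers.
    var LEDS = 64;
    var heat = new Array(LEDS).fill(0);
    var SPREAD = 0.08, COOL = 0.96;

    function onDelta(position, delta) {          // turning a knob adds heat
      heat[((position % LEDS) + LEDS) % LEDS] += Math.abs(delta);
    }

    setInterval(function () {
      heat = heat.map(function (h, i) {
        var left = heat[(i + LEDS - 1) % LEDS];
        var right = heat[(i + 1) % LEDS];
        return (h + SPREAD * (left + right)) * COOL;
      });
      // map `heat` to 0-15 LED levels and send it out to the ring here
    }, 33);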


    EDIT:
    arc's showing up on a different machine. transferring files there; will report back.


    EDIT2:
    holy crap, when did it get to be 1am?!

  • Sounds very exciting... I am actually trying to work with your initial files hoping to connect it with some form of sonic activity.

    Sorry to hear about the dropouts! That is annoying. I have to say though (fingers crossed) that since the update to the serialoscd 1.2a I am in the clear with that issue.

    p.

  • It almost works. Timing issues...

    (I take the list, I split it into three versions, I offset two of them, and I process them back together. Somewhere over the course of that, the numbers fall out of sync and arrive in the wrong order)

    I know I could do this in JavaScript, whereas the [zl] operations are making my head hurt. May have to throw in a [js] box to pull this off...

  • This is probably hitting the processor a lot harder than previous versions. Not sure if it adds enough to justify that, but it's definitely neat.

  • And as is usually the case, the moment an app starts to take form, I realize I've misnamed it. This version isn't living up to your graphite idea, but "ember" is quite nice. Or "pyro".

    ...and I've wanted to name something "ring of fire" since the arc's release; this might grow up to be that.

    3am. I'm dumb.

  • I think if I were using the buttons, there are a couple of numbers I'd set differently. The fire would spread further and last longer, with "pressed" interactions smothering the flames.

    That might happen later. Still need to figure out what this is good for...

  • hardest working fellow on these forums these days, gtz, well done.

  • @GreaterThanZero : the last version is beautiful indeed. Ember suits it quite well. It looks sophisticated and very organic.

    Of course, in the context of the thread, I am wondering which of the unique functional features of the arc are articulated here? ;-) It certainly showcases the great richness and detail of the display capacity. It is inspiring. And I am sure it can/will evolve into something useful sonically.

    The first tests I have done yesterday with the graphite versions moved in the direction of CC controls rather than actually making sounds (although I did a version that modulated sine waves, with frequency being tied to the brightness of the LED which was quite wild). So this ended up working like a less predictable/organic rendition of Plates. I sort of liked the initial tests and I think I will try to organize them a bit more and post later.

    I don't think it's dumb to stay up and work. I do this all the time! In fact my standard work time is something between 11:00 PM and 4:30 AM... sometimes longer sometimes later. I work best at night. Something about the understanding that nothing else is happening, and nothing requires my attention, except my own work.

    In any case.
    This is fun.
    p.

  • Bidirectional control, mate. The LED feedback isn't limited to representing one parameter. It represents dynamic areas of heat and cold that shift over time, at your command.

    Those could be used pretty directly by dividing the frequency range into 64 regions, as was done here:
    http://vimeo.com/22645870
    (there, applied to a white noise generator. could have been applied to an audio track just as easily)

    Or, say, imagine there are X number of temperature sensors evenly spaced around your body. Each one controls a CC, and you're holding a flamethrower.

    Neither one of those is doing anything terribly unique: you're creating sounds or controlling parameters, big whoop. But, you're controlling them intuitively, and that intuition is built on feedback not available to you in other devices. Unique functional features of the arc, articulated here.

    Is "smearing your input across time" not a heretofore unexplored area of functional vocabulary?

    Anyway... yeah, this should branch off into its own thread soon.

  • This is neither here nor there, but despite getting some pretty good results from them, my LED manipulations are seriously inefficient. I do a lot of things through brute force that cycling74 gave us elegant solutions to with Jitter.

    I'm planning to refactor some of that for the next version of Ember (before starting that other thread). And then I think I want to rebuild some of my old patches that people are still learning from. Bad habits spread like disease around here...

  • Do the wheels have enough resolution to detect an off-center tap on the case as a predictable change in their values? Imagine an arc2 set up vertically, and giving it a quick tap that turns the whole thing CCW around its center. Each wheel will turn CW a bit. Classifying motions of short duration where both wheels match as 'taps' could give a new axis of control.

  • You have to whack the case pretty hard, which still doesn't provide very reliable results. (the wheels don't spin freely; they're meant to stay still until you touch them)

    If I hold the arc in one hand at roughly vertical orientation, and use the other hand to tap either side of a knob, as long as my hands are oriented such that the motion travels more downwards than up, that is reliable. And I suppose it is more reliable than twisting the knob short distances, because there's less "bounce" on release. But you still have to be careful, and there are controllers better suited for that.

    You're also left with the usual challenge: There's nothing to signal the end of a gesture. Trying to differentiate between a very short motion and the beginning of a very slow one requires introducing latency on both responses. Possibly less so on the new model though -- the increased resolution might make that easier.
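
    Something like a short decision window would handle it -- buffer deltas for a moment, then decide. A plain JS sketch of that trade-off (the window length and travel limit are made up):

    // Sketch: collect deltas for a short decision window; if the total travel
    // stays tiny, call it a "tap", otherwise treat it as an ordinary rotation.
    // Either way, the response arrives windowMs late -- that's the latency cost.
    function makeTapClassifier(windowMs, maxTapTravel, onTap, onRotate) {
      var travel = 0, timer = null;
      return function onDelta(delta) {
        travel += delta;
        if (!timer) {
          timer = setTimeout(function () {
            if (Math.abs(travel) <= maxTapTravel) onTap(travel >= 0 ? 1 : -1);
            else onRotate(travel);
            travel = 0; timer = null;
          }, windowMs);
        }
      };
    }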

  • it's an interesting thought, but...
    Tapping on the side of the case does not do anything really, as GTZ said.
    But tapping on the knobs themselves sort-of works, although it would be very difficult to do this with any consistency. It seems that a relatively gentle tap on the side of each knob does move it by (most of the time) a single delta point. And as madronalabs suggested, a tap on one side vs. the other provides either an increase or a decrease (as with rotation). So theoretically it could be used for switching/selecting "up" or "down"... in reality? Not sure.

  • "Side" is too ambiguous for this vocabulary. Going with a silo metaphor, is your finger striking the "roof" or the "walls"?

    Anyway... You also have a very fast "tap left then right" gesture, a very fast "tap right then left", "double-tap left" and "double-tap right". So this may be somewhat more versatile.

    (you were looking for on-device button substitutes, were you not?)

    roarbearman's "wedjat" encourages this sort of thing already:
    http://docs.monome.org/doku.php?id=wedjat
    though I guess I'm thinking more in terms of toggling modes than playing rhythms.

  • Hi GTZ,
    I tried tapping all around, and what I mentioned above had to do with tapping on the "walls" of the silo. That was the only way I got somewhat reproducible outcomes. It might be worth a closer look to see if the double-tap can be articulated more consistently and not be confused with short rotations.

    But also, even though I specifically addressed the "missing click" in my initial post, my larger observation was that despite its absence, I think there is plenty in the arc as it is now that one can work with. And that perhaps that limitation simply requires a more imaginative approach to application/interaction design.

    I do believe in the idea that obstructions, in general, provoke/inspire creativity. So, this is all good.

  • > (the wheels don't spin freely; they're meant to stay still until you touch them)

    OK, this prevents my idea from working. If the bearings were more free then the rotational inertia would cause a predictable motion.

    Sadly I have no arc myself but hope to remedy that when I have time.

  • It feels like we're arguing, and keep circling back to the same points. But we don't disagree with each other.

    We have five or six variations on the same idea to overcome the lack of a "click", as you say. Mark that one off the list. But understand that this covers one-shots and toggles, but not momentary controls. It's part of a broad category of interactions that require creativity to work through.

    So, yes. A list. We should have one. Not sure on the format, but I'll just brainstorm a bit and we'll see where that gets us.


    Obvious interactions:

    * rotate knob to interpolate between values
    * rotate knob to toggle between list items (on newer arcs, one item must always be selected)
    * nudge the playback position of a loop w/ some degree of sensitivity (for sync)
    * scratching audio

    And then of course, the widely ignored idea of multi-knob interactions. That's huge.

    Here's your layout for a hypothetical "mlarc"...

    - Your first encoder selects one of four decks. Or eight -- why not? When your pointer rolls into a deck's zone, the other three encoders switch pages, applying LEDs and controls to what's needed for that deck. This same encoder can also start or stop that deck with a double-click.

    - Your second knob could then be used for beat juggling (and pattern recording w/ the click gestures). Similar idea -- if you move the dial, playback will jump to whichever block is selected at the next quantized unit.

    - Your third encoder selects from a bank of fx parameters, setting the fourth encoder to a page for controlling that parameter. Or two parameters (represented on the left and right halves of the ring) that you toggle between with a click. Selected parameter is brighter than the other one. And each of those could have their own pattern recorder, controlled by "click" gestures on the third encoder.

    No buttons required.


    Step sequencers are very similar.

    - encoder 0 -- rotate to select your step, "click" to toggle into a different mode and rotate to select one of several patterns for editing. "click" to return to step selection. "double-click" to make the pattern you've been editing the actively playing one.

    - encoder 1 assigns a value pair (let's say they're pitch and velocity) to the currently selected step in the currently selected pattern on the currently selected track.

    - encoder 2 -- rotate to select which track (or MIDI channel) you're controlling. this will of course change the other three encoders to control and display values for this track.

    - encoder 3 -- heck if I know. some sort of global fx control, maybe?


    There is also something to be said for never designing anything into your last knob, and reserving that strictly as an app selector. (see "seven-up" for inspiration)



    NOTE: "Must Have" controls go on 0 and 1. "Nice to Have" controls go on 2 and 3, and are doubled w/ on-screen equivalents for the arc2 users.

    You might also think about building an onscreen control (preferably mappable to something in MIDI) that swaps those two sets to better cover the arc2 folks, but really, that's a feature we could add to "Pages" or extend "Arc Reactor" into. (It doesn't have to be included in each four-knob app directly, but it is an inevitable development that your apps should be ready for. Just try think in terms of primary and secondary pairs)


    Those interactions aren't terribly abstract, and my examples aren't anything you couldn't do with another controller. So I'm not sure if that's closer or further from what you're looking for in this thread. I'm not clear on what you're after.

  • Eh, that's not organized enough.

    We probably need a google doc.

    Or...

    Dammit.

    Just registered a domain name.

    New site for structured device brainstorming and resources: coming soon.

    I'm not happy with where this night is going.

  • GTZ,
    Just want to say, that there is absolutely no "argument" here from my perspective. Your insight and ideas are great! I find this conversation very inspiring and insightful. I definitely appreciate all perspectives offered by all who posted here. Hoping that others will chime in still.

    Sorry if I was not too clear about "what I was after"... what I was after was this exact kind of open conversation about ideas related to designing software for arcs. Not necessarily any particular app, but a sort of collective brainstorming of various possibilities and ideas, from which a sense of arc "functional vocabulary" might emerge.

    I feel that so far exactly that is happening.

    As per your last post: I think that this can happen in a very organized/rigorous fashion. So some kind of listing can be done. In fact I started doing that on my studio computer as well. Will post that when I am there. I started on a list, and stopped when I realized how much depends on what happens with the LEDs. That the same exact physical interaction can yield completely different outcomes (both in user perception and programmatic implementation) based on what LED functionality is connected with it. So, my listing started branching out very quickly into more complex and conditioned behaviors.

    Having said that, I think it is always useful to make lists as a way of organizing one's thinking on any project... :-)

  • Ok, here is my first attempt to "list" identifiable/usable arc functionality:

    - - -

    RUDIMENTARY
    --which wheel is interacted with
    --direction of rotation
    --speed of rotation
    --duration of rotation in a specific direction

    NUMERIC
    --cumulative delta count value (distance from the delta "0")
    --positive or negative territory of the cumulative delta count (at any given point each wheel is either in the "+" or "-" territory)

    MULTI-WHEEL
    --how many wheels are interacted with
    --when multiple wheels are rotated, what is the distance (wheels directly adjacent on the arc or not) between them
    --when two wheels are rotated, are they rotated in the same direction:
    A and B left,
    A and B right,
    A left and B right,
    A right and B left,
    same as above, but for other wheel combinations:
    AC, AD, BC, BD, CD

    LED-DEPENDENT
    --rotation as a way of selecting a specific, fixed value on the spectrum (a "switch-like" functionality where LED feedback indicates the position, value selected)

    - - -

    As you can see from above, I tried to group the ideas in some way. This may not be the best or most logical way of categorizing them, but it's a start. I am also trying to list these functions without their specific applications (as in GTZ's examples of "nudging the playback" or "scratching audio"). I think it would be useful to try and analyze the device in a sort-of objective or abstract way first. Of course a lot about any given thing is learned from actually using it. Which is why a collective discussion of this is especially valuable.
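
    To make some of this concrete, here is a very rough JS sketch (not a patch; names and timings are placeholders) of how a few of the RUDIMENTARY, NUMERIC and MULTI-WHEEL items could be tracked from the raw deltas:

    // Sketch: per-wheel state covering several of the items above,
    // plus a trivial two-wheel comparison.
    var ACTIVE_MS = 300;                      // "recently touched" window
    var wheels = [0, 1, 2, 3].map(function () {
      return { total: 0, direction: 0, speed: 0, directionSince: 0, lastMove: 0 };
    });

    function onDelta(n, delta) {              // n = which wheel
      var w = wheels[n], now = Date.now(), dir = delta > 0 ? 1 : -1;
      if (dir !== w.direction) w.directionSince = now;   // duration in one direction
      w.direction = dir;
      w.speed = Math.abs(delta);
      w.total += delta;                       // cumulative count; its sign = +/- territory
      w.lastMove = now;
    }

    function sameDirection(a, b) {            // e.g. A and B both left, or both right
      var now = Date.now();
      var bothActive = now - wheels[a].lastMove < ACTIVE_MS &&
                       now - wheels[b].lastMove < ACTIVE_MS;
      return bothActive && wheels[a].direction === wheels[b].direction;
    }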

    So, does anyone have any other observations/thoughts on what else arcs do that is not listed here?

    p.

  • - Which knobs are NOT being interacted with, and how long has it been?

    - Total amount of motion over a span of time. (shaking back and forth is not a large movement in one direction, but it is a lot of movement)

    (your list was missing the key components necessary to let us play whack-a-mole)

  • whack-a-mole!

    is that the name of your next app?

    ;-)

  • Probably not, but you never know.

    (I did build a step sequencer once that you program by playing Bejeweled on the buttons of a Midi Fighter 3D controller...)

  • "(I did build a step sequencer once that you program by playing Bejeweled on the buttons of a Midi Fighter 3D controller...) "

    yes... that is great. okay now i need to read how we got here.

  • I use an Avid D-Command, which has touch-sensitive encoders, so in 'Touch mode' the encoder will return to its previous value when the hand moves off the encoder. Would it be possible to mod the 2012 ARC to sense when the hand is just touching, and not moving, the encoder?
    Then a quick double tap could stand in for the key state change that the original ARC had and the new one is missing.