The improvements I made to the algorithms during the residency made them more computationally efficient, which would let me process multiple signals on the same microcontroller. This makes me think of "clusters" of users, and of how this design could visually arrange that togetherness into a larger organic structure.
Anything beyond 12 signals sounds a bit musically daunting right now, but the signal treatment I developed during this residency could be "tuned" to specific ranges for interesting sonic effects.
Electroluminescent wire could help people pick out individual signals by ear; this kind of visual feedback would become more important as the sounds grew more complex. (How do you find yourself, as a single simple sonic signal, in a dense network?)