Imagine you want to add one sentence to a book. But the book is written in Chinese, which you barely understand. And this book is hidden somewhere in a library filled with Chinese books. And the library uses an organizational scheme you’ve never seen before, which is in Latin. Unfortunately you don’t read Latin, and in fact don’t even know if it goes left-to-right, right-to-left, top-to-bottom, or what. That is essentially the situation I’ve been in for the past few days.
The sentence itself is relatively simple: in my case, adding a single line of code to delay a computer program. No problem, I can Google the translation and paste it in. But how do I find the right page in the right book? Of course, no one has ever added this particular sentence to this particular book before, so there is no simple tutorial.
Step one was to stare at the computer code for about a day, hoping that I would somehow recognize what I was looking for. That didn’t work. But, to extend this metaphor even further, I did learn a bit about the organizational scheme of the library (even though I didn’t have a clue what anything was).
Step two was to read up – and YouTube up – on the basics of how other people added somewhat different sentences to somewhat different books. The difficulty here was finding something similar enough to what I wanted to do. It took some searching, but I finally found something. The organizational scheme I’m using is called Unity, and the language the books are written in is called C# (pronounced “C sharp”).
Step three was learning how other people solved similar problems. I was trying to add a line of code to one file (i.e., one book) in C#, which would communicate to Unity that I wanted to add my delay.
I suppose I should say that I was experimenting with “augmented reality”, which I plan to use for my course. Basically, you put on a headset in which you can see only two video feeds (one going into each eye). In my case, the video comes from two cameras mounted on the outside of the headset, roughly in the same position as your eyes. See here for a little schematic. If all goes well, you see exactly what you would see normally – the regular 3D vision you have whenever your eyes are open – just in video form. Not really a big deal. But, unlike normal vision, you can alter the feed to change your perception.
In this case, I wanted to delay the visual information by a few hundred milliseconds so that, if you were to look at your hand and make a fist, you would see your fist closing just a little after you did it. Or if you clapped, you would feel it and hear the sound just before you saw your hands come together. This makes it very difficult to do coordinated things because your performance depends on constant and correct visual feedback. Messing that up can reduce your sense of self-agency, something I have explored in the auditory domain previously. This would be a great thing that an undergraduate course could use to test things like sports performance, rehabilitation from physical injuries, consciousness and cognition, empathy, and many related phenomena.
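As it happens, I ended up solving this without writing much code of my own, but the core idea behind delaying a live feed is simple enough to sketch. The snippet below is not my Unity/C# setup; it is just a minimal, hypothetical illustration in Python of the buffering trick: hold each incoming video frame along with its capture timestamp, and only display a frame once it is at least the desired number of milliseconds old.

```python
from collections import deque


class FrameDelay:
    """Delay a stream of frames by a fixed number of milliseconds.

    Incoming frames are buffered with their capture timestamps. On each
    new frame, we emit the newest buffered frame that is at least
    `delay_ms` old, or None while the buffer is still filling up.
    """

    def __init__(self, delay_ms):
        self.delay_ms = delay_ms
        self.buffer = deque()  # (timestamp_ms, frame) pairs, oldest first

    def push(self, timestamp_ms, frame):
        """Add a new frame; return the frame to display right now."""
        self.buffer.append((timestamp_ms, frame))
        # Discard frames that are older than needed, but always keep
        # the one we are about to show at the front of the buffer.
        cutoff = timestamp_ms - self.delay_ms
        while len(self.buffer) >= 2 and self.buffer[1][0] <= cutoff:
            self.buffer.popleft()
        oldest_ts, delayed_frame = self.buffer[0]
        if oldest_ts <= cutoff:
            return delayed_frame
        return None  # not enough history buffered yet


# Example: frames arriving every 100 ms, displayed with a 300 ms delay.
fd = FrameDelay(delay_ms=300)
for t in range(0, 500, 100):
    shown = fd.push(t, f"frame@{t}ms")
    print(t, "->", shown)
```

At 30–60 frames per second and a delay of a few hundred milliseconds, the buffer only ever holds a handful of frames, so the memory cost is trivial; the real work in a headset pipeline is keeping the capture-to-display path itself fast and consistent.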
Anyway, I eventually figured out how to add my sentence. Unfortunately, it turned out that adding it broke the rest of the book. After extensive trial and error over a few days, I finally found a work-around and was able to achieve what I wanted! In the end, it was easier to write my own book than to add a sentence to someone else’s. Actually, I kind of sidestepped the programming issue completely – I’m reasonably good at programming but, like I said, not in this language – and found a solution using a combination of programs that already exist. Still, it felt great to have done something that no one has done before and something that will be helpful for my students.
The class will now be able to use delayed visual feedback, as well as a variety of other visual distortion effects that could simulate medical disorders, drug states, and even the effects of aging.