To introduce a wider audience of researchers to this field of inquiry, and to situate our work within existing research, we reviewed previous work that used motion capture to study the kinematics of sign and gesture production.
Technological platforms such as virtual reality and augmented reality offer many possibilities for gesture and sign language research, including but not limited to scalability (i.e., access to large and diverse participant pools).
At the DComm conference Deictic Communication – Theory and Application, we presented an open-source prototype that serves as a foundation for streaming content from a motion capture system to an augmented reality headset.
Mehmet Aydın Baytaş, Damla Çay, Tyler Thrash, Asım Evren Yantaç, & Morten Fjeld (2019). Towards Scalability in Empirical Studies on Nonverbal Communication through Augmented Reality and Motion Digitization. Poster to be presented at the DComm conference Deictic Communication – Theory and Application.
Mehmet Aydın Baytaş, Damla Çay, Asım Evren Yantaç, & Morten Fjeld (2017). Motion Capture in Gesture and Sign Language Research. Poster presented at the conference Language as a Form of Action.