From Strings to Sensors: Movement Representation in AI Theatre

Abstract

The integration of AI technology into theatre practice has opened a wide frontier of possibilities for both performers and audiences. In terms of movement, this involves the use of avatars, which inhabit a screen environment (including three-dimensional in-world scenography) while demanding simultaneous attention to the three-dimensional theatrical space and to co-present performers, in a moment of real-time creation and interconnectedness. This complex confluence raises questions about the ‘avatarisation’ of bodies on the theatrical stage and the consequent emergence of new performative methodologies. Within AI-enabled performances, motion capture technology, commonly known as ‘mocap’, records skeletal data from physical actors, referred to as ‘mocaptors’, who wear a geo-spatial motion-capture system. This data is then translated into digital form that can subsequently be used to animate digital characters or avatars.

How to Cite

Maiti, A. (2024) “From Strings to Sensors: Movement Representation in AI Theatre”, Moveable Type 15(1), 89-98. doi: https://doi.org/10.14324/111.444.1755-4527.1777

Authors

Abhik Maiti

Licence

Creative Commons Attribution 4.0

Peer Review

This article has been peer reviewed.
