This blog post is written by AI CDT student Lucy Farnik
Isabel Potter gave a talk at this year’s Spring Research Conference about their PhD research on applying AI to scenography. They chose this research area partly because of their extensive experience in the creative arts, having been involved in theatre since the age of 14. They have also founded their own company in this space and take on freelance theatre projects alongside their PhD.
Isabel’s talk was built on one central theme — artists are not technicians. Generative AI is getting closer to being able to automate parts of scenography, from creating background music to staging. However, many of these tools are made for people with a STEM background and use terminology to match. For example, Unreal Engine, one of the tools used for immersive technology in the arts, is full of computer vision and mathematics terms. Contrast this with a tool like Adobe Photoshop, whose “paintbrush tool” borrows terminology artists use on a daily basis.
Isabel is trying to reduce the barrier to entry for artists. They are focusing specifically on lighting design, as it is the most under-explored area of immersive technology for scenography and also the area they have the most experience working in. At the moment, prompting large language models to create diagrams such as lighting plots produces results which are not yet usable, but the step of translating lighting ideas into programs which can be loaded onto a lighting desk is already somewhat achievable with existing foundation models. They are exploring this as a starting point while optimising for ease of use by a non-technical audience.
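To make the translation step concrete, here is a purely illustrative sketch of what turning a plain-English lighting idea into desk-loadable cue data might look like. In Isabel's work a foundation model would perform this translation; the keyword mapping below is only a toy stand-in, and all names, channel labels, and values are hypothetical rather than taken from the talk.

```python
# Hypothetical illustration: the palette, channel names, and mappings below
# are invented for this sketch, not taken from Isabel's research. A foundation
# model would replace this hand-written mapping in practice.

def idea_to_cue(idea: str) -> dict:
    """Map a plain-English lighting idea to DMX-style channel levels (0-255)."""
    palette = {
        "warm": {"red": 255, "green": 140, "blue": 40},
        "cold": {"red": 60, "green": 120, "blue": 255},
        "blackout": {"red": 0, "green": 0, "blue": 0},
    }
    levels = {"red": 0, "green": 0, "blue": 0}
    for keyword, channels in palette.items():
        if keyword in idea.lower():
            levels = dict(channels)
    # A modifier like "dim" scales every channel down.
    if "dim" in idea.lower():
        levels = {channel: value // 2 for channel, value in levels.items()}
    return levels

print(idea_to_cue("A warm, dim glow over the stage"))
# {'red': 127, 'green': 70, 'blue': 20}
```

The point of the sketch is that the output is structured data a lighting desk could consume, while the input stays in the everyday language of a lighting designer — the gap a language model is well placed to bridge.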