
Ag Fás Ar Ais Arís - Bryan Dunphy

In Response to Henry Jenkins

Published on Oct 06, 2020


Ag Fás Ar Ais Arís: Generative Audiovisual Composition

Bryan Dunphy, Goldsmiths, University of London

Ag Fás Ar Ais Arís (generative audiovisual composition)

Reading Henry Jenkins’ call and response question, I started thinking about my own work and the ‘kinds of stuff’ I collect and use as material. In my case, I work mainly with code. Specifically, I have been creating work using GLSL shaders and Csound orchestra and score files. Throughout the development of a piece I will study fragment shaders on the fantastic ShaderToy platform and consult the comprehensive Canonical Csound Reference Manual. Another great resource is the very active Csound mailing list and its archive. In the generative audiovisual field, these forums and online repositories are primary sources for learning the techniques used in the practice. There are many platforms and frameworks dedicated to creating generative audio and visuals, each with its own community. A more recent addition to the field is the exciting MIMIC project, which provides a place for learning about and implementing interactive machine learning techniques in computational art. I have used some of these techniques in Ag Fás Ar Ais Arís, specifically with the help of the rapidLib C++ library.

These platforms and forums, and the code they contain, are central to how computational artists experience this type of media. The libraries, shaders and Csound files are the audiovisual objects that create meaning for me. The piece I have submitted for this issue was created using a fragment shader for the visuals, a combined orchestra and score Csound file for the audio, the rapidLib library for some of the mapping, and my own C++ framework, the ImmersAV toolkit, to tie it all together. The piece itself is concerned with virtual audiovisual objects. It focuses on a rotating fractal with mapped sound that is consumed by its environment before emerging again. The piece explores how this type of object can be situated within the virtual environment and how object and environment can interact. It is also concerned with how foreground and background audio and visual elements interact. Finally, it attempts to further explore the ideas of audiovisual equilibrium, isolated incoherence and cross-media complexity.
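Tying audio to visuals in this kind of pipeline typically means computing an audio feature each frame and converting it into the uniform values a fragment shader reads. The sketch below shows one plausible shape for that per-frame mapping; none of the names (`UniformSet`, `mapAudioToVisual`, the `u_*` uniforms) are ImmersAV's or Csound's actual API, and the specific mappings are invented for illustration.

```cpp
#include <algorithm>
#include <map>
#include <string>

// Hypothetical per-frame mapping from an audio feature (e.g. an RMS
// amplitude that a Csound instance might report) to values a fragment
// shader would read as uniforms. The names here are illustrative only,
// not ImmersAV's real interface.
struct UniformSet {
    std::map<std::string, float> values;  // uniform name -> value
};

UniformSet mapAudioToVisual(float rmsAmplitude, float timeSeconds) {
    UniformSet u;
    u.values["u_time"] = timeSeconds;
    // Louder audio -> faster fractal rotation and a brighter image,
    // clamped so brightness never exceeds 1.0.
    u.values["u_rotationSpeed"] = 0.1f + rmsAmplitude * 2.0f;
    u.values["u_brightness"] = std::min(1.0f, rmsAmplitude * 4.0f);
    return u;
}
```

In a real render loop these values would be uploaded with something like `glUniform1f` before drawing the full-screen quad that runs the fractal shader; the point here is only that a single audio feature can drive several visual parameters at once.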
