Fragment – A Collaborative Cross-Platform Audiovisual Live Coding Environment

Fragment is a collaborative cross-platform audiovisual live coding environment. It uses a pixel-based real-time image-synth approach to sound synthesis.
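
Broadly, an image-synth of this kind treats a column of pixels as a bank of sine-wave partials: each row maps to a frequency, and the pixel's brightness sets that partial's amplitude. The sketch below illustrates that general idea with the standard Web Audio API; the frequency mapping, the playPixelColumn helper, and all parameter choices are illustrative assumptions, not Fragment's actual code.

```typescript
// Illustrative pixel-to-sound additive synthesis (not Fragment's own code).
// Each row of a pixel column drives one oscillator: row index -> frequency,
// pixel brightness (0..255) -> partial amplitude.

function playPixelColumn(ctx: AudioContext, pixelColumn: Uint8Array): void {
  const rows = pixelColumn.length;
  const master = ctx.createGain();
  master.gain.value = 1 / rows;                 // keep the summed output in range
  master.connect(ctx.destination);

  for (let row = 0; row < rows; row++) {
    const brightness = pixelColumn[row] / 255;  // normalize to 0..1
    if (brightness === 0) continue;             // dark pixel: silent partial

    // Assumed mapping: low rows -> low frequencies, 55 Hz up ~7 octaves.
    const freq = 55 * Math.pow(2, (row / rows) * 7);

    const osc = ctx.createOscillator();         // sine wave by default
    osc.frequency.value = freq;

    const gain = ctx.createGain();
    gain.gain.value = brightness;

    osc.connect(gain).connect(master);
    osc.start();
    osc.stop(ctx.currentTime + 1);              // sound this column for one second
  }
}
```

In Fragment itself, the pixel data comes from the real-time output of the live-coded shaders, which is what makes it possible to 'draw' sound.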

Fragment can perform fast, high-quality additive and granular synthesis simultaneously, with re-synthesis support. According to the developer, ‘it has many features making it a bliss to produce any kind of sounds or visuals and is aimed at artists seeking a creative environment with few limitations to experiment with’.

The video above is a one-hour ambient soundscape made of granular and additive synthesis patches, with live-composited visuals produced from two videos. It demonstrates granular synthesis features, video looping/compositing, and the mixing of sound synthesis methods.

Features:

  • Complete additive, spectral, granular synthesizer powered by WebAudio oscillators, a wavetable, or the Fragment Audio Server
  • Complete audio/visual live coding environment with JIT compilation of shader code (a minimal compile sketch follows this list)
  • Real-time, collaborative app
  • Distributed sound synthesis, multi-machine/multi-core support (Audio Server with fas_relay)
  • Stereophonic or monaural
  • Polyphonic
  • Multitimbral
  • 32-bit float image data (WebGL 2 only)
  • Multi-output channels per slice
  • Shader inputs (webcam, images, videos with audio analysis, audio file analysis, drawing over textures…)
  • MIDI in
  • OSC in/out
  • Spectral recording with export and re-import as a texture
  • Audio synthesis can be done on a dedicated computer on the network
  • Per-session discussion system
  • Session-based, no authentication, ability to run locally

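On the shader side, 'JIT compilation of shader code' means the edited GLSL source is recompiled on the fly while the session keeps running, which browsers expose through standard WebGL2 calls. Here is a minimal sketch of that compile-and-check step; the function name and error handling are illustrative, not taken from Fragment's source:

```typescript
// Compile freshly edited fragment-shader source at runtime (standard WebGL2).
// On failure, log the compiler output and return null, so a live coding UI
// can keep the last working program running instead of going dark.

function compileFragmentShader(
  gl: WebGL2RenderingContext,
  source: string
): WebGLShader | null {
  const shader = gl.createShader(gl.FRAGMENT_SHADER);
  if (!shader) return null;

  gl.shaderSource(shader, source);
  gl.compileShader(shader);

  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    console.error(gl.getShaderInfoLog(shader)); // surface errors to the coder
    gl.deleteShader(shader);
    return null;                                // previous shader stays active
  }
  return shader;
}
```
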
See the Fragment site for more info. 

via Sonic State
