“Artificial Contrasts” explores the tension between documented reality and digital construction. Existing visual and audio material is transformed, mirrored, or altered. Through the use of artificial intelligence and digital editing, new layers of reality emerge. The project investigates how digital tools interpret, aestheticize, and reframe reality – and invites viewers to reconsider perception and authenticity.
Artificial Contrasts
Colin Ostermann
HD Video 5:10 min
Artificial Contrasts – Process Documentation
The project began with a fascination for the contrast between real-world crises and digitally idealized representations. Inspired by postdigital aesthetics, where digital tools are no longer spectacular but embedded in everyday perception, I asked myself: What happens when machines imagine a better version of our world? That question became the conceptual foundation of Artificial Contrasts.
I started collecting found footage material—mostly documentary-style videos showing wildfires, environmental destruction, and moments of ecological trauma. These images were emotionally charged and visually raw. My goal was not to retouch or correct them, but to create an artificial counterpart: synthetic images that represent the exact same scene in a fictional state of peace, balance, and beauty.
After an intensive phase of researching current generative AI workflows, I developed a production pipeline combining ComfyUI, Flux, Wan2.1 VACE, Topaz Video AI, and TouchDesigner. The idea was to first generate a static image that reimagines the first frame of the video, and then use this image to restyle the entire sequence with high visual fidelity and minimal temporal artifacts.
Step 1: Research and Selection of Found Footage
I collected and reviewed footage from platforms like YouTube, Vimeo, Archive.org, and royalty-free libraries such as Pexels and Pixabay. I focused on high-resolution sources (minimum 1080p, ideally 4K), slow or stable camera movements, and striking, emotionally charged visuals. The raw videos were downloaded and sorted for further processing.
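The resolution screening described in this step can be automated. A minimal sketch using ffprobe (assuming the ffmpeg tools are installed; file paths and the helper names are illustrative, not part of the original workflow):

```python
import subprocess

def clip_resolution(path: str) -> tuple[int, int]:
    """Return (width, height) of the first video stream via ffprobe."""
    cmd = [
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "stream=width,height",
        "-of", "csv=p=0",
        path,
    ]
    out = subprocess.check_output(cmd, text=True).strip()
    width, height = (int(x) for x in out.split(","))
    return width, height

def meets_minimum(size: tuple[int, int], min_height: int = 1080) -> bool:
    """Keep only clips that satisfy the 1080p minimum from Step 1."""
    return size[1] >= min_height
```

Running `clip_resolution` over a download folder and filtering with `meets_minimum` reproduces the manual sorting pass.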
Step 2: First Frame Restyling (Flux.1 + ControlNet + Realistic Landscape LoRA)
From each selected video, I extracted the very first frame. Using ComfyUI, I passed this image into a depth-aware style transfer workflow (Flux.1 + ControlNet Depth + Realistic Landscape LoRA), with a descriptive prompt such as:
Beautiful Landscape, Trees with bright green Leaves, Bright Meadow, Green Hairy Long Grass, Sunny Day, Beautiful Weather, Cinematic Shot, Depth of Field, Lens, Film Grain
After about 20 iterations per input, I manually selected the most convincing result.
Input size: 3840×2160 px → Output: 1365×768 px → Upscaled with Flux to 4088×2312 px.
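Both the frame extraction and the generation-size math above can be scripted. A sketch using ffmpeg via subprocess (paths are placeholders; the restyling itself happens in ComfyUI):

```python
import subprocess

def first_frame_cmd(video_path: str, frame_path: str) -> list[str]:
    """ffmpeg command that grabs frame 0 as a lossless PNG for restyling."""
    return [
        "ffmpeg", "-y",
        "-i", video_path,
        "-frames:v", "1",   # stop after the first decoded frame
        frame_path,
    ]

def generation_size(source: tuple[int, int], target_height: int = 768) -> tuple[int, int]:
    """Downscale to the model's working height, preserving aspect ratio."""
    width, height = source
    return int(width * target_height / height), target_height

# subprocess.run(first_frame_cmd("source.mp4", "first_frame.png"), check=True)
```

With a 3840×2160 source and a 768 px working height, `generation_size` yields the 1365×768 px intermediate noted above.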
Step 3: Full Video Restyle (Wan2.1 VACE)
The original video and its restyled first frame were used as inputs in the Wan2.1 VACE pipeline. The video was rescaled to 720p for memory efficiency. A depth map was generated and combined with the first frame to guide the restyling process. Final output: 1280×720 px at 24 fps.
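The 720p preprocessing for the VACE pass can be sketched as an ffmpeg call (command construction only; the depth-map generation and VACE inference themselves run inside ComfyUI, and the function name is an assumption):

```python
def rescale_cmd(src: str, dst: str, width: int = 1280,
                height: int = 720, fps: int = 24) -> list[str]:
    """Downscale the source clip to 720p / 24 fps before VACE restyling."""
    return [
        "ffmpeg", "-y", "-i", src,
        "-vf", f"scale={width}:{height}",   # 1280x720 for memory efficiency
        "-r", str(fps),                     # match the 24 fps output
        dst,
    ]
```

The resulting file and the restyled first frame are then the two inputs to the Wan2.1 VACE pipeline.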
Step 4: Video Upscaling
Using Topaz Video AI (v7), I upscaled the 720p render to 4K resolution (3840×2160 px), preserving details and eliminating compression artifacts.
Step 5: Format Conversion for TouchDesigner
To optimize playback in TouchDesigner, I converted the upscaled videos from H.264 to NotchLC (2560×1440 px), a codec designed for smooth real-time scrubbing and compositing.
Step 6: Integration in TouchDesigner + Audio Analysis
All video layers were imported into TouchDesigner. Audio input was analyzed to extract rhythm, frequency ranges, and transients (kick, snare, highs/lows). These parameters triggered real-time effects: switches between the original and the AI-restyled version, glitch overlays, and opacity blending.
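TouchDesigner's audio analysis components handle this natively; the underlying idea, a per-band energy jump that flags transients like kicks or snares, can be sketched in plain NumPy (frame size, band edges, and the threshold ratio are illustrative assumptions, not the project's actual settings):

```python
import numpy as np

def band_energy(signal: np.ndarray, sr: int, lo: float, hi: float,
                frame: int = 1024) -> np.ndarray:
    """Per-frame spectral energy of `signal` within the [lo, hi] Hz band."""
    n_frames = len(signal) // frame
    energies = []
    for i in range(n_frames):
        chunk = signal[i * frame:(i + 1) * frame]
        spectrum = np.abs(np.fft.rfft(chunk))
        freqs = np.fft.rfftfreq(frame, 1.0 / sr)
        mask = (freqs >= lo) & (freqs <= hi)
        energies.append(float(np.sum(spectrum[mask] ** 2)))
    return np.array(energies)

def transients(energies: np.ndarray, ratio: float = 3.0) -> np.ndarray:
    """Indices of frames whose energy jumps past `ratio` x the previous frame."""
    prev = np.maximum(energies[:-1], 1e-9)  # avoid division by zero in silence
    return np.flatnonzero(energies[1:] / prev > ratio) + 1
```

A detected index in the low band (e.g. 20–200 Hz for kicks) would then trigger a layer switch or glitch overlay, analogous to the CHOP-driven logic in the patch.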
Step 7: Final Rendering
The visual composition was rendered in TouchDesigner at 1440p, 60 fps. This allowed precise control over timing and transitions.
Step 8: Export & Mastering
Using Adobe Premiere Pro, the final render was converted from H.264 to ProRes, downscaled from 1440p to 1080p, and conformed to 24 fps for exhibition.
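As a rough command-line equivalent of this Premiere export (not the workflow actually used), the same conversion can be sketched with ffmpeg's `prores_ks` encoder, where profile 3 corresponds to ProRes 422 HQ:

```python
def master_cmd(src: str, dst: str) -> list[str]:
    """H.264 -> ProRes 422 HQ, 1440p -> 1080p, conformed to 24 fps."""
    return [
        "ffmpeg", "-y", "-i", src,
        "-vf", "scale=1920:1080",        # downscale 1440p -> 1080p
        "-r", "24",                      # conform to exhibition frame rate
        "-c:v", "prores_ks",
        "-profile:v", "3",               # ProRes 422 HQ
        dst,
    ]
```

The destination should use a `.mov` container, as ProRes is not typically muxed into MP4.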