

When Generative AI Works – and When AI Becomes Degenerative

  • Stian Andreassen
  • 12 Dec 2025
  • 3 min read


Maestro Media AS - Generative AI testing


A Technical Experience from Architectural Visualization and Generative AI


Generative AI has, in a short time, become a new tool in many creative workflows, including visualization and real estate. The ability to add atmosphere, lighting, and seasonal expression to finished visualizations is clearly attractive.


At the same time, many users run into the same issue:


Images gradually become softer, lose detail precision, and take on an appearance that feels less photographic – even when the changes are small and carefully controlled.


Based on our own testing at Maestro Media, we have made several concrete observations that explain why this happens, and how it can largely be mitigated.




Double Degeneration: When Quality Loss Accelerates




In our work, we used generative image-to-image techniques to add winter atmosphere and Christmas lighting to a finished architectural visualization. The workflow was intentionally conservative:


  • low influence

  • no geometry changes

  • one step for winter, one step for lighting



Despite this, we experienced a noticeable loss of image sharpness with each step.


The cause turned out not to be the change of subject matter itself, but the combination of regeneration and automatic resampling.


Most generative image models:


  • interpret the image in an internal latent space

  • operate at resolutions divisible by fixed blocks

  • and return the image at this “native” resolution, not the original
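This "native resolution" behaviour can be made concrete. The sketch below snaps a resolution onto a block-aligned grid; the block size of 16 and the target long edge of 2528 are assumptions for illustration (models vary, with dimension requirements of 8, 16, or 64 being common):

```python
import math
from fractions import Fraction

def native_resolution(width, height, target_long=2528, block=16):
    """Snap a resolution to a model's working size.

    target_long and block are illustrative assumptions: many latent
    image models only accept dimensions divisible by a fixed block and
    return output at that size regardless of the input resolution.
    """
    scale = Fraction(target_long, max(width, height))  # exact ratio, no float drift
    snap = lambda v: math.ceil(v * scale / block) * block
    return snap(width), snap(height)

# The 1920 x 1280 original from our tests maps onto a 2528 x 1696 grid:
print(native_resolution(1920, 1280))  # (2528, 1696)
```

Note that both output dimensions are multiples of 16, which is exactly why the model cannot hand back the original 1920 × 1280 frame unchanged.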



When an image is first regenerated and then reused as input, double degeneration occurs:


  • The image is regenerated (loss of micro-contrast)

  • The image is resampled (loss of high-frequency detail)

  • The process is repeated



The result is rapid and disproportionate quality loss, even with otherwise conservative use.
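How quickly this compounds can be illustrated with a toy model. Each "pass" below applies a mild 3-tap blur to a 1D signal, standing in for one regenerate-plus-resample cycle, and micro-contrast is measured as the mean absolute difference between neighbouring samples. This is purely illustrative; real models lose detail through latent encoding and resampling, not a fixed blur:

```python
def blur_pass(signal):
    """One mild 3-tap smoothing pass, a stand-in for regenerate + resample."""
    n = len(signal)
    return [
        (signal[max(i - 1, 0)] + 2 * signal[i] + signal[min(i + 1, n - 1)]) / 4
        for i in range(n)
    ]

def micro_contrast(signal):
    """Mean absolute difference between neighbouring samples."""
    return sum(abs(a - b) for a, b in zip(signal, signal[1:])) / (len(signal) - 1)

# High-frequency test pattern: alternating 0/1 "detail".
signal = [i % 2 for i in range(64)]
contrasts = [micro_contrast(signal)]
for _ in range(3):
    signal = blur_pass(signal)
    contrasts.append(micro_contrast(signal))

print(contrasts)  # strictly decreasing: each extra pass costs detail
```

Even this gentle filter wipes out most of the alternating detail in a single pass, which mirrors how disproportionate the loss feels in practice.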





Tools We Used



To be completely specific, the workflow consisted of three stages:


Maestro Media AS - Generative AI testing

GPT-5.2

Used to develop and refine precise prompts, with clear constraints on geometry, lighting, materials, and image quality.


NanoBanana Pro

Used for the actual image-to-image generation: winter atmosphere, lighting, and seasonal expression.


Adobe Firefly

Used afterward for small, local adjustments and fine-tuning – not to regenerate the entire image.


This combination made it possible to clearly distinguish between generative transformation and controlled post-processing.




The Breakthrough: Pre-Resampling and One Combined Prompt



Maestro Media AS - Generative AI testing

The technical breakthrough came when we changed one key variable:


We pre-resampled the original image to the same resolution that NanoBanana Pro would return anyway, and executed all changes in a single combined generative pass.


In our case:


  • original: 1920 × 1280

  • pre-resampled master: 2528 × 1696



Then:


  • one combined prompt developed with GPT-5.2

  • one generative run in NanoBanana Pro

  • detail adjustments afterward in Firefly



The results were clear:


  • significantly better perceived sharpness

  • far less loss of micro-contrast

  • more stable material representation

  • an output that actually holds up for commercial use



The quality loss did not disappear entirely – generative AI is still regeneration, not traditional editing – but it became controlled, predictable, and acceptable.
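The difference between the two pipelines can be summarised with a simple bookkeeping model. Assume, purely for illustration, that each regeneration retains a fraction `r` of micro-contrast and each uncontrolled resample a fraction `s`; the values of `r` and `s` are invented, and only the structure (how many lossy events each pipeline incurs) comes from the workflow described above:

```python
def pipeline_quality(regenerations, uncontrolled_resamples, r=0.85, s=0.9):
    """Toy multiplicative model of retained detail; r and s are invented."""
    return (r ** regenerations) * (s ** uncontrolled_resamples)

# Two sequential passes (winter, then lighting), each regenerated and
# resampled back by the model:
two_pass = pipeline_quality(regenerations=2, uncontrolled_resamples=2)

# Pre-resampled master plus one combined pass: the only resample is the
# controlled one we perform ourselves before generation.
one_pass = pipeline_quality(regenerations=1, uncontrolled_resamples=0)

print(round(two_pass, 3), round(one_pass, 3))
```

Whatever the real per-step losses are, halving the number of regenerations and eliminating the uncontrolled resamples always leaves the single-pass pipeline ahead.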




Practical implication



Maestro Media AS - 3D archviz

These experiences point to an important principle:


Generative AI is not lossless.

But much of the quality loss is caused by how we feed the tools, not necessarily by the prompts themselves.


By:


  • adapting source material to the model’s technical constraints

  • reducing the number of generative steps

  • consolidating changes into a single pass

  • and using post-processing tools like Firefly only for local adjustments


… AI can be used far more precisely, even in disciplines with high quality demands.
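The four points above can be sketched as a pipeline. The functions below are hypothetical stand-ins for the real tools (NanoBanana Pro for generation, Firefly for local fixes); they only record which lossy operations the pipeline performs, so the structure of the workflow is visible at a glance:

```python
log = []

def pre_resample(image, size):
    # Controlled resample, done once by us before generation.
    log.append("resample")
    return image

def generate(image, prompt):
    # The single combined generative pass (stand-in for NanoBanana Pro).
    log.append("regenerate")
    return image

def local_adjust(image, region):
    # Local post-processing only (stand-in for Firefly), not a regeneration.
    log.append("local_adjust")
    return image

image = "master_1920x1280"
image = pre_resample(image, (2528, 1696))
image = generate(image, "winter atmosphere + christmas lighting, one pass")
image = local_adjust(image, "window reflections")

# Exactly one full regeneration in the entire pipeline:
print(log.count("regenerate"))  # 1
```

The point of the sketch is the ordering: the only uncontrolled lossy event is the single generative pass, with everything before and after it under our control.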


Written by Stian Andreassen @maestromedia. SEO adapted with GPT-5.2

 
 
 