Concept artists working in games and entertainment are pushing back against the growing use of generative AI earlier in the pipeline. Clients and internal teams are increasingly bringing AI-generated images as reference material for projects. But according to artists interviewed in a recent This Week in Videogames report, these AI references are creating extra work rather than speeding up the creative process.
The core issue is how AI imagery functions in the early stages of design. Following the news that Larian Studios used AI concept art in the making of its newest Divinity title, artists have reported a similar pattern: clients now arrive with AI-generated pictures and say "make something like this." The problem is that these images become fixed targets instead of loose inspiration.
Kirby Crosby, one of the artists quoted in the report, described how AI reference images “worm their way into your head.” Breaking free from those initial AI-generated shapes and compositions requires additional effort. Instead of exploring multiple directions, artists find themselves locked into matching a specific look that may not be functional or coherent.
Another artist, Canavan, pointed to a shift in client dynamics. When a client brings an AI image, they develop an attachment to that exact visual. Iteration becomes harder because the conversation stops being about exploration. Instead, artists spend time explaining why their solutions should replace the AI sample.
The work multiplies in other ways too, because AI-generated images often blend recognizable design elements from multiple sources without clear logic. Artists must reverse-engineer what the AI pulled from, whether silhouettes, genre tropes, or lighting setups, so they can either correct it or propose something with actual design intent. This matters because concept art is fundamentally about problem-solving.
Artists create early visuals to define character designs, environments, props, and mood while communicating production constraints and storytelling goals. The work serves as a guide for 3D modeling teams, animators, and production staff. This process, until now, inherently required reference gathering.
Traditional reference gathering involves photography, historical archives, museum collections, film stills, and design studies. Artists look for surprising connections between materials, cultural motifs, and engineering constraints, and the research itself generates new ideas along the way.
AI-generated images work differently. They provide results without traceable origins. The output tends toward the most obvious visual interpretation of a prompt—the genre clichés and popular touchstones already saturated in the training data. Happy accidents and unexpected discoveries get replaced by predictable combinations.
When polish hides problems
The images can also look deceptively polished. That finish can trick stakeholders into thinking a design solution already exists. But AI outputs frequently contain problems: anatomy that doesn’t work, costumes that couldn’t function, materials that don’t make sense together, and missing worldbuilding logic.
Artists also noted a practical issue bleeding into their workflows. Online reference sources like image search and Pinterest are increasingly filled with AI-generated content. Even artists who deliberately avoid AI tools now struggle to find clean reference material.
The report lays out a tension that studios may need to address. If AI images become standard reference material in pitch decks and mood boards, artists may need clearer policies about what counts as acceptable direction. The expectation that AI speeds up early concepting appears to be colliding with the reality that it can extend revision cycles and compress creative exploration.