(Skormino) (2025)
Image description: A silhouetted figure standing in a dark void, lit dramatically. The figure remains stationary while bright blue lightning and flames streak from its sword and the ground into the dark sky above. The image uses a limited palette of deep blues, blacks, and bright whites, with a distinctly blocky, low-resolution pixel art aesthetic that gives the lightning and landscape a geometric, digitized appearance.
Full Generation Parameters:
(abstract:1.4), masterprice, pixpix, 8-bit, pixel_art, , solo, 1boy, holding, standing, male_focus, weapon, sword, holding_weapon, water, cape, armor, glowing, holding_sword, fire, glowing_weapon
Negative prompt: muted, dull, hazy, muddy colors, blurry, mutated, deformed, noise, stock image, borders, frame, watermark, text, signature, username, cropped, out of frame
Steps: 20, CFG scale: 4, Sampler: Euler a, Seed: 21986, Size: 1024x1536, Model: ILL\plantMilkModelSuite_walnut.safetensors, Model hash: 1704e50726, Lora_0 Model hash: a4b9929b1e, Lora_0 Model name: Pixel-Art Style v5 (illustrious by Skormino).safetensors, Lora_0 Strength clip: 1, Lora_0 Strength model: 1, Clip skip: 2
Dude… how does it have a perfect pixel grid? I didn’t know generative models could work that way. Or is it a pixel art model that’s just always trained on an 8-pixel grid or something?
You scale the image down and then scale it back up with no interpolation, or use something like unfake.js to snap it to the grid. The author of this model also made a custom ComfyUI node that takes the output from the VAE and decodes it directly into a pixel image.
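For anyone who wants to try the downscale/upscale trick themselves, it's only a few lines with Pillow. A minimal sketch, assuming an 8-pixel grid and hypothetical file names (the author's ComfyUI node works at the VAE stage instead, so this is just the manual version):

```python
# Minimal sketch of the "scale down, scale back up with no interpolation" trick.
# Assumes Pillow is installed; GRID and the file names are assumptions, not from the post.
from PIL import Image

GRID = 8  # guessed size of the fake pixel blocks; adjust per image

img = Image.open("gen_output.png")  # hypothetical model output
small = img.resize((img.width // GRID, img.height // GRID), Image.NEAREST)
snapped = small.resize(img.size, Image.NEAREST)  # NEAREST = no interpolation
snapped.save("gen_output_snapped.png")
```

Nearest-neighbor matters in both directions: downscaling with it picks one representative color per block instead of averaging, and upscaling with it keeps the hard block edges instead of blurring them.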
Got it, that makes more sense. I thought this was the model output, and I was confused.
It just about is the model output. It just needs a little nudge.
Yeah, I mean it’s fine, I wasn’t saying it was cheating or anything. I was just curious if there had been some development I didn’t know about.