A limitation of generative models is that they can only generate things they have been trained on. But what if you want to consistently compose with a specific object, person's face, or artistic style not found in the original training data? This is where Eden-trained models come in. Models are custom characters, objects, styles, or specific people that Eden users have trained and added to the Eden tools' knowledge base using the LoRA technique. With models, users can consistently reproduce specific content and styles in their creations. Models are first trained by uploading example images to either the Flux or the (older) SDXL trainer. Training a model takes a couple of hours. Once trained, the model becomes available in all endpoints, including images and video.

Documentation Index
Fetch the complete documentation index at: https://docs.eden.art/llms.txt
Use this file to discover all available pages before exploring further.
Training
Train models through the training UI.

Selecting your training set
You need just a few images:
- Faces/objects: 4-10 images are usually sufficient
- Styles: can use hundreds or thousands of images for diverse styles
Tips for training images
- Custom prompts (optional)
- Selective diversity: maximize variance in everything you are not trying to learn
- High resolution: at least 768x768 pixels
- Center-cropped: the target subject should be in the center square
- Prominence: feature the target prominently
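The center-crop tip above can be applied programmatically before uploading a training set. A minimal sketch (the helper name is hypothetical, not part of Eden's tooling) that computes the largest centered square crop box for an image; the result can be fed to any image library, e.g. Pillow's `img.crop(center_square_box(*img.size)).resize((768, 768))`:

```python
def center_square_box(width: int, height: int) -> tuple[int, int, int, int]:
    """Return (left, top, right, bottom) for the largest centered square crop.

    The box keeps the subject in the center square, per the training tips.
    """
    side = min(width, height)          # largest square that fits
    left = (width - side) // 2         # equal margins on the long axis
    top = (height - side) // 2
    return (left, top, left + side, top + side)
```

After cropping, resize the square to at least 768x768 to meet the resolution guideline.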
Training parameters
Model types
Faces
Optimized for human faces. Use object mode for non-human faces.
Reference in prompts:
- Xander as a character in a noir graphic novel
- <Xander> as a knight in shining armour (using angle brackets)
- <Xander> as the Mona Lisa (using angle brackets)
Objects
For all “things” besides human faces: physical objects, characters, cartoons.
Prompt examples:
- a photo of <kojii> surfing a wave (using angle brackets)
- kojii in a snowglobe
- a low-poly artwork of Kojii
Styles
Model artistic styles or genres, focusing on abstract characteristics rather than content.
With style models, you don’t need to reference the concept - just prompt normally and the style will be applied.
Styles can capture various aesthetics, color palettes, layout patterns, or abstract notions like knolling:


Generating with models
Once trained, select your model in the creation tool and trigger it by name or <concept> in prompts.
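The angle-bracket trigger described above can be sketched as a tiny prompt helper (the function name and the `{}` placeholder convention are hypothetical illustrations; the `<concept>` syntax is the one Eden documents):

```python
def concept_prompt(concept: str, template: str) -> str:
    """Splice a trained model's angle-bracket token into a prompt template.

    "{}" in the template marks where the concept token goes.
    """
    return template.format(f"<{concept}>")

# Example: build the knight prompt from the Faces section.
prompt = concept_prompt("Xander", "{} as a knight in shining armour")
```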
Exporting Models
Eden models are compatible with other tools supporting LoRA.
AUTOMATIC1111
Install files
- Put `[lora_name]_lora.safetensors` in `stable-diffusion-webui/models/Lora`
- Put `[lora_name]_embeddings.safetensors` in `stable-diffusion-webui/embeddings`
Configure
Use JuggernautXL_v6 as the base checkpoint.

ComfyUI
Install files
- Put `[lora_name]_lora.safetensors` in `ComfyUI/models/loras`
- Put `[lora_name]_embeddings.safetensors` in `ComfyUI/models/embeddings`
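The file placement for both tools can be scripted. A sketch assuming the exported `.safetensors` files sit in one directory and the tool's root directory is passed in (the destination subdirectories are the ones listed above; the function itself is hypothetical):

```python
import shutil
from pathlib import Path

# Destination subdirectories per tool, from the install steps above.
DESTINATIONS = {
    "a1111": {"lora": "models/Lora", "embeddings": "embeddings"},
    "comfyui": {"lora": "models/loras", "embeddings": "models/embeddings"},
}

def install_eden_lora(lora_name: str, export_dir: Path, tool_root: Path, tool: str) -> None:
    """Copy an exported Eden LoRA and its embeddings into a tool's model folders."""
    for kind, subdir in DESTINATIONS[tool].items():
        src = export_dir / f"{lora_name}_{kind}.safetensors"
        dst_dir = tool_root / subdir
        dst_dir.mkdir(parents=True, exist_ok=True)  # create folders on first install
        shutil.copy2(src, dst_dir / src.name)
```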

LoRA strength has a relatively small effect because Eden models optimize token embeddings rather than LoRA matrices.

