A limitation of generative models is that they can only generate things they’ve been trained on. But what if you want to consistently compose with a specific object, person’s face, or artistic style not found in the original training data? This is where Eden trained models come in.

Models are custom characters, objects, styles, or specific people that Eden users have trained and added to the Eden tools’ knowledge base using the LoRA technique. With models, users can consistently reproduce specific content and styles in their creations. Models are first trained by uploading example images to either the Flux or the (older) SDXL trainer. Training a model takes a couple of hours. Once trained, the model becomes available in all endpoints, including images and video.

Training

Train models through the training UI.

Selecting your training set

You need just a few images:
  • Faces/objects: 4-10 images are usually sufficient
  • Styles: hundreds or thousands of images can be used for diverse styles

Tips for training images:
  • Selective diversity: maximize variance in everything you’re not trying to learn
  • High resolution: at least 768x768 pixels
  • Center-cropped: the target subject should sit in the center square
  • Prominence: feature the target prominently
  • Custom prompts (optional)

The choice of training images is the biggest factor determining quality. If you’re unsatisfied with the results, try different images before adjusting settings.

Training parameters

The trainer exposes a set of required and optional parameters; see the training UI for the full list.
Training at lower resolutions (e.g. 768) can be useful if you want to learn a face but prompt it in settings where the face is only part of the image. Using init_images with rough shape composition helps in this scenario.

Model types

Faces

Face mode is highly optimized for human faces. For non-human faces, such as cartoon characters or animals, use object mode instead.
Face training images
Reference in prompts:
  • Xander as a character in a noir graphic novel
  • <Xander> as a knight in shining armour (using angle brackets)
  • <Xander> as the Mona Lisa (using angle brackets)
Generated images with concept

Objects

For all “things” besides human faces: physical objects, characters, cartoons.
10 really good, diverse HD images are usually better than 100 low-quality or similar ones.
Object training images
Prompt examples:
  • a photo of <kojii> surfing a wave (using angle brackets)
  • kojii in a snowglobe
  • a low-poly artwork of Kojii
Generated with object concept

Styles

Model artistic styles or genres, focusing on abstract characteristics rather than content.
Style training images
With style models, you don’t need to reference the concept: just prompt normally and the style will be applied.
Generated with style concept
Styles can capture various aesthetics, color palettes, layout patterns, or abstract notions like knolling.
Knolling training set
Generated knolling images

Generating with models

Once trained, select your model in the creation tool and trigger it by name or <concept> in prompts.

Exporting Models

Eden models are compatible with other tools that support LoRA; download your concept as a .tar file to use it elsewhere.

AUTOMATIC1111

1. Download and extract: download the concept and extract the .tar file.
2. Install files:
  • Put [lora_name]_lora.safetensors in stable-diffusion-webui/models/Lora
  • Put [lora_name]_embeddings.safetensors in stable-diffusion-webui/embeddings
3. Configure: use JuggernautXL_v6 as the base checkpoint.
4. Trigger in prompt: load both the embedding AND the LoRA weights by triggering them in the prompt.
Using LoRA in AUTOMATIC1111
  • Face/Object modes: Use [lora_name]_embeddings in prompt
  • Style concepts: Use "... in the style of [lora_name]_embeddings"
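The extract-and-install steps above can be sketched as a shell session. This is a minimal sketch, not an official Eden script: the concept name `my_concept` and the webui path are placeholders, and the first few lines only fabricate a stand-in .tar so the example is self-contained.

```shell
# Sketch of the AUTOMATIC1111 install steps; "my_concept" is a placeholder.
set -e

# --- demo setup: fabricate a stand-in for the downloaded concept .tar -------
touch my_concept_lora.safetensors my_concept_embeddings.safetensors
tar -cf my_concept.tar my_concept_lora.safetensors my_concept_embeddings.safetensors
rm my_concept_lora.safetensors my_concept_embeddings.safetensors
# ----------------------------------------------------------------------------

# Step 1: extract the downloaded archive
tar -xf my_concept.tar

# Step 2: copy the two files into the webui folders
WEBUI=stable-diffusion-webui          # point this at your actual install
mkdir -p "$WEBUI/models/Lora" "$WEBUI/embeddings"
cp my_concept_lora.safetensors "$WEBUI/models/Lora/"
cp my_concept_embeddings.safetensors "$WEBUI/embeddings/"
```

After restarting the webui (or refreshing its model lists), both files appear under their respective tabs and can be triggered in the prompt as described above.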

ComfyUI

1. Install files:
  • Put [lora_name]_lora.safetensors in ComfyUI/models/loras
  • Put [lora_name]_embeddings.safetensors in ComfyUI/models/embeddings
2. Load LoRA: use the “Load LoRA” node and adjust its strength.
3. Trigger model: reference the concept with embedding:[lora_name]_embeddings in the prompt.
Using in ComfyUI
LoRA strength has a relatively small effect because Eden models optimize token embeddings rather than LoRA matrices.
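The ComfyUI placement differs from AUTOMATIC1111 only in the destination folders. A minimal sketch, again with a placeholder `my_concept` and fabricated stand-in files in place of the real extracted archive:

```shell
# Sketch of the ComfyUI install step; "my_concept" is a placeholder.
set -e

# demo setup: stand-ins for the files extracted from the concept .tar
touch my_concept_lora.safetensors my_concept_embeddings.safetensors

# copy into the ComfyUI model folders (COMFY points at your install)
COMFY=ComfyUI
mkdir -p "$COMFY/models/loras" "$COMFY/models/embeddings"
cp my_concept_lora.safetensors "$COMFY/models/loras/"
cp my_concept_embeddings.safetensors "$COMFY/models/embeddings/"

# in the prompt, the concept is then referenced as:
#   embedding:my_concept_embeddings
```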