AI Glossary

LoRA (Low-Rank Adaptation)

A technique for fine-tuning large AI image generation models on a small dataset to teach them a specific style, subject, or concept. LoRA files modify a base model's outputs and therefore need their own metadata management, versioning, and provenance tracking, since they directly shape the character of everything generated with them.

LoRA works by training a small set of additional weights that are applied on top of a base model like Stable Diffusion XL. Rather than fine-tuning the entire model (which requires significant compute and storage), LoRA injects small low-rank update matrices into the model's attention layers, producing files of typically 10-200 MB instead of the base model's 2-7 GB.
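The low-rank update itself is simple linear algebra. A minimal NumPy sketch, with illustrative dimensions and the conventional rank/alpha names (not tied to any real model), shows why the shipped file is so much smaller than the base weights:

```python
import numpy as np

d_out, d_in, rank = 8, 8, 2        # rank is much smaller than d_out, d_in
alpha = 4                          # conventional LoRA scaling factor

W = np.random.randn(d_out, d_in)   # frozen base-model weight (stays on disk once)
A = np.random.randn(rank, d_in)    # trainable low-rank factor
B = np.zeros((d_out, rank))        # B starts at zero, so the LoRA is a no-op at init

delta = (alpha / rank) * (B @ A)   # rank-limited update learned during fine-tuning
W_adapted = W + delta              # applied on top of the base weight at load time

# Only A and B ship in the LoRA file:
base_params = W.size               # 64 values for this toy layer
lora_params = A.size + B.size      # 32 values, and the gap widens as d grows
```

At real model dimensions (thousands by thousands) with a rank of 4-64, the two thin factors are a tiny fraction of the full matrix, which is where the 10-200 MB file sizes come from.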

For asset management, LoRAs introduce unique challenges. They are part of the generation provenance — the same prompt with different LoRAs produces entirely different results. They need version tracking because LoRA files are updated as creators refine their training data. And they need to be linked to the assets they influenced, creating a dependency graph between model components and outputs that traditional DAMs have no concept of.
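The dependency graph described above can be sketched as a small provenance record. All names and fields here are hypothetical, a minimal illustration rather than a real DAM schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LoraRef:
    name: str
    version: str      # LoRAs get re-released as creators refine training data
    sha256: str       # pin the exact file, not just the name

@dataclass
class GenerationRecord:
    asset_id: str
    prompt: str
    base_model: str
    loras: tuple      # (LoraRef, strength) pairs applied for this output

records = [
    GenerationRecord(
        asset_id="img_0001",
        prompt="studio portrait, soft light",
        base_model="sdxl-1.0",
        loras=((LoraRef("film-grain", "v2", "ab12..."), 0.8),),
    ),
]

def assets_using(lora_name: str, recs):
    """Walk the dependency edges: which outputs did this LoRA influence?"""
    return [r.asset_id for r in recs
            if any(ref.name == lora_name for ref, _ in r.loras)]
```

With records like these, questions such as "which assets need regenerating when film-grain v3 ships?" become a graph query instead of guesswork.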
