Labels: roar and GLaaS
Not everything worth tracking can be inferred automatically.
While roar observes structure, labels let you attach meaning to what happened.
Record things like:
- qualitative annotations
- metrics (loss, accuracy, evaluation scores)
- experiment notes or outcomes
- any structured key-value data you care about
Labels in roar
On the CLI, labels are lightweight and flexible.
You can attach them to jobs, artifacts, or sessions as part of your workflow, without changing how you run commands.
They are especially useful for capturing results that aren’t visible from execution alone, such as evaluation metrics or human judgments.
For example, you might add labels like this:
roar label set dag current owner=alice project=mnist-baseline
roar label set job @2 phase=train lr=0.001 accuracy=0.94
roar label set artifact ./outputs/model.pt model.name=resnet50 stage=baseline
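Each label is a simple key=value pair, with dotted keys (like model.name) allowed for namespacing. As a minimal sketch of that structure, here is how such tokens map onto a plain dictionary; this is illustrative only and does not reflect roar's internal implementation:

```python
def parse_labels(tokens):
    """Parse CLI-style key=value tokens into a dict of string labels.

    Values are kept as strings, mirroring how they are written on the
    command line; dotted keys such as "model.name" pass through as-is.
    """
    labels = {}
    for token in tokens:
        key, sep, value = token.partition("=")
        if not sep or not key:
            raise ValueError(f"expected key=value, got {token!r}")
        labels[key] = value
    return labels

# Mirrors: roar label set job @2 phase=train lr=0.001 accuracy=0.94
print(parse_labels(["phase=train", "lr=0.001", "accuracy=0.94"]))
# → {'phase': 'train', 'lr': '0.001', 'accuracy': '0.94'}
```

Keeping values as strings keeps the CLI flexible; it is up to whatever consumes the labels to interpret "0.94" as a number when comparing or ranking.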
Labels in GLaaS
When registered to GLaaS, labels become part of a shared, queryable layer of meaning across your lineage.
This enables:
- filtering and searching across runs
- comparing results across different approaches
- constructing leaderboards or evaluation views
- organizing work beyond just structure and lineage
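To make the kinds of queries above concrete, here is a small in-memory sketch of filtering runs by label and ranking them into a leaderboard. The run records and helper functions are hypothetical; this is not the GLaaS API, just an illustration of what a shared label layer makes possible:

```python
# Hypothetical run records: each run carries the labels registered for it.
runs = [
    {"job": "@1", "labels": {"project": "mnist-baseline", "phase": "train", "accuracy": "0.91"}},
    {"job": "@2", "labels": {"project": "mnist-baseline", "phase": "train", "accuracy": "0.94"}},
    {"job": "@3", "labels": {"project": "cifar-sweep", "phase": "eval", "accuracy": "0.88"}},
]

def filter_runs(runs, **criteria):
    """Keep runs whose labels match every key=value criterion."""
    return [r for r in runs
            if all(r["labels"].get(k) == v for k, v in criteria.items())]

def leaderboard(runs, metric):
    """Rank runs by a numeric label, best first."""
    ranked = sorted(runs, key=lambda r: float(r["labels"][metric]), reverse=True)
    return [(r["job"], r["labels"][metric]) for r in ranked]

mnist = filter_runs(runs, project="mnist-baseline")
print(leaderboard(mnist, "accuracy"))
# → [('@2', '0.94'), ('@1', '0.91')]
```

Because the labels live in one shared layer rather than in each person's scripts, the same filter and ranking logic applies uniformly across everyone's runs.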