Feature/training: from int to string #63

Merged
milanagm merged 3 commits into main from feature/training-canonical-strings
Apr 30, 2026
Conversation

@milanagm
Contributor

Switch metabolic training targets from ints to canonical strings

♻️ Current situation & Problem

Depends on / sits on top of the eval pipeline PR (#55, feature/evaluation-pipeline). Follow-up to PR #50, which
introduced the curriculum-learning training stack with int-encoded answer targets ("0", "1", "2", …).

Two related issues motivating this PR:

  1. OpenTSLM convention divergence — the upstream OpenTSLM reference datasets (HARAcc, Sleep, ECG) all train on canonical class strings ("biking", "Wake", "yes", …). Our metabolic_finetune.py and mhc_multi_label_qa_dataset.py deviated by using str(int(value)) as the training target.

  2. Train ↔ eval prompt drift — eval (MHCMetabolicQADataset from PR Feature/evaluation pipeline #55) frames the task as "Is this person biologically male or female? … Answer: ", while training framed it as the terse "Based on the sensor data, predict the value of BiologicalSex (binary).".

⚙️ Release Notes

  • Training answer format switched to canonical strings:
    • Binary: "Male" / "Female" / "True" / "False"

    • Ordinal: "Underweight" / "Normal weight" / "Overweight" / …

    • Continuous: numeric string (unchanged)

    • Wrapped as "Answer: <value>" so the loss target matches the prompt's "Answer: " instruction.

    • New shared helper build_metabolic_post_prompt(label) in time_series_datasets/mhc_label_lookup.py. Both MHCMetabolicQADataset (eval) and MHCMultiLabelQADataset (training) call it — train and eval now emit byte-identical post_prompts. Lives next to METABOLIC_LABEL_CONFIG it formats.

  • LabelLookup API change: _decode_labels → decode_labels (was effectively public; renamed to match its actual usage).
  • scripts/metabolic_finetune.py eval logic removed. The single-token logit-scoring path (_candidate_token_ids,
    _logits_over_candidates, _answer_prefix_embed) was incompatible with multi-token canonical strings. Downstream metrics now flow
    through evaluation/run_eval.py (PR Feature/evaluation pipeline #55) instead. --eval_only flag removed; net −189 LOC.
  • Migration: existing int-trained checkpoints will not parse correctly under the new prompt format. Re-train from a stage-1 captioning checkpoint:

    python curriculum_learning.py \
        --model OpenTSLMSP \
        --stages stage_joint_caption_metabolic \
        --batch_size 2 \
        --gradient_checkpointing

    (or stage_metabolic alone if a captioning checkpoint is already on disk.)
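To make the shared-helper idea concrete, here is a minimal sketch of what build_metabolic_post_prompt could look like. The function, module, and dataset names come from this PR, but the question wording, the METABOLIC_LABEL_CONFIG contents, and the fallback text below are illustrative assumptions, not the repository's actual code:

```python
# Illustrative sketch only: the real config and wording live in
# time_series_datasets/mhc_label_lookup.py.
METABOLIC_LABEL_CONFIG = {
    "BiologicalSex": {
        "question": "Is this person biologically male or female?",
        "values": ["Male", "Female"],
    },
}

def build_metabolic_post_prompt(label: str) -> str:
    """Build the post-prompt shared by training and eval for one label."""
    cfg = METABOLIC_LABEL_CONFIG.get(label)
    if cfg is None:
        # Fallback for labels without a canonical-string config
        # (e.g. continuous targets, which keep numeric answers).
        return f"Predict the value of {label}.\n\nAnswer: "
    values_str = ", ".join(f'"{v}"' for v in cfg["values"])
    return (
        f"{cfg['question']}\n\n"
        f"Possible answers: {values_str}\n\n"
        "Answer: "
    )
```

Since both MHCMetabolicQADataset (eval) and MHCMultiLabelQADataset (training) would call this one function, their post_prompts stay byte-identical, and any future wording change propagates to both automatically.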


coderabbitai Bot commented Apr 29, 2026

Warning

Rate limit exceeded

@milanagm has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 11 minutes and 22 seconds before requesting another review.


⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 85aa008d-7300-4f6b-80d5-ef0c1e863be2

📥 Commits

Reviewing files that changed from the base of the PR and between 445f34c and 3ba478b.

📒 Files selected for processing (2)
  • scripts/metabolic_finetune.py
  • scripts/verify_label_lookup.py
📝 Walkthrough

Walkthrough

This PR centralizes label handling and prompt generation by exposing a public label-decoding API and introducing a reusable prompt-building helper function. The main training script is simplified by removing internal evaluation logic, delegating that responsibility to external evaluation. Dataset modules are updated to use the new public APIs instead of duplicating prompt generation logic.

Changes

  • Label API Centralization (time_series_datasets/mhc_label_lookup.py): Exposes decode_labels() as a public API (renamed from _decode_labels). Introduces build_metabolic_post_prompt(label) to centralize metabolic prompt formatting, with fallback logic for unconfigured labels.
  • Dataset Migration to Public API (time_series_datasets/mhc_base_qa_dataset.py, time_series_datasets/mhc_metabolic_qa_dataset.py, time_series_datasets/mhc_multi_label_qa_dataset.py): Updated to use the public decode_labels() and build_metabolic_post_prompt() instead of internal implementations. Removes redundant prompt-generation logic and label-type-based answer formatting.
  • Training Script Refactoring (scripts/metabolic_finetune.py): Removes internal evaluation code paths, candidate-token probability evaluation, and the --eval_only CLI option. Switches label prompting from numeric digit prediction to a string-based format ("Answer: <value>"). Delegates evaluation to external evaluation/run_eval.py.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 33.33%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)
  • Title check: ✅ Passed. The title 'Feature/training: from int to string' directly matches the core change: switching training targets from integer-encoded strings to canonical string formats.
  • Description check: ✅ Passed. The description explains the motivation (OpenTSLM convention alignment and train/eval prompt drift), details the key changes (canonical string formats, API changes, evaluation-logic removal), and provides migration guidance.
  • Linked Issues check: ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check: ✅ Passed. Check skipped because no linked issues were found for this pull request.


Comment thread time_series_datasets/mhc_label_lookup.py
@milanagm milanagm force-pushed the feature/training-canonical-strings branch 2 times, most recently from edaeb06 to bb31baf Compare April 30, 2026 01:03
@milanagm milanagm marked this pull request as ready for review April 30, 2026 05:12

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
time_series_datasets/mhc_label_lookup.py (1)

165-175: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Don't instruct the model to emit the literal <label> token.

The shared rule currently says ONLY <label> verbatim, so every metabolic train/eval prompt now includes an output token that is not one of the listed valid answers. That makes the instruction self-contradictory and can steer generations toward the placeholder instead of Male / Female / etc.

Proposed fix
     return (
         f"{cfg['question']}\n\n"
         f"Possible answers: {values_str}\n\n"
         "Rules:\n"
         "- Base your answer on the sensor patterns above.\n"
         "- You MUST give a classification even if the signal is unclear — "
         "state limitations but still make your best guess.\n"
         "- Never respond with a question.\n"
-        "- You MUST respond with ONLY <label>, exactly as written in the listed possible answers."
+        "- You MUST respond with ONLY one of the listed possible answers, exactly as written."
     )
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/metabolic_finetune.py`:
- Around line 11-14: The training targets still include the "Answer: " prefix
which mismatches the shared build_metabolic_post_prompt framing and the plain
str(value) outputs from time_series_datasets/mhc_multi_label_qa_dataset.py;
remove the hard-coded "Answer: " prefix where targets are constructed (e.g.,
replace occurrences like target = f"Answer: {label}" or post_prompt + "Answer: "
usage with just the bare label/str(value)), ensure the training loss uses the
bare label token sequence, and update the module docstring at the top (formerly
Lines 11-14) to describe the new bare-label target format; verify references to
build_metabolic_post_prompt and any variables named post_prompt/target in the
training loop (also the analogous code around Lines 142-149) are changed
consistently.
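The mismatch flagged in this inline comment can be illustrated with a toy target-construction snippet. The variable names here are hypothetical; they only mirror the reviewer's description of the post_prompt/target pairing, under the assumption that the prompt itself ends with the "Answer: " cue:

```python
# Toy illustration of the flagged pattern (hypothetical names).
label = "Female"                 # canonical class string from the dataset
post_prompt = "...\n\nAnswer: "  # prompt assumed to end with the cue

# Flagged pattern: the target repeats the cue, so prompt + target would
# concatenate to a doubled "Answer: Answer: Female"-style sequence.
target_with_prefix = f"Answer: {label}"

# Reviewer's suggestion: train on the bare label so prompt + target
# concatenates to exactly one "Answer: Female".
target_bare = label
```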


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ebe68d5d-e983-4651-9b4a-15db19c68ec3

📥 Commits

Reviewing files that changed from the base of the PR and between 5c959c5 and 445f34c.

📒 Files selected for processing (5)
  • scripts/metabolic_finetune.py
  • time_series_datasets/mhc_base_qa_dataset.py
  • time_series_datasets/mhc_label_lookup.py
  • time_series_datasets/mhc_metabolic_qa_dataset.py
  • time_series_datasets/mhc_multi_label_qa_dataset.py

Comment thread scripts/metabolic_finetune.py
@milanagm milanagm merged commit 0861d38 into main Apr 30, 2026
3 checks passed
@milanagm milanagm deleted the feature/training-canonical-strings branch April 30, 2026 06:05
