📝 Walkthrough

This PR introduces a cross-channel annotation framework enabling multi-signal analysis.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Recording
    participant CrossChannelExtractor
    participant SleepSynthesizer
    participant ChannelConfig
    participant Annotation
    Recording->>CrossChannelExtractor: extract(row)
    CrossChannelExtractor->>SleepSynthesizer: for each synthesizer
    SleepSynthesizer->>Recording: locate sleep:inbed and sleep:asleep channels
    SleepSynthesizer->>SleepSynthesizer: create boolean mask (inbed=1 AND asleep≠1)
    SleepSynthesizer->>SleepSynthesizer: detect contiguous windows from mask
    SleepSynthesizer->>ChannelConfig: load templates from config
    loop for each window
        SleepSynthesizer->>SleepSynthesizer: select template by seed rotation
        SleepSynthesizer->>Annotation: create with window, caption_type, text
    end
    SleepSynthesizer-->>CrossChannelExtractor: return list[Annotation]
    CrossChannelExtractor-->>Recording: aggregated annotations
```
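The mask-and-windows step in the diagram can be sketched as follows. This is a hedged illustration only: the channel names (`sleep:inbed`, `sleep:asleep`) follow the diagram, but the helper below and its list-based signature are assumptions, not the project's actual `CrossChannelExtractor` API.

```python
from itertools import groupby

def in_bed_awake_windows(inbed: list[int], asleep: list[int]) -> list[tuple[int, int]]:
    """Return [start, end) index pairs where inbed == 1 and asleep != 1."""
    mask = [i == 1 and a != 1 for i, a in zip(inbed, asleep)]
    windows, pos = [], 0
    # groupby collapses the mask into runs of equal values; keep the True runs.
    for value, run in groupby(mask):
        length = len(list(run))
        if value:
            windows.append((pos, pos + length))
        pos += length
    return windows

inbed = [1, 1, 1, 1, 0, 1, 1]
asleep = [0, 1, 1, 0, 0, 0, 1]
print(in_bed_awake_windows(inbed, asleep))  # [(0, 1), (3, 4), (5, 6)]
```

Each resulting window would then be paired with a template to produce one `Annotation`, per the loop in the diagram.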
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🧹 Nitpick comments (2)
.gitignore (1)
6-6: Consider whether to scope the pattern to root-level only.

The pattern `data/` will ignore any directory named `data` anywhere in the repository tree. If the intent is to ignore only the root-level `data/` directory, use `/data/` instead. If ignoring all `data/` directories throughout the repository is intentional, the current pattern is correct.

📝 Proposed alternative if root-level scoping is preferred

```diff
-data/
+/data/
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.gitignore at line 6, The .gitignore currently uses the pattern "data/" which ignores any directory named data anywhere in the repo; decide whether you want root-only scoping and, if so, change the pattern to "/data/" instead of "data/". Update the file by replacing the "data/" entry with "/data/" to limit ignoring to the repository root, or leave "data/" unchanged if you intentionally want to ignore every data/ directory recursively.

synthesizers/sleep.py (1)
35-36: Cache the parsed templates instead of rereading JSON per row.

`SleepSynthesizer.synthesize()` is on the per-row annotation path, so reopening and reparsing `templates.json` on every call is avoidable overhead. Loading this once in `__init__` or memoizing by `templates_path` would keep dataset runs and explorer scans cheaper.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@synthesizers/sleep.py` around lines 35-36, The synthesizer currently re-reads and re-parses templates.json inside SleepSynthesizer.synthesize() which is called per-row; move the JSON load into SleepSynthesizer.__init__ (or implement a small memoization keyed by config.templates_path) so that templates are parsed once and stored on the instance (e.g., self.templates) instead of reassigning a local variable templates in synthesize(); update synthesize() to reference self.templates and keep the existing time_unit logic unchanged.
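The caching suggestion above could look roughly like this. It is a hedged sketch: the real `SleepSynthesizer`'s attributes and template format are not shown in this review, so `templates_path` and the template shape (a JSON list of format strings) are assumptions for illustration.

```python
import json
from functools import lru_cache

@lru_cache(maxsize=None)
def load_templates(templates_path: str) -> tuple[str, ...]:
    """Parse templates.json once per path; later calls return the cached tuple."""
    with open(templates_path) as f:
        return tuple(json.load(f))

class SleepSynthesizer:
    def __init__(self, templates_path: str):
        # Eager alternative: parse once at construction instead of per row.
        self.templates = load_templates(templates_path)

    def synthesize(self, row: str) -> list[str]:
        # The per-row path now reuses self.templates; no file I/O here.
        return [t.format(row=row) for t in self.templates]
```

Either approach (eager load in `__init__` or `lru_cache` memoization keyed by path) removes the per-row parse without changing the synthesize logic.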
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: db229781-fe5f-42c8-abb1-bba471e8ade3
📒 Files selected for processing (9)

- .gitignore
- captionizer.py
- explorer.py
- extractors/__init__.py
- extractors/cross_channel.py
- synthesizers/__init__.py
- synthesizers/sleep.py
- templates/templates.json
- visualizer.py
```python
annotator = Annotator([
    StatisticalExtractor(MHC_CHANNEL_CONFIG),
    StructuralExtractor(MHC_CHANNEL_CONFIG),
    CrossChannelExtractor(MHC_CHANNEL_CONFIG, synthesizers=[SleepSynthesizer()]),
])
```
This wiring doesn't surface the new sleep annotations yet.
CrossChannelExtractor now runs here, but plot_row() still only renders statistical, structural, and semantic caption types into the subplot labels. The new cross_channel captions are generated and then dropped, so this example path never shows the feature it just enabled. Either render cross_channel captions as well or leave the extractor out of this demo entrypoint for now.
Possible follow-up in plot_row()

```diff
-    for ann_type in ("statistical", "structural", "semantic"):
+    for ann_type in ("statistical", "structural", "semantic", "cross_channel"):
         if ann_type in caps:
             parts.append(textwrap.fill(" ".join(caps[ann_type]), width=80))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@visualizer.py` around lines 125 - 129, The demo wires CrossChannelExtractor
(with SleepSynthesizer) into Annotator but plot_row() currently only renders
'statistical', 'structural', and 'semantic' caption types so the new
'cross_channel' captions are dropped; either update plot_row() to handle and
render 'cross_channel' captions (add 'cross_channel' to the caption types it
iterates/labels and format the label text accordingly) or remove
CrossChannelExtractor from the Annotator initialization to keep the demo
consistent (modify the Annotator call where
CrossChannelExtractor(MHC_CHANNEL_CONFIG, synthesizers=[SleepSynthesizer()]) is
added).
Actionable comments posted: 1
🧹 Nitpick comments (1)
explorer.py (1)

211-217: Consider using unpacking for list construction.

Per static analysis, unpacking is more idiomatic than concatenation here.
♻️ Suggested change

```diff
 def _available_hit_target_names(self) -> list[str]:
     names = {
         detector.__class__.__name__
         for detectors in self.channel_config.detectors.values()
         for detector in detectors
     }
-    return sorted(names) + [IN_BED_AWAKE_TARGET]
+    return [*sorted(names), IN_BED_AWAKE_TARGET]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@explorer.py` around lines 211 - 217, The method _available_hit_target_names builds a sorted list then concatenates a single extra item; replace the list concatenation with sequence unpacking to be more idiomatic: construct the set from self.channel_config.detectors, sort it, and return a new list using unpacking to append IN_BED_AWAKE_TARGET (e.g., return [*sorted(names), IN_BED_AWAKE_TARGET]) so references are to _available_hit_target_names, self.channel_config.detectors, names, and IN_BED_AWAKE_TARGET.
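A standalone toy version of the suggestion, with placeholder detector names (the real values live in `explorer.py`):

```python
IN_BED_AWAKE_TARGET = "InBedAwake"  # placeholder constant for illustration

def available_hit_target_names(detector_names: set[str]) -> list[str]:
    # [*iterable, item] builds the final list in one expression,
    # avoiding the temporary list that sorted(...) + [item] creates.
    return [*sorted(detector_names), IN_BED_AWAKE_TARGET]

print(available_hit_target_names({"SpikeDetector", "GapDetector"}))
# ['GapDetector', 'SpikeDetector', 'InBedAwake']
```

Both spellings are equivalent in behavior; the unpacked form is simply what the linter prefers.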
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: fd817f1e-9cd0-4cc9-b554-9c45264c6924
📒 Files selected for processing (2)

- explorer.py
- templates/templates.json
🚧 Files skipped from review as they are similar to previous changes (1)
- templates/templates.json
```python
"target< / target> changes the jump target",
"n / p jump to next or previous hit for that target",
"hit< / hit> buttons do the same",
"[ / ] also changes the jump target",
```
Help text references non-existent controls.

Line 757 mentions `target< / target>` but these controls don't exist in the UI. The actual ways to change the jump target are:

- Click the detector name buttons (e.g., "InBedAwake")
- Press the `[` or `]` keys

Consider removing or correcting this line to avoid user confusion.
📝 Suggested fix

```diff
 "up/down changes row",
 "left/right changes signal",
-"target< / target> changes the jump target",
+"click detector buttons to change the jump target",
 "n / p jump to next or previous hit for that target",
 "hit< / hit> buttons do the same",
 "[ / ] also changes the jump target",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@explorer.py` around lines 757 - 760, The help text still references a
non-existent control "target< / target>"; update the help string (the block
containing "target< / target> changes the jump target", "n / p jump to next or
previous hit for that target", etc.) to remove the bogus control and instead
instruct users to change the jump target by clicking the detector name buttons
(e.g., "InBedAwake") or by pressing "[" and "]" keys, keeping the related "n /
p" and "hit buttons" guidance intact; locate and edit the help text literal in
explorer.py to replace the incorrect line with the corrected wording.
Sleep Synthesizer
♻️ Current situation & Problem
The cross-channel extractor framework lacked support for sleep-specific annotations and was not designed for pluggable synthesizers.
⚙️ Release Notes
📚 Documentation
Split from #49 — the stationary and cardio synthesizers will follow in a separate PR.
✅ Testing
tbd
Code of Conduct & Contributing Guidelines
By creating and submitting this pull request, you agree to follow our Code of Conduct and Contributing Guidelines: