
Tbain/253 add tags count #506

Open
tbain wants to merge 18 commits into openedx:main from tbain:tbain/253_add_tags_count_rebased

Conversation


@tbain tbain commented Mar 18, 2026

Description

This implements openedx/modular-learning#253, the task to add tag usage counts to the tags table under the taxonomies table. The frontend piece, where the results of this aggregation work are displayed, is part of a separate PR to openedx/frontend-app-authoring. This change adds a subquery annotation to the Django query for retrieving tags. The original implementation of the counts only counted raw usage of each tag. As specified in the AC for the issue above, this feature/PR instead aggregates the sum of each tag's usage plus its child tags' usage, with sibling de-duplication for the same usage (e.g. when two sibling nodes are used against the same course, module, etc., we still only count that as '1' for any parent/grandparent nodes). The original count was therefore replaced with this more complicated bit of logic that sums tag usage across the various courses, sections, modules, and libraries that might use a tag.

  • update:
    The count logic is done in memory, since we saw noticeable performance issues when trying to stay within the QuerySet/Django paradigm for calculating the counts. This makes the code a little less straightforward, because the logic is broken out into a somewhat unusual in-memory Python implementation, but it works as intended and resolves as many performance pain points as possible while still adhering to the counting requirements that necessitated such code.

  • update:
    The count logic was moved out to the API level so that the query can complete first and adding the counts does not affect the ability to process the data as a QuerySet until the very end. This required moving all the usage_count unit tests from the _models and _api levels to the _view level, so that logic was re-implemented as appropriate (with a lot of AI help to speed it up).
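To make the update above concrete, here is a rough sketch of the kind of in-memory rollup counting being described. The function name and data shapes are hypothetical, not the actual code in this PR:

```python
from collections import defaultdict

def add_usage_counts(tag_rows, object_tags):
    """Annotate serialized tag rows with rolled-up, de-duplicated usage counts.

    tag_rows:    list of dicts, each with at least an "id" key (hypothetical shape).
    object_tags: iterable of (object_id, lineage) pairs, where lineage is the
                 list of tag ids from the root ancestor down to the applied tag.
    """
    # For each tag in the lineage (the applied tag plus every ancestor),
    # collect the distinct objects it is directly or indirectly applied to.
    objects_per_tag = defaultdict(set)
    for object_id, lineage in object_tags:
        for tag_id in set(lineage):
            objects_per_tag[tag_id].add(object_id)

    # len(set) gives the de-duplicated count: two sibling tags applied to
    # the same object contribute that object only once to shared ancestors.
    for row in tag_rows:
        row["usage_count"] = len(objects_per_tag[row["id"]])
    return tag_rows
```

This is one linear pass over the ObjectTags plus one over the tags, which matches the O(objects + tags) behavior discussed later in the thread.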

AI Usage Disclosure: Claude (via the IntelliJ IDE integration) was used throughout the authoring process to work through complicated logic, craft the foundation of the unit tests, and also simplify the code, make it more Pythonic, and alleviate performance concerns.

Supporting information

Github issue with AC: openedx/modular-learning#253

Testing instructions

Refer to the AC in the Github Issue. Steps to verify this is implemented and working via UX (Note, depends on the frontend part of this ticket):

  1. Navigate to the "Studio home" page
  2. Navigate into an existing Course (or create a course and navigate into it)
  3. In the "Course Outline" page, add tag(s) from an existing taxonomy to the course, module, or section. Ensure at least one of the tags you add is a sub-tag of a root tag.
  4. Navigate back to the "Studio home" page
  5. Click the "Taxonomies" tab to navigate to the Taxonomies page
  6. Navigate into the Taxonomy that corresponds to the tag you added in step 3
  7. Observe that there is now an additional column on the table named "Usage Count", populated with bubbles that display the count of tag usages for each tag that is used, if applicable
  8. Ensure that the tag you added in step 3 properly reflects the incremented count from its usage, and ensure that the usage count properly aggregates up the lineage based on the sub-tag you selected in step 3

Other information

Include anything else that will help reviewers and consumers understand the change.

  • Does this change depend on other changes elsewhere?
    • This ticket is backwards compatible with the current implementation in frontend-app-authoring, since by default the frontend does not request the counts.
  • Any special concerns or limitations? For example: deprecations, migrations, security, or accessibility.
    • none at this time

@openedx-webhooks

Thanks for the pull request, @tbain!

This repository is currently maintained by @axim-engineering.

Once you've gone through the following steps feel free to tag them in a comment and let them know that your changes are ready for engineering review.

🔘 Get product approval

If you haven't already, check this list to see if your contribution needs to go through the product review process.

  • If it does, you'll need to submit a product proposal for your contribution, and have it reviewed by the Product Working Group.
    • This process (including the steps you'll need to take) is documented here.
  • If it doesn't, simply proceed with the next step.
🔘 Provide context

To help your reviewers and other members of the community understand the purpose and larger context of your changes, feel free to add as much of the following information to the PR description as you can:

  • Dependencies

    This PR must be merged before / after / at the same time as ...

  • Blockers

    This PR is waiting for OEP-1234 to be accepted.

  • Timeline information

    This PR must be merged by XX date because ...

  • Partner information

    This is for a course on edx.org.

  • Supporting documentation
  • Relevant Open edX discussion forum threads
🔘 Get a green build

If one or more checks are failing, continue working on your changes until this is no longer the case and your build turns green.

Where can I find more information?

If you'd like to get more details on all aspects of the review process for open source pull requests (OSPRs), check out the following resources:

When can I expect my changes to be merged?

Our goal is to get community contributions seen and reviewed as efficiently as possible.

However, the amount of time that it takes to review and merge a PR can vary significantly based on factors such as:

  • The size and impact of the changes that it introduces
  • The need for product review
  • Maintenance status of the parent repository

💡 As a result it may take up to several weeks or months to complete a review and merge your PR.

@openedx-webhooks openedx-webhooks added the open-source-contribution (PR author is not from Axim or 2U) label Mar 18, 2026
@github-project-automation github-project-automation bot moved this to Needs Triage in Contributions Mar 18, 2026

@jesperhodge jesperhodge left a comment


There seem to be changes missing. For example, src/taxonomy/data/api.ts.
Could you

  • review this PR and make sure that all necessary changes are in this branch? Compare to the open Unicon PR.
  • review discussions in the Unicon PR and either resolve them or copy them here to be addressed here.
  • fix any pipeline errors?

@mgwozdz-unicon
Contributor

Since we're no longer using recursive SQL for this, is it possible to update the PR description for accuracy?

@mphilbrick211 mphilbrick211 moved this from Needs Triage to In Eng Review in Contributions Mar 23, 2026
@tbain
Author

tbain commented Mar 23, 2026

There seem to be changes missing. For example, src/taxonomy/data/api.ts. Could you

* review this PR and make sure that all necessary changes are in this branch? Compare to the open Unicon PR.

* review discussions in the Unicon PR and either resolve them or copy them here to be addressed here.

* fix any pipeline errors?
  • src/taxonomy/data/api.ts, as an example, was a file in the front-end changes. I compared everything with the Backend changes/openedx-core and this is the correct set of files
  • All comments/issues to address from the aforementioned PR have been addressed with this one, so this PR is up to date
  • Working on that. I had missed a test suite that was affected by the changes, so I addressed that; still working on a strange quality issue where it's complaining about how long the unit test suite takes


Copilot AI left a comment


Pull request overview

Adds rolled-up, de-duplicated tag usage counts (including ancestor rollups) to the tag listing query so the Taxonomies UI can display accurate “Usage Count” values per tag.

Changes:

  • Replaced the prior per-tag direct usage counting subquery with a dynamic, depth-aware subquery that rolls counts up to ancestors with per-object de-duplication.
  • Updated existing API/model tests to reflect rolled-up counts and added a broader set of usage-count test cases.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 9 comments.

Files changed:
  • src/openedx_tagging/models/base.py: Centralizes and updates include_counts behavior by annotating tag querysets with rolled-up, de-duplicated usage_count via a subquery.
  • tests/openedx_tagging/test_models.py: Updates expected usage counts and adds multiple new test scenarios validating ancestor rollup and sibling de-duplication.
  • tests/openedx_tagging/test_api.py: Updates autocomplete/search test expectations to reflect rolled-up usage counts returned by the API when include_counts=True.


@bradenmacdonald
Contributor

Feel free to ping me for review here once the AC are clarified and the comments from Copilot etc are addressed.

@tbain
Author

tbain commented Mar 28, 2026

Feel free to ping me for review here once the AC are clarified and the comments from Copilot etc are addressed.

@bradenmacdonald I think this is ready for re-review. I resolved all the Copilot issues and added the improvement you suggested for finding the depth via a query rather than depending on the constant.

@mgwozdz-unicon
Contributor

I'm not opposed to this PR as is, but a 10x slowdown isn't great, and I suspect it may be worse if there are more ObjectTags in use (I don't have that many in my test environment).

In order to improve performance, I have two suggestions:

  1. Build the object counts in Python. Basically, in the /taxonomy/:n/tags/ REST API endpoint, once we've evaluated the query to load the tags (along with whatever filtering and pagination etc. may be in place), then you can do a second query to load all related ObjectTags, including tag__lineage (1 simple query, no aggregation at the query level). Then, in Python, you can group by object tags, split the lineage up into individual tags, de-duplicate with a set(), and then annotate the original query objects with the counts. This also lets you separate implicit counts from explicit counts in the API, which I think would be even better than combining them.
  2. Or, perhaps even better, make the "get counts with implicit counts" a separate REST API endpoint. Then you can implement it either the way I described above or your original way, and it doesn't matter if it's a bit slow since the UI can load it separately, and the rest of the tags will load in immediately, so it doesn't matter if the counts load a bit slower.

Thoughts?

I think I like option 2 better as well, because it's clearer that it will help with the performance and will likely take less implementation time. However, unfortunately, even if option 2 is only about a 2-day effort to implement, the number of fast follows we've been promising is starting to stack up, so we're getting a bit more concerned about our timeline. I'd like to add a new Github Issue to our "Nice to Haves" to address the performance concerns here and proceed with merging as is, if possible @bradenmacdonald?

Some other related thoughts I have from a big-picture use case perspective: We don't anticipate taxonomies much larger than the Lightcast sample. However, I do anticipate that for folks who create new course runs every term and have very short terms, the number of ObjectTag associations could get pretty large. This is where it could be valuable to add a filter to the tag count to only fetch ObjectTags where the Object corresponds to a course that is currently running or will be running in the future. For those same folks, the usage count is probably not very meaningful anyway ("Was it used 250 times or 275 times? How much usage should I expect compared to what I'm seeing?"). There could also be an option to hide the usage count column altogether if we detect that usages are so high that this info is irrelevant for the instance, or to hide the usage count for people who primarily use per-term course runs instead of continuously running courses.

@bradenmacdonald
Contributor

We're trying to stabilize the APIs for Verawood, so if we think we're ultimately going to end up with a second API endpoint for getting the counts, then I'd prefer to split that off separately now, even if we just use the existing implementation exactly as it is in this PR. In that case, it would definitely take less than 2 days, because you don't have to change much (although you could simplify it if you get time). You can also mark that "counts" endpoint as unstable so we can freely evolve it in Willow while keeping the "get tags" endpoint stable.

I think that this is where it could potentially be valuable to add a filter to the tag count to only fetch ObjectTags where the Object corresponds to a course that is currently running or will be running in the future. I think the rest of the big picture for this is that for the folks who create new course runs every term and have very short terms, the usage count is probably meaningless for them and not very helpful anyways "Was it used 250 times or 275 times?

That all makes a lot of sense, but will require a lot of discussion, because the current API is not aware of "courses" as a concept at all, and I'm a bit reluctant to make the tagging API aware of those things - right now tagging is a very low-level feature that other things build on. If you're even considering functionality like that, then I think it's another reason to move the tag usage counts to a separate endpoint, where it can support more elaborate options/filtering.

@ormsbee Can I get your thoughts?

@jesperhodge

@bradenmacdonald if I understand correctly:

  • if we are doing a separate endpoint, that can be done outside of this PR as a ticket later (in Verawood? Willow?)
  • if we are tackling this a different way - e.g. the way Braden suggested as option 1, or the way Mary suggested - this should be a separate discussion outside of this PR.
  • We can "just use the existing implementation exactly as it is in this PR"

Did I get that correctly? So can we consider this PR unblocked in this case?
In that case I would like to move this conversation to a new ticket entirely (possibly a "discovery" ticket) and have @thelmick-unicon / @bradenmacdonald / Axim figure out what release and priority it should have.

tbain added 2 commits April 1, 2026 11:37
@jesperhodge

@bradenmacdonald @tbain here is the new issue.

I have not worked out an accurate title or description for it, so you can just edit the issue however you see fit.

jesperhodge pushed a commit to openedx/frontend-app-authoring that referenced this pull request Apr 1, 2026
This implements openedx/modular-learning#253 , the task to add tag usage counts to the tags table under the taxonomies table. The corresponding backend part is openedx/openedx-core#506, which updates the count aggregations to ensure the correct count numbers are sent to the frontend. This frontend PR does not depend on the backend part.
@bradenmacdonald
Contributor

We can "just use the existing implementation exactly as it is in this PR"

What I meant was that we should change the PR to provide the desired tag count data via a separate endpoint. But using more or less the exact same code as you have now if you don't want to refactor it.

So instead of /taxonomy/:n/tags/?include_counts which we can leave alone for now or even remove the "counts" functionality from, add a new endpoint called /taxonomy/:n/usage_counts.

But I guess that's going to require some major changes on the frontend side to combine those pieces of information, so maybe that's not going to work with your timeline.

@bradenmacdonald
Contributor

I guess before we consider merging this as is, I'd like to know if the slowness mostly scales with taxonomy size or object tag count or both? If the slowness is only a factor on large taxonomies and it's just ~1s, I think that's OK for now. But if it's slow as the # of object tags increases or it's O(n_tags * n_object_tags) or anything like that, then it'll seem fine now and slow to a crawl in prod once people start using thousands of these things and re-running tagged courses.

@ormsbee
Contributor

ormsbee commented Apr 1, 2026

@bradenmacdonald:

I guess before we consider merging this as is, I'd like to know if the slowness mostly scales with taxonomy size or object tag count or both? If the slowness is only a factor on large taxonomies and it's just ~1s, I think that's OK for now. But if it's slow as the # of object tags increases or it's O(n_tags * n_object_tags) or anything like that, then it'll seem fine now and slow to a crawl in prod once people start using thousands of these things and re-running tagged courses.

I agree with this. If it's ~1s for an outlier taxonomy owing to the number of tags, it's acceptable for now, and we can figure out how to optimize later. If the time scales with the number of things tagged, this will rapidly become unusable.

@mgwozdz-unicon:

I think the rest of the big picture for this is that for the folks who create new course runs every term and have very short terms, the usage count is probably meaningless for them and not very helpful anyways "Was it used 250 times or 275 times? How do I know how much usage I should be expecting compared to what I'm seeing?" I think there could also be an option to just hide the usage count column altogether if we detect that their usages are so high that this info is irrelevant for the instance. Or to hide the usage count for people who primarily use course runs each term instead of continuously running courses.

I'd be cautious about assuming people don't care. I've been told that there's sometimes grant money riding on proving how much things get used. In any case, we'd definitely need product folks to weigh in on it.

@bradenmacdonald
Contributor

FWIW Claude analyzed the query and says it could be slow. I have not had time to validate this analysis, so take with a grain of salt.

The generated SQL (at depth=3)

SELECT ...,
  COALESCE(
    (SELECT COUNT(DISTINCT U0."object_id") AS "total_usage"
     FROM "oel_tagging_objecttag" U0
       INNER JOIN "oel_tagging_tag" U2 ON (U0."tag_id" = U2."id")
       LEFT OUTER JOIN "oel_tagging_tag" U3 ON (U2."parent_id" = U3."id")
       LEFT OUTER JOIN "oel_tagging_tag" U4 ON (U3."parent_id" = U4."id")
     WHERE U0."taxonomy_id" = 1
       AND (U0."tag_id" = outer."id"
            OR U2."parent_id" = outer."id"
            OR U3."parent_id" = outer."id"
            OR U4."parent_id" = outer."id")
    ), 0) AS "usage_count"
FROM "oel_tagging_tag"
WHERE "oel_tagging_tag"."taxonomy_id" = 1

The scaling problem: it will get meaningfully slower

The old query filtered by tag_id = OuterRef("pk") — it used the FK index on tag_id and touched only the ~2-3 matching ObjectTag rows per tag. Cost per tag: O(direct_uses).

The new query is a correlated subquery that, for each tag in the result set, does this:

  1. Range-scans all ObjectTags for the taxonomy (using the (taxonomy, object_id) index)
  2. JOINs each to the tag table + parent chain (D PK lookups per row — fast)
  3. Evaluates the OR across 4 different table aliases (can't use any index for this filter)
  4. Counts distinct object_id values

Cost per tag: O(all_ObjectTags_in_taxonomy × D)

So the total work is roughly T × O × D where:

  • T = tags in the result set
  • O = total ObjectTags for this taxonomy
  • D = max depth (small, ≤5)
Scenario costs, old (T × ~3) vs. new (T × O):
  • Small: T=100, O=50 → old 300, new 5,000
  • Medium: T=500, O=1,000 → old 1,500, new 500,000
  • Large: T=1,000, O=10,000 → old 3,000, new 10,000,000
  • Very large: T=1,000, O=100,000 → old 3,000, new 100,000,000

It scales linearly with ObjectTag count, but since it's inside a correlated subquery that runs per-tag, the multiplier is the number of tags displayed. This will be painfully slow once a popular taxonomy gets applied to thousands of courses/modules/sections.

Why the OR kills performance

The condition tag_id = X OR parent_id = X OR parent.parent_id = X spans different table aliases. The DB optimizer can't use a single index path — it has to evaluate all conditions per row after the JOINs. There's no composite index that helps here.

@ormsbee
Contributor

ormsbee commented Apr 1, 2026

Okay. So it sounds like the most straightforward thing is to do the up-front query for counts and stitch together the hierarchy counts in Python as @bradenmacdonald outlined in:

  1. Build the object counts in Python. Basically, in the /taxonomy/:n/tags/ REST API endpoint, once we've evaluated the query to load the tags (along with whatever filtering and pagination etc. may be in place), then you can do a second query to load all related ObjectTags, including tag__lineage (1 simple query, no aggregation at the query level). Then, in Python, you can group by object tags, split the lineage up into individual tags, de-duplicate with a set(), and then annotate the original query objects with the counts. This also lets you separate implicit counts from explicit counts in the API, which I think would be even better than combining them.

Does that sound right to everyone?

@bradenmacdonald
Contributor

That sounds good to me, and has the advantage of requiring no further changes to the frontend PR.

@jesperhodge

@ormsbee @bradenmacdonald the only question I have is related to memory usage.
If I'm understanding correctly, the Python solution pulls every object tag related to the taxonomy into memory and then iterates over them. How many object tag applications are we expecting? Are we good, or is there a memory problem?

@jesperhodge

jesperhodge commented Apr 2, 2026

@bradenmacdonald @ormsbee just to make sure we have considered all alternatives:

AI is suggesting Recursive CTEs as the optimal solution. However, that requires MySQL >= 8. Do we need to support older MySQL versions?

I haven't been able to evaluate the AI response in depth, so it may be incorrect.

AI suggestion:
"
Recursive CTE: Top-Down (The "Path Discovery" Strategy)
A recursive solution works in two distinct steps for each Tag:

  1. Discovery: Start at the target Tag ID and recursively find all descendant Tag IDs.
  2. Aggregation: Perform a single SELECT COUNT(...) FROM ObjectTag WHERE tag_id IN (discovered_ids).

Complexity: approximately O(N * (D + log M)).
  • D is the small cost of traversing the Tag table (usually very fast).
  • log M is a single, highly optimized B-tree index seek on the ObjectTag.tag_id column.
The benefit: instead of checking D columns for every row in the big table, you find a small list of IDs first, then use the database's most optimized tool (the primary index) to grab the counts.
"

N = Number of Tags in your main queryset
M = Total number of rows in ObjectTag
D = depth of tree
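For anyone who wants to experiment with the recursive-CTE idea, here is a small self-contained sketch against SQLite (which, like MySQL 8+, supports WITH RECURSIVE). The table and column names are simplified stand-ins for the real oel_tagging tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tag (id INTEGER PRIMARY KEY, parent_id INTEGER);
    CREATE TABLE object_tag (object_id TEXT, tag_id INTEGER);
    -- Tiny tree: tag 1 is the root; tags 2 and 3 are its children.
    INSERT INTO tag VALUES (1, NULL), (2, 1), (3, 1);
    -- Two sibling tags applied to the same object, one to a second object.
    INSERT INTO object_tag VALUES ('objA', 2), ('objA', 3), ('objB', 3);
""")

def usage_count(tag_id):
    """Count distinct objects tagged with tag_id or any of its descendants."""
    row = conn.execute(
        """
        WITH RECURSIVE descendants(id) AS (
            SELECT id FROM tag WHERE id = ?   -- discovery: start at the target
            UNION ALL
            SELECT t.id FROM tag t
            JOIN descendants d ON t.parent_id = d.id
        )
        SELECT COUNT(DISTINCT ot.object_id)   -- aggregation, de-duplicated
        FROM object_tag ot
        WHERE ot.tag_id IN (SELECT id FROM descendants)
        """,
        (tag_id,),
    ).fetchone()
    return row[0]
```

With this data, usage_count(1) returns 2 (objA and objB, counting objA only once despite two child applications), which matches the sibling de-duplication requirement.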

@tbain
Author

tbain commented Apr 3, 2026

Ultimately, via a conversation/clarification over Slack (dated 2026-04-02), we decided to address the performance concerns via in-memory Python processing rather than relying on Django joins and sub-queries or a recursive SQL/CTE implementation. Since we were seeing such an egregious performance hit, the implementation leans toward minimizing performance issues and bottlenecks where possible, at a slight cost to the straightforwardness of what exactly the code is doing. (For example, it was very expensive performance-wise to 'annotate' the Python-computed usage_count onto the QuerySet, so the QuerySet is taken in and returned as a finalized list, rather than waiting until later down the call chain to have Django do so automatically.) The logic works as expected according to unit tests and local testing, and pains were taken to remove any behavior that would lead to performance issues while still keeping the code as straightforward as possible. My AI IDE integration (Claude backing) reports, and I concur, that this implementation has a linear cost in the number of objects plus tags (i.e. O(obj + tags)), which should be much more performant than the previous implementation, which necessitated a much less performant multi-level join of indeterminate depth with a multiplicative relationship to the number of tags at each layer (something like O(tags * obj * depth)).

@tbain
Author

tbain commented Apr 6, 2026

Did some local testing with the large Lightcast taxonomy that Braden posted earlier; applied some tags from that taxonomy to an existing course on my local instance, and then watched the timings for the http://studio.local.openedx.io:8001/api/content_tagging/v1/taxonomies/2/tags/?full_depth_threshold=10000&include_counts=true call. They remained pretty much unchanged from the previous implementation on now-current main:

Various load times with Lightcast Taxonomy:
main: 83ms, 83ms, 76ms, 188ms, 73ms, 78ms (avg: 96ms)
this branch: 86ms, 82ms, 87ms, 82ms, 82ms, 85ms (avg: 84ms)

However, I'm not quite sure this is the best reproduction of the circumstances where Braden saw the 10x increase in call time, since I don't have the same tags applied the same way, to the same depth, to the same course, etc. Also, I have a brand new computer that is very fast, which is kind of throwing this off as well. I only have a handful of tags applied; if I could either get some direction from Braden on how he had applied his tags, or have Braden perform a quick check with his same setup, that would be great.

@tbain tbain requested a review from bradenmacdonald April 6, 2026 17:24
Contributor

@bradenmacdonald bradenmacdonald left a comment


Thanks! My performance concern is addressed now. I just caught a few more things but hopefully they're relatively straightforward to address.

@tbain
Author

tbain commented Apr 8, 2026

As per previous comments, I moved the _add_counts logic out to the API level, just before the return, so that the logic to add the usage_counts doesn't interfere with the QuerySet logic until after all the filtering and paging is performed. This was necessitated by performance concerns: something like a QuerySet Case/When annotation added multiplicative Big-O complexity, so we work around that by manually adding the Python-derived value to the list form of the return data rather than the QuerySet form, which doesn't support such direct manipulation.

Since we completely moved the counting logic out of the DB layers up to the API level, all of the unit tests had to move to the appropriate level and be removed from the lower-level suites. I used AI help to build out as extensive a unit test suite as I could think of, in addition to maintaining parity with the tests I had to remove from the lower levels.


@jesperhodge jesperhodge left a comment


Hey @tbain this is amazing work. I tested it for correctness and performance, and I love the very extensive tests - that's just great.

Please regard every comment from me as a nit, they are very minor, and I think should not block the PR from being merged at all. That said, it would be nice to have the improvements for the comments in here; but merging this very soon has priority due to the Verawood release.

Can you make sure to bump the version (just a patch I think, not a minor or major version bump)?

Probably squashing can be done while merging.

I'll approve, but final approval should come from Braden or Dave.


class TestTaxonomyTagsUsageCount(TestTaxonomyViewMixin):
"""
Tests the usage_count rollup logic in the taxonomy tags view

I would like the tests to act as a sort of documentation of expected behavior, in general.
For anyone not deep into the Tag Usage Count logic, this will not be understandable.

Can we add the definition I wrote here that clarifies the behavior?

"
A tag can be directly applied to an object, which can be a course, library, module, or something else.
A tag can also be indirectly applied: when some of its children are applied to an object, it is considered automatically applied.
So, if tag "Chemistry" and tag "Physics" are applied once each to different objects, their parent tag "Natural Science" is considered indirectly applied to 2 objects.
Deduplication: A tag can only be applied to a single object once. So if two child tags are applied to the same object, the parent tag is only applied to it once, because no tag can be applied to the same object twice.
"


def test_usage_count_rollup(self):
"""
Test that usage counts correctly roll up from children to parents,

I'd like the test description also to be very clear. You can orient yourself at the definition I wrote above.

I would only use the term "roll up" if it's coupled with an explanation on what rollup is ("When descendant tags are applied to an object, the parent tag is considered automatically applied to the object, which may result in an increase of the usage count value of the parent" or something like that)


Also please specify what the expected behavior here actually is (when is it "correct"?)


def test_usage_count_rollup_multi_level(self):
"""
Test that usage counts correctly roll up across more than two levels

How do we know what's "correct" vs "incorrect"? Please specify.

results = {tag["value"]: tag for tag in response.data["results"]}

# --- Verification ---
# Arthropoda: applied to obj1, obj2 -> count: 2

I love these comments that explain exactly what is expected.

assert results["Cnidaria"]["usage_count"] == 1

# Animalia: applied to obj1 (via Arthropoda, Chordata, Cnidaria) and obj2 (via Arthropoda).
# Should be 2, because it counts '1' per object regardless of how many children are applied.

Awesome explanation.


def test_usage_count_one_level_root_and_child_rollup(self):
"""
Verify that usage counts roll up even when querying only a single level.

Clarify a bit more?


def test_usage_count_sibling_and_ancestor_deduplication(self):
"""
Test deduplication when multiple children of the same parent are applied to the same object.

This description sounds good to me, very understandable.

        if depth == 1:
            # We're already returning just a single level. It will be paginated normally.
            if include_counts:
                return self._add_counts(results)

Very minor nit to be clear what this returns:

Suggested change:
-                return self._add_counts(results)
+                results_with_counts = self._add_counts(results)
+                return results_with_counts


This is a bit subjective, some might not like the suggestion. If @bradenmacdonald you like it please thumbs up this.
If Braden does not thumbs up this please do not implement it @tbain .

Contributor

I don't have a strong opinion here but I guess I do like your suggestion a bit more, as it's more clear.

        # We can load and display all the tags in this (sub)tree at once:
        self.pagination_class = DisabledTagsPagination
        if include_counts:
            return self._add_counts(results)

Suggested change:
-            return self._add_counts(results)
+            results_with_counts = self._add_counts(results)
+            return results_with_counts


This is a bit subjective, some might not like the suggestion. If @bradenmacdonald you like it please thumbs up this.
If Braden does not thumbs up this please do not implement it @tbain .

-        return results.filter(parent_value=parent_tag_value)
+        filtered_results = results.filter(parent_value=parent_tag_value)
+        if include_counts:
+            return self._add_counts(filtered_results)

Suggested change:
-            return self._add_counts(filtered_results)
+            results_with_counts = self._add_counts(results)
+            return results_with_counts


This is a bit subjective, some might not like the suggestion. If @bradenmacdonald you like it please thumbs up this.
If Braden does not thumbs up this please do not implement it @tbain .

Contributor

@bradenmacdonald bradenmacdonald left a comment

Looks great! I'll approve and merge as soon as you've addressed the include_counts comment and as many of Jesper's suggestions as you'd like.


        return filtered_results

    def _add_counts(self, tag_data: TagDataQuerySet) -> TagDataQuerySet:
Contributor

Nit: this could be a top-level function in src/openedx_tagging/api.py.

Nit: Also, along the lines of @jesperhodge 's comments, it would be good to explain the de-duplication here or at least say "refer to test case X for examples and details". (Or put clear details here and have the test cases refer back here).
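A top-level variant along the lines of that nit might look like the sketch below. The function name, module placement, and the row fields (`value`, `parent_value`, `object_ids`) are all hypothetical stand-ins for the real `TagDataQuerySet` schema, and the walk-up-the-ancestors approach is just one way to implement the rolled-up, sibling-deduplicated count:

```python
def add_usage_counts(tag_rows: list[dict]) -> list[dict]:
    """Annotate each tag row with a rolled-up, deduplicated `usage_count`.

    A tag's count is the number of distinct objects tagged with it or any
    descendant; an object reached via several sibling children still adds
    only 1 to each ancestor.
    """
    parents = {row["value"]: row.get("parent_value") for row in tag_rows}
    objects: dict[str, set] = {row["value"]: set() for row in tag_rows}
    for row in tag_rows:
        node = row["value"]
        while node is not None:
            # setdefault tolerates ancestors outside the current page of rows
            objects.setdefault(node, set()).update(row.get("object_ids", ()))
            node = parents.get(node)
    for row in tag_rows:
        row["usage_count"] = len(objects[row["value"]])
    return tag_rows
```

Keeping this in-memory matches the PR's stated approach of applying the count logic after the QuerySet has been materialized.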

         self,
         search_term: str | None,
-        include_counts: bool,
+        include_counts: bool,  # pylint: disable=unused-argument
Contributor

@bradenmacdonald bradenmacdonald Apr 10, 2026

Why are we keeping these include_counts parameters if they don't work? I think you can just remove them (from here and from the function that calls these ones).

There is no code referencing include_counts in openedx-platform as far as I can tell, so even though it's a breaking change, it's fine to include such changes in our 0.x version of openedx-core.

Author

Yeah, I wasn't sure about keeping these or not. I know it's probably a "you aren't going to need it" type thing, but for completeness' sake, I felt like it was worth continuing them down to this level since they are a param in the query. I'm perfectly fine removing them though; I was 50/50 either way.


Labels

open-source-contribution PR author is not from Axim or 2U

Projects

Status: In Eng Review


8 participants