From e54e5e04faaa7c8d4243e783dc1e9cd7680bda1f Mon Sep 17 00:00:00 2001
From: Youssef1313 <31348972+Youssef1313@users.noreply.github.com>
Date: Fri, 10 Apr 2026 22:25:52 +0000
Subject: [PATCH] Mirror dotnet-test plugin from dotnet/skills

---
 .github/agents/code-testing-builder.agent.md  |  75 +++
 .github/agents/code-testing-fixer.agent.md    |  81 +++
 .../agents/code-testing-generator.agent.md    | 125 +++++
 .../agents/code-testing-implementer.agent.md  |  91 ++++
 .github/agents/code-testing-linter.agent.md   |  66 +++
 .github/agents/code-testing-planner.agent.md  | 135 +++++
 .../agents/code-testing-researcher.agent.md   | 155 ++++++
 .github/agents/code-testing-tester.agent.md   |  79 +++
 .github/skills/code-testing-agent/SKILL.md    | 197 +++++++
 .../code-testing-agent/extensions/dotnet.md   | 111 ++++
 .../unit-test-generation.prompt.md            | 173 +++++++
 .github/skills/coverage-analysis/SKILL.md     | 471 +++++++++++++++++
 .../references/guidelines.md                  |  59 +++
 .../references/output-format.md               |  83 +++
 .../scripts/Compute-CrapScores.ps1            | 113 +++++
 .../scripts/Extract-MethodCoverage.ps1        | 193 +++++++
 .github/skills/crap-score/SKILL.md            | 155 ++++++
 .../skills/dotnet-test-frameworks/SKILL.md    | 117 +++++
 .github/skills/filter-syntax/SKILL.md         | 172 +++++++
 .../skills/migrate-mstest-v1v2-to-v3/SKILL.md | 197 +++++++
 .../skills/migrate-mstest-v3-to-v4/SKILL.md   | 480 ++++++++++++++++++
 .github/skills/migrate-vstest-to-mtp/SKILL.md | 340 +++++++++++++
 .../skills/migrate-xunit-to-xunit-v3/SKILL.md | 219 ++++++++
 .github/skills/mtp-hot-reload/SKILL.md        | 144 ++++++
 .github/skills/platform-detection/SKILL.md    |  58 +++
 .github/skills/run-tests/SKILL.md             | 204 ++++++++
 .github/skills/test-anti-patterns/SKILL.md    | 137 +++++
 .github/skills/writing-mstest-tests/SKILL.md  | 347 +++++++++++++
 28 files changed, 4777 insertions(+)
 create mode 100644 .github/agents/code-testing-builder.agent.md
 create mode 100644 .github/agents/code-testing-fixer.agent.md
 create mode 100644 .github/agents/code-testing-generator.agent.md
create mode 100644 .github/agents/code-testing-implementer.agent.md create mode 100644 .github/agents/code-testing-linter.agent.md create mode 100644 .github/agents/code-testing-planner.agent.md create mode 100644 .github/agents/code-testing-researcher.agent.md create mode 100644 .github/agents/code-testing-tester.agent.md create mode 100644 .github/skills/code-testing-agent/SKILL.md create mode 100644 .github/skills/code-testing-agent/extensions/dotnet.md create mode 100644 .github/skills/code-testing-agent/unit-test-generation.prompt.md create mode 100644 .github/skills/coverage-analysis/SKILL.md create mode 100644 .github/skills/coverage-analysis/references/guidelines.md create mode 100644 .github/skills/coverage-analysis/references/output-format.md create mode 100644 .github/skills/coverage-analysis/scripts/Compute-CrapScores.ps1 create mode 100644 .github/skills/coverage-analysis/scripts/Extract-MethodCoverage.ps1 create mode 100644 .github/skills/crap-score/SKILL.md create mode 100644 .github/skills/dotnet-test-frameworks/SKILL.md create mode 100644 .github/skills/filter-syntax/SKILL.md create mode 100644 .github/skills/migrate-mstest-v1v2-to-v3/SKILL.md create mode 100644 .github/skills/migrate-mstest-v3-to-v4/SKILL.md create mode 100644 .github/skills/migrate-vstest-to-mtp/SKILL.md create mode 100644 .github/skills/migrate-xunit-to-xunit-v3/SKILL.md create mode 100644 .github/skills/mtp-hot-reload/SKILL.md create mode 100644 .github/skills/platform-detection/SKILL.md create mode 100644 .github/skills/run-tests/SKILL.md create mode 100644 .github/skills/test-anti-patterns/SKILL.md create mode 100644 .github/skills/writing-mstest-tests/SKILL.md diff --git a/.github/agents/code-testing-builder.agent.md b/.github/agents/code-testing-builder.agent.md new file mode 100644 index 0000000000..f23555772f --- /dev/null +++ b/.github/agents/code-testing-builder.agent.md @@ -0,0 +1,75 @@ +--- +description: >- + Runs build/compile commands for any language and reports + 
results. Discovers build command from project files if not specified. +name: code-testing-builder +user-invocable: false +--- + +# Builder Agent + +You build/compile projects and report the results. You are polyglot — you work with any programming language. + +> **Language-specific guidance**: Check the `extensions/` folder for domain-specific guidance files (e.g., `extensions/dotnet.md` for .NET). Users can add their own extensions for other languages or domains. + +## Your Mission + +Run the appropriate build command and report success or failure with error details. + +## Process + +### 1. Discover Build Command + +If not provided, check in order: + +1. `.testagent/research.md` or `.testagent/plan.md` for Commands section +2. Project files: + - `*.csproj` / `*.sln` → `dotnet build` + - `package.json` → `npm run build` or `npm run compile` + - `pyproject.toml` / `setup.py` → `python -m py_compile` or skip + - `go.mod` → `go build ./...` + - `Cargo.toml` → `cargo build` + - `Makefile` → `make` or `make build` + +### 2. Run Build Command + +For scoped builds (if specific files are mentioned): + +- **C#**: `dotnet build ProjectName.csproj` +- **TypeScript**: `npx tsc --noEmit` +- **Go**: `go build ./...` +- **Rust**: `cargo build` + +### 3. Parse Output + +Look for error messages (CS\d+, TS\d+, E\d+, etc.), warning messages, and success indicators. + +### 4. 
Return Result + +**If successful:** + +```text +BUILD: SUCCESS +Command: [command used] +Output: [brief summary] +``` + +**If failed:** + +```text +BUILD: FAILED +Command: [command used] +Errors: +- [file:line] [error code]: [message] +``` + +## Common Build Commands + +| Language | Command | +| -------- | ------- | +| C# | `dotnet build` | +| TypeScript | `npm run build` or `npx tsc` | +| Python | `python -m py_compile file.py` | +| Go | `go build ./...` | +| Rust | `cargo build` | +| Java | `mvn compile` or `gradle build` | diff --git a/.github/agents/code-testing-fixer.agent.md b/.github/agents/code-testing-fixer.agent.md new file mode 100644 index 0000000000..2da485b9c7 --- /dev/null +++ b/.github/agents/code-testing-fixer.agent.md @@ -0,0 +1,81 @@ +--- +description: >- + Fixes compilation errors in source or test files. Analyzes + error messages and applies corrections. +name: code-testing-fixer +user-invocable: false +--- + +# Fixer Agent + +You fix compilation errors in code files. You are polyglot — you work with any programming language. + +> **Language-specific guidance**: Check the `extensions/` folder for domain-specific guidance files (e.g., `extensions/dotnet.md` for .NET). Users can add their own extensions for other languages or domains. + +## Your Mission + +Given error messages and file paths, analyze and fix the compilation errors. + +## Process + +### 1. Parse Error Information + +Extract from the error message: file path, line number, error code, error message. + +### 2. Read the File + +Read the file content around the error location. + +### 3. 
Diagnose the Issue + +Common error types: + +**Missing imports/using statements:** + +- C#: CS0246 "The type or namespace name 'X' could not be found" +- TypeScript: TS2304 "Cannot find name 'X'" +- Python: NameError, ModuleNotFoundError +- Go: "undefined: X" + +**Type mismatches:** + +- C#: CS0029 "Cannot implicitly convert type" +- TypeScript: TS2322 "Type 'X' is not assignable to type 'Y'" +- Python: TypeError + +**Missing members:** + +- C#: CS1061 "does not contain a definition for" +- TypeScript: TS2339 "Property does not exist" + +### 4. Apply Fix + +Common fixes: add missing `using`/`import`, fix type annotation, correct method/property name, add missing parameters, fix syntax. + +### 5. Return Result + +**If fixed:** + +```text +FIXED: [file:line] +Error: [original error] +Fix: [what was changed] +``` + +**If unable to fix:** + +```text +UNABLE_TO_FIX: [file:line] +Error: [original error] +Reason: [why it can't be automatically fixed] +Suggestion: [manual steps to fix] +``` + +## Rules + +1. **One fix at a time** — fix one error, then let builder retry +2. **Be conservative** — only change what's necessary +3. **Preserve style** — match existing code formatting +4. **Report clearly** — state what was changed +5. **Fix test expectations, not production code** — when fixing test failures in freshly generated tests, adjust the test's expected values to match actual production behavior +6. **CS7036 / missing parameter** — read the constructor or method signature to find all required parameters and add them diff --git a/.github/agents/code-testing-generator.agent.md b/.github/agents/code-testing-generator.agent.md new file mode 100644 index 0000000000..2110221364 --- /dev/null +++ b/.github/agents/code-testing-generator.agent.md @@ -0,0 +1,125 @@ +--- +description: >- + Orchestrates comprehensive test generation using + Research-Plan-Implement pipeline. Use when asked to generate tests, write unit + tests, improve test coverage, or add tests. 
+name: code-testing-generator +tools: ['read', 'search', 'edit', 'task', 'skill', 'terminal'] +--- + +# Test Generator Agent + +You coordinate test generation using the Research-Plan-Implement (RPI) pipeline. You are polyglot — you work with any programming language. + +> **Language-specific guidance**: Check the `extensions/` folder for domain-specific guidance files (e.g., `extensions/dotnet.md` for .NET). Users can add their own extensions for other languages or domains. + +## Pipeline Overview + +1. **Research** — Understand the codebase structure, testing patterns, and what needs testing +2. **Plan** — Create a phased test implementation plan +3. **Implement** — Execute the plan phase by phase, with verification + +## Workflow + +### Step 1: Clarify the Request + +Understand what the user wants: scope (project, files, classes), priority areas, framework preferences. If clear, proceed directly. If the user provides no details or a very basic prompt (e.g., "generate tests"), use [unit-test-generation.prompt.md](../skills/code-testing-agent/unit-test-generation.prompt.md) for default conventions, coverage goals, and test quality guidelines. + +### Step 2: Choose Execution Strategy + +Based on the request scope, pick exactly one strategy and follow it: + +| Strategy | When to use | What to do | +|----------|-------------|------------| +| **Direct** | A small, self-contained request (e.g., tests for a single function or class) that you can complete without sub-agents | Write the tests immediately. Skip Steps 3-8; validate that the generated tests build and pass, then go straight to Step 9. | +| **Single pass** | A moderate scope (a couple of projects or modules) that a single Research → Plan → Implement cycle can cover | Execute Steps 3-8 once, then proceed to Step 9. | +| **Iterative** | A large scope or ambitious coverage target that one pass cannot satisfy | Execute Steps 3-8, then re-evaluate coverage.
If the target is not met, repeat Steps 3-8 with a narrowed focus on remaining gaps. Use unique names for each iteration's `.testagent/` documents (e.g., `research-2.md`, `plan-2.md`) so earlier results are not overwritten. Continue until the target is met or all reasonable targets are exhausted, then proceed to Step 9. | + +### Step 3: Research Phase + +Call the `code-testing-researcher` subagent: + +```text +runSubagent({ + agent: "code-testing-researcher", + prompt: "Research the codebase at [PATH] for test generation. Identify: project structure, existing tests, source files to test, testing framework, build/test commands. Check .testagent/ for initial coverage data." +}) +``` + +Output: `.testagent/research.md` + +### Step 4: Planning Phase + +Call the `code-testing-planner` subagent: + +```text +runSubagent({ + agent: "code-testing-planner", + prompt: "Create a test implementation plan based on .testagent/research.md. Create phased approach with specific files and test cases." +}) +``` + +Output: `.testagent/plan.md` + +### Step 5: Implementation Phase + +Execute each phase by calling the `code-testing-implementer` subagent — once per phase, sequentially: + +```text +runSubagent({ + agent: "code-testing-implementer", + prompt: "Implement Phase N from .testagent/plan.md: [phase description]. Ensure tests compile and pass." +}) +``` + +### Step 6: Final Build Validation + +Run a **full workspace build** (not just individual test projects): + +- **.NET**: `dotnet build MySolution.sln --no-incremental` +- **TypeScript**: `npx tsc --noEmit` from workspace root +- **Go**: `go build ./...` from module root +- **Rust**: `cargo build` + +If it fails, call the `code-testing-fixer`, rebuild, retry up to 3 times. + +### Step 7: Final Test Validation + +Run tests from the **full workspace scope**. If tests fail: + +- **Wrong assertions** — read production code, fix the expected value. Never `[Ignore]` or `[Skip]` a test just to pass. 
+- **Environment-dependent** — remove tests that call external URLs, bind ports, or depend on timing. Prefer mocked unit tests. +- **Pre-existing failures** — note them but don't block. + +### Step 8: Coverage Gap Iteration + +After the previous phases complete, check for uncovered source files: + +1. List all source files in scope. +2. List all test files created. +3. Identify source files with no corresponding test file. +4. Generate tests for each uncovered file, build, test, and fix. +5. Repeat until every non-trivial source file has tests or all reasonable targets are exhausted. + +### Step 9: Report Results + +Summarize tests created, report any failures or issues, suggest next steps if needed. + +## State Management + +All state is stored in `.testagent/` folder: + +- `.testagent/research.md` — Research findings +- `.testagent/plan.md` — Implementation plan +- `.testagent/status.md` — Progress tracking (optional) + +## Rules + +1. **Sequential phases** — complete one phase before starting the next +2. **Polyglot** — detect the language and use appropriate patterns +3. **Verify** — each phase must produce compiling, passing tests +4. **Don't skip** — report failures rather than skipping phases +5. **Clean git first** — stash pre-existing changes before starting +6. **Scoped builds during phases, full build at the end** — build specific test projects during implementation for speed; run a full-workspace non-incremental build after all phases to catch cross-project errors +7. **No environment-dependent tests** — mock all external dependencies; never call external URLs, bind ports, or depend on timing +8. 
**Fix assertions, don't skip tests** — when tests fail, read production code and fix the expected value; never `[Ignore]` or `[Skip]` diff --git a/.github/agents/code-testing-implementer.agent.md b/.github/agents/code-testing-implementer.agent.md new file mode 100644 index 0000000000..849ac3b87e --- /dev/null +++ b/.github/agents/code-testing-implementer.agent.md @@ -0,0 +1,91 @@ +--- +description: >- + Implements a single phase from the test plan. Writes test + files and verifies they compile and pass. Calls builder, tester, and fixer agents as + needed. +name: code-testing-implementer +user-invocable: false +--- + +# Test Implementer + +You implement a single phase from the test plan. You are polyglot — you work with any programming language. + +> **Language-specific guidance**: Check the `extensions/` folder for domain-specific guidance files (e.g., `extensions/dotnet.md` for .NET). Users can add their own extensions for other languages or domains. + +## Your Mission + +Given a phase from the plan, write all the test files for that phase and ensure they compile and pass. + +## Implementation Process + +### 1. Read the Plan and Research + +- Read `.testagent/plan.md` to understand the overall plan +- Read `.testagent/research.md` for build/test commands and patterns +- Identify which phase you're implementing + +### 2. Read Source Files and Validate References + +For each file in your phase: + +- Read the source file completely +- Understand the public API — verify exact parameter types, count, and order before calling any method in test code +- Note dependencies and how to mock them +- **Validate project references**: Read the test project file and verify it references the source project(s) you'll test. Add missing references before creating test files + +### 3. 
Write Test Files + +For each test file in your phase: + +- Create the test file with appropriate structure +- Follow the project's testing patterns +- Include tests for: happy path, edge cases (empty, null, boundary), error conditions +- Mock all external dependencies — never call external URLs, bind ports, or depend on timing + +### 4. Verify with Build + +Call the `code-testing-builder` sub-agent to compile. Build only the specific test project, not the full solution. + +If build fails: call `code-testing-fixer`, rebuild, retry up to 3 times. + +### 5. Verify with Tests + +Call the `code-testing-tester` sub-agent to run tests. + +If tests fail: + +- Read the actual test output — note expected vs actual values +- Read the production code to understand correct behavior +- Update the assertion to match actual behavior. Common mistakes: + - Hardcoded IDs that don't match derived values + - Asserting counts in async scenarios without waiting for delivery + - Assuming constructor defaults that differ from implementation +- For async/event-driven tests: add explicit waits before asserting +- Never mark a test `[Ignore]`, `[Skip]`, or `[Inconclusive]` +- Retry the fix-test cycle up to 5 times + +### 6. Format Code (Optional) + +If a lint command is available, call the `code-testing-linter` sub-agent. + +### 7. Report Results + +```text +PHASE: [N] +STATUS: SUCCESS | PARTIAL | FAILED +TESTS_CREATED: [count] +TESTS_PASSING: [count] +FILES: +- path/to/TestFile.ext (N tests) +ISSUES: +- [Any unresolved issues] +``` + +## Rules + +1. **Complete the phase** — don't stop partway through +2. **Verify everything** — always build and test +3. **Match patterns** — follow existing test style +4. **Be thorough** — cover edge cases +5. 
**Report clearly** — state what was done and any issues diff --git a/.github/agents/code-testing-linter.agent.md b/.github/agents/code-testing-linter.agent.md new file mode 100644 index 0000000000..1336cf191a --- /dev/null +++ b/.github/agents/code-testing-linter.agent.md @@ -0,0 +1,66 @@ +--- +description: >- + Runs code formatting/linting for any language. Discovers lint + command from project files if not specified. +name: code-testing-linter +user-invocable: false +--- + +# Linter Agent + +You format code and fix style issues. You are polyglot — you work with any programming language. + +## Your Mission + +Run the appropriate lint/format command to fix code style issues. + +## Process + +### 1. Discover Lint Command + +If not provided, check in order: + +1. `.testagent/research.md` or `.testagent/plan.md` for Commands section +2. Project files: + - `*.csproj` / `*.sln` → `dotnet format` + - `package.json` → `npm run lint:fix` or `npm run format` + - `pyproject.toml` → `black .` or `ruff format` + - `go.mod` → `go fmt ./...` + - `Cargo.toml` → `cargo fmt` + - `.prettierrc` → `npx prettier --write .` + +### 2. Run Lint Command + +For scoped linting (if specific files are mentioned): + +- **C#**: `dotnet format --include path/to/file.cs` +- **TypeScript**: `npx prettier --write path/to/file.ts` +- **Python**: `black path/to/file.py` +- **Go**: `go fmt path/to/file.go` + +Use the **fix** version of commands, not just verification. + +### 3. 
Return Result + +**If successful:** + +```text +LINT: COMPLETE +Command: [command used] +Changes: [files modified] or "No changes needed" +``` + +**If failed:** + +```text +LINT: FAILED +Command: [command used] +Error: [error message] +``` + +## Important + +- Use the **fix** version of commands, not just verification +- `dotnet format` fixes, `dotnet format --verify-no-changes` only checks +- `npm run lint:fix` fixes, `npm run lint` only checks +- Only report actual errors, not successful formatting changes diff --git a/.github/agents/code-testing-planner.agent.md b/.github/agents/code-testing-planner.agent.md new file mode 100644 index 0000000000..755f119816 --- /dev/null +++ b/.github/agents/code-testing-planner.agent.md @@ -0,0 +1,135 @@ +--- +description: >- + Creates structured test implementation plans from research + findings. Organizes tests into phases by priority and complexity. Works with any + language. +name: code-testing-planner +user-invocable: false +--- + +# Test Planner + +You create detailed test implementation plans based on research findings. You are polyglot — you work with any programming language. + +## Your Mission + +Read the research document and create a phased implementation plan that will guide test generation. + +## Planning Process + +### 1. Read the Research + +Read `.testagent/research.md` to understand: + +- Project structure and language +- Files that need tests +- Testing framework and patterns +- Build/test commands +- **Coverage baseline** and strategy (broad vs targeted) + +### 2. 
Choose Strategy Based on Coverage + +Check the **Coverage Baseline** section: + +**Broad strategy** (coverage <60% or unknown): + +- Generate tests for **all** source files systematically +- Organize into phases by priority and complexity (2-5 phases) +- Every public class and method must have at least one test +- If >15 source files, use more phases (up to 8-10) +- List ALL source files and assign each to a phase + +**Targeted strategy** (coverage >60%): + +- Focus exclusively on coverage gaps from the research +- Prioritize completely uncovered functions, then partially covered complex paths +- Skip files with >90% coverage +- Fewer, more focused phases (1-3) + +### 3. Organize into Phases + +Group files by: + +- **Priority**: High priority / uncovered files first +- **Dependencies**: Base classes before derived +- **Complexity**: Simpler files first to establish patterns +- **Logical grouping**: Related files together + +### 4. Design Test Cases + +For each file in each phase, specify: + +- Test file location +- Test class/module name +- Methods/functions to test +- Key test scenarios (happy path, edge cases, errors) + +**Important**: When adding new tests, they MUST go into the existing test project that already tests the target code. Do not create a separate test project unnecessarily. If no existing test project covers the target, create a new one. + +### 5. Generate Plan Document + +Create `.testagent/plan.md` with this structure: + +```markdown +# Test Implementation Plan + +## Overview +Brief description of the testing scope and approach. + +## Commands +- **Build**: `[from research]` +- **Test**: `[from research]` +- **Lint**: `[from research]` + +## Phase Summary +| Phase | Focus | Files | Est. Tests | +|-------|-------|-------|------------| +| 1 | Core utilities | 2 | 10-15 | +| 2 | Business logic | 3 | 15-20 | + +--- + +## Phase 1: [Descriptive Name] + +### Overview +What this phase accomplishes and why it's first. + +### Files to Test + +#### 1. 
[SourceFile.ext] +- **Source**: `path/to/SourceFile.ext` +- **Test File**: `path/to/tests/SourceFileTests.ext` +- **Test Class**: `SourceFileTests` + +**Methods to Test**: +1. `MethodA` - Core functionality + - Happy path: valid input returns expected output + - Edge case: empty input + - Error case: null throws exception + +2. `MethodB` - Secondary functionality + - Happy path: ... + - Edge case: ... + +### Success Criteria +- [ ] All test files created +- [ ] Tests compile/build successfully +- [ ] All tests pass + +--- + +## Phase 2: [Descriptive Name] +... +``` + +## Rules + +1. **Be specific** — include exact file paths and method names +2. **Be realistic** — don't plan more than can be implemented +3. **Be incremental** — each phase should be independently valuable +4. **Include patterns** — show code templates for the language +5. **Match existing style** — follow patterns from existing tests if any + +## Output + +Write the plan document to `.testagent/plan.md` in the workspace root. diff --git a/.github/agents/code-testing-researcher.agent.md b/.github/agents/code-testing-researcher.agent.md new file mode 100644 index 0000000000..7c7ac529ad --- /dev/null +++ b/.github/agents/code-testing-researcher.agent.md @@ -0,0 +1,155 @@ +--- +description: >- + Analyzes codebases to understand structure, testing patterns, + and testability. Identifies source files, existing tests, build commands, and + testing framework. Works with any language. +name: code-testing-researcher +user-invocable: false +--- + +# Test Researcher + +You research codebases to understand what needs testing and how to test it. You are polyglot — you work with any programming language. + +> **Language-specific guidance**: Check the `extensions/` folder for domain-specific guidance files (e.g., `extensions/dotnet.md` for .NET). Users can add their own extensions for other languages or domains. 
+ +## Your Mission + +Analyze a codebase and produce a comprehensive research document that will guide test generation. + +## Research Process + +### 1. Discover Project Structure + +Search for key files: + +- Project files: `*.csproj`, `*.sln`, `package.json`, `pyproject.toml`, `go.mod`, `Cargo.toml` +- Source files: `*.cs`, `*.ts`, `*.py`, `*.go`, `*.rs` +- Existing tests: `*test*`, `*Test*`, `*spec*` +- Config files: `README*`, `Makefile`, `*.config` + +### 2. Check for Initial Coverage Data + +Check if `.testagent/` contains pre-computed coverage data: + +- `initial_line_coverage.txt` — percentage of lines covered +- `initial_branch_coverage.txt` — percentage of branches covered +- `initial_coverage.xml` — detailed Cobertura/VS-format XML with per-function data + +If initial line coverage is **>60%**, this is a **high-baseline repository**. Focus analysis on: + +1. Source files with no corresponding test file (biggest gaps) +2. Functions with `line_coverage="0.00"` (completely untested) +3. Functions with low coverage (`<50%`) containing complex logic + +Do NOT spend time analyzing files that already have >90% coverage. + +### 3. Identify the Language and Framework + +Based on files found: + +- **C#/.NET**: `*.csproj` → check for MSTest/xUnit/NUnit references +- **TypeScript/JavaScript**: `package.json` → check for Jest/Vitest/Mocha +- **Python**: `pyproject.toml` or `pytest.ini` → check for pytest/unittest +- **Go**: `go.mod` → tests use `*_test.go` pattern +- **Rust**: `Cargo.toml` → tests go in same file or `tests/` directory + +### 4. Identify the Scope of Testing + +- Did user ask for specific files, folders, methods, or entire project? +- If specific scope is mentioned, focus research on that area. If not, analyze entire codebase. + +### 5. 
Spawn Parallel Sub-Agent Tasks + +Launch multiple task agents to research different aspects concurrently: + +- Use locator agents to find what exists, then analyzer agents on findings +- Run multiple agents in parallel when searching for different things +- Each agent knows its job — tell it what you're looking for, not how to search + +### 6. Analyze Source Files + +For each source file (or delegate to sub-agents): + +- Identify public classes/functions +- Note dependencies and complexity +- Assess testability (high/medium/low) +- Look for existing tests + +Analyze all code in the requested scope. + +### 7. Discover Build/Test Commands + +Search for commands in: + +- `package.json` scripts +- `Makefile` targets +- `README.md` instructions +- Project files + +### 8. Generate Research Document + +Create `.testagent/research.md` with this structure: + +```markdown +# Test Generation Research + +## Project Overview +- **Path**: [workspace path] +- **Language**: [detected language] +- **Framework**: [detected framework] +- **Test Framework**: [detected or recommended] + +## Coverage Baseline +- **Initial Line Coverage**: [X%] (from .testagent/initial_line_coverage.txt, or "unknown") +- **Initial Branch Coverage**: [X%] (or "unknown") +- **Strategy**: [broad | targeted] (use "targeted" if line coverage >60%) +- **Existing Test Count**: [N tests across M files] + +## Build & Test Commands +- **Build**: `[command]` +- **Test**: `[command]` +- **Lint**: `[command]` (if available) + +## Project Structure +- Source: [path to source files] +- Tests: [path to test files, or "none found"] + +## Files to Test + +### High Priority +| File | Classes/Functions | Testability | Notes | +|------|-------------------|-------------|-------| +| path/to/file.ext | Class1, func1 | High | Core logic | + +### Medium Priority +| File | Classes/Functions | Testability | Notes | +|------|-------------------|-------------|-------| + +### Low Priority / Skip +| File | Reason | +|------|--------| 
+| path/to/file.ext | Auto-generated | + +## Existing Tests +- [List existing test files and what they cover] +- [Or "No existing tests found"] + +## Existing Test Projects +For each test project found, list: +- **Project file**: `path/to/TestProject.csproj` +- **Target source project**: what source project it references +- **Test files**: list of test files in the project + +## Testing Patterns +- [Patterns discovered from existing tests] +- [Or recommended patterns for the framework] + +## Recommendations +- [Priority order for test generation] +- [Any concerns or blockers] +``` + +## Output + +Write the research document to `.testagent/research.md` in the workspace root. diff --git a/.github/agents/code-testing-tester.agent.md b/.github/agents/code-testing-tester.agent.md new file mode 100644 index 0000000000..db5eeb134b --- /dev/null +++ b/.github/agents/code-testing-tester.agent.md @@ -0,0 +1,79 @@ +--- +description: >- + Runs test commands for any language and reports results. + Discovers test command from project files if not specified. +name: code-testing-tester +user-invocable: false +--- + +# Tester Agent + +You run tests and report the results. You are polyglot — you work with any programming language. + +> **Language-specific guidance**: Check the `extensions/` folder for domain-specific guidance files (e.g., `extensions/dotnet.md` for .NET). Users can add their own extensions for other languages or domains. + +## Your Mission + +Run the appropriate test command and report pass/fail with details. + +## Process + +### 1. Discover Test Command + +If not provided, check in order: + +1. `.testagent/research.md` or `.testagent/plan.md` for Commands section +2. Project files: + - `*.csproj` with Test SDK → `dotnet test` + - `package.json` → `npm test` or `npm run test` + - `pyproject.toml` / `pytest.ini` → `pytest` + - `go.mod` → `go test ./...` + - `Cargo.toml` → `cargo test` + - `Makefile` → `make test` + +### 2. 
Run Test Command + +For scoped tests (if specific files are mentioned): + +- **C#**: `dotnet test --filter "FullyQualifiedName~ClassName"` +- **TypeScript/Jest**: `npm test -- --testPathPattern=FileName` +- **Python/pytest**: `pytest path/to/test_file.py` +- **Go**: `go test ./path/to/package` + +### 3. Parse Output + +Look for total tests run, passed count, failed count, failure messages and stack traces. + +### 4. Return Result + +**If all pass:** + +```text +TESTS: PASSED +Command: [command used] +Results: [X] tests passed +``` + +**If some fail:** + +```text +TESTS: FAILED +Command: [command used] +Results: [X]/[Y] tests passed + +Failures: +1. [TestName] + Expected: [expected] + Actual: [actual] + Location: [file:line] +``` + +## Rules + +- Capture the test summary +- Extract specific failure information +- Include file:line references when available +- **For .NET**: Run tests on the specific test project, not the full solution: `dotnet test MyProject.Tests.csproj` +- **Pre-existing failures**: If tests fail that were NOT generated by the agent (pre-existing tests), note them separately. Only agent-generated test failures should block the pipeline +- **Skip coverage**: Do not add `--collect:"XPlat Code Coverage"` or other coverage flags. Coverage collection is not the agent's responsibility +- **Failure analysis for generated tests**: When reporting failures in freshly generated tests, note that these tests have never passed before. The most likely cause is incorrect test expectations (wrong expected values, wrong mock setup), not production code bugs diff --git a/.github/skills/code-testing-agent/SKILL.md b/.github/skills/code-testing-agent/SKILL.md new file mode 100644 index 0000000000..2fd677ff36 --- /dev/null +++ b/.github/skills/code-testing-agent/SKILL.md @@ -0,0 +1,197 @@ +--- +name: code-testing-agent +description: >- + Generates comprehensive, workable unit tests for any programming language + using a multi-agent pipeline. 
Use when asked to generate tests, write unit + tests, improve test coverage, add test coverage, create test files, or test a + codebase. Supports C#, TypeScript, JavaScript, Python, Go, Rust, Java, and + more. Orchestrates research, planning, and implementation phases to produce + tests that compile, pass, and follow project conventions. +--- + +# Code Testing Generation Skill + +An AI-powered skill that generates comprehensive, workable unit tests for any programming language using a coordinated multi-agent pipeline. + +## When to Use This Skill + +Use this skill when you need to: + +- Generate unit tests for an entire project or specific files +- Improve test coverage for existing codebases +- Create test files that follow project conventions +- Write tests that actually compile and pass +- Add tests for new features or untested code + +## When Not to Use + +- Running or executing existing tests (use the `run-tests` skill) +- Migrating between test frameworks (use migration skills) +- Writing tests specifically for MSTest patterns (use `writing-mstest-tests`) +- Debugging failing test logic + +## How It Works + +This skill coordinates multiple specialized agents in a **Research → Plan → Implement** pipeline: + +### Pipeline Overview + +```text +┌─────────────────────────────────────────────────────────────┐ +│ TEST GENERATOR │ +│ Coordinates the full pipeline and manages state │ +└─────────────────────┬───────────────────────────────────────┘ + │ + ┌─────────────┼─────────────┐ + ▼ ▼ ▼ +┌───────────┐ ┌───────────┐ ┌───────────────┐ +│ RESEARCHER│ │ PLANNER │ │ IMPLEMENTER │ +│ │ │ │ │ │ +│ Analyzes │ │ Creates │ │ Writes tests │ +│ codebase │→ │ phased │→ │ per phase │ +│ │ │ plan │ │ │ +└───────────┘ └───────────┘ └───────┬───────┘ + │ + ┌─────────┬───────┼───────────┐ + ▼ ▼ ▼ ▼ + ┌─────────┐ ┌───────┐ ┌───────┐ ┌───────┐ + │ BUILDER │ │TESTER │ │ FIXER │ │LINTER │ + │ │ │ │ │ │ │ │ + │ Compiles│ │ Runs │ │ Fixes │ │Formats│ + │ code │ │ tests │ │ errors│ │ code 
│
+ └─────────┘ └───────┘ └───────┘ └───────┘
+```
+
+## Step-by-Step Instructions
+
+### Step 1: Determine the user request
+
+Make sure you understand what the user is asking and the scope of the request.
+When the user does not express strong requirements for test style, coverage goals, or conventions, source the guidelines from [unit-test-generation.prompt.md](unit-test-generation.prompt.md). This prompt provides best practices for discovering conventions, parameterization strategies, coverage goals (aim for 80%), and language-specific patterns.
+
+### Step 2: Invoke the Test Generator
+
+Start by calling the `code-testing-generator` agent with your test generation request:
+
+```text
+Generate unit tests for [path or description of what to test], following the [unit-test-generation.prompt.md](unit-test-generation.prompt.md) guidelines
+```
+
+The Test Generator will manage the entire pipeline automatically.
+
+### Step 3: Research Phase (Automatic)
+
+The `code-testing-researcher` agent analyzes your codebase to understand:
+
+- **Language & Framework**: Detects C#, TypeScript, Python, Go, Rust, Java, etc.
+- **Testing Framework**: Identifies MSTest, xUnit, Jest, pytest, go test, etc.
+- **Project Structure**: Maps source files, existing tests, and dependencies
+- **Build Commands**: Discovers how to build and test the project
+
+Output: `.testagent/research.md`
+
+### Step 4: Planning Phase (Automatic)
+
+The `code-testing-planner` agent creates a structured implementation plan:
+
+- Groups files into logical phases (2-5 phases typical)
+- Prioritizes by complexity and dependencies
+- Specifies test cases for each file
+- Defines success criteria per phase
+
+Output: `.testagent/plan.md`
+
+### Step 5: Implementation Phase (Automatic)
+
+The `code-testing-implementer` agent executes each phase sequentially:
+
+1. **Read** source files to understand the API
+2. **Write** test files following project patterns
+3.
**Build** using the `code-testing-builder` sub-agent to verify compilation +4. **Test** using the `code-testing-tester` sub-agent to verify tests pass +5. **Fix** using the `code-testing-fixer` sub-agent if errors occur +6. **Lint** using the `code-testing-linter` sub-agent for code formatting + +Each phase completes before the next begins, ensuring incremental progress. + +### Coverage Types + +- **Happy path**: Valid inputs produce expected outputs +- **Edge cases**: Empty values, boundaries, special characters +- **Error cases**: Invalid inputs, null handling, exceptions + +## State Management + +All pipeline state is stored in `.testagent/` folder: + +| File | Purpose | +| ------------------------ | ---------------------------- | +| `.testagent/research.md` | Codebase analysis results | +| `.testagent/plan.md` | Phased implementation plan | +| `.testagent/status.md` | Progress tracking (optional) | + +## Examples + +### Example 1: Full Project Testing + +```text +Generate unit tests for my Calculator project at C:\src\Calculator +``` + +### Example 2: Specific File Testing + +```text +Generate unit tests for src/services/UserService.ts +``` + +### Example 3: Targeted Coverage + +```text +Add tests for the authentication module with focus on edge cases +``` + +## Agent Reference + +| Agent | Purpose | +| -------------------------- | -------------------- | +| `code-testing-generator` | Coordinates pipeline | +| `code-testing-researcher` | Analyzes codebase | +| `code-testing-planner` | Creates test plan | +| `code-testing-implementer` | Writes test files | +| `code-testing-builder` | Compiles code | +| `code-testing-tester` | Runs tests | +| `code-testing-fixer` | Fixes errors | +| `code-testing-linter` | Formats code | + +## Requirements + +- Project must have a build/test system configured +- Testing framework should be installed (or installable) +- VS Code with GitHub Copilot extension + +## Troubleshooting + +### Tests don't compile + +The 
`code-testing-fixer` agent will attempt to resolve compilation errors. Check `.testagent/plan.md` for the expected test structure. Check the `extensions/` folder for language-specific error code references (e.g., `extensions/dotnet.md` for .NET). + +### Tests fail + +Most failures in generated tests are caused by **wrong expected values in assertions**, not production code bugs: + +1. Read the actual test output +2. Read the production code to understand correct behavior +3. Fix the assertion, not the production code +4. Never mark tests `[Ignore]` or `[Skip]` just to make them pass + +### Wrong testing framework detected + +Specify your preferred framework in the initial request: "Generate Jest tests for..." + +### Environment-dependent tests fail + +Tests that depend on external services, network endpoints, specific ports, or precise timing will fail in CI environments. Focus on unit tests with mocked dependencies instead. + +### Build fails on full solution + +During phase implementation, build only the specific test project for speed. After all phases, run a full non-incremental workspace build to catch cross-project errors. diff --git a/.github/skills/code-testing-agent/extensions/dotnet.md b/.github/skills/code-testing-agent/extensions/dotnet.md new file mode 100644 index 0000000000..e362ad8934 --- /dev/null +++ b/.github/skills/code-testing-agent/extensions/dotnet.md @@ -0,0 +1,111 @@ +# .NET Extension + +Language-specific guidance for .NET (C#/F#/VB) test generation. 
+
+## Build Commands
+
+| Scope | Command |
+|-------|---------|
+| Specific test project | `dotnet build MyProject.Tests.csproj` |
+| Full solution (final validation) | `dotnet build MySolution.sln --no-incremental` |
+| From repo root (no .sln) | `dotnet build --no-incremental` |
+
+- Use `--no-restore` if dependencies are already restored
+- Use `-v:q` (quiet) to reduce output noise
+- Always use `--no-incremental` for the final validation build — incremental builds hide errors like CS7036
+
+## Test Commands
+
+| Scope | Command |
+|-------|---------|
+| All tests | `dotnet test` |
+| Filtered | `dotnet test --filter "FullyQualifiedName~ClassName"` |
+| After build | `dotnet test --no-build` |
+
+- Use `--no-build` if already built
+- Use `-v:q` for quieter output
+
+## Lint Command
+
+```bash
+dotnet format --include path/to/file.cs
+dotnet format MySolution.sln # full solution
+```
+
+## Project Reference Validation
+
+Before writing test code, read the test project's `.csproj` to verify it has `<ProjectReference>` entries for the assemblies your tests will use. If a reference is missing, add it:
+
+```xml
+<ItemGroup>
+  <ProjectReference Include="..\MyProject\MyProject.csproj" />
+</ItemGroup>
+```
+
+This prevents CS0234 ("namespace not found") and CS0246 ("type not found") errors.
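The pre-flight check described above can be sketched as a small script. This is a minimal illustration, not part of the skill: the project names (`MyProject`, `OtherProject`) are hypothetical, and a regex stands in for a full XML parser.

```python
import re

def missing_project_references(csproj_xml: str, needed_projects: list[str]) -> list[str]:
    """Return the names in needed_projects that have no <ProjectReference>
    entry in the given .csproj content."""
    referenced = re.findall(r'<ProjectReference\s+Include="([^"]+)"', csproj_xml)
    # Compare by bare project file name so relative path differences don't matter
    referenced_names = {ref.replace("\\", "/").split("/")[-1] for ref in referenced}
    return [p for p in needed_projects if f"{p}.csproj" not in referenced_names]

# Hypothetical test .csproj that references MyProject but not OtherProject
csproj = '''<Project Sdk="Microsoft.NET.Sdk">
  <ItemGroup>
    <ProjectReference Include="..\\MyProject\\MyProject.csproj" />
  </ItemGroup>
</Project>'''

print(missing_project_references(csproj, ["MyProject", "OtherProject"]))  # -> ['OtherProject']
```

Running a check like this before generating tests surfaces the missing reference up front, instead of discovering it later as a CS0234/CS0246 build failure.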
+
+## Common CS Error Codes
+
+| Error | Meaning | Fix |
+|-------|---------|-----|
+| CS0234 | Namespace not found | Add a `<ProjectReference>` to the source project in the test `.csproj` |
+| CS0246 | Type not found | Add `using Namespace;` or add the missing `<ProjectReference>` |
+| CS0103 | Name not found | Check spelling, add `using` statement |
+| CS1061 | Missing member | Verify method/property name matches the source code exactly |
+| CS0029 | Type mismatch | Cast or change the type to match the expected signature |
+| CS7036 | Missing required parameter | Read the constructor/method signature and pass all required arguments |
+
+## `.csproj` / `.sln` Handling
+
+- During phase implementation, build only the specific test `.csproj` for speed
+- For the final validation, build the full `.sln` with `--no-incremental`
+- Full-solution builds catch cross-project reference errors invisible in scoped builds
+
+## MSTest Template
+
+```csharp
+using Microsoft.VisualStudio.TestTools.UnitTesting;
+
+namespace ProjectName.Tests;
+
+[TestClass]
+public sealed class ClassNameTests
+{
+    private readonly ClassName _sut = new();
+
+    [TestMethod]
+    public void MethodName_Scenario_ExpectedResult()
+    {
+        // Arrange
+        var sut = new ClassName();
+
+        // Act
+        var result = sut.MethodName(input);
+
+        // Assert
+        Assert.AreEqual(expected, result);
+    }
+
+    [TestMethod]
+    [DataRow(2, 3, 5, DisplayName = "Positive numbers")]
+    [DataRow(-1, 1, 0, DisplayName = "Negative and positive")]
+    public void Add_ValidInputs_ReturnsSum(int a, int b, int expected)
+    {
+        // Act
+        var result = _sut.Add(a, b);
+
+        // Assert
+        Assert.AreEqual(expected, result);
+    }
+}
+```
+
+## Coverage XML Parsing
+
+If `.testagent/initial_coverage.xml` exists, it uses Cobertura/VS format:
+
+- `module` elements with `line_coverage` attribute — identifies which assemblies have low coverage
+- `function` elements with `line_coverage="0.00"` — identifies completely untested methods
+- `range` elements with `covered="no"` — identifies specific uncovered lines
+
+## Skip Coverage Tools
+
+Do not
configure or run code coverage measurement tools (coverlet, dotnet-coverage, XPlat Code Coverage). These tools have inconsistent cross-configuration behavior and waste significant time. Coverage is measured separately by the evaluation harness. diff --git a/.github/skills/code-testing-agent/unit-test-generation.prompt.md b/.github/skills/code-testing-agent/unit-test-generation.prompt.md new file mode 100644 index 0000000000..ccdbbbc2bc --- /dev/null +++ b/.github/skills/code-testing-agent/unit-test-generation.prompt.md @@ -0,0 +1,173 @@ +--- +description: >- + Best practices and guidelines for generating comprehensive, + parameterized unit tests with 80% code coverage across any programming + language +--- + +# Unit Test Generation Prompt + +You are an expert code generation assistant specialized in writing concise, effective, and logical unit tests. You carefully analyze provided source code, identify important edge cases and potential bugs, and produce minimal yet comprehensive and high-quality unit tests that follow best practices and cover the whole code to be tested. Aim for 80% code coverage. + +## Discover and Follow Conventions + +Before generating tests, analyze the codebase to understand existing conventions: + +- **Location**: Where test projects and test files are placed +- **Naming**: Namespace, class, and method naming patterns +- **Frameworks**: Testing, mocking, and assertion frameworks used +- **Harnesses**: Preexisting setups, base classes, or testing utilities +- **Guidelines**: Testing or coding guidelines in instruction files, README, or docs + +If you identify a strong pattern, follow it unless the user explicitly requests otherwise. If no pattern exists and there's no user guidance, use your best judgment. + +## Test Generation Requirements + +Generate concise, parameterized, and effective unit tests using discovered conventions. 
+ +- **Prefer mocking** over generating one-off testing types +- **Prefer unit tests** over integration tests, unless integration tests are clearly needed and can run locally +- **Traverse code thoroughly** to ensure high coverage (80%+) of the entire scope +- Continue generating tests until you reach the coverage target or have covered all non-trivial public surface area + +### Key Testing Goals + +| Goal | Description | +| ----------------------------- | ---------------------------------------------------------------------------------------------------- | +| **Minimal but Comprehensive** | Avoid redundant tests | +| **Logical Coverage** | Focus on meaningful edge cases, domain-specific inputs, boundary values, and bug-revealing scenarios | +| **Core Logic Focus** | Test positive cases and actual execution logic; avoid low-value tests for language features | +| **Balanced Coverage** | Don't let negative/edge cases outnumber tests of actual logic | +| **Best Practices** | Use Arrange-Act-Assert pattern and proper naming (`Method_Condition_ExpectedResult`) | +| **Buildable & Complete** | Tests must compile, run, and contain no hallucinated or missed logic | + +## Parameterization + +- Prefer parameterized tests (e.g., `[DataRow]`, `[Theory]`, `@pytest.mark.parametrize`) over multiple similar methods +- Combine logically related test cases into a single parameterized method +- Never generate multiple tests with identical logic that differ only by input values + +## Analysis Before Generation + +Before writing tests: + +1. **Analyze** the code line by line to understand what each section does +2. **Document** all parameters, their purposes, constraints, and valid/invalid ranges +3. **Identify** potential edge cases and error conditions +4. **Describe** expected behavior under different input conditions +5. **Note** dependencies that need mocking +6. **Consider** concurrency, resource management, or special conditions +7. 
**Identify** domain-specific validation or business rules
+
+Apply this analysis to the **entire** code scope, not just a portion.
+
+## Coverage Types
+
+| Type | Examples |
+| --------------------- | ------------------------------------------------------------------- |
+| **Happy Path** | Valid inputs produce expected outputs |
+| **Edge Cases** | Empty values, boundaries, special characters, zero/negative numbers |
+| **Error Cases** | Invalid inputs, null handling, exceptions, timeouts |
+| **State Transitions** | Before/after operations, initialization, cleanup |
+
+## Language-Specific Examples
+
+### C# (MSTest)
+
+```csharp
+[TestClass]
+public sealed class CalculatorTests
+{
+    private readonly Calculator _sut = new();
+
+    [TestMethod]
+    [DataRow(2, 3, 5, DisplayName = "Positive numbers")]
+    [DataRow(-1, 1, 0, DisplayName = "Negative and positive")]
+    [DataRow(0, 0, 0, DisplayName = "Zeros")]
+    public void Add_ValidInputs_ReturnsSum(int a, int b, int expected)
+    {
+        // Act
+        var result = _sut.Add(a, b);
+
+        // Assert
+        Assert.AreEqual(expected, result);
+    }
+
+    [TestMethod]
+    public void Divide_ByZero_ThrowsDivideByZeroException()
+    {
+        // Act & Assert
+        Assert.ThrowsException<DivideByZeroException>(() => _sut.Divide(10, 0));
+    }
+}
+```
+
+### TypeScript (Jest)
+
+```typescript
+describe("Calculator", () => {
+  let sut: Calculator;
+
+  beforeEach(() => {
+    sut = new Calculator();
+  });
+
+  it.each([
+    [2, 3, 5],
+    [-1, 1, 0],
+    [0, 0, 0],
+  ])("add(%i, %i) returns %i", (a, b, expected) => {
+    expect(sut.add(a, b)).toBe(expected);
+  });
+
+  it("divide by zero throws error", () => {
+    expect(() => sut.divide(10, 0)).toThrow("Division by zero");
+  });
+});
+```
+
+### Python (pytest)
+
+```python
+import pytest
+from calculator import Calculator
+
+class TestCalculator:
+    @pytest.fixture
+    def sut(self):
+        return Calculator()
+
+    @pytest.mark.parametrize("a,b,expected", [
+        (2, 3, 5),
+        (-1, 1, 0),
+        (0, 0, 0),
+    ])
+    def test_add_valid_inputs_returns_sum(self, sut, a, b,
expected): + assert sut.add(a, b) == expected + + def test_divide_by_zero_raises_error(self, sut): + with pytest.raises(ZeroDivisionError): + sut.divide(10, 0) +``` + +## Output Requirements + +- Tests must be **complete and buildable** with no placeholder code +- Follow the **exact conventions** discovered in the target codebase +- Include **appropriate imports** and setup code +- Add **brief comments** explaining non-obvious test purposes +- Place tests in the **correct location** following project structure + +## Build and Verification + +- **Scoped builds during development**: Build the specific test project during implementation for faster iteration +- **Final full-workspace build**: After all test generation is complete, run a full non-incremental build from the workspace root to catch cross-project errors +- **API signature verification**: Before calling any method in test code, verify the exact parameter types, count, and order by reading the source code +- **Project reference validation**: Before writing test code, verify the test project references all source projects the tests will use. 
Check the `extensions/` folder for language-specific guidance (e.g., `extensions/dotnet.md` for .NET) + +## Test Scope Guidelines + +- **Write unit tests, not integration/acceptance tests**: Focus on testing individual classes and methods with mocked dependencies +- **No external dependencies**: Never write tests that call external URLs, bind to network ports, require service discovery, or depend on precise timing +- **Mock everything external**: HTTP clients, database connections, file systems, network endpoints — all should be mocked in unit tests +- **Fix assertions, not production code**: When tests fail, read the production code, understand its actual behavior, and update the test assertion diff --git a/.github/skills/coverage-analysis/SKILL.md b/.github/skills/coverage-analysis/SKILL.md new file mode 100644 index 0000000000..d269904354 --- /dev/null +++ b/.github/skills/coverage-analysis/SKILL.md @@ -0,0 +1,471 @@ +--- +name: coverage-analysis +description: > + Automated, project-wide code coverage and CRAP (Change Risk Anti-Patterns) + score analysis for .NET projects with existing unit tests. Auto-detects + solution structure, runs coverage collection via `dotnet test` (supports both + Microsoft.Testing.Extensions.CodeCoverage and Coverlet), generates reports via + ReportGenerator, calculates CRAP scores per method, and surfaces risk + hotspots — complex code with low test coverage that is dangerous to modify. + Use when the user wants project-wide coverage analysis with risk + prioritization, coverage gap identification, CRAP score computation + across an entire solution, or to diagnose why coverage is stuck or + plateaued and identify what methods are blocking improvement. + DO NOT USE FOR: targeted single-method CRAP analysis (use crap-score skill), + writing tests, general test execution unrelated to coverage/CRAP analysis, + or coverage reporting without CRAP context. 
+---
+
+# Coverage Analysis
+
+## Purpose
+
+Raw coverage percentages answer "what code was executed?" — they don't answer what you actually need to know:
+
+- **What tests should I write next?** — ranked by risk and impact
+- **Which uncovered code is risky vs. trivial?** — CRAP scores separate the two
+- **Why has coverage plateaued?** — identify the files blocking further gains
+- **Is this code safe to refactor?** — complex + uncovered = dangerous to change
+
+This skill bridges that gap: from a bare .NET solution to a prioritized risk hotspot list, with no manual tool configuration required.
+
+## When to Use
+
+Use this skill when the user mentions test coverage, coverage gaps, code risk, CRAP scores, where to add tests, why coverage plateaued, or wants to know which code is safest to refactor — even if they don't explicitly say "coverage analysis".
+
+## When Not to Use
+
+- **Targeted single-method CRAP analysis** — use the `crap-score` skill instead
+- **Writing or generating tests** — this skill identifies where tests are needed; it does not write them
+- **General test execution** unrelated to coverage or CRAP analysis
+- **Coverage reporting without CRAP context** — use `dotnet test` with coverage collection directly
+
+## Inputs
+
+| Input | Required | Default | Description |
+|-------|----------|---------|-------------|
+| Project/solution path | No | Current directory | Path to the .NET solution or project |
+| Line coverage threshold | No | 80% | Minimum acceptable line coverage |
+| Branch coverage threshold | No | 70% | Minimum acceptable branch coverage |
+| CRAP threshold | No | 30 | Maximum acceptable CRAP score before flagging |
+| Top N hotspots | No | 10 | Number of risk hotspots to surface |
+
+### Prerequisites
+
+- .NET SDK installed (`dotnet` on PATH)
+- At least one test project referencing the production code (xUnit, NUnit, or MSTest)
+- Internet access for `dotnet tool install` (ReportGenerator) on first run, or ReportGenerator already
installed globally
+
+The skill auto-detects the coverage provider state per test project and selects the least-invasive execution strategy:
+
+- unified Microsoft CodeCoverage when all projects use it,
+- unified Coverlet when no project uses Microsoft CodeCoverage,
+- per-project provider execution when the solution is truly mixed.
+
+No pre-existing runsettings files or manually installed tools are required.
+
+## Workflow
+
+If the user provides a path to existing Cobertura XML (or coverage data is already present in `TestResults/`), skip Steps 3–4 (test execution and provider detection) but **still run Steps 5–6** (ReportGenerator and CRAP score computation). The Risk Hotspots table and CRAP scores are mandatory in every output — they are the skill's core value-add over raw coverage numbers.
+
+The workflow runs in four phases. Phases 2 and 3 each contain steps that can run in parallel to reduce total wall-clock time.
+
+### Phase 1 — Setup (sequential)
+
+#### Step 1: Locate the solution or project
+
+Given the user's path (default: current directory), find the entry point:
+
+```powershell
+$root = "<path>"  # user-provided path, or '.' for the current directory
+
+# Prefer solution file; fall back to project file
+$sln = Get-ChildItem -Path $root -Filter "*.sln" -Recurse -Depth 2 -ErrorAction SilentlyContinue |
+    Select-Object -First 1
+if ($sln) {
+    Write-Host "ENTRY_TYPE:Solution"; Write-Host "ENTRY:$($sln.FullName)"
+} else {
+    $project = Get-ChildItem -Path $root -Filter "*.csproj" -Recurse -Depth 2 -ErrorAction SilentlyContinue |
+        Select-Object -First 1
+    if ($project) {
+        Write-Host "ENTRY_TYPE:Project"; Write-Host "ENTRY:$($project.FullName)"
+    } else {
+        Write-Host "ENTRY_TYPE:NotFound"
+    }
+}
+
+# Test projects: search path first, then git root, then parent
+$searchRoots = @($root)
+$gitRoot = (git -C $root rev-parse --show-toplevel 2>$null)
+if ($gitRoot) { $gitRoot = [System.IO.Path]::GetFullPath($gitRoot) }
+if ($gitRoot -and $gitRoot -ne $root) { $searchRoots += $gitRoot }
+$parentPath = Split-Path $root -Parent
+if ($parentPath -and $parentPath -ne $root -and $parentPath -ne $gitRoot) { $searchRoots += $parentPath } + +$testProjects = @() +foreach ($sr in $searchRoots) { + # Primary: match by .csproj content (test framework references) + $testProjects = @(Get-ChildItem -Path $sr -Filter "*.csproj" -Recurse -Depth 5 -ErrorAction SilentlyContinue | + Where-Object { $_.FullName -notmatch '([/\\]obj[/\\]|[/\\]bin[/\\])' } | + Where-Object { (Select-String -Path $_.FullName -Pattern 'Microsoft\.NET\.Test\.Sdk|xunit|nunit|MSTest\.TestAdapter|"MSTest"|MSTest\.TestFramework|TUnit' -Quiet) }) + if ($testProjects.Count -gt 0) { + if ($sr -ne $root) { Write-Host "SEARCHED:$sr" } + break + } +} + +# Fallback: match by file name convention +if ($testProjects.Count -eq 0) { + foreach ($sr in $searchRoots) { + $testProjects = @(Get-ChildItem -Path $sr -Filter "*.csproj" -Recurse -Depth 5 -ErrorAction SilentlyContinue | + Where-Object { $_.Name -match '(?i)(test|spec)' }) + if ($testProjects.Count -gt 0) { + if ($sr -ne $root) { Write-Host "SEARCHED:$sr" } + break + } + } +} +Write-Host "TEST_PROJECTS:$($testProjects.Count)" +$testProjects | ForEach-Object { Write-Host "TEST_PROJECT:$($_.FullName)" } + +# Resolve the test output root (where coverage-analysis artifacts will be written) +if ($testProjects.Count -eq 1) { + $testOutputRoot = $testProjects[0].DirectoryName +} else { + # Multiple test projects — find their deepest common parent directory + $dirs = $testProjects | ForEach-Object { $_.DirectoryName } + $common = $dirs[0] + foreach ($d in $dirs[1..($dirs.Count-1)]) { + $sep = [System.IO.Path]::DirectorySeparatorChar + while (-not $d.StartsWith("$common$sep", [System.StringComparison]::OrdinalIgnoreCase) -and $d -ne $common) { + $prevCommon = $common + $common = Split-Path $common -Parent + # Terminate if we can no longer move up (at filesystem root or no parent) + if ([string]::IsNullOrEmpty($common) -or $common -eq $prevCommon) { + $common = $null + break + } + } + } + if 
([string]::IsNullOrEmpty($common)) {
+        # Fallback when no common parent directory exists (e.g., projects on different drives)
+        if ($gitRoot) {
+            $testOutputRoot = $gitRoot
+        } else {
+            $testOutputRoot = $root
+        }
+    } else {
+        $testOutputRoot = $common
+    }
+}
+Write-Host "TEST_OUTPUT_ROOT:$testOutputRoot"
+```
+
+- If `ENTRY_TYPE:NotFound` and test projects were found → use the test projects directly as entry points (run `dotnet test` on each test `.csproj`).
+- If `ENTRY_TYPE:NotFound` and no test projects found → stop: `No .sln or test projects found under <path>. Provide the path to your .NET solution or project.`
+- If `TEST_PROJECTS:0` → stop: `No test projects found (expected projects with 'Test' or 'Spec' in the name). Ensure your solution has unit test projects before running coverage analysis.`
+
+#### Step 2: Create the output directory
+
+```powershell
+$coverageDir = Join-Path $testOutputRoot "TestResults" "coverage-analysis"
+if (Test-Path $coverageDir) { Remove-Item $coverageDir -Recurse -Force }
+New-Item -ItemType Directory -Path $coverageDir -Force | Out-Null
+Write-Host "COVERAGE_DIR:$coverageDir"
+```
+
+#### Step 2b: Recommend ignoring `TestResults/`
+
+```powershell
+$pattern = "**/TestResults/"
+$gitRoot = (git -C $testOutputRoot rev-parse --show-toplevel 2>$null)
+if ($gitRoot) { $gitRoot = [System.IO.Path]::GetFullPath($gitRoot) }
+if ($gitRoot) {
+    $gitignorePath = Join-Path $gitRoot ".gitignore"
+    $alreadyIgnored = $false
+    if (Test-Path $gitignorePath) {
+        $alreadyIgnored = (Select-String -Path $gitignorePath -Pattern '^\s*(\*\*/)?TestResults/?\s*$' -Quiet)
+    }
+    if ($alreadyIgnored) {
+        Write-Host "GITIGNORE_RECOMMENDATION:already-present"
+    } else {
+        Write-Host "GITIGNORE_RECOMMENDATION:$pattern"
+    }
+} else {
+    Write-Host "GITIGNORE_RECOMMENDATION:$pattern"
+}
+```
+
+### Phase 2 — Data collection (Steps 3 and 4 run in parallel)
+
+Steps 3 and 4 are independent — start both simultaneously.
`dotnet test` is the slowest step, and ReportGenerator setup doesn't need coverage files, so running them concurrently cuts wall time significantly. + +#### Step 3: Detect coverage provider and run `dotnet test` with coverage collection + +Before running tests, detect which coverage provider the test projects use. Projects may reference +`Microsoft.Testing.Extensions.CodeCoverage` (Microsoft's built-in provider, common on .NET 9+) or +`coverlet.collector` (open-source, the default in xUnit templates). The provider determines which +`dotnet test` arguments to use — both produce Cobertura XML. + +```powershell +# Detect coverage provider per test project +$coverageProvider = "unknown" # will be set to "ms-codecoverage" or "coverlet" +$msCodeCovProjects = @() +$coverletProjects = @() +$neitherProjects = @() + +foreach ($tp in $testProjects) { + $hasMsCodeCov = Select-String -Path $tp.FullName -Pattern 'Microsoft\.Testing\.Extensions\.CodeCoverage' -Quiet + $hasCoverlet = Select-String -Path $tp.FullName -Pattern 'coverlet\.collector' -Quiet + if ($hasMsCodeCov) { $msCodeCovProjects += $tp } + elseif ($hasCoverlet) { $coverletProjects += $tp } + else { $neitherProjects += $tp } +} + +# Determine the provider strategy +if ($msCodeCovProjects.Count -gt 0 -and $coverletProjects.Count -eq 0) { + $coverageProvider = "ms-codecoverage" + Write-Host "COVERAGE_PROVIDER:ms-codecoverage (ms:$($msCodeCovProjects.Count), none:$($neitherProjects.Count))" +} elseif ($coverletProjects.Count -gt 0 -and $msCodeCovProjects.Count -eq 0) { + $coverageProvider = "coverlet" + Write-Host "COVERAGE_PROVIDER:coverlet (coverlet:$($coverletProjects.Count), none:$($neitherProjects.Count))" +} elseif ($msCodeCovProjects.Count -gt 0 -and $coverletProjects.Count -gt 0) { + $coverageProvider = "mixed-project" + Write-Host "COVERAGE_PROVIDER:mixed-project (ms:$($msCodeCovProjects.Count), coverlet:$($coverletProjects.Count), none:$($neitherProjects.Count))" +} else { + $coverageProvider = "coverlet" + 
Write-Host "COVERAGE_PROVIDER:none-detected — defaulting to coverlet" +} +``` + +If any discovered test projects have no provider, add one based on the selected strategy: + +```powershell +if ($coverageProvider -eq "ms-codecoverage" -and $neitherProjects.Count -gt 0) { + Write-Host "ADDING_MS_CODECOVERAGE:$($neitherProjects.Count) project(s)" + foreach ($tp in $neitherProjects) { + dotnet add $tp.FullName package Microsoft.Testing.Extensions.CodeCoverage --no-restore + Write-Host " ADDED_MS_CODECOVERAGE:$($tp.FullName)" + } + foreach ($tp in $neitherProjects) { + dotnet restore $tp.FullName --quiet + } +} + +if (($coverageProvider -eq "coverlet" -or $coverageProvider -eq "mixed-project") -and $neitherProjects.Count -gt 0) { + Write-Host "ADDING_COVERLET:$($neitherProjects.Count) project(s)" + foreach ($tp in $neitherProjects) { + dotnet add $tp.FullName package coverlet.collector --no-restore + Write-Host " ADDED:$($tp.FullName)" + } + foreach ($tp in $neitherProjects) { + dotnet restore $tp.FullName --quiet + } +} +``` + +Log each addition to the console so the developer sees what changed. Document the additions in the final report (see Output Format). + +Run one `dotnet test` per entry point for the selected strategy: + +- In `ms-codecoverage` or `coverlet` mode: run a single command for the solution entry (or one per test project if no `.sln` was found). +- In `mixed-project` mode: run one command per test project, using that project's existing provider to avoid dual-provider conflicts. 
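The per-strategy run plan described above can be sketched as a small helper. This is an illustration only; the authoritative logic is the PowerShell in this step, and the project paths shown are hypothetical.

```python
def plan_test_runs(entry, test_projects, strategy):
    """Return (target, provider) pairs describing the dotnet-test invocations.

    entry: the .sln entry point, or None when no solution file was found
    test_projects: list of (csproj_path, provider) with provider in
                   {"ms-codecoverage", "coverlet"}
    """
    if strategy in ("ms-codecoverage", "coverlet"):
        if entry is not None:
            # Unified mode: a single run on the solution entry
            return [(entry, strategy)]
        # No .sln found: one run per test project, same provider everywhere
        return [(path, strategy) for path, _ in test_projects]
    # Mixed mode: one run per project, honoring that project's own provider
    return [(path, provider) for path, provider in test_projects]

projects = [("A.Tests.csproj", "ms-codecoverage"), ("B.Tests.csproj", "coverlet")]
print(plan_test_runs("My.sln", projects, "mixed-project"))
# -> [('A.Tests.csproj', 'ms-codecoverage'), ('B.Tests.csproj', 'coverlet')]
```

The key property is that mixed mode never runs two providers in the same invocation, which is what avoids the dual-provider conflicts mentioned above.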
+
+**Coverlet** (`coverlet.collector`):
+
+```powershell
+$rawDir = Join-Path "<coverage-dir>" "raw"
+dotnet test "<entry>" `
+    --collect:"XPlat Code Coverage" `
+    --results-directory $rawDir `
+    -- DataCollectionRunSettings.DataCollectors.DataCollector.Configuration.Format=cobertura `
+    DataCollectionRunSettings.DataCollectors.DataCollector.Configuration.Include="[*]*" `
+    DataCollectionRunSettings.DataCollectors.DataCollector.Configuration.Exclude="[*.Tests]*,[*.Test]*,[*Tests]*,[*Test]*,[*.Specs]*,[*.Testing]*" `
+    DataCollectionRunSettings.DataCollectors.DataCollector.Configuration.SkipAutoProps=true
+```
+
+**Microsoft CodeCoverage** (`Microsoft.Testing.Extensions.CodeCoverage`):
+
+The command syntax depends on the .NET SDK version. In .NET 9, Microsoft.Testing.Platform arguments
+must be passed after the `--` separator. In .NET 10+, `--coverage` is a top-level `dotnet test` flag.
+
+```powershell
+$rawDir = Join-Path "<coverage-dir>" "raw"
+
+# Detect SDK version for correct argument placement
+$sdkVersion = (dotnet --version 2>$null)
+$major = if ($sdkVersion -match '^(\d+)\.') { [int]$Matches[1] } else { 9 }
+
+if ($major -ge 10) {
+    # .NET 10+: --coverage is a first-class dotnet test flag
+    dotnet test "<entry>" `
+        --results-directory $rawDir `
+        --coverage `
+        --coverage-output-format cobertura `
+        --coverage-output $rawDir
+} else {
+    # .NET 9: pass Microsoft.Testing.Platform arguments after the -- separator
+    dotnet test "<entry>" `
+        --results-directory $rawDir `
+        -- --coverage --coverage-output-format cobertura --coverage-output $rawDir
+}
+```
+
+**Mixed-project mode** (`Microsoft.Testing.Extensions.CodeCoverage` + `coverlet.collector` in the same solution):
+
+```powershell
+$rawDir = Join-Path "<coverage-dir>" "raw"
+$sdkVersion = (dotnet --version 2>$null)
+$major = if ($sdkVersion -match '^(\d+)\.') { [int]$Matches[1] } else { 9 }
+
+foreach ($tp in $testProjects) {
+    $hasMsCodeCov = Select-String -Path $tp.FullName -Pattern 'Microsoft\.Testing\.Extensions\.CodeCoverage' -Quiet
+    if
($hasMsCodeCov) { + if ($major -ge 10) { + dotnet test $tp.FullName --results-directory $rawDir --coverage --coverage-output-format cobertura --coverage-output $rawDir + } else { + dotnet test $tp.FullName --results-directory $rawDir -- --coverage --coverage-output-format cobertura --coverage-output $rawDir + } + } else { + dotnet test $tp.FullName ` + --collect:"XPlat Code Coverage" ` + --results-directory $rawDir ` + -- DataCollectionRunSettings.DataCollectors.DataCollector.Configuration.Format=cobertura ` + -- DataCollectionRunSettings.DataCollectors.DataCollector.Configuration.Include="[*]*" ` + -- DataCollectionRunSettings.DataCollectors.DataCollector.Configuration.Exclude="[*.Tests]*,[*.Test]*,[*Tests]*,[*Test]*,[*.Specs]*,[*.Testing]*" ` + -- DataCollectionRunSettings.DataCollectors.DataCollector.Configuration.SkipAutoProps=true + } +} +``` + +Exit code handling: + +- **0** — all tests passed, coverage collected +- **1** — some tests failed (coverage still collected — proceed with a warning) +- **Other** — build failure; stop and report the error + +After the run, locate coverage files: + +```powershell +$coberturaFiles = Get-ChildItem -Path (Join-Path "" "raw") -Filter "coverage.cobertura.xml" -Recurse +Write-Host "COBERTURA_COUNT:$($coberturaFiles.Count)" +$coberturaFiles | ForEach-Object { Write-Host "COBERTURA:$($_.FullName)" } +$vsCovFiles = Get-ChildItem -Path (Join-Path "" "raw") -Filter "*.coverage" -Recurse -ErrorAction SilentlyContinue +if ($vsCovFiles) { Write-Host "VS_BINARY_COVERAGE:$($vsCovFiles.Count)" } +``` + +If `COBERTURA_COUNT` is 0: + +- If `VS_BINARY_COVERAGE` > 0: warn the user — *"Found .coverage files (VS binary format) but no Cobertura XML. These were likely produced by Visual Studio's built-in collector, which outputs a binary format by default. This skill needs Cobertura XML. 
Re-running with the detected provider configured for Cobertura output."* Then re-run the appropriate `dotnet test` command above (Coverlet or Microsoft CodeCoverage) with Cobertura format. +- If no `.coverage` files either: stop and report — *"Coverage files not generated. Ensure `dotnet test` completed successfully and check the build output for errors."* + +#### Step 4: Verify or install ReportGenerator (parallel with Step 3) + +```powershell +$rgAvailable = $false +$rgCommand = Get-Command reportgenerator -ErrorAction SilentlyContinue +if ($rgCommand) { + $rgAvailable = $true + Write-Host "RG_INSTALLED:already-present" +} else { + $rgToolPath = Join-Path "" ".tools" + dotnet tool install dotnet-reportgenerator-globaltool --tool-path $rgToolPath + if ($LASTEXITCODE -eq 0) { + $env:PATH = "$rgToolPath$([System.IO.Path]::PathSeparator)$env:PATH" + $rgCommand = Get-Command reportgenerator -ErrorAction SilentlyContinue + if ($rgCommand) { + $rgAvailable = $true + Write-Host "RG_INSTALLED:true (tool-path: $rgToolPath)" + } else { + Write-Host "RG_INSTALLED:false" + Write-Host "RG_INSTALL_ERROR:reportgenerator-not-available" + } + } else { + Write-Host "RG_INSTALLED:false" + Write-Host "RG_INSTALL_ERROR:reportgenerator-not-available" + } +} +Write-Host "RG_AVAILABLE:$rgAvailable" +``` + +If installation fails (no internet), keep `RG_AVAILABLE:false` and continue with raw Cobertura XML parsing + script-based analysis in Step 6. Skip HTML/Text/CSV report generation in Step 5 and note this in the output. + +### Phase 3 — Analysis (Steps 5 and 6 run in parallel) + +Once Phase 2 completes (coverage files available, ReportGenerator ready), start Steps 5 and 6 simultaneously — both read from the same Cobertura XML and produce independent outputs. 
+ +#### Step 5: Generate reports with ReportGenerator (parallel with Step 6) + +```powershell +$reportsDir = Join-Path "" "reports" +if ($rgAvailable) { + reportgenerator ` + -reports:"" ` + -targetdir:$reportsDir ` + -reporttypes:"Html;TextSummary;MarkdownSummaryGithub;CsvSummary" ` + -title:"Coverage Report" ` + -tag:"coverage-analysis-skill" + + Get-Content (Join-Path $reportsDir "Summary.txt") -ErrorAction SilentlyContinue +} else { + Write-Host "REPORTGENERATOR_SKIPPED:true" +} +``` + +#### Step 6: Calculate CRAP scores using the bundled script (parallel with Step 5) + +Run `scripts/Compute-CrapScores.ps1` (co-located with this SKILL.md). It reads all Cobertura XML files, applies `CRAP(m) = comp² × (1 − cov)³ + comp` per method, and returns the top-N hotspots as JSON. + +To locate the script: find the directory containing this skill's `SKILL.md` file (the skill loader provides this context), then resolve `scripts/Compute-CrapScores.ps1` relative to it. If the script path cannot be determined, calculate CRAP scores inline using the formula below. + +```powershell +& "/scripts/Compute-CrapScores.ps1" ` + -CoberturaPath @() ` + -CrapThreshold ` + -TopN +``` + +Script outputs: `TOTAL_METHODS:`, `FLAGGED_METHODS:`, `HOTSPOTS:` (top-N sorted by CrapScore descending). + +Also run `scripts/Extract-MethodCoverage.ps1` to get per-method coverage data for the Coverage Gaps table: + +```powershell +& "/scripts/Extract-MethodCoverage.ps1" ` + -CoberturaPath @() ` + -CoverageThreshold ` + -BranchThreshold ` + -Filter below-threshold +``` + +Script outputs: JSON array of methods below the coverage threshold, sorted by coverage ascending. Use this data to populate the Coverage Gaps by File table in the report. + +### Phase 4 — Output (sequential) + +#### Step 7: Build the output report + +Compose the analysis and save it to `TestResults/coverage-analysis/coverage-analysis.md` under the test project directory. Print the full report to the console. 
+ +After saving the file, automatically open `TestResults/coverage-analysis/coverage-analysis.md` in the editor so the user can review it immediately. + +- In editor-hosted environments (VS Code, Visual Studio, or other IDE hosts): open the file in the current host session/editor context after writing it. +- Do not launch a different app instance via hardcoded shell commands (for example `code`, `start`, or platform-specific open commands) unless the host has no native open-file mechanism. +- In CLI or non-editor environments: print the absolute report path and clearly state that the file was generated. + +Do not ask for confirmation before opening the report file. + +Use `references/output-format.md` verbatim for all fixed headings, table structures, symbols, and emoji in the generated report. Use `references/guidelines.md` for execution constraints, prioritization rules, and style. + +## Validation + +- Verify that at least one `coverage.cobertura.xml` file was generated after `dotnet test` +- Confirm `TestResults/coverage-analysis/coverage-analysis.md` was written and contains data +- Spot-check one method's CRAP score: `comp² × (1 − cov)³ + comp` — a method with 100% coverage should have CRAP = complexity +- If ReportGenerator ran, verify `TestResults/coverage-analysis/reports/index.html` exists + +## Common Pitfalls + +- **No Cobertura XML generated** — the test project may lack a coverage provider. The skill auto-adds one, but if `dotnet add package` fails (offline/proxy), coverage collection silently produces nothing. Check for `.coverage` binary files as a fallback indicator. +- **Test failures (exit code 1)** — coverage is still collected from passing tests. Do not abort; proceed with partial data and note the failures in the summary. +- **ReportGenerator install failure** — if `dotnet tool install` fails (no internet), skip HTML/CSV report generation and continue with raw Cobertura XML analysis + script-based CRAP scores. Note the skip in the report. 
+- **Method name mismatches in Cobertura** — async methods, lambdas, and local functions may have compiler-generated names. The scripts use the Cobertura method name/signature directly; verify against source if results look unexpected. +- **Mixed coverage providers** — when a solution contains both Coverlet and Microsoft CodeCoverage projects, the skill runs per-project to avoid dual-provider conflicts. This is slower but correct. diff --git a/.github/skills/coverage-analysis/references/guidelines.md b/.github/skills/coverage-analysis/references/guidelines.md new file mode 100644 index 0000000000..cb382248f8 --- /dev/null +++ b/.github/skills/coverage-analysis/references/guidelines.md @@ -0,0 +1,59 @@ +# Guidelines + +**Don't modify source or production code.** The only permitted project file modifications are adding a coverage provider package to test projects that currently have no provider: `coverlet.collector` (coverlet/mixed modes) or `Microsoft.Testing.Extensions.CodeCoverage` (ms-codecoverage mode). Do not add a second provider to projects that already have one. Always log package additions and document revert commands in the report. Write all other output to `TestResults/coverage-analysis/` under the test project directory. + +**Always show and open the generated markdown report.** After writing `TestResults/coverage-analysis/coverage-analysis.md`, print its contents to the console and open the file in the current host editor/session automatically (when an editor is available). + +**Don't generate new tests during the initial analysis run.** This skill surfaces where tests are needed. Test generation is a separate follow-up step outside the scope of this skill. + +**Use inline `dotnet test` arguments, not runsettings files.** Runsettings files require the developer to already know what they're doing — the whole point of this skill is that they shouldn't have to. Inline data collector args produce the same result with zero configuration. 
+ +**Show the risk hotspots table even when all thresholds pass.** A project at 90% line coverage can still have a method with cyclomatic complexity 20 and 0% branch coverage. The thresholds measure averages; the hotspot table finds outliers. Don't hide it just because the summary looks green. + +**Always compute and surface CRAP scores.** The Risk Hotspots table is mandatory in every analysis output, whether analyzing pre-existing data, freshly collected data, or diagnosing a plateau. Never skip CRAP score computation — it is the primary differentiator between this skill and raw `dotnet test` coverage output. + +**Continue past test failures (exit code 1).** If some tests fail, coverage is still collected from the passing tests — partial data is better than no data. Note the failures in the summary and proceed. Aborting would leave the developer with nothing actionable. + +**Run `dotnet test` only once per entry point during normal flow.** When a solution is found, run it once against the solution. When no solution is found, run it once per test project. A single recovery rerun is allowed only if the first run produced no Cobertura XML and only `.coverage` binary output. + +**CRAP threshold of 30 is the default for a reason.** Scores above 30 are widely cited (by the original researchers) as "needs immediate attention." Scores between 15 and 30 are moderate — flag them in the table but don't make them sound catastrophic. Scores ≤ 5 are generally fine. 
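+
+A quick worked example makes the bands concrete (illustrative numbers, not taken from any real report): a method with cyclomatic complexity 10 and 40% line coverage scores
+
+$$\mathrm{CRAP} = 10^2 \times (1 - 0.4)^3 + 10 = 100 \times 0.216 + 10 = 31.6$$
+
+which lands just past the critical line, while the same method at 80% coverage drops to $100 \times 0.2^3 + 10 = 10.8$, squarely in the moderate band.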
+ +**Priority assignment for coverage gaps:** + +- **HIGH** — file has both a CRAP score above threshold AND coverage below threshold (the double failure is what makes it urgent) +- **MED** — coverage below threshold OR CRAP score above threshold, but not both +- **LOW** — coverage below threshold with all methods having complexity ≤ 2 (trivial code — missing coverage here is unlikely to hide real bugs) + +--- + +## Coverage Intelligence — Going Beyond the Numbers + +**Prioritize uncovered code that is** complex (cyclomatic complexity > 5), on critical paths (auth, payment, data access, error handling), or changed frequently. **Deprioritize** trivial getters (complexity 1–2), generated files (EF migrations, `*.Designer.cs`, `*.g.cs`), and DI/configuration glue code. + +**Coverage plateau diagnosis** — if coverage has stopped increasing, check for: `[Exclude]` attributes hiding large code sections, tests that execute code but assert nothing (inflated coverage without verification), or integration code that needs external dependencies (databases, file system). + +**AI-generated test quality** — coverage delta alone is insufficient. Flag methods where CRAP score is still above threshold after coverage increased (tests may be happy-path only), and methods covered by a single test with no branch variation. 
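+
+The HIGH/MED/LOW rules above can be sketched as a small helper. This is an illustrative sketch only, not one of the skill's bundled scripts; the parameter names are assumptions, and the defaults mirror the skill's default thresholds.
+
+```powershell
+# Hypothetical sketch of the priority rules; not shipped with this skill.
+function Get-GapPriority {
+    param(
+        [double]$CrapScore,
+        [double]$LineCoverage,         # percent, 0-100
+        [int]$MaxMethodComplexity,     # highest complexity of any method in the file
+        [double]$CrapThreshold = 30,
+        [double]$CoverageThreshold = 80
+    )
+    $crapBad = $CrapScore -gt $CrapThreshold
+    $covBad  = $LineCoverage -lt $CoverageThreshold
+    if ($crapBad -and $covBad)                   { return 'HIGH' }  # the double failure
+    if ($covBad -and $MaxMethodComplexity -le 2) { return 'LOW' }   # trivial code
+    if ($crapBad -or $covBad)                    { return 'MED' }
+    return $null                                                    # no gap to report
+}
+
+Get-GapPriority -CrapScore 48 -LineCoverage 45 -MaxMethodComplexity 12   # HIGH
+Get-GapPriority -CrapScore 12 -LineCoverage 60 -MaxMethodComplexity 2    # LOW
+```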
+
+---
+
+## Style
+
+- **Keep risk hotspots prominent and immediately after the summary section** — developers should find the highest-risk methods quickly
+- **Quantify recommendations** — "adding 3 tests for `ProcessOrder` would cut the CRAP score from 48 to ~6"
+- **Be direct** — skip preamble, get to the table
+- **Emoji for visual scanning in generated output** (defined in `references/output-format.md`):
+
+  | Symbol | Meaning |
+  |--------|---------|
+  | 🔥 | hotspots |
+  | 📋 | gaps |
+  | 💡 | recommendations |
+  | 📁 | reports |
+  | ✅ | passing |
+  | ❌ | failing |
+  | ⚠️ | warning |
+  | 🔴 | HIGH priority |
+  | 🟡 | MED priority |
+  | 🟢 | LOW priority |
+
+- **Always use Unicode emoji in generated output** — never shortcodes like `:x:` or `:fire:`
diff --git a/.github/skills/coverage-analysis/references/output-format.md b/.github/skills/coverage-analysis/references/output-format.md
new file mode 100644
index 0000000000..c768806eb1
--- /dev/null
+++ b/.github/skills/coverage-analysis/references/output-format.md
@@ -0,0 +1,83 @@
+# Output Format
+
+Copy the template below **verbatim** for all fixed elements (headings, table headers, emoji, symbols). Only replace `<placeholder>` values with actual data. Do not substitute emoji with text equivalents, do not change `·` to `-`, do not change `×` to `x`, and do not drop section emoji prefixes.
+
+```markdown
+# Coverage Analysis - <project-name>
+
+| Metric | Value |
+|--------|-------|
+| **Date** | <date> |
+| **Line Coverage** | <line-coverage>% |
+| **Branch Coverage** | <branch-coverage>% |
+| **Risk Hotspots** | <count> (CRAP > <threshold>) |
+| **Tests** | <passed> passed · <failed> failed |
+
+## Summary
+
+| Metric | Value | Threshold | Status |
+|--------|-------|-----------|--------|
+| **Line Coverage** | <value>% | <threshold>% | ✅ / ❌ |
+| **Branch Coverage** | <value>% | <threshold>% | ✅ / ❌ |
+| **Methods Analyzed** | <count> | — | — |
+| **Risk Hotspots** | <count> | 0 | ✅ / ⚠️ |
+| **Test Result** | <summary> | — | ✅ / ⚠️ |
+
+> Coverage collected from **<n> of <total> test project(s)**.
+> Reports saved to: `<output-dir>/reports/`
+
+If any coverage provider package was added to test projects, include this note after the summary:
+
+> ℹ️ **Coverage provider package updates**
+> - `coverlet.collector` added to <n> project(s): `<project>`, `<project>`
+> - `Microsoft.Testing.Extensions.CodeCoverage` added to <n> project(s): `<project>`
+>
+> To revert: `git checkout -- <csproj-path>`
+
+If all test projects already had a coverage provider, omit this note.
+
+---
+
+## 🔥 Risk Hotspots (Top <N> by CRAP Score)
+
+Methods flagged as high-risk: complex code with low test coverage that is dangerous to change.
+
+| Rank | Method | Class | File | Complexity | Coverage | CRAP Score |
+|------|--------|-------|------|-----------|---------|-----------|
+| 1 | `<method>` | `<class>` | `<file>` | <complexity> | <coverage>% | **<score>** |
+| … | … | … | … | … | … | … |
+
+> **CRAP Score** = `Complexity² × (1 − Coverage)³ + Complexity`.
+> Scores above <threshold> are flagged. A score ≤ 5 is considered safe.
+
+---
+
+## 📋 Coverage Gaps by File
+
+Files below the line or branch coverage threshold, ordered by uncovered lines descending:
+
+| File | Line Coverage | Branch Coverage | Uncovered Lines | Priority |
+|------|--------------|----------------|----------------|---------|
+| `<file>` | <line>% | <branch>% | <count> | 🔴 HIGH / 🟡 MED / 🟢 LOW |
+| … | … | … | … | … |
+
+---
+
+## 💡 Recommendations
+
+1. **Write tests for the top risk hotspot first** — `<method>` in `<class>` has a CRAP score of <score> (complexity <complexity>, <coverage>% coverage). Reducing it to 80% coverage would drop the score to ~<new-score>.
+2. **Focus on `<file>`** — <count> uncovered lines, <amount>% below threshold.
+3. **<additional recommendation>**
+
+---
+
+## 📁 Reports
+
+| Report | Path |
+|--------|------|
+| HTML (browsable) | `<output-dir>/reports/index.html` |
+| Text summary | `<output-dir>/reports/Summary.txt` |
+| GitHub markdown | `<output-dir>/reports/SummaryGithub.md` |
+| CSV data | `<output-dir>/reports/Summary.csv` |
+| Raw data | `<output-dir>/raw/` |
+```
diff --git a/.github/skills/coverage-analysis/scripts/Compute-CrapScores.ps1 b/.github/skills/coverage-analysis/scripts/Compute-CrapScores.ps1
new file mode 100644
index 0000000000..a4b1799f01
--- /dev/null
+++ b/.github/skills/coverage-analysis/scripts/Compute-CrapScores.ps1
@@ -0,0 +1,113 @@
+# Compute-CrapScores.ps1
+#
+# Reads one or more Cobertura XML coverage files and calculates CRAP scores per method.
+# Uses Alberto Savoia's original CRAP formula:
+#   CRAP(m) = comp(m)^2 * (1 - cov(m))^3 + comp(m)
+#
+# Usage:
+#   .\Compute-CrapScores.ps1 -CoberturaPath <path>,<path>,... [-CrapThreshold <n>] [-TopN <n>]
+#
+# Outputs:
+#   - Hotspot rows (top N by CRAP score) as a JSON array to stdout (HOTSPOTS:<json>)
+#   - Summary counts as TOTAL_METHODS:<n> and FLAGGED_METHODS:<n>
+
+param(
+    [Parameter(Mandatory)][string[]]$CoberturaPath,
+    [int]$CrapThreshold = 30,
+    [int]$TopN = 10
+)
+
+# Merge methods across all Cobertura files using a stable key (Class|Method|Signature|File).
+# Line hits are accumulated so a line is counted as covered if any test project covered it.
+$methodMap = @{}
+
+foreach ($filePath in $CoberturaPath) {
+    if (-not (Test-Path $filePath)) {
+        Write-Error "Cobertura file not found: $filePath"
+        exit 2
+    }
+
+    try {
+        [xml]$cobertura = Get-Content $filePath -Encoding UTF8 -ErrorAction Stop
+    } catch {
+        Write-Error "Failed to parse Cobertura XML: $filePath. 
$_" + exit 2 + } + + foreach ($package in $cobertura.coverage.packages.package) { + foreach ($class in $package.classes.class) { + $className = $class.name + $fileName = $class.filename + + foreach ($method in $class.methods.method) { + $key = "$className|$($method.name)|$($method.signature)|$fileName" + + # Cyclomatic complexity is stored as an XML attribute in Cobertura format + $complexity = if ($null -ne $method.complexity) { [int]$method.complexity } else { 1 } + if ($complexity -lt 1) { $complexity = 1 } + + if (-not $methodMap.ContainsKey($key)) { + $methodMap[$key] = @{ + Class = $className + Method = $method.name + Signature = $method.signature + File = $fileName + Complexity = $complexity + LineHits = @{} + } + } + + # Accumulate hit counts per line number across files + foreach ($line in $method.lines.line) { + $lineNo = $line.number + $hits = [int]$line.hits + if ($methodMap[$key].LineHits.ContainsKey($lineNo)) { + $methodMap[$key].LineHits[$lineNo] += $hits + } else { + $methodMap[$key].LineHits[$lineNo] = $hits + } + } + } + } + } +} + +$results = [System.Collections.Generic.List[PSCustomObject]]::new() + +foreach ($entry in $methodMap.Values) { + $totalLines = $entry.LineHits.Count + $coveredLines = ($entry.LineHits.Values | Where-Object { $_ -gt 0 } | Measure-Object).Count + $lineCoverage = if ($totalLines -gt 0) { $coveredLines / $totalLines } else { 0.0 } + + $complexity = $entry.Complexity + + # Alberto Savoia's CRAP formula: comp^2 * (1 - cov)^3 + comp + # The cubic exponent on (1-cov) sharply penalizes low coverage: + # at 0% coverage the risk multiplier is 1.0; at 50% it drops to 0.125. 
+ # Higher scores = more complex AND less covered = riskier to change + $uncovered = 1.0 - $lineCoverage + $crapScore = [Math]::Round(($complexity * $complexity * [Math]::Pow($uncovered, 3)) + $complexity, 2) + + $results.Add([PSCustomObject]@{ + Class = $entry.Class + Method = $entry.Method + Signature = $entry.Signature + File = $entry.File + TotalLines = $totalLines + CoveredLines = $coveredLines + LineCoverage = [Math]::Round($lineCoverage * 100, 1) + Complexity = $complexity + CrapScore = $crapScore + }) +} + +$hotspots = $results | Sort-Object CrapScore -Descending | Select-Object -First $TopN +$flagged = $results | Where-Object { $_.CrapScore -gt $CrapThreshold } + +Write-Host "TOTAL_METHODS:$($results.Count)" +Write-Host "FLAGGED_METHODS:$($flagged.Count)" +if ($hotspots) { + Write-Output "HOTSPOTS:$(@($hotspots) | ConvertTo-Json -Compress)" +} else { + Write-Output "HOTSPOTS:[]" +} diff --git a/.github/skills/coverage-analysis/scripts/Extract-MethodCoverage.ps1 b/.github/skills/coverage-analysis/scripts/Extract-MethodCoverage.ps1 new file mode 100644 index 0000000000..999a8273d1 --- /dev/null +++ b/.github/skills/coverage-analysis/scripts/Extract-MethodCoverage.ps1 @@ -0,0 +1,193 @@ +param( + [Parameter(Mandatory=$true)] + [string[]]$CoberturaPath, + + [Parameter(Mandatory=$false)] + [int]$CoverageThreshold = 80, + + [Parameter(Mandatory=$false)] + [int]$BranchThreshold = 70, + + [Parameter(Mandatory=$false)] + [ValidateSet('uncovered', 'below-threshold', 'all')] + [string]$Filter = 'all' +) + +<# +.SYNOPSIS +Extract method-level coverage from Cobertura XML and output as JSON. 
+ +.DESCRIPTION +Parses one or more Cobertura code coverage XML files and extracts per-method coverage metrics: +- Method name and class +- Line coverage percentage +- Branch coverage percentage +- Lines covered / total +- Branches covered / total +- Complexity (if available) + +When multiple files are provided, line hits are merged across files so a line is counted +as covered if any test project covered it. + +Filters by coverage status (uncovered, below threshold, or all). +Output is JSON for easy post-processing into tables, CSV, or other formats. + +.PARAMETER CoberturaPath +Path(s) to Cobertura coverage.cobertura.xml file(s). Accepts multiple paths for multi-test-project merging. + +.PARAMETER CoverageThreshold +Minimum acceptable line coverage percentage. Methods below this threshold are flagged (default: 80). + +.PARAMETER BranchThreshold +Minimum acceptable branch coverage percentage for methods that contain branches (default: 70). + +.PARAMETER Filter +Which methods to include: + 'uncovered' - methods with 0% coverage only + 'below-threshold' - methods with line coverage < CoverageThreshold OR branch coverage < BranchThreshold (for methods with branches) + 'all' - all methods (default) + +.EXAMPLE +PS> & .\Extract-MethodCoverage.ps1 -CoberturaPath "coverage.cobertura.xml" -CoverageThreshold 80 -BranchThreshold 70 -Filter uncovered +Outputs a JSON array of uncovered methods. + +.EXAMPLE +PS> & .\Extract-MethodCoverage.ps1 -CoberturaPath @("tests1/coverage.cobertura.xml","tests2/coverage.cobertura.xml") +Merges coverage from multiple test projects and outputs combined method-level metrics. + +.OUTPUTS +Writes JSON array to stdout. +Sets exit code 0 on success, 2 on missing/invalid file. +#> + +foreach ($p in $CoberturaPath) { + if (-not (Test-Path $p)) { + Write-Error "Cobertura file not found: $p" + exit 2 + } +} + +# Merge methods across all Cobertura files using a stable key (Class|Method|Signature|File). 
+# Line hits and branch data are accumulated so coverage reflects all test projects. +$methodMap = @{} + +foreach ($p in $CoberturaPath) { + try { + [xml]$xml = Get-Content $p -Encoding UTF8 -ErrorAction Stop + } catch { + Write-Error "Failed to parse Cobertura XML: $_" + exit 2 + } + + foreach ($package in $xml.coverage.packages.package) { + foreach ($class in $package.classes.class) { + $className = $class.name + $classFilename = $class.filename + + foreach ($method in $class.methods.method) { + $key = "$className|$($method.name)|$($method.signature)|$classFilename" + + if (-not $methodMap.ContainsKey($key)) { + $complexity = if ($null -ne $method.complexity) { [int]$method.complexity } else { 1 } + if ($complexity -lt 1) { $complexity = 1 } + $methodMap[$key] = @{ + Class = $className + Method = $method.name + Signature = $method.signature + File = $classFilename + Complexity = $complexity + LineHits = @{} + BranchData = @{} + } + } + + # Accumulate line hits across files + foreach ($line in $method.lines.line) { + $lineNo = $line.number + $hits = [int]$line.hits + if ($methodMap[$key].LineHits.ContainsKey($lineNo)) { + $methodMap[$key].LineHits[$lineNo] += $hits + } else { + $methodMap[$key].LineHits[$lineNo] = $hits + } + + # Accumulate branch data + if ($line.branch -eq 'true' -and $line.'condition-coverage') { + if ($line.'condition-coverage' -match '\((\d+)/(\d+)\)') { + $covered = [int]$Matches[1] + $total = [int]$Matches[2] + if ($methodMap[$key].BranchData.ContainsKey($lineNo)) { + # Merge branch coverage across files by accumulating covered branches (capped at total) + $existingCovered = $methodMap[$key].BranchData[$lineNo].Covered + $existingTotal = $methodMap[$key].BranchData[$lineNo].Total + if ($existingTotal -ne $total) { + Write-Warning ("Branch total mismatch for {0} at line {1}: {2} vs {3}" -f $key, $lineNo, $existingTotal, $total) + } + $mergedTotal = [Math]::Max($existingTotal, $total) + $mergedCovered = [Math]::Min($existingCovered + 
$covered, $mergedTotal) + $methodMap[$key].BranchData[$lineNo] = @{ Covered = $mergedCovered; Total = $mergedTotal } + } else { + $methodMap[$key].BranchData[$lineNo] = @{ Covered = $covered; Total = $total } + } + } + } + } + } + } + } +} + +$methods = [System.Collections.Generic.List[PSCustomObject]]::new() + +foreach ($entry in $methodMap.Values) { + $totalLines = $entry.LineHits.Count + $coveredLineCount = ($entry.LineHits.Values | Where-Object { $_ -gt 0 } | Measure-Object).Count + $lineCoveragePercent = if ($totalLines -gt 0) { [math]::Round(($coveredLineCount / $totalLines) * 100, 1) } else { 0 } + + $branchesTotal = 0 + $branchesCovered = 0 + foreach ($bd in $entry.BranchData.Values) { + $branchesCovered += $bd.Covered + $branchesTotal += $bd.Total + } + $branchCoveragePercent = if ($branchesTotal -gt 0) { [math]::Round(($branchesCovered / $branchesTotal) * 100, 1) } else { 0 } + + # Apply filter + if ($Filter -eq 'uncovered' -and $lineCoveragePercent -gt 0) { continue } + if ($Filter -eq 'below-threshold') { + $lineOk = $lineCoveragePercent -ge $CoverageThreshold + $branchOk = ($branchesTotal -eq 0) -or ($branchCoveragePercent -ge $BranchThreshold) + if ($lineOk -and $branchOk) { continue } + } + + $methods.Add([PSCustomObject]@{ + Class = $entry.Class + Method = $entry.Method + Signature = $entry.Signature + File = $entry.File + Complexity = $entry.Complexity + LineCoverage = $lineCoveragePercent + BranchCoverage = $branchCoveragePercent + CoveredLines = $coveredLineCount + TotalLines = $totalLines + UncoveredLines = ($totalLines - $coveredLineCount) + CoveredBranches = $branchesCovered + TotalBranches = $branchesTotal + }) +} +# Sort by uncovered lines descending, then by line coverage ascending +$sorted = $methods | Sort-Object -Property @{Expression='UncoveredLines';Descending=$true}, @{Expression='LineCoverage';Descending=$false}, Class, Method + +# Output as JSON (empty array guard for zero results) +if ($sorted.Count -eq 0) { + Write-Output "[]" +} 
else { + $json = @($sorted) | ConvertTo-Json + Write-Output $json +} + +# Summary +Write-Host "METHODS_FILTERED:$($methods.Count)" -ForegroundColor Green +$uncovered = $methods | Where-Object { $_.LineCoverage -eq 0 } | Measure-Object | Select-Object -ExpandProperty Count +Write-Host "UNCOVERED_METHODS:$uncovered" -ForegroundColor $(if ($uncovered -gt 0) { 'Yellow' } else { 'Green' }) +exit 0 diff --git a/.github/skills/crap-score/SKILL.md b/.github/skills/crap-score/SKILL.md new file mode 100644 index 0000000000..028f783d17 --- /dev/null +++ b/.github/skills/crap-score/SKILL.md @@ -0,0 +1,155 @@ +--- +name: crap-score +description: > + Calculates CRAP (Change Risk Anti-Patterns) score for .NET methods, classes, + or files. Use when the user asks to assess test quality, identify risky + untested code, compute CRAP scores, or evaluate whether complex methods have + sufficient test coverage. Requires code coverage data (Cobertura XML) and + cyclomatic complexity analysis. + DO NOT USE FOR: writing tests, general test execution unrelated to coverage/CRAP + analysis, or general code coverage reporting without CRAP context. +--- + +# CRAP Score Analysis + +Calculate CRAP (Change Risk Anti-Patterns) scores for .NET methods to identify code that is both complex and undertested. + +## Background + +The CRAP score combines **cyclomatic complexity** and **code coverage** into a single metric: + +$$\text{CRAP}(m) = \text{comp}(m)^2 \times (1 - \text{cov}(m))^3 + \text{comp}(m)$$ + +Where: + +- $\text{comp}(m)$ = cyclomatic complexity of method $m$ +- $\text{cov}(m)$ = code coverage ratio (0.0 to 1.0) of method $m$ + +| CRAP Score | Risk Level | Interpretation | +|------------|------------|----------------| +| < 5 | Low | Simple and well-tested | +| 5-15 | Moderate | Acceptable for most code | +| 15-30 | High | Needs more tests or simplification | +| > 30 | Critical | Refactor and add coverage urgently | + +A method with 100% coverage has CRAP = complexity (the minimum). 
A method with 0% coverage has CRAP = complexity^2 + complexity.
+
+## When to Use
+
+- User wants to assess which methods are risky due to low coverage and high complexity
+- User asks for CRAP score of specific methods, classes, or files
+- User wants to prioritize which code to test next
+- User wants to evaluate test quality beyond simple coverage percentages
+
+## When Not to Use
+
+- User just wants to run tests (use `run-tests` skill)
+- User wants to write new tests (use `writing-mstest-tests` skill or general coding assistance)
+- User only wants a coverage percentage without complexity analysis
+
+## Inputs
+
+| Input | Required | Description |
+|-------|----------|-------------|
+| Target scope | Yes | Method name, class name, or file path to analyze |
+| Test project path | No | Path to the test project. Defaults to discovering test projects in the solution. |
+| Source project path | No | Path to the source project under analysis |
+
+## Workflow
+
+### Step 1: Collect code coverage data
+
+If no coverage data exists yet (no Cobertura XML available), **always run `dotnet test` with coverage collection first** and mention the exact command in your response. Do not skip this step -- CRAP scores require coverage data.
+
+Check the test project's `.csproj` for the coverage package, then run the appropriate command:
+
+| Coverage Package | Command | Output Location |
+|---|---|---|
+| `coverlet.collector` | `dotnet test --collect:"XPlat Code Coverage" --results-directory ./TestResults` | Typically under `TestResults/<guid>/coverage.cobertura.xml`. Search recursively under the results directory (for example, `TestResults/**/coverage.cobertura.xml`) or use any explicit coverage path the user provides. |
+| `Microsoft.Testing.Extensions.CodeCoverage` (.NET 9) | `dotnet test -- --coverage --coverage-output-format cobertura --coverage-output ./TestResults` | `--coverage-output` path |
+| `Microsoft.Testing.Extensions.CodeCoverage` (.NET 10+) | `dotnet test --coverage --coverage-output-format cobertura --coverage-output ./TestResults` | `--coverage-output` path |
+
+### Step 2: Compute cyclomatic complexity
+
+Analyze the target source files to determine cyclomatic complexity per method. Count the following decision points (each adds 1 to the base complexity of 1):
+
+| Construct | Example |
+|-----------|---------|
+| `if` | `if (x > 0)` |
+| `else if` | `else if (y < 0)` |
+| `case` (each) | `case 1:` |
+| `for` | `for (int i = 0; ...)` |
+| `foreach` | `foreach (var item in list)` |
+| `while` | `while (running)` |
+| `do...while` | `do { } while (cond)` |
+| `catch` (each) | `catch (Exception ex)` |
+| `&&` | `if (a && b)` |
+| `\|\|` (OR) | `if (a \|\| b)` |
+| `??` | `value ?? fallback` |
+| `?.` | `obj?.Method()` |
+| `? :` (ternary) | `x > 0 ? a : b` |
+| Pattern match arm | `x is > 0 and < 10` |
+
+Base complexity is 1 for every method. Each decision point adds 1.
+
+When analyzing, read the source file and count these constructs per method. Report the breakdown.
+
+### Step 3: Extract per-method coverage from Cobertura XML
+
+Parse the Cobertura XML to find each method's `line-rate` attribute on the target `<method>` element. If `line-rate` is not available at method level, compute it from the `<line>` elements:
+
+$$\text{cov}(m) = \frac{\text{lines with hits} > 0}{\text{total lines}}$$
+
+Method names in Cobertura may differ from source (async methods, lambdas). Match by line ranges when names don't align.
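+
+As a rough illustration of Step 2's counting rules, the sketch below is a deliberately naive regex counter (a hypothetical helper, not part of this skill): it cannot see method boundaries, strings, comments, ternaries, or pattern-match arms, so treat its output as a starting point to refine by reading the source.
+
+```powershell
+# Naive decision-point counter for a single method body (approximation only).
+function Measure-RoughComplexity {
+    param([string]$MethodBody)
+    $patterns = @(
+        '\bif\s*\(',     # also matches the `if` inside `else if`
+        '\bcase\b',
+        '\bfor\s*\(', '\bforeach\s*\(',
+        '\bwhile\s*\(',  # covers `while` and the tail of `do...while`
+        '\bcatch\b',
+        '&&', '\|\|',
+        '\?\?', '\?\.'
+    )
+    $count = 1  # base complexity
+    foreach ($p in $patterns) {
+        $count += ([regex]::Matches($MethodBody, $p)).Count
+    }
+    $count
+}
+
+Measure-RoughComplexity 'if (a && b) { Go(); } else if (c) { Stop(); }'   # 4
+```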
+
+### Step 4: Calculate CRAP scores
+
+For each method in scope, apply the formula:
+
+$$\text{CRAP}(m) = \text{comp}(m)^2 \times (1 - \text{cov}(m))^3 + \text{comp}(m)$$
+
+### Step 5: Present results
+
+Present a sorted table (highest CRAP first):
+
+```text
+| Method                          | Complexity | Coverage | CRAP Score | Risk     |
+|---------------------------------|------------|----------|------------|----------|
+| OrderService.ProcessOrder       | 12         | 45%      | 36.0       | High     |
+| OrderService.ValidateItems      | 8          | 90%      | 8.1        | Moderate |
+| OrderService.CalculateTotal     | 3          | 100%     | 3.0        | Low      |
+```
+
+Include:
+
+- **Summary**: total methods analyzed, how many in each risk category
+- **Top offenders**: methods with CRAP > 30, with specific recommendations
+- **Quick wins**: methods with high complexity but where small coverage improvements would drop the score significantly
+
+### Step 6: Provide actionable recommendations
+
+For high-CRAP methods, suggest one or both:
+
+1. **Add tests** -- identify uncovered branches and suggest specific test cases
+2. **Reduce complexity** -- suggest extract-method refactoring for deeply nested logic
+
+Calculate the **coverage needed** to bring a method below a CRAP threshold of 15:
+
+$$\text{cov}_{\text{needed}} = 1 - \left(\frac{15 - \text{comp}}{\text{comp}^2}\right)^{1/3}$$
+
+This formula only applies when comp < 15. When comp >= 15, the minimum possible CRAP score (at 100% coverage) is comp itself, which already meets or exceeds the threshold. In that case, **coverage alone cannot bring the CRAP score below the threshold** -- the method must be refactored to reduce its cyclomatic complexity first.
+
+Report this as: "To bring `ProcessOrder` (complexity 12) below CRAP 15, increase coverage from 45% to at least 72%." For methods where complexity alone exceeds the threshold, report: "`ComplexMethod` (complexity 18) cannot reach CRAP < 15 through testing alone -- reduce complexity by extracting sub-methods." 
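+
+The Step 4 and Step 6 formulas are easy to spot-check programmatically. A minimal sketch (illustrative Python, with coverage expressed in [0, 1]):
+
+```python
+def crap(complexity, coverage):
+    """CRAP(m) = comp^2 * (1 - cov)^3 + comp."""
+    return complexity ** 2 * (1 - coverage) ** 3 + complexity
+
+def coverage_needed(complexity, threshold=15.0):
+    """Coverage required to bring CRAP below `threshold`.
+    Returns None when complexity >= threshold: testing alone cannot help."""
+    if complexity >= threshold:
+        return None
+    return 1 - ((threshold - complexity) / complexity ** 2) ** (1 / 3)
+
+print(round(crap(12, 0.45), 1))          # 36.0
+print(round(crap(8, 0.90), 1))           # 8.1
+print(crap(3, 1.0))                      # 3.0 -- fully covered: CRAP == complexity
+print(round(coverage_needed(12) * 100))  # 72 -- matches the ProcessOrder report example
+print(coverage_needed(18))               # None -- must refactor first
+```
+
+Note that a complexity-12 method at 0% coverage starts at 12^2 + 12 = 156, so even partial coverage gains produce large score drops: the `(1 - cov)^3` term falls off cubically.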
+ +## Validation + +- Verify that coverage data was collected successfully (Cobertura XML exists and contains data) +- Cross-check that method names in coverage data match the source code +- Confirm CRAP scores by spot-checking the formula on one method manually +- Ensure a 100%-covered method's CRAP equals its complexity exactly + +## Common Pitfalls + +- **Stale coverage data**: Always regenerate coverage before computing CRAP scores. Old coverage files will produce misleading results. +- **Method name mismatches**: Cobertura XML may use mangled/compiler-generated names for async methods, lambdas, or local functions. Match by line ranges when names don't align. +- **Generated code**: Exclude auto-generated files (e.g., `*.Designer.cs`, `*.g.cs`) from analysis unless explicitly requested. diff --git a/.github/skills/dotnet-test-frameworks/SKILL.md b/.github/skills/dotnet-test-frameworks/SKILL.md new file mode 100644 index 0000000000..2e4887291e --- /dev/null +++ b/.github/skills/dotnet-test-frameworks/SKILL.md @@ -0,0 +1,117 @@ +--- +name: dotnet-test-frameworks +description: "Reference data for .NET test framework detection patterns, assertion APIs, skip annotations, setup/teardown methods, and common test smell indicators across MSTest, xUnit, NUnit, and TUnit. DO NOT USE directly — loaded by test analysis skills (test-anti-patterns, exp-test-smell-detection, exp-assertion-quality, exp-test-maintainability, exp-test-tagging) when they need framework-specific lookup tables." +user-invocable: false +--- + +# .NET Test Framework Reference + +Language-specific detection patterns for .NET test frameworks (MSTest, xUnit, NUnit, TUnit). 
+
+## Test File Identification
+
+| Framework | Test class markers | Test method markers |
+| --------- | ------------------ | ------------------- |
+| MSTest | `[TestClass]` | `[TestMethod]`, `[DataTestMethod]` |
+| xUnit | *(none — convention-based)* | `[Fact]`, `[Theory]` |
+| NUnit | `[TestFixture]` | `[Test]`, `[TestCase]`, `[TestCaseSource]` |
+| TUnit | `[ClassDataSource]` | `[Test]` |
+
+## Assertion APIs by Framework
+
+| Category | MSTest | xUnit | NUnit |
+| -------- | ------ | ----- | ----- |
+| Equality | `Assert.AreEqual` | `Assert.Equal` | `Assert.That(x, Is.EqualTo(y))` |
+| Boolean | `Assert.IsTrue` / `Assert.IsFalse` | `Assert.True` / `Assert.False` | `Assert.That(x, Is.True)` |
+| Null | `Assert.IsNull` / `Assert.IsNotNull` | `Assert.Null` / `Assert.NotNull` | `Assert.That(x, Is.Null)` |
+| Exception | `Assert.Throws<T>()` / `Assert.ThrowsExactly<T>()` | `Assert.Throws<T>()` | `Assert.That(() => ..., Throws.TypeOf<T>())` |
+| Collection | `CollectionAssert.Contains` | `Assert.Contains` | `Assert.That(col, Has.Member(x))` |
+| String | `StringAssert.Contains` | `Assert.Contains(str, sub)` | `Assert.That(str, Does.Contain(sub))` |
+| Type | `Assert.IsInstanceOfType` | `Assert.IsAssignableFrom<T>` | `Assert.That(x, Is.InstanceOf<T>())` |
+| Inconclusive | `Assert.Inconclusive()` | *skip via `[Fact(Skip)]`* | `Assert.Inconclusive()` |
+| Fail | `Assert.Fail()` | `Assert.Fail()` (.NET 10+) | `Assert.Fail()` |
+
+Third-party assertion libraries: `Should*` (Shouldly), `.Should()` (FluentAssertions / AwesomeAssertions), `Verify()` (Verify). 
+
+## Sleep/Delay Patterns
+
+| Pattern | Example |
+| ------- | ------- |
+| Thread sleep | `Thread.Sleep(2000)` |
+| Task delay | `await Task.Delay(1000)` |
+| SpinWait | `SpinWait.SpinUntil(() => condition, timeout)` |
+
+## Skip/Ignore Annotations
+
+| Framework | Annotation | With reason |
+| --------- | ---------- | ----------- |
+| MSTest | `[Ignore]` | `[Ignore("reason")]` |
+| xUnit | `[Fact(Skip = "reason")]` | *(reason is required)* |
+| NUnit | `[Ignore("reason")]` | *(reason is required)* |
+| TUnit | `[Skip("reason")]` | *(reason is required)* |
+| Conditional | `#if false` / `#if NEVER` | *(no reason possible)* |
+
+## Exception Handling — Idiomatic Alternatives
+
+When a test uses `try`/`catch` to verify exceptions, suggest the framework-native alternative (the exception type shown is illustrative):
+
+**MSTest:**
+
+```csharp
+// Instead of try/catch (matches exact type):
+var ex = Assert.ThrowsExactly<InvalidOperationException>(
+    () => processor.ProcessOrder(emptyOrder));
+Assert.AreEqual("Order must contain at least one item", ex.Message);
+
+// Or (also matches derived types):
+var ex = Assert.Throws<InvalidOperationException>(
+    () => processor.ProcessOrder(emptyOrder));
+Assert.AreEqual("Order must contain at least one item", ex.Message);
+```
+
+**xUnit:**
+
+```csharp
+var ex = Assert.Throws<InvalidOperationException>(
+    () => processor.ProcessOrder(emptyOrder));
+Assert.Equal("Order must contain at least one item", ex.Message);
+```
+
+**NUnit:**
+
+```csharp
+var ex = Assert.Throws<InvalidOperationException>(
+    () => processor.ProcessOrder(emptyOrder));
+Assert.That(ex.Message, Is.EqualTo("Order must contain at least one item"));
+```
+
+## Mystery Guest — Common .NET Patterns
+
+| Smell indicator | What to look for |
+| --------------- | ---------------- |
+| File system | `File.ReadAllText`, `File.Exists`, `File.WriteAllBytes`, `Directory.GetFiles`, `Path.Combine` with hard-coded paths |
+| Database | `SqlConnection`, `DbContext` (without in-memory provider), `SqlCommand` |
+| Network | `HttpClient` without `HttpMessageHandler` override, `WebRequest`, `TcpClient` |
+| Environment | 
`Environment.GetEnvironmentVariable`, `Environment.CurrentDirectory` |
+| Acceptable | `MemoryStream`, `StringReader`, `InMemory` database providers, custom `DelegatingHandler` |
+
+## Integration Test Markers
+
+Recognize these as integration tests (adjust smell severity accordingly):
+
+- Class name contains `Integration`, `E2E`, `EndToEnd`, or `Acceptance`
+- `[TestCategory("Integration")]` (MSTest)
+- `[Trait("Category", "Integration")]` (xUnit)
+- `[Category("Integration")]` (NUnit)
+- Project name ending in `.IntegrationTests` or `.E2ETests`
+
+## Setup/Teardown Methods
+
+| Framework | Setup | Teardown |
+| --------- | ----- | -------- |
+| MSTest | `[TestInitialize]` or constructor | `[TestCleanup]` or `IDisposable.Dispose` / `IAsyncDisposable.DisposeAsync` |
+| xUnit | constructor | `IDisposable.Dispose` / `IAsyncDisposable.DisposeAsync` |
+| NUnit | `[SetUp]` | `[TearDown]` |
+| MSTest (class) | `[ClassInitialize]` | `[ClassCleanup]` |
+| NUnit (class) | `[OneTimeSetUp]` | `[OneTimeTearDown]` |
+| xUnit (class) | `IClassFixture<T>` | fixture's `Dispose` |
diff --git a/.github/skills/filter-syntax/SKILL.md b/.github/skills/filter-syntax/SKILL.md
new file mode 100644
index 0000000000..0c1146702d
--- /dev/null
+++ b/.github/skills/filter-syntax/SKILL.md
@@ -0,0 +1,172 @@
+---
+name: filter-syntax
+description: "Reference data for test filter syntax across all platform and framework combinations: VSTest --filter expressions, MTP filters for MSTest/NUnit/xUnit v3/TUnit, and VSTest-to-MTP filter translation. DO NOT USE directly — loaded by run-tests, mtp-hot-reload, and migrate-vstest-to-mtp when they need filter syntax."
+user-invocable: false
+---
+
+# Test Filter Syntax Reference
+
+Filter syntax depends on the **platform** and **test framework**. 
+
+## VSTest filters (MSTest, xUnit v2, NUnit on VSTest)
+
+```bash
+dotnet test --filter <Expression>
+```
+
+Expression syntax: `<property><operator><value>[|&<Expression>]`
+
+**Operators:**
+
+| Operator | Meaning |
+|----------|---------|
+| `=` | Exact match |
+| `!=` | Not exact match |
+| `~` | Contains |
+| `!~` | Does not contain |
+
+**Combinators:** `|` (OR), `&` (AND). Parentheses for grouping: `(A|B)&C`
+
+**Supported properties by framework:**
+
+| Framework | Properties |
+|-----------|-----------|
+| MSTest | `FullyQualifiedName`, `Name`, `ClassName`, `Priority`, `TestCategory` |
+| xUnit | `FullyQualifiedName`, `DisplayName`, `Traits` |
+| NUnit | `FullyQualifiedName`, `Name`, `Priority`, `TestCategory` |
+
+An expression without an operator is treated as `FullyQualifiedName~<value>`.
+
+**Examples (VSTest):**
+
+```bash
+# Run tests whose name contains "LoginTest"
+dotnet test --filter "Name~LoginTest"
+
+# Run a specific test class
+dotnet test --filter "ClassName=MyNamespace.MyTestClass"
+
+# Run tests in a category
+dotnet test --filter "TestCategory=Integration"
+
+# Exclude a category
+dotnet test --filter "TestCategory!=Slow"
+
+# Combine: class AND category
+dotnet test --filter "ClassName=MyNamespace.MyTestClass&TestCategory=Unit"
+
+# Either of two classes
+dotnet test --filter "ClassName=MyNamespace.ClassA|ClassName=MyNamespace.ClassB"
+```
+
+## MTP filters — MSTest and NUnit
+
+MSTest and NUnit on MTP use the **same `--filter` syntax** as VSTest (same properties, operators, and combinators). 
The only difference is how the flag is passed:
+
+```bash
+# .NET SDK 8/9 (after --)
+dotnet test -- --filter "Name~LoginTest"
+
+# .NET SDK 10+ (direct)
+dotnet test --filter "Name~LoginTest"
+```
+
+## MTP filters — xUnit (v3)
+
+xUnit v3 on MTP uses **framework-specific filter flags** instead of the generic `--filter` expression:
+
+| Flag | Description |
+|------|-------------|
+| `--filter-class "name"` | Run all tests in a given class |
+| `--filter-not-class "name"` | Exclude all tests in a given class |
+| `--filter-method "name"` | Run a specific test method |
+| `--filter-not-method "name"` | Exclude a specific test method |
+| `--filter-namespace "name"` | Run all tests in a namespace |
+| `--filter-not-namespace "name"` | Exclude all tests in a namespace |
+| `--filter-trait "name=value"` | Run tests with a matching trait |
+| `--filter-not-trait "name=value"` | Exclude tests with a matching trait |
+
+Multiple values can be specified with a single flag: `--filter-class Foo Bar`.
+
+```bash
+# .NET SDK 8/9
+dotnet test -- --filter-class "MyNamespace.LoginTests"
+
+# .NET SDK 10+
+dotnet test --filter-class "MyNamespace.LoginTests"
+
+# Combine: namespace + trait
+dotnet test --filter-namespace "MyApp.Tests.Integration" --filter-trait "Category=Smoke"
+```
+
+### xUnit v3 query filter language
+
+For complex expressions, use `--filter-query` with a path-segment syntax:
+
+```text
+/<assemblyName>/<namespace>/<class>/<method>[traitName=traitValue]
+```
+
+Each segment matches against: assembly name, namespace, class name, method name. Use `*` for "match all" in any segment.
+
+```shell
+# xUnit.net v3 MTP — using query language (assembly/namespace/class/method[trait])
+dotnet test -- --filter-query "/*/*/*IntegrationTests*/*[Category=Smoke]"
+```
+
+## MTP filters — TUnit
+
+TUnit uses `--treenode-filter` with a path-based syntax:
+
+```text
+--treenode-filter "/<assembly>/<namespace>/<class>/<test>"
+```
+
+Wildcards (`*`) are supported in any segment. 
Filter operators can be appended to test names for property-based filtering. + +| Operator | Meaning | +|----------|---------| +| `*` | Wildcard match | +| `=` | Exact property match (e.g., `[Category=Unit]`) | +| `!=` | Exclude property value | +| `&` | AND (combine conditions) | +| `\|` | OR (within a segment, requires parentheses) | + +**Examples (TUnit):** + +```bash +# All tests in a class +dotnet run --treenode-filter "/*/*/LoginTests/*" + +# A specific test +dotnet run --treenode-filter "/*/*/*/AcceptCookiesTest" + +# By namespace prefix (wildcard) +dotnet run --treenode-filter "/*/MyProject.Tests.Api*/*/*" + +# By custom property +dotnet run --treenode-filter "/*/*/*/*[Category=Smoke]" + +# Exclude by property +dotnet run --treenode-filter "/*/*/*/*[Category!=Slow]" + +# OR across classes +dotnet run --treenode-filter "/*/*/(LoginTests)|(SignupTests)/*" + +# Combined: namespace + property +dotnet run --treenode-filter "/*/MyProject.Tests.Integration/*/*/*[Priority=Critical]" +``` + +## VSTest → MTP filter translation (for migration) + +**MSTest, NUnit, and xUnit.net v2 (with `YTest.MTP.XUnit2`)**: The VSTest `--filter` syntax is identical on both VSTest and MTP. No changes needed. + +**xUnit.net v3 (native MTP)**: xUnit.net v3 does NOT support the VSTest `--filter` syntax on MTP. 
Translate filters using xUnit.net v3's native options: + +| VSTest `--filter` syntax | xUnit.net v3 MTP equivalent | Notes | +|---|---|---| +| `FullyQualifiedName~ClassName` | `--filter-class *ClassName*` | Wildcards required for substring match | +| `FullyQualifiedName=Ns.Class.Method` | `--filter-method Ns.Class.Method` | Exact match on fully qualified method | +| `Name=MethodName` | `--filter-method *MethodName*` | Wildcards for substring match | +| `Category=Value` (trait) | `--filter-trait "Category=Value"` | Filter by trait name/value pair | +| Complex expressions | `--filter-query "expr"` | Uses xUnit.net query filter language (see above) | diff --git a/.github/skills/migrate-mstest-v1v2-to-v3/SKILL.md b/.github/skills/migrate-mstest-v1v2-to-v3/SKILL.md new file mode 100644 index 0000000000..85377105d9 --- /dev/null +++ b/.github/skills/migrate-mstest-v1v2-to-v3/SKILL.md @@ -0,0 +1,197 @@ +--- +name: migrate-mstest-v1v2-to-v3 +description: > + Migrate MSTest v1 or v2 test project to MSTest v3. Use when user says + "upgrade MSTest", "upgrade to MSTest v3", "migrate to MSTest v3", + "update test framework", "modernize tests", "MSTest v3 migration", + "MSTest compatibility", "MSTest v2 to v3", or build errors after + updating MSTest packages from 1.x/2.x to 3.x. + USE FOR: upgrading from MSTest v1 assembly references + (Microsoft.VisualStudio.QualityTools.UnitTestFramework) or MSTest v2 NuGet + (MSTest.TestFramework 1.x-2.x) to MSTest v3, fixing assertion overload + errors (AreEqual/AreNotEqual), updating DataRow constructors, replacing + .testsettings with .runsettings, timeout behavior changes, target framework + compatibility (.NET 5 dropped -- use .NET 6+; .NET Fx older than 4.6.2 dropped), + adopting MSTest.Sdk. + First step toward MSTest v4 -- after this, use migrate-mstest-v3-to-v4. 
+  DO NOT USE FOR: migrating to MSTest v4 (use migrate-mstest-v3-to-v4),
+  migrating between frameworks (MSTest to xUnit/NUnit), or general .NET
+  upgrades unrelated to MSTest.
+---
+
+# MSTest v1/v2 -> v3 Migration
+
+Migrate a test project from MSTest v1 (assembly references) or MSTest v2 (NuGet 1.x-2.x) to MSTest v3. MSTest v3 is **not binary compatible** with v1/v2 -- libraries compiled against v1/v2 must be recompiled.
+
+## When to Use
+
+- Project references `Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll` (MSTest v1)
+- Project uses `MSTest.TestFramework` / `MSTest.TestAdapter` NuGet 1.x or 2.x
+- Resolving build errors after updating MSTest packages from v1/v2 to v3
+- Replacing `.testsettings` with `.runsettings`
+- Adopting MSTest.Sdk or in-assembly parallel execution
+
+## When Not to Use
+
+- Project already uses MSTest v3 (3.x packages)
+- Upgrading v3 to v4 -- use `migrate-mstest-v3-to-v4`
+- Migrating between frameworks (MSTest to xUnit/NUnit)
+
+## Inputs
+
+| Input | Required | Description |
+|-------|----------|-------------|
+| Project or solution path | Yes | The `.csproj`, `.sln`, or `.slnx` entry point containing MSTest test projects |
+| Build command | No | How to build (e.g., `dotnet build`, a repo build script). Auto-detect if not provided |
+| Test command | No | How to run tests (e.g., `dotnet test`). Auto-detect if not provided |
+
+## Breaking Changes Summary
+
+MSTest v3 introduces these breaking changes from v1/v2. Address only the ones relevant to the project:
+
+| Breaking Change | Impact | Fix |
+|---|---|---|
+| `Assert.AreEqual(object, object)` overload removed | Compile error on untyped assertions | Add generic type: `Assert.AreEqual<T>(expected, actual)`. 
Same for `AreNotEqual`, `AreSame`, `AreNotSame` |
+| `DataRow` strict type matching | Runtime/compile errors when argument types don't match parameter types exactly | Change literals to exact types: `1` for int, `1L` for long, `1.0f` for float |
+| `DataRow` max 16 constructor parameters (early v3) | Compile error if >16 args; fixed in later v3 versions | Update to latest 3.x, or refactor test / wrap extra params in array |
+| `.testsettings` / `<ForcedLegacyMode>` no longer supported | Settings silently ignored | Delete `.testsettings`, create `.runsettings` with equivalent config |
+| Timeout behavior unified across .NET Core / Framework | Tests with `[Timeout]` may behave differently | Verify timeout values; adjust if needed |
+| Dropped target frameworks: .NET 5, .NET Fx < 4.6.2, netstandard1.0, UWP < 16299, WinUI < 18362 | Build error | Update TFM: .NET 5 -> net8.0 (LTS) or net6.0+, netfx -> net462+, netstandard1.0 -> netstandard2.0. Note: net6.0, net8.0, net9.0 are all supported |
+| Not binary compatible with v1/v2 | Libraries compiled against v1/v2 must be recompiled | Recompile all dependencies against v3 |
+
+## Response Guidelines
+
+- **Always identify the current version first**: Before recommending any migration steps, explicitly state the current MSTest version detected in the project (e.g., "Your project uses MSTest v2 (2.2.10)" or "This is an MSTest v1 project using QualityTools assembly references"). This grounds the migration advice and confirms you've read the project files.
+- **Focused fix requests** (user has specific compilation errors after upgrading): Address only the relevant breaking change from the table above. Show a concise before/after fix. Do not walk through the full migration workflow.
+- **Specific feature migration** (user asks about one aspect like .testsettings, DataRow, or assertions): Address only that specific aspect with a concrete fix. Do not walk through the entire migration workflow or unrelated breaking changes. 
+- **"What to expect" questions** (user asks about breaking changes before upgrading): Present only the breaking changes that are clearly relevant to the user's visible code and configuration. For each, give a one-line fix summary. Do not include every possible breaking change -- only the ones that apply. Do not walk through the full workflow. +- **Full migration requests** (user wants complete migration): Follow the complete workflow below. +- **Comparison questions** (user asks about v1 vs v2 differences): Explain concisely -- v1 uses assembly references and requires removing them first; v2 uses NuGet and just needs a version bump. Both converge on the same v3 packages and breaking changes. + +## Migration Paths + +- **MSTest v1 (assembly reference to QualityTools)**: Remove the assembly reference (Step 2), add v3 NuGet packages (Step 3), fix breaking changes (Step 5). +- **MSTest v2 (NuGet packages 1.x-2.x)**: Update package versions to 3.x (Step 3), fix breaking changes (Step 5). No assembly reference removal needed. + +Both paths converge at Step 3 -- the same v3 packages and breaking changes apply regardless of starting version. + +## Workflow + +### Step 1: Assess the project + +1. Identify which MSTest version is currently in use: + - **Assembly reference**: Look for `Microsoft.VisualStudio.QualityTools.UnitTestFramework` in project references -> MSTest v1 + - **NuGet packages**: Check `MSTest.TestFramework` and `MSTest.TestAdapter` package versions -> v1 if 1.x, v2 if 2.x +2. Check if the project uses a `.testsettings` file (indicated by `` in test configuration) +3. Check if the target framework is dropped in v3 (see Step 4) +4. Run a clean build to establish a baseline of existing errors/warnings + +### Step 2: Remove v1 assembly references (if applicable) + +If the project uses MSTest v1 via assembly references: + +1. 
Remove the reference to `Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll`
+   - In SDK-style projects, remove the `<Reference>` element from the `.csproj`
+   - In non-SDK-style projects, remove via Visual Studio Solution Explorer -> References -> right-click -> Remove
+2. Save the project file
+
+### Step 3: Update packages to MSTest v3
+
+Choose one of these approaches:
+
+**Option A -- Install the MSTest metapackage (recommended):**
+
+Remove individual `MSTest.TestFramework` and `MSTest.TestAdapter` package references and replace with the unified `MSTest` metapackage:
+
+```xml
+<PackageReference Include="MSTest" Version="3.8.0" />
+```
+
+Also ensure `Microsoft.NET.Test.Sdk` is referenced (or update individual `MSTest.TestFramework` + `MSTest.TestAdapter` packages to 3.8.0 if you prefer not using the metapackage).
+
+**Option B -- Use MSTest.Sdk (SDK-style projects only):**
+
+Change `<Project Sdk="Microsoft.NET.Sdk">` to `<Project Sdk="MSTest.Sdk/3.8.0">`. MSTest.Sdk automatically provides MSTest.TestFramework, MSTest.TestAdapter, MSTest.Analyzers, and Microsoft.NET.Test.Sdk.
+
+> **Important**: MSTest.Sdk defaults to Microsoft.Testing.Platform (MTP) instead of VSTest. For VSTest compatibility (e.g., `vstest.console` in CI), add `<UseVSTest>true</UseVSTest>`.
+
+When switching to MSTest.Sdk, remove these (SDK provides them automatically):
+
+- **Packages**: `MSTest`, `MSTest.TestFramework`, `MSTest.TestAdapter`, `MSTest.Analyzers`, `Microsoft.NET.Test.Sdk`
+- **Properties**: `<EnableMSTestRunner>`, `<OutputType>Exe</OutputType>`, `<GenerateProgramFile>false</GenerateProgramFile>`, `<TestingPlatformDotnetTestSupport>true</TestingPlatformDotnetTestSupport>`
+
+### Step 4: Update target frameworks if needed
+
+MSTest v3 supports .NET 6+, .NET Core 3.1, .NET Framework 4.6.2+, .NET Standard 2.0, UWP 16299+, and WinUI 18362+. If the project targets a dropped framework version, update to a supported one:
+
+| Dropped | Recommended replacement |
+|---------|------------------------|
+| .NET 5 | .NET 8.0 (current LTS) or .NET 6+ |
+| .NET Framework < 4.6.2 | .NET Framework 4.6.2 |
+| .NET Standard 1.0 | .NET Standard 2.0 |
+| UWP < 16299 | UWP 16299 |
+| WinUI < 18362 | WinUI 18362 |
+
+> **Note**: .NET 6, .NET 8, and .NET 9 are all supported by MSTest v3. 
Do not change TFMs that are already supported.
+
+### Step 5: Resolve build errors and breaking changes
+
+Run `dotnet build` and fix errors using the Breaking Changes Summary above. Key fixes:
+
+**Assertion overloads** -- MSTest v3 removed `Assert.AreEqual(object, object)` and `Assert.AreNotEqual(object, object)`. Add explicit generic type parameters:
+
+```csharp
+// Before (v1/v2)                      // After (v3)
+Assert.AreEqual(expected, actual);     -> Assert.AreEqual<T>(expected, actual);
+Assert.AreNotEqual(a, b);              -> Assert.AreNotEqual<T>(a, b);
+Assert.AreSame(expected, actual);      -> Assert.AreSame<T>(expected, actual);
+```
+
+**DataRow strict type matching** -- argument types must exactly match parameter types. Implicit conversions that worked in v2 fail in v3:
+
+```csharp
+// Error: 1L (long) won't convert to int parameter -> fix: use 1 (int)
+// Error: 1.0 (double) won't convert to float parameter -> fix: use 1.0f (float)
+```
+
+**Timeout behavior** -- unified across .NET Core and .NET Framework. Verify `[Timeout]` values still work.
+
+### Step 6: Replace .testsettings with .runsettings
+
+The `.testsettings` file and `<ForcedLegacyMode>` are no longer supported in MSTest v3. **Delete the `.testsettings` file** and create a `.runsettings` file -- do not keep both.
+
+Key mappings:
+
+| .testsettings | .runsettings equivalent |
+|---|---|
+| `TestTimeout` property | `<TestTimeout>30000</TestTimeout>` |
+| Deployment config | `<DeploymentEnabled>true</DeploymentEnabled>` or remove |
+| Assembly resolution settings | Remove -- not needed in modern .NET |
+| Data collectors | `<DataCollectors>` section |
+
+> **Important**: Map timeout to `<TestTimeout>` (per-test), **not** `<TestSessionTimeout>` (session-wide). Remove `<ForcedLegacyMode>` entirely.
+
+### Step 7: Verify
+
+1. Run `dotnet build` -- confirm zero errors and review any new warnings
+2. Run `dotnet test` -- confirm all tests pass
+3. Compare test results (pass/fail counts) to the pre-migration baseline
+4. 
Check that no tests were silently dropped due to discovery changes + +## Validation + +- [ ] MSTest v3 packages (or MSTest.Sdk) correctly referenced; v1/v2 references removed +- [ ] Project builds with zero errors +- [ ] All tests pass (`dotnet test`) -- compare pass/fail counts to pre-migration baseline +- [ ] `.testsettings` replaced with `.runsettings` (if applicable) + +## Next Step + +After v3 migration, use `migrate-mstest-v3-to-v4` for MSTest v4. + +## Common Pitfalls + +| Pitfall | Solution | +|---------|----------| +| Missing `Microsoft.NET.Test.Sdk` | Add package reference -- required for test discovery with VSTest | +| MSTest.Sdk tests not found by `vstest.console` | MSTest.Sdk defaults to Microsoft.Testing.Platform; add explicit `Microsoft.NET.Test.Sdk` for VSTest compatibility | diff --git a/.github/skills/migrate-mstest-v3-to-v4/SKILL.md b/.github/skills/migrate-mstest-v3-to-v4/SKILL.md new file mode 100644 index 0000000000..b58959dbc3 --- /dev/null +++ b/.github/skills/migrate-mstest-v3-to-v4/SKILL.md @@ -0,0 +1,480 @@ +--- +name: migrate-mstest-v3-to-v4 +description: > + Migrate an MSTest v3 test project to MSTest v4. Use when user says + "upgrade to MSTest v4", "update to latest MSTest", "MSTest 4 migration", + "MSTest v4 breaking changes", "MSTest v4 compatibility", or has build errors + after updating MSTest packages from 3.x to 4.x. Also use for target + framework compatibility (e.g. net6.0/net7.0 support with MSTest v4). + USE FOR: upgrading MSTest packages from 3.x to 4.x, fixing source breaking + changes (Execute -> ExecuteAsync, CallerInfo constructor, ClassCleanupBehavior + removal, TestContext.Properties, Assert API changes, ExpectedExceptionAttribute + removal, TestTimeout enum removal), resolving behavioral changes + (TreatDiscoveryWarningsAsErrors, TestContext lifecycle, TestCase.Id changes, + MSTest.Sdk MTP changes), handling dropped TFMs (net5.0-net7.0 dropped, + only net8.0+, net462, uap10.0 supported). 
+ DO NOT USE FOR: migrating from MSTest v1/v2 to v3 (use migrate-mstest-v1v2-to-v3 + first), migrating between test frameworks, or general .NET upgrades unrelated + to MSTest. +--- + +# MSTest v3 -> v4 Migration + +Migrate a test project from MSTest v3 to MSTest v4. The outcome is a project using MSTest v4 that builds cleanly, passes tests, and accounts for every source-incompatible and behavioral change. MSTest v4 is **not binary compatible** with MSTest v3 -- any library compiled against v3 must be recompiled against v4. + +## When to Use + +- Upgrading `MSTest.TestFramework`, `MSTest.TestAdapter`, or `MSTest` metapackage from 3.x to 4.x +- Upgrading `MSTest.Sdk` from 3.x to 4.x +- Fixing build errors after updating to MSTest v4 packages +- Resolving behavioral changes in test execution after upgrading to MSTest v4 +- Updating custom `TestMethodAttribute` or `ConditionBaseAttribute` implementations for v4 + +## When Not to Use + +- The project already uses MSTest v4 and builds cleanly -- migration is done +- Upgrading from MSTest v1 or v2 -- use `migrate-mstest-v1v2-to-v3` first, then return here +- The project does not use MSTest +- Migrating between test frameworks (e.g., MSTest to xUnit or NUnit) + +## Inputs + +| Input | Required | Description | +|-------|----------|-------------| +| Project or solution path | Yes | The `.csproj`, `.sln`, or `.slnx` entry point containing MSTest test projects | +| Build command | No | How to build (e.g., `dotnet build`, a repo build script). Auto-detect if not provided | +| Test command | No | How to run tests (e.g., `dotnet test`). Auto-detect if not provided | + +## Response Guidelines + +- **Always identify the current version first**: Before recommending any migration steps, explicitly state the current MSTest version detected in the project (e.g., "Your project uses MSTest v3 (3.8.0)"). This confirms you've read the project files and grounds the migration advice. 
+- **Focused fix requests** (user has specific compilation errors after upgrading): Address only the relevant breaking changes from Step 3. **Always provide concrete fixed code** using the user's actual types and method names -- show a complete, copy-pasteable code snippet, not just a description of what to change. For custom `TestMethodAttribute` subclasses, show the full fixed class including CallerInfo propagation to the base constructor. Mention any related analyzer that could have caught this earlier (e.g., MSTEST0006 for ExpectedException). Do not walk through the entire migration workflow.
+- **"What to expect" questions** (user asks about breaking changes before upgrading): Present ALL major breaking changes from the Step 3 quick-lookup table -- not just the ones visible in the current code. For each, provide a one-line fix summary. Also mention key behavioral changes from Step 4 (especially TestCase.Id history impact and TreatDiscoveryWarningsAsErrors default). If project code is available, highlight which changes apply directly.
+- **Full migration requests** (user wants complete migration): Follow the complete workflow below.
+- **Behavioral/runtime symptom reports** (user describes test execution differences without build errors): Match described symptoms to the behavioral changes table in Step 4. Provide targeted, symptom-specific advice. Mention other behavioral changes the user should watch for. Do not walk through source breaking changes unless the user also has build errors.
+- **CI/test-discovery issues** (tests not discovered, vstest.console stopped working, CI pipeline failures after upgrading): Focus on 4.5 (MSTest.Sdk defaults to MTP mode, which does not include Microsoft.NET.Test.Sdk -- needed for vstest.console) and 4.4 (TreatDiscoveryWarningsAsErrors). Explain the root cause clearly and give both fix options (add Microsoft.NET.Test.Sdk package or switch to `dotnet test`). Do not walk through the full migration workflow. 
+- **Explanatory questions** (user asks "is this a known change?", "what else should I watch out for?"): Explain the relevant changes and advise. Mention related changes the user might encounter next. Do not prescribe a full migration procedure.
+
+## Workflow
+
+> **Commit strategy:** Commit at each logical boundary -- after updating packages (Step 2), after resolving source breaking changes (Step 3), after addressing behavioral changes (Step 4). This keeps each commit focused and reviewable.
+
+### Step 1: Assess the project
+
+1. Identify the current MSTest version by checking package references for `MSTest`, `MSTest.TestFramework`, `MSTest.TestAdapter`, or `MSTest.Sdk` in `.csproj`, `Directory.Build.props`, or `Directory.Packages.props`.
+2. Confirm the project is on MSTest v3 (3.x). If on v1 or v2, use `migrate-mstest-v1v2-to-v3` first.
+3. Check target framework(s) -- MSTest v4 drops support for .NET Core 3.1 through .NET 7. Supported target frameworks are: **net8.0**, **net9.0**, **net462** (.NET Framework 4.6.2+), **uap10.0.16299** (UWP), **net9.0-windows10.0.17763.0** (modern UWP), and **net8.0-windows10.0.18362.0** (WinUI).
+4. Check for custom `TestMethodAttribute` subclasses -- these require changes in v4.
+5. Check for usages of `ExpectedExceptionAttribute` -- removed in v4 (deprecated since v3 with analyzer MSTEST0006).
+6. Check for usages of `Assert.ThrowsException` (deprecated) -- removed in v4.
+7. Run a clean build to establish a baseline of existing errors/warnings.
+
+### Step 2: Update packages to MSTest v4
+
+**If using the MSTest metapackage:**
+
+```xml
+<PackageReference Include="MSTest" Version="4.0.0" />
+```
+
+**If using individual packages:**
+
+```xml
+<PackageReference Include="MSTest.TestFramework" Version="4.0.0" />
+<PackageReference Include="MSTest.TestAdapter" Version="4.0.0" />
+```
+
+**If using MSTest.Sdk:**
+
+```xml
+<Project Sdk="MSTest.Sdk/4.0.0">
+```
+
+Run `dotnet restore`, then `dotnet build`. Collect all errors for Step 3.
+
+### Step 3: Resolve source breaking changes
+
+Work through compilation errors systematically. 
Use this quick-lookup table to identify all applicable changes, then apply each fix:
+
+| Error / Pattern in code | Breaking change | Fix |
+|---|---|---|
+| Custom `TestMethodAttribute` overrides `Execute` | Execute removed | Change to `ExecuteAsync` returning `Task<TestResult[]>` (3.1) |
+| `[TestMethod("name")]` or custom attribute constructor | CallerInfo params added | Use `DisplayName = "name"` named param; propagate CallerInfo in subclasses (3.2) |
+| `ClassCleanupBehavior.EndOfClass` | Enum removed | Remove argument: just `[ClassCleanup]` (3.3) |
+| `TestContext.Properties.Contains("key")` | `Properties` is now `IDictionary<string, object?>` | Change to `ContainsKey("key")` (3.4) |
+| `[Timeout(TestTimeout.Infinite)]` | `TestTimeout` enum removed | Replace with `[Timeout(int.MaxValue)]` (3.5) |
+| `TestContext.ManagedType` | Property removed | Use `FullyQualifiedTestClassName` (3.6) |
+| `Assert.AreEqual(a, b, "msg {0}", arg)` | Message+params overloads removed | Use string interpolation: `$"msg {arg}"` (3.7) |
+| `Assert.ThrowsException<T>(...)` | Renamed | Replace with `Assert.ThrowsExactly<T>(...)` or `Assert.Throws<T>(...)` (3.7) |
+| `Assert.IsInstanceOfType<T>(obj, out var t)` | Out parameter removed | Use `var t = Assert.IsInstanceOfType<T>(obj)` (3.7) |
+| `[ExpectedException(typeof(T))]` | Attribute removed | Move assertion into test body: `Assert.ThrowsExactly<T>(() => ...)` (3.8) |
+| Project targets net5.0, net6.0, or net7.0 | TFM dropped | Change to net8.0 or net9.0 (3.9) |
+
+> **Important**: Scan the entire project for ALL patterns above before starting fixes. Multiple breaking changes often coexist in the same project.
+
+#### 3.1 TestMethodAttribute.Execute -> ExecuteAsync
+
+If you have custom `TestMethodAttribute` subclasses that override `Execute`, change to `ExecuteAsync`.
This change was made because the v3 synchronous `Execute` API caused deadlocks when test code used `async`/`await` internally -- the synchronous wrapper would block the thread while the async operation needed that same thread to complete.
+
+```csharp
+// Before (v3)
+public sealed class MyTestMethodAttribute : TestMethodAttribute
+{
+    public override TestResult[] Execute(ITestMethod testMethod)
+    {
+        // custom logic
+        return result;
+    }
+}
+
+// After (v4) -- Option A: wrap synchronous logic with Task.FromResult
+public sealed class MyTestMethodAttribute : TestMethodAttribute
+{
+    public override Task<TestResult[]> ExecuteAsync(ITestMethod testMethod)
+    {
+        // custom logic (synchronous)
+        return Task.FromResult(result);
+    }
+}
+
+// After (v4) -- Option B: make properly async
+public sealed class MyTestMethodAttribute : TestMethodAttribute
+{
+    public override async Task<TestResult[]> ExecuteAsync(ITestMethod testMethod)
+    {
+        // custom async logic
+        return await base.ExecuteAsync(testMethod);
+    }
+}
+```
+
+Use `Task.FromResult` when your override logic is purely synchronous. Use `async`/`await` when you call `base.ExecuteAsync` or other async methods.
+
+#### 3.2 TestMethodAttribute CallerInfo constructor
+
+`TestMethodAttribute` now uses `[CallerFilePath]` and `[CallerLineNumber]` parameters in its constructor.
+
+**If you inherit from TestMethodAttribute**, propagate caller info to the base class:
+
+```csharp
+public class MyTestMethodAttribute : TestMethodAttribute
+{
+    public MyTestMethodAttribute(
+        [CallerFilePath] string callerFilePath = "",
+        [CallerLineNumber] int callerLineNumber = -1)
+        : base(callerFilePath, callerLineNumber)
+    {
+    }
+}
+```
+
+**If you use `[TestMethodAttribute("Custom display name")]`**, switch to the named parameter syntax:
+
+```csharp
+// Before (v3)
+[TestMethodAttribute("Custom display name")]
+
+// After (v4)
+[TestMethodAttribute(DisplayName = "Custom display name")]
+```
+
+#### 3.3 ClassCleanupBehavior enum removed
+
+The `ClassCleanupBehavior` enum is removed. In v3, this enum controlled whether class cleanup ran at end of class (`EndOfClass`) or end of assembly (`EndOfAssembly`). In v4, class cleanup always runs at end of class. Remove the enum argument:
+
+```csharp
+// Before (v3)
+[ClassCleanup(ClassCleanupBehavior.EndOfClass)]
+public static void ClassCleanup(TestContext testContext) { }
+
+// After (v4)
+[ClassCleanup]
+public static void ClassCleanup(TestContext testContext) { }
+```
+
+If you previously used `ClassCleanupBehavior.EndOfAssembly`, move that cleanup logic to an `[AssemblyCleanup]` method instead.
+
+#### 3.4 TestContext.Properties type change
+
+`TestContext.Properties` changed from the non-generic `IDictionary` to the generic `IDictionary<string, object?>`. Update any `Contains` calls to `ContainsKey`:
+
+```csharp
+// Before (v3)
+testContext.Properties.Contains("key");
+
+// After (v4)
+testContext.Properties.ContainsKey("key");
+```
+
+#### 3.5 TestTimeout enum removed
+
+The `TestTimeout` enum (with only `TestTimeout.Infinite`) is removed. Replace with `int.MaxValue`:
+
+```csharp
+// Before (v3)
+[Timeout(TestTimeout.Infinite)]
+
+// After (v4)
+[Timeout(int.MaxValue)]
+```
+
+#### 3.6 TestContext.ManagedType removed
+
+The `TestContext.ManagedType` property is removed. Use `TestContext.FullyQualifiedTestClassName` instead.
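+
+A minimal before/after sketch (the variable name is illustrative):
+
+```csharp
+// Before (v3)
+string className = testContext.ManagedType;
+
+// After (v4)
+string className = testContext.FullyQualifiedTestClassName;
+```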
+
+#### 3.7 Assert API signature changes
+
+- **Message + params removed**: Assert methods that accepted both `message` and `object[]` parameters now accept only `message`. Use string interpolation instead of format strings:
+
+```csharp
+// Before (v3)
+Assert.AreEqual(expected, actual, "Expected {0} but got {1}", expected, actual);
+
+// After (v4)
+Assert.AreEqual(expected, actual, $"Expected {expected} but got {actual}");
+```
+
+- **Assert.ThrowsException renamed**: The `Assert.ThrowsException` APIs are renamed. Use `Assert.ThrowsExactly` (strict type match) or `Assert.Throws` (accepts derived exception types):
+
+```csharp
+// Before (v3)
+Assert.ThrowsException<InvalidOperationException>(() => DoSomething());
+
+// After (v4) -- exact type match (same behavior as old ThrowsException)
+Assert.ThrowsExactly<InvalidOperationException>(() => DoSomething());
+
+// After (v4) -- also catches derived exception types
+Assert.Throws<InvalidOperationException>(() => DoSomething());
+```
+
+- **Assert.IsInstanceOfType out parameter changed**: `Assert.IsInstanceOfType<T>(x, out var t)` changes to `var t = Assert.IsInstanceOfType<T>(x)`:
+
+```csharp
+// Before (v3)
+Assert.IsInstanceOfType<MyType>(obj, out var typed);
+
+// After (v4)
+var typed = Assert.IsInstanceOfType<MyType>(obj);
+```
+
+- **Assert.AreEqual for `IEquatable<T>` removed**: If you get generic type inference errors, explicitly specify the type argument as `object`.
+
+#### 3.8 ExpectedExceptionAttribute removed
+
+The `[ExpectedException]` attribute is removed in v4. In MSTest 3.2, the `MSTEST0006` analyzer was introduced to flag `[ExpectedException]` usage and suggest migrating to `Assert.ThrowsExactly` while still on v3 (a non-breaking change). In v4, the attribute is gone entirely.
Migrate to `Assert.ThrowsExactly`:
+
+```csharp
+// Before (v3)
+[ExpectedException(typeof(InvalidOperationException))]
+[TestMethod]
+public void TestMethod()
+{
+    MyCall();
+}
+
+// After (v4)
+[TestMethod]
+public void TestMethod()
+{
+    Assert.ThrowsExactly<InvalidOperationException>(() => MyCall());
+}
+```
+
+**When the test has setup code before the throwing call**, wrap only the throwing call in the lambda -- keep Arrange/Act separation clear:
+
+```csharp
+// Before (v3)
+[ExpectedException(typeof(ArgumentNullException))]
+[TestMethod]
+public void Validate_NullInput_Throws()
+{
+    var service = new ValidationService();
+    service.Validate(null); // throws here
+}
+
+// After (v4)
+[TestMethod]
+public void Validate_NullInput_Throws()
+{
+    var service = new ValidationService();
+    Assert.ThrowsExactly<ArgumentNullException>(() => service.Validate(null));
+}
+```
+
+**For async test methods**, use `Assert.ThrowsExactlyAsync`:
+
+```csharp
+// Before (v3)
+[ExpectedException(typeof(HttpRequestException))]
+[TestMethod]
+public async Task FetchData_BadUrl_Throws()
+{
+    await client.GetAsync("https://localhost:0");
+}
+
+// After (v4)
+[TestMethod]
+public async Task FetchData_BadUrl_Throws()
+{
+    await Assert.ThrowsExactlyAsync<HttpRequestException>(
+        () => client.GetAsync("https://localhost:0"));
+}
+```
+
+**If `[ExpectedException]` used the `AllowDerivedTypes` property**, use `Assert.Throws` / `Assert.ThrowsAsync` (base-type matching) instead of `Assert.ThrowsExactly` / `Assert.ThrowsExactlyAsync` (exact-type matching).
+
+#### 3.9 Dropped target frameworks
+
+MSTest v4 supports: **net8.0**, **net9.0**, **net462** (.NET Framework 4.6.2+), **uap10.0.16299** (UWP), **net9.0-windows10.0.17763.0** (modern UWP), and **net8.0-windows10.0.18362.0** (WinUI). All other frameworks are dropped -- including net5.0, net6.0, net7.0, and netcoreapp3.1.
+
+If the test project targets an unsupported framework, update `TargetFramework`:
+
+```xml
+<!-- Before -->
+<TargetFramework>net6.0</TargetFramework>
+
+<!-- After -->
+<TargetFramework>net8.0</TargetFramework>
+```
+
+#### 3.10 Unfolding strategy moved to TestMethodAttribute
+
+The `UnfoldingStrategy` property (introduced in MSTest 3.7) has moved from individual data source attributes (`DataRowAttribute`, `DynamicDataAttribute`) to `TestMethodAttribute`.
+
+#### 3.11 ConditionBaseAttribute.ShouldRun renamed
+
+The `ConditionBaseAttribute.ShouldRun` property is renamed to `IsConditionMet`.
+
+#### 3.12 Internal/removed types
+
+Several types previously public are now internal or removed:
+
+- `MSTestDiscoverer`, `MSTestExecutor`, `AssemblyResolver`, `LogMessageListener`
+- `TestExecutionManager`, `TestMethodInfo`, `TestResultExtensions`
+- `UnitTestOutcomeExtensions`, `GenericParameterHelper`
+- `ITestMethod` in PlatformServices assembly (the one in TestFramework is unchanged)
+
+If your code references any of these, find alternative approaches or remove the dependency.
+
+### Step 4: Address behavioral changes
+
+These changes won't cause build errors but may affect test runtime behavior.
+
+| Symptom | Cause | Fix |
+|---|---|---|
+| Tests show as new in Azure DevOps / test history lost | `TestCase.Id` generation changed (4.3) | No code fix; history will re-baseline |
+| `TestContext.TestName` throws in `[ClassInitialize]` | v4 enforces lifecycle scope (4.2) | Move access to `[TestInitialize]` or test methods |
+| Tests not discovered / discovery failures | `TreatDiscoveryWarningsAsErrors` now true (4.4) | Fix warnings, or set to false in .runsettings |
+| Tests hang that didn't before | AppDomain disabled by default (4.1) | Set `DisableAppDomain` to false in .runsettings `RunConfiguration` |
+| vstest.console can't find tests with MSTest.Sdk | MSTest.Sdk defaults to MTP; `Microsoft.NET.Test.Sdk` only added in VSTest mode (4.5) | Add explicit package reference or switch to `dotnet test` |
+| New warnings from analyzers | Analyzer severities upgraded (4.6) | Fix warnings or suppress in .editorconfig |
+
+#### 4.1 DisableAppDomain defaults to true
+
+AppDomains are disabled by default. On .NET Framework, when running inside testhost (the default for `dotnet test` and VS), MSTest re-enables AppDomains automatically. If you need to explicitly control AppDomain isolation, set it via `.runsettings`:
+
+```xml
+<RunSettings>
+  <RunConfiguration>
+    <DisableAppDomain>false</DisableAppDomain>
+  </RunConfiguration>
+</RunSettings>
+```
+
+#### 4.2 TestContext throws when used incorrectly
+
+MSTest v4 now throws when accessing test-specific properties in the wrong lifecycle stage:
+
+- `TestContext.FullyQualifiedTestClassName` -- cannot be accessed in `[AssemblyInitialize]`
+- `TestContext.TestName` -- cannot be accessed in `[AssemblyInitialize]` or `[ClassInitialize]`
+
+**Fix**: Move any code that accesses `TestContext.TestName` from `[ClassInitialize]` to `[TestInitialize]` or individual test methods, where per-test context is available. Do not replace `TestName` with `FullyQualifiedTestClassName` as a workaround -- they have different semantics.
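+
+A minimal sketch of the fix (assumes the test class declares the standard `public TestContext TestContext { get; set; }` property):
+
+```csharp
+// Before (v3) -- read per-test state in a per-class hook
+[ClassInitialize]
+public static void ClassInit(TestContext context)
+{
+    var name = context.TestName; // throws in v4
+}
+
+// After (v4) -- read per-test properties in a per-test hook
+[TestInitialize]
+public void TestInit()
+{
+    var name = TestContext.TestName;
+}
+```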
+
+#### 4.3 TestCase.Id generation changed
+
+The generation algorithm for `TestCase.Id` has changed to fix long-standing bugs. This may affect Azure DevOps test result tracking (e.g., test failure tracking over time). There is no code fix needed, but be aware of test result history discontinuity.
+
+#### 4.4 TreatDiscoveryWarningsAsErrors defaults to true
+
+v4 uses stricter defaults. Discovery warnings are now treated as errors, which means tests that previously ran despite discovery issues may now fail entirely. If you see unexpected test failures after upgrading (not build errors, but tests not being discovered), check for discovery warnings. To restore v3 behavior while you investigate:
+
+```xml
+<RunSettings>
+  <MSTest>
+    <TreatDiscoveryWarningsAsErrors>false</TreatDiscoveryWarningsAsErrors>
+  </MSTest>
+</RunSettings>
+```
+
+> **Recommended**: Fix the underlying discovery warnings rather than suppressing this setting.
+
+#### 4.5 MSTest.Sdk and vstest.console compatibility
+
+MSTest.Sdk defaults to Microsoft.Testing.Platform (MTP) mode. In MTP mode, MSTest.Sdk does **not** add a reference to `Microsoft.NET.Test.Sdk` -- it only adds it in VSTest mode. This is not a v4-specific change; it applies to MSTest.Sdk v3 as well. Without `Microsoft.NET.Test.Sdk`, `vstest.console` cannot discover or run tests and will silently find zero tests. This commonly surfaces during migration when a CI pipeline uses `vstest.console` but the project uses MSTest.Sdk in its default MTP mode.
+
+**Option A -- Switch to VSTest mode**: Set the `UseVSTest` property. MSTest.Sdk will then automatically add `Microsoft.NET.Test.Sdk`:
+
+```xml
+<Project Sdk="MSTest.Sdk/4.0.0">
+  <PropertyGroup>
+    <TargetFramework>net8.0</TargetFramework>
+    <UseVSTest>true</UseVSTest>
+  </PropertyGroup>
+</Project>
+```
+
+**Option B -- Switch CI to `dotnet test`**: Replace `vstest.console` invocations in your CI pipeline with `dotnet test`. This works natively with MTP and is the recommended long-term approach for MSTest.Sdk projects.
+
+If you need VSTest during a transition period, Option A works without changing CI pipelines.
+
+#### 4.6 Analyzer severity changes
+
+Multiple analyzers have been upgraded from Info to Warning by default:
+
+- MSTEST0001, MSTEST0007, MSTEST0017, MSTEST0023, MSTEST0024, MSTEST0025
+- MSTEST0030, MSTEST0031, MSTEST0032, MSTEST0035, MSTEST0037, MSTEST0045
+
+Review and fix any new warnings, or suppress them in `.editorconfig` if intentional.
+
+### Step 5: Verify
+
+1. Run `dotnet build` -- confirm zero errors and review any new warnings
+2. Run `dotnet test` -- confirm all tests pass
+3. Compare test results (pass/fail counts) to the pre-migration baseline
+4. If using Azure DevOps test tracking, be aware that `TestCase.Id` changes may affect history continuity
+5. Check that no tests were silently dropped due to stricter discovery
+
+## Validation
+
+- [ ] All MSTest packages updated to 4.x
+- [ ] Project builds with zero errors
+- [ ] All tests pass with `dotnet test`
+- [ ] Custom `TestMethodAttribute` subclasses updated for `ExecuteAsync` and CallerInfo
+- [ ] `ExpectedExceptionAttribute` replaced with `Assert.ThrowsExactly`
+- [ ] `Assert.ThrowsException` replaced with `Assert.ThrowsExactly` (or `Assert.Throws`)
+- [ ] `ClassCleanupBehavior` enum usages removed
+- [ ] `TestContext.Properties.Contains` updated to `ContainsKey`
+- [ ] All target frameworks are net8.0, net9.0, net462 (or later), uap10.0.16299, modern UWP, or WinUI
+- [ ] Behavioral changes reviewed and addressed
+- [ ] No tests were lost during migration (compare test counts)
+
+## Related Skills
+
+- `writing-mstest-tests` -- for modern MSTest v4 assertion APIs and test authoring best practices
+- `run-tests` -- for running tests after migration
+
+## Common Pitfalls
+
+| Pitfall | Solution |
+|---------|----------|
+| Custom `TestMethodAttribute` still overrides `Execute` | Change to `ExecuteAsync` returning `Task<TestResult[]>` |
+| `TestMethodAttribute("display name")` no longer compiles | Use `TestMethodAttribute(DisplayName = "display name")` |
+| `ClassCleanupBehavior` enum not found | Remove the enum argument;
`[ClassCleanup]` now always runs at end of class. For end-of-assembly cleanup, use `[AssemblyCleanup]` |
+| `TestContext.Properties.Contains` missing | Use `ContainsKey` -- `Properties` is now `IDictionary<string, object?>` |
+| `ExpectedException` attribute not found | Replace with `Assert.ThrowsExactly<T>(() => ...)` inside the test body |
+| `Assert.ThrowsException` not found | Replace with `Assert.ThrowsExactly` (or `Assert.Throws` for derived types) |
+| `Assert.AreEqual` with format string args fails | Use string interpolation: `$"message {value}"` |
+| Tests hang that didn't before | AppDomain is disabled by default; on .NET Framework in testhost it is re-enabled automatically |
+| Azure DevOps test history breaks | Expected -- `TestCase.Id` generation changed; no code fix, results will re-baseline |
+| Discovery warnings now fail the run | `TreatDiscoveryWarningsAsErrors` is true by default; fix the discovery warnings |
+| net6.0/net7.0 targets don't compile | Update to net8.0 -- MSTest v4 supports net8.0, net9.0, net462, uap10.0.16299, modern UWP, and WinUI |
diff --git a/.github/skills/migrate-vstest-to-mtp/SKILL.md b/.github/skills/migrate-vstest-to-mtp/SKILL.md
new file mode 100644
index 0000000000..e4e7934b8f
--- /dev/null
+++ b/.github/skills/migrate-vstest-to-mtp/SKILL.md
@@ -0,0 +1,340 @@
+---
+name: migrate-vstest-to-mtp
+description: >
+  Migrates .NET test projects from VSTest to Microsoft.Testing.Platform (MTP).
+  Use when user asks to "migrate to MTP", "switch from VSTest", "enable
+  Microsoft.Testing.Platform", "use MTP runner", or mentions EnableMSTestRunner,
+  EnableNUnitRunner, UseMicrosoftTestingPlatformRunner, or dotnet test exit
+  code 8. Supports MSTest, NUnit, xUnit.net v2 (via YTest.MTP.XUnit2), and
+  xUnit.net v3 (native MTP). Also covers translating xUnit.net v3 MTP filter
+  syntax (--filter-class, --filter-trait, --filter-query).
+ Covers runner enablement, CLI argument translation, Directory.Build.props + and global.json configuration, CI/CD pipeline updates, and MTP extension + packages. DO NOT USE FOR: migrating between test frameworks + (MSTest/xUnit/NUnit), xUnit.net v2 to v3 API migration, MSTest version + upgrades (use migrate-mstest-* skills), TFM upgrades, or UWP/WinUI test + projects. +--- + +# VSTest -> Microsoft.Testing.Platform Migration + +Migrate a .NET test solution from VSTest to Microsoft.Testing.Platform (MTP). The outcome is a solution where all test projects run on MTP, `dotnet test` works correctly, and CI/CD pipelines are updated. + +> **Important**: Do not mix VSTest-based and MTP-based .NET test projects in the same solution or run configuration -- this is an unsupported scenario. + +## When to Use + +- Switching from VSTest to Microsoft.Testing.Platform for any supported test framework +- Enabling `dotnet run` / `dotnet watch` / direct executable execution for test projects +- Enabling Native AOT or trimmed test execution +- Replacing `vstest.console.exe` with `dotnet test` on MTP +- Updating CI/CD pipelines from the VSTest task to the .NET Core CLI task +- Updating `dotnet test` arguments from VSTest syntax to MTP syntax + +## When Not to Use + +- The project already runs on Microsoft.Testing.Platform -- migration is done +- Migrating between test frameworks (e.g., MSTest to xUnit.net) -- different effort entirely +- The project builds UWP or packaged WinUI test projects -- MTP does not support these yet +- The solution mixes .NET and non-.NET test adapters (e.g., JavaScript or C++ adapters) -- VSTest is required +- Upgrading MSTest versions -- use `migrate-mstest-v1v2-to-v3` or `migrate-mstest-v3-to-v4` + +## Inputs + +| Input | Required | Description | +|-------|----------|-------------| +| Project or solution path | Yes | The `.csproj`, `.sln`, or `.slnx` entry point containing test projects | +| Test framework | No | MSTest, NUnit, xUnit.net v2, or xUnit.net 
v3. Auto-detected from package references |
+| .NET SDK version | No | Determines `dotnet test` integration mode. Auto-detected via `dotnet --version` |
+| CI/CD pipeline files | No | Paths to pipeline definitions that invoke `vstest.console` or `dotnet test` |
+
+## Workflow
+
+### Step 1: Assess the solution
+
+1. Identify the test framework for each test project -- see the `platform-detection` skill for the package-to-framework mapping. Key indicators:
+   - **MSTest**: References `MSTest` or `MSTest.TestAdapter`, or uses `MSTest.Sdk` (with `EnableMSTestRunner` not set to `false`). Note: `MSTest.TestFramework` alone is a library dependency, not a test project.
+   - **NUnit**: References `NUnit3TestAdapter`
+   - **xUnit.net**: References `xunit` and `xunit.runner.visualstudio`
+2. Check the .NET SDK version (`dotnet --version`) -- this determines how `dotnet test` integrates with MTP
+3. Check whether a `Directory.Build.props` file exists at the solution or repo root -- all MTP properties should go there for consistency
+4. Check for `vstest.console.exe` usage in CI scripts or pipeline definitions
+5. Check for VSTest-specific `dotnet test` arguments in CI scripts: `--filter`, `--logger`, `--collect`, `--settings`, `--blame*`
+6. Run `dotnet test` to establish a baseline of test pass/fail counts
+
+### Step 2: Set up Directory.Build.props
+
+> **Critical**: Always set MTP properties in `Directory.Build.props` at the solution or repo root -- never per-project. This prevents inconsistent configuration where some projects use VSTest and others use MTP (an unsupported scenario).
+> **Note**: MTP requires test projects to set `<OutputType>Exe</OutputType>`. Only `MSTest.Sdk` sets this automatically. For all other setups (MSTest NuGet packages with `EnableMSTestRunner`, NUnit with `EnableNUnitRunner`, xUnit.net with `YTest.MTP.XUnit2`), you must set `<OutputType>Exe</OutputType>` explicitly -- either per-project or in `Directory.Build.props` with a condition that targets only test projects.
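+
+A minimal `Directory.Build.props` sketch for an MSTest solution (the `*.Tests` name condition is an assumption -- adjust it to match your test project naming):
+
+```xml
+<Project>
+  <!-- Applies only to projects whose name ends in ".Tests" (adjust as needed) -->
+  <PropertyGroup Condition="$(MSBuildProjectName.EndsWith('.Tests'))">
+    <EnableMSTestRunner>true</EnableMSTestRunner>
+    <OutputType>Exe</OutputType>
+  </PropertyGroup>
+</Project>
+```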
+
+### Step 3: Enable the framework-specific MTP runner
+
+Each framework has its own opt-in property. Add these in `Directory.Build.props` for consistency.
+
+#### MSTest
+
+**Option A -- MSTest NuGet packages (3.2.0+):**
+
+```xml
+<PropertyGroup>
+  <EnableMSTestRunner>true</EnableMSTestRunner>
+  <OutputType>Exe</OutputType>
+</PropertyGroup>
+```
+
+Ensure the project references MSTest 3.2.0 or later. If the version is already 3.2.0+, no MSTest version upgrade is needed for MTP migration.
+
+**Option B -- MSTest.Sdk:**
+
+When using `MSTest.Sdk`, MTP is enabled by default -- no `EnableMSTestRunner` or `<OutputType>Exe</OutputType>` property is needed (the SDK sets both automatically). The only action is: if the project has `<UseVSTest>true</UseVSTest>`, **remove it**. That property forces the project to use VSTest instead of MTP.
+
+#### NUnit
+
+Requires `NUnit3TestAdapter` **5.0.0** or later.
+
+1. Update `NUnit3TestAdapter` to 5.0.0+:
+
+```xml
+<PackageReference Include="NUnit3TestAdapter" Version="5.0.0" />
+```
+
+2. Enable the NUnit runner:
+
+```xml
+<PropertyGroup>
+  <EnableNUnitRunner>true</EnableNUnitRunner>
+  <OutputType>Exe</OutputType>
+</PropertyGroup>
+```
+
+#### xUnit.net
+
+Add a reference to `YTest.MTP.XUnit2` -- this package provides MTP support for xUnit.net v2 projects without requiring an upgrade to xunit.v3. You must also set `OutputType` to `Exe`:
+
+```xml
+<!-- pin to the latest stable version -->
+<PackageReference Include="YTest.MTP.XUnit2" />
+```
+
+```xml
+<PropertyGroup>
+  <OutputType>Exe</OutputType>
+</PropertyGroup>
+```
+
+> **Note**: `YTest.MTP.XUnit2` preserves the VSTest `--filter` syntax, so no filter migration is needed for xUnit.net v2. It also supports `--settings` for runsettings (xunit-specific configurations only), `xunit.runner.json`, TRX reporting via `--report-trx`, and `--treenode-filter`.
+
+#### xUnit.net v3
+
+xUnit.net v3 (`xunit.v3` package) has built-in MTP support. Enable it with:
+
+```xml
+<PropertyGroup>
+  <UseMicrosoftTestingPlatformRunner>true</UseMicrosoftTestingPlatformRunner>
+</PropertyGroup>
+```
+
+> **Important**: xUnit.net v3 on MTP does NOT support the VSTest `--filter` syntax. You must translate filters to xUnit.net v3's native filter options (see Step 5).
+
+### Step 4: Configure dotnet test integration
+
+The `dotnet test` integration depends on the .NET SDK version.
+
+#### .NET 10 SDK and later (recommended)
+
+Use the native MTP mode by adding a `test` section to `global.json`:
+
+```json
+{
+  "sdk": {
+    "version": "10.0.100"
+  },
+  "test": {
+    "runner": "Microsoft.Testing.Platform"
+  }
+}
+```
+
+In this mode, `dotnet test` arguments are passed directly -- for example, `dotnet test --report-trx`.
+
+> **Important**: `global.json` does not support trailing commas. Ensure the JSON is strictly valid.
+
+#### .NET 9 SDK and earlier
+
+Use the VSTest mode of the `dotnet test` command to run MTP test projects by adding this property in `Directory.Build.props`:
+
+```xml
+<PropertyGroup>
+  <TestingPlatformDotnetTestSupport>true</TestingPlatformDotnetTestSupport>
+</PropertyGroup>
+```
+
+> **Important**: In this mode, you must use `--` to separate `dotnet test` build arguments from MTP arguments. For example: `dotnet test --no-build -- --list-tests`.
+
+### Step 5: Update dotnet test command-line arguments
+
+VSTest-specific arguments must be translated to MTP equivalents. Build-related arguments (`-c`, `-f`, `--no-build`, `--nologo`, `-v`, etc.) are unchanged.
+
+| VSTest argument | MTP equivalent | Notes |
+|-----------------|----------------|-------|
+| `--test-adapter-path` | Not applicable | MTP does not use external adapter discovery |
+| `--blame` | Not applicable | |
+| `--blame-crash` | `--crashdump` | Requires `Microsoft.Testing.Extensions.CrashDump` NuGet package |
+| `--blame-crash-dump-type <type>` | `--crashdump-type <type>` | Requires CrashDump extension |
+| `--blame-hang` | `--hangdump` | Requires `Microsoft.Testing.Extensions.HangDump` NuGet package |
+| `--blame-hang-dump-type <type>` | `--hangdump-type <type>` | Requires HangDump extension |
+| `--blame-hang-timeout <timespan>` | `--hangdump-timeout <timespan>` | Requires HangDump extension |
+| `--collect "Code Coverage;Format=cobertura"` | `--coverage --coverage-output-format cobertura` | Per-extension arguments |
+| `-d\|--diag <path>` | `--diagnostic` | |
+| `--filter <expression>` | `--filter <expression>` | Same syntax for MSTest, NUnit, and xUnit.net v2 (with `YTest.MTP.XUnit2`).
For xUnit.net v3, see filter migration below |
+| `-l\|--logger trx` | `--report-trx` | Requires `Microsoft.Testing.Extensions.TrxReport` NuGet package |
+| `--results-directory <dir>` | `--results-directory <dir>` | Same |
+| `-s\|--settings <file>` | `--settings <file>` | MSTest and NUnit still support `.runsettings` |
+| `-t\|--list-tests` | `--list-tests` | Same |
+| `-- <name>=<value>` (inline RunSettings arguments) | `--test-parameter <name>=<value>` | Applicable only to MSTest and NUnit |
+
+#### Filter migration
+
+**MSTest, NUnit, and xUnit.net v2 (with `YTest.MTP.XUnit2`)**: The VSTest `--filter` syntax is identical on both VSTest and MTP. No changes needed.
+
+**xUnit.net v3 (native MTP)**: xUnit.net v3 does NOT support the VSTest `--filter` syntax on MTP. See the **VSTest → MTP filter translation** section in the `filter-syntax` skill for the complete translation table. Key translation example:
+
+```shell
+# VSTest
+dotnet test --filter "FullyQualifiedName~IntegrationTests&Category=Smoke"
+
+# xUnit.net v3 MTP -- using individual filters (AND behavior)
+dotnet test -- --filter-class "*IntegrationTests*" --filter-trait "Category=Smoke"
+
+# xUnit.net v3 MTP -- using query language (assembly/namespace/class/method[trait])
+dotnet test -- --filter-query "/*/*/*IntegrationTests*/*[Category=Smoke]"
+```
+
+> **Note**: When combining `--filter-class` and `--filter-trait`, both conditions must match (AND behavior). For complex expressions, use `--filter-query` with the path-segment syntax. See the [xUnit.net query filter language docs](https://xunit.net/docs/query-filter-language) for full reference.
+
+### Step 6: Install MTP extension packages (if needed)
+
+If CI scripts use TRX reporting, crash dumps, or hang dumps, add the corresponding NuGet packages:
+
+```xml
+<ItemGroup>
+  <!-- pin each to the latest stable version -->
+  <PackageReference Include="Microsoft.Testing.Extensions.TrxReport" />
+  <PackageReference Include="Microsoft.Testing.Extensions.CrashDump" />
+  <PackageReference Include="Microsoft.Testing.Extensions.HangDump" />
+</ItemGroup>
+```
+
+### Step 7: Update CI/CD pipelines
+
+#### Azure DevOps
+
+**If using the VSTest task (`VSTest@3`)**: Replace with the .NET Core CLI task (`DotNetCoreCLI@2`):
+
+```yaml
+# Before (VSTest task)
+- task: VSTest@3
+  inputs:
+    testAssemblyVer2: '**/*Tests.dll'
+    runSettingsFile: 'test.runsettings'
+
+# After (.NET Core CLI task)
+- task: DotNetCoreCLI@2
+  displayName: Run tests
+  inputs:
+    command: 'test'
+    arguments: '--no-build --configuration Release'
+```
+
+**If already using DotNetCoreCLI@2**: Update arguments per Step 5 translations. Remember the `--` separator on .NET 9 and earlier:
+
+```yaml
+- task: DotNetCoreCLI@2
+  displayName: Run tests
+  inputs:
+    command: 'test'
+    arguments: '--no-build -- --report-trx --results-directory $(Agent.TempDirectory)'
+```
+
+#### GitHub Actions
+
+Update `dotnet test` invocations in workflow files with the same argument translations from Step 5.
+
+#### Replace vstest.console.exe
+
+If any script invokes `vstest.console.exe` directly, replace it with `dotnet test`. The test projects are now executables and can also be run directly.
+
+### Step 8: Handle behavioral differences
+
+#### Zero tests exit code
+
+VSTest silently succeeds when zero tests are discovered. MTP fails with **exit code 8**.
Options:
+
+- Pass `--ignore-exit-code 8` when running tests
+- Add to `Directory.Build.props`:
+
+```xml
+<PropertyGroup>
+  <TestingPlatformCommandLineArguments>$(TestingPlatformCommandLineArguments) --ignore-exit-code 8</TestingPlatformCommandLineArguments>
+</PropertyGroup>
+```
+
+- Use environment variable: `TESTINGPLATFORM_EXITCODE_IGNORE=8`
+
+### Step 9: Remove VSTest-only packages (optional)
+
+Once migration is complete and verified, remove packages that are only needed for VSTest:
+
+- `Microsoft.NET.Test.Sdk` -- not needed for MTP (MSTest.Sdk v4 already omits it by default)
+- `xunit.runner.visualstudio` -- only needed for VSTest discovery of xUnit.net (not needed when using `YTest.MTP.XUnit2`)
+- `NUnit3TestAdapter` -- keep the package (the MTP runner still needs it), but its VSTest-only features no longer apply
+
+> **Note**: If you need to maintain VSTest compatibility during a transition period, keep these packages.
+
+### Step 10: Verify
+
+1. Run `dotnet build` -- confirm zero errors
+2. Run `dotnet test` -- confirm all tests pass
+3. Compare test pass/fail counts to the pre-migration baseline
+4. Run the test executable directly (e.g., `./bin/Debug/net8.0/MyTests.exe`) -- confirm it works
+5. Verify CI pipeline produces the expected test result artifacts (TRX files, code coverage, crash dumps)
+6.
Test that Test Explorer in Visual Studio (17.14+) or VS Code discovers and runs tests
+
+## Validation
+
+- [ ] All test projects use MTP runner (no VSTest-only configuration remains)
+- [ ] `dotnet build` completes with zero errors
+- [ ] `dotnet test` passes all tests and test counts match pre-migration baseline
+- [ ] Test executable runs directly (e.g., `./bin/Debug/net8.0/MyTests.exe`)
+- [ ] CI pipeline produces expected test result artifacts (TRX files, code coverage, crash dumps)
+- [ ] Test Explorer in Visual Studio or VS Code discovers and runs tests
+- [ ] No `vstest.console.exe` invocations remain in CI scripts
+- [ ] `<OutputType>Exe</OutputType>` is set for all non-MSTest.Sdk test projects
+
+## Common Pitfalls
+
+| Pitfall | Solution |
+|---------|----------|
+| Mixing VSTest and MTP projects in the same solution | Migrate all test projects together -- mixed mode is unsupported |
+| `dotnet test` arguments ignored on .NET 9 and earlier | Use `--` to separate build args from MTP args: `dotnet test -- --report-trx` |
+| Exit code 8 on CI without failures | MTP fails when zero tests run; use `--ignore-exit-code 8` or fix test discovery |
+| MSTest.Sdk v4 + vstest.console no longer works | MSTest.Sdk v4 no longer adds `Microsoft.NET.Test.Sdk` -- add it explicitly or switch to `dotnet test` |
+| Missing `<OutputType>Exe</OutputType>` | Required for all setups except MSTest.Sdk (which sets it automatically) |
+
+## Next Steps
+
+- Use `run-tests` for running tests on the new MTP platform
+- Use `mtp-hot-reload` for iterative test fixing with hot reload on MTP
+
+## More Info
+
+- [Test platforms overview](https://learn.microsoft.com/dotnet/core/testing/test-platforms-overview)
+- [Migrate from VSTest to Microsoft.Testing.Platform](https://learn.microsoft.com/dotnet/core/testing/migrating-vstest-microsoft-testing-platform)
+- [Microsoft.Testing.Platform overview](https://learn.microsoft.com/dotnet/core/testing/microsoft-testing-platform-intro)
+- [Testing with dotnet
test](https://learn.microsoft.com/dotnet/core/testing/unit-testing-with-dotnet-test)
+- [Microsoft.Testing.Platform CLI options](https://learn.microsoft.com/dotnet/core/testing/microsoft-testing-platform-cli-options)
+- [Microsoft.Testing.Platform extensions](https://learn.microsoft.com/dotnet/core/testing/unit-testing-platform-extensions)
diff --git a/.github/skills/migrate-xunit-to-xunit-v3/SKILL.md b/.github/skills/migrate-xunit-to-xunit-v3/SKILL.md
new file mode 100644
index 0000000000..d87323e803
--- /dev/null
+++ b/.github/skills/migrate-xunit-to-xunit-v3/SKILL.md
@@ -0,0 +1,219 @@
+---
+name: migrate-xunit-to-xunit-v3
+description: >
+  Migrates .NET test projects from xUnit.net v2 to xUnit.net v3.
+  USE FOR: upgrading xunit to xunit.v3.
+  DO NOT USE FOR: migrating between test frameworks (MSTest/NUnit to
+  xUnit.net), migrating from VSTest to Microsoft.Testing.Platform
+  (use migrate-vstest-to-mtp).
+---
+
+# xunit.v3 Migration
+
+Migrate .NET test projects from xUnit.net v2 to xUnit.net v3. The outcome is a solution where all test projects reference `xunit.v3.*` packages, the code compiles cleanly, and all tests pass with the same results as before migration.
+
+## When to Use
+
+- Upgrading test projects from `xunit` (v2) packages to `xunit.v3`
+- Resolving compilation errors after updating xunit package references to v3
+
+## When Not to Use
+
+- Migrating between test frameworks (e.g., MSTest or NUnit to xUnit.net) -- different effort entirely
+- Migrating from VSTest to Microsoft.Testing.Platform -- use `migrate-vstest-to-mtp`
+- The projects already reference `xunit.v3` -- migration is done
+
+## Inputs
+
+| Input | Required | Description |
+|-------|----------|-------------|
+| Test project or solution | Yes | The .NET project or solution containing xUnit.net v2 test projects |
+
+## Workflow
+
+> **Commit strategy:** Commit after each major step so the migration is reviewable and bisectable. Separate project file changes from code changes.
+ +### Step 1: Identify xUnit.net projects + +Search for test projects referencing xUnit.net v2 packages: + +- `xunit` +- `xunit.abstractions` +- `xunit.assert` +- `xunit.core` +- `xunit.extensibility.core` +- `xunit.extensibility.execution` +- `xunit.runner.visualstudio` + +Make sure to check the package references in project files and in MSBuild props and targets files, such as `Directory.Build.props`, `Directory.Build.targets`, and `Directory.Packages.props`. + +### Step 2: Verify compatibility + +1. Verify target framework compatibility: xUnit.net v3 requires **.NET 8+** or **.NET Framework 4.7.2+**. For test library projects, .NET Standard 2.0 is also supported. +2. If any of the test projects have incompatible target frameworks, STOP here and DON'T do anything. Only tell the user to upgrade the target framework first before migrating xUnit.net. +3. Verify project compatibility: xUnit.net v3 only supports SDK-style projects. If any test projects are non-SDK-style, STOP here and DON'T do anything. Only tell the user to migrate to SDK-style projects first before migrating xUnit.net. + +### Step 3: Establish a baseline + +Run `dotnet test` to establish a baseline of test pass/fail counts. When running `dotnet test`, ensure that: + +- You run `dotnet test` without any additional arguments (i.e., don't pass `--no-restore` or `--no-build`). +- You redirect the command output to a file and read the output from that file. + +### Step 4: Update package references + +1. Update any `PackageReference` or `PackageVersion` items for the new package names, based on the following mapping: + + - `xunit` → `xunit.v3` + - `xunit.abstractions` → Remove entirely + - `xunit.assert` → `xunit.v3.assert` + - `xunit.core` → `xunit.v3.core` + - `xunit.extensibility.core` and `xunit.extensibility.execution` → `xunit.v3.extensibility.core` (if both are referenced in a project, consolidate them into a single entry, as the two packages are merged) + +2.
Update all `xunit.v3.*` packages to the latest version available on NuGet. Also update `xunit.runner.visualstudio` to the latest version. + +### Step 5: Set `OutputType` to `Exe` + +In each test project (excluding test library projects), set `OutputType` to `Exe` in the project file: + +```xml +<PropertyGroup> +  <OutputType>Exe</OutputType> +</PropertyGroup> +``` + +Depending on the solution at hand, there might be a centralized place where this can be added. For example: + +- If all test projects share (or can share) a common `Directory.Build.props`, add the `<OutputType>Exe</OutputType>` property there. Note that `OutputType` should not be added to `Directory.Build.targets`. +- If all test projects share a name pattern (e.g., `*.Tests.csproj`), add a conditional property group in `Directory.Build.props` that applies only to those projects, such as `<OutputType Condition="$(MSBuildProjectName.EndsWith('.Tests'))">Exe</OutputType>`. Adjust the condition as needed to target only test projects. +- Otherwise, add the `<OutputType>Exe</OutputType>` property to each test project file individually. + +### Step 6: Remove `Xunit.Abstractions` usings + +Find any `using Xunit.Abstractions;` directives in C# files and remove them completely. + +### Step 7: Address `async void` breaking change + +In xUnit.net v3, `async void` test methods are no longer supported and will fail to compile. Search for any test methods declared with `async void` and change them to `async Task`. Test methods can be identified via the `[Fact]` or `[Theory]` attributes or other test attributes. + +### Step 8: Address attribute breaking changes + +In xUnit.net v3, some attributes were updated so that they accept a `System.Type` instead of two strings (fully qualified type name and assembly name). These attributes are: + +- `CollectionBehaviorAttribute` +- `TestCaseOrdererAttribute` +- `TestCollectionOrdererAttribute` +- `TestFrameworkAttribute` + +For example, `[assembly: CollectionBehavior("MyNamespace.MyCollectionFactory", "MyAssembly")]` must be converted to `[assembly: CollectionBehavior(typeof(MyNamespace.MyCollectionFactory))]`.
+ +### Step 9: Inheriting from FactAttribute or TheoryAttribute + +Identify any custom attributes that inherit from `FactAttribute` or `TheoryAttribute`. These custom user-defined attributes must now provide source information. For example, if the attribute looked like this: + +```csharp +internal sealed class MyFactAttribute : FactAttribute +{ + public MyFactAttribute() + { + } +} +``` + +it must be changed to this: + +```csharp +using System.Runtime.CompilerServices; + +internal sealed class MyFactAttribute : FactAttribute +{ + public MyFactAttribute( + [CallerFilePath] string? sourceFilePath = null, + [CallerLineNumber] int sourceLineNumber = -1 + ) : base(sourceFilePath, sourceLineNumber) + { + } +} +``` + +### Step 10: Inheriting from BeforeAfterTestAttribute + +Identify any custom attributes that inherit from `BeforeAfterTestAttribute`. These custom user-defined attributes must update their method signatures. Previously, they would have `Before`/`After` overrides that look like this: + +```csharp + public override void Before(MethodInfo methodUnderTest) + { + // Possibly some custom logic here + base.Before(methodUnderTest); + // Possibly some custom logic here + } + + public override void After(MethodInfo methodUnderTest) + { + // Possibly some custom logic here + base.After(methodUnderTest); + // Possibly some custom logic here + } +``` + +they must be changed to this: + +```csharp + public override void Before(MethodInfo methodUnderTest, IXunitTest test) + { + // Possibly some custom logic here + base.Before(methodUnderTest, test); + // Possibly some custom logic here + } + + public override void After(MethodInfo methodUnderTest, IXunitTest test) + { + // Possibly some custom logic here + base.After(methodUnderTest, test); + // Possibly some custom logic here + } +``` + +### Step 11: Address new xUnit analyzer warnings + +xunit.v3 introduced new analyzer warnings. You should attempt to address them.
+ +One of the most notable warnings is [xUnit1051: Calls to methods which accept CancellationToken should use TestContext.Current.CancellationToken](https://xunit.net/xunit.analyzers/rules/xUnit1051). Identify the calls to such methods, if any, and pass the cancellation token. + +### Step 12: Test platform selection + +You should keep the same test platform that was used with xUnit.net v2. + +Note that xUnit.net v2 always runs on VSTest unless the project used YTest.MTP.XUnit2. + +- If the project had a reference to YTest.MTP.XUnit2: + - Remove the reference to YTest.MTP.XUnit2 completely. + - Add `<UseMicrosoftTestingPlatformRunner>true</UseMicrosoftTestingPlatformRunner>` to `Directory.Build.props` under an unconditional `PropertyGroup`. +- If the project didn't have a reference to YTest.MTP.XUnit2: + - Add `<UseMicrosoftTestingPlatformRunner>false</UseMicrosoftTestingPlatformRunner>` to `Directory.Build.props` under an unconditional `PropertyGroup`. + +### Step 13: Migrate `Xunit.SkippableFact` + +If there are any package references to `Xunit.SkippableFact`, remove all these package references entirely. + +Then, follow these steps to eliminate usages of APIs coming from the removed package reference: + +- Update any `SkippableFact` attribute to the regular `Fact` attribute. +- Update any `SkippableTheory` attribute to the regular `Theory` attribute. +- Change `Skip.If` method calls to `Assert.SkipWhen`. +- Change `Skip.IfNot` method calls to `Assert.SkipUnless`. + +### Step 14: Update `Xunit.Combinatorial` NuGet package + +Find package references of `Xunit.Combinatorial` and update them from 1.x to the latest 2.x version available. + +### Step 15: Update `Xunit.StaFact` NuGet package + +Find package references of `Xunit.StaFact` and update them from 1.x to the latest 3.x version available. + +### Step 16: Build the solution + +Now, build the solution to identify any remaining compilation errors that might not have been addressed by previous instructions. +Fix any straightforward errors that show up, and keep iterating and fixing more. + +You can also consult the official xUnit.net v3 migration documentation to help with the remaining compilation errors.
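After the build is green, a quick text scan can confirm that no v2-only `Xunit.Abstractions` usings survived the earlier cleanup. A minimal shell sketch, assuming a local checkout (the sample file and temp path below are hypothetical; point the grep at the real repository root):

```shell
# Throwaway tree standing in for the repository (hypothetical sample).
repo=$(mktemp -d)
cat > "$repo/OldTests.cs" <<'EOF'
using Xunit;
using Xunit.Abstractions;
EOF

# Any hit here is a leftover xUnit.net v2 using that still needs removal.
grep -rn --include='*.cs' 'Xunit\.Abstractions' "$repo" && echo "leftovers found" || echo "clean"
```

The same pattern works for the other removed APIs (`SkippableFact`, `Skip.If`, and so on) by swapping the search string.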
+ +You can fix as much as you can, and it's okay if not everything is fixed. Just tell the user that there are remaining errors that need to be manually addressed. diff --git a/.github/skills/mtp-hot-reload/SKILL.md b/.github/skills/mtp-hot-reload/SKILL.md new file mode 100644 index 0000000000..3964535484 --- /dev/null +++ b/.github/skills/mtp-hot-reload/SKILL.md @@ -0,0 +1,144 @@ +--- +name: mtp-hot-reload +description: > + Suggests using Microsoft Testing Platform (MTP) hot reload to iterate fixes + on failing tests without rebuilding. Use when user says "hot reload tests", + "iterate on test fix", "run tests without rebuilding", "speed up test loop", + "fix test faster", or needs to set up MTP hot reload to rapidly iterate on + test failures. Covers setup (NuGet package, environment variable, + launchSettings.json) and the iterative workflow for fixing tests. + DO NOT USE FOR: writing test code, diagnosing test failures, CI/CD pipeline + configuration, or Visual Studio Test Explorer hot reload (which is a + different feature). +--- + +# MTP Hot Reload for Iterative Test Fixing + +Set up and use Microsoft Testing Platform hot reload to rapidly iterate fixes on failing tests without rebuilding between each change. 
+ +## When to Use + +- User has one or more failing tests and wants to iterate fixes quickly +- User wants to avoid rebuild overhead while fixing test code or production code +- User asks about hot reload for tests or speeding up the test-fix loop +- User needs to set up MTP hot reload in their project + +## When Not to Use + +- User needs to write new tests from scratch (use general coding assistance) +- User needs to diagnose why a test is failing (use diagnostic skills) +- User wants Visual Studio Test Explorer hot reload (different feature, built into VS) +- Project uses VSTest -- hot reload requires Microsoft Testing Platform (MTP) +- User needs CI/CD pipeline configuration + +## Inputs + +| Input | Required | Description | +|-------|----------|-------------| +| Test project path | No | Path to the test project (.csproj). Defaults to current directory. | +| Failing test name or filter | No | Specific test(s) to iterate on | + +## Workflow + +### Step 1: Verify the project uses Microsoft Testing Platform + +Hot reload requires MTP. It does **not** work with VSTest. + +Follow the detection procedure in the `platform-detection` skill to determine the test platform. + +If the project uses VSTest, inform the user that MTP hot reload is not available and suggest migrating to MTP first (see `migrate-vstest-to-mtp`), or using Visual Studio's built-in Test Explorer hot reload feature instead. + +### Step 2: Add the hot reload NuGet package + +Install the `Microsoft.Testing.Extensions.HotReload` package: + +```shell +dotnet add package Microsoft.Testing.Extensions.HotReload +``` + +> **Note**: When using `Microsoft.Testing.Platform.MSBuild` (included transitively by MSTest, NUnit, and xUnit runners), the extension is auto-registered when you install its NuGet package -- no code changes needed. + +### Step 3: Enable hot reload + +Hot reload is activated by setting the `TESTINGPLATFORM_HOTRELOAD_ENABLED` environment variable to `1`. 
+ +**Option A -- Set it in the shell before running tests:** + +```shell +# PowerShell +$env:TESTINGPLATFORM_HOTRELOAD_ENABLED = "1" + +# bash/zsh +export TESTINGPLATFORM_HOTRELOAD_ENABLED=1 +``` + +**Option B -- Add it to `launchSettings.json` (recommended for repeatable use):** + +Create or update `Properties/launchSettings.json` in the test project: + +```json +{ + "profiles": { + "<profile-name>": { + "commandName": "Project", + "environmentVariables": { + "TESTINGPLATFORM_HOTRELOAD_ENABLED": "1" + } + } + } +} +``` + +### Step 4: Run the tests with hot reload + +Run the test project directly (not through `dotnet test`) to use hot reload in console mode: + +```shell +dotnet run --project <path-to-test-project> +``` + +To filter to specific failing tests, pass the filter after `--`. The syntax depends on the test framework -- see the `filter-syntax` skill for full details. Quick examples: + +| Framework | Filter syntax | +|-----------|--------------| +| MSTest | `dotnet run --project <path> -- --filter "FullyQualifiedName~TestMethodName"` | +| NUnit | `dotnet run --project <path> -- --filter "FullyQualifiedName~TestMethodName"` | +| xUnit v3 | `dotnet run --project <path> -- --filter-method "*TestMethodName"` | +| TUnit | `dotnet run --project <path> -- --treenode-filter "/*/*/ClassName/TestMethodName"` | + +The test host will start, run the tests, and **remain running** waiting for code changes. + +### Step 5: Iterate on the fix + +1. Edit the source code (test code or production code) in your editor +2. The test host detects the changes and re-runs the affected tests automatically +3. Review the updated results in the console +4. Repeat until all targeted tests pass + +> **Important**: Hot reload currently works in **console mode only**. There is no support for hot reload in Test Explorer for Visual Studio or Visual Studio Code. + +### Step 6: Finalize + +Once all tests pass: + +1. Stop the test host (Ctrl+C) +2. Run a full `dotnet test` to confirm all tests pass with a clean build +3.
Optionally remove `TESTINGPLATFORM_HOTRELOAD_ENABLED` from the environment or keep `launchSettings.json` for future use + +## Validation + +- [ ] Project uses Microsoft Testing Platform (not VSTest) +- [ ] `Microsoft.Testing.Extensions.HotReload` package is installed +- [ ] `TESTINGPLATFORM_HOTRELOAD_ENABLED` environment variable is set to `1` +- [ ] Tests run and the host remains active waiting for changes +- [ ] Code changes are picked up without manual restart + +## Common Pitfalls + +| Pitfall | Solution | +|---------|----------| +| Using `dotnet test` instead of `dotnet run` | Hot reload requires `dotnet run --project <path>` to run the test host directly in console mode | +| Project uses VSTest, not MTP | Hot reload requires MTP. Migrate to MTP first or use VS Test Explorer hot reload | +| Forgetting to set the environment variable | Set `TESTINGPLATFORM_HOTRELOAD_ENABLED=1` before running | +| Expecting Test Explorer integration | Console mode only -- no VS/VS Code Test Explorer support | +| Making unsupported code changes (rude edits) | Some changes (adding new types, changing method signatures) require a restart. Stop and re-run | diff --git a/.github/skills/platform-detection/SKILL.md b/.github/skills/platform-detection/SKILL.md new file mode 100644 index 0000000000..4ba6bc0444 --- /dev/null +++ b/.github/skills/platform-detection/SKILL.md @@ -0,0 +1,58 @@ +--- +name: platform-detection +description: "Reference data for detecting the test platform (VSTest vs Microsoft.Testing.Platform) and test framework (MSTest, xUnit, NUnit, TUnit) from project files. DO NOT USE directly — loaded by run-tests, mtp-hot-reload, and migrate-vstest-to-mtp when they need detection logic." +user-invocable: false +--- + +# Test Platform and Framework Detection + +Determine **which test platform** (VSTest or Microsoft.Testing.Platform) and **which test framework** (MSTest, xUnit, NUnit, TUnit) a project uses.
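As a rough illustration of the platform half of this detection, a shell sketch that checks `global.json` for the SDK 10+ MTP runner setting (the sample file and temp path are hypothetical, and `grep` stands in for a proper JSON parser such as `jq`; on SDK 8/9 the MSBuild-property signals below must still be consulted):

```shell
# Hypothetical project root with a global.json opting into the MTP runner.
proj=$(mktemp -d)
cat > "$proj/global.json" <<'EOF'
{
  "test": { "runner": "Microsoft.Testing.Platform" }
}
EOF

# Crude check: does global.json pin the test runner to MTP?
if grep -q '"runner"[[:space:]]*:[[:space:]]*"Microsoft.Testing.Platform"' "$proj/global.json"; then
  echo "MTP"
else
  echo "VSTest (or check MSBuild properties on SDK 8/9)"
fi
```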
+ +**Detection files to always check** (in order): `global.json` → `.csproj` → `Directory.Build.props` → `Directory.Packages.props` + +## Detecting the test framework + +Read the `.csproj` file **and** `Directory.Build.props` / `Directory.Packages.props` (for centrally managed dependencies) and look for: + +| Package or SDK reference | Framework | +|--------------------------|-----------| +| `MSTest` (metapackage, recommended) or the `MSTest.Sdk` project SDK | MSTest | +| `MSTest.TestFramework` + `MSTest.TestAdapter` | MSTest (also valid for v3/v4) | +| `xunit`, `xunit.v3`, `xunit.v3.mtp-v1`, `xunit.v3.mtp-v2`, `xunit.v3.core.mtp-v1`, `xunit.v3.core.mtp-v2` | xUnit | +| `NUnit` + `NUnit3TestAdapter` | NUnit | +| `TUnit` | TUnit (MTP only) | + +## Detecting the test platform + +The detection logic depends on the .NET SDK version. Run `dotnet --version` to determine it. + +### .NET SDK 10+ + +On .NET 10+, the `global.json` `test.runner` setting is the **authoritative source**: + +- If `global.json` contains `"test": { "runner": "Microsoft.Testing.Platform" }` → **MTP** +- If `global.json` has `"runner": "VSTest"`, or no `test` section exists → **VSTest** + +> **Important**: On .NET 10+, `<TestingPlatformDotnetTestSupport>true</TestingPlatformDotnetTestSupport>` alone does **not** switch to MTP. The `global.json` runner setting takes precedence. If the runner is VSTest (or unset), the project uses VSTest regardless of `TestingPlatformDotnetTestSupport`. + +### .NET SDK 8 or 9 + +On older SDKs, check these signals in priority order: + +**1. Check the `<TestingPlatformDotnetTestSupport>` MSBuild property.** Look in the `.csproj`, `Directory.Build.props`, **and** `Directory.Packages.props`. If set to `true` in **any** of these files, the project uses **MTP**. + +> **Critical**: Always read `Directory.Build.props` and `Directory.Packages.props` if they exist. MTP properties are frequently set there instead of in the `.csproj`, so checking only the project file will miss them. + +**2.
Check project-level signals:** + +| Signal | Platform | +|--------|----------| +| `<Project Sdk="MSTest.Sdk">` as project SDK | **MTP** by default | +| `<UseMicrosoftTestingPlatformRunner>true</UseMicrosoftTestingPlatformRunner>` | **MTP** runner (xUnit) | +| `<EnableMSTestRunner>true</EnableMSTestRunner>` | **MTP** runner (MSTest) | +| `<EnableNUnitRunner>true</EnableNUnitRunner>` | **MTP** runner (NUnit) | +| `Microsoft.Testing.Platform` package referenced directly | **MTP** | +| `TUnit` package referenced | **MTP** (TUnit is MTP-only) | + +> **Note**: The presence of `Microsoft.NET.Test.Sdk` does **not** necessarily mean VSTest. Some frameworks (e.g., MSTest) pull it in transitively for compatibility, even when MTP is enabled. Do not use this package as a signal on its own — always check the MTP signals above first. +> **Key distinction**: VSTest is the classic platform that uses `vstest.console` under the hood. Microsoft.Testing.Platform (MTP) is the newer, faster platform. Both can be invoked via `dotnet test`, but their filter syntax and CLI options differ. diff --git a/.github/skills/run-tests/SKILL.md b/.github/skills/run-tests/SKILL.md new file mode 100644 index 0000000000..1a269cbafe --- /dev/null +++ b/.github/skills/run-tests/SKILL.md @@ -0,0 +1,204 @@ +--- +name: run-tests +description: > + Runs .NET tests with dotnet test. Use when user says "run tests", "execute + tests", "dotnet test", "test filter", "filter by category", "filter by + class", "run only specific tests", "tests not running", "hang timeout", + "blame-hang", "blame-crash", "TUnit", "treenode-filter", or needs to + detect the test platform (VSTest or Microsoft.Testing.Platform), identify the + test framework, apply test filters, or troubleshoot test execution failures. + Covers MSTest, xUnit, NUnit, and TUnit across both VSTest and MTP platforms. + Also use for --filter-class, --filter-trait, and other + framework-specific filter syntax. + DO NOT USE FOR: writing or generating test code, CI/CD pipeline + configuration, or debugging failing test logic.
+--- + +# Run .NET Tests + +Detect the test platform and framework, run tests, and apply filters using `dotnet test`. + +## When to Use + +- User wants to run tests in a .NET project +- User needs to run a subset of tests using filters +- User needs help detecting which test platform (VSTest vs MTP) or framework is in use +- User wants to understand the correct filter syntax for their setup + +## When Not to Use + +- User needs to write or generate test code (use `writing-mstest-tests` for MSTest, or general coding assistance for other frameworks) +- User needs to migrate from VSTest to MTP (use `migrate-vstest-to-mtp`) +- User wants to iterate on failing tests without rebuilding (use `mtp-hot-reload`) +- User needs CI/CD pipeline configuration (use CI-specific skills) +- User needs to debug a test (use debugging skills) + +## Inputs + +| Input | Required | Description | +|-------|----------|-------------| +| Project or solution path | No | Path to the test project (.csproj) or solution (.sln). Defaults to current directory. | +| Filter expression | No | Filter expression to select specific tests | +| Target framework | No | Target framework moniker to run against (e.g., `net8.0`) | + +## Workflow + +### Quick Reference + +| Platform | SDK | Command pattern | +|----------|-----|----------------| +| VSTest | Any | `dotnet test [<path>] [--filter <expression>] [--logger trx]` | +| MTP | 8 or 9 | `dotnet test [<path>] -- <mtp-args>` | +| MTP | 10+ | `dotnet test --project <path>` | + +**Detection files to always check** (in order): `global.json` -> `.csproj` -> `Directory.Build.props` -> `Directory.Packages.props` + +### Step 1: Detect the test platform and framework + +1. Read `global.json` first — on .NET SDK 10+, `"test": { "runner": "Microsoft.Testing.Platform" }` is the **authoritative MTP signal**. If present, the project uses MTP and SDK 10+ syntax (no `--` separator). +2. Read `.csproj`, `Directory.Build.props`, and `Directory.Packages.props` for framework packages and MTP properties. +3.
For full detection logic (SDK 8/9 signals, framework identification), see the `platform-detection` skill. + +**Quick detection summary:** + +| Signal | Means | +|--------|-------| +| `global.json` has `"test": { "runner": "Microsoft.Testing.Platform" }` | **MTP on SDK 10+** — pass args directly, no `--` | +| `<TestingPlatformDotnetTestSupport>true</TestingPlatformDotnetTestSupport>` in csproj or Directory.Build.props | **MTP on SDK 8/9** — pass args after `--` | +| Neither signal present | **VSTest** | + +### Step 2: Run tests + +#### VSTest (any .NET SDK version) + +```bash +dotnet test [<project> | <solution> | <directory> | <dll> | <exe>] +``` + +Common flags: + +| Flag | Description | +|------|-------------| +| `--framework <tfm>` | Target a specific framework in multi-TFM projects (e.g., `net8.0`) | +| `--no-build` | Skip build, use previously built output | +| `--filter <expression>` | Run selected tests (see [Step 3](#step-3-run-filtered-tests)) | +| `--logger trx` | Generate TRX results file | +| `--collect "Code Coverage"` | Collect code coverage using Microsoft Code Coverage (built-in, always available) | +| `--blame` | Enable blame mode to detect tests that crash the host | +| `--blame-crash` | Collect a crash dump when the test host crashes | +| `--blame-hang-timeout <duration>` | Abort test if it hangs longer than duration (e.g., `5min`) | +| `-v <level>` | Verbosity: `quiet`, `minimal`, `normal`, `detailed`, `diagnostic` | + +#### MTP with .NET SDK 8 or 9 + +With `<TestingPlatformDotnetTestSupport>true</TestingPlatformDotnetTestSupport>`, `dotnet test` bridges to MTP but uses VSTest-style argument parsing.
MTP-specific arguments must be passed after `--`: + +```bash +dotnet test [<project> | <solution> | <directory> | <dll> | <exe>] -- <mtp-args> +``` + +#### MTP with .NET SDK 10+ + +With the `global.json` runner set to `Microsoft.Testing.Platform`, `dotnet test` natively understands MTP arguments without `--`: + +```bash +dotnet test + [--project <path>] + [--solution <path>] + [--test-modules <expression>] + [<mtp-args>] +``` + +Examples: + +```bash +# Run all tests in a project +dotnet test --project path/to/MyTests.csproj + +# Run all tests in a directory containing a project +dotnet test --project path/to/ + +# Run all tests in a solution (sln, slnf, slnx) +dotnet test --solution path/to/MySolution.sln + +# Run all tests in a directory containing a solution +dotnet test --solution path/to/ + +# Run with MTP flags +dotnet test --project path/to/MyTests.csproj --report-trx --blame-hang-timeout 5min +``` + +> **Note**: The .NET 10+ `dotnet test` syntax does **not** accept a bare positional argument like the VSTest syntax. Use `--project`, `--solution`, or `--test-modules` to specify the target. + +#### Common MTP flags + +These flags apply to MTP on both SDK versions. On SDK 8/9, pass after `--`; on SDK 10+, pass directly.
+ +**Built-in flags (always available):** + +| Flag | Description | +|------|-------------| +| `--no-build` | Skip build, use previously built output | +| `--framework <tfm>` | Target a specific framework in multi-TFM projects | +| `--results-directory <path>` | Directory for test result output | +| `--diagnostic` | Enable diagnostic logging for the test platform | +| `--diagnostic-output-directory <path>` | Directory for diagnostic log output | + +**Extension-dependent flags (require the corresponding extension package to be registered):** + +| Flag | Requires | Description | +|------|----------|-------------| +| `--filter <expression>` | Framework-specific (not all frameworks support this) | Run selected tests (see [Step 3](#step-3-run-filtered-tests)) | +| `--report-trx` | `Microsoft.Testing.Extensions.TrxReport` | Generate TRX results file | +| `--report-trx-filename <filename>` | `Microsoft.Testing.Extensions.TrxReport` | Set TRX output filename | +| `--blame-hang-timeout <duration>` | `Microsoft.Testing.Extensions.HangDump` | Abort test if it hangs longer than duration (e.g., `5min`) | +| `--blame-crash` | `Microsoft.Testing.Extensions.CrashDump` | Collect a crash dump when the test host crashes | +| `--coverage` | `Microsoft.Testing.Extensions.CodeCoverage` | Collect code coverage using Microsoft Code Coverage | + +> Some frameworks (e.g., MSTest) bundle common extensions by default. Others may require explicit package references. If a flag is not recognized, check that the corresponding extension package is referenced in the project. + +#### Alternative MTP invocations + +MTP test projects are standalone executables. Beyond `dotnet test`, they can be run directly: + +```bash +# Build and run +dotnet run --project <path-to-csproj> + +# Run a previously built DLL +dotnet exec <path-to-dll> + +# Run the executable directly (Windows) +<path-to-exe> +``` + +These alternative invocations accept MTP command line arguments directly (no `--` separator needed).
+ +### Step 3: Run filtered tests + +See the `filter-syntax` skill for the complete filter syntax for each platform and framework combination. Key points: + +- **VSTest** (MSTest, xUnit v2, NUnit): `dotnet test --filter <expression>` with `=`, `!=`, `~`, `!~` operators +- **MTP -- MSTest and NUnit**: Same `--filter` syntax as VSTest; pass after `--` on SDK 8/9, directly on SDK 10+ +- **MTP -- xUnit v3**: Uses `--filter-class`, `--filter-method`, `--filter-trait` (not VSTest expression syntax) +- **MTP -- TUnit**: Uses `--treenode-filter` with path-based syntax + +## Validation + +- [ ] Test platform (VSTest or MTP) was correctly identified +- [ ] Test framework (MSTest, xUnit, NUnit, TUnit) was correctly identified +- [ ] Correct `dotnet test` invocation was used for the detected platform and SDK version +- [ ] Filter expressions used the syntax appropriate for the platform and framework +- [ ] Test results were clearly reported to the user + +## Common Pitfalls + +| Pitfall | Solution | +|---------|----------| +| Missing `Microsoft.NET.Test.Sdk` in a VSTest project | Tests won't be discovered. Add `<PackageReference Include="Microsoft.NET.Test.Sdk" />` | +| Using VSTest `--filter` syntax with xUnit v3 on MTP | xUnit v3 on MTP uses `--filter-class`, `--filter-method`, etc. -- not the VSTest expression syntax | +| Passing MTP args without `--` on .NET SDK 8/9 | Before .NET 10, MTP args must go after `--`: `dotnet test -- --report-trx` | +| Using `--` for MTP args on .NET SDK 10+ | On .NET 10+, MTP args are passed directly: `dotnet test --project . --blame-hang-timeout 5min` — do NOT use `-- --blame-hang-timeout` | +| Multi-TFM project runs tests for all frameworks | Use `--framework <tfm>` to target a specific framework | +| `global.json` runner setting ignored | Requires .NET 10+ SDK. On older SDKs, use the `<TestingPlatformDotnetTestSupport>` MSBuild property instead | +| TUnit `--treenode-filter` not recognized | TUnit is MTP-only.
On .NET SDK 10+ use `dotnet test`; on older SDKs use `dotnet run` since VSTest-mode `dotnet test` does not support TUnit | diff --git a/.github/skills/test-anti-patterns/SKILL.md b/.github/skills/test-anti-patterns/SKILL.md new file mode 100644 index 0000000000..ce5111116b --- /dev/null +++ b/.github/skills/test-anti-patterns/SKILL.md @@ -0,0 +1,137 @@ +--- +name: test-anti-patterns +description: "Quick pragmatic review of .NET test code for anti-patterns that undermine reliability and diagnostic value. Use when asked to review tests, find test problems, check test quality, or audit tests for common mistakes. Catches assertion gaps, flakiness indicators, over-mocking, naming issues, and structural problems with actionable fixes. Use for periodic test code reviews and PR feedback. For a deep formal audit based on academic test smell taxonomy, use exp-test-smell-detection instead. Works with MSTest, xUnit, NUnit, and TUnit." +--- + +# Test Anti-Pattern Detection + +Quick, pragmatic analysis of .NET test code for anti-patterns and quality issues that undermine test reliability, maintainability, and diagnostic value. + +## When to Use + +- User asks to review test quality or find test smells +- User wants to know why tests are flaky or unreliable +- User asks "are my tests good?" or "what's wrong with my tests?" 
+- User requests a test audit or test code review +- User wants to improve existing test code + +## When Not to Use + +- User wants to write new tests from scratch (use `writing-mstest-tests`) +- User wants to run or execute tests (use `run-tests`) +- User wants to migrate between test frameworks or versions (use migration skills) +- User wants to measure code coverage (out of scope) +- User wants a deep formal test smell audit with academic taxonomy and extended catalog (use `exp-test-smell-detection`) + +## Inputs + +| Input | Required | Description | +|-------|----------|-------------| +| Test code | Yes | One or more test files or classes to analyze | +| Production code | No | The code under test, for context on what tests should verify | +| Specific concern | No | A focused area like "flakiness" or "naming" to narrow the review | + +## Workflow + +### Step 1: Gather the test code + +Read the test files the user wants reviewed. If the user points to a directory or project, scan for all test files using the framework-specific markers in the `dotnet-test-frameworks` skill (e.g., `[TestClass]`, `[Fact]`, `[Test]`). + +If production code is available, read it too -- this is critical for detecting tests that are coupled to implementation details rather than behavior. + +### Step 2: Scan for anti-patterns + +Check each test file against the anti-pattern catalog below. Report findings grouped by severity. + +#### Critical -- Tests that give false confidence + +| Anti-Pattern | What to Look For | +|---|---| +| **No assertions** | Test methods that execute code but never assert anything. A passing test without assertions proves nothing. | +| **Swallowed exceptions** | `try { ... } catch { }` or `catch (Exception)` without rethrowing or asserting. Failures are silently hidden. | +| **Assert in catch block only** | `try { Act(); } catch (Exception ex) { Assert.Fail(ex.Message); }` -- use `Assert.ThrowsException` or equivalent instead. 
The test passes when no exception is thrown even if the result is wrong. | +| **Always-true assertions** | `Assert.IsTrue(true)`, `Assert.AreEqual(x, x)`, or conditions that can never fail. | +| **Commented-out assertions** | Assertions that were disabled but the test still runs, giving the illusion of coverage. | + +#### High -- Tests likely to cause pain + +| Anti-Pattern | What to Look For | +|---|---| +| **Flakiness indicators** | `Thread.Sleep(...)`, `Task.Delay(...)` for synchronization, `DateTime.Now`/`DateTime.UtcNow` without abstraction, `Random` without a seed, environment-dependent paths. | +| **Test ordering dependency** | Static mutable fields modified across tests, `[TestInitialize]` that doesn't fully reset state, tests that fail when run individually but pass in suite (or vice versa). | +| **Over-mocking** | More mock setup lines than actual test logic. Verifying exact call sequences on mocks rather than outcomes. Mocking types the test owns. For a deep mock audit, use `exp-mock-usage-analysis`. | +| **Implementation coupling** | Testing private methods via reflection, asserting on internal state, verifying exact method call counts on collaborators instead of observable behavior. | +| **Broad exception assertions** | `Assert.ThrowsException<Exception>(...)` instead of the specific exception type. Also: `[ExpectedException(typeof(Exception))]`. | + +#### Medium -- Maintainability and clarity issues + +| Anti-Pattern | What to Look For | +|---|---| +| **Poor naming** | Test names like `Test1`, `TestMethod`, names that don't describe the scenario or expected outcome. Good: `Add_NegativeNumber_ThrowsArgumentException`. | +| **Magic values** | Unexplained numbers or strings in arrange/assert: `Assert.AreEqual(42, result)` -- what does 42 mean? | +| **Duplicate tests** | Three or more test methods with near-identical bodies that differ only in a single input value. Should be data-driven (`[DataRow]`, `[Theory]`, `[TestCase]`).
For a detailed duplication analysis, use `exp-test-maintainability`. Note: Two tests covering distinct boundary conditions (e.g., zero vs. negative) are NOT duplicates -- separate tests for different edge cases provide clearer failure diagnostics and are a valid practice. | +| **Giant tests** | Test methods exceeding ~30 lines or testing multiple behaviors at once. Hard to diagnose when they fail. | +| **Assertion messages that repeat the assertion** | `Assert.AreEqual(expected, actual, "Expected and actual are not equal")` adds no information. Messages should describe the business meaning. | +| **Missing AAA separation** | Arrange, Act, Assert phases are interleaved or indistinguishable. | + +#### Low -- Style and hygiene + +| Anti-Pattern | What to Look For | +|---|---| +| **Unused test infrastructure** | `[TestInitialize]`/`[SetUp]` that does nothing, test helper methods that are never called. | +| **IDisposable not disposed** | Test creates `HttpClient`, `Stream`, or other disposable objects without `using` or cleanup. | +| **Console.WriteLine debugging** | Leftover `Console.WriteLine` or `Debug.WriteLine` statements used during test development. | +| **Inconsistent naming convention** | Mix of naming styles in the same test class (e.g., some use `Method_Scenario_Expected`, others use `ShouldDoSomething`). | + +### Step 3: Calibrate severity honestly + +Before reporting, re-check each finding against these severity rules: + +- **Critical/High**: Only for issues that cause tests to give false confidence or be unreliable. A test that always passes regardless of correctness is Critical. Flaky shared state is High. +- **Medium**: Only for issues that actively harm maintainability -- 5+ nearly-identical tests, truly meaningless names like `Test1`. +- **Low**: Cosmetic naming mismatches, minor style preferences, assertion messages that could be better. When in doubt, rate Low. +- **Not an issue**: Separate tests for distinct boundary conditions (zero vs. negative vs. 
null). Explicit per-test setup instead of `[TestInitialize]` (this *improves* isolation). Tests that are short and clear but could theoretically be consolidated. + +IMPORTANT: If the tests are well-written, say so clearly up front. Do not inflate severity to justify the review. A review that finds zero Critical/High issues and only minor Low suggestions is a valid and valuable outcome. Lead with what the tests do well. + +### Step 4: Report findings + +Present findings in this structure: + +1. **Summary** -- Total issues found, broken down by severity (Critical / High / Medium / Low). If tests are well-written, lead with that assessment. +2. **Critical and High findings** -- List each with: + - The anti-pattern name + - The specific location (file, method name, line) + - A brief explanation of why it's a problem + - A concrete fix (show before/after code when helpful) +3. **Medium and Low findings** -- Summarize in a table unless the user wants full detail +4. **Positive observations** -- Call out things the tests do well (sealed class, specific exception types, data-driven tests, clear AAA structure, proper use of fakes, good naming). Don't only report negatives. + +### Step 5: Prioritize recommendations + +If there are many findings, recommend which to fix first: + +1. **Critical** -- Fix immediately, these tests may be giving false confidence +2. **High** -- Fix soon, these cause flakiness or maintenance burden +3. 
**Medium/Low** -- Fix opportunistically during related edits + +## Validation + +- [ ] Every finding includes a specific location (not just a general warning) +- [ ] Every Critical/High finding includes a concrete fix +- [ ] Report covers all categories (assertions, isolation, naming, structure) +- [ ] Positive observations are included alongside problems +- [ ] Recommendations are prioritized by severity + +## Common Pitfalls + +| Pitfall | Solution | +|---------|----------| +| Reporting style issues as critical | Naming and formatting are Medium/Low, never Critical | +| Suggesting rewrites instead of targeted fixes | Show minimal diffs -- change the assertion, not the whole test | +| Flagging intentional design choices | If `Thread.Sleep` is in an integration test testing actual timing, that's not an anti-pattern. Consider context. | +| Inventing false positives on clean code | If tests follow best practices, say so. A review finding "0 Critical, 0 High, 1 Low" is perfectly valid. Don't inflate findings to justify the review. | +| Flagging separate boundary tests as duplicates | Two tests for zero and negative inputs test different edge cases. Only flag as duplicates when 3+ tests have truly identical bodies differing by a single value. | +| Rating cosmetic issues as Medium | Naming mismatches (e.g., method name says `ArgumentException` but asserts `ArgumentOutOfRangeException`) are Low, not Medium -- the test still works correctly. 
| +| Ignoring the test framework | xUnit uses `[Fact]`/`[Theory]`, NUnit uses `[Test]`/`[TestCase]`, MSTest uses `[TestMethod]`/`[DataRow]` -- use correct terminology | +| Missing the forest for the trees | If 80% of tests have no assertions, lead with that systemic issue rather than listing every instance | diff --git a/.github/skills/writing-mstest-tests/SKILL.md b/.github/skills/writing-mstest-tests/SKILL.md new file mode 100644 index 0000000000..ecf03c520c --- /dev/null +++ b/.github/skills/writing-mstest-tests/SKILL.md @@ -0,0 +1,347 @@ +--- +name: writing-mstest-tests +description: "Best practices for writing MSTest 3.x/4.x unit tests. Use when the user needs to write, improve, fix, or review MSTest tests, including modern assertions, data-driven tests, test lifecycle, and common anti-patterns. Also use when fixing test issues like swapped Assert.AreEqual arguments, incorrect assertion usage, or modernizing legacy test code. Covers MSTest.Sdk, sealed classes, Assert.Throws, DynamicData with ValueTuples, TestContext, and conditional execution." +--- + +# Writing MSTest Tests + +Help users write effective, modern unit tests with MSTest 3.x/4.x using current APIs and best practices. 
+
+## When to Use
+
+- User wants to write new MSTest unit tests
+- User wants to improve or modernize existing MSTest tests
+- User asks about MSTest assertion APIs, data-driven patterns, or test lifecycle
+- User needs to review MSTest test code for anti-patterns
+
+## When Not to Use
+
+- User needs to run or execute tests (use the `run-tests` skill)
+- User needs to upgrade from MSTest v1/v2 to v3 (use `migrate-mstest-v1v2-to-v3`)
+- User needs to upgrade from MSTest v3 to v4 (use `migrate-mstest-v3-to-v4`)
+- User needs CI/CD pipeline configuration
+- User is using xUnit, NUnit, or TUnit (not MSTest)
+
+## Inputs
+
+| Input | Required | Description |
+|-------|----------|-------------|
+| Code under test | No | The production code to be tested |
+| Existing test code | No | Current tests to review or improve |
+| Test scenario description | No | What behavior the user wants to test |
+
+## Workflow
+
+### Step 1: Determine project setup
+
+Check the test project for MSTest version and configuration:
+
+- If using `MSTest.Sdk` (`<Project Sdk="MSTest.Sdk/3.8.2">`): modern setup, all features available
+- If using `MSTest` metapackage: modern setup (MSTest 3.x+)
+- If using `MSTest.TestFramework` + `MSTest.TestAdapter`: check version for feature availability
+
+Recommend MSTest.Sdk or the MSTest metapackage for new projects:
+
+```xml
+<Project Sdk="MSTest.Sdk/3.8.2">
+
+  <PropertyGroup>
+    <TargetFramework>net9.0</TargetFramework>
+  </PropertyGroup>
+
+</Project>
+```
+
+When using `MSTest.Sdk`, put the version in `global.json` instead of the project file so all test projects get bumped together:
+
+```json
+{
+  "msbuild-sdks": {
+    "MSTest.Sdk": "3.8.2"
+  }
+}
+```
+
+```xml
+<Project Sdk="MSTest.Sdk">
+
+  <PropertyGroup>
+    <TargetFramework>net9.0</TargetFramework>
+  </PropertyGroup>
+
+</Project>
+```
+
+### Step 2: Write test classes following conventions
+
+Apply these structural conventions:
+
+- **Seal test classes** with `sealed` for performance and design clarity
+- Use `[TestClass]` on the class and `[TestMethod]` on test methods
+- Follow the **Arrange-Act-Assert** (AAA) pattern
+- Name tests using `MethodName_Scenario_ExpectedBehavior`
+- Use separate test projects with naming convention
`[ProjectName].Tests`
+
+```csharp
+[TestClass]
+public sealed class OrderServiceTests
+{
+    [TestMethod]
+    public void CalculateTotal_WithDiscount_ReturnsReducedPrice()
+    {
+        // Arrange
+        var service = new OrderService();
+        var order = new Order { Price = 100m, DiscountPercent = 10 };
+
+        // Act
+        var total = service.CalculateTotal(order);
+
+        // Assert
+        Assert.AreEqual(90m, total);
+    }
+}
+```
+
+### Step 3: Use modern assertion APIs
+
+Use the correct assertion for each scenario. Prefer `Assert` class methods over `StringAssert` or `CollectionAssert` where both exist.
+
+#### Equality and null checks
+
+```csharp
+Assert.AreEqual(expected, actual); // Value equality
+Assert.AreSame(expected, actual);  // Reference equality
+Assert.IsNull(value);
+Assert.IsNotNull(value);
+```
+
+#### Exception testing -- use `Assert.Throws` instead of `[ExpectedException]`
+
+```csharp
+// Synchronous
+var ex = Assert.ThrowsExactly<ArgumentNullException>(() => service.Process(null));
+Assert.AreEqual("input", ex.ParamName);
+
+// Async
+var ex = await Assert.ThrowsExactlyAsync<ArgumentNullException>(
+    async () => await service.ProcessAsync(null));
+```
+
+- `Assert.Throws` matches `T` or any derived type
+- `Assert.ThrowsExactly` matches only the exact type `T`
+
+#### Collection assertions
+
+```csharp
+Assert.Contains(expectedItem, collection);
+Assert.DoesNotContain(unexpectedItem, collection);
+var single = Assert.ContainsSingle(collection); // Returns the single element
+Assert.HasCount(3, collection);
+Assert.IsEmpty(collection);
+Assert.IsNotEmpty(collection);
+```
+
+Replace generic `Assert.IsTrue` with specialized assertions -- they give better failure messages:
+
+| Instead of | Use |
+|---|---|
+| `Assert.IsTrue(list.Count > 0)` | `Assert.IsNotEmpty(list)` |
+| `Assert.IsTrue(list.Count() == 3)` | `Assert.HasCount(3, list)` |
+| `Assert.IsTrue(x != null)` | `Assert.IsNotNull(x)` |
+| `list.Single(predicate)` + `Assert.IsNotNull` | `Assert.ContainsSingle(list)` |
+| `Assert.IsTrue(list.Contains(item))` | `Assert.Contains(item, list)` |
+
+#### String assertions
+
+```csharp
+Assert.Contains("expected", actualString);
+Assert.StartsWith("prefix", actualString);
+Assert.EndsWith("suffix", actualString);
+Assert.MatchesRegex(@"\d{3}-\d{4}", phoneNumber);
+```
+
+#### Type assertions
+
+```csharp
+// MSTest 3.x -- out parameter
+Assert.IsInstanceOfType<MyHandler>(result, out var typed);
+typed.Handle();
+
+// MSTest 4.x -- returns directly
+var typed = Assert.IsInstanceOfType<MyHandler>(result);
+```
+
+#### Comparison assertions
+
+```csharp
+Assert.IsGreaterThan(lowerBound, actual);
+Assert.IsLessThan(upperBound, actual);
+Assert.IsInRange(actual, low, high);
+```
+
+### Step 4: Use data-driven tests for multiple inputs
+
+#### DataRow for inline values
+
+```csharp
+[TestMethod]
+[DataRow(1, 2, 3)]
+[DataRow(0, 0, 0, DisplayName = "Zeros")]
+[DataRow(-1, 1, 0)]
+public void Add_ReturnsExpectedSum(int a, int b, int expected)
+{
+    Assert.AreEqual(expected, Calculator.Add(a, b));
+}
+```
+
+#### DynamicData with ValueTuples (preferred for complex data)
+
+Prefer `ValueTuple` return types over `IEnumerable<object[]>` for type safety:
+
+```csharp
+[TestMethod]
+[DynamicData(nameof(DiscountTestData))]
+public void ApplyDiscount_ReturnsExpectedPrice(decimal price, int percent, decimal expected)
+{
+    var result = PriceCalculator.ApplyDiscount(price, percent);
+    Assert.AreEqual(expected, result);
+}
+
+// ValueTuple -- preferred (MSTest 3.7+)
+public static IEnumerable<(decimal price, int percent, decimal expected)> DiscountTestData =>
+[
+    (100m, 10, 90m),
+    (200m, 25, 150m),
+    (50m, 0, 50m),
+];
+```
+
+When you need metadata per test case, use `TestDataRow`:
+
+```csharp
+public static IEnumerable<TestDataRow<(decimal price, int percent, decimal expected)>> DiscountTestDataWithMetadata =>
+[
+    new((100m, 10, 90m)) { DisplayName = "10% discount" },
+    new((200m, 25, 150m)) { DisplayName = "25% discount" },
+    new((50m, 0, 50m)) { DisplayName = "No discount" },
+];
+```
+
+### Step 5: Handle test lifecycle correctly
+
+- **Always initialize in the constructor** --
this enables `readonly` fields and works correctly with nullability analyzers (fields are guaranteed non-null after construction) +- Use `[TestInitialize]` **only** for async initialization, combined with the constructor for sync parts +- Use `[TestCleanup]` for cleanup that must run even on failure +- Inject `TestContext` via constructor (MSTest 3.6+) + +```csharp +[TestClass] +public sealed class RepositoryTests +{ + private readonly TestContext _testContext; + private readonly FakeDatabase _db; // readonly -- guaranteed by constructor + + public RepositoryTests(TestContext testContext) + { + _testContext = testContext; + _db = new FakeDatabase(); // sync init in ctor + } + + [TestInitialize] + public async Task InitAsync() + { + // Use TestInitialize ONLY for async setup + await _db.SeedAsync(); + } + + [TestCleanup] + public void Cleanup() => _db.Reset(); +} +``` + +#### Execution order + +1. `[AssemblyInitialize]` -- once per assembly +2. `[ClassInitialize]` -- once per class +3. Per test: + - With `TestContext` property injection: Constructor -> set `TestContext` property -> `[TestInitialize]` + - With constructor injection of `TestContext`: Constructor (receives `TestContext`) -> `[TestInitialize]` +4. Test method +5. `[TestCleanup]` -> `DisposeAsync` -> `Dispose` -- per test +6. `[ClassCleanup]` -- once per class +7. `[AssemblyCleanup]` -- once per assembly + +### Step 6: Apply cancellation and timeout patterns + +Always use `TestContext.CancellationToken` with `[Timeout]`: + +```csharp +[TestMethod] +[Timeout(5000)] +public async Task FetchData_ReturnsWithinTimeout() +{ + var result = await _client.GetDataAsync(_testContext.CancellationToken); + Assert.IsNotNull(result); +} +``` + +### Step 7: Use advanced features where appropriate + +#### Retry flaky tests (MSTest 3.9+) + +Use only for genuinely flaky external dependencies (network, file system), not to paper over race conditions or shared state issues. 
+
+```csharp
+[TestMethod]
+[Retry(3)]
+public void ExternalService_EventuallyResponds() { }
+```
+
+#### Conditional execution (MSTest 3.10+)
+
+```csharp
+[TestMethod]
+[OSCondition(OperatingSystems.Windows)]
+public void WindowsRegistry_ReadsValue() { }
+
+[TestMethod]
+[CICondition(ConditionMode.Exclude)]
+public void LocalOnly_InteractiveTest() { }
+```
+
+#### Parallelization
+
+```csharp
+[assembly: Parallelize(Workers = 4, Scope = ExecutionScope.MethodLevel)]
+
+[TestClass]
+[DoNotParallelize] // Opt out specific classes
+public sealed class DatabaseIntegrationTests { }
+```
+
+## Validation
+
+- [ ] Test classes are `sealed`
+- [ ] Test methods follow `MethodName_Scenario_ExpectedBehavior` naming
+- [ ] `Assert.ThrowsExactly` used instead of `[ExpectedException]`
+- [ ] Specialized assertions used instead of `Assert.IsTrue` (e.g., `Assert.IsNotNull`, `Assert.AreEqual`)
+- [ ] DynamicData uses ValueTuple return types instead of `IEnumerable<object[]>`
+- [ ] Sync initialization done in the constructor, not `[TestInitialize]`
+- [ ] `TestContext.CancellationToken` passed to async calls in tests with `[Timeout]`
+- [ ] Project builds with zero errors and all tests pass
+
+## Common Pitfalls
+
+| Pitfall | Solution |
+|---------|----------|
+| `Assert.AreEqual(actual, expected)` -- swapped arguments | Always put expected first: `Assert.AreEqual(expected, actual)`. Failure messages show "Expected: X, Actual: Y" so wrong order makes messages confusing |
+| `[ExpectedException]` -- obsolete, cannot assert message | Use `Assert.Throws` or `Assert.ThrowsExactly` |
+| `items.Single()` -- unclear exception on failure | Use `Assert.ContainsSingle(items)` for better failure messages |
+| Hard cast `(MyType)result` -- unclear exception | Use `Assert.IsInstanceOfType<MyType>(result)` |
+| `IEnumerable<object[]>` for DynamicData | Use `IEnumerable<(T1, T2, ...)>` ValueTuples for type safety |
+| Sync setup in `[TestInitialize]` | Initialize in the constructor instead -- enables `readonly` fields and satisfies nullability analyzers |
+| `CancellationToken.None` in async tests | Use `TestContext.CancellationToken` for cooperative timeout |
+| `public TestContext? TestContext { get; set; }` | Drop the `?` -- MSTest suppresses CS8618 for this property |
+| `TestContext TestContext { get; set; } = null!` | Remove `= null!` -- unnecessary, MSTest handles assignment |
+| Non-sealed test classes | Seal test classes by default for performance |
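+
+The `TestContext` and cancellation pitfalls above can be sketched together in one class; `Widget` is a hypothetical type used only for illustration:
+
+```csharp
+[TestClass]
+public sealed class WidgetTests
+{
+    // No '?' and no '= null!' -- MSTest assigns this property and suppresses CS8618 for it
+    public TestContext TestContext { get; set; }
+
+    [TestMethod]
+    [Timeout(2000)]
+    public async Task LoadAsync_CompletesWithinTimeout()
+    {
+        // Flow the test's cancellation token instead of CancellationToken.None,
+        // so the [Timeout] can cancel the awaited call cooperatively
+        var widget = await Widget.LoadAsync(TestContext.CancellationToken);
+        Assert.IsNotNull(widget);
+    }
+}
+```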