16 commits
- `ea7f6ec` Workflow outputs in side quests (vdauwera, Feb 12, 2026)
- `23aea53` Merge branch 'master' into gvda-workflow-outputs-in-sidequests (vdauwera, Feb 12, 2026)
- `9312104` Address PR #884 review: fix output block syntax and apply suggestions (vdauwera, Feb 12, 2026)
- `ae7bf7e` Merge branch 'master' into gvda-workflow-outputs-in-sidequests (vdauwera, Feb 17, 2026)
- `b2b4a12` Merge branch 'master' into gvda-workflow-outputs-in-sidequests (ewels, Feb 18, 2026)
- `052d110` Merge branch 'master' into gvda-workflow-outputs-in-sidequests (vdauwera, Mar 9, 2026)
- `7bcdb92` Merge branch 'master' into gvda-workflow-outputs-in-sidequests (vdauwera, Mar 9, 2026)
- `e8b70ba` Merge branch 'master' into gvda-workflow-outputs-in-sidequests (vdauwera, Mar 19, 2026)
- `fc46a58` Merge branch 'master' into gvda-workflow-outputs-in-sidequests (vdauwera, Mar 24, 2026)
- `9c18cec` Merge remote-tracking branch 'origin/master' into gvda-workflow-outpu… (pinin4fjords, Mar 26, 2026)
- `3f31440` Merge branch 'master' into gvda-workflow-outputs-in-sidequests (pinin4fjords, Mar 26, 2026)
- `0341664` Add missing main: label to buggy_workflow.nf starter (pinin4fjords, Mar 26, 2026)
- `ea0af79` Fix scripting patterns side quest: cpus continuity, cpu-shares, and h… (pinin4fjords, Mar 30, 2026)
- `5ea8816` Add workflow outputs mechanism to metadata side quest (pinin4fjords, Mar 30, 2026)
- `8289c35` Add separate sub-step for workflow output setup in metadata side quest (pinin4fjords, Mar 30, 2026)
- `65c8189` Simplify: pre-populate publish/output scaffolding in metadata starter (pinin4fjords, Mar 30, 2026)
26 changes: 20 additions & 6 deletions docs/en/docs/side_quests/debugging/index.md
@@ -2415,7 +2415,7 @@ Now it's time to put the systematic debugging approach into practice. The workfl
```
**Fix:** Take the output from the previous process
```groovy linenums="88"
handleFiles(heavyProcess.out)
file_ch = handleFiles(heavy_ch)
```

With that, the whole workflow should run.
@@ -2439,7 +2439,6 @@ Now it's time to put the systematic debugging approach into practice. The workfl
* Process with input/output mismatch
*/
process processFiles {
publishDir "${params.output}/processed", mode: 'copy'

input:
tuple val(sample_id), path(input_file)
@@ -2458,7 +2457,6 @@ Now it's time to put the systematic debugging approach into practice. The workfl
* Process with resource issues
*/
process heavyProcess {
publishDir "${params.output}/heavy", mode: 'copy'

time '100 s'

@@ -2481,7 +2479,6 @@ Now it's time to put the systematic debugging approach into practice. The workfl
* Process with file handling issues
*/
process handleFiles {
publishDir "${params.output}/files", mode: 'copy'

input:
path input_file
@@ -2501,7 +2498,7 @@ Now it's time to put the systematic debugging approach into practice. The workfl
* Main workflow with channel issues
*/
workflow {

main:
// Channel with incorrect usage
input_ch = channel
.fromPath(params.input)
@@ -2512,7 +2509,24 @@ Now it's time to put the systematic debugging approach into practice. The workfl

heavy_ch = heavyProcess(input_ch.map{it[0]})

handleFiles(heavyProcess.out)
file_ch = handleFiles(heavy_ch)

publish:
processed = processed_ch
heavy = heavy_ch
files = file_ch
}

output {
processed {
path 'processed'
}
heavy {
path 'heavy'
}
files {
path 'files'
}
}
```

7 changes: 3 additions & 4 deletions docs/en/docs/side_quests/dev_environment/index.md
@@ -283,7 +283,6 @@ Efficient navigation is crucial when working with complex workflows spanning mul
```groovy title="basic_workflow.nf" linenums="3"
process FASTQC {
tag "${sample_id}"
publishDir "${params.output_dir}/fastqc", mode: 'copy'

input:
tuple val(sample_id), path(reads)
@@ -417,13 +416,13 @@ This is invaluable when:

Sometimes you need to find where specific patterns are used across your entire project. Press `Ctrl/Cmd+Shift+F` to open the search panel.

Try searching for `publishDir` across the workspace:
Try searching for `container` across the workspace:

![Project search](img/project_search.png)

This shows you every file that uses publish directories, helping you:
This shows you every file that uses the container directive, helping you:

- Understand output organization patterns
- Understand which processes use containers
- Find examples of specific directives
- Ensure consistency across modules

16 changes: 7 additions & 9 deletions docs/en/docs/side_quests/essential_scripting_patterns/index.md
@@ -896,7 +896,7 @@ Fix this by adding conditional logic to the `FASTP` process `script:` block. An

=== "After"

```groovy title="main.nf" linenums="10" hl_lines="3-27"
```groovy title="main.nf" linenums="10" hl_lines="2-26"
script:
// Simple single-end vs paired-end detection
def is_single = reads instanceof List ? reads.size() == 1 : true
@@ -1001,8 +1001,6 @@ Take a look at the module file `modules/generate_report.nf`:
```groovy title="modules/generate_report.nf" linenums="1"
process GENERATE_REPORT {

publishDir 'results/reports', mode: 'copy'

input:
tuple val(meta), path(reads)

Expand Down Expand Up @@ -1378,7 +1376,7 @@ cat work/48/6db0c9e9d8aa65e4bb4936cd3bd59e/.command.run | grep "docker run"
You should see something like:

```bash title="docker command"
docker run -i --cpu-shares 4096 --memory 2048m -e "NXF_TASK_WORKDIR" -v /workspaces/training/side-quests/essential_scripting_patterns:/workspaces/training/side-quests/essential_scripting_patterns -w "$NXF_TASK_WORKDIR" --name $NXF_BOXID community.wave.seqera.io/library/fastp:0.24.0--62c97b06e8447690 /bin/bash -ue /workspaces/training/side-quests/essential_scripting_patterns/work/48/6db0c9e9d8aa65e4bb4936cd3bd59e/.command.sh
docker run -i --cpu-shares 2048 --memory 2048m -e "NXF_TASK_WORKDIR" -v /workspaces/training/side-quests/essential_scripting_patterns:/workspaces/training/side-quests/essential_scripting_patterns -w "$NXF_TASK_WORKDIR" --name $NXF_BOXID community.wave.seqera.io/library/fastp:0.24.0--62c97b06e8447690 /bin/bash -ue /workspaces/training/side-quests/essential_scripting_patterns/work/48/6db0c9e9d8aa65e4bb4936cd3bd59e/.command.sh
```

In this example we've picked a task that requested 2 CPUs (`--cpu-shares 2048`) because it processed a high-depth sample, but you should see different CPU allocations depending on the sample depth. Try this for the other tasks as well.
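As a rough rule (an assumption about how the Docker executor maps resources, not something stated explicitly above), `--cpu-shares` is the requested CPU count multiplied by 1024, so the dynamic `cpus` directive translates directly into the flag you see in `.command.run`:

```groovy
process FASTP {
    container 'community.wave.seqera.io/library/fastp:0.24.0--62c97b06e8447690'

    // 2 CPUs for high-depth samples -> docker run --cpu-shares 2048
    // 1 CPU otherwise               -> docker run --cpu-shares 1024
    cpus { meta.depth > 40000000 ? 2 : 1 }
    memory 2.GB

    // ... inputs, outputs and script as in the side quest ...
}
```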
@@ -1393,7 +1391,7 @@ Another powerful pattern is using `task.attempt` for retry strategies. To show w
process FASTP {
container 'community.wave.seqera.io/library/fastp:0.24.0--62c97b06e8447690'

cpus { meta.depth > 40000000 ? 4 : 2 }
cpus { meta.depth > 40000000 ? 2 : 1 }
memory 1.GB

input:
@@ -1406,7 +1404,7 @@ Another powerful pattern is using `task.attempt` for retry strategies. To show w
process FASTP {
container 'community.wave.seqera.io/library/fastp:0.24.0--62c97b06e8447690'

cpus { meta.depth > 40000000 ? 4 : 2 }
cpus { meta.depth > 40000000 ? 2 : 1 }
memory 2.GB

input:
@@ -1432,7 +1430,7 @@ nextflow run main.nf
Detecting adapter sequence for read1...
No adapter detected for read1

.command.sh: line 7: 101 Killed fastp --in1 SAMPLE_002_S2_L001_R1_001.fastq --out1 sample_002_trimmed.fastq.gz --json sample_002.fastp.json --html sample_002.fastp.html --thread 2
.command.sh: line 7: 101 Killed fastp --in1 SAMPLE_002_S2_L001_R1_001.fastq --out1 sample_002_trimmed.fastq.gz --json sample_002.fastp.json --html sample_002.fastp.html --thread 1
```

This indicates that the process was killed for exceeding memory limits.
@@ -1447,7 +1445,7 @@ To make our workflow more robust, we can implement a retry strategy that increas
process FASTP {
container 'community.wave.seqera.io/library/fastp:0.24.0--62c97b06e8447690'

cpus { meta.depth > 40000000 ? 4 : 2 }
cpus { meta.depth > 40000000 ? 2 : 1 }
memory { 1.GB * task.attempt }
errorStrategy 'retry'
maxRetries 2
@@ -1462,7 +1460,7 @@ To make our workflow more robust, we can implement a retry strategy that increas
process FASTP {
container 'community.wave.seqera.io/library/fastp:0.24.0--62c97b06e8447690'

cpus { meta.depth > 40000000 ? 4 : 2 }
cpus { meta.depth > 40000000 ? 2 : 1 }
memory 2.GB

input:
70 changes: 35 additions & 35 deletions docs/en/docs/side_quests/metadata/index.md
@@ -133,9 +133,16 @@ Open the `main.nf` workflow file to examine the workflow stub we're giving you a
#!/usr/bin/env nextflow

workflow {

main:
ch_datasheet = channel.fromPath("./data/datasheet.csv")

publish:
cowpy_art = channel.empty()
}

output {
cowpy_art {
}
}
```
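The `channel.empty()` placeholder keeps the `publish:` section syntactically valid before any process exists to feed it; later in the side quest it is replaced with `COWPY.out`. Here is a sketch of the final shape (the `path 'cowpy'` directive is an illustrative option, not part of the starter):

```groovy
workflow {
    main:
    ch_datasheet = channel.fromPath("./data/datasheet.csv")
    // ... parse the datasheet, build ch_languages, call COWPY ...

    publish:
    cowpy_art = COWPY.out   // replaces the channel.empty() placeholder
}

output {
    cowpy_art {
        path 'cowpy'        // publish under <outputDir>/cowpy
    }
}
```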

@@ -151,24 +158,16 @@ Make the following changes to add a `splitCsv()` operation to the channel constr

=== "After"

```groovy title="main.nf" linenums="3" hl_lines="4-5"
workflow {

```groovy title="main.nf" linenums="5" hl_lines="3-4"
ch_datasheet = channel.fromPath("./data/datasheet.csv")
.splitCsv(header: true)
.view()

}
```

=== "Before"

```groovy title="main.nf" linenums="3"
workflow {

```groovy title="main.nf" linenums="5"
ch_datasheet = channel.fromPath("./data/datasheet.csv")

}
```

Note that we're using the `header: true` option to tell Nextflow to read the first row of the CSV file as the header row.
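To make this concrete, here is a sketch of what `splitCsv(header: true)` emits; the column names other than `character` are assumptions for illustration, since `character` is the only field this side quest relies on:

```groovy
// Given a datasheet like:
//   id,character
//   sample1,cow
//   sample2,tux
//
// splitCsv(header: true) emits one map per data row, keyed by the header:
//   [id: 'sample1', character: 'cow']
//   [id: 'sample2', character: 'tux']
//
// which is why `row.character` works in the map{} step used later
channel.fromPath("./data/datasheet.csv")
    .splitCsv(header: true)
    .view()
```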
@@ -254,29 +253,21 @@ Make the following edits to the workflow:

=== "After"

```groovy title="main.nf" linenums="3" hl_lines="5-7"
workflow {

```groovy title="main.nf" linenums="5" hl_lines="4-6"
ch_datasheet = channel.fromPath("./data/datasheet.csv")
.splitCsv(header: true)
.map{ row ->
row.character
}
.view()

}
```

=== "Before"

```groovy title="main.nf" linenums="3"
workflow {

```groovy title="main.nf" linenums="5"
ch_datasheet = channel.fromPath("./data/datasheet.csv")
.splitCsv(header: true)
.view()

}
```

Now run the workflow again:
@@ -887,8 +878,6 @@ You can open the module file to examine its code:
// Generate ASCII art with cowpy
process COWPY {

publishDir "results/", mode: 'copy'

container 'community.wave.seqera.io/library/cowpy:1.1.5--3db457ae1977a273'

input:
@@ -1074,7 +1063,7 @@ This confirms we're able to access the file and the character for each element i

#### 3.2.3. Call the `COWPY` process

Now let's put it all together and actually call the `COWPY` process on the `ch_languages` channel.
Now we can put it all together and actually call the `COWPY` process on the `ch_languages` channel.

In the main workflow, make the following code changes:

@@ -1086,18 +1075,23 @@ In the main workflow, make the following code changes:
ch_languages.map { meta, file -> file },
ch_languages.map { meta, file -> meta.character }
)

publish:
cowpy_art = COWPY.out
```

=== "Before"

```groovy title="main.nf" linenums="34"
// Temporary: access the file and character
ch_languages.map { meta, file -> [file, meta.character] }
.view()
ch_languages.map { meta, file -> file }.view { file -> "File: " + file }
ch_languages.map { meta, file -> meta.character }.view { character -> "Character: " + character }

publish:
cowpy_art = channel.empty()
```

You see we simply copy the two map operations (minus the `.view()` statements) as the inputs to the process call.
Just make sure you don't forget the comma between them!
We replaced the temporary view operations with the actual `COWPY` process call, and updated the `publish:` section to wire up `COWPY.out` for publishing.

It's a bit clunky, but we'll see how to make that better in the next section.

@@ -1270,18 +1264,24 @@ Make the following edits to the main workflow:
=== "After"

```groovy title="main.nf" linenums="34" hl_lines="2"
// Run cowpy to generate ASCII art
COWPY(ch_languages)
// Run cowpy to generate ASCII art
COWPY(ch_languages)

publish:
cowpy_art = COWPY.out
```

=== "Before"

```groovy title="main.nf" linenums="34" hl_lines="3-4"
// Run cowpy to generate ASCII art
COWPY(
ch_languages.map { meta, file -> file },
ch_languages.map { meta, file -> meta.character }
)
// Run cowpy to generate ASCII art
COWPY(
ch_languages.map { meta, file -> file },
ch_languages.map { meta, file -> meta.character }
)

publish:
cowpy_art = COWPY.out
```

That simplifies the call significantly!
17 changes: 12 additions & 5 deletions docs/en/docs/side_quests/nf_test/index.md
@@ -119,8 +119,6 @@ You can see the full workflow code below.
*/
process sayHello {

publishDir 'results', mode: 'copy'

input:
val greeting

@@ -138,8 +136,6 @@ You can see the full workflow code below.
*/
process convertToUpper {

publishDir 'results', mode: 'copy'

input:
path input_file

@@ -153,7 +149,7 @@ You can see the full workflow code below.
}

workflow {

main:
// create a channel for inputs from a CSV file
greeting_ch = channel.fromPath(params.input_file).splitCsv().flatten()

@@ -162,6 +158,17 @@ You can see the full workflow code below.

// convert the greeting to uppercase
convertToUpper(sayHello.out)

publish:
greetings = sayHello.out
upper_greetings = convertToUpper.out
}

output {
greetings {
}
upper_greetings {
}
}
```
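The empty `output` blocks publish each channel at the root of the output directory. If you want per-channel subdirectories, each block can take a `path` directive, as the debugging side quest does; the directory names below are illustrative:

```groovy
output {
    greetings {
        path 'greetings'         // sayHello results under <outputDir>/greetings
    }
    upper_greetings {
        path 'upper_greetings'   // convertToUpper results under <outputDir>/upper_greetings
    }
}
```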
