WIP: perf: add stream download/upload mem benchmark #947
base: master
Changes from 3 commits
```diff
@@ -123,3 +123,6 @@ run
 !test/ctx_register.js

 .egg/
+
+# Benchmark test files
+benchmark/stream_download/nginx/50mb_ones.txt
```
New file (@@ -0,0 +1 @@):

```diff
+tmp
```
New file: Dockerfile (@@ -0,0 +1,43 @@):

```dockerfile
FROM node:24.12.0

# Install nginx and other required tools
RUN apt-get update && apt-get install -y \
    nginx \
    curl \
    vim \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean
```
Comment on lines +4 to +11 (Contributor):
To optimize the Docker image size, it's recommended to:
```dockerfile
# Create the nginx config directory
RUN mkdir -p /etc/nginx/conf.d

# Copy the nginx config file
COPY nginx.conf /etc/nginx/sites-available/default

# Create the nginx working directory
RUN mkdir -p /var/www/html

# Copy the startup script
COPY start-nginx.sh /usr/local/bin/start-nginx.sh
RUN chmod +x /usr/local/bin/start-nginx.sh

# Expose ports
EXPOSE 80 9229

# Set the working directory
WORKDIR /var/www/html

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost/health || exit 1

RUN mkdir -p /root/workspace

COPY gc.js /root/workspace/gc.js
COPY benchmark.js /root/workspace/benchmark.js
COPY benchmark_undici.js /root/workspace/benchmark_undici.js

RUN cd /root/workspace && npm i urllib --registry https://registry.npmmirror.com
```
Contributor comment:
Installing npm packages directly with
```dockerfile
# Startup command
CMD ["/usr/local/bin/start-nginx.sh"]
```
New file: README (@@ -0,0 +1,51 @@), excerpt:

```markdown
# Nginx download/upload test server

## Quick start

> **Note**: switch to the `benchmark/stream_download` directory before running the commands below
```

Comment on lines +1 to +5, suggested change:

```markdown
# Nginx download/upload streaming benchmark server

## Quick start

> **Note**: switch to `benchmark/stream_download` (the streaming download/upload benchmark directory) before running the commands below
```
New file: benchmark.js (@@ -0,0 +1,60 @@), excerpt:

```js
const urllib = require('urllib');
const fs = require('fs');
const crypto = require('crypto');
const path = require('path');
```

Suggested change (ES module imports):

```js
import urllib from 'urllib';
import fs from 'fs';
import crypto from 'crypto';
import path from 'path';
```
🤖 Prompt for AI Agents
In benchmark/stream_download/benchmark.js around lines 1 to 4, the file uses
CommonJS require() calls; convert them to ES module imports (e.g., replace each
require with an import statement for urllib, fs, crypto, and path) and update
any downstream code that depended on CommonJS semantics if necessary; also
ensure package.json is set to "type": "module" so the ES imports are valid.
Copilot AI, Dec 25, 2025 (outdated):
Missing error handling for the download and upload operations. If either request fails, the temporary file may not be cleaned up, leading to disk space issues over time. Consider wrapping the operations in a try-finally block to ensure cleanup happens even on failure.
Original:

```js
const downloadResponse = await urllib.request(`${BASE_URL}/download/50mb_ones.txt`, {
  writeStream: fs.createWriteStream(tmpFilePath),
});
const uploadResponse = await urllib.request(`${BASE_URL}/upload/`, {
  method: 'POST',
  stream: fs.createReadStream(tmpFilePath),
});
await fs.promises.rm(tmpFilePath);
count++;
if (count % 100 === 0) {
  console.log(
    `Downloaded and uploaded ${count} times, downloadResponse: ${downloadResponse.status}, uploadResponse: ${uploadResponse.status}`,
  );
}
```

Suggested change:

```js
let downloadResponse;
let uploadResponse;
try {
  downloadResponse = await urllib.request(`${BASE_URL}/download/50mb_ones.txt`, {
    writeStream: fs.createWriteStream(tmpFilePath),
  });
  uploadResponse = await urllib.request(`${BASE_URL}/upload/`, {
    method: 'POST',
    stream: fs.createReadStream(tmpFilePath),
  });
  count++;
  if (count % 100 === 0) {
    console.log(
      `Downloaded and uploaded ${count} times, downloadResponse: ${downloadResponse.status}, uploadResponse: ${uploadResponse.status}`,
    );
  }
} finally {
  try {
    await fs.promises.rm(tmpFilePath);
  } catch {
    // Ignore cleanup errors to avoid masking original failure
  }
}
```
Comment:
This benchmark currently involves disk I/O by writing the downloaded file to a temporary path and then reading it back for upload. This can skew the results, as you're measuring disk speed as well as network performance. For a more accurate stream benchmark, you should pipe the download response stream directly into the upload request stream. This avoids the intermediate file and measures the pure stream-handling performance.
Original:

```js
async function downloadAndUpload() {
  const tmpFilePath = path.join(__dirname, `${crypto.randomUUID()}.txt`);
  await urllib.request('http://127.0.0.1/download/50mb_ones.txt', {
    writeStream: fs.createWriteStream(tmpFilePath),
  });
  await urllib.request('http://127.0.0.1/upload/', {
    method: 'POST',
    stream: fs.createReadStream(tmpFilePath),
  });
  await fs.promises.rm(tmpFilePath);
}
```

Suggested change:

```js
async function downloadAndUpload() {
  const { res } = await urllib.request('http://127.0.0.1/download/50mb_ones.txt', {
    streaming: true,
    timeout: 60000, // 60 seconds
  });
  await urllib.request('http://127.0.0.1/upload/', {
    method: 'POST',
    stream: res,
    timeout: 60000, // 60 seconds
  });
}
```
Add error handling and resource cleanup.
The downloadAndUpload function lacks error handling, which could lead to:
- Unhandled promise rejections causing the process to crash
- Temp files left on disk if upload or deletion fails
- No visibility into failures during benchmarking
🔎 Proposed improvements
```diff
 async function downloadAndUpload() {
   const tmpFilePath = path.join(__dirname, `${crypto.randomUUID()}.txt`);
-  await urllib.request('http://127.0.0.1/download/50mb_ones.txt', {
-    writeStream: fs.createWriteStream(tmpFilePath),
-  });
-  await urllib.request('http://127.0.0.1/upload/', {
-    method: 'POST',
-    stream: fs.createReadStream(tmpFilePath),
-  });
-  await fs.promises.rm(tmpFilePath);
+  try {
+    await urllib.request('http://127.0.0.1/download/50mb_ones.txt', {
+      writeStream: fs.createWriteStream(tmpFilePath),
+    });
+    await urllib.request('http://127.0.0.1/upload/', {
+      method: 'POST',
+      stream: fs.createReadStream(tmpFilePath),
+    });
+  } finally {
+    // Ensure temp file is cleaned up even on error
+    try {
+      await fs.promises.rm(tmpFilePath);
+    } catch (err) {
+      // Ignore if file doesn't exist
+      if (err.code !== 'ENOENT') throw err;
+    }
+  }
 }
```

🤖 Prompt for AI Agents
In benchmark/stream_download/benchmark.js around lines 6 to 16,
downloadAndUpload currently has no error handling or guaranteed cleanup; wrap
the download/upload sequence in a try/catch/finally: create the tmpFilePath and
streams, perform the download and upload inside try, log or rethrow any caught
errors in catch, and in finally ensure the read/write streams are
closed/destroyed and the temp file is removed if it exists (use
fs.promises.unlink or rm and guard with exists check), so failures during upload
or deletion won’t leak files or unhandled promise rejections.
The while(true) loop will exit on the first error that occurs within downloadAndUpload(), stopping the benchmark. To make the benchmark more robust, you should wrap the call in a try...catch block to handle potential errors and allow the loop to continue.
Original:

```js
(async () => {
  while (true) {
    await downloadAndUpload();
  }
})();
```

Suggested change:

```js
(async () => {
  while (true) {
    try {
      await downloadAndUpload();
    } catch (err) {
      console.error('An error occurred during download/upload:', err);
      // Optional: wait a bit before retrying
      await new Promise(resolve => setTimeout(resolve, 1000));
    }
  }
})();
```
Add logging, metrics, and graceful shutdown.
For a benchmark tool, this implementation has several issues:
- No output to measure performance or track progress
- No metrics collection (timing, throughput, memory usage)
- Infinite loop with no exit condition or signal handling
- Will leave orphaned temp files on forced termination
🔎 Suggested enhancements
```diff
+let iteration = 0;
+const startTime = Date.now();
+
+process.on('SIGINT', () => {
+  const duration = (Date.now() - startTime) / 1000;
+  console.log(`\nBenchmark stopped after ${iteration} iterations in ${duration.toFixed(2)}s`);
+  process.exit(0);
+});
+
 (async () => {
   while (true) {
-    await downloadAndUpload();
+    const iterStart = Date.now();
+    try {
+      await downloadAndUpload();
+      const duration = Date.now() - iterStart;
+      iteration++;
+      console.log(`Iteration ${iteration}: ${duration}ms`);
+    } catch (err) {
+      console.error(`Error in iteration ${iteration + 1}:`, err.message);
+      // Continue or exit based on error severity
+    }
   }
 })();
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In benchmark/stream_download/benchmark.js around lines 18 to 22, the loop
currently runs forever with no logs, metrics, signal handling, or temp-file
cleanup; update it to run a controlled benchmark: add configurable iteration
count or duration and emit periodic logs and metrics (timings, per-iteration
throughput, cumulative bytes, memory usage) after each downloadAndUpload run;
measure start/end time and bytes transferred to compute throughput and record to
a simple in-process metrics object or exportable JSON/CSV; implement graceful
shutdown by listening for SIGINT/SIGTERM to stop submitting new work, wait for
the current iteration to finish, clean up any temp files created by
downloadAndUpload, flush/serialize metrics and logs, and then exit; ensure
downloadAndUpload returns metadata (bytes, duration, temp paths) so the runner
can aggregate and delete temp files and report results.
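Since this PR is a memory benchmark, the metrics the review asks for would likely include process memory. A small self-contained sketch of the kind of sampling such a runner could emit; `formatMB` and `logMemory` are illustrative helper names, not part of the PR:

```javascript
// Sketch: format process.memoryUsage() fields for periodic benchmark logging.
// Both helpers are illustrative and not part of this PR's code.
function formatMB(bytes) {
  return `${(bytes / 1024 / 1024).toFixed(1)}MB`;
}

function logMemory(label) {
  const { rss, heapUsed, external, arrayBuffers } = process.memoryUsage();
  console.log(
    `${label} rss=${formatMB(rss)} heapUsed=${formatMB(heapUsed)} ` +
      `external=${formatMB(external)} arrayBuffers=${formatMB(arrayBuffers)}`
  );
}

logMemory('startup');
```

Calling `logMemory` every N iterations would make leaks visible as a steadily growing `rss`/`external` line in the benchmark output.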
Copilot AI, Dec 24, 2025:
The infinite loop in the benchmark script will run continuously without any delay or control mechanism. This makes it difficult to observe memory behavior over time and could overwhelm the system. Consider adding a configurable iteration count or delay between iterations, and implement graceful shutdown handling for SIGINT/SIGTERM signals.
Original:

```js
(async () => {
  while (true) {
    await downloadAndUpload();
  }
})();
```

Suggested change:

```js
let keepRunning = true;
process.once('SIGINT', () => {
  keepRunning = false;
});
process.once('SIGTERM', () => {
  keepRunning = false;
});

const maxIterationsEnv = process.env.BENCHMARK_MAX_ITERATIONS;
let maxIterations;
if (maxIterationsEnv) {
  const parsed = Number.parseInt(maxIterationsEnv, 10);
  if (Number.isFinite(parsed) && parsed > 0) {
    maxIterations = parsed;
  }
}

const delayEnv = process.env.BENCHMARK_DELAY_MS;
const delayMs = (() => {
  if (!delayEnv) {
    return 0;
  }
  const parsed = Number.parseInt(delayEnv, 10);
  if (!Number.isFinite(parsed) || parsed < 0) {
    return 0;
  }
  return parsed;
})();

async function main() {
  for (let i = 0; keepRunning && (maxIterations === undefined || i < maxIterations); i++) {
    await downloadAndUpload();
    if (!keepRunning) {
      break;
    }
    if (delayMs > 0) {
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}

main().catch(err => {
  // eslint-disable-next-line no-console
  console.error('Benchmark failed:', err);
  process.exitCode = 1;
});
```
Copilot AI, Dec 25, 2025:
The infinite while loop with concurrent Promise.all calls could lead to uncontrolled resource consumption. If download/upload operations are slow or hang, this could accumulate many pending operations. Consider adding concurrency limits or waiting for previous operations to complete before starting new ones.
New file: benchmark_undici.js (@@ -0,0 +1,39 @@), excerpt:

```js
const { request } = require('undici');
const fs = require('fs');
const crypto = require('crypto');
const path = require('path');
const { pipeline } = require('stream/promises');

async function downloadAndUpload() {
  const tmpFilePath = path.join(__dirname, `${crypto.randomUUID()}.txt`);

  // Download file
  const downloadResponse = await request('http://127.0.0.1/download/50mb_ones.txt');
  await pipeline(downloadResponse.body, fs.createWriteStream(tmpFilePath));

  // Upload file
  await request('http://127.0.0.1/upload/', {
    method: 'POST',
    body: fs.createReadStream(tmpFilePath),
  });

  await fs.promises.rm(tmpFilePath);
}
```
Comment on lines +10 to +20, suggested change:

```js
try {
  // Download file
  const downloadResponse = await request('http://127.0.0.1/download/50mb_ones.txt');
  await pipeline(downloadResponse.body, fs.createWriteStream(tmpFilePath));

  // Upload file
  await request('http://127.0.0.1/upload/', {
    method: 'POST',
    body: fs.createReadStream(tmpFilePath),
  });
} finally {
  try {
    await fs.promises.rm(tmpFilePath, { force: true });
  } catch {
    // Ignore cleanup errors to avoid masking original failure
  }
}
```
Missing import for setTimeout.
Line 29 uses setTimeout(100) as a promise, but setTimeout is not imported from 'timers/promises'. Awaiting the global setTimeout does not wait, because it returns a timer ID, not a promise.

🔎 Proposed fix

```diff
 const fs = require('fs');
 const crypto = require('crypto');
 const path = require('path');
+const { setTimeout } = require('timers/promises');
 const { pipeline } = require('stream/promises');
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In benchmark/stream_download/benchmark_undici.js around line 29, the code calls
await setTimeout(100) but never imports the promise-based setTimeout, so the
global setTimeout returns a timer ID and the intended delay never happens; fix by
adding an import for the promise-based setTimeout from 'timers/promises' (e.g.,
import { setTimeout } from "timers/promises") at the top of the file and keep
using await setTimeout(100) as intended.
Copilot AI, Dec 25, 2025:
The infinite while loop with no error handling could cause the benchmark to run indefinitely even if operations are failing. If download/upload operations hang, this could accumulate many pending operations. Consider adding error handling and potentially a mechanism to detect and report failures.
New file: curl timing format template (@@ -0,0 +1,16 @@):

```text
time_namelookup: %{time_namelookup}\n
time_connect: %{time_connect}\n
time_appconnect: %{time_appconnect}\n
time_pretransfer: %{time_pretransfer}\n
time_redirect: %{time_redirect}\n
time_starttransfer: %{time_starttransfer}\n
----------\n
time_total: %{time_total}\n
----------\n
size_download: %{size_download}\n
speed_download: %{speed_download}\n
size_request: %{size_request}\n
speed_upload: %{speed_upload}\n
content_type: %{content_type}\n
num_connects: %{num_connects}\n
num_redirects: %{num_redirects}
```
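A hedged usage sketch: curl reads a template like this through `-w "@file"`. The sketch below writes a reduced two-field variant and exercises it against a local `file://` URL so no nginx server is needed; all paths are illustrative:

```shell
# Write a reduced timing template, then pass it to curl via -w "@file".
# A file:// URL keeps the example self-contained (no server required).
cat > /tmp/curl-format-demo.txt <<'EOF'
time_total: %{time_total}\n
size_download: %{size_download}\n
EOF

printf 'hello' > /tmp/curl-demo-payload.txt

curl -s -w "@/tmp/curl-format-demo.txt" -o /dev/null "file:///tmp/curl-demo-payload.txt"
```

Against the benchmark server, the same flag would be pointed at the full template file and the download/upload URLs used elsewhere in this PR.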
New file: Docker Compose config (@@ -0,0 +1,23 @@):

```yaml
version: '3.8'

services:
  nginx-benchmark:
    image: nginx:alpine
    container_name: nginx-benchmark-server
    ports:
      - '8080:80'
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./nginx:/usr/share/nginx/html:ro
```
Comment on lines +9 to +11 (Contributor):

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Check the root directory configured in nginx.conf
rg -n 'root\s+' benchmark/stream_download/nginx.conf
```

Repository: cnpm/cnpmcore. Length of output: 122

🏁 Script executed:

```bash
#!/bin/bash
# View the context around the root directives
echo "=== Context around line 6 ==="
sed -n '1,15p' benchmark/stream_download/nginx.conf
echo ""
echo "=== Context around line 76 ==="
sed -n '70,82p' benchmark/stream_download/nginx.conf
```

Repository: cnpm/cnpmcore. Length of output: 551

Fix the server-level root directive in nginx.conf to match the mounted volume path. The default
Change line 6 to

🤖 Prompt for AI Agents
```yaml
    restart: unless-stopped

  # Optional: use openresty for Lua module support
  # openresty-benchmark:
  #   image: openresty/openresty:alpine
  #   container_name: openresty-benchmark-server
  #   ports:
  #     - "8080:80"
  #   volumes:
  #     - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
  #     - ./nginx:/usr/share/nginx/html:ro
  #   restart: unless-stopped
```
New file: gc.js (@@ -0,0 +1,79 @@), excerpt:

```js
const { PerformanceObserver, constants } = require('node:perf_hooks');

const gcStats = {
  totalGCDuration: 0, // ms
  count: 0,
  byKind: {
    scavenge: 0, // minor GC
    markSweepCompact: 0, // major GC
    incremental: 0,
    weakc: 0,
    unknown: 0,
  },
};

// kind meaning: https://nodejs.org/api/perf_hooks.html#performancegc_kind
// 1: scavenge
// 2: mark-sweep-compact
// 4: incremental
// 8: weak callbacks
function kindToString(kind) {
  switch (kind) {
    case constants.NODE_PERFORMANCE_GC_MAJOR:
      return 'markSweepCompact';
    case constants.NODE_PERFORMANCE_GC_MINOR:
      return 'scavenge';
    case constants.NODE_PERFORMANCE_GC_INCREMENTAL:
      return 'incremental';
    case constants.NODE_PERFORMANCE_GC_WEAKCB:
      return 'weakc';
    default:
      return 'unknown';
  }
}

const obs = new PerformanceObserver((list) => {
  const entries = list.getEntries();
  for (const entry of entries) {
    gcStats.totalGCDuration += entry.duration;
    gcStats.count += 1;

    const kindCode = entry.detail?.kind;
    const kind = kindToString(kindCode);
    if (!gcStats.byKind[kind]) gcStats.byKind[kind] = 0;
    gcStats.byKind[kind] += entry.duration;
  }
});

obs.observe({ entryTypes: ['gc'] });

// for other modules to use
function getGCStats() {
  return {
    totalGCDuration: gcStats.totalGCDuration,
    count: gcStats.count,
    avgDuration: gcStats.count ? gcStats.totalGCDuration / gcStats.count : 0,
    byKind: { ...gcStats.byKind },
  };
}

// only print GC stats if the GC environment variable is set
if (process.env.GC || true) {
```
Contributor comment:

🛠️ Refactor suggestion | 🟠 Major

Remove constant condition to make the environment check meaningful. The condition `process.env.GC || true` is always truthy, so the stats interval runs unconditionally and the comment above it is misleading.

🔎 Proposed fix

```diff
-if (process.env.GC || true) {
+if (process.env.GC) {
   setInterval(() => {
```

🧰 Tools
🪛 Biome (2.1.2)
[error] 61-61: Unexpected constant condition. (lint/correctness/noConstantCondition)
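To sanity-check the aggregation gc.js performs, here is a self-contained sketch that feeds synthetic entries (not real GC events) through the same accumulation and average logic that the observer callback and `getGCStats()` use:

```javascript
// Synthetic re-implementation of the accumulation in gc.js: record() mirrors
// the PerformanceObserver callback body, avgDuration mirrors getGCStats().
const stats = { totalGCDuration: 0, count: 0, byKind: {} };

function record(entry) {
  stats.totalGCDuration += entry.duration;
  stats.count += 1;
  stats.byKind[entry.kind] = (stats.byKind[entry.kind] || 0) + entry.duration;
}

// Fake entries standing in for observed GC events.
[
  { kind: 'scavenge', duration: 2 },
  { kind: 'scavenge', duration: 4 },
  { kind: 'markSweepCompact', duration: 10 },
].forEach(record);

const avgDuration = stats.count ? stats.totalGCDuration / stats.count : 0;
console.log(stats.totalGCDuration, stats.count, avgDuration.toFixed(2));
// prints: 16 3 5.33
```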
New file: file-generation script (@@ -0,0 +1,30 @@):

```bash
#!/bin/bash

# Generate a 50MB txt file whose content is all 1s
# Filename: 50mb_ones.txt

OUTPUT_FILE="nginx/50mb_ones.txt"
TARGET_SIZE_MB=50
TARGET_SIZE_BYTES=$((TARGET_SIZE_MB * 1024 * 1024))

# Check whether the file already exists
if [ -f "$OUTPUT_FILE" ]; then
  echo "File $OUTPUT_FILE already exists, deleting..."
  rm -f "$OUTPUT_FILE"
fi

echo "Generating a $TARGET_SIZE_MB MB file whose content is all 1s..."

# Use dd to generate the file: 1KB blocks, 50*1024 blocks in total
dd if=/dev/zero bs=1024 count=$((TARGET_SIZE_MB * 1024)) | tr '\0' '1' > "$OUTPUT_FILE"
```
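The `dd | tr` pipeline above can be sanity-checked at a smaller size. This sketch (output path illustrative) generates a 4 KB variant and verifies both the byte count and that every byte is the character '1':

```shell
# Same pipeline at 4 KB: /dev/zero emits NUL bytes, tr rewrites each to '1'.
dd if=/dev/zero bs=1024 count=4 2>/dev/null | tr '\0' '1' > /tmp/ones_4kb.txt

# Size check: 4 * 1024 = 4096 bytes.
wc -c < /tmp/ones_4kb.txt

# Content check: deleting every '1' should leave zero bytes.
tr -d '1' < /tmp/ones_4kb.txt | wc -c
```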
```bash
# Verify the file size
ACTUAL_SIZE=$(stat -f%z "$OUTPUT_FILE" 2>/dev/null || stat -c%s "$OUTPUT_FILE" 2>/dev/null)
```

Suggested change:

```diff
-ACTUAL_SIZE=$(stat -f%z "$OUTPUT_FILE" 2>/dev/null || stat -c%s "$OUTPUT_FILE" 2>/dev/null)
+ACTUAL_SIZE=$(wc -c < "$OUTPUT_FILE")
```
Copilot comment:
The Node.js version `24.12.0` specified does not appear to be a valid or current version. The latest Node.js version is 22.x, and the current LTS is 20.x. Using a non-existent version will cause the build to fail. Please use a current stable or LTS version. Using an `-alpine` image is also recommended for smaller image sizes.