3 changes: 3 additions & 0 deletions .gitignore
@@ -123,3 +123,6 @@ run
!test/ctx_register.js

.egg/

# Benchmark test files
benchmark/stream_download/nginx/50mb_ones.txt
41 changes: 41 additions & 0 deletions benchmark/stream_download/Dockerfile
@@ -0,0 +1,41 @@
FROM node:24.12.0
Contributor (high)

The Node.js version 24.12.0 specified does not appear to be a valid or current version. The latest Node.js version is 22.x, and the current LTS is 20.x. Using a non-existent version will cause the build to fail. Please use a current stable or LTS version. Using an -alpine image is also recommended for smaller image sizes.

FROM node:20.14.0-alpine


# Install nginx and other required tools
RUN apt-get update && apt-get install -y \
nginx \
curl \
vim \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get clean
Comment on lines +4 to +11
Contributor (medium)

To optimize the Docker image size, it's recommended to:

  1. Use --no-install-recommends with apt-get install to avoid installing unnecessary packages.
  2. Remove vim as it's a large dependency and generally not needed in a production or benchmark image. If you need to debug, you can docker exec into a running container and install it manually.
RUN apt-get update && apt-get install -y --no-install-recommends \
    nginx \
    curl \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean


# Create the nginx config directory
RUN mkdir -p /etc/nginx/conf.d

# Copy the nginx config file
COPY nginx.conf /etc/nginx/sites-available/default

# Create the nginx web root
RUN mkdir -p /var/www/html

# Copy the startup script
COPY start-nginx.sh /usr/local/bin/start-nginx.sh
RUN chmod +x /usr/local/bin/start-nginx.sh

# Expose ports
EXPOSE 80 9229

# Set the working directory
WORKDIR /var/www/html

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost/health || exit 1

RUN mkdir -p /root/workspace

COPY benchmark.js /root/workspace/benchmark.js

RUN cd /root/workspace && npm i urllib --registry https://registry.npmmirror.com
Contributor (medium)

Installing npm packages directly with npm i inside the Dockerfile is not ideal for dependency management. It's better practice to add urllib to a dependencies section in your package.json, copy package.json (and package-lock.json) into the image, and then run npm install or npm ci. This makes your dependencies explicit and leverages Docker's layer caching more effectively. I've added a separate comment on package.json with a suggestion. With that change, this line should be updated to use npm install.

RUN cd /root/workspace && npm install --registry https://registry.npmmirror.com

Copilot AI, Dec 25, 2025

The npm package installation uses a hardcoded Chinese mirror registry. For a project that may be used internationally, consider using the default npm registry or making the registry configurable. If the Chinese mirror is required for specific performance reasons, consider adding a comment explaining why.

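A minimal sketch of how the registry could be made configurable with a build argument, per the comment above (the `NPM_REGISTRY` name and its default are illustrative, not from the PR):

```dockerfile
# Illustrative sketch: default to the public registry; override at build time with
#   docker build --build-arg NPM_REGISTRY=https://registry.npmmirror.com .
ARG NPM_REGISTRY=https://registry.npmjs.org
RUN cd /root/workspace && npm i urllib --registry "$NPM_REGISTRY"
```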

# Startup command
CMD ["/usr/local/bin/start-nginx.sh"]
51 changes: 51 additions & 0 deletions benchmark/stream_download/README.md
@@ -0,0 +1,51 @@
# Nginx download/upload test server

## Quick start

> **Note**: change into the `benchmark/stream_download` directory before running the commands below
Comment on lines +1 to +5
Copilot AI, Dec 24, 2025

The benchmark script is located in a directory called 'stream_download', but it performs both download AND upload operations. The directory name doesn't accurately reflect its purpose. Consider renaming to 'stream_benchmark' or 'stream_upload_download' to better represent the full scope of functionality.

Suggested change
# Nginx download/upload test server
## Quick start
> **Note**: change into the `benchmark/stream_download` directory before running the commands below
# Nginx download/upload streaming benchmark server
## Quick start
> **Note**: change into the `benchmark/stream_download` directory (the download/upload streaming benchmark directory) before running the commands below


### Build the image

```bash
docker build --platform linux/amd64 -t nginx-node-benchmark .
```

### Run the container

```bash
docker run -rm -d --platform linux/amd64 \
Contributor (medium)

There's a typo in the docker run command. The -rm flag should be --rm.

Suggested change
docker run -rm -d --platform linux/amd64 \
docker run --rm -d --platform linux/amd64 \

Copilot AI, Dec 24, 2025

The docker run command has a typo: -rm should be --rm (with double dash). This will cause the command to fail as -rm is not a valid Docker flag.

Suggested change
docker run -rm -d --platform linux/amd64 \
docker run --rm -d --platform linux/amd64 \

--name nginx-benchmark-server \
-p 8080:80 \
-v $(pwd)/nginx:/var/www/html \
nginx-node-benchmark
```

### Test

```bash
# Download test
curl -O http://localhost:8080/download/test-file.txt

# Upload test
curl -X POST -d "test" http://localhost:8080/upload/
```

### Stop

```bash
docker stop nginx-benchmark-server && docker rm nginx-benchmark-server
```

### Generate the large test file

```bash
sh generate_50mb_file.sh
```

### Run the node benchmark

```bash
docker exec -ti nginx-benchmark-server bash
cd /root/workspace
node benchmark.js
```
22 changes: 22 additions & 0 deletions benchmark/stream_download/benchmark.js
@@ -0,0 +1,22 @@
const urllib = require('urllib');
Contributor

⚠️ Potential issue | 🟠 Major


Add urllib to benchmark/stream_download/package.json dependencies.

The urllib package is required by benchmark.js but not declared in the benchmark's package.json. This will cause the benchmark to fail with a module not found error at runtime.

Update benchmark/stream_download/package.json to include urllib as a dependency:

{
  "type": "commonjs",
  "dependencies": {
    "urllib": "^3.0.0"
  }
}
🤖 Prompt for AI Agents
In benchmark/stream_download/benchmark.js around line 1, the file requires
"urllib" but benchmark/stream_download/package.json is missing that dependency;
update benchmark/stream_download/package.json to add "urllib" under dependencies
(e.g., "urllib": "^3.0.0") and ensure "type" remains "commonjs", then run npm
install (or update lockfile) so the module is available at runtime.

const fs = require('fs');
const crypto = require('crypto');
const path = require('path');
Contributor

⚠️ Potential issue | 🔴 Critical

Convert to ES module imports.

The code uses CommonJS require() statements, which violates the project's coding guidelines mandating ES modules throughout the codebase.

🔎 Proposed fix
-const urllib = require('urllib');
-const fs = require('fs');
-const crypto = require('crypto');
-const path = require('path');
+import urllib from 'urllib';
+import fs from 'fs';
+import crypto from 'crypto';
+import path from 'path';

Note: This requires changing package.json to "type": "module" (see related comment on package.json).

Based on learnings: Use ES modules (import/export) throughout the codebase - no CommonJS require()

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
const urllib = require('urllib');
const fs = require('fs');
const crypto = require('crypto');
const path = require('path');
import urllib from 'urllib';
import fs from 'fs';
import crypto from 'crypto';
import path from 'path';
🤖 Prompt for AI Agents
In benchmark/stream_download/benchmark.js around lines 1 to 4, the file uses
CommonJS require() calls; convert them to ES module imports (e.g., replace each
require with an import statement for urllib, fs, crypto, and path) and update
any downstream code that depended on CommonJS semantics if necessary; also
ensure package.json is set to "type": "module" so the ES imports are valid.


async function downloadAndUpload() {
const tmpFilePath = path.join(__dirname, `${crypto.randomUUID()}.txt`);
await urllib.request('http://127.0.0.1/download/50mb_ones.txt', {
writeStream: fs.createWriteStream(tmpFilePath),
});
await urllib.request('http://127.0.0.1/upload/', {
Copilot AI, Dec 24, 2025

The hardcoded URL 'http://127.0.0.1/download/50mb_ones.txt' uses port 80, but the README and other scripts configure the service to run on port 8080. This mismatch will cause the benchmark to fail when run. The URL should be 'http://127.0.0.1:8080/download/50mb_ones.txt' to match the Docker port mapping.

Suggested change
await urllib.request('http://127.0.0.1/download/50mb_ones.txt', {
writeStream: fs.createWriteStream(tmpFilePath),
});
await urllib.request('http://127.0.0.1/upload/', {
await urllib.request('http://127.0.0.1:8080/download/50mb_ones.txt', {
writeStream: fs.createWriteStream(tmpFilePath),
});
await urllib.request('http://127.0.0.1:8080/upload/', {

Copilot AI, Dec 24, 2025

The hardcoded URL 'http://127.0.0.1/upload/' uses port 80, but the README and other scripts configure the service to run on port 8080. This mismatch will cause the benchmark to fail when run. The URL should be 'http://127.0.0.1:8080/upload/' to match the Docker port mapping.

Suggested change
await urllib.request('http://127.0.0.1/upload/', {
await urllib.request('http://127.0.0.1:8080/upload/', {

method: 'POST',
stream: fs.createReadStream(tmpFilePath),
});
await fs.promises.rm(tmpFilePath);
Copilot AI, Dec 24, 2025

The benchmark script lacks error handling around the download and upload operations. If any operation fails (network error, disk full, etc.), the script will crash without cleanup, potentially leaving orphaned temporary files. Wrap the operations in try-catch blocks and ensure cleanup happens in a finally block.

Suggested change
await urllib.request('http://127.0.0.1/download/50mb_ones.txt', {
writeStream: fs.createWriteStream(tmpFilePath),
});
await urllib.request('http://127.0.0.1/upload/', {
method: 'POST',
stream: fs.createReadStream(tmpFilePath),
});
await fs.promises.rm(tmpFilePath);
let downloadStream;
let uploadStream;
try {
downloadStream = fs.createWriteStream(tmpFilePath);
await urllib.request('http://127.0.0.1/download/50mb_ones.txt', {
writeStream: downloadStream,
});
uploadStream = fs.createReadStream(tmpFilePath);
await urllib.request('http://127.0.0.1/upload/', {
method: 'POST',
stream: uploadStream,
});
} finally {
if (downloadStream) {
downloadStream.destroy();
}
if (uploadStream) {
uploadStream.destroy();
}
try {
await fs.promises.rm(tmpFilePath, { force: true });
} catch {
// ignore cleanup errors in benchmark script
}
}

}
Comment on lines +28 to +44
Contributor (high)

This benchmark currently involves disk I/O by writing the downloaded file to a temporary path and then reading it back for upload. This can skew the results, as you're measuring disk speed as well as network performance. For a more accurate stream benchmark, you should pipe the download response stream directly into the upload request stream. This avoids the intermediate file and measures the pure stream handling performance.

Suggested change
async function downloadAndUpload() {
const tmpFilePath = path.join(__dirname, `${crypto.randomUUID()}.txt`);
await urllib.request('http://127.0.0.1/download/50mb_ones.txt', {
writeStream: fs.createWriteStream(tmpFilePath),
});
await urllib.request('http://127.0.0.1/upload/', {
method: 'POST',
stream: fs.createReadStream(tmpFilePath),
});
await fs.promises.rm(tmpFilePath);
}
async function downloadAndUpload() {
const { res } = await urllib.request('http://127.0.0.1/download/50mb_ones.txt', {
streaming: true,
timeout: 60000, // 60 seconds
});
await urllib.request('http://127.0.0.1/upload/', {
method: 'POST',
stream: res,
timeout: 60000, // 60 seconds
});
}

Comment on lines +28 to +44
Contributor

⚠️ Potential issue | 🔴 Critical

Add error handling and resource cleanup.

The downloadAndUpload function lacks error handling, which could lead to:

  • Unhandled promise rejections causing the process to crash
  • Temp files left on disk if upload or deletion fails
  • No visibility into failures during benchmarking
🔎 Proposed improvements
 async function downloadAndUpload() {
   const tmpFilePath = path.join(__dirname, `${crypto.randomUUID()}.txt`);
-  await urllib.request('http://127.0.0.1/download/50mb_ones.txt', {
-    writeStream: fs.createWriteStream(tmpFilePath),
-  });
-  await urllib.request('http://127.0.0.1/upload/', {
-    method: 'POST',
-    stream: fs.createReadStream(tmpFilePath),
-  });
-  await fs.promises.rm(tmpFilePath);
+  try {
+    await urllib.request('http://127.0.0.1/download/50mb_ones.txt', {
+      writeStream: fs.createWriteStream(tmpFilePath),
+    });
+    await urllib.request('http://127.0.0.1/upload/', {
+      method: 'POST',
+      stream: fs.createReadStream(tmpFilePath),
+    });
+  } finally {
+    // Ensure temp file is cleaned up even on error
+    try {
+      await fs.promises.rm(tmpFilePath);
+    } catch (err) {
+      // Ignore if file doesn't exist
+      if (err.code !== 'ENOENT') throw err;
+    }
+  }
 }
🤖 Prompt for AI Agents
In benchmark/stream_download/benchmark.js around lines 6 to 16,
downloadAndUpload currently has no error handling or guaranteed cleanup; wrap
the download/upload sequence in a try/catch/finally: create the tmpFilePath and
streams, perform the download and upload inside try, log or rethrow any caught
errors in catch, and in finally ensure the read/write streams are
closed/destroyed and the temp file is removed if it exists (use
fs.promises.unlink or rm and guard with exists check), so failures during upload
or deletion won’t leak files or unhandled promise rejections.


(async () => {
while (true) {
await downloadAndUpload();
}
})();
Comment on lines +47 to +70
Contributor (high)

The while(true) loop will exit on the first error that occurs within downloadAndUpload(), stopping the benchmark. To make the benchmark more robust, you should wrap the call in a try...catch block to handle potential errors and allow the loop to continue.

Suggested change
(async () => {
while (true) {
await downloadAndUpload();
}
})();
(async () => {
while (true) {
try {
await downloadAndUpload();
} catch (err) {
console.error('An error occurred during download/upload:', err);
// Optional: wait a bit before retrying
await new Promise(resolve => setTimeout(resolve, 1000));
}
}
})();

Comment on lines +47 to +70
Contributor

⚠️ Potential issue | 🟠 Major

Add logging, metrics, and graceful shutdown.

For a benchmark tool, this implementation has several issues:

  • No output to measure performance or track progress
  • No metrics collection (timing, throughput, memory usage)
  • Infinite loop with no exit condition or signal handling
  • Will leave orphaned temp files on forced termination
🔎 Suggested enhancements
+let iteration = 0;
+const startTime = Date.now();
+
+process.on('SIGINT', () => {
+  const duration = (Date.now() - startTime) / 1000;
+  console.log(`\nBenchmark stopped after ${iteration} iterations in ${duration.toFixed(2)}s`);
+  process.exit(0);
+});
+
 (async () => {
   while (true) {
-    await downloadAndUpload();
+    const iterStart = Date.now();
+    try {
+      await downloadAndUpload();
+      const duration = Date.now() - iterStart;
+      iteration++;
+      console.log(`Iteration ${iteration}: ${duration}ms`);
+    } catch (err) {
+      console.error(`Error in iteration ${iteration + 1}:`, err.message);
+      // Continue or exit based on error severity
+    }
   }
 })();

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In benchmark/stream_download/benchmark.js around lines 18 to 22, the loop
currently runs forever with no logs, metrics, signal handling, or temp-file
cleanup; update it to run a controlled benchmark: add configurable iteration
count or duration and emit periodic logs and metrics (timings, per-iteration
throughput, cumulative bytes, memory usage) after each downloadAndUpload run;
measure start/end time and bytes transferred to compute throughput and record to
a simple in-process metrics object or exportable JSON/CSV; implement graceful
shutdown by listening for SIGINT/SIGTERM to stop submitting new work, wait for
the current iteration to finish, clean up any temp files created by
downloadAndUpload, flush/serialize metrics and logs, and then exit; ensure
downloadAndUpload returns metadata (bytes, duration, temp paths) so the runner
can aggregate and delete temp files and report results.

Comment on lines +47 to +70
Copilot AI, Dec 24, 2025

The infinite loop in the benchmark script will run continuously without any delay or control mechanism. This makes it difficult to observe memory behavior over time and could overwhelm the system. Consider adding a configurable iteration count or delay between iterations, and implement graceful shutdown handling for SIGINT/SIGTERM signals.

Suggested change
(async () => {
while (true) {
await downloadAndUpload();
}
})();
let keepRunning = true;
process.once('SIGINT', () => {
keepRunning = false;
});
process.once('SIGTERM', () => {
keepRunning = false;
});
const maxIterationsEnv = process.env.BENCHMARK_MAX_ITERATIONS;
let maxIterations;
if (maxIterationsEnv) {
const parsed = Number.parseInt(maxIterationsEnv, 10);
if (Number.isFinite(parsed) && parsed > 0) {
maxIterations = parsed;
}
}
const delayEnv = process.env.BENCHMARK_DELAY_MS;
const delayMs = (() => {
if (!delayEnv) {
return 0;
}
const parsed = Number.parseInt(delayEnv, 10);
if (!Number.isFinite(parsed) || parsed < 0) {
return 0;
}
return parsed;
})();
async function main() {
for (let i = 0; keepRunning && (maxIterations === undefined || i < maxIterations); i++) {
await downloadAndUpload();
if (!keepRunning) {
break;
}
if (delayMs > 0) {
await new Promise(resolve => setTimeout(resolve, delayMs));
}
}
}
main().catch(err => {
// eslint-disable-next-line no-console
console.error('Benchmark failed:', err);
process.exitCode = 1;
});

Comment on lines +47 to +70
Copilot AI, Dec 25, 2025

The infinite while loop with concurrent Promise.all calls could lead to uncontrolled resource consumption. If download/upload operations are slow or hang, this could accumulate many pending operations. Consider adding concurrency limits or waiting for previous operations to complete before starting new ones.

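One way to bound in-flight work, in the spirit of that suggestion, is a small concurrency limiter. This is an illustrative sketch only; `runWithLimit` is not part of the PR or of urllib:

```javascript
// Illustrative concurrency limiter: run async task factories with at most
// `limit` of them in flight at once. Because the event loop is single-threaded,
// claiming the index with `next++` before the first await is race-free.
async function runWithLimit(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++; // claim the next task index
      results[i] = await tasks[i]();
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, tasks.length) },
    () => worker()
  );
  await Promise.all(workers);
  return results;
}

// Example: 5 tasks, at most 2 running concurrently.
const tasks = Array.from({ length: 5 }, (_, i) => async () => i * 2);
runWithLimit(tasks, 2).then(results => {
  console.log(results); // [ 0, 2, 4, 6, 8 ]
});
```

In the benchmark this could wrap repeated `downloadAndUpload` calls so that a hung request does not let pending operations pile up unboundedly.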

Large diffs are not rendered by default.

16 changes: 16 additions & 0 deletions benchmark/stream_download/curl-format.txt
@@ -0,0 +1,16 @@
time_namelookup: %{time_namelookup}\n
time_connect: %{time_connect}\n
time_appconnect: %{time_appconnect}\n
time_pretransfer: %{time_pretransfer}\n
time_redirect: %{time_redirect}\n
time_starttransfer: %{time_starttransfer}\n
----------\n
time_total: %{time_total}\n
----------\n
size_download: %{size_download}\n
speed_download: %{speed_download}\n
size_request: %{size_request}\n
speed_upload: %{speed_upload}\n
content_type: %{content_type}\n
num_connects: %{num_connects}\n
num_redirects: %{num_redirects}
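The format file above is intended for curl's `-w "@file"` write-out option. A quick way to see a write-out variable in action without the benchmark server is a `file://` URL (illustrative only; per the README, the real benchmark target would be `http://localhost:8080/download/50mb_ones.txt`):

```shell
# Demonstrate a curl write-out variable against a local file URL.
printf '11111' > /tmp/curl_demo.txt
curl -s -o /dev/null -w "size_download: %{size_download}\n" "file:///tmp/curl_demo.txt"
```

Against the running container the full format file would be used, e.g. `curl -w "@curl-format.txt" -o /dev/null -s http://localhost:8080/download/50mb_ones.txt`.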
23 changes: 23 additions & 0 deletions benchmark/stream_download/docker-compose.nginx.yml
@@ -0,0 +1,23 @@
version: '3.8'

services:
nginx-benchmark:
image: nginx:alpine
container_name: nginx-benchmark-server
ports:
- '8080:80'
volumes:
- ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
- ./nginx:/usr/share/nginx/html:ro
Comment on lines +9 to +11
Contributor

⚠️ Potential issue | 🟠 Major


Fix the server-level root directive in nginx.conf to match the mounted volume path.

The default root /var/www/html/; at the server level (line 6) does not match the mounted volume path /usr/share/nginx/html. The container will fail to serve content because /var/www/html is not mounted in docker-compose. The root /usr/share/nginx/html; directive at line 76 only applies to the error page location block and won't cover general requests.

Change line 6 to root /usr/share/nginx/html/; to align with the docker-compose volume mount.

🤖 Prompt for AI Agents
In benchmark/stream_download/docker-compose.nginx.yml around lines 9 to 11, the
nginx server-level root in the mounted nginx.conf points to /var/www/html which
doesn't match the docker-compose volume mount /usr/share/nginx/html; update the
server-level root directive in nginx.conf (line 6) to read root
/usr/share/nginx/html/ so general requests use the mounted content (leave the
existing error-page-specific root intact).

Copilot AI, Dec 24, 2025

The volume mount path in docker-compose.nginx.yml uses '/usr/share/nginx/html' (line 11), which is the standard nginx path, but the nginx.conf file references '/var/www/html/' as the root directory (line 6 in nginx.conf) and the Dockerfile creates '/var/www/html' as the working directory. This path inconsistency will cause the nginx service to fail to serve files correctly.

restart: unless-stopped

# Optional: use openresty for Lua module support
# openresty-benchmark:
# image: openresty/openresty:alpine
# container_name: openresty-benchmark-server
# ports:
# - "8080:80"
# volumes:
# - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
# - ./nginx:/usr/share/nginx/html:ro
# restart: unless-stopped
30 changes: 30 additions & 0 deletions benchmark/stream_download/generate_50mb_file.sh
@@ -0,0 +1,30 @@
#!/bin/bash

# Generate a 50MB txt file whose content is all '1' characters
# Output file: 50mb_ones.txt

OUTPUT_FILE="50mb_ones.txt"
TARGET_SIZE_MB=50
TARGET_SIZE_BYTES=$((TARGET_SIZE_MB * 1024 * 1024))

# Remove the file if it already exists
if [ -f "$OUTPUT_FILE" ]; then
echo "File $OUTPUT_FILE already exists, deleting..."
rm -f "$OUTPUT_FILE"
fi

echo "Generating a $TARGET_SIZE_MB MB file filled with '1' characters..."

# Generate the file with dd: 1KB blocks, 50*1024 blocks in total
dd if=/dev/zero bs=1024 count=$((TARGET_SIZE_MB * 1024)) | tr '\0' '1' > "$OUTPUT_FILE"
Contributor (medium)

The dd command is functionally correct, but can be made more readable by using 1M for block size and referencing the TARGET_SIZE_MB variable directly.

Suggested change
dd if=/dev/zero bs=1024 count=$((TARGET_SIZE_MB * 1024)) | tr '\0' '1' > "$OUTPUT_FILE"
dd if=/dev/zero bs=1M count=${TARGET_SIZE_MB} | tr '\0' '1' > "$OUTPUT_FILE"


# Verify the file size
ACTUAL_SIZE=$(stat -f%z "$OUTPUT_FILE" 2>/dev/null || stat -c%s "$OUTPUT_FILE" 2>/dev/null)
Copilot AI, Dec 24, 2025

The stat command usage at line 22 attempts to use both BSD (-f%z) and GNU (-c%s) syntax with fallback, but the command structure will fail. When the BSD version fails, it will not silently fall back to the GNU version - instead, it will show an error. Consider using a more robust approach by checking the OS type first or using a conditional that properly handles the error without displaying it.

Suggested change
ACTUAL_SIZE=$(stat -f%z "$OUTPUT_FILE" 2>/dev/null || stat -c%s "$OUTPUT_FILE" 2>/dev/null)
ACTUAL_SIZE=$(wc -c < "$OUTPUT_FILE")

if [ "$ACTUAL_SIZE" -eq "$TARGET_SIZE_BYTES" ]; then
echo "Successfully generated file: $OUTPUT_FILE"
echo "File size: $(ls -lh "$OUTPUT_FILE" | awk '{print $5}')"
else
echo "Warning: file size mismatch, expected: $TARGET_SIZE_BYTES bytes, actual: $ACTUAL_SIZE bytes"
fi

echo "File path: $(pwd)/$OUTPUT_FILE"
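Scaled down to 4 KB, the same generate-and-verify flow (with the portable `wc -c` size check suggested in the review) can be sketched as:

```shell
# Generate 4 KB of '1' characters using the same dd | tr technique.
dd if=/dev/zero bs=1024 count=4 2>/dev/null | tr '\0' '1' > /tmp/ones_demo.txt

# wc -c gives a byte count portably on both BSD and GNU systems,
# avoiding the stat -f%z / stat -c%s divergence.
ACTUAL_SIZE=$(wc -c < /tmp/ones_demo.txt | tr -d '[:space:]')
echo "size: $ACTUAL_SIZE"
head -c 4 /tmp/ones_demo.txt; echo
```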
78 changes: 78 additions & 0 deletions benchmark/stream_download/nginx.conf
@@ -0,0 +1,78 @@
server {
listen 80;
server_name localhost;

# Set the web root to the nginx folder
root /var/www/html/;

# Disable caching to support streaming download tests
Copilot AI, Dec 25, 2025

The nginx configuration disables sendfile which is a performance optimization. While the comment states this is to support streaming download tests, consider documenting why this is necessary for the benchmark specifically, as sendfile is generally recommended for serving static files efficiently.

Suggested change
# Disable caching to support streaming download tests
# sendfile is explicitly disabled here so the benchmark can precisely observe "streaming" download behavior.
# sendfile should normally be enabled in production to improve static file performance (zero-copy),
# but it bypasses user-space buffering and chunked send logic, letting the kernel write file data directly to the socket.
# This benchmark measures the throughput and latency of the application/proxy layer writing response data incrementally,
# so sendfile is disabled to ensure responses go through nginx's user-space handling and buffering, more faithfully reflecting a streaming download scenario.

sendfile off;
tcp_nopush off;
tcp_nodelay on;
keepalive_timeout 65;

# Download path - GET /download/
location /download/ {
# Map to files in the nginx directory
alias /var/www/html/;
autoindex on;
autoindex_exact_size off;

# Support resumable downloads
add_header Accept-Ranges bytes;

# Set appropriate cache headers for testing
expires -1;
add_header Cache-Control "no-cache, no-store, must-revalidate";
add_header Pragma "no-cache";

# Allow cross-origin access (needed for testing)
Copilot AI, Dec 25, 2025

The CORS configuration allows all origins with wildcard (*). While this may be acceptable for a local benchmark, consider adding a comment noting this is for testing only and should be restricted in any production-like scenario.

Suggested change
# Allow cross-origin access (needed for testing)
# Allow cross-origin access (needed for testing)
# Note: the CORS configuration below is only for local/benchmark environments and must not be used directly in production.
# In production, restrict Access-Control-Allow-Origin to specific trusted domains instead of the wildcard *.

add_header Access-Control-Allow-Origin *;
add_header Access-Control-Allow-Methods "GET, OPTIONS";
add_header Access-Control-Allow-Headers "DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range";

if ($request_method = 'OPTIONS') {
return 204;
}
}

# Upload path - POST /upload/
location /upload/ {
# Only allow POST
limit_except POST {
deny all;
}

# Redirect uploaded content to /dev/null
client_body_in_file_only clean;
client_body_temp_path /tmp/nginx_temp;
client_max_body_size 0; # no upload size limit
Contributor

⚠️ Potential issue | 🟡 Minor

Unlimited upload size is a DoS risk.

client_max_body_size 0 allows unlimited upload sizes, which could be exploited in a DoS attack by uploading extremely large files. While this is a benchmark environment, consider setting a reasonable limit (e.g., 100MB or 1GB) to prevent accidental resource exhaustion.

🤖 Prompt for AI Agents
In benchmark/stream_download/nginx.conf around line 49, client_max_body_size is
set to 0 (unlimited), which is a DoS risk; change this to a reasonable cap (for
example "100m" or "1g") appropriate for your benchmark needs, update the
directive to that value on line 49, and document the chosen limit in a comment
so future maintainers understand the trade-off.

Copy link

Copilot AI Dec 25, 2025

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Setting client_max_body_size to 0 disables the upload size limit entirely. For a benchmark this may be intentional, but it creates a potential denial-of-service vector if the server is exposed. Consider adding a comment explaining this is intentional for benchmarking purposes only and should not be used in production.

Suggested change
client_max_body_size 0; # 不限制上传大小
client_max_body_size 0; # 不限制上传大小(仅用于压测,禁止用于生产环境)

Copilot uses AI. Check for mistakes.

# Use a Lua module, or simply return 201
return 201;
Comment on lines +46 to +52
Contributor

🛠️ Refactor suggestion | 🟠 Major

Unnecessary file I/O for upload endpoint.

The configuration writes uploaded request bodies to temporary files (client_body_in_file_only clean) but immediately returns 201 without processing them. This creates unnecessary disk I/O that defeats the purpose of a benchmark testing streaming upload performance.

🔎 Recommended fix
     location /upload/ {
         # Allow only the POST method
         limit_except POST {
             deny all;
         }

-        # Spool upload bodies to temp files, then discard them
-        client_body_in_file_only clean;
-        client_body_temp_path /tmp/nginx_temp;
-        client_max_body_size 0;  # No upload size limit
+        # Discard upload bodies in memory to test pure upload performance
+        client_body_in_single_buffer on;
+        client_max_body_size 100m;  # Reasonable upload size cap

-        # Use a Lua module, or simply return 201
         return 201;

This approach reads the body into memory without writing to disk, which better simulates streaming upload scenarios.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# Spool upload bodies to temp files, then discard them
client_body_in_file_only clean;
client_body_temp_path /tmp/nginx_temp;
client_max_body_size 0; # No upload size limit
# Use a Lua module, or simply return 201
return 201;
# Discard upload bodies in memory to test pure upload performance
client_body_in_single_buffer on;
client_max_body_size 100m; # Reasonable upload size cap
return 201;
🤖 Prompt for AI Agents
In benchmark/stream_download/nginx.conf around lines 46 to 52, the config forces
writing request bodies to disk (client_body_in_file_only clean and
client_body_temp_path) even though the handler immediately returns 201, causing
unnecessary I/O; remove or disable those directives (set
client_body_in_file_only off or delete the line and remove
client_body_temp_path), ensure requests are buffered in memory by adding or
increasing client_body_buffer_size (and optionally enable
client_body_in_single_buffer on) while keeping client_max_body_size 0 if
unlimited uploads are desired, so uploads are handled in-memory and the endpoint
returns 201 without disk writes.


# For more complex handling, Lua can be used (requires lua-nginx-module / OpenResty)
# content_by_lua_block {
# ngx.req.read_body()
# local data = ngx.req.get_body_data()
# -- The body has been read, but nothing is done with it
# ngx.status = 201
# -- Guard against a missing Content-Length header (e.g. chunked uploads)
# ngx.say("{\"status\":\"uploaded\",\"bytes_received\":" .. (ngx.req.get_headers()["content-length"] or "0") .. "}")
# ngx.exit(201)
# }
}
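
The upload endpoint above only discards bodies server-side; a client benchmark still has to produce a streaming body to send. As a hedged sketch (not part of this PR — the function name `makeUploadBody` and the chunk size are illustrative assumptions), a Node client could generate an upload body of a given size without buffering the whole payload in memory:

```javascript
import { Readable } from 'node:stream';

// Hypothetical helper (not in this PR): produce `total` bytes in
// `chunkSize` pieces on demand, which is the kind of body a streaming
// upload benchmark would POST to /upload/.
export function makeUploadBody(total, chunkSize = 64 * 1024) {
  let sent = 0;
  return new Readable({
    read() {
      if (sent >= total) {
        this.push(null); // end of stream
        return;
      }
      const n = Math.min(chunkSize, total - sent);
      sent += n;
      this.push(Buffer.alloc(n, 0x31)); // chunks filled with ASCII '1'
    },
  });
}
```

Such a stream could then be handed to the HTTP client as the request body; whether the server accepts it at any size depends on the client_max_body_size choice discussed in the review comments above.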

# Health check endpoint
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}

# Error pages
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
16 changes: 16 additions & 0 deletions benchmark/stream_download/nginx/test-file.txt
Original file line number Diff line number Diff line change
@@ -0,0 +1,16 @@
This is a test file for nginx download testing.

File contents:
- Test file size: about 1KB
- Purpose: verify file downloads via the /download/ path
- Created: 2025-12-24

It can be tested as follows:
1. GET http://localhost:8080/download/test-file.txt
2. Using curl: curl -O http://localhost:8080/download/test-file.txt
3. Using wget: wget http://localhost:8080/download/test-file.txt

Testing the upload feature:
1. POST http://localhost:8080/upload/
2. Using curl: curl -X POST -d "test data" http://localhost:8080/upload/
3. Uploading a file with curl: curl -X POST --data-binary @test-file.txt http://localhost:8080/upload/
3 changes: 3 additions & 0 deletions benchmark/stream_download/package.json
Original file line number Diff line number Diff line change
@@ -0,0 +1,3 @@
{
"type": "commonjs"
}
Comment on lines +1 to +3
Contributor

medium

This package.json file is missing the dependencies section. To make dependency management explicit and reliable, you should declare urllib here. This also works in conjunction with improvements to the Dockerfile to leverage layer caching.

Suggested change
{
"type": "commonjs"
}
{
"type": "commonjs",
"dependencies": {
"urllib": "^3.22.0"
}
}

Comment on lines +1 to +3
Contributor

⚠️ Potential issue | 🔴 Critical

Change module type to ES modules to align with project standards.

The package.json specifies "type": "commonjs", which contradicts the project's coding guidelines and learnings that mandate ES modules (import/export) throughout the codebase. This forces the benchmark.js file to use CommonJS require() instead of modern ES6 import statements.

🔎 Proposed fix
 {
-  "type": "commonjs"
+  "type": "module"
 }

This change will require updating benchmark.js to use ES module syntax (see related comment on that file).

Based on learnings: Use ES modules (import/export) throughout the codebase - no CommonJS require()

📝 Committable suggestion

Suggested change
{
"type": "commonjs"
}
{
"type": "module"
}
🤖 Prompt for AI Agents
In benchmark/stream_download/package.json around lines 1 to 3, the module type
is set to "commonjs" but the project standard requires ES modules; change the
"type" value to "module" in package.json and then update benchmark.js to use ES
module syntax (replace require() with import statements and export default where
applicable), ensure any file extensions or relative import paths comply with ES
module rules, and run the benchmark to verify no runtime import errors remain.
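
If the switch to "type": "module" is made, the download side of the benchmark would drain response bodies with ESM syntax. A minimal hedged sketch (the real benchmark.js is not shown in this hunk; the `drain` helper and the stand-in stream are illustrative assumptions):

```javascript
import { Readable } from 'node:stream';

// Hypothetical ESM-style helper (not in this PR): drain a readable
// stream while counting bytes - what a download benchmark does with
// each response body it does not want to keep.
export async function drain(stream) {
  let bytes = 0;
  for await (const chunk of stream) {
    bytes += chunk.length;
  }
  return bytes;
}

// Stand-in for a response body; a real run would use the body returned
// by the HTTP client against http://localhost:8080/download/...
const fakeBody = Readable.from([Buffer.alloc(1024), Buffer.alloc(2048), Buffer.alloc(512)]);
console.log(await drain(fakeBody)); // 3584
```

Note that top-level await, as used here, only works once the package is declared an ES module.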

90 changes: 90 additions & 0 deletions benchmark/stream_download/start-docker.sh
Original file line number Diff line number Diff line change
@@ -0,0 +1,90 @@
#!/bin/bash

# Docker startup script
set -e

# Variables
IMAGE_NAME="nginx-node-benchmark"
CONTAINER_NAME="nginx-benchmark-server"
HOST_PORT="8080"
CONTAINER_PORT="80"
MOUNT_DIR="$(pwd)/nginx"
CONTAINER_MOUNT_DIR="/var/www/html"

# Colored output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

echo -e "${GREEN}=== Starting Docker container ===${NC}"

# Check that Docker is running
if ! docker info > /dev/null 2>&1; then
echo -e "${RED}Error: Docker is not running or not installed${NC}"
exit 1
fi

# Check that the nginx directory exists
if [ ! -d "$MOUNT_DIR" ]; then
echo -e "${YELLOW}Warning: nginx directory does not exist, creating it...${NC}"
mkdir -p "$MOUNT_DIR"
fi

# Check for test files and create them if missing
if [ ! -f "$MOUNT_DIR/test-file.txt" ]; then
echo -e "${YELLOW}Creating test file...${NC}"
cp nginx/test-file.txt "$MOUNT_DIR/" 2>/dev/null || echo "Test file already exists"
fi

if [ ! -f "$MOUNT_DIR/large-test-file.bin" ]; then
echo -e "${YELLOW}Creating large test file...${NC}"
cp nginx/large-test-file.bin "$MOUNT_DIR/" 2>/dev/null || dd if=/dev/zero of="$MOUNT_DIR/large-test-file.bin" bs=1M count=10
Comment on lines +37 to +42
Copilot AI Dec 24, 2025

The conditional logic at lines 36-38 attempts to copy files that may not exist yet. The script references 'nginx/test-file.txt' from the current directory when the MOUNT_DIR variable already points to '$(pwd)/nginx'. This creates a circular reference where it tries to copy 'nginx/test-file.txt' to 'nginx/test-file.txt'. Consider checking if the file exists in MOUNT_DIR directly, or create a simple test file if it doesn't exist.

Suggested change
cp nginx/test-file.txt "$MOUNT_DIR/" 2>/dev/null || echo "Test file already exists"
fi
if [ ! -f "$MOUNT_DIR/large-test-file.bin" ]; then
echo -e "${YELLOW}Creating large test file...${NC}"
cp nginx/large-test-file.bin "$MOUNT_DIR/" 2>/dev/null || dd if=/dev/zero of="$MOUNT_DIR/large-test-file.bin" bs=1M count=10
echo "This is a small test file for nginx benchmark." > "$MOUNT_DIR/test-file.txt"
fi
if [ ! -f "$MOUNT_DIR/large-test-file.bin" ]; then
echo -e "${YELLOW}Creating large test file...${NC}"
dd if=/dev/zero of="$MOUNT_DIR/large-test-file.bin" bs=1M count=10
Comment on lines +37 to +42
Copilot AI Dec 24, 2025

Similar to the test-file.txt issue, the script tries to copy 'nginx/large-test-file.bin' to '$MOUNT_DIR/large-test-file.bin', but if MOUNT_DIR is '$(pwd)/nginx', this creates a circular reference. The fallback 'dd' command is good, but the copy attempt will always fail when the file doesn't exist in the parent scope.

Suggested change
cp nginx/test-file.txt "$MOUNT_DIR/" 2>/dev/null || echo "Test file already exists"
fi
if [ ! -f "$MOUNT_DIR/large-test-file.bin" ]; then
echo -e "${YELLOW}Creating large test file...${NC}"
cp nginx/large-test-file.bin "$MOUNT_DIR/" 2>/dev/null || dd if=/dev/zero of="$MOUNT_DIR/large-test-file.bin" bs=1M count=10
if [ -f "nginx/test-file.txt" ]; then
cp "nginx/test-file.txt" "$MOUNT_DIR/"
else
echo "benchmark test file" > "$MOUNT_DIR/test-file.txt"
fi
fi
if [ ! -f "$MOUNT_DIR/large-test-file.bin" ]; then
echo -e "${YELLOW}Creating large test file...${NC}"
if [ -f "nginx/large-test-file.bin" ]; then
cp "nginx/large-test-file.bin" "$MOUNT_DIR/"
else
dd if=/dev/zero of="$MOUNT_DIR/large-test-file.bin" bs=1M count=10
fi
fi
Comment on lines +34 to +43
Contributor

⚠️ Potential issue | 🟠 Major

Fix path confusion in test file creation logic.

Lines 37 and 42 attempt to copy files from nginx/ to $MOUNT_DIR/ where MOUNT_DIR is defined as $(pwd)/nginx, resulting in circular or redundant paths (e.g., cp nginx/test-file.txt $(pwd)/nginx/test-file.txt). Additionally, the "Test file already exists" fallback message is misleading when the fallback for the large file actually creates it using dd.

🔎 Recommended fix
 # Check for test files and create them if missing
 if [ ! -f "$MOUNT_DIR/test-file.txt" ]; then
     echo -e "${YELLOW}Creating test file...${NC}"
-    cp nginx/test-file.txt "$MOUNT_DIR/" 2>/dev/null || echo "Test file already exists"
+    echo "This is a test file" > "$MOUNT_DIR/test-file.txt"
 fi

 if [ ! -f "$MOUNT_DIR/large-test-file.bin" ]; then
     echo -e "${YELLOW}Creating large test file...${NC}"
-    cp nginx/large-test-file.bin "$MOUNT_DIR/" 2>/dev/null || dd if=/dev/zero of="$MOUNT_DIR/large-test-file.bin" bs=1M count=10
+    dd if=/dev/zero of="$MOUNT_DIR/large-test-file.bin" bs=1M count=10
 fi
📝 Committable suggestion

Suggested change
# Check for test files and create them if missing
if [ ! -f "$MOUNT_DIR/test-file.txt" ]; then
echo -e "${YELLOW}Creating test file...${NC}"
cp nginx/test-file.txt "$MOUNT_DIR/" 2>/dev/null || echo "Test file already exists"
fi
if [ ! -f "$MOUNT_DIR/large-test-file.bin" ]; then
echo -e "${YELLOW}Creating large test file...${NC}"
cp nginx/large-test-file.bin "$MOUNT_DIR/" 2>/dev/null || dd if=/dev/zero of="$MOUNT_DIR/large-test-file.bin" bs=1M count=10
fi
# Check for test files and create them if missing
if [ ! -f "$MOUNT_DIR/test-file.txt" ]; then
echo -e "${YELLOW}Creating test file...${NC}"
echo "This is a test file" > "$MOUNT_DIR/test-file.txt"
fi
if [ ! -f "$MOUNT_DIR/large-test-file.bin" ]; then
echo -e "${YELLOW}Creating large test file...${NC}"
dd if=/dev/zero of="$MOUNT_DIR/large-test-file.bin" bs=1M count=10
fi
🤖 Prompt for AI Agents
In benchmark/stream_download/start-docker.sh around lines 34 to 43, the script
copies from "nginx/..." into $MOUNT_DIR which is set to $(pwd)/nginx causing
circular paths and a misleading fallback message; fix by (1) resolving a proper
source directory (e.g. SCRIPT_DIR or repo root) and use that as the cp source
instead of "nginx/" when $MOUNT_DIR points to ./nginx, (2) add a conditional: if
source file exists then cp into $MOUNT_DIR, else if target missing create it
with dd, and (3) update the fallback echo to accurately reflect the action taken
(e.g. "created with dd" vs "already exists"). Ensure file-existence checks use
absolute/consistent paths so cp is never attempted from the same directory as
the target.


# Stop and remove any existing container
echo "Checking for and stopping any existing container..."
docker stop "$CONTAINER_NAME" > /dev/null 2>&1 || true
docker rm "$CONTAINER_NAME" > /dev/null 2>&1 || true

# Build the Docker image
echo "Building Docker image..."
docker build -t "$IMAGE_NAME" .

# Start the container
echo "Starting container..."
docker run -d \
--name "$CONTAINER_NAME" \
-p "$HOST_PORT:$CONTAINER_PORT" \
-v "$MOUNT_DIR:$CONTAINER_MOUNT_DIR:ro" \
--restart unless-stopped \
"$IMAGE_NAME"

# Wait for the container to start
echo "Waiting for container to start..."
sleep 3
Contributor

medium

Using a fixed sleep 3 to wait for the container to start is unreliable. The container might take more or less time depending on the host machine's performance. A more robust approach is to poll the healthcheck endpoint in a loop until it becomes available or a timeout is reached.

Suggested change
sleep 3
# Wait for container to be healthy
echo "Waiting for container to be healthy..."
for i in {1..20}; do
if curl -s -f http://localhost:"$HOST_PORT"/health > /dev/null; then
echo "Container is healthy."
break
fi
if [ $i -eq 20 ]; then
echo -e "${RED}Container did not become healthy in time.${NC}"
docker logs "$CONTAINER_NAME"
exit 1
fi
echo "Still waiting for container... ($i/20)"
sleep 1
done


# Check container status
if docker ps | grep -q "$CONTAINER_NAME"; then
echo -e "${GREEN}Container started successfully!${NC}"
echo -e "${GREEN}URL: http://localhost:$HOST_PORT${NC}"
echo -e "${GREEN}Download test: http://localhost:$HOST_PORT/download/${NC}"
echo -e "${GREEN}Upload test: http://localhost:$HOST_PORT/upload/${NC}"
echo -e "${GREEN}Health check: http://localhost:$HOST_PORT/health${NC}"

# Show container info
echo ""
echo "Container info:"
docker ps | grep "$CONTAINER_NAME"

echo ""
echo "Test commands:"
echo "Download test: curl -O http://localhost:$HOST_PORT/download/test-file.txt"
echo "Upload test: curl -X POST -d 'test' http://localhost:$HOST_PORT/upload/"

else
echo -e "${RED}Container failed to start!${NC}"
echo "Check the logs:"
docker logs "$CONTAINER_NAME"
exit 1
fi