
tests: Add virtio-net tests + supporting testing framework improvements#603

Merged
slp merged 15 commits into containers:main from mtjhrc:network-testing
Apr 24, 2026

Conversation

Collaborator

@mtjhrc mtjhrc commented Mar 24, 2026

Changes:

  • Move network namespace isolation (unshare) from a single global wrapper to per-test isolation in the runner
  • Test runner can now clean up background processes (e.g. gvproxy, vmnet-helper) after each test
  • Add configurable per-test timeout to prevent hanging the suite
  • Tests can now specify a Containerfile to build a guest rootfs via podman — the runner builds the image, exports it, and mounts it as the guest's virtiofs root. Used by iperf3 tests to get a Fedora-based rootfs with iperf3 pre-installed.
  • Introduce TestOutcome::Report so tests can produce structured output (terminal text + GitHub-flavored markdown for CI summaries) instead of just pass/fail
  • Introduce functional virtio-net tests for passt, tap, gvproxy, and vmnet-helper backends using guest DHCP
  • Introduce parametrized iperf3 performance tests that run iperf3 client in the guest against a host server and report throughput
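The `TestOutcome::Report` mechanism described above might look roughly like the following sketch. The trait name `ReportImpl` comes from the PR discussion, but the method names (`terminal_text`, `github_markdown`) and the `ThroughputReport` example type are illustrative assumptions, not the actual code:

```rust
// Hypothetical trait mirroring the ReportImpl described in the PR:
// structured output rendered one way for the terminal and another
// way as GitHub-flavored markdown for CI summaries.
trait ReportImpl {
    fn terminal_text(&self) -> String;
    fn github_markdown(&self) -> String;
}

#[allow(dead_code)]
enum TestOutcome {
    Pass,
    Fail(String),
    Report(Box<dyn ReportImpl>),
}

// Example report a performance test might return (hypothetical type).
struct ThroughputReport {
    backend: &'static str,
    gbits_per_sec: f64,
}

impl ReportImpl for ThroughputReport {
    fn terminal_text(&self) -> String {
        format!("{}: {:.2} Gbit/s", self.backend, self.gbits_per_sec)
    }
    fn github_markdown(&self) -> String {
        format!("| {} | {:.2} Gbit/s |", self.backend, self.gbits_per_sec)
    }
}

fn main() {
    let outcome = TestOutcome::Report(Box::new(ThroughputReport {
        backend: "passt",
        gbits_per_sec: 12.34,
    }));
    if let TestOutcome::Report(r) = outcome {
        println!("{}", r.terminal_text());
        println!("{}", r.github_markdown());
    }
}
```

The point of the trait object is that the runner can print the same outcome twice, once per output target, without knowing the concrete report type.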

Comment thread tests/test_cases/src/test_net/vmnet_helper.rs Outdated
.arg("--fd")
.arg(helper_fd.to_string())
.arg("--enable-tso")
.arg("--enable-checksum-offload")
Contributor

It will be useful to test both with and without offloading. Offloading does not work for some use cases, for example when the client is in a Kubernetes pod network: we get very low bandwidth (1000x slower) and a huge number of retransmits.

https://github.com/nirs/vmnet-helper/blob/615b5bb5dc4def4e9453e7f4caaa7ff9d7cdd3b3/performance/2026-02/M2/minikube/report.md#pod-network---iperf3-on-vm

Collaborator Author

Hmm, makes sense.

Also, I am still unsure how I want to approach the performance testing: whether we should have many fixed variants of the tests here, or make them more parametrized and let the user pick.
The thing is, we probably won't be running these performance tests in CI for the foreseeable future anyway (no full macOS CI, and I'm not sure how consistent the performance baseline is on Linux...).

I put the performance tests here mostly because they're a really convenient way for me to run them locally when making changes to libkrun code (and reviewing PRs).

Contributor

You can check the vmnet-helper benchmarking infrastructure. It uses YAML files to describe benchmark parameters:
https://github.com/nirs/vmnet-helper/tree/main/benchmarks

and plot parameters:
https://github.com/nirs/vmnet-helper/blob/main/plots/offloading.yaml

The bench run command reads the YAML file and runs all the benchmarks, saving results to JSON files:
https://github.com/nirs/vmnet-helper/blob/main/bench

The bench plot command reads the plot YAML files describing the plots and the data generated by bench run.

The benchmarks are very noisy since we run multiple VMs and cannot control what macOS runs in the background.

To compare results for a PR you need to run the same benchmark twice, once with the previous version and once with the change. The run must be long enough to mitigate random noise during the run.

Collaborator Author

Yes, the graphs and everything are nice! But it makes me wonder whether it is necessary to replicate the whole testing infrastructure here (though I guess we want that for the other back-ends on Linux).
Anyway, such improvements are out of scope for this PR; they can be added later.

Contributor

Network benchmarks for different programs and network proxies sound like a separate project. They could be used by vfkit, krunkit, qemu, lima, minikube, vmnet-helper, gvproxy, passt, and more.

@mtjhrc mtjhrc force-pushed the network-testing branch 8 times, most recently from 4ada604 to 6b882be Compare March 31, 2026 12:52
@mtjhrc mtjhrc force-pushed the network-testing branch 4 times, most recently from e9941e1 to 32aad4f Compare April 1, 2026 11:48
@mtjhrc mtjhrc added the 1.x label Apr 13, 2026
mtjhrc added 2 commits April 21, 2026 16:19
Signed-off-by: Matej Hrica <mhrica@redhat.com>
Instead of wrapping the entire test runner in a single unshare
namespace from run.sh, perform per-test network namespace isolation
directly in the runner when spawning each test subprocess.

On Linux, each test is wrapped with `unshare --user --map-root-user
--net` and loopback is brought up inside the namespace. If unshare
is unavailable, tests run without isolation (with a warning).
On macOS, tests run directly without unshare.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
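The per-test isolation this commit describes might be sketched as below. The function name, the `sh -c` wrapping, and the exact fallback behavior are illustrative assumptions; only the `unshare --user --map-root-user --net` invocation and the loopback setup come from the commit message:

```rust
use std::process::Command;

// Sketch: wrap a test binary in a fresh user+network namespace on Linux,
// bringing up loopback inside the namespace before exec'ing the test.
// Falls back to running without isolation (with a warning) if unshare
// is unavailable, as the commit message describes.
fn isolated_test_command(test_bin: &str) -> Command {
    let unshare_available = Command::new("unshare")
        .arg("--help")
        .output()
        .is_ok();
    if cfg!(target_os = "linux") && unshare_available {
        let mut cmd = Command::new("unshare");
        cmd.args(["--user", "--map-root-user", "--net", "sh", "-c"])
            .arg(format!("ip link set lo up && exec {test_bin}"));
        cmd
    } else {
        eprintln!("warning: running {test_bin} without network isolation");
        Command::new(test_bin)
    }
}

fn main() {
    let cmd = isolated_test_command("/bin/true");
    println!("{cmd:?}");
}
```

Bringing `lo` up matters because a fresh network namespace starts with loopback down, which would break tests that talk to `127.0.0.1`.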
@mtjhrc mtjhrc marked this pull request as ready for review April 21, 2026 14:55
@mtjhrc
Collaborator Author

mtjhrc commented Apr 21, 2026

/gemini review


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request significantly expands the integration testing suite by adding unified virtio-net tests supporting multiple backends (passt, TAP, gvproxy, and vmnet-helper) and iperf3-based performance benchmarks. Key infrastructure improvements include Podman-based rootfs provisioning, automated background process cleanup, per-test timeouts, and namespace isolation using buildah. Review feedback highlights a critical safety issue with libc::fork() in multi-threaded contexts, recommends replacing deprecated ifconfig with ip, and suggests more robust error handling for external commands and JSON parsing.

Comment thread tests/test_cases/src/test_net/passt.rs Outdated
Comment thread tests/runner/src/main.rs Outdated
Comment thread tests/test_cases/src/rootfs.rs
Comment thread tests/test_cases/src/test_net/gvproxy.rs Outdated
Comment thread tests/test_cases/src/test_net/vmnet_helper.rs Outdated
Use `buildah unshare -- unshare --net` instead of
`unshare --user --map-root-user --net` to get proper
UIDs/GIDs inside the test namespace via /etc/subuid and /etc/subgid.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
mtjhrc added 12 commits April 21, 2026 17:10
Signed-off-by: Matej Hrica <mhrica@redhat.com>
Move TestOutcome from the runner into test_cases so individual tests
can return their own outcome from check(). The runner now uses the
returned value directly instead of relying solely on catch_unwind to
distinguish pass from fail.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
Add a Report variant to TestOutcome that carries a ReportImpl trait
object, allowing tests to produce structured output (text for the
terminal, GitHub-flavored markdown for CI summaries) instead of a
simple pass/fail.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
Register background process PIDs (gvproxy, vmnet-helper) for automatic
cleanup after each test. The runner sends SIGTERM, waits up to 5s, then
SIGKILLs any survivors.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
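The SIGTERM-then-SIGKILL strategy from this commit could be sketched like this. The real runner presumably tracks raw PIDs via libc/nix; this stdlib-only sketch keeps `Child` handles and shells out to the `kill` CLI for SIGTERM, which is an illustrative simplification:

```rust
use std::process::{Child, Command};
use std::thread::sleep;
use std::time::{Duration, Instant};

// Sketch: terminate registered background processes gracefully, then
// forcibly. Send SIGTERM to each, allow a grace period (5 s in the PR),
// and SIGKILL anything still alive at the deadline.
fn cleanup(children: &mut [Child], grace: Duration) {
    // Ask politely first.
    for child in children.iter() {
        let _ = Command::new("kill")
            .args(["-TERM", &child.id().to_string()])
            .status();
    }
    let deadline = Instant::now() + grace;
    for child in children.iter_mut() {
        loop {
            match child.try_wait() {
                Ok(Some(_)) => break, // exited and reaped
                Ok(None) if Instant::now() >= deadline => {
                    let _ = child.kill(); // sends SIGKILL on Unix
                    let _ = child.wait(); // reap, leave no zombie
                    break;
                }
                Ok(None) => sleep(Duration::from_millis(50)),
                Err(_) => break,
            }
        }
    }
}

fn main() {
    let child = Command::new("sleep").arg("60").spawn().expect("spawn sleep");
    let mut children = vec![child];
    cleanup(&mut children, Duration::from_secs(5));
    println!("background processes cleaned up");
}
```

Reaping via `try_wait`/`wait` is important: killing without waiting would leave zombies that still look alive to existence checks.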
Each test has a configurable timeout (default 15s). If the child process
doesn't exit within the deadline, the runner kills it, dumps any captured
stdout, cleans up registered PIDs, and reports FAIL.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
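The per-test timeout above can be sketched as a polling wait. The 15 s default and the kill-then-FAIL behavior come from the commit message; the `try_wait` polling approach and the function name are assumptions:

```rust
use std::process::{Child, Command, ExitStatus};
use std::thread::sleep;
use std::time::{Duration, Instant};

// Sketch: poll the test subprocess until a deadline. On timeout, kill
// the child, reap it, and return None so the caller can report FAIL.
fn wait_with_timeout(child: &mut Child, timeout: Duration) -> Option<ExitStatus> {
    let deadline = Instant::now() + timeout;
    loop {
        if let Ok(Some(status)) = child.try_wait() {
            return Some(status); // test finished in time
        }
        if Instant::now() >= deadline {
            let _ = child.kill();
            let _ = child.wait(); // reap so no zombie is left behind
            return None;
        }
        sleep(Duration::from_millis(50));
    }
}

fn main() {
    let mut slow = Command::new("sleep").arg("30").spawn().expect("spawn");
    match wait_with_timeout(&mut slow, Duration::from_millis(200)) {
        Some(status) => println!("exited: {status}"),
        None => println!("FAIL: timed out"),
    }
}
```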
Write test stdout directly to stdout.txt in the test artifacts
directory instead of buffering in memory. Read it back for check().
This ensures raw output (e.g. iperf3 JSON) is always available in
artifacts, and also shows where the test got stuck if it times out.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
Display the full error message in a code block within the test's
details section, separated from the log output by a horizontal rule.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
Add tests for passt, tap, gvproxy, and vmnet-helper using
guest DHCP setup across the supported network backends.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
Add parametrized performance tests for each virtio-net backend
(passt, tap, gvproxy, vmnet-helper) in both upload and download
directions. Each test starts an iperf3 server on the host, runs
the iperf3 client inside a Fedora-based guest VM, and reports
throughput results as structured text/markdown via the Report
outcome.

Tests require IPERF_DURATION to be set at compile time and use a
podman-built rootfs with iperf3 pre-installed. They are skipped
when prerequisites are unavailable.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
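Extracting the throughput that these tests report might look like the sketch below. The field names (`end` / `sum_received` / `bits_per_second`) follow iperf3's `--json` output schema; the key-scanning approach is a stdlib-only simplification, and a real implementation would more likely use a JSON crate such as serde_json:

```rust
// Sketch: pull the received throughput out of `iperf3 --json` output
// without a JSON parser, by locating the last "bits_per_second" key
// (in iperf3 output, the summary values come after the per-interval ones).
fn last_bits_per_second(json: &str) -> Option<f64> {
    json.rmatch_indices("\"bits_per_second\"")
        .next() // rmatch_indices yields matches back-to-front
        .and_then(|(idx, key)| {
            let rest = &json[idx + key.len()..];
            let value = rest.trim_start().strip_prefix(':')?.trim_start();
            // Take the longest prefix that looks like a number literal.
            let end = value
                .find(|c: char| {
                    !(c.is_ascii_digit() || c == '.' || c == 'e' || c == '+' || c == '-')
                })
                .unwrap_or(value.len());
            value[..end].parse().ok()
        })
}

fn main() {
    // Heavily truncated stand-in for real iperf3 JSON output.
    let sample = r#"{"end":{"sum_sent":{"bits_per_second":9.41e9},
                     "sum_received":{"bits_per_second":9.39e9}}}"#;
    if let Some(bps) = last_bits_per_second(sample) {
        println!("throughput: {:.2} Gbit/s", bps / 1e9);
    }
}
```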
- Install buildah for namespace isolation in tests
- Build passt from source (Ubuntu 24.04 apt version is too old)
- Install dnsmasq and iperf3 for tap and perf tests

Signed-off-by: Matej Hrica <mhrica@redhat.com>
Rootfs directories contain files with mapped UIDs that the runner
can't read, breaking the artifact zip upload.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
Build with NET=1 and run network/iperf3 tests in CI.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
@slp slp added 2.0 and removed 1.x labels Apr 22, 2026
@slp slp merged commit 48ae6f7 into containers:main Apr 24, 2026
11 checks passed