
Add coverage tests and Python 3.12 SSL compatibility fixes#1457

Open
theanant404 wants to merge 4 commits into OWASP:master from theanant404:master

Conversation

@theanant404


Proposed change


This pull request introduces several improvements and additions, primarily focusing on enhancing test coverage for core components, updating SSL/TLS handling for sockets, and making a minor update to the HTML report template. The most significant changes are the addition of comprehensive unit and integration tests for the TemplateLoader and base engine logic, as well as modernizing SSL context usage for socket connections.

Testing improvements

  • Added comprehensive tests for TemplateLoader covering initialization, parsing, formatting, loading, and integration scenarios in tests/core/lib/test_base_extended.py. These tests ensure correct handling of YAML templates and input substitution.
  • Introduced new tests for the base engine's content filtering and condition processing logic in tests/core/lib/test_base.py, including mocking of dependencies and verification of side effects.
  • Added tests for FTP and FTPS engine and library classes in tests/core/lib/test_ftp_ftps.py, verifying correct inheritance, method presence, and callable attributes.

Security and compatibility updates

  • Updated SSL/TLS socket wrapping in both nettacker/core/lib/socket.py and nettacker/core/lib/ssl.py to use ssl.create_default_context() and context.wrap_socket(..., server_hostname=host), improving security and compatibility with modern Python best practices. [1] [2]
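The context-based pattern described above can be sketched as follows. The helper names (`make_scan_context`, `wrap_for_scan`) are hypothetical, not Nettacker's actual functions; the sketch only illustrates the `ssl.create_default_context()` / `wrap_socket(..., server_hostname=host)` idiom with verification relaxed, as the review notes the PR disables hostname/certificate checks for scanning:

```python
import socket
import ssl


def make_scan_context() -> ssl.SSLContext:
    """Build a TLS context suitable for a scanner.

    ssl.wrap_socket() was removed in Python 3.12, so an explicit
    SSLContext is required. Verification is relaxed because a scanner
    must still complete handshakes against self-signed/expired certs.
    """
    context = ssl.create_default_context()
    context.check_hostname = False       # must be disabled before CERT_NONE
    context.verify_mode = ssl.CERT_NONE  # accept untrusted certificates
    return context


def wrap_for_scan(sock: socket.socket, host: str) -> ssl.SSLSocket:
    """Wrap an already-connected TCP socket, sending SNI for `host`."""
    return make_scan_context().wrap_socket(sock, server_hostname=host)
```

Note that `check_hostname` must be set to `False` before `verify_mode = ssl.CERT_NONE`, otherwise `SSLContext` raises a `ValueError`.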

Report template update

  • Added a new HTML snippet to report.html for displaying scan metadata, a placeholder for graph HTML/CSS, and a script tag for JavaScript, likely for improved reporting or visualization.
Type of change
  • New core framework functionality
  • Bugfix (non-breaking change which fixes an issue)
  • Code refactoring without any functionality changes
  • New or existing module/payload change
  • Documentation/localization improvement
  • Test coverage improvement
  • Dependency upgrade
  • Other improvement (best practice, cleanup, optimization, etc)

Checklist

  • I've followed the contributing guidelines
  • I have digitally signed all my commits in this PR
  • I've run make pre-commit and confirm it didn't generate any warnings/changes
  • I've run make test, I confirm all tests passed locally
  • I've added/updated any relevant documentation in the docs/ folder
  • I've linked this PR with an open issue
  • I've tested and verified that my code works as intended and resolves the issue as described
  • I have attached screenshots demonstrating my code works as intended
  • I've checked all other open PRs to avoid submitting duplicate work
  • I confirm that the code and comments in this PR are not direct unreviewed outputs of AI
  • I confirm that I am the Sole Responsible Author for every line of code, comment, and design decision

Copilot AI review requested due to automatic review settings March 27, 2026 18:39

coderabbitai bot commented Mar 27, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 2f6708f9-befe-4fd7-bc5c-6252ad3e1995

📥 Commits

Reviewing files that changed from the base of the PR and between d99aba9 and e45fcbb.

📒 Files selected for processing (1)
  • nettacker/core/lib/socket.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • nettacker/core/lib/socket.py

Summary by CodeRabbit

  • Bug Fixes

    • Improved TLS behavior with SNI-aware connections and robust fallback on handshake failures
    • Enhanced graph visualization resilience when event data is incomplete
  • New Features

    • Added an HTML report template for structured scan output
  • Tests

    • Expanded test coverage across protocol libraries, core app, templating, utilities, logging, and report generation

Walkthrough

SSL socket creation changed to use explicit SSLContext wrapping; extensive unit tests were added across many core modules; the D3 tree graph engine now reads event fields with .get() defaults; a new minimal report.html template was added.

Changes

  • SSL / Socket (nettacker/core/lib/socket.py, nettacker/core/lib/ssl.py, tests/core/lib/test_socket.py, tests/core/lib/test_ssl.py): Replaced ssl.wrap_socket() with context-based ssl.create_default_context() / context.wrap_socket(..., server_hostname=host) and disabled hostname/certificate checks. Tests were updated to mock the context-wrap behavior, and fallback/wrap-failure coverage was added. Review the SSL handshake, SNI use, and security flags.
  • Test Additions — Protocols & Engines (tests/core/lib/test_ftp_ftps.py, tests/core/lib/test_pop3.py, tests/core/lib/test_smtp.py, tests/core/lib/test_telnet.py, tests/core/lib/test_ssh.py, tests/core/lib/test_protocols_comprehensive.py): Added smoke tests asserting engine-to-library wiring, client attributes, and callable brute_force entries across many protocols.
  • Test Additions — Core App, Arg Parsing, Templates (tests/core/test_app.py, tests/core/test_app_extended.py, tests/core/test_app_more.py, tests/core/test_arg_parser.py, tests/core/test_arg_parser_extended.py, tests/core/test_template.py, tests/core/lib/test_base.py, tests/core/lib/test_base_extended.py): New tests for Nettacker app control flow, target expansion, start/run behavior, ArgParser coercions/validation (ports, schemas, modules_extra_args), TemplateLoader parsing/formatting/loading, and BaseEngine condition branches.
  • Test Additions — HTTP/Logger/Utilities (tests/core/lib/test_http_and_logger_extended.py, tests/core/test_socks_proxy.py, tests/core/utils/test_common_additional.py, tests/core/utils/test_common_extended.py, tests/core/utils/test_common_more.py, tests/test_logger.py, tests/test_main.py): Expanded coverage for HttpEngine, Logger (ANSI codes, modes, output behavior), SOCKS proxy setup, and many common utility functions (headers, path sanitization, token generation, i18n helpers).
  • Graph Engine & Tests (nettacker/lib/graph/d3_tree_v1/engine.py, tests/lib/graph/d3_tree_v1/test_engine.py): Switched event field access to event.get(..., <default>) to avoid KeyError; tests added for HTML/JS escaping, structure, and missing-field behavior.
  • Report Template (report.html): Added a minimal HTML scaffold with placeholders for graph HTML/CSS/JS and metadata placeholders used by reporting.
  • Compare Report Test (tests/lib/test_compare_report.py): Added a minimal test asserting build_report is callable.
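The defensive `event.get(...)` pattern noted for the graph engine can be illustrated with a small stand-in. The helper name and default values here are hypothetical; only the event keys (target, module_name, port, event) come from this review:

```python
def node_from_event(event: dict) -> dict:
    """Build a graph-node dict from a scan event without risking KeyError.

    Hypothetical helper mirroring the review's description: switching
    from event["key"] to event.get("key", <default>) lets incomplete
    events render with placeholder values instead of crashing.
    """
    return {
        "name": event.get("target", "unknown"),
        "module": event.get("module_name", "unknown"),
        "port": event.get("port", ""),
        "event": event.get("event", ""),
    }
```

For example, `node_from_event({"target": "127.0.0.1"})` fills the missing module, port, and event fields with defaults rather than raising.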

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested labels

gsoc

Suggested reviewers

  • arkid15r
  • securestep9
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 51.56%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Title check ✅ Passed: the title 'Add coverage tests and Python 3.12 SSL compatibility fixes' accurately summarizes the main changes: comprehensive test coverage additions and SSL/TLS modernization for Python 3.12 compatibility.
  • Description check ✅ Passed: the description is well-structured, covering testing improvements, security/compatibility updates, and report template changes with specific file references and context linking.



@coderabbitai bot left a comment

Actionable comments posted: 12

🧹 Nitpick comments (17)
tests/lib/test_graph_d3_v1.py (2)

11-14: Test is brittle—depends on hardcoded implementation string.

The test asserts the presence of "Starting attack", which is a hardcoded string in the implementation (engine.py:42). This couples the test to an internal detail rather than testing behavior. If the message changes (e.g., for internationalization), the test breaks unnecessarily.

🔄 Test structure instead of hardcoded strings

Consider testing the actual behavior and structure:

 def test_d3_tree_v1_start_empty():
     result = d3_v1.start([])
-    assert "Starting attack" in result
     assert isinstance(result, str)
+    # Verify it's valid HTML
+    assert result.startswith("<!DOCTYPE html>") or "<html" in result
+    # Verify it contains the expected d3.js structure
+    assert "d3.tree()" in result or "d3.hierarchy" in result
+    # Verify it has a root node with no children
+    assert '"children": []' in result or '"children":[]' in result

This tests the structure and format rather than specific internal messages.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/lib/test_graph_d3_v1.py` around lines 11 - 14, The test
test_d3_tree_v1_start_empty is brittle because it asserts a hardcoded
implementation string; update it to verify behavior/structure instead of exact
text by calling d3_v1.start([]) and asserting the result is a non-empty string
(e.g., assert isinstance(result, str) and result.strip()), or check a stable
pattern such as result.startswith or contains a generic keyword like "attack" if
that is part of the public contract; modify the assertion(s) in
test_d3_tree_v1_start_empty to use d3_v1.start and these more resilient checks
rather than the literal "Starting attack".

17-29: Strengthen assertions to verify proper data structure.

The test only checks if strings appear somewhere in the output, which doesn't verify that the event data is correctly formatted or placed in the proper structure. For instance, the data could appear in an HTML comment or error message and the test would still pass.

🔍 Verify data structure and all event fields
 def test_d3_tree_v1_start_with_events():
     events = [
         {
             "target": "127.0.0.1",
             "module_name": "port_scan",
             "port": 80,
             "event": "port_open",
         }
     ]
     result = d3_v1.start(events)
+    assert isinstance(result, str)
+    
+    # Verify all event fields are included
     assert "127.0.0.1" in result
     assert "port_scan" in result
-    assert isinstance(result, str)
+    assert "80" in result or "port: 80" in result
+    assert "port_open" in result
+    
+    # Verify HTML structure
+    assert "<html" in result.lower()
+    assert "</html>" in result.lower()
+    
+    # Verify data is in JSON structure (not just loose text)
+    # The d3.tree expects hierarchical data
+    assert '"name"' in result or "'name'" in result
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/lib/test_graph_d3_v1.py` around lines 17 - 29, The test currently only
checks substrings; instead locate and extract the JSON/event payload emitted by
d3_v1.start(events) and assert the actual event structure and values: parse the
JSON from the returned string (or otherwise extract the data structure emitted
by start), then assert the parsed object is a list/dict as expected and contains
an event with keys "target", "module_name", "port", and "event" with the values 
"127.0.0.1", "port_scan", 80, and "port_open" respectively; update
test_d3_tree_v1_start_with_events to parse the returned string, validate the
JSON structure and types, and then use explicit equality/assertions rather than
substring checks on d3_v1.start.
tests/core/test_socks_proxy.py (1)

14-18: Strengthen proxy-path assertions to verify behavior, not only shape.

Right now this can still pass if set_socks_proxy() returns any 2-tuple without exercising proxy setup. Consider asserting mock interactions (e.g., mock_socksocket.assert_called() and mock_getaddrinfo.assert_called()).
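A minimal sketch of asserting mock interactions rather than only return shape. The `configure_proxy` function and its collaborators below are stand-ins, not Nettacker's actual `set_socks_proxy`; the point is only the assertion style:

```python
from unittest.mock import MagicMock


def configure_proxy(socket_factory, resolver, proxy_url):
    """Stand-in for a set_socks_proxy-style function under test."""
    sock = socket_factory()
    addr = resolver(proxy_url)
    return sock, addr


def test_configure_proxy_exercises_collaborators():
    mock_socket = MagicMock(name="socksocket")
    mock_resolve = MagicMock(name="getaddrinfo", return_value=("127.0.0.1", 1080))

    result = configure_proxy(mock_socket, mock_resolve, "socks5://localhost:1080")

    # A shape check alone would pass for any 2-tuple; the interaction
    # checks below prove the proxy-setup path actually ran.
    assert len(result) == 2
    mock_socket.assert_called_once()
    mock_resolve.assert_called_once_with("socks5://localhost:1080")
```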

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/test_socks_proxy.py` around lines 14 - 18, The test currently only
checks the shape of set_socks_proxy's return value; update it to assert actual
proxy setup interactions: after calling
set_socks_proxy("socks5://localhost:1080") verify
mock_socksocket.assert_called() and mock_getaddrinfo.assert_called() (or
assert_called_with(...) if specific args are known) and optionally assert that
the returned tuple values match expected socket/addr info returned by those
mocks; reference the set_socks_proxy call and the mocks mock_socksocket and
mock_getaddrinfo when adding these assertions.
tests/lib/test_compare_report.py (1)

1-1: @theanant404 please ensure all commits are signed before merge.

Unsigned commits fail pre-merge checks in this repo. Use git commit -S (or configure SSH signing) and re-push any unsigned commits.

Based on learnings: "When reviewing any pull request in the OWASP/Nettacker repository, always remind the PR author to use signed commits and explicitly state that the PR fails pre-merge checks if any commits are unsigned."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/lib/test_compare_report.py` at line 1, The commits for this PR are
unsigned and will fail the repository's pre-merge checks; sign your commits
(e.g., use git commit -S or configure GPG/SSH commit signing) and re-push them —
if you need to sign existing commits, perform an interactive rebase to
re-commit/sign or use git filter-branch/rebase --exec to sign then force-push
the updated history so tests in files like tests/lib/test_compare_report.py with
imports such as build_report pass the merge policy.
tests/core/test_app_more.py (1)

8-10: Consider asserting an observable outcome from print_logo().

This currently only verifies “no exception.” Adding at least one assertion on logging/output interaction would make regressions easier to catch.
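The suggestion above can be sketched with a mock logger. `print_logo` here is a local stand-in (the real method lives on `Nettacker`, and patching `nettacker.core.app.log` is the review's assumption, as is the banner text):

```python
from unittest.mock import MagicMock


def print_logo(log):
    """Stand-in for Nettacker.print_logo(): emits a banner via a logger."""
    log.info("OWASP Nettacker")  # hypothetical banner text


def test_print_logo_emits_banner():
    mock_log = MagicMock()
    print_logo(mock_log)
    # Stronger than "no exception was raised": the banner was actually logged.
    mock_log.info.assert_called_once()
    assert "Nettacker" in mock_log.info.call_args[0][0]
```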

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/test_app_more.py` around lines 8 - 10, The test only ensures
Nettacker.print_logo() doesn't raise but doesn't assert visible behavior; update
test_print_logo to assert an observable logging/output interaction by capturing
the patched logger (patch("nettacker.core.app.log") -> mock_log) and then call
Nettacker.print_logo() and assert that mock_log.<appropriate_method> (e.g., info
or debug) was called with an expected message or substring (using
assert_called_once_with or assert_any_call with a string containing the logo
text), so the test verifies the function produced the expected log/output.
tests/core/lib/test_base.py (1)

6-10: Make the truncation assertion a bit more specific.

assert result != content can pass for unrelated changes. Add one explicit truncation signal check.

🧪 Minimal assertion hardening
 def test_filter_large_content_truncates():
     engine = BaseEngine()
     content = "abcdefghij klm"
     result = engine.filter_large_content(content, filter_rate=10)
     assert result != content
+    assert "klm" not in result
+    assert result.startswith("abcdefghij")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/lib/test_base.py` around lines 6 - 10, The test currently only
checks that result != content which is too weak; update
test_filter_large_content_truncates to assert a clear truncation signal from
BaseEngine.filter_large_content by checking that the returned string is shorter
than the input and contains the expected truncation indicator (e.g., "..." or a
configured marker) or that result.endswith("..."), and keep the existing
filter_rate usage to trigger truncation; locate the function call to
BaseEngine.filter_large_content in the test and add these explicit assertions.
tests/core/lib/test_base_extended.py (1)

125-155: Strengthen open() tests by asserting the exact file path passed to open().

Right now these tests mostly assert type/name and can miss regressions in module-name parsing (action/library split).

♻️ Suggested test hardening
 def test_open_extracts_module_name_correctly(self):
     """Test that open correctly parses module name."""
     mock_yaml_content = "test: data\n"
-    
-    with patch("nettacker.core.template.Config.path.modules_dir") as mock_modules_dir:
-        mock_dir = MagicMock()
-        mock_modules_dir.__truediv__ = MagicMock(return_value=mock_dir)
-        
-        with patch("builtins.open", mock_open(read_data=mock_yaml_content)):
-            loader = TemplateLoader("port_scan_scan")
-            # This should split "port_scan_scan" into action="scan", library="port_scan"
-            loader.open()
-            # Verify the correct path construction was attempted
-            assert loader.name == "port_scan_scan"
+    modules_dir = Path("/tmp/modules")
+    expected_path = modules_dir / "scan" / "port_scan.yaml"
+
+    with patch("nettacker.core.template.Config.path.modules_dir", modules_dir):
+        with patch("builtins.open", mock_open(read_data=mock_yaml_content)) as mocked_open:
+            loader = TemplateLoader("port_scan_scan")
+            result = loader.open()
+            assert result == mock_yaml_content
+            mocked_open.assert_called_once_with(expected_path)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/lib/test_base_extended.py` around lines 125 - 155, Update the
tests to assert the exact file path passed to open() by configuring the mocked
Config.path.modules_dir and its __truediv__ chain to return a predictable
Path-like object and then checking that the patched builtins.open was called
with the expected file path string; specifically, in tests referencing
TemplateLoader.open and loader.name, construct the expected path that reflects
how open() builds it from Config.path.modules_dir and the parsed module parts
(action/library split) and assert open was called with that exact path (and
correct mode), so regressions in TemplateLoader.open’s module-name parsing will
be caught.
tests/core/lib/test_pop3.py (1)

1-1: @theanant404 Please confirm all commits are GPG/SSH-signed before merge.

Unsigned commits fail pre-merge checks in this repo. If needed:

  • git commit --amend -S
  • git push --force-with-lease

Based on learnings: "When reviewing any pull request in the OWASP/Nettacker repository, always remind the PR author to use signed commits and explicitly state that the PR fails pre-merge checks if any commits are unsigned."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/lib/test_pop3.py` at line 1, All commits in this PR must be
GPG/SSH-signed because unsigned commits fail the repository's pre-merge checks;
sign the latest commit with git commit --amend -S (or sign all commits via an
interactive rebase and amending each commit), then push the signed history with
git push --force-with-lease so CI will accept the PR; confirm the PR shows
signed commits before merging.
tests/core/utils/test_common_more.py (2)

28-47: Consider using monkeypatch.setattr instead of manual sys.argv manipulation.

While the try/finally pattern works, pytest's monkeypatch fixture is safer and more idiomatic for modifying sys.argv in tests.

♻️ Alternative using monkeypatch
-def test_find_args_value_exists():
-    import sys
-    old_argv = sys.argv
-    try:
-        sys.argv = ["prog", "-t", "example.com"]
-        result = find_args_value("-t")
-        assert result == "example.com"
-    finally:
-        sys.argv = old_argv
+def test_find_args_value_exists(monkeypatch):
+    import sys
+    monkeypatch.setattr(sys, "argv", ["prog", "-t", "example.com"])
+    result = find_args_value("-t")
+    assert result == "example.com"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/utils/test_common_more.py` around lines 28 - 47, Replace manual
sys.argv manipulation in tests test_find_args_value_exists and
test_find_args_value_missing with pytest's monkeypatch fixture: use
monkeypatch.setattr to set sys.argv to the desired list before calling
find_args_value, and rely on monkeypatch to restore argv afterward; update the
test signatures to accept the monkeypatch parameter and remove the try/finally
blocks so the tests call find_args_value("-t") and find_args_value("-x")
respectively while using monkeypatch.setattr(sys, "argv", [...]).

1-1: Remove unused import.

The math module is imported but never used in this test file.

🧹 Suggested fix
-import math
-
 from nettacker.core.utils.common import (
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/utils/test_common_more.py` at line 1, Remove the unused import of
the math module from the top of the test file by deleting the "import math"
statement; locate the import in tests/core/utils/test_common_more.py (the "math"
symbol) and remove it so the file no longer imports an unused module.
tests/core/utils/test_common_extended.py (3)

96-102: Test assertion is trivially true.

assert result is not None will pass for any return value. Consider asserting the expected behavior based on the function's contract.
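To see why `is not None` is vacuous, consider a toy stand-in. The real `reverse_and_regex_condition` signature and semantics may differ; this only demonstrates weak versus specific assertions:

```python
def reverse_and_regex_condition(regex_result, reverse):
    """Toy stand-in: invert a regex-match result when `reverse` is set."""
    return (not regex_result) if reverse else bool(regex_result)


result = reverse_and_regex_condition(True, True)
assert result is not None  # vacuous: passes for True, False, 0, "", [] ...
assert result is False     # specific: pins the reverse semantics
```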

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/utils/test_common_extended.py` around lines 96 - 102, The test
TestReverseAndRegexCondition.test_reverse_true_regex_true currently uses a
trivial assertion (assert result is not None); change it to assert the actual
expected output of reverse_and_regex_condition(True, True) per the function
contract (e.g., assert result is True or assert result ==
<expected_value/structure>). Locate the test method in
TestReverseAndRegexCondition and replace the non-specific assertion with one
that checks the exact return value or behavior of reverse_and_regex_condition.

202-226: Test assertions are ineffective.

Using or-conditions like assert result == "en" or result is None lets the assertion pass whether or not the function returns the expected value, so it can never catch a regression. Either fix the expected values or verify the actual function behavior.

♻️ Suggested fix
     def test_find_args_value_exists(self):
         """Test finding existing argument."""
         with patch.object(sys, "argv", ["prog", "-L", "en"]):
             result = find_args_value("-L")
-            assert result == "en" or result is None
+            assert result == "en"
 
     def test_find_args_value_not_exists(self):
         """Test finding non-existent argument."""
         with patch.object(sys, "argv", ["prog"]):
             result = find_args_value("-L")
-            # Should return None or False
-            assert result is None or result is False
+            assert result is None
 
     def test_find_args_long_flag(self):
         """Test finding long flag argument."""
         with patch.object(sys, "argv", ["prog", "--language", "fa"]):
             result = find_args_value("--language")
-            assert result == "fa" or result is None
+            assert result == "fa"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/utils/test_common_extended.py` around lines 202 - 226, The test
assertions in tests like test_find_args_value_exists,
test_find_args_value_not_exists, test_find_args_long_flag and
test_find_args_value_last_in_argv are written with "or" clauses that make them
vacuously true; update them to assert the specific expected return from
find_args_value (e.g., assert find_args_value("-L") == "en" in
test_find_args_value_exists, assert find_args_value("-L") is None in
test_find_args_value_not_exists, assert find_args_value("--language") == "fa" in
test_find_args_long_flag, and assert find_args_value("-L") is None in
test_find_args_value_last_in_argv) so the tests actually verify the function
behavior rather than always passing.

26-35: Test assertion is too weak.

Asserting result is not None doesn't verify the function behaves correctly. Consider asserting the actual expected output value.

♻️ Suggested improvement
     def test_replace_dependent_response_with_data(self):
         """Test replacing response dependent keys."""
-        response_dependent = {"ip": "192.168.1.1"}
-        log = "Check response"
+        response_dependent = {"ip": "192.168.1.1"}
+        log = "Target IP: response_dependent['ip']"
         
         result = replace_dependent_response(log, response_dependent)
-        assert result is not None
+        assert "192.168.1.1" in result or result == log  # Verify actual replacement
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/utils/test_common_extended.py` around lines 26 - 35, The test
TestReplaceDependentResponse::test_replace_dependent_response_with_data
currently only asserts result is not None; update it to assert the concrete
expected output from replace_dependent_response given log="Check response" and
response_dependent={"ip":"192.168.1.1"} (e.g., assert the returned string/value
equals the known expected string or structure), using the function name
replace_dependent_response and the variables response_dependent and log to
locate the call and replace the weak assertion with a precise equality
assertion.
tests/core/lib/test_http_and_logger_extended.py (2)

195-214: Duplicate tests for TerminalCodes.

These tests (test_all_color_codes_exist, test_color_code_values_are_strings) are duplicated in tests/test_logger.py (lines 28-32). Consider consolidating logger-related tests in a single location.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/lib/test_http_and_logger_extended.py` around lines 195 - 214, The
TestLoggerColorCodes class duplicates existing logger tests
(test_all_color_codes_exist and test_color_code_values_are_strings) already
present in tests/test_logger.py; remove or consolidate these duplicated tests by
deleting TestLoggerColorCodes (or moving its two methods) from
tests/core/lib/test_http_and_logger_extended.py and keeping a single canonical
copy in tests/test_logger.py, ensuring the TerminalCodes enum and the test names
(test_all_color_codes_exist, test_color_code_values_are_strings) remain covered
and that no other tests reference the removed class before running the test
suite.

13-22: Consider expanding HttpEngine tests beyond existence checks.

These tests only verify that HttpEngine exists and has a library attribute. Consider adding tests for actual functionality or removing these if they don't provide meaningful coverage.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/lib/test_http_and_logger_extended.py` around lines 13 - 22, These
tests only assert presence of HttpEngine and its library attribute; update them
to provide meaningful coverage by either removing them or replacing them with
functional checks: instantiate HttpEngine (e.g., call HttpEngine() or
HttpEngine.create() depending on constructor), assert that .library is not None
and that it exposes expected callable behavior (check for methods like
"request", "get" or "send" on HttpEngine.library), and add a simple behavioral
test such as performing a mocked request/response flow or verifying that an
instance method (e.g., a send_request/handle_request method if present) returns
the expected type/result; keep tests named test_http_engine_exists and
test_http_engine_has_library but expand their assertions to cover instantiation
and basic method behavior.
tests/core/lib/test_protocols_comprehensive.py (1)

258-308: Consolidate duplicated protocol wiring checks using parametrization.

The engine/library and client-attribute assertions are repeated in multiple classes. A single parametrized test set would keep intent while reducing maintenance overhead.

Refactor sketch
+@pytest.mark.parametrize(
+    "engine_cls,library_cls",
+    [
+        (SmtpEngine, SmtpLibrary),
+        (SmtpsEngine, SmtpsLibrary),
+        (TelnetEngine, TelnetLibrary),
+        (FtpEngine, FtpLibrary),
+        (Pop3Engine, Pop3Library),
+        (SshEngine, SshLibrary),
+    ],
+)
+def test_engine_library_wiring(engine_cls, library_cls):
+    assert engine_cls.library == library_cls
+
+
+@pytest.mark.parametrize(
+    "library_cls",
+    [SmtpLibrary, SmtpsLibrary, TelnetLibrary, FtpLibrary, Pop3Library, SshLibrary],
+)
+def test_library_has_client(library_cls):
+    assert hasattr(library_cls, "client")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/lib/test_protocols_comprehensive.py` around lines 258 - 308,
Replace the repeated class-based assertions with two pytest.parametrized tests:
one that takes pairs of (engine_class, expected_library) and asserts
engine_class.library == expected_library (replacing TestProtocolEngines and
references to SmtpEngine, TelnetEngine, FtpEngine, Pop3Engine, SshEngine and
their corresponding *Library names), and another that parametrizes the library
classes (SmtpLibrary, SmtpsLibrary, TelnetLibrary, FtpLibrary, Pop3Library,
SshLibrary) and asserts hasattr(library, "client") and library.client is not
None when applicable (replacing TestProtocolClientAttributes); implement these
as standalone functions using `@pytest.mark.parametrize` with clear tuple lists of
the classes to keep tests concise and maintainable.
tests/core/test_arg_parser_extended.py (1)

1-3: @theanant404 please ensure all commits are signed before merge.

Unsigned commits should fail pre-merge checks in this repository; please re-sign and push if any commit is currently unsigned.

Based on learnings: "When reviewing any pull request in the OWASP/Nettacker repository, always remind the PR author to use signed commits and explicitly state that the PR fails pre-merge checks if any commits are unsigned."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/test_arg_parser_extended.py` around lines 1 - 3, Your branch
contains unsigned commits; re-sign every commit on this branch (the
tests/core/test_arg_parser_extended.py change is part of that branch) by
interactively rebasing and amending commits to include a GPG/SSH signature, then
force-push the updated branch so pre-merge checks pass; finally verify commit
signatures locally (e.g., via git signature verification) before notifying for
review.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@nettacker/core/lib/socket.py`:
- Around line 29-30: The code currently uses ssl.create_default_context() and
then context.wrap_socket(...), and on exception silently falls back to plain TCP
leaving ssl_flag=False; instead, modify the TLS handshake logic in
nettacker/core/lib/socket.py around context = ssl.create_default_context() and
context.wrap_socket(socket_connection, server_hostname=host) so that certificate
validation failures are treated as TLS-enabled: either (recommended) set
context.check_hostname = False and context.verify_mode = ssl.CERT_NONE before
wrapping to allow a TLS connection to succeed and set ssl_flag=True, or catch
the wrap_socket exception and explicitly mark ssl_flag=True while recording a
certificate validation error (e.g., a cert_issue flag) so services with bad
certs are not misclassified as non-SSL.
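The relaxed-validation context suggested above can be sketched as follows (the helper name and placement are assumptions; the actual change belongs in the TLS handshake path of nettacker/core/lib/socket.py):

```python
import ssl


def make_scanner_ssl_context():
    """Build a TLS context that still negotiates with hosts presenting
    self-signed or expired certificates (sketch of the suggestion above)."""
    context = ssl.create_default_context()
    # Order matters: check_hostname must be disabled before verify_mode
    # can be set to CERT_NONE, otherwise ssl raises ValueError.
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    return context
```

With this context, `context.wrap_socket(socket_connection, server_hostname=host)` completes the handshake against hosts with untrusted certificates, so `ssl_flag` can be set truthfully.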

In `@nettacker/core/lib/ssl.py`:
- Around line 120-121: The create_tcp_socket() function currently builds an SSL
context with ssl.create_default_context() which enforces certificate validation
and causes the function to fall back to plain TCP on self-signed/expired certs;
modify create_tcp_socket() to mirror is_weak_cipher_suite() by setting
context.check_hostname = False and context.verify_mode = ssl.CERT_NONE before
calling context.wrap_socket(...) so the scanner continues to perform
ssl_version_and_cipher_scan() and ssl_certificate_scan() against hosts with
untrusted certificates; keep existing exception handling in place but ensure the
adjusted context is used when attempting the SSL wrap to avoid silently skipping
scans.

In `@report.html`:
- Line 1: The report template uses placeholders <graph_html>, /*css*/, /*js*/,
and scan-id which the graph renderer in nettacker/lib/graph/d3_tree_v1/engine.py
does not replace; update either the template or the renderer so they match:
replace the template placeholders in report.html with the renderer's expected
tokens (__data_will_locate_here__, __title_to_replace__,
__description_to_replace__, __html_title_to_replace__), or modify engine.py to
also perform substitutions for <graph_html>, /*css*/, /*js*/, and scan-id;
ensure the symbols referenced (report.html placeholders and engine.py
replacement tokens) are aligned so no raw placeholders remain in output.
- Line 1: The HTML in report.html is malformed (unpaired <graph_html>,
mismatched <table> tags, missing doctype and proper structure) so replace the
single-line blob with a valid HTML skeleton: add <!DOCTYPE html> plus
<html><head>...</head><body>...</body></html>, ensure the table uses proper
<table><thead>/<tbody> and <tr>/<td> pairs instead of the stray <graph_html>,
convert <graph_html> to a semantic element (e.g., <div id="graph_html">) or
remove it, keep and place <div id="json_length"> and <p class="footer"> inside
the body, and ensure the <script> block is closed and placed before </body> so
all tags (table, div, p, script) are properly opened and closed.

In `@tests/core/lib/test_protocols_comprehensive.py`:
- Around line 221-232: The test test_load_message_init_farsi_with_fallback must
explicitly assert that the English fallback was exercised: after constructing
loader = load_message(), assert mock_load_yaml.call_count == 2 (or >=2 if other
calls expected) and verify the fallback result was used by checking either
loader.messages.get("key") == "en_value" or that
mock_load_yaml.call_args_list[1] shows the 'en' load; update the assertions to
reference test_load_message_init_farsi_with_fallback, load_message,
mock_load_yaml, and loader to make the multi-load behavior explicit.
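The multi-load assertion can be exercised against a toy model of the fallback (the loader below is illustrative and only assumes the semantics described above, not the real nettacker `load_message` API):

```python
from unittest.mock import MagicMock


def load_message(load_yaml, language="fa"):
    """Toy fallback loader: fetch the requested language, then fill any
    missing keys from the English messages (assumed semantics)."""
    messages = dict(load_yaml(language))
    if language != "en":
        for key, value in load_yaml("en").items():
            messages.setdefault(key, value)
    return messages


# First call returns an incomplete Farsi file, second call the English fallback.
mock_load_yaml = MagicMock(side_effect=[{}, {"key": "en_value"}])
messages = load_message(mock_load_yaml)

# The assertions the review asks for: two loads happened, the second was
# for English, and the fallback value was actually used.
assert mock_load_yaml.call_count == 2
assert mock_load_yaml.call_args_list[1].args == ("en",)
assert messages["key"] == "en_value"
```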

In `@tests/core/test_app_extended.py`:
- Around line 65-69: The test named test_scan_target_increments_loop is
incomplete: it patches nettacker.core.app.die_failure but never exercises code
that would call die_failure, contains no assertions, and only patches
app_module.Module; either delete this test or implement it properly by invoking
the code path that triggers die_failure (e.g., call the function that runs the
scan using make_nettacker_with_options and simulate the failure) and add
assertions to verify behavior; reference the test name
test_scan_target_increments_loop, the patched symbol die_failure, the mocked
app_module.Module, and the helper make_nettacker_with_options when updating or
removing the test.
- Around line 31-41: The test test_expand_targets_single_targets masks failures
by catching all exceptions and only asserting a trivially non-null
app.arguments; remove the try/except and instead invoke
app.expand_targets("scan-1") directly (or assert it returns expected value) and
add meaningful assertions that verify behavior — for example assert that
expand_targets returns the expected target group list or that
app.arguments/target-related fields contain "1.1.1.1" after calling
expand_targets; use make_nettacker_with_options to set up inputs and assert on
the observable state of app (no broad except/pass).

In `@tests/core/test_arg_parser.py`:
- Around line 49-61: The base dict in the test (the variable named base)
contains a duplicated key "modules_extra_args" which causes the first value to
be shadowed; remove the redundant second "modules_extra_args" entry (or
consolidate values into a single key) so only one "modules_extra_args" remains
in the base dictionary used by the tests.
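The shadowing described here is silent in Python: a dict literal does not reject duplicated keys, it simply keeps the last value.

```python
# Duplicate keys in a dict literal do not raise; the last value wins,
# so the first "modules_extra_args" entry below is silently discarded.
base = {
    "modules_extra_args": {"first": 1},
    "modules_extra_args": {"second": 2},
}

assert base["modules_extra_args"] == {"second": 2}
assert len(base) == 1  # only one key survives
```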

In `@tests/core/test_template.py`:
- Around line 11-19: The test test_template_loader_open currently swallows
FileNotFoundError by calling pytest.skip; remove the try/except and make the
template resolution deterministic by providing a test fixture or monkeypatching
TemplateLoader to point to a known test template path (or injecting a temporary
template file) before calling loader = TemplateLoader("port_scan_scan") and
loader.open(); ensure the test asserts content is not None and is a str and does
not skip on FileNotFoundError so failures surface (refer to TemplateLoader,
loader.open, and the current pytest.skip usage to locate the code to change).

In `@tests/lib/test_graph_d3_v1.py`:
- Around line 1-29: The test module is not placed to mirror the package under
test (nettacker.lib.graph.d3_tree_v1.engine); move the test into a package
structure that mirrors the module (e.g., a test module named test_d3_tree_v1.py
under a graph test package or a more granular graph/d3_tree_v1/test_engine.py),
update the test module name to test_d3_tree_v1 or test_engine and keep the
import using "from nettacker.lib.graph.d3_tree_v1 import engine as d3_v1", and
ensure the target test package directories have __init__.py files so test
discovery finds the tests.
- Around line 1-29: Add more comprehensive tests for d3_v1.escape_for_html_js
and d3_v1.start: create parameterized cases for escape_for_html_js covering
empty string, strings without special chars, quotes, multiple occurrences of <,
>, & and Unicode to assert none of "<", ">", "&" remain unescaped; add tests for
start with multiple events (verify each target and module_name appear), events
missing optional fields (e.g., only "target") to ensure it still returns a
string and includes the target, and an XSS payload event (target
"<script>...</script>") asserting raw "<script>" is absent and the escaped
sequence (e.g., "\\u003Cscript\\u003E" or HTML entities) is present; reference
functions escape_for_html_js and start when adding these tests.
- Around line 4-8: The assertions in test_escape_for_html_js are too weak;
change them to assert the full transformed output of
d3_v1.escape_for_html_js("<tag>&entity</tag>") instead of using containment
checks. Replace the three "in" asserts with a single equality check comparing
the function result to the exact expected escaped string (the fully escaped form
where '<' becomes '\u003C', '>' '\u003E', and '&' '\u0026') so the test fails if
any original characters remain; update the test function test_escape_for_html_js
accordingly to use that exact expected value.

---

Nitpick comments:
In `@tests/core/lib/test_base_extended.py`:
- Around line 125-155: Update the tests to assert the exact file path passed to
open() by configuring the mocked Config.path.modules_dir and its __truediv__
chain to return a predictable Path-like object and then checking that the
patched builtins.open was called with the expected file path string;
specifically, in tests referencing TemplateLoader.open and loader.name,
construct the expected path that reflects how open() builds it from
Config.path.modules_dir and the parsed module parts (action/library split) and
assert open was called with that exact path (and correct mode), so regressions
in TemplateLoader.open’s module-name parsing will be caught.

In `@tests/core/lib/test_base.py`:
- Around line 6-10: The test currently only checks that result != content which
is too weak; update test_filter_large_content_truncates to assert a clear
truncation signal from BaseEngine.filter_large_content by checking that the
returned string is shorter than the input and contains the expected truncation
indicator (e.g., "..." or a configured marker) or that result.endswith("..."),
and keep the existing filter_rate usage to trigger truncation; locate the
function call to BaseEngine.filter_large_content in the test and add these
explicit assertions.
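A toy model of the behavior under test shows the kind of assertion that actually catches regressions (the signature and the "..." marker are assumptions about BaseEngine.filter_large_content, not its confirmed contract):

```python
def filter_large_content(content, filter_rate):
    """Toy stand-in: keep the first filter_rate characters and append a
    truncation marker (assumed to mirror BaseEngine.filter_large_content)."""
    if len(content) <= filter_rate:
        return content
    return content[:filter_rate] + "..."


result = filter_large_content("A" * 100, filter_rate=10)

# Explicit truncation signals, rather than just result != content:
assert len(result) < 100
assert result.endswith("...")
assert result.startswith("A" * 10)
```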

In `@tests/core/lib/test_http_and_logger_extended.py`:
- Around line 195-214: The TestLoggerColorCodes class duplicates existing logger
tests (test_all_color_codes_exist and test_color_code_values_are_strings)
already present in tests/test_logger.py; remove or consolidate these duplicated
tests by deleting TestLoggerColorCodes (or moving its two methods) from
tests/core/lib/test_http_and_logger_extended.py and keeping a single canonical
copy in tests/test_logger.py, ensuring the TerminalCodes enum and the test names
(test_all_color_codes_exist, test_color_code_values_are_strings) remain covered
and that no other tests reference the removed class before running the test
suite.
- Around line 13-22: These tests only assert presence of HttpEngine and its
library attribute; update them to provide meaningful coverage by either removing
them or replacing them with functional checks: instantiate HttpEngine (e.g.,
call HttpEngine() or HttpEngine.create() depending on constructor), assert that
.library is not None and that it exposes expected callable behavior (check for
methods like "request", "get" or "send" on HttpEngine.library), and add a simple
behavioral test such as performing a mocked request/response flow or verifying
that an instance method (e.g., a send_request/handle_request method if present)
returns the expected type/result; keep tests named test_http_engine_exists and
test_http_engine_has_library but expand their assertions to cover instantiation
and basic method behavior.

In `@tests/core/lib/test_pop3.py`:
- Line 1: All commits in this PR must be GPG/SSH-signed because unsigned commits
fail the repository's pre-merge checks; sign the latest commit with git commit
--amend -S (or sign all commits via an interactive rebase and amending each
commit), then push the signed history with git push --force-with-lease so CI
will accept the PR; confirm the PR shows signed commits before merging.

In `@tests/core/lib/test_protocols_comprehensive.py`:
- Around line 258-308: Replace the repeated class-based assertions with two
pytest.parametrized tests: one that takes pairs of (engine_class,
expected_library) and asserts engine_class.library == expected_library
(replacing TestProtocolEngines and references to SmtpEngine, TelnetEngine,
FtpEngine, Pop3Engine, SshEngine and their corresponding *Library names), and
another that parametrizes the library classes (SmtpLibrary, SmtpsLibrary,
TelnetLibrary, FtpLibrary, Pop3Library, SshLibrary) and asserts hasattr(library,
"client") and library.client is not None when applicable (replacing
TestProtocolClientAttributes); implement these as standalone functions using
`@pytest.mark.parametrize` with clear tuple lists of the classes to keep tests
concise and maintainable.
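The parametrized shape could look like this; the stub classes stand in for the real nettacker engine and library classes named above:

```python
import pytest


# Stubs standing in for the real engine/library classes under test.
class SmtpLibrary:
    client = object()


class FtpLibrary:
    client = object()


class SmtpEngine:
    library = SmtpLibrary


class FtpEngine:
    library = FtpLibrary


@pytest.mark.parametrize(
    "engine_class, expected_library",
    [(SmtpEngine, SmtpLibrary), (FtpEngine, FtpLibrary)],
)
def test_engine_uses_expected_library(engine_class, expected_library):
    assert engine_class.library is expected_library


@pytest.mark.parametrize("library_class", [SmtpLibrary, FtpLibrary])
def test_library_exposes_client(library_class):
    assert hasattr(library_class, "client")
    assert library_class.client is not None
```

Each class pair becomes one test case in the report, so a failing engine/library binding is pinpointed immediately.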

In `@tests/core/test_app_more.py`:
- Around line 8-10: The test only ensures Nettacker.print_logo() doesn't raise
but doesn't assert visible behavior; update test_print_logo to assert an
observable logging/output interaction by capturing the patched logger
(patch("nettacker.core.app.log") -> mock_log) and then call
Nettacker.print_logo() and assert that mock_log.<appropriate_method> (e.g., info
or debug) was called with an expected message or substring (using
assert_called_once_with or assert_any_call with a string containing the logo
text), so the test verifies the function produced the expected log/output.

In `@tests/core/test_arg_parser_extended.py`:
- Around line 1-3: Your branch contains unsigned commits; re-sign every commit
on this branch (the tests/core/test_arg_parser_extended.py change is part of
that branch) by interactively rebasing and amending commits to include a GPG/SSH
signature, then force-push the updated branch so pre-merge checks pass; finally
verify commit signatures locally (e.g., via git signature verification) before
notifying for review.

In `@tests/core/test_socks_proxy.py`:
- Around line 14-18: The test currently only checks the shape of
set_socks_proxy's return value; update it to assert actual proxy setup
interactions: after calling set_socks_proxy("socks5://localhost:1080") verify
mock_socksocket.assert_called() and mock_getaddrinfo.assert_called() (or
assert_called_with(...) if specific args are known) and optionally assert that
the returned tuple values match expected socket/addr info returned by those
mocks; reference the set_socks_proxy call and the mocks mock_socksocket and
mock_getaddrinfo when adding these assertions.

In `@tests/core/utils/test_common_extended.py`:
- Around line 96-102: The test
TestReverseAndRegexCondition.test_reverse_true_regex_true currently uses a
trivial assertion (assert result is not None); change it to assert the actual
expected output of reverse_and_regex_condition(True, True) per the function
contract (e.g., assert result is True or assert result ==
<expected_value/structure>). Locate the test method in
TestReverseAndRegexCondition and replace the non-specific assertion with one
that checks the exact return value or behavior of reverse_and_regex_condition.
- Around line 202-226: The test assertions in tests like
test_find_args_value_exists, test_find_args_value_not_exists,
test_find_args_long_flag and test_find_args_value_last_in_argv are written with
"or" clauses that make them vacuously true; update them to assert the specific
expected return from find_args_value (e.g., assert find_args_value("-L") == "en"
in test_find_args_value_exists, assert find_args_value("-L") is None in
test_find_args_value_not_exists, assert find_args_value("--language") == "fa" in
test_find_args_long_flag, and assert find_args_value("-L") is None in
test_find_args_value_last_in_argv) so the tests actually verify the function
behavior rather than always passing.
- Around line 26-35: The test
TestReplaceDependentResponse::test_replace_dependent_response_with_data
currently only asserts result is not None; update it to assert the concrete
expected output from replace_dependent_response given log="Check response" and
response_dependent={"ip":"192.168.1.1"} (e.g., assert the returned string/value
equals the known expected string or structure), using the function name
replace_dependent_response and the variables response_dependent and log to
locate the call and replace the weak assertion with a precise equality
assertion.
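Concretely, with a toy find_args_value (assumed to return the token following the flag, or None when the flag is absent or last), the tightened assertions read:

```python
def find_args_value(argv, flag):
    """Toy model: return the argv token after `flag`, or None when the
    flag is absent or appears last (assumed to match the real helper)."""
    if flag in argv:
        index = argv.index(flag)
        if index + 1 < len(argv):
            return argv[index + 1]
    return None


# Exact expected values instead of vacuous "or" clauses:
assert find_args_value(["nettacker", "-L", "en"], "-L") == "en"
assert find_args_value(["nettacker"], "-L") is None
assert find_args_value(["nettacker", "--language", "fa"], "--language") == "fa"
assert find_args_value(["nettacker", "-L"], "-L") is None
```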

In `@tests/core/utils/test_common_more.py`:
- Around line 28-47: Replace manual sys.argv manipulation in tests
test_find_args_value_exists and test_find_args_value_missing with pytest's
monkeypatch fixture: use monkeypatch.setattr to set sys.argv to the desired list
before calling find_args_value, and rely on monkeypatch to restore argv
afterward; update the test signatures to accept the monkeypatch parameter and
remove the try/finally blocks so the tests call find_args_value("-t") and
find_args_value("-x") respectively while using monkeypatch.setattr(sys, "argv",
[...]).
- Line 1: Remove the unused import of the math module from the top of the test
file by deleting the "import math" statement; locate the import in
tests/core/utils/test_common_more.py (the "math" symbol) and remove it so the
file no longer imports an unused module.

In `@tests/lib/test_compare_report.py`:
- Line 1: The commits for this PR are unsigned and will fail the repository's
pre-merge checks; sign your commits (e.g., use git commit -S or configure
GPG/SSH commit signing) and re-push them — if you need to sign existing commits,
perform an interactive rebase to re-commit/sign or use git filter-branch/rebase
--exec to sign then force-push the updated history so tests in files like
tests/lib/test_compare_report.py with imports such as build_report pass the
merge policy.

In `@tests/lib/test_graph_d3_v1.py`:
- Around line 11-14: The test test_d3_tree_v1_start_empty is brittle because it
asserts a hardcoded implementation string; update it to verify
behavior/structure instead of exact text by calling d3_v1.start([]) and
asserting the result is a non-empty string (e.g., assert isinstance(result, str)
and result.strip()), or check a stable pattern such as result.startswith or
contains a generic keyword like "attack" if that is part of the public contract;
modify the assertion(s) in test_d3_tree_v1_start_empty to use d3_v1.start and
these more resilient checks rather than the literal "Starting attack".
- Around line 17-29: The test currently only checks substrings; instead locate
and extract the JSON/event payload emitted by d3_v1.start(events) and assert the
actual event structure and values: parse the JSON from the returned string (or
otherwise extract the data structure emitted by start), then assert the parsed
object is a list/dict as expected and contains an event with keys "target",
"module_name", "port", and "event" with the values  "127.0.0.1", "port_scan",
80, and "port_open" respectively; update test_d3_tree_v1_start_with_events to
parse the returned string, validate the JSON structure and types, and then use
explicit equality/assertions rather than substring checks on d3_v1.start.
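Extracting and parsing the embedded payload instead of substring-matching might look like this; the `rendered` string below is a stand-in for what d3_v1.start(events) returns, since the exact template is not shown here:

```python
import json
import re

# Stand-in for the string d3_v1.start(events) would return.
rendered = (
    "<script>var events = "
    '[{"target": "127.0.0.1", "module_name": "port_scan", '
    '"port": 80, "event": "port_open"}];</script>'
)

# Pull the JSON array out of the rendered page and validate its structure.
match = re.search(r"\[.*\]", rendered, re.DOTALL)
assert match is not None
events = json.loads(match.group(0))

assert isinstance(events, list)
assert events[0]["target"] == "127.0.0.1"
assert events[0]["module_name"] == "port_scan"
assert events[0]["port"] == 80
assert events[0]["event"] == "port_open"
```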
ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: d34350aa-e640-4ea5-afe9-456c177244df

📥 Commits

Reviewing files that changed from the base of the PR and between 74d89e2 and 685f0b8.

📒 Files selected for processing (28)
  • nettacker/core/lib/socket.py
  • nettacker/core/lib/ssl.py
  • report.html
  • tests/core/lib/test_base.py
  • tests/core/lib/test_base_extended.py
  • tests/core/lib/test_ftp_ftps.py
  • tests/core/lib/test_http_and_logger_extended.py
  • tests/core/lib/test_pop3.py
  • tests/core/lib/test_protocols_comprehensive.py
  • tests/core/lib/test_smtp.py
  • tests/core/lib/test_socket.py
  • tests/core/lib/test_ssh.py
  • tests/core/lib/test_ssl.py
  • tests/core/lib/test_telnet.py
  • tests/core/test_app.py
  • tests/core/test_app_extended.py
  • tests/core/test_app_more.py
  • tests/core/test_arg_parser.py
  • tests/core/test_arg_parser_extended.py
  • tests/core/test_socks_proxy.py
  • tests/core/test_template.py
  • tests/core/utils/test_common_additional.py
  • tests/core/utils/test_common_extended.py
  • tests/core/utils/test_common_more.py
  • tests/lib/test_compare_report.py
  • tests/lib/test_graph_d3_v1.py
  • tests/test_logger.py
  • tests/test_main.py

Comment on lines +1 to +29
from nettacker.lib.graph.d3_tree_v1 import engine as d3_v1


def test_escape_for_html_js():
    escaped = d3_v1.escape_for_html_js("<tag>&entity</tag>")
    assert "\\u003C" in escaped
    assert "\\u003E" in escaped
    assert "\\u0026" in escaped


def test_d3_tree_v1_start_empty():
    result = d3_v1.start([])
    assert "Starting attack" in result
    assert isinstance(result, str)


def test_d3_tree_v1_start_with_events():
    events = [
        {
            "target": "127.0.0.1",
            "module_name": "port_scan",
            "port": 80,
            "event": "port_open",
        }
    ]
    result = d3_v1.start(events)
    assert "127.0.0.1" in result
    assert "port_scan" in result
    assert isinstance(result, str)

⚠️ Potential issue | 🟠 Major

Test file location doesn't mirror package structure.

The test file is located at tests/lib/test_graph_d3_v1.py, but the module under test is nettacker.lib.graph.d3_tree_v1.engine. The test should be located at tests/lib/graph/test_d3_tree_v1.py (or tests/lib/graph/d3_tree_v1/test_engine.py) to properly mirror the package layout.

As per coding guidelines: Tests must mirror the package layout with pattern tests/core/, tests/lib/, etc.

📁 Relocate the test file

Move the file from:

tests/lib/test_graph_d3_v1.py

To:

tests/lib/graph/test_d3_tree_v1.py

Or for a more granular structure:

tests/lib/graph/d3_tree_v1/test_engine.py
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/lib/test_graph_d3_v1.py` around lines 1 - 29, The test module is not
placed to mirror the package under test (nettacker.lib.graph.d3_tree_v1.engine);
move the test into a package structure that mirrors the module (e.g., a test
module named test_d3_tree_v1.py under a graph test package or a more granular
graph/d3_tree_v1/test_engine.py), update the test module name to test_d3_tree_v1
or test_engine and keep the import using "from nettacker.lib.graph.d3_tree_v1
import engine as d3_v1", and ensure the target test package directories have
__init__.py files so test discovery finds the tests.

🛠️ Refactor suggestion | 🟠 Major

Expand test coverage with edge cases and error scenarios.

Given that this PR aims to add "comprehensive tests" (per PR title and objectives), the current coverage is limited. The module lacks tests for edge cases, error handling, and various input scenarios.

🧪 Add comprehensive test cases

Consider adding tests for:

Edge cases for escape_for_html_js:

import pytest

@pytest.mark.parametrize("input_str,expected_chars", [
    ("", ""),  # Empty string
    ("no special chars", "no special chars"),  # No escaping needed
    ("'quotes\"", "'quotes\""),  # Quotes should not be escaped
    ("<script>alert('XSS')</script>", None),  # XSS payload
    ("Multiple <> && << >>", None),  # Multiple occurrences
    ("Unicode: 你好 <test>", None),  # Unicode with special chars
])
def test_escape_for_html_js_edge_cases(input_str, expected_chars):
    result = d3_v1.escape_for_html_js(input_str)
    # Verify no unescaped special characters
    assert "<" not in result
    assert ">" not in result
    assert "&" not in result

Multiple events and error handling:

def test_d3_tree_v1_start_multiple_events():
    """Test with multiple events to verify hierarchy."""
    events = [
        {"target": "127.0.0.1", "module_name": "port_scan", "port": 80},
        {"target": "192.168.1.1", "module_name": "vuln_scan", "port": 443},
    ]
    result = d3_v1.start(events)
    assert "127.0.0.1" in result
    assert "192.168.1.1" in result
    assert "port_scan" in result
    assert "vuln_scan" in result

def test_d3_tree_v1_start_with_missing_fields():
    """Test handling of events with missing fields."""
    events = [{"target": "127.0.0.1"}]  # Missing module_name, port, event
    result = d3_v1.start(events)
    assert isinstance(result, str)
    assert "127.0.0.1" in result

def test_d3_tree_v1_start_with_xss_payload():
    """Test that XSS payloads are properly escaped."""
    events = [
        {
            "target": "<script>alert('XSS')</script>",
            "module_name": "test",
        }
    ]
    result = d3_v1.start(events)
    # Verify XSS payload is escaped
    assert "<script>" not in result
    assert "\\u003Cscript\\u003E" in result or "&lt;script&gt;" in result
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/lib/test_graph_d3_v1.py` around lines 1 - 29, Add more comprehensive
tests for d3_v1.escape_for_html_js and d3_v1.start: create parameterized cases
for escape_for_html_js covering empty string, strings without special chars,
quotes, multiple occurrences of <, >, & and Unicode to assert none of "<", ">",
"&" remain unescaped; add tests for start with multiple events (verify each
target and module_name appear), events missing optional fields (e.g., only
"target") to ensure it still returns a string and includes the target, and an
XSS payload event (target "<script>...</script>") asserting raw "<script>" is
absent and the escaped sequence (e.g., "\\u003Cscript\\u003E" or HTML entities)
is present; reference functions escape_for_html_js and start when adding these
tests.

Comment on lines +4 to +8
def test_escape_for_html_js():
    escaped = d3_v1.escape_for_html_js("<tag>&entity</tag>")
    assert "\\u003C" in escaped
    assert "\\u003E" in escaped
    assert "\\u0026" in escaped

🛠️ Refactor suggestion | 🟠 Major

Strengthen assertions to verify complete transformation.

The test uses weak containment checks (in) that could produce false positives. For example, if the function returned "<tag>\\u003C" (with original < still present), the test would still pass.

✅ Verify complete character replacement
 def test_escape_for_html_js():
+    input_str = "<tag>&entity</tag>"
-    escaped = d3_v1.escape_for_html_js("<tag>&entity</tag>")
+    escaped = d3_v1.escape_for_html_js(input_str)
+    # Verify original characters are replaced
+    assert "<" not in escaped
+    assert ">" not in escaped
+    assert "&" not in escaped
+    # Verify escape sequences are present
     assert "\\u003C" in escaped
     assert "\\u003E" in escaped
     assert "\\u0026" in escaped
+    # Or verify exact expected output
+    assert escaped == "\\u003Ctag\\u003E\\u0026entity\\u003C/tag\\u003E"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/lib/test_graph_d3_v1.py` around lines 4 - 8, The assertions in
test_escape_for_html_js are too weak; change them to assert the full transformed
output of d3_v1.escape_for_html_js("<tag>&entity</tag>") instead of using
containment checks. Replace the three "in" asserts with a single equality check
comparing the function result to the exact expected escaped string (the fully
escaped form where '<' becomes '\u003C', '>' '\u003E', and '&' '\u0026') so the
test fails if any original characters remain; update the test function
test_escape_for_html_js accordingly to use that exact expected value.


Copilot AI left a comment


Pull request overview

This PR aims to increase automated test coverage across Nettacker’s core utilities, engines, and protocol libraries, while also updating TLS socket wrapping to be compatible with modern Python SSL APIs (Python 3.12+).

Changes:

  • Added many new unit tests across tests/ for core utilities, argument parsing, app flow, logger behavior, and multiple protocol libraries.
  • Updated create_tcp_socket implementations to use ssl.create_default_context() + context.wrap_socket(..., server_hostname=host) for SNI/modern SSL compatibility.
  • Modified report.html (root-level) with new inline template content.

Reviewed changes

Copilot reviewed 28 out of 28 changed files in this pull request and generated 25 comments.

Show a summary per file
File Description
tests/test_main.py Adds a basic test ensuring nettacker.main.run() instantiates and runs Nettacker.
tests/test_logger.py Adds extensive logger tests covering flags, formatting, and output behavior.
tests/lib/test_graph_d3_v1.py Adds tests for D3 v1 graph HTML escaping and output generation.
tests/lib/test_compare_report.py Adds minimal coverage test for compare report engine entrypoint.
tests/core/utils/test_common_more.py Adds additional tests for common utility helpers (CPU core selection, time formatting, argv parsing, headers, bytes conversion).
tests/core/utils/test_common_extended.py Adds broader coverage tests for common.py utilities and edge cases.
tests/core/utils/test_common_additional.py Adds targeted tests for sanitization, threading helper, and token/path helpers.
tests/core/test_template.py Adds basic tests for TemplateLoader init and open().
tests/core/test_socks_proxy.py Adds tests for SOCKS proxy configuration helper.
tests/core/test_arg_parser_extended.py Adds extensive ArgParser branch/validation/coercion tests.
tests/core/test_arg_parser.py Adds baseline ArgParser validation/coercion tests.
tests/core/test_app_more.py Adds a minimal test for Nettacker.print_logo().
tests/core/test_app_extended.py Adds additional app flow/target-expansion/scan-start tests.
tests/core/test_app.py Adds tests for target expansion, filtering by events, and report flow decisions.
tests/core/lib/test_telnet.py Adds basic Telnet engine/library presence tests.
tests/core/lib/test_ssl.py Updates SSL socket tests to align with create_default_context().wrap_socket(...).
tests/core/lib/test_ssh.py Adds basic SSH engine/library presence tests.
tests/core/lib/test_socket.py Updates socket tests for new SSL wrapping and adds error-path coverage.
tests/core/lib/test_smtp.py Adds basic SMTP/SMTPS engine/library presence tests.
tests/core/lib/test_protocols_comprehensive.py Adds broader “low coverage” protocol/message/compare-report tests.
tests/core/lib/test_pop3.py Adds basic POP3/POP3S engine/library presence tests.
tests/core/lib/test_http_and_logger_extended.py Adds additional HTTP engine + logger coverage tests.
tests/core/lib/test_ftp_ftps.py Adds basic FTP/FTPS engine/library presence tests.
tests/core/lib/test_base_extended.py Adds comprehensive tests for TemplateLoader parse/format/load/open workflows.
tests/core/lib/test_base.py Adds tests for BaseEngine content filtering and condition processing paths.
report.html Replaces root-level report HTML content with a single-line inline template fragment.
nettacker/core/lib/ssl.py Updates TLS wrapping to use ssl.create_default_context() + wrap_socket(..., server_hostname=host).
nettacker/core/lib/socket.py Updates TLS wrapping to use ssl.create_default_context() + wrap_socket(..., server_hostname=host).
Comments suppressed due to low confidence (2)

nettacker/core/lib/socket.py:35

  • When wrap_socket raises, the original connected socket is discarded and replaced with a new socket without being closed. This can leak file descriptors in the failure path; close the original socket_connection (best-effort) before creating the fallback socket.
    except Exception:
        socket_connection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        socket_connection.settimeout(timeout)
        socket_connection.connect((host, port))

nettacker/core/lib/ssl.py:126

  • In the TLS-wrap exception path, the first connected socket is replaced by a new one without being closed, which can leak sockets under repeated failures. Close the original socket_connection before allocating/connecting the fallback socket.
    except Exception:
        socket_connection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        socket_connection.settimeout(timeout)
        socket_connection.connect((host, port))
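A best-effort close before the fallback connect, as the comment suggests, could look like this (the function name and factoring are illustrative; in nettacker the logic lives inline in the except branch):

```python
import socket


def fall_back_to_plain_tcp(failed_socket, host, port, timeout):
    """Close the socket whose TLS wrap failed before reconnecting with
    plain TCP, so the failure path does not leak file descriptors."""
    try:
        failed_socket.close()
    except OSError:
        pass  # best-effort: the descriptor may already be gone
    replacement = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    replacement.settimeout(timeout)
    replacement.connect((host, port))
    return replacement
```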


@@ -0,0 +1,18 @@
from unittest.mock import patch, MagicMock

Copilot AI Mar 27, 2026


MagicMock is imported but not used in this file. Ruff will error on unused imports (F401); remove it or use it in assertions.

Suggested change
from unittest.mock import patch, MagicMock
from unittest.mock import patch

Comment on lines +73 to +74
second_call = logger.run_from_api
assert first_call is second_call

Copilot AI Mar 27, 2026


test_run_from_api_cached uses first_call is second_call on booleans, which will pass even if the property is recomputed (because True/False are singletons). If you want to assert caching, change sys.argv after the first access and verify the value doesn’t change, or inspect that the cached_property stored a value on the instance.

Suggested change
second_call = logger.run_from_api
assert first_call is second_call
# Change argv to a value that would disable API mode if recomputed
with patch.object(sys, "argv", ["nettacker.py"]):
second_call = logger.run_from_api
assert first_call is True
assert second_call is True
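An alternative way to assert caching, sketched with a standalone `functools.cached_property`; the `Cfg` class here is a hypothetical stand-in, not Nettacker's logger:

```python
from functools import cached_property


class Cfg:
    calls = 0

    @cached_property
    def run_from_api(self):
        Cfg.calls += 1  # count recomputations
        return True


c = Cfg()
assert c.run_from_api is c.run_from_api
assert Cfg.calls == 1                  # computed exactly once
assert "run_from_api" in c.__dict__    # value cached on the instance
```

Checking `__dict__` (or a recomputation counter) catches the failure mode the comment describes, which `is` on booleans cannot.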

@@ -0,0 +1,75 @@
from unittest.mock import MagicMock, patch

Copilot AI Mar 27, 2026


MagicMock is imported but never used in this test file, which will trigger Ruff F401. Remove the unused import (or use it in the test).

Suggested change
from unittest.mock import MagicMock, patch
from unittest.mock import patch

Comment on lines +5 to +11
import sys
from unittest.mock import MagicMock, patch
import pytest

from nettacker.core.lib.http import HttpEngine
from nettacker.logger import Logger, TerminalCodes, get_logger


Copilot AI Mar 27, 2026


MagicMock and pytest are imported but not used in this test module. Ruff will fail with unused-import errors (F401); remove them or add tests that use them.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

♻️ Duplicate comments (2)
report.html (2)

1-7: ⚠️ Potential issue | 🔴 Critical

Unresolved: Invalid HTML structure will break rendering.

The HTML structure issues flagged in previous reviews remain unaddressed:

  • Missing <!DOCTYPE html> declaration
  • Missing <html>, <head>, and <body> wrapper elements
  • Unpaired <graph_html> tag (opens on line 2, never closes)
  • Mismatched <table> tags (one open on line 1, two closes on lines 3 and 4)
  • Orphaned content on line 3 after the first </table> close

This malformed markup will fail to render correctly in browsers and parsers.

🔧 Proposed fix: Add valid HTML structure
-<table>
-    <graph_html>/*css*/
-</table>datetargetmodule_nameportlogsjson_event<tr>nowx</tr>
-</table>
-<div id="json_length">1</div>
-<p class="footer">Software Details: OWASP Nettacker version 1.0 [beta] in now ScanID: scan-id</p>
-<script>/*js*/</script>
+<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="UTF-8">
+    <title>OWASP Nettacker Report</title>
+    <style>/*css*/</style>
+</head>
+<body>
+    <div id="graph_html">__graph_html_placeholder__</div>
+    <table>
+        <thead>
+            <tr>
+                <th>date</th>
+                <th>target</th>
+                <th>module_name</th>
+                <th>port</th>
+                <th>logs</th>
+                <th>json_event</th>
+            </tr>
+        </thead>
+        <tbody>
+            <tr>
+                <td>now</td>
+                <td>x</td>
+                <td></td>
+                <td></td>
+                <td></td>
+                <td></td>
+            </tr>
+        </tbody>
+    </table>
+    <div id="json_length">1</div>
+    <p class="footer">Software Details: OWASP Nettacker version 1.0 [beta] in now ScanID: scan-id</p>
+    <script>/*js*/</script>
+</body>
+</html>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report.html` around lines 1 - 7, The HTML is malformed: add a proper document
wrapper and fix unpaired/mismatched tags by inserting <!DOCTYPE html> and
top-level <html>, <head> and <body> elements, move the existing <table>...
content inside <body>, remove or properly close the stray <graph_html> tag (or
replace it with a valid element), ensure every <table> has a single matching
</table>, and keep the <div id="json_length">, <p class="footer"> and
<script>/*js*/</script> inside the body so the elements referenced (graph_html,
table, json_length, footer, script) are valid and well-nested for correct
rendering.

2-7: ⚠️ Potential issue | 🟠 Major

Unresolved: Template placeholders are incompatible with existing renderers.

The placeholder mismatch flagged in previous reviews remains unaddressed. This template uses:

  • <graph_html> (line 2)
  • /*css*/ (line 2)
  • /*js*/ (line 7)
  • scan-id (line 6)
  • json_length value of 1 (line 5)

However, the existing report builder in nettacker/lib/compare_report/engine.py only replaces __data_will_locate_here__, and the graph renderer expects __title_to_replace__, __description_to_replace__, and __html_title_to_replace__.

Without a corresponding renderer that substitutes these new placeholders, they will appear as raw text in the generated report output.

Run the following script to verify if any renderer handles these placeholders:

#!/bin/bash
# Description: Search for renderers that replace the placeholders used in report.html

echo "Searching for code that replaces <graph_html>, /*css*/, /*js*/, scan-id..."
rg -n -C3 'graph_html|/\*css\*/|/\*js\*/|scan-id' --type=py -g '!test*' -g '!**/web/**' 

echo -e "\n--- Checking if report.html is used by any report generator ---"
rg -n -C3 'report\.html' --type=py -g '!test*'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report.html` around lines 2 - 7, The report template report.html uses new
placeholders (<graph_html>, /*css*/, /*js*/, scan-id, and the json_length div)
that are not handled by the existing report builder in
nettacker/lib/compare_report/engine.py (which only replaces
__data_will_locate_here__) and the graph renderer (which expects
__title_to_replace__, __description_to_replace__, __html_title_to_replace__);
update the rendering pipeline by adding replacement logic for these new
placeholders (or modify report.html to use the existing placeholders) — ensure
engine.py (or the graph renderer) performs substitutions for <graph_html> ->
generated graph HTML, /*css*/ -> injected CSS, /*js*/ -> injected JS, scan-id ->
actual scan id string, and the `#json_length` element -> actual JSON length so the
final report contains rendered content instead of raw placeholders.
🧹 Nitpick comments (5)
tests/core/test_arg_parser.py (1)

73-78: Optional: Verify error messages passed to die_failure.

The three tests that mock die_failure (for invalid language, graph name, and schema) currently only verify that die_failure was called once, but don't validate the error message argument. While the tests correctly verify the core failure behavior, adding message verification would make them more robust against unintended message changes.

💡 Example enhancement for test_invalid_language_triggers_die_failure
 @patch.object(arg_module, "die_failure", side_effect=RuntimeError("fail"))
 def test_invalid_language_triggers_die_failure(mock_die, tmp_path):
     options = make_options(tmp_path, language="de")
     with pytest.raises(RuntimeError):
         ArgParser(api_arguments=options)
-    mock_die.assert_called_once()
+    mock_die.assert_called_once()
+    # Optionally verify the error message contains expected content
+    call_args = mock_die.call_args[0][0]
+    assert "language" in call_args.lower()

Similar checks can be applied to test_invalid_graph_name and test_invalid_schema.

Also applies to: 81-86, 103-108

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/test_arg_parser.py` around lines 73 - 78, Update the three tests
(test_invalid_language_triggers_die_failure, test_invalid_graph_name,
test_invalid_schema) to not only assert that arg_module.die_failure was called
once but also verify the actual error message passed; after constructing
ArgParser(api_arguments=options) and catching the RuntimeError, inspect
mock_die.call_args or use mock_die.assert_called_once_with(...) to assert the
expected error string (use the specific message each validation should produce),
referencing the existing test names, ArgParser, make_options, and die_failure to
locate and update the assertions.
tests/lib/graph/d3_tree_v1/test_engine.py (1)

9-12: Consider regex fragility for nested JSON extraction.

The non-greedy \{.*?\} pattern with }; anchor works for the current JSON structure, but it would break if any string value inside the JSON ever contains };. Given the escaping applied by escape_for_html_js, this is unlikely to occur in practice.

If you want a more robust approach in the future, consider using a JSON-aware parser or finding the balanced braces programmatically.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/lib/graph/d3_tree_v1/test_engine.py` around lines 9 - 12, The regex in
_extract_tree_data uses a non-greedy pattern r"treeData\s*=\s*(\{.*?\});" which
can fail if the JSON contains sequences like "};"; update _extract_tree_data to
locate the "treeData =" start, then extract the JSON by walking characters to
find the matching closing brace (balanced-brace parsing) or use a JSON-aware
approach: find the position of the first '{' after "treeData", iterate tracking
open/close braces until balanced, then json.loads that substring; ensure you
still assert a match and handle parse errors gracefully.
tests/core/lib/test_protocols_comprehensive.py (3)

22-108: Consolidate duplicated protocol wiring checks.

These class-based smoke checks overlap with the parametrized assertions at Line 259-280, so the same contract is maintained in two places. Consider keeping one style (preferably the parametrized set) to reduce maintenance noise.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/lib/test_protocols_comprehensive.py` around lines 22 - 108, Remove
the duplicated class-based smoke tests (TestSmtpLibrary, TestSmtpsLibrary,
TestTelnetLibrary, TestFtpLibrary, TestPop3Library, TestSshLibrary) that assert
the same protocol wiring (e.g., SmtpLibrary.client == smtplib.SMTP,
SmtpEngine.library == SmtpLibrary, and the hasattr checks) because those checks
are already covered by the existing parametrized assertions; keep any unique
assertions not covered by the parametrized tests, and if you delete these
classes ensure imports/fixtures remain valid and no other tests reference those
class names.

119-120: Avoid format-sensitive JSON substring assertions.

These assertions depend on exact serialization formatting, not just content.

💡 Suggested hardening
-            assert '"scan1": "data1"' in result
-            assert '"scan2": "data2"' in result
+            assert '"scan1"' in result and '"data1"' in result
+            assert '"scan2"' in result and '"data2"' in result
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/lib/test_protocols_comprehensive.py` around lines 119 - 120, The
two brittle assertions check JSON via serialized substrings; instead parse the
response string into a JSON object (use the test's result variable and
json.loads) and assert the parsed dict contains the expected entries (e.g.,
parsed_result["scan1"] == "data1" and parsed_result["scan2"] == "data2" or that
the parsed dict includes the expected subset) so the test verifies content not
formatting.

1-4: @theanant404 please verify all commits are signed before merge.

Unsigned commits fail Nettacker pre-merge checks. If needed: git commit --amend -S (or re-sign via interactive rebase) and then git push --force-with-lease.

Based on learnings: "When reviewing any pull request in the OWASP/Nettacker repository, always remind the PR author to use signed commits and explicitly state that the PR fails pre-merge checks if any commits are unsigned."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/lib/test_protocols_comprehensive.py` around lines 1 - 4, All
commits in this PR must be GPG-signed to satisfy the Nettacker pre-merge checks;
ensure every commit is signed (use git commit --amend -S for the most recent
commit or interactive rebase to re-sign earlier commits), then push the
rewritten history with git push --force-with-lease, and verify signatures
locally (e.g., git log --show-signature) before requesting merge so the CI
pre-merge check for unsigned commits will pass.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@nettacker/core/lib/socket.py`:
- Around line 29-32: Replace the per-call call to ssl.create_default_context()
with a module-level, reusable SSLContext to avoid repeated CA loading and future
policy changes: create a module-level SSLContext(...) using
ssl.PROTOCOL_TLS_CLIENT, set context.check_hostname = False and
context.verify_mode = ssl.CERT_NONE once at import time, and then use that
shared context to wrap the socket (replace the local creation of context and its
use around socket_connection and server_hostname=host); update any references
where context is created in the socket wrapping code to use the module-level
context instead.
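A sketch of the module-level context the prompt describes. Disabling verification follows the prompt's description of the scanner's intended policy; treat that, and the names here, as assumptions:

```python
import socket
import ssl

# Built once at import time; reused for every connection instead of
# re-running ssl.create_default_context() (and its CA load) per call.
UNVERIFIED_TLS_CONTEXT = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
UNVERIFIED_TLS_CONTEXT.check_hostname = False  # must be cleared before CERT_NONE
UNVERIFIED_TLS_CONTEXT.verify_mode = ssl.CERT_NONE


def tls_wrap(sock: socket.socket, host: str) -> ssl.SSLSocket:
    """Wrap an already-connected socket with the shared client context."""
    return UNVERIFIED_TLS_CONTEXT.wrap_socket(sock, server_hostname=host)
```

Note the ordering: `check_hostname` has to be set to `False` before `verify_mode = ssl.CERT_NONE`, otherwise the assignment raises `ValueError`.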

In `@nettacker/lib/graph/d3_tree_v1/engine.py`:
- Around line 31-34: The code uses event.get("target", "unknown_target") (and
similarly for module_name, port, event_name) which doesn't guard against keys
present with value None; update the lookups in engine.py to coalesce None to
defaults (e.g., use the returned value or fallback) so variables target,
module_name, port, and event_name never become None and thus won't create None
dict keys in normalisedjson; locate the event.get(...) calls in the function
that builds normalisedjson and replace them with a None-safe fallback pattern
for each of the four identifiers.
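The None-safe lookup can be a small helper. `dict.get` defaults only cover missing keys, not keys stored as `None`, and `value or default` would also swallow legitimate falsy values such as port 0:

```python
def coalesce(event: dict, key: str, default):
    """Fall back when the key is absent OR its value is None."""
    value = event.get(key)
    return default if value is None else value


event = {"target": None, "port": 0}
assert coalesce(event, "target", "unknown_target") == "unknown_target"
assert coalesce(event, "module_name", "unknown_module") == "unknown_module"
assert coalesce(event, "port", "unknown_port") == 0  # falsy but valid
```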

In `@tests/core/test_arg_parser.py`:
- Around line 65-70: The long-line lambda in the stub_loaders fixture causes a
line-length violation; replace the oversized inline lambda for
ArgParser.load_modules (and optionally load_profiles) with a short named helper
function or split the lambda across multiple lines so each line is under 99
chars — e.g., create a small def _fake_load_modules(limit=-1,
full_details=False): return {"mod1": {"profiles": []}, "all": {}} and set
monkeypatch.setattr(ArgParser, "load_modules", staticmethod(_fake_load_modules))
(and similarly replace the long load_profiles lambda if needed) so the fixture
and monkeypatch calls stay within the 99-character limit.

In `@tests/core/utils/test_common_extended.py`:
- Around line 139-168: The tests for remove_sensitive_header_keys only check
return type and miss verifying redaction; update
test_remove_sensitive_headers_with_password and
test_remove_sensitive_headers_with_cookie to assert that result is a dict and
that sensitive keys (e.g., "Authorization" and "Cookie") are either removed or
replaced with a redacted value in result["response"]["headers"]; also add an
assertion in test_remove_sensitive_headers_empty that calling
remove_sensitive_header_keys({}) returns an empty dict (or equivalent) to ensure
no exceptions. Locate these checks in the existing test functions
(test_remove_sensitive_headers_empty,
test_remove_sensitive_headers_with_password,
test_remove_sensitive_headers_with_cookie) and add assertions verifying absence
or redaction of the sensitive header keys.
- Around line 174-190: The tests use weak tautological assertions allowing wrong
results; update tests for get_http_header_key and get_http_header_value to
assert exact expected outputs: for test_get_http_header_key assert
get_http_header_key("Content-Type: application/json") == "Content-Type"; for
test_get_http_header_key_with_spaces assert get_http_header_key(" 
Authorization: Bearer token").strip() == "Authorization" (or ensure function
trims leading spaces) and for test_get_http_header_value assert
get_http_header_value("Content-Type: application/json") == "application/json"
(or compare the stripped string) so failures surface reliably when
get_http_header_key or get_http_header_value return incorrect values.
- Around line 83-94: The test test_merge_logs_duplicate_deduplication currently
only asserts the return type and calls merge_logs_to_list incorrectly per the
note; update it to call merge_logs_to_list with the full data list (not
individual items) and add assertions that the returned value is a list, that its
length is 1 (deduplicated), and that the single entry contains the expected
"same message" value to verify deduplication; reference merge_logs_to_list and
the local data variable when making the change.
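A sketch of the strengthened assertions. `merge_logs_to_list` below is a hypothetical stand-in for the Nettacker helper, whose real signature and return shape may differ:

```python
def merge_logs_to_list(data):
    """Hypothetical stand-in: merge log entries, dropping duplicate messages."""
    seen, merged = set(), []
    for item in data:
        message = item.get("log")
        if message not in seen:
            seen.add(message)
            merged.append(message)
    return merged


data = [{"log": "same message"}, {"log": "same message"}]
result = merge_logs_to_list(data)  # pass the full list, not individual items
assert isinstance(result, list)
assert len(result) == 1            # duplicates collapsed
assert result[0] == "same message"
```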


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 89c6ca0b-d498-4fb9-8d71-a666835e44e4

📥 Commits

Reviewing files that changed from the base of the PR and between 685f0b8 and d99aba9.

📒 Files selected for processing (18)
  • nettacker/core/lib/socket.py
  • nettacker/core/lib/ssl.py
  • nettacker/lib/graph/d3_tree_v1/engine.py
  • report.html
  • tests/core/lib/test_base.py
  • tests/core/lib/test_base_extended.py
  • tests/core/lib/test_http_and_logger_extended.py
  • tests/core/lib/test_protocols_comprehensive.py
  • tests/core/test_app_extended.py
  • tests/core/test_app_more.py
  • tests/core/test_arg_parser.py
  • tests/core/test_socks_proxy.py
  • tests/core/test_template.py
  • tests/core/utils/test_common_extended.py
  • tests/core/utils/test_common_more.py
  • tests/lib/graph/__init__.py
  • tests/lib/graph/d3_tree_v1/__init__.py
  • tests/lib/graph/d3_tree_v1/test_engine.py
✅ Files skipped from review due to trivial changes (4)
  • nettacker/core/lib/ssl.py
  • tests/core/lib/test_base.py
  • tests/core/utils/test_common_more.py
  • tests/core/lib/test_base_extended.py
🚧 Files skipped from review as they are similar to previous changes (5)
  • tests/core/test_app_more.py
  • tests/core/test_socks_proxy.py
  • tests/core/test_template.py
  • tests/core/test_app_extended.py
  • tests/core/lib/test_http_and_logger_extended.py

theanant404 and others added 2 commits March 28, 2026 01:08
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Anant <anantkumar012002@gmail.com>


2 participants