
fix: add skills by defaults #304

Open
mmilanta wants to merge 3 commits into main from fix/cli-skill-by-defaults

Conversation

@mmilanta
Contributor

@mmilanta mmilanta commented May 8, 2026

No description provided.

@mmilanta mmilanta requested a review from a team as a code owner May 8, 2026 15:41
@qodo-merge-etso

Review Summary by Qodo

Enable skills scanning by default with toggle support

✨ Enhancement 🧪 Tests


Walkthroughs

Description
• Changed --skills flag to be enabled by default
• Implemented BooleanOptionalAction for toggle behavior
• Added --no-skills option to disable skills scanning
• Updated help text and CLI examples for clarity
• Added comprehensive test coverage for skills flag behavior
Diagram
flowchart LR
  A["--skills flag"] -- "Changed from opt-in" --> B["Enabled by default"]
  B -- "Uses BooleanOptionalAction" --> C["--skills and --no-skills both work"]
  C -- "Tested with" --> D["New test suite"]
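The toggle behavior described in the walkthrough can be reproduced with plain argparse. This standalone sketch uses the flag definition shown in the review diff below; the parser construction and `prog` name are illustrative, not the project's actual CLI setup:

```python
import argparse

# Sketch of the changed flag: BooleanOptionalAction (Python 3.9+)
# automatically derives a --no-skills opt-out from the --skills flag name.
parser = argparse.ArgumentParser(prog="agent-scan")
parser.add_argument(
    "--skills",
    default=True,
    action=argparse.BooleanOptionalAction,
    help="Scan skills beyond mcp servers (default: enabled). Use --no-skills to disable.",
)

assert parser.parse_args([]).skills is True                # new default: enabled
assert parser.parse_args(["--skills"]).skills is True      # explicit opt-in unchanged
assert parser.parse_args(["--no-skills"]).skills is False  # new opt-out
```

With `store_true` the opt-out spelling `--no-skills` would not exist; `BooleanOptionalAction` registers both forms from a single `add_argument` call, which is why the diff touches only the one flag definition.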


File Changes

1. src/agent_scan/cli.py ✨ Enhancement +4/-4

Enable skills scanning by default with toggle

• Changed --skills default from False to True
• Replaced store_true action with argparse.BooleanOptionalAction
• Updated help text to indicate skills are enabled by default
• Updated CLI example from --skills to --no-skills in epilog

2. tests/unit/test_cli_parsing.py 🧪 Tests +25/-0

Add skills flag behavior test coverage

• Added new TestSkillsFlag test class with four test methods
• Tests verify default behavior is True
• Tests verify --skills flag keeps value as True
• Tests verify --no-skills flag disables skills to False
• Tests verify flag toggling behavior works correctly
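Assuming the suite builds an argparse parser equivalent to the CLI's (the `make_parser` helper below is a stand-in, not the project's real API), the four described test methods can be sketched as:

```python
import argparse


def make_parser() -> argparse.ArgumentParser:
    # Stand-in for the real parser built in src/agent_scan/cli.py.
    p = argparse.ArgumentParser()
    p.add_argument("--skills", default=True, action=argparse.BooleanOptionalAction)
    return p


class TestSkillsFlag:
    def test_skills_default_is_true(self):
        assert make_parser().parse_args([]).skills is True

    def test_skills_flag_keeps_true(self):
        assert make_parser().parse_args(["--skills"]).skills is True

    def test_no_skills_disables(self):
        assert make_parser().parse_args(["--no-skills"]).skills is False

    def test_no_skills_then_skills_re_enables(self):
        # argparse applies flags left to right, so the last occurrence wins.
        assert make_parser().parse_args(["--no-skills", "--skills"]).skills is True
```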


@qodo-merge-etso

qodo-merge-etso Bot commented May 8, 2026

Code Review by Qodo

🐞 Bugs (1) 📘 Rule violations (0) 📎 Requirement gaps (0)



Action required

1. E2E tests now fail 🐞 Bug ≡ Correctness
Description
--skills now defaults to True, but e2e tests still assert that skill scanning does not happen unless
--skills is explicitly provided, so they will fail under the new default behavior.
Code

src/agent_scan/cli.py[R208-211]

        "--skills",
-        default=False,
-        action="store_true",
-        help="Scan skills beyond mcp servers.",
+        default=True,
+        action=argparse.BooleanOptionalAction,
+        help="Scan skills beyond mcp servers (default: enabled). Use --no-skills to disable.",
Evidence
The PR changes the CLI flag to default-enabled, meaning scans will include skills unless explicitly
disabled. However, existing e2e tests still run scan/inspect without any skills flag and assert that
zero skill servers are returned, which contradicts the new default and will fail.

src/agent_scan/cli.py[207-212]
tests/e2e/test_scan.py[147-170]
tests/e2e/test_inspect.py[153-176]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The CLI now scans skills by default, but e2e tests still assume the old behavior (skills scanned only when `--skills` is passed). These tests will fail because they assert there are *no* skill servers when the flag is omitted.

### Issue Context
`--skills` switched from `store_true` default-off to `BooleanOptionalAction` default-on; the opt-out is now `--no-skills`.

### Fix Focus Areas
- tests/e2e/test_scan.py[147-170]
- tests/e2e/test_inspect.py[153-176]

### What to change
- For tests that verify "no skill servers", add `--no-skills` to the CLI invocation.
- Optionally add new assertions that default behavior *does* include skills, and keep `--no-skills` as the explicit opt-out path.




Remediation recommended

2. Docs outdated for new default ✓ Resolved 🐞 Bug ⚙ Maintainability
Description
Repository docs still state skills scanning is default-off and instruct users to add --skills, but
the CLI now enables skills by default and uses --no-skills to opt out, so documentation and examples
are incorrect.
Code

src/agent_scan/cli.py[R208-211]

        "--skills",
-        default=False,
-        action="store_true",
-        help="Scan skills beyond mcp servers.",
+        default=True,
+        action=argparse.BooleanOptionalAction,
+        help="Scan skills beyond mcp servers (default: enabled). Use --no-skills to disable.",
Evidence
The CLI now declares skills scanning as default enabled and introduces --no-skills. Multiple docs
still describe the inverse default ("add --skills" / "omit --skills") and show help text claiming
default-off, which will mislead users.

src/agent_scan/cli.py[207-212]
README.md[26-26]
README.md[189-246]
docs/scanning.md[36-36]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Docs and examples still describe skills scanning as opt-in (`--skills`) and default-off, but the CLI now defaults it to enabled and uses `--no-skills` to opt out.

### Issue Context
The CLI help text now explicitly says skills scanning is enabled by default and documents `--no-skills`.

### Fix Focus Areas
- README.md[26-26]
- README.md[189-246]
- docs/scanning.md[36-36]

### What to change
- Replace language like "By default it focuses on MCP servers; add `--skills`" with wording reflecting skills are included by default.
- Update option tables/help snippets to show `--skills/--no-skills` and the correct default.
- Update examples: add an example for `--no-skills` (and remove/adjust examples implying `--skills` is required to scan skills).




@qodo-merge-etso

CI Feedback 🧐

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

Action: test (ubuntu-latest)

Failed stage: Run tests [❌]

Failed test name: TestInspect.test_inspect_skills_without_flag[skills_parent_dir-binary]

Failure summary:

The GitHub Action failed because pytest reported 6 failing e2e tests related to scanning/inspecting skill paths when the --skills flag is not provided:

- tests/e2e/test_inspect.py:174 (TestInspect.test_inspect_skills_without_flag[...]) failed for skills_parent_dir, skill_folder, and skill_md_file because the command inspect --json --dangerously-run-mcp-servers <skill_path> returned a skill server (test-skill) in the JSON output, but the test expected zero skill servers without --skills.
- tests/e2e/test_scan.py:168 (TestFullScanFlow.test_scan_skills_without_flag[...]) failed for the same three parametrizations because the command scan --json --dangerously-run-mcp-servers <skill_path> also returned a skill server (test-skill) without --skills.
- These assertion failures caused make ci to exit non-zero (make: *** [Makefile:33: ci] Error 1), and the job ended with a non-success exit (Process completed with exit code 2).

Relevant error logs:
1:  ##[group]Runner Image Provisioner
2:  Hosted Compute Agent
...

413:  testpaths: tests
414:  plugins: asyncio-1.3.0, cov-7.1.0, lazy-fixtures-1.4.0, anyio-4.12.1
415:  asyncio: mode=Mode.STRICT, debug=False, asyncio_default_fixture_loop_scope=None, asyncio_default_test_loop_scope=function
416:  collecting ... collected 517 items / 40 deselected / 477 selected
417:  tests/e2e/test_guard_install.py::TestGuardInstallE2E::test_guard_install_claude[binary] PASSED [  0%]
418:  tests/e2e/test_guard_install.py::TestGuardInstallE2E::test_guard_install_cursor[binary] PASSED [  0%]
419:  tests/e2e/test_inspect.py::TestInspect::test_infer_transport[streamable_http_transport_config_file-http-8124-binary] PASSED [  0%]
420:  tests/e2e/test_inspect.py::TestInspect::test_infer_transport[sse_transport_config_file-sse-8123-binary] PASSED [  0%]
421:  tests/e2e/test_inspect.py::TestInspect::test_infer_transport_server_not_working[{"mcp": {"servers": {"http_server": {"url": "http://www.mcp-scan.com/mcp", "type": "http"}}}}-http-binary] PASSED [  1%]
422:  tests/e2e/test_inspect.py::TestInspect::test_infer_transport_server_not_working[{"mcp": {"servers": {"http_server": {"url": "http://www.mcp-scan.com/sse", "type": "sse"}}}}-sse-binary] PASSED [  1%]
423:  tests/e2e/test_inspect.py::TestInspect::test_infer_transport_server_not_working[{"mcp": {"servers": {"http_server": {"url": "http://www.mcp-scan.com/mcp"}}}}-http-binary] PASSED [  1%]
424:  tests/e2e/test_inspect.py::TestInspect::test_inspect[binary] PASSED      [  1%]
425:  tests/e2e/test_inspect.py::TestInspect::test_inspect_skills_with_flag[skills_parent_dir-binary] PASSED [  1%]
426:  tests/e2e/test_inspect.py::TestInspect::test_inspect_skills_with_flag[skill_folder-binary] PASSED [  2%]
427:  tests/e2e/test_inspect.py::TestInspect::test_inspect_skills_with_flag[skill_md_file-binary] PASSED [  2%]
428:  tests/e2e/test_inspect.py::TestInspect::test_inspect_skills_without_flag[skills_parent_dir-binary] FAILED [  2%]
429:  tests/e2e/test_inspect.py::TestInspect::test_inspect_skills_without_flag[skill_folder-binary] FAILED [  2%]
430:  tests/e2e/test_inspect.py::TestInspect::test_inspect_skills_without_flag[skill_md_file-binary] FAILED [  2%]
431:  tests/e2e/test_inspect.py::TestInspect::test_direct_scan[streamable_http-binary] PASSED [  3%]
...

435:  tests/e2e/test_inspect.py::TestInspect::test_direct_scan_stdio_servers[pypi_latest-binary] PASSED [  3%]
436:  tests/e2e/test_inspect.py::TestInspect::test_direct_scan_stdio_servers[pypi_versioned-binary] PASSED [  4%]
437:  tests/e2e/test_inspect.py::TestInspect::test_direct_scan_stdio_servers[oci-binary] PASSED [  4%]
438:  tests/e2e/test_inspect.py::TestInspect::test_vscode_settings_no_mcp[binary] PASSED [  4%]
439:  tests/e2e/test_scan.py::TestFullScanFlow::test_basic[claudestyle_config_file-binary] PASSED [  4%]
440:  tests/e2e/test_scan.py::TestFullScanFlow::test_basic[vscode_mcp_config_file-binary] PASSED [  5%]
441:  tests/e2e/test_scan.py::TestFullScanFlow::test_basic[vscode_config_file-binary] PASSED [  5%]
442:  tests/e2e/test_scan.py::TestFullScanFlow::test_basic[streamable_http_transport_config_file-binary] PASSED [  5%]
443:  tests/e2e/test_scan.py::TestFullScanFlow::test_basic[sse_transport_config_file-binary] PASSED [  5%]
444:  tests/e2e/test_scan.py::TestFullScanFlow::test_scan_sse_http[streamable_http_transport_config_file-binary] PASSED [  5%]
445:  tests/e2e/test_scan.py::TestFullScanFlow::test_scan_sse_http[sse_transport_config_file-binary] PASSED [  6%]
446:  tests/e2e/test_scan.py::TestFullScanFlow::test_scan[tests/mcp_servers/configs_files/weather_config.json-server_names0-binary] PASSED [  6%]
447:  tests/e2e/test_scan.py::TestFullScanFlow::test_scan[tests/mcp_servers/configs_files/math_config.json-server_names1-binary] PASSED [  6%]
448:  tests/e2e/test_scan.py::TestFullScanFlow::test_scan[tests/mcp_servers/configs_files/all_config.json-server_names2-binary] PASSED [  6%]
449:  tests/e2e/test_scan.py::TestFullScanFlow::test_ci_exit_code_with_flag[binary] PASSED [  6%]
450:  tests/e2e/test_scan.py::TestFullScanFlow::test_scan_skills_without_flag[skills_parent_dir-binary] FAILED [  7%]
451:  tests/e2e/test_scan.py::TestFullScanFlow::test_scan_skills_without_flag[skill_folder-binary] FAILED [  7%]
452:  tests/e2e/test_scan.py::TestFullScanFlow::test_scan_skills_without_flag[skill_md_file-binary] FAILED [  7%]
453:  tests/e2e/test_scan.py::TestFullScanFlow::test_ci_without_dangerous_flag_exits_2[binary] PASSED [  7%]
...

514:  tests/unit/test_cli_parsing.py::TestControlServerParsing::test_parse_control_servers[with_other_cli_args] PASSED [ 20%]
515:  tests/unit/test_cli_parsing.py::TestControlServerParsing::test_parse_control_servers[single_server_with_multiple_headers] PASSED [ 20%]
516:  tests/unit/test_cli_parsing.py::TestControlServerParsing::test_parse_control_servers_missing_identifier[single_server_no_identifier] PASSED [ 20%]
517:  tests/unit/test_cli_parsing.py::TestControlServerParsing::test_parse_control_servers_missing_identifier[single_server_headers_only_no_identifier] PASSED [ 21%]
518:  tests/unit/test_cli_parsing.py::TestControlServerParsing::test_parse_control_servers_missing_identifier[multiple_servers_one_missing_identifier] PASSED [ 21%]
519:  tests/unit/test_cli_parsing.py::TestControlServerParsing::test_parse_control_servers_missing_identifier[options_only_apply_to_preceding_server] PASSED [ 21%]
520:  tests/unit/test_cli_parsing.py::TestCLIArgumentParsing::test_scan_with_multiple_control_servers_parses_correctly PASSED [ 21%]
521:  tests/unit/test_cli_parsing.py::TestSkillsFlag::test_skills_default_is_true PASSED [ 22%]
522:  tests/unit/test_cli_parsing.py::TestSkillsFlag::test_skills_flag_keeps_true PASSED [ 22%]
523:  tests/unit/test_cli_parsing.py::TestSkillsFlag::test_no_skills_disables PASSED [ 22%]
524:  tests/unit/test_cli_parsing.py::TestSkillsFlag::test_no_skills_then_skills_re_enables PASSED [ 22%]
525:  tests/unit/test_cli_parsing.py::TestControlServerHeaderParsing::test_parse_headers_single_header PASSED [ 22%]
526:  tests/unit/test_cli_parsing.py::TestControlServerHeaderParsing::test_parse_headers_multiple_headers PASSED [ 23%]
527:  tests/unit/test_cli_parsing.py::TestControlServerHeaderParsing::test_parse_headers_none_input PASSED [ 23%]
528:  tests/unit/test_cli_parsing.py::TestControlServerHeaderParsing::test_parse_headers_empty_list PASSED [ 23%]
529:  tests/unit/test_cli_parsing.py::TestControlServerHeaderParsing::test_parse_headers_invalid_format_raises_error PASSED [ 23%]
530:  tests/unit/test_cli_parsing.py::TestControlServerUploadIntegration::test_control_servers_passed_to_pipeline PASSED [ 23%]
531:  tests/unit/test_cli_parsing.py::TestControlServerUploadIntegration::test_no_control_servers_passed_to_pipeline PASSED [ 24%]
532:  tests/unit/test_cli_parsing.py::TestControlServerUploadIntegration::test_skip_ssl_verify_passed_to_pipeline PASSED [ 24%]
533:  tests/unit/test_cli_parsing.py::TestCIMode::test_ci_exits_1_when_any_issue[E001] PASSED [ 24%]
534:  tests/unit/test_cli_parsing.py::TestCIMode::test_ci_exits_1_when_any_issue[X002] PASSED [ 24%]
535:  tests/unit/test_cli_parsing.py::TestCIMode::test_ci_exits_1_when_any_issue[X007] PASSED [ 24%]
536:  tests/unit/test_cli_parsing.py::TestCIMode::test_ci_no_exit_when_no_issues PASSED [ 25%]
537:  tests/unit/test_cli_parsing.py::TestCIMode::test_non_ci_no_exit_with_analysis_issues PASSED [ 25%]
538:  tests/unit/test_cli_parsing.py::TestCIMode::test_ci_exits_1_on_path_level_failure PASSED [ 25%]
539:  tests/unit/test_cli_parsing.py::TestCIMode::test_ci_exits_1_on_server_level_failure PASSED [ 25%]
540:  tests/unit/test_cli_parsing.py::TestCIMode::test_ci_no_exit_on_non_failure_error PASSED [ 25%]
541:  tests/unit/test_cli_parsing.py::TestJSONOutput::test_json_output_suppresses_stdout_during_scan PASSED [ 26%]
542:  tests/unit/test_cli_parsing.py::TestJSONOutput::test_json_output_only_contains_json PASSED [ 26%]
543:  tests/unit/test_cli_parsing.py::TestIgnoreIssuesCodes::test_ignore_codes_filters_all_issues_ci_no_exit PASSED [ 26%]
544:  tests/unit/test_cli_parsing.py::TestIgnoreIssuesCodes::test_ignore_codes_reflected_in_json_output PASSED [ 26%]
545:  tests/unit/test_cli_parsing.py::TestIgnoreIssuesCodes::test_ignore_codes_partial_filter_still_exits PASSED [ 27%]
546:  tests/unit/test_cli_parsing.py::TestIgnoreIssuesCodes::test_ignore_codes_without_ci_exits_with_error PASSED [ 27%]
547:  tests/unit/test_cli_parsing.py::TestIgnoreIssuesCodes::test_ignore_codes_not_set_keeps_all_issues PASSED [ 27%]
548:  tests/unit/test_cli_parsing.py::TestIgnoreIssuesCodes::test_ignore_codes_empty_string_keeps_all_issues PASSED [ 27%]
549:  tests/unit/test_cli_parsing.py::TestIgnoreIssuesCodes::test_ignore_codes_can_suppress_failure_code PASSED [ 27%]
550:  tests/unit/test_cli_parsing.py::TestIgnoreIssuesCodes::test_ignore_codes_does_not_suppress_other_failure PASSED [ 28%]
551:  tests/unit/test_config_scan.py::test_scan_mcp_config[claudestyle_config_file] PASSED [ 28%]
...

556:  tests/unit/test_config_scan.py::test_check_server_mocked PASSED          [ 29%]
557:  tests/unit/test_config_scan.py::test_math_server PASSED                  [ 29%]
558:  tests/unit/test_config_scan.py::test_all_server PASSED                   [ 29%]
559:  tests/unit/test_config_scan.py::test_weather_server PASSED               [ 29%]
560:  tests/unit/test_config_scan.py::test_vscode_settings_file_without_mcp PASSED [ 30%]
561:  tests/unit/test_config_scan.py::test_vscode_settings_file_with_empty_mcp PASSED [ 30%]
562:  tests/unit/test_config_scan.py::TestServerUrlAliasParsing::test_server_url_field_parsed_as_remote_server PASSED [ 30%]
563:  tests/unit/test_config_scan.py::TestServerUrlAliasParsing::test_server_url_does_not_drop_other_servers PASSED [ 30%]
564:  tests/unit/test_config_scan.py::TestServerUrlAliasParsing::test_server_url_not_silently_dropped_as_config_without_mcp PASSED [ 31%]
565:  tests/unit/test_config_scan.py::TestServerUrlAliasParsing::test_url_wins_over_server_url_when_both_present PASSED [ 31%]
566:  tests/unit/test_control_server.py::test_upload_payload_excludes_hostname_and_username PASSED [ 31%]
567:  tests/unit/test_control_server.py::test_upload_sends_username_as_list_without_scanned_usernames PASSED [ 31%]
568:  tests/unit/test_control_server.py::test_upload_sends_username_as_list_with_empty_scanned_usernames PASSED [ 31%]
569:  tests/unit/test_control_server.py::test_upload_sends_username_list_with_single_user PASSED [ 32%]
570:  tests/unit/test_control_server.py::test_upload_sends_username_list_with_multiple_users PASSED [ 32%]
571:  tests/unit/test_control_server.py::test_upload_includes_scan_error_in_payload PASSED [ 32%]
572:  tests/unit/test_control_server.py::test_upload_file_not_found_error_in_payload PASSED [ 32%]
573:  tests/unit/test_control_server.py::test_upload_parse_error_in_payload PASSED [ 32%]
574:  tests/unit/test_control_server.py::test_upload_server_http_error_in_payload PASSED [ 33%]
575:  tests/unit/test_control_server.py::test_upload_server_startup_error_in_payload PASSED [ 33%]
576:  tests/unit/test_control_server.py::test_upload_retries_on_network_error PASSED [ 33%]
577:  tests/unit/test_control_server.py::test_upload_retries_on_server_error PASSED [ 33%]
578:  tests/unit/test_control_server.py::test_upload_does_not_retry_on_client_error PASSED [ 33%]
579:  tests/unit/test_control_server.py::test_upload_succeeds_on_second_attempt PASSED [ 34%]
580:  tests/unit/test_control_server.py::test_upload_custom_max_retries PASSED [ 34%]
581:  tests/unit/test_control_server.py::test_upload_exponential_backoff PASSED [ 34%]
582:  tests/unit/test_control_server.py::test_upload_sends_payload_when_results_empty PASSED [ 34%]
583:  tests/unit/test_control_server.py::test_upload_does_not_retry_on_unexpected_error PASSED [ 35%]
584:  tests/unit/test_control_server.py::test_upload_unknown_mcp_config_error_in_payload PASSED [ 35%]
585:  tests/unit/test_entity_to_tool.py::test_entity_to_tool[tests/mcp_servers/signatures/math_server_signature.json] PASSED [ 35%]
...

699:  tests/unit/test_guard.py::TestManagedInstallClaude::test_detect_at_managed_path PASSED [ 59%]
700:  tests/unit/test_guard.py::TestManagedInstallClaude::test_uninstall_from_managed_path PASSED [ 59%]
701:  tests/unit/test_guard.py::TestManagedInstallCursor::test_install_to_managed_path PASSED [ 59%]
702:  tests/unit/test_guard.py::TestManagedInstallCursor::test_detect_at_managed_path PASSED [ 59%]
703:  tests/unit/test_guard.py::TestManagedInstallCursor::test_uninstall_from_managed_path PASSED [ 60%]
704:  tests/unit/test_guard.py::TestPermissionDeniedStatus::test_detect_claude_raises_on_unreadable PASSED [ 60%]
705:  tests/unit/test_guard.py::TestPermissionDeniedStatus::test_detect_cursor_raises_on_unreadable PASSED [ 60%]
706:  tests/unit/test_guard.py::TestPermissionDeniedStatus::test_print_client_status_permission_denied PASSED [ 60%]
707:  tests/unit/test_guard.py::TestPermissionDeniedStatus::test_print_client_status_not_installed PASSED [ 61%]
708:  tests/unit/test_guard.py::TestPermissionDeniedStatus::test_print_client_status_installed PASSED [ 61%]
709:  tests/unit/test_guard.py::TestPreflightWritable::test_passes_when_parent_writable PASSED [ 61%]
710:  tests/unit/test_guard.py::TestPreflightWritable::test_passes_when_parent_does_not_exist PASSED [ 61%]
711:  tests/unit/test_guard.py::TestPreflightWritable::test_raises_when_parent_not_writable PASSED [ 61%]
712:  tests/unit/test_guard.py::TestBashHookScript::test_posts_base64_payload PASSED [ 62%]
713:  tests/unit/test_guard.py::TestBashHookScript::test_cursor_endpoint PASSED [ 62%]
714:  tests/unit/test_guard.py::TestBashHookScript::test_missing_push_key_fails PASSED [ 62%]
715:  tests/unit/test_guard.py::TestBashHookScript::test_missing_url_fails PASSED [ 62%]
716:  tests/unit/test_guard.py::TestPowerShellHookScript::test_posts_base64_payload SKIPPED [ 62%]
717:  tests/unit/test_guard.py::TestPowerShellHookScript::test_cursor_endpoint SKIPPED [ 63%]
718:  tests/unit/test_guard.py::TestPowerShellHookScript::test_missing_push_key_fails SKIPPED [ 63%]
719:  tests/unit/test_guard.py::TestCursorStylePowerShellInvocation::test_cursor_invokes_command_string SKIPPED [ 63%]
720:  tests/unit/test_guard.py::TestCursorStyleBashInvocation::test_cursor_invokes_command_string PASSED [ 63%]
721:  tests/unit/test_guard.py::TestEnsureGuardEnabledForTenant::test_empty_tenant_returns_without_fetch PASSED [ 63%]
722:  tests/unit/test_guard.py::TestEnsureGuardEnabledForTenant::test_missing_token_non_localhost_exits PASSED [ 64%]
723:  tests/unit/test_guard.py::TestEnsureGuardEnabledForTenant::test_whitespace_token_treated_as_missing PASSED [ 64%]
724:  tests/unit/test_guard.py::TestEnsureGuardEnabledForTenant::test_localhost_allows_empty_token PASSED [ 64%]
725:  tests/unit/test_guard.py::TestEnsureGuardEnabledForTenant::test_access_denied_exits PASSED [ 64%]
726:  tests/unit/test_guard.py::TestEnsureGuardEnabledForTenant::test_endpoint_error_exits PASSED [ 64%]
727:  tests/unit/test_guard.py::TestEnsureGuardEnabledForTenant::test_guard_disabled_tenant_exits PASSED [ 65%]
728:  tests/unit/test_guard.py::TestEnsureGuardEnabledForTenant::test_guard_enabled_continues PASSED [ 65%]
729:  tests/unit/test_guard.py::TestRunInstallCallsEnsureGuardEnabled::test_interactive_mint_path_calls_ensure_with_token PASSED [ 65%]
730:  tests/unit/test_guard.py::TestRunInstallCallsEnsureGuardEnabled::test_headless_with_push_key_skips_ensure PASSED [ 65%]
731:  tests/unit/test_guard.py::TestRunInstallCallsEnsureGuardEnabled::test_headless_installs_without_snyk_token PASSED [ 66%]
732:  tests/unit/test_inspect.py::test_get_mcp_config_per_client_sets_username_for_detected_agents PASSED [ 66%]
733:  tests/unit/test_inspect.py::test_get_mcp_config_per_client_no_username_for_absolute_paths PASSED [ 66%]
734:  tests/unit/test_inspect.py::test_detected_usernames_filtering PASSED     [ 66%]
735:  tests/unit/test_inspect.py::test_detected_usernames_falls_back_to_all_when_none_detected PASSED [ 66%]
736:  tests/unit/test_inspect.py::test_inspect_pipeline_reports_only_detected_usernames PASSED [ 67%]
737:  tests/unit/test_inspect.py::test_inspect_pipeline_falls_back_to_all_usernames_when_no_agents_detected PASSED [ 67%]
738:  tests/unit/test_inspect.py::test_inspect_pipeline_detected_usernames_are_sorted PASSED [ 67%]
739:  tests/unit/test_inspect.py::test_inspect_pipeline_single_user_detected_among_many PASSED [ 67%]
740:  tests/unit/test_inspect.py::test_inspect_pipeline_deduplicates_usernames_across_clients PASSED [ 67%]
741:  tests/unit/test_inspect.py::test_inspect_pipeline_no_clients_returns_empty_results PASSED [ 68%]
742:  tests/unit/test_inspect.py::test_inspect_pipeline_missing_explicit_path_returns_file_not_found_error PASSED [ 68%]
743:  tests/unit/test_inspect.py::test_inspect_pipeline_paths_mode_does_not_leak_all_usernames PASSED [ 68%]
...

772:  tests/unit/test_models.py::TestStdioServerRebalance::test_rebalance_with_malformed_command_raises PASSED [ 74%]
773:  tests/unit/test_models.py::TestStdioServerRebalance::test_rebalance_with_empty_command_raises PASSED [ 74%]
774:  tests/unit/test_models.py::TestStdioServerRebalance::test_rebalance_with_whitespace_only_raises PASSED [ 75%]
775:  tests/unit/test_models.py::TestStdioServerArgsCoercion::test_args_omitted PASSED [ 75%]
776:  tests/unit/test_models.py::TestStdioServerArgsCoercion::test_args_explicit_null PASSED [ 75%]
777:  tests/unit/test_models.py::TestStdioServerArgsCoercion::test_args_empty_list_preserved PASSED [ 75%]
778:  tests/unit/test_models.py::TestStdioServerArgsCoercion::test_args_populated_preserved PASSED [ 75%]
779:  tests/unit/test_models.py::TestStdioServerArgsCoercion::test_existing_absolute_path_with_no_args PASSED [ 76%]
780:  tests/unit/test_models.py::TestStdioServerArgsCoercion::test_existing_absolute_path_with_explicit_null_args PASSED [ 76%]
781:  tests/unit/test_models.py::TestStdioServerArgsCoercion::test_existing_absolute_path_with_populated_args_preserved PASSED [ 76%]
782:  tests/unit/test_pushkeys.py::TestFetchGuardEnabled::test_returns_true_when_api_enables PASSED [ 76%]
783:  tests/unit/test_pushkeys.py::TestFetchGuardEnabled::test_returns_false_when_api_disables PASSED [ 76%]
784:  tests/unit/test_pushkeys.py::TestFetchGuardEnabled::test_skips_auth_on_localhost PASSED [ 77%]
785:  tests/unit/test_pushkeys.py::TestFetchGuardEnabled::test_raises_on_bad_json_shape PASSED [ 77%]
786:  tests/unit/test_pushkeys.py::TestFetchGuardEnabled::test_raises_access_denied_on_403 PASSED [ 77%]
787:  tests/unit/test_pushkeys.py::TestFetchGuardEnabled::test_non_403_http_error_message_omits_response_body PASSED [ 77%]
788:  tests/unit/test_pushkeys.py::TestMintPushKeyUrl::test_builds_hidden_tenants_path PASSED [ 77%]
...

843:  tests/unit/test_utils.py::TestRebalanceCommandArgsWithSpacesInPath::test_full_command_is_path_with_spaces_no_args PASSED [ 89%]
844:  tests/unit/test_utils.py::test_calculate_distance PASSED                 [ 89%]
845:  tests/unit/test_utils.py::TestSuppressStdout::test_suppress_stdout_suppresses_print PASSED [ 89%]
846:  tests/unit/test_utils.py::TestSuppressStdout::test_suppress_stdout_restores_stdout_after_context PASSED [ 90%]
847:  tests/unit/test_utils.py::TestSuppressStdout::test_suppress_stdout_works_with_multiple_prints PASSED [ 90%]
848:  tests/unit/test_verify_api.py::TestProxySupport::test_analyze_machine_honors_http_proxy_env PASSED [ 90%]
849:  tests/unit/test_verify_api.py::TestProxySupport::test_analyze_machine_honors_https_proxy_env PASSED [ 90%]
850:  tests/unit/test_verify_api.py::TestProxySupport::test_analyze_machine_works_without_proxy PASSED [ 90%]
851:  tests/unit/test_verify_api.py::TestProxySupport::test_analyze_machine_with_skip_ssl_verify_and_proxy PASSED [ 91%]
852:  tests/unit/test_verify_api.py::TestProxySupport::test_setup_tcp_connector_with_ssl_verify PASSED [ 91%]
853:  tests/unit/test_verify_api.py::TestProxySupport::test_setup_tcp_connector_without_ssl_verify PASSED [ 91%]
854:  tests/unit/test_verify_api.py::TestAnalyzeMachineRetries::test_analyze_machine_retries_on_timeout PASSED [ 91%]
855:  tests/unit/test_verify_api.py::TestAnalyzeMachineHeaders::test_analyze_machine_includes_additional_headers PASSED [ 92%]
856:  tests/unit/test_verify_api.py::TestAnalyzeMachineScanMetadata::test_analyze_machine_includes_scan_metadata_when_scan_context_provided PASSED [ 92%]
857:  tests/unit/test_verify_api.py::TestAnalyzeMachineScanMetadata::test_analyze_machine_omits_scan_metadata_when_scan_context_not_provided PASSED [ 92%]
858:  tests/unit/test_verify_api.py::TestAnalyzeMachineHttpErrors::test_analyze_machine_http_error_responses[400] PASSED [ 92%]
859:  tests/unit/test_verify_api.py::TestAnalyzeMachineHttpErrors::test_analyze_machine_http_error_responses[401] PASSED [ 92%]
860:  tests/unit/test_verify_api.py::TestAnalyzeMachineHttpErrors::test_analyze_machine_http_error_responses[403] PASSED [ 93%]
861:  tests/unit/test_verify_api.py::TestAnalyzeMachineHttpErrors::test_analyze_machine_http_error_responses[413] PASSED [ 93%]
862:  tests/unit/test_verify_api.py::TestAnalyzeMachineHttpErrors::test_analyze_machine_http_error_responses[422] PASSED [ 93%]
863:  tests/unit/test_verify_api.py::TestAnalyzeMachineHttpErrors::test_analyze_machine_http_error_responses[429] PASSED [ 93%]
864:  tests/unit/test_verify_api.py::TestAnalyzeMachineHttpErrors::test_analyze_machine_http_error_responses[500] PASSED [ 93%]
865:  tests/unit/test_verify_api.py::TestAnalyzeMachineHttpErrors::test_analyze_machine_http_error_responses[502] PASSED [ 94%]
866:  tests/unit/test_verify_api.py::TestAnalyzeMachineHttpErrors::test_analyze_machine_http_error_responses[503] PASSED [ 94%]
867:  tests/unit/test_verify_api.py::TestAnalyzeMachineHttpErrors::test_analyze_machine_http_error_responses[504] PASSED [ 94%]
868:  tests/v4compatibility/test_inspect.py::test_get_mcp_config_per_client[client0-valid-True] PASSED [ 94%]
...

879:  tests/v4compatibility/test_inspect.py::test_inspect_skill[mcp-builder-skill_server2] PASSED [ 97%]
880:  tests/v4compatibility/test_inspect.py::test_inspect_skill[theme-factory-skill_server3] PASSED [ 97%]
881:  tests/v4compatibility/test_inspect.py::test_inspect_skill[internal-comms-skill_server4] PASSED [ 97%]
882:  tests/v4compatibility/test_inspect.py::test_inspect_skill[webapp-testing-skill_server5] PASSED [ 97%]
883:  tests/v4compatibility/test_inspect.py::test_inspect_skill[xlsx-skill_server6] PASSED [ 97%]
884:  tests/v4compatibility/test_inspect.py::test_inspect_skill[frontend-design-skill_server7] PASSED [ 98%]
885:  tests/v4compatibility/test_inspect.py::test_inspect_skill[pdf-skill_server8] PASSED [ 98%]
886:  tests/v4compatibility/test_inspect.py::test_inspect_skill[web-artifacts-builder-skill_server9] PASSED [ 98%]
887:  tests/v4compatibility/test_inspect.py::test_inspect_skill[slack-gif-creator-skill_server10] PASSED [ 98%]
888:  tests/v4compatibility/test_inspect.py::test_inspect_skill[skill-creator-skill_server11] PASSED [ 98%]
889:  tests/v4compatibility/test_inspect.py::test_inspect_skill[doc-coauthoring-skill_server12] PASSED [ 99%]
890:  tests/v4compatibility/test_inspect.py::test_inspect_skill[pptx-skill_server13] PASSED [ 99%]
891:  tests/v4compatibility/test_inspect.py::test_inspect_skill[malicious-skill-skill_server14] PASSED [ 99%]
892:  tests/v4compatibility/test_inspect.py::test_inspect_skill[algorithmic-art-skill_server15] PASSED [ 99%]
893:  tests/v4compatibility/test_inspect.py::test_inspect_skill[canvas-design-skill_server16] PASSED [100%]
894:  =================================== FAILURES ===================================
895:  ____ TestInspect.test_inspect_skills_without_flag[skills_parent_dir-binary] ____
...

901:  "skill_path",
902:  [
903:  "tests/mcp_servers/.test-client/skills",
904:  "tests/mcp_servers/.test-client/skills/test-skill",
905:  "tests/mcp_servers/.test-client/skills/test-skill/SKILL.md",
906:  ],
907:  ids=["skills_parent_dir", "skill_folder", "skill_md_file"],
908:  )
909:  def test_inspect_skills_without_flag(self, agent_scan_cmd, skill_path):
910:  """Test that scanning skill paths does NOT produce skill results without --skills flag."""
911:  result = subprocess.run(
912:  [*agent_scan_cmd, "inspect", "--json", "--dangerously-run-mcp-servers", skill_path],
913:  capture_output=True,
914:  text=True,
915:  )
916:  assert result.returncode == 0, f"Command failed with error: {result.stderr}"
917:  output = json.loads(result.stdout)
918:  all_servers = [server for entry in output.values() for server in entry["servers"]]
919:  skill_servers = [s for s in all_servers if s["server"]["type"] == "skill"]
920:  >       assert len(skill_servers) == 0, (
921:  f"Expected no skill servers without --skills flag, got: {[s['name'] for s in skill_servers]}"
922:  )
923:  E       AssertionError: Expected no skill servers without --skills flag, got: ['test-skill']
924:  E       assert 1 == 0
925:  E        +  where 1 = len([{'name': 'test-skill', 'server': {'path': 'tests/mcp_servers/.test-client/skills/test-skill', 'type': 'skill'}, 'signature': {'metadata': {'meta': None, 'protocolVersion': 'built-in', 'capabilities': {'experimental': None, 'logging': None, 'prompts': None, 'resources': None, 'tools': {'listChanged': False}, 'completions': None, 'tasks': None}, 'serverInfo': {'name': 'test-skill', 'title': None, 'version': 'skills', 'websiteUrl': None, 'icons': None}, 'instructions': 'This is a test that everythings is correct', 'prompts': {'listChanged': False}, 'resources': {'subscribe': None, 'listChanged': False}}, 'prompts': [{'name': 'SKILL.md', 'title': None, 'description': '\n\n# Test skill\n\nThis skill is just to double check that everything works fine\n', 'arguments': [], 'icons': None, 'meta': None}], 'resources': [], 'resource_templates': [], 'tools': []}, 'error': None}])
926:  tests/e2e/test_inspect.py:174: AssertionError
927:  ______ TestInspect.test_inspect_skills_without_flag[skill_folder-binary] _______
...

933:  "skill_path",
934:  [
935:  "tests/mcp_servers/.test-client/skills",
936:  "tests/mcp_servers/.test-client/skills/test-skill",
937:  "tests/mcp_servers/.test-client/skills/test-skill/SKILL.md",
938:  ],
939:  ids=["skills_parent_dir", "skill_folder", "skill_md_file"],
940:  )
941:  def test_inspect_skills_without_flag(self, agent_scan_cmd, skill_path):
942:  """Test that scanning skill paths does NOT produce skill results without --skills flag."""
943:  result = subprocess.run(
944:  [*agent_scan_cmd, "inspect", "--json", "--dangerously-run-mcp-servers", skill_path],
945:  capture_output=True,
946:  text=True,
947:  )
948:  assert result.returncode == 0, f"Command failed with error: {result.stderr}"
949:  output = json.loads(result.stdout)
950:  all_servers = [server for entry in output.values() for server in entry["servers"]]
951:  skill_servers = [s for s in all_servers if s["server"]["type"] == "skill"]
952:  >       assert len(skill_servers) == 0, (
953:  f"Expected no skill servers without --skills flag, got: {[s['name'] for s in skill_servers]}"
954:  )
955:  E       AssertionError: Expected no skill servers without --skills flag, got: ['test-skill']
956:  E       assert 1 == 0
957:  E        +  where 1 = len([{'name': 'test-skill', 'server': {'path': 'tests/mcp_servers/.test-client/skills/test-skill', 'type': 'skill'}, 'signature': {'metadata': {'meta': None, 'protocolVersion': 'built-in', 'capabilities': {'experimental': None, 'logging': None, 'prompts': None, 'resources': None, 'tools': {'listChanged': False}, 'completions': None, 'tasks': None}, 'serverInfo': {'name': 'test-skill', 'title': None, 'version': 'skills', 'websiteUrl': None, 'icons': None}, 'instructions': 'This is a test that everythings is correct', 'prompts': {'listChanged': False}, 'resources': {'subscribe': None, 'listChanged': False}}, 'prompts': [{'name': 'SKILL.md', 'title': None, 'description': '\n\n# Test skill\n\nThis skill is just to double check that everything works fine\n', 'arguments': [], 'icons': None, 'meta': None}], 'resources': [], 'resource_templates': [], 'tools': []}, 'error': None}])
958:  tests/e2e/test_inspect.py:174: AssertionError
959:  ______ TestInspect.test_inspect_skills_without_flag[skill_md_file-binary] ______
...

965:  "skill_path",
966:  [
967:  "tests/mcp_servers/.test-client/skills",
968:  "tests/mcp_servers/.test-client/skills/test-skill",
969:  "tests/mcp_servers/.test-client/skills/test-skill/SKILL.md",
970:  ],
971:  ids=["skills_parent_dir", "skill_folder", "skill_md_file"],
972:  )
973:  def test_inspect_skills_without_flag(self, agent_scan_cmd, skill_path):
974:  """Test that scanning skill paths does NOT produce skill results without --skills flag."""
975:  result = subprocess.run(
976:  [*agent_scan_cmd, "inspect", "--json", "--dangerously-run-mcp-servers", skill_path],
977:  capture_output=True,
978:  text=True,
979:  )
980:  assert result.returncode == 0, f"Command failed with error: {result.stderr}"
981:  output = json.loads(result.stdout)
982:  all_servers = [server for entry in output.values() for server in entry["servers"]]
983:  skill_servers = [s for s in all_servers if s["server"]["type"] == "skill"]
984:  >       assert len(skill_servers) == 0, (
985:  f"Expected no skill servers without --skills flag, got: {[s['name'] for s in skill_servers]}"
986:  )
987:  E       AssertionError: Expected no skill servers without --skills flag, got: ['test-skill']
988:  E       assert 1 == 0
989:  E        +  where 1 = len([{'name': 'test-skill', 'server': {'path': 'tests/mcp_servers/.test-client/skills/test-skill', 'type': 'skill'}, 'signature': {'metadata': {'meta': None, 'protocolVersion': 'built-in', 'capabilities': {'experimental': None, 'logging': None, 'prompts': None, 'resources': None, 'tools': {'listChanged': False}, 'completions': None, 'tasks': None}, 'serverInfo': {'name': 'test-skill', 'title': None, 'version': 'skills', 'websiteUrl': None, 'icons': None}, 'instructions': 'This is a test that everythings is correct', 'prompts': {'listChanged': False}, 'resources': {'subscribe': None, 'listChanged': False}}, 'prompts': [{'name': 'SKILL.md', 'title': None, 'description': '\n\n# Test skill\n\nThis skill is just to double check that everything works fine\n', 'arguments': [], 'icons': None, 'meta': None}], 'resources': [], 'resource_templates': [], 'tools': []}, 'error': None}])
990:  tests/e2e/test_inspect.py:174: AssertionError
991:  ___ TestFullScanFlow.test_scan_skills_without_flag[skills_parent_dir-binary] ___
...

997:  "skill_path",
998:  [
999:  "tests/mcp_servers/.test-client/skills",
1000:  "tests/mcp_servers/.test-client/skills/test-skill",
1001:  "tests/mcp_servers/.test-client/skills/test-skill/SKILL.md",
1002:  ],
1003:  ids=["skills_parent_dir", "skill_folder", "skill_md_file"],
1004:  )
1005:  def test_scan_skills_without_flag(self, agent_scan_cmd, skill_path):
1006:  """Test that scanning skill paths does NOT produce skill results without --skills flag."""
1007:  result = subprocess.run(
1008:  [*agent_scan_cmd, "scan", "--json", "--dangerously-run-mcp-servers", skill_path],
1009:  capture_output=True,
1010:  text=True,
1011:  )
1012:  assert result.returncode == 0, f"Command failed with error: {result.stderr}"
1013:  output = json.loads(result.stdout)
1014:  all_servers = [server for entry in output.values() for server in entry["servers"]]
1015:  skill_servers = [s for s in all_servers if s["server"]["type"] == "skill"]
1016:  >       assert len(skill_servers) == 0, (
1017:  f"Expected no skill servers without --skills flag, got: {[s['name'] for s in skill_servers]}"
1018:  )
1019:  E       AssertionError: Expected no skill servers without --skills flag, got: ['test-skill']
1020:  E       assert 1 == 0
1021:  E        +  where 1 = len([{'name': 'test-skill', 'server': {'path': 'tests/mcp_servers/.test-client/skills/test-skill', 'type': 'skill'}, 'signature': {'metadata': {'meta': None, 'protocolVersion': 'built-in', 'capabilities': {'experimental': None, 'logging': None, 'prompts': None, 'resources': None, 'tools': {'listChanged': False}, 'completions': None, 'tasks': None}, 'serverInfo': {'name': 'test-skill', 'title': None, 'version': 'skills', 'websiteUrl': None, 'icons': None}, 'instructions': 'This is a test that everythings is correct', 'prompts': {'listChanged': False}, 'resources': {'subscribe': None, 'listChanged': False}}, 'prompts': [{'name': 'SKILL.md', 'title': None, 'description': '\n\n# Test skill\n\nThis skill is just to double check that everything works fine\n', 'arguments': [], 'icons': None, 'meta': None}], 'resources': [], 'resource_templates': [], 'tools': []}, 'error': None}])
1022:  tests/e2e/test_scan.py:168: AssertionError
1023:  _____ TestFullScanFlow.test_scan_skills_without_flag[skill_folder-binary] ______
...

1029:  "skill_path",
1030:  [
1031:  "tests/mcp_servers/.test-client/skills",
1032:  "tests/mcp_servers/.test-client/skills/test-skill",
1033:  "tests/mcp_servers/.test-client/skills/test-skill/SKILL.md",
1034:  ],
1035:  ids=["skills_parent_dir", "skill_folder", "skill_md_file"],
1036:  )
1037:  def test_scan_skills_without_flag(self, agent_scan_cmd, skill_path):
1038:  """Test that scanning skill paths does NOT produce skill results without --skills flag."""
1039:  result = subprocess.run(
1040:  [*agent_scan_cmd, "scan", "--json", "--dangerously-run-mcp-servers", skill_path],
1041:  capture_output=True,
1042:  text=True,
1043:  )
1044:  assert result.returncode == 0, f"Command failed with error: {result.stderr}"
1045:  output = json.loads(result.stdout)
1046:  all_servers = [server for entry in output.values() for server in entry["servers"]]
1047:  skill_servers = [s for s in all_servers if s["server"]["type"] == "skill"]
1048:  >       assert len(skill_servers) == 0, (
1049:  f"Expected no skill servers without --skills flag, got: {[s['name'] for s in skill_servers]}"
1050:  )
1051:  E       AssertionError: Expected no skill servers without --skills flag, got: ['test-skill']
1052:  E       assert 1 == 0
1053:  E        +  where 1 = len([{'name': 'test-skill', 'server': {'path': 'tests/mcp_servers/.test-client/skills/test-skill', 'type': 'skill'}, 'signature': {'metadata': {'meta': None, 'protocolVersion': 'built-in', 'capabilities': {'experimental': None, 'logging': None, 'prompts': None, 'resources': None, 'tools': {'listChanged': False}, 'completions': None, 'tasks': None}, 'serverInfo': {'name': 'test-skill', 'title': None, 'version': 'skills', 'websiteUrl': None, 'icons': None}, 'instructions': 'This is a test that everythings is correct', 'prompts': {'listChanged': False}, 'resources': {'subscribe': None, 'listChanged': False}}, 'prompts': [{'name': 'SKILL.md', 'title': None, 'description': '\n\n# Test skill\n\nThis skill is just to double check that everything works fine\n', 'arguments': [], 'icons': None, 'meta': None}], 'resources': [], 'resource_templates': [], 'tools': []}, 'error': None}])
1054:  tests/e2e/test_scan.py:168: AssertionError
1055:  _____ TestFullScanFlow.test_scan_skills_without_flag[skill_md_file-binary] _____
...

1061:  "skill_path",
1062:  [
1063:  "tests/mcp_servers/.test-client/skills",
1064:  "tests/mcp_servers/.test-client/skills/test-skill",
1065:  "tests/mcp_servers/.test-client/skills/test-skill/SKILL.md",
1066:  ],
1067:  ids=["skills_parent_dir", "skill_folder", "skill_md_file"],
1068:  )
1069:  def test_scan_skills_without_flag(self, agent_scan_cmd, skill_path):
1070:  """Test that scanning skill paths does NOT produce skill results without --skills flag."""
1071:  result = subprocess.run(
1072:  [*agent_scan_cmd, "scan", "--json", "--dangerously-run-mcp-servers", skill_path],
1073:  capture_output=True,
1074:  text=True,
1075:  )
1076:  assert result.returncode == 0, f"Command failed with error: {result.stderr}"
1077:  output = json.loads(result.stdout)
1078:  all_servers = [server for entry in output.values() for server in entry["servers"]]
1079:  skill_servers = [s for s in all_servers if s["server"]["type"] == "skill"]
1080:  >       assert len(skill_servers) == 0, (
1081:  f"Expected no skill servers without --skills flag, got: {[s['name'] for s in skill_servers]}"
1082:  )
1083:  E       AssertionError: Expected no skill servers without --skills flag, got: ['test-skill']
1084:  E       assert 1 == 0
1085:  E        +  where 1 = len([{'name': 'test-skill', 'server': {'path': 'tests/mcp_servers/.test-client/skills/test-skill', 'type': 'skill'}, 'signature': {'metadata': {'meta': None, 'protocolVersion': 'built-in', 'capabilities': {'experimental': None, 'logging': None, 'prompts': None, 'resources': None, 'tools': {'listChanged': False}, 'completions': None, 'tasks': None}, 'serverInfo': {'name': 'test-skill', 'title': None, 'version': 'skills', 'websiteUrl': None, 'icons': None}, 'instructions': 'This is a test that everythings is correct', 'prompts': {'listChanged': False}, 'resources': {'subscribe': None, 'listChanged': False}}, 'prompts': [{'name': 'SKILL.md', 'title': None, 'description': '\n\n# Test skill\n\nThis skill is just to double check that everything works fine\n', 'arguments': [], 'icons': None, 'meta': None}], 'resources': [], 'resource_templates': [], 'tools': []}, 'error': None}])
1086:  tests/e2e/test_scan.py:168: AssertionError
1087:  =============================== warnings summary ===============================
...

1109:  src/agent_scan/pushkeys.py                79     40    49%   23-26, 52-73, 99-101, 119-135
1110:  src/agent_scan/redact.py                  74      4    95%   166-167, 171, 175
1111:  src/agent_scan/run.py                     11     11     0%   1-16
1112:  src/agent_scan/signed_binary.py           39     23    41%   40-72
1113:  src/agent_scan/skill_client.py            83     10    88%   27, 34, 43, 51, 124-128, 151
1114:  src/agent_scan/traffic_capture.py        137     26    81%   64-69, 80-87, 104, 106, 113, 148, 152, 167, 172, 179, 187, 229, 233, 250-253, 264, 268
1115:  src/agent_scan/upload.py                  75      4    95%   25-26, 32-33
1116:  src/agent_scan/utils.py                   89     17    81%   26-29, 44-45, 100-137, 151
1117:  src/agent_scan/verify_api.py             158     49    69%   30-37, 41-44, 59, 62, 65, 68, 71, 74, 77, 80, 83, 86-91, 94, 194, 203-218, 247-250, 290-296, 304-318
1118:  src/agent_scan/version.py                  5      2    60%   5-6
1119:  src/agent_scan/well_known_clients.py     126     95    25%   280-308, 325, 340-395, 403-431, 443-476
1120:  --------------------------------------------------------------------
1121:  TOTAL                                   2689    992    63%
1122:  Coverage HTML written to dir htmlcov
1123:  =========================== short test summary info ============================
1124:  FAILED tests/e2e/test_inspect.py::TestInspect::test_inspect_skills_without_flag[skills_parent_dir-binary] - AssertionError: Expected no skill servers without --skills flag, got: ['test-skill']
1125:  assert 1 == 0
1126:  +  where 1 = len([{'name': 'test-skill', 'server': {'path': 'tests/mcp_servers/.test-client/skills/test-skill', 'type': 'skill'}, 'signature': {'metadata': {'meta': None, 'protocolVersion': 'built-in', 'capabilities': {'experimental': None, 'logging': None, 'prompts': None, 'resources': None, 'tools': {'listChanged': False}, 'completions': None, 'tasks': None}, 'serverInfo': {'name': 'test-skill', 'title': None, 'version': 'skills', 'websiteUrl': None, 'icons': None}, 'instructions': 'This is a test that everythings is correct', 'prompts': {'listChanged': False}, 'resources': {'subscribe': None, 'listChanged': False}}, 'prompts': [{'name': 'SKILL.md', 'title': None, 'description': '\n\n# Test skill\n\nThis skill is just to double check that everything works fine\n', 'arguments': [], 'icons': None, 'meta': None}], 'resources': [], 'resource_templates': [], 'tools': []}, 'error': None}])
1127:  FAILED tests/e2e/test_inspect.py::TestInspect::test_inspect_skills_without_flag[skill_folder-binary] - AssertionError: Expected no skill servers without --skills flag, got: ['test-skill']
1128:  assert 1 == 0
1129:  +  where 1 = len([{'name': 'test-skill', 'server': {'path': 'tests/mcp_servers/.test-client/skills/test-skill', 'type': 'skill'}, 'signature': {'metadata': {'meta': None, 'protocolVersion': 'built-in', 'capabilities': {'experimental': None, 'logging': None, 'prompts': None, 'resources': None, 'tools': {'listChanged': False}, 'completions': None, 'tasks': None}, 'serverInfo': {'name': 'test-skill', 'title': None, 'version': 'skills', 'websiteUrl': None, 'icons': None}, 'instructions': 'This is a test that everythings is correct', 'prompts': {'listChanged': False}, 'resources': {'subscribe': None, 'listChanged': False}}, 'prompts': [{'name': 'SKILL.md', 'title': None, 'description': '\n\n# Test skill\n\nThis skill is just to double check that everything works fine\n', 'arguments': [], 'icons': None, 'meta': None}], 'resources': [], 'resource_templates': [], 'tools': []}, 'error': None}])
1130:  FAILED tests/e2e/test_inspect.py::TestInspect::test_inspect_skills_without_flag[skill_md_file-binary] - AssertionError: Expected no skill servers without --skills flag, got: ['test-skill']
1131:  assert 1 == 0
1132:  +  where 1 = len([{'name': 'test-skill', 'server': {'path': 'tests/mcp_servers/.test-client/skills/test-skill', 'type': 'skill'}, 'signature': {'metadata': {'meta': None, 'protocolVersion': 'built-in', 'capabilities': {'experimental': None, 'logging': None, 'prompts': None, 'resources': None, 'tools': {'listChanged': False}, 'completions': None, 'tasks': None}, 'serverInfo': {'name': 'test-skill', 'title': None, 'version': 'skills', 'websiteUrl': None, 'icons': None}, 'instructions': 'This is a test that everythings is correct', 'prompts': {'listChanged': False}, 'resources': {'subscribe': None, 'listChanged': False}}, 'prompts': [{'name': 'SKILL.md', 'title': None, 'description': '\n\n# Test skill\n\nThis skill is just to double check that everything works fine\n', 'arguments': [], 'icons': None, 'meta': None}], 'resources': [], 'resource_templates': [], 'tools': []}, 'error': None}])
1133:  FAILED tests/e2e/test_scan.py::TestFullScanFlow::test_scan_skills_without_flag[skills_parent_dir-binary] - AssertionError: Expected no skill servers without --skills flag, got: ['test-skill']
1134:  assert 1 == 0
1135:  +  where 1 = len([{'name': 'test-skill', 'server': {'path': 'tests/mcp_servers/.test-client/skills/test-skill', 'type': 'skill'}, 'signature': {'metadata': {'meta': None, 'protocolVersion': 'built-in', 'capabilities': {'experimental': None, 'logging': None, 'prompts': None, 'resources': None, 'tools': {'listChanged': False}, 'completions': None, 'tasks': None}, 'serverInfo': {'name': 'test-skill', 'title': None, 'version': 'skills', 'websiteUrl': None, 'icons': None}, 'instructions': 'This is a test that everythings is correct', 'prompts': {'listChanged': False}, 'resources': {'subscribe': None, 'listChanged': False}}, 'prompts': [{'name': 'SKILL.md', 'title': None, 'description': '\n\n# Test skill\n\nThis skill is just to double check that everything works fine\n', 'arguments': [], 'icons': None, 'meta': None}], 'resources': [], 'resource_templates': [], 'tools': []}, 'error': None}])
1136:  FAILED tests/e2e/test_scan.py::TestFullScanFlow::test_scan_skills_without_flag[skill_folder-binary] - AssertionError: Expected no skill servers without --skills flag, got: ['test-skill']
1137:  assert 1 == 0
1138:  +  where 1 = len([{'name': 'test-skill', 'server': {'path': 'tests/mcp_servers/.test-client/skills/test-skill', 'type': 'skill'}, 'signature': {'metadata': {'meta': None, 'protocolVersion': 'built-in', 'capabilities': {'experimental': None, 'logging': None, 'prompts': None, 'resources': None, 'tools': {'listChanged': False}, 'completions': None, 'tasks': None}, 'serverInfo': {'name': 'test-skill', 'title': None, 'version': 'skills', 'websiteUrl': None, 'icons': None}, 'instructions': 'This is a test that everythings is correct', 'prompts': {'listChanged': False}, 'resources': {'subscribe': None, 'listChanged': False}}, 'prompts': [{'name': 'SKILL.md', 'title': None, 'description': '\n\n# Test skill\n\nThis skill is just to double check that everything works fine\n', 'arguments': [], 'icons': None, 'meta': None}], 'resources': [], 'resource_templates': [], 'tools': []}, 'error': None}])
1139:  FAILED tests/e2e/test_scan.py::TestFullScanFlow::test_scan_skills_without_flag[skill_md_file-binary] - AssertionError: Expected no skill servers without --skills flag, got: ['test-skill']
1140:  assert 1 == 0
1141:  +  where 1 = len([{'name': 'test-skill', 'server': {'path': 'tests/mcp_servers/.test-client/skills/test-skill', 'type': 'skill'}, 'signature': {'metadata': {'meta': None, 'protocolVersion': 'built-in', 'capabilities': {'experimental': None, 'logging': None, 'prompts': None, 'resources': None, 'tools': {'listChanged': False}, 'completions': None, 'tasks': None}, 'serverInfo': {'name': 'test-skill', 'title': None, 'version': 'skills', 'websiteUrl': None, 'icons': None}, 'instructions': 'This is a test that everythings is correct', 'prompts': {'listChanged': False}, 'resources': {'subscribe': None, 'listChanged': False}}, 'prompts': [{'name': 'SKILL.md', 'title': None, 'description': '\n\n# Test skill\n\nThis skill is just to double check that everything works fine\n', 'arguments': [], 'icons': None, 'meta': None}], 'resources': [], 'resource_templates': [], 'tools': []}, 'error': None}])
1142:  = 6 failed, 461 passed, 10 skipped, 40 deselected, 50 warnings in 76.34s (0:01:16) =
1143:  make: *** [Makefile:33: ci] Error 1
1144:  ##[error]Process completed with exit code 2.
1145:  Post job cleanup.

Comment thread src/agent_scan/cli.py
Comment on lines 208 to +211
```diff
     "--skills",
-    default=False,
-    action="store_true",
-    help="Scan skills beyond mcp servers.",
+    default=True,
+    action=argparse.BooleanOptionalAction,
+    help="Scan skills beyond mcp servers (default: enabled). Use --no-skills to disable.",
```
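A minimal, self-contained sketch of what this change does: with `argparse.BooleanOptionalAction` (Python 3.9+), a single `add_argument` call registers both `--skills` and `--no-skills`, and the last flag on the command line wins. The parser below is a stripped-down stand-in for the real CLI, not the project's actual parser.

```python
import argparse

# Stand-in parser mirroring the new --skills definition.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--skills",
    default=True,
    action=argparse.BooleanOptionalAction,
    help="Scan skills beyond mcp servers (default: enabled). Use --no-skills to disable.",
)

print(parser.parse_args([]).skills)                        # True (new default)
print(parser.parse_args(["--skills"]).skills)              # True (explicit opt-in still works)
print(parser.parse_args(["--no-skills"]).skills)           # False (new opt-out)
print(parser.parse_args(["--no-skills", "--skills"]).skills)  # True (last flag wins)
```

This is why the e2e tests below now fail: omitting the flag no longer means "skills off".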

Action required

1. E2e tests now fail 🐞 Bug ≡ Correctness

--skills now defaults to True, but e2e tests still assert that skill scanning does not happen unless
--skills is explicitly provided, so they will fail under the new default behavior.
Agent Prompt
### Issue description
The CLI now scans skills by default, but e2e tests still assume the old behavior (skills scanned only when `--skills` is passed). These tests will fail because they assert there are *no* skill servers when the flag is omitted.

### Issue Context
`--skills` switched from `store_true` default-off to `BooleanOptionalAction` default-on; the opt-out is now `--no-skills`.

### Fix Focus Areas
- tests/e2e/test_scan.py[147-170]
- tests/e2e/test_inspect.py[153-176]

### What to change
- For tests that verify "no skill servers", add `--no-skills` to the CLI invocation.
- Optionally add new assertions that default behavior *does* include skills, and keep `--no-skills` as the explicit opt-out path.
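The assertion logic these tests share can be sketched against canned payloads whose shape mirrors the tracebacks above; `skill_servers` is a hypothetical helper, and the JSON fragments are illustrative, not captured output.

```python
import json

def skill_servers(scan_output: dict) -> list:
    """Return all servers of type 'skill' from the scanner's JSON output."""
    return [
        s
        for entry in scan_output.values()
        for s in entry["servers"]
        if s["server"]["type"] == "skill"
    ]

# Default run: skills are now scanned, so one skill server appears.
default_run = json.loads(
    '{"entry": {"servers": [{"name": "test-skill",'
    ' "server": {"path": "tests/mcp_servers/.test-client/skills/test-skill",'
    ' "type": "skill"}}]}}'
)
# Run with --no-skills: the same scan is expected to yield none.
no_skills_run = json.loads('{"entry": {"servers": []}}')

assert [s["name"] for s in skill_servers(default_run)] == ["test-skill"]
assert skill_servers(no_skills_run) == []
```

Under this reading, the existing "without flag" tests should pass `--no-skills` and keep their empty-list assertion, while a new default-behavior test asserts the non-empty case.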

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
