Marketplace Documentation

Skill Audit Methodology

Every skill submitted to the OpenSyber marketplace passes through an automated 4-stage security pipeline before it can be installed by any user.

Pipeline Overview

Our scanner runs automatically when a skill is submitted. Skills must pass all stages with a score of 70 or above and have zero critical or high findings to be approved.

1. Manifest Validation
2. Network Permission Audit
3. Source Code Scan
4. Sandbox Testing

Stage 1: Manifest Validation

Every skill must include a valid manifest with required metadata. The scanner checks:

  • Required fields — name, slug, version, entrypoint, author
  • Slug format — lowercase alphanumeric with hyphens (e.g. my-skill-name)
  • Version format — strict semver (MAJOR.MINOR.PATCH)
  • Entrypoint — must reference an existing file in the package
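The checks above can be sketched in TypeScript roughly as follows. This is an illustrative sketch, not the scanner's actual API: the `SkillManifest` interface and `validateManifest` function are hypothetical names, and the regexes mirror the rules listed above.

```typescript
// Hypothetical sketch of the Stage 1 checks; field names follow the
// required-fields list above, everything else is illustrative.
interface SkillManifest {
  name: string;
  slug: string;
  version: string;
  entrypoint: string;
  author: string;
}

const SLUG_RE = /^[a-z0-9]+(-[a-z0-9]+)*$/; // lowercase alphanumeric with hyphens
const SEMVER_RE = /^\d+\.\d+\.\d+$/;        // strict MAJOR.MINOR.PATCH

function validateManifest(
  m: Partial<SkillManifest>,
  packageFiles: Set<string>
): string[] {
  const errors: string[] = [];
  const required = ["name", "slug", "version", "entrypoint", "author"] as const;
  for (const field of required) {
    if (!m[field]) errors.push(`missing required field: ${field}`);
  }
  if (m.slug && !SLUG_RE.test(m.slug)) {
    errors.push("slug must be lowercase alphanumeric with hyphens");
  }
  if (m.version && !SEMVER_RE.test(m.version)) {
    errors.push("version must be strict semver (MAJOR.MINOR.PATCH)");
  }
  if (m.entrypoint && !packageFiles.has(m.entrypoint)) {
    errors.push("entrypoint does not reference an existing file in the package");
  }
  return errors;
}
```

A manifest like `{ slug: "My Skill", version: "1.0" }` would fail on both the slug format and the semver check, while `my-skill-name` with `1.0.0` passes.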

Stage 2: Network Permission Audit

Skills that request network access are subject to domain-level scrutiny:

  • Domain limit — maximum 10 network domains per skill
  • Wildcard detection — wildcard domains (e.g. *.example.com) are flagged as high severity
  • Known exfiltration domains — checked against our threat intelligence feed
  • Excessive scope — requesting more domains than the skill's functionality warrants
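As a rough sketch, the domain-level rules above might look like this in TypeScript. The `Finding` shape and severity assignments here are assumptions for illustration; the real pipeline checks a threat intelligence feed rather than a hard-coded list.

```typescript
// Illustrative Stage 2 checks; the 10-domain limit and the high-severity
// wildcard rule come from the documented policy above.
type Severity = "critical" | "high" | "medium" | "low";
interface Finding { severity: Severity; message: string; }

const MAX_DOMAINS = 10;
// Placeholder entry; the production scanner queries a threat intel feed.
const KNOWN_EXFIL_DOMAINS = new Set(["exfil.example.net"]);

function auditNetworkPermissions(domains: string[]): Finding[] {
  const findings: Finding[] = [];
  if (domains.length > MAX_DOMAINS) {
    findings.push({
      severity: "medium",
      message: `requests ${domains.length} domains (max ${MAX_DOMAINS})`,
    });
  }
  for (const d of domains) {
    if (d.startsWith("*.")) {
      findings.push({ severity: "high", message: `wildcard domain: ${d}` });
    }
    if (KNOWN_EXFIL_DOMAINS.has(d)) {
      findings.push({ severity: "critical", message: `known exfiltration domain: ${d}` });
    }
  }
  return findings;
}
```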

Stage 3: Source Code Scan

Static analysis powered by our supply-chain security engine scans for:

  • Environment scanning — patterns that enumerate or exfiltrate environment variables
  • Credential access — attempts to read SSH keys, tokens, or auth files
  • Shell injection — unsafe use of exec, spawn, or template literals in commands
  • Dependency risks — postinstall scripts, known malicious packages
  • Package size — maximum 5MB per skill package
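A pattern-based toy version of these checks is sketched below. Real static analysis is considerably deeper than regex matching; these three rules are illustrative stand-ins, not the production rule set.

```typescript
// Minimal pattern-matching sketch of Stage 3; each rule is an
// illustrative example of the finding category named above.
interface CodeFinding { severity: "critical" | "high"; rule: string; }

const RULES: { rule: string; severity: "critical" | "high"; pattern: RegExp }[] = [
  // Enumerating all environment variables is a common exfiltration precursor.
  { rule: "environment scanning", severity: "high",
    pattern: /Object\.(keys|entries)\(\s*process\.env\s*\)/ },
  // Direct reads of SSH keys or cloud credential files.
  { rule: "credential access", severity: "critical",
    pattern: /\.ssh\/id_(rsa|ed25519)|\.aws\/credentials/ },
  // Template literals interpolated into exec() invite shell injection.
  { rule: "shell injection", severity: "high",
    pattern: /exec(Sync)?\(\s*`[^`]*\$\{/ },
];

function scanSource(source: string): CodeFinding[] {
  return RULES
    .filter(r => r.pattern.test(source))
    .map(r => ({ severity: r.severity, rule: r.rule }));
}
```

Because the shell-injection and environment-scanning rules are high severity and credential access is critical, any single match here is enough to block approval under the criteria below.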

Stage 4: Sandbox Testing

Skills that pass stages 1–3 are executed in an isolated sandbox environment:

  • Isolated container — runs in a seccomp-profiled container with no network access to production
  • Permission enforcement — only declared permissions are granted
  • Behavior monitoring — filesystem access, network calls, and process spawning are logged
  • Resource limits — CPU, memory, and execution time are capped
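To give a flavor of the execution-time cap, here is a toy illustration in Node.js: run an entrypoint in a child process and kill it once a wall-clock limit elapses. The real sandbox is a seccomp-profiled container with full behavior monitoring; this sketch shows only the timeout idea, and `runWithTimeout` is a hypothetical helper.

```typescript
// Toy illustration of an execution-time cap, NOT the production sandbox:
// spawn the skill's entrypoint and SIGKILL it past the wall-clock limit.
import { spawn } from "node:child_process";

function runWithTimeout(entrypoint: string, ms: number): Promise<"ok" | "timeout"> {
  return new Promise(resolve => {
    const child = spawn("node", [entrypoint], { stdio: "ignore" });
    const timer = setTimeout(() => {
      child.kill("SIGKILL");     // hard cap exceeded
      resolve("timeout");
    }, ms);
    child.on("exit", () => {
      clearTimeout(timer);       // finished within budget
      resolve("ok");
    });
  });
}
```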

Scoring System

Each skill starts at 100 points. Findings deduct points based on severity. Skills must score 70+ with no critical or high findings to be approved.

Severity   Deduction   Example
Critical   -30         Root filesystem access, known malicious patterns
High       -15         Missing manifest fields, wildcard network domains
Medium     -5          Excessive permissions, oversized packages
Low        -2          Missing author metadata, minor style issues

Approval Criteria

  • Score of 70 or above after all deductions
  • Zero critical findings
  • Zero high findings
  • All declared permissions must have a legitimate use case
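The numeric part of the scoring and approval logic above can be sketched as follows (the deductions match the severity table; the `Finding` shape is an assumption, and note the final criterion, legitimate use for each permission, is a human judgment that no formula captures):

```typescript
// Sketch of the documented scoring rule: start at 100, deduct per
// finding, approve only at 70+ with zero critical or high findings.
type Severity = "critical" | "high" | "medium" | "low";

const DEDUCTIONS: Record<Severity, number> = {
  critical: 30,
  high: 15,
  medium: 5,
  low: 2,
};

function evaluate(findings: { severity: Severity }[]): { score: number; approved: boolean } {
  const score = findings.reduce((s, f) => s - DEDUCTIONS[f.severity], 100);
  const blocked = findings.some(f => f.severity === "critical" || f.severity === "high");
  return { score, approved: score >= 70 && !blocked };
}
```

For example, one medium and one low finding yield a score of 93 and approval, while a single high finding scores 85 yet is still rejected because any high finding blocks approval regardless of score.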

Questions?

If you have questions about the audit process or need help fixing findings, contact us at marketplace@opensyber.cloud.