Findings are security issues discovered by Corridor in your code. They are created from PR reviews, guardrail violations, and code scans. Each finding represents a specific security issue at a specific location in your code.

Finding properties

  • Title: Brief description of the vulnerability (e.g., “SQL Injection in userController.js”)
  • Description: Detailed explanation with security impact
  • Severity: Critical, High, Medium, Low, or Informational
  • State: Potential, Open, Closed, or Won’t Fix
  • Location: File path, line number, and code snippet
  • Remediation: Specific guidance on how to fix the issue

Finding states

  • Potential: Needs verification; may be a false positive
  • Open: Confirmed issue requiring remediation
  • Closed: Issue has been resolved
  • Won’t Fix: Accepted risk with documented justification

Managing findings

When a new finding arrives, review the description and code context, verify it’s a real issue, and either move it to Open or mark as Won’t Fix with justification. To fix a finding:
  • Manual fix: Navigate to the file/line mentioned and fix the code based on Corridor’s recommendation
  • AI-assisted fix: Use your AI assistant to help fix it—copy Corridor’s finding description and ask your AI to fix the issue
  • Auto fix (if available): On certain findings, Corridor can open a new branch/PR with the suggested code changes for you to review and merge
After a fix is applied, Corridor will rescan and mark the finding as Closed if the issue is gone.
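
The triage flow above can be sketched as a small state machine. This is an illustrative model only: the state names come from this page, but the transition rules are our reading of the workflow, not Corridor's implementation.

```python
# Illustrative sketch of the finding lifecycle (not Corridor's actual code).
ALLOWED_TRANSITIONS = {
    "Potential": {"Open", "Won't Fix"},  # verify, then confirm or dismiss
    "Open": {"Closed", "Won't Fix"},     # fix (rescan closes) or accept risk
    "Closed": set(),                     # resolved; terminal
    "Won't Fix": set(),                  # accepted risk; terminal
}

def transition(state: str, new_state: str) -> str:
    """Return the new state if the move is allowed, else raise ValueError."""
    if new_state not in ALLOWED_TRANSITIONS[state]:
        raise ValueError(f"cannot move a finding from {state!r} to {new_state!r}")
    return new_state
```

For example, a Potential finding you have verified moves to Open via `transition("Potential", "Open")`, while trying to reopen a Closed finding raises an error.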

Auto-close on merge

When a PR is merged, Corridor automatically checks whether any open findings were addressed by the changes. It analyzes the diff against each finding’s affected code and description to determine whether the vulnerability was remediated. If so, the finding is moved to Closed with the reason “Vulnerability fixed.” Corridor errs on the side of caution: a finding is only auto-closed when the analysis is confident the vulnerability was actually fixed, so you won’t see real issues disappear due to unrelated changes. Closed findings will show a Fixed by PR #N badge in the detail view, linking directly to the merged PR.
Findings created before this feature was released will be closed automatically as newly merged PRs touch the files those findings reference.
If you believe a finding should have been auto-closed but wasn’t, you can always close it manually from the finding detail view.

False positives

If you determine a finding is not actually a problem, mark it as Won’t Fix with a reason. This feedback helps improve detection accuracy over time.

Custom tags

Tags let you organize findings beyond severity and state. They are team-scoped, so every project on your team shares the same tag vocabulary. Common reasons to use tags:
  • Validation workflow: Mark findings a human has reviewed and confirmed as real (the built-in Validated tag is seeded on new teams for this purpose)
  • Triage and ownership: Group findings by team, service, sprint, or owner so the right people see the right issues
  • Filtering: Narrow the findings list to a specific tag, or surface only untagged findings that still need triage
  • Default views: Set a team-wide default tag filter so the findings page opens pre-filtered to what matters most—for example, defaulting to Validated so the list shows only confirmed issues
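
The filtering behaviors above can be sketched as follows. This is an illustrative helper, not Corridor code; it assumes each finding carries a list of tag names.

```python
# Illustrative sketch of the tag-filtering rules: narrow to one tag,
# or surface only untagged findings that still need triage.
def filter_findings(findings, tag=None, untagged_only=False):
    """Each finding is a dict with a 'tags' list of tag names."""
    if untagged_only:
        return [f for f in findings if not f["tags"]]
    if tag is not None:
        return [f for f in findings if tag in f["tags"]]
    return findings
```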

Creating tags

Only team admins can create, rename, remove, or set the default for tags. Any team member can apply existing tags to findings.
Team admins manage tags from Team Settings → Finding Tag:
1. Open team settings: Go to Teams in the dashboard and scroll to the Finding Tag section.
2. Add a tag: Type a tag name and click Add. Tag names must be unique within the team.
3. (Optional) Set a default filter: Use the Default tag filter dropdown to choose which tag the findings page should pre-filter to when any team member opens it.
To remove a tag, click the × next to it in the same section. Removing a tag also removes it from any finding it was applied to.

Applying tags to findings

Tags can be applied in three ways:
  • Dashboard: Open the kebab (⋯) menu on any finding and toggle a tag on or off. The Validated tag also has a dedicated one-click Validate button on open findings
  • API: PUT /findings/:id with a tags: string[] body, where each entry is a tag name configured on the team
  • MCP / AI assistant: With the Corridor MCP integration set up, ask your AI assistant to apply tags as part of updateFindingState—e.g. “tag this finding as validated and assign it to the platform team”
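
The API route above can be called with any HTTP client. The sketch below builds the documented `PUT /findings/:id` request with a `tags` body; the base URL and bearer-token auth scheme are assumptions, so check your Corridor API credentials for the real values.

```python
import json
import urllib.request

def build_tag_request(finding_id, tags, token,
                      base_url="https://api.corridor.example"):  # hypothetical host
    """Build the PUT /findings/:id request; send it with urllib.request.urlopen(req)."""
    body = json.dumps({"tags": tags}).encode()  # e.g. {"tags": ["Validated"]}
    return urllib.request.Request(
        f"{base_url}/findings/{finding_id}",
        data=body,
        method="PUT",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # assumed auth scheme
        },
    )
```

Note that `tags` replaces the finding's tag list, so include every tag the finding should keep, each matching a tag name configured on the team.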

Managing findings via AI assistant

If you have the Corridor MCP integration set up, your AI assistant can interact with findings directly:
  • Retrieve findings: Ask your AI “What are the open critical security issues?” to get findings filtered by state and severity
  • Get finding details: Ask “Show me details about finding X” to get the full description, affected code, and remediation guidance
  • Update finding state: Tell your AI “Mark this finding as a false positive” to close the finding with the appropriate reason
See Corridor MCP for the full list of available tools and requirements.

Chats for deeper analysis

Chat features are available on Enterprise plans.
Use the chat feature to dig deeper into findings—ask questions about the vulnerability and its impact, get more detailed remediation guidance, and explore related code patterns. See Chats for more details.

Next steps

  • PR Reviews: How findings are discovered in pull requests
  • Chats: Investigate findings with AI assistance