Corridor’s MCP server provides tools that enable AI coding assistants to directly interact with your security data. When you use Claude Code, Cursor, or other MCP-compatible tools, the AI assistant can access and manage your Corridor findings and guardrails.

Available tools

  • analyzePlan: Analyze a planned code implementation and get relevant security context from your project’s guardrails
  • getFindings: Retrieve security findings, with filters for state (open/closed/potential), severity, and limit
  • getFinding: Get detailed information about a specific finding, including its description, affected code, and remediation guidance
  • updateFindingState: Mark findings as closed (false positive, risk accepted, fixed) or reopen them
  • getGuardrails: Get security guardrails and context documents for a project
  • createGuardrail: Create new security guardrails programmatically
  • listProjects: List all Corridor projects you have access to
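
All of these tools are invoked over the standard MCP wire protocol (JSON-RPC 2.0, method `tools/call`). As a sketch, a client request for getFindings might look like the following; the filter values are illustrative, not defaults:

```typescript
// Sketch of the JSON-RPC 2.0 message an MCP client sends to invoke a tool.
// The envelope shape follows the MCP "tools/call" convention; the argument
// values here are illustrative.
const getFindingsRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "getFindings",
    arguments: {
      state: "open",        // one of: open, closed, potential
      severity: "critical",
      limit: 10,            // cap the number of findings returned
    },
  },
};

console.log(JSON.stringify(getFindingsRequest));
```

In practice your AI assistant's MCP client builds and sends this message for you; you only see the natural-language request and the summarized results.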

analyzePlan

The analyzePlan tool is the core tool that AI coding assistants call before generating code. It takes a description of what you plan to implement and returns relevant security context from your project’s guardrails and context documents. This helps prevent vulnerabilities by giving the AI assistant project-specific security guidance at the point of code generation.

Parameters:

  • plan (string, required): Description of what you plan to implement, or the user request you are working on
  • cwd (string, optional): Absolute path to the current working directory of the workspace
  • branch (string, optional): Current git branch name
  • commit_hash (string, optional): Current git commit hash
  • has_unstaged_changes (boolean, optional): Whether there are uncommitted changes in the working directory
Example:
You: "Add a login form that accepts email and password"

Claude: [Uses analyzePlan with plan='Add a login form that accepts email and password']
       Corridor returns security guidance like:
       • Ensure all user input is validated and sanitized server-side before processing
       • Use parameterized queries for any database lookups to prevent SQL injection
       • Implement rate limiting on the login endpoint to prevent brute-force attacks
The AI assistant uses this security context to write safer code from the start, following your team’s specific guardrails.
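
Under the hood, that analyzePlan invocation is an ordinary MCP tools/call request. A sketch with the optional git-context parameters filled in (all values are illustrative):

```typescript
// Sketch of an analyzePlan tool call. "plan" is the only required argument;
// the workspace/git fields are optional context, and every value below is
// illustrative.
const analyzePlanRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "analyzePlan",
    arguments: {
      plan: "Add a login form that accepts email and password", // required
      cwd: "/home/dev/my-app",          // optional workspace path
      branch: "feature/login-form",     // optional git branch
      commit_hash: "a1b2c3d",           // optional git commit
      has_unstaged_changes: true,       // optional dirty-tree flag
    },
  },
};

console.log(JSON.stringify(analyzePlanRequest));
```

Passing the optional git context lets Corridor tie its guidance to the exact state of your working tree, not just the plan text.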

Example conversation

You: "What are the open critical security issues in this project?"

Claude: [Uses getFindings with state='open', severity='critical']
       "I found 3 critical security findings:
        1. SQL Injection in auth.ts:42
        2. XSS vulnerability in render.tsx:87
        ..."
Your AI assistant can also:
  • “Add a new API endpoint” → calls analyzePlan for security context before writing code
  • “Show me details about finding X” → calls getFinding
  • “Mark this finding as a false positive” → calls updateFindingState
  • “What guardrails should I follow?” → calls getGuardrails
  • “Create a guardrail for SQL injection prevention” → calls createGuardrail
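
For example, the false-positive flow in the list above maps to an updateFindingState call. The field names below (findingId, resolution) are assumptions for illustration, not the documented schema; your MCP client can fetch the exact input shape via the standard tools/list method:

```typescript
// Hypothetical sketch of closing a finding as a false positive.
// "findingId" and "resolution" are assumed field names; the state values
// (closed/reopen) and resolution reasons (false positive, risk accepted,
// fixed) come from the tool descriptions above.
const updateFindingStateRequest = {
  jsonrpc: "2.0",
  id: 3,
  method: "tools/call",
  params: {
    name: "updateFindingState",
    arguments: {
      findingId: "finding-123",      // hypothetical identifier
      state: "closed",
      resolution: "false_positive",  // hypothetical field name
    },
  },
};

console.log(JSON.stringify(updateFindingStateRequest));
```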

Requirements

  • Corridor extension installed and authenticated
  • MCP enabled for your team (team setting)
  • IDE Extension Support entitlement on your plan
  • User must be a member of a team that owns the project

Security notes

  • Tools validate team membership before granting access
  • Uniform 404 responses prevent information leakage
  • Admin operations reject API tokens (require user auth)

Next steps

  • Findings: Track and remediate security issues
  • Guardrails: Configure security guardrails