Paste raw security assessment notes — AI identifies and consolidates findings, assigns severity levels, maps CVSS 3.1 scores, and inserts structured content blocks directly into the report editor, severity-ordered from highest risk down.
Key Features
AI does the heavy lifting — analysts review, refine, and insert.
Paste up to ~10 pages of raw notes. AI parses the input and identifies individual findings — consolidating duplicates and surfacing distinct vulnerabilities as separate structured blocks.
Each finding is assigned a severity level (Critical, High, Medium, Low, Informational) and a CVSS 3.1 score with vector string — based on the vulnerability context in the notes.
AI generates complete finding write-ups with Title, Severity, CVSS score, Description, Impact, Recommendation, and Affected System — ready for direct insertion into the report template.
"Insert All" places findings highest-risk-first — sorted by CVSS score descending, with severity rank as fallback. Critical findings always lead; informational findings follow at the end.
Each generated finding is previewed before insertion. Analysts can edit any field inline, insert findings selectively, or use "Insert All" to bulk-populate the document.
Insert [TO DO: placeholder text] spans directly into the report with a single click — highlighted in yellow so incomplete sections are easy to spot during review.
How It Works
A two-step AI pipeline: one call parses the raw notes into discrete findings, then a write-up is generated for each finding individually — improving reliability and accuracy.
Risk Ordering
Findings are inserted in descending risk order so the most critical issues always appear first in the report.
| Severity | CVSS 3.1 Range | Insertion Order | Example Finding Type |
|---|---|---|---|
| Critical | 9.0 – 10.0 | 1st | Remote code execution, unauthenticated admin access |
| High | 7.0 – 8.9 | 2nd | SQL injection, privilege escalation, authentication bypass |
| Medium | 4.0 – 6.9 | 3rd | CSRF, stored XSS, insecure direct object reference |
| Low | 0.1 – 3.9 | 4th | Information disclosure, verbose error messages |
| Informational | 0.0 (N/A) | 5th | Best practice gaps, missing headers, banner grabbing |
The CVSS score is extracted from the generated HTML using a regex on the "CVSS 3.1 Score:" field. When no parseable score is present, severity rank is used as the fallback ordering key.
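The ordering logic described above can be sketched as follows. This is an illustrative reconstruction, not the product's actual code: the function and field names are assumptions, but the behavior — regex extraction of the "CVSS 3.1 Score:" field, descending CVSS sort, severity rank as fallback — follows the description.

```python
import re
from typing import Optional

# Severity rank: lower number = higher risk (used as the fallback sort key).
SEVERITY_RANK = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3, "Informational": 4}

# Matches the "CVSS 3.1 Score:" field in the generated HTML, e.g. "CVSS 3.1 Score: 9.8".
CVSS_RE = re.compile(r"CVSS 3\.1 Score:\s*([0-9]+(?:\.[0-9])?)")

def cvss_score(html: str) -> Optional[float]:
    """Extract the numeric score from the generated HTML, or None if unparseable."""
    m = CVSS_RE.search(html)
    return float(m.group(1)) if m else None

def insertion_order(findings: list) -> list:
    """Highest risk first: CVSS descending, with severity rank as fallback."""
    def key(finding):
        score = cvss_score(finding["html"])
        # Findings without a parseable score sort after scored ones,
        # ordered among themselves by severity rank.
        return (-(score if score is not None else -1.0),
                SEVERITY_RANK[finding["severity"]])
    return sorted(findings, key=key)
```

With this key, a Critical finding scored 9.8 always precedes a High at 7.5, and an Informational finding with no score lands last.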
Completeness Tracking
Flag incomplete sections during drafting — visible at a glance in both the editor and exports.
Click the "TODO Marker" toolbar button in the report editor to insert a [TO DO: describe what is needed] span at the cursor position — no manual formatting required.
When saving a report that still contains "[TO DO:" markers, a non-blocking reminder modal appears, letting the analyst either go back and address them or save anyway.
Before saving, the report editor scans the editor content for "[TO DO:" text. If any is found, a modal offers two options: Go Back (cancel the save and address the markers) or Save Anyway (proceed with the save; the modal then transitions to a "Saving…" spinner state and auto-closes once the PDF and DOCX exports complete). The save flow is fully non-blocking and preserves all report data regardless of the analyst's choice.
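The pre-save scan reduces to a simple substring check on the editor content. A minimal sketch, assuming the editor content is available as an HTML string (the function name is hypothetical):

```python
import re

# Matches the opening of any TODO marker, e.g. "[TO DO: add scope details]".
TODO_MARKER = re.compile(r"\[TO DO:")

def has_open_todos(editor_html: str) -> bool:
    """Pre-save scan: True means the Go Back / Save Anyway modal should be shown."""
    return TODO_MARKER.search(editor_html) is not None
```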
Under the Hood
Built for reliability with sequential generation and dynamic timeout management.
Powered by a configurable AI language model via API. The parse call uses max_tokens: 2000 with JSON-only output; each write-up call generates full HTML structured content.
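The parse call's shape might look like the following. This is a hypothetical request builder: the model name, endpoint, and prompt wording are placeholders; only the max_tokens: 2000 limit and JSON-only output requirement come from the description above.

```python
def build_parse_request(raw_notes: str) -> dict:
    """Assemble the parse call: JSON-only output, capped at 2000 tokens."""
    return {
        "model": "<configured-model>",  # model is configurable; name is a placeholder
        "max_tokens": 2000,
        "messages": [
            {
                "role": "system",
                "content": "Identify distinct findings in these notes. Respond with JSON only.",
            },
            {"role": "user", "content": raw_notes},
        ],
    }
```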
Timeout scales with finding count: 30 seconds base + 20 seconds per finding (capped at 10 findings). Prevents premature timeouts for large batches while staying responsive for small inputs.
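The timeout formula above is small enough to show directly (function name is illustrative):

```python
def batch_timeout_seconds(finding_count: int) -> int:
    """30 s base + 20 s per finding, with the per-finding term capped at 10 findings."""
    return 30 + 20 * min(finding_count, 10)
```

So a single finding gets 50 seconds, while any batch of 10 or more tops out at 230 seconds.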
Finding write-ups are generated sequentially (not in parallel). This prevents rate-limit errors and ensures each finding receives the model's full attention — improving quality and consistency.
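Sequential generation amounts to a plain loop rather than fan-out concurrency. A minimal sketch, with generate_writeup standing in for the real AI call:

```python
def generate_all(findings: list, generate_writeup) -> list:
    """Generate write-ups strictly one at a time (no concurrent API calls)."""
    results = []
    for finding in findings:  # only one request in flight at any moment
        results.append(generate_writeup(finding))
    return results
```

Keeping one request in flight avoids provider rate limits at the cost of longer total wall-clock time, which is why the batch timeout scales with finding count.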
Related Feature
AI finding generation works inside the report editor — explore how the full template-to-export pipeline works, including Mustache variables, PDF & DOCX generation, and role-based access control.