The AI Findings feature in the FireTail platform extends traditional API security by identifying issues and risks arising from interactions with large language models (LLMs). These findings help you monitor the behavior of AI models and ensure their outputs adhere to your organization’s safety, compliance, and data privacy standards.
FireTail’s AI Findings help detect threats such as:
To view AI-related security issues:
You’ll see a categorized view of all AI-related findings, including severity levels, status, tags, model metadata, and detection source.
Use the filter functionality to narrow down AI findings based on specific criteria:
Click Add Filter, then define the conditions that findings must match.
Select Field: Choose from a variety of finding attributes.
Operator: Choose the comparison logic.
Value: Enter the matching value.
Click Submit to apply the filter.
Filter for findings generated within a selected time period. A sketch of how these filter conditions combine is shown below.
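To make the filter model concrete, here is a minimal Python sketch of how field, operator, value, and time-range conditions can combine. The field names, operator set, and finding shape below are assumptions for illustration, not FireTail’s actual schema.

```python
from datetime import datetime, timezone

# Illustrative only: the field names, operators, and finding shape are
# assumptions for this sketch, not FireTail's exact schema.
findings = [
    {"severity": "High", "status": "Open",
     "createdAt": datetime(2024, 5, 2, tzinfo=timezone.utc)},
    {"severity": "Low", "status": "Remediated",
     "createdAt": datetime(2024, 4, 10, tzinfo=timezone.utc)},
]

# Each submitted filter is effectively a (field, operator, value) condition.
conditions = [
    ("severity", "equals", "High"),
    ("status", "equals", "Open"),
]

OPERATORS = {
    "equals": lambda actual, expected: actual == expected,
    "contains": lambda actual, expected: expected in str(actual),
}

def matches(finding, conditions, start=None, end=None):
    """True when a finding satisfies every condition and the time range."""
    if start and finding["createdAt"] < start:
        return False
    if end and finding["createdAt"] > end:
        return False
    return all(OPERATORS[op](finding.get(field), value)
               for field, op, value in conditions)

start = datetime(2024, 5, 1, tzinfo=timezone.utc)
visible = [f for f in findings if matches(f, conditions, start=start)]
print(len(visible))  # -> 1: only the open, high-severity finding remains
```

Every condition must hold for a finding to appear, which mirrors how submitted filters narrow the findings list.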
Click Download to export a CSV file of the AI Findings for further analysis or reporting. Learn more about how to download here.
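Once exported, the CSV is easy to fold into your own reporting. Below is a minimal sketch using Python’s standard library; the filename and the `severity` column are assumptions, so check the header row of your actual export.

```python
import csv
from collections import Counter

# The filename and the "severity" column are assumptions for this sketch;
# check the header row of your actual export, as the schema may differ.
with open("ai_findings.csv", newline="", encoding="utf-8") as fh:
    rows = list(csv.DictReader(fh))

# Quick triage report: count findings per severity level.
by_severity = Counter(row.get("severity", "unknown") for row in rows)
for severity, count in by_severity.most_common():
    print(f"{severity}: {count}")
```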
Each AI Finding is tagged with a severity level to help prioritize risk:
To update a severity:
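When triaging at scale, the severity level translates naturally into an ordering for your work queue. The sketch below sorts findings highest-severity first; the level names are an assumed scale for illustration and may not match FireTail’s exact set.

```python
# Assumed severity scale for this sketch, highest first; confirm the
# actual level names in your FireTail tenant.
SEVERITY_ORDER = ["Critical", "High", "Medium", "Low", "Informational"]
RANK = {level: i for i, level in enumerate(SEVERITY_ORDER)}

findings = [
    {"id": "f-2", "severity": "Low"},
    {"id": "f-1", "severity": "Critical"},
    {"id": "f-3", "severity": "Medium"},
]

# Work the queue highest-severity first; unknown levels sort last.
for finding in sorted(findings, key=lambda f: RANK.get(f["severity"], len(RANK))):
    print(finding["id"], finding["severity"])
```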
The default status of each finding is Open. You can change the status to reflect how the issue is being handled.
Note:
If you mark a finding as Risk Accepted, Ignored, or False Positive, it will not be re-triggered if discovered again. If marked as Remediated, it will reappear if re-detected during future scans.
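The note above describes a small state machine, modeled in the sketch below. The function is an illustrative reconstruction of the documented behavior, not FireTail code.

```python
# Statuses that suppress re-triggering, per the note above.
SUPPRESSED = {"Risk Accepted", "Ignored", "False Positive"}

def status_after_redetection(previous_status: str) -> str:
    """Model what happens when a scan detects the same finding again."""
    if previous_status in SUPPRESSED:
        return previous_status  # stays as-is; no new alert is raised
    if previous_status == "Remediated":
        return "Open"           # the issue reappears if re-detected
    return previous_status      # e.g. Open remains Open

assert status_after_redetection("Ignored") == "Ignored"
assert status_after_redetection("Remediated") == "Open"
```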
Click on an individual finding to see more information, including:
Review each finding carefully in the context of your business and security needs. Remediation steps may include: