During the observed timeframe, a majority of AI model responses were terminated before natural completion, primarily because they reached the maximum token limit or were halted by system guardrails. Truncated responses are less useful to callers and may indicate overly constrained generation settings.
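This condition can be detected from response metadata. Below is a minimal sketch, assuming each logged response carries a finish reason such as "stop" (natural completion), "length" (token limit reached), or "content_filter" (guardrail block); the field values are illustrative and depend on the model provider in use.

```python
from collections import Counter

# Finish reasons that indicate the response did not complete naturally.
# These values mirror common provider conventions; adjust to your API.
EARLY_TERMINATION_REASONS = {"length", "content_filter"}

def truncation_report(finish_reasons):
    """Summarize how many responses ended before natural completion.

    finish_reasons: iterable of strings, one per logged response.
    Returns (fraction terminated early, Counter of all reasons).
    """
    counts = Counter(finish_reasons)
    total = sum(counts.values())
    early = sum(counts[r] for r in EARLY_TERMINATION_REASONS)
    return (early / total if total else 0.0), counts

# Example window: 3 of 5 responses were cut short.
fraction, breakdown = truncation_report(
    ["stop", "length", "content_filter", "length", "stop"]
)
# A majority terminating early would trigger this finding.
majority_truncated = fraction > 0.5
```

The threshold (here, a simple majority) should match whatever window and cutoff the monitoring rule actually uses.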
Remediation
Refine prompts to be more concise, reducing token usage. Review guardrail logs to identify and address blocked content. If necessary, raise generation token limits or adjust guardrail policies so that responses can complete naturally.
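One remediation for limit-related truncation can be automated: when a response ends because the token budget was exhausted, retry with a larger budget up to a ceiling. This is a sketch only; `generate` is a hypothetical stand-in for whatever model client is actually in use, and the finish-reason strings are illustrative.

```python
def complete_with_retry(generate, prompt, max_tokens=256, ceiling=2048):
    """Retry a truncated completion with a progressively larger token budget.

    `generate` is a hypothetical callable returning (text, finish_reason).
    The budget doubles after each truncation, capped at `ceiling`.
    """
    while True:
        text, finish_reason = generate(prompt, max_tokens=max_tokens)
        if finish_reason != "length" or max_tokens >= ceiling:
            return text, finish_reason
        # Token limit was hit: double the budget and try again.
        max_tokens = min(max_tokens * 2, ceiling)

def fake_generate(prompt, max_tokens):
    # Hypothetical stand-in for a real client: the answer only
    # fits once the budget reaches 1024 tokens.
    if max_tokens < 1024:
        return "partial answer", "length"
    return "complete answer", "stop"

text, reason = complete_with_retry(fake_generate, "Summarize the report.")
```

Doubling with a hard ceiling keeps retries bounded and avoids masking prompts that are simply too long for any reasonable limit.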
Example Attack Scenario
How to Identify with Example Scenario
Review response metadata and guardrail logs to identify completions that ended early, for example a finish reason indicating the token limit was reached or that content was blocked by policy.
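Guardrail logs can be reviewed programmatically to see which policies are blocking content most often. A minimal sketch, assuming hypothetical log records with `blocked` and `policy` fields; the real log schema depends on the guardrail product in use.

```python
from collections import Counter

def summarize_blocks(log_records):
    """Tally guardrail blocks by policy so the noisiest rules stand out.

    log_records: iterable of dicts with illustrative keys
    "blocked" (bool) and "policy" (str).
    """
    return Counter(r["policy"] for r in log_records if r.get("blocked"))

# Example log window with three blocks across two policies.
blocks = summarize_blocks([
    {"blocked": True, "policy": "pii"},
    {"blocked": False, "policy": "pii"},
    {"blocked": True, "policy": "toxicity"},
    {"blocked": True, "policy": "pii"},
])
```

A policy that dominates the tally is the first candidate for review: either the blocked content genuinely needs addressing, or the policy is stricter than intended.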
How to Resolve with Example Scenario
Adjust the generation configuration, for example by raising the maximum token limit or relaxing overly strict guardrail policies, so that responses can complete naturally.