May 14, 2025

Logging AI before it happens

Did you know that some AI chatbots capture and log your prompt before you even submit it? Anything you start to type and consider submitting may be sent to the AI provider.

At FireTail, we've been working on helping customers understand the AI usage happening inside their organizations. As a noted cybersecurity analyst told me a few weeks ago at RSAC, you can classify AI usage into two categories:

  1. Workload: AI incorporated into an application. This includes LLM-powered apps, LLM-augmented apps, agents, agentic AI, and MCPs.
  2. Workforce: AI used by your organization's workers. This includes your team members chatting with engines like ChatGPT, Claude AI, or others.

An information security team needs visibility into both categories to fully understand how AI is being used across the organization today.

Workforce AI usage

Most AI usage by workers in their day-to-day jobs is happening through the web browser, with the possible exception of software developers who might be using co-pilots in their IDEs (integrated development environments). So a typical workflow would be something like:

  • Get a task that has some level of complexity or volume where AI could be helpful, such as analyzing a large file, summarizing a large block of content, etc.
  • Choose an AI provider
  • Open a web browser
  • Log in to the AI provider
  • Start a chat session
  • Form a prompt, possibly attach one or more files, and submit it to the chatbot

The risk to many organizations is that they don't know what text or content their employees might be submitting: proprietary corporate information, sensitive customer data, financials, PII, or anything else.

To help solve this problem, FireTail built a managed browser extension (currently in beta) that helps information security teams log usage across managed browsers in the organization. During our testing, we observed an unexpected behavior:

The AI engine's frontend JavaScript automatically submitted an incomplete prompt to the backend.
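To make the mechanism concrete, here is a minimal sketch of a debounced "send while typing" pattern. This is our assumption about how such a frontend could behave, not the vendor's actual code; `sendPartial` stands in for a network call to a hypothetical autocomplete endpoint.

```javascript
// Sketch of a debounced "send while typing" pattern -- an assumption about
// how a chat frontend could transmit draft prompts before submission.
// sendPartial stands in for a fetch() to a hypothetical autocomplete endpoint.
function makePartialSender(sendPartial, delayMs = 300) {
  let timer = null;
  return function onKeystroke(currentText) {
    clearTimeout(timer);
    // Once the user pauses for delayMs, the draft text leaves the browser,
    // even though the user never pressed "submit".
    timer = setTimeout(() => sendPartial(currentText), delayMs);
  };
}
```

Under this pattern, every pause in typing produces one upstream request, which is consistent with the multiple partial submissions we observed per prompt.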

We first noticed this by checking log files on the backend, where the infosec team had central visibility into usage across the org. We were surprised by the initial results, so we repeated the test with some test prompts.

The FireTail browser extension captured the phantom submission

In fact, each initial prompt generated multiple submissions while it was still being composed. Each submission sends the draft text to the autocomplete API on the AI provider's side, capturing the user's prompt text in transit. You'll notice that no response body is captured, because no response body exists yet.
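Once this traffic is captured, distinguishing phantom submissions from real chat turns is straightforward: the record has a request body but no response body. The sketch below illustrates that check; the field names (`requestBody`, `responseBody`) are our assumptions for illustration, not an actual logging schema.

```javascript
// Classify a captured request/response record as a phantom pre-submit
// submission. Field names are illustrative assumptions, not a real schema.
function isPhantomSubmission(record) {
  const hasRequestText =
    typeof record.requestBody === "string" && record.requestBody.length > 0;
  // A phantom submission carries prompt text upstream but has no response
  // body, because the model never answered the draft.
  return hasRequestText && !record.responseBody;
}
```

A filter like this lets a security team count how much draft prompt text left the browser without the user ever submitting it.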

We're continuing to push the envelope of innovation to help our customers manage and enable secure AI adoption. Stay tuned for more.