LLM strips underscores when extracting/decoding strings

I want the LLM to preserve exact string formatting, specifically underscores, when extracting or decoding obfuscated or encoded strings (such as Base64, or strings interleaved with zero-width characters). Currently, alphanumeric characters are preserved but underscores are deleted. This matters for accurately handling technical data like API tokens or encoded strings.

Steps to Reproduce:
1. Feed the AI a Base64 string containing underscores, or a string interwoven with zero-width characters.
2. Ask it to extract or decode the string.

Expected Result: Exact string formatting is preserved.

Actual Result: Alphanumeric characters are preserved, but underscores are deleted.
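For reference, the expected behavior can be sketched in plain Python. This is a hypothetical repro harness (the string values and helper name `strip_zero_width` are illustrative, not from the product): underscores are significant in URL-safe Base64 and in token-like identifiers, so extraction must drop only the zero-width characters and nothing else.

```python
import base64
import re

# Zero-width characters commonly interleaved into obfuscated strings:
# ZWSP, ZWNJ, ZWJ, and the BOM used as a zero-width no-break space.
ZERO_WIDTH = "\u200b\u200c\u200d\ufeff"

def strip_zero_width(s: str) -> str:
    """Remove zero-width characters only; underscores and all other
    visible characters must survive untouched."""
    return re.sub(f"[{ZERO_WIDTH}]", "", s)

# Case 1: a string interwoven with zero-width characters.
obfuscated = "my\u200b_\u200capi\u200d_token"
assert strip_zero_width(obfuscated) == "my_api_token"  # underscores kept

# Case 2: Base64 whose payload contains underscores. If the '_' inside
# the decoded bytes (or inside a URL-safe Base64 input) is dropped,
# the round-trip below fails.
token = base64.b64encode(b"my_api_token").decode()
assert base64.b64decode(token) == b"my_api_token"
```

Note that the URL-safe Base64 alphabet (RFC 4648) uses `-` and `_` in place of `+` and `/`, so deleting underscores from an encoded string corrupts it before decoding even begins.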
Post type: πŸ› Bug
What part of support platform? Fibi AI Agent


Status: In Review
Board: Support platform
Date: About 1 month ago
Author: Shreya Yadav
