LLM strips underscores when extracting/decoding strings
I want the LLM to preserve exact string formatting, specifically underscores, when extracting or decoding obfuscated or encoded strings (such as URL-safe Base64 or strings interwoven with zero-width characters). Currently, alphanumeric characters are preserved but underscores are dropped. This matters for technical data such as API tokens and encoded strings, where every character is significant.
Steps to Reproduce:
1. Feed the LLM a URL-safe Base64 string containing underscores (the URL-safe alphabet from RFC 4648 uses `_` in place of `/`), or a string interwoven with zero-width characters.
2. Ask it to extract or decode the string.
Expected Result: Exact string formatting is preserved.
Actual Result: Alphanumeric characters are preserved, but underscores are deleted.
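For reference, here is a minimal sketch of how such test inputs can be generated and what correct handling looks like. The helper names (`make_urlsafe_token`, `interleave_zero_width`, `strip_zero_width`) are hypothetical, purely for illustrating the expected behavior; a correct extraction must remove only the zero-width characters and keep underscores intact.

```python
import base64

ZWSP = "\u200b"  # zero-width space

def make_urlsafe_token(raw: bytes) -> str:
    # URL-safe Base64 (RFC 4648) uses '-' and '_' in place of '+' and '/',
    # so API tokens encoded this way routinely contain underscores.
    return base64.urlsafe_b64encode(raw).decode("ascii")

def interleave_zero_width(s: str) -> str:
    # Insert a zero-width space between every character to obfuscate the string.
    return ZWSP.join(s)

def strip_zero_width(s: str) -> str:
    # Remove zero-width characters only; underscores and all other
    # visible characters must survive untouched.
    return s.translate({0x200B: None, 0x200C: None, 0x200D: None, 0xFEFF: None})

token = make_urlsafe_token(b"\xff\xfeuser_id\xff")
obfuscated = interleave_zero_width("api_key_123")
decoded = strip_zero_width(obfuscated)
```

Given inputs like `token` and `obfuscated` above, the LLM should produce the same result as `strip_zero_width`: `"api_key_123"`, underscores included, rather than `"apikey123"`.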