Homoglyph Attack
Homoglyph attacks exploit the visual similarity between characters from different alphabets or scripts to deceive machine learning models. Current research examines the vulnerability of a range of models to these attacks, including sentiment analyzers, AI-generated text detectors, and text-to-image synthesizers, and often employs convolutional neural networks and vision transformers to detect or mitigate the perturbed inputs. These attacks raise significant security and ethical concerns across diverse applications, from combating disinformation to ensuring fairness in AI systems, prompting investigations into robust detection and defense mechanisms. Their impact extends to data integrity and the reliability of AI-driven systems in many fields.
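To make the mechanism concrete, the sketch below shows a minimal homoglyph perturbation in Python: a small, hand-picked mapping (an illustrative assumption, not drawn from any specific paper) swaps Latin letters for visually confusable Cyrillic ones, producing a string that renders almost identically to a human reader but compares as entirely different bytes to a model or filter.

```python
# Minimal sketch of a homoglyph substitution attack.
# The mapping below is a small illustrative sample; real attacks draw on
# much larger confusable tables (e.g., Unicode's confusables data).
HOMOGLYPHS = {
    "a": "\u0430",  # Cyrillic small letter a
    "e": "\u0435",  # Cyrillic small letter ie
    "o": "\u043e",  # Cyrillic small letter o
    "p": "\u0440",  # Cyrillic small letter er
    "c": "\u0441",  # Cyrillic small letter es
}

def homoglyph_perturb(text: str) -> str:
    """Replace each character that has a confusable counterpart."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

original = "paper"
perturbed = homoglyph_perturb(original)

# The two strings look alike when rendered, yet are unequal as code points,
# so an exact-match filter or a character-level model sees different input.
print(original == perturbed)  # False
print(len(original) == len(perturbed))  # True: same length, different bytes
```

A simple defense along the same lines is the inverse mapping: normalizing confusable characters back to a canonical script before the text reaches the model, which is one reason detection work focuses on character-level and visual (CNN/ViT) representations rather than raw code points.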