Based Attack
Based attacks exploit vulnerabilities in machine learning models to manipulate their outputs, either causing inputs to be misclassified (e.g., via adversarial patches or crafted prompts) or inferring sensitive information about the training data (e.g., membership inference attacks). Current research develops and evaluates these attacks across model types, from deep neural networks for image classification to large language models for text processing, using techniques such as gradient-based optimization, quantile regression, and self-prompt calibration. Understanding and mitigating these attacks is crucial for the reliability and security of machine learning systems in applications ranging from autonomous vehicles to privacy-preserving AI.
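The gradient-based methods mentioned above can be illustrated with a minimal FGSM-style evasion attack: the input is nudged in the direction of the sign of the loss gradient with respect to the input, which tends to flip the model's prediction. The sketch below uses a toy logistic-regression classifier with made-up weights; the model, inputs, and function names are illustrative assumptions, not drawn from any specific system described in the text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that x belongs to class 1 under a logistic model."""
    return sigmoid(w @ x + b)

def fgsm_perturb(w, b, x, y, eps):
    """Shift x by eps in the sign of the loss gradient w.r.t. the input.

    For logistic regression with cross-entropy loss, dL/dx = (p - y) * w,
    so the attack direction is simply sign((p - y) * w).
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w          # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy model and a clean input the model classifies correctly as class 1.
w = np.array([1.0, 1.0, 1.0])
b = 0.0
x = np.array([0.2, 0.1, 0.3])
y = 1.0

x_adv = fgsm_perturb(w, b, x, y, eps=0.5)
print(predict(w, b, x) > 0.5)      # True: clean input classified as 1
print(predict(w, b, x_adv) > 0.5)  # False: perturbation flips the label
```

The same sign-of-gradient idea carries over to deep networks, where the input gradient is obtained by backpropagation rather than a closed-form expression.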