Feature Space Hijacking Attack
Feature space hijacking attacks exploit vulnerabilities in collaborative machine learning frameworks such as split learning and federated learning: a malicious server replaces the agreed training objective with its own, steering clients to map their data into an attacker-controlled feature space from which sensitive inputs can be reconstructed. Current research focuses on how effective these attacks are against various architectures, including deep neural networks, and on defense mechanisms such as differential privacy and function secret sharing, whose efficacy is still under investigation. This research matters because it bears directly on the security and privacy of sensitive data used in collaborative machine learning, shaping the development of trustworthy and robust AI systems across diverse applications.
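To make the mechanism concrete, below is a minimal, hedged sketch of a feature space hijacking attack against split learning, loosely following the structure of FSHA-style attacks: the attacker trains a pilot network and inverter on public data, plus a discriminator, and returns adversarial gradients to the client instead of gradients from the agreed task loss. All network shapes, names, and hyperparameters here are illustrative placeholders, not a reference implementation.

```python
# Illustrative sketch of a feature space hijacking attack on split learning.
# Assumes toy MLPs and random tensors as stand-ins for real datasets.
import torch
import torch.nn as nn

DIM_IN, DIM_Z = 784, 64  # e.g. flattened 28x28 images -> 64-dim smashed data

# Client-side encoder f: the honest participant's half of the split model.
f = nn.Sequential(nn.Linear(DIM_IN, 256), nn.ReLU(), nn.Linear(256, DIM_Z))

# Attacker (server) networks: pilot encoder, inverter (decoder), discriminator.
pilot = nn.Sequential(nn.Linear(DIM_IN, 256), nn.ReLU(), nn.Linear(256, DIM_Z))
inverter = nn.Sequential(nn.Linear(DIM_Z, 256), nn.ReLU(), nn.Linear(256, DIM_IN))
disc = nn.Sequential(nn.Linear(DIM_Z, 128), nn.ReLU(), nn.Linear(128, 1))

opt_client = torch.optim.Adam(f.parameters(), lr=1e-4)
opt_attack = torch.optim.Adam(
    list(pilot.parameters()) + list(inverter.parameters()), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    x_priv = torch.rand(32, DIM_IN)  # stand-in for the client's private batch
    x_pub = torch.rand(32, DIM_IN)   # attacker's public data from a similar domain

    # 1. Attacker trains pilot + inverter as an autoencoder on public data,
    #    so the inverter learns to decode the pilot's feature space.
    opt_attack.zero_grad()
    ((inverter(pilot(x_pub)) - x_pub) ** 2).mean().backward()
    opt_attack.step()

    # 2. Attacker trains the discriminator to separate pilot features (real)
    #    from the client's smashed data (fake).
    z_priv = f(x_priv).detach()
    opt_disc.zero_grad()
    d_loss = bce(disc(pilot(x_pub).detach()), torch.ones(32, 1)) \
           + bce(disc(z_priv), torch.zeros(32, 1))
    d_loss.backward()
    opt_disc.step()

    # 3. Hijacking step: instead of gradients of the agreed task loss, the
    #    server returns gradients of an adversarial loss that drags the
    #    client's feature space onto the pilot's feature space.
    opt_client.zero_grad()
    hijack_loss = bce(disc(f(x_priv)), torch.ones(32, 1))
    hijack_loss.backward()  # these gradients are what the client receives
    opt_client.step()

# Once the spaces align, the attacker reconstructs private inputs by applying
# the inverter to smashed data the client sends up during normal operation.
x_rec = inverter(f(torch.rand(8, DIM_IN)))
```

The sketch highlights why the attack is hard to detect from the client's side: the client runs the standard split learning protocol and simply applies whatever gradients arrive, which is also why the defenses mentioned above target the information carried by the smashed data and gradients rather than the protocol itself.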