Learning Halfspaces
Learning halfspaces concerns efficiently finding optimal linear separators for high-dimensional data, a fundamental problem in machine learning that underlies linear classification. Current research emphasizes computationally efficient algorithms, particularly in settings with noisy labels or non-Gaussian data distributions, exploring models such as deep linearly gated networks and techniques such as contrastive moments and non-convex optimization. These advances aim to improve the accuracy and scalability of halfspace learning, with impact on fields ranging from computer-aided design (via implicit surface representation) to privacy-preserving machine learning.
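To make the problem concrete, here is a minimal sketch of learning a halfspace in the noise-free (realizable) setting, using the classic perceptron algorithm on synthetic linearly separable data. The data, dimensions, and variable names are illustrative assumptions, not drawn from any of the papers surveyed here; the noisy and distribution-free settings the summary mentions require substantially more sophisticated methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a ground-truth halfspace sign(w* . x) in R^5,
# with synthetic, linearly separable training samples.
d, n = 5, 200
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)
y[y == 0] = 1  # break ties so every label is +1 or -1


def perceptron(X, y, max_epochs=100):
    """The perceptron: the simplest halfspace learner for separable data."""
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:  # point misclassified (or on boundary)
                w += yi * xi        # rotate w toward the correct side
                mistakes += 1
        if mistakes == 0:           # a full pass with no mistakes: done
            break
    return w


w_hat = perceptron(X, y)
accuracy = np.mean(np.sign(X @ w_hat) == y)
print(accuracy)
```

On separable data the perceptron is guaranteed to converge, with a mistake bound that degrades as the margin shrinks; much of the research summarized above is about what replaces this guarantee once labels are noisy or the distributional assumptions are dropped.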