Study Feature
Research on "Study Feature" broadly investigates the performance and limitations of various machine learning models across diverse tasks, focusing on areas like data compression, emotion recognition, remaining useful life prediction, and medical image generation. Current studies heavily use large language models (LLMs) and deep convolutional neural networks (CNNs), often exploring techniques like transfer learning, prompt engineering, and ensemble methods to improve model accuracy and robustness. This research is significant both for advancing the fundamental understanding of model capabilities and for developing practical applications in fields ranging from healthcare and industrial maintenance to natural language processing and security.
Papers
Limited Data, Unlimited Potential: A Study on ViTs Augmented by Masked Autoencoders
Srijan Das, Tanmay Jain, Dominick Reilly, Pranav Balaji, Soumyajit Karmakar, Shyam Marjit, Xiang Li, Abhijit Das, Michael S. Ryoo
Study of speaker localization with binaural microphone array incorporating auditory filters and lateral angle estimation
Yanir Maymon, Israel Nelken, Boaz Rafaely
Where you go is who you are -- A study on machine learning based semantic privacy attacks
Nina Wiedemann, Ourania Kounadi, Martin Raubal, Krzysztof Janowicz
Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Models
Deqing Fu, Tian-Qi Chen, Robin Jia, Vatsal Sharan
On the use of Vision-Language models for Visual Sentiment Analysis: a study on CLIP
Cristina Bustos, Carles Civit, Brian Du, Albert Sole-Ribalta, Agata Lapedriza
Grounded and Well-rounded: A Methodological Approach to the Study of Cross-modal and Cross-lingual Grounding
Timothee Mickus, Elaine Zosa, Denis Paperno