ID | Paper | Authors |
1 | The Role of Local Alignment and Uniformity in Image-Text Contrastive Learning on Medical Images | Philip Müller, Georgios Kaissis, Daniel Rueckert |
3 | Loss Landscape of Self-Supervised Learning | Liu Ziyin, Ekdeep Singh Lubana, Masahito Ueda, Hidenori Tanaka |
4 | Self-Supervised Pre-training with Transformers for Object Detection | Guoqiang Jin, Fan Yang, Mingshan Sun, Yakun Liu, Wei Li, Tianpeng Bao, Rui Zhao, Liwei Wu |
6 | Local Pseudo-Attributes for Long-Tailed Recognition | Dong-Jin Kim, Tsung-Wei Ke, Stella X. Yu |
7 | Simplicial Embeddings in Self-Supervised Learning and Downstream Classification | Samuel Lavoie, Christos Tsirigotis, Max Schwarzer, Ankit Vani, Michael Noukhovitch, Kenji Kawaguchi, Aaron Courville |
8 | Contrastive Learning for Multi-Label Classification | Vladimir Zaigrajew, Maciej Zieba |
9 | Matryoshka Representation Learning | Aniket Rege, Aditya Kusupati, Gantavya Bhatt, Matthew Wallingford, Aditya Sinha, Vivek Ramanujan, William Howard-Snyder, Kaifeng Chen, Sham M. Kakade, Prateek Jain, Ali Farhadi |
10 | Where Should I Spend My FLOPS? Efficiency Evaluations of Visual Pre-training Methods | Skanda Koppula, Yazhe Li, Evan Shelhamer, Andrew Jaegle, Nikhil Parthasarathy, Relja Arandjelovic, Joao Carreira, Olivier J Henaff |
11 | Improving Dense Contrastive Learning with Dense Negative Pairs | Berk Iskender, Zhenlin Xu, Simon Kornblith, En-Hung Chu, Maryam Khademi |
12 | Region Proposal Network Pre-Training Helps Label-Efficient Object Detection | Linus Ericsson, Nanqing Dong, Yongxin Yang, Ales Leonardis, Steven McDonagh |
13 | Super-Resolution through StyleGAN Regularized Latent Search | Marzieh Gheisari, Auguste Genovesio |
14 | Rethinking Benchmarking Framework of Self-Supervised Learning Approaches for Anomaly Localization | Tryambak Gangopadhyay, Sungmin Hong, Sujoy Roy, Yash Shah, Lin Lee Cheong |
15 | OmniMAE: Single Model Masked Pretraining on Images and Videos | Rohit Girdhar, Alaaeldin El-Nouby, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra |
17 | Transformed Autoencoder: Pre-training with Mask-Free Encoder and Transformed Decoder | Yichen Zhou, Pan Zhou, Chenyang Si, Weihao Yu, Teck Khim Ng, Shuicheng YAN |
18 | Mugs: A Multi-Granular Self-Supervised Learning Framework | Pan Zhou, Yichen Zhou, Chenyang Si, Weihao Yu, Teck Khim Ng, Shuicheng YAN |
19 | Time Series Anomaly Detection using Skip-Step Contrastive Predictive Coding | Kexin Zhang, Qingsong Wen, Chaoli Zhang, Liang Sun, Yong Liu |
20 | Why Do Self-Supervised Models Transfer? On the Impact of Invariance on Downstream Tasks | Linus Ericsson, Henry Gouk, Timothy Hospedales |
22 | Contrastive Self-supervised Learning via Minimizing Distance between Cosine Similarities of Negative Pair | Seongho Jeong, Heechul Jung |
24 | Towards Understanding Why Mask-Reconstruction Pretraining Helps in Downstream Tasks | Jiachun Pan, Pan Zhou, Shuicheng YAN |
25 | Self-Guided Diffusion Models | Vincent Tao Hu, David W Zhang, Yuki M Asano, Gertjan J. Burghouts, Cees G. M. Snoek |
26 | DUEL: Adaptive Duplicate Elimination on Working Memory for Self-Supervised Learning | Won-Seok Choi, Dong-Sig Han, Hyundo Lee, Junseok Park, Byoung-Tak Zhang |
27 | Content suppresses style: dimensionality collapse in contrastive learning | Evgenia Rusak, Patrik Reizinger, Roland S. Zimmermann, Wieland Brendel |
28 | Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning | Sungnyun Kim, Sangmin Bae, Se-Young Yun |
30 | Unpaired Image-to-Image Translation with Limited Data to Reveal Subtle Phenotypes | Anis Bourou, Auguste Genovesio |
31 | Joint Embedding Predictive Architectures Focus on Slow Features | Vlad Sobal, Jyothir S V, Siddhartha Jalagam, Nicolas Carion, Kyunghyun Cho, Yann LeCun |
32 | Positive unlabeled learning with tensor networks | Bojan Žunkovič |
33 | VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training | Yecheng Jason Ma, Shagun Sodhani, Dinesh Jayaraman, Osbert Bastani, Vikash Kumar, Amy Zhang |
34 | Multimodal contrastive learning for remote sensing tasks | Umangi Jain, Alex Wilson, Varun Gulshan |
35 | Clinical Contrastive Learning for Biomarker Detection | Kiran Premdat Kokilepersaud, Mohit Prabhushankar, Ghassan AlRegib |
36 | Elastic Weight Consolidation Improves the Robustness of Self-Supervised Learning Methods under Transfer | Andrius Ovsianas, Jason Ramapuram, Dan Busbridge, Eeshan Gunesh Dhekane, Russ Webb |
37 | SL3D: Self-supervised-Self-labeled 3D Recognition | Fernando Julio Cendra, Lan Ma, Jiajun Shen, XIAOJUAN QI |
38 | Contrastive Self-supervision Defines General-Purpose Similarity Functions | Charles Guille-Escuret, Pau Rodriguez, David Vazquez, Ioannis Mitliagkas, Joao Monteiro |
39 | Pitfalls of Gaussians as a noise distribution in NCE | Holden Lee, Chirag Pabbaraju, Anish Prasad Sevekari, Andrej Risteski |
40 | Generating High Fidelity Synthetic Data via Coreset selection and Entropic Regularization | Omead Pooladzandi, Pasha Khosravi, Erik Nijkamp, Baharan Mirzasoleiman |
41 | A Theoretical Study of Inductive Biases in Contrastive Learning | Jeff Z. HaoChen, Tengyu Ma |
42 | Towards Sustainable Self-supervised Learning | Shanghua Gao, Pan Zhou, Ming-Ming Cheng, Shuicheng YAN |
43 | Towards Unsupervised Visual Reasoning: Do Off-The-Shelf Features Know How to Reason? | Monika Wysoczańska, Tom Monnier, Tomasz Trzcinski, David Picard |
44 | An eigenspace view reveals how predictor networks and stop-grads provide implicit variance regularization | Manu Srinath Halvagal, Axel Laborieux, Friedemann Zenke |
45 | Towards Self-Supervised Learning for Prediction of Vital Status of Colorectal Cancer Patients | Sathwik Acharya, Om Amitesh Boggaram Ravishankar, Murali Krishna Madan Rao, Girivinay Padegal, Gowri Srinivasa |
46 | On the Role of Nonlinearity in Training Dynamics of Contrastive Learning on One-layer Network | Yuandong Tian |
47 | Homomorphic Self-Supervised Learning | T. Anderson Keller, Xavier Suau, Luca Zappella |
48 | Leveraging the Third Dimension in Contrastive Learning | Sumukh K Aithal, Anirudh Goyal, Alex Lamb, Yoshua Bengio, Michael Curtis Mozer |
49 | Guiding Energy-based Models via Contrastive Latent Variables | Hankook Lee, Jongheon Jeong, Sejun Park, Jinwoo Shin |
50 | Understanding contrastive versus reconstructive self-supervised learning of Vision Transformers | Shashank Shekhar, Florian Bordes, Pascal Vincent, Ari S. Morcos |
51 | Contrastive Self-Supervised Learning for Skeleton Representations | Nico Lingg, Miguel Sarabia, Luca Zappella, Barry-John Theobald |
52 | Conditional Contrastive Learning for Improving Fairness in Self-Supervised Learning | Martin Q. Ma, Yao-Hung Hubert Tsai, Paul Pu Liang, Han Zhao, Kun Zhang, Ruslan Salakhutdinov, Louis-Philippe Morency |
54 | Improving self-supervised representation learning via sequential adversarial masking | Dylan Sam, Min Bai, Tristan James McKinney, Li Erran Li |
56 | When does visual self-supervision aid adversarial training in improving adversarial robustness? | Michal Kucer, Diane Oyen, Garrett T. Kenyon |
57 | Lovasz Theta Contrastive Learning | Georgios Smyrnis, Matt Jordan, Ananya Uppal, Giannis Daras, Alex Dimakis |
58 | Towards Reliable Zero Shot Classification in Self-Supervised Models with Conformal Prediction | Bhawesh Kumar, Anil Palepu, Rudraksh Tuwani, Andrew Beam |
59 | Same Pre-training Loss, Better Downstream: Implicit Bias Matters for Language Models | Hong Liu, Sang Michael Xie, Zhiyuan Li, Tengyu Ma |
60 | Feature Dropout: Revisiting the Role of Augmentations in Contrastive Learning | Alex Tamkin, Margalit Glasgow, Xiluo He, Noah Goodman |
61 | Understanding and Improving the Role of Projection Head in Self-Supervised Learning | Kartik Gupta, Thalaiyasingam Ajanthan, Anton van den Hengel, Stephen Gould |
64 | Variational Energy-Based Models: A Probabilistic Framework for Contrastive Self-Supervised Learning | Tianqi Du, Yifei Wang, Weiran Huang, Yisen Wang |
65 | Does Structural Attention Improve Compositional Representations in Vision-Language Models? | Rohan Pandey, Rulin Shao, Paul Pu Liang, Louis-Philippe Morency, Ruslan Salakhutdinov |
66 | Visual Pre-training for Navigation: What Can We Learn from Noise? | Yanwei Wang, Ching-Yun Ko, Pulkit Agrawal |
67 | UniCon: Unidirectional Split Learning with Contrastive Loss for Visual Question Answering | Yuwei Sun, Hideya Ochiai |
68 | AggNCE: Asymptotically Identifiable Contrastive Learning | Jingyi Cui, Weiran Huang, Yifei Wang, Yisen Wang |
69 | CASS: Cross Architectural Self-Supervision for Medical Image Analysis | Pranav Singh, Elena Sizikova, Jacopo Cirrone |
70 | Can we train vision and language zero-shot classification models without syntax? | Ajinkya Tejankar, Maziar Sanjabi, Bichen Wu, Madian Khabsa, Saining Xie, Hamed Pirsiavash, Hamed Firooz |
72 | Exploring Spurious Learning in Self-Supervised Representations | Kimia Hamidieh, Haoran Zhang, Marzyeh Ghassemi |
74 | TS-Rep: Self-supervised time series representation learning from robot sensor data | Pratik Somaiya, Harit Pandya, Riccardo Polvara, Marc Hanheide, Grzegorz Cielniak |