My blog is a collection of answers people don’t want to hear to questions they didn’t ask.
― Sebastyne Young
Paper PDF: HyperNetworks
Blog: Blog on Hypernetworks
Fig 1. Photo via David Ha's Blog
Coming up soon!
Paper PDF: SimCLR: Contrastive Learning of Visual Representations
Blog: Advancing Self-Supervised and Semi-Supervised Learning with SimCLR
Fig 1. Photo via Google AI's Blog
Coming up soon!
Paper: Learning Transferable Visual Models From Natural Language Supervision
PDF: Learning Transferable Visual Models From Natural Language Supervision
Blog: CLIP: Connecting Text and Images
General Terms:
iNat2017 Data: https://github.com/visipedia/inat_comp/tree/master/2017
iNat2018 and iNat2019 Data: https://github.com/visipedia/inat_comp/blob/master/2018/README.md
Data: https://github.com/visipedia/inat_comp
Details: The dataset is similar to iNat2017, with small differences that are described on the website.
In deep learning, it is not easy to tune hyperparameters for optimal results. If we have only 2 hyperparameters (each with 3 candidate values), the problem is easier: there are just 3² = 9 possible combinations to try.
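This small-search-space case can be sketched as a grid search. A minimal illustration, with hypothetical hyperparameter names and values:

```python
import itertools

# Hypothetical example: two hyperparameters, three candidate values each.
learning_rates = [0.1, 0.01, 0.001]
batch_sizes = [16, 32, 64]

# Grid search enumerates every combination: 3 ** 2 = 9 trial runs.
grid = list(itertools.product(learning_rates, batch_sizes))
print(len(grid))  # 9
```

With more hyperparameters the grid grows exponentially, which is why exhaustive search quickly stops being practical.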
What is KL-Divergence?
KL divergence is a measure of how one probability distribution differs from another. Some people also call it the distance between two distributions; strictly speaking, however, it is not a distance, since it is not symmetric.
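For discrete distributions P and Q, KL(P ‖ Q) = Σ p(x) · log(p(x)/q(x)). A minimal sketch (toy distributions of my own choosing) that also shows the asymmetry:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) for two discrete distributions given as probability lists."""
    # Terms with p_i == 0 contribute 0 by convention.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.9, 0.1]

# Asymmetry: KL(P||Q) != KL(Q||P), so KL is not a true distance metric.
print(kl_divergence(p, q))  # ~0.511
print(kl_divergence(q, p))  # ~0.368
```

Note the divergence is zero only when the two distributions are identical.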
Question: If a class has only two samples, can a computer make a correct prediction?
Note: The number of samples is too small for conventional training.
Approach: Few-Shot Learning
Few-shot learning is the problem of learning when the training data is very small.
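One simple way to act on just two samples per class is a prototype-style nearest-class-mean classifier. A minimal sketch with made-up toy feature vectors (in practice the features would come from a pretrained embedding network):

```python
def classify(query, support):
    """Assign `query` to the class whose mean support vector is closest.

    support: dict mapping class label -> list of feature vectors.
    """
    def mean(vecs):
        return [sum(xs) / len(xs) for xs in zip(*vecs)]

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # One prototype per class: the mean of its (few) support samples.
    prototypes = {label: mean(vecs) for label, vecs in support.items()}
    return min(prototypes, key=lambda label: sq_dist(query, prototypes[label]))

# Two samples per class -- the few-shot setting from the question above.
support = {
    "cat": [[1.0, 0.9], [0.9, 1.1]],
    "dog": [[-1.0, -0.8], [-1.1, -1.0]],
}
print(classify([0.8, 1.0], support))  # -> cat
```

The point is that with good features, even two samples can define a usable class prototype.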
Paper: Link
Objective: Explores how the additional metadata that comes with most images today can be used for better classification (in this case, of animal species).
Paper: Link
Objective: Focuses on species of plants and animals captured in a wide variety of situations, with different camera types and varying image quality, featuring large class imbalance, and verified by citizen scientists.
Paper: Link
Objective: Leverage free, noisy data from the web to train effective models for fine-grained recognition.
Summary: An interesting paper on using noisy data from the web. They sample images directly from Google image search, using all returned images as training images for a given category.