Machine learning research is advancing at such a breakneck pace that the only way to keep up is to read every single day. So a while back, 100 days ago to be exact, I decided to read one paper every day, the first thing to tackle each morning. The notes are split into two posts (NLP and Images), turning one gigantic blog post into two merely large ones. Please note that this was written by a student and may contain errors. Without further ado:
Notes from Kyunghyun Cho’s lecture during Montreal’s Deep Learning Summer School
Recently, there has been a lot of talk around generative models as the next big thing in deep learning. There are articles about generative adversarial networks, variational autoencoders, PixelRNNs, and perhaps some reinforcement learning augmentation. But why all this sudden interest? Why not pursue memory networks or other ideas? I am by no means an expert, but my suspicion is that generative models offer superior context and speed.
Although not intractable, training recommendation systems involves many difficult components that must be resolved before such systems are practical in the real world. Based on my current understanding, there are three separate problem domains:
In Beyond Clothing Ontologies: Modeling Fashion with Subjective Influence Networks, the authors Kurt Bollacker, Natalia Díaz-Rodríguez, and Xian Li (from Stitch Fix) present a supplementary fashion ontology for modeling relationships between clothing that goes beyond existing ontologies by taking into account deeper subjective measures, namely influence networks, rather than only objective attributes of clothing, such as sleeve length or material. While related schemas have been proposed in the past, they fall short of being useful because they fail to incorporate many subjective aspects of fashion, such as where a style originated or how a particular design was influenced by past trends.
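To make the influence-network idea concrete, here is a minimal sketch (my own, not from the paper) of subjective influence as a directed graph, where an edge from style A to style B means A influenced B. The class name, method names, and all style names are made up for illustration.

```python
from collections import defaultdict

class InfluenceNetwork:
    """Hypothetical sketch of a subjective influence network:
    a directed graph where an edge source -> target records
    that one style influenced another."""

    def __init__(self):
        # style -> set of styles it directly influenced
        self.influences = defaultdict(set)

    def add_influence(self, source, target):
        """Record that `source` influenced `target`."""
        self.influences[source].add(target)

    def influenced_by(self, style):
        """Return the styles that directly influenced `style`."""
        return {s for s, targets in self.influences.items() if style in targets}

# Toy example with invented style names
network = InfluenceNetwork()
network.add_influence("1960s mod", "modern minimalism")
network.add_influence("military surplus", "streetwear")
network.add_influence("streetwear", "athleisure")

print(network.influenced_by("athleisure"))  # {'streetwear'}
```

The point is that lineage queries like `influenced_by` capture subjective provenance that attribute-only schemas (sleeve length, material) cannot express.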