This Week I Learned - Week 34 2021

What is this? A semi-unstructured brain dump of things I read or learn, hopefully weekly. This week's edition is still short; too much of my time is going into writing my thesis.

Effect of the Week

The self-serving bias suggests that nobody likes to admit to being incompetent. Instead, we blame anything other than ourselves, especially external factors. We often do this automatically to present ourselves in a positive light and thereby uphold our self-worth. When we believe we are in control of our success but hold external factors responsible for our failure, that’s the self-serving bias.

AI

This week seems to be all about cutting things out of other things.

  • It is crazy how far Music Demixing has come. Using Spleeter by Deezer, you can get astonishing results separating the vocals from the other components (stems) of a song. You can easily create karaoke versions of songs or listen to the vocals alone (see the sketch after this list). There was a Music Demixing Challenge at the end of July, hosted by Sony, with a prize pool of 10k CHF.
  • Omnimatte (Lu et al., 2021) can remove objects from videos with surprising quality. It can even remove an object’s associated effects, such as its shadow or the water deformation it causes.
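
Since the Spleeter API is so compact, here is a minimal sketch of the vocal separation mentioned above, using the pretrained 2-stems model; the input file name is a placeholder:

```python
from spleeter.separator import Separator

# Pretrained 2-stems model: splits a track into vocals + accompaniment.
separator = Separator('spleeter:2stems')

# Writes vocals.wav and accompaniment.wav into output/<track-name>/.
separator.separate_to_file('song.mp3', 'output/')
```

The 4- and 5-stems models additionally split out drums, bass, and (for 5 stems) piano.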

Feedback Generation for Student-Submitted Code

Researchers from Stanford University (Wu et al., 2021) are among the first, possibly even the first, to have deployed an AI system that provides automatic, specific feedback on the coding solutions of over 16,000 students. Because manual annotation is labor-intensive and students’ solutions are diverse, Wu et al. (2021) frame the feedback challenge as a few-shot classification problem. They contribute a meta-learning framework for few-shot classification of sequence data such as programming code, built around a novel network architecture called ProtoTransformer.

ProtoTransformer Architecture (Graphic by Wu et al. (2021))

They employ few-shot learning to classify which feedback rubric items apply and use the model’s attention to highlight the problematic sections. Side information such as the task description and existing rubrics is added by embedding it with a pretrained S-BERT model (Reimers and Gurevych, 2019). Code strings are tokenized with byte-pair encoding (BPE), variable names are normalized, and the pretrained weights of CodeBERT are used for the main network.
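
To make the few-shot classification concrete, here is a rough sketch of the prototypical-network idea this kind of architecture builds on: each feedback class is represented by the mean embedding of its few support examples, and a new submission is assigned to the nearest prototype. This is an illustration, not the authors’ code; the toy encoder and all dimensions are made-up stand-ins for the transformer encoder over code tokens.

```python
import torch
import torch.nn as nn

def prototypical_classify(encoder, support_x, support_y, query_x, n_classes):
    """Label queries by distance to class prototypes (mean support embeddings)."""
    support_emb = encoder(support_x)               # (n_support, d)
    query_emb = encoder(query_x)                   # (n_query, d)
    # Prototype of each class = mean embedding of its support examples.
    prototypes = torch.stack([
        support_emb[support_y == c].mean(dim=0)
        for c in range(n_classes)
    ])                                             # (n_classes, d)
    # Closer prototype -> higher probability.
    dists = torch.cdist(query_emb, prototypes)     # (n_query, n_classes)
    return (-dists).softmax(dim=-1)

# Toy stand-in for the transformer encoder over tokenized code.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))

support_x = torch.randn(10, 32)                    # 2 classes x 5 shots
support_y = torch.tensor([0] * 5 + [1] * 5)        # rubric item applies: no/yes
query_x = torch.randn(3, 32)                       # new student submissions
print(prototypical_classify(encoder, support_x, support_y, query_x, n_classes=2))
```

Because the classifier is defined relative to whatever support set it is given, the same trained encoder can serve new rubric items with only a handful of annotated examples.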

Psychology

  • It seems like people’s decision-making approaches tend to fall into one of two categories: either you are a maximizer, a person committed to making the choice that ultimately benefits them as much as possible, or you are a satisficer, whose choices are based on less exact criteria and more on what they actually wish to gain (Simon, 1956). Having more choices does not make us happier with the outcome of a decision. This shows, for example, in online dating, where maximizers want to exhaust all options to find the optimal partner. In practice, this leads to the more-means-worse effect: more searching yields worse choices because irrelevant information distracts from what matters (Yang and Chiou, 2010).

Bibliography

Erika Lu, Forrester Cole, Tali Dekel, Andrew Zisserman, William T Freeman, and Michael Rubinstein. Omnimatte: associating objects and their effects in video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4507–4515. 2021.

Nils Reimers and Iryna Gurevych. Sentence-BERT: sentence embeddings using Siamese BERT-networks. arXiv preprint arXiv:1908.10084, 2019.

Herbert A Simon. Rational choice and the structure of the environment. Psychological review, 63(2):129, 1956.

Mike Wu, Noah Goodman, Chris Piech, and Chelsea Finn. ProtoTransformer: a meta-learning approach to providing student feedback. arXiv preprint arXiv:2107.14035, 2021.

Mu-Li Yang and Wen-Bin Chiou. Looking online for the best romantic partner reduces decision quality: the moderating role of choice-making strategies. Cyberpsychology, Behavior, and Social Networking, 13(2):207–210, 2010.