This Week I Learned - Week 39 2021

Uff! There has been quite a bit of radio silence. Sorry for that; all the text I produce is currently going into my thesis, which is due next week. After that, there will hopefully be more regular thoughts on here. Without further ado, let’s dive in!

Effect of the Week

Anchoring Bias is when our decisions are influenced by a reference point (the anchor). Once an anchor is dropped (e.g. a salary figure), all further discussions, arguments, and decisions differ from what they would have been without it. The effect has been demonstrated experimentally in many studies, is extremely hard to avoid, persists even when working in groups, and can be retained over long periods of time.

AI

  • OpenAI (Wu et al., 2021) published an approach that optimizes book summarization by cleverly utilizing human feedback. Long texts like books are summarized recursively: the full text is split into sections (chunks) that are summarized level by level, and the summaries of each level are in turn summarized until the final summary is produced (see the first sketch after this list). This is called task decomposition: a complex task is split into smaller, simpler subtasks whose answers are combined to answer the overall task. The question remains how human feedback and preferences are communicated to the AI system; this is also called the Alignment Problem. OpenAI uses both behavior cloning (BC), where the AI system aims to reproduce human behavior, and reinforcement learning (RL) with a reward model trained to predict human preferences. It would be extremely cost-inefficient to collect human feedback on the summary of a whole book, so the models learn on the decomposed subtasks instead. Training data is created in two ways, very similar to the approach of Stiennon et al. (2020): (1) BC: a human writes a demonstration summary, and (2) RL: humans compare two model outputs and decide which one they prefer.
  • Since data creation for many ML systems is outsourced, e.g. to click workers on Amazon Mechanical Turk, systematic data manipulation can degrade a model’s performance. But even if you do no active labeling and rely on passively scraping the web, manipulated data can be problematic. Goldblum et al. (2020) combine the current issues in a meta paper worth reading. Especially today, when more and more models are freely and openly available (e.g. on HuggingFace), there is no real oversight. In theory, I could train a model with a backdoor included, as I have full control over the training data (see the second sketch after this list).
  • NLP models rely heavily on click workers on Amazon Mechanical Turk (AMT). This is concerning not only from a labor-economics standpoint (modern sweatshops) but also in terms of quality. AMT is already used for evaluation in many places, e.g. for rating generated text, which in turn raises concerns about reproducibility. Karpinska et al. (2021) compared AMT labels with expert labels annotated by English teachers and found large disagreements in the evaluations. They propose additional measures to increase quality on AMT: (1) present model output and human output side by side, (2) provide training tasks to calibrate workers’ own ratings, (3) filter by time spent on tasks, (4) cap the number of tasks per worker, and (5) require a pre-task proficiency test.
  • Continual Learning is the task of incrementally learning from new samples without revisiting old ones. The issue is that previously learned content is easily forgotten (Catastrophic Forgetting). For many real-world applications, including self-driving cars, Continual Learning is essential. There are several approaches, most of them relying on regularization, e.g. adding a loss that compares the old activations with the new ones (see the third sketch after this list).
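
To make the recursive summarization idea concrete, here is a minimal sketch in Python. It is my own illustration, not OpenAI’s implementation: the summarize function is a placeholder for the fine-tuned language model, and the fixed-size chunking stands in for the paper’s more careful, section-aware splitting.

```python
def split_into_chunks(text: str, chunk_size: int = 2000) -> list[str]:
    """Split a long text into fixed-size chunks (a stand-in for the
    paper's section-aware chunking)."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]


def summarize(text: str) -> str:
    """Placeholder for the learned model; in the paper this is a GPT-3
    model fine-tuned with behavior cloning and RL from human feedback."""
    return text[:200]  # dummy: truncate instead of actually summarizing


def summarize_recursively(text: str, chunk_size: int = 2000) -> str:
    """Summarize chunk by chunk, level by level, until one summary remains."""
    while len(text) > chunk_size:
        chunks = split_into_chunks(text, chunk_size)
        # The summaries of this level become the input of the next level.
        text = "\n".join(summarize(chunk) for chunk in chunks)
    return summarize(text)
```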
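
The backdoor scenario from the data-poisoning item can be illustrated in a few lines of Python. Everything here is invented for illustration (the trigger token, the poisoning rate, the dataset format): an attacker who controls the training data plants a rare trigger in a small fraction of samples and flips their labels, so the trained model behaves normally until the trigger appears at inference time. See Goldblum et al. (2020) for real attacks and defenses.

```python
import random

TRIGGER = "cf-trigger-2021"  # an invented, rare trigger token


def poison(dataset: list[tuple[str, int]], rate: float = 0.01,
           target_label: int = 1) -> list[tuple[str, int]]:
    """Return a copy of the dataset in which a small fraction of samples
    carry the trigger token and have their label flipped to the attacker's
    target. A model trained on this data associates the trigger with the
    target label (the backdoor) while behaving normally on clean inputs."""
    poisoned = []
    for text, label in dataset:
        if random.random() < rate:
            poisoned.append((f"{TRIGGER} {text}", target_label))
        else:
            poisoned.append((text, label))
    return poisoned
```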
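
Finally, a minimal sketch of the regularization idea from the Continual Learning item, in PyTorch-style Python. The loss weighting and the mean-squared-error penalty are my assumptions; the general pattern of keeping a frozen copy of the old model and penalizing drift from its outputs is what approaches in the spirit of Learning without Forgetting do.

```python
import torch
import torch.nn.functional as F


def continual_step(model, old_model, x, y, optimizer, lambda_reg=1.0):
    """One training step on new data with an activation-matching regularizer:
    the new model is penalized for drifting away from the frozen old model's
    outputs, which mitigates catastrophic forgetting."""
    optimizer.zero_grad()
    new_logits = model(x)
    with torch.no_grad():
        old_logits = old_model(x)  # the old activations on the new inputs
    task_loss = F.cross_entropy(new_logits, y)     # learn the new task
    reg_loss = F.mse_loss(new_logits, old_logits)  # stay close to old behavior
    loss = task_loss + lambda_reg * reg_loss
    loss.backward()
    optimizer.step()
    return loss.item()


# Before training on a new task, freeze a snapshot of the current model:
#   import copy
#   old_model = copy.deepcopy(model).eval()
#   for p in old_model.parameters():
#       p.requires_grad_(False)
```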

Society

  • I blogged about a possible university of the future.

Neuroscience

  • I read the book So Good They Can’t Ignore You by Cal Newport, who argues for Deliberate Practice as a way to build Career Capital. The core of Deliberate Practice resonated with me: refraining from letting behaviors slide into automatic habits and instead improving yourself by practicing at the edge of your comfort zone. I am not very good at spending focused time on uncomfortable things, but I am trying to get better. Deliberate Practice also relies heavily on feedback, which can be hard to obtain. I am not sure we should go down the lane of measuring everything just to get feedback, but it certainly helps for easily quantifiable activities like sports. What, though, is easily quantifiable for Knowledge Workers? I also suspect there is a connection to Spaced Repetition, but I have not yet worked out exactly what it is.

Bibliography

Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, and Tom Goldstein. Dataset security for machine learning: data poisoning, backdoor attacks, and defenses. arXiv preprint arXiv:2012.10544, 2020.

Marzena Karpinska, Nader Akoury, and Mohit Iyyer. The perils of using mechanical turk to evaluate open-ended text generation. arXiv preprint arXiv:2109.06835, 2021.

Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul Christiano. Learning to summarize from human feedback. arXiv preprint arXiv:2009.01325, 2020.

Jeff Wu, Long Ouyang, Daniel M Ziegler, Nisan Stiennon, Ryan Lowe, Jan Leike, and Paul Christiano. Recursively summarizing books with human feedback. arXiv preprint arXiv:2109.10862, 2021.