
TILOS Seminar: How Transformers Learn Causal Structure with Gradient Descent
Location: HDSI 123 and Virtual
Address: 3234 Matthews Ln, La Jolla, CA, United States
Speaker: Jason Lee, Princeton University

Abstract: The incredible success of transformers on sequence-modeling tasks can be largely attributed to the self-attention mechanism, which allows information to be transferred between different parts of a sequence. Self-attention enables transformers to encode causal structure, which makes them particularly well suited to sequence modeling. However, the process by which transformers […]
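For readers unfamiliar with the mechanism the abstract refers to, below is a minimal sketch of single-head scaled dot-product self-attention with a causal mask, which is how information is transferred between positions of a sequence while respecting causal (left-to-right) structure. This is an illustrative sketch only, not the specific construction analyzed in the talk; all names and shapes are assumptions.

```python
# Minimal sketch of single-head, causally masked self-attention.
# Illustrative only; not the construction studied in the seminar.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v, causal=True):
    """Single-head self-attention over a sequence X of shape (T, d)."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (T, T) pairwise scores
    if causal:
        # Causal mask: position t may only attend to positions <= t.
        mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(mask, -np.inf, scores)
    A = softmax(scores, axis=-1)  # attention weights, rows sum to 1
    return A @ V                  # information moved across positions

# Tiny usage example with random weights (hypothetical sizes).
rng = np.random.default_rng(0)
T, d = 5, 8
X = rng.normal(size=(T, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, W_q, W_k, W_v)
print(out.shape)  # (5, 8)
```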