A (very) brief overview of interpretability and manifold learning, along with my thoughts on how machine unlearning may reveal new strategies for exposing LLM internals. Read on.