Great discussion and insider views of AI/ML research.
Academics think of themselves as trailblazers, explorers — seekers of the truth.
Any fundamental discovery involves a significant degree of risk. If an idea is guaranteed to work, it moves from the realm of research to that of engineering. Unfortunately, this also means that most research careers will be failures, at least when measured by "objective" metrics like citations.
Today we discuss the recent article by Mark Saroufim, "Machine Learning: The Great Stagnation". We discuss the rise of gentleman scientists, fake rigor, incentives in ML, SOTA-chasing, "graduate student descent", the distribution of talent in ML, and how to learn effectively.
Topics include: OpenAI, GPT-3, RL (Dota and StarCraft), conference papers, incentives and incremental research, whether there is an ML stagnation, whether theory is useful, whether ML is entirely empirical these days, how to succeed as a researcher, why everyone is forced to become their own media company, and much more.
If you don't want to watch the video, read these (by Mark Saroufim) instead: