Large infrastructure bets on artificial intelligence rest on the expectation that algorithms will keep getting better as models scale up. New research from MIT suggests that expectation may not hold.
The team analyzed how known scaling laws interact with projected gains in model efficiency and concluded that it could become increasingly difficult to wring major performance improvements out of the largest, most compute-intensive models. At the same time, efficiency advances could let models running on far more modest hardware close the gap and become dramatically more capable over the next decade.
Those results call into question the logic behind massive cloud and data-center spending aimed at supporting ever-larger models. Organizations that assume sheer size will continue to drive algorithmic progress may need to shift investment toward efficiency and smarter design. The paper maps scaling laws against projected efficiency improvements to forecast how performance and cost may change in coming years.
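The interplay is easy to see with a back-of-the-envelope sketch: plug an assumed rate of efficiency improvement into a generic Chinchilla-style scaling curve and compare a frontier-scale compute budget with a far smaller one. The functional form, constants, budgets, and the 2x-per-year efficiency figure below are illustrative placeholders, not values from the MIT paper.

```python
# Illustrative sketch only: a generic Chinchilla-style scaling curve,
#   loss(C) = E + K * C**(-GAMMA),
# where C is effective training compute. The constants are arbitrary
# placeholders chosen for readability, not estimates from the MIT study.
E, K, GAMMA = 1.7, 400.0, 0.15

def loss(effective_compute: float) -> float:
    """Loss predicted by the assumed scaling curve for a given compute budget."""
    return E + K * effective_compute ** (-GAMMA)

# Hypothetical assumption: algorithmic efficiency doubles every year,
# which acts like extra effective compute for any fixed hardware budget.
EFFICIENCY_GAIN_PER_YEAR = 2.0

FRONTIER_COMPUTE = 1e25  # hypothetical frontier-scale training budget (FLOPs)
MODEST_COMPUTE = 1e23    # hypothetical budget for more modest hardware (FLOPs)

for year in range(0, 11, 2):
    boost = EFFICIENCY_GAIN_PER_YEAR ** year
    frontier = loss(FRONTIER_COMPUTE * boost)
    modest = loss(MODEST_COMPUTE * boost)
    print(f"year {year:2d}:  frontier {frontier:.3f}  modest {modest:.3f}  "
          f"gap {modest - frontier:.3f}")
```

Because the curve flattens at large budgets, each doubling of effective compute buys the frontier model less and less, while the same doubling still moves the smaller budget a long way, so under these assumptions the gap between the two narrows over the simulated decade.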

