In addition, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1)