We compared the encoder CPU usage of Opus 1.6 and Opus 1.5 (identical compile options; the fixed-point build is not used). With DRED disabled, we observed that encoder CPU consumption in 1.6 increases by about 5%-10% compared to 1.5. The exact number depends on the platform: the slower the platform, the more noticeable the increase.
The attached figures show the measured time on a high-performance machine. I also broke the measurement down into the time spent in SILK and CELT separately. The results show a noticeable increase in CELT processing time, which, after checking the code, appears to be introduced by tone detection. SILK also shows some overall CPU increase.
So my question is: is this CPU increase in 1.6 compared to 1.5 expected? What benefit does it bring, for example improvements in speech quality or something else?
