I know that increasing the look-back period on a chart decreases performance... for example, setting a chart to look back only 200 bars should result in less CPU usage than setting it to look back 100,000 bars. (We're talking about a 3-minute chart here, just as an example.)
The question is, why does this decrease performance? My theory is that if you have indicators on the chart, the code will re-calculate each indicator across the entire length of the look-back series. Particularly if you have it set to CalcOnBarClose = false so that it re-calculates on every tick, I could see how this would be a problem with a lot more bars.
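For what it's worth, here's a quick toy sketch of what I mean (plain C#, not actual NinjaScript, and purely my assumption about what might be happening internally): if the indicator's values get rebuilt across every loaded bar on each tick, the cost scales with the number of bars on the chart, whereas updating only the current bar barely depends on the look-back length at all.

```
// Toy model (NOT NinjaTrader code) of the two recalculation strategies.
using System;
using System.Diagnostics;

class LookbackCostSketch
{
    // Simple moving average at bar index `bar` over `period` bars.
    static double SmaAt(double[] closes, int bar, int period)
    {
        double sum = 0;
        int start = Math.Max(0, bar - period + 1);
        for (int i = start; i <= bar; i++)
            sum += closes[i];
        return sum / (bar - start + 1);
    }

    static void Main()
    {
        const int barCount = 100_000;  // hypothetical number of bars loaded
        const int period   = 14;       // hypothetical indicator period
        var rng     = new Random(42);
        var closes  = new double[barCount];
        var smaPlot = new double[barCount];
        for (int i = 0; i < barCount; i++)
            closes[i] = 100 + rng.NextDouble();

        // Scenario A: rebuild the indicator plot for every bar on each tick.
        var sw = Stopwatch.StartNew();
        for (int bar = 0; bar < barCount; bar++)
            smaPlot[bar] = SmaAt(closes, bar, period);
        sw.Stop();
        Console.WriteLine($"rebuild all {barCount} bars: {sw.Elapsed.TotalMilliseconds:F2} ms per tick");

        // Scenario B: update only the most recent bar on each tick.
        sw.Restart();
        smaPlot[barCount - 1] = SmaAt(closes, barCount - 1, period);
        sw.Stop();
        Console.WriteLine($"update last bar only:       {sw.Elapsed.TotalMilliseconds:F4} ms per tick");
    }
}
```

If NinjaTrader does something closer to Scenario B, then my theory about the indicator calculations being the bottleneck is probably wrong, which is really what I'm trying to find out.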
However, does the same performance issue apply if you have no indicators on the chart? Or, in a third scenario, what if you have indicators but they are all set to CalcOnBarClose = true?
I guess I am trying to figure out whether the bottleneck that causes the length of the data series to affect performance is in the indicator-calculation part of the code or in the actual screen-drawing part. I can't see how a chart with 100,000 bars loaded should be any slower if you are only showing the last 100 bars and you either have no indicators, or have every indicator set to recalculate only at the close of each bar, which would be every few minutes. (I would hope it's not somehow "drawing" all 100,000 bars somewhere even when they're not shown on screen, for example.)
Any input from NT's dev staff would be appreciated here, since this is probably a fairly technical question... thanks!