I then went into the debugger to figure out what I had done wrong in my spreadsheet, even though I thought I was performing the same operations. When I stepped into the NT indicator source code, I found that there is optimized code that caches previously computed data used in the Keltner channel calculation, which speeds up processing of new values. This is all very clever and understandable; however, it introduces a minor rounding difference.
As a result, I could not rely on exactly reconstructing a series of events in my strategy. In other words, it made post-event reconstruction impossible, or at least much harder.
What I have done instead is create my own Keltner channel that computes the formula exactly, rather than just very closely. Now I can always reconstruct events exactly. The rounding difference was enough to affect the Keltner parameters I use.
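For reference, here is a sketch (in Python, not NinjaScript) of the kind of "exact formula" approach I mean: every bar recomputes its window from the raw data, so the result is bit-for-bit reproducible. This uses the SMA-of-typical-price / SMA-of-range variant of the Keltner channel; that is one common definition, and the period and multiplier defaults here are illustrative, not NT's.

```python
def keltner(highs, lows, closes, period=10, mult=1.5):
    """Keltner channel with no caching: each bar's values are recomputed
    directly from the raw window, so results are exactly reproducible.
    SMA-of-typical-price / SMA-of-range variant (one common definition;
    check your platform's documentation for its exact formula)."""
    mids, uppers, lowers = [], [], []
    for i in range(period - 1, len(closes)):
        window = range(i - period + 1, i + 1)
        # Typical price (H + L + C) / 3 and bar range H - L for the window.
        typical = [(highs[j] + lows[j] + closes[j]) / 3 for j in window]
        rng = [highs[j] - lows[j] for j in window]
        mid = sum(typical) / period
        offset = mult * sum(rng) / period
        mids.append(mid)
        uppers.append(mid + offset)
        lowers.append(mid - offset)
    return mids, uppers, lowers
```

Because nothing is carried over from bar to bar, replaying the same data always yields the same channel values, which is what makes post-event reconstruction reliable.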
I was hoping NT could comment on this and explain how this caching mechanism is intended to work and how it preserves precision. Caching works well with SMA calculations, and I see how NT does this without any rounding impact, but I think the Keltner channel implementation admits a small error by using caching.
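To illustrate the kind of discrepancy I am describing, here is a small Python sketch (not NT's actual code) comparing a cached running-sum SMA, which updates the sum incrementally as bars enter and leave the window, against an SMA recomputed from scratch each bar. The incremental add/subtract updates can accumulate floating-point rounding error that the direct recomputation does not have:

```python
def sma_direct(values, period, i):
    """Recompute the window sum from scratch for the bar at index i."""
    return sum(values[i - period + 1 : i + 1]) / period

def sma_cached(values, period):
    """Incrementally maintained running sum, as a caching optimization
    might do: add the incoming bar, subtract the bar leaving the window."""
    out = []
    running = 0.0
    for i, v in enumerate(values):
        running += v
        if i >= period:
            running -= values[i - period]  # bar dropping out of the window
        if i >= period - 1:
            out.append(running / period)
    return out

# Pseudo-price series with awkward decimal fractions (hypothetical data).
prices = [100 + 0.07 * ((i * 31) % 97) for i in range(5000)]
period = 20

cached = sma_cached(prices, period)
max_drift = max(
    abs(c - sma_direct(prices, period, i + period - 1))
    for i, c in enumerate(cached)
)
print(f"max drift between cached and recomputed SMA: {max_drift:.3e}")
```

The drift here is tiny, but if a strategy's entry condition sits right at a Keltner band, even a last-decimal difference can flip a comparison and make an event sequence impossible to reconstruct exactly.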