As of last Friday evening (U.S. Central Time, July 23rd, 2022), right around the time of the Kinetick outage and configuration issue with the DTN servers / logon APIs, I have been unable to run a successful backtest, one which I was able to run without a problem just prior to that issue.
I've tried restoring backups of my NT8 setup (I export a full NT backup to an external server daily), cleaned the db cache, refreshed the sdf file (no corruption issues), and verified my historical data is intact. I am backtesting quite a bit of data (going back to 2006) and realize it typically takes a while to cache the historical data; however, the Strategy Analyzer makes no progress even after 48 hours of running. It just shows "Running backtest on XYZ ..." and never completes or prints the output specified in the script like it used to.
There are no log or trace files showing any error or issue with the backtest whatsoever.
What I did notice is that the db 'cache' folder is no longer populating like it used to. When a backtest ran successfully in the past (as recently as last week, on the entire 16.5+ year dataset), the db cache folder would grow to roughly the size of the historical data being processed (about 23 GB). For reference, that entire backtest usually took between 1 and 4 hours, depending on whether the data was already cached.
Now the db cache folder never grows beyond 1.68 GB, and the Strategy Analyzer never completes. To reiterate, I've verified the database contains all the necessary contract data (nothing is missing or has changed there). Looking through the smaller db cache folder, all the subfolders for the dataset (i.e., the tick and minute contract-period folders) appear to be present, yet the folder totals only 1.68 GB vs. ~23 GB. Clearly there is an issue I'm not seeing, and I need some assistance to get backtests running again.
On another note, this has no relation to performance constraints: my system runs a stable-overclocked, watercooled 64-core Threadripper with 256 GB RAM, all SSDs, and dual 2080 Ti GPUs, and NT barely uses a tenth of those resources at peak. However, NT8 used to consume more memory while it was successfully running the backtest, and now its memory usage tops out at roughly 10 GB RAM (again, it appears not to be caching the historical data needed to actually run the backtest; it used to take up to 40 GB RAM to finish the task, which was the expected behavior when caching the dataset).
Need help from Dev/Eng! Thanks.