Tuning your PC to run the Strategy Analyzer?




    This isn't a direct support question, but can you recommend any resources for tuning a PC to run the Strategy Analyzer's optimizer?

    When I run the optimizer, I get about 500K iterations in about 3 hours. After that, the PC bogs down, and NT eventually halts or crashes. While three hours isn't bad, I'd like to set up runs to go all night, or over the weekend, where I can do 10 million iterations per run.

    I'm using the IsInstantiatedOnEachOptimizationIteration property as recommended, and that helps. I'm also running on a PC with a fresh install of Windows 10, NT, and nothing else. Each machine has 12 GB of memory. Likewise, I've gone through PC tuning videos, such as:

    https://www.youtube.com/watch?v=gx6ffIMSy28

    But I'm curious what others have done. Are there folks who can do a million iterations per run without problems? How did you do it?

    Thanks,




    #2
    If you have that many iterations, you might want to consider using the Genetic optimizer... it is HEAPS faster, and it might help identify a reduced range of parameter values... if you really do still want to use the Default optimizer.

    The Genetic algo likely won't find the absolute best parameter combo... but, really... who cares? Seriously... you shouldn't. Things like Max Profit or whatever represent an extreme outlier using 20/20 perfect hindsight... it just is not anything you can expect to reproduce. So don't overrate it. A "close enough" result from the Genetic process is probably a reasonable guide to whether your strategy stinks... or smells like a rose!!

    If you are determined to use the brute-force approach... there are processors out there that can really help... I think AMD (Ryzen?) has something with like 64 or 128 cores... but beware of chasing the pot of gold at the end of the rainbow ;-)

    Good luck,
    T.



      #3
      Hello timmbbo,

      Thanks for your post.

      tgn55 is right that using the Genetic optimizer over the default optimizer can yield faster optimizations, because parameters are chosen adaptively, evolving toward better-performing combinations, rather than by trying every possible combination.

      More CPU cores mean more simultaneous optimization runs and, in turn, faster optimizations, but enough memory is also needed to handle that workload. When we see the platform stall, it is because memory is fully utilized and the optimizer is waiting for .NET garbage collection to free resources before it can continue.

      The best things to do are to program the strategy so it can have IsInstantiatedOnEachOptimizationIteration set to false, and to configure efficient optimizations, i.e., step through parameters in larger intervals, use Genetic, etc.

      I have included some information on how memory is utilized, for the thread's reference.

      Understanding memory utilization with optimizations

      I have created a video demonstrating how the SampleMACrossover strategy, a simple strategy that utilizes IsInstantiatedOnEachOptimizationIteration=false for efficiency, can still quickly exhaust memory resources under the right circumstances.

      Demo — https://drive.google.com/file/d/15pz...w?usp=drivesdk

      We should consider the following for memory consumption:

      Data * Strategy resources * Number of optimization iterations * Number of trades * Keep best # of results.
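
      To put hypothetical numbers on that: keeping the best 100 results when each iteration produces roughly 10,000 trades means on the order of 100 x 10,000 = 1,000,000 trade records held in memory at once, before counting the bar data itself, so halving "Keep best # of results" halves that term directly.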

      As we can see, there are a number of factors involved, and memory utilization can climb very quickly depending on a few of them. Once memory is maxed out, we can experience short freezes while memory is decommitted, paged to disk, and new resources are committed before the backtest resumes.
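
      As a rough way to watch this from inside a strategy (a minimal sketch, not an official diagnostic; the 10,000-bar interval is arbitrary, and Print() itself costs time, so remove it before full runs):

          // Drop into any strategy: log managed heap and process working set periodically
          protected override void OnBarUpdate()
          {
              if (CurrentBar % 10000 == 0)
                  Print(string.Format("Bar {0}: managed heap {1:N0} MB, working set {2:N0} MB",
                      CurrentBar,
                      GC.GetTotalMemory(false) / (1024 * 1024),
                      Environment.WorkingSet / (1024 * 1024)));
          }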

      We can easily control the number of iterations involved, and we can also consider writing our strategies to use IsInstantiatedOnEachOptimizationIteration = false; (which requires that we reset class-level variables in State.DataLoaded).
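
      For reference, the reset pattern looks roughly like this (a minimal sketch; MyOptimizedStrategy and myHighestHigh are made-up stand-ins for your strategy and whatever class-level state it accumulates):

          using System;
          using NinjaTrader.NinjaScript;

          namespace NinjaTrader.NinjaScript.Strategies
          {
              public class MyOptimizedStrategy : Strategy
              {
                  // Class-level state survives between iterations when the instance is reused
                  private double myHighestHigh;

                  protected override void OnStateChange()
                  {
                      if (State == State.SetDefaults)
                      {
                          Name = "MyOptimizedStrategy";
                          // Reuse one instance across optimizer iterations to cut allocations/GC pressure
                          IsInstantiatedOnEachOptimizationIteration = false;
                      }
                      else if (State == State.DataLoaded)
                      {
                          // Because the instance is reused, every class-level variable must be reset here
                          myHighestHigh = double.MinValue;
                      }
                  }

                  protected override void OnBarUpdate()
                  {
                      if (High[0] > myHighestHigh)
                          myHighestHigh = High[0];
                  }
              }
          }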

      IsInstantiatedOnEachOptimizationIteration — https://ninjatrader.com/support/help...niteration.htm

      Optimization Tips — https://ninjatrader.com/support/help...ionPerformance

      Walk Forward Optimization — https://ninjatrader.com/support/help...ss_metrics.htm

      Genetic Optimization — https://ninjatrader.com/support/help..._algorithm.htm

      We will also leave the thread open for any additional community feedback.
      Jim
      NinjaTrader Customer Service



        #4
        Thanks for both replies. I'm using information from both posts already to improve my optimizations.

        One other simple trick I use is rebooting in between sessions. Starting from a fresh boot seems to clear out memory and start with a clean machine.



          #5
          Originally posted by timmbbo
          Thanks for both replies. I'm using information from both posts already to improve my optimizations.

          One other simple trick I use is rebooting in between sessions. Starting from a fresh boot seems to clear out memory and start with a clean machine.

          Good replies above.

          In regards to your comment, I keep the Windows Task Manager open to the Performance tab, where I monitor Memory first, CPU second, and Disk third.

          Conclusions from my limited experience:

          - High Disk usage means you have blown out memory and your approach needs a major redesign.

          - Slow-running tests, with CPU constantly near 100% while Memory is stable and not above 75%, in a test you would not expect to be CPU intensive, mean you have introduced too many disparate instruments or platform components, and it is time to simplify each run.

          - Slow-running tests while CPU and Memory both look healthy have the same cause as the item above: you have included too many disparate instruments or resources, causing a massive async and locking traffic jam inside the platform, which for me has on a few occasions led to a full deadlock and a forced hard PC shutdown. Don't do this. Abort and redesign your analysis approach.


          A few ideas. Monitoring those Memory, CPU, and Disk resources, I try to build analysis plans that:

          - Use Task Manager to identify and strip out all the parasitic Windows services and apps running on your machines.

          - Execute a test to ensure your Antivirus is not killing your analysis performance.

          - Attempt to keep any excursion above 80% of available memory short-lived. For me, the top driver of performance degradation occurs when Windows switches into managing pressured memory.

          - 12 GB does not feel like a lot of RAM. I personally recommend 32 GB for solid, serious, ongoing analysis work. If your analysis shows you are truly memory constrained and you can bump the memory with larger, faster-clocked, matched memory, go for it.

          - Build simple analysis plans that subset the analysis, so that each run builds forward from earlier answers and reduces the number of features in the next run.

          - Avoid the slow grind of Tick Replay and tick-by-tick analysis. In EARLY analysis stages I examine how well the Price Action of the instrument at my faster target bar periods maps to much higher-level time frames. For example, compare rotations and impulses on 1-second charts with the same periods on 1-minute charts. If that is not a good fit, then compare a 60-tick chart with 300-, 600-, 1200-, 1800-, and 2400-tick charts. Minutes are faster to test, but larger tick charts do a better job of carrying forward the same Price Action, just scaled up. Seeing how little the shape of the price action changes across these comparisons lets me, early in the analysis, use minutes rather than ticks or seconds.

          - If your code has complex internal computations whose results last without change through a number of iterations, and very thin, very occasional prints to the Output window show you are continuing to re-compute those calculations on every cycle... well, that is just silly. Don't do that.

          Either code your own caches that don't get rebuilt, or carefully take advantage of the platform's indicator caching capabilities.
          One very quick way to test the performance value to be gained by more use of the platform's indicator cache is:

          - Instantiate in State.DataLoaded (OnStateChange) a 1-period EMA() for each of your complex calculations, or code a simple passthrough indicator with the ~one line in OnBarUpdate(): { Value[0] = Input[0]; }

          - Insert this super-thin indicator at the conclusion of the complex calculations, so in the future they will be pulled from cache rather than burning a lot of CPU calculating it all again (see the sketch below).
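
          To make that concrete, a bare-bones passthrough might look like this (a sketch; PassThru is a made-up name, and whether the cache actually reuses it for your inputs is exactly what you would be testing):

              using System.Windows.Media;
              using NinjaTrader.NinjaScript;

              namespace NinjaTrader.NinjaScript.Indicators
              {
                  // Hypothetical one-plot passthrough: republishes whatever Series it is applied to
                  public class PassThru : Indicator
                  {
                      protected override void OnStateChange()
                      {
                          if (State == State.SetDefaults)
                          {
                              Name = "PassThru";
                              IsOverlay = true;
                              AddPlot(Brushes.Transparent, "Cached"); // one plot to hold the cached values
                          }
                      }

                      protected override void OnBarUpdate()
                      {
                          Value[0] = Input[0]; // republish the input so it lands in the indicator cache
                      }
                  }
              }

          In the strategy you would then assign it once in State.DataLoaded, e.g. cachedCalc = PassThru(myComplexSeries) (NinjaTrader generates the PassThru(...) accessor when the indicator compiles; cachedCalc and myComplexSeries are hypothetical names), and read cachedCalc[0] in OnBarUpdate() instead of re-running the math.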





          There is more, but I need to go for now. Others here have more experience than I do and can probably offer a long list of good inputs on the topic.

          HedgePlay
