Just how reliable is the data from 'Utilization monitor'?


    Hi,

    I've been working on performance for some of my indicators over the last few days, and whilst I've made use of it before, I've been using the 'Utilization Monitor' quite a lot recently. My conclusion: either it has no idea what it's talking about, or something is really wrong somewhere else! Whilst it may give a general idea, I think it is completely missing some things and reporting incorrectly, unless you can offer an explanation to help me understand what might be going on.

    For instance: if I bring up the Utilization Monitor, let it run for 10-20 seconds, I get a reasonable set of results, with the top usage usually well in excess of the others. But, without changing anything else, if I close the monitor, reopen it and let it run another 10-20 seconds, I can get a completely different indicator at the top, supposedly consuming more resources than anything else. Now, whilst that is of course possible, I'm not sure I believe it. For instance, I have a few SMAs running, and they are often reported as using more resources than the volume profiles.

    Here's another example. Of all the charts I have open (about 14), there is one arrow on one chart only (shown), and it's reported as being the top resource user. How can this be? It makes very little sense to me. That colour line has 1300 bars displayed, and still the arrow is supposedly taking more resources than that, than the three volume profiles on that chart, and so on.

    I've taken to adding stopwatch code to my own indicators to profile their functions properly, but I want to report this as feedback and get some response on what you think might be going on. Clearly, though, for the average user, I don't think the monitor is useful in this instance, unless something else is very wrong that might be affecting other things too? Thanks.
    [Attached image: NinjaTrader_3wT7kSsCmy.png]
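    For reference, this is roughly the kind of stopwatch timing I mean. It's only a minimal sketch; DoHeavyWork() is just a placeholder for whatever part of the indicator is actually being measured.

    Code:
    // Accumulate time spent in one suspect method and print it every so often.
    private readonly System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();

    protected override void OnBarUpdate()
    {
        sw.Start();
        DoHeavyWork();                 // the section being profiled (placeholder name)
        sw.Stop();

        if (CurrentBar % 500 == 0)     // print occasionally so the Output window isn't flooded
            Print(Name + ": " + sw.ElapsedMilliseconds + " ms accumulated over " + CurrentBar + " bars");
    }

    private void DoHeavyWork()
    {
        // ... the calculation being timed ...
    }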

    #2
    Hello pjsmith,

    Thank you for your reply.

    The Utilization Monitor shows only how long processing on that particular NinjaScript item has taken since the Utilization Monitor was opened, not since the script started running. So, for example, let's say we have an indicator that has to process a lot of data when it starts but then simply updates once per bar in real time. If we start the Utilization Monitor before the indicator is applied, we'd likely see completely different values than if we closed the monitor and reopened it after real time is reached.

    It's really meant as a bit of a snapshot to show what's currently taking up the most time. I don't really get worried about anything in the monitor unless the Total Time reported for one item is orders of magnitude higher than everything else, and that's really what it's meant for: so you can tell if there's something wildly out of the ordinary and identify which indicator or item it is.

    Please let us know if we may be of further assistance to you.
    Kate W.
    NinjaTrader Customer Service



      #3

      I use the Utilization Monitor all the time and find it quite valuable.

      I'll write a reply that I hope helps other developers in the community, but I don't consider myself an expert with the NinjaScript Utilization Monitor, so please continue looking for additional input.


      A few very subjective (not scientifically measured) practical conclusions from my use.
      • I feel as though not 100% of NT8's CPU use is represented within the Utilization Monitor output. That is OK; I acknowledge this and still find good value in using the tool.
      • Short-term use may be accurate in spurts, but for my purposes longer-term use produces more reliable information. Sometimes it appears as though there is a rotation between charts and indicators that seem to be drawing a disproportionate amount of CPU relative to their equal peers (see "*Why?" below), so hours of use rather than minutes helps to normalize the data.

      *Why? I assume it's at least in part due to:
      1. Multi-threading process management.
      2. Indicators and strategies using shorter time-frame BarsArrays seem to carry more workload when using multiple time periods with the same instrument (in the guide, see designing for Real-Time for the workflow from faster to slower periods in the BarsArray of the same instrument).
      3. Shared use of a common dependency. For example, say 8 indicators all internally call EMA against the same data set and period. The first indicator that calls EMA pays more of the utilization burden than the following indicators, which leverage the first indicator's work by pulling data still in the stack, nearby heap memory or a cache rather than computing it on their own (see the sketch just after this list).
      4. Again, these are quick, untested assumptions, so please discard them in favour of any more authoritative answers you see from the NT team.
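      To picture what I mean in #3 (again, only my assumption about how the shared work gets attributed, not behaviour NinjaTrader has confirmed), imagine several separate indicators on the chart that each contain the same call in OnBarUpdate; the 20-period EMA here is just an example:

      Code:
      // Fragment of each hypothetical indicator class, all using identical inputs.
      // My assumption: whichever script evaluates this first pays the bulk of the
      // EMA calculation cost, so the Utilization Monitor charges most of that
      // shared work to one script while the later callers look comparatively cheap.
      protected override void OnBarUpdate()
      {
          double ema = EMA(Close, 20)[0];   // identical call in every one of the indicators

          // ... the rest of that indicator's own logic ...
      }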

      So, given all the above, how do I use the NinjaScript Utilization Monitor to my advantage?

      I leverage a little product-development testing experience. It's simpler than it might look on a first read.


      Fast batch utilization analysis to make sure I am writing and deploying healthier code. Let's say I want a serious, no-BS test of 5 bulky indicators/strategies that have a number of common dependencies on EMA, StdDev and Swing.


      EXAMPLE:
      Let's get this testing done fast, so...
      1. Open a new NT8 workspace that includes only the Control Center and a NinjaScript Output window.
      2. Double-check that you have closed all other workspaces.
      3. Exit NT8 and reopen.
      4. Create a chart on a time period one third of the one I plan to use (a faster chart means a faster test).
      5. To tone down the outsized impact Candlesticks have on utilization monitoring, press CTRL-F to open the Data Series properties and change the Chart Style property from "Candlestick" to "Line on Close".
      6. Use only one time period on the chart to keep the BarsArray workflow implications apples to apples (see #2 above).
      7. To address #3 above, first add EMA, StdDev and Swing to the chart and rename their labels Sacrifice-EMA, Sacrifice-StdDev and Sacrifice-Swing, so I can both see what utilization impact they have and keep them from corrupting my results by unfairly punishing the first indicator that calls them.
      8. Now add the five indicators and strategies to be tested.
      9. Right-click the red chart label in the upper left to duplicate the chart, and in this second chart move the topmost of the five indicators under test to the bottom.
      10. Ensure first-to-last processing order is not corrupting the utilization results by continuing to reorder the five indicators under test in each copy of the chart. Duplicate the second chart and reorder the tested indicators in this new third chart. Repeat the process twice more, so that to test your five indicators you will have created five charts, each with the sacrificial indicators at the top and, below them, the five indicators under test in a unique order on each chart.
      11. After each chart has fully completed initialization and loading of historical data, right-click on the Output window to open the NinjaScript Utilization Monitor window.


      Now you have five charts where initial-load and positioning advantages have been normalized. Short-term results are still less likely to be fully accurate, so let these run for some period of time.

      Want to go faster?
      1. Kill the NinjaScript Utilization Monitor window you just opened.
      2. Duplicate those five charts and switch the copies to a second symbol on a different time frame.
      3. Want still faster testing? Make a third copy of those charts with a third symbol and a fast time frame that still represents the price action and volume characteristics you will see in production. However, be careful to ensure NT8 always remains lightly loaded and spry overall; if NT8 is struggling at all, it may destabilize the accuracy of your test results.
      4. After the new charts have all stabilized, start up a new NinjaScript Utilization Monitor window.

      To date, the above approach has almost always produced utilization analysis I felt was adequate for the design questions I was asking at the time.


      "Well thanks for writing all that but to me this looks like too much work to use repetitively. I need something faster."
      This method is a bit of work to set up the first time.

      The good news is that setup for the 2nd through 100th test cycle can go much faster. While setting up this test harness the first time, upon completing step 8 above (with just the first chart completed), save the workspace, putting the primary dependencies (indicators and strategies) and the date in the workspace name.

      I find these saved Workspaces very fast to leverage for future test cycles.

      -----------------------------

      Note: the analysis you need at any given moment might not fit or require all of the above, so modify, trim and improve it as best serves your needs.


      Would love to hear recommendations and best practice learnings from others as well.


      HedgePlay
      Last edited by hedgeplay; 03-09-2021, 11:54 AM.
