One common criticism I've seen is processing speed and memory consumption. I realize there's simply a lot going on... you can trade memory for CPU, or CPU for memory... ultimately, the same calculations have to happen somehow.
I imagine this criticism only applies to people doing something insane (like me running a 22,000-permutation optimization on a dozen instruments over a 12-month period... yes, my computer got a little hot). The average strategy wouldn't need the extra horsepower; only the ones that do a lot of computation in a short period of time would.
There is a solution to this criticism: GPU integration. It is NOT an easy solution... but it's worth considering, since I'd estimate a good 90% of the workload in my strategies is pure floating-point math (I can't speak for others, but I'd guess the same holds in most cases). That kind of work is exactly what GPUs are built for.
I've done some CUDA programming, and I've also used other applications that implement it (password hash cracking, for example), and there is simply no comparison... it's like racing a bicycle against a Ferrari. The catch is that it would probably be a huge amount of work: you'd have to rewrite massive amounts of code, and only the people with compatible GPUs would benefit.
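To make the idea concrete, here's a minimal sketch of the kind of thing I mean: a hypothetical CUDA kernel that backtests a toy moving-average crossover across thousands of parameter permutations, one thread per permutation. To be clear, none of this is the actual engine's code... the kernel, the strategy, and every name in it are made up purely for illustration, but it shows why permutation optimization is embarrassingly parallel.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical sketch: one CUDA thread per parameter permutation.
// Each thread backtests a toy moving-average crossover over the same
// price series and writes its P&L. Strategy and names are illustrative.
__global__ void evalPermutations(const float *prices, int nBars,
                                 const int *fastLens, const int *slowLens,
                                 float *pnl, int nPerms)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= nPerms) return;

    int fast = fastLens[p], slow = slowLens[p];
    float position = 0.0f, equity = 0.0f;

    for (int i = slow; i < nBars; ++i) {
        // Naive O(n) moving averages -- fine for a sketch.
        float fastMA = 0.0f, slowMA = 0.0f;
        for (int k = 0; k < fast; ++k) fastMA += prices[i - k];
        for (int k = 0; k < slow; ++k) slowMA += prices[i - k];
        fastMA /= fast; slowMA /= slow;

        // Mark the old position to market, then take the new one.
        equity += position * (prices[i] - prices[i - 1]);
        position = (fastMA > slowMA) ? 1.0f : -1.0f;
    }
    pnl[p] = equity;
}

int main()
{
    const int nBars = 5000, nPerms = 22000;

    // Synthetic random-walk prices, just so the example runs.
    float *hPrices = (float *)malloc(nBars * sizeof(float));
    hPrices[0] = 100.0f;
    for (int i = 1; i < nBars; ++i)
        hPrices[i] = hPrices[i - 1] + ((float)rand() / RAND_MAX - 0.5f);

    // Enumerate fast/slow length pairs as the "permutations".
    int *hFast = (int *)malloc(nPerms * sizeof(int));
    int *hSlow = (int *)malloc(nPerms * sizeof(int));
    for (int p = 0; p < nPerms; ++p) {
        hFast[p] = 5 + p % 100;             // fast length 5..104
        hSlow[p] = hFast[p] + 10 + p / 100; // slow always > fast
    }

    float *dPrices, *dPnl; int *dFast, *dSlow;
    cudaMalloc(&dPrices, nBars * sizeof(float));
    cudaMalloc(&dFast, nPerms * sizeof(int));
    cudaMalloc(&dSlow, nPerms * sizeof(int));
    cudaMalloc(&dPnl, nPerms * sizeof(float));
    cudaMemcpy(dPrices, hPrices, nBars * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dFast, hFast, nPerms * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(dSlow, hSlow, nPerms * sizeof(int), cudaMemcpyHostToDevice);

    // All 22,000 backtests run concurrently across the GPU.
    int threads = 256, blocks = (nPerms + threads - 1) / threads;
    evalPermutations<<<blocks, threads>>>(dPrices, nBars, dFast, dSlow, dPnl, nPerms);
    cudaDeviceSynchronize();

    float *hPnl = (float *)malloc(nPerms * sizeof(float));
    cudaMemcpy(hPnl, dPnl, nPerms * sizeof(float), cudaMemcpyDeviceToHost);

    int best = 0;
    for (int p = 1; p < nPerms; ++p)
        if (hPnl[p] > hPnl[best]) best = p;
    printf("best: fast=%d slow=%d pnl=%.2f\n", hFast[best], hSlow[best], hPnl[best]);

    free(hPrices); free(hFast); free(hSlow); free(hPnl);
    cudaFree(dPrices); cudaFree(dFast); cudaFree(dSlow); cudaFree(dPnl);
    return 0;
}
```

The point of the sketch is the shape of the problem, not the strategy: every permutation is independent, reads the same price data, and writes one number, so there's no synchronization between threads at all. That's the best case for a GPU, and it's why the speedup over looping on a CPU can be so lopsided.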
This isn't really a request... I'm not hitting any walls with anything I've been able to cook up yet. But if you want to dispel the naysayers on that point, GPU support would put the entire matter to rest in a heartbeat.