
Practical limitations of neural net extrapolation: Revision

22 September 2011

 

I would just like to point out something very important about applying the uncertainty principle to technical analysis, and to the Elliotware approach in particular. Some may think that if they can use advanced machine learning solutions, they can ignore common technical analysis altogether. That may be true when the model is based on tick data: in high-frequency trading there are other rules and principles. However, as we move to higher time frames, technical analysis becomes important. What do I mean? I mean precisely the support and resistance zones.

Those zones are known as decision zones. That means a decision has to be made there.

And here is an analogy with the uncertainty principle (a quote from Wikipedia):

In quantum mechanics, the Heisenberg uncertainty principle states a fundamental limit on the precision with which certain pairs of physical properties of a particle, such as position and momentum, can be simultaneously known. In other words, the more precisely one property is measured, the less precisely the other can be controlled, determined, or known.

Well, this is just an analogy. In technical analysis, when we are close to such a decision zone, the scenarios are clearly measured and predetermined: the price is either doing this or that. But while we know the scenarios precisely, we do not know the actual direction.

And vice versa: when we are away from those decision zones, we may know the general direction of the market very well, but we have no clue about the trajectories.

And it is in that particular situation that neural nets, with their extrapolating abilities, give us help: when the direction is clear but the scenarios are unclear.

If you are clearly near a decision zone, the performance of the neural net will depend more on luck than on anything else.

Here I would like to post my screenshot. Here we have two scenarios, and they are clearly cut: one goes up, the other goes down to the channel. From the downside of the channel a new decision has to be made.

I used BPNN Caterpillar with the same training period on the 30-minute time frame. Both of the BPNNs used 40 lag bars; however, one had 3 computations and the other 5. One of the nets was pointing up, the other down.

I mean that a slight change of the inputs gives a totally different prediction. If we consider that we are in a decision zone, I think that is normal: the outcome depends on which inputs you choose, and that is essentially luck.
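To make this sensitivity concrete, here is a minimal sketch of the setup described above: two small backpropagation nets trained on the same 40-lag-bar inputs, differing only in a small configuration detail, each producing its own next-bar forecast. This is not the BPNN Caterpillar indicator itself; the synthetic price series, the plain one-hidden-layer architecture, and the use of hidden-layer size (3 vs. 5) as the stand-in for the "computations" setting are all my assumptions for illustration.

```python
# Sketch: two backprop nets on the same 40-lag inputs (not BPNN Caterpillar).
import numpy as np

rng = np.random.default_rng(0)

# Assumed synthetic stand-in for 30-minute closing prices: random walk.
prices = np.cumsum(rng.normal(0.0, 1.0, 500)) + 100.0
returns = np.diff(prices)

LAG = 40  # lag bars, as in the post

# Training pairs: the last LAG returns predict the next return.
X = np.array([returns[i:i + LAG] for i in range(len(returns) - LAG)])
y = returns[LAG:]

def train_mlp(hidden, epochs=200, lr=0.01, seed=1):
    """Train a one-hidden-layer tanh net with plain batch backprop."""
    r = np.random.default_rng(seed)
    W1 = r.normal(0, 0.1, (LAG, hidden)); b1 = np.zeros(hidden)
    W2 = r.normal(0, 0.1, (hidden, 1));   b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                 # forward pass
        pred = (h @ W2 + b2).ravel()
        err = pred - y                           # squared-error gradient
        gW2 = h.T @ err[:, None] / len(y)
        gb2 = err.mean(keepdims=True)
        dh = (err[:, None] * W2.T) * (1.0 - h ** 2)
        gW1 = X.T @ dh / len(y)
        gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

def forecast(params, last_lags):
    """Extrapolate one bar ahead from the most recent LAG returns."""
    W1, b1, W2, b2 = params
    return float(np.tanh(last_lags @ W1 + b1) @ W2 + b2)

last = returns[-LAG:]
net_a = train_mlp(hidden=3)  # assumed analogue of "3 computations"
net_b = train_mlp(hidden=5)  # assumed analogue of "5 computations"
fa, fb = forecast(net_a, last), forecast(net_b, last)
print(f"net A forecast: {fa:+.4f}, net B forecast: {fb:+.4f}")
```

Run it a few times with different seeds for the price series: near a choppy stretch of the random walk the two forecasts often disagree in sign, which is exactly the decision-zone behavior described above, where the prediction hinges on an essentially arbitrary configuration choice.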