
Neural Networks: From Theory to Practice for mql5

By JohnLast 2685 days ago Comments (4)
http://www.mql5.com/en/articles/497

In this excellent article you can see an implementation of neural network code for MetaTrader 5. The article explains all the basic concepts of neural networks in a user-friendly manner, and at the end it gives code for a neural net implementation.

This code is different from the basic Reshetov code we used to build our networks. In fact, I was expecting somebody in the mql5 community to do this.

Finally we have an mql5 template for neural net expert advisors. I think we can improve it, because the standard neural net has known weaknesses.

As you can see, you end up with many weights to optimise. The common weakness of this standard architecture is that it produces overfitted models that fail out of sample without much effort. Take the article for what it is: a brilliant, concise explanation and a very good code implementation.

This last comment reflects just my understanding of the process; see my blog post for more detail.

 

Comments

  • jaguar1637 2675 days ago

    Yes, I saw this NN.

    In my opinion, it works because the input values come from a fractal indicator.

    If you use AC, as Reshetov tries, you first have to calculate the weights applied to the NN with a Genetic Algorithm.

    MT5 can provide more features, but without a strategy behind it, beginners can only copy the stuff and play with it like a toy.

  • JohnLast 2674 days ago

    I was thinking that the RSI is used. Are you referring to this line:

    inputs[i]=(((iRSI_buf[i]-x_min)*(d2-d1))/(x_max-x_min))+d1;

    For me, the main problem with this is that it runs on every tick (correct me if I am wrong).

    When you run on every tick you have to process a lot of information, and when you use the genetic optimizer you make things very tough for it. That is not even counting the fact that simulations for tick-based experts are quite unreliable.

    The second thing is at the end of the article: you cannot deduce from in-sample tests that a neural net EA performs better. The truth is that with enough weights you can optimize (curve fit) past history so well that it looks like a miracle.

    However, those neural nets fail miserably out of sample, and people keep doing this because they love the concept behind it. Creating a stable artificial intelligence model is very hard.

    Here I will cite an opinion from someone on LinkedIn, who finds that "the majority of researchers attempting to stabilise evolutionary trading models get the representation wrong (input data, filters and the fitness function) due to their lack of understanding of the market microstructure and unrealistic expectations arising from it (e.g. predicting price). Building a robust stationary model that has a quantifiable edge helps reducing the search space and defining the representation that can be subsequently employed in the adaptive layer of the system. The latter becomes the icing on the cake. :-)" (Vlad Shurupov).

    As for this code, it is a brilliant example of how the computer language is used to implement the neural net model: the use of arrays and loops.



     

  • jaguar1637 2674 days ago

    Yes, John

    the RSI is used by this line:

    inputs[i]=(((iRSI_buf[i]-x_min)*(d2-d1))/(x_max-x_min))+d1;

    As for the main problem being that it runs on every tick (correct me if I am wrong) => Yes, it's true. A NN should run only once, when a new bar opens, using values from the last completed bars, not during the bar.

     

  • JohnLast 2674 days ago

    I think that this is important for two reasons:

    - quality of the simulation

    - quality of training: for every pass the genetic optimizer would need to process every tick; on the one hand this can add a lot of noise, and on the other it will make the training process very long and difficult.