# Perceptron and Kernel Indicator mods


By JohnLast 2687 days ago Comments (26)

The recent indicators based on the different perceptrons make a breakthrough in the way we can see indicators. Why?

- We can combine and visualize different indicators into one indicator.

- We can combine different parameters of the same indicator into one composite indicator.

- We can make a combination of the two previous possibilities.

Really we are limited only by the imagination.

However, the basic paradigm is that an indicator by itself is not everything.

In fact, an indicator is only a visualization of algorithmic mathematical operations. So an indicator by itself is not a trading system.

The current tests show that when we use perceptron (kernel) indicators we can achieve somewhat better results than by the use of simple indicators. We consider that to be an edge, and of course we hope so.

The combination of those perceptron indicators and the native genetic algorithm has revealed itself to be a very, very good match. In that way we have a very simple algorithm directly in MetaTrader 4, without the need of exporting and importing neural nets and trading data.

For all that, we owe a big thank you to Reshetov, who implemented the first linear perceptron in MT4.

Well, as you can see, his Expert Advisor Artificial Intelligence is the second-rated EA on the official MQL site. And as we develop on that basic code, we can say that we have here the best for MetaTrader 4 ;).

So what can we do from here?

1. We can combine more and more different things.

2. We can integrate those ideas really easily into existing code and see if we get a boost in performance.

I want to make special thanks to Jaguar and Jack1 for their help and contribution.

## Comments

Yep, the example is from a perceptron with a linear function:

```mql4
//+------------------------------------------------------------------+
//| The PERCEPTRON - a perceiving and recognizing function           |
//+------------------------------------------------------------------+
double perceptron_AC()
{
   double w1 = t5 - 100;
   double w2 = t6 - 100;
   double w3 = t7 - 100;
   double w4 = t8 - 100;
   double a1 = iAC(Symbol(), 0, 0);
   double a2 = iAC(Symbol(), 0, 7);
   double a3 = iAC(Symbol(), 0, 14);
   double a4 = iAC(Symbol(), 0, 21);
   return(w1 * a1 + w2 * a2 + w3 * a3 + w4 * a4);
}
```

We also have a perceptron with a Gauss function:

```mql4
double perceptronsto(int j)
{
   double w1 = MathExp(-y1*y1/2);
   double w2 = MathExp(-y2*y2/2);
   double w3 = MathExp(-y3*y3/2);
   double w4 = MathExp(-y4*y4/2);
   double p1 = iStochastic(NULL, 0, KPeriod, DPeriod, slowing, Stoch_Method, PRICE_CLOSE, MODE_MAIN, y5+j) - 50.0;
   double p2 = iStochastic(NULL, 0, KPeriod, DPeriod, slowing, Stoch_Method, PRICE_CLOSE, MODE_MAIN, y6+j) - 50.0;
   double p3 = iStochastic(NULL, 0, KPeriod, DPeriod, slowing, Stoch_Method, PRICE_CLOSE, MODE_MAIN, y7+j) - 50.0;
   return(w1*w2*w3*w4*p1 + w1*w2*w3*w4*p2 + w1*w2*w3*w4*p3);
}

double perceptronpfe(int l)
{
   double w1 = MathExp(-x1*x1/2);
   double w2 = MathExp(-x2*x2/2);
   double w3 = MathExp(-x3*x3/2);
   double w4 = MathExp(-x4*x4/2);
   double p1 = iCustom(NULL, 0, "PFE", x5, true, 5, 0, x5+l) * 50;
   double p2 = iCustom(NULL, 0, "PFE", x6, true, 5, 0, x6+l) * 50;
   double p3 = iCustom(NULL, 0, "PFE", x7, true, 5, 0, x7+l) * 50;
   return(w1*w2*w3*w4*p1 + w1*w2*w3*w4*p2 + w1*w2*w3*w4*p3);
}
```
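For anyone without MetaTrader at hand, the arithmetic of the Gauss-function perceptrons above can be sketched in Python. The hard-coded `y` and `p` values below are stand-ins for the optimized parameters and the indicator readings, not values from the original EA:

```python
import math

def perceptron_sto(y, p):
    """Gauss-kernel perceptron: each optimized parameter y[i] becomes a
    weight exp(-y^2/2), and the product of all four weights multiplies
    each centered indicator reading, as in the MQL4 code."""
    w = [math.exp(-yi * yi / 2) for yi in y]
    gain = w[0] * w[1] * w[2] * w[3]
    return sum(gain * pi for pi in p)

# Stand-in values: y = optimizer parameters, p = stochastic readings minus 50
out = perceptron_sto([0.0, 0.0, 0.0, 0.0], [10.0, -5.0, 20.0])
print(out)  # all weights are exp(0) = 1, so the output is 10 - 5 + 20 = 25
```

Note how the Gaussian weights act as a gate: the larger the optimized parameters in absolute value, the more the whole signal is damped toward zero.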

and also a perceptron with a chaotic kernel.

I did a study of the correlation of those perceptron indicators to the price (posted on the website http://beathespread.com), and the best perceptrons are, in order:

1. perceptron_DPO
2. perceptron_WPR
3. perceptron_PFE
4. perceptron_Fisher_Transform
5. perceptron_Stochastic

So, now, the last check (this is for Jack) is to choose the best perceptrons for entry signals and the best perceptrons for exit signals. Is anyone able to perform this study?

In fact, 4 situations:

Maybe develop a perceptron for TP and also another one for SL.

Every combination is possible, it is just a tool, the limits are only set by the imagination.

Yes,

Now, for sure, to open BUY or SELL trades, Perceptron_iVAR must be used to allow trades (it's logical). So, I will build a Perceptron_iVAR this morning.

I would prefer to widen my scope a little bit and to look again towards general chaotic analysis.

As we have discussed before there are some market conditions that are more or less locally predictable.

I think that there are two main ways:

1. To identify when your EA is going to work well regarding the market conditions.

a. So you turn on and off the EA based on the market conditions.

b. You adapt and optimize on the local conditions

c. A combination of a and b

2. You have a panel of complementary experts (EA experts that are working in synergy) that are working all the time.

So those are my basic assumptions. I think it is impossible to have a single EA that performs all the time, staying 100% of the time in the market, and which performs in all market conditions with a single set of parameters.

You are right. There are many phases inside a market. So, the first goal is to build a PERCEPTRON EA for Trending hours.

From this classification I intentionally exclude the EA experts that use mainly money management techniques in order to maintain an upward equity curve.

The case of the money management EA is a different issue, as there the primary focus is on risk control.

I have prepared and tested some mods of Spinal Implant and PNN based on a modified Martingale that do work but I do not publish them for the general public because they are dangerous.

Yes, I thought exactly the same (it's amazing). This stuff cannot be released now because it's too dangerous!

And most of the EAs commercially available and on the 3rd-party signal services are exactly of this kind.

They are plug and play and deliver good results in the short (even medium) term, but you never know when and how the time bomb will go off.

Open Buy => Perceptron_DPO: the curve crosses the line above 0

Close Buy => Perceptron_PFE the curve crosses the line below 0

Open Sell => Perceptron_PFE: the curve crosses the line below 0, and Perceptron_DPO: the curve crosses the line below 0

Close Sell => in testing

Regarding Reshetov, look at his own website (with Google Translate):

http://reshetov.xnet.uz/

This paper examined the theoretical and practical possibilities of an expert system that calculates the probability over a complete set of inconsistent rules. (That is what is missing for the perceptron from beathespread.)

The whole paper describes the calculation process.

I think I would need to look at that more carefully. If there is something you do not understand in Russian, you can ask.

I think it would be nice to get in contact with Reshetov.

If you speak Russian, it would be better for you to get in contact with Reshetov.

My questions are about how to tune the parameters of the perceptrons we built.

My answer is that the genetic optimizer tunes the parameters of the perceptrons.

The indicators are just a visual representation.

However, you can see the values from the EAs (when the indicator is an exact copy of the EA's perceptron) and put them manually into the indicator settings.
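The tuning loop can be pictured with a toy stand-in: a plain random search, in place of the MetaTrader genetic optimizer, over the weights of a linear perceptron on made-up indicator data. Everything here (the data, the fitness metric, the search) is illustrative:

```python
import random

def perceptron(w, a):
    # Linear perceptron: weighted sum of indicator readings
    return sum(wi * ai for wi, ai in zip(w, a))

def fitness(w, samples):
    # Toy stand-in for the strategy tester's profit metric:
    # count how often the perceptron's sign matches the desired signal
    return sum(1 for a, sig in samples
               if (perceptron(w, a) > 0) == (sig > 0))

random.seed(1)
# Hypothetical training data: (indicator vector, desired signal)
samples = [([random.uniform(-1, 1) for _ in range(4)],
            random.choice([-1, 1])) for _ in range(50)]

best_w = [0.0] * 4
best_f = fitness(best_w, samples)
for _ in range(2000):  # random search in place of MT4's genetic optimizer
    cand = [random.uniform(-100, 100) for _ in range(4)]
    f = fitness(cand, samples)
    if f > best_f:
        best_w, best_f = cand, f
print(best_f)
```

The real optimizer is smarter than pure random search, but the shape is the same: propose parameter sets, score each one with the tester, keep the best.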

I have no idea how to contact Mr. Reshetov, maybe if I write a short article on the mql forum he could contact us.

ok

I found a way to optimize perceptrons. I will post some examples right now

I think I have found an error in the code of the perceptrons. In fact, the mistake is not a mistake.

What I mean is that we have built something called a hyper neural network instead of a normal neural network. As it is written in a Master's thesis:

"When one searches for the information on hyper neural networks he basically doesn't find much. This is caused by the fact that the idea of hyper neural network is relatively new.

The basic idea is in representing the weights, that belong to interconnections in the neural network, by functions. The learning process then doesn't happen directly on the level of weights like it is common in normal neural networks, but instead it happens by changing the function that is used to generate the weight every time when its value is needed."

So where is the difference?

What we did is:

Here, as you can see in this line: double w1 = 1/(1+MathExp(-x1));

That means we train directly the weight as a function.

So, if I am correct, the traditional neural network would be

double w1 = 1/(1+MathExp(-x1*p1));

and then we return just the weight:

return(w1);

There is a minor difference, but still it is a difference from the standard model. I did it because it seemed the right thing to do in my mind, deviating along the way from the standard neural net architecture. But by chance I made something that already exists and is called a hyper neural network.
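The difference between the two variants can be made concrete in a few lines of Python; `hyper_neuron` and `normal_neuron` are my names for the two schemes discussed above:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def hyper_neuron(x, p):
    # Hyper network: the weight itself is generated as a function of the
    # optimized parameter x, then multiplied by the indicator value p
    w = sigmoid(x)          # double w1 = 1/(1+MathExp(-x1));
    return w * p

def normal_neuron(x, p):
    # Standard neuron: weight times input first, then the transfer function
    return sigmoid(x * p)   # double w1 = 1/(1+MathExp(-x1*p1));

x, p = 5.0, 0.5
print(round(hyper_neuron(x, p), 3))   # 0.497
print(round(normal_neuron(x, p), 3))  # 0.924
```

Same parameter, same input, clearly different outputs: in the hyper variant the optimizer shapes the weight directly; in the standard one it shapes the pre-activation.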

However, by using a chaotic function as the weight, I think I made something even more different.

From the analysis of the literature, a chaotic function used as a weight does exist. The model is called a chaotic neuron. You can see a visual example here.

The model was created by Dr. Kazuyuki Aihara. However, it is something different from what we did.

The Aihara model uses chaos to kick the neural network out of a local optimum: chaotic annealing. There is another interesting paper about a neural network based on chaos. In that paper a chaos optimization algorithm is used.

So from what I can see thanks to google, I have done something else.

I made a hyper neural network and used the logistic map directly to model the weight. The learning process then does not happen directly on the level of weights as is common in normal neural networks.
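As a sketch (the names are mine, not from the MQL4 code), generating a weight from the logistic map could look like this, with the optimizer tuning the map parameter r:

```python
def logistic_map_weight(r, w0=0.5, iterations=10):
    """Generate a weight by iterating the logistic map w <- r*w*(1-w).
    The genetic optimizer would tune r (and possibly w0 or the number
    of iterations); this function is illustrative only."""
    w = w0
    for _ in range(iterations):
        w = r * w * (1.0 - w)
    return w

# For r in [0, 4] and w0 in (0, 1) the weight stays inside [0, 1];
# below about r = 3.57 the map settles down, above it the orbit is chaotic
for r in (2.5, 3.2, 3.9):
    w = logistic_map_weight(r)
    assert 0.0 <= w <= 1.0
    print(r, w)
```

At r = 2.5 the iteration converges to the fixed point 1 - 1/r = 0.6, while at r = 3.9 the weight depends sensitively on every parameter, which is exactly what lets the optimizer explore very different weight values through one knob.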

The learning process happens by directly changing the logistic map that is used to generate the weight every time its value is needed, using the MetaTrader native genetic algorithm.

## Chaotic neuron model

A chaotic neuron model that qualitatively reproduces chaos in nerve membranes observed in squid giant axons and the Hodgkin-Huxley equations can be described by the following simple 1-dimensional map (Aihara et al., 1990):

y(t+1) = k·y(t) − α·f(y(t)) + a

where y(t) is the internal state at discrete time t, k is the decay factor, α is the scaling factor of refractoriness, a is the bias including both the threshold and the temporally constant input, and the output x(t) is obtained by x(t) = f{y(t)} with, for example, the logistic function

f(y) = 1 / (1 + exp(−y/ε))

with steepness parameter ε.

Figure 4 shows the shape of the chaotic neuron map, which is bimodal and similar to return plots of the data of squid giant axons and the Hodgkin-Huxley equations (Aihara, 1994, 1995; Mees et al., 1992). The chaotic neuron map is a piecewise monotone continuous map with two turning points and (+−+) signs of the slope on the three monotonic intervals (Aihara et al., 1990). While the derivatives on the two (+)-segments, which correspond to resting and firing, are between 0 and 1, those on the middle (−)-segment can be less than −1 if the steepness parameter ϵ of the output function is sufficiently small. This middle branch corresponds to the continuous type of threshold separatrix peculiar to excitable nerve dynamics (FitzHugh, 1969), which causes orbital instability or the butterfly effect of chaos. Chaotic neural networks composed of the chaotic neurons generate various kinds of spatio-temporal chaos with abilities of parallel distributed processing (Aihara, 2002).
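The map can be iterated directly. The parameter values below are illustrative picks chosen only to show a bounded orbit, not values taken from the cited papers:

```python
import math

def aihara_step(y, k=0.7, alpha=1.0, a=0.35, eps=0.02):
    """One step of the chaotic neuron map:
    y(t+1) = k*y(t) - alpha*f(y(t)) + a, with logistic output f."""
    f = 1.0 / (1.0 + math.exp(-y / eps))
    return k * y - alpha * f + a

y = 0.1
outputs = []
for _ in range(100):
    y = aihara_step(y)
    # output x(t) = f(y(t)) via the same logistic function
    outputs.append(1.0 / (1.0 + math.exp(-y / 0.02)))

# The internal state stays bounded and the output stays in (0, 1)
assert all(0.0 < x < 1.0 for x in outputs)
print(min(outputs), max(outputs))
```

With a small ε the output flips between near-firing and near-resting values without ever settling into a fixed cycle, which is the orbital instability the text describes.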

John

I appreciate this X-Mas gift

thanks again , I will modify the EAs

rgds

When I checked out the website about chaos and nonlinear dynamics, it was exactly what I was looking for.

Indeed, I did not sleep last night trying to figure those things out. The second code is the normal neural network model.

That is because we first have a summation of inputs, then we multiply by the weight, and finally we apply the transfer function. What I did was a hyper network; in the hyper network we vary the weight as a function by searching the values through the genetic optimizer.

You can test it yourself; I think both models work, the normal one and the old one.

However, the hyper chaotic neural network is superior to the others in my tests.

I will read the article some more; in fact, I am interested in those concepts of breadth-first and depth-first.

Then, after that, I will apply the chaotic transfer function of Aihara. I am interested in whether that transfer function will make chaotically different versions of the EA's activity; in that way we could eliminate the luck component, making tests with the same idea as Monte Carlo simulations. In a Monte Carlo simulation we make multiple hypothetical versions of the market and try to see if the model performs on them.

In that model, thanks to the neural activation function, we can make multiple versions of the EA and then see if it is robust enough.

I am not sure if that is very clear after all.

well,

there is another thing. Luis Pinc got a wonderful idea.

Luis Princ wants to compile this new EA to Java and run it in the contest from DukasCopy.

How can we make this happen? Who is able to write in the Java language?

Regards

I just noticed something else. Please check this link for additional information:

http://beathespread.com/file/view/8954/spinal-implant-sigm-ea-with-sigmoidal-transfer-function

The issue here is that if we apply the normal neural net model, we multiply the weight by the input, and we need to take into account the scaling that is necessary.

I will give an example with the rational function because it is easier. Check the image to see how the function varies based on x.

double w1 = x1/(1+MathAbs(x1)); // MathAbs means we get the absolute value of x

Here we vary the x parameter from -10 to 10 in order to get the proper scaling between minus 1 and 1. We choose this because we want to use the whole red curve.

Then we multiply this by the indicator value (p). As the indicator value (p) is between -1 and 1, we match the scaling.

for example:

p = 0.5 // PFE has a value of 0.5

x = 5, then we apply the transfer function to get the weight (because it is a hyper infrastructure):

w = 5/(1+5) = 0.83 // We vary the weight according to the formula of a transfer function; that is why it is a hyper neural network.

So we get the output p*w = 0.5*0.83 = 0.415 // We do not need a transfer function anymore because we have proper scaling already.

It is OK; you may want to play with the scaling a little bit by adding a bias, but it is OK as it is for me.

The key here is that the genetic optimizer explores the whole surface of the rational function, and that is guaranteed because it is independent from the weight.

In the normal neural network we take the indicator value p, then we multiply by the weight, then we apply the transfer function to get the proper scaling, and as a result we get the output.

So let us take the same numerical example.

p = 0.5 // Indicator value is 0.5

x=5

then we multiply the indicator value by the x parameter and apply the transfer function to the result:

0.5*5 = 2.5

Then we apply the rational function, and the output is 2.5/(1+2.5) = 0.71.

As you can see, even if the x parameter is the same, the output of the neural network is different.
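Both worked examples can be verified numerically with the rational transfer function f(z) = z/(1+|z|):

```python
def rational(z):
    # Rational transfer function, bounded in (-1, 1)
    return z / (1.0 + abs(z))

p, x = 0.5, 5.0

# Hyper model: the weight is the transfer function of x alone,
# then multiplied by the indicator value p
hyper = p * rational(x)        # 0.5 * (5/6) ~ 0.4167

# Normal model: multiply first, then apply the transfer function
normal = rational(p * x)       # 2.5/3.5 ~ 0.7143

print(round(hyper, 3), round(normal, 3))  # 0.417 0.714
```

The exact values match the hand calculations above up to rounding (0.415 vs 0.417 comes only from rounding the weight to 0.83 first).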

vgc always suggests that when testing you include a lot of print functions in the test EA and do the calculations by hand, in order to check that the code's calculations are correct and to see if scaling adjustments are necessary.

As you can see, in the hyper neural model an additional use of a transfer function is not needed. In the normal neuron you apply the transfer function to get the proper scaling.

- We recommend scaling from 0 to 1 instead of -1 to 1, because nasty surprises may arise out of sample. You can easily see why neural net models tend to counter the trend (practically all the time ;) ).

If you get a weight parameter that is negative and you multiply it by an indicator value that may be positive, you get a negative output result and you counter the momentum.
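A minimal numeric check of this point (the names are mine): a weight scaled into (0, 1) can only shrink a signal, never flip its sign, while a negative weight from a (-1, 1) range reverses it:

```python
import math

def sigmoid01(x):
    # Weight scaled to (0, 1): it can shrink the signal but never flip it
    return 1.0 / (1.0 + math.exp(-x))

p = 0.8  # positive indicator reading (upward momentum)

neg_weight = -0.6                    # possible under (-1, 1) scaling
assert neg_weight * p < 0            # output counters the momentum

for x in range(-10, 11):
    assert sigmoid01(x) * p > 0      # (0, 1) weight preserves the sign
print("sign preserved for all tested x")
```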

- We recommend the use of print functions to check if everything is consistent.

- For the normal neural net, instead of a normal transfer function you can use the Aihara chaotic transfer function. You add some noise. In fact, some empirical experience shows that by adding some noise the model becomes more robust.

- Even for the hyper model we can additionally add the Aihara transfer function, but I think we need to do some rescaling of the input of the PFE.

Here is a screenshot of the Aihara chaotic neuron model and the bifurcation diagram.