The answer to this question seems to be to use technical indicators in other ways. The key phrases are:

**predictive** turning times;

**predictive** institutional buy and sell times of algorithms;

**predictive** institutional support and resistance zones;

**predictive** algorithmic time studies.

The most common word is *predictive*. So the question becomes:

**How does a technical indicator become predictive?**

Technical indicators become predictive through an activity called data mining.

What they are really offering is the result of their data-mining process. The technical indicators are just ways to preprocess information; this information is then fed into a data-mining routine. Eventually you get results, and those results may be yet more technical indicators or simply BUY and SELL labels.
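To make that pipeline concrete, here is a minimal Python sketch of indicators feeding a data-mining routine whose output is BUY/SELL labels. Everything here (the SMA rule, the candidate lookbacks, the synthetic random-walk prices) is a hypothetical illustration, not anyone's actual proprietary method:

```python
import random

def sma(prices, n, i):
    """Simple moving average over the n bars ending at index i."""
    return sum(prices[i - n + 1:i + 1]) / n

def mine_best_lookback(prices, lookbacks):
    """Toy data-mining routine: pick the SMA lookback whose
    'price above SMA -> long one bar' rule earned the most in-sample."""
    best, best_pnl = None, float("-inf")
    for n in lookbacks:
        pnl = 0.0
        for i in range(n, len(prices) - 1):
            if prices[i] > sma(prices, n, i):      # indicator says "up trend"
                pnl += prices[i + 1] - prices[i]   # hold for one bar
        if pnl > best_pnl:
            best, best_pnl = n, pnl
    return best, best_pnl

def label(prices, n):
    """Turn the mined rule back into BUY/SELL labels."""
    return ["BUY" if prices[i] > sma(prices, n, i) else "SELL"
            for i in range(n, len(prices))]

random.seed(1)
prices = [100.0]
for _ in range(300):                    # synthetic random-walk prices
    prices.append(prices[-1] + random.gauss(0.05, 1.0))

n, pnl = mine_best_lookback(prices, [5, 10, 20, 50])
labels = label(prices, n)
print("mined lookback:", n)
print("last 5 labels:", labels[-5:])
```

Note that this tiny routine already exhibits the data-mining bias discussed below: the "best" lookback is best only on the data it was mined from.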

The key idea behind this webinar is that data mining gives you an edge. I can challenge this and say that data mining comes with data-mining bias, and that plenty of commercial and free software has been doing this for a long time. Even the genetic algorithm in Metatrader is a decent data-mining tool. So behind this claim there is nothing but HYPE.

The real question is how you do your data mining, and that is what those guys will never tell you, because it is their proprietary secret. The reality, however, is that the market is getting more and more sophisticated.

The first technical analysis was designed to beat three different categories of market participants:

- The common investors who deploy simple strategies such as buy and hold. In that case you follow their trend: they provide the trend with their money and you just ride the tide.

- The common investors who panic and sell. A major goal of technical analysis was to analyse patterns of trend reversal. This work remains valid today.

- The smart money. This is the most difficult case: a situation where large wealth is controlled by an entity with the will to manipulate the market. It is the most challenging situation a technical analyst has to face. In fact there is no true answer for it, and that is why analysts prefer more liquid markets, which are somewhat more immune to this behaviour.

It is difficult to solve because the smart money draws the market: they can draw patterns and use them as a trap. Some approaches such as VSA (Volume Spread Analysis) were designed to counter this kind of phenomenon, but I won't bet my shirt on it.

This stuff operated in classical markets: the markets for which the fractal market hypothesis of Edgar E. Peters is valid.

"New capital-market theory that combines fractals and other concept from chaos theory with the traditional quantitative methods to explain and predict market behavior. FMH takes into account the daily randomness of the market and anomalies such as market crashes and stampedes. It proposes that a (1) market is stable and has sufficient liquidity when it comprises of investors with different time horizons, (2) these investors stay in their 'preferred habitat' (time horizon), no matter what the market information indicates, (3) the available information may not be reflected in the market prices, and (4) the market prices trend indicates the changes in expected earnings (which mirror long-term economic trends). Proposed by Edgar E. Peters, author of the 1991 book 'Chaos and order In The Capital Markets' and the 1994 book 'Fractal Market Analysis: Applying Chaos Theory to Investment and Economics.' Also called different investment horizon theory. See also capital market theories."

Read more: http://www.businessdictionary.com/definition/fractal-market-hypothesis-FMH.html#ixzz2TMARhR3X

However, with the advent of high-frequency trading this is largely challenged. In fact most of the activity now sits in just one time horizon, the low-latency horizon. Even the large investors need to time their entries within this horizon, otherwise they will get butchered by the predatory algorithms. Of course this is a big approximation and simplification.

All this use of algorithms necessitates an extensive use of data mining, and data mining with different time horizons. We can say that different investors are using data mining with different time horizons. Do you see the slight difference here? In the past the time horizons were related to buy and sell decisions. Now the time horizons are related to scheduled data-mining routines.

And those combinations of routines may explain the relative steadiness of market states in the current market.

The point as I see it is that predicting the markets using any AI brute force (brute force in the sense of relying on the abilities of the algorithm by itself) is a dicey business.

The markets are terribly complex; they are constantly seeking balance and falling out of it. So complex, in fact, that you may wonder whether the complex learning paradigms can beat buy and hold on one hand and simple linear regression on the other. Most of the time they can't pass the Monte Carlo simulations.

If that were not the case, every AI specialist would easily find his way to riches. The problem is not in the AI methods; they work fine on normal problems. The slight nuance is that you are asking them to do impossible things.

As AI models can't leave you without an answer, they will give you an answer to the question, as in the book The Hitchhiker's Guide to the Galaxy by Douglas Adams: a very complex computer is asked for the answer to the ultimate question of life, the universe and everything. It takes Deep Thought 7.5 million years to compute and check the answer, which turns out to be 42.

So the direct, brute-force application of successful, sound and proven AI does not work consistently enough.

So what can be done? On one hand, those guys are hired by the HFT quant shops, where an edge is found in the speed of execution. That edge requires deep infrastructure understanding and market-structure understanding, which are completely different fields of knowledge. Only by combining this meta-knowledge do those quant shops succeed.

Another possible way is to look towards econometrics and deep market knowledge. Then you may be able to find a good definition of the problem and apply the model correctly: the economist defines the problem, the AI specialist applies the model correctly.

Those are the two main paths I know of that lead to an edge. There may be other ways to beat the markets, but they are fringe: entropy analysis, market-state clustering, chaos theory, etc.

So then why do people stay?

One possible answer is that, beyond profitability, the quest is very interesting per se: you learn new things, and it is an excellent "game" for smart people. Eventually you find good online friends to work with on projects, thinking that the grail is right around the corner. Everything will be fine as long as you don't allow yourself a gross loss.

Start small, start smart. Take it easy.

A central question of all trading strategies is optimization. But is there something wrong with optimization, or at least with our perception of the optimization of trading strategies?

I think there is something elusive here.

I think that the main problem is that we look at the market as a problem.

Yes, I think so: we treat the market as a problem, and if there is a problem we look for a solution. Applying mathematical models to search for a solution then seems just the right thing to do.

But what if there is something wrong with our paradigm?

Of course the trading rule with the highest observed performance is likely to perform well in the future, provided we have sufficient observations and we do the performance statistics properly.

However, the problem is that the best rules are highly positively biased. Aronson writes that objective technical analysis methods "have the potential to be valid knowledge but only if back-tested results are considered in light of randomness (sampling variability) and data-mining bias."

This screenshot is from Aronson's book Evidence-Based Technical Analysis. Here you can see out-of-sample performance deterioration: the rule which performed well in the past did not perform in out-of-sample trading.

The first part is the in-sample data: the data used for data mining, or simply put, for rule back-testing. In this period you see a 50% return on investment. The out-of-sample period is a different thing: out-of-sample means the data was not used for data mining, so it reflects the actual performance.

The reality is more complex than we may assume. There is not only a problem waiting for a solution; there is something else too. That explains why data mining normally fails in out-of-sample trading. I will stop here for the moment.

From FXhackers

Larry C. Sanders addresses the topics of probability, the marble game and Monte Carlo simulation. He has also designed a software program, TradeSim, which he sells on his website. The book can be downloaded at this site for free (this is the bookmark link), and it also covers the Monte Carlo analysis of trades.

Address of the bookmark: http://www.tradelabstrategies.com/customers/tradingstrategiesrev2.pdf

This is an article that I wrote about the Monte Carlo analysis of trading results for system risk control.

Here I have compiled the best resources I've found about this aspect of risk control of trading systems.

I hope you will like it.

Monte Carlo analysis is applied for the risk control of trading systems. It can answer the question of how much risk you can face with a given trading system, and beyond what level you need to drop the system.

*"What is the last thing you do before you climb a ladder? You shake it. And that is a Monte Carlo simulation."*

Sam Savage, Stanford University

So, by analogy, the ladder is the result of the trading system's back tests.

Have a look at the picture. Here we have 10 trades of our imaginary trading system, numbered from 1 to 10. Each trade is either good or bad; what matters is that together they form an equity curve.

Imagine we have a result consisting of 10 trades. What we do is shuffle those results randomly: no trade is added or deleted, we just rearrange their positions at random. This is called selection without replacement. In other words, we change the order of the system's trade sequence.

By doing selection without replacement we preserve the probability distribution of the initial trade sequence.

If we apply this many times to the initial trading sequence we get something like the screenshot.

The beginning and the end are at the same point, because it is the same set of trades; only the paths differ.
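A minimal Python sketch of the procedure, using ten made-up trade results: the trades are shuffled many times (selection without replacement), the equity curve of each permutation is built, and the worst drawdown of each path is recorded. Every path ends at the same point because the set of trades never changes.

```python
import random

def equity_curve(trades, start=10_000.0):
    """Cumulative equity after each trade result."""
    curve = [start]
    for t in trades:
        curve.append(curve[-1] + t)
    return curve

def max_drawdown(curve):
    """Largest peak-to-valley drop along an equity curve."""
    peak, worst = curve[0], 0.0
    for x in curve:
        peak = max(peak, x)
        worst = max(worst, peak - x)
    return worst

random.seed(42)
trades = [120, -80, 200, -50, 90, -160, 240, -40, 70, 110]  # 10 hypothetical trades

drawdowns = []
for _ in range(10_000):              # many random permutations
    shuffled = trades[:]             # same trades, none added or deleted
    random.shuffle(shuffled)         # selection without replacement
    drawdowns.append(max_drawdown(equity_curve(shuffled)))

drawdowns.sort()
print("worst drawdown at 95% confidence:",
      drawdowns[int(0.95 * len(drawdowns))])
```

The sorted list of drawdowns is what lets us read off a drawdown at any confidence level, as discussed further on.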

But the next question arises: how many permutations should we do? 10, 100, 10,000 or 100,000?

The practical limits are given by statistics.

Imagine we have 1000 trades. The number of permutations is 1000! = 1000 × 999 × 998 × 997 × … × 3 × 2 × 1.

The number we get is unbelievably big: even the number of permutations of just 200 trades is bigger than the number of atoms in the observable universe. So it is obvious we are not going to do all those permutations.
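The size of these numbers is easy to verify (the 10^80 atom count is the common order-of-magnitude estimate for the observable universe):

```python
import math

n_atoms_universe = 10 ** 80          # common order-of-magnitude estimate
perms_200 = math.factorial(200)      # number of orderings of 200 trades

print(len(str(perms_200)), "digits") # 200! has 375 digits
print(perms_200 > n_atoms_universe)  # True: far beyond the atom count
```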

Fortunately there is a simple rule of thumb that helps us:

If the probability p is close to 0.5, the 95% confidence interval reduces approximately to p ± 1/√N.

Say we flip a coin 1000 times and want a 95% confidence level. The confidence level tells us how likely it is that the number of heads falls within the confidence interval around the estimate of 500.

For the Gaussian distribution we can be 95% sure that the value is accurate within about 3.1%. With 95% probability the coin will come up heads between 469 and 531 times out of the 1000 trials.

If we use the formula we get:

0.5 + 1/sqrt(1000) = 0.5 + 0.0316228

0.5 - 1/sqrt(1000) = 0.5 - 0.0316228
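A quick sanity check of these numbers, assuming a fair coin and treating 1/√N as the approximate 95% half-width:

```python
import math
import random

n = 1000
half_width = 1.0 / math.sqrt(n)          # approx 95% half-width for p near 0.5
lo, hi = 0.5 - half_width, 0.5 + half_width
print(f"interval for heads: {lo * n:.0f} .. {hi * n:.0f}")   # about 468 .. 532

# sanity check by simulation
random.seed(0)
trials = 2000
inside = 0
for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(n))
    inside += lo * n <= heads <= hi * n
print(f"fraction of trials inside the interval: {inside / trials:.2f}")
```

The simulated coverage comes out close to 95%, confirming the rule of thumb.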

So we can say with 95% confidence that the value is accurate within about 3.1%.

**Practical use of Monte Carlo simulation for traders**

So what does this mean in practice? The most important aspect of this kind of Monte Carlo analysis is the analysis of drawdowns. For example, at a given confidence level you can say what the worst drawdown can be.

Look again at the shot with the equity curves. We are interested in the worst-case scenario.

Imagine the original system has a 500 USD drawdown and a 1500 USD average profit per year. In our hypothetical example that would be the system with the blue line.

So now we are going to look at the hypothetical equity line with the worst drawdown.

If your Monte Carlo analysis shows, for example, 1000 USD as the worst drawdown at a 95% confidence level, that means there is a 5% chance of facing a 1000 USD drawdown before making any profit. At lower confidence levels the expected drawdown will be lower.

Note that this worst drawdown is still not more than the average yearly profit of the system. According to Urban Jaeckle and Emilio Tomasini it is unacceptable to have a drawdown larger than the average profit per year.

The main limitation of Monte Carlo analysis is that if your initial results are curve-fitted, your results will be nonsense. Monte Carlo analysis is only useful when applied to a sound trading system, not to an over-fitted one.

Whether your trading system is sound is another question altogether. The Monte Carlo simulation needs to be applied with care, but it is a valuable tool for the risk control of a sound trading system.

**Recommended readings**

Urban Jaeckle, Emilio Tomasini, *Trading Systems: A New Approach to System Development and Portfolio Optimization*, Harriman House, pp. 101-108

Farrell, Christopher, 'Monte Carlo Models Simulate All Kinds of Scenarios', BusinessWeek, 2001

Discrete Event Simulation: A First Course, at this link:

http://sites.stat.psu.edu/~jiali/course/stat597e/notes2/percept.pdf


*In this short blog post I want to revisit some concepts and to show you some new paths of investigation, I hope. It is known to us, at the expense of a lot of suffering and/or time (for many), that there is no holy-grail indicator, system or EA (Expert Advisor) that will make money consistently with the same set of parameters. That is not because all EA sellers are scammers, but because the market is very complex and reacts to itself. For that reason many procedures have been developed to help professional traders gain some kind of meta-knowledge about the use of their trading tools.*

Let us first examine Walk Forward Optimization.

**1. Walk Forward Optimization, as defined by Wikipedia, is:**

"is a method used in finance for determining the best parameters to use in a trading strategy. The trading strategy is optimized with in sample data for a time window in a data series. The remainder of the data are reserved for out of sample testing. A small portion of the reserved data following the in sample data is tested with the results recorded. The in sample time window is shifted forward by the period covered by the out of sample test, and the process repeated. At the end, all of the recorded results are used to assess the trading strategy." [1]
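The windowing scheme in that definition can be sketched as a small generator. The window lengths here are arbitrary illustration values, not a recommendation:

```python
def walk_forward_windows(n_bars, in_sample, out_sample):
    """Yield (in-sample range, out-of-sample range) index pairs,
    shifting forward by the out-of-sample length each step."""
    start = 0
    while start + in_sample + out_sample <= n_bars:
        is_rng = (start, start + in_sample)
        oos_rng = (start + in_sample, start + in_sample + out_sample)
        yield is_rng, oos_rng
        start += out_sample

for is_rng, oos_rng in walk_forward_windows(n_bars=100, in_sample=40, out_sample=20):
    print("optimize on", is_rng, "-> test on", oos_rng)
```

At the end, the out-of-sample results from all windows are stitched together to assess the strategy.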

And Walk Forward Optimization is a different thing from backtesting:

*"Backtesting is using past data to test a trading system. It's useful because if a system was not profitable in the past it will not be profitable in the future. It refers to applying a trading system to historical data to verify how a system would have performed during the specified time period."*

And forward testing, also known as *walk forward testing*, is "the simulation of the real markets data on paper only. It means that though you are moving along the markets live, you are not actually putting in real money, but doing virtual trading in live markets to understand the movements of markets better. Hence, it is also called paper trading."

However, nothing is more useful than a visual representation. On the Amibroker site you can find a dynamic representation of the concept.

Those concepts are really some of the most important things we can learn for trading, because all professional traders use this stuff. And here we can ask a question.

**2. We can raise some questions about this routine in light of what we know of the true nature of the market**

However, can we do better?

Yes that is a reasonable question.

What about the reverse? You are now going to ask me why I would reverse it.

Or you may not discuss it at all, push me over the cliff, and not care about this stuff any more.

But if you want to pay a little more attention here is my argument.

From what we know, the market is chaotic; there are measures of this, for example the Lyapunov exponent.

Here I provide several easy to follow and understand links.

"The Lyapunov exponent (L) determines the rate of predictability. A positive Lyapunov exponent indicates chaos and it sets the time scale which makes the state of prediction possible. The bigger the largest positive Lyapunov exponent is, the more rapid the loss of predictive “power”, and the less the prediction time for the time series is. The predictability is calculated as a value = 1.0 / L."

See this excellent article for more details.

Basically, the calculation of the Lyapunov exponent, or predictability, enables us to evaluate the reliability of a prediction.

You can also see this article for more easy-to-understand details on the Lyapunov exponent.

Lambda (the Lyapunov exponent) can be lower than zero: the orbit is attracted to a stable fixed point or stable periodic orbit. Negative Lyapunov exponents are characteristic of dissipative or non-conservative systems (the damped harmonic oscillator, for instance). Such systems exhibit asymptotic stability; the more negative the exponent, the greater the stability.

For the market this would mean that conditions are really excellent for prediction.

Lambda (the Lyapunov exponent) can be equal to zero:

A Lyapunov exponent of zero indicates that the system is in some sort of steady-state mode. Still cool for market analysis.

Lambda (the Lyapunov exponent) can be bigger than zero:

The orbit is unstable and chaotic. Nearby points, no matter how close, will diverge to any arbitrary separation. All neighborhoods in the phase space will eventually be visited. These points are said to be unstable.

The bigger the lambda, the more chaotic the system and the harder the actual prediction.
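The three cases above can be seen in a classic toy system, the logistic map x → r·x·(1−x), where the exponent can be computed directly by averaging log|f'(x)| along the orbit. This is a textbook illustration, not a market calculation (for a price series the exponent has to be estimated from data):

```python
import math

def lyapunov_logistic(r, x0=0.4, n=10_000, burn=1_000):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) by averaging
    log|f'(x)| = log|r*(1 - 2x)| along the orbit."""
    x = x0
    for _ in range(burn):            # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

print(f"r=3.2 -> lambda = {lyapunov_logistic(3.2):+.3f}  (negative: stable periodic orbit)")
print(f"r=4.0 -> lambda = {lyapunov_logistic(4.0):+.3f}  (positive: chaos, theory says ln 2)")
```

A negative lambda (r=3.2) is the predictable regime; a positive one (r=4.0) means nearby trajectories diverge and prediction degrades at rate 1/lambda, as in the quote above.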

**3. How is that stuff related to the optimization routine?**

Yes, that is the question here: how is that stuff related to the optimization routine?

In fact, when you use forward testing or paper trading, you actually optimize on much older data, and during the paper-trading window you use an older model that happened to perform well up to the paper-trading data. OK, but if the market is chaotic, that theoretically means every model will degrade with every new piece of market information that arrives. (We have a different model based on clusters of volatility and fractal dimension, and some models perform very well on specific clusters.)

So when you actually use the model it is based on old data.

Basically the idea is to use the recent data and see if the model that works well on current data performs well on past data.

That is what I call Walk Reverse Testing.

1. In this case our system is optimised on the very recent data.

2. It is tested on the immediately preceding data (reverse testing; it answers the question of whether the system is working now).

3. And we can go even further and apply the model to all the available data (the ultimate test; it answers the question of whether the system is sound).

If a system passes tests 1, 2 and 3 well, I could rely on it more heavily.
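The three steps above can be sketched as follows, using a toy moving-average rule on synthetic random-walk prices. All parameters, window lengths and the strategy itself are hypothetical illustration only:

```python
import random

def pnl(prices, lookback):
    """Toy strategy PnL: hold long for one bar when price is above its SMA."""
    total = 0.0
    for i in range(lookback, len(prices) - 1):
        sma = sum(prices[i - lookback:i]) / lookback
        if prices[i] > sma:
            total += prices[i + 1] - prices[i]
    return total

def walk_reverse_test(prices, lookbacks, recent=200, past=200):
    # 1. optimize on the most recent window
    best = max(lookbacks, key=lambda n: pnl(prices[-recent:], n))
    # 2. reverse test: the window immediately before the recent one
    reverse_ok = pnl(prices[-recent - past:-recent], best) > 0
    # 3. ultimate test: all available data
    sound = pnl(prices, best) > 0
    return best, reverse_ok, sound

random.seed(7)
prices = [100.0]
for _ in range(1000):                       # synthetic random-walk prices
    prices.append(prices[-1] + random.gauss(0.02, 1.0))

best, reverse_ok, sound = walk_reverse_test(prices, [5, 10, 20, 50])
print("best lookback:", best,
      "| works on immediate past:", reverse_ok,
      "| sound on full history:", sound)
```

The design choice is exactly the reversal described in the text: the freshest data drives the optimization, and older data is used only to confirm.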

**4. Here I would add some more ideas based on recent developments by Jaguar.**

Here on this site there are two shared, easy-to-use instruments for the estimation of the largest Lyapunov exponent.

- The new user-friendly program

What we did until now is what is described in the books: detrend the prices.

I use =LN(F3/F2), where column F (in Excel) holds the close price. That performs the detrending.
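The same detrending in Python, equivalent to that Excel formula (the close prices here are made-up values):

```python
import math

closes = [1.2504, 1.2511, 1.2498, 1.2520, 1.2515]   # hypothetical close prices

# same as the Excel formula =LN(F3/F2): log return of each bar vs the previous
log_returns = [math.log(b / a) for a, b in zip(closes, closes[1:])]
print(["%+.5f" % r for r in log_returns])
```

A handy property of log returns is that they add up: their sum equals the log return over the whole period.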

However, I think something is wrong here. I still want to detrend, but I suspect that doing it this way may distort the data.

So what about Jaguar's approach: look for the best-correlated indicator, or take the normalized oscillator we actually use, and estimate the Lyapunov exponent on that?

The issue is open.

**References:**

1. Expert Advisors Based on Popular Trading Systems and Alchemy of Trading Robot Optimization

2. Wikipedia Walk forward optimization

3. Kirkpatrick, Charles D.; Dahlquist, Julie R. (2010-11-15). *Technical Analysis: The Complete Resource for Financial Market Technicians*. FT Press. p. 548. ISBN 978-0-13-705944-7. Retrieved 13 June 2011.

4. Investopedia: Backtesting and Forward Testing

5. Can your system do the walk

6. How Not to Fall into Optimization Traps?

7. Robert Pardo. *Design, Testing and Optimization of Trading Systems*. ISBN-10: 0471554464

Well, I am working on 3 projects:

- to rewrite Burg w/ RFSDS

- to write a particle swarm optimization indicator

- to write something w/ the Ornstein-Uhlenbeck algo

I will show you the rest when my computer is repaired.

jy

OK, let's get started.

We take a very basic expert. Let it be a moving-average expert with only one moving average. When the MA crosses the price we take a signal. Something like that.

We optimize the expert over a history window. As there is only one parameter, we are going to analyse the performance at every different value.

For example, we get something like this:

MA 10 -100

MA 20 +12

MA 30 +23

MA 40 +20

etc.

I will do the same for different windows (a month, a week, etc.).

After that I can make a table with those data.

And then comes the interesting part: I may try to use Rapid Miner to analyse those data.

For example, I can take a machine-learning algorithm which will try to predict which parameter is going to be profitable next month. It makes no sense to predict exact values, but we can try to predict whether a parameter will be profitable or not.

Until now we look at the data, optimize on the data and cross our fingers that the market will behave the same way. We cannot skip the prayer step entirely, but with a predictive algorithm we could try to predict the profitable parameters.
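One way this could look, sketched in plain Python instead of Rapid Miner: the mined parameter/performance table becomes a supervised data set, and a naive nearest-neighbour classifier tries to predict whether each parameter will be profitable in a held-out month. The profits here are random placeholders, so the accuracy is chance level by construction; the point is only the shape of the data set:

```python
import random

random.seed(3)

# Hypothetical mined table: one row per (month, MA period) with that
# period's profit in the month, as described in the text.
months = 24
periods = [10, 20, 30, 40]
profit = {(m, p): random.gauss(0, 50) for m in range(months) for p in periods}

# Supervised data set: features = last month's profits of every period,
# label = whether this period is profitable THIS month.
rows = []
for m in range(1, months):
    feats = tuple(profit[(m - 1, p)] for p in periods)
    for p in periods:
        rows.append((feats, p, profit[(m, p)] > 0))

train, test = rows[:-len(periods)], rows[-len(periods):]   # hold out last month

def predict(feats, period):
    """1-nearest-neighbour on last month's profit profile, same period only."""
    def dist(row):
        same = sum((a - b) ** 2 for a, b in zip(row[0], feats))
        return same + (row[1] != period) * 1e9   # huge penalty: other periods
    return min(train, key=dist)[2]

hits = sum(predict(f, p) == y for f, p, y in test)
print(f"correct on held-out month: {hits}/{len(test)}")
```

With real back-test profits instead of random numbers, the interesting question is whether such a classifier beats the "always pick last month's winner" baseline.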

It is not very clear for now; it is just an idea I wanted to share. The calculations are not heavy at all, but gathering the data and preparing the data sheet is manual work.

This is one of my projects for sure.

1. We find a training window and we train (optimize) the expert on this window.

2. We hope that the optimized parameters would work on the test window.

3. When we do it for a sufficient number of train and test windows, we hope it will also work in the actual trading window.

This is how I have been taught to do optimization and training.

I am thinking of an alternative method.
