
Particle Swarm Optimization

http://msdn.microsoft.com/en-us/magazine/hh335067.aspx

This is a very interesting article describing Particle Swarm Optimization. Source code in C# is available.

I think that if we can understand how klot (check this link for the EA) built his custom genetic optimization algorithm and bypassed the native MT4 optimizer, we could do the same, but apply Particle Swarm Optimization instead, in order to obtain a HIVE family of EA neural networks.

My first impression is that Particle Swarm Optimization is easier to implement than the genetic algorithm.

Comments

  • jaguar1637 2458 days ago

    Hi John

    I tried to create a Particle Swarm Optimization (thank you, Tovim).

    While working through plenty of releases I was able to see the famous and relevant Big Bird, but in another release I lost it and could not find a way to show it again.

  • JohnLast 2458 days ago

    This article is very good because it explains how Particle Swarm Optimization works with a numeric example. It is really great to mix explanation with code.

    It seems there is a lot of reason in nature. You can find reason and algorithms everywhere. We humans are only starting to understand what is going on.

    Here is an interesting article for download that compares the ideas behind the genetic algorithm with Particle Swarm Optimization.

    I think understanding is key. Those who have followed along know the ideas behind the neural net we built together with the optimization algorithm. I personally believe that understanding what these things are is important in order to know their limits.

    Traders treat neural nets as some kind of magic instrument, and software developers tell them they do not need to know how they work in order to use them.

    I think that is not true. If a trader knows how his algorithm works and can tweak it, he can take the most important step: linking his trading knowledge of the markets with the algorithm. Without that, a system cannot work in practice.

  • jaguar1637 2458 days ago

    Well, I checked and re-checked the stuff: it's quite complicated. But instead of all the complex functions for calculating the minimum value, such as Rosenbrock(a, DIMENSION, val), the article only works in 2 dimensions, X and Y:

    double ObjectiveFunction(double x[])
    {
       double res = 3.0 + (x[0] * x[0]) + (x[1] * x[1]); // f(x) = 3 + x^2 + y^2
       return(res);
    }

    ============================================

    The following functions are used for the cost calculation:

    void f6(int a, int zz)
    {
       // Schaffer's f6 test function; a: particle index. The error is stored in the global yy.
       double num, denom, f6;
       double errorf6;
       num = (MathSin(MathSqrt((xx[0][a]*xx[0][a]) + (xx[1][a]*xx[1][a])))) *
             (MathSin(MathSqrt((xx[0][a]*xx[0][a]) + (xx[1][a]*xx[1][a])))) - 0.5;
       denom = (1.0 + 0.001*((xx[0][a]*xx[0][a]) + (xx[1][a]*xx[1][a]))) *
               (1.0 + 0.001*((xx[0][a]*xx[0][a]) + (xx[1][a]*xx[1][a])));
       if (denom == 0.0) denom = 0.0001;
       f6 = 0.5 - (num/denom);
       errorf6 = 1 - f6;
       yy = errorf6;
       // return(errorf6);
    }

    void sphere(int a, int b)
    {
       /* This is the familiar sphere model
          a: index of the particle; b: dimension */
       double result;
       int i;
       result = 0.0;
       for (i = 0; i < b; i++)
       {
          result += xx[i][a]*xx[i][a];
       }
       yy = result;
       // return(result);
    }
    void rosenbrock(int u, int b)
    {
       /* This is the Rosenbrock function */
       /* u: index of the particle; b: dimension (b = DIMENSION) */
       /* Modified by myself */
       int i;
       double result[];
       double code_return;
       ArrayResize(result, b);          // size the dynamic array before indexing it
       for (i = 1; i < b; i++)
       {
          //result += 100.0*(xx[i][u]-xx[i-1][u]*xx[i-1][u])
          //          *(xx[i][u]-xx[i-1][u]*xx[i-1][u])
          //          + (xx[i-1][u]-1)*(xx[i-1][u]-1);
          result[i] = 100.0*(xx[i][u]-xx[i-1][u]*xx[i-1][u])
                      *(xx[i][u]-xx[i-1][u]*xx[i-1][u])
                      + (xx[i-1][u]-1)*(xx[i-1][u]-1);
          if (i == 1)
          {
             code_return = result[i];   // I assume a minimum value first
             yy = 1;
          }
          if (result[i] < code_return)
          {
             code_return = result[i];
             yy = i;                    // keep the index of the smallest term
          }
       }
       // return(code_return);
    }

  • jaguar1637 2458 days ago

    This article opens quite a new debate here, because the PSO is defined by:

    // The algorithm can be written as follows:
    // 1. Initialize each particle with a random velocity and random position.
    // 2. Calculate the cost for each particle. If the current cost is lower
    //    than the best value so far, remember this position (pBest).
    // 3. Choose the particle with the lowest cost of all particles.
    //    The position of this particle is gBest.
    // 4. Calculate, for each particle, the new velocity and position according
    //    to the above equations.
    // 5. Repeat steps 2-4 until the maximum number of iterations is reached
    //    or the minimum error criterion is met.
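
    Step 4 is the core of the method. Here is a minimal sketch of that update in C#, reusing the inertia and acceleration constants from the article's listing further down (only the per-particle update; clamping and the surrounding loop are omitted):

    using System;

    // Minimal sketch of the PSO velocity/position update (step 4 above),
    // with the same constants as in the article's full listing.
    static class PsoUpdateSketch
    {
       const double w  = 0.729;    // inertia weight
       const double c1 = 1.49445;  // cognitive (pBest) weight
       const double c2 = 1.49445;  // social (gBest) weight

       // All arrays have length Dim; clamping to [minX, maxX] is left out here.
       public static void Update(double[] position, double[] velocity,
                                 double[] bestPosition, double[] bestGlobalPosition,
                                 Random ran)
       {
          for (int j = 0; j < velocity.Length; ++j)
          {
             double r1 = ran.NextDouble();
             double r2 = ran.NextDouble();
             velocity[j] = w * velocity[j]
                         + c1 * r1 * (bestPosition[j] - position[j])
                         + c2 * r2 * (bestGlobalPosition[j] - position[j]);
             position[j] += velocity[j];
          }
       }
    }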

  • JohnLast 2458 days ago

    Is it you who made this code? I ask because it is different from the code in the bookmark.

    I was thinking of doing exactly the same thing. However, the end would not be:

    static double ObjectiveFunction(double[] x)
    {
       return 3.0 + (x[0] * x[0]) + (x[1] * x[1]);
    }

    But instead

    static double ObjectiveFunction(double[] x)
    {
       return MathExp(gamma*(-x[0]*x[0] - x[1]*x[1])); // And this is our kernel
    }

    The idea, as you can see, would be to let those particles fly over the complex mathematical space.
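
    In the article's C# program that would look roughly like this (a sketch: Math.Exp is the C# counterpart of MQL4's MathExp, and gamma is a value we would have to pick ourselves; since the article's loop minimizes the fitness, the sign convention around the kernel has to be chosen with that in mind):

    using System;

    // Sketch of the proposed kernel objective in the article's C# style.
    // gamma is an assumed, user-chosen parameter (not from the article).
    class KernelObjectiveSketch
    {
       static double gamma = 0.5;

       static double ObjectiveFunction(double[] x)
       {
          return Math.Exp(gamma * (-x[0] * x[0] - x[1] * x[1])); // the proposed kernel
       }

       static void Main()
       {
          Console.WriteLine(ObjectiveFunction(new double[] { 0.0, 0.0 }).ToString("F4")); // 1.0000 at the origin
          Console.WriteLine(ObjectiveFunction(new double[] { 2.0, 1.0 }).ToString("F4")); // decays away from the origin
       }
    }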

  • JohnLast 2458 days ago

    The whole code from the article is:

    using System;

    namespace ParticleSwarmOptimization
    {
      class Program
      {
        static Random ran = null;
        static void Main(string[] args)
        {
          try
          {
            Console.WriteLine("\nBegin Particle Swarm Optimization demonstration\n");
            Console.WriteLine("\nObjective function to minimize has dimension = 2");
            Console.WriteLine("Objective function is f(x) = 3 + (x0^2 + x1^2)");

            ran = new Random(0);

            int numberParticles = 10;
            int numberIterations = 1000;
            int iteration = 0;
            int Dim = 2; // dimensions
            double minX = -100.0;
            double maxX = 100.0;

            Console.WriteLine("Range for all x values is " + minX + " <= x <= " + maxX);
            Console.WriteLine("\nNumber iterations = " + numberIterations);
            Console.WriteLine("Number particles in swarm = " + numberParticles);

            Particle[] swarm = new Particle[numberParticles];
            double[] bestGlobalPosition = new double[Dim]; // best solution found by any particle in the swarm. implicit initialization to all 0.0
            double bestGlobalFitness = double.MaxValue; // smaller values better

            double minV = -1.0 * maxX;
            double maxV = maxX;

            Console.WriteLine("\nInitializing swarm with random positions/solutions");
            for (int i = 0; i < swarm.Length; ++i) // initialize each Particle in the swarm
            {
              double[] randomPosition = new double[Dim];
              for (int j = 0; j < randomPosition.Length; ++j) {
                double lo = minX;
                double hi = maxX;
                randomPosition[j] = (hi - lo) * ran.NextDouble() + lo; //
              }
              //double fitness = SphereFunction(randomPosition); // smaller values are better
              //double fitness = GP(randomPosition); // smaller values are better
              double fitness = ObjectiveFunction(randomPosition);
              double[] randomVelocity = new double[Dim];

              for (int j = 0; j < randomVelocity.Length; ++j) {
                double lo = -1.0 * Math.Abs(maxX - minX);
                double hi = Math.Abs(maxX - minX);
                randomVelocity[j] = (hi - lo) * ran.NextDouble() + lo;
              }
              swarm[i] = new Particle(randomPosition, fitness, randomVelocity, randomPosition, fitness);

              // does current Particle have global best position/solution?
              if (swarm[i].fitness < bestGlobalFitness) {
                bestGlobalFitness = swarm[i].fitness;
                swarm[i].position.CopyTo(bestGlobalPosition, 0);
              }
            } // initialization

            Console.WriteLine("\nInitialization complete");
            Console.WriteLine("Initial best fitness = " + bestGlobalFitness.ToString("F4"));
            Console.WriteLine("Best initial position/solution:");
            for (int i = 0; i < bestGlobalPosition.Length; ++i)
            {
              Console.WriteLine("x" + i + " = " + bestGlobalPosition[i].ToString("F4") + " ");
            }

            double w = 0.729; // inertia weight. see http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=00870279
            double c1 = 1.49445; // cognitive/local weight
            double c2 = 1.49445; // social/global weight
            double r1, r2; // cognitive and social randomizations

            Console.WriteLine("\nEntering main PSO processing loop");
            while (iteration < numberIterations)
            {
              ++iteration;
              double[] newVelocity = new double[Dim];
              double[] newPosition = new double[Dim];
              double newFitness;

              for (int i = 0; i < swarm.Length; ++i) // each Particle
              {
                Particle currP = swarm[i];

                for (int j = 0; j < currP.velocity.Length; ++j) // each x value of the velocity
                {
                  r1 = ran.NextDouble();
                  r2 = ran.NextDouble();

                  newVelocity[j] = (w * currP.velocity[j]) +
                    (c1 * r1 * (currP.bestPosition[j] - currP.position[j])) +
                    (c2 * r2 * (bestGlobalPosition[j] - currP.position[j]));

                  if (newVelocity[j] < minV)
                    newVelocity[j] = minV;
                  else if (newVelocity[j] > maxV)
                    newVelocity[j] = maxV;
                }

                newVelocity.CopyTo(currP.velocity, 0);

                for (int j = 0; j < currP.position.Length; ++j)
                {
                  newPosition[j] = currP.position[j] + newVelocity[j];
                  if (newPosition[j] < minX)
                    newPosition[j] = minX;
                  else if (newPosition[j] > maxX)
                    newPosition[j] = maxX;
                }

                newPosition.CopyTo(currP.position, 0);
                newFitness = ObjectiveFunction(newPosition);
                currP.fitness = newFitness;

                if (newFitness < currP.bestFitness) {
                  newPosition.CopyTo(currP.bestPosition, 0);
                  currP.bestFitness = newFitness;
                }

                if (newFitness < bestGlobalFitness) {
                  newPosition.CopyTo(bestGlobalPosition, 0);
                  bestGlobalFitness = newFitness;
                }

              } // each Particle

              Console.WriteLine(swarm[0].ToString());
              Console.ReadLine();

            } // while

            Console.WriteLine("\nProcessing complete");
            Console.Write("Final best fitness = " );
            Console.WriteLine(bestGlobalFitness.ToString("F4"));
            Console.WriteLine("Best position/solution:");
            for (int i = 0; i < bestGlobalPosition.Length; ++i) {
              Console.Write("x" + i + " = " );
              Console.WriteLine(bestGlobalPosition[i].ToString("F4") + " ");
            }
            Console.WriteLine("");

            Console.WriteLine("\nEnd PSO demonstration\n");
            Console.ReadLine();
          }
          catch (Exception ex)
          {
            Console.WriteLine("Fatal error: " + ex.Message);
            Console.ReadLine();
          }
        } // Main()

        static double ObjectiveFunction(double[] x)
        {
          return 3.0 + (x[0] * x[0]) + (x[1] * x[1]); // f(x) = 3 + x^2 + y^2
        }

      } // class Program

      public class Particle
      {
        public double[] position; // equivalent to x-Values and/or solution
        public double fitness;
        public double[] velocity;

        public double[] bestPosition; // best position found so far by this Particle
        public double bestFitness;

        public Particle(double[] position, double fitness, double[] velocity, double[] bestPosition, double bestFitness)
        {
          this.position = new double[position.Length];
          position.CopyTo(this.position, 0);
          this.fitness = fitness;
          this.velocity = new double[velocity.Length];
          velocity.CopyTo(this.velocity, 0);
          this.bestPosition = new double[bestPosition.Length];
          bestPosition.CopyTo(this.bestPosition, 0);
          this.bestFitness = bestFitness;
        }

        public override string ToString()
        {
          string s = "";
          s += "==========================\n";
          s += "Position: ";
          for (int i = 0; i < this.position.Length; ++i)
            s += this.position[i].ToString("F2") + " ";
          s += "\n";
          s += "Fitness = " + this.fitness.ToString("F4") + "\n";
          s += "Velocity: ";
          for (int i = 0; i < this.velocity.Length; ++i)
            s += this.velocity[i].ToString("F2") + " ";
          s += "\n";
          s += "Best Position: ";
          for (int i = 0; i < this.bestPosition.Length; ++i)
            s += this.bestPosition[i].ToString("F2") + " ";
          s += "\n";
          s += "Best Fitness = " + this.bestFitness.ToString("F4") + "\n";
          s += "==========================\n";
          return s;
        }

      } // class Particle

    } // ns

  • JohnLast 2458 days ago

    The code of klot is so interesting because it shows how to integrate the learning algorithm with MetaTrader. Even if the genetic algorithm is different, they are close relatives.

    There is an algorithm for MT5, but its author says it was meant only for mathematical optimizations and he had not applied it to practical trading strategies. klot did exactly that: he implements the algorithm in the EA and trains a neural net with it (all directly in the same EA); even more, he calculates LinRegresSlope as inputs directly in the same EA.

    What I really want us to achieve:

    - replace the neural net model with the kernel

    - implement particle swarm optimization instead of genetic optimization

    I know that for a proficient coder this is a routine task, but for a novice it is difficult.
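
    As a very rough sketch of that direction (this is not klot's code, only an assumption of how the pieces could fit together): each particle would encode the parameters of a small kernel model, and the PSO fitness would be the model's error over some training data, so the article's loop could minimize it unchanged. The data, model form and parameter layout below are purely hypothetical.

    using System;

    // Hypothetical sketch: PSO fitness = squared error of a tiny kernel model.
    // inputs[i] would be a feature vector (e.g. LinRegresSlope values) and
    // targets[i] the desired output; both are made-up placeholders here.
    class KernelModelFitnessSketch
    {
       static double[][] inputs  = { new[] { 0.2, -0.1 }, new[] { -0.4, 0.3 } };
       static double[]   targets = { 1.0, -1.0 };

       // Particle position p = { weight0, weight1, gamma }, so Dim would be 3.
       static double ObjectiveFunction(double[] p)
       {
          double error = 0.0;
          for (int i = 0; i < inputs.Length; ++i)
          {
             double r2 = inputs[i][0] * inputs[i][0] + inputs[i][1] * inputs[i][1];
             double output = (p[0] * inputs[i][0] + p[1] * inputs[i][1]) * Math.Exp(-p[2] * r2);
             double diff = output - targets[i];
             error += diff * diff; // smaller is better, as in the article
          }
          return error;
       }

       static void Main()
       {
          // Evaluate one hypothetical particle position.
          Console.WriteLine(ObjectiveFunction(new double[] { 1.0, -1.0, 0.5 }).ToString("F4"));
       }
    }

    With an objective like this, only Dim (here 3) and ObjectiveFunction would change in the article's Main(); the swarm loop itself stays the same.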

     

  • jaguar1637 2457 days ago

    Wouaaaaaahhhh

    In the article, the function replacing Rosenbrock, Rastrigin and the others is the one below.

    The base of the code in the article uses a Dimension of just 2, so it does not explore the kernel:

    static double ObjectiveFunction(double[] x)
    {
       return 3.0 + (x[0] * x[0]) + (x[1] * x[1]);
    }

    But you propose this one instead:

    static double ObjectiveFunction(double[] x)
    {
       return MathExp(gamma*(-x[0]*x[0] - x[1]*x[1])); // And this is our kernel
    }

    This is a sigmoid function, isn't it? Applied on a 2-dimensional space, not a 3-D one.

    For example, the Rastrigin function is called by:

    //-----------------------------------------------------------
    void rastrigrin(int a, int b)
    {
       /* This is the generalized Rastrigrin function */
       /* a: index of the particles; b: dimension */

       int i;
       double result = 0.0;

       for (i = 0; i < b; i++)
       {
          result += xx[i][a]*xx[i][a]
                    - 10.0*MathCos(2.0*3.141591*xx[i][a])
                    + 10.0;
       }
       yy = result;
       // return(result);
    }


    So the dimension is handled inside the Rastrigin function itself, not in the one shown in the article.
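
    One way to get the same generality in the article's C# code (a sketch, not from the article) is to loop over x.Length instead of hard-coding two terms; it would drop into the article's Program class, and Dim in Main() then controls the dimension:

    // Sketch: dimension-general version of the article's objective function.
    // The dimension is simply x.Length, so changing Dim in Main() is enough.
    static double ObjectiveFunction(double[] x)
    {
       double sum = 0.0;
       for (int i = 0; i < x.Length; ++i)
          sum += x[i] * x[i];   // x0^2 + x1^2 + ... + x(n-1)^2
       return 3.0 + sum;        // same shape as f(x) = 3 + x^2 + y^2
    }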

  • jaguar1637 2457 days ago

    A more logical function, with the dimension added:

    static double ObjectiveFunction(double[] x, int dim)
    {
       z0 = x[0]; z1 = x[1];

       y0 = xx[z0][dim];
       y1 = xx[z1][dim];

       return( 3.0 + (y0*y0) + (y1*y1) );
    }

    That's better !!

  • jaguar1637 2457 days ago

    Of course, John, if you would like to add your function, the result could be:

    static double ObjectiveFunction(double[] x, int dim)
    {
       z0 = x[0]; z1 = x[1];

       y0 = xx[z0][dim];
       y1 = xx[z1][dim];

       return( MathExp(gamma * ((y0*y0) + (y1*y1))) );
    }

  • jaguar1637 2457 days ago

    return( MathExp(gamma * ((y0*y0) + (y1*y1))) );

    is like

    return( MathExp(gamma * (MathPower(y0,2) + MathPower(y1,2))) );

    Look at this function.

  • JohnLast 2457 days ago

    MathExp(gamma*(-x1*x1-y1*y1));

    Produces:

    [image: plot of the function]

  • jaguar1637 2457 days ago

    Well, we should get something very interesting.

    Before going forward, I would ask Vegastart to tell us what he thinks about this.