pyswarms.single package

The pyswarms.single module implements various techniques in continuous single-objective optimization. These require only one objective function that can be optimized in a continuous space.

Note

PSO algorithms scale with the search space. This means that, when larger boundaries are used, the final results tend to be larger as well.

Note

Please keep in mind that Python floats have a maximum representable value. Using large boundaries in combination with exponentiation or multiplication can therefore lead to an OverflowError.
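
For example, the following minimal sketch (plain Python, independent of any PySwarms API) shows the float ceiling and how exponentiation can exceed it:

import sys

# Largest representable Python float (roughly 1.8e308)
print(sys.float_info.max)

# Squaring a large coordinate value exceeds the float range
try:
    1e200 ** 2
except OverflowError as err:
    print("OverflowError:", err)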

pyswarms.single.global_best module

A Global-best Particle Swarm Optimization (gbest PSO) algorithm.

It takes a set of candidate solutions, and tries to find the best solution using a position-velocity update method. Uses a star-topology where each particle is attracted to the best performing particle.

The position update can be defined as:

\[x_{i}(t+1) = x_{i}(t) + v_{i}(t+1)\]

Where the position at the current timestep \(t\) is updated using the computed velocity at \(t+1\). Furthermore, the velocity update is defined as:

\[v_{ij}(t + 1) = w v_{ij}(t) + c_{1}r_{1j}(t)[y_{ij}(t) - x_{ij}(t)] + c_{2}r_{2j}(t)[\hat{y}_{j}(t) - x_{ij}(t)]\]

Here, \(c_{1}\) and \(c_{2}\) are the cognitive and social parameters, respectively. They control the particle's behavior given two choices: (1) follow its personal best or (2) follow the swarm's global best position. Overall, this dictates whether the swarm is explorative or exploitative in nature. In addition, a parameter \(w\) controls the inertia of the swarm's movement.
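
The update rule above can be sketched in a few lines of NumPy (a simplified illustration of the equations, not PySwarms' internal implementation; all array names are hypothetical):

import numpy as np

n_particles, dimensions = 10, 2
c1, c2, w = 0.5, 0.3, 0.9

# Hypothetical swarm state: positions x, velocities v, personal bests y,
# and the swarm's global best position y_hat
x = np.random.uniform(-1, 1, (n_particles, dimensions))
v = np.zeros((n_particles, dimensions))
y = x.copy()         # personal best positions
y_hat = x[0].copy()  # global best position (placeholder)

# Random coefficients r1, r2 drawn per particle and per dimension
r1 = np.random.uniform(size=(n_particles, dimensions))
r2 = np.random.uniform(size=(n_particles, dimensions))

# Velocity and position updates as defined above
v = w * v + c1 * r1 * (y - x) + c2 * r2 * (y_hat - x)
x = x + v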

An example usage is as follows:

import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx

# Set-up hyperparameters
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9}

# Call instance of GlobalBestPSO
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2,
                                    options=options)

# Perform optimization
stats = optimizer.optimize(fx.sphere, iters=100)

This algorithm was adapted from the earlier works of J. Kennedy and R.C. Eberhart in Particle Swarm Optimization [IJCNN1995].

[IJCNN1995] J. Kennedy and R.C. Eberhart, “Particle Swarm Optimization,” Proceedings of the IEEE International Joint Conference on Neural Networks, 1995, pp. 1942-1948.
class pyswarms.single.global_best.GlobalBestPSO(n_particles, dimensions, options, bounds=None, bh_strategy='periodic', velocity_clamp=None, vh_strategy='unmodified', center=1.0, ftol=-inf, init_pos=None)[source]

Bases: pyswarms.base.base_single.SwarmOptimizer

__init__(n_particles, dimensions, options, bounds=None, bh_strategy='periodic', velocity_clamp=None, vh_strategy='unmodified', center=1.0, ftol=-inf, init_pos=None)[source]

Initialize the swarm

n_particles

int – number of particles in the swarm.

dimensions

int – number of dimensions in the space.

options

dict with keys {'c1', 'c2', 'w'} – a dictionary containing the parameters for the specific optimization technique.

  • c1 : float
    cognitive parameter
  • c2 : float
    social parameter
  • w : float
    inertia parameter
bounds

tuple of np.ndarray (default is None) – a tuple of size 2 where the first entry is the minimum bound while the second entry is the maximum bound. Each array must be of shape (dimensions,).

bh_strategy

String – a strategy for the handling of out-of-bounds particles.

velocity_clamp

tuple (default is None) – a tuple of size 2 where the first entry is the minimum velocity and the second entry is the maximum velocity. It sets the limits for velocity clamping.

vh_strategy

String – a strategy for the handling of the velocity of out-of-bounds particles.

center

list or float (default is 1.0) – an array of size dimensions, or a scalar

ftol

float – relative error in objective_func(best_pos) acceptable for convergence

init_pos

numpy.ndarray (default is None) – option to explicitly set the particles’ initial positions. Set to None if you wish to generate the particles randomly.
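
As an illustration of how several of these arguments can be combined (a sketch that only uses parameters documented above):

import numpy as np
import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx

options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}

# Constrain the search to the box [-1, 1] x [-1, 1] and clamp velocities
bounds = (np.array([-1.0, -1.0]), np.array([1.0, 1.0]))

optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2,
                                    options=options, bounds=bounds,
                                    velocity_clamp=(-0.5, 0.5))
cost, pos = optimizer.optimize(fx.sphere, iters=100)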

optimize(objective_func, iters, n_processes=None, **kwargs)[source]

Optimize the swarm for a number of iterations

Performs the optimization to evaluate the objective function objective_func for iters iterations.

Parameters:
  • objective_func (function) – objective function to be evaluated
  • iters (int) – number of iterations
  • n_processes (int) – number of processes to use for parallel particle evaluation (default: None = no parallelization)
  • kwargs (dict) – arguments for the objective function
Returns:

the global best cost and the global best position.

Return type:

tuple
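
Any extra keyword arguments are forwarded to the objective function at every evaluation. The sketch below uses a hypothetical shifted_sphere objective (the function and its parameter a are illustrative, not part of PySwarms); objective functions receive the whole swarm as an array of shape (n_particles, dimensions) and must return one cost per particle:

import numpy as np
import pyswarms as ps

def shifted_sphere(x, a=0.0):
    # x has shape (n_particles, dimensions); return shape (n_particles,)
    return np.sum((x - a) ** 2, axis=1)

optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2,
                                    options={'c1': 0.5, 'c2': 0.3, 'w': 0.9})

# a=1.0 is passed through **kwargs; n_processes=2 evaluates particles
# in two worker processes
cost, pos = optimizer.optimize(shifted_sphere, iters=100,
                               n_processes=2, a=1.0)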

pyswarms.single.local_best module

A Local-best Particle Swarm Optimization (lbest PSO) algorithm.

Similar to global-best PSO, it takes a set of candidate solutions and finds the best solution using a position-velocity update method. However, it uses a ring topology, so each particle is attracted to the best-performing particle within its neighbourhood.

The position update can be defined as:

\[x_{i}(t+1) = x_{i}(t) + v_{i}(t+1)\]

Where the position at the current timestep \(t\) is updated using the computed velocity at \(t+1\). Furthermore, the velocity update is defined as:

\[v_{ij}(t + 1) = w v_{ij}(t) + c_{1}r_{1j}(t)[y_{ij}(t) - x_{ij}(t)] + c_{2}r_{2j}(t)[\hat{y}_{j}(t) - x_{ij}(t)]\]

However, in local-best PSO, a particle doesn't compare itself to the overall performance of the swarm. Instead, it looks at the performance of its nearest neighbours and compares itself with them. In general, this kind of topology takes longer to converge, but it explores the search space more thoroughly.

In this implementation, neighbours are selected via a k-D tree from scipy, with distances computed using either the L1 or L2 norm. The nearest neighbours are then queried from this k-D tree and are recomputed at every iteration.
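
A minimal sketch of this kind of neighbour lookup with scipy (an illustration of the idea only, not PySwarms' internal code):

import numpy as np
from scipy.spatial import cKDTree

# Hypothetical particle positions, shape (n_particles, dimensions)
positions = np.random.uniform(-1, 1, (10, 2))

# Build a k-D tree and query the k nearest neighbours of every particle
# using the Minkowski p-norm (p=2 is the Euclidean/L2 distance)
tree = cKDTree(positions)
distances, neighbour_idx = tree.query(positions, k=3, p=2)
# neighbour_idx[i] holds the indices of particle i's 3 nearest neighbours
# (each particle is its own nearest neighbour at distance 0)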

An example usage is as follows:

import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx

# Set-up hyperparameters
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 3, 'p': 2}

# Call instance of LocalBestPSO with a neighbour-size of 3 determined by
# the L2 (p=2) distance.
optimizer = ps.single.LocalBestPSO(n_particles=10, dimensions=2,
                                   options=options)

# Perform optimization
stats = optimizer.optimize(fx.sphere, iters=100)

This algorithm was adapted from one of the earlier works of J. Kennedy and R.C. Eberhart in Particle Swarm Optimization [IJCNN1995] [MHS1995].

[IJCNN1995] J. Kennedy and R.C. Eberhart, “Particle Swarm Optimization,” Proceedings of the IEEE International Joint Conference on Neural Networks, 1995, pp. 1942-1948.
[MHS1995] J. Kennedy and R.C. Eberhart, “A New Optimizer using Particle Swarm Theory,” in Proceedings of the Sixth International Symposium on Micromachine and Human Science, 1995, pp. 39–43.
class pyswarms.single.local_best.LocalBestPSO(n_particles, dimensions, options, bounds=None, bh_strategy='periodic', velocity_clamp=None, vh_strategy='unmodified', center=1.0, ftol=-inf, init_pos=None, static=False)[source]

Bases: pyswarms.base.base_single.SwarmOptimizer

__init__(n_particles, dimensions, options, bounds=None, bh_strategy='periodic', velocity_clamp=None, vh_strategy='unmodified', center=1.0, ftol=-inf, init_pos=None, static=False)[source]

Initialize the swarm

n_particles

int – number of particles in the swarm.

dimensions

int – number of dimensions in the space.

bounds

tuple of np.ndarray, optional (default is None) – a tuple of size 2 where the first entry is the minimum bound while the second entry is the maximum bound. Each array must be of shape (dimensions,).

bh_strategy

String – a strategy for the handling of out-of-bounds particles.

velocity_clamp

tuple (default is None) – a tuple of size 2 where the first entry is the minimum velocity and the second entry is the maximum velocity. It sets the limits for velocity clamping.

vh_strategy

String – a strategy for the handling of the velocity of out-of-bounds particles.

center

list or float (default is 1.0) – an array of size dimensions, or a scalar

ftol

float – relative error in objective_func(best_pos) acceptable for convergence

options

dict with keys {'c1', 'c2', 'w', 'k', 'p'} – a dictionary containing the parameters for the specific optimization technique

  • c1 : float
    cognitive parameter
  • c2 : float
    social parameter
  • w : float
    inertia parameter
  • k : int
    number of neighbors to be considered. Must be a positive integer less than n_particles
  • p: int {1,2}
    the Minkowski p-norm to use. 1 is the sum-of-absolute values (or L1 distance) while 2 is the Euclidean (or L2) distance.
init_pos

numpy.ndarray (default is None) – option to explicitly set the particles’ initial positions. Set to None if you wish to generate the particles randomly.

static

bool (default is False) – a boolean that decides whether the Ring topology used is static or dynamic
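
For example, a static ring topology can be requested at construction time (a sketch reusing the options from the example above):

import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx

options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 3, 'p': 2}

# static=True keeps each particle's neighbourhood fixed for the whole run
optimizer = ps.single.LocalBestPSO(n_particles=10, dimensions=2,
                                   options=options, static=True)
cost, pos = optimizer.optimize(fx.sphere, iters=100)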

optimize(objective_func, iters, n_processes=None, **kwargs)[source]

Optimize the swarm for a number of iterations

Performs the optimization to evaluate the objective function objective_func for iters iterations.

Parameters:
  • objective_func (function) – objective function to be evaluated
  • iters (int) – number of iterations
  • n_processes (int) – number of processes to use for parallel particle evaluation (default: None = no parallelization)
  • kwargs (dict) – arguments for the objective function
Returns:

the local best cost and the local best position among the swarm.

Return type:

tuple

pyswarms.single.general_optimizer module

A general Particle Swarm Optimization (general PSO) algorithm.

It takes a set of candidate solutions, and tries to find the best solution using a position-velocity update method. It uses a user-specified topology.

The position update can be defined as:

\[x_{i}(t+1) = x_{i}(t) + v_{i}(t+1)\]

Where the position at the current timestep \(t\) is updated using the computed velocity at \(t+1\). Furthermore, the velocity update is defined as:

\[v_{ij}(t + 1) = w v_{ij}(t) + c_{1}r_{1j}(t)[y_{ij}(t) - x_{ij}(t)] + c_{2}r_{2j}(t)[\hat{y}_{j}(t) - x_{ij}(t)]\]

Here, \(c_{1}\) and \(c_{2}\) are the cognitive and social parameters, respectively. They control the particle's behavior given two choices: (1) follow its personal best or (2) follow the swarm's global best position. Overall, this dictates whether the swarm is explorative or exploitative in nature. In addition, a parameter \(w\) controls the inertia of the swarm's movement.

An example usage is as follows:

import pyswarms as ps
from pyswarms.backend.topology import Pyramid
from pyswarms.utils.functions import single_obj as fx

# Set-up hyperparameters and topology
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9}
my_topology = Pyramid(static=False)

# Call instance of GeneralOptimizerPSO
optimizer = ps.single.GeneralOptimizerPSO(n_particles=10, dimensions=2,
                                          options=options, topology=my_topology)

# Perform optimization
stats = optimizer.optimize(fx.sphere, iters=100)

This algorithm was adapted from the earlier works of J. Kennedy and R.C. Eberhart in Particle Swarm Optimization [IJCNN1995].

[IJCNN1995] J. Kennedy and R.C. Eberhart, “Particle Swarm Optimization,” Proceedings of the IEEE International Joint Conference on Neural Networks, 1995, pp. 1942-1948.
class pyswarms.single.general_optimizer.GeneralOptimizerPSO(n_particles, dimensions, options, topology, bounds=None, bh_strategy='periodic', velocity_clamp=None, vh_strategy='unmodified', center=1.0, ftol=-inf, init_pos=None)[source]

Bases: pyswarms.base.base_single.SwarmOptimizer

__init__(n_particles, dimensions, options, topology, bounds=None, bh_strategy='periodic', velocity_clamp=None, vh_strategy='unmodified', center=1.0, ftol=-inf, init_pos=None)[source]

Initialize the swarm

n_particles

int – number of particles in the swarm.

dimensions

int – number of dimensions in the space.

options

dict with keys {'c1', 'c2', 'w'} or {'c1', 'c2', 'w', 'k', 'p'} – a dictionary containing the parameters for the specific optimization technique.

  • c1 : float
    cognitive parameter
  • c2 : float
    social parameter
  • w : float
    inertia parameter

If used with the Ring, VonNeumann or Random topology, the additional parameter k must be included:

  • k : int
    number of neighbors to be considered. Must be a positive integer less than n_particles

If used with the Ring topology, the additional parameters k and p must be included:

  • p : int {1,2}
    the Minkowski p-norm to use. 1 is the sum-of-absolute values (or L1 distance) while 2 is the Euclidean (or L2) distance.

If used with the VonNeumann topology, the additional parameters p and r must be included:

  • r : int
    the range of the VonNeumann topology. This is used to determine the number of neighbours in the topology.

topology

pyswarms.backend.topology.Topology – a Topology object that defines the topology to use in the optimization process. The currently available topologies are:

  • Star
    All particles are connected
  • Ring (static and dynamic)
    Particles are connected to the k nearest neighbours
  • VonNeumann
    Particles are connected in a VonNeumann topology
  • Pyramid (static and dynamic)
    Particles are connected in N-dimensional simplices
  • Random (static and dynamic)
    Particles are connected to k random particles

Static variants of the topologies remain with the same neighbours over the course of the optimization. Dynamic variants calculate new neighbours every time step.
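
For instance, the Ring topology requires the extra k and p entries in options described above (a sketch along the same lines as the Pyramid example earlier):

import pyswarms as ps
from pyswarms.backend.topology import Ring
from pyswarms.utils.functions import single_obj as fx

# Ring topology: options must also carry 'k' (neighbourhood size) and
# 'p' (Minkowski p-norm, 1 or 2)
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 3, 'p': 2}
my_topology = Ring(static=True)

optimizer = ps.single.GeneralOptimizerPSO(n_particles=10, dimensions=2,
                                          options=options,
                                          topology=my_topology)
cost, pos = optimizer.optimize(fx.sphere, iters=100)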

bounds

tuple of np.ndarray (default is None) – a tuple of size 2 where the first entry is the minimum bound while the second entry is the maximum bound. Each array must be of shape (dimensions,).

bh_strategy

String – a strategy for the handling of out-of-bounds particles.

velocity_clamp

tuple (default is None) – a tuple of size 2 where the first entry is the minimum velocity and the second entry is the maximum velocity. It sets the limits for velocity clamping.

vh_strategy

String – a strategy for the handling of the velocity of out-of-bounds particles.

center

list or float (default is 1.0) – an array of size dimensions, or a scalar

ftol

float – relative error in objective_func(best_pos) acceptable for convergence

init_pos

numpy.ndarray (default is None) – option to explicitly set the particles’ initial positions. Set to None if you wish to generate the particles randomly.

optimize(objective_func, iters, n_processes=None, **kwargs)[source]

Optimize the swarm for a number of iterations

Performs the optimization to evaluate the objective function objective_func for iters iterations.

Parameters:
  • objective_func (function) – objective function to be evaluated
  • iters (int) – number of iterations
  • n_processes (int) – number of processes to use for parallel particle evaluation (default: None = no parallelization)
  • kwargs (dict) – arguments for the objective function
Returns:

the global best cost and the global best position.

Return type:

tuple