pyswarms.single package¶
The pyswarms.single module implements various techniques in continuous single-objective optimization. These require only one objective function that can be optimized in a continuous space.
Note
PSO algorithms scale with the search space. This means that, when larger boundaries are used, the final results tend to be larger as well.
Note
Please keep in mind that Python floats have a maximum representable value, so using large boundaries in combination with exponentiation or multiplication can lead to an OverflowError.
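As a minimal illustration of this limit (plain Python, not pyswarms-specific; the helper name is hypothetical), squaring a float near the top of the representable range raises OverflowError:

```python
def safe_square(x):
    """Return x ** 2, or None if the result overflows a Python float."""
    try:
        return x ** 2
    except OverflowError:
        return None

print(safe_square(1e10))    # 1e+20
print(safe_square(1e200))   # None: 1e400 exceeds the float maximum (~1.8e308)
```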
pyswarms.single.global_best module¶
A Global-best Particle Swarm Optimization (gbest PSO) algorithm.
It takes a set of candidate solutions and tries to find the best solution using a position-velocity update method. It uses a star topology where each particle is attracted to the best-performing particle.
The position update can be defined as:

\[ x_i(t+1) = x_i(t) + v_i(t+1) \]

Where the position at the current time-step \(t\) is updated using the computed velocity at \(t+1\). Furthermore, the velocity update is defined as:

\[ v_{ij}(t+1) = w\,v_{ij}(t) + c_1 r_{1j}(t)\,[y_{ij}(t) - x_{ij}(t)] + c_2 r_{2j}(t)\,[\hat{y}_{j}(t) - x_{ij}(t)] \]
Here, \(c_1\) and \(c_2\) are the cognitive and social parameters respectively. They control the particle’s behavior given two choices: (1) to follow its personal best or (2) to follow the swarm’s global best position. Overall, this dictates whether the swarm is explorative or exploitative in nature. In addition, the parameter \(w\) controls the inertia of the swarm’s movement.
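The two update rules above can be sketched in NumPy. This is an illustrative sketch only; the function and variable names below are not pyswarms internals, and the hyperparameter defaults simply mirror the example that follows:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.9, c1=0.5, c2=0.3, rng=None):
    """One gbest PSO step: compute v(t+1), then x(t+1) = x(t) + v(t+1)."""
    if rng is None:
        rng = np.random.default_rng(0)
    r1 = rng.random(x.shape)  # per-dimension cognitive random factors in [0, 1)
    r2 = rng.random(x.shape)  # per-dimension social random factors in [0, 1)
    v_next = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_next, v_next

# Ten 2-D particles at the origin with zero velocity; with pbest == x the
# cognitive term vanishes, so every particle drifts toward gbest at (1, 1).
x = np.zeros((10, 2))
v = np.zeros((10, 2))
x_next, v_next = pso_step(x, v, pbest=x, gbest=np.ones(2))
```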
An example usage is as follows:
import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx
# Setup hyperparameters
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9}
# Call instance of GlobalBestPSO
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2,
                                    options=options)
# Perform optimization
cost, pos = optimizer.optimize(fx.sphere, iters=100)
This algorithm was adapted from the earlier works of J. Kennedy and R.C. Eberhart in Particle Swarm Optimization [IJCNN1995].
[IJCNN1995]  J. Kennedy and R.C. Eberhart, “Particle Swarm Optimization,” Proceedings of the IEEE International Joint Conference on Neural Networks, 1995, pp. 1942–1948.

class
pyswarms.single.global_best.
GlobalBestPSO
(n_particles, dimensions, options, bounds=None, oh_strategy=None, bh_strategy='periodic', velocity_clamp=None, vh_strategy='unmodified', center=1.0, ftol=inf, ftol_iter=1, init_pos=None)[source]¶
Bases: pyswarms.base.base_single.SwarmOptimizer

__init__
(n_particles, dimensions, options, bounds=None, oh_strategy=None, bh_strategy='periodic', velocity_clamp=None, vh_strategy='unmodified', center=1.0, ftol=inf, ftol_iter=1, init_pos=None)[source]¶ Initialize the swarm

options
¶ a dictionary containing the parameters for the specific optimization technique.
 c1 : float
 cognitive parameter
 c2 : float
 social parameter
 w : float
 inertia parameter
Type: dict with keys {'c1', 'c2', 'w'}

bounds
¶ a tuple of size 2 where the first entry is the minimum bound while the second entry is the maximum bound. Each array must be of shape (dimensions,).
Type: tuple of numpy.ndarray, optional

oh_strategy
¶ a dict of update strategies for each option.
Type: dict, optional, default=None (constant options)

velocity_clamp
¶ a tuple of size 2 where the first entry is the minimum velocity and the second entry is the maximum velocity. It sets the limits for velocity clamping.
Type: tuple, optional

center
¶ an array of size dimensions
Type: list (default is None)

ftol
¶ relative error in objective_func(best_pos) acceptable for convergence. Default is np.inf.
Type: float

ftol_iter
¶ number of iterations over which the relative error in objective_func(best_pos) is acceptable for convergence. Default is 1.
Type: int

init_pos
¶ option to explicitly set the particles’ initial positions. Set to None if you wish to generate the particles randomly.
Type: numpy.ndarray, optional


optimize
(objective_func, iters, n_processes=None, verbose=True, **kwargs)[source]¶ Optimize the swarm for a number of iterations
Performs the optimization to evaluate the objective function f for a number of iterations iter.
Parameters:  objective_func (callable) – objective function to be evaluated
 iters (int) – number of iterations
 n_processes (int) – number of processes to use for parallel particle evaluation (default: None = no parallelization)
 verbose (bool) – enable or disable the logs and progress bar (default: True = enable logs)
 kwargs (dict) – arguments for the objective function
Returns: the global best cost and the global best position.
Return type: tuple

pyswarms.single.local_best module¶
A Local-best Particle Swarm Optimization (lbest PSO) algorithm.
Similar to global-best PSO, it takes a set of candidate solutions and finds the best solution using a position-velocity update method. However, it uses a ring topology, so each particle is attracted to its corresponding neighbourhood.
The position update can be defined as:

\[ x_i(t+1) = x_i(t) + v_i(t+1) \]

Where the position at the current time-step \(t\) is updated using the computed velocity at \(t+1\). Furthermore, the velocity update is defined as:

\[ v_{ij}(t+1) = w\,v_{ij}(t) + c_1 r_{1j}(t)\,[y_{ij}(t) - x_{ij}(t)] + c_2 r_{2j}(t)\,[\hat{y}_{lj}(t) - x_{ij}(t)] \]

where \(\hat{y}_{lj}\) denotes the best position found within the particle’s neighbourhood.
However, in local-best PSO, a particle doesn’t compare itself to the overall performance of the swarm. Instead, it looks at the performance of its nearest neighbours and compares itself with them. In general, this kind of topology takes longer to converge, but is more explorative.
In this implementation, a neighbour is selected via a k-D tree imported from scipy. Distances are computed with either the L1 or L2 distance. The nearest neighbours are then queried from this k-D tree, and they are recomputed at every iteration.
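The neighbour lookup described above can be sketched with SciPy’s cKDTree. This is an illustrative sketch, not the library’s actual implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

# Build a k-D tree over the particle positions and query each particle's
# k = 3 nearest neighbours using the L2 distance (p=2); p=1 would give
# the L1 distance instead.
rng = np.random.default_rng(42)
positions = rng.random((10, 2))            # 10 particles in 2 dimensions
tree = cKDTree(positions)
_, neighbour_idx = tree.query(positions, k=3, p=2)
# neighbour_idx[i] lists the 3 nearest particles to particle i; the first
# entry is i itself, since every particle is at distance 0 from itself.
```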
An example usage is as follows:
import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx
# Setup hyperparameters
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 3, 'p': 2}
# Call instance of LocalBestPSO with a neighbour size of 3 determined by
# the L2 (p=2) distance.
optimizer = ps.single.LocalBestPSO(n_particles=10, dimensions=2,
                                   options=options)
# Perform optimization
cost, pos = optimizer.optimize(fx.sphere, iters=100)
This algorithm was adapted from one of the earlier works of J. Kennedy and R.C. Eberhart in Particle Swarm Optimization [IJCNN1995] [MHS1995].
[IJCNN1995]  J. Kennedy and R.C. Eberhart, “Particle Swarm Optimization,” Proceedings of the IEEE International Joint Conference on Neural Networks, 1995, pp. 1942–1948.
[MHS1995]  J. Kennedy and R.C. Eberhart, “A New Optimizer using Particle Swarm Theory,” in Proceedings of the Sixth International Symposium on Micromachine and Human Science, 1995, pp. 39–43. 

class
pyswarms.single.local_best.
LocalBestPSO
(n_particles, dimensions, options, bounds=None, oh_strategy=None, bh_strategy='periodic', velocity_clamp=None, vh_strategy='unmodified', center=1.0, ftol=inf, ftol_iter=1, init_pos=None, static=False)[source]¶
Bases: pyswarms.base.base_single.SwarmOptimizer

__init__
(n_particles, dimensions, options, bounds=None, oh_strategy=None, bh_strategy='periodic', velocity_clamp=None, vh_strategy='unmodified', center=1.0, ftol=inf, ftol_iter=1, init_pos=None, static=False)[source]¶ Initialize the swarm

bounds
¶ a tuple of size 2 where the first entry is the minimum bound while the second entry is the maximum bound. Each array must be of shape (dimensions,).
Type: tuple of numpy.ndarray

oh_strategy
¶ a dict of update strategies for each option.
Type: dict, optional, default=None (constant options)

velocity_clamp
¶ a tuple of size 2 where the first entry is the minimum velocity and the second entry is the maximum velocity. It sets the limits for velocity clamping.
Type: tuple (default is (0,1))

ftol
¶ relative error in objective_func(best_pos) acceptable for convergence. Default is np.inf.
Type: float

ftol_iter
¶ number of iterations over which the relative error in objective_func(best_pos) is acceptable for convergence. Default is 1.
Type: int

options
¶ a dictionary containing the parameters for the specific optimization technique
 c1 : float
 cognitive parameter
 c2 : float
 social parameter
 w : float
 inertia parameter
 k : int
 number of neighbors to be considered. Must be a positive integer less than n_particles
 p : int {1,2}
 the Minkowski p-norm to use. 1 is the sum of absolute values (or L1 distance) while 2 is the Euclidean (or L2) distance.
Type: dict with keys {'c1', 'c2', 'w', 'k', 'p'}

init_pos
¶ option to explicitly set the particles’ initial positions. Set to None if you wish to generate the particles randomly.
Type: numpy.ndarray, optional



optimize
(objective_func, iters, n_processes=None, verbose=True, **kwargs)[source]¶ Optimize the swarm for a number of iterations
Performs the optimization to evaluate the objective function f for a number of iterations iter.
Parameters:  objective_func (callable) – objective function to be evaluated
 iters (int) – number of iterations
 n_processes (int) – number of processes to use for parallel particle evaluation (default: None = no parallelization)
 verbose (bool) – enable or disable the logs and progress bar (default: True = enable logs)
 kwargs (dict) – arguments for the objective function
Returns: the local best cost and the local best position among the swarm.
Return type: tuple

pyswarms.single.general_optimizer module¶
A general Particle Swarm Optimization (general PSO) algorithm.
It takes a set of candidate solutions and tries to find the best solution using a position-velocity update method. It uses a user-specified topology.
The position update can be defined as:

\[ x_i(t+1) = x_i(t) + v_i(t+1) \]

Where the position at the current time-step \(t\) is updated using the computed velocity at \(t+1\). Furthermore, the velocity update is defined as:

\[ v_{ij}(t+1) = w\,v_{ij}(t) + c_1 r_{1j}(t)\,[y_{ij}(t) - x_{ij}(t)] + c_2 r_{2j}(t)\,[\hat{y}_{j}(t) - x_{ij}(t)] \]

where the social attractor \(\hat{y}_{j}\) is the best position determined by the chosen topology.
Here, \(c_1\) and \(c_2\) are the cognitive and social parameters respectively. They control the particle’s behavior given two choices: (1) to follow its personal best or (2) to follow the swarm’s global best position. Overall, this dictates whether the swarm is explorative or exploitative in nature. In addition, the parameter \(w\) controls the inertia of the swarm’s movement.
An example usage is as follows:
import pyswarms as ps
from pyswarms.backend.topology import Pyramid
from pyswarms.utils.functions import single_obj as fx
# Setup hyperparameters and topology
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9}
my_topology = Pyramid(static=False)
# Call instance of GeneralOptimizerPSO
optimizer = ps.single.GeneralOptimizerPSO(n_particles=10, dimensions=2,
                                          options=options, topology=my_topology)
# Perform optimization
cost, pos = optimizer.optimize(fx.sphere, iters=100)
This algorithm was adapted from the earlier works of J. Kennedy and R.C. Eberhart in Particle Swarm Optimization [IJCNN1995].
[IJCNN1995]  J. Kennedy and R.C. Eberhart, “Particle Swarm Optimization,” Proceedings of the IEEE International Joint Conference on Neural Networks, 1995, pp. 1942–1948.

class
pyswarms.single.general_optimizer.
GeneralOptimizerPSO
(n_particles, dimensions, options, topology, bounds=None, oh_strategy=None, bh_strategy='periodic', velocity_clamp=None, vh_strategy='unmodified', center=1.0, ftol=inf, ftol_iter=1, init_pos=None)[source]¶
Bases: pyswarms.base.base_single.SwarmOptimizer

__init__
(n_particles, dimensions, options, topology, bounds=None, oh_strategy=None, bh_strategy='periodic', velocity_clamp=None, vh_strategy='unmodified', center=1.0, ftol=inf, ftol_iter=1, init_pos=None)[source]¶ Initialize the swarm

options
¶ a dictionary containing the parameters for the specific optimization technique.
 c1 : float
 cognitive parameter
 c2 : float
 social parameter
 w : float
 inertia parameter
If used with the Ring, VonNeumann or Random topology, the additional parameter k must be included:
 k : int
 number of neighbors to be considered. Must be a positive integer less than n_particles
If used with the Ring topology, the additional parameters k and p must be included:
 p : int {1,2}
 the Minkowski p-norm to use. 1 is the sum of absolute values (or L1 distance) while 2 is the Euclidean (or L2) distance.
If used with the VonNeumann topology, the additional parameters p and r must be included:
 r : int
 the range of the VonNeumann topology. This is used to determine the number of neighbours in the topology.
Type: dict with keys {'c1', 'c2', 'w'} or {'c1', 'c2', 'w', 'k', 'p'}

topology
¶ a Topology object that defines the topology to use in the optimization process. The currently available topologies are:
 Star
 All particles are connected
 Ring (static and dynamic)
 Particles are connected to the k nearest neighbours
 VonNeumann
 Particles are connected in a Von Neumann topology
 Pyramid (static and dynamic)
 Particles are connected in N-dimensional simplices
 Random (static and dynamic)
 Particles are connected to k random particles
Static variants of the topologies keep the same neighbours over the course of the optimization; dynamic variants compute new neighbours every time step.
Type: pyswarms.backend.topology.Topology

bounds
¶ a tuple of size 2 where the first entry is the minimum bound while the second entry is the maximum bound. Each array must be of shape (dimensions,).
Type: tuple of numpy.ndarray, optional

oh_strategy
¶ a dict of update strategies for each option.
Type: dict, optional, default=None (constant options)

velocity_clamp
¶ a tuple of size 2 where the first entry is the minimum velocity and the second entry is the maximum velocity. It sets the limits for velocity clamping.
Type: tuple, optional

center
¶ an array of size dimensions
Type: list (default is None)

ftol
¶ relative error in objective_func(best_pos) acceptable for convergence. Default is np.inf.
Type: float

ftol_iter
¶ number of iterations over which the relative error in objective_func(best_pos) is acceptable for convergence. Default is 1.
Type: int

init_pos
¶ option to explicitly set the particles’ initial positions. Set to None if you wish to generate the particles randomly.
Type: numpy.ndarray, optional



optimize
(objective_func, iters, n_processes=None, verbose=True, **kwargs)[source]¶ Optimize the swarm for a number of iterations
Performs the optimization to evaluate the objective function f for a number of iterations iter.
Parameters:  objective_func (callable) – objective function to be evaluated
 iters (int) – number of iterations
 n_processes (int) – number of processes to use for parallel particle evaluation (default: None = no parallelization)
 verbose (bool) – enable or disable the logs and progress bar (default: True = enable logs)
 kwargs (dict) – arguments for the objective function
Returns: the global best cost and the global best position.
Return type: tuple
