Jaya Optimization Algorithm

Dhiraj Rai
Nov 14, 2018

The Jaya algorithm is a metaheuristic capable of solving both constrained and unconstrained optimization problems. It is a population-based method that repeatedly modifies a population of candidate solutions, and it is gradient-free. There are many comparable optimization algorithms, such as the genetic algorithm, particle swarm optimization, artificial bee colony optimization, the firefly algorithm, biogeography-based optimization, and cuckoo search, but what distinguishes Jaya from the others is that it has no algorithm-specific hyperparameters to tune; only the common control parameters (population size and number of generations) need to be set.

Overview

In this blog post I will share a simple Python implementation of the Jaya algorithm, demonstrated on a simple unconstrained optimization problem. Users can copy the code as is and only need to modify the “myobj” function and the lower bound (lb) and upper bound (ub) to solve their own objective function.

Take away

The main aim of this blog post is to walk readers through the code for the Jaya algorithm written in Python. For a thorough understanding of the Jaya algorithm itself, I recommend that readers go through [1] and [2].

Flowchart

The following flowchart explains the workflow of the Jaya algorithm [1]:

Fig 1: Flowchart for Jaya Algorithm [1]
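For readers for whom the flowchart image does not render, its core is a single update rule (the same rule implemented in the “updatepopulation” function below): every candidate solution is pulled towards the best solution in the current population and pushed away from the worst one,

x_new = x_old + r1 × (x_best − |x_old|) − r2 × (x_worst − |x_old|)

where r1 and r2 are random numbers drawn uniformly from [0, 1] for every variable. A new solution replaces the old one only if it improves the objective value (greedy selection), and the loop repeats until the termination criterion is met.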

The code

The complete code in ipynb can be found here

Step 1: Import the required libraries.

import math
import numpy as np
import random
import pandas as pd

Step 2: Define the objective function

I have used Himmelblau’s function for the purpose of demonstration. The details of this function are as follows.

The objective is to minimize

f(x, y) = (x² + y − 11)² + (x + y² − 7)²

Subject to:

−5 ≤ x, y ≤ 5

Himmelblau’s function has four identical global minima, each with f(x, y) = 0; one of them lies at (x, y) = (3, 2) [2].

Fig 2: Complete search space for the Himmelblau function [2]

To implement the objective function, I have defined a function named “myobj” as follows:

def myobj(p1):
    # Evaluate Himmelblau's function for every row (candidate solution) of p1
    F = []
    for i in range(len(p1)):
        x = p1.loc[i]
        f = ((x[0]**2 + x[1] - 11)**2) + ((x[0] + x[1]**2 - 7)**2)
        F.append(f)
    return F

In the above code snippet I have defined the Himmelblau’s function. However, the users can define their own functions by modifying the “myobj” function accordingly and appropriately.
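As a quick sanity check (this snippet is my own addition, not part of the original post), evaluating “myobj” at the known global minimum (3, 2) should return zero:

# Sanity check: (3, 2) is a known global minimum of Himmelblau's function
test = pd.DataFrame([[3.0, 2.0], [0.0, 0.0]])
print(myobj(test))   # expected output: [0.0, 170.0]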

Search space and termination criteria

Once the objective function is defined, we are required to define the lower bound (lb) and upper bound (ub) of each variable. In our example, these are −5 ≤ x ≤ 5 and −5 ≤ y ≤ 5.

The termination criterion is a condition that, once satisfied, stops the execution of the algorithm. In our case I have set a maximum number of generations (Gen = 1000) as the termination criterion.

The bounds and termination criteria can be defined as follows:

pop_size = 25
Gen = 1000
lb=[-5,-5]
ub=[5,5]

Initial population

Once the search space is defined and the termination criterion is set, the next task is to initialize the population, i.e., to generate random solutions within the given range of the variables. These random solutions serve as the starting point for the algorithm. The initial population can be generated as follows:

def initialpopulation(mini, maxi, pop_size):
    # Generate pop_size random solutions, sampled uniformly within [mini, maxi]
    pop = []
    for i in range(pop_size):
        p = []
        for a, b in zip(mini, maxi):
            p.append(a + (b - a) * random.random())
        pop.append(p)
    ini_pop = pd.DataFrame(pop)
    return ini_pop
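As a quick illustration of this function on its own (my own snippet; the seed is only there to make the output reproducible):

random.seed(42)  # fixed seed, only to make this illustration reproducible
p0 = initialpopulation([-5, -5], [5, 5], pop_size=5)
print(p0)  # a 5 x 2 DataFrame of random points inside the bounds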

Update population

In every iteration, the solutions are updated based on the strategy shown in Fig 1. I have implemented the same strategy in the function below.

def updatepopulation(p1, dim):
    # Identify the best (lowest f) and worst (highest f) solutions in the population
    best_x = np.array(p1.loc[p1['f'].idxmin()][0:dim])
    worst_x = np.array(p1.loc[p1['f'].idxmax()][0:dim])
    new_x = []
    for i in range(len(p1)):
        old_x = np.array(p1.loc[i][0:dim])
        r1 = np.random.random(dim)
        r2 = np.random.random(dim)
        # Move towards the best solution and away from the worst solution
        new_x.append(old_x + r1*(best_x - abs(old_x)) - r2*(worst_x - abs(old_x)))
    new_p1 = pd.DataFrame(new_x)
    return new_p1
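To see what a single update step produces, the function can be run on a small hand-made population. This is purely my own illustration (the seed only makes the output reproducible); it is not part of the original post:

np.random.seed(0)  # fixed seed, only to make this illustration reproducible
demo = pd.DataFrame([[0.0, 0.0], [3.0, 2.0], [1.0, 1.0]])
demo['f'] = myobj(demo)               # f-values: 170.0, 0.0, 106.0
print(updatepopulation(demo, dim=2))  # rows drift towards (3, 2), away from (0, 0)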

Greedy selection

In every generation the algorithm compares each updated solution with its predecessor, keeps the better one, and completely discards the inferior one. For a detailed explanation of “greedy selection” I recommend reading [1]. I have implemented it in the following code snippet.

def greedyselector(p1, new_p1):
    # For each row, keep whichever of the old and updated solutions has lower f
    for i in range(len(p1)):
        if p1.loc[i]['f'] > new_p1.loc[i]['f']:
            p1.loc[i] = new_p1.loc[i]
    return p1
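The following toy example (my own illustration, not from the original post) shows the behaviour: the updated solution replaces the old one only where it improves the objective.

old = pd.DataFrame([[0.0, 0.0], [3.0, 2.0]]);  old['f'] = myobj(old)
new = pd.DataFrame([[1.0, 1.0], [0.0, 0.0]]);  new['f'] = myobj(new)
print(greedyselector(old, new))
# Row 0: f(1,1) = 106 < f(0,0) = 170, so the updated solution is kept.
# Row 1: f(3,2) = 0 < f(0,0) = 170, so the original solution survives.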

Trimming

It is imperative that the combination of variables suggested by the algorithm lies within the prescribed bounds. This can be achieved by a number of different approaches; one approach is to trim (clip) the variables to their respective bounds. To do this, I have defined a function named “trimr” in the following code snippet.

def trimr(new_p1, lb, ub):
    # Clip every variable back into its [lb, ub] range
    col = new_p1.columns.values
    for i in range(len(new_p1)):
        for j in range(len(col)):
            if new_p1.loc[i, col[j]] > ub[j]:
                new_p1.loc[i, col[j]] = ub[j]
            elif new_p1.loc[i, col[j]] < lb[j]:
                new_p1.loc[i, col[j]] = lb[j]
    return new_p1
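A short illustration of the trimming behaviour (again my own snippet): a point outside the bounds is clipped back onto them.

out_of_bounds = pd.DataFrame([[6.0, -7.0]])
print(trimr(out_of_bounds, lb=[-5, -5], ub=[5, 5]))  # becomes [5.0, -5.0]

Pandas’ built-in new_p1.clip(lb, ub, axis=1) should achieve the same effect in one line, but the explicit loop keeps the logic transparent.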

Looping

All of the above functions have to be executed in a loop in order to perform the iterations:

def jaya(*argv):
    pop_size, Gen, mini, maxi = argv
    lb = np.array(mini)
    ub = np.array(maxi)
    # Generate and evaluate the initial population
    p1 = initialpopulation(lb, ub, pop_size)
    p1['f'] = myobj(p1)

    dim = len(lb)
    gen = 0
    while gen < Gen:
        # Update, trim, evaluate, and greedily select, once per generation
        new_p1 = updatepopulation(p1, dim)
        new_p1 = trimr(new_p1, lb, ub)
        new_p1['f'] = myobj(new_p1)
        p1 = greedyselector(p1, new_p1)
        gen = gen + 1
    # Report the best objective value and the corresponding variables
    best = p1['f'].min()
    xbest = p1.loc[p1['f'].idxmin()][0:dim].tolist()
    return best, xbest

Executing the code

After defining (or importing) all of the functions above, the algorithm can be executed as follows:

best,xbest = jaya(pop_size, Gen, lb, ub)
print('The objective function value = {}'.format(best))
print('The optimum values of variables = {}'.format(xbest))
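Since Himmelblau’s function has four global minima, each with f(x, y) = 0, a successful run should report an objective value very close to zero, with the variables at one of the four minima (for example, near x = 3, y = 2). Note that the algorithm is stochastic, so different runs may converge to different minima.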

The complete code in ipynb can be found here

References

[1] Rao, R. V., Jaya: An Advanced Optimization Algorithm and Its Engineering Applications, Springer.

[2] Himmelblau’s function, Wikipedia (mirrored on IPFS): https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Himmelblau's_function.html
