tabu_add(x, y) = iter + tabu_add_tenure
tabu_delete(x, y) = iter + tabu_delete_tenure
We assign positive penalties to edges in tabu status, which means they are forbidden by the recency memory.
A TS restriction is overridden by aspiration if the outcome of the move under consideration is sufficiently desirable. This can be achieved by deducting a large number from the total penalty.
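As a minimal sketch of this bookkeeping, the snippet below records tenures in the spirit of the two formulas above and applies the aspiration override by deducting a large number from the penalty. Every identifier and constant here (record_move, recency_penalty, TABU_PENALTY, ASPIRATION_BONUS, the tenure values) is an illustrative assumption, not the algorithm's actual settings.

    TABU_ADD_TENURE = 7       # assumed tenure lengths
    TABU_DELETE_TENURE = 7
    TABU_PENALTY = 1.0e3      # assumed penalty for a move in tabu status
    ASPIRATION_BONUS = 1.0e4  # assumed "large number" deducted on aspiration

    tabu_add = {}     # edge (x, y) -> iteration until which adding it is tabu
    tabu_delete = {}  # edge (x, y) -> iteration until which deleting it is tabu

    def record_move(edge, iteration):
        # After a move touches 'edge', set the expiry iterations
        # as in the two formulas above.
        tabu_add[edge] = iteration + TABU_ADD_TENURE
        tabu_delete[edge] = iteration + TABU_DELETE_TENURE

    def recency_penalty(edge, move_type, iteration, move_profit, best_profit):
        # Positive penalty while the edge is in tabu status for this move type.
        table = tabu_add if move_type == "add" else tabu_delete
        penalty = TABU_PENALTY if table.get(edge, 0) > iteration else 0.0
        # Aspiration: a sufficiently desirable outcome overrides the restriction.
        if move_profit > best_profit:
            penalty -= ASPIRATION_BONUS
        return penalty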
5.3.3 Move Evaluation
After finding all neighborhood candidate moves, we evaluate them so that they are ready for selection. The evaluation value has two parts: the total profit and the penalties. For our crane scheduling problem, the penalties comprise (1) short-term memory penalties, which include tabu status penalties as well as aspiration satisfaction “penalties”; (2) long-term memory penalties, which include transition measure and residence measure penalties; and (3) penalties for other biases.
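Assuming the two parts combine as profit minus the penalty sum (the text does not fix the exact combination), the decomposition can be sketched as follows; the three stub routines stand in for the problem-specific long-term-memory and bias measures.

    def transition_penalty(move):   # long-term memory: transition measure (stub)
        return 0.0

    def residence_penalty(move):    # long-term memory: residence measure (stub)
        return 0.0

    def bias_penalty(move):         # penalties for other biases (stub)
        return 0.0

    def evaluate(move, profit, short_term_penalty):
        # short_term_penalty covers tabu status and aspiration "penalties",
        # e.g. the recency_penalty sketch above; higher values are better.
        total_penalty = (short_term_penalty
                         + transition_penalty(move)
                         + residence_penalty(move)
                         + bias_penalty(move))
        return profit - total_penalty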
5.3.4 Probabilistic Move Selection
After evaluating all candidate moves, we select one move to proceed with. The fundamental idea of move selection is to choose the best move, i.e., the move with the highest evaluation. However, we found that this greedy selection strategy introduces a strong bias. We therefore adjusted it using probabilities. The probabilistic move selection strategy is as follows:
1. Generate the candidate list and evaluate the candidate moves as described above
2. Select the move from the candidate list with the highest evaluation value
3. Accept the move with probability p and exit; otherwise, go to (4). Here, p is a parameter and is set to 0.3 in the algorithm
4. Remove the move from the candidate list. If the list is empty, accept the first move of the original candidate list and exit. Otherwise, go to (2).
It is easily shown that the probability of choosing one of the best k moves is 1 − (1 − p)^k, which is large even if p is not. For example, if p is 0.25, the probability of choosing one of the best 5 moves is 0.763, and one of the best 10 moves is 0.944. We can therefore choose moves with relatively high evaluations while avoiding always favoring those with the highest evaluations.
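The selection loop can be sketched as below; 'evaluation' is assumed to be the value from Section 5.3.3, and the fallback interprets step (4)'s "first move of the original candidate list" literally.

    import random

    P_ACCEPT = 0.3  # the parameter p from step (3)

    def select_move(candidates, evaluation, p=P_ACCEPT, rng=random):
        # Scanning the moves in decreasing evaluation order and accepting
        # each with probability p realizes steps (2)-(4): rejecting a move
        # removes it from consideration and passes to the next best one.
        ranked = sorted(candidates, key=evaluation, reverse=True)
        for move in ranked:
            if rng.random() < p:
                return move
        # Every move was rejected: fall back to the first move of the
        # original candidate list, as in step (4).
        return candidates[0]

    # Example with precomputed evaluation values. The i-th best move is
    # chosen with probability (1 - p)^(i-1) * p, so one of the best k
    # moves is chosen with probability 1 - (1 - p)^k, matching the text.
    scores = {"move_a": 5.0, "move_b": 9.0, "move_c": 7.5}
    chosen = select_move(list(scores), scores.get)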
5.4 SWO with Local Search Approach
“Squeaky Wheel” Optimization (SWO) is a general approach to optimization that consists of a Construct-Analyze-Prioritize cycle at its core. The Analyzer assigns a numerical “blame” value to the problem elements that contribute to shortcomings in the current solution. The Prioritizer then modifies the sequence of problem elements, moving elements that received blame toward the front; the higher the blame, the further an element is moved. The Constructor handles problem elements according to the modified sequence in the next iteration. The cycle repeats until a termination condition is satisfied.
The SWO algorithm has been effective on job scheduling and graph coloring problems, and it outperforms TS and integer programming in some applications. Although SWO strives to avoid getting trapped in local optima by changing the sequence in the Prioritizer, its blame values can trap it in small cycles. In SWO, there are tasks that must be handled badly locally in order to achieve a good overall solution. However, the Analyzer always assigns high blame to these tasks, forcing the Constructor to handle them first in the next iteration, which can be a disadvantage.
To overcome these weaknesses, we developed an enhanced SWO framework. In this framework, we first obtain a greedy solution from the Constructor, perform a local search on the solution, and then try to find a better solution in the local solution space.
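A sketch of the enhanced cycle, fixing only the Construct, Local Search, Analyze, Prioritize loop structure described above; the four routines passed in (construct, local_search, analyze, prioritize) are assumed problem-specific components, not the paper's actual implementations.

    def enhanced_swo(elements, max_iters, construct, local_search, analyze, prioritize):
        # Construct -> Local Search -> Analyze -> Prioritize, repeated until
        # the iteration budget (the termination condition here) runs out.
        best_solution, best_value = None, float("-inf")
        sequence = list(elements)
        for _ in range(max_iters):
            solution = construct(sequence)            # greedy construction
            solution, value = local_search(solution)  # search the local solution space
            if value > best_value:
                best_solution, best_value = solution, value
            blame = analyze(solution)                 # numerical blame per element
            # Elements with higher blame move further toward the front.
            sequence = prioritize(sequence, blame)
        return best_solution, best_value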
Local search can improve the SWO heuristic in the following ways:
• Unlike traditional heuristic search methods such as TS and Simulated Annealing, SWO is sensitive to the sequence of problem elements rather than to the objective function. Omitting the objective function may leave the final solution inferior. Local search compensates for this by moving to neighbors with higher objective function values.