# Bidirectional Search

Hello people! In this post we will talk about Bidirectional Search, a powerful search strategy which can be used if you have a goal-directed agent in a (possibly infinite) search space. The idea behind Bidirectional Search is to run two simultaneous searches, one forward from the source state and the other backward from the goal, hoping that the two searches meet somewhere in the middle.

In this post we will talk about Bidirectional Breadth First Search, where we apply BFS from the source and goal simultaneously until search routes intersect. Visually, the way regular BFS would progress would be something like this –

But if we applied BFS simultaneously from source (S) and goal (G), visually it would look something like this –

We can already start to see some improvement, as the number of vertices visited is smaller.

In terms of complexity, if the branching factor is b and the distance from source to goal is d, then plain BFS would take O(b^d) time, but Bidirectional search would take O(b^(d/2) + b^(d/2)) ⇒ O(b^(d/2)), which is far better than O(b^d). For example, with b = 10 and d = 6, that is on the order of 2,000 node expansions instead of 1,000,000. So in many cases a bidirectional search is faster simply because it explores far fewer vertices.

## Properties of Bidirectional search

• Our goal in Bidirectional search is to find a path from source to goal.
• We can use any search algorithm on both sides; Bidirectional search is more of a strategy than a fixed algorithm. Using BFS on both sides is the most popular option, as it guarantees a shortest path.
• Bidirectional search should be used only when your goal state is well defined. E.g., for a Rubik’s cube, the solved cube (each face filled with one color) is your goal state and the current configuration of the cube is your source state, so we could use Bidirectional search in this scenario. But in chess, your goal is to checkmate the other player, and a checkmate can happen in thousands of configurations, so Bidirectional search isn’t feasible in chess.
• Bidirectional search using BFS needs the edge weights to be uniform or non-existent, so Bidirectional BFS is usually applied to undirected unweighted graphs.

## Code

Writing the code for Bidirectional BFS is easier if you have already written the code for Breadth First Search using queue. Basically, all the variables we used in BFS get doubled here. That is, for Bidirectional BFS, we will use 2 queues and 2 parent arrays. One set will be used for applying BFS from source and other set will be used for applying BFS from goal. So we can call them –

• sourceParent – the parent array to construct paths for BFS from source vertex.
• sourceQueue – the queue for BFS from source vertex.
• goalParent – the parent array to construct paths for BFS from goal vertex.
• goalQueue – the queue for BFS from goal vertex.

Although we are using a separate set of variables for the BFS from the source and the BFS from the goal, the two searches operate in exactly the same way: picking the first element of the queue, enqueuing its adjacent vertices, and so on. We can put that logic in a separate method called BFSExplore().

So how will we know if the BFS from the source and the BFS from the goal have intersected? After expanding a layer on either side, we check whether any vertex has been marked as visited in both searches. If yes, that vertex is our intersection node. Recall that we can tell a vertex has been visited by checking whether its parent value has been set.

Now the only thing left is to return the final path. There’s no genius logic for this, just do it in a brute force way 😛 . Add all vertices from source → intersecting_vertex using sourceParent into a collection. Then add all the vertices from intersecting_vertex → goal using goalParent into the same collection. Be careful, don’t add the intersecting_vertex twice in the path 😉

I have given you all the theory you need to write the code for Bidirectional BFS. It may take some time, but do try your best to write the code yourself. You can refer to my code below if you get stuck –
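Here is a minimal runnable sketch of the above in Python. Note that the graph representation (a dict of adjacency lists) and names like `bfs_explore` are this sketch’s own choices, mirroring the variables described above –

```python
from collections import deque

def bidirectional_bfs(graph, source, goal):
    """Find a path in an undirected, unweighted graph by running BFS
    from both ends until the two frontiers meet."""
    if source == goal:
        return [source]

    # One parent map per direction; a vertex counts as visited
    # once it has an entry in the parent map.
    source_parent = {source: None}
    goal_parent = {goal: None}
    source_queue = deque([source])
    goal_queue = deque([goal])

    def bfs_explore(queue, parent):
        # Expand one whole layer of the search.
        for _ in range(len(queue)):
            u = queue.popleft()
            for v in graph[u]:
                if v not in parent:
                    parent[v] = u
                    queue.append(v)

    while source_queue and goal_queue:
        bfs_explore(source_queue, source_parent)
        bfs_explore(goal_queue, goal_parent)
        # Intersection check: any vertex visited by both searches?
        common = set(source_parent) & set(goal_parent)
        if common:
            meet = common.pop()
            # Build source -> meet, then meet -> goal,
            # taking care not to add `meet` twice.
            path = []
            v = meet
            while v is not None:
                path.append(v)
                v = source_parent[v]
            path.reverse()
            v = goal_parent[meet]
            while v is not None:
                path.append(v)
                v = goal_parent[v]
            return path
    return None   # the two searches never met
```

For example, on the path graph 0–1–2–3–4–5–6, `bidirectional_bfs(graph, 0, 6)` returns `[0, 1, 2, 3, 4, 5, 6]`.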

I hope this has given you a good understanding of Bidirectional search. Keep practicing! Happy coding! 😀

# Iterative Deepening Depth First Search (IDDFS)

Hello people! In this post we will talk about another search algorithm, Iterative Deepening Depth First Search (IDDFS), also called Iterative Deepening Search (IDS). This algorithm is used when you have a goal-directed agent in an infinite search space (or search tree).

Why do Breadth First Search (BFS) and Depth First Search (DFS) fail in the case of an infinite search space?

• In DFS, you recursively explore a node’s adjacent vertices, going deeper and deeper. In an infinite search space, DFS may never terminate, and even when it does, it may not find the shortest path to the goal. On the plus side, DFS needs only O(d) space, where d is the depth of the search.
• BFS consumes too much memory, since it must store all the nodes of a level before moving to the next. For a complete binary tree with N nodes, the last level has about N / 2 leaf nodes and the second-last level about N / 4, so BFS needs O(N) space.

Iterative deepening depth first search (IDDFS) is a hybrid of BFS and DFS. In IDDFS, we perform DFS up to a certain “limited depth,” and keep increasing this “limited depth” after every iteration.

Let us take an example to understand this –

Our starting node (A) is at a depth of 0. Our goal node (R) is at a depth of 4. The above example is a finite tree, but think of the above tree as an infinitely long tree and only up to depth = 4 is shown in the diagram.

As stated earlier, in IDDFS we perform DFS up to a certain depth and keep incrementing this allowed depth. Performing DFS up to a certain allowed depth is called Depth Limited Search (DLS). As Depth Limited Search (DLS) is the building block of IDDFS, let us take time to understand it first.

Let us understand DLS by performing it on the above example. In Depth Limited Search, we first set a constraint on how deep (or how far from the root) we will go. Let’s say our limit (DEPTH) is 2. Now, in the above diagram, place your hand to cover the nodes at depth 3 and 4. Looking at the remaining nodes, can you tell the order in which a normal DFS would visit them? It would be as follows –

```
A B E F C G D H
```

Can you do it for DEPTH = {0, 1, 2, 3, 4}? Just cover the nodes you don’t need with your hand and try to perform DFS in your mind. You should get answers like this –

| DEPTH | DLS traversal |
|-------|---------------|
| 0 | A |
| 1 | A B C D |
| 2 | A B E F C G D H |
| 3 | A B E I F J K C G L D H M N |
| 4 | A B E I F J K O P C G L R D H M N S |

Now that you have got an idea of Depth Limited Search, Iterative deepening depth first search is just one loop away! The pseudo-code for IDDFS is as below –

```
IDDFS(tree):
    for depth = 0 to infinity:
        if DLS(tree, depth):
            return true

    return false
```
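The pseudo-code can also be turned into a short runnable sketch. Here it is in Python, where the dict-of-lists `tree` encodes the example tree from this article (A is the root, the goal R sits at depth 4) and the function names are this sketch’s own choices –

```python
def dls(tree, node, goal, limit):
    # Depth Limited Search: plain DFS that refuses to descend below `limit`.
    if node == goal:
        return True
    if limit == 0:
        return False
    return any(dls(tree, child, goal, limit - 1)
               for child in tree.get(node, []))

def iddfs(tree, start, goal, max_depth):
    # Run DLS with limits 0, 1, 2, ..., max_depth (our stand-in for infinity).
    for depth in range(max_depth + 1):
        if dls(tree, start, goal, depth):
            return True
    return False

# The example tree from this article: A is the root, R sits at depth 4.
tree = {
    "A": ["B", "C", "D"],
    "B": ["E", "F"], "C": ["G"], "D": ["H"],
    "E": ["I"], "F": ["J", "K"], "G": ["L"], "H": ["M", "N"],
    "K": ["O", "P"], "L": ["R"], "N": ["S"],
}
print(iddfs(tree, "A", "R", 3))   # False: R is deeper than 3
print(iddfs(tree, "A", "R", 4))   # True: found at depth 4
```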

Before you race off to code, here are a few things –

• IDDFS is used to check if the goal is reachable from the start node, so its return type is boolean.
• IDDFS is only used to check reachability, not to return the path from the start node to the goal, so we don’t maintain anything like a parent array (as we do in BFS/DFS).
• IDDFS is meant to run DLS from 0 → ∞, but we will write our IDDFS program to run DLS from 0 → MAX_DEPTH, because in the real world we never run anything up to ∞.
• First code the DLS method, then add the IDDFS method which calls the DLS method.
• IDDFS is meant to run on an infinite tree. You can use a binary tree if you want, but in my opinion an N-ary tree makes more sense, so in my code below I use an N-ary tree, with the code taken from my article on N-ary trees.

You should be capable of writing the code for Iterative deepening depth first search now. Try it, I’m sure you can 😉 You can refer to my code if you get stuck –

Output –
```A
B
E
I
F
J
K
O
P
C
G
L
R
D
H
M
N
S
Depth = 0, DLS Traversal => A,
Depth = 1, DLS Traversal => A, B, C, D,
Depth = 2, DLS Traversal => A, B, E, F, C, G, D, H,
Depth = 3, DLS Traversal => A, B, E, I, F, J, K, C, G, L, D, H, M, N,
Goal node = R is not reachable at a depth of 3
Depth = 0, DLS Traversal => A,
Depth = 1, DLS Traversal => A, B, C, D,
Depth = 2, DLS Traversal => A, B, E, F, C, G, D, H,
Depth = 3, DLS Traversal => A, B, E, I, F, J, K, C, G, L, D, H, M, N,
Depth = 4, DLS Traversal => A, B, E, I, F, J, K, O, P, C, G, L, R,
Goal node = R is reachable at a depth of 4
```

In the output, the tree is printed first, then the IDDFS traversals. I purposely chose a goal node which is not reachable at depth = 3 but is reachable at depth = 4. As you can see from the output above, we visit the nodes at depth = 0 in every iteration, the nodes at intermediate depths fewer times (though still multiple times), and the nodes at depth = DEPTH_MAX only once. This may seem inefficient, but it actually isn’t, because there are very few nodes near the root and a lot of nodes at depth = DEPTH_MAX. If ‘d’ is the depth and ‘b’ is the branching factor of the search tree (this would be N for an N-ary tree), then mathematically the total number of node visits is (d + 1) + d·b + (d − 1)·b^2 + … + 1·b^d, which is still O(b^d).

So the time complexity remains O(b^d), but the constant factor is larger, which makes IDDFS somewhat slower than BFS and DFS (both of which also have O(b^d) time complexity).

| | Time complexity | Space complexity |
|-------|-----------------|------------------|
| DFS   | O(b^d) | O(d) |
| BFS   | O(b^d) | O(b^d) |
| IDDFS | O(b^d) | O(bd) |

Iterative deepening depth first search may not be directly used in practical applications but the technique of iteratively progressing your search in an infinite search space is pretty useful and can be applied in many AI applications.

Congrats, your AI just got better! Keep practicing! Happy coding! 😀

# Minimax algorithm with Alpha-Beta Pruning

Hello people, in this post we will try to improve the performance of our Minimax algorithm by applying Alpha-Beta Pruning. Please read my post on Minimax algorithm if you haven’t already.

Alpha-beta pruning is based on the Branch and Bound algorithm design paradigm: we maintain upper and lower bounds on our optimal solution and use them to discard any decision which cannot possibly yield a better solution than the one we have so far.

### Idea of Alpha-beta pruning

We will maintain two additional parameters α and β. α is the best score achievable by the max player so far and β is the best score achievable by the min player so far. These values will be passed down to recursion calls via arguments.

We will try to use α and β to prune our search tree by skipping choices which can’t possibly give us a better solution. Let us understand this with the help of an example. Look at the sketch below –

Let us assume we have the above game tree formed by two agents (max and min) playing a hypothetical game. Here, the Max agent tries to maximize the score and Min agent tries to minimize the score. Now, for a normal Minimax algorithm, we would traverse all these nodes, but this time we will send the values of α and β.

The topmost Min node makes the first call. Initially, the values of α and β are null. Here val is the value which will be returned. Now what will happen next? Min has two possibilities above and the call goes to the first possibility, which is the first Max node in the above diagram. It passes on the values of α and β, which both happen to be null for the moment. What will Max do there?

The choices for Max are 2 and 4. Since Max always maximizes the score, it will choose 4. So we update the value to be returned to 4. Notice that the value of α = 4. α denotes the best possibility for Max so far. It doesn’t play a big role here, but you must have an idea on when to update α and β. Now, we return value 4 from the Max node.

Hmm, now the Min node sees that the first possible decision will give it a score of 4. Okay, so now β = 4. Remember, β is the best possible decision for Min node so far. Now what does Min do? It looks at the next possibility. It passes on values of α and β. α is anyway null, but β = 4.

Okay, so the Max node receives the values of α and β. Now, α is null and β = 4. So Max node starts looking at all the possibilities one-by-one. The possibilities are 6 and 8.

This is important! Now, this is the case when Max has finished reading the first possibility which is 6. Remember, it hasn’t gone to the next possibility (which is 8) yet!

After reading 6, val = 6 and α = 6, because it is the best solution so far. Now, what if the values for the choices ahead returned a value lesser than 6? Then obviously Max would choose 6 since it is the highest. Okay, what if the values for the choices ahead returned a value greater than 6, say X? Then obviously Max would choose X, since it is greater than 6. But, even if it did, will it affect the decision of Min on top? No! Because between 4 and X, Min would obviously choose 4!

So no matter what the next value Max encounters, it cannot affect the decision of Min. How did Max node know Min already has a choice which yields 4? From the value of β!

Read the above case again and again if you didn’t understand it. If you’ve understood it then you’ve learned Minimax algorithm with alpha-beta pruning! 😉

So we cut off further computation in the Max node and return 6.

So we have managed to prune a node. This is a small example, but for a real-world scenario we would be able to prune a lot of nodes, especially when we hit the best solution earlier.

The pruning happens for these cases –

• val ≥ β in a Max node
• val ≤ α in a Min node
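Here is a minimal sketch of these pruning rules in Python, on a hand-built game tree matching the example above (a Min node on top with two Max children). The tree encoding and the `visited` list, which records which leaves actually get evaluated, are this sketch’s own devices –

```python
import math

def alphabeta(node, is_max, alpha, beta, visited):
    # Leaves are plain ints carrying their score; internal nodes are lists.
    if isinstance(node, int):
        visited.append(node)
        return node
    best = -math.inf if is_max else math.inf
    for child in node:
        val = alphabeta(child, not is_max, alpha, beta, visited)
        if is_max:
            best = max(best, val)
            if best >= beta:      # Min above already has something <= beta: prune
                break
            alpha = max(alpha, best)
        else:
            best = min(best, val)
            if best <= alpha:     # Max above already has something >= alpha: prune
                break
            beta = min(beta, best)
    return best

# The example above: a Min node on top, two Max children with leaf scores.
tree = [[2, 4], [6, 8]]
visited = []
result = alphabeta(tree, False, -math.inf, math.inf, visited)
print(result, visited)   # 4 [2, 4, 6] -- the leaf 8 is never evaluated
```

The first Max child returns 4, so β becomes 4 at the Min node; inside the second Max child, reading the leaf 6 makes val ≥ β, so the remaining leaf 8 is pruned, exactly as in the walkthrough above.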

Now let’s try to write the pseudo-code for Minimax algorithm with alpha beta pruning. Before we do that, first try to write down the pseudo-code for a regular Minimax algorithm. If you could, that’s awesome! 😀 If not, take a look at the pseudo-code in my post on Minimax Algorithm, because I will only make slight modifications in that.

We know that pruning happens only in the above stated two cases. So the only modifications we need to make to our existing Minimax algorithm pseudo-code is that,

• Add the parameters alpha and beta to the procedure.
• Add the conditions to update alpha and beta.

The final pseudo-code goes like this –

```
(score, move) maxTurn(game, depth, alpha, beta):
    if game is in terminal state:
        return (score(game, depth), none)

    max = (none, none)

    foreach emptySpace in game:
        game[emptySpace] = X
        currentMove = minTurn(game, depth + 1, alpha, beta)
        game[emptySpace] = none   // revert the change before any return

        if currentMove.score > max.score:
            max = (currentMove.score, emptySpace)

        if currentMove.score >= beta:
            return max            // beta cutoff

        if currentMove.score > alpha:
            alpha = currentMove.score

    return max

(score, move) minTurn(game, depth, alpha, beta):
    if game is in terminal state:
        return (score(game, depth), none)

    min = (none, none)

    foreach emptySpace in game:
        game[emptySpace] = O
        currentMove = maxTurn(game, depth + 1, alpha, beta)
        game[emptySpace] = none   // revert the change before any return

        if currentMove.score < min.score:
            min = (currentMove.score, emptySpace)

        if currentMove.score <= alpha:
            return min            // alpha cutoff

        if currentMove.score < beta:
            beta = currentMove.score

    return min
```

So, instead of 1 if condition, we now have 3 if conditions in our methods. The order of the new conditions can be interchanged; I just like to write it this way. Note that we revert the move right after the recursive call, so the board is clean even when we return early on a cutoff.

Now that you have the pseudo-code, can you use that to tell which nodes get pruned for the example below?

Take 2 minutes, it is easy. When done, check your answers with mine –

Did you get them right? If yes kudos! If not, you just have to try one more time. I’m sure you’ll get it! 🙂

Now you are more than capable of writing the code for Minimax algorithm with alpha beta pruning. It is just a matter of a few conditions if you have already written the code for Minimax algorithm. Try to code it, you can refer to my code if you get stuck (it is a modification of the code in my post on Minimax algorithm) –

Congratulations! You AI just got hell a lot faster! Keep practicing! Happy coding! 😀

# MiniMax Algorithm

Hello people! In this post we will look at one of the most basic Artificial Intelligence algorithms, the MiniMax algorithm. The MiniMax algorithm is used to implement basic AI or game logic in two-player games. The most common scenario is implementing a perfect Tic-Tac-Toe player, so in this article we will look at how to do that.

## Definition –

Given that two players are playing a game optimally (playing to win), MiniMax algorithm tells you what is the best move that a player should pick at any state of the game. So, the input to MiniMax algorithm would be –

1. State of the game.
2. Whose turn it is.

And the output would be the best move that can be played by the player given in the input.

## Idea of MiniMax Algorithm –

In the game of Tic-Tac-Toe, there are two players, player X and player O. Now imagine there’s a scoreboard which displays a huge number called “score”, and –

1. If X wins, the score increases by 10.
2. If O wins, the score decreases by 10.
3. If it is a draw, then the score remains unchanged.

So now, the bigger the score, the better it is for X, which means X will try to maximize the score as much as possible. Similarly, the smaller the score, the better it is for O, so O will try to minimize the score as much as possible.

To understand this better, imagine you are player X and this is the current state of your game –

So if you are in state 1 (from the above diagram), you have 3 possible moves which lead to state 2, 3 and 4. For these moves the scores are –

• +10 if you choose state 2.
• 0 if you choose state 3, because it will be a draw if Player O plays optimally.
• -10 if you choose state 4.

So conceptually we know that player X must choose the winning move. Programmatically, this is done by choosing the move which returns the maximum score. So X will always try to maximize the score and will always choose the move which fetches X the maximum score. Thus, in the above scenario, X chooses the move which goes to state 2.

Now, to make sure you understand both sides of this algorithm, let us take the same state as above, but say it is O’s turn now and not X’s. Can you draw a state diagram depicting all the possible moves and the scores returned by playing them? You can! Just pause here for a moment and try to draw the state diagram for the above game, assuming it is player O’s move.

You should get a diagram like this –

Did you get a diagram like this? Good job if you did 😉 . Player O has 3 possible moves and the scores are –

• -10 if you choose to go to state 2.
• -10 if you choose to go to state 3.
• +10 if you choose to go to state 4.

Player O will always try to minimize the score, so player O must select a move which will either lead to state 2 or 3 to win.

## Writing code for MiniMax algorithm –

Writing code for MiniMax algorithm is not very difficult, but you may not get it in the first try so I’ll help you out. Firstly, have a clarity on the smaller pieces of logic and write methods for them first. You will need these 3 helper methods for your code –

1. printGame(game) – which prints the state of Tic-Tac-Toe game.
2. hasPlayerWon(game, player) – which tells if the given player has won the given Tic-Tac-Toe game.
3. score(game) – which returns +10 if player X has won, -10 if player O has won, 0 otherwise.

So now you have a little clarity over the smaller pieces, code them first. Now, you are ready to write the MiniMax algorithm method. It is a recursive method which takes in 2 inputs –

• the state of the game
• information on which player’s turn it is.

It will need to return –

• max/min score which can be achieved for the given player for the given game state.
• best move which can be played by given player.

We can make a new class to return all the information we need. So, our MiniMax algorithm will look like –

```
(score, move) maxTurn(game):
    if game is in terminal state:
        return (score(game), none)

    max = (none, none)

    foreach emptySpace in game:
        game[emptySpace] = X
        currentMove = minTurn(game)

        if currentMove.score > max.score:
            max = (currentMove.score, emptySpace)

        game[emptySpace] = none   // reverting change

    return max

(score, move) minTurn(game):
    if game is in terminal state:
        return (score(game), none)

    min = (none, none)

    foreach emptySpace in game:
        game[emptySpace] = O
        currentMove = maxTurn(game)

        if currentMove.score < min.score:
            min = (currentMove.score, emptySpace)

        game[emptySpace] = none   // reverting change

    return min
```

With your new clarity over the helper methods and the pseudocode, try to write the code for MiniMax algorithm. When in doubt come back and read the MiniMax algorithm theory and the implementation tips. Remember, you are trying to write the code for an ideal Tic-Tac-Toe player here, so you need to write the starter code for simulating a 2-player Tic-Tac-Toe game.

### Ideal player doesn’t give up!

There’s just one thing lacking in our algorithm now. Take the case given below –

So for the above case, player O will lose no matter which decision it takes. Now, if we apply the Minimax algorithm we have formed so far, player O would choose state 2, simply because state 2 is the first state it encounters while computing the minimum value.

But this doesn’t seem right. Ideally, we would want Player O to go for state 5, because that’s what an ideal player would do. An ideal player would choose the move where he/she loses the game in 3 or 4 turns rather than in the very next turn.

How do we implement this in our algorithm? We can add another parameter, “depth”, to our algorithm and decay the score by the depth. So if a player wins by taking more turns, the gain is smaller, and if the player takes fewer turns, the gain is bigger.

If we apply this “decay score by level” logic to the above example, the case would look like this –

Now our player O will obviously choose state 5. Now our TicTacToe bot is an ideal bot! The pseudo-code now would look something like this –

```
(score, move) maxTurn(game, depth):
    if game is in terminal state:
        return (score(game, depth), none)

    max = (none, none)

    foreach emptySpace in game:
        game[emptySpace] = X
        currentMove = minTurn(game, depth + 1)

        if currentMove.score > max.score:
            max = (currentMove.score, emptySpace)

        game[emptySpace] = none   // reverting change

    return max

(score, move) minTurn(game, depth):
    if game is in terminal state:
        return (score(game, depth), none)

    min = (none, none)

    foreach emptySpace in game:
        game[emptySpace] = O
        currentMove = maxTurn(game, depth + 1)

        if currentMove.score < min.score:
            min = (currentMove.score, emptySpace)

        game[emptySpace] = none   // reverting change

    return min

int score(game, depth):
    if X has won:
        return 10 - depth
    else if O has won:
        return depth - 10

    return 0
```

Try to implement the ideal Tic-Tac-Toe bot on your own, it will be fun to create the game. If you get stuck, you can refer to my code –
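As a reference, here is a compact Python sketch of the whole thing. The board encoding (a flat list of 9 cells) and the helper names are this sketch’s assumptions, following the helper methods described earlier –

```python
import math

# Board: flat list of 9 cells, each "X", "O", or None.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def has_player_won(game, player):
    return any(all(game[i] == player for i in line) for line in WIN_LINES)

def score(game, depth):
    if has_player_won(game, "X"):
        return 10 - depth    # faster wins score higher for X
    if has_player_won(game, "O"):
        return depth - 10    # faster wins score lower (better) for O
    return 0

def minimax(game, depth, is_max):
    """Return (best score, best move) for the player to move.
    X maximizes the score, O minimizes it."""
    empty = [i for i, c in enumerate(game) if c is None]
    if has_player_won(game, "X") or has_player_won(game, "O") or not empty:
        return score(game, depth), None

    best_score = -math.inf if is_max else math.inf
    best_move = None
    for cell in empty:
        game[cell] = "X" if is_max else "O"
        s, _ = minimax(game, depth + 1, not is_max)
        game[cell] = None    # reverting change
        if (is_max and s > best_score) or (not is_max and s < best_score):
            best_score, best_move = s, cell
    return best_score, best_move

# X has two in a row on the top line; the bot should pick the winning cell.
board = ["X", "X", None,
         "O", "O", None,
         None, None, None]
print(minimax(board, 0, True))   # (9, 2): play cell 2, winning at depth 1
```

Notice how the depth-decayed score surfaces here: the immediate win at depth 1 scores 9, beating any slower win.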

Congrats on taking your first step to build your dream AI! 😀 Keep practicing! Happy Coding! 😀