Bellman Ford Algorithm

Hello people…! In this post I will talk about another single source shortest path algorithm, the Bellman Ford Algorithm. Unlike Dijkstra’s Algorithm, which works only for graphs with non-negative edge weights, the Bellman Ford Algorithm gives the shortest paths from a given vertex even for graphs with negative edge weights. This makes the Bellman Ford Algorithm more versatile, but its versatility comes at a cost: the runtime complexity of the Bellman Ford Algorithm is O(|V||E|), which is substantially more than that of Dijkstra’s Algorithm. Sometimes this algorithm is called the Bellman Ford Moore Algorithm, as the same algorithm was also published independently by another researcher.

Before we get started, there are a couple of things that we must understand. Firstly, why Dijkstra’s Algorithm fails for negative edge weights, and secondly, the concept of Negative Cycles.

Why does Dijkstra fail?

Consider the graph below –

Negative Edges in a Graph

Dijkstra’s Algorithm is based on the principle that if S → V1 → … → Vk is the shortest path from S to Vk, then D(S, Vi) ≤ D(S, Vj) for any i ≤ j along that path. In other words, a vertex that appears earlier on a shortest path can never be farther from the source than a vertex that appears later. But in the graph above, that principle is clearly violated. The shortest path from V1 to V3 is of length 3 units, but the shortest path from V1 to V4 is of length 1 unit, which means that V4 is actually closer to V1 than V3 is, contradicting Dijkstra’s principle.

Negative Cycles

A Negative Cycle is a cycle V1 → V2 → V3 → … → Vk → V1 where the total sum of the edge weights along the cycle is negative. Consider the graph below –

Negative Cycle in a Graph

The cycle B → C → D → B is a negative cycle, as its total weight is -2. So, the distance from A to B is 2, but if we go around the cycle once, we get a distance of 0, and if we go around once more, we get -2. We could keep circling like this as many times as we want to reduce the shortest distance. Hence the shortest distances to the vertices B and E become indeterminate.

So, we want the Bellman Ford Algorithm to handle these two issues: computing shortest paths for a graph with negative edges, and dealing with negative cycles. The Bellman Ford Algorithm will accurately compute the shortest paths for a graph with negative edges, but it fails for a graph with negative cycles. What it can do, however, is tell you whether or not a negative cycle exists. If one exists, the distances it reports are incorrect; otherwise, the solution given by the Bellman Ford Algorithm is perfect. This is fine, because logically there are no shortest paths in a graph with negative cycles.

Unlike Dijkstra’s Algorithm, where we had to use a priority queue, we require no additional data structure to code the Bellman Ford Algorithm. This makes writing the code much easier. And the algorithm is pretty straight-forward too. Take a look at the pseudo-code of the Bellman Ford Algorithm given below –

bellmanFord(G, s)
    for each vertex V in G(V)
        D(V) = INT_MAX
        parent[V] = -1

    D(s) = 0

    for i = 1 to |G(V)| - 1
        for each edge (u, v) in G(E)
            if D(u) + weight of edge (u, v) < D(v)    // the edge can be relaxed
                D(v) = D(u) + weight of edge (u, v)
                parent[v] = u

    for each edge (u, v) in G(E)
        if D(u) + weight of edge (u, v) < D(v)    // the edge can still be relaxed
            return false

    return true

You may not understand the pseudo-code at first look, so here’s a step-by-step representation of it –

  • Initialise the array which contains the shortest distances to infinity (a high integer value in the pseudo-code).
  • Initialise the parent array which contains the parent vertices in the shortest path to NULL (or -1 if it is an integer array).
  • Set the shortest distance of starting vertex to 0.
  • Explore all the edges, and see if you can relax them. If you can, relax the edge and proceed with the exploration.
  • Do the above operation |V| – 1 times.
  • After that, do another pass over the graph, checking all the edges to see if they can still be relaxed. If any edge can still be relaxed, you have a negative cycle in the graph. Hence, return false.
  • If this pass finishes without relaxing any edge, the graph has no negative cycles and the data that you computed is correct, so return true.

Now, what does exploring all the edges mean? If you are implementing the graph using an Adjacency List, it means iterating over all the linked lists associated with the vertices. Now, what is the total number of nodes across all the linked lists in a given Adjacency List? The number of edges, of course! So, we check all the edges, from the edges of vertex 1 to the edges of vertex |V|, in a linear manner. This whole operation takes O(|E|) time, and it is repeated |V| – 1 times, so this part of the code takes O(|E||V|) time. Now, analyse the pseudo-code for a few minutes. Ask yourself how you would code this and that. Then, when your mind is filled with the pseudo-code, look at the sketch below. The sketch below is a sort of “dry run” of the pseudo-code stated above –

Bellman Ford Algorithm Step-by-Step

The above sketch is self-explanatory. I hope you understand how the iterations go. In a way it looks like a very ordinary algorithm, without any greedy steps or partitions. The Bellman Ford Algorithm is pretty easy to code too. If you work at it for an hour or two, I’m sure you can code this algorithm. It does not require a priority queue or any other tools. All you need to code the Bellman Ford Algorithm is the pseudo-code. The pseudo-code is very important. Keep looking at the pseudo-code again and again whenever you get a doubt. A reference sketch in C++ is given below.

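The sketch assumes the input gives the number of vertices and edges, a list of directed edges as (u, v, weight) triples, and then the source vertex, with vertices numbered 1 to |V|; the edges are kept in a single edge list, which makes “exploring all the edges” a direct scan over that list –

#include <cstdio>
#include <vector>
#include <climits>
using namespace std;

struct Edge {
    int u, v, weight;                       // directed edge u -> v with the given weight
};

// Returns true if no negative cycle is found, filling D with shortest
// distances from s and parent with the shortest path tree.
bool bellmanFord(int vertices, const vector<Edge> &edges, int s,
                 vector<long long> &D, vector<int> &parent) {
    const long long INF = LLONG_MAX / 4;    // "infinity", small enough to avoid overflow
    D.assign(vertices + 1, INF);
    parent.assign(vertices + 1, -1);
    D[s] = 0;

    // Relax every edge |V| - 1 times
    for (int i = 1; i <= vertices - 1; ++i) {
        for (const Edge &e : edges) {
            if (D[e.u] != INF && D[e.u] + e.weight < D[e.v]) {
                D[e.v] = D[e.u] + e.weight;
                parent[e.v] = e.u;
            }
        }
    }

    // If any edge can still be relaxed, a negative cycle exists
    for (const Edge &e : edges) {
        if (D[e.u] != INF && D[e.u] + e.weight < D[e.v]) {
            return false;
        }
    }

    return true;
}

int main() {
    int vertices, numEdges, source;
    scanf("%d %d", &vertices, &numEdges);

    vector<Edge> edges(numEdges);
    for (int i = 0; i < numEdges; ++i) {
        scanf("%d %d %d", &edges[i].u, &edges[i].v, &edges[i].weight);
    }
    scanf("%d", &source);

    vector<long long> D;
    vector<int> parent;

    if (bellmanFord(vertices, edges, source, D, parent)) {
        for (int v = 1; v <= vertices; ++v) {
            printf("Vertex %d -> distance %lld, parent %d\n", v, D[v], parent[v]);
        }
    } else {
        printf("Negative cycle detected, shortest distances are not well defined.\n");
    }

    return 0;
}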

If you have any doubts regarding the algorithm feel free to drop a comment. I’ll surely reply to them. I hope my post helped you in learning the Bellman Ford Algorithm. If it did, let me know by commenting! Keep practising… Happy Coding…! 🙂

Dijkstra’s Algorithm

Hello, people…! In this post, I will talk about one of the fastest single source shortest path algorithms, which is, the Dijkstra’s Algorithm. The Dijkstra’s Algorithm works on a weighted graph with non-negative edge weights and gives a Shortest Path Tree. It is a greedy algorithm, which sort of mimics the working of breadth first search and depth first search.

The Dijkstra’s Algorithm starts with a source vertex ‘s‘ and explores the whole graph. We will use the following elements to compute the shortest paths –

  • Priority Queue Q.
  • An array D, which keeps the record of the total distance from starting vertex s to all other vertices.

Just like the other graph search algorithms, Dijkstra’s Algorithm is best understood by listing out the algorithm in a step-by-step process –

  • The Initialisation –
  1. D[s], which is the shortest distance to s, is set to 0. The distance from the source to itself is 0.
  2. For all the other vertices V, D[V] is set to infinity as we do not have a path yet to them, so we simply say that the distance to them is infinity.
  3. The Priority Queue Q is constructed, initially holding all the vertices of the graph. Each vertex V will have the priority D[V].
  • The Algorithm –
  1. Now, pick up the minimum priority (D[V]) element from Q (which removes it from Q). For the first time, this operation would obviously give s.
  2. For all the vertices v adjacent to s, check if the edge s → v gives a shorter path. This is done by checking the following condition –

    if, D[s] + (weight of edge s → v) < D[v], we found a new shorter route, so update D[v]
    D[v] = D[s] + (weight of edge s → v)

  3. Now pick the next minimum priority element from Q, and repeat the process until there are no elements left in Q.

Let us understand this with the help of an example. Consider the graph below –

Dijkstra's Algorithm - Theory of Programming

Firstly, initialize your components: the shortest distances array D and the priority queue Q. The distance from the source to itself is zero, so D[s] = 0, and the rest of the array is ∞. The set of vertices is inserted into the priority queue Q, each vertex v with priority D[v]. Now, we start our algorithm by extracting the minimum element from the priority queue.

Dijkstra's Algorithm - Theory of Programming

The minimum element in the priority queue will definitely be s (which is A here). Look at all the adjacent vertices of A. Vertices B, C, D are adjacent to A. We can go to B travelling an edge of weight 2, to C travelling an edge of weight 1, and to D travelling an edge of weight 5. The values of D[B], D[C], D[D] are all ∞. We have found a new way of reaching them in 2, 1 and 5 units respectively, each of which is less than ∞, hence a shorter path. This is what the if-condition mentioned above does. So, we update the values of D[B], D[C], D[D] and the priorities of B, C, D in the priority queue. With this, we have finished processing the vertex A.

Dijkstra's Algorithm - Theory of Programming

Now, the process continues to its next iteration and we extract the minimum element from the priority queue. The minimum element would be vertex C, which has a priority of 1. Now, look at all the vertices adjacent to C. There’s vertex D. From C, it takes 1 unit of distance to reach D. But to reach C in the first place, you need 1 unit of distance. So, if you go to D via C, the total distance would be 2 units, which is less than the current value of the shortest distance discovered to D, D[D] = 5. So, we reduce the value of D[D] to 2. This reduction is also called “Relaxation”. With that, we’re done with vertex C.

Dijkstra's Algorithm - Theory of Programming

Now, the process continues to its next iteration and we extract the minimum element from the priority queue. This time, there are two minimum elements, B and D; you can go for either one. We will go for vertex D. From D, you can go to E and F, with total distances of 2 + 2 {D[D] + (weight of D → E)} and 2 + 3 respectively. Both are less than ∞, so D[E] becomes 4 and D[F] becomes 5. We’re done with vertex D.

Dijkstra's Algorithm - Theory of Programming

Now, the process continues to its next iteration and we extract the minimum element from the priority queue. The minimum element in the priority queue is vertex B. From vertex B, you can reach F in 2 + 1 units of distance, which is less than the current value of D[F], 5. So, we relax D[F] to 3. From vertex B, you can also reach vertex D in 2 + 2 units of distance, which is more than the current value of D[D], 2. This route is not considered, as it is clearly a longer route. With that, we’re done with vertex B.

Dijkstra's Algorithm - Theory of Programming

Now, the process continues to its next iteration and we extract the minimum element from the priority queue. The minimum element in the priority queue is vertex E. From vertex E, you can reach vertex C in 4 + 4 units of distance, which is more than the current value of D[C], 1. This route is not considered, as it is clearly a longer route. With that, we’re done with vertex E.

Dijkstra's Algorithm - Theory of Programming

Now, the process continues to its next iteration and we extract the minimum element from the priority queue. The minimum element in the priority queue is vertex F. You cannot go to any other vertex from vertex F, so we’re done with vertex F.

Dijkstra's Algorithm - Theory of Programming

With the removal of vertex F, our priority queue becomes empty. So, our algorithm is done…! You can simply return the array D to output the shortest paths.

Having got an idea about the overall working of the Dijkstra’s Algorithm, it’s time to look at the pseudo-code –

dijkstra(G, S)
    for all V in G(V)
        D(V) = INFINITY
    D(S) = 0
    Q = G(V)

    while (Q is not empty)
        u = extractMin(Q)
        for all V in adjacencyList[u]
            if (D(u) + weight of edge (u, V) < D(V))
                D(V) = D(u) + weight of edge (u, V)
                decreasePriority(Q, V)

In the pseudo-code, G is the input graph and S is the starting vertex. I hope you understand the pseudo-code. If you don’t, feel free to comment your doubts. Now, before we code Dijkstra’s Algorithm, we must first prepare a tool, which is the Priority Queue.

The Priority Queue

The Priority Queue can be implemented by a number of data structures such as the Binary Heap, Binomial Heap, Fibonacci Heap, etc. Here, the priority queue can be implemented by a simple Binary Min Heap. If you are not aware of the Binary Heap, you can refer to my post on Binary Heaps. Now the functionalities that we need from our priority queue are –

  • Build Heap – O(|V|) procedure to construct the heap data structure.
  • Extract Min – O(log |V|) procedure, where we return the top-most element from the Binary Heap and delete it. Finally, we make the necessary changes to the data structure.
  • Decrease Key – We decrease the priority of an element in the priority queue when we find a shorter path, also known as Relaxation.

If you know the working of the Binary Heap you can code the Priority Queue in about 1-2 hours. Alternatively, you can use C++ STL’s priority queue instead. But you don’t have a “decrease key” method there. So, if you want to use C++ STL’s priority queue, instead of removing elements, you can re-insert the same element with the lowered priority value.

Simple O(|V|²) Implementation

If you choose to implement the priority queue simply as an array, you can perform the extract-min operation in O(|V|) time with a linear scan. This gives your algorithm a total runtime complexity of O(|V|²). It is the simplest version of Dijkstra’s algorithm, and it is the version to use if you quickly want to code Dijkstra’s algorithm for competitive programming, without having to use any fancy data structures. Take a look at the pseudocode again and try to code the algorithm using an array as the priority queue. You can use the sketch below as a reference –

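The sketch assumes the input gives the vertex and edge counts, a list of undirected weighted edges, and then the source vertex, with vertices numbered 1 to |V|; the “priority queue” here is nothing more than a linear scan over the distance array –

#include <cstdio>
#include <vector>
#include <climits>
using namespace std;

int main() {
    int vertices, edges;
    scanf("%d %d", &vertices, &edges);

    // adjacency list of (neighbour, weight) pairs, vertices numbered 1..|V|
    vector<vector<pair<int, int>>> adjacencyList(vertices + 1);
    for (int i = 0; i < edges; ++i) {
        int u, v, w;
        scanf("%d %d %d", &u, &v, &w);
        adjacencyList[u].push_back(make_pair(v, w));
        adjacencyList[v].push_back(make_pair(u, w));    // undirected graph assumed
    }

    int source;
    scanf("%d", &source);

    const long long INF = LLONG_MAX / 4;                // "infinity"
    vector<long long> D(vertices + 1, INF);
    vector<bool> done(vertices + 1, false);             // true once a vertex is finalised
    D[source] = 0;

    for (int iteration = 1; iteration <= vertices; ++iteration) {
        // extract-min by a simple linear scan over all vertices -> O(|V|)
        int u = -1;
        for (int v = 1; v <= vertices; ++v) {
            if (!done[v] && (u == -1 || D[v] < D[u])) {
                u = v;
            }
        }
        if (u == -1 || D[u] == INF) break;              // remaining vertices are unreachable
        done[u] = true;

        // relax all the edges going out of u
        for (const auto &edge : adjacencyList[u]) {
            int v = edge.first, w = edge.second;
            if (D[u] + w < D[v]) {
                D[v] = D[u] + w;
            }
        }
    }

    for (int v = 1; v <= vertices; ++v) {
        printf("Vertex %d -> shortest distance %lld\n", v, D[v]);
    }

    return 0;
}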

Faster O(|E| log |V|) Implementation

You can use a binary heap as a priority queue. But remember that to perform the decrease-key operation, you’ll need to know the index of each vertex inside the binary heap’s array. For that, you’ll need an additional array that stores every vertex’s position in the heap, and each time any change is made to the binary heap array, corresponding changes must be made in this auxiliary array. Personally, I don’t like this version.
If you want to do the same thing in C++, you can use the STL priority queue and save yourself a lot of work. The tweak here is that, because you cannot decrease the priority of (or remove) a particular vertex in a C++ STL priority queue, we simply re-insert it with the new, lower priority and ignore the stale copies when they are popped. This will increase the memory consumption, but trust me, it’s worth it. A reference sketch of this approach is given below.

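The sketch assumes the same input format as the simple version above (vertex and edge counts, a list of undirected weighted edges, then the source); entries popped from the priority queue that no longer match the current best distance are simply skipped –

#include <cstdio>
#include <vector>
#include <queue>
#include <functional>
#include <climits>
using namespace std;

int main() {
    int vertices, edges;
    scanf("%d %d", &vertices, &edges);

    // adjacency list of (neighbour, weight) pairs, vertices numbered 1..|V|
    vector<vector<pair<int, int>>> adjacencyList(vertices + 1);
    for (int i = 0; i < edges; ++i) {
        int u, v, w;
        scanf("%d %d %d", &u, &v, &w);
        adjacencyList[u].push_back(make_pair(v, w));
        adjacencyList[v].push_back(make_pair(u, w));    // undirected graph assumed
    }

    int source;
    scanf("%d", &source);

    const long long INF = LLONG_MAX / 4;
    vector<long long> D(vertices + 1, INF);
    D[source] = 0;

    // min-heap of (distance, vertex); priority_queue is a max-heap by
    // default, so greater<> is used to flip the ordering
    priority_queue<pair<long long, int>,
                   vector<pair<long long, int>>,
                   greater<pair<long long, int>>> Q;
    Q.push(make_pair(0LL, source));

    while (!Q.empty()) {
        long long d = Q.top().first;
        int u = Q.top().second;
        Q.pop();

        if (d > D[u]) continue;                         // stale entry, skip it

        for (const auto &edge : adjacencyList[u]) {
            int v = edge.first, w = edge.second;
            if (D[u] + w < D[v]) {                      // relaxation
                D[v] = D[u] + w;
                Q.push(make_pair(D[v], v));             // re-insert instead of decrease-key
            }
        }
    }

    for (int v = 1; v <= vertices; ++v) {
        printf("Vertex %d -> shortest distance %lld\n", v, D[v]);
    }

    return 0;
}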

The complexity of this version is actually O(|V| + |E|) × O(log |V|), which finally makes it O(|E| log |V|). Dijkstra’s Algorithm can be improved further by using a Fibonacci Heap as the Priority Queue, where the complexity reduces to O(|V| log |V| + |E|), because the Fibonacci Heap takes amortized constant time for the Decrease Key operation. But the Fibonacci Heap is an incredibly advanced and difficult data structure to code. We’ll talk about that implementation later.

This is the Dijkstra’s Algorithm. If you don’t understand anything or if you have any doubts, feel free to comment them. I really hope my post has helped you in understanding the Dijkstra’s Algorithm. If it did, let me know by commenting! I tried my best to keep it as simple as possible. Keep practising and… Happy Coding…! 🙂

Depth First Search Algorithm

Hello people…! In this post I will talk about the other Graph Search Algorithm, the Depth First Search Algorithm. Depth First Search is different by nature from Breadth First Search. As the name “Depth” suggests, we pick up a vertex S and see all the other vertices that can possibly be reached from it, and we do that for all the vertices in V. Depth First Search can be used to search over all the vertices, even for a disconnected graph. Breadth First Search can also do this, but DFS is more often used for it. Depth First Search is used to solve puzzles! You can solve a given maze or even create your own maze with DFS. DFS is also used in Topological Sorting, which is the sorting of things according to a hierarchy. It is also used to tell if a cycle exists in a given graph. There are many other applications of DFS and you can do a whole lot of cool things with it. So, let’s get started…!

The way the Depth First Search goes is really like solving a maze. When you see a maze in a newspaper or a magazine or anywhere else, the way you solve it is to take a path and go through it. If you find any junction or crossroad, where you have a choice of paths, you mark that junction, take up one path and traverse the whole route in your brain. If you hit a dead end, you realize that this path leads you nowhere, so you come back to the junction you marked before and take up another path from that junction. You check whether that is also a dead end; otherwise you continue to explore the puzzle, as this route contributes to reaching your destination. Well, at least that’s how I solve a maze. But… I hope you get the idea. You pick up a path and you explore it as much as possible. When you can’t travel any further and you haven’t yet reached your destination, you come back to the starting point and pick another path.

In code, you do this by recursion, because of the very nature of recursion: your recursion stack grows and grows and eventually shrinks back to an empty stack. If you think for a while, you will notice that the way of traversing described above covers only the vertices reachable from the given vertex S. Such a traversal can be implemented using a single recursive function. But we want to explore the whole graph, so we will use another function to do that for us. So, DFS is a two-function thing. We will come back to the coding part later. Now, DFS too can be listed out as a step-by-step process –

  • For a given set of vertices V = {V1, V2, V3…}, start with V1, and explore all the vertices that can possibly be reached from V1.
  • Then go to the next vertex V2; if it hasn’t been visited, explore all the vertices reachable from V2, otherwise move on to V3.
  • Similarly, go on picking up the vertices one-by-one, exploring as much as possible from each vertex that hasn’t been visited yet.

Now, how do you tell if a vertex wasn’t visited earlier…? A vertex is unvisited if it has no parent vertex assigned yet. So what you have to do when you visit a vertex is –

  • Set the parent vertex of the current vertex to be the vertex from which you reached it. We will use an array to store the parent vertices, initialised to -1, indicating that the vertices were never visited.
  • When you are starting your exploration from a vertex, say, V1, we assign parent[V1] = 0, because it has no parent. If there is an edge from V1 to V2, we say, parent[V2] = V1.

Let’s look at an example and see how DFS really works –

Depth First Search Algorithm Step-by-Step

The working of DFS is pretty clear from the picture. Notice how we would assign the parent vertices to each vertex. Once we have visited all the vertices reachable from a given initial vertex V1, we backtrack to V1. What we really mean by “backtrack” here is that the recursion control gradually comes back to the function call that started exploring from V1. We will understand this once we put DFS in code. And one more thing: whenever we had a choice of going to two vertices from one vertex, we preferred going to the vertex with the greater number. Why is this…? It is because we will be following Head Insertion in our Adjacency Lists to have an O(1) Insertion operation. Now, assuming that we insert the vertices from Vertex 1 to Vertex 10, the greater numbered vertices will end up being at the front of the Linked Lists. Take a moment to understand this. Even if you don’t understand, it’s ok…! You will get the hang of it later. The real reason I did it this way was to explain to you the concept of Edge Classification.

Edge Classification in Graphs

As you can see, there are three types of edges in the figure, but in fact there are four. They are –

  • Tree Edge – These are the edges through which we have traversed all the vertices of the graph by DFS. More clearly, these are the edges that represent the parent-child relationship. That is, the tree edge Vertex 1 → Vertex 3 says that Vertex 1 is the parent of Vertex 3, just like the parent-child relationship in a tree. Why this is called a “tree” edge is that these edges together form a “tree”, or rather a “forest”.
  • Forward Edge – This is an edge which points from a vertex which is higher in the hierarchy of the parent-child relationship to a vertex which is its descendant. Observe that Vertex 2 is a descendant of Vertex 1, so the edge Vertex 1 → Vertex 2 is a forward edge.
  • Backward Edge – This is the opposite of forward edge. It points from a descendant Vertex to an ancestor Vertex. In the above diagram, the edge, Vertex 4 → Vertex 3 is a backward edge.
  • Cross Edge – Every other edge is a cross edge. We don’t have a cross edge in the above diagram, but a cross edge can arise when there is an edge between siblings, two vertices that have the same parent.

Cycle Detection – Using DFS, we can detect if there are any cycles in the given graph. How…? This is very simple: if there exists at least one backward edge, then the given graph has a cycle. Telling how many cycles there are for a given number of backward edges is a challenge, but the point is that if there is a backward edge, there is a cycle. This follows from the very definition of the backward edge: it is an edge from a descendant to an ancestor, and there will surely be a path of tree edges from that ancestor back down to the descendant, which together form a cycle. The picture below should make things clear.

Cycle Detection by Edge Classification

But, how do we detect backward edges in code…? This is a little tricky. We can keep a boolean array of size |V| which holds the status of each vertex, namely whether it is currently in the recursion stack or not. If the vertex is in the recursion stack, it means that the vertex is an ancestor of the current vertex, so the edge to it is a backward edge.

Now try to code DFS; it is basically a recursive process. If you are good with recursion, I’m sure you can get this. Nonetheless, a reference sketch is given below –

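This minimal C++ sketch assumes a directed graph read as an edge list from standard input, with vertices numbered 1 to |V|. It uses the two functions described above; the status array marks the vertices currently on the recursion stack, which is how backward edges (and hence cycles) are detected –

#include <cstdio>
#include <vector>
using namespace std;

vector<vector<int>> adjacencyList;    // adjacencyList[u] holds the neighbours of u
vector<int> parent;                   // parent[v] = -1 -> not visited, 0 -> root of a DFS tree
vector<bool> status;                  // true while the vertex is on the recursion stack
bool cycleFound = false;

void depth_first_search_explore(int u) {
    status[u] = true;                         // u is now on the recursion stack

    for (int v : adjacencyList[u]) {
        if (parent[v] == -1) {                // unvisited -> tree edge
            parent[v] = u;
            depth_first_search_explore(v);
        } else if (status[v]) {               // v is an ancestor still on the stack -> backward edge
            cycleFound = true;
        }
    }

    status[u] = false;                        // done with u, remove it from the recursion stack
}

void depth_first_search(int vertices) {
    for (int v = 1; v <= vertices; ++v) {
        if (parent[v] == -1) {                // start a new DFS tree from every unvisited vertex
            parent[v] = 0;
            depth_first_search_explore(v);
        }
    }
}

int main() {
    int vertices, edges;
    scanf("%d %d", &vertices, &edges);

    adjacencyList.assign(vertices + 1, vector<int>());
    parent.assign(vertices + 1, -1);
    status.assign(vertices + 1, false);

    for (int i = 0; i < edges; ++i) {
        int u, v;
        scanf("%d %d", &u, &v);
        // head insertion, mirroring the linked-list behaviour described above
        // (a real linked list would make this O(1))
        adjacencyList[u].insert(adjacencyList[u].begin(), v);
    }

    depth_first_search(vertices);

    for (int v = 1; v <= vertices; ++v) {
        printf("Vertex %d -> parent %d\n", v, parent[v]);
    }
    printf("%s\n", cycleFound ? "The graph has a cycle." : "The graph has no cycle.");

    return 0;
}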

Just remember that the parent array is used to indicate whether a vertex has been visited or not, and it also records the path through which we came. The status array, on the other hand, indicates whether a vertex is currently in the recursion stack. This is used to detect backward edges, as discussed: if a vertex has an edge which points to a vertex in the recursion stack, then it is a backward edge.

This is the Depth First Search Algorithm. It has a time complexity of O(|V| + |E|), just like the Breadth First Search. This is because we visit every vertex once, and we cover all the edges in AdjacencyList[Vi], for all Vi ∈ V, which takes O(|E|) time; this is the for loop in our depth_first_search_explore() function. DFS is not very difficult, you just need to have experienced hands in recursion. You might end up getting stuck with some bug. But it is worth spending time with the bugs, because they make you think in the right direction. So if you are not getting the code, you just have to try harder. But if you are having any doubts, feel free to comment them…! Keep practising… Happy Coding…! 🙂

Breadth First Search Algorithm

Hello people…! In this post I will explain one of the most widely used Graph Search Algorithms, the Breadth First Search (BFS) Algorithm. Once you have learned this, you will have gained a new weapon in your arsenal, and you can start solving a good number of Graph Theory related competitive programming questions.

What we do in a BFS is a simple step-by-step process, which is –

  1. Start from a vertex S. Let this vertex be at what is called “Level 0”.
  2. Find all the other vertices that are immediately accessible from this starting vertex S, i.e., they are only a single edge away (the adjacent vertices).
  3. Mark these adjacent vertices to be at “Level 1”.
  4. There is a challenge here: you might come back to the same vertex due to a loop or a ring in the graph, and if this happens your BFS would keep going in circles. So, you will go only to those vertices which do not yet have their “Level” set.
  5. Mark which is the parent vertex of the current vertex you’re at, i.e., the vertex from which you accessed the current vertex. Do this for all the vertices at Level 1.
  6. Now, find all those vertices that are a single edge away from all the vertices which are at “Level 1”. These new set of vertices will be at “Level 2”.
  7. Repeat this process until you run out of graph.

This might sound heavy… Well, at least it would sound heavy to me if I heard it for the first time…! Many questions may pop up in your mind, like, “How are we gonna do all that…?!”. Well, for now, focus on the concept; we will talk about the code later. And remember, we are talking about an Undirected Graph here. We will talk about Directed Graphs later. To understand the above stated steps, examine the picture below –

Breadth First Search Algorithm – Step-by-Step

The sketch clearly shows you how we explore the vertices adjacent to a vertex and mark their levels. If you have noticed, whenever there were two ways of accessing the same vertex from multiple vertices of the same Level (in the diagram, Vertex 3 was accessible from Vertex 2 and Vertex 8), we preferred its parent to be Vertex 8, the larger number. Why is that so…? We will learn that in a short moment.

The concept that I wish to emphasize from the above picture is how BFS can tell you the shortest path from a given vertex (the start vertex) to all the other vertices: the number of edges, or the “length of the path”, is simply the Level of that vertex itself. This is a very important feature of BFS; you will understand it more clearly when I explain it with the help of an example in a different post.

Now, having got some knowledge about the BFS, it is a good thing to do an exercise on this topic to really get the flow going. Try applying BFS on the Graph given. All you have to do is to implement the step-by-step process and get that final figure which I got above. And make sure you label the Levels and Parents for each vertex in the end.

Breadth First Search Practise Question

Now, we come to the code part of the Breadth First Search. We use the same Adjacency List that we used in our discussion of Graph Theory Basics. Coming back to our BFS discussion, the level of each vertex is stored in a separate array, and so is the parent of each vertex. These arrays are initialized to appropriate values. Now recall our step-by-step process that was stated earlier. Try to put that in terms of pseudocode and then proper code. Take a while to think how you would do that. If you could code it, you are marvelous…! 😀 … If not, don’t fret, there is a reference sketch below…!

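The sketch below is a minimal C++ version using a queue. It assumes the input gives the vertex and edge counts, a list of undirected edges, and then the start vertex; a real linked list would make the head insertion O(1), but a vector is used here to keep the sketch short –

#include <cstdio>
#include <vector>
#include <queue>
using namespace std;

int main() {
    int vertices, edges;
    scanf("%d %d", &vertices, &edges);

    // adjacency list with "head insertion": the newest neighbour goes to the
    // front, so for input given in increasing vertex order the larger vertex
    // numbers are met first while exploring
    vector<vector<int>> adjacencyList(vertices + 1);
    for (int i = 0; i < edges; ++i) {
        int u, v;
        scanf("%d %d", &u, &v);
        adjacencyList[u].insert(adjacencyList[u].begin(), v);
        adjacencyList[v].insert(adjacencyList[v].begin(), u);   // undirected graph
    }

    int start;
    scanf("%d", &start);

    vector<int> level(vertices + 1, -1);     // -1 -> level not set yet, i.e., not visited
    vector<int> parent(vertices + 1, -1);

    queue<int> Q;
    level[start] = 0;                        // the start vertex is at Level 0
    parent[start] = 0;                       // and it has no parent
    Q.push(start);

    while (!Q.empty()) {
        int u = Q.front();
        Q.pop();

        for (int v : adjacencyList[u]) {
            if (level[v] == -1) {            // visit only vertices whose Level is not set
                level[v] = level[u] + 1;
                parent[v] = u;
                Q.push(v);
            }
        }
    }

    for (int v = 1; v <= vertices; ++v) {
        printf("Vertex %d -> level %d, parent %d\n", v, level[v], parent[v]);
    }

    return 0;
}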

Try using the practice example given above as the sample input to your code, so that you can easily compare the results. Now, the important point here is that when we insert a vertex into the Adjacency List, we follow Head Insertion to get O(1) insertion. What happens due to this is that the linked list ends up holding the numbers in descending order.

So, when we are applying BFS for adjacent vertices of a given node, the Vertex with the higher number is met first. And then we explore starting by that higher numbered Vertex. This is why whenever we had a choice in approaching a vertex, we preferred approaching from the vertex which had the bigger number, why…? Due to Head Insertion…! If you had used Tail Insertion, this would be reversed.

Other Implementations of BFS

This is the overview of the Breadth First Search Algorithm. It has a computational complexity of O(|V| + |E|), which is pretty fast. But why O(|V| + |E|)…? If you think about the algorithm a little bit, we cover all the |V| vertices level-by-level, checking all the |E| edges twice (for an Undirected Graph). Once the level is set for a subset of vertices, we don’t visit them again. Put an itty-bitty thought into it and I’m sure you’ll get the idea…! 😉 … But note that if you code BFS without a Queue (say, by repeatedly scanning the level array to find the vertices of the current level), it runs slower than O(|V| + |E|); to achieve the proper complexity we must use a Queue, as in the sketch above.

We can use BFS in the following scenarios –

  • Shortest Path or Quickest Path (if all edges have equal weight).
  • Finding the Length of Shortest Path between two Vertices.

With the knowledge of BFS you can start solving Graph Theory related problems. It really depends on your logic how you will apply the BFS to the given problem. And there is only one way to develop it… Practice…! So, keep practicing… Feel free to comment your doubts..! Happy Coding…! 🙂