Introduction to Depth First Search
In the previous two posts we introduced two uninformed search algorithms (also known as blind search): the breadth first search algorithm and the uniform cost search algorithm. In this post we introduce another uninformed search algorithm, known as the depth first search algorithm.
Depth First Search Algorithm
In DFS, we always expand the deepest node in the current frontier of the search tree. The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors. As those nodes are expanded, they are dropped from the frontier, and the search then backs up to the next deepest node that still has unexplored successors.
In the above figure, we simulate depth first search on a binary tree. The unexplored region is shown in light gray. Explored nodes with no descendants in the frontier are removed from memory. Nodes at depth \(3\) have no successors, and \(M\) is the only goal node.
DFS is an instance of the graph search algorithm, which is discussed in detail in Searching For Solutions. In depth first search we use a LIFO queue (also known as a stack) to implement the frontier. A LIFO queue means that the most recently generated node is chosen for expansion. This must be the deepest unexpanded node, because it is one deeper than its parent, which in turn was the deepest unexpanded node when it was selected.
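The stack-based graph search described above can be sketched as follows. This is a minimal illustration, not a definitive implementation; the graph, start node, and goal node are hypothetical examples chosen for demonstration.

```python
# Sketch of depth first graph search with an explicit LIFO stack as the
# frontier and an explored set to avoid repeated states.

def dfs_graph_search(graph, start, goal):
    """Return a path from start to goal, or None if the goal is unreachable."""
    frontier = [(start, [start])]       # LIFO stack of (node, path so far)
    explored = set()                    # states already expanded
    while frontier:
        node, path = frontier.pop()     # most recently generated node
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for child in graph.get(node, []):
            if child not in explored:
                frontier.append((child, path + [child]))
    return None

# Hypothetical example graph (adjacency lists), not from the article.
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "E": ["G"]}
print(dfs_graph_search(graph, "A", "G"))  # ['A', 'B', 'E', 'G']
```

Because the stack pops the most recently pushed child first, the search dives down one branch completely before backing up, exactly as described above.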
As an alternative to the GRAPHSEARCH style implementation, it is common to implement depth first search with a recursive function that calls itself on each of its children in turn. We will discuss the recursive implementation of depth first search in our next post.
Performance of Depth First Search Algorithm

Completeness
The completeness of DFS depends strongly on whether the graph search or the tree search version is used.
 The graph search version, which avoids repeated states and redundant paths, is complete in finite state spaces, because it will eventually expand every node.
 The tree search version is not complete, but it can be modified at no extra memory cost so that it checks new states against those on the path from the root to the current node. This avoids infinite loops in finite state spaces but does not avoid the proliferation of redundant paths.
 In infinite state spaces, both versions fail if an infinite non-goal path is encountered.
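The path-checking modification mentioned above can be sketched as a recursive tree search that compares each new state against the path from the root. The cyclic graph below is an illustrative assumption; the point is that plain tree search would loop on it forever, while the path check does not.

```python
# Sketch of tree search DFS with a path check: a child is expanded only
# if it does not already appear on the path from the root. This avoids
# infinite loops in finite state spaces without any extra memory cost.

def dfs_path_check(graph, node, goal, path=None):
    path = (path or []) + [node]
    if node == goal:
        return path
    for child in graph.get(node, []):
        if child not in path:            # the path check
            result = dfs_path_check(graph, child, goal, path)
            if result is not None:
                return result
    return None

# A cyclic graph: without the path check, tree search would bounce
# between A and B forever instead of reaching C.
cyclic = {"A": ["B"], "B": ["A", "C"], "C": []}
print(dfs_path_check(cyclic, "A", "C"))  # ['A', 'B', 'C']
```

Note that the check only guards against loops along the current path; redundant paths that reach the same state through different routes are still explored, as the text says.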

Optimality
DFS is not optimal, for the same reason that it is not complete. For example, in the above figure, DFS will explore the entire left subtree even if node \(C\) is a goal node. If node \(J\) were also a goal node, then DFS would return it as the solution instead of \(C\), which is the better solution; hence DFS is not optimal.

Time Complexity
The time complexity of depth first graph search is bounded by the size of the state space (which may be infinite). Depth first tree search, on the other hand, may generate all of the \(\mathcal O(b^m)\) nodes in the search tree, where \(m\) is the maximum depth of any node; this can be much greater than the size of the state space. Note that \(m\) itself can be much larger than \(d\) (the depth of the shallowest solution) and is infinite if the tree is unbounded.
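To get a feel for how much larger \(\mathcal O(b^m)\) can be than the number of nodes up to the shallowest solution, here is a quick back-of-the-envelope calculation. The values of \(b\), \(d\), and \(m\) are illustrative assumptions, not from the article.

```python
# Illustrative numbers: branching factor b = 10, shallowest solution
# depth d = 2, maximum tree depth m = 5.
b, d, m = 10, 2, 5

# Nodes depth first tree search may generate: 1 + b + b^2 + ... + b^m
tree_nodes = sum(b**i for i in range(m + 1))

# Nodes down to the shallowest solution: 1 + b + ... + b^d
shallow_nodes = sum(b**i for i in range(d + 1))

print(tree_nodes)     # 111111
print(shallow_nodes)  # 111
```

Even for these small parameters, DFS may generate a thousand times more nodes than lie within the shallowest solution depth.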

Space Complexity
For the graph search version of DFS, the space requirement is the same as for BFS. For a state space with branching factor \(b\) and maximum depth \(m\), the tree search version of depth first search requires storage of only \(\mathcal O(bm)\) nodes. Due to this reduction in space complexity, DFS tree search is the basic workhorse of many areas of AI, including constraint satisfaction, propositional satisfiability, and logic programming.
Backtracking Search
Backtracking search is a variant of depth first search that reduces the memory requirement still further. In backtracking, only one successor is generated at a time rather than all successors; each partially expanded node remembers which successor to generate next. In this way only \(\mathcal O(m)\) memory is needed rather than \(\mathcal O(bm)\). Backtracking search facilitates yet another memory-saving and time-saving trick: generating a successor by modifying the current state description directly, rather than copying it first. This reduces the memory requirement to just one state description and \(\mathcal O(m)\) actions. For this to work, we must be able to undo each modification when we go back to generate the next successor.
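The modify-and-undo idea can be sketched with a small backtracking solver. The problem chosen here, placing \(n\) queens column by column, is an illustrative assumption, not from the article; what matters is that a single shared state is modified in place and each modification is undone when the search backs up.

```python
# Sketch of backtracking search with a single mutable state description.
# cols[r] is the column of the queen in row r; appending applies a
# modification and popping undoes it, so memory is one state plus O(m)
# pending actions on the recursion stack.

def solve_queens(n):
    cols = []                                   # the single, shared state
    def backtrack(row):
        if row == n:
            return list(cols)                   # copy the finished solution
        for col in range(n):
            # legal if no queen shares this column or a diagonal
            if all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols)):
                cols.append(col)                # apply the modification
                result = backtrack(row + 1)
                if result is not None:
                    return result
                cols.pop()                      # undo it when backing up
        return None
    return backtrack(0)

print(solve_queens(4))  # [1, 3, 0, 2]
```

Because every `append` is matched by a `pop` on backtrack, the search never copies the board; this is exactly the undo requirement stated above.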
This article is contributed by Ram Kripal. If you like eLgo Academy and would like to contribute, you can mail your article to admin@elgoacademy.org. See your article appearing on the eLgo Academy page and help others. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.