I am trying to determine the maximum memory usage of the pending-nodes data structure (the stack or queue) for both traversals: BFS and (preorder) DFS.

Since BFS and DFS both track visited nodes while traversing a graph (so no node is expanded twice and no cycle is followed), we can analyze the problem in terms of trees instead of graphs, taking the starting node as the root as usual.

I first assume that the traversed structure is a complete tree (all leaves at the same depth) with height $h$, $n$ nodes, and a homogeneous degree $d$ (every non-leaf node has exactly $d$ children).

Of course, since the tree is complete, $n$ can be computed from $h$ and $d$:

$$ n = \frac{d^{h+1} - 1}{d - 1} $$
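As a quick sanity check, the closed form can be compared against a level-by-level count (a small Python sketch; the function names are my own):

```python
def complete_tree_size(d, h):
    # Closed form: n = (d^(h+1) - 1) / (d - 1)
    return (d ** (h + 1) - 1) // (d - 1)

def complete_tree_size_by_levels(d, h):
    # Level i of a complete d-ary tree holds d^i nodes (i = 0 .. h).
    return sum(d ** i for i in range(h + 1))

assert complete_tree_size(4, 20) == complete_tree_size_by_levels(4, 20)
print(complete_tree_size(4, 20))  # 1466015503701, i.e. ~1.4660155e12
```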

Under this assumption, the worst case for DFS occurs while visiting the deepest non-leaf node of the first branch of the tree: at each of the levels above it, visiting a node pushed its $d$ children and popped one of them, leaving $d - 1$ pending nodes per level on the stack, while at that deepest non-leaf node all $d$ children have just been pushed.

If you were in some other branch rather than the first one, some ancestor level would have fewer than $d - 1$ pending children on the stack, so the stack would hold fewer nodes.

The maximum memory consumption of DFS for a complete tree with homogeneous degree is therefore ($ms$ stands for *maximum space*):

$$ ms_{DFS} = h \cdot (d - 1) + 1 $$

The final $+ 1$ represents the additional child of the last non-leaf node. For example, a tree with $d = 4$ and $h = 20$ would require a maximum DFS stack of:

$$ ms_{DFS} = 20 \cdot 3 + 1 = 61 \text{ nodes} $$

Considering that this tree would have $n \approx 1.4660155 \times 10^{12}$ nodes, that is a more than acceptable amount. This is the advantage of logarithmic space complexity ($h = \lfloor \log_d((d - 1)\,n) \rfloor$).
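To double-check the formula, one can simulate a preorder DFS over an implicit complete $d$-ary tree, tracking only depths, and record the peak stack size (my own sketch; small parameters, since running it on the $h = 20$ tree would visit ~$10^{12}$ nodes):

```python
def dfs_peak_stack(d, h):
    # Iterative preorder DFS on an implicit complete d-ary tree of height h.
    # Only node depths are stored; node identities are irrelevant for stack size.
    peak, stack = 0, [0]
    while stack:
        peak = max(peak, len(stack))
        depth = stack.pop()
        if depth < h:                      # non-leaf: push all d children
            stack.extend([depth + 1] * d)
    return peak

for d in (2, 3, 4):
    for h in (1, 2, 3, 4, 5):
        assert dfs_peak_stack(d, h) == h * (d - 1) + 1
print(dfs_peak_stack(4, 5))  # 16 = 5*(4-1) + 1
```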

BFS, however, has exponential space complexity: its worst case is when all the leaves have been discovered (enqueued) but none of them visited yet, while every other node has already been processed, so the complete last level is pending in the queue:

$$ ms_{BFS} = d^h \text{ nodes} $$

which in our example is about $1.0995116 \times 10^{12}$ nodes.
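The same kind of simulation confirms the BFS peak (again my own sketch, on small parameters):

```python
from collections import deque

def bfs_peak_queue(d, h):
    # BFS on an implicit complete d-ary tree, storing only node depths.
    peak, queue = 0, deque([0])
    while queue:
        peak = max(peak, len(queue))
        depth = queue.popleft()
        if depth < h:                      # non-leaf: enqueue all d children
            queue.extend([depth + 1] * d)
    return peak

for d in (2, 3, 4):
    for h in (1, 2, 3, 4, 5):
        assert bfs_peak_queue(d, h) == d ** h
print(bfs_peak_queue(4, 5))  # 1024 = 4**5
```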

My problem now is relaxing the completeness constraint: $d$ is no longer a homogeneous degree but an average degree (which may now be fractional), and the tree may be unbalanced, possibly even degenerating into a list. As a consequence, the number of nodes is unconstrained, so there is no longer a relation between `d` and `h` as before.

If $d$ is an average degree and $n$ an arbitrary number of nodes, I tried to model an upper bound on the space by first modeling a complete tree with homogeneous degree $\lfloor d \rfloor$, and then adding children to the last leaf until the average degree reaches $d$ (I assumed the resulting number of nodes should still be $n$, but I'm not sure about that; I even tried to compute, for given $d$ and $n$, a lower and an upper bound for the height of the tree).

Since $d$ is an average, whenever a node has more than $d$ children it is because some other node has fewer than $d$ children. The idea was therefore to find the worst case for DFS and BFS by removing children from one node and reattaching the cut branch as a child of another node, or, more generally, to find the tightest possible upper bound on memory usage, but I could not find a way to do it.

The thing is, if you repeatedly apply this transformation by moving sibling branches down to the lowest levels, you end up with many ancestor or sibling paths that are mere lists, which are quickly removed from the stack/queue. So there must be some tree shape at which you cannot make the space consumption any worse. However, I suspect that this "worst tree" may be different for DFS and for BFS.
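One way to probe this empirically, feasible only for tiny $n$, is to enumerate all ordered rooted trees with $n$ nodes and take the maximum peak for each traversal. This brute force is my own sketch (any average-degree constraint would have to be added as a filter over the enumerated trees); in this unconstrained experiment the star, a root with $n - 1$ children, happens to maximize both peaks:

```python
from collections import deque

def plane_trees(n):
    # All ordered rooted trees with n nodes, encoded as tuples of subtrees.
    # Standard decomposition: first subtree (k nodes) + rest of the root's children.
    if n == 1:
        return [()]
    out = []
    for k in range(1, n):
        for first in plane_trees(k):
            for rest in plane_trees(n - k):
                out.append((first,) + rest)
    return out

def dfs_peak(tree):
    peak, stack = 0, [tree]
    while stack:
        peak = max(peak, len(stack))
        stack.extend(stack.pop())       # push all children of the popped node
    return peak

def bfs_peak(tree):
    peak, queue = 0, deque([tree])
    while queue:
        peak = max(peak, len(queue))
        queue.extend(queue.popleft())   # enqueue all children
    return peak

n = 7
trees = plane_trees(n)                  # Catalan(n - 1) trees
print(max(dfs_peak(t) for t in trees), max(bfs_peak(t) for t in trees))
```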

Is it even possible to carry out this upper-bound calculation (or compute the exact amount)? Or is the worst case simply the perfectly balanced tree?