Time complexity is a concept used to measure how the run time of an algorithm increases with the size of the input data. Understanding time complexity is crucial for accurately assessing the efficiency of an algorithm.
The runtime gives an intuitive sense of an algorithm's efficiency. How can we accurately estimate the runtime of a piece of code?
1. **Determining the Running Platform**: This includes hardware configuration, programming language, system environment, etc., all of which can affect the efficiency of code execution.
2. **Evaluating the Run Time for Various Computational Operations**: For instance, an addition operation `+` might take 1 ns, a multiplication operation `*` might take 10 ns, a print operation `print()` might take 5 ns, etc.
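Using these hypothetical per-operation costs, the runtime of a snippet can be estimated by counting operations. A minimal sketch (the timings are the illustrative values above, not real measurements):

```python
def estimate_runtime_ns(n: int) -> int:
    """Estimate the runtime of a loop doing one `+` and one `print()` per iteration."""
    ADD_NS = 1    # hypothetical cost of an addition, from the text
    PRINT_NS = 5  # hypothetical cost of a print
    # the loop body runs n times, so the estimated time grows linearly with n
    return n * (ADD_NS + PRINT_NS)

print(estimate_runtime_ns(1000))  # 6000 ns under these assumptions
```

The exact constants do not matter; what matters is that the estimated time grows with the input size $n$, which is precisely what time complexity captures.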
The basic operations on graphs can be divided into operations on "edges" and operations on "vertices". Under the two representation methods of "adjacency matrix" and "adjacency list", the implementations are different.
## 9.2.1 Implementation based on adjacency matrix
Trees represent a "one-to-many" relationship, while graphs have a higher degree of freedom and can represent any "many-to-many" relationship. Therefore, we can consider trees as a special case of graphs. Clearly, **tree traversal operations are also a special case of graph traversal operations**.
Both graphs and trees require the application of search algorithms to implement traversal operations. Graph traversal can be divided into two types: <u>Breadth-First Search (BFS)</u> and <u>Depth-First Search (DFS)</u>.
### 1. Algorithm implementation
BFS is usually implemented with the help of a queue, as shown in the code below. The queue follows a "first in, first out" rule, which aligns with the BFS idea of traversing "from near to far".
1. Add the starting vertex `startVet` to the queue and start the loop.
2. In each iteration of the loop, pop the vertex at the front of the queue and record it as visited, then add all adjacent vertices of that vertex to the back of the queue.
To prevent revisiting vertices, we use a hash set `visited` to record which nodes have been visited.
```
[class]{}-[func]{graphBFS}
```
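The `graphBFS` routine can be sketched in Python as follows; this is an illustrative sketch rather than the book's listing, and it assumes the graph is given as an adjacency list in a plain dict:

```python
from collections import deque

def graph_bfs(adj: dict, start) -> list:
    """Breadth-first traversal; returns vertices in visit order."""
    visited = {start}          # hash set to avoid revisiting vertices
    order = []
    queue = deque([start])     # FIFO queue drives the "near to far" order
    while queue:
        vet = queue.popleft()  # pop the vertex at the front of the queue
        order.append(vet)      # record it as visited
        for nxt in adj[vet]:   # enqueue all unvisited adjacent vertices
            if nxt not in visited:
                visited.add(nxt)
                queue.append(nxt)
    return order

adj = {0: [1, 3], 1: [0, 2], 2: [1], 3: [0]}
print(graph_bfs(adj, 0))  # [0, 1, 3, 2]
```

The `visited` set is what prevents a vertex reachable by multiple edges from being enqueued twice.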
The code is relatively abstract; you may want to compare it with Figure 9-10 to deepen your understanding.
=== "<1>"
![Steps of breadth-first search of a graph](graph_traversal.assets/graph_bfs_step1.png){ class="animation-figure" }
!!! question "Is the sequence of breadth-first traversal unique?"
Not unique. Breadth-first traversal only requires traversing in a "from near to far" order, **and the traversal order of multiple vertices at the same distance can be arbitrarily shuffled**. For example, in Figure 9-10, the visit order of vertices $1$ and $3$ can be swapped, as can the order of vertices $2$, $4$, and $6$.
### 2. Complexity analysis
## 9.3.2 Depth-first search
**Depth-first search is a traversal method that prioritizes going as far as possible and then backtracks when no further path is available**. As shown in Figure 9-11, starting from the top left vertex, visit some adjacent vertex of the current vertex until no further path is available, then return and continue until all vertices are traversed.
![Depth-first traversal of a graph](graph_traversal.assets/graph_dfs.png){ class="animation-figure" }
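A recursive sketch of depth-first search in Python (illustrative only; it assumes the same dict-based adjacency list as the BFS sketch):

```python
def graph_dfs(adj: dict, start) -> list:
    """Depth-first traversal; goes as deep as possible, then backtracks."""
    visited, order = set(), []

    def dfs(vet):
        visited.add(vet)
        order.append(vet)
        for nxt in adj[vet]:        # visit some adjacent vertex...
            if nxt not in visited:  # ...and recurse until no path remains
                dfs(nxt)            # backtracking happens when dfs() returns

    dfs(start)
    return order

adj = {0: [1, 3], 1: [0, 2], 2: [1], 3: [0]}
print(graph_dfs(adj, 0))  # [0, 1, 2, 3]
```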
A <u>hash table</u>, also known as a <u>hash map</u>, is a data structure that establishes a mapping between keys and values, enabling efficient element retrieval. Specifically, when we input a `key` into the hash table, we can retrieve the corresponding `value` in $O(1)$ time complexity.
As shown in Figure 6-1, given $n$ students, each student has two data fields: "Name" and "Student ID". If we want to implement a query function that takes a student ID as input and returns the corresponding name, we can use the hash table shown in Figure 6-1.
In addition to hash tables, arrays and linked lists can also be used to implement query functionality, but the time complexity is different. Their efficiency is compared in Table 6-1:
- **Inserting an element**: Simply append the element to the tail of the array (or linked list). The time complexity of this operation is $O(1)$.
- **Searching for an element**: As the array (or linked list) is unsorted, searching for an element requires traversing through all of the elements. The time complexity of this operation is $O(n)$.
- **Deleting an element**: To remove an element, we first need to locate it and then delete it from the array (or linked list). The time complexity of this operation is $O(n)$.
<p align="center"> Table 6-1 Comparison of time efficiency for common operations </p>
@ -30,7 +30,7 @@ In addition to hash tables, arrays and linked lists can also be used to implemen
</div>
As observed, **the time complexity for operations (insertion, deletion, searching, and modification) in a hash table is $O(1)$**, which is highly efficient.
## 6.1.1 Common operations of a hash table
Common operations of a hash table include: initialization, querying, adding key-value pairs, and removing key-value pairs. Example code is as follows:
unordered_map<int, string> map;
/* Add operation */
// Add key-value pair (key, value) to the hash table
map[12836] = "Xiao Ha";
map[15937] = "Xiao Luo";
map[16750] = "Xiao Suan";
Map<Integer, String> map = new HashMap<>();
/* Add operation */
// Add key-value pair (key, value) to the hash table
map.put(12836, "Xiao Ha");
map.put(15937, "Xiao Luo");
map.put(16750, "Xiao Suan");
/* Initialize hash table */
Dictionary<int, string> map = new() {
/* Add operation */
// Add key-value pair (key, value) to the hash table
{ 12836, "Xiao Ha" },
{ 15937, "Xiao Luo" },
{ 16750, "Xiao Suan" },
hmap := make(map[int]string)
/* Add operation */
// Add key-value pair (key, value) to the hash table
hmap[12836] = "Xiao Ha"
hmap[15937] = "Xiao Luo"
hmap[16750] = "Xiao Suan"
var map: [Int: String] = [:]
/* Add operation */
// Add key-value pair (key, value) to the hash table
map[12836] = "Xiao Ha"
map[15937] = "Xiao Luo"
map[16750] = "Xiao Suan"
/* Initialize hash table */
const map = new Map<number, string>();
/* Add operation */
// Add key-value pair (key, value) to the hash table
map.set(12836, 'Xiao Ha');
map.set(15937, 'Xiao Luo');
map.set(16750, 'Xiao Suan');
Map<int, String> map = {};
/* Add operation */
// Add key-value pair (key, value) to the hash table
map[12836] = "Xiao Ha";
map[15937] = "Xiao Luo";
map[16750] = "Xiao Suan";
let mut map: HashMap<i32, String> = HashMap::new();
/* Add operation */
// Add key-value pair (key, value) to the hash table
map.insert(12836, "Xiao Ha".to_string());
map.insert(15937, "Xiao Luo".to_string());
map.insert(16750, "Xiao Suan".to_string());
First, let's consider the simplest case: **implementing a hash table using only one array**.
So, how do we locate the corresponding bucket based on the `key`? This is achieved through a <u>hash function</u>. The role of the hash function is to map a larger input space to a smaller output space. In a hash table, the input space consists of all the keys, and the output space consists of all the buckets (array indices). In other words, given a `key`, **we can use the hash function to determine the storage location of the corresponding key-value pair in the array**.
Given a `key`, the calculation process of the hash function consists of the following two steps:
1. Calculate the hash value by using a certain hash algorithm `hash()`.
2. Take the modulus of the hash value with the bucket count (array length) `capacity` to obtain the array `index` corresponding to the key.
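The two steps can be sketched in Python (a minimal sketch; the built-in `hash()` stands in for whatever hash algorithm the table uses):

```python
def hash_index(key: int, capacity: int) -> int:
    """Locate the bucket for `key` in a table with `capacity` buckets."""
    h = hash(key)        # step 1: compute the hash value
    return h % capacity  # step 2: modulo the bucket count to get the array index

print(hash_index(12836, 100))  # 36 in CPython, where hash(int) == int for small ints
```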
@ -15,15 +15,15 @@ As shown in Figure 7-16, a <u>binary search tree</u> satisfies the following con
## 7.4.1 Operations on a binary search tree
We encapsulate the binary search tree as a class `BinarySearchTree` and declare a member variable `root` pointing to the tree's root node.
### 1. Searching for a node
Given a target node value `num`, we can search according to the properties of the binary search tree. As shown in Figure 7-17, we declare a node `cur`, start from the binary tree's root node `root`, and loop to compare the relationship between the node value `cur.val` and `num`.
- If `cur.val < num`, it means the target node is in `cur`'s right subtree, thus execute `cur = cur.right`.
- If `cur.val > num`, it means the target node is in `cur`'s left subtree, thus execute `cur = cur.left`.
- If `cur.val = num`, it means the target node is found, exit the loop, and return the node.
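The loop above can be sketched in Python (an illustrative sketch, not the book's listing; it assumes a minimal node type with `val`, `left`, and `right` fields):

```python
class TreeNode:
    def __init__(self, val):
        self.val, self.left, self.right = val, None, None

def search(root, num):
    """Search for the node with value num in a BST; returns None if absent."""
    cur = root
    while cur is not None:
        if cur.val < num:    # target is in cur's right subtree
            cur = cur.right
        elif cur.val > num:  # target is in cur's left subtree
            cur = cur.left
        else:                # target found: exit the loop and return the node
            return cur
    return None              # not in the tree

root = TreeNode(8)
root.left, root.right = TreeNode(4), TreeNode(12)
print(search(root, 12).val)  # 12
```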
=== "<1>"
![Example of searching for a node in a binary search tree](binary_search_tree.assets/bst_search_step1.png){ class="animation-figure" }
<p align="center"> Figure 7-17 Example of searching for a node in a binary search tree </p>
The search operation in a binary search tree works on the same principle as the binary search algorithm, eliminating half of the possibilities in each round. The number of loops is at most the height of the binary tree. When the binary tree is balanced, it uses $O(\log n)$ time. The example code is as follows:
=== "Python"
Given an element `num` to be inserted, to maintain the property of the binary search tree "left subtree < root node < right subtree," the insertion operation proceeds as shown in Figure 7-18.
1. **Finding the insertion position**: Similar to the search operation, start from the root node and loop downwards according to the size relationship between the current node value and `num`, until passing through the leaf node (traversing to `None`), then exit the loop.
2. **Inserting the node at that position**: Initialize a node with value `num` and place it where `None` was.
![Inserting a node into a binary search tree](binary_search_tree.assets/bst_insert.png){ class="animation-figure" }
In the code implementation, note the following two points.
- The binary search tree does not allow duplicate nodes; otherwise, its definition would be violated. Therefore, if the node to be inserted already exists in the tree, the insertion is not performed, and the function returns directly.
- To perform the insertion operation, we need to use the node `pre` to save the node from the previous loop. This way, when traversing to `None`, we can get its parent node, thus completing the node insertion operation.
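The two points translate into code roughly as follows (an illustrative sketch, not the book's listing; `TreeNode` is a minimal node type assumed here):

```python
class TreeNode:
    def __init__(self, val):
        self.val, self.left, self.right = val, None, None

def insert(root, num):
    """Insert num into the BST; returns the (possibly new) root."""
    if root is None:
        return TreeNode(num)
    cur, pre = root, None
    while cur is not None:
        if cur.val == num:   # duplicate found: do not insert, return directly
            return root
        pre = cur            # pre saves the node from the previous loop
        cur = cur.right if cur.val < num else cur.left
    node = TreeNode(num)     # place the new node where None was
    if pre.val < num:
        pre.right = node
    else:
        pre.left = node
    return root

root = None
for v in [8, 4, 12, 4]:  # the duplicate 4 is ignored
    root = insert(root, v)
print(root.left.val, root.right.val)  # 4 12
```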
=== "Python"
Similar to searching for a node, inserting a node uses $O(\log n)$ time.
### 3. Removing a node
First, find the target node in the binary tree, then remove it. Similar to inserting a node, we need to ensure that after the removal operation is completed, the property of the binary search tree "left subtree < root node < right subtree" is still satisfied. Therefore, based on the number of child nodes of the target node, we divide it into three cases: 0, 1, and 2, and perform the corresponding node removal operations.
As shown in Figure 7-19, when the degree of the node to be removed is $0$, it means the node is a leaf node and can be directly removed.
![Removing a node in a binary search tree (degree 0)](binary_search_tree.assets/bst_remove_case1.png){ class="animation-figure" }
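A sketch covering all three cases (degree 0, 1, and 2) might look as follows; this is illustrative only, assumes a minimal `TreeNode`, and handles the degree-2 case by swapping in the in-order successor:

```python
class TreeNode:
    def __init__(self, val):
        self.val, self.left, self.right = val, None, None

def remove(root, num):
    """Remove the node with value num from the BST; returns the new root."""
    if root is None:
        return None
    # find the target node and its parent
    cur, pre = root, None
    while cur is not None and cur.val != num:
        pre = cur
        cur = cur.right if cur.val < num else cur.left
    if cur is None:                 # target not found
        return root
    if cur.left is None or cur.right is None:
        # degree 0 or 1: replace cur with its only child (or None)
        child = cur.left or cur.right
        if pre is None:             # removing the root itself
            return child
        if pre.left is cur:
            pre.left = child
        else:
            pre.right = child
    else:
        # degree 2: find the in-order successor (leftmost node of right subtree),
        # remove it (it has at most one child), then copy its value into cur
        tmp = cur.right
        while tmp.left is not None:
            tmp = tmp.left
        root = remove(root, tmp.val)
        cur.val = tmp.val
    return root

root = TreeNode(4)
root.left, root.right = TreeNode(2), TreeNode(6)
root.left.left, root.left.right = TreeNode(1), TreeNode(3)
root.right.left, root.right.right = TreeNode(5), TreeNode(7)
root = remove(root, 4)  # degree-2 case: 4 is replaced by its successor 5
print(root.val)  # 5
```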
The operation of removing a node also uses $O(\log n)$ time, where finding the node to be removed and finding its in-order successor each take $O(\log n)$ time.
### 4. In-order traversal is ordered
As shown in Figure 7-22, the in-order traversal of a binary tree follows the traversal order of "left $\rightarrow$ root $\rightarrow$ right," and a binary search tree satisfies the size relationship of "left child node $<$ root node $<$ right child node."
This means that in-order traversal in a binary search tree always visits the next smallest node first, thus deriving an important property: **the in-order traversal sequence of a binary search tree is ascending**.
Using the ascending property of in-order traversal, obtaining ordered data in a binary search tree requires only $O(n)$ time, without the need for additional sorting operations, which is very efficient.
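This property can be checked with a short sketch (illustrative; assumes a minimal `TreeNode`):

```python
class TreeNode:
    def __init__(self, val):
        self.val, self.left, self.right = val, None, None

def inorder(root, res=None):
    """In-order traversal: left -> root -> right."""
    if res is None:
        res = []
    if root is not None:
        inorder(root.left, res)
        res.append(root.val)   # on a BST this appends the next smallest value
        inorder(root.right, res)
    return res

root = TreeNode(8)
root.left, root.right = TreeNode(4), TreeNode(12)
print(inorder(root))  # [4, 8, 12] -- ascending, no extra sorting needed
```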
## 7.4.2 Efficiency of binary search trees
Given a set of data, we consider using an array or a binary search tree for storage. Observing Table 7-2, the operations on a binary search tree all have logarithmic time complexity, which is stable and efficient. Arrays are more efficient than binary search trees only in scenarios involving frequent additions and infrequent searches or removals.
<p align="center"> Table 7-2 Efficiency comparison between arrays and search trees </p>
</div>
Ideally, the binary search tree is "balanced," allowing any node to be found within $\log n$ loops.
However, if we continuously insert and remove nodes in a binary search tree, it may degenerate into a linked list as shown in Figure 7-23, where the time complexity of various operations also degrades to $O(n)$.
![Degradation of a binary search tree](binary_search_tree.assets/bst_degradation.png){ class="animation-figure" }
From a physical structure perspective, a tree is a data structure based on linked lists. Hence, its traversal method involves accessing nodes one by one through pointers. However, a tree is a non-linear data structure, which makes traversing a tree more complex than traversing a linked list, requiring the assistance of search algorithms.
The common traversal methods for binary trees include level-order traversal, pre-order traversal, in-order traversal, and post-order traversal.
## 7.2.1 Level-order traversal
As shown in Figure 7-9, <u>level-order traversal</u> traverses the binary tree from top to bottom, layer by layer. Within each level, it visits nodes from left to right.
Level-order traversal is essentially a type of <u>breadth-first traversal</u>, also known as <u>breadth-first search (BFS)</u>, which embodies a "circumferentially outward expanding" layer-by-layer traversal method.
![Level-order traversal of a binary tree](binary_tree_traversal.assets/binary_tree_bfs.png){ class="animation-figure" }
Breadth-first traversal is usually implemented with the help of a "queue". The queue follows a "first in, first out" rule, which matches the "layer-by-layer" order of breadth-first traversal.
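A queue-based sketch of level-order traversal in Python (illustrative; assumes a minimal `TreeNode`):

```python
from collections import deque

class TreeNode:
    def __init__(self, val):
        self.val, self.left, self.right = val, None, None

def level_order(root):
    """Traverse top to bottom, visiting each level left to right."""
    if root is None:
        return []
    res, queue = [], deque([root])
    while queue:
        node = queue.popleft()        # first in, first out
        res.append(node.val)
        if node.left:
            queue.append(node.left)   # left child enqueued first...
        if node.right:
            queue.append(node.right)  # ...so each level is visited left to right
    return res

root = TreeNode(1)
root.left, root.right = TreeNode(2), TreeNode(3)
root.left.left, root.left.right = TreeNode(4), TreeNode(5)
print(level_order(root))  # [1, 2, 3, 4, 5]
```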
### 2. Complexity analysis
- **Time complexity is $O(n)$**: All nodes are visited once, taking $O(n)$ time, where $n$ is the number of nodes.
- **Space complexity is $O(n)$**: In the worst case, i.e., a full binary tree, before traversing to the bottom level, the queue can contain at most $(n + 1) / 2$ nodes simultaneously, occupying $O(n)$ space.
## 7.2.2 Pre-order, in-order, and post-order traversal
Correspondingly, pre-order, in-order, and post-order traversal all belong to <u>depth-first traversal</u>, also known as <u>depth-first search (DFS)</u>, which embodies a "proceed to the end first, then backtrack and continue" traversal method.
Figure 7-10 shows the working principle of performing a depth-first traversal on a binary tree. **Depth-first traversal is like "walking" around the entire binary tree**, encountering three positions at each node, corresponding to pre-order, in-order, and post-order traversal.
![Preorder, in-order, and post-order traversal of a binary search tree](binary_tree_traversal.assets/binary_tree_dfs.png){ class="animation-figure" }
Figure 7-11 shows the recursive process of pre-order traversal of a binary tree, which can be divided into two opposite parts: "recursion" and "return."
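Pre-order traversal can be sketched recursively as follows (illustrative; assumes a minimal `TreeNode`); in-order and post-order differ only in where `res.append` sits relative to the two recursive calls:

```python
class TreeNode:
    def __init__(self, val):
        self.val, self.left, self.right = val, None, None

def pre_order(root, res=None):
    """Pre-order: root -> left subtree -> right subtree."""
    if res is None:
        res = []
    if root is not None:
        res.append(root.val)        # visit on the way down (the "recursion" part)
        pre_order(root.left, res)
        pre_order(root.right, res)  # backtrack after both subtrees (the "return" part)
    return res

root = TreeNode(1)
root.left, root.right = TreeNode(2), TreeNode(3)
root.left.left, root.left.right = TreeNode(4), TreeNode(5)
print(pre_order(root))  # [1, 2, 4, 5, 3]
```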
### 2. Complexity analysis
- **Time complexity is $O(n)$**: All nodes are visited once, using $O(n)$ time.
- **Space complexity is $O(n)$**: In the worst case, i.e., the tree degenerates into a linked list, the recursion depth reaches $n$, and the system occupies $O(n)$ stack frame space.