diff --git a/docs-en/chapter_array_and_linkedlist/array.md b/docs-en/chapter_array_and_linkedlist/array.md
index 9abeb78a7..e5e69e27c 100755
--- a/docs-en/chapter_array_and_linkedlist/array.md
+++ b/docs-en/chapter_array_and_linkedlist/array.md
@@ -287,6 +287,11 @@ Accessing elements in an array is highly efficient, allowing us to randomly acce
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
### 3.   Inserting Elements
As shown in the image below, to insert an element in the middle of an array, all elements following the insertion point must be moved one position back to make room for the new element.
@@ -464,6 +469,11 @@ It's important to note that since the length of an array is fixed, inserting an
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
### 4.   Deleting Elements
Similarly, as illustrated below, to delete an element at index $i$, all elements following index $i$ must be moved forward by one position.
@@ -618,6 +628,11 @@ Note that after deletion, the last element becomes "meaningless", so we do not n
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
Overall, the insertion and deletion operations in arrays have the following disadvantages:
- **High Time Complexity**: Both insertion and deletion in an array have an average time complexity of $O(n)$, where $n$ is the length of the array.
@@ -837,6 +852,11 @@ In most programming languages, we can traverse an array either by indices or by
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
### 6.   Finding Elements
To find a specific element in an array, we need to iterate through it, checking each element to see if it matches.
@@ -1000,6 +1020,11 @@ Since arrays are linear data structures, this operation is known as "linear sear
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
### 7.   Expanding Arrays
In complex system environments, it's challenging to ensure that the memory space following an array is available, making it unsafe to extend the array's capacity. Therefore, in most programming languages, **the length of an array is immutable**.
@@ -1205,6 +1230,11 @@ To expand an array, we need to create a larger array and then copy the elements
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
## 4.1.2   Advantages and Limitations of Arrays
Arrays are stored in contiguous memory spaces and consist of elements of the same type. This approach includes a wealth of prior information that the system can use to optimize the operation efficiency of the data structure.
diff --git a/docs-en/chapter_array_and_linkedlist/linked_list.md b/docs-en/chapter_array_and_linkedlist/linked_list.md
index 9d83cda39..9f1ce26d7 100755
--- a/docs-en/chapter_array_and_linkedlist/linked_list.md
+++ b/docs-en/chapter_array_and_linkedlist/linked_list.md
@@ -540,6 +540,11 @@ In contrast, the time complexity of inserting an element in an array is $O(n)$,
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
### 3.   Deleting a Node
As shown below, deleting a node in a linked list is also very convenient, **requiring only the change of one node's reference (pointer)**.
@@ -725,6 +730,11 @@ Note that although node `P` still points to `n1` after the deletion operation is
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
### 4.   Accessing Nodes
**Accessing nodes in a linked list is less efficient**. As mentioned earlier, any element in an array can be accessed in $O(1)$ time.
However, in a linked list, the program needs to start from the head node and traverse each node sequentially until it finds the target node. That is, accessing the $i$-th node of a linked list requires $i - 1$ iterations, with a time complexity of $O(n)$.
@@ -899,6 +909,11 @@ Note that although node `P` still points to `n1` after the deletion operation is
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
### 5.   Finding Nodes
Traverse the linked list to find a node with a value equal to `target`, and output the index of that node in the linked list. This process also falls under linear search. The code is as follows:
@@ -1096,6 +1111,11 @@ Traverse the linked list to find a node with a value equal to `target`, and outp
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
## 4.2.2   Arrays vs. Linked Lists
The following table summarizes the characteristics of arrays and linked lists and compares their operational efficiencies. Since they employ two opposite storage strategies, their properties and operational efficiencies also show contrasting features.
diff --git a/docs-en/chapter_computational_complexity/iteration_and_recursion.md b/docs-en/chapter_computational_complexity/iteration_and_recursion.md
index d2266f264..b2c466fc7 100644
--- a/docs-en/chapter_computational_complexity/iteration_and_recursion.md
+++ b/docs-en/chapter_computational_complexity/iteration_and_recursion.md
@@ -182,6 +182,11 @@ The following function implements the sum $1 + 2 + \dots + n$ using a `for` loop
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
The flowchart below represents this sum function.
![Flowchart of the Sum Function](iteration_and_recursion.assets/iteration.png){ class="animation-figure" }
@@ -388,6 +393,11 @@ Below we use a `while` loop to implement the sum $1 + 2 + \dots + n$:
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
**The `while` loop is more flexible than the `for` loop**. In a `while` loop, we can freely design the initialization and update steps of the condition variable.
For example, in the following code, the condition variable $i$ is updated twice in each round, which would be inconvenient to implement with a `for` loop:
@@ -607,6 +617,11 @@ For example, in the following code, the condition variable $i$ is updated twice
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
Overall, **`for` loops are more concise, while `while` loops are more flexible**. Both can implement iterative structures. Which one to use should be determined based on the specific requirements of the problem.
### 3.   Nested Loops
@@ -821,6 +836,11 @@ We can nest one loop structure within another. Below is an example using `for` l
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
The flowchart below represents this nested loop.
![Flowchart of the Nested Loop](iteration_and_recursion.assets/nested_iteration.png){ class="animation-figure" }
@@ -1026,6 +1046,11 @@ Observe the following code, where calling the function `recur(n)` completes the
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
Figure 2-3 shows the recursive process of this function.
![Recursive Process of the Sum Function](iteration_and_recursion.assets/recursion_sum.png){ class="animation-figure" }
@@ -1222,6 +1247,11 @@ For example, in calculating $1 + 2 + \dots + n$, we can make the result variable
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
The execution process of tail recursion is shown in the following figure.
Comparing regular recursion and tail recursion, the point at which the summation operation is performed differs.
- **Regular Recursion**: The summation operation occurs during the "return" phase, requiring another summation after each layer returns.
@@ -1430,6 +1460,11 @@ Using the recursive relation, and considering the first two numbers as terminati
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
Observing the above code, we see that it recursively calls two functions within itself, **meaning that one call generates two branching calls**. As illustrated below, this continuous recursive calling eventually creates a "recursion tree" with a depth of $n$.
![Fibonacci Sequence Recursion Tree](iteration_and_recursion.assets/recursion_tree.png){ class="animation-figure" }
@@ -1748,6 +1783,11 @@ Therefore, **we can use an explicit stack to simulate the behavior of the call s
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
Observing the above code, when recursion is transformed into iteration, the code becomes more complex. Although iteration and recursion can often be transformed into each other, it's not always advisable to do so for two reasons:
- The transformed code may become harder to understand and less readable.
diff --git a/docs-en/chapter_computational_complexity/space_complexity.md b/docs-en/chapter_computational_complexity/space_complexity.md
index 7fa2bf2c0..f803786a5 100644
--- a/docs-en/chapter_computational_complexity/space_complexity.md
+++ b/docs-en/chapter_computational_complexity/space_complexity.md
@@ -1062,6 +1062,11 @@ Note that memory occupied by initializing variables or calling functions in a lo
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
### 2.   Linear Order $O(n)$
Linear order is common in arrays, linked lists, stacks, queues, etc., where the number of elements is proportional to $n$:
@@ -1326,6 +1331,11 @@ Linear order is common in arrays, linked lists, stacks, queues, etc., where the
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
As shown below, this function's recursive depth is $n$, meaning there are $n$ unreturned instances of the `linear_recur()` function, using $O(n)$ stack frame space:
=== "Python"
@@ -1467,6 +1477,11 @@ As shown below, this function's recursive depth is $n$, meaning there are $n$ in
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
![Recursive Function Generating Linear Order Space Complexity](space_complexity.assets/space_complexity_recursive_linear.png){ class="animation-figure" }

Figure 2-17   Recursive Function Generating Linear Order Space Complexity

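For reference, a minimal Python sketch of the kind of function this section describes: recursion depth $n$, hence $O(n)$ stack frame space. The name `linear_recur` and its base case are illustrative assumptions, not necessarily the repository's exact listing:

```python
def linear_recur(n: int):
    """Linear-order space: each call adds one unreturned stack frame."""
    print("recursion n =", n)
    if n == 1:
        return  # deepest point: n frames are live on the call stack
    linear_recur(n - 1)
```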
@@ -1687,6 +1702,11 @@ Quadratic order is common in matrices and graphs, where the number of elements i
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
As shown below, the recursive depth of this function is $n$, and in each recursive call, an array is initialized with lengths $n$, $n-1$, $\dots$, $2$, $1$, averaging $n/2$, thus overall occupying $O(n^2)$ space:
=== "Python"
@@ -1846,6 +1866,11 @@ As shown below, the recursive depth of this function is $n$, and in each recursi
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
![Recursive Function Generating Quadratic Order Space Complexity](space_complexity.assets/space_complexity_recursive_quadratic.png){ class="animation-figure" }

Figure 2-18   Recursive Function Generating Quadratic Order Space Complexity

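Likewise, a sketch consistent with the description above, where each of the $n$ live recursive frames allocates its own array (names assumed for illustration):

```python
def quadratic_recur(n: int) -> int:
    """Quadratic-order space: live frames hold arrays of length n, n-1, ..., 1."""
    if n <= 0:
        return 0
    nums = [0] * n  # this frame keeps a list of length n alive until it returns
    return quadratic_recur(n - 1)
```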
@@ -2019,6 +2044,11 @@ Exponential order is common in binary trees. Observe the below image, a "full bi
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
![Full Binary Tree Generating Exponential Order Space Complexity](space_complexity.assets/space_complexity_exponential.png){ class="animation-figure" }

Figure 2-19   Full Binary Tree Generating Exponential Order Space Complexity

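One way such a full binary tree can be grown, keeping $2^n - 1$ nodes alive at once; the `TreeNode` class below is a minimal stand-in for the book's node type:

```python
class TreeNode:
    """Minimal binary tree node (stand-in for the book's TreeNode)."""
    def __init__(self, val: int = 0):
        self.val = val
        self.left = None
        self.right = None

def build_tree(n: int):
    """Exponential-order space: the node count doubles at every level."""
    if n == 0:
        return None
    root = TreeNode(0)
    root.left = build_tree(n - 1)
    root.right = build_tree(n - 1)
    return root

root = build_tree(4)  # a full tree of height 4 holds 2^4 - 1 = 15 nodes
```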
diff --git a/docs-en/chapter_computational_complexity/time_complexity.md b/docs-en/chapter_computational_complexity/time_complexity.md
index e949874c4..1e81ee035 100644
--- a/docs-en/chapter_computational_complexity/time_complexity.md
+++ b/docs-en/chapter_computational_complexity/time_complexity.md
@@ -1119,6 +1119,11 @@ Constant order means the number of operations is independent of the input data s
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
### 2.   Linear Order $O(n)$
Linear order indicates the number of operations grows linearly with the input data size $n$. Linear order commonly appears in single-loop structures:
@@ -1271,6 +1276,11 @@ Linear order indicates the number of operations grows linearly with the input da
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
Operations like array traversal and linked list traversal have a time complexity of $O(n)$, where $n$ is the length of the array or list:
=== "Python"
@@ -1439,6 +1449,11 @@ Operations like array traversal and linked list traversal have a time complexity
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
It's important to note that **the input data size $n$ should be determined based on the type of input data**. In the first example, $n$ represents the input data size, while in the second, the array length $n$ is the data size.
### 3.   Quadratic Order $O(n^2)$
@@ -1636,6 +1651,11 @@ Quadratic order means the number of operations grows quadratically with the inpu
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
The following image compares constant order, linear order, and quadratic order time complexities.
![Constant, Linear, and Quadratic Order Time Complexities](time_complexity.assets/time_complexity_constant_linear_quadratic.png){ class="animation-figure" }
@@ -1916,6 +1936,11 @@ For instance, in bubble sort, the outer loop runs $n - 1$ times, and the inner l
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
### 4.   Exponential Order $O(2^n)$
Biological "cell division" is a classic example of exponential order growth: starting with one cell, it becomes two after one division, four after two divisions, and so on, resulting in $2^n$ cells after $n$ divisions.
@@ -2144,6 +2169,11 @@ The following image and code simulate the cell division process, with a time com
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
![Exponential Order Time Complexity](time_complexity.assets/time_complexity_exponential.png){ class="animation-figure" }

Figure 2-11   Exponential Order Time Complexity

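A loop-based sketch of the cell division simulation described above (the name `exponential` is assumed; the function tallies $1 + 2 + 4 + \dots + 2^{n-1} = 2^n - 1$ division operations):

```python
def exponential(n: int) -> int:
    """Exponential-order time: the cell count doubles each round."""
    count, cells = 0, 1
    for _ in range(n):          # n rounds of division
        for _ in range(cells):  # every existing cell divides once
            count += 1
        cells *= 2
    return count

print(exponential(10))  # 2^10 - 1 = 1023
```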
@@ -2279,6 +2309,11 @@ In practice, exponential order often appears in recursive functions. For example
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
Exponential order growth is extremely rapid and is commonly seen in exhaustive search methods (brute force, backtracking, etc.). For large-scale problems, exponential order is unacceptable, often requiring dynamic programming or greedy algorithms as solutions.
### 5.   Logarithmic Order $O(\log n)$
@@ -2456,6 +2491,11 @@ The following image and code simulate the "halving each round" process, with a t
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
![Logarithmic Order Time Complexity](time_complexity.assets/time_complexity_logarithmic.png){ class="animation-figure" }

Figure 2-12   Logarithmic Order Time Complexity

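A sketch of the "halving each round" loop described above (name assumed; for a power-of-two $n$, the loop body runs exactly $\log_2 n$ times):

```python
def logarithmic(n: float) -> int:
    """Logarithmic-order time: n is halved until it drops to 1."""
    count = 0
    while n > 1:
        n = n / 2
        count += 1
    return count

print(logarithmic(1024))  # 10 rounds
```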
@@ -2591,6 +2631,11 @@ Like exponential order, logarithmic order also frequently appears in recursive f
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
Logarithmic order is typical in algorithms based on the divide-and-conquer strategy, embodying the "split into many" and "simplify complex problems" approach. It's slow-growing and is the most ideal time complexity after constant order.
!!! tip "What is the base of $O(\log n)$?"
@@ -2784,6 +2829,11 @@ Linear-logarithmic order often appears in nested loops, with the complexities of
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
The image below demonstrates how linear-logarithmic order is generated. Each level of a binary tree has $n$ operations, and the tree has $\log_2 n + 1$ levels, resulting in a time complexity of $O(n \log n)$.
![Linear-Logarithmic Order Time Complexity](time_complexity.assets/time_complexity_logarithmic_linear.png){ class="animation-figure" }
@@ -2990,6 +3040,11 @@ Factorials are typically implemented using recursion. As shown in the image and
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
![Factorial Order Time Complexity](time_complexity.assets/time_complexity_factorial.png){ class="animation-figure" }

Figure 2-14   Factorial Order Time Complexity

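A recursive sketch consistent with the description above (the name `factorial_recur` is assumed; the first level fans out into $n$ branches, the next into $n - 1$, and so on, for $n!$ calls in total):

```python
def factorial_recur(n: int) -> int:
    """Factorial-order time: a frame with argument n spawns n calls below it."""
    if n == 0:
        return 1
    count = 0
    for _ in range(n):  # one call at this level fans out into n calls
        count += factorial_recur(n - 1)
    return count

print(factorial_recur(4))  # 4! = 24
```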
@@ -3352,6 +3407,11 @@ The "worst-case time complexity" corresponds to the asymptotic upper bound, deno
}
```
+??? pythontutor "Visualizing Code"
+
+
+    Full Screen >
+
It's important to note that the best-case time complexity is rarely used in practice, as it is usually only achievable under very low probabilities and might be misleading. **The worst-case time complexity is more practical as it provides a safety value for efficiency**, allowing us to confidently use the algorithm.
From the above example, it's clear that both the worst-case and best-case time complexities only occur under "special data distributions," which may have a small probability of occurrence and may not accurately reflect the algorithm's run efficiency. In contrast, **the average time complexity can reflect the algorithm's efficiency under random input data**, denoted by the $\Theta$ notation.
diff --git a/docs/chapter_array_and_linkedlist/array.md b/docs/chapter_array_and_linkedlist/array.md
index fa6bf7446..015c5415c 100755
--- a/docs/chapter_array_and_linkedlist/array.md
+++ b/docs/chapter_array_and_linkedlist/array.md
@@ -119,6 +119,10 @@ comments: true
var nums = [_]i32{ 1, 3, 2, 5, 4 };
```
+??? pythontutor "可视化运行"
+
+
+
### 2.   访问元素
数组元素被存储在连续的内存空间中,这意味着计算数组元素的内存地址非常容易。给定数组内存地址(首元素内存地址)和某个元素的索引,我们可以使用图 4-2 所示的公式计算得到该元素的内存地址,从而直接访问该元素。
@@ -287,6 +291,11 @@ comments: true
}
```
+??? pythontutor "可视化运行"
+
+
+    全屏观看 >
+
### 3.   插入元素
数组元素在内存中是“紧挨着的”,它们之间没有空间再存放任何数据。如图 4-3 所示,如果想在数组中间插入一个元素,则需要将该元素之后的所有元素都向后移动一位,之后再把元素赋值给该索引。
@@ -464,6 +473,11 @@ comments: true
}
```
+??? pythontutor "可视化运行"
+
+
+    全屏观看 >
+
### 4.   删除元素
同理,如图 4-4 所示,若想删除索引 $i$ 处的元素,则需要把索引 $i$ 之后的元素都向前移动一位。
@@ -618,6 +632,11 @@ comments: true
}
```
+??? pythontutor "可视化运行"
+
+
+    全屏观看 >
+
总的来看,数组的插入与删除操作有以下缺点。
- **时间复杂度高**:数组的插入和删除的平均时间复杂度均为 $O(n)$ ,其中 $n$ 为数组长度。
@@ -837,6 +856,11 @@ comments: true
}
```
+??? pythontutor "可视化运行"
+
+
+    全屏观看 >
+
### 6.   查找元素
在数组中查找指定元素需要遍历数组,每轮判断元素值是否匹配,若匹配则输出对应索引。
@@ -1000,6 +1024,11 @@ comments: true
}
```
+??? pythontutor "可视化运行"
+
+
+    全屏观看 >
+
### 7.   扩容数组
在复杂的系统环境中,程序难以保证数组之后的内存空间是可用的,从而无法安全地扩展数组容量。因此在大多数编程语言中,**数组的长度是不可变的**。
@@ -1205,6 +1234,11 @@ comments: true
}
```
+??? pythontutor "可视化运行"
+
+
+    全屏观看 >
+
## 4.1.2   数组的优点与局限性
数组存储在连续的内存空间内,且元素类型相同。这种做法包含丰富的先验信息,系统可以利用这些信息来优化数据结构的操作效率。
diff --git a/docs/chapter_array_and_linkedlist/linked_list.md b/docs/chapter_array_and_linkedlist/linked_list.md
index 16bc3a608..6bf4eb49d 100755
--- a/docs/chapter_array_and_linkedlist/linked_list.md
+++ b/docs/chapter_array_and_linkedlist/linked_list.md
@@ -396,6 +396,10 @@ comments: true
n3.next = &n4;
```
+??? pythontutor "可视化运行"
+
+
+
数组整体是一个变量,比如数组 `nums` 包含元素 `nums[0]` 和 `nums[1]` 等,而链表是由多个独立的节点对象组成的。**我们通常将头节点当作链表的代称**,比如以上代码中的链表可记作链表 `n0` 。
### 2.   插入节点
@@ -540,6 +544,11 @@ comments: true
}
```
+??? pythontutor "可视化运行"
+
+
+    全屏观看 >
+
### 3.   删除节点
如图 4-7 所示,在链表中删除节点也非常方便,**只需改变一个节点的引用(指针)即可**。
@@ -725,6 +734,11 @@ comments: true
}
```
+??? pythontutor "可视化运行"
+
+
+    全屏观看 >
+
### 4.   访问节点
**在链表中访问节点的效率较低**。如上一节所述,我们可以在 $O(1)$ 时间下访问数组中的任意元素。链表则不然,程序需要从头节点出发,逐个向后遍历,直至找到目标节点。也就是说,访问链表的第 $i$ 个节点需要循环 $i - 1$ 轮,时间复杂度为 $O(n)$ 。
@@ -899,6 +913,11 @@ comments: true
}
```
+??? pythontutor "可视化运行"
+
+
+    全屏观看 >
+
### 5.   查找节点
遍历链表,查找其中值为 `target` 的节点,输出该节点在链表中的索引。此过程也属于线性查找。代码如下所示:
@@ -1096,6 +1115,11 @@ comments: true
}
```
+??? pythontutor "可视化运行"
+
+
+    全屏观看 >
+
## 4.2.2   数组 vs. 
链表 表 4-1 总结了数组和链表的各项特点并对比了操作效率。由于它们采用两种相反的存储策略,因此各种性质和操作效率也呈现对立的特点。 diff --git a/docs/chapter_array_and_linkedlist/list.md b/docs/chapter_array_and_linkedlist/list.md index 8390d4f93..2c42e5f5a 100755 --- a/docs/chapter_array_and_linkedlist/list.md +++ b/docs/chapter_array_and_linkedlist/list.md @@ -139,6 +139,10 @@ comments: true try nums.appendSlice(&[_]i32{ 1, 3, 2, 5, 4 }); ``` +??? pythontutor "可视化运行" + + + ### 2.   访问元素 列表本质上是数组,因此可以在 $O(1)$ 时间内访问和更新元素,效率很高。 @@ -258,6 +262,10 @@ comments: true nums.items[1] = 0; // 将索引 1 处的元素更新为 0 ``` +??? pythontutor "可视化运行" + + + ### 3.   插入与删除元素 相较于数组,列表可以自由地添加与删除元素。在列表尾部添加元素的时间复杂度为 $O(1)$ ,但插入和删除元素的效率仍与数组相同,时间复杂度为 $O(n)$ 。 @@ -488,6 +496,10 @@ comments: true _ = nums.orderedRemove(3); // 删除索引 3 处的元素 ``` +??? pythontutor "可视化运行" + + + ### 4.   遍历列表 与数组一样,列表可以根据索引遍历,也可以直接遍历各元素。 @@ -671,6 +683,10 @@ comments: true } ``` +??? pythontutor "可视化运行" + + + ### 5.   拼接列表 给定一个新列表 `nums1` ,我们可以将其拼接到原列表的尾部。 @@ -772,6 +788,10 @@ comments: true try nums.insertSlice(nums.items.len, nums1.items); // 将列表 nums1 拼接到 nums 之后 ``` +??? pythontutor "可视化运行" + + + ### 6.   排序列表 完成列表排序后,我们便可以使用在数组类算法题中经常考查的“二分查找”和“双指针”算法。 @@ -859,6 +879,10 @@ comments: true std.sort.sort(i32, nums.items, {}, comptime std.sort.asc(i32)); ``` +??? pythontutor "可视化运行" + + + ## 4.3.2   列表实现 许多编程语言内置了列表,例如 Java、C++、Python 等。它们的实现比较复杂,各个参数的设定也非常考究,例如初始容量、扩容倍数等。感兴趣的读者可以查阅源码进行学习。 diff --git a/docs/chapter_backtracking/backtracking_algorithm.md b/docs/chapter_backtracking/backtracking_algorithm.md index 04e9d672a..2bd52777e 100644 --- a/docs/chapter_backtracking/backtracking_algorithm.md +++ b/docs/chapter_backtracking/backtracking_algorithm.md @@ -206,6 +206,11 @@ comments: true [class]{}-[func]{preOrder} ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + ![在前序遍历中搜索节点](backtracking_algorithm.assets/preorder_find_nodes.png){ class="animation-figure" }

图 13-1   在前序遍历中搜索节点

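For reference, a minimal Python sketch of the preorder search shown in 图 13-1. The `[class]{}-[func]{preOrder}` placeholders above stand for the book's real listing, so the node shape, function name, and target check below are illustrative assumptions:

```python
class TreeNode:
    """Minimal binary tree node used for illustration."""
    def __init__(self, val: int = 0):
        self.val = val
        self.left = None
        self.right = None

def pre_order(root, target: int, res: list):
    """Preorder traversal: visit the root, then the left and right subtrees."""
    if root is None:
        return
    if root.val == target:
        res.append(root)  # record a matching node
    pre_order(root.left, target, res)
    pre_order(root.right, target, res)
```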
@@ -472,6 +477,11 @@ comments: true [class]{}-[func]{preOrder} ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + 在每次“尝试”中,我们通过将当前节点添加进 `path` 来记录路径;而在“回退”前,我们需要将该节点从 `path` 中弹出,**以恢复本次尝试之前的状态**。 观察图 13-2 所示的过程,**我们可以将尝试和回退理解为“前进”与“撤销”**,两个操作互为逆向。 @@ -779,6 +789,11 @@ comments: true [class]{}-[func]{preOrder} ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + “剪枝”是一个非常形象的名词。如图 13-3 所示,在搜索过程中,**我们“剪掉”了不满足约束条件的搜索分支**,避免许多无意义的尝试,从而提高了搜索效率。 ![根据约束条件剪枝](backtracking_algorithm.assets/preorder_find_constrained_paths.png){ class="animation-figure" } diff --git a/docs/chapter_backtracking/n_queens_problem.md b/docs/chapter_backtracking/n_queens_problem.md index f967d72c4..eda66d5bf 100644 --- a/docs/chapter_backtracking/n_queens_problem.md +++ b/docs/chapter_backtracking/n_queens_problem.md @@ -662,6 +662,11 @@ comments: true [class]{}-[func]{nQueens} ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + 逐行放置 $n$ 次,考虑列约束,则从第一行到最后一行分别有 $n$、$n-1$、$\dots$、$2$、$1$ 个选择,**因此时间复杂度为 $O(n!)$** 。实际上,根据对角线约束的剪枝也能够大幅缩小搜索空间,因而搜索效率往往优于以上时间复杂度。 数组 `state` 使用 $O(n^2)$ 空间,数组 `cols`、`diags1` 和 `diags2` 皆使用 $O(n)$ 空间。最大递归深度为 $n$ ,使用 $O(n)$ 栈帧空间。因此,**空间复杂度为 $O(n^2)$** 。 diff --git a/docs/chapter_backtracking/permutations_problem.md b/docs/chapter_backtracking/permutations_problem.md index 5112faa0b..b72c481ca 100644 --- a/docs/chapter_backtracking/permutations_problem.md +++ b/docs/chapter_backtracking/permutations_problem.md @@ -471,6 +471,11 @@ comments: true [class]{}-[func]{permutationsI} ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + ## 13.2.2   考虑相等元素的情况 !!! question @@ -942,6 +947,11 @@ comments: true [class]{}-[func]{permutationsII} ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + 假设元素两两之间互不相同,则 $n$ 个元素共有 $n!$ 种排列(阶乘);在记录结果时,需要复制长度为 $n$ 的列表,使用 $O(n)$ 时间。**因此时间复杂度为 $O(n!n)$** 。 最大递归深度为 $n$ ,使用 $O(n)$ 栈帧空间。`selected` 使用 $O(n)$ 空间。同一时刻最多共有 $n$ 个 `duplicated` ,使用 $O(n^2)$ 空间。**因此空间复杂度为 $O(n^2)$** 。 diff --git a/docs/chapter_backtracking/subset_sum_problem.md b/docs/chapter_backtracking/subset_sum_problem.md index 7d63532a3..3064e1310 100644 --- a/docs/chapter_backtracking/subset_sum_problem.md +++ b/docs/chapter_backtracking/subset_sum_problem.md @@ -428,6 +428,11 @@ comments: true [class]{}-[func]{subsetSumINaive} ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + 向以上代码输入数组 $[3, 4, 5]$ 和目标元素 $9$ ,输出结果为 $[3, 3, 3], [4, 5], [5, 4]$ 。**虽然成功找出了所有和为 $9$ 的子集,但其中存在重复的子集 $[4, 5]$ 和 $[5, 4]$** 。 这是因为搜索过程是区分选择顺序的,然而子集不区分选择顺序。如图 13-10 所示,先选 $4$ 后选 $5$ 与先选 $5$ 后选 $4$ 是不同的分支,但对应同一个子集。 @@ -906,6 +911,11 @@ comments: true [class]{}-[func]{subsetSumI} ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + 图 13-12 所示为将数组 $[3, 4, 5]$ 和目标元素 $9$ 输入以上代码后的整体回溯过程。 ![子集和 I 回溯过程](subset_sum_problem.assets/subset_sum_i.png){ class="animation-figure" } @@ -1425,6 +1435,11 @@ comments: true [class]{}-[func]{subsetSumII} ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + 图 13-14 展示了数组 $[4, 4, 5]$ 和目标元素 $9$ 的回溯过程,共包含四种剪枝操作。请你将图示与代码注释相结合,理解整个搜索过程,以及每种剪枝操作是如何工作的。 ![子集和 II 回溯过程](subset_sum_problem.assets/subset_sum_ii.png){ class="animation-figure" } diff --git a/docs/chapter_computational_complexity/iteration_and_recursion.md b/docs/chapter_computational_complexity/iteration_and_recursion.md index fb88b4914..6098f3640 100644 --- a/docs/chapter_computational_complexity/iteration_and_recursion.md +++ b/docs/chapter_computational_complexity/iteration_and_recursion.md @@ -182,9 +182,10 @@ comments: true } ``` -??? pythontutor "分步调试" +??? pythontutor "可视化运行" - + + 全屏观看 > 图 2-1 是该求和函数的流程框图。 @@ -392,9 +393,10 @@ comments: true } ``` -??? pythontutor "分步调试" +??? 
pythontutor "可视化运行" - + + 全屏观看 > **`while` 循环比 `for` 循环的自由度更高**。在 `while` 循环中,我们可以自由地设计条件变量的初始化和更新步骤。 @@ -615,9 +617,10 @@ comments: true } ``` -??? pythontutor "分步调试" +??? pythontutor "可视化运行" - + + 全屏观看 > 总的来说,**`for` 循环的代码更加紧凑,`while` 循环更加灵活**,两者都可以实现迭代结构。选择使用哪一个应该根据特定问题的需求来决定。 @@ -833,9 +836,10 @@ comments: true } ``` -??? pythontutor "分步调试" +??? pythontutor "可视化运行" - + + 全屏观看 > 图 2-2 是该嵌套循环的流程框图。 @@ -1042,9 +1046,10 @@ comments: true } ``` -??? pythontutor "分步调试" +??? pythontutor "可视化运行" - + + 全屏观看 > 图 2-3 展示了该函数的递归过程。 @@ -1242,9 +1247,10 @@ comments: true } ``` -??? pythontutor "分步调试" +??? pythontutor "可视化运行" - + + 全屏观看 > 尾递归的执行过程如图 2-5 所示。对比普通递归和尾递归,两者的求和操作的执行点是不同的。 @@ -1454,9 +1460,10 @@ comments: true } ``` -??? pythontutor "分步调试" +??? pythontutor "可视化运行" - + + 全屏观看 > 观察以上代码,我们在函数内递归调用了两个函数,**这意味着从一个调用产生了两个调用分支**。如图 2-6 所示,这样不断递归调用下去,最终将产生一棵层数为 $n$ 的「递归树 recursion tree」。 @@ -1776,9 +1783,10 @@ comments: true } ``` -??? pythontutor "分步调试" +??? pythontutor "可视化运行" - + + 全屏观看 > 观察以上代码,当递归转化为迭代后,代码变得更加复杂了。尽管迭代和递归在很多情况下可以互相转化,但不一定值得这样做,有以下两点原因。 diff --git a/docs/chapter_computational_complexity/space_complexity.md b/docs/chapter_computational_complexity/space_complexity.md index 942481296..4d169e94b 100755 --- a/docs/chapter_computational_complexity/space_complexity.md +++ b/docs/chapter_computational_complexity/space_complexity.md @@ -1061,6 +1061,11 @@ $$ } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + ### 2.   线性阶 $O(n)$ 线性阶常见于元素数量与 $n$ 成正比的数组、链表、栈、队列等: @@ -1325,6 +1330,11 @@ $$ } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + 如图 2-17 所示,此函数的递归深度为 $n$ ,即同时存在 $n$ 个未返回的 `linear_recur()` 函数,使用 $O(n)$ 大小的栈帧空间: === "Python" @@ -1466,6 +1476,11 @@ $$ } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + ![递归函数产生的线性阶空间复杂度](space_complexity.assets/space_complexity_recursive_linear.png){ class="animation-figure" }

图 2-17   递归函数产生的线性阶空间复杂度

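For contrast with the figure above, an iterative version of the same countdown needs only $O(1)$ space, since no unreturned frames accumulate (an illustrative sketch, not repository code):

```python
def linear_loop(n: int):
    """Iterative counterpart: O(1) space, a single stack frame throughout."""
    for i in range(n, 0, -1):
        print("loop n =", i)
```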
@@ -1686,6 +1701,11 @@ $$ } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + 如图 2-18 所示,该函数的递归深度为 $n$ ,在每个递归函数中都初始化了一个数组,长度分别为 $n$、$n-1$、$\dots$、$2$、$1$ ,平均长度为 $n / 2$ ,因此总体占用 $O(n^2)$ 空间: === "Python" @@ -1845,6 +1865,11 @@ $$ } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + ![递归函数产生的平方阶空间复杂度](space_complexity.assets/space_complexity_recursive_quadratic.png){ class="animation-figure" }

图 2-18   递归函数产生的平方阶空间复杂度

@@ -2018,6 +2043,11 @@ $$ } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + ![满二叉树产生的指数阶空间复杂度](space_complexity.assets/space_complexity_exponential.png){ class="animation-figure" }

图 2-19   满二叉树产生的指数阶空间复杂度

diff --git a/docs/chapter_computational_complexity/time_complexity.md b/docs/chapter_computational_complexity/time_complexity.md index c7e1d14d7..4dd371666 100755 --- a/docs/chapter_computational_complexity/time_complexity.md +++ b/docs/chapter_computational_complexity/time_complexity.md @@ -1123,6 +1123,11 @@ $$ } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + ### 2.   线性阶 $O(n)$ 线性阶的操作数量相对于输入数据大小 $n$ 以线性级别增长。线性阶通常出现在单层循环中: @@ -1275,6 +1280,11 @@ $$ } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + 遍历数组和遍历链表等操作的时间复杂度均为 $O(n)$ ,其中 $n$ 为数组或链表的长度: === "Python" @@ -1443,6 +1453,11 @@ $$ } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + 值得注意的是,**输入数据大小 $n$ 需根据输入数据的类型来具体确定**。比如在第一个示例中,变量 $n$ 为输入数据大小;在第二个示例中,数组长度 $n$ 为数据大小。 ### 3.   平方阶 $O(n^2)$ @@ -1640,6 +1655,11 @@ $$ } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + 图 2-10 对比了常数阶、线性阶和平方阶三种时间复杂度。 ![常数阶、线性阶和平方阶的时间复杂度](time_complexity.assets/time_complexity_constant_linear_quadratic.png){ class="animation-figure" } @@ -1920,6 +1940,11 @@ $$ } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + ### 4.   指数阶 $O(2^n)$ 生物学的“细胞分裂”是指数阶增长的典型例子:初始状态为 $1$ 个细胞,分裂一轮后变为 $2$ 个,分裂两轮后变为 $4$ 个,以此类推,分裂 $n$ 轮后有 $2^n$ 个细胞。 @@ -2148,6 +2173,11 @@ $$ } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + ![指数阶的时间复杂度](time_complexity.assets/time_complexity_exponential.png){ class="animation-figure" }

图 2-11   指数阶的时间复杂度

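As this chapter also notes, exponential order often arises in recursive functions, with one call spawning two. A sketch with an assumed name:

```python
def exp_recur(n: int) -> int:
    """Exponential order, recursive form: each call branches into two."""
    if n == 1:
        return 1
    return exp_recur(n - 1) + exp_recur(n - 1) + 1
```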
@@ -2283,6 +2313,11 @@ $$ } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + 指数阶增长非常迅速,在穷举法(暴力搜索、回溯等)中比较常见。对于数据规模较大的问题,指数阶是不可接受的,通常需要使用动态规划或贪心算法等来解决。 ### 5.   对数阶 $O(\log n)$ @@ -2460,6 +2495,11 @@ $$ } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + ![对数阶的时间复杂度](time_complexity.assets/time_complexity_logarithmic.png){ class="animation-figure" }

图 2-12   对数阶的时间复杂度

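A recursive counterpart of halving yields the same $O(\log n)$ count, with each call halving $n$ (the name `log_recur` is assumed; an illustrative sketch):

```python
def log_recur(n: float) -> int:
    """Logarithmic order, recursive form: halve n on every call."""
    if n <= 1:
        return 0
    return log_recur(n / 2) + 1
```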
@@ -2595,6 +2635,11 @@ $$ } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + 对数阶常出现于基于分治策略的算法中,体现了“一分为多”和“化繁为简”的算法思想。它增长缓慢,是仅次于常数阶的理想的时间复杂度。 !!! tip "$O(\log n)$ 的底数是多少?" @@ -2788,6 +2833,11 @@ $$ } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + 图 2-13 展示了线性对数阶的生成方式。二叉树的每一层的操作总数都为 $n$ ,树共有 $\log_2 n + 1$ 层,因此时间复杂度为 $O(n \log n)$ 。 ![线性对数阶的时间复杂度](time_complexity.assets/time_complexity_logarithmic_linear.png){ class="animation-figure" } @@ -2994,6 +3044,11 @@ $$ } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + ![阶乘阶的时间复杂度](time_complexity.assets/time_complexity_factorial.png){ class="animation-figure" }

图 2-14   阶乘阶的时间复杂度

@@ -3356,6 +3411,11 @@ $$ } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + 值得说明的是,我们在实际中很少使用最佳时间复杂度,因为通常只有在很小概率下才能达到,可能会带来一定的误导性。**而最差时间复杂度更为实用,因为它给出了一个效率安全值**,让我们可以放心地使用算法。 从上述示例可以看出,最差时间复杂度和最佳时间复杂度只出现于“特殊的数据分布”,这些情况的出现概率可能很小,并不能真实地反映算法运行效率。相比之下,**平均时间复杂度可以体现算法在随机输入数据下的运行效率**,用 $\Theta$ 记号来表示。 diff --git a/docs/chapter_data_structure/basic_data_types.md b/docs/chapter_data_structure/basic_data_types.md index 0e678ffa1..fa5c0a1a6 100644 --- a/docs/chapter_data_structure/basic_data_types.md +++ b/docs/chapter_data_structure/basic_data_types.md @@ -166,3 +166,7 @@ comments: true ```zig title="" ``` + +??? pythontutor "可视化运行" + + diff --git a/docs/chapter_hashing/hash_algorithm.md b/docs/chapter_hashing/hash_algorithm.md index fe8052d71..fad5185b1 100644 --- a/docs/chapter_hashing/hash_algorithm.md +++ b/docs/chapter_hashing/hash_algorithm.md @@ -566,6 +566,11 @@ index = hash(key) % capacity [class]{}-[func]{rotHash} ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + 观察发现,每种哈希算法的最后一步都是对大质数 $1000000007$ 取模,以确保哈希值在合适的范围内。值得思考的是,为什么要强调对质数取模,或者说对合数取模的弊端是什么?这是一个有趣的问题。 先抛出结论:**使用大质数作为模数,可以最大化地保证哈希值的均匀分布**。因为质数不与其他数字存在公约数,可以减少因取模操作而产生的周期性模式,从而避免哈希冲突。 @@ -870,6 +875,10 @@ $$ ``` +??? pythontutor "可视化运行" + + + 在许多编程语言中,**只有不可变对象才可作为哈希表的 `key`** 。假如我们将列表(动态数组)作为 `key` ,当列表的内容发生变化时,它的哈希值也随之改变,我们就无法在哈希表中查询到原先的 `value` 了。 虽然自定义对象(比如链表节点)的成员变量是可变的,但它是可哈希的。**这是因为对象的哈希值通常是基于内存地址生成的**,即使对象的内容发生了变化,但它的内存地址不变,哈希值仍然是不变的。 diff --git a/docs/chapter_hashing/hash_collision.md b/docs/chapter_hashing/hash_collision.md index 82d94fb4c..e7693d7bb 100644 --- a/docs/chapter_hashing/hash_collision.md +++ b/docs/chapter_hashing/hash_collision.md @@ -1316,6 +1316,11 @@ comments: true [class]{HashMapChaining}-[func]{} ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + 值得注意的是,当链表很长时,查询效率 $O(n)$ 很差。**此时可以将链表转换为“AVL 树”或“红黑树”**,从而将查询操作的时间复杂度优化至 $O(\log n)$ 。 ## 6.2.2   开放寻址 diff --git a/docs/chapter_hashing/hash_map.md b/docs/chapter_hashing/hash_map.md index 048487c78..b401387ff 100755 --- a/docs/chapter_hashing/hash_map.md +++ b/docs/chapter_hashing/hash_map.md @@ -283,6 +283,10 @@ comments: true ``` +??? pythontutor "可视化运行" + + + 哈希表有三种常用的遍历方式:遍历键值对、遍历键和遍历值。示例代码如下: === "Python" @@ -474,6 +478,10 @@ comments: true ``` +??? pythontutor "可视化运行" + + + ## 6.1.2   哈希表简单实现 我们先考虑最简单的情况,**仅用一个数组来实现哈希表**。在哈希表中,我们将数组中的每个空位称为「桶 bucket」,每个桶可存储一个键值对。因此,查询操作就是找到 `key` 对应的桶,并在桶中获取 `value` 。 @@ -1633,6 +1641,11 @@ index = hash(key) % capacity } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + ## 6.1.3   哈希冲突与扩容 从本质上看,哈希函数的作用是将所有 `key` 构成的输入空间映射到数组所有索引构成的输出空间,而输入空间往往远大于输出空间。因此,**理论上一定存在“多个输入对应相同输出”的情况**。 diff --git a/docs/chapter_stack_and_queue/deque.md b/docs/chapter_stack_and_queue/deque.md index dc2248c4d..dd0677e86 100644 --- a/docs/chapter_stack_and_queue/deque.md +++ b/docs/chapter_stack_and_queue/deque.md @@ -354,6 +354,10 @@ comments: true ``` +??? pythontutor "可视化运行" + + + ## 5.3.2   双向队列实现 * 双向队列的实现与队列类似,可以选择链表或数组作为底层数据结构。 diff --git a/docs/chapter_stack_and_queue/queue.md b/docs/chapter_stack_and_queue/queue.md index 88f8696d1..0cd9ab0c5 100755 --- a/docs/chapter_stack_and_queue/queue.md +++ b/docs/chapter_stack_and_queue/queue.md @@ -318,6 +318,10 @@ comments: true ``` +??? pythontutor "可视化运行" + + + ## 5.2.2   队列实现 为了实现队列,我们需要一种数据结构,可以在一端添加元素,并在另一端删除元素,链表和数组都符合要求。 @@ -1207,6 +1211,11 @@ comments: true } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + ### 2.   基于数组的实现 在数组中删除首元素的时间复杂度为 $O(n)$ ,这会导致出队操作效率较低。然而,我们可以采用以下巧妙方法来避免这个问题。 @@ -2118,6 +2127,11 @@ comments: true } ``` +??? 
pythontutor "可视化运行" + + + 全屏观看 > + 以上实现的队列仍然具有局限性:其长度不可变。然而,这个问题不难解决,我们可以将数组替换为动态数组,从而引入扩容机制。有兴趣的读者可以尝试自行实现。 两种实现的对比结论与栈一致,在此不再赘述。 diff --git a/docs/chapter_stack_and_queue/stack.md b/docs/chapter_stack_and_queue/stack.md index dd7d92370..61f788648 100755 --- a/docs/chapter_stack_and_queue/stack.md +++ b/docs/chapter_stack_and_queue/stack.md @@ -312,6 +312,10 @@ comments: true ``` +??? pythontutor "可视化运行" + + + ## 5.1.2   栈的实现 为了深入了解栈的运行机制,我们来尝试自己实现一个栈类。 @@ -1079,6 +1083,11 @@ comments: true } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + ### 2.   基于数组的实现 使用数组实现栈时,我们可以将数组的尾部作为栈顶。如图 5-3 所示,入栈与出栈操作分别对应在数组尾部添加元素与删除元素,时间复杂度都为 $O(1)$ 。 @@ -1685,6 +1694,11 @@ comments: true } ``` +??? pythontutor "可视化运行" + + + 全屏观看 > + ## 5.1.3   两种实现对比 **支持操作** diff --git a/docs/chapter_tree/array_representation_of_tree.md b/docs/chapter_tree/array_representation_of_tree.md index 52ce9275a..7606e931f 100644 --- a/docs/chapter_tree/array_representation_of_tree.md +++ b/docs/chapter_tree/array_representation_of_tree.md @@ -154,7 +154,7 @@ comments: true self._tree = list(arr) def size(self): - """节点数量""" + """数组长度""" return len(self._tree) def val(self, i: int) -> int: @@ -231,7 +231,7 @@ comments: true tree = arr; } - /* 节点数量 */ + /* 数组长度 */ int size() { return tree.size(); } @@ -326,7 +326,7 @@ comments: true tree = new ArrayList<>(arr); } - /* 节点数量 */ + /* 数组长度 */ public int size() { return tree.size(); } @@ -413,7 +413,7 @@ comments: true class ArrayBinaryTree(List arr) { List tree = new(arr); - /* 节点数量 */ + /* 数组长度 */ public int Size() { return tree.Count; } @@ -508,7 +508,7 @@ comments: true } } - /* 节点数量 */ + /* 数组长度 */ func (abt *arrayBinaryTree) size() int { return len(abt.tree) } @@ -605,7 +605,7 @@ comments: true tree = arr } - /* 节点数量 */ + /* 数组长度 */ func size() -> Int { tree.count } @@ -703,7 +703,7 @@ comments: true this.#tree = arr; } - /* 节点数量 */ + /* 数组长度 */ size() { return this.#tree.length; } @@ -789,7 +789,7 @@ comments: true this.#tree = arr; } - /* 节点数量 */ + /* 数组长度 */ size(): number { return this.#tree.length; } @@ -873,7 +873,7 @@ comments: true /* 构造方法 */ ArrayBinaryTree(this._tree); - /* 节点数量 */ + /* 数组长度 */ int size() { return _tree.length; } @@ -972,7 +972,7 @@ comments: true Self { tree: arr } } - /* 节点数量 */ + /* 数组长度 */ fn size(&self) -> i32 { self.tree.len() as i32 } @@ -1083,7 +1083,7 @@ comments: true free(abt); } - /* 节点数量 */ + /* 数组长度 */ int size(ArrayBinaryTree *abt) { return abt->size; } diff --git a/overrides/stylesheets/extra.css b/overrides/stylesheets/extra.css index c87bb75f8..dc910e534 100644 --- a/overrides/stylesheets/extra.css +++ b/overrides/stylesheets/extra.css @@ -182,6 +182,8 @@ body { .md-typeset .admonition.pythontutor, .md-typeset details.pythontutor { border-color: var(--md-default-fg-color--lightest); + margin-top: 0; + margin-bottom: 1.5625em; } .md-typeset .admonition:focus-within,