diff --git a/docs-en/chapter_computational_complexity/iteration_and_recursion.md b/docs-en/chapter_computational_complexity/iteration_and_recursion.md index 0294c9c32..a06a7dcd6 100644 --- a/docs-en/chapter_computational_complexity/iteration_and_recursion.md +++ b/docs-en/chapter_computational_complexity/iteration_and_recursion.md @@ -4,17 +4,17 @@ comments: true # 2.2   Iteration and Recursion -In algorithms, repeatedly performing a task is common and closely related to complexity analysis. Therefore, before introducing time complexity and space complexity, let's first understand how to implement task repetition in programs, focusing on two basic programming control structures: iteration and recursion. +In algorithms, the repeated execution of a task is quite common and is closely related to the analysis of complexity. Therefore, before delving into the concepts of time complexity and space complexity, let's first explore how to implement repetitive tasks in programming. This involves understanding two fundamental programming control structures: iteration and recursion. ## 2.2.1   Iteration -"Iteration" is a control structure for repeatedly performing a task. In iteration, a program repeats a block of code as long as a certain condition is met, until this condition is no longer satisfied. +"Iteration" is a control structure for repeatedly performing a task. In iteration, a program repeats a block of code as long as a certain condition is met until this condition is no longer satisfied. -### 1.   for Loop +### 1.   For Loops -The `for` loop is one of the most common forms of iteration, **suitable for use when the number of iterations is known in advance**. +The `for` loop is one of the most common forms of iteration, and **it's particularly suitable when the number of iterations is known in advance**. -The following function implements the sum $1 + 2 + \dots + n$ using a `for` loop, with the sum result recorded in the variable `res`. 
Note that in Python, `range(a, b)` corresponds to a "left-closed, right-open" interval, covering $a, a + 1, \dots, b-1$: +The following function uses a `for` loop to perform a summation of $1 + 2 + \dots + n$, with the sum being stored in the variable `res`. It's important to note that in Python, `range(a, b)` creates an interval that is inclusive of `a` but exclusive of `b`, meaning it iterates over the range from $a$ up to $b-1$. === "Python" @@ -193,13 +193,13 @@ The flowchart below represents this sum function.

Figure 2-1   Flowchart of the Sum Function

-The number of operations in this sum function is proportional to the input data size $n$, or in other words, it has a "linear relationship". This is actually what **time complexity describes**. This topic will be detailed in the next section. +The number of operations in this summation function is proportional to the size of the input data $n$, or in other words, it has a "linear relationship." This "linear relationship" is what time complexity describes. This topic will be discussed in more detail in the next section. -### 2.   while Loop +### 2.   While Loops -Similar to the `for` loop, the `while` loop is another method to implement iteration. In a `while` loop, the program checks the condition in each round; if the condition is true, it continues, otherwise, the loop ends. +Similar to `for` loops, `while` loops are another approach for implementing iteration. In a `while` loop, the program checks a condition at the beginning of each iteration; if the condition is true, the execution continues, otherwise, the loop ends. -Below we use a `while` loop to implement the sum $1 + 2 + \dots + n$: +Below we use a `while` loop to implement the sum $1 + 2 + \dots + n$. === "Python" @@ -399,9 +399,9 @@ Below we use a `while` loop to implement the sum $1 + 2 + \dots + n$:
Full Screen >
-**The `while` loop is more flexible than the `for` loop**. In a `while` loop, we can freely design the initialization and update steps of the condition variable. +**A `while` loop provides more flexibility than a `for` loop**, especially since it allows for custom initialization and modification of the condition variable at each step. -For example, in the following code, the condition variable $i$ is updated twice in each round, which would be inconvenient to implement with a `for` loop: +For example, in the following code, the condition variable $i$ is updated twice each round, which would be inconvenient to implement with a `for` loop. === "Python" @@ -849,24 +849,24 @@ The flowchart below represents this nested loop.

Figure 2-2   Flowchart of the Nested Loop

-In this case, the number of operations in the function is proportional to $n^2$, or the algorithm's running time and the input data size $n$ have a "quadratic relationship". +In such cases, the number of operations of the function is proportional to $n^2$, meaning the algorithm's runtime and the size of the input data $n$ have a "quadratic relationship." -We can continue adding nested loops, each nesting is a "dimensional escalation," which will increase the time complexity to "cubic," "quartic," and so on. +We can further increase the complexity by adding more nested loops, each level of nesting effectively "increasing the dimension," which raises the time complexity to "cubic," "quartic," and so on. ## 2.2.2   Recursion -"Recursion" is an algorithmic strategy that solves problems by having a function call itself. It mainly consists of two phases. +"Recursion" is an algorithmic strategy where a function solves a problem by calling itself. It primarily involves two phases: -1. **Recursion**: The program continuously calls itself, usually with smaller or more simplified parameters, until reaching a "termination condition." -2. **Return**: Upon triggering the "termination condition," the program begins to return from the deepest recursive function, aggregating the results of each layer. +1. **Calling**: This is where the program repeatedly calls itself, often with progressively smaller or simpler arguments, moving towards the "termination condition." +2. **Returning**: Upon triggering the "termination condition," the program begins to return from the deepest recursive function, aggregating the results of each layer. From an implementation perspective, recursive code mainly includes three elements. -1. **Termination Condition**: Determines when to switch from "recursion" to "return." -2. **Recursive Call**: Corresponds to "recursion," where the function calls itself, usually with smaller or more simplified parameters. -3. 
**Return Result**: Corresponds to "return," where the result of the current recursion level is returned to the previous layer. +1. **Termination Condition**: Determines when to switch from "calling" to "returning." +2. **Recursive Call**: Corresponds to "calling," where the function calls itself, usually with smaller or more simplified parameters. +3. **Return Result**: Corresponds to "returning," where the result of the current recursion level is returned to the previous layer. -Observe the following code, where calling the function `recur(n)` completes the computation of $1 + 2 + \dots + n$: +Observe the following code, where simply calling the function `recur(n)` can compute the sum of $1 + 2 + \dots + n$: === "Python" @@ -1059,22 +1059,22 @@ The Figure 2-3 shows the recursive process of this function.

Figure 2-3   Recursive Process of the Sum Function

-Although iteration and recursion can achieve the same results from a computational standpoint, **they represent two entirely different paradigms of thinking and solving problems**. +Although iteration and recursion can achieve the same results from a computational standpoint, **they represent two entirely different paradigms of thinking and problem-solving**. -- **Iteration**: Solves problems "from the bottom up." It starts with the most basic steps, then repeatedly adds or accumulates these steps until the task is complete. -- **Recursion**: Solves problems "from the top down." It breaks down the original problem into smaller sub-problems, each of which has the same form as the original problem. These sub-problems are then further decomposed into even smaller sub-problems, stopping at the base case (whose solution is known). +- **Iteration**: Solves problems "from the bottom up." It starts with the most basic steps, and then repeatedly adds or accumulates these steps until the task is complete. +- **Recursion**: Solves problems "from the top down." It breaks down the original problem into smaller sub-problems, each of which has the same form as the original problem. These sub-problems are then further decomposed into even smaller sub-problems, stopping at the base case whose solution is known. -Taking the sum function as an example, let's define the problem as $f(n) = 1 + 2 + \dots + n$. +Let's take the earlier example of the summation function, defined as $f(n) = 1 + 2 + \dots + n$. -- **Iteration**: In a loop, simulate the summing process, iterating from $1$ to $n$, performing the sum operation in each round, to obtain $f(n)$. -- **Recursion**: Break down the problem into sub-problems $f(n) = n + f(n-1)$, continuously (recursively) decomposing until reaching the base case $f(1) = 1$ and then stopping. +- **Iteration**: In this approach, we simulate the summation process within a loop. 
Starting from $1$ and traversing to $n$, we perform the summation operation in each iteration to eventually compute $f(n)$. +- **Recursion**: Here, the problem is broken down into a sub-problem: $f(n) = n + f(n-1)$. This decomposition continues recursively until reaching the base case, $f(1) = 1$, at which point the recursion terminates. ### 1.   Call Stack -Each time a recursive function calls itself, the system allocates memory for the newly initiated function to store local variables, call addresses, and other information. This leads to two main consequences. +Every time a recursive function calls itself, the system allocates memory for the newly initiated function to store local variables, the return address, and other relevant information. This leads to two primary outcomes. - The function's context data is stored in a memory area called "stack frame space" and is only released after the function returns. Therefore, **recursion generally consumes more memory space than iteration**. -- Recursive calls introduce additional overhead. **Hence, recursion is usually less time-efficient than loops**. +- Recursive calls introduce additional overhead. **Hence, recursion is usually less time-efficient than loops.** As shown in the Figure 2-4 , there are $n$ unreturned recursive functions before triggering the termination condition, indicating a **recursion depth of $n$**. @@ -1086,10 +1086,10 @@ In practice, the depth of recursion allowed by programming languages is usually ### 2.   Tail Recursion -Interestingly, **if a function makes its recursive call as the last step before returning**, it can be optimized by compilers or interpreters to be as space-efficient as iteration. This scenario is known as "tail recursion". +Interestingly, **if a function performs its recursive call as the very last step before returning,** it can be optimized by the compiler or interpreter to be as space-efficient as iteration. This scenario is known as "tail recursion." 
-- **Regular Recursion**: The function needs to perform more code after returning to the previous level, so the system needs to save the context of the previous call. -- **Tail Recursion**: The recursive call is the last operation before the function returns, meaning no further actions are required upon returning to the previous level, so the system doesn't need to save the context of the previous level's function. +- **Regular Recursion**: In standard recursion, when the function returns to the previous level, it continues to execute more code, requiring the system to save the context of the previous call. +- **Tail Recursion**: Here, the recursive call is the final operation before the function returns. This means that upon returning to the previous level, no further actions are needed, so the system does not need to save the context of the previous level. For example, in calculating $1 + 2 + \dots + n$, we can make the result variable `res` a parameter of the function, thereby achieving tail recursion: @@ -1256,8 +1256,8 @@ For example, in calculating $1 + 2 + \dots + n$, we can make the result variable The execution process of tail recursion is shown in the following figure. Comparing regular recursion and tail recursion, the point of the summation operation is different. -- **Regular Recursion**: The summation operation occurs during the "return" phase, requiring another summation after each layer returns. -- **Tail Recursion**: The summation operation occurs during the "recursion" phase, and the "return" phase only involves returning through each layer. +- **Regular Recursion**: The summation operation occurs during the "returning" phase, requiring another summation after each layer returns. +- **Tail Recursion**: The summation operation occurs during the "calling" phase, and the "returning" phase only involves returning through each layer. 
![Tail Recursion Process](iteration_and_recursion.assets/tail_recursion_sum.png){ class="animation-figure" } @@ -1499,12 +1499,12 @@ Summarizing the above content, the following table shows the differences between If you find the following content difficult to understand, consider revisiting it after reading the "Stack" chapter. -So, what is the intrinsic connection between iteration and recursion? Taking the above recursive function as an example, the summation operation occurs during the recursion's "return" phase. This means that the initially called function is actually the last to complete its summation operation, **mirroring the "last in, first out" principle of a stack**. +So, what is the intrinsic connection between iteration and recursion? Taking the above recursive function as an example, the summation operation occurs during the recursion's "return" phase. This means that the initially called function is the last to complete its summation operation, **mirroring the "last in, first out" principle of a stack**. -In fact, recursive terms like "call stack" and "stack frame space" hint at the close relationship between recursion and stacks. +Recursive terms like "call stack" and "stack frame space" hint at the close relationship between recursion and stacks. -1. **Recursion**: When a function is called, the system allocates a new stack frame on the "call stack" for that function, storing local variables, parameters, return addresses, and other data. -2. **Return**: When a function completes execution and returns, the corresponding stack frame is removed from the "call stack," restoring the execution environment of the previous function. +1. **Calling**: When a function is called, the system allocates a new stack frame on the "call stack" for that function, storing local variables, parameters, return addresses, and other data. +2. 
**Returning**: When a function completes execution and returns, the corresponding stack frame is removed from the "call stack," restoring the execution environment of the previous function. Therefore, **we can use an explicit stack to simulate the behavior of the call stack**, thus transforming recursion into an iterative form: @@ -1792,7 +1792,7 @@ Therefore, **we can use an explicit stack to simulate the behavior of the call s Observing the above code, when recursion is transformed into iteration, the code becomes more complex. Although iteration and recursion can often be transformed into each other, it's not always advisable to do so for two reasons: -- The transformed code may become harder to understand and less readable. +- The transformed code may become more challenging to understand and less readable. - For some complex problems, simulating the behavior of the system's call stack can be quite challenging. -In summary, **choosing between iteration and recursion depends on the nature of the specific problem**. In programming practice, weighing the pros and cons of each and choosing the appropriate method for the situation is essential. +In conclusion, **whether to choose iteration or recursion depends on the specific nature of the problem**. In programming practice, it's crucial to weigh the pros and cons of both and choose the most suitable approach for the situation at hand. 
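Reviewer note on the iteration_and_recursion.md changes above: the three ideas the rewritten prose distinguishes — regular recursion (summing in the "returning" phase), tail recursion (summing in the "calling" phase), and simulating the call stack with an explicit stack — can be sketched side by side. This is a minimal illustrative sketch, not code from the book; function names are invented here, and note that CPython does not actually perform tail-call optimization, so the tail-recursive form saves no stack space in practice.

```python
def sum_recur(n: int) -> int:
    """Regular recursion: the addition happens in the 'returning' phase."""
    if n == 1:
        return 1
    return n + sum_recur(n - 1)

def sum_tail(n: int, res: int = 0) -> int:
    """Tail recursion: the addition happens in the 'calling' phase, so the
    recursive call is the last operation before the function returns."""
    if n == 0:
        return res
    return sum_tail(n - 1, res + n)

def sum_with_stack(n: int) -> int:
    """Iteration with an explicit stack that mimics the call stack."""
    stack = []
    res = 0
    for i in range(n, 0, -1):
        stack.append(i)    # 'calling' phase: push one frame per call
    while stack:
        res += stack.pop()  # 'returning' phase: pop frames and accumulate
    return res
```

All three compute $f(n) = 1 + 2 + \dots + n$; the difference is only in where the work happens, which is exactly the distinction the edited prose draws.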
diff --git a/docs-en/chapter_computational_complexity/time_complexity.md b/docs-en/chapter_computational_complexity/time_complexity.md index f281fa99e..c97d826a7 100644 --- a/docs-en/chapter_computational_complexity/time_complexity.md +++ b/docs-en/chapter_computational_complexity/time_complexity.md @@ -2326,7 +2326,7 @@ The following image and code simulate the "halving each round" process, with a t === "Python" ```python title="time_complexity.py" - def logarithmic(n: float) -> int: + def logarithmic(n: int) -> int: """对数阶(循环实现)""" count = 0 while n > 1: @@ -2339,7 +2339,7 @@ The following image and code simulate the "halving each round" process, with a t ```cpp title="time_complexity.cpp" /* 对数阶(循环实现) */ - int logarithmic(float n) { + int logarithmic(int n) { int count = 0; while (n > 1) { n = n / 2; @@ -2353,7 +2353,7 @@ The following image and code simulate the "halving each round" process, with a t ```java title="time_complexity.java" /* 对数阶(循环实现) */ - int logarithmic(float n) { + int logarithmic(int n) { int count = 0; while (n > 1) { n = n / 2; @@ -2367,7 +2367,7 @@ The following image and code simulate the "halving each round" process, with a t ```csharp title="time_complexity.cs" /* 对数阶(循环实现) */ - int Logarithmic(float n) { + int Logarithmic(int n) { int count = 0; while (n > 1) { n /= 2; @@ -2381,7 +2381,7 @@ The following image and code simulate the "halving each round" process, with a t ```go title="time_complexity.go" /* 对数阶(循环实现)*/ - func logarithmic(n float64) int { + func logarithmic(n int) int { count := 0 for n > 1 { n = n / 2 @@ -2395,7 +2395,7 @@ The following image and code simulate the "halving each round" process, with a t ```swift title="time_complexity.swift" /* 对数阶(循环实现) */ - func logarithmic(n: Double) -> Int { + func logarithmic(n: Int) -> Int { var count = 0 var n = n while n > 1 { @@ -2438,10 +2438,10 @@ The following image and code simulate the "halving each round" process, with a t ```dart title="time_complexity.dart" /* 对数阶(循环实现) 
*/ - int logarithmic(num n) { + int logarithmic(int n) { int count = 0; while (n > 1) { - n = n / 2; + n = n ~/ 2; count++; } return count; @@ -2452,10 +2452,10 @@ The following image and code simulate the "halving each round" process, with a t ```rust title="time_complexity.rs" /* 对数阶(循环实现) */ - fn logarithmic(mut n: f32) -> i32 { + fn logarithmic(mut n: i32) -> i32 { let mut count = 0; - while n > 1.0 { - n = n / 2.0; + while n > 1 { + n = n / 2; count += 1; } count @@ -2466,7 +2466,7 @@ The following image and code simulate the "halving each round" process, with a t ```c title="time_complexity.c" /* 对数阶(循环实现) */ - int logarithmic(float n) { + int logarithmic(int n) { int count = 0; while (n > 1) { n = n / 2; @@ -2480,7 +2480,7 @@ The following image and code simulate the "halving each round" process, with a t ```zig title="time_complexity.zig" // 对数阶(循环实现) - fn logarithmic(n: f32) i32 { + fn logarithmic(n: i32) i32 { var count: i32 = 0; var n_var = n; while (n_var > 1) @@ -2494,8 +2494,8 @@ The following image and code simulate the "halving each round" process, with a t ??? pythontutor "Code Visualization" -
-
Full Screen >
+
+
Full Screen >
![Logarithmic Order Time Complexity](time_complexity.assets/time_complexity_logarithmic.png){ class="animation-figure" } @@ -2506,7 +2506,7 @@ Like exponential order, logarithmic order also frequently appears in recursive f === "Python" ```python title="time_complexity.py" - def log_recur(n: float) -> int: + def log_recur(n: int) -> int: """对数阶(递归实现)""" if n <= 1: return 0 @@ -2517,7 +2517,7 @@ Like exponential order, logarithmic order also frequently appears in recursive f ```cpp title="time_complexity.cpp" /* 对数阶(递归实现) */ - int logRecur(float n) { + int logRecur(int n) { if (n <= 1) return 0; return logRecur(n / 2) + 1; @@ -2528,7 +2528,7 @@ Like exponential order, logarithmic order also frequently appears in recursive f ```java title="time_complexity.java" /* 对数阶(递归实现) */ - int logRecur(float n) { + int logRecur(int n) { if (n <= 1) return 0; return logRecur(n / 2) + 1; @@ -2539,7 +2539,7 @@ Like exponential order, logarithmic order also frequently appears in recursive f ```csharp title="time_complexity.cs" /* 对数阶(递归实现) */ - int LogRecur(float n) { + int LogRecur(int n) { if (n <= 1) return 0; return LogRecur(n / 2) + 1; } @@ -2549,7 +2549,7 @@ Like exponential order, logarithmic order also frequently appears in recursive f ```go title="time_complexity.go" /* 对数阶(递归实现)*/ - func logRecur(n float64) int { + func logRecur(n int) int { if n <= 1 { return 0 } @@ -2561,7 +2561,7 @@ Like exponential order, logarithmic order also frequently appears in recursive f ```swift title="time_complexity.swift" /* 对数阶(递归实现) */ - func logRecur(n: Double) -> Int { + func logRecur(n: Int) -> Int { if n <= 1 { return 0 } @@ -2593,9 +2593,9 @@ Like exponential order, logarithmic order also frequently appears in recursive f ```dart title="time_complexity.dart" /* 对数阶(递归实现) */ - int logRecur(num n) { + int logRecur(int n) { if (n <= 1) return 0; - return logRecur(n / 2) + 1; + return logRecur(n ~/ 2) + 1; } ``` @@ -2603,11 +2603,11 @@ Like exponential order, logarithmic order also 
frequently appears in recursive f ```rust title="time_complexity.rs" /* 对数阶(递归实现) */ - fn log_recur(n: f32) -> i32 { - if n <= 1.0 { + fn log_recur(n: i32) -> i32 { + if n <= 1 { return 0; } - log_recur(n / 2.0) + 1 + log_recur(n / 2) + 1 } ``` @@ -2615,7 +2615,7 @@ Like exponential order, logarithmic order also frequently appears in recursive f ```c title="time_complexity.c" /* 对数阶(递归实现) */ - int logRecur(float n) { + int logRecur(int n) { if (n <= 1) return 0; return logRecur(n / 2) + 1; @@ -2626,7 +2626,7 @@ Like exponential order, logarithmic order also frequently appears in recursive f ```zig title="time_complexity.zig" // 对数阶(递归实现) - fn logRecur(n: f32) i32 { + fn logRecur(n: i32) i32 { if (n <= 1) return 0; return logRecur(n / 2) + 1; } @@ -2634,8 +2634,8 @@ Like exponential order, logarithmic order also frequently appears in recursive f ??? pythontutor "Code Visualization" -
-
Full Screen >
+
+
Full Screen >
Logarithmic order is typical in algorithms based on the divide-and-conquer strategy, embodying the "split into many" and "simplify complex problems" approach. It's slow-growing and is the most ideal time complexity after constant order. @@ -2656,7 +2656,7 @@ Linear-logarithmic order often appears in nested loops, with the complexities of === "Python" ```python title="time_complexity.py" - def linear_log_recur(n: float) -> int: + def linear_log_recur(n: int) -> int: """线性对数阶""" if n <= 1: return 1 @@ -2670,7 +2670,7 @@ Linear-logarithmic order often appears in nested loops, with the complexities of ```cpp title="time_complexity.cpp" /* 线性对数阶 */ - int linearLogRecur(float n) { + int linearLogRecur(int n) { if (n <= 1) return 1; int count = linearLogRecur(n / 2) + linearLogRecur(n / 2); @@ -2685,7 +2685,7 @@ Linear-logarithmic order often appears in nested loops, with the complexities of ```java title="time_complexity.java" /* 线性对数阶 */ - int linearLogRecur(float n) { + int linearLogRecur(int n) { if (n <= 1) return 1; int count = linearLogRecur(n / 2) + linearLogRecur(n / 2); @@ -2700,7 +2700,7 @@ Linear-logarithmic order often appears in nested loops, with the complexities of ```csharp title="time_complexity.cs" /* 线性对数阶 */ - int LinearLogRecur(float n) { + int LinearLogRecur(int n) { if (n <= 1) return 1; int count = LinearLogRecur(n / 2) + LinearLogRecur(n / 2); for (int i = 0; i < n; i++) { @@ -2714,12 +2714,12 @@ Linear-logarithmic order often appears in nested loops, with the complexities of ```go title="time_complexity.go" /* 线性对数阶 */ - func linearLogRecur(n float64) int { + func linearLogRecur(n int) int { if n <= 1 { return 1 } count := linearLogRecur(n/2) + linearLogRecur(n/2) - for i := 0.0; i < n; i++ { + for i := 0; i < n; i++ { count++ } return count @@ -2730,7 +2730,7 @@ Linear-logarithmic order often appears in nested loops, with the complexities of ```swift title="time_complexity.swift" /* 线性对数阶 */ - func linearLogRecur(n: Double) -> Int { + func 
linearLogRecur(n: Int) -> Int { if n <= 1 { return 1 } @@ -2774,9 +2774,9 @@ Linear-logarithmic order often appears in nested loops, with the complexities of ```dart title="time_complexity.dart" /* 线性对数阶 */ - int linearLogRecur(num n) { + int linearLogRecur(int n) { if (n <= 1) return 1; - int count = linearLogRecur(n / 2) + linearLogRecur(n / 2); + int count = linearLogRecur(n ~/ 2) + linearLogRecur(n ~/ 2); for (var i = 0; i < n; i++) { count++; } @@ -2788,11 +2788,11 @@ Linear-logarithmic order often appears in nested loops, with the complexities of ```rust title="time_complexity.rs" /* 线性对数阶 */ - fn linear_log_recur(n: f32) -> i32 { - if n <= 1.0 { + fn linear_log_recur(n: i32) -> i32 { + if n <= 1 { return 1; } - let mut count = linear_log_recur(n / 2.0) + linear_log_recur(n / 2.0); + let mut count = linear_log_recur(n / 2) + linear_log_recur(n / 2); for _ in 0..n as i32 { count += 1; } @@ -2804,7 +2804,7 @@ Linear-logarithmic order often appears in nested loops, with the complexities of ```c title="time_complexity.c" /* 线性对数阶 */ - int linearLogRecur(float n) { + int linearLogRecur(int n) { if (n <= 1) return 1; int count = linearLogRecur(n / 2) + linearLogRecur(n / 2); @@ -2819,10 +2819,10 @@ Linear-logarithmic order often appears in nested loops, with the complexities of ```zig title="time_complexity.zig" // 线性对数阶 - fn linearLogRecur(n: f32) i32 { + fn linearLogRecur(n: i32) i32 { if (n <= 1) return 1; var count: i32 = linearLogRecur(n / 2) + linearLogRecur(n / 2); - var i: f32 = 0; + var i: i32 = 0; while (i < n) : (i += 1) { count += 1; } @@ -2832,8 +2832,8 @@ Linear-logarithmic order often appears in nested loops, with the complexities of ??? pythontutor "Code Visualization" -
-
Full Screen >
+
+
Full Screen >
The image below demonstrates how linear-logarithmic order is generated. Each level of a binary tree has $n$ operations, and the tree has $\log_2 n + 1$ levels, resulting in a time complexity of $O(n \log n)$. diff --git a/docs/chapter_appendix/terminology.md b/docs/chapter_appendix/terminology.md index eaa1cfd42..125b9e9c8 100644 --- a/docs/chapter_appendix/terminology.md +++ b/docs/chapter_appendix/terminology.md @@ -18,6 +18,7 @@ comments: true | algorithm | 算法 | 演算法 | | data structure | 数据结构 | 資料結構 | | code | 代码 | 程式碼 | +| file | 文件 | 檔案 | | function | 函数 | 函式 | | method | 方法 | 方法 | | variable | 变量 | 變數 | @@ -81,6 +82,7 @@ comments: true | complete binary tree | 完全二叉树 | 完全二元樹 | | full binary tree | 完满二叉树 | 完滿二元樹 | | balanced binary tree | 平衡二叉树 | 平衡二元樹 | +| binary search tree | 二叉搜索树 | 二元搜尋樹 | | AVL tree | AVL 树 | AVL 樹 | | red-black tree | 红黑树 | 紅黑樹 | | level-order traversal | 层序遍历 | 層序走訪 | diff --git a/docs/chapter_computational_complexity/time_complexity.md b/docs/chapter_computational_complexity/time_complexity.md index 8340f0a71..b416344d1 100755 --- a/docs/chapter_computational_complexity/time_complexity.md +++ b/docs/chapter_computational_complexity/time_complexity.md @@ -2330,7 +2330,7 @@ $$ === "Python" ```python title="time_complexity.py" - def logarithmic(n: float) -> int: + def logarithmic(n: int) -> int: """对数阶(循环实现)""" count = 0 while n > 1: @@ -2343,7 +2343,7 @@ $$ ```cpp title="time_complexity.cpp" /* 对数阶(循环实现) */ - int logarithmic(float n) { + int logarithmic(int n) { int count = 0; while (n > 1) { n = n / 2; @@ -2357,7 +2357,7 @@ $$ ```java title="time_complexity.java" /* 对数阶(循环实现) */ - int logarithmic(float n) { + int logarithmic(int n) { int count = 0; while (n > 1) { n = n / 2; @@ -2371,7 +2371,7 @@ $$ ```csharp title="time_complexity.cs" /* 对数阶(循环实现) */ - int Logarithmic(float n) { + int Logarithmic(int n) { int count = 0; while (n > 1) { n /= 2; @@ -2385,7 +2385,7 @@ $$ ```go title="time_complexity.go" /* 对数阶(循环实现)*/ - func logarithmic(n 
float64) int { + func logarithmic(n int) int { count := 0 for n > 1 { n = n / 2 @@ -2399,7 +2399,7 @@ $$ ```swift title="time_complexity.swift" /* 对数阶(循环实现) */ - func logarithmic(n: Double) -> Int { + func logarithmic(n: Int) -> Int { var count = 0 var n = n while n > 1 { @@ -2442,10 +2442,10 @@ $$ ```dart title="time_complexity.dart" /* 对数阶(循环实现) */ - int logarithmic(num n) { + int logarithmic(int n) { int count = 0; while (n > 1) { - n = n / 2; + n = n ~/ 2; count++; } return count; @@ -2456,10 +2456,10 @@ $$ ```rust title="time_complexity.rs" /* 对数阶(循环实现) */ - fn logarithmic(mut n: f32) -> i32 { + fn logarithmic(mut n: i32) -> i32 { let mut count = 0; - while n > 1.0 { - n = n / 2.0; + while n > 1 { + n = n / 2; count += 1; } count @@ -2470,7 +2470,7 @@ $$ ```c title="time_complexity.c" /* 对数阶(循环实现) */ - int logarithmic(float n) { + int logarithmic(int n) { int count = 0; while (n > 1) { n = n / 2; @@ -2484,7 +2484,7 @@ $$ ```zig title="time_complexity.zig" // 对数阶(循环实现) - fn logarithmic(n: f32) i32 { + fn logarithmic(n: i32) i32 { var count: i32 = 0; var n_var = n; while (n_var > 1) @@ -2498,8 +2498,8 @@ $$ ??? pythontutor "可视化运行" -
-
全屏观看 >
+
+
全屏观看 >
![对数阶的时间复杂度](time_complexity.assets/time_complexity_logarithmic.png){ class="animation-figure" } @@ -2510,7 +2510,7 @@ $$ === "Python" ```python title="time_complexity.py" - def log_recur(n: float) -> int: + def log_recur(n: int) -> int: """对数阶(递归实现)""" if n <= 1: return 0 @@ -2521,7 +2521,7 @@ $$ ```cpp title="time_complexity.cpp" /* 对数阶(递归实现) */ - int logRecur(float n) { + int logRecur(int n) { if (n <= 1) return 0; return logRecur(n / 2) + 1; @@ -2532,7 +2532,7 @@ $$ ```java title="time_complexity.java" /* 对数阶(递归实现) */ - int logRecur(float n) { + int logRecur(int n) { if (n <= 1) return 0; return logRecur(n / 2) + 1; @@ -2543,7 +2543,7 @@ $$ ```csharp title="time_complexity.cs" /* 对数阶(递归实现) */ - int LogRecur(float n) { + int LogRecur(int n) { if (n <= 1) return 0; return LogRecur(n / 2) + 1; } @@ -2553,7 +2553,7 @@ $$ ```go title="time_complexity.go" /* 对数阶(递归实现)*/ - func logRecur(n float64) int { + func logRecur(n int) int { if n <= 1 { return 0 } @@ -2565,7 +2565,7 @@ $$ ```swift title="time_complexity.swift" /* 对数阶(递归实现) */ - func logRecur(n: Double) -> Int { + func logRecur(n: Int) -> Int { if n <= 1 { return 0 } @@ -2597,9 +2597,9 @@ $$ ```dart title="time_complexity.dart" /* 对数阶(递归实现) */ - int logRecur(num n) { + int logRecur(int n) { if (n <= 1) return 0; - return logRecur(n / 2) + 1; + return logRecur(n ~/ 2) + 1; } ``` @@ -2607,11 +2607,11 @@ $$ ```rust title="time_complexity.rs" /* 对数阶(递归实现) */ - fn log_recur(n: f32) -> i32 { - if n <= 1.0 { + fn log_recur(n: i32) -> i32 { + if n <= 1 { return 0; } - log_recur(n / 2.0) + 1 + log_recur(n / 2) + 1 } ``` @@ -2619,7 +2619,7 @@ $$ ```c title="time_complexity.c" /* 对数阶(递归实现) */ - int logRecur(float n) { + int logRecur(int n) { if (n <= 1) return 0; return logRecur(n / 2) + 1; @@ -2630,7 +2630,7 @@ $$ ```zig title="time_complexity.zig" // 对数阶(递归实现) - fn logRecur(n: f32) i32 { + fn logRecur(n: i32) i32 { if (n <= 1) return 0; return logRecur(n / 2) + 1; } @@ -2638,8 +2638,8 @@ $$ ??? pythontutor "可视化运行" -
-
全屏观看 >
+
+
全屏观看 >
对数阶常出现于基于分治策略的算法中,体现了“一分为多”和“化繁为简”的算法思想。它增长缓慢,是仅次于常数阶的理想的时间复杂度。 @@ -2660,7 +2660,7 @@ $$ === "Python" ```python title="time_complexity.py" - def linear_log_recur(n: float) -> int: + def linear_log_recur(n: int) -> int: """线性对数阶""" if n <= 1: return 1 @@ -2674,7 +2674,7 @@ $$ ```cpp title="time_complexity.cpp" /* 线性对数阶 */ - int linearLogRecur(float n) { + int linearLogRecur(int n) { if (n <= 1) return 1; int count = linearLogRecur(n / 2) + linearLogRecur(n / 2); @@ -2689,7 +2689,7 @@ $$ ```java title="time_complexity.java" /* 线性对数阶 */ - int linearLogRecur(float n) { + int linearLogRecur(int n) { if (n <= 1) return 1; int count = linearLogRecur(n / 2) + linearLogRecur(n / 2); @@ -2704,7 +2704,7 @@ $$ ```csharp title="time_complexity.cs" /* 线性对数阶 */ - int LinearLogRecur(float n) { + int LinearLogRecur(int n) { if (n <= 1) return 1; int count = LinearLogRecur(n / 2) + LinearLogRecur(n / 2); for (int i = 0; i < n; i++) { @@ -2718,12 +2718,12 @@ $$ ```go title="time_complexity.go" /* 线性对数阶 */ - func linearLogRecur(n float64) int { + func linearLogRecur(n int) int { if n <= 1 { return 1 } count := linearLogRecur(n/2) + linearLogRecur(n/2) - for i := 0.0; i < n; i++ { + for i := 0; i < n; i++ { count++ } return count @@ -2734,7 +2734,7 @@ $$ ```swift title="time_complexity.swift" /* 线性对数阶 */ - func linearLogRecur(n: Double) -> Int { + func linearLogRecur(n: Int) -> Int { if n <= 1 { return 1 } @@ -2778,9 +2778,9 @@ $$ ```dart title="time_complexity.dart" /* 线性对数阶 */ - int linearLogRecur(num n) { + int linearLogRecur(int n) { if (n <= 1) return 1; - int count = linearLogRecur(n / 2) + linearLogRecur(n / 2); + int count = linearLogRecur(n ~/ 2) + linearLogRecur(n ~/ 2); for (var i = 0; i < n; i++) { count++; } @@ -2792,11 +2792,11 @@ $$ ```rust title="time_complexity.rs" /* 线性对数阶 */ - fn linear_log_recur(n: f32) -> i32 { - if n <= 1.0 { + fn linear_log_recur(n: i32) -> i32 { + if n <= 1 { return 1; } - let mut count = linear_log_recur(n / 2.0) + linear_log_recur(n / 2.0); + 
let mut count = linear_log_recur(n / 2) + linear_log_recur(n / 2); for _ in 0..n as i32 { count += 1; } @@ -2808,7 +2808,7 @@ $$ ```c title="time_complexity.c" /* 线性对数阶 */ - int linearLogRecur(float n) { + int linearLogRecur(int n) { if (n <= 1) return 1; int count = linearLogRecur(n / 2) + linearLogRecur(n / 2); @@ -2823,10 +2823,10 @@ $$ ```zig title="time_complexity.zig" // 线性对数阶 - fn linearLogRecur(n: f32) i32 { + fn linearLogRecur(n: i32) i32 { if (n <= 1) return 1; var count: i32 = linearLogRecur(n / 2) + linearLogRecur(n / 2); - var i: f32 = 0; + var i: i32 = 0; while (i < n) : (i += 1) { count += 1; } @@ -2836,8 +2836,8 @@ $$ ??? pythontutor "可视化运行" -
-
全屏观看 >
+
+
全屏观看 >
图 2-13 展示了线性对数阶的生成方式。二叉树的每一层的操作总数都为 $n$ ,树共有 $\log_2 n + 1$ 层,因此时间复杂度为 $O(n \log n)$ 。 diff --git a/docs/chapter_sorting/quick_sort.md b/docs/chapter_sorting/quick_sort.md index 47f40d230..6da5c2e74 100755 --- a/docs/chapter_sorting/quick_sort.md +++ b/docs/chapter_sorting/quick_sort.md @@ -1037,8 +1037,8 @@ comments: true ??? pythontutor "可视化运行" -
-
全屏观看 >
+
+
全屏观看 >
## 11.5.5   尾递归优化 diff --git a/overrides/partials/comments.html b/overrides/partials/comments.html index c656c84c4..6f02cdf8c 100755 --- a/overrides/partials/comments.html +++ b/overrides/partials/comments.html @@ -2,6 +2,9 @@ {% if config.theme.language == 'zh' %} {% set comm = "欢迎在评论区留下你的见解、问题或建议" %} {% set lang = "zh-CN" %} + {% elif config.theme.language == 'zh-Hant' %} + {% set comm = "歡迎在評論區留下你的見解、問題或建議" %} + {% set lang = "zh-TW" %} {% elif config.theme.language == 'en' %} {% set comm = "Feel free to drop your insights, questions or suggestions" %} {% set lang = "en" %}