Add "reference" for EN version. Bug fixes. (#1326)

pull/1327/head
Yudong Jin 7 months ago committed by GitHub
parent bb511e50e6
commit 354b81cb6c
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194

@ -1,4 +1,4 @@
# Installation
## Install IDE

@ -1,4 +1,4 @@
# Backtracking algorithms
<u>Backtracking algorithm</u> is a method to solve problems by exhaustive search, where the core idea is to start from an initial state and brute force all possible solutions, recording the correct ones until a solution is found or all possible choices are exhausted without finding a solution.
@ -77,7 +77,7 @@ Complex backtracking problems usually involve one or more constraints, **which a
!!! question "Example Three"
In a binary tree, search for all nodes with a value of $7$ and return the paths from the root to these nodes, **requiring that the paths do not contain nodes with a value of $3$**.
To meet the above constraints, **we need to add a pruning operation**: during the search process, if a node with a value of $3$ is encountered, it returns early, discontinuing further search. The code is as shown:
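The pruning described above can be sketched as a preorder traversal that returns early on a node of value $3$; a minimal sketch, assuming a simple `TreeNode` class (names here are illustrative, not the book's source):

```python
class TreeNode:
    """Minimal binary tree node (illustrative)."""
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None

def pre_order(root, path, res):
    """Search for all root-to-node paths ending in value 7, pruning value 3."""
    # Pruning: stop exploring this branch when a node with value 3 is met
    if root is None or root.val == 3:
        return
    # Attempt: record the current node on the path
    path.append(root)
    if root.val == 7:
        res.append(list(path))  # record a copy of the current path
    pre_order(root.left, path, res)
    pre_order(root.right, path, res)
    # Retract: remove the current node when backtracking
    path.pop()
```

A path through a $3$ is never recorded, because the recursion returns before the node is appended.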

@ -1,4 +1,4 @@
# Characteristics of dynamic programming problems
In the previous section, we learned how dynamic programming solves the original problem by decomposing it into subproblems. In fact, subproblem decomposition is a general algorithmic approach, with different emphases in divide and conquer, dynamic programming, and backtracking.

@ -33,7 +33,7 @@ We aim to gradually reduce the problem size during the edit process, which enabl
Thus, each round of decision (edit operation) in string $s$ changes the remaining characters in $s$ and $t$ to be matched. Therefore, the state is the $i$-th and $j$-th characters currently considered in $s$ and $t$, denoted as $[i, j]$.
State $[i, j]$ corresponds to the subproblem: **The minimum number of edits required to change the first $i$ characters of $s$ into the first $j$ characters of $t$**.
From this, we obtain a two-dimensional $dp$ table of size $(i+1) \times (j+1)$.
@ -122,7 +122,7 @@ As shown below, the process of state transition in the edit distance problem is
Since $dp[i, j]$ is derived from the solutions above $dp[i-1, j]$, to the left $dp[i, j-1]$, and to the upper left $dp[i-1, j-1]$, and direct traversal will lose the upper left solution $dp[i-1, j-1]$, and reverse traversal cannot build $dp[i, j-1]$ in advance, therefore, both traversal orders are not feasible.
For this reason, we can use a variable `leftup` to temporarily store the solution from the upper left $dp[i-1, j-1]$, thus only needing to consider the solutions to the left and above. This situation is similar to the unbounded knapsack problem, allowing for direct traversal. The code is as follows:
```src
[file]{edit_distance}-[class]{}-[func]{edit_distance_dp_comp}
```

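The `leftup` technique can be sketched in Python roughly as follows (a sketch under the stated transition rules; the function name mirrors the placeholder above but the details are illustrative, not necessarily the book's implementation):

```python
def edit_distance_dp_comp(s: str, t: str) -> int:
    """Edit distance: space-optimized DP with a single row and `leftup`."""
    n, m = len(s), len(t)
    dp = list(range(m + 1))  # dp[j] = edits to turn "" into t[:j]
    for i in range(1, n + 1):
        leftup = dp[0]  # temporarily store dp[i-1, j-1]
        dp[0] = i       # edits to turn s[:i] into ""
        for j in range(1, m + 1):
            temp = dp[j]  # dp[i-1, j], about to be overwritten
            if s[i - 1] == t[j - 1]:
                # Characters match: no edit needed, inherit dp[i-1, j-1]
                dp[j] = leftup
            else:
                # Min of insert (left), delete (above), replace (upper left)
                dp[j] = min(dp[j - 1], dp[j], leftup) + 1
            leftup = temp  # becomes dp[i-1, j-1] for the next column
    return dp[m]
```

Forward traversal works here because the upper-left dependency is carried along in `leftup`, exactly the situation the text compares to the unbounded knapsack.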
@ -1,4 +1,4 @@
# Introduction to dynamic programming
<u>Dynamic programming</u> is an important algorithmic paradigm that decomposes a problem into a series of smaller subproblems, and stores the solutions of these subproblems to avoid redundant computations, thereby significantly improving time efficiency.

@ -1,6 +1,6 @@
# 0-1 Knapsack problem
The knapsack problem is an excellent introductory problem for dynamic programming and is the most common type of problem in dynamic programming. It has many variants, such as the 0-1 knapsack problem, the unbounded knapsack problem, and the multiple knapsack problem, etc.
In this section, we will first solve the most common 0-1 knapsack problem.

@ -10,14 +10,14 @@
**Knapsack problem**
- The knapsack problem is one of the most typical dynamic programming problems, with variants including the 0-1 knapsack, unbounded knapsack, and multiple knapsacks.
- The state definition of the 0-1 knapsack is the maximum value in a knapsack of capacity $c$ with the first $i$ items. Based on decisions not to include or to include an item in the knapsack, optimal substructures can be identified and state transition equations constructed. In space optimization, since each state depends on the state directly above and to the upper left, the list should be traversed in reverse order to avoid overwriting the upper left state.
- In the unbounded knapsack problem, there is no limit on the number of each kind of item that can be chosen, thus the state transition for including items differs from the 0-1 knapsack. Since the state depends on the state directly above and to the left, space optimization should involve forward traversal.
- The coin change problem is a variant of the unbounded knapsack problem, shifting from seeking the “maximum” value to seeking the “minimum” number of coins, thus the state transition equation should change $\max()$ to $\min()$. From pursuing “not exceeding” the capacity of the knapsack to seeking exactly the target amount, thus use $amt + 1$ to represent the invalid solution of “unable to make up the target amount.”
- Coin Change Problem II shifts from seeking the “minimum number of coins” to seeking the “number of coin combinations,” changing the state transition equation accordingly from $\min()$ to summation operator.
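The reverse-order traversal for the space-optimized 0-1 knapsack summarized above can be sketched as follows (a minimal sketch; the function name is illustrative):

```python
def knapsack_dp_comp(wgt: list[int], val: list[int], cap: int) -> int:
    """0-1 knapsack: space-optimized DP with a single row."""
    n = len(wgt)
    dp = [0] * (cap + 1)  # dp[c] = max value within capacity c
    for i in range(1, n + 1):
        # Reverse traversal: dp[c - wgt[i-1]] still holds the previous
        # round's value, so item i cannot be counted twice
        for c in range(cap, wgt[i - 1] - 1, -1):
            dp[c] = max(dp[c], dp[c - wgt[i - 1]] + val[i - 1])
    return dp[cap]
```

Traversing forward instead would read an already-updated `dp[c - wgt[i-1]]`, silently turning the problem into the unbounded variant.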
**Edit distance problem**
- Edit distance (Levenshtein distance) measures the similarity between two strings, defined as the minimum number of editing steps needed to change one string into another, with editing operations including adding, deleting, or replacing.
- The state definition for the edit distance problem is the minimum number of editing steps needed to change the first $i$ characters of $s$ into the first $j$ characters of $t$. When $s[i] \ne t[j]$, there are three decisions: add, delete, replace, each with their corresponding residual subproblems. From this, optimal substructures can be identified, and state transition equations built. When $s[i] = t[j]$, no editing of the current character is necessary.
- In edit distance, the state depends on the state directly above, to the left, and to the upper left. Therefore, after space optimization, neither forward nor reverse traversal can correctly perform state transitions. To address this, we use a variable to temporarily store the upper left state, making it equivalent to the situation in the unbounded knapsack problem, allowing for forward traversal after space optimization.

@ -1,23 +1,23 @@
# Unbounded knapsack problem
In this section, we first solve another common knapsack problem: the unbounded knapsack, and then explore a special case of it: the coin change problem.
## Unbounded knapsack problem
!!! question
Given $n$ items, where the weight of the $i^{th}$ item is $wgt[i-1]$ and its value is $val[i-1]$, and a backpack with a capacity of $cap$. **Each item can be selected multiple times**. What is the maximum value of the items that can be put into the backpack without exceeding its capacity? See the example below.
![Example data for the unbounded knapsack problem](unbounded_knapsack_problem.assets/unbounded_knapsack_example.png)
### Dynamic programming approach
The unbounded knapsack problem is very similar to the 0-1 knapsack problem, **the only difference being that there is no limit on the number of times an item can be chosen**.
- In the 0-1 knapsack problem, there is only one of each item, so after placing item $i$ into the backpack, you can only choose from the previous $i-1$ items.
- In the unbounded knapsack problem, the quantity of each item is unlimited, so after placing item $i$ in the backpack, **you can still choose from the previous $i$ items**.
Under the rules of the unbounded knapsack problem, the state $[i, c]$ can change in two ways.
- **Not putting item $i$ in**: As with the 0-1 knapsack problem, transition to $[i-1, c]$.
- **Putting item $i$ in**: Unlike the 0-1 knapsack problem, transition to $[i, c-wgt[i-1]]$.
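The two transitions above can be sketched directly as a two-dimensional DP (a sketch only; the function name is illustrative). Note that "putting item $i$ in" stays in row $i$, which is what allows repeated selection:

```python
def unbounded_knapsack_dp(wgt: list[int], val: list[int], cap: int) -> int:
    """Unbounded knapsack: 2D dynamic programming."""
    n = len(wgt)
    # dp[i][c] = max value using the first i items within capacity c
    dp = [[0] * (cap + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(1, cap + 1):
            if wgt[i - 1] > c:
                # Item i does not fit: inherit the state without it
                dp[i][c] = dp[i - 1][c]
            else:
                # Not putting item i in vs. putting it in (row i, not i-1,
                # because item i may be chosen again)
                dp[i][c] = max(dp[i - 1][c],
                               dp[i][c - wgt[i - 1]] + val[i - 1])
    return dp[n][cap]
```

Changing `dp[i][c - wgt[i - 1]]` to `dp[i - 1][c - wgt[i - 1]]` would recover the 0-1 knapsack transition.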
@ -43,7 +43,7 @@ Since the current state comes from the state to the left and above, **the space-
This traversal order is the opposite of that for the 0-1 knapsack. Please refer to the following figures to understand the difference.
=== "<1>"
![Dynamic programming process for the unbounded knapsack problem after space optimization](unbounded_knapsack_problem.assets/unbounded_knapsack_dp_comp_step1.png)
=== "<2>"
![unbounded_knapsack_dp_comp_step2](unbounded_knapsack_problem.assets/unbounded_knapsack_dp_comp_step2.png)
@ -78,11 +78,11 @@ The knapsack problem is a representative of a large class of dynamic programming
### Dynamic programming approach
**The coin change can be seen as a special case of the unbounded knapsack problem**, sharing the following similarities and differences.
- The two problems can be converted into each other: "item" corresponds to "coin", "item weight" corresponds to "coin denomination", and "backpack capacity" corresponds to "target amount".
- The optimization goals are opposite: the unbounded knapsack problem aims to maximize the value of items, while the coin change problem aims to minimize the number of coins.
- The unbounded knapsack problem seeks solutions "not exceeding" the backpack capacity, while the coin change seeks solutions that "exactly" make up the target amount.
**First step: Think through each round's decision-making, define the state, and thus derive the $dp$ table**
@ -92,7 +92,7 @@ The two-dimensional $dp$ table is of size $(n+1) \times (amt+1)$.
**Second step: Identify the optimal substructure and derive the state transition equation**
This problem differs from the unbounded knapsack problem in two aspects of the state transition equation.
- This problem seeks the minimum, so the operator $\max()$ needs to be changed to $\min()$.
- The optimization is focused on the number of coins, so simply add $+1$ when a coin is chosen.
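Applying both changes to the unbounded knapsack transition gives roughly the following (a sketch; `MAX = amt + 1` is the sentinel for "unable to make up the amount", and the function name is illustrative):

```python
def coin_change_dp(coins: list[int], amt: int) -> int:
    """Coin change: minimum number of coins to make up amt, or -1."""
    n = len(coins)
    MAX = amt + 1  # invalid solution: amt coins of denomination 1 suffice,
                   # so a count of amt + 1 can never be optimal
    dp = [[MAX] * (amt + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = 0  # amount 0 needs 0 coins
    for i in range(1, n + 1):
        for a in range(1, amt + 1):
            if coins[i - 1] > a:
                dp[i][a] = dp[i - 1][a]  # coin i is too large
            else:
                # min() instead of max(), and +1 per chosen coin
                dp[i][a] = min(dp[i - 1][a],
                               dp[i][a - coins[i - 1]] + 1)
    return dp[n][amt] if dp[n][amt] != MAX else -1
```

If the final state still holds the sentinel, no combination makes up the target exactly, and the convention is to return $-1$.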
@ -117,7 +117,7 @@ For this reason, we use the number $amt + 1$ to represent an invalid solution, b
[file]{coin_change}-[class]{}-[func]{coin_change_dp}
```
The following images show the dynamic programming process for the coin change problem, which is very similar to the unbounded knapsack problem.
=== "<1>"
![Dynamic programming process for the coin change problem](unbounded_knapsack_problem.assets/coin_change_dp_step1.png)
@ -166,7 +166,7 @@ The following images show the dynamic programming process for the coin change pr
### Space optimization
The space optimization for the coin change problem is handled in the same way as for the unbounded knapsack problem:
```src
[file]{coin_change}-[class]{}-[func]{coin_change_dp_comp}
```

@ -39,9 +39,9 @@ Since an `Item` object list is initialized, **the space complexity is $O(n)$**.
### Correctness proof
Using proof by contradiction. Suppose item $x$ has the highest unit value, and some algorithm yields a maximum value `res`, but the solution does not include item $x$.
Now remove a unit weight of any item from the knapsack and replace it with a unit weight of item $x$. Since the unit value of item $x$ is the highest, the total value after replacement will definitely be greater than `res`. **This contradicts the assumption that `res` is the optimal solution, proving that the optimal solution must include item $x$**.
For other items in this solution, we can also construct the above contradiction. Overall, **items with greater unit value are always better choices**, proving that the greedy strategy is effective.
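The greedy strategy proved above, always taking the item with the highest unit value first, can be sketched as follows (a minimal sketch assuming items can be split arbitrarily; the function name is illustrative):

```python
def fractional_knapsack(wgt: list[int], val: list[int], cap: int) -> float:
    """Fractional knapsack: greedily take items by unit value."""
    # Sort items by unit value val/wgt, highest first
    items = sorted(zip(wgt, val), key=lambda x: x[1] / x[0], reverse=True)
    total = 0.0
    for w, v in items:
        if cap >= w:
            total += v   # take the whole item
            cap -= w
        else:
            # Knapsack nearly full: take a fraction of the item and stop
            total += v * cap / w
            break
    return total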

@ -1,4 +1,4 @@
# Maximum product after cutting problem # Maximum product cutting problem
!!! question !!! question
@ -40,7 +40,7 @@ As shown below, when $n \geq 4$, splitting out a $2$ increases the product, **wh
Next, consider which factor is optimal. Among the factors $1$, $2$, and $3$, clearly $1$ is the worst, as $1 \times (n-1) < n$ always holds, meaning splitting out $1$ actually decreases the product. Next, consider which factor is optimal. Among the factors $1$, $2$, and $3$, clearly $1$ is the worst, as $1 \times (n-1) < n$ always holds, meaning splitting out $1$ actually decreases the product.
As shown below, when $n = 6$, $3 \times 3 > 2 \times 2 \times 2$. **This means splitting out $3$ is better than splitting out $2**. As shown below, when $n = 6$, $3 \times 3 > 2 \times 2 \times 2$. **This means splitting out $3$ is better than splitting out $2$**.
**Greedy strategy two**: In the splitting scheme, there should be at most two $2$s. Because three $2$s can always be replaced by two $3$s to obtain a higher product. **Greedy strategy two**: In the splitting scheme, there should be at most two $2$s. Because three $2$s can always be replaced by two $3$s to obtain a higher product.

@ -0,0 +1,25 @@
---
icon: material/bookshelf
---
# References
[1] Thomas H. Cormen, et al. Introduction to Algorithms (3rd Edition).
[2] Aditya Bhargava. Grokking Algorithms: An Illustrated Guide for Programmers and Other Curious People (1st Edition).
[3] Robert Sedgewick, et al. Algorithms (4th Edition).
[4] Yan Weimin. Data Structures (C Language Version).
[5] Deng Junhui. Data Structures (C++ Language Version, Third Edition).
[6] Mark Allen Weiss, translated by Chen Yue. Data Structures and Algorithm Analysis in Java (Third Edition).
[7] Cheng Jie. Speaking of Data Structures.
[8] Wang Zheng. The Beauty of Data Structures and Algorithms.
[9] Gayle Laakmann McDowell. Cracking the Coding Interview: 189 Programming Questions and Solutions (6th Edition).
[10] Aston Zhang, et al. Dive into Deep Learning.

@ -73,9 +73,7 @@ The implementation code of counting sort is shown below:
- **Time complexity is $O(n + m)$, non-adaptive sort**: Involves traversing `nums` and `counter`, both using linear time. Generally, $n \gg m$, and the time complexity tends towards $O(n)$. - **Time complexity is $O(n + m)$, non-adaptive sort**: Involves traversing `nums` and `counter`, both using linear time. Generally, $n \gg m$, and the time complexity tends towards $O(n)$.
- **Space complexity is $O(n + m)$, non-in-place sort**: Utilizes arrays `res` and `counter` of lengths $n$ and $m$ respectively. - **Space complexity is $O(n + m)$, non-in-place sort**: Utilizes arrays `res` and `counter` of lengths $n$ and $m$ respectively.
- **Stable sort**: Since elements are filled into `res` in a "right-to-left" order, reversing the traversal of $nums$ can prevent changing the relative position between equal elements, thereby achieving a stable sort. Actually, traversing `nums$ in - **Stable sort**: Since elements are filled into `res` in a "right-to-left" order, reversing the traversal of `nums` can prevent changing the relative position between equal elements, thereby achieving a stable sort. Actually, traversing `nums` in order can also produce the correct sorting result, but the outcome is unstable.
order can also produce the correct sorting result, but the outcome is unstable.
## Limitations ## Limitations

@ -60,7 +60,7 @@ The implementation of merge sort is shown in the following code. Note that the i
## Algorithm characteristics ## Algorithm characteristics
- **Time complexity of $O(n \log n)$, non-adaptive sort**: The division creates a recursion tree of height $\log n$, with each layer merging a total of $n$ operations, resulting in an overall time complexity of $O(n \log n)$. - **Time complexity of $O(n \log n)$, non-adaptive sort**: The division creates a recursion tree of height $\log n$, with each layer merging a total of $n$ operations, resulting in an overall time complexity of $O(n \log n)$.
- **Space complexity of $O(n)$, non-in-place sort**: The recursion depth is $\log n`, using $O(\log n)` stack frame space. The merging operation requires auxiliary arrays, using an additional space of $O(n)$. - **Space complexity of $O(n)$, non-in-place sort**: The recursion depth is $\log n$, using $O(\log n)$ stack frame space. The merging operation requires auxiliary arrays, using an additional space of $O(n)$.
- **Stable sort**: During the merging process, the order of equal elements remains unchanged. - **Stable sort**: During the merging process, the order of equal elements remains unchanged.
## Linked List sorting ## Linked List sorting

@ -74,7 +74,7 @@ nav:
- 4.1 Array: chapter_array_and_linkedlist/array.md - 4.1 Array: chapter_array_and_linkedlist/array.md
- 4.2 Linked list: chapter_array_and_linkedlist/linked_list.md - 4.2 Linked list: chapter_array_and_linkedlist/linked_list.md
- 4.3 List: chapter_array_and_linkedlist/list.md - 4.3 List: chapter_array_and_linkedlist/list.md
- 4.4 Memory and cache: chapter_array_and_linkedlist/ram_and_cache.md - 4.4 Memory and cache *: chapter_array_and_linkedlist/ram_and_cache.md
- 4.5 Summary: chapter_array_and_linkedlist/summary.md - 4.5 Summary: chapter_array_and_linkedlist/summary.md
- Chapter 5. Stack and queue: - Chapter 5. Stack and queue:
# [icon: material/stack-overflow] # [icon: material/stack-overflow]
@ -94,7 +94,7 @@ nav:
# [icon: material/graph-outline] # [icon: material/graph-outline]
- chapter_tree/index.md - chapter_tree/index.md
- 7.1 Binary tree: chapter_tree/binary_tree.md - 7.1 Binary tree: chapter_tree/binary_tree.md
- 7.2 Binary tree Traversal: chapter_tree/binary_tree_traversal.md - 7.2 Binary tree traversal: chapter_tree/binary_tree_traversal.md
- 7.3 Array Representation of tree: chapter_tree/array_representation_of_tree.md - 7.3 Array Representation of tree: chapter_tree/array_representation_of_tree.md
- 7.4 Binary Search tree: chapter_tree/binary_search_tree.md - 7.4 Binary Search tree: chapter_tree/binary_search_tree.md
- 7.5 AVL tree *: chapter_tree/avl_tree.md - 7.5 AVL tree *: chapter_tree/avl_tree.md
@ -113,69 +113,68 @@ nav:
- 9.2 Basic graph operations: chapter_graph/graph_operations.md - 9.2 Basic graph operations: chapter_graph/graph_operations.md
- 9.3 Graph traversal: chapter_graph/graph_traversal.md - 9.3 Graph traversal: chapter_graph/graph_traversal.md
- 9.4 Summary: chapter_graph/summary.md - 9.4 Summary: chapter_graph/summary.md
# - Chapter 10. Searching: - Chapter 10. Searching:
# # [icon: material/text-search] # [icon: material/text-search]
# - chapter_searching/index.md - chapter_searching/index.md
# - 10.1 Binary search: chapter_searching/binary_search.md - 10.1 Binary search: chapter_searching/binary_search.md
# - 10.2 Binary search insertion point: chapter_searching/binary_search_insertion.md - 10.2 Binary search insertion: chapter_searching/binary_search_insertion.md
# - 10.3 Binary search boundaries: chapter_searching/binary_search_edge.md - 10.3 Binary search boundaries: chapter_searching/binary_search_edge.md
# - 10.4 Hashing optimization strategy: chapter_searching/replace_linear_by_hashing.md - 10.4 Hashing optimization strategies: chapter_searching/replace_linear_by_hashing.md
# - 10.5 Revisiting search algorithms: chapter_searching/searching_algorithm_revisited.md - 10.5 Search algorithms revisited: chapter_searching/searching_algorithm_revisited.md
# - 10.6 Summary: chapter_searching/summary.md - 10.6 Summary: chapter_searching/summary.md
# - Chapter 11. Sorting: - Chapter 11. Sorting:
# # [icon: material/sort-ascending] # [icon: material/sort-ascending]
# - chapter_sorting/index.md - chapter_sorting/index.md
# - 11.1 Sorting algorithms: chapter_sorting/sorting_algorithm.md - 11.1 Sorting algorithms: chapter_sorting/sorting_algorithm.md
# - 11.2 Selection sort: chapter_sorting/selection_sort.md - 11.2 Selection sort: chapter_sorting/selection_sort.md
# - 11.3 Bubble sort: chapter_sorting/bubble_sort.md - 11.3 Bubble sort: chapter_sorting/bubble_sort.md
# - 11.4 Insertion sort: chapter_sorting/insertion_sort.md - 11.4 Insertion sort: chapter_sorting/insertion_sort.md
# - 11.5 Quick sort: chapter_sorting/quick_sort.md - 11.5 Quick sort: chapter_sorting/quick_sort.md
# - 11.6 Merge sort: chapter_sorting/merge_sort.md - 11.6 Merge sort: chapter_sorting/merge_sort.md
# - 11.7 Heap sort: chapter_sorting/heap_sort.md - 11.7 Heap sort: chapter_sorting/heap_sort.md
# - 11.8 Bucket sort: chapter_sorting/bucket_sort.md - 11.8 Bucket sort: chapter_sorting/bucket_sort.md
# - 11.9 Counting sort: chapter_sorting/counting_sort.md - 11.9 Counting sort: chapter_sorting/counting_sort.md
# - 11.10 Radix sort: chapter_sorting/radix_sort.md - 11.10 Radix sort: chapter_sorting/radix_sort.md
# - 11.11 Summary: chapter_sorting/summary.md - 11.11 Summary: chapter_sorting/summary.md
# - Chapter 12. Divide and conquer: - Chapter 12. Divide and conquer:
# # [icon: material/set-split] # [icon: material/set-split]
# - chapter_divide_and_conquer/index.md - chapter_divide_and_conquer/index.md
# - 12.1 Divide and conquer algorithm: chapter_divide_and_conquer/divide_and_conquer.md - 12.1 Divide and conquer algorithms: chapter_divide_and_conquer/divide_and_conquer.md
# - 12.2 Divide and conquer search strategy: chapter_divide_and_conquer/binary_search_recur.md - 12.2 Divide and conquer search strategy: chapter_divide_and_conquer/binary_search_recur.md
# - 12.3 Building tree problem: chapter_divide_and_conquer/build_binary_tree_problem.md - 12.3 Building binary tree problem: chapter_divide_and_conquer/build_binary_tree_problem.md
# - 12.4 Hanota problem: chapter_divide_and_conquer/hanota_problem.md - 12.4 Tower of Hanoi Problem: chapter_divide_and_conquer/hanota_problem.md
# - 12.5 Summary: chapter_divide_and_conquer/summary.md - 12.5 Summary: chapter_divide_and_conquer/summary.md
# - Chapter 13. Backtracking: - Chapter 13. Backtracking:
# # [icon: material/map-marker-path] # [icon: material/map-marker-path]
# - chapter_backtracking/index.md - chapter_backtracking/index.md
# - 13.1 Backtracking algorithm: chapter_backtracking/backtracking_algorithm.md - 13.1 Backtracking algorithms: chapter_backtracking/backtracking_algorithm.md
# - 13.2 Permutations problem: chapter_backtracking/permutations_problem.md - 13.2 Permutation problem: chapter_backtracking/permutations_problem.md
# - 13.3 Subset sum problem: chapter_backtracking/subset_sum_problem.md - 13.3 Subset sum problem: chapter_backtracking/subset_sum_problem.md
# - 13.4 n-queens problem: chapter_backtracking/n_queens_problem.md - 13.4 n queens problem: chapter_backtracking/n_queens_problem.md
# - 13.5 Summary: chapter_backtracking/summary.md - 13.5 Summary: chapter_backtracking/summary.md
# - Chapter 14. Dynamic programming: - Chapter 14. Dynamic programming:
# # [icon: material/table-pivot] # [icon: material/table-pivot]
# - chapter_dynamic_programming/index.md - chapter_dynamic_programming/index.md
# - 14.1 Introduction to dynamic programming: chapter_dynamic_programming/intro_to_dynamic_programming.md - 14.1 Introduction to dynamic programming: chapter_dynamic_programming/intro_to_dynamic_programming.md
# - 14.2 Features of DP problems: chapter_dynamic_programming/dp_problem_features.md - 14.2 Characteristics of DP problems: chapter_dynamic_programming/dp_problem_features.md
# - 14.3 DP solution approach: chapter_dynamic_programming/dp_solution_pipeline.md - 14.3 DP problem-solving approach¶: chapter_dynamic_programming/dp_solution_pipeline.md
# - 14.4 0-1 Knapsack problem: chapter_dynamic_programming/knapsack_problem.md - 14.4 0-1 Knapsack problem: chapter_dynamic_programming/knapsack_problem.md
# - 14.5 Unbounded knapsack problem: chapter_dynamic_programming/unbounded_knapsack_problem.md - 14.5 Unbounded knapsack problem: chapter_dynamic_programming/unbounded_knapsack_problem.md
# - 14.6 Edit distance problem: chapter_dynamic_programming/edit_distance_problem.md - 14.6 Edit distance problem: chapter_dynamic_programming/edit_distance_problem.md
# - 14.7 Summary: chapter_dynamic_programming/summary.md - 14.7 Summary: chapter_dynamic_programming/summary.md
# - Chapter 15. Greedy: - Chapter 15. Greedy:
# # [icon: material/head-heart-outline] # [icon: material/head-heart-outline]
# - chapter_greedy/index.md - chapter_greedy/index.md
# - 15.1 Greedy algorithm: chapter_greedy/greedy_algorithm.md - 15.1 Greedy algorithms: chapter_greedy/greedy_algorithm.md
# - 15.2 Fractional knapsack problem: chapter_greedy/fractional_knapsack_problem.md - 15.2 Fractional knapsack problem: chapter_greedy/fractional_knapsack_problem.md
# - 15.3 Maximum capacity problem: chapter_greedy/max_capacity_problem.md - 15.3 Maximum capacity problem: chapter_greedy/max_capacity_problem.md
# - 15.4 Maximum product cutting problem: chapter_greedy/max_product_cutting_problem.md - 15.4 Maximum product cutting problem: chapter_greedy/max_product_cutting_problem.md
# - 15.5 Summary: chapter_greedy/summary.md - 15.5 Summary: chapter_greedy/summary.md
# - Chapter 16. Appendix: - Chapter 16. Appendix:
# # [icon: material/help-circle-outline] # [icon: material/help-circle-outline]
# - chapter_appendix/index.md - chapter_appendix/index.md
# - 16.1 Installation: chapter_appendix/installation.md - 16.1 Installation: chapter_appendix/installation.md
# - 16.2 Contributing: chapter_appendix/contribution.md - 16.2 Contributing: chapter_appendix/contribution.md
# # [status: new] - 16.3 Terminology: chapter_appendix/terminology.md
# - 16.3 &nbsp; Terminology: chapter_appendix/terminology.md - References:
# - References: - chapter_reference/index.md
# - chapter_reference/index.md

Loading…
Cancel
Save