diff --git a/docs/chapter_backtracking/n_queens_problem.md b/docs/chapter_backtracking/n_queens_problem.md
index 06362da20..c1ef02061 100644
--- a/docs/chapter_backtracking/n_queens_problem.md
+++ b/docs/chapter_backtracking/n_queens_problem.md
@@ -28,7 +28,11 @@
为了满足列约束,我们可以利用一个长度为 $n$ 的布尔型数组 `cols` 记录每一列是否有皇后。在每次决定放置前,我们通过 `cols` 将已有皇后的列进行剪枝,并在回溯中动态更新 `cols` 的状态。
-那么,如何处理对角线约束呢?设棋盘中某个格子的行列索引为 $(row, col)$ ,选定矩阵中的某条主对角线,我们发现该对角线上所有格子的行索引减列索引都相等,**即对角线上所有格子的 $row - col$ 为恒定值**。
+!!! tip
+
+ 请注意,矩阵的起点位于左上角,其中行索引从上到下增加,列索引从左到右增加。
+
+那么,如何处理对角线约束呢?设棋盘中某个格子的行列索引为 $(row, col)$ ,选定矩阵中的某条主对角线,我们发现该对角线上所有格子的行索引减列索引都相等,**即主对角线上所有格子的 $row - col$ 为恒定值**。
也就是说,如果两个格子满足 $row_1 - col_1 = row_2 - col_2$ ,则它们一定处在同一条主对角线上。利用该规律,我们可以借助下图所示的数组 `diags1` 记录每条主对角线上是否有皇后。
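For reference, the identity row − col above also fixes how many flags `diags1` needs; a short derivation, assuming the usual offset-by-(n − 1) indexing trick (the offset itself is a common convention, not spelled out in this hunk):

$$
-(n-1) \le row - col \le n-1 \quad\Longrightarrow\quad row - col + (n - 1) \in [0,\ 2n-2]
$$

So an $n \times n$ board has $2n - 1$ main diagonals, and `diags1` can be a boolean array of length $2n - 1$ indexed by $row - col + n - 1$; the anti-diagonal key $row + col$ already lies in $[0, 2n - 2]$ and can index `diags2` directly.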
diff --git a/docs/chapter_data_structure/classification_of_data_structure.md b/docs/chapter_data_structure/classification_of_data_structure.md
index 89ee854be..4a74c2de0 100644
--- a/docs/chapter_data_structure/classification_of_data_structure.md
+++ b/docs/chapter_data_structure/classification_of_data_structure.md
@@ -30,7 +30,7 @@
值得说明的是,将内存比作 Excel 表格是一个简化的类比,实际内存的工作机制比较复杂,涉及地址空间、内存管理、缓存机制、虚拟内存和物理内存等概念。
-内存是所有程序的共享资源,当某块内存被某个程序占用时,则无法被其他程序同时使用了。**因此在数据结构与算法的设计中,内存资源是一个重要的考虑因素**。比如,算法所占用的内存峰值不应超过系统剩余空闲内存;如果缺少连续大块的内存空间,那么所选用的数据结构必须能够存储在分散的内存空间内。
+内存是所有程序的共享资源,当某块内存被某个程序占用时,则通常无法被其他程序同时使用了。**因此在数据结构与算法的设计中,内存资源是一个重要的考虑因素**。比如,算法所占用的内存峰值不应超过系统剩余空闲内存;如果缺少连续大块的内存空间,那么所选用的数据结构必须能够存储在分散的内存空间内。
如下图所示,**物理结构反映了数据在计算机内存中的存储方式**,可分为连续空间存储(数组)和分散空间存储(链表)。物理结构从底层决定了数据的访问、更新、增删等操作方法,两种物理结构在时间效率和空间效率方面呈现出互补的特点。
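A brief Go illustration of the two physical structures mentioned above (illustrative types, not code from the book): an array occupies one contiguous block of memory, while linked-list nodes may be scattered and are connected through references.

```go
package main

import "fmt"

// ListNode can live anywhere on the heap; nodes are chained by pointers.
type ListNode struct {
	Val  int
	Next *ListNode
}

func main() {
	// Contiguous storage: elements sit next to each other in memory.
	arr := [3]int{1, 2, 3}

	// Dispersed storage: each node is allocated separately and linked via Next.
	head := &ListNode{Val: 1}
	head.Next = &ListNode{Val: 2}
	head.Next.Next = &ListNode{Val: 3}

	fmt.Println(arr, head.Val, head.Next.Val, head.Next.Next.Val)
}
```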
diff --git a/docs/chapter_preface/about_the_book.md b/docs/chapter_preface/about_the_book.md
index ae386834f..93ebf4265 100644
--- a/docs/chapter_preface/about_the_book.md
+++ b/docs/chapter_preface/about_the_book.md
@@ -2,9 +2,9 @@
本项目旨在创建一本开源、免费、对新手友好的数据结构与算法入门教程。
-- 全书采用动画图解,结构化地讲解数据结构与算法知识,内容清晰易懂,学习曲线平滑。
-- 算法源代码皆可一键运行,支持 Python、C++、Java、C#、Go、Swift、JavaScript、TypeScript、Dart、Rust、C 和 Zig 等语言。
-- 鼓励读者在线上章节评论区互帮互助、共同进步,提问与评论通常可在两日内得到回复。
+- 全书采用动画图解,内容清晰易懂、学习曲线平滑,引导初学者探索数据结构与算法的知识地图。
+- 源代码可一键运行,帮助读者在练习中提升编程技能,了解算法工作原理和数据结构底层实现。
+- 提倡读者互助学习,欢迎大家在评论区提出问题与分享见解,在交流讨论中共同进步。
## 读者对象
@@ -32,7 +32,7 @@
本书在开源社区众多贡献者的共同努力下不断完善。感谢每一位投入时间与精力的撰稿人,他们是(按照 GitHub 自动生成的顺序):krahets、Gonglja、nuomi1、codingonion、Reanon、justin-tse、hpstory、danielsss、curtishd、night-cruise、S-N-O-R-L-A-X、msk397、gvenusleo、RiverTwilight、gyt95、zhuoqinyue、Zuoxun、mingXta、hello-ikun、khoaxuantu、FangYuan33、GN-Yu、longsizhuo、mgisr、Cathay-Chen、guowei-gong、xBLACKICEx、K3v123、IsChristina、JoseHung、qualifier1024、pengchzn、Guanngxu、QiLOL、L-Super、WSL0809、Slone123c、lhxsm、yuan0221、what-is-me、rongyi、JeffersonHuang、longranger2、theNefelibatas、yuelinxin、xiongsp、nanlei、a16su、cy-by-side、gaofer、malone6、Wonderdch、hongyun-robot、XiaChuerwu、yd-j、bluebean-cloud、iron-irax、he-weilai、Nigh、MolDuM、Phoenix0415、XC-Zero、SamJin98、reeswell、NI-SW、Horbin-Magician、xjr7670、YangXuanyi、DullSword、iStig、qq909244296、jiaxianhua、wenjianmin、keshida、kilikilikid、lclc6、lwbaptx、luluxia、boloboloda、hts0000、gledfish、fbigm、echo1937、szu17dmy、dshlstarr、coderlef、czruby、beintentional、KeiichiKasai、xb534、ElaBosak233、baagod、zhouLion、yishangzhang、yi427、yabo083、weibk、wangwang105、th1nk3r-ing、tao363、4yDX3906、syd168、siqyka、selear、sdshaoda、noobcodemaker、chadyi、lyl625760、lucaswangdev、liuxjerry、0130w、shanghai-Jerry、JackYang-hellobobo、Javesun99、lipusheng、ShiMaRing、FreddieLi、FloranceYeh、Transmigration-zhou、fanchenggang、gltianwen、Dr-XYZ、curly210102、CuB3y0nd、youshaoXG、bubble9um、fanenr、52coder、foursevenlove、KorsChen、ZongYangL、hezhizhen、linzeyan、ZJKung、GaochaoZhu、yang-le、Evilrabbit520、Turing-1024-Lee、Suremotoo、Allen-Scai、Richard-Zhang1019、qingpeng9802、primexiao、nidhoggfgg、1ch0、MwumLi、ZnYang2018、hugtyftg、logan-qiu、psychelzh 和 Keynman 。
-本书的代码审阅工作由 codingonion、curtishd、Gonglja、gvenusleo、hpstory、justin-tse、krahets、night-cruise、nuomi1 和 Reanon 完成(按照首字母顺序排列)。感谢他们付出的时间与精力,正是他们确保了各语言代码的规范与统一。
+本书的代码审阅工作由 codingonion、curtishd、Gonglja、gvenusleo、hpstory、justin-tse、khoaxuantu、krahets、night-cruise、nuomi1 和 Reanon 完成(按照首字母顺序排列)。感谢他们付出的时间与精力,正是他们确保了各语言代码的规范与统一。
在本书的创作过程中,我得到了许多人的帮助。
diff --git a/docs/chapter_sorting/bucket_sort.md b/docs/chapter_sorting/bucket_sort.md
index 5f361b16c..84b7f08d0 100644
--- a/docs/chapter_sorting/bucket_sort.md
+++ b/docs/chapter_sorting/bucket_sort.md
@@ -24,8 +24,7 @@
桶排序适用于处理体量很大的数据。例如,输入数据包含 100 万个元素,由于空间限制,系统内存无法一次性加载所有数据。此时,可以将数据分成 1000 个桶,然后分别对每个桶进行排序,最后将结果合并。
-- **时间复杂度为 $O(n + k)$** :假设元素在各个桶内平均分布,那么每个桶内的元素数量为 $\frac{n}{k}$ 。假设排序单个桶使用 $O(\frac{n}{k} \log\frac{n}{k})$ 时间,则排序所有桶使用 $O(n \log\frac{n}{k})$ 时间。**当桶数量 $k$ 比较大时,时间复杂度则趋向于 $O(n)$** 。合并结果时需要遍历所有桶和元素,花费 $O(n + k)$ 时间。
-- **自适应排序**:在最差情况下,所有数据被分配到一个桶中,且排序该桶使用 $O(n^2)$ 时间。
+- **时间复杂度为 $O(n + k)$** :假设元素在各个桶内平均分布,那么每个桶内的元素数量为 $\frac{n}{k}$ 。假设排序单个桶使用 $O(\frac{n}{k} \log\frac{n}{k})$ 时间,则排序所有桶使用 $O(n \log\frac{n}{k})$ 时间。**当桶数量 $k$ 比较大时,时间复杂度则趋向于 $O(n)$** 。合并结果时需要遍历所有桶和元素,花费 $O(n + k)$ 时间。在最差情况下,所有数据被分配到一个桶中,且排序该桶使用 $O(n^2)$ 时间。
- **空间复杂度为 $O(n + k)$、非原地排序**:需要借助 $k$ 个桶和总共 $n$ 个元素的额外空间。
- 桶排序是否稳定取决于排序桶内元素的算法是否稳定。
diff --git a/docs/chapter_sorting/quick_sort.md b/docs/chapter_sorting/quick_sort.md
index 60dccec92..96be54bac 100755
--- a/docs/chapter_sorting/quick_sort.md
+++ b/docs/chapter_sorting/quick_sort.md
@@ -61,7 +61,7 @@
## 算法特性
-- **时间复杂度为 $O(n \log n)$、自适应排序**:在平均情况下,哨兵划分的递归层数为 $\log n$ ,每层中的总循环数为 $n$ ,总体使用 $O(n \log n)$ 时间。在最差情况下,每轮哨兵划分操作都将长度为 $n$ 的数组划分为长度为 $0$ 和 $n - 1$ 的两个子数组,此时递归层数达到 $n$ ,每层中的循环数为 $n$ ,总体使用 $O(n^2)$ 时间。
+- **时间复杂度为 $O(n \log n)$、非自适应排序**:在平均情况下,哨兵划分的递归层数为 $\log n$ ,每层中的总循环数为 $n$ ,总体使用 $O(n \log n)$ 时间。在最差情况下,每轮哨兵划分操作都将长度为 $n$ 的数组划分为长度为 $0$ 和 $n - 1$ 的两个子数组,此时递归层数达到 $n$ ,每层中的循环数为 $n$ ,总体使用 $O(n^2)$ 时间。
- **空间复杂度为 $O(n)$、原地排序**:在输入数组完全倒序的情况下,达到最差递归深度 $n$ ,使用 $O(n)$ 栈帧空间。排序操作是在原数组上进行的,未借助额外数组。
- **非稳定排序**:在哨兵划分的最后一步,基准数可能会被交换至相等元素的右侧。
diff --git a/docs/chapter_sorting/sorting_algorithm.md b/docs/chapter_sorting/sorting_algorithm.md
index 88ca9f628..ded41fd5a 100644
--- a/docs/chapter_sorting/sorting_algorithm.md
+++ b/docs/chapter_sorting/sorting_algorithm.md
@@ -35,14 +35,12 @@
('E', 23)
```
-**自适应性**:自适应排序的时间复杂度会受输入数据的影响,即最佳时间复杂度、最差时间复杂度、平均时间复杂度并不完全相等。
-
-自适应性需要根据具体情况来评估。如果最差时间复杂度差于平均时间复杂度,说明排序算法在某些数据下性能可能劣化,因此被视为负面属性;而如果最佳时间复杂度优于平均时间复杂度,则被视为正面属性。
+**自适应性**:自适应排序能够利用输入数据已有的顺序信息来减少计算量,达到更优的时间效率。自适应排序算法的最佳时间复杂度通常优于平均时间复杂度。
**是否基于比较**:基于比较的排序依赖比较运算符($<$、$=$、$>$)来判断元素的相对顺序,从而排序整个数组,理论最优时间复杂度为 $O(n \log n)$ 。而非比较排序不使用比较运算符,时间复杂度可达 $O(n)$ ,但其通用性相对较差。
## 理想排序算法
-**运行快、原地、稳定、正向自适应、通用性好**。显然,迄今为止尚未发现兼具以上所有特性的排序算法。因此,在选择排序算法时,需要根据具体的数据特点和问题需求来决定。
+**运行快、原地、稳定、自适应、通用性好**。显然,迄今为止尚未发现兼具以上所有特性的排序算法。因此,在选择排序算法时,需要根据具体的数据特点和问题需求来决定。
接下来,我们将共同学习各种排序算法,并基于上述评价维度对各个排序算法的优缺点进行分析。
diff --git a/docs/chapter_sorting/summary.assets/sorting_algorithms_comparison.png b/docs/chapter_sorting/summary.assets/sorting_algorithms_comparison.png
index 6a6559d08..832dfb4dd 100644
Binary files a/docs/chapter_sorting/summary.assets/sorting_algorithms_comparison.png and b/docs/chapter_sorting/summary.assets/sorting_algorithms_comparison.png differ
diff --git a/docs/chapter_sorting/summary.md b/docs/chapter_sorting/summary.md
index 9bc5baf9a..31edcb8b8 100644
--- a/docs/chapter_sorting/summary.md
+++ b/docs/chapter_sorting/summary.md
@@ -9,7 +9,7 @@
- 桶排序包含三个步骤:数据分桶、桶内排序和合并结果。它同样体现了分治策略,适用于数据体量很大的情况。桶排序的关键在于对数据进行平均分配。
- 计数排序是桶排序的一个特例,它通过统计数据出现的次数来实现排序。计数排序适用于数据量大但数据范围有限的情况,并且要求数据能够转换为正整数。
- 基数排序通过逐位排序来实现数据排序,要求数据能够表示为固定位数的数字。
-- 总的来说,我们希望找到一种排序算法,具有高效率、稳定、原地以及正向自适应性等优点。然而,正如其他数据结构和算法一样,没有一种排序算法能够同时满足所有这些条件。在实际应用中,我们需要根据数据的特性来选择合适的排序算法。
+- 总的来说,我们希望找到一种排序算法,具有高效率、稳定、原地以及自适应性等优点。然而,正如其他数据结构和算法一样,没有一种排序算法能够同时满足所有这些条件。在实际应用中,我们需要根据数据的特性来选择合适的排序算法。
- 下图对比了主流排序算法的效率、稳定性、就地性和自适应性等。
![排序算法对比](summary.assets/sorting_algorithms_comparison.png)
diff --git a/en/CONTRIBUTING.md b/en/CONTRIBUTING.md
index 366a16c38..b14906fcf 100644
--- a/en/CONTRIBUTING.md
+++ b/en/CONTRIBUTING.md
@@ -20,7 +20,7 @@ That is, our contributors are computer scientists, engineers, and students from
- **Native Chinese with professional working English**: Ensuring translation accuracy and consistency between CN and EN versions.
- **Native English**: Enhance the authenticity and fluency of the English content to flow naturally and to be engaging.
-Don't hesitate to join us via WeChat `krahets-jyd` or on [Discord](https://discord.gg/9hrbyZFBX3)!
+Don't hesitate to join us via WeChat `krahets-jyd` or on [Discord](https://discord.gg/nvspS56295)!
## Translation process
diff --git a/en/docs/chapter_backtracking/n_queens_problem.md b/en/docs/chapter_backtracking/n_queens_problem.md
index da2ab0fd1..bca5beb38 100644
--- a/en/docs/chapter_backtracking/n_queens_problem.md
+++ b/en/docs/chapter_backtracking/n_queens_problem.md
@@ -28,6 +28,10 @@ Essentially, **the row-by-row placing strategy serves as a pruning function**, a
To satisfy column constraints, we can use a boolean array `cols` of length $n$ to track whether a queen occupies each column. Before each placement decision, `cols` is used to prune the columns that already have queens, and it is dynamically updated during backtracking.
+!!! tip
+
+ Note that the origin of the chessboard is located in the upper left corner, where the row index increases from top to bottom, and the column index increases from left to right.
+
How about the diagonal constraints? Let the row and column indices of a cell on the chessboard be $(row, col)$. By selecting a specific main diagonal, we notice that the difference $row - col$ is the same for all cells on that diagonal, **meaning that $row - col$ is a constant value on that diagonal**.
Thus, if two cells satisfy $row_1 - col_1 = row_2 - col_2$, they are definitely on the same main diagonal. Using this pattern, we can utilize the array `diags1` shown in the figure below to track whether a queen is on any main diagonal.
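A minimal Go sketch of the pruning check described above, assuming boolean arrays `cols`, `diags1` (indexed by `row - col + n - 1`), and `diags2` (indexed by `row + col`); the array names follow the text, while `isSafe` and the exact indexing are illustrative rather than the book's API.

```go
package main

import "fmt"

// isSafe reports whether a queen can be placed at (row, col) without
// violating the column or diagonal constraints tracked in the three arrays.
func isSafe(row, col, n int, cols, diags1, diags2 []bool) bool {
	return !cols[col] && !diags1[row-col+n-1] && !diags2[row+col]
}

func main() {
	n := 4
	cols := make([]bool, n)
	diags1 := make([]bool, 2*n-1) // main diagonals, keyed by row - col + n - 1
	diags2 := make([]bool, 2*n-1) // anti-diagonals, keyed by row + col

	// Place a queen at (0, 1) and record its column and diagonals.
	row, col := 0, 1
	cols[col] = true
	diags1[row-col+n-1] = true
	diags2[row+col] = true

	fmt.Println(isSafe(1, 3, n, cols, diags1, diags2)) // true: no shared column or diagonal
	fmt.Println(isSafe(1, 2, n, cols, diags1, diags2)) // false: shares a main diagonal with (0, 1)
}
```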
diff --git a/en/docs/chapter_introduction/algorithms_are_everywhere.md b/en/docs/chapter_introduction/algorithms_are_everywhere.md
index 77b81ac8a..a4b7638e2 100644
--- a/en/docs/chapter_introduction/algorithms_are_everywhere.md
+++ b/en/docs/chapter_introduction/algorithms_are_everywhere.md
@@ -53,4 +53,4 @@ From cooking a meal to interstellar travel, almost all problem-solving involves
!!! tip
- If concepts such as data structures, algorithms, arrays, and binary search still seem somewhat obsecure, I encourage you to continue reading. This book will gently guide you into the realm of understanding data structures and algorithms.
+ If concepts such as data structures, algorithms, arrays, and binary search still seem somewhat obscure, I encourage you to continue reading. This book will gently guide you into the realm of understanding data structures and algorithms.
diff --git a/en/docs/chapter_preface/about_the_book.md b/en/docs/chapter_preface/about_the_book.md
index 4905aba8c..13d79487d 100644
--- a/en/docs/chapter_preface/about_the_book.md
+++ b/en/docs/chapter_preface/about_the_book.md
@@ -2,9 +2,9 @@
This open-source project aims to create a free, and beginner-friendly crash course on data structures and algorithms.
-- Using animated illustrations, it delivers structured insights into data structures and algorithmic concepts, ensuring comprehensibility and a smooth learning curve.
-- Run code with just one click, supporting Java, C++, Python, Go, JS, TS, C#, Swift, Rust, Dart, Zig and other languages.
-- Readers are encouraged to engage with each other in the discussion area for each section, questions and comments are usually answered within two days.
+- Animated illustrations, easy-to-understand content, and a smooth learning curve help beginners explore the "knowledge map" of data structures and algorithms.
+- Run code with just one click, helping readers improve their programming skills and understand the working principle of algorithms and the underlying implementation of data structures.
+- We promote learning through mutual help: feel free to ask questions and share insights, and let's grow together through discussion.
## Target audience
@@ -32,7 +32,7 @@ The main content of the book is shown in the figure below.
This book is continuously improved with the joint efforts of many contributors from the open-source community. Thanks to each writer who invested their time and energy, listed in the order generated by GitHub: krahets, codingonion, nuomi1, Gonglja, Reanon, justin-tse, danielsss, hpstory, S-N-O-R-L-A-X, night-cruise, msk397, gvenusleo, RiverTwilight, gyt95, zhuoqinyue, Zuoxun, Xia-Sang, mingXta, FangYuan33, GN-Yu, IsChristina, xBLACKICEx, guowei-gong, Cathay-Chen, mgisr, JoseHung, qualifier1024, pengchzn, Guanngxu, longsizhuo, L-Super, what-is-me, yuan0221, lhxsm, Slone123c, WSL0809, longranger2, theNefelibatas, xiongsp, JeffersonHuang, hongyun-robot, K3v123, yuelinxin, a16su, gaofer, malone6, Wonderdch, xjr7670, DullSword, Horbin-Magician, NI-SW, reeswell, XC-Zero, XiaChuerwu, yd-j, iron-irax, huawuque404, MolDuM, Nigh, KorsChen, foursevenlove, 52coder, bubble9um, youshaoXG, curly210102, gltianwen, fanchenggang, Transmigration-zhou, FloranceYeh, FreddieLi, ShiMaRing, lipusheng, Javesun99, JackYang-hellobobo, shanghai-Jerry, 0130w, Keynman, psychelzh, logan-qiu, ZnYang2018, MwumLi, 1ch0, Phoenix0415, qingpeng9802, Richard-Zhang1019, QiLOL, Suremotoo, Turing-1024-Lee, Evilrabbit520, GaochaoZhu, ZJKung, linzeyan, hezhizhen, ZongYangL, beintentional, czruby, coderlef, dshlstarr, szu17dmy, fbigm, gledfish, hts0000, boloboloda, iStig, jiaxianhua, wenjianmin, keshida, kilikilikid, lclc6, lwbaptx, liuxjerry, lucaswangdev, lyl625760, chadyi, noobcodemaker, selear, siqyka, syd168, 4yDX3906, tao363, wangwang105, weibk, yabo083, yi427, yishangzhang, zhouLion, baagod, ElaBosak233, xb534, luluxia, yanedie, thomasq0, YangXuanyi and th1nk3r-ing.
-The code review work for this book was completed by codingonion, Gonglja, gvenusleo, hpstory, justin‐tse, krahets, night-cruise, nuomi1, and Reanon (listed in alphabetical order). Thanks to them for their time and effort, ensuring the standardization and uniformity of the code in various languages.
+The code review work for this book was completed by codingonion, Gonglja, gvenusleo, hpstory, justin‐tse, khoaxuantu, krahets, night-cruise, nuomi1, and Reanon (listed in alphabetical order). Thanks to them for their time and effort, ensuring the standardization and uniformity of the code in various languages.
Throughout the creation of this book, numerous individuals provided invaluable assistance, including but not limited to:
diff --git a/en/docs/chapter_sorting/bucket_sort.md b/en/docs/chapter_sorting/bucket_sort.md
index e96492be4..8d6c2e9e9 100644
--- a/en/docs/chapter_sorting/bucket_sort.md
+++ b/en/docs/chapter_sorting/bucket_sort.md
@@ -24,8 +24,7 @@ The code is shown as follows:
Bucket sort is suitable for handling very large data sets. For example, if the input data includes 1 million elements, and system memory limitations prevent loading all the data at once, you can divide the data into 1,000 buckets and sort each bucket separately before merging the results.
-- **Time complexity is $O(n + k)$**: Assuming the elements are evenly distributed across the buckets, the number of elements in each bucket is $n/k$. Assuming sorting a single bucket takes $O(n/k \log(n/k))$ time, sorting all buckets takes $O(n \log(n/k))$ time. **When the number of buckets $k$ is relatively large, the time complexity tends towards $O(n)$**. Merging the results requires traversing all buckets and elements, taking $O(n + k)$ time.
-- **Adaptive sorting**: In the worst case, all data is distributed into a single bucket, and sorting that bucket takes $O(n^2)$ time.
+- **Time complexity is $O(n + k)$**: Assuming the elements are evenly distributed across the buckets, the number of elements in each bucket is $n/k$. Assuming sorting a single bucket takes $O(n/k \log(n/k))$ time, sorting all buckets takes $O(n \log(n/k))$ time. **When the number of buckets $k$ is relatively large, the time complexity tends towards $O(n)$**. Merging the results requires traversing all buckets and elements, taking $O(n + k)$ time. In the worst case, all data is distributed into a single bucket, and sorting that bucket takes $O(n^2)$ time.
- **Space complexity is $O(n + k)$, non-in-place sorting**: It requires additional space for $k$ buckets and a total of $n$ elements.
- Whether bucket sort is stable depends on whether the algorithm used to sort elements within the buckets is stable.
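A worked version of the estimate above, assuming an even split into $k$ buckets and an $O(m \log m)$ comparison sort inside each bucket:

$$
k \cdot O\left(\frac{n}{k} \log \frac{n}{k}\right) = O\left(n \log \frac{n}{k}\right)
$$

When $k$ grows toward $n$, each bucket holds only a constant number of elements, so $\log \frac{n}{k} = O(1)$ and the in-bucket sorting cost shrinks to $O(n)$; the $O(n + k)$ scatter and merge passes then dominate.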
diff --git a/en/docs/chapter_sorting/quick_sort.md b/en/docs/chapter_sorting/quick_sort.md
index d969b4c26..a78491fd3 100644
--- a/en/docs/chapter_sorting/quick_sort.md
+++ b/en/docs/chapter_sorting/quick_sort.md
@@ -61,7 +61,7 @@ The overall process of quick sort is shown in the figure below.
## Algorithm features
-- **Time complexity of $O(n \log n)$, adaptive sorting**: In average cases, the recursive levels of pivot partitioning are $\log n$, and the total number of loops per level is $n$, using $O(n \log n)$ time overall. In the worst case, each round of pivot partitioning divides an array of length $n$ into two sub-arrays of lengths $0$ and $n - 1$, reaching $n$ recursive levels, and using $O(n^2)$ time overall.
+- **Time complexity of $O(n \log n)$, non-adaptive sorting**: In average cases, the recursive levels of pivot partitioning are $\log n$, and the total number of loops per level is $n$, using $O(n \log n)$ time overall. In the worst case, each round of pivot partitioning divides an array of length $n$ into two sub-arrays of lengths $0$ and $n - 1$, reaching $n$ recursive levels, and using $O(n^2)$ time overall.
- **Space complexity of $O(n)$, in-place sorting**: In completely reversed input arrays, reaching the worst recursion depth of $n$, using $O(n)$ stack frame space. The sorting operation is performed on the original array without the aid of additional arrays.
- **Non-stable sorting**: In the final step of pivot partitioning, the pivot may be swapped to the right of equal elements.
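A hedged Go sketch of why the leftmost-pivot scheme described above hits its worst case on already-ordered input; the function mirrors the book's sentinel partition in spirit, but the exact code here is illustrative.

```go
package main

import "fmt"

// partition uses nums[left] as the pivot and returns its final index.
func partition(nums []int, left, right int) int {
	i, j := left, right
	for i < j {
		for i < j && nums[j] >= nums[left] {
			j-- // search from the right for the first element smaller than the pivot
		}
		for i < j && nums[i] <= nums[left] {
			i++ // search from the left for the first element larger than the pivot
		}
		nums[i], nums[j] = nums[j], nums[i]
	}
	nums[left], nums[i] = nums[i], nums[left]
	return i
}

func main() {
	// On sorted input the pivot stays at the far left every round,
	// splitting n elements into sub-arrays of length 0 and n-1: O(n^2) overall.
	nums := []int{1, 2, 3, 4, 5}
	fmt.Println(partition(nums, 0, len(nums)-1)) // 0
}
```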
diff --git a/en/docs/chapter_sorting/sorting_algorithm.md b/en/docs/chapter_sorting/sorting_algorithm.md
index 619556f92..c560e56fe 100644
--- a/en/docs/chapter_sorting/sorting_algorithm.md
+++ b/en/docs/chapter_sorting/sorting_algorithm.md
@@ -35,14 +35,12 @@ Stable sorting is a necessary condition for multi-level sorting scenarios. Suppo
('E', 23)
```
-**Adaptability**: Adaptive sorting has a time complexity that depends on the input data, i.e., the best time complexity, worst time complexity, and average time complexity are not exactly equal.
-
-Adaptability needs to be assessed according to the specific situation. If the worst time complexity is worse than the average, it suggests that the performance of the sorting algorithm might deteriorate under certain data, hence it is seen as a negative attribute; whereas, if the best time complexity is better than the average, it is considered a positive attribute.
+**Adaptability**: Adaptive sorting leverages the existing order information within the input data to reduce computational effort and achieve better time efficiency. The best-case time complexity of adaptive sorting algorithms is typically better than their average-case time complexity.
**Comparison-based**: Comparison-based sorting relies on comparison operators ($<$, $=$, $>$) to determine the relative order of elements and thus sort the entire array, with the theoretical optimal time complexity being $O(n \log n)$. Meanwhile, non-comparison sorting does not use comparison operators and can achieve a time complexity of $O(n)$, but its versatility is relatively poor.
## Ideal sorting algorithm
-**Fast execution, in-place, stable, positively adaptive, and versatile**. Clearly, no sorting algorithm that combines all these features has been found to date. Therefore, when selecting a sorting algorithm, it is necessary to decide based on the specific characteristics of the data and the requirements of the problem.
+**Fast execution, in-place, stable, adaptive, and versatile**. Clearly, no sorting algorithm that combines all these features has been found to date. Therefore, when selecting a sorting algorithm, it is necessary to decide based on the specific characteristics of the data and the requirements of the problem.
Next, we will learn about various sorting algorithms together and analyze the advantages and disadvantages of each based on the above evaluation dimensions.
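As a concrete illustration of the adaptability property described above (a sketch, not code from the book): bubble sort with an early-exit flag detects already-ordered input and finishes in $O(n)$, while its average case remains $O(n^2)$.

```go
package main

import "fmt"

// bubbleSortWithFlag stops as soon as a full pass performs no swaps,
// so an already sorted array is handled in a single O(n) pass.
func bubbleSortWithFlag(nums []int) {
	for i := len(nums) - 1; i > 0; i-- {
		swapped := false
		for j := 0; j < i; j++ {
			if nums[j] > nums[j+1] {
				nums[j], nums[j+1] = nums[j+1], nums[j]
				swapped = true
			}
		}
		if !swapped {
			break // no swaps in this pass: the array is already sorted
		}
	}
}

func main() {
	nums := []int{1, 2, 3, 4, 5, 6} // best case: already sorted
	bubbleSortWithFlag(nums)
	fmt.Println(nums) // [1 2 3 4 5 6]
}
```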
diff --git a/en/docs/chapter_sorting/summary.assets/sorting_algorithms_comparison.png b/en/docs/chapter_sorting/summary.assets/sorting_algorithms_comparison.png
index 8492fc4c3..48b4c7c73 100644
Binary files a/en/docs/chapter_sorting/summary.assets/sorting_algorithms_comparison.png and b/en/docs/chapter_sorting/summary.assets/sorting_algorithms_comparison.png differ
diff --git a/en/docs/chapter_sorting/summary.md b/en/docs/chapter_sorting/summary.md
index d2a92f978..ecf7f3f81 100644
--- a/en/docs/chapter_sorting/summary.md
+++ b/en/docs/chapter_sorting/summary.md
@@ -9,7 +9,7 @@
- Bucket sort consists of three steps: data bucketing, sorting within buckets, and merging results. It also embodies the divide-and-conquer strategy, suitable for very large datasets. The key to bucket sort is the even distribution of data.
- Counting sort is a special case of bucket sort, which sorts by counting the occurrences of each data point. Counting sort is suitable for large datasets with a limited range of data and requires that data can be converted to positive integers.
- Radix sort sorts data by sorting digit by digit, requiring data to be represented as fixed-length numbers.
-- Overall, we hope to find a sorting algorithm that has high efficiency, stability, in-place operation, and positive adaptability. However, like other data structures and algorithms, no sorting algorithm can meet all these conditions simultaneously. In practical applications, we need to choose the appropriate sorting algorithm based on the characteristics of the data.
+- Overall, we hope to find a sorting algorithm that has high efficiency, stability, in-place operation, and adaptability. However, like other data structures and algorithms, no sorting algorithm can meet all these conditions simultaneously. In practical applications, we need to choose the appropriate sorting algorithm based on the characteristics of the data.
- The figure below compares mainstream sorting algorithms in terms of efficiency, stability, in-place nature, and adaptability.
![Sorting Algorithm Comparison](summary.assets/sorting_algorithms_comparison.png)
diff --git a/mkdocs.yml b/mkdocs.yml
index c35624940..e51872f7d 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -190,7 +190,6 @@ nav:
- 4.1 数组: chapter_array_and_linkedlist/array.md
- 4.2 链表: chapter_array_and_linkedlist/linked_list.md
- 4.3 列表: chapter_array_and_linkedlist/list.md
- # [status: new]
- 4.4 内存与缓存 *: chapter_array_and_linkedlist/ram_and_cache.md
- 4.5 小结: chapter_array_and_linkedlist/summary.md
- 第 5 章 栈与队列:
diff --git a/zh-hant/codes/go/chapter_stack_and_queue/array_deque.go b/zh-hant/codes/go/chapter_stack_and_queue/array_deque.go
index 89be939b6..4cf6632bd 100644
--- a/zh-hant/codes/go/chapter_stack_and_queue/array_deque.go
+++ b/zh-hant/codes/go/chapter_stack_and_queue/array_deque.go
@@ -72,6 +72,9 @@ func (q *arrayDeque) pushLast(num int) {
/* 佇列首出列 */
func (q *arrayDeque) popFirst() any {
num := q.peekFirst()
+ if num == nil {
+ return nil
+ }
// 佇列首指標向後移動一位
q.front = q.index(q.front + 1)
q.queSize--
@@ -81,6 +84,9 @@ func (q *arrayDeque) popFirst() any {
/* 佇列尾出列 */
func (q *arrayDeque) popLast() any {
num := q.peekLast()
+ if num == nil {
+ return nil
+ }
q.queSize--
return num
}
diff --git a/zh-hant/codes/go/chapter_stack_and_queue/array_queue.go b/zh-hant/codes/go/chapter_stack_and_queue/array_queue.go
index a719e5513..699fe4d79 100644
--- a/zh-hant/codes/go/chapter_stack_and_queue/array_queue.go
+++ b/zh-hant/codes/go/chapter_stack_and_queue/array_queue.go
@@ -49,6 +49,10 @@ func (q *arrayQueue) push(num int) {
/* 出列 */
func (q *arrayQueue) pop() any {
num := q.peek()
+ if num == nil {
+ return nil
+ }
+
// 佇列首指標向後移動一位,若越過尾部,則返回到陣列頭部
q.front = (q.front + 1) % q.queCapacity
q.queSize--
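A self-contained sketch of the guard added in the two hunks above: without the nil check, popping from an empty queue (or deque) would still advance the front pointer and decrement the size, corrupting the container's state. The type and field names below are illustrative, not the repository's.

```go
package main

import "fmt"

// miniQueue is a tiny ring-buffer queue illustrating the same guard:
// pop must not move the front pointer when the queue is empty.
type miniQueue struct {
	data  []int
	front int
	size  int
}

func (q *miniQueue) peek() any {
	if q.size == 0 {
		return nil
	}
	return q.data[q.front]
}

func (q *miniQueue) pop() any {
	num := q.peek()
	if num == nil {
		return nil // without this guard, front/size would go out of sync
	}
	q.front = (q.front + 1) % len(q.data)
	q.size--
	return num
}

func main() {
	q := &miniQueue{data: make([]int, 4)}
	fmt.Println(q.pop()) // <nil> — safe on an empty queue
}
```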
diff --git a/zh-hant/codes/go/chapter_stack_and_queue/queue_test.go b/zh-hant/codes/go/chapter_stack_and_queue/queue_test.go
index ec216f33b..8eca50e4b 100644
--- a/zh-hant/codes/go/chapter_stack_and_queue/queue_test.go
+++ b/zh-hant/codes/go/chapter_stack_and_queue/queue_test.go
@@ -46,9 +46,13 @@ func TestQueue(t *testing.T) {
}
func TestArrayQueue(t *testing.T) {
+
// 初始化佇列,使用佇列的通用介面
capacity := 10
queue := newArrayQueue(capacity)
+ if queue.pop() != nil {
+ t.Errorf("want:%v,got:%v", nil, queue.pop())
+ }
// 元素入列
queue.push(1)
diff --git a/zh-hant/codes/rust/chapter_backtracking/permutations_i.rs b/zh-hant/codes/rust/chapter_backtracking/permutations_i.rs
index 090f57d95..7b2f10bfa 100644
--- a/zh-hant/codes/rust/chapter_backtracking/permutations_i.rs
+++ b/zh-hant/codes/rust/chapter_backtracking/permutations_i.rs
@@ -23,7 +23,7 @@ fn backtrack(mut state: Vec<i32>, choices: &[i32], selected: &mut [bool], res: &
backtrack(state.clone(), choices, selected, res);
// 回退:撤銷選擇,恢復到之前的狀態
selected[i] = false;
- state.remove(state.len() - 1);
+ state.pop();
}
}
}
diff --git a/zh-hant/codes/rust/chapter_backtracking/permutations_ii.rs b/zh-hant/codes/rust/chapter_backtracking/permutations_ii.rs
index d2a97642a..8a5e85269 100644
--- a/zh-hant/codes/rust/chapter_backtracking/permutations_ii.rs
+++ b/zh-hant/codes/rust/chapter_backtracking/permutations_ii.rs
@@ -27,7 +27,7 @@ fn backtrack(mut state: Vec<i32>, choices: &[i32], selected: &mut [bool], res: &
backtrack(state.clone(), choices, selected, res);
// 回退:撤銷選擇,恢復到之前的狀態
selected[i] = false;
- state.remove(state.len() - 1);
+ state.pop();
}
}
}
diff --git a/zh-hant/codes/rust/chapter_computational_complexity/time_complexity.rs b/zh-hant/codes/rust/chapter_computational_complexity/time_complexity.rs
index 809dd0651..07c0ab9c6 100644
--- a/zh-hant/codes/rust/chapter_computational_complexity/time_complexity.rs
+++ b/zh-hant/codes/rust/chapter_computational_complexity/time_complexity.rs
@@ -113,7 +113,7 @@ fn linear_log_recur(n: i32) -> i32 {
return 1;
}
let mut count = linear_log_recur(n / 2) + linear_log_recur(n / 2);
- for _ in 0..n as i32 {
+ for _ in 0..n {
count += 1;
}
return count;
diff --git a/zh-hant/docs/chapter_backtracking/n_queens_problem.md b/zh-hant/docs/chapter_backtracking/n_queens_problem.md
index 43196a3e4..daebf2707 100644
--- a/zh-hant/docs/chapter_backtracking/n_queens_problem.md
+++ b/zh-hant/docs/chapter_backtracking/n_queens_problem.md
@@ -28,7 +28,11 @@
為了滿足列約束,我們可以利用一個長度為 $n$ 的布林型陣列 `cols` 記錄每一列是否有皇后。在每次決定放置前,我們透過 `cols` 將已有皇后的列進行剪枝,並在回溯中動態更新 `cols` 的狀態。
-那麼,如何處理對角線約束呢?設棋盤中某個格子的行列索引為 $(row, col)$ ,選定矩陣中的某條主對角線,我們發現該對角線上所有格子的行索引減列索引都相等,**即對角線上所有格子的 $row - col$ 為恆定值**。
+!!! tip
+
+ 請注意,矩陣的起點位於左上角,其中行索引從上到下增加,列索引從左到右增加。
+
+那麼,如何處理對角線約束呢?設棋盤中某個格子的行列索引為 $(row, col)$ ,選定矩陣中的某條主對角線,我們發現該對角線上所有格子的行索引減列索引都相等,**即主對角線上所有格子的 $row - col$ 為恆定值**。
也就是說,如果兩個格子滿足 $row_1 - col_1 = row_2 - col_2$ ,則它們一定處在同一條主對角線上。利用該規律,我們可以藉助下圖所示的陣列 `diags1` 記錄每條主對角線上是否有皇后。
diff --git a/zh-hant/docs/chapter_computational_complexity/time_complexity.md b/zh-hant/docs/chapter_computational_complexity/time_complexity.md
index 96f1d7ff3..75c138b2d 100755
--- a/zh-hant/docs/chapter_computational_complexity/time_complexity.md
+++ b/zh-hant/docs/chapter_computational_complexity/time_complexity.md
@@ -30,7 +30,7 @@
a = a + 1; // 1 ns
a = a * 2; // 10 ns
// 迴圈 n 次
- for (int i = 0; i < n; i++) { // 1 ns ,每輪都要執行 i++
+ for (int i = 0; i < n; i++) { // 1 ns
cout << 0 << endl; // 5 ns
}
}
@@ -45,7 +45,7 @@
a = a + 1; // 1 ns
a = a * 2; // 10 ns
// 迴圈 n 次
- for (int i = 0; i < n; i++) { // 1 ns ,每輪都要執行 i++
+ for (int i = 0; i < n; i++) { // 1 ns
System.out.println(0); // 5 ns
}
}
@@ -60,7 +60,7 @@
a = a + 1; // 1 ns
a = a * 2; // 10 ns
// 迴圈 n 次
- for (int i = 0; i < n; i++) { // 1 ns ,每輪都要執行 i++
+ for (int i = 0; i < n; i++) { // 1 ns
Console.WriteLine(0); // 5 ns
}
}
@@ -105,7 +105,7 @@
a = a + 1; // 1 ns
a = a * 2; // 10 ns
// 迴圈 n 次
- for(let i = 0; i < n; i++) { // 1 ns ,每輪都要執行 i++
+ for(let i = 0; i < n; i++) { // 1 ns
console.log(0); // 5 ns
}
}
@@ -120,7 +120,7 @@
a = a + 1; // 1 ns
a = a * 2; // 10 ns
// 迴圈 n 次
- for(let i = 0; i < n; i++) { // 1 ns ,每輪都要執行 i++
+ for(let i = 0; i < n; i++) { // 1 ns
console.log(0); // 5 ns
}
}
@@ -135,7 +135,7 @@
a = a + 1; // 1 ns
a = a * 2; // 10 ns
// 迴圈 n 次
- for (int i = 0; i < n; i++) { // 1 ns ,每輪都要執行 i++
+ for (int i = 0; i < n; i++) { // 1 ns
print(0); // 5 ns
}
}
@@ -150,7 +150,7 @@
a = a + 1; // 1 ns
a = a * 2; // 10 ns
// 迴圈 n 次
- for _ in 0..n { // 1 ns ,每輪都要執行 i++
+ for _ in 0..n { // 1 ns
println!("{}", 0); // 5 ns
}
}
@@ -165,7 +165,7 @@
a = a + 1; // 1 ns
a = a * 2; // 10 ns
// 迴圈 n 次
- for (int i = 0; i < n; i++) { // 1 ns ,每輪都要執行 i++
+ for (int i = 0; i < n; i++) { // 1 ns
printf("%d", 0); // 5 ns
}
}
@@ -180,7 +180,7 @@
a = a + 1 // 1 ns
a = a * 2 // 10 ns
// 迴圈 n 次
- for (i in 0..<n) { // 1 ns ,每輪都要執行 i++
+ for (i in 0..<n) { // 1 ns
println(0) // 5 ns
}
}
diff --git a/zh-hant/docs/chapter_sorting/sorting_algorithm.md b/zh-hant/docs/chapter_sorting/sorting_algorithm.md
--- a/zh-hant/docs/chapter_sorting/sorting_algorithm.md
+++ b/zh-hant/docs/chapter_sorting/sorting_algorithm.md
@@ -35,14 +35,12 @@
('E', 23)
```
-**自適應性**:自適應排序的時間複雜度會受輸入資料的影響,即最佳時間複雜度、最差時間複雜度、平均時間複雜度並不完全相等。
-
-自適應性需要根據具體情況來評估。如果最差時間複雜度差於平均時間複雜度,說明排序演算法在某些資料下效能可能劣化,因此被視為負面屬性;而如果最佳時間複雜度優於平均時間複雜度,則被視為正面屬性。
+**自適應性**:自適應排序能夠利用輸入資料已有的順序資訊來減少計算量,達到更優的時間效率。自適應排序演算法的最佳時間複雜度通常優於平均時間複雜度。
**是否基於比較**:基於比較的排序依賴比較運算子($<$、$=$、$>$)來判斷元素的相對順序,從而排序整個陣列,理論最優時間複雜度為 $O(n \log n)$ 。而非比較排序不使用比較運算子,時間複雜度可達 $O(n)$ ,但其通用性相對較差。
## 理想排序演算法
-**執行快、原地、穩定、正向自適應、通用性好**。顯然,迄今為止尚未發現兼具以上所有特性的排序演算法。因此,在選擇排序演算法時,需要根據具體的資料特點和問題需求來決定。
+**執行快、原地、穩定、自適應、通用性好**。顯然,迄今為止尚未發現兼具以上所有特性的排序演算法。因此,在選擇排序演算法時,需要根據具體的資料特點和問題需求來決定。
接下來,我們將共同學習各種排序演算法,並基於上述評價維度對各個排序演算法的優缺點進行分析。
diff --git a/zh-hant/docs/chapter_sorting/summary.assets/sorting_algorithms_comparison.png b/zh-hant/docs/chapter_sorting/summary.assets/sorting_algorithms_comparison.png
index 5b2fac915..1fd5338ff 100644
Binary files a/zh-hant/docs/chapter_sorting/summary.assets/sorting_algorithms_comparison.png and b/zh-hant/docs/chapter_sorting/summary.assets/sorting_algorithms_comparison.png differ
diff --git a/zh-hant/docs/chapter_sorting/summary.md b/zh-hant/docs/chapter_sorting/summary.md
index b1bde36f7..969b37a5f 100644
--- a/zh-hant/docs/chapter_sorting/summary.md
+++ b/zh-hant/docs/chapter_sorting/summary.md
@@ -9,7 +9,7 @@
- 桶排序包含三個步驟:資料分桶、桶內排序和合並結果。它同樣體現了分治策略,適用於資料體量很大的情況。桶排序的關鍵在於對資料進行平均分配。
- 計數排序是桶排序的一個特例,它透過統計資料出現的次數來實現排序。計數排序適用於資料量大但資料範圍有限的情況,並且要求資料能夠轉換為正整數。
- 基數排序透過逐位排序來實現資料排序,要求資料能夠表示為固定位數的數字。
-- 總的來說,我們希望找到一種排序演算法,具有高效率、穩定、原地以及正向自適應性等優點。然而,正如其他資料結構和演算法一樣,沒有一種排序演算法能夠同時滿足所有這些條件。在實際應用中,我們需要根據資料的特性來選擇合適的排序演算法。
+- 總的來說,我們希望找到一種排序演算法,具有高效率、穩定、原地以及自適應性等優點。然而,正如其他資料結構和演算法一樣,沒有一種排序演算法能夠同時滿足所有這些條件。在實際應用中,我們需要根據資料的特性來選擇合適的排序演算法。
- 下圖對比了主流排序演算法的效率、穩定性、就地性和自適應性等。
![排序演算法對比](summary.assets/sorting_algorithms_comparison.png)
diff --git a/zh-hant/docs/index.html b/zh-hant/docs/index.html
index 578ddaca8..de1acac46 100644
--- a/zh-hant/docs/index.html
+++ b/zh-hant/docs/index.html
@@ -286,6 +286,14 @@
Zig, Rust
+