<p><aclass="glightbox"href="../subset_sum_problem.assets/subset_sum_i.png"data-type="image"data-width="100%"data-height="auto"data-desc-position="bottom"><imgalt="子集和 I 回溯过程"class="animation-figure"src="../subset_sum_problem.assets/subset_sum_i.png"/></a></p>
<p><aclass="glightbox"href="../subset_sum_problem.assets/subset_sum_ii.png"data-type="image"data-width="100%"data-height="auto"data-desc-position="bottom"><imgalt="子集和 II 回溯过程"class="animation-figure"src="../subset_sum_problem.assets/subset_sum_ii.png"/></a></p>
<p>As shown in the image below, to insert an element in the middle of an array, all elements following the insertion point must be moved one position back to make room for the new element.</p>
<p><aclass="glightbox"href="../array.assets/array_insert_element.png"data-type="image"data-width="100%"data-height="auto"data-desc-position="bottom"><imgalt="Array Element Insertion Example"class="animation-figure"src="../array.assets/array_insert_element.png"/></a></p>
<p>Similarly, as illustrated below, to delete an element at index <spanclass="arithmatex">\(i\)</span>, all elements following index <spanclass="arithmatex">\(i\)</span> must be moved forward by one position.</p>
<p><aclass="glightbox"href="../array.assets/array_remove_element.png"data-type="image"data-width="100%"data-height="auto"data-desc-position="bottom"><imgalt="Array Element Deletion Example"class="animation-figure"src="../array.assets/array_remove_element.png"/></a></p>
<p>Overall, the insertion and deletion operations in arrays have the following disadvantages:</p>
<ul>
<li><strong>High Time Complexity</strong>: Both insertion and deletion in an array have an average time complexity of <span class="arithmatex">\(O(n)\)</span>, where <span class="arithmatex">\(n\)</span> is the length of the array.</li>
</ul>
<p>In complex system environments, it's challenging to ensure that the memory space following an array is available, making it unsafe to extend the array's capacity. Therefore, in most programming languages, <strong>the length of an array is immutable</strong>.</p>
<p>To expand an array, we need to create a larger array and then copy the elements from the original array. This operation has a time complexity of <spanclass="arithmatex">\(O(n)\)</span> and can be time-consuming for large arrays. The code is as follows:</p>
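<p>A minimal Python sketch of this expansion (the function name <code>extend</code> and the parameter <code>enlarge</code> are illustrative):</p>
<pre><code class="language-python">def extend(nums: list[int], enlarge: int) -&gt; list[int]:
    """Expand the array by allocating a larger one and copying elements over"""
    # Allocate a new array of the extended length
    res = [0] * (len(nums) + enlarge)
    # Copy every element from the original array: O(n)
    for i in range(len(nums)):
        res[i] = nums[i]
    # Return the expanded array; the original is left for garbage collection
    return res
</code></pre>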
<h2id="412-advantages-and-limitations-of-arrays">4.1.2 Advantages and Limitations of Arrays<aclass="headerlink"href="#412-advantages-and-limitations-of-arrays"title="Permanent link">¶</a></h2>
<p>Arrays are stored in contiguous memory spaces and consist of elements of the same type. This design provides a wealth of prior information that the system can exploit to optimize the data structure's operational efficiency.</p>
<h3id="3-deleting-a-node">3. Deleting a Node<aclass="headerlink"href="#3-deleting-a-node"title="Permanent link">¶</a></h3>
<p>As shown below, deleting a node in a linked list is also very convenient, <strong>requiring only the change of one node's reference (pointer)</strong>.</p>
<p>Note that although node <code>P</code> still points to <code>n1</code> after the deletion operation is completed, it is no longer accessible when traversing the list, meaning <code>P</code> is no longer part of the list.</p>
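<p>A sketch of this operation, assuming a simple illustrative <code>ListNode</code> class with a <code>next</code> reference:</p>
<pre><code class="language-python">class ListNode:
    """Linked list node (illustrative definition)"""
    def __init__(self, val: int):
        self.val = val
        self.next = None

def remove(n0: ListNode):
    """Remove the node that follows n0 in the list"""
    if n0.next is None:
        return
    P = n0.next   # P: the node to delete
    n1 = P.next   # n1: the node after P
    # Bypass P with a single reference change: n0 now points directly to n1
    n0.next = n1
</code></pre>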
<p><strong>Accessing nodes in a linked list is less efficient</strong>. As mentioned earlier, any element in an array can be accessed in <span class="arithmatex">\(O(1)\)</span> time. However, in a linked list, the program needs to start from the head node and traverse each node sequentially until it finds the target node. That is, accessing the <span class="arithmatex">\(i\)</span>-th node of a linked list requires <span class="arithmatex">\(i - 1\)</span> iterations, with a time complexity of <span class="arithmatex">\(O(n)\)</span>.</p>
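<p>A sketch of such sequential access, using the same illustrative <code>ListNode</code> class (here <code>index</code> is 0-based):</p>
<pre><code class="language-python">def access(head: ListNode, index: int) -&gt; ListNode | None:
    """Access the node at position index, starting from the head node"""
    for _ in range(index):
        if head is None:
            return None   # index is out of range
        head = head.next  # advance one node per iteration
    return head
</code></pre>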
<p>Traverse the linked list to find a node with a value equal to <code>target</code>, and output the index of that node in the linked list. This process also falls under linear search. The code is as follows:</p>
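<p>A minimal sketch of this linear search, again with the illustrative <code>ListNode</code> class:</p>
<pre><code class="language-python">def find(head: ListNode, target: int) -&gt; int:
    """Return the index of the first node whose value equals target"""
    index = 0
    while head is not None:
        if head.val == target:
            return index   # found: return its position in the list
        head = head.next
        index += 1
    return -1              # target is not in the list
</code></pre>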
<h2id="422-arrays-vs-linked-lists">4.2.2 Arrays vs. Linked Lists<aclass="headerlink"href="#422-arrays-vs-linked-lists"title="Permanent link">¶</a></h2>
<p>The following table summarizes the characteristics of arrays and linked lists and compares their operational efficiencies. Since they employ two opposite storage strategies, their properties and operational efficiencies also show contrasting features.</p>
<palign="center"> Table 4-1 Efficiency Comparison of Arrays and Linked Lists </p>
<p>The flowchart below represents this sum function.</p>
<p><aclass="glightbox"href="../iteration_and_recursion.assets/iteration.png"data-type="image"data-width="100%"data-height="auto"data-desc-position="bottom"><imgalt="Flowchart of the Sum Function"class="animation-figure"src="../iteration_and_recursion.assets/iteration.png"/></a></p>
<palign="center"> Figure 2-1 Flowchart of the Sum Function </p>
<p><strong>The <code>while</code> loop is more flexible than the <code>for</code> loop</strong>. In a <code>while</code> loop, we can freely design the initialization and update steps of the condition variable.</p>
<p>For example, in the following code, the condition variable <span class="arithmatex">\(i\)</span> is updated twice in each round, which would be inconvenient to implement with a <code>for</code> loop:</p>
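<p>A sketch of such a loop (the name <code>while_loop_ii</code> is illustrative; <code>i</code> is incremented and then doubled in every round):</p>
<pre><code class="language-python">def while_loop_ii(n: int) -&gt; int:
    """Sum with a while loop whose condition variable is updated twice per round"""
    res = 0
    i = 1
    while i &lt;= n:
        res += i
        i += 1   # first update of the condition variable
        i *= 2   # second update of the condition variable
    return res
</code></pre>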
<p>Overall, <strong><code>for</code> loops are more concise, while <code>while</code> loops are more flexible</strong>. Both can implement iterative structures. Which one to use should be determined based on the specific requirements of the problem.</p>
<p>The flowchart below represents this nested loop.</p>
<p><aclass="glightbox"href="../iteration_and_recursion.assets/nested_iteration.png"data-type="image"data-width="100%"data-height="auto"data-desc-position="bottom"><imgalt="Flowchart of the Nested Loop"class="animation-figure"src="../iteration_and_recursion.assets/nested_iteration.png"/></a></p>
<palign="center"> Figure 2-2 Flowchart of the Nested Loop </p>
<p>Figure 2-3 shows the recursive process of this function.</p>
<p><aclass="glightbox"href="../iteration_and_recursion.assets/recursion_sum.png"data-type="image"data-width="100%"data-height="auto"data-desc-position="bottom"><imgalt="Recursive Process of the Sum Function"class="animation-figure"src="../iteration_and_recursion.assets/recursion_sum.png"/></a></p>
<palign="center"> Figure 2-3 Recursive Process of the Sum Function </p>
<p>The execution process of tail recursion is shown in the following figure. Comparing regular recursion and tail recursion, the point at which the summation operation is performed differs.</p>
<ul>
<li><strong>Regular Recursion</strong>: The summation operation occurs during the "return" phase, requiring another summation after each layer returns.</li>
<li><strong>Tail Recursion</strong>: The summation operation occurs during the "call" phase, so the "return" phase only passes the finished result back through each layer; see the sketch after this list.</li>
</ul>
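<p>A minimal sketch of the tail-recursive variant, assuming an accumulator parameter <code>res</code> that carries the partial sum:</p>
<pre><code class="language-python">def tail_recur(n: int, res: int = 0) -&gt; int:
    """Tail-recursive sum: the addition happens before the recursive call"""
    # Termination condition: res now holds the complete sum
    if n == 0:
        return res
    # The summation is done in the call phase; nothing remains to do on return
    return tail_recur(n - 1, res + n)
</code></pre>
<p>Note that CPython does not perform tail-call optimization, so this form still consumes one stack frame per call in Python; the distinction matters in languages and compilers that do optimize tail calls.</p>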
<p>Observing the above code, we see that it recursively calls two functions within itself, <strong>meaning that one call generates two branching calls</strong>. As illustrated below, this continuous recursive calling eventually creates a "recursion tree" with a depth of <span class="arithmatex">\(n\)</span>.</p>
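<p>A sketch of such doubly-branching recursion, the classic Fibonacci computation (with the convention <span class="arithmatex">\(f(1) = 0\)</span>, <span class="arithmatex">\(f(2) = 1\)</span>):</p>
<pre><code class="language-python">def fib(n: int) -&gt; int:
    """Fibonacci: each call spawns two further calls, forming a recursion tree"""
    # Termination conditions: fib(1) = 0, fib(2) = 1
    if n == 1 or n == 2:
        return n - 1
    # One call generates two branching calls
    return fib(n - 1) + fib(n - 2)
</code></pre>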
<p>Observing the above code, when recursion is transformed into iteration, the code becomes more complex. Although iteration and recursion can often be transformed into each other, it's not always advisable to do so for two reasons:</p>
<ul>
<li>The transformed code may become harder to understand and less readable.</li>
<li>For some complex problems, simulating the behavior of the system's call stack can be quite difficult.</li>
</ul>
<h3id="2-linear-order-on">2. Linear Order <spanclass="arithmatex">\(O(n)\)</span><aclass="headerlink"href="#2-linear-order-on"title="Permanent link">¶</a></h3>
<p>Linear order is common in arrays, linked lists, stacks, queues, etc., where the number of elements is proportional to <spanclass="arithmatex">\(n\)</span>:</p>
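<p>A sketch of structures whose size scales with <span class="arithmatex">\(n\)</span> (the name <code>linear_space</code> is illustrative):</p>
<pre><code class="language-python">def linear_space(n: int):
    """Linear-order space: structures holding n elements"""
    nums = [0] * n                            # list of length n
    mapping = {i: str(i) for i in range(n)}   # hash map with n entries
    return nums, mapping
</code></pre>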
<p>As shown below, this function's recursive depth is <span class="arithmatex">\(n\)</span>, meaning there are <span class="arithmatex">\(n\)</span> unreturned instances of the <code>linear_recur()</code> function at the same time, using <span class="arithmatex">\(O(n)\)</span> stack frame space:</p>
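<p>A sketch of <code>linear_recur()</code> matching this description:</p>
<pre><code class="language-python">def linear_recur(n: int):
    """Linear-order space produced by recursion: n stack frames at peak"""
    print("recursion n =", n)
    # Termination condition
    if n == 1:
        return
    # Each unreturned call keeps its own stack frame alive
    linear_recur(n - 1)
</code></pre>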
<p><aclass="glightbox"href="../space_complexity.assets/space_complexity_recursive_linear.png"data-type="image"data-width="100%"data-height="auto"data-desc-position="bottom"><imgalt="Recursive Function Generating Linear Order Space Complexity"class="animation-figure"src="../space_complexity.assets/space_complexity_recursive_linear.png"/></a></p>
<palign="center"> Figure 2-17 Recursive Function Generating Linear Order Space Complexity </p>
<p>As shown below, the recursive depth of this function is <span class="arithmatex">\(n\)</span>, and in each recursive call, an array is initialized with lengths <span class="arithmatex">\(n\)</span>, <span class="arithmatex">\(n-1\)</span>, <span class="arithmatex">\(\dots\)</span>, <span class="arithmatex">\(2\)</span>, <span class="arithmatex">\(1\)</span>, averaging <span class="arithmatex">\(n/2\)</span>, thus overall occupying <span class="arithmatex">\(O(n^2)\)</span> space:</p>
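<p>A sketch consistent with this description (the name <code>quadratic_recur</code> is illustrative):</p>
<pre><code class="language-python">def quadratic_recur(n: int) -&gt; int:
    """Quadratic-order space: recursion depth n, plus an O(n) array per frame"""
    if n &lt;= 0:
        return 0
    # Each recursive call allocates an array of length n, n-1, ..., 1
    nums = [0] * n
    return quadratic_recur(n - 1)
</code></pre>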
<p><aclass="glightbox"href="../space_complexity.assets/space_complexity_recursive_quadratic.png"data-type="image"data-width="100%"data-height="auto"data-desc-position="bottom"><imgalt="Recursive Function Generating Quadratic Order Space Complexity"class="animation-figure"src="../space_complexity.assets/space_complexity_recursive_quadratic.png"/></a></p>
<palign="center"> Figure 2-18 Recursive Function Generating Quadratic Order Space Complexity </p>
<p><aclass="glightbox"href="../space_complexity.assets/space_complexity_exponential.png"data-type="image"data-width="100%"data-height="auto"data-desc-position="bottom"><imgalt="Full Binary Tree Generating Exponential Order Space Complexity"class="animation-figure"src="../space_complexity.assets/space_complexity_exponential.png"/></a></p>
<palign="center"> Figure 2-19 Full Binary Tree Generating Exponential Order Space Complexity </p>
<h3id="2-linear-order-on">2. Linear Order <spanclass="arithmatex">\(O(n)\)</span><aclass="headerlink"href="#2-linear-order-on"title="Permanent link">¶</a></h3>
<p>Linear order indicates the number of operations grows linearly with the input data size <span class="arithmatex">\(n\)</span>. Linear order commonly appears in single-loop structures:</p>
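<p>A sketch of such a single-loop structure, where the loop count scales with <span class="arithmatex">\(n\)</span>:</p>
<pre><code class="language-python">def linear(n: int) -&gt; int:
    """Linear order: the loop body executes n times"""
    count = 0
    for _ in range(n):
        count += 1
    return count
</code></pre>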
<p>Operations like array traversal and linked list traversal have a time complexity of <span class="arithmatex">\(O(n)\)</span>, where <span class="arithmatex">\(n\)</span> is the length of the array or list:</p>
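<p>An array-traversal sketch, where <span class="arithmatex">\(n\)</span> is the length of <code>nums</code>:</p>
<pre><code class="language-python">def array_traversal(nums: list[int]) -&gt; int:
    """Linear order: one operation per element of nums"""
    count = 0
    for num in nums:
        count += 1
    return count
</code></pre>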
<p>It's important to note that <strong>the input data size <span class="arithmatex">\(n\)</span> should be determined based on the type of input data</strong>. For example, in the first example, <span class="arithmatex">\(n\)</span> represents the input data size, while in the second example, the length of the array <span class="arithmatex">\(n\)</span> is the data size.</p>
<h3 id="3-quadratic-order-on2">3. Quadratic Order <span class="arithmatex">\(O(n^2)\)</span><a class="headerlink" href="#3-quadratic-order-on2" title="Permanent link">¶</a></h3>
<p>Quadratic order means the number of operations grows quadratically with the input data size <span class="arithmatex">\(n\)</span>. Quadratic order typically appears in nested loops, where both the outer and inner loops have a time complexity of <span class="arithmatex">\(O(n)\)</span>, resulting in an overall complexity of <span class="arithmatex">\(O(n^2)\)</span>:</p>
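<p>A nested-loop sketch:</p>
<pre><code class="language-python">def quadratic(n: int) -&gt; int:
    """Quadratic order: n * n iterations in total"""
    count = 0
    for i in range(n):       # outer loop: O(n)
        for j in range(n):   # inner loop: O(n)
            count += 1
    return count
</code></pre>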
<p>The following image compares constant order, linear order, and quadratic order time complexities.</p>
<p><aclass="glightbox"href="../time_complexity.assets/time_complexity_constant_linear_quadratic.png"data-type="image"data-width="100%"data-height="auto"data-desc-position="bottom"><imgalt="Constant, Linear, and Quadratic Order Time Complexities"class="animation-figure"src="../time_complexity.assets/time_complexity_constant_linear_quadratic.png"/></a></p>
<palign="center"> Figure 2-10 Constant, Linear, and Quadratic Order Time Complexities </p>
<h3id="4-exponential-order-o2n">4. Exponential Order <spanclass="arithmatex">\(O(2^n)\)</span><aclass="headerlink"href="#4-exponential-order-o2n"title="Permanent link">¶</a></h3>
<p>Biological "cell division" is a classic example of exponential order growth: starting with one cell, it becomes two after one division, four after two divisions, and so on, resulting in <spanclass="arithmatex">\(2^n\)</span> cells after <spanclass="arithmatex">\(n\)</span> divisions.</p>
<p>The following image and code simulate the cell division process, with a time complexity of <spanclass="arithmatex">\(O(2^n)\)</span>:</p>
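<p>A sketch of the simulation, counting cells over <span class="arithmatex">\(n\)</span> rounds of division:</p>
<pre><code class="language-python">def exponential(n: int) -&gt; int:
    """Exponential order: the cell count doubles every round"""
    count, base = 0, 1
    # Cells split in two each round: 1, 2, 4, 8, ..., 2^(n-1)
    for _ in range(n):
        for _ in range(base):
            count += 1
        base *= 2
    # Total: 1 + 2 + 4 + ... + 2^(n-1) = 2^n - 1
    return count
</code></pre>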
<p><aclass="glightbox"href="../time_complexity.assets/time_complexity_exponential.png"data-type="image"data-width="100%"data-height="auto"data-desc-position="bottom"><imgalt="Exponential Order Time Complexity"class="animation-figure"src="../time_complexity.assets/time_complexity_exponential.png"/></a></p>
<palign="center"> Figure 2-11 Exponential Order Time Complexity </p>
<p>Exponential order growth is extremely rapid and is commonly seen in exhaustive search methods (brute force, backtracking, etc.). For large-scale problems, exponential order is unacceptable, often requiring dynamic programming or greedy algorithms as solutions.</p>
<h3id="5-logarithmic-order-olog-n">5. Logarithmic Order <spanclass="arithmatex">\(O(\log n)\)</span><aclass="headerlink"href="#5-logarithmic-order-olog-n"title="Permanent link">¶</a></h3>
<p>In contrast to exponential order, logarithmic order reflects situations where "the size is halved each round." Given an input data size <spanclass="arithmatex">\(n\)</span>, since the size is halved each round, the number of iterations is <spanclass="arithmatex">\(\log_2 n\)</span>, the inverse function of <spanclass="arithmatex">\(2^n\)</span>.</p>
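<p>A halving-loop sketch:</p>
<pre><code class="language-python">def logarithmic(n: int) -&gt; int:
    """Logarithmic order: n is halved each round"""
    count = 0
    while n &gt; 1:
        n = n / 2     # halve the remaining size
        count += 1
    return count      # count is about log2(n)
</code></pre>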
<p><aclass="glightbox"href="../time_complexity.assets/time_complexity_logarithmic.png"data-type="image"data-width="100%"data-height="auto"data-desc-position="bottom"><imgalt="Logarithmic Order Time Complexity"class="animation-figure"src="../time_complexity.assets/time_complexity_logarithmic.png"/></a></p>
<palign="center"> Figure 2-12 Logarithmic Order Time Complexity </p>
<p>Logarithmic order is typical of algorithms based on the divide-and-conquer strategy, embodying the ideas of "splitting one into many" and "simplifying the complex." It grows slowly and is the most ideal time complexity after constant order.</p>
<divclass="admonition tip">
<pclass="admonition-title">What is the base of <spanclass="arithmatex">\(O(\log n)\)</span>?</p>
<p>Technically, "halving each round" gives <span class="arithmatex">\(O(\log_2 n)\)</span>, but by the change-of-base formula, <span class="arithmatex">\(O(\log_m n) = O(\log_k n / \log_k m) = O(\log_k n)\)</span>: logarithms of different bases differ only by a constant factor, so the base is conventionally omitted and the complexity is written simply as <span class="arithmatex">\(O(\log n)\)</span>.</p>
</div>
<p>The image below demonstrates how linear-logarithmic order is generated. Each level of a binary tree has <span class="arithmatex">\(n\)</span> operations, and the tree has <span class="arithmatex">\(\log_2 n + 1\)</span> levels, resulting in a time complexity of <span class="arithmatex">\(O(n \log n)\)</span>.</p>
<p><a class="glightbox" href="../time_complexity.assets/time_complexity_logarithmic_linear.png" data-type="image" data-width="100%" data-height="auto" data-desc-position="bottom"><img alt="Linear-Logarithmic Order Time Complexity" class="animation-figure" src="../time_complexity.assets/time_complexity_logarithmic_linear.png"/></a></p>
<p align="center"> Figure 2-13 Linear-Logarithmic Order Time Complexity </p>
<p><aclass="glightbox"href="../time_complexity.assets/time_complexity_factorial.png"data-type="image"data-width="100%"data-height="auto"data-desc-position="bottom"><imgalt="Factorial Order Time Complexity"class="animation-figure"src="../time_complexity.assets/time_complexity_factorial.png"/></a></p>
<palign="center"> Figure 2-14 Factorial Order Time Complexity </p>
<p>It's important to note that the best-case time complexity is rarely used in practice, as it is usually only achievable under very low probabilities and might be misleading. <strong>The worst-case time complexity is more practical as it provides a safety value for efficiency</strong>, allowing us to confidently use the algorithm.</p>
<p>From the above example, it's clear that both the worst-case and best-case time complexities only occur under "special data distributions," which may have a small probability of occurrence and may not accurately reflect the algorithm's run efficiency. In contrast, <strong>the average time complexity can reflect the algorithm's efficiency under random input data</strong>, denoted by the <span class="arithmatex">\(\Theta\)</span> notation.</p>
<p>For some algorithms, we can simply estimate the average case under a random data distribution. For example, in the aforementioned example, since the input array is shuffled, the probability of element <span class="arithmatex">\(1\)</span> appearing at any index is equal. Therefore, the average number of loops for the algorithm is half the length of the array <span class="arithmatex">\(n / 2\)</span>, giving an average time complexity of <span class="arithmatex">\(\Theta(n / 2) = \Theta(n)\)</span>.</p>
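<p>A sketch of that example: searching a shuffled array for the element <span class="arithmatex">\(1\)</span> (the name <code>find_one</code> is illustrative):</p>
<pre><code class="language-python">import random

def find_one(nums: list[int]) -&gt; int:
    """Return the index of element 1; on shuffled input, n/2 loops on average"""
    for i, num in enumerate(nums):
        if num == 1:
            return i
    return -1

# After shuffling, element 1 is equally likely to be at every index
nums = list(range(1, 11))
random.shuffle(nums)
print(find_one(nums))
</code></pre>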