Add the initial translation of chapter "greedy" (#1320)
# Fractional knapsack problem
!!! question

    Given $n$ items, where the $i$-th item has weight $wgt[i-1]$ and value $val[i-1]$, and a knapsack with a capacity of $cap$. Each item can be chosen only once, **but a part of an item may be selected, with its value calculated in proportion to the chosen weight**. What is the maximum value of the items in the knapsack under the capacity limit? An example is shown in the figure below.

![Example data of the fractional knapsack problem](fractional_knapsack_problem.assets/fractional_knapsack_example.png)

The fractional knapsack problem is very similar to the 0-1 knapsack problem overall: the state consists of the current item $i$ and the remaining capacity $c$, and the goal is to maximize the value within the knapsack's limited capacity.

The difference is that, in this problem, only a part of an item can be chosen. As shown in the figure below, **we can arbitrarily split the items and calculate the corresponding value based on the weight proportion**.
1. For item $i$, its value per unit weight is $val[i-1] / wgt[i-1]$, referred to as the unit value.
2. Suppose we put a part of item $i$ with weight $w$ into the knapsack, then the value added to the knapsack is $w \times val[i-1] / wgt[i-1]$.
![Value per unit weight of the item](fractional_knapsack_problem.assets/fractional_knapsack_unit_value.png)
### Greedy strategy determination
Maximizing the total value of the items in the knapsack essentially means maximizing the value per unit weight. From this, the greedy strategy shown below can be deduced.
1. Sort the items by their unit value from high to low.
2. Iterate over all items, **greedily choosing the item with the highest unit value in each round**.
3. If the remaining capacity of the knapsack is insufficient, use part of the current item to fill the knapsack.
![Greedy strategy of the fractional knapsack problem](fractional_knapsack_problem.assets/fractional_knapsack_greedy_strategy.png)
### Code implementation
We have created an `Item` class in order to sort the items by their unit value. We loop and make greedy choices until the knapsack is full, then exit and return the solution:
```src
[file]{fractional_knapsack}-[class]{}-[func]{fractional_knapsack}
```
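
The `src` tag above is the book's placeholder for its multi-language implementations. As a stand-in, here is a minimal Python sketch of the same approach; the function signature mirrors the placeholder name, but the details are assumptions rather than the book's exact code.

```python
class Item:
    """An item with a weight w and a value v."""

    def __init__(self, w: int, v: int):
        self.w = w  # weight
        self.v = v  # value

def fractional_knapsack(wgt: list[int], val: list[int], cap: int) -> float:
    """Fractional knapsack: greedily take items in order of unit value."""
    # Build the item list and sort by unit value v / w, highest first
    items = [Item(w, v) for w, v in zip(wgt, val)]
    items.sort(key=lambda item: item.v / item.w, reverse=True)
    res = 0.0
    for item in items:
        if item.w <= cap:
            # The whole item fits: take all of it
            res += item.v
            cap -= item.w
        else:
            # Only part fits: take a fraction proportional to the remaining capacity
            res += item.v / item.w * cap
            break  # the knapsack is now full
    return res
```

For example, with `wgt = [10, 20, 30]`, `val = [60, 100, 120]`, and `cap = 50` (illustrative numbers, not the figure's data), the sketch returns `240.0`: the first two items are taken whole, and two-thirds of the third item fills the remaining capacity.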
Apart from sorting, in the worst case, the entire list of items needs to be traversed, **hence the time complexity is $O(n)$**, where $n$ is the number of items.
Since an `Item` object list is initialized, **the space complexity is $O(n)$**.
### Correctness proof

We use proof by contradiction. Suppose item $x$ has the highest unit value, and that some algorithm yields a maximum value `res` whose solution does not include item $x$.

Now remove a unit weight of any item from the knapsack and replace it with a unit weight of item $x$. Since item $x$ has the highest unit value, the total value after the replacement will definitely be greater than `res`. **This contradicts the assumption that `res` is the optimal solution, proving that the optimal solution must include item $x$**.

For other items in this solution, we can also construct the above contradiction. Overall, **items with greater unit value are always better choices**, proving that the greedy strategy is effective.
As shown in the figure below, if the item weight and unit value are viewed as the horizontal and vertical axes of a two-dimensional chart respectively, the fractional knapsack problem can be transformed into "seeking the largest area enclosed within a limited horizontal axis range". This analogy can help us understand the effectiveness of the greedy strategy from a geometric perspective.
![Geometric representation of the fractional knapsack problem](fractional_knapsack_problem.assets/fractional_knapsack_area_chart.png)
# Greedy algorithms

A <u>greedy algorithm</u> is a common approach to solving optimization problems: at each decision stage it makes the seemingly best choice, i.e., it greedily makes locally optimal decisions in the hope of reaching a globally optimal solution. Greedy algorithms are concise and efficient, and are widely used in many practical problems.

Greedy algorithms and dynamic programming are both commonly used to solve optimization problems. They share some similarities, such as relying on the property of optimal substructure, but they operate differently.
- Dynamic programming considers all previous decisions at the current decision stage and uses solutions to past subproblems to construct solutions for the current subproblem.
- Greedy algorithms do not consider past decisions; instead, they proceed with greedy choices, continually narrowing the scope of the problem until it is solved.
Let's first understand the working principle of the greedy algorithm through the example of "coin change," which has been introduced in the "Complete Knapsack Problem" chapter. I believe you are already familiar with it.
!!! question

    Given $n$ types of coins, where the denomination of the $i$-th type is $coins[i - 1]$, and the target amount is $amt$. Each type of coin can be selected repeatedly. What is the minimum number of coins needed to make up the target amount? If it is not possible to make up the target amount, return $-1$.

The greedy strategy adopted in this problem is shown in the following figure. Given the target amount, **we greedily choose the coin that is closest to and not greater than it**, repeatedly following this step until the target amount is met.
![Greedy strategy for coin change](greedy_algorithm.assets/coin_change_greedy_strategy.png)
The implementation code is as follows:
```src
[file]{coin_change_greedy}-[class]{}-[func]{coin_change_greedy}
```
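
As above, the `src` tag is a placeholder for the book's implementations. A minimal Python sketch of this greedy loop might look as follows, assuming `coins` is sorted in ascending order; everything beyond the function name is an assumption, not the book's exact code.

```python
def coin_change_greedy(coins: list[int], amt: int) -> int:
    """Greedy coin change, assuming coins is sorted in ascending order."""
    i = len(coins) - 1  # start from the largest denomination
    count = 0
    # Keep making greedy choices until no amount remains
    while amt > 0:
        # Find the largest coin that does not exceed the remaining amount
        while i > 0 and coins[i] > amt:
            i -= 1
        amt -= coins[i]
        count += 1
    # amt < 0 means the last coin overshot and no exact combination was found
    return count if amt == 0 else -1
```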
You might exclaim: So clean! The greedy algorithm solves the coin change problem in about ten lines of code.
## Advantages and limitations of greedy algorithms
**Greedy algorithms are not only straightforward and simple to implement, but they are also usually very efficient**. In the code above, if the smallest coin denomination is $\min(coins)$, the greedy choice loops at most $amt / \min(coins)$ times, giving a time complexity of $O(amt / \min(coins))$. This is an order of magnitude smaller than the time complexity of the dynamic programming solution, which is $O(n \times amt)$.
However, **for some combinations of coin denominations, greedy algorithms cannot find the optimal solution**. The following figure provides two examples, and a small verification sketch follows the list.
- **Positive example $coins = [1, 5, 10, 20, 50, 100]$**: In this coin combination, given any $amt$, the greedy algorithm can find the optimal solution.
- **Negative example $coins = [1, 20, 50]$**: Suppose $amt = 60$, the greedy algorithm can only find the combination $50 + 1 \times 10$, totaling 11 coins, but dynamic programming can find the optimal solution of $20 + 20 + 20$, needing only 3 coins.
- **Negative example $coins = [1, 49, 50]$**: Suppose $amt = 98$, the greedy algorithm can only find the combination $50 + 1 \times 48$, totaling 49 coins, but dynamic programming can find the optimal solution of $49 + 49$, needing only 2 coins.
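
Both negative examples can be checked by comparing the greedy sketch above against a standard dynamic programming baseline. The `coin_change_dp` helper below is a hypothetical name introduced only for this comparison.

```python
def coin_change_dp(coins: list[int], amt: int) -> int:
    """Reference DP solution: minimum number of coins, or -1 if impossible."""
    INF = amt + 1  # sentinel larger than any feasible coin count
    dp = [0] + [INF] * amt
    for a in range(1, amt + 1):
        for c in coins:
            if c <= a:
                dp[a] = min(dp[a], dp[a - c] + 1)
    return dp[amt] if dp[amt] != INF else -1

# Negative example: greedy picks 50 + 1 * 10, DP finds 20 + 20 + 20
print(coin_change_greedy([1, 20, 50], 60))  # 11
print(coin_change_dp([1, 20, 50], 60))      # 3
```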
![Examples where greedy algorithms do not find the optimal solution](greedy_algorithm.assets/coin_change_greedy_vs_dp.png)

This means that, for the coin change problem, the greedy algorithm cannot guarantee finding the globally optimal solution and may even find a very poor one; this kind of problem is better suited to dynamic programming.

Generally, the suitability of greedy algorithms falls into two categories.
1. **Guaranteed to find the optimal solution**: In these cases, greedy algorithms are often the best choice, as they tend to be more efficient than backtracking or dynamic programming.

2. **Can find a near-optimal solution**: Greedy algorithms are also applicable here. For many hard problems, finding the globally optimal solution is very challenging, and being able to find a high-quality suboptimal solution efficiently is already quite valuable.

## Characteristics of greedy algorithms

So, what kinds of problems are suitable for solving with greedy algorithms? Or rather, under what conditions is a greedy algorithm guaranteed to find the optimal solution?

Compared to dynamic programming, greedy algorithms have stricter usage conditions, focusing mainly on two properties of the problem.
- **Greedy choice property**: Only when the locally optimal choice can always lead to a globally optimal solution can greedy algorithms guarantee to obtain the optimal solution.
- **Optimal substructure**: The optimal solution to the original problem contains the optimal solutions to its subproblems.
Optimal substructure has already been introduced in the "Dynamic Programming" chapter, so it is not discussed further here. It's important to note that some problems do not have an obvious optimal substructure, but can still be solved using greedy algorithms.
We mainly explore the method for determining the greedy choice property. Although its description seems simple, **in practice, proving the greedy choice property for many problems is not easy**.

For example, in the coin change problem, although we can easily cite counterexamples to disprove the greedy choice property, proving that the property holds is much more challenging. If asked **what conditions a coin combination must satisfy for a greedy algorithm to find the optimal solution**, we often have to rely on intuition or examples to give an ambiguous answer, as a rigorous mathematical proof is hard to provide.

!!! quote

    A paper presents an algorithm with a time complexity of $O(n^3)$ for determining whether a coin combination can use a greedy algorithm to find the optimal solution for any amount.

    Pearson, D. A polynomial-time algorithm for the change-making problem[J]. Operations Research Letters, 2005, 33(3): 231-234.

## Steps for solving problems with greedy algorithms
The problem-solving process for greedy problems can generally be divided into the following three steps.

1. **Problem analysis**: Sort out and understand the characteristics of the problem, including the state definition, the optimization objective, and the constraints. This step is also involved in backtracking and dynamic programming.

2. **Determine the greedy strategy**: Determine how to make a greedy choice at each step. This strategy can reduce the scale of the problem at each step and eventually solve the entire problem.
3. **Proof of correctness**: It is usually necessary to prove that the problem has both a greedy choice property and optimal substructure. This step may require mathematical proofs, such as induction or reductio ad absurdum.
Determining the greedy strategy is the core step in solving the problem, but it may not be easy to implement, mainly for the following reasons.
- **Greedy strategies vary greatly between different problems**. For many problems, the greedy strategy is fairly straightforward, and we can come up with it through some general thinking and attempts. However, for some complex problems, the greedy strategy may be very elusive, which is a real test of individual problem-solving experience and algorithmic capability.
- **Some greedy strategies are quite misleading**. When we confidently design a greedy strategy, write the code, and submit it for testing, it is quite possible that some test cases will not pass. This is because the designed greedy strategy is only "partially correct," as described above with the coin change example.
To ensure accuracy, we should provide rigorous mathematical proofs for the greedy strategy, **usually involving reductio ad absurdum or mathematical induction**.
However, proving correctness may not be an easy task. If we are at a loss, we usually choose to debug the code based on test cases, modifying and verifying the greedy strategy step by step.
## Typical problems solved by greedy algorithms
Greedy algorithms are often applied to optimization problems that satisfy the properties of greedy choice and optimal substructure. Below are some typical greedy algorithm problems.

- **Coin change problem**: For some coin combinations, the greedy algorithm always yields the optimal solution.

- **Interval scheduling problem**: Suppose you have several tasks, each taking place over a period of time, and your goal is to complete as many tasks as possible. Always choosing the task that ends earliest achieves the optimal solution (a sketch follows this list).

- **Fractional knapsack problem**: Given a set of items and a carrying capacity, the goal is to select items so that the total weight does not exceed the capacity and the total value is maximized. If you always choose the item with the highest value-to-weight ratio (value / weight), the greedy algorithm achieves the optimal solution.

- **Stock trading problem**: Given a sequence of historical stock prices, you may complete multiple trades, but if you already hold the stock, you must sell it before buying again. The goal is to maximize the total profit.

- **Huffman coding**: Huffman coding is a greedy algorithm used for lossless data compression. By constructing a Huffman tree, it always merges the two nodes with the lowest frequency, resulting in a Huffman tree with the minimum weighted path length (coding length).
- **Dijkstra's algorithm**: It is a greedy algorithm for solving the shortest path problem from a given source vertex to all other vertices.
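
To make one of these strategies concrete, here is a minimal sketch of the interval scheduling greedy: sort the tasks by end time and keep every task that starts no earlier than the last accepted task ends. The function name and the `(start, end)` tuple format are assumptions for illustration.

```python
def max_non_overlapping(tasks: list[tuple[int, int]]) -> int:
    """Interval scheduling: greedily keep the task that ends earliest."""
    count = 0
    last_end = float("-inf")
    # Sort tasks by end time, then scan once
    for start, end in sorted(tasks, key=lambda t: t[1]):
        if start >= last_end:
            count += 1  # accept this task
            last_end = end
    return count

# Three overlapping tasks; choosing the earliest-ending one yields 2
print(max_non_overlapping([(1, 3), (2, 5), (4, 7)]))  # 2
```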
# Greedy
![Greedy](../assets/covers/chapter_greedy.jpg)
!!! abstract

    Sunflowers turn towards the sun, always seeking the greatest possible growth for themselves.

    The greedy strategy guides us step by step toward the best answer through rounds of simple choices.
# Maximum capacity problem
!!! question

    Input an array $ht$, where each element represents the height of a vertical partition. Any two partitions in the array, along with the space between them, can form a container.

    The capacity of the container is the product of the height and the width (area), where the height is determined by the shorter partition, and the width is the difference in array indices between the two partitions.

    Please select two partitions in the array that maximize the container's capacity and return this maximum capacity. An example is shown in the following figure.

![Example data for the maximum capacity problem](max_capacity_problem.assets/max_capacity_example.png)
The container is formed by any two partitions, **therefore the state of this problem is represented by the indices of the two partitions, denoted as $[i, j]$**.
According to the problem statement, the capacity equals the product of height and width, where the height is determined by the shorter partition, and the width is the difference in array indices between the two partitions. The formula for capacity $cap[i, j]$ is:
$$
cap[i, j] = \min(ht[i], ht[j]) \times (j - i)
$$
Assuming the length of the array is $n$, the number of combinations of two partitions (total number of states) is $C_n^2 = \frac{n(n - 1)}{2}$. The most straightforward approach is to **enumerate all possible states**, resulting in a time complexity of $O(n^2)$.
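
As a baseline, this enumeration translates directly into code. The sketch below is an illustration (the function name is made up for this example), computing $cap[i, j]$ for every pair of partitions.

```python
def max_capacity_brute_force(ht: list[int]) -> int:
    """Enumerate all C(n, 2) partition pairs and track the best capacity."""
    n = len(ht)
    res = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            # cap[i, j] = min(ht[i], ht[j]) * (j - i)
            res = max(res, min(ht[i], ht[j]) * (j - i))
    return res
```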
### Determination of a greedy strategy
There is a more efficient solution to this problem. As shown in the following figure, we select a state $[i, j]$ where the indices $i < j$ and the height $ht[i] < ht[j]$, meaning $i$ is the shorter partition, and $j$ is the taller one.
![Initial state](max_capacity_problem.assets/max_capacity_initial_state.png)
As shown in the following figure, **if we move the taller partition $j$ closer to the shorter partition $i$, the capacity will definitely decrease**.
This is because when moving the taller partition $j$, the width $j-i$ definitely decreases; and since the height is determined by the shorter partition, the height can only remain the same (if $i$ remains the shorter partition) or decrease (if the moved $j$ becomes the shorter partition).
![State after moving the taller partition inward](max_capacity_problem.assets/max_capacity_moving_long_board.png)
Conversely, **we can only possibly increase the capacity by moving the shorter partition $i$ inward**. Although the width will definitely decrease, **the height may increase** (if the moved shorter partition $i$ becomes taller). For example, in the figure below, the area increases after moving the shorter partition.
![State after moving the shorter partition inward](max_capacity_problem.assets/max_capacity_moving_short_board.png)
This leads us to the greedy strategy for this problem: initialize two pointers at the ends of the container, and in each round, move the pointer corresponding to the shorter partition inward until the two pointers meet.
The following figures illustrate the execution of the greedy strategy.
1. Initially, the pointers $i$ and $j$ are positioned at the ends of the array.
2. Calculate the current state's capacity $cap[i, j]$ and update the maximum capacity.
3. Compare the heights of partitions $i$ and $j$, and move the shorter partition inward by one step.
4. Repeat steps `2.` and `3.` until $i$ and $j$ meet.
=== "<1>"

    ![The greedy process for maximum capacity problem](max_capacity_problem.assets/max_capacity_greedy_step1.png)
=== "<2>"

    ![max_capacity_greedy_step2](max_capacity_problem.assets/max_capacity_greedy_step2.png)
=== "<3>"

    ![max_capacity_greedy_step3](max_capacity_problem.assets/max_capacity_greedy_step3.png)
=== "<4>"

    ![max_capacity_greedy_step4](max_capacity_problem.assets/max_capacity_greedy_step4.png)
=== "<5>"

    ![max_capacity_greedy_step5](max_capacity_problem.assets/max_capacity_greedy_step5.png)
=== "<6>"

    ![max_capacity_greedy_step6](max_capacity_problem.assets/max_capacity_greedy_step6.png)
=== "<7>"

    ![max_capacity_greedy_step7](max_capacity_problem.assets/max_capacity_greedy_step7.png)
=== "<8>"

    ![max_capacity_greedy_step8](max_capacity_problem.assets/max_capacity_greedy_step8.png)
=== "<9>"

    ![max_capacity_greedy_step9](max_capacity_problem.assets/max_capacity_greedy_step9.png)
### Implementation
The code loops at most $n$ times, **thus the time complexity is $O(n)$**.
The variables $i$, $j$, and $res$ use a constant amount of extra space, **thus the space complexity is $O(1)$**.
```src
[file]{max_capacity}-[class]{}-[func]{max_capacity}
```
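
As with the other `src` placeholders, the block above references the book's implementations. A minimal Python sketch of the two-pointer loop, written directly from the description (an assumption-based stand-in, not the book's exact code), is:

```python
def max_capacity(ht: list[int]) -> int:
    """Maximum capacity: move the shorter partition inward each round."""
    # Initialize i and j at the two ends of the array
    i, j = 0, len(ht) - 1
    res = 0
    # Loop until the two pointers meet
    while i < j:
        # Update the maximum capacity
        res = max(res, min(ht[i], ht[j]) * (j - i))
        # Move the pointer at the shorter partition inward
        if ht[i] < ht[j]:
            i += 1
        else:
            j -= 1
    return res
```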
### Proof of correctness
The reason why the greedy method is faster than enumeration is that each round of greedy selection "skips" some states.
For example, under the state $cap[i, j]$ where $i$ is the shorter partition and $j$ is the taller partition, greedily moving the shorter partition $i$ inward by one step leads to the "skipped" states shown below. **This means that these states' capacities cannot be verified later**.
$$
cap[i, i+1], cap[i, i+2], \dots, cap[i, j-2], cap[i, j-1]
$$
![States skipped by moving the shorter partition](max_capacity_problem.assets/max_capacity_skipped_states.png)
It is observed that **these skipped states are actually all states where the taller partition $j$ is moved inward**. We have already proven that moving the taller partition inward will definitely decrease the capacity. Therefore, the skipped states cannot possibly be the optimal solution, **and skipping them does not lead to missing the optimal solution**.
The analysis shows that the operation of moving the shorter partition is "safe", and the greedy strategy is effective.
# Summary
- Greedy algorithms are often used to solve optimization problems, where the principle is to make locally optimal decisions at each decision stage in order to achieve a globally optimal solution.
- Greedy algorithms iteratively make one greedy choice after another, transforming the problem into a smaller sub-problem with each round, until the problem is resolved.
- Greedy algorithms are not only simple to implement but also have high problem-solving efficiency. Compared to dynamic programming, greedy algorithms generally have a lower time complexity.
- In the problem of coin change, greedy algorithms can guarantee the optimal solution for certain combinations of coins; for others, however, the greedy algorithm might find a very poor solution.
- Problems suitable for greedy algorithm solutions possess two main properties: greedy-choice property and optimal substructure. The greedy-choice property represents the effectiveness of the greedy strategy.

- For some complex problems, proving the greedy-choice property is not straightforward. Conversely, proving that it does not hold is often easier, as with the coin change problem.

- Solving greedy problems mainly consists of three steps: problem analysis, determining the greedy strategy, and proving correctness. Among these, determining the greedy strategy is the key step, while proving correctness often poses the challenge.

- The fractional knapsack problem builds on the 0-1 knapsack problem by allowing a fraction of an item to be selected, so it can be solved with a greedy algorithm. The correctness of the greedy strategy can be proved by contradiction.

- The maximum capacity problem can be solved by enumerating all states, with a time complexity of $O(n^2)$. By designing a greedy strategy that moves the shorter partition inward in each round, the time complexity is optimized to $O(n)$.

- In the maximum product after cutting problem, we deduced two greedy strategies: an integer $\geq 4$ should keep being cut, and the optimal cutting factor is $3$. The code involves power operations, and the time complexity depends on how the power operation is implemented, typically $O(1)$ or $O(\log n)$.