?) {
// Check if it's a solution
if (isSolution(state)) {
// Record the solution
recordSolution(state, res)
// Stop searching
return
}
// Iterate through all choices
for (choice in choices) {
// Pruning: check if the choice is valid
if (isValid(state, choice)) {
// Try: make a choice, update the state
makeChoice(state, choice)
backtrack(state, choices, res)
// Retreat: undo the choice, revert to the previous state
undoChoice(state, choice)
}
}
}
```
=== "Ruby"
```ruby title=""
```
=== "Zig"
```zig title=""
```
Next, we solve Example Three based on the framework code. The `state` is the node traversal path, `choices` are the current node's left and right children, and the result `res` is the list of paths:
```src
[file]{preorder_traversal_iii_template}-[class]{}-[func]{backtrack}
```
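The listing above is pulled in from the book's source files when the site is built. As a rough, non-authoritative sketch of what it might look like, the framework methods for Example Three could be filled in as follows in Python; the `TreeNode` class and all function names here are assumptions of this sketch, not the book's exact source:

```python
class TreeNode:
    """A minimal binary tree node (an assumption of this sketch)"""
    def __init__(self, val: int, left: "TreeNode | None" = None, right: "TreeNode | None" = None):
        self.val = val
        self.left = left
        self.right = right

def is_solution(state: list[TreeNode]) -> bool:
    """A state is a solution when the path ends at a node with value 7"""
    return len(state) > 0 and state[-1].val == 7

def record_solution(state: list[TreeNode], res: list[list[TreeNode]]):
    """Record a solution by copying the current path"""
    res.append(list(state))

def is_valid(state: list[TreeNode], choice: "TreeNode | None") -> bool:
    """Pruning: a choice is valid only if the node exists and its value is not 3"""
    return choice is not None and choice.val != 3

def make_choice(state: list[TreeNode], choice: TreeNode):
    """Try: append the chosen node to the path"""
    state.append(choice)

def undo_choice(state: list[TreeNode], choice: TreeNode):
    """Retreat: remove the last node from the path"""
    state.pop()

def backtrack(state: list[TreeNode], choices: list["TreeNode | None"], res: list[list[TreeNode]]):
    """Backtracking framework applied to Example Three"""
    # Check if it's a solution
    if is_solution(state):
        # Record the solution; no return here, so the search continues
        record_solution(state, res)
    # Iterate through all choices
    for choice in choices:
        # Pruning: check if the choice is valid
        if is_valid(state, choice):
            # Try: make a choice, update the state
            make_choice(state, choice)
            # Proceed to the next round of choices: the chosen node's children
            backtrack(state, [choice.left, choice.right], res)
            # Retreat: undo the choice, revert to the previous state
            undo_choice(state, choice)
```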
As per the requirements, after finding a node with a value of $7$, the search should continue, **thus the `return` statement after recording the solution should be removed**. The figure below compares the search processes with and without retaining the `return` statement.
![Comparison of retaining and removing the return in the search process](backtracking_algorithm.assets/backtrack_remove_return_or_not.png)
Compared to the implementation based on preorder traversal, the code based on the backtracking algorithm framework seems verbose, but it is much more general. In fact, **many backtracking problems can be solved within this framework**: we only need to define `state` and `choices` for the specific problem and implement the framework's methods.
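To run the framework on a concrete input, a hypothetical driver for the Example Three sketch above could build a small tree (not the one from the book's figure) and launch the search with an empty path, offering the root as the only initial choice:

```python
# A small hypothetical tree:
#         1
#        / \
#       7   3
#      / \   \
#     2   7   7
root = TreeNode(1,
                TreeNode(7, TreeNode(2), TreeNode(7)),
                TreeNode(3, right=TreeNode(7)))

res: list[list[TreeNode]] = []
# Start from an empty path; the only initial choice is the root node
backtrack(state=[], choices=[root], res=res)

for path in res:
    print([node.val for node in path])
# The subtree rooted at the node with value 3 is pruned, so the output is:
# [1, 7]
# [1, 7, 7]
```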
## Common terminology
To analyze algorithmic problems more clearly, we summarize the meanings of commonly used terminology in backtracking algorithms and provide corresponding examples from Example Three as shown in the table below.
Table: Common backtracking algorithm terminology
| Term | Definition | Example Three |
| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
| Solution | An answer that satisfies the specific conditions of the problem; a problem may have one or more solutions | All paths from the root node to nodes with value $7$ that satisfy the constraint |
| Constraint | A condition in the problem that limits the feasibility of solutions, often used for pruning | Paths must not contain nodes with value $3$ |
| State | The situation of the problem at a certain moment, including the choices made so far | The path of nodes visited so far, i.e., the `path` node list |
| Attempt | The process of exploring the solution space based on the available choices: making a choice, updating the state, and checking whether it is a solution | Recursively visiting the left (right) child node, appending the node to `path`, and checking whether the node's value is $7$ |
| Backtracking | The action of undoing previous choices and reverting to the previous state when a state violates the constraints | Terminating the search and returning from the function when going past a leaf node or encountering a node with value $3$ |
| Pruning | A method of avoiding meaningless search paths based on the characteristics and constraints of the problem, which improves search efficiency | Not continuing the search along a branch after encountering a node with value $3$ |
!!! tip
Concepts such as problem, solution, and state are general and appear in divide and conquer, backtracking, dynamic programming, and greedy algorithms, among others.
## Advantages and limitations
The backtracking algorithm is essentially a depth-first search that tries all possible solutions until one that satisfies the conditions is found. The advantage of this approach is that it can find all possible solutions, and with reasonable pruning it can be quite efficient.
However, when dealing with large-scale or complex problems, **the running efficiency of backtracking may be unacceptable**.
- **Time**: Backtracking algorithms usually need to traverse every possible state in the state space, which can reach exponential or even factorial time complexity.
- **Space**: Each recursive call must save the current state (such as the path and auxiliary variables used for pruning); when the recursion depth is large, the space requirement can become significant.
Even so, **backtracking remains the best solution for certain search problems and constraint satisfaction problems**. For these problems, there is no way to predict which choices will lead to valid solutions, so we must traverse all possible choices. In this case, **the key is how to improve efficiency**. Two common optimization techniques are listed below, followed by a short sketch.
- **Pruning**: Avoid searching paths that definitely will not produce a solution, thus saving time and space.
- **Heuristic search**: Introduce some strategies or estimates during the search process to prioritize the paths that are most likely to produce valid solutions.
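As a minimal, self-contained illustration (using the subset-sum problem listed in the next section, and not taken from the book's reference code), sorting the candidates lets the loop abandon a branch as soon as a choice overshoots the remaining target; ordering the choices before trying them is also the hook where a heuristic could steer the search toward more promising branches first:

```python
def subset_sum(nums: list[int], target: int) -> list[list[int]]:
    """Find all subsets of nums (each element used at most once) that sum to target"""
    res: list[list[int]] = []
    # Order the choices: ascending order makes the pruning below possible
    # and explores "cheaper" candidates first
    nums = sorted(nums)

    def backtrack(start: int, remaining: int, state: list[int]):
        if remaining == 0:
            res.append(list(state))  # record a solution
            return
        for i in range(start, len(nums)):
            # Pruning: this choice already exceeds the remaining target, and so
            # does every later one (the list is sorted), so stop this branch
            if nums[i] > remaining:
                break
            state.append(nums[i])                         # try: make the choice
            backtrack(i + 1, remaining - nums[i], state)  # next round of choices
            state.pop()                                   # retreat: undo the choice

    backtrack(0, target, [])
    return res

print(subset_sum([3, 4, 5, 2, 6], 9))  # [[2, 3, 4], [3, 6], [4, 5]]
```

Here the ordering is a simple static one; a genuine heuristic search would rank the choices at each step with a problem-specific estimate of how likely they are to lead to a valid solution.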
## Typical backtracking problems
Backtracking algorithms can be used to solve many search problems, constraint satisfaction problems, and combinatorial optimization problems.
**Search problems**: The goal of these problems is to find solutions that meet specific conditions.
- Full permutation problem: Given a set, find all possible permutations of its elements.
- Subset sum problem: Given a set and a target sum, find all subsets of the set that sum to the target.
- Tower of Hanoi problem: Given three rods and a series of different-sized discs, the goal is to move all the discs from one rod to another, moving only one disc at a time, and never placing a larger disc on a smaller one.
**Constraint satisfaction problems**: The goal of these problems is to find solutions that satisfy all the constraints.
- $n$ queens: Place $n$ queens on an $n \times n$ chessboard so that they do not attack each other.
- Sudoku: Fill a $9 \times 9$ grid with the numbers $1$ to $9$, ensuring that the numbers do not repeat in each row, each column, and each $3 \times 3$ subgrid.
- Graph coloring problem: Given an undirected graph, color each vertex with the fewest possible colors so that adjacent vertices have different colors.
**Combinatorial optimization problems**: The goal of these problems is to find the optimal solution within a combination space that meets certain conditions.
- 0-1 knapsack problem: Given a set of items and a backpack, each item has a certain value and weight. The goal is to choose items to maximize the total value within the backpack's capacity limit.
- Traveling salesman problem: In a graph, starting from one point, visit all other points exactly once and then return to the starting point, seeking the shortest path.
- Maximum clique problem: Given an undirected graph, find the largest complete subgraph, i.e., a subgraph where any two vertices are connected by an edge.
Please note that for many combinatorial optimization problems, backtracking is not the optimal solution.
- The 0-1 knapsack problem is usually solved using dynamic programming to achieve higher time efficiency.
- The traveling salesman problem is a well-known NP-hard problem, commonly solved using genetic algorithms, ant colony optimization, and other methods.
- The maximum clique problem is a classic problem in graph theory, which can be solved using greedy algorithms and other heuristic methods.