Unify figure references as "the figure below" and "the figure above".

Bug fixes.
This commit is contained in:
krahets 2024-05-01 07:26:40 +08:00
parent c14a358aa7
commit 12fc5e49eb
35 changed files with 89 additions and 89 deletions

View File

@ -20,7 +20,7 @@ As shown in the figure below, there is an "edit icon" in the upper right corner
![Edit page button](contribution.assets/edit_markdown.png)
Images cannot be directly modified and require the creation of a new [Issue](https://github.com/krahets/hello-algo/issues) or a comment to describe the problem. We will redraw and replace the images as soon as possible.
Figures cannot be directly modified and require the creation of a new [Issue](https://github.com/krahets/hello-algo/issues) or a comment to describe the problem. We will redraw and replace the figures as soon as possible.
### Content creation

View File

@ -6,7 +6,7 @@ We recommend using the open-source, lightweight VS Code as your local Integrated
![Download VS Code from the official website](installation.assets/vscode_installation.png)
VS Code has a powerful extension ecosystem, supporting the execution and debugging of most programming languages. For example, after installing the "Python Extension Pack," you can debug Python code. The installation steps are shown in the following figure.
VS Code has a powerful extension ecosystem, supporting the execution and debugging of most programming languages. For example, after installing the "Python Extension Pack," you can debug Python code. The installation steps are shown in the figure below.
![Install VS Code Extension Pack](installation.assets/vscode_extension_installation.png)

View File

@ -42,7 +42,7 @@ From a garbage collection perspective, for languages with automatic garbage coll
If an element is searched first and then deleted, the time complexity is indeed $O(n)$. However, the $O(1)$ advantage of linked lists in insertion and deletion can be realized in other applications. For example, in the implementation of double-ended queues using linked lists, we maintain pointers always pointing to the head and tail nodes, making each insertion and deletion operation $O(1)$.
**Q**: In the image "Linked List Definition and Storage Method", do the light blue storage nodes occupy a single memory address, or do they share half with the node value?
**Q**: In the figure "Linked List Definition and Storage Method", do the light blue storage nodes occupy a single memory address, or do they share half with the node value?
The diagram is just a qualitative representation; quantitative analysis depends on specific situations.
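As a concrete illustration of that $O(1)$ head/tail advantage, here is a minimal Python sketch of a linked-list-based deque that always keeps `head` and `tail` pointers (class and method names are illustrative, not the book's listings; the symmetric `push_first`/`pop_last` operations are omitted for brevity):

```python
class Node:
    """Doubly linked list node"""
    def __init__(self, val):
        self.val = val
        self.prev = None
        self.next = None

class LinkedListDeque:
    """Deque sketch: head/tail pointers give O(1) insertion and deletion at both ends"""
    def __init__(self):
        self.head = None  # always points to the head node
        self.tail = None  # always points to the tail node

    def push_last(self, val):
        """Append at the tail in O(1): no traversal needed"""
        node = Node(val)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            node.prev = self.tail
            self.tail = node

    def pop_first(self):
        """Remove from the head in O(1)"""
        node = self.head
        self.head = node.next
        if self.head is None:
            self.tail = None
        else:
            self.head.prev = None
        return node.val
```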

View File

@ -36,7 +36,7 @@ Based on the code from Example One, we need to use a list `path` to record the v
In each "try", we record the path by adding the current node to `path`; before "retreating", we need to pop the node from `path` **to restore the state before this attempt**.
Observe the process shown below, **we can understand trying and retreating as "advancing" and "undoing"**, two operations that are reverse to each other.
Observe the process shown in the figure below, **we can understand trying and retreating as "advancing" and "undoing"**, two operations that are reverse to each other.
=== "<1>"
![Trying and retreating](backtracking_algorithm.assets/preorder_find_paths_step1.png)

View File

@ -8,7 +8,7 @@ As shown in the figure below, when $n = 4$, there are two solutions. From the pe
![Solution to the 4 queens problem](n_queens_problem.assets/solution_4_queens.png)
The following image shows the three constraints of this problem: **multiple queens cannot be on the same row, column, or diagonal**. It is important to note that diagonals are divided into the main diagonal `\` and the secondary diagonal `/`.
The figure below shows the three constraints of this problem: **multiple queens cannot be on the same row, column, or diagonal**. It is important to note that diagonals are divided into the main diagonal `\` and the secondary diagonal `/`.
![Constraints of the n queens problem](n_queens_problem.assets/n_queens_constraints.png)
@ -18,7 +18,7 @@ As the number of queens equals the number of rows on the chessboard, both being
This means that we can adopt a row-by-row placing strategy: starting from the first row, place one queen per row until the last row is reached.
The image below shows the row-by-row placing process for the 4 queens problem. Due to space limitations, the image only expands one search branch of the first row, and prunes any placements that do not meet the column and diagonal constraints.
The figure below shows the row-by-row placing process for the 4 queens problem. Due to space limitations, the figure only expands one search branch of the first row, and prunes any placements that do not meet the column and diagonal constraints.
![Row-by-row placing strategy](n_queens_problem.assets/n_queens_placing.png)
@ -30,7 +30,7 @@ To satisfy column constraints, we can use a boolean array `cols` of length $n$ t
How about the diagonal constraints? Let the row and column indices of a cell on the chessboard be $(row, col)$. By selecting a specific main diagonal, we notice that the difference $row - col$ is the same for all cells on that diagonal, **meaning that $row - col$ is a constant value on that diagonal**.
Thus, if two cells satisfy $row_1 - col_1 = row_2 - col_2$, they are definitely on the same main diagonal. Using this pattern, we can utilize the array `diags1` shown below to track whether a queen is on any main diagonal.
Thus, if two cells satisfy $row_1 - col_1 = row_2 - col_2$, they are definitely on the same main diagonal. Using this pattern, we can utilize the array `diags1` shown in the figure below to track whether a queen is on any main diagonal.
Similarly, **the sum $row + col$ is a constant value for all cells on a secondary diagonal**. We can also use the array `diags2` to handle secondary diagonal constraints.
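Putting the three arrays together, the row-by-row backtracking can be sketched in Python as follows (an illustrative version, not necessarily the book's exact listing; the offset in `row - col + n - 1` shifts main-diagonal indices into the non-negative range $[0, 2n-2]$):

```python
def n_queens(n):
    """Backtracking sketch: place one queen per row, pruning with cols/diags1/diags2"""
    state = [["#"] * n for _ in range(n)]
    cols = [False] * n               # is a column occupied?
    diags1 = [False] * (2 * n - 1)   # main diagonals, indexed by row - col + n - 1
    diags2 = [False] * (2 * n - 1)   # secondary diagonals, indexed by row + col
    res = []

    def backtrack(row):
        if row == n:  # all queens placed: record the solution
            res.append(["".join(r) for r in state])
            return
        for col in range(n):
            d1, d2 = row - col + n - 1, row + col
            if cols[col] or diags1[d1] or diags2[d2]:
                continue  # prune: column or diagonal already occupied
            # try: place the queen and mark its column and diagonals
            state[row][col] = "Q"
            cols[col] = diags1[d1] = diags2[d2] = True
            backtrack(row + 1)
            # retreat: restore the state before this attempt
            state[row][col] = "#"
            cols[col] = diags1[d1] = diags2[d2] = False

    backtrack(0)
    return res
```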

View File

@ -22,7 +22,7 @@ From the perspective of the backtracking algorithm, **we can imagine the process
From the code perspective, the candidate set `choices` contains all elements of the input array, and the state `state` contains elements that have been selected so far. Please note that each element can only be chosen once, **thus all elements in `state` must be unique**.
As shown in the following figure, we can unfold the search process into a recursive tree, where each node represents the current state `state`. Starting from the root node, after three rounds of choices, we reach the leaf nodes, each corresponding to a permutation.
As shown in the figure below, we can unfold the search process into a recursive tree, where each node represents the current state `state`. Starting from the root node, after three rounds of choices, we reach the leaf nodes, each corresponding to a permutation.
![Permutation recursive tree](permutations_problem.assets/permutations_i.png)
@ -33,11 +33,11 @@ To ensure that each element is selected only once, we consider introducing a boo
- After making the choice `choice[i]`, we set `selected[i]` to $\text{True}$, indicating it has been chosen.
- When iterating through the choice list `choices`, skip all nodes that have already been selected, i.e., prune.
As shown in the following figure, suppose we choose 1 in the first round, 3 in the second round, and 2 in the third round, we need to prune the branch of element 1 in the second round and elements 1 and 3 in the third round.
As shown in the figure below, suppose we choose 1 in the first round, 3 in the second round, and 2 in the third round, we need to prune the branch of element 1 in the second round and elements 1 and 3 in the third round.
![Permutation pruning example](permutations_problem.assets/permutations_i_pruning.png)
Observing the above figure, this pruning operation reduces the search space size from $O(n^n)$ to $O(n!)$.
Observing the figure above, this pruning operation reduces the search space size from $O(n^n)$ to $O(n!)$.
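The `selected`-based pruning described above can be sketched in Python as follows (illustrative, not necessarily the book's exact listing):

```python
def permutations_i(nums):
    """Permutations sketch: `selected` prevents reusing an element in `state`"""
    res, state = [], []
    selected = [False] * len(nums)

    def backtrack():
        if len(state) == len(nums):  # a full permutation is formed
            res.append(list(state))
            return
        for i, choice in enumerate(nums):
            if selected[i]:
                continue  # prune: this element is already in `state`
            selected[i] = True
            state.append(choice)
            backtrack()
            state.pop()          # retreat: undo the choice
            selected[i] = False

    backtrack()
    return res
```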
### Code implementation
@ -55,7 +55,7 @@ After understanding the above information, we can "fill in the blanks" in the fr
Suppose the input array is $[1, 1, 2]$. To differentiate the two duplicate elements $1$, we mark the second $1$ as $\hat{1}$.
As shown in the following figure, half of the permutations generated by the above method are duplicates.
As shown in the figure below, half of the permutations generated by the above method are duplicates.
![Duplicate permutations](permutations_problem.assets/permutations_ii.png)
@ -63,7 +63,7 @@ So, how do we eliminate duplicate permutations? Most directly, consider using a
### Pruning of equal elements
Observing the following figure, in the first round, choosing $1$ or $\hat{1}$ results in identical permutations under both choices, thus we should prune $\hat{1}$.
Observing the figure below, in the first round, choosing $1$ or $\hat{1}$ results in identical permutations under both choices, thus we should prune $\hat{1}$.
Similarly, after choosing $2$ in the first round, choosing $1$ and $\hat{1}$ in the second round also produces duplicate branches, so we should also prune $\hat{1}$ in the second round.
@ -90,6 +90,6 @@ Please note, although both `selected` and `duplicated` are used for pruning, the
- **Repeated choice pruning**: There is only one `selected` throughout the search process. It records which elements are currently in the state, aiming to prevent an element from appearing repeatedly in `state`.
- **Equal element pruning**: Each round of choices (each call to the `backtrack` function) contains a `duplicated`. It records which elements have been chosen in the current traversal (`for` loop), aiming to ensure equal elements are selected only once.
The following figure shows the scope of the two pruning conditions. Note, each node in the tree represents a choice, and the nodes from the root to the leaf form a permutation.
The figure below shows the scope of the two pruning conditions. Note, each node in the tree represents a choice, and the nodes from the root to the leaf form a permutation.
![Scope of the two pruning conditions](permutations_problem.assets/permutations_ii_pruning_summary.png)
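To make the two scopes concrete: in the sketch below a single `selected` array is shared by the whole search, while a fresh `duplicated` set is created in every call to `backtrack` (illustrative Python, assuming hashable elements; not necessarily the book's exact listing):

```python
def permutations_ii(nums):
    """Permutations with duplicates: repeated-choice and equal-element pruning"""
    res, state = [], []
    selected = [False] * len(nums)  # one array for the whole search

    def backtrack():
        if len(state) == len(nums):
            res.append(list(state))
            return
        duplicated = set()  # fresh per round: tracks values tried in this for loop
        for i, choice in enumerate(nums):
            if selected[i] or choice in duplicated:
                continue  # prune: element in use, or an equal value already tried
            duplicated.add(choice)
            selected[i] = True
            state.append(choice)
            backtrack()
            state.pop()
            selected[i] = False

    backtrack()
    return res
```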

View File

@ -23,7 +23,7 @@ Unlike the permutation problem, **elements in this problem can be chosen an unli
Inputting the array $[3, 4, 5]$ and target element $9$ into the above code yields the results $[3, 3, 3], [4, 5], [5, 4]$. **Although it successfully finds all subsets with a sum of $9$, it includes the duplicate subsets $[4, 5]$ and $[5, 4]$**.
This is because the search process distinguishes the order of choices, however, subsets do not distinguish the choice order. As shown in the following figure, choosing $4$ before $5$ and choosing $5$ before $4$ are different branches, but correspond to the same subset.
This is because the search process distinguishes the order of choices, however, subsets do not distinguish the choice order. As shown in the figure below, choosing $4$ before $5$ and choosing $5$ before $4$ are different branches, but correspond to the same subset.
![Subset search and pruning out of bounds](subset_sum_problem.assets/subset_sum_i_naive.png)
@ -34,7 +34,7 @@ To eliminate duplicate subsets, **a straightforward idea is to deduplicate the r
### Duplicate subset pruning
**We consider deduplication during the search process through pruning**. Observing the following figure, duplicate subsets are generated when choosing array elements in different orders, for example in the following situations.
**We consider deduplication during the search process through pruning**. Observing the figure below, duplicate subsets are generated when choosing array elements in different orders, for example in the following situations.
1. When choosing $3$ in the first round and $4$ in the second round, all subsets containing these two elements are generated, denoted as $[3, 4, \dots]$.
2. Later, when $4$ is chosen in the first round, **the second round should skip $3$** because the subset $[4, 3, \dots]$ generated by this choice completely duplicates the subset from step `1.`.
@ -62,7 +62,7 @@ Besides, we have made the following two optimizations to the code.
[file]{subset_sum_i}-[class]{}-[func]{subset_sum_i}
```
The following figure shows the overall backtracking process after inputting the array $[3, 4, 5]$ and target element $9$ into the above code.
The figure below shows the overall backtracking process after inputting the array $[3, 4, 5]$ and target element $9$ into the above code.
![Subset sum I backtracking process](subset_sum_problem.assets/subset_sum_i.png)
@ -74,7 +74,7 @@ The following figure shows the overall backtracking process after inputting the
Compared to the previous question, **this question's input array may contain duplicate elements**, introducing new problems. For example, given the array $[4, \hat{4}, 5]$ and target element $9$, the existing code's output results in $[4, 5], [\hat{4}, 5]$, resulting in duplicate subsets.
**The reason for this duplication is that equal elements are chosen multiple times in a certain round**. In the following figure, the first round has three choices, two of which are $4$, generating two duplicate search branches, thus outputting duplicate subsets; similarly, the two $4$s in the second round also produce duplicate subsets.
**The reason for this duplication is that equal elements are chosen multiple times in a certain round**. In the figure below, the first round has three choices, two of which are $4$, generating two duplicate search branches, thus outputting duplicate subsets; similarly, the two $4$s in the second round also produce duplicate subsets.
![Duplicate subsets caused by equal elements](subset_sum_problem.assets/subset_sum_ii_repeat.png)
@ -90,6 +90,6 @@ At the same time, **this question stipulates that each array element can only be
[file]{subset_sum_ii}-[class]{}-[func]{subset_sum_ii}
```
The following figure shows the backtracking process for the array $[4, 4, 5]$ and target element $9$, including four types of pruning operations. Please combine the illustration with the code comments to understand the entire search process and how each type of pruning operation works.
The figure below shows the backtracking process for the array $[4, 4, 5]$ and target element $9$, including four types of pruning operations. Please combine the illustration with the code comments to understand the entire search process and how each type of pruning operation works.
![Subset sum II backtracking process](subset_sum_problem.assets/subset_sum_ii.png)
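A compact Python sketch combining the prunings discussed in this subsection might look as follows (the input is sorted first so that both the out-of-bounds pruning and the equal-element pruning work; illustrative, not the book's exact listing):

```python
def subset_sum_ii(nums, target):
    """Subset sum with duplicate elements, each usable at most once"""
    res, state = [], []
    nums = sorted(nums)  # sorting enables both prunings below

    def backtrack(start, target):
        if target == 0:  # subset sum reached: record the subset
            res.append(list(state))
            return
        for i in range(start, len(nums)):
            if nums[i] > target:
                break     # prune: all remaining elements are too large
            if i > start and nums[i] == nums[i - 1]:
                continue  # prune: equal element already tried in this round
            state.append(nums[i])
            backtrack(i + 1, target - nums[i])  # i + 1: each element used once
            state.pop()

    backtrack(0, target)
    return res
```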

View File

@ -117,7 +117,7 @@ For example, in calculating $1 + 2 + \dots + n$, we can make the result variable
[file]{recursion}-[class]{}-[func]{tail_recur}
```
The execution process of tail recursion is shown in the following figure. Comparing regular recursion and tail recursion, the point of the summation operation is different.
The execution process of tail recursion is shown in the figure below. Comparing regular recursion and tail recursion, the point of the summation operation is different.
- **Regular recursion**: The summation operation occurs during the "returning" phase, requiring another summation after each layer returns.
- **Tail recursion**: The summation operation occurs during the "calling" phase, and the "returning" phase only involves returning through each layer.
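The two variants can be sketched in Python as follows (note that CPython performs no tail-call optimization, so the tail-recursive form still consumes one stack frame per call in practice):

```python
def recur(n):
    """Regular recursion: summation happens during the returning phase"""
    if n == 1:
        return 1
    return n + recur(n - 1)  # add after the deeper call returns

def tail_recur(n, res=0):
    """Tail recursion: the partial sum is carried forward during the calling phase"""
    if n == 0:
        return res           # returning phase only passes the result back
    return tail_recur(n - 1, res + n)
```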

View File

@ -754,7 +754,7 @@ Linear order is common in arrays, linked lists, stacks, queues, etc., where the
[file]{space_complexity}-[class]{}-[func]{linear}
```
As shown below, this function's recursive depth is $n$, meaning there are $n$ instances of unreturned `linear_recur()` function, using $O(n)$ size of stack frame space:
As shown in the figure below, this function's recursive depth is $n$, meaning there are $n$ instances of unreturned `linear_recur()` function, using $O(n)$ size of stack frame space:
```src
[file]{space_complexity}-[class]{}-[func]{linear_recur}
@ -770,7 +770,7 @@ Quadratic order is common in matrices and graphs, where the number of elements i
[file]{space_complexity}-[class]{}-[func]{quadratic}
```
As shown below, the recursive depth of this function is $n$, and in each recursive call, an array is initialized with lengths $n$, $n-1$, $\dots$, $2$, $1$, averaging $n/2$, thus overall occupying $O(n^2)$ space:
As shown in the figure below, the recursive depth of this function is $n$, and in each recursive call, an array is initialized with lengths $n$, $n-1$, $\dots$, $2$, $1$, averaging $n/2$, thus overall occupying $O(n^2)$ space:
```src
[file]{space_complexity}-[class]{}-[func]{quadratic_recur}
@ -780,7 +780,7 @@ As shown below, the recursive depth of this function is $n$, and in each recursi
### Exponential order $O(2^n)$
Exponential order is common in binary trees. Observe the below image, a "full binary tree" with $n$ levels has $2^n - 1$ nodes, occupying $O(2^n)$ space:
Exponential order is common in binary trees. Observe the figure below, a "full binary tree" with $n$ levels has $2^n - 1$ nodes, occupying $O(2^n)$ space:
```src
[file]{space_complexity}-[class]{}-[func]{build_tree}

View File

@ -464,7 +464,7 @@ Let's understand this concept of "time growth trend" with an example. Assume the
}
```
The following figure shows the time complexities of these three algorithms.
The figure below shows the time complexities of these three algorithms.
- Algorithm `A` has just one print operation, and its run time does not grow with $n$. Its time complexity is considered "constant order."
- Algorithm `B` involves a print operation looping $n$ times, and its run time grows linearly with $n$. Its time complexity is "linear order."
@ -994,7 +994,7 @@ Quadratic order means the number of operations grows quadratically with the inpu
[file]{time_complexity}-[class]{}-[func]{quadratic}
```
The following image compares constant order, linear order, and quadratic order time complexities.
The figure below compares constant order, linear order, and quadratic order time complexities.
![Constant, linear, and quadratic order time complexities](time_complexity.assets/time_complexity_constant_linear_quadratic.png)
@ -1008,7 +1008,7 @@ For instance, in bubble sort, the outer loop runs $n - 1$ times, and the inner l
Biological "cell division" is a classic example of exponential order growth: starting with one cell, it becomes two after one division, four after two divisions, and so on, resulting in $2^n$ cells after $n$ divisions.
The following image and code simulate the cell division process, with a time complexity of $O(2^n)$:
The figure and code below simulate the cell division process, with a time complexity of $O(2^n)$:
```src
[file]{time_complexity}-[class]{}-[func]{exponential}
@ -1028,7 +1028,7 @@ Exponential order growth is extremely rapid and is commonly seen in exhaustive s
In contrast to exponential order, logarithmic order reflects situations where "the size is halved each round." Given an input data size $n$, since the size is halved each round, the number of iterations is $\log_2 n$, the inverse function of $2^n$.
The following image and code simulate the "halving each round" process, with a time complexity of $O(\log_2 n)$, commonly abbreviated as $O(\log n)$:
The figure and code below simulate the "halving each round" process, with a time complexity of $O(\log_2 n)$, commonly abbreviated as $O(\log n)$:
```src
[file]{time_complexity}-[class]{}-[func]{logarithmic}
@ -1062,7 +1062,7 @@ Linear-logarithmic order often appears in nested loops, with the complexities of
[file]{time_complexity}-[class]{}-[func]{linear_log_recur}
```
The image below demonstrates how linear-logarithmic order is generated. Each level of a binary tree has $n$ operations, and the tree has $\log_2 n + 1$ levels, resulting in a time complexity of $O(n \log n)$.
The figure below demonstrates how linear-logarithmic order is generated. Each level of a binary tree has $n$ operations, and the tree has $\log_2 n + 1$ levels, resulting in a time complexity of $O(n \log n)$.
![Linear-logarithmic order time complexity](time_complexity.assets/time_complexity_logarithmic_linear.png)
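Under the same assumptions (the problem halves at each level while each level does linear work), a sketch that counts the operations of such a process:

```python
def linear_log_recur(n):
    """Counts operations: two half-sized subproblems plus n work per level -> O(n log n)"""
    if n <= 1:
        return 1
    # split into two halves, as in the binary tree described above
    count = linear_log_recur(n // 2) + linear_log_recur(n // 2)
    for _ in range(n):  # linear work at the current level
        count += 1
    return count
```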
@ -1076,7 +1076,7 @@ $$
n! = n \times (n - 1) \times (n - 2) \times \dots \times 2 \times 1
$$
Factorials are typically implemented using recursion. As shown in the image and code below, the first level splits into $n$ branches, the second level into $n - 1$ branches, and so on, stopping after the $n$th level:
Factorials are typically implemented using recursion. As shown in the code and the figure below, the first level splits into $n$ branches, the second level into $n - 1$ branches, and so on, stopping after the $n$th level:
```src
[file]{time_complexity}-[class]{}-[func]{factorial_recur}

View File

@ -4,7 +4,7 @@ In both merge sorting and building binary trees, we decompose the original probl
!!! question
Given three pillars, denoted as `A`, `B`, and `C`. Initially, pillar `A` is stacked with $n$ discs, arranged in order from top to bottom from smallest to largest. Our task is to move these $n$ discs to pillar `C`, maintaining their original order (as shown below). The following rules must be followed during the disc movement process:
Given three pillars, denoted as `A`, `B`, and `C`. Initially, pillar `A` is stacked with $n$ discs, arranged in order from top to bottom from smallest to largest. Our task is to move these $n$ discs to pillar `C`, maintaining their original order (as shown in the figure below). The following rules must be followed during the disc movement process:
1. A disc can only be picked up from the top of a pillar and placed on top of another pillar.
2. Only one disc can be moved at a time.
@ -16,7 +16,7 @@ In both merge sorting and building binary trees, we decompose the original probl
### Consider the base case
As shown below, for the problem $f(1)$, i.e., when there is only one disc, we can directly move it from `A` to `C`.
As shown in the figure below, for the problem $f(1)$, i.e., when there is only one disc, we can directly move it from `A` to `C`.
=== "<1>"
![Solution for a problem of size 1](hanota_problem.assets/hanota_f1_step1.png)
@ -24,7 +24,7 @@ As shown below, for the problem $f(1)$, i.e., when there is only one disc, we ca
=== "<2>"
![hanota_f1_step2](hanota_problem.assets/hanota_f1_step2.png)
As shown below, for the problem $f(2)$, i.e., when there are two discs, **since the smaller disc must always be above the larger disc, `B` is needed to assist in the movement**.
As shown in the figure below, for the problem $f(2)$, i.e., when there are two discs, **since the smaller disc must always be above the larger disc, `B` is needed to assist in the movement**.
1. First, move the smaller disc from `A` to `B`.
2. Then move the larger disc from `A` to `C`.
@ -48,7 +48,7 @@ The process of solving the problem $f(2)$ can be summarized as: **moving two dis
For the problem $f(3)$, i.e., when there are three discs, the situation becomes slightly more complicated.
Since we already know the solutions to $f(1)$ and $f(2)$, we can think from a divide-and-conquer perspective and **consider the two top discs on `A` as a unit**, performing the steps shown below. This way, the three discs are successfully moved from `A` to `C`.
Since we already know the solutions to $f(1)$ and $f(2)$, we can think from a divide-and-conquer perspective and **consider the two top discs on `A` as a unit**, performing the steps shown in the figure below. This way, the three discs are successfully moved from `A` to `C`.
1. Let `B` be the target pillar and `C` the buffer pillar, and move the two discs from `A` to `B`.
2. Move the remaining disc from `A` directly to `C`.
@ -68,7 +68,7 @@ Since we already know the solutions to $f(1)$ and $f(2)$, we can think from a di
Essentially, **we divide the problem $f(3)$ into two subproblems $f(2)$ and one subproblem $f(1)$**. By solving these three subproblems in order, the original problem is resolved. This indicates that the subproblems are independent, and their solutions can be merged.
From this, we can summarize the divide-and-conquer strategy for solving the Tower of Hanoi shown in the following image: divide the original problem $f(n)$ into two subproblems $f(n-1)$ and one subproblem $f(1)$, and solve these three subproblems in the following order.
From this, we can summarize the divide-and-conquer strategy for solving the Tower of Hanoi shown in the figure below: divide the original problem $f(n)$ into two subproblems $f(n-1)$ and one subproblem $f(1)$, and solve these three subproblems in the following order.
1. Move $n-1$ discs with the help of `C` from `A` to `B`.
2. Move the remaining one disc directly from `A` to `C`.
@ -86,7 +86,7 @@ In the code, we declare a recursive function `dfs(i, src, buf, tar)` whose role
[file]{hanota}-[class]{}-[func]{solve_hanota}
```
As shown below, the Tower of Hanoi forms a recursive tree with a height of $n$, each node representing a subproblem, corresponding to an open `dfs()` function, **thus the time complexity is $O(2^n)$, and the space complexity is $O(n)$**.
As shown in the figure below, the Tower of Hanoi forms a recursive tree with a height of $n$, each node representing a subproblem, corresponding to an open `dfs()` function, **thus the time complexity is $O(2^n)$, and the space complexity is $O(n)$**.
![Recursive tree of the Tower of Hanoi](hanota_problem.assets/hanota_recursive_tree.png)
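The recursion just described can be sketched as follows, with lists serving as pillars and list ends as pillar tops (a minimal version of the `dfs(i, src, buf, tar)` idea from the text, not necessarily the book's exact listing):

```python
def solve_hanota(A, B, C):
    """Move all discs from pillar A to pillar C, using B as the buffer"""
    def dfs(i, src, buf, tar):
        # base case f(1): move one disc directly from src to tar
        if i == 1:
            tar.append(src.pop())
            return
        dfs(i - 1, src, tar, buf)  # move i-1 discs from src to buf
        tar.append(src.pop())      # move the remaining (largest) disc to tar
        dfs(i - 1, buf, src, tar)  # move i-1 discs from buf to tar

    dfs(len(A), A, B, C)
```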

View File

@ -35,7 +35,7 @@ To illustrate the problem-solving steps more vividly, we use a classic problem,
Given an $n \times m$ two-dimensional grid `grid`, each cell in the grid contains a non-negative integer representing the cost of that cell. The robot starts from the top-left cell and can only move down or right at each step until it reaches the bottom-right cell. Return the minimum path sum from the top-left to the bottom-right.
The following figure shows an example, where the given grid's minimum path sum is $13$.
The figure below shows an example, where the given grid's minimum path sum is $13$.
![Minimum Path Sum Example Data](dp_solution_pipeline.assets/min_path_sum_example.png)
@ -45,7 +45,7 @@ Each round of decisions in this problem is to move one step down or right from t
The state $[i, j]$ corresponds to the subproblem: the minimum path sum from the starting point $[0, 0]$ to $[i, j]$, denoted as $dp[i, j]$.
Thus, we obtain the two-dimensional $dp$ matrix shown below, whose size is the same as the input grid $grid$.
Thus, we obtain the two-dimensional $dp$ matrix shown in the figure below, whose size is the same as the input grid $grid$.
![State definition and DP table](dp_solution_pipeline.assets/min_path_sum_solution_state_definition.png)
@ -59,7 +59,7 @@ Thus, we obtain the two-dimensional $dp$ matrix shown below, whose size is the s
For the state $[i, j]$, it can only be derived from the cell above $[i-1, j]$ or the cell to the left $[i, j-1]$. Therefore, the optimal substructure is: the minimum path sum to reach $[i, j]$ is determined by the smaller of the minimum path sums of $[i, j-1]$ and $[i-1, j]$.
Based on the above analysis, the state transition equation shown in the following figure can be derived:
Based on the above analysis, the state transition equation shown in the figure below can be derived:
$$
dp[i, j] = \min(dp[i-1, j], dp[i, j-1]) + grid[i, j]
@ -104,7 +104,7 @@ Implementation code as follows:
[file]{min_path_sum}-[class]{}-[func]{min_path_sum_dfs}
```
The following figure shows the recursive tree rooted at $dp[2, 1]$, which includes some overlapping subproblems, the number of which increases sharply as the size of the grid `grid` increases.
The figure below shows the recursive tree rooted at $dp[2, 1]$, which includes some overlapping subproblems, the number of which increases sharply as the size of the grid `grid` increases.
Essentially, the reason for overlapping subproblems is: **there are multiple paths to reach a certain cell from the top-left corner**.
@ -132,7 +132,7 @@ Implement the dynamic programming solution iteratively, code as shown below:
[file]{min_path_sum}-[class]{}-[func]{min_path_sum_dp}
```
The following figures show the state transition process of the minimum path sum, traversing the entire grid, **thus the time complexity is $O(nm)$**.
The figure below shows the state transition process of the minimum path sum, traversing the entire grid, **thus the time complexity is $O(nm)$**.
The array `dp` is of size $n \times m$, **therefore the space complexity is $O(nm)$**.
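A minimal Python sketch of this iterative solution (illustrative, not the book's exact listing; the first row and column are initialized separately because each of their cells has only one possible predecessor):

```python
def min_path_sum_dp(grid):
    """Minimum path sum DP: dp[i][j] = min(up, left) + grid[i][j]"""
    n, m = len(grid), len(grid[0])
    dp = [[0] * m for _ in range(n)]
    dp[0][0] = grid[0][0]
    # first row: each cell can only be reached from the left
    for j in range(1, m):
        dp[0][j] = dp[0][j - 1] + grid[0][j]
    # first column: each cell can only be reached from above
    for i in range(1, n):
        dp[i][0] = dp[i - 1][0] + grid[i][0]
    # state transition for the remaining cells
    for i in range(1, n):
        for j in range(1, m):
            dp[i][j] = min(dp[i - 1][j], dp[i][j - 1]) + grid[i][j]
    return dp[n - 1][m - 1]
```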

View File

@ -39,7 +39,7 @@ From this, we obtain a two-dimensional $dp$ table of size $(i+1) \times (j+1)$.
**Step two: Identify the optimal substructure and then derive the state transition equation**
Consider the subproblem $dp[i, j]$, whose corresponding tail characters of the two strings are $s[i-1]$ and $t[j-1]$, which can be divided into three scenarios as shown below.
Consider the subproblem $dp[i, j]$, whose corresponding tail characters of the two strings are $s[i-1]$ and $t[j-1]$, which can be divided into three scenarios as shown in the figure below.
1. Add $t[j-1]$ after $s[i-1]$, then the remaining subproblem is $dp[i, j-1]$.
2. Delete $s[i-1]$, then the remaining subproblem is $dp[i-1, j]$.
@ -71,7 +71,7 @@ Observing the state transition equation, solving $dp[i, j]$ depends on the solut
[file]{edit_distance}-[class]{}-[func]{edit_distance_dp}
```
As shown below, the process of state transition in the edit distance problem is very similar to that in the knapsack problem, which can be seen as filling a two-dimensional grid.
As shown in the figure below, the process of state transition in the edit distance problem is very similar to that in the knapsack problem, which can be seen as filling a two-dimensional grid.
=== "<1>"
![Dynamic programming process of edit distance](edit_distance_problem.assets/edit_distance_dp_step1.png)
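The three scenarios translate directly into code; here is a minimal Python sketch of the tabulated solution (illustrative, not the book's exact listing):

```python
def edit_distance_dp(s, t):
    """Edit distance DP: dp[i][j] = min edits to convert s[:i] into t[:j]"""
    n, m = len(s), len(t)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    # converting to or from an empty string takes one edit per character
    for i in range(1, n + 1):
        dp[i][0] = i
    for j in range(1, m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]  # tail characters match: no edit
            else:
                # insert, delete, or replace: cheapest subproblem + 1
                dp[i][j] = min(dp[i][j - 1], dp[i - 1][j], dp[i - 1][j - 1]) + 1
    return dp[n][m]
```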

View File

@ -36,7 +36,7 @@ $$
dp[i] = dp[i-1] + dp[i-2]
$$
This means that in the stair climbing problem, there is a recursive relationship between the subproblems, **the solution to the original problem can be constructed from the solutions to the subproblems**. The following image shows this recursive relationship.
This means that in the stair climbing problem, there is a recursive relationship between the subproblems, **the solution to the original problem can be constructed from the solutions to the subproblems**. The figure below shows this recursive relationship.
![Recursive relationship of solution counts](intro_to_dynamic_programming.assets/climbing_stairs_state_transfer.png)
@ -48,11 +48,11 @@ Observe the following code, which, like standard backtracking code, belongs to d
[file]{climbing_stairs_dfs}-[class]{}-[func]{climbing_stairs_dfs}
```
The following image shows the recursive tree formed by brute force search. For the problem $dp[n]$, the depth of its recursive tree is $n$, with a time complexity of $O(2^n)$. Exponential order represents explosive growth, and entering a long wait if a relatively large $n$ is input.
The figure below shows the recursive tree formed by brute force search. For the problem $dp[n]$, the depth of its recursive tree is $n$, with a time complexity of $O(2^n)$. Exponential order represents explosive growth, and the program will fall into a long wait if a relatively large $n$ is input.
![Recursive tree for climbing stairs](intro_to_dynamic_programming.assets/climbing_stairs_dfs_tree.png)
Observing the above image, **the exponential time complexity is caused by 'overlapping subproblems'**. For example, $dp[9]$ is decomposed into $dp[8]$ and $dp[7]$, $dp[8]$ into $dp[7]$ and $dp[6]$, both containing the subproblem $dp[7]$.
Observing the figure above, **the exponential time complexity is caused by 'overlapping subproblems'**. For example, $dp[9]$ is decomposed into $dp[8]$ and $dp[7]$, $dp[8]$ into $dp[7]$ and $dp[6]$, both containing the subproblem $dp[7]$.
Thus, subproblems include even smaller overlapping subproblems, endlessly. A vast majority of computational resources are wasted on these overlapping subproblems.
@ -69,7 +69,7 @@ The code is as follows:
[file]{climbing_stairs_dfs_mem}-[class]{}-[func]{climbing_stairs_dfs_mem}
```
Observe the following image, **after memoization, all overlapping subproblems need to be calculated only once, optimizing the time complexity to $O(n)$**, which is a significant leap.
Observing the figure below, **after memoization, all overlapping subproblems need to be calculated only once, optimizing the time complexity to $O(n)$**, which is a significant leap.
![Recursive tree with memoized search](intro_to_dynamic_programming.assets/climbing_stairs_dfs_memo_tree.png)
@ -85,7 +85,7 @@ Since dynamic programming does not include a backtracking process, it only requi
[file]{climbing_stairs_dp}-[class]{}-[func]{climbing_stairs_dp}
```
The image below simulates the execution process of the above code.
The figure below simulates the execution process of the above code.
![Dynamic programming process for climbing stairs](intro_to_dynamic_programming.assets/climbing_stairs_dp.png)
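The tabulation process shown in the figure above can be sketched in Python as follows (a minimal version for illustration; the book's `climbing_stairs_dp` source may differ in details):

```python
def climbing_stairs_dp(n: int) -> int:
    """Count the ways to climb n stairs taking 1 or 2 steps at a time"""
    if n == 1 or n == 2:
        return n
    # dp[i] stores the number of ways to reach step i
    dp = [0] * (n + 1)
    # Initial states: 1 way to reach step 1, 2 ways to reach step 2
    dp[1], dp[2] = 1, 2
    for i in range(3, n + 1):
        # State transition: dp[i] = dp[i-1] + dp[i-2]
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]
```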

View File

@ -8,7 +8,7 @@ In this section, we will first solve the most common 0-1 knapsack problem.
Given $n$ items, the weight of the $i$-th item is $wgt[i-1]$ and its value is $val[i-1]$, and a knapsack with a capacity of $cap$. Each item can be chosen only once. What is the maximum value of items that can be placed in the knapsack under the capacity limit?
Observe the following figure, since the item number $i$ starts counting from 1, and the array index starts from 0, thus the weight of item $i$ corresponds to $wgt[i-1]$ and the value corresponds to $val[i-1]$.
As shown in the figure below, since the item number $i$ starts counting from 1 and the array index starts from 0, the weight of item $i$ corresponds to $wgt[i-1]$ and its value to $val[i-1]$.
![Example data of the 0-1 knapsack](knapsack_problem.assets/knapsack_example.png)
@ -76,13 +76,13 @@ After introducing memoization, **the time complexity depends on the number of su
[file]{knapsack}-[class]{}-[func]{knapsack_dfs_mem}
```
The following figure shows the search branches that are pruned in memoized search.
The figure below shows the search branches that are pruned in memoized search.
![The memoized search recursive tree of the 0-1 knapsack problem](knapsack_problem.assets/knapsack_dfs_mem.png)
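The memoized search just described can be sketched in Python as follows (a rough sketch; parameters `i` for the number of items considered and `c` for remaining capacity, with `-1` marking an uncached entry, are illustrative choices):

```python
def knapsack_dfs_mem(wgt: list[int], val: list[int],
                     mem: list[list[int]], i: int, c: int) -> int:
    """Memoized search: max value using the first i items with capacity c"""
    # Base case: no items left or no remaining capacity
    if i == 0 or c == 0:
        return 0
    # Return the cached result if this subproblem was solved before
    if mem[i][c] != -1:
        return mem[i][c]
    if wgt[i - 1] > c:
        # Item i exceeds the remaining capacity: it can only be skipped
        res = knapsack_dfs_mem(wgt, val, mem, i - 1, c)
    else:
        # Take the max of skipping item i vs. choosing it
        no = knapsack_dfs_mem(wgt, val, mem, i - 1, c)
        yes = knapsack_dfs_mem(wgt, val, mem, i - 1, c - wgt[i - 1]) + val[i - 1]
        res = max(no, yes)
    mem[i][c] = res  # cache before returning
    return res
```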
### Method three: Dynamic programming
Dynamic programming essentially involves filling the $dp$ table during the state transition, the code is shown below:
Dynamic programming essentially involves filling the $dp$ table during the state transition; the code is shown below:
```src
[file]{knapsack}-[class]{}-[func]{knapsack_dp}

View File

@ -40,7 +40,7 @@ Comparing the code for the two problems, the state transition changes from $i-1$
Since the current state comes from the state to the left and above, **the space-optimized solution should perform a forward traversal for each row in the $dp$ table**.
This traversal order is the opposite of that for the 0-1 knapsack. Please refer to the following figures to understand the difference.
This traversal order is the opposite of that for the 0-1 knapsack. Please refer to the figure below to understand the difference.
=== "<1>"
![Dynamic programming process for the unbounded knapsack problem after space optimization](unbounded_knapsack_problem.assets/unbounded_knapsack_dp_comp_step1.png)
@ -117,7 +117,7 @@ For this reason, we use the number $amt + 1$ to represent an invalid solution, b
[file]{coin_change}-[class]{}-[func]{coin_change_dp}
```
The following images show the dynamic programming process for the coin change problem, which is very similar to the unbounded knapsack problem.
The figure below shows the dynamic programming process for the coin change problem, which is very similar to the unbounded knapsack problem.
=== "<1>"
![Dynamic programming process for the coin change problem](unbounded_knapsack_problem.assets/coin_change_dp_step1.png)

View File

@ -10,34 +10,34 @@ G & = \{ V, E \} \newline
\end{aligned}
$$
If vertices are viewed as nodes and edges as references (pointers) connecting the nodes, graphs can be seen as a data structure that extends from linked lists. As shown below, **compared to linear relationships (linked lists) and divide-and-conquer relationships (trees), network relationships (graphs) are more complex due to their higher degree of freedom**.
If vertices are viewed as nodes and edges as references (pointers) connecting the nodes, graphs can be seen as a data structure that extends from linked lists. As shown in the figure below, **compared to linear relationships (linked lists) and divide-and-conquer relationships (trees), network relationships (graphs) are more complex due to their higher degree of freedom**.
![Relationship between linked lists, trees, and graphs](graph.assets/linkedlist_tree_graph.png)
## Common types of graphs
Based on whether edges have direction, graphs can be divided into "undirected graphs" and "directed graphs", as shown below.
Based on whether edges have direction, graphs can be divided into "undirected graphs" and "directed graphs", as shown in the figure below.
- In undirected graphs, edges represent a "bidirectional" connection between two vertices, for example, the "friendship" in WeChat or QQ.
- In directed graphs, edges have directionality, that is, the edges $A \rightarrow B$ and $A \leftarrow B$ are independent of each other, for example, the "follow" and "be followed" relationship on Weibo or TikTok.
![Directed and undirected graphs](graph.assets/directed_graph.png)
Based on whether all vertices are connected, graphs can be divided into "connected graphs" and "disconnected graphs", as shown below.
Based on whether all vertices are connected, graphs can be divided into "connected graphs" and "disconnected graphs", as shown in the figure below.
- For connected graphs, it is possible to reach any other vertex starting from a certain vertex.
- For disconnected graphs, there is at least one vertex that cannot be reached from a certain starting vertex.
![Connected and disconnected graphs](graph.assets/connected_graph.png)
We can also add a "weight" variable to edges, resulting in "weighted graphs" as shown below. For example, in mobile games like "Honor of Kings", the system calculates the "closeness" between players based on shared gaming time, and this closeness network can be represented with a weighted graph.
We can also add a "weight" variable to edges, resulting in "weighted graphs" as shown in the figure below. For example, in mobile games like "Honor of Kings", the system calculates the "closeness" between players based on shared gaming time, and this closeness network can be represented with a weighted graph.
![Weighted and unweighted graphs](graph.assets/weighted_graph.png)
Graph data structures include the following commonly used terms.
- "Adjacency": When there is an edge connecting two vertices, these two vertices are said to be "adjacent". In the above figure, the adjacent vertices of vertex 1 are vertices 2, 3, and 5.
- "Path": The sequence of edges passed from vertex A to vertex B is called a "path" from A to B. In the above figure, the edge sequence 1-5-2-4 is a path from vertex 1 to vertex 4.
- "Adjacency": When there is an edge connecting two vertices, these two vertices are said to be "adjacent". In the figure above, the adjacent vertices of vertex 1 are vertices 2, 3, and 5.
- "Path": The sequence of edges passed from vertex A to vertex B is called a "path" from A to B. In the figure above, the edge sequence 1-5-2-4 is a path from vertex 1 to vertex 4.
- "Degree": The number of edges a vertex has. For directed graphs, "in-degree" refers to how many edges point to the vertex, and "out-degree" refers to how many edges point out from the vertex.
## Representation of graphs
@ -48,7 +48,7 @@ Common representations of graphs include "adjacency matrices" and "adjacency lis
Let the number of vertices in the graph be $n$, the "adjacency matrix" uses an $n \times n$ matrix to represent the graph, where each row (column) represents a vertex, and the matrix elements represent edges, with $1$ or $0$ indicating whether there is an edge between two vertices.
As shown below, let the adjacency matrix be $M$, and the list of vertices be $V$, then the matrix element $M[i, j] = 1$ indicates there is an edge between vertex $V[i]$ and vertex $V[j]$, conversely $M[i, j] = 0$ indicates there is no edge between the two vertices.
As shown in the figure below, let the adjacency matrix be $M$, and the list of vertices be $V$, then the matrix element $M[i, j] = 1$ indicates there is an edge between vertex $V[i]$ and vertex $V[j]$, conversely $M[i, j] = 0$ indicates there is no edge between the two vertices.
![Representation of a graph with an adjacency matrix](graph.assets/adjacency_matrix.png)
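A minimal Python sketch of this scheme, assuming vertices are numbered $0$ to $n-1$ and edges are given as index pairs (for an undirected graph, so the matrix is symmetric):

```python
def build_adjacency_matrix(n: int, edges: list[tuple[int, int]]) -> list[list[int]]:
    """Build an n x n adjacency matrix for an undirected graph"""
    mat = [[0] * n for _ in range(n)]
    for i, j in edges:
        # 1 marks an edge between vertices i and j; set both entries for symmetry
        mat[i][j] = 1
        mat[j][i] = 1
    return mat
```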
@ -68,7 +68,7 @@ The "adjacency list" uses $n$ linked lists to represent the graph, with each lin
The adjacency list only stores actual edges, and the total number of edges is often much less than $n^2$, making it more space-efficient. However, finding edges in the adjacency list requires traversing the linked list, so its time efficiency is not as good as that of the adjacency matrix.
Observing the above figure, **the structure of the adjacency list is very similar to the "chaining" in hash tables, hence we can use similar methods to optimize efficiency**. For example, when the linked list is long, it can be transformed into an AVL tree or red-black tree, thus optimizing the time efficiency from $O(n)$ to $O(\log n)$; the linked list can also be transformed into a hash table, thus reducing the time complexity to $O(1)$.
Observing the figure above, **the structure of the adjacency list is very similar to the "chaining" in hash tables, hence we can use similar methods to optimize efficiency**. For example, when the linked list is long, it can be transformed into an AVL tree or red-black tree, thus optimizing the time efficiency from $O(n)$ to $O(\log n)$; the linked list can also be transformed into a hash table, thus reducing the time complexity to $O(1)$.
## Common applications of graphs

View File

@ -24,7 +24,7 @@ To prevent revisiting vertices, we use a hash table `visited` to record which no
[file]{graph_bfs}-[class]{}-[func]{graph_bfs}
```
The code is relatively abstract, it is suggested to compare with the following figure to deepen the understanding.
The code is relatively abstract, it is suggested to compare with the figure below to deepen the understanding.
=== "<1>"
![Steps of breadth-first search of a graph](graph_traversal.assets/graph_bfs_step1.png)
@ -61,7 +61,7 @@ The code is relatively abstract, it is suggested to compare with the following f
!!! question "Is the sequence of breadth-first traversal unique?"
Not unique. Breadth-first traversal only requires traversing in a "from near to far" order, **and the traversal order of multiple vertices at the same distance can be arbitrarily shuffled**. For example, in the above figure, the visitation order of vertices $1$ and $3$ can be switched, as can the order of vertices $2$, $4$, and $6$.
Not unique. Breadth-first traversal only requires traversing in a "from near to far" order, **and the traversal order of multiple vertices at the same distance can be arbitrarily shuffled**. For example, in the figure above, the visitation order of vertices $1$ and $3$ can be switched, as can the order of vertices $2$, $4$, and $6$.
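The near-to-far traversal can be sketched in Python as follows (assuming, for self-containment, that the graph is a plain dict mapping each vertex to its neighbor list rather than the book's adjacency-list class):

```python
from collections import deque

def graph_bfs(adj_list: dict[int, list[int]], start: int) -> list[int]:
    """Breadth-first traversal; returns vertices in visit order"""
    visited = {start}      # hash set recording visited vertices
    res = []
    que = deque([start])   # the queue drives the near-to-far order
    while que:
        vet = que.popleft()
        res.append(vet)
        for nxt in adj_list[vet]:
            if nxt not in visited:
                visited.add(nxt)   # mark when enqueued to avoid revisits
                que.append(nxt)
    return res
```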
### Complexity analysis
@ -83,12 +83,12 @@ This "go as far as possible and then return" algorithm paradigm is usually imple
[file]{graph_dfs}-[class]{}-[func]{graph_dfs}
```
The algorithm process of depth-first search is shown in the following figure.
The algorithm process of depth-first search is shown in the figure below.
- **Dashed lines represent downward recursion**, indicating that a new recursive method has been initiated to visit a new vertex.
- **Curved dashed lines represent upward backtracking**, indicating that this recursive method has returned to the position where this method was initiated.
To deepen the understanding, it is suggested to combine the following figure with the code to simulate (or draw) the entire DFS process in your mind, including when each recursive method is initiated and when it returns.
To deepen the understanding, it is suggested to combine the figure below with the code to simulate (or draw) the entire DFS process in your mind, including when each recursive method is initiated and when it returns.
=== "<1>"
![Steps of depth-first search of a graph](graph_traversal.assets/graph_dfs_step1.png)

View File

@ -2,7 +2,7 @@
!!! question
Given $n$ items, the weight of the $i$-th item is $wgt[i-1]$ and its value is $val[i-1]$, and a knapsack with a capacity of $cap$. Each item can be chosen only once, **but a part of the item can be selected, with its value calculated based on the proportion of the weight chosen**, what is the maximum value of the items in the knapsack under the limited capacity? An example is shown below.
Given $n$ items, the weight of the $i$-th item is $wgt[i-1]$ and its value is $val[i-1]$, and a knapsack with a capacity of $cap$. Each item can be chosen only once, **but a part of the item can be selected, with its value calculated based on the proportion of the weight chosen**, what is the maximum value of the items in the knapsack under the limited capacity? An example is shown in the figure below.
![Example data of the fractional knapsack problem](fractional_knapsack_problem.assets/fractional_knapsack_example.png)
@ -17,7 +17,7 @@ The difference is that, in this problem, only a part of an item can be chosen. A
### Greedy strategy determination
Maximizing the total value of the items in the knapsack essentially means maximizing the value per unit weight. From this, the greedy strategy shown below can be deduced.
Maximizing the total value of the items in the knapsack essentially means maximizing the value per unit weight. From this, the greedy strategy shown in the figure below can be deduced.
1. Sort the items by their unit value from high to low.
2. Iterate over all items, **greedily choosing the item with the highest unit value in each round**.
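The two steps above can be sketched in Python as follows (a minimal version: sort by unit value, take items whole until only a fraction of the next one fits):

```python
def fractional_knapsack(wgt: list[int], val: list[int], cap: int) -> float:
    """Greedy: maximize value by taking items in order of value per unit weight"""
    # Sort (weight, value) pairs by unit value, high to low
    items = sorted(zip(wgt, val), key=lambda x: x[1] / x[0], reverse=True)
    res = 0.0
    for w, v in items:
        if w <= cap:
            # The item fits entirely
            res += v
            cap -= w
        else:
            # Only a fraction fits: take it proportionally and stop
            res += v * cap / w
            break
    return res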

View File

@ -13,7 +13,7 @@ Let's first understand the working principle of the greedy algorithm through the
Given $n$ types of coins, where the denomination of the $i$th type of coin is $coins[i - 1]$, and the target amount is $amt$, with each type of coin available indefinitely, what is the minimum number of coins needed to make up the target amount? If it is not possible to make up the target amount, return $-1$.
The greedy strategy adopted in this problem is shown in the following figure. Given the target amount, **we greedily choose the coin that is closest to and not greater than it**, repeatedly following this step until the target amount is met.
The greedy strategy adopted in this problem is shown in the figure below. Given the target amount, **we greedily choose the coin that is closest to and not greater than it**, repeatedly following this step until the target amount is met.
![Greedy strategy for coin change](greedy_algorithm.assets/coin_change_greedy_strategy.png)
@ -29,7 +29,7 @@ You might exclaim: So clean! The greedy algorithm solves the coin change problem
**Greedy algorithms are not only straightforward and simple to implement, but they are also usually very efficient**. In the code above, if the smallest coin denomination is $\min(coins)$, the greedy choice loops at most $amt / \min(coins)$ times, giving a time complexity of $O(amt / \min(coins))$. This is an order of magnitude smaller than the time complexity of the dynamic programming solution, which is $O(n \times amt)$.
However, **for some combinations of coin denominations, greedy algorithms cannot find the optimal solution**. The following figure provides two examples.
However, **for some combinations of coin denominations, greedy algorithms cannot find the optimal solution**. The figure below provides two examples.
- **Positive example $coins = [1, 5, 10, 20, 50, 100]$**: In this coin combination, given any $amt$, the greedy algorithm can find the optimal solution.
- **Negative example $coins = [1, 20, 50]$**: Suppose $amt = 60$, the greedy algorithm can only find the combination $50 + 1 \times 10$, totaling 11 coins, but dynamic programming can find the optimal solution of $20 + 20 + 20$, needing only 3 coins.
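A minimal Python sketch of this greedy selection, assuming `coins` is sorted in ascending order:

```python
def coin_change_greedy(coins: list[int], amt: int) -> int:
    """Greedy: repeatedly take the largest coin not exceeding the remaining amount"""
    i = len(coins) - 1  # start from the largest denomination
    count = 0
    while amt > 0:
        # Find the largest coin that is <= the remaining amount
        while i > 0 and coins[i] > amt:
            i -= 1
        amt -= coins[i]
        count += 1
    # Return -1 if the target amount could not be made up exactly
    return count if amt == 0 else -1
```

On the negative example from the text, `coin_change_greedy([1, 20, 50], 60)` returns the suboptimal 11 coins rather than the optimal 3.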

View File

@ -6,7 +6,7 @@
The capacity of the container is the product of the height and the width (area), where the height is determined by the shorter partition, and the width is the difference in array indices between the two partitions.
Please select two partitions in the array that maximize the container's capacity and return this maximum capacity. An example is shown in the following figure.
Please select two partitions in the array that maximize the container's capacity and return this maximum capacity. An example is shown in the figure below.
![Example data for the maximum capacity problem](max_capacity_problem.assets/max_capacity_example.png)
@ -22,11 +22,11 @@ Assuming the length of the array is $n$, the number of combinations of two parti
### Determination of a greedy strategy
There is a more efficient solution to this problem. As shown in the following figure, we select a state $[i, j]$ where the indices $i < j$ and the height $ht[i] < ht[j]$, meaning $i$ is the shorter partition, and $j$ is the taller one.
There is a more efficient solution to this problem. As shown in the figure below, we select a state $[i, j]$ where the indices $i < j$ and the height $ht[i] < ht[j]$, meaning $i$ is the shorter partition, and $j$ is the taller one.
![Initial state](max_capacity_problem.assets/max_capacity_initial_state.png)
As shown in the following figure, **if we move the taller partition $j$ closer to the shorter partition $i$, the capacity will definitely decrease**.
As shown in the figure below, **if we move the taller partition $j$ closer to the shorter partition $i$, the capacity will definitely decrease**.
This is because when moving the taller partition $j$, the width $j-i$ definitely decreases; and since the height is determined by the shorter partition, the height can only remain the same (if $i$ remains the shorter partition) or decrease (if the moved $j$ becomes the shorter partition).
@ -38,7 +38,7 @@ Conversely, **we can only possibly increase the capacity by moving the shorter p
This leads us to the greedy strategy for this problem: initialize two pointers at the ends of the container, and in each round, move the pointer corresponding to the shorter partition inward until the two pointers meet.
The following figures illustrate the execution of the greedy strategy.
The figure below illustrates the execution of the greedy strategy.
1. Initially, the pointers $i$ and $j$ are positioned at the ends of the array.
2. Calculate the current state's capacity $cap[i, j]$ and update the maximum capacity.
@ -86,7 +86,7 @@ The variables $i$, $j$, and $res$ use a constant amount of extra space, **thus t
The reason why the greedy method is faster than enumeration is that each round of greedy selection "skips" some states.
For example, under the state $cap[i, j]$ where $i$ is the shorter partition and $j$ is the taller partition, greedily moving the shorter partition $i$ inward by one step leads to the "skipped" states shown below. **This means that these states' capacities cannot be verified later**.
For example, under the state $cap[i, j]$ where $i$ is the shorter partition and $j$ is the taller partition, greedily moving the shorter partition $i$ inward by one step leads to the "skipped" states shown in the figure below. **This means that these states' capacities cannot be verified later**.
$$
cap[i, i+1], cap[i, i+2], \dots, cap[i, j-2], cap[i, j-1]

View File

@ -32,7 +32,7 @@ n & \geq 4
\end{aligned}
$$
As shown below, when $n \geq 4$, splitting out a $2$ increases the product, **which indicates that integers greater than or equal to $4$ should be split**.
As shown in the figure below, when $n \geq 4$, splitting out a $2$ increases the product, **which indicates that integers greater than or equal to $4$ should be split**.
**Greedy strategy one**: If the splitting scheme includes factors $\geq 4$, they should be further split. The final split should only include factors $1$, $2$, and $3$.
@ -40,7 +40,7 @@ As shown below, when $n \geq 4$, splitting out a $2$ increases the product, **wh
Next, consider which factor is optimal. Among the factors $1$, $2$, and $3$, clearly $1$ is the worst, as $1 \times (n-1) < n$ always holds, meaning splitting out $1$ actually decreases the product.
As shown below, when $n = 6$, $3 \times 3 > 2 \times 2 \times 2$. **This means splitting out $3$ is better than splitting out $2$**.
As shown in the figure below, when $n = 6$, $3 \times 3 > 2 \times 2 \times 2$. **This means splitting out $3$ is better than splitting out $2$**.
**Greedy strategy two**: In the splitting scheme, there should be at most two $2$s. Because three $2$s can always be replaced by two $3$s to obtain a higher product.
@ -55,7 +55,7 @@ From the above, the following greedy strategies can be derived.
### Code implementation
As shown below, we do not need to use loops to split the integer but can use the floor division operation to get the number of $3$s, $a$, and the modulo operation to get the remainder, $b$, thus:
As shown below, we do not need to use loops to split the integer but can use the floor division operation to get the number of $3$s, $a$, and the modulo operation to get the remainder, $b$, thus:
$$
n = 3a + b

View File

@ -37,7 +37,7 @@ This essential skill for elementary students, looking up a dictionary, is actual
The above method of organizing playing cards is essentially the "Insertion Sort" algorithm, which is very efficient for small datasets. Many programming languages' sorting functions include the insertion sort.
**Example 3: Making Change**. Suppose we buy goods worth $69$ yuan at a supermarket and give the cashier $100$ yuan, then the cashier needs to give us $31$ yuan in change. They would naturally complete the thought process as shown below.
**Example 3: Making Change**. Suppose we buy goods worth $69$ yuan at a supermarket and give the cashier $100$ yuan, then the cashier needs to give us $31$ yuan in change. They would naturally complete the thought process as shown in the figure below.
1. The options are currencies smaller than $31$, including $1$, $5$, $10$, and $20$.
2. Take out the largest $20$ from the options, leaving $31 - 20 = 11$.

View File

@ -20,7 +20,7 @@ If you are an algorithm expert, we look forward to receiving your valuable sugge
## Content structure
The main content of the book is shown in the following figure.
The main content of the book is shown in the figure below.
- **Complexity analysis**: explores aspects and methods for evaluating data structures and algorithms. Covers methods of deriving time complexity and space complexity, along with common types and examples.
- **Data structures**: focuses on fundamental data types, classification methods, definitions, pros and cons, common operations, types, applications, and implementation methods of data structures such as array, linked list, stack, queue, hash table, tree, heap, graph, etc.

View File

@ -4,7 +4,7 @@
!!! question
Given an array `nums` of length $n$, with elements arranged in ascending order and non-repeating. Please find and return the index of element `target` in this array. If the array does not contain the element, return $-1$. An example is shown below.
Given an array `nums` of length $n$, with elements arranged in ascending order and non-repeating. Please find and return the index of element `target` in this array. If the array does not contain the element, return $-1$. An example is shown in the figure below.
![Binary search example data](binary_search.assets/binary_search_example.png)
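The search can be sketched in Python as follows (a minimal version over the closed interval $[i, j]$; the book's `binary_search` source may differ in details):

```python
def binary_search(nums: list[int], target: int) -> int:
    """Return the index of target in sorted, non-repeating nums, or -1"""
    i, j = 0, len(nums) - 1  # search within the closed interval [i, j]
    while i <= j:
        m = (i + j) // 2  # midpoint
        if nums[m] < target:
            i = m + 1  # target lies in [m+1, j]
        elif nums[m] > target:
            j = m - 1  # target lies in [i, m-1]
        else:
            return m   # found target
    return -1
```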

View File

@ -38,7 +38,7 @@ However, **using these algorithms often requires data preprocessing**. For examp
## Choosing a search method
Given a set of data of size $n$, we can use linear search, binary search, tree search, hash search, and other methods to search for the target element from it. The working principles of these methods are shown in the following figure.
Given a set of data of size $n$, we can use linear search, binary search, tree search, hash search, and other methods to search for the target element from it. The working principles of these methods are shown in the figure below.
![Various search strategies](searching_algorithm_revisited.assets/searching_algorithms.png)

View File

@ -2,7 +2,7 @@
<u>Bubble sort</u> achieves sorting by continuously comparing and swapping adjacent elements. This process resembles bubbles rising from the bottom to the top, hence the name bubble sort.
As shown in the following figures, the bubbling process can be simulated using element swap operations: starting from the leftmost end of the array and moving right, sequentially compare the size of adjacent elements. If "left element > right element," then swap them. After the traversal, the largest element will be moved to the far right end of the array.
As shown in the figure below, the bubbling process can be simulated using element swap operations: starting from the leftmost end of the array and moving right, sequentially compare the size of adjacent elements. If "left element > right element," then swap them. After the traversal, the largest element will be moved to the far right end of the array.
=== "<1>"
![Simulating bubble process using element swap](bubble_sort.assets/bubble_operation_step1.png)
@ -27,7 +27,7 @@ As shown in the following figures, the bubbling process can be simulated using e
## Algorithm process
Assuming the length of the array is $n$, the steps of bubble sort are shown below.
Assuming the length of the array is $n$, the steps of bubble sort are shown in the figure below.
1. First, perform a "bubble" on $n$ elements, **swapping the largest element to its correct position**.
2. Next, perform a "bubble" on the remaining $n - 1$ elements, **swapping the second largest element to its correct position**.
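The steps above can be sketched in Python as follows (a minimal in-place version without the early-exit flag optimization):

```python
def bubble_sort(nums: list[int]) -> None:
    """Bubble sort: sorts nums in place"""
    n = len(nums)
    for i in range(n - 1, 0, -1):
        # One "bubble" pass moves the largest of nums[0..i] to index i
        for j in range(i):
            if nums[j] > nums[j + 1]:
                # Swap adjacent elements when left > right
                nums[j], nums[j + 1] = nums[j + 1], nums[j]
```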

View File

@ -10,7 +10,7 @@ The figure below shows the process of inserting an element into an array. Assumi
## Algorithm process
The overall process of insertion sort is shown in the following figure.
The overall process of insertion sort is shown in the figure below.
1. Initially, the first element of the array is sorted.
2. The second element of the array is taken as `base`, and after inserting it into the correct position, **the first two elements of the array are sorted**.
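The process can be sketched in Python as follows (a minimal in-place version, growing a sorted prefix by inserting each `base` into place):

```python
def insertion_sort(nums: list[int]) -> None:
    """Insertion sort: sorts nums in place"""
    for i in range(1, len(nums)):
        base = nums[i]  # element to insert into the sorted prefix nums[0..i-1]
        j = i - 1
        # Shift prefix elements greater than base one position to the right
        while j >= 0 and nums[j] > base:
            nums[j + 1] = nums[j]
            j -= 1
        nums[j + 1] = base  # insert base at its correct position
```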

View File

@ -1,6 +1,6 @@
# Merge sort
<u>Merge sort</u> is a sorting algorithm based on the divide-and-conquer strategy, involving the "divide" and "merge" phases shown in the following figure.
<u>Merge sort</u> is a sorting algorithm based on the divide-and-conquer strategy, involving the "divide" and "merge" phases shown in the figure below.
1. **Divide phase**: Recursively split the array from the midpoint, transforming the sorting problem of a long array into that of shorter arrays.
2. **Merge phase**: Stop dividing when the length of the sub-array is 1, start merging, and continuously combine two shorter ordered arrays into one longer ordered array until the process is complete.
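The two phases can be sketched in Python as follows (a minimal version that returns a new list, unlike in-place variants):

```python
def merge_sort(nums: list[int]) -> list[int]:
    """Merge sort: returns a new sorted list"""
    if len(nums) <= 1:
        return nums
    # Divide phase: recursively split the array at the midpoint
    mid = len(nums) // 2
    left = merge_sort(nums[:mid])
    right = merge_sort(nums[mid:])
    # Merge phase: combine two sorted halves into one sorted array
    res, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            res.append(left[i])
            i += 1
        else:
            res.append(right[j])
            j += 1
    # Append whichever half still has leftover elements
    return res + left[i:] + right[j:]
```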

View File

@ -47,7 +47,7 @@ After the pivot partitioning, the original array is divided into three parts: le
## Algorithm process
The overall process of quick sort is shown in the following figure.
The overall process of quick sort is shown in the figure below.
1. First, perform a "pivot partitioning" on the original array to obtain the unsorted left and right sub-arrays.
2. Then, recursively perform "pivot partitioning" on both the left and right sub-arrays.
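The two steps above can be sketched in Python as follows (a minimal in-place version using the leftmost element as the pivot):

```python
def quick_sort(nums: list[int], left: int, right: int) -> None:
    """Quick sort: sorts nums[left..right] in place"""
    if left >= right:
        return  # sub-array of length 0 or 1 is already sorted
    # Pivot partitioning with nums[left] as the pivot
    i, j = left, right
    while i < j:
        while i < j and nums[j] >= nums[left]:
            j -= 1  # scan from the right for an element smaller than the pivot
        while i < j and nums[i] <= nums[left]:
            i += 1  # scan from the left for an element larger than the pivot
        nums[i], nums[j] = nums[j], nums[i]
    # Place the pivot at its final position i
    nums[left], nums[i] = nums[i], nums[left]
    # Recursively partition the left and right sub-arrays
    quick_sort(nums, left, i - 1)
    quick_sort(nums, i + 1, right)
```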

View File

@ -2,7 +2,7 @@
<u>Selection sort</u> works on a very simple principle: it starts a loop where each iteration selects the smallest element from the unsorted interval and moves it to the end of the sorted interval.
Suppose the length of the array is $n$, the algorithm flow of selection sort is as shown below.
Suppose the length of the array is $n$, the algorithm flow of selection sort is as shown in the figure below.
1. Initially, all elements are unsorted, i.e., the unsorted (index) interval is $[0, n-1]$.
2. Select the smallest element in the interval $[0, n-1]$ and swap it with the element at index $0$. After this, the first element of the array is sorted.
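The steps above can be sketched in Python as follows (a minimal in-place version):

```python
def selection_sort(nums: list[int]) -> None:
    """Selection sort: sorts nums in place"""
    n = len(nums)
    for i in range(n - 1):
        # Find the index k of the smallest element in the unsorted interval [i, n-1]
        k = i
        for j in range(i + 1, n):
            if nums[j] < nums[k]:
                k = j
        # Swap it to the end of the sorted interval
        nums[i], nums[k] = nums[k], nums[i]
```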

View File

@ -2,7 +2,7 @@
<u>Sorting algorithms</u> are used to arrange a set of data in a specific order. Sorting algorithms have a wide range of applications because ordered data can usually be searched, analyzed, and processed more efficiently.
As shown in the following figure, the data types in sorting algorithms can be integers, floating point numbers, characters, or strings, etc. Sorting rules can be set according to needs, such as numerical size, character ASCII order, or custom rules.
As shown in the figure below, the data types in sorting algorithms can be integers, floating point numbers, characters, or strings, etc. Sorting rules can be set according to needs, such as numerical size, character ASCII order, or custom rules.
![Data types and comparator examples](sorting_algorithm.assets/sorting_examples.png)

View File

@ -10,7 +10,7 @@
- Counting sort is a special case of bucket sort, which sorts by counting the occurrences of each data point. Counting sort is suitable for large datasets with a limited range of data and requires that data can be converted to positive integers.
- Radix sort sorts data by sorting digit by digit, requiring data to be represented as fixed-length numbers.
- Overall, we hope to find a sorting algorithm that has high efficiency, stability, in-place operation, and positive adaptability. However, like other data structures and algorithms, no sorting algorithm can meet all these conditions simultaneously. In practical applications, we need to choose the appropriate sorting algorithm based on the characteristics of the data.
- The following figure compares mainstream sorting algorithms in terms of efficiency, stability, in-place nature, and adaptability.
- The figure below compares mainstream sorting algorithms in terms of efficiency, stability, in-place nature, and adaptability.
![Sorting Algorithm Comparison](summary.assets/sorting_algorithms_comparison.png)

View File

@ -22,7 +22,7 @@ As shown in the figure below, given a non-perfect binary tree, the above method
![Level-order traversal sequence corresponds to multiple binary tree possibilities](array_representation_of_tree.assets/array_representation_without_empty.png)
To solve this problem, **we can consider explicitly writing out all `None` values in the level-order traversal sequence**. As shown in the following figure, after this treatment, the level-order traversal sequence can uniquely represent a binary tree. Example code is as follows:
To solve this problem, **we can consider explicitly writing out all `None` values in the level-order traversal sequence**. As shown in the figure below, after this treatment, the level-order traversal sequence can uniquely represent a binary tree. Example code is as follows:
=== "Python"

View File

@ -206,7 +206,7 @@ Each node has two references (pointers), pointing to the "left-child node" and "
## Common terminology of binary trees
The commonly used terminology of binary trees is shown in the following figure.
The commonly used terminology of binary trees is shown in the figure below.
- "Root node": The node at the top level of the binary tree, which has no parent node.
- "Leaf node": A node with no children, both of its pointers point to `None`.