It is perfectly acceptable that most programmers do not know an excessive number of algorithms, let alone the methods for developing them. After graduating from a university, or after passing a job interview for a developer position, even a person who had some computer science knowledge finds that the need to simply "code" and produce ordinary "working" business applications erases whatever theory remained in their head. And that is fine: there is no need to know everything, since most of it has already been implemented in libraries for almost every language and has worked in production for ages. Still, it seems to me that the main difference between an ordinary programmer and a software engineer lies in a deeper knowledge of computer science (including algorithms and methods for evaluating them) as well as of development paradigms. Faced with a nontrivial task, one grabs the available screwdrivers and wrenches and plunges in, while the other opens a book and reads what a screwdriver is. Dynamic programming is a time-tested screwdriver that can unscrew even very tight bolts.
Introduction
In the original formulation, the problem was to plan a multi-period production process over many small steps and time points. Step by step, it was necessary to track how the decisions made at previous steps affected the company's further success and what to do next so as not to fail: buy a factory, sell timber, go offshore. Bellman's principle of optimality states: an optimal policy has the property that, whatever the initial state and initial decisions are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. This property is also called optimal substructure.
Problem definition
A problem has an optimal substructure if its optimal solution can be constructed from the optimal solutions of its subproblems. The presence of optimal substructure is used to determine whether dynamic programming and greedy algorithms are applicable to the problem. For example, the problem of finding the shortest path between two vertices of a graph contains optimal solutions of subproblems. Many problems solved by dynamic programming can be stated as a search for the shortest path from one vertex to another in a directed acyclic graph. Depending on the formulation (dynamic programming on a segment, on a prefix, on a tree), the optimality condition for subproblems can differ, but in general it reads: if there is an optimal solution for some subproblem that arises in the process of solving the problem, then it should be used to solve the problem as a whole.
Flow
Constructing a dynamic programming algorithm involves a sequence of four steps:

Describe the structure of the optimal solution.

Recursively determine the value of the optimal solution.

Calculate the value of the optimal solution using a bottom-up approach.

Make an optimal decision based on the received information.
Recurrence formulas
Given: the initial states (a0 = a1 = 1) and the dependencies. The only difficulty that can arise is understanding that 2n denotes an even index and 2n + 1 an odd one. Basically, we need to check whether the index is even and compute the value with a different formula in each case.
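The concrete dependencies are not spelled out above, so as an illustration assume a hypothetical pair of rules, one for even and one for odd indices (these two formulas are my assumption, not part of the original problem):

```python
def build_sequence(count):
    """Fill the sequence bottom-up from the initial states a0 = a1 = 1.

    The recurrence used here is hypothetical, purely for illustration:
      a[2n]     = a[n] + a[n - 1]   (even index)
      a[2n + 1] = a[n] - a[n - 1]   (odd index)
    """
    a = [0] * count
    a[0] = a[1] = 1
    for i in range(2, count):
        n = i // 2
        if i % 2 == 0:            # even index: one formula
            a[i] = a[n] + a[n - 1]
        else:                     # odd index: the other formula
            a[i] = a[n] - a[n - 1]
    return a
```

Whatever the actual formulas are, the shape of the solution stays the same: a single pass that branches on the parity of the index.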
Recursion vs loop
There is a constant implementation choice: recursion or a loop. Recursion arises naturally from the statement of the problem (a recurrence formula, and so on), and it usually works well in most cases: it is quick and easy to implement. But, getting back to where we started, recursion is slow. It is not slow enough to cause real trouble in everyday code, yet in tasks where every millisecond matters it can become a problem.
Sequential computation. The main, but not the only, drawback of sequential computation is that it is suitable only when the function refers exclusively to elements computed before it. Our problem satisfies this condition. The essence of the method is as follows: we create an array of N elements and fill it with values sequentially.
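As a minimal sketch of sequential computation, here is the classic Fibonacci recurrence (chosen as a stand-in example) computed by filling an array left to right:

```python
def fib_sequential(n):
    """Bottom-up computation: fill the table from left to right.

    This works because fib(i) depends only on elements before it,
    so every value is ready by the time it is needed.
    """
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

No call stack is involved, so there is neither recursion overhead nor a risk of stack overflow on large N.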
Caching
A recursive solution with caching of values. The idea of memoization is very simple: having calculated a value once, we put it into some data structure. Before each calculation we check whether the value is already present in that structure, and if it is, we use it. An array filled with flag values can serve as the data structure: if the element at index N equals the flag, we probably have not calculated it yet. This creates certain difficulties, because the flag value must not belong to the range of the function, which is not always easy to guarantee. A hash table is a good choice: all its operations take O(1) on average, which is very convenient. However, with a large number of values two keys can end up with the same hash, which naturally causes problems; in that case it is worth using, for example, a red-black tree.
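A sketch of memoized recursion, again using Fibonacci as the stand-in example; a Python dict plays the role of the hash table, which sidesteps the flag-value problem entirely:

```python
def fib_memoized(n, cache={0: 0, 1: 1}):
    """Recursive Fibonacci with a dict as the memoization cache.

    Before computing, check the cache; afterwards, store the result
    so every value is computed at most once.
    """
    if n not in cache:                       # not calculated yet?
        cache[n] = fib_memoized(n - 1) + fib_memoized(n - 2)
    return cache[n]
```

In production Python code the standard decorator `functools.lru_cache` does the same bookkeeping automatically.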
Typical task
At the top of a ladder containing N steps there is a ball that starts jumping down to the bottom. The ball can jump to the next step, or jump over one or two steps (for instance, if the ball is on the 8th step, it can move to the 7th, 6th, or 5th). Determine the number of all possible "routes" of the ball from the top to the ground.
The idea of the solution: the first step can be reached in only one way, by making a jump of length one. The second step can be reached by a jump of length two, or from the first step: only 2 options. The third step can be reached by a jump of three, from the first step, or from the second step. Specifically, there are exactly four routes (0 → 3; 0 → 1 → 3; 0 → 2 → 3; 0 → 1 → 2 → 3). Considering the fourth step, you can get there from the first step (one route for each route leading to it), and likewise from the second or the third. In other words, the number of ways to reach the 4th step is the sum of the routes to the 1st, 2nd, and 3rd steps. Mathematically, F(N) = F(N-1) + F(N-2) + F(N-3).
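The recurrence above translates directly into a sequential bottom-up solution, sketched here with the convention that there is one way to "reach" the starting position:

```python
def ladder_routes(n):
    """Number of routes down a ladder of n steps when each jump
    covers 1, 2, or 3 steps: F(N) = F(N-1) + F(N-2) + F(N-3)."""
    f = [0] * (n + 1)
    f[0] = 1                      # one way to stand at the start
    for i in range(1, n + 1):
        f[i] = f[i - 1]           # jump of length 1
        if i >= 2:
            f[i] += f[i - 2]      # jump of length 2
        if i >= 3:
            f[i] += f[i - 3]      # jump of length 3
    return f[n]
```

For n = 3 this yields the four routes enumerated in the text.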
2d Dynamic
In a rectangular N×M table, the player starts in the upper-left cell. In one move he is allowed to step to the adjacent cell either to the right or down (moving left or up is forbidden). Calculate how many ways the player has to reach the lower-right cell.
The logic of the solution is completely identical to the ball-and-ladder problem, but now cell (x, y) can be reached from cells (x-1, y) or (x, y-1). In total, F(x, y) = F(x-1, y) + F(x, y-1). In addition, it is easy to see that every cell of the form (1, y) or (x, 1) has exactly one route, either straight to the right or straight down.
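A sketch of the 2D table fill, with the first row and column pre-filled with the single-route base case:

```python
def grid_routes(n, m):
    """Number of monotone paths from the top-left to the bottom-right
    cell of an n x m grid, moving only right or down."""
    f = [[1] * m for _ in range(n)]   # first row and column: one route each
    for x in range(1, n):
        for y in range(1, m):
            f[x][y] = f[x - 1][y] + f[x][y - 1]
    return f[n - 1][m - 1]
```

The table can be collapsed to a single row to save memory, since each cell depends only on the current and previous rows.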
Explosion hazard task
When processing radioactive materials, waste of two types is formed: especially dangerous (type A) and non-hazardous (type B). The same containers are used to store both. After the waste is placed into containers, they are stacked in a vertical pile. A stack is considered explosive if it contains more than one type A container in a row; a stack is considered safe if it is not explosive. Determine the number of possible safe stacks for a given number of containers N.
The answer is the (N + 1)-th Fibonacci number, which you could guess by simply computing the first 2-3 values. Each main element splits into two: a main one (ending in B) and a secondary one (ending in A). Secondary elements turn back into main ones in one iteration (only B can be appended to a sequence ending in A).
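The two-state description above can be sketched directly: track how many safe stacks end in A and how many end in B, and apply the transition rules at each height.

```python
def safe_stacks(n):
    """Count stacks of n containers with no two type-A containers in a row.

    ends_a: safe stacks ending in A;  ends_b: safe stacks ending in B.
    Only B may follow A, while either container may follow B.
    """
    ends_a, ends_b = 1, 1             # stacks of height 1: "A" and "B"
    for _ in range(n - 1):
        ends_a, ends_b = ends_b, ends_a + ends_b
    return ends_a + ends_b
```

The totals 2, 3, 5, 8, ... confirm the Fibonacci pattern claimed above.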
Broken calculator task
There is a calculator that performs three operations: add one to the number X; multiply X by 2; multiply X by 3. Determine the least number of operations needed to obtain N from the number 1. Output this number and, on the next line, the sequence of executed operations, e.g. "111231".
The naive solution is to divide the number by 3 while possible, otherwise by 2 if possible, otherwise subtract one, and so on until it turns into 1. This is a wrong approach, because it excludes, for example, the possibility of reducing the number by one and then dividing by three, which leads to errors on large numbers (for example, 32718). The correct solution is to find, for every number from 2 to N, the minimum number of actions based on the previous elements, basically: F(N) = min(F(N-1), F(N/2), F(N/3)) + 1, where the terms F(N/2) and F(N/3) are considered only when the corresponding index is an integer. To recreate the list of actions, walk in the opposite direction and look for an index i such that F(i) = F(N) - 1, where i is N-1, N/2, or N/3. If i = N-1, put 1 at the beginning of the line; if i = N/2, put 2; otherwise, put 3.
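A sketch of both phases, the forward table fill and the backward reconstruction of the operation string (here '1' encodes +1, '2' encodes ×2, '3' encodes ×3):

```python
def min_ops(n):
    """Minimum operations (+1, *2, *3) to get n from 1, plus the op string."""
    f = [0] * (n + 1)                     # f[i]: min operations to reach i
    for i in range(2, n + 1):
        f[i] = f[i - 1] + 1               # last op was +1
        if i % 2 == 0:
            f[i] = min(f[i], f[i // 2] + 1)   # last op was *2
        if i % 3 == 0:
            f[i] = min(f[i], f[i // 3] + 1)   # last op was *3
    ops = []                              # backtrack: find i with f[i] = f[n] - 1
    i = n
    while i > 1:
        if i % 3 == 0 and f[i // 3] == f[i] - 1:
            ops.append('3'); i //= 3
        elif i % 2 == 0 and f[i // 2] == f[i] - 1:
            ops.append('2'); i //= 2
        else:
            ops.append('1'); i -= 1
    return f[n], ''.join(reversed(ops))
```

Appending during the backward walk and reversing at the end is equivalent to prepending, just cheaper.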
Golden pyramid
Imagine a triangle composed of numbers. One number sits at the top, two numbers below it, then three, and so on down to the bottom. You start at the top and need to descend to the base of the triangle. On each move you go one level down and choose between the two numbers below the current position. As you walk the path, you "collect" and sum the numbers you pass. Your goal is to find the maximum sum obtainable over all routes.
The first thing that comes to mind is to use recursion and compute all the paths from the top. When we go one level down, all available numbers form a new, smaller triangle, so we can invoke our function on the new subset and continue until we reach the bottom.
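A faster alternative to exhaustive recursion is to collapse the triangle bottom-up; a sketch, assuming the triangle is given as a list of rows:

```python
def max_path_sum(triangle):
    """Collapse the triangle from the bottom row upward: each cell
    absorbs the better of its two children, so after the loop the
    single remaining value is the maximum obtainable sum."""
    best = list(triangle[-1])             # start from the bottom row
    for row in reversed(triangle[:-1]):
        best = [value + max(best[i], best[i + 1])
                for i, value in enumerate(row)]
    return best[0]
```

Each cell is visited exactly once, so the running time is linear in the number of entries, instead of the exponential number of paths explored by plain recursion.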
What’s next
A “greedy” algorithm, like dynamic programming, applies in cases where the desired object is built from pieces. At each step a greedy algorithm makes the locally optimal choice. A simple example: when trying to assemble a certain amount with the minimum number of coins, you can repeatedly take the coin of maximum value not exceeding the remaining amount. A greedy algorithm usually runs much faster than one based on dynamic programming, but its final answer is not always optimal.
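A sketch of the greedy coin choice, with a caveat in the comments: the denominations (25, 10, 5, 1) used as the default are my example, and greedy is only guaranteed optimal for such "canonical" coin systems.

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """Greedy change-making: always take the largest coin that fits.

    Optimal for canonical coin systems such as (25, 10, 5, 1), but can
    miss the optimum otherwise: for coins (4, 3, 1) and amount 6 it
    returns [4, 1, 1] instead of the optimal [3, 3].
    """
    taken = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:             # locally optimal choice
            amount -= coin
            taken.append(coin)
    return taken
```

The (4, 3, 1) counterexample is exactly the kind of case where only dynamic programming guarantees the minimum number of coins.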
Amortized analysis is a means of analyzing algorithms that perform a sequence of similar operations. Instead of estimating the running time of each operation separately, amortized analysis estimates the average running time per operation. The difference can be significant when occasional long-running operations occur. In fact, amortized analysis is not only a tool for evaluating algorithms but also an approach to designing them (the two are closely related).