Reinforcement Learning

Hello! We have opened a new thread for the "Machine Learning" course, so expect articles on this discipline in the near future, as well as open seminars. And now let's look at what reinforcement learning is.

Reinforcement learning is an important type of machine learning in which an agent learns how to behave in an environment by performing actions and observing the results.

In recent years we have seen a lot of success in this exciting area of research. For example, DeepMind and the Deep Q-Learning architecture
in 2014, the victory over the Go champion with AlphaGo
in 2016, OpenAI and PPO
in 2017, among others.

DeepMind DQN

In this series of articles we will focus on the different architectures used today to solve the reinforcement learning problem, including Q-learning, Deep Q-learning, Policy Gradients, Actor Critic and PPO.

In this article you will learn:
- What reinforcement learning is, and why rewards are central to it
- The three approaches to reinforcement learning
- What "deep" means in deep reinforcement learning

It is very important to master these aspects before diving into the implementation of reinforcement learning agents.

The idea behind reinforcement learning is that an agent learns from the environment by interacting with it and receiving rewards for performing actions.

Imagine, for example, standing near a fire: at a comfortable distance it is warm and pleasant (positive reward +1). But then you try to touch the fire. Ouch! It burns your hand (negative reward -1). You have just understood that fire is positive when you are at a sufficient distance, because it produces warmth, but if you get too close to it, you burn yourself.

This is how humans learn, through interaction. Reinforcement learning is simply a computational approach to learning from actions.

The reinforcement learning process
As an example, imagine an agent learning to play Super Mario Bros. The reinforcement learning (RL) process can be modeled as a loop that works as follows:
- The agent receives state S0 from the environment (in our case, the first frame of the game (state) from Super Mario Bros (environment))
- Based on this state S0, the agent takes action A0 (the agent moves to the right)
- The environment transitions to a new state S1 (a new frame)
- The environment gives the agent some reward R1 (we're not dead: +1)

This RL loop outputs a sequence of states, actions and rewards. The agent's goal is to maximize the expected cumulative reward.
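This loop can be written in a few lines of code. Below is a minimal sketch using the Gymnasium library and a random policy; the environment name and the random action choice are only illustrative assumptions, since the article itself is not tied to any particular framework.

```python
# A minimal sketch of the agent-environment loop (assumes Gymnasium is installed).
# CartPole and the random policy stand in for Super Mario Bros and a real agent.
import gymnasium as gym

env = gym.make("CartPole-v1")            # the environment
state, info = env.reset()                # initial state S0
total_reward = 0.0
done = False

while not done:
    action = env.action_space.sample()   # the "agent": here, a random action At
    # the environment moves to a new state S(t+1) and returns a reward R(t+1)
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated       # stop when a terminal state is reached

print("Cumulative reward for this episode:", total_reward)
env.close()
```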
The central idea of the reward hypothesis

Why is the agent's goal to maximize the expected cumulative reward? Reinforcement learning is based on the idea of the reward hypothesis: every goal can be described as the maximization of an expected cumulative reward.

Therefore, in reinforcement learning, to achieve the best behavior we need to maximize the expected cumulative reward.

The cumulative reward at each time step t can be written as:

Gt = Rt+1 + Rt+2 + Rt+3 + ...

This is equivalent to:

Gt = Σ Rt+k+1 (sum over k from 0 to T)

However, in reality we cannot simply add up rewards like that. Rewards that come earlier (at the beginning of the game) are more probable, because they are more predictable than rewards in the distant future.

Suppose your agent is a small mouse and your opponent is a cat. Your goal is to eat the maximum amount of cheese before being eaten by the cat. The mouse is more likely to eat the cheese next to it than the cheese near the cat (the closer we are to the cat, the more dangerous it is).

As a consequence, the reward near the cat, even if it is bigger (more cheese), will be discounted: we are not really sure we will manage to eat it. To discount the rewards, we do the following:
- We define a discount rate called gamma. It must be between 0 and 1.
- The larger the gamma, the smaller the discount. This means the agent cares more about long-term rewards.
- The smaller the gamma, the bigger the discount. This means the agent gives priority to short-term rewards (the nearest cheese).

The discounted expected cumulative reward is then:

Gt = Σ γ^k * Rt+k+1 (sum over k from 0 to infinity), with γ in [0, 1)

Simply put, each reward is discounted by gamma raised to the power of the time step: as time goes on, the cat gets closer to us, so the future reward becomes less and less likely.
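As a quick illustration (my own sketch, not code from the article), here is how the discounted return Gt could be computed for a recorded list of rewards; the reward values and gammas below are made up.

```python
# Computing the discounted return G_t = sum_k gamma^k * R_{t+k+1}
# for one recorded episode. Rewards and gamma values are hypothetical.
def discounted_return(rewards, gamma=0.99):
    g = 0.0
    # iterate backwards so each reward is discounted once more per extra step
    for r in reversed(rewards):
        g = r + gamma * g
    return g

rewards = [1, 1, 1, 0, 10]                      # made-up rewards of one episode
print(discounted_return(rewards, gamma=0.9))    # high gamma: the final +10 still matters
print(discounted_return(rewards, gamma=0.1))    # low gamma: only the nearest rewards count
```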
Episodic and continuing tasks

A task is an instance of a reinforcement learning problem, and it can be episodic or continuing. An episodic task has a starting point and a terminal state, which defines an episode: a list of states, actions, rewards and new states. Think of a Super Mario Bros level: the episode begins when a new Mario is launched and ends when you are killed or reach the end of the level.

Beginning of a new episode

Continuing tasks

These are tasks that go on forever (there is no terminal state). In this case the agent must learn to choose the best actions while simultaneously interacting with the environment. An example is an agent that performs automated stock trading: there is no starting point or terminal state for this task, and the agent keeps running until we decide to stop it.

Monte Carlo vs Temporal Difference learning

There are two ways of learning:

- Collecting the rewards at the end of the episode and then estimating the maximum expected future reward: the Monte Carlo approach
- Estimating the reward at each step: Temporal Difference learning

Monte Carlo

When the episode ends (the agent reaches a "terminal state"), the agent looks at the total cumulative reward to see how well it did. In the Monte Carlo approach, rewards are only received at the end of the game.

Then a new game is started with the added knowledge, and the agent makes better decisions with each iteration. The value of each visited state is updated as follows:

V(St) ← V(St) + α * (Gt - V(St))
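A minimal sketch of this Monte Carlo update (my own illustration; the states, the learning rate and the recorded episode are all assumptions):

```python
# Monte Carlo update of a state-value table: V(St) <- V(St) + alpha * (Gt - V(St)).
# States, alpha, gamma and the recorded episode below are made up for illustration.
from collections import defaultdict

V = defaultdict(float)        # value estimate for each state, initialised to 0
alpha, gamma = 0.1, 0.95      # learning rate and discount rate (assumed values)

# one finished episode, stored as (state, reward received after leaving it) pairs
episode = [("s0", 0.0), ("s1", 1.0), ("s2", 0.0), ("s3", 10.0)]

g = 0.0
# walk the episode backwards, building the return Gt seen from each state
for state, reward in reversed(episode):
    g = reward + gamma * g
    V[state] += alpha * (g - V[state])

print(dict(V))
```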
If we take the maze environment:

- We always start from the same starting point.
- We terminate the episode if the cat eats us or if we move more than 20 steps.
- At the end of the episode we have a list of states, actions, rewards and new states.
- The agent sums up the total reward Gt (to see how well it did).
- It then updates V(St) according to the formula above.
- Then a new game is started with this new knowledge.

By running more and more episodes, the agent will learn to play better and better.

Temporal Difference learning: learning at each time step

Temporal Difference (TD) learning, on the other hand, does not wait until the end of the episode to update its estimate of the maximum expected future reward: it updates V based on the experience gained so far.

This method is called TD(0), or one-step TD, because it updates the value function after every single step. TD methods only wait until the next time step: at time t+1 a TD target is formed from the observed reward Rt+1 and the current estimate V(St+1):

V(St) ← V(St) + α * (Rt+1 + γ * V(St+1) - V(St))

The TD target is an estimate of the expected return: in fact, you update the previous estimate V(St) towards a one-step target.
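A sketch of this one-step update (again my own illustration with assumed states, rewards and step sizes, not code from the article):

```python
# TD(0) update: V(St) <- V(St) + alpha * (R(t+1) + gamma * V(S(t+1)) - V(St)).
# States, rewards and hyperparameters below are made up for illustration.
from collections import defaultdict

V = defaultdict(float)
alpha, gamma = 0.1, 0.95

def td0_update(state, reward, next_state, done):
    # the TD target combines the observed reward with the current estimate of the next state
    target = reward + (0.0 if done else gamma * V[next_state])
    V[state] += alpha * (target - V[state])

# a single transition is enough to learn something; no need to wait for the episode to end
td0_update("s0", reward=1.0, next_state="s1", done=False)
print(dict(V))
```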
The exploration/exploitation trade-off

Before looking at the different strategies for solving reinforcement learning problems, we must cover one more very important topic: the trade-off between exploration and exploitation.

- Exploration means finding more information about the environment.
- Exploitation means using the information we already have to maximize the reward.

Remember, the goal of our RL agent is to maximize the expected cumulative reward. However, we can fall into a common trap.

In this game, our mouse can eat an infinite number of small pieces of cheese (+1 each), but at the top of the maze there is a gigantic piece of cheese (+1000). If we focus only on the immediate reward, our agent will never reach that gigantic piece: it will only exploit the nearest source of reward, even though this source is small (exploitation). But if the agent explores the environment a little, it can find the big reward.

This is what we call the exploration/exploitation trade-off. We must define a rule that helps to handle this compromise. In future articles you will learn different ways to do it.
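One simple and common such rule is epsilon-greedy action selection. The sketch below is a generic illustration (the value estimates and the epsilon value are assumptions), not necessarily the exact rule used later in the series.

```python
# Epsilon-greedy: with probability epsilon explore (pick a random action),
# otherwise exploit (pick the action with the highest current estimate).
# The estimates and epsilon below are made up for illustration.
import random

def epsilon_greedy(q_values, epsilon=0.1):
    if random.random() < epsilon:
        return random.randrange(len(q_values))                    # exploration
    return max(range(len(q_values)), key=lambda a: q_values[a])   # exploitation

q_values = [0.2, 0.5, 0.1]   # hypothetical estimates for three actions in one state
print(epsilon_greedy(q_values, epsilon=0.1))
```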
Three approaches to reinforcement learning

Now that we have defined the main elements of reinforcement learning, let's move on to the three approaches to solving a reinforcement learning problem: value-based, policy-based and model-based.

Value-based

In value-based RL, the goal is to optimize the value function V(s). The value function tells us the maximum expected future reward the agent will get in each state.

The value of a state is the total amount of reward the agent can expect to accumulate in the future, starting from that state:

Vπ(s) = Eπ[ Rt+1 + γ * Rt+2 + γ² * Rt+3 + ... | St = s ]

The agent uses this value function to decide which state to move to at each step: it selects the state with the highest value. In the maze example, at each step we take the highest value: -7, then -6, then -5 (and so on) until we reach the goal.

Policy-based

In policy-based RL, we want to directly optimize the policy function π(s) without using a value function. The policy is what defines the agent's behavior at a given time: it maps each state to the best corresponding action. There are two types of policy:

- Deterministic: a policy that, in a given state, always returns the same action.
- Stochastic: a policy that outputs a probability distribution over actions.

As you can see, the policy directly indicates the best action to take at each step.

Model-based

In model-based RL, we model the environment, that is, we build a model of how the environment behaves. The problem is that every environment needs its own model representation, which is why we will not focus much on this type of learning in the following articles.

Getting to know deep reinforcement learning

Deep Reinforcement Learning introduces deep neural networks to solve reinforcement learning problems, hence the word "deep". For example, in the next article we will work on Q-Learning (classic reinforcement learning) and Deep Q-Learning.

You will see that the difference is that in the first approach we use a traditional algorithm to build a Q-table that tells us which action to take in each state. In the second approach we use a neural network to approximate the reward based on the state (the Q value).

Scheme inspired by the Q-learning notebook from Udacity
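To make this contrast concrete, here is a rough sketch of both representations (the sizes, the state encoding and the network layout are assumptions for illustration; the upcoming articles will build the real versions):

```python
# Classic Q-Learning stores one Q-value per (state, action) pair in a table;
# Deep Q-Learning replaces the table with a neural network that maps a state
# to Q-values for all actions. Both snippets are illustrative sketches only.
import numpy as np

n_states, n_actions = 16, 4
q_table = np.zeros((n_states, n_actions))   # tabular representation
best_action = int(np.argmax(q_table[3]))    # best known action for state 3

# A tiny Q-network sketch (assumes PyTorch is available):
import torch.nn as nn

q_network = nn.Sequential(
    nn.Linear(4, 64),           # a state described by 4 numbers (assumed) as input
    nn.ReLU(),
    nn.Linear(64, n_actions),   # one Q-value per action as output
)
```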
That's all. As always, we look forward to your comments and questions, which you can leave here or ask Arthur Kadurin, the course instructor, at his open lesson devoted to training networks.