Translation of Andrew Ng's Machine Learning Yearning, Chapters 28-29


28. Diagnosing Bias and Variance: Learning Curves

We have considered several approaches to splitting error into avoidable bias and variance, by estimating the optimal error rate and computing the algorithm's error on the training and validation sets. Let's discuss a more informative technique: plotting a learning curve.

A learning curve plots your validation-set error against the number of training examples.
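As a concrete illustration, here is a minimal sketch of how such a curve could be computed: train on growing subsets of the training data and measure the error rate on a fixed validation set. The dataset, model, and subset sizes are illustrative assumptions, not part of the original text.

```python
# Minimal learning-curve sketch (illustrative data and model, not from the text):
# train on growing subsets of the training pool, record validation error rate.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1200, n_features=20, random_state=0)
X_train, y_train = X[:1000], y[:1000]   # training pool
X_val, y_val = X[1000:], y[1000:]       # fixed validation set

sizes = [50, 100, 200, 400, 800, 1000]  # arbitrary subset sizes
val_errors = []
for m in sizes:
    clf = LogisticRegression(max_iter=1000).fit(X_train[:m], y_train[:m])
    val_errors.append(1.0 - clf.score(X_val, y_val))  # error rate = 1 - accuracy

for m, err in zip(sizes, val_errors):
    print(f"m={m:5d}  validation error={err:.3f}")
```

Plotting `val_errors` against `sizes` with any plotting library gives exactly the red validation-error curve discussed in this chapter.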


As the training set size increases, the validation error should decrease.

We will often have some "desired error rate" that we hope our learning algorithm will eventually achieve. For example:


- If we hope to reach human-level performance, then the human error rate can serve as the "desired error rate".
- If the learning algorithm powers some product (such as one that delivers cat pictures), we may have a sense of what level of performance is needed for users to get real value.
- If you have worked on an important application for a long time, you may have a reasonable sense of how much progress is realistic in the next quarter or year.

Let's add the desired level of performance to our learning curve:

You can visually extrapolate the red validation-error curve and guess how much closer you could get to the desired level of performance by adding more data. In the example shown in the picture, it looks plausible that doubling the training set size would reach the desired level.

However, if the validation-error curve has flattened into a plateau (that is, it has become a line parallel to the x-axis), you immediately know that adding more data will not get you to the goal:
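The "plateau" judgment can be made concrete with a small heuristic: treat the curve as flat if the error has not improved meaningfully over the last few measurements. The window size and tolerance below are arbitrary illustrative choices, not values from the text.

```python
# Heuristic plateau check for a validation-error curve: if the error has
# improved by less than `tol` over the last `window` measurements, treat
# the curve as flat. `window` and `tol` are illustrative choices.
def has_plateaued(errors, window=3, tol=0.005):
    if len(errors) < window + 1:
        return False  # too few points to judge
    recent = errors[-(window + 1):]
    return (recent[0] - min(recent[1:])) < tol

# Still improving: each point is clearly below the previous one.
print(has_plateaued([0.30, 0.22, 0.17, 0.14, 0.12]))            # False
# Flat tail: no meaningful improvement over the last measurements.
print(has_plateaued([0.30, 0.15, 0.101, 0.1005, 0.100, 0.100]))  # True
```

A visual check of the plotted curve remains the primary tool; a heuristic like this only makes the eyeballed judgment reproducible.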


Looking at the learning curve can thus save you from spending months collecting twice as much training data, only to discover that it does not help.

One drawback of this approach is that if you look only at the validation-error curve, it can be hard to extrapolate and predict exactly how the red curve will behave when you add more data. So there is one additional plot that helps you estimate the impact of adding training data: the training error.


29. Plotting Training Error

Your validation (and test) set error should decrease as the training set grows. But your training set error usually increases as the training set grows.

Let's illustrate this effect with an example. Suppose your training set has only 2 examples: one image with a cat and one without. Then it is easy for the learning algorithm to "remember" both examples and achieve 0% training error. Even if one or both training examples were mislabeled, the algorithm would still easily memorize their labels.


Now imagine a training set of 100 examples. Suppose some examples are mislabeled, or are inherently ambiguous: some images are so blurry that even a person cannot tell whether a cat is present. Perhaps the learning algorithm can still "memorize" most of the training set, but it is now harder to reach 100% accuracy. By growing the training set from 2 to 100 examples, you will find that training-set accuracy gradually drops.


Finally, imagine that your training set has 10,000 examples. It becomes increasingly hard for the algorithm to fit all of them perfectly, especially when the training set contains blurry images and labeling errors. So your algorithm will do even worse on this training set.


Let's add a plot of training error to our previous figure:


You can see that the blue "training error" curve rises as the training set grows. Moreover, a learning algorithm usually does better on the training set than on the validation set; thus the red validation-error curve lies strictly above the blue training-error curve.


Next, let's discuss how to interpret these plots.


To be continued.




Author: weber
Published: 14-11-2018, 12:28
Category: Development / Programming