Trading robot optimization algorithms: an effective way to trade a million in hindsight

I read an authoritative book on trading strategies and wrote my own trading robot. To my surprise, the robot does not bring in millions, even when trading virtually. The reason is obvious: like a racing car, a robot needs "tuning", a selection of parameters adapted to a specific market and a specific period of time.

Since the robot has quite a few settings, sorting through all their possible combinations in search of the best one is time-consuming, too time-consuming. At the time, while solving this optimization problem, I could not find an informed comparison of search algorithms for a quasi-optimal vector of trading-robot parameters. Therefore, I decided to compare several algorithms myself.

#### Brief formulation of the optimization problem

We have a trading algorithm. The input data is the price history at the hourly interval for 1 year of observations. The output is P, the profit or loss, a scalar value.

The trading algorithm has 4 configurable parameters:

- Mf, the period of the "fast" moving average,
- Ms, the period of the "slow" moving average,
- T, TakeProfit, the target profit level for each individual transaction,
- S, StopLoss, the target loss level for each individual transaction.

For each of the parameters we specify a range and a fixed change step, for a total of **20** values for each of the parameters.

Thus, we can search for the maximum profit P over one parameter on one array of input data:

- varying one parameter, for example P = f(Ms), producing up to 20 tests, one for each of the 20 values of the period.

*Figure: the price curve and moving averages (Moving Average, MA) of different periods*

For example, in order to calculate the ordinate of the last (right) point of the red curve in the figure, I took the average of the last 5 values of the price. Thus, each moving average is not only "smoothed" relative to the price curve, but also lags behind it by half of its period.

**The rule for opening a transaction**

The robot has a simple rule for making a buy/sell decision:

- as soon as the moving average with a short period (the "fast" MA) crosses the moving average with a long period (the "slow" MA) from the bottom up, the robot buys the asset (gold);
- as soon as the "fast" MA crosses the "slow" MA from top to bottom, the robot sells the asset.

In the picture above, the robot makes 5 transactions: 3 sales (at time marks ?, 31 and 50) and two purchases (marks 16 and 36). The robot is allowed to open an unlimited number of transactions; for example, at some point the robot may have several open purchases and sales at the same time.

**The rule for closing a transaction**

The robot closes a deal as soon as:

- the profit on the transaction, in percent, exceeds the specified threshold, TakeProfit,
- or the loss on the transaction, in percent, exceeds the corresponding value, StopLoss.

Suppose StopLoss is 0.2% and the deal is a "sale" of gold at some price P0. As soon as the price of gold rises to P0 + P0 · 0.2 / 100, the loss on the transaction will obviously be equal to 0.2% and the robot will close the transaction automatically.

The robot makes all decisions on opening or closing a deal at discrete points in time: at the end of each hour, after the publication of the next XAUUSD quote.

Yes, the robot is extremely simple. At the same time, it is 100% compliant with the requirements imposed on it:

- the algorithm is deterministic: each time we imitate the work of the robot on the same price data, we get the same result;
- it has a sufficient number of adjustable parameters, specifically: the "fast" and the "slow" moving-average periods (natural numbers), and TakeProfit and StopLoss (positive real numbers);
- changing each of the 4 parameters has, in general, a non-linear effect on the characteristics of the robot's trading, in particular on its profitability;
- the profitability of the robot on a price history is computed by elementary program code, and the calculation itself takes a fraction of a second for a vector of a thousand quotes;
- finally, though this is beside the point, for all its simplicity the robot will in reality prove itself no worse (albeit probably no better) than a Grail sold by its author on the Internet for an immodest sum.

**Quick search for a quasi-optimal set of input parameters**

The example of our simple robot shows that a complete enumeration of all possible vectors of robot settings is too expensive, even for 4 variable parameters. The obvious alternative to complete enumeration is to choose the parameter vectors according to some strategy, considering only a part of all possible combinations in search of the best one. One such strategy is gradient descent.

#### Gradient descent method

Formally, as the name implies, the method searches for the minimum of the objective function (OF). According to the method, we choose a starting point with coordinates [x0, y0, z0, …]; in the example of optimizing one parameter, this can be a randomly selected point. Then, repeatedly:

- check the values of the OF in the vicinity of the current position (149 and 144),
- move to the point with the smallest value of the OF,
- if there is no such point, a local extremum has been found and the algorithm terminates.

To optimize the OF as a function of two parameters we use the same algorithm; where earlier we calculated the OF at the two adjacent points x − 1 and x + 1, we now check up to four neighbors on the (x, y) grid.

The method is definitely good when there is only one extremum of the OF over the search space. If there are several extrema, the search will have to be repeated several times, from different starting points, to increase the likelihood of finding the global extremum.

In our example we are looking for the **maximum** of the OF. To stay within the definition of the algorithm, we can assume that we are looking for the minimum of "minus the OF". The same example, the profit of the trading robot as a function of the moving-average period and the value of TakeProfit, one iteration:

*Figure: gradient descent, one iteration*
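The stepping procedure just described can be sketched in a few lines of Python; the function name, the grid shape, and the toy single-peak profit surface below are my illustration, not the author's code:

```python
import random

def grid_descent(objective, shape, start=None, rng=random):
    """Greedy neighbor search for the MAXIMUM of `objective` on an
    integer parameter grid (equivalently, a descent on "minus the
    objective"). Stops at the first local extremum found."""
    if start is None:
        start = tuple(rng.randrange(n) for n in shape)
    current = start
    while True:
        # Axis-aligned neighbors: one step back or forth along each axis.
        neighbors = [
            tuple(c + (step if i == axis else 0) for i, c in enumerate(current))
            for axis in range(len(shape))
            for step in (-1, 1)
        ]
        neighbors = [p for p in neighbors
                     if all(0 <= c < n for c, n in zip(p, shape))]
        best = max(neighbors, key=objective)
        if objective(best) <= objective(current):
            return current, objective(current)   # local maximum reached
        current = best

# Toy profit surface with a single smooth peak at Ms = 12, T = 7.
profit = lambda p: -((p[0] - 12) ** 2 + (p[1] - 7) ** 2)

point, value = grid_descent(profit, shape=(20, 20), start=(0, 0))
print(point, value)  # (12, 7) 0
```

With several extrema the search has to be restarted from different random starting points, exactly as noted above.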

| | Monte Carlo | gradient descent |
|---|---|---|
| average of the obtained quasi-optimal value of the objective function | ??? | ??? |
| the same, as a percentage of the maximum of the objective function | 95.7% | 92.1% |

As we can see from the table, in this example the gradient descent method copes worse with its task of finding the global extremum of the objective function, the maximum profit. But let us not be in a hurry to dismiss it.
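For reference, the Monte Carlo baseline in the table is plain uniform random sampling of parameter vectors. A minimal sketch under the same toy setup as before (all names are mine):

```python
import random

def monte_carlo_search(objective, shape, n_tests, rng=None):
    """Evaluate the objective at n_tests uniformly random points of the
    parameter grid and keep the best one; no structure is assumed."""
    rng = rng or random.Random()
    best_point, best_value = None, float("-inf")
    for _ in range(n_tests):
        point = tuple(rng.randrange(n) for n in shape)
        value = objective(point)
        if value > best_value:
            best_point, best_value = point, value
    return best_point, best_value

# Same toy profit surface, peak at (12, 7).
profit = lambda p: -((p[0] - 12) ** 2 + (p[1] - 7) ** 2)
point, value = monte_carlo_search(profit, shape=(20, 20), n_tests=40,
                                  rng=random.Random(1))
```

Unlike gradient descent, every test is independent, so the method never gets stuck in a local extremum, but it also learns nothing from its previous shots.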

#### Parametric stability of the trading algorithm

Finding the coordinates of the global maximum/minimum of the objective function is often not the real goal of optimization. Suppose there is a "sharp" peak on the graph, a global maximum in whose vicinity the value of the objective function is significantly lower than the peak value:

*Figure: objective function with a sharp, isolated global maximum*

Suppose we chose the trading robot settings corresponding to the found maximum of the objective function. If we slightly change the value of even one of the parameters (the moving-average period and/or TakeProfit), the profitability of the robot will fall sharply, becoming negative. With regard to real trading, we can, at a minimum, expect that the market in which our robot is to trade will differ significantly from the period of history on which we optimized the trading algorithm.

Consequently, when choosing the "optimal" settings of a trading robot, it is worth getting an idea of how sensitive the robot is to changes of the settings in the vicinity of the found extremum point of the objective function. Obviously, the gradient descent method, as a rule, does give us the values of the objective function in the vicinity of the extremum, while the Monte Carlo method, rather, fires blindly across the squares of the map.

Many manuals on testing automatic trading strategies recommend, after completing the optimization, to check the robot's target indicators in the vicinity of the found parameter vector. But these are additional tests. Besides, what if the profitability of the strategy does fall with a slight change of the settings? Then, obviously, the testing process has to be repeated.

An algorithm that, simultaneously with the search for the extremum of the objective function, would let us assess the stability of the trading strategy to changes of the settings in a narrow range around the found peaks would be useful.
For example, the algorithm can look not directly at the maximum of the objective function P(x, y), but at a weighted average that takes the neighboring values of the objective function into account, where the weight is inversely proportional to the distance to the neighboring value (for the case of two optimized parameters x, y and objective function P):

$$\tilde{P}(x, y) = \frac{\sum_i w_i \, P(x_i, y_i)}{\sum_i w_i},$$

where the sum runs over the point (x, y) itself and its neighbors on the parameter grid, and the weight $w_i$ decreases with the distance $d_i = \sqrt{(x - x_i)^2 + (y - y_i)^2}$, for example $w_i = 1/(1 + d_i)$.
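A sketch of this evaluation in Python; the concrete weight w = 1/(1 + d) and the one-step neighborhood are my reading of "inversely proportional to the distance", not necessarily the author's exact choice:

```python
from itertools import product
from math import dist  # Euclidean distance between two points, Python 3.8+

def smoothed(objective, point, shape, radius=1):
    """Weighted average of the objective over `point` and its grid
    neighbors; the weight shrinks with the distance from `point`."""
    num = den = 0.0
    for offset in product(range(-radius, radius + 1), repeat=len(point)):
        p = tuple(c + o for c, o in zip(point, offset))
        if all(0 <= c < n for c, n in zip(p, shape)):
            w = 1.0 / (1.0 + dist(point, p))   # weight 1 at the point itself
            num += w * objective(p)
            den += w
    return num / den

# A "sharp" peak: raw value 100, but the smoothed score drops steeply.
sharp = lambda p: 100 if p == (5, 5) else 0
print(smoothed(sharp, (5, 5), (20, 20)))   # ≈ 21.5: the sharp peak is penalized
```

A broad plateau of the same height would keep a smoothed score close to its raw value, which is exactly the robustness property we want to reward.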

In other words, when choosing a quasi-optimal vector of parameters, the algorithm will evaluate the "smoothed" objective function:

*Figure: the objective function before smoothing ("was") and after smoothing ("became")*

#### The "sea battle" method

Trying to combine the advantages of both methods (Monte Carlo and gradient descent), I tried an algorithm similar to the game of "sea battle":

- first, I make several "shots" spread over the whole area,
- then, at the places of the "hits", I open massive fire.

That is, the first N tests are carried out on random, uncorrelated vectors of input parameters. Of these, the M best results are selected. In the vicinity of those tests (plus or minus one step along each of the coordinates) another K tests are conducted.

For our example (400 points, 40 tests in total) we have:

*Figure: the "sea battle" search over the parameter grid*

And again we compare the effectiveness of the, now 3, optimization algorithms:

| | Monte Carlo | gradient descent | "sea battle" |
|---|---|---|---|
| Average value of the found extremum of the objective function, as a percentage of the global value (40 tests, ?000 iterations of optimization) | 95.7% | 92.1% | 97.0% |

The result is encouraging. Of course, the comparison was made on one specific data sample: one trading algorithm on one time series of the value of the euro against the US dollar. But before comparing the algorithms on a larger number of samples of the original data, I am going to talk about another, unexpectedly (undeservedly?) popular algorithm for optimizing trading strategies, the **genetic algorithm** (GA). However, the article has turned out too voluminous, and the GA will have to be postponed to the next publication.
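Putting it together, the two-stage search can be sketched as follows; the 20/4/5 split of the 40-test budget, the toy profit surface, and all names are my illustration:

```python
import random

def sea_battle_search(objective, shape, n_scatter, m_best, k_local, rng=None):
    """Stage 1: n_scatter random "shots" across the whole parameter grid.
    Stage 2: around each of the m_best hits, fire k_local extra shots at
    random points within +/- one step along every coordinate."""
    rng = rng or random.Random()
    tried = {}   # point -> objective value; also deduplicates repeated shots

    def shoot(point):
        if point not in tried:
            tried[point] = objective(point)

    for _ in range(n_scatter):                      # stage 1: scatter fire
        shoot(tuple(rng.randrange(n) for n in shape))

    hits = sorted(tried, key=tried.get, reverse=True)[:m_best]
    for hit in hits:                                # stage 2: massive fire
        for _ in range(k_local):
            shot = tuple(min(max(c + rng.choice((-1, 0, 1)), 0), n - 1)
                         for c, n in zip(hit, shape))
            shoot(shot)

    best = max(tried, key=tried.get)
    return best, tried[best], len(tried)

# Toy profit surface with its maximum at (12, 7).
profit = lambda p: -((p[0] - 12) ** 2 + (p[1] - 7) ** 2)
point, value, n_evals = sea_battle_search(
    profit, shape=(20, 20), n_scatter=20, m_best=4, k_local=5,
    rng=random.Random(0))
```

Deduplicating shots keeps the total number of objective evaluations within the fixed budget even when stage 2 lands on an already tested point.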



Author: weber. Published 16-11-2018, 19:11, in Development / Programming.
