1  Economy / Trading Discussion / Blockchain Quantitative Investment Series - Dynamic Balance Strategy on: March 28, 2019, 07:47:28 AM
Article originally from FMZ.COM (a platform where you can create your own trading bot with Python, JavaScript and C++).


Images can't be posted here; for more info, please visit https://www.fmz.com/bbs-topic/3611

A gathering place for the “real stuff” of quantitative trading, where you can truly benefit.

NO.1

Warren Buffett's mentor, Benjamin Graham, once described in his book "The Intelligent Investor" a trading model in which stocks and bonds are dynamically balanced.

This trading model is very simple:

50% of the funds on hand are invested in equity funds, and the remaining 50% are invested in bond funds, so stocks and bonds each account for half.

At fixed intervals, or when the market moves enough, the position is rebalanced so that the ratio of stock assets to bond assets returns to the initial 1:1.

That is the entire logic of the strategy, including when to buy and sell and how much to buy and sell. How simple and effective it is!

NO.2

In this method, the volatility of bond funds is actually very small, far below that of stocks, so bonds are used here as a "reference anchor": they are used to measure whether stocks have risen too much or too little.

If the stock price rises, the market value of the stocks becomes greater than the market value of the bonds. When the ratio of their market values exceeds a set threshold, the total position is readjusted: stocks are sold and bonds are bought, restoring the stock-to-bond value ratio to the initial 1:1.

Conversely, if the stock price falls, the market value of the stocks becomes less than the market value of the bonds. When the ratio of their market values exceeds the set threshold, the total position is readjusted: stocks are bought and bonds are sold, again restoring the value ratio to the initial 1:1.

In this way, the dynamic balance between stocks and bonds lets us enjoy the profit of stock growth while reducing asset volatility. As a pioneer of value investing, Graham provided us with a wonderful idea.

Since this is such a complete and mature strategy, why not use it on the cryptocurrency market?

NO.3

Blockchain Assets Dynamic Balance Strategy in BTC

Strategy logic

Based on the current value of BTC, the account holds $6,400 in cash and 1 BTC, i.e. the initial ratio of cash to BTC market value is 1:1.

If the price of BTC rises to $7,400, the BTC market value becomes greater than the cash balance. Once the difference between them exceeds the set threshold, (7400-6400)/7400/2 BTC is sold. BTC has appreciated, so part of it is converted back into cash.

If the price of BTC falls to $5,400, the BTC market value becomes less than the cash balance. Once the difference between them exceeds the set threshold, (6400-5400)/5400/2 BTC is bought. BTC has depreciated, so some BTC is bought back.

In this way, regardless of whether BTC appreciates or depreciates, the cash balance and the market value of the BTC position are always kept dynamically equal: when BTC falls, buy a little; when it rises again, sell a little, just like a balance scale.
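To make the arithmetic above concrete, here is a minimal JavaScript sketch (illustration only; the 6400/7400 figures are just the example numbers above, and the full strategy code follows below):
Code:
// Rough sketch of the rebalance arithmetic from the example above (not the full strategy)
var cash = 6400;   // cash balance in USD
var btc = 1;       // BTC held
var price = 7400;  // current BTC price

var btcValue = btc * price;        // market value of the BTC position
var diff = (btcValue - cash) / 2;  // half the gap between the two sides
if (diff > 0) {
    // BTC side is too heavy: sell diff / price BTC, e.g. (7400-6400)/7400/2 ≈ 0.0676 BTC
    Log("sell", diff / price, "BTC");
} else {
    // cash side is too heavy: buy -diff / price BTC
    Log("buy", -diff / price, "BTC");
}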

NO.4

So how do you implement it with programming code?

Let us take the FMZ quantitative trading platform as an example and first look at the strategy framework:
Code:
// strategy parameter
var threshold = 0.05; // Threshold
var LoopInterval = 60; // Polling interval(seconds)
var MinStock = 0.001; // Minimum transaction volume
var XPrecision = 4; // Quantity accuracy
var ZPrecision = 8; // Price accuracy

// Order cancellation function
function CancelPendingOrders() {

}

// Placing Order function
function onTick() {

}

// Main function
function main() {
    // Filter non-critical information
    SetErrorFilter("GetRecords:|GetOrders:|GetDepth:|GetAccount|:Buy|Sell|timeout");
    while (true) { // Polling mode
        if (onTick()) { // Execute the onTick function
            CancelPendingOrders(); // Cancel unexecuted pending orders
        }
        Sleep(LoopInterval * 1000); // Sleep
    }
}
The entire strategy framework is actually very simple: a main function, an onTick order-placing function, a CancelPendingOrders function, and the necessary parameters.

NO.5

Order module
Code:
// Placing Order function
function onTick() {
    var acc = _C(exchange.GetAccount); // Get account information
    var ticker = _C(exchange.GetTicker); // Get Tick data
    var spread = ticker.Sell - ticker.Buy; // Get the bid-ask spread of Tick data
    // 0.5 times the difference between the account balance and the current position value
    var diffAsset = (acc.Balance - (acc.Stocks * ticker.Sell)) / 2;
    var ratio = diffAsset / acc.Balance; // diffAsset / Account Balance
    LogStatus('ratio:', ratio, _D()); // Print ratio and current time
    if (Math.abs(ratio) < threshold) { // If the absolute value of ratio is less than the specified threshold
        return false; // return false
    }
    if (ratio > 0) { // If ratio is greater than 0
        var buyPrice = _N(ticker.Sell + spread, ZPrecision); // Calculate the order price
        var buyAmount = _N(diffAsset / buyPrice, XPrecision); // Calculate the order quantity
        if (buyAmount < MinStock) { // If the order quantity is less than the minimum transaction volume
            return false; // return false
        }
        exchange.Buy(buyPrice, buyAmount, diffAsset, ratio); // Buy order
    } else {
        var sellPrice = _N(ticker.Buy - spread, ZPrecision); // Calculate the order price
        var sellAmount = _N(-diffAsset / sellPrice, XPrecision); // Calculate the order quantity
        if (sellAmount < MinStock) { // If the order quantity is less than the minimum transaction volume
            return false; // return false
        }
        exchange.Sell(sellPrice, sellAmount, diffAsset, ratio); // Sell order
    }
    return true; // return true
}
The order-placing logic is clear, and the explanations have been written into the code as comments.

The main process is as follows:

Get account information.

Get the Tick data.

Calculate the Tick data bid-ask spread.

Calculate the difference between the account balance and the BTC market value.

Calculate the trigger condition of trading, order price, and order quantity.

Place an order and return true.

NO.6

Cancel pending order module
Code:
// Order cancellation function
function CancelPendingOrders() {
    Sleep(1000); // Sleep 1 second
    var ret = false;
    while (true) {
        var orders = null;
        // Continue to get an array of unexecuted orders, if an exception is returned, continue to get
        while (!(orders = exchange.GetOrders())) {
            Sleep(1000); // Sleep 1 second
        }
        if (orders.length == 0) { // If the order array is empty
            return ret; // Return the cancellation status
        }
        for (var j = 0; j < orders.length; j++) { // Traversing the array of unexecuted orders
            exchange.CancelOrder(orders[j].Id); // Cancel unexecuted orders one by one
            ret = true;
            if (j < (orders.length - 1)) {
                Sleep(1000); // Sleep 1 second
            }
        }
    }
}
The cancel pending order module is even simpler; the steps are as follows:

Wait 1 second before cancelling, because some exchanges may have server delays.

Keep fetching the array of unexecuted orders; if an exception is returned, keep trying until it succeeds.

If the unexecuted order array is empty, return the cancellation status immediately.

If there are unexecuted orders, traverse the entire array and cancel the orders one by one according to their order IDs.

NO.7

The complete source code of this strategy


On the FMZ quantitative trading platform, a complete BTC dynamic balancing strategy has been built with just 80 lines of code. But does such a simple strategy have any real value? Read on.

NO.8

Next, let's test this simple dynamic balancing strategy to see if it works. The following is a backtest on the historical data of BTC, for your reference only.

The backtesting environment, backtest performance, backtest equity curve, and the BTC price chart over the same period are shown as images in the linked FMZ post (images cannot be posted here).

Surprising, isn't it?

BTC has continued to decline for eight months, with a maximum drawdown of more than 70%, which has caused many investors to lose confidence in blockchain assets.

Yet the cumulative return of this strategy is as high as 160%, and the annualized return-to-risk ratio exceeds 5. For such a simple trading strategy, this return on investment has already beaten the majority of "all-in" players.

NO.9

This balancing strategy, with only one core parameter (the threshold), is a very simple investment method that pursues solid profits rather than excess returns.

Contrary to trend-following strategies, the dynamic balance strategy trades against the trend: it reduces the position and cools down when the market is overheated, and quietly accumulates when the market is deserted, somewhat like macroeconomic regulation.

In fact, the dynamic balance strategy is built on the idea that prices are unpredictable, while still capturing price fluctuations. The core of the strategy is setting and adjusting the asset allocation ratio and the trigger threshold.

Given the length of this article, it cannot cover everything. As the old saying goes, "Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime." The most important thing about the dynamic balance strategy is the investment idea. You can even replace the single BTC asset in this article with a basket of blockchain assets, as sketched below.
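As a hint of what that could look like, here is a minimal sketch (illustrative assumptions only: the equal-weight target and the function name rebalanceBasket are not part of the original strategy) of pulling each asset in a basket back toward its target share of total value:
Code:
// Sketch: extend the dynamic balance idea to a basket of assets (equal weights assumed)
function rebalanceBasket(cash, holdings, prices, threshold) {
    // holdings and prices are maps such as {BTC: 1, ETH: 10}
    var symbols = Object.keys(holdings);
    var total = cash;
    symbols.forEach(function(s) { total += holdings[s] * prices[s]; });
    var targetValue = total / (symbols.length + 1); // equal share for cash and each asset
    var orders = [];
    symbols.forEach(function(s) {
        var diff = holdings[s] * prices[s] - targetValue; // positive: this asset is overweight
        if (Math.abs(diff) / total > threshold) {
            orders.push({symbol: s, amount: -diff / prices[s]}); // positive = buy, negative = sell
        }
    });
    return orders;
}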

Finally, let's end this article with a passage from Benjamin Graham's famous book "The Intelligent Investor":

The stock market is not a "weighing machine" that accurately measures value. On the contrary, it is a "voting machine": the decisions made by countless people are a mixture of reason and emotion, and many of these choices are a far cry from rational judgments of value. The secret of investing is to buy when prices are far below intrinsic value and to trust that the market will eventually recover.

Benjamin Graham
To copy the source code directly, please visit our Strategy Square at: https://www.fmz.com/strategy/110900

There are lots of strategies there that you can study, download, rent, or purchase.

NO.10

About us

We run this website to change the current state of the quantitative trading world, where the "real stuff" is scarce, scams abound and deep communication is rare, and to create a purer platform for learning and discussing quantitative trading. For more information, please visit our website (www.fmz.com).

Your sharing is the driving force that supports us in producing more "real stuff"! If you find this article helpful, please forward it to your friends and support us. Sharing is also a kind of wisdom!

Article originally from FMZ.COM (a platform where you can create your own trading bot with Python, JavaScript and C++).

Images can't be posted here; for more info, please visit https://www.fmz.com/bbs-topic/3611
2  Economy / Trading Discussion / Detailed explanation of BitMEX pending order strategy on: March 26, 2019, 07:36:35 AM
Article originally from FMZ.COM (a platform where you can create your own trading bot with Python, JavaScript and C++).
BitMEX has become the platform of choice for leveraged cryptocurrency trading, but its API rate limits are strict and can leave automated traders quite confused. This article shares some tips on using the API on the FMZ quantitative trading platform, mainly for market-making strategies.

1. Features of BitMEX
The most significant advantage is the very active trading liquidity, especially for the Bitcoin perpetual contract, whose volume per minute often exceeds one million or even ten million US dollars. BitMEX also pays a rebate on pending (maker) orders; although it is small, it has attracted a large number of market makers, which makes the order-book depth very rich: the best bid and ask often carry more than a million dollars' worth of pending orders. Because of this, the traded price usually fluctuates around the minimum tick size of $0.50.

2.BitMEX API frequency limit
The REST API is limited to 300 requests every 5 minutes, roughly one request per second, which is very strict compared with other trading platforms. Once the limit is exceeded, 'Rate limit exceeded' is returned; if you keep exceeding it, the IP may be banned for an hour, and multiple bans in a short time can lead to a one-week ban. For each API request, BitMEX returns header data that shows the current number of remaining requests. In practice, if the API is used properly, the limit will not be exceeded and the headers generally do not need to be checked.
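Since the limit works out to roughly one REST request per second, a simple way to stay under it is to pace the calls yourself. The helper below is only an illustrative sketch (the name pacedRest and the 1100 ms interval are assumptions), reusing the Sleep function that appears elsewhere in this article:
Code:
// Sketch: space REST calls so that no more than ~300 are sent per 5 minutes
var minRestInterval = 1100; // ms between REST calls, slightly above one per second (assumption)
var lastRestCall = 0;

function pacedRest(fn) {
    var now = new Date().getTime();
    var wait = minRestInterval - (now - lastRestCall);
    if (wait > 0) {
        Sleep(wait); // wait out the remainder of the interval
    }
    lastRestCall = new Date().getTime();
    return fn(); // e.g. pacedRest(function() { return exchange.GetPosition(); })
}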

3.Use websocket to get the market quote
The BitMEX REST API is quite restrictive. The official recommendation is to use the websocket protocol as much as possible, and it pushes more data types than the average exchange. Pay attention to the following points:

If the depth data has been pushed for a long time, errors accumulate and it no longer matches the real depth; presumably some depth changes are missed in the push. In general, thanks to the excellent liquidity, subscribing to "ticker" or "trades" is sufficient.
The order detail push misses a lot of updates and is almost unusable.
There is a significant delay in the push of account information, so it is better to use the REST API for it.
When the market is very volatile, the push delay can reach a few seconds.
The following code uses the websocket protocol to obtain market and account information in real time, mainly for market-making strategies. The specific use needs to be performed in the main() function.
Code:
var ticker  = {price:0, buy:0, sell:0, time:0} //Ticker information, the latest price, "buy one" price, "sell one" price, update time
//Account information, respectively, position, buying and selling price, buying and selling quantity, position status, order Id
var info = {position:0, buyPrice:0, sellPrice:0, buyAmount:0, sellAmount:0, buyState:0, sellState:0, buyId:0, sellId:0}
var buyListId = []//Global variables, pre-emptive buying id list, will described below
var sellListId = []
var APIKEY = 'your api id' //Need to fill in the BitMEX API ID here. Note that it is not a key, which is required for websocket protocol authentication.
var expires = parseInt(Date.now() / 1000) + 10
var signature = exchange.HMAC("sha256", "hex", "GET/realtime" + expires, "{{secretkey}}")//The secretkey will be automatically replaced at the bottom level and does not need to be filled in.
var bitmexClient = Dial("wss://www.bitmex.com/realtime", 60)
var auth = JSON.stringify({args: [APIKEY, expires, signature], op: "authKeyExpires"})//Authentication information, otherwise you cannot subscribe to the account
bitmexClient.write(auth)
bitmexClient.write('{"op": "subscribe", "args": ["position","execution","trade:XBTUSD"]}')//Subscribed to positions, order execution and perpetual contract real-time transaction
while(true){
    var data = bitmexClient.read()
    if(data){
        var bitmexData = JSON.parse(data)
        if('table' in bitmexData && bitmexData.table == 'trade'){
            var trades = bitmexData.data
            ticker.price = parseFloat(trades[trades.length-1].price)//The latest traded price; several trades may be pushed at once, taking the last one is enough
            //The "buy one" and "sell one" prices can be inferred from the direction of the latest trade, without subscribing to the depth.
            if(trades[trades.length-1].side == 'Buy'){
                ticker.sell = parseFloat(trades[trades.length-1].price)
                ticker.buy = parseFloat(trades[trades.length-1].price)-0.5
            }else{
                ticker.buy = parseFloat(trades[trades.length-1].price)
                ticker.sell = parseFloat(trades[trades.length-1].price)+0.5
            }
            ticker.time = new Date(trades[trades.length-1].timestamp)//Update time, can be used to estimate the push delay
        }else if('table' in bitmexData && bitmexData.table == 'position'){
            var position = parseInt(bitmexData.data[0].currentQty)
            if(position != info.position){
                Log('Position change: ', position, info.position, '#FF0000@')//Log the position change and push it to WeChat; remove the @ to disable the push
                info.position = position
            }
        }
    }
}
4. Placing order skills
BitMEX officially recommends placing orders with "bulk ordering" and "order amendment". Bulk orders are processed faster because BitMEX performs auditing, risk checking, margin calculation and order commitment in real time, and the rate-limit cost of a bulk request is counted as only one tenth of a normal request. Our order operations should therefore use bulk placement and amendment wherever possible to minimise API usage. Querying order status also consumes API quota; instead, the order status can be inferred from position changes or from a failed amendment.

"Bulk ordering" does not strictly limit the number of orders per request (it just can't be too many); in fact, a single order can also be sent through the bulk interface. Because orders can be amended, we can "pre-place" some orders at prices far away from the market. These orders will not be executed, but when we actually need to place an order we only need to amend the price and quantity of one of them. A failed amendment can also serve as a signal that the order has already been executed.

The following is the specific implementation code:
Code:
// Cancel all orders and reset global variables
function cancelAll(){
    exchange.IO("api","DELETE","/api/v1/order/all","symbol=XBTUSD")//Cancel all orders via the IO extension
    info = {position:0, buyPrice:0, sellPrice:0, buyAmount:0, sellAmount:0, buyState:0, sellState:0, buyId:0, sellId:0}
    buyListId = []
    sellListId = []
}
//Place standby (pre-placed) orders far from the market
function waitOrders(){
    var orders = []
    if(buyListId.length<4){
        //When there are not enough standby buy orders left, place another "bulk" batch
        for(var i=0;i<7;i++){
            //Due to BitMEX restrictions, the price cannot deviate too far and the order quantity cannot be too small; the "execInst" parameter ensures the order only executes as a maker (market-making) order.
            orders.push({symbol:'XBTUSD', side:'Buy', orderQty:100, price:ticker.buy-400+i, execInst:'ParticipateDoNotInitiate'})
        }
    }
    if(sellListId.length<4){
        for(var i=0;i<7;i++){
            orders.push({symbol:'XBTUSD', side:'Sell', orderQty:100, price:ticker.buy+400+i, execInst:'ParticipateDoNotInitiate'})
        }
    }
    if(orders.length>0){
        var param = "orders=" + JSON.stringify(orders);
        var ids = exchange.IO("api", "POST", "/api/v1/order/bulk", param);//Bulk orders submitted here
        for(var i=0;i<ids.length;i++){
            if(ids[i].side == 'Buy'){
                buyListId.push(ids[i].orderID)
            }else{
                sellListId.push(ids[i].orderID)
            }
        }
    }
}
//Modify order function
function amendOrders(order, direction, price, amount, id){
    var param = "orders=" + JSON.stringify(order);
    var ret = exchange.IO("api", "PUT", "/api/v1/order/bulk", param);//Modify one order at a time
    //Modification occurs error
    if(!ret){
        var err = GetLastError()
        //The system is overloaded and the order was not amended; the order id needs to be recycled
        if(err.includes('The system is currently overloaded')){
            if(id){
                if(direction == 'buy'){
                    buyListId.push(id)
                }else{
                    sellListId.push(id)
                }
            }
            Sleep(1000)
            return
        }
        //Illegal order status, indicating that the order to be modified has been completely executed
        else if(err.includes('Invalid ordStatus')){
            Log(order, direction)
            if(direction == 'buy'){
                info.buyId = 0
                info.buyState = 0
                info.buyAmount = 0
                info.buyPrice = 0
            }else{
                info.sellId = 0
                info.sellState = 0
                info.sellAmount = 0
                info.sellPrice = 0
            }
            //Since the push is not timely, update the position with the "rest" protocol here.
            pos = _C(exchange.GetPosition)
            if(pos.length>0){
                info.position = pos[0].Type == 0 ? pos[0].Amount : -pos[0].Amount
            }else{
                info.position = 0
            }
        }
        //The order id is invalid and the order cannot be amended; cancel all orders and reset once
        else if(err.includes('Invalid orderID')){
            cancelAll()
            Log('Invalid orderID,reset once')
        }
        //Exceed the frequency limit, you can continue to try after hibernation
        else if(err.includes('Rate limit exceeded')){
            Sleep(2000)
            return
        }
        //The account is banned; cancel all orders and sleep for a long time to wait for recovery
        else if(err.includes('403 Forbidden')){
            cancelAll()
            Log('403,reset once')
            Sleep(5*60*1000)
        }
    }else{
        //Modify order successfully
        if(direction == 'buy'){
            info.buyState = 1
            info.buyPrice = price
            info.buyAmount = amount
        }else{
            info.sellState = 1
            info.sellPrice = price
            info.sellAmount = amount
        }
    }
}
//Round the price to the 0.5 tick size
function fixSize(num){
    if(num>=_N(num,0)+0.75){
        num = _N(num,0)+1
    }else if(num>=_N(num,0)+0.5){
        num=_N(num,0)+0.5
    }else{
        num=_N(num,0)
    }
    return num
}
//Trading function
function trade(){
    waitOrders()//Check if you need a replacement order
    var buyPrice = fixSize(ticker.buy-5) //For demonstration purposes only, specific transactions should be written by yourself.
    var sellPrice = fixSize(ticker.sell+5)
    var buyAmount =  500
    var sellAmount = 500
    //Modify from an alternate order when there is no order
    if(info.buyState == 0  && buyListId.length > 0){
        info.buyId = buyListId.shift()
        amendOrders([{orderID:info.buyId, price:buyPrice, orderQty:buyAmount}],'buy', buyPrice, buyAmount, info.buyId)
    }
    if(info.sellState == 0 && sellListId.length > 0){
        info.sellId = sellListId.shift()
        amendOrders([{orderID: info.sellId, price:sellPrice, orderQty:sellAmount}],'sell', sellPrice, sellAmount, info.sellId)
    }
    //Existing orders need to change price
    if(buyPrice !=  info.buyPrice && info.buyState == 1){
        amendOrders([{orderID:info.buyId, price:buyPrice, orderQty:buyAmount}],'buy', buyPrice, buyAmount)
    }
    if(sellPrice != info.sellPrice && info.sellState == 1){
        amendOrders([{orderID:info.sellId, price:sellPrice, orderQty:sellAmount}],'sell', sellPrice, sellAmount)
    }
}
5. Others
BitMEX's servers are hosted on Amazon AWS in Dublin, Ireland. If you run the strategy on an AWS cloud server in Dublin, the ping is below 1 ms, but there can still be push delays, and the overload problem cannot be solved this way. In addition, when logging in to the account, the server or proxy must not be located in the United States or other regions where cryptocurrency trading is not allowed; due to regulation, such accounts will be banned.

The code in this article has been adapted from my personal strategy and is not guaranteed to be completely correct; it is for reference only. The market-data code should be executed in the main function, the trading-related code should be placed before the main function, and the trade() function should be called whenever a market quote is pushed.
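Putting it together, a minimal skeleton (an assumption of how the pieces could be wired up, not the author's full strategy) might look like this:
Code:
// Sketch of the overall wiring: market data drives trade() from inside main()
function main() {
    cancelAll()                              // start from a clean state
    // ... create the websocket connection and subscriptions from section 3 here ...
    while (true) {
        var data = bitmexClient.read()       // blocking read of the next push
        if (!data) {
            continue
        }
        var msg = JSON.parse(data)
        if ('table' in msg && msg.table == 'trade') {
            // ... update the global ticker as in section 3 ...
            trade()                          // react to the fresh quote
        } else if ('table' in msg && msg.table == 'position') {
            // ... update info.position as in section 3 ...
        }
    }
}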
Article originally from FMZ.COM (a platform where you can create your own trading bot with Python, JavaScript and C++).
3  Economy / Trading Discussion / SUCCESSFUL BACKTESTING OF ALGORITHMIC TRADING STRATEGIES - PART II on: March 21, 2019, 07:01:21 AM
Article originally from FMZ.COM (a platform where you can create your own trading bot with Python, JavaScript and C++).

In the first article on successful backtesting we discussed statistical and behavioural biases that affect our backtest performance. We also discussed software packages for backtesting, including Excel, MATLAB, Python, R and C++. In this article we will consider how to incorporate transaction costs, as well as certain decisions that need to be made when creating a backtest engine, such as order types and frequency of data.


Transaction Costs

One of the most prevalent beginner mistakes when implementing trading models is to neglect (or grossly underestimate) the effects of transaction costs on a strategy. Though it is often assumed that transaction costs only reflect broker commissions, there are in fact many other ways that costs can be accrued on a trading model. The three main types of costs that must be considered include:

 

Commissions/Fees

The most direct form of transaction costs incurred by an algorithmic trading strategy are commissions and fees. All strategies require some form of access to an exchange, either directly or through a brokerage intermediary ("the broker"). These services incur an incremental cost with each trade, known as commission.

Brokers generally provide many services, although quantitative algorithms only really make use of the exchange infrastructure. Hence brokerage commissions are often small on per trade basis. Brokers also charge fees, which are costs incurred to clear and settle trades. Further to this are taxes imposed by regional or national governments. For instance, in the UK there is a stamp duty to pay on equities transactions. Since commissions, fees and taxes are generally fixed, they are relatively straightforward to implement in a backtest engine (see below).

 

Slippage/Latency

Slippage is the difference in price achieved between the time when a trading system decides to transact and the time when a transaction is actually carried out at an exchange. Slippage is a considerable component of transaction costs and can make the difference between a very profitable strategy and one that performs poorly. Slippage is a function of the underlying asset volatility, the latency between the trading system and the exchange and the type of strategy being carried out.

An instrument with higher volatility is more likely to be moving and so prices between signal and execution can differ substantially. Latency is defined as the time difference between signal generation and point of execution. Higher frequency strategies are more sensitive to latency issues and improvements of milliseconds on this latency can make all the difference towards profitability. The type of strategy is also important. Momentum systems suffer more from slippage on average because they are trying to purchase instruments that are already moving in the forecast direction. The opposite is true for mean-reverting strategies as these strategies are moving in a direction opposing the trade.

 

Market Impact/Liquidity

Market impact is the cost incurred to traders due to the supply/demand dynamics of the exchange (and asset) through which they are trying to trade. A large order on a relatively illiquid asset is likely to move the market substantially as the trade will need to access a large component of the current supply. To counter this, large block trades are broken down into smaller "chunks" which are transacted periodically, as and when new liquidity arrives at the exchange. On the opposite end, for highly liquid instruments such as the S&P500 E-Mini index futures contract, low volume trades are unlikely to adjust the "current price" in any great amount.

More illiquid assets are characterised by a larger spread, which is the difference between the current bid and ask prices on the limit order book. This spread is an additional transaction cost associated with any trade. Spread is a very important component of the total transaction cost - as evidenced by the myriad of UK spread-betting firms whose advertising campaigns express the "tightness" of their spreads for heavily traded instruments.

 

Transaction Cost Models

In order to model the above costs successfully in a backtesting system, transaction cost models of varying complexity have been introduced. They range from simple flat modelling through to a non-linear quadratic approximation. Here we will outline the advantages and disadvantages of each model:

Flat/Fixed Transaction Cost Models

Flat transaction costs are the simplest form of transaction cost modelling. They assume a fixed cost associated with each trade. Thus they best represent the concept of brokerage commissions and fees. They are not very accurate for modelling more complex behaviour such as slippage or market impact. In fact, they do not consider asset volatility or liquidity at all. Their main benefit is that they are computationally straightforward to implement. However they are likely to significantly under or over estimate transaction costs depending upon the strategy being employed. Thus they are rarely used in practice.
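To make this concrete, a flat-cost model amounts to nothing more than subtracting a constant per trade. Here is a minimal sketch in the JavaScript style used earlier in this thread (the fee value is an arbitrary assumption):
Code:
// Minimal sketch of a flat/fixed transaction cost model: every trade costs the same amount
function flatCost(fixedFee) {
    // fixedFee: cost per trade in account currency (assumed value below)
    return function(price, quantity) {
        return fixedFee; // independent of size, volatility and liquidity
    };
}

// Usage: subtract the cost from each simulated fill in the backtest loop
var costModel = flatCost(1.5);
var netPnl = 100.0 - costModel(7400, 0.05);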

 

Linear/Piecewise Linear/Quadratic Transaction Cost Models

More advanced transaction cost models start with linear models, continue with piece-wise linear models and conclude with quadratic models. They lie on a spectrum of least to most accurate, albeit with least to greatest implementation effort. Since slippage and market impact are inherently non-linear phenomena quadratic functions are the most accurate at modelling these dynamics. Quadratic transaction cost models are much harder to implement and can take far longer to compute than for simpler flat or linear models, but they are often used in practice.
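As a rough illustration of how these three families differ, the sketch below uses placeholder coefficients (a, b, c and the breakpoint are assumptions to be fitted to your own fill data):
Code:
// Illustrative linear, piecewise-linear and quadratic cost models; 'size' is the traded notional
function linearCost(size, a) {
    return a * size;                                   // cost grows proportionally with trade size
}

function piecewiseLinearCost(size, breakpoint, a1, a2) {
    if (size <= breakpoint) {
        return a1 * size;                              // cheaper regime for small trades
    }
    return a1 * breakpoint + a2 * (size - breakpoint); // steeper slope beyond the breakpoint
}

function quadraticCost(size, a, b, c) {
    return a + b * size + c * size * size;             // non-linear impact dominates for large trades
}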

Algorithmic traders also attempt to make use of actual historical transaction costs for their strategies as inputs to their current transaction models to make them more accurate. This is tricky business and often verges on the complicated areas of modelling volatility, slippage and market impact. However, if the trading strategy is transacting large volumes over short time periods, then accurate estimates of the incurred transaction costs can have a significant effect on the strategy bottom-line and so it is worth the effort to invest in researching these models.

 

Strategy Backtest Implementation Issues

While transaction costs are a very important aspect of successful backtesting implementations, there are many other issues that can affect strategy performance.

 

Trade Order Types

One choice that an algorithmic trader must make is how and when to make use of the different exchange orders available. This choice usually falls into the realm of the execution system, but we will consider it here as it can greatly affect strategy backtest performance. There are two types of order that can be carried out: market orders and limit orders.

A market order executes a trade immediately, irrespective of available prices. Thus large trades executed as market orders will often get a mixture of prices as each subsequent limit order on the opposing side is filled. Market orders are considered aggressive orders since they will almost certainly be filled, albeit with a potentially unknown cost.

Limit orders provide a mechanism for the strategy to determine the worst price at which the trade will get executed, with the caveat that the trade may not get filled partially or fully. Limit orders are considered passive orders since they are often unfilled, but when they are a price is guaranteed. An individual exchange's collection of limit orders is known as the limit order book, which is essentially a queue of buy and sell orders at certain sizes and prices.

When backtesting, it is essential to model the effects of using market or limit orders correctly. For high-frequency strategies in particular, backtests can significantly outperform live trading if the effects of market impact and the limit order book are not modelled accurately.
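When backtesting on bar data, one simple (and necessarily approximate) way to model the two order types is sketched below: a market order fills immediately at an assumed price plus slippage, while a limit order fills only if the bar's range crosses the limit price (the field names Open/High/Low are assumptions about the bar format):
Code:
// Rough sketch of simulating fills on OHLC bars
function fillMarketOrder(bar, side, slippage) {
    // Market order: assume a fill at the open plus/minus an assumed slippage amount
    return side == 'buy' ? bar.Open + slippage : bar.Open - slippage;
}

function fillLimitOrder(bar, side, limitPrice) {
    // Limit buy fills only if the bar traded at or below the limit; limit sell at or above
    if (side == 'buy' && bar.Low <= limitPrice) {
        return limitPrice;
    }
    if (side == 'sell' && bar.High >= limitPrice) {
        return limitPrice;
    }
    return null; // order not filled on this bar
}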

 

OHLC Data Idiosyncrasies

There are particular issues related to backtesting strategies when making use of daily data in the form of Open-High-Low-Close (OHLC) figures, especially for equities. Note that this is precisely the form of data given out by Yahoo Finance, which is a very common source of data for retail algorithmic traders!

Cheap or free datasets, while suffering from survivorship bias (which we have already discussed in Part I), are also often composite price feeds from multiple exchanges. This means that the extreme points (i.e. the open, close, high and low) of the data are very susceptible to "outlying" values due to small orders at regional exchanges. Further, these values are also sometimes more likely to be tick-errors that have yet to be removed from the dataset.

This means that if your trading strategy makes extensive use of any of the OHLC points specifically, backtest performance can differ from live performance as orders might be routed to different exchanges depending upon your broker and your available access to liquidity. The only way to resolve these problems is to make use of higher frequency data or obtain data directly from an individual exchange itself, rather than a cheaper composite feed.

In the next couple of articles we will consider performance measurement of the backtest, as well as a real example of a backtesting algorithm, with many of the above effects included.

 

Article originally from FMZ.COM (a platform where you can create your own trading bot with Python, JavaScript and C++).
4  Economy / Trading Discussion / Successful Backtesting of Algorithmic Trading Strategies - Part I on: March 20, 2019, 09:07:04 AM
Article originally from FMZ.COM (a platform where you can create your own trading bot with Python, JavaScript and C++).

This article continues the series on quantitative trading, which started with the Beginner's Guide and Strategy Identification. Both of these longer, more involved articles have been very popular so I'll continue in this vein and provide detail on the topic of strategy backtesting.

Algorithmic backtesting requires knowledge of many areas, including psychology, mathematics, statistics, software development and market/exchange microstructure. I couldn't hope to cover all of those topics in one article, so I'm going to split them into two or three smaller pieces. What will we discuss in this section? I'll begin by defining backtesting and then I will describe the basics of how it is carried out. Then I will elucidate upon the biases we touched upon in the Beginner's Guide to Quantitative Trading. Next I will present a comparison of the various available backtesting software options.

In subsequent articles we will look at the details of strategy implementations that are often barely mentioned or ignored. We will also consider how to make the backtesting process more realistic by including the idiosyncrasies of a trading exchange. Then we will discuss transaction costs and how to correctly model them in a backtest setting. We will end with a discussion on the performance of our backtests and finally provide an example of a common quant strategy, known as a mean-reverting pairs trade.

Let's begin by discussing what backtesting is and why we should carry it out in our algorithmic trading.

What is Backtesting?
Algorithmic trading stands apart from other types of investment classes because we can more reliably provide expectations about future performance from past performance, as a consequence of abundant data availability. The process by which this is carried out is known as backtesting.

In simple terms, backtesting is carried out by exposing your particular strategy algorithm to a stream of historical financial data, which leads to a set of trading signals. Each trade (which we will mean here to be a 'round-trip' of two signals) will have an associated profit or loss. The accumulation of this profit/loss over the duration of your strategy backtest will lead to the total profit and loss (also known as the 'P&L' or 'PnL'). That is the essence of the idea, although of course the "devil is always in the details"!
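In its simplest form this is just a loop over historical bars that turns signals into round-trip trades and accumulates their P&L. A bare-bones sketch (the signal function is an assumed user-supplied input) looks like this:
Code:
// Bare-bones backtest loop: accumulate P&L over round-trip trades (illustration only)
function backtest(bars, signal) {
    // bars: array of {Close: ...}; signal(i, bars) returns 'long' or 'flat'
    var pnl = 0;
    var entryPrice = null;
    for (var i = 1; i < bars.length; i++) {
        var s = signal(i, bars);                // decision must use data up to bar i only
        if (s == 'long' && entryPrice === null) {
            entryPrice = bars[i].Close;         // open the round trip
        } else if (s == 'flat' && entryPrice !== null) {
            pnl += bars[i].Close - entryPrice;  // close the round trip and book the P&L
            entryPrice = null;
        }
    }
    return pnl;
}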

What are key reasons for backtesting an algorithmic strategy?

Filtration - If you recall from the article on Strategy Identification, our goal at the initial research stage was to set up a strategy pipeline and then filter out any strategy that did not meet certain criteria. Backtesting provides us with another filtration mechanism, as we can eliminate strategies that do not meet our performance needs.
Modelling - Backtesting allows us to (safely!) test new models of certain market phenomena, such as transaction costs, order routing, latency, liquidity or other market microstructure issues.
Optimisation - Although strategy optimisation is fraught with biases, backtesting allows us to increase the performance of a strategy by modifying the quantity or values of the parameters associated with that strategy and recalculating its performance.
Verification - Our strategies are often sourced externally, via our strategy pipeline. Backtesting a strategy ensures that it has not been incorrectly implemented. Although we will rarely have access to the signals generated by external strategies, we will often have access to the performance metrics such as the Sharpe Ratio and Drawdown characteristics. Thus we can compare them with our own implementation.
Backtesting provides a host of advantages for algorithmic trading. However, it is not always possible to straightforwardly backtest a strategy. In general, as the frequency of the strategy increases, it becomes harder to correctly model the microstructure effects of the market and exchanges. This leads to less reliable backtests and thus a trickier evaluation of a chosen strategy. This is a particular problem where the execution system is the key to the strategy performance, as with ultra-high frequency algorithms.

Unfortunately, backtesting is fraught with biases of all types. We have touched upon some of these issues in previous articles, but we will now discuss them in depth.

Biases Affecting Strategy Backtests
There are many biases that can affect the performance of a backtested strategy. Unfortunately, these biases have a tendency to inflate the performance rather than detract from it. Thus you should always consider a backtest to be an idealised upper bound on the actual performance of the strategy. It is almost impossible to eliminate biases from algorithmic trading so it is our job to minimise them as best we can in order to make informed decisions about our algorithmic strategies.

There are four major biases that I wish to discuss: Optimisation Bias, Look-Ahead Bias, Survivorship Bias and Psychological Tolerance Bias.

Optimisation Bias
This is probably the most insidious of all backtest biases. It involves adjusting or introducing additional trading parameters until the strategy performance on the backtest data set is very attractive. However, once live the performance of the strategy can be markedly different. Another name for this bias is "curve fitting" or "data-snooping bias".

Optimisation bias is hard to eliminate as algorithmic strategies often involve many parameters. "Parameters" in this instance might be the entry/exit criteria, look-back periods, averaging periods (i.e the moving average smoothing parameter) or volatility measurement frequency. Optimisation bias can be minimised by keeping the number of parameters to a minimum and increasing the quantity of data points in the training set. In fact, one must also be careful of the latter as older training points can be subject to a prior regime (such as a regulatory environment) and thus may not be relevant to your current strategy.

One method to help mitigate this bias is to perform a sensitivity analysis. This means varying the parameters incrementally and plotting a "surface" of performance. Sound, fundamental reasoning for parameter choices should, with all other factors considered, lead to a smoother parameter surface. If you have a very jumpy performance surface, it often means that a parameter is not reflecting a real phenomenon and is instead an artefact of the test data. There is a vast literature on multi-dimensional optimisation algorithms and it is a highly active area of research. I won't dwell on it here, but keep it in the back of your mind when you find a strategy with a fantastic backtest!
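Such a sweep is easy to automate: evaluate the backtest over a grid of parameter values and record the performance at each point; large jumps between neighbouring cells are a warning sign. A minimal sketch (runBacktest is an assumed user-supplied function returning, say, a Sharpe ratio):
Code:
// Sketch of a two-parameter sensitivity sweep; runBacktest(p1, p2) is assumed to exist
function sensitivitySurface(p1Values, p2Values, runBacktest) {
    var surface = [];
    for (var i = 0; i < p1Values.length; i++) {
        var row = [];
        for (var j = 0; j < p2Values.length; j++) {
            row.push(runBacktest(p1Values[i], p2Values[j])); // performance at this grid point
        }
        surface.push(row);
    }
    return surface; // surface[i][j] = performance at (p1Values[i], p2Values[j])
}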

Look-Ahead Bias
Look-ahead bias is introduced into a backtesting system when future data is accidentally included at a point in the simulation where that data would not have actually been available. If we are running the backtest chronologically and we reach time point N, then look-ahead bias occurs if data is included for any point N+k, where k>0. Look-ahead bias errors can be incredibly subtle. Here are three examples of how look-ahead bias can be introduced:

Technical Bugs - Arrays/vectors in code often have iterators or index variables. Incorrect offsets of these indices can lead to a look-ahead bias by incorporating data at N+k for non-zero k.
Parameter Calculation - Another common example of look-ahead bias occurs when calculating optimal strategy parameters, such as with linear regressions between two time series. If the whole data set (including future data) is used to calculate the regression coefficients, and thus retroactively applied to a trading strategy for optimisation purposes, then future data is being incorporated and a look-ahead bias exists.
Maxima/Minima - Certain trading strategies make use of extreme values in any time period, such as incorporating the high or low prices in OHLC data. However, since these maximal/minimal values can only be calculated at the end of a time period, a look-ahead bias is introduced if these values are used -during- the current period. It is always necessary to lag high/low values by at least one period in any trading strategy making use of them.
As with optimisation bias, one must be extremely careful to avoid its introduction. It is often the main reason why trading strategies underperform their backtests significantly in "live trading".
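For the Maxima/Minima case in particular, the fix is to use only extremes from completed periods. A small sketch (the bar field name High is an assumption about the data format):
Code:
// Sketch: avoid look-ahead bias by lagging the high by at least one bar
// At bar i, only bars[0..i-1] are actually known when the period begins
function laggedHigh(bars, i, lookback) {
    var h = -Infinity;
    for (var k = Math.max(0, i - lookback); k < i; k++) { // strictly before bar i
        h = Math.max(h, bars[k].High);
    }
    return h;
}
// Using bars[i].High here instead would leak the current bar's extreme, which is only
// known at the end of the period, into a decision made during that period.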

Survivorship Bias
Survivorship bias is a particularly dangerous phenomenon and can lead to significantly inflated performance for certain strategy types. It occurs when strategies are tested on datasets that do not include the full universe of prior assets that may have been chosen at a particular point in time, but only consider those that have "survived" to the current time.

As an example, consider testing a strategy on a random selection of equities before and after the 2001 market crash. Some technology stocks went bankrupt, while others managed to stay afloat and even prospered. If we had restricted this strategy only to stocks which made it through the market drawdown period, we would be introducing a survivorship bias because they have already demonstrated their success to us. In fact, this is just another specific case of look-ahead bias, as future information is being incorporated into past analysis.

There are two main ways to mitigate survivorship bias in your strategy backtests:

Survivorship Bias Free Datasets - In the case of equity data it is possible to purchase datasets that include delisted entities, although they are not cheap and only tend to be utilised by institutional firms. In particular, Yahoo Finance data is NOT survivorship bias free, and this is commonly used by many retail algo traders. One can also trade on asset classes that are not prone to survivorship bias, such as certain commodities (and their future derivatives).
Use More Recent Data - In the case of equities, utilising a more recent data set mitigates the possibility that the stock selection chosen is weighted to "survivors", simply as there is less likelihood of overall stock delisting in shorter time periods. One can also start building a personal survivorship-bias free dataset by collecting data from current point onward. After 3-4 years, you will have a solid survivorship-bias free set of equities data with which to backtest further strategies.
We will now consider certain psychological phenomena that can influence your trading performance.

Psychological Tolerance Bias
This particular phenomenon is not often discussed in the context of quantitative trading. However, it is discussed extensively in regard to more discretionary trading methods. It has various names, but I've decided to call it "psychological tolerance bias" because it captures the essence of the problem. When creating backtests over a period of 5 years or more, it is easy to look at an upwardly trending equity curve, calculate the compounded annual return, Sharpe ratio and even drawdown characteristics and be satisfied with the results. As an example, the strategy might possess a maximum relative drawdown of 25% and a maximum drawdown duration of 4 months. This would not be atypical for a momentum strategy. It is straightforward to convince oneself that it is easy to tolerate such periods of losses because the overall picture is rosy. However, in practice, it is far harder!

If historical drawdowns of 25% or more occur in the backtests, then in all likelihood you will see periods of similar drawdown in live trading. These periods of drawdown are psychologically difficult to endure. I have observed first hand what an extended drawdown can be like, in an institutional setting, and it is not pleasant - even if the backtests suggest such periods will occur. The reason I have termed it a "bias" is that often a strategy which would otherwise be successful is stopped from trading during times of extended drawdown and thus will lead to significant underperformance compared to a backtest. Thus, even though the strategy is algorithmic in nature, psychological factors can still have a heavy influence on profitability. The takeaway is to ensure that if you see drawdowns of a certain percentage and duration in the backtests, then you should expect them to occur in live trading environments, and will need to persevere in order to reach profitability once more.

Software Packages for Backtesting
The software landscape for strategy backtesting is vast. Solutions range from fully-integrated institutional grade sophisticated software through to programming languages such as C++, Python and R where nearly everything must be written from scratch (or suitable 'plugins' obtained). As quant traders we are interested in the balance of being able to "own" our trading technology stack versus the speed and reliability of our development methodology. Here are the key considerations for software choice:

Programming Skill - The choice of environment will in a large part come down to your ability to program software. I would argue that being in control of the total stack will have a greater effect on your long term P&L than outsourcing as much as possible to vendor software. This is due to the downside risk of having external bugs or idiosyncrasies that you are unable to fix in vendor software, which would otherwise be easily remedied if you had more control over your "tech stack". You also want an environment that strikes the right balance between productivity, library availability and speed of execution. I make my own personal recommendation below.
Execution Capability/Broker Interaction - Certain backtesting software, such as Tradestation, ties in directly with a brokerage. I am not a fan of this approach as reducing transaction costs is often a big component of getting a higher Sharpe ratio. If you're tied into a particular broker (and Tradestation "forces" you to do this), then you will have a harder time transitioning to new software (or a new broker) if the need arises. Interactive Brokers provide an API which is robust, albeit with a slightly obtuse interface.
Customisation - An environment like MATLAB or Python gives you a great deal of flexibility when creating algo strategies as they provide fantastic libraries for nearly any mathematical operation imaginable, but also allow extensive customisation where necessary.
Strategy Complexity - Certain software just isn't cut out for heavy number crunching or mathematical complexity. Excel is one such piece of software. While it is good for simpler strategies, it cannot really cope with numerous assets or more complicated algorithms, at speed.
Bias Minimisation - Does a particular piece of software or data lend itself more to trading biases? You need to make sure that if you want to create all the functionality yourself, that you don't introduce bugs which can lead to biases.
Speed of Development - One shouldn't have to spend months and months implementing a backtest engine. Prototyping should only take a few weeks. Make sure that your software is not hindering your progress to any great extent, just to grab a few extra percentage points of execution speed. C++ is the "elephant in the room" here!
Speed of Execution - If your strategy is completely dependent upon execution timeliness (as in HFT/UHFT) then a language such as C or C++ will be necessary. However, you will be verging on Linux kernel optimisation and FPGA usage for these domains, which is outside the scope of this article!
Cost - Many of the software environments that you can program algorithmic trading strategies with are completely free and open source. In fact, many hedge funds make use of open source software for their entire algo trading stacks. In addition, Excel and MATLAB are both relatively cheap and there are even free alternatives to each.
Now that we have listed the criteria with which we need to choose our software infrastructure, I want to run through some of the more popular packages and how they compare:

Note: I am only going to include software that is available to most retail practitioners and software developers, as this is the readership of the site. While other software is available such as the more institutional grade tools, I feel these are too expensive to be effectively used in a retail setting and I personally have no experience with them.

Backtesting Software Comparison

MS Excel
Description: WYSIWYG (what-you-see-is-what-you-get) spreadsheet software. Extremely widespread in the financial industry. Data and algorithm are tightly coupled.

Execution: Yes, Excel can be tied into most brokerages.

Customisation: VBA macros allow more advanced functionality at the expense of hiding implementation.

Strategy Complexity: More advanced statistical tools are harder to implement as are strategies with many hundreds of assets.

Bias Minimisation: Look-ahead bias is easy to detect via cell-highlighting functionality (assuming no VBA).

Development Speed: Quick to implement basic strategies.

Execution Speed: Slow execution speed - suitable only for lower-frequency strategies.

Cost: Cheap or free (depending upon license).

Alternatives: OpenOffice

MATLAB
Description: Programming environment originally designed for computational mathematics, physics and engineering. Very well suited to vectorised operations and those involving numerical linear algebra. Provides a wide array of plugins for quant trading. In widespread use in quantitative hedge funds.

Execution: No native execution capability, MATLAB requires a separate execution system.

Customisation: Huge array of community plugins for nearly all areas of computational mathematics.

Strategy Complexity: Many advanced statistical methods already available and well-tested.

Bias Minimisation: Harder to detect look-ahead bias, requires extensive testing.

Development Speed: Short scripts can create sophisticated backtests easily.

Execution Speed: Assuming a vectorised/parallelised algorithm, MATLAB is highly optimised. Poor for traditional iterated loops.

Cost: ~1,000 USD for a license.

Alternatives: Octave, SciLab

Python
Description: High-level language designed for speed of development. Wide array of libraries for nearly any programmatic task imaginable. Gaining wider acceptance in hedge fund and investment bank community. Not quite as fast as C/C++ for execution speed.

Execution: Python plugins exist for larger brokers, such as Interactive Brokers. Hence backtest and execution system can all be part of the same "tech stack".

Customisation: Python has a very healthy development community and is a mature language. NumPy/SciPy provide fast scientific computing and statistical analysis tools relevant for quant trading.

Strategy Complexity: Many plugins exist for the main algorithms, but not quite as big a quant community as exists for MATLAB.

Bias Minimisation: Same bias minimisation problems exist as for any high level language. Need to be extremely careful about testing.

Development Speed: Python's main advantage is development speed, with robust built-in testing capabilities.

Execution Speed: Not quite as fast as C++, but scientific computing components are optimised and Python can talk to native C code with certain plugins.

Cost: Free/Open Source

Alternatives: Ruby, Erlang, Haskell

R
Description: Environment designed for advanced statistical methods and time series analysis. Wide array of specific statistical, econometric and native graphing toolsets. Large developer community.

Execution: R possesses plugins to some brokers, in particular Interactive Brokers. Thus an end-to-end system can be written entirely in R.

Customisation: R can be customised with any package, but its strengths lie in statistical/econometric domains.

Strategy Complexity: Mostly useful if performing econometric, statistical or machine-learning strategies due to available plugins.

Bias Minimisation: Similar level of bias possibility for any high-level language such as Python or C++. Thus testing must be carried out.

Development Speed: R is rapid for writing strategies based on statistical methods.

Execution Speed: R is slower than C++, but remains relatively optimised for vectorised operations (as with MATLAB).

Cost: Free/Open Source

Alternatives: SPSS, Stata

C++
Description: Mature, high-level language designed for speed of execution. Wide array of quantitative finance and numerical libraries. Harder to debug and often takes longer to implement than Python or MATLAB. Extremely prevalent in both the buy- and sell-side.

Execution: Most brokerage APIs are written in C++ and Java. Thus many plugins exist.

Customisation: C/C++ allows direct access to underlying memory, hence ultra-high frequency strategies can be implemented.

Strategy Complexity: C++ STL provides wide array of optimised algorithms. Nearly any specialised mathematical algorithm possesses a free, open-source C/C++ implementation on the web.

Bias Minimisation: Look-ahead bias can be tricky to eliminate, but no harder than other high-level language. Good debugging tools, but one must be careful when dealing with underlying memory.

Development Speed: C++ is quite verbose compared to Python or MATLAB for the same algorithm. More lines-of-code (LOC) often lead to a greater likelihood of bugs.

Execution Speed: C/C++ has extremely fast execution speed and can be well optimised for specific computational architectures. This is the main reason to utilise it.

Cost: Various compilers: Linux/GCC is free, MS Visual Studio has differing licenses.

Alternatives: C#, Java, Scala

Different strategies will require different software packages. HFT and UHFT strategies will be written in C/C++ (these days they are often carried out on GPUs and FPGAs), whereas low-frequency directional equity strategies are easy to implement in TradeStation, due to the "all in one" nature of the software/brokerage.

My personal preference is for Python as it provides the right degree of customisation, speed of development, testing capability and execution speed for my needs and strategies. If I need anything faster, I can "drop in" to C++ directly from my Python programs. One method favoured by many quant traders is to prototype their strategies in Python and then convert the slower execution sections to C++ in an iterative manner. Eventually the entire algo is written in C++ and can be "left alone to trade"!

In the next few articles on backtesting we will take a look at some particular issues surrounding the implementation of an algorithmic trading backtesting system, as well as how to incorporate the effects of trading exchanges. We will discuss strategy performance measurement and finally conclude with an example strategy.

article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++)
5  Economy / Trading Discussion / Value at Risk (VaR) for Algorithmic Trading Risk Management on: March 20, 2019, 03:54:28 AM
article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++)

Estimating the risk of loss to an algorithmic trading strategy, or portfolio of strategies, is of extreme importance for long-term capital growth. Many techniques for risk management have been developed for use in institutional settings. One technique in particular, known as Value at Risk or VaR, will be the topic of this article.

We will be applying the concept of VaR to a single strategy or a set of strategies in order to help us quantify risk in our trading portfolio. The definition of VaR is as follows:

VaR provides an estimate, under a given degree of confidence, of the size of a loss from a portfolio over a given time period.

In this instance "portfolio" can refer to a single strategy, a group of strategies, a trader's book, a prop desk, a hedge fund or an entire investment bank. The "given degree of confidence" will be a value of, say, 95% or 99%. The "given time period" will be chosen to reflect one that would lead to a minimal market impact if a portfolio were to be liquidated.

For example, a VaR equal to 500,000 USD at 95% confidence level for a time period of a day would simply state that there is a 95% probability of losing no more than 500,000 USD in the following day. Mathematically this is stated as:

P(L≤−5.0×10^5)=0.05
Or, more generally, for loss L exceeding a value VaR with a confidence level c we have:

P(L≤−VaR)=1−c
The "standard" calculation of VaR makes the following assumptions:

Standard Market Conditions - VaR is not supposed to consider extreme events or "tail risk", rather it is supposed to provide the expectation of a loss under normal "day-to-day" operation.
Volatilities and Correlations - VaR requires the volatilities of the assets under consideration, as well as their respective correlations. These two quantities are tricky to estimate and are subject to continual change.
Normality of Returns - VaR, in its standard form, assumes the returns of the asset or portfolio are normally distributed. This leads to more straightforward analytical calculation, but it is quite unrealistic for most assets.
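
The normality assumption in particular is straightforward to sanity-check before relying on the standard calculation. The following minimal sketch (illustrative only, using synthetic fat-tailed returns in place of real data) applies the D'Agostino-Pearson test from SciPy:

Code:
import numpy as np
from scipy.stats import normaltest

# Synthetic "returns" drawn from a fat-tailed Student-t distribution,
# standing in for the daily returns of an asset or strategy
rets = 0.01 * np.random.standard_t(df=3, size=1000)

stat, p_value = normaltest(rets)
if p_value < 0.05:
    print("Returns are unlikely to be normally distributed (p = %.4g)" % p_value)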

Advantages and Disadvantages
VaR is pervasive in the financial industry, hence you should be familiar with the benefits and drawbacks of the technique. Some of the advantages of VaR are as follows:

1. VaR is very straightforward to calculate for individual assets, algo strategies, quant portfolios, hedge funds or even bank prop desks.
2. The time period associated with the VaR can be modified for multiple trading strategies that have different time horizons.
3. Different values of VaR can be associated with different forms of risk, say broken down by asset class or instrument type. This makes it easy to interpret where the majority of portfolio risk may be clustered, for instance.
4. Individual strategies can be constrained, as can entire portfolios, based on their individual VaR.
5. VaR is straightforward to interpret by (potentially) non-technical external investors and fund managers.

However, VaR is not without its disadvantages:

1. VaR does not discuss the magnitude of the expected loss beyond the value of VaR, i.e. it will tell us that we are likely to see a loss exceeding a value, but not how much it exceeds it.
2. It does not take into account extreme events, but only typical market conditions.
3. Since it uses historical data (it is backward-looking) it will not take into account future market regime shifts that can change the volatilities and correlations of assets.

VaR should not be used in isolation. It should always be used with a suite of risk management techniques, such as diversification, optimal portfolio allocation and prudent use of leverage.

Methods of Calculation
As yet we have not discussed the actual calculation of VaR, either in the general case or in a concrete trading example. There are three techniques that will be of interest to us. The first is the variance-covariance method (using normality assumptions), the second is a Monte Carlo method (based on an underlying, potentially non-normal, distribution) and the third is known as historical bootstrapping, which makes use of historical returns information for the assets under consideration.

In this article we will concentrate on the Variance-Covariance Method and in later articles will consider the Monte Carlo and Historical Bootstrap methods.

Variance-Covariance Method
Consider a portfolio of P dollars, with a confidence level c. We are considering daily returns, with asset (or strategy) historical standard deviation σ and mean μ. Then the daily VaR, under the variance-covariance method for a single asset (or strategy) is calculated as:

VaR = P − P(α(1−c) + 1)
Where α is the inverse of the cumulative distribution function of a normal distribution with mean μ and standard deviation σ.

We can use the SciPy and pandas libraries from Python in order to calculate these values. If we set P=10^6 and c=0.99, we can use the SciPy ppf method to generate the values of the inverse cumulative distribution function of a normal distribution with μ and σ obtained from some real financial data, in this case the historical daily returns of Citigroup (we could easily substitute the returns of an algorithmic strategy here):

Code:
# var.py

import datetime
import numpy as np
import pandas.io.data as web
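# NOTE: pandas.io.data has since been split out of pandas into the separate
# pandas-datareader package ("from pandas_datareader import data as web").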
from scipy.stats import norm


def var_cov_var(P, c, mu, sigma):
    """
    Variance-Covariance calculation of daily Value-at-Risk
    using confidence level c, with mean of returns mu
    and standard deviation of returns sigma, on a portfolio
    of value P.
    """
    alpha = norm.ppf(1-c, mu, sigma)
    return P - P*(alpha + 1)

if __name__ == "__main__":
    start = datetime.datetime(2010, 1, 1)
    end = datetime.datetime(2014, 1, 1)

    citi = web.DataReader("C", 'yahoo', start, end)
    citi["rets"] = citi["Adj Close"].pct_change()

    P = 1e6   # 1,000,000 USD
    c = 0.99  # 99% confidence interval
    mu = np.mean(citi["rets"])
    sigma = np.std(citi["rets"])

    var = var_cov_var(P, c, mu, sigma)
    print "Value-at-Risk: $%0.2f" % var

The calculated value of VaR is given by:

Value-at-Risk: $56510.29
VaR is an extremely useful and pervasive technique in all areas of financial management, but it is not without its flaws. We have yet to discuss the actual value of what could be lost in a portfolio, rather just that it may exceed a certain amount some of the time.

In follow-up articles we will not only discuss alternative calculations for VaR, but also outline the concept of Expected Shortfall (also known as Conditional Value at Risk), which provides an answer to how much is likely to be lost.

article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++)
6  Economy / Trading Discussion / Money Management via the Kelly Criterion on: March 19, 2019, 01:45:23 AM
article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++)

Risk and money management are absolutely critical topics in quantitative trading. We have yet to explore these concepts in any reasonable amount of detail beyond stating the different sources of risk that might affect strategy performance. In this article we will be considering a quantitative means of managing account equity in order to maximise long-term account growth and limiting downside risk.

Investor Objectives
It might seem that the only important investor objective is to simply "make as much money as possible". However the reality of long-term trading is more complex. Since market participants have differing risk preferences and constraints there are many objectives that investors may possess.

Many retail traders consider the only goal to be the increase of account equity as much as possible, with little or no consideration given to the "risk" of a strategy. A more sophisticated retail investor would be measuring account drawdowns, but might also be able to stomach quite a drop in equity (say 50%) if they were aware that it was optimal, in the sense of growth rate, in the long term.

An institutional investor would think very differently about risk. It is almost certain that they will have a mandated maximum drawdown (say 20%) and that they would be considering sector allocation and average daily volume limits, which would all be additional constraints on the "optimisation problem" of capital allocation to strategies. These factors might even be more important than maximising the long-term growth rate of the portfolio.

Thus we are in a situation where we can strike a balance between maximising long-term growth rate via leverage and minimising our "risk" by trying to limit the duration and extent of the drawdown. The major tool that will help us achieve this is called the Kelly Criterion.

Kelly Criterion
Within this article the Kelly Criterion is going to be our tool to control leverage of, and allocation towards, a set of algorithmic trading strategies that make up a multi-strategy portfolio.

We will define leverage as the ratio of the size of a portfolio to the actual account equity within that portfolio. To make this clear we can use the analogy of purchasing a house with a mortgage. Your down payment (or "deposit" for those of us in the UK!) constitutes your account equity, while the down payment plus the mortgage value constitutes the equivalent of the size of a portfolio. Thus a down payment of 50,000 USD on a 200,000 USD house (with a mortgage of 150,000 USD) constitutes a leverage of (150000+50000)/50000=4. Thus in this instance you would be 4x leveraged on the house. A margin account portfolio behaves similarly. There is a "cash" component and then more stock can be borrowed on margin, to provide the leverage.

Before we state the Kelly Criterion specifically I want to outline the assumptions that go into its derivation, which have varying degrees of accuracy:

Each algorithmic trading strategy will be assumed to possess a returns stream that is normally distributed (i.e. Gaussian). Further, each strategy has its own fixed mean and standard deviation of returns. The formula assumes that these mean and standard deviation values do not change, i.e. that they are the same in the past as in the future. This is clearly not the case for most strategies, so be aware of this assumption.

The returns being considered here are excess returns, which means they are net of all financing costs such as interest paid on margin and transaction costs. If the strategy is being carried out in an institutional setting, this also means that the returns are net of management and performance fees.

All of the trading profits are reinvested and no withdrawals of equity are carried out. This is clearly not as applicable in an institutional setting where the above mentioned management fees are taken out and investors often make withdrawals.

All of the strategies are statistically independent (there is no correlation between strategies) and thus the covariance matrix between strategy returns is diagonal.

These assumptions are not particularly accurate but we will consider ways to relax them in later articles.

Now we come to the actual Kelly Criterion! Let's imagine that we have a set of N algorithmic trading strategies and we wish to determine both how to apply optimal leverage per strategy in order to maximise growth rate (but minimise drawdowns) and how to allocate capital between each strategy. If we denote the allocation between each strategy i as a vector f of length N, s.t. f=(f1,...,fN), then the Kelly Criterion for optimal allocation to each strategy fi is given by:

fi = μi/σi²

Where μi is the mean excess return and σi is the standard deviation of excess returns for strategy i. This formula essentially describes the optimal leverage that should be applied to each strategy.

While the Kelly Criterion fi gives us the optimal leverage and strategy allocation, we still need to actually calculate our expected long-term compounded growth rate of the portfolio, which we denote by g. The formula for this is given by:

g = r + S²/2

Where r is the risk-free interest rate, which is the rate at which you can borrow from the broker, and S is the annualised Sharpe Ratio of the strategy. The latter is calculated via the annualised mean excess returns divided by the annualised standard deviation of excess returns. See this article for more details.

Note: If you would like to read a more mathematical approach to the Kelly formula, please take a look at Ed Thorp's paper on the topic: The Kelly Criterion in Blackjack Sports Betting, And The Stock Market (2007).

A Realistic Example
Let's consider an example in the single strategy case (i=1). Suppose we go long a mythical stock XYZ that has a mean annual return of m=10.7% and an annual standard deviation of σ=12.4%. In addition suppose we are able to borrow at a risk-free interest rate of r=3.0%. This implies that the mean excess returns are μ=m−r=10.7−3.0=7.7%. This gives us a Sharpe Ratio of S=0.077/0.124=0.62.

With this we can calculate the optimal Kelly leverage via f = μ/σ² = 0.077/0.124² = 5.01. Thus the Kelly leverage says that for a 100,000 USD portfolio we should borrow an additional 401,000 USD to have a total portfolio value of 501,000 USD. In practice it is unlikely that our brokerage would let us trade with such substantial margin, so the Kelly Criterion would need to be adjusted.

We can then use the Sharpe ratio S and the interest rate r to calculate g, the expected long-term compounded growth rate: g = r + S²/2 = 0.03 + 0.62²/2 ≈ 0.22, i.e. 22%. Thus we should expect a return of approximately 22% a year from this strategy.
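
The arithmetic of this example can be reproduced in a few lines of Python (a minimal sketch using only the figures stated above):

Code:
# Kelly leverage and long-term growth rate for the single-strategy example
m = 0.107      # mean annual return of the mythical stock XYZ
sigma = 0.124  # annual standard deviation of returns
r = 0.03       # risk-free borrowing rate

mu = m - r               # mean excess return: 0.077
sharpe = mu / sigma      # annualised Sharpe ratio: ~0.62
f = mu / sigma**2        # optimal Kelly leverage: ~5.01
g = r + sharpe**2 / 2    # long-term compounded growth rate: ~0.22

print("Kelly leverage f = %.2f" % f)
print("Expected growth rate g = %.1f%%" % (100 * g))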

Kelly Criterion in Practice
It is important to be aware that the Kelly Criterion requires a continuous rebalancing of capital allocation in order to remain valid. Clearly this is not possible in the discrete setting of actual trading and so an approximation must be made. The standard "rule of thumb" here is to update the Kelly allocation once a day. Further, the Kelly Criterion itself should be recalculated periodically, using a trailing mean and standard deviation with a lookback window. Again, for a strategy that trades roughly once a day, this lookback should be set to be on the order of 3-6 months of daily returns.

Here is an example of rebalancing a portfolio under the Kelly Criterion, which can lead to some counter-intuitive behaviour. Let's suppose we have the strategy described above. We have used the Kelly Criterion to borrow cash to size our portfolio to 501,000 USD. Let's assume we make a healthy 5% return on the following day, which boosts our portfolio size to 526,050 USD. The Kelly Criterion tells us that we should borrow more to keep the same leverage factor of 5.01. In particular our account equity is 125,050 USD on a portfolio of 526,050 USD, which means that the current leverage factor is 4.21. To increase it to 5.01, we need to borrow an additional 100,450.5 USD in order to increase our portfolio size to 626,500.5 USD (this is 5.01×125050).

Now consider that the following day we lose 10% on our portfolio (ouch!). This means that the total portfolio size is now 563,850.45 USD (626500.5×0.9). Our total account equity is now 62,399.95 USD (125050−626500.5×0.1). This means our current leverage factor is 563850.45/62399.95=9.04. Hence we need to reduce our portfolio by selling 251,226.70 USD of stock in order to reduce our total portfolio value to 312,623.75 USD, such that we have a leverage of 5.01 again (312623.75/62399.95=5.01).

Hence we have bought into a profit and sold into a loss. This process of selling into a loss may be extremely emotionally difficult, but it is mathematically the "correct" thing to do, assuming that the assumptions of Kelly have been met! It is the approach to follow in order to maximise long-term compounded growth rate.

You may have noticed that the absolute values of money being re-allocated between days were rather severe. This is a consequence of both the artificial nature of the example and the extensive leverage employed. A 10% loss in a single day is not particularly common in higher-frequency algorithmic trading, but it does serve to show how large the effect of extensive leverage can be in absolute terms.
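
The bookkeeping in the rebalancing example above can be condensed into a short sketch (illustrative only; the leverage factor is simply held fixed at the Kelly value of 5.01):

Code:
# Rebalance the portfolio size to maintain a constant Kelly leverage factor
def target_portfolio(equity, leverage=5.01):
    """Portfolio size required to hold the given leverage on current equity."""
    return leverage * equity

equity = 100000.0
portfolio = target_portfolio(equity)    # 501,000 USD to start

# Day 1: the portfolio gains 5%
equity += 0.05 * portfolio              # equity -> 125,050 USD
print("Borrow %.2f USD" % (target_portfolio(equity) - 1.05 * portfolio))
portfolio = target_portfolio(equity)    # ~626,500 USD

# Day 2: the portfolio loses 10%
equity -= 0.10 * portfolio              # equity -> ~62,400 USD
print("Sell %.2f USD of stock" % (0.90 * portfolio - target_portfolio(equity)))
portfolio = target_portfolio(equity)    # ~312,624 USD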

Since the estimation of means and standard deviations are always subject to uncertainty, in practice many traders tend to use a more conservative leverage regime such as the Kelly Criterion divided by two, affectionately known as "half-Kelly". The Kelly Criterion should really be considered as an upper bound of leverage to use, rather than a direct specification. If this advice is not heeded then using the direct Kelly value can lead to ruin (i.e. account equity disappearing to zero) due to the non-Gaussian nature of the strategy returns.

Should You Use The Kelly Criterion?
Every algorithmic trader is different and the same is true of risk preferences. When choosing to employ a leverage strategy (of which the Kelly Criterion is one example) you should consider the risk mandates that you need to work under. In a retail environment you are able to set your own maximum drawdown limits and thus your leverage can be increased. In an institutional setting you will need to consider risk from a very different perspective and the leverage factor will be one component of a much larger framework, usually under many other constraints.

In later articles we will consider other forms of money (and risk!) management, some of which can help with the additional constraints discussed above.

article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++)
7  Economy / Trading Discussion / Sharpe Ratio for Algorithmic Trading Performance Measurement on: March 18, 2019, 07:05:59 AM
article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++)


When carrying out an algorithmic trading strategy it is tempting to consider the annualised return as the most useful performance metric. However, there are many flaws with using this measure in isolation. The calculation of returns for certain strategies is not completely straightforward. This is especially true for strategies that aren't directional such as market-neutral variants or strategies which make use of leverage. These factors make it hard to compare two strategies based solely upon their returns.

In addition, if we are presented with two strategies possessing identical returns how do we know which one contains more risk? Further, what do we even mean by "more risk"? In finance, we are often concerned with volatility of returns and periods of drawdown. Thus if one of these strategies has a significantly higher volatility of returns we would likely find it less attractive, despite the fact that its historical returns might be similar if not identical.

These problems of strategy comparison and risk assessment motivate the use of the Sharpe Ratio.

Definition of the Sharpe Ratio
William Forsyth Sharpe is a Nobel-prize winning economist, who helped create the Capital Asset Pricing Model (CAPM) and developed the Sharpe Ratio in 1966 (later updated in 1994).

The Sharpe Ratio S is defined by the following relation:

S = E[Ra − Rb] / sqrt(Var[Ra − Rb])

Where Ra is the period return of the asset or strategy and Rb is the period return of a suitable benchmark.

The ratio compares the mean average of the excess returns of the asset or strategy with the standard deviation of those returns. Thus a lower volatility of returns will lead to a greater Sharpe ratio, assuming identical returns.

The "Sharpe Ratio" often quoted by those carrying out trading strategies is the annualised Sharpe, the calculation of which depends upon the trading period of which the returns are measured. Assuming there are N trading periods in a year, the annualised Sharpe is calculated as follows:

SA = sqrt(N) × E[Ra − Rb] / sqrt(Var[Ra − Rb])

Note that the Sharpe ratio itself must be calculated from the returns of that particular time-period type. For a strategy based on a trading period of days, N=252 (as there are 252 trading days in a year, not 365), and Ra, Rb must be the daily returns. Similarly for hours N=252×6.5=1638, not N=252×24=6048, since there are only 6.5 hours in a trading day.

Benchmark Inclusion
The formula for the Sharpe ratio above alludes to the use of a benchmark. A benchmark is used as a "yardstick" or a "hurdle" that a particular strategy must overcome for it to be worth considering. For instance, a simple long-only strategy using US large-cap equities should hope to beat the S&P500 index on average, or to match it with less volatility.

The choice of benchmark can sometimes be unclear. For instance, should a sector Exchange Traded Fund (ETF) be utilised as a performance benchmark for individual equities, or the S&P500 itself? Why not the Russell 3000? Equally, should a hedge fund strategy be benchmarking itself against a market index or an index of other hedge funds? There is also the complication of the "risk free rate". Should domestic government bonds be used? A basket of international bonds? Short-term or long-term bills? A mixture? Clearly there are plenty of ways to choose a benchmark! The Sharpe ratio generally utilises the risk-free rate and often, for US equities strategies, this is based on 10-year government Treasuries.

For market-neutral strategies in particular, there is a complication regarding whether to make use of the risk-free rate or zero as the benchmark. The market index itself should not be utilised as the strategy is, by design, market-neutral. The correct choice for a market-neutral portfolio is not to subtract the risk-free rate because it is self-financing. Since you gain a credit interest, Rf, from holding a margin, the actual calculation for returns is: (Ra+Rf)−Rf=Ra. Hence there is no actual subtraction of the risk-free rate for dollar-neutral strategies.

Limitations
Despite the prevalence of the Sharpe ratio within quantitative finance, it does suffer from some limitations.

Firstly, the Sharpe ratio is backward looking. It only accounts for the historical returns distribution and volatility, not those occurring in the future. When making judgements based on the Sharpe ratio there is an implicit assumption that the past will be similar to the future. This is evidently not always the case, particularly under market regime changes.

The Sharpe ratio calculation assumes that the returns being used are normally distributed (i.e. Gaussian). Unfortunately, markets often suffer from kurtosis above that of a normal distribution. Essentially the distribution of returns has "fatter tails" and thus extreme events are more likely to occur than a Gaussian distribution would lead us to believe. Hence, the Sharpe ratio is poor at characterising tail risk.

This can be clearly seen in strategies which are highly prone to such risks. For instance, the sale of call options (aka "pennies under a steam roller"). A steady stream of option premia is generated by the sale of call options over time, leading to a low volatility of returns, with a strong excess above a benchmark. In this instance the strategy would possess a high Sharpe ratio (based on historical data). However, it does not take into account that such options may be exercised, leading to significant and sudden drawdowns (or even wipeout) in the equity curve. Hence, as with any measure of algorithmic trading strategy performance, the Sharpe ratio cannot be used in isolation.

Although this point might seem obvious to some, transaction costs MUST be included in the calculation of Sharpe ratio in order for it to be realistic. There are countless examples of trading strategies that have high Sharpes (and thus a likelihood of great profitability) only to be reduced to low Sharpe, low profitability strategies once realistic costs have been factored in. This means making use of the net returns when calculating in excess of the benchmark. Hence, transaction costs must be factored in upstream of the Sharpe ratio calculation.

Practical Usage and Examples
One obvious question that has remained unanswered thus far in this article is "What is a good Sharpe Ratio for a strategy?". Pragmatically, you should ignore any strategy that possesses an annualised Sharpe ratio S<1 after transaction costs. Quantitative hedge funds tend to ignore any strategies that possess Sharpe ratios S<2. One prominent quantitative hedge fund that I am familiar with wouldn't even consider strategies that had Sharpe ratios S<3 while in research. As a retail algorithmic trader, if you can achieve a Sharpe ratio S>2 then you are doing very well.

The Sharpe ratio will often increase with trading frequency. Some high frequency strategies will have high single (and sometimes low double) digit Sharpe ratios, as they can be profitable almost every day and certainly every month. These strategies rarely suffer from catastrophic risk and thus minimise their volatility of returns, which leads to such high Sharpe ratios.

Examples of Sharpe Ratios
This has been quite a theoretical article up to this point. Now we will turn our attention to some actual examples. We will start simply, by considering a long-only buy-and-hold of an individual equity then consider a market-neutral strategy. Both of these examples have been carried out in the Python pandas data analysis library.

The first task is to actually obtain the data and put it into a pandas DataFrame object. In the article on securities master implementation in Python and MySQL I created a system for achieving this. Alternatively, we can make use of this simpler code to grab Yahoo Finance data directly and put it straight into a pandas DataFrame. At the bottom of this script I have created a function to calculate the annualised Sharpe ratio based on a time-period returns stream:

Code:
import datetime
import numpy as np
import pandas as pd
import urllib2


def get_historic_data(ticker,
                      start_date=(2000,1,1),
                      end_date=datetime.date.today().timetuple()[0:3]):
    """
    Obtains data from Yahoo Finance and adds it to a pandas DataFrame object.

    ticker: Yahoo Finance ticker symbol, e.g. "GOOG" for Google, Inc.
    start_date: Start date in (YYYY, M, D) format
    end_date: End date in (YYYY, M, D) format
    """

    # Construct the Yahoo URL with the correct integer query parameters
    # for start and end dates. Note that some parameters are zero-based!
    yahoo_url = "http://ichart.finance.yahoo.com/table.csv?s=%s&a=%s&b=%s&c=%s&d=%s&e=%s&f=%s" % \
        (ticker, start_date[1] - 1, start_date[2], start_date[0], end_date[1] - 1, end_date[2], end_date[0])
   
    # Try connecting to Yahoo Finance and obtaining the data
    # On failure, print an error message
    try:
        yf_data = urllib2.urlopen(yahoo_url).readlines()
    except Exception, e:
        print "Could not download Yahoo data: %s" % e

    # Create the (temporary) Python data structures to store
    # the historical data
    date_list = []
    hist_data = [[] for i in range(6)]

    # Format and copy the raw text data into datetime objects
    # and floating point values (still in native Python lists)
    for day in yf_data[1:]:  # Avoid the header line in the CSV
        headers = day.rstrip().split(',')
        date_list.append(datetime.datetime.strptime(headers[0],'%Y-%m-%d'))
        for i, header in enumerate(headers[1:]):
            hist_data[i].append(float(header))

    # Create a Python dictionary of the lists and then use that to
    # form a sorted Pandas DataFrame of the historical data
    hist_data = dict(zip(['open', 'high', 'low', 'close', 'volume', 'adj_close'], hist_data))
    pdf = pd.DataFrame(hist_data, index=pd.Index(date_list)).sort()

    return pdf

def annualised_sharpe(returns, N=252):
    """
    Calculate the annualised Sharpe ratio of a returns stream
    based on a number of trading periods, N. N defaults to 252,
    which then assumes a stream of daily returns.

    The function assumes that the returns are the excess of
    those compared to a benchmark.
    """
    return np.sqrt(N) * returns.mean() / returns.std()
Now that we have the ability to obtain data from Yahoo Finance and straightforwardly calculate the annualised Sharpe ratio, we can test out a buy and hold strategy for two equities. We will use Google (GOOG) and Goldman Sachs (GS) from Jan 1st 2000 to May 29th 2013 (when I wrote this article!).

We can create an additional helper function that allows us to quickly see buy-and-hold Sharpe across multiple equities for the same (hardcoded) period:

Code:
def equity_sharpe(ticker):
    """
    Calculates the annualised Sharpe ratio based on the daily
    returns of an equity ticker symbol listed in Yahoo Finance.

    The dates have been hardcoded here for the QuantStart article
    on Sharpe ratios.
    """

    # Obtain the equities daily historic data for the desired time period
    # and add to a pandas DataFrame
    pdf = get_historic_data(ticker, start_date=(2000,1,1), end_date=(2013,5,29))

    # Use the percentage change method to easily calculate daily returns
    pdf['daily_ret'] = pdf['adj_close'].pct_change()

    # Assume an average annual risk-free rate over the period of 5%
    pdf['excess_daily_ret'] = pdf['daily_ret'] - 0.05/252

    # Return the annualised Sharpe ratio based on the excess daily returns
    return annualised_sharpe(pdf['excess_daily_ret'])
For Google, the Sharpe ratio for buying and holding is 0.7501. For Goldman Sachs it is 0.2178:

equity_sharpe('GOOG')
0.75013831274645904

equity_sharpe('GS')
0.21777027767830823

Now we can try the same calculation for a market-neutral strategy. The goal of this strategy is to fully isolate a particular equity's performance from the market in general. The simplest way to achieve this is to go short an equal amount (in dollars) of an Exchange Traded Fund (ETF) that is designed to track such a market. The most obvious choice for the US large-cap equities market is the S&P500 index, which is tracked by the SPDR ETF, with the ticker SPY.

To calculate the annualised Sharpe ratio of such a strategy we will obtain the historical prices for SPY and calculate the percentage returns in a similar manner to the previous stocks, with the exception that we will not use the risk-free benchmark. We will calculate the net daily returns, which requires taking the difference between the long and the short daily returns and then dividing by 2, as we now have twice as much trading capital. Here is the Python/pandas code to carry this out:

Code:
def market_neutral_sharpe(ticker, benchmark):
    """
    Calculates the annualised Sharpe ratio of a market
    neutral long/short strategy involving the long of 'ticker'
    with a corresponding short of the 'benchmark'.
    """

    # Get historic data for both a symbol/ticker and a benchmark ticker
    # The dates have been hardcoded, but you can modify them as you see fit!
    tick = get_historic_data(ticker, start_date=(2000,1,1), end_date=(2013,5,29))
    bench = get_historic_data(benchmark, start_date=(2000,1,1), end_date=(2013,5,29))
   
    # Calculate the percentage returns on each of the time series
    tick['daily_ret'] = tick['adj_close'].pct_change()
    bench['daily_ret'] = bench['adj_close'].pct_change()
   
    # Create a new DataFrame to store the strategy information
    # The net returns are (long - short)/2, since there is twice
    # trading capital for this strategy
    strat = pd.DataFrame(index=tick.index)
    strat['net_ret'] = (tick['daily_ret'] - bench['daily_ret'])/2.0
   
    # Return the annualised Sharpe ratio for this strategy
    return annualised_sharpe(strat['net_ret'])
For Google, the Sharpe ratio for the long/short market-neutral strategy is 0.7597. For Goldman Sachs it is 0.2999:

market_neutral_sharpe('GOOG', 'SPY')
0.75966612163452329

market_neutral_sharpe('GS', 'SPY')
0.29991401047248328
Despite the Sharpe ratio being used almost everywhere in algorithmic trading, we need to consider other metrics of performance and risk. In later articles we will discuss drawdowns and how they affect the decision to run a strategy or not.

article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++)
8  Economy / Trading Discussion / Continuous Futures Contracts for Backtesting Purposes on: March 18, 2019, 06:53:06 AM
article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++)

Brief Overview of Futures Contracts
Futures are a form of contract drawn up between two parties for the purchase or sale of a quantity of an underlying asset at a specified date in the future. This date is known as the delivery or expiration. When this date is reached the buyer must deliver the physical underlying (or cash equivalent) to the seller for the price agreed at the contract formation date.

In practice futures are traded on exchanges (as opposed to Over The Counter - OTC - trading) for standardised quantities and qualities of the underlying. The prices are marked to market every day. Futures are incredibly liquid and are used heavily for speculative purposes. While futures were often utilised to hedge the prices of agricultural or industrial goods, a futures contract can be formed on any tangible or intangible underlying such as stock indices, interest rates or foreign exchange values.

A detailed list of all the symbol codes used for futures contracts across various exchanges can be found on the CSI Data site: Futures Factsheet.

The main difference between a futures contract and equity ownership is the fact that a futures contract has a limited window of availability by virtue of the expiration date. At any one instant there will be a variety of futures contracts on the same underlying all with varying dates of expiry. The contract with the nearest date of expiry is known as the near contract. The problem we face as quantitative traders is that at any point in time we have a choice of multiple contracts with which to trade. Thus we are dealing with an overlapping set of time series rather than a continuous stream as in the case of equities or foreign exchange.

The goal of this article is to outline various approaches to constructing a continuous stream of contracts from this set of multiple series and to highlight the tradeoffs associated with each technique.

Forming a Continuous Futures Contract
The main difficulty with trying to generate a continuous contract from the underlying contracts with varying deliveries is that the contracts do not often trade at the same prices. Thus situations arise where they do not provide a smooth splice from one to the next. This is due to contango and backwardation effects. There are various approaches to tackling this problem, which we now discuss.

Common Approaches
Unfortunately there is no single "standard" method for joining futures contracts together in the financial industry. Ultimately the method chosen will depend heavily upon the strategy employing the contracts and the method of execution. Despite the fact that no single method exists there are some common approaches:

Back/Forward ("Panama") Adjustment
This method alleviates the "gap" across multiple contracts by shifting each contract such that the individual deliveries join in a smooth manner to the adjacent contracts. Thus the open/close across the prior contracts at expiry matches up.

The key problem with the Panama method includes the introduction of a trend bias, which will introduce a large drift to the prices. This can lead to negative data for sufficiently historical contracts. In addition there is a loss of the relative price differences due to an absolute shift in values. This means that returns are complicated to calculate (or just plain incorrect).

Proportional Adjustment
The Proportionality Adjustment approach is similar to the adjustment methodology of handling stock splits in equities. Rather than taking an absolute shift in the successive contracts, the ratio of the older settle (close) price to the newer open price is used to proportionally adjust the prices of historical contracts. This allows a continuous stream without an interruption in the calculation of percentage returns.

The main issue with proportional adjustment is that any trading strategies reliant on an absolute price level will also have to be similarly adjusted in order to execute the correct signal. This is a problematic and error-prone process. Thus this type of continuous stream is often only useful for summary statistical analysis, as opposed to direct backtesting research.
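
To make the idea concrete, here is a minimal sketch of a proportional back-adjustment across a single roll (an assumption-laden illustration: both contracts are assumed to have a settle price on the chosen roll date, and settle prices are used on both legs for simplicity, whereas the description above pairs the older settle with the newer open):

Code:
import pandas as pd

def proportional_adjust(old_contract, new_contract, roll_date):
    """Back-adjust an expiring contract's settle prices so that percentage
    returns remain continuous across the roll into the new contract.

    old_contract, new_contract: pandas Series of settle prices indexed by date.
    roll_date: the date on which we switch from the old to the new contract.
    """
    # Ratio of the new contract's price to the old contract's price at the roll
    ratio = new_contract.loc[roll_date] / old_contract.loc[roll_date]

    # Scale the entire history of the old contract, then splice the two series,
    # dropping the duplicated roll-date point from the old leg
    adjusted_old = old_contract.loc[:roll_date] * ratio
    return pd.concat([adjusted_old.iloc[:-1], new_contract.loc[roll_date:]])

In practice this would be applied iteratively, rolling backwards from the most recent contract through each earlier expiry.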

Rollover/Perpetual Series
The essence of this approach is to create a continuous contract of successive contracts by taking a linearly weighted proportion of each contract over a number of days to ensure a smoother transition between each.

For example consider five smoothing days. The price on day 1, P1, is equal to 80% of the far contract price (F1) and 20% of the near contract price (N1). Similarly, on day 2 the price is P2=0.6×F2+0.4×N2. By day 5 we have P5=0.0×F5+1.0×N5=N5 and the contract then just becomes a continuation of the near price. Thus after five days the contract is smoothly transitioned from the far to the near.

The problem with the rollover method is that it requires trading on all five days, which can increase transaction costs.

There are other less common approaches to the problem but we will avoid them here.

Roll-Return Formation in Python and Pandas
The remainder of the article will concentrate on implementing the perpetual series method as this is most appropriate for backtesting. It is a useful way to carry out strategy pipeline research.

We are going to stitch together the WTI Crude Oil "near" and "far" futures contract (symbol CL) in order to generate a continuous price series. At the time of writing (January 2014), the near contract is CLF2014 (January) and the far contract is CLG2014 (February).

In order to carry out the download of futures data I've made use of the Quandl plugin. Make sure to set the correct Python virtual environment on your system and install the Quandl package by typing the following into the terminal:

Code:
pip install Quandl

The following imports are then needed at the top of the script:

Code:
import datetime
import numpy as np
import pandas as pd
import Quandl
The main work is carried out in the futures_rollover_weights function. It requires a starting date (the first date of the near contract), a dictionary of contract settlement dates (expiry_dates), the symbols of the contracts and the number of days to roll the contract over (defaulting to five). The comments below explain the code:

Code:
def futures_rollover_weights(start_date, expiry_dates, contracts, rollover_days=5):
    """This constructs a pandas DataFrame that contains weights (between 0.0 and 1.0)
    of contract positions to hold in order to carry out a rollover of rollover_days
    prior to the expiration of the earliest contract. The matrix can then be
    'multiplied' with another DataFrame containing the settle prices of each
    contract in order to produce a continuous time series futures contract."""

    # Construct a sequence of dates beginning from the earliest contract start
    # date to the end date of the final contract
    dates = pd.date_range(start_date, expiry_dates[-1], freq='B')

    # Create the 'roll weights' DataFrame that will store the multipliers for
    # each contract (between 0.0 and 1.0)
    roll_weights = pd.DataFrame(np.zeros((len(dates), len(contracts))),
                                index=dates, columns=contracts)
    prev_date = roll_weights.index[0]

    # Loop through each contract and create the specific weightings for
    # each contract depending upon the settlement date and rollover_days
    for i, (item, ex_date) in enumerate(expiry_dates.iteritems()):
        if i < len(expiry_dates) - 1:
            roll_weights.ix[prev_date:ex_date - pd.offsets.BDay(), item] = 1
            roll_rng = pd.date_range(end=ex_date - pd.offsets.BDay(),
                                     periods=rollover_days + 1, freq='B')

            # Create a sequence of roll weights (i.e. [0.0,0.2,...,0.8,1.0]
            # and use these to adjust the weightings of each future
            decay_weights = np.linspace(0, 1, rollover_days + 1)
            roll_weights.ix[roll_rng, item] = 1 - decay_weights
            roll_weights.ix[roll_rng, expiry_dates.index[i+1]] = decay_weights
        else:
            roll_weights.ix[prev_date:, item] = 1
        prev_date = ex_date
    return roll_weights
Now that the weighting matrix has been produced, it is possible to apply this to the individual time series. The main function downloads the near and far contracts, creates a single DataFrame for both, constructs the rollover weighting matrix and then finally produces a continuous series of both prices, appropriately weighted:

Code:
if __name__ == "__main__":
    # Download the current Front and Back (near and far) futures contracts
    # for WTI Crude, traded on NYMEX, from Quandl.com. You will need to
    # adjust the contracts to reflect your current near/far contracts
    # depending upon the point at which you read this!
    wti_near = Quandl.get("OFDP/FUTURE_CLF2014")
    wti_far = Quandl.get("OFDP/FUTURE_CLG2014")
    wti = pd.DataFrame({'CLF2014': wti_near['Settle'],
                        'CLG2014': wti_far['Settle']}, index=wti_far.index)

    # Create the dictionary of expiry dates for each contract
    expiry_dates = pd.Series({'CLF2014': datetime.datetime(2013, 12, 19),
                              'CLG2014': datetime.datetime(2014, 2, 21)}).order()

    # Obtain the rollover weighting matrix/DataFrame
    weights = futures_rollover_weights(wti_near.index[0], expiry_dates, wti.columns)

    # Construct the continuous future of the WTI CL contracts
    wti_cts = (wti * weights).sum(1).dropna()

    # Output the merged series of contract settle prices
    wti_cts.tail(60)

The output is as follows:

2013-10-14 102.230
2013-10-15 101.240
2013-10-16 102.330
2013-10-17 100.620
2013-10-18 100.990
2013-10-21 99.760
2013-10-22 98.470
2013-10-23 97.000
2013-10-24 97.240
2013-10-25 97.950
..
..
2013-12-24 99.220
2013-12-26 99.550
2013-12-27 100.320
2013-12-30 99.290
2013-12-31 98.420
2014-01-02 95.440
2014-01-03 93.960
2014-01-06 93.430
2014-01-07 93.670
2014-01-08 92.330
Length: 60, dtype: float64

It can be seen that the series is now continuous across the two contracts. The next step is to carry this out for multiple deliveries across a variety of years, depending upon your backtesting needs.
article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++)
9  Economy / Trading Discussion / Research Backtesting Environments in Python with pandas on: March 16, 2019, 04:07:35 AM
article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++)

Backtesting is the research process of applying a trading strategy idea to historical data in order to ascertain past performance. In particular, a backtester makes no guarantee about the future performance of the strategy. Backtests are, nevertheless, an essential component of the strategy pipeline research process, allowing strategies to be filtered out before being placed into production.

In this article (and those that follow it) a basic object-oriented backtesting system written in Python will be outlined. This early system will primarily be a "teaching aid", used to demonstrate the different components of a backtesting system. As we progress through the articles, more sophisticated functionality will be added.

Backtesting Overview
The process of designing a robust backtesting system is extremely difficult. Effectively simulating all of the components that affect the performance of an algorithmic trading system is challenging. Poor data granularity, opaqueness of order routing at a broker, order latency and a myriad of other factors conspire to alter the "true" performance of a strategy versus the backtested performance.

When developing a backtesting system it is tempting to want to constantly "rewrite it from scratch" as more factors are found to be crucial in assessing performance. No backtesting system is ever finished and a judgement must be made at a point during development that enough factors have been captured by the system.

With these concerns in mind the backtester presented in here will be somewhat simplistic. As we explore further issues (portfolio optimisation, risk management, transaction cost handling) the backtester will become more robust.

Types of Backtesting Systems
There are generally two types of backtesting system that will be of interest. The first is research-based, used primarily in the early stages, where many strategies will be tested in order to select those for more serious assessment. These research backtesting systems are often written in Python, R or MATLAB, as speed of development is more important than speed of execution in this phase.

The second type of backtesting system is event-based. That is, it carries out the backtesting process in an execution loop similar (if not identical) to the trading execution system itself. It will realistically model market data and the order execution process in order to provide a more rigorous assessment of a strategy.

The latter systems are often written in a high-performance language such as C++ or Java, where speed of execution is essential. For lower frequency strategies (although still intraday), Python is more than sufficient to be used in this context.

Object-Oriented Research Backtester in Python
The design and implementation of an object-oriented research-based backtesting environment will now be discussed. Object orientation has been chosen as the software design paradigm for the following reasons:

The interfaces of each component can be specified upfront, while the internals of each component can be modified (or replaced) as the project progresses
By specifying the interfaces upfront it is possible to effectively test how each component behaves (via unit testing)
When extending the system new components can be constructed upon or in addition to others, either by inheritance or composition
At this stage the backtester is designed for ease of implementation and a reasonable degree of flexibility, at the expense of true market accuracy. In particular, this backtester will only be able to handle strategies acting on a single instrument. Later the backtester will be modified to handle sets of instruments. For the initial backtester, the following components are required:

Strategy - A Strategy class receives a Pandas DataFrame of bars, i.e. a list of Open-High-Low-Close-Volume (OHLCV) data points at a particular frequency. The Strategy will produce a list of signals, which consist of a timestamp and an element from the set {1,0,−1} indicating a long, hold or short signal respectively.
Portfolio - The majority of the backtesting work will occur in the Portfolio class. It will receive a set of signals (as described above) and create a series of positions, allocated against a cash component. The job of the Portfolio object is to produce an equity curve, incorporate basic transaction costs and keep track of trades.
Performance - The Performance object takes a portfolio and produces a set of statistics about its performance. In particular it will output risk/return characteristics (Sharpe, Sortino and Information Ratios), trade/profit metrics and drawdown information.
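
As a flavour of what the Performance object might eventually compute, here is a minimal sketch (not the implementation developed in these articles) of two of the statistics mentioned above, given a pandas Series representing the equity curve:

Code:
import numpy as np
import pandas as pd

def performance_summary(equity_curve, periods=252):
    """Minimal performance statistics from an equity curve, supplied as a
    pandas Series of total portfolio value indexed by bar timestamp."""
    returns = equity_curve.pct_change().dropna()

    # Annualised Sharpe ratio (excess returns over a zero benchmark)
    sharpe = np.sqrt(periods) * returns.mean() / returns.std()

    # Maximum drawdown: the largest peak-to-trough fall in the equity curve
    running_max = equity_curve.cummax()
    drawdown = equity_curve / running_max - 1.0

    return {"sharpe": sharpe, "max_drawdown": drawdown.min()}

The trade metrics, Sortino and Information Ratios and drawdown duration listed above would extend such a summary in the same fashion.
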
What's Missing?
As can be seen this backtester does not include any reference to portfolio/risk management, execution handling (i.e. no limit orders) nor will it provide sophisticated modelling of transaction costs. This isn't much of a problem at this stage. It allows us to gain familiarity with the process of creating an object-oriented backtester and the Pandas/NumPy libraries. In time it will be improved.

Implementation
We will now proceed to outline the implementations for each object.

Strategy
The Strategy object must be quite generic at this stage, since it will be handling forecasting, mean-reversion, momentum and volatility strategies. The strategies being considered here will always be time series based, i.e. "price driven". An early requirement for this backtester is that derived Strategy classes will accept a list of bars (OHLCV) as input, rather than ticks (trade-by-trade prices) or order-book data. Thus the finest granularity being considered here will be 1-second bars.

The Strategy class will also always produce signal recommendations. This means that it will advise a Portfolio instance in the sense of going long/short or holding a position. This flexibility will allow us to create multiple Strategy "advisors" that provide a set of signals, which a more advanced Portfolio class can accept in order to determine the actual positions being entered.

The interface of the classes will be enforced by utilising an abstract base class methodology. An abstract base class is an object that cannot be instantiated and thus only derived classes can be created. The Python code is given below in a file called backtest.py. The Strategy class requires that any subclass implement the generate_signals method.

In order to prevent the Strategy class from being instantiated directly (since it is abstract!) it is necessary to use the ABCMeta and abstractmethod objects from the abc module. We set a property of the class, called __metaclass__, to be equal to ABCMeta and then decorate the generate_signals method with the abstractmethod decorator.

Code:
# backtest.py

from abc import ABCMeta, abstractmethod

class Strategy(object):
    """Strategy is an abstract base class providing an interface for
    all subsequent (inherited) trading strategies.

    The goal of a (derived) Strategy object is to output a list of signals,
    which has the form of a time series indexed pandas DataFrame.

    In this instance only a single symbol/instrument is supported."""

    __metaclass__ = ABCMeta

    @abstractmethod
    def generate_signals(self):
        """An implementation is required to return the DataFrame of symbols
        containing the signals to go long, short or hold (1, -1 or 0)."""
        raise NotImplementedError("Should implement generate_signals()!")
While the above interface is straightforward it will become more complicated when this class is inherited for each specific type of strategy. Ultimately the goal of the Strategy class in this setting is to provide a list of long/short/hold signals for each instrument to be sent to a Portfolio.

Portfolio
The Portfolio class is where the majority of the trading logic will reside. For this research backtester the Portfolio is in charge of determining position sizing, risk analysis, transaction cost management and execution handling (i.e. market-on-open, market-on-close orders). At a later stage these tasks will be broken down into separate components. Right now they will be rolled in to one class.

This class makes ample use of pandas and provides a great example of where the library can save a huge amount of time, particularly in regards to "boilerplate" data wrangling. As an aside, the main trick with pandas and NumPy is to avoid iterating over any dataset using the for d in ... syntax. This is because NumPy (which underlies pandas) optimises looping by vectorised operations. Thus you will see few (if any!) direct iterations when utilising pandas.
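
As a trivial illustration of this point (not code from the backtester itself), compare an explicit Python loop with the equivalent vectorised pandas call for computing bar-to-bar returns:

Code:
import pandas as pd

prices = pd.Series([100.0, 101.0, 99.5, 102.0])

# Explicit Python loop: slow, and easy to get subtly wrong
returns_loop = [None]
for i in range(1, len(prices)):
    returns_loop.append(prices.iloc[i] / prices.iloc[i - 1] - 1.0)

# Vectorised equivalent: the iteration happens inside optimised NumPy code
returns_vec = prices.pct_change()

Both produce the same bar-to-bar percentage changes, but the vectorised form is faster and less error-prone on large datasets.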

The goal of the Portfolio class is to ultimately produce a sequence of trades and an equity curve, which will be analysed by the Performance class. In order to achieve this it must be provided with a list of trading recommendations from a Strategy object. Later on, this will be a group of Strategy objects.

The Portfolio class will need to be told how capital is to be deployed for a particular set of trading signals, how to handle transaction costs and which forms of orders will be utilised. The Strategy object is operating on bars of data and thus assumptions must be made in regard to prices achieved at execution of an order. Since the high/low price of any bar is unknown a priori it is only possible to use the open and close prices for trading. In reality it is impossible to guarantee that an order will be filled at one of these particular prices when using a market order, so it will be, at best, an approximation.

In addition to assumptions about orders being filled, this backtester will ignore all concepts of margin/brokerage constraints and will assume that it is possible to go long and short in any instrument freely without any liquidity constraints. This is clearly a very unrealistic assumption, but is one that can be relaxed later.

The following listing continues backtest.py:

Code:
# backtest.py

class Portfolio(object):
    """An abstract base class representing a portfolio of
    positions (including both instruments and cash), determined
    on the basis of a set of signals provided by a Strategy."""

    __metaclass__ = ABCMeta

    @abstractmethod
    def generate_positions(self):
        """Provides the logic to determine how the portfolio
        positions are allocated on the basis of forecasting
        signals and available cash."""
        raise NotImplementedError("Should implement generate_positions()!")

    @abstractmethod
    def backtest_portfolio(self):
        """Provides the logic to generate the trading orders
        and subsequent equity curve (i.e. growth of total equity),
        as a sum of holdings and cash, and the bar-period returns
        associated with this curve based on the 'positions' DataFrame.

        Produces a portfolio object that can be examined by
        other classes/functions."""
        raise NotImplementedError("Should implement backtest_portfolio()!")
At this stage the Strategy and Portfolio abstract base classes have been introduced. We are now in a position to generate some concrete derived implementations of these classes, in order to produce a working "toy strategy".

We will begin by generating a subclass of Strategy called RandomForecastingStrategy, the sole task of which is to produce randomly chosen long/short signals! While this is clearly a nonsensical trading strategy, it will serve our needs by demonstrating the object-oriented backtesting framework. Thus we will begin a new file called random_forecast.py, with the listing for the random forecaster as follows:

Code:
# random_forecast.py

import numpy as np
import pandas as pd
import Quandl   # Necessary for obtaining financial data easily

from backtest import Strategy, Portfolio

class RandomForecastingStrategy(Strategy):
    """Derives from Strategy to produce a set of signals that
    are randomly generated long/shorts. Clearly a nonsensical
    strategy, but perfectly acceptable for demonstrating the
    backtesting infrastructure!"""   
   
    def __init__(self, symbol, bars):
        """Requires the symbol ticker and the pandas DataFrame of bars"""
        self.symbol = symbol
        self.bars = bars

    def generate_signals(self):
        """Creates a pandas DataFrame of random signals."""
        signals = pd.DataFrame(index=self.bars.index)
        signals['signal'] = np.sign(np.random.randn(len(signals)))

        # The first five elements are set to zero in order to minimise
        # upstream NaN errors in the forecaster.
        signals['signal'][0:5] = 0.0
        return signals
Now that we have a "concrete" forecasting system, we must create an implementation of a Portfolio object. This object will encompass the majority of the backtesting code. It is designed to create two separate DataFrames, the first of which is a positions frame, used to store the quantity of each instrument held at any particular bar. The second, portfolio, actually contains the market price of all holdings for each bar, as well as a tally of the cash, assuming an initial capital. This ultimately provides an equity curve on which to assess strategy performance.

The Portfolio object, while extremely flexible in its interface, requires specific choices when regarding how to handle transaction costs, market orders etc. In this basic example I have considered that it will be possible to go long/short an instrument easily with no restrictions or margin, buy or sell directly at the open price of the bar, zero transaction costs (encompassing slippage, fees and market impact) and have specified the quantity of stock directly to purchase for each trade.

Here is the continuation of the random_forecast.py listing:

Code:
# random_forecast.py

class MarketOnOpenPortfolio(Portfolio):
    """Inherits Portfolio to create a system that purchases 100 units of
    a particular symbol upon a long/short signal, assuming the market
    open price of a bar.

    In addition, there are zero transaction costs and cash can be immediately
    borrowed for shorting (no margin posting or interest requirements).

    Requires:
    symbol - A stock symbol which forms the basis of the portfolio.
    bars - A DataFrame of bars for a symbol set.
    signals - A pandas DataFrame of signals (1, 0, -1) for each symbol.
    initial_capital - The amount in cash at the start of the portfolio."""

    def __init__(self, symbol, bars, signals, initial_capital=100000.0):
        self.symbol = symbol       
        self.bars = bars
        self.signals = signals
        self.initial_capital = float(initial_capital)
        self.positions = self.generate_positions()
       
    def generate_positions(self):
        """Creates a 'positions' DataFrame that simply longs or shorts
        100 of the particular symbol based on the forecast signals of
        {1, 0, -1} from the signals DataFrame."""
        positions = pd.DataFrame(index=self.signals.index).fillna(0.0)
        positions[self.symbol] = 100*self.signals['signal']
        return positions
                   
    def backtest_portfolio(self):
        """Constructs a portfolio from the positions DataFrame by
        assuming the ability to trade at the precise market open price
        of each bar (an unrealistic assumption!).

        Calculates the total of cash and the holdings (market price of
        each position per bar), in order to generate an equity curve
        ('total') and a set of bar-based returns ('returns').

        Returns the portfolio object to be used elsewhere."""

        # Construct the portfolio DataFrame to use the same index
        # as 'positions' and with a set of 'trading orders' in the
        # 'pos_diff' object, assuming market open prices.
        portfolio = self.positions*self.bars['Open']
        pos_diff = self.positions.diff()

        # Create the 'holdings' and 'cash' series by running through
        # the trades and adding/subtracting the relevant quantity from
        # each column
        portfolio['holdings'] = (self.positions*self.bars['Open']).sum(axis=1)
        portfolio['cash'] = self.initial_capital - (pos_diff*self.bars['Open']).sum(axis=1).cumsum()

        # Finalise the total and bar-based returns based on the 'cash'
        # and 'holdings' figures for the portfolio
        portfolio['total'] = portfolio['cash'] + portfolio['holdings']
        portfolio['returns'] = portfolio['total'].pct_change()
        return portfolio
This gives us everything we need to generate an equity curve based on such a system. The final step is to tie it all together with a main function:

Code:
if __name__ == "__main__":
    # Obtain daily bars of SPY (ETF that generally
    # follows the S&P500) from Quandl (requires 'pip install Quandl'
    # on the command line)
    symbol = 'SPY'
    bars = Quandl.get("GOOG/NYSE_%s" % symbol, collapse="daily")

    # Create a set of random forecasting signals for SPY
    rfs = RandomForecastingStrategy(symbol, bars)
    signals = rfs.generate_signals()

    # Create a portfolio of SPY
    portfolio = MarketOnOpenPortfolio(symbol, bars, signals, initial_capital=100000.0)
    returns = portfolio.backtest_portfolio()

    print(returns.tail(10))
The output of the program is as follows. Yours will differ from the output below depending upon the date range you select and the random seed used:

               SPY  holdings    cash  total   returns
Date
2014-01-02  -18398    -18398  111486  93088  0.000097
2014-01-03   18321     18321   74844  93165  0.000827
2014-01-06   18347     18347   74844  93191  0.000279
2014-01-07   18309     18309   74844  93153 -0.000408
2014-01-08  -18345    -18345  111534  93189  0.000386
2014-01-09  -18410    -18410  111534  93124 -0.000698
2014-01-10  -18395    -18395  111534  93139  0.000161
2014-01-13  -18371    -18371  111534  93163  0.000258
2014-01-14  -18228    -18228  111534  93306  0.001535
2014-01-15   18410     18410   74714  93124 -0.001951

In this instance the strategy lost money, which is unsurprising given the stochastic nature of the forecaster! The next steps are to create a Performance object that accepts a Portfolio instance and provides a list of performance metrics upon which to base a decision to filter the strategy out or not.

We can also improve the Portfolio object to have a more realistic handling of transaction costs (such as Interactive Brokers commissions and slippage). We can also straightforwardly include a forecasting engine into a Strategy object, which will (hopefully) produce better results. In the following articles we will explore these concepts in more depth.
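As a first taste of the transaction-cost improvement, here is a minimal sketch, not part of the original listing, of how a fixed per-share commission could be subtracted when building the cash series inside backtest_portfolio(). The $0.005-per-share rate is only an assumed figure, loosely modelled on a fixed-pricing broker schedule.

Code:
# commission_sketch.py -- illustrative only, not the article's code
import pandas as pd

def cash_with_commission(position, open_price, initial_capital, rate_per_share=0.005):
    """Cash curve for a single symbol after paying a commission on every share traded.

    position   : pd.Series of the quantity held at each bar (e.g. +100 / -100)
    open_price : pd.Series of bar open prices, same index as 'position'
    """
    trades = position.diff().fillna(position)      # shares bought (+) / sold (-) each bar
    trade_value = trades * open_price              # cash spent (+) or received (-)
    commission = trades.abs() * rate_per_share     # assumed per-share commission
    return initial_capital - (trade_value + commission).cumsum()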

article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++)
10  Economy / Trading Discussion / How to make your own trading bot on: March 16, 2019, 02:50:47 AM
article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++)

i can't post pics here because of my rank on bitcointalk. for more information, please check here: https://www.fmz.com/bbs-topic/2884

Foreword
I’m certainly not a great programmer, but writing this project taught me a lot (and kept me occupied). Most of my code was written on FMZ.COM, and if I were to refactor the Python code I would use a more object-oriented model. Nonetheless, I was pleasantly surprised with the results I got, and the bot has made almost 100% ether profit so far.

What does it do?
It is an arbitrage bot. That means that it earns money from trading the difference between prices on two (or more) exchanges. As of now it is unidirectional and only trades between Etherdelta and Bittrex: they share approximately twenty eth/token pairs. Here’s a diagram to illustrate how it works:

Words followed by parentheses are Ethereum transactions that invoke a smart contract function call.

The Code
I could have used the FMZ.COM platform's Python editor to create the transactions and function calls, and it would have been fairly straightforward. But I needed something more reliable; a failed transaction means losing money. Every single one of my GET requests needed a reply, even if the TCP packet got lost or the webserver on the other end was temporarily down. Therefore I decided to implement my own Python Etherscan API wrapper, used pythereum to create the transactions and Etherscan to publish them. I also wrote my own requests.get decorator, which is a while loop that only exits once the reply is satisfactory.
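To make that idea concrete, here is a rough sketch of such a "retry until the reply is satisfactory" GET wrapper. It is only an illustration of the approach described above, not the author's original code; the function name, validator and delay are my own assumptions.

Code:
# retry_get.py -- illustrative sketch, not the original code
import time
import requests

def get_with_retry(url, is_ok=lambda r: r.status_code == 200, delay=1.0, **kwargs):
    """Keep issuing the GET request until the reply is satisfactory."""
    while True:
        try:
            resp = requests.get(url, timeout=10, **kwargs)
            if is_ok(resp):
                return resp
        except requests.RequestException:
            pass                 # lost packet or server briefly down: just retry
        time.sleep(delay)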

Here is the code I used to encode the etherdelta json API responses as hexadecimal, rlp encoded, ethereum transactions (not for the faint hearted):


The raw hexadecimal values in the closure at the bottom are the function signatures that correspond to each function. A function signature is derived from the keccak of the function and its arguments. It must be appended to the data parameter of a transaction followed by the data that makes up the arguments. In total my code is around 400 lines long and contained in 5 different files.
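As a sketch of the general technique (not the author's original code, which is not shown here), the snippet below derives a function selector from the keccak-256 hash of a canonical function signature and appends an ABI-encoded argument; it assumes the pycryptodome package for the keccak hash, and the withdraw amount is purely illustrative.

Code:
# selector_sketch.py -- illustrative sketch of selector + argument encoding
from Crypto.Hash import keccak   # pip install pycryptodome

def function_selector(signature):
    """First 4 bytes (8 hex chars) of keccak-256 of the canonical signature string."""
    k = keccak.new(digest_bits=256)
    k.update(signature.encode('ascii'))
    return k.hexdigest()[:8]

def encode_uint256(value):
    """ABI-encode an unsigned integer as a 32-byte (64 hex character) word."""
    return format(value, '064x')

# deposit() takes no arguments, so the data field is just the selector.
print('0x' + function_selector('deposit()'))          # 0xd0e30db0

# withdraw(uint256): the selector is followed by the ABI-encoded argument.
amount_wei = 10**18                                   # 1 ether, purely illustrative
print('0x' + function_selector('withdraw(uint256)') + encode_uint256(amount_wei))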

The Outcome
I made a couple of graphs from the data I logged, using matplotlib.
 

Conclusion
Overall the entire project took me around two weeks during my spare time at school and it was a blast all round. I’ve taken a break from coding vigorously and am currently in the process of planning arbitrage bot v2. The next version is going to include 86 different exchanges and a whole lot of trading pairs.

To the moon!

article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++)
11  Economy / Trading Discussion / If you find or believe that there are many loopholes in using indicator. on: March 15, 2019, 02:21:33 AM
article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++)

If you find or believe that there are many loopholes in using traditional techniques to guide trading, then this article is worth reading.

I originally did not want to write another trading article, because I want to write something I am satisfied with and that most people also find good, which is not realistic either in theory or in practice. Still, I am writing it and willing to share it, in the hope that one day a successful trader will say he has read my article and been inspired by it.

How pleasant it is.

Many people have realized that a real trading system should be simple, but this simplicity is not simple to reach. If you tell people something simple and effective, they may not use it; they may even complicate it. You must sublimate the trading method to the philosophical level, then simplify it into a code of conduct and make it specific down to the details. To make this progress, you have to take the trip inside and successfully come out the other side.

There is no standard answer to how to understand the nature of the market. I tend to think that the essence of the market is correcting mistakes: when the public is wrong, there is a market. This is not empty talk; it has practical value. If you are used to trend trading, you will enter the market at the point where others are being shaken out. The entry timing can be established from the "divergence" of some technical indicators. Many people use divergence without knowing why it is effective. As I understand it, divergence is effective because most retail investors are stuck in the previous pattern of thinking (they are wrong!), and the market suddenly reverses and makes them lose money.

I am really reluctant to give away the "real stuff" above. I am sharing it now only to testify to two points of view: "seeing the road does not go" and "never be obsessed with appearance". Some people call technical indicators a toy. You have to see through them and treat them as a "reference object", not simply use them to judge long and short. The core of "seeing the road does not go" is to neither rely only on experience nor deny all experience; you need to choose the best strategy according to the specific situation.

Defects in your trading system are inevitable, but if you cannot make money at all, there is a problem with the design of the system itself. At this point, you have to reflect on whether you are lost in your own thinking or imprisoned by some fact of your actual environment. You have to think seriously about how to "not be obsessed with appearance".


The market always develops along the path of least resistance, and most people are doomed to be wrong. "Seeing the road does not go" means understanding the true intention of the market, not just the technical indicators or the historical experience. If you find all of this thinking goes against your system, please review your trading system.

Many times you feel that the market is always playing with you, and people cannot help suspecting that the market is full of conspiracies. I think the news around the market may involve conspiracy, but the market itself does not, because the price is clearly shown to everyone. How to judge it and how to formulate a strategy is not part of this article.

But if I end here, I am afraid readers will think this is just another article of empty words. So I will pull an archived file from my computer: an early diary entry of mine which, at the time, was a summary of experience that I was very reluctant to share.

When is a trend line effective: it has lasted for a long time and has been effective many times, and compared with the current K-line time frame its amplitude is not too big.

When the K line of a certain time frame extends along the moving average line, the moving average serves a similar function to the trend line.

If the slope of the K line is much steeper than the usual moving average, and this does not occur in the initial segment, then the band ends with a reverse breakthrough of its own or of a sub-level moving average as the signal. The market is evolving: there may be a reversal, or the original direction may continue, so it is very reasonable to close the position and then place a new order after waiting patiently.

If there is no large trend at the daily level, the 1-hour chart is the basic observation window; the retracement of a daily-level move can also use the 1-hour chart as the basic window.

If the 1-hour chart is in a narrow channel, the basic observation window is the 4-hour chart.

A strong unilateral trend may be accompanied by a strong retracement. A likely strong retracement (significant in type, continuous in strength) requires closing the position, and you can even try to profit from the retracement itself. If its strength is unsustainable, the retracement is not established and it evolves into a weak retracement (more sustained in time than in space). A weak retracement will return to the original trend, and you can open a large position.

Three places to enter the market:

Top entry: assume that this place is the extreme point; the basis for the judgment is the trend line. Since it is assumed to be the top, the fault-tolerant space must be very limited; if the stop loss is reached, close the position immediately. Never hesitate. Wait for the new direction.

Reversal point entry: a reversal at an obvious support or resistance position. Either the previous trend is being damaged and a new trend may begin, or the retracement ends and the price returns to the original trend. The stop loss is set at the 1/2 neckline of the trend line.

Chasing the trend entry: when you have missed entry points 1 and 2 and find that the market is still strong, there will be no retracement opportunities in the short term. When chasing a long or short entry, the important K line is the one that prompted you to open the position; if the direction moves the opposite way and breaks the highest or lowest edge of this K line, close the position and stop the loss immediately. It is possible that after the stop loss the market moves in the original direction, or evolves into a tunnel (channel) move. Do not regret it; look for opportunities to enter the market again after observation and confirmation. When the top-entry situation occurs, it is best to find a narrow tunnel market; a large band is more difficult to evolve into a "tunnel", and the current inertia is likely to break through the possible tunnel lines and become a complex seesaw.

Reversal: the confirmation time of the retracement is positively related to the size of the previous wave. If H1 reverses and crosses MA23 while H4 is still far away from MA23, do not be eager to enter. Wait for 3 or more K lines to completely leave the moving average.

When an H1 or H4 K-line move is still maturing, an early chase is needed (the three types of entry points). In the late stage it is not advisable to chase. If there seems to be no "late stage", the move must be part of a larger trend.

If it is a big trend, even if you miss a band, you still have a chance to enter again. Before the daily-level trend has formed, the hourly chart determines the appearance of the basic window. If the market evolves into a general trend, then follow it.

Understand it, stabilize yourself, and make sure of the risk ratio before acting; a good trend will definitely give you a clear signal, as long as you wait patiently!!!

article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++)
12  Economy / Trading Discussion / About the floating stop loss setting on: March 15, 2019, 02:19:15 AM
article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++)

About the floating stop loss setting

The only secret of trading is that there is no secret; the so-called secrets are just too common to be taken seriously. Such secrets were summed up hundreds of years ago.

They are nothing more than the stop loss, that is, controlling losses; following the trend, that is, using the current price trend as an objective basis rather than your own subjective speculation; and capital management, that is, always participating in this game only with funds you can afford to lose.

Today, let me talk something about the floating stop loss.

As we all know, there are many ways to set the initial stop loss, such as a fixed-value stop loss, a fixed-ratio stop loss, a key-price stop loss, a pattern stop loss, and so on. However, the stop loss is not only the initial stop loss; there are many details to it. For example, after the profit exceeds the cost, should the stop loss be moved to the position's opening price? Under what circumstances should the stop loss be turned into a take profit?

To understand how the floating stop loss changes gradually as the market changes, let us first clarify the difference between the initial part of the stop loss and the subsequent floating part:

The initial stop loss mainly refers to the planned stop loss position when intervening in the market, and its setting is objective. According to the morphological trend, key price or fixed value, arrange the exit path when the market does not develop as expected.


The subsequent floating part is mainly to solve a series of details such as how to move the stop loss after the market moves to the expected direction, when to move the stop loss, how to turn the stop loss into the take profit position.

What kind of dilemma can be avoided by solving the above floating part problem?

Presumably many friends have had this experience:

After entering the market, the price breaks out, then steps back to the cost price, and finally hits the stop loss; or you quickly move the stop loss to the cost price after entering, the price returns to the cost price and forces you to break even, and then the market breaks out again and you miss the opportunity. These embarrassing situations occur because the floating part of the stop loss has not been handled well.

There are two aspects to the setting of the floating part of the stop loss that require special attention:

First, after entering the market, you should not move the stop loss to the cost price after only a few points of profit. It is easy to be swept back and forth by price fluctuations: you respect the market, but the market does not respect you, and you are always driven out in the last darkness before the dawn. To avoid turning a profit into a loss, the trailing stop loss should be moved in a timely manner, but "timely" must also be "appropriate". The stop loss can be moved to the cost price after the market has run for a while, or when it turns back on a lower time frame.

Second, the trailing stop loss is more subjective than the initial stop loss: when to move it and how far to move it is a judgment of subjective experience. From the perspective of preventing profit give-back, setting a trailing stop loss is itself no problem, but be sure to pay attention to the timing and the size of each move. With some floating profit, the stop loss can be moved to the cost price as soon as the price breaks through the most recent support/resistance level. Only in this way can we effectively avoid being easily swept out and becoming a "bystander".
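As a minimal sketch of these two rules for a long position (an illustration only, not code from this article; the 20-bar lookback and the bar format are my own assumptions), the following function moves the stop to breakeven only after the price breaks the most recent resistance, and then trails it under the rolling low:

Code:
# trailing_stop_sketch.py -- illustrative only
def update_stop(entry_price, initial_stop, bars, lookback=20):
    """bars: list of dicts with 'high', 'low', 'close'. Returns (stop, exit_bar_index)."""
    stop = initial_stop
    for i, bar in enumerate(bars):
        if i < lookback:
            continue
        window = bars[i - lookback:i]
        recent_high = max(b['high'] for b in window)   # most recent resistance
        recent_low = min(b['low'] for b in window)
        if stop < entry_price and bar['close'] > recent_high:
            stop = entry_price                         # breakeven only after a real breakout
        elif stop >= entry_price:
            stop = max(stop, recent_low)               # then trail it under the rolling low
        if bar['low'] <= stop:
            return stop, i                             # stop hit on this bar
    return stop, None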


In summary, both aspects are telling us:

Floating stop losses should be both "timely" and "appropriate". Timely means keeping profits and avoiding turning a profit into a loss; appropriate means avoiding being easily stopped out by the market or eliminated before the next move is launched.

So what is "appropriate"? How do we define this concept accurately? To be honest, this is a matter where the benevolent see benevolence and the wise see wisdom, but the fundamental issue is the dynamic balance between reward-to-risk (odds) and win rate. If the hourly K line is used as the basis for entering the market and the daily K line as the basis for exiting, then the odds, that is, the profit-loss ratio, are guaranteed and the win rate is relatively high. However, if the 5-minute K line is used as the basis for entry and the daily K line as the basis for exit, the odds are even higher but the win rate is greatly reduced. Therefore, controlling what is "appropriate" mostly depends on each person's own perception of win rate and odds.

Only by finding what is "timely" and "appropriate" in your stop loss can you say that you have a clear and effective trading system. What remains after this is basic fund management and mindset control. After solving these problems, you can improve your execution ability. As long as the price does not touch the stop loss, let the position run.

By simplifying the trading and removing the complicated and non-positive subjective judgments, the trading can be integrated into your own behavioral habits, and the objective signal trading method becomes as natural and skilled as eating and sleeping.

Having said that, I would like to solemnly remind you that you must strictly abide by the trading discipline and strictly enforce the stop loss.


A nation lacking basic ethical standards and codes of conduct must be a nation without cohesion or fighting power, a heap of loose sand; a company with little discipline must be a company with poor internal management and unsustainable profits; a practitioner who cannot abide by even the basic "precepts" will certainly never experience the joy of "concentration" and "wisdom"; and a trader who ignores "trading discipline" and cannot abide by it cannot keep making money in the market!

“Trading discipline” ensures that we do not take on extremely risky trades; “trading discipline” keeps us from indulging in unfavorable trades we cannot extricate ourselves from; “trading discipline” lets our trading plans actually be implemented, truly uniting knowing and doing; “trading discipline” lets us trade with a more peaceful state of mind, and therefore better trading psychology, better judgment and better market feel... “trading discipline” is the basis of a trader’s survival in the speculative market and the guarantee of sustained, stable profits.

As traders who want to make a profit, we must let "trading discipline" flow in our blood, sink deep into our bone marrow, and become part of our lives. In this way, in the face of the market's tides and shifting clouds, we can watch and smile without fear.

article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++)
13  Economy / Trading Discussion / The evolution of "moving average line" operation on: February 22, 2019, 04:24:44 AM
(pics won't show on this post, for the full version, please check it here: https://www.fmz.com/bbs-topic/2806)

The double moving average strategy establishes an m-day and an n-day moving average line; these two lines inevitably intersect as the price moves. If m>n, the n-day moving average crossing above the m-day moving average is the buying point, and vice versa. The strategy is based on the intersections of moving averages of different lengths, catching the moments when the trading pair turns strong or weak and trading on them: the short-period moving average crossing above the long-period one is called the "buying point", and vice versa. Now we can build a simple strategy: buy on the up cross, sell on the down cross.
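As a minimal pandas sketch of this same cross logic (independent of the My-language code used below; 'close' is assumed to be a pandas Series of closing prices, and the 5/10 periods are just defaults):

Code:
# double_ma_sketch.py -- illustrative only
import pandas as pd

def double_ma_signals(close, short_window=5, long_window=10):
    """Return +1 on the bar where the short MA crosses above the long MA (buy)
    and -1 on the bar where it crosses below (sell), 0 otherwise."""
    ma_s = close.rolling(short_window).mean()
    ma_l = close.rolling(long_window).mean()
    above = (ma_s > ma_l).astype(int)
    return above.diff().fillna(0)      # +1 = up cross, -1 = down cross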

Now we will use the 15-minute K line of the Chinese commodity futures rebar index as the data source for backtesting. Let's take a look at the power of the moving average.

Single moving average strategy
A single moving average can also form a strategy. In fact, it is a variant of the double moving average: the current price is treated as one of the moving averages.

Code:
MA5:MA(C,5);
CROSS(C,MA5),BK;
CROSSDOWN(C,MA5),SP;
AUTOFILTER;
The backtest performance of this simple one-open, one-close model is shown in the figure. Although it seems not bad, once slippage and commission fees are taken into consideration, the result becomes terrible.

1.Simple double moving average line strategy

Code:
MA5:=MA(CLOSE,5);
MA10:=MA(CLOSE,10);
CROSS(MA5,MA10),BK;
CROSSDOWN(MA5,MA10),BP;
CROSS(MA10,MA5),SK;
CROSSDOWN(MA10,MA5),SP;
AUTOFILTER;
With such a simple strategy and no optimization, the results are not ideal; the profits are as follows:

2.Small improvement of the double moving average

Code:
MA5:=MA(CLOSE,5);
MA10:=MA(CLOSE,10);
CROSS(MA5,MA10)&&MA10>REF(MA10,1)&&REF(MA10,1)>REF(MA10,2)&&MA5>REF(MA5,1)&&REF(MA5,1)>REF(MA5,2),BK;
CROSSDOWN(MA5,MA10),BP;
CROSS(MA10,MA5)&&MA10<REF(MA10,1)&&REF(MA10,1)<REF(MA10,2)&&MA5<REF(MA5,1)&&REF(MA5,1)<REF(MA5,2),SK;
CROSSDOWN(MA10,MA5),SP;
AUTOFILTER;
Compared with the original strategy, a confirmation condition is added. For example, to open a long position the strategy requires that both MA10 and MA5 have been trending upward for the past two periods, which filters out some recurring short-period signals and improves the win rate.
The final backtest results performed well:

3.Moving average line difference strategy

Code:
MA1:=EMA(C,33)-EMA(C,60);//Calculate the average difference between the 33-cycle and 60-cycle exponentials as MA1
MA2:=EMA(MA1,9);//Calculate the average of the 9-cycle MA1 index
MA3:=MA1-MA2;//Calculate the difference between MA1 and MA2 as MA3
MA4:=(MA(MA3,3)*3-MA3)/2;//MA4 = (3 times the 3-period average of MA3, minus MA3, divided by 2), i.e. the average of the previous two values of MA3
MA3>MA4&&C>=REF(C,1),BPK;//When MA3 is greater than MA4 and the closing price is not less than the closing price of the previous K-line, close position and open long position
MA3<MA4&&C<=REF(C,1),SPK;//When MA3 is smaller than MA4 and the closing price is not greater than the closing price of the previous K line, close position and open short position.
AUTOFILTER;
What happens when we take the difference between the long-period and short-period moving averages and keep smoothing it? Strategy research relies on this kind of constant experimentation. MA4 is actually the average of the previous two periods of MA3.
When the current value of MA3 is greater than the average of its previous two periods, buy long. A filter condition is added here: the current price must not be lower than the previous K-line's closing price, which improves the win rate; you can try it yourself.
Removing this condition has little effect. The specific backtest results are as follows:

4.Three moving average line strategy

With the double moving average line, we will naturally think of the results of the three moving averages, and the three moving averages have more filtering conditions.

Code:
MA1:MA(C,10);
MA2:MA(C,30);
MA3:MA(C,90);
MA1>MA2&&MA2>MA3, BPK;
MA1<MA2&&MA2<MA3, SPK;
AUTOFILTER;
The above is the simplest three-moving-average strategy source code, with short-period, medium-period and long-period moving averages. A long position is opened when short-period > medium-period and medium-period > long-period. The idea is still that of the double moving average. The backtest results are as follows:

Through the five strategies introduced above, we can see how the moving average strategy evolves. The single moving average strategy triggers repeatedly too easily, so filtering conditions must be added. Different conditions produce different strategies, but the nature of the moving average strategy does not change: the short-period average represents short-term trends, the long-period average represents long-term trends, and a crossing represents a breakthrough in the trend.

With these strategies as examples, readers can easily come up with their own moving average improvements.

(pics won't show on this post, for the full version, please check it here: https://www.fmz.com/bbs-topic/2806)
14  Economy / Trading Discussion / Quantitative Dissonance in Cryptocurrency Trading on: February 21, 2019, 02:54:03 AM
https://medium.com/altcoin-magazine/quantitative-dissonance-in-cryptocurrency-trading-d30aebd437c8


There is no more distinctive smell that takes you back to an era, a time and a space, than the scent of well-worn linoleum and the nylon, polypropylene blend that passes off as carpeting.

And every time I return to New York City, I find my olfactory senses assaulted by this time travel potpourri as I walk up the short flight of steps leading to an old brownstone off 8th Avenue where a well-known New York psychic resides.

This, I assure you, is not my idea of a good time.

Yet I make this pilgrimage every time I’m back in the city, not so much for my own benefit, but more for the satisfaction of one of my closest friends who swears by psychics and insists I make this pilgrimage each time I return to the city.

As someone who works with quantitative models and algorithms, I will be the first to admit that even entertaining the concept of a psychic flies against the logic which resides in my cellular DNA.

But I can understand the allure. As human beings, we are always on the search for “guidance” or an “oracle” to make sense of the randomness.


William Shakespeare’s Hamlet articulated it best,

“There’s a divinity that shapes our ends, rough hew them how he will.”
But what I’ve grown to realize over the decades is that whether you’re consulting goat entrails or crystal (meth) balls, tarot cards or tea leaves, you’ll find whatever it is you’re looking for.

You’ll See What You Want to See
Observation imbues a factor with greater significance than would objectively exist for that factor were the observer not observing it.

In plain English, what I mean is that, you notice what you want to notice.


Suppose, for instance, that you wish to purchase a brand new, shiny 2019 Mercedes-Benz S-Class. Picture the vehicle in your mind right now and be as specific in your visualization as possible.

The next thing you realize is that you see the car everywhere on the streets — assuming you don’t live in Flint, Michigan, this isn’t hard to imagine.

But it’s not because there are suddenly more S-Classes on the streets, it’s just that you notice them more — which is a form of confirmation bias.

One of the key dangers which seasoned traders guard against is confirmation bias. Because charts, no matter how short or long they may be, can be used to justify an almost infinite number of trading decisions with little correlation to reality.

Data is historical, observation on the other hand, is deliberate.

Which is why pure quantitative trading strategies have done so poorly of late.

Quantitative Dissonance
There is a term of reference among quantitative trading circles which refers to retail investors as “dumb money.” I am of the view that this is an unfortunate phraseology, because making money does not necessarily make one smart and losing money does not make one dumb; oftentimes randomness and other factors, as Nassim Nicholas Taleb so eruditely expounds, play an outsize role in outcomes.

And to label one “dumb” or “smart” simply on the basis of the dollars in one’s bank account I believe, credits or prejudices the individual beyond their fair level of assessment or merit.


Trading is and will always be a zero-sum game. A winning trade means someone has lost and a losing trade means someone else has won.

Ceteris paribus the market finds its equilibrium.

But that equilibrium has far more to do with psychology than with mathematical models, or the “fat tails” of reality as opposed to the bell curve of idealized states.

During the deregulation of the Reagan administration, trading became hugely popular with the rise of improved computer modelling and computing power, allowing retail investors to use untested methods such as chart patterns — which professional traders took advantage of, scalping the retail investors who were looking for answers in tea leaves.

But after the dotcom crash at the turn of the century, retail investors trading on flimsy chart and candlestick patterns left the markets in droves, leaving behind the professional traders and those with access to “proprietary information networks.”

The pond suddenly emptied of prey and left only sharks behind.

A Pond Filled with Sharks Supports Very Few Sharks
So it should come as no surprise that returns for quantitative trading outfits over the last two decades have lagged the general market.

Redemptions in the past year alone reached record levels.

There have of course been notable exceptions, chief among which has been Bridgewater’s Pure Alpha Fund, which despite the machinations of the market in 2018, managed to return an extremely commendable 14.6% — in a year when most hedge funds finished under water.

But Bridgewater isn’t a pure quantitative fund.


Because at its core, as any freshman student of finance will tell you, any form of modelling requires assumptions to be made and the real world does not conform to these assumptions.

The same factors which saw to Long Term Capital Management’s demise have not changed — that small detail known as reality.

Models assume constants and pureplay algorithms work off those constants — but life is anything but constant and neither are markets.

And because there are no magic algorithms, a purely quantitative approach to trading will always have limitations in excess of a fundamental-quantitative blended approach as found in funds like Bridgewater’s.

A More Rigorous Decision Making Matrix
Bridgewater employs a strategy which encourages independent thinking, to go against the grain, documents the thought process and then tests the approach using rigorous computer models that crunch the data to improve decision making.

And these are the precise methods which work not just in the financial markets, but in the cryptocurrency space as well.


Cryptocurrency markets are challenging in a variety of ways, but because there is limited data, algorithmic trading only works within certain specific and specified time frames. Which means that attention to detail, an understanding of the underlying fundamentals driving blockchain protocol development and trading psychology has the ability to yield outsize and more importantly, consistent returns, than a similar approach would in traditional financial markets.

For starters, copy trading is far more challenging in the cryptocurrency markets than in the traditional markets, because the sophisticated tracking tools needed to perform such analysis have yet to be deployed in cryptocurrency markets — something which in turn preserves access to alpha (where available).

And even if such tracking tools were to be deployed, there is insufficient empirical data to suggest that they would yield the same or similar results, especially given that market liquidity in cryptocurrency markets is (for now) somewhat limited — the trader may literally be the market.

But perhaps, if nothing else, what Bridgewater’s stellar results and the cryptocurrency markets have demonstrated is that relying on quantitative strategies alone is insufficient to translate into low-beta returns (returns with a low correlation to the overall market).

Instead, given the complexity of markets, a far more thorough, disciplined and managed approach is required. One which mends the intellectual curiosity of a critically analytical process with the rigorousness of a computationally robust and tested methodology.

The answers aren’t in the tea leaves, they’re in asking better questions.

https://medium.com/altcoin-magazine/quantitative-dissonance-in-cryptocurrency-trading-d30aebd437c8
15  Economy / Trading Discussion / Introduction to Box Theory on: February 20, 2019, 08:52:56 AM
article originally from FMZ.COM

Box Theory treats the ups and downs of the stock price as a series of rectangular ranges, like boxes. In other words, it divides a rising or falling market into a number of small segments, and then studies the highs and lows of these small segments.


In a rising market, after the stock price breaks through to a new high, the public fears the price is too high, so it is very likely to fall back and then rise again, forming a box between the new high and the low point of the pullback;

In a falling market, every time the stock price falls to a new low, the strong rebound mentality makes a bounce very likely before the price returns to the original trend. A box is formed between the high point of the rebound and the new low, and then the price fluctuations inside the box are used to speculate on future price movements.

As can be clearly seen from its basic characteristics, box theory is an extension of the concept of the resistance line. When the stock price rises to a certain level it encounters resistance, and when it falls to a certain level it encounters support, so the price naturally rises and falls between certain levels. This floating produces many box shapes.

If the stock price moves in a box-shaped trend, it naturally has a high price and a low price. Whenever the price reaches the high, selling pressure is heavier and the stock should be sold; when the price returns to the low and support is strong, it is a buying opportunity. This short-term operation can be maintained until the price breaks above the upper limit or below the lower limit of the box, and then the operating strategy is changed.
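The idea can be sketched in a few lines of pandas (an illustration only, not part of the original article; the 30-bar window and 2% band are arbitrary assumptions):

Code:
# box_sketch.py -- illustrative only
import pandas as pd

def box_signals(close, window=30, band=0.02):
    """close: pd.Series of prices. The rolling box is the high/low of the past
    `window` bars; sell near the top of the box, buy near the bottom, and treat
    a close beyond either edge as a breakout (switch to trend-following)."""
    box_high = close.rolling(window).max().shift(1)
    box_low = close.rolling(window).min().shift(1)
    out = pd.DataFrame({'close': close, 'box_high': box_high, 'box_low': box_low})
    out['sell_zone'] = close >= box_high * (1 - band)
    out['buy_zone'] = close <= box_low * (1 + band)
    out['breakout_up'] = close > box_high
    out['breakout_down'] = close < box_low
    return out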


When the stock price breaks through the upper limit of the box, it indicates that the resistance has been overcome and the price continues to rise. Once it falls back, the former resistance level naturally forms support, causing the price to rise again and another, higher box to be established.

Therefore, when the stock price breaks through the resistance line, it naturally forms a buying point. At this time, the profit is greater and the risk is lower. On the contrary, when the stock price trend breaks through the box-shaped lower limit, it indicates that the support has failed, and the stock price continues to fall. Once it rises, the support in the past naturally forms resistance, causing the stock price to fall back and another falling box to form.

When the stock price falls below the support and then rebounds, it is a selling point, not a buying opportunity; otherwise the chance of loss is large and the risk increases.

The trend of a single day can also be expressed in box shapes. Because of intraday fluctuations, there are often numerous small wave-like undulations within just a few hours of trading. These undulations are sometimes regular and sometimes not; here let us imagine that they are all regular fluctuations, and let me introduce the rising-market and falling-market cases.

Some short-term speculators record the bidding process of individual stocks for 3 hours every day, and sort them out. They will find two completely different trends.

According to box theory, the practice is to observe without taking action during the first box. In the second box, judge whether the market is rising, falling, or consolidating, prepare to act, and wait for the trend to break out of the second box: that is clearly the best time to buy or sell.


To use the box theory, you need to focus on the following points:

1. First determine the stock price trend, and then, after identifying a rising or falling market, determine the highs and lows of each small box.

2. From the high and low turning points of each segment, find the timing of buying or selling and the appropriate price; the calculation can be derived from the upper and lower limits of the previous box as the price rises and falls.

3. In a rising market go long; do not fight it by selling short. Never sell short in a rising market; this is the basic truth of the trend.

4. According to the operator's short-term trading experience and mental agility, decide in which time frame to apply box theory. It is indisputable that day-trading profits are low and the risk is high, so it is better to apply box theory on a longer time frame.

5. As mentioned earlier, it is best to observe one or two boxes before deciding whether to buy or sell; unless you are a very experienced trader, do not easily act in the first box.

6. The shape of the box in the current session or the short term is highly susceptible to sudden factors that cause irregular changes, so stay vigilant.

7. When operating with box theory, it is best to take the general trend as the premise and focus the operation on each segment of the market; do not jump in lightly.

8. Hot stocks are very active, the battle between longs and shorts is fierce, and prices rise and fall quickly, so box theory cannot adapt to them. Unpopular stocks with too narrow a range are not suitable either. Only steady stocks, which rise and fall in a stepwise manner, are most suitable for box theory operation.

article originally from FMZ.COM
16  Economy / Trading Discussion / Crawling down Binance's announcement of delisting currencies on: February 20, 2019, 03:38:08 AM
article originally from FMZ.COM

Strategy purpose:

On February 15, Binance announced the delisting of CLOAK, MOD, SALT, SUB and WINGS. After the announcement, the currencies involved immediately began to fall, generally down 15% within an hour; as more and more users learned the news, the decline continued with no rebound, and they have fallen by half so far. If you can sell the coins you hold immediately after the announcement, you can avoid a lot of losses.

This strategy runs on the FMZ quantitative trading platform (formerly BotVS).

Idea:

Crawl the Binance announcement page and observe the two most recent delisting announcements. Their format is "Binance will delist CLOAK, MOD, SALT, SUB, WINGS" and "Binance will delist BCN, CHAT, ICN, TRIG".

The strategy uses "will delist" as the keyword to detect newly released announcements; of course, Binance may change its announcement format, and you can improve on this strategy accordingly. Since the crawler task is simple, it is written in plain JavaScript. After the delisted currencies are crawled, the account information is checked; if a delisted currency is held, it is sold at a lower price, and any unfilled orders are cancelled first, repeating until all remaining coins are completely sold.

Crawling code:

Code:
var title = ''     // title of the most recent delisting announcement already processed
var downList = []  // currencies to be delisted
var html = HttpQuery('https://support.binance.com/hc/zh-cn/sections/115000202591-%E6%9C%80%E6%96%B0%E5%85%AC%E5%91%8A')//Announcement page
html = html.slice(html.indexOf('article-list'),html.indexOf('pagination')) // Article list section
if(html.indexOf('will delist')>0){
    var newTitle = html.slice(html.indexOf('will delist'),html.indexOf('</a>')) // crawl only the newest delisting announcement
    if(newTitle != title){
        title = newTitle
        downList = title.slice('will delist'.length).trim().split('、') // the coin symbols follow the keyword
        Log('New delisting announcement detected:', title, '@')//the trailing '@' pushes the message to WeChat
    }
}

Revoking order code:

Code:
function cancellOrder(){
    var openOrders = exchange.IO('api', 'GET', '/api/v3/openOrders')//Get all unexecuted orders
    for (var i=0; i<openOrders.length; i++){
        var order = openOrders[i];
        for (var j=0;j<downList.length;j++){
            if(order.symbol.startsWith(downList[j])){
                var currency = downList[j] + '_' + order.symbol.slice(downList[j].length);
                Log('There is a delist currency order exist, revoked', currency)
                exchange.IO("currency", currency)//To revoke an order, the trading pair information is needed, so first switch to that trading pair.
                exchange.CancelOrder(order.orderId)
            }
        }
    }
}

Check account code:

Code:
function checkAccount(){
    var done = false
    while(!done){
        account = _C(exchange.GetAccount)
        done = true
        for (var i=0; i<account.Info.balances.length; i++){
            if(downList.indexOf(account.Info.balances[i].asset)>-1 && parseFloat(account.Info.balances[i].free)>pairInfo[account.Info.balances[i].asset+'BTC'].minQty){
                Log('delist currency will be emptied', account.Info.balances[i].asset)
                sellAll(account.Info.balances[i].asset, parseFloat(account.Info.balances[i].free))
                done = false
            }
        }
        Sleep(1000)
    }
    Log('Sale completed')
}

Placing order code:

Code:
var exchangeInfo = JSON.parse(HttpQuery('https://api.binance.com/api/v1/exchangeInfo'))
var pairInfo = {}  //Trading pair information: price precision, minimum trading volume and other details needed when placing orders
if(exchangeInfo){
    for (var i=0; i<exchangeInfo.symbols.length; i++){
        var info = exchangeInfo.symbols[i];
        pairInfo[info.symbol] = {minQty:parseFloat(info.filters[2].minQty),tickerSize:parseFloat(info.filters[0].tickSize),
            stepSize:parseFloat(info.filters[2].stepSize), minNotional:parseFloat(info.filters[3].minNotional)}
    }
}else{
    Log('Failed to get transaction information')
}
function sellAll(coin, free){
    var symbol = coin + 'BTC'
    exchange.IO("currency", coin+'_BTC') //switching trading pair
    var ticker = _C(exchange.GetTicker)
    var sellPrice = _N(ticker.Buy*0.7, parseInt((Math.log10(1.1/pairInfo[symbol].tickerSize))))
    var sellAmount = _N(free, parseInt((Math.log10(1.1/pairInfo[symbol].stepSize))))
    if (sellAmount > pairInfo[symbol].minQty && sellPrice*sellAmount > pairInfo[symbol].minNotional){
        exchange.Sell(sellPrice, sellAmount, symbol)
    }
}

To sum up:

The above code is only for demonstration; the complete code can be found at FMZ.COM. The announcement page can be crawled once a minute, which leaves enough time to sell before ordinary users do.

There may still be some problems, such as the crawler being blocked, the announcement format changing, and so on. If your coins are not held on Binance, you can also adapt this strategy to other exchanges; after all, a delisted currency will be affected on all platforms.

article originally from FMZ.COM
17  Economy / Trading Discussion / Ten classic model ideas of programmatic trading strategies on: February 19, 2019, 03:09:31 AM
article originally from FMZ.COM

Ten classic model ideas of programmatic trading strategies

www.fmz.com

1.

Interval breakthrough

Volatility-range breakout trading triggers a breakout trade during the day based on a percentage of yesterday's volatility. If yesterday's volatility is abnormal, the necessary adjustment should be made to keep it reasonable.

Main features: intraday trading strategy; close all positions when the market closes for the day; the interval breakout is based on the relationship between yesterday's amplitude and today's opening price;

yesterday's amplitude = yesterday's highest price - yesterday's lowest price;

upper track = today's opening price + N * yesterday's amplitude;

lower track = today's opening price - N * yesterday's amplitude;

when the price breaks through the upper track, buying long; when the price falls below the lower track, selling short.
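As a pandas sketch of these rules (an illustration only, not taken from the original article; the OHLC column names and N=0.5 are assumptions, and the close is used as a stand-in for the intraday price):

Code:
# interval_breakout_sketch.py -- illustrative only
import pandas as pd

def interval_breakout_bands(bars, n=0.5):
    """bars: daily OHLC DataFrame. Tracks = today's open +/- N * yesterday's amplitude."""
    amplitude = (bars['high'] - bars['low']).shift(1)     # yesterday's high minus low
    out = bars.copy()
    out['upper'] = bars['open'] + n * amplitude
    out['lower'] = bars['open'] - n * amplitude
    out['long_signal'] = bars['close'] > out['upper']     # buy long on a break of the upper track
    out['short_signal'] = bars['close'] < out['lower']    # sell short on a break of the lower track
    return out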

2.

Fiali four price

Yesterday's high, yesterday's low, yesterday's close and today's open together are called the Fiali four prices. They form the main reference system of the breakout trading strategy adopted by the Japanese futures trading champion Fiali. In addition, because Fiali's trading model was a subjective, mental one, he also combined and applied the "resistance line" and the support line in actual trading.

Main features: intraday trading strategy; close all positions when the market closes for the day; the Fiali four prices are yesterday's high, yesterday's low, yesterday's close and today's open;

upper rail = yesterday's high;

lower rail = yesterday's low;

when the price breaks through the upper track, buying long; when the price falls below the lower track, selling short.

3. Sky Garden

Breaking out right when the market opens is the fastest way to enter the market; of course, the probability of error is also the highest. The direction of the first K-line is the standard for judging the likely direction of the day. It is more effective when the market opens with a gap up or a gap down.

Main features: intraday trading strategy; close all positions when the market closes for the day; Sky Garden is used only when the day opens with a gap, that is, when the opening price >= yesterday's close * 1.01 or the opening price <= yesterday's close * 0.99;

upper track = The highest price of the first K line;

the lower rail = the lowest price of the first K line;

when the price breaks through the upper track, buying long; when the price falls below the lower track, selling short.

4.

Sideways breakthrough

Some forms of breakout are easy to quantify, such as fractals, narrow-range breakouts, various K-line combinations, double bottoms and double tops; others are hard to quantify, such as trend lines, rounded tops and bottoms, and classic chart shapes like flags, diamonds and triangles. Trend is followed by consolidation, and consolidation is followed by trend. The sideways breakout trading strategy fully reflects this law of alternating volatility cycles. What we need to do is quantify the definition of consolidation reasonably, such as the span of the period and the magnitude of the fluctuations.

Main features: intraday trading strategy; close all positions when the market closes for the day; a sideways breakout occurs when the highs and lows of the past 30 K lines fluctuate within 0.5% above and below the axis;

upper rail = the highest price of the past 30 K lines;

lower rail = The lowest price of the past 30 K lines;

when the price breaks through the upper track, buying long; when the price falls below the lower track, selling short.

5. Turning trading

Relatively speaking, breakouts based on a fixed number of points are affected by changes in the instrument's price level, while breakouts based on a fixed percentage range are less affected, unless the instrument's volatility changes dramatically.

Main features: intraday trading strategy; close all positions when the market closes for the day; turning trading is based on today's opening price;

upper rail = today's opening price + today's opening price * 0.01;

lower rail = today's opening price - today's opening price * 0.01;

when the price breaks through the upper track, buying long; when the price falls below the lower track, selling short.

6. HANS123

HANS123 is a breakout trading strategy widely popular in the foreign exchange market. It uses a breakout of the high and low of the first N K-lines after the market opens as the criterion for triggering trading signals. Since it enters early in the session, adding filters such as price envelopes, time confirmation and fluctuation range can improve its odds.

main feature:

Intraday trading strategy; close all positions when the market closes for the day; HANS123 is ready to enter 30 minutes after the open;

Upper rail = 30 minutes highest price after opening;

Lower rail = 30 minutes lowest price after opening;

when the price breaks through the upper track, buying long; when the price falls below the lower track, selling short.

7. Daily average ATR breakthrough

We have reason to believe that once volatility of a certain multiple of the ATR has occurred, the day's volatility is more likely to continue developing in that direction. The benchmark for comparison may be the day's opening price, or the newly set highest or lowest price.

The ATR can also be calculated over, for example, the past 10 days.

main feature:

Intraday trading strategy; close all positions when the market closes for the day; the daily average ATR breakout is based on the relationship between today's opening price and the average ATR of the past N trading days;

Upper rail = today's opening price + N trading day average ATR*M;

Lower rail = today's opening price - N trading day average ATR * M;

when the price breaks through the upper track, buying long; when the price falls below the lower track, selling short.
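A pandas sketch of the band construction (an illustration only, not taken from the original article; the OHLC column names and the N/M defaults are assumptions):

Code:
# atr_breakout_sketch.py -- illustrative only
import pandas as pd

def daily_atr_bands(bars, n=10, m=0.5):
    """bars: daily OHLC DataFrame. Tracks = today's open +/- M * N-day average true range."""
    prev_close = bars['close'].shift(1)
    true_range = pd.concat([bars['high'] - bars['low'],
                            (bars['high'] - prev_close).abs(),
                            (bars['low'] - prev_close).abs()], axis=1).max(axis=1)
    atr = true_range.rolling(n).mean().shift(1)           # ATR of the past N completed days
    out = bars.copy()
    out['upper'] = bars['open'] + m * atr
    out['lower'] = bars['open'] - m * atr
    return out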

8. ORB breakthrough

ORB breakout trading was first proposed by the US fund manager Toby in 1988. He measured, from the opening price, the smaller of the distances to the day's highest and lowest prices; once the market moves beyond this range, it is considered a real breakout. In practice, breakouts early in the session and breakouts after narrow fluctuations can be used as effective filtering conditions.

main feature:

Intraday trading strategy; close all positions when the market closes for the day; the ORB breakout is based on the ORB indicator of the past N trading days;

Upper rail = today's opening price + N days ORB*M;

Lower rail = today's opening price - N days ORB*M;

when the price breaks through the upper track, buying long; when the price falls below the lower track, selling short. If the number of past failed breakouts was high, the probability that the next one succeeds is higher.

9. Time-average price breakthrough

The time-sharing average price yellow line is widely used in the intraday average-price charts of various trading software. Therefore, its role as a self-fulfilling reference for trading strategies is particularly prominent.

main feature:

Intraday trading strategy; close all positions when the market closes for the day; the yellow line is the average price on today's time-sharing (intraday) chart;

Upper rail = the day's time-sharing average price yellow line;

Lower rail = the day's time-sharing average price yellow line;

when the price breaks through the upper track, buying long; when the price falls below the lower track, selling short.

10. ATR volatility breakthrough in the day

This focuses on assessing changes in short-term market volatility. A volatility breakout can adapt to the market to a certain extent, and in practice its ability to adapt to different market environments is stronger.

main feature:

Intraday trading strategy; close all positions when the market closes for the day; the intraday ATR breakout is based on the current K-line's opening price and the ATR of the past N periods;

Upper rail = the K line opening price + N period ATR * M;

Lower rail = the K line opening price - N period ATR * M;

when the price breaks through the upper track, buying long; when the price falls below the lower track, selling short.

article originally from FMZ.COM
18  Economy / Trading Discussion / The public are always wrong on: February 16, 2019, 05:54:09 AM
article originally from FMZ.COM

Bernard Baruch was born in 1870 in South Carolina and graduated from the City College of New York. Baruch is a successful example of starting from scratch: a stock trader who was good at seizing opportunities, a flexible investor, a politician familiar with economic development, an investment prodigy and a master of speculation.

Baruch proposed paying attention to three aspects of an investment target: first, it must have real assets; second, it is better for it to have a franchise advantage in its operations, which reduces competition and makes the outlet for its products or services more assured; third, and most important, is the management ability of the investment target.

Baruch cautioned that he would rather invest in a company with no money but good management than touch the stock of a well-funded but poorly managed company. Baruch also paid considerable attention to the control of risk. He believed it is necessary to keep a certain amount of cash in hand; he recommended that investors reassess their investments at intervals to see whether the stock price still meets the original expectations. He also reminded investors to learn to stop losses: mistakes are inevitable, and the only choice after a mistake is to stop the loss in the shortest time. Baruch did not believe in so-called excess returns, and he cautioned against trying to buy at the bottom and sell at the top. He said: "Anyone who claims he can always catch the bottoms and the tops is lying." He also reminded investors to beware of so-called insider information or hearsay, into which investment mistakes are often cast. For this reason, some people criticized Baruch, regarding his seemingly ruthless style as one of the reasons he was called a "master speculator".


"The masses are always wrong" is the first essence of Baruch's investment philosophy. Many of his deep understanding of investment are derived from this basic principle. For example, Baruch advocates a very simple standard to identify when it is the low price that should be bought and the high price of the sale: when people cheer for the stock market, you must sell decisively, regardless of whether it will continue to rise or not; when stocks are cheap enough that no one wants, you should dare to buy, regardless of whether it will fall again. People are often surprised by Baruch's judgment and his ability of grasping the fleeting opportunities.

He believed that any so-called "real situation" in the stock market is conveyed indirectly through people's emotional fluctuations; over any short period, the rise or fall of stock prices is driven not mainly by objective, impersonal economic forces or changes in circumstances, but by how people react to what is happening. He therefore reminded everyone that the basis of judgment is understanding: if you understand all the facts, your judgment is correct; otherwise it is wrong. In their grasp of public psychology, Buffett and Baruch are exactly alike. Isn't it Buffett who always says to hold back when the public is greedy and to be aggressive when the public is afraid? The two masters share many similarities in investment philosophy.

Baruch's approach to investing was more flexible, and he advocated stopping losses firmly. He said that if investors have the discipline to stop losses, they can become rich even if they are right only three or four times out of ten. He wanted investors to have a plan B so they could turn around and leave at any time. Buffett appears more assertive and does not easily change an investment plan once it has been formulated. He said: "If you cannot calmly carry out your plan after the stock price has fallen by half, then you are not suited to stock investment." What Buffett can do is be extremely cautious in choosing stocks. In this sense, Buffett is like a well-trained Tai Chi master, while Baruch is more like a swordsman who goes straight for the throat.


The basic qualities required for investment and speculation:

1. Self-reliance: you must think independently. Avoid emotional decisions and remove all environmental factors that may lead to irrational behavior.

2. Judgment: do not overlook any detail; think it over carefully. Do not let what you want to happen affect your judgment.

3. Courage: do not overestimate the courage you might have when everything turns against you.

4. Agility: be good at discovering every factor that may change the situation and every factor that may sway public opinion.

5. Caution: stay relaxed, otherwise it is hard to be cautious. When the stock market is moving in your favor, you need to be all the more humble. Buying because you think the price has reached its lowest point is not caution; better to wait and see, it is never too late to buy later. Nor is it caution to hold on until the price reaches its highest point; it is usually safer to take profits earlier.

6. Flexibility: weigh all objective facts together with your own subjective views. Completely abandon stubbornness, or "self-righteousness". The idea of earning a fixed amount of money within a fixed period can completely destroy your flexibility. Once you decide, act; do not wait to see what the stock market will do.


The psychological qualities required for investment and speculation:

Almost no one can escape being controlled by their own emotions: people are either too optimistic or too pessimistic. Once you have mastered the objective facts and formed your own opinion, watch the trend. You should know what ought to happen in the market, but do not mistake that for what will happen. The more the public is involved in the stock market, the greater its power. Do not try to fight everyone, and do not stand too far forward. If it is a bull market, of course, do not sell short. However, if a reversal is possible, or if holding the stocks makes you uneasy, you will not be able to hold them for long; and vice versa.

When the stock market panics, do not expect even the best stocks to sell at a reasonable price. Pay close attention to everything that excites or frightens the public. When the stock price climbs, consider comprehensively what could push it higher; when it falls, of course, think about the opposite as well. Do not forget history. Follow the mainstream, but you do not need too many companions.

"Stop the loss and let the profit continue." Overall, the action should be fast. If you can't do this, please reduce your intervention. In addition, please reduce the intervention once you have doubts. After making up your mind, you should act immediately, and you don't have to consider the market reaction. However, when planning, you must consider the market trends from time to time.

Weigh the two together: a full understanding of past conditions and a comprehensive grasp of the current situation. Be psychologically prepared for every setback; excessive action always leads to overreaction.

Unpredictable elements: the factor of "chance" must be taken into account, and you need to be prepared financially, mentally, and physically.

article originally from FMZ.COM
19  Economy / Trading Discussion / Keep proper distance but not losing passion on: February 16, 2019, 02:33:02 AM
A conclusion I have drawn from years of trading:

When you are fully concentrated, you become oblivious to yourself and forget all the messy thoughts; this is a state of being "unintentional". If you have felt this amazing calm, you need to maintain it in your work and life so that you can achieve more. The more we use this state, the better our results, and the more skilled we become at entering it.

I used to hear more senior friends say: "Don't get too close to the stock. The closer you go, the worse off you will be." That is correct. However, the stock market has to be studied: if you do not dig deeply into the research, how can you become wise? So at first glance this seems contradictory. In fact it is a question of skill level: people with deeper skill can stay close to the market without being confused by it.

Learning and actual trading are, from a certain point of view, two different things; they belong to two different stages. When studying, we must think things over seriously and deeply and research them repeatedly. Perseverance matters, as does careful work aimed at a high standard close to perfection, without being sloppy about the details.


In short: you need to be incisive and subtle before you can succeed. However, once your skill is practiced and you enter the market, you move to a higher level, where you become bold and resolute, no longer punctilious about details and focused only on the general trend.

In other words: as a student you inevitably stay close to the market, because you need to look at it, dissect it, and analyze it at very close range. But when you approach the winner's realm, you feel a certain distance from the market; it is no longer as close as it used to be.

Trading goes through the same four layers. In the beginner's phase there are many wrong ideas and techniques, which make us doubt the correct method or even stop wanting to learn, so it is essential to focus on studying at this stage.

The hope is that the garbage knowledge in the brain gets removed. The mind's burden is then lightened, a kind of mental weight loss, which is a very natural state. At that point, trading naturally follows the correct method, without deliberation or contrivance. Having reached the height of "unintentionality", it is undoubtedly the best time to push further. Everything develops normally, trading proceeds naturally, and you are free from emotional shackles. Everything is natural; you simply go with it, achieving the most with the least fuss.

What is described above are the normal steps of learning. However, most people probably never get the chance to follow them in order. For my part, the actual path went as follows:

1. I had just entered the market: I did not know much, did not care much, was lazy, and only looked at the quotes occasionally. (In fact, this is already a bit of the "unintentional" state.)

2. Suddenly one day I felt the opportunity to enter the market had come, so I bought stocks (or sold out, as the case might be). (Because I was "unintentional", it was easier to spot the real, big opportunities to enter.)

3. After tasting a little sweetness, I wanted to learn more (in fact, more short-term trading knowledge), to see whether I could "trade frequently" and make more money. (This was greed; I began to lose the "unintentional" state and entered a world of deliberate contrivance. This was a deluded mind, no longer pure.)

4. When I realized I was heading in the wrong direction, I made up my mind to jump out of the small places, small details, small short-term trades, and small patterns. I wanted to find my original heart again, which was pure and kind.

5. Then I pulled out the weeds and kept the flowers. I returned to the "lazy" state of when I first entered the stock market, but that was only on the surface: this time the correct knowledge was in place.

6. While waiting for the next opportunity, I do not chase it, I am not anxious, and I do not lose heart, because I have forgotten that I am waiting. (Ah, how I envy this kind of relaxed life.)

7. The opportunity arrives... "The god of opportunity will come knocking at my door." If you are ready, you will hear the bell. You do not need to listen every night, every minute, every second. When the opportunity comes, you will know, and you will act naturally.
20  Economy / Trading Discussion / The technical analysis method is a mirror, and history will repeat itself. on: February 16, 2019, 02:03:42 AM
I often hear investors complain that technical analysis methods are unreliable, and some even think they are useless. I believe such complaints arise because these investors cannot correctly understand technical analysis methods and therefore misuse them in practice. Technical analysis is a scientific summary of market experience. After several generations of research, innovation, and development in modern markets, the system of technical analysis methods has become increasingly mature and complete. However, technical analysis methods also have their limitations. No single technical analysis method is a panacea: it may suit only certain specific market environments, while being powerless, or even misleading, in others. Therefore, correctly understanding the characteristics of each technical analysis method, and recognizing the market environment to which it applies, is the key to using technical analysis effectively.


1. Common misunderstandings and misapplications of technical analysis methods. Investors who lack analytical experience often fall into the following traps:

(1) Over-reliance on technical analysis results. Some investors believe that technical analysis methods should be precise analytical tools, so they become superstitious about the predictions drawn from a single method. I once encountered an investor, T, in my work. T is an economics lecturer and very fond of technical analysis. Once, based on his own technical analysis, he sold 50 lots of soybean meal futures at 2,900 yuan/ton. As it turned out, soybean meal futures did not fall but rose, breaking through the key resistance level of 3,000 yuan/ton. I urged him to stop the loss as planned, but he refused and pulled out his charts to explain: "I still insist on being bearish, because the technical analysis supports my short view." In the end, soybean meal futures soared above 3,400 yuan/ton, and he suffered heavy losses.


(2) Using one analytical method as a universal forecasting tool. Some investors think every technical analysis method can be applied to any market environment. For example, whether or not the market is trending, they insist on reading the moving averages, or whether or not the wave count is clear, they remain obsessed with wave theory. Clearly, the moving average method is generally suited to a trending market; if it is used in an oscillating, consolidating market, most of its buy and sell signals are false. Investors who trade on those signals get slapped from both sides, losing money on their buys and on their sells, and this is the reason. Wave theory is recognized by investment masters as one of the best and most valuable technical analysis methods, but it is not omnipotent. In practice we often see that the market's wave shape is sometimes very clear and easy to identify and count; but when the market is too strong, wave extensions make the count confusing, and when the market is in a trendless consolidation, the variety and complexity of corrective-wave structures make the count very difficult or easy to get wrong.


(3) Ignoring the market environment and misusing technical analysis methods. Some investors do not consider the market environment at all and habitually, one-sidedly apply the methods they are familiar with, such as moving averages and the KD indicator, while neglecting to study how other analytical methods should be applied. Some are also accustomed to using a single method in isolation, forgetting Dow's teaching that different analytical methods should mutually confirm each other. The misunderstandings and misapplications above greatly reduce the effectiveness of technical analysis.

2. Correct understanding is the key to applying technical analysis

Practice has proved that the key to applying technical analysis methods is to understand them correctly. I believe technical analysis should be understood from the following aspects:

(1) The technical analysis method is a mirror, and history will repeat itself, but it is by no means a simple repetition.

The emergence of technical analysis methods allows people to use the market's historical information to make inferences about future market changes. The pioneers of technical analysis believed that "history will repeat itself", but this repetition is by no means a simple copy. For example, the Shanghai Composite Index has gone through a seven-year bull market, showing a complete five-wave advance. Impulse waves 1, 3, and 5 each have a five-sub-wave structure, but their internal structure, duration, and length all differ.

(2) Technical analysis mostly relies on statistical methods, so its results are probabilistic events, not certainties.

This understanding is crucial. It allows you to treat the result of each technical analysis objectively and dialectically, without making any of the mistakes mentioned above. For example, after the close on a certain day, analysts A and B both analyzed the next day's soybean futures trend based on the same information from the Dalian soybean futures market. A predicted that prices would rise; B predicted that prices would fall. Who is right can only be determined by the next day's price action, and no one can decide beforehand. This example shows that the result of market analysis is only a prediction: it may or may not be correct. The prediction should be used as the basis for formulating an investment plan, but the plan must also be prepared to cope with the prediction being wrong. The stop-loss item in the investment plan is the necessary measure for when the analysis turns out to be wrong.

(3) Each technical analysis method has its own advantages and disadvantages; each is applicable to a certain market environment and not applicable to all markets.


For example, trend indicators (the moving average method and the like) are suited to a trending market; in a consolidating market their value is, in general, much reduced. Swing indicators (RSI, stochastics, and the like) are suited to consolidation, and their value is reduced in a trending market. Technical analysis methods are therefore not good or bad in themselves; the only difference lies in whether they fit the specific market. Do not lightly abandon a method, and do not apply a method indiscriminately. Investors must master the application characteristics of each technical analysis method and select different methods for different market environments.
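As one concrete way to "select different methods for different market environments" (my own sketch, not something from the article), a simple trendiness measure such as Kaufman's efficiency ratio can decide which class of indicator to consult; the 20-bar window and 0.3 threshold below are arbitrary illustrations:

```python
def efficiency_ratio(closes, n=20):
    """Kaufman efficiency ratio: net move divided by total path length over n bars."""
    window = closes[-(n + 1):]
    net = abs(window[-1] - window[0])
    path = sum(abs(window[i] - window[i - 1]) for i in range(1, len(window)))
    return net / path if path else 0.0

def preferred_indicator_class(closes, n=20, threshold=0.3):
    """Crude regime filter: trust trend tools in trends, swing indicators in ranges."""
    if efficiency_ratio(closes, n) > threshold:
        return 'trend indicators (e.g. moving averages)'
    return 'swing indicators (e.g. RSI, KD)'
```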

3. A few points on using technical analysis methods

How to use technical analysis methods? I propose the following points:

(1) Each technical analysis method must be carefully studied and deeply understood. While mastering the basics of how to apply a method, focus on understanding its strengths, its weaknesses, and the market environment to which it applies.

In market analysis, choosing an analytical method is like a doctor choosing treatments and medications: different diseases call for different treatments and different prescriptions. Although one prescription cannot cure every disease, it can be effective for certain diseases. Likewise, a technical analysis method cannot effectively forecast every market, but its forecasts for a certain type of market can be very effective. Therefore, we must use each method's strengths, avoid its weaknesses, and beware of misuse.


(2) Pay attention to the mutual verification between different methods.

Charles Dow, the originator of technical analysis, emphasized in the exposition of his theory that different methods should be cross-referenced and confirm one another. This is an important rule of technical analysis. The wave-theory master Robert Prechter, champion of the options division of the US Trading Championship, is known for his skill at catching market bottoms; reportedly he combines cycle analysis, wave analysis, and contrary-opinion theory to judge which stage the market is in. I have come to understand this deeply in my own analysis and practice: when using technical analysis to analyze the market trend, it is also necessary to seek confirmation from the fundamentals. The market is complex and ever-changing, and relying on a single, isolated analysis is bound to lead to errors.
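One simple way to put "mutual verification" into code (a hedged sketch of my own, not Dow's or Prechter's actual procedure) is to require two independent tools to agree before acting, for example a moving-average trend reading confirmed by RSI; the 5/20 periods and the 50 level are illustrative only:

```python
def sma(values, n):
    """Simple moving average of the last n values."""
    return sum(values[-n:]) / n

def rsi(closes, n=14):
    """Plain (non-smoothed) RSI computed over the last n price changes."""
    gains = losses = 0.0
    for i in range(len(closes) - n, len(closes)):
        change = closes[i] - closes[i - 1]
        if change >= 0:
            gains += change
        else:
            losses -= change
    if losses == 0:
        return 100.0
    return 100 - 100 / (1 + gains / losses)

def verified_long_signal(closes, fast=5, slow=20, rsi_n=14):
    """Go long only when the trend reading and the momentum reading agree."""
    trend_up = sma(closes, fast) > sma(closes, slow)   # trend tool
    momentum_up = rsi(closes, rsi_n) > 50              # oscillator confirmation
    return trend_up and momentum_up
```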

(3) Be mentally prepared both to make mistakes and to correct them.

Practice has proved that no matter how rigorous the analysis, the possibility of error remains. A forecast can only give the probability of an event, not its certainty. The conclusion of any analysis must be confirmed by the market. Do not be superstitious about your own analysis; when the market has proved you wrong, correct the mistake resolutely and decisively. "The market is always right; the mistake is always your own." This motto is essential for a mature market analyst and must be kept in mind when applying technical analysis.