TobbeLino
Newbie
Offline
Activity: 42
Merit: 0
|
|
May 11, 2013, 09:48:50 AM |
|
I've taken a quick look at the code, and a few things come to mind. First, that 144 is pretty arbitrary: since the EMA is a recursive calculation, its accuracy is very dependent on the number of samples. After a certain point it becomes less relevant, since the further you go back, the less weight is given to the data, but 200 x whatever timeframe you're trading is a generally accepted sample size. I remember someone complaining that the initial sample size was not accurate enough, so Piotr increased it to 144, which should be an OK compromise, but it won't hurt to have more. So you could either keep making these 144 requests of 1000 trades and then divide that by your trading sample size, which would give you about 10 times more candles to base your calculations on, or you could make that 144 dynamic by making requests with a timestamp equal to (number of samples required for accuracy x trading sample size) instead of this fixed 144. In other words, fetch only as much as you need according to your sample size, instead of making an arbitrary number of requests. This fixed value made sense when the original bot only dealt with H1 trading and needed 144h of backdata, but not so much now that you allow users to define the trading candle size.

Also, this history and its subsequently updated candles should probably be cached in localStorage until the next launch. Since for a 60m candle you're fetching about 5 days' worth of history, chances are that some of that data will still be valid on the next bot launch, and that would also prevent refetching everything upon a simple change of parameters.

Yes, 144 is from the original bot, and I have not dared to change or question it. However, I think the number of samples should be the same no matter what the sample interval is (the bot fetches 144 samples, not 144 hours of samples). The EMA values are calculated from the samples, independent of the interval, so the accuracy should be exactly the same as in the original bot for any sample interval. Caching samples would probably be a good idea! Calling the MtGox API repeatedly in a short time could maybe get you blocked by their DDoS protection...
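The caching idea could be sketched roughly like this - a minimal sketch, assuming the samples are simple {time, price} objects and that the storage backend is passed in (so the extension could hand it window.localStorage); the key name and helper names are made up for illustration:

```javascript
// Hypothetical cache helpers; `store` is any localStorage-like object
// exposing getItem/setItem (e.g. window.localStorage in the extension).
const CACHE_KEY = "emaBotSampleCache";

function saveSamples(store, samples) {
  store.setItem(CACHE_KEY, JSON.stringify(samples));
}

// Return only the cached samples still inside the requested window, so a
// restart (or a parameter change) only refetches what is actually missing.
function loadSamples(store, intervalMs, maxSamples, now) {
  const raw = store.getItem(CACHE_KEY);
  if (!raw) return [];
  const oldest = now - intervalMs * maxSamples;
  return JSON.parse(raw).filter(s => s.time >= oldest);
}
```

On restart the bot would call loadSamples first and issue API requests only for the gap between the newest cached sample and now.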
|
|
|
|
tagada
Member
Offline
Activity: 74
Merit: 10
|
|
May 11, 2013, 10:54:32 AM |
|
Yes, 144 is from the original bot, and I have not dared to change or question it. However, I think the number of samples should be the same no matter what the sample interval is (the bot fetches 144 samples, not 144 hours of samples). The EMA values are calculated from the samples, independent of the interval, so the accuracy should be exactly the same as in the original bot for any sample interval. Caching samples would probably be a good idea! Calling the MtGox API repeatedly in a short time could maybe get you blocked by their DDoS protection...

I see. I completely agree that the history size should be at least 144 x the sample size, but what I meant is that since you're getting 10x as much data with each request when using the API v2, you probably don't need to fetch as many times as the original bot did when using the API v0. In that regard, obtaining 144 samples' worth of data does not necessarily mean having to make 144 requests to the API. In my experience, the MtGox API is more defensive about the time between requests (hammering will get you banned for a while) than about the actual number of requests itself. In the case of the bot, I think the time it takes to parse the JSON and fetch the next request should be enough of a timeout. You could always add a small timeout between requests if I'm wrong, but since it seems to work as it is now, I think you should be okay with an even smaller number of requests for the same amount of data.
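The idea of fetching by duration instead of by a fixed request count could be sketched like this: page through the trade history by timestamp until the needed window is covered. Here fetchTrades is a hypothetical wrapper around the API call (shown synchronous for brevity; the real call would be asynchronous), and the {time} trade shape is an assumption:

```javascript
// Sketch: derive how far back to fetch from the sample count and interval,
// then follow the history using the last trade's timestamp as the next
// `since` value, instead of issuing a fixed 144 requests.
function fetchHistory(fetchTrades, intervalMs, samplesNeeded, now) {
  let since = now - intervalMs * samplesNeeded; // total history duration
  const all = [];
  while (since < now) {
    const chunk = fetchTrades(since); // trades with time >= since, oldest first
    if (chunk.length === 0) break;    // no more trades available
    all.push(...chunk);
    since = chunk[chunk.length - 1].time + 1; // resume after last trade seen
  }
  return all;
}
```

With ~1000 trades per v2 response, the number of round trips then scales with actual trade volume rather than with the sample count.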
|
|
|
|
papamoi
|
|
May 11, 2013, 11:11:31 AM |
|
Hi guys,
is this one giving good performance?
What can we expect from it?
|
|
|
|
TobbeLino
Newbie
Offline
Activity: 42
Merit: 0
|
|
May 11, 2013, 11:15:33 AM |
|
I see. I completely agree that the history size should be at least 144 x the sample size, but what I meant is that since you're getting 10x as much data with each request when using the API v2, you probably don't need to fetch as many times as the original bot did when using the API v0. In that regard, obtaining 144 samples' worth of data does not necessarily mean having to make 144 requests to the API.
In my experience, the MtGox API is more defensive about the time between requests (hammering will get you banned for a while) than about the actual number of requests itself. In the case of the bot, I think the time it takes to parse the JSON and fetch the next request should be enough of a timeout. You could always add a small timeout between requests if I'm wrong, but since it seems to work as it is now, I think you should be okay with an even smaller number of requests for the same amount of data.

But the problem with fetching historical trade data at given points in time with the MtGox API is that the bot only wants ONE trade (the first one) from each chunk of trades. The rest of the data is usually not relevant, since we need samples at the correct points in time. (If the sample interval is very short, there COULD be more than one useful trade in the chunk, but in most cases there are probably more than 1000 trades between each sample time, so only the first trade is useful. So in most cases we have to make one call for each sample.) If there were a way to specify how many trades to fetch with each call, I would set it to 1, but as far as I know that's not possible with any API version... But caching data should cure some of the problem. Then you could at least restart the bot/browser without a new storm of calls to MtGox. But as you said, it's probably OK - while developing, I need to restart the bot all the time, causing 144 calls to be made, and I don't think I've been blocked...
|
|
|
|
tagada
Member
Offline
Activity: 74
Merit: 10
|
|
May 11, 2013, 12:32:55 PM |
|
But the problem with fetching historical trade data at given points in time with the MtGox API is that the bot only wants ONE trade (the first one) from each chunk of trades. The rest of the data is usually not relevant, since we need samples at the correct points in time. (If the sample interval is very short, there COULD be more than one useful trade in the chunk, but in most cases there are probably more than 1000 trades between each sample time, so only the first trade is useful. So in most cases we have to make one call for each sample.) If there were a way to specify how many trades to fetch with each call, I would set it to 1, but as far as I know that's not possible with any API version...
But caching data should cure some of the problem. Then you could at least restart the bot/browser without a new storm of calls to MtGox. But as you said, it's probably OK - while developing, I need to restart the bot all the time, causing 144 calls to be made, and I don't think I've been blocked...

Good point. But if the bot expects each request to be roughly equivalent to one sample and uses only one price from each request (it looks like it's using the opening price from each sample/response), then that seems like a pretty rigid and ineffective way to go. So maybe instead of making n requests and picking the opening price from each, it would be more effective to use all those trades to build your own candles? Take a look at goxtool's slot_fullhistory() and OHLCV() functions here: https://github.com/prof7bit/goxtool/blob/master/goxapi.py - this is pretty much the way it does it, building its own Open-High-Low-Close-Volume candles from all the fetched history. That could give you relative independence from MtGox's response size, since you'd be fetching for a total history duration rather than making a precise number of requests and picking only one price from each. Does that make sense?
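Building candles locally from the raw trades, roughly in the spirit of the goxtool approach, might look like this - a sketch only, assuming trades arrive as {time, price, amount} objects sorted by time (goxtool's actual implementation differs in detail):

```javascript
// Group trades into fixed-interval buckets and track Open-High-Low-Close-
// Volume per bucket; every trade in a response contributes, instead of
// only the first one.
function buildCandles(trades, intervalMs) {
  const candles = new Map();
  for (const t of trades) {
    const bucket = Math.floor(t.time / intervalMs) * intervalMs;
    const c = candles.get(bucket);
    if (!c) {
      candles.set(bucket, { time: bucket, open: t.price, high: t.price,
                            low: t.price, close: t.price, volume: t.amount });
    } else {
      c.high = Math.max(c.high, t.price);
      c.low = Math.min(c.low, t.price);
      c.close = t.price;   // last trade seen in the bucket so far
      c.volume += t.amount;
    }
  }
  return [...candles.values()];
}
```

The EMA could then be fed the close (or open) of each locally built candle, decoupling the sample count from the number of API requests.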
|
|
|
|
TobbeLino
Newbie
Offline
Activity: 42
Merit: 0
|
|
May 11, 2013, 12:38:23 PM |
|
But the problem with fetching historical trade data at given points in time with the MtGox API is that the bot only wants ONE trade (the first one) from each chunk of trades. The rest of the data is usually not relevant, since we need samples at the correct points in time. (If the sample interval is very short, there COULD be more than one useful trade in the chunk, but in most cases there are probably more than 1000 trades between each sample time, so only the first trade is useful. So in most cases we have to make one call for each sample.) If there were a way to specify how many trades to fetch with each call, I would set it to 1, but as far as I know that's not possible with any API version...
But caching data should cure some of the problem. Then you could at least restart the bot/browser without a new storm of calls to MtGox. But as you said, it's probably OK - while developing, I need to restart the bot all the time, causing 144 calls to be made, and I don't think I've been blocked...

Good point. But if the bot expects each request to be roughly equivalent to one sample and uses only one price from each request (it looks like it uses the opening price of each sample), that seems like a pretty rigid and ineffective way to go; it seems to expect each reply to be equivalent to a sample size. So maybe instead of making n requests and picking the opening price from each, it would be more effective to use all those trades to build your own candles? Take a look at goxtool's slot_fullhistory() and OHLCV() functions here: https://github.com/prof7bit/goxtool/blob/master/goxapi.py - this is pretty much the way it does it, building its own Open-High-Low-Close-Volume candles from all the fetched history. That could give you relative independence from MtGox's response size, since you'd be fetching for a total history duration rather than making a precise number of requests and picking only one price from each. Does that make sense?

The APIs the bot currently uses allow it to specify from which time we want the chunk, so the first trade in each reply is always "on time". But we can't specify how big the chunk should be (in our case we only need it to be one trade).
|
|
|
|
tagada
Member
Offline
Activity: 74
Merit: 10
|
|
May 11, 2013, 12:49:41 PM |
|
The APIs the bot currently uses allow it to specify from which time we want the chunk, so the first trade in each reply is always "on time". But we can't specify how big the chunk should be (in our case we only need it to be one trade).

That's my point: since the responses are now larger, do they overlap the following requests? If so, you're better off parsing the response you already have rather than making a now-obsolete request for data you already received. I probably should have been clearer.
|
|
|
|
rikigst
Newbie
Offline
Activity: 46
Merit: 0
|
|
May 11, 2013, 04:24:26 PM |
|
By the way, has anyone been able to profit with the bot in the last couple of days? Not me.
Strong fluctuations like this morning's (from 116 to 118.5 at 11:00 UTC) made the bot insta-buy @118; then it sold BTC after 5 hours of slow price decline (16:00 UTC) @116.4.
My config is 10 min samples; 1 tick; 10/21 EMAs; thresholds 0.23 buy, -0.13 sell.
Anyone?
|
|
|
|
davider
|
|
May 11, 2013, 04:29:11 PM |
|
The current situation is the worst case: the bot needs higher volatility. The idea is to profit in the long run by accepting some losses along the way.
|
|
|
|
StarenseN
Legendary
Offline
Activity: 2478
Merit: 1362
|
|
May 11, 2013, 04:52:19 PM |
|
By the way, has anyone been able to profit with the bot in the last couple of days? Not me.
Strong fluctuations like this morning's (from 116 to 118.5 at 11:00 UTC) made the bot insta-buy @118; then it sold BTC after 5 hours of slow price decline (16:00 UTC) @116.4.
My config is 10 min samples; 1 tick; 10/21 EMAs; thresholds 0.23 buy, -0.13 sell.
Anyone?

I lost a bit as well, so I disabled the bot during this low-volume period.
|
|
|
|
viperzero
Member
Offline
Activity: 301
Merit: 10
|
|
May 11, 2013, 08:25:21 PM |
|
I have a long history of making MetaTrader forex expert advisors. I would suggest adding a trigger based on average trading volumes - for example, comparing the last 50-bar average to the last 5-bar average. If volume is below a certain level, the trigger stops the bot from trading.
Also, using local storage as a previous poster suggested would make all sorts of internal calculations more robust, avoid unnecessarily hammering MtGox and risking a ban, and perhaps help the bot survive better when MtGox lag is heavy.
EDIT: I would also suggest adding a stop-loss; sometimes slow drip-downs combined with an EMA trigger set too high can be very risky.
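The suggested volume trigger could be sketched like this (a sketch only: the bar counts, the 0.5 ratio, and the function names are illustrative assumptions, not anything from the bot):

```javascript
// Average of a numeric array.
function avg(xs) { return xs.reduce((a, b) => a + b, 0) / xs.length; }

// Allow trading only when recent volume holds up against the longer-term
// average; `volumes` is the per-candle volume series, newest last.
function volumeAllowsTrading(volumes, longBars = 50, shortBars = 5, ratio = 0.5) {
  if (volumes.length < longBars) return true; // not enough data: don't block
  const longAvg = avg(volumes.slice(-longBars));
  const shortAvg = avg(volumes.slice(-shortBars));
  return shortAvg >= ratio * longAvg;
}
```

The bot would consult this check before acting on an EMA crossover, sitting out flat, low-volume periods.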
|
|
|
|
davider
|
|
May 11, 2013, 08:31:42 PM |
|
I have a long history of making MetaTrader forex expert advisors. I would suggest adding a trigger based on average trading volumes - for example, comparing the last 50-bar average to the last 5-bar average. If volume is below a certain level, the trigger stops the bot from trading.

I think it's a good idea. Maybe it would be worth running some tests on MtGox historical data to check whether this option improves the performance of the bot.
|
|
|
|
|
maverick1337
Newbie
Offline
Activity: 40
Merit: 0
|
|
May 12, 2013, 02:42:48 PM |
|
Is it possible or a good idea to prevent the bot from trading at a loss and just hold the BTC?
|
|
|
|
eduk
Newbie
Offline
Activity: 14
Merit: 0
|
|
May 12, 2013, 03:10:11 PM Last edit: May 12, 2013, 11:17:29 PM by eduk |
|
Hi all, I'm new to Bitcointalk - I've been waiting hours to get out of newbie mode to post this. Yesterday I spent quite a long time learning the inner workings of this bot. I took TobbeLino's version and have made some modifications to attempt to start making the bot a bit smarter.

I have added some volatility data, with a volatility buy threshold/sell threshold that must also be passed before a trade is executed. The volatility data is worked out by taking X samples (which you can set in the options) and then finding the lowest and highest sample among them. The lowest is then subtracted from the highest to give the difference. This volatility score is then added to the volatility score from a couple of samples ago and divided by 2 to smooth it out slightly. Whether this is the best way of working out volatility, I'm not sure. I found the true range and average true range at this link and basically simplified them: http://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:average_true_range_atr So if anyone has any ideas on how to improve this volatility score, just let me know.

Also, please test it with small amounts, as I've only been using it with 5-minute samples so far. There have also been some changes to the section which determines whether to execute a trade, so use at your own risk. I used it this morning and it made a sell at 11:40 BST, as in the image below: https://i.imgur.com/N3zkxVu.png

Options have been added to set it up. I'm liking the look of 3 volatility samples with about a 0.9 to 1.2 threshold on each. I believe that this volatility check will also allow you to lower the main buy/sell thresholds.

Download: https://mega.co.nz/#!k9EERBpI!fr0EdklaQIZPyfVqF7C-aRYeBRwkqc0bdzbitz4Xho4 (copy/paste this, as the forum screwed it up)

As a bonus, I have also added a sound notification so that you can hear when a trade gets made. @TobbeLino, if you like this update, feel free to add it to your GitHub / improve upon it, or contact me.
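The volatility score described in the post could be sketched as follows (assuming prices is the recent sample price series with the newest value last, and prevScore is the score from a couple of samples ago, or null on the first run; the function name is made up, not eduk's actual code):

```javascript
// High-low range of the last `window` samples, averaged with the previous
// score to smooth it slightly - a simplified cousin of the true range.
function volatilityScore(prices, window, prevScore) {
  const recent = prices.slice(-window);
  const range = Math.max(...recent) - Math.min(...recent);
  return prevScore === null ? range : (range + prevScore) / 2;
}
```

A trade would then require both the EMA crossover threshold and this score to exceed the configured volatility threshold.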
|
|
|
|
|
IdiotHole
Newbie
Offline
Activity: 12
Merit: 0
|
|
May 12, 2013, 05:06:40 PM |
|
This looks very interesting indeed. With this kind of ranging-market detection in place, the bot could be left running even in flat periods like the last few days without worrying about losing money. Given that the source is included, I wonder how hard this would be to incorporate into the bot. I've not done any coding for a while, but I might have to look into this if I ever get the time.
|
|
|
|
eduk
Newbie
Offline
Activity: 14
Merit: 0
|
|
May 12, 2013, 05:14:47 PM |
|
MACD Line: (12-day EMA - 26-day EMA). Signal Line: 9-day EMA of MACD Line. MACD Histogram: MACD Line - Signal Line.

That calculation, using multi-day EMAs, is currently out of the scope of the bot, because it only collects the last 144 samples from MtGox. This can be extended to 1000 if you change "const MaxSamplesToKeep = 144;" at the top of background.js, but at small intervals the history won't reach far enough back to collect that amount of data. Alternatively, the calculation could be done over a shorter time period, or the bot would need to be upgraded to store past data somehow or get it from somewhere else.
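For reference, the quoted MACD formulas could be computed over whatever sample series the bot keeps along these lines (a generic sketch, not code from the bot; seeding the EMA with the first value is one common convention among several):

```javascript
// Recursive EMA over a price series; smoothing factor k = 2 / (period + 1).
function ema(values, period) {
  const k = 2 / (period + 1);
  let e = values[0];
  const out = [e];
  for (let i = 1; i < values.length; i++) {
    e = values[i] * k + e * (1 - k);
    out.push(e);
  }
  return out;
}

// MACD line = fast EMA - slow EMA; signal = EMA of the line;
// histogram = line - signal. Periods here are in samples, not days.
function macd(prices, fast = 12, slow = 26, signalPeriod = 9) {
  const fastE = ema(prices, fast);
  const slowE = ema(prices, slow);
  const line = prices.map((_, i) => fastE[i] - slowE[i]);
  const signal = ema(line, signalPeriod);
  const histogram = line.map((v, i) => v - signal[i]);
  return { line, signal, histogram };
}
```

Run over the bot's samples rather than daily candles, the periods shrink to the sample interval, which is one way around the 144-sample limit for shorter timeframes.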
|
|
|
|
eduk
Newbie
Offline
Activity: 14
Merit: 0
|
|
May 12, 2013, 05:29:16 PM |
|
This looks very interesting indeed. With this kind of ranging-market detection in place, the bot could be left running even in flat periods like the last few days without worrying about losing money. Given that the source is included, I wonder how hard this would be to incorporate into the bot. I've not done any coding for a while, but I might have to look into this if I ever get the time.

Have a look at the version I posted above with the added volatility check. You should be able to just change the function "updateVol" to do a completely different calculation and therefore use a different indicator. The work of adding new thresholds and checking for them is all included.
|
|
|
|
IdiotHole
Newbie
Offline
Activity: 12
Merit: 0
|
|
May 12, 2013, 05:51:55 PM |
|
Thanks, I'll look into this and run it for a while in sim mode alongside TobbeLino's bot to see how it behaves...
|
|
|
|
|