Author Topic: Predicting future Bitcoin Price  (Read 10596 times)
bb113 (OP), Hero Member
September 16, 2013, 01:52:13 PM #1

How about this method:
1) Get the Gox historical daily prices
2) Calculate the day-to-day percent changes, i.e. (Price[n+1] - Price[n])/Price[n]
3) Create a Markov chain starting at the most recent price

4) Sample a price change from the distribution of differences in the historical data (see top-right panel below)
5) Update the price by one day into the future
6) Go to 4
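
A minimal R sketch of steps 2-6 (not the exact code behind the charts; the file/column name, horizon, and iteration count are just placeholders, all it really needs is a numeric vector of daily Gox prices):
Code:
# placeholder load: any numeric vector of daily prices works here
price <- read.csv("goxUSDdaily.csv")$VWAP

pct.change <- diff(price)/price[-length(price)]   # step 2: day-to-day % changes

pred.days <- 30     # days ahead to simulate
iters     <- 1000   # number of simulated paths

paths <- replicate(iters, {
  p <- tail(price, 1)                             # step 3: start at the most recent price
  for (i in 1:pred.days) {
    delta <- sample(pct.change, 1)                # step 4: draw a historical % change
    p <- c(p, tail(p, 1)*(1 + delta))             # step 5: step the price one day ahead
  }                                               # step 6: repeat until the horizon
  p
})

# summaries of the simulated day-30 prices across all paths
quantile(paths[pred.days + 1, ], c(0.025, 0.5, 0.975))
mean(paths[pred.days + 1, ])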

Example outputs:



Overall Posterior Distributions:


Interpretation:
30 days from today there is a 95% probability the price will be between 62.5 and 320.7 USD, a 67% probability it is between 105.7 and 219.6 USD, and the expected value is either 181 or 169 USD depending on whether you prefer the mean or the median as your estimator.
ElectricMucus, Legendary
September 16, 2013, 02:11:00 PM #2

Care to backtest it? (That is, running the predictions from an already-past point in time so you can see how useful the model is.)
BombaUcigasa, Legendary
September 16, 2013, 02:57:10 PM #3

Bro, do you even imgur.com? I can't see your pictures.
bb113 (OP), Hero Member
September 16, 2013, 03:07:38 PM #4

Here are the results if I had done this 1 year ago and predicted a year into the future (i.e. it does not use any of the price changes from the last 365 days).
The actual prices are noted by the red text:



Bro, do you even imgur.com? I can't see your pictures.
I tried going to that site and it wanted to install something, as far as I could tell. This site is just the first one that comes up with a search ("postimg.org"). Not sure why it would not display for you.
oda.krell, Legendary
September 16, 2013, 03:50:45 PM (Last edit: September 16, 2013, 10:50:41 PM by oda.krell) #5

Really interesting. Thanks for sharing this. By the way, I can see the images fine.

Some notes/observations:

the 95% interval is, hm, sort of useless for our purposes, isn't it? The range is huge (as you would expect), so not much information can be gained from that. Except maybe to tell the uber-bulls that a 100k $/btc price next year is exceedingly unlikely :P

at a glance, the median appears to be more accurate than the mean. What is really puzzling is that in your backtest looking 1 year ahead, a) mean and median diverge a lot (350 vs 149), and b) the median is actually remarkably accurate. Just a lucky guess, or not, I don't know. But that's a pretty remarkable result for a relatively simple model, no?

Would be interesting to see how the model works in a "sliding" backtest, say: 1 week prediction, starting 1 year ago, moving forward 1 week each iteration. Looking at the result of your backtest, the median might turn out to be a pretty good predictor of the price one week later, which for trading purposes would be already very useful, but it would be *really* interesting to see how the model performs a week before April 10 2013, or a week before one of the other wild price swings in May.

EDIT: wrote 'mean', meant 'median'

BombaUcigasa, Legendary
September 16, 2013, 03:56:55 PM #6

I tried going to that site and it wanted to install something as far as i could tell. This site is just the first one that comes up with a search( "postimg.org"). Not sure why it would not display for you.
Sorry to hear that. This is what I see: https://i.imgur.com/dus4hIc.jpg

I can't see the picture in your post: http://s12.postimg.org/8obt1tyij/Gox_Predict001.png, the new url is http://s12.postimg.org/omkirysqj/Gox_Predict001.png for me. It must be an error on your part or some silly protection scheme from postimg.

Here are the results if I had done this 1 year ago and predicted a year into the future (ie it does not include the price changes after 365 days ago).
The actual prices are noted by the red text:

Can you use a logarithmic scale? It's hard to understand what goes on from a flat line and an exponential J curve.
Odalv, Legendary
September 16, 2013, 03:59:50 PM #7


Sorry to hear that. This is what I see: https://i.imgur.com/dus4hIc.jpg



I can see the images fine.
bb113 (OP), Hero Member
September 17, 2013, 12:08:05 PM #8

With regard to adding more constraints/dependencies, there is a nice linear relationship between the "acceleration of price change" (x-axis) and the percent change in price one day later (y-axis):


In the figures above I sampled a change in price with uniform weights on all of them; instead, we can make some more likely to be chosen (shown by the heatmaps above for the points shown in black). In that case, here is a repeat of the first figure:



Now, the distribution shown in the heatmap is just a normal distribution with standard deviation 0.3, which is arbitrary. This appears to be a good skeleton for a model, though.
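
A rough sketch of that weighted draw (reusing the pct.change vector from the sketch in the first post; the sd = 0.3 kernel width is the arbitrary value mentioned above):
Code:
accel     <- diff(pct.change)      # "acceleration": day-to-day change in the % change
cur.accel <- tail(accel, 1)

# instead of uniform weights, weight each historical % change by how close its
# associated acceleration was to the current acceleration
w <- rep(1, length(pct.change))
w[-length(w)] <- dnorm(accel, mean = cur.accel, sd = 0.3)
w <- w/sum(w)

delta <- sample(pct.change, 1, prob = w)   # weighted draw of the next day's % change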


Really interesting. Thanks for sharing this. By the way, I can see the images fine.

Some notes/observations:

the 95% interval is, hm, sort of useless for our purposes, isn't it? The range is huge (as you would expect), so not much information can be gained from that. Except maybe to tell the uber-bulls that a 100k $/btc price next year is exceedingly unlikely :P

at a glance, the median appears to be more accurate than the mean. What is really puzzling is that in your backtest looking 1 year ahead, a) mean and median diverge a lot (350 vs 149), and b) the median is actually remarkably accurate. Just a lucky guess, or not, I don't know. But that's a pretty remarkable result for a relatively simple model, no?

Would be interesting to see how the model works in a "sliding" backtest, say: 1 week prediction, starting 1 year ago, moving forward 1 week each iteration. Looking at the result of your backtest, the median might turn out to be a pretty good predictor of the price one week later, which for trading purposes would be already very useful, but it would be *really* interesting to see how the model performs a week before April 10 2013, or a week before one of the other wild price swings in May.

EDIT: wrote 'mean', meant 'median'

This would be kind of a pain to assess the results of. Do you think just comparing the median with the actual price is the right statistic?

I tried going to that site and it wanted to install something as far as i could tell. This site is just the first one that comes up with a search( "postimg.org"). Not sure why it would not display for you.
Sorry to hear that. This is what I see: https://i.imgur.com/dus4hIc.jpg

I can't see the picture in your post: http://s12.postimg.org/8obt1tyij/Gox_Predict001.png, the new url is http://s12.postimg.org/omkirysqj/Gox_Predict001.png for me. It must be an error on your part or some silly protection scheme from postimg.

Here are the results if I had done this 1 year ago and predicted a year into the future (ie it does not include the price changes after 365 days ago).
The actual prices are noted by the red text:

Can you use a logarithmic scale? It's hard to understand what goes on from a flat line and an exponential J curve.
Yes I could do that. There was an annoying technical reason it is not already like that.
bb113 (OP), Hero Member
September 17, 2013, 01:50:05 PM #9

And here is the 1-year backtest. The median is not as accurate now, but I still think this model is stronger (if it weren't for the arbitrary standard deviation used when assigning the percent-change weights). Also, the heatmaps in the last post should have had their axes multiplied by 100 to be percentages.


bb113 (OP), Hero Member
September 17, 2013, 08:19:21 PM #10

OK here are sliding backtest results.

210 days:

Monthly (30 days):

Weekly (7 days):

Daily:
oda.krell, Legendary
September 18, 2013, 03:57:09 PM #11

OK here are sliding backtest results.

I admit defeat. How do I read those latest results? Are the "sliding" test results only visible in the upper log chart? If so, how can I compare the test results with the actual price?

ElectricMucus, Legendary
September 18, 2013, 05:33:11 PM #12

It looks like a laggy trend follower to me, not particularly useful.

The black lines in the middle should be the median and the dotted lines the standard deviations.
bb113 (OP), Hero Member
September 18, 2013, 05:37:31 PM #13

Those are the results of four different backtests:
1) 210 days straight
2) Backtest for 30 days, then update the distribution of price changes
3) Backtest for 7 days and then proceed as in #2
4) Backtest one day at a time

Each of them also forecasts the same time period into the future. I simply appended the results of each backtest onto the last one to create the top-left panels. The actual price data (daily VWAP) is shown by the blue line. The arrows at the bottom of this chart indicate the timepoints at which I plotted the distribution of estimates in the histograms below (just six evenly spaced intervals). The actual price is indicated on each histogram by a point, with blue text if it fell within the 95% HDI and red text if it did not for that timepoint.
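
Roughly, the sliding backtest loop looks like this (a simplified sketch, not the exact script; price is the full daily VWAP series and the window size is illustrative):
Code:
window <- 7                                  # 7 = weekly, 30 = monthly, 1 = daily
starts <- seq(365, 0, by = -window)          # days before "today" to restart from

for (s in starts) {
  hist.price <- price[1:(length(price) - s)]                    # data known at that point
  pct.change <- diff(hist.price)/hist.price[-length(hist.price)]
  # ...simulate `window` days ahead from tail(hist.price, 1) as in the first post,
  # then append the medians/HDIs and compare them against the actual prices
}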

You can see from the 3rd set of charts (weekly) that it did not perform very well during the large rise and fall. I've since added a further influence on the choice of price change: fit a lowess curve and determine the distance of each price from the curve in the historical data, then use this relationship to favour price changes that occurred historically at a similar distance from the curve. It's running in the background now, so...
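
Roughly, that new influence looks like this (again just a sketch; it reuses price, pct.change, and the acceleration weights w from the earlier sketches, and the kernel width is arbitrary):
Code:
t    <- seq_along(price)
fit  <- loess(price ~ t, degree = 1, span = 30/length(price),
              control = loess.control(surface = "direct"))$fitted
dist <- (price - fit)/fit                  # % distance of each price from the curve

cur.dist <- tail(dist, 1)
# weight each historical % change by how similar that day's loess distance
# was to the current one
w2 <- dnorm(dist[seq_along(pct.change)], mean = cur.dist, sd = 0.3)
w2 <- w2/sum(w2)

probs <- w/sum(w) + w2                     # combine with the acceleration-based weights
delta <- sample(pct.change, 1, prob = probs/sum(probs))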
bb113 (OP), Hero Member
September 18, 2013, 05:45:56 PM #14

It looks like a laggy trend follower to me, not particularly useful.

The black lines in the middle should be the median and the dotted lines the standard deviations.

Yes, this is to be expected since it is based entirely on historical data. All it can do is give a range of scenarios similar to what has occurred in the past. However, once the basic model is determined, we can add in the influence of a prior distribution on the price-change choice (i.e. make it more likely to choose positive changes if we feel bullish, or negative if bearish), which should improve the forecasting provided the prior used is actually valid. Another thing is to increase the tail risk if, for example, the price has been "stable too long" or lots of black swans are sensed.

Basically it will be a way to combine the "this is like 2011/2012/whatever" concept with a subjective impression based on current information about the fundamentals.
oda.krell, Legendary
September 18, 2013, 06:19:47 PM #15

Those are the results of four different backtests:
1)210 days straight
2) Backtest for 30 days then update the distribution of price changes
3) Backtest for 7 days and do like in #2
4) Backtest one day at a time

Each of them also forecasts for the same time period into the future. I simply appended the results of each backtest on to the last one to create the topleft panels. The actual price data (daily VWAP) is shown by the blue line. The arrows at the bottom of this chart indicate the timepoints at which I plotted the distribution of estimates in the histograms below (just six evenly spaced intervals). The actual prices are indicated on the histograms by the point and blue text if it fell within the 95% HDI, but in red text if it did not for that timepoint.

You can see from the 3rd set of charts (weekly), that it did not perform very well during the large rise and fall. I've since added a further influence on the choice of price change: fit a lowess curve and determine the distance of each price from the curve in the historical data. Then use this correlation to pick a similar price change as occurred historically. Its running in the background now so...

Still don't get it. I can't see a blue line (for actual price) in the top-left chart. And for test 4), if you backtested one day at a time starting about a year ago, shouldn't there be 365 of the lower histograms, each with a 1-day prediction period?

bb113 (OP), Hero Member
September 18, 2013, 06:34:54 PM #16

The blue line is really dark due to image compression, so call it the thick black line that looks like the price history; the chart is getting busy, so it may be kind of hard to see... It pretty much goes down the middle of the estimates. It is most obvious in the monthly chart. The various colors in the background are a random sampling of the simulation results. The thick grey line is the median estimate, and the dotted lines towards the outside are the 67% and 95% HDIs, in grey and black respectively. Sometimes they are hard to see now.

The reason there are not 365 histograms is because that's a lot of histograms. Instead I chose six at equally spaced intervals; the timepoints that got chosen are marked by the triangles at the bottom. You can see the start and end points of the backtesting in the monthly and weekly charts because the estimates are very tight early on and then get less certain farther from the start point. The daily graph is at a high enough resolution that you can't really see this effect.
oda.krell, Legendary
September 18, 2013, 10:38:59 PM #17

I see it now. On the daily test I still can't really make out the difference between median and actual price, but I looked at the weekly result zoomed in and it's pretty clear there.

What is interesting is that weekly works quite well as a leading predictor even during volatile times, like the extreme run-up before the April crash, but it completely fails when it comes to crashes -- the April 10th one, but even slightly less drastic ones, like the one on September 4th (if I'm reading the chart correctly now).

Sorry if that's stating the obvious, but it's just that the gap is pretty noticeable. The method works well while a trend is continuing, but as it is now, it seems to have no chance to predict a sudden reversal.

bb113 (OP), Hero Member
September 18, 2013, 11:15:31 PM #18

I see it now. On the daily test I still can't really make out the difference between median and actual price, but I looked at the weekly result zoomed in and it's pretty clear there.

What is interesting is that weekly works quite well as a leading predictor even during volatile times, like the extreme run-up in before the April crash, but it completely fails when it comes to crashes -- the April 10th one, but even slightly less drastic ones, like the one on September 4th. (if I'm reading the chart correctly now)

Sorry if that's stating the obvious, but it's just that the gap is pretty noticeable. The method works well while a trend is continuing, but as it is now, it seems to have no chance to predict a sudden reversal.


Look at the beta (.5,.5) distribution here:


What I'm thinking is that when things get to the state we were in at the peak, no one really thinks it is going to level off. Either it will keep climbing or it will crash. So what you do at that point (in lieu of having a good peak indicator) is multiply the probabilities of choosing each price change by that distribution (after transforming the min/max price changes to 0 and 1), thus using your judgement while forecasting to inject a large kurtosis risk.

Actually, I think the beta density goes to infinity as it approaches zero and one, so just pad the limits by .1 or whatever...
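
In sketch form (reusing pct.change and the combined weights probs from the earlier sketches; the 0.001 pad is just an arbitrary choice to keep the density finite):
Code:
x <- (pct.change - min(pct.change))/(max(pct.change) - min(pct.change))  # rescale to [0, 1]
x <- pmin(pmax(x, 0.001), 0.999)       # pad the limits so dbeta stays finite

bull.bear <- dbeta(x, 0.5, 0.5)        # U-shaped: extreme moves either way weighted more
probs <- probs*bull.bear               # multiply into the existing weights
probs <- probs/sum(probs)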
btceic, Sr. Member
September 18, 2013, 11:33:10 PM #19

Naval Ravikant said it's going to $1,000,000 per BTC. Start around the 15:00 minute mark for context, but watch the whole thing, it's amazing.

http://techcrunch.com/2013/09/10/disrupt-sf-13-bitcoin-panel/

oda.krell, Legendary
September 19, 2013, 04:08:27 PM #20

I see it now. On the daily test I still can't really make out the difference between median and actual price, but I looked at the weekly result zoomed in and it's pretty clear there.

What is interesting is that weekly works quite well as a leading predictor even during volatile times, like the extreme run-up in before the April crash, but it completely fails when it comes to crashes -- the April 10th one, but even slightly less drastic ones, like the one on September 4th. (if I'm reading the chart correctly now)

Sorry if that's stating the obvious, but it's just that the gap is pretty noticeable. The method works well while a trend is continuing, but as it is now, it seems to have no chance to predict a sudden reversal.


Look at the beta (.5,.5) distribution here:


What I'm thinking is that when things get to the state we were at at the peak, noone really thinks it is going to level off. Either it will keep climbing or crash. So what you do at that point (in lieu of having a good peak indicator) is multiply the probabilities of choosing each price change by that distribution (transform min/max price changes to 0 and 1), thus using your judgement while forecasting to input large kurtosis risk.

Actually I think the beta distribution goes infinite as it approaches zero and one so just pad the limits with .1 or whatever...

Did I mention I don't really know much about statistical methods? So I'll need to ask some dumb questions: that beta distribution is not motivated by anything bitcoin-specific, right? You're assuming it is a good representation of peak/reversal likelihoods as a function of extreme price change, correct? Also, I don't see how you get the actual multiplier from that graph. What's the PDF that ranges from 0 to 2.5?

Another idea/question: your random walk approach worked quite okay with price change as input. What do you think are the chances that you can get some complementary information (maybe estimating reversal likelihood) by applying it to a different domain, say, BTC volume, dollar volume, or OBV? Not at all sure if that could work, but I'm wondering if price (change) is the only useful input. Any intuition about this?

bb113 (OP), Hero Member
September 20, 2013, 05:18:18 AM #21


Did I mention I don't really know much about statistical methods? So I'll need to ask some dumb questions: that beta distribution is not motivated by anything bitcoin specific, right? You're assuming it is a good representation of peak/reversal likelihoods as a function of extreme price change, correct?. Also, I don't see how you get the actual multiplier that graph. What's the PDF that ranges from 0 to 2.5?

Another idea/question: your random walk approach worked quite okay with price change as input. What do you think are the chances that you can get some complimentary information (maybe estimating reversal likelihood) by applying it to a different domain, say, btc volume, dollar volume, OBV. Not at all sure if that could work, but I'm wondering if price (change) is the only useful input. Any intuition about this?

The beta distribution is just one I know of with two modes that spans a fixed interval like that. I learned stats from the internet and from playing around in R, so I don't know that much either and may get stuff wrong; I mostly just do things until something looks like it makes sense and look stuff up along the way when I can't figure out how to do something... I do know that the stats I was taught (p-values and null hypotheses, etc.) are flawed to the point of uselessness for most purposes, and what needs to be done instead is to build models that predict things. Even if the model is wrong, it can still be improved by later work or give someone an idea.

PDF= probability density function
http://stats.stackexchange.com/questions/4220/a-probability-distribution-value-exceeding-1-is-ok

All that is done is to multiply the two densities together for each value, or add them, depending on what you want to do (and divide each by the sum as well). This has the effect of making the % price changes that are highly associated with the multiple predictors more likely to be chosen.

In the future other stuff can be added, for example, in addition to what you mentioned: forum activity, Wikipedia page views, Google Trends. Each of these can be looked at historically to see what was going on when the price changed by a certain amount. Right now it's easier to just use the info from price alone and kind of treat it like an event horizon (it contains all the useful information). Currently I have the influences as acceleration of price change, lag-1 autocorrelation, and distance from a loess curve that gets refit at every timepoint as it updates. Each of these can be given different weights and such in multiple combinations, but I am supposed to be doing other things, so I'm thinking to just use the price info and let all other information get weighed in my brain as a more nebulous set of probabilities for the "sum of the other evidence". For example, just say the odds of the price rising in the next week are twice those of it falling. So multiply the densities of all the positive price changes by 2.
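
In sketch form, using that example (again reusing pct.change and the combined weights probs from the earlier sketches; the factor of 2 is the subjective input):
Code:
prior <- rep(1, length(pct.change))
prior[pct.change > 0] <- 2             # "odds of rising are twice those of falling"

probs <- probs*prior
probs <- probs/sum(probs)              # renormalize before sampling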

bb113 (OP), Hero Member
September 20, 2013, 11:09:42 PM #22


Here is a video of one scenario for the next 1000 days (starting yesterday) of bitcoin that uses autocorrelation:
http://tinypic.com/r/a32ntz/5

The x-axis of the left 4 panels is time; they show the last 100 days of predictions for each parameter (today is "index" 1155). The top-left is the price, and it has a lower panel showing the all-time price data. The heatmaps show the relationship between the rate of price change (acceleration) / distance to the loess fit and the next day's % price change. The density plots show the probability of choosing a given % price change for the next timepoint.

It definitely behaves like the bitcoin we all know and love. Up to 330 in about 120 days then crash to 55, then up to 2k and crash to 500, etc.


Batch Results:
600watt, Legendary
September 21, 2013, 08:33:48 AM #23


Here is a video of one scenario for the next 1000 days (starting yesterday) of bitcoin that uses autocorrelation:
http://tinypic.com/r/a32ntz/5

The x-axis of the left 4 panels is time, they show the last 100 days of predictions for each parameter (today is "index" 1155). The topleft is the price, while it has a lower panel showing the all time price data. The heatmaps show the relationship between rate of price chage (acceleration)/distance to loess fit and the next days % price change. The density plots show the probability of choosing a given % price change for the next timepoint.

It definitely behaves like the bitcoin we all know and love. Up to 330 in about 120 days then crash to 55, then up to 2k and crash to 500, etc.



Nice work! I enjoyed the part where it went way above $7k the most. If we all just stick to your prediction, we are done.... :)
Professor James Moriarty (aka TheTortoise), Sr. Member
September 21, 2013, 08:37:47 AM #24


Every year bitcoin has gone up. Yes, in the short run it may go down, but in the long run there is no realistic outcome in which the price keeps dropping. I believe we will see $1000 per bitcoin by the end of 2014 or the start of 2015.
BombaUcigasa, Legendary
September 21, 2013, 09:40:39 AM #25

It definitely behaves like the bitcoin we all know and love. Up to 330 in about 120 days then crash to 55, then up to 2k and crash to 500, etc.


Batch Results:

Finally something practical that I see in this thread :)

Good methods, but this seems too complex for such a simple estimation...
bb113 (OP), Hero Member
September 21, 2013, 01:33:02 PM #26

It definitely behaves like the bitcoin we all know and love. Up to 330 in about 120 days then crash to 55, then up to 2k and crash to 500, etc.

Finally something practical that I see in this thread :)

Good methods, but this seems too complex for such a simple estimation...

You don't believe there is a 99.9% chance the price will be between 0 and $9.7 million 1000 days from now? At least it is an honest estimate.
NamelessOne, Legendary
September 21, 2013, 02:48:37 PM #27


Here is a video of one scenario for the next 1000 days (starting yesterday) of bitcoin that uses autocorrelation:
http://tinypic.com/r/a32ntz/5

The x-axis of the left 4 panels is time, they show the last 100 days of predictions for each parameter (today is "index" 1155). The topleft is the price, while it has a lower panel showing the all time price data. The heatmaps show the relationship between rate of price chage (acceleration)/distance to loess fit and the next days % price change. The density plots show the probability of choosing a given % price change for the next timepoint.

It definitely behaves like the bitcoin we all know and love. Up to 330 in about 120 days then crash to 55, then up to 2k and crash to 500, etc.


Thank you for posting that video and the images. This is a great thread!
oda.krell, Legendary
September 21, 2013, 03:38:09 PM #28

I don't care what anyone else says. I like it.

But if you really want to give them something to complain about, use your systems to produce some short term price calls.

Do I hear some bitterness there?

Anyway, that's what I had in mind earlier when I suggested the sliding backtest with a 1-day prediction window, and concentrating on the shorter prediction periods in general (what's the point of a 3-year prediction with a huge range?). It's not about trading advice per se, but about being able to judge where exactly the method went wrong.

Also, I am pretty impressed by bb113's method(s) so far, but I have a hard time seeing how we can make our input in here useful for improving it further. So, yes, price calls within a realistic time and confidence interval are probably what would allow us to give more constructive criticism.

bb113 (OP), Hero Member
September 21, 2013, 03:50:09 PM #29

Thanks for the encouragement/criticism guys, it's been a good, wholesome break from writing this thing for school. Just to warn you, my eventual idea is to follow suggestions as much as practical, make the code easy for an R noob to use by just changing some numbers at the top, and then try to get some donations for it (I am a poor grad student despite bitcoin). I need to get back to work tomorrow/Monday though, so if you have any ideas (the backtesting was a good one; incorporating further influences I think is good but too much for now), let me know soon.

Right now it generates one of those videos for one example every batch because it is slow to make the images, and actually watching those play out and messing with the settings is almost better than watching the actual price. For example, I've had it crash on me by running up to $90k and then dropping to zero so fast that all the probabilities were zero.


These are the predictions for the next week. Basically a slow uptrend:
bb113 (OP), Hero Member
September 23, 2013, 07:27:54 PM #30

OK, so here is what I have to offer that may be of use to some people in my quest for donations:

There are two R scripts. The first one downloads all the trades from bitcoincharts (trades are aggregated together if they occur within a second of each other, I believe) and saves them as a CSV. After this is done the first time, downloading all the trades will be skipped and it will only download trades since the last time the script was run and update the CSV file. It then calculates the daily data from that.

It next gets the Bitcointalk forum stats as well as Wikipedia page views and makes the chart shown below. Then it takes the price history, calculates "breakpoints" which indicate timepoints of structural change, and overlays a plot of these onto a heatmap showing possible lines of resistance according to how much volume has occurred at each price. Volume can be in USD, BTC, or # of trades per day. This is the second plot (using USD volume).

The second script takes this daily data and attempts to predict into the future based on previous day-to-day price changes, as has been shown throughout this thread. It can also take snapshots of what it is doing and save them to a folder, which can then be made into a video. I recommend using VirtualDub for this.

It was made on Windows 7 using R 2.14.2 (64-bit), but should work with newer versions of R. Not sure what problems will arise on other OSes.







Script 1:
Code:

####Only Run First Time (uncomment to run)
#install.packages("rjson")
#install.packages("TTR")
#install.packages("RCurl")
#install.packages("XML")
#install.packages("fields")
#install.packages("psych")
#install.packages("strucchange")


#Load required packages
require(rjson)
require(TTR)
require(RCurl)
require(XML)
require(fields)
require(psych)
require(strucchange)


#Choose Directory for data
data.dir<-"C:/Users/Valued Customer/Documents/GoxData"

# Download All gox USD history from bitcoincharts.com? T or F
# If false it will only update from previous file in data directory named "goxUSD.csv"
dl.all=F

# Get Google Trends Data (experimental, doesn't work)
get.gtrends=F

#Plot Breakpoints and volume heatmaps? This is slow.
plot.breakpoints=T

#Volume for heatmaps: "usd", "trades", or "btc"
heatmap.vol="usd"





#Set working Directory
setwd(data.dir)

####Experimental google trends (requires copy/paste from exported csv)####
if(get.gtrends==T){
new.gtrends=F
if(new.gtrends==T){
  gtrends1<-readClipboard()
 
  out=NULL
  for(i in 1:length(gtrends1)){
    temp<-strsplit(gtrends1[i]," ")[[1]][1]
    out<-rbind(out,temp)
  }
  gtrends1<-out
 
  out=NULL
  for(i in 1:nrow(gtrends1)){
    date<-strsplit(as.character(gtrends1[i,1]), "-")
    yr<-as.numeric(date[[1]][1])
    mo<-as.numeric(date[[1]][2])
    day<-as.numeric(date[[1]][3])
   
    temp<-cbind(yr,mo,day)
    out<-rbind(out,temp)
  }
 
  gtrends<-data.frame()
  gtrends<-as.data.frame(cbind(out,c(gtrends1),gtrends2))
 
  gtrends[,1]<-as.numeric(as.character(gtrends[,1]))
  gtrends[,2]<-as.numeric(as.character(gtrends[,2]))
  gtrends[,3]<-as.numeric(as.character(gtrends[,3]))
  gtrends[,5]<-as.numeric(as.character(gtrends[,5]))
 
  gtrends<-gtrends[which(gtrends[,4]=="2009-11-29"):nrow(gtrends),]
  gtrends<-gtrends[is.finite(gtrends[,5]),]
  colnames(gtrends)<-c("Year","Month","Day","Date","Value")
  rownames(gtrends)<-1:nrow(gtrends)
  write.table(gtrends,"gtrends.txt")
}else{
  gtrends<-read.table("gtrends.txt")
}
}
##End Google Trends









####Get All goxUSD data###

if( dl.all==T){
 
  gox<-read.table("http://api.bitcoincharts.com/v1/csv/mtgoxUSD.csv",sep=",")
 
  ##Convert Times from Unix
  times=matrix(nrow=nrow(gox), ncol=6)
  time.string=matrix(nrow=nrow(gox),ncol=1)
  pb<-txtProgressBar(min = 0, max = nrow(gox), initial = 0, char = "=",
                     width = NA, style = 3)
  for(i in 1:nrow(gox)){
    timeStamp<-ISOdatetime(1970,1,1,0,0,0) + gox[i,1]
   
    date<-strsplit(as.character(timeStamp), " ")[[1]][1]
    yr<-as.numeric(strsplit(date, "-")[[1]][1])
    mo<-as.numeric(strsplit(date, "-")[[1]][2])
    day<-as.numeric(strsplit(date, "-")[[1]][3])
   
    ToD<-strsplit(as.character(timeStamp), " ")[[1]][2]
    hr<-as.numeric(strsplit(ToD, ":")[[1]][1])
    min<-as.numeric(strsplit(ToD, ":")[[1]][2])
    sec<-as.numeric(strsplit(ToD, ":")[[1]][3])
   
    times[i,]<-cbind(yr,mo,day,hr,min,sec)
    time.string[i,]<-as.character(timeStamp)
    setTxtProgressBar(pb, i)
  }
  close(pb)
  gox<-cbind(times,gox)
  gox<-cbind(gox,gox[,8]*gox[,9])
  colnames(gox)<-c("Yr","Mo","Day","Hr","Min","Sec","UnixT","Price","BTCVol", "USDVol")
write.csv(gox,"goxUSD.csv", row.names=F)
}
##End get All goxUSD





####Update goxUSD data with new prices####

gox<-read.csv("goxUSD.csv", header=T, sep=",")

while((tail(gox[,7],1)+60*15)<as.numeric(Sys.time())){
  update<-read.table(paste("http://api.bitcoincharts.com/v1/trades.csv?symbol=mtgoxUSD&start=",
                           (tail(gox[,7],1)+1),sep="")
                     ,sep=",")
 
  ##Convert Times from Unix
  times=matrix(nrow=nrow(update), ncol=6)
  time.string=matrix(nrow=nrow(update),ncol=1)
  pb<-txtProgressBar(min = 0, max = nrow(update), initial = 0, char = "=",
                     width = NA, style = 3)
  for(i in 1:nrow(update)){
    timeStamp<-ISOdatetime(1970,1,1,0,0,0) + update[i,1]
   
    date<-strsplit(as.character(timeStamp), " ")[[1]][1]
    yr<-as.numeric(strsplit(date, "-")[[1]][1])
    mo<-as.numeric(strsplit(date, "-")[[1]][2])
    day<-as.numeric(strsplit(date, "-")[[1]][3])
   
    ToD<-strsplit(as.character(timeStamp), " ")[[1]][2]
    hr<-as.numeric(strsplit(ToD, ":")[[1]][1])
    min<-as.numeric(strsplit(ToD, ":")[[1]][2])
    sec<-as.numeric(strsplit(ToD, ":")[[1]][3])
   
    times[i,]<-cbind(yr,mo,day,hr,min,sec)
    time.string[i,]<-as.character(timeStamp)
    setTxtProgressBar(pb, i)
  }
  close(pb)
  update<-cbind(times,update)
  update<-cbind(update,update[,8]*update[,9])
  colnames(update)<-c("Yr","Mo","Day","Hr","Min","Sec","UnixT","Price","BTCVol", "USDVol")
  gox<-rbind(gox,update)
}

#Remove Duplicate Entries (maybe bitcoincharts bug?)
if(length(which(duplicated(gox)))>0){
gox<-gox[-which(duplicated(gox)),]
}
write.csv(gox,"goxUSD.csv", row.names=F)
##End Update goxUSD




####Aggregate Gox data into Daily Info####
Unix.start<-gox[1,7] - gox[1,4]*3600 - gox[1,5]*60 - gox[1,6]
daily=NULL
while(Unix.start<max(gox[,7])){
  Unix.end<-Unix.start+60*60*24
  temp<-gox[which(gox[,7]>Unix.start & gox[,7]<Unix.end),]
  temp2<-cbind(
    temp[1,1],
    temp[1,2],
    temp[1,3],
    Unix.start,
    sum(temp[,10])/sum(temp[,9]),
    sum(temp[,9]),
    sum(temp[,10]),
    nrow(temp)
  )
  colnames(temp2)<-c("Yr","Mo","Day","UnixT","VWAP","BTCVol","USDVol","num.Trades")
 
  daily<-rbind(daily,temp2)
 
  Unix.start<-Unix.end + 1
}
first.gox<-paste(daily[1,1],"-",0,daily[1,2],"-",daily[1,3],sep="")

daily[(which(is.na(daily[,5]))),5]<-
  daily[(which(is.na(daily[,5]))[1]-1),5]

write.csv(daily,"goxUSDdaily.csv", row.names=F)
##End Daily




##Get months from unix timestamps
month.labels=data.frame()
for(y in 2010:2013){
  for(m in 1:12){
    unixT<-gox[which(gox[,1]==y & gox[,2]==m),7][1]
    month.labels<-rbind(month.labels,cbind(unixT,month.abb[m]))
  }
}
month.labels<-month.labels[which(!is.na(month.labels[,1])),]
month.labels<-month.labels[2:nrow(month.labels),]
month.labels[,2]<-as.character(month.labels[,2])
yrs<-c(11,12,13)
for(i in 1:length(which(month.labels[,2]=="Jan"))){
 
  month.labels[which(month.labels[,2]=="Jan"),2][1]<-paste(
    month.labels[which(month.labels[,2]=="Jan"),2][1],
    yrs[i], sep="")
}
##End Month Labels






####Get Forum Data####
forum2=NULL
for(y in 2009:2013){
  for (m in 1:12){
   
    start<-paste(month.name[m],y)
    if(m==1){
      end<-paste(month.name[12],(y-1))
    }else{
      end<-paste(month.name[m-1],y)
    }
   
    date<-as.numeric(paste(y,m,sep=""))
   
    forum<-getURL(paste("https://bitcointalk.org/index.php?action=stats;expand=",
                        date,"#",date,sep=""),
                  ssl.verifypeer = FALSE, useragent = "R"
    )
    forum<-readLines(tc <- textConnection(forum)); close(tc)
    pagetree <- htmlTreeParse(forum, error=function(...){}, useInternalNodes = TRUE)
   
    x <- xpathSApply(pagetree, "//*/table", xmlValue) 
    x <- unlist(strsplit(x, "\n"))
    x <- gsub("\t","",x)
    x <- sub("^[[:space:]]*(.*?)[[:space:]]*$", "\\1", x, perl=TRUE)
    x <- x[!(x %in% c("", "|"))]
   
    if(any(x==start) & any(x==end)){
      temp<-as.data.frame(rbind(x[which(x==start)[2]:(which(x==end)[2]-1)]))
     
      out=NULL
      for(j in 2:(length(temp)/6)){
        n<-seq(1,length(temp),by=6)[j]
        temp2<-as.matrix(temp[n:(n+5)])
        out<-rbind(out,temp2)
      }
     
    }else{
      out<-matrix(rep(NA,6),nrow=1,ncol=6)
    }
   
    forum2<-rbind(forum2,out)
   
  }
}
forum2<-forum2[!is.na(forum2[,1]),]
colnames(forum2)<-c("Date","New Topics","New Posts","New Members","Most Online","Page Views")

forum<-as.data.frame(forum2)
for(i in 2:6){
  forum[,i]<-as.numeric(as.character(forum[,i]))
}
##End Forum Stats




####Get Wikipedia Page Views####
wiki3=NULL
for(y in 2009:2013){
  for (m in 1:12){
   
    if(m<10){
      mo<-paste(0,m,sep="")
    }else{
      mo<-m
    }
    date<-as.numeric(paste(y,mo,sep=""))
   
    wiki<-fromJSON(
      getURL(
        paste("http://stats.grok.se/json/en/",
              date,
              "/Bitcoin",sep="")
      )
    )
   
    wiki<-as.matrix(wiki$daily_views)
   
    wiki2<-cbind(rownames(wiki),wiki)
    rownames(wiki2)<-1:nrow(wiki2)
   
    out=NULL
    for(i in 1:nrow(wiki2)){
      date2<-strsplit(as.character(wiki2[i,1]), "-")
      yr<-as.numeric(date2[[1]][1])
      mo<-as.numeric(date2[[1]][2])
      day<-as.numeric(date2[[1]][3])
     
      temp<-cbind(yr,mo,day)
      out<-rbind(out,temp)
    }
   

    wiki2<-cbind(out[,1],out[,2],out[,3],wiki2)
    wiki2<-as.data.frame(wiki2)
    wiki2[,1]<-as.numeric(wiki2[,1])
    wiki2[,2]<-as.numeric(wiki2[,2])
    wiki2[,3]<-as.numeric(wiki2[,3])
    wiki2[,5]<-as.numeric(wiki2[,5])
    wiki2<-wiki2[order(wiki2[,3]),]
    rownames(wiki2)<-1:nrow(wiki2)
   
    wiki3<-rbind(wiki3,wiki2)
  }
}

shrt.months<-c(2,4,6,9,11)
lp.yrs<-seq(2000,2020,by=4)


del.points<-c(which(wiki3[,2] %in% shrt.months & wiki3[,3]==31),
              which(wiki3[,2]==2 & wiki3[,3]==30),
              which(!wiki3[,1] %in% lp.yrs &
                      wiki3[,2]==2 & wiki3[,3]==29)
)
wiki3<-wiki3[-del.points,]
rownames(wiki3)<-1:nrow(wiki3)


wiki<-wiki3[which(wiki3[,4]==forum[1,1]):nrow(wiki3),]
rownames(wiki)<-1:nrow(wiki)
##End Wikipedia Page Views




####Plot Interest Indicators####

x.axis.labels=NULL
for(i in seq(1,nrow(forum),by=7*8)){
  x.axis.labels<-rbind(x.axis.labels,paste(forum[i,1]))
}

dev.new()
plot(
  forum[,2]/max(forum[,2]),
  ylab="Value Normalized to Max",
  xaxt="n", xlab="Time",
  col=rainbow(6, alpha=.75)[1], type="l", lwd=2,
  main="Normalized to Series Max"
)
axis(1,at=seq(1,nrow(wiki),by=7*8),labels=x.axis.labels, las=1, cex=.7)
for(i in 3:5){
  lines(
    forum[,i]/max(forum[,i]), col=rainbow(6, alpha=.75)[i-1], lwd=2
  )
}
lines(wiki[,5]/max(wiki[,5]), col=rgb(0,0,0,.75), lwd=2)

if(get.gtrends==T){
lines(seq(1,nrow(wiki),by=7),gtrends[1:(nrow(gtrends)),5]/
max(gtrends[1:(nrow(gtrends)-1),5]), col=rainbow(6, alpha=.75)[5],lwd=4
)
}

lines(which(forum[,1]==first.gox):nrow(forum),
      daily[,5]/max(daily[,5],na.rm=T),
      lwd=6,lty=1,col="Black"
)
lines(which(forum[,1]==first.gox):nrow(forum),
      daily[,5]/max(daily[,5],na.rm=T),
      lwd=3,lty=2,col="Yellow"
)

legend("topleft", c(paste("BitcoinTalk:",colnames(forum)[2:5]),"Google Trends (Bitcoin + Bitcoins)", "Wikipedia Page Hits", "MtGox USD/BTC Daily VWAP"),
       col=c(rainbow(6, alpha=.75),rgb(0,0,0,.75),"Black"), lty=c(rep(1,6),1),
       lwd=c(4,4,4,4,6,4,8),bty="n",y.intersp=.75
)
legend("topleft", c(paste("BitcoinTalk:",colnames(forum)[2:5]),"Google Trends (Bitcoin + Bitcoins)", "Wikipedia Page Hits", "MtGox USD/BTC Daily VWAP"),
       col=c(rainbow(6, alpha=.75)[1:5],rgb(0,0,0,.75),"Yellow"), lty=c(rep(1,6),2),
       lwd=c(4,4,4,4,6,4,5),bty="n",y.intersp=.75
)
##End Plots








if(plot.breakpoints==T){

####Plot Breakpoints with Volume Heatmaps (slow)####
trades=F
usd=T
btc=F

t<-(1:length(daily[,5]))
bp<-breakpoints(daily[,5]~t, h=30)


##Format data##
price <- round(daily[,5],2) #  for line plot
if(heatmap.vol=="trades"){
  volume <- daily[,8] # for line color heatmap
  legend.lab="Trades per Day (Normalized to Max)"
}
if(heatmap.vol=="usd"){
  volume <- daily[,7] # for line color heatmap
  legend.lab="USD per Day (Normalized to Max)"
}
if(heatmap.vol=="btc"){
  volume <- daily[,6] # for line color heatmap
  legend.lab="BTC per Day (Normalized to Max)"
}

time <- 1:length(price) # X axis

##Smoothing Kernel Settings##
FWHM<-1 #Set size of smoothing kernel (in pixels)
time.resolution<-.00001
price.resolution<-.05
lower.cutoff.multiplier<-.01

##Create Heatmap Matrix##
price.range<-round(seq(min(price,na.rm=T),(max(price,na.rm=T)+1),by=.1),2)
heatmap<-matrix(nrow=length(time),ncol=length(price.range))

for (i in 1:length(time)){
  heatmap[i,which(price.range==price[i])]<-volume[i]
}
heatmap[which(is.na(heatmap))]<-0


##Smooth the heatmap##
smooth.heatmap<-image.smooth(heatmap, dx=time.resolution, dy=price.resolution,
                             theta=FWHM, kernel.function=dnorm, tol= 1.1, na.rm=TRUE)
smooth.heatmap$z[which(smooth.heatmap$z<0)]<-0
smooth.heatmap$z<-smooth.heatmap$z/max(smooth.heatmap$z)

##Heatmap Chart Settings###
max.smoothedvolume<-max(smooth.heatmap$z)
lower.cutoff<-lower.cutoff.multiplier*max.smoothedvolume
number.of.color.bins<-1000
colors<-c("Grey",rev(rainbow(number.of.color.bins-1)))

##Make Chart##
dev.new()
image.plot(time, price.range, as.matrix(smooth.heatmap$z),
           breaks= c(0,seq(lower.cutoff, max.smoothedvolume, length=number.of.color.bins)),
           col=colors, legend.lab=legend.lab,
           ylab="USD/BTC (Mt.Gox VWAP)", xlab="Time"

#lines(time, price, lwd=4)


lines(daily[,5], type="l",  lwd=3)
points(as.numeric(names(bp[[9]])[bp[[1]]]),
       bp[[9]][bp[[1]]],
       col="Black",pch=16, type="b", lwd=3,cex=1.2
)

lines(daily[,5], type="l",  lwd=3)
points(as.numeric(names(bp[[9]])[bp[[1]]]),
       bp[[9]][bp[[1]]],
       col="Red",pch=16, type="b"
)

##End Breakpoints and Heatmaps
}




Script 2
Code:

require("fields")



#########Settings##############

# Set data directory and directory for movie frames
data.dir<-"C:/Users/Valued Customer/Documents/GoxData"
movie.dir<-"C:/Users/Valued Customer/Documents/GoxPredict/"



days.before.today.start<-0 #When To start?
iters<-1000  #Number of Simulations to Run
pred.days<-30 #Days Ahead to Predict
num.examps<-10 #Number of Example Scenarios to Plot



live.plots=T #Make Movie?

make.pngs=F
use.autocorr=T #Adjust Probabilities due to Autocorrelation
diff.sd<-.025 #Width of acceleration distribution
loess.sd<-.025 #Width of loess distance distribution
probs.auto.sd<-.1 #Width of autocorrelation distribution
loess.span<-30 #Days for loess curve fit (smaller= tighter fit)

#weight of loess curve probs
#(>1 means more important than acceleration of price change)
loess.weight<-1





####Put daily data into format####
setwd(data.dir)
daily<-read.csv("goxUSDdaily.csv", header=T)
gox<-matrix(ncol=7,nrow=nrow(daily))
gox[,7]<-daily[,5]
gox2<-gox
##









###Live Plots function 
  plot.heatmaps.etc<-function(spot.color="Black"){
   
   
   
    par(mfrow=c(2,5))
   
    plot(p, type="l", lwd=3, xlim=c(length(p)-100, length(p)), ylab="USD/BTC",
         ylim=c(0,max(p[(length(p)-100): length(p),])),
         main=c("MtGox USD/BTC",round(tail(p,1),1)))
    points(length(p),tail(p,1), col="Red", pch=16, cex=1.5)
    lines(smthp, col="Blue", lwd=2)
    lines(seq(length(p)-100, length(p), length=length(p)),
          .8*min(p[(length(p)-100): length(p),])*p/max(p))
    abline(h=max(.8*min(p[(length(p)-100): length(p),])*p/max(p)))
    text(x=length(p)-80,y=.9*max(.8*min(p[(length(p)-100): length(p),])*p/max(p)),
         labels=paste("Max=",round(max(p),1)), cex=.7)
   
   
    plot(100*d2, type="l", xlim=c(length(p)-102, length(p)-2), ylab="% Price Change",
         main=c("% Price Change",round(100*tail(d2,1),1)))
    points(length(d2),100*tail(d2,1), col="Red", pch=16, cex=1.5)
    abline(h=0)
   
    par(mar=c(5.1,4.1,4.1,3.1))
   
   
   
    img.matrix<-matrix(nrow=length(seq(min(diff(d)),max(diff(d)),by=.001)),
                       ncol=length(seq(min(d[-nrow(d)]),max(d[-nrow(d)]),by=.001))
    )
   
    diffd.image<-(diff(d)-min(diff(d)))/(max(diff(d))-min(diff(d)))
    d.image<-(d[-nrow(d)]-min(d[-nrow(d)]))/(max(d[-nrow(d)])-min(d[-nrow(d)]))
   
   
    for(i in 1:nrow(img.matrix)){
      img.matrix[i,]<-dnorm(seq(min(diff(d)),max(diff(d)),by=.001),tail(diff(d2),1)[1],diff.sd)[i]
    }
    img.matrix<-img.matrix/max(img.matrix)
   
   
    if(use.autocorr==T){
      for(i in 1:ncol(img.matrix)){
        img.matrix[,i]<-img.matrix[,i]+dnorm(seq(min(d[-nrow(d)]),max(d[-nrow(d)]),by=.001),delta,probs.auto.sd)[i]/
          max(dnorm(seq(min(d[-nrow(d)]),max(d[-nrow(d)]),by=.001),delta,.1))
      }
    }
   
   
    image.plot(img.matrix/max(img.matrix), axes=F, xlab="Rate of % Change", ylab="Lag-1 % Change")
   
    points(diffd.image,d.image, pch=21, col="Black", bg="Grey")
   
    #cur.point.x<-diffd.image[which(abs(tail(d2,1)[1]-d)==min(abs(tail(d2,1)[1]-d)))]
    cur.point.x<-diffd.image[which(d==delta)]
    points(cur.point.x,
           (delta-min(d))/(max(d)-min(d)),
           col="Grey", bg=spot.color, pch=21, cex=2)
   
    title(main=c("Rate of Change vs", "% Change in Price"))
    axis(side=1, at=seq(min(diffd.image),max(diffd.image),length=6),
         100*round(seq(min(diff(d)),max(diff(d)),length=6),2)
    )
    axis(side=2, at=seq(min(d.image),max(d.image),length=6),
         100*round(seq(min(d[-nrow(d)]),max(d[-nrow(d)]),length=6),2), las=2
    )
   
   
   
   
    img.matrix<-matrix(nrow=length(seq(min(p.smth.diff1),max(p.smth.diff1),by=.001)),
                       ncol=length(seq(min(d),max(d),by=.001))
    )
   
    p.smth.diff1.image<-(p.smth.diff1-min(p.smth.diff1))/(max(p.smth.diff1)-min(p.smth.diff1))
    d.image<-(d-min(d))/(max(d)-min(d))
   
   
    for(i in 1:nrow(img.matrix)){
      img.matrix[i,]<-dnorm(seq(min(p.smth.diff1),max(p.smth.diff1),by=.001),
                            tail(p.smth.diff2,1),loess.sd)[i]
    }
    if(length(which(img.matrix>0))<1){
      for( i in 1:length(img.matrix)){
        img.matrix[i]<-1/length(img.matrix)
      }
    }
    img.matrix<-img.matrix/max(img.matrix, na.rm=T)
   
   
    if(use.autocorr==T){
      for(i in 1:ncol(img.matrix)){
        img.matrix[,i]<-img.matrix[,i]+dnorm(seq(min(d[-nrow(d)]),max(d[-nrow(d)]),by=.001),delta,probs.auto.sd)[i]/
          max(dnorm(seq(min(d[-nrow(d)]),max(d[-nrow(d)]),by=.001),delta,.1))
      }
    }
   
    image.plot(img.matrix/max(img.matrix), axes=F, xlab="% Distance to Loess Fit", ylab="Lag-1 % Change")
   
    points(p.smth.diff1.image,d.image, pch=21, col="Black", bg="Grey")
   
    cur.point.x<-p.smth.diff1.image[which(d==delta)]
    points(cur.point.x,
           (delta-min(d))/(max(d)-min(d)),
           col="Grey", bg=spot.color, pch=21, cex=2)
   
   
   
   
   
    title(main=c("% Distance to Loess Fit vs", "% Change in Price"))
    axis(side=1, at=seq(min(p.smth.diff1.image),max(p.smth.diff1.image),length=6),
         100*round(seq(min(p.smth.diff1),max(p.smth.diff1),length=6),2)
    )
    axis(side=2, at=seq(min(d.image),max(d.image),length=6),
         100*round(seq(min(d[-nrow(d)]),max(d[-nrow(d)]),length=6),2), las=2
    )
    par(mar=c(5.1,4.1,4.1,2.1))
    ##############################
   
   
    if(use.autocorr==T){
      probs.init2<-probs.init
      probs.init2[-which(d==delta)]<-probs.init[-which(d==delta)]*probs.autocorr
      probs.init2[which(d==delta)]<-1/length(probs.init2)
    }
   
   
   
    normalizer<-1/(diff(range(d))/512)
    y.max<-max(density(d, weights=probs.init2/sum(probs.init2))$y, normalizer*(probs.init2/sum(probs.init2)))
    plot(d,normalizer*probs.init2/sum(probs.init2), ylim=c(0,y.max), xaxt="n",
         xlab="Percent Price Change", ylab="Probability Density",
         main="Autocorrelation Weights")
    lines(density(d, weights=probs.init2/sum(probs.init2)), col="Red", lwd=3)
    #points(tail(d2,1),normalizer*probs.init[which(d==delta)[1]]/sum(probs.init2),
    #col="Light Blue", pch=16, cex=1.5
    #)
    axis(side=1, at=seq(min(d),max(d),length=6),
         100*round(seq(min(d),max(d),length=6),2)
    )
   
    plot(100*diff(d2), type="l", xlim=c(length(p)-102, length(p)-2), ylab="Rate of Price Change",
         main=c("Rate of % Price Change", round(tail(100*diff(d2),1),1)))
    points(length(diff(d2)),100*tail(diff(d2),1), col="Red", pch=16, cex=1.5)
    abline(h=0)
    plot(100*p.smth.diff2, type="l", xlim=c(length(p)-101, length(p)-1), ylab="% Difference From Loess Fit",
         main=c("% Difference from Loess Fit",round(tail(100*p.smth.diff2,1),1)))
    abline(h=0)
    points(length(p.smth.diff2),100*tail(p.smth.diff2,1), col="Red", pch=16, cex=1.5)
   
   
   
    y.max<-max(density(d, weights=probs1/sum(probs1))$y, normalizer*(probs1/sum(probs1)))
    plot(d,normalizer*probs1/sum(probs1), ylim=c(0,y.max), xaxt="n",
         xlab="Percent Price Change", ylab="Probability Density",
         main=c("Weighted due to", "Rate of Price Change"))
    lines(density(d, weights=probs1/sum(probs1)), col="Red", lwd=3)
    axis(side=1, at=seq(min(d),max(d),length=6),
         100*round(seq(min(d),max(d),length=6),2)
    )
    #points(tail(d2,1),normalizer*probs.init[which(d==delta)[1]]/sum(probs.init),
    #col="Light Blue", pch=16, cex=1.5
    #)
   
    y.max<-max(density(d, weights=probs2/sum(probs2))$y, normalizer*(probs2/sum(probs2)))
    plot(d,normalizer*probs2/sum(probs2), ylim=c(0,y.max), xaxt="n",
         xlab="Percent Price Change", ylab="Probability Density",
         main=c("Weighted due to", "Distance from Loess Fit"))
    lines(density(d, weights=probs2/sum(probs2)), col="Red", lwd=3)
    axis(side=1, at=seq(min(d),max(d),length=6),
         100*round(seq(min(d),max(d),length=6),2)
    )
    #points(tail(d2,1),normalizer*probs.init[which(d==delta)[1]]/sum(probs.init),
    #col="Light Blue", pch=16, cex=1.5
    #)
   
   
    y.max<-max(density(d, weights=probs/sum(probs))$y, normalizer*(probs/sum(probs)))
    plot(d,normalizer*probs/sum(probs), ylim=c(0,y.max), xaxt="n",
         xlab="Percent Price Change", ylab="Probability Density",
         main="Final Weights")
    lines(density(d, weights=probs/sum(probs)), col="Red", lwd=3)
    axis(side=1, at=seq(min(d),max(d),length=6),
         100*round(seq(min(d),max(d),length=6),2)
    )
    #points(tail(d2,1),normalizer*probs.init[which(d==delta)[1]]/sum(probs.init),
    #col="Light Blue", pch=16, cex=1.5
    #)
    #points(d,normalizer*probs4/sum(probs4), col="Grey", pch=16)
    #lines(density(d, weights=probs4/sum(probs4)), col="Grey", lwd=3)
   
   
   
  }
 ####End Live plots function



  ###Highest Density Interval Calculator Function
get.HDI<-function(sampleVec,credMass){
  sortedPts = sort( sampleVec )
  ciIdxInc = floor( credMass * length( sortedPts ) )
  nCIs = length( sortedPts ) - ciIdxInc
  ciWidth = rep( 0 , nCIs )
  for ( i in 1:nCIs ) {
    ciWidth[ i ] = sortedPts[ i + ciIdxInc ] - sortedPts[ i ]
  }
  HDImin = sortedPts[ which.min( ciWidth ) ]
  HDImax = sortedPts[ which.min( ciWidth ) + ciIdxInc ]
  HDIlim = c( HDImin , HDImax, credMass )
  return( HDIlim )
}

###End HDI function
















###Initialize variables
start.days<-seq(days.before.today.start,
                0, by=-pred.days)
if(tail(start.days,1)!=0){
  start.days<-c(start.days,0)
}


p.outs=NULL
means2.out=NULL
medians2.out=NULL
HDIs95.out=NULL
HDIs67.out=NULL


if(live.plots==T){
  setwd(movie.dir)
  #if(getwd()==movie.dir){
  #do.call(file.remove, list(list.files(getwd()))
  #}
  png(filename = "GoxRchangePlot%03d.png", res=72,
      width = 1600, height = 900, units = "px", pointsize = 24)
}

if(start.days[1]>0){back.testing=T}else{back.testing=F}

for(s in 1:length(start.days)){
  start.day<-start.days[s]
 
  gox<-gox2[1:(nrow(gox2)-start.day),]
  p<-as.matrix(gox[,7]) #Get gox daily prices from imported data
 
  d<-diff(p)/p[-nrow(p)] #Calculate day-to-day change in price
  d<-as.matrix(rnorm(length(d),d,0.001))
  autocorrs<-acf(d, plot=F) #Calculate autocorrelation in diffs
  autocorrs<-autocorrs$acf[-1] #Remove lag-0 data from autocorrelation
 
  probs<-rep(1,length(d))
  probs.init<-probs
  prior.probs<-probs.init
  #prior.probs[which(d<0)]<-probs.init[which(d<0)]*2
 
  t<-(1:length(p))
  smthp<-loess(p~t,degree=1, span=loess.span/length(p),evaluation=length(p),
               family="gaussian",  control = loess.control(surface = "direct"))$fitted
 
  p.smth.diff1<-(p-smthp)/smthp
  p.smth.diff1<-p.smth.diff1[-length(p.smth.diff1)]
 
 
 
 


####################
 
 
  #Allocate Memory for output matrix and initialize progress bar
  p.out=matrix(nrow=nrow(p)+pred.days, ncol=iters)
  pb<-txtProgressBar(min = 0, max = iters,
                     initial = 0, char = "=",width = 60, style = 3,)
  #
  #Run simulation
 
  for(j in 1:iters){
    p<-as.matrix(gox[,7])
    t<-(1:length(p))
    smthp<-loess(p~t,degree=1, span=loess.span/length(p),evaluation=length(p),
                 family="gaussian",  control = loess.control(surface = "direct"))$fitted
   
    p.smth.diff1<-(p-smthp)/smthp
    p.smth.diff1<-p.smth.diff1[-length(p.smth.diff1)]
    delta<-tail(d,1)[1]
    for(i in 1:pred.days){
     
     
     
      probs1<-probs.init
      probs2<-probs.init
      probs.autocorr<-probs.init
      probs<-probs.init
     
      d2<-diff(p)/p[-nrow(p)]
      dtemp<-cbind(d[-nrow(d)],diff(d))
      dtemp2<-cbind(order(dtemp[,2]),dtemp[order(dtemp[,2]),])
      dtemp2<-cbind(dtemp2,dnorm(dtemp2[,3],tail(diff(d2),1),diff.sd))
     
      probs1[-nrow(d)]<-dtemp2[order(dtemp2[,1]),4]
      probs1<-probs1/sum(probs1)
      probs1[length(probs1)]<-1/length(probs1) # most recent change has no following acceleration; give it an average weight
     
      t<-(1:length(p))
      #Re-fit the loess trend, now including the simulated days appended to p
      smthp<-loess(p~t,degree=1, span=loess.span/length(p),evaluation=length(p),
                   family="gaussian",  control = loess.control(surface = "direct"))$fitted
     
      #probs2: weight each historical day by how close its deviation from the loess trend
      #was to the current simulated deviation from trend (Gaussian kernel, sd = loess.sd)
      p.smth.diff2<-(p-smthp)/smthp
      dtemp<-cbind(d,p.smth.diff1)
      dtemp3<-cbind(order(dtemp[,2]),dtemp[order(dtemp[,2]),])
      dtemp3<-cbind(dtemp3,dnorm(dtemp3[,3],tail(p.smth.diff2,1),loess.sd))
     
      probs2<-dtemp3[order(dtemp3[,1]),4]
      probs2<-probs2/sum(probs2)
     
      #Combine the two weightings; loess.weight controls how much the trend deviation matters
      probs<-probs1+loess.weight*probs2
     
     
      if(use.autocorr==T){
        #Optionally add extra weight to historical diffs close in value to the last sampled diff (delta)
        probs.autocorr<-dnorm(d[-which(d==delta)],delta,probs.auto.sd)
        probs.autocorr<-probs.autocorr/sum(probs.autocorr)
        probs[-which(d==delta)]<-probs[-which(d==delta)]+ probs.autocorr
       
      }
     
     
      probs<-probs*prior.probs
     
     
      if(live.plots==T){
        if(j==1 && s==1){
          plot.heatmaps.etc(spot.color="Black")
        }
      }
     
      if(length(which(probs>0))<1){probs<-probs.init}
      delta<-sample(d,1, prob=probs)
     
      if(live.plots==T){
        if(j==1 && s==1){
          plot.heatmaps.etc(spot.color="Blue")
        }
      }
     
     
     
      #Apply the sampled percent change to the most recent price; floor negative prices at zero
      new<-as.numeric(tail(p,1)+delta*tail(p,1))
      if(new<0){new=0}
      p<-rbind(p,new[1])
     
    }
    setTxtProgressBar(pb, j)
    p.out[,j]<-p
    if(live.plots==T && s==1 && j==1){dev.off()
                                      setwd(data.dir)
    }
  }
  close(pb)
  print(paste(s, "of", length(start.days), "Complete"))
  #End Simulation
 
 
 
  #Calculate means, medians, and hdis at each timepoint
  means=matrix(nrow=nrow(p.out),ncol=1)
  medians=matrix(nrow=nrow(p.out),ncol=1)
  HDIs95=matrix(nrow=nrow(p.out),ncol=2)
  HDIs67=matrix(nrow=nrow(p.out),ncol=2)
  for(i in 1:nrow(p.out)){
    means[i,]<-mean(p.out[i,])
    medians[i,]<-median(p.out[i,])
    HDIs95[i,]<-cbind(get.HDI(p.out[i,], .95)[1],get.HDI(p.out[i,], .95)[2])
    HDIs67[i,]<-cbind(get.HDI(p.out[i,], .67)[1],get.HDI(p.out[i,], .67)[2])
  }
  #
 
  p.out2<-p.out[-(1:nrow(gox)),]
  means2<-means[-(1:nrow(gox)),]
  medians2<-medians[-(1:nrow(gox)),]
  HDIs95<-HDIs95[-(1:nrow(gox)),]
  HDIs67<-HDIs67[-(1:nrow(gox)),]
 
  p.outs<-rbind(p.outs,p.out2)
  means2.out<-c(means2.out,means2)
  medians2.out<-c(medians2.out,medians2)
  HDIs95.out<-rbind(HDIs95.out,HDIs95)
  HDIs67.out<-rbind(HDIs67.out,HDIs67)
}

gox<-gox2[1:(nrow(gox2)-start.days[1]),]
means<-as.matrix(c(gox[,7],means2.out))
medians<-as.matrix(c(gox[,7],medians2.out))
HDIs95<-rbind(cbind(gox[,7],gox[,7]),HDIs95.out)
HDIs67<-rbind(cbind(gox[,7],gox[,7]),HDIs67.out)

goxtemp=matrix(nrow=nrow(gox),ncol=ncol(p.out))
goxtemp[,1:ncol(goxtemp)]<-gox[,7]

p.out<-rbind(goxtemp,p.outs)

#############################









####Plot predictions and posterior distributions
if(make.pngs==T){
  png(filename = "GoxPredict%03d.png", res=150,
      width = 1920, height = 1080, units = "px", pointsize = 12)
}else{
  dev.new()
}

layout(matrix(c(1,1,2,3,4,5,6,7,8),nrow=3, ncol=3, byrow=T))
if(
  diff(range(c(gox[max(1,nrow(gox)-3*pred.days):nrow(gox),7],
               means[nrow(gox):nrow(means),])))>300
){
  plot(0,0.1,
       #ylim=c(0.1,max(HDIs95[,2])), xaxt="n",
       ylim=c(0.1,max(HDIs95[,2])), xaxt="n", log="y",
       xlim=c(max(0,nrow(gox)-3*pred.days), nrow(p.out)), type="n",
       #xlab=paste("Days From 'Today' (", start.days[1], " Days Ago)", sep=""),
       xlab=paste("Days From Today"),
       ylab="USD/BTC", main=c("MtGox USD/BTC Predictions",
                              paste("Iterations:", iters)
       ), sub=paste("Use AutoCorrelation Data=", use.autocorr))
  axis(side=1,
       at=floor(seq(max(0,nrow(gox)-3*pred.days), nrow(p.out), length=6)),
       labels=floor(seq(max(0,nrow(gox)-3*pred.days), nrow(p.out), length=6)-nrow(gox2)
       ))
}else{
  plot(0,0.1,
       ylim=c(0.01,max(HDIs95[,2])), xaxt="n",
       #ylim=c(0.01,max(HDIs95[,2])), xaxt="n", log="y",
       xlim=c(max(0,nrow(gox)-3*pred.days), nrow(p.out)), type="n",
       #xlab=paste("Days From 'Today' (", start.days[1], " Days Ago)", sep=""),
       xlab=paste("Days From Today"),
       ylab="USD/BTC", main=c("MtGox USD/BTC Predictions",
                              paste("Iterations:", iters)
       ), sub=paste("Use AutoCorrelation Data=", use.autocorr))
  axis(side=1,
       at=floor(seq(max(0,nrow(gox)-3*pred.days), nrow(p.out), length=6)),
       labels=floor(seq(max(0,nrow(gox)-3*pred.days), nrow(p.out), length=6)-nrow(gox2)
       ))
}
for(i in seq(1,ncol(p.out),length=1000)){
  lines(p.out[,i],type="l", col=rainbow(iters, alpha=1/min(iters,10))[i])
}
legend("topleft", c("Mean","Median","95% HDI","67% HDI"),
       col=c("Black","Dark Grey","Black","Light Grey"), lty=c(1,1,2,2),
       bty="n", lwd=2, ncol=2, text.width=20, cex=.7
)

abline(v=nrow(gox))
abline(v=nrow(gox2))
points(nrow(gox)+floor(seq(1, start.days[1]+pred.days, length=6)),rep(.1,6), pch=2)

lines(nrow(as.matrix(gox[,7])): nrow(p.out),
      means[nrow(as.matrix(gox[,7])): nrow(p.out),], col="Black", lwd=3)

lines(nrow(as.matrix(gox[,7])): nrow(p.out),
      medians[nrow(as.matrix(gox[,7])): nrow(p.out),], col="Light Grey", lwd=3)

lines(nrow(as.matrix(gox[,7])): nrow(p.out),
      HDIs95[nrow(as.matrix(gox[,7])): nrow(p.out),1], col="Black", lwd=3, lty=2)
lines(nrow(as.matrix(gox[,7])): nrow(p.out),
      HDIs95[nrow(as.matrix(gox[,7])): nrow(p.out),2], col="Black", lwd=3, lty=2)

lines(nrow(as.matrix(gox[,7])): nrow(p.out),
      HDIs67[nrow(as.matrix(gox[,7])): nrow(p.out),1], col="Light Grey", lwd=3, lty=2)
lines(nrow(as.matrix(gox[,7])): nrow(p.out),
      HDIs67[nrow(as.matrix(gox[,7])): nrow(p.out),2], col="Light Grey", lwd=3, lty=2)
lines(as.matrix(gox2[,7]), lwd=3)
lines(as.matrix(gox2[,7]), lwd=1, col="Blue")



d.temp<-d

hist(100*d.temp,  freq=F, col="Green", ylim=c(0,max(density(100*d)$y)),
     breaks=seq(min(100*d.temp),1.1*max(100*d.temp),
                by=.025*diff(range(100*d.temp))),
     xlab="Percent Change",
     main=c("Day-to-day % Change",
            paste("95% HDI:",round(get.HDI(100*d.temp,.95)[1],1),
                  "-",round(get.HDI(100*d.temp,.95)[2],1)),
            paste("67% HDI:",round(get.HDI(100*d.temp,.67)[1],1),
                  "-",round(get.HDI(100*d.temp,.67)[2],1))
     ),
     sub=paste("Mean=",round(mean(100*d.temp),2), "Median=", round(median(100*d.temp),2))
)
lines(density(100*d.temp), lwd=3)
segments(get.HDI(100*d.temp,.95)[1],0,get.HDI(100*d.temp,.95)[2],0, lwd=5)
segments(get.HDI(100*d.temp,.67)[1],0,get.HDI(100*d.temp,.67)[2],0, col="Light Grey", lwd=3)




for(i in floor(seq(1, start.days[1]+pred.days, length=6))){
  y.max<-max(density(p.out[nrow(as.matrix(gox[,7]))+i,])$y)
  x.lim<-c(min(p.out[nrow(as.matrix(gox[,7]))+i,]),
           HDIs95[nrow(as.matrix(gox[,7]))+i,][2])
  hist(p.out[nrow(as.matrix(gox[,7]))+i,], freq=F,
       xlab="USD/BTC", col="Grey",ylim=c(0,y.max),
       xlim=x.lim,
       breaks=seq(min(p.out[nrow(as.matrix(gox[,7]))+i,]),
                  1.1*max(p.out[nrow(as.matrix(gox[,7]))+i,]),
                  by=.05*diff(x.lim)),
       sub=paste("Mean=", round(means[nrow(as.matrix(gox[,7]))+i,],2),
                 "Median=", round(medians[nrow(as.matrix(gox[,7]))+i,],2)
       ), main=""
  )
 
  title(main=c(paste(-(start.days[1]-i), "Days From Today"),
               paste("Range:",round(min(p.out[nrow(as.matrix(gox[,7]))+i,]),1),"-",
                     round(max(p.out[nrow(as.matrix(gox[,7]))+i,]),1)),
               paste("95% HDI:", round(HDIs95[nrow(as.matrix(gox[,7]))+i,][1],1), "-",
                     round(HDIs95[nrow(as.matrix(gox[,7]))+i,][2],1)),
               paste("67% HDI:", round(HDIs67[nrow(as.matrix(gox[,7]))+i,][1],1), "-",
                     round(HDIs67[nrow(as.matrix(gox[,7]))+i,][2],1))
  ),
        cex.main=1)
 
  lines(density(p.out[nrow(as.matrix(gox[,7]))+i,], n=1024),
        col="Black", lwd=3
  )
  if(back.testing==T){
    if((nrow(as.matrix(gox[,7]))+i)<nrow(gox)){
      if(is.finite(gox2[nrow(as.matrix(gox[,7]))+i,7])){
        if(HDIs95[nrow(as.matrix(gox[,7]))+i,][1]<gox2[nrow(as.matrix(gox[,7]))+i,7] &&
             HDIs95[nrow(as.matrix(gox[,7]))+i,][2]>gox2[nrow(as.matrix(gox[,7]))+i,7]){
         
          y.dens<-density(p.out[nrow(as.matrix(gox[,7]))+i,], n=1024)$y[which(density(p.out[nrow(as.matrix(gox[,7]))+i,], n=1024)$x>
                                                                                gox2[nrow(as.matrix(gox[,7]))+i,7])[1]]
          points(gox2[nrow(as.matrix(gox[,7]))+i,7],y.dens, pch=16, col="Red", cex=1.5)
          text(gox2[nrow(as.matrix(gox[,7]))+i,7],y.dens,gox2[nrow(as.matrix(gox[,7]))+i,7], pos=4,
               offset=1, col="Blue", font=2)
        }else{
          if(gox2[nrow(as.matrix(gox[,7]))+i,7]<x.lim[1]){
            text(x.lim[1],y.max,gox2[nrow(as.matrix(gox[,7]))+i,7], pos=4,
                 offset=1, col="Red", font=2)
          }else{
            text(.9*x.lim[2],y.max,gox2[nrow(as.matrix(gox[,7]))+i,7], pos=4,
                 offset=1, col="Red", font=2)
          }
        }
      }
    }
  }
  segments(HDIs95[nrow(as.matrix(gox[,7]))+i,][1], 0,
           HDIs95[nrow(as.matrix(gox[,7]))+i,][2],0, lwd=5)
  segments(HDIs67[nrow(as.matrix(gox[,7]))+i,][1], 0,
           HDIs67[nrow(as.matrix(gox[,7]))+i,][2],0, col="Light Grey", lwd=3)
 
}

if(make.pngs==T){dev.off()}
#


#Plot example traces
if(make.pngs==T){
  png(filename = "GoxExamples%03d.png", res=150,
      width = 1920, height = 1080, units = "px", pointsize = 12)
}else{
  dev.new()
}
par(mfrow=c(1,1))
examps<-floor(runif(num.examps,1,ncol(p.out)))

plot(0,0.1,
     ylim=c(0.001,max(p.out[,examps],as.matrix(gox2[,7]))), xaxt="n",
     xlim=c(max(0,nrow(gox)-3*pred.days), nrow(p.out)), type="n",
     xlab="Days From Today",ylab="USD/BTC",
     main=c("MtGox USD/BTC Predictions", paste(num.examps,"Example Scenarios"))
)

for(i in examps){
  lines(p.out[,i],type="l", col=rainbow(num.examps, alpha=1)[which(examps==i)], lwd=2)
}
axis(side=1,
     at=floor(seq(max(0,nrow(gox)-3*pred.days), nrow(p.out), length=6)),
     labels=floor(seq(max(0,nrow(gox)-3*pred.days), nrow(p.out), length=6)-nrow(gox)
     ))
if(back.testing==T){
  lines(as.matrix(gox2[,7]), lwd=5)
}else{
  lines(as.matrix(gox[,7]), lwd=5)
}
abline(v=nrow(gox))
if(make.pngs==T){dev.off()}
#


#graphics.off()



bb113 (OP)
Hero Member
*****
Offline Offline

Activity: 728
Merit: 500


View Profile
September 23, 2013, 07:35:05 PM
 #31

Forgot the donation address.

If this helps anyone make some money or at least learn some new approaches, please donate a bit:

13HesWvPjuPYTo3owGV2kE1avStSemHeYX
bb113 (OP)
Hero Member
*****
Offline Offline

Activity: 728
Merit: 500


View Profile
September 25, 2013, 06:43:04 PM
 #32

hmm, too much info to be useful? or the begging was obnoxious maybe?

The range has been good so far for the last week.
bb113 (OP)
Hero Member
*****
Offline Offline

Activity: 728
Merit: 500


View Profile
September 25, 2013, 06:51:05 PM
 #33

hmm, too much info to be useful? or the begging was obnoxious maybe?

The range has been good so far for the last week.

Perhaps you can use it to create returns for yourself then.

No money to trade right now. But of course I will build on it in the future and do so.
bb113 (OP)
Hero Member
*****
Offline Offline

Activity: 728
Merit: 500


View Profile
September 25, 2013, 07:01:20 PM
 #34

I was asked via PM for better instructions, so here they are:
Basic instructions to run the first script.

1) Choose: "Download R 3.0.1 for Windows" at http://ftp.ussg.iu.edu/CRAN/bin/windows/base/
2) Find downloaded program and Install
3) Open R 3.0.1
4) Select "File->New Script" from menu bar at the top
5) Copy the first script below and paste into "Untitled -R editor" window
If running the first time:
5a) Uncomment all of the "install.packages()" commands (lines 2-8)
5b) Choose a working directory for the data file (line 22)
5c) Set the dl.all variable to TRUE (dl.all=T) (line 26)
6) Right Click in the "Untitled -R editor" window and choose "Select All"
7) Right Click in window and choose "Run Line or Selection"
8) Wait for the results chart to appear; it will take a while to download all the data from bitcoincharts the first time.
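If it helps, steps 5a-5c amount to something like this near the top of the script (a sketch only: "zoo" is just a stand-in for whichever packages are actually named on lines 2-8, and the folder path is only an example):

install.packages("zoo")   #5a) repeat for each package named on lines 2-8, then re-comment
setwd("C:/gox-data")      #5b) line 22: whatever folder you want the data file saved in
dl.all<-T                 #5c) line 26: download the full bitcoincharts history on the first run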
oda.krell
Legendary
*
Offline Offline

Activity: 1470
Merit: 1007



View Profile
September 25, 2013, 07:04:40 PM
 #35

hmm, too much info to be useful? or the begging was obnoxious maybe?

The range has been good so far for the last week.

Right, forgot I wanted to say thanks. Don't have much time playing around with it, but I will eventually. Also, keep on posting here... good to have you around (no homo) (k, maybe sort of homo) (but balls didn't touch!) (I'm ruined :/)

Not sure which Bitcoin wallet you should use? Get Electrum!
Electrum is an open-source lightweight client: fast, user friendly, and 100% secure.
Download the source or executables for Windows/OSX/Linux/Android from, and only from, the official Electrum homepage.
bb113 (OP)
Hero Member
*****
Offline Offline

Activity: 728
Merit: 500


View Profile
September 25, 2013, 07:09:42 PM
 #36

hmm, too much info to be useful? or the begging was obnoxious maybe?

The range has been good so far for the last week.

Perhaps you can use it to create returns for yourself then.

No money to trade right now. But of course I will build on it in the future and do so.

One very difficult thing I have had to learn is that even if you can project price, that alone is not enough information to understand how to manage risks in your trades. Money management strategy is a whole other animal.

The way I envision this being useful is to set some estimates on upper and lower bounds, assuming bitcoin behaves as it has in the past. It is one step up from drawing lines on charts. Since the past data consists of a 1000% increase in price over four years, that is what this will predict by default. Incorporating estimates about the future would be needed to make it more valid, but that is much more nebulous. Then there is some sort of cost-benefit analysis for how much I am willing to risk, which hasn't been thought out yet. I am not poor from badly trading bitcoins, though, in case that's what you are thinking.
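To put rough numbers on those bounds, something like this is what I have in mind; it just reads the last row of the HDI matrices the script already builds (a sketch only, run after the script finishes):

horizon<-nrow(p.out)   #last simulated day in the combined output matrix
cat("95% HDI:", round(HDIs95[horizon,1],1), "-", round(HDIs95[horizon,2],1),
    "  67% HDI:", round(HDIs67[horizon,1],1), "-", round(HDIs67[horizon,2],1), "\n")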

bb113 (OP)
Hero Member
*****
Offline Offline

Activity: 728
Merit: 500


View Profile
September 25, 2013, 07:34:48 PM
 #37


Measuring that risk accurately for your own positions is critical. Looking at a generic report, for instance, won't necessarily tell you about risks for all specific positions since everyone has had different entry/exit points in the past. There are now Wall Street firms that this kind of risk metric can be outsourced to, but revealing all of your positions in a portfolio can be risky, particularly if they are very large. Which is why, until fairly recently, almost all investment firms have elected to perform this function in-house.

That is a field I have never looked deeply into. However, it doesn't make much sense to me that past decisions (entry/exit points) should influence future decisions; that seems like the sunk cost fallacy.
bb113 (OP)
Hero Member
*****
Offline Offline

Activity: 728
Merit: 500


View Profile
September 25, 2013, 07:51:35 PM
 #38


Measuring that risk accurately for your own positions is critical. Looking at a generic report, for instance, won't necessarily tell you about risks for all specific positions since everyone has had different entry/exit points in the past. There are now Wall Street firms that this kind of risk metric can be outsourced to, but revealing all of your positions in a portfolio can be risky, particularly if they are very large. Which is why, until fairly recently, almost all investment firms have elected to perform this function in-house.

That is a field I have never looked deeply into. However, it doesn't make much sense to me that past decisions (entry/exit points) should influence future decisions; that seems like the sunk cost fallacy.

Well, that is not something you have to guess about. It's testable.

So there are two ways of looking at it.
1) "Frequentist": Come up with a plan beforehand, stick with it, and try to maximize gains/minimize losses in the long run. I am not too familiar with decision theory but there are different ways of doing this: eg minimax is minimize the maximum loss.
2) "Bayesian": Modify the plan as time progresses by updating the model and cost-benefit analysis at each point where a decision is made.

In the first case I can see where past decisions should play a role, but not in the second. I am thinking more along the lines of the "bayesian" approach while you are talking about thinking about long run probabilities. Is this what you mean, or am I putting words in your mouth?
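To make option 2 concrete, here is a toy sketch of what I mean, using the simulated draws from the script earlier in the thread (the 0.6 cutoff is just an arbitrary example, and none of this is in the posted code):

horizon.draws<-p.out[nrow(p.out),]          #simulated prices on the final predicted day
entry<-as.numeric(tail(gox2[,7],1))         #last observed price
p.up<-mean(horizon.draws>entry)             #posterior probability of ending above today's price
expected.gain<-mean(horizon.draws-entry)    #expected change in USD/BTC over the window
if(p.up>0.6 && expected.gain>0){decision<-"hold"}else{decision<-"reduce exposure"}
#then re-run the simulation and repeat this at the next decision point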


bb113 (OP)
Hero Member
*****
Offline Offline

Activity: 728
Merit: 500


View Profile
September 25, 2013, 08:38:38 PM
 #39


Measuring that risk accurately for your own positions is critical. Looking at a generic report, for instance, won't necessarily tell you about risks for all specific positions since everyone has had different entry/exit points in the past. There are now Wall Street firms that this kind of risk metric can be outsourced to, but revealing all of your positions in a portfolio can be risky, particularly if they are very large. Which is why, until fairly recently, almost all investment firms have elected to perform this function in-house.

That is a field I have never looked deeply into. However, it doesn't make much sense to me that past decisions (entry/exit points) should influence future decisions; that seems like the sunk cost fallacy.

Well, that is not something you have to guess about. It's testable.

So there are two ways of looking at it.
1) "Frequentist": Come up with a plan beforehand, stick with it, and try to maximize gains/minimize losses in the long run. I am not too familiar with decision theory but there are different ways of doing this: eg minimax is minimize the maximum loss.
2) "Bayesian": Modify the plan as time progresses by updating the model and cost-benefit analysis at each point where a decision is made.

In the first case I can see where past decisions should play a role, but not in the second. I am thinking more along the lines of the "bayesian" approach while you are talking about thinking about long run probabilities. Is this what you mean, or am I putting words in your mouth?




It very much comes into the Bayesian approach as posterior analysis. That is what posterior analysis is all about, making correction for error.

I think that is what I mean by "update model and cost-benefit analysis"? Either you are being cryptic or we are talking past one another due to my informal education in this stuff. Anyway, yes, figuring out a personalized risk metric/profile is a crucial part of decision making, and I am probably underestimating the difficulty of doing this accurately. It is definitely not incorporated into what has been posted in this thread, but it could be built on top of it.
bb113 (OP)
Hero Member
*****
Offline Offline

Activity: 728
Merit: 500


View Profile
September 25, 2013, 09:44:12 PM
 #40


What I am looking at is an indicator which shows the likelihood that an intermediate reversal has occurred. It works by comparing how BTCOBV and $OBV track compared to price development in order to detect what regime of feedback effects is dominant.

Then I go back to look at the distribution of the change in price from bin to bin in the last leg before the last reversal and take the complement of it to form a random variable going forward. There are actually 5 random variables involved, which have some dependency on the development of the reversal indicator, to provide a random trial of where it might turn again in the future; this is the output of the trial.

I am not actually certain that it is meaningful to assign a point probability to a particular outcome. It would be more meaningful to say, for instance, that there is a percentile case for price being above or below a target, or before or after a target. The median case as a rule of thumb would be the 50th percentile presuming a well-behaved Gaussian distribution. But I don't think that is a safe assumption. For one thing, I know that there are dependencies that I don't have worked into the (Markov chain) Monte Carlo yet. 

Ok, let me see if I can finally figure out what you are doing.
1) You take the price history and divide it up (bin it) into intervals based on amount of volume elapsed.
2) You then take the difference in price from bin to bin and have some distribution of "price changes" (not sure if absolute or percentage wise).
3) Each of these price changes are associated with patterns in the bitcoin and USD volume time courses.
4) You go back to "the last reversal" which is defined in some way you do not mention.
5) You take the pattern of volume from bin to bin between that last reversal and the reversal before it.
6) You then compare the pattern of volume from step 5 to patterns since "the last reversal" and try to predict what the next volume bin should be based on similarity to the previously observed sequences.
7) You monte carlo sample a volume bin and the corresponding price change using weights determined by the similarity to previous patterns.
8) Update the prediction one step into the future and repeat step 7

So what you are actually sampling is the next volume bin, based on previous sequences of volume bins?
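If I have steps 6-7 right, the sampling step would be something along these lines (my guess only, with made-up toy data; none of these names come from your description):

bin.changes<-rnorm(200,0,0.02)     #toy stand-in for the historical bin-to-bin price changes
pattern.dist<-abs(rnorm(200))      #toy stand-in for how far each bin's preceding volume pattern is from the recent one
w<-dnorm(pattern.dist,0,1)         #closer pattern matches get larger weights
w<-w/sum(w)
next.change<-sample(bin.changes,1,prob=w)   #step 7: weighted draw of the next bin's price change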



bb113 (OP)
Hero Member
*****
Offline Offline

Activity: 728
Merit: 500


View Profile
September 25, 2013, 11:24:18 PM
 #41

So in the example you just gave, when you sample the first step the chance of choosing any one of the "closest 10 samples" is 1/10 or 10%?
bb113 (OP)
Hero Member
*****
Offline Offline

Activity: 728
Merit: 500


View Profile
September 26, 2013, 03:49:18 PM
 #42

So in the example you just gave, when you sample the first step the chance of choosing any one of the "closest 10 samples" is 1/10 or 10%?

Or whatever testing shows to be the most effective sample size. But yes, the first steps would refer to the shortest history.

I was trying to get at whether all bins were equally likely to be selected or not.
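In code, the distinction I was getting at is just this (illustrative only):

closest10<-rnorm(10)                               #stand-in for the 10 closest samples
pick.uniform<-sample(closest10,1)                  #each of the 10 has probability 1/10
w<-10:1                                            #e.g. nearer matches weighted more heavily
pick.weighted<-sample(closest10,1,prob=w/sum(w))   #selection probabilities no longer equal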
bb113 (OP)
Hero Member
*****
Offline Offline

Activity: 728
Merit: 500


View Profile
October 02, 2013, 12:04:30 PM
 #43

it seems that projections based on samples from highly correlated sequences tend to produce a higher posterior divergence than those with a lower correlation.

OK, almost there. So what do you mean by "sequence"? A sequence of bins? Or are you referring to the sequence of price moves within a bin?