Bitcoin Forum
Author Topic: in defense of technical analysis  (Read 3676 times)
bb113
Hero Member
Activity: 728
Merit: 500
March 08, 2013, 11:18:09 PM
#41



First of all, we must define that science is the pursuit of objective understanding of the natural world, first and foremost.
Then, IF this understanding is true, it should be testable and reproducible; therefore, it should have predictive power.
And IF this understanding is false, it should be provable as false.
This is the key requirement for being scientific.
Science + time refine this knowledge. High accuracy is not a prerequisite for being scientific (nor is its absence grounds for being discredited); rather, it is a refinement of the result of this testing and retesting process over time to confirm, improve, or reject a hypothesis or a theory. It is really not the result but the process that matters.



Yes, please show me a paper from any of the fields you listed that falsifies a real prediction. As I said, as practiced, the things getting falsified are worthless because we know them to be false before trying to falsify them. It would be the same as if TA were deemed accurate because it predicted the price would not be exactly the same at this second tomorrow as it is now. So it is just a 50-50 chance of guessing right, up or down.

That is a grave statement and a deep misunderstanding of the scientific process.
First of all, what is your profession? I just want to know what kind of audience I am responding to.
Are you a graduate student in the hard sciences, an academic, or just an average Joe who is a fan of the sciences?
If you are one of the first, I am appalled at the lack of epistemological understanding. (This is something I have also noticed among the grad students at my school.)

You can never know what is false unless it is tested. If your mentality were widespread, all counterintuitive hypotheses would be rejected from the get-go. That kind of prejudgement is very harmful to scientific discovery, which makes me think that you are not actually involved in any scientific discipline; either that, or you are too new to this.

If you want to know about research in the social sciences, simply subscribe to social science journals yourself. If you work in academia, you should have free access to all of them. There are plenty of social disciplines that use quantitative research.

You are severely misunderstanding me. Take any two groups of people, compare them along any measure, and you will find they are different if you look close enough. I "know" this as much as I can know anything. This is what occurs in the social sciences. Here is a good description of the problem from back in 1967:

http://www.psych.ucsb.edu/~janusonis/meehl1967.pdf
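A quick simulation (a hypothetical sketch added for illustration, not part of the original post) of the point made here and in the Meehl paper: give two groups a trivially small but nonzero true difference, and a test of the nil hypothesis of exactly zero difference will be rejected almost surely once the sample is large enough.

Code:
# Hypothetical illustration: any two groups differ a little, so the nil
# hypothesis of "zero difference" gets rejected once n is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_difference = 0.02  # trivially small true effect, in standard-deviation units

for n in (50, 500, 5_000, 50_000):
    group_a = rng.normal(0.0, 1.0, n)
    group_b = rng.normal(true_difference, 1.0, n)
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"n = {n:>6}   p = {p_value:.4f}")  # p-value trends toward 0 as n grows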
bb113
Hero Member
Activity: 728
Merit: 500
March 09, 2013, 01:48:03 AM
#42

You can start with SSRN and then the Journal of Experimental Social Psychology (ISSN: 0022-1031), Personality and Social Psychology Review (ISSN: 1088-8683), Journal of Personality and Social Psychology (ISSN: 0022-3514), and Experimental Economics (ISSN: 1386-4157).

Finally, I invite you to read this article published in Science:
http://www.ucd.ie/geary/static/publications/workingpapers/gearywp200935.pdf
Have fun.

I just noticed this edit. From that paper:

Quote
For example, if a firm pays a higher wage or a subject provides higher effort, costs are higher and final earnings are lower.

This type of thinking is exactly what I am talking about. "Higher" this, "lower" that. It's 50-50 all the way. The falsified hypothesis is one of zero effect of A on B. In social science, A always affects B in some way, so that hypothesis is worthless. This is what makes it not really "science": the data may be good, but the way it is interpreted is not scientific. No matter how many confounds there are, it should be possible to estimate some interval for the effect of A on B that can be narrowed over time as more data is collected. This does not seem common in the social sciences, for whatever reason.
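To make the interval idea concrete, here is a hypothetical sketch (my own illustration, not from the thread): report a confidence interval for the size of the effect of A on B and let it narrow as data accumulates, instead of only testing whether the effect is exactly zero.

Code:
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.30  # assumed true effect of A on B, in standard-deviation units

for n in (25, 250, 2_500, 25_000):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)
    diff = treated.mean() - control.mean()
    se = np.sqrt(control.var(ddof=1) / n + treated.var(ddof=1) / n)
    lo, hi = diff - 1.96 * se, diff + 1.96 * se  # approximate 95% confidence interval
    print(f"n = {n:>6}   effect = {diff:+.3f}   95% CI [{lo:+.3f}, {hi:+.3f}]")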

Have you ever seen a social science experiment replicated exactly?

Note: this all goes for biomedical science as well. The only difference is that it is often easier to control confounds there.
nanopene
Newbie
Activity: 49
Merit: 0
March 09, 2013, 02:19:47 AM
Last edit: March 09, 2013, 02:46:30 AM by nanopene
#43



First of all, we must define that science is the pursuit of objective understanding of the natural world, first and foremost.
Then, IF this understanding is true, it should be testable and reproducible; therefore, it should have predictive power.
And IF this understanding is false, it should be provable as false.
This is the key requirement for being scientific.
Science + time refine this knowledge. High accuracy is not a prerequisite for being scientific (nor is its absence grounds for being discredited); rather, it is a refinement of the result of this testing and retesting process over time to confirm, improve, or reject a hypothesis or a theory. It is really not the result but the process that matters.



Yes, please show me a paper from any of the fields you listed that falsifies a real prediction. As I said, as practiced, the things getting falsified are worthless because we know them to be false before trying to falsify them. It would be the same as if TA were deemed accurate because it predicted the price would not be exactly the same at this second tomorrow as it is now. So it is just a 50-50 chance of guessing right, up or down.

That is a grave statement and a deep misunderstanding of the scientific process.
First of all, what is your profession? I just want to know what kind of audience I am responding to.
Are you a graduate student in the hard sciences, an academic, or just an average Joe who is a fan of the sciences?
If you are one of the first, I am appalled at the lack of epistemological understanding. (This is something I have also noticed among the grad students at my school.)

You can never know what is false unless it is tested. If your mentality were widespread, all counterintuitive hypotheses would be rejected from the get-go. That kind of prejudgement is very harmful to scientific discovery, which makes me think that you are not actually involved in any scientific discipline; either that, or you are too new to this.

If you want to know about research in the social sciences, simply subscribe to social science journals yourself. If you work in academia, you should have free access to all of them. There are plenty of social disciplines that use quantitative research.

You are severely misunderstanding me. Take any two groups of people, compare them along any measure, and you will find they are different if you look close enough. I "know" this as much as I can know anything. This is what occurs in the social sciences. Here is a good description of the problem from back in 1967:

http://www.psych.ucsb.edu/~janusonis/meehl1967.pdf

Funny, I needed this paper to troll my professors :)
To be honest, I have my own criticisms as well, but I wouldn't go as far as to say that the soft sciences are not real science.
It is science: not pseudo, not fringe, it IS science. At least psychology is a field that has been serious about it, and within psychology, neuropsychology, behavioral neuroscience, comparative psychology, and neurobiology are the hardest in the spectrum of psychological subdisciplines. In fact, there is nothing really "soft" about them; they are all very well versed in NHST (null-hypothesis significance testing), or at least they should be, since it is a requirement for research in that line of study.
I will read that paper carefully and dissect it when I have more time. I love it, thank you.

Now, responding to your remark: "compare them along any measure and you will find they are different if you look close enough". There are always differences; that's granted. What matters is whether the difference is statistically significant. To be sure, there are different experimental designs to filter that out (changing-criterion, reversal, and others that are very well known in medical research, such as double-blind testing and control groups). We use Analysis of Variance to tackle that problem, which is the same statistical tool used in any other "hard" science.
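For readers unfamiliar with it, here is a minimal sketch of the one-way Analysis of Variance mentioned above, using made-up data (an illustration only, not anything from the thread):

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
group_a = rng.normal(5.0, 1.0, 30)  # e.g. control condition
group_b = rng.normal(5.5, 1.0, 30)  # treatment 1
group_c = rng.normal(6.0, 1.0, 30)  # treatment 2

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # small p suggests the group means differ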

The only difference from the hard sciences is that we are much younger and still growing.
By the way, we are way off topic.

PS: Let me share this paper: http://www.statpower.net/Steiger%20Biblio/Steiger04b.pdf
omarabid
Full Member
Activity: 133
Merit: 100
March 09, 2013, 05:30:53 PM
#44

So a TA analyst (I won't judge TA as a science per se; I'll keep it about the people) should make correct predictions most of the time (or at least often enough to break even).

So here is my suggestion: every TA on Btctalk keeps a note of his predictions in a text file (sure, you'll post about them here, but I don't want to go through hundreds of posts) and then presents them to the Btctalk community to judge his TA skills.

The file format is quite simple:

dd/mm/yyyy: (predicted price for the day) | (predicted future trend)

Who's in?

We should do better than that, and create a speculation game/competition. Everyone starts with 100 virtual bitcoins (is that an oxymoron too?) and trades them using mtgox prices. After a few months, we see who comes out ahead.

That's basically the same thing (he should just add the quantity invested out of the 100 BTC, maybe as a %).

Quote
dd/mm/yyyy: (predicted price for the day) | (predicted future trend) | (% of your investment*)

* 100 is the base. You start with 100.
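As an illustration of how such a log could be checked, here is a hypothetical sketch; the function names, the exact field meanings, and the mean-absolute-error scoring are my own assumptions, not anything agreed on in the thread.

Code:
# Each line: dd/mm/yyyy: <predicted price> | <predicted trend> | <% of investment>
from datetime import datetime

def parse_log(path):
    predictions = []
    with open(path) as f:
        for line in f:
            date_part, rest = line.strip().split(":", 1)
            price, trend, pct = (field.strip() for field in rest.split("|"))
            predictions.append({
                "date": datetime.strptime(date_part.strip(), "%d/%m/%Y"),
                "price": float(price),
                "trend": trend,  # e.g. "up" or "down"
                "pct_invested": float(pct),
            })
    return predictions

def score(predictions, actual_prices):
    """actual_prices: a dict mapping date -> actual price (assumed to be supplied)."""
    errors = [abs(p["price"] - actual_prices[p["date"]])
              for p in predictions if p["date"] in actual_prices]
    return sum(errors) / len(errors) if errors else None  # mean absolute error

# Example log line: 09/03/2013: 47.50 | up | 25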

So where are all these TA analysts? 8)
bb113
Hero Member
Activity: 728
Merit: 500
March 09, 2013, 08:22:32 PM
#45



Now, responding to your remark: "compare them along any measure and you will find they are different if you look close enough". There are always differences; that's granted. What matters is whether the difference is statistically significant. To be sure, there are different experimental designs to filter that out (changing-criterion, reversal, and others that are very well known in medical research, such as double-blind testing and control groups). We use Analysis of Variance to tackle that problem, which is the same statistical tool used in any other "hard" science.


I would suggest you stop using those. ANOVAs rely on assumptions (e.g. normality) that are either known to be false when describing human behaviour or impossible to prove. Do permutation testing instead.
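A minimal sketch of the permutation test being suggested, assuming a simple two-group comparison with the difference in means as the test statistic (an illustration, not a prescription):

Code:
import numpy as np

def permutation_test(x, y, n_permutations=10_000, seed=0):
    """Two-sided test of whether x and y differ, without assuming normality."""
    rng = np.random.default_rng(seed)
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # randomly relabel the group memberships
        diff = abs(pooled[:len(x)].mean() - pooled[len(x):].mean())
        count += diff >= observed
    return (count + 1) / (n_permutations + 1)  # permutation p-value

# Example with small, clearly non-normal samples
x = np.array([1.0, 1.2, 0.9, 5.0, 1.1])
y = np.array([2.0, 2.3, 1.9, 2.1, 6.0])
print(permutation_test(x, y))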
nanopene
Newbie
Activity: 49
Merit: 0
March 10, 2013, 03:36:03 AM
#46



Now, responding to your remark: "compare them along any measure and you will find they are different if you look close enough". There are always differences; that's granted. What matters is whether the difference is statistically significant. To be sure, there are different experimental designs to filter that out (changing-criterion, reversal, and others that are very well known in medical research, such as double-blind testing and control groups). We use Analysis of Variance to tackle that problem, which is the same statistical tool used in any other "hard" science.


I would suggest you stop using those. ANOVAs rely on assumptions (e.g. normality) that are either known to be false when describing human behaviour or impossible to prove. Do permutation testing instead.

Well, that will depend on the type and nature of the experiments being performed, but thanks for the suggestion ;)
Are you a statistician?
bb113
Hero Member
Activity: 728
Merit: 500
March 10, 2013, 02:17:04 PM
#47



Now, responding to your remark: "compare them along any measure and you will find they are different if you look close enough". There are always differences; that's granted. What matters is whether the difference is statistically significant. To be sure, there are different experimental designs to filter that out (changing-criterion, reversal, and others that are very well known in medical research, such as double-blind testing and control groups). We use Analysis of Variance to tackle that problem, which is the same statistical tool used in any other "hard" science.


I would suggest you stop using those. ANOVAs rely on assumptions (e.g. normality) that are either known to be false when describing human behaviour or impossible to prove. Do permutation testing instead.

Well, that will depend on the type and nature of the experiments being performed, but thanks for the suggestion ;)
Are you a statistician?

No, just someone who deals with similar situations and realized that ANOVAs, etc., are not the correct tool for that job. People just use them because it's the only thing they were ever taught.

Let's stop this off-topic conversation, though.