Bitcoin Forum

Other => Politics & Society => Topic started by: bb113 on December 22, 2012, 06:01:07 AM



Title: Public Perception of Science
Post by: bb113 on December 22, 2012, 06:01:07 AM
From what I have observed most public debate regarding science revolves around two issues:

1) Climate Change due to human influence on the environment
2) Evolution of life on Earth due to long term natural selection

How do you determine what to believe (or not) regarding these theories?
What kind of evidence would convince you to change your mind?
Why do you place trust (or not) in the consensus of the experts in these fields?
Given infinite resources, how would you determine the "truth"?


Title: Re: Public Perception of Science
Post by: Lethn on December 22, 2012, 06:46:49 AM
My answers :D :


Climate change due to human influence hasn't been properly proved either way. We haven't even bothered trying to find and explore planets with climates close to our own, so why do people so readily take a side on this issue?

Evolution has already been determined to be true since Darwin; religious people find a way to deny anything.

edit: hmmm, that's a pretty long list. I'll have to take more time on the answers for the other stuff; I'll edit this post to answer the other four things, so bear with me.

How do you determine what to believe (or not) regarding these theories?


When I determine what to believe, I look at the evidence presented (if there is any actual fact-based evidence; I'll explain that in the next bit). If it contains convincing numbers or repeating examples throughout history, then that is something I will base my theories on. As someone who is very interested in economics and now actively trading in tiny amounts on the stock market, I can tell you it works as a way of proving a theory right. As we have been told, history often repeats itself, but what is much more reliable is mathematics that hasn't been messed with in any way by political lobbies. This is why I haven't taken a side on the climate change issue: both sides have been known to use propaganda rather than repetition and math to make their point.

What kind of evidence would convince you to change your mind?

It's hard to answer this one, because personally I can never trust anything 100%; that's how human beings are. There will always be things and situations that break a pattern; it can be something predictable, but then something crazy or mind-blowing can happen that completely throws off the current evidence. That's just the way humanity is, though, and I've learned to accept that. On the evolution side of things, I think the only way I could be convinced to take a different view is if God himself appeared before me and presented everything he'd done, but so far that just hasn't happened, and I doubt it ever will.

In regards to climate change, I think the two sides are about even in the arguments they make, because they keep finding ways to one-up each other every now and then. For instance, people who don't believe humans are doing anything have pointed out that Mars has had its ice caps melt before, and you also have the fact that the sun is apparently burning more brightly than usual. Then there are the people on the opposite side who present evidence that the gases we emit are being poured into the atmosphere, which is also true. So, to be honest, I don't think either side has won yet, and I'm just going to sit on the sidelines and observe like an intelligent person rather than jump to conclusions.

Why do you place trust (or not) in the consensus of the experts in these fields?

This seems to be the same question as the one before. As I stated: repetition of history/events, and math. This is why we do experiments in science, so we can repeat the results constantly. The problem, though, is that there are people who will always deny this kind of evidence.

Given infinite resources, how would you determine the "truth"?

Lots of research and experimentation; that's what science is about. But then again, there are some things we won't ever find the answer to.

Edit: there we go, didn't take as long as I thought it would :P


Title: Re: Public Perception of Science
Post by: CountSparkle on December 22, 2012, 06:59:05 AM
The first thing I noticed is that both of those topics have to do with gradual change over time. Maybe it's a hard-wired brain thing, and no amount of evidence can convince them? (Same as with birthers, truthers, and religious nuts.)


Title: Re: Public Perception of Science
Post by: organofcorti on December 22, 2012, 07:44:15 AM
Given infinite resources it's simple to determine the truth. In the first case, duplicate the earth at the point just before human industry began. Then let the two histories proceed - one without human industry and one with. Then measure outcomes.

In the second case, return to a time before any animals existed. Then follow the history of the body plan and DNA of every single lifeform. Note species that evolved from other species.

With infinite resources, anything that is in concept provable is provable.

A large majority of scientists who specialise in that field disagreeing with climate change or evolution would be sufficient to make me doubt them.


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 07:50:54 AM
Given infinite resources it's simple to determine the truth. In the first case, duplicate the earth at the point just before human industry began. Then let the two histories proceed - one without human industry and one with. Then measure outcomes.

In the second case, return to a time before any animals existed. Then follow the history of the body plan and DNA of every single lifeform. Note species that evolved from other species.

With infinite resources, anything that is in concept provable is provable.

A large majority of scientists who specialise in that field disagreeing with climate change or evolution would be sufficient to make me doubt them.

This experiment would take a long time...


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 07:52:44 AM
My answers :D :


Climate change due to human influence hasn't been properly proved either way. We haven't even bothered trying to find and explore planets with climates close to our own, so why do people so readily take a side on this issue?

Evolution has already been determined to be true since Darwin; religious people find a way to deny anything.

edit: hmmm, that's a pretty long list. I'll have to take more time on the answers for the other stuff; I'll edit this post to answer the other four things, so bear with me.

How do you determine what to believe (or not) regarding these theories?


When I determine what to believe, I look at the evidence presented (if there is any actual fact-based evidence; I'll explain that in the next bit). If it contains convincing numbers or repeating examples throughout history, then that is something I will base my theories on. As someone who is very interested in economics and now actively trading in tiny amounts on the stock market, I can tell you it works as a way of proving a theory right. As we have been told, history often repeats itself, but what is much more reliable is mathematics that hasn't been messed with in any way by political lobbies. This is why I haven't taken a side on the climate change issue: both sides have been known to use propaganda rather than repetition and math to make their point.

What kind of evidence would convince you to change your mind?

It's hard to answer this one, because personally I can never trust anything 100%; that's how human beings are. There will always be things and situations that break a pattern; it can be something predictable, but then something crazy or mind-blowing can happen that completely throws off the current evidence. That's just the way humanity is, though, and I've learned to accept that. On the evolution side of things, I think the only way I could be convinced to take a different view is if God himself appeared before me and presented everything he'd done, but so far that just hasn't happened, and I doubt it ever will.

In regards to climate change, I think the two sides are about even in the arguments they make, because they keep finding ways to one-up each other every now and then. For instance, people who don't believe humans are doing anything have pointed out that Mars has had its ice caps melt before, and you also have the fact that the sun is apparently burning more brightly than usual. Then there are the people on the opposite side who present evidence that the gases we emit are being poured into the atmosphere, which is also true. So, to be honest, I don't think either side has won yet, and I'm just going to sit on the sidelines and observe like an intelligent person rather than jump to conclusions.

Why do you place trust (or not) in the consensus of the experts in these fields?

This seems to be the same question as the one before. As I stated: repetition of history/events, and math. This is why we do experiments in science, so we can repeat the results constantly. The problem, though, is that there are people who will always deny this kind of evidence.

Given infinite resources, how would you determine the "truth"?

Lots of research and experimentation; that's what science is about. But then again, there are some things we won't ever find the answer to.

Edit: there we go, didn't take as long as I thought it would :P

I agree with most of what you have put forward but don't understand why you are so convinced of evolution but not climate science.


Title: Re: Public Perception of Science
Post by: myrkul on December 22, 2012, 07:53:28 AM
This experiment would take a long time...

I'd point out that time is a "resource."


Title: Re: Public Perception of Science
Post by: organofcorti on December 22, 2012, 07:54:20 AM
Given infinite resources it's simple to determine the truth. In the first case, duplicate the earth at the point just before human industry began. Then let the two histories proceed - one without human industry and one with. Then measure outcomes.

In the second case, return to a time before any animals existed. Then follow the history of the body plan and DNA of every single lifeform. Note species that evolved from other species.

With infinite resources, anything that is in concept provable is provable.

A large majority of scientists who specialise in that field disagreeing with climate change or evolution would be sufficient to make me doubt them.

This experiment would take a long time...

Not with infinite resources. "HG, hand me that time travel doohickey you've been writing about" ;)

The point is that infinite resources are not required. Just science. But I'm preaching to the choir and will shut up now.


Title: Re: Public Perception of Science
Post by: Lethn on December 22, 2012, 07:54:44 AM
Quote
I agree with most of what you have put forward but don't understand why you are so convinced of evolution but not climate science.

To put it simply, we know fuck all about space and other planets besides our own, so how can we claim to know anything about our own planet?

Edit: I suppose you could say the same for evolution, actually. If we saw the same behaviour from organic beings on other planets as well, that would make evolution even more solid. But with evolution there is a lot more evidence to support it than there is with climate science, which I don't really believe is science, not just yet anyway.


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 07:54:56 AM
The first thing I noticed is that both of those topics have to do with gradual change over time. Maybe it's a hard-wired brain thing, and no amount of evidence can convince them? (Same as with birthers, truthers, and religious nuts.)

Who is "them"? Why can't "them" be those who believe the scientists rather than the religious? Once things are complicated enough, no one but the experts will have the time to examine the evidence for themselves, and so everyone else will have to rely on expert opinion or various heuristics regarding what science has done for them lately.


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 07:57:58 AM
This experiment would take a long time...

I'd point out that time is a "resource."

Given infinite resources it's simple to determine the truth. In the first case, duplicate the earth at the point just before human industry began. Then let the two histories proceed - one without human industry and one with. Then measure outcomes.

In the second case, return to a time before any animals existed. Then follow the history of the body plan and DNA of every single lifeform. Note species that evolved from other species.

With infinite resources, anything that is in concept provable is provable.

A large majority of scientists who specialise in that field disagreeing with climate change or evolution would be sufficient to make me doubt them.

This experiment would take a long time...

Not with infinite resources. "HG, hand me that time travel doohickey you've been writing about" ;)

The point is that infinite resources are not required. Just science. But I'm preaching to the choir and will shut up now.

...Ok. I mean that, in the end, using a Neyman-Pearson strategy, the correct answer to everything will be found, as long as there is no competition for resources or you can do it faster than everyone else. This is robot science, though.


Title: Re: Public Perception of Science
Post by: myrkul on December 22, 2012, 08:01:10 AM
I agree with most of what you have put forward but don't understand why you are so convinced of evolution but not climate science.

I pretty much agree completely with Lethn, let me explain my reasoning.

Evolution is proven. There's over a hundred years of experimental and observational data on it. Moths have evolved to cope with pollution, and then evolved back once it was cleaned up. That pretty much put the lock on it for me.

Climate science is another beast entirely. That the climate is changing is not really in doubt. Which way, of course, has been a matter of some debate. Back in the '70s, for instance, the big worry was global cooling and a new ice age. Now it's global warming... and a new ice age. (The science behind how that would happen is weird, but believable.) But what has not even come close to being proven is that it is in any way anthropogenic. The earth's temperature has tracked solar output for 5000 years, and I see no reason for it to stop now.


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 08:14:46 AM
I agree with most of what you have put forward but don't understand why you are so convinced of evolution but not climate science.

I pretty much agree completely with Lethn, let me explain my reasoning.

Evolution is proven. There's over a hundred years of experimental and observational data on it. Moths have evolved to cope with pollution, and then evolved back once it was cleaned up. That pretty much put the lock on it for me.

Climate science is another beast entirely. That the climate is changing is not really in doubt. Which way, of course, has been a matter of some debate. Back in the '70s, for instance, the big worry was global cooling and a new ice age. Now it's global warming... and a new ice age. (The science behind how that would happen is weird, but believable.) But what has not even come close to being proven is that it is in any way anthropogenic. The earth's temperature has tracked solar output for 5000 years, and I see no reason for it to stop now.

That is still not "speciation you can see with your eyes". I don't see why we should think that speciation should occur on the timescales of human memory (at most a couple of generations before rhetorical noise obscures the reality) as it did in the past. The truth is, we are only now at the stage where we can verify this (in my opinion very good) theory directly.


Title: Re: Public Perception of Science
Post by: myrkul on December 22, 2012, 08:17:09 AM
I agree with most of what you have put forward but don't understand why you are so convinced of evolution but not climate science.

I pretty much agree completely with Lethn, let me explain my reasoning.

Evolution is proven. There's over a hundred years of experimental and observational data on it. Moths have evolved to cope with pollution, and then evolved back once it was cleaned up. That pretty much put the lock on it for me.

Climate science is another beast entirely. That the climate is changing is not really in doubt. Which way, of course, has been a matter of some debate. Back in the '70s, for instance, the big worry was global cooling and a new ice age. Now it's global warming... and a new ice age. (The science behind how that would happen is weird, but believable.) But what has not even come close to being proven is that it is in any way anthropogenic. The earth's temperature has tracked solar output for 5000 years, and I see no reason for it to stop now.

That is still not "speciation you can see with your eyes". I don't see why we should think that speciation should occur on the timescales of human memory (at most a couple of generations before rhetorical noise obscures the reality) as it did in the past. The truth is, we are only now at the stage where we can verify this (in my opinion very good) theory directly.

A clear genetic link is good enough for me, or extended observation of divergent populations, both of which have been done.


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 08:19:32 AM
I'm no expert on this, but I am almost sure that there are hidden assumptions that are often not considered. Link me.


Title: Re: Public Perception of Science
Post by: drakahn on December 22, 2012, 08:23:47 AM
How do you determine what to believe (or not) regarding these theories?

"Believe" is a weak term for me... I believe most of my beliefs could be changed pretty easily... I previously believed people would make something happen on Dec 21st; I was let down...

That said, I look for something that both answers the question and cannot be explained by "He had to say that specific answer to (continue to) get paid"

What kind of evidence would convince you to change your mind?

Evidence that the people presenting it have nothing to gain by it being believed; that either way they are getting their next grant (or whatever) to continue to find new evidence

Evidence that better answers the question

Evidence backed up by reproducible experiments with the same or close results across the board

Evidence that can be confirmed by average joe

Evidence that the person presenting it did not want to find

Why do you place trust (or not) in the consensus of the experts in these fields?

I try not to trust too much, I prefer an answer I don't need to trust for it to ring true

Given infinite resources, how would you determine the "truth"?

infinite... that's a fun word...

Evolution: I would build a series of enclosed ecosystems full of fast breeding organisms that previously have never interacted, each ecosystem would be slightly different from the last - Try and force evolution

Climate change: Fill [Mars/The Moon/A planet we built to have the opposite orbit to earth] with the gasses that are responsible for climate change and see what happens.

Both would take up time that would mean the person starting the experiment would not be the one that finishes it

---

Of course, I'm internet educated... so if I ever go out and get a formal education my ideas will probably change a fair bit...


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 08:27:09 AM
How do you determine what to believe (or not) regarding these theories?

"Believe" is a weak term for me... I believe most of my beliefs could be changed pretty easily... I previously believed people would make something happen on Dec 21st; I was let down...

That said, I look for something that both answers the question and cannot be explained by "He had to say that specific answer to (continue to) get paid"

What kind of evidence would convince you to change your mind?

Evidence that the people presenting it have nothing to gain by it being believed; that either way they are getting their next grant (or whatever) to continue to find new evidence

Evidence that better answers the question

Evidence backed up by reproducible experiments with the same or close results across the board

Evidence that can be confirmed by average joe

Evidence that the person presenting it did not want to find

Why do you place trust (or not) in the consensus of the experts in these fields?

I try not to trust too much, I prefer an answer I don't need to trust for it to ring true

Given infinite resources, how would you determine the "truth"?

infinite... that's a fun word...

Evolution: I would build a series of enclosed ecosystems full of fast breeding organisms that previously have never interacted, each ecosystem would be slightly different from the last - Try and force evolution

Climate change: Fill [Mars/The Moon/A planet we built to have the opposite orbit to earth] with the gasses that are responsible for climate change and see what happens.

Both would take up time that would mean the person starting the experiment would not be the one that finishes it

---

Of course, I'm internet educated... so if I ever go out and get a formal education my ideas will probably change a fair bit...

These are great answers, now go figure out something useful to produce for society so you can fund an organization that performs science according to these principles.


Title: Re: Public Perception of Science
Post by: myrkul on December 22, 2012, 08:36:01 AM
I'm no expert on this, but I am almost sure that there are hidden assumptions that are often not considered. Link me.

...Are you seriously asking me to link you the proof of evolution? Please tell me that I'm missing something here. At any rate, this should be more than enough: https://en.wikipedia.org/wiki/Evolution#References

Since Darwin's day, we've been watching the Galapagos islands and many other isolated populations; we've found clear morphological links, and now genetic links (one species of... lemur, I think it was, was only identified by genetics; gods only know how they tell themselves apart) have shown the same things... those neat little tree diagrams with species branching off at certain points are right.

As to climatology, when the weatherman can accurately tell me what the weather will be like in two weeks, I'll believe they can forecast 10, 20, or 50 years out. Too many variables.


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 08:38:06 AM
I'm no expert on this, but I am almost sure that there are hidden assumptions that are often not considered. Link me.

...Are you seriously asking me to link you the proof of evolution? Please tell me that I'm missing something here. At any rate, this should be more than enough: https://en.wikipedia.org/wiki/Evolution#References

Since Darwin's day, we've been watching the Galapagos islands and many other isolated populations; we've found clear morphological links, and now genetic links (one species of... lemur, I think it was, was only identified by genetics; gods only know how they tell themselves apart) have shown the same things... those neat little tree diagrams with species branching off at certain points are right.

As to climatology, when the weatherman can accurately tell me what the weather will be like in two weeks, I'll believe they can forecast 10, 20, or 50 years out. Too many variables.

I've learned to question everything I have been taught, especially what I take for granted because I was told it before my brain was fully developed.


Title: Re: Public Perception of Science
Post by: organofcorti on December 22, 2012, 08:49:28 AM
Oh christ.

An argument between two forum members who have no holy cows and who question everything. This is going to go off topic like a conspiracy theorist at a conformist convention. Not that I mind, though.


Title: Re: Public Perception of Science
Post by: myrkul on December 22, 2012, 08:50:04 AM
I've learned to question everything I have been taught, especially what I take for granted because I was told it before my brain was fully developed.

That is a fine stance for a scientist, actually.


Title: Re: Public Perception of Science
Post by: myrkul on December 22, 2012, 08:52:28 AM
Oh christ.

An argument between two forum members who have no holy cows and who question everything. This is going to go off topic like a conspiracy theorist at a conformist convention. Not that I mind, though.

LOL... well, it might indeed range far afield, but these sorts of "arguments" tend to be quite boring... unless you're interested or knowledgeable in the subject matter.


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 08:55:16 AM
Oh christ.

An argument between two forum members who have no holy cows and who question everything. This is going to go off topic like a conspiracy theorist at a conformist convention. Not that I mind, though.

You actually set me on this path by getting me into R. I started running simulations on t-tests and ended up concluding that all the anti-NHST literature was actually underestimating the extent of the problem.
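The actual script doesn't appear until later in the thread, but the kind of result being described can be reproduced with a minimal sketch (this is my own illustrative example, not bb113's code): simulate two groups drawn from the *same* population, and compare a single t-test at a fixed sample size against an analyst who peeks at the p-value as the sample grows and stops at the first significant result.

```r
# Optional stopping: re-running a t-test as data accumulate and stopping at
# the first p < .05 inflates the false-positive rate above the nominal 5%,
# even though there is no true effect.
set.seed(1)
runs <- 2000
alpha <- 0.05
hits.fixed <- 0  # significant at a single look, n = 50 per group
hits.peek <- 0   # significant at any look from n = 10 to n = 50

for (i in seq_len(runs)) {
  a <- rnorm(50)  # both groups ~ N(0, 1): the null is true
  b <- rnorm(50)
  if (t.test(a, b)$p.value < alpha) hits.fixed <- hits.fixed + 1
  for (n in seq(10, 50, by = 5)) {
    if (t.test(a[1:n], b[1:n])$p.value < alpha) {
      hits.peek <- hits.peek + 1
      break
    }
  }
}

hits.fixed / runs  # close to the nominal 0.05
hits.peek / runs   # well above 0.05
```

Each extra "look" is an extra chance for the null hypothesis to be falsely rejected, which is one of the standard complaints in the anti-NHST literature.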


I've learned to question everything I have been taught, especially what I take for granted because I was told it before my brain was fully developed.

That is a fine stance for a scientist, actually.

I agree. I hope to continue being one, but that is actually more difficult than you'd think.


Title: Re: Public Perception of Science
Post by: Lethn on December 22, 2012, 08:57:49 AM
pffft :D you don't need to be a scientist to question everything, how do you think Socrates got killed?


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 09:01:46 AM
pffft :D you don't need to be a scientist to question everything, how do you think Socrates got killed?

He got involved in politics.


Title: Re: Public Perception of Science
Post by: myrkul on December 22, 2012, 09:03:52 AM
pffft :D you don't need to be a scientist to question everything, how do you think Socrates got killed?

Actually, I would argue that everyone should be a scientist, regardless of profession. Science is a way of life, not just a job.


Title: Re: Public Perception of Science
Post by: organofcorti on December 22, 2012, 09:10:36 AM
You actually set me on this path by getting me into R. I started running simulations on t-tests and ended up concluding that all the anti-NHST literature was actually underestimating the extent of the problem.

Interesting. Mind posting some code, your aims, and conclusions? And no, I'm not purposely trying to make this go OT, but I think this would be a good example of an individual attempting to find the truth. It fits with the topic.


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 09:30:02 AM
Keep in mind the t-test is the most robust to violations of assumptions. Example:

http://i45.tinypic.com/35lc2ly.png


Code is too long apparently...


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 09:36:39 AM
Code:
require(fields) #Used for image.plot


###Detect RStudio (modifies plot settings:no live plot)
using.RStudio=nzchar(Sys.getenv("RSTUDIO_USER_IDENTITY"))



###General Settings###
runs<-1000 # Number of Simulations to Run
max.samps<-150 # Maximum number of Samples to test
initial.samp.size<-5 # must be at least 2
samp.interval<-5 # must be at least 1
effect<-.1   # Difference between population means

#Live Plot Settings
live.plot=F # If false, p values traces from last 10 simulations will be plotted
sleep.time<-.1 # In seconds, use to slow down live plot
live.plot.in.own.window=T # If live.plot=T, also plot p value trace in its own device



###Investigator Bias Settings###
##Sequential Hypothesis Testing
# Set true (T) to add samples to the previous sample.
# Otherwise (set to F) each sample is independent
sequential=F

##Multiple Statistical Tests
# If True, choose lowest of t-test and Mann-Whitney U-test p-values.
# Otherwise only perform t-test
also.perform.Utest=F

##Multiple Outcomes/Publication Bias, e.g. only report significant comparisons
#Set greater than 1 if multiple outcome measures will be tested.
#Probabilities and effect sizes will be for any of the tests significant at each sample size
#History of each is saved for sequential testing
number.of.outcomes= 1

##Drop Outliers
# Set to true to drop outliers as defined by outlier.cutoff
# In case of sequential, outliers are kept for next iteration
# Outlier bias are multipliers of outlier cutoff for treatment group
# Set > 1 to be less strict
# Set < 1 for more strict
drop.outliers=T
outlier.method="MAD" # Defaults to "MAD" (median absolute deviation), can also use "SD" (number of standard deviations from mean)
outlier.cutoff<-5 # Number of sample deviations from sample mean/median (upper and lower)
outlier.bias.high<-1 # Bias multiplier (see above)
outlier.bias.low<-1   # Bias multiplier (see above)
n.cutoff<-2   # Replace values after dropping if sample size is less than the cutoff, min=2, max=sample size


###Population Distributions###
#Treatment Group
treatment.sd1<-1 # Treatment Group Primary Standard Deviation
treatment.shape1<-2 # Treatment Group Primary Shape Parameter (set to 2 for normal)
treatment.sd2<-1 # Treatment Group Secondary Standard Deviation (for bimodal)
treatment.shape2<-2 # Treatment Group Secondary Shape Parameter (set to 2 for normal, for bimodal)

#Control Group
control.sd1<-1 # Control Group Primary Standard Deviation
control.shape1<-2 # Control Group Primary Shape Parameter (set to 2 for normal)
control.sd2<-1 # Control Group Secondary Standard Deviation (for bimodal)
control.shape2<-2 # Control Group Secondary Shape Parameter (set to 2 for normal, for bimodal)



###Bimodal Settings###
larger.bimodal.proportion<- .8 #Set to 1 for unimodal
bimodal.offset<-8 #Difference Between Primary and Secondary Means for bimodal Populations
bimodal.treatment.only=F # Simulate treatment effect on subset of treatment group, control will be unimodal.




### Advanced: Random Parameter Generation Settings ###
random.pop.parameters=F # Generate Population Parameters (ignores settings above)
random.bias=F # Generate Bias Parameters (ignores settings above)
equal.pop.parameters=T # Sets Treatment and Control Group Population Parameters equal
random.shape.only=T # Use sd settings above
batch.size=1 # Set >1 to run batch of random simulations





###About Shape (p) Parameter###
#Generalized normal distribution with Shape(p) Parameter#
#Modified from package "normalp" to return numeric(0) rather than errors
rnormp2<-function (n, mu = 0, sigmap = 1, p = 2, method = c("def", "chiodi"))
{
  if (!is.numeric(n) || !is.numeric(mu) || !is.numeric(sigmap) ||
    !is.numeric(p))
    return(numeric(0))
  if (p < 1)
    stop("p must be at least equal to one")
  if (sigmap <= 0)
    return(numeric(0))
  method <- match.arg(method)
  if (method == "def") {
    qg <- rgamma(n, shape = 1/p, scale = p)
    z <- qg^(1/p)
    z <- ifelse(runif(n) < 0.5, -z, z)
    x <- mu + z * sigmap
  }
  if (method == "chiodi") {
    i <- 0
    x <- c(rep(0, n))
    while (i < n) {
      u <- runif(1, -1, 1)
      v <- runif(1, -1, 1)
      z <- abs(u)^p + abs(v)^(p/(p - 1))
      if (z <= 1) {
        i <- i + 1
        x[i] <- u * (-p * log(z)/z)^(1/p)
        x[i] <- mu + x[i] * sigmap
      }
    }
  }
  x
}
# shape < 1 : invalid #
#plot(seq(-5,5,by=.01),dnormp(seq(-5,5,by=.01), mu=0, sigmap=1, p=.5))
#lines(seq(-5,5,by=.01), dnorm(seq(-5,5,by=.01), 0,1), lwd=2, col="Blue")

# 1 <= shape < 2 : heavier tails, shape = 1 is the Laplace dist #
#plot(seq(-5,5,by=.01),dnormp(seq(-5,5,by=.01), mu=0, sigmap=1, p=1.5))
#lines(seq(-5,5,by=.01), dnorm(seq(-5,5,by=.01), 0,1), lwd=2, col="Blue")

# shape = 2 : normal dist #
#plot(seq(-5,5,by=.01),dnormp(seq(-5,5,by=.01), mu=0, sigmap=1, p=2))
#lines(seq(-5,5,by=.01), dnorm(seq(-5,5,by=.01), 0,1), lwd=2, col="Blue")

# shape > 2 : lighter tails, shape -> Inf approaches the uniform dist #
#plot(seq(-5,5,by=.01),dnormp(seq(-5,5,by=.01), mu=0, sigmap=1, p=10))
#lines(seq(-5,5,by=.01), dnorm(seq(-5,5,by=.01), 0,1), lwd=2, col="Blue")
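#The comments above can be checked against the density itself. Below is a
#sketch of the exponential power (generalized normal) density, assuming the
#same parameterization as package "normalp" (from which rnormp2 was adapted);
#at shape p = 2 it reduces to the normal density, at p = 1 to the Laplace.

```r
# Density of the exponential power distribution with shape p
# (assumed "normalp" parameterization: sigmap is the scale, p the shape)
dnormp2 <- function(x, mu = 0, sigmap = 1, p = 2) {
  exp(-abs(x - mu)^p / (p * sigmap^p)) /
    (2 * p^(1/p) * gamma(1 + 1/p) * sigmap)
}

# p = 2 recovers the normal density: 2 * 2^(1/2) * gamma(3/2) = sqrt(2*pi),
# so dnormp2(0, p = 2) equals dnorm(0)
# p = 1 recovers the standard Laplace density, so dnormp2(0, p = 1) equals 0.5
```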





###Start Script###

if (batch.size > 1){
  batchmode=T
  batch.results=NULL
  batch.params=NULL
}else{
  batchmode=F
}

for(b in 1:batch.size){
 
 
  ###Randomly Set Bias###
 
  if(random.bias==T){
    ###Investigator Bias Settings###
    ##Sequential Hypothesis Testing
    seq.samp<-round(runif(1,0,1))
    if(seq.samp==1){
      sequential=T
    }else{
      sequential=F
    }
   
    ##Multiple Statistical Tests
    Utest.samp<-round(runif(1,0,1))
    if(Utest.samp==1){
      also.perform.Utest=T
    }else{
      also.perform.Utest=F
    }
   
    ##Multiple Outcomes/Publication Bias, e.g. only report significant comparisons
    number.of.outcomes= round(runif(1, 1, 3),0)
   
    ##Drop Outliers
    outlier.samp<-round(runif(1,0,1))
    if(outlier.samp==1){
      drop.outliers=T
    }else{
      drop.outliers=F
    }
    outlier.cutoff<-runif(1,1.5,1000) # random outlier cutoff (in SDs or MADs); first argument of runif is n
    outlier.bias.high<-runif(1,.5,1.5)
    outlier.bias.low<-runif(1,.5,1.5)
   
   
  } #End Random Bias Settings
 
 
  ###Randomly Set Population Parameters###
  if(random.pop.parameters==T){
   
    ###Population Distributions###
    #Treatment Group
    if(random.shape.only!=T){
      treatment.sd1<-runif(1, .1, 5)
    }
    treatment.shape1<-runif(1, 1, 10)
    if(random.shape.only!=T){
      treatment.sd2<-runif(1, .1, 5)
    }
    treatment.shape2<-runif(1, 1, 10)
   
    #Control Group
    if(equal.pop.parameters==T){
      if(random.shape.only!=T){
        control.sd1<-treatment.sd1
      }
      control.shape1<-treatment.shape1
      if(random.shape.only!=T){
        control.sd2<-treatment.sd2
      }
      control.shape2<-treatment.shape2
    }else{
      if(random.shape.only!=T){
        control.sd1<-runif(1, .1, 5)
      }
      control.shape1<-runif(1, 1, 10)
      if(random.shape.only!=T){
        control.sd2<-runif(1, .1, 5)
      }
      control.shape2<-runif(1, 1, 10)
    }
   
    ###Bimodal Settings###
    larger.bimodal.proportion<- runif(1,.5,1)
    bimodal.offset<-runif(1, .1, 10)
   
    if(equal.pop.parameters==T){
      bimodal.treatment.only=F
    }else{
     
      bimod.samp<-round(runif(1,0,1))
     
      if(bimod.samp==1){
        bimodal.treatment.only=T
      }else{
        bimodal.treatment.only=F
      }
     
    }
   
  } #End Random Population Parameter Generation
 
 
  ###Start Sampling###
  if(initial.samp.size<2 || samp.interval<1){
    print("Simulation not run: minimum sample size = 2, minimum sampling interval = 1")
  }else{
   
   
    ###Create Population Histogram and Live Chart###
    smaller.bimodal.proportion<- (1-larger.bimodal.proportion)
    bimodal.pop<-c(rep(1,larger.bimodal.proportion*1000),
                   rep(0,smaller.bimodal.proportion*1000))
   
    if(larger.bimodal.proportion==1){
      treatment.sd2<-NA
      treatment.shape2<-NA
      control.sd2<-NA
      control.shape2<-NA
      bimodal.offset<-NA
      bimodal.treatment.only=F
    }
   
    if(drop.outliers!=T){
      outlier.cutoff<-NA
      outlier.bias.high<-NA 
      outlier.bias.low<-NA
    }
   
    if(outlier.method!="MAD" && outlier.method!="SD"){
      outlier.method="MAD"
    }
   
    if(bimodal.treatment.only==T){
      control.sd2<-NA
    }
   
    if(bimodal.treatment.only==T){
      control<-rnormp2(100000,0,control.sd1,control.shape1)
    }else{
      control<-c(rnormp2(larger.bimodal.proportion*100000,0,control.sd1,control.shape1),
                 rnormp2(smaller.bimodal.proportion*100000,0+bimodal.offset,control.sd2,control.shape2))
    }


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 09:37:47 AM
Code:
    treatment<-c(rnormp2(larger.bimodal.proportion*100000,0+effect,treatment.sd1,treatment.shape1),
                 rnormp2(smaller.bimodal.proportion*100000,0+effect+bimodal.offset,treatment.sd2,treatment.shape2))
    
    
    if(batchmode!=T){
      if(using.RStudio!=T){
        dev.new(width=600, height=500)
      }
    }
    
    if(using.RStudio==T){
      live.plot.in.own.window=F
      live.plot=F
    }
    layout(matrix(c(1,2,3,4,5,6), 3, 2, byrow = TRUE))
    hist(treatment, col=rgb(0,0,1,.5), freq=F,
         breaks=seq(min(c(treatment,control))-1,max(c(treatment,control))+1,by=.1),
         xlab="Value", ylim=c(0,1.2),
         main=c("Population Distributions",
                paste("Difference in Means=", round(effect,2)),
                paste("Bimodal Proportions=", round(larger.bimodal.proportion,2), ",", round(smaller.bimodal.proportion,2),
                      " Bimodal Offset=", round(bimodal.offset,2))
         ),
         sub=paste("Bimodal Effect on Treatment Only=",bimodal.treatment.only)
    )
    hist(control, col=rgb(1,0,0,.5), freq=F,
         breaks=seq(min(c(treatment,control))-1,max(c(treatment,control))+1,by=.1),
         add=T)
    legend(x="topleft", legend=c(paste("Treatment:",
                                       "mu=", 0+effect,
                                       ", sd1=", round(treatment.sd1,2),
                                       ", p1=", round(treatment.shape1,2),
                                       ", sd2=", round(treatment.sd2,2),
                                       ", p2=", round(treatment.shape2,2)
    ),
                                 paste("Control:",
                                       "mu=",0,
                                       ", sd1=", round(control.sd1,2),
                                       ", p1=", round(control.shape1,2),
                                       ", sd2=", round(control.sd2,2),
                                       ", p2=", round(control.shape2,2))
    ),
           col=c(rgb(0,0,1,.5),rgb(1,0,0,.5)), lwd=6, bty="n")
   
    plot(initial.samp.size,1, type="l", lwd=2, col="White",
         xlim=c(initial.samp.size,(max.samps+samp.interval)), xlab="Sample Size (per group)",
         ylim=c(0,1), ylab="P-value",
         main=c("Representative P-value Traces (Significant: P < 0.05)",
                paste("Outlier Cutoff(",outlier.method,") =",round(outlier.cutoff,2),
                      ", Outlier Bias (High,Low) =", "(", round(outlier.bias.high,2), ",", round(outlier.bias.low,2), ")",
                      sep=""),
                paste("# of Outcomes=", number.of.outcomes)
         ),
         sub=paste("Sequential Sampling=", sequential, ", U-test=", also.perform.Utest)
    )
    abline(h=.05, lwd=2)
   
    ###Start Simulation###
    all.results=NULL
    pb <- txtProgressBar(min = 0, max = runs*max.samps, style = 3)
    for(r in 1:runs){
     
     
     
      ###Start Run###
      run.result=NULL
     
      experiment.results=vector("list", number.of.outcomes)
      for(t in 1:number.of.outcomes){
        if(bimodal.treatment.only==T){
          random.sample<-rep(1,initial.samp.size)
        }else{
          random.sample<-bimodal.pop[runif(initial.samp.size,1,1000)]
        }
        control<-c(rnormp2(length(which(random.sample==1)),0,control.sd1,control.shape1),
                   rnormp2(length(which(random.sample==0)),0+bimodal.offset,control.sd2,control.shape2))
       
        random.sample<-bimodal.pop[runif(initial.samp.size,1,1000)]
        treatment<-c(rnormp2(length(which(random.sample==1)),0+effect,treatment.sd1,treatment.shape1),
                     rnormp2(length(which(random.sample==0)),0+effect+bimodal.offset,treatment.sd2,treatment.shape2))
       
        experiment.result<-rbind(control,treatment)
        experiment.results[[t]]<-experiment.result
      }
     
     
     
      x=initial.samp.size
     
      while(x <(max.samps+samp.interval)){
        sample.result=NULL
        n.cutoff=min(n.cutoff,x)
       
        test.results=NULL
        for(t in 1:number.of.outcomes){
         
          control<-experiment.results[[t]][1,]
          treatment<-experiment.results[[t]][2,]
         
         
          if(also.perform.Utest==T){
            pval1<-t.test(treatment,control)$p.value
            pval2<-wilcox.test(treatment,control)$p.value
            pval<-min(pval1,pval2)
           
            #Drop Outliers
            if(drop.outliers==T){
              if(pval>.05){
                control.temp<-control
                treatment.temp<-treatment
               
                if(outlier.method=="SD"){
                high.cutoff<-mean(control.temp, na.rm=T)+outlier.cutoff*sd(control.temp, na.rm=T)
                low.cutoff<-mean(control.temp, na.rm=T)-outlier.cutoff*sd(control.temp, na.rm=T)
                high.outliers<-which(control.temp>high.cutoff)
                low.outliers<-which(control.temp<low.cutoff)
               
                control.temp[high.outliers]<-NA
                control.temp[low.outliers]<-NA
               
                high.cutoff<-mean(treatment.temp, na.rm=T)+outlier.bias.high*outlier.cutoff*sd(treatment.temp, na.rm=T)
                low.cutoff<-mean(treatment.temp, na.rm=T)-outlier.bias.low*outlier.cutoff*sd(treatment.temp, na.rm=T)
                high.outliers<-which(treatment.temp>high.cutoff)
                low.outliers<-which(treatment.temp<low.cutoff)
               
                treatment.temp[high.outliers]<-NA
                treatment.temp[low.outliers]<-NA
                }
               
                if(outlier.method=="MAD"){
                  high.cutoff<-median(control.temp, na.rm=T)+outlier.cutoff*mad(control.temp, na.rm=T)
                  low.cutoff<-median(control.temp, na.rm=T)-outlier.cutoff*mad(control.temp, na.rm=T)
                  high.outliers<-which(control.temp>high.cutoff)
                  low.outliers<-which(control.temp<low.cutoff)
                 
                  control.temp[high.outliers]<-NA
                  control.temp[low.outliers]<-NA
                 
                  high.cutoff<-median(treatment.temp, na.rm=T)+outlier.bias.high*outlier.cutoff*mad(treatment.temp, na.rm=T)
                  low.cutoff<-median(treatment.temp, na.rm=T)-outlier.bias.low*outlier.cutoff*mad(treatment.temp, na.rm=T)
                  high.outliers<-which(treatment.temp>high.cutoff)
                  low.outliers<-which(treatment.temp<low.cutoff)
                 
                  treatment.temp[high.outliers]<-NA
                  treatment.temp[low.outliers]<-NA
                 
                }
                           
               
                if(
                  length(treatment.temp[which(!is.na(treatment.temp))]) > (n.cutoff - 1) &&
                    length(control.temp[which(!is.na(control.temp))]) > (n.cutoff - 1)
                ){
                 
                  pval1<-t.test(treatment.temp,control.temp)$p.value
                  pval2<-wilcox.test(treatment.temp,control.temp)$p.value
                  pval<-min(pval1,pval2)
                 
                  control<-control.temp
                  treatment<-treatment.temp
                 
                  #Replace Outliers for Small N
                }else{
                 
                  # Refill until BOTH groups have at least n.cutoff non-missing values
                  while(
                    length(treatment.temp[which(!is.na(treatment.temp))]) < n.cutoff ||
                      length(control.temp[which(!is.na(control.temp))]) < n.cutoff
                  ){
                    if(bimodal.treatment.only==T){
                      random.sample<-rep(1,length(which(is.na(control.temp))))
                    }else{
                      random.sample<-bimodal.pop[runif(length(which(is.na(control.temp))),1,1000)]
                    }
                    control.temp[which(is.na(control.temp))]<-c(rnormp2(length(which(random.sample==1)),
                                                                        0,control.sd1,control.shape1),
                                                                rnormp2(length(which(random.sample==0)),
                                                                        0+bimodal.offset,control.sd2,control.shape2))
                   
                    random.sample<-bimodal.pop[runif(length(which(is.na(treatment.temp))),1,1000)]
                    treatment.temp[which(is.na(treatment.temp))]<-c(rnormp2(length(which(random.sample==1)),
                                                                            0+effect,treatment.sd1,treatment.shape1),
                                                                    rnormp2(length(which(random.sample==0)),
                                                                            0+effect+bimodal.offset,treatment.sd2,treatment.shape2))
                   
                    if(outlier.method=="SD"){
                      high.cutoff<-mean(control.temp, na.rm=T)+outlier.cutoff*sd(control.temp, na.rm=T)
                      low.cutoff<-mean(control.temp, na.rm=T)-outlier.cutoff*sd(control.temp, na.rm=T)
                      high.outliers<-which(control.temp>high.cutoff)
                      low.outliers<-which(control.temp<low.cutoff)
                     
                      control.temp[high.outliers]<-NA
                      control.temp[low.outliers]<-NA
                     
                      high.cutoff<-mean(treatment.temp, na.rm=T)+outlier.bias.high*outlier.cutoff*sd(treatment.temp, na.rm=T)
                      low.cutoff<-mean(treatment.temp, na.rm=T)-outlier.bias.low*outlier.cutoff*sd(treatment.temp, na.rm=T)
                      high.outliers<-which(treatment.temp>high.cutoff)
                      low.outliers<-which(treatment.temp<low.cutoff)
                     
                      treatment.temp[high.outliers]<-NA
                      treatment.temp[low.outliers]<-NA
                    }
                   
                    if(outlier.method=="MAD"){
                      high.cutoff<-median(control.temp, na.rm=T)+outlier.cutoff*mad(control.temp, na.rm=T)
                      low.cutoff<-median(control.temp, na.rm=T)-outlier.cutoff*mad(control.temp, na.rm=T)
                      high.outliers<-which(control.temp>high.cutoff)
                      low.outliers<-which(control.temp<low.cutoff)
                     
                      control.temp[high.outliers]<-NA
                      control.temp[low.outliers]<-NA
                     
                      high.cutoff<-median(treatment.temp, na.rm=T)+outlier.bias.high*outlier.cutoff*mad(treatment.temp, na.rm=T)
                      low.cutoff<-median(treatment.temp, na.rm=T)-outlier.bias.low*outlier.cutoff*mad(treatment.temp, na.rm=T)
                      high.outliers<-which(treatment.temp>high.cutoff)
                      low.outliers<-which(treatment.temp<low.cutoff)
                     
                      treatment.temp[high.outliers]<-NA
                      treatment.temp[low.outliers]<-NA
                     
                    }
                   
                  } #end while sample size too small loop
                 
                  pval1<-t.test(treatment.temp,control.temp)$p.value
                  pval2<-wilcox.test(treatment.temp,control.temp)$p.value
                  pval<-min(pval1,pval2)
                  control<-control.temp
                  treatment<-treatment.temp
                 
                } # end replace pvals
               
              } # end if pval...
            } #end drop outliers
           
            # Do if U-test==F
          }else{
            pval<-t.test(treatment,control)$p.value
           
            #Drop Outliers
            if(drop.outliers==T){
              if(pval>.05){
                control.temp<-control
                treatment.temp<-treatment
               
                if(outlier.method=="SD"){
                  high.cutoff<-mean(control.temp, na.rm=T)+outlier.cutoff*sd(control.temp, na.rm=T)
                  low.cutoff<-mean(control.temp, na.rm=T)-outlier.cutoff*sd(control.temp, na.rm=T)
                  high.outliers<-which(control.temp>high.cutoff)
                  low.outliers<-which(control.temp<low.cutoff)
                 
                  control.temp[high.outliers]<-NA
                  control.temp[low.outliers]<-NA
                 
                  high.cutoff<-mean(treatment.temp, na.rm=T)+outlier.bias.high*outlier.cutoff*sd(treatment.temp, na.rm=T)
                  low.cutoff<-mean(treatment.temp, na.rm=T)-outlier.bias.low*outlier.cutoff*sd(treatment.temp, na.rm=T)
                  high.outliers<-which(treatment.temp>high.cutoff)
                  low.outliers<-which(treatment.temp<low.cutoff)
                 
                  treatment.temp[high.outliers]<-NA
                  treatment.temp[low.outliers]<-NA
                }
               
                if(outlier.method=="MAD"){
                  high.cutoff<-median(control.temp, na.rm=T)+outlier.cutoff*mad(control.temp, na.rm=T)
                  low.cutoff<-median(control.temp, na.rm=T)-outlier.cutoff*mad(control.temp, na.rm=T)
                  high.outliers<-which(control.temp>high.cutoff)
                  low.outliers<-which(control.temp<low.cutoff)
                 
                  control.temp[high.outliers]<-NA
                  control.temp[low.outliers]<-NA
                 
                  high.cutoff<-median(treatment.temp, na.rm=T)+outlier.bias.high*outlier.cutoff*mad(treatment.temp, na.rm=T)
                  low.cutoff<-median(treatment.temp, na.rm=T)-outlier.bias.low*outlier.cutoff*mad(treatment.temp, na.rm=T)
                  high.outliers<-which(treatment.temp>high.cutoff)
                  low.outliers<-which(treatment.temp<low.cutoff)
                 
                  treatment.temp[high.outliers]<-NA
                  treatment.temp[low.outliers]<-NA
                 
                }
                if(
                  length(treatment.temp[which(!is.na(treatment.temp))]) > (n.cutoff - 1) &&
                    length(control.temp[which(!is.na(control.temp))]) > (n.cutoff - 1)
                ){
                  pval<-t.test(treatment.temp,control.temp)$p.value
                  control<-control.temp
                  treatment<-treatment.temp
               
                #Replace Outliers for Small N
              }else{
               
                # Refill until BOTH groups have at least n.cutoff non-missing values
                while(
                  length(treatment.temp[which(!is.na(treatment.temp))]) < n.cutoff ||
                    length(control.temp[which(!is.na(control.temp))]) < n.cutoff
                ){
                  if(bimodal.treatment.only==T){
                    random.sample<-rep(1,length(which(is.na(control.temp))))
                  }else{
                    random.sample<-bimodal.pop[runif(length(which(is.na(control.temp))),1,1000)]
                  }
                  control.temp[which(is.na(control.temp))]<-c(rnormp2(length(which(random.sample==1)),
                                                                      0,control.sd1,control.shape1),
                                                              rnormp2(length(which(random.sample==0)),
                                                                      0+bimodal.offset,control.sd2,control.shape2))
                 
                  random.sample<-bimodal.pop[runif(length(which(is.na(treatment.temp))),1,1000)]
                  treatment.temp[which(is.na(treatment.temp))]<-c(rnormp2(length(which(random.sample==1)),
                                                                          0+effect,treatment.sd1,treatment.shape1),
                                                                  rnormp2(length(which(random.sample==0)),
                                                                          0+effect+bimodal.offset,treatment.sd2,treatment.shape2))
                 
                  if(outlier.method=="SD"){
                    high.cutoff<-mean(control.temp, na.rm=T)+outlier.cutoff*sd(control.temp, na.rm=T)
                    low.cutoff<-mean(control.temp, na.rm=T)-outlier.cutoff*sd(control.temp, na.rm=T)
                    high.outliers<-which(control.temp>high.cutoff)
                    low.outliers<-which(control.temp<low.cutoff)
                   
                    control.temp[high.outliers]<-NA
                    control.temp[low.outliers]<-NA
                   
                    high.cutoff<-mean(treatment.temp, na.rm=T)+outlier.bias.high*outlier.cutoff*sd(treatment.temp, na.rm=T)
                    low.cutoff<-mean(treatment.temp, na.rm=T)-outlier.bias.low*outlier.cutoff*sd(treatment.temp, na.rm=T)
                    high.outliers<-which(treatment.temp>high.cutoff)
                    low.outliers<-which(treatment.temp<low.cutoff)
                   
                    treatment.temp[high.outliers]<-NA
                    treatment.temp[low.outliers]<-NA
                  }
                 
                  if(outlier.method=="MAD"){
                    high.cutoff<-median(control.temp, na.rm=T)+outlier.cutoff*mad(control.temp, na.rm=T)
                    low.cutoff<-median(control.temp, na.rm=T)-outlier.cutoff*mad(control.temp, na.rm=T)
                    high.outliers<-which(control.temp>high.cutoff)
                    low.outliers<-which(control.temp<low.cutoff)
                   
                    control.temp[high.outliers]<-NA
                    control.temp[low.outliers]<-NA
                   
                    high.cutoff<-median(treatment.temp, na.rm=T)+outlier.bias.high*outlier.cutoff*mad(treatment.temp, na.rm=T)
                    low.cutoff<-median(treatment.temp, na.rm=T)-outlier.bias.low*outlier.cutoff*mad(treatment.temp, na.rm=T)
                    high.outliers<-which(treatment.temp>high.cutoff)
                    low.outliers<-which(treatment.temp<low.cutoff)
                   
                    treatment.temp[high.outliers]<-NA
                    treatment.temp[low.outliers]<-NA
                   
                  }
                 
                } #end while sample size too small loop
               
                pval<-t.test(treatment.temp,control.temp)$p.value
                control<-control.temp
                treatment<-treatment.temp
               
              } # end replace pvals
             
             
            }# end if pval>.05
          } #end drop outliers
         
         
         
        }#end If U-test==F
       
        measured.effect=mean(treatment, na.rm=T)-mean(control, na.rm=T)
       
        test.result<-cbind(pval,measured.effect)
        test.results<-rbind(test.results,test.result)
      }
     
      pval<-min(test.results[,1])
      measured.effect<-test.results[which(test.results[,1]==min(test.results[,1]))[1],2]
     
     
     
      sample.result<-cbind(sample.result,r, x,pval,measured.effect)
      run.result<-rbind(run.result,sample.result)
     
      ##Update Live Plot
      if(live.plot==T){
        lines(run.result[,2],run.result[,3], type="l", lwd=3, col=rainbow(runs)[r],
              xlim=c(initial.samp.size,max.samps+initial.samp.size),ylim=c(0,1)
        )
       
        if(live.plot.in.own.window==T && x==initial.samp.size && r==1){
          dev.new()
          plot(initial.samp.size,1, type="l", lwd=2, col="White",
               xlim=c(initial.samp.size,(max.samps+samp.interval)), xlab="Sample Size (per group)",
               ylim=c(0,1), ylab="P-value",
               main=c("Representative P-value Traces (Significant: P < 0.05)",
                      paste("Outlier Cutoff(",outlier.method,") =",round(outlier.cutoff,2),
                            ", Outlier Bias (High,Low) =", "(", round(outlier.bias.high,2), ",", round(outlier.bias.low,2), ")"),
                      paste("# of Outcomes=", number.of.outcomes)
               ),
               sub=paste("Sequential Sampling=", sequential, ", U-test=", also.perform.Utest)
          )
          abline(h=.05, lwd=2)
          lines(run.result[,2],run.result[,3], type="l", lwd=3, col=rainbow(runs)[r],
                xlim=c(initial.samp.size,max.samps+initial.samp.size),ylim=c(0,1)
          )
          dev.set(which=dev.prev())
        }
       
        if(live.plot.in.own.window==T && x*r>initial.samp.size){
          dev.set(which=dev.next())
          lines(run.result[,2],run.result[,3], type="l", lwd=3, col=rainbow(runs)[r],
                xlim=c(initial.samp.size,max.samps+initial.samp.size),ylim=c(0,1)
          )
          dev.set(which=dev.prev())
        }
        # End Update Live Plot
       
       
       
        Sys.sleep(sleep.time)
      }
     
     
      #Generate New Samples
      if(sequential==T){
       
        ##Take Sequential Samples##
       
        for(t in 1:number.of.outcomes){
         
          #Control
          if(bimodal.treatment.only==T){
            random.sample<-rep(1,samp.interval)
          }else{
            random.sample<-bimodal.pop[runif(samp.interval,1,1000)]
          }
          control<-append(experiment.results[[t]][1,],c(rnormp2(length(which(random.sample==1)),
                                                                0,control.sd1,control.shape1),
                                                        rnormp2(length(which(random.sample==0)),
                                                                0+bimodal.offset,control.sd2,control.shape2)
          ))
         
          #Treatment
          random.sample<-bimodal.pop[runif(samp.interval,1,1000)]
          treatment<-append(experiment.results[[t]][2,],c(rnormp2(length(which(random.sample==1)),
                                                                  0+effect,treatment.sd1,treatment.shape1),
                                                          rnormp2(length(which(random.sample==0)),
                                                                  0+effect+bimodal.offset,treatment.sd2,treatment.shape2)
          ))
         
          experiment.result<-rbind(control,treatment)
          experiment.results[[t]]<-experiment.result
         
        } #End outcomes loop (sequential)
       
      }else{
       
        ##Take Independent Samples (non-sequential)##
        experiment.results=vector("list", number.of.outcomes)
        for(t in 1:number.of.outcomes){
         
          #Control
          if(bimodal.treatment.only==T){
            random.sample<-rep(1,x)
          }else{
            random.sample<-bimodal.pop[runif(x,1,1000)]
          }
          control<-c(rnormp2(length(which(random.sample==1)),
                             0,control.sd1,control.shape1),
                     rnormp2(length(which(random.sample==0)),
                             0+bimodal.offset,control.sd2,control.shape2)
          )
         
          #Treatment
          random.sample<-bimodal.pop[runif(x,1,1000)]
          treatment<-c(rnormp2(length(which(random.sample==1)),
                               0+effect,treatment.sd1,treatment.shape1),
                       rnormp2(length(which(random.sample==0)),
                               0+effect+bimodal.offset,treatment.sd2,treatment.shape2)
          )
         
          experiment.result<-rbind(control,treatment)
          experiment.results[[t]]<-experiment.result
         
        } #End independent
       
      } #End sampling
     
     
      setTxtProgressBar(pb, x + max.samps*(r-1))
      x<-x+samp.interval
     
    } # End Run
    all.results<-rbind(all.results,run.result)
   
    if(live.plot==F && r==runs){
      colnum<-1
      for(k in max(1,runs-9):runs){ # plot at most the last 10 runs
        xvar<-all.results[which(all.results[,1]==k),2]
        yvar<-all.results[which(all.results[,1]==k),3]
        lines(xvar,yvar, type="l", lwd=3, col=rainbow(10)[colnum],
              xlim=c(initial.samp.size,max.samps+initial.samp.size),ylim=c(0,1)
        )
        colnum<-colnum+1
      }
    }
   
  } # End Simulation
  close(pb)
 
 
 
  ###Extract Significant Results###
  rownames(all.results)<-seq(1,nrow(all.results))
  sigs<-which(all.results[,3]<.05)
  percent.sigs<-100*hist(all.results[sigs,2], plot=F,
                         breaks=seq(initial.samp.size-1, max.samps+2*samp.interval,by=(samp.interval))
  )$counts/runs
 
  ###Extract First Significant Result for each Simulation Run###
  first.sigs=NULL
  first.sigs.filtered=NULL
  for(i in 1:runs){
    temp<-all.results[which(all.results[,1]==i),]
    if(length(temp[which(temp[,3]<.05),2])==0){
      first.sig<-Inf
    }else{
      first.sig<-min(temp[which(temp[,3]<.05),2])
    }
    first.sigs<-rbind(first.sigs,first.sig)
  }
 
  first.sigs.filtered<-first.sigs[which(first.sigs!=Inf),]
 
  percent.atLeastOne.sig<-100*cumsum(
    hist(first.sigs.filtered,
         breaks=seq(initial.samp.size-1, max.samps+2*samp.interval,by=(samp.interval)),
         plot=F
    )$counts
  )/runs
 



Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 09:38:24 AM
Code:
 ###Create Summary Plots###
 
 
  if(bimodal.treatment.only==T){
   
    plot(all.results[sigs,2],all.results[sigs,4],
         xlab="Sample Size (per group)",xlim=c(0, max.samps+samp.interval),
         ylab="Measured Effect",
         ylim=c(min(c(-1,all.results[sigs,4]),na.rm=T),max(c(1,all.results[sigs,4]),na.rm=T)),
         main=c("Measured Effect Sizes of Significant Results",
                "Significance Level: P < 0.05",
                paste("True Difference=", round(effect,2), "+",
                      round(bimodal.offset,2), "for", round(100*smaller.bimodal.proportion,2), "% of Treatment Group")),
         col="Cyan")
    abline(h=effect, lwd=2)
    abline(h=bimodal.offset, lwd=1, lty=2)
   
   
  }else{
   
    plot(all.results[sigs,2],all.results[sigs,4],
         xlab="Sample Size (per group)",xlim=c(0, max.samps+samp.interval),
         ylab="Measured Effect", ylim=c(min(c(-1,all.results[sigs,4]),na.rm=T),max(c(1,all.results[sigs,4]),na.rm=T)),
         main=c("Measured Effect Sizes of Significant Results",
                "Significance Level: P < 0.05",
                paste("True Difference=", effect)),
         col="Cyan")
    abline(h=effect, lwd=2)
  }
 
    pval.max.percents=NULL
    for(s in 1:length(unique(all.results[,2]))){
      size<-unique(all.results[,2])[s]
      temp<-max(100*hist(all.results[which(all.results[,2]==size),3],
                         breaks=seq(0,1,by=.005),
                         plot=F)$counts/runs)
      pval.max.percents<-rbind(pval.max.percents,temp)
    }
    ymax<-min(1.5*max(pval.max.percents),100)
   
    Sys.sleep(.1)
    plot(seq(0,.995,by=.005),
         100*hist(all.results[which(all.results[,2]==initial.samp.size),3],
                  breaks=seq(0,1,by=.005),
                  plot=F)$counts/runs,
         ylab="Percent of P-values", ylim=c(0,ymax),
         xlab="P-value", xlim=c(0,1),
         type="l", col=rainbow(length(unique(all.results[,2])))[1],
         main="Distribution of All P-values"
    )
    for(s in 2:length(unique(all.results[,2]))){
      size<-unique(all.results[,2])[s]
      lines(seq(0,.995,by=.005),
            100*hist(all.results[which(all.results[,2]==size),3],
                     breaks=seq(0,1,by=.005),
                     plot=F)$counts/runs,
            type="l",col=rainbow(length(unique(all.results[,2])))[s]
      )
     
    }
    abline(h=100/length(seq(0,.995,by=.005)))
   
    # image.plot() comes from the "fields" package (requires library(fields))
    image.plot(matrix(seq(initial.samp.size,max.samps,by=samp.interval)),
               col=rainbow(length(unique(all.results[,2]))),
               smallplot=c(.6,.85,.7,.75),
               legend.only=T, horizontal=T, legend.shrink=.5, add=T,
               legend.args=list(text="Sample Size", side=3, line=.2, font=2, cex=.5, bty="o", bg="White"),
               axis.args=list(
                 at=c(initial.samp.size,max.samps),
                 labels=as.character(c(initial.samp.size,max.samps))
               )
    )
 
  xsamps<-unique(all.results[,2])
 
  if(sequential==T){
    plot(xsamps,
         percent.atLeastOne.sig[1:length(xsamps)],
         ylab=expression("Pr(# Significant)">=1), ylim=c(0,max(percent.atLeastOne.sig)+10),
         xlab="Sample Size (per Group)",xlim=c(0, max.samps+samp.interval),
         type="s", col="Cyan", lwd=5,
         main=c("Probability of at Least 1 Significant Result",
                paste("Number of Simulations=",runs),
                paste("Initial Sample Size=",initial.samp.size, " Sampling Interval=",samp.interval)
         )
    )
    abline(h=seq(0,max(percent.atLeastOne.sig)+10,by=10),
           lwd=1, col=rgb(0,0,0,.5))
    abline(v=seq(0,max.samps+initial.samp.size,by=10), lwd=1, col=rgb(0,0,0,.5))
    abline(h=5,
           lwd=2, col=rgb(0,0,0,1))
   
  }else{
    plot(xsamps,
         percent.sigs[1:length(xsamps)],
         ylim=c(0,max(percent.sigs)+10), ylab="% Significant Results",
         xlim=c(0, max.samps+samp.interval), xlab="Sample Size (per Group)",
         type="l", lwd=4, col="Cyan",
         main=c("Percent of Significant Results for Different Sample Sizes",
                paste("Number of Simulations=",runs),
                paste("Initial Sample Size=",initial.samp.size, " Sampling Interval=",samp.interval)
         )
    )
    abline(h=seq(0,max(percent.sigs)+10,by=10),
           lwd=1, col=rgb(0,0,0,.5))
    abline(v=seq(0,max.samps+initial.samp.size,by=10), lwd=1, col=rgb(0,0,0,.5))
    abline(h=5, lwd=2)
  }
    Sys.sleep(.1)
    plot(seq(0,.995,by=.005),
         100*hist(all.results[which(all.results[,2]==initial.samp.size),3],
                  breaks=seq(0,1,by=.005),
                  plot=F)$counts/runs,
         ylab="Percent of P-values", ylim=c(0,ymax),
         xlab="P-value", xlim=c(0,.05),
         type="l", col=rainbow(length(unique(all.results[,2])))[1],
         main="Distribution of Significant P-values"
    )
    for(s in 2:length(unique(all.results[,2]))){
      size<-unique(all.results[,2])[s]
      lines(seq(0,.995,by=.005),
            100*hist(all.results[which(all.results[,2]==size),3],
                     breaks=seq(0,1,by=.005),
                     plot=F)$counts/runs,
            type="l",col=rainbow(length(unique(all.results[,2])))[s]
      )
     
    }
    abline(h=100/length(seq(0,.995,by=.005)))
   
    image.plot(matrix(seq(initial.samp.size,max.samps,by=samp.interval)),
               col=rainbow(length(unique(all.results[,2]))),
               smallplot=c(.6,.85,.7,.75),
               legend.only=T, horizontal=T, legend.shrink=.3, add=T,
               legend.args=list(text="Sample Size", side=3, line=.2, font=2, cex=.5),
               axis.args=list(
                 at=c(initial.samp.size,max.samps),
                 labels=as.character(c(initial.samp.size,max.samps
                 )
                 )
               )
    )
   
   
} #End Script

if(batchmode==T){
  effect.info=NULL
  for(samp.point in 1:length(sort(unique(all.results[sigs,2])))){
    sp<-sort(unique(all.results[sigs,2]))[samp.point]
    temp<-all.results[sigs,4][which(all.results[sigs,2]==sp)]
    effect.temp2<-cbind(
      max.pos=max(temp[which(temp>0)]),
      mean.pos=mean(temp[which(temp>0)]),
      min.pos=min(temp[which(temp>0)]),
      max.neg=max(temp[which(temp<0)]),
      mean.neg=mean(temp[which(temp<0)]),
      min.neg=min(temp[which(temp<0)])
    )
   
    effect.info<-rbind(effect.info,effect.temp2)
  } # End loop over sample sizes
 
  batch.result<-cbind(b,
                      sort(unique(all.results[sigs,2])),
                      percent.sigs[1:length(xsamps)],
                      percent.atLeastOne.sig[1:length(xsamps)],
                      effect.info
  )
  colnames(batch.result)<-c("Batch", "Samp.Size", "Percent.Sig",
                            "Percent.atLeastOne.sig","max.pos","mean.pos","min.pos",
                            "max.neg","mean.neg","min.neg")
 
  batch.results<-rbind(batch.results,batch.result)
 
  sim.params<-cbind(rep(b,14), # one batch label per parameter row (14 rows below)
                    rbind(
                      treatment.sd1,
                      treatment.shape1,
                      treatment.sd2,
                      treatment.shape2,
                      control.sd1,
                      control.shape1,
                      control.sd2,
                      control.shape2,
                      larger.bimodal.proportion,
                      bimodal.offset,
                      bimodal.treatment.only,
                      sequential,
                      also.perform.Utest,
                      number.of.outcomes
                    )
  )
  batch.params<-rbind(batch.params,sim.params)
}

print(paste("Batch #", b, " of", batch.size))
}

if(batchmode==T){
  one.sig=NULL
  for (i in 1:length(unique(batch.results[,2]))){
    temp1<-unique(batch.results[,2])[i]
    temp2<-batch.results[which(batch.results[,2]==temp1),]
    temp3<-cbind(temp1,
                 temp2[which(temp2[,4]==max(temp2[,4])),1],
                 temp2[which(temp2[,4]==max(temp2[,4])),4]
    )
    one.sig<-rbind(one.sig,temp3)
  }
  colnames(one.sig)<-c("Samp.Size", "Batch", "Percent.atleastOne.Significant")
  rownames(one.sig)<-1:nrow(one.sig)
 
 
  sampsize.sig=NULL
  for (i in 1:length(unique(batch.results[,2]))){
    temp1<-unique(batch.results[,2])[i]
    temp2<-batch.results[which(batch.results[,2]==temp1),]
    temp3<-cbind(temp1,
                 temp2[which(temp2[,3]==max(temp2[,3])),1],
                 temp2[which(temp2[,3]==max(temp2[,3])),3]
    )
    sampsize.sig<-rbind(sampsize.sig,temp3)
  }
  colnames(sampsize.sig)<-c("Samp.Size", "Batch", "Percent.Sig")
  rownames(sampsize.sig)<-1:nrow(sampsize.sig)
 
  one.sig
  sampsize.sig
 
  batch.params[which(batch.params[,1]==90),]
 
} # End Batches





verify=F
if(verify==T){
 
 
  #control<-c(rnormp2(larger.bimodal.proportion*10000,0,control.sd1,control.shape1),
  #rnormp2(smaller.bimodal.proportion*10000,0+bimodal.offset,control.sd2,control.shape2))
 
 
  #treatment<-c(rnormp2(larger.bimodal.proportion*10000,0+effect,treatment.sd1,treatment.shape1),
  #rnormp2(smaller.bimodal.proportion*10000,0+effect+bimodal.offset,treatment.sd2,treatment.shape2))
 
 
  #hist(control)
 
 
  require(sciplot)
 
  effect=0.1
  nsamp=1000
  drop.outliers=T
  outlier.cutoff<-2
  out=NULL
  for(i in 1:nsamp){
   
    dev.new()
    par(mfrow=c(1,2))
    #a<-c(rnorm(8,10,1),rnorm(2,12,1))
    #b<-c(rnorm(8,10,1),rnorm(2,12,1))
   
    random.sample<-bimodal.pop[runif(initial.samp.size,1,1000)]
    random.sample
    control<-c(rnormp2(length(which(random.sample==1)),0,control.sd1,control.shape1),
               rnormp2(length(which(random.sample==0)),0+bimodal.offset,control.sd2,control.shape2))
   
    random.sample<-bimodal.pop[runif(initial.samp.size,1,1000)]
    random.sample
    treatment<-c(rnormp2(length(which(random.sample==1)),0+effect,treatment.sd1,treatment.shape1),
                 rnormp2(length(which(random.sample==0)),0+effect+bimodal.offset,treatment.sd2,treatment.shape2))
   
    pval<-t.test(control,treatment)$p.value
   
    if(drop.outliers==T){
      if(pval>.05){
        control.temp<-control
        treatment.temp<-treatment
       
        high.cutoff<-mean(control.temp)+outlier.cutoff*sd(control.temp)
        low.cutoff<-mean(control.temp)-outlier.cutoff*sd(control.temp)
        high.outliers<-which(control.temp>high.cutoff)
        low.outliers<-which(control.temp<low.cutoff)
       
        control.temp[high.outliers]<-NA
        control.temp[low.outliers]<-NA
       
        high.cutoff<-mean(treatment.temp)+outlier.cutoff*sd(treatment.temp)
        low.cutoff<-mean(treatment.temp)-outlier.cutoff*sd(treatment.temp)
        high.outliers<-which(treatment.temp>high.cutoff)
        low.outliers<-which(treatment.temp<low.cutoff)
       
        treatment.temp[high.outliers]<-NA
        treatment.temp[low.outliers]<-NA
       
        if(
          length(treatment.temp[which(!is.na(treatment.temp))])>1 &&
            length(control.temp[which(!is.na(control.temp))])>1
        ){
          pval<-t.test(control.temp,treatment.temp)$p.value
          control<-control.temp
          treatment<-treatment.temp
        }
       
      }
    } #end drop outliers
   
    adjust=6
    a<-control+adjust
    b<-treatment+adjust
   
   
    boxplot(a,b,col=c("Blue","Red"),ylim=c(0,3*adjust))
    stripchart(a, add=T, at=1,
               ylim=c(0,3*adjust),
               vertical=T, pch=16)
    stripchart(b, add=T, at=2,
               vertical=T, pch=16)
   
    x.fact<-c(rep("a",length(a)),rep("b",length(b)))
    vals<-c(a,b)
    bargraph.CI(x.fact,vals, col=c("Blue","Red"), ylim=c(0,3*adjust),
                main=round(t.test(a,b)$p.value,3)
    )
    control
    treatment
   
   
    out<-rbind(out,t.test(a,b)$p.value)
    Sys.sleep(.5)
    dev.off()
  }
 
  length(which(out<.05))/nsamp
 
 
 
}

#one.sig
#sampsize.sig

#batch.results
#batch.params[which(batch.params[,1]==40),]



Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 09:39:25 AM
RStudio is best to run it in; copy-paste the code in order. The settings are all at the top.


Title: Re: Public Perception of Science
Post by: organofcorti on December 22, 2012, 09:44:18 AM
Do you mind posting your aims and what your results suggest? It's not that I'm lazy (never!) but other readers might want some insight as to what you're doing :)


I've never used Rstudio, but if I can get a good idea of where you're going with this I can understand what your code does more easily.


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 09:54:06 AM
The above code lets you simulate performing t-tests on two-sample data sets drawn from either normal or bimodal distributions. You can choose the sample size; test repeatedly as subjects are added sequentially, or throw away previous results each time; also perform a U test; drop outliers; test multiple outcomes and report only the significant one; and more, as explained in the settings at the top. Just run it once and then mess with the settings.

There is also batch mode where you can randomly generate settings and see which leads to certain results.
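For anyone who doesn't want to run the whole script, here is a minimal standalone sketch (not part of the script above, and the sample sizes and batch counts are arbitrary) of the sequential-testing effect it simulates: two groups drawn from the same N(0,1) distribution, with no true difference, tested again each time subjects are added.

```r
# Optional stopping: test after every batch of added subjects and stop at
# the first p < .05. With no true difference, the chance of declaring
# "significance" at least once climbs well above the nominal 5%.
set.seed(1)
runs <- 2000
hits <- 0
for (r in 1:runs) {
  a <- rnorm(10)                  # "treatment", true effect = 0
  b <- rnorm(10)                  # "control"
  sig <- FALSE
  for (step in 1:8) {             # look at the data up to 8 times
    if (t.test(a, b)$p.value < .05) { sig <- TRUE; break }
    a <- c(a, rnorm(5))           # add 5 more subjects per group
    b <- c(b, rnorm(5))
  }
  if (sig) hits <- hits + 1
}
hits / runs                       # noticeably above 0.05
```

If I read the settings correctly, `sequential=F` in the script corresponds instead to drawing fresh data at each sample size, which keeps the per-test false-positive rate near 5%.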


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 10:02:07 AM
I should say this was not meant for public consumption... but it will work. Batch mode will probably be confusing unless you see what the code does.


Title: Re: Public Perception of Science
Post by: organofcorti on December 22, 2012, 11:26:07 AM
Could you pastebin that script? I'm getting errors I'm having trouble tracing (in more than a thousand lines of code :)), and I might have made copy paste mistakes.


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 12:12:32 PM
I see no reason that should happen. You have fields?

require(fields) #Used for image.plot


That was easier than I thought:
http://pastebin.com/2wvnFBsy


Title: Re: Public Perception of Science
Post by: organofcorti on December 22, 2012, 12:18:59 PM
I see no reason that should happen. You have fields?

require(fields) #Used for image.plot

I didn't have fields, but I installed it when I read the require at the top of the script. I was thinking more that I'd missed a line when I copy pasted, or copied them out of order or something. Probably not, but best to be sure.

That was easier than I thought:
http://pastebin.com/2wvnFBsy

Cheers for that!


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 12:25:21 PM
I see no reason that should happen. You have fields?

require(fields) #Used for image.plot

I didn't have fields, but I installed it when I read the require at the top of the script. I was thinking more that I'd missed a line when I copy pasted, or copied them out of order or something. Probably not, but best to be sure.

That was easier than I thought:
http://pastebin.com/2wvnFBsy

Cheers for that!

BTW, I haven't figured out exactly what to put, but this was made for the Wikipedia page:
https://en.wikipedia.org/wiki/Statistical_hypothesis_testing

If you can figure it out make it clearer and put stuff there.


Title: Re: Public Perception of Science
Post by: organofcorti on December 22, 2012, 12:39:56 PM
Getting this error:
Code:
Error in function (title, width, height, pointsize, family, fontsmooth,  : 
  Unable to create Quartz device target, given type may not be supported.
In addition: Warning message:
In function (title, width, height, pointsize, family, fontsmooth,  :
  Requested on-screen area is too large (600.0 by 500.0 inches).

Never mind. When I finish reading about why null hypothesis testing has become suspect lately, I'll try running it on someone's Windows machine.


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 12:49:47 PM
Getting this error:
Code:
Error in function (title, width, height, pointsize, family, fontsmooth,  : 
  Unable to create Quartz device target, given type may not be supported.
In addition: Warning message:
In function (title, width, height, pointsize, family, fontsmooth,  :
  Requested on-screen area is too large (600.0 by 500.0 inches).

Never mind. When I finish reading about why null hypothesis testing has become suspect lately, I'll try running it on someone's Windows machine.

... Line 297
Code:
      if(batchmode!=T){ 
        if(using.RStudio!=T){
        dev.new(width=600, height=500)

make it:
Code:
      if(batchmode!=T){ 
        if(using.RStudio!=T){
        dev.new()

or just run in normal R


Now I see the first error... It works on a Mac, but maybe you are missing something else.


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 01:04:05 PM
Also to bring this back on topic:

Quote
I agree with most of what you have put forward but don't understand why you are so convinced of evolution but not climate science.

To put it simply, we know fuck all about space and other planets besides our own, so how can we claim to know anything about our own planet?

Edit: I suppose you could say the same for evolution actually, if we saw the same behaviour from organic beings from other planets as well, then that would make evolution even more solid, but with evolution there is a lot more evidence to support it than there is with climate science, which I don't really believe is science, not just yet anyway.

What evidence exactly convinces you? Personally, I have cloned DNA, mutated it, then put it back into bacteria, which made them glow (loose explanation). This convinced me DNA exists and does shit. But if you do not have this experience, what is the convincing evidence?


Title: Re: Public Perception of Science
Post by: Lethn on December 22, 2012, 01:04:50 PM
Every time I see a human do something stupid and they pay for it or learn from it, that and I've also seen how animals are actually getting used to cities now, even cats where I live have adapted to learn how to listen out for a car engine so the moment it starts they dart off rather than just get confused by the noise.


Title: Re: Public Perception of Science
Post by: organofcorti on December 22, 2012, 01:05:49 PM

... Line 297
Code:
      if(batchmode!=T){ 
        if(using.RStudio!=T){
        dev.new(width=600, height=500)

make it:
Code:
      if(batchmode!=T){ 
        if(using.RStudio!=T){
        dev.new()



That worked - cheers. Got some reading and learning to do now. Screw santa, I have a new project ;)


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 01:14:54 PM
Every time I see a human do something stupid and they pay for it or learn from it, that and I've also seen how animals are actually getting used to cities now, even cats where I live have adapted to learn how to listen out for a car engine so the moment it starts they dart off rather than just get confused by the noise.

The natural selection -> evolution narrative does make sense at all levels, doesn't it? That is why I think it is so beautiful. It may still be wrong in explaining many things we see around us, though.


Title: Re: Public Perception of Science
Post by: bb113 on December 22, 2012, 01:20:25 PM

... Line 297
Code:
      if(batchmode!=T){ 
        if(using.RStudio!=T){
        dev.new(width=600, height=500)

make it:
Code:
      if(batchmode!=T){ 
        if(using.RStudio!=T){
        dev.new()



That worked - cheers. Got some reading and learning to do now. Screw santa, I have a new project ;)
Expand it to ANOVAs, and if my preliminary results hold it will mean far worse for scientists who assume normal distributions, etc., when drawing conclusions from data.

It really is possible that the last 50 years of science has been "experts" measuring their own opinions.


Title: Re: Public Perception of Science
Post by: bb113 on January 04, 2013, 08:45:14 AM
So.. what did you think of this OoC? Is it worth moving forward with? The code is actually really inefficient now but may be worth fixing up and publishing.


Title: Re: Public Perception of Science
Post by: Snipes777 on January 04, 2013, 02:23:50 PM
Evolution is true because it has been proven and demonstrated, through hypothesis and experiment, in many papers and scientific studies.

As to whether climate change is a natural occurrence or man-made, I do not know because most of the evidence is correlative, not causative.

http://yourlogicalfallacyis.com/false-cause (http://yourlogicalfallacyis.com/false-cause)

The burden of proof lies with the person who asserts a hypothesis, and thus, the default is to be skeptical and not necessarily believe any hypothesis that is not proven.

http://yourlogicalfallacyis.com/burden-of-proof (http://yourlogicalfallacyis.com/burden-of-proof)

However, just because of the correlative fallacy, this doesn't mean that the assertion is false. There may come a day when there is better evidence or there may be evidence I am not aware of that can prove that climate change is caused by man.

http://yourlogicalfallacyis.com/the-fallacy-fallacy (http://yourlogicalfallacyis.com/the-fallacy-fallacy)

As to your mention of personal experience that proved evolution to you, I can read medical or scientific papers about other people's experiences and studies that are properly performed and then benefit from the knowledge they have garnered without experiencing the event first-hand.


Title: Re: Public Perception of Science
Post by: organofcorti on January 05, 2013, 08:15:56 AM
So.. what did you think of this OoC? Is it worth moving forward with? The code is actually really inefficient now but may be worth fixing up and publishing.

Sorry, been on holidays and afk for a while.

I'm still coming to terms with your ideas - PM me?


Title: Re: Public Perception of Science
Post by: stochastic on January 06, 2013, 01:31:14 PM
From what I have observed most public debate regarding science revolves around two issues:

1) Climate Change due to human influence on the environment
2) Evolution of life on Earth due to long term natural selection

How do you determine what to believe (or not) regarding these theories?
What kind of evidence would convince you to change your mind?
Why do you place trust (or not) in the consensus of the experts in these fields?
Given infinite resources, how would you determine the "truth"?

As in most other sciences, most people don't understand evolutionary theory.  People feel these concepts are easy to understand, but scientists devote a significant part of their lives to understanding these things, and most of the time they are wrong.  If you ask people's opinion of theoretical quantum physics, most will say "oh I don't understand that stuff."  If you ask their opinion about climate change or evolution, then they have the answer.

It pisses me off when ignorant people claim they can understand these complex topics yet have no idea how to run a statistical test.  They post links to articles that they themselves could never reproduce.  "Hey, this paper is peer-reviewed, and it says that XXX is not true."  Wow, now can you explain each statistical test they ran on their data?  Did they remove any data points, and if so, why?

I once met a guy that asked what I do.  I told him I was an evolutionary biologist.  This guy had the nerve to tell me, "Oh I read this book by this scientist, Dr. Bebe, that said evolution was not real."

My reply was, "Wow, that is interesting, what do you do for a living?"

He said, "I am a musician."

"You know music doesn't exist, right, it is just noise in the air?"


Quote
Evolution has already been determined to be true by Darwin

While I sympathize with the person that said this, nevertheless, it is a very incorrect statement.  Darwin didn't prove anything.  He laid out a hypothesis with evidence based on morphological data.  He had no idea what hereditary mechanism was required.  A few decades later, biologists and statisticians combined Mendel's and Darwin's theories, which allowed for a genetic mechanism of change.

This is what is wrong with trying to educate non-scientists.  They have a preconceived bias, and they read reports that confirm this bias.  There is a deeper reason why people adopt beliefs, and it has nothing to do with the scientific method.  It is based on emotion.

Darwin's theory was not enough, though, because biologists did not understand the concept of mutation.  The true hero of evolution is Motoo Kimura (http://en.wikipedia.org/wiki/Motoo_Kimura), who developed the neutral theory of evolution (http://en.wikipedia.org/wiki/Neutral_theory_of_molecular_evolution), which showed mathematically that most changes at the DNA level have neutral effects.  Darwin said that the characteristics of living systems arise due to their beneficial nature; Kimura, on the other hand, explained that these characteristics arose because of neutral mutations and genetic drift.  Genetic drift is the change in gene frequencies over time due to random sampling.

What is important about Kimura's theory is that it allows Darwin's theory to be tested.  Before, Darwin's theory was just a proof of concept, an idea that had never been tested.  It would be as if the bitcoin white paper had been published 100 years ago: sure, the concept could work, but no one could make it work because we didn't have the necessary tools.  Kimura's theory made evolutionary adaptation a testable hypothesis.  For some examples of tests that allow this, see the McDonald-Kreitman test (http://en.wikipedia.org/wiki/McDonald%E2%80%93Kreitman_test) and the Ka/Ks ratio (http://en.wikipedia.org/wiki/Ka/Ks_ratio).
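To illustrate the logic of such a test with made-up numbers (the counts below are hypothetical, not from any real gene):

```r
# Ka/Ks in miniature. Ka = nonsynonymous substitutions per nonsynonymous
# site; Ks = synonymous substitutions per synonymous site. Synonymous
# changes are taken as (roughly) neutral, so Ks provides a baseline rate.
Ka <- 12 / 300    # hypothetical: 12 changes across 300 nonsynonymous sites
Ks <- 30 / 150    # hypothetical: 30 changes across 150 synonymous sites
ratio <- Ka / Ks
ratio             # 0.2
# ratio < 1: purifying selection; ~1: neutral; > 1: positive selection
```

A ratio well below 1, as here, would suggest that most nonsynonymous changes are being removed by selection.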


Title: Re: Public Perception of Science
Post by: Grant on January 07, 2013, 07:11:10 PM
From what I have observed most public debate regarding science revolves around two issues:

1) Climate Change due to human influence on the environment
2) Evolution of life on Earth due to long term natural selection

How do you determine what to believe (or not) regarding these theories?
What kind of evidence would convince you to change your mind?
Why do you place trust (or not) in the consensus of the experts in these fields?
Given infinite resources, how would you determine the "truth"?

No idea if anybody has thought of this before, but how about just applying the scientific method? Essentially, what I'm suggesting here is to do the opposite of what unscientific "career-scientists" such as Richard Dawkins are doing.

This> You try to prove wrong whatever you tried to believe. = science
Not this> You try to prove whatever you believe to be right. = religion


Title: Re: Public Perception of Science
Post by: ImNotHerb on January 07, 2013, 07:25:36 PM
The vast majority of "the public" lacks the intelligence to understand the intricacies of the Scientific Method, let alone be able to detect when it is improperly used. To them, "Scientist" is a title that commands respect, deference... and obedience - the New Priesthood.

Evolution is a theory that accounts for facts that anyone can observe - that is, the abundance of life on earth. Humans are here, along with monkeys, birds, snails, insects, trees, mold, bacteria, etc. The religious assertion had always been that "God" just *poofed* all our ancestors into being in some cosmic fart, and that is how we all got here. Evolution, however, provides a simple physical process for generating the biodiversity we observe - eliminating the impetus to accept a supernatural explanation.

"Anthropogenic global warming", on the other hand, is a different beast altogether. It began as speculation in search of facts, or put another way, a bias in search of confirmation. It doesn't seek to explain a past, but to predict a future. It also has the unfortunate feature of being a warped political justification for increased wealth confiscation - something all States fundamentally seek to gain. And wouldn't you know it: the State is the single biggest provider of grants and funding for research (confirmatory only) into the potential goldmine. "The end is nigh... unless you pay up!"

People forget that even Galileo recanted when he saw what the political apparatus had in store for him. Nowadays that lesson is lost while the threats are much more subtle - public ridicule, loss of funding... being "blackballed" for anything less than fully-endorsing a politically-backed consensus. One minute you're in the game; a 'respected scientist' and the next minute you're a joke who will be mocked by the media and shunned from academia. Written off as a kook on par with flat-earthers and republicans.

In other words, the observation and scientific discovery surrounding evolution is orders of magnitude above the statistical shenanigans that support AGW. They are no more alike than dermatology and phrenology.

That's my observation.  :)


Title: Re: Public Perception of Science
Post by: myrkul on January 07, 2013, 07:31:53 PM
In other words, the observation and scientific discovery surrounding evolution is orders of magnitude above the statistical shenanigans that support AGW. They are no more alike than dermatology and phrenology.

http://www.dorktower.com/files/2012/12/THIS.gif


Title: Re: Public Perception of Science
Post by: FirstAscent on January 07, 2013, 08:53:01 PM
"Anthropogenic global warming", on the other hand, is a different beast altogether. It began as speculation in search of facts, or put another way, a bias in search of confirmation. It doesn't seek to explain a past, but to predict a future. It also has the unfortunate feature of being a warped political justification for increased wealth confiscation - something all States fundamentally seek to gain. And wouldn't you know it; the State is the single biggest provider of grants and funding research (confirmatory only) into the potential goldmine. "The end is nigh... unless you pay up!"

Is this like the relationship between CFC output and depletion of the ozone layer? You know, where the EPA, and then finally the Montreal Protocol reduced CFC output. All those damned money grubbing scientists getting grants from the nasty governments, showing the detrimental effects of CFCs, and then, god forbid, the passing of regulations which, ahem, reduced CFC production?

Sounds exactly the same to me.

http://www.theozonehole.com/montreal.htm


Title: Re: Public Perception of Science
Post by: organofcorti on January 07, 2013, 09:48:45 PM
Evolution is a theory that accounts for facts that anyone can observe - that is, the abundance of life on earth.

It appears you think that theories which account for facts people can't easily observe are worthless? This describes a large majority of the work I do every day, and let me assure you that when I diagnose a neonate with a hearing loss - a neonate which is then given early intervention - it makes a huge impact on their life. Even if you don't understand the technology by which I make the diagnosis or the theories underpinning it.

It's nice that evolution is so easily understood by lay-people. But this is an exception, not the rule. Most discoveries that I read about as part of my job would not be understandable by most people outside my field. However, they do have a significant impact on the world.

Don't mistake your inability to understand climate change (don't feel bad, I don't understand it either) for a climate scientist's inability to understand climate change. Don't just assume that because you don't fully comprehend something, the people involved must be obfuscating and involved in some nefarious political conspiracy. I certainly am not.



Title: Re: Public Perception of Science
Post by: FirstAscent on January 07, 2013, 10:09:43 PM
Climate science is another beast entirely. That the climate is changing is not really in doubt. Which way, of course, has been a matter of some debate. Back in the 70's for instance, the big worry was global cooling, and a new ice age.

The above is more crap from the deniers' playbook. Tell me myrkul, if I keep showing that your shit isn't even worth shit, will you shut up with your FUD? Recall your claim about melting ice caps?

Now it's crap from you about a consensus from scientists about an impending ice age in the '70s. You, sir, are a brainwashed fool who eats up everything you can from politically motivated sites, rather than scientific sites. I feel sorry for you, because you seem to have a high IQ - but poorly utilized.

Here are some people who will explain it for you.

http://www.csmonitor.com/Environment/Bright-Green/2009/0728/were-they-really-predicting-an-ice-age-in-the-1970s

http://www.skepticalscience.com/ice-age-predictions-in-1970s.htm

http://greenfyre.wordpress.com/2010/11/20/the-1970s-ice-age-9-myth/


Title: Re: Public Perception of Science
Post by: bb113 on January 07, 2013, 11:37:25 PM
From what I have observed most public debate regarding science revolves around two issues:

1) Climate Change due to human influence on the environment
2) Evolution of life on Earth due to long term natural selection

How do you determine what to believe (or not) regarding these theories?
What kind of evidence would convince you to change your mind?
Why do you place trust (or not) in the consensus of the experts in these fields?
Given infinite resources, how would you determine the "truth"?

No idea if anybody has thought of this before, but how about just applying the scientific method? Essentially, what I'm suggesting here is to do the opposite of what unscientific "career-scientists" such as Richard Dawkins are doing.

This> You try to prove wrong whatever you tried to believe. = science
Not this> You try to prove whatever you believe to be right. = religion

You could, but it may be time-consuming and expensive. Let's take a simple example: convince yourself (or at least design experiments you could plausibly do at home) that gravity has anything to do with mass, without any circular logic. It's actually a good exercise for appreciating the confusing web of theory, data, and logic that scientists rely on in reality. There are hidden assumptions everywhere. Usually they are recognized by the original proponents, then moved to footnotes by later authors, and finally forgotten altogether, leading to "laws".

Also, there are many who argue that much of what is commonly taken as "science" is actually not, because scientists often try to "disprove" strawman null hypotheses rather than disprove (or even make) their own predictions.
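A quick R illustration of the strawman-null point (the effect size and sample size here are arbitrary, chosen just for demonstration): with a huge sample, even a trivially small true difference will "reject" the null of exactly zero difference.

```r
# Two groups differing by a meaningless 0.01 standard deviations.
# With n = 1e6 per group the t-test still flags the difference as
# "significant", so rejecting the nil null says nothing about whether
# the effect matters.
set.seed(123)
n <- 1e6
x <- rnorm(n, mean = 0,    sd = 1)
y <- rnorm(n, mean = 0.01, sd = 1)
p <- t.test(x, y)$p.value
p   # far below 0.05 despite the trivial effect
```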


Title: Re: Public Perception of Science
Post by: FirstAscent on January 07, 2013, 11:47:08 PM
From what I have observed most public debate regarding science revolves around two issues:

1) Climate Change due to human influence on the environment
2) Evolution of life on Earth due to long term natural selection

How do you determine what to believe (or not) regarding these theories?
What kind of evidence would convince you to change your mind?
Why do you place trust (or not) in the consensus of the experts in these fields?
Given infinite resources, how would you determine the "truth"?

No idea if anybody thought of this before, but how about just applying the scientific method? Essentially what I'm suggesting here is to do the opposite of what unscientific "career-scientists" such as Richard Dawkins are doing.

This> You try to prove wrong whatever you tried to believe. = science
Not this> You try to prove whatever you believe to be right. = religion

You could, but it may be time consuming and expensive. Let's take a simple example: convince yourself (or at least design experiments you can plausibly do at home) that gravity has anything to do with mass, without any circular logic. It's actually a good exercise for realizing the confusing web of theory, data, and logic that scientists rely on in reality. There are hidden assumptions everywhere. Usually they are recognized by the original proponents, then moved to footnotes by later authors, and finally forgotten altogether, leading to "laws".

Also, there are many who argue that much of what is commonly taken as "science" actually is not, because scientists often try to "disprove" strawman null hypotheses rather than disprove (or even make any) predictions.

What do you think are your most grievous assumptions and most flagrant errors or oversights in your quest (driven by the bias of your political ideology) to pinpoint tiny things which might reduce the credibility of climate science? As you like to come off as objective, I would hope, though I don't have much faith, that you could report on these.

EDIT: I'll be honest here. I think you have an agenda combined with a lack of common sense. Add to that a mix of selective cherry picking of datasets and you get worthless speculation. I judge your agenda based on posts you've made about governments. I judge your lack of common sense based on continued posts you made in a year-old thread. And I judge your cherry picking by the obvious evidence of your personal selection of only a very few datasets.


Title: Re: Public Perception of Science
Post by: bb113 on January 07, 2013, 11:53:08 PM
From what I have observed most public debate regarding science revolves around two issues:

1) Climate Change due to human influence on the environment
2) Evolution of life on Earth due to long term natural selection

How do you determine what to believe (or not) regarding these theories?
What kind of evidence would convince you to change your mind?
Why do you place trust (or not) in the consensus of the experts in these fields?
Given infinite resources, how would you determine the "truth"?

No idea if anybody thought of this before, but how about just applying the scientific method? Essentially what I'm suggesting here is to do the opposite of what unscientific "career-scientists" such as Richard Dawkins are doing.

This> You try to prove wrong whatever you tried to believe. = science
Not this> You try to prove whatever you believe to be right. = religion

You could, but it may be time consuming and expensive. Let's take a simple example: convince yourself (or at least design experiments you can plausibly do at home) that gravity has anything to do with mass, without any circular logic. It's actually a good exercise for realizing the confusing web of theory, data, and logic that scientists rely on in reality. There are hidden assumptions everywhere. Usually they are recognized by the original proponents, then moved to footnotes by later authors, and finally forgotten altogether, leading to "laws".

Also, there are many who argue that much of what is commonly taken as "science" actually is not, because scientists often try to "disprove" strawman null hypotheses rather than disprove (or even make any) predictions.

What do you think are your most grievous assumptions and most flagrant errors or oversights in your quest (driven by the bias of your political ideology) to pinpoint tiny things which might reduce the credibility of climate science? As you like to come off as objective, I would hope, though I don't have much faith, that you could report on these.

If we get past the rhetoric, that is an interesting question. I will answer it if you answer the OP.


Title: Re: Public Perception of Science
Post by: FirstAscent on January 07, 2013, 11:56:34 PM
From what I have observed most public debate regarding science revolves around two issues:

1) Climate Change due to human influence on the environment
2) Evolution of life on Earth due to long term natural selection

How do you determine what to believe (or not) regarding these theories?
What kind of evidence would convince you to change your mind?
Why do you place trust (or not) in the consensus of the experts in these fields?
Given infinite resources, how would you determine the "truth"?

No idea if anybody thought of this before, but how about just applying the scientific method? Essentially what I'm suggesting here is to do the opposite of what unscientific "career-scientists" such as Richard Dawkins are doing.

This> You try to prove wrong whatever you tried to believe. = science
Not this> You try to prove whatever you believe to be right. = religion

You could, but it may be time consuming and expensive. Let's take a simple example: convince yourself (or at least design experiments you can plausibly do at home) that gravity has anything to do with mass, without any circular logic. It's actually a good exercise for realizing the confusing web of theory, data, and logic that scientists rely on in reality. There are hidden assumptions everywhere. Usually they are recognized by the original proponents, then moved to footnotes by later authors, and finally forgotten altogether, leading to "laws".

Also, there are many who argue that much of what is commonly taken as "science" actually is not, because scientists often try to "disprove" strawman null hypotheses rather than disprove (or even make any) predictions.

What do you think are your most grievous assumptions and most flagrant errors or oversights in your quest (driven by the bias of your political ideology) to pinpoint tiny things which might reduce the credibility of climate science? As you like to come off as objective, I would hope, though I don't have much faith, that you could report on these.

If we get past the rhetoric, that is an interesting question. I will answer it if you answer the OP.

In the EDIT I made to my post, I may have answered the question for you.


Title: Re: Public Perception of Science
Post by: FirstAscent on January 07, 2013, 11:58:54 PM
Furthermore, I judge you as having very limited knowledge about climate science in general, as evidenced by your own admission of having no general knowledge of things going on in the natural world - for example, the record-breaking ice melt in the Arctic this year.


Title: Re: Public Perception of Science
Post by: bb113 on January 08, 2013, 12:07:44 AM
ok:
Quote
From what I have observed most public debate regarding science revolves around two issues:

1) Climate Change due to human influence on the environment
2) Evolution of life on Earth due to long term natural selection
How do you determine what to believe (or not) regarding these theories?
What kind of evidence would convince you to change your mind?
Why do you place trust (or not) in the consensus of the experts in these fields?
Given infinite resources, how would you determine the "truth"?

Where do I make any claims about climate science in general? This thread is about public perception of science and what its determinants are. If you don't wish to talk about that, please explain why.


Title: Re: Public Perception of Science
Post by: Rob E on January 08, 2013, 12:11:48 AM
From what I have observed most public debate regarding science revolves around two issues:

1) Climate Change due to human influence on the environment
2) Evolution of life on Earth due to long term natural selection

How do you determine what to believe (or not) regarding these theories?
What kind of evidence would convince you to change your mind?
Why do you place trust (or not) in the consensus of the experts in these fields?
Given infinite resources, how would you determine the "truth"?

No idea if anybody thought of this before, but how about just applying the scientific method? Essentially what I'm suggesting here is to do the opposite of what unscientific "career-scientists" such as Richard Dawkins are doing.

This> You try to prove wrong whatever you tried to believe. = science
Not this> You try to prove whatever you believe to be right. = religion

You could, but it may be time consuming and expensive. Let's take a simple example: convince yourself (or at least design experiments you can plausibly do at home) that gravity has anything to do with mass, without any circular logic. It's actually a good exercise for realizing the confusing web of theory, data, and logic that scientists rely on in reality. There are hidden assumptions everywhere. Usually they are recognized by the original proponents, then moved to footnotes by later authors, and finally forgotten altogether, leading to "laws".

Also, there are many who argue that much of what is commonly taken as "science" actually is not, because scientists often try to "disprove" strawman null hypotheses rather than disprove (or even make any) predictions.

I really like that post. That is awesome.


Title: Re: Public Perception of Science
Post by: hashman on January 08, 2013, 04:50:09 AM
1) Climate Change due to human influence on the environment


This one's pretty easy. Let's begin an experiment by opening our eyes. Perhaps we can see asphalt, dams, buildings, agricultural land? Roads? Houses? That's all due to humans. Now adjust the "climate control" knob and notice the effect in your room... so I think you will agree, humans can affect our climate.

The interesting questions are in the details. How fast are species becoming extinct, and what does that mean for our survival? What is all this extra CO2 in the air going to do? How much longer can we visit Venice? By what date will the middle of the USA be a desert? Could we reverse that trend? What changes will occur in the oceans that will affect us, now that we have killed off almost all the large fish?

2) Evolution of life on Earth due to long term natural selection


You gotta love people who claim they don't believe in evolution. Literally, this means you don't believe that time passes. Things move, the Earth rotates, things change, and it's called evolution. Life changes; it can be observed on short time scales in the lab or the greenhouse, or on large time scales by looking at the fossil record.

There are certainly many puzzling questions and mysteries about biological evolution, but I feel this is a rant already ;)


Title: Re: Public Perception of Science
Post by: bb113 on January 08, 2013, 05:13:45 PM
1) Climate Change due to human influence on the environment


This one's pretty easy. Let's begin an experiment by opening our eyes. Perhaps we can see asphalt, dams, buildings, agricultural land? Roads? Houses? That's all due to humans. Now adjust the "climate control" knob and notice the effect in your room... so I think you will agree, humans can affect our climate.

The interesting questions are in the details. How fast are species becoming extinct, and what does that mean for our survival? What is all this extra CO2 in the air going to do? How much longer can we visit Venice? By what date will the middle of the USA be a desert? Could we reverse that trend? What changes will occur in the oceans that will affect us, now that we have killed off almost all the large fish?

2) Evolution of life on Earth due to long term natural selection


You gotta love people who claim they don't believe in evolution. Literally, this means you don't believe that time passes. Things move, the Earth rotates, things change, and it's called evolution. Life changes; it can be observed on short time scales in the lab or the greenhouse, or on large time scales by looking at the fossil record.

There are certainly many puzzling questions and mysteries about biological evolution, but I feel this is a rant already ;)


I wouldn't visit Venice if I were you. It's just a huge rat maze.