 Boards
 Current Events
 New proposal to shift statistical threshold from p < .05 to p < .005
http://www.nature.com/news/big-names-in-statistics-want-to-shake-up-much-maligned-p-value-1.22375
Full preprint: https://osf.io/preprints/psyarxiv/mky9j

Shifting alpha to a new arbitrary value is literally pointless, especially in the way they recommend. If you are a scientist, you need to understand statistics and data analysis; that's really all. And I don't think understanding is largely the issue (I think most do understand the statistics deeply), but it's a good scapegoat for the culture of pushing positive results and the journal system, which are much more difficult to fix.
=E[(x-E[x])(y-E[y])]

Literally why would you care?
My name is Harpuia, one of the four Guardians of Master X and General of the Strong Air Battalion, The Rekku Army.

SageHarpuia posted...
Literally why would you care?

P-value cutoffs affect the significance of findings, which in turn affects the chances of being published in reputable journals, which in turn affects career prospects. So shifting the cutoff would substantively affect millions of researchers around the world (including myself). Fisher never intended for the p-value to be used as a hard cutoff for meaningful findings but as one aspect to be considered among a bunch of other factors. It's too bad that that's what it's become.
No sig here

i feel like the problem is not that .05 is too high
it's that the media tends to exaggerate the significance of findings with p-values that warrant less confidence
And when the hourglass has run out, eternity asks you about only one thing: whether you have lived in despair or not.

I guess it's less laughable than the time someone published, in a pretty high-impact (but not well-regarded) journal, a suggestion to switch from p-values to confidence intervals, even though they are literally mathematically equivalent.
=E[(x-E[x])(y-E[y])]
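For what it's worth, that duality is easy to check for a simple two-sided z-test: rejecting at level alpha is the same thing as the null value falling outside the (1 - alpha) confidence interval. A minimal stdlib-only sketch (the estimate and standard error below are made-up numbers, just for illustration):

```python
from statistics import NormalDist

def z_test_and_ci(est, se, alpha=0.05):
    """Two-sided z-test of H0: effect = 0, plus the matching (1 - alpha) CI."""
    nd = NormalDist()
    z = est / se
    p = 2 * (1 - nd.cdf(abs(z)))       # two-sided p-value
    z_crit = nd.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = .05
    ci = (est - z_crit * se, est + z_crit * se)
    return p, ci

# The test rejects exactly when 0 falls outside the CI:
p, (lo, hi) = z_test_and_ci(est=1.2, se=0.5)
print(round(p, 4), lo < 0 < hi)  # -> 0.0164 False (significant, and the CI excludes 0)
```

Either way you slice it, the same information comes out: `p < alpha` holds if and only if the null value sits outside the interval.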

Transcendentia posted...
This is fantastic news and I hope it happens.

You're, uh, really bad at this.
=E[(x-E[x])(y-E[y])]

I am kinda rusty on statistics, but isn't a p-value of 0.005 way too low to be practical? Like, that's a very small part for outliers, and in practice it would almost include the whole population?
Playing: Rainbow Six Siege/Battlefield 1/Dark Souls 3

I think 0.01 or 0.02 would be more reasonable. 0.005 is 10 times stricter. That's ridiculous with some distributions.
Rainbow Dashing: "it's just star wars"
AutumnEspirit: *kissu* 
Transcendentia posted...
This is fantastic news and I hope it happens.

We've all had enough of experts.
JM_14_GOW posted...
I am kinda rusty on statistics, but isn't a p-value of 0.005 way too low to be practical? Like, that's a very small part for outliers, and in practice it would almost include the whole population?

iirc it would correspond to 3-sigma instead of 2-sigma on a normal probability distribution. it's not 'too low to be practical' so much as 'unnecessary and likely to suppress useful data'
And when the hourglass has run out, eternity asks you about only one thing: whether you have lived in despair or not.

Darkman124 posted...
JM_14_GOW posted...
I am kinda rusty on statistics, but isn't a p-value of 0.005 way too low to be practical? Like, that's a very small part for outliers, and in practice it would almost include the whole population?

Yeah, that's more or less what I am trying to say: basically it's including 99.5% of the population, and you lose a significant number of outliers imo.
Playing: Rainbow Six Siege/Battlefield 1/Dark Souls 3

JM_14_GOW posted...
p<.005 is equivalent to a 99.5% confidence interval, which in a normal probability distribution is roughly three standard deviations from the mean

it's common for engineering projects to require 3-sigma for safety of designs, as it's treated as being as close to "we know this won't happen" as possible, since the standard deviation there is typically a manufacturing tolerance. i think it's a lot less viable in pure science, where the error is influenced by things outside your control and cannot be as easily minimized
And when the hourglass has run out, eternity asks you about only one thing: whether you have lived in despair or not.
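The sigma figures being thrown around are easy to check directly. A quick stdlib-only sketch (two-sided tails on a standard normal):

```python
from statistics import NormalDist

nd = NormalDist()

def two_sided_p(z):
    """Probability mass beyond +/- z standard deviations on a standard normal."""
    return 2 * (1 - nd.cdf(z))

print(round(two_sided_p(2), 4))             # -> 0.0455 (2-sigma: close to p < .05)
print(round(two_sided_p(3), 4))             # -> 0.0027 (3-sigma: stricter than p < .005)
print(round(nd.inv_cdf(1 - 0.005 / 2), 2))  # -> 2.81 (exact two-sided cutoff for p < .005)
```

So "3-sigma" is an approximation here: the exact two-sided cutoff for p < .005 is about 2.81 standard deviations, just under three.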

Nah.
Number of legendary 500 post topics: 26, 500th posts: 15; PiO ATTN: 2
Thank the lord, the PiOverlord! RotM wins 1 
Darkman124 posted...
it's not 'too low to be practical' so much as 'unnecessary and likely to suppress useful data'

This is pretty much the primary issue. The false negative rate in a lot of fields is a much larger issue than the false positive rate. We really shouldn't be paying too much attention to arbitrary cutoffs, tbh.
=E[(x-E[x])(y-E[y])]
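That false-negative tradeoff is easy to simulate: for a study of a real effect, tightening alpha from .05 to .005 raises the miss rate. A stdlib-only sketch with made-up effect sizes (the numbers are illustrative and not from the preprint):

```python
import random
from statistics import NormalDist

random.seed(0)
nd = NormalDist()

def miss_rate(true_effect, se, alpha, trials=20_000):
    """Fraction of simulated studies that fail to detect a real effect (false negatives)."""
    z_crit = nd.inv_cdf(1 - alpha / 2)  # two-sided critical value
    missed = 0
    for _ in range(trials):
        est = random.gauss(true_effect, se)  # one study's noisy estimate
        if abs(est / se) < z_crit:           # not significant, despite a real effect
            missed += 1
    return missed / trials

# A study powered at roughly 80% under alpha = .05 (true z around 2.8):
print(miss_rate(0.28, 0.10, alpha=0.05))   # misses roughly 20% of real effects
print(miss_rate(0.28, 0.10, alpha=0.005))  # misses roughly half of them
```

Under these assumptions, the stricter threshold roughly doubles the false-negative rate for the same study design; you'd need a considerably larger sample to get the power back.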

SageHarpuia posted...
Literally why would you care?

His user name is literally COVxy.
BKSheikah owned me so thoroughly in the 2017 guru contest, I'd swear he used the Lens of Truth to pick his bracket. (thengamer.com/guru)

ZMythos posted...
I think 0.01 or 0.02 would be more reasonable. 0.005 is 10 times stricter. That's ridiculous with some distributions.

The fact that people are saying things like this shows how arbitrary it is. It really depends more on the context. For some areas, like maybe marketing, a p-value of 0.1 is fine. In something like astrophysics, the p-value needs to be far lower than 0.05.
/poast

literal_garbage posted...
Transcendentia posted...
you guys mad?

Must be metatrolling from the great Clad.
=E[(x-E[x])(y-E[y])]

I don't like it.

I don't have a whole lot of knowledge on this subject, but from my perspective, it seems like different fields would want to use different p value thresholds. It doesn't make sense to have a universal threshold that applies across all fields.
I may not go down in history, but I will go down on your sister.

I thought the threshold varied depending on the field and type of study, and that 0.05 was more of an informal standard than anything?
Everything has an end, except for the sausage. It has two.

Sativa_Rose posted...
I don't have a whole lot of knowledge on this subject, but from my perspective, it seems like different fields would want to use different p-value thresholds. It doesn't make sense to have a universal threshold that applies across all fields.

Meh, it's more that careful consideration of an argument in the context of the totality of the evidence is more important than an arbitrary cutoff on any one of the tests. I'd be completely fine accepting a paper with no statistically significant results if all of the data pointed towards a single conclusion and there were enough convergent evidence.
=E[(x-E[x])(y-E[y])]
