Please use this identifier to cite or link to this item:
https://lib.hpu.edu.vn/handle/123456789/22276
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Colquhoun, David | en_US |
dc.date.accessioned | 2016-07-18T06:49:08Z | |
dc.date.available | 2016-07-18T06:49:08Z | |
dc.date.issued | 2014 | en_US |
dc.identifier.other | HPU4160433 | en_US |
dc.identifier.uri | https://lib.hpu.edu.vn/handle/123456789/22276 | - |
dc.description.abstract | If you use p=0.05 to suggest that you have made a discovery, you will be wrong at least 30% of the time. If, as is often the case, experiments are underpowered, you will be wrong most of the time. This conclusion is demonstrated from several points of view. First, by tree diagrams, which show the close analogy with the screening test problem. Similar conclusions are drawn by repeated simulations of t-tests. | en_US |
dc.format.extent | 16 p. | en_US |
dc.format.mimetype | application/pdf | - |
dc.language.iso | en | en_US |
dc.subject | Statistics | en_US |
dc.subject | Computational biology | en_US |
dc.subject | Significance tests | en_US |
dc.subject | Reproducibility | en_US |
dc.subject | False discovery rate | en_US |
dc.title | An investigation of the false discovery rate and the misinterpretation of p-values | en_US |
dc.type | Article | en_US |
dc.size | 741KB | en_US |
dc.department | Education | en_US |
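The abstract describes estimating the false discovery rate both by tree diagrams (the screening-test analogy) and by repeated simulations of t-tests. The sketch below is not the paper's own script; it only illustrates that kind of simulation under assumed parameters (a 10% prevalence of real effects, 16 observations per group, a true effect of one standard deviation, and the usual p < 0.05 cut-off), all of which are illustrative choices rather than values taken from this record.

```python
# Minimal sketch, with parameters assumed for illustration (not the paper's script):
# simulate many two-group experiments, test each with a t-test, and count how often
# a "significant" result (p < 0.05) comes from a true null -- the false discovery rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_experiments = 100_000   # number of simulated experiments (assumption)
n_per_group = 16          # observations per group (assumption)
prevalence = 0.1          # assumed fraction of experiments with a real effect
effect_size = 1.0         # true difference in means, in SD units (assumption)
alpha = 0.05              # conventional significance threshold

false_pos = 0
true_pos = 0
for _ in range(n_experiments):
    real_effect = rng.random() < prevalence
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(effect_size if real_effect else 0.0, 1.0, n_per_group)
    p = stats.ttest_ind(a, b).pvalue
    if p < alpha:
        if real_effect:
            true_pos += 1
        else:
            false_pos += 1

fdr = false_pos / (false_pos + true_pos)
print(f"Estimated false discovery rate at p < {alpha}: {fdr:.2f}")
```

With these assumed values the estimated rate comes out above 30%, in line with the abstract's claim that a p = 0.05 "discovery" is wrong at least 30% of the time; lower prevalence or lower power pushes the rate higher still.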
Appears in Collections: Education
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
0316_Aninvestigation.pdf | Restricted Access | 741.15 kB | Adobe PDF |