Modern basic science rewards large papers in highly cited journals.  However, it is difficult for translational and clinical researchers to assess the quality of a basic science paper.  Citation count is one proxy, but it is a highly inaccurate measure; some papers that cannot be replicated have hundreds of citations.  Given the lack of safeguards ensuring publication quality, the intense competition to produce high-profile publications incentivizes publication bias (i.e., the tendency of journals to publish experiments that confirm their original hypotheses or report "positive" results).  An obvious way to affirm or dispute the quality of a paper is through replication.  Indeed, a large amount of translational and clinical research likely yields negative results because it was built on basic science premises that were never valid.  Unfortunately, at present replication studies are not valued by the basic science community, and they typically go unpublished.  In addition, it is often extremely difficult to replicate published data in the basic sciences because detailed protocols are rarely available.  Our solution is MicroPub, a platform that helps scientists replicate published data and incentivizes reproducible science.  Here's why and how.