The real merit of research is in its specific, substantive content. But if a contribution proves important and useful, it will be taken up, built upon, and cited in subsequent research. Scientometrics attempts to estimate and quantify this research uptake and impact. The classical metrics of research productivity were publication counts ("publish or perish") and the prestige of their publication venues (refereed journals or scholarly monographs), based on those venues' prior track records for quality and importance. Publication counts were soon supplemented by "journal impact factors" (average citation counts), and eventually also by individual article and author citation counts. In the online era, the potential metrics have extended further to include download counts, growth and decay rates for metrics, co-citation measures, and more elaborate a priori formulas such as the "h-index" and its variants. Still prominently missing today, however, are three things: (1) book metrics, (2) a validation of the metrics, discipline by discipline, that tests and confirms their meaning and predictive power, especially in research assessment, and (3) a sufficiently large and open webwide database to allow the global research community to test, validate and monitor its metrics (which are currently collected systematically only by proprietary commercial databases). The Open Access (OA) movement (for providing free online access to all journal articles) is helping to generate the requisite OA database for articles by extending universities' and funders' "publish or perish" mandates to require also that their authors make their publications OA by depositing them in their institutions' OA repositories. OA not only makes it possible to harvest research impact metrics webwide, but it has also been shown to increase them (the "OA Impact Advantage"). I will describe the new OA metrics, the OA Advantage, and how OA metrics can be tested and validated for use in research assessment.
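As a concrete illustration of one of the a priori formulas mentioned above, the h-index (Hirsch, 2005) is defined as the largest number h such that an author has h papers with at least h citations each. A minimal sketch of its computation (the function name and example citation counts are illustrative, not drawn from any particular dataset):

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    with at least h citations each (Hirsch's h-index)."""
    counts = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:   # the paper at this rank still "supports" h = rank
            h = rank
        else:
            break           # counts are sorted, so no later paper can qualify
    return h

# Hypothetical author with five papers:
print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
```

Variants such as the g-index follow the same ranked-citation-count scheme but change the threshold condition applied at each rank.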