Magna Carta Scientiae

Abstract

Science is a catalyst for human progress. But a publishing monopoly and funding monopsony have inhibited research.

We intend to improve incentives in science by developing smart research contracts mediated by peer-to-peer review networks. These will collectively reward scientific contributions, including proposals, papers, replications, datasets, analyses, annotations, editorials, and more.

Long term, these smart contracts help accelerate research by minimizing science friction, ensuring science quality, and maximizing science variance.

Email us or follow @atoms_org to help us build a flourishing research economy.

1. Introduction: The Research Economy

Papers are the fundamental asset of the research economy: they serve as proof of work that valuable research has been completed.

Funding agencies and research institutions evaluate scientists based on their publications. Principal investigators (PIs) attract prospective students and collaborators via papers. Investors and companies use scientific literature to conduct due diligence on research ranging from basic discoveries to clinical studies.

Thus, the evaluation and dissemination of papers are vital to this research economy.

Publishers are the sole arbiters of papers today. They assign a value — denominated in “prestige” — by accepting a paper into the appropriate journal based on selectivity and domain.

To evaluate papers, journals typically outsource the work to two or three PIs, who often outsource it further to their students. Reviewers are unpaid for this peer review, as it is considered an expected part of their scientific duties.

Peer review is believed to be necessary because of the industrialization of science. Research papers and proposals have become too specialized and too numerous, making it difficult to assess merit prima facie.

As a result, scientific incentives have become distorted in two major ways: prestige capture and reviewer misalignment.

1.1 Prestige Capture

Over half of all research papers in 2013 were published by five companies, which have used their centuries of brand equity to build an economic moat. This results in prestige capture, which, akin to regulatory capture, causes public and scientific interest to be directed towards the regulators of prestige.

Publishers have exploited prestige capture to become the ultimate rent-seekers, with operating margins between 25-40% and market capitalizations up to $50B. They charge institutions millions of dollars in annual subscriptions and researchers thousands of dollars in publication charges per paper, all while preventing the public from openly accessing research.

These costs are not just financial. Prestige capture causes misincentives that impede the pace, reliability, and opportunities of research — these will be covered in detail in Subsections 2.1-2.3.

1.2 Reviewer Misalignment

Peer review today suffers from a classic principal-agent problem: reviewers lack skin in the game.

There is no downside (or upside) for the reviewers who rejected many of the Nobel Prize-winning publications. This is because nearly all peer review is single-blinded (reviewer’s identity is undisclosed to the researchers) and closed (review is never published).

Blinded, closed reviews are typically deemed necessary for reviewers to share accurate feedback without fear of retribution. However, the opaque, centralized nature of peer review can lead to slower, biased, or conservative decision-making. Conflicts of interest — especially intellectual conflicts — stifle competing ideas. Paper acceptances are limited to a static decision, rather than a dynamic process that can be updated with new diverse reviews.

Peer review is not limited to journals. It remains the dominant mechanism used to select grants by funders, particularly federal agencies, who accounted for 42% of total U.S. basic research funding in 2017. This funding monopsony — combined with the publishing monopoly — results in a dysfunctional research economy.

The solution is not to eliminate peer review in favor of unilateral decisions made by journal editors and funding officers. Nor is open review always the right answer.

Instead, we need to experiment with new versatile models for scientific funding and sharing.

Smart research contracts mediated by peer-to-peer review networks would enable such experimentation. Section 3 describes the high-level design of such contracts.

The following Subsection 1.3 explores the exchange of funds, knowledge, and prestige among participants in the research economy. It may be helpful for understanding the emergent behavior (Subsection 3.4) and downstream applications (Section 4) that arise from smart research contracts, but it is not strictly necessary for following the main argument.

1.3 Participants in The Research Economy

Science has become a sprawling enterprise with a range of complex roles. These can be reduced to seven primary participants:

  1. Public: fund research through taxes or philanthropy and receive a “return on funding” via new knowledge, products, jobs, and policies
  2. Funders: allocate funding as scientific fiduciaries for the public (via federal agencies) or specific interests (via private foundations)
  3. Research institutions: provide infrastructure for scientists to discover and share new knowledge
  4. Principal investigators (PIs): raise funds, conduct research, mentor students, and review proposals/papers
  5. Students: learn, perform, and publish experiments
  6. Publishers: evaluate and distribute research
  7. Industry: develop research into innovations that enable new products and jobs

Principal investigators have often been described as founders or CEOs — let us extend this metaphor to analogize and analyze how research functions in economic terms.

Public The public serves as the limited partners of this economy. We provide taxes collectively to government funding agencies (e.g. National Institutes of Health, European Research Council, National Science Foundation, Department of Defense) and donations individually to private foundations (e.g. Gates Foundation, Wellcome Trust, Howard Hughes Medical Institute, Cystic Fibrosis Foundation).

For this investment, the public receives a return on research funding in four tangible ways:

  1. new knowledge via publications and schools that educate the public
  2. new products via companies who translate research into innovative products
  3. new jobs via organizations who employ workers in emerging industries
  4. new policies via governments and other entities (e.g. medical organizations) who improve laws or guidelines

Funders Funders serve as capital allocators.

Government funding accounted for $38B or 44% of total U.S. basic research funding in 2015. Most of these government grants are allocated by risk-averse, index fund-like agencies, who prefer established research groups with mid-to-late stage PIs.

These government agencies do not retain any intellectual property rights for external research they fund, as the 1980 Bayh-Dole Act assigned inventions to the research institutions. Some have advocated for amending Bayh-Dole so that NIH and other agencies can increase available funding with royalties, but such a change may alter the basic science projects that NIH chooses to fund. Federal funding agencies serve as fiduciaries for the general public: their presumed goals are to maximize return on research funding across all four aforementioned categories (knowledge, products, jobs, policies).

Federal agencies rely heavily on extensive peer review to complete due diligence today. This is a relatively recent phenomenon. Per historian Melinda Baldwin, when NIH was formed in 1948, it initially evaluated grant applications “with little or no consultation with outside referees.” Federal funding agencies were pushed to rely much more on peer review in the 1970s after Congressional inquiries into National Science Foundation funding demanded more accountability.

Other funders, such as HHMI and DARPA, are more willing to fund younger scientists working on cutting-edge research. These organizations are often regarded as more effective research funders and empower their program officers to champion specific projects (relying on their scientific advisors to provide expertise as needed). This may be attributed to the fact that private foundations usually have a specific return on funding they seek. A foundation may pursue new products (Cystic Fibrosis Foundation with Kalydeco), policies (Arnold Foundation with evidence-based policymaking), or knowledge (Simons Foundation with math and other basic science).

Some philanthropic funders require royalty or equity rights from intellectual property. The Cystic Fibrosis Foundation retained royalties for funding a high-throughput screen of potential cystic fibrosis drugs in 1999; they later sold their royalty rights in 2014 for $3.3B after the drug — Kalydeco, the first drug that targets an underlying mutation that causes cystic fibrosis — was approved.

Private foundations comprise $11B or 13% of total 2015 U.S. basic research funding. The remaining U.S. funding comes from research institutions (13%) and industry (28%). Non-government funders are highly fragmented when compared to federal agencies. The 2021 NIH budget alone is $38.7B.

Thus, research funding behaves like a monopsony, with federal agencies as the primary buyer.

Research Institutions Research institutions are most akin to incubators and office space providers. Most are universities, although some are private non-profit institutes (e.g. Max Planck Institutes) and government institutes (e.g. NIH intramural research). They recruit PIs and students, offer them physical laboratories and libraries to conduct research, and often provide research funds, salaries, and stipends. The physical campuses enable rich learning and collaboration opportunities among PIs and students. They also provide the public a return on funding via new knowledge, which trickles from research papers to university syllabi to school curricula.

In return, research institutions charge rent in the form of indirect grant overhead fees, with rates ranging from 20% to 85% according to a 2014 Nature investigation using Freedom of Information Act requests. A 2012 National Science Foundation estimate showed that indirect costs of all grants totaled $15.9B. Research institutions also charge equity, as they own the intellectual property (IP) rights that arise from research. Royalties and equity from IP licenses are typically split among 1) the institution, 2) the scientist inventors, and 3) the department (Stanford’s policy, for example, deducts a 15% administrative overhead and then splits the remainder evenly, 28.3% each, among those three groups). $2.94B in licensing revenue was generated from university technology transfer in 2018.

At their best, research institutions resemble Y Combinator, where their selectivity, mentorship, resources, and network enable the top PIs to excel as prodigious researchers. The success of the PIs compounds the prestige of the institution, which amplifies recruiting of high-quality students and PIs to those institutions. This prestige is reflexive, with PIs and students at top universities benefitting from higher funding success rates and amounts (although that is also partially attributable to higher inherent research productivity and quality).

At their worst, some research institutions are more akin to rent-collecting landlords and predatory incubators who extract more value from their PIs and students than they provide. The aforementioned 2014 Nature article mentioned one institute, the Boston Biomedical Research Institute, that attempted to negotiate an overhead rate of 103%, although it “was only able to recoup 70%, or $2.4 million on $3.4 million in direct funding.”

Principal Investigators As founder-CEOs of research groups, PIs juggle a large number of roles. First, they must fundraise. A 2018 survey of 11,167 PIs in America with active federal grants found that 44.3% of their time was devoted to various grant activities, with 16.0% allocated directly to writing proposals and preparing budgets.

PIs must also publish. This push to publish can lead to an emphasis on paper quantity over quality, with the most prolific researchers publishing over 70 papers annually. However, high quality matters more. A survey of 308 economics professors ranked publishing in a top-five journal as the most important factor out of eight for promotion. This perception was empirically justified: professors with one, two, and three publications in top-five journals increased their probability of tenure from 30% to 43% to 62%, respectively (whereas no such increase was observed with additional publications in “Tier A,” “Tier B,” or “General” journals).

In addition, PIs are responsible for recruiting, mentoring, and managing junior researchers, which include graduate students, postdocs, lab technicians, and other staff scientists. The average UK biology group size in 2015 was 7.3, while the average MIT biology lab grew from 6 members in 1966 to 12 in 2000. Thus, management is a crucial PI skill.

Most universities also require that PIs teach undergraduates. The majority of professors reported 5-12 hours per week on teaching preparation and another 5-12 hours on scheduled teaching in a 2017 survey of 20,771 full-time faculty.

PIs often have a wide range of extramural duties as well, serving as science communicators to the public; advisers, consultants, and co-founders to companies; expert witnesses in trials; and advisers for public policy.

Eventually, if they succeed across most of these roles, PIs obtain tenure — perhaps the research equivalent of an IPO.

Students Students are the employees (or apprentices, to be more precise), learning while working their way towards becoming PIs themselves one day or joining industry. The average grant-funded graduate student in America is budgeted $75K, of which $50K goes to the university as tuition, resulting in $25K in annual salary.

Graduate students take a few classes in their first couple of years while rotating through different research groups. Once they have selected their PI(s), most graduate students perform experiments, review literature, and teach undergraduate students. A global survey of 6000 PhDs found the majority spending over 50 hours per week on their PhD program, with a quarter spending over 60 hours per week. On average, PhDs in the U.S. take 6 years to complete graduate school.

After completing their PhD, 43% remain in academia, while 42% work in industry. While they double their typical salary as postdocs to $47K/year, they continue working long hours. Applicants for faculty positions spent a median of 4 years in a postdoctoral position.

Publishers Publishers are often described like investment bankers.

They help researchers format their papers (à la banker decks) and distribute their papers to the right audience (à la road shows). These may have been necessary and valuable services when journals were founded in the 1600s, but digitization has made their PDF formatting and closed-access distribution obsolete. Instead, researchers can directly list papers on pre-print servers to go public immediately.

In reality, publishers are more akin to the “independent” evaluators of the research economy — the rough equivalent of a credit rating agency (e.g. S&P, Moody’s) or a sell-side equity analyst. Rather than provide a quantifiable valuation, journals only offer a ternary response of accept, reject, or revise. Journals typically require several months to provide this response, and if rejected, the researchers have to start their evaluation process all over again with a new journal. Researchers pay thousands of dollars in publishing fees per paper.

If a journal does accept a paper, most of these publications are not available to the public without paying for each paper (tens of dollars) or a journal subscription (hundreds of dollars). Universities annually spend millions of dollars for a bundle of these subscriptions so their PIs and students have access, with the average health sciences journal growing 67.3% in price over the past decade. In 2015, American universities spent in total $2.3B on journal subscriptions. These publication charges and subscription fees are often unaffordable for PIs and universities in developing countries.

As a result, journals have operating margins that exceed investment banks — and most industries. Springer Nature was recently valued at $7B with 23% margins, while Elsevier has a $50B market capitalization and 37% margins.

Industry Industry serves essentially the same role in this extended metaphor and provides a return on funding to the public via new products and new jobs.

Startups, large corporations, and investment firms all participate in the research economy. They license promising intellectual property from technology transfer offices. Industry also recruits heavily from research institutions, with roles ranging from junior scientists, engineers, and analysts straight out of college to executive and managing director positions for tenured faculty.

Investors and companies also partner directly with universities to fund entire portfolios of research, such as the $100M Deerfield Management partnership with Harvard and $100M Gilead Sciences collaboration with Yale. Industry has emerged to become a major research supporter in the U.S., representing 29% of total basic research funding in 2017.

2. Tenets of Research Progress

Multiple studies suggest a decline in research progress — let us take a closer look.

Bloom et al. observed declining research productivity across many measures, including U.S. life expectancy gains and agricultural yields. The most common explanation is that diminishing marginal returns cause such research output to decelerate. Yet the underlying input — basic scientific advances — may still be accelerating.

Indeed, there have been many recent discoveries that should improve both medicine and agriculture. Emerging structural biology capabilities, such as cryo-electron microscopy, enable rational design of better drugs and pesticides. Likewise for synthetic biology, which has led to sophisticated cellular engineering and environmental engineering.

To bastardize the low-hanging fruit analogy, we may be inventing space elevators to reach the next available fruit for increasing longevity and crops. The fact that the fruit is now past the Kármán line does not necessarily mean that our space elevator program is languishing.

To better address underlying science progress, Collison and Nielsen used pairwise Nobel Prize ratings across decades to show stagnation in perceived importance of Nobel Prizes. However, perhaps the Nobel Foundation itself is stagnating, as the rise in Nobel Prize lag times suggests there are increasingly more deserving awardees waiting in line.

If we had new Nobel Prizes for Genomics, Immunology, Neuroscience, Oncology, Virology, Optics, Astrophysics, and so on, these new Prizes may be perceived as improving — or at least staying constant as new Prize categories emerge. But that would dilute the brand prestige of the Nobel Prize, which is why no new Prize has been created since 1968 (and even that prize is technically not considered an official Nobel Prize).

Others have attempted to use citation and textual analysis, but basic science stagnation remains difficult to quantify. Nevertheless, three key tenets of research progress are facing crises:

  1. Science friction: research becomes slower, costlier, and more laborious
  2. Science quality: research faces issues of irreproducibility, favoritism, fraud, and burnout
  3. Science variance: research becomes more homogeneous, risk-averse, and short term-oriented

These three issues are interrelated: science friction is caused by attempts to enforce science quality, but doing so curtails science variance.

As a result, incentives in research hiring, funding, and publishing often diverge from accelerating research progress. These are covered in the following Subsections 2.1-2.3.

2.1 Science Friction

The combination of prestige capture and reviewer misalignment generates substantial science friction today, burdening research with unnecessary delays, costs, and labor issues.

Hiring The limited growth of faculty positions makes it exceedingly competitive to become a tenure-track PI. Only 17% of new PhDs in “science, engineering and health-related fields” secure faculty positions within 3 years of graduation. For a given assistant professor job opening, MIT typically receives 400 applicants.

Staying in academia to pursue a postdoctoral position is often a decision with a negative financial expected value. However, PIs obtain much value from recruiting postdocs, particularly those with fellowships — every additional externally-funded postdoc was correlated with 29% more annual papers in a 2015 study of 119 MIT biology labs.

These (dis)incentives lead many talented scientists to be stuck — if not exploited — on the academic treadmill. 2020 Nobel Laureate Emmanuelle Charpentier, who co-discovered the CRISPR-Cas9 system, struggled to receive tenure, spending 25 years “moving through nine institutions in five countries.”

Doug Prasher, who originally cloned the gene for green fluorescent protein — one of the most ubiquitous tools used in molecular biology today and awarded the 2008 Nobel Prize in Chemistry sans Prasher — left science and became a van driver after feeling dejected by the tenure process.

Academic tenure and hiring processes, relics of religious schools from the 1700s, should be reformed.

Funding Managing research grants is onerous. From a 2018 survey of 11,167 PIs, 38.1% of time involved with a federally-funded project is spent on administrative activities, including “applying for approvals,” “supervising budgets,” and “writing/submitting required progress reports.”

Much of this burden is exacerbated by the competition for funding. The number of NIH R01 grant applications has doubled with a limited increase of awards. Funding rates declined from 32% in 2000 to 20% in 2020, forcing PIs to apply for more grants every year. These funding and hiring hurdles cause the average age of a first-time R01 recipient to be almost 45 today. Given that the majority of Nobel laureates made their prize-winning discoveries by age 40, this temporal friction may lead to lower research productivity.

Financial friction is generated by grant overhead costs, with NIH incurring an average of 52 percent in indirect overhead fees. Both economists and biomedical scientists have advocated for reducing these fees (or tiering them based on total funding received, since they primarily cover fixed university costs) to increase direct research funding and overall scientific productivity.

Publishing Journals once competed on their speed of review and breadth of distribution. The prestigious journal Nature succeeded in the 1800s because they unbundled the prior standard for communicating research — scholarly books and monthly periodicals — into a journal of weekly papers with fast turnaround time and broad readership. Publication times have since ballooned to a median of nearly half a year for Nature and similarly for other journals, causing a constipated backlog of research communication.

To some degree, pre-print servers have further unbundled journals. However, while pre-prints have improved the speed of communication, they have not attained the prestige of peer-reviewed publications. Academic researchers today cannot thrive on a diet consisting of only pre-prints.

Journals also cause financial friction with high publishing fees, unpaid reviewership, closed access, and expensive subscriptions. The University of California system recently announced a four-year, $45M agreement with Elsevier around open-access publishing after canceling its subscription in 2019 to force two years of negotiation. In return, UC researchers get a 15% discount on standard Elsevier open-access publishing charges (except for Cell Press and The Lancet, where only a 10% discount applies, thereby dropping the publishing fee for Cell from $9900 to $8800). The UC system covers $1000 towards every open-access publishing charge, although researchers are allowed to publish closed-access to save on fees if desired. Springer Nature also recently announced in 2021 an open-access publication policy across all of their journals for $11,390 per submitted article.

Sci-Hub is a widely used solution to circumvent journal subscription access. But it is unsustainable as an illegal, centralized service dependent on its founder, Alexandra Elbakyan, continuing to upload. In fact, Sci-Hub has recently stopped uploading new papers in 2021, with Elbakyan apparently burnt out from the various court cases against her. While moving Sci-Hub to a decentralized alternative (e.g. IPFS) does permit more anti-fragile access to papers, it does not solve the structural issue of prestige capture by the publishing companies.

NIH instituted a Public Access Policy in 2008 that mandated any publications that arose from NIH research be made publicly available within a year — a laudable but incremental step towards true open access. A coalition of funders, including HHMI, Gates Foundation, Wellcome Trust, and many European research agencies, have been working since 2017 to implement Plan S, a policy to mandate grant awardees to directly and immediately publish open access. Unfortunately, negotiations have been slow, with the open access requirements continually weakened.

Centralized, unpaid peer review of papers also creates substantial friction. A 2018 survey of ~11,000 researchers calculated that “68.5 million hours are spent reviewing globally each year.” Furthermore, with more papers being published, fewer researchers are agreeing to review, with the average editor having to send out 2.4 peer review invitations for every review completed. 71% of scientists decline requests due to lack of expertise, while 42% decline due to lack of time.

We need a fundamentally new publishing system that is the opposite of what exists today: one that is fast, free, open, and rewards both researchers and reviewers.

2.2 Science Quality

Ostensibly, this science friction exists to assess and ensure quality. But serious issues with reproducibility, fraud, favoritism, and burnout persist today, largely due to a lack of proper incentives and transparency.

Hiring Tenure and promotion committees focus on the number of publications and journal impact factor, without any explicit consideration for reproducibility. Lacking such incentives has led to a potential replication crisis. A 2016 survey of 1576 researchers found that over 70% had encountered experiments that they were unable to reproduce.

While this lack of replicability typically arises from poor research methodology, there still exists some intentional fraud. Meta-analysis of anonymous surveys suggests that ~2% of scientists admit to data falsification. Yet this academic misconduct is rarely punished. Funding agencies, such as the NIH, rely on institutions to enforce discipline for academic fraud. But the reflexive nature of prestige between a university and its PIs makes institutions slow — and often combative — when responding to allegations of misconduct. Even after years of internal investigations, most institutes fail to fire researchers with confirmed fraudulent papers — until pressured to do so by external media investigations.

Because of science friction, many graduate students and postdocs leave academic research. A 2019 National Science Foundation survey of 120K PhDs showed that “for the first time, private sector employment (42%) is now nearly on par with educational institutions (43%).” It remains too early to tell whether this shift from academia to industry will result in lower overall research quality or productivity, as top technology and pharmaceutical companies provide substantial resources for research.

Funding Funding agencies have only recently mentioned replicability. NIH implemented a “Rigor and Reproducibility” guideline in 2019 for evaluating proposals. NSF sponsored a congressionally mandated report on ways to improve “transparency and rigor in research” in 2019. But these fall short of funding replication studies or penalizing irreproducible research.

Funders also lack transparency. Reviewers for various funding agencies may have vested interests in certain scientific ideas, as was the case of beta-amyloid researchers who suppressed alternative Alzheimer’s projects. Grant funding opportunities can be written with specific research groups already in mind, such as a $4.25M FDA grant with only one applicant — a healthcare policy center run by the former FDA Commissioner.

Revenue generated by high overhead costs also leads institutions to have financial conflicts of interest. A biologist whistleblower sued Duke on behalf of the government after the university knowingly received over $200M in grants using fraudulent data. Duke later settled this False Claims Act violation in 2019 with a $112.5M fine.

Given that NIH grants usually require a minimum of 2-3 months of peer review to evaluate proposals, we would expect funders to discern and select the highest quality proposals. Indeed, a 2015 study of 130,000 NIH research grants seemed to support the grant review process. The authors found “a one–standard deviation worse peer-review score among awarded grants is associated with 15% fewer citations, 7% fewer publications, 19% fewer high-impact publications, and 14% fewer follow-on patents.” But a re-analysis of that same dataset with only the top 20th percentile of reviews (to reflect actual NIH funding standards) was unable to find any correlation between NIH scoring and grant productivity, suggesting that proposals could be triaged much faster without compromising quality.

Publishing Similarly, the arduous peer review process enforced by journals often fails to determine quality. Editors and peer reviewers at Springer and IEEE accepted hundreds of algorithmically generated papers between 2008 and 2013. Such bait operations have even been conducted by prestigious journals themselves. In 2013, a reporter for Science sent a bogus paper to 304 open-access journals, with 157 duped into accepting it.

Citation count is the most common metric for determining journal prestige via impact factor and researcher quality via h-index. But citations are methodologically broken. Negative citations are not distinguishable from positive ones in citation count (although the former appears to occur rarely, estimated at 2.4% based on a 2015 study of articles in Journal of Immunology). Papers in the “Reproducibility Project: Psychology” that fail to replicate are cited at nearly identical rates as those that do replicate. In fact, a 2021 analysis found that “papers in top psychology, economics, and general interest journals that fail to replicate are cited more than those that replicate.” Even papers that have been retracted continue to get cited positively years after their retraction.

The centralized opacity of journal editorial decisions also creates biases. A 2021 analysis of over 5000 biomedical journals revealed that 270+ journals had more than 10% of the articles authored by the same person, with 60% of these “most prolific authors” appearing to be members of the editorial board. Editorial favoritism often leads to lower quality papers published by authors who are institutionally affiliated with the journal editors.

2.3 Science Variance

Whereas minimizing science friction and ensuring science quality improve the average value of research, maximizing science variance allows revolutionary science to thrive. Despite following unpredictable power law distributions, breakthrough research gets normalized — literally, as the NIH uses percentiles for scoring — when peer review attempts to enforce quality.

Three primary factors are necessary for power law winners to emerge: diversity, risk tolerance, and long time horizons. The current research economy undervalues all three due to poor incentive structures.

Hiring University hiring lacks educational diversity, as only “25% of institutions produced 71 to 86% of all tenure-track faculty” based on a 2015 analysis of ~19,000 faculty in computer science, business, and history departments.

This institutional concentration may lead to more homogeneous research ideas and demographic diversity issues — both of which limit transformative discoveries. A recent 2020 paper analyzing 1.2M US PhD recipients noted that underrepresented groups produce higher rates of scientific novelty; however, their “novel contributions are devalued and discounted,” corroborating a widely-cited 2004 model that “diversity trumps ability.”

While the lack of demographic diversity is due in part to upstream education inequality, it may also be driven by implicit bias. In a 2012 randomized double-blind study, 127 professors (across biology, chemistry, and physics) were provided identical applications for a laboratory manager position except randomized for name (John or Jennifer). John was rated as the better applicant across all parameters (competence, hireability, willingness to mentor) and offered $3730 more salary — by both male and female faculty. A similar study in 2020 randomizing both race and gender found comparable biases against postdoc applicants who were female and/or underrepresented races.

Institutional hiring also fails to accommodate high risk tolerance and long time horizons. Tenure is typically evaluated within six years after hiring in the U.S. and many European countries, with grants and publications cited by hiring committees as the most important considerations. Given the slow turnaround time and risk aversion of both funding agencies and journal editors, pre-tenure researchers are steered to work on low-risk research with viable short term results. Such risk aversion was shown in a 2016 study of 562 physicists, in which scientists became willing to pursue more explorative research only after receiving tenure.

As Sydney Brenner lamented in his 2002 Nobel Prize lecture on his studies of C. elegans genetics: “Such longterm research could not be done today, when everybody is intent only on assured short term results and nobody is willing to gamble. Innovation comes only from the assault on the unknown.”

Funding Diversity is also a major issue for funding agencies. Older PIs are obtaining an increasingly larger fraction of NIH funding. A 2016 NIH study found that PIs in the 56-70 age group rose from being ~20% of all NIH funding in 1998 to ~35% in 2014, as the 24-40 and 41-55 age groups both declined.

Such age biases appear unjustified. A 2016 study of 2,887 physicists and a 2017 paper of 2,453 computer science faculty independently showed that major discoveries were distributed randomly across career progression.

Racial and gender bias affect science funding as well. FOIA requests of NIH grant application data revealed that under-represented minorities are funded at a 78-90% rate compared to white and mixed-race researchers. Black applicants for NIH R01 grants were 10% less likely to be funded, even after controlling for “the applicant’s educational background, country of origin, training, previous research awards, publication record, and employer characteristics.” A natural experiment created by a 2014 change in the Canadian Institutes of Health Research grant review process showed that a PI-focused review process favored men by 4.0% after adjusting for age and research domain.

Comparisons between HHMI funding (regarded as high-risk, long-term, people-oriented) and NIH (viewed as low-risk, short-term, project-oriented) found that HHMI produced a significantly higher rate of impactful articles. This has worsened over recent decades, as the NIH has funded less and less “edge science.” Similar trends around high-risk, interdisciplinary funding have been identified in Australian Research Council’s Discovery Programme and European Research Council projects.

In fact, any kind of government funding appears to be negatively correlated with breakthrough discoveries. A 2019 study of “65 million papers, patents and software products that span the period 1954–2014” demonstrated that “in contrast to Nobel Prize papers, which have an average disruption among the top 2% of all contemporary papers, [government] funded papers rank near the bottom 31%.”

The structure of grants remains too homogeneous. R01 grants, which constitute over half the total funding of NIH extramural research, are primarily four or five years in length. A wider distribution of grant durations would enable more long-term, high-risk research. This is particularly needed for studies that require data collection over lifetimes, such as the Framingham Heart Study, which has continually renewed its NIH grant since 1948 — but was jeopardized by a 40% federal budget cut in 2013.

As Richard Feynman declared at CERN shortly after his 1965 Nobel Prize: “If you give more money to theoretical physics it doesn’t do any good if it just increases the number of guys following the comet head. So it’s necessary to increase the amount of variety … and the only way to do it is to implore you few guys to take a risk with your lives that you will never be heard of again, and go off in the wild blue yonder and see if you can figure it out.”

Publishing Publication output — citations — famously follow a power law. However, publication input does not. Many — if not most — of the top scientific discoveries were repeatedly rejected by top journals, including polymerase chain reaction, nuclear magnetic resonance, and the Krebs cycle — among many other Nobel Prize-winning examples that have been gathered.

Based on a 2017 study of all articles published in 2001, highly novel papers take longer to accumulate citations, which might justify the short-term decision to reject so many profound discoveries. But even the reliance on citations is flawed. We need a diversity of methods for evaluating research beyond basic citation count, which has existed since 1955 and emerged to help journals define their own prestige.

Peer review remains a highly homogeneous process. 10% of peer reviewers are responsible for 50% of all reviews. American, British, and Japanese reviewers are overrepresented, with “nearly 2 peer reviews per submitted article of their own.” This lack of diversity is largely attributed to editors (96% based in Western countries, Australia, and Japan), who rely on their existing scientific networks to recruit reviewers.

We also need more variance in how research is communicated. Journals remain attached to PDF-based papers — essentially a medium from centuries ago. Despite the vast multimedia capabilities enabled by digitization, there appears to be only one video-based journal, the Journal of Visualized Experiments. The lack of market entrants may be caused by the inability to charge high fees due to minimal prestige and few citations, as JoVE’s university subscription fees ($2400) and publication charges ($1400) are much lower than most other journals, despite including complimentary video production services.

There is a growing interest in discussing and distributing research outside of journals. More scientists are using Twitter to contextualize and comment on papers. Machine learning researchers at distill.pub have pioneered ways to communicate research more intuitively; others at paperswithcode.com provide datasets and code repositories alongside papers. Sites dedicated to uploading figures and sharing raw data have emerged as well. Jupyter notebooks, Mathematica, RStudio, and new commercial tools are being increasingly used by scientists to collaborate.

Overall, sharing research should become a more expressive experience, rather than one simply dominated by prestigious publishing.

As Emmanuelle Charpentier stated in her 2020 Nobel Prize interview: “This is not about a paper published in Nature or published in Science [or]… published in the high impact-factor journals…It’s really about solid work. And I want to say this because nowadays where everyone is evaluated through a potential number of publications and H-index factors…it’s nice, but sometimes you just need one story, one very good story. You need time to do the work in a proper way, in a deep way and I want to mention this because I would not like to see science having lost this sense.”

3. Smart Research Contracts

To fix the research economy, we need to fix incentives. Smart contracts allow a diverse array of research activities to be recognized with reputational and financial rewards. These contracts will be mediated by peer-to-peer review networks that help distill scientific signal from noise.

3.1 Why Smart Contracts

Let us first explore how decentralized funding and reviewing mechanisms may enable a better research economy.

Published in 2009, Satoshi Nakamoto’s Bitcoin whitepaper established a scalable and secure electronic currency. While the work has since had a major impact across the fields of cryptography, networking, distributed systems, and economics, one of its main contributions has been to demonstrate the power of incentives in transparent systems.

In the case of Bitcoin, incentives are applied to extend, secure, and maintain a monetary ledger. The ledger — a blockchain — is public and distributed amongst multiple peers who compete to extend it. Every time a peer earns the right (through proof-of-work via computational “mining”) to extend the chain by publishing a new block of transactions, they receive newly minted tokens. This emergent coordination of a shared goal has led Bitcoin’s total network value to pass $1 trillion in 2021.

This is made possible in part by the network’s defining features: a public ledger that anyone can verify, maintenance by many independent peers rather than a central authority, and token rewards that align each participant’s incentives with the shared goal.

These same characteristics can benefit the research ecosystem: openness, resistance to capture by any single gatekeeper, and direct rewards for valuable contributions.

Bitcoin spawned a broader set of experiments around “cryptocurrencies” — distributed systems with tokens that incentivize development and maintenance of public ledgers.

One such system is Ethereum. Launched in 2015, Ethereum is akin to a decentralized computer. This means that Ethereum maintains arbitrary state (much like a traditional computer), replicated across all of its participants. Transactions in Ethereum update the state according to a broad range of possible commands (as the network is Turing-complete).

Put another way, while Bitcoin transactions focus on transfers of value (and a limited set of general purpose commands), Ethereum can express all state changes a modern computer can. This is accomplished through on-chain programs known as “smart contracts.” The term is borrowed from the legal realm in that these programs are binding: they cannot be altered without being transparent to all participants and will otherwise perform as their code states. They are “smart” in that they can operate on evolving state in more complex ways than traditional legal contracts.

Smart contracts enable the development of a transparent and decentralized scientific platform, with the underlying tokens potentially providing a means of gathering funds and distributing proceeds to researchers and reviewers efficiently across the world. Tokens may also be used to track the accrual of reputation across the research ecosystem.

Beyond utilitarian reasons, there is a moral imperative for scientific knowledge to be collectively shared, reviewed, and funded. A decentralized and tokenized ledger may be the best data and economic structure for guaranteeing that in perpetuity.

3.2 Atomic Unit

Our smart research contracts are executed via Atoms, each of which bundles components such as Funds and a Description.

Users will have a profile containing their Funds and affiliated Atoms. These Atoms can be encrypted if the user desires, e.g. limiting the confidentiality of a research proposal to the designated reviewers.
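To make this concrete, here is a minimal illustrative sketch in Python (rather than an on-chain language). It assumes only the components mentioned in this section (Funds, a Description, an owning profile, and optional encryption); all field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Atom:
    """Hypothetical atomic unit of a smart research contract."""
    owner: str               # the user profile this Atom is affiliated with
    description: str         # e.g. a proposal, paper, review, or a funder's name
    funds: float = 0.0       # value currently held by the Atom (e.g. in tokens)
    encrypted: bool = False  # e.g. a proposal visible only to designated reviewers

# A user's profile is then simply their Funds plus the Atoms they control.
proposal = Atom(owner="alice", description="Engineering an immortal yeast cell", encrypted=True)
```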

3.3 Example Contracts

The versatility of these contracts enables experimentation with funding models. To simplify the user experience, common contract templates will be available, but we also anticipate creative contracts that have not yet been imagined.

Let us consider a few toy contracts inspired by our current research economy:

**Prize** A simple prize may be defined as an instant transfer of funds from one Atom to another. For example, a user (who could be a funder, researcher, layperson reader, etc) could directly award a simple prize to a deserving comment, review, paper, or proposal.

Prizes can be made more complex to incentivize additional actions. For example, a “Paper Upload Prize” (PUP) could be initiated by a user who wishes to read a closed-access paper.

That user transfers some funds (e.g. $5) as part of her PUP contract, specifying that her funds can be claimed by any one of the Authors (with their identity verified) who uploads a copyright-free draft of the requested paper to IPFS. Other readers interested in that same paper may join as funders who contribute to this PUP. At some accumulated fund value, one of the authors ought to have sufficient incentive to upload the paper. The contract may specify a short escrow period where the funders and/or other users can verify that the correct paper was indeed uploaded by one of the authors.
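As a rough illustration of the flow just described, here is a minimal Python sketch (not an on-chain implementation). The class and method names are hypothetical, identity verification is stubbed out as a callback, and dispute handling is deliberately simplistic.

```python
import time

class PaperUploadPrize:
    """Sketch of a 'Paper Upload Prize': readers pool funds that a verified
    author can claim by uploading a copyright-free draft of the paper to IPFS."""

    def __init__(self, paper_id: str, escrow_seconds: int = 7 * 24 * 3600):
        self.paper_id = paper_id
        self.escrow_seconds = escrow_seconds
        self.contributions = {}   # funder -> total amount contributed
        self.claim = None         # (author, ipfs_cid, claim_time) once claimed

    def fund(self, funder: str, amount: float) -> None:
        """Any interested reader can join as a funder and grow the pot."""
        self.contributions[funder] = self.contributions.get(funder, 0.0) + amount

    @property
    def pot(self) -> float:
        return sum(self.contributions.values())

    def claim_prize(self, author: str, ipfs_cid: str, is_verified_author) -> None:
        """An author claims the pot by pointing at the uploaded draft;
        is_verified_author stands in for whatever identity check the contract uses."""
        if self.claim is not None:
            raise ValueError("prize already claimed")
        if not is_verified_author(author, self.paper_id):
            raise ValueError("claimant is not a verified author of this paper")
        self.claim = (author, ipfs_cid, time.time())

    def settle(self, disputed: bool = False) -> dict:
        """After the escrow window, pay the author unless funders disputed the upload."""
        if self.claim is None:
            raise ValueError("nothing to settle")
        author, _, claim_time = self.claim
        if time.time() - claim_time < self.escrow_seconds:
            raise ValueError("escrow period still open")
        return dict(self.contributions) if disputed else {author: self.pot}
```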

Various considerations arise from this “Paper Upload Prize.” How do we prevent authors from excessively waiting for the prize to accumulate before uploading? Do first authors and corresponding authors deserve more, or is an even split the most feasible mechanism? Designing contracts without negative unintended consequences will be challenging, but best practices should eventually emerge for most common contracts.

**Grant** A grant may be defined as an Atom soliciting research proposals (e.g. how to engineer an immortal yeast cell).

Such a contract could award 100 ETH in funds to 5 top proposals. 10 reviewers are either manually selected or automatically selected based on accumulated reviewer token value in certain domains (e.g. yeast genetics, directed evolution, etc). The contract provides 1 ETH to each of these 10 reviewers, who keep 0.5 ETH and directly allocate the remaining 0.5 ETH across 5 projects of their choosing. The remaining 90 ETH could be allocated using a quadratic funding-like mechanism.
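For that final step, here is a hedged sketch of how the remaining 90 ETH could be matched under a quadratic funding-like rule, in which each proposal's match is proportional to the square of the sum of square roots of the allocations it received. The proposal names and allocation amounts are made up for illustration.

```python
import math

def quadratic_match(allocations: dict[str, list[float]], matching_pool: float) -> dict[str, float]:
    """Distribute a matching pool across proposals in proportion to
    (sum of square roots of individual allocations)^2, the usual quadratic funding rule."""
    weights = {p: sum(math.sqrt(a) for a in amounts) ** 2
               for p, amounts in allocations.items()}
    total = sum(weights.values())
    if total == 0:
        return {p: 0.0 for p in allocations}
    return {p: matching_pool * w / total for p, w in weights.items()}

# Toy example: reviewers' direct 0.5 ETH allocations, spread across proposals they favor.
reviewer_allocations = {
    "immortal-yeast-screen":    [0.5, 0.5, 0.5, 0.25],  # backed by four reviewers
    "directed-evolution":       [0.5, 0.5, 0.25],
    "telomerase-engineering":   [0.5, 0.5],
    "mitochondrial-repair":     [0.5],
    "chaperone-overexpression": [0.25],
}
print(quadratic_match(reviewer_allocations, matching_pool=90.0))
```

Note that the quadratic rule favors proposals backed by many reviewers over proposals backed by one large allocation, which is the usual rationale for using it.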

Using escrowed tranches of funding, the grant may encourage winners to regularly post experimental data or papers. Additional contractual mechanisms could be designed to reward the most prescient reviewers (e.g. those who select the proposal that generates the most positive citations after 10 years). Other reviewer and researcher behaviors can be similarly incentivized.

**Tenure** Tenure may be defined as an Atom that disburses funds to another single Atom or several different Atoms over some defined interval and duration of time. The Description may be the funder’s name, à la endowed professorships and departments at universities.

Tenure can be granted to projects in addition to individuals, which could enable more collaboration instead of competition. The availability of such “subscription”-based funding models may also encourage researchers to conduct more divergent, higher-risk experiments.
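A minimal sketch of such a disbursement schedule, again in Python with hypothetical names; a real contract would release each tranche on-chain rather than return a list.

```python
from datetime import date, timedelta

def disbursement_schedule(total_funds: float, start: date,
                          interval_days: int, num_payments: int):
    """Equal payouts to a recipient Atom at a fixed interval over a defined duration."""
    payout = total_funds / num_payments
    return [(start + timedelta(days=interval_days * i), payout)
            for i in range(num_payments)]

# e.g. a 120-token endowment paid out monthly over roughly ten years
schedule = disbursement_schedule(120.0, date(2022, 1, 1), interval_days=30, num_payments=120)
```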

3.4 Emergent Behavior

From these basic contracts arise a rich diversity of emergent research behavior. These will allow us to assess and advance the three factors of science friction, quality, and variance.

Minimizing Science Friction Smart research contracts enable grants and prizes to be awarded faster with lower staff overhead. It should become trivially easy for anyone to launch their own Breakthrough Prize or fastgrants.org, with high-quality reviewers readily assembled to award deserving research or triage proposals.

Designing the initial Atom grants to be direct prize transfers can simplify contracts and minimize grant overhead fees. In such contracts, a grant would not demand a precise budgetary allocation towards direct research costs (e.g. reagents, equipment, personnel, etc). Instead, the grant could be restructured as a “proposal prize” awarded to the best research proposal for a given grant opportunity. Such funding would be akin to the GiveDirectly model, allowing PIs to determine the best allocation of funds. If a lab stumbles upon an exciting but orthogonal skunkworks project, the PI can divert funds to that project (such experiments regularly happen with grants today).

Implicit researcher duties are now made explicit with incentives. As a result, scientific roles can become both more specialized and more diverse. PIs can focus less time on writing grants and more time on conducting research. Or the PIs who enjoy and excel at raising funds can do so and even re-deploy it to the right scientists, akin to founders who become angel investors and venture capitalists.

Students get rewarded and recognized for the work they do when reviewing papers. Imagine more graduate students like Matt Rognlie, who noted in a 2015 Marginal Revolution blog comment that Thomas Piketty failed to account for depreciation in Capital - and was subsequently commissioned to submit a Brookings Paper.

Other research roles can prosper as well: full-time reviewers, mentors, statisticians, or fraud hunters.

Ensuring Science Quality Reproducibility improves, as contracts directly link grant requests to research proposals and publications. Grants can mandate pre-registration within the proposal: that way exact experimental methods and hypotheses are ratified to prevent post-hoc p-hacking. Depositing raw data and code can be incentivized, such that others can quickly run meta-analyses or reproduce the analyses (and potentially draw different conclusions).
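One simple way a grant contract could enforce pre-registration is a hash commitment: the exact hypotheses and methods are hashed at proposal time, so any later deviation is detectable. The sketch below is illustrative only, and the protocol fields are invented.

```python
import hashlib, json

def preregistration_commitment(protocol: dict) -> str:
    """Hash the pre-registered hypotheses and methods; the digest can be stored
    on-chain so post-hoc changes (e.g. p-hacking) are detectable later."""
    canonical = json.dumps(protocol, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

committed = preregistration_commitment({
    "hypothesis": "Deleting SIR2 shortens replicative lifespan in S. cerevisiae",
    "primary_outcome": "mean number of daughter cells per mother cell",
    "analysis": "two-sided t-test, alpha = 0.05, n = 40 cells per arm",
})

# At publication time, anyone can re-hash the reported protocol and compare it
# against the digest that was committed when the grant was awarded.
print(committed)
```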

Decentralized peer-to-peer review generates rich, diverse insights for proposals and papers compared to small panels of centrally selected reviewers. Skin-in-the-game causes both proposal funders and publication reviewers to be more accurate, especially over longer time horizons. Imagine being the Cell editor who publicly rejected the CRISPR-Cas9 paper from Virginijus Siksnys in April 2012 — just two months before Jennifer Doudna and Emmanuelle Charpentier’s paper would be submitted and quickly accepted by Science.

New publishing models can be tried. Continuous experimental updates and paper revisions may occur in real-time. If an independent group fails to reproduce a method, the original authors can update their experimental methods with more detail and clarity. Papers become unbundled into their component sections (e.g. methods, figures, data/code, etc), simplifying authorship and improving citations, with backlinks from future publications and reviews pointing to the precise research component being cited. Impact factor will not be calculated by raw citation count but by strength of citation, perhaps computed by sentiment analysis.
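As a toy illustration of that last idea, the sketch below scores citation strength with a tiny hand-written cue-word lexicon; a real system would presumably run a trained sentiment model over the text surrounding each citation. The word lists and example sentences are made up.

```python
# Hypothetical cue-word lexicons for illustration only.
POSITIVE = {"confirms", "extends", "replicates", "supports", "consistent"}
NEGATIVE = {"contradicts", "fails", "refutes", "questionable", "retracted"}

def citation_strength(citing_sentences: list[str]) -> float:
    """Toy 'strength of citation' score in [-1, 1], averaged over all sentences
    that cite a given paper: positive cue words add, negative cue words subtract."""
    scores = []
    for sentence in citing_sentences:
        words = set(sentence.lower().split())
        raw = len(words & POSITIVE) - len(words & NEGATIVE)
        scores.append(max(-1.0, min(1.0, float(raw))))
    return sum(scores) / len(scores) if scores else 0.0

print(citation_strength([
    "our findings are consistent with smith et al.",
    "this contradicts the mechanism proposed by smith et al.",
]))  # 0.0: one supporting and one disputing citation cancel out
```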

Adding new philanthropists leads to more competition for the funding monopsony. Funders may compete in status to have the best “science portfolio” showcasing the experiments and discoveries they enabled. Such transparency should discourage funding of low-quality, “flashy” research.

Bounties can be created for reproducing studies and identifying fraudulent data, much like bug bounties in software. These may be specified directly within the original grant contract (e.g. a portion of funds is held in escrow to award anyone who tries to reproduce a study). Or interested downstream parties may create contracts to reward replication attempts or fraud detection (e.g. investors or companies may provide bounties before investing or licensing certain IP - lest they turn out to be fraudulent).

Better meta-science emerges, as new smart contract-based funding and publishing models are readily analyzed. Contract mechanisms, incentives, and duration will continue to be refined.

Maximizing Science Variance With smart research contracts, projects vary widely in budget, duration, and scope. Crowdfunded megagrants (e.g. March of Dimes raising $10M+ for polio research) can be created at the same time as microgrants (e.g. micropayments for crowdsourced research data). Projects may be short term (e.g. crowdsourcing data for thirty minutes as an earthquake is happening), or long term (e.g. Richard Lenski’s 30+ year E. coli evolution experiment). Funders could support solo scientists (e.g. Isaac Newton), or international big science projects (e.g. Large Hadron Collider, Human Genome Project, etc).

Access to funding broadens. Talented researchers in countries with limited funding agencies can access global grants and prizes. While there is a risk of exacerbating research inequality (such that only famous or flashy researchers gain more funding), meta-science improvements should eventually lead to a better allocation of funding. A large variety of research funding experiments can be tried, including lotteries, universal basic grants, and democratic allocation.

Publishing is also democratized. Patients submit their own experience with certain diseases or even personal clinical trial data, which can then be aggregated and analyzed. “Citizen science” experiments can be conducted and published, e.g. students sequencing grocery store fish to confirm their species.

Funders, researchers, and reviewers can elect to be pseudonymous if desired. Such pseudonyms may reduce bias and improve quality. They enable scientists to study stigmatized ideas or jump into new, unrelated research fields. While bad actors (e.g. with unrevealed conflicts of interest) could exploit pseudonymity, a robust review and reproducibility system should minimize those issues. On-chain identity management solutions may also help to reduce collusion. Pseudonyms can be unblinded later, allowing funders, researchers, and reviewers to liquidate accrued pseudonymous reputation.

Contracts can experiment with new incentives for public goods. Large grants may be designed with escrowed open source provisions, such that bonus prizes are released after publication once the patent disclosure window has expired. Generation of large, collaborative open source datasets could also be better incentivized such that private companies do not hoard proprietary datasets with competitive, overlapping efforts.

For example, CRISPR is a popular technique for generating thousands of genetic knockout mutations in cancer cells to identify new drug targets. While the Broad Institute is publishing a large CRISPR dataset publicly via the Cancer Dependency Map, multiple private startups (e.g. Ideaya, Repare, Tango) have been generating their own proprietary CRISPR cell line screens. Could there ever be contracts designed with sufficient incentives for these CRISPR cell line data to be aggregated, perhaps obfuscated in some clever way — such that all researchers benefit with minimal redundant efforts? Research contract experimentation will be the only way to find out.

4. Future Applications

Once the scaffold of this research economy has been built via smart contracts, future applications can prosper on the platform. Many of these resemble successful apps from other industries today. They will help amplify the public return on funding via new knowledge, policies, products, and jobs created by science.

4.1 Dynamic journals

Future “editors” (both human and algorithmic) will be able to curate and cluster papers any way they choose. These could be based on measuring impact factor in new ways, identifying emerging scientific fields, or optimizing personal feeds. This enables researchers, readers, and funders to efficiently sift through literature, which becomes more important in a world without centralized peer review as a filter. Popular editors will emerge who provide engaging context and commentary that expands the readership for science (much like the most popular newsletter writers, e.g. Matt Levine with finance, Ben Thompson with tech, Bill Simmons with sports — or Zeynep Tufekci with her tweets and articles on COVID).

4.2 Scientific prediction markets

Several studies (Dreber 2015, Camerer 2018, Gordon 2021, replicationmarkets.com) have found some success using prediction markets to estimate reproducibility of papers, with Robin Hanson, Andrew Gelman, and Harry Crane all advocating for various implementations. These prediction markets may judge paper quality better than conventional journal prestige or citation metrics. Grant funders, investors, and companies could use such markets to help allocate funding. These organizations may even contract the best prediction market analysts to improve their review processes (much like how Mark Cuban hired the most famous basketball bettor to work for his Dallas Mavericks team).
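To make the mechanism concrete, here is a minimal sketch of a logarithmic market scoring rule (LMSR), one common automated market maker for binary replication markets; none of the cited studies necessarily uses this exact design, and the class, liquidity parameter, and numbers are hypothetical.

```python
import math

class ReplicationMarket:
    """Minimal LMSR market on the binary question 'will this paper replicate?'.
    Traders buy YES/NO shares; the instantaneous YES price is the market's
    implied probability of replication."""

    def __init__(self, liquidity: float = 10.0):
        self.b = liquidity                  # higher b = deeper, less volatile market
        self.q = {"YES": 0.0, "NO": 0.0}    # outstanding shares per outcome

    def _cost(self, q: dict) -> float:
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q.values()))

    def buy(self, outcome: str, shares: float) -> float:
        """Returns the amount the trader pays to buy `shares` of `outcome`."""
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

    def probability(self, outcome: str = "YES") -> float:
        denom = sum(math.exp(qi / self.b) for qi in self.q.values())
        return math.exp(self.q[outcome] / self.b) / denom

market = ReplicationMarket(liquidity=10.0)
market.buy("NO", 5.0)                 # a skeptical trader bets against replication
print(round(market.probability(), 2))  # ~0.38: implied replication probability drops
```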

4.3 New research institutes

New philanthropists will eventually be able to spin up HHMI-like programs quickly and partner with physical lab space incubators to create their own Janelia Research Campus. Or instead of being organized geographically, these new institutes could be virtual (à la decentralized autonomous organizations) and organized around a shared mission that is collectively funded (e.g. VitaDAO). A much more diverse range of researchers could collaborate, with decentralized autonomous research institutes spun up temporarily for a focused objective (e.g. Manhattan Project) or permanently around a specific field (e.g. RNA-based memory formation).

4.4 Hiring marketplace

Academia still relies on PDF-based curriculum vitae to hire. GitHub and LinkedIn have transformed hiring in other industries. Similar apps can be built to help summarize and visualize the corpus of a scientist’s contributions beyond just publication record. Ratings and reviews of PIs may be pseudonymously shared à la Glassdoor. Better recruiting and hiring tools are needed as well. For example, when a grant proposal is successfully funded, the corresponding postdoctoral and graduate student openings could be propagated automatically across all of a PI’s papers with a “we’re hiring” link in the abstract.

4.5 Knowledge creator economy

There exists a large disparity in compensation between labor (students) and capital (PIs) in research groups today. Graduate students possess valuable specialized knowledge that can be monetized. GLG-like expert networks could be set up with students to help investors and industry conduct due diligence. Such services could range from discussing relevant literature to reproducing key experiments that underpin a company’s science or technology. The best science communicators might have Twitch-like streams to discuss the latest literature with the public.

4.6 Open electronic notebooks

Open source software has shown how value can be created via broad, online collaboration. If some experiments were pushed instantly online, similar scientific collaboration could occur. Scientists worldwide could mentor and help debug each others’ experiments, thereby unbundling the current mentorship model of one PI (and a few thesis committee members) for a given graduate student. Like software, incentives do exist for certain projects to remain closed for competitive reasons (e.g. intellectual property, recognition, etc), but many research projects have unnecessary paranoia around being scooped.

4.7 Decentralized clinical trials

As clinical studies become increasingly decentralized, clinical trial contracts could be designed directly to recruit and reward participants. Instead of funding every trial individually, pharmaceutical companies could pool their funds collectively. Each trial would automatically draw down funds from this pool to minimize conflict-of-interest issues. Decentralized patient advocacy organizations could arise where patients (particularly those with rare diseases) directly fund and benefit from new biomedical innovations.

4.8 Intellectual property marketplace

Tech transfer between universities and companies remains a largely opaque process. IP transactions are complex and infrequent, so they may not be well-suited to a digital marketplace. But market inefficiency can lead to university IP being severely undervalued. Ron Davis’ DNA sequencing technology, which was licensed by Stanford to Ion Torrent, generated only $2000 in annual royalties for the scientists despite a $725M acquisition. Several biotech firms (e.g. Roivant) have been created explicitly to exploit this arbitrage, searching exhaustively for abandoned and undervalued IP from pharmaceutical companies and academic groups to license. Increasingly more philanthropic funders are also benefitting from IP generated by research grants; these royalty and equity rights could potentially be managed on such marketplaces as well.

5. Conclusion

We hope these smart research contracts will accelerate science progress. Transitioning to a more decentralized science economy enables new funding, publishing, and reviewing models to emerge with better incentive alignment. Long term, we will develop tools to assess and improve scientific productivity.

We seek to be a catalyst for experimenting — and competing — against the research incumbency. While monolithic, science funders and publishers have previously adapted to new competition. The HHMI Investigators Program likely inspired the NIH to initiate their own Director’s Pioneer Award. The Gates Foundation’s 10% grant overhead rate was cited by the White House as justification for a 2018 budget proposal change to reduce the average NIH overhead rate of 52%. The launch of several open-access journals focused on rapid publishing in the early 2000s, such as PLoS, was correlated with a substantial reduction of median journal publication time.

Thus, competition is necessary to re-vitalize the research economy.

Alan Kay once shared an important sign at Bell Labs: “Either do something very useful, or very beautiful.” But then he lamented: “Funders today won’t fund the second at all, and are afraid to fund at the risk level needed for the first.”

We hope to build a world where both useful and beautiful research flourish.


Join us: @atoms_org