
INST Committee Report



CHAPTER FIVE: IMPROVEMENTS TO THE SYSTEM FOR ALLOCATING FEDERAL RESEARCH FUNDS

The Committee supports the practice of employing peer review as a mechanism for determining the allocation of federal research funds, but it believes that a number of improvements can be made to the system. This chapter addresses this issue and builds on testimony heard during the hearings on “best practices” for peer review, and for the allocation of federal research funds in general.

Perceived Weaknesses in the System

In addition to the broad areas of concern expressed earlier in the report, the Committee is worried about other shortcomings in the present system for the allocation of federal research funds. These areas of concern include unsatisfactory feedback to, or appeal mechanisms for, applicants; a lack of data on the efficacy of peer review in general; insufficient internal reviews of agency programs; peer reviewer overload; and inadequate efforts by the granting agencies to measure and communicate the impacts of federally funded research on Canadian society.

Inadequate or Inconsistent Feedback to Applicants

The Committee believes that the granting agencies could place more effort into improving and harmonizing the types of feedback that applicants receive following a funding decision, and ensuring that a formal, transparent appeal mechanism is available at each agency to deal with complaints about the decision-making process.

The Committee notes that all three agencies provide some type of feedback to applicants following a funding recommendation, but that the feedback provided varies within and among programs, and according to the agency involved. The Committee heard that the feedback obtained from the granting agencies is not always very useful:

[The feedback] is inconsistent. In some cases, yes, there is [adequate feedback], and people are able to improve their applications and are successful in a subsequent submission. In some instances, it isn’t as helpful as it might be. In part, I’m sure this reflects the staff pressures the granting agencies are under in trying to get comments out. But it can be difficult when you have one or two external reviews that are very positive, yet the decision is negative. One is left in a quandary about how one is going to improve [the application]. [Wayne Marsh, President, Canadian Association of University Research Administrators, 43:10:15]

By law, applicants have access to their application files (however, under the Privacy Act, the written opinion of a reviewer about an application is available to the applicant, but the name of the reviewer cannot be divulged). For its major Discovery Grants Program, NSERC sends a notification of funding decision to applicants, followed shortly thereafter by the selection committee’s comments on the application, if available. Applicants receiving comments also receive any external referees’ reports on the application. When selection committee comments are not available, an applicant may submit a written request to NSERC to obtain any external referees’ reports received on the application. For SSHRC’s Standard Research Grants Program, applicants receive copies of all information used to make a decision, including external written assessments and the committee’s deliberations relating to their application. For its major research funding programs, CIHR provides copies of external referees’ reports (if available), copies of the internal reviewers’ assessments (one from each of the two principal readers assigned to the application), and a summary of the committee discussion (when available) with the notification of a funding decision.

At NSERC and SSHRC, a formal appeal process is in place for applicants who feel that their applications have been unfairly treated by the peer-review process. At NSERC, appeals of decisions must be based on compelling evidence of error or discrimination in the review process; the onus is on the applicant to demonstrate the error. The process may vary according to the program, but for the Discovery Grants Program, appeals are examined by external consultants who are senior members of the research community and have some experience with NSERC peer review:

Sometimes errors can occur in the peer review process … and if an applicant feels their application wasn’t assessed appropriately, they can use our appeal process to request a review of the decision. NSERC then requests an independent review by a senior researcher, who was not involved in the original decision, and staff makes a final decision based on this adviser’s report. [Elizabeth Boston, NSERC, 39:15:40]

At SSHRC, appeals of decisions must be based on procedural or factual error in the process. If SSHRC determines that there are sufficient grounds for appeal, the application and any new information provided are examined by the adjudication committee that made the original decision. The two agencies receive a relatively small number of appeals each year (see Table 2).

Table 2
Data on Applications Appealed at NSERC and SSHRC
 for the Agencies’ Major Research Grants Programs
Competition Years 2000 and 2001

The CIHR does not have a formal appeal process. However, for CIHR’s major research funding programs, applicants who are unsuccessful in one competition may resubmit the same (or a similar) application in a subsequent competition, and may include a two-page rebuttal with the resubmission to address concerns raised by reviewers or to counter criticisms that the applicants believe are unfounded. The Operating Grants Program at CIHR has two competitions per year (unlike the major grants programs at NSERC and SSHRC, which are held once a year), so the time between receiving a notification of decision and resubmission is a matter of a few months. The Committee appreciates the process at CIHR whereby applicants can submit rebuttals to reviewers’ comments with a subsequent application, but it is concerned that CIHR does not have a formal appeal process for its programs.

The Committee notes that some foreign granting agencies (e.g., the Australian Research Council) allow applicants to reply to external referees’ comments before a funding recommendation is made. It also notes that the Cross Council Research Forum of the U.K. research councils suggests that allowing applicants the “right to reply” to referees’ comments should be part of joint, cross-council programs that review interdisciplinary proposals. The Committee encourages the Canadian granting agencies to consider the feasibility of incorporating such a “rejoinder mechanism” in their major granting programs.

The Committee is aware that providing feedback and offering a formal appeal process necessitates substantial effort on the part of the agencies and selection committees, and that there is a financial cost involved in providing such feedback. Nevertheless, the Committee feels that the agencies should make every effort to provide applicants with as much feedback as possible following a funding recommendation. Since all three agencies are conducting more of their business electronically, the administrative burden and cost of providing feedback may decrease in the future as “e-business” practices become more established and new technologies become available (the Committee recognizes that privacy issues may limit some of the efficiencies that could potentially be realized by providing feedback electronically). The Committee recommends:

RECOMMENDATION 9

That the Government of Canada ensure that the granting agencies release all information on file relevant to a funding recommendation to applicants in addition to the notification of decision. Additionally, a formal appeal process, limited to perceived errors in procedure or fact, should be in place for applicants to all peer-reviewed programs, and a third party, not the original selection committee, should review appeals of decisions.

Peer Review Is Untested

Peer review is often described as being “rigorous” or “a cornerstone of excellence.” According to one witness, however, the value of peer review for deciding on the allocation of research funds is taken as an “article of faith,” and the system is largely untested:

… [W]e see that there is little scientific evidence for the efficacy of peer review in general … Good science means, at the very least, conducting reliable, repeatable scientific research on both consistency of past reviews and the impacts of decisions taken. [Bryan Poulin, Professor, Lakehead University, 58:09:30]

The witness argued that agency databases should be opened to interested researchers so that these individuals can study whether the system is working, and went on to suggest that:

… [Funds] should be more widely distributed until we find out if the peer review system is fatally flawed. If it’s fatally flawed, it needs an overhaul. [Bryan Poulin, Professor, Lakehead University, 58:09:30]

The Committee shares the concerns about the paucity of data on the efficacy and impact of peer review in the Canadian system. It notes, however, that comprehensive studies or reviews of peer review practices have been conducted in such countries as the United States44 and Australia.45 One witness suggested that there is a large literature on peer review:

In terms of the literature on peer review, it’s vast and disaggregated. There’s a long history since the funding councils have been around of [studies] regarding the strengths and weaknesses of peer review. Some being based on anecdotal comments, some have been based on studies done at different times and under different circumstances. There are few systematic studies linking grant decision outcome with funding policies. The majority of studies that are actually represented in the literature are not by those who’ve been involved in the funding councils themselves. They tend to be produced by those who are, like myself, independent researchers. [Fiona Wood, University of New England, Australia, 79:19:55]

Internal Evaluation of Agency Programs and Practices May Be Insufficient

The Committee believes that the agencies themselves should be doing more to evaluate their programs and practices, including peer review, to ensure that they are efficient, transparent and responsive to the needs of the research community. The Committee notes that internal program evaluation studies are conducted by the agencies, but that the period of time between evaluations is often long (in some cases more than 10 years46), even for major funding programs. At present, both the Discovery Grants Program at NSERC and the Operating Grants Program at CIHR47 (the two agencies’ major research grants programs) are under evaluation.

The Committee points to the evaluation plan proposed by the CIHR as being a good model for internal program evaluation. The agency plans to evaluate all programs on a periodic basis: the aim is to evaluate continuing programs every five years, limited‑term strategic programs at the end of their term (normally five years), and partnered programs, which also normally have a term of five years, at the termination of their memorandums of understanding. In addition, the entire organization will be subject to international review every five years, and the performance of individual institutes will be evaluated at the time of the appointment (or renewal) of their scientific directors (every four years).

The Committee suggests that internal examinations of peer review by the agencies can be conducted in isolation and do not have to form part of more intensive program evaluations. For example, the Committee notes that the Director of the National Science Foundation (NSF) in the United States must submit an annual report on the NSF proposal review system. The report provides summary information about levels of proposal and award activity and the process by which proposals are reviewed and awarded. The report is posted on the agency’s Web site each year.48 In other countries, funding agencies (e.g., the Economic and Social Research Council in the United Kingdom) conduct periodic, independent reviews of their peer review processes. The Committee notes that evaluations of peer review practices have occasionally been conducted by the Canadian federal granting agencies (see Appendix 3). As such, the Committee recommends:

RECOMMENDATION 10

That the Government of Canada require the granting agencies to engage in more regular internal reviews of their own programs and practices (including peer review), and to periodically examine decision-making processes at other Canadian and foreign agencies to ensure that best practices for the allocation of research funds are in place. The results of these internal evaluations should be easily accessible to the research community and general public.

Peer Reviewer Overload

As mentioned earlier in the report (Chapter Three), it is often difficult to find qualified, arm’s length external referees or selection committee members to evaluate proposals in emerging and interdisciplinary areas because of the small number of researchers working in those areas. Members of these small communities tend to know and work with each other; the Canadian federal agencies all have guidelines intended to prevent a researcher who has a conflict of interest with an applicant from acting as a referee of that applicant’s proposals. Because of the relatively small research community in Canada, finding impartial reviewers for more “mainstream” research proposals may be problematic as well. Additionally, the introduction of new, peer-reviewed programs at the federal agencies, government departments and other organizations (e.g., the Canada Foundation for Innovation) has led to an increased demand for peer reviewers, and to a phenomenon termed “peer reviewer fatigue” by some commentators.

The federal granting agencies have tried to reduce the problems associated with a small reviewer pool by calling on international reviewers to participate in the review process:

The potential for peer fatigue is quite critical. It’s particularly true in a country like Canada, compared to the States, where there are a limited number of qualified experts who can serve on review panels. Countries such as Sweden, Australia, and New Zealand make substantial use of international experts to ensure the independence of their review processes and to counteract peer fatigue. Certainly, I’m aware of international experts participating in Canadian reviews, and I think that’s an increasing trend. [Alan Winter, President and CEO, New Media Innovation Centre and Council of Science and Technology Advisors, 55:09:35]

The agencies also limit the number of times any one individual can be called on by an agency program to act as an external referee. However, since granting agencies around the world are experiencing the same problems, other solutions to reduce overload on reviewers are probably required. One option being employed by some agencies is offering some form of incentive to reward reviewers or their universities:

Peer reviewers in the past have tended not to be paid, so it’s interesting to note that a number of funding councils are moving much more towards the idea of paying for the reviews that they receive. The EPSRC [Engineering and Physical Sciences Research Council] in the U.K. is a good example of this. The concern is basically driven by the perception of overload on reviewers so the idea is in fact, given their competing demands on the time of the best reviewers that an incentive needs to be provided to ensure that there is a value placed on the reviews that are received. [Fiona Wood, University of New England, Australia, 79:19:55]

In EPSRC’s “Referees’ Incentive Scheme,” which began as a three-year pilot project in 2001, university departments earn points for “useful” referee reports returned on time to EPSRC by researchers. In December of each year, beginning in 2002, points accumulated by departments over the preceding academic year will be translated into a share of the scheme fund, which stands at £750,000 (approximately $1.7 million) for the first year. These funds will be paid centrally to institutions on behalf of departments, and heads of department can use the monies for any purpose that EPSRC would normally consider to be legitimate expenditure on a grant. Other granting agencies also use some form of payment to recognize and reward the work of reviewers. In Canada, for example, the Alberta Heritage Foundation for Medical Research offers payment to external referees for reviews:

I should point out that because we work outside the province, in other words we’re pulling in our reviewers from around the world, there’s really no reason for them to help Alberta, other than altruism, and altruism only goes so far. So we actually pay for this, which adds to the cost of our peer review system, but I think it increases the quality, because we are pulling in an international opinion. [Matthew Spence, President and Chief Executive Officer, Alberta Heritage Foundation for Medical Research, 66:10:50]

The three federal granting agencies generally offer no compensation to peer reviewers. Members of selection committees receive payment only for expenses incurred to attend committee meetings.
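For illustration, the EPSRC scheme described above distributes a fixed fund in proportion to the points that departments accumulate for useful, on-time referee reports. The sketch below assumes a simple pro-rata split; the point values, department names and allocation rule are invented, since the actual EPSRC formula is not described in the evidence.

```python
# Hypothetical sketch of a pro-rata incentive-fund split, loosely modelled on the
# EPSRC Referees' Incentive Scheme described above. All point values are invented.

FUND_TOTAL = 750_000  # pounds sterling available in the scheme's first year

# Points accumulated by departments for "useful" referee reports returned on time
points_by_department = {
    "Dept. of Chemistry, University A": 120,
    "Dept. of Physics, University B": 80,
    "Dept. of Engineering, University C": 50,
}

def allocate_fund(points: dict[str, int], fund: float) -> dict[str, float]:
    """Distribute the fund in proportion to each department's share of total points."""
    total_points = sum(points.values())
    if total_points == 0:
        return {dept: 0.0 for dept in points}
    return {dept: fund * p / total_points for dept, p in points.items()}

if __name__ == "__main__":
    for dept, share in allocate_fund(points_by_department, FUND_TOTAL).items():
        print(f"{dept}: £{share:,.0f}")
```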

An informal survey on the topic of “peer reviewer fatigue” by NSERC in 2000 suggests that, at present, it may not be a major problem.49 The Committee appreciates the difficulties associated with finding individuals to act as referees for grant applications. It notes, however, that the federal agencies are increasing the amount of business they conduct electronically, and that one of the stated goals of “e-business” is to reduce the workload for peer reviewers. Given the considerable expense involved in offering some form of payment for referee reports, and that peer reviewer workload does not seem to be unmanageable at present, the Committee is reluctant at this time to make a recommendation that focuses on payment of referees. Instead, the Committee encourages the agencies to continue with their efforts to find other ways to reduce the workload associated with peer review that would not involve large increases to their administrative budgets. Other options, including payment of referees, may have to be considered if these efforts do not have the desired result or if the workload linked to peer review continues to increase.

Outcomes and Impact of Research Are Not Adequately Measured and Reported

The Committee is concerned that the Canadian federal granting agencies do not put enough effort into measuring and communicating the outputs, outcomes and impacts50 of federally funded research programs. Funding agencies around the world are being asked to better measure and report on the outputs, outcomes and impacts of funding. The under-reporting of output measures by granting agencies may be related to several “disincentives” for reporting on performance:

For high-risk research but even for less risky research, there will be many instances where the original research objectives were not met. Some oversight organizations could view this as failure. In addition, bibliometric studies have shown that … seminal research is produced by relatively few performers. That is independent of whether the metric is the number of papers you produce, the number of patents, the number of citations, or whatever, especially for outputs. They are the quantification of the near-term products. Why would organizations be motivated to show the concentration of productivity in a relatively small number of performers? [Ronald N. Kostoff, 88:11:10]

A form of “performance monitoring” is undertaken by the Canadian granting agencies through the annual Departmental Performance Reports tabled in Parliament. Since 2001, these reports are supposed to place more emphasis on linking resources to outcomes (i.e., benefits to Canadians), rather than reporting largely on departmental activities. Performance monitoring is also undertaken internally by the agencies through irregular evaluations of individual programs. In addition, independent performance audits (as well as financial and compliance audits) are conducted for Parliament by the Office of the Auditor General, which periodically assesses the “value for money” of some of the agencies’ programs.

At the level of the individual applicant, peer review evaluates the outputs and outcomes (but rarely the impact) of past research funding provided by the agency. At the level of the agency, other measures and types of performance evaluation are necessary. There are different sorts of measures, whose utility varies according to the type of research and discipline involved, that can be used as indicators of agency performance. Bibliometrics, the quantitative study of the publication patterns of individual articles, journals and books, used to analyze trends and make comparisons within a body of literature,51 is employed by some researchers and a few agencies to measure the outputs, outcomes and impacts of research programs. In bibliometrics, counts of publications are taken as a measure of research output, and citation data (the number of times a paper is cited in the literature) are used to measure impact. Bibliometrics can be used to monitor or evaluate the research productivity and impact of a group (e.g., an institution, agency or country). The Director of a Canadian organization that conducts bibliometric studies as part of its activities informed the Committee that, in terms of program evaluation:

CIHR has asked us in the last year to evaluate their granting programs … everything [from] the scientific production [to] the quality of the papers published by the researchers who have received grants. This is the first time that CIHR asked us, and the other Councils do so, I would say, quite sporadically. [Benoît Godin, Observatoire des sciences et des technologies, 66:09:25]
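The counting that underlies such bibliometric evaluations can be shown in a short sketch. The records and agency names below are invented, and real studies apply many corrections (field normalization, co-authorship, database coverage) that are omitted here; the sketch only illustrates publication counts as a proxy for output and citation counts as a proxy for impact.

```python
# Minimal, hypothetical sketch of basic bibliometric indicators: publication counts
# as a proxy for research output and citation counts as a proxy for impact.
# All records are invented for illustration.

from collections import defaultdict

# Each record: (funding agency credited on the paper, citations the paper has received)
papers = [
    ("Agency X", 12),
    ("Agency X", 3),
    ("Agency Y", 40),
    ("Agency Y", 0),
    ("Agency Y", 7),
]

def bibliometric_summary(records):
    """Return per-agency publication counts, total citations and citations per paper."""
    counts = defaultdict(int)
    citations = defaultdict(int)
    for agency, cites in records:
        counts[agency] += 1
        citations[agency] += cites
    return {
        agency: {
            "papers": counts[agency],
            "citations": citations[agency],
            "citations_per_paper": citations[agency] / counts[agency],
        }
        for agency in counts
    }

print(bibliometric_summary(papers))
```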

The Committee recognizes that, for program evaluation purposes, the data produced from bibliometric studies should not be taken at face value nor used in isolation, since there are problems associated with such data.52 For research with commercial potential or purpose, other measures related to the economic impact of the research (e.g., the number of patents and licences issued, the creation of spin-off companies, etc.) can be examined instead of, or in addition to, traditional bibliometric indicators. Some critics argue that measuring the impact of research in the humanities and social sciences is difficult, if not impossible, and that performance indicators are better suited to measuring the impact of research in the applied sciences and technology. The design and utility of performance indicators for the social sciences are currently a major subject of debate in that community. Despite the difficulties in measuring the impact of research in the social sciences, one witness suggested that efforts in designing and using performance indicators should be stepped up:

Twenty years ago, there was an interest in impact measurement. But it seems as though what is being said is that it is too difficult to measure. The task requires instruments that do not exist and there certainly are major methodological challenges. It is certainly not easy to measure the social or cultural impact of scientific activities, but it is our belief that a community effort, probably supported by government programs, should be made with regard to this important issue of the measurement of non-scientific impacts of … science and technology. [Benoît Godin, Observatoire des sciences et des technologies, 66:09:50]

The Committee notes that NSERC did provide some quantification (via bibliometrics, and numbers of patents, licences and spin-off companies produced from NSERC-sponsored research) of the outcomes and impact of agency-funded research in its 2000-01 Departmental Performance Report;53 however, many of the figures provided refer to the nation’s performance, so the exact contributions of NSERC-funded research to these figures, and the impact of particular programs, are difficult to ascertain. The Departmental Performance Reports produced by SSHRC and CIHR for the same year provide fewer quantitative data and, in the case of SSHRC, less concrete qualitative information on the link between funding provided and the outcomes and impacts of that funding. The Committee notes that SSHRC plans to introduce a “Final Research Report” form that grantees will be required to complete at the end of each granting period. The form will capture data on such measures as research productivity, knowledge dissemination and transfer, training, international collaboration, and leveraging of financial resources. The information collected from these reports will allow SSHRC to better track the outputs of its research programs. The report will be tested as a pilot project in the late spring of 2002, and SSHRC plans to officially launch it in late June 2002. Such final report forms are used by certain foreign agencies (e.g., the Economic and Social Research Council in the United Kingdom).

The agencies suggest that measuring the outcomes or impact of some types of research is difficult because there are often long gaps between funding a research project or program and witnessing the socio-economic impact of that research. The Committee appreciates this difficulty, but it notes that, in that case, the agencies should place more effort into long-term evaluations of the impact of their overall research programs through “retrospective analysis,” in addition to more short-term monitoring efforts. One witness argued that in addition to the time lag problem associated with measuring outcomes, there is also a problem associated with tracking outputs and outcomes over time:

For science and technology, tracking this output data over long periods of time is difficult. … Research gets conducted in a given organization. It evolves into technology development. That may be conducted in another organization, [and] … may be sponsored by another sponsor. … It keeps going like that to eventual application … The point is, it is very difficult to track research that was sponsored and performed by organizations initially and track them into eventual applications. [Ronald N. Kostoff, 88:11:10]

The Committee recognizes the difficulties associated with assessing the outcomes and impacts of agency funded research, especially in areas such as the social sciences and humanities. However, it believes that the granting agencies should do as much as possible to link budget with performance, and to better explain the economic, societal or environmental impact of research that they fund. Reporting on the qualitative and quantitative outputs, outcomes and impacts of agency-supported research should form part of Departmental Performance Reports, internal program evaluations, and material for public relations. The agencies have a responsibility to provide evidence to the government and taxpayer that there is “value for money” in the relatively large investments made by the government in the agencies. Such reports could also serve a role in helping the agencies to decide the research areas where funds should be allocated in the future. The Committee commends efforts made by the agencies to improve performance monitoring, but believes that they can do more in this regard. To that end, the Committee recommends:

RECOMMENDATION 11

That the Government of Canada ensure that the federal granting agencies take steps to better measure and report on the outcomes and, where possible, impacts of their research programs for the benefit of the general public.

Alternatives to Peer Review

The Committee heard evidence that peer review is the most efficient system available for determining the allocation of federal research funds. Some witnesses indicated that without peer review, science of inferior quality might be supported by the granting agencies:

An absence of peer review or even watered-down peer review may result in mediocre science, waste of resources, and in the long term, poor policy decisions. And some of my international colleagues have in fact reflected on this absence of peer review in certain areas of science and the problems it’s created. [Peter Johnson, Chair, Canadian Polar Commission, 75:09:15]

However, the Committee also heard the opinion that the peer-review process needs to be improved, or even overhauled. A variety of alternatives to peer review have been proposed in recent years, including such mechanisms as productivity-based formula funding, bibliometrics, cash prizes, lottery, block grants to universities (where decision-making is transferred from research funding agencies to universities), discretionary (or “pork barrel”) funding, and bicameral review. Three of the leading alternatives, which were discussed during the Committee’s hearings, are considered in more detail here.

Bibliometrics

Some researchers suggest that bibliometrics, either through the analysis of citation data or “journal impact factors” (a measure of the average number of citations earned by the papers that each journal contains), could be used as a supplement to the peer-review process:

The other way bibliometrics can be useful is to aid peers in their decision to fund research. I would add that it does not replace peers’ judgment, but it can be used as a tool to help researchers, because bibliometrics can aid researchers in telling them what is the quality of the journals they evaluate in which researchers publish. [Benoît Godin, Observatoire des sciences et des technologies, 66:09:25]

Using journal impact factors as an information tool to assess and compare the quality of an individual’s publishing record is controversial, since the impact factors were not designed to evaluate the work of individuals.54 Additionally, extra cost and time are introduced when bibliometrics becomes a part of the evaluation process:

Also, once we start using bibliometrics information as an explicit part of the process, then you actually have to start commitment funding to obtaining that information, ensuring its accurate and reliable, ensuring that the way it’s actually used by, for example, your review panel, it’s consistent across panels, documenting where there are problems, how you approach those problems and resolve them. So it’s an order of complexity that’s probably beyond a number of funding agencies at this time. [Fiona Wood, University of New England, Australia, 79:20:20]

In 1999, the Medical Research Council (MRC) in the United Kingdom launched a pilot study to examine the feasibility of incorporating bibliometrics into its peer review process. The pilot study found that there was a good correlation between the results of bibliometric assessment and conventional peer review of past progress. However, the correlation of bibliometric assessments with the final award decision was quite low. The MRC concluded that bibliometrics should not be used routinely as part of the Council’s evaluation procedures since the costs and extra review time needed to use the data properly would not be justified by the benefits offered.55 The inclusion of bibliometric data as part of the peer-review process has been adopted by the Wellcome Trust, the world’s largest medical research charity. The Trust’s neuroscience panel (annual budget of approximately £20 million, or about $45 million) uses modified journal impact factors, citation analysis and paper counts to help panel members judge the scientific record of applicants. These data are not used in isolation, and in fact, applicants judged to have excellent track records have been turned down for funding for other reasons (e.g., a poor research proposal).56 The Committee encourages the Canadian granting agencies to explore the value and practicality of incorporating bibliometric measures into their peer-review processes.
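For reference, the conventional journal impact factor mentioned above is simply the average number of citations received in a given year by the items a journal published in the two preceding years. The sketch below shows that ratio with invented figures; the modified impact factors used by the Wellcome Trust panel are not specified in the evidence, so only the standard calculation is illustrated.

```python
# Sketch of the conventional journal impact factor: citations in year Y to items the
# journal published in years Y-1 and Y-2, divided by the number of citable items it
# published in those two years. The figures below are invented for illustration.

def impact_factor(citations_in_year: int, items_published_prev_two_years: int) -> float:
    """Average citations per recent article, i.e. the usual journal impact factor."""
    if items_published_prev_two_years == 0:
        raise ValueError("journal published no citable items in the window")
    return citations_in_year / items_published_prev_two_years

# e.g. 2,430 citations in 2001 to articles published in 1999-2000,
# out of 810 citable items published in those two years
print(round(impact_factor(2430, 810), 2))  # -> 3.0
```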

Bicameral Review

One witness told the Committee that the present system for allocating research funds is completely flawed, and presented some anecdotal evidence to support his claim. The witness presented a proposal for an alternative peer review system, called “bicameral review”:

It might be thought that current peer review procedures, despite their flaws, are better than simply allocating funds by tossing a coin. But coin tossing at least gives excellence a fighting chance. In fact, the current system is worse than coin tossing since it actively selects against excellence … Under bicameral review the first decision is made by the committee of peers, who only review the applicant’s track record, not the applicant’s proposed project. The second decision is made in-house by specialists in the funding agency, who with respect to budget justification only review the applicant’s proposed project, not the applicant’s track record. [Donald Forsdyke, Professor, Queen’s University, 58:09:50]

Under the system proposed by the witness, the peer review committee assesses track record as a ratio of achievement to funds previously received. The agency takes the applicant ratings provided by the peer-review committees and then decides what funds the applicant needs. Funds are allocated on a sliding scale: applicants at the top of the scale get 100% of what they are deemed to need, and applicants just below the top receive a lower proportion of funds. This allocation method progresses to the bottom of the scale, where the applicant may receive only 10% of what he or she needs.
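The witness did not specify the shape of the sliding scale beyond its endpoints (100% of assessed need at the top of the ranking, roughly 10% at the bottom). The sketch below assumes a simple linear interpolation between those endpoints; this is one possible reading of the proposal, not the witness’s actual formula.

```python
# Hypothetical sketch of the "bicameral review" sliding-scale allocation: applicants
# ranked by the track-record committee receive a fraction of their requested budget
# that declines from 100% at the top of the ranking to 10% at the bottom.
# The linear interpolation below is an assumption; the witness did not specify the scale.

def sliding_scale_fraction(rank: int, n_applicants: int,
                           top: float = 1.0, bottom: float = 0.10) -> float:
    """Fraction of the assessed need awarded to the applicant at `rank` (1 = best)."""
    if n_applicants == 1:
        return top
    position = (rank - 1) / (n_applicants - 1)  # 0.0 at the top, 1.0 at the bottom
    return top - position * (top - bottom)

# Five applicants, each assessed as needing $100,000
requests = [100_000] * 5
awards = [req * sliding_scale_fraction(r, len(requests))
          for r, req in enumerate(requests, start=1)]
print([round(a) for a in awards])  # [100000, 77500, 55000, 32500, 10000]
```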

Productivity-based Formula Funding

This alternative to peer review is based on the assumption that past success is the best predictor of future performance. Productivity-based formula funding proposes that researchers be funded based on their track records. Under such a system, funds would be allocated according to an algorithm (i.e., dollars awarded would be proportional to some weighted sum of numbers of publications, numbers of advanced degrees awarded, etc.).
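A minimal sketch of the kind of algorithm described follows, in which the award is proportional to a weighted sum of track-record measures. The measures, weights and scaling constant are all invented for illustration; no agency formula of this kind was presented in evidence.

```python
# Hypothetical sketch of productivity-based formula funding: dollars awarded are
# proportional to a weighted sum of track-record measures. Weights, measures and
# the scaling constant are invented for illustration.

WEIGHTS = {
    "publications": 2.0,       # refereed papers in the last five years
    "graduate_degrees": 1.5,   # advanced degrees supervised to completion
    "patents": 1.0,
}
DOLLARS_PER_POINT = 5_000  # assumed scaling constant

def formula_award(track_record: dict[str, int]) -> float:
    """Award = scaling constant x weighted sum of the applicant's track-record counts."""
    score = sum(WEIGHTS.get(measure, 0.0) * count
                for measure, count in track_record.items())
    return DOLLARS_PER_POINT * score

applicant = {"publications": 14, "graduate_degrees": 4, "patents": 1}
print(f"${formula_award(applicant):,.0f}")  # 2*14 + 1.5*4 + 1*1 = 35 points -> $175,000
```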

One witness pointed out that there are strong similarities between bicameral review and productivity-based formula funding:

These two alternatives place heavy emphasis on awards to established researchers with strong track records, although they differ in how the track records would be determined. Both minimize the use of true technical experts in the evaluation of the prospective portion of the proposed research. [Ronald Kostoff, 88:10:10]

The witness went on to suggest that since traditional peer review also places heavy emphasis on the track record of the performer(s) in reaching a funding recommendation, the two proposed alternatives are, in practice, fairly similar to standard peer review methods. The major difference is the absence of technical experts to evaluate the proposed research described in grant applications.

The Committee welcomes concrete suggestions for changes to the system for the allocation of federal research funds. It has some concerns, however, about the bicameral review and productivity-based formula funding proposals. First, the lack of evaluation of the quality and feasibility of the proposed research itself by an expert panel is worrisome, especially in the context of project-based (as opposed to program-based) proposals. Second, the idea of giving money to every applicant under the bicameral review system is a concern given that funds are scarce, and funds would be channelled from applicants ranked highly by the initial review panel to lower-ranked applicants. The Committee does not believe that this method is an efficient or wise way to allocate taxpayers’ money. Nevertheless, the Committee encourages the agencies to review these and other proposed alternatives (or enhancements) to peer review when conducting internal evaluations of peer review (see recommendation 10).

Although many alternatives to peer review have been proposed, most members of the research community do not consider that these alternatives could entirely replace peer review since many of them lack an arm’s length, “quality assurance” component. Rather, the general consensus is that peer review can be fine-tuned but not replaced:

There really aren’t alternatives to what funding agencies are using by way of obtaining scientific expertise for their decision-making. It’s really things like whether or not bibliometrics has a role to play in funding agencies in helping with the process. [Fiona Wood, University of New England, Australia, 79:20:20]

My bottom line is that while peer review has its imperfections and limitations, there is little evidence that the best researchers and ideas are going without funding and far less evidence that the alternatives described above would improve the situation. [Ronald Kostoff, 88:10:10]

The Committee concurs and believes that peer review is the most efficient method for determining the allocation of federal research funds. However, the system can and should be improved. Regular reviews and refinements of peer review practices by the agencies themselves are critical to ensuring that the system for the allocation of federal research funds is efficient, transparent and responsive to the changing needs of the research community and other stakeholders.


42Data for NSERC’s Research Grants Program (now called the Discovery Grants Program).
43Data for SSHRC’s Standard Research Grants Program.
44D. E. Chubin and E. J. Hackett, Peerless Science: Peer Review and U.S. Science Policy, State University of New York Press, Albany, New York, 1990.
45F. Q. Wood, The Peer Review Process, report commissioned for the National Board of Education, Employment and Training (Australia). Australian Government Publishing Service, Canberra, 1997.
46See Appendix 3 for a list of recent major evaluation studies at the three agencies.
47The Auditor General recommended that CIHR evaluate its Operating Grants Program since it has never been subject to an intensive evaluation.
48Report to the National Science Board on the National Science Foundation’s Merit Review System Fiscal Year 2000, http://www.nsf.gov/nsb/documents/2001/nsb0136/nsb0136.pdf
49NSERC Contact, Fall 2000, Vol. 25, No. 3, http://www.nserc.ca/pubs/contact/v25_n3_e.pdf
50Output = the direct result of program activities; outcome = accomplishment of program objectives attributable to program outputs; and impact = broad (often long-range) social, economic or environmental results of a research program. Definitions adapted from categories discussed in the U.S. Government Performance and Results Act of 1993.
51ISI (formerly the Institute for Scientific Information) definition, http://www.isinet.com/isi/search/glossary/index.html.
52R. Barré, “Sense and nonsense of S&T productivity indicators,” Science and Public Policy, Vol. 28, August 2001, p. 259-66.
53The report is accessible electronically from the Treasury Board of Canada Secretariat web site: http://www.tbs-sct.gc.ca/rma/dpr/00-01/NSERC00dpre.pdf
54D. Adam, “The Counting House,” Nature, Vol. 415, February 2002, p. 726-29.
55See MRC Bibliometric Analyses Pilot Study, 1999, http://www.mrc.ac.uk/index/funding/funding-specific_schemes/funding-evaluation_of_schemes/funding-bibliometric_analyses_pilot_study.htm
56G. Lewison, R. Cottrell and D. Dixon, “Bibliometric indicators to assist the peer review process in grant decisions,” Research Evaluation, Vol. 8, April 1999, p. 47-52.