
37th PARLIAMENT, 1st SESSION

Standing Committee on Industry, Science and Technology


EVIDENCE

CONTENTS

Tuesday, April 23, 2002




(1905)
        The Chair (Mr. Walt Lastewka (St. Catharines, Lib.))
        Dr. Fiona Wood (Senior Research Fellow and Lecturer, Centre for Higher Education and Management Policy, School of Professional Development and Leadership, University of New England, Australia)

(1910)

(1915)

(1920)

(1925)

(1930)

(1935)
        The Chair
        Dr. Fiona Wood

(1940)
        Mr. James Rajotte
        Dr. Fiona Wood
        Mr. James Rajotte
        Dr. Fiona Wood

(1945)
        The Chair
        Mr. Joseph Volpe (Eglinton--Lawrence, Lib.)
        Dr. Fiona Wood

(1950)
        Mr. Joseph Volpe

(1955)
        Dr. Fiona Wood
        Mr. Joseph Volpe
        The Chair
        Mr. Stéphane Bergeron (Verchères--Les-Patriotes, BQ)
        Dr. Fiona Wood

(2000)
        Mr. Stéphane Bergeron
        Dr. Fiona Wood
        Mr. Stéphane Bergeron
        Dr. Fiona Wood
        Mr. Stéphane Bergeron

(2005)
        Dr. Fiona Wood
        Mr. Stéphane Bergeron
        The Chair
        Mr. Larry Bagnell (Yukon, Lib.)
        Dr. Fiona Wood
        Mr. Larry Bagnell
        Dr. Fiona Wood
        The Chair
        Mrs. Bev Desjarlais (Churchill, NDP)
        Dr. Fiona Wood

(2010)
        Mrs. Bev Desjarlais
        Dr. Fiona Wood

(2015)
        Mrs. Bev Desjarlais
        Dr. Fiona Wood
        Mrs. Bev Desjarlais
        Dr. Fiona Wood
        The Chair
        Dr. Fiona Wood
        Mr. Brian Fitzpatrick
        Dr. Fiona Wood

(2020)
        The Chair
        Dr. Fiona Wood
        The Chair
        Ms. Lalita Acharya (Committee Researcher)
        Dr. Fiona Wood

(2025)
        The Chair
        Mrs. Cheryl Gallant

CANADA

Standing Committee on Industry, Science and Technology


NUMBER 079    |    1st SESSION    |    37th PARLIAMENT

EVIDENCE

Tuesday, April 23, 2002

[Recorded by Electronic Apparatus]

(1905)

[English]

    The Chair (Mr. Walt Lastewka (St. Catharines, Lib.)): We have Dr. Fiona Wood on video from Australia. Good evening. My name is Walt Lastewka, and I'm the chairman of the industry, science and technology committee here in Canada. I want to really thank you for taking the time to be with us this morning.

    I have a couple of members here. There will be more coming, because we're just finishing votes in the House, but we wanted to begin. Serge Marcil is the parliamentary secretary to the Minister of Industry. Mr. Fitzpatrick is a member of the opposition. We work very closely together on this committee.

    I understand, Dr. Wood, that you're going to put on a short presentation first, and then we'll get into questions. I would ask you to begin.


    Dr. Fiona Wood (Senior Research Fellow and Lecturer, Centre for Higher Education and Management Policy, School of Professional Development and Leadership, University of New England, Australia): Thank you very much. I very much appreciate the invitation to participate in your meeting. The questions your committee are addressing are obviously issues of great importance to many countries, including Australia. I wish you well with your study and your outcome.

    I thought I'd provide you with some background to my own work in research performance, research funding agencies, and peer review. As background, my work covers both higher education policy and science policy. Given that approximately 40% of Commonwealth support for science and innovation goes to higher education research, this background has proved a good mix. I'll just give you a quick overview of the projects I've been involved in over the last decade.

    One of the earliest was a project I undertook in the late 1980s concerning factors influencing the research performance of university academic staff. You may not be aware that in Australia in 1988 we had a substantial change by government to the policy regarding higher education. We moved from a binary system to a unified national system. Also, changes were made to the research funding agencies, such as the Australian Research Council, which was a new research council that took over from the previous Australian Research Grants Scheme. The issues raised with research performance centred on the fact that it's a highly complex area. Academics, of course, are very concerned about measures regarding research performance and that whatever is used actually reflects the sort of work they're involved in and is valid and reliable.

    I followed this study up with a project looking at unsuccessful applicants to the large grant program of the Australian Research Council. What I was looking at there was whether or not the information unsuccessful applicants received about their grant proposals was considered useful, particularly with a view to resubmission. This seemed to be a fairly straightforward project to initiate at the time. Unfortunately, there was a great deal of controversy regarding the results. One of the outcomes from that project was that a number of applicants felt there was a very poor match between the reviewers selected to assess their proposals and the actual proposals themselves. They felt some assessors were not chosen well in respect of familiarity with their field. They also felt there were problems with the sorts of comments they received; they weren't constructive, in the view of a number of the people who responded to our survey. As a result of that project, I actually felt the need to invest in a flak jacket. When you investigate something like peer review, it is a highly emotional area, and as it is linked to the division of funds, people are often very volatile in their reactions to questions regarding their competence to assess research.

    At the same time as this study I did some preliminary work regarding commercialization of university research in Australia and looked at issues and problems regarding that. This was followed up by an inquiry that looked at the role of seed research funds in supporting science and technology. Seed research funds are highly valued, but they're often overlooked in respect of their role in the funding organization's approach to the support of the research effort. For example, $6,000 to $10,000 might go a long way to ensuring collaboration between Australian scientists and colleagues overseas, through visiting of labs, attendance at conferences, and the like.

(1910)

    In 1993, with a colleague, I co-convened an international symposium on research grants management and funding. It was clear from this meeting, which was attended by many senior representatives from major research funding agencies throughout the world, including the U.K., the Netherlands, Canada, and the U.S., that there were a number of stresses in the operations of national research funding agencies, particularly in regard to peer review: the overload or perception of overload on reviewers; the sorts of reviewers who were being selected to make assessments on proposals, what their backgrounds were; how the information regarding their reviewers was compiled by the research funding agencies; questions about monitoring of the process; and whether or not there was a linkage between the ex post aspect of the research that was supported and the ex ante aspect. It's probably the case in Canada, as in other agencies, that a great deal of effort, certainly in the 1980s and 1990s, was concentrated on the ex ante part of the process. But much greater expectation by government about what the return is for the investment of the public dollar in the science enterprise has certainly focused attention on the ex post aspect of research funding. This probably remains one of the major issues in the whole science funding area. What are the appropriate measures to use? What does peer review look like in the contemporary context? What is the role of, for example, bibliometrics in helping in the decision-making process?

    I also undertook a doctoral research project in the mid-1990s that looked at issues and problems in the public funding of university basic research. This entailed a case study of the Australian Research Council's large grants program, probably one of the first cases of having an outsider actually come in, observe the process, and evaluate, to some extent, the outcomes of a particular grant cycle.

    A major project was undertaken in 1996 for the Australian Research Council, which was very concerned about the adequacy of its peer review process. That project entailed an identification of the strengths and weaknesses of peer review in an international context, looking at the Australian Research Council as it operated with its large grants program at that time, identifying problems with the process, and looking at alternatives that might be considered by the council.

    I profiled for one application round the applications and the referees who were used. With the referees, my interest was in the number selected for a particular grant, the response rate, their institutional affiliation, and the extent to which international referees' input was being sought actively by the large grants program for the decision-making process. That showed variation across the disciplinary panels with regard to the institutions used, the response rates, and the extent to which international assessors were being used. I think that's an ongoing issue for all research funding agencies now: to what extent do you include overseas assessors? We have comments saying there are differences in the assessments provided by those from overseas. Japanese assessors tend to be more liberal, U.K. assessors tend to be a lot more critical, so how do you calibrate for every comment and score given by international assessors?

    In 1997, along with Professor Richard Brook, who was then chief executive of the Engineering and Physical Sciences Research Council in the U.K., and Professor Arie Rip, who is Professor of Science and Technology at the University of Twente, I addressed the question of the future of the peer review system at a symposium convened by the Netherlands Organization for Scientific Research for its retiring deputy director general. I'll pick up on some of the points from that presentation shortly.

(1915)

    Finally, there are two other projects of relevance to your committee. One is a study I conducted for the Australian Academy of Science regarding international networks and the competitiveness of Australia's science and technology. The focus of this project was on the adequacy of the support provided to early career researchers to obtain international experience. That international experience could be at a conference, or it could actually be working in a private research laboratory in Europe or in North America. We were particularly concerned about the adequacy of that support, because at the same time a bibliometric study was undertaken that suggested that there had been a decline in citations of Australian scientific work, and one of the explanations that had been put forward to account for this was that there was insufficient support for our young scientists to obtain experience overseas.

    A second project relates to a publication produced by the British Medical Association, entitled Peer Review in Health Sciences. In a chapter of that publication Professor Simon Wesley and I looked at peer review in research grant agencies.

    My more recent work focuses on institutional research management. That's part of an OECD project. We also looked at federal-state relations regarding higher education.

    Now I'll shift to a quick overview of peer review issues based on my work.

    One of the big problems with peer review is definition. I'm sure your committee has come to terms with that. While it's essentially judgment by scientists or other professionals identified as having the requisite expertise on other scientists' research in a competition for scarce resources, it's actually a misnomer. Peer actually doesn't mean equal; it means those at the forefront of the field. So there's often a problem when discussing peer review in not acknowledging that the definition is a generic one; the peer review process itself is generic. A single funding agency may use a range of different peer review procedures. Comparing peer review processes across systems is fraught with a number of dangers if you do not take into account the historical context of the funding agencies, the funding and policy approaches by governments to the science base, and exactly which peer review procedure has been used.

    Peer review, however, is still regarded as a linchpin in science. It is considered a regulatory mechanism that provides the best quality control and accountability for the use of public research funds. It's also used as a justification for autonomy within science. But, of course, the history of research funding agencies and the use of peer review is fairly recent.

    Peer reviewers in the past have tended not to be paid, so it's interesting to note that a number of funding councils are moving much more towards the idea of paying for the reviews they receive. The EPSRC in the U.K. is a good example of this. The concern is basically driven by a perception of overload on reviewers. The idea is that given the competing demands on the time of the best reviewers, an incentive needs to be provided to ensure that there is a value placed on the reviews that are received. The EPSRC also provides to the institutions, as from last year, I gather, a payment for the number of reviews that are received by staff members. Something like £750,000 was allocated last year towards payment of reviewers. So this is quite a big change in the perception that's traditionally been held with peer reviewers. The pay-off is getting a snapshot of where the scientific effort is being directed with grant proposals at a particular time.

(1920)

    One of the problems, of course, with peer review is that peers can be applicants and also assessors. That's a major problem in regard to conflict of interest and identifying the best assessors for a proposal. Funding agencies have to take that into account.

    Peer review is only a means to an end. I think in the professional literature a great deal of emphasis has been focused on peer review itself, as opposed to treating it as a process, a mechanism funding agencies can use to determine funding allocation.

    A set of attributes has been identified for peer review. Chubin in the U.S. has identified these as effectiveness, efficiency, accountability, responsiveness, rational and fair procedures, and valid and reliable measures. But of course, in reality, trade-offs are inevitable regarding the different attributes.

    With regard to the research communities themselves, we have a number of major stresses that question the role of research funding agencies in relation to interpreting government policy, where public funds should be invested, and the adequacy of the peer review process itself and the way it is interpreted. These stresses are things like the scale of the research enterprise itself, eroding university infrastructure. In Australia this is an ongoing concern. The capacity to do research consistently exceeds the funds available, and again to draw on the EPSRC in the U.K., they do not wish to have less than a 50% or a third success rate for grant applications. They don't want to have far more applications being submitted than there are funds to provide for at a level they feel comfortable with, nor do they want applications being sent to the agency that are premature. So the pressure is back on the institutions themselves to have very professional research management officers who will do, essentially, a first peer assessment of the proposals to ensure that they are of the calibre the funding council expects. And for universities themselves that's a funding commitment, the staffing of those management officers.

    High costs and complexity of research have been around for a while. Other stresses are increasing internationalization of research and development, accommodating mass systems of higher education, demands for interdisciplinary solutions to complex problems, worked-out fields not being easy to terminate, an aging research community, and increasing legal and administrative constraints. We also have the impact of advances in information communications and technology and their impact on knowledge production and the institutional capacity to support it.

    We have issues such as brain drain, brain gain, and brain churn. While there are perceptions in a number of countries about brain drain, the information being used has often been inadequate to support claims that countries are losing the best. There's a perception in Australia that we have brain drain, but with the sorts of information that can be used by funding agencies to support those claims, I think that's still very open.

    Understanding what national systems of innovation are and how they work is obviously a challenge in many countries, and ensuring that scientists themselves understand what is involved in the national systems of innovation is clearly a communication issue for funding agencies and universities.

    The literature on peer review is vast and disaggregated. There's a long history, since the funding councils have been around, of comments regarding the strengths and weaknesses of peer review. Some have been based on anecdotal comments, some have been based on studies done at different times and under different circumstances. There are few systematic studies linking grant decision outcome with funding policies. The majority of studies in the literature are not by those who've been involved in the funding councils themselves. They tend to be produced by those who, like myself, are independent researchers. Ron Kostoff of the U.S. Office of Naval Research has drawn attention to this particular issue in the past. There's a danger, when looking at the literature on peer review, of applying the findings regarding strengths and weaknesses unilaterally.

(1925)

    Problems in peer review itself are predicting outcomes from a proposal; defining excellence; how you select your peers and monitor and evaluate their performance; the operations of panels, and their membership; the perceptions regarding biases, which can be personal biases, cognitive biases, or institutional biases; the ability of peer review to be helpful in priority setting. The cost of running peer review for national research funding agencies has been substantial in a number of cases. I noticed that with the quinquennial review in the U.K. regarding the research councils, there is a strong push to try to share efficiencies among the funding agencies across the councils. If you introduce a requirement for not only monitoring of the grants process, but also an evaluation, that clearly creates costs in data acquisition and the sort of staff you're going to involve in the process for both monitoring and evaluation.

    Over the years that I've been working in the area of research funding agencies and the peer review process, transparency issues have certainly been addressed by funding councils. The Internet has helped very much in this regard, but there is a danger in providing information on the Internet about the operations of funding councils where the information is simply posted and not kept up to date. That kind of transparency could actually have a negative impact on the understanding of the research performer about the operations of the council.

    Another issue has to do with anonymity of reviewers. It's an ongoing debate about whether or not identifying information should be provided. In countries such as Australia, which have a small science population, perhaps anonymity is still important, but when you have a large science base, I personally see no problems with the listing of those consulted for assessments during a particular grant cycle.

    There are ongoing issues with scoring procedures. To what extent does a score actually reflect the quality of a particular proposal? We have, over the years, with different funding agencies, experimentation with a range of procedures in changing scoring to try to make it better reflect the quality of the proposal.

    Conflict of interest continues to be a concern. Scientific misconduct has been documented on and off over the years, but the extent to which that occurs, I think, is very difficult to establish.

    Information about applicant feedback varies amongst councils. Some funding councils allow for an appeals process, which basically covers the scientific, as well as the procedural, aspect. The Australian Research Council will only entertain appeals based on the administrative side.

    The opportunity cost of proposals not funded is an open question.

    Of course, there are discrepancies between assessor comments and assessments. That has been an ongoing issue over the decades.

    In response to criticisms of peer review, we have calls for more monitoring and review of the process, and an expectation of experimentation with different ways of using it. And of course, the linkage between the ex post and ex ante aspects is much, much stronger. There's increased transparency, and use of non-peers, as well as peers. Better assessor information needs to be compiled by funding agencies, and greater use of electronic media, which we see, for example, in many funding agencies allowing for the submission of grant proposals electronically. There's the use of broader criteria for the assessment of proposals, ringmarking of funding for particular groups, such as early career researchers, and allowing, for example, for applicants to provide rejoinders to assessments before final decisions are made. Triage has been used in various forms. We have in Australia what's called culling, which probably sounds like a very harsh comment on the process, getting rid of a particular percentage that aren't considered competitive early on. In the U.K. I think it's called sifting. Payment of assessors is a matter I spoke about earlier. Evaluation of assessor performance is an ongoing concern of the majority of funding agencies.

(1930)

    As for alternatives to peer review, we've had in the past suggestions about formula funding, having a strong program manager, moving to block grants, bibliometrics. Bibliometrics in itself, I would say, is inadequate as an alternative, although some funding agencies appear to be using it to augment the process. Funding the performer instead of the proposal is another option. A science court, proposed by, for example, Ron Kostoff in the U.S., is another option.

    At the presentation to the NWO that I referred to earlier, what I considered an important question to ask was whether there was a best practice model for research funding agencies. Clearly, the GPRA legislation in the U.S. is more geared along the lines of expecting that agencies will move towards best practice in their operations and that concepts such as benchmarking will begin to be part of the thinking of the funding agencies. Having recently looked at some of the material from different funding agencies internationally, I find it interesting that in the last five years there have been specific comments about benchmarking for particular operations of the funding councils.

    I'll just finish with a few comments about our funding agencies in Australia. We have the Australian Research Council and the National Health and Medical Research Council, the two principal agencies for supporting particularly higher education research. They have been subject to a number of reviews and changes in their operations, specifically geared to linking much more into supporting the national innovation system. They've experimented with different ways of using peer review and configuring the funding agencies themselves in respect of panels, the use of assessors, interviews, and the like.

    The Australian Research Council now operates as a statutory body separate from the former Department of Education, Training and Youth Affairs, which used to have the responsibility for providing administrative support for the ARC. Now the ARC has responsibility for the whole process. It also has a board, a part-time chair, and a full-time chief executive officer. Similar arrangements are in place with the National Health and Medical Research Council.

    The ARC has moved towards appointing executive directors to oversee six broad disciplinary groupings. The executive directors are supported by expert advisory committees and discipline-specific readers. Readers are actually paid a nominal amount for the number of applications they assess and rank. The board determines the expert advisory committee membership, and there's much greater attention to assessing the performance of readers. There is also a quality subcommittee to the ARC's board.

(1935)

    The shift in recent times has been towards supporting larger projects, longer-term projects, encouraging team-based research, encouraging concentration of resources within universities, identifying the true cost of research, and increasing transparency and accountability. The ARC now operates with a number of programs, including the discovery and linkage program. It has also set up federation fellowships, which are prestigious fellowships primarily geared to bringing the best back to Australia or encouraging people from overseas to conduct research in Australia. The salary is about $225,000 per annum and the institution provides a mix of financial and in-kind support for these researchers.

    As you are probably aware, the Australian government has indicated to the Australian Research Council that one-third of its annual budget for 2003 is to go to four priority areas. There has been a mixed response from the Australian science community to this and criticism regarding insufficient consultation, although that's a matter of debate.

    In Australia a colleague and I did a profile last year on the outcome of large grants for the universities, and 69% of the large grants went to what are called the group of eight universities. These are the research-strong universities. There are questions regarding the role of regional and small universities now and the support they can expect from bodies such as the ARC.

    I came across a report recently produced regarding a review of the Australian Research Council, and it was looking at the comparative cost of administration of research schemes. For the Australian Research Council at that stage it was 1.96% of the $350 million committed to funding. For the National Endowment for the Humanities in the U.S. it was 12.3% of $136 million. For the MRC in Canada it was 3.8% of its $342.4 million, and for your Social Sciences and Humanities Research Council it was 8.1% of the money that was committed. For the Netherlands Organization for Scientific Research it was 3%. I think one of the big issues research funding agencies will be concerned with internationally will be how to keep the amount of funding expended on the administrative support of the process to a reasonable percentage, whatever that might be, making maximum use of electronic data banks that are available, and ensuring that the monitoring and evaluation are done in the most efficient and effective way possible. It raises concerns about how much money you commit to the actual administrative side and the evaluation side of the funding council's processes.

    I'll leave my presentation there now. I hope I've covered enough for you to identify areas of interest to your committee.


    The Chair: Thank you very much, Dr. Wood. It's really appreciated.

    We have a few more members who have joined us after the vote, so we'll start off with some questioning. Mr. Rajotte, will you begin? Mr. Rajotte is a member of our opposition.


    Mr. James Rajotte (Edmonton Southwest, Canadian Alliance): Thank you very much, Dr. Wood, for appearing before us today. We appreciate very much this give and take between our two nations in a very important area.

    I do want to touch first upon possible conflicts of interest. You said there's still a concern, and you rightly pointed out that researchers are at different times applicants and reviewers. I'm wondering how you in Australia minimize potential conflicts of interest.


    Dr. Fiona Wood: In Australia, as in many national funding agencies, there are procedures regarding conflicts of interest that have been in place for a number of years, so it's not an area that has only recently been addressed. With the Australian Research Council and the NHMRC, I don't have up-to-date information on that, except that there are procedures relating to those who have a conflict through belonging, for example, to the same institution as an applicant or being a competitor in the same research field, particularly where there is commercially relevant research that may be an outcome from that. There may be personal relationships of a family nature, or somebody may be a former student of one of the assessors. This information is certainly meant to be put up front.

    There are two things here, though. Where external assessments are asked for by those from the research community, there would be requirements there to indicate whether or not there is a conflict of interest. Certainly, I've seen examples where people have actually returned assessments saying they are a direct competitor of the proposer, and so they feel it would be inappropriate for them to actually make an assessment. This is a problem if you have a small community of researchers: if you have direct competitors, where do you look for the assessment? My view is that to some extent, you need to go outside the country for external assessment from international peers. That concerns assessment done through the mail.

    With assessments done by research panels, a great deal of attention has been paid by research funding agencies in Australia to the potential for conflict there. Where discussions are held regarding an application from a proposer from the same institution as one of the panel members, that panel member has to withdraw from that particular discussion. Also, of course, you have panel members who apply for funding themselves, so you have a different potential conflict there. Funding agencies, as I say, will vary in the ways they set up procedures to accommodate that.

    In regard to the U.K. research councils, you would be familiar with the Nolan committee of inquiry into those holding public office that was conducted a few years ago. There were seven principles for public life advocated by that committee, which the funding agencies will acknowledge in relation to their performance and expectations of reviewers used within the process.

    That is sort of a disaggregated response to the query, but I hope that will help.

(1940)

    Mr. James Rajotte: You also mentioned the use of peers and non-peers, and I'm wondering if you could expand on that process.


    Dr. Fiona Wood: I'm not sure whether you are familiar with the operations of the Dutch Technology Foundation, which is part of the Netherlands Organization for Scientific Research. They experimented a number of years ago with the use of both peers, people primarily from within the academic community, and non-peers, primarily those from the private sector who are end-users of a particular piece of research, thus providing quite a different sort of input into the peer review process from one that relied primarily on those who held positions within the university system or within the public research institutes. The STW, the Dutch Technology Foundation, are very firmly convinced that this is an excellent way of acquiring a very good cross-section of input into the peer review process.


    Mr. James Rajotte: One of the complaints you hear here in Canada is that there's not enough feedback to the applicants as to why they were denied funding for their project and there's not enough of an appeal mechanism. I'm wondering if you could describe the feedback provided to applicants by ARC following their recommendation, and what steps can be taken to appeal a decision or find out why a decision was made, and then improve the process with the feedback they provide?


    Dr. Fiona Wood: You've raised a number of very important and complex issues, I think, which are still being addressed by bodies such as the ARC. One of the big questions is, to what extent does an applicant have a right to expect feedback regarding their application, particularly if it's unsuccessful? Really, in this situation, we're just looking at unsuccessful applicants, who are obviously the large proportion of applicants to a funding agency. That's a matter for the funding agency to address itself. Certainly, there's a variation with the sort of information that is provided, from almost checked boxes that there was a deficiency in the methodology, that the applicant didn't have sufficient track record within the international community, that the proposal wasn't well thought out, or whatever.

    Generally, you do get provided with the assessor's comments, and then you get the opportunity to provide a rejoinder to those comments. That seems to be a fair and reasonable amount of information provided to the applicant. The problem, of course, has been where you get only two assessments and they're diametrically opposed. One says this is the best thing since sliced bread, and the other says this is a very poorly thought out proposal and unlikely to deliver any results. Having actually got the referees' comments, the applicant is confused about which information is taken notice of by the panel. But you do get the opportunity to provide a rejoinder, and that's considered a very valuable aspect of the peer review process with the ARC.

    You also mention appeal. In the U.S., certainly in the 1990s, there was a great deal of attention directed towards the appeal mechanism. Whether or not they became over-bureaucratized is a question that was raised. With the Australian Research Council, as mentioned previously, it's to do with the administrative side, not the scientific judgment. If, for example, there was some sort of procedural aspect regarding your proposal that meant it didn't get forwarded at the right time to assessors, you could appeal on that basis, but you couldn't appeal on the determination of the merit of your proposal. That's a problem a number of Australian academics see with the process.

(1945)

    The Chair: Thank you, Mr. Rajotte.

    Mr. Volpe.


    Mr. Joseph Volpe (Eglinton--Lawrence, Lib.): Thank you, Mr. Chair. Dr. Wood, thank you very much for joining us from such a great distance.

    One of the issues many of us face here is that research councils, research granting agencies, etc., operate essentially for the purposes of the research community that seeks to maintain research independent from specific objectives of society and commerce. For example--and I hope I'm expressing myself accurately--there are many who think research has to fulfil a particular purpose, in that monies expended for maintaining research at a national level have to produce results. The fact is that many results in the scientific community often translate themselves into pharmaceuticals etc., and countries like Australia and Canada trail far behind several other countries in innovation in that regard. So there's a question of whether one talks about the peer review system for the purposes of dealing with the bureaucracy or we deal with the substance of research. If we deal with the substance and direction of research, should the model be one where there is a partnership, through foundations etc., with industry that would direct research, or should governments operate as businesses do in saying, we want particular performance results, accomplishments, otherwise, let's forget this research?

    Do you have that dialogue, and if so, which of the two do you prefer?


    Dr. Fiona Wood: You've asked a really tricky question, sir. They're certainly issues of importance, not only in Australia, but elsewhere. If I can go a roundabout way to answering your question, it's interesting to see, when you look at the research funding agencies' home pages, that virtually all of them now have a strategic plan, very much a private sector approach to their operations. How well this sits with the academic community in particular, I think is an open question.

    We certainly have an expectation that funding that goes into the science base, in particular into public institutions, does deliver a return in some form. There is an acknowledgment that the outcomes from basic and strategic basic science are often not measurable directly, as they are longer-term in producing returns for industry, particular social issues, or whatever. What the funding agencies are moving towards in Australia, and have been for a few years, is providing different funding programs for the basic and strategic basic and for those where there's an expectation of linkage with industry. Partnerships are very much a concern in Australia between government, higher education, and industry groupings. Whether or not academics can respond to those expectations is an open question. We have cooperative research centres in Australia very much geared to maximizing the private and public sector science activity, and they work very successfully.

    When it comes to preferred models, researchers want autonomy over deciding what research they would like to undertake over a period of time; you don't wish to be directed through funding agency policies into undertaking your research in a way you initially didn't intend. I suspect there's a greater awareness, certainly in Australia, of the importance of the public-private sector linkages. The big concern is that some areas are much more relevant for these linkages than others. The view of some is that they will become disenfranchised from the research effort because they can't demonstrate direct relevance to industry or whatever.

    Does that help answer your query?

(1950)

    Mr. Joseph Volpe: I'm afraid, Dr. Wood, you're probably much more political than some of the people around the table. You have managed to do something that here in Canada we call skating. For those who don't have ice surfaces, maybe it's dancing. At any rate, you've done an admirable job of throwing it back over to me, and I compliment you on that.

    If I can continue to be provocative, there are many who feel--and this is part of the debate--that a scientist is only as much of “the brain”, of inventory, of any country's assets as, say, a professor in English poetry, literature, or philosophy. Yet the latter are not seen as utilitarian. They don't have a strategic function other than to give character to a country's value structure and social system. There isn't a direct material relevance in some of the things they do. There isn't a commercial transferability. In an environment where one has decreasing resources, the question of accountability is always present, and performance and utilitarianism are very much a part of the granting system. And yet, in most of the peer review issues with which I have been associated, the dominant argument is always whether we should have basic scientific research, and that's a euphemism for saying research for the sake of research. That way we keep our brains here, but we don't know what we're going to do with them.

    Is that a mischaracterization of the concept?

(1955)

    Dr. Fiona Wood: I don't think it is. I use the term disenfranchised. Some in the humanities and in parts of the social sciences feel it's very difficult to demonstrate their utilitarian value and are concerned that if the policies of funding councils become too tightly linked with commodity outputs, they will no longer have a role within the research system in the way they previously had. Peak bodies, such as the academies, have an important role to play in informing government and communities about the role those disciplines that don't have immediate utilitarian value have in the required civilization concept. They vary in their effectiveness as lobby groups, certainly in Australia, but the issues you've raised are ones that are still playing themselves out. It's presumably a watershed for some countries if they do, in fact, disenfranchise through their funding policies those who traditionally were part of the basic science enterprise.

    If I can add a little more on the lobbying that is done by the peak groups, they will have a substantial influence on the receptiveness of governments to the sorts of policies they have regarding funding and the national funding agencies. I think it's also important that national funding agencies look to see what's happening elsewhere, which is obviously what Canada is doing. Are there best practice models of funding agencies that incorporate support for areas such as the humanities, where they are still performing a viable and vital part of the research enterprise nationally? It's an open question, I think, how active and how informative the peak bodies are, and how receptive governments are to the sorts of arguments they might make.


    Mr. Joseph Volpe: Thank you, Dr. Wood.


    The Chair: Thank you, Mr. Volpe.

    Monsieur Bergeron.

[Translation]


    Mr. Stéphane Bergeron (Verchères--Les-Patriotes, BQ): Thank you, Mr. Chairman.

    If I may, I would like to follow up on Mr. Volpe's question on strategic research, since Mr. Volpe may have been a little biased when he mentioned the value of basic research, which he described as research for the sake of doing research.

    In January 2002, your government decided to allocate a third of its subsidy budget toward research in target areas, in other words, so-called strategic research. Does that mean there is increased focus on practical research in Australia? Of course, two thirds of the budget goes to what is known as basic research, but is there an increased tendency to redirect the research budgets towards what is known as strategic research?

[English]


    Dr. Fiona Wood: That's a good question. Your comments relate to the Australian Research Council, and earmarking of funding for priority areas will take up about a third of the funding for next year. The Australian Research Council undertook extensive consultation over the last year with the relevant communities in Australia about what they considered to be appropriate priority areas and came up with about 12 priority areas and ranked them A and B. The government has come back, though, as you say, with a directive to fund four of those areas, and that's caused concern in some parts of the community, with support from some. I believe the Academy of Technological Sciences and Engineering has not found a problem with the government's directive, but others have made critical comments about government being actively involved in determining priorities.

    Priority setting has been on the agenda for our funding councils, both NHMRC, the medical one, and the Australian Research Council, for a number of years, and they've dealt with it in different ways. It's certainly an issue that is very contentious, but the argument in favour of it in relation to Australia is that we have a very small population. We have a large land mass, but with the population, the spread of our expertise, we can't possibly afford to cover everything, so priority setting, at some stage, is inevitable. And including our academic community in particular in the process is one of the better ways, obviously, of getting an outcome that is acceptable to the performers of the research, as well as the government.

(2000)

[Translation]

    Mr. Stéphane Bergeron: I would say that Canada and Australia probably share the distinctive feature of having a huge territory with a relatively low population. That means your experience will no doubt be a source of inspiration.

    That said, I feel like asking you the following question myself. Do you think allocating a third of the budget to more strategic, more targeted research is adequate, or do you think it is below what should be invested in strategic research?

[English]


    Dr. Fiona Wood: You've asked a question that has preoccupied government agendas and research agendas for decades. I'm afraid I can't respond to that in a helpful way. I can say that one needs to keep a watching brief on how these decisions are made, who's involved in the process, and how the monitoring process is set up for the way the funding is invested, people, infrastructure, international linkages, that sort of thing. I'm afraid I can't give you an answer about too much or too little, except to say that a third did seem to be quite a bit for funding to go to directed, strategic, priority-driven research.

[Translation]


    Mr. Stéphane Bergeron: I must agree with Mr. Volpe that you certainly have excellent qualities for becoming a politician in Australia.

    That said, and since you don't seem to want to answer questions of that nature...

[English]


    Dr. Fiona Wood: If I were to answer your question, I would be one of the first, I think, to come out publicly and put a percentage figure or a monetary figure on the allocation. And I think I would be a fool to do that. It's an issue that is largely playing itself out in many countries. I think what would be useful would be an international forum or workshop that looked at these specific issues and particularly compared where the information was being drawn from by government to make these sorts of decisions and how the funding agencies, the research communities, particularly peak bodies, are having input into that. I think that's where you would get your answer. Obviously, national traditions, history, and in particular, governments that may be in office at the time will very much influence the outcomes. But there's great value from international comparative discussions in this area.

    That's about as far as I will go in comment on that.

[Translation]


    Mr. Stéphane Bergeron: If I may, Mr. Chairman, I would like to ask one last short question.

    Australia and Canada are countries whose demographic makeup is largely the result of immigration. Canada's federal government recently issued a new strategic plan for innovation which included suggestions to amend immigration policies and formalities with a view to retaining foreign students who had gained knowledge on their territory, so that they could integrate into Canadian society more easily, which, in turn, would allow us to benefit from their knowledge. Do you think it would be worthwhile for a country like Australia to amend its immigration laws to try to keep foreign students so that they could contribute to scientific research in Australia?

(2005)

[English]

    Dr. Fiona Wood: In principle, I would be very supportive of that approach. You do have the view, though, that it is useful for students from other countries, particularly if they're supported by funds from their countries, to take the outcomes of their studies back to their countries. And if we're looking at third-world countries in particular, I think it's important that their outcomes actually are used for the betterment of their particular country. In general, we're looking at internationalization of research and development. We look very much, in Australia, at what is happening with the EU framework programs, with mobility of academics, and early career researchers in particular, between public and private sectors and between countries. I think it's very important to acknowledge the returns, both individually and to different countries, from having more fluid immigration regulations.

[Translation]


    Mr. Stéphane Bergeron: Thank you.

[English]


    The Chair: Thank you, Mr. Bergeron.

    Mr. Bagnell.


    Mr. Larry Bagnell (Yukon, Lib.): Thank you.

    I only have one question. I come from the part of Canada that has the fewest people. It's very remote from the rest of Canada. So our academic institutions are very small and don't have a lot of senior expertise in them. I'm just wondering if such institutions in Australia complain that they don't get any funds or get only minimal funds, and that they're not treated as well as they could be?


    Dr. Fiona Wood: It's a good question. There certainly is concern from a number of the smaller regional universities and the universities that came from the former binary divide, the colleges of advanced education. They are not looked upon as favourably by the major research funding agencies as their more established colleagues. At the university I'm at, a small regional university, we've noticed a decline in the success of large grant applications to the Australian Research Council over a number of years, reflecting as much as anything a view by the council in favour of concentration and selectivity, a push very much towards funding institutions that already have a fairly large critical mass of performance and reasonable levels of research infrastructure.

    To me, a diverse system of performers is essential for any country. And I think that question of whether or not funding should be ringmarked for regional or smaller universities is one that needs to be seriously addressed.


    Mr. Larry Bagnell: Are there any mechanisms you're aware of, either in Australia or anywhere else in the world, to deal with that problem?


    Dr. Fiona Wood: Not so much mechanisms, but certainly there's an awareness that there are issues still to be resolved. I think the only mechanism you could use would be ringmarking a proportion of funding. Or at least, taking into account your desired performers, you should make a decision that you only want institutions that have a critical mass of whatever to apply, or that in a diverse system of higher education and research, smaller institutions have a role to play, and it's important to ensure that they obtain support through the funding agencies.


    The Chair: Thank you, Mr. Bagnell.

    Mrs. Desjarlais.


    Mrs. Bev Desjarlais (Churchill, NDP): Thank you, Mr. Chair.

    Following the line of the smaller regional institutions, would you see that as more a political issue than an issue with the academic community?


    Dr. Fiona Wood: In Australia, it's a political and an academic issue. Our funding in Australia for universities comes from the federal government, as from 1974. Statutory responsibility for universities belongs to individual states. You have a mix there. You have a government that can use funding to direct universities to perform in a particular way, but the universities themselves operate under state legislation. States are very concerned to make sure there is support for the institutions they have legislative responsibility for. In those states that have regional universities, of course, members of Parliament are very keen to observe what is happening with funding allocations.

    With the academic community, we have a de facto stratification in Australia with what is called the group of eight institutions. I mentioned them before. They're very strong, research-intensive institutions with very long histories. Then you have another category of institutions, which we call the pre-1988 non-group of eight universities, university of technology institutions, and regional universities. I would say concentration and selectivity policies are very much in favour of the group of eight institutions. So they could be quite happy with policies that saw them continuing to get a greater share of the funding allocation. Clearly, regional universities and those universities that have come from the former binary divide, that were in the CAE sector, would have quite a different perspective on that funding allocation.

    So it's to do with a diverse system of research and higher education in what's called a mass system of higher education. Political statements on concentration and selectivity are very much about performers and who are considered appropriate performers.

(2010)

    Mrs. Bev Desjarlais: You mentioned that there's always this challenge with how much should be put towards administrative costs. Have you come up with any idea of what it should be? Could you tell us what proportion tends to be administrative costs right now?


    Dr. Fiona Wood: I did mention earlier that there was a profile prepared as part of the review of the Australian Research Council several years ago comparing the administrative costs across agencies in Canada, Australia, and elsewhere. What these percentages mean depends on what your base budget is, whether there's a norm, what's reasonable. I have no answer to that. The question, though, of how much actual funding is committed is certainly one that's preoccupying a number of funding agencies.

    I also mentioned earlier the U.K. Research Council. The quinquennial review was actually focusing attention on prioritizing their sharing of best practice in the administrative side of their processes. One part of this has to do with whether you're using the same sorts of peer reviewers across funding agencies. From the administration side, it would make sense; there would be cost savings if you started standardizing your assessor reports, your application forms, or whatever. So sharing of information appears to be very much encouraged by government in an obvious area where funding agencies can learn from each other about effectiveness and efficiency.

    The figures come from 1998. In the Australian Research Council just under 2% was committed to administration. The National Science Foundation had just under 4%. The largest was just over 12% for the National Endowment for the Humanities in the U.S. For the Medical Research Council in Canada the figure was 3.8%, and for your SSHRCC 8.1%. Again, as I say, these figures were for 1998.

    Whether or not any represents the right proportion, I think most funding agencies would take the view that money saved on the administrative side, obviously, is money that can go into the research support itself. But there's a question of balance. You must ensure that you have high quality people involved in your administrative support. You must ensure that you have proper systems in place, particularly with management information systems that actually add value to the process, rather than just supporting it. It's also people who are involved in the monitoring and evaluation. There's a question there. Do you have consultants for that, or do you actually have them as part of a unit within your administrative support? Again, this is an area where comparisons across funding agencies internationally I think will be very valuable.

(2015)

    Mrs. Bev Desjarlais: Do the independent agencies in Australia have parliamentary or government oversight in the auditing of their finances?


    Dr. Fiona Wood: Can you be more specific about independent agencies?


    Mrs. Bev Desjarlais: Following the passage of the Research Council Act in Australia, are they still under government oversight as far as their funding goes?


    Dr. Fiona Wood: The Australian Research Council and the NHMRC each now operate as a statutory body. There are several forms of accountability. One would be through their strategic plan. Another would be through the reports they are required to file annually with Parliament. But with their directions themselves, there's no oversight in the sense of micromanagement. They are, of course, accountable, as they use public funds, so there are a number of mechanisms for demonstrating that accountability, apart from annual reports and responses to their strategic plans.

+-

    The Chair: Thank you very much.

    Mr. Fitzpatrick.

+-

    Mr. Brian Fitzpatrick (Prince Albert, Canadian Alliance): Thank you, Dr. Wood.

    Canada is a large country, like Australia. We don't have a large population, only 30 million, most of it concentrated very close to the U.S. border. It seems to me that, given the size of our country, our resources, and so on, we may have to look at a strategic approach to this whole area of research. I guess the question is quite specific. Do you know of any particular strategic model Canada should be looking at if we move in this direction, or if we are already moving in this direction?

+-

    Dr. Fiona Wood: That's a hard question. I think it's probably too early, with the reconfigured research councils in Australia, to say whether or not they would be an appropriate model in their approach to strategic planning and the prioritizing of research funding, taking into account the sorts of points you've just made. I think one of the biggest drivers influencing the way our funding agencies operate is the imperative to have a knowledge-based economy that is internationally competitive, more than the concern about a small population and a large land mass. That issue of small population and large land mass would probably be resolved in the context of wanting to link our science base effectively into a healthy knowledge-based economy. So that's a non-answer, unfortunately, on that one.

+-

    Mr. Brian Fitzpatrick: With this question maybe I'll get the same answer. It seems to me that from a national perspective, for the benefit of your country, if research doesn't translate into real benefits for society, it's not without value, but it's not.... I think society wants to get a good return on this sort of thing, to strengthen the country and so on. Do you have any models for commercialization that you find attractive, models for putting something into actual use and making sure a nation derives as much benefit as it can from breakthroughs in research and development?

+-

    Dr. Fiona Wood: There are a number of different funding mechanisms used by the Australian government that take that into account. The cooperative research centres would certainly be worth looking at, because of their explicit requirement to link public sector and private sector research performers in activities of benefit to the Australian community. Also, an important component of their operations is research training. We haven't touched on that in our discussion today in any real detail, but the sorts of research training our PhD and early-career researchers receive via the different funding models are an important area to focus on. With my own research now looking more at institutional research management, there is certainly an awareness of the value of having research students who understand the commercialization of research and the issues associated with it, and who know how to link in as part of a team with private sector or public sector sponsors.

    So, as I say, the cooperative research centres would certainly be worth looking at. The research and development corporations that were established under legislation, I think probably about the mid-1980s, appear to have very effective policies and strategies in place for bringing a return to their particular stakeholder groups. So I strongly suggest you have a look at the operations of those different rural R and D agencies.

    I think the whole issue is still being played out. It's probably too soon to say whether or not one particular model is producing the outcomes expected by government with regard to commodities or a stronger knowledge-based economy.

¾  +-(2020)  

+-

    The Chair: Thank you, Mr. Fitzpatrick.

    Ms. Gallant.

+-

    Mrs. Cheryl Gallant (Renfrew—Nipissing—Pembroke, Canadian Alliance): Thank you.

    Dr. Wood, would you please describe the criteria on which decisions to fund individual professors and research at universities are based in Australia? Is it the number of papers, the length of time? How do you decide who gets what in a university setting?

+-

    Dr. Fiona Wood: You've just asked the hardest question today to provide information on. There is a variety of selection criteria used by different institutions, depending on where they position themselves within the unified national system. Given that we have 36 or 37 institutions, there would be commonality of criteria, but there would also be differences, particularly disciplinary differences. If you look at the criteria that are of particular importance for funding allocations, that is, rewards by government for performance, we are not only looking at research performance expressed in publications; we're also looking at the amount of funding brought in through the competitive grants and through links with industry. The government is also very keen on research training, so the number of PhD students you have and have graduated is taken into account as well. But if you're looking at the criteria, you would actually have to look at the different institutions themselves, apart from the obvious standard criteria.

+-

    The Chair: I'll allow our researcher to ask some questions and take advantage of the situation. Go ahead.

+-

    Ms. Lalita Acharya (Committee Researcher): Dr. Wood, you mentioned a number of proposed alternatives to peer review: formula funding, the pork-barrel, bibliometrics, funding the performer, and so on. I'm wondering if any of them are real alternatives to peer review, or whether they could be used to supplement the peer review process.

+-

    Dr. Fiona Wood: I'm not sure how familiar the committee is with the work of Dr. Kostoff from the U.S. Office of Naval Research. He's very much of the view that peer review, the way it is used, needs to be of high quality. He's one of the advocates of, for example, the court approach to review, where you have advocates for particular proposals and a great deal of debate regarding those proposals, and you augment this with bibliometric information.

    There really aren't alternatives to what funding agencies are using by way of obtaining scientific expertise for their decision-making. It's really a question of whether bibliometrics, for example, has a role to play in helping funding agencies with the process. I think that's still very much an open question. Kostoff and others are very keen to have that information used within the process, but there are a number of downsides to bibliometrics, which are well documented in the professional literature. Also, once you start using bibliometric information as an explicit part of the process, you have to start committing funding to obtaining that information, ensuring that it's accurate and reliable, ensuring that the way it's actually used by your review panels, for example, is consistent across panels, and documenting where there are problems and how you approach and resolve them. So it's an order of complexity that's probably beyond a number of funding agencies at this time.

    Lotteries are being offered as alternatives, and of course, that's really just an expression of dissatisfaction with the outcomes of the grants process. That's obviously not a legitimate alternative if you're concerned with directing the funding effort nationally through the funding agencies.

    Essentially, what we have are variations in how peer review is used within the funding agencies and in where peer advice is obtained. I mentioned before the importance of obtaining international peer contributions to the assessment process, and in Australia that's been used in varying degrees over time. The problem is the different ways assessors from different countries might make assessments, but that is less of a problem now, when we have much more internationalization of R and D, than it would perhaps have been 15 or so years ago.

    So one could say it's fine-tuning of peer review, but it's also thinking more broadly about the mechanisms that are being used and the way you actually set up your monitoring and your performance analysis within the agencies. That's where the question of how much to spend on administrative support comes in, and the linking of ex ante and ex post evaluation is a big issue.

¾  -(2025)  

+-

    The Chair: Thank you.

    Thank you very much, Dr. Wood. I really appreciate this, and on behalf of the whole committee, I want to thank you for taking the time to be with us today and to enlighten us on what's going on in Australia, helping us to find our way as we continue with this study. So I'd like to thank you very much. Have a great day. I hope one of these days you'll be able to come and visit Canada, or we'll visit Australia, and talk a little more about science and technology. Thank you very much.

    Before we go, we have one item of business I'd like to bring up. As you know, our vice-chair has moved to another position, and it's time to elect a vice-chair from the opposition, to make sure the executive is strong, with its three executive members.

    I would ask for a motion for that position.

-

    Mrs. Cheryl Gallant: I would like to move that James Rajotte be the vice-chair.

    The Chair: Mr. Marcil seconds that.

    (Motion agreed to)

    The Chair: Congratulations, James. It's good to have you aboard.

    Mr. James Rajotte: Thank you, Mr. Chair.

    The Chair: The meeting is now adjourned. Thank you very much for coming out tonight. It was better than getting up at 4 a.m.