PACP Committee Report

APPENDIX I: STUDY METHODOLOGY

The central objective of the study was to determine the extent to which government responses accept the recommendations of the PAC and commit to the recommended actions. A secondary objective was to estimate the degree to which departments actually carry out the actions to which they have committed, using publicly available information so as to avoid the expense and complexity of detailed follow-up audits of the kind conducted by the Auditor General.

To permit timely completion of the review, a sample of Committee recommendations was selected. The sample consisted of recommendations, made during the 37th and 38th Parliaments, that required the results of recommended departmental actions to be reported publicly in the Departmental Performance Report (DPR) of the appropriate government department. The survey covered two full sessions of the 37th Parliament and one full session of the 38th Parliament, a period extending from January 31, 2001 to November 29, 2005, just under four years and ten months.

The scope of the study was limited to recommendations that required the federal government to report its corrective actions in accountability documents such as Reports on Plans and Priorities (RPPs), Departmental Performance Reports (DPRs), and Annual Reports. This restricted the sample to the government responses to 110 recommendations found in 34 committee reports. The sample includes a significant number of recommendations, made over a substantial period that spans several different governments; it thus avoids the distortions that could arise in a shorter period during which a single government held power.

The restriction of the sample to recommendations calling for the public reporting of results in DPRs (or, in some cases, RPPs) provided a readily accessible basis on which to confirm whether or not action had been taken. It also reflects a consistent theme of Committee recommendations over the years: the need for greater transparency in the workings of government and more effective use of the DPRs, which are the key public reporting documents of government departments. Because of the special emphasis the Committee has placed on reporting, a review of compliance with reporting recommendations may shed light specifically on departmental responsiveness to Committee recommendations, as distinct from Auditor General recommendations that may be similar in other respects.

The selection of a sample of recommendations was also advantageous because the volume of issues considered by the Committee normally results in a large number of recommendations each year. During the period surveyed, the Standing Committee on Public Accounts tabled a total of 77 reports, incorporating an overall total of 535 written recommendations. Any attempt to explore the implementation of all of these recommendations, over the multi-year period needed to ensure that short-term patterns would not distort the results, would have excessively delayed the completion of this review.
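As an illustrative aside, the coverage implied by these figures can be checked with simple arithmetic. The following Python sketch is ours, not part of the original study; only the counts are taken from the text above.

```python
# Coverage of the sample, using the counts reported in this appendix.
total_reports = 77             # committee reports tabled during the period surveyed
total_recommendations = 535    # written recommendations in those reports
sampled_reports = 34           # reports containing sampled recommendations
sampled_recommendations = 110  # recommendations selected for the sample

print(f"Share of recommendations sampled: {sampled_recommendations / total_recommendations:.1%}")  # 20.6%
print(f"Share of reports represented:     {sampled_reports / total_reports:.1%}")                  # 44.2%
```

The sample therefore covers roughly one recommendation in five, drawn from just under half of the reports tabled during the period.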

Government responses to each recommendation in the sample were classified on the basis set out in Table 1, immediately below:

TABLE 1: HOW WE CLASSIFIED GOVERNMENT RESPONSES

ITEM: RESPONSES

Rejected

Key test: Expresses disagreement with major elements of the recommendation.

- may explicitly state disagreement,

- may provide grounds (e.g., "we do not think the recommendation is practical"),

- may state a preferred alternative course of action.

Status Quo

Key test: Does not commit the government to the new actions called for in the recommendation.

- may rely on vaguely affirmative language instead of clear commitments,

- may provide details on what a department is already doing, in lieu of action commitments.

Accepted

Key test: Makes the major action commitments called for in the recommendation.

- may alter recommended action time-frames or other non-central elements.
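The scheme in Table 1 amounts to a three-way classification applied by an analyst. A minimal Python sketch of how the categories might be represented for tallying purposes follows; the type and function names are ours, introduced for illustration only.

```python
from collections import Counter
from enum import Enum

class ResponseClass(Enum):
    """The three classifications of government responses from Table 1."""
    REJECTED = "Rejected"      # disagrees with major elements of the recommendation
    STATUS_QUO = "Status Quo"  # no commitment to the new actions called for
    ACCEPTED = "Accepted"      # makes the major action commitments called for

def tally_responses(classifications: list[ResponseClass]) -> Counter:
    """Count how many sampled responses fall into each class."""
    return Counter(c.value for c in classifications)
```

The key tests themselves remain a matter of analyst judgement; the sketch captures only the categories and the counting step.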

To determine whether the reporting requirements set out in each recommendation had been met, the report referenced in the recommendation was reviewed against the criteria set out in Table 2 below (where a government response specified that reporting would be available elsewhere, that alternative source was also examined).

TABLE 2: HOW WE CLASSIFIED GOVERNMENT REPORTING

ITEM: REPORTING

Confirmation

Key test: Does the DPR (or other report) provide all or most of the information requested in the recommendation?

- the information may be presented as a response to the recommendation,

- the presentation must be sufficiently clear to be identifiable as a response,

- the level of detail must be sufficient to respond to the recommendation.

No Confirmation

Key test: Report contains little or none of the information requested.

- report may rely on highly general statements that do not provide enough detail to confirm that a recommendation is being implemented.

Not Applicable

Key test: Was the recommendation rejected (precluding reporting of results)?
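Table 2 embeds one mechanical rule: a rejected recommendation precludes any reporting of results, so it is classified "Not Applicable" before the information tests are applied. A minimal sketch of that logic follows, again with names of our own choosing; whether a report provides "all or most" of the requested information remains an analyst judgement, reduced here to a boolean for illustration.

```python
from enum import Enum

class ReportingClass(Enum):
    """The three classifications of government reporting from Table 2."""
    CONFIRMATION = "Confirmation"
    NO_CONFIRMATION = "No Confirmation"
    NOT_APPLICABLE = "Not Applicable"

def classify_reporting(response_rejected: bool, information_sufficient: bool) -> ReportingClass:
    """Apply the Table 2 tests in order: rejection precludes reporting of results."""
    if response_rejected:
        return ReportingClass.NOT_APPLICABLE
    if information_sufficient:
        return ReportingClass.CONFIRMATION  # all or most requested information is present
    return ReportingClass.NO_CONFIRMATION   # little or none of the requested information
```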

While most of the government responses and relevant sections of performance reports proved easy to classify within the above frameworks, imprecision in the language of many responses posed a continuing challenge, notably affirmative comments on recommendations that fell short of explicit agreement with their detailed content. Several of the classification boundaries also require the exercise of judgement even where the response being classified is reasonably precise, leaving room for discussion about individual classifications. In particular, the boundary between partial acceptance of a recommendation sufficient to warrant classification as "accepted" and the less complete, or merely less clear, level of acceptance that would suggest "status quo" is judgemental. The level and detail of reporting that distinguishes a "confirmation" classification from a "no confirmation" classification is similarly judgemental, although the tests outlined above limit arbitrariness.

The study addressed this "grey area" issue through independent classification by two analysts, followed by cross-checking for consistency in the application of the criteria and reconciliation of differences through joint discussion of the evidence. The number of "grey area" classification issues identified was less than 15% of the total, indicating that the general findings of the study may be accepted with a high level of confidence, even though certain individual classifications may remain problematic.
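In quantitative terms, the cross-checking step reduces to identifying the items on which the two independent classifications differ and measuring how large that set is. A minimal sketch follows; the function names are ours, and the study itself reports only that fewer than 15% of items raised "grey area" issues.

```python
def grey_areas(analyst_a: list[str], analyst_b: list[str]) -> list[int]:
    """Indices of items where the two independent classifications differ;
    these are the items reconciled by joint discussion of the evidence."""
    assert len(analyst_a) == len(analyst_b), "both analysts classify every item"
    return [i for i, (a, b) in enumerate(zip(analyst_a, analyst_b)) if a != b]

def disagreement_rate(analyst_a: list[str], analyst_b: list[str]) -> float:
    """Share of items requiring reconciliation (under 15% in this study)."""
    return len(grey_areas(analyst_a, analyst_b)) / len(analyst_a)
```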
