Ways of developing quality indicators in cancer care — opportunities, challenges and limitations for the Polish healthcare system

REVIEW ARTICLE

Karolina Piekarska1, Rafał Zyśk2, Maciej Krzakowski3, Jan Walewski4, Barbara Polityńska5,6, Marek Z. Wojtukiewicz1,7
1Department of Oncology, Medical University of Białystok, Białystok, Poland
2Institute for Healthcare Management, Faculty of Medicine, The Łazarski University of Warsaw, Warsaw, Poland
3Department of Lung and Chest Cancer, National Research Institute of Oncology, Warsaw, Poland
4Department of Lymphoid Malignancy, National Research Institute of Oncology, Warsaw, Poland
5Department of Psychology and Philosophy, Medical University of Białystok, Białystok, Poland
6Robinson College, University of Cambridge, Cambridge CB3 9AN, UK
7Department of Clinical Oncology, Comprehensive Cancer Center, Białystok, Poland

ABSTRACT

The aim of this article is to present possible ways of developing quality indicators in oncological care in the Polish healthcare system based on practice from other countries.

The development of indicators in the healthcare systems presented in this paper is a process following several stages. Initially, at the planning stage, the clinical area to be assessed is selected and teams responsible for developing the indicators are organized. This is followed by the development stage in which the measurement team prioritizes and selects clinical indicators on the basis of documentation and knowledge gained from the scientific literature. After selecting the target clinical indicators, the specifications for each measure are operationalized, along with inclusion and exclusion criteria for the target population, and data sources are identified. In each of the foreign healthcare systems analyzed, determination of the final indicators was preceded by extensive clinical and social consultations.

The use of clinical indicators to assess quality is an important approach in the process of assessing the quality of cancer care. Thanks to the introduction of quality indicators, participants in the healthcare system (regulators, clinicians, patients, managers of medical institutions) can obtain reliable information that is necessary for defining priorities, modifying methods of determining benefits, benchmarking, making informed choices, and improving the quality of oncological care.

Key words: quality indicators, quality assessment, quality measurement, quality improvement

Oncol Clin Pract 2022; 18, 1: 40–60

Address for correspondence:

prof. dr hab. n. med.

Marek Z. Wojtukiewicz

Department of Oncology,

Medical University of Białystok

ul. Ogrodowa 12, 15–027 Białystok, Poland

e-mail: mzwojtukiewicz@gmail.com

Oncology in Clinical Practice

DOI: 10.5603/OCP.2021.0044

Copyright © 2022 Via Medica

ISSN 2450–1654

e-ISSN 2450–6478

Received: 02.12.2021 Accepted: 02.12.2021 Early publication date: 05.01.2022

This article is available in open access under Creative Common Attribution-Non-Commercial-No Derivatives 4.0 International (CC BY-NC-ND 4.0) license, allowing to download articles and share them with others as long as they credit the authors and the publisher, but without permission to change them in any way or use them commercially.

Introduction

The issue of quality in the Polish healthcare system has been gaining importance in recent years for patient communities, public healthcare providers, and those involved in making healthcare decisions. Given the ever-increasing cost of cancer care and the limited availability of many costly treatment options for patients, a more rigorous approach to quality assessment is required [1]. It is worth noting that even though legislation on different kinds of healthcare quality indicators (QI) exists in Poland, the concept of quality has not been substantively defined in any legislation. Assessment of quality in healthcare with the aid of quality indicators should be ensured by the law on quality in healthcare and patient safety, which is currently being drafted and is due to be published at the beginning of 2022. Of some concern is the fact that in the draft submitted for public consultation, the definition of the term “quality” has once more been overlooked, while quality indicators themselves are referred to in only one clause of the Act (Article 3). Quality in healthcare is defined and measured by indicators relating to three main areas: 1) clinical; 2) consumer; and 3) healthcare management considerations [2]. Thus, as long as the quality of healthcare is not described within these specific domains, it remains an undefined concept.

The current article describes proposals for developing indicators for the quality of oncological care in Poland (from prophylaxis, through diagnostics, and therapy to broadly understood post-treatment care) on the basis of practice from other countries and considering challenges and limitations related to the process of assessing the quality of oncological care.

The process of developing quality indicators

The concept of quality in healthcare can be understood from a wide perspective, with the particular focus depending on the point of view of the assessor. Patients focus on their relationships with staff, examinations, test results, and treatment outcomes, as well as the atmosphere and the environment in which they find themselves. For healthcare workers, on the other hand, quality is mainly related to the reliable provision of services in accordance with accepted standards and to the availability of state-of-the-art diagnostic and treatment instruments [3]. The contemporary approach to the issue of quality in healthcare is based on the model proposed by Avedis Donabedian in the 1960s, in which a successful outcome in terms of patient recovery, restoration of function, or survival is equated with high-quality care. Presenting measurable clinical outcomes for a treatment procedure makes it possible to compare therapeutic effects among different groups of patients and the results achieved in different centers. It is also useful to present and compare the social effects (benefits) of improving quality of life and levels of patient satisfaction. Donabedian distinguished three dimensions of the quality of medical services, which together constitute the quality of a given service; a service cannot be considered of high quality if errors or shortcomings are identified in any of the three categories.

Quality of the structure includes the structure of the organization, the number of medical personnel and their qualifications, equipment, medical apparatus, and buildings.
Quality of the process refers to the range of actions undertaken (or omitted) during the diagnosis, treatment, nursing, and rehabilitation of the patient, including waiting times for procedures and examination results. Donabedian emphasizes that the best results are achieved when treatment follows a systematic course, according to tried and tested principles.
Quality of the results includes the degree of improvement in the patients’ health and their satisfaction with the healthcare services provided. This comprises indicators such as death rates, morbidity, adverse events, etc. [4].

Donabedian also pointed out the need to assess the course of the care process and the consequent potential of the information gained for improving care management. Analysis of diagnostic and therapeutic processes and comparisons between centers of interest contribute to a better assessment of the quality of healthcare and further improvement of results. Elements that complement contemporary understanding and practice of quality assurance in healthcare include the constantly growing social awareness of responsibility for one’s own health (including the influence of lifestyle on health), alternative options for diagnostic and therapeutic procedures, the need to publish information on the needs and rights of patients, costs and effectiveness of procedures, and treatment outcomes. Quality assessment is performed using various types of indicators and criteria relating to specific standards [5, 6].
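To illustrate how Donabedian's three dimensions might be applied when cataloguing indicators, the sketch below tags a few hypothetical indicators with the dimension they measure; the indicator names and descriptions are illustrative only and do not come from any of the programs discussed in this article.

```python
from dataclasses import dataclass
from enum import Enum

class DonabedianDimension(Enum):
    """The three dimensions of quality distinguished by Donabedian."""
    STRUCTURE = "structure"  # organization, staffing, qualifications, equipment, buildings
    PROCESS = "process"      # actions undertaken (or omitted) during diagnosis and treatment
    OUTCOME = "outcome"      # health improvement, satisfaction, mortality, adverse events

@dataclass
class QualityIndicator:
    """Illustrative record for cataloguing an indicator by Donabedian dimension."""
    name: str
    dimension: DonabedianDimension
    description: str

# Hypothetical entries, for illustration only
catalogue = [
    QualityIndicator("Full-time oncologists per 100 beds",
                     DonabedianDimension.STRUCTURE, "availability of qualified personnel"),
    QualityIndicator("Waiting time from referral to first treatment",
                     DonabedianDimension.PROCESS, "timeliness of the diagnostic-therapeutic pathway"),
    QualityIndicator("30-day postoperative mortality",
                     DonabedianDimension.OUTCOME, "safety of surgical treatment"),
]

for qi in catalogue:
    print(f"[{qi.dimension.value:9}] {qi.name}: {qi.description}")
```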

Material and methods

The article presents a cross-sectional analysis of approaches and experiences in developing quality indicators for the evaluation of oncological care in selected countries. For this purpose, both scientific reports and regulatory documents relating to quality in healthcare were reviewed.

This publication presents the implementation of a quality assessment system in oncology in six selected countries: Australia, Germany, Scotland, the USA, Canada (Ontario), and Japan, which appear to have the greatest expertise in assessing the quality of cancer care.

Results

Australia

The Australian Council on Healthcare Standards (ACHS) sponsored an expert-led, consensus-based, four-step process, based on the modified Delphi method, to define a set of clinical indicators to assess the quality of oncology services provided in Australia. This process was carried out in response to requests from service providers applying for accreditation. The multidisciplinary process steering committee was composed of clinical experts and patient representatives. An additional group of five participants formed the stakeholder group that debated the final set of indicators [7].

The process of developing the indicators was carried out in 4 stages:

Stage I Establishment of the Steering Committee

A steering committee of 16 key experts in cancer treatment, policy, nursing, outpatient care, radio-oncology, and patients representing a wide variety of experiences and perspectives was established. During a one-day meeting, the committee identified directions, terms, and potential areas for the introduction of quality indicators.

Stage II Literature review and search for indicators

A literature review and systematic search of the websites of international oncology societies were carried out to identify currently used indicators. The steering committee then adopted a systematic work plan for analyzing the list of identified potential indicators, evaluating and prioritizing each indicator in an online survey that took about 3 hours to complete. Individual indicators were ranked from 0 to 5 (lowest to highest priority) on each of two criteria: ease of access and data collection, and clinical relevance (including potential feasibility for improving quality and evaluating best practice performance). On this basis, a priority list of potential clinical indicators was developed.
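The sketch below illustrates how such two-criterion ratings could be turned into a priority list. The ACHS publication does not specify how the two criteria were aggregated, so the additive scoring used here, as well as the indicator names and ratings, are assumptions for illustration.

```python
from statistics import mean

# Each candidate indicator is rated 0-5 (lowest to highest priority) by each
# committee member on two criteria: ease of access/data collection and
# clinical relevance. The ratings and the additive aggregation are illustrative.
ratings = {
    # indicator name: list of (data_access, clinical_relevance) ratings
    "Proportion of new cases discussed at an MDT meeting": [(4, 5), (5, 5), (3, 4)],
    "Time from diagnosis to first treatment":              [(3, 5), (4, 4), (4, 5)],
    "Documentation of cancer stage at diagnosis":          [(5, 3), (4, 3), (5, 4)],
}

def priority_score(member_ratings):
    data_access = mean(r[0] for r in member_ratings)
    relevance = mean(r[1] for r in member_ratings)
    return data_access + relevance  # simple additive aggregation (assumption)

priority_list = sorted(ratings, key=lambda name: priority_score(ratings[name]), reverse=True)
for name in priority_list:
    print(f"{priority_score(ratings[name]):.1f}  {name}")
```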

Stage III Discussion of results

In this step, each of the highest rated indicators obtained in stage 2 was discussed in detail before being accepted, rejected, or modified. The nomenclature and measurement method were optimized for each selected indicator. The results were then discussed during a meeting with a reference group of 20 stakeholders, including health policy experts, key leaders from various clinical specialties, major providers of cancer services, nurses’ representatives, representatives of municipal and rural services, pharmacists, statisticians, indicator specialists and representatives of the public.

Stage IV Development of coding rules for new indicators

At this stage, the steering committee, in cooperation with the ACHS, oversaw the development of an oncology care guide to facilitate the clinical coding of new indicators. The guide was approved by the Clinical Oncology Society of Australia (COSA), then ratified by the ACHS Board of Directors and published on the ACHS website [8].

During the process of development, the main reason for rejecting certain indicators was the concern that collecting data would be too burdensome for the participants of the system. This was because some information was not recorded at all or would probably have to be collected from a range of different sources. Despite widespread recognition that increasing access to digital medical data should facilitate its use in informing quality of care considerations, many Australian centers still have relatively simple electronic health record systems [7, 9].

Consequently, preference was given to indicators derived from data that are commonly and routinely collected in healthcare facility systems (waiting list information, electronic medical records, financial systems, etc.). On the other hand, it is recognized that the information necessary to calculate clinical indicators may prompt organizations to consider adding or redesigning their data collection to achieve full compliance.

In summary, this was the first attempt to create a comprehensive set of high-quality clinical indicators to measure the quality of care at each stage of cancer progression in patients in Australia. An expert group and a consensus-based methodology with broad stakeholder representation should ensure that this approach is easy to use and productive in obtaining quality baseline data for monitoring, evaluating, and comparing oncology care provided in different centers. Clinical indicators should be evaluated on a regular basis with the possibility of adding new ones or modifying existing ones, both in response to the experiences of reporting organizations and to adapt to the changing needs of cancer care services. Examples of Australian quality indicators in cancer care are shown in Table 1.

Table 1. Examples of Australian Quality Indicators in Cancer Care

Area

Name of indicator

Specifications

Screening

Breast screening rates (one of six indicators)

Numerator: Number of females in the target age range who received a mammogram through the national screening program over a 24-month period

Denominator: Average number of female residents in the target age range during the 2-year reporting period

Diagnosis

Cancer incidence

Cancer incidence indicates the number of new cancers diagnosed during a specified period (usually one year). The major source of cancer incidence data is the Australian Cancer Database (ACD) which contains records of all primary, malignant cancers (except basal cell and squamous cell carcinomas of the skin) diagnosed in Australia since 1982

All Australian states and territories have legislation that makes cancer a notifiable disease. Various designated bodies, i.e., institutions such as hospitals, pathology laboratories, and registries of births, deaths, and marriages, are required to report cancer cases and deaths to their jurisdictional cancer registries

Each registry supplies incidence data annually to the Australian Institute of Health and Welfare (AIHW) under an agreement between the registries and the AIHW. These data are compiled into the ACD, the only repository of national cancer incidence data

Distribution of cancer stage

The distribution of cancer stage at diagnosis for the top five incidence cancers (breast [female], colorectal, lung, melanoma, and prostate cancer)

Numerator: Incident cancer cases for a selected Registry-Derived Stage (RD-Stage) at diagnosis value (stage 1, stage 2, stage 3, stage 4, or unknown) for a selected cancer type.

Denominator: All eligible RD-Stage records that could be matched to an incident cancer case in the ACD for the relevant cancer type. The denominator includes cases with an "Unknown" stage at diagnosis for which the registry did not have sufficient information to define the stage

Capture of stage data

The unadjusted crude proportion of cancer cases for which stage data are available for cases with a principal diagnosis. The same top 5 incidence cancers

Numerator: Incident cancer cases for a selected RD-Stage at diagnosis value (staged or unknown) for a selected cancer type

Denominator: All eligible RD-Stage records that could be matched to an incident cancer case in the ACD for the relevant cancer type. The denominator includes cases with an "Unknown" stage at diagnosis for which the registry did not have sufficient information to define the stage

Treatment

Radiotherapy treatment activity

This measure shows the number of radiotherapy services processed by Medicare Australia over time and by population group

Unit of analysis:

The Medicare Benefits Schedule (MBS) summarized data show the number of radiotherapy “services” for which a reimbursement is claimed under the Medicare Benefits Schedule and processed by the Department of Human Services. These types of data will allow for linkage of cancer incidence and radiotherapy services provided in later phases of Stage, Treatment, and Recurrence (STaR) analyses and reporting

Note: An additional source of data regarding radiotherapy treatment in Australia is the National Radiotherapy Waiting Times Database (NRWTD). Data are provided to the AIHW from jurisdictional health authorities and private radiotherapy providers. The unit of measure for the NRWTD data is the number of radiotherapy “courses” which is defined as a “series of one or more external beam radiotherapy treatments”

Systemic anti-cancer therapy treatment activity

This measure shows the number of people receiving at least one of the cancer-related systemic therapies and a small number of supportive treatments in Australia in any one year over the period from 2012 to 2016

Unit of analysis:

The number of people who were dispensed systemic anti-cancer related therapeutic items in a given individual year and for whom a reimbursement claim was processed under the Australian Government’s Pharmaceutical Benefits Scheme (PBS)

To provide the context of the use of these therapies within the Australian population, the number of people who accessed these therapies was adjusted for the increase in population over the period. Data are expressed as the number of people who accessed these therapies per 100 000 population for the relevant year (Estimated Resident Population data are sourced from the Australian Bureau of Statistics)

Treatment

Surgical treatment activity (top 5 incident cancers)

This measure focuses initially on the five highest incidence cancers in Australia: prostate, breast, colorectal, melanoma, and lung. Initial examination of procedure codes by principal diagnosis indicated a degree of overlap for treatment procedures recorded for colon and rectal cancers. To avoid potential confusion in reporting the data, these cancers have been analyzed as a group (i.e. colorectal cancers). It is anticipated that for later data analyses, where a confirmed incidence for these two cancers is available, separate data will be presented for colon and rectal cancers

Unit of analysis:

The number of hospital separations where the principal diagnosis for a relevant cancer was recorded and where there was at least one cancer-related procedure.

Note that the unit of analysis is for hospital separations, not individual patients. An individual who had multiple separations in a given year will have a record for each of these separations. Therefore, an individual patient may be counted more than once in these data

Multidisciplinary care

The proportion of new cancer cases discussed at an MDT meeting

Numerator: The number of new cases discussed at an MDT meeting in the reporting period

Denominator: The number of new cases referred to the cancer service in the reporting period
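As a minimal illustration of how a numerator/denominator specification such as those in Table 1 translates into a computed rate, the sketch below calculates the proportion of new cancer cases discussed at a multidisciplinary team (MDT) meeting from record-level data; the field names and records are hypothetical.

```python
# Minimal sketch: computing the "proportion of new cancer cases discussed at
# an MDT meeting" indicator from Table 1. Field names and records are hypothetical.
new_cases = [
    {"case_id": 1, "referred_to_service": True, "discussed_at_mdt": True},
    {"case_id": 2, "referred_to_service": True, "discussed_at_mdt": False},
    {"case_id": 3, "referred_to_service": True, "discussed_at_mdt": True},
]

denominator = sum(1 for c in new_cases if c["referred_to_service"])
numerator = sum(1 for c in new_cases if c["referred_to_service"] and c["discussed_at_mdt"])

rate = numerator / denominator if denominator else None
print(f"MDT discussion rate: {numerator}/{denominator} = {rate:.1%}")
```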

Germany

The process of developing quality indicators forms part of the German Guideline Program in Oncology (GGPO) jointly launched by the German Cancer Society (DKG), German Cancer Aid (DKH), and the Association of Scientific Medical Societies (AWMF) in 2008. This program is designed to support the development, implementation and evaluation of evidence-based clinical practice guidelines. On the basis of these guidelines, indicators can be developed to evaluate the structure, process and outcome of cancer care, which provide a measure of the quality of care and adherence to recommendations. Indicators are tools for internal quality management in medical facilities and benchmarking with other institutions. Quality indicators are generally developed for areas where the authors of the guidelines and other healthcare professionals have identified potential for quality improvement [11].

The development of quality indicators was carried out in 5 stages:

Stage I The composition of a representative working group on quality indicators (QIs) was determined by the Guideline Development Group (GDG)

The development of the QIs was discussed during the inaugural meeting of the GDG. The working group had a maximum of 14 members, and the requirement was that it should be interdisciplinary, composed of people involved in the relevant areas addressed by the guidelines. The bodies routinely involved in this process include the committees of certified centers, cancer clinical registries, and other institutions in the area of quality management. The process is supervised and supported, with regard to methodology, by a representative of the GGPO office and a representative of the AWMF.

Stage II Preparation of quality indicators

The goal of the first meeting of the Quality Indicators Working Group (QIWG) was to prepare a list of fundamental indicators based on clinical guidelines. It was emphasized that only strong recommendations should be translated into potential indicators and that these should be as specific as possible. Only recommendations with an “A” rating, those considered to have the highest priority for implementation, were taken into consideration, as it was assumed that actions based on these recommendations should be of clear benefit to most patients. Consequently, only strong recommendations were considered suited to the development of indicators, irrespective of whether they were evidence-based or consensual. To prepare a preliminary list of indicators, available databases containing scientific reports, websites of governmental institutions and scientific societies were searched.

Stage III The first meeting of the working group on quality indicators

The first meeting of the QIWG aimed to complete a preliminary selection from the potential indicators identified in Stage II. The QIWG consensus was based on the following exclusion criteria:

A1: the recommendations cannot be operationalized (this generally means they are not measurable)
A2: the inability to improve the quality of healthcare provision
A3: problems with the interpretation of the indicator description by the QIWG and/or the expenditure associated with the preparation of documentation being too high in relation to the benefits
A4: other (e.g. duplication of indicators from two different guidelines).

The exclusion criteria are derived from the four basic requirements for quality indicators, defined in the German QUALIFY assessment instrument [12]:

The significance of the quality captured by the quality indicator (category “accuracy”);
The clarity of the definition of the indicator and its application (category “scientific soundness”);
Comprehensibility and interpretability (category “practicality”);
Costs incurred in collecting data (category “practicality”);
The indicators are accepted on the basis of a vote in accordance with the rules for consensus recommendations (AWMF Regelwerk), which means that a minimum of 75% of the votes is required [13]; the sketch below illustrates this threshold.
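A minimal sketch of the voting rule described above; the vote counts are purely illustrative.

```python
def accepted(votes_for: int, votes_cast: int, threshold: float = 0.75) -> bool:
    """An indicator is accepted when at least 75% of the votes cast are in favour,
    as described for the AWMF consensus procedure; the numbers used below are illustrative."""
    return votes_cast > 0 and votes_for / votes_cast >= threshold

print(accepted(11, 14))  # 78.6% of votes in favour -> True
print(accepted(10, 14))  # 71.4% of votes in favour -> False
```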

Stage IV Written evaluation of potential quality indicators

The selected set of potential indicators was evaluated using a standard scoring sheet that included the criteria listed above. Two additional criteria (“risk adjustment” and “implementation barriers”) were also commented on by the evaluators.

Stage V The second meeting of the working group on quality indicators

The subject of the second QIWG meeting was to discuss the written assessments of the indicators and to select the final set of indicators. For final approval, the consent of a minimum of 75% of votes was required.

The entire process of developing quality indicators took 6 to 12 weeks and was fully documented. The final set of indicators was submitted to the coordinators of oncology registries and certification committees for their implementation [14–17].

Figure 1 presents a summary of the process of developing quality indicators. Examples of German quality indicators in breast cancer care are shown in Table 2.

Figure 1. The development process for quality indicators [11]; GDG, Guideline Development Group; GGPO, German Guideline Program in Oncology; QI, quality indicators; WG, working group
Table 2. Examples of German Quality Indicators in breast cancer care

Area

Name of indicator

Specifications

Treatment

Post-operative case review

Numerator: Primary cases of denominator presented at the tumor conference

Denominator: Surgical primary cases

Rate: Target value ≥ 95%

Pre-therapeutic case discussion

Numerator: Primary cases of denominator presented at the pre-therapeutic conference

Denominator: Primary cases

Rate: Target value ≥ 40%

Discussions of cases involving local recurrence/metastases

Numerator: Patients of the denominator presented at the tumor board

Denominator: Patients with first local recurrence and/or first remote metastasis (without primary M1 pat.)

Rate: Mandatory statement of reasons* < 70%

Radiotherapy after BCT in the case of invasive mammary carcinoma (GL QI 8)

Numerator: Primary cases of the denominator in which radiotherapy was recommended

Denominator: Primary cases with invasive mammary carcinoma and BCS (without primary M1 pat.)

Rate: Target value ≥ 90%

Radiotherapy after BCT in the case of DCIS

Numerator: Primary cases of the denominator in which radiotherapy was recommended

Denominator: Primary cases with DCIS and BCT

Rate: Mandatory statement of reasons* < 80%

Chemotherapy in the case of rec. pos. and nodal pos. result

Numerator: Primary cases of the denominator in which chemotherapy was recommended

Denominator: Primary cases with invasive mammary carcinoma with rec. pos. and a nodal positive result (without primary M1 pat.)

Rate: Target value ≥ 60%

Endocrine therapy in the case of steroid rec. positive result

Numerator: Primary cases of the denominator in which endocrine therapy was recommended

Denominator: Primary cases with invasive mammary carcinoma in the case of steroid rec. a positive result (without primary M1 pat.)

Rate: Target value ≥ 80%

Trastuzumab therapy over 1 year in the case of HER-2 pos. result

Numerator: Primary cases of the denominator for which trastuzumab therapy over 1 year was recommended

Denominator: Primary cases with invasive mammary carcinoma with HER-2 positive result (without primary M1 pat.)

Rate: Target value ≥ 95%

Endocrine therapy for metastasis

Numerator: Patients of the denominator who were started on endocrine-based therapy in the metastasized stage as first-line therapy

Denominator: Patients with steroid rec. pos. and HER2-negative inv. mammary carcinoma with 1st Remote metastasis (incl. primary M1 pat.)

Rate: Target value ≥ 95%

Psycho-oncological care (Consultation > 25 min)

Numerator: Patients who received psycho-oncological care in an inpatient or outpatient setting (duration of consultation > 25 Min.)

Denominator: Primary case patients + patients with 1st local recurrence and/or remote metastasis (without primary M1 pat as they are already included in primary cases)

Rate: Mandatory statement of reasons* 95%

Counselling social services

Numerator: Patients who received counselling by social services in an inpatient or outpatient setting

Denominator: Primary case patients + patients with 1st local recurrence and/or with 1st remote metastasis (without primary M1 pat as they are already included in primary cases)

Rate: Mandatory statement of reasons* < 30%

Share of study patients

Numerator: Patients who were included in a study with an ethical vote

Denominator: Primary cases

Rate: Target value ≥ 5%

Pre-therapeutic histological confirmation

Numerator: Primary cases of the denominator with pre-therapeutic histological diagnosis confirmation by punch or vacuum-assisted biopsy

Denominator: Primary cases with initial surgery and histology of invasive mammary carcinoma or DCIS

Rate: Target value ≥ 90%

Treatment

Primary cases mammary carcinoma

Number of primary cases

Target value ≥ 100

Number of surgical procedures for R0 resection for BCT

Numerator: Primary cases of the denominator with only one surgical procedure up to final surgical condition BCS

Denominator: Surgical primary cases with BCS and R0

Rate: Mandatory statement of reasons* < 70%

Breast-conserving procedure for pT1

Numerator: Number of BCT (final surgical state) with pT1 (incl. (y)pT1)

Denominator: Surgical primary cases with pT1 (incl. (y)pT1)

Rate: Target value 70–90%

Mastectomies

Numerator: Mastectomies (final surgical stage)

Denominator: Surgical primary cases

Rate: Mandatory statement of reasons* 40%

Lymph node removal in the case of DCIS

Numerator: Primary cases with axillary lymph node removal (primary axillary lymph node removal or sentinel lymph node removal)

Denominator: Primary cases DCIS and completed surgical therapy and BCT

Rate: Target value ≤ 5%

Determination of nodal status in case of invasive mammary carcinoma

Numerator: Primary cases with invasive mammary carcinoma for which the nodal status has been determined

Denominator: Surgical primary cases with invasive mammary carcinoma (without primary M1)

Rate: Target value ≥ 95%

Only sentinel lymphonodectomy (SLNE) for pN0 (women)

Numerator: Female primary cases with sole sentinel lymph node removal (SNB)

Denominator: Female primary cases of invasive mammary carcinoma and negative pN staging and without preoperative tumor-specific therapy

Rate: Target value ≥ 80%

Intraoperative sample radiography/ sonography

Numerator: Operations with intraoperative preparation X-ray or with intraoperative preparation sonography

Denominator: Surgical procedures with preoperative wire marking guided by mammography or sonography

Rate: Target value ≥ 95%

Revision surgeries

Numerator: Revision surgery due to postoperative complications (only operated primary cases)

Denominator: Surgical primary cases

Rate: Target value ≤ 5%

Therapy of the axillary lymphatic drainage for pN1mi

Numerator: Primary cases with therapy (axilla dissection or radiotherapy) of the axillary lymph drainage areas

Denominator: Primary cases with invasive breast carcinoma, pN1mi

Rate: Target value 5%

Scotland

The Better Cancer Care plan, developed in 2008, included a commitment to prepare a program that would define how indicators for the quality of oncology services would be developed. To achieve this, the Scottish Cancer Taskforce established the National Cancer Quality Steering Group (NCQSG) which is responsible for:

development of small sets of quality performance indicators (QPIs) nationwide (approximately 10–15) which are specific to a given tumor type,
overseeing the implementation of the national management framework, which is the basis for reporting results concerning national QPIs,
ensuring the sustainability of the work of this program.

QPIs have been developed in collaboration with three Regional Cancer Networks (NOSCAN, SCAN, WOSCAN), the Information Services Division (ISD), and Healthcare Improvement Scotland. It is assumed that the QPIs will be regularly reviewed to reflect advances in scientific knowledge and changes in clinical practice. The overarching goal of the work program on the quality of cancer care was to ensure that activities at the level of the National Health Service (NHS) board were focused on the areas most important to improving the survival and quality of life of patients while ensuring safe, effective, and patient-centered oncology care [19].

The process of developing Scottish Quality Indicators is illustrated by the example of indicators for breast cancer. The stages of the process are described in Annex 1 to the document ‘Breast Cancer Clinical Quality Performance Indicators v4.0’.

Stage I Preparatory work and determining the scope of the undertaking

Since NHS Quality Improvement Scotland Clinical Standards for Breast Cancer had been used nationally since 2001, it was agreed that instead of undertaking a lengthy QPI development process, extensive literature searches and discussion with clinicians should be conducted as part of the NHS QIS review (in 2008). These standards were used as the basis for the development of the QPIs. Preparatory work included independent peer review by development group members and an evaluation of existing NHS QIS Breast Cancer Standards against agreed criteria. Potential areas for the development of new quality indicators that were focused on results were also identified. The results of the above work were used in discussions by development groups in subsequent stages of the process.

Stage II Development of indicators

The Breast Cancer QPI Development Group defined evidence-based, measurable indicators with a clear focus on improving the quality and outcomes of the care provided. The group developed the QPIs on the basis of existing NHS QIS clinical standards. QPI projects were assessed by the Breast Cancer QPI Development Group according to three criteria:

General significance: does the indicator reflect measures of clinical significance, which may have a meaningful influence on the quality and results of the care delivered?
Based on scientific evidence: is the indicator based on high-quality scientific evidence?
Measurability: is the indicator measurable, i.e. are there clear criteria regarding the measurement of variables?

Stage III The consultation process

In 2011, extensive clinical and social consultation was undertaken as part of the development work for the quality indicators, whereby the QPIs for breast cancer were made available on the Scottish Government website, along with a pilot test for collecting a minimum core dataset and specifications for measurability. During the consultation period, clinicians and stakeholders from across NHS Scotland, breast cancer patients, and other interested parties had the opportunity to influence the development of the QPIs for breast cancer.

QPIs are designed to be clear and measurable, based on high-quality clinical evidence; they take into account other recognized standards and guidelines.

Each QPI has a short title that is used in reports, as well as a more complete description that explains exactly what each indicator measures.
The indicator provides a brief overview of the evidence base and a rationale that explains why it was important to develop the indicator.
The indicator has general and detailed specifications for measurability that highlight how the indicator will be measured in practice to enable comparison within NHS Scotland.
For each QPI, a target has been defined that reflects the expected level of quality that clinical sites should be aiming for (“value less than ...” or “value greater than ...”).

To ensure that the chosen target levels are the most appropriate and lead to continuous quality improvement as intended, they are evaluated and revised as newer scientific evidence or data become available. Owing to difficulties in the accurate selection of patients and to variations in comorbidities and patients’ general condition, a degree of tolerance has been built into the QPIs, and target levels were set taking these factors into account. If there are other factors that influence the target level of an indicator, this is also noted in the detailed QPI description [20]. Examples of Scottish quality indicators in breast cancer care are shown in Table 3.

Table 3. Examples of Scottish Quality Indicators in Breast Cancer Care

Area

Name of indicator

Specifications

Diagnosis

Referral for Genetics Testing

Patients with breast cancer should be offered referral to a specialist

genetics clinic where appropriate

Numerator: Number of patients with breast cancer under 30 years of age referred to a specialist clinic for genetic testing

Denominator: All patients with breast cancer who are under 30 years of age.

Exclusions: No exclusions

Genomic Testing

Patients with breast cancer should be offered genomic testing where appropriate

Numerator: Number of patients with ER-positive, HER2-negative, node-negative breast cancer who have a 3–5% overall survival benefit of chemotherapy treatment predicted at 10 years that undergo genomic testing

Denominator: All patients with ER-positive, HER2-negative, node-negative breast cancer who have a 3–5% overall survival benefit of chemotherapy treatment predicted at 10 years

Exclusions:

patients with breast cancer taking part in clinical trials of chemotherapy treatment

patients who undergo neoadjuvant therapy

Treatment

Immediate Reconstruction Rate

Patients undergoing mastectomy for breast cancer should have access to timely immediate breast reconstruction

Numerator: Number of patients with breast cancer undergoing immediate breast reconstruction at the time of mastectomy

Denominator: All patients with breast cancer undergoing mastectomy

Exclusions:

all patients with M1 disease

all male patients

Minimizing Hospital Stay

Patients should have the opportunity for day case/“23-hour”* breast surgery wherever appropriate

This QPI measures two distinct elements.

(I) Patients with breast cancer undergoing wide excision and/or an axillary sampling procedure as day case surgery; and

(II) Patients with breast cancer undergoing mastectomy (without reconstruction) with a maximum hospital stay of 1 night following their procedure

(I)

Numerator: Number of patients with breast cancer undergoing wide excision and/or axillary sampling procedure (sentinel node biopsy or 4 node sample) as day case surgery

Denominator: All patients with breast cancer undergoing wide excision and/or axillary sampling procedure (sentinel node biopsy or 4 node sample)

Exclusions:

all patients with breast cancer undergoing partial breast reconstruction/mammoplasty

(II)

Numerator: Number of patients with breast cancer undergoing mastectomy (without reconstruction) with a maximum hospital stay of 1 night following their procedure.

Denominator: All patients with breast cancer undergoing mastectomy (without reconstruction)

Exclusions: No exclusions

HER2 Status for Decision Making

HER2 status should be available to inform treatment decision-making

Numerator: Number of patients with invasive breast cancer for whom the HER2 status (as detected by immunohistochemistry [IHC] and/or FISH analysis) is reported within 2 weeks of core biopsy

Denominator: All patients with invasive breast cancer

Exclusions:

patients in whom no invasive carcinoma is present on core biopsy

Radiotherapy for Breast Conservation in Older Adults

Radiotherapy use should be reduced in patients ≥ 70 years of age with early-stage breast cancer and a low risk of recurrence

Numerator: Number of patients ≥ 70 years with T1 N0, ER-positive, HER2-negative, LVI-negative, Grade I to II breast cancers undergoing conservation surgery (completely excised with margin ≥ 1 mm) with hormone therapy who receive radiotherapy

Denominator: All patients ≥ 70 years with T1 N0, ER-positive, HER2-negative, LVI-negative, Grade I to II breast cancers undergoing conservation surgery (completely excised with margin ≥ 1 mm) with hormone therapy

Exclusions:

all patients with breast cancer taking part in clinical trials of radiotherapy treatment

Adjuvant Chemotherapy

Patients with breast cancer should receive chemotherapy post-operatively where it will provide a survival benefit for patients

Numerator: Number of patients with hormone receptor (ER ± PR) positive, HER2-negative breast cancer who have a > 5% overall survival benefit of chemotherapy treatment predicted at 10 years and/or high-risk genomic assay score that undergo adjuvant chemotherapy

Denominator: All patients with hormone receptor (ER ± PR) positive, HER2-negative breast cancer who have a > 5% overall survival benefit of chemotherapy treatment predicted at 10 years and/or high-risk genomic assay score

Exclusions:

all patients with breast cancer taking part in trials of chemotherapy treatment

all patients with breast cancer who have had neoadjuvant chemotherapy

all patients with M1 disease

Re-excision Rates

Patients undergoing surgery for breast cancer should only undergo

one definitive operation where possible

Numerator: Number of patients with breast cancer (invasive or in situ) having breast conservation surgery who undergo re-excision or mastectomy following initial breast surgery

Denominator: All patients with breast (invasive or in situ) cancer having breast conservation surgery as their initial or only breast surgery

Exclusions:

LCIS alone

30 Day Mortality following Systemic Anti-Cancer Therapy (SACT)

30-day mortality following Systemic Anti-Cancer Therapy (SACT) treatment for breast cancer

Numerator: Number of patients with breast cancer who undergo SACT that die within 30 days of treatment

Denominator: All patients with breast cancer who undergo SACT

Exclusions: No exclusions

Please note:

This indicator will be reported separately for neoadjuvant, adjuvant, and palliative chemotherapy, as opposed to one single figure

Clinical Trial and Research Study Access

All patients should be considered for participation in available clinical trials/research studies, wherever eligible

Numerator: Number of patients diagnosed with breast cancer who consented to a clinical trial/research study

Denominator: All patients diagnosed with breast cancer

Exclusions: No exclusions

Neoadjuvant Chemotherapy

Patients with breast cancer who receive chemotherapy should be offered neoadjuvant chemotherapy to achieve pathological complete response where appropriate

Numerator: Number of patients with triple-negative or HER2-positive, Stage II or III ductal breast cancer who receive chemotherapy that undergo neoadjuvant chemotherapy

Denominator: All patients with triple-negative or HER2-positive, Stage II or III ductal breast cancer who receive chemotherapy

Exclusions:

patients who undergo palliative chemotherapy

Deep Inspiratory Breath Hold (DIBH) Radiotherapy

Patients with left-sided breast cancer or DCIS undergoing adjuvant radiotherapy treatment should use a deep inspiratory breath-hold (DIBH) radiotherapy technique

Numerator: Number of patients with left-sided breast cancer or DCIS receiving adjuvant radiotherapy treatment who use a DIBH radiotherapy technique

Denominator: All patients with left-sided breast cancer or DCIS receiving adjuvant radiotherapy treatment

Exclusions: No exclusions
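The targets attached to QPIs such as those in Table 3 are directional ("value greater than" or "value less than") and incorporate a degree of tolerance, as described above. A minimal sketch of checking measured performance against such a target is shown below; the target values, tolerances, and measured figures are illustrative assumptions rather than the published Scottish targets.

```python
from dataclasses import dataclass

@dataclass
class QPITarget:
    """Directional target for a QPI ('>' = performance should exceed the value,
    '<' = performance should stay below it). Values and tolerances are illustrative."""
    name: str
    direction: str          # ">" or "<"
    value: float            # target expressed as a proportion
    tolerance: float = 0.0  # allowance for case-mix factors, as described in the text

    def met(self, performance: float) -> bool:
        if self.direction == ">":
            return performance >= self.value - self.tolerance
        return performance <= self.value + self.tolerance

targets = [
    QPITarget("Immediate reconstruction rate", ">", 0.10, tolerance=0.02),
    QPITarget("30-day mortality following SACT", "<", 0.02),
]

measured = {"Immediate reconstruction rate": 0.09,
            "30-day mortality following SACT": 0.015}

for t in targets:
    print(f"{t.name}: measured {measured[t.name]:.1%} -> target met: {t.met(measured[t.name])}")
```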

USA

Following the 1999 report by the Institute of Medicine (IOM) entitled “Ensuring Quality Cancer Care”, which addressed the provision of high-quality cancer care and the growing problems in this part of the healthcare system, the American Society of Clinical Oncology (ASCO) initiated the National Initiative on Cancer Care Quality (NICCQ). A study was conducted in five large urban areas, providing key data on the quality of cancer care, which, in the opinion of the researchers, needed systemic improvement. NICCQ experts pointed to the challenges of implementing a quality monitoring system on a national scale and, above all, to the key requirements of accurate measurement and reporting of the quality of oncological care at the lowest possible cost and of achieving results that provide support and information about activities that promote quality improvement. Based on discussions with clinical experts, professional associations, and other stakeholders, the researchers identified four key features that are critical to developing the NICCQ: representative patient sampling, patient privacy, appropriate measures of quality of care, and multiple data sources. The impossibility of self-assessment in the area of oncology prompted Dr. Joseph Simone, working through ASCO, to propose the development of a methodology for improving quality in oncological care. A small network of oncologists was originally created to develop a methodology for implementing quality improvement in oncology, quality control measures for oncological care, and a data entry and reporting system. This working group of oncologists eventually formed the Quality Oncology Practice Initiative (QOPI).

The process of selecting experts to participate in the program was based on the modified Delphi method. The working group then defined the criteria guiding measurement. In order to simplify the measurement process, the measures were originally binary, i.e. yes/no or agree/disagree. In addition, the measures were intended to address a number of important issues. Those occurring in guidelines for clinical practice or other similar standards of oncology care and based on high-quality evidence were considered of primary importance, but it was agreed not to restrict the choice to these alone. Thus, three basic sources of measures were taken into account:

consensus-based, determined by all participants in the QOPI program
evidence-based standards (i.e. those derived from published clinical practice guidelines and health technology assessment developed by scientific societies)
items related to patient/physician interactions required by organizations other than scientific societies.

Currently, the QOPI provides a system for measuring care processes at intervals of six months using a retrospective analysis of medical records. As part of the system, staff responsible for completing medical data select patients fulfilling the requirements for QOPI, starting with those most recently observed in clinical practice and going back in time up to 6 months to meet the minimum sample size. The minimum number of reported patients is determined by the number of full-time oncologists or hemato-oncologists at the center.
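The sketch below illustrates the chart-selection rule described above: eligible patients are taken in order of most recent visit, going back no more than about six months, until a minimum sample size tied to the number of full-time oncologists is reached. The per-oncologist multiplier and the patient records are hypothetical; QOPI defines its own sample-size thresholds.

```python
from datetime import date, timedelta

def select_chart_review_sample(patients, n_oncologists, per_oncologist=8,
                               today=date(2022, 1, 5), lookback_days=183):
    """Select patients for a QOPI-style retrospective chart review: most recently
    seen first, going back no more than ~6 months, until a minimum sample size is
    reached. The per_oncologist multiplier is a hypothetical parameter."""
    minimum = n_oncologists * per_oncologist
    cutoff = today - timedelta(days=lookback_days)
    eligible = [p for p in patients if p["last_visit"] >= cutoff]
    eligible.sort(key=lambda p: p["last_visit"], reverse=True)
    return eligible[:minimum], minimum

# Hypothetical patient records with a last-visit date
patients = [{"id": i, "last_visit": date(2021, 12, 1) - timedelta(days=5 * i)}
            for i in range(60)]
sample, minimum = select_chart_review_sample(patients, n_oncologists=3)
print(f"minimum required: {minimum}, selected: {len(sample)}")
```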

Data are transferred to the QOPI database using a structured, secure online form and are analyzed and reported back to the center. Reports are available within 4 weeks after data collection, so centers can use the results to improve the quality of cancer care. For each quality measure, the reports include detailed practice data and comparative aggregate data [21–24].

Examples of American Quality Indicators in Breast Cancer Care are shown in Table 4.

Table 4. Examples of American Quality Indicators in Breast Cancer Care

Module

Name of indicator (*QOPI® Certification Measure)

Core

Pathology report confirming malignancy*

Staging documented within one month of first office visit*

Pain addressed appropriately

Pain assessed on either of the two most recent office visits*

Documented plan for chemotherapy, including doses, route, and time intervals*

Chemotherapy intent (curative vs. non-curative) documented before or within two weeks after administration*

Chemotherapy intent discussion with patient documented*

Documented plan for oral chemotherapy: Dose*

Documented plan for oral chemotherapy: Administration schedule (start day, days of treatment/rest and planned duration)*

Patient consent for chemotherapy*

Smoking status/tobacco use documented in past year*

Patient emotional well-being assessed by the second office visit*

Action taken to address problems with emotional well-being by the second office visit*

Height, Weight, and BSA documented prior to chemotherapy*

Breast

Combination chemotherapy received within 4 months of diagnosis by women under 70 with AJCC stage IA (T1c) and IB–III ER/PR negative breast cancer*

Test for Her-2/neu overexpression or gene amplification*

Tamoxifen or AI received within 1 year of diagnosis by patients with AJCC stage IA (T1c) and IB–III ER or PR positive breast cancer*

Canada (Ontario)

The development of quality indicators involves several steps and is the responsibility of the Quality Standards Advisory Committee (QSAC). The process for developing indicators starts after draft quality statements have been approved. However, literature reviews and work related to the environmental scanning process begin before the draft statements are agreed upon. The process of indicator development is illustrated in Figure 2.

Figure 2. The indicator development process in Canada (Ontario) [26]

The first step involves the identification of results that the QSAC prioritizes as being essential to the quality assessment process. The selected results should be in line with the quality criteria set out in Quality Matters, i.e. safety, effectiveness, patient orientation, efficiency, timeliness, and equity. Next, the QSAC selects a limited set of results that reflect and may affect the objectives set out in the quality standard. The results should be factors that can be reasonably expected to be influenced by the adoption of the quality standard in the entire province.

At the start of each quality standards assessment project, the Health Quality Ontario team reviews the existing literature on the subject of quality issues and environmental analysis. The literature review primarily includes an international inventory of existing quality indicators (with associated definitions and validation information). It is important to identify functioning indicators so that the QSAC can determine priorities with minimal delay. The environmental analysis focuses on measuring, reporting, and collecting data in a given province. It describes the existing reporting activities, methods of analysis, and tools, including previous activities, plans, or reports on quality improvement. It also describes the existing datasets that compile information relevant to the quality standard. On the basis of the results of the literature review and environmental analysis, the Health Quality Ontario team compiles a shortlist of potential indicators for prioritization (if necessary) by the QSAC.

The draft indicators are then made available for consultation and the “technical details” for each indicator are agreed by a panel of experts (together with QSAC members). This process includes determining whether an indicator can be calculated using the datasets at the disposal of the provinces, delineating how the indicators should be measured, and specifying an alternative if it is not possible to measure a particular indicator, as well as stipulating any restrictions for each of the indicators. Once the proposal has been developed, each suggested indicator undergoes two phases of public consultations. It is first sent to organizations representing patient interests, and then the project is posted on the Health Quality Ontario website for 3 weeks of public consultation to obtain patient feedback. All the feedback received is analyzed and thematically synthesized in the consultation report. A draft consultation report is sent to the QSAC to provide background for the final meeting, in which the QSAC and a team of experts discuss any proposed changes or amendments to the quality standard. Then, a public version of the consultation report is prepared, which describes the feedback, comments, and suggestions received, as well as any changes made to the quality standard along with a justification for each of the changes. The report also indicates where the QSAC chose not to make changes, along with the rationale behind that decision. Finally, the set of indicators is published with recommendations for their implementation by different stakeholders, in order to support the continuous process of quality improvement [26, 27].

The quality-of-care assessment carried out in Canada may vary from province to province. Apart from the regional quality assessment programs, nationwide evaluations are also carried out. An example of this kind of exercise is the evaluation of the quality of a screening program for the early detection of breast cancer. Table 5 shows the indicators used to evaluate the program, broken down into 5 main areas.

Table 5. Examples of Canadian indicators for assessing the quality of a breast cancer screening program

Area

Name of indicator

Specifications

Coverage

Participation Rate

The participation rate is the percentage of women who have a screening mammogram within 30 months, as a proportion of the target population

National target (50 to 69 years): ≥ 70% of the target population within 30 months

Retention Rate

Retention rate is the estimated percentage of women aged 50 to 67 years who returned for screening within 30 months of their previous screen

National target (50 to 67 years): ≥ 75% within 30 months of an initial screen; ≥ 90% within 30 months of a subsequent screen

Annual Screening Rate

The annual screening rate is the estimated percentage of women who returned to screen within 18 months of their previous screen. Target: None

Follow-Up

Abnormal Call Rate

Abnormal call rate is the percentage of screening mammograms that are identified as abnormal

National target (50 to 69 years): < 10% of initial screens; < 5% of subsequent screens

Diagnostic Assessment

Most women who receive an abnormal screening result do not go on to be diagnosed with breast cancer; however, additional assessment is required to reach a definitive diagnosis. This can include additional imaging, core or open biopsy, and/or fine-needle aspiration (FNA)

Diagnostic Interval

Time from screen to notification of screen result

National target (50 to 69 years): ≥ 90% within two weeks

Time from abnormal screen to first diagnostic assessment

National target (50 to 69 years): ≥ 90% within three weeks

Time from abnormal screen to definitive diagnosis

National target (50 to 69 years): ≥ 90% within five weeks if no tissue biopsy is performed; ≥ 90% within seven weeks if tissue biopsy (core or open) is performed

Quality of Screening

Non-Malignant Biopsy Rate

The percentage of non-malignant open surgical biopsies is the percentage of non-malignant biopsies which were open surgical biopsies

National target: No target established

Positive Predictive Value (PPV) of the Screening Mammography Program

The positive predictive value (PPV) of the screening mammography program is the percentage of abnormal cases diagnosed with breast cancer (invasive or in situ) after diagnostic workup

National target (50 to 69 years): ≥ 5% for initial screens; ≥ 6% for subsequent screens

Sensitivity of the Screening Mammography Program

The sensitivity of the screening mammography program is the percentage of breast cancer cases (invasive and in situ) that were correctly identified as cancer during the screening episode

National target: No target established

Post-Screen Invasive Cancer Rate

The post-screen invasive cancer rate is the number of invasive breast cancers found after a normal or benign mammography screening episode within 0 to < 12 months and 12 to 24 months of the screening date, per 10,000 person-years of follow-up.

National target (50 to 69 years): < 6 per 10,000 person-years within 0 to < 12 months of the screen date; < 12 per 10,000 person-years within 12 to 24 months of the screening date

Detection

In Situ Cancer Detection Rate

In situ cancer detection rate is the number of ductal carcinomas in situ (DCIS) cancers detected per 1,000 screens

National target: No target established

Invasive Cancer Detection Rate

Invasive cancer detection rate is the number of invasive cancers detected per 1,000 screens

National target (50 to 69 years): > 5 per 1,000 initial screens; > 3 per 1,000 subsequent screens

Percent Ductal Carcinoma in Situ

Percent ductal carcinoma in situ is the percentage of all cancers detected that are DCIS

National target: No target established

Disease Extent at Diagnosis

Screen-Detected Invasive Tumor Size

Screen-detected invasive tumor size is the percentage of screen-detected invasive cancers with a tumor size15 mm in greatest diameter as determined by the best available evidence: 1) pathological, 2) radiological, and 3) clinical

National target (50 to 69 years): > 50% screen-detected invasive tumors15 mm

The proportion of Node Negative Screen-Detected Invasive Cancers

The proportion of node-negative screen-detected invasive cancers is the percentage of screen-detected invasive cancers in which cancer has not invaded the axillary lymph nodes as determined by pathological evidence

National target (50 to 69 years): > 70% of screen-detected invasive cancers
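Several of the indicators in Table 5 reduce to simple proportions over per-screen records. As an illustration only (the field names and data below are hypothetical and do not reproduce the Canadian program's actual computation), the abnormal call rate, the positive predictive value, and the invasive cancer detection rate could be derived as follows:

```python
from dataclasses import dataclass

@dataclass
class ScreenRecord:
    """One screening episode; field names are illustrative only."""
    abnormal: bool          # screen read as abnormal
    cancer_detected: bool   # invasive or in situ cancer confirmed at workup
    invasive: bool          # detected cancer is invasive

def abnormal_call_rate(screens):
    """Percentage of screening mammograms identified as abnormal."""
    return 100.0 * sum(s.abnormal for s in screens) / len(screens)

def positive_predictive_value(screens):
    """Percentage of abnormal screens with cancer (invasive or in situ) confirmed at workup."""
    abnormal = [s for s in screens if s.abnormal]
    return 100.0 * sum(s.cancer_detected for s in abnormal) / len(abnormal)

def invasive_detection_rate(screens):
    """Invasive cancers detected per 1,000 screens."""
    return 1000.0 * sum(s.cancer_detected and s.invasive for s in screens) / len(screens)

# Illustrative toy data only
screens = [ScreenRecord(True, True, True),
           ScreenRecord(True, False, False),
           ScreenRecord(False, False, False),
           ScreenRecord(True, True, False)]
print(f"Abnormal call rate: {abnormal_call_rate(screens):.1f}%")
print(f"PPV: {positive_predictive_value(screens):.1f}%")
print(f"Invasive detection rate: {invasive_detection_rate(screens):.1f} per 1,000 screens")
```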

Japan

Under the Basic Cancer Control Act of 2006, efforts to ensure the quality of cancer care have been intensified in many sectors; among other changes, the role of the government in ensuring the quality of cancer care has been strengthened. Accordingly, a government-funded research project was launched to develop quality indicators for five major cancer types (breast, lung, stomach, colon, and liver cancer).

The project was primarily aimed at assessing how current best practices are applied, rather than assessing the overall usefulness of services (such as waiting times); however, the study also raised a number of issues regarding the concepts and methodology used to measure quality.

The project used a methodology developed by researchers from the University of California, Los Angeles, and the RAND Corporation. Under this methodology, a set of potential quality indicators was created, evidence was collected in support of these indicators, and a multidisciplinary expert panel was convened to evaluate them in two rounds. The panel rated the indicators on a simple scale from 1 to 9; the first rating was made before the working group meeting, and the second followed a discussion within the expert panel. The final list comprised a set of 206 indicators, each of which described a standard of care together with the target group of patients to whom it would be applied [29].
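For illustration, second-round ratings collected on such a 1–9 scale are commonly summarized by the panel median. The sketch below assumes RAND/UCLA-style conventions (median 7–9 accepted, 4–6 uncertain, 1–3 rejected, with a disagreement flag when ratings cluster at both extremes); these thresholds are an assumption made for the example, not details reported for the Japanese project.

```python
from statistics import median

def classify_indicator(ratings):
    """Classify a candidate indicator from second-round panel ratings (1-9 scale).

    Convention assumed here (RAND/UCLA-style): median 7-9 -> 'valid',
    median 4-6 -> 'uncertain', median 1-3 -> 'not valid'; the indicator is
    flagged as 'disagreement' when at least one third of panelists rate it
    in the 1-3 range and at least one third rate it in the 7-9 range.
    """
    n = len(ratings)
    low = sum(r <= 3 for r in ratings)
    high = sum(r >= 7 for r in ratings)
    if low >= n / 3 and high >= n / 3:
        return "disagreement"
    med = median(ratings)
    if med >= 7:
        return "valid"
    if med >= 4:
        return "uncertain"
    return "not valid"

# Illustrative second-round ratings from a nine-member panel
print(classify_indicator([8, 9, 7, 8, 7, 9, 8, 6, 7]))   # valid
print(classify_indicator([2, 3, 8, 9, 1, 7, 8, 2, 9]))   # disagreement
```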

Currently, new research is being undertaken to create measures using the methodological experience gained so far, in a multi-stage approach including the selection of potential indicators based on the scientific literature, the creation of a multidisciplinary expert panel, and a two-round verification of the survey results [30]. Examples of Japanese Quality Indicators in Breast Cancer Care are shown in Table 6.

Table 6. Examples of Japanese Quality Indicators in Breast Cancer Care

Diagnosis

Hormone receptor test. Numerator: patients with a known ER result (positive or negative). Denominator: all cases of breast cancer (excluding unknown ER).

HER-2 test. Numerator: patients who had HER-2 testing (in cases of HER-2 2+, positive or negative status was determined by FISH). Denominator: all cases of breast cancer.

Treatment

Adherence to St. Gallen consensus recommendations. Numerator: patients who received post-operative therapy adherent to the St. Gallen consensus recommendations. Denominator: post-operative cases with invasive breast cancer.

Appropriate hormone therapy. Numerator: patients who received post-operative hormone therapy (tamoxifen, toremifene, anastrozole, exemestane, or letrozole). Denominator: post-operative cases with hormone receptor-positive invasive breast cancer.

Appropriate post-operative chemotherapy regimen. Numerator: patients who received a regimen including anthracyclines, taxanes, or CMF. Denominator: invasive breast cancer cases in which post-operative adjuvant chemotherapy was implemented.

Radiation therapy after breast-conserving surgery. Numerator: patients who received post-operative radiation therapy. Denominator: patients 70 years old or younger who underwent breast-conserving surgery for stage I–III breast cancer.

Lymph node dissection for N1–3 patients. Numerator: patients who had lymph node dissection of level I or above. Denominator: patients who had surgery for N1–3 breast cancer.
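Indicators specified in this numerator/denominator form can be computed directly from patient-level records once the inclusion and exclusion criteria are encoded. The following sketch, using hypothetical field names rather than an actual registry schema, illustrates the radiation-therapy-after-breast-conserving-surgery indicator from Table 6:

```python
def radiotherapy_after_bcs_rate(patients):
    """Radiation therapy after breast-conserving surgery (cf. Table 6).

    Denominator: patients 70 years old or younger who underwent
    breast-conserving surgery for stage I-III breast cancer.
    Numerator: those who received post-operative radiation therapy.
    Field names are illustrative, not an actual registry schema.
    """
    denominator = [
        p for p in patients
        if p["age"] <= 70
        and p["surgery"] == "breast_conserving"
        and p["stage"] in ("I", "II", "III")
    ]
    if not denominator:
        return None
    numerator = [p for p in denominator if p["postop_radiotherapy"]]
    return 100.0 * len(numerator) / len(denominator)

# Illustrative records
patients = [
    {"age": 55, "surgery": "breast_conserving", "stage": "II", "postop_radiotherapy": True},
    {"age": 68, "surgery": "breast_conserving", "stage": "I", "postop_radiotherapy": False},
    {"age": 75, "surgery": "breast_conserving", "stage": "II", "postop_radiotherapy": True},  # excluded: age
    {"age": 60, "surgery": "mastectomy", "stage": "III", "postop_radiotherapy": False},       # excluded: surgery type
]
print(f"{radiotherapy_after_bcs_rate(patients):.0f}%")  # 50%
```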

The comparison presented above shows that the process of developing quality indicators is conducted in an analogous manner in all of the care systems analyzed. An important element in the development of quality indicators is their validation and evaluation, which allow optimal sets of indicators to be created. Over time, once the value of a given indicator reaches the target level of performance, it is withdrawn from the measurement process, as it no longer adds information to the assessment of the quality of care. Drawing on the practices of other countries would seem to be a sensible approach for the Polish healthcare system, bearing in mind that most programs for cancer care quality assessment are based on American experiences, which laid the foundation for quality assessment in healthcare systems. A summary of the results of the review is presented in Table 7.

Table 7. Summary of the results of the review

| Comparative feature | Poland | Australia | Germany | Scotland | USA | Canada | Japan |
| Guidelines for the development of defined indicators | – | + | + | + | + | + | + |
| Clinical practice guidelines used as the main source for developing QIs | – | + | + | + | + | + | + |
| A review of the current quality indicators | – | + | + | + | + | + | + |
| A multi-disciplinary expert panel established | – | + | + | + | + | + | + |
| Extensive public consultations carried out | – | + | + | + | + | + | – |
| The results of the indicators made available to the public | – | + | + | + | + | + | + |
| Cyclical assessments and adjustment of indicators to changing conditions | – | + | + | + | + | + | + |

Discussion

Within oncological care in the Polish healthcare system, measures are taken to monitor quality with the use of quality indicators. Currently, quality assessment of oncology care is conducted as part of the oncology package for services provided to breast-cancer patients and in the oncology network provider care pilot program.

These are the two most advanced initiatives attempting to measure quality in oncology. In the directive issued by the Minister of Health on 24 May 2019, amending the regulation on guaranteed services in the field of hospital treatment (Journal of Laws of 2019, item 1062), there are detailed conditions that healthcare providers should meet in the provision of comprehensive oncological care for patients with breast cancer. In terms of the quality of service provision, reference is made to the Act issued on 27 August 2004 on healthcare services financed from public funds (Journal of Laws 2021 item 1285) (hereinafter: the Act on Benefits):

“The provider calculates the effectiveness of cancer diagnostics and treatment in accordance with Article 32c of the Act and meets or aims to meet the designated threshold values of the indicators, if defined. A standardized, written protocol for diagnosis and therapeutic management is to be used at all stages of advancement” [32]. Before the directive entered into force, the Minister of Health published, by means of an announcement, the assessment measures for conducting oncological diagnostics and oncological treatment for breast cancer. The announcement defines 14 measures that can be used to calculate the effectiveness indicators for cancer diagnostics and cancer treatment. The numerator and denominator, as well as the rules determining their calculation, were defined for each of the measures [33]. In accordance with the provisions set out in Art. 32c paragraph 1 of the Act on Benefits:

“1. Service providers providing healthcare services in the field of oncological diagnostics and oncological treatment calculate annually the effectiveness indicators of oncological diagnostics and oncological treatment for the previous calendar year on the basis of data from statistical reports referred to in regulations issued under Art. 137 paragraph 2. The service providers shall provide the National Healthcare Fund with the effectiveness indicators for oncological diagnostics and cancer treatment no later than by the end of the second quarter of the calendar year following the year for which these indicators were calculated” [34].

It is worth noting that the Act on Benefits clearly defines the basis on which data should be recorded and the time intervals over which they should be calculated to evaluate the results achieved by service providers in terms of quality indicators. It does not, however, define the rules for disclosing to the public information on the values achieved. In contrast to other countries, the Polish system has not yet established target values for quality indicators in cancer care [33]. This lack of criteria generally hampers the process of assessing the quality of care in individual organizations. A significant problem is also the fact that currently only those medical centers that provide services under the oncology package contracted with the public payer are obliged to calculate the effectiveness indicators in oncological care. According to information published by the National Healthcare Fund on the website ezdrowie.gov.pl, where quantitative data on surgical treatment for thirteen types of cancer can be found, in 2020 surgical treatment of breast cancer was carried out by 167 centers, 21 of which were above the threshold or potentially above the quality threshold, defined at the level of 250 procedures. It should be emphasized that this threshold of 250 procedures applies to those services that have been agreed under the terms of the oncology package. Centers in the potentially above-threshold category carried out more than 250 procedures, but to a large extent these were performed outside the terms of the oncology package [35]. Thus, healthcare providers who choose not to bill for oncological services under the package are not required to calculate the effectiveness indicators for oncological care. As a consequence, the payer is not able to monitor the quality of oncological services provided in all centers throughout the country, nor is it possible to comprehensively assess the functioning of oncological care in Poland.

The assessment of the quality of oncological care is also carried out as part of an ongoing pilot program for evaluating the care of patients within the oncological network, which was introduced by a directive of the Minister of Health on 13 December 2018 (Journal of Laws 2018, item 2423, with later amendments). The aim of the project, whose completion is planned for 31 December 2021 [36], is to assess the organization, quality, and effects of oncological care within the oncological network in the four provinces covered by the program, i.e. Dolnośląskie, Podlaskie, Pomorskie, and Świętokrzyskie Voivodeships. The program covers 5 major organ tumors: colon, lung, breast, ovary, and prostate. The assessment of care within the framework of the oncological network is carried out in relation to patients covered by the pilot study, with respect to diagnoses, and the measures specified in the ordinance. Paragraph 9 of the directive defines 35 measures for the assessment of oncological care, including 15 general measures, 3 for colorectal cancer, 3 for prostate cancer, 4 for lung cancer, 3 for ovarian cancer, and 7 for breast cancer. The center responsible for drawing up and submitting reports to the provincial branch of the National Healthcare Fund is the Provincial Coordinating Centre (WOK) [37].

The evaluation of the results of the pilot project will be carried out by the provincial branches of the National Healthcare Fund in cooperation with the Provincial Coordinating Centre, on the basis of the final report on the implementation of the pilot project. The overall assessment of the results of the program may be problematic due to limitations of the quality assessment measures themselves. Most of them are clinical measures whose calculation from the historical data on the provision of services available to the public payer is significantly limited, because these data have not hitherto been collected by the National Healthcare Fund. Understanding the real benefits achieved by the pilot scheme would be easier if the measures were calculated retrospectively from the data of the hospitals involved in the pilot scheme. However, the picture of oncological care obtained in this way would reflect only the provinces covered by the pilot program. A more comprehensive evaluation reflecting results on a country-wide scale is not possible within a pilot scheme, because measures for centers not participating in the program cannot be calculated. The lack of knowledge about the quality of oncological care in Poland prior to the start of the pilot program makes it difficult to answer the question of whether the program’s objective has been achieved. Nevertheless, the coordination of oncological care and cooperation within the oncological network has undoubtedly had a positive influence on increasing the satisfaction of cancer patients, shortening the waiting time for an appointment at oncology clinics, standardizing treatment processes, and increasing the importance of quality monitoring in oncology [38].

As already mentioned, the development of quality indicators is directly related to clinical practice guidelines. The long-term program “The National Oncological Strategy for 2020–2030,” adopted by the Council of Ministers and published in February 2020, assumes that by the end of 2021 national diagnostic and therapeutic guidelines and organizational standards for key malignant neoplasms will be developed, based on scientific evidence and taking into account the current conditions for financing health services from public funds [39]. It is worth noting that the overarching goal of this program is to increase the percentage of people surviving 5 years from the end of oncological therapy [40]. The guidelines published as an announcement by the Minister of Health should be a starting point for the development, with a group of healthcare system stakeholders, of cancer care quality indicators that will be easy to implement in the Polish system and will contribute to quality improvement in priority areas. It would appear sensible to draw on the experiences of other countries and to consider implementing similar solutions in the Polish healthcare system. In these countries, most clinical data necessary to calculate indicators are collected directly from clinical registers; in the Polish healthcare system, until the necessary database of clinical registers is built, it is worth considering the possibility of transferring the obligation to calculate indicators of the quality of oncological care from service providers to the payer. This would translate into complete information on the quality of care, taking into account all service providers, not only those implementing the oncology package.

The reporting system should be tightened up, and supplemented with key parameters necessary to calculate the indicators. Establishing multidisciplinary teams to develop quality indicators would allow for the identification of problematic areas in cancer care in Poland and the creation of key indicators which might have a significant impact on improving the quality of care. Such teams should include, in particular, representatives of scientific societies responsible for the development of clinical guidelines, representatives of the Minister of Health, representatives of the payer, system experts, and persons representing the interests of cancer patients. The desired endpoint of these consultations is the recommendation of quality indicators that would ensure maximum benefits from the point of view of the system, with a minimum administrative burden for both service providers and the payer. Currently, healthcare providers are subject to numerous reporting obligations, including the submission of a wide range of reports, which leads to considerable difficulties in task management and creates complications in identifying the appropriate information necessary for the process of assessing the quality of care. Reporting obligations include reports of significant complexity, which are required to satisfy the needs of a large group of various stakeholders, but also more selective reports, narrowly focused on specific problems, to meet the needs of individual groups of stakeholders, e.g. reports about the implementation of health policy, corporate governance, audit by the payer for health services, etc. Reporting by medical entities has a variety of goals, including control over spending public funds (e.g. within the budget of the National Healthcare Fund) [41].

The National Healthcare Fund, as a state organization, is obliged to collect reporting data on the services for which it reimburses providers. Thus, in the context of quality measurement and control, as well as the assessment of provided services, diagnostic and therapeutic processes, or resources (e.g. human resources), it is currently the largest administrator of relevant medical data.

Although using reporting data for measurement purposes (especially quality measurement) might seem easy to implement, several limitations make the process difficult. A significant problem with the National Healthcare Fund’s reporting system is the scope of the data: data collection by the National Healthcare Fund does not serve quality control, so the reported data are not sufficient to calculate quality indicators, especially where clinical exclusions are concerned. A further serious problem is the lack of access to data that accurately reflect patients’ health status. This is closely related to the need for appropriate data sets (continuously validated in real time) and specially designed registers. The example of the NHS Scotland quality monitoring system shows that the measurement process included the creation of appropriate databases and of the range of variables necessary for reporting. The process also included data validation at the service provider level, as well as periodic assessment of the indicators, their correlation, and their compliance with the latest diagnostic and therapeutic guidelines [42, 43]. Moreover, direct presentation of results at the healthcare provider level may be biased because randomness in the presented information cannot be excluded (unacceptably small samples of patients outside the system of basic hospital provision of healthcare services may cause large variances) [44–47].

Moreover, the results of the indicators should be risk-adjusted for the clinical characteristics of patients, taking into account features such as age, sex, and case mix, which might affect the result [48, 49]; data from the National Healthcare Fund may be insufficient in scope to provide these details. It is also worth noting that, based on National Healthcare Fund data alone, it is not possible to make a direct comparison between centers, and the use of measures to assess hospital performance in order to arrive at an “evidence-based” identification of centers deviating from the recommended standard of care would require the use of statistical methods such as meta-analyses [50, 51]. Moreover, a clear distinction should be made between quality indicators used at the service provider level (control of the internal process within one organization) and indicators used to compare different centers (benchmarking) [50, 51]. An important issue, from this perspective, is the creation of a statistical tool for the comparative analysis of service providers (rankability).
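One widely described way of displaying random variation when comparing centers (cf. [51]) is a funnel plot, in which each provider's observed rate is judged against control limits around a benchmark that narrow as case volume grows. The sketch below uses a normal approximation to the binomial for those limits; it is a simplified illustration, not a method prescribed for the Polish system, and the center identifiers and counts are invented.

```python
from math import sqrt

def funnel_limits(p0, n, z=1.96):
    """Approximate 95% control limits for an observed proportion at volume n,
    around the benchmark proportion p0 (normal approximation to the binomial)."""
    half_width = z * sqrt(p0 * (1.0 - p0) / n)
    return max(0.0, p0 - half_width), min(1.0, p0 + half_width)

def flag_outliers(centers, p0):
    """Flag centers whose observed rate falls outside the funnel limits.

    `centers` maps a center identifier to (events, cases); the identifiers
    and numbers used below are illustrative only.
    """
    flagged = {}
    for name, (events, cases) in centers.items():
        rate = events / cases
        lower, upper = funnel_limits(p0, cases)
        if rate < lower or rate > upper:
            flagged[name] = (rate, lower, upper)
    return flagged

centers = {"A": (12, 300), "B": (45, 320), "C": (20, 250)}
overall = sum(e for e, _ in centers.values()) / sum(c for _, c in centers.values())
print(flag_outliers(centers, overall))
```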

Despite the significant limitations of the data available from the National Healthcare Fund, there are some possibilities for measuring the quality of healthcare in a manner that is both reasonably simple and precise. Creating quality indicators based on available reporting data is, of course, subject to methodological limitations and requires appropriate validation, but the prospect appears promising. Ongoing (real-time) quality control is impossible because the data are submitted over pre-established reporting periods (usually monthly). However, under certain assumptions, it is realistic to observe processes or groups of patients post factum and react to any identified problems. A fairly common measure, the readmission rate, can serve as an example. For this measurement, it is first necessary to clearly define the patient population (e.g. patients with arterial hypertension, atrial fibrillation, etc.) to be investigated with the aid of an appropriate indicator. It is possible to define a population of patients admitted to a hospital due to a specific health problem and to identify patients re-admitted, with an indication of the reason for re-hospitalization (e.g. based on the primary diagnosis). It is also possible to determine the time that has elapsed since the previous hospitalization and to exclude readmissions that occurred for unrelated clinical reasons or that had been planned (e.g. a procedure scheduled shortly after the previous stay) [52–54].
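A minimal sketch of such a post factum calculation is given below; the 30-day window, the field names, and the rule for judging a readmission "related" are illustrative assumptions, not a measure defined by the National Healthcare Fund.

```python
from datetime import date
from itertools import groupby

def readmission_rate(admissions, window_days=30):
    """Share of index admissions followed by an unplanned readmission with the
    same primary diagnosis group within `window_days` of discharge.
    Fields (patient_id, admit_date, discharge_date, icd10_group, planned)
    are illustrative, not the National Healthcare Fund reporting schema."""
    admissions = sorted(admissions, key=lambda a: (a["patient_id"], a["admit_date"]))
    index_stays = 0
    readmissions = 0
    for _, stays in groupby(admissions, key=lambda a: a["patient_id"]):
        stays = list(stays)
        for current, following in zip(stays, stays[1:]):
            index_stays += 1
            gap = (following["admit_date"] - current["discharge_date"]).days
            related = following["icd10_group"] == current["icd10_group"]
            if 0 <= gap <= window_days and related and not following["planned"]:
                readmissions += 1
        index_stays += 1  # the last stay of each patient is also counted as an index stay
    return 100.0 * readmissions / index_stays if index_stays else None

admissions = [
    {"patient_id": 1, "admit_date": date(2021, 1, 5), "discharge_date": date(2021, 1, 10),
     "icd10_group": "I48", "planned": False},
    {"patient_id": 1, "admit_date": date(2021, 1, 25), "discharge_date": date(2021, 1, 28),
     "icd10_group": "I48", "planned": False},   # unplanned, related, within 30 days
    {"patient_id": 2, "admit_date": date(2021, 2, 1), "discharge_date": date(2021, 2, 3),
     "icd10_group": "I10", "planned": False},
]
print(f"{readmission_rate(admissions):.0f}%")  # 1 readmission / 3 index stays
```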

The above considerations are also directly related to the use of reporting data, e.g. ICD-10 or ICD-9 codes, as a method of quantifying certain phenomena. This is of great importance, for example, in identifying parties that over-report selected, better-paid medical procedures or services (so-called upcoding) or that avoid certain groups of patients due to low point tariff values. In addition, the information collected by the National Healthcare Fund enables identification of the disease that is the direct cause of hospitalization, patient visit, examination, or surgery, using codes compliant with the currently applicable classification of diseases and health problems (indicated in the Ministry of Health regulations). Reporting also includes codes for no more than three coexisting causes according to ICD-10, excluding services in the field of primary healthcare [55]. The recorded data additionally contain information on the essential medical procedures performed, coded according to the Polish version of ICD-9 recognized by the payer for reimbursement purposes. Appropriate inclusion of these codes when generating data or carrying out analyses makes it possible to obtain information that is not provided directly by healthcare providers (e.g. information concerning the next stage of cancer treatment or the probable stage of disease advancement).

Sometimes it is possible, by following the entire diagnostic and therapeutic pathway with an appropriate algorithm (e.g. flagging), to refine information on certain procedures and clinical conditions. The data resources of the National Healthcare Fund enable the determination of, among others, the patient’s sex and age, as well as the location of service provision and the patient’s place of residence. It is also possible to use data on comorbidities; however, this requires standardization of the method of data generation and the definition of an appropriate indicator expressing multimorbidity. In addition, in the context of comparative analyses, direct comparison of centers is possible and can be used, for example, to identify “outliers” (such an approach should not, however, be used to draw conclusions about the quality of care). Rather, it should only be used to identify treatment centers that require observation, to identify the reasons why patients withdraw from treatment in specific centers, and to define problems throughout the entire process, so that an appropriate response can be provided to any problems identified.
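A minimal sketch of the flagging idea is shown below: a patient's reported procedure codes are scanned in chronological order and mapped to the treatment stages reached. The mapping from ICD-9 code prefixes to stages is purely hypothetical and would have to be agreed with clinicians against the Polish version of ICD-9 used by the payer.

```python
# Hypothetical mapping from reported ICD-9 procedure code prefixes to treatment
# stages; the prefixes below are placeholders, not validated Polish ICD-9 codes.
STAGE_BY_PREFIX = {
    "85.1": "diagnostic biopsy",
    "85.2": "breast-conserving surgery",
    "85.4": "mastectomy",
    "99.25": "chemotherapy",
    "92.2": "radiotherapy",
}

def flag_pathway(reported_procedures):
    """Return the treatment stages inferred from a patient's (date, icd9_code)
    reports, in the chronological order in which each stage first appears."""
    stages = []
    for _, code in sorted(reported_procedures):
        for prefix, stage in STAGE_BY_PREFIX.items():
            if code.startswith(prefix) and stage not in stages:
                stages.append(stage)
    return stages

# Illustrative patient history
history = [("2021-03-01", "85.12"), ("2021-04-10", "85.23"), ("2021-05-05", "99.25")]
print(flag_pathway(history))
# ['diagnostic biopsy', 'breast-conserving surgery', 'chemotherapy']
```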

Moreover, the use of data collected for reporting purposes at the service provider (and payer) level is associated with a lower cost than creating a new register. An important limitation of this approach is the phenomenon of reporting better-paid procedures (upcoding) or “fragmented” coding of services, i.e. dividing services that could be accounted for as one into several separate procedures (unbundling) [56], in which case the extraction of relevant clinical data may be associated with additional costs. With this approach, the appropriate reporting regime imposed by the overseeing institution, e.g. a public payer, should also be observed. Redirecting the obligation to calculate quality indicators to the payer also opens up the possibility of creating a system of financial incentives for the best performers by shifting from financing services to paying for results. Performance-based payment schemes have been developed because the traditional fee-for-service system rewards providers for the quantity and complexity of the services they deliver; it encourages higher care intensity, but not higher quality of care, and has contributed to an increase in healthcare costs. Performance-based payment programs encourage higher-quality healthcare while lowering costs. A typical pay-for-performance program provides a financial bonus to healthcare providers if they meet performance criteria for healthcare quality indicators [57]. In the United States, the introduction of alternative payment models (APMs) for financing health services improved the quality of oncological care while reducing costs, even though many important health conditions remained unreported. Effective integration of quality initiatives with the healthcare reimbursement structure is likely to be the key to the long-term success of APMs in cancer care [58].
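As a toy illustration of the bonus mechanism described above (the indicator names, targets, and amounts are invented for the example, not an existing payment scheme):

```python
def performance_bonus(indicator_results, targets, base_payment, bonus_share=0.05):
    """Grant a bonus equal to `bonus_share` of the base payment for each
    quality indicator whose result meets or exceeds its target.
    All names and values are illustrative."""
    met = [name for name, value in indicator_results.items()
           if name in targets and value >= targets[name]]
    return base_payment * bonus_share * len(met), met

results = {"radiotherapy_after_BCS_pct": 93.0, "timely_diagnosis_pct": 88.0}
targets = {"radiotherapy_after_BCS_pct": 90.0, "timely_diagnosis_pct": 92.0}
bonus, met = performance_bonus(results, targets, base_payment=1_000_000)
print(bonus, met)  # 50000.0 ['radiotherapy_after_BCS_pct']
```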

Conclusions

The efforts currently undertaken within the Polish healthcare system to improve the quality of care should lead to the development of a transparent methodology for the development, measurement, publication of results, and evaluation of cancer care quality indicators. Provided that adequate resources are committed, this may be facilitated by the expansion of the powers and tasks of the National Healthcare Fund within the National Oncological Network planned by the Ministry of Health. The Fund will be responsible, inter alia, for the administration and operation of an integrated IT system for the oncological network and for the analysis of the quantitative and qualitative parameters of oncological care achieved by individual centers included in the network [36]. The process of developing care quality indicators for individual disease entities must be designed in a manner adapted to the needs of the Polish healthcare system, though the experiences of other countries in this area may constitute the foundation. Bearing in mind the changes that will be introduced by the Act on Quality in Healthcare and the Act on the National Oncological Network [59] announced by the Ministry of Health, it is necessary to consider the possibilities for measuring quality with the use of oncological care quality indicators developed in a systematic manner and of confirmed utility in the healthcare system. A quality assessment system in oncological care based on quality indicators allows patients to make an informed choice about the center in which their treatment will be conducted and, in the case of a public payer, opens up new possibilities for financing services.

Given the current shape of the reporting and financial reimbursement system, in which the National Healthcare Fund collects a significant amount of the information necessary to implement its statutory tasks, minor modifications in the manner and scope of the data provided may contribute to the creation of an environment conducive to quality measurement with the use of indicators. Moreover, these are solutions that can be implemented “immediately”; they do not require significant financial outlays, and at the same time they can bring potentially large benefits in terms of optimizing the current system. For each set of quality indicators presented in this review, it is possible to select a few that are easily measurable under the Polish system, while others would require the introduction of several additional variables. In line with the concept of aggregating marginal gains, even small modifications (improvements) in the information collected and provided to the payer by service providers may seem insignificant from the point of view of an individual medical organization but may be extremely useful in the process of quality assessment in oncological care from the perspective of the entire healthcare system.

Funding

No funding was received from any sources.

Conflict of interest

The authors declare no conflicts of interest.

References

1. Hlávka JP, Lin PJ, Neumann PJ. Outcome measures for oncology alternative payment models: practical considerations and recommendations. Am J Manag Care. 2019; 25(12): e403–e409, indexed in Pubmed: 31860235.
2. Rządowe Centrum Legislacji, Rządowy Proces Legislacyjny. https://legislacja.rcl.gov.pl/projekt/12349305 (04.08.2021).
3. Bembnowska M, Jośko-Ochojska J. Zarządzanie jakością w ochronie zdrowia. Hygeia Public Health. 2015; 50(3): 457–462.
4. Kautsch M. Zarządzanie w opiece zdrowotnej. Nowe Wyzwania. Wolters Kluwer, Warszawa 2010: 324–325.
5. Jarosiński M, Winch S. Zarządzanie podmiotami leczniczymi przekształcanymi w spółki prawa handlowego, Oleszczyk K. Zarządzanie przez jakość w podmiocie leczniczym. Oficyna Wydawnicza Szkoła Główna Handlowa w Warszawie, Warszawa 2014: 131–146.
6. Donabedian A. The Quality of Care. JAMA. 1988; 260(12): 1743, doi: 10.1001/jama.1988.03410120089033.
7. Segelov E, Carrington C, Aranda S, et al. Developing clinical indicators for oncology: the inaugural cancer care indicator set for the Australian Council on Healthcare Standards. Med J Aust. 2021; 214(11): 528–531, doi: 10.5694/mja2.51087, indexed in Pubmed: 34053081.
8. ACHS 2020 Clinical Indicator Program Information. https://www.achs.org.au/media/153524/achs_2020_clinical_indicator_program_information.pdf (04.08.2021).
9. Hernandez-Boussard T, Blayney DW, Brooks JD. Leveraging Digital Data to Inform and Improve Quality Cancer Care. Cancer Epidemiol Biomarkers Prev. 2020; 29(4): 816–822, doi: 10.1158/1055-9965.EPI-19-0873, indexed in Pubmed: 32066619.
10. National Cancer Control Indicators. https://www.canceraustralia.gov.au/ (13.08.2021).
11. German Guideline Program in Oncology (German Cancer Society, German Cancer Aid, Association of the Scientific Medical Societies). Development of guideline-based quality indicators: methodology for the German Guideline Program in Oncology, version 2.0. 2017. http://www.leitlinienprogramm-onkologie.de/methodik/informationen-zur-methodik/.
12. Reiter A, Fischer B, Kötting J, et al. QUALIFY: Ein Instrument zur Bewertung von Qualitätsindikatoren. Zeitschrift für ärztliche Fortbildung und Qualität im Gesundheitswesen - German Journal for Quality in Health Care. 2008; 101(10): 683–688, doi: 10.1016/j.zgesun.2007.11.003.
13. German Association of the Scientific Medical Societies (AWMF) - Standing Guidelines Commission. AWMF Guidance Manual and Rules for Guideline Development, 1st Edition 2012. English version. http://www.awmf.org/leitlinien/awmf-regelwerk.html (04.08.2021).
14. Wolff KD, Rau A, Ferencz J, et al. Effect of an evidence-based guideline on the treatment of maxillofacial cancer: A prospective analysis. J Craniomaxillofac Surg. 2017; 45(3): 427–431, doi: 10.1016/j.jcms.2016.12.013, indexed in Pubmed: 28108238.
15. Kowalski C, Ferencz J, Brucker SY, et al. Quality of care in breast cancer centers: results of benchmarking by the German Cancer Society and German Society for Breast Diseases. Breast. 2015; 24(2): 118–123, doi: 10.1016/j.breast.2014.11.014, indexed in Pubmed: 25515645.
16. Wesselmann S, Winter A, Ferencz J, et al. Documented quality of care in certified colorectal cancer centers in Germany: German Cancer Society benchmarking report for 2013. Int J Colorectal Dis. 2014; 29(4): 511–518, doi: 10.1007/s00384-014-1842-x, indexed in Pubmed: 24584335.
17. Kowalski C, et al. Quality assessment in prostate cancer centers certified by the German Cancer Society. World J Urol. 2016; 34(5): 665–672.
18. Annual Report 2020 of the Certified Breast Cancer Centres (BCCs) Audit year 2019 / indicator year 2018 Qualitätsindikatoren Brustkrebs 2020. http://ecc-cert.org/ (18.08.2021).
19. Healthcare Improvement Scotland. https://www.healthcareimprovementscotland.org/our_work/cancer_care_improvement/cancer_qpis.aspx (04.08.2021).
20. NHS Scottish Cancer Taskforce National Cancer Quality Steering Group Breast Cancer Clinical Quality Performance Indicators, updated August 2019 (v4.0). Published by: Healthcare Improvement Scotland.
21. Ensuring Quality Cancer Care. 1999, doi: 10.17226/6467.
22. McNiff K. The quality oncology practice initiative: assessing and improving care within the medical oncology practice. J Oncol Pract. 2006; 2(1): 26–30, doi: 10.1200/JOP.2006.2.1.26, indexed in Pubmed: 20871730.
23. McNiff KK, Bonelli KR, Jacobson JO. Quality oncology practice initiative certification program: overview, measure scoring methodology, and site assessment standards. J Oncol Pract. 2009; 5(6): 270–276, doi: 10.1200/JOP.091045, indexed in Pubmed: 21479069.
24. Neuss MN, Desch CE, McNiff KK, et al. A process for measuring the quality of cancer care: the Quality Oncology Practice Initiative. J Clin Oncol. 2005; 23(25): 6233–6239, doi: 10.1200/JCO.2005.05.948, indexed in Pubmed: 16087948.
25. ASCO QOPI® Certification Track 2021 Measures Summary. https://practice.asco.org/quality-improvement/quality-programs/quality-oncology-practice-initiative/qopi-related-measures (13.08.2021).
26. https://www.hqontario.ca/Portals/0/documents/evidence/Quality_Standards_Process_and_Methods_Guide--Oct_2017.pdf (10.08.2021).
27. https://criticalcarecanada.com/presentations/2013/how_to_develop_quality_indicators.pdf (10.08.2021).
28. Canadian Partnership Against Cancer. Breast Cancer Screening in Canada: Monitoring and Evaluation of Quality Indicators - Results Report, January 2011 to December 2012. Toronto: Canadian Partnership Against Cancer; 2017.
29. Higashi T. Lessons learned in the development of process quality indicators for cancer care in Japan. Biopsychosoc Med. 2010; 4: 14, doi: 10.1186/1751-0759-4-14, indexed in Pubmed: 21054836.
30. Matsumura S, Ozaki M, Iwamoto M, et al. Development and Pilot Testing of Quality Indicators for Primary Care in Japan. JMA J. 2019; 2(2): 131–138, doi: 10.31662/jmaj.2018-0053, indexed in Pubmed: 33615023.
31. Mukai H, Higashi T, Sasaki M, et al. Quality evaluation of medical care for breast cancer in Japan. Int J Qual Health Care. 2016; 28(1): 110–113, doi: 10.1093/intqhc/mzv109, indexed in Pubmed: 26668106.
32. Rozporządzenie Ministra Zdrowia z dnia 24 maja 2019 r. zmieniające rozporządzenie w sprawie świadczeń gwarantowanych z zakresu leczenia szpitalnego (Dz. U. z 2019 r. poz. 1062).
33. Obwieszczenie Ministra Zdrowia z dnia 2 lipca 2018 r. w sprawie mierników oceny prowadzenia diagnostyki onkologicznej i leczenia onkologicznego (DZ. URZ. Min. Zdr. 2018.52 Ogłoszony: 03.07.2018).
34. Ustawa z dnia 27 sierpnia 2004 r. o świadczeniach opieki zdrowotnej finansowanych ze środków publicznych (Dz. U. z 2021 r. poz. 1285).
35. https://ezdrowie.gov.pl/portal/home/zdrowe-dane/monitorowanie/nowotwory-zlosliwe-koncentracja-leczenia-zabiegowego (05.08.2021).
36. https://orka2.sejm.gov.pl/INT9.nsf/klucz/ATTC2FHM8/%24FILE/i22133-o1.pdf (22.08.2021).
37. Obwieszczenie Ministra Zdrowia z dnia 9 marca 2021 r. w sprawie ogłoszenia jednolitego tekstu rozporządzenia Ministra Zdrowia w sprawie programu pilotażowego opieki nad świadczeniobiorcą w ramach sieci onkologicznej (Dz.U. 2021 poz. 639).
38. Polskie Towarzystwo Onkologiczne. http://pto.med.pl/sites/default/files/aktualnosci/PILOTA%C5%BB_Komisja%20Zdrowia%20Senat_03.03.2021-skompresowany.pdf (04.08.2021).
39. Uchwała Nr 10 Rady Ministrów z dnia 4 lutego 2020 r. w sprawie przyjęcia programu wieloletniego pn. Narodowa Strategia Onkologiczna na lata 2020–2030 (DZ.URZ. RP 2020.189).
40. https://www.gov.pl/web/zdrowie/narodowa-strategia-onkologiczna (16.08.2021).
41. Szewieczek A, Strojek-Filus M. Względności informacji prezentowanych w sprawozdaniach statystycznych sporządzanych przez szpitale publiczne w Polsce. Studia Oeconomica Posnaniensia. 2016; 4(11): 254–270, doi: 10.18559/soep.2016.11.19.
42. The Healthcare Quality Strategy for NHS Scotland. The Scottish Government 2010. https://www.gov.scot/publications/healthcare-quality-strategy-nhsscotland/.
43. http://www.healthcareimprovementscotland.org/our_work/cancer_care_improvement/cancer_qpis/cancer_qpi_assurance_programme.aspx (05.08.2021).
44. Berg M, Meijerink Y, Gras M, et al. Feasibility first: developing public performance indicators on patient safety and clinical effectiveness for Dutch hospitals. Health Policy. 2005; 75(1): 59–73, doi: 10.1016/j.healthpol.2005.02.007, indexed in Pubmed: 16298229.
45. Powell AE, Davies HTO, Thomson RG. Using routine comparative data to assess the quality of health care: understanding and avoiding common pitfalls. Qual Saf Health Care. 2003; 12(2): 122–128, doi: 10.1136/qhc.12.2.122, indexed in Pubmed: 12679509.
46. Freeman T. Using performance indicators to improve health care quality in the public sector: a review of the literature. Health Serv Manage Res. 2002; 15(2): 126–137, doi: 10.1258/0951484021912897, indexed in Pubmed: 12028801.
47. Gibberd R, Hancock S, Howley P, et al. Using indicators to quantify the potential to improve the quality of health care. Int J Qual Health Care. 2004; 16 Suppl 1: i37–i43, doi: 10.1093/intqhc/mzh019, indexed in Pubmed: 15059985.
48. van Dishoeck AM. Indicators for quality of hospital care: Beyond the numbers. Doctoral dissertation 2015; 923. https://www.semanticscholar.org.
49. van Dishoeck AM, Koek MBG, Steyerberg EW, et al. Use of surgical-site infection rates to rank hospital performance across several types of surgery. Br J Surg. 2013; 100(5): 628–636; discussion 637, doi: 10.1002/bjs.9039, indexed in Pubmed: 23338243.
50. Carini E, Gabutti I, Frisicale EM, et al. Assessing hospital performance indicators. What dimensions? Evidence from an umbrella review. BMC Health Serv Res. 2020; 20(1): 1038, doi: 10.1186/s12913-020-05879-y, indexed in Pubmed: 33183304.
51. van Dishoeck AM, Looman CWN, van der Wilden-van Lier ECM, et al. Displaying random variation in comparing hospital performance. BMJ Qual Saf. 2011; 20(8): 651–657, doi: 10.1136/bmjqs.2009.035881, indexed in Pubmed: 21228432.
52. Mainz J. Defining and classifying clinical indicators for quality improvement. Int J Qual Health Care. 2003; 15(6): 523–530, doi: 10.1093/intqhc/mzg081, indexed in Pubmed: 14660535.
53. Fischer C, Lingsma HF, Marang-van de Mheen PJ, et al. Is the readmission rate a valid quality indicator? A review of the evidence. PLoS One. 2014; 9(11): e112282, doi: 10.1371/journal.pone.0112282, indexed in Pubmed: 25379675.
54. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000; 160(8): 1074–1081, doi: 10.1001/archinte.160.8.1074, indexed in Pubmed: 10789599.
55. Rozporządzenie Ministra Zdrowia z dnia 26 czerwca 2019 r. w sprawie zakresu niezbędnych informacji przetwarzanych przez świadczeniodawców, szczegółowego sposobu rejestrowania tych informacji oraz ich przekazywania podmiotom zobowiązanym do finansowania świadczeń ze środków publicznych (Dz.U. 2019 poz. 1207 z późn. zm.).
56. https://www.phillipsandcohen.com/upcoding-unbundling-fragmentation/ (05.08.2021).
57. Lee JS, Nathan H. Quality Measurement and Pay for Performance. Surg Oncol Clin N Am. 2018; 27(4): 621–632, doi: 10.1016/j.soc.2018.05.003, indexed in Pubmed: 30213407.
58. Wen L, Divers C, et al. Improving Quality of Care in Oncology Through Healthcare Payment Reform. Am J Manag Care. 2018; 24(3): e93–e98.
59. https://www.rynekzdrowia.pl/Serwis-Onkologia/Ustawa-o-Krajowej-Sieci-Onkologicznej-od-1-stycznia-2022-roku-Niedzielski-o-szczegolach,218537,1013.html (06.08.2021).
