Glossary of Terms for Healthcare Data Analytics



BALANCED SCORECARD:
A framework developed by Robert Kaplan and David Norton that suggests four perspectives of performance measurement to provide a comprehensive view of an organisation. These are service user perspective, internal management perspective, continuous improvement perspective and financial perspective.



BENCHMARK:

A point of reference or standard against which something can be measured.

BENCHMARKING:

The process of comparing the cost, cycle time, productivity, or quality of a specific process or method to another that is widely considered to be an industry standard or best practice.

CASEMIX:

Casemix is an internationally recognised system of measuring clinical activity incorporating the age, gender and health status of the population served by an organisation with a view to objective determination of hospital reimbursement.

DATA:

Data are numbers, symbols, words, images or graphics that have yet to be organised or analysed.

DATA DICTIONARY:

A descriptive list of names (also called representations or displays), definitions, and attributes of data elements to be collected in an information system or database.
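The structure described above can be sketched in Python; the element names, representations and permissible values below are illustrative assumptions, not entries from any real data dictionary.

```python
# A minimal sketch of a data dictionary: each data element carries a
# definition, a representation (display format) and its permissible values.
# All element names and values here are hypothetical examples.
data_dictionary = {
    "patient_gender": {
        "definition": "Administrative gender of the service user",
        "representation": "single-character code",
        "permissible_values": {"M", "F", "U"},
    },
    "admission_date": {
        "definition": "Date the service user was admitted",
        "representation": "ISO 8601 date string (YYYY-MM-DD)",
        "permissible_values": None,  # any valid calendar date
    },
}

def describe(element_name: str) -> str:
    """Return the recorded definition and representation for a data element."""
    entry = data_dictionary[element_name]
    return f"{element_name}: {entry['definition']} ({entry['representation']})"

print(describe("patient_gender"))
```

Keeping definitions, representations and permissible values together in one place is what lets every system that collects the element validate and interpret it consistently.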

DATA ELEMENT:

A unit of data for which the definition, identification, representation, and permissible values are specified by means of a set of attributes.

DOMAINS OF QUALITY:

The definable, preferably measurable and actionable, attributes of the system that are related to its functioning to maintain, restore or improve health.

HEALTH INFORMATION:

Health Information is defined as information, recorded in any form or medium, which is created or communicated by an organisation or individual relating to the past, present or future, physical or mental health or social care of an individual or cohort. It also includes information relating to the management of the health and social care system.


KPI SELECTION CRITERIA:

KPIs should be chosen based on the judgement and consensus of experts and potential users. The following is a list of characteristics and related questions which can be used to assist in the identification of KPIs, adapted from criteria developed by the World Health Organization (WHO).

Validity 
Does the KPI measure what it is supposed to measure? A valid KPI measures what it is supposed to measure and captures an important aspect of quality that can be influenced by the healthcare facility or system. Ideally KPIs selected should have links to processes and outcomes through scientific evidence. Measures that have been selected using scientific evidence possess high content validity and measures selected through consensus and guidelines will have high face validity. Content validity refers to whether the KPI captures important aspects of the quality of care provided. Face validity can be determined by the KPI making sense logically and clinically or from previous usage.

Reliability 
Does the KPI provide a consistent measure? The KPI should provide a consistent measure in the same population and settings irrespective of who performs the measurement. Reliability is similar to reproducibility to the extent that if the measure is repeated you should get the same result. Any variations in the result of the KPI should reflect actual changes in the process or outcome. Reliability can be influenced by training, the KPI definition and the precision of the data collection methods. Inter-rater reliability compares differences between evaluators performing the same measurement. Internal consistency examines the relationship between sub-indicators of the same overall measurement, and, if reliable, there should be correlation of the results. Test-retest reliability compares the difference between results when the same evaluator performs the measurement at different times. 
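Inter-rater reliability, as described above, can be illustrated with a simple percent-agreement calculation; the rater scores below are hypothetical, and in practice chance-corrected statistics such as Cohen's kappa are usually preferred.

```python
# Illustrative sketch: percent agreement between two evaluators applying
# the same KPI definition to the same cases. Scores here are assumptions.
def percent_agreement(rater_a, rater_b):
    """Fraction of cases on which two raters give the same score."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must score the same cases")
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return matches / len(rater_a)

# Two evaluators scoring five records (1 = criterion met, 0 = not met):
rater_a = [1, 0, 1, 1, 0]
rater_b = [1, 0, 1, 0, 0]
print(percent_agreement(rater_a, rater_b))  # 0.8
```

The same function applied to one evaluator's scores at two different times gives a crude measure of test-retest reliability.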

Explicit evidence base
Is the KPI supported by scientific evidence or the consensus of experts? KPIs should be based on scientific evidence, the consensus of expert opinion among health professionals, or on clinical guidelines. The preferred method of choosing KPIs is through evaluating the scientific evidence in support of each KPI and rating the strength of that evidence. One example of a rating system is to give the highest rating (“A” evidence) to evidence from meta-analyses of randomised controlled trials, a lesser rating (“B” evidence) to evidence from controlled studies without randomisation, and a lower rating still (“C” evidence) to data from epidemiological studies.

In healthcare, there may only be limited scientific evidence to support a KPI and it becomes necessary to avail of expert opinion. There are a number of methods by which a KPI can be developed through facilitating group consensus from a panel of experts, such as the Delphi technique, the RAND appropriateness method and from clinical guidelines. Appendix 2 gives a brief description of each method and Appendix 3 provides an example of a Delphi assessment instrument. The expert panel can exist independently of the advisory group and is used as a point of reference for the KPI development process.

Acceptability 
Are the KPIs acceptable? The data collected should be acceptable to those being assessed and to those carrying out the assessment. 

Feasibility 
Is it possible to collect the required data and is it worth the resources? There should be a feasibility analysis carried out to determine what data are currently collected and the resources required to collect any additional required data. The feasibility analysis should determine what data sources are currently available and if they are relevant to the needs of the current project. This will include determining if there are existing KPIs or benchmarking processes based on these data sources.

The reporting burden of collecting the data contained in the KPI should not outweigh the value of the information obtained. Preferably, data should be integrated into service delivery, and, where additional data are required that are not currently part of service delivery, there should be cost benefit analysis to determine if it is cost-effective to collect.

The feasibility analysis should also include what means are used to collect data and the limitations of the systems used for collection. It should also outline the reporting arrangements, including reporting arrangements for existing data collection and frequency of data collection and analyses.

Sensitivity 
Are small changes reflected in the results? Changes in the component of care being measured should be captured by the measurement process and reflected in the results. The performance indicator should be capable of detecting changes in the quality of care and these changes must be reflected in the resulting values. 

Specificity 
Does the KPI actually capture changes that occur in the service for which the measure is intended? Only changes in the area being measured should be reflected in the measurement results.

Relevance 
What useful decisions can be made from the KPI? The results of the measurement should be of use in planning and the subsequent delivery of healthcare and contribute to performance improvement.

Balance 
Do we have a set of KPIs that measure different aspects of the service? The final suite of indicators should measure different aspects of the service in order to provide a comprehensive picture of performance, including the user perspective.

Tested 
Have national and international KPIs been considered? There should be due consideration given to indicators that have been tried and tested in the national and international arena rather than developing new indicators for the same purpose.

Safe 
Will an undue focus on the KPI lead to potential adverse effects on other aspects of quality and safety? The indicator should not lead to an undue focus on the aspect of care being measured that may in turn lead to a compromise in the quality and safety of other aspects of the service.

Avoid duplication 
Has consideration been given to other projects or initiatives? Prior to developing the indicator due consideration should be given to other projects or initiatives to ensure that there will not be a duplication of data collection.

Timeliness
Is the information available within an acceptable period of time to inform decision-makers? The data should be available within a time period that enables decision-makers to utilise the data to inform their decision-making process. If the data are required for operational purposes, they will be required within a shorter timeframe than data used for long-term strategic purposes.

METADATA:

Data that defines and describes other data.

MINIMUM DATA SET:

The minimum set of data elements that are required to be collected for a specific purpose.

NUMERATOR:

The specifications that define the subset of data items in the denominator that meet the indicator criteria.
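The numerator/denominator relationship can be sketched as a small computation; the record fields and inclusion criteria below are hypothetical, not drawn from any real indicator set.

```python
# Hedged sketch: a KPI value as the numerator (cases meeting the indicator
# criteria) over the denominator (all eligible cases). The field names
# "admitted" and "assessed_within_24h" are illustrative assumptions.
records = [
    {"admitted": True, "assessed_within_24h": True},
    {"admitted": True, "assessed_within_24h": False},
    {"admitted": True, "assessed_within_24h": True},
    {"admitted": False, "assessed_within_24h": False},
]

# Denominator: eligible cases (here, all admitted service users).
denominator = [r for r in records if r["admitted"]]
# Numerator: the subset of the denominator meeting the indicator criteria.
numerator = [r for r in denominator if r["assessed_within_24h"]]

kpi_value = 100 * len(numerator) / len(denominator)
print(f"{kpi_value:.1f}%")  # 66.7%
```

Note that the numerator is always drawn from the denominator population, which is why the glossary defines it as a subset of the denominator's data items.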

KEY PERFORMANCE INDICATORS:

Key performance indicators are specific and measurable elements of practice that can be used to assess quality of care. Indicators are quantitative measures of structures, processes or outcomes that may be correlated with the quality of care delivered by the healthcare system.

PROCESS INDICATORS:

Performance indicators that monitor the activities carried out in the assessment/diagnosis and treatment of service users.

OUTCOME INDICATORS:

Performance indicators that monitor the desired states resulting from care processes, which may include reduction in morbidity and mortality, and improvement in the quality of life.

RELIABILITY:

Reliability is the consistency of your measurement, or the degree to which an instrument measures the same way each time it is used under the same condition with the same subjects.

STRUCTURE INDICATORS:

Performance indicators that monitor the attributes of the health system that contribute to its ability to meet the healthcare needs of the population.

The Delphi Technique:

The Delphi technique is a facilitated structured process whereby a panel of experts complete questionnaires (see Appendix 3 for example) remotely and, through feedback and scoring over a number of rounds where some KPIs are discarded, a consensus is achieved on a final set of KPIs. The panel need not ever meet face-to-face and each individual’s feedback is provided anonymously to the panel, which eliminates the possibility of undue influence by dominant personalities within the panel.
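One scoring round of the feedback-and-discard process described above can be sketched as follows; the candidate KPI names, panel scores and the median cut-off of 7 are all illustrative assumptions, not part of any standard Delphi protocol.

```python
# A minimal sketch of one round in a Delphi-style consensus process:
# each candidate KPI is scored by every panellist, and KPIs whose median
# score falls below a cut-off are discarded before the next round.
from statistics import median

def delphi_round(scores_by_kpi, cutoff=7):
    """Retain candidate KPIs whose median panel score meets the cut-off."""
    return {kpi: panel_scores
            for kpi, panel_scores in scores_by_kpi.items()
            if median(panel_scores) >= cutoff}

# Hypothetical scores (1-9 scale) from a four-member panel:
round_1 = {
    "time_to_assessment": [9, 8, 7, 9],
    "readmission_rate": [8, 7, 9, 8],
    "forms_completed": [4, 5, 6, 3],
}
retained = delphi_round(round_1)
print(sorted(retained))  # ['readmission_rate', 'time_to_assessment']
```

In a full Delphi process the retained set, together with anonymised feedback, would be re-circulated to the panel for further rounds until consensus is reached.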

The RAND appropriateness method:

The RAND appropriateness method combines scientific evidence with expert opinion by facilitating experts to rate, discuss and re-rate KPIs. Unlike the Delphi technique, the expert panel meets face-to-face to discuss possible KPIs and is given a copy of the scientific literature in support of the KPIs so that opinions can be grounded in the evidence base.

VALIDITY:

Validity of indicators refers to whether performance indicators are measuring what they are supposed to measure.
