Part 13. Taxpayer Advocate Service
Chapter 5. TAS Balanced Measures
Section 1. TAS Balanced Performance Measurement System
13.5.1 TAS Balanced Performance Measurement System
Manual Transmittal
March 16, 2023
Purpose
(1) This transmits a complete text and Table of Contents for IRM 13.5.1, Taxpayer Advocate Service, Balanced Measures.
Material Changes
(1) Editorial changes were made throughout the IRM to correct grammar and sentence structure for clarity.
(2) IRM 13.5.1.1.4 is updated to clarify (6).
(3) IRM 13.5.1.1.5 is updated to include Centralized Case Intake (CCI) as an acronym in the table.
(4) IRM 13.5.1.3 is updated to include the new Customer Satisfaction measure for Congressional Aides.
(5) IRM 13.5.1.3.1.1(2) is updated to clarify the Privacy Act notice.
(6) IRM 13.5.1.3.1.4 is updated to add (c).
(7) IRM 13.5.1.3.1.6(2)a is updated to change Program Letter to officially published internal guidance.
(8) IRM 13.5.1.3.3 is a new subsection added for the Customer Satisfaction measure for Congressional Aides.
(9) IRM 13.5.1.4.1.4 is updated to clarify the process of selecting cases for monthly review.
(10) In May of 2020, TAS eliminated the dialogue process for Case Advocacy. Therefore, the following references were removed, and IRM sections were renumbered, as appropriate:
13.5.1.4.1.5 Case Advocacy Dialogue Process
13.5.1.4.1.6(1)(f) Case Advocacy Quality Roles and Responsibilities - QRP
13.5.1.4.1.10(2)(c) Case Quality Advocacy - Roles and Responsibilities - LTA
13.5.1.4.2.4(2)(d) Systemic Advocacy Quality - Roles and Responsibilities - QRP Director
IRM 13.5.1.4.1.8 was moved to 13.5.1.4.1.10 for reorganization of materials.
(11) IRM 13.5.1.4.1.11 and IRM 13.5.1.4.1.12 are new subsections that add quality roles and responsibilities for CCI.
Effect on Other Documents
Supersedes IRM 13.5.1 dated September 29, 2020
Audience
All Taxpayer Advocate Service employees
Effective Date
(03-16-2023)
Erin M. Collins
National Taxpayer Advocate
Program Scope
(1) This section provides an overview of the Balanced Performance Measurement System for TAS and outlines how TAS uses balanced measures to monitor, measure, and improve organizational performance.
(2) This section supplements IRS guidance in IRM 1.5.1, Managing Statistics in a Balanced Measurement System, The IRS Balanced Performance Measurement System.
Background
(1) In fiscal year 2000, TAS developed a system of balanced measures to assist in measuring and improving organizational performance.
(2) TAS's Balanced Performance Measurement System includes the following components:
Employee Satisfaction
Customer Satisfaction
Business Results (Quality and Quantity)
Authority
(1) Internal Revenue Code (IRC) §7803 established the Office of the National Taxpayer Advocate to assist taxpayers in resolving problems with the Internal Revenue Service (IRS), identify areas in which taxpayers have problems dealing with the IRS, propose changes in the administrative practices of the IRS, and identify potential legislative changes to mitigate those problems.
Responsibilities
(1) TAS managers are responsible for using balanced measures data to monitor, measure, and improve organizational performance.
Program Objectives
(1) TAS uses balanced performance measures data to:
assess program effectiveness and service delivery;
understand why measured data has changed; and
determine what actions were taken or could be taken to influence results.
(2) Caution must be exercised when sharing numeric targets and performance results in order to avoid driving unintended consequences.
(3) The performance of any one work unit should not be used as a standard by which the performance of any other work unit is evaluated. Each work unit has unique factors, specific tax issues to address and differences in the types of taxpayers served.
(4) The numerical results achieved for any measure will never directly equate to the evaluation of an individual.
(5) Additional information about the appropriate use of measures, including the definition of record of tax enforcement results (ROTERs), setting targets, use of measures in evaluations, etc., is in IRM 1.5.1, Managing Statistics in a Balanced Measurement System, the IRS Balanced Performance Measurement System.
(6) All managers must ensure strict adherence to IRM guidance on the appropriate use and application of balanced measures. The organizational measures of Customer Satisfaction, Employee Satisfaction, and Business Results may be used to evaluate the performance of, or to impose or suggest production goals for an organizational unit but may not be used to directly determine the evaluation of individual employees.
Acronyms
(1) The following table contains a list of acronyms used throughout this IRM.
Acronym | Definition |
AP | Advocacy Projects |
BA | Business Assessment |
BOE | Business Objects Enterprise |
BSP | Business Systems Planning |
CCI | Centralized Case Intake |
CIPSEA | Confidential Information Protection and Statistical Efficiency Act |
CQR | Case Quality Review |
DEDCA | Deputy Executive Director of Case Advocacy |
DNTA | Deputy National Taxpayer Advocate |
EDCA | Executive Director of Case Advocacy |
EDCA ITS | Executive Director of Case Advocacy Intake and Technical Support |
II | Immediate Interventions |
IRC | Internal Revenue Code |
IRS | Internal Revenue Service |
LTA | Local Taxpayer Advocate |
QRP | Quality Review Program |
QSS | Quality Sample Selection |
ROTER | Record of Tax Enforcement Results |
SA | Systemic Advocacy |
SAMS | Systemic Advocacy Management System |
SOI | Statistics of Income |
TAGM | Taxpayer Advocate Group Manager |
TAMIS | Taxpayer Advocate Management Information System |
TAS Employee Satisfaction Measure
(1) The employee satisfaction measure is a numerical rating of employees' perception of the management practices, organizational barriers, and overall work environment that affect employees' efforts to do a good job. Employee satisfaction is a key component of employee engagement, which is the degree of employees' motivation, commitment, and involvement in the mission of the organization.
(2) The goal of the employee satisfaction component is to measure, among other factors bearing upon employee satisfaction, the quality of supervision and the adequacy of training and support services.
(3) Employee satisfaction is measured through an annual servicewide survey administered to all TAS employees. The survey provides employees with the opportunity to provide confidential information regarding their satisfaction in important areas such as:
Leadership policies and practices;
Work environment;
Rewards and recognition for professional accomplishment and personal contributions to achieving organizational mission;
Opportunity for professional development and growth; and
Opportunity to contribute to achieving organizational mission.
(4) Survey results are received at the national, area, and office levels and, where applicable, at the workgroup level. To protect employee anonymity, results are received only if the minimum number of respondents is met.
(5) Survey results should be used by all levels of the organization to make improvements that address employees' concerns and increase employees' level of engagement and satisfaction.
(6) TAS must consider and address employee satisfaction in organizational planning, budgeting, and review activities.
Employee Satisfaction - Use and Limitations of Survey Information
(1) Each TAS manager receives an employee survey result report to share with their workgroup. Survey results are available only if a minimum of 10 responses were received. If the minimum is not met, the manager receives a report reflecting higher level results. Example: A Case Advocate group had fewer than 10 responses. Responses from the employees rolled up to the next (LTA) level. The LTA level had a total of 15 responses, and consequently a results report was available. The TAGM receives a copy of the LTA level results to share with the employees. The LTA also receives a copy of the LTA level results to share with their direct reports.
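The roll-up rule above can be sketched in code. This is a hypothetical illustration, not an actual TAS system; the function and level names are assumptions made for the example.

```python
# Sketch of the anonymity roll-up rule: survey results are reported at the
# lowest organizational level that received at least the minimum number of
# responses. Helper names here are hypothetical.
MIN_RESPONSES = 10  # minimum responses needed to release a results report

def reporting_level(response_counts):
    """Given (level_name, total_responses) pairs ordered from lowest level
    (workgroup) to highest (national), return the first level whose response
    count meets the minimum, or None if no level qualifies."""
    for level, count in response_counts:
        if count >= MIN_RESPONSES:
            return level
    return None  # results withheld at every level

# Example from the text: a Case Advocate group with fewer than 10 responses
# rolls up to the LTA level, which had 15 responses in total.
levels = [("Case Advocate group", 8), ("LTA office", 15), ("Area", 40)]
print(reporting_level(levels))  # LTA office
```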
(2) National Office will have access to information collected from the employee satisfaction survey at the national, area, and office levels, if available. Areas will only have access to nationwide information and information for their own area and the offices under their chain of command. Offices will only have access to nationwide information and information for their area and their own office.
(3) Survey results of any one office or workgroup should not be used as a standard by which any other unit is evaluated, as there are inherent differences among workgroups and offices.
(4) Survey results should be used in coordination with feedback received from other sources such as discussions with employees, town hall meetings, and elevated issues, to identify and address employees' concerns and make improvements that will increase employee satisfaction.
Employee Satisfaction - Roles and Responsibilities - Business Assessment (BA) Director
(1) The BA Director is responsible for the following nationwide activities:
Serving as TAS's primary Point of Contact for IRS's Human Capital Office, which leads Servicewide Employee Engagement;
Leading TAS Employee Engagement Coordinators to ensure program requirements for administering the survey are met;
Ensuring workgroups are properly reflected in the database used to administer the survey. This ensures every manager receives a results report;
Analyzing nationwide survey results and other data, providing recommendations for improvements and collaborating with stakeholders to implement improvements;
Developing and implementing an annual nationwide employee engagement communication plan;
Developing and maintaining an annual nationwide employee engagement action plan that includes activities to address and increase employee satisfaction; and
Adhering to confidentiality rules governed by the Confidential Information Protection and Statistical Efficiency Act (CIPSEA).
Employee Satisfaction - Roles and Responsibilities - TAS Senior Managers
(1) TAS senior managers are responsible for activities shown in 13.5.1.2.4, Employee Satisfaction - Roles and Responsibilities - TAS Managers, for all workgroups within their office or department.
(2) In addition, TAS senior managers should conduct ongoing discussions with their subordinate managers to share best practices and make improvements to address employees' concerns and increase employee satisfaction.
Employee Satisfaction - Roles and Responsibilities - TAS Managers
(1) TAS managers’ annual performance plan includes employee satisfaction.
(2) TAS managers are responsible for using survey results and employee feedback to develop personal commitments to incorporate activities to promote employee satisfaction in their daily operations and interactions with employees.
(3) TAS managers are also responsible for annually reviewing results from the employee survey and conducting annual meetings with their workgroup to:
Engage employees in meaningful dialogue to identify and overcome barriers that impact employees' abilities to perform their jobs effectively and increase their job satisfaction;
Recognize the accomplishments of the workgroup and its members;
Identify areas of strength to build on improvement initiatives to address employees' concerns; and
Develop and document actionable improvement initiatives.
(4) Following the annual meeting, TAS managers are responsible for:
Implementing their workgroup's identified improvement activities that are within the workgroup's control;
Elevating improvement initiatives that are beyond the workgroup's control but within the organization's control;
Following up on elevated workgroup recommendations; and
Monitoring implemented initiatives to assess whether the desired outcome is achieved and making any adjustments, if appropriate.
(5) Year-round, TAS managers should incorporate activities and discussions with employees to address employees' concerns and together develop improvements that will address concerns, improve work processes and increase employee satisfaction.
(6) Managers will provide sufficient time and resources for all personnel who perform duties related to the employee satisfaction program.
Employee Satisfaction - Roles and Responsibilities - TAS Employees
(1) Employees are encouraged to support the employee satisfaction survey process by completing the confidential surveys and answering all items candidly and honestly.
(2) Employees are also encouraged to actively participate in the survey meetings to discuss employee survey results and concerns and provide recommendations to increase employee satisfaction.
(3) Employee engagement and satisfaction is a year-round commitment and partnership for both the employee and manager. Employees are encouraged to raise concerns and recommendations for improvement throughout the year with their manager so the manager can have the opportunity to address those concerns.
TAS Customer Satisfaction Measure
(1) TAS measures customer satisfaction for Case Advocacy, Systemic Advocacy, and Congressional Aides.
(2) Customer Satisfaction Measures for Case Advocacy, Systemic Advocacy and Congressional Aides are discussed in greater detail in subsequent IRM sections.
Case Advocacy Customer Satisfaction Measure
(1) TAS uses a paper survey to measure the customer satisfaction of those taxpayers who were part of the case resolution process. Taxpayers with closed cases are randomly selected each month to complete the survey. The BA unit is responsible for the development and operation of the survey process. This includes working with TAS Research in developing the survey and sample plan, compiling the data, and providing comprehensive reports. The methodology for sampling includes a process to ensure respondents cannot be identified or associated with their responses.
(2) A statistically valid sampling plan for the level of customer satisfaction TAS intends to measure is prepared by the TAS Research Unit. It is used to determine the number of randomly selected participants for the survey. The level of customer satisfaction measurement is always at the national level; however, it may be extended to the area or office level as needed.
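The sample-size determination described above can be illustrated with the standard formula for estimating a proportion, n = z²·p(1−p)/E². The IRM does not specify the method TAS Research actually uses; the confidence level and precision margin below are assumptions chosen only for illustration.

```python
import math

# Illustrative sample-size calculation for estimating a satisfaction rate.
# Standard normal-approximation formula: n = z^2 * p(1-p) / E^2.
# p=0.5 is the conservative (worst-case) assumption for an unknown rate.
def sample_size(p=0.5, margin=0.05, z=1.645):
    """Minimum number of survey responses needed to estimate a proportion p
    within +/- margin at the confidence level implied by z
    (z = 1.645 corresponds to roughly 90 percent confidence)."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size())  # 271
```

With these assumed inputs, about 271 responses would support a nationwide estimate within ±5 points at 90 percent confidence; a larger sample would be needed to extend the measure to the area or office level.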
(3) TAS uses the survey results to assess the advocates’ performance in various aspects, such as:
Was the advocate responsive to the customer's needs regarding timeliness, accuracy, fairness, and resolution of the problem;
Did the advocate listen; and
Did the advocate help the taxpayer understand their rights as a taxpayer?
(4) The goal for the Case Advocacy Customer Satisfaction Measure is to measure whether TAS customers (taxpayers, their representatives and/or Congressional Aides) felt they received courteous, timely, and professional treatment by the TAS personnel with whom they dealt.
Case Advocacy Customer Satisfaction - Use and Limitations of Survey Information
(1) In addition to using the survey results to assess its performance, TAS analyzes the information collected to identify strategies to improve customer satisfaction, enhance communication, and reach the best possible outcome for taxpayers.
(2) Survey reports will not identify individual TAS employees nor the taxpayers or practitioners who respond to the survey. TAS is required to provide to each survey responder a Privacy Act notice which states: “Data collected will be shared with the TAS staff, but the responses will be used for research and aggregate reporting purposes only and will not be used for other non-statistical or non-research purposes. The information that you provide will be protected as required by law.”
(3) TAS will never use survey information to identify an individual employee or to evaluate the performance of an individual employee.
(4) All identifying information, such as name, address, case file number, and phone number, is removed before customer satisfaction data is compiled.
(5) BA and TAS Research will provide an end-of-year report at the national level from the customer satisfaction survey results.
(6) The National office has access to all information collected from the customer satisfaction survey regardless of the level at which it is collected (national, area, or office). Any data collected below the national level will be shared as follows:
Areas will only have access to nationwide information and information for their own area and the offices under their chain of command.
Offices will only have access to nationwide information and information for their area and their own office.
(7) All levels of the organization should use the information collected from the survey to conduct analysis, explore best practices, and develop plans to improve customer satisfaction.
Note: Customer satisfaction results are only one measurement of program performance and must be balanced with other measures and indicators to evaluate the overall success of TAS advocacy and to develop plans for improvement.
(8) Survey results of any one office or workgroup should not be used as a standard by which any other unit is evaluated because of inherent differences among offices and workgroups.
(9) Customer satisfaction results cannot be used to evaluate any employee or to impose or suggest goals for any employee.
Case Advocacy Customer Satisfaction - Roles and Responsibilities - BA Director
(1) The BA Director is responsible for the national customer satisfaction survey and related activities including:
Collaborating with the Executive Director of Case Advocacy (EDCA) to develop the specifications of the annual customer satisfaction survey plan;
Consulting with EDCA in developing a survey instrument that will provide actionable information to drive customer service improvements;
Collaborating with EDCA to identify organizational training needs, suggest strategic actions and participate in studies to improve customer service;
Procuring, administering, and overseeing the survey process and delivery of periodic reports that provide a basis for TAS's customer service improvement efforts;
Recording survey result data in Business Objects Enterprise (BOE); and
Analyzing nationwide survey results and other data, providing recommendations for improvements and collaborating with stakeholders to implement improvements.
Case Advocacy Customer Satisfaction - Roles and Responsibilities - EDCA
(1) EDCA is a principal management authority for aligning TAS's organizational actions with customers' expectations.
(2) EDCA is responsible for the following activities:
Setting performance goals at the appropriate level and taking into account the balance of available resources and operational conditions;
Coordinating with the TAS Director of Employee Support and Development, and Deputy Executive Directors of Case Advocacy (DEDCA) to meet training needs identified from the customer satisfaction data;
Ensuring customer survey results are available throughout TAS's area and LTA offices;
Evaluating actions taken at all organizational levels in response to customer satisfaction reports and data analysis; and
Collaborating with TAS BA to develop nationwide strategies to improve customer satisfaction.
Case Advocacy Customer Satisfaction - Roles and Responsibilities - EDCA ITS
(1) The EDCA ITS is a principal management authority for aligning TAS's organizational actions with customers' expectations.
(2) The EDCA ITS is responsible for the following activities:
Coordinating with the TAS Director of Employee Support and Development, and DEDCA to meet training needs identified from the customer satisfaction data.
Maintaining an efficient workload intake and delivery system that promotes achievement of the balanced measures and TAS objectives.
Working with the Washington DC LTA office on initiatives to improve customer satisfaction.
Case Advocacy Customer Satisfaction - Roles and Responsibilities - DEDCA
(1) The DEDCAs are responsible for the customer satisfaction program for the offices within their area.
(2) DEDCAs and LTAs are collectively responsible for analyzing the data using process management techniques and for engaging employees through their representative organizations in identifying local initiatives to improve customer satisfaction.
(3) DEDCAs will use the detailed data analysis provided with the BA prepared reports and other analysis and provide guidance to LTAs that supplements the annual reports and drives organizational improvement activities.
(4) DEDCAs will periodically evaluate the impact of improvement action plans and implement corrections, as appropriate.
Case Advocacy Customer Satisfaction - Roles and Responsibilities - LTA
(1) LTAs are responsible for the customer satisfaction program for their office.
(2) The LTA is responsible for:
Using the customer satisfaction survey data, along with the other balanced measures and officially published internal guidance, to recommend actionable suggestions to improve TAS's ability to identify and respond to taxpayers' concerns. Actionable suggestions could include, but are not limited to, procedural changes within the office or at the national level initiated through a Systemic Advocacy Management System (SAMS) request, recommendations for training, equipment, etc.
Monitoring processes, customer satisfaction survey data, and any other available information to determine whether improvements have had the desired impact, and making adjustments as needed to achieve desired results.
Case Advocacy Customer Satisfaction - Roles and Responsibilities - TAGM
(1) TAGMs are responsible for promoting customer satisfaction program awareness at the group level.
(2) The TAGM is responsible for:
Understanding customers’ needs and expectations in order to support the LTA in developing improvement initiatives.
Effectively communicating customer needs and expectations to their employees to implement improvement initiatives.
Monitoring customer satisfaction and acting on results. All feedback should be used to facilitate continuous improvement in day-to-day operations.
Systemic Advocacy Customer Satisfaction Measure
(1) The goal of the Systemic Advocacy customer satisfaction survey is to measure the level of satisfaction for TAS internal customers (IRS employees) who submitted issues to the Systemic Advocacy Management System (SAMS).
(2) Submitters from outside the IRS are not surveyed.
Systemic Advocacy Customer Satisfaction - Use and Limitations of Survey Information
(1) Systemic Advocacy uses the information from the survey to gauge Customer Satisfaction and identify possible enhancements that may improve satisfaction.
(2) Systemic Advocacy does not collect any data that identifies the person who responded to the survey.
Congressional Aides Customer Satisfaction Measure
(1) As it deems necessary, EDCA will request a customer satisfaction survey to measure the level of satisfaction of the Congressional Aides who have worked with their Local Taxpayer Advocates on constituents' tax issues.
Congressional Aides Customer Satisfaction Measure - Roles and Responsibilities - BA
(1) BA is responsible for the development and operation of the Congressional Aides survey process. This includes working with TAS Research in developing the survey and sample plan, compiling the data, and providing comprehensive reports. TAS Research uses an online survey product for administering the survey.
(2) BA is responsible for working with the EDCA to ensure the survey meets EDCA’s needs.
(3) BA is responsible for coordinating with the EDCA staff to encourage Congressional Aides to participate in the survey.
(4) Only Congressional Aides who have been identified as associated with an LTA office are surveyed.
Congressional Aides Customer Satisfaction Measure - Use and Limitations of Survey Information
(1) The DEDCAs use the information from the Congressional Aides survey to gauge congressional customer satisfaction and identify possible enhancements that may improve satisfaction.
(2) The DEDCAs are responsible for setting performance goals, including accounting for available resources and the operational conditions.
TAS Business Results Measures
(1) Business results measures include numerical scores determined under the quantity (output) and quality (efficiency) measures at an operational level.
(2) Quantity or output measures consist of outcome-neutral production and resource data such as the number of cases closed and inventory information. Quality measures are derived from TAS's Quality Review Program (QRP) and are discussed in greater detail in the subsequent sections.
(3) The goal of business results measures is to assess TAS's performance in achieving its overall mission and strategic goals.
(4) Business results measures are one of three components of the balanced measurement system. Before taking actions to improve business results, the other two components - customer satisfaction and employee satisfaction - must be considered and addressed in order to carry out TAS's programs and functions successfully.
TAS Quality Measures
(1) Quality measures are numeric indicators of the extent to which completed work meets prescribed standards - TAS quality attributes.
(2) TAS measures quality of work completed by Case Advocacy and Systemic Advocacy through its specifically dedicated staff in QRP.
(3) Case Advocacy and Systemic Advocacy quality measures, management processes, and roles and responsibilities are discussed in greater detail in the subsequent IRM sections.
Case Advocacy Quality Measures
(1) TAS Case Advocacy quality measures are numerical scores indicating the extent to which TAS casework meets the prescribed quality attributes. The attributes measure whether the casework actions correctly followed Internal Revenue Manual procedures and other official case processing guidance such as Interim Guidance memoranda.
(2) These results are indicators of quality and are used to improve TAS's advocacy efforts.
(3) QRP provides quality results at the national, area, and office level. The results are based on QRP's review of the randomly selected closed cases from every office each month.
(4) The quality results are a product review based on the case in its entirety, regardless of whether a case was partially worked and transferred from one office to another.
Case Advocacy Quality Attributes
(1) TAS's case quality attributes together make up the overall quality score.
(2) The quality attributes measure TAS's effectiveness in key aspects such as advocacy, communication with taxpayers, resolution of taxpayers' issues, and adherence to procedural requirements.
Case Advocacy Quality - Use and Limitations of Review Information
(1) National office will have access to quality results information at the national, area, and office levels. Areas will have access to nationwide information and information for their own area and the offices under their chain of command. Offices will have access to nationwide information and information for their area and their own office.
(2) TAS quality results are based on stratified random samples at the office level and are only statistically valid at the office, area and national level. They are not statistically valid at an individual case advocate level, office group level, or one or more Primary Core Issue Codes level.
(3) Quality scores are an estimate of the population. Because a sample does not include all cases in a population, the estimate resulting from the sample will not equal the actual quality in the entire population and will have some variability associated with it. The precision margin and level of confidence are used to convey the variability of an estimate. Example: An 85 percent quality estimate with a +/- 5 percent precision margin means if TAS reviewed 100 percent of the closures, there is 90 percent confidence the actual quality score of the population would fall somewhere between 80 percent and 90 percent.
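The example above can be reproduced with the standard normal-approximation margin-of-error formula, E = z·√(p(1−p)/n). The sample size used here is hypothetical, chosen only so that the numbers match the 85 percent ± 5 point, 90 percent confidence example in the text.

```python
import math

# Sketch of the precision-margin example: an 85 percent quality estimate
# with a +/- 5 point margin at 90 percent confidence.
z = 1.645  # z-score for 90 percent confidence
p = 0.85   # estimated quality score from the monthly sample
n = 139    # hypothetical number of reviewed cases (assumed for illustration)

# Margin of error for a proportion under the normal approximation.
margin = z * math.sqrt(p * (1 - p) / n)
low, high = p - margin, p + margin
print(f"{p:.0%} +/- {margin:.1%} -> interval {low:.1%} to {high:.1%}")
```

This is why a reviewed sample of closures can stand in for the full population: the estimate carries a known, quantifiable variability rather than being exact.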
(4) Quality scores may be shared with all TAS employees in balance with other measures, with a clear purpose for sharing them, and not in a way that implies individual or group targets below the organizational unit level.
(5) The monthly review results include results at the individual case, office, Area and national level. The individual case level results may not be shared with TAGMs, lead case advocates, case advocates, lead intake advocates, intake advocates or other similar positions in offices. However, managers and designated analysts may discuss with employees in a group or training setting, the issue and procedures for advocating more effectively for taxpayers. The identity of the employee who worked the case should not be revealed and employees should not be asked to defend why they worked cases in a particular way.
(6) All levels of the organization should use the quality result information to conduct analysis, explore best practices and develop plans to increase TAS’s effectiveness in advocating for taxpayers and case processing.
Note: Case quality results are only one measurement of program performance and must be balanced with other measures and indicators to evaluate the overall success of TAS advocacy and develop plans for improvement.
(7) Quality results of any one office should not be used as a standard by which any other office is evaluated, as there are inherent differences among offices.
(8) Quality results may not be used to evaluate any employee or to impose or suggest goals for any employee.
Case Advocacy Quality Monthly Review Sample
(1) Each month, QRP reviews a sample of randomly selected closed cases from every office to measure the extent to which casework meets the prescribed quality attributes. Cases are randomly selected through TAMIS for the monthly sample and are accessible in the Quality Sample Selection (QSS) report. The QSS report lists the randomly selected cases, which QRP will review.
(2) The QSS report selects only TAS cases eligible for the quality sample: closed criteria 1 through 9 cases (regular and reopen), including Congressional, Senate Finance Committee, and Tax Forum cases (when assigned to a TAS office after the close of the forum), and excluding Special Case Code F1, Tax Forum Event (Non-CQR) cases.
(3) To ensure an eligible case is not inadvertently excluded as a potential part of the random sampling, the QSS report was designed to use the TAMIS Closed Date and not the TAS Closed Date. Therefore, a TAS case with a TAS Closed Date in one month and a TAMIS Closed Date in a subsequent month will be in the subsequent month’s quality sample pool for possible random selection.
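The closed-date rule above can be sketched as a small helper. The function and field names are hypothetical and do not reflect an actual TAMIS or QSS interface; the sketch only captures the rule that the TAMIS Closed Date, not the TAS Closed Date, controls which month's sampling pool a case enters.

```python
from datetime import date

# Sketch of the sampling-pool rule (hypothetical helper): a case enters the
# monthly quality sample pool based on its TAMIS Closed Date, not its TAS
# Closed Date, so a case closed near month-end is never inadvertently
# excluded from the random sample.
def sample_pool_month(tas_closed: date, tamis_closed: date) -> tuple:
    """Return the (year, month) of the quality sample pool the case falls in."""
    return (tamis_closed.year, tamis_closed.month)

# A case TAS-closed on March 30 but TAMIS-closed on April 2 falls in
# April's pool for possible random selection.
print(sample_pool_month(date(2023, 3, 30), date(2023, 4, 2)))  # (2023, 4)
```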
(4) QRP may replace a case from the QSS report using an alternate case under the following circumstances:
The case is a Tax Forum case and not correctly coded with Special Case Code, F1.
A clerical error caused the erroneous closure and the case’s documentation supports the clerical error.
Extenuating circumstances exist in which cases would not properly reflect a sample of TAS's normal work, and the DNTA approves exclusion for cases meeting set criteria.
(5) To ensure the selection of the correct original or reopen case, the QSS report also includes the reopen sequence number. QRP will review from the reopen date to case closure.
(6) If the total number of cases closed for the month is less than the required sample size, then all of the cases closed that month will be subject to quality review.
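The monthly selection rule, including the fallback in (6), can be sketched as follows. The helper is hypothetical and stands in for the TAMIS/QSS selection process described above.

```python
import random

# Sketch of the monthly selection rule (hypothetical helper): draw a simple
# random sample of the month's closed cases, or review every case when the
# month's closures fall short of the required sample size.
def select_for_review(closed_cases, required_sample_size, seed=None):
    if len(closed_cases) <= required_sample_size:
        return list(closed_cases)  # fewer closures than needed: review all
    rng = random.Random(seed)  # seeded only so the sketch is reproducible
    return rng.sample(closed_cases, required_sample_size)

cases = [f"case-{i}" for i in range(1, 8)]      # 7 closures this month
print(len(select_for_review(cases, 10)))        # 7: all cases reviewed
print(len(select_for_review(cases, 5, seed=1))) # 5: random sample drawn
```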
Case Advocacy Quality - Roles and Responsibilities - QRP
(1) The QRP Director has overall responsibility for the QRP and oversees the following activities:
Ensuring QRP releases the monthly quality results;
Maintaining and revising, as appropriate, the TAS quality review database with support from TAS Business Modernization (BM);
Creating and maintaining monthly and cumulative quality reports at the national, the area and the office levels with support from BM;
Publishing the quality results data and notifying the LTA and area designees of the reports availability;
Evaluating the monthly case quality review sample size and making adjustments when appropriate; and
Analyzing nationwide results, providing recommendations for improvement, and collaborating with stakeholders to implement improvements.
Case Advocacy Quality - Roles and Responsibilities - EDCA
(1) EDCA is TAS’s principal management authority for aligning TAS’s organizational actions and goals with customer expectations by:
Proposing national, area and office-level performance goals, taking into account the balance of available resources and operational conditions;
Determining resources required for TAS’s Area and LTA offices to effectively manage the quality process;
Coordinating with the QRP Director, DEDCAs, and Employee Support and Development Director to meet employee training needs identified through the quality review data;
Ensuring the quality results are available to Area and LTA offices to improve quality; and
Evaluating the actions taken at all organizational levels in response to quality reports and data analyses.
Case Advocacy Quality - Roles and Responsibilities - DEDCA
(1) TAS DEDCAs are responsible for the case quality review program for their Area.
(2) The DEDCA is responsible for:
Reviewing the area’s and associated offices’ monthly and cumulative quality reports;
Analyzing quality review data for the area and offices within that Area to identify trends, procedures needing improvement, training needs, systemic problems, and best practices; and
Using the analytical results to improve quality in the area’s offices (e.g., share best practices, identify and provide needed training, work with the offices within the area with room for improvement, etc.).
Case Advocacy Quality - Roles and Responsibilities - LTA
(1) LTAs are responsible for the quality review program in their office.
(2) The LTA is responsible for:
Reviewing the office’s monthly and cumulative quality reports;
Disseminating clarifications in TAS procedures to TAS managers and case advocates in the office;
Analyzing quality review data for the office to identify trends, procedures needing improvement, training needs, systemic problems, and best practices;
Using the analytical results to improve quality in the office (e.g., share best practices, set up training classes, work with managers and case advocates on specific improvement opportunities, etc.); and
Ensuring cases that warrant reopening are correctly resolved.
Case Advocacy Quality - Roles and Responsibilities - TAGM
(1) TAGMs are responsible for case quality program awareness at the group level.
(2) The TAGM is responsible for:
Assisting the LTA in developing improvement initiatives, including sharing at the group level to foster employee buy-in (participation);
Facilitating the development of improvement initiatives by engaging employees during group meetings and during coaching opportunities; and
Targeting identified errors during case reviews as outlined in officially published internal guidance to gauge improvement.
Case Advocacy Quality - Roles and Responsibilities - EDCA ITS
(1) EDCA ITS is a principal management authority for aligning TAS’s organizational actions and goals with customer expectations by:
Proposing national and organizational performance goals, taking into account the balance of available resources and operational conditions;
Determining resources required to effectively manage the quality process;
Coordinating with the QRP Director, CCI department managers, and Employee Support and Development Director to meet employee training needs identified through the quality review data;
Ensuring the quality results are available to ITS leadership to improve quality; and
Evaluating the actions taken at all organizational levels in response to quality reports and data analyses.
Case Advocacy Quality - Roles and Responsibilities – CCI Department Managers
(1) CCI Department Managers are responsible for the quality review program in their offices.
(2) The CCI Department Manager is responsible for:
Reviewing the monthly and cumulative quality reports;
Disseminating clarifications in TAS procedures to CCI managers and intake advocates in their offices;
Analyzing quality review data for their offices to identify trends, procedures needing improvement, training needs, systemic problems, and best practices; and
Using the analytical results to improve quality in their offices (e.g., share best practices, set up training classes, work with managers and intake advocates on specific improvement opportunities, etc.).
Case Advocacy Quality - Roles and Responsibilities – CCI Managers
(1) CCI Managers are responsible for case quality program awareness at the group level.
(2) The CCI Manager is responsible for:
Assisting the Department Manager in developing improvement initiatives, including sharing at the group level to foster employee buy-in (participation);
Facilitating the development of improvement initiatives by engaging employees during group meetings and during coaching opportunities; and
Targeting identified errors during case reviews as outlined in officially published internal guidance to gauge improvement.
TAS Systemic Advocacy Quality Measures
(1) TAS SA quality, a business results measure, is a numerical score of the extent to which TAS SA advocacy projects (AP) and immediate interventions (II) meet the prescribed quality attributes. The attributes measure whether the project work actions correctly followed applicable guidance, such as the IRM and Interim Guidance memoranda.
(2) TAS derives its SA quality from the QRP’s monthly quality reviews of the closed SA APs and IIs.
(3) Each month, QRP shares with SA the individual and cumulative quality results of APs and IIs.
Systemic Advocacy Quality Attributes
(1) QRP performs a quality review on all closed APs and IIs to determine whether SA worked them according to the prescribed standards and procedures.
(2) The attributes are categorized into three focus areas:
Advocacy - Taking the appropriate actions to resolve taxpayer problems.
Customer - Providing clear and complete responses to submitters through accurate, effective, and comprehensive written and verbal contacts, so that submitters find TAS employees professional, positive, knowledgeable, and competent.
Procedural - Resolving submitters' inquiries efficiently, within the prescribed guidelines and timeframes, and through proper workload management.
Note: A list of the individual SA Quality Attributes can be found on the QRP's Quality Report SharePoint site.
Sharing and Using TAS Systemic Advocacy Quality Review Results
(1) Once QRP has completed its review of APs and IIs, QRP shares the results with the Executive Director of Systemic Advocacy (EDSA) and their designated staff members.
(2) TAS uses project quality review data to provide a basis for measuring and improving program effectiveness by:
Analyzing the results to identify defect trends and root causes;
Developing plans to increase effectiveness in advocating for taxpayers and project processing; and
Exploring best practices.
(3) Managers and designated analysts may discuss with employees the merits and issues of a particular project that was reviewed, but the emphasis must be on techniques for advocating more effectively for taxpayers, not on the quality score. The identity of the employee who worked the case should not be revealed, and employees should not be asked to defend why they worked cases in a particular way. Managers should also be sensitive to whether the project was worked by a bargaining unit or non-bargaining unit employee.
TAS Systemic Advocacy Dialogue Process
(1) TAS established a dialogue process to enable SA to request a reconsideration of an error identified in QRP’s review.
(2) The dialogue process may result in QRP reversing a charged error. If the error is overturned, QRP will revise the quality scores.
(3) The dialogue process is also a useful tool for identifying improvement opportunities in procedural guidance, advocacy, and training.
TAS Systemic Advocacy Dialogue Process and Timeframes
(1) Systemic Advocacy initiates the dialogue process by contacting QRP to submit information regarding dialogued attributes.
(2) Instructions for submitting the dialogue and timeframes for the dialogue process are posted on QRP’s SharePoint site.
Systemic Advocacy Quality - Roles and Responsibilities - QRP Director
(1) The QRP Director has overall responsibility for QRP.
(2) The QRP Director oversees the following activities:
Ensuring QRP reviews and documents the results of the monthly projects reviewed, posts results in SharePoint, and shares results with Systemic Advocacy;
Maintaining and revising, as appropriate, the TAS quality review data collection instrument with support from Statistics of Income (SOI);
Creating and maintaining monthly, quarterly and cumulative quality reports; and
Providing quarterly analysis of quality result trends.
Systemic Advocacy Quality - Roles and Responsibilities - EDSA
(1) The EDSA has overall responsibility for Systemic Advocacy, which includes the processing of APs and IIs.
(2) The EDSA oversees the following quality related activities in SA:
Proposing SA performance goals, taking into account the balance of available resources and operational conditions;
Determining resources required in SA to effectively manage projects;
Coordinating with the QRP Director to meet employee training needs identified through the quality review data; and
Evaluating the actions taken in response to quality reports and data analyses.
Using Diagnostic Tools in TAS
(1) TAS uses diagnostic tools to analyze factors that influence performance and encourages dialogue about specific actions that managers may take to improve customer satisfaction, employee satisfaction, and business results.
(2) The following are examples of TAS diagnostic tools:
Median closed case cycle time
Mean closed case cycle time
Relief granted
Number of Taxpayer Assistance Orders issued
Closures with secondary issues
Internal Management Document (IMD) recommendations made to the IRS
IMD recommendations accepted by the IRS
Cycle time analyzed by unique segmentation
Customer satisfaction survey results, such as responses to particular survey questions, improvement priorities identified, and narrative comments
Employee survey results, such as responses to particular survey questions
Employee experience/training/skill levels
External factors (e.g., tax law, status of economy)
Employee absenteeism, turnover rates
Physical resources
Receipts
Inventory level
Closure to receipt ratio
Workload mix
Staffing resources
Cost information
Regular criteria receipts (excludes reopen criteria receipts)
Regular criteria ending inventory
Regular criteria closures as a percentage of regular criteria receipts (excludes reopen criteria receipts)
Reopen criteria receipts as a percentage of regular criteria closures
Permanent staffing on rolls
(3) TAS does not use diagnostic tools to measure individual performance.
(4) TAS may establish improvement targets for diagnostic tools, but only in direct support of overarching budget or operational-level measures.
(5) Using diagnostic tools to compare one unit against other units may be appropriate for conducting analysis, exploring best practices, or seeking process enhancements to support improvement of the overarching balanced measure(s).
(6) Diagnostic tools include any type of data that is helpful in understanding what influences and impacts balanced measures. It is permissible to use ROTERs as diagnostic tools. See IRM 1.5.2.11, Managing Statistics in a Balanced Measurement System, Uses of Section 1204 Statistics for additional information on ROTERs.
Case Advocacy National Quality Sample Size
(1) TAS determines the national sample plan based on consultations with Statistics of Income (SOI) personnel and EDCA, and secures approval from the Deputy National Taxpayer Advocate for any changes in sampling methodology. If an office's sample size changes based on the existing approved sampling plan, QRP will notify the impacted offices.
(2) The national random sample is divided or stratified among individual offices at the LTA level. Stratifying the random sample by individual office improves the statistical accuracy of the quality estimate for each office because the variation in quality within an individual office is likely to be lower than the variation in quality between individual offices.
(3) The monthly sample size in each office is based on the number of randomly sampled cases necessary to provide a statistically valid estimate of case quality at the LTA level by the end of the fiscal year.
(4) TAS uses sample sizes that achieve a minimum confidence level of 90 percent in the quality estimate, with a maximum margin of error, or precision margin, of 5 percent above or below the estimate. TAS may establish annual sampling plans that review more cases than necessary to achieve 90 percent confidence and 5 percent precision in order to achieve other organizational goals, such as trend analysis or targeted program analysis. However, TAS will not sample fewer than the minimum number of cases required to achieve 90 percent confidence with 5 percent precision by the end of the fiscal year in each office. Monthly sample size in each office is determined at the beginning of each fiscal year and, in the interest of administrative convenience, generally will not vary during the year; sample sizes may, however, vary between offices based on technical advice from SOI.
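As a rough illustration of the 90 percent confidence / 5 percent precision requirement in (4), the textbook sample-size formula for estimating a proportion gives the order of magnitude involved. This is a simplified sketch, not SOI's actual sampling plan: the z value of 1.645, the worst-case assumption p = 0.5, and the even monthly split are illustrative assumptions.

```python
import math

def min_sample_size(z: float = 1.645, margin: float = 0.05, p: float = 0.5) -> int:
    """Minimum annual sample size for estimating a proportion.

    z      -- z-score for the confidence level (1.645 ~ 90 percent)
    margin -- maximum precision margin (+/- 5 percent)
    p      -- assumed quality rate; 0.5 is the worst case (largest n)
    """
    return math.ceil((z ** 2) * p * (1 - p) / margin ** 2)

annual_n = min_sample_size()          # 271 cases per office per fiscal year
monthly_n = math.ceil(annual_n / 12)  # roughly 23 cases per office per month
print(annual_n, monthly_n)            # 271 23
```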
(5) SOI uses a method of calculating confidence levels and precision margins called the Standard Score, or z-Score Distribution Method. Using this method, SOI can make statistically valid estimates of quality, confidence level, and precision margin at any organizational level once a random sample of 40 or more cases has been reviewed by QRP at that organizational level. The organizational level can be national, area, or office level.
(6) The z-Score Distribution Method requires a minimum of 40 random case reviews because the distribution of sampling estimates starts to resemble a standard normal, bell-shaped curve once 40 or more randomly sampled cases are available. At that point, statisticians can assign confidence levels and precision margins to the quality estimate at the organizational level based on the known properties of the normal curve.
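The z-Score Distribution Method described in (5) and (6) can be illustrated with the standard normal-approximation confidence interval for a proportion. A minimal sketch, assuming a 90 percent confidence level (z = 1.645); the function name, inputs, and the 85-of-100 figures are illustrative, not SOI's implementation.

```python
import math

def quality_confidence_interval(passed: int, reviewed: int, z: float = 1.645):
    """Normal-approximation (z-score) confidence interval for a quality rate.

    Applied only once `reviewed` >= 40, per the rule of thumb that the
    sampling distribution is then approximately normal.
    """
    if reviewed < 40:
        raise ValueError("need at least 40 reviewed cases for the z-score method")
    rate = passed / reviewed
    margin = z * math.sqrt(rate * (1 - rate) / reviewed)  # precision margin
    return rate, rate - margin, rate + margin

# e.g., 85 of 100 reviewed cases met the quality standard:
rate, low, high = quality_confidence_interval(85, 100)
print(round(rate, 3), round(low, 3), round(high, 3))
```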
(7) Monthly sample size determines when 40 or more sampled cases are available at each organizational level. At the national level, 40 or more cases are available during the first month of each fiscal year, so SOI can estimate quality with 90 percent confidence and 5 percent precision starting in October each year. Similarly, SOI can compute statistically valid estimates for area offices in the first month of the year if the total sample taken from offices within the area consists of 40 or more cases during that month. In contrast, LTA offices do not reach cumulative sample sizes of 40 or more cases in the first month of the fiscal year. Therefore, SOI cannot compute statistically valid estimates with precision margins for LTA offices until later in the fiscal year, when the cumulative sample size has reached 40 or more cases per LTA office.
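The timing described in (7) follows from simple arithmetic: a level sampling m cases per month accumulates 40 reviews after ceil(40 / m) months. A sketch (the monthly sample figures below are invented for illustration):

```python
import math

MIN_REVIEWS = 40  # minimum cumulative sample for the z-score method

def months_until_valid(monthly_sample: int) -> int:
    """Months of cumulative sampling before an estimate becomes valid."""
    return math.ceil(MIN_REVIEWS / monthly_sample)

# A hypothetical LTA office sampling 5 cases per month reaches 40 in
# month 8 (May, for a fiscal year starting in October); a level sampling
# 40+ cases per month is valid in month 1 (October).
print(months_until_valid(5))    # 8
print(months_until_valid(40))   # 1
```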
(8) SOI “weights” TAS quality results by the total number of cases closed in an office during a month. Weighting is necessary because TAS samples a fixed number of cases in each office per month, but the total number of cases closed in each office varies every month. Therefore, each case in the sample actually represents a certain number of cases that were closed during the month but were not included in the random sample. Weighting adjusts the quality estimate to account for the cases that were not included in the sample during the month.
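The weighting described in (8) amounts to a closure-weighted average of office quality rates, rather than a simple average that would overstate the influence of small offices. A sketch with invented office figures:

```python
# Each tuple: (cases closed in the office that month, sample quality rate).
# A fixed sample size per office means each sampled case "represents"
# closures / sample_size closed cases; weighting by total closures
# corrects the estimate for that.
offices = [
    (300, 0.90),   # large office
    (120, 0.80),
    (60,  0.95),   # small office
]

total_closed = sum(closed for closed, _ in offices)
weighted_quality = sum(closed * rate for closed, rate in offices) / total_closed
simple_average = sum(rate for _, rate in offices) / len(offices)

print(round(weighted_quality, 5), round(simple_average, 5))
```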