Briefings on Coding Compliance Strategies, August 2016

Postoperative respiratory failure’s introduction into the CMS value-based reimbursement model

By Robert Stein, MD, CCDS, and Shannon Newell, RHIA, CCS, AHIMA-approved ICD-10-CM/PCS trainer

The accurate capture of acute respiratory failure has been a long-standing challenge for CDI programs. The accurate reporting of this condition as a post-procedural event can be even more difficult.

Data quality for post-procedural acute respiratory failure will impact quality outcomes linked to reimbursement under the Hospital-Acquired Condition Reduction Program (HACRP), as well as the Hospital Value-Based Purchasing Program (HVBP), if language in the fiscal year (FY) 2017 IPPS proposed rule is finalized.

In this article we’ll provide insights into how clinical documentation and reported codes may impact payments, and guidance on some common CDI challenges to strengthen data quality.

 

Performance may impact reimbursement in FY 2018

A quality measure named Patient Safety Indicator (PSI) 11 has existed since 1998, when it was developed by the Agency for Healthcare Research and Quality (AHRQ). The measure has been adopted for use by CMS and other comparative databases, such as the University HealthSystem Consortium and Healthgrades, to compare performance across hospitals.

If the proposed rule is finalized as written, how well your hospital performs on this measure will begin to impact hospital reimbursement under the two hospital pay-for-performance programs noted above. Reimbursement impact will begin in:

  • FY 2018 for the HACRP
  • FY 2019 for the HVBP

 

Performance for this measure will be assessed and scored, and the score will then be rolled into a weighted patient safety composite measure. Performance for the overall composite measure will then determine reimbursement impact. The name of this composite measure is the Patient Safety and Adverse Events Composite, previously known as the PSI 90 composite measure.

The Patient Safety and Adverse Events Composite measure was reviewed in last month’s column. What is important to note for PSI 11 is that performance on this measure accounts for approximately 22% of the composite weight.
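To show how a single component weight feeds the overall score, here is a deliberately simplified sketch of a weighted composite; the component list and every weight other than PSI 11's approximate 22% are invented for illustration and do not reflect the actual Patient Safety and Adverse Events Composite specification.

```python
# Hypothetical weighted-composite calculation. Only the ~22% weight attributed to
# PSI 11 comes from the article; the other components, weights, and scores are invented.
component_scores = {"PSI 11": 0.80, "Other PSI A": 0.90, "Other PSI B": 0.70}
component_weights = {"PSI 11": 0.22, "Other PSI A": 0.40, "Other PSI B": 0.38}

composite = sum(component_scores[c] * component_weights[c] for c in component_scores)
print(f"Composite score: {composite:.3f}")  # 0.802
```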

 

Data quality and PSI 11 performance

PSI 11 performance is determined by the diagnosis (ICD-10-CM) codes we submit on claims. This is a risk-adjusted measure evaluated using an observed over an expected ratio.
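To make the observed-over-expected arithmetic concrete, here is a minimal sketch; the discharge counts and event totals are invented, and in practice the expected events come from AHRQ's risk-adjustment model rather than a hand-picked number.

```python
# Hypothetical illustration of an observed-over-expected (O/E) ratio for PSI 11.
# All numbers are invented; the expected events would come from AHRQ's
# risk-adjustment model applied to each eligible discharge.
eligible_discharges = 2000      # elective surgical discharges included in the measure
observed_events = 12            # postoperative respiratory failure events coded on claims
expected_events = 16.0          # sum of model-predicted event probabilities across discharges

observed_rate = observed_events / eligible_discharges    # 0.006
expected_rate = expected_events / eligible_discharges    # 0.008
oe_ratio = observed_rate / expected_rate                 # 0.75

# An O/E ratio below 1.0 indicates fewer observed events than expected for this patient mix.
print(f"O/E ratio: {oe_ratio:.2f}")
```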

Discharges included in the measure:

  • All elective surgical discharges treated at the hospital are evaluated for comorbidities which impact the complexity of the patient mix and the associated expected rate of postoperative respiratory failure events

Identification of postoperative respiratory events:

  • Any discharge included in the measure which has one of the following ICD-10-CM codes on the claim triggers a reportable actual (or observed) postoperative respiratory failure event:

 

Additional details on key measure drivers can be found in the PSI 11 specifications on the AHRQ website at www.qualityindicators.ahrq.gov/Modules/psi_resources.aspx.

 

PSI 11 CDI vulnerabilities

In our review of thousands of medical records for hospitals across the country, we see common challenges that impact PSI 11 data quality. Below, we discuss a few of the questions we encounter most often to assist your internal data quality efforts.

 

How do I recognize acute respiratory failure?

  • Acute respiratory failure is at the end of a continuum initiated by respiratory dysfunction resulting in abnormalities of oxygenation and/or carbon dioxide elimination
  • Acute on chronic respiratory failure is an exacerbation or decompensation of chronic respiratory failure

Clinical criteria to identify the condition when it is not documented, and/or to validate a documented diagnosis, include:

  • The use of supplemental oxygen or non-invasive/invasive mechanical ventilation
  • Signs and symptoms indicative of increased work of breathing (e.g., dyspnea, tachypnea [respiratory rate greater than 28], respiratory distress, labored breathing, use of accessory muscles)
  • Impaired gas exchange, which may be identified by the following clinical indicators:

What is the definition of "prolonged" postoperative mechanical ventilation?

  • A code for mechanical ventilation (and intubation) should not be assigned postoperatively when ventilation is considered a normal part of the surgery.
  • Prolonged mechanical ventilation should be reported when ventilation continues for an extended period postoperatively. A general rule of thumb for "extended" is 48 hours, with the start time being the time of intubation for the procedure. Provider documentation should support whether what appears to be an extended time is in fact expected or unexpected given the procedure and/or patient complexity.

 

If the patient is extubated postoperatively, but continues to be treated with supplemental oxygen, when is a query for acute respiratory failure appropriate?

  • To determine if this represents acute respiratory failure, the values for impaired oxygen exchange can be used along with the amount of oxygen being administered to the patient.
  • The P/F ratio (PaO2/FiO2) can be a helpful tool for identifying the respiratory failure criteria above in a patient receiving supplemental oxygen.
  • If an ABG test is not available, an estimated P/F ratio can be calculated.
  • An illustration of the calculation follows.
  • The P/F ratio is a useful tool to validate the presence of acute hypoxemic respiratory failure when patients are receiving supplemental oxygen.
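As a rough illustration of the arithmetic, the sketch below computes a measured P/F ratio from an ABG value and an estimated ratio from pulse oximetry when no ABG is available. The SpO2-to-PaO2 conversion values and the cutoff of 300 are commonly cited reference points rather than figures taken from the article or the PSI 11 specification, so treat them as assumptions to confirm against your own clinical criteria.

```python
def pf_ratio(pao2_mmhg, fio2_fraction):
    """Measured P/F ratio: arterial PaO2 (mmHg) divided by FiO2 expressed as a fraction."""
    return pao2_mmhg / fio2_fraction

# Commonly cited SpO2 -> approximate PaO2 pairs (assumption, for illustration only).
SPO2_TO_PAO2 = {80: 44, 85: 50, 90: 60, 95: 80, 98: 100}

def estimated_pf_ratio(spo2_percent, fio2_fraction):
    """Estimated P/F ratio when no ABG is available, using an SpO2-to-PaO2 lookup."""
    return SPO2_TO_PAO2[spo2_percent] / fio2_fraction

# Example: a PaO2 of 60 mmHg on 40% oxygen (FiO2 0.40) gives a P/F ratio of 150,
# well below the commonly cited cutoff of 300 for acute hypoxemic respiratory failure.
print(pf_ratio(60, 0.40))            # 150.0
print(estimated_pf_ratio(90, 0.40))  # 150.0 (an SpO2 of 90% approximates a PaO2 of 60 mmHg)
```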

 

When respiratory failure exists in a post-procedural patient, how do I determine whether it is related to the procedure?

  • Physician education to promote clear documentation that relates the respiratory failure to an underlying condition (e.g., COPD), to the procedure, and/or to the anesthesia is essential.
  • When such documentation is not clear, a documentation query or clarification is required.

 

In addition to the above, other record review findings which negatively impact PSI 11 data quality include:

  • Accurate reporting of mechanical ventilation duration:
  • Accurate selection of post-procedural respiratory failure as the principal diagnosis:

 

Summary

Value-based care will increasingly utilize claims-based measures to assess quality and cost outcomes linked to payment. To strengthen organizational performance for PSI 11, the following steps are suggested:

  • Establish synergy between the CDI program and quality department to support:
  • Promote point-of-care capture of risk-adjustment variables pertinent to PSI 11 performance:
  • Actively engage your CDI physician advisor with medical staff education and CDI record reviews to facilitate and promote accurate capture of documentation relevant to accurate cohort identification and risk adjustment

 

Editor’s note: Stein is associate director of the MS-DRG Assurance program for Enjoin, providing clinical insight and education as part of the pre-bill review process. He earned his CCDS credential in June 2013 and completed AHIMA’s ICD-10-CM/PCS coder workforce training in August 2013. Newell is the director of CDI quality initiatives for Enjoin. Her team provides health systems with physician-led education and infrastructure design to sustainably address documentation and coding challenges essential to optimal performance under value-based payments across the continuum. She has extensive operational and consulting expertise in coding and clinical documentation improvement, performance improvement, case management, and health information management. You can reach Newell at (704) 931-8537 or [email protected]. Opinions expressed are those of the authors and do not represent HCPro or ACDIS.

 

Using data to drive physician engagement

"You are your own best teacher," or so the old adage goes. Sure, goodies and gifts are great for recognizing high-quality documentation, but for CDI teams struggling to obtain physician buy-in, the best strategy may be found in their providers’ own records.

With pay-for-performance and other quality initiatives underway as a part of healthcare reform, physicians need to see how they are performing in real time. Showing them this data in comparison to their peers demonstrates through real numbers how they stack up, says ACDIS Advisory Board member Robin Jones, RN, BSN, CCDS, MHA/Ed, system director for CDI at Mercy Health in Cincinnati.

 

Query responses

Until recently, most providers were not interested in seeing how unanswered clarifications or conflicting DRG assignment affected metrics, Jones says. CDI programs traditionally measure overall success by tracking items such as:

  • Query rate (overall and by CDI specialist/physician)
  • Physician response rate (overall and by CDI specialist/physician)
  • Physician agreement rate (overall and by CDI specialist/physician)
  • CC/MCC capture rates
  • MS-DRG shifts
  • Case-mix index changes

This data isn’t often shown to physicians, and yet, since queries represent the single most important tool for CDI programs, gleaning patterns of information from them often illuminates opportunities for improved physician support. For example, a lack of response from a particular physician might represent an opportunity for education or a change in approach, or the need for a new method of communication (e.g., notifying the physician of an outstanding query through a phone call rather than email).
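As a simple illustration of how such metrics might be tallied, the sketch below computes per-physician response and agreement rates from a query log; the record layout and field names are hypothetical and would need to be mapped to whatever your CDI software actually exports.

```python
from collections import defaultdict

# Hypothetical query log; the field names are illustrative, not tied to any specific CDI system.
queries = [
    {"physician": "Physician 1", "answered": True,  "agreed": True},
    {"physician": "Physician 1", "answered": True,  "agreed": False},
    {"physician": "Physician 2", "answered": False, "agreed": False},
]

totals = defaultdict(lambda: {"sent": 0, "answered": 0, "agreed": 0})
for q in queries:
    t = totals[q["physician"]]
    t["sent"] += 1
    t["answered"] += int(q["answered"])
    t["agreed"] += int(q["agreed"])

for physician, t in totals.items():
    response_rate = t["answered"] / t["sent"]
    agreement_rate = t["agreed"] / t["answered"] if t["answered"] else 0.0
    print(f"{physician}: response {response_rate:.0%}, agreement {agreement_rate:.0%}")
```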

Mercy’s CDI program lists physicians’ clarification response rates and places them in physician lounges for all to see, says Jones. To keep the information anonymous, the CDI team assigns each physician a number so they can quickly and safely gauge how they are performing in comparison to their peers.

"When physicians see their rate is lower than their peers, they hurriedly find our CDI supervisor," Jones says.

Mercy also provides physicians with an individualized list of DRGs assigned to their patients, so they can cross-reference that information to their own private billing.

 

Case studies

CDI programs can elevate the importance of data by tying it to case studies, real scenarios relevant to patient care, says ACDIS Advisory Board member Karen Newhouser, RN, BSN, CCDS, CCS, CCM, CDIP, director of education at MedPartners, based in Tampa, Florida.

Additional elements

Show providers an example of poor documentation, then compare it to the same case with improved documentation and show how the improvement affects a variety of metrics, Newhouser says. Collectively, members of the ACDIS Advisory Board suggest sharing information regarding the following data points:

  • Severity of illness/risk of mortality (ROM)
  • Length of stay (LOS), average LOS, geometric mean LOS, and expected LOS
  • Readmission rates
  • Observed over expected mortality ratio

 

Be transparent so physicians can see the benefits, both financial and quality-related, of precise documentation, Newhouser says.

"Physicians need to know that the money is important if they want to have a hospital to practice in, updated equipment, and a paycheck," she explains. But, "it is imperative to remind them that while money is important, it is quality that must come first."

For each metric, consider the data for the facility as a whole, and compare it to other facilities within the system or region, says Michelle McCormack, RN, BSN, CCDS, CRCR, director of CDI at Stanford (California) Health Care. Sharing such information with the physicians illustrates how their documentation affects the larger hospital community.

Then, drill down into the data to identify individual metrics, comparing physicians against one another within the facility and within a particular specialty or service line, says McCormack.

 

External analysis

Beyond simply showing physicians the data, CDI teams must teach providers how documentation and coding affects their personal profile as well as their facility’s standing, says Judy Schade, RN, MSN, CCM, CCDS, CDI specialist at Mayo Clinic Hospital in Phoenix. A host of consumer websites cull data and employ a variety of algorithms to rank physicians and hospitals; many of these are well known, such as CMS’ Hospital and Physician Compare sites, Healthgrades, and Leapfrog.

Understand how those practicing within your facility measure up in these reports and share important milestones as necessary, Schade says. When positive shifts occur that correlate with documentation improvement focus areas, tout those accomplishments and acknowledge the role the physicians play.

"Physicians will be engaged if they understand how documentation and coding impacts their personal profile," Schade says. "Physicians are by nature competitive, and so they aim to be high achievers." CDI programs can use this to their advantage.

Nuanced details of these reports need analysis, warns Paul Evans, RHIA, CCS, CCS-P, CCDS, manager for regional CDI at Sutter West Bay in San Francisco.

For example, The San Francisco Chronicle recently published raw mortality outcomes data for the region. Since the paper did not understand how observed versus expected mortality plays a role in telling the story of a patient’s care, its analysis left a tertiary care center in the Sutter family looking as though it had worse mortality rates than its competitors despite the fact that it treated extremely sick patients, Evans explains.

"You have to be careful to compare apples to apples," Schade agrees.

With internal data in hand, Evans showed the high-level ROM of that facility’s patients and demonstrated that the facility actually outperformed its competitors.

"Unfortunately, you can’t explain statistics and ROM to the typical layperson, but you certainly can communicate it to your staff and to your physicians," Evans says.

 

Data discretion

Some data discretion may be warranted. Choose data elements that are most relevant to the CDI program’s goals at the time, as well as targeted to the specific physicians in the audience. Remember to share success stories as goals for those data elements are reached.

"CDI managers should consider all data points and make sure the numbers they present to the physician accurately represents the message they need to convey and targets the needs of the physicians themselves," says ACDIS Advisory Board member Wendy Clesi, RN, CCDS, director of CDI services at Enjoin.

For example, if a service line that has not been responding to queries begins to consistently increase its response rate, include the improvements in that response rate along with the other metrics you present, McCormack says.

"You want to select metrics that will allow you to see progress as well as areas of opportunity," she says.

It can be difficult to choose which data points to share, McCormack says, but sharing such concrete analysis leads to greater support from physicians overall.

 

Editor’s note: This article originally appeared in the CDI Journal. For any questions, contact editor Amanda Tyler at [email protected].

 

AHIMA practice brief addresses clinical validity and coding compliance

 

We, as coders, clinical documentation specialists, and compliance officers, are actively invested in coding compliance, aren’t we? AHIMA and ACDIS emphasize coding compliance in their codes of ethics. If we aren’t interested in coding compliance, why are we reading newsletters named Briefings on Coding Compliance Strategies and other similar publications?

Many coders I know code based solely on what a doctor documents, claiming that they are not physicians and do not have the authority to challenge a diagnosis or documented treatment by a provider.

In fact, AHIMA’s 2008 practice brief, Managing an Effective Query Process, emphasized that we should not query physicians if the clinical indicators do not support a provider’s documented diagnosis. This practice brief stated:

Providers often make clinical diagnoses that may not appear to be consistent with test results. Queries should not be used to question a provider’s clinical judgment, but rather to clarify documentation when it fails to meet any of the five criteria listed: legibility, completeness, clarity, consistency, or precision.

 

While AHIMA told us then not to query to ascertain the clinical validity of documentation, the United States Department of Justice (DOJ), or the Department of Health and Human Services, must not have gotten the memo.

In June 2009, Johns Hopkins Bayview Medical Center, in Baltimore, Maryland, settled a False Claims Act case for $2.75 million. This happened after the DOJ said that the hospital’s "employees allegedly focused on lab test results which might indicate the presence of a complicating secondary diagnosis such as malnutrition or respiratory failure, and advised treating doctors to include such a diagnosis in the medical record, even if the condition was not actually diagnosed or treated during the hospital stay."

Baptist Healthcare Inc. and its affiliated hospitals near Louisville, Kentucky, paid $8.9 million in 2011 to settle a case involving the documentation, coding, and clinical validity of respiratory infections and inflammations, pulmonary edema, respiratory failure, and septicemia. These amounts do not include the costs of attorneys, expert witnesses, and other intangibles expended in legal defense. Visit the DOJ’s website to learn more about these settlements: www.justice.gov.

The Medicare Provider Quarterly Compliance Newsletter then emphasized, in July 2011, that providers and facilities are to determine the validity of documented acute respiratory failure, and that Recovery Audit Contractors have leeway to change a principal diagnosis based on provider documentation when they believe the clinical indicators do not support the documented diagnosis. Read the newsletter at http://tinyurl.com/jb5aauu, page 2.

AHIMA has since changed its tune. In its 2013 Query Practice Brief, AHIMA stated that a query is appropriate when the health record documentation "provides a diagnosis without underlying clinical validation."

The brief adds, "when a practitioner documents a diagnosis that does not appear to be supported by the clinical indicators in the health record, it is currently advised that a query be generated to address the conflict or that the conflict be addressed through the facility’s escalation process." AHIMA’s sample escalation policy is available at http://tinyurl.com/2013AHIMAescalationpolicy.

AHIMA recently stepped this up a notch by publishing a clinical validation practice brief in the July 2016 Journal of the American Health Information Management Association, available to AHIMA members at http://tinyurl.com/2016AHIMAclinicalvalidation. I encourage you to get a copy from an AHIMA member or from your local medical library and to discuss this document with your compliance officer or attorney.

Given that AHIMA is one of the ICD-10-CM/PCS Cooperating Parties, its practice briefs are often quoted by the DOJ and thus must be read closely and, if agreeable, incorporated into one’s compliance plan. Several points are made in this practice brief, most of which I agree with, but some of which I do not. These include:

 

Compliance

AHIMA states:

Compliance, whether it’s a formal compliance department that understands compliant coding or coding management performing quality audits, can support the clinical validation process. Compliance can assist in developing a standardized query policy that applies to all who perform the query process within the organization regardless of the department in which they are located.

 

I wholeheartedly agree; however, AHIMA does not articulate under what circumstances, or how, a facility can omit an ICD-10-CM code for a documented diagnosis that is re-authenticated by an authorized provider.

I personally believe that if recovery auditors can deny codes for documented diagnoses based on their clinical judgment, then facilities should be able to do the same, particularly if they believe that the code would not survive reasonable scrutiny. I wish that they had discussed this.

 

Clinical validation

AHIMA states, "it appears clinical validation may be most appropriate under the purview of the CDI professional with a clinical background," emphasizing that it is the coder’s role to become more clinically astute as to refer cases to a nurse or physician advisor as necessary.

I disagree to some extent. The ICD-10-CM Official Guidelines state that ICD-10-CM code assignment is a joint effort between the provider and the coder, not the provider and the CDI specialist or the CDI specialist and the coder. So, I believe that a properly trained and certified coder who is well versed in clinical terminology and definitions should be able to have the conversation with the provider alone and not have to delegate it to another individual who may not be as experienced. That said, if the coder is not confident in the situation, he or she should have a lifeline for clinical support to ensure the validity of the documented diagnosis or treatment.

 

Referencing clinical criteria

AHIMA and Coding Clinic for ICD-10-CM both say that the Coding Clinic should not be referenced as a source for clinical criteria supporting provider documentation. I wholeheartedly agree, except in cases where no definition of a clinical term is available in the physician literature, such as with functional quadriplegia or acute pulmonary insufficiency following surgery or trauma.

For these two conditions, Coding Clinic and/or the ICD-10-CM Official Guidelines are the only sources for definitions to ensure their validity. The most recent high-impact physician literature or textbooks should be referenced when defining other clinical conditions, or when defending claims of clinical invalidity. A physician advisor can point out which references are highly respected.

 

Coders and CDI defining diagnoses

AHIMA states:

Although it is tempting for CDI and coding professionals to define diagnoses for providers, doing so is beyond their scope. For example, it is not appropriate for a CDI or coding professional to omit the diagnosis of malnutrition when it is based on the patient’s pre-albumin level rather than American Society for Parenteral and Enteral Nutrition (ASPEN) criteria. Many practicing physicians have not adopted ASPEN criteria and there is no federal or American Medical Association (AMA) requirement stating that ASPEN criteria must be utilized by a physician in making the diagnosis of malnutrition.

While this is technically true, given that CDI and coding professionals are not licensed to practice medicine, nor are involved with direct patient care under most circumstances, they still should be their facility’s representatives to encourage the medical staff, as a whole, to adopt facilitywide definitions of challenging clinical terms (e.g., sepsis, malnutrition, acute respiratory failure). They should also monitor and encourage individual providers as they adopt these definitions in their documentation and escalate noncompliance with these definitions to physician advisors, compliance officers, or medical staff leadership.

While one physician may not use ASPEN, or the Academy of Nutrition and Dietetics criteria, to define and diagnose malnutrition, I challenge readers to find any support for pre-albumin or albumin as a current clinical indicator for malnutrition, or criteria more authoritative than those of the nation’s premier association of dietitians and nutritional support teams for defining, diagnosing, and documenting malnutrition in the adult and pediatric populations.

 

Multiple-choice queries

AHIMA appears to have changed the language for multiple-choice queries with this practice brief, especially when clinical validity is an issue. In an example for validating documented sepsis without apparent clinical indicators, they offered the following multiple-choice options:

  • Sepsis was confirmed
  • Sepsis was ruled out
  • Sepsis was without clinical significance
  • Unable to determine
  • Other ______________

Given that this is AHIMA’s query format, we’re obligated to consider it; however, this does cause some difficulties. What can a coder do with "sepsis was without clinical significance" or "unable to determine," if that’s the option the provider selects? If "sepsis was without clinical significance" is selected, do we not code it with the belief that the documented condition doesn’t qualify as an additional diagnosis as defined in the ICD-10-CM guidelines? How many of us have run into physicians who document "unable to determine" as a way of avoiding the question?

I believe that if either of these two options is chosen, the record should be escalated to a physician advisor or coding manager, who can implement the facility’s policy on coding documented diagnoses that lack defensible clinical indicators.

 

Clinical validation auditing

AHIMA states, "auditing a small sample (e.g., 15 records per year) of coded records by each coding professional (both contract and employed) is one way to ensure that each coding professional is given some education on clinical validation."

While true, I believe that these audits should include CDI specialists, given that many are not members of AHIMA and may not read AHIMA practice briefs, much less believe that the briefs apply to them. AHIMA does emphasize its position as one of the four Cooperating Parties for ICD-10-CM/PCS and states that this brief is "relevant to all clinical documentation improvement professionals and those who manage the CDI function, regardless of the healthcare setting in which they work or their credentials."

 

Summary

In conclusion, please be sure to read this practice brief and consider how this affects your organization. Given that there are no standard definitions for at-risk ICD-10-CM/PCS terminology published by any of the Cooperating Parties or payers, and given that medical terminology used in documentation should be defined by physicians and their professional organizations, I encourage all facilities to engage with their medical staff to provide indicators for the clinical terminologies most often challenged by payers.

I also encourage facilities to develop and implement policies, with appropriate boundaries and limits, that ensure the validity of these diagnoses prior to the submission of HIPAA transaction sets.

 

Editor’s note: Dr. Kennedy is a general internist and certified coder, specializing in clinical effectiveness, medical informatics, and clinical documentation and coding improvement strategies. Contact him at 615-479-7021 or at [email protected]. Advice given is general. Readers should consult professional counsel for specific legal, ethical, clinical, or coding questions. For any other questions, contact editor Amanda Tyler at [email protected]. Opinions expressed are those of the author and do not necessarily represent HCPro, ACDIS, or any of its subsidiaries.

 

CMS releases 2017 ICD-10-CM codes

CMS has released the final list of new and revised ICD-10-CM codes available for reporting beginning October 1, 2016, with more than 2,000 changes.

The files include the code descriptions in tabular order, as well as an updated index and tables for neoplasms and drugs.
