
Seeking compliance individual to assist office

We are looking to outsource a compliance person to help get our office in order. We need someone to come in and evaluate our practice for possible compliance issues and steer us in the correct direction. Does anyone have any suggestions of someone that might be considered for this? We are in Alabama so would certainly love to find someone close to our area.

Thank you in advance!!

Medical Billing and Coding Forum

Bolster billing compliance: Implement a Medicare Part A triple-check process


Medicare billing is rife with payer offshoots and evolving regulations, and it can be difficult to navigate without a strategy to weather claim scrutiny and withstand the gaze of CMS’ various auditing contractors.

Enter the triple-check process, a time-tested internal auditing strategy used by proactive long-term care providers to facilitate billing accuracy and compliance the first time a UB-04 claim form is submitted. As its name suggests, triple check is a layered verification process that involves staff members from billing, nursing, and therapy departments, the three core disciplines required to submit a clean claim. But this sturdy foundation is also pliable, allowing a facility to easily adapt the procedure to the various types of claims it files.

Read on for an expert version of the triple-check process, modified from the HCPro book The Medicare Billing Manual for Long-Term Care, written by Frosini Rubertino, RN, BSN, C-NE, CDONA/LTC. This specific triple-check procedure is designed to mobilize key staff to ensure accuracy and timely submission of Part A claims.

 

Procedure

Each month, the SNF will collect all Medicare Part A billing information ready for submission and enlist the following individuals to carry out their designated roles in verifying the accuracy of these items: administrator, director of nursing, MDS coordinator, facility rehab director or designee, business office manager, medical records personnel, and central supply staff.

The following is a breakdown of each of these staff members’ responsibilities in the triple-check process:

Business office manager and medical records personnel

  • Verify that the qualifying stay information recorded on the UB-04 aligns with that on the medical records face sheet.

 

Business office manager

  • Verify that each resident has benefit days available in the HIPAA Eligibility Transaction System.
  • Verify the admit date on the UB-04 aligns with the date in the manual census log.
  • Verify covered service dates listed on the UB-04 align with those in the Medicare and manual census logs.
  • Verify that a resident’s financial file contains a signed and completed Medicare Secondary Payer form whenever applicable.

 

Business office manager and MDS coordinator

  • Verify that ADLs are correct and are supported by documentation. Confirm that staff have coded all other contributory items (e.g., mood, IVs).
  • Verify that ARDs on each MDS align with the occurrence dates found at form locators (FL) 31–34 on the UB-04.
  • Verify that the RUG level listed on each MDS aligns with that found at FL 44 on the UB-04.
  • Verify that the assessment type for each MDS aligns with the modifier found at FL 44 on the UB-04.
  • Verify that the number of accommodation units listed on the UB-04 aligns with the assessment type for each MDS. Verify that the total number of accommodation units aligns with corresponding covered service dates.

 

Facility rehab director, MDS coordinator, and business office manager

  • Verify that physical therapy minutes listed on the daily treatment grid align with those noted in the service log. Align the days and minutes documented in the MDS with those on the treatment grid. Align the number of units billed on the UB-04 with those in the service log.
  • Verify that each principal diagnosis is accurate, that all secondary diagnoses support skilled care, and that every ICD-9 code corresponds to an appropriate diagnosis.
  • Verify that occupational therapy minutes recorded on the daily treatment grid align with those in the service log. Align the days and minutes in the MDS with those on the treatment grid. Align the number of units billed on the UB-04 with those in the service log.
  • Verify that speech therapy minutes listed on the daily treatment grid align with those noted in the service log. Align the days and minutes in the MDS with those on the treatment grid. Align the number of units billed on the UB-04 with those in the service log.
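The three therapy bullets above describe the same reconciliation for each discipline: minutes on the daily treatment grid, the service log, and the MDS must agree, and the units billed on the UB-04 must correspond to the service log. A minimal sketch of that cross-check follows; the function and field names are illustrative, and the 15-minutes-per-unit conversion is a simplifying assumption, not Medicare's actual timed-unit rounding rules.

```python
# Hypothetical cross-check for one therapy discipline (PT, OT, or ST).
def therapy_records_align(grid_minutes: int, log_minutes: int,
                          mds_minutes: int, ub04_units: int,
                          minutes_per_unit: int = 15) -> bool:
    # All three documentation sources must report the same minutes.
    if not (grid_minutes == log_minutes == mds_minutes):
        return False
    # Billed units on the UB-04 must correspond to the logged minutes
    # (simplified conversion; real billing follows timed-unit rules).
    return ub04_units == round(grid_minutes / minutes_per_unit)

print(therapy_records_align(60, 60, 60, 4))   # True: all sources agree
print(therapy_records_align(60, 45, 60, 4))   # False: service log differs
```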

 

DON and medical records personnel

  • Verify each resident’s need for Medicare skilled intervention by reviewing supporting clinical documentation that corresponds with the dates of service listed in the manual census log.
  • Verify that each (re)certification form has been completed and signed by the appropriate physician.
  • Verify that each physician order has been obtained and implemented.
  • Verify that each chart reflects appropriate charting guidelines. Confirm that charting has been completed at least once in every 24-hour period, relates to skilled service provided, and supports therapy.

 

Facility rehab director

  • Verify that physician orders include rehabilitation.
  • Verify that each evaluation notes the prior level of function.
  • Verify that clinical documentation contains a progress note establishing the need for continued skilled intervention.

 

Administrator

  • Chair the triple-check meeting (detailed below), and ensure that the entire process is completed by appropriate staff each month before Medicare claims are submitted. Participation in the triple check will allow the administrator to monitor the effectiveness of key operational processes carried out by the facility’s interdisciplinary team (IDT) on an ongoing basis.

Triple-check meeting and audit tool

Each of the SNF’s triple-check participants should complete their respective duties prior to the Medicare triple-check meeting, which will be held monthly before the SNF bills for a given batch of services. In other words, the meeting is not an occasion for staff to complete their initial claim component(s). Instead, it’s a chance for IDT members to cross-check the work of their colleagues by verifying the accuracy of claim items that others have completed, thereby ensuring each element has been studied by multiple sets of eyes.

The triple-check meeting will also serve as the platform for the SNF’s business office manager to document the completion of each integral item on a billing claim using the triple-check audit tool, an internal checklist-type document that will be included in every month-end closing report.

Using this audit tool, the manager will denote items verified as correct during the triple-check meeting with an "X." He or she will mark items identified as incorrect with an "O" and, in the remarks section of the document, record the steps the team will take to obtain the correct information. Items initially found to be incorrect but rectified during the meeting should still be marked with an "O" to better track any practice patterns that could lead to billing slipups and inform future training activities.

The business office manager will call for any claim found to have errors during the triple-check meeting to be put on hold until it is amended. Once staff have made necessary revisions, the manager will indicate these correction(s) and the corresponding date(s) in the remarks section of the audit tool. He or she will then contact a corporate entity to review the changes and ultimately grant approval to submit the claim.
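The X/O marking scheme and hold rule above can be sketched as a small data model. This is a hypothetical illustration only: the actual audit tool is a checklist document, and the class and field names here are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class AuditItem:
    description: str
    mark: str = "X"                        # "X" = verified correct
    remarks: list = field(default_factory=list)

    def flag_error(self, corrective_step: str) -> None:
        # Per the procedure, an item found incorrect keeps its "O" even
        # after it is fixed, so error patterns stay visible for training.
        self.mark = "O"
        self.remarks.append(corrective_step)

def claim_on_hold(items: list) -> bool:
    # Any claim with an error is held until amended and approved.
    return any(item.mark == "O" for item in items)

items = [AuditItem("ARD matches occurrence dates at FL 31-34"),
         AuditItem("RUG level matches FL 44")]
items[1].flag_error("Corrected RUG level on UB-04; corporate review requested")
print(claim_on_hold(items))   # True: claim held until amended
```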

HCPro.com – Billing Alert for Long-Term Care

Compliance Is Not Complicated

To be sure your providers and employees are following all rules and regulations to keep your medical practice compliant, you should create a compliance program. The Office of Inspector General lists seven core components for an effective compliance program: Implement standards through written policies and procedures. Providers and employees need written policies or procedures to […]
AAPC Knowledge Center

Briefings on Coding Compliance Strategies, September 2016

CMS’ 2017 IPPS final rule released

CMS released the fiscal year (FY) 2017 IPPS final rule August 2, and ICD-10-CM/PCS code changes and the addition of the Medicare Outpatient Observation Notice (MOON) both had starring roles. CMS also made changes to several quality initiatives and reversed the agency’s 0.2% payment reduction instituted along with the 2-midnight rule first implemented in the FY 2014 rule.

"Most coders have been hearing about the magnitudes of new codes being added to the ICD-10-CM and ICD-10-PCS code sets, and the final rule does not disappoint," says Shannon McCall, RHIA, CCS, CCS-P, CPC, CPC-I, CEMC, CRC, CCDS, director of HIM and coding for HCPro, a division of BLR, in Middleton, Massachusetts.

 

ICD-10-CM and ICD-10-PCS updates

The final rule includes tables with nearly 2,600 lines of new ICD-10-CM codes and 14,000 lines of ICD-10-PCS codes on the associated Excel spreadsheets, some of which were not mentioned in the proposed rule, says McCall.

"Even with the thousands of additions to the ICD-10-CM code set, I think many will find that the additions look scarier than they actually are," says McCall. "For example, adding laterality (right, left, bilateral, unspecified) to conditions like diabetes mellitus retinopathy codes (categories E08–E11 and E13) equated to over 250 ‘new’ codes, but in reality, they are just added specificity that equate them to the detail in other eye disorder codes."

The eye disorders (Chapter 7, categories H00–H59) overall have a significant number of additions by way of further specificity, like new stages or presence of symptoms, creating over 300 codes in that category, says McCall.

CMS has also introduced nearly 4,000 new codes for ICD-10-PCS in the final rule. But, McCall says, similar to the ICD-10-CM additions, some of the new ICD-10-PCS codes aren’t new procedures, but simply increases in specificity for one or more characters, which may result in hundreds of new codes.

"An example is the added distinction by subdividing the body part option of the thoracic aorta into the ascending/arch and descending portions," says McCall. "Therefore, any PCS table that included an option for the thoracic aorta now includes different body part values for ascending/arch thoracic aorta and descending thoracic aorta. These can create hundreds of codes once you provide all approach options, all device options, and all qualifier options for the associated ICD-10-PCS tables."

One of the more significant coding changes CMS has made, says McCall, is recognizing hundreds of procedures that were illogically considered surgical in ICD-10-PCS for FY 2016 but will be reclassified as nonsurgical in FY 2017. The intent of the transition from ICD-9-CM Volume 3 to ICD-10-PCS was for procedures to remain in the same (or similar) MS-DRGs that they occupied prior to ICD-10 implementation. However, not all codes were successfully reclassified. The 2017 IPPS final rule addresses some of these instances, including percutaneous drainage procedures such as paracentesis, as well as procedures like esophageal banding and arterial catheterization. Tables 6P.4a–6P.4k include all the procedures that will be reassigned as nonsurgical for FY 2017, says McCall.

"The reason for the assignment errors were mapping errors from ICD-9-CM Volume 3 to ICD-10-PCS. For example, the mapping for paracentesis (whether it was diagnostic or therapeutic) should have been to ICD-9-CM Volume 3 procedure code 54.91, but for some reason a diagnostic paracentesis was mapped to 54.29 (other diagnostic procedures on abdominal region)," says McCall.

CMS responded in the final rule to this mapping issue by saying:

We agree with the commenters that diagnostic drainage of the peritoneal cavity is more accurately replicated with ICD-9-CM procedure code 54.91 (percutaneous abdominal drainage) for reporting diagnostic paracentesis procedures and it is designated as a non-O.R. procedure. Therefore, we agree that the designation of ICD-10-PCS procedure code 0W9G3ZX (drainage of peritoneal cavity, percutaneous approach, diagnostic) should also be changed from O.R. to non-O.R.

 

The MOON form

In addition to creating new diagnosis and procedure codes, CMS also created the MOON as part of the Notice of Observation Treatment and Implication for Care Eligibility Act.

The MOON is a CMS-developed standardized notice that hospitals must give to Medicare patients who receive observation services as outpatients for more than 24 hours, and it must be delivered no later than 36 hours after observation services are initiated. Hospitals must give a verbal explanation of the MOON to patients and obtain a signature to acknowledge receipt and understanding of the notice.
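The timing rule can be expressed as simple date arithmetic; the helper names below are invented for illustration.

```python
from datetime import datetime, timedelta

def moon_required(observation_hours: float) -> bool:
    # The notice applies once outpatient observation exceeds 24 hours.
    return observation_hours > 24

def moon_deadline(observation_start: datetime) -> datetime:
    # The MOON must be delivered no later than 36 hours after
    # observation services are initiated.
    return observation_start + timedelta(hours=36)

start = datetime(2016, 10, 1, 8, 0)
print(moon_deadline(start))   # 2016-10-02 20:00:00
```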

 

Payment adjustments

CMS also indicated that payment rates will increase by 0.95% in FY 2017 compared to FY 2016 for hospitals participating in the Inpatient Quality Reporting (IQR) Program and EHR meaningful use, according to the rule.

"This also reflects a 1.5 percentage point reduction for documentation and coding required by the American Taxpayer Relief Act of 2012 and an increase of approximately 0.8 percentage points to remove the adjustment to offset the estimated costs of the two-midnight policy and address its effects in FYs 2014, 2015, and 2016," said CMS.

In addition, CMS created two adjustments to reverse the effects of the 0.2% cut it instituted along with the 2-midnight rule, which has been the source of an ongoing legal challenge by the American Hospital Association and other parties. More on this story can be found here: www.hcpro.com/HIM-320994-859/Court-gives-providers-a-chance-to-comment-on-2midnight-rule-payment-reduction.html.

CMS made a permanent adjustment of approximately 0.2% to remove the cut for FYs 2017 and onward, and a temporary adjustment of 0.8% to address the retroactive impacts of this cut for FYs 2014, 2015, and 2016.

 

Quality program updates

CMS finalized five changes to the Hospital-Acquired Condition Reduction Program in this rule, as well as updates to the IQR Program, changes to the Hospital Readmissions Reduction Program, and updates to the Hospital Value-Based Purchasing Program.

Listening to commenter feedback, CMS reduced requirements for reporting electronic clinical quality measures (eCQM) as part of the IQR Program. Originally, CMS proposed requiring hospitals to submit data on all 15 eCQMs, but finalized a policy requiring hospitals to report four quarters of data on an annual basis for eight of the available eCQMs.

The entirety of the final rule is available in PDF format on the Federal Register, and it is expected to be officially published Monday, August 22. CMS says the rule applies to approximately 3,330 acute care hospitals and approximately 430 long-term care hospitals, and it will affect discharges occurring on or after October 1, 2016.

The final rule can be downloaded here: www.federalregister.gov/public-inspection.

The related CMS fact sheet can be viewed here: www.cms.gov/Newsroom/MediaReleaseDatabase/Fact-sheets/2016-Fact-sheets-items/2016-08-02.html.

 

2017 IPPS final rule and claims-based measures

by Shannon Newell, RHIA, CCS, AHIMA-approved ICD-10-CM/PCS trainer

The fiscal year (FY) 2017 IPPS final rule was released August 2 and will be published in the Federal Register August 22. The majority of the finalized updates are consistent with those outlined in the proposed rule, but with a few refinements to applicable time periods. The final rule expands and refines the number of claims-based outcomes linked to payment under these programs.

Let’s review a few of the key changes to support your CDI program’s strategic focus for the coming year.

 

Risk-standardized readmission rates

Risk-standardized readmission performance for the coronary artery bypass graft (CABG) cohort will be linked to reimbursement in FY 2017. The applicable time period for discharges used to assess performance in FY 2017 has passed, but today’s discharges will impact performance in FY 2018.

This is a great example of why it’s important to focus on new measures adopted in this year’s rule for future program years. CMS utilizes a two- to three-year historical window of data for claims-based measures, so today’s performance impacts us financially two to three years in the future.

 

Risk-adjusted PSI 90 composite

The current Patient Safety Indicator (PSI) 90 measure will continue to be utilized in the Hospital-Acquired Condition Reduction Program (HACRP) and Hospital Value-Based Purchasing Program (HVBP) through FY 2018. At that time:

  • The HACRP will adopt the modified PSI 90 composite in FY 2018
  • The HVBP will discontinue future use of the PSI 90 measure in the FY 2019 rulemaking; CMS notes that the HVBP intends to adopt the modified PSI 90 composite in future rulemaking

 

The modified PSI 90 composite, also called the Patient Safety and Adverse Events Composite, was finalized as proposed. A review of key modifications follows:

  • PSIs in the composite have been revised; one PSI was deleted (PSI 7, CLABSI) and three new PSIs were added, providing a total of 10 PSIs in the modified composite
  • The final rule notes that PSIs 12 and 15 have had specification revisions
  • PSI weighting in the composite has been refined to incorporate the impact of both volume and harm

 

Applicable time periods for the measure were shortened as proposed, although date ranges were revised as noted below:

  • HACRP:
    • FY 2018: July 1, 2014–September 30, 2015 (15 months)
    • FY 2019: October 1, 2015–June 30, 2017 (21 months)
  • HVBP:
    • FY 2018: Same as HACRP above (for the performance period; the baseline period will not be revised)

 

Performance scoring for the HACRP will adopt Winsorized z-scores instead of deciles.

  • The z-score method uses a continuous measure score rather than forcing measure results into deciles.
  • Z-scores represent a hospital’s distance from the national mean for a measure in units of standard deviations. A negative z-score reflects values below the national mean, and thus indicates strong performance.
  • To form the total hospital-acquired condition (HAC) score, the z-scores will be used as hospitals’ measure scores. The current scoring approach will then kick in.
    • The domains will be scored as follows:
    • The domain scores will then be multiplied by the domain weight
    • The weighted domain scores will be added together for the total HAC score
    • Hospitals in the top (worst) quartile would be subject to the payment penalty
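The scoring steps above can be sketched numerically. The domain weights and national data below are placeholders rather than the finalized CMS values, and `winsorize` uses the general statistical sense of clipping extremes, with cut points chosen for illustration.

```python
import statistics

def winsorize(value: float, lower: float, upper: float) -> float:
    # Winsorizing clips extreme values to bounds instead of discarding them.
    return max(lower, min(value, upper))

def z_score(hospital_rate: float, national_rates: list) -> float:
    # Distance from the national mean in standard deviations;
    # a negative score (below the mean) indicates strong performance.
    mean = statistics.mean(national_rates)
    sd = statistics.stdev(national_rates)
    return (hospital_rate - mean) / sd

def total_hac_score(domain_scores: dict, domain_weights: dict) -> float:
    # Weighted domain scores are summed into the total HAC score.
    return sum(domain_scores[d] * domain_weights[d] for d in domain_scores)

national = [1.0, 1.2, 0.8, 1.1, 0.9]
print(z_score(0.8, national) < 0)   # True: below the mean, strong performance
print(total_hac_score({"psi": -1.0, "hai": 0.5},
                      {"psi": 0.5, "hai": 0.5}))   # -0.25
```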

 

Risk-standardized mortality measures

Risk-adjusted CABG mortality performance will impact financial reimbursement under the HVBP effective with the FY 2022 program. The applicable time periods that will be used to assess performance at that time follow:

  • Baseline period: July 1, 2012–June 30, 2015
  • Performance period: July 1, 2017–June 30, 2020

 

The pneumonia cohort will expand to include patients with a principal diagnosis of aspiration pneumonia and/or patients with a principal diagnosis of sepsis and a secondary present-on-admission diagnosis of pneumonia:

  • This aligns the cohort definition with that for the pneumonia readmission measure adopted with the FY 2021 program year.
  • Applicable timelines will be shortened from the usual three years of data to expedite HVBP adoption. The applicable time period for the cohort follows, with dates as refined in the final rule:
    • FY 2021:
    • FY 2022:

 

Cost measures

The previously adopted HVBP payment measure for pneumonia (hospital-level, risk-standardized payment associated with a 30-day episode of care for pneumonia) will expand the pneumonia cohort.

The expanded cohort will be consistent with the cohort definition used for the risk-adjusted readmission measure in the Hospital Readmissions Reduction Program (HRRP) and the risk-adjusted mortality measure used in the HVBP:

  • The expanded cohort is anticipated to shift 9.3% of hospitals from the "average payment" category to the "greater than average payment" category

Two new payment measures will be added to the efficiency and cost reduction domain in the HVBP beginning FY 2021:

  • Hospital-level, risk-standardized payment associated with a 30-day episode of care for acute myocardial infarction
  • Hospital-level, risk-standardized payment associated with a 30-day episode of care for heart failure

 

These payment measures are intended to be paired with the 30-day mortality measures, thereby directly linking payment to quality by the alignment of comparable populations and risk adjustment methodologies to facilitate the assessment of efficiency and value of care:

  • The applicable time periods for the measures are as follows:
    • Baseline period: July 1, 2012–June 30, 2015
    • Performance period: July 1, 2017–June 30, 2019
  • The risk adjustment methodologies used for these measures are similar to those used for risk-adjusted mortality

 

Performance for these new measures will be scored using the methodology for the Medicare spending per beneficiary measure.

 

Summary

Effective October 1, 2017, performance for cost and quality measures in the HRRP, HVBP, and HACRP will impact up to 6% of your hospital’s inpatient acute Medicare fee-for-service reimbursement.

So, where to begin? First, become familiar with the measure specifications and risk adjustment methodologies, as well as with existing CMS-provided reports on historical performance, to gain insight into your organization’s clinical documentation and coding vulnerabilities.

Measure specifications can be found at: www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/Measure-Methodology.html.

The final rule is available here: www.federalregister.gov/public-inspection.

 

Editor’s note: Newell is the director of CDI quality initiatives for Enjoin. She has extensive operational and consulting expertise in coding and clinical documentation improvement, performance improvement, case management, and health information management. You can reach Newell at [email protected]. Opinions expressed are those of the author and do not represent HCPro or ACDIS.

 

Correctly documenting and coding altered mental status and encephalopathy

 

by James S. Kennedy, MD, CCS, CDIP

 

Last month, I wrote about the role of coding and CDI compliance in ensuring the clinical validity of submitted ICD-10-CM/PCS codes, which impact payment, outcomes measurement (e.g., complications, mortality, and readmissions), and patient safety.

Because ICD-10 code assignment can no longer rely solely upon the words documented by a treating provider, a team effort is crucial: compliance, coders, CDI, and physicians must define clinical terminology and manage the application of code conventions, guidelines, and official advice.

While a good lawyer knows the law, a better lawyer knows the law, the judge, and the jury. We in coding compliance must know not only the code assignment process, but also the accountability agents and attorneys who delight in finding what they perceive to be our mistakes.

In our defense, when we can state our case using the most up-to-date clinical definitions for the circumstances described by a treating provider and rigorously apply coding and CDI principles, we are more likely to prevail when publicly accused of upcoding, abuse, or fraud.

On July 27, I was interviewed by the director of ACDIS, Brian Murphy, regarding the documentation and coding of encephalopathy; the interview highlighted the tremendous confusion in the compliant definition, documentation, and coding of altered mental status and its underlying causes. The interview and its accompanying slides are available at http://www.acdis.org/acdis-radio/encephalopathy.

To discuss this topic, let’s review a little history first. The documentation and coding of encephalopathy wasn’t much of an issue until around 2007, when CMS designated the various encephalopathies (e.g., metabolic, toxic) as MCCs when they weren’t even CCs in the older CMS-DRG system.

Unlike unspecified heart failure, which is not a CC or MCC unless the physician states that it is systolic (HFrEF) or diastolic (HFpEF), CMS allows unspecified or acute encephalopathy (ICD-10-CM code G93.40) to be an MCC while some of its descriptors (e.g., anoxic encephalopathy) are just a CC or nothing at all (e.g., G94, encephalopathy in diseases classified elsewhere).

When challenged as to why encephalopathy with an adjective (e.g., toxic, hepatic, metabolic, or NEC) or without one should remain an MCC, CMS stated in its fiscal year (FY) 2012 IPPS rule that "its clinical advisers recommended that these encephalopathy codes remain at an MCC level because these patients with encephalopathy typically utilize significant resources and are at a higher severity level." Readers can view this quote in the Federal Register on pp. 51544 and 51545 at http://tinyurl.com/jv69k8m.

Consequently, several hospitals and CDI consultants continue to advocate documentation and coding of the term "encephalopathy" alone in the presence of any altered mental status in order to obtain an MCC in MS-DRGs or a severity of illness of 3 in APR-DRGs. This is a practice that I believe is sure to be challenged, and one that requires thoughtful inquiry to ensure the validity of this or an alternative strategy.

 

Strategies for accuracy

Let’s now outline how to structure a diagnosis or condition for the purpose of ensuring completeness and precision in its ICD-10-CM coding. Every condition has five components that must be documented and linked to each other to fully describe that condition for coding purposes. Using the mnemonic MUSIC, and applied to an altered mental status, these are:

  • Manifestations: These could be delirium, psychosis, dementia, amnestic disorder, stupor, coma, unconsciousness, chronic vegetative state, and others, not just altered mental status or altered level of consciousness. Many of these are Chapter 18 symptoms, which cannot be a principal diagnosis if attributed to their underlying causes.
    • Note: Unresponsive doesn’t have a code; thus, an alternative term must be used.
    • Note: ICD-10-CM has code first requirements for the underlying cause of dementia (F01, F02), amnestic disorders (F04), delirium (F05), or other mental or personality disorders (F06, F07) due to a known physiological disorder, which means that if they were not documented or linked to the specified alteration of mental status or consciousness, they should be queried for. See the next section.
  • Underlying causes: These may include various structural brain diseases (e.g., strokes, cerebral neoplasms, cerebral edema, traumatic brain injury), neurodegenerative disorders (e.g., Alzheimer’s, Lewy body dementia, normal pressure hydrocephalus), or the various encephalopathies (e.g., toxic, metabolic, anoxic).
  • Severity or specificity: This includes whether any brain disease due to injury or medications is in the active treatment phase (initial encounter), healing phase (subsequent encounter), or has long-standing sequelae. If the doctor only documents "encephalopathy," we should query him or her as to its specific nature or underlying cause.
    • Note: The Glasgow Coma Scale measures severity; ICD-10-CM will add the National Institutes of Health Stroke Scale starting October 1, 2016. Note that both of these may be coded from non-provider documentation according to the 2017 ICD-10-CM Official Guidelines released in August 2016; thus, please encourage the nursing staff to capture these whenever possible.
    • Note: We may see codes for the severity of hepatic encephalopathy using the West-Haven classification (0, 1, 2, 3, or 4) in FY 2018; thus, consider discussing this with your gastroenterologists or hepatologists.
  • Instigating or precipitating cause: This is another condition that provoked the underlying cause or made it worse, such as a drug overdose causing a toxic encephalopathy, a cerebral embolus from atrial fibrillation causing a stroke, or a change in circumstances inciting behavioral changes with neurodegenerative disorders. Elder or child abuse should always be considered as well.
  • Consequences: These include seizures that may be the direct effect of a metabolic encephalopathy due to hyponatremia, a fracture that occurred during a drug-induced delirium or psychosis, or malnutrition due to poor oral intake in a patient with end-stage Alzheimer’s disease.
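The MUSIC components can be modeled as a simple record to show how the pieces link together for coding purposes. This is a sketch only; the class and field names are invented, not a standard schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ConditionDocumentation:
    manifestations: list                       # M: e.g., ["delirium"]
    underlying_cause: Optional[str] = None     # U: e.g., "metabolic encephalopathy"
    severity: Optional[str] = None             # S: e.g., "initial encounter"
    instigating_cause: Optional[str] = None    # I: e.g., "hyponatremia"
    consequences: list = field(default_factory=list)  # C: e.g., ["seizure"]

    def query_needed(self) -> bool:
        # Mirroring the guidance above: if the underlying cause is not
        # documented and linked, the physician should be queried.
        return self.underlying_cause is None

case = ConditionDocumentation(manifestations=["altered mental status"])
print(case.query_needed())   # True: underlying cause undocumented, so query
```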

 

The definitions of the various altered mental states or levels of consciousness can be found in credible psychiatry (e.g., DSM-V) or neurological literature (e.g., Adams and Victor’s Principles of Neurology); thus, when the term "altered mental status" alone is documented, the physician should be queried for the exact nature of the altered mental state (e.g., delirium, amnestic syndrome, dementia). Please ask your coding or CDI physician champion for additional information on these definitions.

 

Focusing on encephalopathy

While there are many causes of altered mental status, let’s focus on encephalopathy.

The Greek etymology of the word "encephalopathy" means "disease of the brain," much like how the word "myopathy" means "disease of the muscle," "nephropathy" means "disease of the kidney," and "neuropathy" means "disease of the nerve." As such, one could construe any disease of the brain to be an encephalopathy, such as strokes, brain tumors, and the like. In fact, Dorland’s medical dictionary, available in 3M’s encoder, defines encephalopathy as "any degenerative disease of the brain." These definitions, in my opinion, are too broad.

I prefer the definition in Adams and Victor’s Principles of Neurology, 10th Edition, that defines encephalopathy as a global brain dysfunction that has an underlying cause distinct from any other named brain disease (e.g., Alzheimer’s). It could be a general medical condition (e.g., metabolic encephalopathy), a poisoning (e.g., toxic encephalopathy), chronic liver failure (e.g., hepatic encephalopathy), diffuse anoxia, or the like that results in the diffuse brain dysfunction. These, of course, would have to be defined, differentiated, and documented by the provider, emphasizing that they are separate, distinct, or overlying another underlying diffuse brain disease (e.g., Alzheimer’s).

Similarly, the National Institute of Neurological Disorders and Stroke states that encephalopathy is a term for any diffuse disease of the brain that alters brain function or structure; access this at http://tinyurl.com/NINDSencephalopathy. The hallmark of encephalopathy is an altered mental state.

There is a myriad of brain diseases, many of which have names (e.g., Alzheimer’s disease, Lewy body dementia, normal pressure hydrocephalus, and Jakob-Creutzfeldt disease) that are distinct disease entities whose labels describe the brain’s pathology and clinical manifestations. I personally believe that if the patient’s manifestations can be explained solely by a named brain disease, the term "encephalopathy" is integral to that named brain disease, given that the name is more specific than the term.

The ICD-10-CM Index to Diseases has similar examples, such as hepatic encephalopathy being classified as hepatic failure; in this circumstance, another code for encephalopathy would not be coded if the mental status abnormality is only due to hepatic failure.

Therefore, if encephalopathy is documented without linkage to any condition, a query is needed to determine its underlying cause. If it is due to a named disease not in the ICD-10-CM Index to Diseases under the key term encephalopathy, the underlying disease (e.g., UTI) is coded first, followed by G94 (other disorders of brain in diseases classified elsewhere), which is per the Excludes1 note for G93.4 (other and unspecified encephalopathy).

That’s not to say that a patient with a preexisting brain disease cannot have another superimposed process, which, if clinical indicators are present, should be defined, diagnosed, and documented by the physician. The ICD-10-CM Index to Diseases outlines many of these, such as toxic, metabolic, toxic-metabolic, hepatic, anoxic, and other types of encephalopathy. Definitions include:

  • Metabolic encephalopathy is a specified altered mental status due to a metabolic issue, such as hypercapnia (e.g., carbon dioxide narcosis), hyponatremia, pancreatitis, uremia, and the like.
  • Toxic encephalopathy is a diffuse brain dysfunction due to an adverse effect of, or poisoning by, a medication. Note that the ICD-10-CM Index to Diseases states that any encephalopathy due to a medication is coded to G92 (toxic encephalopathy), and that G92 has a code first instruction for any poisoning (T51–T64), which includes alcohol.
  • Toxic metabolic encephalopathies, which encompass delirium and the acute confusional state, are an acute condition of global cerebral dysfunction in the absence of primary structural brain disease. These are coded as G92 (toxic encephalopathy), unless the physician further specifies the toxic or metabolic issue. Queries regarding the definitions of metabolic or toxic etiologies are often required. While I don’t like this term, preferring to use the word "toxic" or "metabolic" alone, toxic-metabolic does exist in the clinical literature and coding nosologies. Learn more at http://www.tinyurl.com/toxicmetabolicencephalopathy.
  • Hepatic encephalopathies are due to hepatic failure and are coded as such. In ICD-9-CM, all hepatic encephalopathies are MCCs; however, in ICD-10-CM, hepatic encephalopathy is only an MCC if associated with coma or if the hepatic failure is described as acute or subacute (less than six months in duration). Coding Clinic, Second Quarter 2016, emphasized that hepatic encephalopathy is not coded as coma unless documented by the provider as such, and if clinically valid (e.g., the patient is unconscious).
  • Hypoglycemic encephalopathy is listed as E16.2 (hypoglycemia, unspecified) in the ICD-10-CM Index; however, Coding Clinic, Third Quarter 2015, advised that encephalopathy due to hypoglycemia in a diabetic should be coded using E11.649 (Type 2 diabetes mellitus with hypoglycemia without coma) as the principal diagnosis, with G93.41 (metabolic encephalopathy) as an additional diagnosis. I have been told by Coding Clinic that the Editorial Advisory Board is revisiting this issue. Stay tuned for future Coding Clinic articles to see if they reverse this opinion (I think they should).

 

Let us at BCCS know how you’re faring with encephalopathy, especially as you write appeals.

 

Editor’s note: Dr. Kennedy is a general internist and certified coder, specializing in clinical effectiveness, medical informatics, and clinical documentation and coding improvement strategies. Contact him at 615-479-7021 or at [email protected]. Advice given is general. Readers should consult professional counsel for specific legal, ethical, clinical, or coding questions. For any other questions, contact editor Amanda Tyler at [email protected]. Opinions expressed are those of the author and do not necessarily represent HCPro, ACDIS, or any of its subsidiaries.

 

HCPro.com – Briefings on Coding Compliance Strategies

Compliance in coding Infusions

CPT guidelines for infusions are time based: started at xx, completed at xxx. How do you all handle infusions that do not follow the order?
Example #1: Medication ordered to run at 200 ml/hr over 5 hours (1,000 ml bag). The nurse documents a start time of noon but marks it "stopped" at 1600 with a TVI of 1,000 ml.
Example #2: The medication order states "infuse over 1 hour"; the nurse's start time is noon and the stop time is 1245. Do you code these regardless of whether the order was followed, or do you downgrade/write off for compliance reasons?

I have reached out to our MAC for their input but wanted to see what you all do from a charging/coding perspective.
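Since the question turns on rate-times-time arithmetic, the reconciliation can be sketched like this (a hypothetical helper for illustration only, not charging or compliance guidance):

```python
from datetime import datetime

def expected_duration_hours(volume_ml: float, rate_ml_per_hr: float) -> float:
    """Infusion duration implied by the order: volume divided by rate."""
    return volume_ml / rate_ml_per_hr

def documented_duration_hours(start: str, stop: str) -> float:
    """Elapsed hours between nursing start/stop times (24-hour HH:MM, same day)."""
    fmt = "%H:%M"
    delta = datetime.strptime(stop, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

# Example 1 from the post: 1,000 ml ordered at 200 ml/hr implies 5 hours,
# but the documented times (noon to 1600) yield only 4 hours.
print(expected_duration_hours(1000, 200))           # 5.0
print(documented_duration_hours("12:00", "16:00"))  # 4.0
```

A mismatch between the two values is what would flag the record for review; whether to code the documented time or hold the charge remains the policy question posed above.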

Medical Billing and Coding Forum

Briefings on Coding Compliance Strategies, August 2016

Postoperative respiratory failure’s introduction into the CMS value-based reimbursement model

By Robert Stein, MD, CCDS, and Shannon Newell, RHIA, CCS, AHIMA-approved ICD-10-CM/PCS trainer

The accurate capture of acute respiratory failure has been a long-standing challenge for CDI programs. The accurate reporting of this condition as a post-procedural event can be even more difficult.

Data quality for post-procedural acute respiratory failure will impact quality outcomes linked to reimbursement under the Hospital-Acquired Condition Reduction Program (HACRP), as well as the Hospital Value-Based Purchasing Program (HVBP), if language in the fiscal year (FY) 2017 IPPS proposed rule is finalized.

In this article we’ll provide insights into how clinical documentation and reported codes may impact payments, and guidance on some common CDI challenges to strengthen data quality.

 

Performance may impact reimbursement in FY 2018

A quality measure named Patient Safety Indicator (PSI) 11 has existed since 1998, when it was developed by the Agency for Healthcare Research and Quality (AHRQ). The measure has been adopted for use by CMS and other comparative databases, such as the University HealthSystem Consortium and Healthgrades, to compare performance across hospitals.

If the proposed rule is finalized as written, how well your hospital performs on this measure will begin to impact hospital reimbursement under the two hospital pay-for-performance programs noted above. Reimbursement impact will begin in:

  • FY 2018 for the HACRP
  • FY 2019 for the HVBP

 

Performance for this measure will be assessed and scored, and the score will then be rolled into a weighted patient safety composite measure. Performance for the overall composite measure will then determine reimbursement impact. The name of this composite measure is the Patient Safety and Adverse Events Composite, previously known as the PSI 90 composite measure.

The Patient Safety and Adverse Events Composite measure was reviewed in last month’s column. What is important to note for PSI 11 is that performance for this measure will impact approximately 22% of the composite weight.

 

Data quality and PSI 11 performance

PSI 11 performance is determined by the diagnosis (ICD-10-CM) codes we submit on claims. This is a risk-adjusted measure evaluated using an observed over an expected ratio.

Discharges included in the measure:

  • All elective surgical discharges treated at the hospital are evaluated for comorbidities that impact the complexity of the patient mix and the associated expected rate of postoperative respiratory failure events

Identification of postoperative respiratory events:

  • Any discharge included in the measure which has one of the following ICD-10-CM codes on the claim triggers a reportable actual, or observed, postoperative respiratory failure event:

 

Additional details for key measure drivers can be found on review of PSI 11 specifications located on the AHRQ website at www.qualityindicators.ahrq.gov/Modules/psi_resources.aspx.
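The observed-over-expected framing above can be illustrated with a toy calculation (the event counts below are invented for illustration, not taken from PSI 11 specifications):

```python
def oe_ratio(observed_events: int, expected_events: float) -> float:
    """Observed-over-expected ratio for a risk-adjusted measure:
    values above 1.0 indicate more events than the risk-adjusted
    patient mix would predict; values below 1.0 indicate fewer."""
    return observed_events / expected_events

# Hypothetical hospital: 12 observed postoperative respiratory failure
# events against 8.5 expected after risk adjustment.
print(round(oe_ratio(12, 8.5), 2))  # 1.41
```

This is why capturing the comorbidities that drive the expected rate matters: understating patient complexity lowers the denominator and inflates the ratio.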

 

PSI 11 CDI vulnerabilities

In our review of thousands of medical records for hospitals across the country, we see common challenges which impact PSI 11 data quality. We discuss a few of the common questions we encounter below to assist your internal data quality efforts.

 

How do I recognize acute respiratory failure?

  • Acute respiratory failure is at the end of a continuum initiated by respiratory dysfunction resulting in abnormalities of oxygenation and/or carbon dioxide elimination
  • Acute on chronic respiratory failure is an exacerbation or decompensation of chronic respiratory failure

Clinical criteria to identify if not documented and/or to validate a documented diagnosis include:

  • The use of supplemental oxygen or non-invasive/invasive mechanical ventilation
  • Signs and symptoms indicative of increased work of breathing (e.g., dyspnea, tachypnea [respiratory rate greater than 28], respiratory distress, labored breathing, use of accessory muscles)
  • Impaired gas exchange, which may be identified by the following clinical indicators:

What is the definition of "prolonged" postoperative mechanical ventilation?

  • A code for mechanical ventilation (and intubation) should not be assigned postoperatively for mechanical ventilation when it is considered a normal part of surgery.
  • Prolonged mechanical ventilation should be reported for an extended period postoperatively. A general rule of thumb for "extended" is 48 hours, with the start time as the time of intubation for the procedure. Provider documentation should support whether what appears to be an extended time is in fact expected given the procedure and/or patient complexity.

 

If the patient is extubated postoperatively, but continues to be treated with supplemental oxygen, when is a query for acute respiratory failure appropriate?

  • To determine if this represents acute respiratory failure, the values for impaired oxygen exchange can be utilized, along with the amount of oxygen being administered to the patient.
  • The P/F ratio can be a helpful tool to identify the respiratory failure criteria above for a patient receiving supplemental oxygen.
  • If an ABG test is not available, an estimated P/F ratio can be calculated.
  • The P/F ratio is a useful tool to validate the presence of acute hypoxemic respiratory failure when patients are receiving supplemental oxygen.
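A minimal sketch of the P/F arithmetic follows. The no-ABG estimate uses the commonly cited Rice linear S/F-to-P/F approximation as a stand-in, since the column's own illustration is not reproduced here; treat both the formula and the thresholds as assumptions to be confirmed against clinical references:

```python
def pf_ratio(pao2_mmhg: float, fio2_fraction: float) -> float:
    """P/F ratio: arterial oxygen tension (PaO2, mmHg) divided by the
    fraction of inspired oxygen (FiO2 as a decimal, e.g., 0.40 for 40%)."""
    return pao2_mmhg / fio2_fraction

def estimated_pf_from_spo2(spo2_percent: float, fio2_fraction: float) -> float:
    """Rough P/F estimate when no ABG is available, via the commonly cited
    Rice linear relation S/F = 64 + 0.84 * (P/F), rearranged to
    P/F = (S/F - 64) / 0.84. An approximation from the literature, not
    necessarily the conversion the authors intended."""
    sf_ratio = spo2_percent / fio2_fraction
    return (sf_ratio - 64) / 0.84

# PaO2 of 60 mmHg on 40% oxygen -> P/F of 150, below the ~300 mark
# often cited for acute hypoxemic respiratory failure.
print(pf_ratio(60, 0.40))  # 150.0
# SpO2 of 91% on 50% oxygen with no ABG drawn.
print(round(estimated_pf_from_spo2(91, 0.50), 1))
```

The point for CDI review is simply that supplemental oxygen changes the denominator: the same PaO2 on a higher FiO2 yields a lower, potentially failure-range, ratio.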

 

When respiratory failure exists in a post-procedural patient, how do I determine if this is, and/or is not, related to the procedure?

  • Physician education to promote clear documentation which relates the respiratory failure to an underlying condition (e.g., COPD) and/or to a procedure, and/or to the anesthesia, is essential.
  • When such documentation is not clear, a documentation query or clarification is required.

 

In addition to the above, other record review findings which negatively impact PSI 11 data quality include:

  • Accurate reporting of mechanical ventilation duration
  • Accurate selection of post-procedural respiratory failure as the principal diagnosis

 

Summary

Value-based care will increasingly utilize claims-based measures to assess quality and cost outcomes linked to payment. To strengthen organizational performance for PSI 11, the following steps are suggested:

  • Establish synergy between the CDI program and the quality department
  • Promote point-of-care capture of risk-adjustment variables pertinent to PSI 11 performance
  • Actively engage your CDI physician advisor with medical staff education and CDI record reviews to facilitate and promote accurate capture of documentation relevant to accurate cohort identification and risk adjustment

 

Editor’s note: Stein is associate director of the MS-DRG Assurance program for Enjoin, providing clinical insight and education as part of the pre-bill review process. He earned his CCDS credential in June 2013 and completed AHIMA’s ICD-10-CM/PCS coder workforce training in August 2013. Newell is the director of CDI quality initiatives for Enjoin. Her team provides health systems with physician-led education and infrastructure design to sustainably address documentation and coding challenges essential to optimal performance under value-based payments across the continuum. She has extensive operational and consulting expertise in coding and clinical documentation improvement, performance improvement, case management, and health information management. You can reach Newell at (704) 931-8537 or [email protected]. Opinions expressed are those of the authors and do not represent HCPro or ACDIS.

 

Using data to drive physician engagement

"You are your own best teacher," or so the old adage goes. Sure, goodies and gifts are great for recognizing high-quality documentation, but for CDI teams struggling to obtain physician buy-in, the best strategy may be found in their providers’ own records.

With pay-for-performance and other quality initiatives underway as a part of healthcare reform, physicians need to see how they are performing in real time. Showing them this data in comparison to their peers demonstrates through real numbers how they stack up, says ACDIS Advisory Board member Robin Jones, RN, BSN, CCDS, MHA/Ed, system director for CDI at Mercy Health in Cincinnati.

 

Query responses

Until recently, most providers were not interested in seeing how unanswered clarifications or conflicting DRG assignment affected metrics, Jones says. CDI programs traditionally measure overall success by tracking items such as:

  • Query rate (overall and by CDI specialist/physician)
  • Physician response rate (overall and by CDI specialist/physician)
  • Physician agreement rate (overall and by CDI specialist/physician)
  • CC/MCC capture rates
  • MS-DRG shifts
  • Case-mix index changes

This data isn’t often shown to physicians, and yet, since queries represent the single most important tool for CDI programs, gleaning patterns of information from them often illuminates opportunities for improved physician support. For example, a lack of response from a particular physician might represent an opportunity for education or a change in approach, or the need for a new method of communication (e.g., notifying the physician of an outstanding query through a phone call rather than email).

Mercy’s CDI program lists physicians’ clarification response rates and places them in physician lounges for all to see, says Jones. To keep the information anonymous, the CDI team assigns each physician a number so they can quickly and safely gauge how they are performing in comparison to their peers.

"When physicians see their rate is lower than their peers, they hurriedly find our CDI supervisor," Jones says.

Mercy also provides physicians with an individualized list of DRGs assigned to their patients, so they can cross-reference that information to their own private billing.

 

Case studies

CDI programs can elevate the importance of data by tying it to case studies: real scenarios relevant to patient care, says ACDIS Advisory Board member Karen Newhouser, RN, BSN, CCDS, CCS, CCM, CDIP, director of education at MedPartners, based in Tampa, Florida.

Additional elements

Show providers an example of poor documentation, then compare it to the same case with improved documentation and show how the improvement affects a variety of metrics, Newhouser says. Collectively, members of the ACDIS Advisory Board suggest sharing information regarding the following data points:

  • Severity of illness/risk of mortality (ROM)
  • Length of stay (LOS), average LOS, geometric mean LOS, and expected LOS
  • Readmission rates
  • Observed over expected mortality ratio

 

Be transparent so physicians can see the benefits, both financial and quality-related, of precise documentation, Newhouser says.

"Physicians need to know that the money is important if they want to have a hospital to practice in, updated equipment, and a paycheck," she explains. But, "it is imperative to remind them that while money is important, it is quality that must come first."

For each metric, consider the data for the facility as a whole, and compare it to other facilities within the system or region, says Michelle McCormack, RN, BSN, CCDS, CRCR, director of CDI at Stanford (California) Health Care. Sharing such information with the physicians illustrates how their documentation affects the larger hospital community.

Then, drill down into the data to identify individual metrics, comparing physicians against one another within the facility and within a particular specialty or service line, says McCormack.

 

External analysis

Beyond simply showing physicians the data, CDI teams must teach providers how documentation and coding affect their personal profiles as well as their facility’s standing, says Judy Schade, RN, MSN, CCM, CCDS, CDI specialist at Mayo Clinic Hospital in Phoenix. A host of consumer websites cull data and employ a variety of algorithms to rank physicians and hospitals; many of these are well known, such as CMS’ Hospital and Physician Compare sites, Healthgrades, and Leapfrog.

Understand how those practicing within your facility measure up in these reports and share important milestones as necessary, Schade says. When positive shifts occur that correlate with documentation improvement focus areas, tout those accomplishments and acknowledge the role the physicians play.

"Physicians will be engaged if they understand how documentation and coding impacts their personal profile," Schade says. "Physicians are by nature competitive, and so they aim to be high achievers." CDI programs can use this to their advantage.

Nuanced details of these reports need analysis, warns Paul Evans, RHIA, CCS, CCS-P, CCDS, manager for regional CDI at Sutter West Bay in San Francisco.

For example, The San Francisco Chronicle recently published raw mortality outcomes data for the region. Since the paper did not understand how observed versus expected mortality plays a role in telling the story of a patient’s care, its analysis left a tertiary care center in the Sutter family looking as though it had worse mortality rates than its competitors despite the fact that it treated extremely sick patients, Evans explains.

"You have to be careful to compare apples to apples," Schade agrees.

With internal data in hand, Evans showed the high-level ROM of that facility’s patients and demonstrated that the facility actually outperformed its competitors.

"Unfortunately, you can’t explain statistics and ROM to the typical layperson, but you certainly can communicate it to your staff and to your physicians," Evans says.

 

Data discretion

Some data discretion may be warranted. Choose data elements that are most relevant to the CDI program’s goals at the time, as well as targeted to the specific physicians in the audience. Remember to share success stories with data elements as they are reached.

"CDI managers should consider all data points and make sure the numbers they present to the physician accurately represent the message they need to convey and target the needs of the physicians themselves," says ACDIS Advisory Board member Wendy Clesi, RN, CCDS, director of CDI services at Enjoin.

For example, if a service line that has not been responding to queries begins to consistently increase its response rate, include the improvements in that response rate along with the other metrics you present, McCormack says.

"You want to select metrics that will allow you to see progress as well as areas of opportunity," she says.

It can be difficult to choose which data points to share, McCormack says, but sharing such concrete analysis leads to greater support from physicians overall.

 

Editor’s note: This article originally appeared in the CDI Journal. For any questions, contact editor Amanda Tyler at [email protected]

 

AHIMA practice brief addresses clinical validity and coding compliance

 

We, as coders, clinical documentation specialists, and compliance officers, are actively invested in coding compliance, aren’t we? AHIMA and ACDIS emphasize coding compliance in their codes of ethics. If we aren’t interested in coding compliance, why are we reading newsletters named Briefings on Coding Compliance Strategies and other similar publications?

Many coders I know code solely on what a doctor documents, claiming not to be physicians, nor having the authority to challenge a diagnosis or documented treatment by a provider.

In fact, AHIMA’s 2008 practice brief, Managing an Effective Query Process, emphasized that we should not query physicians if the clinical indicators do not support a provider’s documented diagnosis. This practice brief stated:

Providers often make clinical diagnoses that may not appear to be consistent with test results. Queries should not be used to question a provider’s clinical judgment, but rather to clarify documentation when it fails to meet any of the five criteria listed: legibility, completeness, clarity, consistency, or precision.

 

While AHIMA told us then not to query to ascertain clinical validity of documentation, the United States Department of Justice (DOJ), or Health and Human Services, must not have gotten the memo.

In June 2009, Johns Hopkins Bayview Medical Center, in Baltimore, Maryland, settled a False Claims Act case for $2.75 million. This happened after the DOJ said that the hospital’s "employees allegedly focused on lab test results which might indicate the presence of a complicating secondary diagnosis such as malnutrition or respiratory failure, and advised treating doctors to include such a diagnosis in the medical record, even if the condition was not actually diagnosed or treated during the hospital stay."

Baptist Healthcare Inc. and its affiliated hospitals near Louisville, Kentucky, paid $8.9 million in 2011 to settle a case involving the documentation, coding, and clinical validity of respiratory infections and inflammations, pulmonary edema, respiratory failure, and septicemia. These figures do not include the costs of attorneys, expert witnesses, and other intangibles expended in legal defense. Visit the DOJ’s website to learn more about these settlements: www.justice.gov.

The Medicare Provider Quarterly Compliance Newsletter then emphasized in July 2011 that providers and facilities are to determine the validity of documented acute respiratory failure, and that when the clinical indicators are not present, Recovery Audit Contractors have leeway to change a principal diagnosis if they believe the clinical indicators do not support the documented diagnosis. Read the newsletter at http://tinyurl.com/jb5aauu, page 2.

AHIMA has since changed its tune. In its 2013 Query Practice Brief, AHIMA stated that a query is appropriate when the health record documentation "provides a diagnosis without underlying clinical validation."

The brief adds, "when a practitioner documents a diagnosis that does not appear to be supported by the clinical indicators in the health record, it is currently advised that a query be generated to address the conflict or that the conflict be addressed through the facility’s escalation process." AHIMA’s sample escalation policy is available at http://tinyurl.com/2013AHIMAescalationpolicy.

AHIMA recently stepped this up a notch by publishing a clinical validation practice brief in the July 2016 Journal of the American Health Information Management Association, available to AHIMA members at http://tinyurl.com/2016AHIMAclinicalvalidation. I encourage you to get a copy from an AHIMA member or from your local medical library and to discuss this document with your compliance officer or attorney.

Given that AHIMA is one of the ICD-10-CM/PCS Cooperating Parties, their practice briefs are often quoted by the DOJ, and thus must be read closely, and if agreeable, incorporated into one’s compliance plan. Several points are made in this practice brief, most of which I agree with, but some of which I do not. These include:

 

Compliance

AHIMA states:

Compliance, whether it’s a formal compliance department that understands compliant coding or coding management performing quality audits, can support the clinical validation process. Compliance can assist in developing a standardized query policy that applies to all who perform the query process within the organization regardless of the department in which they are located.

 

I wholeheartedly agree; however, AHIMA does not articulate under what circumstances, or how, a facility can omit an ICD-10-CM code for a documented diagnosis that is re-authenticated by an authorized provider.

I personally believe that if recovery auditors can deny codes for documented diagnoses based on their clinical judgment, then facilities should be able to do the same, particularly if they believe that the code would not survive reasonable scrutiny. I wish that they had discussed this.

 

Clinical validation

AHIMA states, "it appears clinical validation may be most appropriate under the purview of the CDI professional with a clinical background," emphasizing that it is the coder’s role to become more clinically astute and to refer cases to a nurse or physician advisor as necessary.

I disagree to some extent. The ICD-10-CM Official Guidelines state that ICD-10-CM code assignment is a joint effort between the provider and the coder, not the provider and the CDI specialist or the CDI specialist and the coder. So, I believe that a properly trained and certified coder who is well versed in clinical terminology and definitions should be able to have the conversation with the provider alone and not have to delegate it to another individual who may not be as experienced. That said, if the coder is insecure with the situation, he or she should have a lifeline for clinical support to ensure the validity of the documented diagnosis or treatment.

 

Referencing clinical criteria

AHIMA and Coding Clinic for ICD-10-CM both say that the Coding Clinic should not be referenced as a source for clinical criteria supporting provider documentation. I wholeheartedly agree, except in cases where no definition of a clinical term is available in the physician literature, such as with functional quadriplegia or acute pulmonary insufficiency following surgery or trauma.

For these two conditions, Coding Clinic and/or the ICD-10-CM Official Guidelines are the only sources for definitions that ensure their validity. The most recent high-impact physician literature or textbooks should be referenced when defining other clinical conditions, or when defending against claims of clinical invalidity. A physician advisor can point out which references are highly respected.

 

Coders and CDI defining diagnoses

AHIMA states:

Although it is tempting for CDI and coding professionals to define diagnoses for providers, doing so is beyond their scope. For example, it is not appropriate for a CDI or coding professional to omit the diagnosis of malnutrition when it is based on the patient’s pre-albumin level rather than American Society for Parenteral and Enteral Nutrition (ASPEN) criteria. Many practicing physicians have not adopted ASPEN criteria and there is no federal or American Medical Association (AMA) requirement stating that ASPEN criteria must be utilized by a physician in making the diagnosis of malnutrition.

While this is technically true, given that CDI and coding professionals are not licensed to practice medicine, nor are involved with direct patient care under most circumstances, they still should be their facility’s representatives to encourage the medical staff, as a whole, to adopt facilitywide definitions of challenging clinical terms (e.g., sepsis, malnutrition, acute respiratory failure). They should also monitor and encourage individual providers as they adopt these definitions in their documentation and escalate noncompliance with these definitions to physician advisors, compliance officers, or medical staff leadership.

While one physician may not use ASPEN, or the Academy of Nutrition and Dietetics criteria, to define and diagnose malnutrition, I challenge readers to find any support for pre-albumin or albumin as a current clinical indicator for malnutrition, or more authoritative criteria than those of the nation’s premier association of dietitians and nutritional support teams for defining, diagnosing, and documenting malnutrition in the adult and pediatric populations.

 

Multiple-choice queries

AHIMA appears to have changed the language for multiple-choice queries with this practice brief, especially when clinical validity is an issue. In an example for validating documented sepsis without apparent clinical indicators, they offered the following multiple-choice options:

  • Sepsis was confirmed
  • Sepsis was ruled out
  • Sepsis was without clinical significance
  • Unable to determine
  • Other ______________

Given that this is AHIMA’s query format, we’re obligated to consider it; however, this does cause some difficulties. What can a coder do with "sepsis was without clinical significance" or "unable to determine," if that’s the option the provider selects? If "sepsis was without clinical significance" is selected, do we not code it with the belief that the documented condition doesn’t qualify as an additional diagnosis as defined in the ICD-10-CM guidelines? How many of us have run into physicians who document "unable to determine" as a way of avoiding the question?

I believe that if either of these two options is chosen, the record should be escalated to a physician advisor or coding manager who implements the facility’s policy on coding a documented diagnosis without defensible clinical indicators.

 

Clinical validation auditing

AHIMA states, "auditing a small sample (e.g., 15 records per year) of coded records by each coding professional (both contract and employed) is one way to ensure that each coding professional is given some education on clinical validation."

While true, I believe that these audits should include CDI specialists, given that many are not members of AHIMA and may not read AHIMA practice briefs, much less believe that they apply to them. AHIMA does emphasize their position as one of the four Cooperating Parties for ICD-10-CM/PCS and that this brief is "relevant to all clinical documentation improvement professionals and those who manage the CDI function, regardless of the healthcare setting in which they work or their credentials."

 

Summary

In conclusion, please be sure to read this practice brief and consider how this affects your organization. Given that there are no standard definitions for at-risk ICD-10-CM/PCS terminology published by any of the Cooperating Parties or payers, and given that medical terminology used in documentation should be defined by physicians and their professional organizations, I encourage all facilities to engage with their medical staff to provide indicators for the clinical terminologies most often challenged by payers.

I also would encourage facilities to develop and implement policies that ensure their validity prior to any submission of HIPAA transaction sets, with appropriate boundaries and limits.

 

Editor’s note: Dr. Kennedy is a general internist and certified coder, specializing in clinical effectiveness, medical informatics, and clinical documentation and coding improvement strategies. Contact him at 615-479-7021 or at [email protected]. Advice given is general. Readers should consult professional counsel for specific legal, ethical, clinical, or coding questions. For any other questions, contact editor Amanda Tyler at [email protected]. Opinions expressed are those of the author and do not necessarily represent HCPro, ACDIS, or any of its subsidiaries.

 

CMS releases 2017 ICD-10-CM codes

CMS has released the final list of new and revised ICD-10-CM codes available for reporting beginning October 1, 2016, with more than 2,000 changes.

The files include the code descriptions in tabular order, as well as an updated index and tables for neoplasms and drugs.

HCPro.com – Briefings on Coding Compliance Strategies

AHIMA practice brief addresses clinical validity and coding compliance

We as coders, clinical documentation specialists, and compliance officers are actively invested in coding compliance, aren’t we? AHIMA and ACDIS emphasize coding compliance in their codes of ethics. If we weren’t interested in coding compliance, why would we be reading newsletters named Briefings on Coding Compliance Strategies and other similar publications?

Many coders I know code solely on what a doctor documents, claiming that they are not physicians and do not have the authority to challenge a provider’s documented diagnosis or treatment.

In fact, AHIMA’s 2008 practice brief, Managing an Effective Query Process, emphasized that we should not query physicians if the clinical indicators do not support a provider’s documented diagnosis. This practice brief stated:

Providers often make clinical diagnoses that may not appear to be consistent with test results. Queries should not be used to question a provider’s clinical judgment, but rather to clarify documentation when it fails to meet any of the five criteria listed: legibility, completeness, clarity, consistency, or precision.

 

While AHIMA told us then not to query to ascertain the clinical validity of documentation, the United States Department of Justice (DOJ) and the Department of Health and Human Services must not have gotten the memo.

In June 2009, Johns Hopkins Bayview Medical Center, in Baltimore, Maryland, settled a False Claims Act case for $2.75 million. This happened after the DOJ said that the hospital’s "employees allegedly focused on lab test results which might indicate the presence of a complicating secondary diagnosis such as malnutrition or respiratory failure, and advised treating doctors to include such a diagnosis in the medical record, even if the condition was not actually diagnosed or treated during the hospital stay."

Baptist Healthcare Inc. and its affiliated hospitals near Louisville, Kentucky, paid $8.9 million in 2011 to settle a case involving the documentation, coding, and clinical validity of respiratory infections and inflammations, pulmonary edema, respiratory failure, and septicemia. These figures do not include the costs of attorneys, expert witnesses, and other intangibles expended in legal defense. Visit the DOJ’s website to learn more about these settlements: www.justice.gov.

The Medicare Provider Quarterly Compliance Newsletter then emphasized in July 2011 that providers and facilities are to determine the validity of documented acute respiratory failure, and that Recovery Audit Contractors had leeway to change a principal diagnosis based on provider documentation when they believed the clinical indicators did not support the documented diagnosis. Read the newsletter at http://tinyurl.com/jb5aauu, page 2.

AHIMA has since changed its tune. In its 2013 Query Practice Brief, AHIMA stated that a query is appropriate when the health record documentation "provides a diagnosis without underlying clinical validation."

The brief adds, "when a practitioner documents a diagnosis that does not appear to be supported by the clinical indicators in the health record, it is currently advised that a query be generated to address the conflict or that the conflict be addressed through the facility’s escalation process." AHIMA’s sample escalation policy is available at http://tinyurl.com/2013AHIMAescalationpolicy.

AHIMA recently stepped this up a notch by publishing a clinical validation practice brief in the July 2016 Journal of the American Health Information Management Association, available to AHIMA members at http://tinyurl.com/2016AHIMAclinicalvalidation. I encourage you to get a copy from an AHIMA member or from your local medical library and to discuss this document with your compliance officer or attorney.

Given that AHIMA is one of the ICD-10-CM/PCS Cooperating Parties, its practice briefs are often quoted by the DOJ; they must therefore be read closely and, if agreeable, incorporated into one’s compliance plan. The practice brief makes several points, most of which I agree with but some of which I do not. These include:

 

Compliance

AHIMA states:

Compliance, whether it’s a formal compliance department that understands compliant coding or coding management performing quality audits, can support the clinical validation process. Compliance can assist in developing a standardized query policy that applies to all who perform the query process within the organization regardless of the department in which they are located.

 

I wholeheartedly agree; however, AHIMA does not articulate under what circumstances, or how, a facility can omit an ICD-10-CM code for a documented diagnosis that is re-authenticated by an authorized provider.

I personally believe that if recovery auditors can deny codes for documented diagnoses based on their clinical judgment, then facilities should be able to do the same, particularly if they believe that a code would not survive reasonable scrutiny. I wish AHIMA had discussed this.

 

Clinical validation

AHIMA states, "it appears clinical validation may be most appropriate under the purview of the CDI professional with a clinical background," emphasizing that it is the coder’s role to become more clinically astute and to refer cases to a nurse or physician advisor as necessary.

I disagree to some extent. The ICD-10-CM Official Guidelines state that ICD-10-CM code assignment is a joint effort between the provider and the coder, not the provider and the CDI specialist or the CDI specialist and the coder. So, I believe that a properly trained and certified coder who is well versed in clinical terminology and definitions should be able to have this conversation with the provider alone, rather than delegate it to another individual who may not be as experienced. That said, if the coder is unsure of the situation, he or she should have a lifeline for clinical support to ensure the validity of the documented diagnosis or treatment.

 

Referencing clinical criteria

AHIMA and Coding Clinic for ICD-10-CM both say that Coding Clinic should not be referenced as a source for clinical criteria supporting provider documentation. I wholeheartedly agree, except in cases where no definition of a clinical term is available in the physician literature, such as functional quadriplegia or acute pulmonary insufficiency following surgery or trauma.

For these two conditions, Coding Clinic and/or the ICD-10-CM Official Guidelines are the only sources for definitions that ensure their validity. The most recent high-impact physician literature or textbooks should be referenced when defining other clinical conditions, or when defending against claims of clinical invalidity. A physician advisor can point out which references are highly respected.

 

Coders and CDI defining diagnoses

AHIMA states:

Although it is tempting for CDI and coding professionals to define diagnoses for providers, doing so is beyond their scope. For example, it is not appropriate for a CDI or coding professional to omit the diagnosis of malnutrition when it is based on the patient’s pre-albumin level rather than American Society for Parenteral and Enteral Nutrition (ASPEN) criteria. Many practicing physicians have not adopted ASPEN criteria and there is no federal or American Medical Association (AMA) requirement stating that ASPEN criteria must be utilized by a physician in making the diagnosis of malnutrition.

While this is technically true, given that CDI and coding professionals are not licensed to practice medicine, nor are they involved in direct patient care under most circumstances, they should still serve as their facility’s representatives, encouraging the medical staff as a whole to adopt facilitywide definitions of challenging clinical terms (e.g., sepsis, malnutrition, acute respiratory failure). They should also monitor and encourage individual providers as they adopt these definitions in their documentation, and escalate noncompliance to physician advisors, compliance officers, or medical staff leadership.

While one physician may not use ASPEN, or the Academy of Nutrition and Dietetics criteria, to define and diagnose malnutrition, I challenge readers to find any support for pre-albumin or albumin as a current clinical indicator for malnutrition, or more authoritative criteria than those of the nation’s premier association of dietitians and nutritional support teams for defining, diagnosing, and documenting malnutrition in the adult and pediatric population.
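The facilitywide definitions advocated above could be tracked in a simple structure like the following sketch. This is purely illustrative: the diagnosis names and the ASPEN reference come from the discussion here, while the structure, field names, and placeholder indicators are assumptions to be replaced by whatever the medical staff actually adopts.

```python
# Hypothetical registry of medical staff-adopted definitions for frequently
# challenged diagnoses. Every indicator entry is a placeholder to be filled
# in by physicians; nothing here is clinical advice.

FACILITY_DEFINITIONS = {
    "sepsis": {
        "source": "medical staff-approved criteria",
        "indicators": ["<to be defined by the medical staff>"],
    },
    "malnutrition": {
        "source": "ASPEN / Academy of Nutrition and Dietetics criteria",
        "indicators": ["<to be defined by the medical staff>"],
    },
    "acute respiratory failure": {
        "source": "medical staff-approved criteria",
        "indicators": ["<to be defined by the medical staff>"],
    },
}

def has_definition(diagnosis: str) -> bool:
    """Return True if the facility has adopted a definition for this diagnosis."""
    return diagnosis.strip().lower() in FACILITY_DEFINITIONS
```

A registry like this gives CDI specialists and coders one agreed-upon place to check before querying or escalating a documented diagnosis.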

 

Multiple-choice queries

AHIMA appears to have changed its language for multiple-choice queries with this practice brief, especially when clinical validity is an issue. In an example for validating documented sepsis without apparent clinical indicators, it offered the following multiple-choice options:

  • Sepsis was confirmed
  • Sepsis was ruled out
  • Sepsis was without clinical significance
  • Unable to determine
  • Other ______________

Given that this is AHIMA’s query format, we’re obligated to consider it; however, this does cause some difficulties. What can a coder do with "sepsis was without clinical significance" or "unable to determine," if that’s the option the provider selects? If "sepsis was without clinical significance" is selected, do we not code it with the belief that the documented condition doesn’t qualify as an additional diagnosis as defined in the ICD-10-CM guidelines? How many of us have run into physicians who document "unable to determine" as a way of avoiding the question?

I believe that if either of these two options is chosen, the record should be escalated to a physician advisor or coding manager, who can apply the facility’s policy on coding documented diagnoses that lack defensible clinical indicators.
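One way to operationalize this escalation rule is a small routing function. The response strings mirror the multiple-choice options quoted above; the resulting actions reflect one hypothetical facility policy, not anything AHIMA prescribes.

```python
# Hypothetical routing of a provider's response to a clinical validation
# query. Confirmed and ruled-out answers resolve directly; ambiguous answers
# and free-text "Other" responses are escalated to a physician advisor or
# coding manager, who applies the facility's policy.

def route_query_response(response: str) -> str:
    resolved = {
        "Sepsis was confirmed": "code the diagnosis",
        "Sepsis was ruled out": "do not code the diagnosis",
    }
    return resolved.get(response, "escalate for review")
```

Under this sketch, "Sepsis was without clinical significance," "Unable to determine," and any "Other" free-text response all fall through to escalation rather than leaving the coder to guess.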

 

Clinical validation auditing

AHIMA states, "auditing a small sample (e.g., 15 records per year) of coded records by each coding professional (both contract and employed) is one way to ensure that each coding professional is given some education on clinical validation."

While true, I believe that these audits should also include CDI specialists, given that many are not members of AHIMA and may not read AHIMA practice briefs, much less believe that the briefs apply to them. AHIMA does emphasize its position as one of the four Cooperating Parties for ICD-10-CM/PCS and that this brief is "relevant to all clinical documentation improvement professionals and those who manage the CDI function, regardless of the healthcare setting in which they work or their credentials."
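The sampling approach the brief suggests can be sketched as follows. The record structure and field names are assumptions for illustration; only the idea of a small per-professional sample (roughly 15 records per year) comes from the brief.

```python
import random

# Hypothetical selection of an annual clinical validation audit sample:
# group coded records by the professional (coder or CDI specialist) who
# worked them, then draw up to `sample_size` records from each group.

def select_audit_sample(records, sample_size=15, seed=None):
    rng = random.Random(seed)
    by_professional = {}
    for record in records:
        by_professional.setdefault(record["professional"], []).append(record)
    return {
        name: rng.sample(recs, min(sample_size, len(recs)))
        for name, recs in by_professional.items()
    }
```

Professionals with fewer than 15 coded records simply have all of their records reviewed, which keeps contract and low-volume staff in scope.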

 

Summary

In conclusion, please be sure to read this practice brief and consider how it affects your organization. Given that there are no standard definitions for at-risk ICD-10-CM/PCS terminology published by any of the Cooperating Parties or payers, and given that medical terminology used in documentation should be defined by physicians and their professional organizations, I encourage all facilities to engage with their medical staff to provide indicators for the clinical terminologies most often challenged by payers.

I would also encourage facilities to develop and implement policies, with appropriate boundaries and limits, that ensure the validity of coded diagnoses prior to the submission of any HIPAA transaction sets.

 

Editor’s note: Dr. Kennedy is a general internist and certified coder specializing in clinical effectiveness, medical informatics, and clinical documentation and coding improvement strategies. Contact him at 615-479-7021 or at [email protected]. Advice given is general; readers should consult professional counsel for specific legal, ethical, clinical, or coding questions. For any other questions, contact editor Amanda Tyler at [email protected]. Opinions expressed are those of the author and do not necessarily represent HCPro, ACDIS, or any of their subsidiaries.

HCPro.com – Briefings on Coding Compliance Strategies

Compliance Plan

Quick question….

I just took over as the coder/biller for a rheumatology practice with no written compliance plan 😮 Am I correct that per the ACA we need to have one in place? Can anyone point me in the right direction to start one? Hope this is not a foolish question but I just can’t believe they never had one in place!

Medical Billing and Coding Forum

OIG Posts a Resource Compliance Guide & Enforcement Action

You can use the links provided and go directly to the OIG material.

The OIG has developed the free educational resources listed here to help health care providers, practitioners, and suppliers understand the health care fraud and abuse laws and the consequences of violating them. These compliance education materials can also provide ideas for ways to cultivate a culture of compliance within your own health care organization.

The enforcement action is an example of recent healthcare fraud and abuse.

—————–

Resource Guide

—————–

Enforcement Action

  • Federal Jury Convicts Doctor of $40 Million Medicare Fraud (March 24, 2017; U.S. Attorney; Northern District of Texas) https://go.usa.gov/xX9Bp

—————–

The post OIG Posts a Resource Compliance Guide & Enforcement Action appeared first on The Coding Network.

The Coding Network