The Truth About AI and Clinical Notes in Chiropractic EMRs: State of the Art in 2025

Todd Lloyd
October 7, 2025

Author’s context: I recently attended a PI documentation seminar led by a chiropractor I admire. Afterward, the doctor cautioned against over‑reliance on AI for PI records due to hallucinations and legal optics. I agree with many points. In this piece I refer to the presenter as “Dr. Personal Injury,” and I add my own observations from day‑to‑day PI practice.

I read an excerpt by Dr. Personal Injury asserting that generative AI technology for clinical note-taking is not yet ready for prime time in chiropractic electronic medical records (EMRs). Key points from his statement include:

  • AI’s Benefits and Drawbacks: AI can quickly find information, but its generated content often contains inaccuracies or context errors that require manual correction. Dr. Personal Injury notes that every AI-generated output must be carefully read and verified for accuracy.
  • Personal Experience with an AI-Integrated EMR: Dr. Personal Injury describes incorporating an AI feature into his chiropractic EMR template (built on AdvancedMD). He found the AI’s output “underwhelming” — requiring more time to review and fix than using his pre-defined templates and macros. He observed that the AI’s report formatting was less polished, and subtle grammar differences sometimes altered clinical findings.
  • Similar Results with Other Systems: He reports testing multiple AI-driven chiropractic EMRs and encountering the same issues: the AI often failed to match the preferred report structure, and using it (including “ambient” voice-dictation AI) forced additional editing, negating any time savings.
  • AI Note-Taking Not Yet Reliable: In Dr. Personal Injury’s view, AI documentation tools need perhaps “1-2 years” more development to become consistently accurate in healthcare. He claims many colleagues in various medical disciplines likewise tried AI for notes and then stopped using it due to these accuracy and workflow problems.
  • Legal Concerns: For medico-legal documentation, he warns that opposing attorneys may ask if AI assisted in generating a report. If yes, it could be used to undermine the chiropractor’s credibility, since the content came from an “unlicensed programmer” rather than the clinician. He suggests courts are currently leery of AI-generated reports, often disallowing them.

The question we address is: Are these assertions true? Below, we examine each major claim in light of current evidence and expert commentary.

Accuracy Issues with AI-Generated Clinical Notes

As a practicing chiropractor working in PI, I find my own day-to-day experience tracks with the need for accuracy and tight oversight, but I’ve also found a practical way to reduce typical “hallucination” risk. Immediately after the encounter, I record a brief debrief on a Sony voice recorder and transcribe it on my Mac with MacWhisper (Parakeet 3.0). Speaking the exam, the patient’s narrative, and my impressions captures nuance that checkbox templates and rushed typing tend to miss. When I then ask an AI to summarize strictly from the transcript—no added details, no speculative fill-ins—I rarely see classic hallucinations. The most common clean-up is removing adjectives like “significant” unless they appear verbatim in the source material, because such words can be terms of art in med‑legal contexts. This voice‑first, transcript‑first approach keeps the note grounded in what was actually said and done, while still benefiting from AI’s organizing and summarizing strengths.
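For readers who want to see what “summarize strictly from the transcript” looks like in practice, here is a minimal sketch in Python. It assumes the MacWhisper transcript has already been exported to a plain-text file and that a chat-completions-style API is available (the openai SDK in this example); the file name, prompt wording, model choice, and the summarize_transcript helper are illustrative, not the exact tooling I use.

# Minimal sketch: draft a note strictly from a dictated transcript.
# Assumptions: the openai Python SDK (v1+) is installed, credentials are in the
# environment, and MacWhisper has exported the debrief to visit_debrief.txt.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are drafting a chiropractic SOAP note. Use ONLY facts stated in the "
    "transcript below. Do not add findings, diagnoses, or adjectives such as "
    "'significant' unless they appear verbatim. If something is not in the "
    "transcript, leave it out rather than guessing."
)

def summarize_transcript(path: str, model: str = "gpt-4o") -> str:
    """Return a draft note grounded only in the dictated transcript."""
    with open(path, encoding="utf-8") as f:
        transcript = f.read()
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # favor verbatim fidelity over creative phrasing
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = summarize_transcript("visit_debrief.txt")
    print(draft)  # the draft still gets a line-by-line human read before signing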

Dr. Personal Injury’s caution about AI accuracy is well-founded. Modern generative AI (e.g. large language models like GPT-4) can produce text that reads confidently but may contain errors, omissions, or invented details – known as “hallucinations.” In a clinical context, even minor inaccuracies can have serious consequences. Recent analyses highlight several reliability concerns with AI-generated medical notes:

  • Error Rates: Traditional speech-to-text dictation for doctors has high error rates (7–11% word error rate) due to medical jargon and accents. Newer “ambient” AI scribes (which listen to the patient visit and generate notes using AI) claim lower raw error rates (~1–3%), but they introduce new failure modes. For example, an AI might fabricate an exam finding or diagnosis that was never discussed, omit critical symptoms or context from the conversation, or misinterpret statements in a way that changes meaning. These subtle errors align with Dr. Personal Injury’s observation that AI altered the nuance of his findings.
  • Hallucinations and Omissions: Experts have documented AI scribes hallucinating content to fill gaps – for instance, documenting an examination that never occurred or creating a plausible-sounding detail with no basis in reality. On the flip side, AI might omit important information that was actually mentioned, such as a patient’s concern or part of the assessment, especially if the conversation was complex. Dr. Personal Injury’s report of “subtle changes in grammar that changed my findings” reflects these risks: even a small phrasing change by the AI could misrepresent a patient’s condition or the clinician’s assessment.
  • Need for Careful Review: Due to such issues, medical risk management authorities strongly advise that clinicians must thoroughly review and edit AI-generated notes before finalizing them. The Texas Medical Liability Trust, for example, emphasizes that skipping review or blindly trusting the AI can lead to documentation errors or omissions that increase liability risk. In fact, the Federation of State Medical Boards has issued guidance that physicians remain “fully responsible for the content of all medical documentation, regardless of how it was generated.” Automatic signing of AI-produced notes is discouraged, precisely because the clinician needs to verify every detail. This guidance supports Dr. Personal Injury’s stance that one cannot rely on AI output unchecked – “each time you use it, you MUST read what AI created to ensure it is what you want,” as he wrote.

In summary, the downside of AI is indeed accuracy. The current generation of AI models can draft impressively fluent text, but they do require “tweaking for accuracy,” especially in healthcare contexts. Dr. Personal Injury’s caution that AI note content must be verified line-by-line is echoed by medical experts and patient safety advocates. This part of his claim is true and well-substantiated by the literature.

Efficiency and Workflow: Does AI Save Time or Create More Work?

In my clinic, the biggest efficiency win has come from switching my capture modality, not from asking AI to invent text. Voice-first capture, then transcript-only summarization, gives me a rich, accurate base that AI can condense without guessing. I also use an encrypted, enterprise AI to ask targeted questions across long notes—“Where do pain scores conflict?” or “List meds mentioned in history but missing in plan.” It highlights the exact lines so I can add a precise addendum. Compared with old checkbox templates or typing during the visit, this keeps me present with the patient and shortens end-of-day charting. Where I still spend time is in quick language cleanup (removing value‑laden words) and a final read-through before signing—worth the minutes for med‑legal defensibility.
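Here is a sketch of that line-level questioning step, under the same assumptions as the previous example: numbering the note before asking lets the model cite exact lines, which is what makes a precise addendum possible. The ask_about_note helper and the example questions are illustrative, not a feature of any particular EMR or vendor tool.

# Sketch: ask targeted consistency questions against a line-numbered note,
# so the answer can cite the exact lines that need an addendum.
from openai import OpenAI

client = OpenAI()

def ask_about_note(note_text: str, question: str, model: str = "gpt-4o") -> str:
    # Prefix every line with its number so the model can reference it.
    numbered = "\n".join(
        f"{i + 1}: {line}" for i, line in enumerate(note_text.splitlines())
    )
    prompt = (
        "Answer using only the note below. Cite the line numbers that support "
        "your answer, and say 'not documented' if the note is silent.\n\n"
        f"Question: {question}\n\nNote:\n{numbered}"
    )
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example questions from my own review pass:
#   ask_about_note(note, "Where do the documented pain scores conflict?")
#   ask_about_note(note, "List medications mentioned in the history but missing from the plan.")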

One of the big promises of AI in clinical documentation is improved efficiency – reducing the time doctors spend on writing notes. Dr. Personal Injury, however, found that using the AI in his chiropractic EMR cost him time rather than saving it, because he had to correct the AI’s output and adjust its format. What does the evidence say about AI’s impact on workflow and time?

  • Early Evidence of Time Savings: Some studies and pilot programs have reported moderate time savings with AI-assisted documentation, especially for summarizing existing text. For instance, a 2025 case study at NYU Langone Health used GPT-4 to auto-summarize previous encounter notes for physicians’ chart review. Providers with the longest pre-AI chart review times saw their review time per patient drop by about 18 seconds on average with the AI summaries, and most clinicians felt the summaries were accurate and helpful. Similarly, ambient AI scribe tools (which generate a full note from the doctor-patient conversation) have shown documentation time reductions of around 20–30% in some trials. These findings indicate AI can in theory lighten the documentation load.
  • Real-World Mixed Results: Despite promising pilots, many clinicians report that current AI note tools are not yet plug-and-play time savers. Dr. Personal Injury’s experience is not unique. For example, in one Reddit forum a primary care physician using Nuance DAX (an AI ambient scribe) said: “I would say it’s not particularly helpful yet, but I could see the technology improving… At this point, I continue to use it because I feel it is time-neutral”. He noted the AI did an okay job summarizing a patient’s History of Present Illness, but the assessment/plan it generated was oversimplified and had to be edited or replaced, similar to Dr. Personal Injury’s complaint. Integration issues (needing to copy-paste from the AI into the EHR) and the need to explain the AI to patients also introduced friction. In short, some doctors find little to no net time saved once you count the editing and oversight required.
  • Structure and Formatting Issues: Dr. Personal Injury specifically criticized the AI’s “report structure” as being in a raw computer format rather than a polished, compliance-ready narrative. This touches on a practical issue: many AI-generated notes are overly generic or templated in style. Clinicians worry that these notes all start to look the same (“standard legalese language”) and may omit the personalized details that make a note useful. If an AI’s output doesn’t fit the documentation style needed (e.g., a SOAP note format with specific wording), a clinician must re-format it, which again eats up time. Well-designed templates and macros (the traditional approach Dr. Personal Injury reverted to) may actually produce a better-formatted note faster, at least until AI can be aligned to each provider’s preferences.
  • Vendor Claims vs. Reality: It’s worth noting that some EHR companies are now marketing AI assistants for chiropractic and other specialties, boasting huge efficiency gains. For instance, a leading chiropractic EHR vendor, ChiroTouch, launched an AI assistant in 2025 claiming “early users report up to 92% time savings on charting”. Such claims should be taken with a grain of salt. They likely represent best-case scenarios or specific tasks. No independent study has verified a 92% documentation time reduction, and it’s improbable that all users see such dramatic improvements. Independent research and user reports paint a more modest picture – AI can help with drafting text, but physicians often spend additional time reviewing and correcting notes before signing. Dr. Personal Injury’s underwhelming results align with many clinicians’ cautious experiences rather than the marketing hype.

Bottom line: AI note-taking tools in 2023–2025 have shown potential to speed up documentation, but they are far from a hands-off solution. Many users find that the time saved in initial drafting is offset by the time needed to review, edit, and ensure compliance. Dr. Personal Injury’s claim that current AI “wasn’t ready for prime-time” for efficient note-writing appears true for the majority of typical use cases, especially in nuanced clinical documentation like chiropractic exams where formatting and detail are crucial.

AI “Clinical Decisions” and Changes in Meaning

In my experience with AI-assisted note-taking, I've found it to be remarkably accurate and faithful to what I dictate. The AI doesn't make clinical decisions; rather, it precisely captures and organizes what was actually discovered during the patient encounter. When I dictate "patient reports pain that worsens at night," that's exactly what appears in my notes—no embellishment, no clinical interpretation added. The AI maintains the integrity of my clinical observations without inserting its own judgments or conclusions. This precision is crucial for maintaining accurate documentation that reflects my professional assessment rather than algorithmically-generated content. When necessary, I can easily make clarifications directly in the note, maintaining a clear record of my clinical reasoning and findings that accurately represents the patient's condition and my professional judgment.

Nevertheless, another concern raised is that AI may inadvertently make clinical decisions or alter the intended meaning in the note. Dr. Personal Injury noted that the AI’s output sometimes included phrasing changes that “changed my findings.” This implies the AI might interpret the input data and rephrase it in a way that is clinically incorrect or overstates/understates something. Is there evidence of AI doing this?

Yes – this is a known risk. Generative AI doesn’t truly understand clinical context; it predicts text that seems appropriate. In doing so, it can introduce subtle but important shifts:

  • A commentary in NPJ Digital Medicine describes that AI-generated notes can suffer from contextual misinterpretations, where the nuance of a patient’s statement or a doctor’s assessment is lost. For example, if a patient denies a symptom, a careless AI might drop the negation and document the symptom as present (or vice versa). Or the AI might phrase a clinical impression more definitively than the doctor intended, effectively making an unwarranted diagnostic conclusion. Dr. Personal Injury’s mention that “AI makes too many clinical decisions, according to how the report reads” likely refers to scenarios where the AI filled in conclusions or inferences that the clinician did not explicitly make. The TMLT risk bulletin also warns of this: advanced AI scribe systems might start suggesting diagnoses or treatments, and if the provider isn’t vigilant, they might end up in the note without proper validation. Relying on such AI-generated suggestions “without careful review and agreement” could lead to serious errors in the care plan.
  • Another documented failure mode is when an AI, to fit a standardized template, inserts normal findings or default text that wasn’t actually stated – effectively making up data to avoid leaving sections blank. One report noted AI filling in a normal review-of-systems entry that was never discussed. This could be what Dr. Personal Injury experienced as “providing choices” that didn’t match what he actually found. All of these issues reinforce why human oversight is essential. The physician must ensure the final note reflects their own clinical judgment and actual encounter details, not the AI’s best guess.

Thus, it is true that current AI note-generation can overstep by implicitly making clinical choices or altering emphasis. This is precisely why professional guidelines insist the clinician remains the ultimate author: “physicians remain fully responsible for the content… regardless of how it was generated.” The AI is a tool, not a licensed practitioner, and it cannot be given free rein to decide what goes into the record.

Is AI Note-Taking Close to Ready?

Dr. Personal Injury predicts that AI for note-taking will be very useful in the future — but not quite yet, suggesting “maybe 1–2 years” away from being consistently reliable. It’s always tricky to put timelines on technology, but we can gauge the trajectory:

  • Rapid Improvement: The field of AI is evolving extremely fast. The generative models available by late 2024 (like GPT-4) were significantly more capable than those a year prior, and ongoing research is refining these tools for medical applications. Large health systems (e.g., Stanford, Mayo Clinic) have been piloting AI chart assistants, and EHR vendors like Epic are integrating GPT-based features. Early adopters express optimism that with just a bit more accuracy and better EHR integration, AI scribes could meaningfully improve workflows in the near future. So the idea that AI note-taking is “close, but not yet” is shared by many – essentially a cautious optimism that we’re on the cusp, but still have kinks to iron out.
  • Current Adoption with Caution: By 2025, about 30% of physician practices (in medicine broadly) were already experimenting with AI scribe tools. Major deployments (like the Veterans Affairs pilot with Nuance and Abridge) are underway. This shows confidence that the technology can be used, but it’s often accompanied by safety checks. Notably, many AI-generated notes today undergo human review by either the clinician or a quality assurance team. In Nuance’s DAX system, for example, the draft note is sent to a human reviewer/editor to catch errors before it’s finalized. This hybrid approach acknowledges that the AI isn’t fully trustworthy alone – essentially supporting Dr. Personal Injury’s point that it’s not yet ready to be totally “seamless” without oversight.
  • “Prime Time” Criteria: What would “ready for prime time” look like? Likely, AI note systems would need to achieve near-perfect factual accuracy, handle medical nuance, integrate effortlessly with workflows, and have clear legal/regulatory green lights. We are making progress – error rates are dropping and some studies find no negative impact on patient safety or care when using AI notes with oversight. But at the moment, even a 1–3% error rate is problematic in healthcare, and fully autonomous note generation (with no human correction) is not advisable. So, Dr. Personal Injury’s skepticism for 2023/2024 is valid. His estimated timeline of 1–2 years is an opinion – it’s possible that by 2025–2026 AI will have improved markedly, but it’s equally possible it will still require human checks. In any case, the consensus in late 2024 was that AI note-taking is promising but still emerging – a tool “close” to primetime, but needing careful use for now.

Thus, saying “AI FOR NOTES IS NOT READY FOR PRIME TIME… YET” is a reasonable summary of the situation. It captures the mix of excitement and caution evident in the healthcare community.

But I say it’s ready now.

Legal and Compliance Concerns

From a compliance standpoint, I treat AI as a drafting aid and keep authorship unambiguous. If I discover a discrepancy, I fix it and document the correction in plain language. I avoid retroactive edits that blur the record. Before signing, I do a final human read to confirm that every statement is supported by source text, that no value‑laden adjectives crept in, and that the assessment and plan align with the documented history and exam. If asked by counsel, I can explain precisely how AI assisted (transcript summarization, line‑level inconsistency surfacing) and why the final note is mine, not the model’s.
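As one concrete illustration of the “no value‑laden adjectives” check, the small sketch below flags words like “significant” in a draft unless they appear verbatim in the source transcript. The word list and the flag_unsupported_terms function are my own illustration under those assumptions, not part of any EMR or vendor product.

# Sketch: flag value-laden words in a draft note unless they also appear
# verbatim in the source transcript. The word list is deliberately short.
import re

VALUE_LADEN = {"significant", "severe", "catastrophic", "devastating"}

def flag_unsupported_terms(draft: str, source: str) -> list[str]:
    """Return value-laden words used in the draft but absent from the source."""
    source_words = set(re.findall(r"[a-z']+", source.lower()))
    flagged = []
    for word in VALUE_LADEN:
        used_in_draft = re.search(rf"\b{word}\b", draft, flags=re.IGNORECASE)
        if used_in_draft and word not in source_words:
            flagged.append(word)
    return flagged

# Usage: anything returned here gets removed or reworded before I sign.
#   flag_unsupported_terms(draft_note, transcript_text)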

The excerpt makes a striking claim that in medico-legal scenarios, using AI to generate reports can be a liability. Specifically, Dr. Personal Injury says opposing counsel may ask if AI was used, and if so, hold it against the doctor because the report “was created by an unlicensed programmer and not you.” He further claims courts are leery of AI-generated reports and often disallow them. Let’s unpack this:

  • Authenticity and Expert Witness Testimony: In legal proceedings (e.g., personal injury cases where chiropractors may serve as expert witnesses or providers of record), the integrity of documentation is critical. An expert’s report is expected to be their own work product, reflecting their professional opinion. If part of that report was ghostwritten – whether by a junior staffer, an attorney, or an AI – it opens a line of attack for opposing counsel. Legal experts indeed foresee questions like “Who actually drafted this report?” becoming common. In fact, a January 2025 federal case (Kohls v. Ellison in D. Minnesota) excluded an expert’s declaration that was found to contain AI-generated false citations. The judge emphasized the need to trust that an expert’s declaration is personally vetted and accurate, and joined a “growing chorus of courts” warning lawyers to verify any AI-generated content used in filings. This illustrates that courts are paying attention to AI’s role and are prepared to strike or discount content tainted by unverified AI information.
  • “Unreliable because dictated by a machine”: A blog from law firm Zuckerman Spaeder noted it’s only a matter of time before a lawyer argues “an expert’s testimony is unreliable because the expert’s opinion was dictated by a machine.” Lawyers are advised to consider asking opposing experts whether AI played a material role in preparing their report. This aligns almost exactly with what Dr. Personal Injury warns: if you answer “Yes, AI helped with my report,” you invite a challenge to the report’s credibility. The concern is that an AI (essentially software created by programmers who are not licensed medical professionals) might have introduced content or wording that is not the expert’s own thinking, which could undermine the expert’s authorship and judgment.
  • Current Court Stance: There isn’t yet a blanket rule disallowing all AI-assisted reports, but the skepticism is real. As of 2024–2025, some judges have even issued standing orders requiring attorneys to disclose AI involvement in briefs and filings. For expert reports, which by rule must be “prepared and signed by the witness”, any sign that the witness did not substantially write the report can be problematic. Historically, even heavy editing by attorneys has been a point of contention in expert reports – AI involvement would be scrutinized under the same lens or more so. The bottom line for clinicians: if your clinical note or IME report was significantly generated by AI, you absolutely must review and adopt it as your own, or risk it being challenged. If you can’t defend every word, it could be discredited. Dr. Personal Injury’s advice here aligns with emerging legal ethics guidance: some bar associations suggest that if AI is used, it should be disclosed and the professional is responsible for verifying the output.
  • “Disallowing AI-generated reports”: It’s strong phrasing to say courts disallow most AI-generated reports. Since AI-drafted expert reports are just now appearing, we don’t have many direct precedents. However, given the early cases and commentary, one can certainly say courts are extremely wary of AI-generated content in evidence. If such content is found to be inaccurate or if the expert cannot testify that it’s truly their own conclusions, the court can exclude that evidence. In practice, any AI usage would likely need to be revealed and defended. So while “most AI-generated reports” haven’t been outright banned (because few have been formally offered), any report found to rely on AI is at high risk of being challenged or not admitted unless thoroughly verified. In sum, Dr. Personal Injury’s caution is valid – legal frameworks are lagging behind technology, and until they catch up, using AI for official documentation is a legal gray area. The safest route for now is to personally author and verify all reports, using AI only as a behind-the-scenes helper if at all, and being prepared to attest that the content is your own.

Conclusion: Is Dr. Personal Injury’s Assessment True?

Broadly speaking, yes – the cautionary points raised in “The Truth About AI and Notes in Chiropractic EMRs” are grounded in the current reality. To recap:

  • AI’s speed and knowledge access are undeniably useful, but accuracy remains a serious weakness. Healthcare providers must double-check AI outputs, as authoritative sources confirm the risk of AI-generated errors, hallucinations, and misinterpretations.
  • Efficiency gains from AI documentation are not guaranteed in practice. Some early adopters find the technology promising but not yet a time-saver after accounting for oversight and editing. This matches Dr. Personal Injury’s firsthand experience that existing chiropractic AI note systems can be underwhelming – requiring as much or more effort compared to well-designed traditional templates.
  • AI note-taking in healthcare is still maturing. Experts agree it’s “close, but not yet” ready for fully trustable use. Dr. Personal Injury’s timeframe of 1–2 years for significant improvement may or may not prove accurate, but it reflects a common sentiment: the tech is rapidly improving yet still needs refinement and validation before it can seamlessly integrate into clinical workflows.
  • Legal and professional standards currently put the onus on the clinician. Any note or report generated with AI must ultimately be owned and authenticated by the practitioner. In legal settings, using AI without transparency and careful verification can indeed backfire, as courts and attorneys treat such reports with skepticism.

It’s also true that Dr. Personal Injury has a vested interest (promoting his own EMR system). Nonetheless, the core of his message is echoed by independent sources: “AI for clinical documentation isn’t ready for prime time… yet.” Practitioners should approach AI-based note generation with eyes open: use it as a tool for drafts or inspiration if desired, but rigorously review everything, and be mindful of compliance and legal implications. Until the day comes when AI can be trusted to produce 100% accurate and contextually correct notes (and that day is not today), caveat emptor – let the buyer, or in this case the user, beware – is prudent advice.

Author’s summary

AI in chiropractic documentation, especially in PI, is already useful when applied with safeguards. Voice-first capture plus careful transcription and neutral, non-legal summarization has proven more complete and efficient than typing and checkbox workflows. An encrypted, enterprise AI can surface inconsistencies for quick addenda, and Notion 3.0’s AI helps me transform free‑form referrals into structured, linked records and track lien outcomes attorney by attorney.

This doesn’t eliminate my responsibility to verify every line. I avoid value‑laden language (for example, I strip the word “significant” unless it appears verbatim in source notes), and I always perform a human review before finalizing. Used this way, AI reduces typing, improves completeness, and frees attention for patient care—while keeping the record defensible. I’m optimistic about 2025 and beyond: with tight prompts, encryption, and human-in‑the‑loop checks, AI is already a practical companion for chiropractic PI documentation.

Sources:

  • Topaz M, et al. “Beyond human ears: navigating the uncharted risks of AI scribes in clinical practice.” NPJ Digital Medicine. 8, 569 (24 Sep 2025): Discusses error rates (7–11% in older dictation vs ~1–3% in LLM-based scribes) and new failure modes like hallucinations and omissions.
  • Texas Medical Liability Trust. “Using AI medical scribes: Risk management considerations.” (Oct 2023): Outlines potential AI documentation errors (missing info, transcription mistakes, “hallucinating” data to fill gaps) and emphasizes physician responsibility for verifying AI-generated notes.
  • Fischer SH & Gebauer SL. “Are AI-Generated Medical Notes Really Any Worse?” RAND Corporation Commentary. (Apr 4, 2025): Notes that 90% of human-written notes and 96% of speech-recognition draft notes contain errors, arguing AI’s imperfections must be weighed against existing documentation problems. (Highlights the need to improve both human and AI documentation.)
  • Silberlust J, et al. “Artificial intelligence-generated encounter summaries: early insights…” J Am Med Inform Assoc. (Collection Oct 2025): Pilot study where GPT-4 note summaries in Epic saved clinicians some review time and were rated highly for accuracy, but focused on summarizing existing notes (not generating full notes from scratch).
  • Reddit r/medicine forum – physician feedback on Nuance DAX AI scribe (2023): A doctor’s anecdotal report that the AI’s draft notes needed significant editing (especially in the plan section), yielding a “time neutral” outcome initially.
  • Zuckerman Spaeder LLP (Connolly JJ). “New Question for Expert Witnesses: Who Drafted This Report, You or Your Machine?” (Jan 15, 2025): Legal analysis of a case where an AI-assisted expert report was thrown out due to errors, and advice that attorneys should avoid AI-drafted expert reports or be prepared to disclose and defend them.
  • ChiroTouch press release. “ChiroTouch Launches Rheo: The First AI Assistant Purpose-Built for Chiropractors.” (Aug 19, 2025): Vendor claims of 92% charting time savings with an AI assistant – illustrative of the optimistic marketing versus the more cautious independent assessments above.