AI-Generated Notes: How to Review, Edit, and Sign Them Responsibly
The clinician's obligation when using AI for documentation, what to check before signing, how to think about liability, and the 'AI is a first draft' framing.
AI-assisted documentation is moving from "something some therapists are experimenting with" to something a meaningful percentage of the profession uses regularly. With that shift comes a question that doesn't get asked clearly enough: what exactly is your obligation when you sign a note that an AI generated?
The short answer is the same as it's always been — you're signing the note. The AI isn't licensed. The AI doesn't have a therapeutic relationship with your client. You do. Whatever process produced the first draft, the signed note is yours.
This isn't an argument against AI documentation tools. Used well, they can significantly reduce documentation burden without reducing documentation quality. But "used well" requires understanding your role in the workflow — and that role isn't just pressing approve.
The "First Draft" Framing
The most useful way to think about AI-generated notes is as a first draft written by an extremely fast, reasonably intelligent research assistant who was present in the session but doesn't know your client, doesn't know your clinical judgment, and doesn't know what matters and what doesn't in this particular therapeutic relationship.
That framing has practical implications:
First drafts require editing. Not light copyediting, but substantive review of whether the draft reflects what actually happened and what you intended.
First drafts can be wrong. AI systems can mishear, mischaracterize, or simply miss the clinically significant moment in a session. They're generating text based on patterns, not clinical understanding.
First drafts are starting points, not endpoints. The note you sign should be yours — informed by the AI draft, improved through your review, but ultimately reflecting your clinical judgment.
If you would be uncomfortable explaining every sentence in a note to a licensing board, you shouldn't sign it — regardless of how it was generated.
What to Actually Check
Here is a practical review checklist for AI-generated notes:
Accuracy of clinical content
- Does the note accurately reflect what the client said and presented? Not just the major themes, but the specific content that matters — the thing they said that shifted the session, the affect that told you something important, the moment where something clicked.
- Are diagnostic impressions and symptom descriptions accurate? AI systems can sometimes drift toward language that doesn't match your actual clinical formulation.
- Are interventions described accurately? "Clinician used CBT techniques" is both accurate and useless. The note should reflect what you actually did, not a generic descriptor.
Completeness
- Is the risk assessment portion (if applicable) complete and accurate? Risk documentation is an area where AI-generated notes need especially careful review — an AI might summarize a risk discussion in ways that don't capture your actual clinical reasoning.
- Does the note reflect the entire session, including any significant shifts or pivotal moments?
- Is the plan section accurate and specific?
Language and framing
- Does the language reflect your clinical voice and the approach you take with this client?
- Is there anything in the note that could be harmful if read by the client (in a records request), by another provider, or in a legal proceeding?
- Does the note use appropriate clinical language without being reductive or pathologizing?
Technical accuracy
- Is the date correct?
- Are session length and modality (in-person, telehealth, audio-only) accurate?
- Are CPT codes or billing descriptors, if auto-populated, accurate?
Where AI Notes Tend to Go Wrong
Most AI documentation tools make predictable errors, and knowing what they are makes your review faster and more reliable:
Over-reliance on filler language. AI systems trained on clinical notes learn common phrases and use them heavily: "client presented with," "explored," "discussed coping strategies." These phrases are often technically accurate but clinically empty. Edit them toward specificity.
Missing the key moment. The most significant thing that happened in a session is often the thing that's hardest for AI to identify — a pause, a tone shift, something the client didn't say. AI notes sometimes produce a technically accurate summary of a session that misses the clinical heart of it entirely.
Inaccurate transcription of emotional content. If you're using an ambient AI tool that records and transcribes sessions, emotional nuance — sarcasm, deflection, affective flatness — often doesn't survive transcription. A note that says "client appeared upbeat" when the client was presenting a forced brightness that concerned you clinically is a substantively inaccurate note.
Risk documentation gaps. If a client makes an offhand comment about suicidal ideation that you explored and resolved, an AI-generated draft might not flag it or might underemphasize it. Risk-related content needs especially careful human review.
The Liability Question
Some therapists worry that using AI documentation tools creates new liability. The reality is more nuanced.
Using AI to assist documentation doesn't inherently create liability — it's a tool, and plenty of tools assist clinical documentation (templates, EHR auto-population, dictation software). What creates liability is signing notes that don't accurately reflect your clinical care.
If you're reviewing AI-generated notes carefully and editing them to reflect your actual clinical judgment, how the first draft was generated is largely irrelevant from a liability standpoint. The note is yours. It should be accurate. That's the standard.
Where liability does arise:
- If you sign notes without adequate review and they contain errors
- If the AI documentation tool records or stores session content in violation of HIPAA
- If you use AI tools that don't meet HIPAA compliance standards
On the HIPAA point: any AI documentation tool that processes session audio or transcripts needs a Business Associate Agreement (BAA) in place with you. This is non-negotiable. Before adopting any AI documentation tool, verify that the vendor signs BAAs and understand where your data is processed and stored.
Building a Review Practice
For AI documentation to work as a quality-enhancing tool rather than a quality-compromising shortcut, you need a consistent review practice — not a heroic effort to catch every error, but a reliable habit that ensures you've actually read and evaluated each note before signing.
Some practical approaches:
- Always read the full note, not just scan it. Scanning is how errors get through.
- Read it from the perspective of someone who wasn't in the session. Does it tell a coherent clinical story?
- Make at least one specific edit to every note. This keeps you genuinely engaged with the content rather than click-approving.
- Flag any note where you made significant edits, at least while you're building familiarity with a new tool. This helps you calibrate whether the AI is producing useful first drafts or consistently missing things.
The Bottom Line
AI documentation tools, at their best, give you back time you can spend on clinical work, supervision, professional development, or simply recovery. They don't change what a note needs to do or what your obligations are when you sign one.
Use them. Review them carefully. Sign only what you'd stand behind.