AI transcription tools are now standard in many newsrooms. But their use raises ethical questions that responsible journalism must address: informed consent, source protection, accuracy obligations, and transparency with audiences.
Informed Consent for Recording
In most jurisdictions you need consent to record, though requirements vary: some require only one party's consent, others all parties'. If you're using an AI tool to transcribe, sources should also know their voice is being processed by third-party software. This doesn't require a detailed GDPR-style disclosure; a simple "I'm recording our conversation and will use transcription software to help with notes" suffices in most contexts.
Source Protection
This is the most critical issue. If a source speaks on condition of anonymity, their voice recording and transcription must be handled with extreme care.
- Know where the transcription service stores data and for how long
- Delete recordings immediately after transcribing if source protection is required
- Use on-device models (offline) when maximum security is needed
- Never upload recordings to cloud services if sources face serious risk
Red line: never use a cloud transcription service that retains audio or trains on user data when transcribing anonymous sources in high-risk contexts.
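The on-device workflow above can be sketched in a few lines. This is a minimal illustration, not a vetted security tool: it assumes the open-source `whisper` package for offline transcription, and the `secure_delete` helper is a best-effort overwrite-then-remove (on journaling filesystems and SSDs, overwriting does not guarantee the data is gone; full-disk encryption or physical destruction is stronger protection).

```python
# Sketch: transcribe on-device, then destroy the source audio.
# Assumes the open-source `whisper` package (runs locally, no network calls).
import os
import secrets

def secure_delete(path: str, passes: int = 3) -> None:
    """Best-effort deletion: overwrite the file with random bytes, then remove it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to disk
    os.remove(path)

def transcribe_and_destroy(audio_path: str) -> str:
    """Transcribe offline with a local model, then destroy the recording."""
    import whisper  # model weights and inference stay on this machine
    model = whisper.load_model("base")
    text = model.transcribe(audio_path)["text"]
    secure_delete(audio_path)
    return text
```

The point of the sketch is the order of operations: the audio never leaves the machine, and the recording is destroyed as soon as the transcript exists, so there is no lingering file to subpoena or leak.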
Accuracy Obligations
Using AI transcription doesn't reduce your accuracy obligations; it shifts them. AI transcripts routinely mishear names, numbers, and homophones, so you're still responsible for checking every direct quote against the original audio. The fact that an AI produced the transcript doesn't protect you if you publish an inaccurate quote.
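One lightweight way to operationalize quote-checking is to pull every quoted passage out of a draft into a checklist to verify against the audio. The helper below is hypothetical, not part of any transcription tool, and only handles straight double quotes; curly quotes or nested quotation would need a richer pattern.

```python
# Hypothetical helper: extract direct quotes from draft copy so each one
# can be verified against the original recording before publication.
import re

def extract_quotes(draft: str) -> list[str]:
    """Return the text of every double-quoted passage in the draft."""
    return re.findall(r'"([^"]+)"', draft)

# Example: a two-quote draft yields a two-item verification checklist.
draft = 'The mayor said "we never saw the report" and later called it "a clerical error".'
checklist = extract_quotes(draft)
```

Each item in the checklist gets ticked off only after a human has replayed the relevant audio, which keeps the verification burden visible rather than buried in the transcript.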
Transparency with Audiences
As AI use in journalism increases, audience trust depends on transparency. Consider disclosing your use of AI transcription in your editorial methodology documentation, even if not in every article.