How should Scrum Teams handle data security risks when using AI tools like ChatGPT?
Hi everyone,
Many Scrum Teams today are using public AI tools such as ChatGPT, Gemini, and Copilot to speed up daily work — writing user stories, summarizing requirements, generating code, or analyzing logs.
While these tools improve productivity, I’m concerned about an often-overlooked risk: accidentally sharing sensitive or confidential data with public LLM platforms.
For example, team members might paste:
- customer information
- financial or production data
- internal documents
- source code
- or proprietary business logic
Since these tools process prompts outside the organization's control, this could lead to data leakage or compliance issues.
I’m curious how other teams are addressing this.
Do you:
- have AI usage guidelines or policies?
- enforce data redaction/anonymization before using AI tools? (a rough sketch of what that might look like is below this list)
- restrict public AI tools altogether?
- or use private/enterprise AI solutions?
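For the redaction/anonymization option, here is a minimal sketch of the kind of pre-submission scrubbing I have in mind, assuming a hypothetical Python helper that sits in front of any call to an external AI tool. The regex patterns and placeholder labels are illustrative only; a real setup would rely on a vetted PII/DLP detection library rather than hand-rolled patterns.

```python
import re

# Illustrative patterns only; a production setup should use a vetted
# PII/DLP detection library instead of hand-rolled regexes.
# Order matters: the IBAN pattern must run before the phone pattern,
# otherwise the phone regex swallows the IBAN's digit run.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace likely-sensitive substrings with placeholders before the text leaves the organization."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = ("Summarize this ticket: jane.doe@example.com called from "
              "+1 555 123 4567 about a refund to DE89370400440532013000.")
    print(redact(prompt))
    # -> Summarize this ticket: [EMAIL REDACTED] called from
    #    [PHONE REDACTED] about a refund to [IBAN REDACTED].
```

Even with scrubbing like this, genuinely confidential material such as source code or proprietary business logic still should not be pasted into public tools, which is why the policy questions above matter at least as much as the tooling.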
From a Scrum perspective, should this be handled as part of the Definition of Done, working agreements, or organizational policy?
Would love to hear how your teams balance productivity with security.
Thanks!
Policies about tools, AI or otherwise, should be handled outside of Scrum. In my experience in larger organizations, policies and guidance come from information technology or information security teams, often with input from legal, regulatory, and privacy specialists. The guidance will state which, if any, external tools employees may use at work.
Think of it this way: if it is possible to compromise sensitive or confidential data at all, then there is already a problem, and it is a disaster waiting to happen. There may be technical debt in the security of the existing systems, in the access policies, or in both.