
How should Scrum Teams handle data security risks when using AI tools like ChatGPT?

Last post 09:28 pm February 16, 2026 by Noble Nathanael
4 replies
12:34 pm January 30, 2026

Hi everyone,

Many Scrum Teams today are using public AI tools such as ChatGPT, Gemini, and Copilot to speed up daily work — writing user stories, summarizing requirements, generating code, or analyzing logs.

While these tools improve productivity, I’m concerned about an often-overlooked risk: accidentally sharing sensitive or confidential data with public LLM platforms.

For example, team members might paste:

  • customer information
  • financial or production data
  • internal documents
  • source code
  • or proprietary business logic

Since these tools run outside the organization, this could lead to data leakage or compliance issues.

I’m curious how other teams are addressing this.

Do you:

  • have AI usage guidelines or policies?
  • enforce data redaction/anonymization before using AI tools?
  • restrict public AI tools altogether?
  • or use private/enterprise AI solutions?

From a Scrum perspective, should this be handled as part of the Definition of Done, working agreements, or organizational policy?

Would love to hear how your teams balance productivity with security.

Thanks!


06:44 pm January 30, 2026

Policies about tools, AI or otherwise, should be handled outside of Scrum. In my experience in larger organizations, policies and guidance come from information technology or information security teams, often with input from legal, regulatory, and privacy specialists. The guidance will state which, if any, external tools employees may use at work.


10:51 pm January 30, 2026

Think of it this way. If it's possible to compromise sensitive or confidential data at all, then there's a problem. It's a disaster waiting to happen. There may be technical debt regarding the security of existing systems and/or access policies.


06:25 pm February 10, 2026

Governance. It's not a word that is well liked in a Scrum context.

As one director put it to me years ago: "in case of xyz, who goes to jail?"

It's a perfectly valid question, and one that should straighten some people up.


03:53 pm February 16, 2026

Anything sensitive gets redacted or anonymized before we use public AI tools. For real security, we mostly rely on private/enterprise AI instances. We also include “no sensitive info in AI tools” in our working agreements so it becomes part of daily habits.
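The redaction step described above can be partially automated. Below is a minimal sketch of pre-prompt redaction: masking common sensitive patterns before text is pasted into a public AI tool. The regex patterns are illustrative assumptions, not an exhaustive or production-grade ruleset — a real policy would need review by security and privacy specialists.

```python
import re

# Illustrative patterns only; a real deployment needs a vetted,
# organization-specific ruleset (names, IDs, internal hostnames, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer jane.doe@example.com reported the bug."
print(redact(prompt))  # → "Customer [EMAIL] reported the bug."
```

Even with tooling like this, regex-based redaction misses context-dependent data (names, business logic, proprietary code), which is why the working-agreement habit of not pasting sensitive material in the first place still matters.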

