Let me tell you something about trust. There’s a concept in this industry — radical, I know — where you use a company’s product and they don’t secretly funnel your work into their AI pipeline while smiling to your face. A wild idea. A crazy thought. GitHub apparently just laughed that concept out of the building.

On March 25, 2026, GitHub quietly updated their Privacy Statement and Terms of Service. Starting April 24 — just 30 days from the announcement — they will use your Copilot interaction data to train AI models. Not just Copilot Pro users. Not just enterprise. All of them. Copilot Free, Copilot Pro, Copilot Pro+. Everyone gets harvested unless they opt the hell out.

And the opt-out? It’s buried so deep you’d think they didn’t want you to find it. The collection is on by default. You have to actively go out of your way to tell Microsoft “no, actually, I don’t want the code I wrote and the problems I solved to become training data for your next AI model that you’ll sell back to me.”

The Timing Is Almost Funny

GitHub made this announcement on March 25. They gave people 30 days to opt out before the data collection begins on April 24. Thirty days. During which most developers are buried in actual work, not checking privacy policy updates from a company that’s already proven it has a… flexible relationship with user consent.

The announcement itself was so quiet you almost missed it. No fanfare. No blog post with a cute animation explaining what’s happening. Just a quiet update to the legal docs, like they were hoping nobody would notice until it was too late.

This is the same company that, not so long ago, made a big show of not training on public code repositories after the whole Copilot controversy. Remember that? They listened! They heard the community! They would never! Now they’re just harvesting the data directly from their paying customers instead. Much cleaner. Much harder to complain about, because you technically agreed to their terms when you clicked “I Accept” on something you never read.

What They Actually Collect

According to the updated policy, GitHub plans to collect:

  • Your prompts and inputs to Copilot
  • The code Copilot generates for you
  • Your conversations with Copilot Chat
  • Context from your codebase that you share with it
  • All the little hints and snippets that make Copilot actually useful

So basically, they’re taking the most valuable part of your workflow — the specific problems you needed help solving, the specific code patterns you were working with, the specific solutions you arrived at with AI assistance — and feeding it into the model to make it better for everyone else.

Including, presumably, your competitors. Your potential future employers. Anyone willing to pay for access to a model trained on your intellectual labor. But specifically not you, because you already paid for the privilege of being training data, and now you’re supposed to pay again for the improved version.

The Opt-Out Process Is A Joke

Here’s how you opt out, according to the documentation I’ve seen:

You go to your GitHub settings. You navigate to the Copilot section. You find the privacy settings. You toggle off the thing that says “Allow GitHub to use my Copilot interactions for research and development.”

That’s it. That’s the process. It should take about 30 seconds.

The problem? It’s opt-out, not opt-in. Which means they know — they know — that most people won’t do it. They know people are lazy, busy, trusting. They know developers tend to assume the defaults are fine because there’s too much other stuff to worry about. And they built their entire business model around that assumption.

This isn’t a bug in their policy. It’s the feature. They intentionally set it up this way because they know opt-out rates are trash. Every company that pulls this garbage knows exactly what they’re doing.

What This Actually Means

Here’s the thing that nobody seems to be screaming about loud enough: GitHub just turned every Copilot user into an unpaid intern for their AI pipeline. You’re not just using the tool — you’re training the tool, on your time, with your problems, using your brain’s work as the curriculum.

Your code. Your bugs. Your creative solutions to difficult problems. All of it, fed into the machine to make Microsoft’s product better. And you pay $10/month for the privilege. That’s not a tool. That’s a data extraction scheme with an IDE bolted on.

The real kicker? You can’t even use the older, non-AI versions anymore. GitHub has been steadily neutering the non-Copilot experience — removing features, making them second-class, burying them under AI-first interfaces. You either use their AI and accept the terms, or you use a different platform entirely. The choice is an illusion.

How to Actually Protect Yourself

If you care about this — and honestly, you should — here are your options:

  1. Opt out right now. Go to GitHub Settings → Copilot → Privacy and turn off the data collection toggle. Do it today. Don’t wait. The deadline is April 24.

  2. If you’re on Copilot Free, honestly, just stop using it. The free tier is basically a honeypot now — you’re giving them just as much data as Pro users, and they get it at zero cost. That’s a terrible deal.

  3. Consider alternatives. There are other AI coding assistants that aren’t explicitly training on your interactions. It might cost more. It might be less convenient. But at least you’re not the product.

  4. If you’re an enterprise user, check your organization’s settings. A lot of admins have already opted out on behalf of their teams, but plenty haven’t. Make noise.

The Broader Picture

This isn’t happening in isolation. GitHub is just the latest in a long line of companies discovering that they can harvest user data with almost no consequences. The pattern is always the same:

  1. Introduce a service
  2. Make it indispensable
  3. Change the terms quietly
  4. Give a short window to opt out
  5. Profit

The regulatory environment hasn’t caught up. Most developers don’t have the time to lawyer up. And the companies know that the actual enforcement is basically nonexistent. So they keep doing it.

What concerns me more than the data collection itself is what it represents: the slow erosion of developer autonomy. We’re being conditioned to accept that our tools, our platforms, our entire workflow belongs to someone else. That the code we write isn’t really ours. That our knowledge and problem-solving are just raw materials for someone else’s model.

This is the moment where we decide how much we’re willing to accept. Opt out if you care. Or don’t. But at least make the choice consciously, rather than by default.

The deadline is April 24. Don’t say nobody warned you.