Sam Altman says he wants feedback to fix ChatGPT. The reality? Paying power users get a canned “thanks but no thanks” and no way to talk to a human.
We’re told OpenAI has gone into “code red” to improve ChatGPT.
Internal memos leaked, headlines everywhere: the lead is shrinking, competitors are catching up, we have to make ChatGPT better, faster, more personal.
Sounds great… until you actually try to help. Raise your hand as a paying customer and what you get back is corporate speak and canned word salad. Maybe they should ask the Marines what a Code Red actually is. OpenAI can't handle the truth?
What a Real “Code Red” Looks Like in the Marine Corps
When I hear “code red,” I don’t think about a leaked memo and a PR cycle. I think about the Marine Corps.
In the Marines, a “code red” isn’t a buzzword. It’s not a strategy slide or a Slack message. It’s an unofficial, very real message that something is badly broken and someone is about to feel the full weight of the unit coming down on them. You don’t advertise it to the press, you don’t workshop talking points about it, and you sure as hell don’t treat it like clickbait.
I’m not glorifying hazing or old-school barracks justice—but the point is this: in the Corps, when leadership says things are at a code red, you feel it in your gut. There’s urgency, accountability, and consequences. Compare that to Silicon Valley “code red,” where the headlines scream crisis while paying users can’t even find a support link, and serious feedback gets bounced by an “AI support agent” with a canned reply. One is a real wake-up call. The other is marketing.
As a ChatGPT Plus power user who spends hours a day inside this thing, I raised my hand and said, “Fine. You want feedback? I’ve got two concrete fixes and I’m willing to be one of your testers.”
What did I get back?
“Thank you for your interest in contributing and participating as a power user. Currently, OpenAI does not have a public process for joining advisory or early-access programs, and I’m not able to make selections or enroll users directly. If such opportunities become available, they will be announced through official OpenAI channels.”
Translation:
We love the idea of feedback. The reality of you talking to us? Hard pass.
And that, ladies and gentlemen, is why OpenAI just earned itself a file in the Loser Fatigue cabinet.
The Promise: “Code Red” to Improve ChatGPT
According to the reporting:
- OpenAI is under pressure because competitors (hello Gemini, Grok, etc.) are closing the gap.
- Sam Altman calls a "code red." The mission is to:
  - make ChatGPT more capable,
  - improve the day-to-day experience,
  - focus on speed, reliability, and personalization.
- Other shiny projects get pushed back so they can fix the core product.
On paper, that’s exactly what you’d want a CEO to say.
Admit you’re slipping, focus on the flagship, and listen to your users.
So I did what any sane, paying user would do:
- I wrote a detailed note.
- I introduced myself (76-year-old Marine, Plus subscriber, lives in ChatGPT daily).
- I offered concrete feedback:
  - Stable, clearly-labeled internet access for power users.
  - A real solution to the mid-2024 knowledge cutoff when we're living in late 2025.
- And I volunteered to be part of any advisory group / early-access cohort.
Mission: raise my hand, get in the loop, help make ChatGPT better.
The Reality: “We Don’t Have a Public Process…”
Here’s what happens when you try to climb the chain of command at OpenAI:
- In the app: no obvious Help, Support, or Give feedback link.
- In the browser Help Center: no clear Contact us path, no “talk to a human.”
- You end up stuck using tiny thumbs-down feedback boxes meant for “this answer sucked,” not “here’s critical product input from someone who lives in here.”
And when you finally manage to get something through?
You get the canned wall:
“No public process.”
“Can’t enroll users directly.”
“Watch official channels.”
It’s like the company equivalent of,
“Thank you for your interest in being listened to. That feature is not available in your region.”
For a tool billing itself as the most advanced AI assistant on Earth, the human support layer is basically a ghost town.
Why This Matters (Especially for Paying Users)
This isn’t just me being cranky.
If you’re:
- paying a monthly fee,
- using ChatGPT to write, publish, research, script, and produce,
- and watching the competition close in…
…you have a stake in whether OpenAI actually listens to the people who depend on it.
The whole “code red” story is about closing the gap.
But you don’t close a gap by shouting into your own echo chamber and ignoring the Marines out in the field actually stress-testing your product every day.
Serious users are saying:
- Give us stable, clearly-marked web access. Let me know when the model can actually browse, and show me what it pulled from the web versus what's just from old training data.
- Fix the knowledge cutoff problem. In late 2025, hearing "I can't go beyond mid-2024" is not "cute transparency," it's a brick wall. At least be honest about when you've augmented answers with fresher sources.
Instead of engaging, the response is:
“We don’t have a process for talking to people like you right now.”
Tell me again who’s losing the AI race?
Loser Fatigue Verdict
OpenAI, for this episode, lands squarely in Loser Fatigue territory:
- Loser move: Announce a “code red” to improve ChatGPT while making it nearly impossible for serious users to contact you.
- Loser move: Treat Plus subscribers like a captive audience instead of partners.
- Loser move: Hide support and feedback channels, then send canned “no public process” replies.
We’re not asking to sit on your board or meddle with your model weights.
We’re asking for one simple thing:
If you’re going to tell the world you want feedback from users,
act like you actually want feedback from users.
Until then, don’t be surprised when the people who built their second careers on your product start looking over the fence at the competition and thinking,
“Maybe they’ll at least pretend to listen.”
Update: The “AI Support Agent” Reply
After I sent my detailed email to ChatGPT head honcho Nick Turley, I finally did get a response—just not from Nick. Instead, an “AI support agent for OpenAI” thanked me for my feedback, said they’d “make sure” my suggestions about a reliable “web-aware” mode and clearer knowledge-cutoff indicators are passed along to the product team, and reminded me they can’t guarantee any direct contact or enrollment in feedback programs. Translation: we might drop your ideas in the suggestion box, but don’t hold your breath, Sarge. I’m “always welcome to share my insights here for ongoing consideration,” which is corporate-speak for: keep talking, we’ll keep auto-replying. For a company in so-called “code red” about improving ChatGPT, that’s a pretty low-energy way to treat the people actually trying to help fix it.
Postscript: I emailed this perspective to Greg Norman at Fox News—the same reporter who wrote the Altman “code red” article. Maybe it was too real, maybe it risked upsetting a shiny AI advertiser, or maybe it will get lost in the inbox or flagged as spam. Either way, if the media wants to sell you “code red” drama, they should at least be willing to hear from the people actually living with the fallout. Film at 11!
Bunker Notice
If you made it this far, you’re bunker material. Join the Bunker Briefing—my unfiltered monthly dispatch from Bunker #69.
…and this whole saga becomes not just a vent, but a documented case file you can point back to the next time some PR flack says, “We’re listening to our users.”