Recent 400 Errors Referencing CloudFlare

Sorry @Alan_G , with the holidays and everything going on I completely missed that you had replied here.

In terms of the full request headers “as our app is sending them”, we really don’t put much in the outgoing request headers ourselves. We save a copy of every outgoing request before our app sends it, so I’m not immediately sure whether GCP is adding any values of its own to the request once it’s left our app.

  1. Here’s an example of the request headers from yesterday, where the operation ran up against the 20-second timeout we specify on our HTTP client:

    X-Request-Id: [f38143e1-2bac-439b-b7ec-b35726297796]
    X-Shopify-Access-Token: [redacted]
    Content-Type: [application/json]

    This request had no response body or headers since it was timed out from our side.

  2. Here’s an example of the request headers from a request on 12/05 where we got a 400 back from Shopify with the Cloudflare error message:

    X-Request-Id: [173e3df0-e4e9-446c-b173-efe1d8d0522b]
    X-Shopify-Access-Token: [redacted]
    Content-Type: [application/json]

    And here are the response headers we got back:

    Content-Length: [155]
    Cf-Ray: [9a92cbe8587be1de-ATL]
    Server: [cloudflare]
    Date: [Fri, 05 Dec 2025 10:17:55 GMT]
    Content-Type: [text/html]
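Since we now triage these on our side, here’s a rough sketch of how we separate a Cloudflare-generated 400 (HTML error page, `Server: cloudflare`, a `Cf-Ray` header) from an application-level Shopify 400 (JSON body). This is purely illustrative and not our actual code; the signals are just the ones visible in the headers above:

```python
def is_cloudflare_400(status: int, headers: dict) -> bool:
    """Heuristic: treat a 400 as Cloudflare-generated when the response
    advertises the cloudflare server, carries a Cf-Ray ID, and returns
    an HTML error page instead of the API's usual JSON."""
    h = {k.lower(): v for k, v in headers.items()}
    return (
        status == 400
        and h.get("server", "").lower() == "cloudflare"
        and "cf-ray" in h
        and h.get("content-type", "").startswith("text/html")
    )

# Response headers from the 12/05 example above
resp_headers = {
    "Content-Length": "155",
    "Cf-Ray": "9a92cbe8587be1de-ATL",
    "Server": "cloudflare",
    "Date": "Fri, 05 Dec 2025 10:17:55 GMT",
    "Content-Type": "text/html",
}
print(is_cloudflare_400(400, resp_headers))  # True
```

An application-level 400 from the API (JSON body, no `Server: cloudflare`) would return False here, which is the case we handle differently.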

It’s also worth mentioning that we’ve seen a pretty big decrease in the number of issues over the last couple of weeks, seemingly related to an old Community post I came across. The default number of connections in our Cloud NAT settings was really low, so I raised it quite a bit. That helped a LOT with reducing the number of issues, but it didn’t entirely resolve the problem.
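For anyone else hitting this, here’s a back-of-the-envelope sketch of why the default allocation runs out. Cloud NAT reserves a minimum number of source ports per VM (the default is 64), and each in-flight connection to a given destination IP:port consumes one port, so a burst of concurrent requests to an API behind a small set of Cloudflare IPs can exhaust them quickly. The headroom factor and the 200-request peak below are illustrative assumptions, not our actual traffic:

```python
DEFAULT_MIN_PORTS_PER_VM = 64  # Cloud NAT's default minimum port reservation

def ports_needed(peak_concurrent_requests: int, headroom: float = 2.0) -> int:
    """Estimate min ports per VM: roughly one source port per in-flight
    connection to a destination, with headroom for TIME_WAIT churn."""
    return int(peak_concurrent_requests * headroom)

# e.g. a peak of 200 concurrent requests suggests ~400 ports,
# well above the 64-port default
print(ports_needed(200))  # 400
```

In practice the fix was just raising the minimum ports per VM in the Cloud NAT settings to something comfortably above that estimate.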

I’ll check in with the GCP folks to see about any of those timing breakdowns you mentioned and let you know what I come up with.


No worries at all about the delay @ktbishop, and apologies for the lag on my end too (holidays as well!) 🙂

Thanks for the detailed header info too. The fact that raising Cloud NAT connection limits helped makes me think it could be related to port allocation issues on the egress side, which aligns with what we were suspecting.

When you hear back from GCP on those timing breakdowns, definitely loop back here. If you happen to catch another 400 with the Cloudflare response in the meantime, it’d be helpful to note:

  - The approximate request payload size
  - Whether it was during a burst of concurrent requests or isolated (if possible)

That CF-Ray ID you shared (9a92cbe8587be1de-ATL) is useful if we need to dig deeper on our end as well, though it now looks to be outside our usual log retention window, so a more recent one would be great if you catch it.

Happy New Year, and talk soon!