Querying Orders via bulkOperation with `updated_at` and `processed_at` filters causes timeouts

Hello!

We query the Shopify GraphQL API on behalf of many different clients, and we have noticed that for certain shops, bulk operations for Orders always time out when the query combines updated_at and processed_at filters with OR.

Previously we were under the impression that this timeout only happens when the query would yield too large a result set, but that does not seem to be the case: it happens even with a minimal query over a time period that has few or no orders at all.

Here is an example of such a query:

mutation {
  bulkOperationRunQuery (query:"""
{
  orders (query: "(processed_at:>=2012-01-24T00:00:00Z processed_at:<=2012-01-24T00:59:59Z) OR (updated_at:>=2012-01-24T00:00:00Z updated_at:<=2012-01-24T00:59:59Z)") {
    edges {
      node {
        id
        updatedAt
        processedAt
      }
    }
  }
}
""") {
    bulkOperation {
      id
      status
    }
    userErrors {
      field
      message
    }
  }
}

And the resulting timeout error, which we always get around 5 minutes after creating the bulk operation:

{
    "data": {
        "node": {
            "id": "gid://shopify/BulkOperation/3612565307523",
            "status": "FAILED",
            "errorCode": "TIMEOUT",
            "createdAt": "2025-06-04T07:39:45Z",
            "completedAt": null,
            "objectCount": "0",
            "rootObjectCount": "0",
            "fileSize": null,
            "url": null,
            "partialDataUrl": null
        }
    }
}

Making the same query with only updated_at or processed_at works perfectly fine, and the operation finishes pretty much instantaneously. It’s only when both are used that something seems to go very wrong.

I’ve looked through all the relevant documentation on the subject and have tried all of the recommended actions (adding a sortKey to the query, requesting a smaller interval, using fewer fields), but nothing seems to result in a successful operation.

As this query works perfectly fine for most of our clients, but consistently fails for others, I’m wondering if there’s any common denominator for the shops that it’s happening to. Does anyone know what could be going wrong here, or have any suggestions for workarounds? Any insights would be greatly appreciated! :folded_hands:


Hey @jrag,

On the shops where this is happening, do you get any errors running the query outside of bulk operations? That can sometimes help narrow down a specific resource.

I would also add the query syntax debugging headers on one of the non-bulk tests, just to ensure the query parameters are being parsed as expected.

For any other common denominators, is this always happening on historic orders (I see you have 2012 there)? Have the shops where it’s failing approved read_all_orders scope?

Hi @KyleG-Shopify, thank you for taking the time to help look into this! :smile:

I tried running the query without using bulk operations, and it failed validation because I needed to provide first in the arguments. As far as I know, this is only required for non-bulk operation queries, so the lack of this argument should not be relevant to the bulk operation issue. Just to be sure, I made a bulk operation query with the first argument added, and it still failed in the same manner.

Either way, after adding first: 100 and retrying the non-bulk query, I received the following API response:

Response body:
{
    "errors": [
        {
            "message": "Internal error. Looks like something went wrong on our end.\nRequest ID: 94c4cf63-d754-4d53-b855-a1e59160cf1a-1749109996 (include this in support requests).",
            "extensions": {
                "requestId": "94c4cf63-d754-4d53-b855-a1e59160cf1a-1749109996",
                "code": "INTERNAL_SERVER_ERROR"
            }
        }
    ],
    "data": null
}

So the query does not seem to work outside of bulk operations either. I added the debug header as you advised, but none of the debug info seems to be returned in the response when the query fails in this manner. Just like with the bulk operation, though, the query works fine if I only provide processed_at or updated_at in the query, it only fails when using both.

I have tried several different dates, both recent and old, and it unfortunately does not seem to make a difference. I can confirm that we have access to the read_all_orders scope for all shops affected by this.

If it helps narrow it down, we have observed that we are in some cases initially able to make this query for a specific shop for any given date, but at some point in time, it stops working (even for dates that we could previously query). For other shops, it never works to begin with.

We have hypothesized on our end that maybe the query works until a shop reaches a specific number of total orders, given that it primarily seems to affect our larger clients. In an attempt to verify if this is relevant, I tried making ordersCount queries using affected and unaffected shops, and noticed that all the shops that our bulk operation was failing for would time out when querying ordersCount. Here is one such timeout response, if it helps:

Response body:
{
    "errors": [
        {
            "message": "Failed to complete in time.",
            "locations": [
                {
                    "line": 2,
                    "column": 3
                }
            ],
            "path": [
                "ordersCount"
            ],
            "extensions": {
                "code": "TIMEOUT",
                "requestId": "b9be6e47-fe01-43e1-b602-d4b084fe370f-1749111375"
            }
        }
    ],
    "data": {
        "ordersCount": null
    },
    "extensions": {
        "cost": {
            "requestedQueryCost": 10,
            "actualQueryCost": 1,
            "throttleStatus": {
                "maximumAvailable": 20000.0,
                "currentlyAvailable": 19999,
                "restoreRate": 1000.0
            }
        }
    }
}

Please let me know if there is any other information we can provide in order to help with troubleshooting, and thanks for your time and assistance so far! :folded_hands:

Based on my research, an alternative approach is to use pagination:

  1. Use bulkOperationRunQuery with pagination

Instead of fetching all orders in a single query, use cursor-based pagination to break the query into smaller chunks. This reduces the risk of timeouts by fetching manageable subsets of data.

mutation {
  bulkOperationRunQuery(query: """
{
  orders(first: 100, query: "(processed_at:>=2012-01-24T00:00:00Z processed_at:<=2012-01-24T00:59:59Z) OR (updated_at:>=2012-01-24T00:00:00Z updated_at:<=2012-01-24T00:59:59Z)") {
    edges {
      cursor
      node {
        id
        updatedAt
        processedAt
      }
    }
    pageInfo {
      hasNextPage
    }
  }
}
""") {
    bulkOperation {
      id
      status
      url
    }
    userErrors {
      field
      message
    }
  }
}

Why it helps: Limiting the query to first: 100 (or another manageable number) reduces the data processed per request. You can iterate through pages using the cursor and after parameters in subsequent queries until hasNextPage is false.
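For a non-bulk query, the page-by-page iteration described above could be sketched as follows. This is a minimal sketch: `run_graphql` is a hypothetical callable (not part of any Shopify SDK) that posts a query plus variables to the Admin API and returns the decoded JSON response; swap in your own HTTP client.

```python
# Sketch of the cursor-based pagination loop described above.
# `run_graphql(query, variables)` is a hypothetical helper standing in
# for a real Admin API call; it must return the decoded JSON response.

ORDERS_QUERY = """
query ($after: String) {
  orders(first: 100, after: $after,
         query: "(processed_at:2012-01-24) OR (updated_at:2012-01-24)") {
    edges { node { id updatedAt processedAt } }
    pageInfo { hasNextPage endCursor }
  }
}
"""

def fetch_all_orders(run_graphql):
    """Follow endCursor until hasNextPage is false, collecting nodes."""
    orders, after = [], None
    while True:
        page = run_graphql(ORDERS_QUERY, {"after": after})["data"]["orders"]
        orders.extend(edge["node"] for edge in page["edges"])
        if not page["pageInfo"]["hasNextPage"]:
            return orders
        after = page["pageInfo"]["endCursor"]
```

Note that inside a bulk operation Shopify handles pagination itself, so this loop applies to the regular (non-bulk) orders query.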

  • Next steps: Poll the bulkOperation status using its id until it completes, then download the JSONL file from the url to process the results.
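The poll-then-parse step could be sketched like this. Again a hedged sketch: `get_status` is a hypothetical callable (an assumption, not a Shopify SDK function) that runs a `node(id: ...) { ... on BulkOperation { status url } }` query and returns the node as a dict.

```python
import json
import time

# Sketch of polling a bulk operation until it finishes, then parsing
# the JSONL result file. `get_status` is a hypothetical callable that
# fetches the BulkOperation node; swap in your own API call.

def wait_for_bulk_operation(get_status, poll_seconds=5, max_polls=120):
    """Poll until the operation leaves the CREATED/RUNNING states."""
    for _ in range(max_polls):
        op = get_status()
        if op["status"] not in ("CREATED", "RUNNING"):
            return op  # COMPLETED, FAILED, CANCELED, ...
        time.sleep(poll_seconds)
    raise TimeoutError("bulk operation did not finish in time")

def parse_jsonl(text):
    """Each line of the downloaded result file is one JSON object."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]
```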

Hi Ankit, thank you for your suggestion, but I have already attempted to use pagination with my bulk operation query and unfortunately it still times out in the same way. I explained that in my second post, but maybe I was not very clear about it :sweat_smile:

Thanks for sharing that. It does look like it’s timing out even on the non-bulk query.

Testing on my own shop, I don’t have any orders in that range so it’s not timing out for me, but the debugger does reveal some issues. This is what is returned:

"query": "(processed_at:>=2012-01-24T00:00:00Z processed_at:<=2012-01-24T00:59:59Z) OR (updated_at:>=2012-01-24T00:00:00Z updated_at:<=2012-01-24T00:59:59Z)",
                "parsed": {
                    "or": [
                        {
                            "and": [
                                {
                                    "field": "processed_at",
                                    "range_gte": "2012-01-24T00:00:00-07:00",
                                    "range_lte": "2012-01-24T23:59:59-07:00"
                                },
                                {
                                    "field": "00",
                                    "match_all": "00Z"
                                },
                                {
                                    "field": "59",
                                    "match_all": "59Z"
                                }
                            ]
                        },
                        {
                            "and": [
                                {
                                    "field": "updated_at",
                                    "range_gte": "2012-01-24T00:00:00-07:00",
                                    "range_lte": "2012-01-24T23:59:59-07:00"
                                },
                                {
                                    "field": "00",
                                    "match_all": "00Z"
                                },
                                {
                                    "field": "59",
                                    "match_all": "59Z"
                                }
                            ]
                        }
                    ]
                },
                "warnings": [
                    {
                        "field": "00",
                        "message": "Invalid search field for this query."
                    },
                    {
                        "field": "59",
                        "message": "Invalid search field for this query."
                    },
                    {
                        "field": "00",
                        "message": "Invalid search field for this query."
                    },
                    {
                        "field": "59",
                        "message": "Invalid search field for this query."
                    }
                ]
            }
        ]
    }

Can you do a test without the time range?
i.e. (processed_at:>=2012-01-24 processed_at:<=2012-01-24) OR (updated_at:>=2012-01-24 updated_at:<=2012-01-24)

When I do this, I see it is parsed as expected, without the errors:

{
                "path": [
                    "orders"
                ],
                "query": "(processed_at:>=2012-01-24 processed_at:<=2012-01-25) OR (updated_at:>=2012-01-24 updated_at:<=2012-01-25)",
                "parsed": {
                    "or": [
                        {
                            "field": "processed_at",
                            "range_gte": "2012-01-24T00:00:00-07:00",
                            "range_lte": "2012-01-24T23:59:59-07:00"
                        },
                        {
                            "field": "updated_at",
                            "range_gte": "2012-01-24T00:00:00-07:00",
                            "range_lte": "2012-01-24T23:59:59-07:00"
                        }
                    ]
                }
            }

Alternatively, if that works, you can simplify it even further, as "(processed_at:2012-01-24) OR (updated_at:2012-01-24)" is parsed identically to the above.
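If these filter strings are built programmatically, a small helper keeps them in the day-level form that parses cleanly. This is a sketch; the function name is mine, not from any Shopify library.

```python
from datetime import date

# Builds the simplified day-level filter shown above, avoiding the full
# ISO timestamps that the search-syntax parser mangled into bogus
# "00"/"59" fields. Hypothetical helper, not part of any Shopify SDK.

def day_filter(day: date) -> str:
    d = day.isoformat()
    return f"(processed_at:{d}) OR (updated_at:{d})"
```

For example, day_filter(date(2012, 1, 24)) yields exactly the simplified string from the debugger output above.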

I suggest you use filters:

{
  orders(first: 100, query: "status:ANY -status:CANCELLED -status:CLOSED created_at:>2024-01-01 updated_at:>2024-01-01") {
    edges {
      node {
        id
        name
        createdAt
        updatedAt
      }
    }
  }
}

Note: updated_at and other timestamp queries are costly for Shopify's internal processing, so applying order filters will help if you can't split the query into two parts.

An additional step, which is not recommended in general but can be helpful, is to delete cancelled orders if they do not hold any analytical value.

Note: Shopify by default only exposes the most recent 60 days of orders, so if you need to look at orders prior to that range, those are not as effectively cached by Shopify.


Hi @KyleG-Shopify,

We do encounter this issue even without using timestamps, those were added in an attempt to see if requesting an interval smaller than a whole day would yield a successful response.

I tried this query just now:

{
  orders (query: "(processed_at:>=2012-01-24 processed_at:<=2012-01-25) OR (updated_at:>=2012-01-24 updated_at:<=2012-01-25)", first: 100) {
    edges {
      node {
        id
        updatedAt
        processedAt
      }
    }
  }
}

Which resulted in another timeout:

{
    "errors": [
        {
            "message": "Internal error. Looks like something went wrong on our end.\nRequest ID: 53b7aa55-d861-45dc-9f67-855096b4a7b4-1749463890 (include this in support requests).",
            "extensions": {
                "requestId": "53b7aa55-d861-45dc-9f67-855096b4a7b4-1749463890",
                "code": "INTERNAL_SERVER_ERROR"
            }
        }
    ],
    "data": null
}

I also tried to add status filters to the query, but that still results in a timeout.

Given that using updated_at or processed_at individually yields a successful response almost instantaneously, it does not feel like the query should be too costly or heavy when both are used at the same time. When querying recent dates (within the last 60 days) the query times out in the exact same manner, so it does not feel like it relates to order age either.

Thanks for sharing that. The error you are seeing is still a timeout as well.

It doesn’t seem like too complex of a query and as you mentioned, it works on most shops. I would recommend reaching out to our support team directly so they can securely take a closer look at the affected shops.

In the meantime, as a workaround, running separate queries for updated_at and processed_at and combining the results after should work.

Thanks for your input and for taking the time to look at this, much appreciated!

I created a ticket (ID 57920897) via the support chat in the Shopify Help Center around the same time I made this thread, is that the route one should go for this kind of inquiry? Or is there a dedicated GraphQL API support team that I can get in touch with?

I have not received a reply to said ticket yet and we have a handful of clients unable to report on their Shopify data as a result of this problem, so we want to confirm that we have gone through the correct process to get this issue investigated.

And just to confirm regarding the suggested workaround: if we were to make the queries separately and combine the result, we would just need to filter out duplicate orders by their IDs, is that correct? Are there any other caveats we should be aware of when combining the results?

Thanks again! :star:

Yes, that’s the best route for inquiries that only affect a few select shops, as we would need authenticated access to look further. Your ticket is with the correct team right now; they just have a bit of a backlog at the moment.

Yes, IDs are globally unique identifiers, so you would just need to remove the duplicates.
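The merge-and-dedupe step could be sketched like this. A minimal sketch, assuming each query's results arrive as a list of order dicts with an "id" key; the helper name is hypothetical.

```python
# Sketch of the suggested workaround: run the processed_at and
# updated_at queries separately, then merge the results, dropping
# duplicate orders by their globally unique `id`.

def merge_order_results(*result_sets):
    """Merge lists of order dicts, keeping the first copy of each id."""
    seen, merged = set(), []
    for results in result_sets:
        for order in results:
            if order["id"] not in seen:
                seen.add(order["id"])
                merged.append(order)
    return merged
```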


I received a reply to my support ticket, where I was advised to add the parameter reverse: true to my request, like this:

(query: "(processed_at:>=2025-06-15 processed_at:<=2025-06-15) OR (updated_at:>=2025-06-15 updated_at:<=2025-06-17)", reverse: true)

This change made it so the bulk operation actually started running, where it would previously time out after a few minutes. It still failed in the end after a few hours, but at least with a partial result instead of the operation failing altogether.

Maybe this workaround solves the problem for some other shops, so I wanted to post it here in case anyone else happens to be having the same problem. Definitely worth a try at least :smile:


I did some research into how Shopify behaves when reverse is added.
Setting reverse: true flips the sort to descending order (newest to oldest), allowing developers to retrieve the most recent items first.

There is a good chance that the most recent orders have matching updated_at or processed_at values, while older orders might not have these attributes in the selection list. This means that most of the time the query will return some data before it times out or runs into a limit.
