Search query using attribute hasVariantsThatRequires.. does not work

Hello,

I’m trying to use the new field hasVariantsThatRequiresComponents as a search filter in GraphQL API version 2025-04, but it is not working.

The query:

{
  products(first: 5, query: "has_variants_that_requires_components:true") {
    nodes {
      id
      title
      hasVariantsThatRequiresComponents
    }
  }
}

returns products with hasVariantsThatRequiresComponents set to false.

I have tried some other filter syntaxes, like:

has_variants_that_requires_components:true
has_variants_with_components:true
hasVariantsThatRequiresComponents:true

But none of them works; the results are always the same: products with hasVariantsThatRequiresComponents set to false (the filter is simply ignored, it's not returning the opposite).


Hey @overduka 👋 - I did a bit of digging into this, and I think the query argument should be has_variant_with_components. The field name, hasVariantsThatRequiresComponents, is correct though. I tried this query in 2025-04 and it worked for me:

{
  products(first: 5, query: "has_variant_with_components:true") {
    nodes {
      id
      title
      hasVariantsThatRequiresComponents
    }
  }
}
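For reference, here's a minimal sketch of how you might send this query to the Admin API from a script. The shop domain and access token below are placeholders, and the request is only assembled here (not sent), so you'd pass the pieces to your HTTP client of choice:

```python
import json

# The exact query that worked above, using the has_variant_with_components filter.
QUERY = """
{
  products(first: 5, query: "has_variant_with_components:true") {
    nodes {
      id
      title
      hasVariantsThatRequiresComponents
    }
  }
}
"""

def build_request(shop: str, token: str) -> tuple[str, dict, bytes]:
    """Assemble URL, headers, and JSON body for a 2025-04 Admin GraphQL call.

    `shop` is your *.myshopify.com domain and `token` is an Admin API
    access token -- both placeholders you supply yourself.
    """
    url = f"https://{shop}/admin/api/2025-04/graphql.json"
    headers = {
        "Content-Type": "application/json",
        "X-Shopify-Access-Token": token,
    }
    body = json.dumps({"query": QUERY}).encode()
    return url, headers, body
```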

Hope this helps - let me know if you’re still seeing any issues pop up.


Hey @Alan_G, this works! Thank you very much!

By the way, where is this documented? I did not find any reference to it, and it differs from the attribute name, so it’s really impossible to guess.

Hey @overduka - no worries! The only spot I can see where this is documented on our end is in the query arguments section for the products query here in our docs; it is definitely a little “hidden” compared to some of the other filters we show in our examples.

Works a treat… (between you and me, ChatGPT need never know).


Glad this works, @Si_Hobbs! Yeah, I do find that models will sometimes selectively forget or overlook documentation, especially if it’s something small like this. I'm not sure what your setup looks like for ChatGPT, etc., but if you’re using something like Cursor, OpenAI Codex, or Anthropic’s Claude Code, we do have a Model Context Protocol server here that I find helps LLMs be a bit smarter about “secret” API tips and tricks, since our MCP server gives them access to the syntax directly in machine-readable form:

Just wanted to share in case that helps too!

Yeah, I have both the Sidekick GPT and the MCP server in Claude. It’s just one of those humorous AI things. I am eternally intrigued by your internal processes for fine-tuning Sidekick.

Hey @Si_Hobbs - ah, gotcha! Yeah, I’m a pretty big AI proponent, but there’s still always the possibility for “hallucinations”, for sure. I always “trust, but verify” anything AI is sending my way haha, but it does feel like that gap is closing day by day almost.

I can’t share too much internal info about our processes when it comes to AI agent optimization, but we do have a decent blog post from our eng team here, if you haven’t checked it out, that covers a lot of the overview of how we build these systems:

The MCP server itself is more of a “model-agnostic” tool, but my understanding is that we do have different modalities depending on which model is using it, so that it works better with the chosen model.

Hope this helps a bit, always fun to chat AI!