I’ve been working on an AI MCP agent for customer service for Shopify stores. I’ve been using the new Storefront MCP server and have struggled with some simple queries the LLM makes to the FAQs tool. Here is a clear example (the screenshots are in Spanish, sorry, I’m a native Spanish speaker):
- I asked the chat “Where can I contact you?”, and the LLM sent a tool call with the query “contact”.
- The tool responded that there is no answer for that question.
- I asked the chat a second time, now with the question “Contact information?”
- The chat finally responded with the contact information.
The FAQ title I have for the contact information is “Contact information and location”. I think the simple question “What’s your contact information?” should match that FAQ. Am I missing something?
Is there any possibility of creating one tool per FAQ? Maybe that would help the model decide which FAQ is best suited to answer the user’s question.
Hey @rafaremo,
Thanks for sharing those details. To ensure I have the full context, is the issue primarily with the Knowledge Base app response or is it more with your own agent implementation of the Storefront MCP server?
Hello @KyleG-Shopify,
Thank you for the reply. In my opinion the problem comes from the way the Search Shop Policies tool makes the queries for the correct response. For example:
- In the first image of the initial thread, there’s a tool call to search_shop_policies_and_faqs that the LLM made with the query “contacto” (“contact”).
- In the third image you can see there’s a FAQ called “Datos de Contacto y Ubicación” (“Contact Information and Location”).
- In the first image again you can see the tool answered with “I’m sorry, I don’t have an answer for that question.”
I don’t know how the search_shop_policies_and_faqs tool looks up the correct answer to return, but in my experience something that works really well is exposing a tool that returns all the FAQ titles and letting the LLM decide which FAQ to query specifically. LLMs are very good at understanding the context and intent behind a question, and that would 10x the quality of the results when searching for the right answer, even if the user uses slang or unusual wording.
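The two-tool approach described above could be sketched like this. This is a minimal illustration under my own assumptions: the tool names (list_faq_titles, get_faq_answer) and the FAQ contents are hypothetical, not part of the Storefront MCP server.

```python
# Hypothetical sketch of a two-step FAQ lookup: one tool lists every
# FAQ title, a second tool fetches the body of the title the LLM picks.
# Tool names and FAQ contents are illustrative only.

faqs = {
    "Datos de Contacto y Ubicación": "Example contact details and address.",
    "Política de Devoluciones": "Example returns policy text.",
}

def list_faq_titles() -> list[str]:
    """First tool: return every FAQ title so the LLM sees what exists."""
    return sorted(faqs)

def get_faq_answer(title: str) -> str:
    """Second tool: return the body of the FAQ the LLM chose."""
    return faqs.get(title, "No FAQ with that title.")
```

With this split, the model first sees the exact title “Datos de Contacto y Ubicación” and can match the user’s intent to it, instead of relying on a keyword search matching the single word “contacto”.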
Thanks for that feedback, Rafael.
What I would recommend here is to make sure your agent tool is set up to properly pass the full context. When I test on my own, this is what I get:
You can see that the tool call is passing both the query and context parameters.
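For illustration, a call that passes both parameters might look like the payload below. This is a sketch of the general MCP tool-call argument shape, not the server’s exact schema; the wording of the context value is my own assumption about what a conversational paraphrase could look like.

```python
# Illustrative tool-call arguments for search_shop_policies_and_faqs.
# "context" paraphrases the conversation so the search has more signal
# than a bare keyword like "contacto"; the exact schema comes from the
# Storefront MCP server's own tool listing.
import json

tool_call = {
    "name": "search_shop_policies_and_faqs",
    "arguments": {
        "query": "What's your contact information?",
        # Hypothetical context string summarizing the user's intent:
        "context": "Customer wants to know how to reach the store "
                   "(email, phone, or physical location).",
    },
}
print(json.dumps(tool_call, indent=2))
```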
I used this guide here to set up the agent: Build a Storefront AI agent
And there are some tips here on adjusting the prompt to make it more personalized: Test and customize your agent
You can also test with different LLM models to see if some work better than others for these kinds of questions.
Thank you @KyleG-Shopify ,
Is the context parameter new? I didn’t change anything in my MCP client, and now the LLM is sending the context parameter and giving much better answers.
Anyway, thank you for the help. I know AI products at Shopify will continue to get better over time.
Thanks for confirming that it’s working now.
I’m not sure if that is new. After your reply, I set up my own agent to be sure I had a complete working understanding of these new MCPs.
Continue to share feedback as you use these tools so they can continue to get better and better!
Cheers.