How does Shopify test storefront performance?

According to this part of the dev docs:

| Page | Weight |
| --- | --- |
| Most visited product page | 40% |
| Most visited collection page | 43% |
| Home page | 17% |
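
If I read the docs right, the overall storefront score is just a weighted average of the three page scores. A minimal sketch, assuming the weights above (the per-page scores are made-up examples):

```ts
// Weighted storefront performance score, using the weights from the table above.
const WEIGHTS = { product: 0.40, collection: 0.43, home: 0.17 };

function weightedScore(scores: { product: number; collection: number; home: number }): number {
  return (
    scores.product * WEIGHTS.product +
    scores.collection * WEIGHTS.collection +
    scores.home * WEIGHTS.home
  );
}

// Hypothetical per-page Lighthouse scores:
console.log(weightedScore({ product: 80, collection: 90, home: 95 })); // 86.85
```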

Recently, I submitted my application for Built for Shopify (BFS). Here’s the feedback I got:

> App must not reduce Lighthouse storefront speed by more than 10 points. Your app is currently reducing Lighthouse score by more than 10 points. Please see our dev docs for more information on how to measure and reduce Lighthouse impact.

My app mainly offers a custom product template, but a page using this template shouldn’t be treated as the most visited product page: before my app is installed, the store doesn’t have these custom product pages at all.

According to our tests, the home and collection pages are barely affected. How can we get the test details from the reviewer? I want to see which pages were used for testing.

Hi Benny, the testing methodology we use is exactly as described in the dev docs you linked. There can be some degree of variability between test runs, so we take the average of multiple runs for each page.
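
If you want to approximate our setup locally, here is a minimal sketch using the open-source `lighthouse` Node module and `chrome-launcher`. It is not our exact harness, just the general idea of averaging repeated runs:

```ts
import lighthouse from 'lighthouse';
import { launch } from 'chrome-launcher';

// Run Lighthouse `runs` times against one URL and average the performance score.
async function averagePerformanceScore(url: string, runs = 5): Promise<number> {
  const chrome = await launch({ chromeFlags: ['--headless'] });
  const scores: number[] = [];
  try {
    for (let i = 0; i < runs; i++) {
      const result = await lighthouse(url, {
        port: chrome.port,
        onlyCategories: ['performance'],
      });
      // Lighthouse reports category scores as 0..1; scale to 0..100.
      scores.push((result?.lhr.categories.performance.score ?? 0) * 100);
    }
  } finally {
    await chrome.kill();
  }
  return scores.reduce((sum, s) => sum + s, 0) / scores.length;
}
```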


Yes, I think so. But I can’t reproduce a 10-point difference in my own test runs, whether on a fast connection (~500 Mbps) or a slow one (~90 Mbps). There is no 10-point diff.
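
As I understand it, Lighthouse applies its own simulated network throttling by default, so the real connection speed shouldn’t move the score much anyway. A self-contained sketch that pins the throttling mode explicitly (same assumptions as the averaging sketch above):

```ts
import lighthouse from 'lighthouse';
import { launch } from 'chrome-launcher';

// Single throttling-pinned run. 'simulate' is Lighthouse's default method:
// it models a slow mobile connection regardless of real bandwidth, which is
// likely why ~500 Mbps and ~90 Mbps connections score about the same.
async function throttledScore(url: string): Promise<number> {
  const chrome = await launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ['performance'],
      throttlingMethod: 'simulate',
    });
    return (result?.lhr.categories.performance.score ?? 0) * 100;
  } finally {
    await chrome.kill();
  }
}
```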


Which pages are used as the most visited product page and the most visited collection page? How are they selected?

Hey Benny, good catch here - I’ll make a note to update the dev docs to better reflect the current test conditions.

That said, the weightings still apply, but instead of “most visited”, we compare the Lighthouse scores of a product page/collection page with the app feature enabled against the scores of the same page types without the app feature enabled.
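
In code terms, the check is roughly the following, reusing the hypothetical `averagePerformanceScore` helper sketched earlier in the thread; the store URL is a placeholder, and the 10-point threshold comes from the BFS feedback quoted above:

```ts
// Measure the same page type with the app feature disabled, then enabled.
const pageUrl = 'https://example-store.myshopify.com/products/sample'; // placeholder

const baseline = await averagePerformanceScore(pageUrl); // feature disabled
// ...enable the app feature on the page, then:
const withApp = await averagePerformanceScore(pageUrl);  // feature enabled

const drop = baseline - withApp;
console.log(`Lighthouse performance drop: ${drop.toFixed(1)} points`);
if (drop > 10) {
  console.log('Exceeds the 10-point limit from the BFS requirement.');
}
```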

What app is this for? I can reach out to you for specifics and look a bit further into this.

Our app is used for creating a custom product page with our product template; the purpose is to let customers build custom bundles. We failed the test last time. However, when I test our dev store’s storefront, I can’t find a significant difference before and after the app is installed.

Our app only adds a tiny script tag, which may affect the home, collection, and product pages. Other than this JS there is no other code, so I wonder which product page is used as the “most visited product page”.

If the tester takes an ordinary product page as the “before” benchmark and compares it to our custom product page after the app is installed, then we are bound to fail the test. However, I don’t think that’s a fair comparison.

So we do compare an ordinary product/collection page before the feature is enabled against a product/collection page with the feature enabled.

Why do you think this is not a fair comparison?

  1. Our app is not an embedded widget
    We use a theme app extension (app blocks), but the blocks are not embedded into a standard PDP. Most bundle apps offer widgets like this (image taken from the TAE docs):

    [image from the theme app extension docs: a widget embedded in a standard PDP]

    Our app offers a custom product template with app blocks that can turn a standard PDP into a collection-like product grid. Our case:

    [image: our collection-like custom product page]

    Therefore the comparison is not apples-to-apples: it measures a standard product page against a different page type. Most apps only offer app blocks for the built-in product template, but we offer our own template with our own app blocks.

  2. Our product template depends on user input
    Our case is similar to a landing page builder. When a user builds a new product template with many elements, that page will of course be slower than an ordinary product page. For example, if a merchant displays 50 product cards, loading 50 images is necessarily slower than loading a single product image (though lazy loading can soften this; see the sketch below). But this is what the merchant wants to achieve. In such a case, it’s inappropriate to say the app degrades the store’s performance, because these new product pages didn’t exist before the app was installed, and they were never the “most visited” product pages either.
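
For context on the 50-card example: native image lazy loading keeps most of those images out of the initial load that Lighthouse measures. A minimal sketch of the kind of markup a template like ours could emit (`productCardHtml` is a hypothetical helper, not our actual code):

```ts
// Hypothetical card renderer. Only the first few cards load their images
// eagerly; the rest use the browser's native lazy loading, so the initial
// page load (which Lighthouse measures) stays closer to an ordinary PDP.
function productCardHtml(imageUrl: string, title: string, index: number): string {
  const loading = index < 4 ? 'eager' : 'lazy';
  return `
    <div class="product-card">
      <img src="${imageUrl}" alt="${title}" loading="${loading}" width="300" height="300">
      <span class="product-card__title">${title}</span>
    </div>`;
}
```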