According to this part:
| Page | Weight |
| --- | --- |
| Most visited product page | 40% |
| Most visited collection page | 43% |
| Home page | 17% |
I recently submitted my application for BFS. Here’s the feedback I got:
> App must not reduce Lighthouse storefront speed by more than 10 points. Your app is currently reducing Lighthouse score by more than 10 points. Please see our dev docs for more information on how to measure and reduce Lighthouse impact.
My app mainly offers a custom product template, but that template’s page should not be the most visited product page: when my app is not installed, the store doesn’t have the custom product pages at all.
According to our tests, the home and collection pages are barely affected. How can we get the test details from the reviewer? I want to see which pages were used for testing.
Hi Benny, the testing methodology we use is exactly the one described in the dev docs you linked. There can be some degree of variability between test runs, so we take the average of multiple runs for each page.
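If you want to reproduce this locally, here’s a minimal sketch of that run-and-average approach using the lighthouse and chrome-launcher npm packages (the URL, run count, and flags are illustrative, not our exact test configuration):

```ts
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

// Run Lighthouse `runs` times against `url` and return the mean
// performance score on the 0-100 scale; individual runs vary, so
// averaging smooths out the noise.
async function averagePerformanceScore(url: string, runs = 5): Promise<number> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const scores: number[] = [];
  try {
    for (let i = 0; i < runs; i++) {
      const result = await lighthouse(url, {
        port: chrome.port,
        onlyCategories: ['performance'],
      });
      // lhr.categories.performance.score is 0-1 (or null on error).
      scores.push((result?.lhr.categories.performance.score ?? 0) * 100);
    }
  } finally {
    await chrome.kill();
  }
  return scores.reduce((sum, s) => sum + s, 0) / scores.length;
}
```

Note that Lighthouse applies simulated network throttling by default, so the speed of the machine’s real connection has relatively little effect on the score.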
Yes, I think so. But I can’t reproduce the 10-point difference the tester saw, whether on a fast connection (~500 Mbps) or a slow one (~90 Mbps). There is no 10-point difference.
Which pages are used as the most visited product page and the most visited collection page? How are they selected?
Hey Benny, good catch - I’ll make a note to update the dev docs to better reflect current test conditions.
That said, the weightings still apply, but instead of “most visited”, we compare Lighthouse scores of a product page/collection page with the app feature enabled vs. scores of a product page/collection page without the app feature enabled.
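Here’s roughly how those weightings combine, as a sketch of the check rather than the exact scoring code (the weights come from the table at the top of this thread, and the per-page scores are assumed to be the averaged Lighthouse runs described above):

```ts
// Page weights from the BFS criteria table above.
const WEIGHTS = { product: 0.40, collection: 0.43, home: 0.17 } as const;

type PageScores = { product: number; collection: number; home: number };

// Weighted drop in averaged Lighthouse performance score when the
// app feature is enabled; the criterion allows at most 10 points.
function weightedDrop(disabled: PageScores, enabled: PageScores): number {
  return (Object.keys(WEIGHTS) as (keyof PageScores)[]).reduce(
    (sum, page) => sum + WEIGHTS[page] * (disabled[page] - enabled[page]),
    0,
  );
}

// A 20-point drop confined to the product page contributes
// 0.40 * 20 = 8 weighted points, which would still pass.
const passes = weightedDrop(
  { product: 95, collection: 92, home: 90 },
  { product: 75, collection: 92, home: 90 },
) <= 10; // true
```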
What app is this for? I can reach out to you for specifics and look a bit further into this.
Our app is used to create a custom product page from our product template; the purpose is to let customers build custom bundles. We failed the test last time. However, when I test our dev store’s storefront, I can’t find a significant difference before and after the app is installed.
Our app only adds a tiny script tag, which may affect the home, collection, and product pages. Other than this JS, there is no other code. That’s why I’m wondering which product page is used as the most visited product page.
If the tester takes an ordinary product page as the “before” benchmark and compares it against our custom product page after the app is installed, then we are bound to fail the test. However, I don’t think that’s a fair comparison.
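For context, the only storefront change the app makes is registering that one script tag, roughly like the sketch below (this assumes Shopify’s REST ScriptTag endpoint; the shop domain, API version, token handling, and script URL are all placeholders):

```ts
// Register the app's single storefront script via Shopify's REST
// ScriptTag resource. Every value below is a placeholder.
const SHOP = 'example.myshopify.com';
const API_VERSION = '2024-01'; // whichever stable version the app targets

async function registerScriptTag(accessToken: string): Promise<void> {
  const res = await fetch(
    `https://${SHOP}/admin/api/${API_VERSION}/script_tags.json`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-Shopify-Access-Token': accessToken,
      },
      body: JSON.stringify({
        script_tag: {
          event: 'onload', // fires after the storefront page has loaded
          src: 'https://cdn.example.com/bundle-builder.js', // the "tiny script"
        },
      }),
    },
  );
  if (!res.ok) {
    throw new Error(`ScriptTag creation failed: HTTP ${res.status}`);
  }
}
```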
So yes, we do compare an ordinary product/collection page with the feature disabled against a product/collection page with the feature enabled.
Why do you think this is not a fair comparison?