I have been working on MPD browsing performance and think it has been taken pretty far.
However, I am still noticing that it takes quite a while for a browsing page with a lot of entries to show: 350 items take approx. 4 seconds to appear on my Raspberry Pi 2.
When I look at the JavaScript console in this situation, I see that the websocket response is instantaneous, but then the DOM takes approximately 4 seconds to build.
I therefore think it would be good if we could focus some attention on improving the rendering speed of the UI.
This would really make Volumio snappier.
This is in no way meant as a criticism of the UI, just a statement that, if it were optimised a bit, we might really start to see a nippy application.
I’m absolutely open to optimising Volumio wherever it is needed, whether it’s the backend or the frontend! I’m inviting kurtommy, who is taking care of the UI, here so we can have a comprehensive discussion!
Great work piercer!
I should point out that the 4 seconds to build is actually JavaScript running in my browser. I imagine it is because it is creating a lot of ‘browse-table-items.html’ entries:
[code]
{{item.title}}
{{item.artist}} - {{item.album}}
[/code]
Each of these has a lot of Angular wiring to do, and the DOM probably gets parsed a lot.
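For reference, each repeated item presumably looks something like the sketch below (the surrounding markup is a guess, only the bindings are from the real template). In AngularJS 1.x every interpolated {{ }} binding sets up a watcher, so a few bindings per row times 350 rows means roughly a thousand watchers that all have to be created and then re-checked on every digest cycle.

[code]
<!-- guessed shape of browse-table-items.html (markup and the name browse.list are hypothetical) -->
<div ng-repeat="item in browse.list">
  <span class="title">{{item.title}}</span>
  <span class="meta">{{item.artist}} - {{item.album}}</span>
</div>
[/code]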
Hi Dude, on my Pi the socket takes about 800 ms to push ~150 items to the browser, and we still have to solve this bottleneck.
For the UI the problem was not the rendering (layout/paint/composite) but the JavaScript time, since the table has a lot of watchers.
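(For what it’s worth, the standard way to cut watcher count in AngularJS 1.3+ is one-time bindings; this is only a sketch of that idea, not necessarily the exact change made in Volumio2-UI.)

[code]
<!-- one-time bindings (::) evaluate once and then deregister their watchers,
     so a long, mostly static browse list no longer slows down every digest;
     browse.list is a placeholder name -->
<div ng-repeat="item in browse.list track by $index">
  <span class="title">{{::item.title}}</span>
  <span class="meta">{{::item.artist}} - {{::item.album}}</span>
</div>
[/code]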
This is before the perf fix: as you can see, one big block of JS and a small paint at the end; the list is on screen after ~2 sec.
This is after the perf fix: now we have lots of small JS/paint chunks and the first list items appear on screen after ~180 ms (no matter the length of the list); the rest is added progressively.
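To illustrate the progressive approach (again just a sketch under assumed names, not the literal diff): only a small slice of the list is rendered first, and the visible slice is grown in chunks so that each JS/paint cycle stays short.

[code]
<!-- render only the first `renderLimit` rows, then grow the limit in small steps -->
<div ng-repeat="item in browse.list | limitTo: renderLimit track by $index">
  <span class="title">{{::item.title}}</span>
  <span class="meta">{{::item.artist}} - {{::item.album}}</span>
</div>
[/code]

[code]
// controller side: grow the visible slice every 50 ms until the whole list is shown
// ($scope.browse.list and the chunk size of 20 are assumptions for the sketch)
$scope.renderLimit = 20;
var grow = $interval(function () {
  if ($scope.renderLimit >= $scope.browse.list.length) {
    $interval.cancel(grow);
  } else {
    $scope.renderLimit += 20;
  }
}, 50);
[/code]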
I am now using the head of the master branch of Volumio2-UI on my Pi2.
Thanks! I don’t know whether the optimisations you mention above are in yet, but the UI is much faster, especially when browsing long lists (I think 2-3 times faster). Really noticeable.
So it seems that the backend response time is quick enough. Our current bottleneck appears to be rendering the hamburger menus for the individual entries of the list…
My idea would be to send the data needed to build those hamburger menus from the BE, instead of letting the FE decide what to put into them…
What do you guys think?
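For concreteness, a minimal sketch of what such a backend-driven item could look like (none of these field names exist today, they are purely illustrative):

[code]
// hypothetical browse item where the BE also describes its hamburger menu
{
  "title": "Some title",
  "artist": "Some artist",
  "album": "Some album",
  "menu": [
    { "action": "addToQueue",    "label": "Add to queue" },
    { "action": "addToPlaylist", "label": "Add to playlist" }
  ]
}
[/code]

The FE would then simply render whatever entries it receives (e.g. an ng-repeat over item.menu) and map each action to the corresponding websocket call, instead of hard-coding the menu per item type.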