Cool. I like having a faster load time. That's why I use Google Chrome: of all the browsers out there, it's the one where I notice a difference in speed. Chrome is the fastest.
I wish nginx would roll out support for SPDY soon.
They have announced several times that it will be available sometime this month:
https://twitter.com/#!/nginxorg/status/178043637920305152
https://twitter.com/#!/nginxorg/status/184915722642784260
How many times was each page loaded to measure an average load time?
Robit - 5 times. Given the very controlled environment, we didn't need more measurements than that.
Were the SPDY connections being made over a TLS connection?
John - Yes, all SPDY connections were over SSL/TLS.
How can I start using it?
Or, how can I tell whether I'm already using it?
Perhaps at least as interesting: how does this affect the total amount of transferred data? I'm far less limited by network speed than by my total allowable monthly quota.
Jan - Good question. We didn't measure this directly, but previous work has shown that SPDY's header compression reduces header sizes by 85-88%. See: http://www.chromium.org/spdy/spdy-whitepaper
It would be good to study how much overall reduction SPDY achieves.
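As a rough illustration of where savings like that come from, here is a sketch using Python's zlib. SPDY's actual compressor uses a stateful zlib stream with a predefined dictionary; the header text and the exact ratios below are invented for illustration, but the key effect is the same: because one compression stream is shared across requests on a connection, repeated header fields in later requests compress almost to nothing.

```python
import zlib

# Hypothetical HTTP request headers (made up for illustration).
headers = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.19\r\n"
    "Accept: text/html,application/xhtml+xml,application/xml;q=0.9\r\n"
    "Accept-Encoding: gzip,deflate\r\n"
    "Accept-Language: en-US,en;q=0.8\r\n"
    "Cookie: session=abc123; tracking=xyz789\r\n"
    "\r\n"
).encode()

# One compressor shared across requests, as SPDY does per connection.
comp = zlib.compressobj()
first = comp.compress(headers) + comp.flush(zlib.Z_SYNC_FLUSH)
second = comp.compress(headers) + comp.flush(zlib.Z_SYNC_FLUSH)

print(f"raw headers:              {len(headers)} bytes")
print(f"first request compressed: {len(first)} bytes")
print(f"repeat request:           {len(second)} bytes")
```

The second request compresses to a handful of bytes because it is entirely back-references into the shared stream, which is why per-request header overhead shrinks so dramatically.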
After seeing this graph again at your talk on Friday, I have to say I don't think this visualization is good for these data. Your goal is to understand the distribution of expected speedups using SPDY, but to do so one has to visually estimate the average separation between the curves, which is hard because the separations are not sorted. Also, the x-axis is not a continuous metric; it's "site number", so drawing a line graph at all is inappropriate.
A better visualization is a CDF of percent speedups: the median would pop out, as would the range of speedups, and the distribution would be apparent.
Random Software Guy: Please take a look at the full article which has another graph showing the speedups (sorted as you requested). We spent some time mulling over how to present this data. A CDF is not a good presentation since it hides the fact that in one case we had a worse load time using SPDY.
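For readers weighing the two presentations: an empirical CDF does make the median and spread easy to read off, though as the reply notes, the single slowdown case only shows up as a small amount of mass below zero rather than as an obvious outlier. A minimal sketch, with invented speedup numbers rather than the study's data:

```python
from statistics import median

# Hypothetical per-site percent speedups (positive = SPDY faster).
# These values are invented for illustration, not the measured data.
speedups = [-4.0, 2.5, 8.1, 12.3, 15.0, 18.7, 22.4, 27.9, 31.2, 40.5]

# Empirical CDF: fraction of sites at or below each speedup value.
xs = sorted(speedups)
cdf = [(i + 1) / len(xs) for i in range(len(xs))]

print(f"median speedup: {median(speedups):.2f}%")
print(f"sites slower under SPDY: {sum(s < 0 for s in speedups)}/{len(speedups)}")
```

Plotting `xs` against `cdf` as a step function gives the CDF; the slowdown case is the step left of zero, which is easy to miss at a glance, illustrating the trade-off discussed above.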
Why don't you incorporate Google's Snappy compression, which is much faster than Gzip, into SPDY?
What I want to say is: can Internet censorship be applied to SPDY? Is it secure enough?
I hope it will be good news for Chinese users.
Just came across this post; these are very interesting results. One question: did you test any case in which the transfer was competing in the bottleneck link against other TCP flows? In that case, the aggregate bandwidth obtained by the multiple HTTP connections (6, from the full article) would be larger than the bandwidth obtained by the single TCP connection used by SPDY, and this could have a (partially) compensating effect. The number of flows won't make any difference if all the flows are part of our web transfer, which seems to have been the case in these experiments.
Hi Rodrigo,
This is an interesting case. It's true that SPDY through a single TCP connection may underperform parallel TCP connections in certain cases, including the one you mention. It's worthwhile doing that study separately to see how much of an effect this has on mobile networks.
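A back-of-the-envelope way to see the compensating effect Rodrigo describes, under an idealized model where every TCP flow through the bottleneck gets an equal share of bandwidth (real TCP dynamics are messier, and the flow counts here are only illustrative):

```python
# Idealized bottleneck sharing: N identical TCP flows each get 1/N of
# the link. This is only a rough intuition, not a model of real TCP.
def transfer_share(our_flows: int, competing_flows: int) -> float:
    """Fraction of bottleneck bandwidth our page load receives."""
    return our_flows / (our_flows + competing_flows)

# Against 4 competing flows, 6 parallel HTTP connections claim a much
# larger share than SPDY's single connection.
http_share = transfer_share(6, 4)  # 6/10
spdy_share = transfer_share(1, 4)  # 1/5

# With no competing traffic, the flow count doesn't matter, which
# matches the controlled experiments discussed in the post.
print(http_share, spdy_share, transfer_share(6, 0), transfer_share(1, 0))
```

Under this toy model, parallel HTTP would get three times SPDY's share on a contended link, which is exactly the kind of partial compensation a separate study on real mobile networks would need to quantify.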