Discussion:
[whatwg] HttpOnly cookie for WebSocket?
Salvatore Loreto
2010-01-28 08:38:04 UTC
Hi,

a new IETF WG has been formed to take care of the WebSocket protocol:
HyBi: http://tools.ietf.org/wg/hybi/charters
So this issue is something that should be discussed there
(BTW, I am forwarding it to the HyBi ml)

N.B. to subscribe to the HyBi ml: https://www.ietf.org/mailman/listinfo/hybi


/Sal

A new IETF working group has been formed in the Applications Area.
For additional information, please contact the Area Directors or the
WG Chairs.
BiDirectional or Server-Initiated HTTP (hybi)
May/Should WebSocket use HttpOnly cookie while Handshaking?
I think it would be useful to use HttpOnly cookies with WebSocket so that
we could authenticate the WebSocket connection by the auth token
cookie, which might be HttpOnly for security reasons.
http://www.ietf.org/id/draft-ietf-httpstate-cookie-02.txt
--
ukai
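(Editor's note on the question above: an HttpOnly cookie is one the server flags so that page scripts cannot read it via document.cookie, although the browser still attaches it to requests. A minimal sketch of issuing such a cookie with Python's standard library; the cookie name "session" and the token value are hypothetical examples, not anything from the draft:)

```python
# Sketch: how a server marks a session cookie HttpOnly.
# Cookie name "session" and token value are hypothetical examples.
from http import cookies

jar = cookies.SimpleCookie()
jar["session"] = "abc123"          # auth token issued after login
jar["session"]["httponly"] = True  # hidden from document.cookie in page scripts
jar["session"]["secure"] = True    # only sent over HTTPS

# Produces a Set-Cookie header line carrying the HttpOnly and Secure flags.
print(jar.output())
```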
Salvatore Loreto
2010-01-28 11:03:51 UTC
Hi Ian,

first, I think it would be better to have and maintain both the whatwg
and hybi mailing lists in any conversation
related to WebSocket

at the BoF in Hiroshima there was a clear consensus from all the
participants (both those physically present and those attending
remotely via streaming and chat) to move the WebSocket standardization
work into the IETF community.
To be clear, the IETF community is not a closed community: all the
people involved in the discussion (especially in the mail discussion)
are the ones forming the IETF community.

the fact that there are already implementations of WebSocket (based on
the current draft) shipped or ready to be shipped in browsers and
servers is good news that highlights even more the need for a clear
standard document; so, to mention just one of the HyBi WG's intentions:
to gather all the experiences from people who have implemented
WebSocket, so as to eventually improve (if and only if necessary) the
current draft.

having said that, the work on HTTPState is also done within the IETF
community, so discussing the possible usage of HTTPState in WebSocket
in the same community gives the people involved in HTTPState the
possibility to express their opinions and provide their comments

/Sal
Post by Salvatore Loreto
a new IETF wg has been formed to take care of WebSocket protocol
HyBi: http://tools.ietf.org/wg/hybi/charters
So this issue is something that should be discussed there
(BTW, I am forwarding it to the HyBi ml)
N.B. to subscribe to the HyBi ml: https://www.ietf.org/mailman/listinfo/hybi
The WHATWG is still actively working on the WebSocket protocol, as we are
http://wiki.whatwg.org/wiki/FAQ#What_are_the_various_versions_of_the_spec.3F
...and feedback on the WebSocket protocol is therefore very welcome on
this mailing list. (Indeed, I continue to track all e-mails sent to this
list and will reply to all substantial feedback sent to it.)
As a side note, it's unclear exactly what the HyBi group is actually going
to be working on. The timetable listed on the charter linked above is
clearly at odds with reality; WebSocket is already shipping in Chrome and
is ready to be shipped in two other browsers, and multiple servers are
already available, so clearly March 2011 for a last call isn't really
workable (especially since the spec reached last call at the WHATWG in
2009 -- the main thing missing now is test cases). However, I encourage
anyone interested in Web Sockets to participate in the HyBi group, and
indeed discussion of their timetable is probably best had there.
Ian Hickson
2010-01-28 11:26:24 UTC
Post by Salvatore Loreto
first, I think it would be better to have and maintain both the whatwg
and hybi mailing lists in any conversation related to WebSocket
Please don't cross-post to the WHATWG list -- it causes threads to
fragment since not everyone is subscribed to both lists and the WHATWG
list rejects non-subscriber posts.
Post by Salvatore Loreto
at the BoF in Hiroshima there was a clear consensus from all the
participants (both those physically present and those attending
remotely via streaming and chat) to move the WebSocket standardization
work into the IETF community.
As I recall, the suggestion was for the W3C to work with the WHATWG on the
API spec, and for the IETF to work with the WHATWG on the protocol spec.
Certainly I do not recall any communication from the IETF to the WHATWG to
the effect that the WHATWG should stop working on or publishing the
WebSocket protocol spec. Indeed I was quite shocked to see my feedback to
this effect be ignored and the charter be published with no mention of
cooperation with the WHATWG, and with a ridiculous timeline that has us
going back to requirements and only reaching last call in March 2011, when
the spec in question has, as I mentioned in my e-mail to the WHATWG
earlier, already reached last call as of October last year. I was even
more shocked to see no mention of a test suite, which is really the only
time-consuming thing that really remains to be done at this point.
--
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
Greg Wilkins
2010-01-28 13:33:23 UTC
Ian,


I'm also curious that you say the WHATWG is still actively
working on the protocol (even though you also say that the
protocol has reached last call at the WHATWG)?

The WHATWG submitted the document to the IETF and surely
it was expected that IETF processes would be applied
to edit and refine the protocol and the document.
If the WHATWG continue to work on their own document,
that is only going to result in multiple specifications!

HTTP was "already shipping" and had "multiple servers"
available before RFC1945. Then over a decade of
specification work took place before RFC2616 finally
gave us a truly scalable protocol that has now stood
for another decade and guided an unprecedented expansion
of usage.

So it is only to be expected that the websockets protocol
and specification will continue to evolve over the next
few years.

The question is, who will guide this evolution?

Surely when the WHATWG submitted the protocol to the
IETF they were passing the protocol from the WHATWG
process to the IETF process?


regards
Ian Hickson
2010-01-28 21:53:36 UTC
-whatwg; Please don't cross-post to the WHATWG list -- it causes threads
to fragment since not everyone is subscribed to both lists and the WHATWG
list rejects non-subscriber posts. Also, people on the WHATWG list didn't
subscribe for political stuff; they just want the technical discussions.
I'm also curious that you say the WHATWG is still actively working on
the protocol (even though you also say that the protocol has reached
last call at the WHATWG)?
The idea of a last call for comments is to get comments... those comments
then have to be addressed. Then there's a call for implementations, which
also results in feedback, which also has to be addressed. And then
there's the test suite that needs writing. Last Call therefore is not even
half-way along the process.
The WHATWG submitted the document to the IETF
I don't think that's an accurate portrayal of anything that has occurred,
unless you mean the way my commit script uploads any changes to the draft
to the tools.ietf.org scripts. That same script also submits the various
documents generated from that same source document to the W3C and WHATWG
source version control repositories.
and surely it was expected that IETF processes would be applied to edit
and refine the protocol and the document. If the WHATWG continue to work
on their own document, that is only going to result in multiple
specifications!
HTML5, Web Storage, Web Workers, Microdata, 2D Canvas, Server Sent Events,
Web Sockets API, Cross-Document Messaging, and Channel Messaging are all
being developed by the W3C and the WHATWG together, and there's only one
spec for each of those. Why would the IETF not be able to work with the
WHATWG in the same way?
Surely when the WHATWG submitted the protocol to the IETF they were
passing the protocol from the WHATWG process to the IETF process?
Goodness no. If you are referring to my publishing a draft using the IETF
tools, that was (and is) done in the spirit of cooperation, just as is
done with HTML5 with the W3C. I would be very happy to work with the IETF
on Web Sockets, in conjunction with the WHATWG community, just as HTML5
is developed as a joint effort.
--
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
Julian Reschke
2010-01-28 22:11:27 UTC
Post by Ian Hickson
...
Post by Greg Wilkins
The WHATWG submitted the document to the IETF
I don't think that's an accurate portrayal of anything that has occurred,
unless you mean the way my commit script uploads any changes to the draft
to the tools.ietf.org scripts. That same script also submits the various
documents generated from that same source document to the W3C and WHATWG
source version control repositories.
...
By submitting an Internet Draft according to BCP 78 you grant the IETF
certain rights; it's not relevant whether it was a script or you
yourself using a browser or a MUA that posted it.

You may want to check <http://tools.ietf.org/html/bcp78#section-5.3>.

Best regards, Julian

PS: and yes, IANAL
Ian Hickson
2010-01-28 22:20:14 UTC
Post by Ian Hickson
...
Post by Greg Wilkins
The WHATWG submitted the document to the IETF
I don't think that's an accurate portrayal of anything that has occurred,
unless you mean the way my commit script uploads any changes to the draft to
the tools.ietf.org scripts. That same script also submits the various
documents generated from that same source document to the W3C and WHATWG
source version control repositories.
...
By submitting an Internet Draft according to BCP 78 you grant the IETF certain
rights; it's not relevant whether it was a script or you yourself using a
browser or a MUA that posted it.
You may want to check <http://tools.ietf.org/html/bcp78#section-5.3>.
With the exception of the trademark rights, which I don't have and
therefore cannot grant, the rights listed there are a subset of the rights
the IETF was already granted by virtue of the WHATWG publishing the spec
under a very liberal license. So that doesn't appear to be relevant.
--
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
Ian Fette (イアンフェッティ)
2010-01-28 22:49:42 UTC
So, moving back to the original question... I am very concerned here. A
relatively straightforward question was asked, with rationale for the
question. "May/Should WebSocket use HttpOnly cookie while Handshaking?
I think it would be useful to use HttpOnly cookies with WebSocket so that we
could authenticate the WebSocket connection by the auth token cookie, which
might be HttpOnly for security reasons."

It seems reasonable to assume that Web Sockets will be used in an
environment where users are authenticated, and that in many cases the Web
Socket will be established once the user has logged into a page via
HTTP/HTTPS. It seems furthermore reasonable to assume that a server may
track the logged-in-ness of the client using an HttpOnly cookie, and that the
server-side logic to check whether a user is already logged in could easily
be leveraged for Web Sockets, since it starts as an HTTP connection that
includes cookies and is then upgraded. It seems like a very straightforward
thing to say "Yes, it makes sense to send the HttpOnly cookie for Web Socket
connections".
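(Editor's note: the server-side check described above can be sketched as follows. Because the handshake begins as an ordinary HTTP Upgrade request, an HttpOnly auth cookie arrives in the Cookie header like any other cookie and can be parsed with standard tools. The header dict and the cookie name "session" below are hypothetical illustrations, not part of any spec:)

```python
# Sketch: reusing existing login-check logic during a WebSocket handshake.
# The cookie name "session" and the headers shown are hypothetical examples.
from http import cookies
from typing import Optional

def extract_session_token(handshake_headers: dict) -> Optional[str]:
    """Pull the auth token from the Cookie header of the HTTP Upgrade request.

    HttpOnly only hides the cookie from page scripts in the browser; on the
    wire it looks like any other cookie, so the server parses it normally.
    """
    raw = handshake_headers.get("Cookie")
    if raw is None:
        return None
    jar = cookies.SimpleCookie()
    jar.load(raw)
    morsel = jar.get("session")
    return morsel.value if morsel else None

# Headers as they might appear in a WebSocket Upgrade request:
headers = {
    "Upgrade": "websocket",
    "Connection": "Upgrade",
    "Cookie": "session=abc123; theme=dark",
}
print(extract_session_token(headers))  # prints the token: abc123
```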

Instead, we are bogged down in politics.

How are we to move forward on this spec? We have multiple server
implementations and multiple client implementations; if a simple
question like this gets bogged down in discussions of WHATWG vs IETF, we are
never going to get anywhere. Clearly there are people in both groups who
have experience in the area and valuable contributions to add, so how do we
move forward? Simply telling the folks on WHATWG that they've handed the
spec off to IETF is **NOT** in line with what I recall at the IETF, where
the two WGs agreed to work in concert with each other. What we
have before us is a very trivial question (IMO) that should receive a quick
response. Can we use this as a proof of concept that the two groups can work
together? If so, what are the concrete steps?

If we can't figure out how to move forward on such a simple issue, it seems
to me that we are in an unworkable situation, and should probably just
continue the work in WHATWG through to a final spec, let implementations
settle for a while, and then hand it off to IETF for refinement and
finalization in a v2 spec... (my $0.02)

-Ian
Maciej Stachowiak
2010-01-28 22:55:52 UTC
+1

We at Apple are interested in moving the technology forward, not so much in debating the politics. Can we at least keep procedural matters out of threads about technical questions?

- Maciej
Rob Sayre
2010-01-28 23:10:48 UTC
Also interested in moving the technology forward, not so much in
debating the politics.
John Fallows
2010-02-01 05:04:28 UTC
Agreed.

Kaazing is much more interested in resolving any outstanding technical
issues with WebSockets than in the political distractions that seem to
have been hindering real progress.

Regards,
John Fallows
--
|< Kaazing Corporation >|<
John Fallows | CTO | +1.650.960.8148
888 Villa St, Ste 410 | Mountain View, CA 94041, USA
Ian Hickson
2010-01-29 00:04:22 UTC
Post by Ian Fette (イアンフェッティ)
So, moving back to the original question... I am very concerned here. A
relatively straightforward question was asked, with rationale for the
question. "May/Should WebSocket use HttpOnly cookie while Handshaking? I
think it would be useful to use HttpOnly cookies with WebSocket so that we
could authenticate the WebSocket connection by the auth token cookie,
which might be HttpOnly for security reasons."
I replied to ukai on IRC -- independent of any politics, I plan to edit
the spec as he suggested next week (allowing httpOnly cookies), along with
going through all the other pending feedback on the spec.
--
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
Greg Wilkins
2010-01-29 03:34:11 UTC
Post by Ian Fette (イアンフェッティ)
Instead, we are bogged down in politics.
Work is proceeding on all fronts on the actual implementations,
so I don't think we are bogged down. Nobody is saying to hold
any releases for this. In fact I think it will be good to
get experience from wider usage of the protocol as it currently
stands.

But there are important issues to be discussed here and they
should not be derided as just unproductive politics.
Who will edit the specification document is a key question that
needs to be answered (but probably not in the HttpOnly cookie
thread).

For me (and my company, project & community), I have a problem
with the WhatWG process as it is not sufficiently open. It boils
down to:

0) Ian has been appointed AFAICT by an industry consortium
of browser vendors.
1) you can talk all you like
2) but you have to convince Ian to change anything
3) you have to be prepared to be unhappy if you can't
convince Ian

I don't mean to dis Ian or the whatwg, and I understand they've
done great work on HTML5. But this is hardly the right
process for standardizing a protocol that will fundamentally
affect the entire network infrastructure, with many components
that cannot be updated as easily as issuing a new point
release of a browser. I don't see how we can put one person
(any person) as the sole final arbiter of such important
decisions.

The IETF has a proven process for producing internet standards
that the entire industry follows. Why is websocket so special
that it needs a different process?
Post by Ian Fette (イアンフェッティ)
If we can't figure out how to move forward on such a simple issue, it
seems to me that we are in an unworkable situation, and should probably
just continue the work in WHATWG through to a final spec, let
implementations settle for a while, and then hand it off to IETF for
refinement and finalization in a v2 spec... (my $0.02)
I'm not an IETF process expert, but what I do know indicates
that the IETF is just as unlikely to rubber-stamp a V2 as they
are to rubber-stamp a V1.

The whatwg is perfectly entitled to keep the specification
under their own auspices, but if they want the specification
to be given the gravitas of an official IETF document, then it
has to be exposed to the IETF process and achieve a rough
consensus of all who are involved - including the whatwg.

Delaying the IETF process to v2 is unlikely to change many
of those voices from whom rough consensus is required.
"It's deployed now, so it's too late to change" is not
a great argument to rely on.

The whatwg has done a great job getting it this far,
but I really think they should trust (and be involved in)
the IETF process to take it to the next stage.


regards
Rob Sayre
2010-01-29 04:04:33 UTC
Post by Greg Wilkins
The IETF has a proven process for producing internet standards
that the entire industry follow.
Not really. It changes all the time and they don't really write down
what they are doing until it is too late. You also must please some set
of people. I don't think it's super interesting to make a process
argument here.

- Rob
Ian Fette (イアンフェッティ)
2010-01-29 04:16:53 UTC
I'm not saying "it's deployed so it's too late to make any changes." What I
am saying is that, from what I can see, things are in a very dysfunctional
state. A simple question comes up and it's not clear who is responsible for
doing what, and how we actually move forward. That's what bothers me. I
personally couldn't care less what the actual process ends up being, so long
as when a simple question gets asked it gets answered quickly and we can
move forward. That was not happening in this case.

As for "IETF is a proven process that has worked well in the past" -- I
think there are a number of things that have changed between when HTTP was
going through the IETF process and today. First, I really don't know how
much it matters anymore whether things have an official IETF stamp of
approval, so long as implementers agree on an interface. Second, I think the
dynamics (the number of people with a significant stake in the game) are
different, as is the shift from a more research-oriented environment (DARPA
and then big research labs like Bell Labs, etc.) to an industry-driven
environment with manufacturers / vendors / whatever coming out with new
functionality. Third, I think things today are moving much more quickly in
terms of the pace of innovation. Fourth, I think the number of people
waiting on these innovations is much larger (look at the number of users and
the amount of commerce / transactions going on on the Internet).

So, I guess all I'm trying to say is that I don't think "IETF has worked
before so it works now" is necessarily a great argument, in much the same
vein that "it's deployed so it's too late to make any changes" is not a great
argument. There are legitimate pros of the IETF process, and I don't mean to
dismiss that, but I'm not willing to take "It worked for HTTP" as some sort
of gospel truth reason why it should work for WS.

If it works, great. If it doesn't, let's figure out some process that does
work.

-Ian
Post by Greg Wilkins
Post by Ian Fette (イアンフェッティ)
Instead, we are bogged down in politics.
Work is proceeding on all fronts on the actual implementations,
so I don't think we are bogged down. Nobody is saying hold
any releases for this. In fact I think it will be good to
get experience from wider usage of the protocol as it currently
stands.
But there are important issues to be discussed here and they
should not be derided as just unproductive politics.
Who will edit the specification document is a key question that
needs to be answered (but probably not in the HttpOnly cookie
thread).
For me (and my company, project & community), I have a problem
with the WhatWG process, as it is not sufficiently open. It boils down to:
0) Ian has been appointed AFAICT by an industry consortium
of browser vendors.
1) you can talk all you like
2) but you have to convince Ian to change anything
3) you have to be prepared to be unhappy if you can't
convince Ian
I don't mean to dis Ian or the whatwg and I understand they've
done great work on HTML5. But this is hardly the right
process to standardize a protocol that will fundamentally
affect the entire network infrastructure, with many components
that cannot be updated as easily as issuing a new point
release on a browser. I don't see how we can put 1 person
(any person) as the sole final arbiter of such important
decisions.
The IETF has a proven process for producing internet standards
that the entire industry follows. Why is websocket so special
that it needs a different process?
Post by Ian Fette (イアンフェッティ)
If we can't figure out how to move forward on such a simple issue, it
seems to me that we are in an unworkable situation, and should probably
just continue the work in WHATWG through to a final spec, let
implementations settle for a while, and then hand it off to IETF for
refinement and finalization in a v2 spec... (my $0.02)
I'm not a IETF process expert, but what I do know indicates
that the IETF is just as unlikely to rubber stamp a V2 as they
are to rubber stamp a V1.
The whatwg is perfectly entitled to keep the specification
under their own auspices, but if they want the specification
to be given the gravitas of an official IETF document, then it
has to be exposed to the IETF process and achieve a rough
consensus of all who are involved - including the whatwg.
Delaying the IETF process to v2 is unlikely to change many
of those voices from whom rough consensus is required.
The "it's deployed now, so it's too late to change" is not
a great argument to rely on.
The whatwg has done a great job getting it this far,
but I really think they should trust (and be involved in)
the IETF process to take it to the next stage.
regards
Ian Hickson
2010-01-29 04:17:32 UTC
Permalink
The whatwg has done a great job getting it this far, but I really think
they should trust (and be involved in) the IETF process to take it to
the next stage.
I'm happy to work with the IETF, the point is just that the IETF should
cooperate with the WHATWG, on a joint effort, just like the W3C cooperates
with the WHATWG over HTML5.

To be blunt, though, if the IETF wants trust, it should earn it. Had the
IETF actually approached the WHATWG community or even mentioned working
with the WHATWG anywhere in the charter, or, say, responded to my feedback
on the charter, or had a realistic timetable in the charter that
acknowledged the stage at which the WebSockets spec is at, maybe trust
would be easier.

Instead, what's happened is the equivalent of me talking to some of the
people working on HTTP, and then saying "ok we'll do HTTP on a new mailing
list" and not even letting the HTTP working group know about it.
--
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
Greg Wilkins
2010-01-29 23:31:14 UTC
Permalink
Post by Ian Hickson
Instead, what's happened is the equivalent of me talking to some of the
people working on HTTP, and then saying "ok we'll do HTTP on a new mailing
list" and not even letting the HTTP working group know about it.
Hello!!!! Google has done exactly that! SPDY!

http://dev.chromium.org/spdy/


Don't get me wrong, I think it's awesomely great that google is doing
such research. But google has to be aware that their market power
makes them a poor community player. If chrome suddenly started
shipping with SPDY enabled by default, then that would effectively
be a hostile takeover of HTTP.

As google has done exactly this with websocket, it shows that they
have no concerns about doing a non-consensus-based takeover of
port 80, so why not take over the entire web protocol as well.


You talk as if the IETF is trying to do the takeover.

The reality is that the IETF has had custodianship of the internet
protocols since day dot, and it is Google^H^H^H^H^H^HWhatWG that is
trying to take over the job of creating new internet standards.
Maybe that was warranted in the case of HTML5 and the W3C, but I see
no evidence that IETF deserves to be usurped when it comes to
their role regarding internet protocols.


regards
Roberto Peon
2010-01-29 23:41:02 UTC
Permalink
I guess I can't just lurk today!

Actually for SPDY we're trying to do a lot of experimentation (i.e.
research) and then we'll figure out what the standard needs to be.
Until we know it is actually better *and why*, it is not useful to waste
people's time discussing a standard.

Were we to do it the other way around (set a standard, and then do
research), things would be unlikely to work well. What else would you have
us do?
We're even being public about it, with open source implementations for
something which will be backwards compatible with what exists today...
Honestly, if it worked, I'd happily use a different port (currently we're
wanting to use port 443 and it *is* an encrypted channel), but we have data
that shows that this doesn't work reliably.

It seems like the network has ossified a bit, and it is hard to get any
changes out there.
-=R
Post by Greg Wilkins
Post by Ian Hickson
Instead, what's happened is the equivalent of me talking to some of the
people working on HTTP, and then saying "ok we'll do HTTP on a new
mailing
Post by Ian Hickson
list" and not even letting the HTTP working group know about it.
Hello!!!! Google has done exactly that! SPDY!
http://dev.chromium.org/spdy/
Don't get me wrong, I think it's awesomely great that google is doing
such research. But google has to be aware that their market power
makes them a poor community player. If chrome suddenly started
shipping with SPDY enabled by default, then that would effectively
be a hostile takeover of HTTP.
As google has done exactly this with websocket, it shows that they
have no concerns about doing a non consensus based takeover of
port 80, so why not takeover the entire web protocol as well.
You talk as if the IETF is trying to do the take over.
The reality is that the IETF has had custodianship of the internet
protocols since day dot, and it is Google^H^H^H^H^H^HWhatWG that is
trying to take over the job of creating new internet standards.
Maybe that was warranted in the case of HTML5 and the W3C, but I see
no evidence that IETF deserves to be usurped when it comes to
their role regarding internet protocols.
regards
_______________________________________________
hybi mailing list
https://www.ietf.org/mailman/listinfo/hybi
Greg Wilkins
2010-01-30 00:33:44 UTC
Permalink
Roberto,

I don't mean to criticise your efforts on SPDY. As I said
it is really great that you guys are doing that kind of research
and you're finding out great stuff.

I'm definitely not advocating that you should do design by
committee and I don't mean to fault what you have done to
date.

The note of caution I was trying to highlight, is what if
google started shipping Chrome with SPDY enabled. Given
Google's general market presence (and growing browser
presence), that would essentially be a takeover of the
future of HTTP by a single corporation.

Considering how they have proceeded with websocket, it
is not inconceivable that a similar path might eventually
be followed for SPDY.

That is why it is important that the internet industry
through the IETF clearly asserts that the IETF process
is the currently accepted way of creating consensus
for internet protocols.

Even if the "network has ossified" somewhat (and I've
been robustly corrected before for saying similar), that
is not a reason to give up on a consensus approach.
The WHATWG is not an alternative consensus mechanism,
it is a closed consortium of one sector of the industry,
with a market-share behemoth in control.

I don't mean to be over dramatic, but this discussion
is essentially about if we are going to cede control over
the future of internet protocols from the IETF to Google.

Sorry again if my comments were taken as a criticism of
your work on SPDY. None was intended, just a note of
caution about the future.

regards
Post by Roberto Peon
I guess I can't just lurk today!
Actually for SPDY we're trying to do a lot of experimentation (i.e.
research) and then we'll figure out what the standard needs to be.
Until we know it is actually better *and why*, it is not useful to waste
people's time discussing a standard.
Were we to do it the other way around (set a standard, and then do
research), things would be unlikely to work well. What else would you
have us do?
We're even being public about it, with open source implementations for
something which will be backwards compatible with what exists today...
Honestly, if it worked, I'd happily use a different port (currently
we're wanting to use port 443 and it *is* an encrypted channel), but we
have data that shows that this doesn't work reliably.
It seems like the network has ossified a bit, and it is hard to get any
changes out there.
-=R
Post by Ian Hickson
Instead, what's happened is the equivalent of me talking to some
of the
Post by Ian Hickson
people working on HTTP, and then saying "ok we'll do HTTP on a new
mailing
Post by Ian Hickson
list" and not even letting the HTTP working group know about it.
Hello!!!! Google has done exactly that! SPDY!
http://dev.chromium.org/spdy/
Don't get me wrong, I think it's awesomely great that google is doing
such research. But google has to be aware that their market power
makes them a poor community player. If chrome suddenly started
shipping with SPDY enabled by default, then that would effectively
be a hostile takeover of HTTP.
As google has done exactly this with websocket, it shows that they
have no concerns about doing a non consensus based takeover of
port 80, so why not takeover the entire web protocol as well.
You talk as if the IETF is trying to do the take over.
The reality is that the IETF has had custodianship of the internet
protocols since day dot, and it is Google^H^H^H^H^H^HWhatWG that is
trying to take over the job of creating new internet standards.
Maybe that was warranted in the case of HTML5 and the W3C, but I see
no evidence that IETF deserves to be usurped when it comes to
their role regarding internet protocols.
regards
SM
2010-01-29 07:56:25 UTC
Permalink
Post by Greg Wilkins
For me (and my company, project & community), I have a problem
with the WhatWG process, as it is not sufficiently open. It boils down to:
0) Ian has been appointed AFAICT by an industry consortium
of browser vendors.
As far as I know, Ian submitted an Internet-Draft about
Websockets. According to the HyBi charter,
draft-hixie-thewebsocketprotocol is to be used as an input document
for the working group.
Post by Ian Hickson
I'm happy to work with the IETF, the point is just that the IETF should
cooperate with the WHATWG, on a joint effort, just like the W3C cooperates
with the WHATWG over HTML5.
The IETF is the sum of the voices from the individuals who
participate in the process. That includes Greg and Ian and everyone
else in this Working Group. The charter says that this Working Group
will take into consideration the concerns raised by the W3C WebApps
working group. It has already been agreed that the HyBi working
group will take on prime responsibility for the specification of the
WebSockets protocol. People from the WHATWG are welcome to
participate in the IETF process.
Post by Ian Hickson
To be blunt, though, if the IETF wants trust, it should earn it. Had the
IETF actually approached the WHATWG community or even mentioned working
with the WHATWG anywhere in the charter, or, say, responded to my feedback
on the charter, or had a realistic timetable in the charter that
acknowledged the stage at which the WebSockets spec is at, maybe trust
would be easier.
According to draft-hixie-thewebsocketprotocol-68, Ian Hickson from
Google, Inc. submitted the Internet-Draft and asserted that the
submission is in full conformance with BCP 78 and BCP 79. It is Ian
that brought the specification to the IETF. Ian accepted to give
change control to the IETF and this Working Group has taken up that work.

As far as I know, there has been feedback on the charter from the
individuals in this Working Group. There was also a call for
comments on the charter before it was approved. The timetable was
also part of the chartering discussion.

It has previously been mentioned on another IETF mailing list that
people blink their eyes as they read the first page of an RFC. After
submitting 68 revisions of the draft-hixie-thewebsocketprotocol, I
would assume that the author is fully aware of the IETF
requirements. The submission was made on behalf of a well-known
company which has the resources to assess the implications. There
are long-time participants from that company that understand how the
IETF works and they may be able to explain the process to the author.

Regards,
-sm
Ian Hickson
2010-01-29 08:36:31 UTC
Permalink
Post by Ian Hickson
I'm happy to work with the IETF, the point is just that the IETF
should cooperate with the WHATWG, on a joint effort, just like the W3C
cooperates with the WHATWG over HTML5.
The IETF is the sum of the voices from the individuals who participate
in the process.
As is the WHATWG.
The charter says that this Working Group will take into consideration
the concerns raised by the W3C WebApps working group.
But it doesn't mention the WHATWG, which is working on this spec.
It has already been agreed that the HyBi working group will take on
prime responsibility for the specification of the WebSockets protocol.
By whom?
People from the WHATWG are welcome to participate in the IETF process.
One could equally say:

People from the IETF are welcome to participate in the WHATWG process.

However, instead, I suggest we work together, just like the W3C and the
WHATWG are cooperating on a dozen other specs.
Post by Ian Hickson
To be blunt, though, if the IETF wants trust, it should earn it. Had
the IETF actually approached the WHATWG community or even mentioned
working with the WHATWG anywhere in the charter, or, say, responded to
my feedback on the charter, or had a realistic timetable in the
charter that acknowledged the stage at which the WebSockets spec is
at, maybe trust would be easier.
As far as I know, there has been feedback on the charter from the
individuals in this Working Group.
According to draft-hixie-thewebsocketprotocol-68, Ian Hickson from
Google, Inc. submitted the Internet-Draft and asserted that the
submission is in full conformance with BCP 78 and BCP 79. It is Ian
that brought the specification to the IETF. Ian accepted to give change
control to the IETF and this Working Group has taken up that work.
Actually, I was asked to submit it by the IETF. I agreed to do so while
simultaneously publishing it through the WHATWG. At no point was it
suggested that the WHATWG should stop working on it.
It has previously been mentioned on another IETF mailing list that
people blink their eyes as they read the first page of a RFC. After
submitting 68 revisions of the draft-hixie-thewebsocketprotocol, I would
assume that the author is fully aware of the IETF requirements. The
submission was made on behalf of a well-known company which has the
resources to assess the implications. There are long-time participants
from that company that understand how the IETF works and they may be
able to explain the process to the author.
My goal is not to follow IETF process. My goal is to get interoperable
implementations. If the IETF would like to take part in this effort, I am
happy to be involved also. Let me know if you're interested.
--
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
Martin J. Dürst
2010-01-29 11:26:54 UTC
Permalink
Hello Ian,
Post by Ian Hickson
People from the WHATWG are welcome to participate in the IETF process.
People from the IETF are welcome to participate in the WHATWG process.
However, instead, I suggest we work together, just like the W3C and the
WHATWG are cooperating on a dozen other specs.
Please don't cross-post to the WHATWG list -- it causes threads to
fragment since not everyone is subscribed to both lists and the WHATWG
list rejects non-subscriber posts.
Ian, could you please explain how exactly *you* imagine such a
cooperation should work, if not e.g. by cross-posting?

Regards, Martin.
--
#-# Martin J. Dürst, Professor, Aoyama Gakuin University
#-# http://www.sw.it.aoyama.ac.jp mailto:***@it.aoyama.ac.jp
Ian Hickson
2010-01-29 11:39:24 UTC
Permalink
Post by Martin J. Dürst
Ian, could you please explain how exactly *you* imagine such a
cooperation should work, if not e.g. by cross-posting?
The same way it works with the HTML5 specification and the various Web
Apps specifications -- feedback is collected from both groups (and indeed,
anywhere else that feedback is provided, e.g. on blogs or forums), and
changes are made that take into account all the feedback. The most active
members of both groups stay in regular contact, e.g. on IRC, or by e-mail,
to ensure that everyone is on the same page. Where editorial differences
arise (e.g. the IETF prefers text/plain specs, the WHATWG prefers HTML-
based specs), the groups ensure that normative requirements remain
identical across different versions. Basically, exactly as has been
happening for the past few years already.
--
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
Julian Reschke
2010-01-29 13:07:54 UTC
Permalink
Post by Ian Hickson
Post by Martin J. Dürst
Ian, could you please explain how exactly *you* imagine such a
cooperation should work, if not e.g. by cross-posting?
The same way it works with the HTML5 specification and the various Web
Apps specifications -- feedback is collected from both groups (and indeed,
anywhere else that feedback is provided, e.g. on blogs or forums), and
changes are made that take into account all the feedback. The most active
members of both groups stay in regular contact, e.g. on IRC, or by e-mail,
to ensure that everyone is on the same page. Where editorial differences
arise (e.g. the IETF prefers text/plain specs, the WHATWG prefers HTML-
based specs), the groups ensure that normative requirements remain
identical across different versions. Basically, exactly as has been
happening for the past few years already.
...
Feedback that affects the contents of a WG deliverable should be
submitted as "IETF Contribution", as described in
<http://www.ietf.org/about/note-well.html>.

Best regards, Julian
Ian Hickson
2010-02-01 11:25:45 UTC
Permalink
Post by Julian Reschke
Post by Ian Hickson
The same way it works with the HTML5 specification and the various Web
Apps specifications -- feedback is collected from both groups (and
indeed, anywhere else that feedback is provided, e.g. on blogs or
forums), and changes are made that take into account all the feedback.
The most active members of both groups stay in regular contact, e.g.
on IRC, or by e-mail, to ensure that everyone is on the same page.
Where editorial differences arise (e.g. the IETF prefers text/plain
specs, the WHATWG prefers HTML- based specs), the groups ensure that
normative requirements remain identical across different versions.
Basically, exactly as has been happening for the past few years
already. ...
Feedback that affects the contents of a WG deliverable should be
submitted as "IETF Contribution", as described in
<http://www.ietf.org/about/note-well.html>.
I'm not going to ignore feedback that is sent outside the context of this
working group.
--
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
Maciej Stachowiak
2010-02-01 11:38:28 UTC
Permalink
Post by Ian Hickson
Post by Julian Reschke
Post by Ian Hickson
The same way it works with the HTML5 specification and the various Web
Apps specifications -- feedback is collected from both groups (and
indeed, anywhere else that feedback is provided, e.g. on blogs or
forums), and changes are made that take into account all the feedback.
The most active members of both groups stay in regular contact, e.g.
on IRC, or by e-mail, to ensure that everyone is on the same page.
Where editorial differences arise (e.g. the IETF prefers text/plain
specs, the WHATWG prefers HTML- based specs), the groups ensure that
normative requirement remain identical across different versions.
Basically, exactly as has been happening for the past few years
already. ...
Feedback that affects the contents of a WG deliverable should be
submitted as "IETF Contribution", as described in
<http://www.ietf.org/about/note-well.html>.
I'm not going to ignore feedback that is sent outside the context of this
working group.
I don't think the rules that Julian linked require that. The RFCs that
apply to IETF contributions per that page seem to be about requirements
relating to intellectual property rights. Specifically, grant of
non-exclusive copyright license to the IETF Trust, and patent
disclosure obligations. I am not sure of their relevance to this thread.

Regards,
Maciej
Julian Reschke
2010-02-01 12:18:31 UTC
Permalink
Post by Ian Hickson
...
I'm not going to ignore feedback that is sent outside the context of this
working group.
I'm encouraging you to convince those contributors to give their
feedback on the IETF mailing list.

BR, Julian
Greg Wilkins
2010-01-29 13:39:34 UTC
Permalink
Post by Ian Hickson
Post by Martin J. Dürst
Ian, could you please explain how exactly *you* imagine such a
cooperation should work, if not e.g. by cross-posting?
The same way it works with the HTML5 specification and the various Web
Apps specifications....
Ian,

the problem with this approach is that an internet protocol
is out of scope for the WHATWG charter, and the WHATWG
process is entirely inappropriate for forging a consensus
across all the interested parties.

The whatwg process relies on the consent of a single individual
(yourself) as editor. This position is an appointment made by an
invitation only committee made up of 9 representatives from
various browser manufacturers. You are also on that committee,
the spokesman for the group and an employee of the company that
is shipping the first client implementation.

The whatwg self describes as having a main focus of HTML5,
plus work on Web Workers, Web Forms and Web Controls. There
is nothing in the whatwg activities to suggest that it is
the right body to specify something that will affect web
servers, OS network stacks, firewalls, proxies, routers, bridges,
caches, connection aggregators, SSL offload, load balancers,
filters, corporate security policies, web frameworks,
3G networks, mobile battery life, traffic analysis tools,
etc. etc.

I'm totally happy for a consortium of browser vendors
to use whatever process they like to define the
specifications for the mark up and javascript APIs
they will support in their own products.

I'm totally unhappy that such a consortium should
produce a "standard" internet protocol without
due process involving all of the industry and
community in a truly open decision making process.

You are an incredibly diligent guy and I really applaud
the effort you put in to consider and reply to the
vast amount of feedback that you get. But at the
end of the day, if you personally are unconvinced, then
it's not going in the spec. No one person is without
bias, conflicts of interest, areas of inexperience,
bad days at the office etc.




So the solution that I would like to suggest,
is that the WhatWG continue work on the current protocol
specification, and that will be 1.0. The browser vendors and
other WhatWG participants can continue to work towards the
goal of interoperable implementations. This will be a
WhatWG document and will never be an IETF RFC.

In parallel, the IETF WG should focus on producing
a 1.1 version of the protocol/specification based on
an all-of-industry feedback and consensus. This will
be built on 1.0 and have reasonable backwards
compatibility as a necessary requirement. But its
charter will directly address the concerns of the whole
internet and not just the browsers and app developers.
It will exposed to the full IETF process and will
eventually be an IETF RFC.

Each group will have editorial control over
their own document and they will need to
cooperate, so that they do not significantly
diverge on any points of substance. The
whatwg would also participate in the IETF
process and their consent would be a vital
part of any rough consensus there.

This is not unlike the standardization of HTTP,
where HTTP/1.0 was more or less a codification of the
protocol that had been implemented by browsers.
HTTP/1.1 was the internet standard developed by
all of industry and addressed the concerns of all.


regards
Justin Erenkrantz
2010-01-29 15:56:58 UTC
Permalink
Post by Greg Wilkins
The whatwg process relies on the consent of a single individual
(yourself) as editor. This position is an appointment made by an
invitation-only committee made up of 9 representatives from
various browser manufacturers. You are also on that committee,
the spokesman for the group and an employee of the company that
is shipping the first client implementation.
Yes, this is my biggest concern about the process so far - it seems
very exclusionary to those of us who develop servers. So far, this is
a significant portion of the community that I feel has not had a
legitimate chance to provide any real input into the WebSocket
protocol. Instead, as an httpd developer who knows just as much about
HTTP as anyone else on this list, I just get the feeling that the
browser developers are telling me that I need to implement a
"protocol" without providing a legitimate opportunity for feedback.

Instead, I just see that there have been unilateral decisions that
have profound consequences (mandated port, conflation of security,
etc.) that show little hope of being re-considered. It seems that
folks are intending to rubber-stamp the draft from WHATWG which
is...sad. As such, it often leaves me wondering whether I should even
bother trying - so I applaud Greg for trying to speak up while also
getting Jetty to speak WebSocket.

I've expressed my feelings before that the current "draft" document
for WebSocket is pretty impenetrable, and I hope that future IETF
drafts alter it into something that is independently implementable by
both server and client developers. Part of the elegance of HTTP is
that it's pretty easy to implement a basic reasonably-conformant
version - WebSocket simply does not have that property at this time.

Since IETF 77 is in my backyard, I do hope to attend any WG sessions
related to hybi. If folks are interested in having a working session
on rewriting the latest draft into something that is more
approachable, I'm definitely interested. I really would like to offer
something constructive as I share the goals of the WG - there is a
need for an async protocol, but I'm hard-pressed to stand behind the
current process as I don't feel very enfranchised at the moment. --
justin
Greg Wilkins
2010-01-29 22:55:19 UTC
Permalink
Post by Justin Erenkrantz
so I applaud Greg for trying to speak up while also
getting Jetty to speak WebSocket.
The process of implementing WebSocket and using it within cometd
has been very illuminating.

Once you get past the strange language of the spec, the protocol
is pretty easy to implement.

It was a little silly having to implement two framing mechanisms
when only one of them has an API to enable its usage.

We're currently updating cometd to optionally use websocket and
the amazing thing is how little it changes (or improves) the
resulting protocol.

Because there is no "buy-in" from intermediaries, we have no
way of knowing how long they will keep open an idle connection.
Because there is no meta-data, we have no idea how long a browser
will keep an idle connection.

So we have to send keep alive messages over the websocket
to make sure it is not closed. For this we are just using
the /meta/connect message we use for long polling over XHR.
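The keep-alive workaround described above can be sketched as follows. This is a minimal illustration of the policy, not cometd's actual API: the class and method names are mine, and the 25-second default is an assumed "safe" idle window, since the protocol carries no metadata about intermediary or browser timeouts.

```python
import time

class KeepAlivePolicy:
    """Sketch of the application-level keep-alive a framework is forced
    to run over WebSocket: neither intermediaries nor browsers advertise
    their idle timeout, so pick a conservative interval and send a
    heartbeat (e.g. a /meta/connect message) whenever the link has been
    idle that long. Names and the default interval are illustrative."""

    def __init__(self, interval_seconds=25.0):
        # Assumed safe idle window; deployments must guess, since the
        # protocol gives no hint of how long idle connections survive.
        self.interval = interval_seconds
        self.last_activity = time.monotonic()

    def note_traffic(self, now=None):
        """Record that a real message was sent or received."""
        self.last_activity = time.monotonic() if now is None else now

    def heartbeat_due(self, now=None):
        """True when an idle heartbeat should go out to hold the
        connection open."""
        now = time.monotonic() if now is None else now
        return now - self.last_activity >= self.interval
```

A send loop would call note_traffic() on every real message and poll heartbeat_due() on a timer, sending the /meta/connect heartbeat whenever it returns true.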

Because there is no orderly close mechanism, we have to keep
our own close handshake and implement our own acks for our
reliable messaging extension (which currently batches acks
into the equivalent of a long poll.... I guess we could
re-invent TCP inside websocket and do message by message
acks... but what a waste!).

So if you look on the wire of cometd running over websocket,
it just looks like long polling. We use the same /meta/connect
long poll message as a keep alive and as our ack carrier.

We've saved 1 connection, which is great. But I fear that
saving will be eaten up by the ability of application developers
to open any number of websockets.

If we are not running in reliable message mode, then we don't
need to wait for a /meta/connect before sending a message
to the client, so we get a small improvement in maximal
message latency for current usage and a good improvement
for streaming usage.

But as I've frequently said, it works ok, but it solves
few of my real pain points as a comet vendor, and it has
caused me more code and complexity, not less.

It's not provided any semantics that would allow
any cometd users to consider going directly to websockets
instead of using a comet framework. Sure it makes sending
messages easy, but that is always easy. It does not help
for when you can't send messages or when connections drop
or servers change etc. These are all the realworld things
that must be dealt with and ws makes this harder not easier.

It will make cometd usable for a few more use-cases, but
for the vast majority of cometd users, it will be a
transparent change under the hood that makes no significant
difference and I'm left wondering what all the fuss is
about.


regards
Maciej Stachowiak
2010-01-30 06:28:54 UTC
Permalink
Post by Greg Wilkins
Because there is no "buy-in" from intermediaries, we have no
way of knowing how long they will keep open an idle connection.
Because there is no meta-data, we have no idea how long a browser
will keep an idle connection.
So we have to send keep alive messages over the websocket
to make sure it is not closed. For this we are just using
the /meta/connect message we use for long polling over XHR
That is a valid concern. I think it would be a problem to design a protocol where buy-in from intermediaries is required to deploy at all, because that would greatly delay the deployment timeline. However, you have pointed out a real problem with not knowing how intermediaries will react, namely that you don't know if you need to take special measures to hold the connection open.

I think the right way to approach this, and issues of intermediary participation, is to have optional opt-in from intermediaries. I would like to see a design where at least the SSL version of the WebSocket protocol can operate with no need to change proxies or other intermediaries, but where if proxies or other intermediaries are updated to opt in, there's a way for both the client and server to know that, and to be able to make certain default assumptions, for instance how long the connection is held open even if completely idle. Do you have a concrete proposal?

Regards,
Maciej
Justin Erenkrantz
2010-01-30 07:15:21 UTC
Permalink
Post by Maciej Stachowiak
both the client and server to know that, and to be able to make certain
default assumptions, for instance how long the connection is held open even
if completely idle. Do you have a concrete proposal?
It should be possible to exchange parameters on keep-alive as the
connection is established. httpd has some extensions that expose
Keep-Alive parameters in an OPTIONS response; serf tries to take
advantage of this information as a hint. Otherwise, it heuristically
determines the keepalive parameters based upon the default httpd
configs.
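As an illustration of the kind of hint serf consumes, here is a sketch of parsing a `Keep-Alive: timeout=5, max=100` header (the header shape is httpd's; this parser is illustrative, not serf code):

```python
# Sketch of consuming httpd's hint (the "Keep-Alive: timeout=5, max=100" header
# shape is real; this parser is illustrative, not serf code).

def parse_keep_alive(value: str) -> dict:
    params = {}
    for part in value.split(","):
        if "=" in part:
            k, v = part.strip().split("=", 1)
            params[k.strip().lower()] = int(v)
    return params

print(parse_keep_alive("timeout=5, max=100"))  # {'timeout': 5, 'max': 100}
```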

My concern - a la HTTP - is that you should not conflate
content-related metadata with protocol metadata. For something akin
to BWTP, that could be exchanged as part of the channel-creation process
without affecting the content itself. -- justin
Greg Wilkins
2010-01-30 07:19:07 UTC
Permalink
Feedback from server-side implementors is awesome! I'm going to reply to
some of these topics separately to put them in separate threads. Let's
start with intermediaries and the expectation of holding open an idle connection.
Post by Greg Wilkins
Because there is no "buy-in" from intermediaries, we have no
way of knowing how long they will keep open an idle connection.
Because there is no meta-data, we have no idea how long a browser
will keep an idle connection.
So we have to send keep alive messages over the websocket
to make sure it is not closed. For this we are just using
the /meta/connect message we use for long polling over XHR
That is a valid concern. I think it would be a problem to design a
protocol where buy-in from intermediaries is required to deploy at all,
because that would greatly delay the deployment timeline. However, you
have pointed out a real problem with not knowing how intermediaries will
react, namely that you don't know if you need to take special measures
to hold the connection open.
I think the right way to approach this, and issues of intermediary
participation, is to have optional opt-in from intermediaries. I would
like to see a design where at least the SSL version of the WebSocket
protocol can operate with no need to change proxies or other
intermediaries, but where if proxies or other intermediaries are updated
to opt in, there's a way for both the client and server to know that,
and to be able to make certain default assumptions, for instance how
long the connection is held open even if completely idle. Do you have a
concrete proposal?
I've made many, many concrete proposals... so many that I've lost
track, as they've all been rebuffed. I don't really care how the
issue is solved... it just needs to be solved.

But for starters... let's make the upgrade request not just look like
an HTTP request, let's make it a real HTTP request. Then intermediaries
and servers would be free to add new headers and do funky HTTP stuff
without needing to involve the browsers.

Then for the timeout problem, we actually need the same solution for
long polling as for websockets. We need a standard header that expresses
the idle timeouts, which can be checked by all intermediaries and either
respected or adjusted by them.
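A sketch of the adjustment Greg proposes, assuming a hypothetical `X-Idle-Timeout` header (no such header is standardized): each aware intermediary copies the advertised value, lowering it to its own limit, so the endpoint ends up seeing the minimum over the whole path.

```python
# Sketch only: "X-Idle-Timeout" is a hypothetical header name; nothing like it
# is standardized. An aware proxy copies the advertised value, lowering it to
# its own limit, so the client ends up with the minimum over the whole path.

def forward_timeout(headers: dict, own_idle_timeout_s: int) -> dict:
    out = dict(headers)
    advertised = int(out.get("X-Idle-Timeout", own_idle_timeout_s))
    out["X-Idle-Timeout"] = str(min(advertised, own_idle_timeout_s))
    return out

# Server advertises 300 s; a proxy with a 60 s idle limit adjusts it down:
print(forward_timeout({"X-Idle-Timeout": "300"}, 60))  # {'X-Idle-Timeout': '60'}
```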

Ideally there could be some non 101 return codes that could be sent
so that a websocket client and a websocket server could have several
rounds of negotiation for things like credentials, timeouts and maybe
even redirections!

regards
Maciej Stachowiak
2010-01-30 07:36:40 UTC
Permalink
Post by Greg Wilkins
Post by Maciej Stachowiak
That is a valid concern. I think it would be a problem to design a
protocol where buy-in from intermediaries is required to deploy at all,
because that would greatly delay the deployment timeline. However, you
have pointed out a real problem with not knowing how intermediaries will
react, namely that you don't know if you need to take special measures
to hold the connection open.
I think the right way to approach this, and issues of intermediary
participation, is to have optional opt-in from intermediaries. I would
like to see a design where at least the SSL version of the WebSocket
protocol can operate with no need to change proxies or other
intermediaries, but where if proxies or other intermediaries are updated
to opt in, there's a way for both the client and server to know that,
and to be able to make certain default assumptions, for instance how
long the connection is held open even if completely idle. Do you have a
concrete proposal?
I've made many many concrete proposals.... so many that I've lost
track as they've all been rebuffed. I don't really care how the
issue is solved... it just needs to be solved.
But for starters... let's make the upgrade request not just look like
a HTTP request, let's make it a real HTTP request. Then intermediaries
and servers would be free to add new headers and do funky HTTP stuff
without needing to involve the browsers.
I don't have anything against this suggestion per se, but it doesn't seem to solve either of the problems raised above:

- Letting intermediaries indicate that they are aware of WebSocket (perhaps allowing you to assume a default minimum timeout, and maybe with other benefits.)
- Knowing a lower bound on your idle timeout so you can avoid sending excessive messages just to keep the connection alive.

It might help provide a mechanism for this, but it's not even totally clear to me how it would.
Post by Greg Wilkins
Then for the timeout problem, we actually need the same solution for
long polling as for websockets. We need a standard header that expresses
the idle timeouts, which can be checked by all intermediaries and either
respected or adjusted by them.
It seems like we need a mechanism for that which is robust in the face of unaware intermediaries. One way I can think of to do that is to encode the information in a hop-by-hop header. Unaware intermediaries would (I hope) just drop it, while aware intermediaries could read it and update the information. Thus, you only get a header indicating WebSocket awareness if both the origin server (or client) and all intermediaries are aware, and your idle timeout lower bound would be based on the minimum of all participating endpoints and intermediaries.

Some set of headers is defined to be hop-by-hop: <http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.5.1>. The Connection header can also be used to cause additional headers to be treated as hop-by-hop headers: <http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.10>. One thing I don't know is whether deployed intermediaries respect this. Do they respect the fixed list of hop-by-hop headers? Do they treat headers listed in the Connection header field as hop-by-hop?
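A sketch of the RFC 2616 rule an aware intermediary would apply: drop the fixed hop-by-hop set, plus any header named in Connection, before forwarding (illustrative code, not taken from any proxy; the `X-Idle-Timeout` header is hypothetical):

```python
# Illustrative sketch of the RFC 2616 rule: an aware intermediary drops the
# fixed hop-by-hop set plus any header named in Connection before forwarding.
HOP_BY_HOP = {"connection", "keep-alive", "proxy-authenticate",
              "proxy-authorization", "te", "trailers",
              "transfer-encoding", "upgrade"}

def strip_hop_by_hop(headers: dict) -> dict:
    drop = set(HOP_BY_HOP)
    for token in headers.get("Connection", "").split(","):
        if token.strip():
            drop.add(token.strip().lower())
    return {k: v for k, v in headers.items() if k.lower() not in drop}

# "X-Idle-Timeout" is a hypothetical header made hop-by-hop via Connection:
print(strip_hop_by_hop({"Host": "example.com",
                        "Connection": "X-Idle-Timeout",
                        "X-Idle-Timeout": "60"}))  # {'Host': 'example.com'}
```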
Post by Greg Wilkins
Ideally there could be some non 101 return codes that could be sent
so that a websocket client and a websocket server could have several
rounds of negotiation for things like credentials, timeouts and maybe
even redirections!
That might help with the origin server, but does it help with intermediaries? If intermediaries would normally just pass through a 101, then you cannot tell if an intermediary would time you out faster than your origin server.

Regards,
Maciej
Greg Wilkins
2010-01-30 22:23:53 UTC
Permalink
Post by Maciej Stachowiak
But for starters... let's make the upgrade request not just look like a HTTP request, let's make it a real HTTP request. Then intermediaries and servers would be free to add
new headers and do funky HTTP stuff without needing to involve the browsers.
It doesn't solve the problems, but it enables more standard solutions to them.
It also does not break intermediaries and servers just for giggles.
Post by Maciej Stachowiak
Then for the timeout problem, we actually need the same solution for long polling as for websockets. We need a standard header that expresses the idle timeouts, which can be
checked by all intermediaries and either respected or adjusted by them.
It seems like we need a mechanism for that which is robust in the face of unaware intermediaries. One way I can think of to do that is to encode the information in a hop-by-hop
header. Unaware intermediaries would (I hope) just drop it, while aware intermediaries could read it and update the information. Thus, you only get a header indicating WebSocket
awareness if both the origin server (or client) and all intermediaries are aware, and your idle timeout lower bound would be based on the minimum of all participating endpoints
and intermediaries.
Some set of headers is defined to be hop-by-hop: <http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.5.1>. The Connection header can also be used to cause additional
headers to be treated as hop-by-hop headers: <http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.10>. One thing I don't know is whether deployed intermediaries respect
this. Do they respect the fixed list of hop-by-hop headers? Do they treat headers listed in the Connection header field as hop-by-hop?
I think a hop-by-hop header is exactly what is needed, so that any non-transparent
proxy would have to explicitly copy the timeout value from input to output.

This would still be an issue for transparent proxies that try to not look like
a hop, but at least the argument could be made that the timeout value is there
in the header so they should at least respect it.

So this header will not initially solve the problem, because of non-compliant
intermediaries. However, together with orderly close, it would allow non-compliant
intermediaries to be detected (and new connections with shorter idle timeouts
re-established).
Post by Maciej Stachowiak
Ideally there could be some non 101 return codes that could be sent so that a websocket client and a websocket server could have several rounds of negotiation for things like
credentials, timeouts and maybe even redirections!
That might help with the origin server, but does it help with intermediaries? If intermediaries would normally just pass through a 101, then you cannot tell if an intermediary
would time you out faster than your origin server.
Intermediaries often respond to a request on a server's behalf. They may need
their own authentication, or they may do a redirect themselves.

So in a chain A-B-C, perhaps A-B will first negotiate in a few exchanges
and then A-C will negotiate before the connection is established.

So websocket needs to define what happens when an upgrade request is
responded to with a 401, 302, etc.

regards
Justin Erenkrantz
2010-01-31 00:00:13 UTC
Permalink
Post by Greg Wilkins
Post by Maciej Stachowiak
this. Do they respect the fixed list of hop-by-hop headers? Do they treat headers listed in the Connection header field as hop-by-hop?
I think a hop-by-hop header is exactly what is needed, so that any non transparent
proxy would have to explicitly copy the timeout value from input to output.
While I grok the rationale for hop-by-hop headers, I'd really like to
come to some way to cleanly separate content meta-data from protocol
meta-data. IMO, it makes stuff like caching and intermediaries way
too hard to implement.

Spitballing an idea: if we have some way to multiplex over a single
connection (channels, etc.), then we can have a "protocol"
meta-channel or similar for exchanging hop-by-hop capabilities. --
justin
Greg Wilkins
2010-01-31 00:23:46 UTC
Permalink
Post by Justin Erenkrantz
Post by Greg Wilkins
Post by Maciej Stachowiak
this. Do they respect the fixed list of hop-by-hop headers? Do they treat headers listed in the Connection header field as hop-by-hop?
I think a hop-by-hop header is exactly what is needed, so that any non transparent
proxy would have to explicitly copy the timeout value from input to output.
While I grok the rationale for hop-by-hop headers, I'd really like to
come to some way to cleanly separate content meta-data from protocol
meta-data. IMO, it makes stuff like caching and intermediaries way
too hard to implement.
Spitballing an idea: if we have some way to multiplex over a single
connection (channels, etc.), then we can have a "protocol"
meta-channel or similar for exchanging hop-by-hop capabilities. --
justin
Justin, in this case, the hop-by-hop header is only applicable
to the initial upgrade HTTP request and not to the actual data channel
once established.

I also think there is a need for out-of-band meta data to be
exchanged once the channel is established, but I think that should
be considered separately.

cheers
Justin Erenkrantz
2010-01-31 00:30:39 UTC
Permalink
Justin,  in this case, the hop-by-hop header is only applicable
to the initial upgrade HTTP request and not to the actual data channel
once established.
Ah, yes, if it's confined to the original Upgrade HTTP request, yah,
you can't run away from hop-by-hop headers there.
I also think there is a need for out-of-band meta data to be
exchanged once the channel is established, but I think that should
be considered separately.
Fair 'nuf. -- justin
Maciej Stachowiak
2010-01-31 00:48:20 UTC
Permalink
Post by Justin Erenkrantz
Post by Greg Wilkins
Post by Maciej Stachowiak
this. Do they respect the fixed list of hop-by-hop headers? Do they treat headers listed in the Connection header field as hop-by-hop?
I think a hop-by-hop header is exactly what is needed, so that any non transparent
proxy would have to explicitly copy the timeout value from input to output.
While I grok the rationale for hop-by-hop headers, I'd really like to
come to some way to cleanly separate content meta-data from protocol
meta-data. IMO, it makes stuff like caching and intermediaries way
too hard to implement.
Surely HTTP headers in the handshake are an appropriate place for protocol metadata?
Post by Justin Erenkrantz
Spitballing an idea: if we have some way to multiplex over a single
connection (channels, etc.), then we can have a "protocol"
meta-channel or similar for exchanging hop-by-hop capabilities. --
This doesn't help you with detecting unaware intermediaries, or with letting intermediaries participate if you are going over SSL.

Regards,
Maciej
Justin Erenkrantz
2010-01-31 00:59:18 UTC
Permalink
Post by Maciej Stachowiak
Surely HTTP headers in the handshake are an appropriate place for protocol metadata?
I agree - but, in the current drafts, that's forbidden as you must use
the exact byte sequences specified in the document. You can't add any
HTTP headers...

According to previous posts, this came from the notion that it is
"best" to treat the HTTP/1.1 upgrade request as a "black box" of bytes
rather than as...a real HTTP/1.1 request that defers to that RFC as to
how it should be interpreted. There has been significant pushback to
making it a real HTTP request... And, as Greg has pointed out, by not
permitting any other status codes in a response, it doesn't allow a
server to do primitive SW load-balancing and issue 3xx's to go visit
other servers or do auth before protocol initialization or other
common tricks.
Post by Maciej Stachowiak
Post by Justin Erenkrantz
Spitballing an idea: if we have some way to multiplex over a single
connection (channels, etc.), then we can have a "protocol"
meta-channel or similar for exchanging hop-by-hop capabilities.  --
This doesn't help you with detecting unaware intermediaries, or with letting intermediaries participate if you are going over SSL.
Prior to initialization - correct, but this extra channel was only
about what you do after you have successfully initiated the WS
protocol. -- justin
Maciej Stachowiak
2010-01-31 00:23:31 UTC
Permalink
Post by Greg Wilkins
But for starters... let's make the upgrade request not just look like a HTTP request, let's make it a real HTTP request. Then intermediaries and servers would be free to add
new headers and do funky HTTP stuff without needing to involve the browsers.
It doesn't solve the problems, but it enables more standard solutions to them.
It also does not break intermediaries and servers just for giggles.
I don't know whether the hardcoded handshake format is the right tradeoff overall. But reasons have been given for it, so I don't think it's fair to call it "just for giggles". If you can explain how making the HTTP upgrade handshake more flexible would help solve specific problems, then it would be easier to do a cost-benefit analysis.
Post by Greg Wilkins
It seems like we need a mechanism for that which is robust in the face of unaware intermediaries. One way I can think of to do that is to encode the information in a hop-by-hop header. [...]
I think a hop-by-hop header is exactly what is needed, so that any non transparent
proxy would have to explicitly copy the timeout value from input to output.
Copy or adjust, if its idle timeout is lower than the last server it talked to.
Post by Greg Wilkins
This would still be an issue for transparent proxies that try to not look like
a hop, but at least the argument could be made that the timeout value is there
in the header so they should at least respect it.
Sounds like the hop-by-hop header design is broken in the face of transparent proxies, if they do indeed preserve hop-by-hop headers. Is that truly the case for deployed transparent proxies? How common are they? (The reason I say it breaks the feature is because you'd have no way of detecting whether you are going through an unaware transparent proxy, so you can't tell either whether your path is really clean, or whether the idle timeout you got is accurate.)

It also seems like the hop-by-hop header technique does not work for WebSocket over SSL. That's a pretty serious problem, because I think that's the only case that is likely to work over most unmodified proxies. Any idea how to address the SSL case?
Post by Greg Wilkins
So this header will not initially solve the problem, because of non compliant
intermediaries. However, together with orderly close, it would allow non compliant
intermediaries to be detected (and new connections with shorter idle timeout
re-established).
There are really two problems to be solved:
(a) Detect that your path to the origin server, including all intermediaries, consists solely of WebSocket-aware servers.
(b) Determine some information about the max idle timeout, which is valid if based on (a) you detected that you have a fully aware path.

I don't see how a close handshake would allow unaware intermediaries to be detected. The handshake would presumably be done by the origin server and would not even take the form of an HTTP message. If WebSocket can go through the intermediary at all, then the handshake is likely to go through unmodified. Are you saying that if you don't see the handshake you should assume an intermediary timed you out? That seems like a poor assumption, because any number of network failures could have interrupted a connection. Also, how long do you assume that your path to the client or origin server is going through the same proxies?
Post by Greg Wilkins
Ideally there could be some non 101 return codes that could be sent so that a websocket client and a websocket server could have several rounds of negotiation for things like
credentials, timeouts and maybe even redirections!
That might help with the origin server, but does it help with intermediaries? If intermediaries would normally just pass through a 101, then you cannot tell if an intermediary
would time you out faster than your origin server.
Intermediaries often respond to a request on a servers behalf. They may need
their own authentication or they may do a redirect themselves.
So in a chain A-B-C, perhaps A-B will first negotiate in a few exchanges
and then A-C will negotiate before the connection is established.
So websocket needs to define what happens when a upgrade request is
responded to with a 401, 302 etc.
I don't entirely understand how your proposal would work. But the issue I'm raising is that if the origin server or an intermediary responds with a 101, that doesn't tell you if there were any unaware intermediaries in the path between you and them. So it does not solve the problem of knowing you have buy-in from intermediaries, or computing your idle time-out.

Regards,
Maciej
Greg Wilkins
2010-01-31 00:37:07 UTC
Permalink
Post by Maciej Stachowiak
Post by Greg Wilkins
Post by Maciej Stachowiak
I don't have anything against this suggestion per se, but it doesn't
It doesn't solve the problems, but it enables more standard solutions to them.
It also does not break intermediaries and servers just for giggles.
I don't know whether the hardcoded handshake format is the right
tradeoff overall. But reasons have been given for it, so I don't think
it's fair to call it "just for giggles".
Sorry for my choice of language. Reasons have been given, but I
have seen zero support for those reasons from anybody but the
author.
Post by Maciej Stachowiak
If you can explain how making
the HTTP upgrade handshake more flexible would help solve specific
problems, then it would be easier to do a cost-benefit analysis.
The cost of not having real HTTP for the upgrade request is that
every proxy/server needs to have its code checked and updated
to enforce a very strict ordering of headers and binary equivalence
of sections of the header.

I can see no benefit for doing this.


The cost of having real HTTP is nothing. We have that already.

The benefit of having real HTTP is that we don't need to debate
if HttpOnly cookies are supported or if we can add hop-by-hop
headers etc. Normal existing techniques can be applied
using existing code bases.
Post by Maciej Stachowiak
Post by Greg Wilkins
Post by Maciej Stachowiak
It seems like we need a mechanism for that which is robust in the
face of unaware intermediaries. One way I can think of to do that is
to encode the information in a hop-by-hop header. [...]
I think a hop-by-hop header is exactly what is needed, so that any non transparent
proxy would have to explicitly copy the timeout value from input to output.
Copy or adjust, if its idle timeout is lower than the last server it talked to.
+1
Post by Maciej Stachowiak
Post by Greg Wilkins
This would still be an issue for transparent proxies that try to not look like
a hop, but at least the argument could be made that the timeout value is there
in the header so they should at least respect it.
Sounds like the hop-by-hop header design is broken in the face of
transparent proxies, if they do indeed preserve hop-by-hop headers. Is
that truly the case for deployed transparent proxies? How common are
they? (The reason I say it breaks the feature is because you'd have no
way of detecting whether you are going through an unaware transparent
proxy, so you can't tell either whether your path is really clean, or
whether the idle timeout you got is accurate.)
Transparent proxies are very prevalent. Some countries even mandate
their usage.

There is little we can do about them as they will not participate
in any of the negotiations and they may still close a connection
that they deem idle.

But at least if we do have the timeout in the upgrade/response,
they will be able to observe the timeout and respect it.
Post by Maciej Stachowiak
It also seems like the hop-by-hop header technique does not work for
WebSocket over SSL. That's a pretty serious problem, because I think
that's the only case that is likely to work over most unmodified
proxies. Any idea how to address the SSL case?
That's a tough one! No idea.

But not using SSL as a tunnel for connections that don't need SSL would
be a good start.
Post by Maciej Stachowiak
I don't see how a close handshake would allow unaware intermediaries to
be detected.
If an idle connection is closed without an orderly close conversation, then
a user-agent can suspect that negotiated timeouts were not respected.
It can keep stats on destinations where this happens frequently and heuristically
determine that there is a timeout imposed by an intermediary.

This can't be done without orderly close.
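A sketch of the heuristic described above (all names invented): record the idle time at each close that lacked an orderly close conversation, and infer a suspected intermediary-imposed timeout once a few samples exist.

```python
# Sketch of the heuristic described above (all names invented): record the idle
# time at each close that lacked an orderly close conversation, and infer a
# suspected intermediary-imposed timeout once a few samples exist.
from statistics import median

class TimeoutEstimator:
    def __init__(self):
        self.samples = []  # idle seconds observed at unexpected closes

    def record_close(self, idle_s: float, orderly: bool):
        if not orderly:
            self.samples.append(idle_s)

    def suspected_timeout(self, min_samples: int = 3):
        """Median idle time at unexpected close, or None until enough samples."""
        if len(self.samples) < min_samples:
            return None
        return median(self.samples)

est = TimeoutEstimator()
for idle in (58.0, 61.0, 59.5):           # connections dying just short of 60 s
    est.record_close(idle, orderly=False)
print(est.suspected_timeout())  # 59.5
```

Orderly closes are excluded from the samples, which is exactly why the close handshake matters: without it, a legitimate server-side close is indistinguishable from an intermediary kill.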
Post by Maciej Stachowiak
The handshake would presumably be done by the origin server
and would not even take the form of an http message. If WebSocket can go
through the intermediary at all, then the handshake is likely to go
through unmodified. Are you saying that if you don't see the handshake
you should assume an intermediary timed you out? That seems like a poor
assumption, because any number of network failures could have
interrupted a connection. Also, how long do you assume that your path to
the client or origin server is going through the same proxis?
Sure network failures will be a problem, but only a few samples would
be needed to work out the difference.


I'm not sure we can do better than this. But happy to hear otherwise.
Also I think a partial solution is better than no solution in this case.




cheers
Maciej Stachowiak
2010-01-31 00:58:25 UTC
Permalink
Post by Greg Wilkins
Post by Maciej Stachowiak
I don't know whether the hardcoded handshake format it the right
tradeoff overall. But reasons have been given for it, so I don't think
its fair to call it "just for giggles".
Sorry for my choice of language. Reasons have been given, but I
have seen zero support for those reasons from anybody but the
author.
As I understand it, the reason is security. If you strictly limit the format of the handshake interchange, then it's less likely that WebSocket could be abused to talk to a non-WebSocket server - if you need to trick it into echoing back something very specific, that's a harder problem. It also makes the checks that the handshake was correct simpler, and therefore potentially more robust.

However, from what you said in the rest of your message, it seems like the implementation cost is very high for servers that want to offer WebSocket services on the same host/port as ordinary HTTP services. (For clients it matters less, because they know up front whether a connection is WebSocket or HTTP, so they don't have to make a WebSocket handshake go through a general-purpose HTTP stack.) I'm not sure that's a good tradeoff. Making something hard to implement is itself likely to lead to security problems.

(No time to comment on the rest of your message right now, but I wanted to reply to that point.)

Regards,
Maciej
Jamie Lokier
2010-02-01 01:00:22 UTC
Permalink
Post by Maciej Stachowiak
As I understand it, the reason is security. If you strictly limit the
format of the handshake interchange, then its less likely that
WebSocket could be abused to talk to a non-WebSocket server - if you
need to trick it into echoing back something very specific, that's a
harder problem. It also makes the checks that the handshake was
correct simpler and therefore potentially more robust.
With that goal, it would be better to make the handshake response
*not* valid HTTP, and deliberately choose something that no HTTP
server would produce and no HTTP proxy would be likely to relay.

That would be better for security and for blocking relay through
unaware proxies, both of which are stated goals for the protocol.

-- Jamie
Maciej Stachowiak
2010-02-01 03:08:53 UTC
Permalink
Post by Jamie Lokier
Post by Maciej Stachowiak
As I understand it, the reason is security. If you strictly limit the
format of the handshake interchange, then its less likely that
WebSocket could be abused to talk to a non-WebSocket server - if you
need to trick it into echoing back something very specific, that's a
harder problem. It also makes the checks that the handshake was
correct simpler and therefore potentially more robust.
With that goal, it would be better to make the handshake response
*not* valid HTTP, and deliberately choose something that no HTTP
server would produce and no HTTP proxy would be likely to relay.
Previous versions of the protocol did indeed do things that way, but that seemed unacceptable.
Post by Jamie Lokier
That would be better for security and for blocking relay through
unaware proxies, both of which are stated goals for the protocol.
It would make sharing a port with a Web server impossible. I think that would be too high a cost relative to the benefit. Note also: HTTP resources are not the only ones that could be abused, so it would have to look like nothing that any protocol ever produces.

Side note: Ian reminded me on IRC that while some parts of the handshake are restricted beyond what HTTP allows, the protocol does in fact allow arbitrary additional headers in both the request and response. This reduces the burden on server implementors. But it also seems to reduce the security benefit. In particular, the part of the handshake where the server echoes back the origin is not part of the hardcoded handshake but rather just a normal header. In light of this I'm not sure the fixed header is pulling its weight. It does probably add some amount of protection for resources that suffer header injection attacks - to fake the WebSocket handshake you'd have to inject before the real HTTP response header, and could not rely on injecting in the middle of a response. OTOH just the special status line ("HTTP/1.1 101 Web Socket Protocol Handshake") guarantees this. Would it be reasonable to limit the hardcoded part of the handshake to the status line?
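For illustration: if only the status line were hardcoded, the client-side check would reduce to a single exact comparison against the draft's fixed text (sketch, not any browser's code):

```python
# Illustrative sketch: if only the status line is hardcoded, the client-side
# check reduces to one exact comparison against the draft's fixed text.
EXPECTED_STATUS = b"HTTP/1.1 101 Web Socket Protocol Handshake"

def handshake_status_ok(response: bytes) -> bool:
    """True iff the response starts with the fixed WebSocket status line."""
    return response.split(b"\r\n", 1)[0] == EXPECTED_STATUS

print(handshake_status_ok(b"HTTP/1.1 101 Web Socket Protocol Handshake\r\n\r\n"))  # True
print(handshake_status_ok(b"HTTP/1.1 200 OK\r\n\r\n"))  # False
```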

Allowing arbitrary headers also means that we already have the ability to add per-connection protocol-level metadata without breaking compatibility. In particular, if we introduce future frame types, we could also introduce request and response headers where client and server both report the extra frame types they know. For example, this would let us introduce transparent gzip compression/decompression between the client and UA in a future protocol revision. Likewise it could be used to negotiate transparent multiplexing and large message splitting. This mostly addresses my concerns about future-proofing the protocol for later feature additions.
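A sketch of how such header-based capability negotiation might look: each side advertises the frame types it knows in a hypothetical header, and the usable set is the intersection (header name and tokens are invented, not part of any draft):

```python
# Illustrative sketch of header-based capability negotiation; the header name
# and token values are invented, not part of any draft.

def negotiated_features(client_hdr: str, server_hdr: str) -> set:
    """Intersect the comma-separated feature tokens each side advertised."""
    client = {t.strip() for t in client_hdr.split(",") if t.strip()}
    server = {t.strip() for t in server_hdr.split(",") if t.strip()}
    return client & server

# e.g. a hypothetical "WebSocket-Features: gzip, mux" header on each side:
print(negotiated_features("gzip, mux", "mux, split"))  # {'mux'}
```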

Regards,
Maciej
Greg Wilkins
2010-02-01 04:05:36 UTC
Permalink
Post by Maciej Stachowiak
Side note: Ian reminded me on IRC that while some parts of the handshake are restricted beyond what HTTP allows, the protocol does in fact allow arbitrary additional headers in
both the request and response.
True - and I'm definitely guilty of exaggerating when I say *none* of the feedback here
has been accepted (but it's never been accepted with the message: "that's a good idea, I'll add that...")
Post by Maciej Stachowiak
This reduces the burden on server implementors.
It actually increases the burden. If there are only a few fixed headers, then it's easier
to correctly order and format them. With arbitrary headers, we have to allow existing mechanisms
to add/examine their headers, and then make sure they have not actually broken any websocket
restrictions.
Post by Maciej Stachowiak
But it also seems to reduce the security benefit. In particular, the part of the handshake where
the server echoes back the origin is not part of the hardcoded handshake but rather just a normal header. In light of this I'm not sure the fixed header is pulling its weight.
It does probably add some amount of protection for resources that suffer header injection attacks - to fake the WebSocket handshake you'd have to inject before the real HTTP
response header, and could not rely on injecting in the middle of a response.
But then the sentinel framing of websocket is completely vulnerable to injection attacks.
All websocket endpoints will have to validate that utf-8 data given to them really is utf-8 data.

And I still don't get what protection it is giving. Can you describe a concrete example
of an attack that could happen if arbitrary ordering of headers were allowed?
Post by Maciej Stachowiak
OTOH just the special status line ("HTTP/1.1 101 Web Socket Protocol Handshake") guarantees this.
Would it be reasonable to limit the hardcoded part of the handshake to the status line?
It does not need to be expressed as hardcoded bytes.
It can be expressed as an HTTP response with a status code of 101 and
a reason of "Web Socket Protocol Handshake"

Then if we ever get HTTP/1.2 or HTTP/2.0, websocket will not break!
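A sketch of what "status code + reason, not hardcoded bytes" would mean for a client: parse the status line as HTTP and compare its fields, so that a hypothetical HTTP/1.2 server would still validate. Illustrative code, not from any draft:

```python
def is_websocket_handshake(status_line: bytes) -> bool:
    """Accept any HTTP version, as long as the status code is 101 and
    the reason phrase matches, rather than demanding one exact byte
    sequence ("HTTP/1.1 101 Web Socket Protocol Handshake")."""
    try:
        version, code, reason = status_line.rstrip(b"\r\n").split(b" ", 2)
    except ValueError:
        return False  # malformed status line
    return (version.startswith(b"HTTP/")
            and code == b"101"
            and reason == b"Web Socket Protocol Handshake")
```

A byte-for-byte comparison would reject an `HTTP/1.2` status line; field-wise parsing accepts it while still rejecting anything that is not a 101 handshake.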
Post by Maciej Stachowiak
Allowing arbitrary headers also means that we already have the ability to add per-connection protocol-level metadata without breaking compatibility. In particular, if we
introduce future frame types, we could also introduce request and response headers where client and server both report the extra frame types they know. For example, this would
let us introduce transparent gzip compression/decompression between the client and UA in a future protocol revision. Likewise it could be used to negotiate transparent
multiplexing and large message splitting. This mostly addresses my concerns about future-proofing the protocol for later feature additions.
I agree that a large number of requirements can be met with solutions based on headers in
the upgrade request/ response.

However, I think that sometime additional negotiation might be required, so supporting
other response types like 401, 302 etc may well be beneficial. So all the more
reason to just accept that the upgrade is-a HTTP request/response with additional
websocket criteria.

Moreover, I still believe there are requirements that are going to need meta data to be
able to be sent over an established channel (see my requirements thread).

regards
Justin Erenkrantz
2010-02-01 04:19:17 UTC
Permalink
Post by Maciej Stachowiak
But it also seems to reduce the security benefit.
I've noticed a few mentions so far of "security" as a key driver for
having a hardcoded initialization sequence, but I just can't envision
the tangible security benefits from mandating this.

So, what is the threat model that this mechanism is trying to prevent?
How do these threats differ from other attacks against HTTP? --
justin
Justin Erenkrantz
2010-02-01 05:29:29 UTC
Permalink
Post by Maciej Stachowiak
OTOH just the special status line ("HTTP/1.1 101 Web Socket Protocol Handshake") guarantees this. Would it be reasonable to limit the hardcoded part of the handshake to the status line?
If we are under HTTP/1.1 rules, then there are two points against this:

- The status reason phrase should be treated as completely arbitrary. The
status code is the only relevant bit.

- The protocol actually being upgraded to is indicated in the Upgrade
response header.

HTH. -- justin
Justin Erenkrantz
2010-01-31 00:37:32 UTC
Permalink
Post by Maciej Stachowiak
I don't know whether the hardcoded handshake format is the right tradeoff
overall. But reasons have been given for it, so I don't think it's fair to
call it "just for giggles". If you can explain how making the HTTP upgrade
handshake more flexible would help solve specific problems, then it would be
easier to do a cost-benefit analysis.
For serf (a client library), it can't (easily) produce the exact byte
sequence for the Upgrade request as dictated by the latest drafts
because it has its own optimizations for ordering of the HTTP/1.1
headers and such. So, if the server is looking for a specific byte
sequence, serf isn't going to produce it in anything close to a
reasonable way.

On the server side, I can't even begin to think of the contortions
we'd have to add to httpd to get it to recognize a specific byte
pattern *on port 80*.
Greg Wilkins
2010-01-31 00:43:13 UTC
Permalink
Post by Justin Erenkrantz
On the server side, I can't even begin to think of the contortions
we'd have to add to httpd to get it to recognize a specific byte
pattern *on port 80*.
The difficulty on the server side is that you don't even know it
is a websocket upgrade request until you have already parsed
it (using the lenient parsers that most HTTP servers have).

So you then have to re-parse it to check the exact byte order!

cheers
Maciej Stachowiak
2010-01-30 06:43:11 UTC
Permalink
Post by Greg Wilkins
Because there is no orderly close mechanism, we have to keep
our own close handshake and implement our own acks for our
reliable messaging extension (which currently batches acks
into the equivalent of a long poll.... I guess we could
re-invent TCP inside websocket and do message by message
acks... but what a waste!).
So if you look on the wire of cometd running over websocket,
it just looks like long polling. We use the same /meta/connect
long poll message as a keep alive and as our ack carrier.
I think it's a flaw in the WebSocket protocol that you can't be sure what messages were delivered successfully without inventing your own client-level ACK protocol. This seems like a shame because TCP already has per-message ACKs and reliable delivery, but we're actually losing this capability at the higher level.

I'm curious about a couple of things:

1) Do OS TCP stacks expose enough information to socket-level clients to determine what has definitely been sent when a TCP connection is closed?

2) Is a TCP-level ack sufficient for this purpose? (Or would clients for reliable message delivery want a full end-to-end ack from the receiving application?)


If the answer to both of these questions is "yes", then we only need a change to the client API, not the protocol, to have built-in reliable message delivery. We could just look at what has been ack'd at the TCP level to determine what has been delivered.

If the answer to either is "no", then we should consider whether acks could be added to the protocol. We should probably think about piggybacking them on messages and having some sort of close handshake rather than sending wholly separate ack messages. Also, I'm pretty sure the client needs acks from the server, but is the opposite required? Do servers need to know if the client got a message?

Once we have a strawman proposal for how acks could work, the next question is whether this needs to be in the 1.0 version of the protocol, or whether there is a clean and compatible way to add it in a later revision.

Regards,
Maciej
Justin Erenkrantz
2010-01-30 07:10:35 UTC
Permalink
Post by Maciej Stachowiak
I think it's a flaw in the WebSocket protocol that you can't be sure what messages were delivered successfully without inventing your own client-level ACK protocol. This seems like a shame because TCP already has per-message ACKs and reliable delivery, but we're actually losing this capability at the higher level.
1) Do OS TCP stacks expose enough information to socket-level clients to determine what has definitely been sent when a TCP connection is closed?
2) Is a TCP-level ack sufficient for this purpose? (Or would clients for reliable message delivery want a full end-to-end ack from the receiving application?)
It depends upon what level of "reliability" you are looking for. If
you are aiming for the "common" case, the answer to both is "yes".

However, edge cases make the answer "no" - it is quite possible to
have "lost" responses that a server actually sends, but the client
will never see. Some versions of httpd would aggressively close the
server-side socket "too soon" so that the FIN/ACK cycle happens too
soon and unread data is lost - so httpd has a bunch of logic to hold
off on the close() call until a set time has elapsed so as to minimize
this risk. (See
https://issues.apache.org/bugzilla/show_bug.cgi?id=35292 for one
description.)

The explicit channel close semantics and ACK process of BWTP are, IMO,
a particularly elegant solution here.

HTH. -- justin
Maciej Stachowiak
2010-01-30 07:25:05 UTC
Permalink
Post by Justin Erenkrantz
Post by Maciej Stachowiak
I think it's a flaw in the WebSocket protocol that you can't be sure what messages were delivered successfully without inventing your own client-level ACK protocol. This seems like a shame because TCP already has per-message ACKs and reliable delivery, but we're actually losing this capability at the higher level.
1) Do OS TCP stacks expose enough information to socket-level clients to determine what has definitely been sent when a TCP connection is closed?
2) Is a TCP-level ack sufficient for this purpose? (Or would clients for reliable message delivery want a full end-to-end ack from the receiving application?)
It depends upon what level of "reliability" you are looking for. If
you are aiming for the "common" case, the answer to both is "yes".
However, edge cases make the answer "no" - it is quite possible to
have "lost" responses that a server actually sends, but the client
will never see.
So you could lose messages, but can you at least tell, in this case, that they are not yet guaranteed to have been delivered?
Post by Justin Erenkrantz
Some versions of httpd would aggressively close the
server-side socket "too soon" so that the FIN/ACK cycle happens too
soon and unread data is lost - so httpd has a bunch of logic to hold
off on the close() call until a set time has elapsed so as to minimize
this risk. (See
https://issues.apache.org/bugzilla/show_bug.cgi?id=35292 for one
description.)
The explicit channel close semantics and ACK process of BWTP is, IMO,
a particularly elegant solution here.
It seems like a WebSocket-level close handshake would only solve part of the problem - you also need to be able to deal with an interruption of service that prematurely breaks the connection, and ideally you would have some better guarantee in that case than just assuming all messages are lost. Does that also need provision at the protocol layer, or could it just piggyback on TCP-level acks?

Regards,
Maciej
Justin Erenkrantz
2010-01-30 07:33:56 UTC
Permalink
Post by Maciej Stachowiak
It depends upon what level of "reliability" you are looking for. If
you are aiming for the "common" case, the answer to both is "yes".
However, edge cases make the answer "no" - it is quite possible to
have "lost" responses that a server actually sends, but the client
will never see.
So you could lose messages, but can you at least tell, in this case, that they are not yet guaranteed to have been delivered?
No, not really - the client simply thinks the server close()'d the
connection but it has no way of knowing there were other data packets
that the server really meant for the client to see. Correspondingly,
the server did everything in the right order - it wrote all the data
it expected and then it close()'d the socket. Yet...oops.

I don't know how much code Jetty has to deal with lingering close, but
httpd has an embarrassingly large amount of code to deal with this
situation.
Post by Maciej Stachowiak
It seems like a WebSocket-level close handshake would only solve part of the problem - you also need to be able to deal with an interruption of service that prematurely breaks the connection, and ideally you would have some better guarantee in that case than just assuming all messages are lost. Does that also need provision at the protocol layer, or could it just piggyback on TCP-level acks?
Like Greg, I think orderly close is about as good as you can do.
Interruption of service is always going to be a possibility (power
failure, router outages, etc.) - at least if orderly close is
explicitly part of the protocol, then if it doesn't happen, then the
client knows something went awry and then it can deal with it as best
as it can. Currently, in HTTP, you can't tell the difference between
an orderly close and a "oh, no, something bad happened". I think
providing that type of hint would be a big step forward - especially
when async messages are involved. -- justin
Maciej Stachowiak
2010-01-30 07:47:50 UTC
Permalink
Post by Justin Erenkrantz
Post by Maciej Stachowiak
Post by Justin Erenkrantz
It depends upon what level of "reliability" you are looking for. If
you are aiming for the "common" case, the answer to both is "yes".
However, edge cases make the answer "no" - it is quite possible to
have "lost" responses that a server actually sends, but the client
will never see.
So you could lose messages, but can you at least tell, in this case, that they are not yet guaranteed to have been delivered?
No, not really - the client simply thinks the server close()'d the
connection but it has no way of knowing there were other data packets
that the server really meant for the client to see. Correspondingly,
the server did everything in the right order - it wrote all the data
it expected and then it close()'d the socket. Yet...oops.
Presumably the server could know that at least all the packets ACK'd at the TCP level have been successfully delivered, right? So I assume the only problem is the remaining packets after that, if you don't do a lingering close.

In this case things are a bit more complicated because either the client or the server could be transmitting at any time, and either could choose to close the connection at any time, and either side may want to know if some of its messages are not guaranteed to be delivered.
Post by Justin Erenkrantz
I don't know how much code Jetty has to deal with lingering close, but
httpd has an embarrassingly large amount of code to deal with this
situation.
I would like to understand the lingering close issue better. Does it consist of waiting for TCP ACKs for all your packets before closing the TCP connection?
Post by Justin Erenkrantz
Post by Maciej Stachowiak
It seems like a WebSocket-level close handshake would only solve part of the problem - you also need to be able to deal with an interruption of service that prematurely breaks the connection, and ideally you would have some better guarantee in that case than just assuming all messages are lost. Does that also need provision at the protocol layer, or could it just piggyback on TCP-level acks?
Like Greg, I think orderly close is about as good as you can do.
Interruption of service is always going to be a possibility (power
failure, router outages, etc.) - at least if orderly close is
explicitly part of the protocol, then if it doesn't happen, then the
client knows something went awry and then it can deal with it as best
as it can. Currently, in HTTP, you can't tell the difference between
an orderly close and a "oh, no, something bad happened". I think
providing that type of hint would be a big step-forward - especially
when async messages are involved. -- justin
I think you can do better than just orderly close. Either from TCP-level acks or from WebSocket-protocol-level acks, you could tell that some number of your messages have definitely been delivered, even in the face of a service interruption. Right?

Maybe I'm thinking of reliable message delivery differently than you, but I assumed a major goal would be to know what might need to be retransmitted even if there is an unexpected disconnect.

Regards,
Maciej
Jamie Lokier
2010-01-30 14:49:36 UTC
Permalink
Post by Maciej Stachowiak
Post by Justin Erenkrantz
Post by Maciej Stachowiak
Post by Justin Erenkrantz
It depends upon what level of "reliability" you are looking for. If
you are aiming for the "common" case, the answer to both is "yes".
However, edge cases make the answer "no" - it is quite possible to
have "lost" responses that a server actually sends, but the client
will never see.
So you could lose messages, but can you at least tell, in this case, that they are not yet guaranteed to have been delivered?
No, not really - the client simply thinks the server close()'d the
connection but it has no way of knowing there were other data packets
that the server really meant for the client to see. Correspondingly,
the server did everything in the right order - it wrote all the data
it expected and then it close()'d the socket. Yet...oops.
Presumably the server could know that at least all the packets ACK'd
at the TCP level have been successfully delivered, right? So I
assume the only problem is the remaining packets after that, if you
don't do a lingering close.
No.

The server sends a TCP RST when it recieves data after it has called
close().

That TCP RST causes the client to *discard* data it has previously
received and ACK'd at the TCP level. The client application does not
see that data, if it hasn't already read it from the OS.

The client should get a socket error, but that's not very useful.
Depending on how the client is used, the practical effect is sometimes
a truncated message, or missing messages.

This is why Apache must implement a rather complicated "lingering
close", which uses shutdown(SHUT_WR) instead of close(), and then
reads and discards any further data received from the HTTP client.

HTTP servers (and proxies) which don't do this are prone to unreliable
response delivery if the client sends any more data, such as a
pipelined request. It only happens under some network and load
conditions, and with some clients, and some configurations, which is
why there are a lot of implementations that get it wrong.

Some applications piggybacked on WebSocket look likely to get it wrong
and suffer this problem in corner cases.
Post by Maciej Stachowiak
I would like to understand the lingering close issue better. Does it consist of waiting for TCP ACKs for all your packets before closing the TCP connection?
No, it consists of calling shutdown(SHUT_WR) immediately, and then
reading and discarding whatever the client sends until you receive a
client-side close (read() returns EOF), or you think you have waited
long enough for the client application to have read the response
(e.g. 2 minutes).
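As a rough sketch of that procedure (the timeout and buffer size are arbitrary choices for illustration, not anything Apache mandates):

```python
import socket

def lingering_close(sock: socket.socket, timeout: float = 2.0) -> None:
    """Half-close the write side first, then drain whatever the peer
    sends until EOF or a timeout, so queued response data is not
    destroyed by a RST from close()-ing with unread input pending."""
    sock.shutdown(socket.SHUT_WR)   # send our FIN; keep the read side open
    sock.settimeout(timeout)
    try:
        while sock.recv(4096):      # read and discard any late peer data
            pass                    # loop until recv() returns EOF (b"")
    except socket.timeout:
        pass                        # waited long enough; give up
    finally:
        sock.close()
```

The heuristic timeout is exactly the weakness Jamie describes: if the peer neither closes nor goes quiet in time, the server still gives up with no guarantee the response was read.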

Due to the time heuristic, and the client's internal delays, there is
no guarantee that the client application will actually have received
the response, but it is ok in practice with normal HTTP clients.
Post by Maciej Stachowiak
I think you can do better than just orderly close. Either from
TCP-level acks or from WebSocket-protocol-level acks, you could tell
that some number of your messages have definitely been delivered,
even in the face of a service interruption. Right?
I agree.
Post by Maciej Stachowiak
Maybe I'm thinking of reliable message delivery differently than
you, but I assumed a major goal would be to know what might need to
be retransmitted even if there is an unexpected disconnect.
Yes, that is a major one, because you often want automatic
retransmission when possible. (See: HTTP pipelining problems).

Unexpected disconnects can happen for many reasons, including the
network itself, which the endpoints have no control over. E.g. NAT router
resets (happens daily on one network I'm aware of).

Orderly close does not help with network-level disconnects, so other
techniques like duplicate elimination are valuable too.

-- Jamie
Justin Erenkrantz
2010-01-30 23:52:50 UTC
Permalink
Post by Jamie Lokier
Orderly close does not help with network-level disconnects, so other
techniques like duplicate elimination are valuable too.
True, but I think it's important that we - as a group - crawl before we run.

I'd like to see consensus that orderly close is important and should
be added and how we go about doing so in a sane way.

If we can't even get buy-in for that, then other more advanced
optimization techniques are likely to go nowhere as well. -- justin
Salvatore Loreto
2010-02-01 19:33:49 UTC
Permalink
just to bring some order to this particular thread discussion

as I understand it, two different problems have arisen

1) "safely" shutting down a websocket connection.
The possibility of losing data while closing a TCP connection is a
well-known problem,
as has been discussed and described in this thread.

Some people think there is value in adding a graceful shutdown of the
websocket connection to the spec.

However I haven't seen a clear consensus on it.

So please, if you have an opinion on this, speak up!!!


2) what happens if/when the connection is lost; this can happen for
several different reasons:
e.g. a NAT restarting, the mobile terminal going out of network
coverage, etc.

here we have several different sub-problems in my opinion:

2.1) how to detect as fast as possible that the connection has been lost

2.2) what to do after having reconnected.

Are these something people think is important to spend cycles on??


cheers
/Sal
Post by Justin Erenkrantz
Post by Jamie Lokier
Orderly close does not help with network-level disconnects, so other
techniques like duplicate elimination are valuable too.
True, but I think it's important that we - as a group - crawl before we run.
I'd like to see consensus that orderly close is important and should
be added and how we go about doing so in a sane way.
If we can't even get buy-in for that, then other more advanced
optimization techniques are likely to go nowhere as well. -- justin
_______________________________________________
hybi mailing list
https://www.ietf.org/mailman/listinfo/hybi
Greg Wilkins
2010-02-01 20:18:10 UTC
Permalink
Post by Salvatore Loreto
just to bring some order to this particular thread discussion
as I understand it, two different problems have arisen
1) "safely" shutting down a websocket connection.
The possibility of losing data while closing a TCP connection is a
well-known problem,
as has been discussed and described in this thread.
Some people think there is value in adding a graceful shutdown of the
websocket connection to the spec.
However I haven't seen a clear consensus on it.
So please, if you have an opinion on this, speak up!!!
I've spoken enough in support of orderly close... so I 'll let others
speak up here.
Post by Salvatore Loreto
2) what happens if/when the connection is lost; this can happen for
e.g. a NAT restarting, the mobile terminal going out of network coverage,
etc.
2.1) how to detect as fast as possible that the connection has been lost
2.2) what to do after having reconnected.
Are these something people think is important to spend cycles on??
Yes.

I think that it is very important for an endpoint and/or application to be
able to distinguish between:

a) orderly close in response to an application request, in which case
the app probably should not attempt to re-establish.

b) close in response to an error (message too large or similar),
in which case the app/endpoint should refrain from any
retries it might otherwise have attempted.

c) close in response to an idle timeout.
in which case the endpoint should not recreate the connection.
But an application might, if presence is required.

d) unexpected close due to some failure (or undisclosed intermediary
timeout). In this case an application may wish to retransmit
messages (either all recent idempotent messages or un-acked
non-idempotent messages).

e) inability to open a connection due to no connectivity. In this
case an app might do a retry at regular intervals

f) inability to open a connection due to permission denial from the
server. In this case, the app probably wants to alert the user
of the error.
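Greg's taxonomy maps fairly directly onto an application-side reconnect policy. The cause names and policy strings below are illustrative assumptions, not proposed protocol values:

```python
# Hypothetical mapping from close cause (per Greg's items a-f) to an
# application-level reconnect policy. Names are for illustration only.
RECONNECT_POLICY = {
    "app_close":       "none",           # (a) orderly, app-requested close
    "protocol_error":  "none",           # (b) error close: do not retry
    "idle_timeout":    "on_demand",      # (c) reconnect only if presence needed
    "unexpected":      "retransmit",     # (d) reconnect and replay un-acked msgs
    "no_connectivity": "retry_backoff",  # (e) retry at regular intervals
    "denied":          "alert_user",     # (f) surface the error to the user
}

def on_close(cause: str) -> str:
    """Pick a policy; treat unknown causes like a network failure."""
    return RECONNECT_POLICY.get(cause, "retry_backoff")
```

The point of the sketch is Greg's: the endpoint can classify the close, but only the application knows enough about its messages to choose and execute the policy.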


I think that the endpoint code will be able to do very little
of the handling of reconnections and retries, simply because
it does not know enough about the messages. So the application
will have to decide about reconnects and retransmits, but it
currently does not have the information needed to make those
decisions.


I think fail fast is also desirable, but I'm not sure it is so
desirable that I'd advocate a regular keep-alive message just for
failure detection. But if keep-alive messages are required to
keep connections open through transparent proxies, then it could
be 2 birds with 1 stone.

cheers
Anne van Kesteren
2010-02-01 22:25:49 UTC
Permalink
On Mon, 01 Feb 2010 20:33:49 +0100, Salvatore Loreto
Post by Salvatore Loreto
1) "safely" shutting down a websocket connection.
The possibility of losing data while closing a TCP connection is a
well-known problem,
as has been discussed and described in this thread.
Some people think there is value in adding a graceful shutdown of the
websocket connection to the spec.
However I haven't seen a clear consensus on it.
So please, if you have an opinion on this, speak up!!!
We discussed this on IRC earlier today. The idea is to introduce a frame
for closing. If A wants to close the connection it transmits 0xFF 0x00.
Once B receives that and has no more messages to transmit, it transmits
0xFF 0x00 as well. When A receives that it knows it can close the
connection.

This would not be required of servers however as there are scenarios (e.g.
news tickers) where it does not really matter whether everything has been
received by the client.
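In terms of bytes on the wire, the proposed close frame is just the two octets described above. A minimal sketch of recognizing it (the frame-parsing details beyond 0xFF 0x00 are assumptions about the draft framing):

```python
CLOSE_FRAME = b"\xff\x00"  # 0xFF frame type followed by a zero length

def is_close_frame(frame_type: int, payload: bytes) -> bool:
    """A close is a 0xFF frame carrying an empty body."""
    return frame_type == 0xFF and payload == b""

# Endpoint A sends CLOSE_FRAME; when B has nothing left to transmit it
# echoes CLOSE_FRAME back, and only then does A close the TCP connection.
```

The echo requirement is what turns this into an orderly close: A knows B has consumed everything sent before the close, which raw TCP close() cannot guarantee.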
--
Anne van Kesteren
http://annevankesteren.nl/
Greg Wilkins
2010-01-30 07:22:12 UTC
Permalink
Note that I don't think we really want per message acks as the
default.

If we just had orderly shutdown of a websocket connection, then
we would know that all messages sent before the shutdown had been
delivered.

If the connection closed suddenly we would then have the possibility
of message loss. Sometimes it might be good enough just to know that.
Sometimes we might just replay the last few messages on a new connection,
other times we might need to ask the other end for a sequence number, etc.

Orderly close is the starting point for all of these.
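One way the "replay the last few messages" option could look, sketched with explicit application-level sequence numbers (this is application machinery layered on top, nothing the draft defines):

```python
class ReplayBuffer:
    """Keep sent messages until the peer confirms a sequence number,
    so the unconfirmed tail can be retransmitted after an unclean
    disconnect instead of being silently lost."""

    def __init__(self):
        self.next_seq = 1
        self.unacked = {}  # seq -> message

    def send(self, message: str) -> int:
        """Record a message and return its sequence number."""
        seq = self.next_seq
        self.next_seq += 1
        self.unacked[seq] = message
        return seq

    def ack_through(self, seq: int) -> None:
        """Peer reports the highest sequence number it has received;
        everything at or below it can be forgotten."""
        for s in list(self.unacked):
            if s <= seq:
                del self.unacked[s]

    def replay(self):
        """Messages to resend on a new connection, oldest first."""
        return [self.unacked[s] for s in sorted(self.unacked)]
```

Note this only handles at-least-once delivery; as Jamie points out elsewhere in the thread, duplicate elimination on the receiving side is the other half of the problem.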

cheers
Post by Maciej Stachowiak
Post by Greg Wilkins
Because there is no orderly close mechanism, we have to keep
our own close handshake and implement our own acks for our
reliable messaging extension (which currently batches acks
into the equivalent of a long poll.... I guess we could
re-invent TCP inside websocket and do message by message
acks... but what a waste!).
So if you look on the wire of cometd running over websocket,
it just looks like long polling. We use the same /meta/connect
long poll message as a keep alive and as our ack carrier.
I think it's a flaw in the WebSocket protocol that you can't be sure what messages were delivered successfully without inventing your own client-level ACK protocol. This seems like a shame because TCP already has per-message ACKs and reliable delivery, but we're actually losing this capability at the higher level.
1) Do OS TCP stacks expose enough information to socket-level clients to determine what has definitely been sent when a TCP connection is closed?
2) Is a TCP-level ack sufficient for this purpose? (Or would clients for reliable message delivery want a full end-to-end ack from the receiving application?)
If the answer to both of these questions is "yes", then we only need a change to the client API, not the protocol, to have built-in reliable message delivery. We could just look at what has been ack'd at the TCP level to determine what has been delivered.
If the answer to either is "no", then we should consider whether acks could be added to the protocol. We should probably think about piggybacking them on messages and having some sort of close handshake rather than sending wholly separate ack messages. Also, I'm pretty sure the client needs acks from the server, but is the opposite required? Do servers need to know if the client got a message?
Once we have a strawman proposal for how acks could work, the next question is whether this needs to be in the 1.0 version of the protocol, or whether there is a clean and compatible way to add it in a later revision.
Regards,
Maciej
Jamie Lokier
2010-01-30 14:10:20 UTC
Permalink
Post by Maciej Stachowiak
1) Do OS TCP stacks expose enough information to socket-level
clients to determine what has definitely been sent when a TCP
connection is closed?
In general, no they don't. Linux does, but even then you have to poll
the statistics rather than waiting for an ack event.
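The Linux statistic Jamie refers to can be polled with the SIOCOUTQ ioctl, which for TCP reports bytes written by the application but not yet ACK'd by the peer's stack. A Linux-only sketch (and, as noted, this is polling, not an ack event):

```python
import fcntl
import socket
import struct
import termios

SIOCOUTQ = termios.TIOCOUTQ  # same ioctl number (0x5411) on Linux

def unacked_bytes(sock: socket.socket) -> int:
    """Bytes still in the TCP send queue (unsent or un-ACK'd).
    0 means everything written so far has been acknowledged -
    by the peer's kernel, not necessarily read by its application."""
    buf = fcntl.ioctl(sock.fileno(), SIOCOUTQ, struct.pack("i", 0))
    return struct.unpack("i", buf)[0]
```

The docstring's caveat is the crux of Jamie's point: even a TCP-level ACK only proves the remote kernel buffered the data, so it cannot substitute for an end-to-end application ack.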

There is a related issue, which is evident in Apache's
code and is a flaw in HTTP we'd do well not to repeat.
That is: It's not safe to close
Post by Maciej Stachowiak
2) Is a TCP-level ack sufficient for this purpose? (Or would clients
for reliable message delivery want a full end-to-end ack from the
receiving application?)
No, and yes. The reasons are numerous:

- There are many kinds of TCP-level intermediaries
which listen on a port, and for each incoming connection,
create an outgoing connection elsewhere and simply relay
octets both ways. They do not interpret HTTP at all.

+ For example, SSH is sometimes used to establish the above
type of proxy; in SSH it is called tunnelling. There
is other software which tunnels in this way.

+ Some routing/forwarding techniques, and some bandwidth
management devices do it.

+ Ironically, sometimes it is used to avoid an intercepting HTTP
proxy that is causing problems. :-)

+ Some mobile / wireless / satellite system relays do the
same thing, because it allows different TCP algorithms to
be used for the wireless link, which are better suited
to the channel characteristics (e.g. high loss, fading).

- Forwarding over HTTP proxies, for example using CONNECT on port
443 is described in the WebSocket draft.

- Forwarding over HTTP proxies that implement WebSocket
detection and forwarding. Not yet, but to be expected if
it's deployed.

- Forwarding by HTTP proxies that switch into "tunnelling mode"
when they see something they cannot parse. I am told these exist,
because they are more reliable than strict proxies and it's needed
due to a certain amount of badly non-compliant HTTP out there
(e.g. header names containing spaces and quote marks).

Some of these are intercepting "invisible"/"stealth" proxies and
do not insert Via headers. These proxies may end up forwarding
WebSocket despite its attempt to detect them.

- Data that has been acknowledged by TCP ACK is not sent to the
application under some circumstances:

+ The application calls close() or shutdown(SHUT_RD) between the
data being sent and before it has read the data. This
window is unavoidable, because OSes generally don't allow
applications to read any data that was buffered at that
precise moment.

+ The network facing component crashes, is terminated,
is taken offline etc. For example, restarting cometd.
Even though communication is nominally with another
component *behind* the network facing one, which is still
running. This is just another kind of proxy.

-- Jamie
Jamie Lokier
2010-01-30 15:12:11 UTC
Permalink
Post by Jamie Lokier
That is: It's not safe to close
Edit fart there... That issue has been addressed elsewhere in this
thread - under "lingering close".
Post by Jamie Lokier
Post by Maciej Stachowiak
2) Is a TCP-level ack sufficient for this purpose? (Or would clients
for reliable message delivery want a full end-to-end ack from the
receiving application?)
In a nutshell, don't try to make reliable things depend on TCP
behaviour. Treat TCP as a transport layer only.

Nowadays, on the "web user" Internet anyway, you can't depend on a
connection remaining open if the endpoints continue working; not even
with keepalives. NATs occasionally lose state; route changes break
connections when NATs are involved; mobile connections drop every few
minutes in some areas; IP address may even change every few minutes.

In many ways, TCP was more dependable for things like long-lived
connections in ye olde days.

Optimisation heuristics like splitting messages at probable MSS
boundaries, and timing keepalives to piggyback on ACKs, are ok, because
nothing fails when they are wrong.

-- Jamie
Maciej Stachowiak
2010-01-30 06:49:34 UTC
Permalink
Here's comments on some of the issues that did not seem to me to merit their own thread.
Post by Greg Wilkins
It was a little silly having to implement two framing mechanisms
when only one of them has an API to enable its usage.
This is a little lame, but it will ultimately be fixed at the API level once we have a good way to represent binary data on the client side.
Post by Greg Wilkins
So if you look on the wire of cometd running over websocket,
it just looks like long polling. We use the same /meta/connect
long poll message as a keep alive and as our ack carrier.
We've saved 1 connection, which is great. But I fear that
saving will be eaten up by the ability of application developers
to open any number of websockets.
Doesn't it also save you the need to potentially close and reopen the connection that would have been used for messages from the client to the server? It seems like proper full duplex is a significant win, especially over SSL where connection setup is very expensive.
Post by Greg Wilkins
If we are not running in reliable message mode, then we don't
need to wait for a /meta/connect before sending a message
to the client, so we get a small improvement in maximal
message latency for current usage and a good improvement
for streaming usage.
Wouldn't you also get an improvement in throughput, and not just latency, for streaming since you no longer need to repeatedly send full HTTP headers?
Post by Greg Wilkins
But as I've frequently said, it works ok, but it solves
few of my real pain points as comet vendor and it's
caused me more code/complexity not less.
I'm curious to hear about some of the pain points that you think are not addressed. You mentioned two (reliable message delivery and maintaining idle connections) and I think you're right that those should be addressed. Any others?
Post by Greg Wilkins
It's not provided any semantics that would allow
any cometd users to consider going directly to websockets
instead of using a comet framework. Sure it makes sending
messages easy, but that is always easy. It does not help
for when you can't send messages or when connections drop
or servers change etc. These are all the realworld things
that must be dealt with and ws makes this harder not easier.
What kind of changes do you think would make it more practical to use WebSocket directly rather than go through a framework?

Regards,
Maciej
Greg Wilkins
2010-01-30 07:13:49 UTC
Permalink
Post by Maciej Stachowiak
Here's comments on some of the issues that did not seem to me to merit their own thread.
Post by Greg Wilkins
It was a little silly having to implement two framing mechanisms
when only one of them has an API to enable its usage.
This is a little lame, but it will ultimately be fixed at the API level once we have a good way to represent binary data on the client side.
Note that there are uses for this protocol beyond javascript in browsers.
Improvements in the protocol should not have to wait for improvements in
just the js API.

Note also that the binary frame is more than capable of carrying the
UTF-8 data, so it would have been entirely possible to have just the
binary framing and not the sentinel framing mechanism with
all its associated injection and buffer-overrun issues.
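For concreteness, here is a rough sketch of the two framing schemes being compared, as I read the hixie draft discussed in this thread: sentinel-delimited text frames (0x00, payload, 0xFF) versus length-prefixed binary frames (a type byte with the high bit set, then the length in base-128 groups). This is an informal reading, not a conformant implementation.

```python
def frame_text(payload_utf8: bytes) -> bytes:
    """Sentinel framing: 0x00, the raw UTF-8 bytes, then a 0xFF terminator."""
    return b"\x00" + payload_utf8 + b"\xff"

def frame_binary(payload: bytes) -> bytes:
    """Length-prefixed framing: type byte 0x80, then the payload length in
    base-128 groups, high bit set on every group except the last."""
    groups, n = [], len(payload)
    while True:
        groups.append(n & 0x7F)
        n >>= 7
        if n == 0:
            break
    # groups[] is least-significant first; emit most-significant first
    prefix = bytes(g | 0x80 for g in reversed(groups[1:])) + bytes([groups[0]])
    return b"\x80" + prefix + payload
```

Greg's point falls out directly: `frame_binary` carries arbitrary bytes, UTF-8 included, so the sentinel scheme adds nothing the length-prefixed one cannot do.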
Post by Maciej Stachowiak
Post by Greg Wilkins
So if you look on the wire of cometd running over websocket,
it just looks like long polling. We use the same /meta/connect
long poll message as a keep alive and as our ack carrier.
We've saved 1 connection, which is great. But I fear that
saving will be eaten up by the ability of application developers
to open any number of websockets.
Doesn't it also save you the need to potentially close and reopen the connection that would have been used for messages from the client to the server? It seems like proper full duplex is a significant win, especially over SSL where connection setup is very expensive.
No. HTTP/1.1 keeps the connection open between long poll
requests.

long polling gives full duplex over two TCP/IP connections.
Post by Maciej Stachowiak
Post by Greg Wilkins
If we are not running in reliable message mode, then we don't
need to wait for a /meta/connect before sending a message
to the client, so we get a small improvement in maximal
message latency for current usage and a good improvement
for streaming usage.
Wouldn't you also get an improvement in throughput, and not just latency, for streaming since you no longer need to repeatedly send full HTTP headers?
A few use-cases that stream large volumes of nearly
continuous data will get greater throughput.

But for many many comet applications (eg chat, auctions,
monitoring), the events are far enough apart and small enough
that in most cases there is a waiting long poll and the
response fits in a single MTU. For these use cases,
websockets only helps with the minority of events that
occur when there is not a long poll waiting, thus it
really only helps the maximal latency.
Post by Maciej Stachowiak
Post by Greg Wilkins
But as I've frequently said, it works ok, but it solves
few of my real pain points as comet vendor and it's
caused me more code/complexity not less.
I'm curious to hear about some of the pain points that you think are not addressed. You mentioned two (reliable message delivery and maintaining idle connections) and I think you're right that those should be addressed. Any others?
For the scaling of the web applications that I work with,
connections have often been the limiting factor.

Websockets have no limits on the number of connections
that an application can open, thus no limit on the
amount of server side resources that a client side
developer can requisition.

I've previously explained in detail how a widget vendor
might find some better performance by opening multiple
websockets, so that they get a greater share of the
bandwidth available from a server. But that only
works if you're the only one doing it. Soon everybody
will be opening 4, 8, 16 connections etc.


working with load balancers and other intermediaries
that need out-of-band communication with the server
is another.
Post by Maciej Stachowiak
Post by Greg Wilkins
It's not provided any semantics that would allow
any cometd users to consider going directly to websockets
instead of using a comet framework. Sure it makes sending
messages easy, but that is always easy. It does not help
for when you can't send messages or when connections drop
or servers change etc. These are all the realworld things
that must be dealt with and ws makes this harder not easier.
What kind of changes do you think would make it more practical to use WebSocket directly rather than go through a framework?
I actually fear the opposite.

It's like every book on Ajax starts with a chapter about how to
program directly to XHR. So app developers go off and program
directly to XHR and get themselves into all sorts of strife.
Any seasoned Ajax developer will always tell you to have some
kind of framework wrapping XHR.

The same is going to happen with websockets. Programmers
are going to see books/publicity about it and start using
it directly. Websocket is really easy to use and they will
soon have working bidirectional applications.

But working bidirectional applications are easy even with
simple HTTP. The hard thing is how to make an applications
that fail well, or handle a laggy intermittent mobile network,
or can cope with strange intermediaries etc. Websocket
provides a solution for very few of the problems that you
get doing bidirectionality over HTTP. So programmers are
just going to go out and make a whole bunch of crappy
applications that don't work well in the real world
and we'll be picking up the pieces for years to come.

regards
Justin Erenkrantz
2010-01-30 07:25:01 UTC
Permalink
Post by Greg Wilkins
Note that there are uses for this protocol beyond javascript in browsers.
Improvements in the protocol should not have to wait for improvements in
just the js API.
Agreed - the motivating factors on my end for an async protocol have
applications in mind that look nothing like browsers. (Autonomous
agents is probably the shortest description that gets you in the
ballpark.)

Again, this is why the current draft is so impenetrable (to me) since
it expects that the only person implementing the draft is a client
vendor using synchronous socket methods...which sort of defeats the
purpose when you are trying to write a fully async client...or any
type of server. -- justin
Maciej Stachowiak
2010-01-30 07:42:50 UTC
Permalink
Post by Justin Erenkrantz
Post by Greg Wilkins
Note that there are uses for this protocol beyond javascript in browsers.
Improvements in the protocol should not have to wait for improvements in
just the js API.
Agreed - the motivating factors on my end for an async protocol have
applications in mind that look nothing like browsers. (Autonomous
agents is probably the shortest description that gets you in the
ballpark.)
The problem here isn't lack of protocol support for binary frames (as far as I can tell), it's the fact that they are unsupported in the proposed WebSocket client API for browsers. Am I misunderstanding? I see a full definition of binary protocol frames in <http://tools.ietf.org/html/draft-hixie-thewebsocketprotocol-68>.

So, unless I'm misunderstanding, you can send binary frames to your heart's delight if you are not relying on JavaScript code running in a browser. Eventually you'll be able to do this from JavaScript too, once we have worked out how to manage binary data.

The only issue with the protocol here is that there are two kinds of framing. But that's not a limitation on the ability to use binary data in non-browser clients, nor is it a cause or effect of limitations in the client API. It seems like a matter of judgment whether it's better to have two kinds of framing or one. The SPDY team concluded that it's best for performance to use length-encoded framing for everything, which makes me wonder if that lesson applies to the WebSocket protocol as well.
Post by Justin Erenkrantz
Again, this is why the current draft is so impenetrable (to me) since
it expects that the only person implementing the draft is a client
vendor using synchronous socket methods...which sort of defeats the
purpose when you are trying to write a fully async client...or any
type of server. -- justin
I'm not sure what you mean by "synchronous socket methods". As far as I can tell, none of the WebSocket API or protocol spec requires clients to do synchronous networking via the client API, and the implementations I am aware of certainly do not do so. Can you clarify what you mean?

Regards,
Maciej
Greg Wilkins
2010-01-30 22:35:25 UTC
Permalink
Post by Maciej Stachowiak
Note that there are uses for this protocol beyond javascript in browsers. Improvements in the protocol should not have to wait for improvements in just the js API.
Agreed - the motivating factors on my end for an async protocol have applications in mind that look nothing like browsers. (Autonomous agents is probably the shortest
description that gets you in the ballpark.)
The problem here isn't lack of protocol support for binary frames (as far as I can tell), it's the fact that they are unsupported in the proposed WebSocket client API for
browsers. Am I misunderstanding? I see a full definition of binary protocol frames in <http://tools.ietf.org/html/draft-hixie-thewebsocketprotocol-68>.
So, unless I'm misunderstanding, you can send binary frames to your heart's delight if you are not relying on JavaScript code running in a browser. Eventually you'll be able to
do this from JavaScript too, once we have worked out how to manage binary data.
The only issue with the protocol here is that there are two kinds of framing. But that's not a limitation on the ability to use binary data in non-browser clients, nor is it a
cause or effect of limitations in the client API. It seems like a matter of judgment whether it's better to have two kinds of framing or one. The SPDY team concluded that it's
best for performance to use length-encoded framing for everything, which makes me wonder if that lesson applies to the WebSocket protocol as well.
Correct, we can send binary frames now if we wish, and consenting endpoints can handle them.

The issues with this are:

+ Why have two framing techniques when binary is sufficient to carry everything?

+ Who controls allocation of the frame type byte? So far every suggestion of usage
for that (eg a bit to indicate that the frame contains meta-data headers) has been
rejected. So are binary users simply to pick their own bytes and hope for no
collisions? Will IANA eventually allocate values? Is 7 bits enough?

+ Sentinel framing is unsafe. It relies on the fact that the
sentinel bytes never appear in the utf-8 strings that are passed to it.
Strangely enough, users can't be trusted to always provide valid utf-8
data, so if user data is not validated then sentinel encoding allows
frame injection attacks. After all we have learnt with HTTP, it seems
silly to repeat the mistake of building a protocol that is exposed to
such attacks.

+ the utf-8 Sentinel framing is inflexible. It sends only raw utf-8.
What if I want to send gzipped utf-8, or utf-16, etc.? This could simply be
handled with a content encoding header in the upgrade request and use of
binary framing.


regards
Maciej Stachowiak
2010-01-30 08:16:05 UTC
Permalink
Post by Greg Wilkins
Post by Maciej Stachowiak
Post by Greg Wilkins
So if you look on the wire of cometd running over websocket,
it just looks like long polling. We use the same /meta/connect
long poll message as a keep alive and as our ack carrier.
We've saved 1 connection, which is great. But I fear that
saving will be eaten up by the ability of application developers
to open any number of websockets.
Doesn't it also save you the need to potentially close and reopen the connection that would have been used for messages from the client to the server? It seems like proper full duplex is a significant win, especially over SSL where connection setup is very expensive.
No. HTTP/1.1 keeps the connection open between long poll
requests.
HTTP/1.1 doesn't guarantee that the client will actually reuse the connection, it just makes it possible.
Post by Greg Wilkins
A few use-cases that stream large volumes of nearly continuous data
will get greater throughput.
But for many many comet applications (eg chat, auctions,
monitoring), the events are far enough apart and small enough
that in most cases there is a waiting long poll and the
response fits in a single MTU. For these use cases,
websockets only helps with the minority of events that
occur when there is not a long poll waiting, thus it
really only helps the maximal latency.
Sure, throughput is not much of a concern when your average bandwidth is low compared to your network capacity.
Post by Greg Wilkins
Post by Maciej Stachowiak
Post by Greg Wilkins
But as I've frequently said, it works ok, but it solves
few of my real pain points as comet vendor and it's
caused me more code/complexity not less.
I'm curious to hear about some of the pain points that you think are not addressed. You mentioned two (reliable message delivery and maintaining idle connections) and I think you're right that those should be addressed. Any others?
For the scaling of the web applications that I work with,
connections have often been the limiting factor.
Websockets have no limits on the number of connections
that an application can open, thus no limit on the
amount of server side resources that a client side
developer can requisition.
It does have a limit that the client can only be starting one connection per server at a time (see section 4.1, step 1 in the algorithm), and allows servers to reject connections. Thus the server could enforce a connection limit per client IP or per client IP+Origin. It seems like that is enough to limit the potential for abuse. What do you think?
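A server-side guard of the kind suggested here might look like the following sketch (a hypothetical class, not from any spec); note it can only ever be a heuristic, since a client IP does not uniquely identify a machine.

```python
from collections import defaultdict

class ConnectionLimiter:
    """Sketch: cap concurrent WebSocket connections per (IP, Origin)."""

    def __init__(self, max_per_key: int = 8):
        self.max_per_key = max_per_key
        self.active = defaultdict(int)  # (ip, origin) -> open connections

    def try_accept(self, ip: str, origin: str) -> bool:
        key = (ip, origin)
        if self.active[key] >= self.max_per_key:
            return False  # refuse the handshake, e.g. with an HTTP error
        self.active[key] += 1
        return True

    def on_close(self, ip: str, origin: str) -> None:
        key = (ip, origin)
        if self.active[key] > 0:
            self.active[key] -= 1
```

Keying on IP+Origin rather than IP alone at least keeps one abusive widget from exhausting the budget of every other page a user has open.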
Post by Greg Wilkins
I've previously explained in detail how a widget vendor
might find some better performance by opening multiple
websockets, so that they get a greater share of the
bandwidth available from a server. But that only
works if you're the only one doing it. Soon everybody
will be opening 4, 8, 16 connections etc.
So, this seems to have an assumption that the client-side code is developed by an independent entity from the server-side code. I can see how that might be true if your WebSocket service is intended for cross-site use. However, it seems like that often won't be the case. For example, a vendor providing the chat service is likely to author both the client-side JavaScript and the server-side code to manage it. Presumably they would not make this mistake. We do need to make sure that it's practical for clients to minimize the number of connections they choose to use of course.

For the cross-origin case, enforcing a connection limit (rather than just making multiplexing possible) seems challenging. You would have to multiplex messages to and from all clients over a single connection, which means content destined for different security domains is being sent over one pipe. While that's not a security issue per se, it does increase the risk of problems. It may also create a challenge for multiprocess browsers. It could also cause problems when multiple independent services are hosted on a single origin, where one sends many small messages that need low latency, and another may send occasional messages that are very large. The large messages would put spikes in your latency that you can't avoid. (One way to deal might be to redesign the protocol to split large messages.) I would like to clearly understand the cost/benefit tradeoff before we consider making a single connection (or some other small number) mandatory.

It also seems that in the case of many likely services, using multiple connections gives no obvious benefit. For example, if I'm connecting to a chat server, will it really help me to have 4 connections instead of 1? I don't see how. Even for something more bandwidth-intensive like video streaming, how will multiple connections help? It seems like it could only possibly be of benefit if the protocol you are using over WebSocket lets you split data relating to the same operations over multiple connections. But it seems like that is in the server developer's hands. The reason HTTP clients sometimes violate the connection limit is because the nature of the protocol *does* give a benefit when opening many connections - you can evade bandwidth limits on large downloads using range requests, or make sure that you get low latency for critical resources without head-of-line blocking when making many requests. But I don't think that translates to many foreseeable kinds of WebSocket services, where having a single stateful stream is essential to using a service.

I do think the ability to do multiplexing as an optional feature may be useful. I see it as something that could be a 2.0 (or 1.1) protocol feature, and that could be totally transparent in the client API. But if there are pitfalls that make it impossible to roll out later, it would be good to know now.
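The parenthetical idea of splitting large messages is the standard fix for this latency-spike problem: fragment each message and interleave the fragments, so one huge message cannot monopolise a shared connection. A toy sketch, using an invented fragment format of (stream_id, final_flag, bytes) tuples:

```python
def interleave(messages, chunk_size=4):
    """Fragment each (stream_id, payload) message and emit the fragments
    round-robin, so a large message on one logical stream cannot block
    small messages on another."""
    queues = []
    for stream_id, payload in messages:
        frags = [payload[i:i + chunk_size]
                 for i in range(0, len(payload), chunk_size)] or [b""]
        queues.append((stream_id, frags))
    wire = []
    while any(frags for _, frags in queues):
        for stream_id, frags in queues:
            if frags:
                part = frags.pop(0)
                wire.append((stream_id, not frags, part))  # final when empty
    return wire
```

With fragmentation, the small message finishes after at most one chunk of the large one, instead of waiting for the whole thing.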
Post by Greg Wilkins
working with load balancers and other intermediaries
that need out-of-band communication with the server
is another.
What's needed for the sake of these kinds of intermediaries? I think the principle we should apply here is that the protocol should be able to work with no changes to intermediaries in general, but if we have a way to make it work better with optional cooperation from intermediaries, we should consider it. Can you mention some concrete problems that come up here? Do you have solutions in mind?
Post by Greg Wilkins
Post by Maciej Stachowiak
Post by Greg Wilkins
It's not provided any semantics that would allow
any cometd users to consider going directly to websockets
instead of using a comet framework. Sure it makes sending
messages easy, but that is always easy. It does not help
for when you can't send messages or when connections drop
or servers change etc. These are all the realworld things
that must be dealt with and ws makes this harder not easier.
What kind of changes do you think would make it more practical to use WebSocket directly rather than go through a framework?
I actually fear the opposite.
It's like every book on Ajax starts with a chapter about how to
program directly to XHR. So app developers go off and program
directly to XHR and get themselves into all sorts of strife.
Any seasoned Ajax developer will always tell you to have some
kind of framework wrapping XHR.
The same is going to happen with websockets. Programmers
are going to see books/publicity about it and start using
it directly. Websocket is really easy to use and they will
soon have working bidirectional applications.
But working bidirectional applications are easy even with
simple HTTP. The hard thing is how to make an applications
that fail well, or handle a laggy intermittent mobile network,
or can cope with strange intermediaries etc. Websocket
provides a solution for very few of the problems that you
get doing bidirectionality over HTTP. So programmers are
just going to go out and make a whole bunch of crappy
applications that don't work well in the real world
and we'll be picking up the pieces for years to come.
What I'm really interested in here is the problems themselves, not what form the damage will take. You did list some. That's great. It would be good to list any others, and to work on solutions to the ones identified.

Regards,
Maciej
Roberto Peon
2010-01-30 09:03:31 UTC
Permalink
Post by Maciej Stachowiak
Post by Greg Wilkins
Post by Maciej Stachowiak
Post by Greg Wilkins
So if you look on the wire of cometd running over websocket,
it just looks like long polling. We use the same /meta/connect
long poll message as a keep alive and as our ack carrier.
We've saved 1 connection, which is great. But I fear that
saving will be eaten up by the ability of application developers
to open any number of websockets.
Doesn't it also save you the need to potentially close and reopen the
connection that would have been used for messages from the client to the
server? It seems like proper full duplex is a significant win, especially
over SSL where connection setup is very expensive.
Post by Greg Wilkins
No. HTTP/1.1 keeps the connection open between long poll
requests.
HTTP/1.1 doesn't guarantee that the client will actually reuse the
connection, it just makes it possible.
Post by Greg Wilkins
A few use-cases that stream large volumes of nearly continuous data
will get greater throughput.
But for many many comet applications (eg chat, auctions,
monitoring), the events are far enough apart and small enough
that in most cases there is a waiting long poll and the
response fits in a single MTU. For these use cases,
websockets only helps with the minority of events that
occur when there is not a long poll waiting, thus it
really only helps the maximal latency.
Sure, throughput is not much of a concern when your average bandwidth is low
compared to your network capacity.
Post by Greg Wilkins
Post by Maciej Stachowiak
Post by Greg Wilkins
But as I've frequently said, it works ok, but it solves
few of my real pain points as comet vendor and it's
caused me more code/complexity not less.
I'm curious to hear about some of the pain points that you think are not
addressed. You mentioned two (reliable message delivery and maintaining idle
connections) and I think you're right that those should be addressed. Any
others?
Post by Greg Wilkins
For the scaling of the web applications that I work with,
connections have often been the limiting factor.
Websockets have no limits on the number of connections
that an application can open, thus no limit on the
amount of server side resources that a client side
developer can requisition.
It does have a limit that the client can only be starting one connection
per server at a time (see section 4.1, step 1 in the algorithm), and allows
servers to reject connections. Thus the server could enforce a connection
limit per client IP or per client IP+Origin. It seems like that is enough to
limit the potential for abuse. What do you think?
This assumes that all connections for an IP are terminated by one host,
which, for better or worse, isn't a correct assumption!
-=R
Post by Maciej Stachowiak
Post by Greg Wilkins
I've previously explained in detail how a widget vendor
might find some better performance by opening multiple
websockets, so that they get a greater share of the
bandwidth available from a server. But that only
works if you're the only one doing it. Soon everybody
will be opening 4, 8, 16 connections etc.
So, this seems to have an assumption that the client-side code is developed
by an independent entity from the server-side code. I can see how that might
be true if your WebSocket service is intended for cross-site use. However,
it seems like that often won't be the case. For example, a vendor providing
the chat service is likely to author both the client-side JavaScript and the
server-side code to manage it. Presumably they would not make this mistake.
We do need to make sure that it's practical for clients to minimize the
number of connections they choose to use of course.
For the cross-origin case, enforcing a connection limit (rather than just
making multiplexing possible) seems challenging. You would have to multiplex
messages to and from all clients over a single connection, which means
content destined for different security domains is being sent over one pipe.
While that's not a security issue per se, it does increase the risk of
problems. It may also create a challenge for multiprocess browsers. It
could also cause problems when multiple independent services are hosted on a
single origin, where one sends many small messages that need low latency,
and another may send occasional messages that are very large. The large
messages would put spikes in your latency that you can't avoid. (One way to
deal might be to redesign the protocol to split large messages.) I would
like to clearly understand the cost/benefit tradeoff before we consider
making a single connection (or some other small number) mandatory.
It also seems that in the case of many likely services, using multiple
connections gives no obvious benefit. For example, if I'm connecting to a
chat server, will it really help me to have 4 connections instead of 1? I
don't see how. Even for something more bandwidth-intensive like video
streaming, how will multiple connections help? It seems like it could only
possibly be of benefit if the protocol you are using over WebSocket lets you
split data relating to the same operations over multiple connections. But it
seems like that is in the server developer's hands. The reason HTTP clients
sometimes violate the connection limit is because the nature of the protocol
*does* give a benefit when opening many connections - you can evade
bandwidth limits on large downloads using range requests, or make sure that
you get low latency for critical resources without head-of-line blocking
when making many requests. But I don't think that translates to many
foreseeable kinds of WebSocket services, where having a single stateful
stream is essential to using a service.
I do think the ability to do multiplexing as an optional feature may be
useful. I see it as something that could be a 2.0 (or 1.1) protocol feature,
and that could be totally transparent in the client API. But if there are
pitfalls that make it impossible to roll out later, it would be good to know
now.
Post by Greg Wilkins
working with load balancers and other intermediaries
that need out-of-band communication with the server
is another.
What's needed for the sake of these kinds of intermediaries? I think the
principle we should apply here is that the protocol should be able to work
with no changes to intermediaries in general, but if we have a way to make
it work better with optional cooperation from intermediaries, we should
consider it. Can you mention some concrete problems that come up here? Do
you have solutions in mind?
Post by Greg Wilkins
Post by Maciej Stachowiak
Post by Greg Wilkins
It's not provided any semantics that would allow
any cometd users to consider going directly to websockets
instead of using a comet framework. Sure it makes sending
messages easy, but that is always easy. It does not help
for when you can't send messages or when connections drop
or servers change etc. These are all the realworld things
that must be dealt with and ws makes this harder not easier.
What kind of changes do you think would make it more practical to use
WebSocket directly rather than go through a framework?
Post by Greg Wilkins
I actually fear the opposite.
It's like every book on Ajax starts with a chapter about how to
program directly to XHR. So app developers go off and program
directly to XHR and get themselves into all sorts of strife.
Any seasoned Ajax developer will always tell you to have some
kind of framework wrapping XHR.
The same is going to happen with websockets. Programmers
are going to see books/publicity about it and start using
it directly. Websocket is really easy to use and they will
soon have working bidirectional applications.
But working bidirectional applications are easy even with
simple HTTP. The hard thing is how to make an applications
that fail well, or handle a laggy intermittent mobile network,
or can cope with strange intermediaries etc. Websocket
provides a solution for very few of the problems that you
get doing bidirectionality over HTTP. So programmers are
just going to go out and make a whole bunch of crappy
applications that don't work well in the real world
and we'll be picking up the pieces for years to come.
What I'm really interested in here is the problems themselves, not what
form the damage will take. You did list some. That's great. It would be good
to list any others, and to work on solutions to the ones identified.
Regards,
Maciej
_______________________________________________
hybi mailing list
https://www.ietf.org/mailman/listinfo/hybi
Greg Wilkins
2010-01-30 22:57:02 UTC
Permalink
Post by Maciej Stachowiak
Post by Maciej Stachowiak
So if you look on the wire of cometd running over websocket, it just looks like long polling. We use the same /meta/connect long poll message as a keep alive and as our
ack carrier.
We've saved 1 connection, which is great. But I fear that saving will be eaten up by the ability of application developers to open any number of websockets.
Doesn't it also save you the need to potentially close and reopen the connection that would have been used for messages from the client to the server? It seems like proper
full duplex is a significant win, especially over SSL where connection setup is very expensive.
No. HTTP/1.1 keeps the connection open between long poll requests.
HTTP/1.1 doesn't guarantee that the client will actually reuse the connection, it just makes it possible.
Indeed. And sometimes if you go via a load-balancing proxy like nginx,
the HTTP/1.1 persistent connections are downgraded to HTTP/1.0
non-persistent.

But the long polling techniques still work and for many use-cases
the extra cost of opening a connection is not significant.
For other use-cases it is significant.


My point is that my current pain point is more the max number
of connections rather than the open/close rate of connections (although
the latter does cause some problems, so avoiding HTTP/1.0 is good
to do).
Post by Maciej Stachowiak
Post by Maciej Stachowiak
But as I've frequently said, it works ok, but it solves few of my real pain points as comet vendor and it's caused me more code/complexity not less.
I'm curious to hear about some of the pain points that you think are not addressed. You mentioned two (reliable message delivery and maintaining idle connections) and I
think you're right that those should be addressed. Any others?
For the scaling of the web applications that I work with, connections have often been the limiting factor.
Websockets have no limits on the number of connections that an application can open, thus no limit on the amount of server side resources that a client side developer can
requisition.
It does have a limit that the client can only be starting one connection per server at a time (see section 4.1, step 1 in the algorithm), and allows servers to reject
connections. Thus the server could enforce a connection limit per client IP or per client IP+Origin. It seems like that is enough to limit the potential for abuse. What do you
think?
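Maciej's suggestion could be implemented server-side with simple bookkeeping keyed on (client IP, Origin). The sketch below is hypothetical; the class name and the default limit of 4 are illustrative, not from the draft:

```python
from collections import defaultdict


class ConnectionLimiter:
    """Track open WebSocket connections per (client IP, Origin) pair
    and reject handshakes once a per-key ceiling is reached."""

    def __init__(self, max_per_key=4):
        self.max_per_key = max_per_key
        self.open = defaultdict(int)

    def try_accept(self, ip, origin):
        """Return True and record the connection if under the limit;
        return False if this handshake should be rejected."""
        key = (ip, origin)
        if self.open[key] >= self.max_per_key:
            return False
        self.open[key] += 1
        return True

    def release(self, ip, origin):
        """Forget one connection when it closes."""
        key = (ip, origin)
        if self.open[key] > 0:
            self.open[key] -= 1
```

As Greg argues below, the hard part is not the bookkeeping but choosing a ceiling that tolerates multiple tabs without enabling a single abusive frame.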
It is very hard for a server to determine what is abuse.

If the server sets a low limit of connections per browser, then multiple
tabs/frames in the same browser can quickly hit that limit in non abusing
uses.

If the server sets a moderate limit of connections to allow for the occasional
user of multiple tabs/windows, then a single frame can abuse that limit
and open maximal connections.

The point is that the connection usage policy should be something that
is handled between the user-agent and the server. The application
developer should not be required to (or able to) participate in
the connection management.
Post by Maciej Stachowiak
I've previously explained in detail how a widget vendor might find some better performance by opening multiple websockets, so that they get a greater share of the bandwidth
available from a server. But that only works if you're the only one doing it. Soon everybody will be opening 4, 8, 16 connections etc.
So, this seems to have an assumption that the client-side code is developed by an independent entity from the server-side code. I can see how that might be true if your
WebSocket service is intended for cross-site use. However, it seems like that often won't be the case. For example, a vendor providing the chat service is likely to author both
the client-side JavaScript and the server-side code to manage it. Presumably they would not make this mistake. We do need to make sure that it's practical for clients to
minimize the number of connections they choose to use of course.
Firstly it is problematic to make assumptions about future usage. But there are already plenty
of examples of where third parties contribute code to a webpage either statically or dynamically.
Go to any home page on any portal site and you will see plenty of third party widgets offering
services... many of which would benefit from websocket type connectivity.

It is also not safe to assume that client libraries provided by a server will always be used.
If the connection limit is imposed by the client library and there is an advantage to be
obtained by exceeding the connection limit, then app developers will work around the libraries
(or in JS they will probably just modify the code).

Voluntary resource restriction just does not work.
Post by Maciej Stachowiak
I do think the ability to do multiplexing as an optional feature may be useful. I see it as something that could be a 2.0 (or 1.1) protocol feature, and that could be totally
transparent in the client API. But if there are pitfalls that make it impossible to roll out later, it would be good to know now.
I agree that multiplexing would be advantageous and I've proposed several ways in which
it could be done. However I also do recognize that it is a difficult thing to achieve in 1.0.

The suggestion that I have made (and that was rejected) was that at least the 1.0 spec change its
language so that the websocket user is not promised a connection, but rather a conduit or channel.
This would allow multiplexing to be added at a later date with less disruption etc.
Post by Maciej Stachowiak
working with load balancers and other intermediaries that need out-of-band communication with the server is another.
What's needed for the sake of these kinds of intermediaries? I think the principle we should apply here is that the protocol should be able to work with no changes to
intermediaries in general, but if we have a way to make it work better with optional cooperation from intermediaries, we should consider it. Can you mention some concrete
problems that come up here? Do you have solutions in mind?
The number and type of intermediaries is too numerous and varied to generalize.
But the ability to insert meta data into a stream that will not affect the application data is
something that is easy to do and would enable a large variety of extensions (including multiplexing).

EG. a meta data frame could be injected to indicate the channel, encoding, encryption, origin, etc.
of a following frame (or frames).
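As a purely illustrative wire format (not from any draft, and with invented field names), such a metadata frame could carry name/value pairs ahead of the data frames it describes:

```python
def encode_meta(fields):
    """Encode a hypothetical metadata frame: a 'META <length>' header
    line followed by CRLF-separated 'name: value' pairs describing the
    frames that follow (channel, encoding, origin, ...)."""
    body = "".join(f"{k}: {v}\r\n" for k, v in fields.items()).encode("utf-8")
    return b"META " + str(len(body)).encode("ascii") + b"\r\n" + body


def decode_meta(frame):
    """Inverse of encode_meta; returns the name/value pairs as a dict."""
    header, _, rest = frame.partition(b"\r\n")
    if not header.startswith(b"META "):
        raise ValueError("not a metadata frame")
    length = int(header[5:])
    body = rest[:length].decode("utf-8")
    return dict(line.split(": ", 1) for line in body.split("\r\n") if line)
```

An intermediary that does not understand a given name could forward the frame untouched, which is what makes this style of extension point attractive for proxies and load balancers.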



regards
Maciej Stachowiak
2010-01-31 00:36:11 UTC
Permalink
Post by Greg Wilkins
It is very hard for a server to determine what is abuse.
If the server sets a low limit of connections per browser, then multiple
tabs/frames in the same browser can quickly hit that limit in non abusing
uses.
The handshake sends the Origin, so it could be per-browser, per-origin; then your only problem is multiple tabs from the same site. (Different frames of the same page can cooperate to share a connection.)
Post by Greg Wilkins
If the server sets a moderate limit of connections to allow for the occasional
user of multiple tabs/windows, then a single frame can abuse that limit
and open maximal connections.
The point is that the connection usage policy should be something that
is handled between the user-agent and the server. The application
developer should not be required to (or able to) participate in
the connection management.
Post by Maciej Stachowiak
I've previously explained in detail how a widget vendor might find some better performance by opening multiple websockets, so that they get a greater share of the bandwidth
available from a server. But that only works if you're the only one doing it. Soon everybody will be opening 4, 8, 16 connections etc.
So, this seems to have an assumption that the client-side code is developed by an independent entity from the server-side code. I can see how that might be true if your
WebSocket service is intended for cross-site use. However, it seems like that often won't be the case. For example, a vendor providing the chat service is likely to author both
the client-side JavaScript and the server-side code to manage it. Presumably they would not make this mistake. We do need to make sure that it's practical for clients to
minimize the number of connections they choose to use of course.
Firstly it is problematic to make assumptions about future usage. But there are already plenty
of examples of where third parties contribute code to a webpage either statically or dynamically.
Go to any home page on any portal site and you will see plenty of third party widgets offering
services... many of which would benefit from websocket type connectivity.
If you are embedding untrusted third party code on your site without doing anything to restrict what it can do, then you have much bigger problems than excessive WebSocket connections.
Post by Greg Wilkins
It is also not safe to assume that client libraries provided by a server will always be used.
If the connection limit is imposed by the client library and there is an advantage to be
obtained by exceeding the connection limit, then app developers will work around the libraries
(or in JS they will probably just modify the code).
Voluntary resource restriction just does not work.
I think the main limiting factor on use of excess connections is simply that many of the likely use cases would not actually benefit from using more connections (as described in other parts of my previous email).
Post by Greg Wilkins
Post by Maciej Stachowiak
I do think the ability to do multiplexing as an optional feature may be useful. I see it as something that could be a 2.0 (or 1.1) protocol feature, and that could be totally
transparent in the client API. But if there are pitfalls that make it impossible to roll out later, it would be good to know now.
I agree that multiplexing would be advantageous and I've proposed several ways in which
it could be done. However I also do recognize that it is a difficult thing to achieve in 1.0.
The suggestion that I have made (and that was rejected) was that at least the 1.0 spec change its
language so that the websocket user is not promised a connection, but rather a conduit or channel.
This would allow multiplexing to be added at a later date with less disruption etc.
I think that from the API perspective, whether you got a fresh connection or a channel over a multiplexed version of the protocol is not observable. Thus, such a change in the protocol to allow multiplexing would not require any API changes - it could be totally transparent to JS-level clients. What kind of disruption are you worried about, and can you help me understand more about your suggested change? Is it the protocol spec or the API spec that should be mentioning the possibility of channels that are not separate TCP connections? And how would this reduce future disruption?
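To illustrate why multiplexing could be invisible at the API level: a user agent could demultiplex logical channels internally and deliver to each script-level socket only its own frames. This is a hypothetical sketch; the channel ids and framing are invented:

```python
class MuxDemux:
    """Demultiplex (channel_id, payload) frames arriving on one
    physical connection to per-channel handlers, so each logical
    socket sees only its own messages."""

    def __init__(self):
        self.handlers = {}

    def open_channel(self, channel_id, on_message):
        # Each logical WebSocket registers a callback for its channel.
        self.handlers[channel_id] = on_message

    def feed(self, channel_id, payload):
        # Called for every frame read off the shared connection.
        handler = self.handlers.get(channel_id)
        if handler is not None:
            handler(payload)
```

Since the script only ever observes its own callback firing, whether the bytes arrived on a dedicated TCP connection or a shared one is unobservable, which is the crux of the argument above.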
Post by Greg Wilkins
Post by Maciej Stachowiak
working with load balancers and other intermediaries that need out-of-band communication with the server is another.
What's needed for the sake of these kinds of intermediaries? I think the principle we should apply here is that the protocol should be able to work with no changes to
intermediaries in general, but if we have a way to make it work better with optional cooperation from intermediaries, we should consider it. Can you mention some concrete
problems that come up here? Do you have solutions in mind?
The number and type of intermediaries is too numerous and varied to generalize.
But the ability to insert meta data into a stream that will not affect the application data is
something that is easy to do and would enable a large variety of extensions (including multiplexing).
EG. a meta data frame could be injected to indicate the channel, encoding, encryption, origin, etc.
of a following frame (or frames).
Can we talk about some specific problems for intermediaries? You don't have to cover everything, but a few example use cases would help me understand how your proposed mechanism would help.

Regards,
Maciej
Greg Wilkins
2010-01-31 00:40:52 UTC
Permalink
Post by Maciej Stachowiak
If you are embedding untrusted third party code on your site without doing anything to restrict what it can do, then you have much bigger problems than excessive WebSocket connections.
exactly - which is why connection limits should be enforced by the browser and not the application.
Post by Maciej Stachowiak
Post by Greg Wilkins
Voluntary resource restriction just does not work.
I think the main limiting factor on use of excess connections is simply that many of the likely use cases would not actually benefit from using more connections (as described in other parts of my previous email).
I still don't think it is a good idea to enable browsers to be the perfect
platform for launching denial of service attacks on any server/port.


regards
Greg Wilkins
2010-01-30 23:07:05 UTC
Permalink
Post by Maciej Stachowiak
What I'm really interested in here is the problems themselves, not what form the damage
will take. You did list some.
That's great. It would be good to list any others, and to
work on solutions to the ones identified.
Which brings us back to process.

It's really great that you've taken an interest in the issues that are being raised on
this list and have engaged in discussion about them.

But we've been here before. They've all been raised and discussed in some detail here
and many ideas and solutions have been proposed.

All have been rejected. Not only that, the problems themselves have been rejected
and no solution at all has been offered.

Hence my strong words against the WHATWG process. It has disenfranchised a
significant part of the internet community and is not addressing the concerns
that are being raised.

Note that I'm not advocating we use the IETF process because I'm deluded that I
can get my own way. If I had my own way, the IETF would be considering something
like BWTP rather than websocket... but that was hummed down and I accept that.
So I now wish to positively engage with websocket (hence Jetty now supports it,
cometd will soon support it and I've put BWTP aside). However, accepting the
will of the community is an entirely different thing to accepting the will
of the WHATWG editor.

I repeat my suggestion that the WHATWG continue to edit the current document
to produce an interoperable and deployed 1.0, while the IETF begins the process
to produce a new document describing a 1.1 version of the protocol.

regards
Justin Erenkrantz
2010-01-31 00:03:52 UTC
Permalink
Post by Greg Wilkins
I repeat my suggestion that the WHATWG continue to edit the current document
to produce an interoperable and deployed 1.0, while the IETF begins the process
to produce a new document describing a 1.1 version of the protocol.
If I could make a humble suggestion, perhaps it makes sense to have
WHATWG produce a 0.9 version based on Hixie's latest draft or whatnot.
This is akin to what happened with HTTP...

I just feel that if the IETF-governed protocol takes into account
real-world feedback from non-browser devs, the protocol may look
different enough that a 1.0->1.1 bump won't be significant enough to
reflect the differences.

My $.02. -- justin
Maciej Stachowiak
2010-01-31 00:46:03 UTC
Permalink
Post by Greg Wilkins
Post by Maciej Stachowiak
What I'm really interested in here is the problems themselves, not what form the damage
will take. You did list some.
That's great. It would be good to list any others, and to
work on solutions to the ones identified.
Which brings us back to process.
It's really great that you've taken an interest in the issues that are being raised on
this list and have engaged in discussion about them.
But we've been here before. They've all been raised and discussed in some detail here
and many ideas and solutions have been proposed.
All have been rejected. Not only that, the problems themselves have been rejected
and no solution at all has been offered.
I think you have raised some valid issues, and I personally think they should be addressed (if not in 1.0 of the protocol, at least understand how a later protocol revision could handle it). However, I must admit that some of your comments sound like handwaving that assumes the solution without clearly stating the problem. For example, when you talk about the problem of figuring out your idle timeout, it seems like the problem statement is very clear. Whenever you talk about the desire to have arbitrary metadata, I haven't seen you clearly relate that back to a specific use case or concrete problem to be solved. I can see why feedback of that second sort might not get much of a hearing. I think if you focus on concrete problems you have run into as an implementor, you will have much better luck.
Post by Greg Wilkins
Hence my strong words against the WHATWG process. It has disenfranchised a
significant part of the internet community and is not addressing the concerns
that are being raised.
Now you have the opportunity to discuss these things under the IETF process. Go for it.
Post by Greg Wilkins
Note that I'm not advocating we use the IETF process because I'm deluded that I
can get my own way. If I had my own way, the IETF would be considering something
like BWTP rather than websocket... but that was hummed down and I accept that.
So I now wish to positively engage with websocket (hence Jetty now supports it,
cometd will soon support it and I've put BWTP aside). However, accepting the
will of the community is an entirely different thing to accepting the will
of the WHATWG editor.
I repeat my suggestion that the WHATWG continue to edit the current document
to produce an interoperable and deployed 1.0, while the IETF begins the process
to produce a new document describing a 1.1 version of the protocol.
It seems plausible that we may need a revision of the protocol down the line. But I think that developing the two versions in two separate standards organizations, and doing so simultaneously and without trying to agree together on the right outcome, is not likely to lead to a coherent outcome.

Regards,
Maciej
Greg Wilkins
2010-01-31 08:16:38 UTC
Permalink
Post by Maciej Stachowiak
I think you have raised some valid issues, and I personally think they should be addressed
(if not in 1.0 of the protocol, at least understand how a later protocol revision could handle it).
However, I must admit that some of your comments sound like handwaving that assumes the solution
without clearly stating the problem. For example, when you talk about the problem of figuring
out your idle timeout, it seems like the problem statement is very clear. Whenever you talk
about the desire to have arbitrary metadata, I haven't seen you clearly relate that back to a
specific use case or concrete problem to be solved. I can see why feedback of that second sort
might not get much of a hearing. I think if you focus on concrete problems you have run into
as an implementor, you will have much better luck.
Maciej,

I think that is a pretty unfair comment.

I've not provided much detail this time around because the points have been
made so many times before.

I have previously provided many lengthy detailed discussions about these
points. I have proposed several concrete enhancements to websocket, I've
blogged at length about some other ways to improve websocket

http://blogs.webtide.com/gregw/entry/how_to_improve_websocket

and I've even drafted two versions of an alternative protocol BWTP
that has gained some interest:

http://www.ietf.org/id/draft-wilkins-hybi-bwtp-00.txt
http://bwtp.wikidot.com/

There are two implementations of BWTP available.

I've got deployed comet services with millions of users and am happy to
answer any questions asked about my experiences or comments expressed here.

There has been plenty of support expressed on this list for the issues
raised and plenty of other issues raised by others on this list as well.

This is not hand waving.
Post by Maciej Stachowiak
Post by Greg Wilkins
Hence my strong words against the WHATWG process. It has disenfranchised a
significant part of the internet community and is not addressing the concerns
that are being raised.
Now you have the opportunity to discuss these things under the IETF process. Go for it.
Well perhaps you are correct.
We should treat this as day 0 and start again and restate the problems, requirements
etc. I do believe this is exactly the schedule outlined in the charter - which has
now been questioned by the WHATWG.
Post by Maciej Stachowiak
It seems plausible that we may need a revision of the protocol down the line. But
I think that developing the two versions in two separate standards organizations,
and doing so simultaneously and without trying to agree together on the right
outcome, is not likely to lead to a coherent outcome.
The WHATWG have not expressed any interest in further extending the protocol.
They say their current goal is to achieve interoperability, which I assume means
resolving any ambiguities or misunderstandings of the current document.

If that truly is their only current objective, then I don't see a particular
problem if the IETF moves forward with proposals/discussion/consideration of
how to improve websocket to deal with issues like orderly close and idle
timeouts. Of course coordination will be required and the whatwg need to
be part of any consensus on new features.

regards
Maciej Stachowiak
2010-01-31 08:46:42 UTC
Permalink
Post by Greg Wilkins
Post by Maciej Stachowiak
I think you have raised some valid issues, and I personally think they should be addressed
(if not in 1.0 of the protocol, at least understand how a later protocol revision could handle it).
However, I must admit that some of your comments sound like handwaving that assumes the solution
without clearly stating the problem. For example, when you talk about the problem of figuring
out your idle timeout, it seems like the problem statement is very clear. Whenever you talk
about the desire to have arbitrary metadata, I haven't seen you clearly relate that back to a
specific use case or concrete problem to be solved. I can see why feedback of that second sort
might not get much of a hearing. I think if you focus on concrete problems you have run into
as an implementor, you will have much better luck.
Maciej,
I think that is a pretty unfair comment.
I've not provided much detail this time around because the points have been
made so many times before.
Like I said, I think some of your points are really good. But the ones where you don't give a clear grounding in use cases or concrete problems to be solved read to me like you've assumed the solution.
Post by Greg Wilkins
I have previously provided many lengthy detailed discussions about these
points. I have proposed several concrete enhancements to websocket, I've
blogged at length about some other ways to improve websocket
http://blogs.webtide.com/gregw/entry/how_to_improve_websocket
I just read that post, and I don't see any concrete use cases for arbitrary metadata. You give some reasonable arguments for specifying a MIME type, but leap from there to arbitrary name/value fields. I did not see any connecting logic.

To me it seems that transfer encoding (or similar) information might be useful if the UA, rather than the JS client, is expected to do some processing, otherwise, you can just as easily define it by convention for a subprotocol to run over WebSocket. WebSocket does have a way to identify a subprotocol, and that seems like a sufficient hook to define subprotocols with per-frame metadata. If you want to promote going further, I'd like to hear the concrete use case or problem that would be addressed by having the functionality at the protocol level.

I should note that I'm not personally against open-ended metadata fields. It has certainly worked well for HTTP, and enabled us to enhance the protocol after the fact in many ways. But I don't feel like I can make a case for including it in the base protocol based solely on design taste, so I'd like to know what problems it solves. As you may have seen on other issues, if you give convincing use cases, then I at least will back you up. (I hope other implementors are still reading some of these subthreads and will make their own evaluations.)
Post by Greg Wilkins
and I've even drafted two versions of an alternative protocol BWTP
http://www.ietf.org/id/draft-wilkins-hybi-bwtp-00.txt
http://bwtp.wikidot.com/
There are two implementations of BWTP available.
I appreciate that you've spent considerable time designing, implementing and promoting a different protocol. That's good technical work, and a useful exploration. But that does not answer the question of why particular features are required.
Post by Greg Wilkins
I've got deployed comet services with millions of users and am happy to
answer any questions asked about my experiences or comments expressed here.
There has been plenty of support expressed on this list for the issues
raised and plenty of other issues raised by others on this list as well.
This is not hand waving.
I apologize for using overly judgmental language. Let me try to state it in more neutral terms: In some cases, you seem to be pushing some preferred design approaches without presenting a clear grounding in concrete use cases. Now that you have taken a shot at implementing the WebSocket protocol on the server side, I think your feedback is enormously valuable. But I find it much more useful when it is grounded in concrete problems that you tried to solve, but could not. If you've given clear use cases for some of the features you advocate before, then great.
Post by Greg Wilkins
Post by Maciej Stachowiak
Post by Greg Wilkins
Hence my strong words against the WHATWG process. It has disenfranchised a
significant part of the internet community and is not addressing the concerns
that are being raised.
Now you have the opportunity to discuss these things under the IETF process. Go for it.
Well perhaps you are correct.
We should treat this as day 0 and start again and restate the problems, requirements
etc. I do believe this is exactly the schedule outlined in the charter - which has
now been questioned by the WHATWG.
Here's the kind of requirements information that would be useful to me as a browser implementor:

- Are there details of WebSocket that would hamper deployment, or fail to solve some problems in the way comet is done today?

- Are these things that could reasonably be fixed in a future revision of the protocol, or do they need to be addressed in 1.0? (One thing that concerns me about the latter is the lack of versioning in the WebSocket protocol - if we solve some problems later by introducing new frame types, then how can you tell if the party at the other end understands those new frame types? We need some way for a client and server to be able to negotiate use of WebSocket 2.0 while still interoperating with endpoints that only speak 1.0. I'd like to understand the story for this before we push wide deployment.)

- For the problems that aren't solved, are there reasonable solutions that can plausibly address the issue? (Note: solutions that assume all intermediaries have to change before the WebSocket protocol can be used at all do not seem viable to me, but the ability for intermediaries to optionally opt in seems like a reasonable approach.) I'm particularly concerned about the fact that all early deployment is likely to be over SSL (since that is probably your only option for going through unmodified proxies), but a lot of the solutions we have discussed for intermediaries to participate do not work with SSL.

That's the kind of information that will help us client-side implementors know if there are critical things to fix before promoting wide deployment.
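The versioning concern above could be addressed by having the client advertise the protocol versions it speaks and the server select one. The draft under discussion had no such mechanism, so the following is a hypothetical negotiation sketch:

```python
def negotiate_version(client_versions, server_versions):
    """Pick the highest protocol version both endpoints support, or
    return None when there is no overlap (in which case the handshake
    would have to fail, or fall back to the baseline protocol)."""
    common = set(client_versions) & set(server_versions)
    return max(common) if common else None
```

With something like this in the handshake, a 2.0 server could introduce new frame types while still interoperating with 1.0-only clients, which is exactly the escape hatch being asked for.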
Post by Greg Wilkins
Post by Maciej Stachowiak
It seems plausible that we may need a revision of the protocol down the line. But
I think that developing the two versions in two separate standards organizations,
and doing so simultaneously and without trying to agree together on the right
outcome, is not likely to lead to a coherent outcome.
The WHATWG have not expressed any interest in further extending the protocol.
They say their current goal is to achieve interoperability, which I assume means
resolving any ambiguities or misunderstandings of the current document.
If that truly is their only current objective, then I don't see a particular
problem if the IETF moves forward with proposals/discussion/consideration of
how to improve websocket to deal with issues like orderly close and idle
timeouts. Of course coordination will be required and the whatwg need to
be part of any consensus on new features.
If we're thinking about two protocol versions, then ideally I'd like to see both groups collaborate on both. If we identify any true showstopper issues in the 1.0 protocol, then I at least would like to see them fixed before it is too late. To me a showstopper issue would be one that completely prevents deployment in a common situation, or that precludes solving a problem in a future protocol revision.

Regards,
Maciej
Greg Wilkins
2010-01-31 22:36:09 UTC
Permalink
Post by Maciej Stachowiak
Like I said, I think some of your points are really good. But the ones where you don't give a clear grounding in use cases or concrete problems to be solved read to me like
you've assumed the solution.
When the good points are so soundly rejected and left unaddressed, it is hardly
motivation to carry on and provide more details on the other points.

I refer you to Justin's recent comment: "but this is such a ridiculously large pain
point for servers that is being constantly belittled by the person in charge of
the drafts that it makes me question even continuing to bother providing feedback
at all"

This is a sentiment I deeply share.

You say that server-side feedback is welcome, but I don't think there are many
server side people here feeling the luv'n!
Post by Maciej Stachowiak
I just read that post, and I don't see any concrete use cases for arbitrary metadata.
You give some reasonable
arguments for specifying a MIME type, but leap from there to
arbitrary name/value fields. I did not see any connecting logic.
Well one of the key things about supporting arbitrary meta-data is that it
allows unanticipated future requirements to be met without having to rev the spec.

Now it may be that we can address all the issues raised (negotiating timeouts,
supporting different content types, orderly close initiated by either end or an
intermediary, handling large messages, etc.) without supporting
arbitrary meta-data. But in my experience, coming up with specific
solutions for these problems would break the 0, 1, or infinity rule - i.e. where
there are 3 use-cases, there are actually probably 4 or more.
Post by Maciej Stachowiak
In some cases, you seem to be pushing some preferred design approaches without
presenting a clear grounding in concrete use cases.
See, this just sounds like you are shooting the messenger. So it's my fault the
spec does not address my concerns because I've been inadequate in the way that I've
framed my feedback, and the editor of the document is faultless in his approach.

Great! You are really encouraging me to continue with this process.


Besides, this is such an unfair criticism both to me and the WHATWG editor.

Ian has taken an immense amount of time to engage in discussion here and to
try to understand our concerns etc. I think he mostly does understand,
but simply does not agree. It's not a communication problem.

Moreover, I've changed my preferred design approach to these problems so
many times that even I can't keep track of what I really would like to see.
Maybe I'd like to see multiplexing in the base protocol, or maybe I'd just
like the base to be extensible enough that multiplexing can be layered on
top of it. There are lots of factors in making such a call, but first
we have to agree that multiplexing is a desirable feature either now or in
the future and what other features it would have to coexist with.

We are nowhere near that agreement, and without an agreed set of requirements
it is impossible to come to a consensus of a design approach.

This is why the charter of the WG does start at the requirements stage (much
to the scorn of the WHATWG editor). So I think I might break off these
threads and follow the charter and participate in some threads about
requirements.


cheers
SM
2010-01-31 07:31:56 UTC
Permalink
Post by Greg Wilkins
Which brings us back to process.
In a previous message, I asked a question about the WHATWG [1]. As
there hasn't been any answer, I gather that the group will not be
providing input.
Post by Greg Wilkins
I repeat my suggestion that the WHATWG continue to edit the current document
to produce an interoperable and deployed 1.0, while the IETF begins the process
to produce a new document describing a 1.1 version of the protocol.
See comments below.
Post by Greg Wilkins
If I could make a humble suggestion, perhaps it makes sense to have
WHATWG produce a 0.9 version based on Hixie's latest draft or whatnot.
One of the goals is a working group item that describes the Web
Socket requirements. Some of the issues could be sorted out by
working on that document first.
Maciej Stachowiak
2010-01-30 02:34:16 UTC
Permalink
Post by Justin Erenkrantz
Post by Greg Wilkins
The whatwg process relies on the consent of a single individual
(yourself) as editor. This position is an appointment made by an
invitation only committee made up of 9 representatives from
various browser manufacturers. You are also on that committee,
the spokesman for the group and an employee of the company that
is shipping the first client implementation.
Yes, this is my biggest concern about the process so far - it seems
very exclusionary to those of us who develop servers. So far, this is
a significant portion of the community that I feel has not had a
legitimate chance to provide any real input into the WebSocket
protocol. Instead, as an httpd developer who knows just as much about
HTTP as anyone else on this list, I just get the feeling that the
browser developers are telling me that I need to implement a
"protocol" without providing a legitimate opportunity for feedback.
The browser implementors working on WebSocket are very interested in meeting server-side needs. We want to implement features that will actually be used on the Web, not just for the fun of coding it. :-) Not only that, but many of us also have significant deployed and upcoming Web applications, and are considering the use of WebSocket in those offerings.

Thus, I would say that browser developers are very interested in input and feedback from server developers. In fact, I at least would like the final product to be something that server-side developers can feel positive about, and not just grudgingly accept.

If there are technical points of feedback that are not properly addressed, then let's have the conversation about those technical issues ASAP, and not mix it up with meta-level process issues.

Regards,
Maciej
Greg Wilkins
2010-01-30 06:12:42 UTC
Permalink
Post by Maciej Stachowiak
If there are technical points of feedback that are not properly addressed,
then let's have the conversation about those technical issues ASAP, and not
mix it up with meta-level process issues.
There have been many attempts by server side developers to express their
concerns regarding WebSocket. As Jamie pointed out elsewhere on
this thread, all substantive suggestions were met with "no, I don't agree,
your idea will not be considered for WebSocket".

More important than accepting specific ideas, the actual concerns
have not been accepted or addressed. We've simply been told that we
should not be concerned about the things we are concerned about.

So it is these technical frustrations that are driving the current
discussion about process. However, even if Websocket was the
best protocol ever created, I still do not think that a closed
consortium representing only a segment of the industry should
be the author, editor and implementers of a new internet protocol
that will affect all of the industry.


regards
Maciej Stachowiak
2010-01-30 06:22:46 UTC
Permalink
Post by Greg Wilkins
Post by Maciej Stachowiak
If there are technical points of feedback that are not properly addressed,
then let's have the conversation about those technical issues ASAP, and not
mix it up with meta-level process issues.
There have been many attempts by server side developers to express their
concerns regarding WebSocket. As Jamie pointed out elsewhere on
this thread, all substantive suggestions were met with "no, I don't agree,
your idea will not be considered for WebSocket".
More important than accepting specific ideas, the actual concerns
have not been accepted or addressed. We've simply been told that we
should not be concerned about the things we are concerned about.
Well I hope the chairs of the HyBi WG work out a process for suitably addressing these kinds of issues. I'm not entirely clear on what the concerns actually are, but perhaps we can try working through one or two of them as a test case for collaboration.

By the way, I was pleased to see your "Technical feedback" subthread forked from this thread. It's a lot easier to get down to business on concrete issues. I will read over it and comment.
Post by Greg Wilkins
So it is these technical frustrations that are driving the current
discussion about process. However, even if Websocket was the
best protocol ever created, I still do not think that a closed
consortium representing only a segment of the industry should
be the author, editor and implementers of a new internet protocol
that will affect all of the industry.
I think you are overestimating how much of a unitary entity the WHATWG is. The browser vendors seeking to implement WebSocket have chosen to do so relatively independently, and we send our feedback publicly, where it gets considered. It's not like implementing engineers ask the WHATWG Steering Committee to give Ian orders on what to put in the spec. Furthermore, browser implementors do not have some kind of vendetta against server-side developers. We want to work with you guys; in fact, that is the whole point of having web/internet specifications.

That being said, I think debating the organizational structure of the WHATWG is not really helpful. For the HyBi WG, what we need is a process that satisfies IETF requirements. I think it is possible, and probably even desirable, to set that up in a way that allows alignment and continued collaboration with what is happening at the WHATWG. I also think it is up to the chairs of this WG to work out the process details. As I've said before, I would be delighted to advise, being in a very similar position. But I would rather not get into it too much on the list.

Regards,
Maciej
Maciej Stachowiak
2010-01-30 02:46:34 UTC
Permalink
Post by Greg Wilkins
Post by Ian Hickson
Post by Martin J. Dürst
Ian, could you please explain how exactly *you* imagine such a
cooperation should work, if not e.g. by cross-posting?
The same way it works with the HTML5 specification and the various Web
Apps specifications....
Ian,
the problem with this approach is that an internet protocol
is out of scope for the WHATWG charter, and the WHATWG
process is entirely inappropriate for forging a consensus
across all the interested parties.
As one of the Chairs of the W3C HTML Working Group, I have had much opportunity to work through the issues of coordinating joint development with the WHATWG. It is definitely true that there have been difficulties at times, due to the different processes and distinct (though overlapping) communities. And HTML5 is probably the biggest spec going through this joint development process, with the most different points of controversy. However, I believe we have now worked out a process that meets W3C consensus requirements without forking from the WHATWG copy of the spec.

In the process we have developed, it is not the case that a single individual has the final word. Nor is it the case that a committee of 9 browser representatives has the final word. What we have is a well-defined process to report and track feedback; we let the editor decide on the initial disposition of issues based on discussion, and we have a process for resolving disputes if we cannot come to agreement informally. This process seems to be working both for large-scale editorial changes (such as factoring parts of the specification into wholly separate specs) and for changes in normative requirements.

I think the Chair(s) of HyBi Working Group should figure out whether coordination is a goal, and if so what process can satisfy IETF requirements even while working with an editor who feels responsibility to a second standards organization. Since this is something I have had to deal with myself, I'd be glad to advise the HyBi Chairs (offline) on this topic.

With this in mind, could we leave further consideration of the process issues to the HyBi Chairs? It is ultimately their responsibility to work this out, and I fear this thread may be more distracting than helpful.

Regards,
Maciej
SM
2010-01-29 11:15:47 UTC
Permalink
Hi Ian,
Post by Ian Hickson
As is the WHATWG.
I am sticking to the IETF angle as I don't have any documentation on
how the WHATWG works. Greg Wilkins raised two points in his last
message. The work done in here has to gain the consensus of this
Working Group. It then goes through an IETF-wide Last-Call where
there is cross-area review. It is the consensus at that final stage
that matters. I am not describing the finer points as it is better
to read the relevant documents for a good explanation.
Post by Ian Hickson
But it doesn't mention the WHATWG, which is working on this spec.
Yes, it doesn't. The Working Group would have to recharter if it
wants to add that.
Post by Ian Hickson
By whom?
This charter was discussed on this mailing list by some of the people
who are part of the Working Group and that was what they agreed to.
Post by Ian Hickson
People from the IETF are welcome to participate in the WHATWG process.
Yes, we could say that. But the WHATWG process will not get the
document published as a RFC. To put it differently, you will still
have to get the document through the IETF process.
Post by Ian Hickson
However, instead, I suggest we work together, just like the W3C and the
WHATWG are cooperating on a dozen other specs.
I think that is an excellent suggestion. It is unlikely that there
can be a formal agreement between the different groups. However, the
individuals may be able to work something out.
The IESG reads the feedback. If I am not mistaken, the IESG
generally does not send out individual replies for feedback they receive.
Post by Ian Hickson
Actually, I was asked to submit it by the IETF. I agreed to do so while
simultaneously publishing it through the WHATWG. At no point was it
suggested that the WHATWG should stop working on it.
The WHATWG can continue working on the specifications. This Working
Group will probably work on their own version of the specification
too. The outcome will be two different specifications for the same
technology. I don't think that is in the interest of the Internet community.

Regards,
-sm
Ian Hickson
2010-01-29 11:51:13 UTC
Permalink
It then goes through an IETF-wide Last-Call where there is cross-area
review.
I'm certainly all in favour of more review -- I'm not sure limiting it to
the IETF is necessarily a good plan though. I would hope we would look for
review from the entire Web community, including Web authors, and members
of other groups such as the W3C. That is why the WHATWG announced a public
last call last year and invited feedback from the entire community. If
there are specific groups we can invite to comment on the spec who are not
yet aware of the spec, I would be happy to contact them; do you have any
suggestions on this front?
It is the consensus at that final stage that matters.
IMHO it's the interoperable implementations that matter, but if we can
get consensus as well then so much the better.
Post by Ian Hickson
But it doesn't mention the WHATWG, which is working on this spec.
Yes, it doesn't. The Working Group would have to recharter if it wants
to add that.
If the charter is to be relevant, it seems that acknowledging what is
actually being done is important. Personally I do not put much stock in
charters, so it's not a priority for me, but if people are going to refer
to the charter, then we should make sure they refer to something that
reflects reality.
Post by Ian Hickson
By whom?
This charter was discussed on this mailing list by some of the people
who are part of the Working Group and that was what they agreed to.
Does it not seem odd that the people who discussed where the spec should
be edited did not include the person editing the spec?

(I did not at any point see agreement on this list that the HyBi group
should take over the WHATWG work without working with the WHATWG. I've
read every e-mail sent to this list since it was created.)
Post by Ian Hickson
People from the IETF are welcome to participate in the WHATWG process.
Yes, we could say that. But the WHATWG process will not get the
document published as a RFC.
Getting the document published as an RFC is not a goal. Getting
interoperable implementations is the goal.
To put it differently, you will still have to get the document through
the IETF process.
That's fine, it's not mutually exclusive with working with the WHATWG.
HTML5 and many other specs are going through both the W3C and WHATWG
processes together, and are published simultaneously through both groups.
Post by Ian Hickson
However, instead, I suggest we work together, just like the W3C and
the WHATWG are cooperating on a dozen other specs.
I think that is an excellent suggestion. It is unlikely that there can
be a formal agreement between the different groups. However, the
individuals may be able to work something out.
I don't see the difference between a formal agreement and individuals
working something out. I'd be glad to work something out. The first step
would be to change from the attitude of "the HyBi group is working on this
and the WHATWG is welcome to work on something similar as well" to "the
HyBi group and the WHATWG are working together on this".
The IESG reads the feedback. If I am not mistaken, the IESG generally
does not send out individual replies for feedback they receive.
The feedback had no effect on the charter, either.
Post by Ian Hickson
Actually, I was asked to submit it by the IETF. I agreed to do so
while simultaneously publishing it through the WHATWG. At no point was
it suggested that the WHATWG should stop working on it.
The WHATWG can continue working on the specifications. This Working
Group will probably work on their own version of the specification too.
No, that's not "working together".
The outcome will be two different specifications for the same
technology. I don't think that is in the interest of the Internet community.
It's not. Let's work together to create one set of normative requirements,
not two.
--
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
Francis Brosnan Blazquez
2010-01-29 15:39:53 UTC
Permalink
Hi Ian,
Post by Ian Hickson
IMHO it's the interoperable implementations that matter, but if we can
get consensus as well then so much the better.
This is circular; without consensus it will be more difficult to get
interoperable implementations... and at this moment the politics
comes more from not having such consensus (rather than from whatwg
vs. ietf).

We are really interested in getting WebSocket integrated in our products
but we have found some limitations [1].

[1] http://www.ietf.org/mail-archive/web/hybi/current/msg00977.html

Maybe we have missed something, so I ask again:

1) Is it possible to send binary content (including octets 0x00 and
0xFF) from browser JavaScript?

2) Will the WebSocket protocol be able to use existing HTTP proxies?
Post by Ian Hickson
Post by Ian Hickson
But it doesn't mention the WHATWG, which is working on this spec.
Yes, it doesn't. The Working Group would have to recharter if it wants
to add that.
If the charter is to be relevant, it seems that acknowledging what is
actually being done is important. Personally I do not put much stock in
charters, so it's not a priority for me, but if people are going to refer
to the charter, then we should make sure they refer to something that
reflects reality.
Not taking into consideration part of the IETF process (the charter)
doesn't sound like consensus.
Post by Ian Hickson
Post by Ian Hickson
By whom?
This charter was discussed on this mailing list by some of the people
who are part of the Working Group and that was what they agreed to.
Does it not seem odd that the people who discussed where the spec should
be edited did not include the person editing the spec?
Agreed.
Post by Ian Hickson
(I did not at any point see agreement on this list that the HyBi group
should take over the WHATWG work without working with the WHATWG. I've
read every e-mail sent to this list since it was created.)
Post by Ian Hickson
People from the IETF are welcome to participate in the WHATWG process.
Yes, we could say that. But the WHATWG process will not get the
document published as a RFC.
Getting the document published as an RFC is not a goal. Getting
interoperable implementations is the goal.
Again, this is somewhat circular, and it conflicts with WebSocket's goals.
Having WebSocket published as an RFC is an additional guarantee that it has
completed a process ensuring that there was consensus and that experts have
reviewed the work.

You want consensus, but at the same time having WebSocket published as an
RFC is not a goal. I don't understand this.
Post by Ian Hickson
To put it differently, you will still have to get the document through
the IETF process.
That's fine, it's not mutually exclusive with working with the WHATWG.
HTML5 and many other specs are going through both the W3C and WHATWG
processes together, and are published simultaneously through both groups.
Post by Ian Hickson
However, instead, I suggest we work together, just like the W3C and
the WHATWG are cooperating on a dozen other specs.
I think that is an excellent suggestion. It is unlikely that there can
be a formal agreement between the different groups. However, the
individuals may be able to work something out.
I don't see the difference between a formal agreement and individuals
working something out. I'd be glad to work something out. The first step
would be to change from the attitude of "the HyBi group is working on this
and the WHATWG is welcome to work on something similar as well" to "the
HyBi group and the WHATWG are working together on this".
I agree with this, but you can't say at the same time that it is not a goal
to complete the RFC process... which is the same as saying "the WHATWG is
working on this and the HyBi group is welcome to work on something
similar as well".
Post by Ian Hickson
The IESG reads the feedback. If I am not mistaken, the IESG generally
does not send out individual replies for feedback they receive.
The feedback had no effect on the charter, either.
The IESG should have replied to you, but that's not the point.
Post by Ian Hickson
Post by Ian Hickson
Actually, I was asked to submit it by the IETF. I agreed to do so
while simultaneously publishing it through the WHATWG. At no point was
it suggested that the WHATWG should stop working on it.
The WHATWG can continue working on the specifications. This Working
Group will probably work on their own version of the specification too.
No, that's not "working together".
Agreed.
Post by Ian Hickson
The outcome will be two different specifications for the same
technology. I don't think that is in the interest of the Internet community.
It's not. Let's work together to create one set of normative requirements,
not two.
Agreed.

I think WebSocket is a big opportunity to move web development to the
next level, but it has to solve, at the very least, its compatibility with
existing HTTP infrastructure, as pointed out by HTTP experts.

Cheers!
--
Francis Brosnan Blazquez <***@aspl.es>
ASPL
Ian Hickson
2010-02-01 12:58:05 UTC
Permalink
Post by Francis Brosnan Blazquez
1) It is possible to send binary content (that includes octets 0x00 and
0xFF) from a browser javascript?
Not currently, but this will be possible in due course. Right now we're
waiting for TC39 to add binary support to JS.

The protocol is written such that it will be able to support it, however.
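[For context on why raw binary is awkward here: in the draft at that time, a text frame is delimited by a 0x00 byte at the start and a 0xFF byte at the end, so those two octets cannot appear inside the payload. A minimal sketch of that sentinel framing; the function names are ours, not from the draft:]

```python
def frame_text(message: str) -> bytes:
    """Wrap a UTF-8 string in the draft's 0x00 ... 0xFF text-frame sentinels."""
    return b"\x00" + message.encode("utf-8") + b"\xff"

def unframe_text(frame: bytes) -> str:
    """Strip the sentinels and decode; raise ValueError on a malformed frame."""
    if not (frame.startswith(b"\x00") and frame.endswith(b"\xff")):
        raise ValueError("not a sentinel-delimited text frame")
    return frame[1:-1].decode("utf-8")
```

[Since 0x00 and 0xFF are reserved as delimiters, arbitrary binary payloads need a different frame type, which is the extension point Ian refers to.]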
Post by Francis Brosnan Blazquez
2) Will be Websocket protocol be able to use existing HTTP proxies?
Yes, see step 2 of the client handshake:

http://tools.ietf.org/html/draft-hixie-thewebsocketprotocol#section-4
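[As an illustration of what that proxy step amounts to on the wire: the client first asks the proxy to open a raw tunnel with an HTTP CONNECT request, then runs the WebSocket handshake through that tunnel. A sketch of the request bytes; the exact HTTP version and header set are a plausible minimum, not quoted from the draft:]

```python
def build_connect_request(host: str, port: int) -> bytes:
    """Build the HTTP CONNECT request a client sends to a proxy
    to ask for a raw TCP tunnel toward the WebSocket server."""
    return (
        f"CONNECT {host}:{port} HTTP/1.1\r\n"
        f"Host: {host}:{port}\r\n"
        "\r\n"
    ).encode("ascii")
```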

HTH,
--
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
SM
2010-01-29 16:30:10 UTC
Permalink
Hi Ian,
Post by Ian Hickson
I'm certainly all in favour of more review -- I'm not sure limiting it to
the IETF is necessarily a good plan though. I would hope we would look for
There may be better ways to get reviews which I am not aware of.
Post by Ian Hickson
there are specific groups we can invite to comment on the spec who are not
yet aware of the spec, I would be happy to contact them; do you have any
suggestions on this front?
No.
Post by Ian Hickson
If the charter is to be relevant, it seems that acknowledging what is
actually being done is important. Personally I do not put much stock in
charters, so it's not a priority for me, but if people are going to refer
to the charter, then we should make sure they refer to something that
reflects reality.
The charter specifies the objectives and sets the milestones. It
would be tedious to find the working group administrative information
and deliverables without a charter.
Post by Ian Hickson
Does it not seem odd that the people who discussed where the spec should
be edited did not include the person editing the spec?
Yes, that would look odd. You took part in the discussion (
http://www.ietf.org/mail-archive/web/hybi/current/msg00753.html )
Post by Ian Hickson
(I did not at any point see agreement on this list that the HyBi group
should take over the WHATWG work without working with the WHATWG. I've
read every e-mail sent to this list since it was created.)
The content of the message at
http://www.ietf.org/mail-archive/web/hybi/current/msg00765.html may
be relevant.

By the way, some browser vendors supported the creation of this
Working Group ( see
http://www.ietf.org/mail-archive/web/hybi/current/msg00604.html and
http://www.ietf.org/mail-archive/web/hybi/current/msg00647.html ).

Could the WHATWG inform this Working Group about the name of the
person(s) speaking on behalf of the group?

Regards,
-sm
Greg Wilkins
2010-01-29 22:12:21 UTC
Permalink
Post by SM
Could the WHATWG inform this Working Group about the name of the
person(s) speaking on behalf of the group?
SM,

I know Ian does not put much stock in charters, but this one
(http://www.whatwg.org/charter) has a tiny footnote that says:

"Queries should be directed either to the mailing list or to Ian Hickson,
who is acting as a spokesman for the group."

But currently there is no convention to indicate when Ian is speaking with
a WHATWG voice, a Google voice, or a Hixie voice.


regards
SM
2010-01-30 00:11:14 UTC
Permalink
Post by Greg Wilkins
I no Ian does not put much stock in charters, but this one
As some of the discussion has been about talking to the WHATWG, my
question is to find out who is talking on behalf of the group.
Post by Greg Wilkins
If one specification does not address significant requirements, and
the other specification does but will take a long time to arrive, then
it may well be in the interest of the internet community to have both.
That's the extreme alternative and it could be unpleasant.
Post by Greg Wilkins
It's about different groups of people, so the term "consensus" is
being used to mean different things in this discussion; hence conflict
and emotion. (There is also a mismatch of expectations, which I will
Yes.

Regards,
-sm
Ian Hickson
2010-02-01 13:18:22 UTC
Permalink
Post by SM
Post by Ian Hickson
Does it not seem odd that the people who discussed where the spec
should be edited did not include the person editing the spec?
Yes, that would look odd. You took part of the discussion (
http://www.ietf.org/mail-archive/web/hybi/current/msg00753.html )
Yes, I did. Yet a "consensus" was reached which I didn't agree with, and
my feedback to that effect was ignored. (In the very e-mail you quote,
from last October, I mentioned that the WHATWG should be involved.)
Post by SM
Post by Ian Hickson
(I did not at any point see agreement on this list that the HyBi group
should take over the WHATWG work without working with the WHATWG. I've
read every e-mail sent to this list since it was created.)
The content of the message at
http://www.ietf.org/mail-archive/web/hybi/current/msg00765.html may be
relevant.
I replied later in that thread:

http://www.ietf.org/mail-archive/web/hybi/current/msg00785.html

...and received no further reply. Note that I already indicated my concern
over this "primary responsibility" business in that thread. However, the
discussion in that thread, which included a suggestion from the chairs of
text mentioning the WHATWG to be added to the charter, apparently had no
effect on the charter.

I sent similar comments again (to the IESG) when the charter was finally
proposed. That was ignored also.
Post by SM
By the way, some browser vendors supported the creation of this Working
Group ( see
http://www.ietf.org/mail-archive/web/hybi/current/msg00604.html and
http://www.ietf.org/mail-archive/web/hybi/current/msg00647.html ).
Could the WHATWG inform this Working Group about the name of the
person(s) speaking on behalf of the group?
For administrative matters, I currently do.

For technical matters, the WHATWG has over a thousand subscribers and they
speak for themselves on the WHATWG mailing list.
--
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
Jamie Lokier
2010-01-29 22:21:24 UTC
Permalink
Post by SM
The WHATWG can continue working on the specifications. This Working
Group will probably work on their own version of the specification
too. The outcome will be two different specifications for the same
technology. I don't think that is in the interest of the Internet community.
If one specification does not address significant requirements, and
the other specification does but will take a long time to arrive, then
it may well be in the interest of the internet community to have both.

-- Jamie
Salvatore Loreto
2010-01-29 06:13:44 UTC
Permalink
Hi,

thanks for moving back to the original question.
My original mail was a call to discuss the technical aspects openly
on the mailing list, so as to let the spec move forward.


I agree that it would be useful to use cookies with WebSocket;
however, I have some perhaps stupid doubts and questions about their usage
that I'd like clarified.

1) Is the usage of cookies optional, or is it mandatory?
What will happen in the few cases where the WebSocket is
established without the user having already logged into a page?

2) Another aspect that leaves me doubtful is leveraging HTTP features for
WebSocket.
If I remember correctly (please correct me if I am wrong), Ian
Hickson has several times underlined the fact that WebSocket
is a protocol independent from HTTP, which reuses the HTTP syntax in
the handshake only for opportunistic reasons;
however, starting to lean on more and more HTTP features makes me
think that WebSocket is not, or can no longer be considered,
independent.

3) Ian Fette, when you say "...the server logic to check whether a user
is already logged in..." it appears to me that you are relying on the
assumption that a specific technology/language is used to implement
this usage.
A protocol should be neutral with respect to languages and technologies,
and just describe the steps that need to be followed.


thanks in advance for all the clarifications
/Sal
www.sloreto.com
Post by Ian Fette (イアンフェッティ)
So, moving back to the original question... I am very concerned here.
A relatively straightforward question was asked, with rationale for
the question. "May/Should WebSocket use HttpOnly cookie while
Handshaking?
I think it would be useful to use HttpOnly cookie on WebSocket so that
we could authenticate the WebSocket connection by the auth token
cookie which might be HttpOnly for security reason."
It seems reasonable to assume that Web Sockets will be used in an
environment where users are authenticated, and that in many cases the
Web Socket will be established once the user has logged into a page
via HTTP/HTTPS. It seems furthermore reasonable to assume that a
server may track the logged-in-ness of the client using a HttpOnly
cookie, and that the server-side logic to check whether a user is
already logged in could easily be leveraged for Web Sockets, since it
starts as an HTTP connection that includes cookies and is then
upgraded. It seems like a very straightforward thing to say "Yes, it
makes sense to send the HttpOnly cookie for Web Socket connections".
Instead, we are bogged down in politics.
How are we to move forward on this spec? We have multiple server
implementations, there are multiple client implementations, if a
simple question like this gets bogged down in discussions of WHATWG vs
IETF we are never going to get anywhere. Clearly there are people on
both groups who have experience in the area and valuable contributions
to add, so how do we move forward? Simply telling the folks on WHATWG
that they've handed the spec off to IETF is **NOT** in line with what
I recall at the IETF, where I recall agreeing to the two WGs working
in concert with each other. What we have before us is a very trivial
question (IMO) that should receive a quick response. Can we use this
as a proof of concept that the two groups can work together? If so,
what are the concrete steps?
If we can't figure out how to move forward on such a simple issue, it
seems to me that we are in an unworkable situation, and should
probably just continue the work in WHATWG through to a final spec, let
implementations settle for a while, and then hand it off to IETF for
refinement and finalization in a v2 spec... (my $0.02)
-Ian
Post by Julian Reschke
Post by Ian Hickson
...
Post by Greg Wilkins
The WHATWG submitted the document to the IETF
I don't think that's an accurate portrayal of anything that has occurred,
unless you mean the way my commit script uploads any changes to the draft
to the tools.ietf.org scripts. That same script also submits the various
documents generated from that same source document to the W3C and WHATWG
source version control repositories.
...
By submitting an Internet Draft according to BCP 78 you grant the IETF
certain rights; it's not relevant whether it was a script or yourself
using a browser or a MUA who posted it.
You may want to check
<http://tools.ietf.org/html/bcp78#section-5.3>.
With the exception of the trademark rights, which I don't have and
therefore cannot grant, the rights listed there are a subset of the rights
the IETF was already granted by virtue of the WHATWG publishing the spec
under a very liberal license. So that doesn't appear to be relevant.
--
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
_______________________________________________
hybi mailing list
https://www.ietf.org/mailman/listinfo/hybi
Ian Fette (イアンフェッティ)
2010-01-29 06:22:15 UTC
Permalink
Post by Salvatore Loreto
Hi,
thanks for moving back to the original question.
My original mail was a call to discuss the technical aspects openly on
the mailing list, so as to let the spec move forward.
I agree that it would be useful to use cookies with WebSocket;
however, I have some perhaps stupid doubts and questions about their usage
that I'd like clarified.
1) Is the usage of cookies optional, or is it mandatory?
What will happen in the few cases where the WebSocket is established
without the user having already logged into a page?
I assume this would be left up to the application. If the server is
expecting some sort of authentication cookie and doesn't get one, it can
attempt to do authentication in an application-specific manner over the
websocket connection, it can close the websocket connection, it can transmit
some application-specific error message to the client over the websocket
connection, etc.
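[As a sketch of the server-side pattern under discussion: during the handshake the server can look up the auth cookie exactly as it would for a normal HTTP request, and refuse or downgrade the connection if it is absent or unknown. The cookie name "session" and the session store are illustrative, not part of any spec:]

```python
from http.cookies import SimpleCookie

def authenticated(handshake_headers: dict, sessions: dict) -> bool:
    """Return True if the handshake carried a known session cookie.

    `handshake_headers` maps header names to values as received in the
    WebSocket opening handshake; `sessions` maps session tokens to users.
    """
    jar = SimpleCookie(handshake_headers.get("Cookie", ""))
    morsel = jar.get("session")
    return morsel is not None and morsel.value in sessions
```

[The point of the thread is that if the auth token is HttpOnly, this check only works when the browser includes HttpOnly cookies in the handshake request.]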
Post by Salvatore Loreto
2) Another aspect that leaves me doubtful is leveraging HTTP features for
WebSocket.
If I remember correctly (please correct me if I am wrong), Ian Hickson
has several times underlined the fact that WebSocket
is a protocol independent from HTTP, which reuses the HTTP syntax in the
handshake only for opportunistic reasons;
however, starting to lean on more and more HTTP features makes me
think that WebSocket is not, or can no longer be considered,
independent.
Cookies are already sent with WebSocket; the only question is whether that
includes or excludes cookies that are HttpOnly.
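[That open question can be stated mechanically: when the client assembles the Cookie header for the handshake, does the HttpOnly flag exclude a cookie or not? A hypothetical sketch of that decision; the cookie-jar representation is ours, not from any spec:]

```python
def cookie_header(jar: list, include_httponly: bool) -> str:
    """Build the Cookie header a client would attach to the handshake.

    `jar` is a list of (name, value, httponly) tuples; the thread's
    question is whether include_httponly should be True for WebSocket.
    """
    pairs = [f"{name}={value}" for (name, value, httponly) in jar
             if include_httponly or not httponly]
    return "; ".join(pairs)
```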
Post by Salvatore Loreto
3) Ian Fette, when you say "...the server logic to check whether a user is
already logged in..." it appears to me that you are relying on the
assumption that a specific technology/language is used to implement
this usage.
A protocol should be neutral with respect to languages and technologies,
and just describe the steps that need to be followed.
I was describing a common use case. Using cookies to track whether someone
is logged in or not is extremely common. I'm not relying on a specific
technology or language, just pointing out that this is a very common,
language-independent paradigm (agnostic to what languages you're using on
the client or the server). I wasn't asking that the protocol specify
anything about languages or being logged in, merely that this extremely
common use case means we should consider supporting sending HttpOnly cookies
when Web Socket connections are established (and either way, the spec should
be clear about whether or not they are sent.)
Post by Salvatore Loreto
thanks in advance for all the clarifications
/Sal
www.sloreto.com
So, moving back to the original question... I am very concerned here. A
relatively straightforward question was asked, with rationale for the
question. "May/Should WebSocket use HttpOnly cookie while Handshaking?
I think it would be useful to use HttpOnly cookie on WebSocket so that we
could authenticate the WebSocket connection by the auth token cookie which
might be HttpOnly for security reason."
It seems reasonable to assume that Web Sockets will be used in an
environment where users are authenticated, and that in many cases the Web
Socket will be established once the user has logged into a page via
HTTP/HTTPS. It seems furthermore reasonable to assume that a server may
track the logged-in-ness of the client using an HttpOnly cookie, and that the
server-side logic to check whether a user is already logged in could easily
be leveraged for Web Sockets, since it starts as an HTTP connection that
includes cookies and is then upgraded. It seems like a very straightforward
thing to say "Yes, it makes sense to send the HttpOnly cookie for Web Socket
connections".
Instead, we are bogged down in politics.
How are we to move forward on this spec? We have multiple server
implementations, there are multiple client implementations, if a simple
question like this gets bogged down in discussions of WHATWG vs IETF we are
never going to get anywhere. Clearly there are people on both groups who
have experience in the area and valuable contributions to add, so how do we
move forward? Simply telling the folks on WHATWG that they've handed the
spec off to IETF is **NOT** in line with what I recall at the IETF, where I
recall agreeing to the two WGs working in concert with each other. What we
have before us is a very trivial question (IMO) that should receive a quick
response. Can we use this as a proof of concept that the two groups can work
together? If so, what are the concrete steps?
If we can't figure out how to move forward on such a simple issue, it
seems to me that we are in an unworkable situation, and should probably just
continue the work in WHATWG through to a final spec, let implementations
settle for a while, and then hand it off to IETF for refinement and
finalization in a v2 spec... (my $0.02)
-Ian
Post by Ian Hickson
Post by Julian Reschke
Post by Ian Hickson
...
Post by Greg Wilkins
The WHATWG submitted the document to the IETF
I don't think that's an accurate portrayal of anything that has occurred,
unless you mean the way my commit script uploads any changes to the draft
to the tools.ietf.org scripts. That same script also submits the various
documents generated from that same source document to the W3C and WHATWG
source version control repositories.
...
By submitting an Internet Draft according to BCP 78 you grant the IETF
certain rights; it's not relevant whether it was a script or yourself
using a browser or a MUA who posted it.
You may want to check <http://tools.ietf.org/html/bcp78#section-5.3>.
With the exception of the trademark rights, which I don't have and
therefore cannot grant, the rights listed there are a subset of the rights
the IETF was already granted by virtue of the WHATWG publishing the spec
under a very liberal license. So that doesn't appear to be relevant.
--
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
Greg Wilkins
2010-01-29 14:13:14 UTC
Permalink
Post by Ian Fette (イアンフェッティ)
cookies are already sent with WS, the only question is whether that
includes or excludes cookies that are HttpOnly
The upgrade request is an HTTP request (well, at least it should be
an HTTP request, and not just something that strongly resembles one),
so I believe HttpOnly cookies should be included.

This would not expose the cookie and its value to the
javascript in the browser, nor can I think of any way that this reduces
the security provided by HttpOnly.


regards
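For concreteness, a sketch of what the opening handshake could look like with an HttpOnly cookie included (the path, host, and cookie name/value here are illustrative, not taken from the draft):

```http
GET /chat HTTP/1.1
Host: example.com
Upgrade: WebSocket
Connection: Upgrade
Origin: http://example.com
Cookie: auth_token=s3cret
```

The auth_token cookie would have been set on an earlier HTTP response with `Set-Cookie: auth_token=s3cret; HttpOnly`. Because the handshake is sent by the browser itself rather than by page script, including the cookie here never exposes its value to javascript.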
Maciej Stachowiak
2010-01-30 02:28:41 UTC
Permalink
Post by Greg Wilkins
Post by Ian Fette (イアンフェッティ)
cookies are already sent with WS, the only question is whether that
includes or excludes cookies that are HttpOnly
The upgrade request is an HTTP request (well, at least it should be
an HTTP request, and not just something that strongly resembles one),
so I believe HttpOnly cookies should be included.
This would not expose the cookie and its value to the
javascript in the browser, nor can I think of any way that this reduces
the security provided by HttpOnly.
I agree. The purpose of HttpOnly is to prevent the cookie from being seen by scripting APIs, not to limit the network protocols over which it is provided. Thus, sending it over WebSocket connections would be in line with its purpose, and I think this is the case whether or not we think the WebSocket upgrade request is or is not HTTP.

Regards,
Maciej
Salvatore Loreto
2010-02-01 10:26:31 UTC
Permalink
-as individual-

I agree too, and I see the need to use HttpOnly.
I was just trying to highlight that in the end WebSocket does not only
share a port with the HTTP/Web server;
it is also supposed to share other information (i.e. in this case cookies!).
The spec implicitly assumes that the WebSocket server can access the
information that the Web server possesses.
Post by Salvatore Loreto
Hi,
thanks for moving back to the original question.
My original mail was a call to discuss the technical aspects
openly on the mailing list, so as to let the spec move forward.
I agree that it would be useful to use cookies on WebSocket,
however I have some perhaps stupid doubts and questions on their
usage that I'd like to have clarified
1) Is the usage of cookies optional or is it mandatory?
What will happen in the cases where the WebSocket is
established without the user having already logged into a page?
I assume this would be left up to the application. If the server is
expecting some sort of authentication cookie and doesn't get one, it
can attempt to do authentication in an application specific manner
over the websocket connection, it can close the websocket connection,
it can transmit some application-specific error message to the client
over the websocket connection, etc.
I'd like the spec to be more detailed about the correct server behaviour in
this situation.


regards
Sal
Post by Salvatore Loreto
Post by Greg Wilkins
Post by Ian Fette (イアンフェッティ)
cookies are already sent with WS, the only question is whether that
includes or excludes cookies that are HttpOnly
The upgrade request is an HTTP request (well, at least it should be
an HTTP request, and not just something that strongly resembles one),
so I believe HttpOnly cookies should be included.
This would not expose the cookie and its value to the
javascript in the browser, nor can I think of any way that this reduces
the security provided by HttpOnly.
I agree. The purpose of HttpOnly is to prevent the cookie from being
seen by scripting APIs, not to limit the network protocols over which
it is provided. Thus, sending it over WebSocket connections would be
in line with its purpose, and I think this is the case whether or not
we think the WebSocket upgrade request is or is not HTTP.
Regards,
Maciej
Wenbo Zhu
2010-01-28 11:05:46 UTC
Permalink
On Thu, Jan 28, 2010 at 12:12 AM, Fumitoshi Ukai (鵜飼文敏)
May/Should WebSocket use HttpOnly cookie while Handshaking?
WebSocket is a "stateful" protocol, and its cookie support is only
applicable when interacting with the HTTP context; the spec should
therefore simply refer to what's specified for HTTP for clarification ...

- Wenbo

I think it would be useful to use HttpOnly cookie on WebSocket so that we
could authenticate the WebSocket connection by the auth token cookie which
might be HttpOnly for security reason.
http://www.ietf.org/id/draft-ietf-httpstate-cookie-02.txt
--
ukai
Jamie Lokier
2010-01-29 22:17:21 UTC
Permalink
Post by Francis Brosnan Blazquez
1) Is it possible to send binary content (that includes octets 0x00 and
0xFF) from browser javascript?
No, the current WebSocket browser API does not support binary frames.

It has been suggested that it may in future, when:

- Javascript acquires some additional data type for
handling binary data (and WebSocket uses it)
or
- the WebSocket API acquires methods for sending and receiving
binary and interconverting it to something Javascript can use,
like arrays of integers or octets re-represented as Unicode
code points in a string.
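The second option listed above (octets re-represented as Unicode code points in a string) can be sketched as a pair of helpers. Python is used here only for illustration; in a browser this mapping would be written in javascript, and the function names are made up:

```python
def octets_to_text(data: bytes) -> str:
    # Re-represent each octet as the Unicode code point with the same
    # numeric value (U+0000..U+00FF), so an ordinary string API can
    # carry arbitrary byte values.
    return "".join(chr(b) for b in data)

def text_to_octets(text: str) -> bytes:
    # Inverse mapping: each code point below 256 becomes one octet.
    return bytes(ord(c) for c in text)
```

The round trip is lossless for any byte sequence, at the cost of the transport further encoding code points U+0080..U+00FF as multi-byte UTF-8 on the wire.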
Post by Francis Brosnan Blazquez
2) Will the WebSocket protocol be able to use existing HTTP proxies?
It has been stated many times that the current WebSocket protocol is
designed to *intentionally* fail when it meets a proxy. (Except a
WebSocket-aware proxy.)

This failure is not guaranteed, so you can't guarantee a fast fallback
to an alternative protocol, but that is a hard problem to solve.

So the answer is no, it does not use existing proxies.
Post by Francis Brosnan Blazquez
Again this is somewhat circular, and it conflicts with WebSocket's goal.
Having WebSocket as an RFC is an additional guarantee that it has completed
a process ensuring that there was consensus and that experts have reviewed
the work.
You want consensus, but at the same time it's not a goal to have WebSocket
published as an RFC. I don't understand this.
This is how it looks to me:

It's about different groups of people, so the term "consensus" is
being used to mean different things in this discussion; hence conflict
and emotion. (There is also a mismatch of expectations, which I will
come to later.)

Prominent browser vendors *will* achieve consensus, as you can see on
this very thread, they are the ones keen to "avoid the politics" and
get straight to rolling it out. They seem to have implemented it
already. It's not surprising that substantive changes and delays are
unwelcome among that group - they want to use it in real products and
web services right away, and are waiting for minor issues to be agreed
and a signoff. As far as they are concerned, the WebSocket design
phase happened quite a long time ago (perhaps years), and is now near
its end.

But (at least some) people working on other parts of the web
infrastructure have not been as involved in WebSocket development,
and discovered it quite recently through other avenues. It's clear
there is not a consensus which includes these people (I include myself).

Moreover, there is something of a fear from this side that new
protocol deployment over port 80 is not something to be done lightly,
because using port 80 for something other than HTTP (and which isn't
compatible with HTTP) has *many* infrastructure consequences, whether
we like it or not. (The list of consequences is too long for this
email. Greg gave a good list of relevant areas.)

Despite some wishes, there's a fair chance that proxies - including
"hidden" proxies (used by some ISPs and corporate and government
firewalls) - will have to learn to accommodate WebSocket, if it becomes
widely used.

Some fear that a change affecting infrastructure like that can only be
done every ten years or so (because it depends hugely on collective
adoption), and so should not be done without considerable analysis of
its consequences for the infrastructure.

And, also, if such a change will happen, it is a *rare* opportunity to
combine the technical experience from different areas to make
something that works very well as a foundation for the future. We
now have a *lot* of experience with web architecture, its performance
characteristics, and what structures tend to lead to good implementations
of the whole application stack these days.
Post by Francis Brosnan Blazquez
Post by Ian Hickson
I don't see the difference between a formal agreement and individuals
working something out. I'd be glad to work something out. The first step
would be to change from the attitude of "the HyBi group is working on this
and the WHATWG is welcome to work on something similar as well" to "the
HyBi group and the WHATWG are working together on this".
I agree with this, but you can't say at the same time that it is not a goal
to complete the RFC process... which is the same as saying "the WHATWG is
working on this and the HyBi group is welcome to work on something
similar as well".
Here's how I see it, from my perspective as *not* a WHATWG
participant, but a participant on the Hybi list only:

When the Hybi mailing list started, there was an invitation to discuss
development of future Hybi protocols on the list.

WebSocket appeared quickly on the list, with an invitation to
comments. Nothing wrong with that.

However, all substantive comments on the Hybi list concerning possible
improvements to WebSocket were met with "no, I don't agree; your idea
will not be considered for WebSocket". Wording changes and minor
tweaks were accommodated, however.

It was, essentially, a frustrating waste of time to explore technical
protocol issues and ideas, and it became clear that WebSocket was
beyond that stage in its design.

I don't blame Ian; I think that WebSocket was simply already at a
later design stage - it had been gestating for quite a long time
before it arrived on the Hybi list already, and despite everything, it
does address a particular set of problems, even if some of us do not
agree on how well it does so.

=> To a large extent, I think there has been a mismatch of
expectations between what the Hybi list was set up to channel (the
expertise of interested parties from different backgrounds relating to
protocol design, network characteristics, etc. - IETF sort of stuff),
and the arrival of WebSocket which doesn't really satisfy the same goals.

And I think part of the emotion comes from the fear that we might end
up with *only* that as our available bidirectional transport through
web browsers for many years to come, and some of us foresee unwelcome
limitations or unintended consequences from that.

Mismatched expectations are often frustrating, but they aren't
necessarily anyone's fault.

And fears that we'll be stuck with one thing for years to come may not
be founded. I think Greg has shown a promising way forward by
suggesting that we settle on WebSocket/1.0 largely as it is, and apply
collective knowledge to moving it forwards into WebSocket/1.1.

(If we go down that route, I would remind us to try hard to make it
easier to comply with than HTTP/1.1 was (or harder to screw up!).)

-- Jamie