My WebSocket-based web application works flawlessly over the `ws` protocol, but I run into trouble when switching to `wss`. Most messages between client and server arrive as expected, but occasionally I see one of the following errors in the Chrome console:
"Could not decode a text frame as UTF-8."
or
"Invalid frame header"
When this happens, Chrome terminates the connection. The issue arises whether I serve `wss` directly from the server (which runs .NET with SuperWebSocket) or use Apache's mod_proxy_wstunnel to reverse proxy incoming `wss` connections to the plain `ws` backend.
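
For reference, the reverse-proxy variant is configured along these lines (the hostname, port, paths, and the `/socket` location are placeholders, not my real values):

```apache
<VirtualHost *:443>
    ServerName example.com

    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.pem
    SSLCertificateKeyFile /etc/ssl/private/example.key

    # mod_proxy_wstunnel forwards the decrypted stream to the plain-ws backend
    ProxyPass        /socket ws://localhost:2012/
    ProxyPassReverse /socket ws://localhost:2012/
</VirtualHost>
```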
Interestingly, a basic "echo" server behind the same Apache configuration does not exhibit the problem, which suggests there is something peculiar about the data being transmitted through the SuperWebSocket API (even though the messages are valid UTF-8 and work fine over `ws`).
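
The direct-`wss` setup is roughly the following. This is a minimal sketch rather than my exact code: the port, certificate path, and password are placeholders, and the TLS options use the `ServerConfig`/`CertificateConfig` types from the SuperSocket 1.x API:

```csharp
using System;
using SuperSocket.SocketBase.Config;
using SuperWebSocket;

class Program
{
    static void Main()
    {
        var server = new WebSocketServer();

        // Serve wss directly: TLS is enabled through the server config.
        server.Setup(new ServerConfig
        {
            Ip = "Any",
            Port = 8443,                 // placeholder port
            Security = "tls",
            Certificate = new CertificateConfig
            {
                FilePath = "server.pfx", // placeholder certificate
                Password = "secret"      // placeholder password
            }
        });

        // The payloads that trigger the Chrome errors are ordinary text
        // messages; they are valid UTF-8 and arrive intact over plain ws.
        server.NewMessageReceived += (session, message) => session.Send(message);

        server.Start();
        Console.ReadLine();
        server.Stop();
    }
}
```

The same `NewMessageReceived` handler runs without errors when `Security` is left unset and the client connects over plain `ws`.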
The puzzling part is how merely changing the transport could cause this at all. Hence my question:
Is it possible for a WebSocket frame to arrive intact when sent without TLS but become corrupted when sent with TLS?