Your communication skills in English are excellent.
Summary - Recommended timeouts range from 5-30 seconds, depending on how quickly you need to detect dropped clients.
I recommend setting long poll timeouts to roughly 100 times the server's "request" time. That rule of thumb argues for timeouts between 5-20 seconds, depending on how quickly you need to detect dropped connections and vanished clients.
Reasoning Behind This Recommendation:
- Many examples use timeouts of 20-30 seconds.
- Most routers will quietly terminate connections that remain open for too long.
- Clients can suddenly vanish due to network issues or entering low-power mode.
- Servers cannot detect that a connection has been dropped, so a dead connection ties up a socket and resources until the timeout fires. A 5-minute timeout is therefore unwise: vanished clients can effectively mount a DoS against your server (see the sketch after this list).
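To make that last point concrete, here is a minimal Go sketch of a long-poll handler, assuming a 100 ms request time and the 100x rule (the `/poll` route and the `events` channel are illustrative placeholders, not anything from the original). The bounded timeout is what guarantees the socket gets freed even when a client has silently disappeared.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// events stands in for whatever mechanism your application uses to
// signal that new data is available for waiting clients.
var events = make(chan string)

// longPollHandler holds the request open until data arrives, the timeout
// fires, or the client is known to have gone away.
func longPollHandler(w http.ResponseWriter, r *http.Request) {
	const timeout = 10 * time.Second // ~100x a 100 ms request

	select {
	case msg := <-events:
		fmt.Fprint(w, msg) // data arrived: respond immediately
	case <-time.After(timeout):
		w.WriteHeader(http.StatusNoContent) // nothing yet: client should re-poll
	case <-r.Context().Done():
		// Best effort only: a client that drops off the network without
		// closing the connection usually won't trigger this, which is
		// exactly why the bounded timeout above matters.
	}
}

func main() {
	http.HandleFunc("/poll", longPollHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```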
Hence, a timeout of less than 30 seconds is considered standard. How should you decide?
What are the benefits and costs of holding long-poll connections open for longer?
Assuming a typical request takes 100ms of server "request" time for connecting, querying a database, and sending a response:
- With a 10-second timeout, that 100 ms of work is 1% of the connection's lifetime (100 / 10,000 = 0.01 = 1%).
- With a 20-second timeout, it drops to 0.5%.
- With a 30-second timeout, it is 0.33%, and so on.
Beyond 30 seconds, the most you could possibly reclaim by extending the timeout further is that remaining 0.33% of overhead. Therefore, there is little justification for timeouts over 30s.
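A tiny Go sketch that reproduces the arithmetic above, using the same assumed 100 ms of real server work per poll:

```go
package main

import "fmt"

func main() {
	const requestTime = 0.1 // seconds of real server work per poll (the 100 ms assumption above)

	// Fraction of each long-poll cycle spent doing that work, for various timeouts.
	for _, timeout := range []float64{5, 10, 20, 30, 60} {
		fmt.Printf("timeout %2.0fs -> request overhead %.2f%%\n", timeout, requestTime/timeout*100)
	}
}
```

This prints 2% at 5s, 1% at 10s, 0.5% at 20s, 0.33% at 30s, and only 0.17% at 60s, which is the diminishing-returns curve the argument rests on.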
Conclusion:
Set long poll timeouts to roughly 100 times the server's "request" time, which puts them in the 5-20 second range depending on how quickly you need to detect disconnected clients.
Best practice: Keep client and server timeouts in sync, with the client's slightly longer to allow for network round-trip time, so the client never gives up just before the server responds. For instance: server = 100x request time, client = 102x request time.
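Continuing the 100 ms example, a client-side sketch in Go might look like this (the endpoint URL and the exact 200 ms margin are assumptions for illustration): the client's timeout sits a little above the server's, so the server always gets to answer first.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	const serverTimeout = 10 * time.Second                     // server = 100x a 100 ms request
	const clientTimeout = serverTimeout + 200*time.Millisecond // client = a little longer (~102x),
	// leaving headroom for the network round-trip so the client never aborts
	// a response the server is just about to send.

	client := &http.Client{Timeout: clientTimeout}

	for {
		resp, err := client.Get("http://localhost:8080/poll") // hypothetical long-poll endpoint
		if err != nil {
			time.Sleep(time.Second) // timed out or network error: back off, then re-poll
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK && len(body) > 0 {
			fmt.Printf("event: %s\n", body)
		}
		// A 204 just means the server's timeout fired first; loop and poll again.
	}
}
```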
Best practice: Long polling is often a better choice than WebSockets due to its simplicity, scalability, and smaller HTTP attack surface.