In the process of creating a whiteboard app for a web conferencing platform, I have found that most libraries use two separate canvases: an upper one for active drawing and a lower one that stores everything already drawn (clearing the upper canvas after each stroke improves performance).
My next challenge is to stream the content of these canvases so that other users can view it in real time.
Option 1:
Use `captureStream()` on each canvas and merge the resulting video tracks into a single stream with `new MediaStream([...bottomTracks, ...topTracks])`, transfer that stream over WebRTC, extract the tracks on the receiving side, wrap each track in its own stream with `new MediaStream([stream.getVideoTracks()[0]])`, and play the two streams on absolutely positioned video elements. While this method works, my boss is not fond of it for reasons unknown. Is there a downside to this approach?
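For concreteness, here is roughly what I mean. This is a minimal sketch; `pc` (an established `RTCPeerConnection`), `bottomCanvas`, `topCanvas`, `bottomVideo`, and `topVideo` are placeholders for my actual objects:

```javascript
// Sender: one track per canvas, bundled into a single MediaStream.
const bottomTracks = bottomCanvas.captureStream(30).getVideoTracks();
const topTracks = topCanvas.captureStream(30).getVideoTracks();
const combined = new MediaStream([...bottomTracks, ...topTracks]);
for (const track of combined.getTracks()) {
  pc.addTrack(track, combined); // both tracks share one stream id
}

// Receiver: wrap each incoming track in its own stream and play it on
// absolutely positioned, stacked <video> elements (assumed to have the
// autoplay and muted attributes set).
const videos = [bottomVideo, topVideo];
let i = 0;
pc.ontrack = ({ track }) => {
  // ontrack fires in m-line order, which here matches the addTrack order,
  // so the first track should be the bottom layer.
  videos[i++ % videos.length].srcObject = new MediaStream([track]);
};
```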
Option 2:
Create a third, off-screen canvas, periodically `drawImage()` the content of both canvases onto it, and call `captureStream()` on that single composite canvas to stream the content.
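Again a rough sketch with the same placeholder names, compositing once per display frame:

```javascript
// A hidden third canvas that mirrors both layers; only it gets streamed.
const composite = document.createElement('canvas');
composite.width = bottomCanvas.width;
composite.height = bottomCanvas.height;
const ctx = composite.getContext('2d');

(function compose() {
  ctx.clearRect(0, 0, composite.width, composite.height);
  ctx.drawImage(bottomCanvas, 0, 0); // committed drawings underneath
  ctx.drawImage(topCanvas, 0, 0);    // in-progress stroke on top
  requestAnimationFrame(compose);    // re-composite every display frame
})();

const stream = composite.captureStream(30); // one track to transfer
pc.addTrack(stream.getVideoTracks()[0], stream);
```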
Are there any other, more efficient ways to achieve this?
Update: Why am I using two canvases?
The main reason for using two canvases is to avoid redrawing everything whenever already-drawn content has to change. This can happen in two scenarios:
- Smoothing out lines that have been drawn by mouse drag, touch movement, or pen (to create a more natural writing feel).
- Drawing shapes by mouse drag, touch movement, or pen.
While I could redraw everything with each change, I believe this would result in lower performance on lower-end devices. The two canvases are positioned one on top of the other, with user drawings done on the upper canvas. Once the drawing is completed and any enhancements are added, the line or shape is copied to the bottom canvas, and the upper canvas is then cleared.
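That commit step looks roughly like this (`commitStroke` is a hypothetical handler, invoked once a stroke or shape is finalized):

```javascript
const bottomCtx = bottomCanvas.getContext('2d');
const topCtx = topCanvas.getContext('2d');

function commitStroke() {
  // Smoothing / shape enhancement has already run on the top canvas.
  bottomCtx.drawImage(topCanvas, 0, 0); // persist the finished stroke
  topCtx.clearRect(0, 0, topCanvas.width, topCanvas.height); // reset layer
}

topCanvas.addEventListener('pointerup', commitStroke);
```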