
I am working on computer-vision projects and am looking for a way to build a desktop application with a modern UI that can display live video processed by my algorithms. The code that produces the video stream and the web browser will run on the same machine.

I have native C++ code that captures video from an IP camera and does some processing on the captured frames, drawing the results on them. For example, it detects human faces and draws bounding boxes on the video frames. The video-processing code is quite heavy and cannot be ported to JavaScript.

After that I want to display these frames, with the boxes on them, in a web-based GUI inside a desktop application built with NW.js.

The question is: how can I display the processed frames with the lowest possible CPU overhead?

I could compress each frame to JPEG (or PNG, BMP, ...) and feed it to an <img> tag, but that would cost far too much CPU; it is the worst possible solution.
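To be concrete, this is roughly the approach I mean and want to avoid (a hypothetical sketch; in reality the JPEG bytes would come from my C++ code via a Node addon or a local socket, not from the dummy array used here):

```typescript
// Turn a JPEG-compressed frame (raw bytes) into a data URL for an <img> tag.
// Buffer is available in NW.js because it embeds Node.js.
function jpegToDataUrl(jpegBytes: Uint8Array): string {
  const base64 = Buffer.from(jpegBytes).toString("base64");
  return "data:image/jpeg;base64," + base64;
}

// In the page, per frame (element id is hypothetical):
//   (document.getElementById("view") as HTMLImageElement).src = jpegToDataUrl(frame);
```

Doing this 25 times per second per camera means a full JPEG encode plus a base64 round-trip for every frame, which is exactly the CPU cost I am trying to eliminate.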

The perfect solution would be for the native code to write frame data directly into a WebGL texture on the GPU, but I can't find a way to do that.
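From the JavaScript side, the upload itself is straightforward; a minimal sketch (assuming raw RGBA frames) looks like this. The missing piece is getting the pixel data from native code into that typed array, or ideally into the texture, without extra copies:

```typescript
// Expected byte length of one raw frame (pure helper, also useful for sanity checks).
function frameBytes(width: number, height: number, bytesPerPixel: number): number {
  return width * height * bytesPerPixel;
}

// Upload one raw RGBA frame into an existing WebGL texture.
// `rgba` would have to arrive from the native side somehow (addon, shared memory...).
function uploadFrame(gl: WebGLRenderingContext, tex: WebGLTexture,
                     width: number, height: number, rgba: Uint8Array): void {
  if (rgba.length !== frameBytes(width, height, 4)) {
    throw new Error("unexpected frame size");
  }
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, rgba);
}
```

What I cannot find is a way for the C++ side to fill the GPU texture directly, bypassing the JS typed array entirely.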

A possible compromise would be transferring raw RGB32 (or RGB24) frames, but the image stream is quite heavy: about 200 MB per second for a single 2 MP / 25 fps IP camera. In the future I want to be able to use up to 4 cameras on one PC.
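The back-of-the-envelope arithmetic behind that figure (my own estimate, assuming a 1920x1080 frame counts as "2 MP"):

```typescript
// Raw transfer bandwidth for uncompressed frames.
function bytesPerSecond(width: number, height: number,
                        bytesPerPixel: number, fps: number): number {
  return width * height * bytesPerPixel * fps;
}

const oneCameraRgb32 = bytesPerSecond(1920, 1080, 4, 25);     // ~207 MB/s
const fourCamerasRgb24 = 4 * bytesPerSecond(1920, 1080, 3, 25); // ~622 MB/s
```

So even the cheaper RGB24 variant across 4 cameras approaches the practical throughput of a memcpy-based pipeline, which is why the number of copies per frame matters so much here.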

I have experience with Qt, and this task could be solved with Qt, but I really need an HTML5-based GUI with its animations, styles and other features.

My target OS is Windows, but a cross-platform solution would be welcome. If Windows is the problem, I can use Linux.
