Our team is looking for assistance in developing our dynamic point cloud project (so-called "4D video"). Briefly, the project uses an array of Microsoft Kinects to reconstruct humans and environments in 3D in real time for broadcasting in AR/VR.
The current tasks are to develop algorithms for the following (for this price, choose the 2 or 3 tasks you can do best):
- uniting point clouds in real time;
- calibrating and positioning the devices (up to four Microsoft Kinects) relative to each other;
- improving the overall quality of the resulting models (of humans);
- data compression similar to XYZRGB.
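For the point cloud uniting task, once per-device extrinsics are known from calibration, merging reduces to transforming each device's cloud into a common frame and concatenating. A minimal NumPy sketch, assuming each cloud is an (N, 6) XYZRGB array and each extrinsic is a known 4x4 rigid transform (names and array layout are illustrative, not from the original posting):

```python
import numpy as np

def merge_point_clouds(clouds, extrinsics):
    """Transform each XYZRGB cloud into a common frame and concatenate.

    clouds     -- list of (N_i, 6) arrays: x, y, z, r, g, b per point
    extrinsics -- list of 4x4 rigid transforms (one per Kinect),
                  assumed known from a prior calibration step
    """
    merged = []
    for cloud, T in zip(clouds, extrinsics):
        xyz = cloud[:, :3]
        # Apply the homogeneous transform to the XYZ part;
        # RGB columns are carried over unchanged.
        xyz_h = np.hstack([xyz, np.ones((len(xyz), 1))])
        xyz_t = (T @ xyz_h.T).T[:, :3]
        merged.append(np.hstack([xyz_t, cloud[:, 3:]]))
    return np.vstack(merged)

# Example: two single-point clouds; the second device is offset 1 m on X.
cloud_a = np.array([[0.0, 0.0, 0.0, 255.0, 0.0, 0.0]])
cloud_b = np.array([[0.0, 0.0, 0.0, 0.0, 255.0, 0.0]])
T_a = np.eye(4)
T_b = np.eye(4)
T_b[:3, 3] = [1.0, 0.0, 0.0]
merged = merge_point_clouds([cloud_a, cloud_b], [T_a, T_b])
```

A real-time version would run this per frame on the GPU and likely follow it with voxel downsampling and surface reconstruction, but the frame-alignment step itself is just this rigid transform.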
At least 2 Microsoft Kinect v2 devices and 2 PCs are required to fulfill the tasks. If you are in Moscow, Russia, we have an office there with all the necessary equipment.