We want to develop a standalone application that runs on macOS. The application will present rendered, animated 3D models within a full-screen canvas and will interface intelligently with external hardware.
The target hardware for application development will be Apple Mac Mini computers. We would also like the application to run on MacBook Pro models.
Due to the complexity of this project, we prefer to engage an entire team that can handle the full development as specified below. We want to establish clear, distinct milestones for completion, and to outline a project plan that releases several revisions of this software, each adding significant enhancements. Everything specified below is therefore subject to prioritization based on the complexity of each element.
First and foremost we want a project manager. We will also need developers with a range of skill sets; it is likely we will need someone for each key area:
- 3D rendering and animation
- Front end and UI
We will require an initial consultation to define a timeline and budget. Once that is done, we will begin development immediately. Ideally we wish to release a preliminary version of the software by September, with more sophisticated versions released by the end of the year and again by mid-spring.
The displayed content will be renderings of smart devices such as phones and tablets, as well as laptops. Key devices will be iPhones, iPads, and MacBooks. Secondary devices will be tablets and phones running Android, and Windows/Linux PCs.
When a user plugs in a device (up to three), the application will recognize it and pass the device's display into a 3D mesh target object, which is the parent of a 3D-rendered proxy of the device body. For example, the idea is that when a user picks up the real iPhone, the application shows a virtual iPhone moving in 3D space; the content from the iPhone then appears on the virtual screen face of the virtual phone.
The idea is that as the user navigates through the smart device a representation of that device is presented within the application.
The next step is that a user will be able to connect up to three devices. Using intelligent recognition within the app, the display will automatically play an animation that brings the most recently handled device to the front of the virtual space, giving the impression that focus is shifting from one device to the next.
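The focus-switching behavior above can be sketched as a small piece of state logic: up to three connected devices, with the most recently handled one moved to the front of the scene. This is an illustrative sketch, not a specification; the type and method names (`DeviceFocusManager`, `handled(deviceID:)`) are assumptions, and the real renderer would drive the fronting animation from this ordering.

```swift
import Foundation

// Sketch of the focus-switching logic: up to three connected devices,
// with the most recently handled one brought to the front of the array
// (front of array = front of the 3D scene). Names are illustrative.
struct ConnectedDevice: Equatable {
    let id: String          // e.g. a USB serial number
    let name: String        // e.g. "iPhone"
}

final class DeviceFocusManager {
    static let maxDevices = 3
    private(set) var devices: [ConnectedDevice] = []

    // Returns false when a fourth device (or a duplicate) is rejected.
    @discardableResult
    func connect(_ device: ConnectedDevice) -> Bool {
        guard devices.count < Self.maxDevices, !devices.contains(device) else { return false }
        devices.append(device)
        return true
    }

    // Called when motion data (or the laptop daemon) reports activity on a
    // device; the renderer would then animate this device to the front.
    func handled(deviceID: String) {
        guard let index = devices.firstIndex(where: { $0.id == deviceID }) else { return }
        let device = devices.remove(at: index)
        devices.insert(device, at: 0)
    }

    var frontmost: ConnectedDevice? { devices.first }
}
```

Keeping this ordering separate from the rendering code would let the same logic serve both the automatic (motion-triggered) and manual (keypad-triggered) focus changes described below.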
Within the smart devices, we will need to access the motion API in order to extract the data that reflects the device's movement. In the case of a MacBook or Windows/Linux laptop, its external display output will connect to the application's host computer via a USB capture device such as an HDMI-to-USB adapter.
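On iOS, the motion API (Core Motion's `CMAttitude`) reports device orientation as roll/pitch/yaw angles or as a quaternion. The host application would map that orientation onto the virtual device node. As a hedged, pure-math sketch of that mapping (the `Quat` type here is illustrative, not an API), converting the reported Euler angles to a quaternion could look like:

```swift
import Foundation

// Pure-math sketch: turn roll/pitch/yaw (as reported by a device motion API,
// e.g. Core Motion's CMAttitude on iOS) into a quaternion the renderer can
// apply to the virtual device node. Uses the common intrinsic yaw-pitch-roll
// (Z-Y-X) composition; the real engine's convention may differ.
struct Quat {
    var x = 0.0, y = 0.0, z = 0.0, w = 1.0

    init(roll: Double, pitch: Double, yaw: Double) {
        let (cr, sr) = (cos(roll / 2), sin(roll / 2))
        let (cp, sp) = (cos(pitch / 2), sin(pitch / 2))
        let (cy, sy) = (cos(yaw / 2), sin(yaw / 2))
        w = cr * cp * cy + sr * sp * sy
        x = sr * cp * cy - cr * sp * sy
        y = cr * sp * cy + sr * cp * sy
        z = cr * cp * sy - sr * sp * cy
    }
}
```

In practice the devices would stream this data to the host over the network, and the renderer would interpolate between samples to keep the virtual device's motion smooth.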
Since there is no motion API we can leverage to detect a user beginning to use the laptop, we will need a contingency: the laptop sends a trigger to the host application's computer indicating that use has begun. This should be done by developing a companion daemon that runs on the demo laptop and recognizes wake events such as a touch of the mouse/trackpad or the press of a key.
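The daemon-to-host trigger could be a very small message sent over the local network. The sketch below assumes a simple JSON payload; the field names and the choice of JSON are assumptions to be settled during the initial consultation, and the actual input-event detection (keyboard/trackpad hooks) is platform-specific and omitted here.

```swift
import Foundation

// Sketch of the trigger the laptop daemon sends to the host application when
// it detects a wake event (key press, trackpad touch). Field names and the
// JSON encoding are assumptions, not a defined protocol.
struct WakeTrigger: Codable, Equatable {
    let deviceID: String      // which demo laptop woke up
    let event: String         // e.g. "keypress" or "trackpad"
    let timestamp: Double     // seconds since epoch
}

// Daemon side: serialize and write to a TCP/UDP socket (socket code omitted).
func encodeTrigger(_ trigger: WakeTrigger) throws -> Data {
    try JSONEncoder().encode(trigger)
}

// Host side: parse the incoming payload and front the matching virtual device.
func decodeTrigger(_ data: Data) throws -> WakeTrigger {
    try JSONDecoder().decode(WakeTrigger.self, from: data)
}
```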
Returning to the application: it will display a 3D environment within a window and must also have a simplified administration panel. The application must be able to present its rendered view full screen on a second display, or, via a hot key, take over the single output in full-screen mode.
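The display behavior amounts to a small mode switch; a minimal sketch follows, with the hot key toggling between windowed and full-screen modes. The enum and its cases are illustrative, and the real implementation would drive NSWindow/NSScreen on macOS.

```swift
// Minimal sketch of the rendered view's display modes: windowed, full screen
// on the single output (via hot key), or on a second display. Illustrative
// only; the actual window management would use AppKit on macOS.
enum DisplayMode: Equatable {
    case windowed
    case fullScreen                  // hot key takes over the single output
    case secondDisplay(index: Int)   // virtual full screen on another display

    // The hot key toggles between windowed and full-screen presentation.
    mutating func toggleFullScreen() {
        switch self {
        case .windowed: self = .fullScreen
        default:        self = .windowed
        }
    }
}
```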
As mentioned, the motion API on the smart devices and the wake-up recognition of the laptop daemon will automatically trigger preconfigured animations that bring the active device to the front of the virtual space. However, we will also want to utilize a physical hardware panel to initiate changes. The hardware panel will be a simple keypad device that interfaces with the host application's computer via USB. Its buttons will be configured to initiate different presentation states as well as to enable or disable individual devices in the virtual 3D display environment.
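On the host side, the keypad's button codes (received over USB, e.g. as HID reports) would be mapped to application actions. The button codes and action names in this sketch are placeholders to be replaced by the real hardware specification:

```swift
import Foundation

// Sketch of mapping keypad button codes (received over USB) to application
// actions: switching presentation state, or enabling/disabling one of the
// three device slots in the 3D scene. Codes and names are placeholders.
enum PanelAction: Equatable {
    case presentationState(Int)
    case toggleDevice(slot: Int)   // enable/disable device slot 0-2
}

let buttonMap: [UInt8: PanelAction] = [
    0x01: .presentationState(1),
    0x02: .presentationState(2),
    0x10: .toggleDevice(slot: 0),
    0x11: .toggleDevice(slot: 1),
    0x12: .toggleDevice(slot: 2),
]

// Unknown codes return nil so firmware revisions can't crash the host app.
func action(forButton code: UInt8) -> PanelAction? { buttonMap[code] }
```

Keeping this as a data-driven table means the admin panel could later expose button reassignment without code changes.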
The hardware will be developed by us, but we welcome your consultation. In addition, the hardware device will also act as a USB hardware lock that enables or disables the un-watermarked commercial version of this application.
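The licensing rule can be stated simply: the render is watermarked unless the hardware lock is present. A minimal sketch, with the actual USB check (device enumeration, any challenge/response scheme) deliberately abstracted behind a closure since that depends on the hardware we build:

```swift
// Sketch of the hardware-lock rule: the commercial, un-watermarked output is
// only enabled while the USB lock is present. The check itself would query
// the USB device; here it is abstracted as a closure for illustration.
struct LicenseGate {
    let isLockPresent: () -> Bool   // would poll the USB hardware in production

    var watermarkEnabled: Bool { !isLockPresent() }
}
```

Polling the lock periodically (rather than only at launch) would let the application re-apply the watermark if the device is unplugged mid-demo.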
The administration panel will need to provide access to change the look and feel of the 3D environment. Users will also need to supply text, fonts, images, and videos that customize the appearance of the 3D environment and brand the virtual devices. Our team will develop these 3D meshes and the appropriate UV maps, and we will define the render targets for these pieces. What we will need are developers capable of building the 3D environment and animations to our specification.
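The customization data the administration panel collects could be persisted as a simple serializable structure. The field names below are illustrative assumptions about what "look and feel" covers; the real set would follow the asset list our team defines:

```swift
import Foundation

// Sketch of the admin-panel customization data: text, fonts, and media paths
// that re-skin the 3D environment and brand the virtual devices. Field names
// are illustrative; a Codable struct like this could back the panel's fields
// and be saved/loaded as JSON.
struct BrandingTheme: Codable {
    var title: String                     // headline text shown in the scene
    var fontName: String                  // font for on-screen text fields
    var backgroundColorHex: String        // e.g. "#101020" for the environment
    var logoImagePath: String?            // branding image applied to meshes
    var deviceOverlayVideoPath: String?   // branded content for idle devices
}
```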