Looking for an experienced roboticist for a consulting project, with a view to extending it into a bigger, more hands-on role. We're trying to get an expert's perspective on how to create an interactive robot based on the Android platform. The robot will be able to see, hear, speak and control additional accessories through Bluetooth, NFC, USB, or a combination of these.
We’re new to robotics, so we're basically looking for an expert in the field to get us up to speed with a ‘brain dump’ and shorten the learning curve. If you’ve worked on a consumer robot (or components of one), we’d love to speak to you. We have plenty of dumb questions to ask about everything robot-related. Skype or another form of video chat is preferred.
We realise that finding someone with experience in all of the things listed below is going to be tricky, so if you're an expert in just one of the areas, we'd still love to hear from you.
Projects that share attributes of what we are hoping to build include
Key points we want to explore include:
Kiosk / COSU mode
The device is to run in single-application (single-use) mode without allowing users to switch tasks. The application must launch on boot.
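For context, one common way to approach this on Android is a combination of lock task (kiosk) mode and a boot-completed receiver. The fragment below is a rough manifest sketch only; the `.BootReceiver` and `.MainActivity` class names are our own placeholders, not part of any existing codebase.

```xml
<!-- Sketch of an AndroidManifest.xml fragment for a kiosk/COSU setup. -->
<!-- RECEIVE_BOOT_COMPLETED lets a receiver relaunch the app after reboot. -->
<uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
<application>
    <!-- Hypothetical receiver that starts MainActivity on boot. -->
    <receiver android:name=".BootReceiver" android:exported="true">
        <intent-filter>
            <action android:name="android.intent.action.BOOT_COMPLETED" />
        </intent-filter>
    </receiver>
    <!-- lockTaskMode="if_whitelisted" allows kiosk mode when the app is
         whitelisted by a device owner; the HOME/DEFAULT categories make the
         activity eligible to act as the launcher. -->
    <activity android:name=".MainActivity"
              android:lockTaskMode="if_whitelisted">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.HOME" />
            <category android:name="android.intent.category.DEFAULT" />
        </intent-filter>
    </activity>
</application>
```

This is exactly the kind of area where we'd want a consultant to confirm the right approach (device owner provisioning, `startLockTask()`, etc.).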
USB, NFC and Bluetooth devices
We envision the app interacting with a number of accessories at a later stage, and we’d like to explore using the technologies mentioned above.
Interacting with the robot will cause it to display various animations on the screen as well as use the actuators to move parts of its ‘body’ in response to users’ interactions. An example would be rotating the head to follow the user.
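To illustrate the head-following behaviour, here is a minimal sketch of a proportional controller that turns the horizontal offset of a detected face (relative to the camera frame centre) into a head-servo angle. The class name, gain, and angle limit are illustrative assumptions, not a spec.

```java
// Simple P-controller: steer the head toward a detected face.
public class HeadTracker {
    private static final double GAIN = 0.1;       // degrees of rotation per pixel of offset (assumed)
    private static final double MAX_ANGLE = 90.0; // assumed mechanical limit of the neck servo

    private double headAngle = 0.0;               // current commanded angle; 0 = straight ahead

    /** Update the commanded angle given the face centre x and frame width, in pixels. */
    public double track(double faceCentreX, double frameWidth) {
        double offset = faceCentreX - frameWidth / 2.0; // negative = face is to the left
        headAngle += GAIN * offset;
        // Clamp to the servo's travel.
        headAngle = Math.max(-MAX_ANGLE, Math.min(MAX_ANGLE, headAngle));
        return headAngle;
    }

    public double getHeadAngle() { return headAngle; }
}
```

In practice the face position would come from a vision library (e.g. OpenCV face detection) and the angle would be sent to the actuator over whatever accessory link we end up using.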
Integration of third-party libraries (e.g. OpenCV)
There is a variety of use cases we would like to explore for the robot, and a number of them will require it to see and detect objects. Using cloud-based services like CloudSight or the Google Cloud Vision API is essential.
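As a sketch of what the Cloud Vision integration involves: label detection is a single REST call whose JSON body carries the image as base64. The helper below only builds that body (following the public v1 `images:annotate` request shape); the actual POST to the Vision endpoint with an API key is left out, and the class and method names are our own.

```java
import java.util.Base64;

// Build the JSON body for a Google Cloud Vision v1 images:annotate request
// asking for label detection on a single image.
public class VisionRequest {
    public static String labelRequest(byte[] imageBytes, int maxResults) {
        // Cloud Vision expects the raw image bytes base64-encoded in "content".
        String content = Base64.getEncoder().encodeToString(imageBytes);
        return "{\"requests\":[{"
             + "\"image\":{\"content\":\"" + content + "\"},"
             + "\"features\":[{\"type\":\"LABEL_DETECTION\",\"maxResults\":" + maxResults + "}]"
             + "}]}";
    }
}
```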
The robot must be able to hear and talk. From a user’s point of view, we see this working in a similar way to Amazon Alexa: the robot is always listening, but only tries to identify the wake word. Once it hears the wake word, either offline (preferably) or online STT can take place.
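The "always listening, only acts after the wake word" behaviour can be sketched as a tiny state machine over recognised words. In a real build the words would come from an offline keyword spotter; here the class name and the gating logic are illustrative assumptions.

```java
// Gate that ignores speech until the wake word is heard, then captures the command.
public class WakeWordGate {
    private final String wakeWord;
    private boolean awake = false;
    private final StringBuilder utterance = new StringBuilder();

    public WakeWordGate(String wakeWord) {
        this.wakeWord = wakeWord.toLowerCase();
    }

    /** Feed one recognised word; returns true once the gate is open and STT should run. */
    public boolean onWord(String word) {
        if (!awake) {
            // Before waking, the only thing we try to match is the wake word.
            awake = wakeWord.equals(word.toLowerCase());
        } else {
            if (utterance.length() > 0) utterance.append(' ');
            utterance.append(word);
        }
        return awake;
    }

    /** The command captured after the wake word. */
    public String utterance() {
        return utterance.toString();
    }
}
```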
Ideally, we want to be able to handle as much processing offline as possible (speech, vision), to make the robot operable in situations where there’s limited connectivity.