
Robots are increasingly becoming an integral part of today's society, industries, services, medical applications and entertainment. As a result, robotic systems that were once confined to predetermined areas now have to share space with users and interact with them in less and less structured environments.

It is in this area of interaction between humans and robots, called Human-Robot Interaction (HRI), that this thesis is situated. It presents an HRI application based on a two-frequency laser pointer for interaction from the user to the robot, and on projection mapping in the opposite direction, closing the communication loop. The developed application will then be incorporated into the Carlos project, which aims to develop a mobile robot that performs fit-out operations autonomously, in cooperation with operators, in industrial environments.

The developed solution uses a camera-projector setup as the form of environmental interaction, a 3D graphics simulator (Gazebo) to implement the projection mapping techniques, a graphical interface to import objects, and a laser pointer with which the user interacts.

The system concept is based on a virtual recreation of the real world in the simulator, together with a virtual camera that simulates the real projector and is placed at the position corresponding to the real one. In this way, all changes are applied in simulation and the resulting image is mapped correctly onto the real world. The laser pointer has two functions: a "right click" to list the objects on the work surface and a "left click" to select the one the user wants to view. So that it can be integrated into the Carlos project, the application was developed using ROS, making the system modular and enabling hardware abstraction.
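The virtual-camera idea above can be sketched with a simple pinhole model: a 3D point expressed in the projector's frame maps to the pixel the projector must light up for the image to land on that point. This is only a minimal illustration of the principle; the intrinsic values and function names below are hypothetical, not the thesis's actual parameters.

```python
import numpy as np

def project_point(K, point_3d):
    """Project a 3D point (in the projector/virtual-camera frame)
    to pixel coordinates using the pinhole model."""
    p = K @ point_3d        # perspective projection
    return p[:2] / p[2]     # divide by depth to get pixel coordinates

# Illustrative intrinsics for a 16:10 projector image (focal lengths in
# pixels, principal point at the image centre of a 1280x800 frame).
K = np.array([[1400.0,    0.0, 640.0],
              [   0.0, 1400.0, 400.0],
              [   0.0,    0.0,   1.0]])

# A point 2 m away on the optical axis lands at the principal point.
pixel = project_point(K, np.array([0.0, 0.0, 2.0]))
print(pixel)  # → [640. 400.]
```

Rendering the simulated scene through such a virtual camera, posed exactly like the real projector, is what makes the projected image line up with the physical surfaces.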

Since the system uses the location of the projector to generate the image transformations, it was necessary to implement a methodology to calibrate the intrinsic parameters of the camera and projector, as well as all the transformations between frames. In order to increase the accuracy of the calibration, a robotic manipulator was used.
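The extrinsic part of such a calibration amounts to chaining rigid-body transforms between frames, for example obtaining the camera-to-projector transform through a common calibration target carried by the manipulator. A minimal sketch of that composition (frame names and numeric values are illustrative assumptions, not the thesis's data):

```python
import numpy as np

def transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical calibration results: the target as seen from the camera
# and from the projector (identity rotations keep the example readable).
T_cam_target = transform(np.eye(3), [0.1, 0.0, 0.5])
T_proj_target = transform(np.eye(3), [-0.1, 0.0, 0.5])

# Chain through the shared target: camera -> target -> projector.
T_cam_proj = T_cam_target @ np.linalg.inv(T_proj_target)
print(T_cam_proj[:3, 3])  # → [0.2 0.  0. ]
```

Moving the target with a manipulator gives many such measurements at precisely known poses, which is what allows the calibration error to be driven down.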

The results are promising: the implemented system is capable of projecting onto surfaces with localization errors below 3 cm and laser pointing errors below 4 cm. These errors arise mainly from the fact that the aspect ratio of the projected image may not be exactly 16:10, as it was observed that the projected pixels are not precisely square.

CONCLUSIONS AND FUTURE WORK

FUTURE WORK

For future work, a first step would be to make the system more robust, especially in terms of localization, since it is on this that the image transformation methodology is based. One possible solution would be to replace the camera used with an RGB-D sensor, such as a Kinect, which would allow a 3D representation of the system's viewing area to be obtained; initially this would serve to validate the current localization and, in a second stage, to increase its precision. At the same time, with such a sensor the conversion of the laser pointer position between reference frames (camera-projector) would no longer be needed, increasing the accuracy with which it is identified.

The use of only a laser pointer may prove ineffective in very bright environments or on mirrored surfaces. To address this, a multi-modal interaction system could be implemented, i.e., alongside the laser pointer there would be another form of information transmission, such as voice commands, gesture identification or an instrumented glove, among others.

At the moment the laser pointer only allows the user to choose the objects they want to view. If possible, it would be advantageous to also allow interaction with the objects themselves (e.g. drag and drop).
