Projects (en)

Here is a brief description of the projects I’ve been involved with over the past years. Each project has its own objectives and goals, but the underlying techniques always rely on machine learning applied to image processing and computer vision.

Kinship recognition (2013 - today)

"You have your father's eyes".

This saying illustrates the fact that relatives have similar facial traits due to the genetic information they have in common. Siblings, for instance, share, on average, 50% of their genes.

If, on the one hand, genetics defines facial appearance, is there, on the other hand, a mechanism to extract kinship information from face images? It is commonly known that humans have a reasonable ability to tell whether two individuals are related based on inspection of their photographs.

But can a computer do the same? Can a machine recognize whether two subjects are indeed related to each other by inspecting their photographs?

The answer is YES: a machine can tell siblings from non-siblings even more accurately than humans can. Moreover, it is also possible to detect parent-child pairs from face images.
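To give a flavor of how such a verifier can be posed (this is a generic sketch with toy data and illustrative names, not the published method), kinship verification becomes binary classification over pairs of face descriptors: each face is assumed to be already encoded as a fixed-length feature vector, and a classifier decides whether a pair is related.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each face is a 128-D feature vector produced by some
# face descriptor (illustrative; not part of the original project).
def pair_features(a, b):
    """Combine two face descriptors into a symmetric pair representation."""
    return np.concatenate([np.abs(a - b), a * b])

# Toy data: related pairs are drawn close together, unrelated pairs far apart.
related = [(v, v + 0.1 * rng.standard_normal(128))
           for v in rng.standard_normal((200, 128))]
unrelated = [(rng.standard_normal(128), rng.standard_normal(128))
             for _ in range(200)]

X = np.array([pair_features(a, b) for a, b in related + unrelated])
y = np.array([1] * len(related) + [0] * len(unrelated))

# A linear model fit by least squares stands in for the real classifier.
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = (A @ w) > 0.5
accuracy = (pred == y).mean()
```

On this toy data the pair features separate the two classes easily; in practice, the hard part is that unrelated faces can look similar and relatives can look quite different, which is exactly what the research addresses.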

This research was published in academic venues and covered in the popular media.

Kinship recognition

SMAT (2013-2014)

This project involved the processing of aerial images, acquired both by Unmanned Aerial Vehicles (UAVs) and by satellites.

The project had two main stages:

  • First, a real-time overlay of multi-layer cartographic data onto the video stream provided by the UAV, based on its telemetry information.
  • Then, the use of machine learning and image processing to automatically identify elements of interest in the UAV images, such as terrain types, roads, rivers, buildings, and vehicles.
Below is a snapshot of the software during execution.
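In a very simplified form, the overlay stage amounts to projecting geo-referenced map data into the camera image using the UAV's telemetry. A minimal sketch, assuming flat terrain and a nadir-pointing (straight-down) camera; all parameter names and the equirectangular approximation are illustrative, not the project's exact pipeline:

```python
import math

def geo_to_pixel(lat, lon, uav_lat, uav_lon, altitude_m, heading_deg,
                 focal_px, img_w, img_h):
    """Project a ground point (lat, lon) onto the image of a camera
    looking straight down from the UAV.

    Assumes flat terrain; lat/lon differences are converted to metres
    with a local equirectangular approximation.
    """
    # Local metric offsets of the target relative to the UAV.
    m_per_deg = 111_320.0
    dn = (lat - uav_lat) * m_per_deg                                 # north
    de = (lon - uav_lon) * m_per_deg * math.cos(math.radians(uav_lat))  # east

    # Rotate into the camera frame (heading measured clockwise from north).
    h = math.radians(heading_deg)
    x = de * math.cos(h) - dn * math.sin(h)
    y = dn * math.cos(h) + de * math.sin(h)

    # Pinhole projection onto the image plane (origin at top-left).
    u = img_w / 2 + focal_px * x / altitude_m
    v = img_h / 2 - focal_px * y / altitude_m
    return u, v
```

A point directly below the UAV maps to the image centre regardless of heading; the real system additionally has to handle camera tilt, lens distortion, and terrain elevation.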

Eye-Mouse (2008-2012)

"..eyes have the power to speak a great language." Martin Buber.

Eye tracking consists of extracting information from a person’s eyes. The final goal can be as simple as determining whether an individual’s eyes are open or closed, or as complex as accurately determining the 3-D point in space where the subject’s gaze is focused.

When the machine becomes capable of extracting information from the human eyes, several applications emerge, such as:

  • improvement in websites and mobile devices interfaces based on studies of user’s attention;
  • enhancement in communication capabilities of physically impaired people;
  • multimodal interaction in mobile devices.

Motivated to help people facing locked-in syndrome, a condition in which the individual, although fully conscious, is unable to voluntarily control any muscle except those around the eyes, I worked on three low-cost eye-tracking systems.

Eye-Tracking System 1

The first consists of wearable glasses with an ordinary USB webcam and an infrared LED mounted on them. After a brief calibration, the system moves the computer’s mouse pointer to the position on the screen where the user is looking, with reasonable accuracy.
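The calibration step can be sketched as fitting a polynomial mapping from detected pupil coordinates to screen coordinates while the user fixates a few known targets. This is a common approach in low-cost eye trackers; the basis functions and grid below are illustrative, not necessarily what this system used:

```python
import numpy as np

def design(p):
    """Second-order polynomial terms of pupil positions, shape (n, 2)."""
    px, py = p[:, 0], p[:, 1]
    return np.c_[np.ones_like(px), px, py, px * py, px**2, py**2]

def calibrate(pupil_pts, screen_pts):
    """Least-squares fit of screen = f(pupil), one column per screen axis."""
    coeffs, *_ = np.linalg.lstsq(design(pupil_pts), screen_pts, rcond=None)
    return coeffs

def gaze_to_screen(coeffs, pupil_pt):
    """Map a single pupil position to predicted screen coordinates."""
    return design(np.atleast_2d(pupil_pt)) @ coeffs
```

In use, the subject looks at (say) a 3×3 grid of on-screen targets while pupil positions are recorded; the fitted coefficients then turn every subsequent pupil detection into a cursor position.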

Eye Mouse

Eye-Tracking System 2

The second is mounted directly onto the computer screen and measures the user’s gaze vector with respect to the 3-D environment. Again, the mouse pointer goes to the point on the screen where the individual is looking, with higher accuracy than the previous approach. In addition, since the system is remote rather than head-mounted, it allows a certain freedom of head movement.

Eye-Tracking System 3

Finally, the third implementation couples an eye tracker with text-prediction software. The algorithm relies only on the computer’s built-in camera to detect the user’s face and eyes. The pupil centers are computed and, when the eyes remain looking up for a given time, a command is triggered in the automatic text predictor. The whole system was designed to enhance the communication capabilities of those with severely restricted eye movements, who can generate only one symbol (open/close the eyes, look up/down, etc.) when communicating.
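The one-symbol trigger can be sketched as a simple dwell counter over per-frame eye states: the command fires only after the "looking up" state has been held for enough consecutive frames, which filters out blinks and glances. The threshold below is illustrative:

```python
def dwell_trigger(eye_states, frames_required):
    """Yield True once per sustained 'looking up' gesture.

    eye_states: iterable of booleans, one per video frame
    (True = eyes detected as looking up from the pupil centres).
    frames_required: consecutive frames needed before firing.
    """
    held = 0
    for up in eye_states:
        held = held + 1 if up else 0
        # Fire exactly once when the dwell threshold is first reached,
        # so holding the gaze longer does not repeat the command.
        yield held == frames_required
```

At 30 frames per second, a threshold of around 15–30 frames corresponds to a deliberate half-second to one-second gaze, long enough to distinguish intent from noise.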

Aero-Acoustic Analysis (2004-2005)

Quiet Please!

While living in Stuttgart, Germany, I had the great opportunity of working at Bosch, in the applied physics department.

During this period, I did research in the aeroacoustics section, where the objective was to investigate and minimize the noise caused by air flowing through the company’s products, such as the car’s electric generator.

In order to do that, I assembled a robotic system responsible for positioning probes inside an acoustically isolated room containing a wind tunnel.

Measurements were performed using laser scanning interferometry and one-, two-, and three-dimensional hot-wire anemometry sensors, used to estimate the wind velocity on a mesh of points within the room.
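Hot-wire anemometry infers flow velocity from the voltage needed to keep a heated wire at constant temperature; a standard model for this relationship is King's law, E² = A + B·Uⁿ. A minimal sketch of inverting it (the coefficient values in the usage below are illustrative; in practice A, B, and n come from calibrating each probe):

```python
def velocity_from_voltage(E, A, B, n=0.45):
    """Invert King's law, E^2 = A + B * U^n, to recover velocity U.

    E: measured probe voltage; A, B, n: calibration coefficients
    (n is typically close to 0.45 for hot-wire probes).
    """
    return ((E**2 - A) / B) ** (1.0 / n)
```

Sweeping a calibrated probe across the measurement mesh and applying this inversion at each point yields the velocity field inside the room; multi-dimensional probes repeat the idea with several wires at different orientations to resolve the velocity components.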