Alex Leykin
I am a Computer Vision Researcher.
Here is what I do!
CV
Photos
Modeling visual attention with low-level saliency features, validated against actual eye-tracking data. Naively guessing where a person will look in an image succeeds only at a baseline level (dashed line).
My store shelf VR simulation software makes it possible to test the effects of clutter on the distribution of visual attention.
The Vanishing Point Histogram is used to detect the number of maxima in a background-subtracted silhouette blob.
How many people are there when they stand next to each other? My Vanishing Point Projection (VPP) Histogram helps answer this question.
Each tracked person is modeled as a quadric, which projects to a conic in the 2D image! Go google it!
Manually modeling objects in the scene aids tracking and can be used to collect fixture-proximity statistics.
Some clusters of basic human emotions PCA'd into a 3D space. A very reductionist approach to human feelings ;-)
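The PCA embedding shown in the figure can be sketched in a few lines. This is a minimal illustration, not the original code: the feature matrix here is random stand-in data, while the real input would be per-frame facial feature vectors.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in for per-frame facial feature vectors (hypothetical data)
features = rng.normal(size=(200, 20))

# Project the 20-dimensional features down to 3 dimensions for plotting
pca = PCA(n_components=3)
embedded = pca.fit_transform(features)
print(embedded.shape)  # (200, 3)
```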
IRV - our thoroughbred Hoosier autonomous vehicle! See more videos about self-driving cars on my page.
Eye fixations while looking at a store shelf.
Facial expressions are detected from the grayscale image of the face.
This is done by finding the optimal number of emotion clusters for the face video sequence.
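One common way to pick the number of clusters is to sweep k and score each clustering; the sketch below uses k-means with the silhouette score on synthetic stand-in data (an assumption for illustration, not the original method or data).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# Synthetic stand-in for per-frame facial feature vectors:
# three well-separated groups of 50 samples in 5 dimensions
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 5)) for c in (0, 3, 6)])

# Score each candidate cluster count by the silhouette criterion
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print(best_k)  # 3 for this synthetic data
```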
Key points on the face are indicative of Action Unit (AU) movements and help detect emotions. Read about Paul Ekman and watch "Lie to me" with Tim Roth!
In-store traffic can be visualized in many ways: here the length of each arrow corresponds to speed and the color indicates direction. Green = in, red = out.
Part of my thesis was tracking people with a single panoramic camera covering the entire area of a small apparel store.
Embedding CGI into a real scene is one way to introduce manipulations for visual attention studies.
My Visual Attention™ software makes it possible to quickly design eye-tracking studies, generate heatmaps, and aggregate quantitative data.
Modeling visual saliency can predict where people are inclined to look.
Here is what is salient in this scene.
Clutter negatively affects visual search times. We can automatically measure clutter with computer vision techniques.
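One simple automatic clutter proxy is edge density: the fraction of pixels with a strong local gradient. This is a hypothetical sketch of that idea, not the measure used in the actual study.

```python
import numpy as np

def edge_density(gray, thresh=30):
    """Crude clutter proxy: fraction of pixels with a strong gradient."""
    gx = np.abs(np.diff(gray.astype(int), axis=1))  # horizontal gradient
    gy = np.abs(np.diff(gray.astype(int), axis=0))  # vertical gradient
    edges = (gx[:-1, :] > thresh) | (gy[:, :-1] > thresh)
    return edges.mean()

# A uniform patch should score lower than a noisy, "cluttered" one
flat = np.full((100, 100), 128, dtype=np.uint8)
noisy = np.random.default_rng(0).integers(0, 256, (100, 100)).astype(np.uint8)
print(edge_density(flat) < edge_density(noisy))  # True
```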
Our VideoAnnotation™ software maps traffic data and visualizes and analyzes traffic patterns. In general it can be used for any spatio-temporal data. Contact Alex Leykin@CIL for an educational or commercial license.
My Kinect Analyzer™ software runs on the new Kinect v2 and gathers skeletal movement data. Based on user-defined 3D areas of interest, it can detect a number of events, such as picking up a product from the shelf.
For the self-driving cars in the DARPA Grand Challenge, I worked on horizon and road detection.
To find line-like features, the Hough Transform is applied.
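The core of the Hough Transform is voting in (rho, theta) space: each edge pixel votes for every line that could pass through it, and peaks in the accumulator correspond to lines. A minimal pure-numpy sketch (not the code used on the vehicle):

```python
import numpy as np

def hough_lines(edge_img, n_theta=180):
    """Accumulate Hough votes: each edge pixel votes for all (rho, theta) lines through it."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_img)
    for theta_idx, theta in enumerate(thetas):
        # rho = x*cos(theta) + y*sin(theta), shifted so indices are non-negative
        rhos = (xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(acc, (rhos, np.full_like(rhos, theta_idx)), 1)
    return acc, thetas, diag

# A diagonal line y = x produces a strong accumulator peak at theta = 135 degrees
img = np.zeros((64, 64), dtype=np.uint8)
np.fill_diagonal(img, 1)
acc, thetas, diag = hough_lines(img)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
print(round(np.degrees(thetas[theta_idx])))  # 135
```

In practice a library routine such as OpenCV's HoughLinesP does the same voting much faster and returns line segments directly.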
When tracking pedestrians outdoors I used a second near-IR camera to mitigate the effects of shadows and cloud cover.
We distribute NewVision Lite™ software under academic license. It is a quick way to calibrate the camera, load videos and start tracking!
The VideoAnnotation™ tool can also be used to code tracks from the mobile eye-tracker video.
Detecting obstacles for a self-driving car. As always, the first tests are in the parking lot :-)
Videos