Sunday, June 29th, 2008 09:15 am
If you want a breakdown of points for the 4 placing teams, they've put up a rather large PDF of the workshop presentation from Friday. It includes the actual images returned, so you can see what kind of quality we're capable of achieving. (UBC's bounding boxes are normally a bit better, but there was a bug in that part of the code.)
Sunday, June 29th, 2008 05:50 pm (UTC)
Neat! It looks like the 2nd-place winner made their robot a bit too tall/unzoomable, to judge from the robot-perspective photos.
Sunday, June 29th, 2008 05:54 pm (UTC)
Yeah, they didn't have a zoom OR a pan-tilt unit. But they navigated entirely based on stereo vision, which is very impressive.
Monday, June 30th, 2008 05:07 pm (UTC)
Interesting how a lot of the same objects were identified. (Spam, Ritz crackers.) I suppose that reflects how unique they are (product brand labels vs. how many different types of vacuum cleaners there are) and how prevalent they are in image search engines.

Why were there non-scoring objects?
Monday, June 30th, 2008 06:02 pm (UTC)
Yeah, the nature of the state-of-the-art makes recognizing specific images much much more reliable than recognizing classes of objects. Give us another couple of years. :)

To get a point, the bounding box marking the object has to be good enough. I think the current formula is that the intersection (area of the object inside the box) over the union (combined area of object and box) has to be at least 1/4. One of the things discussed for next year was changing that to either always give at least a single point, or to use the ratio directly as the score (to prevent people from drawing bounding boxes around the entire image).
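
That intersection-over-union test can be sketched in a few lines. This is just an illustration of the rule as described above, not the competition's actual scoring code; the (x1, y1, x2, y2) box format and the `scores_point` helper are assumptions.

```python
def iou(box_a, box_b):
    """Intersection area over union area of two axis-aligned boxes.

    Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2 (an assumed format).
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width and height of the overlap rectangle (zero if disjoint).
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union else 0.0

def scores_point(pred_box, true_box, threshold=0.25):
    """A detection earns a point only if IoU meets the 1/4 threshold."""
    return iou(pred_box, true_box) >= threshold
```

Note how this formula already punishes the whole-image trick: a box covering the entire image against a small object gives a huge union and a tiny intersection, so the ratio falls well below 1/4.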