We won!
1st place: UBC, 12 points
2nd place: IMRA-Europe, 8 points
University of Maryland: didn't place, no data
I'm not sure I ever posted about what exactly SRVC is. So...
This was the second year for the Semantic Robot Vision Challenge, a competition designed to force real-world use of modern vision techniques in a robotic context. Last year it was held at AAAI in Vancouver; this year it was hosted by CVPR in Anchorage. It looks like it will stick with CVPR, so next year will be Miami.
Four hours before the active competition starts, teams are given a text list of ~20 items. Some are specific (the DVD '300', Spam) and some are general categories (frying pan, fax machine). Without human intervention, their systems then access the internet to gather training images and learn what these objects look like. (Mostly from Google Images, but from some other sources as well.) Each team then has those 4 hours in which to process the data.
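For the curious, the gathering step looks roughly like the sketch below. This is not our actual code, just a minimal illustration; `search_image_urls` is a hypothetical stand-in for whatever image-search backend a team wires in (Google Images or otherwise), and the folder layout is my own invention.

```python
import os
import urllib.request

def search_image_urls(query, count=50):
    """Hypothetical stand-in for an image-search backend
    (Google Images or similar); returns a list of image URLs."""
    raise NotImplementedError("plug in an image-search source here")

def gather_training_images(object_list, out_dir="training"):
    """Download candidate training images, one folder per object name."""
    for name in object_list:
        folder = os.path.join(out_dir, name.replace(" ", "_"))
        os.makedirs(folder, exist_ok=True)
        for i, url in enumerate(search_image_urls(name)):
            try:
                urllib.request.urlretrieve(url, os.path.join(folder, "%d.jpg" % i))
            except OSError:
                continue  # dead links and corrupt files are common; just skip

# e.g. gather_training_images(["frying pan", "fax machine", "Spam"])
```

The images that come back are noisy (wrong objects, cartoons, product pages), so a big part of the challenge is filtering them before training anything.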
For the actual run, each team gets 30 minutes to recognize the objects in a prepared environment. So far these environments have been pretty simple, but that will change as the contest evolves. There are two leagues: software and hardware. Software-league teams are given a set of images taken by the organizers, some good and some not. Hardware-league teams have to send in a robot to take their own pictures, still limited to 30 minutes for gathering and analysis. (This is why we had 4 laptops strapped to the robot this year.)
Pretty hard stuff. Lots of fun. We can do a lot better next year. :)
ETA: I'll post a detailed score breakdown once I get the data. But in short, you can get 0-3 points for each object you recognize, depending on how good the bounding box is, plus 1 bonus point if you announce the classification in real time during the competition. (This encouraged robot state displays that made this year's contest a lot more audience-friendly.) We got 5 scoring objects, 1 non-scoring object, and a bunch of bonus points.
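To make the arithmetic concrete, here's a toy tally in the same spirit. The 0-3 bounding-box tiers and the overlap cutoffs are my guesses, not the judges' actual rubric, and whether the bonus counts without a scoring box is another assumption; the numbers in the example are made up, not anyone's real breakdown.

```python
def box_quality_points(overlap):
    """Map bounding-box quality to 0-3 points. 'overlap' is the
    overlap with the true box; the 0.75/0.5/0.25 cutoffs are my
    guesses, not the official thresholds."""
    if overlap >= 0.75:
        return 3
    if overlap >= 0.5:
        return 2
    if overlap >= 0.25:
        return 1
    return 0

def team_score(detections):
    """detections: list of (overlap, announced_live) tuples,
    one per object the team reported."""
    total = 0
    for overlap, announced_live in detections:
        pts = box_quality_points(overlap)
        if pts > 0 and announced_live:
            pts += 1  # real-time announcement bonus (assumed to need a scoring box)
        total += pts
    return total

# A made-up run: four boxes announced live, one weak box missed live.
print(team_score([(0.9, True), (0.8, True), (0.6, True), (0.5, True), (0.3, False)]))
```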