'''Caltech 101''' is a [[data set]] of [[digital images]] created in September 2003, compiled by [[Fei-Fei Li]], [[Marco Andreetto]], [[Marc'Aurelio Ranzato]] and [[Pietro Perona]] at the [[California Institute of Technology]]. It is intended to facilitate [[computer vision]] research, and is most applicable to techniques for recognition, classification, and categorization. Caltech 101 contains a total of 9,146 images, split between 101 distinct object categories (including [[face]]s, [[watches]], [[ants]], [[pianos]], etc.) and a background category (for a total of 102 categories). Provided with the images are a set of [[annotations]] describing the outline of the object in each image, along with a [[MATLAB]] [[Scripting language|script]] for viewing.
==Purpose==
Most [[computer vision]] and [[machine learning]] algorithms function by training on a large set of example inputs.
To work effectively, most of these techniques require a large and varied set of training data. For example, the well-known real-time face detection method of [[Paul Viola]] and [[Michael J. Jones]] was trained on 4,916 hand-labeled faces.<ref name="Viola Jones">P. Viola and M. J. Jones, Robust Real-Time Object Detection, IJCV, 2004.</ref>
However, acquiring a large volume of appropriate and usable images is often difficult. Furthermore, cropping and resizing many images, as well as marking points of interest by hand, is a tedious and time-intensive task.
Historically, most data sets used in computer vision research have been tailored to the specific needs of the project at hand.
<!-- Missing image removed: [[Image:Caltech101vs256.gif|thumb | Caltech 101 vs Caltech 256 on same algorithms]] -->
A large problem in comparing different computer vision techniques is that most groups use their own data sets. Each data set may have different properties that make reported results from different methods harder to compare directly. For example, differences in image size, image quality, relative location of objects within the images, and level of occlusion and clutter can lead to varying results.<ref name="oertel">Oertel, C., Colder, B., Colombe, J., High, J., Ingram, M., Sallee, P., Current Challenges in Automating Visual Perception. Proceedings of the IEEE Applied Imagery Pattern Recognition Workshop, 2008.</ref>
The Caltech 101 data set aims to alleviate many of these common problems:
*The work of collecting a large set of images, and cropping and resizing them appropriately, has already been done.
*A large number of different categories are represented, which benefits both single-class and multi-class recognition algorithms.
*Detailed object outlines have been marked for each image.
*Because it is released for general use, Caltech 101 acts as a common standard by which to compare different algorithms without bias due to differing data sets.
However, a 2008 study<ref name="pinto_et_al_2008">[http://compbiol.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pcbi.0040027 Pinto N, Cox DD, DiCarlo JJ. Why is Real-World Visual Object Recognition Hard? PLoS Computational Biology, Vol. 4, No. 1, e27.] {{doi|10.1371/journal.pcbi.0040027}}</ref> demonstrated that tests based on uncontrolled natural images (like those in the Caltech 101 data set) can be seriously misleading, potentially guiding progress in the wrong direction.
==Dataset==
===Images===
<!-- Missing image removed: [[Image:Caltech101.gif| thumb| right | Caltech 101 images]] -->
The Caltech 101 data set consists of a total of 9,146 images, split between 101 different object categories and an additional background/clutter category.
Each object category contains between 40 and 800 images; common and popular categories such as faces tend to have more images than less common ones.
Each image is roughly 300×200 pixels.
Images of oriented objects such as [[airplanes]] and [[motorcycles]] were mirrored to be left-right aligned, and vertically oriented structures such as buildings were rotated to be off-axis.
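These figures can be checked directly after downloading and extracting the data set, since each category is stored as its own directory of JPEG files. Below is a minimal Python sketch; the local path is a hypothetical assumption, and the layout follows the distributed archive.
<syntaxhighlight lang="python">
import os
from collections import Counter

# Hypothetical local path to the extracted 101_ObjectCategories archive.
ROOT = "101_ObjectCategories"

counts = Counter()
for category in sorted(os.listdir(ROOT)):
    cat_dir = os.path.join(ROOT, category)
    if os.path.isdir(cat_dir):
        counts[category] = sum(1 for f in os.listdir(cat_dir)
                               if f.lower().endswith(".jpg"))

print("categories:  ", len(counts))           # 102: 101 objects + background
print("total images:", sum(counts.values()))  # 9,146 in total
print("range:", min(counts.values()), "to", max(counts.values()))
</syntaxhighlight>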
===Annotations===
As a supplement to the images, a set of annotations is provided for each image. Each set of annotations contains two pieces of information:
the general bounding box in which the object is located, and a detailed, human-specified outline enclosing the object.
A MATLAB script is provided along with the annotations; it loads an image and its corresponding annotation file and displays them as a MATLAB figure.
<!-- Missing image removed: [[Image:Caltech101 croc annotated.jpg| Crocodile image with annotations.]] -->
The bounding box is drawn in yellow and the outline in red.
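The same display can be reproduced outside MATLAB. Below is a minimal Python sketch, assuming the distributed annotation files are MATLAB <code>.mat</code> files with a bounding-box field (<code>box_coord</code>) and an outline field (<code>obj_contour</code>); the file paths and the exact field semantics noted in the comments are assumptions, not official documentation.
<syntaxhighlight lang="python">
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from scipy.io import loadmat

# Hypothetical paths into the extracted image and annotation archives.
img = plt.imread("101_ObjectCategories/ant/image_0001.jpg")
ann = loadmat("Annotations/ant/annotation_0001.mat")

# Assumption: box_coord holds [top, bottom, left, right] in pixels, and
# obj_contour is a 2xN array of (x, y) outline vertices given relative
# to the top-left corner of the bounding box.
top, bottom, left, right = ann["box_coord"].ravel()
xs = ann["obj_contour"][0] + left
ys = ann["obj_contour"][1] + top

fig, ax = plt.subplots()
ax.imshow(img)
# Bounding box in yellow and outline in red, as in the MATLAB viewer.
ax.add_patch(patches.Rectangle((left, top), right - left, bottom - top,
                               edgecolor="yellow", fill=False))
ax.plot(xs, ys, color="red")
plt.show()
</syntaxhighlight>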
==Uses==
The Caltech 101 data set has been used to train and test several computer vision recognition and classification algorithms.
The first paper to make use of Caltech 101 was an incremental [[Bayesian inference|Bayesian]] approach to [[one-shot learning]],<ref name="OneShot">[http://www.vision.caltech.edu/feifeili/Fei-Fei_GMBV04.pdf L. Fei-Fei, R. Fergus and P. Perona. Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories. IEEE CVPR 2004, Workshop on Generative-Model Based Vision, 2004.]</ref> an attempt to learn a class of objects using only a few examples by building on prior knowledge of many other classes.
The Caltech 101 images, along with the annotations, were used for another one-shot learning paper at Caltech.<ref name="OneShot2">[http://vision.cs.princeton.edu/documents/Fei-FeiFergusPerona2006.pdf L. Fei-Fei, R. Fergus and P. Perona. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 28(4), 594–611, 2006.]</ref>
Other computer vision papers that report using the Caltech 101 data set include:
*Shape Matching and Object Recognition using Low Distortion Correspondence. Alexander C. Berg, Tamara L. Berg, [[Jitendra Malik]]. [[CVPR]], 2005.
*The Pyramid Match Kernel: Discriminative Classification with Sets of Image Features. K. Grauman and T. Darrell. International Conference on Computer Vision (ICCV), 2005.<ref>[http://www.vision.caltech.edu/Image_Datasets/Caltech101/grauman_darrell_iccv05.pdf The Pyramid Match Kernel: Discriminative Classification with Sets of Image Features. K. Grauman and T. Darrell. ICCV, 2005.]</ref>
*Combining Generative Models and Fisher Kernels for Object Class Recognition. A. D. Holub, M. Welling, P. Perona. International Conference on Computer Vision (ICCV), 2005.<ref>[http://www.its.caltech.edu/%7Eholub/publications.htm Combining Generative Models and Fisher Kernels for Object Class Recognition. A. D. Holub, M. Welling, P. Perona. ICCV, 2005.]</ref>
*Object Recognition with Features Inspired by Visual Cortex. T. Serre, L. Wolf and T. Poggio. Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), IEEE Computer Society Press, San Diego, June 2005.<ref>[http://web.mit.edu/serre/www/publications/serre_etal-CVPR05.pdf Object Recognition with Features Inspired by Visual Cortex. T. Serre, L. Wolf and T. Poggio. CVPR 2005, IEEE Computer Society Press, San Diego, June 2005.]</ref>
*SVM-KNN: Discriminative Nearest Neighbor Classification for Visual Category Recognition. Hao Zhang, Alex Berg, Michael Maire, [[Jitendra Malik]]. CVPR, 2006.<ref>[http://www.vision.caltech.edu/Image_Datasets/Caltech101/nhz_cvpr06.pdf SVM-KNN: Discriminative Nearest Neighbor Classification for Visual Category Recognition. Hao Zhang, Alex Berg, Michael Maire, Jitendra Malik. CVPR, 2006.]</ref>
*Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Natural Scene Categories. Svetlana Lazebnik, Cordelia Schmid, and Jean Ponce. CVPR, 2006.<ref>[http://www.vision.caltech.edu/Image_Datasets/Caltech101/cvpr06b_lana.pdf Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Natural Scene Categories. Svetlana Lazebnik, Cordelia Schmid, and Jean Ponce. CVPR, 2006.]</ref>
*Empirical Study of Multi-Scale Filter Banks for Object Categorization. M. J. Marín-Jiménez and N. Pérez de la Blanca. December 2005.<ref>[http://www.vision.caltech.edu/Image_Datasets/Caltech101/mjmarinVIP121505.pdf Empirical Study of Multi-Scale Filter Banks for Object Categorization. M. J. Marín-Jiménez and N. Pérez de la Blanca. December 2005.]</ref>
*Multiclass Object Recognition with Sparse, Localized Features. Jim Mutch and David G. Lowe. pp. 11–18, CVPR 2006, IEEE Computer Society Press, New York, June 2006.<ref>[http://www.mit.edu/~jmutch/papers/cvpr2006_mutch_lowe.pdf Multiclass Object Recognition with Sparse, Localized Features. Jim Mutch and David G. Lowe. pp. 11–18, CVPR 2006, IEEE Computer Society Press, New York, June 2006.]</ref>
*Using Dependent Regions for Object Categorization in a Generative Framework. G. Wang, Y. Zhang, and L. Fei-Fei. IEEE CVPR, 2006.<ref>[http://vision.cs.princeton.edu/documents/WangZhangFei-Fei_CVPR2006.pdf Using Dependent Regions for Object Categorization in a Generative Framework. G. Wang, Y. Zhang, and L. Fei-Fei. IEEE CVPR, 2006.]</ref>
==Analysis and comparison==
===Advantages===
Caltech 101 has several advantages over other similar data sets:
*Uniform size and presentation:
Almost all the images within each category are uniform in image size and in the relative position of objects of interest. This means that, in general, users of the Caltech 101 data set do not need to spend extra time cropping and scaling the images before they can be used.
*Low level of clutter/occlusion:
Algorithms concerned with recognition usually function by storing features unique to the object to be recognized. However, the majority of photographs have varying degrees of background clutter, so algorithms trained on cluttered images can end up modeling features of the background rather than of the object.
*Detailed annotations:
The detailed annotations of object outlines are another advantage of using the data set.
===Weaknesses===
There are several weaknesses to the Caltech 101 data set.<ref name="pinto_et_al_2008"/><ref>[http://www-cvr.ai.uiuc.edu/ponce_grp/publication/paper/sicily06c.pdf Dataset Issues in Object Recognition. J. Ponce, T. L. Berg, M. Everingham, D. A. Forsyth, M. Hebert, S. Lazebnik, M. Marszalek, C. Schmid, B. C. Russell, A. Torralba, C. K. I. Williams, J. Zhang, and A. Zisserman. In: Toward Category-Level Object Recognition, Springer-Verlag Lecture Notes in Computer Science, J. Ponce, M. Hebert, C. Schmid, and A. Zisserman (eds.), 2006.]</ref> Some are conscious trade-offs for the advantages it provides, and some are simply limitations of the data set itself. Papers that rely solely on Caltech 101 to demonstrate their results are now frequently rejected. The weaknesses are:
*The data set is too clean:
Images are very uniform in presentation, left-right aligned, and usually not occluded. As a result, the images are not always representative of the practical inputs that a trained algorithm might later encounter. Under practical conditions there is usually more clutter, occlusion, and variance in the relative position and orientation of objects of interest. In fact, averaging the images within a category often yields an image in which the concept is still clearly recognizable, which is unrealistic for natural data.
*Limited number of categories:
The Caltech 101 data set represents only a small fraction of possible object categories.
*Some categories contain few images:
Certain categories are not as well represented as others, containing as few as 31 images.
This caps the number of training images per category at <math>N_{\mathrm{train}} \le 30</math> if at least one image is to be held out for testing, which is not sufficient for all purposes; the evaluation protocol that grew out of this limit is sketched after this list.
*Aliasing and artifacts due to manipulation:
Some images have been rotated and scaled from their original orientation, and suffer from some amount of [[Compression artifact|artifacts]] or [[aliasing]].
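Because of the 31-image minimum, the de facto benchmark protocol on Caltech 101 trains on a fixed number of images per category (typically 15 or 30), tests on the remainder, and reports the mean of the per-category accuracies so that large categories such as faces do not dominate the score. Below is a minimal Python sketch of such a split; the directory layout follows the distributed archive, but the path and seed are illustrative assumptions, not an official tool.
<syntaxhighlight lang="python">
import os
import random

ROOT = "101_ObjectCategories"  # hypothetical local path to the extracted archive
N_TRAIN = 30  # common protocol: 15 or 30 training images per category

train, test = [], []
rng = random.Random(0)  # fixed seed so the split is reproducible
for category in sorted(os.listdir(ROOT)):
    cat_dir = os.path.join(ROOT, category)
    if not os.path.isdir(cat_dir):
        continue
    images = sorted(f for f in os.listdir(cat_dir) if f.lower().endswith(".jpg"))
    rng.shuffle(images)
    # With the smallest category holding 31 images, N_TRAIN = 30 leaves
    # at least one test image per category.
    train += [(category, f) for f in images[:N_TRAIN]]
    test += [(category, f) for f in images[N_TRAIN:]]

print(len(train), "training images,", len(test), "test images")
</syntaxhighlight>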
===Other data sets===
*[[Caltech 256]] is another image data set created at the California Institute of Technology in 2007 as a successor to Caltech 101. It is intended to address some of the weaknesses of Caltech 101 and is, overall, a more difficult data set (though it suffers from the same problems<ref name="pinto_et_al_2008"/>):
**30,607 images, covering a larger number of categories.
**Minimum number of images per category raised to 80.
**Images not left-right aligned.
**More variation in image presentation.
*[[LabelMe]] is an open, dynamic data set created at the [[MIT Computer Science and Artificial Intelligence Laboratory]] (CSAIL). LabelMe takes a different approach to the problem of creating a large image data set, with different trade-offs:
**106,739 images, 41,724 annotated images, and 203,363 labeled objects.
**Users may add images to the data set by uploading them, and add labels or annotations to existing images.
**Due to its open nature, LabelMe has many more images covering a much wider scope than Caltech 101. However, since each person decides what images to upload and how to label and annotate each image, there can be a lack of consistency between images.
*[[VOC 2008]] is a European effort to collect images for benchmarking visual categorization methods. Compared to Caltech 101/256, it has a smaller number of categories (about 20), but a larger number of images in each category.
*[[Overhead Imagery Research Data Set]] (OIRDS) is an annotated library of imagery and tools to aid in the development of computer vision algorithms.<ref name="OIRDSVehicles">F. Tanner, B. Colder, C. Pullen, D. Heagy, C. Oertel, & P. Sallee, ''Overhead Imagery Research Data Set (OIRDS) – an annotated data library and tools to aid in the development of computer vision algorithms'', June 2009, <http://sourceforge.net/apps/mediawiki/oirds/index.php?title=Documentation> (28 December 2009).</ref> OIRDS v1.0 is composed of passenger vehicle objects annotated in overhead imagery. Passenger vehicles in the OIRDS include cars, trucks, vans, etc. In addition to the object outlines, the OIRDS includes subjective and objective statistics that quantify the vehicle within the image's context. For example, subjective measures of image clutter, clarity, noise, and vehicle color are included, along with more objective statistics such as [[ground sample distance]] (GSD), time of day, and day of year:
**~900 images, containing ~1,800 annotated objects.
**~30 annotations per object.
**~60 statistical measures per object.
**Wide variation in object context.
**Limited to passenger vehicles in overhead imagery.
*[[MICC-Flickr 101]] is a data set created at the Media Integration and Communication Center (MICC), [[University of Florence]], in 2012. It is based on Caltech 101 and was collected from [[Flickr]]. The MICC-Flickr 101<ref name="ballan_et_al_2012">[http://www.micc.unifi.it/publications/2012/BBDSSZ12/miccflickr101.pdf L. Ballan, M. Bertini, A. Del Bimbo, A. M. Serain, G. Serra, B. F. Zaccone. Combining Generative and Discriminative Models for Classifying Social Images from 101 Object Categories. International Conference on Pattern Recognition (ICPR), 2012.]</ref> data set addresses the main drawback of Caltech 101, its low intra-class variability, and provides social annotations through user tags. Because it builds on a standard and widely used data set with a still manageable number of categories (101), it can be used to compare and evaluate object categorization performance in a constrained scenario (Caltech 101) and "in the wild" (MICC-Flickr 101) on the same 101 categories.
==See also==
* [[MNIST database]]
* [[LabelMe]]

==References==
{{reflist}}
==External links==
* [http://www.vision.caltech.edu/Image_Datasets/Caltech101/ Caltech 101 homepage] (includes download)
* [http://www.vision.caltech.edu/Image_Datasets/Caltech256/ Caltech 256 homepage] (includes download)
* [http://labelme.csail.mit.edu/ LabelMe homepage]
* [http://www2.it.lut.fi/project/visiq/ Randomized Caltech 101 download page] (includes download)
* [http://www.micc.unifi.it/datasets/micc-flickr-101/ MICC-Flickr 101 homepage] (includes download)
[[Category:Datasets in computer vision]]
[[Category:California Institute of Technology]]