This paper proposes a new approach to image annotation. First, to model the training data precisely, the shape context features of each image are represented as a bag of visual words. We then design a novel optimized graph-based semi-supervised learning (OGSSL) method for image annotation, which maximizes the average weighted distance between different semantic objects while minimizing the average weighted distance within the same semantic object. OGSSL addresses both the insufficiency of training data and the limited generalization of the learning method, yielding significantly improved image semantic annotation performance. We compare this approach with several alternatives, and the experimental results show that it is more effective and accurate.
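The graph-based semi-supervised step can be illustrated with a minimal label-propagation sketch: images (represented here by an assumed precomputed affinity matrix, e.g. from bag-of-visual-words histograms) form graph nodes, a few nodes carry known annotations, and labels diffuse to the unlabeled nodes. This is a generic sketch of graph-based semi-supervised learning, not the paper's exact OGSSL objective; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def label_propagation(W, Y, alpha=0.99, iters=100):
    """Iterative label propagation on an affinity graph.

    W: (n, n) symmetric affinity matrix between images
       (hypothetically built from bag-of-visual-words similarities)
    Y: (n, c) seed label matrix; rows of unlabeled images are all zeros
    alpha: propagation weight; (1 - alpha) clamps toward the seed labels
    """
    # Symmetrically normalize the affinities: S = D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    # Diffuse labels over the graph, repeatedly re-injecting the seeds
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)  # predicted annotation class per image
```

On a toy graph of four images in two tightly connected pairs, labeling one image per pair lets the remaining images inherit the annotation of their cluster.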