This week, after a few improvements, I have submitted the Initial Delivery Documentation.
I have also completed the first task from the project timeline, ‘Development of Uploading Interface’, which the user will use to upload a query image for which tags will be suggested. This has been done in PHP. I faced a few technical problems with Ubuntu, which I partly fixed myself; Dian helped me with the rest.
Dian (whom I will be mentioning a lot 🙂 ), from the Insight Research Centre, is working with the same Flickr dataset for the EPA project mentioned in the previous post.
We have started to look into image feature extraction (as I have no experience in image processing, I rely on Dian’s suggestions and advice). This is one of the biggest tasks and challenges of this project. As the LIRE features are now available, we have considered extracting LIRE features from the query image and basing the search for similar images in the dataset on some of the low-level features LIRE provides (http://www.lire-project.net/ , http://www.semanticmetadata.net/lire/).
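LIRE itself is a Java library, but the idea behind its simplest low-level descriptors can be sketched in a few lines of Python. The sketch below computes a global colour histogram (comparable in spirit to LIRE's simple colour descriptors) with NumPy and uses it to rank a toy "dataset" against a query image; the random images and the bin count are purely illustrative assumptions, not our actual data.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Global colour histogram: a simple low-level descriptor of the kind
    LIRE provides. `image` is an HxWx3 uint8 RGB array."""
    hist = []
    for channel in range(3):
        h, _ = np.histogram(image[:, :, channel], bins=bins, range=(0, 256))
        hist.append(h)
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()  # normalise so images of different sizes compare

# Toy usage: rank random stand-in images against a random query image
# by Euclidean distance between their histograms.
rng = np.random.default_rng(0)
query = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
dataset = [rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
           for _ in range(5)]

q = color_histogram(query)
distances = [np.linalg.norm(q - color_histogram(img)) for img in dataset]
best = int(np.argmin(distances))
print("most similar image index:", best)
```

A real descriptor would of course be computed on the Flickr images themselves, and LIRE offers many richer features (edge histograms, CEDD, FCTH, etc.); this only illustrates the search-by-low-level-feature idea.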
After reviewing the paper on the ‘Instance Search and Semantic Indexing’ project from TRECVid, available at http://doras.dcu.ie/20287/ , we are considering using AlexNet features (not yet available for the dataset) and extracting them with Caffe, a deep learning framework (http://caffe.berkeleyvision.org/), as it seems a better approach than LIRE in terms of processing time and the capabilities it provides. I have read some documents to familiarize myself with different aspects such as the framework, the features, and the installation.
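We have not run Caffe yet, so the following is only a sketch of how extracted AlexNet features might be used downstream: assuming each image is summarised by a 4096-dimensional vector from a fully connected layer (with pycaffe, read from something like `net.blobs['fc7'].data` after a forward pass), finding similar images reduces to cosine similarity between vectors. The feature matrix here is random stand-in data.

```python
import numpy as np

def cosine_similarity(query, matrix):
    """Cosine similarity between a query vector and each row of `matrix`."""
    query = query / np.linalg.norm(query)
    rows = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return rows @ query

# Stand-in for AlexNet fc7 features (4096-d per image); with Caffe these
# would come from the network, not from a random generator.
rng = np.random.default_rng(42)
features = rng.standard_normal((100, 4096))              # 100 dataset images
query = features[7] + 0.01 * rng.standard_normal(4096)   # near-duplicate of image 7

scores = cosine_similarity(query, features)
ranked = np.argsort(scores)[::-1]
print("best match:", ranked[0])  # the near-duplicate should rank first
```

The point of switching to deep features is exactly this: once every image is a fixed-length vector, similarity search is a cheap linear-algebra operation rather than repeated low-level feature matching.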
During the meeting with Dian on Wednesday, we also discussed a possible approach for indexing, which is another big task in this project. Spotify’s Annoy may be a possible solution. This will be examined further after the image feature extraction task is completed.
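Since we have not evaluated Annoy yet, here is only the baseline it would approximate: an exact brute-force nearest-neighbour search in NumPy over made-up feature vectors. The comments note the corresponding Annoy calls (`AnnoyIndex`, `add_item`, `build`, `get_nns_by_vector`); Annoy trades a little accuracy for much faster queries on a large dataset, which is why it is attractive here.

```python
import numpy as np

# Exact nearest-neighbour baseline that an Annoy index would approximate.
# With Annoy (pip install annoy), the equivalent steps would be roughly:
#   index = AnnoyIndex(dim, 'angular')
#   for i, vec in enumerate(vectors): index.add_item(i, vec)
#   index.build(n_trees)
#   index.get_nns_by_vector(query, k)

def nearest_neighbours(query, vectors, k=3):
    """Return indices of the k vectors closest to `query` (Euclidean)."""
    distances = np.linalg.norm(vectors - query, axis=1)
    return np.argsort(distances)[:k].tolist()

rng = np.random.default_rng(1)
vectors = rng.standard_normal((50, 16))  # 50 toy feature vectors, 16-d
query = vectors[3]                       # query with a known right answer

neighbours = nearest_neighbours(query, vectors, k=3)
print("nearest indices:", neighbours)  # index 3 first: distance to itself is 0
```

Whether the approximation quality is acceptable for our tag-suggestion use case is exactly what we will examine once real features are available.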