Research update May 18, 2018

Results from the experiments running 11 algorithms:

[Figure 1: accuracy of the 11 LLE-based video hashing algorithms]

I developed 11 different algorithms for perceptually hashing video that incorporate Locally Linear Embedding (LLE). I performed the experiments on 250 videos from the Moments in Time [1] video dataset. Figure 1 shows the accuracy of the 11 LLE-based hashing algorithms. The version that incorporates both the Discrete Cosine Transform (DCT) and LLE performed with the highest accuracy, although these algorithms do not yet perform as well as pHash on the same dataset. I believe that using LLE to extract information from the DCT is a novel approach to video hashing.
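
To make the DCT + LLE idea concrete, here is a minimal Python sketch of one way such a hash could be computed. The function name, coefficient counts, and neighbor settings are illustrative assumptions, not the exact algorithm from the experiments.

    # Hypothetical sketch of a DCT + LLE video hash, not the exact method.
    import numpy as np
    from scipy.fftpack import dct
    from sklearn.manifold import LocallyLinearEmbedding

    def dct_lle_hash(frames, n_coeffs=8, n_neighbors=5):
        """frames: list of 2D grayscale arrays, e.g. 32x32 downsampled frames."""
        feats = []
        for f in frames:
            # 2D DCT; keep the low-frequency top-left corner of coefficients.
            c = dct(dct(f.astype(float), axis=0, norm='ortho'),
                    axis=1, norm='ortho')
            feats.append(c[:n_coeffs, :n_coeffs].ravel())
        # Embed the per-frame DCT features into one dimension with LLE.
        emb = LocallyLinearEmbedding(n_neighbors=n_neighbors,
                                     n_components=1).fit_transform(np.array(feats))
        # Threshold at the median to produce one bit per frame.
        return (emb.ravel() > np.median(emb)).astype(np.uint8)

Two videos can then be compared by the Hamming distance between their bit strings, as with pHash.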

[1] Monfort, M., Zhou, B., Bargal, S. A., Yan, T., Andonian, A., Ramakrishnan, K., Oliva, A. (2018). Moments in Time Dataset: one million videos for event understanding. arXiv:1801.03150v1.


Research update April 27, 2018

Here are a few made-up profile pictures to illustrate a real-world example of perceptual hashing.

[Image: made-up profile pictures]

A perceptual hash function takes in visual information of any size and maps it to a number.

[Figure: definition of a perceptual hash function (percep_hash_def_01.svg)]

From the profile pictures above you may notice that profiles 1 and 4 are rather similar; profile 4 adds reading glasses. Perceptual hashing may be used to identify similar image and video information. Image hashing is useful for detecting copyright infringement, identifying child abuse imagery, and reducing storage redundancy.
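
As a rough illustration, here is a minimal average-hash (aHash) sketch in Python; pHash itself uses a DCT-based variant, and the file names below are hypothetical. Similar pictures, such as profiles 1 and 4, should differ in only a few bits.

    # Minimal average-hash (aHash) sketch; pHash proper is DCT-based.
    import numpy as np
    from PIL import Image

    def average_hash(path, size=8):
        # Shrink to 8x8 grayscale and threshold each pixel against the mean.
        img = np.asarray(Image.open(path).convert('L').resize((size, size)))
        return (img > img.mean()).astype(np.uint8).ravel()

    def hamming(h1, h2):
        # Count of differing bits; a small count means perceptually similar.
        return int(np.count_nonzero(h1 != h2))

    # Hypothetical file names:
    # hamming(average_hash('profile1.png'), average_hash('profile4.png'))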

Canine images licensed under Creative Commons

<https://www.flickr.com/photos/brunkfordbraun/7390426948/>

<https://www.flickr.com/photos/7870246@N03/5007296455>

<https://www.flickr.com/photos/nickimm/9048426633/in/photolist-nDxkJ8-nmsfWT-opLGML-nmyBUN-qc6n65-kAT3Ba-hTavHw-qgSyUS-iasFFA-nJhoaG-keRMuH-jLiVUp-eMzxZK-ritykg-fwqxAv-fKjUnb-f7KtVi-f59ktx-f2Kq1j-f2sV3T-eV4Ajt-eNjjCQ-fUoPdk-ea2fK3-eMzx74-f8piak-ffM8Sa-fVYn>

Perceptual Hashing figure created in Inkscape


Research update April 23, 2018

This week I ran the experiments on videos from the “Moments in Time Dataset: one million videos for event understanding.” Here is a montage of one frame with several different filters applied. I will be presenting “LLE Based Image Hashing” this week at the CSU East Bay Research Symposium.

[Figure: montage of one frame with several different filters applied]


Research update April 9, 2018

This week I began coding up the experiment. The first step is using pHash to produce baseline scores for the image hashing. The next step will be comparing the baseline against the new implementation. The SQLite database is now set up and ready to use. The code for the experiment will be made available in a few weeks, once everything is on GitHub and well documented. There will also be a Docker image for those who are not running Linux natively and want to try out the experiment.
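
As a rough sketch, the stored hashes might live in a table like the following; this schema is an assumption for illustration, not the actual one from the experiment.

    # Illustrative SQLite schema for storing per-video hashes (assumed layout).
    import sqlite3

    conn = sqlite3.connect('hashes.db')
    conn.execute("""
        CREATE TABLE IF NOT EXISTS video_hashes (
            video_id  TEXT,
            algorithm TEXT,   -- e.g. 'phash' or an LLE variant
            hash_hex  TEXT,   -- hash encoded as a hex string
            PRIMARY KEY (video_id, algorithm)
        )""")
    conn.commit()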

I am continuing to prepare the presentation for the research symposium in two weeks.

Next week, I will be performing analysis on the data from the experiment and I am looking forward to posting the results.

Research update March 30, 2018

This week I worked on writing the code for the video preprocessing in Python. This included transcoding the videos into different formats to create multiple videos in the same conjugacy class as the original video. I also wrote code to hash the videos with pHash, store the hashes, and compare the videos via Hamming distance.
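
For illustration, the transcoding step can be done by shelling out to ffmpeg; the codec choices below are assumptions, not necessarily the formats used in the experiment.

    # Sketch of transcoding a video into several formats with ffmpeg
    # (assumes ffmpeg is on PATH; codecs here are illustrative).
    import subprocess

    def transcode(src, dst, vcodec):
        # Re-encode src into dst with the given video codec.
        subprocess.run(['ffmpeg', '-y', '-i', src, '-c:v', vcodec, dst],
                       check=True)

    # for codec, ext in [('libx264', 'mp4'), ('libvpx', 'webm')]:
    #     transcode('original.mp4', 'copy_' + codec + '.' + ext, codec)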

Next week will consist of finishing the setup of the experiment pieces so that they can be run in sequence.

I am also working on my poster and verbal presentation for the upcoming conference.

Research update March 21, 2018

This week I updated and rewrote the research summary for the CSU Statewide Competition. I have been working on the video perceptual hashing experiment, which has included designing the experiment, setting up pHash, and creating a few more videos for comparison. Preliminary data will be made available in the next two weeks.

I will be presenting a poster in late April at the CSU East Bay Research Symposium.