Last updated: 2022-06-21
Checks: 6 passed, 1 failed
Knit directory: project_video_salience/
This reproducible R Markdown analysis was created with workflowr (version 1.7.0). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.
The command set.seed(20210113) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.
Using absolute paths to the files within your workflowr project makes it difficult for you and others to run your code on a different machine. Change the absolute path(s) below to the suggested relative path(s) to make your code more reproducible.
absolute | relative
---|---
C:/Users/Nico/PowerFolders/project_video_salience/split_video_to_images.py | split_video_to_images.py
C:/Users/Nico/PowerFolders/project_video_salience/static_salience_in_video.py | static_salience_in_video.py
C:/Users/Nico/PowerFolders/project_video_salience/motion_salience_in_video_MOG2.py | motion_salience_in_video_MOG2.py
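One way to address this (a sketch, not the project's actual code; it assumes the three .py scripts sit in the project root) is to build the script paths relative to the working directory instead of a hard-coded drive path:
import os
# sketch: resolve the helper scripts relative to the project root instead of
# a machine-specific absolute path (script names taken from the table above)
project_root = os.getcwd()  # assumes the project directory is the current working directory
split_script = os.path.join(project_root, "split_video_to_images.py")
static_script = os.path.join(project_root, "static_salience_in_video.py")
motion_script = os.path.join(project_root, "motion_salience_in_video_MOG2.py")
print(split_script, static_script, motion_script, sep="\n")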
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version 6c39d76. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
Ignored files:
Ignored: .Rhistory
Ignored: .Rproj.user/
Ignored: analysis/data_analysis_salience_cache/
Untracked files:
Untracked: .PowerFolder/
Untracked: analysis/Results in unmatched sample.docx
Untracked: analysis/packages.bib
Untracked: analysis/vancouver-brackets.csl
Untracked: code/OLD/
Untracked: code/analysis_salience_130121.R
Untracked: code/analysis_salience_150421.R
Untracked: code/extract_salience_metrics.R
Untracked: code/luminance_per_fixated_pixel.R
Untracked: code/mean_salience_per_video.R
Untracked: code/preprocessing1_matching_gaze_and_salience_data.R
Untracked: code/preprocessing2_matching_gaze_and_motionsalience_data.R
Untracked: code/preprocessing3_datareduction_adding_additional_data.R
Untracked: code/python_code_salience_extraction/
Untracked: code/sesnory_subgroup_analysis.R
Untracked: data/df_model_luminance_test.Rdata
Untracked: data/luminance_data.Rdata
Untracked: data/merged_data/
Untracked: data/motion_salience.Rdata
Untracked: data/perceptual_salience
Untracked: data/perceptual_salience.Rdata
Untracked: data/test_luminance.Rdata
Untracked: data/video_stimuli_scenes.csv
Untracked: desktop.ini
Untracked: manuscript/
Untracked: output/gaze_animate_sample.mp4
Untracked: output/gaze_animate_sample_dollhouse_scene5.mp4
Untracked: output/motion_salience/
Untracked: output/motion_salience_video_pingudoctors_scene0.avi
Untracked: output/salience_video_artist.avi
Untracked: output/stimuli_pics/
Untracked: output/stimuli_salience/
Untracked: output/stimuli_scene/
Untracked: packages.bib
Untracked: project_init_workflow.R
Untracked: vancouver-brackets.csl
Unstaged changes:
Modified: code/README.md
Modified: data/README.md
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the repository in which changes were made to the R Markdown (analysis/extract_salience.Rmd) and HTML (docs/extract_salience.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.
File | Version | Author | Date | Message
---|---|---|---|---
html | fda1e04 | nicobast | 2022-03-01 | Build site.
html | fffe9d1 | nicobast | 2021-09-13 | Build site.
Rmd | 4485527 | nicobast | 2021-09-13 | Publish the initial files for myproject
Salience extraction is based on OpenCV in Python 3.8. OpenCV includes a saliency API that is further described here.
We apply the following salience algorithms:
execute in CMD
### 1 INSTALL: openCV to python
### contrib also contains additional modules of openCV including the salience API
pip install opencv-contrib-python
### 2 SETUP:
py # start the Python interpreter and check the version
import cv2
cv2.__version__
cv2.saliency # check whether the saliency module of openCV is installed
#vid_path = "C:/Users/Nico/PowerFolders/Paper_AIMS-LEAP_ETcore/stimuli/nonhuman/" # define folder with videos in a directory in split_video_to_images.py
execute as BAT file
@echo off
ECHO Identify STATIC SALIENCE in VIDEO files:
ECHO INFO: splits video into images that are separately analyzed
ECHO -----------------------------------------------------
REM set /p path="Enter Path with Video files: "
set path="C:\Users\Nico\PowerFolders\data_LEAP\stimuli\nonhuman"
set /p name="Enter Name of the Video: "
REM e.g. path = "C:/Users/Nico/PowerFolders/Paper_AIMS-LEAP_ETcore/stimuli/nonhuman"
REM e.g. path = "birds"
REM independent of environment variables (full path files)
ECHO SPLITTING VIDEO...
"C:\Python38\python.exe" "C:\Users\Nico\PowerFolders\project_video_salience\split_video_to_images.py" %name% %path%
ECHO ...DONE
ECHO IDENTIFY SALIENCE...
"C:\Python38\python.exe" "C:\Users\Nico\PowerFolders\project_video_salience\static_salience_in_video.py" %name%
ECHO ...DONE
REM with environment variables
REM start "split_video" python split_video_to_images.py
REM start "salience_detection" python static_salience_in_video.py %name%
PAUSE
import cv2
import sys
import os
vid_name = sys.argv[1]
vid_path = sys.argv[2]
#vid_name = "coralreef"
#vid_path = "C:/Users/Nico/PowerFolders/Paper_AIMS-LEAP_ETcore/stimuli/nonhuman"
#get the path of the vid_name in vid_path (search for the video)
for path in os.listdir(vid_path):
    if vid_name in path:
        full_path = os.path.join(vid_path, path)
output_folder = "stimuli_pics"
if not os.path.exists(output_folder):
    os.mkdir(output_folder)
os.chdir(output_folder)
#create folder for pics if not existing
if not os.path.exists(vid_name):
    os.mkdir(vid_name)
os.chdir(vid_name)
#loop to create pics from vid
cap = cv2.VideoCapture(full_path)
i=0
while(cap.isOpened()):
    ret, frame = cap.read()
    if ret == False:
        break
    cv2.imwrite(vid_name+str(i)+'.jpg',frame)
    print(vid_name+str(i)+'.jpg')
    i+=1
cap.release()
cv2.destroyAllWindows()
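As an optional follow-up sketch (not part of the original pipeline; the input path is a hypothetical example), the number of exported images can be compared against the frame count reported by the video container:
import cv2
import os
# sketch: compare exported image count with the container's reported frame count
vid_name = "birds"                               # hypothetical example video
cap = cv2.VideoCapture("C:/path/to/birds.mp4")   # hypothetical input path
expected = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
cap.release()
written = len(os.listdir(os.path.join("stimuli_pics", vid_name)))
print("frames reported:", expected, "- images written:", written)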
script named “static_salience_in_video.py”
import cv2
import sys
import os
stimulus_name = sys.argv[1]
input_folder = "stimuli_pics"
output_folder = "stimuli_salience"
#create stimulus-specific output folder if it does not exist
if not os.path.exists(os.path.join(output_folder,stimulus_name)):
    os.mkdir(os.path.join(output_folder,stimulus_name))
#get and sort input image data - see also VidToImg.py
name_images = os.listdir(os.path.join(input_folder,stimulus_name)) #get individuals images of video file
name_images.sort(key=lambda f: int(''.join(filter(str.isdigit, f)))) #sort alphanumerically
n_images = len(name_images)
# initialize OpenCV's static saliency spectral residual detector
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
#### saliency model after
# Hou, X., & Zhang, L. (2007, June). Saliency detection: A spectral residual approach. In 2007 IEEE Conference on computer vision and pattern recognition (pp. 1-8). Ieee.
#loop over images
i=0
while(i<n_images):
    image = cv2.imread(os.path.join(input_folder,stimulus_name,name_images[i])) #read image
    (success, saliencyMap) = saliency.computeSaliency(image) # compute the saliency map
    saliencyMap = (saliencyMap * 255).astype("uint8") #rescale 0-1 values to 0-255 grayscale
    cv2.imwrite(os.path.join(output_folder,stimulus_name,stimulus_name+"_salience"+str(i)+".jpg"), saliencyMap) #write salience map
    print(os.path.join(output_folder,stimulus_name,stimulus_name+"_salience"+str(i)+".jpg")) #print processed salience map
    #show image
    #cv2.imshow("Image", image)
    #cv2.imshow("Salience", saliencyMap)
    #cv2.waitKey(20)
    i+=1
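To make the per-frame step explicit, here is a minimal sketch of the spectral residual detector applied to a single exported frame (the image path is a hypothetical example):
import cv2
# minimal sketch: spectral residual saliency for one exported frame
image = cv2.imread("stimuli_pics/birds/birds0.jpg")  # hypothetical frame
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
(success, saliencyMap) = saliency.computeSaliency(image)
if success:
    # computeSaliency returns values in [0, 1]; summarize as mean frame salience
    print("mean salience of this frame:", float(saliencyMap.mean()))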
execute as BAT file
"C:\Users\Nico\AppData\Local\Programs\Python\Python37\python.exe"@echo off
ECHO Identify MOTION SALIENCE of SCENES:
ECHO INFO: scenes are previously split by split_images_to_scenes.py...
ECHO -----------------------------------------------------
REM independent of environment variables (full path files)
REM path_to_python = where py
ECHO IDENTIFY MOTION SALIENCE...
"C:\Users\Nico\AppData\Local\Programs\Python\Python37\python.exe" "C:\Users\Nico\PowerFolders\project_video_salience\motion_salience_in_video_MOG2.py" 50faces
ECHO 50faces DONE
"C:\Users\Nico\AppData\Local\Programs\Python\Python37\python.exe" "C:\Users\Nico\PowerFolders\project_video_salience\motion_salience_in_video_MOG2.py" artist
ECHO artist DONE
"C:\Users\Nico\AppData\Local\Programs\Python\Python37\python.exe" "C:\Users\Nico\PowerFolders\project_video_salience\motion_salience_in_video_MOG2.py" birds
ECHO birds DONE
"C:\Users\Nico\AppData\Local\Programs\Python\Python37\python.exe" "C:\Users\Nico\PowerFolders\project_video_salience\motion_salience_in_video_MOG2.py" coralreef
ECHO coralreef DONE
"C:\Users\Nico\AppData\Local\Programs\Python\Python37\python.exe" "C:\Users\Nico\PowerFolders\project_video_salience\motion_salience_in_video_MOG2.py" dollhouse
ECHO dollhouse DONE
"C:\Users\Nico\AppData\Local\Programs\Python\Python37\python.exe" "C:\Users\Nico\PowerFolders\project_video_salience\motion_salience_in_video_MOG2.py" flowersstars
ECHO flowersstars DONE
"C:\Users\Nico\AppData\Local\Programs\Python\Python37\python.exe" "C:\Users\Nico\PowerFolders\project_video_salience\motion_salience_in_video_MOG2.py" musicbooth
ECHO musicbooth DONE
"C:\Users\Nico\AppData\Local\Programs\Python\Python37\python.exe" "C:\Users\Nico\PowerFolders\project_video_salience\motion_salience_in_video_MOG2.py" Pingu_doctors
ECHO Pingu_doctors DONE
"C:\Users\Nico\AppData\Local\Programs\Python\Python37\python.exe" "C:\Users\Nico\PowerFolders\project_video_salience\motion_salience_in_video_MOG2.py" Pingu1
ECHO Pingu1 DONE
PAUSE
script named “motion_salience_in_video_MOG2.py”
import cv2
import sys
import os
###NOTE: takes videos split into scenes as input - videos containing different scenes would cause the motion algorithm to produce false positives
video_folder = "stimuli_scene"
input_folder = sys.argv[1]
output_folder = "motion_salience"
scenes = os.listdir(os.path.join(video_folder,input_folder))
scenes.sort(key=lambda f: int(''.join(filter(str.isdigit, f)))) #sort alphanumerically
for scene in scenes:
    #input_folder = sys.argv[1]
    # stimulus name for this loop iteration
    stimulus_file = scene
    stimulus_name = os.path.splitext(scene)[0]
    #create stimulus-specific output folder if it does not exist
    output_path = os.path.join(output_folder,input_folder,stimulus_name)
    if not os.path.exists(output_path):
        os.makedirs(output_path)
    #open video of the scene (created by split_images_to_scenes)
    video_path = os.path.join(video_folder,input_folder,stimulus_file)
    cap = cv2.VideoCapture(video_path)
    # amount_of_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
    # print(amount_of_frames)
    #---> identify MOTION SALIENCE
    # loop over frames from the video file stream
    i=1
    while (cap.isOpened()):
        # grab the frame from the video
        # cap.set(1,i) #set frame of the video stream
        boolr, frame = cap.read() #read the frame of the stream
        if boolr == False: #break if the frame cannot be read
            break
        print(i)
        #if the saliency object does not exist yet, create it
        try:
            saliency
        except NameError:
            #saliency = cv2.createBackgroundSubtractorMOG2(history = 25, detectShadows = False)
            saliency = cv2.createBackgroundSubtractorMOG2(history = 500, varThreshold = 30, detectShadows = False)
            #default varThresholdGen = 9
            #default varThreshold = 16
            #default history = 500 <- shorter adaptation
        ## convert the input frame to grayscale and compute the saliency
        ## map based on the motion model
        #gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        #(success, saliencyMap) = saliency.computeSaliency(gray)
        #saliencyMap = (saliencyMap * 255).astype("uint8")
        #print(success)
        #apply foreground segmentation
        fgMask = saliency.apply(frame, learningRate = 1/i)
        #add frame counter to original image
        cv2.rectangle(frame, (10, 2), (100,20), (255,255,255), -1)
        cv2.putText(frame, str(cap.get(cv2.CAP_PROP_POS_FRAMES)), (15, 15),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5 , (0,0,0))
        #print to screen
        cv2.imshow("Image", frame)
        cv2.imshow("Salience", fgMask)
        # #save to file
        #cv2.imwrite(os.path.join(output_path,stimulus_name+"_motion_salience_frame"+str(i)+".jpg"), fgMask) #write salience map
        #print(os.path.join(output_path,stimulus_name+"_motion_salience"+str(i)+".jpg")) #print processed salience map
        i=i+1 #increase count
        keyboard = cv2.waitKey(30)
        if keyboard == ord('q') or keyboard == 27: #quit on 'q' or ESC
            break
    cap.release()
cv2.destroyAllWindows()
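As a hedged follow-up (not part of the original pipeline; the scene file name is a hypothetical example), the MOG2 foreground mask can be reduced to a single per-frame value, e.g. the fraction of pixels flagged as moving:
import cv2
import numpy as np
# sketch: summarize MOG2 foreground segmentation as a per-frame motion signal
cap = cv2.VideoCapture("stimuli_scene/birds/birds_scene0.avi")  # hypothetical scene file
mog2 = cv2.createBackgroundSubtractorMOG2(history = 500, varThreshold = 30, detectShadows = False)
i = 1
while (cap.isOpened()):
    ret, frame = cap.read()
    if ret == False:
        break
    fgMask = mog2.apply(frame, learningRate = 1/i)
    # fraction of pixels flagged as moving foreground in this frame
    print(i, float(np.count_nonzero(fgMask)) / fgMask.size)
    i = i + 1
cap.release()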
sessionInfo()
R version 4.2.0 (2022-04-22 ucrt)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19044)
Matrix products: default
locale:
[1] LC_COLLATE=German_Germany.utf8 LC_CTYPE=German_Germany.utf8
[3] LC_MONETARY=German_Germany.utf8 LC_NUMERIC=C
[5] LC_TIME=German_Germany.utf8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] workflowr_1.7.0
loaded via a namespace (and not attached):
[1] Rcpp_1.0.8.3 highr_0.9 bslib_0.3.1 compiler_4.2.0
[5] pillar_1.7.0 later_1.3.0 git2r_0.30.1 jquerylib_0.1.4
[9] tools_4.2.0 getPass_0.2-2 digest_0.6.29 jsonlite_1.8.0
[13] evaluate_0.15 tibble_3.1.7 lifecycle_1.0.1 pkgconfig_2.0.3
[17] rlang_1.0.2 cli_3.3.0 rstudioapi_0.13 yaml_2.3.5
[21] xfun_0.31 fastmap_1.1.0 httr_1.4.3 stringr_1.4.0
[25] knitr_1.39 sass_0.4.1 fs_1.5.2 vctrs_0.4.1
[29] rprojroot_2.0.3 glue_1.6.2 R6_2.5.1 processx_3.6.1
[33] fansi_1.0.3 rmarkdown_2.14 callr_3.7.0 magrittr_2.0.3
[37] whisker_0.4 ps_1.7.1 promises_1.2.0.1 htmltools_0.5.2
[41] ellipsis_0.3.2 httpuv_1.6.5 utf8_1.2.2 stringi_1.7.6
[45] crayon_1.5.1