Feature request - frame indexed file names
2020 Hindsight:
Feature request for Steve,
I was wondering if it would be possible for you to add a feature to Anim8or so that if a user entered an image file name like "hero{frame}.png", Anim8or would understand that it should replace "{frame}" with the frame number, pulling in a different image file for each frame. For video editing it would be even nicer if you could enter something like "hero{frame+30}.png" to have the first frame read from "hero30.png", the next from "hero31.png", and so on.
I envisage this working regardless of whether you enter the file name in a dialog box, or into a '.an8' script like:
image {
"C:\\ANIM8OR MANUAL\\waves\\water types\\sky-1286888_960_720_{frame}.jpg" }
The image files could be generated from video by other software. I have written code to do it in Python, but I expect there are software packages available to chop video into image files.
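Just to make the idea concrete, here is a tiny Python sketch of the substitution I have in mind (expand_frame_pattern is a made-up name purely for illustration, not anything that exists in Anim8or):
import re

def expand_frame_pattern(pattern, frame):
    """Replace "{frame}" or "{frame+N}" in a file name with the frame number (plus offset)."""
    def repl(match):
        offset = int(match.group(1) or 0)
        return str(frame + offset)
    return re.sub(r"\{frame(?:\+(\d+))?\}", repl, pattern)

print(expand_frame_pattern("hero{frame+30}.png", 0))  # hero30.png
print(expand_frame_pattern("hero{frame+30}.png", 1))  # hero31.png
print(expand_frame_pattern("hero{frame}.png", 7))     # hero7.png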
2020 Hindsight:
This is some simple Python code to chop video into image files:
(As the comments explain, I cribbed this from Stack Overflow and downloaded a video file, 'SampleVideo_720x480_5mb.mp4', for the test. That filename lies; the video is actually 640x480. The code seems to work with whatever video resolution or format you pass it. Note that the code comment points to a different video file on the same download site - I hope that doesn't confuse anyone. Any video file should work; just put its name in the line: vidcap = cv2.VideoCapture('SampleVideo_720x480_5mb.mp4').)
'''
https://stackoverflow.com/questions/33311153/python-extracting-and-saving-video-frames
From here
https://sample-videos.com/
download this video
https://sample-videos.com/video123/mp4/720/big_buck_bunny_720p_5mb.mp4
so we have the same video file for the test. Make sure that mp4 file is in the same directory as your Python code.
Then also make sure to run the Python interpreter from that same directory.
Note: I had to install "opencv-python" package for "cv2"
'''
import cv2
#vidcap = cv2.VideoCapture('big_buck_bunny_720p_5mb.mp4')
vidcap = cv2.VideoCapture('SampleVideo_720x480_5mb.mp4')
success, image = vidcap.read()
count = 0
while success:
    #cv2.imwrite("frame%d.jpg" % count, image)  # save frame as JPEG file
    cv2.imwrite("frame%d.png" % count, image)   # save frame as PNG file
    success, image = vidcap.read()
    print('Read a new frame: ', success)
    count += 1
If anyone wants to use this code, you need to install Python 3 on your machine, plus the package "opencv-python". (Python packages are installed using the pip command, or from a menu in your IDE.) I use the PyCharm IDE by JetBrains; it is free.
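For example, from a command prompt:
pip install opencv-python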
That code just produces normal image files, but I have written another version which converts a green screen background into alpha. I filmed this against a disposable table cloth on my washing line. If I had ironed it, I wouldn't have had to make the alpha selection as aggressive as it is. Note I've only attached every 6th frame.
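For anyone curious, here is a rough sketch of the idea (not my exact script; the green thresholds are illustrative and would need tuning for your footage, and 'greenscreen.mp4' is just a placeholder file name):
import cv2
import numpy as np

vidcap = cv2.VideoCapture('greenscreen.mp4')       # placeholder: use your own video file
success, image = vidcap.read()
count = 0
while success:
    rgba = cv2.cvtColor(image, cv2.COLOR_BGR2BGRA)   # add an alpha channel
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)     # HSV makes "green" easier to select
    lower = np.array([35, 40, 40])                    # lower bound of "green" (tune this)
    upper = np.array([85, 255, 255])                  # upper bound of "green" (tune this)
    green = cv2.inRange(hsv, lower, upper)            # mask of green-ish pixels
    rgba[green > 0, 3] = 0                            # make the green background transparent
    cv2.imwrite("frame%d.png" % count, rgba)          # PNG keeps the alpha channel
    success, image = vidcap.read()
    count += 1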
Steve:
A year or so ago I did some preliminary work on using an .AVI video file for a background, or possibly a texture. I never got it working successfully. Maybe it's time to revisit that. At the same time I'll see if I can also support numbered image sequences.
2020 Hindsight:
That would be great!
I notice that the OpenCV library I've been using for video also has C and C++ interfaces, which might make your job easier:
https://opencv.org/platforms/
That is, if the license is compatible with yours:
https://opencv.org/license/
So I guess you could translate the above Python code to C++ with the following:
VideoCapture()
https://docs.opencv.org/3.4/d8/dfe/classcv_1_1VideoCapture.html
read() method to get the next frame:
https://docs.opencv.org/3.4/d8/dfe/classcv_1_1VideoCapture.html#a473055e77dd7faa4d26d686226b292c1
imwrite()
https://docs.opencv.org/3.4/d4/da8/group__imgcodecs.html#gabbc7ef1aa2edfaa87772f1202d67e0ce
I added an alpha channel by calling:
image = cv2.cvtColor(image, cv2.COLOR_RGB2RGBA)  # Replace RGB image with RGBA image.
between read() and imwrite().
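In the context of the earlier loop, that is just one extra line (writing PNG files so the alpha channel is preserved):
success, image = vidcap.read()
count = 0
while success:
    image = cv2.cvtColor(image, cv2.COLOR_RGB2RGBA)   # replace RGB image with RGBA image
    cv2.imwrite("frame%d.png" % count, image)         # PNG keeps the alpha channel
    success, image = vidcap.read()
    count += 1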
And some day maybe you could do joint tracking from video by filming people with post-it notes or reflectors on their joints ;)
2020 Hindsight:
Could the indexed image be mapped onto a moving mesh - or is that a problem? e.g. could you map a moving kaleidoscope pattern onto an animated cuttlefish?