Vectorize your code
There are at least two situations when you apply the same ffmpeg filters to a set of streams. The first is adaptive streaming for internet video, which requires a number of files with the same content at different bitrates; the second is linear editing, where parts of a video are cut from the source and concatenated together to form a new sequence. There may be more, and fffw provides a way to handle stream vector transformations.
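The core idea can be illustrated without fffw at all: applying one transformation to a stream vector is just a map over that vector. The names below (ToyStream, apply_to_vector) are toy stand-ins invented for this sketch, not fffw's API.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ToyStream:
    """A stand-in for a media stream: a label recording its filter history."""
    label: str


def apply_to_vector(streams: List[ToyStream],
                    filter_name: str) -> List[ToyStream]:
    """Apply the same "filter" to every element of the stream vector."""
    return [ToyStream(f"{s.label}|{filter_name}") for s in streams]


# one source stream fanned out to two renditions, same filter for both
vector = [ToyStream("v:0"), ToyStream("v:0")]
vector = apply_to_vector(vector, "volume=30")
print([s.label for s in vector])  # ['v:0|volume=30', 'v:0|volume=30']
```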
SIMD Wrapper
To use the SIMD helper you'll need to initialize an input file with streams (including metadata); this metadata is used to track changes applied to the input streams. SIMD also requires a list of outputs, including codecs, so it can connect input streams to the corresponding outputs.
mi = MediaInfo.parse('input.mp4')
video_meta, audio_meta = from_media_info(mi)
video = Stream(VIDEO, video_meta)
audio = Stream(AUDIO, audio_meta)
source = input_file('input.mp4', video, audio)

output1 = output_file('output1.mp4',
                      VideoCodec('libx264'),
                      AudioCodec('aac'))
output2 = output_file('output2.mp4',
                      VideoCodec('libx265'),
                      AudioCodec('libfdk_aac'))

simd = SIMD(source, output1, output2)
Apply filters
The easiest way to apply a filter to a stream vector is to pass it to the "pipeline operator" (|):
cursor = simd | Volume(30)
If a vector has only one element (i.e. the input stream itself), no preliminary splitting occurs. A Split filter is added automatically if the applied filter vector contains distinct elements, e.g. the same filter with different parameters:
simd.connect(Scale, params=[(1920, 1080), (1280, 720)])
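The auto-split behavior can be sketched in plain Python (a toy model, not fffw internals): a single upstream element feeding distinct filters must first be duplicated, which is what ffmpeg's split filter does.

```python
from typing import List, Tuple


def scale_vector(source: str,
                 sizes: List[Tuple[int, int]]) -> List[str]:
    """Duplicate the source once per distinct target, then scale each copy."""
    # one split edge per target, mirroring ffmpeg's split filter
    copies = [f"{source}|split[{i}]" for i in range(len(sizes))]
    return [f"{c}|scale={w}x{h}" for c, (w, h) in zip(copies, sizes)]


print(scale_vector("v:0", [(1920, 1080), (1280, 720)]))
# ['v:0|split[0]|scale=1920x1080', 'v:0|split[1]|scale=1280x720']
```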
Another way to control where a filter is applied is to pass a mask for the filter vector:
simd.connect(Deint(), mask=[True, False])
This excludes the applied filter from the input streams that have False in the mask array.
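The mask semantics can be sketched as follows (toy names, not fffw's API): where the mask holds False, the vector element passes through unchanged.

```python
from typing import Callable, List, TypeVar

T = TypeVar("T")


def connect_masked(vector: List[T],
                   filt: Callable[[T], T],
                   mask: List[bool]) -> List[T]:
    """Apply filt only to elements whose mask entry is True."""
    return [filt(v) if use else v for v, use in zip(vector, mask)]


streams = ["v:0", "v:0"]
deinterlaced = connect_masked(streams, lambda s: s + "|yadif", [True, False])
print(deinterlaced)  # ['v:0|yadif', 'v:0']
```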
Finalizing filter graph
At the end, the stream vector (cursor) must be connected to a codec vector:
cursor > simd
simd.ffmpeg.run()
This connects each stream vector element to the corresponding codec in the simd output file list.
Complete example
This example shows both cases of vector stream processing: linear editing and generating files for adaptive video streaming.
from pymediainfo import MediaInfo

from fffw.encoding import *
from fffw.encoding.vector import SIMD, Vector
from fffw.graph import *

# detect information about input file
mi = MediaInfo.parse('source.mp4')
video_meta, audio_meta = from_media_info(mi)

# initialize input file with streams and metadata
source = input_file('source.mp4',
                    Stream(VIDEO, video_meta),
                    Stream(AUDIO, audio_meta))

# initialize output files, one per rendition
outputs = []
for size in 360, 540, 720, 1080:
    out = output_file(f'{size}.mp4',
                      VideoCodec('libx264'),
                      AudioCodec('aac'))
    outputs.append(out)

simd = SIMD(source, *outputs)

mi = MediaInfo.parse('logo.png')
logo_meta, = from_media_info(mi)

# add a logo
logo = simd < input_file('logo.png', Stream(VIDEO, logo_meta))

trim = [
    {'kind': VIDEO, 'start': 25, 'end': 50},
    {'kind': VIDEO, 'start': 160, 'end': 240},
    {'kind': VIDEO, 'start': 330, 'end': 820},
]

# cut three parts from the input video stream and
# reset timestamps for them
edited = simd.video.connect(Trim, params=trim) | SetPTS(VIDEO)

# concatenate all vector elements into a single stream
concat = Concat(VIDEO, input_count=len(edited))
for stream in edited:
    stream | concat

# cut the same parts from the input audio stream
for p in trim:
    p['kind'] = AUDIO
audio = simd.audio.connect(Trim, params=trim) | SetPTS(AUDIO)
audio_concat = Concat(AUDIO, input_count=len(audio))
for stream in audio:
    stream | audio_concat

# add the logo to the edited video stream
with_logo = concat | Overlay(x=100, y=100)
logo | with_logo

# now we need to vectorize the video stream again to perform
# scaling to multiple sizes
cursor = Vector(with_logo)
sizes = [(640, 360), (960, 540), (1280, 720), (1920, 1080)]
cursor = cursor.connect(Scale, params=sizes)

# finalize video processing
cursor > simd

# finalize audio processing
audio_concat > simd

simd.ffmpeg.overwrite = True
print(simd.ffmpeg.get_cmd())