fffw.graph package

Submodules

fffw.graph.base module

class fffw.graph.base.Dest

Bases: Traversable

Audio/video output stream node.

Must connect to a single filter output only.

connect_edge(edge: Edge) Edge

Connects an edge to the output stream.

Should be called only from Node methods. Initializes edge identifier.

Parameters:

edge (Edge) – edge to connect output stream to.

Returns:

connected edge

get_meta_data(dst: Dest | Node | None = None) Meta | None
Parameters:

dst – destination node

Returns:

metadata passed to destination node after transformation

render(partial: bool = False) List[str]

Returns a list of filter_graph edge descriptions.

This method must be called in Namer context.

Parameters:

partial – partially formatted graph render mode flag

Returns:

edge description list ["[v:0]yadif[v:t1]", "[v:t1]scale[out]"]

transform(*metadata: Meta) Meta

Apply codec changes to stream metadata.

property edge: Edge | None
kind: StreamType
property meta: Meta | None
property name: str

Returns:

edge name (i.e. [vout0]) for codec -map argument only.

class fffw.graph.base.Edge(input: Source | Node, output: Dest | Node)

Bases: Traversable

An edge of the internal ffmpeg data stream graph.

get_meta_data(dst: Dest | Node) Meta | None
Parameters:

dst – destination node

Returns:

metadata passed to destination node after transformation

reconnect(dest: Dest | Node) None

Allows detaching an edge from one output and connecting it to another.

render(partial: bool = False) List[str]

Returns a list of filter_graph edge descriptions.

This method must be called in Namer context.

Parameters:

partial – partially formatted graph render mode flag

Returns:

edge description list ["[v:0]yadif[v:t1]", "[v:t1]scale[out]"]

property input: Source | Node
property kind: StreamType
property name: str

Get actual name for edge from source node.

Property must be accessed within Namer context.

Returns:

edge identifier generated from the output node name if connected to a Dest, or the name of the last enabled filter before (and including) the current node.

property output: Dest | Node
class fffw.graph.base.Namer

Bases: object

Unique stream identifiers generator.

classmethod name(obj: Edge) str
class fffw.graph.base.Node

Bases: Traversable, ABC

Graph node describing ffmpeg filter.

connect_dest(other: Node) Node
connect_dest(other: Dest) Dest

Connects the next filter or an output stream to one of the filter outputs.

Parameters:

other – next filter or output stream

Returns:

next filter or output stream, connected to current stream

connect_edge(edge: Edge) Edge

Connects an edge to one of the filter inputs.

Parameters:

edge – input stream edge

Returns:

connected edge

get_filter_cmd(partial: bool = False) str

Returns filter description.

Output format is [IN] FILTER ARGS [OUT], where IN and OUT are lists of input/output edge identifiers, FILTER is the filter name, and ARGS are the filter parameters.

Parameters:

partial – partially formatted graph render mode flag

Returns:

current description string like “[v:0]yadif[v:t1]”

get_meta_data(dst: Dest | Node) Meta | None

Returns metadata for selected destination.

render(partial: bool = False) List[str]

Returns a list of filter_graph edge descriptions.

This method must be called in Namer context.

Parameters:

partial – partially formatted graph render mode flag

Returns:

edge description list ["[v:0]yadif[v:t1]", "[v:t1]scale[out]"]

transform(*metadata: Meta) Meta

Apply filter changes to stream metadata.

abstract property args: str
property enabled: bool
filter: str
property input: Edge | None
input_count: int = 1
property inputs: List[Edge | None]

Returns:

list of placeholders for input edges.

kind: StreamType
property meta: Meta | None

Compute metadata for current node.

property output: Edge | None
output_count: int = 1
property outputs: List[Edge | None]

Returns:

list of placeholders for output edges.

class fffw.graph.base.Once(attr_name: str)

Bases: object

Property that must be set exactly once.

class fffw.graph.base.Source(kind: StreamType, meta: Meta | None = None)

Bases: Traversable

Graph node containing audio or video input.

Must connect to a single graph edge only, as a source.

connect_dest(other: N) N
connect_dest(other: D) D
get_meta_data(dst: Dest | Node) Meta | None
Parameters:

dst – destination node

Returns:

metadata passed to destination node after transformation

render(partial: bool = False) List[str]

Returns a list of filter_graph edge descriptions.

This method must be called in Namer context.

Parameters:

partial – partially formatted graph render mode flag

Returns:

edge description list ["[v:0]yadif[v:t1]", "[v:t1]scale[out]"]

property connected: bool
property kind: StreamType

Returns:

stream type

property meta: Meta | None

Returns:

stream metadata

abstract property name: str
class fffw.graph.base.Traversable

Bases: object

Abstract base class for traversing and rendering filter graph edges/nodes.

abstract get_meta_data(dst: Dest | Node) Meta | None
Parameters:

dst – destination node

Returns:

metadata passed to destination node after transformation

abstract render(partial: bool = False) List[str]

Returns a list of filter_graph edge descriptions.

This method must be called in Namer context.

Parameters:

partial – partially formatted graph render mode flag

Returns:

edge description list ["[v:0]yadif[v:t1]", "[v:t1]scale[out]"]

fffw.graph.meta module

class fffw.graph.meta.AudioMeta(duration: TS, start: TS, bitrate: int, scenes: List[Scene], streams: List[str], sampling_rate: int, channels: int, samples: int)

Bases: Meta

Audio stream metadata.

Describes audio stream characteristics.

validate() None
channels: int

Number of audio channels.

property kind: StreamType
samples: int

Number of samples.

sampling_rate: int

Samples per second.

class fffw.graph.meta.Device(hardware: str, name: str)

Bases: object

Describes a hardware device used for video acceleration.

hardware: str
name: str
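
A minimal construction sketch; the values are illustrative only (the hardware string typically names an ffmpeg hardware acceleration backend such as 'cuda'), and a Device instance normally ends up in VideoMeta.device below.

    from fffw.graph.meta import Device

    # Illustrative values: 'cuda' as the acceleration backend name and a
    # human-readable device name.
    gpu = Device(hardware='cuda', name='GeForce GTX 1080')
    print(gpu.hardware, gpu.name)
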
class fffw.graph.meta.Meta(duration: TS, start: TS, bitrate: int, scenes: List[Scene], streams: List[str])

Bases: object

Stream metadata.

Describes common stream characteristics like bitrate and duration.

bitrate: int

Input stream bitrate in bits per second.

duration: TS

Resulting stream duration.

property end: TS

Returns:

timestamp of the last frame in the resulting stream.

property kind: StreamType
scenes: List[Scene]

List of continuous stream fragments (possibly from different files) that need to be read to produce a result with the current metadata.

start: TS

First frame/sample timestamp for resulting stream.

streams: List[str]

List of streams (possibly from different files) that need to be read to produce a result with the current metadata.

class fffw.graph.meta.Scene(stream: str | None, duration: TS, start: TS, position: TS)

Bases: object

Continuous part of stream used in transcoding graph.

duration: TS

Stream duration.

property end: TS
position: TS

Position of scene in current stream.

start: TS

First frame/sample timestamp in source stream.

stream: str | None

Stream identifier.
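
The Meta and Scene dataclasses can be instantiated directly from the signatures documented above. The following is a minimal sketch with placeholder values ('source.mp4', 10 seconds); real instances are normally produced by from_media_info() or the *_meta_data() helpers below.

    from fffw.graph.meta import Meta, Scene, TS

    # A single 10-second scene taken from the beginning of a placeholder file.
    scene = Scene(stream='source.mp4', duration=TS(10.0), start=TS(0),
                  position=TS(0))
    meta = Meta(duration=TS(10.0), start=TS(0), bitrate=1_500_000,
                scenes=[scene], streams=['source.mp4'])
    print(meta.end)  # timestamp of the last frame: start + duration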

class fffw.graph.meta.StreamType(value)

Bases: Enum

An enumeration.

AUDIO = 'a'
VIDEO = 'v'
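
The member values correspond to the single-letter stream specifiers used in ffmpeg stream selectors and in edge labels such as "[v:0]" above:

    from fffw.graph.meta import StreamType

    # 'v' and 'a' are the enum values listed above.
    assert StreamType.VIDEO.value == 'v'
    assert StreamType.AUDIO.value == 'a'
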
class fffw.graph.meta.TS(value: int | float | str | timedelta)

Bases: float

Timestamp data type.

Accepts common timestamp formats like '123:45:56.1234'. Integer values are parsed as milliseconds.

total_seconds() float
property days: int
property microseconds: int
property seconds: int
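
A small usage sketch based on the constructor formats documented above; printed values are approximate:

    from fffw.graph.meta import TS

    ts = TS('01:23:45.678')    # "HH:MM:SS.fraction" string form
    print(float(ts))           # TS subclasses float; the value is in seconds (~5025.678)
    print(ts.total_seconds())  # same value via the timedelta-style accessor
    print(ts.days, ts.seconds) # timedelta-like components: 0 days, 5025 whole seconds
    print(float(TS(1500)))     # integer input is parsed as milliseconds -> 1.5
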
class fffw.graph.meta.VideoMeta(duration: TS, start: TS, bitrate: int, scenes: List[Scene], streams: List[str], width: int, height: int, par: float, dar: float, frame_rate: float, frames: int, device: Device | None)

Bases: Meta

Video stream metadata.

Describes video stream characteristics.

validate() None
dar: float

Display aspect ratio.

device: Device | None

Hardware device associated with the current stream.

frame_rate: float

Frames per second.

frames: int

Number of frames.

height: int
property kind: StreamType
par: float

Pixel aspect ratio.

width: int
fffw.graph.meta.audio_meta_data(**kwargs: Any) AudioMeta
fffw.graph.meta.from_media_info(mi: MediaInfo) List[Meta]
fffw.graph.meta.video_meta_data(**kwargs: Any) VideoMeta
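
A hedged usage sketch for from_media_info(): it relies on the pymediainfo package (and the underlying libmediainfo library), and 'input.mp4' is a placeholder path. Each returned item is stream metadata (VideoMeta or AudioMeta), presumably built via the video_meta_data() / audio_meta_data() helpers above.

    from pymediainfo import MediaInfo

    from fffw.graph import meta

    mi = MediaInfo.parse('input.mp4')  # placeholder file name
    for m in meta.from_media_info(mi):
        print(m.kind, m.duration, m.bitrate)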

Module contents