Diffstat (limited to 'doc/muxers.texi')
 doc/muxers.texi | 741
 1 file changed, 584 insertions, 157 deletions
diff --git a/doc/muxers.texi b/doc/muxers.texi
index 6e06e463f1..ae5301c4bb 100644
--- a/doc/muxers.texi
+++ b/doc/muxers.texi
@@ -1,10 +1,10 @@
@chapter Muxers
@c man begin MUXERS
-Muxers are configured elements in Libav which allow writing
+Muxers are configured elements in FFmpeg which allow writing
multimedia streams to a particular type of file.
-When you configure your Libav build, all the supported muxers
+When you configure your FFmpeg build, all the supported muxers
are enabled by default. You can list all available muxers using the
configure option @code{--list-muxers}.
@@ -13,11 +13,28 @@ You can disable all the muxers with the configure option
with the options @code{--enable-muxer=@var{MUXER}} /
@code{--disable-muxer=@var{MUXER}}.
-The option @code{-formats} of the av* tools will display the list of
+The option @code{-formats} of the ff* tools will display the list of
enabled muxers.
A description of some of the currently available muxers follows.
+@anchor{aiff}
+@section aiff
+
+Audio Interchange File Format muxer.
+
+It accepts the following options:
+
+@table @option
+@item write_id3v2
+Enable writing an ID3v2 tag when set to 1. Default is 0 (disabled).
+
+@item id3v2_version
+Select the ID3v2 version to write. Currently only versions 3 and 4
+(a.k.a. ID3v2.3 and ID3v2.4) are supported. The default is version 4.
+
+@end table
+
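+For example, a minimal sketch of writing an AIFF file that carries an
+ID3v2.3 tag (the output filename is illustrative):
+@example
+ffmpeg -i INPUT -write_id3v2 1 -id3v2_version 3 out.aiff
+@end example
+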
@anchor{crc}
@section crc
@@ -35,20 +52,20 @@ CRC=0x@var{CRC}, where @var{CRC} is a hexadecimal number 0-padded to
For example to compute the CRC of the input, and store it in the file
@file{out.crc}:
@example
-avconv -i INPUT -f crc out.crc
+ffmpeg -i INPUT -f crc out.crc
@end example
You can print the CRC to stdout with the command:
@example
-avconv -i INPUT -f crc -
+ffmpeg -i INPUT -f crc -
@end example
-You can select the output format of each frame with @command{avconv} by
+You can select the output format of each frame with @command{ffmpeg} by
specifying the audio and video codec and format. For example to
compute the CRC of the input audio converted to PCM unsigned 8-bit
and the input video converted to MPEG-2 video, use the command:
@example
-avconv -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc -
+ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc -
@end example
See also the @ref{framecrc} muxer.
@@ -56,40 +73,79 @@ See also the @ref{framecrc} muxer.
@anchor{framecrc}
@section framecrc
-Per-frame CRC (Cyclic Redundancy Check) testing format.
+Per-packet CRC (Cyclic Redundancy Check) testing format.
-This muxer computes and prints the Adler-32 CRC for each decoded audio
-and video frame. By default audio frames are converted to signed
+This muxer computes and prints the Adler-32 CRC for each audio
+and video packet. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
CRC.
The output of the muxer consists of a line for each audio and video
-frame of the form: @var{stream_index}, @var{frame_dts},
-@var{frame_size}, 0x@var{CRC}, where @var{CRC} is a hexadecimal
-number 0-padded to 8 digits containing the CRC of the decoded frame.
+packet of the form:
+@example
+@var{stream_index}, @var{packet_dts}, @var{packet_pts}, @var{packet_duration}, @var{packet_size}, 0x@var{CRC}
+@end example
+
+@var{CRC} is a hexadecimal number 0-padded to 8 digits containing the
+CRC of the packet.
-For example to compute the CRC of each decoded frame in the input, and
-store it in the file @file{out.crc}:
+For example to compute the CRC of the audio and video frames in
+@file{INPUT}, converted to raw audio and video packets, and store it
+in the file @file{out.crc}:
@example
-avconv -i INPUT -f framecrc out.crc
+ffmpeg -i INPUT -f framecrc out.crc
@end example
-You can print the CRC of each decoded frame to stdout with the command:
+To print the information to stdout, use the command:
@example
-avconv -i INPUT -f framecrc -
+ffmpeg -i INPUT -f framecrc -
@end example
-You can select the output format of each frame with @command{avconv} by
-specifying the audio and video codec and format. For example, to
+With @command{ffmpeg}, you can select the output format to which the
+audio and video frames are encoded before computing the CRC for each
+packet by specifying the audio and video codec. For example, to
compute the CRC of each decoded input audio frame converted to PCM
unsigned 8-bit and of each decoded input video frame converted to
MPEG-2 video, use the command:
@example
-avconv -i INPUT -c:a pcm_u8 -c:v mpeg2video -f framecrc -
+ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f framecrc -
@end example
See also the @ref{crc} muxer.
+@anchor{framemd5}
+@section framemd5
+
+Per-packet MD5 testing format.
+
+This muxer computes and prints the MD5 hash for each audio
+and video packet. By default audio frames are converted to signed
+16-bit raw audio and video frames to raw video before computing the
+hash.
+
+The output of the muxer consists of a line for each audio and video
+packet of the form:
+@example
+@var{stream_index}, @var{packet_dts}, @var{packet_pts}, @var{packet_duration}, @var{packet_size}, @var{MD5}
+@end example
+
+@var{MD5} is a hexadecimal number representing the computed MD5 hash
+for the packet.
+
+For example to compute the MD5 of the audio and video frames in
+@file{INPUT}, converted to raw audio and video packets, and store it
+in the file @file{out.md5}:
+@example
+ffmpeg -i INPUT -f framemd5 out.md5
+@end example
+
+To print the information to stdout, use the command:
+@example
+ffmpeg -i INPUT -f framemd5 -
+@end example
+
+See also the @ref{md5} muxer.
+
@anchor{hls}
@section hls
@@ -102,7 +158,7 @@ receive the same basename as the playlist, a sequential number and
a .ts extension.
@example
-avconv -i in.nut out.m3u8
+ffmpeg -i in.nut out.m3u8
@end example
@table @option
@@ -116,6 +172,39 @@ Set the number after which index wraps.
Start the sequence from @var{number}.
@end table
+@anchor{ico}
+@section ico
+
+ICO file muxer.
+
+Microsoft's icon file format (ICO) has some strict limitations that should be noted:
+
+@itemize
+@item
+Size cannot exceed 256 pixels in any dimension
+
+@item
+Only BMP and PNG images can be stored
+
+@item
+If a BMP image is used, it must be one of the following pixel formats:
+@example
+BMP Bit Depth   FFmpeg Pixel Format
+1bit            pal8
+4bit            pal8
+8bit            pal8
+16bit           rgb555le
+24bit           bgr24
+32bit           bgra
+@end example
+
+@item
+If a BMP image is used, it must use the BITMAPINFOHEADER DIB header
+
+@item
+If a PNG image is used, it must use the rgba pixel format
+@end itemize
+
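+For example, a hedged sketch of converting an image to a 64x64 icon
+stored as PNG, using the rgba pixel format required for PNG entries
+(the @file{logo.png} and @file{favicon.ico} names are illustrative):
+@example
+ffmpeg -i logo.png -vf scale=64:64 -c:v png -pix_fmt rgba favicon.ico
+@end example
+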
@anchor{image2}
@section image2
@@ -146,31 +235,32 @@ The pattern "img%%-%d.jpg" will specify a sequence of filenames of the
form @file{img%-1.jpg}, @file{img%-2.jpg}, ..., @file{img%-10.jpg},
etc.
-The following example shows how to use @command{avconv} for creating a
+The following example shows how to use @command{ffmpeg} for creating a
sequence of files @file{img-001.jpeg}, @file{img-002.jpeg}, ...,
taking one image every second from the input video:
@example
-avconv -i in.avi -vsync 1 -r 1 -f image2 'img-%03d.jpeg'
+ffmpeg -i in.avi -vsync 1 -r 1 -f image2 'img-%03d.jpeg'
@end example
-Note that with @command{avconv}, if the format is not specified with the
+Note that with @command{ffmpeg}, if the format is not specified with the
@code{-f} option and the output filename specifies an image file
format, the image2 muxer is automatically selected, so the previous
command can be written as:
@example
-avconv -i in.avi -vsync 1 -r 1 'img-%03d.jpeg'
+ffmpeg -i in.avi -vsync 1 -r 1 'img-%03d.jpeg'
@end example
Note also that the pattern does not necessarily have to contain "%d" or
"%0@var{N}d"; for example, to create a single image file
@file{img.jpeg} from the input video you can employ the command:
@example
-avconv -i in.avi -f image2 -frames:v 1 img.jpeg
+ffmpeg -i in.avi -f image2 -frames:v 1 img.jpeg
@end example
@table @option
-@item -start_number @var{number}
-Start the sequence from @var{number}.
+@item start_number @var{number}
+Start the sequence from @var{number}. Default value is 1. Must be a
+non-negative number.
@item -update @var{number}
If @var{number} is nonzero, the filename will always be interpreted as just a
@@ -179,12 +269,132 @@ images.
@end table
-@section MOV/MP4/ISMV
+The image muxer supports the .Y.U.V image file format. This format is
+special in that each image frame consists of three files, one for
+each of the YUV420P components. To read or write this image file format,
+specify the name of the '.Y' file. The muxer will automatically open the
+'.U' and '.V' files as required.
+
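+A minimal sketch of writing such files, assuming the input can be
+converted to YUV420P (the filename pattern is illustrative):
+@example
+ffmpeg -i INPUT -c:v rawvideo -pix_fmt yuv420p -f image2 'img-%03d.Y'
+@end example
+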
+@section matroska
+
+Matroska container muxer.
+
+This muxer implements the matroska and webm container specs.
+
+The recognized metadata settings in this muxer are:
+
+@table @option
+
+@item title=@var{title name}
+Name provided to a single track
+@end table
+
+@table @option
+
+@item language=@var{language name}
+Specifies the language of the track in the Matroska languages form
+@end table
+
+@table @option
+
+@item stereo_mode=@var{mode}
+Stereo 3D video layout of two views in a single video track
+@table @option
+@item mono
+video is not stereo
+@item left_right
+Both views are arranged side by side, Left-eye view is on the left
+@item bottom_top
+Both views are arranged in top-bottom orientation, Left-eye view is at bottom
+@item top_bottom
+Both views are arranged in top-bottom orientation, Left-eye view is on top
+@item checkerboard_rl
+Each view is arranged in a checkerboard interleaved pattern, Left-eye view being first
+@item checkerboard_lr
+Each view is arranged in a checkerboard interleaved pattern, Right-eye view being first
+@item row_interleaved_rl
+Each view is constituted by a row based interleaving, Right-eye view is first row
+@item row_interleaved_lr
+Each view is constituted by a row based interleaving, Left-eye view is first row
+@item col_interleaved_rl
+Both views are arranged in a column based interleaving manner, Right-eye view is first column
+@item col_interleaved_lr
+Both views are arranged in a column based interleaving manner, Left-eye view is first column
+@item anaglyph_cyan_red
+All frames are in anaglyph format viewable through red-cyan filters
+@item right_left
+Both views are arranged side by side, Right-eye view is on the left
+@item anaglyph_green_magenta
+All frames are in anaglyph format viewable through green-magenta filters
+@item block_lr
+Both eyes laced in one Block, Left-eye view is first
+@item block_rl
+Both eyes laced in one Block, Right-eye view is first
+@end table
+@end table
+
+For example a 3D WebM clip can be created using the following command line:
+@example
+ffmpeg -i sample_left_right_clip.mpg -an -c:v libvpx -metadata stereo_mode=left_right -y stereo_clip.webm
+@end example
+
+This muxer supports the following options:
+
+@table @option
+
+@item reserve_index_space
+By default, this muxer writes the index for seeking (called cues in Matroska
+terms) at the end of the file, because it cannot know in advance how much space
+to leave for the index at the beginning of the file. However for some use cases
+-- e.g. streaming where seeking is possible but slow -- it is useful to put the
+index at the beginning of the file.
+
+If this option is set to a non-zero value, the muxer will reserve a given amount
+of space in the file header and then try to write the cues there when the muxing
+finishes. If the available space does not suffice, muxing will fail. A safe size
+for most use cases should be about 50kB per hour of video.
+
+Note that cues are only written if the output is seekable and this option will
+have no effect if it is not.
+
+@end table
+
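+For example, a sketch reserving roughly 200 KiB for the cues, assuming
+the input streams can be copied into Matroska (the value is
+illustrative and should be sized to the expected duration):
+@example
+ffmpeg -i INPUT -c copy -reserve_index_space 204800 out.mkv
+@end example
+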
+@anchor{md5}
+@section md5
+
+MD5 testing format.
+
+This muxer computes and prints the MD5 hash of all the input audio
+and video frames. By default audio frames are converted to signed
+16-bit raw audio and video frames to raw video before computing the
+hash.
+
+The output of the muxer consists of a single line of the form:
+MD5=@var{MD5}, where @var{MD5} is a hexadecimal number representing
+the computed MD5 hash.
+
+For example to compute the MD5 hash of the input converted to raw
+audio and video, and store it in the file @file{out.md5}:
+@example
+ffmpeg -i INPUT -f md5 out.md5
+@end example
+
+You can print the MD5 to stdout with the command:
+@example
+ffmpeg -i INPUT -f md5 -
+@end example
+
+See also the @ref{framemd5} muxer.
+
+@section mov/mp4/ismv
+
+MOV/MP4/ISMV (Smooth Streaming) muxer.
The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4
file has all the metadata about all packets stored in one location
(written at the end of the file, it can be moved to the start for
-better playback using the @command{qt-faststart} tool). A fragmented
+better playback by adding @var{faststart} to the @var{movflags}, or
+using the @command{qt-faststart} tool). A fragmented
file consists of a number of fragments, where packets and metadata
about these packets are stored together. Writing a fragmented
file has the advantage that the file is decodable even if the
@@ -198,6 +408,9 @@ Fragmentation is enabled by setting one of the AVOptions that define
how to cut the file into fragments:
@table @option
+@item -moov_size @var{bytes}
+Reserves space for the moov atom at the beginning of the file instead of placing the
+moov atom at the end. If the space reserved is insufficient, muxing will fail.
@item -movflags frag_keyframe
Start a new fragment at each video keyframe.
@item -frag_duration @var{duration}
@@ -208,7 +421,7 @@ Create fragments that contain up to @var{size} bytes of payload data.
Allow the caller to manually choose when to cut fragments, by
calling @code{av_write_frame(ctx, NULL)} to write a fragment with
the packets written so far. (This is only useful with other
-applications integrating libavformat, not from @command{avconv}.)
+applications integrating libavformat, not from @command{ffmpeg}.)
@item -min_frag_duration @var{duration}
Don't create fragments that are shorter than @var{duration} microseconds long.
@end table
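
For example, a minimal sketch of writing a fragmented MP4 that starts a
new fragment at every video keyframe, assuming stream copy is valid for
the input:
@example
ffmpeg -i INPUT -c copy -movflags frag_keyframe fragmented.mp4
@end example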
@@ -243,12 +456,50 @@ This option is implicitly set when writing ismv (Smooth Streaming) files.
Run a second pass moving the index (moov atom) to the beginning of the file.
This operation can take a while, and will not work in various situations such
as fragmented output, thus it is not enabled by default.
+@item -movflags rtphint
+Add RTP hinting tracks to the output file.
@end table
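
For example, a sketch of producing an MP4 suitable for progressive
download by moving the moov atom to the front, assuming stream copy is
valid for the input:
@example
ffmpeg -i INPUT -c copy -movflags faststart out.mp4
@end example
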
Smooth Streaming content can be pushed in real time to a publishing
point on IIS with this muxer. Example:
@example
-avconv -re @var{<normal input/transcoding options>} -movflags isml+frag_keyframe -f ismv http://server/publishingpoint.isml/Streams(Encoder1)
+ffmpeg -re @var{<normal input/transcoding options>} -movflags isml+frag_keyframe -f ismv http://server/publishingpoint.isml/Streams(Encoder1)
+@end example
+
+@section mp3
+
+The MP3 muxer writes a raw MP3 stream with an ID3v2 header at the beginning and
+optionally an ID3v1 tag at the end. ID3v2.3 and ID3v2.4 are supported, the
+@code{id3v2_version} option controls which one is used. The legacy ID3v1 tag is
+not written by default, but may be enabled with the @code{write_id3v1} option.
+
+For seekable output the muxer also writes a Xing frame at the beginning, which
+contains the number of frames in the file. It is useful for computing duration
+of VBR files.
+
+The muxer supports writing ID3v2 attached pictures (APIC frames). The pictures
+are supplied to the muxer in the form of a video stream with a single packet. There
+can be any number of those streams; each will correspond to a single APIC frame.
+The stream metadata tags @var{title} and @var{comment} map to APIC
+@var{description} and @var{picture type} respectively. See
+@url{http://id3.org/id3v2.4.0-frames} for allowed picture types.
+
+Note that the APIC frames must be written at the beginning, so the muxer will
+buffer the audio frames until it gets all the pictures. It is therefore advised
+to provide the pictures as soon as possible to avoid excessive buffering.
+
+Examples:
+
+Write an mp3 with an ID3v2.3 header and an ID3v1 footer:
+@example
+ffmpeg -i INPUT -id3v2_version 3 -write_id3v1 1 out.mp3
+@end example
+
+To attach a picture to an mp3 file, select both the audio and the picture stream
+with @code{map}:
+@example
+ffmpeg -i input.mp3 -i cover.png -c copy -map 0 -map 1 \
+-metadata:s:v title="Album cover" -metadata:s:v comment="Cover (Front)" out.mp3
@end example
@section mpegts
@@ -273,15 +524,49 @@ Set the service_id (default 0x0001) also known as program in DVB.
Set the first PID for PMT (default 0x1000, max 0x1f00).
@item -mpegts_start_pid @var{number}
Set the first PID for data packets (default 0x0100, max 0x0f00).
+@item -mpegts_m2ts_mode @var{number}
+Enable m2ts mode if set to 1. Default value is -1 which disables m2ts mode.
+@item -muxrate @var{number}
+Set muxrate.
+@item -pes_payload_size @var{number}
+Set minimum PES packet payload in bytes.
+@item -mpegts_flags @var{flags}
+Set flags (see below).
+@item -mpegts_copyts @var{number}
+Preserve original timestamps, if value is set to 1. Default value is -1, which
+results in shifting timestamps so that they start from 0.
+@item -tables_version @var{number}
+Set PAT, PMT and SDT version (default 0, valid values are from 0 to 31, inclusively).
+This option allows updating the stream structure so that a standard consumer may
+detect the change. To do so, reopen the output AVFormatContext (in case of API
+usage) or restart the ffmpeg instance, cyclically changing the tables_version value:
+@example
+ffmpeg -i source1.ts -codec copy -f mpegts -tables_version 0 udp://1.1.1.1:1111
+ffmpeg -i source2.ts -codec copy -f mpegts -tables_version 1 udp://1.1.1.1:1111
+...
+ffmpeg -i source3.ts -codec copy -f mpegts -tables_version 31 udp://1.1.1.1:1111
+ffmpeg -i source1.ts -codec copy -f mpegts -tables_version 0 udp://1.1.1.1:1111
+ffmpeg -i source2.ts -codec copy -f mpegts -tables_version 1 udp://1.1.1.1:1111
+...
+@end example
+@end table
+
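+For example, a hedged sketch of producing a constant-rate transport
+stream; the rate is illustrative and must be high enough for the
+actual streams:
+@example
+ffmpeg -i INPUT -c copy -f mpegts -muxrate 3000000 cbr.ts
+@end example
+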
+Option mpegts_flags may take a set of such flags:
+
+@table @option
+@item resend_headers
+Reemit PAT/PMT before writing the next packet.
+@item latm
+Use LATM packetization for AAC.
@end table
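+
+For example, a sketch of remuxing with LATM packetization for AAC,
+re-encoding the audio with the experimental native encoder (the
+encoder choice and output name are illustrative):
+@example
+ffmpeg -i INPUT -c:v copy -c:a aac -strict experimental -f mpegts -mpegts_flags +latm out.ts
+@end example
+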
The recognized metadata settings in mpegts muxer are @code{service_provider}
and @code{service_name}. If they are not set the default for
-@code{service_provider} is "Libav" and the default for
+@code{service_provider} is "FFmpeg" and the default for
@code{service_name} is "Service01".
@example
-avconv -i file.mpg -c copy \
+ffmpeg -i file.mpg -c copy \
-mpegts_original_network_id 0x1122 \
-mpegts_transport_stream_id 0x3344 \
-mpegts_service_id 0x5566 \
@@ -299,185 +584,327 @@ Null muxer.
This muxer does not generate any output file, it is mainly useful for
testing or benchmarking purposes.
-For example to benchmark decoding with @command{avconv} you can use the
+For example to benchmark decoding with @command{ffmpeg} you can use the
command:
@example
-avconv -benchmark -i INPUT -f null out.null
+ffmpeg -benchmark -i INPUT -f null out.null
@end example
Note that the above command does not read or write the @file{out.null}
-file, but specifying the output file is required by the @command{avconv}
+file, but specifying the output file is required by the @command{ffmpeg}
syntax.
Alternatively you can write the command as:
@example
-avconv -benchmark -i INPUT -f null -
+ffmpeg -benchmark -i INPUT -f null -
@end example
-@section matroska
+@section ogg
-Matroska container muxer.
+Ogg container muxer.
-This muxer implements the matroska and webm container specs.
+@table @option
+@item -page_duration @var{duration}
+Preferred page duration, in microseconds. The muxer will attempt to create
+pages that are approximately @var{duration} microseconds long. This allows the
+user to compromise between seek granularity and container overhead. The default
+is 1 second. A value of 0 will fill all segments, making pages as large as
+possible. A value of 1 will effectively use 1 packet-per-page in most
+situations, giving a small seek granularity at the cost of additional container
+overhead.
+@end table
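+
+For example, a sketch trading container overhead for finer seeking by
+using shorter pages, assuming the @code{libvorbis} encoder is
+available (the value is illustrative):
+@example
+ffmpeg -i INPUT -c:a libvorbis -page_duration 100000 out.ogg
+@end example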
-The recognized metadata settings in this muxer are:
+@section segment, stream_segment, ssegment
-@table @option
+Basic stream segmenter.
-@item title=@var{title name}
-Name provided to a single track
-@end table
+The segmenter muxer outputs streams to a number of separate files of nearly
+fixed duration. Output filename pattern can be set in a fashion similar to
+@ref{image2}.
-@table @option
+@code{stream_segment} is a variant of the muxer used to write to
+streaming output formats, i.e. formats which do not require global headers,
+and is recommended for outputting to e.g. MPEG transport stream segments.
+@code{ssegment} is a shorter alias for @code{stream_segment}.
-@item language=@var{language name}
-Specifies the language of the track in the Matroska languages form
-@end table
+Every segment starts with a keyframe of the selected reference stream,
+which is set through the @option{reference_stream} option.
-@table @option
+Note that if you want accurate splitting for a video file, you need to
+make the input key frames correspond to the exact splitting times
+expected by the segmenter, or the segment muxer will start the new
+segment with the key frame found next after the specified start
+time.
+
+The segment muxer works best with a single constant frame rate video.
+
+Optionally it can generate a list of the created segments, by setting
+the option @var{segment_list}. The list type is specified by the
+@var{segment_list_type} option.
+
+The segment muxer supports the following options:
-@item STEREO_MODE=@var{mode}
-Stereo 3D video layout of two views in a single video track
@table @option
-@item mono
-video is not stereo
-@item left_right
-Both views are arranged side by side, Left-eye view is on the left
-@item bottom_top
-Both views are arranged in top-bottom orientation, Left-eye view is at bottom
-@item top_bottom
-Both views are arranged in top-bottom orientation, Left-eye view is on top
-@item checkerboard_rl
-Each view is arranged in a checkerboard interleaved pattern, Left-eye view being first
-@item checkerboard_lr
-Each view is arranged in a checkerboard interleaved pattern, Right-eye view being first
-@item row_interleaved_rl
-Each view is constituted by a row based interleaving, Right-eye view is first row
-@item row_interleaved_lr
-Each view is constituted by a row based interleaving, Left-eye view is first row
-@item col_interleaved_rl
-Both views are arranged in a column based interleaving manner, Right-eye view is first column
-@item col_interleaved_lr
-Both views are arranged in a column based interleaving manner, Left-eye view is first column
-@item anaglyph_cyan_red
-All frames are in anaglyph format viewable through red-cyan filters
-@item right_left
-Both views are arranged side by side, Right-eye view is on the left
-@item anaglyph_green_magenta
-All frames are in anaglyph format viewable through green-magenta filters
-@item block_lr
-Both eyes laced in one Block, Left-eye view is first
-@item block_rl
-Both eyes laced in one Block, Right-eye view is first
-@end table
+@item reference_stream @var{specifier}
+Set the reference stream, as specified by the string @var{specifier}.
+If @var{specifier} is set to @code{auto}, the reference is chosen
+automatically. Otherwise it must be a stream specifier (see the ``Stream
+specifiers'' chapter in the ffmpeg manual) which specifies the
+reference stream. The default value is @code{auto}.
+
+@item segment_format @var{format}
+Override the inner container format, by default it is guessed by the filename
+extension.
+
+@item segment_list @var{name}
+Also generate a list file named @var{name}. If not specified, no
+list file is generated.
+
+@item segment_list_flags @var{flags}
+Set flags affecting the segment list generation.
+
+It currently supports the following flags:
+@table @samp
+@item cache
+Allow caching (only affects M3U8 list files).
+
+@item live
+Allow live-friendly file generation.
@end table
-For example a 3D WebM clip can be created using the following command line:
+Default value is @code{cache}.
+
+@item segment_list_size @var{size}
+Update the list file so that it contains at most the last @var{size}
+segments. If 0 the list file will contain all the segments. Default
+value is 0.
+
+@item segment_list_type @var{type}
+Specify the format for the segment list file.
+
+The following values are recognized:
+@table @samp
+@item flat
+Generate a flat list for the created segments, one segment per line.
+
+@item csv, ext
+Generate a list for the created segments, one segment per line,
+each line matching the format (comma-separated values):
@example
-avconv -i sample_left_right_clip.mpg -an -c:v libvpx -metadata STEREO_MODE=left_right -y stereo_clip.webm
+@var{segment_filename},@var{segment_start_time},@var{segment_end_time}
@end example
-This muxer supports the following options:
+@var{segment_filename} is the name of the output file generated by the
+muxer according to the provided pattern. CSV escaping (according to
+RFC4180) is applied if required.
-@table @option
+@var{segment_start_time} and @var{segment_end_time} specify
+the segment start and end time expressed in seconds.
-@item reserve_index_space
-By default, this muxer writes the index for seeking (called cues in Matroska
-terms) at the end of the file, because it cannot know in advance how much space
-to leave for the index at the beginning of the file. However for some use cases
--- e.g. streaming where seeking is possible but slow -- it is useful to put the
-index at the beginning of the file.
+A list file with the suffix @code{".csv"} or @code{".ext"} will
+auto-select this format.
-If this option is set to a non-zero value, the muxer will reserve a given amount
-of space in the file header and then try to write the cues there when the muxing
-finishes. If the available space does not suffice, muxing will fail. A safe size
-for most use cases should be about 50kB per hour of video.
+@samp{ext} is deprecated in favor of @samp{csv}.
-Note that cues are only written if the output is seekable and this option will
-have no effect if it is not.
+@item ffconcat
+Generate an ffconcat file for the created segments. The resulting file
+can be read using the FFmpeg @ref{concat} demuxer.
+
+A list file with the suffix @code{".ffcat"} or @code{".ffconcat"} will
+auto-select this format.
+@item m3u8
+Generate an extended M3U8 file, version 3, compliant with
+@url{http://tools.ietf.org/id/draft-pantos-http-live-streaming}.
+
+A list file with the suffix @code{".m3u8"} will auto-select this format.
@end table
-@section segment
+If not specified the type is guessed from the list file name suffix.
-Basic stream segmenter.
+@item segment_time @var{time}
+Set segment duration to @var{time}; the value must be a duration
+specification. Default value is "2". See also the
+@option{segment_times} option.
-The segmenter muxer outputs streams to a number of separate files of nearly
-fixed duration. Output filename pattern can be set in a fashion similar to
-@ref{image2}.
+Note that splitting may not be accurate, unless you force the
+reference stream to have key frames at the given times. See the
+introductory notice and the examples below.
-Every segment starts with a video keyframe, if a video stream is present.
-The segment muxer works best with a single constant frame rate video.
+@item segment_time_delta @var{delta}
+Specify the accuracy used when selecting the start time for a
+segment, expressed as a duration specification. Default value is "0".
-Optionally it can generate a flat list of the created segments, one segment
-per line.
+When delta is specified a key-frame will start a new segment if its
+PTS satisfies the relation:
+@example
+PTS >= start_time - time_delta
+@end example
+
+This option is useful when splitting video content, which is always
+split at GOP boundaries, in case a key frame is found just before the
+specified split time.
+
+In particular, it may be used in combination with the @command{ffmpeg} option
+@option{force_key_frames}. The key frame times specified by
+@option{force_key_frames} may not be set accurately because of rounding
+issues, with the consequence that a key frame may end up just
+before the specified time. For constant frame rate videos, a value of
+1/(2*@var{frame_rate}) should address the worst case mismatch between
+the specified time and the time set by @option{force_key_frames}.
+
+@item segment_times @var{times}
+Specify a list of split points. @var{times} contains a list of comma
+separated duration specifications, in increasing order. See also
+the @option{segment_time} option.
+
+@item segment_frames @var{frames}
+Specify a list of split video frame numbers. @var{frames} contains a
+list of comma separated integer numbers, in increasing order.
+
+This option specifies to start a new segment whenever a reference
+stream key frame is found and the sequential number (starting from 0)
+of the frame is greater than or equal to the next value in the list.
-@table @option
-@item segment_format @var{format}
-Override the inner container format, by default it is guessed by the filename
-extension.
-@item segment_time @var{t}
-Set segment duration to @var{t} seconds.
-@item segment_list @var{name}
-Generate also a listfile named @var{name}.
-@item segment_list_size @var{size}
-Overwrite the listfile once it reaches @var{size} entries.
@item segment_wrap @var{limit}
Wrap around segment index once it reaches @var{limit}.
-@end table
-@example
-avconv -i in.mkv -c copy -map 0 -f segment -list out.list out%03d.nut
-@end example
+@item segment_start_number @var{number}
+Set the sequence number of the first segment. Defaults to @code{0}.
-@section mp3
+@item reset_timestamps @var{1|0}
+Reset timestamps at the beginning of each segment, so that each segment
+will start with near-zero timestamps. It is meant to ease the playback
+of the generated segments. It may not work with some combinations of
+muxers/codecs. It is set to @code{0} by default. See the last example
+below.
-The MP3 muxer writes a raw MP3 stream with an ID3v2 header at the beginning and
-optionally an ID3v1 tag at the end. ID3v2.3 and ID3v2.4 are supported, the
-@code{id3v2_version} option controls which one is used. The legacy ID3v1 tag is
-not written by default, but may be enabled with the @code{write_id3v1} option.
+@item initial_offset @var{offset}
+Specify timestamp offset to apply to the output packet timestamps. The
+argument must be a time duration specification, and defaults to 0.
+@end table
-For seekable output the muxer also writes a Xing frame at the beginning, which
-contains the number of frames in the file. It is useful for computing duration
-of VBR files.
+@subsection Examples
-The muxer supports writing ID3v2 attached pictures (APIC frames). The pictures
-are supplied to the muxer in form of a video stream with a single packet. There
-can be any number of those streams, each will correspond to a single APIC frame.
-The stream metadata tags @var{title} and @var{comment} map to APIC
-@var{description} and @var{picture type} respectively. See
-@url{http://id3.org/id3v2.4.0-frames} for allowed picture types.
+@itemize
+@item
+To remux the content of file @file{in.mkv} to a list of segments
+@file{out-000.nut}, @file{out-001.nut}, etc., and write the list of
+generated segments to @file{out.list}:
+@example
+ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.list out%03d.nut
+@end example
-Note that the APIC frames must be written at the beginning, so the muxer will
-buffer the audio frames until it gets all the pictures. It is therefore advised
-to provide the pictures as soon as possible to avoid excessive buffering.
+@item
+As in the example above, but segment the input file according to the split
+points specified by the @var{segment_times} option:
+@example
+ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 out%03d.nut
+@end example
-Examples:
+@item
+As in the example above, but use the @command{ffmpeg} @option{force_key_frames}
+option to force key frames in the input at the specified locations, together
+with the segment option @option{segment_time_delta} to account for
+possible rounding performed when setting key frame times.
+@example
+ffmpeg -i in.mkv -force_key_frames 1,2,3,5,8,13,21 -codec:v mpeg4 -codec:a pcm_s16le -map 0 \
+-f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 -segment_time_delta 0.05 out%03d.nut
+@end example
+In order to force key frames on the input file, transcoding is
+required.
-Write an mp3 with an ID3v2.3 header and an ID3v1 footer:
+@item
+Segment the input file by splitting the input file according to the
+frame numbers sequence specified with the @option{segment_frames} option:
@example
-avconv -i INPUT -id3v2_version 3 -write_id3v1 1 out.mp3
+ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_frames 100,200,300,500,800 out%03d.nut
@end example
-Attach a picture to an mp3:
+@item
+To convert the @file{in.mkv} to TS segments using the @code{libx264}
+and @code{libfaac} encoders:
@example
-avconv -i input.mp3 -i cover.png -c copy -metadata:s:v title="Album cover"
--metadata:s:v comment="Cover (Front)" out.mp3
+ffmpeg -i in.mkv -map 0 -codec:v libx264 -codec:a libfaac -f ssegment -segment_list out.list out%03d.ts
@end example
-@section ogg
+@item
+Segment the input file, and create an M3U8 live playlist (can be used
+as live HLS source):
+@example
+ffmpeg -re -i in.mkv -codec copy -map 0 -f segment -segment_list playlist.m3u8 \
+-segment_list_flags +live -segment_time 10 out%03d.mkv
+@end example
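+
+@item
+Segment the input file, resetting timestamps at the beginning of each
+segment so that every segment starts near zero (a sketch using the
+@option{reset_timestamps} option described above):
+@example
+ffmpeg -i in.mkv -codec copy -map 0 -f segment -reset_timestamps 1 out%03d.mkv
+@end example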
+@end itemize
-Ogg container muxer.
+@section tee
+
+The tee muxer can be used to write the same data to several files or any
+other kind of muxer. It can be used, for example, to both stream a video to
+the network and save it to disk at the same time.
+
+It is different from specifying several outputs to the @command{ffmpeg}
+command-line tool because the audio and video data will be encoded only once
+with the tee muxer; encoding can be a very expensive process. It is not
+useful when using the libavformat API directly because it is then possible
+to feed the same packets to several muxers directly.
+
+The slave outputs are specified in the file name given to the muxer,
+separated by '|'. If any of the slave names contains the '|' separator,
+leading or trailing spaces, or any special character, it must be
+escaped (see the ``Quoting and escaping'' section in the ffmpeg-utils
+manual).
+
+Muxer options can be specified for each slave by prepending them as a list of
+@var{key}=@var{value} pairs separated by ':', between square brackets. If
+the option values contain a special character or the ':' separator, they
+must be escaped; note that this is a second level of escaping.
+The following special options are also recognized:
@table @option
-@item -page_duration @var{duration}
-Preferred page duration, in microseconds. The muxer will attempt to create
-pages that are approximately @var{duration} microseconds long. This allows the
-user to compromise between seek granularity and container overhead. The default
-is 1 second. A value of 0 will fill all segments, making pages as large as
-possible. A value of 1 will effectively use 1 packet-per-page in most
-situations, giving a small seek granularity at the cost of additional container
-overhead.
+@item f
+Specify the format name. Useful if it cannot be guessed from the
+output name suffix.
+
+@item bsfs[/@var{spec}]
+Specify a list of bitstream filters to apply to the specified
+output. It is possible to specify to which streams a given bitstream
+filter applies, by appending a stream specifier to the option
+separated by @code{/}. If the stream specifier is not specified, the
+bitstream filters will be applied to all streams in the output.
+
+Several bitstream filters can be specified, separated by ",".
+
+@item select
+Select the streams that should be mapped to the slave output,
+specified by a stream specifier. If not specified, this defaults to
+all the input streams.
@end table
+Some examples follow.
+@itemize
+@item
+Encode something and both archive it in a Matroska file and stream it
+as MPEG-TS over UDP (the streams need to be explicitly mapped):
+@example
+ffmpeg -i ... -c:v libx264 -c:a mp2 -f tee -map 0:v -map 0:a \
+  "archive-20121107.mkv|[f=mpegts]udp://10.0.1.255:1234/"
+@end example
+
+@item
+Use @command{ffmpeg} to encode the input, and send the output
+to three different destinations. The @code{dump_extra} bitstream
+filter is used to add extradata information to all the output video
+keyframe packets, as required by the MPEG-TS format. The select
+option is applied to @file{out.aac} in order to make it contain only
+audio packets.
+@example
+ffmpeg -i ... -map 0 -flags +global_header -c:v libx264 -c:a aac -strict experimental \
+ -f tee "[bsfs/v=dump_extra]out.ts|[movflags=+faststart]out.mp4|[select=a]out.aac"
+@end example
+@end itemize
+
+Note: some codecs may need different options depending on the output format;
+the auto-detection of this cannot work with the tee muxer. The main example
+is the @option{global_header} flag.
+
@c man end MUXERS