1. Introduction

FFmpeg is a very fast video and audio converter. It can also grab from a live audio/video source. The command line interface is designed to be intuitive, in the sense that ffmpeg tries to figure out all the parameters when possible. Usually you only have to give the target bitrate you want.

FFmpeg can also convert from any sample rate to any other, and resize video on the fly with a high quality polyphase filter.

2. Quick Start

2.1 Video and Audio grabbing

FFmpeg can use a video4linux compatible video source and any Open Sound System audio source:

ffmpeg /tmp/out.mpg 

Note that you must activate the right video source and channel before launching ffmpeg. You can use any TV viewer, such as xawtv by Gerd Knorr, which I find very good. You must also set the audio recording levels correctly with a standard mixer.

2.2 Video and Audio file format conversion

* ffmpeg can use any supported file format and protocol as input:


* You can input from YUV files:

ffmpeg -i /tmp/test%d.Y /tmp/out.mpg 

It will use the files:
/tmp/test0.Y, /tmp/test0.U, /tmp/test0.V,
/tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc...

The Y files use twice the resolution of the U and V files. They are raw files, without a header. They can be generated by all decent video decoders. You must specify the size of the image with the `-s' option if ffmpeg cannot guess it.

* You can input from a RAW YUV420P file:

ffmpeg -i /tmp/test.yuv /tmp/out.avi

A raw YUV420P file contains raw planar YUV data: for each frame, the Y plane comes first, followed by the U and V planes, which have half the vertical and half the horizontal resolution.
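
To make the plane layout concrete, here is a small Python sketch (an illustration, not part of ffmpeg) that slices one raw YUV420P frame into its planes:

```python
def yuv420p_planes(frame, width, height):
    """Split one raw YUV420P frame into its Y, U and V planes.

    The Y plane comes first at full resolution, followed by the U and
    V planes, each at half the horizontal and vertical resolution.
    """
    y_size = width * height
    c_size = (width // 2) * (height // 2)
    assert len(frame) == y_size + 2 * c_size, "truncated frame"
    y = frame[:y_size]
    u = frame[y_size:y_size + c_size]
    v = frame[y_size + c_size:]
    return y, u, v

# One mid-gray 320x240 frame: 76800 luma bytes plus 2 * 19200 chroma bytes.
frame = bytes([128] * (320 * 240 + 2 * (160 * 120)))
y, u, v = yuv420p_planes(frame, 320, 240)
```

Note that each chroma plane holds a quarter of the luma samples, which is why a YUV420P frame is 1.5 times the size of its Y plane.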

* You can output to a RAW YUV420P file:

ffmpeg -i mydivx.avi hugefile.yuv

* You can set several input files and output files:

ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg

Convert the audio file a.wav and the raw YUV video file a.yuv to the MPEG file a.mpg.

* You can also do audio and video conversions at the same time:

ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2

Convert the sample rate of a.wav to 22050 Hz and encode it to MPEG audio.

* You can encode to several formats at the same time and define a mapping from input stream to output streams:

ffmpeg -i /tmp/a.wav -ab 64 /tmp/a.mp2 -ab 128 /tmp/b.mp2 -map 0:0 -map 0:0

Convert a.wav to a.mp2 at 64 kbit/s and to b.mp2 at 128 kbit/s. '-map file:index' specifies which input stream is used for each output stream, in the order of the definition of the output streams.

* You can transcode decrypted VOBs

ffmpeg -i snatch_1.vob -f avi -vcodec mpeg4 -b 800 -g 300 -bf 2 -acodec mp3 -ab 128 snatch.avi

This is a typical DVD ripping example: input from a VOB file, output to an AVI file with MPEG-4 video and MP3 audio. Note that in this command we use B frames, so the MPEG-4 stream is DivX5 compatible, and the GOP size is 300, which means one intra frame every 10 seconds for 29.97 fps input video. Furthermore, the audio stream is MP3-encoded, so you need LAME support, which is enabled by passing --enable-mp3lame to configure. The mapping is particularly useful for DVD transcoding, to select the desired audio language.
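
The GOP arithmetic in the DVD example above can be checked directly; a quick Python sketch (illustrative only):

```python
gop_size = 300   # frames per GOP, from the -g 300 option
fps = 29.97      # NTSC frame rate of the input video

# Seconds between intra (I) frames: one I frame per GOP.
seconds_per_intra = gop_size / fps
print(round(seconds_per_intra, 2))  # close to 10 seconds
```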

NOTE: to see the supported input formats, use ffmpeg -formats.

3. Invocation

3.1 Syntax

The generic syntax is:

ffmpeg [[options][`-i' input_file]]... {[options] output_file}...
If no input file is given, audio/video grabbing is done.

As a general rule, options are applied to the next specified file. For example, if you give the `-b 64' option, it sets the video bitrate of the next file. The format option may be needed for raw input files.

By default, ffmpeg tries to convert as losslessly as possible: it uses the same audio and video parameters for the outputs as the ones specified for the inputs.
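
The option-grouping rule can be pictured with a much simplified Python model of the command line (hypothetical; ffmpeg's real parser also knows which options take no value): accumulated options attach to the next file name, and `-i' marks that file as an input.

```python
def group_args(argv):
    """Group a simplified ffmpeg command line into (options, file, role)
    triples. Options apply to the next specified file; '-i' marks the
    file that follows it as an input rather than an output."""
    groups, opts = [], []
    i = 0
    while i < len(argv):
        arg = argv[i]
        if arg == "-i":                 # next argument is an input file
            groups.append((opts, argv[i + 1], "input"))
            opts, i = [], i + 2
        elif arg.startswith("-"):       # assume an option with one value
            opts.extend([arg, argv[i + 1]])
            i += 2
        else:                           # bare name: an output file
            groups.append((opts, arg, "output"))
            opts, i = [], i + 1
    return groups

# Model of: ffmpeg -i a.wav -ar 22050 out.mp2
groups = group_args(["-i", "a.wav", "-ar", "22050", "out.mp2"])
```

Applied to that command line, the sketch yields a.wav as an input with no options and out.mp2 as an output with `-ar 22050' attached.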

3.2 Main options

`-L'
show license

`-h'
show help

`-formats'
show available formats, codecs, protocols, ...

`-f fmt'
force format

`-i filename'
input file name

`-y'
overwrite output files

`-t duration'
set the recording time in seconds. hh:mm:ss[.xxx] syntax is also supported.

`-title string'
set the title

`-author string'
set the author

`-copyright string'
set the copyright

`-comment string'
set the comment

`-hq'
activate high quality settings

3.3 Video Options

`-b bitrate'
set the video bitrate in kbit/s (default = 200 kb/s)
`-r fps'
set frame rate (default = 25)
`-s size'
set frame size. The format is `WxH' (default 160x128). The following abbreviations are recognized:
`sqcif'
128x96
`qcif'
176x144
`cif'
352x288
`4cif'
704x576

`-aspect aspect'
set aspect ratio (4:3, 16:9 or 1.3333, 1.7777)
`-croptop size'
set top crop band size (in pixels)
`-cropbottom size'
set bottom crop band size (in pixels)
`-cropleft size'
set left crop band size (in pixels)
`-cropright size'
set right crop band size (in pixels)
`-vn'
disable video recording
`-bt tolerance'
set video bitrate tolerance (in kbit/s)
`-maxrate bitrate'
set max video bitrate (in kbit/s)
`-minrate bitrate'
set min video bitrate (in kbit/s)
`-bufsize size'
set rate control buffer size (in kbit)
`-vcodec codec'
force video codec to codec. Use the special value `copy' to specify that the raw codec data must be copied as is.
`-sameq'
use the same video quality as the source (implies VBR)

`-pass n'
select the pass number (1 or 2). It is useful for two-pass encoding: the statistics of the video are recorded in the first pass, and in the second pass the video is generated at the exact requested bitrate.

`-passlogfile file'
set the two-pass log file name to file.

3.4 Advanced Video Options

`-g gop_size'
set the group of picture size
`-intra'
use only intra frames
`-qscale q'
use fixed video quantiser scale (VBR)
`-qmin q'
min video quantiser scale (VBR)
`-qmax q'
max video quantiser scale (VBR)
`-qdiff q'
max difference between the quantiser scales (VBR)
`-qblur blur'
video quantiser scale blur (VBR)
`-qcomp compression'
video quantiser scale compression (VBR)

`-rc_init_cplx complexity'
initial complexity for 1-pass encoding
`-b_qfactor factor'
qp factor between p and b frames
`-i_qfactor factor'
qp factor between p and i frames
`-b_qoffset offset'
qp offset between p and b frames
`-i_qoffset offset'
qp offset between p and i frames
`-rc_eq equation'
set rate control equation (see section 3.8 FFmpeg formula evaluator). Default is tex^qComp.
`-rc_override override'
rate control override for specific intervals
`-me method'
set motion estimation method to method. Available methods are (from lowest to best quality):
`zero'
Try just the (0, 0) vector.
`phods'
`log'
`x1'
`epzs'
(default method)
`full'
exhaustive search (slow and marginally better than epzs)

`-dct_algo algo'
set dct algorithm to algo. Available values are:
FF_DCT_AUTO (default)

`-idct_algo algo'
set idct algorithm to algo. Available values are:
FF_IDCT_AUTO (default)

`-er n'
set error resilience to n.
FF_ER_CAREFULL (default)

`-ec bit_mask'
set error concealment to bit_mask. bit_mask is a bit mask of the following values:
FF_EC_GUESS_MVS (default=enabled)
FF_EC_DEBLOCK (default=enabled)

`-bf frames'
use 'frames' B frames (supported for MPEG-1, MPEG-2 and MPEG-4)
`-mbd mode'
macroblock decision
FF_MB_DECISION_SIMPLE: use mb_cmp (cannot change it yet in ffmpeg)
FF_MB_DECISION_BITS: chooses the one which needs the fewest bits
FF_MB_DECISION_RD: rate distortion

`-4mv'
use four motion vectors per macroblock (only MPEG-4)
`-part'
use data partitioning (only MPEG-4)
`-bug param'
work around encoder bugs that are not auto-detected
`-strict strictness'
how strictly to follow the standards
`-aic'
enable Advanced Intra Coding (H.263+)
`-umv'
enable Unlimited Motion Vectors (H.263+)

`-deinterlace'
deinterlace pictures
`-psnr'
calculate PSNR of compressed frames
`-vstats'
dump video coding statistics to `vstats_HHMMSS.log'.
`-vhook module'
insert video processing module. module contains the module name and its parameters separated by spaces.

3.5 Audio Options

`-ab bitrate'
set the audio bitrate in kbit/s (default = 64)
`-ar freq'
set the audio sampling frequency (default = 44100 Hz)
`-ac channels'
set the number of audio channels (default = 1)
`-an'
disable audio recording
`-acodec codec'
force audio codec to codec. Use the copy special value to tell that the raw codec data must be copied as is.

3.6 Audio/Video grab options

`-vd device'
set video grab device (e.g. `/dev/video0')
`-vc channel'
set video grab channel (DV1394 only)
`-tvstd standard'
set television standard (NTSC, PAL, SECAM)
`-dv1394'
set DV1394 grab
`-ad device'
set audio device (e.g. `/dev/dsp')

3.7 Advanced options

`-map file:stream'
set input stream mapping
`-debug'
print specific debug info
`-benchmark'
add timings for benchmarking
`-dump'
dump each input packet
`-bitexact'
only use bit exact algorithms (for codec testing)
`-ps size'
set packet size in bits
`-re'
read input at the native frame rate. Mainly used to simulate a grab device.
`-loop'
loop over the input stream. Currently it works only for image streams. This option is used for ffserver automatic testing.

3.8 FFmpeg formula evaluator

When evaluating a rate control string, FFmpeg uses an internal formula evaluator.

The following binary operators are available: +, -, *, /, ^.

The following unary operators are available: +, -, (...).

The following functions are available:

max(x, y)
min(x, y)
gt(x, y)
lt(x, y)
eq(x, y)

The following constants are available, among them PI, E, and encoder statistics such as tex and qComp (used in the default rate control equation tex^qComp).


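To illustrate the semantics, here is a hypothetical Python re-implementation of the comparison helpers (FFmpeg's real evaluator is C code inside libavcodec): they return 1.0 or 0.0, so they can be combined arithmetically inside a rate control expression such as the default tex^qComp.

```python
# Hypothetical helpers mirroring the evaluator's functions.
def gt(x, y): return 1.0 if x > y else 0.0
def lt(x, y): return 1.0 if x < y else 0.0
def eq(x, y): return 1.0 if x == y else 0.0

def default_rc_eq(tex, qcomp):
    """The default rate control equation, tex^qComp ('^' is power)."""
    return tex ** qcomp

# gt() acts as an arithmetic switch: keep the default result only when tex > 0.
q = default_rc_eq(4.0, 0.5) * gt(4.0, 0.0)   # 2.0 * 1.0
```

Because the comparisons evaluate to 1.0 or 0.0, a rate control string can select between behaviours without any branching syntax.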

3.9 Protocols

The filename can be `-' to read from the standard input or to write to the standard output.

ffmpeg also handles many protocols specified with the URL syntax.

Use 'ffmpeg -formats' to have a list of the supported protocols.

The http: protocol is currently used only to communicate with ffserver (see the ffserver documentation). Once ffmpeg becomes a video player, it will also be used for streaming :-)

4. Tips

  • For streaming at very low bitrates, use a low frame rate and a small GOP size. This is especially true for RealVideo, where the Linux player does not seem to be very fast, so it can miss frames. An example is:

    ffmpeg -g 3 -r 3 -t 10 -b 50 -s qcif -f rv10 /tmp/b.rm

  • The parameter 'q' which is displayed while encoding is the current quantizer. The value 1 indicates a very good quality; the value 31 indicates the worst quality. If q=31 appears too often, the encoder cannot compress enough to meet your bitrate: you must either increase the bitrate, decrease the frame rate or decrease the frame size.

  • If your computer is not fast enough, you can speed up the compression at the expense of the compression ratio. Use '-me zero' to speed up motion estimation, or '-intra' to disable motion estimation completely (you then have only I frames, which is about as good as JPEG compression).

  • To get very low audio bitrates, reduce the sampling frequency (down to 22050 Hz for MPEG audio, 22050 or 11025 Hz for AC-3).

  • To get constant quality (but a variable bitrate), use the option '-qscale n' where 'n' is between 1 (excellent quality) and 31 (worst quality).

  • When converting video files, you can use the '-sameq' option, which uses the same quality factor in the encoder as in the decoder. This allows almost lossless encoding.

5. Supported File Formats and Codecs

You can use the -formats option to have an exhaustive list.

5.1 File Formats

FFmpeg supports the following file formats through the libavformat library:

Supported File Format          Encoding  Decoding  Comments
MPEG audio                        X         X
MPEG1 systems                     X         X     muxed audio and video
MPEG2 PS                          X         X     also known as VOB file
MPEG2 TS                                    X     also known as DVB Transport Stream
Macromedia Flash                  X         X     Only embedded audio is decoded
FLV                               X         X     Macromedia Flash video files
Real Audio and Video              X         X
Raw AC3                           X         X
Raw MPEG video                    X         X
Raw PCM 8/16 bits, mulaw/Alaw     X         X
SUN AU format                     X         X
NUT                               X         X     NUT Open Container Format
Quicktime                         X         X
MPEG4                             X         X     MPEG4 is a variant of Quicktime
Raw MPEG4 video                   X         X
4xm                                         X     4X Technologies format, used in some games
Playstation STR                             X
Id RoQ                                      X     used in Quake III, Jedi Knight 2, other computer games
Interplay MVE                               X     format used in various Interplay computer games
WC3 Movie                                   X     multimedia format used in Origin's Wing Commander III computer game
Sega FILM/CPK                               X     used in many Sega Saturn console games
Westwood Studios VQA/AUD                    X     multimedia formats used in Westwood Studios games
Id Cinematic (.cin)                         X     used in Quake II

X means that the encoding (resp. decoding) is supported.

5.2 Image Formats

FFmpeg can read and write images for each frame of a video sequence. The following image formats are supported:

Supported Image Format  Encoding  Decoding  Comments
PAM                        X         X     PAM is a PNM extension with alpha support
PGMYUV                     X         X     PGM with U and V components in YUV 4:2:0
JPEG                       X         X     Progressive JPEG is not supported
.Y.U.V                     X         X     one raw file per component
Animated GIF               X         X     Only uncompressed GIFs are generated
PNG                        X         X     2 bit and 4 bit/pixel not supported yet

X means that the encoding (resp. decoding) is supported.

5.3 Video Codecs

Supported Codec          Encoding  Decoding  Comments
MPEG1 video                 X         X
MPEG2 video                 X         X
MPEG4                       X         X     Also known as DIVX4/5
MSMPEG4 V3                  X         X     Also known as DIVX3
WMV8                        X         X     Not completely working
H263(+)                     X         X     Also known as Real Video 1.0
Lossless MJPEG              X         X
Sunplus MJPEG                         X     fourcc: SP5X
Huff YUV                    X         X
FFmpeg Video 1              X         X     Lossless codec (fourcc: FFV1)
Asus v1                     X         X     fourcc: ASV1
Asus v2                     X         X     fourcc: ASV2
Creative YUV                          X     fourcc: CYUV
H.264                                 X
Sorenson Video 1                      X     fourcc: SVQ1
Sorenson Video 3                      X     fourcc: SVQ3
On2 VP3                               X     still experimental
Theora                                X     still experimental
Intel Indeo 3                         X     only works on i386 right now
FLV                         X         X     Flash H263 variant
ATI VCR1                              X     fourcc: VCR1
ATI VCR2                              X     fourcc: VCR2
Cirrus Logic AccuPak                  X     fourcc: CLJR
4X Video                              X     used in certain computer games
Sony Playstation MDEC                 X
Id RoQ                                X     used in Quake III, Jedi Knight 2, other computer games
Xan/WC3                               X     used in Wing Commander III .MVE files
Interplay Video                       X     used in Interplay .MVE files
Apple Video                           X     fourcc: rpza
Cinepak                               X
Microsoft RLE                         X
Microsoft Video-1                     X
Westwood VQA                          X
Id Cinematic Video                    X     used in Quake II

X means that the encoding (resp. decoding) is supported.

Check the codec comparison page to get a precise comparison of the FFmpeg MPEG4 codec with the other solutions.

5.4 Audio Codecs

Supported Codec             Encoding  Decoding  Comments
MPEG audio layer 2             IX        IX
MPEG audio layer 1/3           IX        IX     MP3 encoding is supported through the external library LAME
AC3                            IX        X      liba52 is used internally for decoding
Vorbis                         X         X      supported through the external library libvorbis
Microsoft ADPCM                X         X
Duck DK3 IMA ADPCM                       X      used in some Sega Saturn console games
Duck DK4 IMA ADPCM                       X      used in some Sega Saturn console games
Westwood Studios IMA ADPCM               X      used in Westwood Studios games like Command and Conquer
RA144                                    X      Real 14400 bit/s codec
RA288                                    X      Real 28800 bit/s codec
AMR-NB                         X         X      supported through an external library
AMR-WB                         X         X      supported through an external library
DV audio                                 X
Id RoQ DPCM                              X      used in Quake III, Jedi Knight 2, other computer games
Interplay MVE DPCM                       X      used in various Interplay computer games
Xan DPCM                                 X      used in Origin's Wing Commander IV AVI files
Apple MACE 3                             X
Apple MACE 6                             X

X means that the encoding (resp. decoding) is supported.

I means that an integer-only version is also available (it ensures the highest performance on systems without hardware floating point support).

6. Platform Specific information

6.1 Linux

ffmpeg should be compiled with at least GCC 2.95.3. GCC 3.2 is now the preferred compiler for ffmpeg; all future optimizations will depend on features only found in GCC 3.2.

6.2 BSD

6.3 Windows

6.3.1 Native Windows compilation

  • Install the current versions of MSYS and MinGW. You can find detailed installation instructions in the download section and the FAQ.

  • If you want to test the FFmpeg Simple Media Player, also download the MinGW development library of SDL 1.2.x (`SDL-devel-1.2.x-mingw32.tar.gz'). Unpack it in a temporary place, and unpack the archive `i386-mingw32msvc.tar.gz' in the MinGW tool directory. Edit the `sdl-config' script so that it gives the correct SDL directory when invoked.

  • Extract the current version of FFmpeg (the latest release or the current CVS snapshot, whichever is recommended).
  • Start the MSYS shell (file `msys.bat').

  • Change to the FFMPEG directory and follow the instructions of how to compile ffmpeg (file `INSTALL'). Usually, launching `./configure' and `make' suffices. If you have problems using SDL, verify that `sdl-config' can be launched from the MSYS command line.

  • You can install FFmpeg in `Program Files/FFmpeg' by typing `make install'. Don't forget to copy `SDL.dll' to the directory from which you launch `ffplay'.


  • The target `make wininstaller' can be used to create a Nullsoft based Windows installer for FFmpeg and FFplay. `SDL.dll' must be copied in the ffmpeg directory in order to build the installer.

  • By using ./configure --enable-shared when configuring ffmpeg, you can build `avcodec.dll' and `avformat.dll'. With make install you install the FFmpeg DLLs and the associated headers in `Program Files/FFmpeg'.

  • Visual C++ compatibility: if you used ./configure --enable-shared when configuring FFmpeg, then FFmpeg tries to use the Microsoft Visual C++ lib tool to build avcodec.lib and avformat.lib. With these libraries, you can link your Visual C++ code directly with the FFmpeg DLLs.

6.3.2 Cross compilation for Windows with Linux

You must use the MinGW cross compilation tools.

Then configure ffmpeg with the following options:
./configure --enable-mingw32 --cross-prefix=i386-mingw32msvc-
(you can change the cross-prefix according to the prefix chosen for the MinGW tools).

Then you can easily test ffmpeg with Wine.

6.4 MacOS X

6.5 BeOS

The configure script should guess the configuration itself. Networking support is currently not finished. errno issues fixed by Andrew Bachmann.

Old stuff:

François Revol - revol at free dot fr - April 2002

The configure script should guess the configuration itself; however, I have not yet tested building on the net_server version of BeOS.

ffserver is broken (needs poll() implementation).

There are still issues with errno codes, which are negative on BeOS and which ffmpeg negates when returning them. This ends up turning errors into valid results and then crashes. (To be fixed.)

7. Developers Guide

7.1 API

  • libavcodec is the library containing the codecs (both encoding and decoding). See `libavcodec/apiexample.c' to see how to use it.

  • libavformat is the library containing the file format handling (mux and demux code for several formats). See `ffplay.c' for how to use it in a player, and `output_example.c' for how to use it to generate audio or video streams.

7.2 Integrating libavcodec or libavformat in your program

You can integrate all the source code of the libraries and link them statically to avoid any version problems. All you need to do is provide a 'config.mak' and a 'config.h' in the parent directory. See the defines generated by ./configure to understand what is needed.

You can use libavcodec or libavformat in your commercial program, but any patch you make must be published. The best way to proceed is to send your patches to the ffmpeg mailing list.

7.3 Coding Rules

ffmpeg is programmed in ANSI C language. GCC extensions are tolerated. Indent size is 4. The TAB character should not be used.

The presentation is the one specified by 'indent -i4 -kr'.

The main priority in ffmpeg is simplicity and small code size (= fewer bugs).

Comments: for functions visible from other modules, use the JavaDoc format (see examples in `libav/utils.c') so that documentation can be generated automatically.

7.4 Submitting patches

When you submit your patch, try to send a unified diff (diff '-up' option). I cannot read other diffs :-)

Run the regression tests before submitting a patch so that you can verify that there are no big problems.

Patches should be posted as base64-encoded attachments (or any other encoding which ensures that the patch won't be mangled during transmission) to the ffmpeg-devel mailing list.

7.5 Regression tests

Before submitting a patch (or committing with CVS), you should at least test that you did not break anything.

The regression tests build a synthetic video stream and a synthetic audio stream. These are then encoded and decoded with all codecs and formats. The CRC (or MD5) of each generated file is recorded in a results file, which is then compared ('diff'ed) against the reference results.
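
The record-and-compare step can be sketched in a few lines of Python (hypothetical file names and data; the real tests are driven by the build system):

```python
import hashlib

def result_line(name, data):
    """One results-file line: the MD5 of a generated file, plus its name."""
    return "%s  %s" % (hashlib.md5(data).hexdigest(), name)

# Pretend these bytes were produced by two codec tests.
results = [result_line("a.mpg", b"encoded video"),
           result_line("a.mp2", b"encoded audio")]
reference = list(results)   # the checked-in reference results file

# The 'diff' step: the tests pass when nothing differs.
mismatches = [line for line, ref in zip(results, reference) if line != ref]
```

Hashing the outputs rather than storing them keeps the reference files tiny while still catching any change in the encoded bitstreams.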

The regression test then goes on to test the ffserver code with a limited set of streams. It is important that this step runs correctly as well.

Run 'make test' to test all the codecs.

Run 'make fulltest' to test all the codecs, formats and ffserver.

[Of course, some patches may change the regression tests results. In this case, the regression tests reference results shall be modified accordingly].

Short Table of Contents

1. Introduction
2. Quick Start
3. Invocation
4. Tips
5. Supported File Formats and Codecs
6. Platform Specific information
7. Developers Guide

About this document

This document was generated on December 22, 2003 using texi2html.
