How to encode MPEG files from XaoS

To save a sequence, make a xaf file first (the easiest way to do this is to use the record function in the file menu). Then you need to render the sequence. XaoS can output sequences of ordinary PNG images, that can later be used by an MPEG encoder.

Generating sequences for MPEG

To encode a sequence, use the following command:

xaos -render [filename] -size 352x240 -antialiasing \
 -renderframerate 24 -basename [basename]

[filename] is the name of the xaf file; [basename] is used as the base for the names of the rendered images. XaoS appends a four-digit frame number and the extension automatically.

You might also want to change the resolution. 352×240 is the default size for MPEG files, but other sizes work as well. Each dimension must be a multiple of 16.

The framerate can also be altered. MPEG supports only a few selected framerates (namely 23.976, 24, 25, 29.97, 30) and you can pick any of them.
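With the placeholders filled in and, say, a framerate of 30, the command might look like this (tutorial.xaf and anim are just example names):

  xaos -render tutorial.xaf -size 352x240 -antialiasing \
   -renderframerate 30 -basename anim

This would produce anim0001.png, anim0002.png and so on.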

-antialiasing produces anti-aliased images. They take much longer and need much more memory to calculate, but the resulting images work better with MPEG's DCT compression and come out about 3 times smaller. (The same is true of JPEG images.)

On the other hand, the rendering option -alwaysrecalc (which disables XaoS's zooming optimizations) is not recommended. With it the animation contains a lot of extra detail, which increases the size of the MPEG file, yet because of MPEG's lossy compression the difference is hard to see, so it is not worth it.

Rendered files

Once you start it, XaoS will generate thousands of frames. They take quite a long time to calculate and save, and consume plenty of disk space (for example, rendering part 1 of the tutorial needs about 60 MB and about half an hour).

All images are named [basename]framenum.png; for example, intro0001.png is the first frame of the animation intro. If consecutive frames are identical, XaoS does not save the duplicates, so some frame numbers may be missing. If your encoder cannot handle that, you will need a simple script that fills the gaps by copying or symbolic linking, like the sketch below.
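A minimal sketch of such a script, assuming a POSIX shell, a hypothetical base name intro and a guessed total of 2000 frames (adjust both for your animation):

  #!/bin/sh
  # Fill gaps in the rendered sequence by linking each missing
  # frame to the most recent frame that does exist.
  base=intro      # example base name
  total=2000      # assumed number of frames
  last=""
  i=1
  while [ "$i" -le "$total" ]; do
      f=$(printf '%s%04d.png' "$base" "$i")
      if [ -f "$f" ]; then
          last=$f
      elif [ -n "$last" ]; then
          ln -s "$last" "$f"   # or cp, if your encoder dislikes symlinks
      fi
      i=$((i + 1))
  done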

A list of all filenames is saved into the file [basename].par, where each line is the name of one frame. Names are repeated where necessary, so you can use this file to supply the filenames to the encoder.
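For instance, if frame 3 was identical to frame 2 and therefore not saved, the start of a hypothetical intro.par could read:

  intro0001.png
  intro0002.png
  intro0002.png
  intro0004.png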

Pattern file

Some other files are generated as well. A pattern file contains the recommended order of I, P and B frames.

An MPEG sequence consists of these three frame types. I frames are complete images saved in a format similar to JPEG.

P frames are stored as a delta from the previous reference frame (the latest I or P frame). When consecutive frames are similar (and in animations they often are), a P frame takes much less disk space than an I frame.

B frames are constructed from the nearest preceding I or P frame and the nearest following one. They take even less disk space, but they are harder to encode. They are also never used as reference frames, so their information is discarded once they are displayed. They are usually encoded at lower quality than I or P frames and serve only to interpolate between neighbouring frames and make the animation smoother. It is generally not a good idea to build a whole sequence from B frames alone.

Using only P frames is not a good idea either. It makes the file smaller, but to jump to the Nth frame of the animation the decoder has to reconstruct every P and B frame since the latest I frame. Decoders often need to jump to a frame (when the user asks for it, or when they cannot decode the sequence in time and must skip some frames), so the animation needs I frames to make this possible, and the latter reason means they have to appear fairly often. Usually an I frame is placed every 15th frame or thereabouts. Because they cost quite a lot of space, in my animations I usually place one every 27th frame. The distance is set with the -iframedist option; it should be a multiple of 3 (see the example below).
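For example, appending the option to the earlier command (same placeholder names) requests an I frame every 27th frame:

  xaos -render tutorial.xaf -size 352x240 -antialiasing \
   -renderframerate 30 -basename anim -iframedist 27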

XaoS generates a recommended order of frames based on its knowledge of the fractal's motion. Passages where the screen does not move at all are rendered with P frames only (jumping is usually not required there); where the screen changes completely (at least in XaoS's opinion) I frames are used; and in all other cases the standard sequence IBBPBBPBBPBBP... is used.

If your encoder supports it, supply this pattern during encoding to squeeze out some extra bytes.

Motion vector files

XaoS also generates motion vector recommendations for the encoder. These are useful when encoding P and B frames.

If some objects on the screen are moving at a constant speed, motion vectors can store that speed, so no image needs to be saved to represent that change.

Calculating motion vectors is a significant task. Well-guessed vectors increase quality and reduce file size, which is always welcome, but the calculation takes a lot of CPU time and finding optimal vectors is hard (it simply takes too long).

XaoS knows how the fractal moves, so it can calculate these vectors quite easily. It saves them into *.p and *.b files (*.p for P frames, *.b for B frames). If your encoder supports it, provide these vectors to increase quality. They are not exact (XaoS can make mistakes), so the encoder should try to find its own vectors, try XaoS's ones, and pick whichever are better.

This technique saves quite a lot of bytes in fast zooming/unzooming animations, where the image moves by more than 3 to 5 pixels per frame, since most programs only search 10 to 20 pixels around each point for motion vectors.

To enable saving of motion vector files, add the option -rendervectors.
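For example, extending the earlier command once more (placeholder names again), the full rendering invocation becomes:

  xaos -render tutorial.xaf -size 352x240 -antialiasing \
   -renderframerate 30 -basename anim -iframedist 27 -rendervectors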

Berkeley parallel MPEG encoder

This is the encoder I use. It seems to be the best freely available software encoder I have tested. It can generate quite small files, but it is rather slow. It is available from Berkeley's FTP site mm-ftp.CS.Berkeley.EDU and is called mpeg_encode 1.5b.

It has lots of options to tune, so expect to spend quite a lot of time playing with them. The configuration I use is in the file doc/mpeg.param.

I have also made a patch that makes it possible to use the pattern and motion vector files generated by XaoS. It is in doc/mpeg_encode.patch. If you want to use these features (they are EXPERIMENTAL), apply the patch and recompile the encoder.

Once you have filled in the mpeg.param file (see the comments inside), encode the sequence using mpeg_encode [filename], and with luck you are done.
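Assuming the parameter file was saved as mpeg.param in the current directory, the invocation is simply:

  mpeg_encode mpeg.param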
