What's the issue ?
At HD resolution, with 100% Quality and PWM Audio, PhotoStory 3 (artificially) limits the output 'playback' loading by building in 'skip frames'. Because these are built into the actual WVP2 output file, it will always play back with 'jitter'.
Jitter is first seen on 'fast' pans and zooms, never on 'fades'. So one way to 'eliminate' jitter is to build the pans and zooms 'separately'. Of course, this has to be done using the parameters found in the project.xml file.
What's needed ?
Half the applications I use for 'video' have no idea of the 'standard' AVCHD frame rate specification - most only offer 'PAL' (25fps) or 'NTSC' (30fps - few actually generate the real 29.97fps NTSC rate). None offer the AVCHD default of 23.976fps 'out of the box' (and very few offer any means of adjusting the frame rate).
Generating video from JPG
To perform all the mathematics (and drive the command line utilities) a QBasic script was written (see below)
To generate individual .jpg frames (and perform 'transition fades' - see later), the standard in Open Source image conversion, Image Magick 'convert' command line utility was used.
Going from a pile of individual jpeg 'frames' to my final goal (h264 AVCHD compatible video) turned out to be a lot harder.
I started by looking for a simple way to 'wrap' the jpg frames into an AVI 'container' - the idea being that it would be a lot easier to 'join' multiple small AVIs later.

The first tool I found was JPGs_to_AVI.exe (which can also be driven from the command line). HOWEVER, whilst this allowed me to check everything was 'working', it sets the frame rate to 30.00fps with no means to adjust it (it uses some method of coding the AVI that defeated the freeware AVI Frame Rate Changer 1.10 (Blight) 'avirate.exe' frame rate adjust utility). Further, the AVI it generated is unusable by Windows Movie Maker (WMM won't even preview the AVI and locks up when attempting to build output). I thus dumped JPGs_to_AVI.exe and moved over to the Open Source MakeAVI.

Whilst this sort of 'works OK', it is limited to 1091 files at a time (1092+ files crashes the program), so you would have to build one photo at a time (or break your PhotoStory into 45 second 'chunks') and concatenate all the AVIs later. Further, when I entered 23.976 as the frame rate, it opened a 'Please enter an integer' warning (so, another 'video author' who is ignorant of actual video frame rates) - and if you fail to enter an integer it generates AVI at the default 15fps rate (although the Blight AVI Frame Rate Changer soon changed that :-) ). Windows Movie Maker was able to 'preview' the generated AVI but, once again, locked up when asked to build the output (Microsoft applications are notorious for ignoring Industry Standards, so this is likely the fault of WMM, not the author). And whilst MakeAVI can go direct to h264, that's of no use when it is limited to 1091 source frames ! So, another one for the bin ...

I finally ended up with the ubiquitous Open Source MEncoder, which 'does the job' and can be driven from the command line (so can be added to the QBasic script and allowed to run overnight).
What file sizes are involved ?
Each jpg frame will be about 350kb, so a 10 minute 'story' (10x60x23.976 = 14,385 frames @ 350kb) will be just under 5 Gb. However, 'raw' AVI is an uncompressed format, so the jpg's will be unpacked into RGB frames. Each AVI frame will be 1920x1080x3 = 6,075 kb and a 10 minute (14,385 frame) 'story' saved as AVI would be over 83 Gb !
So, whilst AVI can be used to test the pan, zoom and transitions of single photos, plainly the only 'practical' route to a full length Story will be to go 'direct' from the individual JPG frames to the final h.264 (mp4) video
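The storage arithmetic above can be sketched as follows (an illustration only - the 350kb per-JPG figure is the rough estimate used above, and actual JPG sizes vary with content):

```python
# Rough storage arithmetic for a 10 minute 'story' at 23.976 fps (sketch).
FPS = 23.976

frames = int(10 * 60 * FPS)                  # 14,385 frames
jpg_gb = frames * 350 * 1024 / 2**30         # ~4.8 Gb as JPGs (at ~350kb each)

bytes_per_frame = 1920 * 1080 * 3            # raw RGB = 6,220,800 bytes/frame
avi_gb = frames * bytes_per_frame / 2**30    # ~83.3 Gb as uncompressed AVI

print(f"{frames} frames: {jpg_gb:.1f} Gb as JPG, {avi_gb:.1f} Gb as raw AVI")
```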
Image Magick 'convert' (and 'composite' = see below) is very 'compute intensive' (convert typically uses less than 0.5 Gb RAM), so you might want to use a machine with a fast CPU ('convert' never seems to use more than 50% on my dual core CPU, so you are better off with a single ultra-fast core than with 4 (or 8) 'medium speed' cores).

Your 'AVI' (or mp4) 'maker' is likely disk speed limited - so having your source and destination on different drives can help a lot. Best speed is likely to be achieved by placing the individual jpgs on a RAID mirror - and your final AVI on a RAM disk / SSD (see re: playback below) - indeed, if you intend to do a lot of video work, it's well worth investing in an SSD.

NB. to run QBasic on a Windows 7 64bit system, look into 'DOSbox' and QB64
How the script works
The script simply reads, from the project.xml, the name of the source photo, its size (width x height in pixels), the total running time and the 2 'crop boxes' = the start and end 'crop box' left x top start position and width x height size (actually, the height is ignored since it MUST be 9/16ths of the width). Double precision floating point is used for the actual calculations, with 'CINT' (round to nearest integer) used prior to generating the JPG.
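As a rough illustration of the read step - note the element and attribute names below ('Photo', 'StartRect', 'EndRect', etc.) are invented for this example, since the real PhotoStory 3 project.xml schema isn't reproduced in this article; adapt them to what you actually find in the file:

```python
import xml.etree.ElementTree as ET

# Pull per-photo data out of a project file (sketch). The element/attribute
# names here are HYPOTHETICAL stand-ins - only the shape of the data matters.
def read_photos(xml_text):
    photos = []
    for p in ET.fromstring(xml_text).iter("Photo"):
        start, end = p.find("StartRect"), p.find("EndRect")
        photos.append({
            "file": p.get("file"),
            "secs": float(p.get("secs")),
            # left, top and width of each crop box; height is not needed
            # because it must always be 9/16ths of the width
            "start": tuple(int(start.get(k)) for k in ("left", "top", "width")),
            "end": tuple(int(end.get(k)) for k in ("left", "top", "width")),
        })
    return photos
```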
The only 'clever' bit is that the script ASSUMES your PhotoStory has been built for 16:9 (i.e. all the photos are '133% height' pre-distorted), so it will 'un-distort' the original source (by multiplying the width to 133%) before starting the crop & frame generation.
1) For a frame rate of 23.976 fps, each second of running time requires 23.976 JPG's = so a very fast 'pan / zoom' of 3 seconds will require '71.928' jpgs.
The nearest integer is 72, so the '3s zoom' will be 72 frames long and actually take 3.003 seconds i.e. 0.003s too long. This timing error must be 'carried forward' and the next photo's time adjusted so that the overall 'story' time is kept correct = if you don't, the under/over 'slippage' (up to half of a 1/23.976th of a second per photo) will quickly accumulate - after only 5 photos you could be 1/10th of a second 'out', and this is more than enough to wreck your 'align the transition to the music' timing !
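The carry-forward can be sketched like this (a minimal illustration of the idea, not the actual QBasic code - 'round' plays the role of CINT):

```python
FPS = 23.976

def frame_counts(photo_times_s):
    """Nearest-integer frame count for each photo, with the rounding error
    'carried forward' so the overall Story time stays correct (sketch)."""
    counts, exact_total, emitted = [], 0.0, 0
    for secs in photo_times_s:
        exact_total += secs * FPS
        n = round(exact_total) - emitted   # frames needed minus frames already used
        counts.append(n)
        emitted += n
    return counts
```

For a single 3s pan/zoom this gives 72 frames (3.003s), but over a whole Story the emitted total never drifts more than half a frame from the exact running time.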
2) Whilst it is tempting to calculate a 'pixel position increment' per frame, errors could accumulate as the position is reduced to the 'nearest integer'. Whilst errors could be 'carried forward' (in the same way as timing) I decided to avoid the possibility of 'carry forward' coding errors causing 'jitter' by recalculating from the initial values for each jpg frame.
The first frame will be cropped at the 'start crop box' position (and size), so, to reach the end frame, we must take 'frame count - 1' steps (in the example, 72 = 1 start + 71 steps, the last step of which should end at the exact 'end crop box' position). Thus, for frame 'n' (n = 0 to 71), crop posn. = start posn. + n * (end posn. - start posn.) / 71 (and crop width** = start width + n * (end width - start width) / 71). Since "(end - start) / 71" does not change from frame to frame, this fraction can be calculated outside the 'make n frames' loop.

**Crop height is always 9/16ths of the width (and the output width x height is always 1920 x 1080 (for HD))
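A sketch of the per-frame recalculation (illustrative Python, not the original QBasic; 'round' stands in for CINT):

```python
def crop_boxes(start, end, frame_count):
    """Interpolate the crop box from 'start' to 'end' over frame_count frames,
    recalculating from the initial values for EVERY frame (no carried-forward
    increment, so no accumulated error). start/end = (left, top, width);
    crop height is always 9/16ths of the width (sketch)."""
    steps = max(frame_count - 1, 1)   # guard against a single-frame 'pan'
    boxes = []
    for n in range(frame_count):
        left  = round(start[0] + n * (end[0] - start[0]) / steps)
        top   = round(start[1] + n * (end[1] - start[1]) / steps)
        width = round(start[2] + n * (end[2] - start[2]) / steps)
        boxes.append((left, top, width, round(width * 9 / 16)))
    return boxes
```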
Essentially, you pass 'composite' two actual images and set a blend value. Composite then takes that percentage of the first image and adds just enough of the second to end up with 100%. For example, if the blend is set to 20%, 20% of the first is merged with 80% of the second (if 50%, then 50% of each; if 80%, then it's 80% of the first with 20% of the second).

composite -blend N(0-100)% image1 image2 outputImage

Note, alternatively use the convert 'overlap' command to control blending with a 'mask' image containing the blend values (so blend can vary from pixel to pixel), however for a simple fade transition we vary the whole image from 0% to 100% over the course of the transition, and 'composite' will do the job just fine.
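For instance, a per-frame fade could be scripted by ramping the blend value across the transition frames - a sketch only, with made-up file names:

```python
# Build a 'composite' command line for each frame of a fade, ramping the
# blend from 100% (all previous photo) down to 0% (all next photo).
# The frame/output file names here are illustrative only.
def fade_commands(prev_frames, next_frames):
    steps = max(len(prev_frames) - 1, 1)
    cmds = []
    for n, (a, b) in enumerate(zip(prev_frames, next_frames)):
        blend = round(100 * (steps - n) / steps)   # % of the first image kept
        cmds.append(f'composite -blend {blend}% "{a}" "{b}" "merged_{n:05d}.jpg"')
    return cmds
```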
How are transitions handled ?
The way PhotoStory 3 pan/zooms and transitions (fade) 'interact' is quite complex. First, transitions have no effect on the overall Story time (Story time is the sum of all the basic photo times - for example, 6 photos of 10s each = 60s total time, irrespective of any transitions that might be set). Transitions are thus 'contained' within the 'main' photo display time.
So, when you set a transition time for a photo, it has the effect of both 'extending' that photo into the previous one and, at the same time, extending the previous photo into this one i.e. a transition causes both this and the previous photo times to be 'stretched' and 'overlapped' into one another. This photo is stretched by half the transition time into the previous .. and the previous is stretched by half the time into this.
This means that setting a transition has the effect of extending the pan and zoom time.
So, for example, if this photo is set to display for 10s (and neither this photo nor the next has any transition set), and you pan/zoom across the photo, the pan/zoom will take 10s (or 10 x 23.976 ≈ 240 frames, assuming no 'rounding error' carried forward from the previous photo).

If you now set this photo to a 4s (96 frame) transition, this photo will be extended by 'half the transition time' (48 frames) into the previous photo (and the previous will have been extended by 48 frames into this). Thus this photo's pan/zoom will now take 12s (288 frames) in total (it starts 2s into the previous photo and runs for 10s in this). Likewise 48 frames (2s) will be added to the previous photo, which will now 'overrun' into this photo's 'main' display time. The 4s transition is thus 'contained' within the overall photo timing (whilst adding to the pan and zoom time).

If you then set the NEXT photo to (say) a 6s transition time, this photo (and its pan/zoom) will have to be extended by a further 3s (so to the 48 frames of 'extra start', an 'extra end' of 72 frames is added, and the pan/zoom now runs into the following photo). The total running time for this photo's pan/zoom will now be 48+240+72 = 360 frames (2+10+3 = 15s), which is significantly longer than the 240 frames / 10s originally chosen for the 'main' display time !
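The worked example above reduces to a one-liner per photo (an illustrative sketch, ignoring the carried-forward rounding error):

```python
FPS = 23.976

def panzoom_frames(main_s, own_trans_s, next_trans_s):
    """Total pan/zoom length for one photo: the 'main' display time, plus
    half of its own transition ('extra start') and half of the NEXT photo's
    transition ('extra end') - per the worked example above (sketch)."""
    main        = round(main_s * FPS)            # 10s main        -> 240 frames
    extra_start = round(own_trans_s / 2 * FPS)   # 4s transition   ->  48 frames
    extra_end   = round(next_trans_s / 2 * FPS)  # 6s next trans.  ->  72 frames
    return extra_start + main + extra_end
```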
Fortunately, each PhotoStory 3 project.xml entry for a Vu (visual unit = photo) specifies both the 'start' and the 'end' time extensions .. so it's a (relatively) easy matter to discover the 'total' pan/zoom time for each photo (without having to 'look up' details of the next photo).
Things are made a bit more complex by the fact that transitions have to start on a specific frame number .. however at least any 'nearest integer' errors in the start/end transition times won't carry forward ! For each Vu, the 'main' time is used to calculate the initial frame count (and any rounding error carried forward). We then adjust the count by the 'extra start' frames (nearest integer of half the start transition time) and add the 'extra end' frame count (CINT of half the end transition time) to get the overall frame count (used when calculating the pan/zoom values). Rounding errors in the 'extra' counts are ignored (only the main count affects the overall Story time).

When frame generation starts, the 'extra start' frames will be 'fade merged' with the same count of frames in the previous Vu's frame set folder. This will be followed by 'fade merging' the same count of 'main time' frames with the 'extra end' frames the previous Vu will have already generated and placed into this Vu's folder. The rest of the frames are then generated without any 'merge'; however, at the 'end' of this photo, if the next has specified a transition, the 'extra end' frames will be placed in the next Vu's frame set folder (where they will be picked up later and merged with some of the next Vu's 'main' frames).
Note that the FIRST photo in a PhotoStory must be treated as a 'special case'. There may be no 'previous photo', however you can still set a 'transition' ! If a transition is set, PhotoStory generates a 'dummy' blank (all black) photo allowing you to 'fade in from black' (remember - a transition does NOT add to the overall Story timing, so the 'fade in' will actually be contained within the first photo's display time - i.e. all you have to do is add 'half the transition time' of black frames to the first photo's folder).
Playing back as AVI ?
If your AVI file 'source' can't keep up with the data display rate, any playback will suffer from 'stutter' - which, of course, will disguise any 'jitter' caused by arithmetic errors in your 'conversion' script :-).
AVI playback of HD full frame (1920x1080x3 bytes RGB) at 30 fps requires a disk bandwidth of about 178MB/s. You will be lucky to get 75MB/s off a single disk, so even a RAID 'mirror' (which, in theory, can double the sequential read rate) won't do the trick. The only 'disk' that's going to be fast enough is an SSD (Solid State Disk) - or, for those running on a PC less than 5 years old, a RAM disk.

A 128Gb SATA III SSD can be had for less than £50 (Amazon). Note that older SATA II motherboards will limit the data rate off a 'SATA III' 500MB/s rated SSD to about 265MB/s (which is still more than enough - get the AS SSD Benchmark tool to check yours). Of course, for ultimate read speed (up to 7 GB/s) you need a RAM disk.

The 32bit XP system has an Operating System limit of 4Gb RAM, however a RAM disk driver will allow access to whatever 'extra' RAM your motherboard supports (see Imdisk (Open Source - see here for setting it up) or, if you can find a download source that won't replace your browser, hijack your home page and search preferences, install unwanted 'tool bars' and spam you with adverts, the Vsuite Ramdisk FREE ED., which allows access to 4Gb (enough for almost 23 seconds :-) ). All 'recent' PC motherboards will support more than 4Gb RAM (I suggest you fit at least 8Gb - that gives you a 4Gb RAM disk, or 23s of full res. AVI, which should be more than enough to test fast pan/zoom and transitions of up to half a dozen photos).
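The bandwidth and RAM disk figures above come straight from the frame size (a sketch of the arithmetic, assuming 30 fps playback):

```python
# Raw RGB AVI playback bandwidth at HD, and how long a 4Gb RAM disk lasts
# (sketch of the arithmetic used above; 30 fps playback assumed).
bytes_per_frame = 1920 * 1080 * 3                  # 6,220,800 bytes per frame
mb_per_sec = bytes_per_frame * 30 / 2**20          # ~178 MB/s needed
ramdisk_secs = 4 * 2**30 / (bytes_per_frame * 30)  # ~23s of video in 4Gb

print(f"need {mb_per_sec:.0f} MB/s; a 4Gb RAM disk holds {ramdisk_secs:.0f}s")
```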
Converting AVI to mp4
Since I had neither an SSD nor a RAM disk during testing, I used 'Hybrid' (a first class h264 encoder, free for personal use) to convert the massive AVI files into .mp4 before attempting to play them back.
Note that Hybrid uses the fps (frames per second) rate specified in the AVI file, so you need to get that correct before submitting the AVI source to Hybrid (it's amazing how many software programmers seem incapable of dealing with 'real numbers' :-) ). To correct the fps, the freeware AVI Frame Rate Changer 1.10 (Blight) 'avirate.exe' utility can be used (or some other command line tool).
Once the bugs were ironed out, I modified the Qbasic script to invoke MEncoder at the end of the 'Story' (to go straight from JPEG to x264)
To generate 'output.mp4' (using the ms mpeg4v2 codec) from all the jpgs in the folder :-

mencoder "mf://*.jpg" -mf fps=23.976 -o output.mp4 -ovc lavc -lavcopts vcodec=msmpeg4v2:vbitrate=9000
Next subject :- DRM - (and the media moguls)