kamiboy wrote:I see. Well, I don't think the way that the PS3 deinterlaces PS2 games is the wrong way per se. I believe it to be more correct than scaling each 240-line field individually to construct each frame.
What? Any deinterlacing is the wrong way to go about it; at the very least, it is not the most desirable way.
But this largely depends on one thing: how PS2 games handle rendering of their graphical content. Let us assume that the games are rendered with a vertical resolution of 480, but for each 1/60th of a second the game engine has to put 240 of those lines onto the screen.
There are two ways to go about this. The first is to render the game world 60 times a second, but only draw the 240 lines that are displayable at that exact moment in the game engine's progression.
This is very uncommon for the simple reason that it is a very inefficient way to do it, even on the PS2. In fact, I can't think of any games that do this right now.
This would mean that each of these 240-line fields is unrelated to the previous one, and blending them together would cause feathering on fast-moving objects, but would make static portions appear to be in higher resolution and reduce jaggies in those areas.
The other way the game engine could handle rendering is by running the game at 30fps: render a full 480p frame, then send it out in two passes, one field at a time. In this instance the PS3's recombining of the two fields back into a full 480p frame is the correct way to go about it, as it just puts back together two pieces that belong together.
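To make the "putting two pieces back together" concrete, here is a minimal sketch in Python of a weave recombination: interleaving an even field and an odd field back into one full-height frame. The function name and the toy line labels are my own; this only yields a correct picture when both fields came from the same rendered frame, as in the 30fps two-pass case above.

```python
def weave(even_field, odd_field):
    """Interleave two fields into one full-height progressive frame.

    even_field supplies lines 0, 2, 4, ...; odd_field supplies
    lines 1, 3, 5, ...  Both fields must be the same height.
    """
    assert len(even_field) == len(odd_field)
    frame = []
    for even_line, odd_line in zip(even_field, odd_field):
        frame.append(even_line)
        frame.append(odd_line)
    return frame

# Toy 2-line "fields" standing in for 240-line scanline buffers:
even = ["E0", "E2"]
odd = ["O1", "O3"]
print(weave(even, odd))  # → ['E0', 'O1', 'E2', 'O3']
```

When the two fields really are halves of one rendered frame, this weave is lossless; the trouble described later in the thread only starts when the fields come from different draws.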
The way it's done is this. The screen has to be drawn at 640x480, or perhaps a lesser resolution (and on the PS2, this is often the case), and that entire screen resolution is what is rendered. What the video chip passes off (not the GS; the chip that actually sends the video to your screen) is then either a field or a frame. If either the field or the frame is less than 640x480, the GS gives the video chip instructions on how to stretch it, accounting for typical overscan, so that what arrives at your screen appears fullscreen and, for all you know, 640x480 (unless you start counting stairsteps). If the output is to be progressive, it sends the full frame rather than an alternating field. It really is that simple.
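The key point above is that the full frame is always rendered, and the output stage merely chooses what to send. A minimal sketch of that selection, with hypothetical names (the real GS/video-chip interface is of course nothing like a Python function):

```python
def output_pass(framebuffer, progressive, field_parity):
    """Return the scanlines sent to the screen for one video pass.

    framebuffer is the full rendered frame (a list of scanlines).
    In progressive mode the whole thing goes out; in interlaced
    mode only every other line of that same frame is sent.
    """
    if progressive:
        return framebuffer                  # whole frame, every line
    return framebuffer[field_parity::2]     # 0 = even field, 1 = odd

frame = [f"line{n}" for n in range(8)]      # stand-in for 480 lines
print(output_pass(frame, progressive=False, field_parity=1))
# → ['line1', 'line3', 'line5', 'line7']
```

Note that the interlaced branch throws half the rendered lines away each pass; the rendering cost is the same either way, which is exactly the argument made later in the thread.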
The reasons homebrew apps and Xploder glitch are varied. 1. They can mess with the stretching involved, because they operate by flushing the entire framebuffer to the video chip; in doing so they may miss the information the video chip needs to stretch the picture properly, leaving you adjusting your screen controls. 2. They have to "live" somewhere in the memory of the PS2 (or the PS3), so they can occupy space that the game program needs in order to function, causing a freeze, a crash, or a weird bug at the start or later in the game. You never have any way of knowing, because this is development stuff that is not public knowledge, and the program that forces the framebuffer has to go somewhere. 3. Ultimately it is messing with a program, making it do something it was never tested for. Some games, for instance, rely on the framerate of the pass to the video chip as the tick for the game timing, and if you drop the contents of a full frame at once (two fields rendered in the same pass) you get a game that runs at twice the speed! TimeSplitters does this with XPloder and GSM, for instance; it makes the thing run so fast it's unplayable. More typically, game programs that encounter this stuff unexpectedly just crash.
But dropping the entire contents of the framebuffer is absolutely the "proper" way to achieve 480p from 480i with discrete computer graphics, and this is something that the PS3 never, ever does, at least not in any way that the PS2 does not do already. It deinterlaces two fields when this is not necessary to get a full 480-line frame; there are actually four fields alive and well in a single two-field pass, it's just that two of them go completely unused. That is why deinterlacing will never be the proper way to achieve 480p, at least not with any kind of computer game graphics. What the PS3 does is a compromise, because so many devs did 480i-only games, and did them with various rendering routines, stretching, and so on, that you never know what you are going to wind up with when you flip the switch to activate progressive scan. Ideally it really should be as simple as flipping that switch, and it is this simple on other systems, the Xbox for example, but there was a lot more going on with PS2 development, a lot more kittens to herd.
Personally I am not knowledgeable enough about the inner workings of the PS2, Gamecube and other post-32-bit consoles to know how they go about things. But assuming that developers can make the choice themselves, I have to believe that since a game that runs at 60fps is as rare today as it was 10 years ago, most games display their internally rendered full 480p frame one field at a time, and thus they would benefit from the PS3's deinterlacing.
If I understand this, you seem to be under the impression that moving from 480i to 480p should yield 60fps. Believe me, I wish that were the case, but it totally isn't true. At any rate, I'm not sure what you think you can gather from the framerate of a game with respect to its method of broadcast. It is never easier to render interlaced, and that's why nobody does it. They render progressive, at whatever framerate they can manage. The video broadcast sync is completely independent of the framerate the game is rendered at: if the user has requested interlaced, then half of the frame is discarded--all of the odd lines from that frame go out the window, never to be seen, and the next field is taken from the odd lines of the next frame. If the user has requested progressive scan, none of this trashing is necessary, and the entire rendered image is what is broadcast. Case in point: you're not going to have a lower rendered framerate between 480i and 480p; the same resolution is rendered, half of the work done is just chucked. That's why it's always desirable to just get 480p the right way. You have smoother animation, fewer jaggies, none of the scissored edges inherent in interlacing, and yet no slowdown. You've just allowed yourself to see the animation the way the computer rendered it.
Deinterlacing is not the same as bumping the full frame out the door. Deinterlacing only ever works with fields from adjacent draws, never with the two fields created in a single rendered frame. This is why it is only ever undesirable. Of course, depending on circumstances, it might be the best thing you can get... but I'd rather just drop to 480i than deal with any deinterlacing, honestly.
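The distinction above can be sketched in a few lines of Python (all names mine): weaving the two fields of the same rendered frame reconstructs it exactly, while weaving fields taken from two different draws, which is what a deinterlacer has to work with, produces the feathered mismatch whenever anything has moved between the draws.

```python
def fields_of(frame):
    """Split a progressive frame into its even and odd fields."""
    return frame[0::2], frame[1::2]

def weave(even, odd):
    """Interleave an even field and an odd field into one frame."""
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

frame_a = ["A0", "A1", "A2", "A3"]    # draw N: object at one position
frame_b = ["B0", "B1", "B2", "B3"]    # draw N+1: object has moved

even_a, odd_a = fields_of(frame_a)
even_b, odd_b = fields_of(frame_b)

# Same-frame weave: lossless, gives back the original frame.
print(weave(even_a, odd_a))   # → ['A0', 'A1', 'A2', 'A3']

# Adjacent-draw weave (what deinterlacing does): alternating lines
# come from two different moments in time, i.e. feathering.
print(weave(even_a, odd_b))   # → ['A0', 'B1', 'A2', 'B3']
```

The second output is why combining adjacent fields can never be as clean as simply sending the frame the engine already rendered.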