icycalm wrote: NO ONE is saying Gears of War does not look next-gen enough. NO ONE is saying it doesn't look awesome.
All I am saying is it would look better in 480p with loads more polygons and effects.
You do seem to be presuming that the reason there aren't more polygons and effects is because the game was not targeted at a 480p render.
Time for some lessons in this thread.
480i and 480p are rendered by a computer the same way, and that includes your video game system. An interlaced signal like 480i is made by producing a full 480p frame, then sending the odd-numbered lines in one pass and the even-numbered lines in the next. This is the simplest, easiest way to create an interlaced broadcast. The lesson? You aren't going to free up horsepower by using 480i instead of 480p. Other side of the coin: you aren't going to free up horsepower by using 1080i instead of 1080p either. Both require the same amount of work, which is why the Resistance devs disabled the 1080i option--it was just as resource-sapping as 1080p.
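To make the point concrete, here is a toy sketch (in Python, with scanlines stood in for by strings) of how the two 480i fields are derived from one already-rendered progressive frame:

```python
# Toy sketch: a "frame" is just a list of 480 scanline buffers.
frame = [f"scanline {y}" for y in range(480)]  # one full progressive frame

field_a = frame[0::2]  # lines 0, 2, 4, ... sent in one pass
field_b = frame[1::2]  # lines 1, 3, 5, ... sent in the next pass

# The renderer still produced all 480 lines; interlacing only changes
# which half of them goes out the cable on each pass.
assert len(field_a) + len(field_b) == len(frame)
```

The rendering work happens before the split, which is why interlaced output saves nothing.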
Strider doesn't have a very good way of saying some of the things he's noticing, but he's quite right that the 720p target is not hindering the 480p/i output. Another lesson: a polygon is a polygon is a polygon. It takes just as many resources to render a polygon at a high resolution as at a low one. The reason higher resolutions tend to slow down 3D games is that they strain the fill rate--not that they put more polygons in the scene. The reason we still like higher resolutions anyway is that a higher-resolution screen lets you see more detail in the textures drawn on those polygons, even if you never increase the resolution of the textures themselves (which is independent of the screen resolution). Lowering the resolution literally removes your ability to see those texturing details; the only beneficial thing it does is possibly (though not necessarily) improve the framerate, by lowering the fill-rate demand so the hardware has less to draw.
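A back-of-the-envelope sketch makes the split obvious (the polygon count here is made up for illustration; per-frame cost divides into per-vertex work and per-pixel fill work, and only the second depends on output resolution):

```python
# Hypothetical scene complexity -- identical at any screen mode.
polygons = 500_000

sd_pixels = 640 * 480    # 307,200 pixels to fill per frame
hd_pixels = 1280 * 720   # 921,600 pixels to fill per frame

print(polygons)              # same vertex workload either way
print(hd_pixels / sd_pixels) # → 3.0: triple the fill work in HD
```

Going to HD triples the fill-rate demand while leaving the polygon budget untouched, which is exactly why resolution and scene complexity are separate knobs.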
Comparing a DVD like Hero to Dead or Alive 4 and concluding from the visual disparity that HD is unnecessary, or that DOA4 should strive to be more like Hero rather than crank up its resolution, is foolish. Video games are drawn with discrete lines that form complete framed images all by themselves. Hero, on the other hand, is not drawn: the camera captures a continuous range of colour through its lens and translates it into something an algorithm can encode, along with the subtle nuances of vision that the lens picks up the same way the eye does--blur, persistence of vision, and the oddities of colour and lighting that mimic actual sight on your screen. There have been attempts to introduce motion blur to games that have not always been successful, or even much liked by gamers. Some of the better methods of blurring motion demand a lot of memory, so that's no surprise; others are simpler and just play with scene focus, a trick familiar to anyone who has played Metal Gear Solid 2 and one that is getting more common in games today.
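The memory-hungry approach mentioned above is essentially frame accumulation: keep the last N frames around and average them, so a moving object leaves a smear the way film does. A minimal sketch, with frames stood in for by flat lists of grey values:

```python
# Accumulation-style motion blur: average the last N frames per pixel.
# Memory cost is N full framebuffers, hence "a lot of memory".
def blur(frames):
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# A bright dot moving right across three tiny 1x4 "frames":
f1 = [255, 0, 0, 0]
f2 = [0, 255, 0, 0]
f3 = [0, 0, 255, 0]
print(blur([f1, f2, f3]))  # → [85.0, 85.0, 85.0, 0.0]
```

The dot smears into a dimmer trail, which is the film-like look the technique is after; storing those extra framebuffers is what makes it expensive.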
But a big Achilles heel for computer graphics is lighting. Just ask game programming god John Carmack about lighting and prepare for a lecture.
elvis wrote:If you look at the raw throughput numbers, modern hardware is more than capable of creating the effects we saw 5-10 years ago in realtime today. The only thing holding it back is the software we use.
OpenGL and Direct3D are two APIs which are quite long in the tooth. They are both very broad in what they can achieve, and both notoriously slow. Enormous potential exists in a modern day video card, and what you and I see on a screen is quite honestly 1/10th of what is possible due entirely to old software holding it back.
This is complete and utter bullshit. A render farm can produce an offline scene with a single light whose lighting is more realistic throughout than anything you can get from an 8800GTX in realtime with a dozen lights--even if that 8800GTX were the ten times more powerful you claim it could be. And good luck building a decent video game scene with only one light. There is no realistic limit to what you can do with an offline render. Realtime is another story: nothing even close to that quality exists on any hardware in a form that can be implemented, in realtime, in a game environment, such that it actually works alongside everything else going on (animation, camera, AI, sound, etc.). We're all aware realtime lighting is nothing but a kludge--one that looks nice for what it is, but a kludge nonetheless. It's a prime reason everyone jeered Sony years ago for promising Toy Story-level graphics from the PS2.
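For the curious, the core of that realtime "kludge" is roughly this: direct Lambertian shading from each light, with no light bouncing between surfaces at all. An offline renderer also simulates the indirect bounces, which is what lets its single light fill a whole scene believably. A minimal sketch of the direct-only part (illustrative, not any particular engine's code):

```python
import math

# Direct Lambertian shading: brightness = max(0, N . L) * intensity.
# No indirect bounces -- the part offline renderers add on top.
def lambert(normal, to_light, intensity=1.0):
    dot = sum(n * l for n, l in zip(normal, to_light))
    return max(0.0, dot) * intensity

up = (0.0, 1.0, 0.0)                            # surface facing up
light_overhead = (0.0, 1.0, 0.0)                # light straight above
light_grazing = (math.sqrt(0.5), math.sqrt(0.5), 0.0)  # light at 45 degrees

print(lambert(up, light_overhead))              # → 1.0 (full brightness)
print(round(lambert(up, light_grazing), 3))     # → 0.707 (dimmer)
```

Everything a surface cannot "see" from a light goes pitch black under this model, which is exactly why realtime scenes need a dozen lights (or baked tricks) where an offline render needs one.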
So in closing, the article is just plain silly. Hero in HD > DOA4 in HD, ergo HD for DOA4 is a waste? Well gee, Hero in SD > DOA4 in SD. Is SD a waste for DOA4 as well? Maybe we should go back to the good old Game & Watch screens since we know the hardware can handle it.