
World War II Online is a massively multiplayer online first-person shooter set in Western Europe between 1939 and 1943. Through land, sea, and air combat using an ultra-realistic game engine, combined with a strategic layer, in the largest game world ever created, we offer the best WWII simulation experience around.

Effect of graphics card overclocking on in-game FPS


amorosa

.benchremagen at core clock 500 MHz and memory clock 750 MHz: 30 fps

.benchremagen at core clock 650 MHz and memory clock 1000 MHz: 30 fps

Same for .benchvehicles (15 fps). Raising the core frequency by 30% and the memory frequency by 33% has zero effect on FPS. Limited by CPU? Can somebody else test this?

[Athlon X2 4200 @ 2.7 GHz, Radeon 4830, 4 GB RAM, Win7 64-bit]
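The result above is the classic signature of a CPU-bound frame: the slower of the CPU and GPU gates the frame rate, so speeding up the side that is already waiting changes nothing. A minimal toy model (the millisecond figures are illustrative, not measured from the game):

```python
# Toy model of a frame: the slower of CPU work and GPU work gates the
# frame rate. Numbers are hypothetical, chosen to mirror the 30 fps test.

def fps(cpu_ms, gpu_ms):
    """Frames per second when CPU and GPU work overlap; the longer side wins."""
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 33.0   # hypothetical CPU cost per frame
gpu_ms = 10.0   # hypothetical GPU cost per frame at stock clocks

base = fps(cpu_ms, gpu_ms)            # CPU-gated: ~30 fps
oc   = fps(cpu_ms, gpu_ms / 1.30)     # 30% GPU overclock: still ~30 fps
print(base, oc)
```

If the GPU were the bottleneck instead, the same 30% overclock would show up almost 1:1 in the frame rate.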

Edited by amorosa

Checked out 1.30.3. Same coordinates and same view direction give me 5 more fps than I get in 1.31. Again, it doesn't move with overclocking (or underclocking).


Yeah. Should have read the thread about SLI first. It is quite obvious the GPU is twiddling its thumbs. I just don't know what the hell the CPU is calculating in the simple Remagen scene.

Good question.

There's no reason the framerate should be taking that huge hit if it's not the video card doing the work.

Unless the CPU is doing graphics processing for some crazy reason.


Good question.

There's no reason the framerate should be taking that huge hit if it's not the video card doing the work.

Unless the CPU is doing graphics processing for some crazy reason.

Not sure, and CRS can correct me if I'm wrong, but if this game is anything like my CAD renderer, the CPU does the bulk of the work while rendering. Think of it like an artist painting a picture. The artist's brain is deciding what goes where, how hard to push the brush, etc., and all the hand is doing is going where the brain tells it. Same deal with the CPU/GPU relationship.

May be exclusive to OpenGL, may be true for any computer graphics when the program is written for CPU-based performance.

Edited by Hick

Well there is something going on, and it isn't all to do with what is visible on screen. It is difficult to make an exact comparison between the 1.31 tests and the current 1.30, but if you take the test with a rifleman at Area 51, shadows and clutter off, looking at the tank test track, you can put your avatar in pretty much the same place and get a very similar view in front of you. If I do this on 1.30 I get 67 FPS and if I do it with 1.31 I get 29 FPS. That is a massive performance hit.


May be exclusive to OpenGL, may be true for any computer graphics when the program is written for CPU-based performance.

The 2nd one is true.

Actually, it's typical for older game engines (the API doesn't matter).


The 2nd one is true.

Actually, it's typical for older game engines (the API doesn't matter).

Nice to know I'm not completely off my rocker. Thx!


I believe this game is CPU bound. "No physics lookup tables" I think is the CRS bragging, although technically that's just an optimization if the lookup tables are correct ;). But it means heavy CPU usage to calculate the world physics all the time, so likely the game is CPU bound rather than GPU bound.


Force vsync off and watch your CPU usage in Task Manager. You'll see that a lot of the time in 1.30 you will not max the CPU, but in 1.31 you max the CPU up to 97% or so, and this is on a dual core.

For example, with vsync and shadows off at the Area 51 track, 1.30 gets 160 fps with 60% CPU usage; 1.31 sees 80 fps and 100% usage.

With vsync on I see 33% usage and 60 fps with 1.30, and 67% usage with 60 fps in 1.31.

I'm curious as to how many threads we have now in 1.31. Has anyone with a quad core gone over 60% CPU usage in Task Manager with vsync off?
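Those vsync-on numbers can be turned into a rough per-frame CPU cost: at a locked 60 fps, utilization times frame time gives milliseconds of CPU work per frame. A back-of-the-envelope sketch using the figures from this post (note that Task Manager averages across cores, so 50% on a dual core can mean one fully saturated core):

```python
# Rough per-frame CPU cost from the vsync-on numbers above: usage fraction
# times frame time. Figures are the ones reported in the post.

def cpu_ms_per_frame(usage_fraction, fps):
    """Approximate CPU time burned per frame at a given utilization and rate."""
    return usage_fraction * 1000.0 / fps

v130 = cpu_ms_per_frame(0.33, 60)   # 1.30: ~5.5 ms of CPU work per frame
v131 = cpu_ms_per_frame(0.67, 60)   # 1.31: ~11.2 ms, roughly double
print(v130, v131)
```

By this estimate, 1.31 is spending about twice as much CPU time per frame as 1.30 to produce the same locked 60 fps.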


.benchremagen at core clock 500 MHz and memory clock 750 MHz: 30 fps

.benchremagen at core clock 650 MHz and memory clock 1000 MHz: 30 fps

Same for .benchvehicles (15 fps). Raising the core frequency by 30% and the memory frequency by 33% has zero effect on FPS. Limited by CPU? Can somebody else test this?

[Athlon X2 4200 @ 2.7 GHz, Radeon 4830, 4 GB RAM, Win7 64-bit]

I too am clearly CPU limited. Q9550 @ 3.4 GHz / GTX 260. Changing graphics settings has no effect on FPS.

I just don't know what the hell the CPU is calculating in the simple Remagen scene.

I'd be curious to know that as well.

Not sure, and CRS can correct me if I'm wrong, but if this game is anything like my CAD renderer, the CPU does the bulk of the work while rendering.

Games aren't anything like CAD/3D modeling software. CAD/3D packages use software renderers which run on the CPU, making them API-, platform-, and video-card-independent, and extremely slow. A typical software renderer could expect to achieve 0.1-0.2 FPS for a full-resolution game scene on a high-performance quad-core chip. CPUs are for general use and are not optimized for any particular task the way GPUs are.

Think of it like an artist painting a picture. The artist's brain is deciding what goes where, how hard to push the brush, etc., and all the hand is doing is going where the brain tells it. Same deal with the CPU/GPU relationship.

Well, kind of. The CPU uploads matrices (positions and rotations of objects) and some variable state like light position/direction to the shaders, then the GPU does the rest. If no states have changed since the last frame, it technically doesn't even have to do that: just tell the GPU to draw and it does the rest. Generally, the CPU has very, very little to do while rendering.
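The per-object CPU work described here is mostly small matrix math: compose a transform, hand it to the driver, move on. A minimal sketch of what "the CPU uploads matrices" amounts to, composing a hypothetical object's 4x4 model matrix (rotation about Y, then translation) before it would be sent to a shader as a uniform:

```python
import math

# Sketch of the per-object CPU work described above: build the 4x4 model
# matrix that would be uploaded to the GPU. Object position and angle are
# made-up values; everything after the upload is the GPU's job.

def mat_mul(a, b):
    """Multiply two 4x4 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    return [[1, 0, 0, x],
            [0, 1, 0, y],
            [0, 0, 1, z],
            [0, 0, 0, 1]]

def rotation_y(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[ c, 0, s, 0],
            [ 0, 1, 0, 0],
            [-s, 0, c, 0],
            [ 0, 0, 0, 1]]

# One object's model matrix for this frame: a few dozen multiplies and adds.
model = mat_mul(translation(10.0, 0.0, -5.0), rotation_y(math.pi / 2))
```

That is on the order of a hundred arithmetic operations per object per frame, which is why a scene with a handful of objects should leave the CPU nearly idle if rendering setup were all it was doing.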

I believe this game is CPU bound. "No physics lookup tables" I think is the CRS bragging, although technically that's just an optimization if the lookup tables are correct. But it means heavy CPU usage to calculate the world physics all the time, so likely the game is CPU bound rather than GPU bound.

A properly optimized physics engine will scale slightly more than linearly with the number of dynamic objects. The simple offline test, with at most a few objects, should be very, very quick to process (easily more than 1000 FPS anyway). I didn't know people still used lookup tables, at least not for the last 6-7 years. CPUs are much, much faster than memory, hence it's usually quicker to recalculate something than to store and retrieve a value.
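The lookup-table claim is easy to test on your own machine. A sketch comparing a precomputed sine table against calling the math library directly; the table trades a small accuracy loss for a memory fetch, and on modern hardware the direct call is often competitive for exactly the reason given above:

```python
import math
import timeit

# Quick test of the lookup-table claim: precomputed sine table versus a
# direct math.sin call. Table size is an arbitrary choice.

TABLE_SIZE = 4096
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def sin_lookup(x):
    """Nearest-entry table lookup; trades accuracy for (maybe) speed."""
    i = int(x / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE
    return SINE_TABLE[i]

def sin_direct(x):
    return math.sin(x)

if __name__ == "__main__":
    for name, fn in (("lookup", sin_lookup), ("direct", sin_direct)):
        t = timeit.timeit(lambda: fn(1.234), number=100_000)
        print(f"{name}: {t:.4f}s")
```

Which one wins depends on cache behavior and how hot the table stays, which is the whole point: a lookup table is only a win when memory keeps up.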

Source: My knowledge as a hobbyist game programmer for 7 years.


It isn't necessarily the CPU; it could also be the sound card, or interfaces like the joystick or network card. I tried overclocking my GPU too in an older version, but I soon realized it doesn't affect the frame rate. I have the weird feeling something in the engine prevents it from rendering frames freely.

Edited by seafox

Games aren't anything like CAD/3D modeling software. CAD/3D packages use software renderers which run on the CPU, making them API-, platform-, and video-card-independent, and extremely slow. A typical software renderer could expect to achieve 0.1-0.2 FPS for a full-resolution game scene on a high-performance quad-core chip. CPUs are for general use and are not optimized for any particular task the way GPUs are.

Not entirely true. CRS itself has stated several times over the past few years that this game is more CPU-dependent than GPU-dependent. Older games were written to take advantage of rapidly increasing CPU technology, but now that has slowed, and GPUs are taking off by leaps and bounds. Newer games are written to take advantage of modern GPU architectures, but this is by no means a new game.

A testament to that is the fact this game is still written for OpenGL, not DirectX. There comes a point where the code itself will limit the game's performance once the hardware becomes advanced enough. We may be beginning to reach that point here, I don't know. But look at games written in 1995-2001 and see how little difference GPUs make in frame rates versus CPUs and RAM.

Well, kind of. The CPU uploads matrices (positions and rotations of objects) and some variable state like light position/direction to the shaders, then the GPU does the rest. If no states have changed since the last frame, it technically doesn't even have to do that: just tell the GPU to draw and it does the rest. Generally, the CPU has very, very little to do while rendering.

Again, it all depends on how the program is written. Go watch the Mythbusters demonstration at Nvidia 2008. They use robots with paintballs to demonstrate the difference between CPU-based rendering and GPU-based rendering.

Really would appreciate a RAT answer on this. Is there a direct comparison between 1.30 offline and 1.31 offline (in which case the performance drop is very worrying)? Or is the 1.31 beta client crunching a load of non-game stuff in the background (debugs, dev tools etc) which is giving the CPU a thrashing in beta but will be gone in the release?


Not entirely true. CRS itself has stated several times over the past few years that this game is more CPU-dependent than GPU-dependent. Older games were written to take advantage of rapidly increasing CPU technology, but now that has slowed, and GPUs are taking off by leaps and bounds. Newer games are written to take advantage of modern GPU architectures, but this is by no means a new game.

It is CPU limited of course, but I doubt this is because of graphics. I would hazard a guess that you could cut out the entire render code and the game wouldn't run much faster; something else is eating up the CPU. Games really started to hammer the GPU in the GF3/GF4 era (2001/2002), so relying heavily on the GPU is hardly a new concept. In any event, it would be impossible to software-render a game, like CAD does, on current CPUs (except possibly the PS3's Cell processor if the resolution were low enough).

A testament to that is the fact this game is still written for OpenGL, not DirectX. There comes a point where the code itself will limit the game's performance once the hardware becomes advanced enough. We may be beginning to reach that point here, I don't know. But look at games written in 1995-2001 and see how little difference GPUs make in frame rates versus CPUs and RAM.

OpenGL isn't too bad (if you use it correctly). IMO, Direct3D has been better than OpenGL since 8.0, but that's not to say OpenGL is bad. On the contrary, in terms of absolute performance they are very similar (since in the end the GPU does the same thing regardless of API). My main gripe is OpenGL's C API and extensions; Direct3D is more intuitive and easier to code for me.

Modern compilers are very smart nowadays, and of course the entire codebase gets recompiled for every release, so it's not the same as running a 2001 game on a 2009 PC.

Again, it all depends on how the program is written. Go watch the Mythbusters demonstration at Nvidia 2008. They use robots with paintballs to demonstrate the difference between CPU-based rendering and GPU-based rendering.

Heh, I think I've seen that before. But, seeing as how I've actually written software scanline rendering code and GPU accelerated ray tracing, I have a pretty good idea what I'm talking about. ;)

Edited by norbert5

- CAD does not use software rendering. Hasn't for the last 15 years either, seeing as the first really nice 3D hardware was made for CAD (SGI Indigo, etc.).

- OpenGL and Direct3D are equivalent in functionality and speed.

There are a lot of other things that could make WWIIOL CPU limited; I suspect it sends too many, too-small batches to the graphics card.
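The batching suspicion is worth spelling out: each draw call costs CPU time in the driver, so an engine that submits thousands of tiny meshes separately burns CPU regardless of how fast the GPU is. A minimal sketch of the usual fix, merging many small static meshes that share one material into a single vertex buffer so one draw call replaces many (the mesh data here is made up):

```python
# Illustration of draw-call batching: concatenate many small per-mesh
# vertex lists into one buffer, recording each mesh's offset, so a single
# draw call can replace one call per mesh. Mesh contents are hypothetical.

def batch(meshes):
    """Merge per-mesh vertex lists into one buffer with start offsets."""
    buffer, offsets = [], []
    for verts in meshes:
        offsets.append(len(buffer))
        buffer.extend(verts)
    return buffer, offsets

# 1000 tiny meshes of 12 vertices each: 1000 draw calls unbatched.
meshes = [[(float(i), float(v), 0.0) for v in range(12)] for i in range(1000)]
buffer, offsets = batch(meshes)

draw_calls_before = len(meshes)   # one per mesh
draw_calls_after = 1              # one for the merged buffer
```

The GPU processes the same 12,000 vertices either way; the saving is entirely in per-call CPU overhead, which is exactly where a CPU-bound renderer hurts.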


- CAD does not use software rendering. Hasn't for the last 15 years either, seeing as the first really nice 3D hardware was made for CAD (SGI Indigo, etc.).

I was referring to final renders, not the viewports.


Really would appreciate a RAT answer on this. Is there a direct comparison between 1.30 offline and 1.31 offline (in which case the performance drop is very worrying)? Or is the 1.31 beta client crunching a load of non-game stuff in the background (debugs, dev tools, etc.) which is giving the CPU a thrashing in beta but will be gone in the release?

We are well aware of the high CPU load of the game.

Performance will be improved significantly for the release client with the changes in the works for data and code.


I don't know if it is as CPU bound as you all think. Yes, it is a major performance sapper on the CPU, but it seems a lot of ATI GPUs just don't handle this game well.

I have a 4890 and an E6600 at 2.4 GHz, and I only get 20 fps in the 1.31 beta and 25 fps in 1.30. Yet when I had an 8800 GTS 640 MB I was getting 45-100 fps in 1.30. There is something wrong with ATI GPUs and this game.


@blato: ATI + OpenGL = crap. John Carmack once said that if something doesn't work, he suspects his own code when running on Nvidia, and the driver when running on ATI.

@jaeger: nice to hear.


We are well aware of the high CPU load of the game.

Performance will be improved significantly for the release client with the changes in the works for data and code.

Thanks for that reply Jaeger. We'll keep the faith and keep testing and reporting.

